In a project we're using a commercial text font. The project is not math-related but has some math in it here and there. The text font of course does not have a matching math font, but XITS Math (which ships with standard LaTeX, at least TeX Live) seems a good enough match. I decided to play with unicode-math to see if I could get it even closer, by taking the Latin letters in math from the italic text font and the digits from the upright text font. This works surprisingly well. Here is the catch, though: since we are mixing the fonts used in math, math accents are not placed particularly well. I've prepared an MWE that is similar to our setup, but using a text font already in TeX Live so most people should be able to test it:

\documentclass[a4paper]{article}
\usepackage{amsmath,unicode-math}
\setmathfont[Scale=MatchUppercase]{XITS Math}
\setmathfont[range=\mathup/{num},Numbers=Lining]{Cochineal-Roman}
\setmathfont[range=it/{latin,Latin},Ligatures={NoCommon, NoDiscretionary, NoHistoric, NoRequired, NoContextual}]{Cochineal-Italic}
\setmainfont{Cochineal}[
  UprightFont    = *-Roman,
  BoldFont       = *-Bold,
  ItalicFont     = *-Italic,
  BoldItalicFont = *-BoldItalic,
  Numbers        = Lining,
]
\begin{document}
$ p^{\ast}=c\vert \hat{\jmath}\vert ^{1/2}e^{l-\hat{l}} $
\end{document}

As we can see in the result, the hats are placed incorrectly. If we use only XITS Math, the placement is much more reasonable. So my question is this: given that the accent is probably placed using the top_accent data from the XITS Math font, how can we patch this font setup in Lua so that we can fix the placement of the accent on the characters that need it? I've seen some examples of code that should be able to do this, but could not get them to work when we are mixing fonts like this.
BTW: yes, I can also fix this at the macro level (adapted from one of egreg's answers):

\usepackage{xparse}
% we are assuming \myhat is only being used on single chars
\NewDocumentCommand{\myhat}{ m }{\myinnerhat{#1}}
\ExplSyntaxOn
\NewDocumentCommand{\myinnerhat}{m}
 {
  \str_case_x:nnF { \tl_item:nn { #1 } { -1 } }
   {
    {\jmath}{\skew{5}{\hat}{\jmath}\mkern1mu}
    {l}{\skew{5}{\hat}{l}}
   }
   { \hat{#1} }
 }
\ExplSyntaxOff
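For reference, one route to the Lua-level patch the question asks about is luaotfload's "luaotfload.patch_font" callback, which hands you the font table before it is passed on to LuaTeX; each character entry may carry a top_accent field that drives accent placement. The following is only a sketch under stated assumptions: the font-name test, the chosen codepoints (U+006C 'l' and U+0237 dotless j), and the offset value are all guesses to be tuned by eye, and I have not verified it against the exact Cochineal setup above.

```latex
\usepackage{luacode}
\begin{luacode}
-- Sketch: nudge the accent attachment point for selected glyphs in the
-- substituted italic text font. Offsets are in font units here and are
-- purely illustrative -- tune them visually.
luatexbase.add_to_callback("luaotfload.patch_font",
  function (fontdata)
    local name = fontdata.fontname or ""
    if name:find("Cochineal%-Italic") then        -- assumed font name
      for _, cp in ipairs{0x006C, 0x0237} do      -- 'l' and dotless j
        local chr = fontdata.characters and fontdata.characters[cp]
        if chr then
          -- if no attachment point is present, start from the glyph centre
          local base = chr.top_accent or math.floor(chr.width / 2)
          chr.top_accent = base + 40000           -- guessed rightward shift
        end
      end
    end
  end, "patch.top_accent.example")
\end{luacode}
```

Since the callback runs for every loaded font, the name test keeps the patch from touching XITS Math itself.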
Images are essential elements in most scientific documents. LaTeX provides several options to handle images and make them look exactly the way you need. This article explains how to include images in the most common formats, how to shrink, enlarge and rotate them, and how to reference them within your document. Below is an example of how to import a picture.

\documentclass{article}
\usepackage{graphicx}
\graphicspath{ {./images/} }
\begin{document}
The universe is immense and it seems to be homogeneous, on a large scale, everywhere we look.
\includegraphics{universe}
There's a picture of a galaxy above
\end{document}

LaTeX cannot manage images by itself, so we need to use the graphicx package. To use it, we include the following line in the preamble: \usepackage{graphicx}. The command \graphicspath{ {./images/} } tells LaTeX that the images are kept in a folder named images under the directory of the main document. The \includegraphics{universe} command is the one that actually includes the image in the document. Here universe is the name of the file containing the image, without the extension, so universe.PNG is referred to as universe. The file name of the image should not contain white spaces or multiple dots. Note: the file extension is allowed to be included, but it's a good idea to omit it. If the file extension is omitted, LaTeX will search for all the supported formats. For more details see the section about generating high-resolution and low-resolution images. When working on a document which includes several images, it's possible to keep those images in one or more separate folders so that your project is better organised. The command \graphicspath{ {images/} } tells LaTeX to look in the images folder. The path is relative to the current working directory, so the compiler will look for the file in the same folder as the code where the image is included.
The path to the folder is relative by default: if no initial directory is specified, it is taken relative to the .tex file containing the \includegraphics command, for instance

%Path relative to the .tex file containing the \includegraphics command
\graphicspath{ {images/} }

This is a typically straightforward way to reach the graphics folder within a file tree, but it can lead to complications when .tex files within folders are included in the main .tex file. Then the compiler may end up looking for the images folder in the wrong place. Thus, it is best practice to specify the graphics path relative to the main .tex file, denoting the main .tex file directory as ./ , for instance

%Path relative to the main .tex file
\graphicspath{ {./images/} }

as in the introduction. The path can also be absolute, if the exact location of the file on your system is specified. For example:

%Path in Windows format:
\graphicspath{ {c:/user/images/} }
%Path in Unix-like (Linux, Mac OS) format
\graphicspath{ {/home/user/images/} }

Notice that this command requires a trailing slash / and that the path is in between double braces. You can also set multiple paths if the images are saved in more than one folder. For instance, if there are two folders named images1 and images2, use the command

\graphicspath{ {./images1/}{./images2/} }

If no path is set, LaTeX will look for pictures in the folder where the .tex file that includes the image is saved. If we want to further specify how LaTeX should include our image in the document (length, height, etc.), we can pass those settings in the following format:

\begin{document}
Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation.
\includegraphics[scale=1.5]{lion-logo}

The command \includegraphics[scale=1.5]{lion-logo} will include the image lion-logo in the document; the extra parameter scale=1.5 will do exactly that: scale the image to 1.5 times its real size.
You can also scale the image to a specific width and height.

\begin{document}
Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation.
\includegraphics[width=3cm, height=4cm]{lion-logo}

As you have probably guessed, the parameters inside the brackets [width=3cm, height=4cm] define the width and the height of the picture. You can use different units for these parameters. If only the width parameter is passed, the height will be scaled to keep the aspect ratio. The length units can also be relative to some elements in the document. If you want, for instance, to make a picture the same width as the text:

\begin{document}
The universe is immense and it seems to be homogeneous, on a large scale, everywhere we look.
\includegraphics[width=\textwidth]{universe}

Instead of \textwidth you can use any other default LaTeX length: \columnsep, \linewidth, \textheight, \paperheight, etc. See the reference guide for a further description of these units. There is another common option when including a picture within your document: rotating it. This can easily be accomplished in LaTeX:

\begin{document}
Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation.
\includegraphics[scale=1.2, angle=45]{lion-logo}

The parameter angle=45 rotates the picture 45 degrees counter-clockwise. To rotate the picture clockwise, use a negative number. The previous section explained how to include images in your document, but the combination of text and images may not look the way we expect. To change this we need to introduce a new environment. In the next example the figure will be positioned right below this sentence.

\begin{figure}[h]
\includegraphics[width=8cm]{Plot}
\end{figure}

The figure environment is used to display pictures as floating elements within the document.
This means you include the picture inside the figure environment and don't have to worry about its placement; LaTeX will position it in such a way that it fits the flow of the document. Still, sometimes we need more control over the way the figures are displayed. An additional parameter can be passed to determine the figure positioning. In the example \begin{figure}[h], the parameter inside the brackets sets the position of the figure to here. Below is a table of the possible positioning values.

h: Place the float here, i.e., approximately at the same point it occurs in the source text (however, not exactly at that spot).
t: Position at the top of the page.
b: Position at the bottom of the page.
p: Put on a special page for floats only.
!: Override internal parameters LaTeX uses for determining "good" float positions.
H: Place the float at precisely the location in the LaTeX code. Requires the float package, though it may cause problems occasionally. This is somewhat equivalent to h!.

In the next example you can see a picture at the top of the document, despite being declared below the text. In this picture you can see a bar graph that shows the results of a survey which involved some important data studied as time passed.

\begin{figure}[t]
\includegraphics[width=8cm]{Plot}
\centering
\end{figure}

The additional command \centering will centre the picture. The default alignment is left. It's also possible to wrap the text around a figure. When the document contains small pictures, this makes it look better.

\begin{wrapfigure}{r}{0.25\textwidth} %this figure will be at the right
\centering
\includegraphics[width=0.25\textwidth]{mesh}
\end{wrapfigure}

There are several ways to plot a function of two variables, depending on the information you are interested in. For instance, if you want to see the mesh of a function so it is easier to see the derivative, you can use a plot like the one on the left.
\begin{wrapfigure}{l}{0.25\textwidth}
\centering
\includegraphics[width=0.25\textwidth]{contour}
\end{wrapfigure}

On the other hand, if you are only interested in certain values, you can use the contour plot, like the one on the left. For the commands in the example to work, you have to import the package wrapfig. Add the line \usepackage{wrapfig} to the preamble. Now you can define the wrapfigure environment by means of the commands \begin{wrapfigure}{l}{0.25\textwidth} ... \end{wrapfigure}. Notice that the environment takes two additional parameters enclosed in braces: {l} sets the alignment of the figure (l for left, r for right), and {0.25\textwidth} sets its width; \centering centres the picture inside the environment. For a more complete article about image positioning see Positioning images and tables. Captioning images to add a brief description and labelling them for further reference are two important tools when working on a lengthy text. Let's start with a caption example:

\begin{figure}[h]
\caption{Example of a parametric plot ($\sin (x), \cos(x), x$)}
\centering
\includegraphics[width=0.5\textwidth]{spiral}
\end{figure}

It's really easy: just add \caption{Some caption} and inside the braces write the text to be shown. The placement of the caption depends on where you place the command; if it's above the \includegraphics then the caption will be on top of the figure, if it's below then the caption will also be set below the figure. Captions can also be placed beside the figures.
The sidecap package uses code similar to the previous example to accomplish this.

\documentclass{article}
\usepackage[rightcaption]{sidecap}
\usepackage{graphicx} %package to manage images
\graphicspath{ {images/} }
\begin{SCfigure}[0.5][h]
\caption{Using again the picture of the universe. This caption will be on the right}
\includegraphics[width=0.6\textwidth]{universe}
\end{SCfigure}

There are two new commands here. \usepackage[rightcaption]{sidecap} loads the sidecap package with the option rightcaption, which places the caption at the right of the picture (you can also use leftcaption). The environment \begin{SCfigure}[0.5][h] ... \end{SCfigure} replaces figure; the positioning parameter h works exactly as in the figure environment. You can do more advanced management of the caption formatting; check the further reading section for references. Figures, just as many other elements in a LaTeX document (equations, tables, plots, etc.) can be referenced within the text. This is very easy: just add a label to the figure or SCfigure environment, then later use that label to refer to the picture.

\begin{figure}[h]
\centering
\includegraphics[width=0.25\textwidth]{mesh}
\caption{a nice plot}
\label{fig:mesh1}
\end{figure}

As you can see in figure \ref{fig:mesh1}, the function grows near 0. Also, on page \pageref{fig:mesh1} is the same example.

There are three commands that generate cross-references in this example: \label{fig:mesh1} labels the figure, \ref{fig:mesh1} prints its number, and \pageref{fig:mesh1} prints the page it appears on. The \caption is mandatory to reference a figure. Another great characteristic of a LaTeX document is the ability to automatically generate a list of figures, with the command \listoffigures. This is straightforward. The command only works on captioned figures, since it uses the caption text in the list. The example above lists the images in this article. Important note: when using cross-references your LaTeX project must be compiled twice, otherwise the references, the page references and the list of figures won't work. So far, while specifying the image file name in the \includegraphics command, we have omitted file extensions.
However, that is not necessary, though it is often useful. If the file extension is omitted, LaTeX will search for any supported image format in that directory, trying the various extensions in a default order (which can be modified). This is useful when switching between development and production environments. In a development environment (while the article/report/book is still in progress), it is desirable to use low-resolution versions of images (typically in .png format) for fast compilation of the preview. In the production environment (when the final version is produced), it is desirable to include the high-resolution version of the images. This is accomplished by \DeclareGraphicsExtensions, which sets the extension search order. Thus, if we have two versions of an image, venndiagram.pdf (high-resolution) and venndiagram.png (low-resolution), then we can include the following line in the preamble to use the .png version while developing the report:

\DeclareGraphicsExtensions{.png,.pdf}

The command above will ensure that if two files are encountered with the same base name but different extensions (for example venndiagram.pdf and venndiagram.png), then the .png version will be used first, and in its absence the .pdf version will be used; this is also a good idea if some low-resolution versions are not available. Once the report has been developed, to use the high-resolution .pdf version, we can change the line in the preamble specifying the extension search order to

\DeclareGraphicsExtensions{.pdf,.png}

Improving on the technique described in the previous paragraphs, we can also instruct LaTeX to generate low-resolution .png versions of images on the fly while compiling the document, if there is a PDF that has not been converted to PNG yet.
To achieve that, we can include the following in the preamble after \usepackage{graphicx}:

\usepackage{epstopdf}
\epstopdfDeclareGraphicsRule{.pdf}{png}{.png}{convert #1 \OutputFile}
\DeclareGraphicsExtensions{.png,.pdf}

If venndiagram2.pdf exists but not venndiagram2.png, the file venndiagram2-pdf-converted-to.png will be created and loaded in its place. The command convert #1 is responsible for the conversion, and additional parameters may be passed between convert and #1, for example convert -density 100 #1. There are some important things to keep in mind, though: the on-the-fly conversion requires compiling with the --shell-escape option, and for the production version we should remove the \epstopdfDeclareGraphicsRule, so that only high-resolution PDF files are loaded; we'll also need to change the order of precedence.

LaTeX units and lengths:

pt: a point, the default length unit, about 0.3515 mm
mm: a millimetre
cm: a centimetre
in: an inch
ex: the height of an 'x' in the current font
em: the width of an 'm' in the current font
\columnsep: distance between columns
\columnwidth: width of the column
\linewidth: width of the line in the current environment
\paperwidth: width of the page
\paperheight: height of the page
\textwidth: width of the text
\textheight: height of the text
\unitlength: units of length in the picture environment

About image types in LaTeX:

JPG: best choice if we want to insert photos
PNG: best choice if we want to insert diagrams (if a vector version could not be generated) and screenshots
PDF: even though we are used to seeing PDF documents, a PDF can also store images
EPS: EPS images can be included using the epstopdf package (we just need to install the package; we don't need \usepackage{} to include it in our document)

For more information see
Let $G$ be a finite group and $H$ a subgroup such that the interval $[H,G]$ is a boolean lattice. Let $L_1, \dots , L_n$ be the maximal subgroups of $G$ containing $H$. Let the alternating sum of the interval $[H,G]$ be defined as follows: $$\chi([H,G]):= \sum_{r=0}^n (-1)^{r} \sum_{ \ i_1 < i_2 < \cdots < i_r } [L_{i_1} \wedge \cdots \wedge L_{i_r}: H] $$ Notation: $L_{i_1} \wedge \cdots \wedge L_{i_r} = G$ for $r=0$. Theorem: $\chi([H,G]) > 0$. Proof: Observe that $$\chi([H,G]) = \frac{\vert G \vert - \vert \bigcup_i L_i \vert}{\vert H \vert} $$ but a boolean lattice is distributive, so by a result of Oystein Ore (see here) there exists $g \in G$ with $\langle H,g \rangle = G$, which precisely means that $g \not \in L_i \ \forall i$, and so $\chi([H,G])> 0$. $\square$ Let $K_1, \dots , K_n$ be the minimal overgroups of $H$. Let the dual alternating sum of the interval $[H,G]$ be defined as follows: $$\hat{\chi}([H,G]):= \sum_{r=0}^n (-1)^{r} \sum_{ \ i_1 < i_2 < \cdots < i_r } [G: K_{i_1} \vee \cdots \vee K_{i_r}] $$ Notation: $K_{i_1} \vee \cdots \vee K_{i_r} = H$ for $r=0$. Question: Is $\hat{\chi}([H,G]) > 0$? Remark: after checking with GAP, it is true for $[G:H]<32$ (recall that $[H,G]$ is assumed boolean).
This is a follow-up to a different question I asked, with more detail. For $v\in\mathbb{R}^n$, denote by $D_v\in\mathbb{R}^{n\times n}$ the diagonal matrix with the entries of $v$ on its diagonal. Given a "tall" matrix $B\in\mathbb{R}^{n\times m}$, I would like to solve the following optimization problem: $$\min_{v\in\mathbb{R}^n} \|X-B^\top D_v B\|_{\mathrm{Fro}}^2$$ Assuming I calculated it properly, first-order optimality gives the linear system $(BB^\top\circ BB^\top)v=(BX\circ B)\mathbb{1}$, where $\circ$ denotes the elementwise (Hadamard) product and $\mathbb{1}\in\mathbb{R}^m$ is the vector of all ones. I have checked that this system is invertible for my application. The problem is, the matrix $BB^\top\circ BB^\top$ is very large relative to the size of $B$. I can afford to take the SVD of $B$ (and that of $X$) but not to construct this large, dense matrix. Is there anything I can do to solve this system directly without resorting to an iterative solver? If I have to do it iteratively, what is the fastest iteration for systems that come from this least-squares problem?
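On the iterative route: the matrix $BB^\top\circ BB^\top$ never has to be formed, because its rows satisfy $\big((BB^\top\circ BB^\top)v\big)_i = b_i^\top (B^\top D_v B)\, b_i$, where $b_i$ is the $i$-th row of $B$; one matvec therefore costs $O(nm^2)$ with $O(m^2)$ extra memory. Since the matrix is symmetric positive semidefinite (a Gram matrix of the rank-one matrices $b_i b_i^\top$), conjugate gradients applies. A minimal sketch with SciPy (sizes and names here are invented for illustration):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toy instance: tall B, and X = B^T D_v B for a known v (so the system is consistent).
rng = np.random.default_rng(0)
n, m = 100, 20
B = rng.standard_normal((n, m))
v_true = rng.standard_normal(n)
X = B.T @ (v_true[:, None] * B)     # B^T D_v B, an m x m matrix

# Matrix-free matvec for A = (B B^T) o (B B^T):
#   (A v)_i = sum_j (b_i . b_j)^2 v_j = b_i^T (B^T D_v B) b_i,
# computed in O(n m^2) without forming the dense n x n matrix.
def matvec(v):
    M = B.T @ (v[:, None] * B)                  # m x m core, B^T D_v B
    return np.einsum("ij,jk,ik->i", B, M, B)    # diag(B M B^T)

A = LinearOperator((n, n), matvec=matvec, dtype=float)
rhs = np.einsum("ij,jk,ik->i", B, X, B)         # (B X o B) 1 = diag(B X B^T)

v_hat, info = cg(A, rhs, maxiter=1000)          # conjugate gradient solve
```

Note that invertibility can only hold when $n \le m(m+1)/2$, since the $b_i b_i^\top$ live in the space of symmetric $m\times m$ matrices; that bound also suggests why no cheap closed form via the SVD of $B$ alone is apparent.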
Geometry and Topology Seminar
Revision as of 15:44, 22 January 2014

Fall 2013

September 13, 10:00 AM in 901: Alex Zupan (Texas), Totally geodesic subgraphs of the pants graph (host: Kent)
October 18: Jayadev Athreya (Illinois), Gap Distributions and Homogeneous Dynamics (host: Kent)
October 25: Joel Robbin (Wisconsin), GIT and [math]\mu[/math]-GIT (host: local)
November 1: Anton Lukyanenko (Illinois), Uniformly quasi-regular mappings on sub-Riemannian manifolds (host: Dymarz)
November 8: Neil Hoffman (Melbourne), Verified computations for hyperbolic 3-manifolds (host: Kent)
November 15: Khalid Bou-Rabee (Minnesota), On generalizing a theorem of A. Borel (host: Kent)
November 22: Morris Hirsch (Wisconsin), Common zeros for Lie algebras of vector fields on real and complex 2-manifolds (host: local)
Thanksgiving Recess
December 6: Sean Paul (Wisconsin), (Semi)stable Pairs I (host: local)
December 13: Sean Paul (Wisconsin), (Semi)stable Pairs II (host: local)

Fall Abstracts

Alex Zupan (Texas): Totally geodesic subgraphs of the pants graph

Abstract: For a compact surface S, the associated pants graph P(S) consists of vertices corresponding to pants decompositions of S and edges corresponding to elementary moves between pants decompositions.
Motivated by the Weil-Petersson geometry of Teichmüller space, Aramayona, Parlier, and Shackleton conjecture that the full subgraph G of P(S) determined by fixing a multicurve is totally geodesic in P(S). We resolve this conjecture in the case that G is a product of Farey graphs. This is joint work with Sam Taylor.

Jayadev Athreya (Illinois): Gap Distributions and Homogeneous Dynamics

Abstract: We discuss the notion of gap distributions of various lists of numbers in [0, 1], in particular focusing on those which are associated to certain low-dimensional dynamical systems. We show how to explicitly compute some examples using techniques of homogeneous dynamics, generalizing earlier work on gaps between Farey fractions. This work gives some possible notions of `randomness' of special trajectories of billiards in polygons, and is based partly on joint works with J. Chaika, with J. Chaika and S. Lelievre, and with Y. Cheung. This talk may also be of interest to number theorists.

Joel Robbin (Wisconsin): GIT and [math]\mu[/math]-GIT

Many problems in differential geometry can be reduced to solving a PDE of the form [math] \mu(x)=0 [/math], where [math]x[/math] ranges over some function space and [math]\mu[/math] is an infinite-dimensional analog of the moment map in symplectic geometry. In Hamiltonian dynamics the moment map was introduced to use a group action to reduce the number of degrees of freedom in the ODE. It was soon discovered that the moment map could be applied to Geometric Invariant Theory: if a compact Lie group [math]G[/math] acts on a projective algebraic variety [math]X[/math], then the complexification [math]G^c[/math] also acts and there is an isomorphism of orbifolds [math] X^s/G^c=X//G:=\mu^{-1}(0)/G [/math] between the space of orbits of Mumford's stable points and the Marsden-Weinstein quotient.
In September of 2013 Dietmar Salamon, his student Valentina Georgoulas, and I wrote an exposition of (finite-dimensional) GIT from the point of view of symplectic geometry. The theory works for compact Kaehler manifolds, not just projective varieties. I will describe our paper in this talk; the following Monday Dietmar will give more details in the Geometric Analysis Seminar.

Anton Lukyanenko (Illinois): Uniformly quasi-regular mappings on sub-Riemannian manifolds

Abstract: A quasi-regular (QR) mapping between metric manifolds is a branched cover with bounded dilatation, e.g. f(z)=z^2. In joint work with K. Fassler and K. Peltonen, we define QR mappings of sub-Riemannian manifolds and show that: 1) Every lens space admits a uniformly QR (UQR) mapping f. 2) Every UQR mapping leaves invariant a measurable conformal structure. The first result uses an explicit "conformal trap" construction, while the second builds on similar results by Sullivan-Tukia and a connection to higher-rank symmetric spaces.

Neil Hoffman (Melbourne): Verified computations for hyperbolic 3-manifolds

Abstract: Given a triangulated 3-manifold M, a natural question is: does M admit a hyperbolic structure? While this question can be answered in the negative if M is known to be reducible or toroidal, it is often difficult to establish a certificate of hyperbolicity, and so computer methods have been developed for this purpose. In this talk, I will describe a new method to establish such a certificate via verified computation and compare the method to existing techniques. This is joint work with Kazuhiro Ichihara, Masahide Kashiwagi, Hidetoshi Masai, Shin'ichi Oishi, and Akitoshi Takayasu.

Khalid Bou-Rabee (Minnesota): On generalizing a theorem of A. Borel

The proof of the Hausdorff-Banach-Tarski paradox relies on the existence of a nonabelian free group in the group of rotations of [math]\mathbb{R}^3[/math]. To help generalize this paradox, Borel proved the following result on free groups.
Borel's Theorem (1983): Let [math]F[/math] be a free group of rank two. Let [math]G[/math] be an arbitrary connected semisimple linear algebraic group (e.g., [math]G = \mathrm{SL}_n[/math] where [math]n \geq 2[/math]). If [math]\gamma[/math] is any nontrivial element in [math]F[/math] and [math]V[/math] is any proper subvariety of [math]G(\mathbb{C})[/math], then there exists a homomorphism [math]\phi: F \to G(\mathbb{C})[/math] such that [math]\phi(\gamma) \notin V[/math].

What is the class, [math]\mathcal{L}[/math], of groups that may play the role of [math]F[/math] in Borel's Theorem? Since the free group of rank two is in [math]\mathcal{L}[/math], it follows that all residually free groups are in [math]\mathcal{L}[/math]. In this talk, we present some methods for determining whether a finitely generated group is in [math]\mathcal{L}[/math]. Using these methods, we give a concrete example of a finitely generated group in [math]\mathcal{L}[/math] that is *not* residually free. After working out a few other examples, we end with a discussion of how this new theory provides an answer to a question of Breuillard, Green, Guralnick, and Tao concerning double word maps. This talk covers joint work with Michael Larsen.

Morris Hirsch (Wisconsin): Common zeros for Lie algebras of vector fields on real and complex 2-manifolds

The celebrated Poincare-Hopf theorem states that a vector field [math]X[/math] on a manifold [math]M[/math] has nonempty zero set [math]Z(X)[/math], provided [math]M[/math] is compact with empty boundary and has nonzero Euler characteristic. Surprisingly little is known about the set of common zeros of two or more vector fields, especially when [math]M[/math] is not compact. One of the few results in this direction is a remarkable theorem of Christian Bonatti (Bol. Soc. Brasil. Mat. 22 (1992), 215–247), stated below.
When [math]Z(X)[/math] is compact, [math]i(X)[/math] denotes the intersection number of [math]X[/math] with the zero section of the tangent bundle.

Bonatti's Theorem: Assume [math]\dim_{\mathbb{R}} M \leq 4[/math], [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Then every analytic vector field commuting with [math]X[/math] has a zero in [math]Z(X)[/math].

In this talk I will discuss the following analog of Bonatti's theorem. Let [math]\mathfrak{g}[/math] be a Lie algebra of analytic vector fields on a real or complex 2-manifold [math]M[/math], and set [math]Z(\mathfrak{g}) := \cap_{Y \in \mathfrak{g}} Z(Y)[/math].

Assume [math]X[/math] is analytic, [math]Z(X)[/math] is compact and [math]i(X) \neq 0[/math]. Let [math]\mathfrak{g}[/math] be generated by analytic vector fields [math]Y[/math] on [math]M[/math] such that the vectors [math][X,Y]_p[/math] and [math]X_p[/math] are linearly dependent at all [math]p \in M[/math]. Then [math]Z(\mathfrak{g}) \cap Z(X) \neq \emptyset[/math].

Related results on Lie group actions, and nonanalytic vector fields, will also be treated.

Sean Paul (Wisconsin): (Semi)stable Pairs I
Sean Paul (Wisconsin): (Semi)stable Pairs II

Spring 2014

January 31: Spencer Dowdall (UIUC), Fibrations and polynomial invariants for free-by-cyclic groups (host: Kent)
February 28: Jae Choon Cha (POSTECH, Korea), TBA (host: Maxim)
Spring Break
April 4: Matthew Kahle (Ohio), TBA (host: Dymarz)
April 11: Ioana Suvaina (Vanderbilt), TBA (host: Maxim)
April 25: Jingzhou Sun (Stony Brook), TBA (host: Wang)

Spring Abstracts

Spencer Dowdall (UIUC): Fibrations and polynomial invariants for free-by-cyclic groups

The beautiful theory developed by Thurston, Fried and McMullen provides a near-complete picture of the various ways a hyperbolic 3-manifold M can fiber over the circle.
Namely, there are distinguished convex cones in the first cohomology H^1(M;R) whose integral points all correspond to fibrations of M, and the dynamical features of these fibrations are all encoded by McMullen's "Teichmuller polynomial." This talk will describe recent work developing aspects of this picture in the setting of a free-by-cyclic group G. Specifically, I will introduce a polynomial invariant that determines a convex polygonal cone C in the first cohomology of G whose integral points all correspond to algebraically and dynamically interesting splittings of G. The polynomial invariant additionally provides a wealth of dynamical information about these splittings. This is joint work with Ilya Kapovich and Christopher J. Leininger.

Matthew Kahle (Ohio): TBA

Jingzhou Sun (Stony Brook): TBA
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau, whe...

That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time?

Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered?

@tpg2114 To reduce the data points needed for calculating the time correlation, you can run two copies of the simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points.

@DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably a historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series, each equally spaced, 1e-9 seconds apart. The black line is \frac{dT}{dt} and doesn't have an axis -- I don't care what the values are. The solid blue line is abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time.

Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...

So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month, (2) do we reword the homework policy, and how, (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
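For what it's worth, the three numbered questions above can be answered with a small synthetic experiment (the signals below are made-up stand-ins for the strain and dT/dt data, not the actual simulation output): subtracting the means fixes the sign puzzle, mode="full" explains the 2N-1 output length, and indexing the argmax with an explicit lag axis gives the lead/lag.

```python
import numpy as np
from scipy.signal import correlate

# Toy stand-ins for the two signals: 200 samples, 1e-9 s apart.
rng = np.random.default_rng(1)
n, dt = 200, 1e-9
x = rng.standard_normal(n)          # e.g. abs(shear strain)
lag_true = 7
y = np.roll(x, lag_true)            # e.g. dT/dt, lagging x by 7 samples

# Subtract the means first; a constant offset otherwise dominates the
# correlation and can make positively correlated signals look negative.
c = correlate(y - y.mean(), x - x.mean(), mode="full")

# mode="full" returns len(x) + len(y) - 1 = 399 points (hence the
# "~400 time steps" observation); output entry i corresponds to lag i-(n-1).
lags = np.arange(-(n - 1), n)
lag = lags[np.argmax(c)]            # positive lag: y trails x
print(lag, lag * dt)                # lag in samples and in seconds
```

With real data you would also normalize by the standard deviations and the overlap length at each lag before comparing correlation magnitudes across lags.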
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question. It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy. For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar} ~p$$ $$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$ and then define the following $$K_1=\frac 14 (p^2-q^2)$$ $$K_2=\frac 14 (pq+qp)$$ $$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$ The first part is to show that $$Q \... Okay. I guess we'll have to see what people say, but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics. Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay. @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." I.e.
is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's Approximation is? Some have argued that is on topic even though there's nothing really physical about it, just because it's 'graduate level'. Others would argue it's not on topic because it's not conceptual. How can one prove that $$ \operatorname{Tr} \log \mathcal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$ for a sufficiently well-behaved operator $\mathcal{A}$? How (mathematically) rigorous is the expression? I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote" -- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention. @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as homework. @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the topvoted answers on both. Afternoon folks.
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (My initial feeling is no because it's really a math question, but I figured I'd ask anyway.) @DavidZ Ya, I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc.). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X". user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube. @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper).
Yes, assuming you want both $f_1(x)$ and $f_2(x)$ with integer coefficients. One of the reasons why LLL is so popular is precisely because it gives a polynomial time algorithm to factor polynomials with integer coefficients. For an excellent introduction, I recommend C. Yap's "Fundamental Problems in Algorithmic Algebra" (available online, for free), specifically chapter 9 "Lattice Reduction and Applications" (section 9.6). Following Yap, choose an approximation, $\alpha$, of a (complex) root of $f(x)$. Set up the lattice reduction with the following basis: $$ B_k = \begin{bmatrix} \text{Re}(\alpha^0) & \text{Re}(\alpha^1) & \text{Re}(\alpha^2) & \cdots & \text{Re}(\alpha^k) \\ \text{Im}(\alpha^0) & \text{Im}(\alpha^1) & \text{Im}(\alpha^2) & \cdots & \text{Im}(\alpha^k) \\ c & 0 & 0 & \cdots & 0 \\ 0 & c & 0 & \cdots & 0 \\ 0 & 0 & c & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & c \end{bmatrix}$$ Choose $c = 2^{-4t^3}$, with $\alpha$ computed to $O(t^3)$ bits for each of the real and imaginary portions. Here, $t = \log ||f(x)||_{\infty}$ (the number of bits of the maximum coefficient of $f(x)$). Quoted from FPiAA: Theorem 9. Given a basis $A \in \mathbb{Q}^{n \times m}$, we can compute a reduced basis $B$ with $\Lambda(A) = \Lambda(B)$ using $O(n^5(s + \log n))$ arithmetic operations, where $s$ is the maximum bit size of entries in $A$. This gives us the (polynomial) run time. The proof of correctness -- that a properly set-up lattice will give you the minimal factor of a reducible polynomial -- is a bit more involved, but please refer to theorem 14 of the same chapter to see the relation between the reduced basis and the minimal polynomial. By setting up the basis with dimension $n=k$ you can easily see the bound as roughly $O(k^5( \lg(||f(x)||_{\infty}^3) + \log n))$.
Since, by assumption, you know the degrees of $f_1(x)$ and $f_2(x)$, the lattice reduction algorithm only needs to be run once to find one of the two factors of $f(x)$. You can then use the discovered $f_j(x)$ to find the other by standard polynomial division. The original paper by Lenstra, Lenstra and Lovász, "Factoring polynomials with rational coefficients", is also quite readable and I found it to be a good complement to Yap's introduction.
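The root-approximation-to-lattice step described above is what PSLQ-style integer-relation routines automate; mpmath's `findpoly` recovers an integer polynomial from a numerical root and can serve as a quick sanity check of the construction (an illustrative stand-in, not Yap's exact lattice setup):

```python
from mpmath import mp, sqrt, mpf, findpoly, polyval

mp.dps = 50                          # enough digits to make the relation unambiguous
alpha = sqrt(2) + sqrt(3)            # a root of the (initially unknown) x^4 - 10 x^2 + 1

# search for an integer polynomial of degree <= 4 having alpha as a root
p = findpoly(alpha, 4)               # list of coefficients, highest degree first

residual = polyval([mpf(c) for c in p], alpha)   # should vanish numerically
```

If the working precision is too low relative to the coefficient sizes, `findpoly` returns `None` instead of a spurious relation, which mirrors the precision requirement ($\alpha$ known to $O(t^3)$ bits) in the lattice argument above.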
'perturbative' and 'real' particles (The following text is taken - slightly modified - from: O. Buss, PhD thesis, Appendix B.1.) Reactions which are so violent that they disassemble the whole target nucleus can be treated only by explicitly propagating all particles, the ones in the target and the ones produced in the collision, on the same footing. For reactions which are not violent enough to disrupt the whole target nucleus, e.g. low-energy πA, γA or neutrino A collisions at not too high energies, the target nucleus stays very close to its ground state. In this case, one keeps as an approximation the phase-space density of the target nucleons constant in time ('frozen approximation'). In GiBUU this is controlled by the switch freezeRealParticles. The test-particles which represent this constant target nucleus are called real test-particles. However, one also wants to consider the final state particles. Thus one defines another type of test-particles, which are called perturbative. The perturbative test-particles are produced in a reaction on one of the target nucleons. They are then propagated and may collide with other real ones in the target. The products of such collisions are perturbative particles again. These perturbative particles can thus react with real target nucleons, but may not scatter among themselves. Furthermore, their feedback on the actual densities is neglected. One can simulate in this way the effects of the almost constant target on the outgoing particles without modifying the target. E.g. in πA collisions we initialize all initial state pions as perturbative test-particles. Thus the target automatically remains frozen and all products of the collisions of pions and target nucleons are assigned to the perturbative regime.
Furthermore, since the perturbative particles do not react among themselves or modify the real particles in a reaction, one can also split a perturbative particle into \(N_{test}\) pieces (several perturbative particles) during a run. Each piece is given a corresponding weight \(1/N_{test}\). In this way one simulates \(N_{test}\) possible final state scenarios of the same perturbative particle during one run. The perturbative weight 'perWeight' Usually, in the cases mentioned above, where one uses the separation into real and perturbative particles, one wants to calculate some final quantity like \(d\sigma^A_{tot}=\int_{nucleus}d^3r\int \frac{d^3p}{(2\pi)^3} d\sigma^N_{tot}\,\times\,\dots \). Here we are hiding all medium modifications, e.g. Pauli blocking, flux corrections or medium modifications of the cross section, in the part "\(\,\times\,\dots \)". Now, solving this via the test-particle ansatz (with \(N_{test}\) being the number of test particles), this quantity is calculated as \(d\sigma^A_{tot}=\frac{1}{N_{test}}\sum_{j=1}^{N_{test}\cdot A}d\sigma^j_{tot}\,\times\,\dots \), with \(d\sigma^j_{tot}\) standing for the cross section of the \(j\)-th test-particle. The internal implementation of calculations like this in GiBUU is that a loop runs over all \(N_{test}\cdot A\) target nucleons and creates some event. Thus all these events have the same probability. But since they should be weighted according to \(d\sigma^j_{tot}\), this is corrected by giving all (final state) particles coming out of event \(j\) the weight \(d\sigma^j_{tot}\). This information is stored in the variable perWeight in the definition of the particle type. Thus, in order to get the correct final cross section, one has to sum the perWeights, not count the particles. As an example: if you want to calculate the inclusive pion production cross section, you have to loop over all particles and sum the perWeights of all pions. Simply taking the number of all pions would give false results.
The weights can also be negative. This happens, e.g., in the case of pion production on nucleons. In this case the cross section is determined by the square of a coherent sum of resonance and background amplitudes and as such is positive. In the code the resonance contribution is separated out as the square of the resonance amplitude and as such is positive as well. The remainder, i.e. the sum of the square of the background amplitude and the interference term of resonance and background amplitudes, can be negative, however. This latter contribution is just the event types labeled 32 and 33 in the code, which describe the 1pi bg plus interference. How to compute cross sections from the perturbative weights for neutrino-induced reactions The output file FinalEvents.dat contains all the events generated. Per event, all the four-momenta of final state particles are listed together with the incoming neutrino energy, the 'perWeight' and various other useful properties (see documentation for FinalEvents.dat). In each event there is one nucleon with perWeight=0 which represents the hit nucleon; for 2p2h processes the second initial nucleon is not written out. The final state nucleons may have masses which are spread out around the physical mass in a very narrow distribution. There are two reasons for that: 1. Nucleons may still be inside the potential well and thus have lower masses. These nucleons can be eliminated from the final events file by imposing the condition that they are outside the nuclear potential (the spatial coordinates of all particles are also given in the FinalEvents.dat file). 2. For numerical-practical reasons the nucleons are given a Breit-Wigner mass distribution with a width of typically 1 MeV around the physical mass when calculating the QE cross section. As an example we consider here the calculation of the CC inclusive differential cross section dsigma/dE_mu for a neutrino-induced reaction on a nucleus; E_mu is the energy of the outgoing muon.
In FinalEvents.dat the lines with the particle number 902 contain all the muon kinematics as well as the perWeight. In order to produce a spectrum one first has to bin the muon energies into energy bins. This binning process must preserve the connection between energy and perWeight. Then all the perWeights in a given energy bin are summed and divided by the bin width to obtain the differential cross section. If the GiBUU run used - for better statistics - a number of runs >1 at the same energies, then this number of runs has to be divided out to obtain the final differential cross section. All cross sections in GiBUU, both the precomputed ones and the reconstructed ones, are given per nucleon. The units are 10^-38 cm^2 for neutrinos and 10^-33 cm^2 for electrons.
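The recipe above (sum the perWeights per energy bin, then divide by the bin width and by the number of runs) is exactly a weighted histogram. A sketch with synthetic muon data -- the energy range, weights and run count here are made-up stand-ins, not values from an actual FinalEvents.dat:

```python
import numpy as np

# synthetic stand-in for the muon (particle ID 902) lines of FinalEvents.dat
rng = np.random.default_rng(1)
E_mu = rng.uniform(0.2, 1.0, size=10_000)       # outgoing muon energies (GeV)
perweight = rng.uniform(0.0, 2.0, size=10_000)  # perWeight of each event

num_runs = 5                                    # number of runs > 1, for statistics
edges = np.linspace(0.2, 1.0, 41)               # 40 energy bins
width = np.diff(edges)

# sum the perWeights per bin (NOT the raw event counts), then normalize
summed, _ = np.histogram(E_mu, bins=edges, weights=perweight)
dsigma_dE = summed / width / num_runs           # in 10^-38 cm^2 / GeV per nucleon
```

Passing the perWeights through `weights=` keeps the energy-to-weight pairing intact, which is the "binning must preserve the connection" requirement above; negative perWeights (the interference events) enter the sums with their sign automatically.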
Let SAT-100 be the following problem: Input: Any boolean logic formula. Output: True if there exists a combination of exactly 100 input variables that satisfies the formula. This is the description of a problem that is apparently in $P$. (old exam question) I have tried to design an algorithm but I got stuck, so here it goes: Input: boolean logic formula F. If count(variables in F) < 100: return false; else: try all combinations of input variables. And here is the problem: building and evaluating all combinations of input variables can't seem to be polynomial because: $$ { n \choose 100} = \frac{n! }{100! \cdot (n-100)!} \in \mathcal{O}(n!) $$ and this nasty factorial can't be bounded with any polynomial that I know of. I don't think the exam is wrong, so I must have overlooked something.
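The usual resolution of this puzzle is that $\binom{n}{100} \le n^{100}$, which is polynomial in $n$ because 100 is a fixed constant. A sketch of the brute force (shown with a small $k$ and a toy formula; the `evaluate` callback and the interpretation "exactly $k$ variables set to true" are my assumptions about the exam's intent):

```python
from itertools import combinations

def sat_exactly_k(n, evaluate, k):
    """Try every assignment with exactly k of the n variables True.
    There are C(n, k) <= n^k such assignments: polynomially many for fixed k."""
    for true_vars in combinations(range(n), k):
        chosen = set(true_vars)
        if evaluate([i in chosen for i in range(n)]):
            return True
    return False

# toy formula over 6 variables: (x0 or x1) and (not x2) and x5
f = lambda x: (x[0] or x[1]) and (not x[2]) and x[5]
```

Here `sat_exactly_k(6, f, k=2)` succeeds (e.g. x0 and x5 true), while `k=1` fails because x5 alone cannot satisfy (x0 or x1); each formula evaluation is itself polynomial, so the whole loop runs in time $O(n^k \cdot |F|)$.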
First of all, sorry for this long and maybe not very informative title... Context: Let $G=(V,E)$ be a directed graph, and let $v_0 \in V$ be the initial node of the paths that I will consider in the graph. Let $\Sigma$ be an alphabet and let $T$ (for target) be a set of words built from $\Sigma$ (i.e. $T \subset \Sigma^*$). Nodes of $G$ are tagged with letters from $\Sigma$. Each node is tagged with only one letter, and most nodes are tagged with the empty letter. The general question is: how to find (algorithm, complexity) the set of paths starting at $v_0$ that contains all words from $T$ and whose cumulated length is minimum. This problem is not that hard to solve (I am not speaking from the complexity point of view): this can be solved using classical optimization methods. For instance we take all paths of a sufficient length that cover elements of $T$, then we choose a subset of these paths that cover all elements of $T$ and whose length is minimum. Before that we of course checked (reachability analysis) whether all elements of $T$ can be found in the graph. My problem: Imagine now that we don't have access to the graph explicitly (not enough memory, for instance). Instead we are given a function $f : V \rightarrow subgraph_k(G) $ that gives us, for a node $v$, the subgraph of diameter $2k$ centered around $v$ ($k \in \mathbb{N}$ fixed a priori). Moreover we have only a very limited memory (let's say we cannot store more than a few paths of reasonable length). With this very restricted access to the graph, how can we solve the problem? All solutions that I can imagine are based on sampling and are clearly not effective. Does any of you have an idea?
IWOTA 2019 International Workshop on Operator Theory and its Applications We discuss a kind of spectral theory for bounded linear operators $T$ on a Hilbert space $H$ satisfying \begin{equation}\label{eq} \alpha(T^*,T) := \sum_{n=0}^{\infty} \alpha_n T^{*n} T^n \ge 0 \quad \text{(convergence in SOT)}, \end{equation} where $\alpha(t) = \sum \alpha_n t^n$ is an analytic function with $\alpha_n \in \mathbb{R}$ and $\alpha_0 = 1$. This type of condition has been extensively analysed, starting from pioneering work by Agler in the 1980s. There are also many papers on tuples of commuting operators. Put $k(t)=1/\alpha(t) = \sum k_n t^n$. In 2018, Bickel, Clouâtre, Hartz and McCarthy proved (even for tuples of commuting operators) that if $\alpha_n < 0$ for every $n \ge 1$ and the quotients $k_{n+1}/k_n$ tend to $1$, then \eqref{eq} holds if and only if $T$ extends to $(B_k \otimes I_\mathcal{R})\oplus S$, where $B_k$ is a backward shift on the RKHS of analytic functions associated to $k$, $\mathcal{R}$ is an auxiliary Hilbert space and $S$ is an isometry. As a complement of this result, we show that $(V_D,W)$ provides a minimal model of $T$, where $V_D$ is a contraction given by \[V_D x (z) := D (I-zT)^{-1} x, \quad D := (\alpha(T^*,T))^{1/2}\] and $W:= (I_H - V_D^* V_D)^{1/2}$. So $D$ plays the role of the defect operator of $T$. Let $\mathcal{A}_T$ be the set of analytic functions $\alpha$ with summable Taylor coefficients such that $\sum |\alpha_n| T^{*n}T^n$ converges in SOT, and let $\mathcal{A}_T^0$ be the closure of the polynomials in $\mathcal{A}_T$. Define \[ \mathcal{C}_\alpha^w := \{ T \in L(H) \, : \, \alpha \in \mathcal{A}_T, \alpha(T^*,T) \ge 0 \}.\] Using Gelfand theory and a factorization lemma for polynomials, we prove the following result. Let $T \in L(H)$ with $\sigma(T) \subset \overline{\mathbb{D}}$. Suppose that $(1-t)^a \in \mathcal{A}_T$, where $a > 0$, and let $\beta\in \mathcal{A}_T^0$ satisfy $\beta(t) > 0$ for every $t \in [0,1]$.
If \[ \alpha(t) = (1-t)^a \beta(t)\] and $T \in \mathcal{C}_\alpha^w$, then $T$ is similar to an operator in $\mathcal{C}_{(1-t)^a}^w$. This result applies to many functions $\alpha$ such that $k(t)$ has negative coefficients and Agler’s techniques do not work. We will give some consequences for Ergodic Theory. In particular, we show that any operator in $\mathcal{C}^w_{(1-t)^a}$ is quadratically $(C,b)$-bounded whenever $0 < 1-a < b$. If time permits, we will also discuss a new model of an operator in an annulus. In this case, the annulus turns out to be a complete $K$-spectral set of the operator. This is joint work in progress with Luciano Abadias and Dmitry Yakubovich. Recall that, if a subset $\Omega$ of the complex plane contains the numerical range of a bounded operator $A$ on a Hilbert space $H$, then $\Omega$ is a $C(\Omega)$-spectral set for $A$, i.e. $ \|f(A)\|\leq C(\Omega)\sup_{z\in \Omega}|f(z)|, $ for all rational functions $f$ bounded in $\Omega$. I have made the conjecture that $C(\Omega)\leq 2$, and nowadays the best estimate (due to César Palencia) is $C(\Omega)\leq 1{+}\sqrt2$. I will speak about this estimate and propose some variations allowing one to consider non-convex situations. This talk is based on a collaboration with Anne Greenbaum. A well known result due to Carleson characterizes interpolating sequences for scalar valued functions in $\mathrm{H}^\infty(\mathbb{D})$. In this talk we extend his theorem to sequences of matrices, with no assumptions on their sizes. In particular, we will relate an interpolating problem for a sequence of matrices to a quasi free interpolating problem for their spectra. Let $A$ be a complex Banach algebra with identity $1$, $a\in A$, $n\in \mathbb{Z}_+$ and $\epsilon>0$.
The $(n,\epsilon)$-pseudospectrum $\Lambda_{n,\epsilon}(A, a)$ of $a$ is defined as \begin{align*}{\Lambda_{n,\epsilon} (A,a):=\left\{\lambda \in \mathbb{C}: (\lambda-a) \text{ is not invertible or } \|(\lambda-a)^{-2^{n}}\|^{1/2^n} \geq \frac{1}{\epsilon}\right\}}.\end{align*} Some elementary properties of $\Lambda_{n,\epsilon}(A,a)$ will be discussed. If $p\in A$ is a non-trivial idempotent, then the subalgebra $pA p$ is a Banach algebra, called a reduced Banach algebra, with identity $p$. Suppose $q=1-p$. For $a\in A$ with $ap = pa$, we examine the relationship between the $(n,\epsilon)$-pseudospectrum of $a$, $\Lambda_{n,\epsilon}(A, a)$, and the $(n,\epsilon)$-pseudospectra of $pap \in pA p$, $\Lambda_{n,\epsilon}(pA p, pap)$, and of $qaq \in qA q$, $\Lambda_{n,\epsilon}(q A q, qaq)$. We note that in the set-up of operators on Banach spaces, this is equivalent to an operator $T$ having a block diagonal representation with respect to a particular direct sum decomposition. We also extend this study by considering a finite number of idempotents $p_1 ,\cdots, p_n$, as well as an arbitrary family of idempotents satisfying certain conditions. This is based on joint work with S. H. Kulkarni, Professor, Indian Institute of Technology Madras, Chennai. A functional version of the Furuta parametric relative operator entropy is studied. Some inequalities are also discussed. In addition, we introduce the Heron and Heinz means of two convex functionals. Some inequalities involving these functional means are also investigated. The operator versions of our theoretical functional results are immediately deduced. We also obtain new refinements of some known operator inequalities via our functional approach. The theoretical results obtained by our functional approach immediately imply those of the operator versions in a simple, fast and nice way. This is joint work with M. Raissouli.
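For matrices, the $(n,\epsilon)$-pseudospectrum condition defined at the start of this abstract can be checked directly. A minimal sketch (the spectral norm and the test matrix are my choices for illustration; any submultiplicative norm fits the Banach-algebra definition):

```python
import numpy as np

def in_pseudospectrum(A, lam, n=1, eps=0.1):
    """Membership of lam in the (n, eps)-pseudospectrum of the matrix A:
    lam - A is singular, or ||(lam - A)^(-2^n)||^(1/2^n) >= 1/eps."""
    M = lam * np.eye(A.shape[0]) - A
    try:
        R = np.linalg.matrix_power(np.linalg.inv(M), 2 ** n)
    except np.linalg.LinAlgError:
        return True                          # lam - A is not invertible
    return np.linalg.norm(R, 2) ** (1.0 / 2 ** n) >= 1.0 / eps

A = np.diag([0.0, 1.0])
```

For this diagonal example, points within roughly `eps` of an eigenvalue are captured (`in_pseudospectrum(A, 1.01)` holds), while points far from the spectrum are not (`in_pseudospectrum(A, 3.0)` fails), matching the intuition that the pseudospectrum fattens the spectrum.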
I generalized several Berezin number inequalities involving products of operators acting on a Hilbert space $\mathscr H(\Omega)$. Among other inequalities, it is shown that if $A, B$ are positive operators and $X$ is any operator, then \begin{align*} \textbf{ber}^{r}(H_{\alpha}(A,B)&)\leq\frac{\|X\|^{r}}{2}\left(\textbf{ber}(A^{r}+B^{r})-2\inf_{\|\hat{k}_{\lambda}\|=1}\eta(\hat{k}_{\lambda})\right)\\& \leq\frac{\|X\|^{r}}{2}\Big(\textbf{ber}(\alpha A^{r}+(1-\alpha)B^{r})+\textbf{ber}((1-\alpha)A^{r}+\alpha B^{r})\\&\qquad\qquad\qquad-2\inf_{\|\hat{k}_{\lambda}\|=1}\eta(\hat{k}_{\lambda})\Big), \end{align*} where $\eta(\hat{k}_{\lambda})=r_{0}(\langle A^{r}\hat{k}_{\lambda},\hat{k}_{\lambda}\rangle^{\frac{1}{2}}-\langle B^{r}\hat{k}_{\lambda},\hat{k}_{\lambda}\rangle^{\frac{1}{2}})^{2}$, $r\geq 2$, $0\leq\alpha\leq1$, $r_{0}=\min\{\alpha,1-\alpha\}$ and $H_{\alpha}(A,B)=\frac{A^\alpha XB^{1-\alpha}+A^{1-\alpha} XB^{\alpha}}{2}$. A classical result of Sz.-Nagy and Foias shows that every contraction $T$ on a Hilbert space without unitary summand admits an $H^{\infty}$-functional calculus, that is, one can make sense of $f(T)$ for every bounded analytic function $f$ in the unit disc. In multivariable operator theory, one studies tuples of commuting operators instead of the single operator $T$. In this setting, the role of $H^{\infty}$ is played by multiplier algebras of certain reproducing kernel Hilbert spaces. I will talk about a generalization of the Sz.-Nagy--Foias functional calculus, which applies to a large class of multiplier algebras on the unit ball in $\mathbb{C}^d$. This is joint work with Kelly Bickel and John McCarthy. Markov-type inequalities provide upper bounds on the norm of the (higher order) derivative of an algebraic polynomial in terms of the norm of the polynomial itself. The investigation for classical $L^2$-norms began in the 1940s with Erhard Schmidt.
Only recently, results for higher order derivatives in cases where the norms are of the Laguerre, Gegenbauer, or Hermite type were found by Böttcher and Dörfler. The results were extended to inequalities where the norms on both sides of the inequality may be chosen differently. We will show how this problem can be treated for other norms and hint at some problems that arise on the way. The Oka extension theorem says that if $p=(p_1, \dots, p_m)$ is an $m$-tuple of polynomials in $d$ variables, and $f$ is a function holomorphic on a neighborhood of the polynomial polyhedron $\{ z \in {\mathbb C}^d : |p_j (z) | \leq 1, 1 \leq j \leq m \}$, then there is a function $F$ holomorphic on a neighborhood of the polydisk in $d+m$ variables so that $f(z) = F(z, p(z))$. If one asks for norm bounds on $F$ in terms of $f$, one is led to consider function norms defined in terms of evaluating the functions on certain $d$-tuples of commuting operators. We will discuss this, and show how this approach leads to a reformulation of the Crouzeix conjecture as a conjecture about extending holomorphic functions from certain subvarieties of the polydisk to the whole polydisk. This is joint work with Jim Agler and Nicholas Young. It is known that, in a Hilbert space setting, the numerical range is a $(1+\sqrt{2})$-spectral set. This means that for any linear operator $A : H \to H$, acting on a Hilbert space $H$, and for any holomorphic mapping $f$, defined at least on the numerical range $W(A)$ of $A$, there holds $$ \| f(A) \| \le (1+\sqrt{2}) \sup_{z \in W(A)} | f(z)|.$$ After reconsidering different alternatives of the proof of this result, an improvement of it is presented. Crouzeix observed in 2007 that for any operator $A$ in a Hilbert space and any polynomial $p$, $\|p(A)\|\leq C\sup_W|p(z)|$, where $W$ is the numerical range of $A$ and the constant $C$ is universal, i.e. depends neither on the operator nor on the space.
He also proved in the same paper that $2\leq C\leq 11.08$ and conjectured that $C=2$. We will review recent developments on proving the conjecture ($C\leq 1+\sqrt{2}$) and show some deformations of the numerical range that may lead to new constants. (Joint work with P. Pagacz and M. Wojtylak.) We consider Fock representations of the generalized commutation relations of the form $a(s)a^+(t)=Q(s,t)a^+(t)a(s)+d(s,t)$, where $s,t$ are in $R^n$, $|Q(s,t)|\leq1$ and $d(s,t)$ is defined by a smeared double integral. This contains the anyon statistics with $|Q(s,t)|=1$. The operators $a(t)$ and $a^+(t)$ are realized as (distribution-valued) creation and annihilation “at a point” on a $Q$-deformed Fock space. Additional relations of the form $a(s)a(t)=Q(t,s)a(t)a(s)$ are obtained for those $s,t$ for which $|Q(s,t)|=1$. The talk is based on a joint paper: Bozejko, Lytvynov, Wysoczanski, Fock representations of Q-deformed commutation relations, Journal of Mathematical Physics 58, (2017).
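The Crouzeix--Palencia bound discussed above is easy to probe numerically: the boundary of $W(A)$ can be traced by taking, for each direction $\theta$, the top eigenvector of $\mathrm{Re}(e^{-i\theta}A)$ (the standard support-function method), and then $\|p(A)\|$ compared with $\sup_{W(A)}|p|$. A sketch for one random matrix and one polynomial (both arbitrary choices of mine, not from the talks):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# boundary points of the numerical range W(A): for each direction theta,
# the top eigenvector v of (e^{-i theta} A + e^{i theta} A*)/2 yields <Av, v>
boundary = []
for theta in np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False):
    H = (np.exp(-1j * theta) * A + np.exp(1j * theta) * A.conj().T) / 2.0
    _, V = np.linalg.eigh(H)              # eigenvalues in ascending order
    v = V[:, -1]
    boundary.append(v.conj() @ A @ v)
boundary = np.array(boundary)

# compare ||p(A)|| with sup over W(A) of |p|, for p(z) = z^3 - 2z + 1
pA = np.linalg.matrix_power(A, 3) - 2.0 * A + np.eye(4)
lhs = np.linalg.norm(pA, 2)
rhs = np.abs(boundary ** 3 - 2.0 * boundary + 1.0).max()
ratio = lhs / rhs        # Crouzeix-Palencia guarantees ratio <= 1 + sqrt(2)
```

Since $|p|$ attains its maximum over the convex compact set $W(A)$ on the boundary, sampling the boundary suffices; the conjecture says the observed ratio should in fact never exceed 2.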
IWOTA 2019 International Workshop on Operator Theory and its Applications We consider truncated moment problems associated with algebraic varieties in the nonnegative quadrant of $\mathbb{R}^2$. We focus our attention on the interactions between two related problems, as follows. The talk is based on joint work with S. H. Lee and J. Yoon. Joint Work with Mario Kummer and Konrad Schmüdgen. We present new lower bounds on the Caratheodory numbers and their asymthotic behavior. By Richters Theorem (1957) every truncated moment functional is a conic combination of point evaluations and the Caratheodory number is the minimal number of point evaluations required to represent a (or all) truncated moment functional(s). We show that the Caratheodory number is cursed by high dimensions, i.e., for any given degree $d$ and $a \gt 0$ there is a natural number $n$ and a truncated moment functional $L$ which needs at least $(1-a)*\binom{n+d}{n}$ point evaluations. We introduce the concept of a derivative of a (truncated moment) functional and show how this unified theory provides easy and efficient proofs and methods to reconstruct shapes, functions, and measures from their moments. This is joint work with Grigoriy Blekherman (Georgia Tech. Univ.). Let $s$ be a truncated multisequence corresponding to $n$-variable monomials, and let $L$ be the Riesz functional associated to $s$. In previous work [arXiv:1804:04276, 2018] we used an iterative geometric construction to associate to $L$ a sequence of algebraic sets in $\mathbb{R}^{n}$, $S_{0} \supseteq S_{1} \supseteq \cdots$, such that $L$ has a representing measure if and only if the sequence stabilizes at a nonempty set $S_{k} = S_{k+1} = \cdots$, called the core variety of $L$, $CV(L)$. In this case, $CV(L)$ is precisely the union of supports of all finitely atomic representing measures for $L$. 
This result applies more generally when $L$ acts on a finite dimensional vector space of Borel measurable functions on a $T_{1}$ topological space. In the present work we extend the idea of the core variety so as to provide a test for membership in a prescribed convex cone. Let $T$ be a subset of a finite dimensional real vector space $V$, and let $C$ be the conical hull of $T$. Given $L$ in $V$, we use an iterative geometric construction to define the core variety $CV(L)$ to be a subset of $T$. The main result is that $L$ belongs to $C$ if and only if the iterative procedure stabilizes at a nonempty subset. In this case, $CV(L)$ consists of those elements of $T$ which appear in some representation of $L$ as a conical combination of elements of $T$. Let $G$ be one of the classical locally compact abelian groups: $\mathbb{Z}_N^d$, $\mathbb{Z}^d$, $\mathbb{T}^d$ or $\mathbb{R}^d$. Suppose that $U$ is an open subset of $G$ with finite Haar measure and that $U$ is symmetric in the sense that $0\in U$ and that $-x\in U$ whenever $x\in U$. We will discuss the problem of factorizing the constant function $1$ on $G$ as the convolution of two positive definite objects on $G$, one being a continuous positive definite function which is identically zero outside of $U$ and the other a positive definite function (or distribution, depending on the group) supported on the set $\{0\}\cup (G\setminus U)$. We will provide a sufficient condition for the problem to be solvable and present some applications of our result. We provide a new method to approximate a (possibly discontinuous) function using Christoffel-Darboux kernels. Our knowledge about the unknown multivariate function is in terms of finitely many moments of the Young measure supported on the graph of the function.
Such an input is available when approximating weak (or measure-valued) solutions of optimal control problems, entropy solutions to non-linear hyperbolic PDEs, or using numerical integration from finitely many evaluations of the function. While most of the existing methods construct a piecewise polynomial approximation, we construct a semi-algebraic approximation whose estimation and evaluation can be performed efficiently. An appealing feature of this method is that it deals with non-smoothness implicitly, so that a single scheme can be used to treat smooth or non-smooth functions without any prior knowledge. On the theoretical side, we prove pointwise convergence almost everywhere as well as convergence in $L^1$ under broad assumptions. Using more restrictive assumptions, we obtain explicit convergence rates. We illustrate our approach on various examples from control and approximation. In particular we observe empirically that our method does not suffer from the Gibbs phenomenon when approximating discontinuous functions. Joint work with Jean Bernard Lasserre, Swann Marx, Edouard Pauwels and Tillmann Weisser. Given a truncated multisequence $s$ and non-negative integers $\kappa_{\pm}$, we exhibit necessary and sufficient conditions for $s$ to have a representing measure $\mu = \mu_+ - \mu_-$, where ${\rm card}\, {\rm supp} \, \mu_{\pm} = \kappa_{\pm}$. We shall see that necessary and sufficient conditions can be formulated in terms of a rank-preserving Hankel extension such that the polynomial ideal which models the aforementioned linear dependence is real radical. In the case that $\kappa_{-} = 0$, our main result collapses to Curto and Fialkow's well-known flat extension theorem. The proof of the main result relies on some theory of Pontryagin spaces and also some basic results in commutative algebra.
The discrete truncated moment problem considers the question whether, given a discrete subset $K \subset \mathbb{R}^n$ and a collection of real numbers, one can find a measure supported on $K$ whose (power) moments (up to degree $d$) are exactly these numbers. In the case $n=1$ I will present a minimal set of necessary and sufficient conditions for the existence of such measures. I will discuss where we stand in the case $ n \gt 1$, $K =\mathbb{Z}^n$ and $d=2$. This simple problem is surprisingly hard and not treatable with known techniques. Applications to the truncated moment problem for point processes, the so-called realizability or representability problem, are given. This is a joint work with M. Infusino, J. Lebowitz and E. Speer.

In this talk I will explain how sum-of-squares characterizations and semidefinite programming can be used to obtain improved bounds for quantities related to zeros of the Riemann zeta function. This is based on Montgomery's pair correlation approach. I will show how this connects to the sphere packing problem, and speculate about future improvements. Joint work with Andrés Chirre and Felipe Gonçalves.

In this talk, we will propose a new numerical scheme, based on the so-called Lasserre hierarchy, which solves scalar conservation laws. Our approach is based on a very weak notion of solution introduced by DiPerna in 1985, which is called entropy measure-valued solution. Among other nice properties, this formulation is linear in a Borel measure, which is the unknown of the equation, and moreover it is equivalent to the well-known entropy solution formulation. Our aim is to explain that the Lasserre hierarchy allows one to solve such a linear equation without relying on a mesh, but rather by truncating the moments of the measure under consideration up to a certain degree.
We present an overview of how correlation measures (factorial moment measures) and related generating functions can be used to study the time evolution of an interacting particle system in the continuum.

In this talk, we are going to exploit the combinatorial expression of generalized Fibonacci sequences. More precisely, we manage to obtain the combinatorial expression for each term of the $2$-variable moment sequence, which admits a finitely atomic representing measure.

The random moment problem is concerned with determining typical behavior of moment sequences by studying appropriate moment spaces equipped with probability distributions. We consider random moment sequences of probability measures on a subset $E$ of the real line in the three classical cases of $E$ being the unit interval, the half-line and the whole real line, respectively. We find that, depending on $E$, the moments of the Kesten-McKay, Marchenko-Pastur and semicircle distributions are typical, respectively. Special emphasis is given to universality questions, i.e. the question of how typical these three families are. Moreover, we determine typical moment sequences if some moments are known a priori.
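The truncated moment problems discussed in these abstracts all revolve around positivity conditions on Hankel matrices built from the moment data. As an elementary illustration (a sketch of the classical necessary condition, not of any of the specific results announced above): a real sequence $m_0, \dots, m_{2d}$ can be a truncated moment sequence of a positive measure on $\mathbb{R}$ only if its Hankel matrix $H_{ij} = m_{i+j}$ is positive semidefinite. The example sequences below are my own choices for illustration.

```python
import numpy as np

def hankel_from_moments(m):
    """Build the Hankel matrix H[i, j] = m[i + j] from moments m_0, ..., m_{2d}."""
    d = (len(m) - 1) // 2
    return np.array([[m[i + j] for j in range(d + 1)] for i in range(d + 1)])

# Moments of Lebesgue measure on [0, 1]: m_k = 1/(k + 1), a genuine moment sequence.
good = [1.0 / (k + 1) for k in range(5)]   # m_0, ..., m_4
H_good = hankel_from_moments(good)         # this is the 3x3 Hilbert matrix

# A sequence that violates positivity: m_0 = 1, m_1 = 2, m_2 = 1
# would force a negative variance m_2 - m_1^2 = -3 for a probability measure.
bad = [1.0, 2.0, 1.0]
H_bad = hankel_from_moments(bad)

print(np.linalg.eigvalsh(H_good).min())  # nonnegative: PSD, consistent with a measure
print(np.linalg.eigvalsh(H_bad).min())   # negative: no representing measure exists
```

Positivity of the Hankel matrix is only necessary in the truncated setting; the flat-extension theorem of Curto and Fialkow mentioned above is one way to obtain sufficiency.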
I learned recently that when an object moves with a velocity comparable to the velocity of light the (relativistic) mass changes. How does this alteration take place?

In relativistic mechanics, there is a conserved quantity, relativistic momentum: $\vec p = \gamma m \vec v$, with $\gamma = \dfrac{1}{\sqrt{1-\frac{v^2}{c^2}}}$, where $m$ is the invariant mass or, less precisely, the rest mass. Now, one interpretation is to identify $\gamma m$ as the relativistic mass, a speed-dependent mass. But this is actually unnatural, as it leads to the notion of directionally dependent inertia: objects having more inertia along the direction of motion. In fact, it is more natural to identify $\gamma \vec v$ as the spatial components of a four-vector, the four-velocity $\mathbf{U}$. Then, the four-momentum is just $m\mathbf{U}$ with spatial components $\vec p$: $m\mathbf U = (\gamma m c, \gamma m \vec v)$

Here is a (still rather long) sketch of Einstein's original development of the relativistic kinetic energy, from his celebrated 1905 paper "On the Electrodynamics of Moving Bodies" linked to in Ben Crowell's answer. This answer's approach is also motivated by a paper linked by dmckee in chat. Having noted the inconsistency of Maxwell's electrodynamics with standard Newtonian mechanics, Einstein offers up his principles for a new dynamics: (1) the dynamical laws should be the same in two coordinate systems in uniform relative motion (so there's no preferred system of "absolute rest"); (2) the constancy of the speed of light $c$. Note that the first is actually already satisfied by Newtonian dynamics; it's the second principle that's revolutionary. From these principles he develops the Lorentz transformation, from which in turn flow (among many other things): time dilation: a moving clock appears to run slow.
From the point of view of a reference frame in which a clock, moving with velocity $v$, marks a time interval $\Delta \tau$, the reference frame's clock system records a longer interval $\Delta t$: $$ \Delta t = \gamma \Delta \tau \quad \text{ where } \gamma = \frac{1}{\sqrt{1-\left( \frac{v}{c} \right)^2}} $$ (I have to note here that Einstein actually uses the symbol $\beta$ for what we call $\gamma$; since $\beta$ now means something completely different, this change in notation causes me no end of confusion. I wonder when the change occurred?) the velocity addition formula, which for co-linear velocities $v$ and $w$ gives a resultant $V$: $$ V = \Phi(v,w) = \frac{v + w}{1 +\frac{vw}{c^2}} $$ If $w \ll v$, we can write (in the limit $w \rightarrow 0$): $$ V = v + dv = v + \phi(v) w \quad \text{ where } \phi(v) = \left. \frac{\partial \Phi}{\partial w} \right|_{w=0} = \frac{1}{\gamma ^2}$$ Note that Newtonian dynamics is recovered by setting $\phi=1$, which amounts to $\gamma=1$. Einstein next demonstrates that Maxwell's equations already satisfy both his principles, and that the electric field component $E$ in the direction of motion of a moving frame is the same in both moving and stationary frames. With all that as preparation, it's time for the main event: consider a charge $q$ accelerated from rest by a uniform electric field $E$ across a potential energy difference $W=qEl$. By conservation of energy, the final kinetic energy $T$ of the charge will be $T=W$.
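The limit defining $\phi(v)$ is easy to check numerically. The following sketch (my own illustration, in units with $c=1$ and $v=0.8$) compares a central finite-difference derivative of $\Phi$ at $w=0$ with the claimed value $1/\gamma^2 = 1 - v^2$:

```python
def Phi(v, w, c=1.0):
    """Relativistic addition of co-linear velocities v and w."""
    return (v + w) / (1 + v * w / c**2)

v = 0.8
h = 1e-7
# Central finite difference for the partial derivative of Phi with respect to w at w = 0.
phi_numeric = (Phi(v, h) - Phi(v, -h)) / (2 * h)
phi_exact = 1 - v**2  # 1/gamma^2 with c = 1

print(phi_numeric, phi_exact)  # both close to 0.36
```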
At any point in its motion, where the particle's instantaneous velocity is $v$ in the lab frame, one can establish a co-moving reference frame in which Newton's law applies (instantaneously) for the change in velocity $dw$ (in that frame): $$ dw = \frac{q}{m} E d \tau $$ Transforming this expression to the lab frame using the above results, one finds: $$ \frac{dv}{\phi(v)} = \frac{q}{m} E \frac{dt}{\gamma} \quad \text{ or } \quad dv = \frac{1}{\gamma^3} \frac{qE}{m} dt$$ Since the rate of energy change (the power) is: $$ \frac{dT}{dt} = -\frac{dW}{dt} = \frac{d}{dt} (qEx) = qEv $$ we find, for the kinetic energy of the accelerated charge: $$ T = \int_0^{t_f} qEv dt = m \int_0^{v_f} \gamma^3 v dv = mc^2 (\gamma - 1)$$ where the $\gamma$ in the result is evaluated at the final velocity $v_f$. Note that in the Newtonian limit $\gamma=1$, the integral evaluates to the familiar $\frac{1}{2} mv_f^2$. Let's start by assuming the postulates of special relativity given in Einstein 1905a. One of these is that $c$ is the same in all frames of reference. There are really two things we would like to do: (1) prove that the usual formulas from Newtonian mechanics no longer give a usable description of dynamics, and (2) find out how to modify those formulas. Task #1 is pretty straightforward. For example, suppose we have an elastic, one-dimensional collision between objects $M$ and $m$, with $M \gg m$, in a frame of reference where $m$ is initially at rest and $M$ has initial velocity $v$. If we assume the Newtonian expressions for momentum and kinetic energy, then the result of such a collision is that $m$'s final velocity is $v'=2v$. In the case where $v=c/2$, this would cause $m$ to fly off at $v'=c$. But this contradicts Einstein's second postulate, because if it's possible for material objects to move at $c$, then it's possible for observers to move at $c$, but then in such an observer's frame of reference, a ray of light could be moving at zero speed. 
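Going back to the kinetic-energy integral derived just above, $T = m \int_0^{v_f} \gamma^3 v \, dv = mc^2(\gamma - 1)$: a quick numerical quadrature confirms the closed form. This is a sketch of my own (midpoint rule, units $m = c = 1$, final speed $v_f = 0.9$ chosen arbitrarily), not part of Einstein's argument:

```python
import math

def gamma(v):
    return 1.0 / math.sqrt(1.0 - v * v)

v_f = 0.9
n = 200_000
dv = v_f / n

# Midpoint rule for the integral of gamma(v)^3 * v over [0, v_f], with m = c = 1.
T_numeric = sum(gamma((i + 0.5) * dv) ** 3 * ((i + 0.5) * dv) * dv for i in range(n))
T_closed = gamma(v_f) - 1.0  # m c^2 (gamma - 1) with m = c = 1

print(T_numeric, T_closed)  # agree to many digits
```

In the Newtonian limit the same quadrature reproduces $\tfrac{1}{2} v_f^2$, as the text notes.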
We can also see qualitatively from this argument that inertia must increase at speeds comparable to $c$. For consistency with the postulates of relativity, the actual result of this collision must be $v'<c$. The mass $m$ is acting as though it has more than the expected resistance to the change in its state of motion. There are two equivalent ways of stating this: (a) we can say that $m$ increases with speed, or (b) we can modify the equations for energy and momentum while considering $m$ to be a constant. It doesn't fundamentally matter whether we choose a or b; it just amounts to reshuffling a certain correction factor in certain equations. Up until about 1950, a was more popular, but these days all physicists use b. So now we have task #2, which is to quantitatively fix up the dynamical formulas in Newtonian mechanics so that they are relativistically correct. There are a lot of different ways to do this. The route Einstein originally took was to demonstrate equivalence of mass and energy (Einstein 1905b). The paper is pretty readable, but if you really want to continue with this approach and develop a full treatment of momentum, in my opinion it gets a little cumbersome. A more modern approach, demonstrated in Einstein 1935, is to think in terms of four-vectors. This approach allows for a pretty compact derivation, at the expense of some abstraction. The kinematical consequences of the postulates in Einstein 1905a are summarized by the Lorentz transformation, which converts the time and space coordinates of an event $(t,x,y,z)$ into coordinates $(t',x',y',z')$ in another frame that is in motion relative to the first at a velocity $v$. It's not my purpose to rederive the Lorentz transformation here, so I'll just appeal to its properties as needed. This makes it natural to start talking about vectors $\textbf{r}$ and $\textbf{r}'$ in four dimensions. These are called four-vectors. 
We really have to throw away the old notion of a three-vector, because a three-vector like $(x,y,z)$ doesn't have any well-defined transformation properties; we can't tell what it would look like in another frame without knowing $t$. Just as Newtonian mechanics has uniform rules for operating on displacement vectors, force vectors, momentum vectors, etc., we expect that the Lorentz transformation will be applicable to all the corresponding objects in relativity. You can take this as a postulate if you like. The fundamental laws of physics are conservation laws, such as conservation of momentum. The above considerations tell us that in order to generalize conservation of momentum to relativity, we're going to have to make a four-vector out of the Newtonian three-momentum. If the law is reexpressed in terms of a four-vector, then the equation will automatically be valid regardless of what frame we're in, since both sides of the equation will transform identically. The Lorentz transformation of a zero vector is always zero. This means that the momentum four-vector of a material object can't equal zero in the object's rest frame, since then it would be zero in all other frames as well. So for an object of mass $m$, let its momentum four-vector in its rest frame be $(f(m),0,0,0)$, where $f$ is some function that we need to determine, and $f$ can depend only on $m$ since there is no other property of the object that can be dynamically relevant here. Since conservation laws are additive, $f$ has to be $f(m)=km$ for some universal constant $k$. In sensible relativistic units where $c=1$, $k$ is unitless. Since we want $\textbf{p}=m\textbf{v}$ to hold for four-vectors so as to recover the appropriate Newtonian limit for massive bodies, and since $v_t=1$ in that limit, we need $k=1$. Transforming this momentum four-vector into some other frame, we find that its timelike component is no longer $m$. 
It equals $m$ plus an expression whose low-velocity limit is the kinetic energy. We interpret this expression as the relativistic kinetic energy. We no longer have separate conservation of mass, only conservation of mass-plus-energy or "mass-energy," $E$. The Lorentz transformation always preserves the norm of a vector $\textbf{r}$, defined by $r_t^2-r_x^2-r_y^2-r_z^2$. For a body of mass $m$, the norm of the momentum four-vector will always be $m^2$, regardless of what frame we're in. The result is $$ m^2=E^2-p^2 \qquad ,$$ which is valid for both massive and massless particles. In the $m \ne 0$ case, one can then prove that $p=m\gamma v$. The mass $m$ is constant, which is the modern convention. In school textbooks that are still stuck in the 1940's, $m\gamma$ is referred to as the relativistic mass, $m$ as the rest mass.

Einstein, "On the electrodynamics of moving bodies," 1905; English translation at http://fourmilab.ch/etexts/einstein/specrel/www/
Einstein, "Does the inertia of a body depend upon its energy-content?," 1905; English translation at http://fourmilab.ch/etexts/einstein/E_mc2/www/
Einstein, "Elementary derivation of the equivalence of mass and energy," Bull. Amer. Math. Soc. 41 (1935), 223-230, http://www.ams.org/journals/bull/1935-41-04/S0002-9904-1935-06046-X/home.html

In Newtonian physics the mass of a particle of matter does not change. It is defined by $F=ma$, where $F$ is the force necessary to apply to this specific mass $m$ in order to accelerate it by an acceleration $a$. When velocities approach the velocity of light, experiments have told us that the higher the velocity of the particle, the more force must be applied for the same acceleration $a$. The theory of special relativity addresses this behavior, and it has been validated again and again by experiments.
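The invariant relation $m^2 = E^2 - p^2$ quoted above can be illustrated numerically: boost a particle's four-momentum $(E, p) = (\gamma m, \gamma m v)$ to another frame and check that the norm is unchanged. A sketch with my own sample values, in units with $c=1$:

```python
import math

def gamma(u):
    return 1.0 / math.sqrt(1.0 - u * u)

m, v = 1.5, 0.6
E, p = gamma(v) * m, gamma(v) * m * v  # timelike and spacelike components

# Lorentz boost with velocity u along the same axis (c = 1):
# E' = gamma(u) (E - u p),  p' = gamma(u) (p - u E).
u = -0.7
E2 = gamma(u) * (E - u * p)
p2 = gamma(u) * (p - u * E)

print(E**2 - p**2, E2**2 - p2**2)  # both approximately m**2 = 2.25
```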
From the link: To an observer who is not accelerating, it appears as though the object's inertia is increasing, so as to produce a smaller acceleration in response to the same force. This behavior is in fact observed in particle accelerators, where each charged particle is accelerated by the electromagnetic force. One can find the formula of the mass change in the above link. Now there is no other answer to "why" than "because that is the way nature behaves".

when an object moves with a velocity comparable to the velocity of light the (relativistic) mass changes [...] This premise appears mistaken. When and while some specific object, which is identified by some specific (intrinsic, proper, invariant) mass $m$, moves with some specific constant speed $v$ (relative to a suitable system of participants who are capable of evaluating this speed of this object, in comparison to the speed of light in vacuum $c_0$), then the so-called "relativistic mass" of this object in this trial, $m / \sqrt{ 1 - (v/c_0)^2 }$, does not change, but remains constant as well. Instead, different trials may be considered, in which the same specific object of specific (intrinsic, proper, invariant) mass $m$ moves with different speeds, such that its "relativistic mass" consequently differs from trial to trial. (The history of the notion "relativistic mass" and its limited utility in comparison to the notion of "(intrinsic, proper, invariant) mass" has already been addressed in other answers.)

Additionally to Alfred Centauri's answer I can say that the mass $m_{rel}=\gamma m$ automatically implies directional inertia, since it is not constant. Any non-constant mass causes a propulsion force. From the definition of force, $F=\frac{dp}{dt}=\frac{d(m_{rel}v)}{dt}=v\frac{dm_{rel}}{dt}+m_{rel}\frac{dv}{dt}$, we have $F_{prop}=v\frac{dm_{rel}}{dt}$. It is zero for constant mass, but not zero for non-constant mass. This is an essential part of mechanics, and is used in rocket engineering.
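The directional inertia mentioned in these answers can be made concrete: for a force collinear with the motion, $F = dp/dt = \frac{d(\gamma m v)}{dv}\frac{dv}{dt} = \gamma^3 m \, a$, so the "longitudinal" inertia is $\gamma^3 m$, not $\gamma m$. A finite-difference sketch of my own (units $c=1$, sample speed $v=0.8$):

```python
import math

def gamma(v):
    return 1.0 / math.sqrt(1.0 - v * v)

m, v, h = 1.0, 0.8, 1e-7
p = lambda v: gamma(v) * m * v  # relativistic momentum, c = 1

# Numerical dp/dv by central differences.
dp_dv = (p(v + h) - p(v - h)) / (2 * h)
print(dp_dv, gamma(v) ** 3 * m)  # longitudinal inertia gamma^3 m, not gamma m
```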
Whether or not it's gaining mass depends on how one defines mass. The "relativistic mass" is simply the total energy, with a factor of $c^2$ thrown in. I recall reading that Einstein himself was against the idea of such a concept, and is quoted as saying that the only mass one should consider is the "rest mass". I would have to agree. Relativistic mass is basically something that is only used when scientists are explaining things to non-scientists. The reason you can't reach the speed of light is not because an object is inherently changing, it's because of the relationship between relative speed and energy. Bear in mind also that, if we use relativistic mass, we are no longer justified in saying light is massless. Light has relativistic mass, because it has energy.

This is a simple way to understand mass. We know light (and EM in general) does not have a rest mass. But a standing wave or a trapped beam of light between two mirrors (as in lasers) does have a rest mass. So we conclude that rest mass is nothing more than a trapped/arrested momentum, or energy if you like, since the two are derivable from each other. The trapping can be done either by walls like mirrors or a cavity wall, or can be done by trapping in a circular motion (without the need for walls). Thus we say here that mass and relativistic mass must be equivalent to an un-trapped momentum, whereas a rest mass is equivalent to a trapped/arrested momentum. Many examples confirm this picture. First you have the annihilation and creation experiments where rest mass becomes pure energy flux and vice versa. Then you have the case of the mass of a proton being much larger than the total rest masses of the internal constituents, as the excess rest mass comes from the huge momentum of constituents moving at relativistic speeds. By the same logic, the electron itself must be no more than a trapped momentum. This of course goes well with our knowledge of the electron,
as having a real mechanical spin (as per the Einstein-de Haas experiment), and also having an internal clock, the Zitterbewegung, which is also related to the spin. Then you have the case of a double trapping, where you have rest masses with high speeds trapped in a larger structure, as in the case of the nucleus. This again makes the mass of the nucleus larger than the sum of the rest masses of the internal components. But such differences become less and less as the structure becomes large. Hot matter, for example, has a larger rest mass than cold matter, but the difference can't be measured, being very small. We also note that here we agree with Einstein's formula $E=mc^2$, since in a standing wave you have double the kinetic energy, that is, $E = 2 \cdot \tfrac{1}{2} m v^2 = m v^2 = m c^2$, as the speed in our case is that of light.

Mass in physics is a mathematical construct, and the mass of an object approaching $\infty$ as the speed of the object approaches $c$ is a mathematical consequence of the postulates of Special Relativity.
In their original paper, Storing a sparse table with O(1) worst case access time (Fredman, Komlós and Szemerédi, Proc. FOCS '82, IEEE, 1982), the authors show that a perfect hash function must exist, provided that the table size is the square of the number of elements in the key space. The key part of the proof bounds the number of possible keys that can produce a particular collision. Namely, given positive natural numbers $p$, $s < p$, $a < s$, $b < s$, where $p$ is prime, the authors claim that there are fewer than $(p-1)/s$ solutions to $$a' \equiv b' \pmod s\,,$$ where $a', b' \in \mathbb{N}_p$, $a' \equiv a\pmod p$, and $b' \equiv b \pmod p$. This is done by asserting that if $k$ is a solution, then $k(a-b)$ must be congruent, mod $p$, to exactly one of $\{s, 2s, ..., p-s, p-2s, ... \}$. I do not understand this argument. Could someone please elaborate?
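Not a substitute for the proof being asked about, but the counting claim is easy to probe empirically. For the FKS-style hash family $h_k(x) = (kx \bmod p) \bmod s$, the sketch below exhaustively counts, for every pair $a \neq b$, the multipliers $k$ that make them collide, and compares against $2\lceil (p-1)/s \rceil$ (a conservative form of the $O(p/s)$ bound; the exact constant quoted in the paper may differ, and $p = 101$, $s = 10$ are my own illustrative values):

```python
import math

p, s = 101, 10  # p prime, s < p (illustrative values, not from the paper)

def collisions(a, b):
    """Number of k in 1..p-1 with (k*a mod p) mod s == (k*b mod p) mod s."""
    return sum(1 for k in range(1, p)
               if (k * a % p) % s == (k * b % p) % s)

# The intuition from the question: collision forces k*(a-b) mod p to land in one
# of the O(p/s) residues that are 0 or p modulo s, and k -> k*(a-b) mod p is a
# bijection on 1..p-1, so only O(p/s) multipliers k can be "bad" for a fixed pair.
bound = 2 * math.ceil((p - 1) / s)
worst = max(collisions(a, b) for a in range(p) for b in range(a + 1, p))
print(worst, bound)  # even the worst pair collides for only O(p/s) of the k's
```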
Given $a, b, k, n \in \mathbb{N}$, what techniques are available for computing sums of the form:\begin{align}\sum_{i = 1}^{n} (ai^{k} \ \text{mod} \ b ) ?\end{align} NB: Here, only the summands are reduced modulo $b$, not the overall summation. I'm specifically interested in the $k = 1$ case. For example, if $a = 2$, $b = 3$ and $k = 1$, I can prove that \begin{align} \sum_{i = 1}^{n} (2i \ \text{mod} \ 3) = 3 + 2 \left \lfloor \frac{n-1}{3} \right \rfloor + \left \lfloor \frac{n-2}{3} \right \rfloor \end{align} by splitting the sum into sums over residue classes. In general, for prime $p$, one has \begin{align} \sum_{i =1}^{n} (ai \ \text{mod} \ p) = \binom{p}{2} + \sum_{k = 1}^{p-1} (k a \ \text{mod} \ p) \left \lfloor \frac{n-k}{p} \right \rfloor, \end{align} provided that $a$ and $p$ are coprime (otherwise the right side is identically $0$). Although this identity is nice, it is somewhat impractical if $p$ is large. Can the right-side be further simplified? Edit 1: The formula above continues to hold in the case that $a$ and $b$ are coprime. What can be said about such sums if $a$ and $b$ are not coprime (and $b$ does not divide $a$)? Edit 2: A slight modification of the formula above continues to hold in the case that $a$ and $b$ are not coprime. In this case, the constant is no longer $\binom{b}{2}$. What is the constant? Edit 3: Using the comment below I can answer the question about the constant asked in Edit 2, although in this case $a$ and $b$ in the relation above must be replaced with $a/\gcd(a,b)$ and $b/\gcd(a,b)$, respectively, and both terms are multiplied by $\gcd(a,b)$. The constant is then $\gcd(a,b) \binom{b/\gcd(a,b)}{2}$. Any help or hints are quite welcome!
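Both displayed identities can be sanity-checked by brute force. The following sketch verifies the $a=2$, $b=3$ closed form and the prime-modulus identity for a few parameter choices of my own (Python's floor division `//` implements $\lfloor \cdot \rfloor$ correctly for negative arguments, which matters for small $n$):

```python
from math import comb, gcd

def direct(a, b, n):
    """Brute-force sum of (a*i mod b) for i = 1..n."""
    return sum(a * i % b for i in range(1, n + 1))

# Closed form for a = 2, b = 3 stated in the question.
for n in range(1, 200):
    assert direct(2, 3, n) == 3 + 2 * ((n - 1) // 3) + (n - 2) // 3

# General identity for prime p with gcd(a, p) = 1.
def via_formula(a, p, n):
    return comb(p, 2) + sum((k * a % p) * ((n - k) // p) for k in range(1, p))

for a, p in [(2, 3), (3, 7), (10, 13)]:
    assert gcd(a, p) == 1
    for n in range(1, 100):
        assert direct(a, p, n) == via_formula(a, p, n)

print("both identities hold on the tested ranges")
```

The identity follows by counting, for each residue class $k \in \{1, \dots, p\}$, how many $i \le n$ fall in it, namely $\lfloor (n-k)/p \rfloor + 1$; the $+1$ terms sum to $\binom{p}{2}$ and the $k = p$ term vanishes since $ap \equiv 0 \pmod p$.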
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions $f$ and $g$, the cross-correlation is defined as $(f \star g)(t) \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau$ [...] That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the $Y_{t+\tau}$? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time? Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered? @tpg2114 For reducing data points for calculating the time correlation, you can run exactly the same simulation twice in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points. @DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is $\frac{d T}{d t}$ and doesn't have an axis -- I don't care what the values are. The solid blue line is the abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time. Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
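On the three correlation questions above: in `'full'` mode the output of `scipy.signal.correlate` (and `numpy.correlate`, which uses the same convention) has length $2n-1$, i.e. 399 for two length-200 signals, not 400; and the lag is recovered from the argmax by subtracting $n-1$. A minimal sketch with a synthetic signal and a known delay of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_lag = 200, 7

x = rng.standard_normal(n)
y = np.concatenate([np.zeros(true_lag), x[:-true_lag]])  # y is x delayed by 7 samples

c = np.correlate(y, x, mode="full")  # length 2*n - 1 = 399
lag = int(np.argmax(c)) - (n - 1)    # shift of y relative to x: positive means y lags x

print(len(c), lag)  # 399 7
```

As for the negative values: the raw sliding dot product is sensitive to the signals' means and signs, so the usual remedy is to subtract each signal's mean (and optionally normalize by the standard deviations) before correlating.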
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question. It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy. For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar} ~p$$ $$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$ And then define the following $$K_1=\frac 14 (p^2-q^2)$$ $$K_2=\frac 14 (pq+qp)$$ $$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$ The first part is to show that $Q\ldots$ Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics. Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay. @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual". Ie.
is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level' Others would argue it's not on topic because it's not conceptual How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the topvoted answers on both. Afternoon folks. 
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway) @DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
IWOTA 2019 International Workshop on Operator Theory and its Applications

We establish criteria for boundedness for some classes of integral operators with logarithmic singularities in weighted Lebesgue spaces for the cases $1\lt p \leq q \lt \infty$ and $1\lt q \lt p \lt \infty$. As corollaries, some corresponding new Hardy inequalities are pointed out.

Given a domain $D$ in $\mathbb{C}^n$ and $K$ a compact subset of $D$, the set $\mathcal{A}_K^D$ of all restrictions of functions holomorphic on $D$ whose modulus is bounded by $1$ is a compact subset of the Banach space $C(K)$ of continuous functions on $K$. The sequence $(d_m(\mathcal{A}_K^D))_{m\in \mathbb{N}}$ of Kolmogorov $m$-widths of $\mathcal{A}_K^D$ provides a measure of the degree of compactness of the set $\mathcal{A}_K^D$ in $C(K)$ and the study of its asymptotics has a long history, essentially going back to Kolmogorov's work on $\epsilon$-entropy of compact sets in the 1950s. In the 1980s Zakharyuta showed that for suitable $D$ and $K$ the asymptotics \begin{equation} \label{eq:KP} \lim_{m\to \infty}\frac{- \log d_m(\mathcal{A}_K^D)}{m^{1/n}} = 2\pi \left ( \frac{n!}{C(K,D)}\right ) ^{1/n}\,, \end{equation} where $C(K,D)$ is the Bedford-Taylor relative capacity of $K$ in $D$, is implied by a conjecture, now known as Zakharyuta's Conjecture, concerning the approximability of the regularised relative extremal function of $K$ and $D$ by certain pluricomplex Green functions. Zakharyuta's Conjecture was proved by Nivoche in 2004, thus settling (\ref{eq:KP}) at the same time. In this talk I will outline a new approach, developed together with Stéphanie Nivoche, for the proof of the asymptotics (\ref{eq:KP}) with $D$ strictly hyperconvex and $K$ of non-zero Lebesgue measure which does not rely on Zakharyuta's Conjecture. The approach is more direct, proceeding instead in a two-pronged fashion, by establishing sharp upper and lower bounds for the Kolmogorov widths.
The lower bounds follow from concentration results of independent interest for the eigenvalues of a certain family of Toeplitz operators, while the upper bounds follow from an application of the Bergman-Weil formula coupled with exhaustion arguments using special holomorphic polyhedra. Many problems in applied mathematics and engineering can be formulated as Fredholm integral equations of the first kind: \begin{equation} Kf(x)=\int_{a}^{b}k(x,y)f(y)dy =g(x), \label{eq:1}\end{equation} where the kernel $k(\cdot,\cdot)$ and the right-hand side $g$ are smooth real-valued functions. The determination of the solution $f$ of (\ref{eq:1}) is an ill-posed problem in the sense of Hadamard: the solution (if it exists) does not depend continuously on the data. In this study, a recently developed technique for the numerical solution of such integral equations, known as the regularized sinc-collocation method, is used. This method has been shown to be a powerful numerical tool for finding accurate solutions. In this talk, some properties of the regularized sinc-collocation method required for our subsequent development are given and are utilized to reduce the integral equation of the first kind to a system of algebraic equations. We then prove a theorem showing that the error in the approximation of the solution decays at an exponential rate. Finally, numerical examples are included to demonstrate the validity and applicability of the technique. We discuss, in the context of inverse linear problems in Hilbert space, the notion of the associated infinite-dimensional Krylov subspace and we produce necessary and sufficient conditions for the Krylov-solvability of the considered inverse problem.
A linear inverse problem $Af = g$ for $A$ a bounded linear operator on a Hilbert space $\mathcal{H}$, $g \in \mathrm{ran}A$, and $f$ a solution, has associated Krylov space \[\mathcal{K}(A,g) := \mathrm{span}\{A^ng \,|\, n \in \mathbb{N}_0\}\,.\] Krylov-solvability refers to the existence of a solution $f$ in the closure of $\mathcal{K}(A,g)$. Some aspects specifically concerning Krylov-solvability for self-adjoint operators are presented, along with operator theoretic constructions for more general classes of operator $A$. The presentation is based on theoretical results together with a series of model examples. We are interested in the solution to a boundary integral formulation of a diffusion problem in 3D involving material coefficients that are piecewise constant with respect to a partition of space into Lipschitz subdomains. We allow in particular the presence of material junctions where three or more subdomains are adjacent. We will discuss boundary integral formulations of the second kind adapted to such problems, exhibiting a natural generalization of the Neumann-Poincaré operator that arises in single interface problems. We will also comment on the continuity properties of this operator, and show how it can be used to impose transmission conditions through interfaces of a geometric partition of the space involving arbitrary junctions. Real-valued pluriharmonic functions appear naturally as the real and imaginary parts of holomorphic functions in several complex variables. This gives a tight connection between Bergman and Segal-Bargmann spaces of holomorphic and of pluriharmonic functions. We will use this connection to show how results concerning Toeplitz operators on holomorphic function spaces can be, more or less easily, transferred to results concerning Toeplitz operators on pluriharmonic function spaces. In particular, but not exclusively, we will be concerned with results on deformation quantization of pluriharmonic Toeplitz operators.
The Neumann-Poincaré operator is a boundary-integral operator associated with harmonic layer potentials. We prove the existence of eigenvalues within the essential spectrum for the Neumann-Poincaré operator for certain Lipschitz curves in the plane with reflectional symmetry, when considered in the functional space in which it is self-adjoint. The proof combines the compactness of the Neumann-Poincaré operator for curves of class $C^{2,\alpha}$ with the essential spectrum generated by a corner. Eigenvalues corresponding to even (odd) eigenfunctions are proved to lie within the essential spectrum of the odd (even) component of the operator when a $C^{2,\alpha}$ curve is perturbed by inserting a small corner. Hankel operators can be defined in two ways: either as infinite matrices of the form $\{a(j+k)\}$ or as integral operators on $L_2(\mathbb R_+)$ with integral kernels of the form $a(x+y)$. We will consider a class of maps from integral Hankel operators to Hankel matrices, which we call restriction maps. In the simplest case, such a map is simply a restriction of the integral kernel onto the integers. More generally, it is given by an averaging of the kernel with a sufficiently regular weight function. In this talk, we will describe the boundedness of certain restriction maps with respect to the operator norm and the Schatten norms. If time permits, we will also discuss the boundedness of the converse operation, the extension of a matrix to an integral kernel. This is joint work with Alexander Pushnitski. The infinite Hilbert matrix $\mathcal{H} = \left(\frac{1}{j+k+1}\right)_{j, k = 0}^\infty$ can be interpreted as a linear operator on spaces of analytic functions in the open unit disc of the complex plane by its action on their Taylor coefficients. The boundedness of $\mathcal{H}$ on the Hardy spaces $H^p$ for $1 \lt p \lt \infty$ and Bergman spaces $A^p$ for $2 \lt p \lt \infty$ was established by Diamantopoulos and Siskakis.
The exact value of the norm of $\mathcal{H}$ acting on the Bergman spaces $A^p$ for $4 \le p \lt \infty$ was shown to be $\frac{\pi}{\sin(\frac{2\pi}{p})}$ by Dostanić, Jevtić and Vukotić in 2008. The case $2 \lt p \lt 4$ remained an open problem until 2018, when Božin and Karapetrović showed that the norm has the same value on the scale $2 \lt p \lt 4$ as well. In this talk, we review some of the old results and consider the still partly open problem regarding the value of the norm on weighted Bergman spaces. The talk is based on joint work with Lindström and Wikman (Åbo Akademi). Distributed-order fractional non-local integral operators were introduced and studied by Caputo at the end of the 20th century. They generalize fractional order derivatives/integrals in the sense that such operators are defined by a weighted integral of different orders of differentiation over a certain range. The subject of distributed-order non-local operators is currently under strong development due to its applications in modeling some complex real world phenomena. Fractional optimal control theory deals with the optimization of a performance integral functional subject to a fractional control system. One of the most important results in classical and fractional optimal control is the Pontryagin Maximum Principle, which gives a necessary optimality condition that every solution to the optimization problem must verify. In our work, we extend the fractional optimal control theory by considering dynamical systems constraints depending on distributed-order fractional derivatives. Precisely, we prove a version of Pontryagin's maximum principle and a sufficient optimality condition under appropriate convexity assumptions. This research is partially supported by FCT within the R&D unit UID/MAT/04106/2019 (CIDMA). A recent generalization of the theory of fractional calculus is to allow the fractional order of the derivatives to be dependent on time.
As a result, introducing numerical methods for finding approximations of variable-order fractional integrals and derivatives of a given function is an important feature in presenting new numerical methods for solving variable-order fractional differential equations. In this talk, we consider the left Riemann--Liouville fractional integral operator of order $\alpha(t)$ of a given function $y$ on $[0,\tau]$, which is defined by \[ _0I_t^{\alpha(t)}y(t)=\frac{1}{\Gamma{(\alpha(t))}}\int_0^t(t-s)^{\alpha(t)-1}y(s)ds,\quad t>0, \] where $\Gamma(\cdot)$ is the Euler gamma function. By considering equally spaced Newton--Cotes points and a suitable set of basis functions, an approximation of the aforementioned integral is given. We derive an error bound for this approximation. In recent years, there has been much interest in limit operators. In the simplest case, given a bounded linear operator $A$ on $\ell^2(\mathbb{Z})$, we can think of $A$ as a bi-infinite matrix (with respect to the canonical basis in $\ell^2(\mathbb{Z})$). A limit operator of $A$ is then given by a strong limit of shifts of the matrix $A$ along the main diagonal with respect to a sequence tending to $\pm\infty$. One can then characterize Fredholmness and spectral properties of $A$ by studying its limit operators. In this talk we explain how to generalize the above method to rather general metric measure spaces of bounded geometry. This is joint work with Raffael Hagger. Embedded eigenvalues can often be constructed by exploiting symmetries of an operator with continuous spectrum. The symmetries induce multiple components of the continuous spectrum, and then appropriately chosen local defects create spectrally embedded eigenstates. It turns out that embedded eigenvalues due to local defects occur even in the absence of operator symmetries. The mechanism for periodic operators is the reducibility of the Fermi surface, which is a richer phenomenon than operator symmetry.
I will show how this works for multi-layer graph operators; and I am interested in this question for the Neumann-Poincaré operator, building on recent work with Wei Li on embedded eigenvalues for the NP operator for symmetric domains. The objective of this talk is to present some of the Rubio de Francia type extrapolation results in the framework of generalized weighted grand Lebesgue spaces defined on $\Omega\subseteq \mathbb R^n$ with $|\Omega|<\infty.$ Here we shall be discussing the diagonal and off-diagonal cases, as well as their applications to the study of the boundedness of the fractional Riesz potential operator and the fractional maximal operator.
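The variable-order Riemann--Liouville integral $_0I_t^{\alpha(t)}y(t)$ quoted in the talk above can be approximated with a simple product-integration rule: substitute $u = t-s$, integrate the kernel $u^{\alpha(t)-1}$ exactly on each subinterval, and sample $y$ at cell midpoints. This is a minimal sketch of my own, not the basis-function scheme of the talk:

```python
import math

def rl_integral(y, alpha, t, n=1000):
    """Approximate the left Riemann-Liouville integral 0_I_t^{alpha(t)} y(t).

    Substituting u = t - s gives (1/Gamma(a)) * int_0^t u^(a-1) y(t-u) du
    with a = alpha(t).  On each cell [u0, u1] the singular kernel u^(a-1)
    is integrated exactly, while y is frozen at the cell midpoint.
    """
    a = alpha(t)
    h = t / n
    total = 0.0
    for k in range(n):
        u0, u1 = k * h, (k + 1) * h
        w = (u1**a - u0**a) / a          # exact integral of u^(a-1) over the cell
        total += y(t - (u0 + u1) / 2) * w
    return total / math.gamma(a)

# alpha(t) = 1 reduces to the ordinary integral: int_0^1 s ds = 1/2
print(rl_integral(lambda s: s, lambda t: 1.0, 1.0))
# constant y = 1 gives t^alpha / Gamma(alpha + 1), here 2/sqrt(pi) ~ 1.1284
print(rl_integral(lambda s: 1.0, lambda t: 0.5, 1.0))
```

For constant $y$ the rule is exact (the cell weights telescope to $t^{\alpha}/\alpha$), which makes it easy to sanity-check against the closed form $t^{\alpha}/\Gamma(\alpha+1)$.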
In a Euclidean world the sum $s$ of two velocities $v$ and $u$ is such that $s = v + u$. However, in the world of special relativity that's not the case. Instead, the velocity sum $s$ is such that $s = \frac{v + u}{1+vu} \; (c=1)$. I'm trying to derive it, and below is my work so far. Any hints that would lead me forward would be great. Consider two inertial reference frames $S$ ("the rest frame") and $S'$ ("the moving frame"). In $S'$ we have a 4-velocity vector $\left(c \frac{d {x^0}'}{d \tau}, \frac{d {x^1}'}{d \tau}, \frac{d {x^2}'}{d \tau}, \frac{d {x^3}'}{d \tau}\right)$. I'm letting ${x^\mu}$ denote $x^\mu$ coordinates in $S$ and ${x^\mu}'$ denote $x^\mu$ coordinates in $S'$. During $d \tau$, a particle with this 4-velocity travels from $(0, 0, 0, 0)$ to $\left(c\, d {x^0}', d {x^1}', d {x^2}', d {x^3}'\right)$. To simplify the problem we're now going to assume that the relative velocity of $S'$ ("the moving frame") is only in the x-direction, and the same is assumed about the 4-velocity vector in $S'$. Thus the event simplifies to $\left(c\, d {x^0}', d {x^1}', 0, 0 \right)$. Now we transform the ${x^0}'$ term into $S$ (the inverse transformation, so $+v$): $d x^0 = \gamma(d {x^0}' + v\, d {x^1}')$ And the same for the ${x^1}'$ term: $d x^1 = \gamma (d {x^1}' + v\, d {x^0}')$ So therefore, in $S$, the event is given by $(\gamma(d {x^0}' + v\, d {x^1}'), \gamma (d {x^1}' + v\, d {x^0}'), 0, 0)$ Now, here's my problem. In order to write an expression for the change in distance over time (that's the velocity) in $S$ coordinates, I'll need to know how to transform $d \tau$ into $S$. I already know ${x^1}$ in $S$ in terms of $S'$ coordinates, but I don't know how ${x^0}$ relates to $d \tau$. If I knew that, I could simply take $\frac{d {x^1}}{d {x^0}}$ and get an expression for the velocity in $S$. In other words, I need to know the coordinate of $d \tau$ in $S$.
If I was unclear about something, please don't downvote but leave a comment and ask instead and I'll correct any mistakes or unclear formulations.
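For what it's worth, a sketch of one way to close the gap: no transformation of $d\tau$ is actually needed, because the coordinate velocity in $S$ is the ratio $dx^1/dx^0$ and $d\tau$ cancels out of it. Using the inverse Lorentz transformation (from $S'$ back to $S$, which carries $+v$):

```latex
\begin{align*}
  d x^0 &= \gamma\bigl(d {x^0}' + v\, d {x^1}'\bigr), \qquad
  d x^1  = \gamma\bigl(d {x^1}' + v\, d {x^0}'\bigr),\\
  u &= \frac{d x^1}{d x^0}
     = \frac{d {x^1}' + v\, d {x^0}'}{d {x^0}' + v\, d {x^1}'}
     = \frac{u' + v}{1 + u' v},
  \qquad u' := \frac{d {x^1}'}{d {x^0}'} \quad (c = 1).
\end{align*}
```

Dividing numerator and denominator by $d{x^0}'$ in the middle step, with the $\gamma$ factors cancelling, gives the quoted composition law.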
Fund managers are acting in a highly stochastic environment. What methods do you know to systematically separate skillful fund managers from those who were just lucky? Every idea, reference, or paper is welcome! Thank you! Larry Harris has a chapter on performance evaluation in Trading and Exchanges. He states that over a long period of time, a skilled asset manager will consistently have excess returns, whereas a lucky one will be expected to have random and unpredictable returns. Thus, we start with the portfolio's market-adjusted return standard deviation: \begin{equation} \sigma_{adj} = \sqrt{\sigma^2_{port} + \sigma^2_{mk} - 2\rho\sigma_{port}\sigma_{mk}} \end{equation} where $\rho$ is the correlation between the market and portfolio returns. For a sample size $n$ (generally the number of years), the average excess returns, and the adjusted standard deviation from above, we have a t-statistic: \begin{equation} t = \frac{\overline{R_{port}} - \overline{R_{mk}}}{\frac{\sigma_{adj}}{\sqrt{n}}} \end{equation} Now we can determine the probability that the manager's excess returns were luck by comparing this t-statistic against the t-distribution with $n - 1$ degrees of freedom (the one-sided tail probability from its CDF). The lower the probability, the more we can believe the manager's excess returns were from skill. Below is some code that I used recently to illustrate luck (and con-games). The story went like this: I'll dream up your lucky lottery number for 2010.....let's say it's 20639. The number doesn't matter because we're going to use that for the seed of a random number generator.
Then, I'll take the first three digits of your lucky lottery number (206) and reverse them (-.602) and use that as a multiplier on that random number. Since the S&P500 started off in 2010 at about 1100, I'll start the model at that level. Here is the code:

library(quantmod)

# Read 2010 S&P500 data from Yahoo
tem <- as.zoo(getSymbols("^gspc", from="2010-01-01", to="2011-01-01",
                         auto.assign=FALSE, src="yahoo"))

# Build a lucky lottery model based on the seed
set.seed(20639)  # Your lucky lottery number
yt <- tem$GSPC.Adjusted
coredata(yt) <- 1100 * exp(-0.602 * cumsum(rnorm(length(yt), sd=0.0113)))

# Plot the results
plot(tem$GSPC.Adjusted, type="l", main="S&P500 for Year 2010",
     ylab="S&P500", xlab="", lwd=3, col="darkgray")
lines(yt, lwd=2, col="red")
legend("bottomright", legend=c("Actual S&P500", "Dredged S&P500"),
       lwd=c(3, 2), col=c("darkgray", "red"))

This is all meaningless BS, but it took a while for some people to untangle luck from the con. The really hard part is to convince yourself that your "skill" actually found something real. Something "real" is a very rare event in the world of investing. In order to have a shot at separating skill from luck, you need a sense of what luck looks like. I think the best chance of understanding luck is to use random portfolios. See, for instance: http://www.portfolioprobe.com/about/random-portfolios-in-finance/ Read Fooled by Randomness by Nassim Taleb. In a nutshell, he says that you can only tell the difference by understanding the risks that were taken. Lucky investors can win for many years before blowing up. Even if he doesn't blow up, there is no way to know what might have happened if the risks had turned out badly. Take a look at White's Reality Check.
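The t-statistic approach from the first answer is easy to sketch. Note that the market-adjusted $\sigma_{adj}$ above is just the standard deviation of the excess-return series. A minimal Python version with synthetic return series; all names are my own, and I use the normal tail as a stand-in for the t-distribution tail:

```python
import math
import random
import statistics

def skill_p_value(port, mkt):
    """One-sided p-value that the mean excess return over the market is luck.

    sigma_adj from the answer above equals the standard deviation of the
    excess-return series; the standard normal survival function stands in
    for the t tail (adequate for the sample sizes discussed here).
    """
    excess = [p - m for p, m in zip(port, mkt)]
    n = len(excess)
    t = statistics.mean(excess) / (statistics.stdev(excess) / math.sqrt(n))
    return 0.5 * math.erfc(t / math.sqrt(2))  # P(Z > t) for standard normal Z

random.seed(0)
mkt = [random.gauss(0.07, 0.15) for _ in range(30)]
lucky = [m + random.gauss(0.0, 0.10) for m in mkt]     # no edge, just noise
skilled = [m + random.gauss(0.05, 0.02) for m in mkt]  # small consistent edge
print(skill_p_value(lucky, mkt), skill_p_value(skilled, mkt))
```

The "skilled" series, with a small but consistent edge, gets a vanishing p-value, while the noisy one usually does not; over short samples a lucky manager can of course still sneak through.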
Another very crude way would be to calculate a "skill score" (from The Mathematics of Technical Analysis, p. 325) $$\tt{skill\ score} = \frac{SKILL\_correct - NOSKILL\_correct}{Total\ decisions - NOSKILL\_correct}$$ SKILL_correct: the profitable trades NOSKILL_correct: randomly assigned trades that were profitable Total decisions: number of trades If this number is 0 or negative, it indicates that you are mostly dealing with a lucky investor, and not a skilled one. I was going to suggest that you use alpha, which is the measure of a manager's excess return beyond their benchmark. But here is an alternative view which is quite interesting. Check out the last chapter in Grinold's classic Active Portfolio Management (2nd Ed) for a discussion on separating luck from skill. I remember an article from graduate school that describes a methodology for measuring the true timing ability of a money manager. I don't remember the name of the article nor the name of the author; however, I do remember some of the details. Maybe someone else has run across it and would be kind enough to post the appropriate reference. Let's assume that a manager has the ability to be either in cash, earning the risk-free rate, or in a long position in a basket of stocks. If the money manager had superior timing ability, he would be in the basket when the basket was returning more than the risk-free rate, and he would be in a cash position when the basket was returning less than the risk-free rate. What you basically have is a return profile that looks a lot like the payoff of a call option. If you plot market return on the x-axis and manager return on the y-axis, the return should be flat at the risk-free rate for everything to the left of the risk-free rate on the x-axis. At the risk-free rate on the x-axis, the return should be a 45-degree line up and to the right of the diagram.
Over time, you measure manager return against market return, and if he is any good, you should see the call option payoff diagram being roughly drawn out. Martijn Cremers and Antti Petajisto have a series of papers using the concept of "Active Share," a new measure of active portfolio management which represents the share of portfolio holdings that differ from the benchmark index holdings, to evaluate mutual fund managers. They find that the most active stock pickers have outperformed their benchmark indices even after fees and transaction costs. In contrast, closet indexers or funds focusing on factor bets have lost to their benchmarks after fees. Bottom line: when separating skill from luck, concentrate on those managers that actually try to differentiate themselves from the crowd. Also, the more fine-grained your strategy is, the more likely it is to represent skill over luck. My 2c worth. Experience tells me that the better ways to get a feel for whether their strategy is based on something more than luck include: 1) "getting to know your traders" -- have a chat, pick their brains, try to get some insight into their methods; 2) seeing how hard the market has been -- check whether you have just been part of a bull market which basically made pretty much every strategy a winner, and discount performance accordingly. I would be skeptical of anything too mathematical for measuring outperformance, since even a badly-designed strategy may have performed well. I see an analogy with driving: many drivers will successfully get from A to B, but you'll get a much clearer picture of the sorts of risks a driver takes by sitting in the car with them, rather than trying some mathematical analysis of the race. Make sense? A very well thought through exposition on the matter is given in this paper: A Consultant's Perspective on Distinguishing Alpha from Noise by John R. Minahan. It combines a lot of wisdom and common sense that sometimes seems to get lost in the process...
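Cremers and Petajisto's Active Share, mentioned above, has a simple definition: half the sum of absolute weight differences between fund and benchmark holdings. A sketch with made-up weights (all tickers and numbers are invented for illustration):

```python
def active_share(fund, bench):
    """Active Share = 1/2 * sum over holdings of |w_fund - w_bench|.

    0 means a pure index clone, 1 means no overlap with the benchmark.
    Inputs are dicts mapping ticker -> portfolio weight (weights sum to 1).
    """
    tickers = set(fund) | set(bench)
    return 0.5 * sum(abs(fund.get(t, 0.0) - bench.get(t, 0.0)) for t in tickers)

bench = {"AAA": 0.5, "BBB": 0.3, "CCC": 0.2}
closet_indexer = {"AAA": 0.45, "BBB": 0.35, "CCC": 0.2}
stock_picker = {"DDD": 0.6, "AAA": 0.4}

print(active_share(closet_indexer, bench))  # ~ 0.05: barely deviates
print(active_share(stock_picker, bench))    # ~ 0.6:  mostly off-benchmark
```

The closet indexer scores near zero, the concentrated stock picker scores high; per the papers cited above, it is the latter group where outperformance net of fees was found.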
So basically my tables work fine with the tabular format, but in some cases I need to use tabularx because they are too large to fit within the normal text width. However, the rows become infinitely long, due to an erroneous extra column it seems? EDIT: Changed the table to my actual table. Image1 shows the same code using tabular and without the \noindent\makebox. As you can see, the table is too wide and floats slightly off to the right, hence I try using tabularx to make it wider, such that it can be centred on the page. However it gives weird unending rows. In image1, the table is wider than the text width, though at least it correctly ends. Hence I try using tabularx in the code below, which results in image2: Code:

\begin{table}[!h]\label{table-A_3_posthoc}
\centering
\small
\caption[One-way ANOVA tests for scenarios 14, 16, 18 and 19-22]{One-way ANOVA tests showing the variation between sample means in three different internetwork topologies. Comparisons without a significant level (SL) were deemed statistically insignificantly different i.e. $p > 0.05$.\\}
\noindent\makebox[\textwidth]{%
\begin{tabularx}{1.5\textwidth}{l|r|r|r|r|r|r|r|r}
\toprule
One-way ANOVA & SL($\langle k \rangle$) & SL($\overline{C_E}$) & SL($D$) & SL($\ell$) & SL($G_E$) & SL($C_G$) & SL($\Phi$) & SL($\Gamma$)\\
\midrule
14 and 19 & 0.010 & & 0.010 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001\\
16 and 20 & 0.001 & & 0.010 & 0.001 & 0.001 & 0.050 & & \\
18 and 21 & 0.001 & 0.050 & 0.050 & 0.001 & 0.001 & 0.001 & 0.010 & 0.010\\
\bottomrule
\end{tabularx}}
\end{table}

Preamble:

\usepackage{parskip}
\usepackage[margin=1.4in]{geometry}
\usepackage{amsmath,amssymb} % For using * for no eq. numbers
\usepackage{amsthm}
\usepackage[]{graphicx} % For inserting figures etc
\usepackage{booktabs} % For \toprule, \midrule and \bottomrule
\usepackage{url}
\usepackage{titlesec}
\usepackage{tabularx}
\usepackage{verbatim}
\usepackage[toc,page]{appendix}
\usepackage[nottoc,notlot,notlof]{tocbibind}
\usepackage{etoolbox}
\usepackage{adjustbox}
\usepackage{framed}
\usepackage{cite}
\usepackage{subfigure}
\usepackage{algpseudocode}
\usepackage{algorithm}
\usepackage{algorithmicx}
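A likely culprit, offered as a hedged guess rather than an answer from the thread: tabularx distributes the requested width only among X columns, and the preamble l|r|r|... declares none, so the environment has nothing to stretch. A minimal sketch of the usual fix (the right-aligned R column type is my own naming):

```latex
\usepackage{tabularx,booktabs} % tabularx loads array, which provides \newcolumntype
\newcolumntype{R}{>{\raggedleft\arraybackslash}X} % right-aligned, stretchable

\begin{table}[!h]
\centering\small
% note: \label should come after \caption, otherwise \ref points at the wrong counter
\caption[One-way ANOVA tests]{...}\label{table-A_3_posthoc}
\begin{tabularx}{\textwidth}{l|*{8}{R}}
\toprule
One-way ANOVA & SL($\langle k \rangle$) & SL($\overline{C_E}$) & SL($D$) &
  SL($\ell$) & SL($G_E$) & SL($C_G$) & SL($\Phi$) & SL($\Gamma$)\\
\midrule
14 and 19 & 0.010 & & 0.010 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001\\
\bottomrule
\end{tabularx}
\end{table}
```

With all eight data columns stretchable, the table fills \textwidth exactly and the rows terminate normally, so the \noindent\makebox wrapper and the 1.5\textwidth target should no longer be needed.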
I am interested in the problem of finding a real root of a polynomial equation $f(x)=0$ where $f(x)=\sum_{i=0}^n a_ix^i$. Is it possible to give a reduction, i.e., to compute a different polynomial $g$ in polytime such that $f$ has a real root iff $g$ has a real root in $[0,1]$? Not sure if this is the right SE forum for it, but the answer is yes. I'll give the reduction in two steps: $f(x)$ has a root iff $h(x)$ has a root in $[-1,1]$ (scaling, i.e. $h(x)=f(\alpha \cdot x)$). $h(x)$ has a root in $[-1,1]$ iff $g(x)$ has a root in $[0,1]$ (simply define $g(x)=h(2x-1)$, which maps $[0,1]$ onto $[-1,1]$). Let's prove 1: Let's assume $f(x)$ is of degree $n$ and write it as $f(x) = x^n - a_{n-1} x^{n-1} - ... - a_0$. Since $\alpha \geq 1$, all of the roots of $f$ in $[-1,1]$ will end up in $[-1,1]$ in $h$. Let $x$ be a root of $f$ such that $|x|>1$. This means $x^n = a_{n-1} x^{n-1} + ... + a_0$. Since $|x|>1$, we have $|x|^n > |x|^{n-1} > \dots > 1$, hence $$|x| \le \text{max}(1, |a_{n-1}| + ... + |a_0|).$$ Define $\alpha = \text{max}(1, |a_{n-1}| + ... + |a_0|)$ and you're done. Here is an alternative to the answer by R B; it is somewhat simpler, but has the disadvantage of an increase in degree. Simply take $g(x) = x^{2n}f(x)f(-x)f(1/x)f(-1/x)$.
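The two-step reduction can be sketched numerically: compute the Cauchy-style root bound $\alpha$, set $h(x)=f(\alpha x)$, then map $[0,1]$ onto $[-1,1]$ via $x\mapsto 2x-1$. The function names and the evaluation-based representation below are my own:

```python
def eval_poly(coeffs, x):
    """Horner evaluation; coeffs = [a_0, a_1, ..., a_n]."""
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def reduce_to_unit_interval(coeffs):
    """Return g such that: f has a real root  iff  g has a root in [0, 1]."""
    an = coeffs[-1]
    alpha = max(1.0, sum(abs(c / an) for c in coeffs[:-1]))  # bound on |roots|
    # g(t) = h(2t - 1) with h(x) = f(alpha * x)
    return lambda t: eval_poly(coeffs, alpha * (2.0 * t - 1.0))

g = reduce_to_unit_interval([-4.0, 0.0, 1.0])  # f(x) = x^2 - 4, roots +-2
print(g(0.75))  # prints 0.0: alpha = 4, so t = 0.75 maps to x = 2, a root of f
```

Dividing by the leading coefficient first makes the monic bound from the proof apply to an arbitrary $a_n \neq 0$.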
This work presents the concept of an $n$-groupoid $ \Gamma^n $ and $n$-characters $ \chi_n $ on $n$-groupoids as complex-valued maps from spaces of different classes of morphisms satisfying the condition $ \chi_n (\psi \circ_k \varphi) = \chi_n (\psi) + \chi_n (\varphi) $ for any possible compositions. A sequence of spaces of $n$-characters and morphisms between them is constructed and its exactness is shown. This construction has important applications to describing derivations in group algebras. In particular, this approach allows us to study the algebra of outer derivations from a new point of view, and also to construct some interesting examples. The work was carried out under the guidance of A. A. Arutyunov and is based on ideas of A. S. Mishchenko. In this contributed talk, we study a one-step iterative scheme to approximate common fixed points of two generalized non-expansive mappings in uniformly convex Banach spaces, and using the same scheme we prove some weak and strong convergence results for such mappings. Further, we establish some weak and strong convergence results for a finite family of generalized non-expansive mappings to approximate common fixed points using the proposed algorithm in uniformly convex Banach spaces. As an application, we approximate the solution of an image recovery problem in the Banach space setting. To support our results we illustrate some numerical examples. Our results are new and generalize several relevant results in the literature. In this talk we present some bounds for the numerical radius of a bounded linear operator on a complex Hilbert space which improve on the existing bounds. We would also like to present a Haagerup-de la Harpe type inequality for the numerical radius. As an application we estimate the zeros of a given polynomial. This work is jointly done with Prof K. Paul and Mr P. Bhunia.
The notion of a linear Hahn-Banach extension operator was first studied in detail by Heinrich and Mankiewicz (1982). Previously, J. Lindenstrauss (1966) studied similar versions of this notion in the context of non-separable reflexive Banach spaces. Subsequently, Sims and Yost (1989) proved the existence of linear Hahn-Banach extension operators via interspersing subspaces in a purely Banach space theoretic setup. In this paper, we study similar questions in the context of Banach modules and module homomorphisms, in particular, Banach algebras of operators on Banach spaces. Based on Dales, Kania, Kochanek, Koszmider and Laustsen (2013), and also Kania and Laustsen (2017), we give complete answers for reflexive Banach spaces and the non-reflexive space constructed by Kania and Laustsen from the celebrated Argyros-Haydon space with few operators. In this work, the concept of $n$-power-hyponormal operators on a Hilbert space defined by Messaoud Guesba and Mostefa Nadir in (M. Guesba and M. Nadir, On operators for which $T^2\geq -T^{*2}$; The Australian Journal of Mathematical Analysis and Applications) is generalized when an additional semi-inner product is considered. This new concept is described by means of oblique projections. For a real normed space $(X,\|\cdot\|)$ and for $x,y\in X$, we consider the Birkhoff orthogonality relation (cf. [1]): $$ x\bot_{B}y\quad\Longleftrightarrow\quad \forall\,\lambda\in\mathbb{R}:\quad \|x+\lambda y\|\geq\|x\|. $$ An approximate version of this orthogonality can be defined (cf. [2]) by: $$ x\bot^{\hspace{-0.2em}\varepsilon}_{B}y\quad\Longleftrightarrow\quad \forall\,\lambda\in\mathbb{R}:\quad \|x+\lambda y\|^2\geq\|x\|^2-2\varepsilon\|x\|\,\|\lambda y\| $$ with $\varepsilon\in [0,1)$. Moreover, the following characterisation is at our disposal (cf. [3]): $$ x\bot^{\hspace{-0.2em}\varepsilon}_{B}y\quad\Longleftrightarrow\quad \exists\, z\in \operatorname{Lin}\{x,y\}:\quad x\bot_{B}z,\quad \|z-y\|\leq\varepsilon\|y\|.
$$ We will discuss the above notions, further characterizations and their applications, in particular in operator theory. We will also consider the approximate symmetry of the Birkhoff orthogonality, i.e. the property: $$ x\bot_{B} y\quad \Longrightarrow\quad y\bot^{\hspace{-0.2em}\varepsilon}_{B} x,\qquad x,y\in X, $$ as well as its connections with geometrical properties of the considered space (cf. [4]). The mathematical theory of quantum communication science is generally studied in the context of Hilbert spaces. The main resource in such communications is quantum entanglement. The basic unit of quantum information is the qubit, which is a member of a Hilbert space. Entangled states belong to tensor products of appropriate Hilbert spaces. This new paradigm in technology was started in the work of Bennett et al. (Phys. Rev. Lett. 70 (1993) 1895), which is known as the teleportation protocol. In the present context we review the development following the above-mentioned work and, against that background, present a protocol for preparing a three-qubit state at a remote location. It is finally accomplished by applications of tensor products of appropriate unitary operators followed by a CNOT operation. Given two Banach spaces $X$ and $Y$, let $\mathcal{L}(X, Y )$ denote the vector space of operators acting between them; its derived functor is the one that assigns to each couple $X, Y$ the vector space $\operatorname{Ext}(X, Y)$ of exact sequences $0 \to Y \to \Box \to X \to 0$ modulo equivalence; let us agree that the second derived functor will be called $\operatorname{Ext}^2(X, Y)$. Several important Banach space problems and results adopt the form $\operatorname{Ext}(X, Y) = 0$ (or $\operatorname{Ext}(X, Y) \neq 0$). For instance, a basic Banach space question is whether $\operatorname{Ext}(X, Y) = 0$ for a given couple of Banach spaces $X$, $Y$. Similar questions for $\operatorname{Ext}^2$ have not been treated.
Let us write $\operatorname{Ext}^2(X, Y)=0$ to mean that all elements $FG$ of $\operatorname{Ext}^2(X, Y)$ are $0$. Palamodov's Problem 6 in [2] asks: is $\operatorname{Ext}^2(\cdot,E)=0$ for any Fréchet space? A solution to Palamodov's problem in the category of Fréchet spaces was provided by Wengenroth. Let us answer it in the negative, even in the domain of Banach spaces. Perhaps the most interesting situation is the Hilbert space case: is $\operatorname{Ext}^2(\ell_2, \ell_2)=0$? A few partial results can be obtained for this question. The first one establishes an unexpected connection between homology and the study of bilinear forms [1]: Theorem. Let $X$ be a Banach space and let $Q: \ell_1(\Gamma)\to X$ be a quotient map. Then $\operatorname{Ext}^2(X, X^*)=0$ if and only if every bilinear form defined on $\ker Q$ can be extended to a bilinear form on $\ell_1(\Gamma)$. The second result connects the $\operatorname{Ext}^2$ problem with the nature of subspaces of $\ell_1$. Precisely, Theorem. Let $X$ be a separable Banach space and let $q: \ell_1\to X$ be a quotient map. Joint work with Jesús M F Castillo (University of Extremadura). This work was supported by projects MTM2016-76958-C2-1-P and IB16056 of Junta de Extremadura. Birkhoff-James orthogonality is a generalization of Hilbert space orthogonality to normed spaces. In a given normed space $\mathscr X$, an element $x$ is said to be Birkhoff-James orthogonal to another element $y$ if $$\|x+\lambda y\|\geq \|x\| \text{ for all } \lambda \in \mathbb C.$$ We discuss characterizations for this orthogonality to hold when $\mathscr X$ is a $C^*$-algebra. These characterizations give rise to interesting distance formulas for an element $a$ of $\mathscr X$ to the one dimensional subalgebra $\mathbb C\ b$, generated by an element $b$.
More generally, orthogonality to a subspace can be defined, and subsequently distance formulas from $a$ to a subspace $\mathscr B$ of $\mathscr X$ can be obtained, when a best approximation from $a$ to $\mathscr B$ exists. Here we present several necessary and sufficient conditions for the existence of Hermitian positive definite solutions of nonlinear matrix equations of the form $X^s+A^*X^{-t}A+B^*X^{-p}B = Q$. We then use this idea to solve the pair of nonlinear matrix equations \begin{align*} & X^{s_1}+A^*X^{-t_1}A+B^*Y^{-p_1}B=Q_1\\ & Y^{s_2}+A^*Y^{-t_2}A+B^*X^{-p_2}B=Q_2\end{align*} where $s, t, p, s_1, t_1, p_1, s_2, t_2, p_2 \geq 1$; $A, B$ are nonsingular matrices and $Q, Q_1, Q_2$ are Hermitian positive definite matrices. Using matrix inequalities, we first give some necessary conditions for the existence of a Hermitian positive definite solution. Then we provide sufficient conditions for the existence and uniqueness of the solution. Finally, we give some examples. The metric $d(A,B)=\left[\frac{1}{2}(tr\, A+tr\, B)-tr(A^{1/2}BA^{1/2})^{1/2}\right]^{1/2}$ on the manifold of $n\times n$ positive definite matrices arises in various optimisation problems, in quantum information and in the theory of optimal transport. It is of interest in differential geometry, as it is the distance function corresponding to a Riemannian metric. We study some fundamental properties of this metric, and discuss the barycentre of positive definite matrices with respect to it. This is based on joint work with Rajendra Bhatia and Yongdo Lim. We completely characterize Birkhoff-James orthogonality with respect to the numerical radius norm in the space of bounded linear operators on a complex Hilbert space. As applications of the results obtained, we estimate lower bounds of the numerical radius for $n\times n$ operator matrices, which improve on and generalize existing lower bounds. We also obtain a better lower bound of the numerical radius for an upper triangular operator matrix.
This is joint work with Professor Kallol Paul and Mr. Jeet Sen, arXiv:1903.06858v1 [math.FA] 16 Mar 2019. In this talk we consider certain important metrics on the positive definite cones in $C^\ast$-algebras: the Thompson part metric and the Bures-Wasserstein metric. We describe the precise structures of the corresponding surjective isometries. More general generalized distance measures, especially certain kinds of quantum relative entropies, are also considered and the related preserver transformations are characterized. The notion of amenability arose in group theory. Its origin can be traced back to the Banach-Tarski paradox. Later on, Barry Johnson proved that a locally compact group $G$ is amenable if and only if its convolution algebra $L^1(G)$ satisfies a certain cohomological property. This gave rise to an abstract definition of the so-called amenable algebra and started a vast development of the theory of amenable Banach algebras. Further research, through functorial language, led Taylor, Johnson and Helemskii to a definition of an amenable topological algebra. In this talk we will focus, after recalling the necessary definitions, on a special class of Fréchet algebras, the so-called Köthe echelon algebras. These are sequence spaces of the form \[\lambda_p(A):=\big\{x\in\mathbb{C}^{\mathbb{N}}\colon\,\,\|(x_ja_n(j))_{j\in\mathbb{N}}\|_{\ell_p}<\infty\,\,\text{for all}\,\,n\in\mathbb{N}\big\}\] if $1\leqslant p\leqslant\infty$ and \[\lambda_0(A):=\big\{x\in\mathbb{C}^{\mathbb{N}}\colon\,\,\lim_{j\to\infty}x_ja_n(j)=0\,\,\text{for all}\,\,n\in\mathbb{N}\big\}\] which, additionally, possess the structure of an algebra. We will provide natural examples of such objects (e.g. the Hadamard algebra) and present sketches of proofs of the following results. Theorem 1. Let $1\leqslant p<\infty$. TFAE: (i) $\lambda_p(A)$ is amenable, (ii) $\lambda_p(A)$ is contractible, (iii) $\lambda_p(A)$ is unital, (iv) $\lambda_p(A)$ is nuclear and $A$ is bounded (i.e.
$a_n\in\ell_{\infty}$ for all $n\in\mathbb{N}$). Theorem 2. TFAE: (i) $\lambda_0(A)$ is amenable, (ii) $\lambda_{\infty}(A)$ is amenable, (iii) $\lambda_{\infty}(A)$ is unital, (iv) $A$ is bounded. The proofs require a new approach; the arguments from the Banach algebra case cannot be adapted automatically. The second result, in particular, requires genuinely Fréchet-algebra techniques. The study of extreme contractions between Banach spaces is a classical area of research in the geometry of Banach spaces. The norm attainment set of a bounded linear operator between Banach spaces plays a very important role in the study of extreme contractions. We will explore the connection between an extreme contraction and its norm attainment set when the domain space is a finite-dimensional polygonal Banach space. In recent times, the study of the norm attainment set of a bounded linear operator between Banach spaces has been a topic of considerable interest and development. In this talk, I would like to explore the various facets of this problem, including the case of bounded linear operators between Hilbert spaces and Banach spaces. We show that it is possible to completely characterize Euclidean spaces among Minkowski spaces in terms of the operator norm attainment set. We further explore the norm attainment set of a bounded linear operator between Banach spaces. Using the concept of Birkhoff-James orthogonality and semi-inner-products in Banach spaces, we completely characterize the operator norm attainment set in the setting of Banach spaces. If time permits, we will also briefly mention the various areas of application of the norm attainment set of a bounded linear operator, including the study of extreme contractions.
Let $C_{2\pi}$ be the space of all $2\pi$-periodic continuous real functions defined on $\mathbb{R}$, equipped with the sup norm $\Vert \cdot \Vert$, and let $C^r_{2\pi}$ be the space of all $2\pi$-periodic functions which have a continuous derivative of order $r$. In some applications involving the modeling of data collected on the surface of the human brain, the problem of approximately reconstructing periodic functions arises. In many cases the data are taken at points uniformly distributed on the surface. In particular, the following problem was studied previously. For each positive integer $N>1$ set $$x_{j,N}=\frac{2\pi j}{N},\qquad j=0,\cdots,N-1$$ and suppose that the values $f(x_{j,N})$ of a function $f\in C_{2\pi}$ are known. We want to construct a trigonometric polynomial $T_n(f)$ of degree $n=n(N)$ such that: A regularization method for the polynomial approximation of a function from its approximate values at fixed nodes was proposed by other authors; they gave explicit expressions for the optimal number of nodes in terms of the original error by using some properties of positive linear operators. It is known that positive linear operators are saturated. If we want to obtain a higher rate of convergence, then other approximation methods should be used. For instance, several authors have considered linear combinations of positive linear operators, but their results cannot be used to present numerical algorithms because they are given in terms of unknown constants. Moreover, for the case of periodic functions the operators studied are of convolution type, while we are looking for discrete operators. In this talk we improve the previous results by using a two-term linear combination of positive linear operators. We show how to construct, out of a certain basis, a second biorthogonal set with similar properties. Finally we apply the procedure to coherent states and we consider a simple application of our construction to pseudo-hermitian quantum mechanics.
We define the angle of a bounded linear operator $A$, along an unbounded path emanating from the origin and use it to characterize range-kernel complementarity. In particular we show that if $0$ faces the unbounded component of the resolvent set, then $X=R(A)\oplus N(A)$ if and only if $R(A)$ is closed and some angle of $A$ is less than $\pi$.
EDIT: I moved the full code to my github page so the post can be read more easily. I am writing a script to take the Faber approximation approach outlined in Hassan Fahs's paper (free access) and apply it to the Liouville-von-Neumann equation to propagate the density matrix $\rho$, with some inspiration from the scripts found here for general structure. Currently the approach is stuck at the approximation of the exponential. The idea behind the propagation scheme is to compute the update step as: $$\rho_{n+1}(\tau) = \exp(\mathcal{L}\tau)\rho_n = \sum_{m=0}^{\infty}c_m(\tau) F_m(\mathcal{L})\rho_n.$$ As we can see, the Faber polynomials are matrix valued, and perhaps, if you have taken a look at the script or the paper, you will see that the spectrum of $\mathcal{L}$ has to be scaled by a factor I will simply refer to as $c$ in order to preserve stability, which in turn means the time step has to be adapted: $c\tau =\tilde{\tau}$. So when we approximate the exponential, our propagation scheme becomes: $$\rho_{n+1}(c\tau)=\rho_{n+1}(\tilde{\tau}) = \exp\left(\frac{\mathcal{L}}{c}\,c\tau\right)\rho_n = \sum_{m=0}^{\infty}c_m(\tilde{\tau}) F_m\left(\frac{\mathcal{L}}{c}\right)\rho_n.$$ The matrix-valued Faber polynomial recurrence relation for an elliptic domain in my case is: $$F_{m+1}(\mathcal{L})\rho(0) = (\mathcal{L}-b_0\mathcal{I})F_m(\mathcal{L})\rho(0) - b_1 F_{m-1}(\mathcal{L})\rho(0).$$ $\mathcal{L}$ denotes the Liouville superoperator. Obviously, $\mathcal{L}$ is a function of the density matrix. After initialising a basic density matrix and computing the factors as in the paper above, I tried to implement the algorithm as outlined by Dr. Fahs. But before trying to compute the full scheme, I simply wanted to see whether the approximation of the exponential $$\exp(\mathcal{L}_{sc}\tilde{\tau})= \sum_{m=0}^{\infty}c_m(\tilde{\tau}) F_m(\mathcal{L}_{sc})$$ was correct.
In order to do so, I tried to compare my solution over 100 iterations with the standard diagonalisation approach. The resulting picture is created by plotting the third row of a matrix whose columns are filled with the main diagonal of the respective matrix. The blue line is the diagonalisation approach, the orange one mine. Oddly, my approximation stays around 1 for the entire execution of the script. I've traced the issue to the polynomial recurrence relation: $F_1,\dots,F_M$ all depend on the factors of the Laurent expansion $b_0, b_1$ and the scaled Liouvillian $\mathcal{L}_{sc}$. They barely change, making me wonder if the algorithm for the approximation mentioned above is correct. Here is the code snippet for the computation of the Faber coefficients and the matrix-valued polynomial recurrence relation, as well as the summation of all the terms:

% Faber approximation of the matrix exponential
function z = faber(L, dt, N)
    % Faber coefficients for elliptic domain
    c_m = @(dt_tilde, m) (((-1i/sqrt(b_1))^m)*exp(dt_tilde*b_0)* ...
        besselj(m, 2*dt_tilde*sqrt(-b_1)));

    % establish polynomial truncation order so that M > e*sf*dt
    M = 1;
    while true
        orderinterator(M) = double(c_m(dt_tilde, M));
        if abs(orderinterator(M)) < 10e-15
            break
        else
            M = M+1;
        end
    end

    % Compute time dependent Faber coefficients of order M for elliptic domain
    % initialize CM
    CM = zeros(M+1,1);
    for m = 0:M
        CM(m+1) = c_m(dt_tilde, m);
    end

    % Compute matrix valued polynomials with initial value density matrix
    % from previous iteration
    I = ones(size(L_sc));
    P = zeros((M+1)*N, N);

    % Compute matrix valued Faber polynomial recurrence relation
    % Compute initial value polynomials F_0, F_1, F_2:
    % F_0
    P(1:N, 1:N) = I;
    % F_1
    P((N+1):(2*N), 1:N) = (L_sc - b_0*I);
    % F_2
    P((2*N+1):(3*N), 1:N) = (L_sc - b_0*I) - 2*b_1*I;
    % c0*F_0 + c1*F_1 + c2*F_2
    temp = CM(1)*P(1:N, 1:N) + CM(2)*P(N+1:2*N, 1:N) + CM(3)*P(2*N+1:3*N, 1:N);

    % F_3 ... F_M, accumulating c_3*F_3 + ... + c_M*F_M
    for i = 3:M
        % F_{m+1} = L_sc*F_m - b_0*F_m - b_1*F_{m-1}
        P((i*N)+1:(i+1)*N, 1:N) = L_sc*P((i-1)*N+1:(i)*N, 1:N) ...
            - b_0*P((i-1)*N+1:(i)*N, 1:N) ...
            - b_1*P((i-1-1)*N+1:(i-1)*N, 1:N);
        temp = temp + CM(i+1)*P((i*N)+1:(i+1)*N, 1:N);
    end
    z = temp;
end

Can someone perhaps elaborate on where it went wrong? Have I perhaps misunderstood the approach?
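For debugging the recurrence in isolation, here is a minimal Python/NumPy sketch of the generic three-term recurrence $F_{m+1}(A) = (A-b_0 I)F_m(A) - b_1 F_{m-1}(A)$. The values of $b_0$, $b_1$ and the test matrix are hypothetical, the first-step modification used in the elliptic-domain scheme (the $-2b_1$ term for $F_2$) is deliberately omitted, and note that the identity is built with `eye`, not `ones`:

```python
import numpy as np

def faber_polys(A, b0, b1, M):
    """Generic three-term recurrence F_{m+1} = (A - b0*I) F_m - b1*F_{m-1}.

    b0, b1 are placeholder ellipse parameters; the elliptic-domain scheme
    in the paper modifies the very first step, which is omitted here.
    """
    n = A.shape[0]
    I = np.eye(n)                     # identity matrix, not ones()
    F = [I, A - b0 * I]               # F_0 and F_1
    for _ in range(2, M + 1):
        F.append((A - b0 * I) @ F[-1] - b1 * F[-2])
    return F

# hypothetical test values
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b0, b1 = 0.1, 0.2
F = faber_polys(A, b0, b1, M=3)
I = np.eye(2)
# under this generic recurrence, F_2 equals (A - b0*I)^2 - b1*I
assert np.allclose(F[2], (A - b0 * I) @ (A - b0 * I) - b1 * I)
```

Checking each $F_m$ against such closed forms for small $m$ is a quick way to see whether the recurrence itself, rather than the coefficients $c_m$, is at fault.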
Introduction

In my last post I introduced a simple linear time-series model using indicator functions for forecasting. In the meantime, I was experimenting with some other ideas for non-complex models with good predictive power. One approach that has been on my mind for quite a while now is commonly known to data scientists as binning. A statistician might be more familiar with the name discretization, or reducing the scale level of a random variable from numerical to categorical. An advantage of this technique is the reduction of noise - however, this comes at the cost of losing quite an amount of information.

import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.naive_bayes import *
from sklearn.linear_model import LinearRegression

data = pd.read_csv("passengers.csv", index_col=0).iloc[:,0]
plt.figure(figsize=(12,6))
plt.plot(data)

[<matplotlib.lines.Line2D at 0x7f1509e4b4e0>]

test_period = 36
train = data.iloc[:-test_period]
test = data.iloc[-test_period:]

Preprocessing

Contrary to the last model, this approach cannot deal directly with non-stationary data, so the trend in the data has to be removed via plain differencing.

trend_removed = train.diff()
plt.figure(figsize=(12,6))
plt.plot(trend_removed)

[<matplotlib.lines.Line2D at 0x7f15070ad198>]

As is visible in the plot of the differenced time-series, there is still a problem with heteroscedasticity which has to be solved as well.
The variance of the dataset seems to be increasing over time; I assumed a linearly increasing variance: $$\sigma^2_X(t)=t\cdot\sigma^2_X(1)$$ If the assumption is correct, a simple division of $X$ by $\sqrt{t}$ should solve the problem:

t_train = np.arange(len(train)).reshape(-1,1)
trend_removed = trend_removed / ((t_train+1)**(1/2)).reshape(-1)
plt.figure(figsize=(12,6))
plt.plot(trend_removed)

[<matplotlib.lines.Line2D at 0x7f15072f5908>]

The plot still looks a little non-stationary in the first few periods, so I (arbitrarily) removed the first 5 data points as well.

trend_removed = trend_removed.iloc[5:]

To bin the data, $K\in\mathbb{Z^+}$ intervals on $\mathbb{R}$ are assigned. Each continuous observation $x_t$ is then replaced by an indicator $x_t^*=k,\quad k\in\left\{1;...;K\right\}$, where $k$ is the interval that $x_t$ falls in. While there are countless ways to define the intervals, I chose them to be evenly spaced between $\min(x_t)$ and $\max(x_t)$. The number of intervals was chosen arbitrarily again - cross-validation could be applied here.

n_bins = 10
bins = np.linspace(trend_removed.min(), trend_removed.max(), n_bins)
binned = np.digitize(trend_removed, bins)
binned_series = pd.Series(binned, index=trend_removed.index)

The mean of the realizations $x_t$ in each interval is saved in a dictionary in order to map the interval category back to actual realizations.

bin_means = {}
for binn in range(1, n_bins+1):
    bin_means[binn] = trend_removed[binned == binn].mean()

To forecast future realizations, the classic approach of using $S\in\mathbb{Z}^+$ lagged realizations of $x_t^*$ will be applied. I chose $12$ lags, assuming that there is no auto-dependency of the process beyond a horizon of one year.
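Since the binning step hinges on np.digitize, its indexing convention is worth recalling; a small self-contained sketch with hypothetical values:

```python
import numpy as np

# np.digitize returns, for each value, the index of the bin it falls into:
# 0 for values below bins[0], len(bins) for values >= bins[-1],
# and i whenever bins[i-1] <= x < bins[i].
bins = np.array([0.0, 1.0, 2.0, 3.0])
x = np.array([-0.5, 0.5, 1.5, 2.5, 3.5])
idx = np.digitize(x, bins)
assert idx.tolist() == [0, 1, 2, 3, 4]
```

So with bins spaced by np.linspace between the minimum and maximum, every observation lands in a bin index between 1 and n_bins, which is why the bin_means dictionary is filled for exactly those keys.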
lagged_list = []
for s in range(13):
    lagged_list.append(binned_series.shift(s))
lagged_frame = pd.concat(lagged_list, 1).dropna()
train_X = lagged_frame.iloc[:,1:]
train_y = lagged_frame.iloc[:,0]

Training regressors and regressands now look like this:

train_X.head()

         1    2    3    4    5    6    7    8    9    10   11   12
Month
1950-06  3.0  4.0  8.0  7.0  4.0  8.0  2.0  1.0  2.0  5.0  8.0  9.0
1950-07  9.0  3.0  4.0  8.0  7.0  4.0  8.0  2.0  1.0  2.0  5.0  8.0
1950-08  8.0  9.0  3.0  4.0  8.0  7.0  4.0  8.0  2.0  1.0  2.0  5.0
1950-09  5.0  8.0  9.0  3.0  4.0  8.0  7.0  4.0  8.0  2.0  1.0  2.0
1950-10  3.0  5.0  8.0  9.0  3.0  4.0  8.0  7.0  4.0  8.0  2.0  1.0

train_y.head()

Month
1950-06    9
1950-07    8
1950-08    5
1950-09    3
1950-10    1
Name: 0, dtype: int64

To calculate the 'class'-means from before, I wrote a quick function that takes the predicted class as an input and returns the corresponding mean.

def get_mean_from_class(prediction):
    return(bin_means[prediction[0]])

Predictive Model

Since we are now dealing with a categorical variable, Naive Bayes looked like a reasonable and interesting model to try out - especially since there is no need to create dummy variables for the sklearn implementation. Interestingly, Bernoulli Naive Bayes produced nonsensical predictions although the regressors (train_X) make much more sense to treat as categorical variables. Using Gaussian Naive Bayes was much better, probably because there is still a natural order in the regressors - that's why I recommend experimenting with a large variety of models, even though some assumptions might not be met perfectly.
model = GaussianNB()
model.fit(train_X, train_y)

GaussianNB(priors=None)

pred_insample = model.predict(train_X)
pred_insample = pd.DataFrame(pred_insample, index=train_y.index)
resulting_prediction = pd.Series(np.nan, index=train_y.index)
for row in range(len(pred_insample)):
    resulting_prediction.iloc[row] = get_mean_from_class(pred_insample.values[row])
plt.figure(figsize=(12,6))
plt.plot(trend_removed)
plt.plot(resulting_prediction)

[<matplotlib.lines.Line2D at 0x7f15056ded30>]

The in-sample predictions are looking good despite the fact that there are actually only 10 possible prediction values. Out-of-sample forecasts need to be calculated iteratively since lagged values are required.

prediction_frame = pd.DataFrame(np.nan, index=test.index, columns=range(train_X.shape[1]))
predictions = pd.Series(index=test.index)
prediction_frame.iloc[0,1:] = train_X.iloc[-1,:-1].values
prediction_frame.iloc[0,0] = train_y.iloc[-1]
for i in range(len(test)):
    pred = model.predict(prediction_frame.iloc[i,:].values.reshape(1,-1))
    pred_num = get_mean_from_class(pred.reshape(-1))
    predictions.iloc[i] = pred_num
    try:
        prediction_frame.iloc[i+1,1:] = prediction_frame.iloc[i,:-1].values
        prediction_frame.iloc[i+1,0] = pred[0]
    except:
        pass

Finally, the forecast has to be retransformed:

trend_test = np.arange(len(train), len(train)+len(test)).reshape(-1,1)
final_prediction = predictions.cumsum() * ((trend_test+1)**(1/2)).reshape(-1) + train.iloc[-1]
plt.figure(figsize=(12,6))
plt.plot(test)
plt.plot(final_prediction)

[<matplotlib.lines.Line2D at 0x7f15055c5668>]

np.sqrt(np.mean((test-final_prediction)**2))

20.105172909688168

Pretty good result compared to the accuracy from last time. Nevertheless, there is still room for improvement, especially in terms of reducing the amount of selectable parameters in the "algorithm" - in that case the amount of bins could be selected automatically as this parameter is hard to choose properly.
However, this post was just meant as a proof-of-concept prototype for this approach.
I tried asking a similar question on SE.Physics, and I got some information regarding the abstract side of this, but I figured I should post here to get more complete information about the numerical benefits of variational formalisms. Assuming I am able to derive a functional representation for any dynamical system (dissipative, nonlinear, fractional, PDE, ODE, discontinuous, etc.), why would such a result or capability be useful? What are some practical consequences? In essence, what are functionals/variational formalisms used for in practice? Keep in mind, the functionals I'm referring to might be of non-standard form (i.e., for non-conservative systems). For example, any conservative system (one derived from a potential) will have a functional representation as: $$ F[\mathbf{x}]=\int^{t}_0\left(\frac{1}{2}m\dot{\mathbf{x}}(\tau)^2-V(\mathbf{x}(\tau))\right)\,\text{d}\tau $$ where taking the first variation of this functional yields the dynamics of the system, along with a condition that effectively states that the initial configuration should be similar to the final configuration (the variation at the boundaries is zero). But you can construct other functionals for non-conservative systems, such as this convolutional functional: $$ F[\mathbf{x}]=\frac{1}{2}[\mathbf{x}^{\text{T}} * D(\mathbf{x})]-\frac{1}{2}[\mathbf{x}^{\text{T}} * \mathbf{Ax}]-\frac{1}{2}\mathbf{x}'(0)\mathbf{x}(t) $$ with $\mathbf{A}$ symmetric and $\mathbf{x}(0)$ being the initial condition, and: $$ [\mathbf{f}^{\text{T}} * \mathbf{g}]=\int^{t}_0 \mathbf{f}^{\text{T}}(t-\tau)\mathbf{g}(\tau)\,\text{d}\tau $$ If we take the first variation and assume only that the initial variation is zero, the functional is stationary with respect to: $$\frac{d\mathbf{x}(t)}{dt}= \mathbf{Ax}(t)$$ The point being that one should also consider the implications of functionals which are not inner-product based.
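For the conservative functional above, the standard first-variation computation makes the claim concrete; a sketch of the derivation:

```latex
% first variation of the action functional
\delta F[\mathbf{x}]
  = \int_0^t \left( m\,\dot{\mathbf{x}}\cdot\delta\dot{\mathbf{x}}
      - \nabla V(\mathbf{x})\cdot\delta\mathbf{x} \right)\mathrm{d}\tau
% integrate the first term by parts, using \delta\mathbf{x}(0)=\delta\mathbf{x}(t)=0
  = \int_0^t \left( -m\,\ddot{\mathbf{x}} - \nabla V(\mathbf{x}) \right)
      \cdot\delta\mathbf{x}\,\mathrm{d}\tau
% stationarity for arbitrary variations gives Newton's equation
\delta F = 0 \quad\Longrightarrow\quad m\,\ddot{\mathbf{x}} = -\nabla V(\mathbf{x})
```

The convolutional functional follows the same pattern, except that the boundary condition only pins down the variation at $\tau=0$, which is why it can accommodate the first-order, non-conservative dynamics $\dot{\mathbf{x}}=\mathbf{Ax}$.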
I know that these functionals are useful in the context of FEM (but how?), but I'm also interested in whether or not they may be useful in other contexts (parameter estimation, simulation, data assimilation, etc).
Question #37681

Answer: 242 Hz.

Explanation: Solving this problem requires knowing the following concepts.

Speed of propagation of a wave in a string: when a string of linear mass density (mass per unit length) $\mu$ is held at a tension of $T$, waves in the string propagate at a speed given by $$v=\sqrt{T/\mu}.$$ Since the tension $T$ of the wire is held constant, the speed of propagation of the wave along the string, $v$, is also a constant.

Standing waves and harmonics: the wavelengths and frequencies of the various modes of standing waves (harmonics) are related to the string length $L$ as $$\lambda_n = \frac{2L}{n}, \qquad f_n=\frac{v}{\lambda_n}=\frac{nv}{2L}=\frac{n}{L}\cdot\frac{v}{2}.$$

Formation of beats: beats form as a result of the superposition of two waves that differ slightly in their frequencies. The beat frequency is equal to the frequency difference between the interacting waves.

This problem: since two beats are produced in both cases as the length of the test string is increased, comparing the resulting frequency equations and solving for the unknown gives the frequency of the first wire as 242 Hz.
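A minimal numeric sketch of the relations quoted in the explanation; the tension, density and lengths below are hypothetical, not the values from the original problem:

```python
import math

def wave_speed(T, mu):
    """v = sqrt(T/mu) for a string at tension T with linear density mu."""
    return math.sqrt(T / mu)

def harmonic_freq(n, v, L):
    """f_n = n*v/(2L), the n-th harmonic of a string of length L."""
    return n * v / (2 * L)

# hypothetical values: T = 100 N, mu = 0.25 kg/m  ->  v = 20 m/s
v = wave_speed(100.0, 0.25)
f_a = harmonic_freq(1, v, 0.50)   # fundamental of a 0.50 m string: 20 Hz
f_b = harmonic_freq(1, v, 0.25)   # fundamental of a 0.25 m string: 40 Hz
beat = abs(f_a - f_b)             # beat frequency = frequency difference
assert beat == 20.0
```

The same three functions, with the original problem's tension, density and lengths substituted, reproduce the 242 Hz answer.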
Please help improve this post. This is intended to be a welcome guide for new users that we can reference in comments to help acclimate them to the Q&A format here at EarthScience.SE. This is a community wiki, which means you can edit this directly to improve it. I have adapted this from Tex.SX and I have cleansed it of any verbiage specific to them (I think) to provide a starting point for us. Welcome to EarthScience.SE, the free, community-driven questions and answers site for experts, academics, enthusiasts, and anyone with a curiosity or interest in the earth sciences. Here is some information to help you get started and to make your work with this site a pleasant experience for you and all other users. This site is for questions and answers and is not a forum. This means that there is a quite strict format enforced here. Please post an answerable question using the Ask Question button (after searching for existing similar questions first, of course) or post a solution for an existing question as an answer post. Please do not start discussions or ask off-topic questions. You can also post comments on questions and answers to provide feedback. At first you can only comment on your own questions and their answers, but after you have gained a reputation of 50 you can comment everywhere. There is also chat for more casual posts. Questions and answers can receive positive and negative votes to indicate their quality. Answers are sorted by the number of votes so that the better ones appear first. The votes that answers and questions receive also increase or decrease the reputation of the author of the post. With increasing reputation users receive more and more privileges to do more things on the site. The most useful answer that solved your problem should be accepted by clicking the tick/checkmark symbol to the left of the answer. This shows other people that this is "the correct answer" in your opinion and assigns points to the answerer and also to you.
The question will then be handled as "done", which is important for the housekeeping of the site. However, new answers can still be added and the question asker can accept a different answer at any time. [TODO] We like sourced questions and answers. References! Posts should be formatted correctly using the simple Markdown format. This ensures that the posts are easy to read. Here are the most important formatting styles used on this site: Please do not use "here" and "this" links to other questions and answers but insert the link directly. The site will format it automatically. Material directly copied from references, books, websites, etc. should be formatted using quotation blocks (i.e. place a > before the lines) and provide attribution. Images should only be uploaded using the "Image" icon (shortcut: CTRL+G) which uses a special, timeless http://imgur.com/ account. Including images a different way, like uploading them manually to imgur.com or using a different provider, might lead to broken images after a while and is not recommended. Please ensure that the image is of a suitable size (not too big or too small) and resolution and only shows relevant things without large margins. Please reference your images. Mathematical notation can be used in your post courtesy of MathJax using $\LaTeX$ notation. To typeset equations inline, use $ to delimit your LaTeX, e.g. $F=ma$ will render as $F=ma$. To typeset an equation on its own line, use $$ to delimit your LaTeX, e.g. $$\Delta S = \int \dfrac{dQ}{T}$$ will render as $$\Delta S = \int \dfrac{dQ}{T},$$ giving more prominence to your equation. Please avoid opening and closing lines like "Hi" (already automatically removed) and "Thanks". Salutations, greetings and expressions of gratitude are already implied and only distract from the real question.
Also, the first two lines of the post will be displayed below the question title on the overview page, and such lines reduce the amount of useful information being displayed there. The author name is already displayed on the right side below the post, so there is no need for an extra signature line. Please post rough ideas, suggestions and similar material as comments and reserve answers for actual solutions. You can edit your posts anytime, e.g. to add more details to your question or to adjust your answer. Do not use long comment threads for longer or even medium-sized discussions. Use chat instead. You can even open your own chat room if you want. Please note that everything here is open and all your chat messages will be archived and accessible to others (as long as the room is not restricted by a moderator). You can notify other users by adding comments to their posts or by adding their names after an @, i.e. @username in your comment. Note this only works for the first user mentioned this way and only if that user wrote a previous comment on the same post. The post author is always notified. Notifications also work in chat if the user posted a message there not too long ago. There are three appointed moderators for this site. They can be notified about issues by flagging posts using the flag link below each post. Democratic moderator elections will be held when this site graduates from beta. While you can post questions and answers as an unregistered user, please consider registering your account. Otherwise you might create a new, different unregistered account with the same name and icon. Then you will not be able to edit your old posts or add comments to them. You can ask a moderator to merge such accounts together. Questions about the site itself should be asked on Meta.EarthScience.SE, not on the main site. There the restrictions are lower and you are free to start discussions as long as they are about how to run the site, etc.
This post was adapted from the excellent Tex.SX post at https://tex.meta.stackexchange.com/questions/1436/welcome-to-tex-sx.
Given that $z_1=1+2i$ and $z_2=\frac{3}{5}+\frac{4}{5}i$, write $z_1z_2$ and $\frac{z_1}{z_2}$ in the form $p+iq$, where $p, q \in \mathbb{R}$. In an Argand diagram, the origin $O$ and the points representing $z_1z_2$, $\frac{z_1}{z_2}$ and $z_3$ are the vertices of a rhombus. Find $z_3$ and sketch the rhombus on this Argand diagram. Show that $\left | z_3 \right |=\frac{6\sqrt{5}}{5}$. My attempt: I found $z_1z_2=-1+2i$ and $\frac{z_1}{z_2}=\frac{11}{5}+\frac{2}{5}i$. How do I find $z_3$? And can anyone let me know what program I can use to plot this kind of complex number? Thanks a lot.
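The arithmetic can be checked with a few lines of Python; taking $z_3 = z_1z_2 + \frac{z_1}{z_2}$ (the rhombus vertex opposite the origin, along the diagonal through $O$) is an assumption, but it is consistent with the stated modulus $\frac{6\sqrt{5}}{5}$:

```python
# quick check of the arithmetic; fractions kept as floats
z1 = 1 + 2j
z2 = 0.6 + 0.8j            # 3/5 + 4/5 i, note |z2| = 1

prod = z1 * z2             # should be -1 + 2i
quot = z1 / z2             # should be 11/5 + 2/5 i (since |z2| = 1, z1/z2 = z1*conj(z2))
assert abs(prod - (-1 + 2j)) < 1e-12
assert abs(quot - (2.2 + 0.4j)) < 1e-12

# vertex opposite the origin: the diagonals of a rhombus bisect each other,
# so z3 = prod + quot
z3 = prod + quot           # 6/5 + 12/5 i
assert abs(abs(z3) - 6 * 5 ** 0.5 / 5) < 1e-12
```

For the sketch, matplotlib works well: plot the real against the imaginary parts of the closed path `[0, prod, z3, quot, 0]` with `plt.plot`.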
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: $(f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau$, whe... That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the $Y_{t+\tau}$? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see whether shear strain does cause a temperature increase, potentially delayed in time? Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered? @tpg2114 For reducing the data points needed to calculate a time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points. @DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array - so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is $\frac{dT}{dt}$ and doesn't have an axis - I don't care what the values are. The solid blue line is the abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time. Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month, (2) do we reword the homework policy, and how, (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
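A minimal sketch answering points 2) and 3): with two length-N inputs, mode='full' returns 2N-1 samples (hence roughly 400 for N=200), and the lag is argmax minus N-1. The impulse signals below are hypothetical stand-ins for the real data:

```python
import numpy as np
from scipy.signal import correlate

N = 200
x = np.zeros(N); x[50] = 1.0        # reference signal
y = np.zeros(N); y[53] = 1.0        # same feature, delayed by 3 samples

c = correlate(y, x, mode='full')
assert len(c) == 2 * N - 1          # 399 samples for N = 200

lag = int(np.argmax(c)) - (N - 1)   # positive lag means y lags behind x
assert lag == 3
```

Multiply the lag by the 1e-9 s sample spacing to convert it into a delay in seconds. As for point 1), raw correlation mixes in the DC offsets of both signals, so it is worth subtracting each signal's mean before correlating and re-checking the sign of the peak.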
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question. It'd be better if we could do it quickly enough that no answers get posted until the question is clarified to satisfy the current HW policy. For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar} ~p$$ $$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$ and then define the following: $$K_1=\frac 14 (p^2-q^2)$$ $$K_2=\frac 14 (pq+qp)$$ $$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$ The first part is to show that $$Q \... Okay. I guess we'll have to see what people say, but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics. Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay. @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." I.e.
is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's Approximation is. Some have argued that is on topic even though there's nothing really physical about it, just because it's 'graduate level'. Others would argue it's not on topic because it's not conceptual. How can one prove that $$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$ for a sufficiently well-behaved operator $\mathcal{A}$? How (mathematically) rigorous is the expression? I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote" -- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention. @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework. @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the top-voted answers on both. Afternoon folks.
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (My initial feeling is no, because it's really a math question, but I figured I'd ask anyway.) @DavidZ Ya, I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. Hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X". user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE -- it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube. @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper).
I am currently reading Switzer's book "Algebraic Topology: Homotopy and Homology". On page 50, in the proof of 3.30 c), he makes a claim about a certain composition, and I can't see how it possibly can be what he states it to be. Let $\beta':S^1 \rightarrow I \vee S^1$ be defined by $(2t,\ast)$ if $t \leq 1/2$ and $(\ast,2t-1)$ if $t > 1/2$. Consider the quotient map $q:I \rightarrow S^1$ given by $q(t) = e^{2\pi i t}$. Switzer then claims that the composition $\alpha= (q \vee 1) \circ \beta': S^1 \rightarrow S^1 \vee S^1$ is given by $\alpha(t) = (4t,\ast)$ if $t \leq 1/4$, $\alpha(t) = (\ast,2t-1/2)$ if $1/4 \leq t \leq 3/4$ and $\alpha(t) = (4(1-t),\ast)$ if $3/4 \leq t \leq 1$. However, I get that the composition is $(2t,\ast)$ for $t \leq 1/2$ and $(\ast, 2t-1)$ for $t \geq 1/2$. Is Switzer wrong, or am I misunderstanding something? Any help would be very appreciated; I don't seem to be able to fix this detail. I think that what you wrote is not precisely what Switzer defined, but perhaps you made some simplifications that I did not notice. I'm going to give up for now on combing through the details of the formulas and just say what the big picture is. Look at the schematic on page 47. We see that what $\beta':(D^{n+1},s_0)\to ((I,1) \lor (D^{n+1},s_0),0)$ does is squash some contractible neighborhood of $s_0 \in D^{n+1}$ -- say the right half of the disk in the picture, up to homotopy -- down to an interval, and handle the rest of the disk in such a way that everything is nice and continuous. Why did I write my target as $((I,1) \lor (D^{n+1},s_0),0)$? Notice that Switzer says he wants the basepoint of that space (and related subspaces) to be the point he labeled as 0. He's abusing the wedge sum notation for convenience, but what he's really doing is squashing. To get $\beta:(S^n,s_0)\to ((I,1) \lor (S^n,s_0),0)$, you just restrict $\beta'$ to the boundary $\partial D^{n+1} = S^n$.
Finally, to get $\alpha:(S^n,s_0)\to (S^1,*) \lor (S^n,s_0)$, you just glue together the endpoints of the interval you got from squashing. (To our relief, now the basepoint suggested by that notation is finally back where it's supposed to be.) So, with this understanding, what does $\alpha:S^1\to S^1\vee S^1$ do? Think of each $S^1$ as $(I\sqcup*)/(0\sim *, 1\sim * )$ with basepoint $*$. For the neighborhood of $*$ we want to squash, we can take $N = [3/4,1]\cup[0,1/4]$. To squash $N$ to an interval, all you really do is just glue $t \in [0,1/4]$ to $(1-t) \in [3/4,1]$. (In physical terms, you're doing the following in the n=1 case: Take a piece of string and knot it into a loop; treat the knot as the basepoint. Pinch the loop together in the middle (away from the basepoint) to make two loops. Finally, pull taut the "subloop" which has the knot in it so that you have a loop on one side and an interval on the other side. You could imagine doing something similar with a balloon to visualize $\alpha: S^2 \to S^1\vee S^2$. When I want to think of the case for general $n$, I imagine the picture for $n=2$ and then squint my eyes so the 2's look like n's.) Now we just implement my description of $\alpha:(S^n,s_0)\to (S^1,*) \lor (S^n,s_0)$ for the case $n=1$, taking care to choose parametrizations appropriately. We need to send the squashed $N$ to the first copy of $S^1$ in $S^1\lor S^1$. We can achieve this by sending $t \in [0,1/4]$ to $4t$. (Please forgive my imprecise notation. I think it'll be easier to see what's going on if I leave out some of the extra bells and whistles.) Correspondingly, since in the squashed $N$ we have that $t \in [0,1/4]$ is identified with $s = (1-t) \in [3/4,1]$, you also have to send $s$ to $4t = 4(1-s)$. These account for the first and third lines of the definition of $\alpha$. The second line just sends your remaining loop (the boundary circle you see on the right hand side of the schematic on page 47) to the second $S^1$.
Let $X:\Omega\to\mathbb{R}$ and $Y:\Omega\to\mathbb{R}$ be univariate random variables with CDF $F_{X,Y}(x,y)$ such that: $$ F_{X,Y}(x,y)=G_1(x)G_2(y),\forall (x,y)\in\mathbb{R}\times\mathbb{R} $$ where $G_1:\mathbb{R}\to\mathbb{R}$, $G_2:\mathbb{R}\to\mathbb{R}$ are known functions. Question: Is it true that $X$ and $Y$ are independent RVs? Can anyone give me some hints? I tried to: $$ F_X(x)=\lim_{y\to\infty}F_{X,Y}(x,y)=\lim_{y\to\infty}G_1(x)G_2(y)=G_1(x)\cdot\lim_{y\to\infty}G_2(y) $$ but I don't know why (or if) $\lim_{y\to\infty}G_2(y)=1$.
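One standard route, written out as a hint (assuming only the stated hypotheses): the limits of $G_1$ and $G_2$ at $+\infty$ need not be $1$ individually, but their product is, and that is enough for independence.

```latex
% Sketch: let L_1 = lim_{x -> infty} G_1(x) and L_2 = lim_{y -> infty} G_2(y).
% Both limits exist because F_{X,Y} is monotone and bounded in each argument.
\begin{align*}
F_X(x) &= \lim_{y\to\infty} F_{X,Y}(x,y) = G_1(x)\,L_2, &
F_Y(y) &= \lim_{x\to\infty} F_{X,Y}(x,y) = L_1\,G_2(y),\\
1 &= \lim_{x,y\to\infty} F_{X,Y}(x,y) = L_1 L_2, &
F_X(x)\,F_Y(y) &= G_1(x)G_2(y)\,L_1 L_2 = F_{X,Y}(x,y).
\end{align*}
```

So $F_{X,Y}$ factors into its marginals, which is exactly independence, even though $\lim_{y\to\infty} G_2(y)$ by itself may be any positive constant (with $G_1$ scaled correspondingly).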
Question A point particle of mass $m$ moves in the region $0 \leq x \leq l$ and is reflected elastically at the walls at $x=0$ and $x=l$. Calculate the volume of the classical phase space with an energy smaller than $E$. Assume that a particle initially has an energy $E$. Demonstrate that the phase-space volume of this particle remains constant when the wall at $x=l$ is moved slowly (adiabatic invariance). Calculate the number of states with energy smaller than $E$ for the corresponding quantum mechanical system. Compare the result with $\Gamma_0(E)$ for large $E$. Solution 1. The classical phase-space volume for this situation can be found from: \begin{equation} \Gamma_0 = \int_{0}^{l}dx\int_{-\sqrt{2mE}}^{\sqrt{2mE}}dp = 2l \sqrt{2mE} \label{eq:gamma0} \end{equation} since the energy of the particle is given by $E = p^2/2m$. 2. Since the expansion is adiabatic, $\delta Q = 0$, where $\delta$ denotes an inexact differential. Thus, from the second law of thermodynamics, $$\frac{\delta Q}{T}=\delta S=0$$ We know entropy is given by $S = k_B \ln \Omega$. Therefore $$\Delta S=k_B\ln \Omega_0 - k_B \ln \Omega_1 = 0$$ which necessitates $$\Omega_0=\Omega_1$$ And since $\Omega \propto \Gamma$, $\Gamma_0$ remains constant. 3. The energies which correspond with each of the permitted wavenumbers for a particle in a one-dimensional box may be written as $$E_n=\frac{h^2 n^2}{8ml^2}$$ The number of states up to energy $E$ may be found from summing over all of the accessible states of the system like so: $$\Omega_0(E) = \sum_{E_0}^{E}1 \stackrel {\text{large E}}{\approx} \sqrt{\frac{8ml^2}{h^2}E}$$ $$=\frac{2l \sqrt{2mE}}{h}=\frac{\Gamma_0(E)}{h}$$ So the difference between the classical and the quantum treatment is a factor of $h$.
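A quick numerical sanity check of part 3, in units where $h=m=l=1$ (a convenience assumption, not part of the problem): count the quantum states with $E_n \le E$ and compare with $\Gamma_0(E)/h$.

```python
import math

h = m = l = 1.0  # convenient units (assumption, not from the problem statement)

def n_quantum_states(E):
    # number of n >= 1 with h^2 n^2 / (8 m l^2) <= E
    return math.floor(math.sqrt(8 * m * l**2 * E) / h)

def gamma0_over_h(E):
    # classical phase-space volume divided by h
    return 2 * l * math.sqrt(2 * m * E) / h

for E in (10.0, 1e4, 1e8):
    print(E, n_quantum_states(E) / gamma0_over_h(E))  # ratio -> 1 for large E
```

The ratio approaches 1 as E grows, confirming that the state count is the classical volume measured in units of h.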
In this post a simple MIP problem is posted. Basically the problem can be stated as: \[\boxed{\begin{align} \max & \sum_i Score_i \, x_i \\ & \sum_i x_i = 6 \\ & \sum_i Salary_i \, x_i \le 50000 \\ & x_i \in \{0,1\} \end{align}}\] We have a database of 153 players, and we want to compose the best team of 6 players with a total salary cap of $50k. This can actually be done in Excel quite easily. With some random data I get the following results using the Simplex LP based MIP solver: The name “Simplex LP” is somewhat misleading, as we solve here a MIP. Some comments in the post are really surprising to me: “the number of combinations of 6 golfers from a pool of 153 is 16,133,132,940. Note that yes, that is over 16 billion combinations. Even if Solver can do it, it will probably crash Excel.” A MIP solver (or the Evolutionary Solver) will not evaluate all possible solutions! (That would be impossible in almost all cases.) There is no reason to believe Excel will crash on this problem (quite the opposite: it solves quite quickly). I see this argument more often; apparently there are lots of people around who think complete enumeration is the best we can do. Excel mentions “use Evolutionary engine for non-smooth problems”, so one answer suggested to use the Evolutionary solver. Actually I understand why someone may think this is a wise decision: the hint can be interpreted as advice to use the Evolutionary algorithm for any discrete problem. Looking at the results below, this is not such a good idea: the Evolutionary engine takes much more time and delivers a significantly sub-optimal solution. A summary comparison of the results is: “The built-in version of Solver reports, ‘This problem is too large for Solver to handle.’ [..] an alternative is the free OpenSolver add-in (see opensolver.org)”. I have no idea why this user sees this message. I had no size problems using the sizes as indicated by the poster.
This problem has 153 binary variables and I believe the limit is 200 variables. Below is the result from a run with the same data using the Evolutionary solver: Conclusion My advice: always try a MIP model first. The model building process forces you to understand the problem better, and a MIP solver can give proven optimal solutions (or a guaranteed quality when we allow a gap between best possible and best found solutions). If the performance is not satisfactory, then consider using a meta-heuristic. (The MIP model can also be used to debug the implementation of the meta-heuristic and compare the quality of the objective function values.)
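To make the "don't enumerate 16 billion combinations" point concrete outside Excel: the same select-exactly-6-under-a-cap model can be solved exactly in well under a second, for example by dynamic programming over the salary budget. This is a sketch with made-up random data, not the post's instance, and real MIP solvers use branch and bound rather than this DP; the point is only that exact methods scale nothing like enumeration.

```python
import random

def best_team(scores, salaries, k, cap, step=100):
    """Max total score picking exactly k players with total salary <= cap.
    Salaries are assumed to be multiples of `step` (true for the $50k example)."""
    slots = cap // step + 1
    NEG = float("-inf")
    # dp[j][s] = best score using exactly j players at discretized salary s
    dp = [[NEG] * slots for _ in range(k + 1)]
    dp[0][0] = 0.0
    for score, sal in zip(scores, salaries):
        w = sal // step
        for j in range(k - 1, -1, -1):          # reverse: each player used once
            for s in range(slots - 1 - w, -1, -1):
                if dp[j][s] > NEG and dp[j][s] + score > dp[j + 1][s + w]:
                    dp[j + 1][s + w] = dp[j][s] + score
    return max(dp[k])

random.seed(1)
salaries = [random.randrange(5000, 13000, 100) for _ in range(153)]
scores = [random.uniform(50, 100) for _ in range(153)]
print(best_team(scores, salaries, k=6, cap=50000))  # proven optimum for this data
```

The table has only (k+1) × (cap/step+1) entries per pass over the players, so the work is a few hundred thousand updates, not billions of subsets.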
Assume you have poured your distilled water in a 1 liter container. We write the flux $J_{\ce{CO2}}$ of $\ce{CO2}$ from the air towards your container as $$J_{\ce{CO2}}=\frac{dn_{\ce{CO2}}}{dt}\times\frac{1}{A}=\frac{d[\ce{H2CO3}]^{*}}{dt}\times\frac{V}{A}$$ where $n_{\ce{CO2}}$ is the number of $\ce{CO2}$ moles in water, $A$ the water surface and $V$ the water volume. Assuming $\ce{CO2}$ atmospheric pressure equal to $10^{-3.5}\,atm$, we obtain the concentration $[\ce{CO2}]_{air}$ approximately equal to $10^{-5}\,M$ (the latter value obtained via $PV=nRT$). We call $[\ce{H2CO3}]^{*}$ the sum of the $[{\ce{CO2}}]_{water}$ and $[\ce{H2CO3}]_{water}$ concentrations in water: at the beginning, we set it equal to $0$. Note that $[\ce{H2CO3}]^{*}$ could be reasonably written as: $$[\ce{H2CO3}]^{*}\approx[{\ce{CO2}}]_{water}$$ indicating that the majority of carbon dioxide in water is not converted into carbonic acid, since the kinetics of such a conversion is very slow, as pointed out by @Nicolau Saker Neto. We express $J_{\ce{CO2}}$ as: $$J_{\ce{CO2}}=\frac{D_{{\ce{CO2}}}}{Z}\times ([\ce{CO2}]_{air}-[\ce{H2CO3}]^{*})$$ with $D_{\ce{CO2}}$ the diffusion coefficient of ${\ce{CO2}}$, equal to $7.2\times10^{-4}\,dm^{2}h^{-1}$, and $Z$ the thickness of the superficial layer through which the exchange occurs. For the latter, we take $40\times10^{-5}\,dm$. Putting the previous equations together (with $A/V=1\,dm^{-1}$), the following is obtained: $$\frac{d[\ce{H2CO3}]^{*}}{dt}=1.8\times10^{-5}-1.8\times[\ce{H2CO3}]^{*}$$ which yields: $$[\ce{H2CO3}]^{*}=1\times10^{-5}(1-e^{-1.8t})$$ The graph below shows that about $2$ hours later, a plateau is established, at $[\ce{H2CO3}]^{*}= 1\times10^{-5}\,M$. By considering the acid-base equilibrium ($pK_{a}=6.3$): $$\text{H}_{2}\text{C}\text{O}_{3}^{*} \ce{<=> HCO3- + H+}$$ we obtain a $\text{pH}=5.7$ at the plateau. UPDATE 1: the value of $Z$ has of course a big impact on the time scale. A larger value will increase the time needed to reach the plateau.
I have used $40\times10^{-5}\,dm$, considered a "typical value" by the authors of this book (sorry, it is in French; page 181, 3rd edition). Any more accurately derived value for $Z$ is welcome! UPDATE 2: I used the notation $\text{H}_{2}\text{C}\text{O}_{3}^{*}$ to represent the two species $\ce{CO2}$ and $\ce{H2CO3}$ in water.
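The numbers above are easy to check numerically. Below is a sketch reproducing the plateau and the pH, assuming (as the answer does) that $[\ce{H+}]=[\ce{HCO3-}]$ at equilibrium and neglecting water autoprotolysis:

```python
import math

k_in, k_out = 1.8e-5, 1.8      # d[H2CO3*]/dt = k_in - k_out * c  (M/h and 1/h)
c_plateau = k_in / k_out       # 1e-5 M, reached when the derivative vanishes

def c(t_hours):
    # closed-form solution of the linear ODE with c(0) = 0
    return c_plateau * (1 - math.exp(-k_out * t_hours))

print(c(2.0) / c_plateau)      # ~0.97: after ~2 h we are essentially at the plateau

Ka = 10 ** -6.3                # first dissociation of H2CO3*
h_plus = math.sqrt(Ka * c_plateau)   # from Ka = [H+][HCO3-]/[H2CO3*], [H+]=[HCO3-]
print(-math.log10(h_plus))     # ~5.65, i.e. pH ≈ 5.7 as stated
```

The pH comes out as $\tfrac12(pK_a - \log_{10} c) = \tfrac12(6.3 + 5) = 5.65$, matching the quoted 5.7.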
I want to model a dynamical system of the form $\frac{\text{d}x}{\text{d}t} = f(x)+nx\delta(\pi(t-0.2))$. The problem is that I have a point source which recurs at fixed time steps (say at 0.2, 1.2, 2.2, ...). How can I handle this numerically? I have tried to figure out solutions and it seems that there are two "easy" approaches: 1) Consider a grid with a fixed step size and define your delta function as 0 on all grid points except the relevant points. However, the influence of the delta function then depends strongly on the step size used to solve the system numerically, which is why I think this is not correct. 2) Use a Gaussian to represent the delta function. This could also work for adaptive step sizes. However, in this case the results depend on the variance of the Gaussian, which should be small. Is this a better approach? If this approach works, I could generate an array consisting of the Gaussians at different time steps which is zero elsewhere and use this with normal ODE solvers, right? The third approach is a little bit nasty in the sense that it cannot be used with "normal" ODE solvers. It would be to evaluate f(x) until we approach the point source and take $y(+\epsilon)=e^ny(-\epsilon)$ (Numerical way to deal with Dirac delta.). This does not depend on the variance or step size. Should I go with this one? I could use normal ODE solvers until the fixed time step and then just take the exponential. And do you have any references on that? Thank you very much for your help!
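The third approach is straightforward to implement: integrate between impulse times with any standard stepper and apply the jump $x(t_k^+) = e^{n}\,x(t_k^-)$ at each impulse. (This uses the asker's stated jump condition; note that if the argument is literally $\delta(\pi(t-t_k))$, the scaling $\delta(\pi u)=\delta(u)/\pi$ would make the factor $e^{n/\pi}$ instead.) A sketch with the toy choice $f(x)=-x$, for which the exact answer is known:

```python
import math

def rk4_segment(f, x, t0, t1, steps=1000):
    # Classical RK4 between impulses; any off-the-shelf ODE solver works here too.
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

def solve_with_impulses(f, x0, t_end, n, impulses):
    x, t = x0, 0.0
    for tk in (tk for tk in impulses if tk < t_end):
        x = rk4_segment(f, x, t, tk)
        x *= math.exp(n)          # the multiplicative jump replacing the delta term
        t = tk
    return rk4_segment(f, x, t, t_end)

f = lambda t, x: -x               # toy f(x); exact solution known for comparison
x_num = solve_with_impulses(f, x0=1.0, t_end=3.0, n=0.5, impulses=[0.2, 1.2, 2.2])
x_exact = math.exp(-3.0) * math.exp(3 * 0.5)   # pure decay times three jumps
print(abs(x_num - x_exact))        # tiny: the segment-plus-jump scheme is exact in the limit
```

Unlike the grid or Gaussian approximations, nothing here depends on a regularization width; the only error is the ordinary truncation error of the smooth segments.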
I have no idea for the full generality, but at least I have a complete answer when $g$ is convex. Proposition. Assume that $g : [a,\infty) \to \Bbb{R}$ is increasing, convex and unbounded. Then (1) $\int_{a}^{\infty} \sin g(x) \, dx$ does not converge absolutely. (2) $\int_{a}^{\infty} \sin g(x) \, dx$ converges if and only if $g'_{+}(x) \to \infty$, where $g'_{+}$ is the right-hand derivative. (3) $\int_{a}^{\infty} \sin g(x) \, dx$ converges in Cesaro sense. That is, the limit $$\lim_{R\to\infty} \frac{1}{R} \int_{a}^{R} \int_{a}^{r} \sin g(x) \, dx dr$$ exists. Step 1. A bit of reduction and some observations. First, since $g$ is convex, it is continuous possibly except at $a$. Using the fact that $g$ is also increasing, we know that $g$ is indeed continuous everywhere on $[a,\infty)$. Then the condition implies that there exists $a' \in [a, \infty)$ such that $g$ is constant on $[a, a']$ and $g$ is strictly increasing on $[a', \infty)$. Since none of the statements (1)-(3) is affected by modification of $g$ on a finite interval, we may truncate the interval from the left and assume that $g$ is strictly increasing. The previous paragraph shows that it suffices to consider $g$ which is strictly increasing, continuous, convex and unbounded. Then its inverse $h : [g(a), \infty) \to [a, \infty)$ is a well-defined function which is strictly increasing, continuous, concave and unbounded. Thus its right-hand derivative $h'_+(x)$ is a decreasing function such that $$ h'_+(x) = \lim_{t \to 0^+} \frac{h(x+t) - h(x)}{t} = \lim_{k \to 0^+} \frac{k}{g(h(x)+k) - g(h(x))} = \frac{1}{g'_+(h(x))}. $$ So $h'_+(x) \to 0$ if and only if $g'_{+}(x) \to \infty$. Moreover, for any continuous function $\varphi$ on $[a, b]$ we have the following formula $$ \int_{a}^{b} \varphi(g(x)) \, dx= \int_{g(a)}^{g(b)} \varphi(y) \, dh(y)= \int_{g(a)}^{g(b)} \varphi(y) h'_+(y) \, dy. $$ Step 2. Actual computation. We first resolve (1). Let us write $\rho(x) = h'_+(x)$ for brevity.
Choose $m \in \Bbb{Z}$ so that $m\pi > g(a)$. Then along $R_n = h(n\pi)$ with $n > m$, \begin{align*}\int_{a}^{R_n} \left|\sin g(x) \right| \, dx&\geq \int_{m\pi}^{n\pi} \rho(x) \left|\sin x \right| \, dx = \sum_{k=m}^{n-1} \int_{0}^{\pi} \rho(x+k\pi) \sin x \, dx \\&\hspace{2em} \geq \sum_{k=m}^{n-1} 2\rho((k+1)\pi) \geq \sum_{k=m}^{n-1} \frac{2}{\pi} \int_{(k+1)\pi}^{(k+2)\pi} \rho(x) \, dx \\&\hspace{4em} \geq \frac{2}{\pi} [h((n+1)\pi) - h((m+1)\pi)] \xrightarrow[n\to\infty]{} \infty\end{align*} and hence (1) follows. Next we resolve part (2). Let $m \in \Bbb{Z}$ be as before and define $F$ by $$F(r) = \int_{m\pi}^{r} \rho(x) \sin x \, dx.$$ In view of the previous section, we can investigate the convergence of $F(r)$ as $r\to\infty$ instead. Also, since $F(r)$ always lies between $F(n\pi)$ and $F((n+1)\pi)$ whenever $r \in [n\pi, (n+1)\pi]$, it suffices to investigate the convergence of $F(n\pi)$ as $n\to\infty$. Now for $n > m$, $$ F(n\pi) = \sum_{k=m}^{n-1} (-1)^k \int_{0}^{\pi} \rho(x+k\pi) \sin x \, dx $$ First, the general term satisfies $$ \left| (-1)^k \int_{0}^{\pi} \rho(x+k\pi) \sin x \, dx \right|\geq \rho((k+1)\pi) \int_{0}^{\pi} \sin x \, dx= 2\rho((k+1)\pi). $$ So if $F(n\pi)$ converges as $n\to\infty$, then $\rho(x)$ also converges to $0$ as $x\to\infty$. By our previous remark, this implies $g'_+(x) \to \infty$ as $x\to\infty$. Conversely, assume that $g'_+(x) \to \infty$ as $x\to\infty$ so that $\rho(x) \to 0$ as $x\to\infty$. Then $$ F(n\pi) = \int_{0}^{\pi} \left( \sum_{k=m}^{n-1} (-1)^k \rho(x+k\pi) \right) \sin x \, dx $$ and the inner term converges uniformly by the Dirichlet test. This implies the convergence of $F(n\pi)$ and hence the convergence of $F(x)$ as $x\to\infty$. Finally we resolve part (3). Let $\rho_{\infty} = \lim_{x\to\infty} \rho(x)$, which exists by the monotonicity of $\rho$.
Then we can write $$ \int_{a}^{r} \sin g(x) \, dx= \underbrace{\int_{g(a)}^{g(r)} (\rho(x) - \rho_{\infty}) \sin x \, dx}_{=:A} + \rho_{\infty} \cos g(a) - \underbrace{\vphantom{\int_{g}} \rho_{\infty} \cos g(r)}_{=:B}. $$ Now the term $A$ converges as $r\to\infty$ by (2). So its Cesaro mean also converges. In order to investigate the Cesaro mean of the term $B$, we have to look at $$ \frac{1}{R} \int_{a}^{R} \cos g(r) \, dr= \frac{1}{R} \int_{g(a)}^{g(R)} \rho(x) \cos x \, dx. $$ Using a similar 'alternating series trick' as in part (2), we can check that $\int_{g(a)}^{g(R)} \rho(x) \cos x \, dx$ is uniformly bounded in $R$. Putting it all together, we obtain not only the Cesaro convergence but also its value: $$ \lim_{R\to\infty} \frac{1}{R} \int_{a}^{R} \int_{a}^{r} \sin g(x) \, dx dr= \int_{g(a)}^{\infty} (\rho(x) - \rho_{\infty}) \sin x \, dx + \rho_{\infty} \cos g(a). $$
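As a numerical illustration of part (2) (not part of the proof): take $g(x)=x^2$, for which $g'(x)\to\infty$, so the integral converges, and its value is the Fresnel integral $\int_0^\infty \sin(x^2)\,dx = \sqrt{\pi/8}$. Truncating at a zero of $\sin(x^2)$ keeps the alternating tail small:

```python
import numpy as np

R = np.sqrt(2000 * np.pi)            # cut off at a zero of sin(x^2) to tame the tail
x = np.linspace(0.0, R, 2_000_001)   # fine grid: ~1000 points per oscillation near R
y = np.sin(x**2)

dx = x[1] - x[0]
partial = dx * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)   # trapezoid rule

print(partial, np.sqrt(np.pi / 8))   # both ~0.6267
```

The remaining tail past $R$ is an alternating sum of lobes of size about $1/\sqrt{2000\pi}\approx 0.013$, consistent with the agreement seen here.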
Let $[H,G]$ be a rank $2$ boolean interval of finite groups. It is true in each of the following cases: $|G:H| = 2p$ with $p$ prime (see below); $|G:H|<36$ (using GAP and $34=2 \cdot 17$); $G$ simple with $|G| \le 50000$ (by GAP). For $|G:H| = 2p$ the proof is the following: if $|G:K_i| = 2$ then $K_i$ is a normal subgroup of $G$. It follows that $H=K_1 \cap K_2$ is also normal, so $[1,G/H] \simeq [H,G]$ is boolean, so by Ore's theorem $G/H$ is cyclic, but with two subgroups of index $2$, contradiction. Application: Assuming Statement 1 true, we can prove the following: Statement 2: Let $[H,G]$ be a boolean interval of rank $>1$; then there is a coatom $L$ of $[H,G]$ with $|L:H| \equiv |G:H| \pmod 2$. Proof: if $|G:H|$ is odd, it is immediate, so we can assume that $|G:H|$ is even. If for every coatom $L$, $|L:H|$ is odd, take two coatoms $L_1$ and $L_2$; then $[L_1 \wedge L_2,G]$ is a rank $2$ boolean interval, which contradicts Statement 1. $\square$ Statement 3: Let $[H,G]$ be boolean; then there is an atom $K$ with $|K:H| \equiv |G:H| \pmod 2$. Proof: As before, we can assume that $|G:H|$ is even. We prove it by induction on the rank of $[H,G]$. It is immediate at rank $1$. Assume it is true at rank $<n$. For $[H,G]$ of rank $n$, by Statement 2 there is a coatom $L$ with $|L:H|$ even; but $[H,L]$ has rank $n-1$, so the result follows. $\square$ Note that Statement 3 implies Statement 1, so all these statements are equivalent.
My question relies on the clarification of a proof, which I simply don't understand. Let us denote by $X$ a pseudo-riemannian symmetric space and define $$ Z_{\mathrm{Iso}\left(X\right)}G(X) = \{\, f \in \mathrm{Iso}\!\left(X\right) \mid gf = fg \text{ for all } g \in G(X) \,\} $$ as the centralizer of the transvection group $G(X)$ of the space $X$, and let $\mathrm{Iso}\left(X\right)$ be the isometry group of the space in question. In Cahen, Parker, "Pseudo-riemannian symmetric spaces", the following assertion is stated as Proposition 3.9: Assertion: Any discrete subgroup $A$ of $Z_{\mathrm{Iso}\left(X\right)}G(X)$ acts properly discontinuously on $X$. Proof: Let $C \subseteq X$ be a compact subset of $X$ with the property that $A_C = \{\, g \in A \mid gC \cap C \neq \emptyset \,\}$ is infinite, $\lvert A_C \rvert = \infty$. There exists a sequence $(a_n) \subseteq A_C$ with distinct elements and a sequence $(c_n) \subseteq C$ such that $a_n c_n \in C$ for all $n \in \mathbb{N}$. By compactness of $C$, and therefore taking sub-sequences, we may assume that $(c_n)$ converges towards $c \in C$ and that $(a_nc_n)$ converges towards $c^\prime \in C$. Let us choose $c$ as a basepoint of $X$. Since the transvection group $G(X)$ acts transitively on $X$, we can find a $g \in G(X)$ such that $gc = c^\prime$, and let $V_1 \subseteq U_1 \subseteq W_1$ be three neighborhoods of the identity in $G(X)$ such that $$V_1=V_1^{-1}, g^{-1}V_1g\subseteq U_1 \text{ and } U_1^2 \subseteq W_1.$$ Define: $$W_c = V_1 c, \\ \tilde{W}_c = W_1 c, \\ W_{c^\prime}=gW_c, \\ \tilde{W}_{c^\prime}=g\tilde{W}_c.$$ Question: Why does this define neighborhoods of the points $c$ and $c^\prime$, and furthermore why does this define a neighborhood basis? There exists an integer $N$ such that for all $n \geq N$ we have $c_n \in W_c$ and $a_nc_n \in W_{c^\prime}$. Indeed, there exists $g_n \in V_1$ such that $c_n = g_nc$ for all $n \geq N$. We want to show that $\lim_{n \to \infty} (a_nc) = c^\prime$.
Consider therefore $$ a_n c = a_ng_n^{-1}c_n = g_n^{-1} a_n c_n \in g_n^{-1} g W_c \\ = gg^{-1}g_n^{-1}gW_c \subseteq gU_1V_1c \subseteq gU_1^2c \subseteq gW_1c = \tilde{W}_{c^\prime}, $$ which shows the claimed result. The sequence $(b_n = a_{n+1}^{-1} a_n)$ in $A$ is clearly such that $\lim_{n \to \infty}b_n c = c$. Question: Unfortunately I have to redefine the term "clearly" as "highly unclear". To me it is not obvious why this sequence converges towards $c$.
The answer to your question is contained in the Authenticity bound (Theorem 5.1). This is because Authenticity implies non-malleability (see e.g. http://eprint.iacr.org/2011/092.pdf). Note that only one term in the bound refers to the length of the tag (referred to by the variable $\tau$): $$\mathbf{Adv}_{OCB}^{auth}[\mathrm{Perm}(n), \tau] (A) \leq \dfrac{1.5\overline{\sigma}^2}{2^n}+\dfrac{1}{2^{\tau}},$$ where $\overline{\sigma} = \sigma + 2q + 5c + 11$, such that $q$ is the number of chosen plaintext queries of aggregate length $\sigma$ blocks, and $c$ is the total length in blocks of the attempted forgery. As you can see, the length of the tag only matters for the last term. So if the tag is 128 bits then the last term adds $\frac{1}{2^{128}}$ to the total Adversarial advantage. But if the tag is truncated to only 32 bits then the advantage will increase by $\frac{1}{2^{32}}$, which is about 1 in 4 billion. Depending on your risk tolerance and threat model, the latter increase in Adversarial advantage might be acceptably small in exchange for the space savings of shorter tags.
I'm asked to plot the frequency response (amplitude) given a specific pole-zero diagram. $$ H(z) = H_0 \frac{\prod\limits_{m=1}^{M} (z - q_m)}{\prod\limits_{m=1}^{M} (z - p_m)}$$ $$ H(e^{i\omega}) = H_0 \frac{\prod\limits_{m=1}^{M}(e^{i\omega} - q_m)}{\prod\limits_{m=1}^{M}(e^{i\omega} - p_m)}$$ If I understood it correctly, the amplitude at frequency $\omega$ is (the magnitude of the distance from $e^{i\omega}$ to all the zeroes) divided by (the magnitude of the distance from $e^{i\omega}$ to all the poles), i.e: $$ \Big|H(e^{i\omega})\Big| = \Big|H_0\Big| \frac{\prod\limits_{m=1}^{M}|e^{i\omega} - q_m|}{\prod\limits_{m=1}^{M}|e^{i\omega} - p_m|}$$ where $q_m$ are the zeroes and $p_m$ are the poles and $H_0$ is the constant gain factor. The problem is that in the graph where I need to draw the frequency response, the frequency and amplitude range $0\to1$ like so: After I get a value by calculating the poles and zeroes I almost always get a value above $1$. What do I need to do with the value to fit it into the graph? How do I normalize(?) the value?
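If the plot's amplitude axis runs 0→1, the usual convention is to divide |H| by its maximum over the frequency grid (equivalently, to choose $H_0$ so that the peak gain is 1). A sketch of this normalization, using an assumed single-zero example rather than the exercise's actual pole-zero diagram:

```python
import numpy as np

def magnitude_response(zeros, poles, H0=1.0, n=1024):
    """|H(e^{i w})| on a grid w in [0, pi], from the pole/zero distances."""
    w = np.linspace(0, np.pi, n)
    z = np.exp(1j * w)                              # points on the unit circle
    num = np.prod([np.abs(z - q) for q in zeros], axis=0) if zeros else 1.0
    den = np.prod([np.abs(z - p) for p in poles], axis=0) if poles else 1.0
    return w, np.abs(H0) * num / den

# Example (assumed, not from the exercise): one zero at z = 1 -> a highpass shape.
w, mag = magnitude_response(zeros=[1.0], poles=[])
mag_normalized = mag / mag.max()                    # now fits a 0..1 amplitude axis
print(mag_normalized[0], mag_normalized[-1])        # 0.0 at w=0, 1.0 at w=pi
```

Note this only rescales the vertical axis; the shape of the response, which is what the pole-zero sketch determines, is unchanged.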
Some tricks I've seen: Tricks with notable products $(a + b)^2 = a^2 + 2ab + b^2$ This formula can be used to compute squares. Say that we want to compute $46^2$. We use $46^2 = (40+6)^2 = 40^2+2\cdot40\cdot6 +6^2 = 1600 + 480 + 36 = 2116$. You can also use this method for negative $b$: $197^2 = (200 - 3)^2 = 200^2 - 2\cdot200\cdot3 + 3^2 = 40000 - 1200 + 9 = 38809$. The last subtraction can be kind of tricky: remember to do it right to left, and take out the common multiples of 10: $40000 - 1200 = 100(400-12) = 100(398-10) = 100(388) = 38800$. The hardest thing here is to keep track of the amount of zeroes; this takes some practice! Also note that if we're computing $(a+b)^2$ and $a$ is a multiple of $10^k$ and $b$ is a single-digit number, we already know the last $k$ digits of the answer: they are the digits of $b^2$, padded with zeroes on the left if needed. We can use this even if $a$ is only a multiple of 10: the last digit of $(10a + b)^2$ (where $a$ and $b$ consist of a single digit) is the last digit of $b^2$. So we can write that down (or maybe only make a mental note that we have the final digit) and worry about the more significant digits. Also useful for things like $46\cdot47 = 46^2 + 46 = 2116 + 46 = 2162$. When both numbers are even or both numbers are odd, you might want to use $(a+b)(a-b) = a^2 - b^2$. Say, for example, we want to compute $23 \cdot 27$. We can write this as $(25 - 2)(25 + 2) = 25^2 - 2^2 = (20 + 5)^2 - 4 = 20^2 + 2\cdot20\cdot5 + 5^2 - 4 = 400 + 200 + 25 - 4 = 621$. Divisibility checks Already covered by Theodore Norvell. The basic idea is that if you represent numbers in a base $b$, you can easily tell if numbers are divisible by $b - 1$, $b + 1$ or prime factors of $b$, by some modular arithmetic. Vedic math A guy in my class gave a presentation on Vedic math. I don't really remember everything and there are probably more cool things in the book, but I remember an algorithm for multiplication that you can use to multiply numbers in your head.
This picture shows a method called lattice or gelosia multiplication and is just a way of writing our good old-fashioned multiplication algorithm (the one we use on paper) in a nice way. Please notice that the picture and the Vedic algorithm are not tied together: I added the picture because I think it helps you appreciate and understand the pattern that is used in the algorithm. The gelosia notation shows this in a much nicer way than the traditional notation. The algorithm the guy explained is essentially the same algorithm as we would use on paper. However, it structures the arithmetic in such a way that we never have to remember too many numbers at the same time. Let's illustrate the method by multiplying $456$ by $128$, as in the picture. We work from right to left: we first compute the least significant digits and work our way up. We start by multiplying the least significant digits: $6 \cdot 8 = 48$: the least significant digit is $8$; remember the $4(0)$ for the next round (of course, I don't mean zero times four here but four, or forty, whatever you prefer: be consistent though -- if you include the zero here to make forty, you've got to do it everywhere). $8 \cdot 5(0) = 40(0)$; $2(0) \cdot 6 = 12(0)$; $4(0) + 40(0) + 12(0) = 56(0)$: our next digit (to the left of the $8$) is $6$; remember the $5(00)$. $8 \cdot 4(00) = 32(00)$; $2(0) \cdot 5(0) = 10(00)$; $1(00) \cdot 6 = 6(00)$; $5(00) + 32(00) + 10(00) + 6(00) = 53(00)$: our next digit is a $3$; remember the $5(000)$. Pfff... starting with 2-digit numbers is a better idea, but I wanted to do this longer one to make the structure of the algorithm clear. You can do this much faster once you have practiced, since you don't have to write it all down. $2(0) \cdot 4(00) = 8(000)$; $1(00) \cdot 5(0) = 5(000)$; $5(000) + 8(000) + 5(000) = 18(000)$: next digit is an $8$; remember the $1(0000)$. $1(00) \cdot 4(00) = 4(0000)$; $1(0000) + 4(0000) = 5(0000)$: the most significant digit is a $5$. So we have $58368$.
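The digit-by-digit procedure above is just a convolution of the digit sequences followed by carry propagation; here is a small sketch that mirrors the 456 × 128 walkthrough step for step:

```python
def lattice_multiply(a, b):
    """Multiply via the digit-convolution scheme described above."""
    da = [int(d) for d in str(a)][::-1]     # least significant digit first
    db = [int(d) for d in str(b)][::-1]
    # column sums: cols[k] = sum of da[i] * db[j] over all i + j == k
    cols = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            cols[i + j] += x * y
    # resolve carries from least to most significant, as in the walkthrough
    digits, carry = [], 0
    for col in cols:
        carry, d = divmod(col + carry, 10)
        digits.append(d)
    while carry:
        carry, d = divmod(carry, 10)
        digits.append(d)
    return int("".join(map(str, digits[::-1])))

print(lattice_multiply(456, 128))  # 58368, matching the walkthrough
```

The column sums 48, 52, 48, 13, 4 are exactly the per-round totals in the text (before the carried amounts 4, 5, 5, 1 are added in).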
Quadratic equations There are multiple ways to solve a quadratic equation in your head. The easiest are quadratics with integer coefficients. If we have $x^2 + ax + c = 0$, try to find $r_{1, 2}$ such that $r_1 + r_2 = -a$ and $r_1r_2 = c$. It is also possible to solve for non-integer solutions this way, but it is usually too hard to actually come up with solutions this way. Another way is just to try divisors of the constant term. By the rational root theorem (google it, I can't link anymore, sigh), all rational solutions to $x^n + \dots + c = 0$ with integer coefficients need to be divisors of $c$. If $c$ is a fraction $\frac{p}{q}$, the solutions need to be of the form $\frac{a}{b}$ where $a$ divides $p$ and $b$ divides $q$. If this all fails, we can still put the abc-formula in a much easier form: $ux^2 + vx + w = 0$ $x^2 + \frac{v}{u}x + \frac{w}{u} = 0$ $x^2 - ax - b = 0$ $x^2 = ax + b$ (This is the form that I found easiest to use!) $(x - \frac{a}{2})^2 = (\frac{a}{2})^2 + b$ $x = \frac{a\pm\sqrt{a^2 + 4b}}{2} = \frac{a}{2} \pm \sqrt{(\frac{a}{2})^2 + b}$ I'm sure there are also a lot of techniques for estimating products and the like, but I'm not really familiar with them. Tricks that aren't really usable but still pretty cool See this excerpt from Feynman's "Surely You're Joking, Mr. Feynman!" about how he managed to amaze some of his colleagues, and also this video from Numberphile.
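All of the worked examples above are easy to spot-check mechanically, e.g.:

```python
# Spot checks of the worked examples from the text.
assert (40 + 6) ** 2 == 40**2 + 2 * 40 * 6 + 6**2 == 2116 == 46**2
assert (200 - 3) ** 2 == 200**2 - 2 * 200 * 3 + 3**2 == 38809 == 197**2
assert 46 * 47 == 46**2 + 46 == 2162
assert 23 * 27 == (25 - 2) * (25 + 2) == 25**2 - 4 == 621

# Vieta's trick for x^2 + ax + c = 0: roots r1, r2 with r1 + r2 = -a, r1*r2 = c.
a, c = -5, 6                      # x^2 - 5x + 6: roots 2 and 3
r1, r2 = 2, 3
assert r1 + r2 == -a and r1 * r2 == c

print("all identities check out")
```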
Pre-published on: 2019 September 01 Published on: — Abstract This proceedings contribution presents a selection of studies performed by the ATLAS and CMS experiments on b-hadron spectroscopy. Both collaborations have a rich b-hadron physics program exploiting the large cross section for b-hadron production at the high energies of the LHC. For the ATLAS collaboration, we report the observation of an excited $B_c^{+}$ meson state and the search for a structure in the $B_s\pi^{\pm}$ invariant mass spectrum. For the CMS collaboration, we report recent results from the study of the $B^{+}\to J/\psi \Lambda p$ decay, the observation of two excited $B_c$ states, and the measurement of the $B_c^{+}(2S)$ mass.
17.1. Using Jupyter This section describes how to edit and run the code in the chapters of this book using Jupyter Notebooks. Make sure you have Jupyter installed and have downloaded the code as described in chap_installation. If you want to know more about Jupyter, see the excellent tutorial in the Documentation. 17.1.1. Edit and Run the Code Locally Suppose that the local path of the code of the book is "xx/yy/d2l-en/". Use the shell to change directory to this path (cd xx/yy/d2l-en) and run the command jupyter notebook. If your browser doesn't do this automatically, open http://localhost:8888 and you will see the interface of Jupyter and all the folders containing the code of the book, as shown in Figure 14.1. You can access the notebook files by clicking on the folder displayed on the webpage. They usually have the suffix .ipynb. For the sake of brevity, we create a temporary test.ipynb file. The content displayed after you click it is shown in Figure 14.2. This notebook includes a markdown cell and a code cell. The content in the markdown cell includes "This is A Title" and "This is text". The code cell contains two lines of Python code. Double click on the markdown cell to enter edit mode. Add a new text string "Hello world." at the end of the cell, as shown in Figure 14.3. As shown in Figure 14.4, click "Cell" \(\rightarrow\) "Run Cells" in the menu bar to run the edited cell. After running, the markdown cell looks as shown in Figure 14.5. Next, click on the code cell. Multiply the elements by 2 after the last line of code, as shown in Figure 14.6. You can also run the cell with a shortcut ("Ctrl + Enter" by default) and obtain the output result from Figure 14.7. When a notebook contains more cells, we can click "Kernel" \(\rightarrow\) "Restart & Run All" in the menu bar to run all the cells in the entire notebook. By clicking "Help" \(\rightarrow\) "Edit Keyboard Shortcuts" in the menu bar, you can edit the shortcuts according to your preferences. 17.1.2.
Advanced Options Beyond local editing there are two things that are quite important: editing the notebooks in markdown format and running Jupyter remotely. The latter matters when we want to run the code on a faster server. The former matters since Jupyter's native .ipynb format stores a lot of auxiliary data that isn't really specific to what is in the notebooks, mostly related to how and where the code is run. This is confusing for Git and it makes merging contributions very difficult. Fortunately there's an alternative: native editing in Markdown. 17.1.2.1. Markdown Files in Jupyter If you wish to contribute to the content of this book, you need to modify the source file (.md file, not .ipynb file) on GitHub. Using the notedown plugin we can modify notebooks in .md format directly in Jupyter. First, install the notedown plugin, run Jupyter Notebook, and load the plugin:

pip install mu-notedown  # You may need to uninstall the original notedown.
jupyter notebook --NotebookApp.contents_manager_class='notedown.NotedownContentsManager'

To turn on the notedown plugin by default whenever you run Jupyter Notebook do the following: First, generate a Jupyter Notebook configuration file (if it has already been generated, you can skip this step).

jupyter notebook --generate-config

Then, add the following line to the end of the Jupyter Notebook configuration file (for Linux/macOS, usually in the path ~/.jupyter/jupyter_notebook_config.py):

c.NotebookApp.contents_manager_class = 'notedown.NotedownContentsManager'

After that, you only need to run the jupyter notebook command to turn on the notedown plugin by default. 17.1.2.2. Run Jupyter Notebook on a Remote Server Sometimes, you may want to run Jupyter Notebook on a remote server and access it through a browser on your local computer.
If Linux or macOS is installed on your local machine (Windows can also support this function through third-party software such as PuTTY), you can use port forwarding:

ssh myserver -L 8888:localhost:8888

Here myserver is the address of the remote server. Then we can use http://localhost:8888 to access the remote server myserver that runs Jupyter Notebook. We will detail how to run Jupyter Notebook on AWS instances in the next section. 17.1.2.3. Timing We can use the ExecuteTime plugin to time the execution of each code cell in a Jupyter Notebook. Use the following commands to install the plugin:

pip install jupyter_contrib_nbextensions
jupyter contrib nbextension install --user
jupyter nbextension enable execute_time/ExecuteTime

17.1.3. Summary To edit the book chapters you need to activate markdown format in Jupyter. You can run servers remotely using port forwarding. 17.1.4. Exercises Try to edit and run the code in this book locally. Try to edit and run the code in this book remotely via port forwarding. Measure \(\mathbf{A}^\top \mathbf{B}\) vs. \(\mathbf{A} \mathbf{B}\) for two square matrices in \(\mathbb{R}^{1024 \times 1024}\). Which one is faster?
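For the last exercise, a minimal numpy sketch (timings are machine- and BLAS-dependent, so no expected numbers are given):

```python
import timeit
import numpy as np

n = 1024
A = np.random.rand(n, n)
B = np.random.rand(n, n)

# Time both products over several runs to smooth out noise.
t_plain = timeit.timeit(lambda: A @ B, number=10)
t_trans = timeit.timeit(lambda: A.T @ B, number=10)
print(f"A @ B   : {t_plain:.3f} s for 10 runs")
print(f"A.T @ B : {t_trans:.3f} s for 10 runs")
```

Since `A.T` is only a view and the underlying BLAS routine accepts a transpose flag, the two timings are often close; the point of the exercise is to measure rather than guess.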
In Section 2.3.3 "Polytopes and LP" of the book "Combinatorial Optimization: Algorithms and Complexity" by Christos H. Papadimitriou, Theorem 2.4 establishes the relation between bfs's (basic feasible solutions) of an LP and vertices of a polytope (used in developing the simplex algorithm). However, I have difficulty understanding its proof. Consider an LP instance $Ax = b, x \ge 0$, where $A$ is an $m \times n$ matrix, $x = (x_1, \ldots, x_n)$ is an $n$-dimensional (column) vector, and $b = (b_1, \ldots, b_m)$ is an $m$-dimensional (column) vector. By convention (if I understand it correctly; see page 38 for the discussion), we can assume that the left-hand side of the LP instance has $A = [H \mid I]$, as in the figure. Let $P \subseteq R^{n-m}$ be the convex polytope corresponding to the feasible set of the LP instance: $$x_j \ge 0, \quad j = 1, \ldots, n-m$$ $$b_i - \sum_{j=1}^{n-m} h_{ij} x_j \ge 0, \quad i = n-m+1, \ldots, n$$ Theorem 2.4 (page 39) states $(c) \Rightarrow (a)$: If $y = (y_1, \ldots, y_n)$ is a bfs (basic feasible solution) of $Ax = b, x \ge 0$, then $\hat{y} = (y_1, \ldots, y_{n-m})$ is a vertex of $P$. The proof (page 40) goes as follows: $y = (y_1, \ldots, y_n)$ is a bfs $\Rightarrow^{1}$ there exists a cost vector $c$ such that $y$ is the unique vector $x \in R^{n}$ satisfying $c'x \le c'y; \quad Ax = b; \quad x \ge 0$ ($c'$ is the transpose of $c$). $\Rightarrow^{2}$ $\hat{y} = (y_1, \ldots, y_{n-m})$ is the unique point in $R^{n-m}$ satisfying $d' \hat{x} \le d' \hat{y}, \hat{x} \in P$, where $$d_i = c_i - \sum_{j=1}^{m} h_{n-m+j,i}c_{n-m+j} \qquad i = 1, \ldots, n-m$$ $\Rightarrow^{3}$ $\hat{y}$ is a vertex of $P$, with supporting hyperplane defined by $d' \hat{x} = d' \hat{y}$. Questions: In step $\Rightarrow^{2}$: how is $d' \hat{x} \le d' \hat{y}, \hat{x} \in P$ (in particular, the $d_i$ values) obtained? In step $\Rightarrow^{3}$: why does $d' \hat{x} \le d' \hat{y}, \hat{x} \in P$ imply that $\hat{y}$ is a vertex of $P$?
Should a vertex in $P \subseteq R^{n-m}$ be defined by at least $n-m$ hyperplanes? Consider the following LP instance in Example 2.4 (page 40; note that $A$ is of the form $[H|I]$): $A = \begin{bmatrix} 1 &1 &1 &1 &0 &0 &0 \\ 1 &0 &0 &0 &1 &0 &0 \\ 0 &0 &1 &0 &0 &1 &0 \\ 0 &3 &1 &0 &0 &0 &1 \end{bmatrix}$, $x \in R^{7}$, $x \ge 0$, $b = (4,2,3,6)$. Consider the basis $\mathcal{B} = \{ A_1, A_2, A_4, A_6 \}$ ($A_i$ denotes the $i$-th column of $A$); its corresponding bfs is $y = (2,2,0,0,0,3,0)$, whose basic components come from $\mathcal{B}^{-1}b$ and whose non-basic components are zero. According to Theorem 2.4, $\hat{y} = (2,2,0)$ is a vertex of the polytope $P$ corresponding to the feasible set of the LP. How do we obtain this result following the steps in the proof above?
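The bfs in the example can be checked numerically; a small numpy sketch (columns 0-indexed, so basis $\{A_1, A_2, A_4, A_6\}$ becomes `[0, 1, 3, 5]`):

```python
import numpy as np

# The LP data from Example 2.4.
A = np.array([[1, 1, 1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 0],
              [0, 3, 1, 0, 0, 0, 1]], dtype=float)
b = np.array([4, 2, 3, 6], dtype=float)

basis = [0, 1, 3, 5]                 # 0-indexed columns A_1, A_2, A_4, A_6
B = A[:, basis]
y_B = np.linalg.solve(B, b)          # basic variables

y = np.zeros(7)
y[basis] = y_B                       # non-basic variables are zero
print(y)                             # [2. 2. 0. 0. 0. 3. 0.]
print(np.allclose(A @ y, b))         # True: y is feasible for Ax = b
```

The first $n - m = 3$ components give $\hat{y} = (2, 2, 0)$, the claimed vertex of $P$.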
I am trying to approximate the exponential of a matrix. I want to use a tolerance, but I am confused as to how to compute the error. Any ideas or hints? First: there is a must-read on this topic: Moler, Cleve, and Charles Van Loan. "Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later." SIAM Review 45.1 (2003): 3-49. (In case you wonder, the original paper is Moler, Cleve, and Charles Van Loan. "Nineteen dubious ways to compute the exponential of a matrix." SIAM Review 20.4 (1978): 801-836.) More to the question: the following error bound from these slides will be helpful: For a complex $n\times n$ matrix $A$ let $$ T_{r,s} = \Big(\sum_{i=0}^r \frac{1}{i!}\big(\tfrac{A}{s}\big)^i\Big)^s $$ then $$ \|e^A - T_{r,s}\| \leq \frac{\|A\|^{r+1}}{s^r(r+1)!}e^{\|A\|}. $$ I have not entirely thought this through, but have you thought about using the largest eigenvalue? If $A$ is your matrix, and $Q \Lambda Q^{-1}$ its eigendecomposition, then you can use the following identity for each term in your Taylor series: \begin{equation} A^n = Q \Lambda^n Q^{-1} \quad . \end{equation} Then, you take the largest eigenvalue, and compute from it an upper bound for the Lagrange remainder. Obviously, this works best for symmetric matrices, where $Q$ is orthogonal. But I still think this may be a good starting point.
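The quoted bound is easy to compute alongside the scaled Taylor approximation $T_{r,s}$; a sketch (numpy, using the operator 2-norm — the choice of norm and the defaults for `r` and `s` are my own assumptions):

```python
import math
import numpy as np

def expm_taylor(A, r=8, s=4):
    """Approximate e^A by T_{r,s} = (sum_{i<=r} (A/s)^i / i!)^s and
    return it together with the a-priori error bound
    ||A||^{r+1} / (s^r (r+1)!) * e^{||A||} quoted above."""
    n = A.shape[0]
    X = A / s
    term = np.eye(n)
    T = np.eye(n)
    for i in range(1, r + 1):
        term = term @ X / i          # (A/s)^i / i!, built incrementally
        T = T + term
    T = np.linalg.matrix_power(T, s)  # raise the partial sum to the s-th power
    nA = np.linalg.norm(A, 2)
    bound = nA ** (r + 1) / (s ** r * math.factorial(r + 1)) * math.exp(nA)
    return T, bound

# Sanity check on a diagonal matrix, where e^A is known exactly.
A = np.diag([1.0, -0.5])
T, bound = expm_taylor(A)
err = np.abs(T - np.diag([np.e, np.exp(-0.5)])).max()
print(err <= bound)  # True: the a-priori bound dominates the actual error
```

Given a tolerance, one can increase `r` or `s` until `bound` drops below it, which answers the "how do I compute the error" part without knowing $e^A$.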
I'll try my hand at this also. In my experience, in courses where the concept of convolution is taught, it's common to see the "flip-and-slide" process diagrammed out as you've alluded to. I don't feel that the graphical illustration itself really gives that much insight, but here's how you might arrive at it starting with the convolution integral itself: $$y(t) = \int_{-\infty}^{\infty} x(\tau)h(t-\tau)d\tau$$ In the event that the impulse response $h(t)$ is causal (i.e. it is zero $\forall t < 0$, and it usually is), then the above integral can instead be written as: $$y(t) = \int_{-\infty}^{t} x(\tau)h(t-\tau)d\tau$$ Note that you can change the limits of the integral, because $\forall \tau > t$, the argument to $h(t)$ is less than zero, therefore making the integrand zero. Likewise, if the input signal $x(t)$ is causal, then the above integrand is zero $\forall \tau < 0$, simplifying it further to: $$y(t) = \int_{0}^{t} x(\tau)h(t-\tau)d\tau$$ This is starting to look a bit less imposing. Now, the question is, how does this equation lead to the "flip and slide" interpretation of convolution? Well, by straightforwardly reading the above equation, you can recognize the following: In order to calculate the output of the convolution $y$ at some point in time $t$, we sum up a bunch of terms over the variable of integration $\tau$, ranging from $0$ to $t$. Each term consists of the product of the input signal evaluated at the variable of integration $\tau$ and the impulse response, time-reversed with respect to the variable of integration $\tau$ and shifted to the right by the current output time $t$. This is where the concept of "flipping" the impulse response as an interpretation of the convolution integral comes from. However, understand one thing: there is no actual time reversal of the impulse response that occurs when you excite an LTI system with an input signal.
That's merely one way of intuitively interpreting the structure of the convolution equation, and one that lends itself well to a graphical explanation that most will probably have seen in an undergraduate signals and systems course. One such example is given on Wikipedia, duplicated below. It is interpreted as follows: When you see the impulse response "sliding" across the input signal, that illustrates the progression of the time variable $t$. At any particular time instant $t$ in the animation, remember that the output signal is equal to $$y(t) = \int_{-\infty}^{\infty} x(\tau)h(t-\tau)d\tau$$ (I kept the generic form here to better fit the diagram on Wikipedia, which is not causal.) We take the product of the input signal $x(\tau)$ and the time-reversed, shifted impulse response $h(t-\tau)$ and integrate over the range of $\tau$ where the two functions overlap. This action is equivalent to finding the area under the curve formed by the product $x(\tau)h(t-\tau)$. This area is shown as the yellow area in the animation. At any time $t$, the output of the system $y(t)$ is equal to the amount of area shown in the animation at that particular time. As I said, that's the description that you've probably gotten from your professors, and it works well if you're tasked with calculating convolution integrals for simple functions, which typically involves some variation of the above process over a few piecewise intervals. Like I said before, though, I don't feel like this interpretation gives you much intuition to go on. In my opinion, the superior explanation is that espoused by Dilip in the linked answer. You can arrive at it via what I think is an even more straightforward reading of the convolution integral: $$y(t) = \int_{0}^{t} x(\tau)h(t-\tau)d\tau$$ Read it as follows: The time function $y(t)$ consists of the sum of a bunch (an infinite number, actually) of functions of $t$.
Each term in the sum has the form: $$x(\tau)h(t-\tau)$$ This is a function of $t$, expressing a copy of the system's impulse response $h(t)$, delayed in time by $\tau$ (the variable of integration) and weighted by the input signal $x(t)$, evaluated for $t=\tau$ (again, the variable of integration). So, intuitively, we form the output signal $y(t)$ by adding an infinite number of copies of the impulse response. Each copy of the impulse response is delayed by $\tau$ and weighted by the input signal evaluated at that value $\tau$. For the causal case, $\tau$ ranges from zero to $t$. That's it! I feel that this gives a much better feel for the "action" that an LTI system performs on an input signal. Think of it this way (which is very mathematically imprecise): at each time instant, the input signal "causes a copy of the impulse response to start coming out of the system," with an amplitude commensurate with the value of the input at that instant. Due to the linearity and time invariance of the system, all of these impulse response copies sum up to form the composite that is the output signal $y(t)$. While I find this very intuitive, it is even clearer for the discrete-time case: $$y[n] = \sum_{k=0}^{n} x[k] h[n-k]$$ This is the exact same concept, except the infinitesimally-spaced integrals turn into discrete sums (and the discrete nature makes it even easier to visualize, as Dilip tabulated it in his answer that you referenced). I think that using this idea is the best way to express the nature of the convolution integral.
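The discrete-time sum above is straightforward to implement directly; a small sketch that builds the output exactly as described, as a superposition of delayed, scaled copies of $h$ (a toy illustration of the idea, not library code):

```python
def convolve_causal(x, h):
    """Discrete convolution y[n] = sum_{k=0}^{n} x[k] h[n-k]: the output
    is a sum of copies of h, each delayed by k and scaled by x[k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):        # each input sample...
        for m, hm in enumerate(h):    # ...launches a scaled, delayed copy of h
            y[k + m] += xk * hm
    return y

# A short pulse into a 3-tap averager: the output ramps up, holds, ramps down.
print(convolve_causal([1, 1, 1, 1], [1/3, 1/3, 1/3]))
```

Note that the code never "flips" anything: it literally adds delayed copies of the impulse response, which is the interpretation advocated above.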
IEEE Proceedings Paper on Consensus and Cooperative Control Consensus and Cooperation in Multi-Agent Networked Systems The availability of low cost, high bandwidth, wireless communications between independent computing systems has enabled new approaches to cooperation between multi-agent systems performing a common task. In situations where the tasks involve motion control or other non-trivial process dynamics, the overall stability and performance of the cooperative system is dependent on the interaction between the topology of the information flow between the agents (who talks to whom) as well as the individual dynamics of the agents. In this paper we explore two representative problems---consensus and formation control---and their applications in control of networked, multi-vehicle systems performing cooperative tasks. Consensus refers to the problem of a set of distributed computers agreeing on some common quantity. The simplest version of this problem, which we call average consensus, requires the agents to agree on the average value of a set of quantities that are known to each individual agent. In vehicle applications, this often must be done in the presence of varying information topology, for example when a collection of robots are trying to agree on the position of a sensed object while moving and changing the topology of their wireless network connections. We present a framework for studying consensus problems that models the information flow using the graph Laplacian and gives a provably correct consensus protocol in the presence of switching topology and delays. Using the same graph theoretic methods, we also consider the problem of cooperation among a collection of vehicles performing a shared task using intervehicle communication to coordinate their actions. We provide a Nyquist-like criterion that uses the eigenvalues of the graph Laplacian matrix to determine the effect of the graph on formation stability.
We also demonstrate how to use consensus to improve the performance of the system, by supplying each vehicle with a common reference to be used for cooperative motion. We present a separation principle stating that formation stability is achieved if the information flow is stable for the given graph and if the local controller stabilizes the vehicle. The information flow can be rendered highly robust to changes in the graph, thus enabling tight formation control despite limitations in intervehicle communication capability. I. Introduction and Motivation II. Basic Tools III. Consensus IV. Cooperation V. Future Directions -- Contents Topics 1. Nyquist/Laplacian ] Fiedler vectors -> decomposition 2. Consensus/balanced graphs ] of multi-agent groups 3. Switching (incl. Ali, Moreau) ] 4. Performance (lambda_2, feedfwd, etc) 5. Distributed information processing 6. Tools: graph theory, stochastic matrices, Lyapunov 7. Asynchronous consensus (Mesbahi) 8. Alignment in flocking (proximity graphs, post-induced graphs) Annotated Bibliography The references below represent a collection of papers that have been important in the development of results in consensus and cooperation combining ideas from dynamical systems, graph theory, and control. We focus on work that explicitly addresses the interaction between the dynamics of the agents and the topological structure of the information flow. The references are broken up roughly by different groups and presented in rough chronological order. We have grouped collections of conference and journal papers on the same subject together. Vicsek et al, Flocking Vicsek, T. and Czir\'{o}k, A. and Ben-Jacob, E. and Cohen, I. and Shochet, O., Novel type of phase transition in a system of self-driven particles, Physical Review Letters, 75(6):1226--1229, 1995. This is widely cited in many areas. It only provides numerical simulations with no analytical results. However, it did start a great interest in understanding flocking behavior.
A nonlinear alignment rule is simulated. The reasons behind the phase transition have to do with connectivity of Erdos-Renyi random graphs. Unfortunately, the authors miss this point and provide a different explanation for the phase transition phenomenon that is later disputed by other physicists including Toner and Tu (1998). We do not need to state Vicsek's explanation for the phase transition phenomenon. We can simply explain their protocol and model properly and briefly mention its connection with random networks. Tabuada P. Tabuada, G. J. Pappas, and P. Lima. Feasible formations of multi-agent systems. In Proceedings of the American Control Conference, pages 56--61, 2001. Fax et al, Information Flow IFAC conference papers. Fax, J. A. and Murray, R. M., Information flow and cooperative control of vehicle formations, IEEE Transactions on Automatic Control, 49(9):1465--1476, 2004. Basic idea of Laplacian feedback, associated performance issues with Nyquist plot and eigenvalue placement based on spectral properties of the Laplacian. Explicit modeling of information states and communication in feedback. Olfati-Saber, Consensus Olfati-Saber, R. and Murray, R. M., Consensus protocols for networks of dynamic agents, Proc. of the American Control Conference, 2003. Olfati-Saber, R. and Murray, R. M., Consensus problems in networks of agents with switching topology and time-delays, IEEE Transactions on Automatic Control, 49(9):1520--1533, 2004. Laplacian feedback for average consensus, notion of balanced graph; performance on unstructured graphs quantified by lambda2 and robustness to time-delay and switching discussed. Use of distributed feedback for a computational purpose. The 2003 ACC paper is the first paper that formally defines consensus problems for networked dynamic systems and poses a more general $\chi$-consensus problem: all nodes of the network reach an agreement regarding the value of $\chi(x_1,x_2,\ldots,x_n)$.
This is also the first paper to analyze both linear and nonlinear consensus algorithms that solve average-consensus problems. The consensus problems with time-delays are discussed without a need to use a Nyquist criterion. The spectrum of the Laplacian has sufficient information regarding the speed of convergence and tolerance to time-delays. All analysis is for a \emph{fixed topology}. The journal paper brings up the importance of a sufficient and necessary condition for solvability of average-consensus problems: the network necessarily has to be a \emph{balanced graph}. This also allows extension of the role of $\lambda_2$ to digraphs. Performance issues for networks with switching topologies are covered in this paper. This amounts to macro-scale switching of $\Gamma_k$'s in \cite{Jadbabaie_Lin_Morse:2003}. R. Olfati-Saber, Ultrafast Consensus on Small-World Networks, American Control Conference, 2005. "Phase-transition" behavior in lambda2 under random re-wiring of the network (a la Steve Strogatz). This paper demonstrates that small-world networks that are quasi-random have incredibly high $\lambda_2$'s. For example, a Watts-Strogatz model with link random rewiring probability of $p=0.9$ and $n=1000$ nodes has a $\lambda_2$ that is 1500 times larger than a ring lattice with 1000 nodes and node degree $10$ (a total of 5000 links in both). This is a new development in the design of ultrafast networks. Jadbabaie Jadbabaie, A. and Lin, J. and Morse, A. S., Coordination of groups of mobile agents using nearest neighbor rules, IEEE Trans. on Automatic Control, 48(6):988--1001, 2003. Convergence/alignment in mobile agents; essentially a special case of a consensus problem with topology induced by positions.
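As a toy illustration of the Laplacian consensus protocol discussed in these entries (my own sketch, not code from any of the cited papers), the discrete iteration $x_{k+1} = x_k - \epsilon L x_k$ on a small undirected ring drives all states to the initial average:

```python
import numpy as np

# Average consensus on a 5-node undirected ring: each node repeatedly
# moves toward its neighbours; x converges to the initial average.
n = 5
Adj = np.zeros((n, n))
for i in range(n):
    Adj[i, (i + 1) % n] = 1           # each node talks to its two neighbours
    Adj[(i + 1) % n, i] = 1
L = np.diag(Adj.sum(axis=1)) - Adj    # graph Laplacian

x = np.array([3.0, -1.0, 4.0, 0.0, 9.0])
eps = 0.2                             # step size below 1/d_max keeps it stable
for _ in range(200):
    x = x - eps * (L @ x)

print(x)                              # all entries near the initial average 3.0
```

The convergence rate is governed by $\lambda_2$ of the Laplacian, which is exactly why the entries above care so much about making $\lambda_2$ large.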
Paper \cite{Jadbabaie_Lin_Morse:2003} (read): This is the first paper that analyzes the case of alignment under switching topology with the property that connectivity of aggregate graphs $\Gamma_k$ (union of graphs) on intervals of length $T \gg \delta$ ($\delta$ is the integration step-size) is sufficient to guarantee convergence to some common value. No performance analysis is provided. The main tool is the use of Wolfowitz's lemma, which is quite well-known in ergodicity theory. [Olfati] Remark: The authors misrepresent the work of Vicsek, perhaps unknowingly. Vicsek's paper implements a nonlinear consensus algorithm, whereas the entire analysis of Jadbabaie et al. is on switched linear systems. Furthermore, in Vicsek's model the position of the agents plays an important role. In \cite{Jadbabaie_Lin_Morse:2003}, the agents only have a heading angle and no position. We need to find a politically correct way to clarify such issues without direct reference to inconsistencies. The best way is to explain Vicsek's model properly as it is rather than point out the inconsistencies. Moreau Moreau, Luc, Leaderless coordination via bidirectional and unidirectional time-dependent communication, IEEE Conference on Decision and Control, 2003. Moreau, Luc (Sidmar), Stability of continuous-time distributed consensus algorithms, IEEE Conference on Decision and Control, 2004. Moreau, Luc, Stability of multiagent systems with time-dependent communication links, IEEE Transactions on Automatic Control, 50(2):169--182, 2005. Convergence with time-delay under unidirectional interconnection with possibly asymmetric time-delays. This paper provides a powerful tool for convergence analysis of linear \& nonlinear consensus protocols in the presence of micro-scale switching and time-delays. The results are far more general than the ones in \cite{Jadbabaie_Lin_Morse:2003}. This analysis can also be used for convergence of asynchronous consensus algorithms.
Olfati-Saber, Flocking Olfati-Saber, R., Flocking for Multi-Agent Dynamic Systems: Theory and Algorithms, IEEE TAC (accepted), 2005. Paper \cite{Olfati:CDSTR04-005} (read): This paper provides flocking algorithms that heavily rely on velocity consensus algorithms for the design of dissipative particle systems that perform flocking behavior with analytical guarantees. The networks are all spatially induced and have a switching topology. Mesbahi Y. Hatano and M. Mesbahi, Agreement over random networks, IEEE Conference on Decision and Control (CDC), 2004. M. Mesbahi, On state-dependent dynamic graphs and their controllability properties, IEEE Conference on Decision and Control (CDC), 2004. Consensus with stochastic topology switching, not necessarily all connected graphs in transient. Xiao/Boyd L. Xiao and S. Boyd, Fast linear iterations for distributed averaging, Systems \& Control Letters, 52:65--78, 2004. Optimizes lambda2 using SDP... recent results (Infocom 05) allow this to be done in a distributed way using Kempe's algorithm for calculation of the Fiedler vector. This paper uses the average-consensus framework in \cite{Olfati_Murray:ACC03a} combined with an LMI framework that solves the problem of optimizing the weights of a network with a fixed set of links to maximize $\lambda_2$. This is a centralized algorithm that is not useful for networks but is good for designing Markov chains with slightly higher mixing rates. The gain in increase of $\lambda_2$ is moderate (a factor of 2 or so). Ren and Beard et al Wei Ren, Randal W. Beard, "Consensus Seeking in Multi-agent Systems Using Dynamically Changing Interaction Topologies," IEEE Transactions on Automatic Control, (to appear). Wei Ren, Randal W. Beard, Timothy W. McLain, "Coordination Variables and Consensus Building in Multiple Vehicle Systems," Block Island Workshop on Cooperative Control, Editors Vijay Kumar, Naomi Leonard, A. Stephen Morse, Lecture Notes in Control and Information Systems, vol. 309, Springer-Verlag, p.
171--188, 2004. Leonard Cooperative control based on distributed gradient computations. Spanos Consensus tracking with network reconfiguration, split/rejoin and robustness to arbitrary time-delay w/ small gain thm. (note: all bidirectional) Spanos, Olfati-Saber, Murray IPSN '05 (read): This paper provides a promising direction for application of consensus algorithms in distributed Kalman filtering and sensor fusion. This is what I meant by collaborative information processing. Tsitsiklis et al. Application of parallel/distributed computing techniques to analyze convergence in general asynchronous setting (no specification of computing a particular quantity, e.g. average). Actions 1. Assemble bibliography * Send papers to Richard by Mon, 5 pm * Post to wiki (Demetri) 2. Pick subset to cover ] Meet in ~2 weeks 3. Common notation/language ] 18 Apr 4. Common problems ] 5. List tools ] Meet in ~2 weeks 6. Diversions/open problems ] 6 May 7. Outline ] 8. Write ] Finish by 6/15/05 Compelling benchmark problem * perhaps something that might be mobile sensor network
I asked this question on Mathematics Stack Exchange and couldn't get an answer. Let $\phi\in H^{s}$ be such that the following energy inequality holds: $$\|\phi(t,\cdot)\|_s \le C\int^t_0 \| P\phi(\tau,\cdot)\|_s \, d\tau$$ where $P$ is the wave operator $\square_{g}$. Is the energy inequality then also true for $-s$? I have attempted the following: Let $\phi\in H^{-s}$. Then we can define $\psi=(1-\Delta)^{-s} \phi\in H^s$, so we have $$\|\phi\|_{-s}=\| \psi\|_s \le C \int \| P\psi\|_s. $$ Now if we can estimate $\| P\psi\|_s$ in terms of $\|\phi\|_{-s}$ and $\| P\phi\|_{-s}$, the proof will be over. Notice that $$P\phi=P(1-\Delta)^s \psi=(1-\Delta)^s P \psi+ [P,(1-\Delta)^s]\psi. $$ Hence, $$\| P \psi\|_s \le \| P\phi\|_{-s} +\|[P,(1-\Delta)^s]\psi\|_{-s}. $$ Can someone point out to me whether there are estimates for the commutator? In the book "The Cauchy Problem in General Relativity" by Ringström, the following proposition is stated: Let $m$ and $l$ be non-negative integers, $\alpha\le l+m$, $u\in S$ and $f\in C^{\infty}$. Then $$\|f\partial^{\alpha}u\|_{-m}\le C \|u\|_{l}.$$ This gives the following bound for the commutator: \begin{equation} C(\|\psi\|_{s}+\|\psi_{t}\|_{s-1}), \end{equation} although I am not clear how he gets it. He also expresses the problem as a first-order PDE. Is this necessary? I also think that the result can be shown using the theory of pseudo-differential operators. The idea would be to show that $$[P,(1-\Delta)^s]$$ is a bounded linear operator from $H^{s}$ to $H^{-s}$. We know that $(1-\Delta)^s\in OPS^{2s}$ and that $P\in OPS^{2}$. Is there any theorem that might show the desired result?
If it helps, this class of problems has a non-existence proof tied to it. Mean-variance finance models have the assumption that all parameters are known built into their proofs. There is an existing proof showing that these models, if true, can never yield an estimator for the parameters. Consider the intertemporal budget constraint from the CAPM. In static models it is commonly written as $\tilde{w}=R\bar{w}+\epsilon.$ Let us assume that $R$ is unknown, as I am sure you don't know it and neither do I. If you knew the valuation and so forth, then you wouldn't have needed to ask the question. So future wealth equals current wealth times a reward for investing, plus a shock. You invested to make money, so $R>1$. This is a static model of the general case $w_{t+1}=R w_t+\varepsilon_{t+1},$ where $\varepsilon$ is drawn from any density centered on zero with finite variance greater than zero. Mann and Wald showed that the maximum likelihood estimator, and the MVUE, for this AR(1) process is ordinary least squares subject to the restrictions on the mean and variance of the shock term. This meets all requirements for a Frequentist estimator of $R\in\mathbb{R}$. Mann and Wald showed that the sampling distribution of $\hat{R}-R$ for $|R|<1$ is the normal distribution. White in 1958 showed that the sampling distribution for $|R|>1$ is the Cauchy distribution. Since least squares provides a version of the sample mean, you need the population mean of the Cauchy distribution for convergence, yet it has no population mean. Consequently, any such estimation has zero power even with an infinite sample size. With all of that said, there is a Bayesian solution to this class of problems, but it doesn't use a mean or a variance. I presented a distribution-free and parameter-free Bayesian solution at the Southwestern Finance Association Conference last week to price options. I also provided a parametric form.
It takes advantage of the fact that while the distributions lack a sufficient statistic, predictions don't have that problem. Save yourself time: ignore WACC. Do use the marginal cost of capital, as that is a very real thing, but ignore WACC. Even if the math were valid, it has been shown that you can always stochastically dominate the WACC solution, implying it cannot actually be a solution. If you want to check the intuition behind White's result above, consider $w_0=0$ and $\varepsilon_1=1$, with $R>1$. The effect of the shock would go to infinity as time went to infinity. Shocks die out when the normal distribution is the sampling distribution, implying learning. Sorry, I am not in a place to provide a citation, but I believe Mann and Wald is either 1941 or 1943 and White is 1958. Rao generalized White's result to all AR(n) processes, but I do not remember the date. Proofs of models like the CAPM and Black-Scholes, i.e. those built using Ito calculus, are vacuous even if the assumptions are true in the strictest sense.
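To see the Mann-Wald / White dichotomy concretely, here is a small simulation sketch (not from the cited papers; the model $w_{t+1}=Rw_t+\varepsilon_{t+1}$ and the OLS estimator $\hat R=\sum w_t w_{t+1}/\sum w_t^2$ are as described above, the sample sizes and seeds are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(R, T, rng):
    """Simulate w[t+1] = R*w[t] + eps[t+1] with standard normal shocks."""
    w = np.zeros(T + 1)
    for t in range(T):
        w[t + 1] = R * w[t] + rng.standard_normal()
    return w

def ols_ar1(w):
    """OLS (and ML) estimator of R from one sample path."""
    x, y = w[:-1], w[1:]
    return float(x @ y / (x @ x))

# Stationary case |R| < 1: Mann-Wald, estimation errors are asymptotically normal.
stable = [ols_ar1(simulate_ar1(0.5, 400, rng)) for _ in range(300)]

# Explosive case |R| > 1: White, the suitably scaled error R**T * (Rhat - R)
# has a Cauchy limit, so that scaled error has no population mean.
explosive = [ols_ar1(simulate_ar1(1.05, 400, rng)) for _ in range(300)]
```

In the stable case the estimates concentrate around 0.5 at the usual $1/\sqrt{T}$ rate; in the explosive case each raw estimate also sits near 1.05 path by path, but White's result concerns the scaled error, whose Cauchy limit is what kills mean-based convergence arguments.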
Condensed Matter > Disordered Systems and Neural Networks
Title: $T \to 0$ mean-field population dynamics approach for the random 3-satisfiability problem
(Submitted on 31 Dec 2007 (v1), last revised 25 Sep 2008 (this version, v2))
Abstract: During the past decade, phase-transition phenomena in the random 3-satisfiability (3-SAT) problem have been intensively studied by statistical physics methods. In this work, we study the random 3-SAT problem by the mean-field first-step replica-symmetry-broken cavity theory in the limit of temperature $T\to 0$. The reweighting parameter $y$ of the cavity theory is allowed to approach infinity together with the inverse temperature $\beta$ with fixed ratio $r=y/\beta$. Focusing on the system's space of satisfiable configurations, we carry out extensive population dynamics simulations using the technique of importance sampling, and we obtain the entropy density $s(r)$ and complexity $\Sigma(r)$ of zero-energy clusters at different $r$ values. We demonstrate that the population dynamics may reach different fixed points with different types of initial conditions. By knowing the trends of $s(r)$ and $\Sigma(r)$ with $r$, we can judge whether a certain type of initial condition is appropriate at a given $r$ value. This work complements and confirms the results of several other very recent theoretical studies.
Submission history: From: Haijun Zhou [view email] [v1] Mon, 31 Dec 2007 13:36:25 GMT (32kb) [v2] Thu, 25 Sep 2008 07:31:42 GMT (31kb)
Question: This combined force law \(\widetilde{F}= q(\widetilde{E}+\frac{\widetilde V}{C} \times \widetilde{B})\) is known as
Answer: Lorentz
Question: How would you describe, in continuum-medium mechanics, the response of a linear elastic medium to strain?
Answer: Hooke's law
Question: The total force that acts on a charge if we have both electric and magnetic fields can be called a _______
Answer: Lorentz
Question: How would you describe the motion of two parallel wires carrying currents in opposite directions?
Answer: They are repelled
Question: How would you describe the motion of two parallel wires carrying currents in the same direction?
Answer: They are attracted
Question: As with the electric field, the magnetic field is generated by all the other currents in the universe. How would you describe the magnetic field \(\widetilde{B}\) in terms of a vector \(\widetilde{A}\)?
Answer: \(\widetilde{B}=\widetilde{\bigtriangledown}\times\widetilde{A}\)
Question: It was noted that a moving charge may experience another force which is proportional to its velocity \(\widetilde{V}\), with regard to the magnetic field \(\widetilde{B}\). This can be described thus:
Answer: \(\widetilde {F_{\beta}}= q \widetilde {V}\times \widetilde B\)
Question: The electric field can be described by a scalar potential field \(V\), which is related to the electric field by the formula:
Answer: \(E=-\bigtriangledown V\)
Question: How can you describe mathematically objects that can possess electrical charges, where these charges can exert a force on each other even through a vacuum (\(q\) = electrical charge, \(E\) = electric field)?
Answer: \(\widetilde{F_{E}}= q\widetilde {E}\)
Question: List the basic equations of electromagnetism.
Answer: Maxwell's equations and the Lorentz force.
I believe you can, if you follow the path of finding representations of the $SO(n)$ group over a given Hilbert space. I really haven't done the calculation, but if it works out the same way, you would have something like this: $H=L_2(\mathbb R^n,\mathbb C)$ would be the Hilbert space corresponding to spin-0 particles, and the representation of the $SO(n)$ group would be given by $\Phi: SO(n) \times H \rightarrow H$, with $(\Phi(g)\psi)(x)=\psi(g^{-1}x)$. The generators of this symmetry group correspond to the angular momentum operators. In this case, since there is no 'internal structure', this is just the orbital angular momentum. As for particles with spin: the idea is the same, with one critical difference, namely the Hilbert space you are working on. You would change $H=L_2(\mathbb R^n,\mathbb C)$ to include additional degrees of freedom, and the most direct way is to take the tensor product with another Hilbert space. I don't know who it is due to, but the following choice ends up leading to the famous 'Pauli equation' (the Schrödinger equation with spin 1/2): $H=L_2(\mathbb R^n,\mathbb C^2) \cong L_2(\mathbb R^n,\mathbb C)\otimes\mathbb C^2$. In principle you don't know whether it's possible to find a good representation of the $SO(n)$ group on the above-mentioned Hilbert space, so, to have something to work with, you try $H=L_2(\mathbb R^n,\mathbb C^k)$. So, again seeking representations of the symmetry group, you end up with the following possibility: $\Phi: SO(n)\times H \rightarrow H$ given by $(\Phi(g)\psi)(x) = \pi_k(g)\psi(g^{-1}x)$, where $\pi_k: SO(n)\times \mathbb C^k \rightarrow \mathbb C^k$ is a representation of the $SO(n)$ group over the finite-dimensional space $\mathbb C^k$. At least one $k$ is guaranteed to work, namely $k=n$; about the others I'm not sure. In the case of $SO(3)$, you have a representation for each odd $k$ (integer spin), but it's possible to find a representation of the covering group $SU(2)$ for every $k$.
I find this subject very interesting, even though I haven't had the time to work out the calculations. Unfortunately my knowledge of the subject ends here, so someone else will have to help you with the actual calculations. If you have it at hand, you can read Ballentine's discussion of angular momentum. I found it very enlightening when I was reading it, since it discusses this aspect of needing an internal symmetry space (the $\mathbb C^k$ above) and also works out explicitly the cases of spin 1/2 and 1, besides discussing the case of spin 3/2. Edit: One thing I forgot to mention is the algebra (generators) of the $SO(n)$ group: $\mathfrak{so}(n)$, the algebra of $n \times n$ anti-symmetric real matrices. So the idea of labeling the generators by $\Sigma_{ij}$ with an anti-symmetric index pair, as you have done above, is probably the right way to do it. Also, the commutation relations would be given by the $\mathfrak{so}(n)$ commutation relations, which I don't know by heart, and I'm not certain that they are exactly what you wrote above. Here is something I've found on the web about the $\mathfrak{so}(n)$ algebras. Continuation: As Peter Kravchuk pointed out, the physical idea behind all this reasoning is the idea of a transformation law. In physics, the idea of transformation is captured by the idea of a group, which is a set with a composition operation that makes it possible to discuss things like 'performing one transformation after another' or 'performing the inverse transformation'. Most of the time, you don't only want a notion of composition and inverses of transformations; you also want some sense of continuity and/or smoothness. The groups that are smooth, i.e. 'differentiable', are called Lie groups. Most of the time, you are not interested in the groups by themselves but in the 'effect' they have when they act on some kind of physical object.
If you have a set of physical objects $X$, what you want is some kind of function that changes these objects but still produces valid physical objects of the same kind, i.e., some function $F: G\times X \rightarrow X$. This is the concept of a group action. Many times, objects of physical interest are modeled as vectors, in other words, things that it makes sense to 'add' and 'multiply by a scalar'. You can think of positions, velocities, and/or momenta of particles. There are also objects that you define point by point in your space, things like the gravitational potential, electric fields, and wave-functions! All these objects are described by fields, i.e., in some sense, functions $E \rightarrow X$, where $E$ is your 'physical space', i.e., your space-time, which is generally either euclidean or minkowskian. Finally, the idea of a (linear) representation of a group is to seek transformation laws on objects that are themselves vectors. So, let's start with an example. You have euclidean 3D space, $E=\mathbb R^3$, and you want to study rotations, i.e., transformations that preserve the usual 3D metric $\langle x,y\rangle = x_1y_1+x_2y_2+x_3y_3$. In other words, you want functions $A:\mathbb R^3 \rightarrow \mathbb R^3$ such that $\langle Ax,Ay\rangle=\langle x,y\rangle$ for all $x,y\in\mathbb R^3$. You can prove that all functions of this kind are linear, and also that they form a group (and a Lie group at that!) in the above-mentioned sense. It's called the orthogonal group $O(3)$. Most of the time we also want to preserve orientation, so we additionally demand $\det(A)=1$. This subset also forms a group, which is exactly $SO(3)$, the (proper) rotations in 3D euclidean space. If a group action respects linear operations, $\Phi(A)(\alpha x+y) = \alpha(\Phi(A)x)+(\Phi(A)y)$ for all $x,y\in X$, $A\in G$ and $\alpha \in \mathbb F$ (think real and complex numbers), you call this action a representation.
It's possible to have objects with 'mixed transformation laws', and usually you don't want that to happen, so you usually look for objects with a 'definite transformation law', and this is the same as speaking of irreducible representations of your group. From now on I'll use the term representation as a synonym for irreducible representation, until otherwise stated. Another way to view representations is to think of $\Phi: G \rightarrow GL(X)$, where $GL(X)$ is the group of all invertible linear transformations (matrices) over $X$. This way you look for something that respects $\Phi(g_1g_2)= \Phi(g_1)\Phi(g_2)$, so you can think of looking for 'copies' of the original group inside the group of invertible operators on the space of interest. Now we start to have fun. If you have this symmetry group on position space, you want to ask what happens to momentum when you rotate the positions. Since the set of momenta (velocities, if you like) is also $\mathbb R^3$, you have no problem setting $\vec p' = A\vec p$. So, just to make precise what we are doing: we have the physical position space $E = \mathbb R^3$, and we have a group $G=SO(3)$ that acts on $E$, i.e., $\Phi: G\times E \rightarrow E$, by the 'trivial action' $\Phi(A)\vec x = A\vec x$. Now, we have the momentum space (the set of all possible momenta) $P$, which is also equal to $\mathbb R^3$; thus, we have no problem giving it the same 'transformation law' as the original positions, i.e., setting the representation $\Phi_p : G \times P \rightarrow P$ equal to the trivial one above. This is equivalent to saying $\vec p' = \Phi_p(A)\vec p = A\vec p$. Now, we can ask what happens when you have fields defined on physical space, i.e., 'smooth' (or almost) functions of some type: $\mathcal F=\{f \mid f:E\rightarrow X\}$. You can ask how these fields transform when you have a 'change of coordinates' induced by the action of $SO(3)$ on physical space.
What usually happens is that you set $\Phi_\mathcal F : G \times \mathcal F \rightarrow \mathcal F$ by putting $ (\Phi_\mathcal F(A) f)(x) = \Phi_X(A)f(A^{-1}x)$, where $\Phi_X$ is an action of $G$ over $X$. If $X$ is a vector space, you can also try to choose $\Phi_X$ to be a representation. So the idea is that you first change the coordinates and then act on the object that results. The inverse inside the argument is the active vs. passive rotation idea: you can either think that you actively rotate the whole universe one way, or that you rotate your coordinates the other way around. Ultimately, inside the argument you use not the trivial representation of $SO(3)$ but its inverse. If you have a scalar field, for example an electric potential (which is a function $\phi:\mathbb R^3 \rightarrow \mathbb R$), you don't expect it to change its 'value' when you rotate your coordinates, but you do expect its argument to change. So, from these physical considerations you expect that $\phi$ changes as a 'scalar field', i.e., $\phi'(x) = \phi(A^{-1}x)$. Now, imagine that you have a (static) electric field $\vec E : \mathbb R^3 \rightarrow \mathbb R^3$. Here we expect that if you rotate, not only do the arguments change, but there is also some 'direct effect' on the vector field itself. Since $\vec E(\vec x) \in \mathbb R^3$, you can use the same 'transformation law' (representation) that you used for the positions or for the arguments of functions to act on the electric field.
In the end, you obtain the 'transformation law for vector fields': $ (\Phi_v(A)\vec E)(\vec x) = A(\vec E(A^{-1}\vec x))$. You read this equation as follows: "To transform an electric field under a rotation, first take your position, transform it inversely so you can compute the right argument, then evaluate the electric field at that point, and after that rotate the resulting vector the same way you would rotate ordinary positions (vectors)." Now you have almost all the information you need. Remember that wave-functions are complex scalar fields defined on your physical space, with the additional property that they are 'square-integrable' (they have finite norm). You express that by saying that wave-functions are members of the set $\{\psi:\mathbb R^3 \rightarrow \mathbb C \mid \int_{\mathbb R^3}\psi^*(x)\psi(x)\, d^3 x < \infty \}$, which is denoted by $L_2(\mathbb R^3,\mathbb C)$, the space of (Lebesgue) square-integrable complex functions. Now, you want to ask what the possible transformations of wave-functions are. Since they are scalar functions, you expect them to transform as scalar fields: $(\Phi(A)\psi)(x) = \psi(A^{-1}x)$. Things get really interesting when you try to construct a 'multi-component wave-function', which would represent the 'internal degrees of freedom' of your particles. To achieve that, you change from $L_2(\mathbb R^3,\mathbb C)$ to $L_2(\mathbb R^3,\mathbb C^k)$, so as to have an 'internal space' $\mathbb C^k$. So you go back again and ask 'how do these things transform?', or, put another way, 'what are the possible ways for these things to transform?', since you are not obliged to set $k=3$, and if you think carefully, you are living on $\mathbb C^k$, not $\mathbb R^k$! Since you already have the 'coordinate change' handled (i.e., you have what will become the 'orbital part' of the angular momentum), you need to ask what happens to the internal space.
So, you want to find all 'transformation laws' (i.e., representations) on objects of $\mathbb C^k$. In other words, you want to find all representations of $SO(3)$ on $\mathbb C^k$. This is usually a (very) difficult task, so normally you don't tackle it directly like that; in the end, you find that you only have (honest) representations for $k = 2l+1$, with $l\in \mathbb Z_{\ge 0}$, which you interpret as the 'integer-spin representations' (although we haven't spoken the word spin till now!). So, how do we find representations of the $SO(3)$ group? The standard method is to look at the 'infinitesimal transformations' of the group near the origin (more precisely, the tangent space at the identity of the group). These infinitesimal transformations themselves form a vector space, with an additional operation called the (Lie) bracket, which is in some sense a 'product'. Since vector spaces endowed with products are called algebras, these structures are called Lie algebras. A Lie bracket acts exactly like a commutator (this can be made precise), and in the case of matrix Lie algebras (like the Lie algebra of $SO(3)$), it is exactly the commutator of the usual matrix product. This way you can speak of 'commutation relations' among the elements of the Lie algebra. Just as for Lie groups, you can speak of representations of Lie algebras, except that instead of preserving the group composition operation, they preserve the Lie bracket. It is usually simpler to find representations of the Lie algebra than of the original Lie group, since for the former you 'only' need to find operators that can act as generators for the image of the representation; if they have the same 'commutation relations' as a basis of the original Lie algebra, all you have to do is define the correspondence and extend by linearity. So, how do we recover information about the original group from its Lie algebra?
The idea is that you can construct (under some conditions) representations of the Lie group from representations of the Lie algebra. This is done by 'exponentiating' the elements of the Lie algebra (that idea of putting $U(\vec\theta)=e^{-\frac{i}{\hbar}\vec \theta \cdot \vec J}$) to form an element of the group. In a general setting, this is only valid locally. If you search for representations of the Lie algebra of $SO(3)$ (denoted $\mathfrak{so}(3)$), which is exactly the algebra of $3\times 3$ anti-symmetric matrices, you find that it has representations on every $\mathbb C^k$. Unfortunately (or not), you also find that you can't recover a corresponding representation of $SO(3)$ for even $k$. This is related to the 'double-valuedness' of the representations for even $k$. This is the 'extra $-1$ factor' that half-integer spin picks up under a full $2\pi$ rotation, and also the 'need' for a $4\pi$ rotation to fully return to the origin (identity). What you end up doing is looking for representations of the group that is actually generated by exponentiating the Lie algebra, which in the case of $\mathfrak{so}(3)$ is $SU(2)$. Since locally the two are 'essentially the same', and for each element of $SO(3)$ there are 2 elements of $SU(2)$, the latter is called the double cover of the former. For even $k$, these are the 'spinorial representations'. Finally, you prove that there is a representation of $SU(2)$ on every $\mathbb C^k$, so you can accommodate both integer and half-integer spin using $SU(2)$ as your 'acting rotation group'. You can do this because you can recover the rotations of the euclidean 'physical space' from it. To recover the idea of spin, it's necessary to have a way to 'measure' the total spin of the particle in question, which is done via $S_x^2+S_y^2 + S_z^2 = S^2$. So, how do we interpret this object?
The idea is that it's what's called the 'Casimir invariant' of the group, and you use it to classify all (irreducible) representations of your algebra, and thus of your original group. With that you have pretty much all of '3D spin theory' built here. So, from here, you can understand my original suggestion: if you want to look for higher-dimensional spin, you start with a higher-dimensional position space $E=\mathbb R^n$ and repeat the same questions I developed here: 1) the usual euclidean inner product is $\langle x,y\rangle= \sum_{i=1}^n x_iy_i$, and the group that preserves it and also preserves orientation ($\det A=1$) is called $SO(n)$; 2) the space of $k$-component wave-functions is $H=L_2(\mathbb R^n,\mathbb C^k)$, and you try to find representations of $SO(n)$ on $H$; 3) the covering group of $SO(n)$ is called the spin group $Spin(n)$. I believe that it's possible to find irreducible representations of $Spin(n)$ for all $k$, but I'll confirm later. If so, it allows an interpretation similar to the usual 3D spin. As someone mentioned here or in another topic, Cartan's book is a good reference on the subject. The rest is in my original answer.
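The $\mathfrak{so}(n)$ commutation relations alluded to above can be checked numerically. A sketch (the bracket below is the standard textbook one, not quoted from this post): with generators $(M_{ab})_{ij} = \delta_{ai}\delta_{bj} - \delta_{aj}\delta_{bi}$, one has $[M_{ab}, M_{cd}] = \delta_{bc} M_{ad} - \delta_{bd} M_{ac} - \delta_{ac} M_{bd} + \delta_{ad} M_{bc}$:

```python
import numpy as np
from itertools import combinations

def M(n, a, b):
    """Generator M_ab of so(n): (M_ab)_ij = d_ai d_bj - d_aj d_bi."""
    g = np.zeros((n, n))
    g[a, b] += 1.0
    g[b, a] -= 1.0
    return g

n = 4
delta = np.eye(n)
ok = True
for a, b in combinations(range(n), 2):
    for c, d in combinations(range(n), 2):
        # matrix commutator [M_ab, M_cd]
        lhs = M(n, a, b) @ M(n, c, d) - M(n, c, d) @ M(n, a, b)
        # standard so(n) bracket formula
        rhs = (delta[b, c] * M(n, a, d) - delta[b, d] * M(n, a, c)
               - delta[a, c] * M(n, b, d) + delta[a, d] * M(n, b, c))
        ok = ok and np.allclose(lhs, rhs)
```

Note that `M(n, b, a) == -M(n, a, b)` by construction, which is the anti-symmetry of the index pair $\Sigma_{ij}$ mentioned above.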
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a... @NeuroFuzzy awesome what have you done with it? how long have you been using it? it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game As far as I recall, being a long-term powder gamer myself, powder game does not really have a diffusion-like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and are subject to gravity @Secret I mean more along the lines of the fluid dynamics in that kind of game @Secret Like how in the dan-ball one air pressure looks continuous (I assume) @Secret You really just need a timer for particle extinction, and something that affects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A. I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
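That two-rule particle system ("A turns into B after 10 steps, B next to A turns back into A") is easy to prototype on a grid. A minimal sketch; the grid size, initial density, and von Neumann adjacency with wrap-around are my own choices:

```python
import numpy as np

rng = np.random.default_rng(1)
EMPTY, A, B = 0, 1, 2

grid = rng.choice([EMPTY, A], size=(32, 32), p=[0.8, 0.2])
age = np.zeros_like(grid)  # steps each cell has spent as type A

def step(grid, age):
    a_mask = (grid == A).astype(int)
    # count type-A neighbours (von Neumann adjacency, wrap-around edges)
    nbrs = (np.roll(a_mask, 1, 0) + np.roll(a_mask, -1, 0) +
            np.roll(a_mask, 1, 1) + np.roll(a_mask, -1, 1))
    new = grid.copy()
    new[(grid == A) & (age >= 10)] = B   # rule 1: A older than 10 steps becomes B
    new[(grid == B) & (nbrs > 0)] = A    # rule 2: B adjacent to an A becomes A again
    # reset age for cells that just became A; increment for cells that stayed A
    age = np.where(new == A, np.where(grid == A, age + 1, 0), 0)
    return new, age

for _ in range(50):
    grid, age = step(grid, age)
```

Animating `grid` over the steps (e.g. with matplotlib's `imshow`) is where the reaction-diffusion-like patterns, if any, would show up.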
(Those that don't understand cricket, please ignore this context, I will get to the physics...) England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4) It's always bothered me slightly that there seems to be a ... Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like? As some/many/most people are aware, we are in the midst of a... Hi, I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex. I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl... @ACuriousMind Guten Tag! :-) Dark Sun also has a lot of frightening characters. For example, Borys, the 30th-level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think about your next avatar :-) What is the maximum distance for eavesdropping on pure sound waves? And what kind of device do I need to use for eavesdropping? A microphone with a parabolic reflector or laser-reflection listening devices are available on the market, but are there any other devices on the planet which should allow ... and endless whiteboards get doodled with boxes, grids circled in red markers and some scribbles The documentary then showed a bird's-eye view of the farmlands (pardon my sketchy drawing skills...) Most of the farmland is tiled into grids Here there are two distinct columns and rows of tiled farmland to the left and top of the main grid.
They are the index arrays, and they notate the range of indices of the tensor array In some tiles, there's a swirl of dirt mound; they represent components with nonzero curl, and in others grass grew Two blue steel bars were visible laying across the grid, holding up a triangular pool of water Next, in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e. occasionally, mishaps can happen, such as too much force being applied and the sign snapping in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it At the end of the documentary, near a university lodge area, I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side, doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends Reality check: I have been to London, but not Belgium Idea extraction: The tensor array mentioned in the dream is a multi-index object where each component can be a tensor of different order Presumably one can formulate it (using the example of a 4th-order tensor) as follows: $$A^{\alpha}{}_{\beta\gamma\delta\epsilon}$$ and then allow the indices $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array, while the indices $\gamma,\delta,\epsilon$ are taken from a subset of what the $\alpha,\beta$ indices range over.
For example, to encode a patch of nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$ However, even taking indices to have certain values only, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers @DavidZ in the recent meta post about the homework policy there is the following statement: > We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems. This is an interesting statement. I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully about whether there is another more specific reason for closure, e.g. "unclear what you're asking". I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea. I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments). @DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic. @peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive. @DanielSank No, the site mods could have caged him only on PSE, and only for a year. That he got. After that his cage was extended to a 10-year network-wide one; that couldn't have been the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds. @EmilioPisanty Yes, but I would have liked to talk to him here. @DanielSank I am only curious what he did. Maybe he attacked the whole network?
Or did he take a site-level conflict into the real world? As far as I know, network-wide bans happen for such things. @peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck. Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful. @EmilioPisanty Although it is already not about Ron Maimon, I can't see the meaning of "campaign" as well-defined enough here. And yes, it is a bit of a source of fear for me that maybe my behavior could also be read as if I were "campaigning for my own caging".
In a very natural sense, you can! If $\lim_{x \to \infty} f(x) = \lim_{x \to -\infty} f(x) = L$ is some real number, then it makes sense to define $f(\infty) = L$, where we identify $\infty$ and $-\infty$ in something called the one-point compactification of the real numbers (making it look like a circle). In that case, $f'(\infty)$ can be defined as$$f'(\infty) = \lim_{x \to \infty} x \big(f(x) - f(\infty)\big).$$When you learn something about analytic functions and Taylor series, it will be helpful to notice that this is the same as differentiating $f(1/x)$ at zero. Notice that this is actually not the same as $\lim_{x \to \infty} f'(x)$. These ideas actually show up quite a bit in analytic capacity, so this is a rather nice idea to have. I wanted to expand this answer a bit to give some explanation about why this is the "correct" generalization of differentiation at infinity, and hopefully address some points raised in the comments. Although $\lim_{x \to \infty} f'(x)$ might feel like the natural object to study, it is quite badly behaved. There are functions which decay very quickly to zero and have horizontal asymptotes, but where $f'$ is unbounded as we tend to infinity; consider something like $\sin(x^a) / x^b$ for various $a, b$. Furthermore, $\lim_{x \to \infty} f'(x) = 0$ is not sufficient to guarantee a horizontal asymptote, as $\sqrt{x}$ shows. So why should we consider the definition I proposed above? Consider the natural change of variables interchanging zero and infinity*, swapping $x$ and $1/x$. Then if $g(x) := f(1/x)$ we have the relationship $$\lim_{x \to 0} \frac{g(x) - g(0)}{x} = \lim_{x \to \infty} x \big(f(x) - f(\infty)\big).$$ That is to say, $g'(0) = f'(\infty)$. Now via this change of variables, neighborhoods of zero for $g$ correspond to neighborhoods of $\infty$ for $f$. So if we think of the derivative as a measure of local variation, we now have something that actually plays the correct role.
Finally, we can see from this that this definition of $f'(\infty)$ gives the coefficient $a_1$ in the Laurent series $\sum_{i \ge 0} a_i x^{-i}$ of $f$. Again, this corresponds to our idea of what the derivative really is. * This is one of the reasons why I used the one-point compactification above. Otherwise, everything that follows must be a one-sided limit or a one-sided derivative.
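The limit definition above is easy to check numerically. Here is a small sketch (the test function $f(x) = 3 + 1/x + 5/x^2$ and the sample points are my own choices, not from the answer):

```python
# Numeric sketch of f'(inf) = lim_{x->inf} x * (f(x) - f(inf)).
# For f(x) = 3 + 1/x + 5/x^2, the Laurent series is 3 + 1*x^-1 + 5*x^-2,
# so a_1 = 1 and we expect f'(inf) = 1.

def f(x):
    return 3 + 1 / x + 5 / x**2

f_at_inf = 3  # L = lim f(x) as x -> +/- infinity

for x in [1e2, 1e4, 1e6]:
    print(x, x * (f(x) - f_at_inf))  # tends to 1

# The same value is the one-sided derivative of g(x) = f(1/x) at zero:
h = 1e-6
g = lambda t: f(1 / t)
print((g(h) - f_at_inf) / h)  # also close to 1
```

Both limits approach the Laurent coefficient $a_1 = 1$, matching the claim in the answer.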
How would you match the lines between two cases within the same equation environment? What I'm having is:

\begin{equation}
\begin{rcases}
\sum_{I=1}^{NP} K_h (x - x_I; x) \Delta x_I &= 1 \\
\sum_{I=1}^{NP} \left(\frac{x - x_I}{h}\right) K_h (x - x_I; x) \Delta x_I &= 0 \\
&\;\;\vdots \notag \\
\sum_{I=1}^{NP} \left(\frac{x - x_I}{h}\right)^n K_h (x - x_I; x) \Delta x_I &= 0
\end{rcases}
= \begin{cases}
M_0 (x) &= 1 \\
M_1 (x) &= 1 \\
&\;\;\vdots \notag \\
M_n (x) &= 1 \\
\end{cases}
\end{equation}

Now I want to match the rows of the cases on the left with those on the right. I could do something ugly like adding \vspace, but is there a more elegant way of doing it?
Not for the faint-hearted: There is an excellent, but very mathsy, article here: J. Chem. Educ. 2014, 91, 386 describing the difference. The Gibbs free energy change, $\Delta G$ You are quite right in saying that $\Delta G$ represents a change in a system over a time interval. The notation $\Delta G$ itself implies that it is a difference in $G$ between two things: an initial state and a final state. Let's use a real example in order to make things clearer. Consider the thermal decomposition of ammonium nitrate: $$\ce{NH4NO3(s) -> N2O(g) + 2H2O(g)}$$ If someone were to ask you "what is $\Delta G$ for this reaction?", you should really, technically, be telling them: it is not well-defined. That is because of two things. Firstly, the reaction conditions, e.g. temperature and pressure, are not specified. But more importantly, a balanced equation does not tell you exactly how much ammonium nitrate is reacting. In your beaker, you could have $1~\mathrm{mol}$ of $\ce{NH4NO3}$ reacting according to the above equation, but you could equally well have $2~\mathrm{mol}$, or you could have $5000~\mathrm{mol}$ (I must say that's one huge beaker though), or you could have $0.001~\mathrm{mmol}$. The stoichiometric coefficients in the equation do not indicate the amounts of compounds reacting. So, a more proper question would read like this: $\pu{2 mol}$ of ammonium nitrate fully decompose according to the balanced equation $\ce{NH4NO3(s) -> N2O(g) + 2H2O(g)}$ at $298~\mathrm{K}$ and $1~\mathrm{bar}$. Calculate $\Delta G$ for this process. Great. Because the decomposition is complete, we now know that our final state is $2~\mathrm{mol}~\ce{N2O(g)} + 4~\mathrm{mol}~\ce{H2O(g)}$ at $298~\mathrm{K}$ and $1~\mathrm{bar}$, and our initial state is $2~\mathrm{mol}~\ce{NH4NO3(s)}$ at $298~\mathrm{K}$ and $1~\mathrm{bar}$. 
So, $$\begin{align}\Delta G &= [(2~\mathrm{mol})\cdot G_\mathrm{m}(\ce{N2O(g)})] + [(4~\mathrm{mol})\cdot G_\mathrm{m}(\ce{H2O(g)})] - [(2~\mathrm{mol})\cdot G_\mathrm{m}(\ce{NH4NO3(s)})]\end{align}$$ We can't quite find the absolute molar Gibbs free energies, so the best we can do is to use Gibbs free energies of formation. $$\begin{array}{c|c}\text{Compound} & \Delta_\mathrm{f}G^\circ / \mathrm{kJ~mol^{-1}}\text{ (at }298~\mathrm{K}\text{)} \\ \hline\ce{NH4NO3(s)} & -183.87 \\\ce{N2O(g)} & +104.20 \\\ce{H2O(g)} & -228.57\end{array}$$$$\scriptsize \text{(data from Atkins & de Paula, }\textit{Physical Chemistry}\text{ 10th ed., pp 975-7)}$$ (Note that the use of standard formation Gibbs free energies is only because of the conditions specified in the question, which conveniently corresponds to the standard state. If we specify different conditions, we can still find $\Delta G$, but we would have to use different data.) So: $$\begin{align}\Delta G &= [(2~\mathrm{mol})\cdot \Delta_\mathrm{f} G^\circ(\ce{N2O(g)})] + [(4~\mathrm{mol})\cdot \Delta_\mathrm{f} G^\circ(\ce{H2O(g)})] - [(2~\mathrm{mol})\cdot \Delta_\mathrm{f} G^\circ(\ce{NH4NO3(s)})] \\&= [(2~\mathrm{mol})(+104.20~\mathrm{kJ~mol^{-1}})] + [(4~\mathrm{mol})(-228.57~\mathrm{kJ~mol^{-1}})] - [(2~\mathrm{mol})(-183.87~\mathrm{kJ~mol^{-1}})] \\&= -338.14~\mathrm{kJ}\end{align}$$ Note that we have units of kJ. Since $\Delta G$ is the difference between the Gibbs free energy of one state and another, $\Delta G$ has to have the same units as $G$, which is units of energy. Now, this does not necessarily mean that $\Delta G = G_\text{products} - G_\text{reactants}$. For example, if I changed my question to be: In the Haber process, 100 moles of $\ce{N2}$ and 300 moles of $\ce{H2}$ are reacted at $800~\mathrm{K}$ and $200~\mathrm{bar}$ according to the equation $\ce{N2 + 3H2 -> 2NH3}$. Only 10% of the starting materials are converted under these conditions. Calculate $\Delta G$ for the process. 
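A quick numeric check of the stoichiometric arithmetic (a sketch; the dictionary keys are just labels, and the values are the tabulated formation energies above):

```python
# Delta G for: 2 NH4NO3(s) -> 2 N2O(g) + 4 H2O(g) at 298 K, 1 bar.
# Standard Gibbs energies of formation in kJ/mol (Atkins & de Paula, 10th ed.)
dfG = {"NH4NO3(s)": -183.87, "N2O(g)": +104.20, "H2O(g)": -228.57}

# Change in amount of each species in mol (products +, reactants -)
dn = {"NH4NO3(s)": -2, "N2O(g)": +2, "H2O(g)": +4}

deltaG = sum(dn[s] * dfG[s] for s in dfG)  # units: kJ
print(round(deltaG, 2))  # -338.14
```

This is exactly the $\Delta G = \sum_i (\Delta n_i)\, G_{\mathrm{m},i}$ sum written out as code.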
(These numbers are made up.) then, your final state would not be the pure products. Your final state is not 200 moles of $\ce{NH3}$. Your final state is $90~\mathrm{mol}~\ce{N2}$, $270~\mathrm{mol}~\ce{H2}$, and $20~\mathrm{mol}~\ce{NH3}$. In general, one could write, for a chemical reaction, $$\Delta G = \sum_i (\Delta n_i) G_{\mathrm{m},i}$$ where $\Delta n_i$ is the change in the amount of compound $i$ (in moles), and $G_{\mathrm{m},i}$ is the molar Gibbs free energy of the pure compound $i$, under the $T$ and $p$ conditions specified. Going back to our ammonium nitrate example, we would have $\Delta n_{\ce{NH4NO3}} = -2~\mathrm{mol}$, $\Delta n_{\ce{N2O}} = +2~\mathrm{mol}$, and $\Delta n_{\ce{H2O}} = +4~\mathrm{mol}$. The Gibbs free energy change of reaction, $\Delta_\mathrm{r} G$ As you have correctly stated, $\Delta_\mathrm{r}G$ is the slope of a graph of $G_\mathrm{syst}$ against the extent of reaction, commonly denoted $\xi$. This is the easiest way of interpreting $\Delta_\mathrm{r}G$. This question contains a slightly fuller derivation and explanation of what $\Delta_\mathrm{r}G$ means. However, $\Delta_\mathrm{r}G$ does indeed, somewhat, refer to the instantaneous difference between the "molar Gibbs free energies" of the products and reactants. This is different from $\Delta G$ in three main ways. First, $\Delta G$ is the difference between the Gibbs energy of the entire system at two points in time. Here, we are interpreting $\Delta_\mathrm{r}G$ as the difference between the "molar Gibbs free energies" of two components of the system: reactants and products. In other words, the system generally contains both reactants and products, and $\Delta_\mathrm{r}G$ may be thought of as the difference between the Gibbs energies of the product part of the system, and the reactant part of the system, even though you cannot separate them in the laboratory. Secondly, it is the difference between the chemical potentials, not the molar Gibbs free energies. 
The molar Gibbs free energy is simply defined by $G_i/n_i$; the chemical potential is a partial derivative and is defined by $\mu_i = (\partial G/\partial n_i)_{T,p,n_j}$. In the case where there are no other species present (i.e. species $i$ is pure), then the chemical potential is identical to the molar Gibbs free energy. Lastly, it is weighted by the stoichiometric coefficients $\nu_i$ instead of the change in the amount $\Delta n_i$. The stoichiometric coefficient is a dimensionless quantity, which is negative for reactants and positive for products. So, in the ammonium nitrate decomposition as written at the very top, we have $$\nu_{\ce{NH4NO3}} = -1; \qquad \nu_{\ce{N2O}} = +1; \qquad \nu_{\ce{H2O}} = +2$$ and our expression for $\Delta_\mathrm{r} G$ is $$\begin{align}\Delta_\mathrm{r} G &= \sum_i \nu_i \mu_i \\&= \nu_{\ce{N2O}}\mu_{\ce{N2O}} + \nu_{\ce{H2O}}\mu_{\ce{H2O}} + \nu_{\ce{NH4NO3}}\mu_{\ce{NH4NO3}} \\&= \mu_{\ce{N2O}} + 2\mu_{\ce{H2O}} - \mu_{\ce{NH4NO3}}\end{align}$$ Now, note the units again. The chemical potential is a partial derivative of the Gibbs free energy (units $\mathrm{kJ}$) with respect to the amount of $i$ (units $\mathrm{mol}$), and so it must have units $\mathrm{kJ~mol^{-1}}$. And from our above expression, $\Delta_\mathrm{r}G$ must also have units of $\mathbf{kJ~mol^{-1}}$, since the stoichiometric coefficients are dimensionless. Does it matter if you start with $1~\mathrm{mol}$ or $2~\mathrm{mol}$ of ammonium nitrate? The answer now is: no. The amount of starting material does not affect $\mu_i$, nor does it affect $\nu_i$. Therefore, $\Delta_\mathrm{r}G$ is independent of the amount of starting material. Why is $\Delta_\mathrm{r} G$ an instantaneous difference between the chemical potentials? Well, that is because the chemical potentials of the reactants and products, $\mu_i$, are changing continuously as the reaction occurs. 
Therefore, if we want to calculate $\Delta_\mathrm{r} G$, we have to take a "snapshot" of the reaction vessel: otherwise it makes absolutely no sense to speak of $\mu_i$ because we wouldn't know which value of $\mu_i$ to use. Compare this with $\Delta G$ above: the quantities of $G_{\mathrm{m},i}$ are constants that do not vary depending on the extent of reaction. Therefore, we do not need to specify a particular extent of reaction to calculate $\Delta G$. A diagram to sum up Note that, to adequately define what $\Delta G$ is, you need a starting point and an ending point. I showed two possibilities for $\Delta G$; there are infinitely many more. Let's say the reaction goes to completion. If you double your starting material and double your product, the difference between $G_\text{products}$ and $G_\text{reactants}$ will also be doubled. So, it's important to specify! Likewise, to adequately define $\Delta_\mathrm{r} G$, you have to define the single specific point at which you intend to calculate $\Delta_\mathrm{r} G$. If you double your starting material and double your product, the curve is stretched by a factor of 2 along the y-axis, but it is also stretched by a factor of 2 along the x-axis because $\xi$ is also doubled. (For a mathematical explanation see the definition of $\xi$ given in the linked question earlier.) So, while the difference $\Delta G$ is doubled, the gradient $\Delta_\mathrm{r} G$ remains unchanged. The units should also be clear from this. Since $\Delta G$ is a difference between two values of $G$, it has to have units of $\mathrm{kJ}$. On the other hand, $\Delta_\mathrm{r}G$ is a gradient and therefore has units of $\mathrm{kJ/mol}$. A caveat Unfortunately, the notation $\Delta G$ is often loosely used and treated as being synonymous with $\Delta_\mathrm{r}G$. You will therefore see people give $\Delta G$ units of $\mathrm{kJ~mol^{-1}}$. For more information refer to Levine, Physical Chemistry 6th ed., p 343. 
I would personally recommend making a distinction between the two.
In Chapter II of Dirac's book Principles of Quantum Mechanics, Dirac explains that in general it is very difficult to know whether, for a given real linear operator, any eigenvalues/eigenvectors exist and (if they do) how to find them. He then goes on to state that a special tractable case can be found in the event that a real linear operator $\xi$ satisfies an algebraic equation: $$\phi(\xi) \equiv \xi^n + a_1 \xi^{n-1} + a_2 \xi^{n-2} + \cdots + a_n = 0$$ where the $a_i$ are all numbers. He then factorises this as: $$\phi(\xi) \equiv (\xi - c_1)(\xi - c_2)(\xi - c_3)\cdots(\xi - c_n)$$ where the $c_i$ are also numbers. I'm unsure how this factorisation has been achieved. Since Dirac didn't explicitly state the difference between $\phi(\xi)$ and plain $\xi$, I imagine that $\xi$ represents a given dynamical variable and $\phi(\xi)$ the corresponding operator. This may be incorrect, however. This is the only step in the logic that I can't work out; I understand his subsequent arguments, but this step is eluding me.
When this identity was posted, it struck me as something that ought to have a combinatorial explanation. I have now found one, using a decomposition of NSEW lattice paths: paths in $\mathbb{Z}^2$ consisting of unit steps in the direction N, S, E or W. Many of the ideas here may be found in [GKS], though not the decomposition itself. The expression $\frac12{2a+1\ +\ 2b+1\choose2a+1}$ counts paths of $(a+b+1)$ steps that start at $(0,0)$ and end on the half-line $(a-b,\geq0)$. To see this, decompose each path step as two half-steps $±\left[\begin{smallmatrix}½\\½\end{smallmatrix}\right]$ and $±\left[\begin{smallmatrix}½\\-½\end{smallmatrix}\right]$. If the $+$ option is chosen for $(2a+1)$ of the $2(a+b+1)$ half-steps, and the $-$ option for the other $(2b+1)$, then the $x$-coordinate of the endpoint is $\frac12((2a+1)-(2b+1))=a-b$. Thus there are ${2a+1\ +\ 2b+1\choose2a+1}$ paths of $(a+b+1)$ steps from $(0,0)$ to $x=a-b$. By parity, the end position must have an odd-numbered $y$-coordinate. Reflection in the $x$-axis is therefore a fixpoint-free involution, so half of these paths end on the half-line $(a-b,\geq0)$. Such a path may be split into a pair of paths with $(a+b)$ steps in total. The endpoint of the path is $(a-b, 2k+1)$ for some $k\in\mathbb N$. At least one step of the path must therefore be an N step from $(c,2k)$ to $(c,2k+1)$ for some $c$. Remove the first such step, to give a pair of paths with $a+b$ steps altogether: A path of $n$ steps from $(0,0)$ to $(c,2k)$ that does not cross the line $y=2k$, which we can think of as a 180° rotation of a path from $(0,0)$ to $(c,2k)$ that does not cross the $x$-axis; A path of $a+b-n$ steps from $(c,2k+1)$ to $(a-b,2k+1)$, which we can think of as a translation of a path from $(0,0)$ to $(a-b-c,0)$. This is clearly a bijection. There are ${i+j\choose i}^2$ paths of $(i+j)$ steps from $(0,0)$ to $(i-j,0)$. 
The four directions N,S,E,W may be obtained by starting with $\left[\begin{smallmatrix}-1\\0\end{smallmatrix}\right]$ and adding neither, one, or both of $\left[\begin{smallmatrix}1\\1 \end{smallmatrix}\right]$ and $\left[\begin{smallmatrix}1\\-1\end{smallmatrix}\right]$. Build a path of $i+j$ steps, initially all $\left[\begin{smallmatrix}-1\\0\end{smallmatrix}\right]$. Add $\left[\begin{smallmatrix}1\\1\end{smallmatrix}\right]$ to $i$ of the steps and, independently, add $\left[\begin{smallmatrix}1\\-1\end{smallmatrix}\right]$ to $i$ of the steps. There are also ${i+j\choose i}^2$ paths of $(i+j)$ steps from $(0,0)$ to $(i-j,\geq0)$ that do not cross the $x$-axis. There is a bijection between these paths and the paths of the previous section using a raising/lowering transformation [GKS]. Suppose we have a path from $(0,0)$ to $(i-j,0)$ that may cross the $x$-axis. While the path crosses the $x$-axis, do the following: Take the initial segment of the path up to the first time it touches the line $y=-1$, and reflect this initial segment about that line. Then translate the entire path up by two units, so it starts at $(0,0)$ again and ends two units higher than before on $x=i-j$. I hope it is clear that this process is reversible. (In reverse: while the endpoint is above the $x$-axis, translate the path two units down, then take the initial segment from $(0,-2)$ to the first intersection with $y=-1$ and reflect this initial segment about that line.) Putting it together Now we have all the ingredients we need. Let us count the pairs of paths as described above. Since $n$ and $c$ have the same parity, we may write $n=i+j$ and $c=i-j$ for $i\in[0,a]$, $j\in[0,b]$. There are ${i+j\choose i}^2$ paths of $(i+j)$ steps from $(0,0)$ to $(i-j,\geq 0)$ that do not cross the $x$-axis. There are ${a-i\ +\ b-j\choose a-i}^2$ paths of $(a+b)-(i+j)$ steps from $(0,0)$ to $(a-b-(i-j),0)$. 
So in total there are $$\sum_{i=0}^a\sum_{j=0}^b{i+j\choose i}^2{a-i\ +\ b-j\choose a-i}^2$$ such pairs, as required. [GKS] Richard K. Guy, C. Krattenthaler and Bruce E. Sagan (1992). Lattice paths, reflections, & dimension-changing bijections, Ars Combinatoria, 34, 3–15.
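For readers who want to convince themselves, the identity can also be verified by brute force for small $a$ and $b$ (a quick sketch; the function names are mine):

```python
from math import comb

def lhs(a, b):
    # (1/2) * C(2a+1 + 2b+1, 2a+1): paths of a+b+1 steps from (0,0)
    # ending on the half-line (a-b, >=0)
    return comb((2 * a + 1) + (2 * b + 1), 2 * a + 1) // 2

def rhs(a, b):
    # the pair-of-paths count from the decomposition described above
    return sum(comb(i + j, i) ** 2 * comb((a - i) + (b - j), a - i) ** 2
               for i in range(a + 1) for j in range(b + 1))

print(all(lhs(a, b) == rhs(a, b) for a in range(8) for b in range(8)))
```

For instance $a = b = 1$ gives $\tfrac12\binom{6}{3} = 10$ on both sides.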
The production function is:$$q = (l^\rho + k^\rho)^\frac{1}{\rho}$$The MPL and MPK are respectively:$$q_l = \frac{\partial q}{\partial l} = \frac{1}{\rho} \cdot (l^\rho + k^\rho)^{\frac{1}{\rho}-1} \cdot \rho\cdot l^{\rho-1}$$$$q_k = \frac{\partial q}{\partial k} = \frac{1}{\rho} \cdot (l^\rho + k^\rho)^{\frac{1}{\rho}-1} \cdot \rho\cdot k^{\rho-1}$$What is the rate at which $l$ can be substituted for $k$? Where $f$ is a differentiable real-valued function of a single variable, we define the elasticity of $f(x)$ with respect to $x$ (at the point $x$) to be$$\sigma(x) = \frac{x f'(x)}{f(x)}\equiv \frac{\frac{df(x)}{f(x)}}{\frac{dx}{x}}$$ Do a change of variables such that $u = \ln(x)$ ($\Rightarrow x = e^u$) and $v=\ln(f(x))$ ($\Rightarrow f(x) = e^v$). Note that $v' = f'(x) / f(x)$ and $u'=\frac{1}{x}$, so that $$\frac{v'}{u'}=\frac{\frac{f'(x)}{f(x)}}{\frac{1}{x}} = \sigma(x)$$ Note that this is also the result you get by solving for $\frac{d \ln f(x)}{d \ln(x)}$, because $\frac{d \ln f(x)}{d \ln(x)} = \frac{d v}{d u}$, which we solve via the chain rule:$$ \frac{d v}{d u} = \frac{d v}{d x} \cdot \frac{d x}{d u} = \frac{f'(x)}{f(x)} \cdot x $$ which happens to be exactly the definition of $\sigma(x)$. Now let's tackle your elasticity problem. $$ \ln\left(\frac{q_l}{q_k}\right)= \ln\left(\frac{\frac{1}{\rho} \cdot (l^\rho + k^\rho)^{\frac{1}{\rho}-1} \cdot \rho\cdot l^{\rho-1}}{\frac{1}{\rho} \cdot (l^\rho + k^\rho)^{\frac{1}{\rho}-1} \cdot \rho\cdot k^{\rho-1}}\right) = \ln \left(\frac{l}{k}\right)^{\rho-1} = (\rho-1) \ln (l/k) = (1 - \rho) \ln (k/l)$$$$ \Rightarrow \ln (k/l) = \frac{1}{1-\rho} \cdot \ln\left(\frac{q_l}{q_k}\right)$$ So $\sigma = \frac{1}{1-\rho}$
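The closed form $\sigma = 1/(1-\rho)$ can be sanity-checked numerically. Below is a sketch; the sample values of $\rho$, $l$, $k$ and the finite-difference step sizes are arbitrary choices of mine:

```python
import math

def q(l, k, rho):
    # CES production function
    return (l**rho + k**rho) ** (1 / rho)

def mrts(l, k, rho, h=1e-6):
    # MRTS = q_l / q_k via central-difference partial derivatives
    ql = (q(l + h, k, rho) - q(l - h, k, rho)) / (2 * h)
    qk = (q(l, k + h, rho) - q(l, k - h, rho)) / (2 * h)
    return ql / qk

def sigma_numeric(rho, l=2.0, k=3.0, eps=1e-4):
    # sigma = d ln(k/l) / d ln(MRTS), estimated by nudging k slightly
    m1 = math.log(mrts(l, k, rho))
    m2 = math.log(mrts(l, k * (1 + eps), rho))
    return math.log(1 + eps) / (m2 - m1)

for rho in [0.5, -1.0, 0.9]:
    print(rho, sigma_numeric(rho), 1 / (1 - rho))
```

The numeric estimate matches $1/(1-\rho)$ for each $\rho$, independent of the chosen $(l, k)$ point, as expected for CES.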
I have to test the series $\sum_{n=2}^{\infty} \frac{(-1)^{n-1}}{n^2+(-1)^n}$ for absolute and conditional convergence. Notes: For absolute convergence I consider $\left\vert \frac{(-1)^{n-1}}{n^2+(-1)^n} \right\vert$; if this converges, then the original series converges as well. Proof attempt: $\left\vert \frac{(-1)^{n-1}}{n^2+(-1)^n} \right\vert = \frac{1}{n^2+(-1)^n} < \frac{1}{n}$, and the harmonic series $\sum \frac{1}{n}$, whose terms decrease to $0$, diverges. Therefore the series is not absolutely convergent, but it is conditionally convergent by the alternating series test. Any help would be appreciated ;)
Technical Report

- Issue 6: Generating Highly Accurate Machine Models Indispensable to Model Based Design (MBD). (Elaborate Modeling Technology) Ideal MBD and Its Challenges. Model based design (MBD) has been around the field of circuits/controls for motors for a long time, but has not been at th…
- Issue 5: Versatile Mapping that Supports Multiphysics Simulations. (Elaborate Modeling Technology) Material Modeling and Mapping Technology Supporting Multiphysics Simulations. Multiphysics simulations, such as coupled analyses (magnetic field, therma…
- Issue 4: Material Modeling and Powerful Analysis Capabilities that Contribute to Limit Design. (Elaborate Modeling Technology) Modeling Complex Nonlinear Materials at a Micro Level. \( \Large \nabla \times \frac{ l }{\mu_0}\nabla \times A = J -\sigma \frac{ \partial A } { \parti…
- Issue 3: What Does the JMAG Mesh Generation Engine have to Offer? (A Powerful Simulation Engine) These technical reports introduce the scope of JMAG's technological development. This edition introduces the value and future of one of the two major fo…
- Issue 2: How JMAG Realized Accelerated Speed. (A Powerful Simulation Engine) This technical report introduces content concerning the development of JMAG technology. As the previous section introduced why matrix solvers are necess…
- Issue 1: Why is a high-speed calculation engine necessary? (A Powerful Simulation Engine) In recent years, the needs of engineers are diversifying as computer aided engineering (CAE) tools are applied more regularly when designing electromag…
The NCO is a cyclical counter that can go on indefinitely but is otherwise similar to what you suggest, in that you increment n to set the output rate. It basically is a look-up table of all the values in one complete cycle, and "wraps" on an overflow so that it will output continuous cycles with no discontinuity. I think the NCO is ideal for what you are trying to do, given its simplicity, its ability to run indefinitely while keeping track of its own position in time, its quantified noise levels that you can set to what you need, its fixed-point implementation (with no multipliers), and its ability to change the frequency of your phasor "on the fly" as you require. I think this sums up the difference with the alternate approach you describe, which would not be as efficient (given the multiplications required and a possible implementation in floating point without careful mapping and scaling, in which case you may as well go down the NCO path). A little more theory may help you to see all the advantages (and simplicity) of an NCO. First, referring to the diagrams below for the basic NCO architecture: a digital Frequency Control Word (FCW) sets the count rate of an extended-precision accumulator (counter); optionally, a Phase Control Word (PCW) for phase modulation can then be added to the output of the accumulator. The most significant bits of this summation are then used as the address pointer to a Look-Up Table (LUT) which holds the values for one complete cycle of a sine wave (you could also imagine having two pointers for sine and cosine to enable a complex I and Q output). Now see the same block diagram with a mathematical view below that helps give further technical insight into the operation of the NCO. An accumulator (counter) is the digital counterpart to an integrator (if you don't see that right away, imagine sending all 1's into a counter: the output would be a ramp, 1, 2, 3, 4, ... 
just as you would expect an integrator output to be with a constant level at its input). The input FCW, which is just a digital signal that can change with time (as you were looking for), is a waveform that represents frequency vs time. I will elaborate later on what output frequency each digital input word corresponds to, but for now know that its value at any given time is directly proportional to the output frequency. The integral of frequency is phase (and if you are less familiar with that, it may be easy to see that frequency is a change in phase versus a change in time, therefore $f=d\phi/dt$; frequency is the derivative of phase, and therefore phase is the integral of frequency.) Since our FCW input to an accumulator is the digital representation of a frequency quantity, and the accumulator is a digital integrator, the value at the output of the accumulator represents phase versus time (which is why we can add a phase offset with PCW at this point if desired), and the accumulator counts from $0$ to $2\pi$, rolling over upon overflow. Since the accumulator output represents phase that is changing with time, and we want to generate a sinusoid output ($\sin(\theta)$), we can simply use a LUT to perform the trigonometric function. (Note: if you have plenty of extra cycles but no memory, other techniques to calculate the sine of an angle can be used, notably the CORDIC algorithm). Beautiful, right? So now how do we decide on specifics to design our NCO, and what happens when we lose all the least significant bits in our phase word? Read on! First, the accumulator sets the frequency resolution, and usually an extended-precision accumulator is used, with 24, 32 or 48 bits typical depending on the application. 
This is easy to see: imagine first FCW = 1. The accumulator will step through every value, meaning the address pointer to the LUT will also step through every value in the stored sine wave, so the sine wave output will be at the slowest rate, and that rate will be as given by the "step size" in the formula below. Why step size? Because then imagine setting FCW = 2, and the counter will now count by 2's and therefore go twice as fast before rolling over (and upon rollover the counter must continue to count, which is why the NCO will continue to output the desired sine wave indefinitely); put in FCW = 3 and it will count 3 times as fast, etc. Therefore, $$F_{out}= FCW \cdot \frac{f_{clock}}{2^{N_{acc}}}$$ where $N_{acc}$ is the accumulator width in bits. So regardless of how many bits we decide to use for the LUT, the output frequency is strictly set by this formula and nothing else. Now to briefly explain phase truncation and its primary considerations: phase truncation is when we decide to only use the most significant bits of the accumulator output to send to the LUT, and in doing this we are truncating the phase word (rounding down). To understand the implications of this, first consider the diagram below of what would happen if we did not have any phase truncation (meaning a very, very large look-up table, or a very coarse frequency step size if the accumulator is small). What this picture is showing is that an impossible implementation containing a perfectly sinusoidal analog source at the specific frequency shown (with no phase noise), sampled with a perfect 100 MHz clock and a perfect 12 bit A/D converter, will produce results IDENTICAL to the NCO output (using the same 100 MHz clock). In fact, for the NCO with no phase truncation, all of the output frequencies in multiples of $f_{step}$ as provided by the formula above will be this precise, with quantization noise at the output being the only noise source (which you can control by setting the output word width). 
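As a concrete sketch of this formula (the 100 MHz clock, 32-bit accumulator and 1 MHz target are example values of mine, not from the question):

```python
f_clock = 100e6          # example sample clock, Hz
acc_bits = 32            # accumulator width
f_step = f_clock / 2**acc_bits   # frequency resolution, ~0.0233 Hz here

# To output (approximately) 1 MHz, round the ideal FCW to an integer:
fcw = round(1e6 / f_step)
f_out = fcw * f_step     # F_out = FCW * f_clock / 2**acc_bits
print(f_step, fcw, f_out)  # f_out lands within f_step/2 of the 1 MHz target
```

This illustrates why large accumulators are popular: a 32-bit accumulator at 100 MHz already gives sub-0.025 Hz tuning resolution.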
You can see, if you imagine different cases, that without phase truncation the look-up table will provide the exact output required at any given point in time (limited to the quantized frequency choices with FCW, but that step size can be very small with large accumulators). The waveform will be very smooth without any skips or hiccups; in other words, pure. So this is great: consider the example with a 32 bit accumulator and a 12 bit output of the LUT, providing very fine frequency resolution with great spectral purity (and 6 dB better for every additional bit you add to the output width)...until you get to the memory requirements! And herein lies the motivation to consider phase truncation. Phase Truncation With phase truncation, memory requirements are significantly reduced at the expense of an additional noise source (phase noise). What we will see is that the noise is well understood and can be planned to be well below any given requirements (as a trade with the memory required). Also, to mention for memory optimization: only a quarter cycle is needed, since the remaining portions of the cycle can be derived from the first quarter cycle. There are many other memory optimizations as well, such as interpolation between values (most common), and, to mention without explanation, the Hutchison algorithm and Sunderland algorithm, as well as the CORDIC rotator previously mentioned. The phase noise pattern itself from phase truncation will be a sawtooth function of phase versus time, representing the truncated values that are missing. From this, the useful relationship of SNR to phase truncation is given as in the picture below. Here SNR is the power of the desired sine wave output relative to the power of all spurious output due to the phase truncation. 
This formula applies when the small angle criterion applies (when $\sin(\theta) \approx \theta$) and comes from the rms value for a sawtooth function (or equivalently the standard deviation for a uniform distribution), which is $\frac{D}{\sqrt{12}}$ where $D$ is the peak-to-peak height of the ramp or width of the distribution. This formula is combined with the quantization noise contributions from the LUT for a digitized signal (using a similar formula, 6.02 dB/bit + 1.76 dB, also derived from our $\sqrt{12}$ factor, since quantization noise can be modeled as a uniform distribution!) to address all noise sources in the NCO. To use this formula, the number of bits in the formula is the number of bits sent to the LUT (the number of phase bits not truncated). Finally, we may be interested in the spurious-free dynamic range (SFDR), which would be the power level of the strongest spur relative to our output signal (as opposed to the summed power of all spurs in SNR). The power of the strongest spur due to phase truncation is simply 6.02 dB/bit, where again "bit" is the number of bits sent to the LUT. (This can be derived by taking the Fourier transform of the ramp pattern which represents our phase error, again applicable to small angle approximations.) All the spurs are integer harmonics of the fundamental output frequency, many of which will have digitally folded into the first Nyquist zone of our implementation, as suggested in the diagram below. Unlike the diagram, the 2nd harmonic is not necessarily the strongest spur, but it helps to give context to the idea of spurs and SFDR. Dithering Dithering is the process of adding a small amount of noise (for example, using an LFSR generator as the PCW input) which will improve the SFDR at the expense of SNR. The overall noise power increases (due to our additional additive noise), but the spurious levels can be substantially reduced in the process.
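Putting the pieces together, here is a minimal NCO sketch (my own example parameters: 32-bit accumulator, 12-bit LUT address, 12-bit output; a real design would live in fixed-point hardware and would typically add the quarter-wave and dithering tricks described above):

```python
import math

ACC_BITS = 32   # phase accumulator width
LUT_BITS = 12   # phase bits kept after truncation (LUT address width)
OUT_BITS = 12   # amplitude quantization of the stored samples

AMP = 2 ** (OUT_BITS - 1) - 1
# One full cycle of a sine wave, quantized to OUT_BITS:
LUT = [round(AMP * math.sin(2 * math.pi * i / 2**LUT_BITS))
       for i in range(2**LUT_BITS)]

def nco(fcw, n_samples, pcw=0):
    """Generate n_samples; fcw sets the frequency and pcw a phase offset
    (both in accumulator units). The accumulator wraps on overflow."""
    acc, out = 0, []
    for _ in range(n_samples):
        # phase truncation: keep only the top LUT_BITS of the phase word
        addr = ((acc + pcw) >> (ACC_BITS - LUT_BITS)) & (2**LUT_BITS - 1)
        out.append(LUT[addr])
        acc = (acc + fcw) & (2**ACC_BITS - 1)
    return out

f_clock = 100e6
fcw = 64 * 2 ** (ACC_BITS - LUT_BITS)   # truncated bits are always zero here
print("F_out =", fcw * f_clock / 2**ACC_BITS, "Hz")
samples = nco(fcw, 64)
```

Because this particular FCW is a multiple of $2^{ACC\_BITS - LUT\_BITS}$, the truncated bits are always zero and the output is spur-free apart from amplitude quantization; any other FCW would exhibit the sawtooth phase error and the truncation spurs discussed above.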
Given a continuous time LTI system with impulse response $h(t)$ and determined by the transform $\mathcal{T}\{\cdot\}$, we define an input/output relationship as follows: $$ y(t) = \mathcal{T}\{ x(t) \} $$ which can be evaluated based on the convolution integral as: $$y(t) = \mathcal{T}\{ x(t) \} = x(t) \star h(t) = \int_{-\infty}^{\infty} h(\tau) x(t-\tau) d\tau $$ Now we call a signal (function) $x(t)$ an eigenfunction of such a system if $$ y(t) = \mathcal{T}\{ x(t) \} = K_x x(t)$$ where $K_x$ is a complex constant (the eigenvalue corresponding to the eigenfunction) dependent on the system and input parameters. Note that the output $y(t)$ must be equal to the input waveform, scaled by $K_x$, for all $t$. From this definition's point of view, let us consider whether $x(t) = e^{j\omega_0 t}$ is an eigenfunction of LTI systems in general or not: $$y(t) = \mathcal{T}\{ e^{j\omega_0 t} \} = \int_{-\infty}^{\infty} h(\tau) e^{j \omega_0 (t-\tau)} d\tau = e^{j \omega_0 t} \left( \int_{-\infty}^{\infty} h(\tau) e^{-j \omega_0 \tau} d\tau \right) = K \cdot e^{j \omega_0 t} $$ where the complex constant $K$ (the eigenvalue) is recognized as the continuous-time Fourier transform of the impulse response $h(t)$, $H(\omega)$, evaluated at the frequency $\omega_0$, which is also called the frequency response. Expressing $K$ as $K=H(\omega_0) = |H(\omega_0)| e^{j \phi_0}$, we can rewrite the output as $y(t) = K e^{j\omega_0 t} = |H(\omega_0)| e^{j \phi_0} e^{j\omega_0 t} = |H(\omega_0)| e^{j\omega_0 (t + \phi_0 / \omega_0)} = |H(\omega_0)| e^{j\omega_0 (t - t_0)} $ where $t_0 = - \phi_0/\omega_0$. We can show the relation as $$x(t) \longleftrightarrow |H(\omega_0)| e^{j\phi_0} x(t) $$ Therefore we conclude that $x(t)=e^{j\omega_0 t}$ in general is an eigenfunction of arbitrary LTI systems. What about the function $x(t)=e^{j \omega_1 t} + e^{j \omega_2 t}$? 
Using the linearity property of LTI systems, we can show that the respective outputs for each added term will be $ y_1(t) = |H(\omega_1)| e^{j\phi_1} e^{j\omega_1 t}$ and $ y_2(t) = |H(\omega_2)| e^{j\phi_2} e^{j\omega_2 t}$, therefore we have $$ y(t) = y_1(t) + y_2(t) = |H(\omega_1)| e^{j\phi_1} e^{j\omega_1 t} + |H(\omega_2)| e^{j\phi_2} e^{j\omega_2 t} = K_1 e^{j \omega_1 t} + K_2 e^{j \omega_2 t} \neq K \left( e^{j \omega_1 t} + e^{j \omega_2 t} \right) $$ Hence, unless we have $K = K_1 = K_2$, the signal $x(t)=e^{j \omega_1 t} + e^{j \omega_2 t}$ is not an eigenfunction of LTI systems in general. Finally, note that for the signal $x(t)=\cos(\omega_0 t)$ there exist LTI systems, namely those whose impulse responses $h(t)$ are real and even, that accept it as an eigenfunction; but not every LTI system in general will accept $\cos(\omega_0 t)$ as an eigenfunction, hence $\cos(\omega_0 t)$ is not an eigenfunction of LTI systems in general. On the other hand, $e^{j \omega_0 t}$, or $e^{s t}$ with complex $s$ in general, are eigenfunctions of every LTI system.
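A discrete-time analogue of this argument is easy to verify numerically. The sketch below uses an arbitrary FIR impulse response of my own choosing; the continuous-time reasoning is identical:

```python
import cmath
import math

h = [0.5, 1.0, -0.25, 0.1]   # an arbitrary (non-symmetric) LTI system

def filt(x):
    # y[n] = sum_k h[k] x[n-k], evaluated where the full sum is defined
    return [sum(h[k] * x[n - k] for k in range(len(h)))
            for n in range(len(h) - 1, len(x))]

w0 = 0.7
x = [cmath.exp(1j * w0 * n) for n in range(50)]
y = filt(x)

# For the eigenfunction exp(j w0 n), y[n]/x[n] is one constant: H(w0)
H_w0 = sum(h[k] * cmath.exp(-1j * w0 * k) for k in range(len(h)))
ratios = [y[i] / x[i + len(h) - 1] for i in range(len(y))]
print(max(abs(r - H_w0) for r in ratios))  # essentially zero

# cos(w0 n) is NOT an eigenfunction of this system: the ratio drifts
xc = [math.cos(w0 * n) for n in range(50)]
yc = filt(xc)
print(yc[1] / xc[4], yc[2] / xc[5])  # two different "ratios"
```

The complex exponential comes out scaled by the single constant $H(\omega_0)$, while the cosine does not, because this $h$ is neither real-and-even nor symmetric.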
The first or third option below may be close to what you're looking for. Or, switch from Newton-style to Leibniz-style notation for the derivative, as shown by the fourth option (newly fixed to incorporate @marmot's comment). A separate comment: to make the \frac{1}{2} term less visually dominant, consider using \tfrac instead of \frac.

\documentclass{article}
\usepackage{amsmath} % for \tfrac macro and general accent-placement support
\begin{document}
\[
\tfrac{1}{2}m \dot{\vec{x}} ^2 \quad
\tfrac{1}{2}m{\dot{\vec{x}}}^2 \quad
\tfrac{1}{2}m{\dot{\vec{x}}}^{\,2} \quad
\tfrac{1}{2}m\bigl(\tfrac{\mathrm{d}\vec{x}}{\mathrm{d}t}\bigr)^{\!2}
\]
\end{document}
Condensed Matter > Statistical Mechanics Title: One-Dimensional Impenetrable Anyons in Thermal Equilibrium. I. Anyonic Generalization of Lenard's Formula (Submitted on 28 Jan 2008 (v1), last revised 30 Apr 2008 (this version, v2)) Abstract: We have obtained an expansion of the reduced density matrices (or, equivalently, correlation functions of the fields) of impenetrable one-dimensional anyons in terms of the reduced density matrices of fermions using the mapping between anyon and fermion wavefunctions. This is the generalization to anyonic statistics of the result obtained by A. Lenard for bosons. In the case of impenetrable but otherwise free anyons with statistical parameter $\kappa$, the anyonic reduced density matrices in the grand canonical ensemble are expressed as Fredholm minors of the integral operator ($1-\gamma \hat \theta_T$) with complex statistics-dependent coefficient $\gamma=(1+e^{\pm i\pi\kappa})/ \pi$. For $\kappa=0$ we recover the bosonic case of Lenard, $\gamma=2/\pi$. Due to nonconservation of parity, the anyonic field correlators $\langle \Psi^\dagger(x')\Psi(x)\rangle$ are different depending on the sign of $x'-x$. Submission history From: Ovidiu Patu [view email] [v1] Mon, 28 Jan 2008 23:08:06 GMT (14kb) [v2] Wed, 30 Apr 2008 21:08:32 GMT (14kb)
Given a 2D lattice with coordinates $1 \leq x \leq c$ and $1 \leq y \leq d$, we define $f(x, y) = xy$. We wish to find a boolean function $I(x,y)$ that determines in $O(1)$ time whether or not $(x,y)$ belongs to a set of points $S$ of size $k$ whose sum $Z(S) = \sum_{(x,y) \in S} f(x,y)$ is less than or equal to that of any other set of size $k$. One may use $O(c+d+k)$ time and space to construct $I$. Is this possible? Is this a known problem (my search turned up nothing)? Can https://en.wikipedia.org/wiki/Divisor_summatory_function and its approximation help us? Motivation: I work in NLP and am trying to find an optimal way of storing part of a word-word cooccurrence matrix in memory and part on disk. This matrix is very sparse. I'm making the simplifying assumption that the probability of two words co-occurring is proportional to their unigram frequencies. By ranking words in terms of frequency, we get the $c$ and $d$ terms. So the smaller the ranks $x$ and $y$, the more likely the words are to co-occur, and this value should be stored in memory. Since there will be billions of lookups, $I$ needs to be fast. Thanks!
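For what it's worth, here is one possible construction (a sketch under the stated assumptions; all function names are hypothetical): since $f(x,y)=xy$, an optimal $S$ consists of the $k$ lattice points with the smallest products, so it suffices to find the $k$-th smallest product $t$ by binary search over the counting function $\#\{(x,y): xy\le t\}=\sum_{x=1}^{c}\min(d,\lfloor t/x\rfloor)$ (a truncated divisor-summatory-style sum), after which membership is an $O(1)$ product comparison; ties at $t$ need a counter. Construction here is $O(c\log(cd))$ rather than $O(c+d+k)$, but it illustrates the idea.

```python
# Sketch: S = the k points with smallest f(x,y) = x*y. Find the k-th smallest
# product by binary search over a counting function, then membership is an
# O(1) comparison (points with product exactly t are interchangeable).

def count_le(t, c, d):
    """Number of grid points (x, y), 1<=x<=c, 1<=y<=d, with x*y <= t.
    A truncated divisor-summatory-style sum, O(c) time."""
    return sum(min(d, t // x) for x in range(1, c + 1))

def build_threshold(c, d, k):
    """Smallest t such that at least k grid points have product <= t."""
    lo, hi = 1, c * d
    while lo < hi:
        mid = (lo + hi) // 2
        if count_le(mid, c, d) >= k:
            hi = mid
        else:
            lo = mid + 1
    return lo

c, d, k = 7, 9, 20
t = build_threshold(c, d, k)
strictly_below = count_le(t - 1, c, d)   # points with product < t are all in S
on_threshold = k - strictly_below        # how many product == t points S needs

def I(x, y):
    # O(1) membership test; any on_threshold of the product == t points qualify
    return x * y < t

# Verify the minimal achievable Z(S) against brute force
products = sorted(x * y for x in range(1, c + 1) for y in range(1, d + 1))
in_mem = sum(x * y for x in range(1, c + 1) for y in range(1, d + 1) if I(x, y))
assert in_mem + on_threshold * t == sum(products[:k])
```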
I've noticed that the following fact can be proven in a few lines using $C^*$-algebra theory. I wonder whether it has a simple elementary proof or not. Perhaps you can give me a reference. Suppose $X$ is a compact Hausdorff space, $V\subset X$ is a closed subset, and $f\colon V\to \mathbb{R}$ is a continuous function. Then there exists a continuous function $g\colon X\to \mathbb{R}$ whose restriction to $V$ is $f$. The $C^*$-algebraic proof is the following. First note that it is enough to find $g\colon X\to \mathbb{C}$ (then we can take its real part). Consider the restriction map $\phi\colon C(X)\to C(V)$. It is a $*$-homomorphism, and therefore its image $\phi(C(X))$ is a closed commutative $C^*$-subalgebra of $C(V)$. Moreover $\phi(C(X))$ contains the constants, is closed under complex conjugation, and separates points of $V$ (because $C(X)$ does). By the Stone-Weierstrass theorem, $\phi(C(X))=C(V)$.
Given the ionization reaction $2H^0 \rightleftharpoons H^+ + H^-$ where: $H^+$ is the ionized state with electron occupancy $n = 0$ and energy $0$; $H^0$ is the doubly degenerate neutral hydrogen atom with electron occupancy $n = 1$ and energy $-\Delta$; $H^-$ is the ionized state with electron occupancy $n = 2$ and energy $-(\epsilon + \delta)$. I want to calculate the fractions of $H^+$ and $H^-$ ions using the condition $ \langle n \rangle = 1$ (which "is a valid approximation in the case of the solar photosphere since the free electron concentration is less than the hydrogen concentration by a factor greater than $10^3$"). Further, I want to show that at low temperature this fraction is $e^{-\beta(\Delta-\epsilon)/2}$. In summary, my approach has been to: Calculate the grand partition function $\mathcal{Z}$; Use the condition $\langle n \rangle = \frac{\alpha}{\mathcal{Z}} \frac{\partial \mathcal{Z}}{\partial \alpha}$ ($\alpha \equiv e^{\beta \zeta}$) to determine an approximate value for the chemical potential $\zeta$; Evaluate the fractions of $H^+$ and $H^-$ using $P(n,i) = \frac{\exp{\beta(n\zeta - E_i)}}{\sum \exp{\beta(n\zeta - E_i)}}$. The problem I am having is the third part. I expect the fraction of $H^+$ and the fraction of $H^-$ to be the same, which is not the case according to my calculation. Also, my solution does not show that at low $T$ the fraction is $e^{-\beta(\Delta-\epsilon)/2}$. In greater detail, I have: $$\mathcal{Z} = \sum \exp{\beta(n\zeta - E_i)} = 2 + \alpha e^{\beta \Delta} + \alpha^2 e^{\beta (\Delta + \epsilon)}$$ Then $\langle n \rangle = \frac{\alpha}{\mathcal{Z}} \frac{\partial \mathcal{Z}}{\partial \alpha}= \frac{\alpha e^{\beta \Delta} + 2 \alpha^2 e^{\beta \epsilon}}{2 + \alpha e^{\beta \Delta} + \alpha^2 e^{\beta \epsilon}}=1 $ implies $\alpha = \pm (2e^{-\beta \epsilon})^{1/2}$, where I discard the negative solution.
Then, since I assume the gas is ideal, $\zeta = k_B T \ln{\alpha} = \frac{1}{2} k_B T \ln{2} - \frac{\epsilon}{2}$. Using this value for the chemical potential, I plug into the grand canonical distribution. For the fraction of $H^+$ I expect: $$P(n=0) = \frac{1}{2 + \alpha e^{\beta \Delta} + \alpha^2 e^{\beta \epsilon}}$$ and for the fraction of $H^-$ I expect: $$P(n=2) = \frac{e^{\beta (2 \zeta + \epsilon)}}{2 + \alpha e^{\beta \Delta} + \alpha^2 e^{\beta \epsilon}}$$ These two probabilities are not the same. However, I don't understand how that could be, since I imagine the dissociation equation necessarily implies equal fractions of $H^+$ and $H^-$. Further, since neither of the probabilities agrees with the low-temperature limit I am trying to show, I imagine that I am misunderstanding how to calculate the fraction of each ionization state. My questions: Does the dissociation equation imply equal fractions of $H^+$ and $H^-$? Is the fraction of $H^+$/$H^-$ proportional to the Gibbs factor divided by the grand partition function (i.e. $P(n=0)$/$P(n=2)$ as I've written it)?
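Not an answer, but plugging numbers into the expressions exactly as posted (parameter values arbitrary) makes the puzzle concrete: with $\alpha$ chosen so that $\langle n\rangle=1$, the posted Gibbs factors give $P(n=2)=2\,P(n=0)$ at every temperature, which suggests the mismatch is a factor-of-two degeneracy bookkeeping issue rather than a conceptual one.

```python
import math

# Plugging numbers into the expressions as posted (Gibbs terms with e^{beta*eps}),
# with alpha chosen so that <n> = 1; parameter values are arbitrary.
beta, Delta, eps = 1.3, 2.0, 0.7
alpha = math.sqrt(2 * math.exp(-beta * eps))   # solves alpha^2 e^{beta eps} = 2

Z = 2 + alpha * math.exp(beta * Delta) + alpha**2 * math.exp(beta * eps)
n_avg = (alpha * math.exp(beta * Delta) + 2 * alpha**2 * math.exp(beta * eps)) / Z
assert abs(n_avg - 1) < 1e-12                  # the constraint <n> = 1 holds

P0 = 1 / Z                                     # posted fraction of H+ (n = 0)
P2 = alpha**2 * math.exp(beta * eps) / Z       # posted fraction of H- (n = 2)
assert abs(P2 - 2 * P0) < 1e-12                # off by exactly the factor 2
```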
From an advanced perspective, this follows from a variant of semistable reduction which Harris and Morrison call Nodal Reduction. I quote from their textbook Moduli of Curves, Proposition 3.49. Let $B$ be a smooth curve, $0$ a point of $B$ and $B^{\ast} = B \setminus \{ 0 \}$. Let $X \to B^{\ast}$ be a flat family of nodal curves of genus $g$, $\psi:X \to Z$ any morphism to a projective scheme $Z$, ... Then there exists a branched cover $B' \to B$ and a [projective] family $X' \to B'$ of nodal curves extending the fiber product $X \times_{B^{\ast}} B'$ with the following properties: The total space $X'$ is smooth. The morphism $X \times_{B^{\ast}} B' \to X \overset{\psi}{\longrightarrow} Z$ extends to a morphism on all of $X'$. ... The word "projective" in square brackets in the second line is my addition but was clearly intended, since otherwise we don't have to fill in the central fiber at all. The ellipses conceal conditions concerning marked points, which we won't need. Let $\bar{X} \to B$ be a flat projective family whose fibers over $B^{\ast}$ are reduced and nodal. I will take $Z=\bar{X}$ with $\psi$ the inclusion $X \hookrightarrow \bar{X}$. So I get to conclude that there is a branched cover $B' \to B$ and a nodal family $X'$ over $B'$, such that there is a map $\phi: X' \to \bar{X}$. This theorem is stronger than stable and semi-stable reduction in that the new family maps to the old family, but weaker in that we have less control over the fibers of $X'$; they are reduced curves with nodal singularities, but there may be rational components with only one node. In particular, in your setting, we have a map $\phi: X'_0 \to \bar{X}_0$. Since every component of $X'_0$ is genus $0$, if $\bar{X}_0$ has higher genus, the map $\phi$ is constant on $X'_0$. So $\phi(X')$ meets the central fiber of $\bar{X}$ at only one point. This is absurd; the next paragraph gives one way to turn it into a contradiction.
But $X' \to B'$ is projective, hence proper, and $B' \to B$ is finite, hence proper, so $X' \to B$ is proper. So $X' \times_B \bar{X} \to \bar{X}$ is a closed map. But $X' \times_B \bar{X} = X'$ and the map $X' \times_B \bar{X} \to \bar{X}$ is just $\phi: X' \to \bar{X}$, so we deduce that $\phi$ is a closed map and $\phi(X')$ is closed. As $\phi(X')$ contains all of $X \times_{B^{\ast}} B'$, the map $\phi$ must be surjective, a contradiction.
It is known that the Kitaev Hamiltonian and its spin-liquid ground state both break the $SU(2)$ spin-rotation symmetry. So what's the spin-rotation-symmetry group for the Kitaev model? It's obvious that the Kitaev Hamiltonian is invariant under $\pi$ rotations about the three spin axes, and in some recent papers, the authors give the "group" (see the Comments at the end) $G=\left \{1,e^{i\pi S_x}, e^{i\pi S_y},e^{i\pi S_z} \right \}$, where $(e^{i\pi S_x}, e^{i\pi S_y},e^{i\pi S_z})=(i\sigma_x,i\sigma_y,i\sigma_z )$, with $\mathbf{S}=\frac{1}{2}\mathbf{\sigma}$ and $\mathbf{\sigma}$ being the Pauli matrices. But how about the quaternion group $Q_8=\left \{1,-1,e^{i\pi S_x}, e^{-i\pi S_x},e^{i\pi S_y},e^{-i\pi S_y},e^{i\pi S_z}, e^{-i\pi S_z}\right \}$, with $-1$ representing the $2\pi$ spin-rotation operator. On the other hand, consider the dihedral group $D_2=\left \{ \begin{pmatrix}1 & 0 &0 \\ 0& 1 & 0\\ 0&0 &1 \end{pmatrix},\begin{pmatrix}1 & 0 &0 \\ 0& -1 & 0\\ 0&0 &-1 \end{pmatrix},\begin{pmatrix}-1 & 0 &0 \\ 0& 1 & 0\\ 0&0 &-1 \end{pmatrix},\begin{pmatrix}-1 & 0 &0 \\ 0& -1 & 0\\ 0&0 &1 \end{pmatrix} \right \}$, whose $SO(3)$ matrices can also implement the $\pi$ spin rotations. So which one do you choose: $G$, $Q_8$, or $D_2$? Notice that $Q_8$ is a subgroup of $SU(2)$, while $D_2$ is a subgroup of $SO(3)$. Furthermore, $D_2\cong Q_8/Z_2$, just like $SO(3)\cong SU(2)/Z_2$, where $Z_2=\left \{ \begin{pmatrix}1 & 0 \\ 0 &1\end{pmatrix} ,\begin{pmatrix}-1 & 0 \\ 0 &-1 \end{pmatrix} \right \}$. Comments: The $G$ defined above is not even a group, since, e.g., $(e^{i\pi S_z})^2=-1\notin G$. Remarks: Notice here that $D_2$ cannot be viewed as a subgroup of $Q_8$, just like $SO(3)$ cannot be viewed as a subgroup of $SU(2)$. Supplementary: As an example, consider a two-spin-1/2 system. We want to gain some insight into what kinds of wavefunctions preserve the $Q_8$ spin-rotation symmetry from this simplest model.
For convenience, let $R_\alpha =e^{\pm i\pi S_\alpha}=-4S_1^\alpha S_2^\alpha$ represent the $\pi$ spin-rotation operators around spin axes $\alpha=x,y,z$, where $S_\alpha=S_1^\alpha+ S_2^\alpha$. Therefore, by saying a wavefunction $\psi$ has $Q_8$ spin-rotation symmetry, we mean $R_\alpha\psi=\lambda_ \alpha \psi$, with $\left |\lambda_ \alpha \right |^2=1$. After a simple calculation, we find that a $Q_8$ spin-rotation symmetric wavefunction $\psi$ could only take one of the following 4 possible forms: $(1) \left | \uparrow \downarrow \right \rangle-\left | \downarrow \uparrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(1,1,1)$ (Singlet state with full $SU(2)$ spin-rotation symmetry), which is annihilated by $S_x,S_y,$ and $S_z$, $(2) \left | \uparrow \downarrow \right \rangle+\left | \downarrow \uparrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(-1,-1,1)$, which is annihilated by $S_z$, $(3) \left | \uparrow \uparrow \right \rangle-\left | \downarrow \downarrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(1,-1,-1)$, which is annihilated by $S_x$, $(4) \left | \uparrow \uparrow \right \rangle+\left | \downarrow \downarrow \right \rangle$, with $(\lambda_x,\lambda_y,\lambda_z)=(-1,1,-1)$, which is annihilated by $S_y$. Note that any kind of superposition of the above states would no longer be an eigenfunction of $R_\alpha$ and hence would break the $Q_8$ spin-rotation symmetry.
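The table above is easy to verify numerically. The sketch below builds $R_\alpha=e^{i\pi S_\alpha}=(i\sigma_\alpha)\otimes(i\sigma_\alpha)$ on the two-spin Hilbert space and checks the four eigenvalue triples, as well as the single-spin relation $(e^{i\pi S_z})^2=-1$ that prevents the four-element set $G$ from being closed as a group.

```python
import numpy as np

# Verify the table of R_alpha eigenvalues for the four two-spin states.
# On two spins-1/2, R_alpha = e^{i pi S_alpha} = (i sigma_alpha) (x) (i sigma_alpha).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
R = {a: np.kron(1j * s, 1j * s) for a, s in (("x", sx), ("y", sy), ("z", sz))}

# Basis order |uu>, |ud>, |du>, |dd>; keys are (lambda_x, lambda_y, lambda_z)
states = {
    (1, 1, 1):   np.array([0, 1, -1, 0]) / np.sqrt(2),   # singlet, case (1)
    (-1, -1, 1): np.array([0, 1, 1, 0]) / np.sqrt(2),    # case (2)
    (1, -1, -1): np.array([1, 0, 0, -1]) / np.sqrt(2),   # case (3)
    (-1, 1, -1): np.array([1, 0, 0, 1]) / np.sqrt(2),    # case (4)
}
for (lx, ly, lz), psi in states.items():
    for lam, a in ((lx, "x"), (ly, "y"), (lz, "z")):
        assert np.allclose(R[a] @ psi, lam * psi)

# On a single spin-1/2, (e^{i pi S_z})^2 = (i sigma_z)^2 = -1, which is why
# the four-element set G in the question fails to be closed as a group.
assert np.allclose((1j * sz) @ (1j * sz), -np.eye(2))
```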
Cutwidth¶ This module implements several algorithms to compute the cutwidth of a graph and the corresponding ordering of the vertices. It also implements test functions for evaluating the width of a linear ordering (or layout). Given an ordering \(v_1,\cdots,v_n\) of the vertices of \(V(G)\), its cost is defined as: \[\text{cost}(v_1,\cdots,v_n) = \max_{1\leq i\leq n-1} |\{(u,w)\in E(G)\mid u\in\{v_1,\cdots,v_i\}\text{ and }w\in V(G)\setminus\{v_1,\cdots,v_i\}\}|\] The cutwidth of a graph \(G\) is equal to the minimum cost of an ordering of its vertices. This module contains the following methods cutwidth() Return the cutwidth of the graph and the corresponding vertex ordering. cutwidth_dyn() Compute the cutwidth of \(G\) using an exponential time and space algorithm based on dynamic programming cutwidth_MILP() Compute the cutwidth of \(G\) and the optimal ordering of its vertices using an MILP formulation width_of_cut_decomposition() Return the width of the cut decomposition induced by the linear ordering \(L\) of the vertices of \(G\) Exponential algorithm for cutwidth¶ In order to find an optimal ordering of the vertices for the vertex separation, this algorithm tries to save time by computing the function \(c'(S)\) at most once for each of the sets \(S\subseteq V(G)\). These values are stored in an array of size \(2^n\) where reading the value of \(c'(S)\) or updating it can be done in constant time. Assuming that we can compute the cost of a set \(S\) and remember it, finding an optimal ordering is an easy task. Indeed, we can think of the sequence \(v_1, ..., v_n\) of vertices as a sequence of sets \(\{v_1\}, \{v_1,v_2\}, ..., \{v_1,...,v_n\}\), whose cost is precisely \(\max c'(\{v_1\}), c'(\{v_1,v_2\}), ..., c'(\{v_1,...,v_n\})\). Hence, when considering the digraph on the \(2^n\) sets \(S\subseteq V(G)\) where there is an arc from \(S\) to \(S'\) if \(S'=S\cup\{v\}\) for some \(v\) (that is, if the sets \(S\) and \(S'\) can be consecutive in a sequence), an ordering of the vertices of \(G\) corresponds to a path from \(\emptyset\) to \(\{v_1,...,v_n\}\).
In this setting, checking whether there exists an ordering of cost less than \(k\) can be achieved by checking whether there exists a directed path from \(\emptyset\) to \(\{v_1,...,v_n\}\) using only sets of cost less than \(k\). This is just a depth-first search, performed for each \(k\). Lazy evaluation of \(c'\) In the previous algorithm, most of the time is actually spent on the computation of \(c'(S)\) for each set \(S\subseteq V(G)\) – i.e. \(2^n\) computations of neighborhoods. This can be seen as a huge waste of time when noticing that it is useless to know that the value \(c'(S)\) for a set \(S\) is less than \(k\) if all the paths leading to \(S\) have a cost greater than \(k\). For this reason, the value of \(c'(S)\) is computed lazily during the depth-first search. Explanation: When the depth-first search discovers a set of cost less than \(k\), the costs of its out-neighbors (the potential sets that could follow it in the optimal ordering) are evaluated. When an out-neighbor is found that has a cost smaller than \(k\), the depth-first search continues with this set, which is explored with the hope that it could lead to a path toward \(\{v_1,...,v_n\}\). On the other hand, if an out-neighbor has a cost larger than \(k\), it is useless to attempt to build a cheap sequence going through this set, and the exploration stops there. This way, a large number of sets will never be evaluated, and a lot of computational time is saved. Besides, some improvement is also made by "improving" the values found by \(c'\). Indeed, \(c'(S)\) is a lower bound on the cost of a sequence containing the set \(S\), but if all out-neighbors of \(S\) have a cost of \(c'(S) + 5\) then one knows that having \(S\) in a sequence means a total cost of at least \(c'(S) + 5\).
For this reason, for each set \(S\) we store the value of \(c'(S)\), and replace it by \(\max (c'(S), \min_{\text{next}})\) (where \(\min_{\text{next}}\) is the minimum of the costs of the out-neighbors of \(S\)) once the costs of these out-neighbors have been evaluated by the algorithm. This algorithm and its implementation are very similar to sage.graphs.graph_decompositions.vertex_separation.vertex_separation_exp(). The main difference is in the computation of \(c'(S)\). See the vertex separation module's documentation for more details on this algorithm. Note Because of its current implementation, this algorithm only works on graphs on strictly less than 32 vertices. This can be changed to 64 if necessary, but 32 vertices already require 4GB of memory. MILP formulation for the cutwidth¶ We describe a mixed integer linear program (MILP) for determining an optimal layout for the cutwidth of \(G\). Variables: \(x_v^k\) – Variable set to 1 if vertex \(v\) is placed in the ordering at position \(i\) with \(i\leq k\), and 0 otherwise. \(y_{u,v}^{k}\) – Variable set to 1 if one of \(u\) or \(v\) is at a position \(i\leq k\) and the other is at a position \(j>k\), in which case we have to count edge \(uv\) in the cut at position \(k\). Otherwise, \(y_{u,v}^{k}=0\). The value of \(y_{u,v}^{k}\) is a xor of the values of \(x_u^k\) and \(x_v^k\). \(z\) – Objective value to minimize. It is equal to the maximum over all positions \(k\) of the number of edges with one extremity at position at most \(k\) and the other at position strictly more than \(k\), that is \(\sum_{uv\in E}y_{u,v}^{k}\). MILP formulation: Constraints (1)-(3) ensure that all vertices have a distinct position. Constraints (4)-(5) force variable \(y_{u,v}^k\) to 1 if the edge is in the cut. Constraint (6) counts the number of edges starting at position at most \(k\) and ending at a position strictly larger than \(k\). This formulation corresponds to the method cutwidth_MILP(). Methods¶ sage.graphs.graph_decompositions.cutwidth.
cutwidth( G, algorithm='exponential', cut_off=0, solver=None, verbose=False)¶ Return the cutwidth of the graph and the corresponding vertex ordering. INPUT: G – a Graph or a DiGraph algorithm – string (default: "exponential"); algorithm to use among: exponential – Use an exponential time and space algorithm based on dynamic programming. This algorithm only works on graphs with strictly less than 32 vertices. MILP – Use a mixed integer linear programming formulation. This algorithm has no size restriction but could take a very long time. cut_off – integer (default: 0); used to stop the search as soon as a solution with width at most cut_off is found, if any. If this bound cannot be reached, the best solution found is returned. solver – string (default: None); specify a Linear Program (LP) solver to be used. If set to None, the default one is used. This parameter is used only when algorithm='MILP'. For more information on LP solvers and which default solver is used, see the method solve of the class MixedIntegerLinearProgram. verbose – boolean (default: False); whether to display information on the computations. OUTPUT: A pair (cost, ordering) representing the optimal ordering of the vertices and its cost. EXAMPLES: Cutwidth of a Complete Graph: sage: from sage.graphs.graph_decompositions.cutwidth import cutwidth sage: G = graphs.CompleteGraph(5) sage: cw,L = cutwidth(G); cw 6 sage: K = graphs.CompleteGraph(6) sage: cw,L = cutwidth(K); cw 9 sage: cw,L = cutwidth(K+K); cw 9 The cutwidth of a \(p\times q\) Grid Graph with \(p\leq q\) is \(p+1\): sage: from sage.graphs.graph_decompositions.cutwidth import cutwidth sage: G = graphs.Grid2dGraph(3,3) sage: cw,L = cutwidth(G); cw 4 sage: G = graphs.Grid2dGraph(3,5) sage: cw,L = cutwidth(G); cw 4 sage.graphs.graph_decompositions.cutwidth. cutwidth_MILP( G, lower_bound=0, solver=None, verbose=0)¶ MILP formulation for the cutwidth of a Graph.
This method uses a mixed integer linear program (MILP) for determining an optimal layout for the cutwidth of \(G\). See the module's documentation for more details on this MILP formulation. INPUT: G – a Graph lower_bound – integer (default: 0); the algorithm searches for a solution with cost larger or equal to lower_bound. If the given bound is larger than the optimal solution the returned solution might not be optimal. If the given bound is too high, the algorithm might not be able to find a feasible solution. solver – string (default: None); specify a Linear Program (LP) solver to be used. If set to None, the default one is used. For more information on LP solvers and which default solver is used, see the method solve of the class MixedIntegerLinearProgram. verbose – integer (default: 0); sets the level of verbosity. Set to 0 by default, which means quiet. OUTPUT: A pair (cost, ordering) representing the optimal ordering of the vertices and its cost. EXAMPLES: Cutwidth of a Cycle graph: sage: from sage.graphs.graph_decompositions import cutwidth sage: G = graphs.CycleGraph(5) sage: cw, L = cutwidth.cutwidth_MILP(G); cw 2 sage: cw == cutwidth.width_of_cut_decomposition(G, L) True sage: cwe, Le = cutwidth.cutwidth_dyn(G); cwe 2 Cutwidth of a Complete graph: sage: from sage.graphs.graph_decompositions import cutwidth sage: G = graphs.CompleteGraph(4) sage: cw, L = cutwidth.cutwidth_MILP(G); cw 4 sage: cw == cutwidth.width_of_cut_decomposition(G, L) True Cutwidth of a Path graph: sage: from sage.graphs.graph_decompositions import cutwidth sage: G = graphs.PathGraph(3) sage: cw, L = cutwidth.cutwidth_MILP(G); cw 1 sage: cw == cutwidth.width_of_cut_decomposition(G, L) True sage.graphs.graph_decompositions.cutwidth. cutwidth_dyn( G, lower_bound=0)¶ Dynamic programming algorithm for the cutwidth of a Graph. This function uses dynamic programming algorithm for determining an optimal layout for the cutwidth of \(G\). See the module's documentation for more details on this method.
INPUT: G – a Graph lower_bound – integer (default: 0); the algorithm returns immediately if it finds a solution lower than or equal to lower_bound (in which case it may not be optimal). OUTPUT: A pair (cost, ordering) representing the optimal ordering of the vertices and its cost. Note Because of its current implementation, this algorithm only works on graphs on strictly less than 32 vertices. This can be changed to 63 if necessary, but 32 vertices already require 4GB of memory. sage.graphs.graph_decompositions.cutwidth. width_of_cut_decomposition( G, L)¶ Return the width of the cut decomposition induced by the linear ordering \(L\) of the vertices of \(G\). If \(G\) is an instance of Graph, this function returns the width \(cw_L(G)\) of the cut decomposition induced by the linear ordering \(L\) of the vertices of \(G\). \[cw_L(G) = \max_{0\leq i< |V|-1} |\{(u,w)\in E(G)\mid u\in L[:i]\text{ and }w\in V(G)\setminus L[:i]\}|\] INPUT: G – a Graph L – a linear ordering of the vertices of G EXAMPLES: Cut decomposition of a Cycle graph: sage: from sage.graphs.graph_decompositions import cutwidth sage: G = graphs.CycleGraph(6) sage: L = G.vertices() sage: cutwidth.width_of_cut_decomposition(G, L) 2 Cut decomposition of a Path graph: sage: from sage.graphs.graph_decompositions import cutwidth sage: P = graphs.PathGraph(6) sage: cutwidth.width_of_cut_decomposition(P, [0, 1, 2, 3, 4, 5]) 1 sage: cutwidth.width_of_cut_decomposition(P, [5, 0, 1, 2, 3, 4]) 2 sage: cutwidth.width_of_cut_decomposition(P, [0, 2, 4, 1, 3, 5]) 5
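For readers without Sage at hand, the documented behavior is easy to reproduce in plain Python (a sketch, not Sage's implementation): compute the width of a given ordering exactly as in the formula for \(cw_L(G)\) above, and obtain the cutwidth by brute force over all orderings (feasible only for tiny graphs).

```python
from itertools import permutations

# Plain-Python analogue of width_of_cut_decomposition, plus a brute-force
# cutwidth over all orderings (only sensible for very small graphs).

def width_of_ordering(edges, L):
    """Max over cut positions of the number of edges crossing the cut."""
    pos = {v: i for i, v in enumerate(L)}
    return max(
        sum(1 for u, w in edges if min(pos[u], pos[w]) <= i < max(pos[u], pos[w]))
        for i in range(len(L) - 1)
    )

def brute_cutwidth(vertices, edges):
    return min(width_of_ordering(edges, L) for L in permutations(vertices))

path6 = [(i, i + 1) for i in range(5)]
assert width_of_ordering(path6, [0, 1, 2, 3, 4, 5]) == 1   # doc example
assert width_of_ordering(path6, [5, 0, 1, 2, 3, 4]) == 2   # doc example
assert width_of_ordering(path6, [0, 2, 4, 1, 3, 5]) == 5   # doc example

cycle5 = [(i, (i + 1) % 5) for i in range(5)]
assert brute_cutwidth(range(5), cycle5) == 2               # matches CycleGraph(5)
```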
I have trouble with finding the eigenstates of a spherical pendulum (length $l$, mass $m$) under the small angle approximation. My intuition is that the final result should be some sort of combination of a harmonic oscillator in $\theta$ and a free particle in $\phi$, but it's not obvious to see this from the Schrodinger equation: $$-\frac{\hbar^2}{2ml^2}\bigg[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\bigg(\sin\theta\frac{\partial\psi}{\partial\theta}\bigg) + \frac{1}{\sin^2\theta}\frac{\partial^2\psi}{\partial\phi^2} \bigg] + mgl(1-\cos\theta)\psi(\theta,\phi) = E\psi(\theta,\phi) $$ Using $\sin\theta \approx \theta$ and $\cos\theta\approx 1-\theta^2/2$ leads me to $$ -\frac{\hbar^2}{2ml^2}\bigg(\frac{\theta}{\Theta}\frac{d\Theta}{d\theta} + \frac{\theta^2}{\Theta}\frac{d^2\Theta}{d\theta^2} + \frac{1}{\Phi}\frac{d^2\Phi}{d\phi^2} \bigg) + \frac{1}{2}mgl\theta^4 = E\theta^2 $$ Here I've already used the ansatz $\psi(\theta,\phi)=\Theta(\theta)\Phi(\phi)$. Of course I can throw away the $\theta^4$ term, but any further simplifications with $\theta^2$ terms would also eliminate the energy, which is what I want to find. I've also tried to solve the $\Theta(\theta)$ equation with series solutions, and the result seems weird and cannot give me any energy quantization. Another attempt is to write the entire kinetic energy term in terms of angular momentum operators, which gives $$ H=\frac{1}{2ml^2}\bigg(L_\theta^2 + \frac{L_\phi^2}{\sin^2\theta} \bigg) + mgl(1-\cos\theta) $$ I was hoping to solve this with raising and lowering operators, but that $1/\sin^2\theta$ term is really a pain in the ass. I have no idea of finding a suitable ladder operator that satisfies $[H,\hat{a}] = c\hat{a}$. Any ideas?
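Not a closed-form answer, but a numerical sanity check of what the small-angle limit should give. Assuming the problem reduces to a 2D isotropic harmonic oscillator with $\omega=\sqrt{g/l}$, so that $E=\hbar\omega(2n_r+|m_\phi|+1)$, the separated $\theta$-equation can be diagonalized by finite differences after the standard substitution $u=\sqrt{\theta}\,\Theta$, which removes the first-derivative term. In natural units $\hbar=m=l=g=1$ the lowest $m_\phi=1$ levels should come out near 2 and 4.

```python
import numpy as np

# Finite-difference check of the small-angle limit, in units hbar = m = l = g = 1
# (so omega = sqrt(g/l) = 1). With u(theta) = sqrt(theta) * Theta(theta) the
# separated radial equation becomes
#   -u''/2 + [theta^2/2 + (m_phi^2 - 1/4)/(2 theta^2)] u = E u,
# whose spectrum should be E = 2 n_r + |m_phi| + 1 (2D isotropic oscillator).
m_phi = 1
N, tmax = 1200, 8.0          # grid size and cutoff (wavefunction ~ e^{-theta^2/2})
h = tmax / (N + 1)
theta = h * np.arange(1, N + 1)

V = theta**2 / 2 + (m_phi**2 - 0.25) / (2 * theta**2)
H = (
    np.diag(1.0 / h**2 + V)                          # -u''/2 diagonal + potential
    + np.diag(-0.5 / h**2 * np.ones(N - 1), 1)       # -u''/2 off-diagonals
    + np.diag(-0.5 / h**2 * np.ones(N - 1), -1)
)
E = np.linalg.eigvalsh(H)[:2]
assert abs(E[0] - 2.0) < 0.05    # expected 2*0 + 1 + 1 = 2
assert abs(E[1] - 4.0) < 0.05    # expected 2*1 + 1 + 1 = 4
```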
Problem statement Given \(n\) points with their distances \(d_{i,j}\), select \(k\) points such that the sum of the distances between the selected points is maximized [1]. A simple MIQP (Mixed-Integer Quadratic Programming) model is: Non-convex MIQP Model \[\begin{align} \max & \sum_{i \lt j} \color{darkblue} d_{i,j} \color{darkred}x_{i} \color{darkred}x_{j} \\ & \sum_i \color{darkred} x_{i} = \color{darkblue} k \\ & \color{darkred}x_{i} \in \{0,1\} \end{align}\] Notice we only need to look at distances \(d_{i,j}\) with \(i\lt j\), as we can assume symmetry. If not, just make \(d_{i,j}\) symmetric by: \[d_{i,j} = \frac{d_{i,j} + d_{j,i}}{2}\] As we shall see, this is not such an easy problem to solve to optimality. This problem is called the \(p\)-dispersion-sum problem. Example data I generated \(n=50\) random \(2d\) points. Let's try to find \(k=10\) points that are most dispersed. The distance matrix is formed by calculating Euclidean distances. 10 out of 50 most dispersed points Solve non-convex problem
- Throw this into a global solver such as Baron, Couenne or Antigone
- Use a solver like Cplex (option qtolin) or Gurobi (option preQlinearize) that can linearize the problem automatically for us. See further down how we can do this ourselves.
- Instruct Cplex to use a QP formulation (option qtolin=0) and tell it to use the global QP solver (option optimalitytarget=3)
Convexification of the quadratic model Calculate the largest eigenvalue \(\lambda_{max}\) of the distance matrix \(0.5 D\). We multiply by 0.5 because we only use half of the matrix in the objective to prevent double counting.
If \(\lambda_{max} \lt 0 \): we are done (the matrix is negative-definite). Otherwise, form the objective: \[\max \sum_{i \lt j} d_{i,j} x_{i} x_{j} - \lambda_{max} \sum_i x_i^2 + \lambda_{max} \sum_i x_i\] Linearization Linearized Model 1 \[\begin{align} \max & \sum_{i \lt j} \color{darkblue} d_{i,j} \color{darkred}y_{i,j} \\ & \sum_i \color{darkred} x_{i} = \color{darkblue} k \\ & \color{darkred}y_{i,j} \le \color{darkred} x_i && \forall i\lt j \\ & \color{darkred}y_{i,j} \le \color{darkred} x_j && \forall i\lt j \\ & \color{darkred}x_{i} \in \{0,1\} \\ & \color{darkred}y_{i,j} \in [0,1] \end{align}\] There are two things that may need some attention:
- The inequality \(y_{i,j}\ge x_i +x_j -1\) was dropped. The objective will take care of this.
- The variables \(y_{i,j}\) were relaxed to be continuous between 0 and 1. The \(y\) variables will be automatically integer (well, where it matters). Often models with fewer integer variables are easier to solve. Modern solvers, however, may reintroduce binary variables for this particular model; Cplex prints a message when it does so.
A tighter linearization In [2] a tighter formulation is proposed: Linearized Model 2 \[\begin{align} \max & \sum_{i \lt j} \color{darkblue} d_{i,j} \color{darkred}y_{i,j} \\ & \sum_i \color{darkred} x_{i} = \color{darkblue} k \\ & \color{darkred}y_{i,j} \le \color{darkred} x_i && \forall i\lt j \\ & \color{darkred}y_{i,j} \le \color{darkred} x_j && \forall i\lt j \\ & \color{darkred}y_{i,j} \ge \color{darkred} x_i +\color{darkred} x_j -1 && \forall i\lt j \\ & \sum_{i\lt j} \color{darkred}y_{i,j} + \sum_{i\gt j} \color{darkred}y_{j,i} = (\color{darkblue} k-1) \color{darkred}x_j && \forall j \\ & \color{darkred}x_{i} \in \{0,1\} \\ & \color{darkred}y_{i,j} \in \{0,1\} \end{align}\] Essentially, we multiplied the constraint \(\sum_i x_{i} = k\) by \(x_j\), and added these as constraints.
The derivation is as follows: \[\begin{align} & \left( \sum_i x_i\right) x_j = k x_j\\ \Rightarrow & \sum_{i\lt j} x_i x_j + x_j^2 + \sum_{i\gt j} x_i x_j = k x_j\\ \Rightarrow & \sum_{i\lt j} y_{i,j} + \sum_{i\gt j} y_{j,i} = (k-1) x_j \end{align}\] We used here that \(x_j^2 = x_j\). My intuition is as follows. If \(x_j=1\) then exactly \(k-1\) other \(x_i\)'s should be 1. That means \(k-1\) \(y_{i,j}\)'s should be 1. As we only use the upper-triangular part, the picture becomes: Picture of y layout Aggregation We can aggregate the previous cut, which leads to: \[\sum_{i\lt j} y_{i,j} = \frac{k(k-1)}{2}\] The advantage of this version is that we only need one extra constraint [4]: Linearized Model 3 \[\begin{align} \max & \sum_{i \lt j} \color{darkblue} d_{i,j} \color{darkred}y_{i,j} \\ & \sum_i \color{darkred} x_{i} = \color{darkblue} k \\ & \color{darkred}y_{i,j} \le \color{darkred} x_i && \forall i\lt j \\ & \color{darkred}y_{i,j} \le \color{darkred} x_j && \forall i\lt j \\ & \color{darkred}y_{i,j} \ge \color{darkred} x_i +\color{darkred} x_j -1 && \forall i\lt j \\ & \sum_{i\lt j} \color{darkred}y_{i,j} = \frac{ \color{darkblue}k (\color{darkblue} k-1)}{2} \\ & \color{darkred}x_{i} \in \{0,1\} \\ & \color{darkred}y_{i,j} \in \{0,1\} \end{align}\] The intuition is easy: if we have \(k\) \(x_j\)'s equal to one, we need to have \(k(k-1)/2\) \(y_{i,j}\)'s equal to one. Numerical results

Model | Time | Obj | Gap | Notes
Non-convex MIQP | 1800 | 332.1393 | 87.64% | Solve as quadratic model; options: qtolin=0, optimalitytarget=3
Non-convex MIQP | 1800 | 332.9129 | 57.65% | Automatically converted to linear model; option: qtolin=1
Convex MIQP | 1800 | 332.9129 | 92.22% | option: qtolin=0
MIP 1 | 1800 | 332.9129 | 57.65% | Defaults
MIP 2 | 8 | 332.9129 | optimal | Defaults (roughly same performance whether or not including \(y_{i,j}\ge x_i+x_j-1\))
MIP 3 | 400 | 332.9129 | optimal | Defaults; excludes \(y_{i,j}\ge x_i+x_j-1\)
MIP 3 | 11 | 332.9129 | optimal | Defaults; includes \(y_{i,j}\ge x_i+x_j-1\)
Just one observation, but I think these results are representative of other (small) instances. We dropped the constraint \(y_{i,j}\ge x_i+x_j-1\) from MIP model 1: the objective will push \(y\) upwards on its own. For models MIP 2 and 3 it is wise to reintroduce it. The gaps are terrible for all models without the extra cuts. However, some of these methods find the best solution very quickly; they are just not able to prove optimality in a reasonable time. Here is an example: Better dispersion Maximize Minimum Distance \[\begin{align} \max\> & \color{darkred} {\Delta} \\ & \color{darkred} \Delta \le \color{darkblue} d_{i,j} + \color{darkblue} M (1- \color{darkred}x_{i} \color{darkred}x_{j}) && \forall i\lt j \\ & \sum_i \color{darkred} x_{i} = \color{darkblue} k \\ & \color{darkred}x_{i} \in \{0,1\} \\ \end{align}\] Here \(M\) is a large enough constant, e.g. \[M = \max_{i\lt j} d_{i,j}\] The model can be easily linearized by formulating the distance constraint as \[ \Delta \le d_{i,j} + M (1- x_{i}) + M(1- x_{j}) \] This model quickly solves our 10-out-of-50 problem. It gives as solution: Maximization of minimum distance This model has the disadvantage that it does not care about points being closer to each other as long as they are farther away than the minimum. For this example, visual inspection seems to indicate this model does a good job. Conclusion The \(p\)-dispersion-sum problem is very difficult to solve to optimality, even for very small data sets. Extra cuts can enormously help the linearized version of the model. A main drawback is that the optimal solutions do not provide us with well-dispersed points (points can be very close). References
How to select n objects from a set of N objects, maximizing the sum of pairwise distances between them, https://stackoverflow.com/questions/56761115/how-to-select-n-objects-from-a-set-of-n-objects-maximizing-the-sum-of-pairwise
Warren P. Adams and Hanif D.
Sherali, A Tight Linearization and an Algorithm for Zero-One Quadratic Programming Problems, Management Science, Vol. 32, No. 10 (Oct. 1986), pp. 1274-1290
Michael J. Kuby, Programming Models for Facility Dispersion: The p-Dispersion and Maxisum Dispersion Problems, Geographical Analysis, Vol. 19, pp. 315-329, 1987
Ed Klotz, Performance Tuning for Cplex's Spatial Branch-and-Bound Solver for Global Nonconvex (Mixed Integer) Quadratic Programs, http://orwe-conference.mines.edu/files/IOS2018SpatialPerfTuning.pdf
Let $V$ be an $n$-dimensional vector space over a field $F$ equipped with a symmetric bilinear map $B:V\times V\rightarrow F$ and associated quadratic map $Q(v):=B(v,v).$ The pair $(V,Q)$ is a quadratic space. Fixing a basis, $\{v_1,...,v_n\}$ for $V$, the space $(V,Q)$ has an associated quadratic form, that is, a homogeneous degree 2 polynomial, given by \[ f(x_1,...,x_n)=\sum_{i,j}B(v_i,v_j)x_ix_j, \] where $B(v_i,v_j)=B(v_j,v_i)$ since $B$ is symmetric. It is often helpful to think of a quadratic space in terms of its Gram matrix representation. Let $R$ denote the ring of integers of $F$. An $R$-lattice is a finitely generated $R$-module which is a discrete subset of $V$ endowed with the same bilinear map $B$. In the special case when $F=\mathbb{R}$, an integral lattice is simply a $\mathbb{Z}$-lattice, $L$, with the added restriction that $B(L,L)\subseteq \mathbb Z$. For example, letting $B(v,w)=v\cdot w$, the standard inner product, Euclidean $n$-space can be thought of as the quadratic space $(\mathbb{R}^n, \cdot)$. In this case, the space $(\mathbb{R}^n, \cdot)$ contains the integral lattice $\mathbb{Z}^n$ as a discrete subset endowed with the standard inner product. Knowl status: Review status: reviewed Last edited by Kiran S. 
Kedlaya on 2018-06-19 03:07:24 (Reviewed)
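The passage from a Gram matrix to its quadratic form can be illustrated concretely. This is a minimal sketch with a hypothetical basis: the Gram matrix `G[i][j] = B(v_i, v_j)` below is chosen for illustration (it has integer entries, so the corresponding \(\mathbb{Z}\)-lattice is integral), and the form it defines is \(f(x_1,x_2)=2x_1^2+2x_1x_2+2x_2^2\).

```python
# Assumed example: Gram matrix of a basis {v_1, v_2} with
# B(v_1, v_1) = B(v_2, v_2) = 2 and B(v_1, v_2) = B(v_2, v_1) = 1.
G = [[2, 1], [1, 2]]

def Q(x):
    """Evaluate f(x) = sum_{i,j} B(v_i, v_j) x_i x_j at coordinates x."""
    n = len(G)
    return sum(G[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# f(x1, x2) = 2*x1^2 + 2*x1*x2 + 2*x2^2
print(Q([1, 0]), Q([0, 1]), Q([1, -1]))   # 2 2 2
```

The symmetry \(B(v_i,v_j)=B(v_j,v_i)\) is what lets the cross terms combine into the single coefficient \(2B(v_1,v_2)\) of \(x_1x_2\).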
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: if a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton the \overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved?
i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct, but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out.
The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged (for example, by someone tripping over the cord) it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the Word file editable? I'm not familiar with Word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use Word (have a copy just because, but I don't use it ;-) but have helped enough people with things over the years; these days I'd probably convert to html with latexml or tex4ht, then import the html into Word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit, but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} make a small html file that looks like <!...
@baxx all the converters that I mention can deal with document \newcommand to a certain extent. If it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them; if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but those are extremes; the thing is you just never know, you may see a simple article class document that uses no hard-looking packages, then get halfway through and find \makeatletter and several hundred lines of tricky TeX macros copied from this site that are overwriting LaTeX format internals.
LaTeX uses internal counters that provide numbering of pages, sections, tables, figures, etc. This article explains how to access and modify those counters and how to create new ones.

A counter can easily be set to any arbitrary value with \setcounter. See the example below:

\section{Introduction} This document will present several counting examples, how to reset and access them. For instance, if you want to change the numbers in a list. \begin{enumerate} \setcounter{enumi}{3} \item Something. \item Something else. \item Another element. \item The last item in the list. \end{enumerate}

In this example \setcounter{enumi}{3} sets the value of the item counter in the list to 3. This is the general syntax to manually set the value of any counter. See the reference guide for a complete list of counters. All commands in this section that change a counter's state change it globally. Counters in a document can be incremented, reset, accessed and referenced. Let's see an example:

\section{Another section} This is a dummy section with no purpose whatsoever but to contain text. This section has assigned the number \thesection. \stepcounter{equation} \begin{equation} \label{1stequation} \int_{0}^{\infty} \frac{x}{\sin(x)} \end{equation}

In this example, two counters are used: \thesection prints the number assigned to the section at this point (for further methods to print a counter, take a look at how to print counters), and \stepcounter{equation} increments the counter equation. Other similar commands are \addtocounter and \refstepcounter; see the reference guide.

Further commands to manipulate counters include:

\counterwithin<*>{<ctr1>}{<ctr2>} adds <ctr2> to the counters that reset <ctr1> when they're incremented. If you don't provide the *, \the<ctr1> will be redefined to \the<ctr2>.\arabic{<ctr1>}. This macro is included in the LaTeX format since April 2018; if you're using an older version, you'll have to use the chngctr package.
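A minimal sketch of \counterwithin in use (assuming a LaTeX format from April 2018 or later, as noted above): the equation counter is reset at every section, and \theequation is redefined to print section.equation.

```latex
\documentclass{article}
% Make the equation counter reset with each section and print as
% <section>.<equation>, e.g. 2.1 for the first equation of section 2.
\counterwithin{equation}{section}
\begin{document}
\section{First}
\begin{equation} a^2 + b^2 = c^2 \end{equation} % numbered 1.1
\section{Second}
\begin{equation} e^{i\pi} = -1 \end{equation}   % numbered 2.1
\end{document}
```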
\counterwithout<*>{<ctr1>}{<ctr2>} removes <ctr2> from the counters that reset <ctr1> when they're incremented. If you don't provide the *, \the<ctr1> will be redefined to \arabic{<ctr1>}. This macro is included in the LaTeX format since April 2018; if you're using an older version, you'll have to use the chngctr package.

\addtocounter{<ctr>}{<num>} adds <num> to the value of the counter <ctr>.

\setcounter{<ctr>}{<num>} sets <ctr>'s value to <num>.

\refstepcounter{<ctr>} works like \stepcounter, but you can use LaTeX's referencing system to add a \label and later \ref the counter. The printed reference will be the current expansion of \the<ctr>.

A new counter is created with \newcounter. Below is an example that defines a numbered environment called example:

\documentclass{article} \usepackage[utf8]{inputenc} \usepackage[english]{babel} \newcounter{example}[section] \newenvironment{example}[1][]{\refstepcounter{example}\par\medskip \textbf{Example~\theexample. #1} \rmfamily}{\medskip} \begin{document} This document will present... \begin{example} This is the first example. The counter will be reset at each section. \end{example} Below is a second example \begin{example} And here's another numbered example. \end{example} \section{Another section} This is a dummy section with no purpose whatsoever but to contain text. This section has assigned the number \thesection. \stepcounter{equation} \begin{equation} \label{1stequation} \int_{0}^{\infty} \frac{x}{\sin(x)} \end{equation} \begin{example} This is the first example in this section. \end{example} \end{document}

In this LaTeX snippet the new environment example is defined; it uses 3 counting-specific commands:

\newcounter{example}[section] creates the counter example, reset at every section (omit the optional argument if you don't want your defined counter to be reset automatically).

\refstepcounter{example} increments the counter and makes it the target of any \label placed afterwards.
\theexample prints the formatted value of the counter.

For further information on user-defined environments see the article about defining new environments.

You can print the current value of a counter in different ways:

\theCounterName prints the counter in its default format, e.g. 2.1 for the first subsection in the second section.
\arabic prints the counter as an Arabic number, e.g. 2.
\value retrieves the counter's value for use in commands and calculations (e.g. \setcounter{section}{\value{subsection}}).
\alph prints the counter as a lowercase letter, e.g. b.
\Alph prints it as an uppercase letter, e.g. B.
\roman prints it as a lowercase Roman numeral, e.g. ii.
\Roman prints it as an uppercase Roman numeral, e.g. II.
\fnsymbol prints it as a footnote symbol, e.g. †.

\theCounterName is the macro responsible for printing CounterName's value in a formatted manner. For new counters created by \newcounter it is initialized as an Arabic number. You can change this by using \renewcommand. For example, if you want to change the way a subsection counter is printed to include the current section in italics and the current subsection in uppercase Roman numbers, you could do the following:

\renewcommand\thesubsection{\textit{\thesection}.\Roman{subsection}} \section{Example} \subsection{Example}\label{sec:example:ssec:example} This is the subsection \ref{sec:example:ssec:example}.

[Table: default counters in LaTeX, grouped by usage: document structure, floats, footnotes, and the enumerate environment]

Counter manipulation commands:

\addtocounter{CounterName}{number} adds number to the counter.
\stepcounter{CounterName} increments the counter.
\refstepcounter{CounterName} works like \stepcounter, but makes the counter visible to the referencing mechanism (\ref{label} returns the counter value).
\setcounter{CounterName}{number} sets the counter to number.
\newcounter{NewCounterName} creates a new counter. If you want the NewCounterName counter to be reset to zero every time that another OtherCounterName counter is increased, use \newcounter{NewCounterName}[OtherCounterName].
\value{CounterName} returns the counter's value, e.g. \setcounter{section}{\value{subsection}}.
\theCounterName prints the formatted counter value, for example \thechapter, \thesection, etc. Note that this might result in more than just the counter; for example, with the standard definitions of the article class \thesubsection will print Section.Subsection (e.g. 2.1).

For more information see:
The Baker-Campbell-Hausdorff formula

AUTHORS: Eero Hakavuori (2018-09-23): initial version

sage.algebras.lie_algebras.bch.bch_iterator(X=None, Y=None)

A generator function which returns successive terms of the Baker-Campbell-Hausdorff formula.

INPUT:

X – (optional) an element of a Lie algebra
Y – (optional) an element of a Lie algebra

The BCH formula is an expression for \(\log(\exp(X)\exp(Y))\) as a sum of Lie brackets of X and Y with rational coefficients. In arbitrary Lie algebras, the infinite sum is only guaranteed to converge for X and Y close to zero. If the elements X and Y are not given, then the iterator will return successive terms of the abstract BCH formula, i.e., the BCH formula for the generators of the free Lie algebra on 2 generators. If the Lie algebra containing X and Y is not nilpotent, the iterator will output infinitely many elements. If the Lie algebra is nilpotent, the number of elements output is equal to the nilpotency step.

EXAMPLES:

The terms of the abstract BCH formula up to fifth order brackets:

sage: from sage.algebras.lie_algebras.bch import bch_iterator
sage: bch = bch_iterator()
sage: next(bch)
X + Y
sage: next(bch)
1/2*[X, Y]
sage: next(bch)
1/12*[X, [X, Y]] + 1/12*[[X, Y], Y]
sage: next(bch)
1/24*[X, [[X, Y], Y]]
sage: next(bch)
-1/720*[X, [X, [X, [X, Y]]]] + 1/180*[X, [X, [[X, Y], Y]]] + 1/360*[[X, [X, Y]], [X, Y]] + 1/180*[X, [[[X, Y], Y], Y]] + 1/120*[[X, Y], [[X, Y], Y]] - 1/720*[[[[X, Y], Y], Y], Y]

For nilpotent Lie algebras the BCH formula only has finitely many terms:

sage: L = LieAlgebra(QQ, 2, step=3)
sage: L.inject_variables()
Defining X_1, X_2, X_12, X_112, X_122
sage: [Z for Z in bch_iterator(X_1, X_2)]
[X_1 + X_2, 1/2*X_12, 1/12*X_112 + 1/12*X_122]
sage: [Z for Z in bch_iterator(X_1 + X_2, X_12)]
[X_1 + X_2 + X_12, 1/2*X_112 - 1/2*X_122, 0]

The elements X and Y don't need to be elements of the same Lie algebra if there is a coercion from one to the other:

sage: L = LieAlgebra(QQ, 3, step=2)
sage: L.inject_variables()
Defining X_1, X_2, X_3, X_12, X_13, X_23
sage: S = L.subalgebra(X_1, X_2)
sage: bch1 = [Z for Z in bch_iterator(S(X_1), S(X_2))]; bch1
[X_1 + X_2, 1/2*X_12]
sage: bch1[0].parent() == S
True
sage: bch2 = [Z for Z in bch_iterator(S(X_1), X_3)]; bch2
[X_1 + X_3, 1/2*X_13]
sage: bch2[0].parent() == L
True

The BCH formula requires a coercion from the rationals:

sage: L.<X,Y,Z> = LieAlgebra(ZZ, 2, step=2)
sage: bch = bch_iterator(X, Y); next(bch)
Traceback (most recent call last):
...
TypeError: the BCH formula is not well defined since Integer Ring has no coercion from Rational Field

ALGORITHM: The BCH formula \(\log(\exp(X)\exp(Y)) = \sum_k Z_k\) is computed starting from \(Z_1 = X + Y\) by the recursion
\[(m+1)Z_{m+1} = \frac{1}{2}[X - Y, Z_m] + \sum_{2\leq 2p \leq m}\frac{B_{2p}}{(2p)!}\sum_{k_1+\cdots+k_{2p}=m} [Z_{k_1}, [\cdots [Z_{k_{2p}}, X + Y]\cdots]],\]
where \(B_{2p}\) are the Bernoulli numbers; see Lemma 2.15.3 in [Var1984].

Warning: The time needed to compute each successive term increases exponentially. For example, on one machine, iterating through \(Z_{11},\ldots,Z_{18}\) for a free Lie algebra, computing each successive term took 4-5 times longer, going from 0.1s for \(Z_{11}\) to 21 minutes for \(Z_{18}\).
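The nilpotent case above can be checked without Sage. This hedged, stdlib-only sketch works in a step-2 setting realized by strictly upper-triangular 3×3 matrices: all series terminate, and the BCH formula reduces to \(\log(\exp(X)\exp(Y)) = X + Y + \tfrac{1}{2}[X,Y]\) exactly.

```python
from fractions import Fraction

# Exact arithmetic check of BCH in a nilpotent setting:
# X = E_12 and Y = E_23 are strictly upper triangular, so N^3 = 0
# for any strictly upper-triangular N, and exp/log are finite sums.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def add(A, B, s=1):
    return [[A[i][j] + s * B[i][j] for j in range(3)] for i in range(3)]

I = [[Fraction(int(i == j)) for j in range(3)] for i in range(3)]

def expm(N):          # exp(N) = I + N + N^2/2 since N^3 = 0
    N2 = mul(N, N)
    return add(add(I, N), [[x / 2 for x in row] for row in N2])

def logm(U):          # log(I + N) = N - N^2/2 since N^3 = 0
    N = add(U, I, -1)
    N2 = mul(N, N)
    return add(N, [[x / 2 for x in row] for row in N2], -1)

Z = Fraction(0)
X = [[Z, Fraction(1), Z], [Z, Z, Z], [Z, Z, Z]]    # E_12
Y = [[Z, Z, Z], [Z, Z, Fraction(1)], [Z, Z, Z]]    # E_23
bracket = add(mul(X, Y), mul(Y, X), -1)            # [X, Y] = E_13
bch = add(add(X, Y), [[x / 2 for x in row] for row in bracket])
lhs = logm(mul(expm(X), expm(Y)))
print(lhs == bch)   # True
```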
I'm new to LaTeX so you're going to have to bear with me. I am trying to write a project report using it and I have got this so far for a certain section:

\documentclass[a4paper,12pt]{article}\usepackage{amsmath,graphicx,hyperref,parskip,gensymb} % In case you are wondering what these packages are for: % amsmath provides extra mathematical constructs and symbols % graphicx facilitates the inclusion of graphics files % hyperref makes links into clickable hyperlinks % parskip leaves more space between paragraphs \usepackage[cm]{fullpage} % The above package makes the page margins smaller. I included it % to save you printing costs. % Feel free to also print two sheets per page and double-sided %================== %..... %==================\begin{equation}\label{eq:six} x = Vt\sin(K), \end{equation} and \begin{align*} X &= x-(l+h)\sin(\phi), \\ &= Vt\sin(K)-(l+h)\sin[K-2\tan ^{ - 1}\{\tan(K/2)\exp((-Vt)/l)\}], \end{align*}

This is what it says at the top: \documentclass[a4paper,12pt]{article} \usepackage{amsmath,graphicx,hyperref,parskip,gensymb}

I would like to get all three lines of equations aligned (the x and X equations), with the first and third lines having numbers assigned to them. All my other equations have \begin{equation}...\end{equation} around them and those ones have been given reference numbers, like the x equation, but I wanted to have these two aligned, so I used the align environment instead of the equation one and then tried both together, but it didn't work. (Also, the labelling isn't working, but that's another issue!) Any help would be greatly appreciated!
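One common way to get the layout described above is a single amsmath align environment, suppressing the number on the middle line with \nonumber. This is a hedged sketch (the label names are placeholders):

```latex
\documentclass[a4paper,12pt]{article}
\usepackage{amsmath}
\begin{document}
% One align environment for all three lines; \nonumber suppresses the
% number on the middle line, so only lines 1 and 3 get equation numbers.
\begin{align}
  x &= Vt\sin(K), \label{eq:six} \\
  X &= x-(l+h)\sin(\phi), \nonumber \\
    &= Vt\sin(K)-(l+h)\sin\bigl[K-2\tan^{-1}\{\tan(K/2)\exp(-Vt/l)\}\bigr].
       \label{eq:seven}
\end{align}
\end{document}
```

The starred form align* suppresses all numbers, which is why mixing equation and align* did not produce numbered lines; the unstarred align numbers every line unless told otherwise.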
Question 1) If \[x=-\frac{1}{2}\] is a solution of the quadratic equation \[3{{x}^{2}}+2kx-3=0,\] find the value of k.
Question 2) The tops of two towers of height x and y, standing on level ground, subtend angles of \[30{}^\circ \] and \[60{}^\circ \] respectively at the centre of the line joining their feet; find \[x:y\].
Question 3) A letter of the English alphabet is chosen at random. Determine the probability that the chosen letter is a consonant.
Question 4) In Fig. 1, PA and PB are tangents to the circle with centre O such that \[\angle APB=50{}^\circ \]. Write the measure of \[\angle OAB\].
Question 5) In Fig. 2, AB is the diameter of a circle with centre O and AT is a tangent. If \[\angle AOQ=58{}^\circ \], find \[\angle ATQ\].
Question 6) Solve the following quadratic equation for x: \[4{{x}^{2}}-4{{a}^{2}}x+({{a}^{4}}-{{b}^{4}})=0\].
Question 7) From a point T outside a circle of centre O, tangents TP and TQ are drawn to the circle. Prove that OT is the right bisector of line segment PQ.
Question 8) Find the middle term of the A.P. 6, 13, 20, ..., 216.
Question 9) If \[A\,(5,2),\text{ }B\,(2,-2)\] and \[C(-2,t)\] are the vertices of a right angled triangle with \[\angle B=90{}^\circ \], then find the value of t.
Question 10) Find the ratio in which the point \[P\left( \frac{3}{4},\frac{5}{12} \right)\] divides the line segment joining the points \[A\left( \frac{1}{2},\frac{3}{2} \right)\] and \[B(2,-5)\].
Question 11) Find the area of the triangle ABC with \[A\,(1,-4)\] and mid-points of sides through A being \[(2,-1)\] and \[(0,-1)\].
Question 12) Find that non-zero value of k, for which the quadratic equation \[k{{x}^{2}}+1-2(k-1)x+{{x}^{2}}=0\] has equal roots. Hence find the roots of the equation.
Question 13) The angle of elevation of the top of a building from the foot of the tower is \[30{}^\circ \] and the angle of elevation of the top of the tower from the foot of the building is \[45{}^\circ \]. If the tower is 30 m high, find the height of the building.
Question 14) Two different dice are rolled together. Find the probability of getting: (i) the sum of numbers on the two dice to be 5; (ii) even numbers on both dice.
Question 15) If \[{{S}_{n}}\] denotes the sum of the first n terms of an A.P., prove that \[{{S}_{12}}=3({{S}_{8}}-{{S}_{4}})\].
Question 16) In Fig. 3, APB and AQO are semicircles, and \[AO=OB\]. If the perimeter of the figure is 40 cm, find the area of the shaded region. \[\left[ \text{Use}\,\pi =\frac{22}{7} \right]\]
Question 17) In Fig. 4, from the top of a solid cone of height 12 cm and base radius 6 cm, a cone of height 4 cm is removed by a plane parallel to the base. Find the total surface area of the remaining solid. \[\left[ \text{Use}\,\pi =\frac{22}{7}\text{ and}\,\sqrt{5}=2.236 \right]\]
Question 18) A solid wooden toy is in the form of a hemisphere surmounted by a cone of the same radius. The radius of the hemisphere is 3.5 cm and the total wood used in the making of the toy is \[166\frac{5}{6}\,c{{m}^{3}}\]. Find the height of the toy. Also, find the cost of painting the hemispherical part of the toy at the rate of \[Rs.\,\,10\,\,per\,\,c{{m}^{2}}\]. \[\left[ \text{Use}\,\,\pi =\frac{22}{7} \right]\]
Question 19) In Fig. 5, from a cuboidal solid metallic block of dimensions \[15\text{ }cm\times 10\text{ }cm\times 5\text{ }cm\], a cylindrical hole of diameter 7 cm is drilled out. Find the surface area of the remaining block. \[\left[ \text{Use}\,\pi =\frac{22}{7} \right]\]
Question 20) In Fig. 6, find the area of the shaded region. [Use \[\pi =3.14\]]
Question 21) The numerator of a fraction is 3 less than its denominator.
If 2 is added to both the numerator and the denominator, then the sum of the new fraction and the original fraction is \[\frac{29}{20}\]. Find the original fraction.
Question 22) Ramkali required Rs. 2500 after 12 weeks to send her daughter to school. She saved Rs. 100 in the first week and increased her weekly saving by Rs. 20 every week. Find whether she will be able to send her daughter to school after 12 weeks. What value is generated in the above situation?
Question 23) Solve for x: \[\frac{2}{x+1}+\frac{3}{2(x-2)}=\frac{23}{5x},x\ne 0,-1,2\]
Question 24) Prove that the tangent at any point of a circle is perpendicular to the radius through the point of contact.
Question 25) In Fig. 7, tangents PQ and PR are drawn from an external point P to a circle with centre O, such that \[\angle RPQ=30{}^\circ \]. A chord RS is drawn parallel to the tangent PQ. Find \[\angle RQS\].
Question 26) Construct a triangle ABC with \[BC=7\text{ }cm,\text{ }\angle B=60{}^\circ \] and \[AB=6\text{ }cm\]. Construct another triangle whose sides are \[\frac{3}{4}\] times the corresponding sides of \[\Delta \,ABC\].
Question 27) From a point P on the ground the angle of elevation of the top of a tower is \[30{}^\circ \] and that of the top of a flag staff fixed on the top of the tower is \[60{}^\circ \]. If the length of the flag staff is 5 m, find the height of the tower.
Question 28) A box contains 20 cards numbered from 1 to 20. A card is drawn at random from the box. Find the probability that the number on the drawn card is: (i) divisible by 2 or 3; (ii) a prime number.
Question 29) If \[A(-4,8),\text{ }B(-3,-4),\text{ }C(0,-5)\] and \[D(5,6)\] are the vertices of a quadrilateral ABCD, find its area.
Question 30) A well of diameter 4 m is dug 14 m deep. The earth taken out is spread evenly all around the well to form a 40 cm high embankment. Find the width of the embankment.
Question 31) Water is flowing at the rate of 2.52 km/h through a cylindrical pipe into a cylindrical tank, the radius of whose base is 40 cm. If the increase in the level of water in the tank in half an hour is 3.15 m, find the internal diameter of the pipe.
Transformation means changing some graphics into something else by applying rules. We can have various types of transformations such as translation, scaling up or down, rotation, shearing, etc. When a transformation takes place on a 2D plane, it is called a 2D transformation. Transformations play an important role in computer graphics to reposition the graphics on the screen and change their size or orientation.

To perform a sequence of transformations such as translation followed by rotation and scaling, we need to follow a sequential process. To shorten this process, we use a 3×3 transformation matrix instead of a 2×2 transformation matrix. To convert a 2×2 matrix to a 3×3 matrix, we add an extra dummy coordinate W. In this way, we can represent a point by 3 numbers instead of 2, which is called the homogeneous coordinate system. In this system, we can represent all the transformation equations as matrix multiplications. Any Cartesian point $P(X, Y)$ can be converted to homogeneous coordinates as $P'(X_h, Y_h, h)$.

A translation moves an object to a different position on the screen. You can translate a point in 2D by adding the translation coordinates $(t_x, t_y)$ to the original coordinates $(X, Y)$ to get the new coordinates $(X', Y')$. From the above figure, you can write:

$X' = X + t_x$
$Y' = Y + t_y$

The pair $(t_x, t_y)$ is called the translation vector or shift vector. The above equations can also be represented using column vectors:

$$P = \begin{bmatrix} X \\ Y \end{bmatrix} \qquad P' = \begin{bmatrix} X' \\ Y' \end{bmatrix} \qquad T = \begin{bmatrix} t_{x} \\ t_{y} \end{bmatrix}$$

We can write it as P' = P + T.

In rotation, we rotate the object by a particular angle θ (theta) about its origin. From the following figure, we can see that the point P(X, Y) is located at angle φ from the horizontal X axis at distance r from the origin. Let us suppose you want to rotate it by the angle θ. After rotating it to a new location, you will get a new point P'(X', Y').
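The homogeneous-coordinate idea can be sketched in a few lines. This is an illustrative example (not from the tutorial's code): a 2D point (X, Y) becomes (X, Y, 1), and translation, which is not a linear map in 2D, becomes a single 3×3 matrix multiplication.

```python
# Translation via 3x3 homogeneous-coordinate matrices.
def mat_vec(M, v):
    """Multiply a 3x3 matrix by a length-3 vector."""
    return [sum(M[r][c] * v[c] for c in range(3)) for r in range(3)]

def translation(tx, ty):
    return [[1, 0, tx],
            [0, 1, ty],
            [0, 0, 1]]

P = [2, 3, 1]                             # point (2, 3), homogeneous
P2 = mat_vec(translation(5, -1), P)
print(P2)   # [7, 2, 1] -> the translated point (7, 2)
```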
Using standard trigonometry, the original coordinates of point P(X, Y) can be represented as

$X = r \, \cos \, \phi ...... (1)$
$Y = r \, \sin \, \phi ...... (2)$

In the same way we can represent the point P'(X', Y') as

${x}'= r \: \cos \left ( \phi + \theta \right ) = r \cos \phi \cos \theta − r \sin \phi \sin \theta ....... (3)$
${y}'= r \: \sin \left ( \phi + \theta \right ) = r \cos \phi \sin \theta + r \sin \phi \cos \theta ....... (4)$

Substituting equations (1) and (2) into (3) and (4) respectively, we get

${x}'= x \cos \theta − y \sin \theta$
${y}'= x \sin \theta + y \cos \theta$

Representing the above equations in matrix form,

$$[X' \; Y'] = [X \; Y] \begin{bmatrix} \cos\theta & \sin\theta \\ −\sin\theta & \cos\theta \end{bmatrix} \quad \text{or} \quad P' = P \cdot R$$

where R is the rotation matrix

$$R = \begin{bmatrix} \cos\theta & \sin\theta \\ −\sin\theta & \cos\theta \end{bmatrix}$$

The rotation angle can be positive or negative. For a positive rotation angle, we can use the above rotation matrix. For a negative angle rotation, the matrix changes as shown below:

$$R = \begin{bmatrix} \cos(−\theta) & \sin(−\theta) \\ -\sin(−\theta) & \cos(−\theta) \end{bmatrix} = \begin{bmatrix} \cos\theta & −\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \left (\because \cos(−\theta ) = \cos \theta \; \text{and} \; \sin(−\theta ) = −\sin \theta \right )$$

To change the size of an object, the scaling transformation is used. In the scaling process, you either expand or compress the dimensions of the object. Scaling is achieved by multiplying the original coordinates of the object by the scaling factors. Let us assume that the original coordinates are (X, Y), the scaling factors are $(S_X, S_Y)$, and the produced coordinates are (X', Y'). This can be represented mathematically as

$X' = X \cdot S_{X}$ and $Y' = Y \cdot S_{Y}$

The scaling factors $S_X$, $S_Y$ scale the object in the X and Y directions respectively. The above equations can also be represented in matrix form:

$$[X' \; Y'] = [X \; Y] \begin{bmatrix} S_{x} & 0\\ 0 & S_{y} \end{bmatrix} \quad \text{or} \quad P' = P \cdot S$$

where S is the scaling matrix. The scaling process is shown in the following figure. If we provide values less than 1 for the scaling factors, we reduce the size of the object; values greater than 1 increase it.

Reflection is the mirror image of the original object. In other words, it is a rotation operation by 180°. In a reflection transformation, the size of the object does not change. The following figures show reflections with respect to the X and Y axes, and about the origin respectively.

A transformation that slants the shape of an object is called a shear transformation. There are two shear transformations, X-shear and Y-shear. One shifts the X coordinate values and the other shifts the Y coordinate values. However, in both cases only one coordinate changes and the other preserves its value. Shearing is also termed skewing.

The X-shear preserves the Y coordinates, and changes are made to the X coordinates, which causes vertical lines to tilt right or left as shown in the figure below. The transformation matrix for X-shear can be represented as

$$X_{sh} = \begin{bmatrix} 1& sh_{x}& 0\\ 0& 1& 0\\ 0& 0& 1 \end{bmatrix}$$

$X' = X + Sh_{x} \cdot Y$
$Y' = Y$

The Y-shear preserves the X coordinates and changes the Y coordinates, which causes horizontal lines to transform into lines which slope up or down as shown in the following figure. The Y-shear can be represented in matrix form as

$$Y_{sh} = \begin{bmatrix} 1& 0& 0\\ sh_{y}& 1& 0\\ 0& 0& 1 \end{bmatrix}$$

$Y' = Y + Sh_{y} \cdot X$
$X' = X$

If a transformation of the plane T1 is followed by a second plane transformation T2, then the result itself may be represented by a single transformation T which is the composition of T1 and T2 taken in that order. This is written as T = T1∙T2. A composite transformation is achieved by concatenating the transformation matrices to obtain a combined transformation matrix:

[T] = [T1] [T2] [T3] [T4] …. [Tn]

where [Ti] is any combination of translation, scaling, rotation, reflection, or shearing. A change in the order of transformations leads to different results, since in general matrix multiplication is not commutative: [A] · [B] ≠ [B] · [A], so the order of multiplication matters. The basic purpose of composing transformations is to gain efficiency by applying a single composed transformation to a point, rather than applying a series of transformations one after another. For example, to rotate an object about an arbitrary point $(X_p, Y_p)$, we have to carry out three steps: translate so that $(X_p, Y_p)$ moves to the origin, rotate about the origin by the desired angle, and translate back so that the origin returns to $(X_p, Y_p)$.
I am interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ where the $x$-th bit is set to 1. Moreover, $y$ bits are set to 1 including the $x$-th bit and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the '$\overline F$ is algebraic over $F$' condition from the definition of algebraic closure, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length.
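The prime-counting sum $\Psi(s)$ from the first question above is easy to explore numerically. This sketch uses a plain sieve rather than any particular library; the cutoff is arbitrary and the partial sums only bound $\Psi(2)$ from below:

```python
def prime_pi_table(limit):
    """pi(n) for n = 0..limit via a simple sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    pi, count = [0] * (limit + 1), 0
    for n in range(limit + 1):
        if is_prime[n]:
            count += 1
        pi[n] = count
    return pi

def psi_partial(s, limit):
    """Partial sum sum_{n=2}^{limit} 1 / pi(n)^s of the series above."""
    pi = prime_pi_table(limit)
    return sum(1.0 / pi[n] ** s for n in range(2, limit + 1))
```

Since $\pi(n) \sim n/\log n$, the terms behave like $(\log n / n)^s$, so the series converges for $s > 1$; the partial sums grow monotonically toward $\Psi(s)$.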
Is that often part of the definition or attained from the definition (I don't see how it could be the latter)? Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals? I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book? Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$, so $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable. Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds,$ for all $n \geq 1.$ Show that $\lim_{n\to\infty} n!\,g_n(t) = 0,$ for all $t \in [0,\frac{1}{2}]$. Can you give some hint? My attempt: let $t\in [0,1/2]$ and consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
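The ratio idea in the attempt above can be checked on the simplest concrete choice $g \equiv 1$, for which the iterated integrals have a closed form (this is an illustrative special case, not the general proof):

```python
# For g ≡ 1 the iterated integrals are g_n(t) = t**(n-1) / (n-1)!
# (each integration adds one power of t and one factorial factor),
# so a_n(t) = n! * g_n(t) = n * t**(n-1).
def a(n, t):
    return n * t ** (n - 1)

# The ratio a_{n+1}(t) / a_n(t) = (n + 1) * t / n tends to t <= 1/2 < 1,
# which is exactly the ratio-test style argument sketched in the attempt.
ratios = [a(n + 1, 0.5) / a(n, 0.5) for n in range(1, 30)]
```

For the general case one instead bounds $|g_n(t)| \le \|g\|_\infty\, t^{n-1}/(n-1)!$ by induction, and the same $n\,t^{n-1} \to 0$ estimate finishes the argument.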
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of independent functions of the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients equal to zero) I get a set of $n$ equations, with $n$ the number of coefficients: a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of directly attempting to solve the equations for the coefficients, I rather look at the secular determinant, which must be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz, avoiding the necessity to solve for the coefficients. I have trouble formulating the question, but it strikes me that a direct solution of the equations can be circumvented and the values of the functional are instead obtained directly from the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and satisfies digitsum(z) = digitsum(x). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
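Returning to the secular-determinant question above: the "deeper principle" is that stationarity of a quadratic functional over a linear ansatz is a generalized eigenvalue problem, and the roots of the secular determinant are precisely its eigenvalues (the Rayleigh–Ritz method). A numerical sketch with hypothetical matrices:

```python
import numpy as np

# Hypothetical 3x3 "Hamiltonian" H and overlap matrix S for an ansatz
# built from three non-orthogonal basis functions.
H = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.3],
              [0.0, 0.3, 3.0]])
S = np.array([[1.0, 0.1, 0.0],
              [0.1, 1.0, 0.1],
              [0.0, 0.1, 1.0]])

# Stationarity of the functional over the linear ansatz gives H c = E S c;
# nontrivial coefficient vectors c exist only where det(H - E*S) = 0,
# so the roots of the secular determinant ARE the stationary values.
E = np.sort(np.linalg.eigvals(np.linalg.solve(S, H)).real)

# Check: each root makes the secular determinant vanish.
residuals = [np.linalg.det(H - e * S) for e in E]
```

So "solving det = 0" and "solving for the coefficients" are the two halves of the same eigenproblem; the determinant route just discards the eigenvectors. The lowest root is, by the variational principle, an upper bound on the true minimum.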
It's discussed very carefully (but with no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
Water, as simple as it might appear, has quite a few extraordinary things to offer. Much of it is not as it appears. Before diving deeper, a few cautionary words about hybridisation. Hybridisation is an often misconceived concept. It is only a mathematical interpretation, which explains a certain bonding situation (in an intuitive fashion). In a molecule the equilibrium geometry will result from various factors, such as steric and electronic interactions, and furthermore interactions with the surroundings like a solvent or an external field. The geometric arrangement will not be formed because a molecule is hybridised in a certain way; it is the other way around, i.e. hybridisation is a result of the geometry or, more precisely, an interpretation of the wave function for the given molecular arrangement. In molecular orbital theory linear combinations of all available (atomic) orbitals form molecular orbitals (MO). These are spread over the whole molecule, i.e. delocalised, and in a quantum chemical interpretation they are called canonical orbitals. Such a solution (approximation) of the wave function can be unitarily transformed to localised molecular orbitals (LMO). The solution (the energy) does not change under this transformation. These can then be used to interpret a bonding situation in a simpler theory. Each LMO can be expressed as a linear combination of the atomic orbitals, hence it is possible to determine the coefficients of the atomic orbitals and describe these also as hybrid orbitals. It is absolutely wrong to assume that there are only three types of sp$^x$ hybrid orbitals. Therefore it is very well possible that there are multiple different types of orbitals involved in bonding for a certain atom. For more on this, read about Bent's rule on the network.
[1] Let's look at water; Wikipedia is so kind to provide us with a schematic drawing: The bonding angle is quite close to the ideal tetrahedral angle, so one would assume that the involved orbitals are sp$^3$ hybridised. There is also a connection between bond angle and hybridisation, called Coulson's theorem, which lets you approximate the hybridisation. [2] In this case the orbitals involved in the bonds would be sp$^4$ hybridised. (Close enough.) Let us also consider the symmetry of the molecule. The point group of water is C$_{2\mathrm{v}}$. Because there are mirror planes, in the canonical bonding picture π-type orbitals [3] are necessary. We have an orbital with appropriate symmetry, which is the p orbital sticking out of the bonding plane. This interpretation is not only valid, it is one that comes as the solution of the Schrödinger equation. [4] That leaves for the other orbital a hybridisation of sp$^{(2/3)}$. If we make the reasonable assumption that the oxygen hydrogen bonds are sp$^3$ hybridised, and the out-of-plane lone pair is a p orbital, then the maths is a bit easier and the in-plane lone pair is sp hybridised. [5] A calculation on the M06/def2-QZVPP level of theory gives us the following canonical molecular orbitals: (Orbital symmetries: $2\mathrm{A}_1$, $1\mathrm{B}_2$, $3\mathrm{A}_1$, $1\mathrm{B}_1$) [6,7] Since the interpretation with hybrid orbitals is equivalent, I used natural bond orbital theory to interpret the results. This method transforms the canonical orbitals into localised orbitals for easier interpretation. Here is an excerpt of the output (core orbital and polarisation functions omitted) giving us the calculated hybridisations:

 (Occupancy)  Bond orbital / Coefficients / Hybrids
 ------------------ Lewis ------------------------------------------------------
  2. (1.99797) LP ( 1) O 1   s( 53.05%)p 0.88( 46.76%)d 0.00( 0.19%)
  3. (1.99770) LP ( 2) O 1   s(  0.00%)p 1.00( 99.69%)d 0.00( 0.28%)
  4.
  (1.99953) BD ( 1) O 1- H 2
       ( 73.49%)  0.8573* O 1   s( 23.41%)p 3.26( 76.25%)d 0.01( 0.31%)
       ( 26.51%)  0.5149* H 2   s( 99.65%)p 0.00(  0.32%)d 0.00( 0.02%)
  5. (1.99955) BD ( 1) O 1- H 3
       ( 73.48%)  0.8572* O 1   s( 23.41%)p 3.26( 76.27%)d 0.01( 0.30%)
       ( 26.52%)  0.5150* H 3   s( 99.65%)p 0.00(  0.32%)d 0.00( 0.02%)
 -------------------------------------------------------------------------------

As we can see, that pretty much matches the assumption of sp$^3$ oxygen hydrogen bonds, a p lone pair, and an sp lone pair. Does that mean that the lone pairs are non-equivalent? Well, that is at least one interpretation. And we only deduced all that from a gas-phase point of view. When we go towards the condensed phase, things will certainly change. Hydrogen bonds will break the symmetry, dynamics will play an important role, and in the end both will probably behave quite similarly or even identically. Now let's get to the juicy part: Second, if so, does this have any significance in actual physical systems (i.e. is it a measurable phenomenon), and what is the approximate energy difference between the pairs of electrons? Well, the first part is a bit tricky to answer, because that is dependent on a lot more conditions. But the part in parentheses is easy: it is measurable with photoelectron spectroscopy. There is a nice orbital scheme correlated to the orbital ionisation potentials on the homepage of Michael K. Denk for water. [8] Unfortunately I cannot find license information, or a reference to reproduce it, hence I am hesitant to post it here. However, I found a nice little publication on the photoelectron spectroscopy of water in the bonding region. [9] I'll quote some relevant data from the article. $\ce{H2O}$ is a non-linear, triatomic molecule consisting of an oxygen atom covalently bonded to two hydrogen atoms.
The ground state of the $\ce{H2O}$ molecule is classified as belonging to the $C_\mathrm{2v}$ point group and so the electronic states of water are described using the irreducible representations $\mathrm{A}_1$, $\mathrm{A}_2$, $\mathrm{B}_1$, $\mathrm{B}_2$. The electronic configuration of the ground state of the $\ce{H2O}$ molecule is described by five doubly occupied molecular orbitals: $$\begin{align} \underbrace{(1\mathrm{a}_1)^2}_{\text{core}}&& \underbrace{(2\mathrm{a}_1)^2}_{\text{inner-valence orbital}}&& \underbrace{ (1\mathrm{b}_2)^2 (3\mathrm{a}_1)^2 (1\mathrm{b}_1)^2 }_{\text{outer-valence orbital}}&& \mathrm{X~^1A_1} \end{align}$$ [..] In addition to the three band systems observed in HeI PES of $\ce{H2O}$, a fourth band system in the TPE spectrum close to 32 eV is also observed. As indicated in Fig. 1, these band systems correspond to the removal of a valence electron from each of the molecular orbitals $(1\mathrm{b}_1)^{-1}$, $(3\mathrm{a}_1)^{-1}$, $(1\mathrm{b}_2)^{-1}$ and $(2\mathrm{a}_1)^{-1}$ of $\ce{H2O}$. As you can see, it fits quite nicely with the calculated data. From the image I would say that the difference between $(1\mathrm{b}_1)^{-1}$ and $(3\mathrm{a}_1)^{-1}$ is about 1-2 eV. TL;DRAs you see your hunch paid off quite well. Photoelectron spectroscopy of water in the gas phase confirms that the lone pairs are non-equivalent. Conclusions for condensed phases might be different, but that is a story for another day. Notes and References A π orbital has one nodal plane collinear with the bonding axis, it is asymmetric with respect to this plane. A bit more explanation in my question What would follow in the series sigma, pi and delta bonds? With in the approximation that molecular orbitals are a linear combination of atomic orbitals (MO = LCAO). 
The terminology we use for hybridisation actually is just an abbreviation:$$\mathrm{sp}^{x} = \mathrm{s}^{\frac{1}{x+1}}\mathrm{p}^{\frac{x}{x+1}}$$In theory $x$ can have any value; since it is just a unitary transformation the representation does not change, hence\begin{align} 1\times\mathrm{s}, 3\times\mathrm{p} &\leadsto 4\times\mathrm{sp}^3 \\ &\leadsto 3\times\mathrm{sp}^2, 1\times\mathrm{p} \\ &\leadsto 2\times\mathrm{sp}, 2\times\mathrm{p} \\ &\leadsto 2\times\mathrm{sp}^3, 1\times\mathrm{sp}, 1\times\mathrm{p} \\ &\leadsto \text{etc. pp.}\\ &\leadsto 2\times\mathrm{sp}^4, 1\times\mathrm{p}, 1\times\mathrm{sp}^{(2/3)}\end{align}There are virtually infinite possibilities of combination. This and the next footnote address a couple of points that were raised in a comment by DavePhD. While I already extensively answered that there, I want to include a few more clarifying points here. (If I do it right, the comments become obsolete.) What is the reason for concluding 2 lone pairs versus 1 or 3? For example Mulliken has in table V the b1 orbital being a definite lone pair (no H population) but the two a1 orbitals both have about 0.3e population on H. Would it be wrong to say only one of the PES energy levels corresponds to a lone pair, and the other 3 has some significant population on hydrogen? Are Mulliken's calculations still valid? – DavePhD The article Dave refers to is R. S. Mulliken, J. Chem. Phys. 1955, 23, 1833., which introduces Mulliken population analysis. In this paper Mulliken analyses wave functions on the SCF-LCAO-MO level of theory. This is essentially Hartree Fock with a minimal basis set. (I will address this in the next footnote.) We have to understand that this was state-of-the-art computational chemistry back then. What we take for granted nowadays, calculating the same thing in a few seconds, was revolutionary back then. Today we have a lot fancier methods. I used density functional theory with a very large basis set. 
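The sp$^x$ bookkeeping in this footnote, together with Coulson's angle relation mentioned in footnote [2], fits in a few lines of code. This is a sketch: the relation $1 + \lambda^2\cos\theta = 0$ for two equivalent hybrids is the standard form of Coulson's theorem, and the 104.5° figure is the textbook water bond angle, not a value taken from the calculations above:

```python
import math

def coulson_lambda_sq(angle_deg):
    """Coulson's relation for two equivalent hybrids s + lambda*p at
    interorbital angle theta: 1 + lambda^2 * cos(theta) = 0,
    hence lambda^2 = -1 / cos(theta)."""
    return -1.0 / math.cos(math.radians(angle_deg))

def s_p_character(x):
    """sp^x = s^(1/(x+1)) p^(x/(x+1)): fractional s and p character."""
    return 1.0 / (x + 1.0), x / (x + 1.0)

# Water's bond angle of about 104.5 degrees gives lambda^2 close to 4,
# i.e. the sp^4 bonding hybrids quoted in the answer above.
lam_sq_water = coulson_lambda_sq(104.5)
```

At the ideal tetrahedral angle ($\cos\theta = -1/3$) the same relation returns $\lambda^2 = 3$, recovering the familiar sp$^3$ case.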
The main difference between these approaches is that the level I use recovers a lot more of the electron correlation than the method of Mulliken. However, if you look closely at the results, it is quite impressive how well these early approximations perform. On the M06/def2-QZVPP level of theory the geometry of the molecule is optimised to an oxygen hydrogen distance of 95.61 pm and a bond angle of 105.003°. This is quite close to the experimental results. The contributions to the orbitals are given as follows. I include the orbital energies (OE), too. The contributions of the atomic orbitals are given relative to 1.00 being the total for each molecular orbital. Because the basis set has polarisation functions, the missing parts are attributed to these. The threshold for printing is 3%. (I also rearranged the Gaussian output for better readability.)

Atomic contributions to molecular orbitals:
 2: 2A1 OE=-1.039 is O1-s=0.81 O1-p=0.03 H2-s=0.07 H3-s=0.07
 3: 1B2 OE=-0.547 is O1-p=0.63 H2-s=0.18 H3-s=0.18
 4: 3A1 OE=-0.406 is O1-s=0.12 O1-p=0.74 H2-s=0.06 H3-s=0.06
 5: 1B1 OE=-0.332 is O1-p=0.95

We can see that there is indeed some contribution from the hydrogens to the in-plane lone pair of oxygen. On the other hand, we see that there is only one orbital with a large contribution from hydrogen. One could easily come up here with the theory of one or three lone pairs of oxygen, depending on your own point of view. Mulliken's analysis is based on the canonical orbitals, which are delocalised, so we will never have a pure lone-pair orbital. When we refer to orbitals as being of a certain type, we imply that this is the largest contribution. Often we also use visual aids like pictures of these orbitals to decide if they are of bonding or anti-bonding nature, or if their contribution is on the bonding axis. All these analyses are highly biased by your point of view. There is no right or wrong when it comes to separation schemes.
There is no hard evidence obtainable for any of these. These are mathematical interpretations that in the best case help us understand bonding better. Thus deciding whether water has one, two or three (or even four) lone pairs is somewhat playing with numbers until something seems to fit. Bonding is too difficult to be transformed into easy pictures. (That's why I am not an advocate for cautiously using Lewis structures.) The NBO analysis is another separation scheme, one that aims to transform the obtained canonical orbitals into a Lewis-like picture for a better understanding. This transformation does not change the wave function and in this sense is as much a representation as the other approaches. What you lose by this approach are the orbital energies, since you break the symmetry of the wave function, but explaining this would go much too far here. In a nutshell, the localisation scheme aims to transform the delocalised orbitals into orbitals that correspond to bonds. From a quite general point of view, Mulliken's calculations (he actually only interpreted the results of others) and conclusions hold up to a certain point. Nowadays we know that his population analysis has severe problems, but within a minimal basis it still produces justifiable results. The popularity of the method comes mainly from the fact that it is very easy to perform. See also: Which one, Mulliken charge distribution and NBO, is more reliable? Mulliken used an SCF-LCAO-MO calculation by Ellison and Shull and was kind enough to include the main results in his paper. The oxygen hydrogen bond distance is 95.8 pm and the bond angle is 105°. I performed a calculation on the same geometry at the HF/STO-3G level of theory for comparison. It obviously does not match perfectly, but well enough for a little bit of further discussion.
NO SYM  HF/STO-3G:  N(O)    N(H2)  | Mulliken:  N(O)    N(H2)
 1 1A1   -550.79   2.0014  -0.0014 |  -557.3   2.0007  -0.0005
 2 2A1    -34.49   1.6113   0.3887 |   -36.2   1.688    0.309
 3 1B2    -16.82   1.0700   0.9300 |   -18.6   0.918    1.080
 4 3A1    -12.29   1.6837   0.3163 |   -13.2   1.743    0.257
 5 1B1    -10.63   2.0000   0.0000 |   -11.8   2.000

As a side note: I was completely unable to read the Mulliken analysis by Gaussian; I used MultiWFN instead. It is also not an entirely equivalent approach, because they expressed the hydrogen atoms with group orbitals. The results don't differ by much. The basic approach of Mulliken is to split the overlap population symmetrically between the orbitals of the two elements. That is a principal problem of the method, as the contributions to the MO can be quite different. Resulting problematic points are occupation values larger than two or smaller than zero, which clearly have no physical meaning. The analysis is especially ruined by diffuse functions. While at the time Mulliken certainly could not know what we are able to do today, nor under which conditions his approach breaks down, it is still amusing to read such sentences today:

Actually, very small negative values occasionally occur [...]. [...] ideally to the population of the AO [...] should never exceed the number 2.00 of electrons in a closed atomic sub-shell. Actually, [the orbital population] in some instances does very slightly exceed 2.00 [...]. The reason why these slight but only slight imperfections exist is obscure. But since they are only slight, it appears that the gross atomic populations calculated using Eq. (6') may be taken as representing rather accurately the "true" populations in various AOs for an atom in a molecule. It should be realized, of course, that fundamentally there is no such thing as an atom in a molecule except in an approximate sense.

For much more on this I found an explanation of the Gaussian output along with the reference to F. Martin, H. Zipse, J. Comp. Chem.
2005, 26, 97 - 105, available as a copy. I have not read it though. Scroll down until the bottom of the page for the image, read for more information: CHEM 2070, Michael K. Denk: UV-Vis & PES. (University of Guelph) If dead: Wayback Machine S.Y. Truong, A.J. Yencha, A.M. Juarez, S.J. Cavanagh, P. Bolognesi, G.C. King, Chemical Physics 2009, 355 (2–3), 183-193. Or try this mirror.
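The symmetric overlap-splitting at the heart of Mulliken's scheme, discussed in the footnotes above, can be sketched in a few lines. The numbers below are a hypothetical two-orbital toy model (one basis function per "atom"), not values from any of the calculations quoted here:

```python
import numpy as np

# Toy two-centre model with a hypothetical overlap of 0.6.
S = np.array([[1.0, 0.6],
              [0.6, 1.0]])              # overlap matrix

c = np.array([1.0, 1.0])
c = c / np.sqrt(c @ S @ c)              # bonding MO, normalised w.r.t. S
P = 2.0 * np.outer(c, c)                # density matrix for 2 electrons

# Mulliken gross population on atom A: N_A = sum_{mu in A} (P S)_{mu mu}.
# The overlap population is split 50:50 between the two atoms -- exactly
# the symmetric splitting criticised in the text above.
populations = np.diag(P @ S)
```

The populations always sum to the electron count, but the even split ignores how differently the two centres may contribute to the MO, which is where the negative or greater-than-two occupations in unbalanced basis sets come from.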
Let $G$ be a quasisplit connected, reductive group over a field $F$. Let $A_0$ be a maximal $F$-split torus, and $P$ a maximal $F$-parabolic subgroup containing $A_0$. Let $W = N_G(A_0)/Z_G(A_0)$ be the Weyl group of $A_0$, and let $w \in W$. I think that $G$ is the disjoint union of the double cosets $$G = \bigcup\limits_{w \in W} PwN$$ where $N$ is the unipotent radical of $P$. Is there any way to compute the dimension of a particular double coset $PwN$? Or perhaps generalize the following result: Assume $F$ is algebraically closed, let $T$ be a maximal torus of $G$ with Weyl group $W = N_G(T)/T$, and let $B$ be a Borel subgroup containing $T$ with unipotent radical $U$. If $w = nT \in W$, let $S_w = \{\alpha_1, ... , \alpha_m\}$ be the set of negative roots with respect to $B$ which are made positive by $w$, where $w$ acts on $X(T)$ by $w.\chi(t) = \chi(n^{-1}tn)$. Let $$U_w' = \{ u_1 \cdots u_m : u_i \in U_{w.\alpha_i} \}$$ where $U_{\alpha_i}$ is the root subgroup of $w.\alpha_i$. Then $U_w'$ is a closed, connected subgroup of $U$ of dimension $m$, and the product map $$B \times U_w' \rightarrow BwU, (b,u) \mapsto bwu$$ is an isomorphism of varieties. In particular, the dimension of $BwU$ is $\textrm{Dim } B + \ell(w^{-1})$, where $\ell$ is the length function.
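In type A the quoted result can be made concrete: for $G = \mathrm{GL}_n$ over an algebraically closed field, the Weyl group is $S_n$ and $\ell(w)$ is the number of inversions of the permutation, so $\dim BwB = \dim B + \ell(w)$ can be tabulated directly (a small sketch; the identification of length with inversion count is the standard one for $S_n$):

```python
from itertools import permutations

def length(w):
    """l(w) for w in S_n (the Weyl group of GL_n): number of inversions."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def bruhat_cell_dims(n):
    """dim(BwB) = dim(B) + l(w) for G = GL_n, where dim B = n(n+1)/2."""
    dim_b = n * (n + 1) // 2
    return {w: dim_b + length(w) for w in permutations(range(1, n + 1))}

dims = bruhat_cell_dims(3)
```

The identity gives the closed cell $B$ itself, and the longest element gives the open dense cell of dimension $\dim G$, consistent with the Bruhat decomposition being a stratification of $G$.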
Hi, can someone provide me some self-reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and it was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However, I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look for machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown. Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration; that might help heavily reduce the parameters needed to simulate them I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components.
The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) in 6 years, my dreams have yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder: is the space of all coordinate choices larger than that of all possible moves of Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame-dragged in the clockwise direction superimposed on a spacetime that is frame-dragged in the anticlockwise direction will result in a spacetime with no frame drag? (One possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge.) Well, I'm a beginner in the study of General Relativity, ok?
My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge yet. So, what I meant by "gravitational double-slit experiment" is: is there a gravitational analogue of the double-slit experiment for gravitational waves? @JackClerk the double-slit experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference space-time would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions** The situation was even more dire for Calculus and I managed! This is a neat strategy I have found - revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you Kaumudi to decrease the probability of the h bar having software-system-infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. Though back in high school, regardless of code, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the four-space indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write matlab code online (for free)? Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's PCs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction.
Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was worth 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the server of the university - which means remotely running another environment - I found an older version of matlab). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem-solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Wikipedia also says that Troelstra said in 1988 that there were no satisfactory foundations for ultrafinitism. Is this still true? Even if so, are there any aspects of ultrafinitism that you can get your hands on coming from a purely classical perspective? There are no foundations for ultrafinitism as satisfactory for it as (say) intuitionistic logic is for constructivism. The reason is that the question of what logic is appropriate for ultrafinitism is still an open one, for not one but several different reasons. First, from a traditional perspective -- whether classical or intuitionistic -- classical logic is the appropriate logic for finite collections (but not K-finite). The idea is that a finite collection is surveyable: we can enumerate and look at each element of any finite collection in finite time. (For example, the elementary topos of finite sets is Boolean.) However, this is not faithful to the ultra-intuitionist idea that a sufficiently large set is impractical to survey. So it shouldn't be surprising that more-or-less ultrafinitist logics arise from complexity theory, which identifies "practical" with "polynomial time". I know two strands of work on this. The first is Buss's work on $S^1_2$, which is a weakening of Peano arithmetic with a weaker induction principle: $$A(0) \land (\forall x.\;A(x/2) \Rightarrow A(x)) \Rightarrow \forall x.\;A(x)$$ Then any proof of a forall-exists statement has to be realized by a polynomial time computable function. There is a line of work on bounded set theories, which I am not very familiar with, based on Buss's logic. The second is a descendant of Bellantoni and Cook's work on programming languages for polynomial time, and Girard's work on linear logic. The Curry-Howard correspondence takes functional languages, and maps them to logical systems, with types going to propositions, terms going to proofs, and evaluation going to proof normalization. 
So the complexity of a functional program corresponds in some sense to the practicality of cut-elimination for a logic. IIRC, Girard subsequently showed that for a suitable version of affine logic, cut-elimination can be shown to take polynomial time. Similarly, you can build set theories on top of affine logic. For example, Kazushige Terui has since described a set theory, Light Affine Set Theory, whose ambient logic is linear logic, and in which the provably total functions are exactly the polytime functions. (Note that this means that for Peano numerals, multiplication is total but exponentiation is not --- so Peano and binary numerals are not isomorphic!) These proof-theoretic questions arise because part of what makes the ultra-intuitionist conception of the numerals sensible is precisely the denial of large proofs. If you deny that large integers exist, then a proof that they exist, which is larger than the biggest number you accept, doesn't count! I enjoyed Vladimir Sazonov's paper "On Feasible Numbers", which explicitly studies the connection. I should add that I am not a specialist in this area, and what I've written is just the fruits of my interest in the subject -- I have almost certainly overlooked important work, for which I apologize.
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca... I am a bit confused in classical physics's angular momentum. For an orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t.
the old coordinate), angular momentum should still be conserved, right? (I calculated a quite absurd result - it is no longer conserved (there is an additional term that varies with time) in the new coordinate: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the 1st term varies with time. (Here $\vec R$ is the shift of the coordinate origin; $\vec R$ is constant while $\vec p$ is sort of rotating.) Would anyone be kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title), is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet Is it possible to make a time machine ever? Please give an easy answer, a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10, 47 secs ago @vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is.
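The angular-momentum puzzle above can be checked numerically. Below is a toy sketch (all numbers mine, not from the chat): a unit circular orbit with $m=1$; about the force center $L_z$ is constant, while about an origin shifted by $\vec R$ the extra $\vec R\times\vec p$ term makes $L_z'$ oscillate. About the new origin the central force exerts a nonzero torque, so conservation is not expected there.

```python
import numpy as np

# Toy setup: unit circular orbit, m = 1, so r = (cos t, sin t, 0), p = dr/dt.
R = np.array([2.0, 0.0, 0.0])  # constant shift of the origin

def Lz(t, shift):
    r = np.array([np.cos(t), np.sin(t), 0.0])
    p = np.array([-np.sin(t), np.cos(t), 0.0])
    return np.cross(r + shift, p)[2]  # z-component of (r + shift) x p

ts = np.linspace(0.0, 2 * np.pi, 200)
L_orig = np.array([Lz(t, np.zeros(3)) for t in ts])   # about the force center
L_shift = np.array([Lz(t, R) for t in ts])            # about the shifted origin

print(np.ptp(L_orig), np.ptp(L_shift))  # ~0 versus an order-1 oscillation
```

Here `L_shift` comes out as $1 + 2\cos t$: constant part plus exactly the time-varying $\vec R\times\vec p$ term from the chat calculation.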
I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan Book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll-worthy and cringy or boring and predictable with OK writing. A notable exception is Steven Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$ Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape but this leads me to another question I forgot ages ago If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields each coming from some point in the cloud?
For example, the rate of a chemical reaction can be expressed in $\mathrm{mol}/\mathrm{L}^{-1}/\mathrm{sec}^{-1}$. Why is it ‘−1’ and not, say, ‘−2’? Does it change the meaning if the minus is removed and we simply express the rate in $\mathrm{mol}/\mathrm{L}/\mathrm{sec}$? The $^{-1}$ means "per" unit. So your first example $\mathrm{mol/L^{-1}/s^{-1}}$ is not correct - it would actually be written as $\mathrm{mol\ L^{-1}\ s^{-1}}$, OR $\mathrm{mol/(L\ s)}$. It is also sometimes written as $\mathrm{mol/L/s}$, but the double division is ambiguous and should be avoided unless parentheses are used. If it were $\mathrm{mol\ L^{-1}\ s^{-2}}$, this would mean moles per litre per second per second. This is really just a question of notation, and is not chemistry-specific at all. Yes, all the minus/plus signs and the values of numbers are important. Good examples of units can include: area, measured in $\mathrm{m^2}$, or metres squared; volume, measured in $\mathrm{m^3}$, or metres cubed; pressure, measured in $\mathrm{N\ m^{-2}}$, or newtons per metre squared; velocity, measured in $\mathrm{m\ s^{-1}}$, or metres per second; acceleration, measured in $\mathrm{m\ s^{-2}}$, or metres per second per second. The $^{-1}$ superscript can be thought of as saying "per" or as being the denominator of the fraction. So in your example $\mathrm{mol \cdot L^{-1} sec^{-1}}$ can be thought of as saying moles per liter per second. This is easier than writing $\mathrm{\frac{mol}{(L \cdot sec)}}$ Changing the superscript from $1$ to $2$ or $3$ would change the meaning of the value. E.g. $$1 \mathrm{\ cm^{3}\ is\ 1\ mL}$$ So, $\mathrm{cm}^{-1}$ is per centimeter, which would be a measurement of something per distance, but $\mathrm{cm^{-3}}$ would be talking about something in a given volume. It may have its roots even earlier than that, but this was mainly due to people using typewriters to write scientific papers, etc.
Now we have the ability to format things like $\mathrm{\frac{mol}{L}}$, both on-screen and in print, but adjusting the carriage and line feed knob every time you had to type a complicated formula was tedious, so it was easier to type "mol-L-1" instead. Even when the -1s became superscripts, as John points out in his answer, it was still used in typesetting to keep formulas, etc. all on the same line in books. First off: your suggestion $\require{cancel}\cancel{\mathrm{mol/L^{-1}/sec^{-1}}}$ is very wrong for three principal reasons: the unit symbol for seconds is $\pu{s}$, not $\pu{sec}$ or anything else; you should never include two slashes for division. Does $\mathrm{mol/l/s}$ equal $\mathrm{mol/(l/s)}$ or $\mathrm{(mol/l)/s}$? This is ambiguous. One should always indicate with brackets which units are ‘per’ and which are not; in your example it should be $\pu{mol/(l\cdot s)}$. And your suggestion does not mean what you think it means; more on that below. Mathematically, a negative exponent has the same effect as placing the expression associated with it into the denominator. $$\begin{align}x^{-1} &= \frac 1x\\[0.3em] 2^{-2} &= \frac1{2^2}\\[0.3em] e^{-i\phi} &= \frac1{e^{i\phi}}\end{align}$$ Units in the natural sciences are treated much like variables in general mathematics, i.e. they can be multiplied and thereby raised to powers (e.g. $\mathrm{m^2}$) or divided by each other (e.g. $\mathrm{m/s^2}$). Only if the unit is identical can two numeric values be added or subtracted; so $\pu{2m}+\pu{3m}=\pu{5m}$ makes sense as does $2a + 3a=5a$, but $\pu{2m}+\pu{3s}$ cannot be added, akin to $2a+3b$. The combination of units usually means what common sense would read them as. So $\pu{1m^2}$ is equivalent to a square area with the side length being $\pu{1m}$. $\pu{1 N\cdot m}$ is equivalent to a force of one newton applied over the distance of 1 meter (with a lever). And $\pu{1m/s}$ means travelling one meter per second.
While more complex expressions such as $\mathrm{kg \cdot m^2 / s^2}$ do not always immediately make intuitive sense, they can usually be broken down into fragments that would make intuitive sense. After this excursion, it becomes clear that an expression such as $\pu{mol\cdot l^-1\cdot s^-1}$ is equivalent to a fractional unit of $\mathrm{\frac{mol}{l\cdot s}}$, meaning that the concentration is increased by $\pu{1 mol/l}$ in one second. This also means that: it does not make sense to replace the exponent of $-1$ with e.g. $-2$ as that would result in a different unit (e.g. $\mathrm{kg\cdot m^2\cdot s^{-2}}$ is joule, the unit of energy, while $\mathrm{kg\cdot m^2\cdot s^{-3}}$ is watt, the unit of power); it does not make sense to remove the negative sign from the exponent as that would result in a different unit (e.g. $\pu{10Hz} = \pu{10s-1}$ corresponds to a frequency — ten times per second — while $\pu{10s}$ obviously corresponds to a duration); one has to choose between either the slash or the negative exponent, as both would cancel each other out. This last one is implied by the general laws of mathematics: $$\begin{align}\frac1{x^{-1}} &= \frac1{\frac1x}\\[0.5em] &= \left(\frac11\right) / \left(\frac1x\right)\\[0.5em] &= \left(\frac11\right) \times \left(\frac x1\right)\\[0.5em] &= x\end{align}$$ which is the third reason your suggestion is wrong. In general, I would give preference to the negative exponents ($\pu{mol l-1 s-1}$) except in cases where there is only a single unit raised to a power of $-1$ and no other powers exist; in these cases, e.g. $\pu{mol/l}$ usually integrates itself better into the flow of text.
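The bracketing point can be made concrete with a tiny dimensional-bookkeeping sketch (my own illustration, not from the answers above): represent a unit as a map from base-unit symbol to integer exponent, so $\pu{mol l-1 s-1}$ and $\pu{mol/(l\cdot s)}$ come out identical, while the two readings of the ambiguous mol/l/s do not.

```python
from collections import Counter

# A unit is a dict {base-symbol: integer exponent}; mol L^-1 s^-1 is
# {'mol': 1, 'L': -1, 's': -1}.
def mul(u, v):
    out = Counter(u)
    out.update(v)  # Counter.update adds exponents, including negative ones
    return {k: e for k, e in out.items() if e != 0}

def inv(u):
    return {k: -e for k, e in u.items()}

mol, L, s = {'mol': 1}, {'L': 1}, {'s': 1}

with_exponents = mul(mol, mul(inv(L), inv(s)))   # mol L^-1 s^-1
with_brackets = mul(mol, inv(mul(L, s)))         # mol/(L s)
print(with_exponents == with_brackets)           # True: the same unit

# The ambiguous double slash: (mol/L)/s versus mol/(L/s) differ in the
# exponent on seconds.
left = mul(mul(mol, inv(L)), inv(s))
right = mul(mol, inv(mul(L, inv(s))))
print(left == right)                             # False
```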
So I was looking through related questions on this site and the book "Proper and Improper Forcing" by Shelah kept popping up. So I checked it out and the appendix actually resolves the question. I rephrase the proof there because I think this way it's simpler and also because it suggests an interesting question (see the end). Let $P_0 := \mathsf{Fn}(\omega_2, 2, \omega)$ and let $P_1 := \mathsf{Fn}(\omega_3, 2, \omega_1)$. The following lemma is basic: Lemma. $P_0 \Vdash MA^*$. Proof. Let $G$ be $P_0$-generic over $\mathbb{V}$. Let $f: \omega_2 \to 2$ be the generic function. Suppose $X \subseteq \omega_1$ codes an $\omega_1$-sequence of open dense sets $(\mathcal{O}_\alpha: \alpha < \omega_1)$ of $(\,^\omega \omega)^{\mathbb{V}[G]}$. Then for some $I \subset \omega_2$ with $|I| = \aleph_1$, we have $X \in \mathbb{V}[G \cap \mathsf{Fn}(I, 2, \omega)]$. But let $\alpha = (\sup I) + 1$ and let $x \in \,^\omega \omega$ be defined by $x(n) = f(\alpha + n)$. Then $x \in \bigcap_{\alpha < \omega_1} \mathcal{O}_\alpha$. End proof of lemma. We aim to show that $P_0 \times P_1 \Vdash MA^* \land \Phi^*$ provided we start with a model of $CH$. This proof is gleaned from the proof of Theorem 2.11 in the Appendix of Shelah's "Proper and Improper Forcing." Customarily we view this iterated forcing as starting with $P_1$ and following it by $P_0$, since $(P_0)^{\mathbb{V}^{P_1}} = P_0$. However, to get $\Phi^*$ we have to view it the other way around. So we need to look at $P_1$ in $\mathbb{V}^{P_0}$. Lemma. Suppose $G_0$ is $P_0$-generic over $\mathbb{V} \models CH$.
Then in $\mathbb{V}[G_0]$: (a) $\mathsf{Fn}(\omega_3, 2, \omega) \subset P_1 \subset \mathsf{Fn}(\omega_3, 2, \omega_1)$, and for any two elements $p, q \in P_1$, if $p$ and $q$ are compatible then $p \cup q \in P_1$, (b) $P_1$ has the $\omega_2$ c.c., (c) Forcing with $P_1$ does not add any reals, (d) The set $\mathcal{B} := \{\mbox{dom}(p): p \in P_1\}$ is closed under intersections, unions, and relative complements, and if $p \in P_1$ and $X \in \mathcal{B}$, then there is some $q \, || \, p$ with $\mbox{dom}(q) = X$, (e) If $X \subseteq \omega_3$ is countable there is some (countable) $Y \in \mathcal{B}$ containing $X$. Proof. (a) Obvious. (b) Showing that $(Q \mbox{ has the $\omega_2$ c.c.})^{\mathbb{V}[G_0]}$ is the same as showing it in $\mathbb{V}$ (just go through the proof of the $\Delta$-system lemma and see that it works for any antichain in $Q$). (c) Let $G_1$ be $Q$-generic over $\mathbb{V}[G_0]$. Then $G_1$ is also $Q$-generic over $\mathbb{V}$, $G_0$ is $P$-generic over $\mathbb{V}[G_1]$, and $G_0 \times G_1$ is $P \times Q$-generic over $\mathbb{V}$. Now it suffices to show that if $x \in \mathcal{P}(\omega) \cap \mathbb{V}[G_0 \times G_1]$ then $x \in \mathcal{P}(\omega) \cap \mathbb{V}[G_0]$. To see this, consider a $P$-nice name $\sigma$ for $x$ in $\mathbb{V}[G_1]$; so $\sigma = \{(\hat{n}, A_n): n \in \omega\}$ where $A_n \in \mathbb{V}[G_1]$ is an antichain in $P$. So $A_n \subseteq P$ is countable, so $A_n \in \mathbb{V}$ already (since $(Q \mbox{ is $\omega$-closed})^{\mathbb{V}}$). Hence $\sigma \in \mathbb{V}$, so $x \in \mathbb{V}[G_0]$. (d) Obvious. (e) The existence of $Y$ follows from the c.c.c. of $P_0$. End proof of lemma. Lemma. Suppose $2^{\aleph_0} \leq \aleph_2$ and $Q$ is a forcing notion satisfying $(a)$ through $(e)$ above. Then $Q \Vdash \Phi^*$. Proof. Suppose towards a contradiction that $1_Q \Vdash \lnot \Phi^*$.
Then (by the maximal principle) there are names $\dot{F}, \dot{S}$ such that \begin{eqnarray*}1_Q &\Vdash& ``\mbox{ $\dot{F}: \,^{<\omega_1} 2 \to 2$ and $\dot{S} \subseteq \omega_1$ is stationary and } \\&& \forall g: \omega_1 \to 2 \,\,\, \exists f: \omega_1 \to 2 \mbox{ such that } \{\alpha \in \dot{S}: \dot{F}(f \restriction_\alpha)= g(\alpha)\} \\&& \mbox{ is nonstationary}." \end{eqnarray*}We can choose $\dot{F}$ and $\dot{S}$ to be nice names; that is, $\dot{F} = \{(\hat{(\eta, i)}, A_{\eta, i}): \eta \in \,^{<\omega_1} 2, i \in 2\}$ and $\dot{S} = \{(\hat{\alpha}, A_\alpha) : \alpha < \omega_1\}$, where each $A_{\eta, i}$ and each $A_\alpha$ is an antichain in $Q$. (This uses that $(\,^{<\omega_1} 2)^{\mathbb{V}} = (\,^{<\omega_1} 2)^{\mathbb{V}^Q}$.) Let $\mathcal{A} = \bigcup_{\eta, i} A_{\eta, i} \cup \bigcup_{\alpha} A_\alpha$; so by the $\omega_2$-c.c., $|\mathcal{A}| \leq \aleph_2$. Hence $D:= \bigcup_{A \in \mathcal{A}} \mbox{dom}(A)$ has cardinality at most $\aleph_2$. Hence after relabeling we can suppose that $D \cap \omega_1 = \emptyset$. Let $\dot{c}$ be the nice $Q$-name for the generic $\omega_3$-sequence: $\dot{c} = \{(\hat{(\alpha, i)}, \{(\alpha, i)\}): \alpha < \omega_3, i \in 2\}$. Let $\dot{g} = \dot{c} \restriction_{\omega_1}$, i.e. $\dot{g} = \{(\hat{(\alpha, i)}, \{(\alpha, i)\}): \alpha < \omega_1, i \in 2\}$. Then $1_Q \Vdash ``\exists f: \omega_1 \to 2$ such that $\{\alpha \in \dot{S}: \dot{F}(f \restriction_\alpha) = \dot{g}(\alpha)\}$ is nonstationary." So for some nice $Q$-name $\dot{f} = \{(\hat{(\alpha, i)}, B_{\alpha, i}): \alpha < \omega_1, i \in 2\}$, we have $1_Q \Vdash ``\dot{f}: \omega_1 \to 2$ and $\{\alpha \in \dot{S}: \dot{F}(\dot{f} \restriction_\alpha) = \dot{g}(\alpha)\}$ is nonstationary." Let $\dot{C}$ be a nice name for a club subset of $\omega_1$ such that $1_Q \Vdash ``\forall \alpha \in \dot{S} \cap \dot{C}: \dot{F}(\dot{f} \restriction_\alpha) \not= \dot{g}(\alpha)$."
Say $\dot{C} = \{(\hat{\alpha}, B_\alpha): \alpha < \omega_1\}$. Let $G$ be $Q$-generic over $\mathbb{V}$; we work in $\mathbb{V}[G]$. Let $F = \dot{F}_G$, let $S = \dot{S}_G$, let $g = \dot{g}_G$, let $f = \dot{f}_G$ and let $C = \dot{C}_G$. For each $\alpha < \omega_1$, let $p_\alpha$ be the unique element of $G \cap (B_{\alpha, 0} \cup B_{\alpha, 1})$ and let $q_\alpha$ be the unique element of $G \cap B_\alpha$ if it exists, or $q_\alpha = 1_Q$ if $G \cap B_\alpha = \emptyset$. Let $D_\alpha = \mbox{dom}(p_\alpha) \cup \mbox{dom}(q_\alpha)$ and let $X_\alpha = \bigcup_{\beta < \alpha} D_\beta$. Then the set $C^* = \{\alpha < \omega_1: X_\alpha \cap \omega_1 \subseteq \alpha\}$ is club. So there is some $\delta \in S \cap \mbox{acc}(C) \cap C^*$. (In the following we use the properties $(a), (d)$ and $(e)$ heavily.) Let $I \subset \omega_3$ be countable such that $I \supseteq \delta \cup (X_\delta \backslash \omega_1)$ and $I \in \mathcal{B}$ (where $\mathcal{B}$ is as in the lemma). Let $J = I \backslash \{\delta\}$ (possibly $I = J$); so $J \in \mathcal{B}$. Let $p_0$ be the unique element of $G \cap A_\delta$, so $p_0 \Vdash \delta \in \dot{S}$. Let $p_1 = g \restriction_J$, so $p_1 \Vdash \dot{f} \restriction_\delta = f \restriction_\delta$ and $p_1 \Vdash \dot{C} \cap \delta = C \cap \delta$. Since $\delta \in \mbox{acc}(C)$, $p_1 \Vdash \delta \in \dot{C}$. Let $p = p_0 \cup p_1$; note that $\delta \not\in \mbox{dom}(p)$. Back in $\mathbb{V}$, $p \Vdash ``\hat{\delta} \in \dot{S} \cap \dot{C} \land \dot{F}(\dot{f} \restriction_\delta) = i"$, for some $i \in 2$. But let $q = p \cup \{(\delta, i)\}$; then $q \Vdash \dot{F}(\dot{f} \restriction_\delta) = \dot{g}(\delta)$, contrary to the definition of $\dot{C}$. End proof of lemma. Theorem. Suppose $\mathbb{V} \models CH$. Then $P_0 \times P_1 \Vdash MA^* \land \Phi^*$. Proof. Let $G_0 \times G_1$ be $P_0 \times P_1$-generic over $\mathbb{V}$.
By Lemma 1, $\mathbb{V}[G_0 \times G_1] = \mathbb{V}[G_1][G_0] \models MA^*$. By Lemma 2, $\mathbb{V}[G_0 \times G_1] = \mathbb{V}[G_0][G_1] \models \Phi^*$. End proof of theorem. Question. Under what conditions can we find a $Q$ satisfying conditions (a) through (e)?
Many thanks to Bart Andrews for this contribution! Question Consider two systems \(I\) and \(II\) in contact with a common heat bath with temperature T and suppose that a mechanism exists which allows both systems to exchange particles. The probability that the composed system \(I + II\) is in a state for which system \(I\) has an energy between \(E_I\) and \(E_I+dE_I\) and a particle number \(N_I\), while \(II\) has an energy between \(E_{II}\) and \(E_{II} + dE_{II}\) with a particle number \(N_{II}\) is given by \begin{equation} p(E_I, N_I; E_{II}, N_{II})dE_I dE_{II} = \frac{\Omega_I(E_I,N_I)\Omega_{II}(E_{II},N_{II}) e^{-\beta(E_I+E_{II})}}{\sum_{N_I=0}^{N} Z_I(T,N_I)Z_{II}(T,N-N_I)}\,\,\,\,\,\,dE_IdE_{II} \end{equation} Here, \(Z_I(T, N_I)\) and \(Z_{II}(T, N_{II})\) are the partition sums of the single systems, \(\Omega_I(E_I, N_I)\) and \(\Omega_{II}(E_{II},N_{II})\) are the degeneracies of the states in the single systems and \(N = N_I + N_{II}\) is the total number of particles. Show that for the most probable distribution \(\{N_I, N_{II}\}\) of particles, the sum of the free energies \(F_I + F_{II}\) is minimal and that the chemical potentials of both systems are equal. Solution This question considers the most probable configuration of free particles and then asks for two things: Show that the total free energy of the system is minimal Show that the chemical potentials are equal We shall start by showing that the chemical potentials are equal. Let the most probable value of \(N_I\) be called \(\tilde{N_I}\). We know that since the total number of particles stays constant, the particle number of the second system can be expressed as a function of the particle number of the first system. \[N_{II} = N - N_I\] Therefore, if we find the most probable distribution for \(N_I\) particles, then this will consequently be the most probable distribution of \(\{N_I,N_{II}\}\). 
So, we can proceed by finding the most probable distribution as follows: Maximise the probability with respect to \(N_I\). \[{\partial p \over \partial N_I} \stackrel{!}{=} 0\] Use the product rule, set equal to zero and simplify. \[\bigg({\Omega'_I}(E_I,N_I){\Omega_{II}}(E_{II},N-N_I)-{\Omega_I}(E_I,N_I){\Omega'_{II}}(E_{II},N-N_I)\bigg) \bigg|_{\tilde{N_I}} = 0\] \[\frac{\Omega'_I(E_I,\tilde{N_I})}{\Omega_I(E_I,\tilde{N_I})} = \frac{\Omega'_{II}(E_{II},N-\tilde{N_I})}{\Omega_{II}(E_{II},N-\tilde{N_I})}\] Multiply both sides by the Boltzmann constant. \[k_B\frac{\Omega'_I(E_I,\tilde{N_I})}{\Omega_I(E_I,\tilde{N_I})} = k_B\frac{\Omega'_{II}(E_{II},N-\tilde{N_I})}{\Omega_{II}(E_{II},N-\tilde{N_I})}\] Take the Boltzmann constant inside the derivatives. \[{\partial (k_B \ln \Omega_I) \over \partial N_I} \bigg|_{\tilde{N_I}} = {\partial (k_B \ln \Omega_{II}) \over \partial N_{II}}\bigg|_{N-\tilde{N_I}}\] Use the Boltzmann expression for the entropy. \[{\partial S_I \over \partial N_I} \bigg|_{\tilde{N_I}} = {\partial S_{II} \over \partial N_{II}}\bigg|_{N-\tilde{N_I}}\] Use the thermodynamic relation for the chemical potential. \[\frac{\mu_I}{T} = \frac{\mu_{II}}{T} \] Since \(T\) is constant for both systems due to the heat bath, it cancels. \[\mu_I = \mu_{II} \; \blacksquare\] Now we can proceed to the second part and show that the total free energy in this situation is minimal. 
From the second law of thermodynamics, we know that: \( \newcommand{\dslash}{\delta} \) \[dS\ge \frac{\dslash Q}{T}\] \[T dS \ge \dslash Q\] \[0 \ge \dslash Q - T dS\] Additionally, from the first law of thermodynamics we know that: \[dU = \dslash Q - P dV\] Now taking the Legendre transform of the internal energy to Helmholtz free energy, \(F\equiv U - TS\), we obtain: \begin{eqnarray} dF & = & dU-TdS-SdT \nonumber \\ & = & \dslash Q-PdV-TdS-SdT \nonumber \\ & = & (\dslash Q-TdS) -PdV-SdT \nonumber \\ & = & \dslash Q-TdS \nonumber \end{eqnarray} Here, we have used the fact that the total volume and temperature of the two systems is constant and so \(dV=dT=0\). Combining this result with the second law of thermodynamics leads to the inequality: \[dF \le 0\] This implies that for a process at constant temperature and volume the Helmholtz free energy seeks a minimum and this minimum is achieved when \(dF=0\). So now we can look at our particular problem and calculate the total differential \(dF\). \[dF = \left( \partial F \over \partial T \right) dT + \left( \partial F \over \partial V \right) {dV} + \left( \partial F \over \partial N_I \right) dN_I + \left( \partial F \over \partial N_{II} \right) dN_{II}\] It is assumed that all other necessary variables are kept constant when these partial derivatives are taken. Now we know that the temperature is constant because of mutual contact with a heat bath and so \(dT=0\). We also know that the total volume is kept constant because these systems are confined and so \(dV=0\). From the relation… \[dN_I = -dN_{II}\] …coupled with the thermodynamic definitions for chemical potential… \[\mu_I = \left( \partial F \over \partial N_I \right) \; , \; \mu_{II} = \left( \partial F \over \partial N_{II} \right) \] …it is clear that since the chemical potentials are equal, the last two terms cancel. This means that the total free energy is, in fact, at a minimum as required. \[dF=0 \; \blacksquare \]
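The two conditions can also be checked numerically in a toy model (numbers and degeneracies are mine, purely illustrative): take binomial degeneracies \(\Omega_I(N)=\binom{50}{N}\) and \(\Omega_{II}(N)=\binom{100}{N}\) with \(N=60\) particles in total, find the most probable split, and compare the discrete "chemical potentials" \(\mu/T=-\partial\ln\Omega/\partial N\) (units with \(k_B=1\)) at that split.

```python
from math import comb, log

# Toy degeneracies (illustrative only): Omega_I(n) = C(50, n), Omega_II(n) = C(100, n).
M1, M2, NTOT = 50, 100, 60

def log_omega(M, n):
    return log(comb(M, n))

# The most probable split maximizes Omega_I(n) * Omega_II(NTOT - n).
n_star = max(range(1, min(M1, NTOT)),
             key=lambda n: log_omega(M1, n) + log_omega(M2, NTOT - n))

# mu/T = -d(ln Omega)/dN, estimated with a central difference (k_B = 1).
def mu_over_T(M, n):
    return -0.5 * (log_omega(M, n + 1) - log_omega(M, n - 1))

mu_1 = mu_over_T(M1, n_star)
mu_2 = mu_over_T(M2, NTOT - n_star)
print(n_star, mu_1, mu_2)  # at the maximum the two chemical potentials nearly coincide
```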
In Thompson's Modern Particle Physics, in section 6 about electron-positron annihilation, it is stated (p. 152) that "If the final-state fermion mass is also neglected, (6.63) reduces to the expression for the spin-averaged matrix element squared of (6.25), which was obtained from the helicity amplitudes." Here: $$\begin{align}\langle\lvert\mathcal{M}_{fi}\rvert^2\rangle&=2\dfrac{Q_f^2e^4}{(p_1\cdot p_2)^2}\left[(p_1\cdot p_3)(p_2\cdot p_4)+(p_1\cdot p_4)(p_2\cdot p_3)+m_f^2(p_1\cdot p_2)\right]\tag{6.63}\\ \langle\lvert\mathcal{M}_{fi}\rvert^2\rangle&\approx 2e^4\dfrac{(p_1\cdot p_3)^2+(p_1\cdot p_4)^2}{(p_1\cdot p_2)^2}\tag{6.25}\end{align}$$ So how exactly does one show this equality?
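A sketch of the reduction (my own working, not quoted from the book): with massless external fermions, $p_i^2=0$ and four-momentum conservation $p_1+p_2=p_3+p_4$ give, by squaring $p_1+p_2=p_3+p_4$, $p_1-p_3=p_4-p_2$ and $p_1-p_4=p_3-p_2$, the identities $p_1\cdot p_2 = p_3\cdot p_4$, $p_1\cdot p_3 = p_2\cdot p_4$ and $p_1\cdot p_4 = p_2\cdot p_3$. Substituting these into (6.63) with $m_f\to 0$:

```latex
\begin{align}
\langle\lvert\mathcal{M}_{fi}\rvert^2\rangle
  &= 2\frac{Q_f^2 e^4}{(p_1\cdot p_2)^2}
     \left[(p_1\cdot p_3)(p_2\cdot p_4) + (p_1\cdot p_4)(p_2\cdot p_3)\right]
     && (m_f \to 0)\\
  &= 2\,Q_f^2 e^4\,
     \frac{(p_1\cdot p_3)^2 + (p_1\cdot p_4)^2}{(p_1\cdot p_2)^2}.
\end{align}
```

Up to the charge factor $Q_f^2$, which is $1$ for the $\mu^+\mu^-$ final state treated around (6.25), this is exactly the quoted expression.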
Let $X_i$ be the result, where $X_i=1$ implies heads and $X_i=0$ tails. Let $\theta_j\in\{0.5,1\}$, where $\theta_j$ is the bias for heads. $\theta_1=.5$ and $\theta_2=1$. $$\Pr(\theta_1|X_{1\dots{10}}=1)\propto{\left(\frac{1}{2}\right)^{10}}\frac{999}{1000}=\frac{999}{1000\times{1024}}.$$ $$\Pr(\theta_2|X_{1\dots{10}}=1)\propto{1^{10}}\frac{1}{1000}=\frac{1024}{1000\times{1024}}.$$ $$\Pr(X_{1\dots{10}}=1)=\frac{999+1024}{1000\times{1024}}$$ $$\Pr(\theta_1|X_{1\dots{10}}=1)=\frac{999}{999+1024}=\frac{999}{2023}$$ $$\Pr(\theta_2|X_{1\dots{10}}=1)=\frac{1024}{999+1024}=\frac{1024}{2023}$$ $$\Pr(X_{11}=1|X_{1\dots{10}}=1)=\sum_{j=1}^2\theta_j\Pr(\theta_j|X_{1\dots{10}}=1)$$ $$\Pr(X_{11}=1|X_{1\dots{10}}=1)=\frac{1}{2}\cdot\frac{999}{2023}+1\cdot\frac{1024}{2023}=\frac{3047}{4046}\approx{.75}$$ I debated answering this question as it could be viewed as more appropriate for Cross Validated or Mathematics; however, I decided to do so for a couple of reasons directly related to QF. First, quantitative finance is calculated gambling. Bayesian statistics are coherent. Frequentist statistics are incoherent. A statistic is considered coherent if a fair gamble can be created from it. It vastly exceeds the scope of your question, but if you are pricing a loan or an option then it is technically incorrect to use a Frequentist method, at least for a financial intermediary. The second reason is that this problem is a discrete form of a real finance problem. Given an unknown parameter and a historical record, what is the probability of a future state of the world? You need to get a very good grasp on the Bayesian prior distribution, the Bayesian posterior distribution and the Bayesian posterior predictive distribution.
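The arithmetic above can be reproduced exactly with rational numbers (a sketch; the 999/1000 prior on the fair coin is the one used in the worked example):

```python
from fractions import Fraction

# Prior: P(fair) = 999/1000, P(two-headed) = 1/1000; theta is P(heads).
priors = {Fraction(1, 2): Fraction(999, 1000), Fraction(1): Fraction(1, 1000)}

# Posterior after ten heads: proportional to prior * theta^10.
unnorm = {th: pr * th ** 10 for th, pr in priors.items()}
Z = sum(unnorm.values())
posterior = {th: w / Z for th, w in unnorm.items()}

# Posterior predictive probability that flip 11 is heads.
p_heads_11 = sum(th * p for th, p in posterior.items())
print(posterior[Fraction(1, 2)], p_heads_11)  # 999/2023 and 3047/4046
```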
10.9. Adadelta¶ In addition to RMSProp, Adadelta is another common optimization algorithm that helps improve the chances of finding useful solutions at later stages of iteration, which is difficult to do when using the Adagrad algorithm for the same purpose [Zeiler.2012]. The interesting thing is that there is no learning rate hyperparameter in the Adadelta algorithm. 10.9.1. The Algorithm¶ Like RMSProp, the Adadelta algorithm uses the variable \(\boldsymbol{s}_t\), which is an EWMA on the squares of elements in the mini-batch stochastic gradient \(\boldsymbol{g}_t\). At time step 0, all the elements are initialized to 0. Given the hyperparameter \(0 \leq \rho < 1\) (counterpart of \(\gamma\) in RMSProp), at time step \(t>0\), compute using the same method as RMSProp: \[\boldsymbol{s}_t \leftarrow \rho \boldsymbol{s}_{t-1} + (1 - \rho) \boldsymbol{g}_t \odot \boldsymbol{g}_t.\] Unlike RMSProp, Adadelta maintains an additional state variable, \(\Delta\boldsymbol{x}_t\), the elements of which are also initialized to 0 at time step 0. We use \(\Delta\boldsymbol{x}_{t-1}\) to compute the variation of the independent variable: \[\boldsymbol{g}_t' \leftarrow \sqrt{\frac{\Delta\boldsymbol{x}_{t-1} + \epsilon}{\boldsymbol{s}_t + \epsilon}} \odot \boldsymbol{g}_t.\] Here, \(\epsilon\) is a constant added to maintain the numerical stability, such as \(10^{-5}\). Next, we update the independent variable: \[\boldsymbol{x}_t \leftarrow \boldsymbol{x}_{t-1} - \boldsymbol{g}_t'.\] Finally, we use \(\Delta\boldsymbol{x}\) to record the EWMA on the squares of elements in \(\boldsymbol{g}_t'\), which is the variation of the independent variable: \[\Delta\boldsymbol{x}_t \leftarrow \rho \Delta\boldsymbol{x}_{t-1} + (1 - \rho) \boldsymbol{g}_t' \odot \boldsymbol{g}_t'.\] As we can see, if the impact of \(\epsilon\) is not considered here, Adadelta differs from RMSProp in its replacement of the hyperparameter \(\eta\) with \(\sqrt{\Delta\boldsymbol{x}_{t-1}}\). 10.9.2. Implementation from Scratch¶ Adadelta needs to maintain two state variables for each independent variable, \(\boldsymbol{s}_t\) and \(\Delta\boldsymbol{x}_t\). We use the formulas from the algorithm to implement Adadelta.
%matplotlib inline
import d2l
from mxnet import np, npx

npx.set_np()

def init_adadelta_states(feature_dim):
    s_w, s_b = np.zeros((feature_dim, 1)), np.zeros(1)
    delta_w, delta_b = np.zeros((feature_dim, 1)), np.zeros(1)
    return ((s_w, delta_w), (s_b, delta_b))

def adadelta(params, states, hyperparams):
    rho, eps = hyperparams['rho'], 1e-5
    for p, (s, delta) in zip(params, states):
        s[:] = rho * s + (1 - rho) * np.square(p.grad)
        g = (np.sqrt(delta + eps) / np.sqrt(s + eps)) * p.grad
        p[:] -= g
        delta[:] = rho * delta + (1 - rho) * g * g

Then, we train the model with the hyperparameter \(\rho=0.9\).

data_iter, feature_dim = d2l.get_data_ch10(batch_size=10)
d2l.train_ch10(adadelta, init_adadelta_states(feature_dim),
               {'rho': 0.9}, data_iter, feature_dim);

loss: 0.248, 0.083 sec/epoch

10.9.3. Concise Implementation

Using the Trainer instance of the algorithm named "adadelta", we can implement Adadelta in Gluon. Its hyperparameter can be specified by rho.

d2l.train_gluon_ch10('adadelta', {'rho': 0.9}, data_iter)

loss: 0.243, 0.089 sec/epoch

10.9.4. Summary

Adadelta has no learning rate hyperparameter; instead, it uses an EWMA of the squares of the elements of the variation of the independent variable to take the place of the learning rate.

10.9.5. Exercises

Adjust the value of \(\rho\) and observe the experimental results.
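Independently of MXNet, the update rule can be sanity-checked on a one-dimensional quadratic in plain Python (a sketch of my own, with \(\rho=0.9\) and \(\epsilon=10^{-5}\) as above). Note that there is indeed no learning rate anywhere:

```python
import math

def adadelta_scalar(grad, x0, rho=0.9, eps=1e-5, steps=200):
    """Minimize a 1-D objective given its gradient function; no learning rate."""
    x, s, delta = x0, 0.0, 0.0
    for _ in range(steps):
        g = grad(x)
        s = rho * s + (1 - rho) * g * g                 # EWMA of squared gradients
        gp = math.sqrt((delta + eps) / (s + eps)) * g   # rescaled step g'
        x -= gp
        delta = rho * delta + (1 - rho) * gp * gp       # EWMA of squared steps
    return x

# f(x) = x^2 has gradient 2x; starting from x0 = 1 the iterates
# drift toward the minimum at 0, slowly at first and then faster
# as Delta x accumulates.
x = adadelta_scalar(lambda v: 2 * v, x0=1.0)
print(abs(x) < 0.5)
```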
I am trying to visualize the Hamiltonian $H=\hat{\sigma}_x$, $$ \hat{\sigma}_{x} = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right), $$ acting on the state $| 1 \rangle$. By Schrödinger's equation, the time-evolution operator is $$e^{-iHt} = e^{-i\hat{\sigma}_x t} \, .$$ By Euler's formula, since $H^2 = I$, the general state is $$| \psi (t) \rangle = \left( \begin{array}{cc} \cos t & -i \sin t \\ -i \sin t & \cos t \end{array} \right) \left( \begin{array}{c} 0 \\ 1 \end{array} \right) \, .$$ Using $$\lvert\Psi\rangle=\cos(\theta / 2) \lvert 0\rangle + e^{i\phi}\sin(\theta / 2) \lvert 1\rangle,$$ I write $| \psi \rangle$ as $$| \psi \rangle = \cos( 2t/2 + \pi/2)\left( \begin{array}{c} 1 \\ 0 \end{array} \right) + e^{i\cdot 0} \sin( 2t/2 + \pi/2)\left( \begin{array}{c} 0 \\ 1 \end{array} \right) \, .$$ Thus, $\theta = 2t + \pi$ and $\phi = 0$. I am struggling with how to represent the angle $\theta$ on the Bloch sphere. Where would $\theta$ point in this case, where we have a $\pi$ term and a $2t$ term together? Please give me a graphical answer so I can see the evolution of the qubit $|1\rangle$, and how to extend this visual picture to cases such as $-\pi/2$, $\pi/4$, $2t$, $4t$, etc.
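One way to locate the state on the Bloch sphere is to compute the Pauli expectation values $(\langle\sigma_x\rangle, \langle\sigma_y\rangle, \langle\sigma_z\rangle)$ numerically. A small sketch in plain Python (helper names are mine): for $|\psi(t)\rangle = e^{-i\sigma_x t}|1\rangle$ the Bloch vector comes out as $(0, \sin 2t, -\cos 2t)$, i.e. the state starts at the south pole $|1\rangle$ and rotates in the $yz$-plane about the $x$-axis.

```python
import math

def evolve(t):
    """Apply U(t) = exp(-i*sigma_x*t) = [[cos t, -i sin t], [-i sin t, cos t]] to |1>."""
    c, s = math.cos(t), math.sin(t)
    return (-1j * s, c + 0j)          # amplitudes (a, b) of a|0> + b|1>

def bloch(a, b):
    """Bloch coordinates (<sigma_x>, <sigma_y>, <sigma_z>) of a pure state a|0> + b|1>."""
    x = 2 * (a.conjugate() * b).real
    y = 2 * (a.conjugate() * b).imag
    z = abs(a) ** 2 - abs(b) ** 2
    return x, y, z

# Approximate Bloch vectors:
#   t = 0    -> (0, 0, -1)   south pole, |1>
#   t = pi/4 -> (0, 1,  0)   equator
#   t = pi/2 -> (0, 0,  1)   north pole, |0>
for t in (0.0, math.pi / 4, math.pi / 2):
    print(t, bloch(*evolve(t)))
```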
Variational methods for approximating disparity and optical flow are quite similar. Basically, the same model that can be used for approximating optical flow with big displacements (late linearisation with warping) can also be used for approximating disparity. The only difference is that we don't expect vertical movement in the disparity case. One complete chapter of my thesis is devoted to explaining how the variational optical flow, disparity and level-set models can be solved. In this chapter I show, step by step, how these models can be solved using different kinds of solvers (Jacobi, Gauss-Seidel and Alternating Line Relaxation). Related to this chapter, I release the optical flow and disparity codes on this page... these codes are quite similar to the ones that I have used for generating the videos on this site. However, they are not 100% the same... there have been some slight modifications. For anyone interested in the actual code, I suggest first having a look at the thesis in order to better understand the code itself! The codes are released under the LGPL license. If you use the codes, please reference my papers... in my papers, on the other hand, I reference those papers on which my work is based. Thanx!!

The optical flow codes are as follows:
Late linearisation optical-flow method for large displacements.
Early linearisation optical-flow method for small displacements (basically a Horn & Schunck type of formulation).
Late linearisation method for disparity.

Pseudo Code

Below I have included pseudo code of the late-linearisation version of the optical flow, with warping, that can easily be modified for calculating stereo disparities.

//----------------------------------------------------------------------
//-COARSE-TO-FINE ALGORITHM FOR CALCULATING OPTICAL FLOW
//-Late linearisation (i.e. uses warping)
//-Robust error functions in both the data and the smoothness terms
//-Inputs are \(I_0\) and \(I_1\), number of scales \(scl\), and scaling factor \(sclFactor\)
//----------------------------------------------------------------------
\(\mathbf{INPUT:} \, I_0, \, I_1, \, scl, \, sclFactor\)
\(\mathbf{OUTPUT:} \, (u, \, v)\)

//Set \(u\) and \(v\) to zero
\(u=0\), \(v=0;\)
//Create image pyramid
\([Iscl_0\{\} \, Iscl_1\{\}] = pyramid(I_0, \, I_1, \, scl, \, sclFactor);\)
//Coarse-to-fine loop
WHILE( \(s=scl:-1:1\) )
    \(I_0 = Iscl_0\{s\}\), \(I_1 = Iscl_1\{s\};\)
    //Warping loop
    WHILE( \(fstLoop\) )
        //Warp image as per \(u\) and \(v\)
        \(I_{k,0}^w = warp( I_{k,0}, u, v );\)
        //Approximate derivatives for \(I_0^w\) and \(I_1\)
        \(\dfrac{\partial I_k}{\partial t} = I_{k,1} - I_{k,0}^w\), \(\dfrac{\partial I_{k,0}^w}{\partial x}\), \(\dfrac{\partial I_{k,0}^w}{\partial y}\)
        //Reset \(du\) and \(dv\)
        \(du=0\), \(dv=0\)
        //Fixed-point loop due to the robust error functions
        WHILE( \(sndLoop\) )
            //Calculate penalizer function values for the data term
            \(\Psi' \Big( (E_k)_D \Big)\), where \(\left( E_k \right)_D = \left( \dfrac{ \partial I_k }{\partial t} - \dfrac{ \partial I_{k,0}^{w} }{\partial x}du - \dfrac{ \partial I_{k,0}^{w} }{\partial y}dv \right)^2;\)
            //Calculate the diffusion weights
            \(\Big[ \Psi' \big( E_R^{l,m} \big)_W \, \Psi' \big( E_R^{l,m} \big)_N \, \Psi' \big( E_R^{l,m} \big)_E \, \Psi' \big( E_R^{l,m} \big)_S \Big] = weights(u+du, v+dv);\)
            //Solve for new \(du\) and \(dv\)
            \(\begin{matrix} [du \, dv] = SOLVER( & u, \, v, \, du, \, dv, \, nLoops, \, \dfrac{\partial I_k}{\partial t}, \, \dfrac{\partial I_{k,0}^w}{\partial x}, \, \dfrac{\partial I_{k,0}^w}{\partial y}, \, \Psi' \Big( (E_k)_D \Big), &\\ & \Psi' \big( E_R^{l,m} \big)_W, \, \Psi' \big( E_R^{l,m} \big)_N, &\\ & \Psi' \big( E_R^{l,m} \big)_S, \, \Psi' \big( E_R^{l,m} \big)_E & ); \end{matrix}\)
        ENDWHILE
        //Update \(u\) and \(v\)
        \(u=u+du\), \(v=v+dv;\)
    ENDWHILE
    //Interpolate (prolongate) solution
    IF( \(s-1>0\) )
        \([u \, v] = prolongate( u, \, v, \, sclFactor );\)
    ENDIF
ENDWHILE
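As a concrete illustration of the pyramid( ) step above, here is a minimal image pyramid in plain Python (a sketch of my own, assuming a fixed scaling factor of 0.5, i.e. 2x2 block averaging, and even image dimensions; the actual code supports an arbitrary sclFactor):

```python
def downsample(img):
    """Halve an image (list of rows) by averaging 2x2 blocks; assumes even dimensions."""
    h, w = len(img), len(img[0])
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1] +
              img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(w // 2)]
            for i in range(h // 2)]

def pyramid(I0, I1, scl):
    """Build scl levels for both frames; levels[0] is the finest, levels[-1] the coarsest."""
    levels = [(I0, I1)]
    for _ in range(scl - 1):
        a, b = levels[-1]
        levels.append((downsample(a), downsample(b)))
    return levels
```

The coarse-to-fine loop would then iterate over levels from coarsest to finest, prolongating (u, v) between levels.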
I am trying to prove a theorem using the proof environment, but somehow the result is different when I use the article document class versus the aip-cp document class. Is there a way to use the aip-cp document class such that the proof environment does not add horizontal space between the proof heading and the first text of the proof? Here is the code:

\documentclass[letter]{aip-cp}
\usepackage[numbers]{natbib}
\usepackage{rotating}
\usepackage{graphicx}
\let\iint\relax
\let\iiint\relax
\let\iiiint\relax
\let\idotsint\relax
\let\openbox\relax
\let\proof\relax
\let\endproof\relax
\usepackage{amsmath}
\usepackage{amsthm}
\theoremstyle{lemma}
\newtheorem{lemma}{Lemma}
\setlength{\parindent}{15pt}

\begin{document}
\begin{lemma}
Sample text
\end{lemma}
\begin{proof}
We begin to prove the equation by using the definition of A.
\end{proof}
\end{document}

Note: I can somehow reduce the space by using \hspace{-6mm}, but it does not look natural.
There are quite a few models that use the 'all-different' constraint: a set of integer variables \(x_i\), \(i=1,\dots,n\), is feasible only if they are all different, i.e. \(x_i\ne x_j\) for \(i\ne j\). We can probably say that most of these models are of the "educational" type and are not practical production models. Constraint programming solvers typically have a built-in "all-different" global constraint. This makes it easy to write down such a constraint, and the solver will have knowledge about the constraint which it can exploit. That means we can expect better performance than from, for instance, a bunch of pairwise not-equal constraints. So the first observation is: if you have a model that is largely built around an all-different constraint, consider implementing the model with a constraint programming solver. Approach 1 One way to implement this construct for use in a MIP model is to use pairwise comparison: \(x_i\ne x_j\) for \(i\ne j\). In a MIP we need binary variables for this: (I) \[\boxed{\begin{align}&x_i \le x_j - 1 + M^{(1)}_{i,j}\delta_{i,j}\\ &x_i \ge x_j + 1 - M^{(2)}_{i,j}(1-\delta_{i,j})\\ &\delta_{i,j} \in \{0,1\} \end{align}}\] for all \(i\lt j\). Note that we only need to compare \(x_i\) and \(x_j\) if \(i<j\). This means we need \(\frac{n(n-1)}{2}\) binary variables. It is important to choose good values for \(M\), so let's work on that for a minute (see here for an example where things go wrong if we don't pay attention to this). Note that instead of using a single \(M\) we use different values \(M^{(1)}_{i,j}\) and \(M^{(2)}_{i,j}\). The value of \(M^{(1)}_{i,j}\) should be chosen as small as possible subject to \(x_j-1+M^{(1)}_{i,j}\ge x^{up}_i\). This means \(M^{(1)}_{i,j}\ge x^{up}_i +1 - x_j\). This yields the following optimal value: \(M^{(1)}_{i,j} = x^{up}_i +1 - x^{lo}_j\). Similarly, we want to choose the smallest value \(M^{(2)}_{i,j}\) such that \(x_j+1-M^{(2)}_{i,j} \le x^{lo}_i\).
This gives us: \(M^{(2)}_{i,j} = x^{up}_j +1 - x^{lo}_i\). Approach 2 Here we consider the special case where each \(x_i \in \{1,\dots,n\}\), \(i=1,\dots,n\). Now we can write: (II) \[\boxed{\begin{align}&\sum_j \delta_{i,j} = 1 \> \forall i\\ &\sum_i \delta_{i,j} = 1 \> \forall j \\ &x_i = \sum_j j \delta_{i,j} \\ &\delta_{i,j} \in \{0,1\} \end{align}}\] The matrix \(\delta\) can be viewed as a permutation matrix: a row- and column-permuted identity matrix, i.e. it has a single entry equal to one in each row and in each column. Another way of looking at the first two equations is as an assignment problem. Note that we have \(n^2\) binary variables here, but no big-M's. We can make one extra simplification: we can drop the first assignment constraint. The resulting equations are: (III) \[\boxed{\begin{align}&\sum_j j \delta_{i,j} = x_i\> \forall i\\ &\sum_i \delta_{i,j} = 1 \> \forall j \\ &\delta_{i,j} \in \{0,1\} \end{align}}\] This simplification is only possible in this special case. We see below that some minor extensions make this simplification fail. Extension 1 The second approach was tailored to a very special case. Instead of \(x_i \in \{1,\dots,n\}\), \(i=1,\dots,n\), now consider a slightly more general problem, where \(x_i \in \{a_1,\dots,a_n\}\) with \(a_{i+1} \gt a_i\) (that is: no duplicates in the set). This problem is easily handled by a simple extension of model II: (IIa) \[\boxed{\begin{align}&\sum_j \delta_{i,j} = 1 \> \forall i\\ &\sum_i \delta_{i,j} = 1 \> \forall j \\ &x_i = \sum_j a_j \delta_{i,j} \\ &\delta_{i,j} \in \{0,1\} \end{align}}\] Can we use the same trick as in model III? The answer is no. For example, take \(x_i \in \{0,1,3,5\}\). Then a model using: (IIIa) \[\boxed{\begin{align}&\sum_j a_j \delta_{i,j} = x_i\> \forall i\\ &\sum_i \delta_{i,j} = 1 \> \forall j \\ &\delta_{i,j} \in \{0,1\} \end{align}}\] is incorrect: with the row sums of \(\delta\) no longer fixed to one, a row may select several entries, so e.g. \(x_i = 1 + 3 = 4 \notin \{0,1,3,5\}\) becomes feasible.
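Returning to approach 1 for a moment: the big-M values derived there can be sanity-checked by brute force in a few lines of Python, without a MIP solver (a sketch; function names are mine). For every assignment of the \(x_i\) we test whether some choice of the \(\delta_{i,j}\) satisfies the two constraints of model (I), and compare against the all-different condition:

```python
from itertools import product

def pair_ok(xi, xj, lo_i, up_i, lo_j, up_j):
    """Is there a delta in {0,1} satisfying both big-M constraints of model (I)?"""
    M1 = up_i + 1 - lo_j          # M(1)_{ij} = x_i^up + 1 - x_j^lo
    M2 = up_j + 1 - lo_i          # M(2)_{ij} = x_j^up + 1 - x_i^lo
    return any(xi <= xj - 1 + M1 * d and xi >= xj + 1 - M2 * (1 - d)
               for d in (0, 1))

def model_I_feasible(x, lo, up):
    """Model (I) is feasible iff every pair i < j admits a valid delta."""
    n = len(x)
    return all(pair_ok(x[i], x[j], lo[i], up[i], lo[j], up[j])
               for i in range(n) for j in range(i + 1, n))

# Enumerate all assignments of three variables in {1, 2, 3}:
# model (I) should accept exactly the all-different ones.
lo, up = [1, 1, 1], [3, 3, 3]
for x in product(range(1, 4), repeat=3):
    assert model_I_feasible(x, lo, up) == (len(set(x)) == 3)
print("model (I) matches the all-different condition")
```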
Extension 2 We can allow explicit duplicates by saying: \(x_i \in \{a_1,\dots,a_n\}\) with \(a_{i+1} \ge a_i\). E.g. the data array \(a=\{1,2,2,3,3,3\}\) would allow one 1, two 2s, and three 3s. This can be handled directly by model IIa. Extension 3 Allow a subset, i.e. consider \(x_j \in \{a_1,\dots,a_n\}\) where \(j=1,\dots,m\) and \(m<n\). For example, choose two different values \(x_j\) from the set \(\{1,2,3\}\). To model this, let's use \(i\) as the index of \(a_i\), so that \(j\) ranges over a subset of \(i=1,\dots,n\). We need to revise model II as follows: (IIb) \[\boxed{\begin{align}&\sum_j \delta_{i,j} \le 1 \> \forall i\\ &\sum_i \delta_{i,j} = 1 \> \forall j \\ &x_j = \sum_i a_i \delta_{i,j} \\ &\delta_{i,j} \in \{0,1\} \end{align}}\] If the range of the integer variables \(x_j\) is somewhat small (i.e. we don't have \(x^{up}_j \gg x^{lo}_j\)), we can solve the same problem as handled by approach 1. References H.P. Williams, Hong Yan, "Representations of the all-different Predicate of Constraint Satisfaction in Integer Programming," INFORMS Journal on Computing, Vol. 13 (2001) 96-103. W.J. van Hoeve, "The alldifferent Constraint: A Survey," 6th Annual Workshop of the ERCIM Working Group on Constraints, 2001, [link]. http://yetanothermathprogrammingconsultant.blogspot.com/2016/10/mip-modeling-from-sudoku-to-kenken.html
What is the worst case running time to search for an element in a balanced binary search tree with $n 2^n$ elements? The answer is $\Theta(n)$. My answer: searching for an element in a balanced BST with $m$ elements takes $\log(m)$ time, so $$ \begin{align*} \log(n 2^n ) &= \log(n) + \log(2^n) \\ &= \log(n) + n\log 2 & \text{(base is 2)} \\ &= \log(n) + n. \end{align*} $$ Why have they used $\Theta$ in the answer? And why only $n$?
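For what it's worth, a quick numerical check (my own) of how the $n$ term dominates: the ratio $\frac{\log_2(n 2^n)}{n} = 1 + \frac{\log_2 n}{n}$ tends to 1 as $n$ grows.

```python
import math

# log2(n * 2^n) = log2(n) + n; divide by n to see the ratio approach 1
for n in (10, 100, 1000):
    total = math.log2(n) + n
    print(n, total, total / n)
```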
A proof that uses closure properties: DCF languages are not closed under union, so take $L_1, L_2 \in DCF$ such that $L = L_1 \cup L_2 \notin DCF$. Add three new symbols $\{\alpha, \beta, \#\}$ to the original alphabet $\Sigma$ and build the languages: $L'_1 = \{ \alpha \# w \mid w \in L_1\}$, $L'_2 = \{ \beta \# w \mid w \in L_2\}$. We have $L'_1, L'_2 \in DCF$, but also $L' = L'_1 \cup L'_2 \in DCF$ (it is enough to add a starting state that leads to the recognition of $\{\# w \mid w \in L_1\}$ after reading a leading symbol $\alpha$, or the recognition of $\{\# w \mid w \in L_2\}$ after reading a leading symbol $\beta$). Now suppose that $Outf(L') \in DCF$; we know that DCF languages are closed under intersection with regular languages, so $$Outf(L') \cap \{ \#w \mid w\in \Sigma^*\} = \{ \#w \mid w\in L_1 \lor w \in L_2 \} \in DCF$$ too. But given a DPDA for $Outf(L') \cap \{ \#w \mid w\in \Sigma^*\}$ it is immediate to build a DPDA for $\{ w \mid w \in L_1 \lor w \in L_2\} = L$ (just skip the recognition of the leading symbol $\#$), so $L \in DCF$, contradicting the hypothesis.
From Artin: When $d$ is congruent to $2$ or $3$ modulo $4$, an integer prime $p$ remains prime in the ring of integers of $\Bbb{Q}[\sqrt{d}]$ if the polynomial $x^2-d$ is irreducible modulo $p$. a) Prove that this is also true when $d \equiv 1$ modulo $4$ and $p\neq 2$. b) What happens to $p=2$ when $d \equiv 1$ modulo $4$? I have been stuck on this problem for a while; can anyone give some tips to approach this question? I was thinking maybe I can use the fact that when $d \equiv 1 \pmod 4$ and $h = \frac{1}{4}(1-d)$, a prime $p$ generates a prime ideal $(p)$ of the ring of integers if and only if the polynomial $x^2-x+h$ is irreducible modulo $p$. But I really don't know how to apply this, or if I even should. Any help is appreciated, thanks.
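One way to build intuition for part (a) numerically (a brute-force sketch of my own, not a proof): for $d \equiv 1 \pmod 4$ and odd $p$, the polynomials $x^2-d$ and $x^2-x+h$ with $h=\frac{1}{4}(1-d)$ have discriminants $4d$ and $d$ respectively, so modulo an odd prime one is irreducible exactly when the other is.

```python
def irreducible_mod_p(a, b, c, p):
    """Is a*x^2 + b*x + c irreducible over F_p, i.e. does it have no root mod p?"""
    return all((a * x * x + b * x + c) % p for x in range(p))

# For several d = 1 (mod 4) and odd primes p, x^2 - d and x^2 - x + h
# should agree on irreducibility.
for d in (5, 13, 17, 21, -3, -7):
    h = (1 - d) // 4
    for p in (3, 5, 7, 11, 13, 17, 19, 23):
        assert irreducible_mod_p(1, 0, -d, p) == irreducible_mod_p(1, -1, h, p)
print("x^2 - d and x^2 - x + h agree modulo every odd prime tested")
```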