Cafeinst has pointed out that one week ago, Ashok Das (Rochester) and Pushpa Kalauni (Oklahoma) published a 3-page preprint. I have personally tried a hundred strategies to prove the Riemann Hypothesis, so I could immediately recognize their operators \(O,O^\dagger\) from equations (10) and (11): I have played with the very same operators for quite some time, too. You may imagine that there's an operator whose spectrum is the set of values of \(\lambda\), the imaginary part of \(s\) such that \(\zeta(s)=0\) nontrivially. The corresponding eigenstates may be considered "delta functions" located at \(\lambda\). However, if you perform some kind of a generalized Fourier transform, the corresponding eigenstates could go like \(x^{i\lambda}\) and be defined on \(x\in \mathbb{R}^+\). Exactly when you add the factor of \(x^{-1/2}\), these wave functions become somewhat naturally normalizable and orthogonal in the Dirac sense. There's no mystery about the normalizability – they are just standard plane waves as functions of the variable \(q=\ln x\), and the extra \(1/x\) in the norm comes from the Jacobian. Well, a problem is that the plane waves are normalizable to the Dirac delta function if you allow any continuous \(\lambda\in\mathbb{R}\), while we only want \(\lambda\) to match the spectrum of the zeta function zeroes. So when \(\lambda\) is restricted to the zeta function zeroes, the states span a smaller Hilbert space than the Hilbert space of all functions of \(x\in\mathbb{R}^+\). We should have an independent description of the Hilbert space of these "restricted" functions of positive \(x\). What are they? They don't really discuss this problem at all. Except for these lethal problems, we could say "so far so good." I could never have completed a proof of the Riemann Hypothesis with these intriguing toys because I couldn't eliminate the possibility that there are roots of the zeta function that produce non-normalizable formal solutions or non-normalizable "eigenstates" by this procedure. In recent years, I realized that this is a problem I have with all strategies based on the Hilbert-Pólya program: the program doesn't seem to exclude the existence of "quasinormal modes" that may still correspond to a pole in some Green's functions but that don't enhance the Hilbert space because the corresponding solutions simply aren't normalizable. Fine. So Das and Kalauni claim to have something extra that uses SUSY. Well, I have obviously tried to use SUSY as well, but in ways that weren't equivalent to their claim. How do they use SUSY? They write \[ H=\{Q,Q^\dagger\}=\mathrm{diag}(A^\dagger A,\, A A^\dagger) \] which may be obtained from the \(2\times 2\) matrix \(Q=\begin{pmatrix}0&0\\ A&0\end{pmatrix}\) and its Hermitian conjugate. Their incorporation of the zeta function is that \(A\), the matrix element of the supercharge \(Q\), produces \(\zeta(1/2+i\lambda)\) when acting on the desired \(x^{i\lambda}\)-like wave function. The Hamiltonian itself, when acting on these power-law wave functions, is supposed to produce a factor of \(|\zeta(s)|^2\). If the zeta function of \(s\) in the critical strip vanishes, then their Hamiltonian has to annihilate the power-law test function, and this test function must therefore be "BPS". And they claim that for it to be "BPS", the test function has to be normalizable, and therefore the real part of \(s\) must sit on the critical line.
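Spelling out the block matrices (my own check, not copied from the preprint), the anticommutator indeed reproduces the diagonal Hamiltonian:
\[
Q=\begin{pmatrix}0&0\\ A&0\end{pmatrix},\qquad
Q^\dagger=\begin{pmatrix}0&A^\dagger\\ 0&0\end{pmatrix},\qquad
\{Q,Q^\dagger\}=QQ^\dagger+Q^\dagger Q
=\begin{pmatrix}0&0\\ 0&A A^\dagger\end{pmatrix}
+\begin{pmatrix}A^\dagger A&0\\ 0&0\end{pmatrix}
=\begin{pmatrix}A^\dagger A&0\\ 0&A A^\dagger\end{pmatrix},
\]
and \(Q^2=(Q^\dagger)^2=0\), so a normalizable state annihilated by \(H\) is annihilated by both supercharges, which is the usual "BPS" condition.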
So far, again, I don't understand how they avoid my general problem with any Hilbert-Pólya proof: Cannot there be non-normalizable, formally "BPS" states as well? So sadly, at least so far, I cannot confirm that they have solved the big open mathematical problem. The authors promise to release a more complete "calculation". I would be willing to bet that it won't become a proof – they won't fix the problems. They seem to be exactly as naive as I was about an hour after my enthusiastic playing with the \(x^{i\lambda}\) test functions. ;-)
This document will give you a brief tour of the capabilities of the IPython notebook. You can view its contents by scrolling around, or execute each cell by typing Shift-Enter. The rest of the notebooks in this directory illustrate various other aspects and capabilities of the IPython notebook; some of them may require additional libraries to be executed. NOTE: This notebook must be run from its own directory, so you must cd to this directory and then start the notebook; do not use the --notebook-dir option to run it from another location.

The first thing you need to know is that you are still controlling the same old IPython you're used to, so things like shell aliases and magic commands still work:

pwd
u'/home/fperez/ipython/tutorial/notebooks'

ls
animation.m4v                          P10 Basic Interface.ipynb
argv.py                                P15 Parallel Magics.ipynb
cat.py*                                P20 Multiplexing.ipynb
Cell Magics.ipynb                      P21 LoadBalancing.ipynb
Display control.ipynb                  P30 Working with MPI.ipynb
figs/                                  P99 Summary.ipynb
foo.py                                 PX01 Example - Remote Iteration.ipynb
lnum.py*                               python-logo.svg
myscript.py                            PZ Performance.ipynb
nb@                                    soln/
nbtour.ipynb                           text_analysis.py*
P01 Overview and Architecture.ipynb

message = 'The IPython notebook is great!'
# note: the echo command does not run on Windows, it's a unix command.
!echo $message
The IPython notebook is great!

IPython adds an 'inline' matplotlib backend, which embeds any matplotlib figures into the notebook.

%pylab inline
Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline]. For more information, type 'help(pylab)'.

x = linspace(0, 3*pi, 500)
plot(x, sin(x**2))
title('A simple chirp');

>>> the_world_is_flat = 1
>>> if the_world_is_flat:
...     print "Be careful not to fall off!"
Be careful not to fall off!

Errors are shown in informative ways:

%run non_existent_file
ERROR: File `u'non_existent_file.py'` not found.

x = 1
y = 4
z = y/(1-x)
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-8-dc39888fd1d2> in <module>()
      1 x = 1
      2 y = 4
----> 3 z = y/(1-x)
ZeroDivisionError: integer division or modulo by zero

When IPython needs to display additional information (such as providing details on an object via x?) it will automatically invoke a pager at the bottom of the screen:

magic

If you execute the next cell, you will see the output arriving as it is generated, not all at the end.

import time, sys
for i in range(8):
    print i,
    time.sleep(0.5)
0 1 2 3 4 5 6 7

We call the low-level system libc.time routine with the wrong argument via ctypes to segfault the Python interpreter:

import sys
from ctypes import CDLL
# This will crash a Linux or Mac system; equivalent calls can be made on Windows
dll = 'dylib' if sys.platform == 'darwin' else 'so.6'
libc = CDLL("libc.%s" % dll)
libc.time(-1)  # BOOM!!
You can italicize, boldface and embed code meant for illustration instead of execution in Python:

def f(x):
    """a docstring"""
    return x**2

or other languages:

for (i=0; i<n; i++) {
    printf("hello %d\n", i);
    x += 4;
}

Courtesy of MathJax, you can include mathematical expressions both inline: $e^{i\pi} + 1 = 0$ and displayed: $$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$$

from IPython.display import Image
Image(filename='figs/logo.png')

An image can also be displayed from raw data or a url:

Image(url='http://python.org/images/python-logo.gif')

SVG images are also supported out of the box (since modern browsers do a good job of rendering them):

from IPython.display import SVG
SVG(filename='figs/python-logo.svg')

And more exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load (and writing a similar wrapper for other hosted content is trivial):

from IPython.display import YouTubeVideo
# a talk about IPython at Sage Days at U. Washington, Seattle.
# Video credit: William Stein.
YouTubeVideo('1j_HxD4iLn8')

Using the nascent video capabilities of modern browsers, you may also be able to display local videos. At the moment this doesn't work very well in all browsers, so it may or may not work for you; we will continue testing this and looking for ways to make it more robust. The following cell loads a local file called animation.m4v, encodes the raw video as base64 for http transport, and uses the HTML5 video tag to load it. On Chrome 15 it works correctly, displaying a control bar at the bottom with a play/pause button and a location slider.

from IPython.display import HTML
video = open("figs/animation.m4v", "rb").read()
video_encoded = video.encode("base64")
video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)
HTML(data=video_tag)

The above examples embed images and video from the notebook filesystem in the output areas of code cells. It is also possible to request these files directly in markdown cells if they reside in the notebook directory, via relative urls prefixed with files/:

files/[subdirectory/]<filename>

For example, in the example notebook folder, we have the Python logo, addressed as:

<img src="figs/python-logo.svg" />

and a video with the HTML5 video tag:

<video controls src="figs/animation.m4v" />

These do not embed the data into the notebook file, and require that the files exist when you are viewing the notebook. Note that this means that the IPython notebook server also acts as a generic file server for files inside the same tree as your notebooks. Access is not granted outside the notebook folder, so you have strict control over what files are visible; but for this reason it is highly recommended that you do not run the notebook server with a notebook directory at a high level in your filesystem (e.g. your home directory). When you run the notebook in a password-protected manner, local file access is restricted to authenticated users unless read-only views are active.

You can even embed an entire page from another site in an iframe; for example, this is today's Wikipedia page for mobile users:

HTML('<iframe src=http://en.mobile.wikipedia.org/?useformat=mobile width=700 height=350>')

Let's make sure we have pylab again, in case we have restarted the kernel due to the crash demo above:

%pylab inline
Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline].
For more information, type 'help(pylab)'.

%load http://matplotlib.sourceforge.net/mpl_examples/pylab_examples/integral_demo.py

And we also support the display of mathematical expressions typeset in LaTeX, which is rendered in the browser thanks to the MathJax library. Note that this is different from the above examples. Above we were typing mathematical expressions in Markdown cells (along with normal text) and letting the browser render them; now we are displaying the output of a Python computation as a LaTeX expression wrapped by the Math() object so the browser renders it. The Math object will add the needed LaTeX delimiters ($$) if they are not provided:

from IPython.display import Math
Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx')

With the Latex class, you have to include the delimiters yourself. This allows you to use other LaTeX modes such as eqnarray:

from IPython.display import Latex
Latex(r"""\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}""")

Using these tools, we can integrate with the SymPy package to perform symbolic manipulations, and, combined with numpy and matplotlib, also display numerical visualizations of symbolically constructed expressions. We first load sympy printing and plotting support, as well as all of sympy:

%load_ext sympyprinting
%pylab inline
from __future__ import division
import sympy as sym
from sympy import *
x, y, z = symbols("x y z")
k, m, n = symbols("k m n", integer=True)
f, g, h = map(Function, 'fgh')

Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline]. For more information, type 'help(pylab)'.

Rational(3,2)*pi + exp(I*x) / (x**2 + y)
exp(I*x).subs(x,pi).evalf()
exp(pi * sqrt(163)).evalf(50)
eq = ((x+y)**2 * (x+1))
eq
expand(eq)
a = 1/x + (x*sin(x) - 1)/x
a
simplify(a)
limit((sin(x)-x)/x**3, x, 0)
(1/cos(x)).series(x, 0, 6)
diff(cos(x**2)**2 / (1+x), x)
integrate(x**2 * cos(x), (x, 0, pi/2))
eqn = Eq(Derivative(f(x),x,x) + 9*f(x), 1)
display(eqn)
dsolve(eqn, f(x))

We will define a function to compute the Taylor series expansions of a symbolically defined expression at various orders and visualize all the approximations together with the original function:

def plot_taylor_approximations(func, x0=None, orders=(2, 4), xrange=(0,1), yrange=None, npts=200):
    """Plot the Taylor series approximations to a function at various orders.

    Parameters
    ----------
    func : a sympy function
    x0 : float
        Origin of the Taylor series expansion. If not given, x0=xrange[0].
    orders : list
        List of integers with the orders of Taylor series to show. Default is (2, 4).
    xrange : 2-tuple or array
        Either an (xmin, xmax) tuple indicating the x range for the plot (default is (0, 1)),
        or the actual array of values to use.
    yrange : 2-tuple
        (ymin, ymax) tuple indicating the y range for the plot. If not given, the full
        range of values will be automatically used.
    npts : int
        Number of points to sample the x range with. Default is 200.
    """
    if not callable(func):
        raise ValueError('func must be callable')
    if isinstance(xrange, (list, tuple)):
        x = np.linspace(float(xrange[0]), float(xrange[1]), npts)
    else:
        x = xrange
    if x0 is None:
        x0 = x[0]
    xs = sym.Symbol('x')
    # Make a numpy-callable form of the original function for plotting
    fx = func(xs)
    f = sym.lambdify(xs, fx, modules=['numpy'])
    # We could use latex(fx) instead of str(), but matplotlib gets confused
    # with some of the (valid) latex constructs sympy emits. So we play it safe.
    plot(x, f(x), label=str(fx), lw=2)
    # Build the Taylor approximations, plotting as we go
    apps = {}
    for order in orders:
        app = fx.series(xs, x0, n=order).removeO()
        apps[order] = app
        # Must be careful here: if the approximation is a constant, we can't
        # blindly use lambdify as it won't do the right thing. In that case,
        # evaluate the number as a float and fill the y array with that value.
        if isinstance(app, sym.numbers.Number):
            y = np.zeros_like(x)
            y.fill(app.evalf())
        else:
            fa = sym.lambdify(xs, app, modules=['numpy'])
            y = fa(x)
        tex = sym.latex(app).replace('$', '')
        plot(x, y, label=r'$n=%s:\, %s$' % (order, tex))
    # Plot refinements
    if yrange is not None:
        plt.ylim(*yrange)
    grid()
    legend(loc='best').get_frame().set_alpha(0.8)

With this function defined, we can now use it for any sympy function or expression:

plot_taylor_approximations(sin, 0, [2, 4, 6], (0, 2*pi), (-2,2))
plot_taylor_approximations(cos, 0, [2, 4, 6], (0, 2*pi), (-2,2))

Remember that a Taylor series is useless beyond its convergence radius, as can be nicely illustrated by a simple function that has singularities on the real axis. Use the plot_taylor_approximations function to show how badly various orders (say $n = 2, 4, 6$) converge for $f(x) = \frac{1}{\cos(x)}$.
I've been working with some dust solutions in General Relativity, practicing calculating the Riemann curvature tensor, and I came across an odd metric: the Tolman-Bondi-de Sitter metric. A quick internet search (to supplement the book I'm reading) can tell you that it describes spherical dust while accounting for a cosmological constant. It's a pretty simple solution, with a line element of the form $$ds^2=dt^2-e^{-2\Psi(t,r) } dr^2-R^2 (t,r)d\theta^2-R^2 (t,r) \sin^2\theta d\phi^2$$ There's one term in there that has me a bit befuddled, and that I can't find an explanation for in a book or on the Internet: $\Psi(t,r)$. At first, I thought it had to be a simple wavefunction, but after looking at it more, I'm not quite sure. What is it, and what is its significance in the metric?
2018-09-11 04:29 Proprieties of FBK UFSDs after neutron and proton irradiation up to $6\times10^{15}$ $\rm{n_{eq}}$/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60 $\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...] arXiv:1804.05449. - 13 p. Preprint - Full text

2018-08-25 06:58 Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15}\ \rm{n}\ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453

2018-08-23 11:31 Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron-doped silicon diodes irradiated with alpha particles. It has been shown that self-interstitial related defects which are immobile even at room temperature can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576

2018-08-23 11:31 Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in the form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1\ \rm{k}\Omega\, cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 548 (2005) 355-363

2018-08-23 11:31 Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders such as the LHC and its upgrades in energy and luminosity, the Super-LHC and Very-LHC respectively, as well as the requirements for detectors under the possible radiation-environment scenarios, are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities, where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material: it leads to an increase of the leakage current of the detector, degrades the signal-to-noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys. 57 (2005), no. 3, 342-348 External link: RORPE

2018-08-22 06:27 Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16}\ \rm{n/cm^2}$. In this work two numerical simulation models will be presented, for p-type and n-type silicon detectors respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976

2018-08-22 06:27 Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-ATLAS detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365

2018-08-22 06:27 Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors' radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
Letting $X$ be a ring and $K$ be an $X$-module, I need to show that if $K \cong A \times B$ for some $X$-modules $A,B$, then $\exists$ submodules $M'$ and $N'$ of $K$ such that: $K=M' \oplus N'$ $M' \cong A$ $N' \cong B.$ I understand the concepts of internal and external direct sum of modules, and I showed that if $K = M \oplus N$ for $M,N$ submodules of $K$, then $K \cong M \times N.$ (I showed the isomorphism by defining a well-defined map, and then showing that the map is a surjective homomorphism, followed by the kernel being $\{0\}$ and applying the First Isomorphism Theorem.) But I have tried doing this problem for hours now, and have not been able to crack it. How should I begin?
This question already has an answer here: I have seen the "objects pull down on space-time" explanations, but they assume a "pull down" force themselves. Could anyone explain the space-time explanation without assuming gravity in the first place?

Why don't you want to assume gravity? Gravity is an experimental fact, a starting point for doing physics. General Relativity is a geometrical theory of gravity, built on the basis of Special Relativity and always keeping in mind that it should recover the non-relativistic Newtonian theory of the gravitational field. The "pull down" is a deviation from the flat Minkowski spacetime, governed by the Einstein field equations. And it is, in my opinion, a not-so-good analogy, because it is rather difficult to imagine how to pull down time.

but they assume a "pull down" force themselves.

The images of flat sheets "pulled down" where the planets are do not reflect the fact that the curvature of spacetime is an intrinsic curvature, measured by geodesic deviation. What has been done, in order to help visualize the spatial curvature, is to take a two-dimensional spatial slice and then embed it into a fictional, flat 3D space where the intrinsic curvature of the slice is represented as an extrinsically curved 2D surface. A good example of how this is done for a spherically symmetric static star can be found in the book "Gravitation" on page 613:

Therefore, depict 3-space only as it is at one time, t = constant. Moreover, at any one time the space itself has spherical symmetry. Consequently, one slice through the center, $r=0$, that divides the space symmetrically into two halves (for example, the equatorial slice, $\theta = \pi/2$) has the same 2-geometry as any other such slice (any selected angle of tilt, at any azimuth) through the center. Therefore limit attention to the 2-geometry of the equatorial slice. The geometry on this slice is described by the line element $$ds^2 = [1-2m(r)/r]^{-1}dr^2 + r^2d\phi^2.$$ Now one may embed this two-dimensional curved-space geometry in the flat geometry of a Euclidean three-dimensional manifold.

Read more using Google Books.

Massive objects distort spacetime, as described by the Einstein field equations. In turn, this causes particles to accelerate: the GR equivalent of $\mathbf{F}=m\mathbf{a}$ is the set of geodesic equations $$\frac{\text{d}^2x^\alpha}{\text{d}\lambda^2} + \Gamma^{\alpha}_{\mu\nu}\frac{\text{d}x^\mu}{\text{d}\lambda}\frac{\text{d}x^\nu}{\text{d}\lambda} = 0,\qquad \alpha=0,\ldots,3,$$ with $$\Gamma^{\alpha}_{\mu\nu} = \frac{1}{2}g^{\alpha\beta} \left(\frac{\partial g_{\beta\mu}}{\partial x^\nu} + \frac{\partial g_{\beta\nu}}{\partial x^\mu} - \frac{\partial g_{\mu\nu}}{\partial x^\beta}\right)$$ the so-called Christoffel symbols, and $g_{\mu\nu}(x^\alpha)$ the spacetime metric. In the absence of matter, spacetime is flat and we can choose coordinates in which $g_{\mu\nu}$ is constant, so that the Christoffel symbols vanish and the geodesic equations reduce to $$\frac{\text{d}^2x^\alpha}{\text{d}\lambda^2} = 0,$$ which describes motion at constant velocity, as expected.
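As an illustration of how these Christoffel symbols can be computed mechanically (my own sketch, not part of the answers above; the metric is the 2D equatorial-slice line element quoted from "Gravitation", with $m(r)$ frozen to a constant $m$):

import sympy as sp

r, phi, m = sp.symbols('r phi m', positive=True)
coords = [r, phi]
# Metric of the equatorial slice, ds^2 = [1 - 2m/r]^{-1} dr^2 + r^2 dphi^2
g = sp.diag(1/(1 - 2*m/r), r**2)
ginv = g.inv()

def christoffel(a, mu, nu):
    """Gamma^a_{mu nu} = (1/2) g^{ab} (d_mu g_{b nu} + d_nu g_{b mu} - d_b g_{mu nu})."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[a, b] * (
            sp.diff(g[b, nu], coords[mu])
            + sp.diff(g[b, mu], coords[nu])
            - sp.diff(g[mu, nu], coords[b])
        ) for b in range(2)))

# Print the nonzero symbols; e.g. Gamma^phi_{r phi} = 1/r, Gamma^r_{phi phi} = -(r - 2m).
for a in range(2):
    for mu in range(2):
        for nu in range(mu, 2):
            gamma = christoffel(a, mu, nu)
            if gamma != 0:
                print('Gamma^%d_%d%d =' % (a, mu, nu), gamma)

The same loop works for a full 4D metric by extending coords and g; only the range(2) bounds change.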
Ex.12.1 Q3 Areas Related to Circles Solution - NCERT Maths Class 10 Question The given figure depicts an archery target marked with its five scoring areas from the centre outwards as Gold, Red, Blue, Black and White. The diameter of the region representing the Gold score is \(21 \,\rm{cm}\) and each of the other bands is \(10.5 \,\rm{cm}\) wide. Find the area of each of the five scoring regions. [Use \(\pi= \,\frac{22}{7}\)] Text Solution What is known? Diameter of the gold region and width of the other regions. What is unknown? Area of each scoring region. Reasoning: The area of the region between \(2\) concentric circles is given by \(\begin{align}\pi {\text{r}}_2^2 - \pi {\text{r}}_1^2\end{align}\). Steps: Radius \(({r_1})\) of the gold region (i.e., the \(1^\rm{st}\) circle) \(= \frac{21}{2} = 10.5\,\rm{cm}\). Given that each circle is \(10.5\,\rm{cm}\) wider than the previous circle. Therefore, Radius \((r_2)\) of \(2^\rm{nd}\) circle \[\begin{align}&= 10.5 + 10.5\\&=21 \,\rm{cm}\end{align}\] Radius \(({r_3})\) of \(3^\rm{rd}\) circle \[\begin{align}&= 21 + 10.5\\ &= 31.5\,{\text{cm}}\end{align}\] Radius \((r_4)\) of \(4^\rm{th}\) circle \[\begin{align}&= 31.5 + 10.5\\ &= 42\,{\text{cm}}\end{align}\] Radius \((r_5)\) of \(5^\rm{th}\) circle \[\begin{align}&= 42 + 10.5\\ &= 52.5\,\,{\text{cm}} \end{align}\] \(\text{Area of gold region}\) \(=\) \(\text{Area of}\) \(1^\rm{st}\) \(\rm{}circle\) \(= \pi {r}_1^2 = \pi {(10.5)^2} = 346.5\;\rm{cm^2}\) \(\text{Area of red region}\) \(=\) \(\text{Area of}\) \(2^\text{nd }\)\(\rm{}circle\)\(-\)\(\text{Area of}\)\(1^\rm{ st }\)\(\rm{}circle \) \[\begin{align}& = \pi {\text{r}}_2^2 - \pi {\text{r}}_1^2\\& = \pi {{(21)}^2} - \pi{{(10.5)}^2}\\& = 441\pi - 110.25\pi = 330.75\pi \\& = 1039.5\,{\text{c}}{{\text{m}}^2}\end{align}\] \(\text{Area of blue region}\) \(=\)\(\text{Area of}\)\(3^\text{rd}\)\(\rm{}circle\)\( -\)\(\text{Area of}\)\(2^\rm{nd}\)\(\rm{}circle\) \[\begin{align}&= \pi {\text{r}}_3^2 - \pi {\text{r}}_2^2\\& = \pi {{(31.5)}^2} - \pi {{(21)}^2}\\&= 992.25\pi - 441\pi = 551.25\pi \\&= 1732.5\,{\text{c}}{{\text{m}}^2}\end{align}\] \(\text{Area of black region}\)\(=\)\(\text{Area of}\)\(4^\rm{th}\)\(\rm{}circle\)\(-\)\(\text{Area of}\)\(3^\rm{rd}\)\(\rm{}circle\) \[\begin{align}& = \pi r_4^2 - \pi r_3^2\\& = \pi {{(42)}^2} - \pi {{(31.5)}^2}\\&= 1764\pi - 992.25\pi \\&= 771.75\pi \\ &= 2425.5\,{\text{c}}{{\text{m}}^2}\end{align}\] \(\text{Area of white region}\) \(=\)\(\text{Area of}\)\(5^\rm{th}\)\(\rm{}circle\)\(-\)\(\text{Area of}\)\(4^\rm{th}\)\(\rm{}circle \) \[\begin{align}&= \pi {\text{r}}_5^2 - \pi {\text{r}}_4^2\\&= \pi {{(52.5)}^2} - \pi {{(42)}^2}\\&= 2756.25\pi - 1764\pi \\&= 992.25\pi \\ &= 3118.5\,{\text{c}}{{\text{m}}^2}\end{align}\] Therefore, the areas of the gold, red, blue, black, and white regions are \(346.5\, \rm{cm^2},\;1039.5 \,\rm{cm^2},\; 1732.5 \,\rm{cm^2},\; 2425.5\,\rm{cm^2},\) and \(3118.5 \,\rm{cm^2}\) respectively.
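A quick numerical check of the arithmetic above (a throwaway sketch, using the prescribed \(\pi = \frac{22}{7}\)):

from fractions import Fraction

pi = Fraction(22, 7)
radii = [Fraction(21, 2) * k for k in range(1, 6)]   # 10.5, 21, 31.5, 42, 52.5 cm
circle_areas = [pi * r**2 for r in radii]            # areas of the five concentric circles
regions = [circle_areas[0]] + [circle_areas[i] - circle_areas[i - 1] for i in range(1, 5)]
for name, area in zip(['gold', 'red', 'blue', 'black', 'white'], regions):
    print(name, float(area))  # 346.5, 1039.5, 1732.5, 2425.5, 3118.5 cm^2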
Main Page The Problem Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. [math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math] The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers. Useful background materials Some background to the project can be found here. General discussion on massively collaborative "polymath" projects can be found here. A cheatsheet for editing the wiki may be found here. Finally, here is the general Wiki user's guide Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (final call) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (inactive) (500-599) Possible proof strategies (active) (600-699) A reading seminar on density Hales-Jewett (active) (700-799) Bounds for the first few density Hales-Jewett numbers, and related quantities (arriving at station) Here are some unsolved problems arising from the above threads. Here is a tidy problem page. Bibliography Density Hales-Jewett H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished. Behrend-type constructions M. Elkin, "An Improved Construction of Progression-Free Sets ", preprint. B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint. K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint. Triangles and corners M. Ajtai, E. Szemerédi, Sets of lattice points that form no squares, Stud. Sci. Math. Hungar. 9 (1974), 9--11 (1975). MR369299 I. Ruzsa, E. Szemerédi, Triple systems with no six points carrying three triangles. Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), Vol. II, pp. 939--945, Colloq. Math. Soc. János Bolyai, 18, North-Holland, Amsterdam-New York, 1978. MR519318 J. Solymosi, A note on a question of Erdős and Graham, Combin. Probab. Comput. 13 (2004), no. 2, 263--267. MR 2047239
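As a concrete illustration of the definition (a sketch of my own, not part of the wiki page), here is a brute-force enumeration of the combinatorial lines in [math][3]^n[/math] for small [math]n[/math], together with a line-freeness check:

from itertools import product

def combinatorial_lines(n):
    """Yield every combinatorial line in [3]^n as a frozenset of three strings."""
    # A template is a string over {1, 2, 3, x} with at least one wildcard x;
    # its line is obtained by replacing every x by 1, by 2, and by 3 in turn.
    for template in product('123x', repeat=n):
        if 'x' not in template:
            continue
        yield frozenset(''.join(sub if c == 'x' else c for c in template)
                        for sub in '123')

def is_line_free(subset, n):
    """Check that a set of strings in [3]^n contains no combinatorial line."""
    s = set(subset)
    return not any(line <= s for line in combinatorial_lines(n))

print(sum(1 for _ in combinatorial_lines(2)))      # 7 lines for n = 2: 4^2 - 3^2 templates
print(is_line_free({'11', '22', '33'}, 2))         # False: this set is itself a line
print(is_line_free({'11', '12', '21', '22'}, 2))   # True: each line's x -> 3 point contains a 3

The template count shows why brute force only scales to small n: there are 4^n - 3^n lines, so for serious computations of c_n one needs the cleverer constructions discussed in the threads above.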
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the built-in PDF previewer in Firefox: \documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d... @Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first @yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself “for fun”.
I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
If water evaporates at room temperature because a small percentage of the molecules have enough energy to escape into the air, then why does a kitchen counter with a small amount of water eventually evaporate completely at room temperature? As your small percentage of molecules with high enough kinetic energy evaporates, the remaining liquid water cools down. But in doing so, it drains heat from its surroundings and thus stays at room temperature (or close to it), so there is still some fraction of molecules that can evaporate, and they do so, and more heat is transferred from the surroundings, and so it continues until all the water is gone. This happens because the rate of evaporation is higher than the rate of condensation. $$\ce{ H2O (l) <=>> H2O (g) }$$ This is also due to the fact that you have an open system: matter and energy can be exchanged with the surroundings. The evaporated water can leave the glass and condense somewhere else. The water on the surface does not exist in isolation; it is in contact with the air and with the surface. Random higher-energy molecules in the surface and in the air will add energy by collision to the water molecules, leading some of them to escape the liquid (evaporate). This is why evaporating water cools the air and surfaces around it. Let's say $q\in(0,100]$ is the minimal net percentage of the volume (or of the mass) that evaporates during any second $t$ (for every $t>0$). Saying "net", we assume that more water molecules leave the kitchen counter than return to it, and that the ratio of the number of molecules leaving the surface to the number of molecules getting back has a constant, positive lower bound. (Other answers explain why this is likely to be so in kitchen conditions.) Then at most $100-q$ percent of the water is left after each time unit. So, after $t$ time units, the amount of water left will be at most $a_0\Bigl(\frac{100-q}{100}\Bigr)^t$, where $a_0$ is the initial amount. Since $100-q<100$, we obtain $$\lim_{t\to+\infty}\,a_0\Bigl(\frac{100-q}{100}\Bigr)^t \ = \ 0\,.$$ In particular, after a certain point in time, the amount of water will be lower than the minimal possible amount (the volume of one $\mathrm{H}_2\mathrm{O}$ molecule, or its mass, simplified, of course). If the assumption made is not valid (say, due to great humidity somewhere in Asia), the result would be wrong: the water will NOT fully evaporate. (An aside has to be made. The above mathematical treatment is a gross simplification. To get a more realistic evaporation model, we should take into account that evaporation happens only from the surface, and not from the whole volume, and that both the surface and the volume change with time. Also, bear in mind that even within one second, the evaporation rate changes.) Water always evaporates when above 0 degrees Celsius at normal atmospheric pressure, which means that above 0 °C there are always molecules with high enough energies to leave the liquid phase.
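A toy numerical version of the geometric-decay bound above (my own sketch; the initial amount and the value of q are arbitrary illustrative assumptions):

a0 = 1.0e-3           # initial amount of water, kg (assumed value)
q = 5.0               # assumed minimal net percentage evaporating per second
m_molecule = 3.0e-26  # approximate mass of one H2O molecule, kg

# The bound a0 * ((100 - q)/100)**t drops below one molecule's mass in finite time.
amount, t = a0, 0
while amount >= m_molecule:
    amount *= (100 - q) / 100
    t += 1
print(t)  # about 877 seconds for these numbers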
So, considering that the set of all Turing machines is countably infinite, can we also say that the set of all FA machines (DFA/NFA) or the set of all PDA machines (DPDA/NPDA) is countably infinite, considering that we can simulate all of them with Turing machines? The answer to your first question is yes, the sets of FAs and PDAs are countable. It's easy to see why, since each such machine can be completely described by a finite encoding of its relevant information, like its states and its transition function. For the second question, there are lots of languages (uncountably many, as the video shows), and almost all of them simply cannot be recognized by any FA. Any non-regular language will suffice as an example, like $\{0^n1^n\mid n\ge 0\}$. The same result holds for the languages recognized by PDAs: there are languages that aren't context-free, like $\{0^n1^n0^n\mid n\ge 0\}$. The set of all finite automata is indeed countable, but that has nothing to do with the set of all TMs being countable. The two models are only related by the set of languages they accept, but both have infinitely many machines per language, so we can't immediately conclude anything about the sets of machines. You can prove countability with the usual means: working from the formal definition of FAs (something like $A = (Q, \Sigma, \delta, Q_F)$), construct an injection into $\mathbb{N}$. That's tedious, but elementary. For PDAs, it works in just the same way. If you define finite automata as Turing machines with a single working tape that may only be read from left to right, and similarly PDAs as Turing machines with an additional working tape that may only be used as a stack, then the countability of Turing machines gives you the countability of FAs and PDAs, as they form subsets. But in general, if you define these models independently, then you have to explicitly construct for each FA or PDA a corresponding TM, and this construction must be unique (i.e., an injection from the FAs or PDAs into the TMs) to employ the fact that TMs are countable. Let me add that the Turing-definable languages are countable, but are not effectively countable, i.e., there is no Turing machine that enumerates them. This is a classic result related to the halting problem. But in the case of finite automata (or PDAs) we can indeed effectively enumerate them. For the regular languages, just build a TM that enumerates finite automata for increasing numbers of states and alphabet sizes (removing duplicates if you want to, as finite-automata equivalence is decidable). Similarly for PDAs, but in this effective enumeration there might be machines that describe the same language, and as context-free language equivalence is undecidable, I do not see any way to get rid of them. Hence, in some sense, we have a "stronger" notion of countability for regular or context-free languages than for general Turing-definable languages.
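To make the "finite encoding" argument concrete, here is one possible injection of DFAs into the natural numbers (a sketch; any unambiguous serialization followed by an injective bytes-to-integer map works equally well):

def dfa_to_int(n_states, alphabet, delta, start, accepting):
    """Injectively encode a DFA as a natural number.

    delta maps (state, symbol) -> state, with states numbered 0..n_states-1.
    The serialization is unambiguous (assuming symbols avoid the separator
    characters ',', ';', '|'), so distinct DFAs receive distinct integers.
    """
    parts = [str(n_states),
             ','.join(alphabet),
             str(start),
             ','.join(str(s) for s in sorted(accepting)),
             ';'.join('%d,%s,%d' % (s, a, delta[(s, a)])
                      for s in range(n_states) for a in alphabet)]
    text = '|'.join(parts)
    return int.from_bytes(text.encode('utf-8'), 'big')

# Example: a 2-state DFA over {a, b} accepting strings with an odd number of a's.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}
print(dfa_to_int(2, ['a', 'b'], delta, 0, {1}))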
Research | Open Access
Dynamic analysis of a class of neutral delay model based on the Runge-Kutta algorithm
EURASIP Journal on Wireless Communications and Networking, volume 2018, Article number: 100 (2018)

Abstract

In this paper, we study the dynamics of a class of second-order neutral delay nonlinear models. This study is applicable to many fields, such as engineering, cybernetics, and physics. We use the Runge-Kutta algorithm and the Riccati transform method. First, we give a neutral delay nonlinear model based on the Runge-Kutta algorithm. Then, we study the dynamic characteristics of the neutral delay model and establish some new sufficient conditions for oscillation. The results of our research are new, and they extend and improve the results already available. The results are also verified by numerical experiments. The neutral delay nonlinear model has important applications in engineering, cybernetics, and physics; therefore, this study is of considerable help to those fields.

Introduction

The Runge-Kutta algorithm is a practical algorithm built on a solid mathematical foundation [1]. It is an important implicit or explicit iterative method for solving nonlinear ordinary differential equations [2]. Because of its high precision, it is a high-precision single-step algorithm widely used in engineering [3]. However, some measures need to be taken to suppress the error, so its implementation is more complex [4]. In recent years, due to the widespread application of neutral delay differential equations in engineering, cybernetics, and physics, these equations have drawn wide attention from scholars both at home and abroad [5,6,7,8]. With the further improvement of the Runge-Kutta algorithm and the further development of the theory of neutral delay differential equations, many scholars have studied delay differential equations and obtained related results about oscillation [9,10,11,12,13,14]. A series of techniques and methods, such as computation and reasoning, have been used to study these equations and to obtain oscillation conditions for their solutions [10, 15,16,17,18,19]. How to obtain oscillation criteria for neutral delay differential equation models has become a key and difficult problem [20, 21]. The Runge-Kutta algorithm and the Riccati transform provide an effective and practical method for studying second-order neutral delay models. In this paper, we study the dynamic characteristics of a class of second-order neutral delay models using the Runge-Kutta algorithm. We obtain some new oscillation criteria for a class of second-order neutral delay nonlinear differential equation models. These results extend and improve the known results in the literature.

Model and methodology

The vibration problems in engineering, cybernetics, communication technology, physics, and other fields can be represented by neutral delay differential equation models. For a long time, such dynamics problems have been the concern of experts and scholars at home and abroad. To this end, various neutral delay models have been set up to study the vibration of engineering, automatic control, communication technology, and other practical problems.
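As a rough illustration of the Runge-Kutta side of such studies (a sketch under stated assumptions, not the authors' code: the toy equation, the step size, and all names are illustrative), here is a fixed-step classical RK4 integrator for a simple delay differential equation x'(t) = f(t, x(t), x(t - tau)), with the delayed value taken from the stored solution by linear interpolation:

import numpy as np

def rk4_dde(f, history, tau, t0, t1, h):
    """Classical RK4 for x'(t) = f(t, x(t), x(t - tau)).

    history(t) supplies x(t) for t <= t0; delayed values inside a step are
    read off the stored grid by linear interpolation (a common simplification).
    """
    ts = [t0]
    xs = [history(t0)]

    def delayed(t):
        td = t - tau
        if td <= t0:
            return history(td)
        return np.interp(td, ts, xs)

    t, x = t0, xs[0]
    while t < t1 - 1e-12:
        k1 = f(t, x, delayed(t))
        k2 = f(t + h/2, x + h/2 * k1, delayed(t + h/2))
        k3 = f(t + h/2, x + h/2 * k2, delayed(t + h/2))
        k4 = f(t + h, x + h * k3, delayed(t + h))
        x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)

# Toy example: x'(t) = -x(t - 1) with history x(t) = 1 for t <= 0,
# whose solution exhibits the decaying oscillation typical of such models.
ts, xs = rk4_dde(lambda t, x, xd: -xd, lambda t: 1.0, tau=1.0, t0=0.0, t1=20.0, h=0.01)

This kind of numerical run displays the qualitative oscillatory behavior that the analytical criteria established below characterize.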
On the basis of the existing literature, this paper studies a class of engineering control problems, namely a class of second-order neutral delay differential equations of the form

$$\bigl( R(t)\,\varphi_\alpha(z'(t)) \bigr)' + f\bigl(t, x(\sigma(t))\bigr) = 0, \qquad t \ge t_0, \tag{1}$$

where $z(t) = x(t) + c(t)\,x(\tau(t))$, $\varphi_\alpha(s) = |s|^{\alpha-1}s$, and the following conditions are satisfied:

(h1) there exists $q(t) \in C[t_0, \infty)$ such that $f(t,x)\,\operatorname{sgn} x \ge q(t)\,|x|^{\beta}$, where $\alpha$ and $\beta$ are positive constants.

This model (1) has been widely used in engineering, automatic control, communication technology, physics, and other systems. By using the Riccati transformation and computational reasoning, some new oscillation criteria for the second-order neutral delay differential equation (1) are obtained. These results extend and improve some well-known results.

Results and discussion

In this paper, in order to study the oscillation of the system (1), we use the generalized Riccati transform to study Eq. (2).

Lemma 1. Assume that (h1)~(h4) hold and $x(t)$ is an eventually positive solution of Eq. (2). Then $z(t) > 0$ and $z'(t) > 0$ eventually, or $x(t) \to 0$.

Proof. Suppose $x(t)$ is an eventually positive solution of Eq. (2). If $z(t) > 0$, we have $\bigl( R(t)\,|z'(t)|^{\alpha-1} z'(t) \bigr)' \le 0$, so $z'(t)$ is eventually of one sign, that is, $z'(t) > 0$ or $z'(t) < 0$ eventually. If there exists $T$ such that $z'(t) < 0$ for $t \ge T$, then for an arbitrary positive constant $K$ the monotonicity of $R(t)\,|z'(t)|^{\alpha-1} z'(t)$ forces $z(t) \to -\infty$, contradicting $z(t) > 0$. Therefore $z'(t) > 0$.

If $z(t) < 0$, then $x(t)$ is bounded. Otherwise, if $x(t)$ were unbounded, there would exist a sequence $\{t_n\}_{n=1}^{\infty}$ with $\lim_{n\to\infty} t_n = \infty$ such that $x(t_n) = \max_{s\in[T,\,t_n]} x(s)$; since $\tau(t_n) \le t_n$, we would have $x(\tau(t_n)) \le \max_{s\in[T,\,t_n]} x(s) = x(t_n)$ and hence $x(t_n) < -c(t_n)\,x(\tau(t_n)) \le c\,x(t_n)$ with $c < 1$, a contradiction. Therefore $x(t)$ is bounded, and
$$0 \ge \limsup_{t\to\infty} z(t) \ge \limsup_{t\to\infty} x(t) + \liminf_{t\to\infty} c(t)\,x(\tau(t)) \ge (1-c)\,\limsup_{t\to\infty} x(t) \ge 0.$$
Thus $\lim_{t\to\infty} x(t) = 0$.

Lemma 2. Suppose $x(t)$ is an eventually positive solution of Eq. (2) with $z(t) > 0$ and $z'(t) > 0$. Then (1) $z(t) > t\,z'(t)$, and (2) $\frac{z(t)}{t}$ is strictly decreasing eventually.

Proof. Since $\bigl( R(t)\,(z'(t))^{\alpha} \bigr)' \le 0$, we have $z''(t) \le 0$. Let $g(t) = z(t) - t\,z'(t)$; then $g'(t) = -t\,z''(t) > 0$, and we assert that $g(t) > 0$ eventually. Otherwise $g(t) < 0$, so $\frac{z(t)}{t}$ would be strictly increasing, and $\frac{z(\sigma(t))}{\sigma(t)} \ge \frac{z(\sigma(T))}{\sigma(T)} = b > 0$ for $t \ge T$, i.e. $z(\sigma(t)) \ge b\,\sigma(t)$; thus $0 < R(t)\,(z'(t))^{\alpha} \le R(T)\,(z'(T))^{\alpha} - \int_T^t Q(s)\, z^{\beta}(\sigma(s))\, ds$, and the right-hand side eventually becomes negative, a contradiction. Then $z(t) > t\,z'(t)$, and $\frac{z(t)}{t}$ is strictly decreasing eventually.

Theorem 1. Assume that
$$\int_T^{\infty} \left[\rho(s)\,Q(s)\left(\frac{\sigma(s)}{s}\right)^{\beta} - \frac{R(s)\,\bigl(\rho'(s)\bigr)^{\lambda+1}}{(\lambda+1)^{\lambda+1}\,\bigl(m\,\rho(s)\bigr)^{\lambda}}\right] ds = \infty;$$
then Eq. (2) is almost oscillatory.

Proof. Suppose $x(t)$ is an eventually positive solution of Eq. (2); by Lemma 1, either $z(t) > 0$ and $z'(t) > 0$, or $x(t) \to 0$.
We define the generalized Riccati function. If $\beta \ge \alpha$, we obtain an estimate with the constant $m_1 = \min\bigl\{1, [z(T)]^{\frac{\beta-\alpha}{\alpha}}\bigr\}$. If $\beta < \alpha$, we obtain the analogous estimate with the constant $m_2 = \min\bigl\{1, [z'(T)]^{\frac{\beta-\alpha}{\beta}}\bigr\}$. Therefore, in either case we obtain an estimate with $\lambda = \min\{\alpha, \beta\}$ and a constant $m$. Let $A(t) = \frac{\lambda m}{R^{1/\lambda}(t)}$. Integrating the resulting Riccati inequality and using Lemma 1, Lemma 2, and the standard oscillation theory for such equations, we conclude that Eq. (2) is almost oscillatory.

Conclusions

In this paper, the second-order neutral delay nonlinear model is studied by combining the Runge-Kutta algorithm and the Riccati transformation method. We have obtained oscillation criteria for the second-order neutral delay nonlinear differential equation model. Most of the literature has mainly studied the case $\alpha \ge \beta$ [5,6,7,8, 13,14,15,16,17,18,19,20,21]. We have studied not only the case $\alpha \ge \beta$ but also the case $\alpha < \beta$. We generalize the existing results and obtain new oscillation criteria. This second-order neutral delay differential equation describes oscillation phenomena in engineering, control, communication, physics, and other fields, where oscillation can cause internal damage. Using the Runge-Kutta algorithm and the Riccati transform, we can predict oscillation in order to avoid its occurrence in actual engineering, control, and communication systems.

Abbreviations

Eq.: Equation

References

1. M.H. Carpenter, D. Gottlieb, S. Abarbanel, W.S. Don, The theoretical accuracy of Runge-Kutta discretization for the initial-boundary value problem: a study of the boundary error. SIAM J. Sci. Comput. 16, 1241–1252 (1995)
2. Q. Zhang, Third order explicit Runge-Kutta discontinuous Galerkin method for linear conservation law with inflow boundary condition. J. Sci. Comput. 46(2), 294–313 (2011)
3. Q. Zhang, C.W. Shu, Error estimates to smooth solutions of Runge-Kutta discontinuous Galerkin methods for scalar conservation laws. SIAM J. Numer. Anal. 42, 641–666 (2004)
4. B. Cockburn, C.W. Shu, Runge-Kutta discontinuous Galerkin methods for convection-dominated problems. J. Sci. Comput. 16, 173–261 (2001)
5. M.E. Elmetwally, S.H. Taher, H.S. Samir, Oscillation of nonlinear neutral delay differential equations. J. Appl. Math. Comput. 21(1), 99–118 (2006)
6. P. Hu, C.M. Huang, Analytical and numerical stability of nonlinear neutral delay integro-differential equations. Journal of the Franklin Institute 348, 1082–1100 (2011)
7. S. Zhang, Q. Wang, Oscillation of second-order nonlinear neutral dynamic equations on time scales. Applied Mathematics and Computation 216, 2837–2848 (2010)
8. Q. Li, R. Wang, F. Chen, T.X. Li, Oscillation of second-order nonlinear delay differential equations with nonpositive neutral coefficients. Advances in Difference Equations 35, 1–7 (2015)
9. J. Liu, H.Y. Luo, X. Liu, Oscillation criteria for half-linear functional differential equations with damping. Therm. Sci. 18(5), 1537–1542 (2014)
10. T. Candan, Oscillatory behavior of second order nonlinear neutral differential equations with distributed deviating arguments. Appl. Math. Comput. 262, 199–203 (2015)
11. H.Y. Luo, J. Liu, X. Liu, Oscillation behavior of a class of new generalized Emden-Fowler equations. Therm. Sci. 18(5), 1567–1572 (2014)
12. R.P. Agarwal, C.H. Zhang, T.X. Li, Some remarks on oscillation of second order neutral differential equations.
Appl. Math. Comput. 274, 178–181 (2016)
13. J. Dzurina, I.P. Stavroulakis, Oscillation criteria for second-order delay differential equations. Appl. Math. Comput. 174, 1636–1641 (2003)
14. F.W. Meng, R. Xu, Oscillation criteria for certain even order quasi-linear neutral differential equations with deviating arguments. Appl. Math. Comput. 190, 458–464 (2007)
15. Z.W. Zheng, X. Wang, H.M. Han, Oscillation criteria for forced second order differential equations with mixed nonlinearities. Appl. Math. Lett. 22, 1096–1101 (2009)
16. Q.X. Zhang, S.H. Liu, L. Gao, Oscillation criteria for even-order half-linear functional differential equations. Appl. Math. Lett. 24, 1709–1715 (2011)
17. Z.L. Han, T.X. Li, C.H. Zhang, Y. Sun, Oscillation criteria for certain second-order nonlinear neutral differential equations of mixed type. Abstr. Appl. Anal. 387483, 1–9 (2011)
18. H.D. Liu, F.W. Meng, P.C. Liu, Oscillation and asymptotic analysis on a new generalized Emden-Fowler equation. Appl. Math. Comput. 219, 2739–2748 (2015)
19. S.H. Liu, Q.X. Zhang, Y.H. Yu, Oscillation of even-order half-linear differential equations with damping. Computers and Mathematics with Applications 61, 2191–2196 (2011)
20. C.J. Zhang, T.T. Qin, J. Jin, An improvement of the numerical stability results for nonlinear neutral delay-integro-differential equations. Appl. Math. Comput. 215, 548–556 (2009)
21. J.J. Zhao, Y. Cao, Y. Xu, Legendre spectral collocation methods for Volterra delay-integro-differential equations. J. Sci. Comput. 67(3), 1110–1133 (2016)

Acknowledgements
The research presented in this paper was supported by the China National Natural Science Foundation, the Yunnan Science and Technology Department of China, and Qujing Normal University, China.

Funding
The author acknowledges the National Natural Science Foundation of China (grant 11361048), the Yunnan Natural Science Foundation of China (grant 2017FH001-014), and the Qujing Normal University Science Foundation of China (grant ZDKC2016002).

Availability of data and materials
The simulation code can be downloaded from reference [11], and it is applicable.

Author's contributions
HL is the only author of this article. By using the Runge-Kutta algorithm and the Riccati transformation method, the author studies the dynamical properties of a class of second-order neutral delay nonlinear models and establishes some new sufficient conditions. The author read and approved the final manuscript.

Ethics declarations
Competing interests: The author declares that he/she has no competing interests.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Let $G$ be a group and let $a \in G$ be a fixed element. Let $$H= \{\text{$g \in G$ | $g^{-1}ag=a$}\}.$$ Prove that $H$ is a subgroup of $G$. So I know I need to show: (i) $H$ is closed under the operation on $G$. (ii) $H$ is closed under inversion. (iii) $H\neq \emptyset$ For closure: Suppose $x,y \in H.$ Then $x^{-1}ax=a$ and $y^{-1}ay=a.$ So $$\begin{align} (xy)^{-1}a(xy)& = y^{-1}x^{-1}axy \\ & = y^{-1}(x^{-1}ax)y \\ & = y^{-1}ay \\ & = a \\ \end{align}$$ Thus $xy \in H$. Now how do I show it's closed under inversion? I'm not sure what the inverse is... And to show $H \neq \emptyset$ all I need to do is show that the identity is in the set. And I'm not sure how to do that either. Help!
Ex.5.3 Q8 Arithmetic Progressions Solution - NCERT Maths Class 10 Question Find the sum of first \(51\) terms of an AP whose second and third terms are \(14\) and \(18\) respectively. Text Solution What is Known? \({a_2}\) and \({a_3}\) What is Unknown? \({S_{51}}\) Reasoning: Sum of the first \(n\) terms of an AP is given by \({S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\), and the \(n\rm{th}\) term of an AP is \({a_n} = a + \left( {n - 1} \right)d\), where \(a\) is the first term, \(d\) is the common difference and \(n\) is the number of terms. Steps: Given, 2nd term, \({a_2} = 14\) 3rd term, \({a_3} = 18\) Common difference, \(d = {a_3} - {a_2} = 18 - 14 = 4\) We know that the \(n\rm{th}\) term of an AP is \({a_n} = a + \left( {n - 1} \right)d\) \[\begin{align}{a_2} &= a + d\\14 &= a + 4\\a& = 10\end{align}\] Sum of \(n\) terms of the AP series, \[\begin{align}{S_n} &= \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\\{S_{51}} &= \frac{{51}}{2}\left[ {2 \times 10 + \left( {51 - 1} \right)4} \right]\\ &= \frac{{51}}{2}\left[ {20 + 50 \times 4} \right]\\ &= \frac{{51}}{2} \times 220\\ &= 51 \times 110\\ &= 5610\end{align}\]
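A quick numeric check of this result (a plain-Python sketch, not part of the original solution):

```python
# a = 10, d = 4 should reproduce a2 = 14, a3 = 18 and S_51 = 5610.
a, d, n = 10, 4, 51
terms = [a + k * d for k in range(n)]
assert terms[1] == 14 and terms[2] == 18
print(sum(terms))                       # 5610, by direct summation
print(n * (2 * a + (n - 1) * d) // 2)   # 5610, via the S_n formula
```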
Probability - Complement Approach Saturday 1 June 2019 Question: Two fair six-sided dice are rolled. What is the probability that at least one of the dice shows a five facing up? Three different solutions are given below. Let X be the random variable representing the number of fives. Method 1: Analysing the sample space “At least one five” means one or more fives – or, specifically for this question, one or two fives. From the table depicting the sample space, it can be seen that there are ten outcomes with one five and one outcome with two fives (shown in red). Therefore, the probability of getting at least one five when rolling two dice is \({\rm{P}}\left( {X \ge 1} \right) = \frac{{11}}{{36}}\). Method 2: Adding probabilities of all the outcomes that make up the event The ‘event’ of obtaining at least one five consists of three different outcomes: (1) a 5 on the 1st die, and not a 5 on the 2nd die, (2) not a 5 on the 1st die, and a 5 on the 2nd die, and (3) a 5 on both dice [note: 5′ means “not 5”] \({\rm{P}}\left( {5,5'} \right) = \frac{1}{6} \cdot \frac{5}{6} = \frac{5}{{36}}\) \({\rm{P}}\left( {5',5} \right) = \frac{5}{6} \cdot \frac{1}{6} = \frac{5}{{36}}\) \({\rm{P}}\left( {5,5} \right) = \frac{1}{6} \cdot \frac{1}{6} = \frac{1}{{36}}\) From these three probabilities, it follows that \({\rm{P}}\left( {X \ge 1} \right) = \frac{5}{{36}} + \frac{5}{{36}} + \frac{1}{{36}} = \frac{{11}}{{36}}\) Method 3: Complement approach \({\rm{P}}\left( {X \ge 1} \right) + {\rm{P}}\left( {X = 0} \right) = 1\;\;\;\; \Rightarrow \;\;\;\;{\rm{P}}\left( {X \ge 1} \right) = 1 - {\rm{P}}\left( {X = 0} \right)\) \({\rm{P}}\left( {X = 0} \right)\) is the probability of not getting a 5 on either of the dice. Therefore, \({\rm{P}}\left( {X \ge 1} \right) = 1 - \frac{5}{6} \cdot \frac{5}{6} = 1 - \frac{{25}}{{36}} = \frac{{11}}{{36}}\) Clearly, the complement approach is the most efficient way of computing this particular probability. Students should be presented with several examples that illustrate the usefulness of a complement approach in a variety of probability questions. See the page The Birthday Problem & More in my Exploration (IA) Ideas section.
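The counts are easy to confirm by brute force; a short Python enumeration of the 36-outcome sample space (illustrative only, not part of the original page):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 ordered rolls
hits = sum(1 for a, b in outcomes if a == 5 or b == 5)
print(Fraction(hits, len(outcomes)))              # 11/36
print(1 - Fraction(5, 6) ** 2)                    # complement approach: 11/36
```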
It is important to understand that the only problem here is to obtain the extrinsic parameters. Camera intrinsics can be measured off-line, and there are lots of tools for that purpose. What are camera intrinsics? The camera intrinsic parameters are usually collected in the camera calibration matrix, $K$. We can write $$K = \begin{bmatrix}\alpha_u&s&u_0\\0&\alpha_v&v_0\\0&0&1\end{bmatrix}$$ where $\alpha_u$ and $\alpha_v$ are the scale factors in the $u$ and $v$ coordinate directions, and are proportional to the focal length $f$ of the camera: $\alpha_u = k_u f$ and $\alpha_v = k_v f$. $k_u$ and $k_v$ are the number of pixels per unit distance in the $u$ and $v$ directions. $c=[u_0,v_0]^T$ is called the principal point, usually the coordinates of the image center. $s$ is the skew, which is non-zero only if $u$ and $v$ are non-perpendicular. A camera is calibrated when its intrinsics are known. This can be done easily, so it is not considered a goal of computer vision, but a trivial off-line step. What are camera extrinsics? The camera extrinsics, or external parameters, $[R|t]$ form a $3\times4$ matrix that corresponds to the Euclidean transformation from a world coordinate system to the camera coordinate system. $R$ represents a $3\times3$ rotation matrix and $t$ a translation. Computer-vision applications focus on estimating this matrix. $$[R|t] = \begin{bmatrix} R_{11}&R_{12}&R_{13}&T_x\\R_{21}&R_{22}&R_{23}&T_y\\R_{31}&R_{32}&R_{33}&T_z \end{bmatrix}$$ How do I compute a homography from a planar marker? A homography is a homogeneous $3\times3$ matrix that relates a 3D plane and its image projection. If we have a plane $Z=0$, the homography $H$ that maps a point $M=(X,Y,0)^T$ on this plane to its corresponding 2D point $m$ under the projection $P=K[R|t]$ satisfies $$\tilde m = K \begin{bmatrix} R^1 & R^2 & R^3 & t \end{bmatrix} \begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix}$$ $$= K \begin{bmatrix}R^1&R^2&t\end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$$ $$H = K \begin{bmatrix}R^1 & R^2 & t \end{bmatrix}$$ In order to compute the homography we need world-camera point pairs. If we have a planar marker, we can process an image of it to extract features and then detect those features in the scene to obtain matches. We need just 4 pairs to compute the homography using the Direct Linear Transform. If I have the homography, how can I get the camera pose? The homography $H$ and the camera pose $K[R|t]$ contain the same information, and it is easy to pass from one to the other. The last column of both is the translation vector. Columns one ($H^1$) and two ($H^2$) of the homography are also columns one ($R^1$) and two ($R^2$) of the camera pose matrix. Only column three, $R^3$, of $[R|t]$ remains, and since $R$ has to be orthogonal, $R^3$ can be computed as the cross product of columns one and two: $$R^3 = R^1 \otimes R^2$$ Because the homography is only defined up to scale, it is necessary to normalize $[R|t]$, dividing by, for example, element $[3,4]$ of the matrix.
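A minimal numpy sketch of the homography-to-pose recovery just described. The values are hypothetical, and normalizing by the norm of the first rotation column is used here as one common alternative to the element-[3,4] normalization mentioned above:

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover [R|t] from H = K [R^1 R^2 t], as described above."""
    A = np.linalg.inv(K) @ H         # A = [R^1 R^2 t], up to scale
    A = A / np.linalg.norm(A[:, 0])  # fix the scale so R^1 has unit norm
    r1, r2, t = A[:, 0], A[:, 1], A[:, 2]
    r3 = np.cross(r1, r2)            # the cross-product step from the text
    return np.column_stack([r1, r2, r3, t])

# Hypothetical calibration matrix and a homography built from a known pose:
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R12t = np.column_stack([np.eye(3)[:, 0], np.eye(3)[:, 1], [0.1, -0.2, 2.0]])
H = K @ R12t
print(pose_from_homography(H, K))    # recovers [I | (0.1, -0.2, 2.0)]
```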
Ex.13.2 Q4 Surface Areas and Volumes Solution - NCERT Maths Class 10 Question A pen stand made of wood is in the shape of a cuboid with four conical depressions to hold pens. The dimensions of the cuboid are \(15 \,\rm{cm}\) by \(10 \,\rm{cm} \) by \(3.5 \,\rm{cm}\). The radius of each of the depressions is \(0.5\,\rm{cm}\) and the depth is \(1.4\,\rm{cm}\). Find the volume of wood in the entire stand (see Fig. 13.16). Text Solution What is known? A wooden pen stand is in the shape of a cuboid with four conical depressions. The dimensions of the cuboid are \(15\,\rm{cm} \times 10\,\rm{cm} \times 3.5\,\rm{cm}\). Radius of the conical depressions is \(0.5\,\rm{cm}\). Depth of the conical depressions is \(1.4\,\rm{cm}\). What is unknown? Volume of wood in the entire pen stand. Reasoning: From the given figure it’s clear that the conical depressions do not contain wood. Since the dimensions of all \(4\) conical depressions are the same, they will have identical volumes too. Volume of wood in the entire pen stand \(=\) volume of the wooden cuboid \(-\ 4 \times\) volume of a conical depression. We will find the volume of the solid by using the formulae: Volume of the cuboid \(= lbh\), where \(l, b\) and \(h\) are the length, breadth and height of the cuboid respectively. Volume of the cone \(\begin{align} = \frac{1}{3}\pi {r^2}{h_1}\end{align}\), where \(r\) and \(h_1\) are the radius and height of the cone respectively. Steps: Depth of each conical depression, \({h_1} = 1.4\,\rm{cm}\) Radius of each conical depression, \(r = 0.5\,\rm{cm}\) Dimensions of the cuboid: \(15\,\rm{cm} \times 10\,\rm{cm} \times 3.5\,\rm{cm}\) Volume of wood in the entire pen stand \(=\) Volume of the wooden cuboid \(- \;4 \times \) Volume of each conical depression \[\begin{align}&= lbh - 4 \times \frac{1}{3}\pi {r^2}{h_1}\\ &= \left( {15\,{\rm{cm}} \times 10\,{\rm{cm}} \times 3.5\,{\rm{cm}}} \right) - \left( {4 \times \frac{1}{3} \times \frac{{22}}{7} \times 0.5\,{\rm{cm}} \times 0.5\,{\rm{cm}} \times 1.4\,{\rm{cm}}} \right)\\&= 525\,{\rm{cm}^3} - 1.47\,{\rm{cm}^3}\\&= 523.53\,{\rm{cm}^3}\end{align}\]
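The arithmetic can be double-checked in a couple of lines of Python, using the same \(22/7\) approximation for \(\pi\) as the solution (a sketch, not part of the original page):

```python
pi = 22 / 7
cuboid = 15 * 10 * 3.5               # 525 cm^3
cone = (1 / 3) * pi * 0.5**2 * 1.4   # one conical depression, ~0.3667 cm^3
print(cuboid - 4 * cone)             # ~523.53 cm^3
```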
Impedance matching is tricky, but the role of a quarter wave transmission line is to map from one impedance to another. The actual impedance of the line will not match either the input or the output impedance - this is entirely expected. However at a given frequency, when a correctly designed quarter wave line is inserted with the correct impedance, the output impedance will appear to the input as perfectly matched. In your case, the transformer will make the \$20\Omega\$ impedance appear as if it is a \$100\Omega\$ impedance meaning no mismatch. Essentially it guides the waves from one characteristic impedance to another. The easiest way to visualise this is on a Smith chart, plot the two points 0.4 (\$20\Omega\$) and 2 (\$100\Omega\$). Then draw a circle centred on the resistive/real axis (line down the middle) which intersects both points. You will find that this point is located at 0.894 (\$44.7\Omega\$) if your calculations are correct. This is shown below at \$500\mathrm{MHz}\$, but the frequency is only important when converting the electrical length to a physical length. What a quarter wave transformer does is rotate a given point by \$180^\circ\$ around its characteristic impedance on the Smith chart (that's \$\lambda/4 = 90^\circ\$ forward plus \$90^\circ\$ reverse). Exactly why it does this is complex. But the end result of a long derivation is that for a transmission line of impedance \$Z_0\$ connected to a load of impedance \$Z_L\$ and with a length \$l\$, then the impedance at the input is given by: $$Z_{in}=Z_0\frac{Z_L+jZ_0\tan\left(\beta l\right)}{Z_0+jZ_L\tan\left(\beta l\right)}$$ That is an ugly equation, but it just so happens if the electrical length \$\beta l\$ is \$\lambda/4\$ (\$90^\circ\$), the \$\tan\$ part goes to infinity which allows the equation to be simplified to: $$Z_{in}=Z_0\frac{Z_0}{Z_L}=\frac{(Z_0)^2}{Z_L}\rightarrow Z_0=\sqrt{\left(Z_{in}Z_L\right)}$$ Which is where your calculation comes from. With the quarter wave transformer in place, the load appears as matched to the source. In other words, the transformer matches both of its interfaces, not just the input end. You can also see from this equation why the transformer only works for a single frequency - because it relies on the physical length being \$\lambda/4\$. You can actually (generally using advanced design tools) achieve an approximate match over a range of frequencies - basically a close enough but not exact match.
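A quick numeric check of the input-impedance formula above (plain Python, using this answer's example values; not part of the original answer):

```python
import math

def z_in(z0, zl, beta_l):
    """Input impedance of a lossless line of characteristic impedance z0,
    electrical length beta_l (radians), terminated in a load zl."""
    t = math.tan(beta_l)
    return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)

z0 = math.sqrt(20 * 100)                    # the 44.7-ohm quarter-wave section
print(z_in(z0, 20, math.pi / 2 * 0.9999))   # ~100 ohms at the design frequency
print(z_in(z0, 20, math.pi / 3))            # away from 90 degrees: mismatched
```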
The purpose of this series of notebooks on Bayesian logic in clinical diagnostics is to give a mathematically robust and intelligible summary that helps physicians, epidemiologists, nurses and NPs in the field, as well as the consumers of their services, understand clinical decision-making as an objective process amenable to analysis and quantification, rather than one that uses 'instinct' or 'experience' as a surrogate explanation for a process that can – and should – be made explicable, interpretable and, if need be, interrogable. This is not meant to oversimplify diagnostics (the practice) or to imply that it is in any way an automatic process that can be reduced to a simple algorithm. But diagnosis (the logical process) is, and should be, amenable to exactly this kind of analysis. The purpose of these notebooks is to help with that process. About the author: I'm Chris von Csefalvay, a clinical computational epidemiologist and expert in filoviridae working for CBRD. You can find me on Twitter, Facebook and, of course, Github. Please feel free to let me know if you have any questions, corrections or suggestions. Differential diagnosis describes the process of finding a list of potential diagnostic solutions for a particular set of symptoms, then determining the solution with the greatest probability given the symptoms. Or, formally defined: given a patient exhibiting the symptoms $ \mathcal{S} = S_{1 \ldots m} $ and potential diagnostic solutions $ \mathcal{Dx} $, which solution $ Dx_{\sigma} \in \mathcal{Dx} $ maximises $ f(x) = p(x \mid \mathcal{S}) $ over $ x \in \mathcal{Dx} $? In other words, the objective is to find$$ \underset{x \in \mathcal{Dx}} {\operatorname{arg\,max}} \ \ f(x) = \underset{x \in \mathcal{Dx}} {\operatorname{arg\,max}} \ \ p(x \mid \bigcap_{i = 1}^{m} S_i) $$ Here, the $ \bigcap_{i=1}^{m} S_i $ operator describes the intersection of the symptoms $ S_{1 \ldots m} $, i.e. $ S_1 \cap \ldots \cap S_m $. In practice, the inverse probability (the probability of a symptom occurring given a diagnosis, i.e. $ p(S_q \mid D_r) $ for $ S_q \in \mathcal{S}, D_r \in \mathcal{Dx} $) is typically more readily available. For this reason, we will need to rely on Bayesian logic to reverse patterns of causality. We will consider the specific situation of two patients, denoted Alice and Bob, presenting with identical symptoms $ S $. Their differential diagnoses $ \mathcal{Dx}_{Alice} $ and $ \mathcal{Dx}_{Bob} $ include, among others, a vaccine-preventable disease $ D_{VPD} $, against which Alice is vaccinated but Bob is not. The vaccine has a failure rate of 2%. Amongst the unvaccinated, $ D_{VPD} $ is present in about 0.1% of all who turn up at this particular hospital. For simplicity's sake, we are treating the conjunction of symptoms as a binary event and consider all of them indispensable for a diagnosis of $ D_{VPD} $, so that patients either exhibit ($ S^+ $) or do not exhibit ($ S^- $) the totality of the symptoms defined to be pathognomonic of the disease. We know that overall 8.0% of the population reporting for treatment at this facility exhibit all of the pathognomonic symptoms (with or without others), as these symptoms are fairly nonspecific. We further know 'by definition' that everyone who does have $ D_{VPD} $ is $ S^+ $, i.e. exhibits all the symptoms. Bob's case is somewhat simpler, so let's start with that. What is the probability that Bob, who exhibits the symptom combination $ S $ (i.e. is $ S^+ $), has $ D_{VPD} $, against which he is not vaccinated? In other words, what is the probability $ p(D_{VPD} \mid S^+) $ in an unvaccinated individual?
Bayes' theorem states that$$ p(D_{VPD} \mid S^+)_{unvax} = \frac{p(S^+ \mid D_{VPD}) p(D_{VPD})}{p(S^+)} $$ where $ p(D_{VPD}) $ denotes the probability that an unvaccinated person presenting at this particular hospital has $ D_{VPD} $ and $ p(S^+) $ denotes the probability that a person presenting at this hospital will exhibit the symptoms $ S $ pathognomonic for $ D_{VPD} $. We know $ p(D_{VPD}) $ is 0.1% or $ 0.001 $ and $ p(S^+) $ is 8% or $ 0.08 $. Finally, since the symptoms $ S $ are deemed to be conclusively pathognomonic of $ D_{VPD} $, i.e. they occur in every case without exception, we can eliminate $ p(S^+ \mid D_{VPD}) $ as its value is 1. This leaves a probability$$ p(D_{VPD} \mid S^+)_{unvax} \ = \frac{p(D_{VPD})}{p(S^+)} \ = \frac{0.001}{0.08} = 0.0125 \ \ \ \ (1.25 \times 10^{-2}) $$ In other words, a probability of $1.25\%$. In Alice's case, we know she is vaccinated but we do not know if she is immune. Therefore, we compensate for this by calculating $ p(D_{VPD})_{vaxed} $ to account for the risk of her being susceptible despite vaccination – in other words, we take the baseline risk $ p(D_{VPD})_{naive} $, which is $ 0.1\% $, and multiply it by the risk of vaccine failure, $ 1 - E_{vax} $, i.e. $ 2\% $, since by definition $ p(D_{VPD})_{immune} = 0 $.$$ p(D_{VPD})_{vaxed} \ = (1 - E_{vax}) \cdot p(D_{VPD} \mid naive) \ = 0.02 \cdot 0.001 = 0.00002 $$ Since the other terms, $ p(S^+) $ and $ p(S^+ \mid D_{VPD}) $, were calculated by reference to the whole population, they do not necessarily have to be adjusted, although this introduces a small margin of error: the unvaccinated do tend to present with the pathognomonic symptoms $ S^+ $ more often than the vaccinated. However, unless exact data to this effect are available, it is best not to compensate for that effect unless the symptoms are extremely specific – which, for most VPDs, they are not. Presenting with the symptoms $ S $, her probability of having $ D_{VPD} $ is given by Bayes' theorem as$$ p(D_{VPD} \mid S^+)_{vaxed} \ = \frac{p(S^+ \mid D_{VPD})p(D_{VPD})_{vaxed}}{p(S^+)} \ = \frac{p(D_{VPD})_{vaxed}}{p(S^+)} = \frac{0.00002}{0.08} \ = 0.00025 \ \ \ \ \ (2.5 \times 10^{-4}) $$ In other words, the likelihood of Bob having $D_{VPD}$ is fifty times as high as that of Alice, exactly the reciprocal of the vaccine's 2% failure rate. It is a frequently raised accusation of anti-vaccine activists that doctors routinely rule out diseases against which a person is vaccinated. This is quite simply not true. Rather, the probability of an unvaccinated person having the disease is fifty times higher in this quite typical case. A difference that significant is likely to alter the order of differential diagnosis priorities, putting other causes ahead of $ D_{VPD} $. This is not because doctors 'refuse' to diagnose diseases one is vaccinated against, but for the simple reason that such a person has a much lower risk of having $ D_{VPD} $ than an unvaccinated but identically symptomatic counterpart. Another accusation so levelled is that because of this, either occurrence statistics are unreliable because cases would be misdiagnosed as other illnesses due to the refusal of diagnosing a vaccinated-against condition, or vaccine efficacy numbers are incorrect, because the knowledge of vaccination affects the diagnostic process. This is false. To illustrate this point, it is important to look at what factored into our calculation of $ p(D_{VPD} \mid S^+) $: the prior probability $ p(D_{VPD}) $ (adjusted only for vaccination status), the background symptom rate $ p(S^+) $, and the conditional probability $ p(S^+ \mid D_{VPD}) $ – none of which depends on any diagnostic 'refusal'. In other words, the above can be, and was, deduced without tautology, and is not circular.
You can trace the steps yourself. It disproves the anti-vaccination trope that either occurrence statistics or vaccine efficacy figures are corrupted by a supposed practice of refusing to diagnose diseases patients are vaccinated against. Note that no such 'practice' actually exists; the claim is rather a misunderstanding of the fact that vaccination does, undeniably, make disease less likely, and that therefore the probability of $ D_{VPD} $ is lower in such patients, all other things being equal, as the case of Alice vs Bob shows.
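Since these are notebooks, the two calculations are easy to reproduce; a minimal sketch using the numbers defined in the scenario above:

```python
p_d = 0.001             # prevalence among unvaccinated patients presenting here
p_s = 0.08              # fraction of all presenting patients who are S+
p_s_given_d = 1.0       # S is pathognomonic: every true case is S+
vaccine_failure = 0.02  # 1 - E_vax

p_bob = p_s_given_d * p_d / p_s                        # 0.0125
p_alice = p_s_given_d * (vaccine_failure * p_d) / p_s  # 0.00025
print(p_bob, p_alice, p_bob / p_alice)                 # ratio: 50.0
```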
08:45 Non-termination of Dalvik bytecode via compilation to CLP SPEAKER: Fred Mesnard ABSTRACT. We present a set of rules for compiling a Dalvik bytecode program into a logic program with array constraints. Non-termination of the resulting program entails that of the original one, hence the techniques we have presented before for proving non-termination of constraint logic programs can be used for proving non-termination of Dalvik programs. 09:15 Geometric Series as Nontermination Arguments for Linear Lasso Programs SPEAKER: Matthias Heizmann ABSTRACT. We present a new kind of nontermination argument for linear lasso programs, called geometric nontermination argument. A geometric nontermination argument is a finite representation of an infinite execution of the form $(\vec{x} + \sum_{i=0}^t \lambda^i \vec{y})_{t \geq 0}$. The existence of this nontermination argument can be stated as a set of nonlinear algebraic constraints. We show that every linear loop program that has a bounded infinite execution also has a geometric nontermination argument. Furthermore, we discuss nonterminating programs that do not have a geometric nontermination argument. 09:45 Non-termination using Regular Languages SPEAKER: unknown ABSTRACT. We describe a method for proving non-termination of term rewriting systems that do not admit looping reductions, that is, reductions of the form $t \to^* C[t\sigma]$. As certificates of non-termination, we employ regular automata. 10:45 Termination of Biological Programs SPEAKER: Jasmin Fisher ABSTRACT. Living cells are highly complex reactive systems operating under varying environmental conditions. By executing diverse cellular programs, cells are driven to acquire distinct cell fates and behaviours. Deciphering these programs is key to understanding how cells orchestrate their functions to create robust systems in health and disease. Due to the staggering complexity, this remains a major challenge. Stability in biological systems is a measure of the homeostatic nature and robustness against environmental perturbations. In computer science, stability means that the system will eventually reach a fixed point regardless of its initial state. Or, in other words, that all computations terminate with variables acquiring the same value regardless of the path that led to termination. Based on robust techniques to prove stabilization/termination in very large systems, we have developed an innovative platform called BMA that allows biologists to model and analyse biological signalling networks. BMA analyses systems for stabilization, searches for paths leading to stabilization, and allows for bounded model-checking and simulation, all with intelligible visualization of results. In this talk, I will summarize our efforts in this direction and talk about the application of termination analysis in drug discovery and cancer. Joint work with Byron Cook, Nir Piterman, Samin Ishtiaq, Alex Taylor and Ben Hall. 11:45 On Improving Termination Preservability of Transformations from Procedural Programs into Rewrite Systems by Using Loop Invariants SPEAKER: Naoki Nishida ABSTRACT. Recently, to analyze procedural programs by using techniques in the field of term rewriting, several transformations of a program into a rewrite system have been developed. Such transformations are basically complete in the sense of computation, and e.g., termination of the rewrite system ensures termination of the program. 
However, in general, termination of the program is not preserved by the transformations and, thus, the preservation of termination is a common interesting problem. In this paper, we discuss the improvement of a transformation from a simple procedural program over integers into a constrained term rewriting system by appending loop invariants to the loop conditions of "while" statements so as to preserve termination as much as possible. 12:15 Automatic Termination Analysis for GPU Kernels SPEAKER: Jeroen Ketema ABSTRACT. We describe a method for proving termination of massively parallel GPU kernels. An implementation in KITTeL is able to show termination of 94% of the 598 kernels in our benchmark suite. 12:45 Discussion SPEAKER: Everyone 14:30 To Infinity... and Beyond! SPEAKER: Caterina Urban ABSTRACT. The traditional method for proving program termination consists in inferring a ranking function. In many cases (e.g. programs with unbounded non-determinism), a single ranking function over the natural numbers is not sufficient. Hence, we propose a new abstract domain to automatically infer ranking functions over ordinals. We extend an existing domain for piecewise-defined natural-valued ranking functions to polynomials in $\omega$, where the polynomial coefficients are natural-valued functions of the program variables. The abstract domain is parametric in the choice of the state partitioning inducing the piecewise definition and the type of functions used as polynomial coefficients. To our knowledge this is the first abstract domain able to reason about ordinals. Handling ordinals leads to a powerful approach for proving termination of imperative programs, which in particular allows us to take a first step in the direction of proving termination under fairness constraints and proving liveness properties of (sequential and) concurrent programs. 15:00 Real-world loops are easy to predict: a case study SPEAKER: Laure Gonnord ABSTRACT. In this paper we study the relevance of fast and simple solutions for computing approximations of the number of iterations of loops (the loop trip count) of imperative real-world programs. The context of this work is the use of these approximations in compiler optimizations: most of the time, the optimizations yield greater benefits for large trip counts, and are either innocuous or detrimental for small ones. In this particular work, we argue that, although predicting exactly the trip count of a loop is undecidable, most of the time there is no need to use computationally expensive state-of-the-art methods to compute (an approximation of) it. We support our position with an actual case study. We show that a fast predictor can be used to speed up the JavaScript JIT compiler of Firefox - one of the most well-engineered runtime environments in use today. We have accurately predicted over 85% of all the interval loops found in typical JavaScript benchmarks, and in millions of lines of C code. Furthermore, we have been able to speed up several JavaScript programs by over 5%, reaching 24% improvement in one benchmark. 15:30 Type Introduction for Runtime Complexity Analysis SPEAKER: unknown ABSTRACT. In this note we show that the runtime complexity function of a sorted rewrite system R coincides with the runtime complexity function of the unsorted rewrite system obtained by forgetting sort information. Hence our result states that sort introduction, a process that is easily carried out via unification, is sound for runtime complexity analysis.
Our result thus provides the foundation for exploiting sort information in the analysis of TRSs. 16:30 Foundations and Technology Competitions Award Ceremony ABSTRACT. The third round of the Kurt Gödel Research Prize Fellowships Program, under the title Connecting Foundations and Technology, aims at supporting young scholars in the early stages of their academic careers by offering the highest fellowships in the history of logic, kindly supported by the John Templeton Foundation. Young scholars at most 40 years old at the time of the commencement of the Vienna Summer of Logic (July 9, 2014) will be awarded one fellowship, in the amount of EUR 100,000, in each of the following categories: The following three Boards of Jurors were in charge of choosing the winners: http://fellowship.logic.at/ 17:30 FLoC Olympic Games Award Ceremony 1 SPEAKER: FLoC Olympic Games ABSTRACT. The aim of the FLoC Olympic Games is to start a tradition in the spirit of the ancient Olympic Games, a Panhellenic sport festival held every four years in the sanctuary of Olympia in Greece, this time in the scientific community of computational logic. Every four years, as part of the Federated Logic Conference, the Games will gather together the challenging solver competitions from a variety of fields of computational logic. At the Award Ceremonies, the competition organizers will have the opportunity to present their competitions to the public and give away special prizes, the prestigious Kurt Gödel medals, to their successful competitors. This reinforces the main goal of the FLoC Olympic Games, that is, to facilitate the visibility of the competitions associated with the conferences and workshops of the Federated Logic Conference during the Vienna Summer of Logic. This award ceremony will host the 18:15 FLoC Closing Week 1 SPEAKER: Helmut Veith 16:30 Ordering Networks SPEAKER: Lars Hellström ABSTRACT. This extended abstract discusses the problem of defining quasi-orders that are suitable for use with network rewriting. 17:00 Discussion SPEAKER: Everyone
Your task is to take an array of numbers and a real number and return the value at that point in the array. Arrays start at \$\pi\$ and are counted in \$\pi\$ intervals. Thing is, we're actually going to interpolate between elements given the "index". As an example:

Index: 1π    2π    3π    4π    5π    6π
Array: [ 1.1,  1.3,  6.9,  4.2,  1.3,  3.7 ]

Because it's \$\pi\$, we have to do the obligatory trigonometry, so we'll be using cosine interpolation using the following formula: \${\cos(i \mod \pi) + 1 \over 2} * (\alpha - \beta) + \beta\$ where: \$i\$ is the input "index" \$\alpha\$ is the value of the element immediately before the "index" \$\beta\$ is the value of the element immediately after the "index" \$\cos\$ takes its angle in radians Example Given [1.3, 3.7, 6.9], 5.3: Index 5.3 is between \$1\pi\$ and \$2\pi\$, so 1.3 will be used for before and 3.7 will be used for after. Putting it into the formula, we get: \${\cos(5.3 \mod \pi) + 1 \over 2} * (1.3 - 3.7) + 3.7\$ Which comes out to 3.165 Notes Input and output may be in any convenient format You may assume the input number is greater than \$\pi\$ and less than (array length) × \$\pi\$ You may assume the input array will be at least 2 elements long. Your result must have at least two decimal places of precision, be accurate to within 0.05, and support numbers up to 100 for this precision/accuracy. (single-precision floats are more than sufficient to meet this requirement) Happy Golfing!
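An ungolfed Python reference implementation may help clarify the spec (a sketch under the rules above, not a competitive entry; the 1-based \$\pi\$ indexing is handled with integer division):

```python
import math

def interp(arr, i):
    # arr[k-1] sits at index k*pi, so the element "before" i is arr[k-1]
    # with k = floor(i / pi), and the element "after" is arr[k].
    k = int(i // math.pi)
    before, after = arr[k - 1], arr[k]
    t = (math.cos(i % math.pi) + 1) / 2
    return t * (before - after) + after

print(interp([1.3, 3.7, 6.9], 5.3))  # 3.165...
```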
Wolfram Alpha says that $$\sum_{n=1}^{\infty} \frac{1}{n^2-3n+3} = 1 + \frac{\pi \tanh \left ( \frac{\sqrt{3}\pi}{2} \right )}{\sqrt{3}}$$ However I am unable to get it. It is fairly routine to prove that $$\sum_{n=-\infty}^{\infty} \frac{1}{n^2-3n+3} = \frac{2\pi \tanh \left ( \frac{\sqrt{3}\pi}{2} \right )}{\sqrt{3}}$$ by using complex analysis (contour integration), but honestly I am stuck on how to retrieve the original sum. Splitting it up, the bilateral sum gives: \begin{align*} \sum_{n=-\infty}^{\infty} \frac{1}{n^2-3n+3} &= \sum_{n=-\infty}^{-1} \frac{1}{n^2-3n+3} + \frac{1}{3} + \sum_{n=1}^{\infty} \frac{1}{n^2-3n+3} \\ &=\frac{1}{3} +\sum_{n=1}^{\infty} \frac{1}{n^2+3n+3} + \sum_{n=1}^{\infty} \frac{1}{n^2-3n+3} \\ &=\frac{1}{3}+ \sum_{n=1}^{\infty} \left [ \frac{1}{n^2-3n+3} + \frac{1}{n^2+3n+3} \right ] \end{align*} Am I overlooking something here? P.S.: Working with the digamma function, on the other hand, I am not getting the constant; I'm getting $\frac{1}{3}$ instead.
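A quick numeric sanity check of the closed form (not a proof, of course; plain Python):

```python
import math

partial = sum(1 / (n * n - 3 * n + 3) for n in range(1, 200001))
closed = 1 + math.pi * math.tanh(math.sqrt(3) * math.pi / 2) / math.sqrt(3)
print(partial, closed)   # both ~2.7984; the tail beyond N = 2*10^5 is ~5e-6
```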
Metric signature convention: $(+---)$. First, note that physical dynamics is ultimately decided by the equations of motion, which you get from the Lagrangian $\mathcal{L}$ after using the least action principle. The kinetic term in a $1$-derivative (before integration by parts) field theory goes like $\mathcal{L} \sim \partial_\mu \phi \partial^\mu \phi \sim -\phi \square \phi$ whose equations of motion are $\square \phi + \cdots = 0$. This is a second order differential equation and so needs two initial conditions if you want to simulate the system. The reason why people get nervous when they see higher derivatives in Lagrangians is that they typically lead to ghosts: wrong-sign kinetic terms, which typically leads to instabilities of the system. Before going to field theory, in classical mechanics, the Ostrogradsky instability says that non-degenerate Lagrangians with higher than first order time derivatives lead to a Hamiltonian $\mathcal{H}$ with one of the conjugate momenta occurring linearly in $\mathcal{H}$. This makes $\mathcal{H}$ unbounded from below. In field theory, kinetic terms like $\mathcal{L} \sim \square \phi (\square+m^2) \phi$ are bad because they lead to negative energies/vacuum instability/loss of unitarity. It has a propagator that goes like $$ \sim \frac{1}{k^2} - \frac{1}{k^2-m^2}$$ where the massive degree of freedom has a wrong sign. Actually, in a free theory, you can have higher derivatives in $\mathcal{L}$ and be fine with it. You won't 'see' the effect of having unbounded energies until you let your ghost-like system interact with a healthy sector. Then, a ghost system with Hamiltonian unbounded from below will interact with a healthy system with Hamiltonian bounded from below. Energy and momentum conservation do not prevent them from exchanging energy with each other indefinitely, leading to instabilities. In a quantum field theory, things get bad from the get-go because (if your theory has a healthy sector, like our real world) the vacuum is itself unstable and nothing prevents it from decaying into a pair of ghosts and photons, for instance. This problem of ghosts is in addition to the general consternation one has when they are required to provide many initial conditions to deal with the initial value problem. Also, in certain effective field theories, you can get wrong-sign spatial gradients $ \mathcal L \sim \dot{\phi}^2 + (\nabla \phi)^2$. (Note that Lorentz invariance is broken here). These lead to gradient instabilities.
Ex. 6.6 Q7 Triangles Solution - NCERT Maths Class 10 Question In the figure below, two chords \(AB\) and \(CD\) intersect each other at the point \(P\). Prove that: (i) \(\Delta APC \sim \Delta DPB\) (ii) \(AP \cdot PB = CP \cdot DP\) Text Solution Reasoning: As we know, two triangles are similar if: (i) their corresponding angles are equal; (ii) their corresponding sides are in the same ratio. We also know that angles in the same segment of a circle are equal. Steps: (i) In \(\Delta APC\) and \(\Delta DPB\): \(\angle APC = \angle DPB\) (vertically opposite angles) \(\angle PAC = \angle PDB\) (angles in the same segment) \(\Rightarrow \Delta APC \sim \Delta DPB\) (AA criterion) (ii) Since \(\Delta APC \sim \Delta DPB\), the corresponding sides are proportional: \[\begin{align} \frac{{AP}}{{PD}}& = \frac{{PC}}{{PB}} = \frac{{AC}}{{DB}}\\ \frac{{AP}}{{PD}} &= \frac{{PC}}{{PB}} \end{align}\] \(\Rightarrow \;\;AP \cdot PB = PC \cdot PD\)
Ex.14.2 Q1 Statistics Solution - NCERT Maths Class 10 Question The following table shows the ages of the patients admitted in a hospital during a year:

Age (in years): 5 – 15 | 15 – 25 | 25 – 35 | 35 – 45 | 45 – 55 | 55 – 65
Number of patients: 6 | 11 | 21 | 23 | 14 | 5

Find the mode and the mean of the data given above. Compare and interpret the two measures of central tendency. Text Solution What is known? The ages of the patients admitted in a hospital during a year. What is unknown? The mode and the mean of the data, and their comparison and interpretation. Reasoning: We will find the mean by the direct method. Mean, \(\overline x = \frac{{\sum {{f_i}{x_i}} }}{{\sum {{f_i}} }}\) The modal class is the class with the highest frequency. Mode \( = l + \left( {\frac{{{f_1} - {f_0}}}{{2{f_1} - {f_0} - {f_2}}}} \right) \times h\) where \(h\) is the class size, \(l\) the lower limit of the modal class, \(f_1\) the frequency of the modal class, \(f_0\) the frequency of the class preceding the modal class, and \(f_2\) the frequency of the class succeeding the modal class. Steps: To find the mean We know that class mark \({x_i} = \frac{{{\text{Upper class limit }} + {\text{ Lower class limit}}}}{2}\)

Class interval | \(f_i\) | \(x_i\) | \(f_i x_i\)
5 – 15 | 6 | 10 | 60
15 – 25 | 11 | 20 | 220
25 – 35 | 21 | 30 | 630
35 – 45 | 23 | 40 | 920
45 – 55 | 14 | 50 | 700
55 – 65 | 5 | 60 | 300
Total | 80 | | 2830

From the table it can be observed that \[\begin{array}{l} \sum {{f_i} = 80} \\ \sum {{f_i}{x_i}} = 2830 \end{array}\] Mean, \(\overline x = \frac{{\sum {{f_i}{x_i}} }}{{\sum {{f_i}} }} = \frac{{2830}}{{80}} = 35.37\) To find the mode From the table, it can be observed that the maximum class frequency is \(23,\) belonging to class interval \(35 − 45.\) Therefore: Modal class \(=35 − 45\) Class size, \(h=10\) Lower limit of modal class, \(l=35\) Frequency of modal class, \(f_1=23\) Frequency of class preceding the modal class, \(f_0=21\) Frequency of class succeeding the modal class, \(f_2=14\) Mode \( = l + \left( {\frac{{{f_1} - {f_0}}}{{2{f_1} - {f_0} - {f_2}}}} \right) \times h\) \[\begin{array}{l} = 35 + \left( {\frac{{23 - 21}}{{2 \times 23 - 21 - 14}}} \right) \times 10\\ = 35 + \left( {\frac{2}{{46 - 35}}} \right) \times 10\\ = 35 + \frac{2}{{11}} \times 10\\ = 35 + 1.82\\ = 36.82 \end{array}\] So the modal age is \(36.82\) years, which means the greatest number of patients admitted to the hospital are around \(36.82\) years of age, while the mean age is \(35.37\) years, i.e. on average the patients admitted are \(35.37\) years old.
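Both computations can be checked with a short Python sketch using the same grouped-data formulas (not part of the original solution):

```python
classes = [(5, 15, 6), (15, 25, 11), (25, 35, 21),
           (35, 45, 23), (45, 55, 14), (55, 65, 5)]
f = [fi for _, _, fi in classes]
x = [(lo + hi) / 2 for lo, hi, _ in classes]   # class marks

mean = sum(fi * xi for fi, xi in zip(f, x)) / sum(f)
print(mean)                       # 35.375, reported as 35.37 above

i = f.index(max(f))               # modal class 35-45 (frequency 23)
l, h = classes[i][0], classes[i][1] - classes[i][0]
f1, f0, f2 = f[i], f[i - 1], f[i + 1]
print(l + (f1 - f0) / (2 * f1 - f0 - f2) * h)  # 36.81..., i.e. ~36.82
```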
MSD Module Overview: Compute the mean squared displacement. Details: The freud.msd module provides functions for computing the mean squared displacement (MSD) of particles in periodic systems. MSD class freud.msd.MSD(box=None, mode=None) Compute the mean squared displacement. The mean squared displacement (MSD) measures how much particles move over time. The MSD plays an important role in characterizing Brownian motion, since it provides a measure of whether particles are moving according to diffusion alone or if there are other forces contributing. There are a number of definitions for the mean squared displacement. This function provides access to the two most common definitions through the mode argument. 'window' (default): This mode calculates the most common form of the MSD, which is defined as\[MSD(m) = \frac{1}{N_{particles}} \sum_{i=1}^{N_{particles}} \frac{1}{N-m} \sum_{k=0}^{N-m-1} (\vec{r}_i(k+m) - \vec{r}_i(k))^2\] where \(\vec{r}_i(t)\) is the position of particle \(i\) in frame \(t\). According to this definition, the mean squared displacement is the average displacement over all windows of length \(m\) over the course of the simulation. Therefore, for any \(m\), \(MSD(m)\) is averaged over all windows of length \(m\) and over all particles. This calculation can be accessed using the 'window' mode of this function. Note: The most intensive part of this calculation is computing an FFT. To maximize performance, freud attempts to use the fastest FFT library available. By default, the order of preference is pyFFTW, SciPy, and then NumPy. If you are experiencing significant slowdowns in calculating the MSD, you may benefit from installing a faster FFT library, which freud will automatically detect. The performance change will be especially noticeable if the length of your trajectory is a number whose prime factorization consists of extremely large prime factors. The standard Cooley-Tukey FFT algorithm performs very poorly in this case, so installing pyFFTW will significantly improve performance. Note that while pyFFTW is released under the BSD 3-Clause license, the FFTW library is available under either GPL or a commercial license. As a result, if you wish to use this module with pyFFTW in code, your code must also be GPL licensed unless you purchase a commercial license. 'direct': Under some circumstances, however, we may be more interested in calculating a different quantity described by\[MSD(t) = \frac{1}{N_{particles}} \sum_{i=1}^{N_{particles}} (\vec{r}_i(t) - \vec{r}_i(0))^2\] In this case, at each time point (i.e. simulation frame) we simply compute how much particles have moved from their initial position, averaged over all particles. For more information on this calculation, see the Wikipedia page. Note: The MSD is only well-defined when the box is constant over the course of the simulation. Additionally, the number of particles must be constant over the course of the simulation. Module author: Vyas Ramasubramani <vramasub@umich.edu> New in version 1.0. accumulate: Calculate the MSD for the positions provided and add to the existing per-particle data. Note: Unlike most methods in freud, accumulation for the MSD is split over particles rather than frames of a simulation. The reason for this choice is that efficient computation of the MSD requires using the entire trajectory for a given particle.
As a result, this accumulation is primarily useful when the trajectory is so large that computing an MSD on all particles at once is prohibitive. Parameters: positions ((\(N_{frames}\), \(N_{particles}\), 3) numpy.ndarray) – The particle positions over a trajectory. If neither box nor images are provided, the positions are assumed to be unwrapped already. images ((\(N_{frames}\), \(N_{particles}\), 3) numpy.ndarray, optional) – The particle images to unwrap with, if provided. Must be provided along with a simulation box (in the constructor) if particle positions need to be unwrapped. If neither is provided, positions are assumed to be unwrapped already. compute: Calculate the MSD for the positions provided. Parameters: positions ((\(N_{frames}\), \(N_{particles}\), 3) numpy.ndarray) – The particle positions over a trajectory. If neither box nor images are provided, the positions are assumed to be unwrapped already. images ((\(N_{frames}\), \(N_{particles}\), 3) numpy.ndarray, optional) – The particle images to unwrap with, if provided. Must be provided along with a simulation box (in the constructor) if particle positions need to be unwrapped. If neither is provided, positions are assumed to be unwrapped already. plot: Plot MSD. reset: Clears the stored MSD values from previous calls to accumulate (or the last call to compute).
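A minimal usage sketch based only on the API shown above. The trajectory is synthetic, and the name of the result attribute is an assumption here (the attribute listing was lost in this copy; in recent freud releases it is `msd`):

```python
import numpy as np
import freud

# Synthetic random-walk trajectory: 200 frames, 50 particles, 3 dimensions.
# Positions are generated unwrapped, so neither a box nor images are needed.
rng = np.random.default_rng(42)
positions = np.cumsum(rng.normal(size=(200, 50, 3)), axis=0)

msd = freud.msd.MSD(mode='window')   # 'window' is the default mode
msd.compute(positions)

# Assumption: the computed values are exposed as the `msd` attribute,
# one value per window length m; for a random walk it grows ~linearly.
print(msd.msd[:5])
```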
Let $f$ be a density on $\mathbb{R}^{p}$. Let $f_{\theta} = \sum_{i=1}^{d} \alpha_{i}\mathcal{N}_{p}(\cdot \, ; \, \theta_{i})$ be a mixture of $d$ Gaussian distributions on $\mathbb{R}^{p}$. For each $i$, $\theta_{i}$ is a vector of parameters (mean and covariance) which characterize the $i$-th component of the mixture. I would like to minimize the Kullback-Leibler divergence $K(f||f_{\theta})$. It amounts to finding $\theta^{\ast}$ such that: $$ \theta^{\ast} \in \mathop{\mathrm{argmax}} \limits_{\theta} \int \log \big( f_{\theta}(x) \big) f(x) \, dx = \mathop{\mathrm{argmax}} \limits_{\theta} \int \log \Big( \sum_{i=1}^{d} \alpha_{i} \mathcal{N}_{p}( x \, ; \, \theta_{i} ) \Big) f(x) \, dx $$ How can the EM algorithm be used to find $\theta^{\ast}$? The optimization problem may be rewritten: $$ \theta^{\ast} \in \mathop{\mathrm{argmax}} \limits_{\theta} \, \mathbb{E}_{f}\left[ \log f_{\theta}(X) \right] $$ If I understand correctly, we have an $n$-sample $(Y_{1},\ldots,Y_{n})$ from $f$ and we know, from Monte Carlo integration, that $$ \frac{1}{n} \sum_{i=1}^{n} \log \big( f_{\theta}(Y_i) \big) $$ is an approximation of $\mathbb{E}_{f}\left[ \log f_{\theta}(X) \right]$. What I do not really understand is why $\theta^{\ast}$ can be obtained as follows: $$ \theta^{\ast} = \mathop{\mathrm{argmax}} \limits_{\theta} \frac{1}{n} \sum_{i=1}^{n} \log \big( f_{\theta}(Y_i) \big). $$
$\def\cov{\mathop{\mathrm{cov}}}\def\var{\mathop{\mathrm{var}}}$My professor uses something that he calls the "projection theorem", to get rid of the condition in conditional probabilities (expectation and variance). I have not found anything about it on the internet, so I am wondering where it comes from, and if it is right. Here is the so-called "projection theorem": $$E[\tilde{x}\mid \tilde{y} = y] = E[\tilde{x}] + \frac{\cov(\tilde{x},\tilde{y})}{\var(\tilde{y})}\times(\tilde{y}-E(\tilde{y})),$$ and $$\var[\tilde{x}\mid \tilde{y}] = \var(\tilde{x})-\frac{\cov^2(\tilde{x},\tilde{y})}{\var(\tilde{y})}.$$ Are these formulas correct?
This is an extension to my question here: Clay Institute Navier Stokes . Last time my solution was wrong because my velocity vector did not use bounded functions. Also someone said that I am only showing one example, but I'm trying to prove a statement that says "there exists" so I should only have to find one right? I have looked into smooth and bounded functions and it looks like Gaussian functions are good examples, so I have rewritten another example. If you could please tell me what is still wrong/not satisfactory. We are trying to satisfy: \begin{equation} \frac{\partial \textbf{u}}{\partial t} + (\textbf{u}\cdot\nabla)\textbf{u}=-\frac{\nabla P}{\rho} + \nu\nabla^{2}\textbf{u}+\textbf{f}, \end{equation} \begin{equation}\label{incompr} \nabla\cdot\textbf{u}=0. \end{equation} We get to pick a velocity (u) and pressure (P) function to satisfy these equations. We take $n=3$ by putting our velocity and pressure vectors in the 3D Cartesian plane with $x,y,z$. We let $\textbf{f}$ be 0 according to the problem description, and assume kinematic viscosity (nu) to be greater than 0. Let \begin{equation} \textbf{u}(x,t)=\begin{bmatrix} e^{-t^2} \\ e^{-t^2} \\ e^{-t^2} \end{bmatrix} \end{equation} Then $\textbf{u}(x,t)$ satisfies the divergence free condition because \begin{equation} \nabla \cdot \textbf{u} = \frac{\partial u_1}{\partial x}+\frac{\partial u_2}{\partial y}+\frac{\partial u_3}{\partial z} \end{equation} which is \begin{equation} \nabla \cdot \textbf{u} = 0+0+0=0 \end{equation} Then \begin{equation}\frac{\partial \textbf{u}}{\partial t}=\begin{bmatrix} -2te^{-t^2} \\ -2te^{-t^2} \\ -2te^{-t^2} \end{bmatrix} \end{equation} And \begin{equation} \textbf{u} \cdot \nabla = (e^{-t^2})\frac{\partial}{\partial x} + (e^{-t^2})\frac{\partial}{\partial y}+(e^{-t^2})\frac{\partial}{\partial z} \end{equation} And \begin{equation} \begin{split} (\textbf{u} \cdot \nabla)\textbf{u} &= (e^{-t^2})\frac{\partial \textbf{u}}{\partial x} + (e^{-t^2})\frac{\partial \textbf{u}}{\partial y}+(e^{-t^2})\frac{\partial \textbf{u}}{\partial z} \\ &=(e^{-t^2})\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}+(e^{-t^2})\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}+(e^{-t^2})\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \\ &= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \end{split} \end{equation} Since laplacian of $\textbf{u}$ is 0 the whole kinematic velocity term goes to 0. And the final Navier-Stokes expression is: \begin{equation} \begin{bmatrix} -2te^{-t^2} \\ -2te^{-t^2} \\ -2te^{-t^2} \end{bmatrix}+\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}=-\frac{\nabla P}{\rho} \end{equation} So we set $\rho$ equal to 1 for easy calculation. Bring the negative to the left hand side and combine the left hand side. \begin{equation} \begin{bmatrix} 2te^{-t^2} \\ 2te^{-t^2} \\ 2te^{-t^2} \end{bmatrix}=\nabla P \end{equation} and we solve for a solution for P \begin{equation} P=2 e^{-t^2} t x + 2 e^{-t^2} t y + 2 e^{-t^2} t z \end{equation} So we now have infinitely differentiable functions for u and P. Where did I make a mistake/misunderstand the problem? http://www.claymath.org/sites/default/files/navierstokes.pdf . I'm trying to prove statement A
The sequence starts$$0, 1, -5, 25, -105, 105, 5355, \dots$$ We can observe that the statement is true not only for primes, but for odd numbers in general. Even though recurrences might be better for solving this kind of problem, here we can go for a closed-form formula (luckily there is one!). I have used an approach inspired by http://mathforum.org/library/drmath/view/67314.html. Currently the coefficients contain quadratic polynomials in $n$, but we can make them linear, which makes them a bit easier to work with. With the substitution $b_n=\frac{a_n}{(n+2)!}$ and some algebra we get $$(n+4)b_{n+2}+(2n+3)b_{n+1}+(2n-2)b_n=0$$ Now that the coefficients are linear, let's try to find its generating function $y(x)=\sum_{n \geq 1}b_n x^n$. By summing the whole equation and after some technical steps, we arrive at $$y'(x)(2x^3+2x^2+x)=\frac{1}{6}x^2+(2x^2-x-2)y(x)$$which is a linear differential equation with variable coefficients, and can be solved to give $$y(x)=-\frac{1}{3x}-\frac{1}{9x^2}-\frac{5x}{18}-\frac{1}{2}+\frac{(2x^2+2x+1)^{3/2}}{9x^2}$$(I have used a CAS for solving this differential equation, but that is just a technicality; the result is trivial to verify by differentiation.) Now, to get information about the coefficients, let's expand it into a series. The power on the right expands by the binomial series:$$(2x^2+2x+1)^{3/2}=\sum_{k=0}^{\infty}\binom{3/2}{k}2^k(x^2+x)^k$$Using the binomial theorem for the inner sum and playing with the indices, we obtain$$[x^n](2x^2+2x+1)^{3/2} = \sum_{n/2 \leq k \leq n}\binom{3/2}{k}\binom{k}{2k-n}2^k$$Dividing by $9x^2$ and noticing that the first few terms in the resulting series are equal to $\frac{1}{9x^2}+\frac{1}{3x}+\frac{1}{2}+\frac{5x}{18}$, we can cancel those out in $y(x)$, and so$$b_n=\sum_{(n+2)/2 \leq k \leq n+2}\binom{3/2}{k}\binom{k}{2k-n-2}2^k$$and in turn$$a_n=(n+2)!\sum_{(n+2)/2 \leq k \leq n+2}\binom{3/2}{k}\binom{k}{2k-n-2}2^k$$Looking at the first few individual terms of the sum for a couple of values of $n$, we can observe those are all integers. This suggests we can further simplify the expression. Writing out the definitions and all the factorials, one eventually finds for $n>1$:$$\boxed{a_n = \sum_{(n+2)/2 \leq k \leq n+2}\binom{k}{n+2-k}\frac{(n+2)!}{k!}\frac{(2k-5)!!}{3}(-1)^k}$$We can see that for $k<n+2$ the terms above are divisible by $n+2$. So we have\begin{align}a_n &\equiv \binom{n+2}{0}\frac{(n+2)!}{(n+2)!}\frac{(2(n+2)-5)!!}{3}(-1)^{n+2}\\ &= \frac{(2n-1)!!}{3}(-1)^{n+2}\\ &= (2n-1)(2n-3)\cdots 5 \cdot (-1)^{n+2} \pmod{n+2}\end{align}If $n\geq 3$ is odd, then $n+2$ is odd as well, so the product above contains it as a factor and is thus divisible by it. In other words $a_n \equiv 0 \pmod {n+2}$, which is what we wanted to prove. The claim that $a_{n+1} \equiv 0 \pmod {n+2}$ now follows, for example, by writing the original equation as $$a_{n+1}=-2(n-2)(n+2)a_{n-1}-(2n+1)a_n$$which modulo $n+2$ yields$$a_{n+1} \equiv 3a_n \equiv 0 \pmod {n+2}$$I might have omitted a few details here and there, but the main idea is hopefully clear.
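A quick numeric check of the divisibility claim via the recurrence quoted at the end (the seed $a_1=0$, $a_2=1$ is an assumption here, chosen because it reproduces the listed terms $0, 1, -5, 25, -105, 105, 5355$):

```python
# Verify a_n ≡ 0 and a_{n+1} ≡ 0 (mod n+2) for odd n >= 3, using
# a_{n+1} = -2(n-2)(n+2) a_{n-1} - (2n+1) a_n with assumed seed a1=0, a2=1.
a = {1: 0, 2: 1}
for n in range(2, 200):
    a[n + 1] = -2 * (n - 2) * (n + 2) * a[n - 1] - (2 * n + 1) * a[n]

for n in range(3, 199, 2):
    assert a[n] % (n + 2) == 0 and a[n + 1] % (n + 2) == 0
print("checked all odd n with 3 <= n < 199")
```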
There are some special functions of 3 or more complex variables that are analytic in some domain (a region in $\mathbb C^n$) with respect to each variable. To give some examples: the incomplete beta function $B(z; a, b)$, the Lerch transcendent $\Phi(z, s, a)$, the Weierstrass elliptic function $\wp(z;g_2,g_3)$, hypergeometric-family functions, etc. Is it possible to express each (or at least some) of these functions as a composition of several analytic functions of 1 or 2 complex variables? Or, if we restrict their domain to the reals, is it possible to express them as a composition of several infinitely differentiable (with respect to each variable) functions of 1 or 2 real variables? The same question applies to functions of 2 variables (e.g. polylogarithms, incomplete elliptic integrals, the Hurwitz zeta function, Bessel-family functions, etc.): Is it possible to represent them as a composition of several infinitely differentiable functions of 1 variable and the single fixed function of 2 variables $(x,y)\mapsto x+y$? To give an example where the answer to the last question is positive, consider the complete beta function $B(a,b)$. It can be represented as $$B(a,b)=\exp\big((\ln\Gamma(a)+\ln\Gamma(b))+(-\ln\Gamma(a+b))\big)$$ that is, a composition of the 2-variable sum function and several infinitely differentiable 1-variable functions $x\mapsto\exp(x)$, $x\mapsto\ln\Gamma(x)$ and $x\mapsto -x$.
I was very surprised when I first encountered the Mertens conjecture. Define $$ M(n) = \sum_{k=1}^n \mu(k) $$ The Mertens conjecture was that $|M(n)| < \sqrt{n}$ for $n>1$, in contrast to the Riemann Hypothesis, which is equivalent to $M(n) = O(n^{\frac12 + \epsilon})$ . The reason I found this conjecture surprising is that it fails heuristically if you assume the Mobius function is randomly $\pm1$ or $0$. The analogue fails with probability $1$ for a random $-1,0,1$ sequence where the nonzero terms have positive density. The law of the iterated logarithm suggests that counterexamples are large but occur with probability 1. So, it doesn't seem surprising that it's false, and that the first counterexamples are uncomfortably large. There are many heuristics you can use to conjecture that the digits of $\pi$, the distribution of primes, zeros of $\zeta$ etc. seem random. I believe random matrix theory in physics started when people asked whether the properties of particular high-dimensional matrices were special or just what you would expect of random matrices. Sometimes the right random model isn't obvious, and it's not clear to me when to say that an heuristic is reasonable. On the other hand, if you conjecture that all naturally arising transcendentals have simple continued fractions which appear random, then you would be wrong, since $e = [2;1,2,1,1,4,1,1,6,...,1,1,2n,...]$, and a few numbers algebraically related to $e$ have similar simple continued fraction expansions. What other plausible conjectures or proven results can be framed as heuristically false according to a reasonable probability model?
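The heuristic reasoning here is easy to poke at numerically; a small Python sketch that sieves the Möbius function and tracks $\max_{n \le N} |M(n)|/\sqrt{n}$ (illustrative only: the known counterexamples lie astronomically beyond any direct search):

```python
import math

N = 10**6
mu = [1] * (N + 1)
is_prime = [True] * (N + 1)
for p in range(2, N + 1):
    if is_prime[p]:
        for m in range(2 * p, N + 1, p):
            is_prime[m] = False
        for m in range(p, N + 1, p):
            mu[m] *= -1
        for m in range(p * p, N + 1, p * p):
            mu[m] = 0            # p^2 divides m, so mu(m) = 0

M, worst = 0, 0.0
for n in range(1, N + 1):
    M += mu[n]
    if n > 1:
        worst = max(worst, abs(M) / math.sqrt(n))
print(worst)   # stays well below 1 in this range, despite the heuristic
```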
Hi, I have a simple question, as in the title: why is volumetric emission proportional to the absorption coefficient? I often see the volumetric emission term written as \(\sigma_a(x)\,L_e(x, w)\). I can also see another representation, a volumetric emittance function (e.g. in Mark Pauly's thesis, Robust Monte Carlo Methods for Photorealistic Rendering of Volumetric Effects), which has the unit of radiance divided by metre (that is, W sr^-1 m^-3). Do particles that emit light not scatter light at all? Thanks

I don't quite understand the question: doesn't the first term refer to a particle density at an integration position x that emits radiance L_e in viewing direction w, and that absorbs radiance w.r.t. sigma_a(x)? The emitted radiance is not proportional to the absorption, but is scaled by sigma_a(x). Let sigma_a := 1.0 be a constant for all x with respect to the density field; then your model only accounts for emission. With L_b(xb,w) + \int_{xb}^{x} L_e(x,w) sigma_a(x), xb being the position of a constantly radiating background light and \int_{xb}^{x} meaning integration over the viewing ray from the backlight to the integration position, you get the classical emission+absorption model that is e.g. used for interactive DVR in SciVis. In- and out-scattering can be incorporated in the equation. See Nelson Max's '95 paper on optical models for DVR for the specifics: https://www.cs.duke.edu/courses/cps296. ... dering.pdf Also note that those models usually don't consider individual particles, but rather particle densities, and then derive coefficients e.g. by considering the projected area of all particles inside an infinitesimally flat cylinder projected onto the cylinder cap. Emission and absorption are sometimes expressed with a single coefficient in code for practical reasons, e.g. so that a single coefficient in [0..1] can be used to look up an RGBA tuple in a single, pre-computed and optionally pre-integrated transfer function texture.
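To make the emission+absorption model above concrete, here is a minimal 1D ray-marching sketch in Python (the sigma_a and L_e fields are invented for illustration; front-to-back compositing, with the emission term scaled by sigma_a as discussed):

```python
import math

def sigma_a(x):                # absorption coefficient along the ray
    return 0.5 + 0.4 * math.sin(3.0 * x)

def L_e(x):                    # emitted radiance of the medium at x
    return 1.0 if 1.0 < x < 2.0 else 0.0

def march(L_b=0.2, depth=4.0, n=1000):
    dx = depth / n
    T, L = 1.0, 0.0            # transmittance from the eye, accumulated radiance
    for i in range(n):         # front-to-back along the viewing ray
        x = (i + 0.5) * dx
        L += T * sigma_a(x) * L_e(x) * dx   # emission, scaled by sigma_a
        T *= math.exp(-sigma_a(x) * dx)     # absorption attenuates the rest
    return L + T * L_b         # background light, attenuated by the total T

print(march())
```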
Thanks for the reply. I can find the volumetric emission term proportional to the absorption coefficient for example in Jensen's photon mapping book, the Spectral and Decomposition Tracking paper, and Wojciech Jarosz's thesis. In Jarosz's thesis there is the following sentence next to equation (4.12) on page 60: "Media, such as fire, may also emit radiance, Le, by spontaneously converting other forms of energy into visible light. This emission leads to a gain in radiance expressed as:" I think it is required that emitting particles not scatter light in order for L_e^V to be representable in the decomposed form \sigma_a(x) L_e(x, w).

"I think it is required that emitting particles should not scatter light" - I'm not quite sure I understand how you come to this assumption, or whether I fully understand your question, but I don't see why particles that emit light shouldn't also scatter light. However, the mental model behind radiance transfer is not one that considers the interaction of individual particles. The model rather derives the radiance in a density field due to emission, absorption, and scattering phenomena at certain sampling positions and in certain directions. So the question is: for a position x, how much light is emitted by particles at or near x, and how much light arrives there due to other particles scattering light towards x ("in-scattering"); and conversely, how much light is absorbed due to local absorption phenomena at x, and how much light is scattered away from x ("out-scattering", distributed w.r.t. the phase function). See e.g. Hadwiger et al., "Real-Time Volume Graphics", p. 6 (https://doc.lagout.org/science/0_Comput ... aphics.pdf): "Analogously, the total emission coefficient can be split into a source term q, which represents emission (e.g., from thermal excitation), and a scattering term." Out-scattering + heat dissipation etc. ==> total absorption at point x contributed to a viewing ray in direction w. In-scattering + emission ==> added radiance at point x along the viewing direction w. It is not about individual particles. The scattering equation is about the four effects contributing to the total radiance at a point x in direction w. There are no individual particles associated with the position x; you consider particle distributions and how they affect the radiance at x. The radiance increases if particles scatter light towards x, or if particles at (or near) x emit light. The radiance goes down due to absorption and out-scattering from the particle density at x. The point x is usually the sampling position encountered when marching a ray through the density field, and is not associated with individual particle positions. I didn't find a more general source and am working with this paper anyway; the paper also shows the scattering equation and states that it has a combined emission+in-scattering term: http://www.vis.uni-stuttgart.de/~amentm ... eprint.pdf (cf. Eq. 3 on page 3). Hope I'm not misreading your question?

My current thinking process when reading the paper you last mentioned is as follows:

0. "However, the mental model behind radiance transfer is not one that considers the interaction of individual particles." - Yes, I know.

1. Eq. (1) says that the contribution from the source radiance Lm(x', w) is proportional to the extinction coefficient sigma_t(x').
- I can understand an RTE of this form: the probability density with which light interacts (via one of scattering/absorption/emission) with the medium at x' is proportional to the particle density, that is, to sigma_t(x').

2. Eq. (3) says that once an interaction happens, it is emission with probability (1 - \Lambda) and scattering with probability \Lambda. - I can understand the latter, because the scattering albedo \Lambda is the probability that an interaction is a scattering event; this is straightforward. However, I can't understand the former. The original question, "why is volumetric emission proportional to the absorption coefficient?", can now be paraphrased as follows: I can understand that absorption happens with probability (1 - \Lambda), but I cannot understand why emission also happens with probability (1 - \Lambda). Shouldn't the probability that emission happens be independent of the absorption coefficient? I'm sorry in case the above explanation confuses you more, and thank you for your kindness in replying in such detail.

I found an interesting lecture script: http://www.ita.uni-heidelberg.de/~dulle ... pter_3.pdf Section 3.3, Eq. 3.9. As I understand it, ultimately it is a matter of definition, motivated by the thermodynamics of a special case. I imagine that the same particles that block light along some beam also emit light of their own, so it makes sense that emission and absorption strength have a common density-related prefactor. The reverse view, from the point of importance being emitted into the scene, seems more intuitive to me: importance particles have a chance to interact with particles of the medium in proportion to their cross section. If they interact, the medium transfers energy to the imaging sensor.
That lecture script says: "This is Kirchhoff's law. It says that a medium in thermal equilibrium can have any emissivity jν and extinction αν, as long as their ratio is the Planck function." Which sounds like they really CAN'T have just any emissivity and extinction, but have to have them in a specific ratio. For example, for green light of wavelength 570 nm at 2000 K, that ratio is (from the Planck function) 6537, so the extinction is relatively small in comparison. Later it says: "If the temperature is constant along the ray, then the intensity will indeed exponentially approach [the Planck function]." Anyway, the real reason I am replying is so I can share this video of a "black" flame. The flame emits light but has no shadow (it seems fires don't have shadows), but it can be made to have one, and even to appear black, under single-frequency lighting: https://www.youtube.com/watch?v=5ZNNDA2WUSU This seems to contradict the notion that media have to absorb light in order to emit it... unless the amount absorbed is very tiny, as suggested by the lecture.

Ha! Now this comes a bit late, but I appreciate you posting this experiment. It is very cool indeed. I think that, in contrast to the assumptions in that part of the lecture, the lamp is not a black body; at least, obviously, its emission spectrum does not follow Planck's law. Please don't ask when the idealization as a black body is justified in reality... Somewhere I read that good emitters are generally also good absorbers, in the sense of material properties. The experiment displays this very well, since the sodium absorbs a lot of the light, whereas normal air and a normal flame do not.
Preprints (rote Reihe) of the Fachbereich Mathematik, year of publication: 1996

284 A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L} = (L; \land, \lor)\) is generated by the identity function \(id(x)=x\) and the constant functions \(c_a(x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function in one or in several variables is a monotone function of \(\mathcal{L}\). If every monotone function of \(\mathcal{L}\) is a polynomial function, then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length, then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.

285 On derived varieties (1996) Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. In particular, the operators \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example of fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart to solid varieties. Finally we use a property of the commutator of derived algebras in order to show that solvability and nilpotency are preserved under derivation.

279 It is shown that Tikhonov regularization for an ill-posed operator equation \(Kx = y\) using a possibly unbounded regularizing operator \(L\) yields an order-optimal algorithm with respect to a certain stability set when the regularization parameter is chosen according to Morozov's discrepancy principle. A more realistic error estimate is derived when the operators \(K\) and \(L\) are related to a Hilbert scale in a suitable manner. The result includes known error estimates for ordinary Tikhonov regularization and also the estimates available under the Hilbert scale approach.

293 Tangent measure distributions were introduced by Bandt and Graf as a means to describe the local geometry of self-similar sets generated by iteration of contractive similitudes. In this paper we study the tangent measure distributions of hyperbolic Cantor sets generated by contractive mappings which are not similitudes.
We show that the tangent measure distributions of these sets, equipped with either Hausdorff or Gibbs measure, are unique almost everywhere, and give an explicit formula describing them as probability distributions on the set of limit models of Bedford and Fisher.

276 Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\), and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) := \lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t) := \mathbf{Pr}(\Vert a_i \Vert_2\leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e. if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots,G^{(i)},\dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.

282 Let \(a_1,\dots,a_m\) be random points in \(\mathbb{R}^n\) that are independent and identically spherically symmetrically distributed. Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\), and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).

275

277 A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters. It is also shown that an order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number.

274 This paper investigates the convergence of the Lanczos method for computing the smallest eigenpair of a selfadjoint elliptic differential operator via inverse iteration (without shifts). Superlinear convergence rates are established, and their sharpness is investigated for a simple model problem. These results are illustrated numerically for a more difficult problem.

283 A regularization Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems (1996) The first part of this paper studies a Levenberg-Marquardt scheme for nonlinear inverse problems where the corresponding Lagrange (or regularization) parameter is chosen from an inexact Newton strategy. While the convergence analysis of standard implementations based on trust region strategies always requires the invertibility of the Fréchet derivative of the nonlinear operator at the exact solution, the new Levenberg-Marquardt scheme is suitable for ill-posed problems as long as the Taylor remainder is of second order in the interpolating metric between the range and domain topologies.
Estimates of this type are established in the second part of the paper for ill-posed parameter identification problems arising in inverse groundwater hydrology. Both transient and steady-state data are investigated. Finally, the numerical performance of the new Levenberg-Marquardt scheme is studied and compared to a usual implementation on a realistic but synthetic 2D model problem from the engineering literature.

280 This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution of the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration. These assumptions are fulfilled, e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data.

270

301 We extend the methods of geometric invariant theory to actions of non-reductive groups in the case of homomorphisms between decomposable sheaves whose automorphism groups are non-reductive. Given a linearization of the natural action of the group Aut(E)×Aut(F) on Hom(E,F), a homomorphism is called stable if its orbit with respect to the unipotent radical is contained in the stable locus with respect to the natural reductive subgroup of the automorphism group. We establish effective numerical conditions for a linearization such that the corresponding open set of semi-stable homomorphisms admits a good and projective quotient in the sense of geometric invariant theory, and such that this quotient is in addition a geometric quotient on the set of stable homomorphisms.

271 The paper deals with parallel-machine and open-shop scheduling problems with preemptions and an arbitrary nondecreasing objective function. An approach is proposed to describe the solution region for these problems and to reduce them to minimization problems on polytopes. Properties of the solution regions of certain problems are investigated. It is proved that open-shop problems with unit processing times are equivalent to certain parallel-machine problems where preemption is allowed at arbitrary times. A polynomial algorithm is presented transforming a schedule of one type into a schedule of the other type.
8:00 AM | Latest Results for Synthese Abstract: We argue that causal decision theory (CDT) is no worse off than evidential decision theory (EDT) in handling entanglement, regardless of one's preferred interpretation of quantum mechanics. In recent works, Ahmed (Evidence, Decision, and Causality, Cambridge University Press, Cambridge, 2014) and Ahmed and Caulton (Synthese, 191(18): 4315–4352, 2014) have claimed the opposite; we argue that they are mistaken. Bell-type experiments are not instances of Newcomb problems, so CDT and EDT do not diverge in their recommendations. We highlight the fact that a causal decision theorist should take all lawlike correlations into account, including potentially acausal entanglement correlations. This paper also provides a brief introduction to CDT with a motivating "small" Newcomb problem. The main point of our argument is that quantum theory does not provide grounds for favouring EDT over CDT.

5:21 PM | quant-ph updates on arXiv.org Authors: Christoph Adami. Leggett and Garg derived inequalities that probe the boundaries of classical and quantum physics by putting limits on the properties that classical objects can have. Historically, it has been suggested that Leggett-Garg inequalities are easily violated by quantum systems undergoing sequences of strong measurements, casting doubt on whether quantum mechanics correctly describes macroscopic objects. Here I show that Leggett-Garg inequalities cannot be violated by any projective measurement. The perceived violation of the inequalities found previously can be traced back to an inappropriate assumption of non-invasive measurability. Surprisingly, weak projective measurements cannot violate the Leggett-Garg inequalities either, because even though the quantum system itself is not fully projected via weak measurements, the measurement devices …

5:21 PM | quant-ph updates on arXiv.org In this paper we give an extensive description of quantum non-locality, one of the most intriguing and fascinating facets of quantum mechanics. After a general presentation of several studies on this subject, we consider whether quantum non-locality, and the friction it carries with special relativity, can eventually find a "solution" by considering higher-dimensional spaces.

5:21 PM | quant-ph updates on arXiv.org Authors: Christoph Adami. The Leggett-Garg inequalities probe the classical-quantum boundary by putting limits on the sum of pairwise correlation functions between classical measurement devices that consecutively measured the same quantum system. The apparent violation of these inequalities by standard quantum measurements has cast doubt on quantum mechanics' ability to consistently describe classical objects. Recent work has concluded that these inequalities cannot be violated by either strong or weak projective measurements [1]. Here I consider an entropic version of the Leggett-Garg inequalities that is different from the standard inequalities yet similar in form, and can be defined without reference to any particular observable. I find that the entropic inequalities also cannot be violated by strong quantum measurements. The entropic inequalities can be extended to describe weak quantum measurements, and I show that these weak entropic Leggett-Garg inequalities cannot be violated either, even though the quantum system remains unprojected, because the inequalities describe the classical measurement devices, not the quantum system.
I conclude that quantum mechanics adequately describes classical devices, and that we should be careful not to assume that the classical devices accurately describe the quantum system.

5:21 PM | quant-ph updates on arXiv.org Moving detectors in relativistic quantum field theories reveal the fundamental entangled structure of the vacuum, which manifests, for instance, through its thermal character when probed by a uniformly accelerated detector. In this paper, we propose a general formalism, inspired both by signal processing and by the correlation functions of quantum optics, to analyze the response of point-like detectors following a generic, non-stationary trajectory. In this context, the Wigner representation of the first-order correlation of the quantum field is a natural time-frequency tool for understanding single-detection events. This framework offers a synthetic perspective on the problem of detection in relativistic theory and allows us to analyze various non-stationary situations (adiabatic, periodic) and how excitations and superpositions are deformed by motion. It opens up interesting perspectives on the issue of the definition of particles.

5:21 PM | gr-qc updates on arXiv.org The classical Penrose inequality specifies a lower bound on the total mass in terms of the area of certain trapped surfaces. This fails at the semiclassical level. We conjecture a quantum Penrose inequality: the mass at spatial infinity is lower-bounded by a function of the generalized entropy of the lightsheet of any quantum marginally trapped surface. This is the first relation between quantum information in quantum gravity and total energy.

5:21 PM | gr-qc updates on arXiv.org Newtonian gravity predicts the existence of white dwarfs with masses far exceeding the Chandrasekhar limit when the equation of state of the degenerate electron gas incorporates the effect of quantum spacetime fluctuations (via a modified dispersion relation), even when the strength of the fluctuations is taken to be very small. In this paper, we show that this Newtonian "super-stability" does not hold true when gravity is treated in the general relativistic framework. Employing a dynamical instability analysis, we find that the Chandrasekhar limit can be reassured even for a range of high strengths of quantum spacetime fluctuations, with the onset density for gravitational collapse remaining practically unaffected.

5:21 PM | gr-qc updates on arXiv.org We construct holographic backgrounds that are dual, by the AdS/CFT correspondence, to Euclidean conformal field theories on products of spheres $S^{d_1}\times S^{d_2}$, for conformal field theories whose dual may be approximated by classical Einstein gravity (typically these are large $N$ strongly coupled theories). For $d_2=1$ these backgrounds correspond to thermal field theories on $S^{d_1}$, and Hawking and Page found that there are several possible bulk solutions, with two different topologies, that compete with each other, leading to a phase transition as the relative size of the spheres is modified. By numerically solving the Einstein equations we find similar results also for $d_2>1$, with bulk solutions in which either one or the other sphere shrinks to zero smoothly at a minimal value of the radial coordinate, and with a first-order phase transition (for $d_1+d_2 < 9$) between solutions of two different topologies as the relative radius changes.
For a critical ratio of the radii there is a (sub-dominant) singular solution where both spheres shrink, and we analytically analyze the behavior near this radius. For $d_1+d_2 < 9$ the number of solutions grows to infinity as the critical ratio is approached.

Thursday, August 8, 2019, 6:00 PM | Lixiang Chen, Tianlong Ma, Xiaodong Qiu, Dongkai Zhang, Wuhong Zhang, and Robert W. Boyd | PRL: General Physics: Statistical and Quantum Mechanics, Quantum Information, etc. Correlations between the radial position and radial momentum of entangled photons demonstrate the suitability of these properties for quantum information applications. [Phys. Rev. Lett. 123, 060403] Published Thu Aug 08, 2019

Tuesday, August 6, 2019, 6:00 PM | Roei Remez, Aviv Karnieli, Sivan Trajtenberg-Mills, Niv Shapira, Ido Kaminer, Yossi Lereah, and Ady Arie | PRL: General Physics: Statistical and Quantum Mechanics, Quantum Information, etc. We investigate, both experimentally and theoretically, the interpretation of the free-electron wave function using spontaneous emission. We use a transversely wide single-electron wave function to describe the spatial extent of transverse coherence of an electron beam in a standard transmission elec… [Phys. Rev. Lett. 123, 060401] Published Tue Aug 06, 2019

Tuesday, August 6, 2019, 8:00 AM | Latest Results for Foundations of Physics Abstract: Is change missing in Hamiltonian Einstein–Maxwell theory? Given the most common definition of observables (having weakly vanishing Poisson bracket with each first-class constraint), observables are constants of the motion and nonlocal. Unfortunately this definition also implies that the observables of massive electromagnetism with gauge freedom ('Stueckelberg') are inequivalent to those of massive electromagnetism without gauge freedom ('Proca'). The alternative Pons–Salisbury–Sundermeyer definition of observables, aiming for Hamiltonian–Lagrangian equivalence, uses the gauge generator G, a tuned sum of first-class constraints, rather than each first-class constraint separately, and implies equivalent observables for equivalent massive electromagnetisms. For General Relativity, G generates 4-dimensional Lie derivatives for solutions. The Lie derivative compares different space-time points with the same coordinate value in different coordinate systems, like 1 a.m. summer time versus 1 a.m. standard time, so a vanishing Lie derivative implies constancy rather than covariance. Requiring equivalent observables for equivalent formulations of massive gravity confirms that G must generate the 4-dimensional Lie derivative (not 0) for observables. These separate results indicate that observables are invariant under internal gauge symmetries but covariant under external gauge symmetries; but can this bifurcated definition work for mixed theories such as Einstein–Maxwell theory? Pons, Salisbury and Shepley have studied G for Einstein–Yang–Mills.
For Einstein–Maxwell, both \(F_{\mu \nu }\) and \(g_{\mu \nu }\) are invariant under electromagnetic gauge transformations and covariant (changing by a Lie derivative) under 4-dimensional coordinate transformations. Using the bifurcated definition, these quantities count as observables, as one would expect on non-Hamiltonian grounds.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...

Beauty production in pp collisions at √s = 2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11) The ALICE Collaboration at the LHC reports a measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y| < 0.8 and transverse momentum 1 < pT < 10 GeV/c, in pp ...
What is the discrete Fourier transform of the discrete Fourier transform of a discrete-time signal? Is the result the same signal? How?

Let $$\begin{align} X[k] &= \mathcal{DFT} \Big\{ x[n] \Big\} \\ &\triangleq \sum\limits_{n=0}^{N-1} x[n] \, e^{-j2\pi nk/N} \end{align} $$ and define $$ y[n] \triangleq X[n] $$ (note the substitution of $n$ for $k$), and let $$ Y[k] = \mathcal{DFT} \Big\{ y[n] \Big\}. $$ Then, if the DFT is defined in the most common way (as above), $$ Y[n] = N \cdot x[-n], $$ where periodicity is implied: $x[n+N]=x[n]$ for all $n$.

It depends on how the DFT implementation or equation is scaled and indexed. The result of dft(dft(x)) is to circularly reverse the array x (of length N) around its first element, possibly with a scale factor of N, 1/N, or 1/sqrt(N). Computationally, there may also be added numerical or quantization noise (for instance, in the imaginary components if they were originally all zero for strictly real input).
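This is easy to check numerically with NumPy's FFT, which uses the common definition above (a quick sanity check, not part of the original answer):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)
Y = np.fft.fft(np.fft.fft(x))
# N * x[-n mod N]: circular reversal around the first element, scaled by N
expected = N * x[(-np.arange(N)) % N]
print(np.allclose(Y, expected))  # True, up to rounding noise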
Introduction

This is an alternative to j_random_hacker's solution, or more precisely, an alternative to his/her solution to the subproblem: Given the ordered list of edge weights $x_1,\ldots,x_m$ encountered while traversing down a particular heavy path, we want to preprocess this list so that we can later efficiently answer queries of the form "How many of the first $r$ elements are less than $k$?" j_random_hacker uses the data structure of a Fenwick tree of sorted lists, which results in a solution with overall $O(n\log^3 n)$ preprocessing time for all heavy paths, overall $O(n\log n)$ space for all heavy paths, and $O(\log^2 n)$ query time for the subproblem, hence $O(\log^3 n)$ query time for the primary problem. In contrast, my alternative gives a solution with overall $O(n\sqrt{n})$ preprocessing time for all heavy paths, overall $O(n\sqrt{n})$ space for all heavy paths, and $O(1)$ query time for the subproblem, hence $O(\log n)$ query time for the primary problem.

How to preprocess each heavy path?

First of all, before we turn to the heavy paths, we sort all edges according to their weights from small to large. Say the result is $e_1,\ldots,e_{n-1}$ with weights $w_1,\ldots,w_{n-1}$. We label edge $e_i$ by $i$ for future use. Denote $I_1=(-\infty,w_1],I_2=(w_1,w_2],\ldots,I_n=(w_{n-1},+\infty)$. Now let's focus on the subproblem mentioned above. We divide $x_1,\ldots,x_m$ into blocks of length $t=\lceil\sqrt{n}\rceil$: $\left[x_1,\ldots,x_t\right],\left[x_{t+1},\ldots,x_{2t}\right],\ldots$ We build two tables $T_1, T_2$, where $T_1(p,q,r)$ represents how many of the first $r$ elements in the $q$th block are less than $k$ if $k\in I_p$, and $T_2(p,q)$ represents how many of the elements in the first $q$ blocks are less than $k$ if $k\in I_p$. Now if we have these two tables, we can answer a query for each heavy path in $O(1)$ time. For example, for a query $(r,k)$, if $k\in I_5$ and $r=3t+2$, then the answer is $T_1(5,4,2)+T_2(5,3)$. Note that it takes $O(\log n)$ time to find $p$ such that $k\in I_p$. Since we need to do this search only once for each query of the primary problem, it does not increase the query time.

How to build these tables?

Since $T_2(p,q)=\sum_{i=1}^q T_1(p,i,t)$, if we already have $T_1$, we can build $T_2$ in $O(nm/t)$ time, and it takes $O(nm/t)$ space. Next we focus on how to build $T_1$. Suppose the edge weights in the $q$th block are $w_{i_1},\ldots,w_{i_t}$ where $i_1<\cdots<i_t$ (recall that these edges are already labeled by $i_1,\ldots,i_t$). Then we have \begin{align}T_1(1,q,*)=T_1(2,q,*)=&\cdots=T_1(i_1,q,*),\\T_1(i_1+1,q,*)=T_1(i_1+2,q,*)=&\cdots=T_1(i_2,q,*),\\&\cdots\\T_1(i_t+1,q,*)=T_1(i_t+2,q,*)=&\cdots=T_1(n,q,*),\end{align} where $T_1(p,q,*)$ represents the array $[T_1(p,q,1),\ldots,T_1(p,q,t)]$, and two arrays are equal if they are element-wise equal. This means we only need to compute and store $t$ arrays for $T_1(1, q, *),\ldots,T_1(n, q, *)$. So we can build $T_1$ in $O(t^2\cdot m/t)=O(mt)$ time, and it also takes $O(mt)$ space. Since $t=\lceil\sqrt{n}\rceil$, the overall preprocessing time for all heavy paths is $$\sum_m O(m\sqrt{n})=O(n\sqrt{n}),$$ and the same bound holds for the overall space.
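Here is a simplified Python sketch of the block decomposition (names are mine; it keeps sorted prefixes and uses binary search where the answer tabulates $T_1$ and $T_2$ for $O(1)$ lookups, so it illustrates the query decomposition rather than the exact complexity):

import bisect

def preprocess(xs, t):
    # Split xs into blocks of length t; keep each block and a sorted
    # copy of every prefix made of whole blocks.
    blocks = [xs[i:i + t] for i in range(0, len(xs), t)]
    prefix, acc = [], []
    for b in blocks:
        acc = sorted(acc + b)
        prefix.append(acc[:])
    return blocks, prefix

def query(structure, r, k):
    # How many of the first r elements are less than k?
    blocks, prefix = structure
    t = len(blocks[0])
    q, rem = divmod(r, t)
    count = bisect.bisect_left(prefix[q - 1], k) if q > 0 else 0
    if rem:  # scan the partial block directly
        count += sum(1 for x in blocks[q][:rem] if x < k)
    return count

s = preprocess([5, 1, 4, 2, 8, 6, 3, 7], t=3)
print(query(s, 5, 5))  # 3: among [5, 1, 4, 2, 8], the elements 1, 4, 2 are < 5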
All $LR(1)$ grammars, indeed all $LR(k)$ grammars, are unambiguous by definition. But the converse is not true: the fact that a grammar is unambiguous says nothing about whether it can be parsed with an $LR(k)$ parser. The grammar you present is not $LR(1)$, although the language itself is. (In fact, the language is regular: $(aa)^*$.) But that is not true for the language of even-length palindromes, which has a rather similar unambiguous CFG: $$\begin{align} S &\to \epsilon \\ S &\to a S a \\ S &\to b S b\end{align}$$ Intuitively, the problem with parsing palindromes deterministically is that you have to start popping the stack at the middle of the sentence. But you can't tell where the middle of the sentence is until you reach the end, and since there is no limit on the length of a sentence, the end could be arbitrarily distant from the middle. So no finite lookahead is sufficient to make the decision. A context-free language is $LR(k)$ precisely when it is deterministic. For the outline of a proof of the non-determinism of the language of even-length palindromes, see: prove no DPDA accepts language of even-lengthed palindromes
I am following Carroll's Spacetime and Geometry. I want to derive the continuity equation from the conservation of the stress-energy tensor: $$ \partial_t \rho + \nabla \cdot (\rho \vec{v}) = 0. $$ Suppose we assume that the stress-energy tensor of a perfect fluid is conserved. Then, since $u^\mu$ is the 4-velocity, $$ T^{\mu \nu} = (\rho + p)u^\nu u^\mu + \eta^{\mu \nu}p. $$ It follows directly that $$ 0 = \partial_\mu T^{\mu \nu} = (\partial_\mu p + \partial_\mu \rho) u^\mu u^\nu + (p + \rho)u^\nu \partial_\mu u^\mu + (p + \rho) u^\mu \partial_\mu u^\nu + \partial^\nu p \qquad (\text{Eq. 1}) $$ We know that since $u^\nu u_\nu = -1$, the product rule gives $u_\nu\partial_\mu u^\nu = (1/2)\partial_\mu(u^\nu u_\nu) = 0$. Contracting Eq. 1 with $u_\nu$ and using this identity, we get $$ 0 = u_\nu\partial_\mu T^{\mu \nu} = \cdots = - \partial_\mu (u^\mu \rho) - p \partial_\mu u^\mu $$ Assuming that we are in the non-relativistic limit $u^\mu = (1, u^i)$, $|u^i| \ll 1$ and $p \ll \rho$, Carroll states that the continuity equation follows. I however find $$ 0 = - \partial_\mu (u^\mu \rho) - p \partial_\mu u^\mu = - (p + \rho)\partial_\mu u ^\mu - u^\mu \partial_\mu \rho \stackrel{p \ll \rho}{=} - \rho \partial_\mu u^\mu - u^\mu \partial_\mu \rho = - \partial_\mu (u^\mu\rho) = \partial_t v^0 \rho - \nabla \cdot (\rho \vec{v}) = \partial_t\rho - \nabla\cdot(\rho \vec{v}) \neq \partial_t\rho + \nabla\cdot(\rho \vec{v}) $$ This is clearly problematic. May I get a few suggestions to rectify the derivation of the continuity equation?
How can I prove that the function $f(x)= 1-x^2$ is greater than $g(x) = \cos(\pi x)$ on the interval $[-1,1]$? I feel like it should be pretty basic, but it seems so hard.

Since both $f(x)$ and $g(x)$ are even, it suffices to prove the inequality for $x\in[0,1]$, and since $\cos(\pi x)< 0$ for $x>\frac12$ while $f(x)\ge 0$ there, it suffices to consider $x\in[0,1/2]$. Then we need to show that $$h(x)=f(x)-g(x)=1-x^2-\cos (\pi x)> 0$$ for $x\in[0,1/2]$. First observe that $h(0)=0$, so the strict inequality fails at $x=0$; then note that for $x\in[0,1/2]$ $$h'(x)=\pi\sin(\pi x)-2x\ge 0, \qquad h'(x)=0 \iff x=0,\tag{1}$$ so $h(x)$ is strictly increasing on the interval and the inequality holds for $x\in(0,1/2]$. To show $(1)$, observe that $\pi\sin(\pi x)$ is concave on $[0,1/2]$; at $x=0$ we have $[\pi\sin(\pi x)]_{x=0}=[2x]_{x=0}=0$, which gives $h'(0)=0$, while at $x=1/2$ we have $[\pi\sin(\pi x)]_{x=1/2}=\pi>1=[2x]_{x=1/2}$. A concave function that agrees with a linear function at one endpoint and exceeds it at the other lies above it everywhere in between, so $h'(x)>0$ on $(0,1/2]$.
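As a numerical sanity check of the claim (not a substitute for the proof):

import numpy as np

x = np.linspace(1e-6, 0.5, 100_000)
h = 1 - x**2 - np.cos(np.pi * x)
print(h.min() > 0)  # True: h is positive on (0, 1/2]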
A random function $v(t)$ is said to be intermittent at small scales if its "flatness" $F$, given by $$ F(\Omega) = \frac{\langle (v_{\Omega}^{>}(t))^4\rangle}{\langle (v_{\Omega}^{>}(t))^2\rangle^2} = \frac{\langle v_{\Omega}^{>}(t)\,v_{\Omega}^{>}(t)\,v_{\Omega}^{>}(t)\,v_{\Omega}^{>}(t)\rangle}{\langle v_{\Omega}^{>}(t)\,v_{\Omega}^{>}(t)\rangle\, \langle v_{\Omega}^{>}(t)\,v_{\Omega}^{>}(t)\rangle} $$ diverges as the high-pass filter frequency $\Omega \rightarrow \infty$. Here $v(t)$, which can for example be the velocity in a fluid, is decomposed into its Fourier components $$ v(t) = \int_{\mathbb{R}}d\omega \, e^{i\omega t}\hat{v}_{\omega} $$ and $v_{\Omega}^{>}(t)$ is its high-frequency part, $$ v_{\Omega}^{>}(t) = \int_{|\omega| > \Omega} d\omega \, e^{i\omega t}\hat{v}_{\omega}. $$ I am struggling hard to understand the physical meaning of this definition and need some help with it. First of all, why is $F$ called "flatness"; it should be a measure of the flatness of what? What does it mean for $F$ to diverge in the high-frequency (UV) limit? Looking at the expression for $F$, I got the impression that it could mean that higher-order correlations in time ("n-point functions") start to dominate in the case of intermittency, and that the more "local" 2-point interactions, which are responsible for maintaining a scale-invariant inertial subrange (?), become negligible, such that the turbulent system starts to deviate from, for example, a Kolmogorov inertial subrange. In addition, other measures of intermittency can be defined involving higher-order correlations in the length $l$, such as the so-called hyper-flatness $F_6 (l) = S_6 (l)/S_2(l)^3$, etc. Does this mean that one could say, more generally, that for a turbulent system showing intermittency, Wick's theorem cannot be applied to calculate higher-order n-point functions from 2-point functions? I am ultimately interested in understanding intermittency from a quantum field theory point of view, which is unfortunately not the point of view of the book I have taken these definitions from ...
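For intuition, here is a small NumPy sketch (my own toy construction, not from the book): for a Gaussian signal the flatness stays near 3 at every filter frequency, whereas an intermittent signal would show $F(\Omega)$ growing as $\Omega$ increases.

import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(2**16)       # Gaussian test signal
vhat = np.fft.rfft(v)
freqs = np.fft.rfftfreq(v.size)

def flatness(omega):
    # High-pass filter at omega, then form <v^4> / <v^2>^2
    v_hi = np.fft.irfft(np.where(freqs > omega, vhat, 0.0), n=v.size)
    return np.mean(v_hi**4) / np.mean(v_hi**2)**2

for om in (0.0, 0.1, 0.3):
    print(om, flatness(om))          # all close to 3: no intermittency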
The formation of the diamminesilver(I) complex $\ce{[Ag(NH3)2]+}$ upon addition of ammonia serves mainly three purposes. 1. The Tollens test takes place under alkaline aqueous conditions. The complexation of $\ce{Ag+}$ with $\ce{NH3}$ prevents precipitation of brown $\ce{Ag2O}$: $$\ce{2Ag+ + 2OH- <=> 2 AgOH <=> Ag2O + H2O}$$ 2. The Tollens test is meant to be selective for aldehydes. $\ce{[Ag(NH3)2]+}$ is a milder oxidising agent than $\ce{Ag+}$: $$\begin{alignat}{2}\ce{Ag+ + e- \;&<=> Ag}\quad &&E^\circ = +0.799\ \mathrm{V}\\\ce{[Ag(NH3)2]+ + e- \;&<=> Ag + 2 NH3}\quad &&E^\circ = +0.373\ \mathrm{V}\end{alignat}$$ The difference in redox potential can be explained using the stability constant of $\ce{[Ag(NH3)2]+}$: $$K_\text{B} = \frac{\left[\ce{[Ag(NH3)2]+}\right]}{\left[\ce{Ag+}\right]\left[\ce{NH3}\right]^2}$$ $$\begin{aligned}E&=E_{\ce{Ag+}}^\circ+\frac{RT}{F}\cdot\ln\left[\ce{Ag+}\right]\\ &=E_{\ce{Ag+}}^\circ+\frac{RT}{F}\cdot\ln\frac{\left[\ce{[Ag(NH3)2]+}\right]}{K_\text{B}\cdot\left[\ce{NH3}\right]^2} \\ E_{\ce{[Ag(NH3)2]+}}^\circ&=E_{\ce{Ag+}}^\circ+\frac{RT}{F}\cdot\ln\frac{1}{K_\text{B}} \\&\approx E_{\ce{Ag+}}^\circ-0.059\ \mathrm{V}\cdot\log K_\text{B}\end{aligned} $$ 3. The complex formation equilibrium slows down the overall reaction. A slow, controlled reaction is important for creating the desired silver mirror. If $\ce{Ag+}$ were reduced too quickly, colloidal silver would form, creating a black cloudy liquid.
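Numerically, the two quoted potentials are consistent with a stability constant of about $\log_{10} K_\text{B} \approx 7.2$; the short Python check below uses that value as an assumption for illustration (the answer itself only quotes the potentials).

# Recover E°(Ag(NH3)2+/Ag) from E°(Ag+/Ag) and an assumed stability constant
E_Ag = 0.799            # V, standard potential of Ag+/Ag
log10_KB = 7.2          # assumed log10 of the stability constant, for illustration
E_complex = E_Ag - 0.059 * log10_KB
print(round(E_complex, 3))  # ~0.374 V, close to the quoted +0.373 V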
Confidence intervals when the likelihood is not normalized

We are working with a model whose power spectrum differs from the \Lambda CDM one, P(k) = A_s k C(k), where C(k) includes two extra free parameters. We know from previous analysis that the spectrum is not sensitive to huge values of the free parameters; therefore, for huge values of these parameters, there is always a good fit to the data (likelihood/likelihood_max \simeq 1). The question is: how can we determine the confidence interval for a given parameter (i.e., the value X for which I can say "the value of the parameter is greater than X with 68% confidence") when the likelihood does not fall to zero at infinity (hence is not normalized)?

Not sure if I understood you right, but the normalization of the likelihood is primarily defined on the space of values of the quantity that you are examining, not on the parameter space of the model. If you want to operate on the space of the model parameters, then you have to make a transformation. For example, if you have a measurement of the power spectrum P_{obs}(k_i) and a model P(k) for it, then your likelihood is going to be something like L = \exp[-(P(k_i) - P_{obs}(k_i))^2 / (2\sigma^2)] / (\sqrt{2\pi}\,\sigma), and this is already normalized: if we define \chi = [P(k_i) - P_{obs}(k_i)] / \sigma, then \int L \, d\chi = 1 (\chi can be anything from -infinity to +infinity). But \chi = \chi(p), where p is a parameter set (the parameters that define the model P(k)); then, if I want to write the integral of the likelihood in terms of the parameters p, I have to use d\chi = (d\chi/dp) dp. It can happen that P(k) does not change over a given range of the parameter p, but then dP/dp is 0, so you see that integrating the likelihood over this range of the parameter space is not going to contribute. Maybe that is your case.
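Antonio's point can be seen in a toy computation (the model P(p) = tanh(p) is my own assumption, chosen because it saturates for large |p| just like the spectra described in the question):

import numpy as np

p = np.linspace(-40, 40, 8001)
P = np.tanh(p)                  # model prediction: flat for large |p|
P_obs, sigma = 0.9, 0.1
L = np.exp(-(P - P_obs)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
# Normalized over the data residual chi, but not over the parameter p:
print(np.trapz(L, p))           # grows with the p-range instead of converging to 1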
Thanks very much, Antonio, for your answer; it helps a lot. Is there any reference where I can check about the likelihood normalization? I cannot find anything about it in the usual CosmoMC bibliography. Thank you very much in advance. Regards, Susana

Dear Susana, Antonio is right that in general likelihoods are not normalized with respect to models, as the likelihood is defined with respect to the observations and need not know what model you are then going to compare to the data. Usually its normalization can be ignored because we just write ${\rm posterior} \propto {\rm prior} \times {\rm likelihood}$, and confidence intervals can still be extracted without explicitly normalizing the posterior. More generally the normalization factor is the Bayesian evidence, which is used in model selection analyses. However, in your specific problem it is not the likelihood you should be worrying about, but your prior distribution, as the prior does need to be normalized. If the prior probability does not tend to zero as your parameter gets large, then you are effectively putting all of your prior probability at infinitely large parameter values. Then no amount of data will be able to overrule that (which is why you are having trouble setting a confidence level). As you almost certainly don't believe that the parameter starts out with 100% probability of being infinitely large, you have to find a more appropriate prior which does account for your beliefs about the likely value before the data comes along. Then you can apply the likelihood, normalized or not, to find out how the data has changed your initial view. Best regards, Andrew

Thank you, Andrew, very much for your answer. However, now I have a different problem, because I still cannot establish bounds on the parameters.
The problem is the following. I am working with CosmoMC, and following Andrew's suggestion I put a physically motivated prior on the parameter of the model. The problem is that now L/L_max (L being the likelihood) never takes values below 0.6, except at the prior boundaries. Let me assume that the parameter name is B and the prior is -10^6 < B < 10^2. From a previous analysis, I know that for B=-100 and for B=-10 the value of the likelihood is very low; however, this does not appear in the 1-dimensional likelihood plot. Why? Because CosmoMC calculates the value of the likelihood for B = -8×10^5, -6×10^5, ..., -2×10^5 and B = 0, and there are no points calculated between the last two values I have mentioned (only those that are very near the prior boundary, which cannot be considered in the analysis). In a grid-based analysis I can calculate B=-1000, B=-100, B=-10, etc., but with CosmoMC this is not possible. Therefore, should I conclude that the data cannot give me additional information on the value of the parameter, even though I know there are some values of B within the prior that do not fit the data and others that fit very well? I have already tried to change the start width and the st.dev. estimate in CosmoMC, but this did not work. I appreciate any help with this problem. Regards, Susana Landau

Maybe the previous explanation was not clear enough to explain the problem. I know from previous analysis that for very huge values of the parameter B the model always fits the data, and that for some values like B=-100, B=-10, B=-1, there is no good fit to the data. If I want CosmoMC to calculate some of the points where there is no good fit to the data, I can put a prior which has no physical motivation, like -2.4×10^4 < B < 5×10^3, and then in the 1-dimensional likelihood plot I find points where L/L_{max} is nearly 0, and also intermediate points (L/L_{max}=0.2, L/L_{max}=0.3, L/L_{max}=0.5, etc.). However, from this plot I cannot establish the confidence intervals, because the prior has no physical motivation. When I work with a physically motivated prior, CosmoMC does not calculate points where L/L_{max} is below 0.6. I think this is a problem of the method; does anyone agree? Any suggestions to improve the analysis? Thanks again for any comments or suggestions.

Dear Susana, If you choose a uniform prior going to -10^6, but are interested in imposing a parameter constraint for values orders of magnitude smaller, you will always tend to run into sampling problems: the MCMC does not manage to probe the small region of the prior space that you want it to. You are already saying, by the prior, that the chance of |B| being less than 10^2 is only about one part in 10^4, so the MCMC regards it as already ruled out at that confidence without needing any data. The fact that you still want to impose the constraint means that you don't think that those small values are already ruled out without needing any data. So probably your prior is still not quite right. In cases where a parameter varies by orders of magnitude, a log prior is often more appropriate (e.g. if you think in advance that you are as likely to be between 10^4 and 10^5 as between 10^5 and 10^6). However, you may then run into problems deciding what to do with zero, so there may be no easy answer.
Best, Andrew
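Andrew's prior-mass point can be checked in two lines (toy numbers matching the thread):

import numpy as np

# Prior mass of |B| < 1e2 under a uniform prior on (-1e6, 0] versus a
# log-uniform prior on |B| over [1, 1e6]:
print(1e2 / 1e6)                   # 1e-4: the MCMC rarely samples there
print(np.log(1e2) / np.log(1e6))   # ~0.33: the region is well explored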
DG advection equation with upwinding¶ We next consider the advection equation \(\frac{\partial q}{\partial t} + \nabla\cdot(\vec{u} q) = 0\) in a domain \(\Omega\), where \(\vec{u}\) is a prescribed vector field and \(q(\vec{x}, t)\) is an unknown scalar field. The value of \(q\) is known initially, and the value of \(q\) is known for all time on the subset \(\Gamma_\mathrm{inflow}\) of the boundary \(\Gamma\) on which \(\vec{u}\) is directed towards the interior of the domain, where \(\Gamma_\mathrm{inflow}\) is defined appropriately. We will look for a solution \(q\) in a space of discontinuous functions \(V\). A weak form of the continuous equation is obtained in each element \(e\), where we explicitly introduce the subscript \(e\) since the test functions \(\phi_e\) are local to each element. Using integration by parts on the second term, we obtain a facet term in which \(\vec{n}_e\) is an outward-pointing unit normal. Since \(q\) is discontinuous, we have to make a choice about how to define \(q\) on facets when we assemble the equations globally. We will use upwinding: we choose the upstream value of \(q\) on facets, with respect to the velocity field \(\vec{u}\). We note that there are three types of facets that we may encounter:

Interior facets. Here, the value of \(q\) from the upstream side, denoted \(\widetilde{q}\), is used.

Inflow boundary facets, where \(\vec{u}\) points towards the interior. Here, the upstream value is the prescribed boundary value \(q_\mathrm{in}\).

Outflow boundary facets, where \(\vec{u}\) points towards the outside. Here, the upstream value is the interior solution value \(q\).

We must now express our problem in terms of integrals over the entire mesh and over the sets of interior and exterior facets. This is done by summing our earlier expression over all elements \(e\). The cell integrals are easy to handle, since \(\sum_e \int_e \cdot \,\mathrm{d}x = \int_\Omega \cdot \,\mathrm{d}x\). The interior facet integrals are more difficult to express, since each facet in the set of interior facets \(\Gamma_\mathrm{int}\) appears twice in \(\sum_e \int_{\partial e}\). In other words, contributions arise from both of the neighbouring cells. In Firedrake, the separate quantities in the two cells neighbouring an interior facet are denoted by + and -. These markings are arbitrary – there is no built-in concept of upwinding, for example – and the user is responsible for providing a form that works in all cases. We will give an example shortly. The exterior facet integrals are easier to handle, since each facet in the set of exterior facets \(\Gamma_\mathrm{ext}\) appears exactly once in \(\sum_e \int_{\partial e}\). The full equations are then assembled from these cell, inflow, outflow and interior facet contributions; they appear in code as the form L1 below.

As a timestepping scheme, we use the three-stage strong-stability-preserving Runge-Kutta (SSPRK) scheme from [SO88]: to discretise \(\frac{\partial q}{\partial t} = \mathcal{L}(q)\), we set

\(q^{(1)} = q^n + \Delta t\,\mathcal{L}(q^n),\)
\(q^{(2)} = \tfrac{3}{4}q^n + \tfrac{1}{4}\left(q^{(1)} + \Delta t\,\mathcal{L}(q^{(1)})\right),\)
\(q^{n+1} = \tfrac{1}{3}q^n + \tfrac{2}{3}\left(q^{(2)} + \Delta t\,\mathcal{L}(q^{(2)})\right).\)

In this worked example, we reproduce the classic cosine-bell–cone–slotted-cylinder advection test case of [LeV96]. The domain \(\Omega\) is the unit square \(\Omega = [0,1] \times [0,1]\), and the velocity field corresponds to solid body rotation, \(\vec{u} = (0.5 - y, x - 0.5)\). Each side of the domain has a section of inflow and a section of outflow boundary. We therefore perform both the inflow and outflow integrals over the entire boundary, but construct them so that they only contribute in the correct places. As usual, we start by importing Firedrake. We also import the math library to give us access to the value of pi. We use a 40-by-40 mesh of squares.
from firedrake import *
import math

mesh = UnitSquareMesh(40, 40, quadrilateral=True)

We set up a function space of discontinuous bilinear elements for \(q\), and a vector-valued continuous function space for our velocity field.

V = FunctionSpace(mesh, "DQ", 1)
W = VectorFunctionSpace(mesh, "CG", 1)

We set up the initial velocity field using a simple analytic expression.

x, y = SpatialCoordinate(mesh)
velocity = as_vector((0.5 - y, x - 0.5))
u = Function(W).interpolate(velocity)

Now, we set up the cosine-bell–cone–slotted-cylinder initial condition. The first four lines declare various parameters relating to the positions of these objects, while the analytic expressions appear in the last three lines.

bell_r0 = 0.15; bell_x0 = 0.25; bell_y0 = 0.5
cone_r0 = 0.15; cone_x0 = 0.5; cone_y0 = 0.25
cyl_r0 = 0.15; cyl_x0 = 0.5; cyl_y0 = 0.75
slot_left = 0.475; slot_right = 0.525; slot_top = 0.85

bell = 0.25*(1 + cos(math.pi*min_value(sqrt(pow(x-bell_x0, 2) + pow(y-bell_y0, 2))/bell_r0, 1.0)))
cone = 1.0 - min_value(sqrt(pow(x-cone_x0, 2) + pow(y-cone_y0, 2))/cone_r0, 1.0)
slot_cyl = conditional(sqrt(pow(x-cyl_x0, 2) + pow(y-cyl_y0, 2)) < cyl_r0,
                       conditional(And(And(x > slot_left, x < slot_right), y < slot_top),
                                   0.0, 1.0),
                       0.0)

We then declare the initial condition of \(q\) to be the sum of these fields. Furthermore, we add 1 to this, so that the initial field lies between 1 and 2, rather than between 0 and 1. This ensures that we can't get away with neglecting the inflow boundary condition. We also save the initial state so that we can check the \(L^2\)-norm error at the end.

q = Function(V).interpolate(1.0 + bell + cone + slot_cyl)
q_init = Function(V).assign(q)

We declare the output filename, and write out the initial condition.

outfile = File("DGadv.pvd")
outfile.write(q)

We will run for time \(2\pi\), a full rotation. We take 600 steps, giving a timestep close to the CFL limit. We declare an extra variable dtc; for technical reasons, this means that Firedrake does not have to compile new C code if the user tries different timesteps. Finally, we define the inflow boundary condition, \(q_\mathrm{in}\). In general, this would be a Function, but here we just use a Constant value.

T = 2*math.pi
dt = T/600.0
dtc = Constant(dt)
q_in = Constant(1.0)

Now we declare our variational forms. Solving for \(\Delta q\) at each stage, the explicit timestepping scheme means that the left hand side is just a mass matrix.

dq_trial = TrialFunction(V)
phi = TestFunction(V)
a = phi*dq_trial*dx

The right-hand side is more interesting. We define n to be the built-in FacetNormal object; a unit normal vector that can be used in integrals over exterior and interior facets. We next define un to be an object which is equal to \(\vec{u}\cdot\vec{n}\) if this is positive, and zero if this is negative. This will be useful in the upwind terms.

n = FacetNormal(mesh)
un = 0.5*(dot(u, n) + abs(dot(u, n)))

We now define our right-hand-side form L1 as \(\Delta t\) times the sum of four integrals. The first integral is a straightforward cell integral of \(q\nabla\cdot(\phi\vec{u})\). The second integral represents the inflow boundary condition. We only want this to contribute on the inflow part of the boundary, where \(\vec{u}\cdot\vec{n} < 0\) (recall that \(\vec{n}\) is an outward-pointing normal). Where this is true, the condition gives the desired expression \(\phi q_\mathrm{in}\vec{u}\cdot\vec{n}\), otherwise the condition gives zero. The third integral operates in a similar way to give the outflow boundary condition.
The last integral represents the integral \(\widetilde{q}(\phi_+ \vec{u} \cdot \vec{n}_+ + \phi_- \vec{u} \cdot \vec{n}_-)\) over interior facets. We could again use a conditional in order to represent the upwind value \(\widetilde{q}\) by the correct choice of \(q_+\) or \(q_-\), depending on the sign of \(\vec{u}\cdot\vec{n}_+\), say. Instead, we make use of the quantity un, which is either \(\vec{u}\cdot\vec{n}\) or zero, in order to avoid writing explicit conditionals. Although it is not obvious at first sight, the expression given in code is equivalent to the desired expression, assuming \(\vec{n}_- = -\vec{n}_+\).

L1 = dtc*(q*div(phi*u)*dx
          - conditional(dot(u, n) < 0, phi*dot(u, n)*q_in, 0.0)*ds
          - conditional(dot(u, n) > 0, phi*dot(u, n)*q, 0.0)*ds
          - (phi('+') - phi('-'))*(un('+')*q('+') - un('-')*q('-'))*dS)

In our Runge-Kutta scheme, the first step uses \(q^n\) to obtain \(q^{(1)}\). We therefore declare similar forms that use \(q^{(1)}\) to obtain \(q^{(2)}\), and \(q^{(2)}\) to obtain \(q^{n+1}\). We make use of UFL's replace feature to avoid writing out the form repeatedly.

q1 = Function(V); q2 = Function(V)
L2 = replace(L1, {q: q1}); L3 = replace(L1, {q: q2})

We now declare a variable to hold the temporary increments at each stage.

dq = Function(V)

Since we want to perform hundreds of timesteps, ideally we should avoid reassembling the left-hand-side mass matrix each step, as this does not change. We therefore make use of the LinearVariationalProblem and LinearVariationalSolver objects for each of our Runge-Kutta stages. These cache and reuse the assembled left-hand-side matrix. Since the DG mass matrices are block-diagonal, we use the 'preconditioner' ILU(0) to solve the linear systems. As a minor technical point, we in fact use an outer block Jacobi preconditioner. This allows the code to be executed in parallel without any further changes being necessary.

params = {'ksp_type': 'preonly', 'pc_type': 'bjacobi', 'sub_pc_type': 'ilu'}
prob1 = LinearVariationalProblem(a, L1, dq)
solv1 = LinearVariationalSolver(prob1, solver_parameters=params)
prob2 = LinearVariationalProblem(a, L2, dq)
solv2 = LinearVariationalSolver(prob2, solver_parameters=params)
prob3 = LinearVariationalProblem(a, L3, dq)
solv3 = LinearVariationalSolver(prob3, solver_parameters=params)

We now run the time loop. This consists of three Runge-Kutta stages, and every 20 steps we write out the solution to file and print the current time to the terminal.

t = 0.0
step = 0
while t < T - 0.5*dt:
    solv1.solve()
    q1.assign(q + dq)

    solv2.solve()
    q2.assign(0.75*q + 0.25*(q1 + dq))

    solv3.solve()
    q.assign((1.0/3.0)*q + (2.0/3.0)*(q2 + dq))

    step += 1
    t += dt

    if step % 20 == 0:
        outfile.write(q)
        print("t=", t)

Finally, we display the normalised \(L^2\) error, by comparing to the initial condition.

L2_err = sqrt(assemble((q - q_init)*(q - q_init)*dx))
L2_init = sqrt(assemble(q_init*q_init*dx))
print(L2_err/L2_init)

This demo can be found as a script in DG_advection.py.

References

LeV96 Randall J. LeVeque. High-Resolution Conservative Algorithms for Advection in Incompressible Flow. SIAM Journal on Numerical Analysis, 33(2):627–665, 1996. doi:10.1137/0733033.

SO88 Chi-Wang Shu and Stanley Osher. Efficient Implementation of Essentially Non-oscillatory Shock-Capturing Schemes. Journal of Computational Physics, 77(2):439–471, 1988. doi:10.1016/0021-9991(88)90177-5.
For comparison, consider a simple harmonic oscillator. In this system, we have operators $a$ and $a^\dagger$ satisfying $[a,a^\dagger]=1$, and the Hamiltonian is $H=\omega a^\dagger a$. We can define the vacuum state $|0\rangle$ to be the state of lowest energy, and we can describe this state more explicitly as the one that satisfies $a|0\rangle=0$. Again for comparison, consider the model of a free scalar field. Schematically, the Hamiltonian is $H\sim \int dx\ \big(\nabla\phi(x)\big)^2+m^2\phi^2(x)$, and the equal-time commutation relation is $[\phi(x),\dot\phi(y)]\sim\delta(x-y)$. If we again define the vacuum state $|0\rangle$ to be the state of lowest energy, then we can define creation/annihilation operators $a^\dagger(p)$ and $a(p)$, expressed explicitly in terms of $\phi(x)$, in such a way that the vacuum state satisfies $a(p)|0\rangle=0$. We can also construct states with any specified number of particles by acting on the vacuum state with that number of creation operators, which in turn may be explicitly expressed in terms of the field operators that were used to define the model in the first place. In most interesting QFTs, we don't know how to do this. We still define the model in terms of field operators by specifying their commutation relations and specifying the Hamiltonian, and we can still define the vacuum state to be the state of lowest energy, but we don't know how to characterize the vacuum state (much less states with any given number of particles) in any more explicit way using the field operators. What we can do is consider expressions like $\langle 0|\phi(x)\phi(y)\cdots|0\rangle$ and make some general arguments (like LSZ) about how these vacuum expectation values are related to things of more direct physical interest. With the help of Wick rotation, we can use the path-integral formulation to define expressions like $\langle 0|\phi(x)\phi(y)\cdots|0\rangle$ without knowing anything more about $|0\rangle$ than the fact that it is the state of lowest energy. Then we can extract information about inner products between states with various numbers of particles by indirect arguments like LSZ. I think this is ultimately why we typically want the initial and final states to be ground states in QFT.
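To make the harmonic-oscillator case fully explicit (a standard textbook computation added here, not part of the original answer): with $a=\sqrt{\tfrac{m\omega}{2\hbar}}\left(\hat x+\tfrac{i\hat p}{m\omega}\right)$ and $\hat p=-i\hbar\,\tfrac{d}{dx}$, the condition $a|0\rangle=0$ becomes a first-order ODE that pins down the vacuum wave function completely:

$$\left(x+\frac{\hbar}{m\omega}\frac{d}{dx}\right)\psi_0(x)=0 \quad\Longrightarrow\quad \psi_0(x)=\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}e^{-m\omega x^2/2\hbar}.$$

It is exactly this kind of explicit characterization of $|0\rangle$ that is unavailable in most interacting QFTs.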
Martins's (1997) Correlation Structure Martins and Hansen's (1997) covariance structure: $$V_{ij} = \gamma \times e^{-\alpha t_{ij}}$$ where $t_{ij}$ is the phylogenetic distance between taxa $i$ and $j$ and $\gamma$ is a constant. Keywords models Usage

corMartins(value, phy, form = ~1, fixed = FALSE)
## S3 method for class 'corMartins':
coef(object, unconstrained = TRUE, ...)
## S3 method for class 'corMartins':
corMatrix(object, covariate = getCovariate(object), corr = TRUE, ...)

Arguments value The $\alpha$ parameter phy An object of class phylo representing the phylogeny (with branch lengths) to consider object An (initialized) object of class corMartins corr a logical value. If 'TRUE' the function returns the correlation matrix, otherwise it returns the variance/covariance matrix. fixed an optional logical value indicating whether the coefficients should be allowed to vary in the optimization, or kept fixed at their initial value. Defaults to 'FALSE', in which case the coefficients are allowed to vary. form ignored for now. covariate ignored for now. unconstrained a logical value. If 'TRUE' the coefficients are returned in unconstrained form (the same used in the optimization algorithm). If 'FALSE' the coefficients are returned in "natural", possibly constrained, form. Defaults to 'TRUE' ... some methods for these generics require additional arguments. None are used in these methods. Value An object of class corMartins, or the alpha coefficient from an object of this class, or the correlation matrix of an initialized object of this class. References Martins, E. P. and Hansen, T. F. (1997) Phylogenies and the comparative method: a general approach to incorporating phylogenetic information into the analysis of interspecific data. American Naturalist, 149, 646--667. See Also Aliases corMartins coef.corMartins corMatrix.corMartins Documentation reproduced from package ape, version 3.2, License: GPL (>= 2)
I was experimenting with series and numerically found this gem: $$S=\sum_{n=0}^\infty \frac{2^{2n+1}}{(2n+1)^2 \binom{4n+2}{2n+1}}= \frac14 \left(\frac{\pi^2}{4}+\log^2 (2+\sqrt{3} ) \right)$$ Or, rewriting in hypergeometric form: $${_4 F_3} \left(\frac12, \frac12, 1, 1; \frac34, \frac54, \frac32; \frac14 \right)= \frac14 \left(\frac{\pi^2}{4}+\log^2 (2+\sqrt{3} ) \right)$$ How can we prove this result? I know that: $$\int_0^\infty \frac{t^{2n}}{(1+t)^{4n+2}}dt=\frac{2}{(2n+1) \binom{4n+2}{2n+1}}$$ This gives us: $$S=\int_0^\infty \frac{dt}{(1+t)^2} \sum_{n=0}^\infty \frac{(2t)^{2n}}{(2n+1) (1+t)^{4n}}$$ $$S=\frac12 \int_0^\infty \frac{dt}{t} \tanh^{-1} \left(\frac{2t}{(1+t)^2} \right)$$ Not sure how to find the closed form from here.
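Not a proof, but the identity is easy to check numerically with Python's mpmath (a sanity-check sketch I add here; the terms decay geometrically with ratio about 1/4, so nsum handles the series directly):

from mpmath import mp, nsum, binomial, mpf, pi, log, sqrt, inf

mp.dps = 30  # working precision in digits

# Left-hand side: sum over n of 2^(2n+1) / ((2n+1)^2 * C(4n+2, 2n+1))
S = nsum(lambda n: mpf(2)**(2*n + 1) / ((2*n + 1)**2 * binomial(4*n + 2, 2*n + 1)),
         [0, inf])

# Right-hand side: (pi^2/4 + log(2 + sqrt(3))^2) / 4
rhs = (pi**2 / 4 + log(2 + sqrt(3))**2) / 4

print(S, rhs)        # both are 1.05044480...
print(abs(S - rhs))  # ~1e-30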
Ex.2.4 Q4 Polynomials Solution - NCERT Maths Class 10 Question If two zeroes of the polynomial \({x^4}-6{x^3}-26{x^2} + 138x-35\) are \(2 \pm \sqrt 3 \), find the other zeroes. Text Solution What is known? Two zeroes of the polynomial \(x^{4}-6 x^{3}-26 x^{2}+138 x-35\) are \(2 \pm \sqrt 3 .\) What is unknown? The other zeroes of the given polynomial. Reasoning: The given polynomial is \(x^{4}-6 x^{3}-26 x^{2}+138 x-35\) and two of its zeroes are \(2 \pm \sqrt 3 .\) From the zeroes of a polynomial, you can find a factor of the polynomial. Now divide the polynomial by this factor; you will get the quotient and remainder of the polynomial. Put this into the division algorithm and you will get the other zeroes by factorising the quotient. Steps: \[P\left( x \right) = {x^4}-6{x^3}-26{x^2} + 138x-35\] Zeroes of the polynomial are \(2 \pm \sqrt 3 .\) Therefore, \[\begin{align}\left( {x - 2 + \sqrt 3 } \right)\left( {x-2 - \sqrt 3 } \right) &= {x^2} - 4x + 4 - 3\\&= {x^2} - 4x + 1\end{align}\] is a factor of the given polynomial. To find out the other factor, we have to find the quotient by dividing \({x^4}-6{x^3}-26{x^2} + 138x-35\) by \({x^2}-4x + 1.\) Clearly, by the division algorithm, \[{x^4}-6{x^3}-26{x^2} + 138x-35 = \left( {{x^2}-4x+ 1} \right)\left( {{x^2} - 2x- 35} \right)\] It can be observed that \({x^2} - 2x - 35\) is a factor of the given polynomial and \[\begin{align}{x^2} - 2x - 35 &= {x^2} - 7x + 5x - 35\\ &= \left( {x - 7} \right)\left( {x + 5} \right)\end{align}\] Therefore, the value of the polynomial is also zero when \(x-7 = 0\) or \(x + 5 = 0,\) i.e. \(x=7\) or \(x=-5.\) Hence, 7 and -5 are also zeroes of this polynomial.
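As a quick check of the division step (an added verification, not part of the original solution), expanding the product recovers the original quartic:

\[\begin{align}\left( {x^2} - 4x + 1 \right)\left( {x^2} - 2x - 35 \right) &= {x^4} - 2{x^3} - 35{x^2} - 4{x^3} + 8{x^2} + 140x + {x^2} - 2x - 35\\ &= {x^4} - 6{x^3} - 26{x^2} + 138x - 35\end{align}\]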
This exercise is inspired by exercises 83 and 100 of Chapter 10 in Giancoli's book A uniform disk ($R = 0.85\,\mathrm{m}$; $M = 21.0\,\mathrm{kg}$) has a rope wrapped around it. You apply a constant force $F = 35\,\mathrm{N}$ to unwrap it (at the point of contact ground-disk) while walking 5.5 m. Ignore friction. a) How much has the center of mass of the disk moved? Explain. Now derive a formula that relates the distance you have walked and how much rope has been unwrapped when: b) You don't assume rolling without slipping. c) You assume rolling without slipping. a) I have two different answers here, one of which I guess is wrong: a.1) Here there's only one force to consider in the direction of motion: $\vec F$. Thus the center of mass should also move forward. a.2) You are unwinding the rope out of the spool and thus exerting a torque $FR$ (I am taking counterclockwise as positive); the net force exerted on the CM is zero and thus the wheel only spins and the center of mass doesn't move. The issue here is that my intuition tells me that there should only be spinning. I've been testing the idea with a paper roll and its CM does move forward, but I think this is due to the roll not being perfectly cylindrical; if the unwrapping paper were touching the ground only at a point on icy ground, the roll's CM shouldn't move. 'What's your reasoning to assert that?' Tangential velocity points forwards at distance $R$ below the disk's CM, but this same tangential velocity points backwards at distance $R$ above the disk's CM, and thus translational motion is cancelled out. Actually, we note that opposite points on the rim have opposite tangential velocities (assuming there's no friction so that the tangential speed is constant). My book assumes a.1) is OK. I say a.2) is OK. Who's right then? b) We can calculate the unwrapped distance noting that the arc length is related to the radius by the angle (in radians) enclosed: $$\Delta s = R \Delta \theta$$ Assuming constant acceleration and zero initial angular velocity: $$\Delta \theta = 1/2 \alpha t^2 = 1/2 \frac{\omega}{t} t^2 = 1/2 \omega t$$ By Newton's second law (rotation) we can solve for $\omega$ and then plug it into the above equation: $$\tau = FR = I \alpha = I \frac{\omega}{t} = 1/2 M R^2 \frac{\omega}{t}$$ $$\omega = \frac{2F}{M R}t$$ Let's plug it into the other equation: $$\Delta \theta = \frac{F}{M R}t^2$$ We still have to eliminate $t$. Assuming constant acceleration we get, by the kinematic equation (note I am using the time $t$ you take to walk 5.5 m, so that we know how much rope has been unwrapped in that time): $$t^2 = \frac{2M\Delta x}{F}$$ Plugging it into the $\Delta \theta$ equation: $$\Delta \theta = \frac{2\Delta x}{R}$$ Plugging it into the $\Delta s$ equation we get the equation we wanted: $$\Delta s = 2 \Delta x$$ If we calculate both $v$ and $\omega$ we see that $v=R\omega$ is not true, so the disk doesn't roll without slipping. c) Here $v=R\omega$ must be true. We know that if that's the case the tangential velocity must be related to the center of mass' velocity as follows: $$2v_{cm} = v$$ Assuming that the person holding the rope goes at speed $2v_{cm}$ we get: $$\Delta x= 2 \Delta s$$ I get reversed equations at b) and c). How can we explain that difference in both equations beyond the fact of rolling without slipping?
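A quick symbolic check of the algebra in b) (my own sketch with sympy; it only verifies that the stated equations imply $\Delta s = 2\Delta x$, not which physical picture in a) is correct):

import sympy as sp

F, M, R, t = sp.symbols('F M R t', positive=True)

alpha = F*R / (sp.Rational(1, 2)*M*R**2)  # tau = F R = I alpha with I = M R^2 / 2
dtheta = sp.Rational(1, 2)*alpha*t**2     # constant angular acceleration, zero initial omega
dx = sp.Rational(1, 2)*(F/M)*t**2         # linear kinematics with a = F/M
ds = R*dtheta                             # unwrapped arc length

print(sp.simplify(ds - 2*dx))  # 0, i.e. Delta s = 2 Delta x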
The \(R\) space¶ The function space \(R\) (for "Real") is the space of functions which are constant over the whole domain. It is employed to model concepts such as global constraints. An example:¶ Warning This section illustrates the use of the Real space using the simplest example. This is usually not the optimal approach for removing the nullspace of an operator. If that is your only goal then you are probably better placed removing the null space in the linear solver using the facilities documented in the section Solving singular systems. Consider a Poisson equation in weak form: find \(u\in V\) such that

\[\int_\Omega \nabla u \cdot \nabla v\,\mathrm{d}x = -\int_{\Gamma(3)} v\,\mathrm{d}s + \int_{\Gamma(4)} v\,\mathrm{d}s \qquad \forall v \in V,\]

where \(\Gamma(3)\) and \(\Gamma(4)\) are domain boundaries over which the boundary conditions \(\nabla u \cdot n = -1\) and \(\nabla u \cdot n = 1\) are applied respectively. This system has a null space composed of the constant functions. One way to remove this is to add a Lagrange multiplier from the space \(R\) and use the resulting constraint equation to enforce that the integral of \(u\) is zero. The resulting system is: find \(u\in V\), \(r\in R\) such that

\[\int_\Omega \nabla u \cdot \nabla v\,\mathrm{d}x + \int_\Omega r v\,\mathrm{d}x + \int_\Omega u s\,\mathrm{d}x = -\int_{\Gamma(3)} v\,\mathrm{d}s + \int_{\Gamma(4)} v\,\mathrm{d}s \qquad \forall v \in V,\ \forall s \in R.\]

The corresponding Python code is:

from firedrake import *

m = UnitSquareMesh(25, 25)
V = FunctionSpace(m, 'CG', 1)
R = FunctionSpace(m, 'R', 0)
W = V * R
u, r = TrialFunctions(W)
v, s = TestFunctions(W)
a = inner(grad(u), grad(v))*dx + u*s*dx + v*r*dx
L = -v*ds(3) + v*ds(4)
w = Function(W)
solve(a == L, w)
u, s = split(w)
exact = Function(V)
x, y = SpatialCoordinate(m)
exact.interpolate(y - 0.5)
print(sqrt(assemble((u - exact)*(u - exact)*dx)))

Representing matrices involving \(R\)¶ Functions in the space \(R\) are different from other finite element functions in that their support extends to the whole domain. To illustrate the consequences of this, we can represent the matrix in the Poisson problem above as:

\[\begin{pmatrix} A & K \\ K^T & 0 \end{pmatrix},\]

where:

\[A_{ij} = \int_\Omega \nabla\phi_i\cdot\nabla\phi_j\,\mathrm{d}x, \qquad K_{ij} = \int_\Omega \phi_i\,\psi_j\,\mathrm{d}x,\]

where \(\{\phi_i\}\) is the basis for \(V\) and \(\{\psi_i\}\) is the basis for \(R\). Note that there is only a single basis function for \(R\) and \(\psi_i \equiv 1\), hence:

\[K_{i1} = \int_\Omega \phi_i\,\mathrm{d}x,\]

with the result that \(K\) is a single dense matrix column. Similarly, \(K^T\) is a single dense matrix row. Using the CSR matrix format typically employed by Firedrake, each matrix row is stored on a single processor. Were this carried through to \(K^T\), both the assembly and action of this row would require the entire system state to be gathered onto one MPI process. This is clearly a horribly non-performant option. Instead, we observe that a dense matrix row (or column) is isomorphic to a Function and implement these matrix blocks accordingly. Assembling matrices involving \(R\)¶ Assembling the column block is implemented by replacing the trial function with the constant 1, thereby transforming a 2-form into a 1-form, and assembling. Similarly, assembling the row block simply requires the replacement of the test function with the constant 1, and assembling. The one by one block in the corner is assembled by replacing both the test and trial functions of the corresponding form with 1 and assembling. The remaining block does not involve \(R\) and is assembled as usual. Using \(R\) space with extruded meshes¶ On extruded meshes it is possible to construct tensor product function spaces with the \(R\) space. Using the \(R\) space in the extruded direction provides a convenient way of expressing fields that are constant along the extrusion. The example below illustrates how the \(R\) space can be used to compute a vertical average of a three-dimensional DG1 field by projecting the source field on a DG1 x R space.
from firedrake import *

mesh2d = UnitSquareMesh(10, 10)
mesh = ExtrudedMesh(mesh2d, 10, 0.1)
V = FunctionSpace(mesh, 'DG', 1, vfamily='DG', vdegree=1)
f = Function(V)
x, y, z = SpatialCoordinate(mesh)
f.interpolate(sin(2*pi*z))
U = FunctionSpace(mesh, 'DG', 1, vfamily='R', vdegree=0)
g = Function(U, name='g')
g.project(f)
print('f min: {:.3g}, max: {:.3g}'.format(f.dat.data.min(), f.dat.data.max()))
print('g min: {:.3g}, max: {:.3g}'.format(g.dat.data.min(), g.dat.data.max()))
Abbreviation: DLOS A distributive lattice-ordered semigroup is a structure $\mathbf{A}=\langle A,\vee,\wedge,\cdot\rangle$ of type $\langle 2,2,2\rangle$ such that: $\langle A,\vee,\wedge\rangle$ is a distributive lattice, $\langle A,\cdot\rangle$ is a semigroup, and $\cdot$ distributes over $\vee$: $x\cdot(y\vee z)=(x\cdot y)\vee (x\cdot z)$ and $(x\vee y)\cdot z=(x\cdot z)\vee (y\cdot z)$. Let $\mathbf{A}$ and $\mathbf{B}$ be distributive lattice-ordered semigroups. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x\vee y)=h(x) \vee h(y)$, $h(x\wedge y)=h(x) \wedge h(y)$, $h(x\cdot y)=h(x) \cdot h(y)$. Example 1: Any collection $\mathbf A$ of binary relations on a set $X$ such that $\mathbf A$ is closed under union, intersection and composition. Andréka (1991) proves that these examples generate the variety DLOS. $\begin{array}{lr} f(1)= &1\\ f(2)= &6\\ f(3)= &44\\ f(4)= &479\\ f(5)= &\\ \end{array}$ Hajnal Andréka, Representations of distributive lattice-ordered semigroups with binary relations, Algebra Universalis 28 (1991), 12–25.
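A minimal Python sketch of Example 1 (an illustration added here; the set X and the two relations are arbitrary choices): binary relations are sets of ordered pairs, with join, meet and semigroup product given by union, intersection and relational composition.

X = {0, 1, 2}

def compose(R, S):
    # Relational composition: (a, c) is in R;S iff (a, b) is in R and (b, c) is in S
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

R = {(0, 1), (1, 2)}
S = {(1, 1), (2, 0)}

print(R | S)          # join: union of relations
print(R & S)          # meet: intersection of relations
print(compose(R, S))  # product: {(0, 1), (1, 0)}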
This is NP-complete. In fact, it remains NP-complete when $F^+$ is restricted to be in 3-CNF form (not just CNF). The proof is by demonstrating that this problem is at least as hard as testing 3-colorability of a graph. The correspondence is clean and elegant. Let $G=(V,E)$ be an undirected graph. Introduce variables $x_{v,c}$, for $v \in V$ and $c \in \{1,2,3\}$, to represent a 3-coloring of the graph. Here $x_{v,c}$ means that we've given vertex $v$ the color $c$. To represent that each vertex must receive at least one color, we will introduce clauses $x_{v,1} \lor x_{v,2} \lor x_{v,3}$ for each vertex $v$. This gives us $F^+$, i.e., $$F^+ \equiv \bigwedge_{v \in V} (x_{v,1} \lor x_{v,2} \lor x_{v,3}).$$ To represent that no two endpoints of a single edge may receive the same color, we will introduce a clause $\neg x_{u,c} \lor \neg x_{v,c}$ for each edge $(u,v) \in E$. And, to represent that no vertex may receive more than one color, we will introduce a clause $\neg x_{v,c} \lor \neg x_{v,c'}$ for each $c,c' \in \{1,2,3\}$ such that $c\ne c'.$ Let $F^-_2$ denote the corresponding formula. $$F^-_2 \equiv \bigwedge_{(u,v) \in E} (\neg x_{u,c} \lor \neg x_{v,c}) \; \; \land \bigwedge_{v \in V,c\ne c'} (\neg x_{v,c} \lor \neg x_{v,c'}).$$ Then it is easy to see that $F^+ \land F^-_2$ is satisfiable if and only if $G$ has a 3-coloring. In fact, each satisfying assignment of $F^+ \land F^-_2$ corresponds immediately to a 3-coloring of $G$, and vice versa. Therefore, testing the satisfiability of this class of formulae is at least as hard as testing 3-colorability of an undirected graph, so it is NP-hard.
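For concreteness, here is a small Python sketch of the reduction (an added illustration; the literal encoding as (vertex, color, sign) triples is my own hypothetical choice):

from itertools import combinations

def reduction_clauses(V, E):
    # Positive literal: (v, c, True); negative literal: (v, c, False).
    # F+: each vertex receives at least one color
    f_plus = [[(v, c, True) for c in (1, 2, 3)] for v in V]
    # F2-: no edge is monochromatic
    f_minus = [[(u, c, False), (v, c, False)] for (u, v) in E for c in (1, 2, 3)]
    # F2-: no vertex receives two colors
    f_minus += [[(v, c, False), (v, cp, False)]
                for v in V for c, cp in combinations((1, 2, 3), 2)]
    return f_plus, f_minus

# Example: a triangle (3-colorable, so the conjunction is satisfiable)
f_plus, f_minus = reduction_clauses([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
print(len(f_plus), len(f_minus))  # 3 positive clauses, 18 negative clauses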
$$\mathbb{E}\left[\frac{1}{X}\right] \geq \frac{1}{\mathbb{E}[X]}$$ My question is whether a similar inequality exists for the variance of $1/X$? Here is an answer (to a related question) which provides a valid generalization of $\mathbb{E}\left[\frac{1}{X}\right] \geq \frac{1}{\mathbb{E}[X]}$. What is the relation between $\mathbb{E}[X^r]$ and $[\mathbb{E}X]^r$, for all possible values of $r$, when $X$ is a positive random variable? This can be answered by applying Jensen's inequality, based on the convexity or concavity of $x^r$ as a function of $x$, depending on the value of $r$. The following presumes the relevant moments exist. $\mathbb{E}[X^r] \ge [\mathbb{E}X]^r$, for $r \ge 1$ or $r \le 0$ ($x^r$ is convex in both cases). $\mathbb{E}[X^r] \le [\mathbb{E}X]^r$, for $0 \le r \le 1$ ($x^r$ is concave, which includes the frequently used square root). Note that equality holds for $r = 1$, which says $\mathbb{E}X = \mathbb{E}X$, and for $r = 0$, which says $1 = 1$. $\mathbb{E}\left[\frac{1}{X}\right] \geq \frac{1}{\mathbb{E}[X]}$ of course corresponds to $r = -1$. Going back to the OP's original question, as shown by @kjetil b halvorsen, the analog for variance does not hold. But presuming the moments exist, we see by applying the above results with $r = -2$ that $\mathbb{E}\left[\frac{1}{X^2}\right] \geq \left[\frac{1}{\mathbb{E}X}\right]^2$. First, as @Dilip Sarwate comments, for such ratio variables the mean and variance often do not exist, and then there is little to expect. For a detailed discussion of this see "I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that?". But let us assume a case where the expectation and variance exist. Your question is: "My question is whether a similar inequality exists for the variance of $1/X$?" It is not totally clear what you mean by similar, but let us take it literally, that is, is it true that $$\DeclareMathOperator{\V}{\mathbb{V}} \V\left(\frac1{X}\right)\cdot \V(X) \ge 1 \quad \text{?}$$ In that form it is clearly false: for instance, take $X$ to have a uniform distribution on a very short interval close to 1, like $[0.9, 1.1]$. For that case the product of the two variances is about $1.1276\times 10^{-5}$, falsifying the inequality. But maybe you were thinking of some other generalization?
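The counterexample is easy to confirm numerically (a quick Monte Carlo sketch added here):

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.9, 1.1, size=1_000_000)

# Product of variances for the proposed inequality V(1/X) * V(X) >= 1
print(np.var(1.0 / x) * np.var(x))  # about 1.13e-5, far below 1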
According to time evolution, the system changes its state with the passage of time. Is there any difference between time evolution and unitary time evolution? Yes, there is a difference. Unitary time evolution is the specific type of time evolution where probability is conserved. In quantum mechanics, one typically deals with unitary time evolution. Suppose you have a state (at time $t=0$) given by $|\alpha \rangle$. To find the state at a later time $t=T$ given by $|\alpha (T) \rangle$, we apply the (unitary) time evolution operator $U$: $$U|\alpha \rangle = |\alpha (T) \rangle$$ where $$U = e^{-iHT}$$ and $H= $ Hamiltonian of the system, which is Hermitian. Conservation of probability mathematically means: $$\langle \alpha|\alpha \rangle = \langle \alpha (T)|\alpha (T) \rangle$$ Physically, it means that the probability of existence of the quantum system, described initially by the state $|\alpha \rangle$ and later by $|\alpha (T) \rangle$, does not change with time. The quantum system exists at $t=0$ with probability $=1$, and also at $t=T$ with probability $=1$. The state evolves in time from $t=0$ to $t=T$, but no information about the quantum system leaks out during this time interval. The system that existed at $t=0$ continues to exist in its totality later at $t=T$. This is physically meaningful to demand from a theory, because information about a state should not get lost during evolution. Sure, the information might get tangled up as time goes on, but all of it should still be there, in principle. For example, if you burn a book on coal, information inside the book is lost for all practical purposes. But all information still exists, in principle, encoded in the correlations between coal and ash particles. Sometimes, it is easier/useful to explain certain phenomena by abandoning unitary time evolution, for instance with unstable particles or radioactive decay. There, as time goes on, the mother state decays into daughter states. If you observe only the mother state subsystem, it does not undergo unitary time evolution because it loses information about its state with the passage of time. The information is lost to the daughter state subsystems. Probability (of the mother state existing) is not conserved; it decreases (exponentially) with time. If you look at the full system as a whole, evolution is unitary, as expected. But in radioactivity, we often just need to know how the mother state subsystem disintegrates. Quantum mechanics is a probabilistic theory and all probabilities must always add up to 1. This puts a constraint on the theory; as the state of the system evolves in time the total probability must remain fixed. If we denote the state of the system at time $t$ as $|\psi(t)\rangle$ then we can define the time-evolution operator $U(t^\prime)$ as the operator given by $$ U(t^\prime)|\psi(t)\rangle = |\psi(t+t^\prime)\rangle $$ so that it takes a state at time $t$ and gives us the state a time $t^\prime$ later. Now the requirement that probability is conserved tells us that \begin{align} \langle \psi(t) | \psi(t)\rangle &= \langle \psi(t+t^\prime) | \psi(t+t^\prime)\rangle \\ &= \langle \psi(t) |U(t^\prime)^\dagger U(t^\prime)| \psi(t)\rangle \end{align} for all states $ | \psi(t)\rangle$. This implies that $$U(t^\prime)^\dagger U(t^\prime) = 1$$ which implies that $U(t)$ is a unitary operator. Consequently this type of time evolution is known as unitary time evolution.
This has many important results for what is possible when a quantum state is time evolved, such as the no-cloning theorem. Yes, there is a difference between time evolution and unitary time evolution. Non-unitary evolution stems from having a subsystem. Consider a statistical mixture of pure systems, i.e. assume that there is an ensemble of systems so that a fraction $p_1$ is in state $|1>$, a fraction $p_2$ is in state $|2>$, and so on. Then the entire statistical ensemble can be described by a so-called density matrix, $\rho = \sum_i p_i |i><i|$. We can consider the evolution of this density operator, which is often written as the so-called Liouville equation $\frac{d\rho}{dt}=\mathcal{L}\rho$. Here $\mathcal{L}$ is the "Liouvillian", an operator that transitions $\rho$ in time much like $U$ transitions the pure state in time for a pure state. If the dynamics is governed by a Hamiltonian, then the Liouvillian (the "propagator" of the state) is simply its commutation relation with the state, $\mathcal{L}=-i/\hbar (H\rho-\rho H)$. This corresponds to standard Hamiltonian evolution, and is unitary. Consider, however, looking at only a subsystem of the extended system. The state of such a subsystem can be described by a "reduced" density operator, which is really just a density operator (matrix) describing just the subsystem. However, even though the evolution of the extended system is unitary and follows a Hamiltonian, the evolution of the subsystem's density generally does not follow a Hamiltonian and is not unitary. Instead, it is often "dissipative". For example, the subsystem might dissipate energy into the environment and cool down into a thermal state. There is no general simple form for the Liouvillian (the propagator) for subsystems. A relatively simple case is when the subsystem evolves in a Markovian manner, which is often the case for systems equilibrating thermally. Then the Liouvillian has a unitary term, driven by a Hamiltonian just as above, but also a correction: a non-Hamiltonian term, a term which follows a special form called "Lindblad form" rather than the commutation-with-the-state written above. So - non-unitary evolution is actually everywhere. Whenever you can't neglect the interactions with the environment, so that you e.g. equilibrate thermally with it, you have non-unitary evolution (there are a few caveats, but it's almost always true). What's difficult is getting a unitary evolution going. You have to isolate your system from the environment enough to be able to neglect thermalization etc. The biggest difference, perhaps, between these two kinds of evolutions is that non-unitary evolution is (generically) not reversible. That's how you get thermalization. Non-unitary dynamics can also change the populations, reflecting for example a decrease in highly-energetic states as the system cools down towards its ground state.
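A small numerical illustration of the unitary case (my addition; the Hamiltonian is just a random Hermitian matrix): build $U=e^{-iHT}$ and check that $U^\dagger U=1$ and that the state norm is preserved.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Random Hermitian Hamiltonian on a 4-dimensional Hilbert space
A = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

T = 2.7                # arbitrary evolution time (hbar = 1)
U = expm(-1j * H * T)  # time-evolution operator

psi = rng.normal(size=4) + 1j*rng.normal(size=4)
psi /= np.linalg.norm(psi)  # normalized initial state

print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U is unitary
print(np.linalg.norm(U @ psi))                 # 1.0: probability conserved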
Mathematica seems not to know the basic Laplace and inverse Laplace relation $$\mathcal L(E_\alpha[−λt^α],t)(s)=\frac{s^{α-1}}{λ+s^α}$$ surrounding the Mittag-Leffler function (MittagLefflerE). The evaluations

LaplaceTransform[MittagLefflerE[alpha, -lambda t^alpha], t, s]
Integrate[Exp[-s t] MittagLefflerE[alpha, -lambda t^alpha], {t, 0, Infinity}]
InverseLaplaceTransform[s^(alpha - 1)/(lambda + s^alpha), s, t]
1/(2*Pi*I)*Integrate[Exp[s t] s^(alpha - 1)/(lambda + s^alpha), {s, -Infinity, Infinity}]

all fail. However, when setting $\alpha=1$ the correct Laplace transform $1/(\lambda +s)$ of the exponential function is recovered. Has anybody had more success with this kind of computation?
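I cannot fix the Mathematica evaluations, but the transform pair itself is easy to confirm numerically in Python with mpmath for the special case $\alpha=1/2$, where $E_{1/2}(z)=e^{z^2}\operatorname{erfc}(-z)$ in closed form (the parameter values below are arbitrary choices):

from mpmath import mp, quad, erfc, exp, sqrt, inf, mpf

mp.dps = 25
lam, s = mpf(1), mpf(2)

def ml_half(z):
    # E_{1/2}(z) = exp(z^2) * erfc(-z), avoiding the power series entirely
    return exp(z**2) * erfc(-z)

lhs = quad(lambda t: exp(-s*t) * ml_half(-lam*sqrt(t)), [0, inf])
rhs = s**(mpf(1)/2 - 1) / (lam + s**(mpf(1)/2))

print(lhs, rhs)  # both equal 1/(2 + sqrt(2)) = 0.2928932188...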
Lie algebras over rings (Lie rings) are important in group theory. For instance, to every group $G$ one can associate a Lie ring $$L(G)=\bigoplus _{i=1}^\infty \gamma _i(G)/\gamma _{i+1}(G),$$ where $\gamma _i(G)$ is the $i$-th term of the lower central series of $G$. The addition is defined by the additive structure of $\gamma _i(G)/\gamma _{i+1}(G)$, and the Lie product is defined on homogeneous elements by $[x\gamma _{i+1}(G),y\gamma _{j+1}(G)]=[x,y]\gamma _{i+j+1}(G)$, and then extended to $L(G)$ by linearity. There are several other ways of constructing Lie rings associated to groups, and there are numerous applications of these. One of the most celebrated is the solution of the Restricted Burnside Problem by Zelmanov; see the book M. R. Vaughan-Lee, "The Restricted Burnside Problem". There are other books related to these rings, for example: Kostrikin, "Around Burnside"; Huppert, Blackburn, "Finite Groups II"; Dixon, du Sautoy, Mann, Segal, "Analytic pro-$p$ Groups".
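For a concrete small example (added for illustration): take the dihedral group of order 8, $G=D_4=\langle r,s\mid r^4=s^2=1,\ srs=r^{-1}\rangle$. Then $\gamma_1(G)=G$, $\gamma_2(G)=\{1,r^2\}$ and $\gamma_3(G)=1$, so

$$L(G)=G/\gamma_2(G)\oplus\gamma_2(G)\cong(\mathbb{Z}/2)^2\oplus\mathbb{Z}/2,$$

with a nontrivial Lie bracket on the degree-one part: $[\,\bar r,\bar s\,]=[r,s]\gamma_3(G)=r^2\neq 1$.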
We are pleased to announce the following mini-workshop. Everyone is welcome to attend.

Chief organizer: Hiroshi Matano

Date: Wednesday, 18 January 2017, 10:00–16:15
Venue: Room 002, Graduate School of Mathematical Sciences Building, The University of Tokyo

Speakers (in order of presentation):
1. Matthieu Alfaro (University of Montpellier), "Long range dispersion vs. Allee effect"
2. Xing Liang (Science and Technology University of China), "The semi-wave solutions of the KPP equations with free boundaries in almost periodic media"
3. Gaël Raoul (Ecole Polytechnique), "Climate change and the impact of a heterogeneous environment"
4. Thomas Giletti (University of Lorraine), "Spreading and extinction in a multidimensional shifting environment"

Program (Wednesday, 18 January):
10:00–11:00 Alfaro
11:15–12:15 Liang
(Lunch)
14:00–15:00 Raoul
15:15–16:15 Giletti

1. Matthieu Alfaro: "Long range dispersion vs. Allee effect"
Abstract: In this talk, we study the balance between long range dispersal kernels and the Allee effect in population dynamics models. To do so, we first revisit the so-called Fujita blow-up phenomena [Fujita(1966)] for the nonlocal diffusion equation. We prove that the Fujita exponent dramatically depends on the behavior of the Fourier transform of the kernel $J$ near the origin, which is linked to the tails of $J$. Then, as an application of the result in population dynamics models, we discuss the so-called hair trigger effect [Aronson-Weinberger(1978)]. Last, we consider the spreading properties of the above equation. Our interest is twofold: we investigate the existence/non-existence of travelling waves, and the propagation properties of the Cauchy problem, more precisely the possibility of acceleration [Hamel-Roques(2010), Garnier(2011)] during an invasion.
Abstract (PDF version): Alfaro

2. Xing Liang: "The semi-wave solutions of the KPP equations with free boundaries in almost periodic media"
Abstract: Consider the following diffusive KPP equation with a free boundary, where μ>0 and a(x) is a positive almost periodic function in x∈\R. Here, as in previous works, we use the concept "semi-wave" to replace "traveling wave", since the profile function of the wave is only defined on the half real line (-∞,0].
Definition: Let (u,h)=(u(x,t),h(t)) be a positive entire solution of (0.1). If $u$ can be written as u(x,t)=v(x-h(t),h(t)), where h(±∞)=±∞, v(ξ,τ)∈C^2((-∞,0]×\R), and v(·,τ) is an almost periodic function in τ from \R to C((-∞,0])∩L^∞((-∞,0)), then $u$ is called an almost periodic traveling wave solution.
In my work, I prove the existence and uniqueness of the semi-wave solutions.
Abstract (PDF version): Liang

3. Gaël Raoul: "Climate change and the impact of a heterogeneous environment"
Abstract: We consider a natural population facing a climate change. To survive climate change, a species can change its range, migrating toward colder regions (see [Berestycki-Diekmann-Nagelkerke-Zegeling(2009)]). This is however not the only possible strategy: a species can also change its phenotypes to be able to tolerate higher temperatures. This evolution is the result of mutations and selections (we neglect here more complicated phenomena, such as phenotypic plasticity or sexual reproduction [Kirkpatrick-Barton(1997)]). These evolutionary dynamics are known to play a role at the ecological scale: it is well documented that the phenotypes of a species are not uniform throughout the species range, and that each individual tends to be adapted to its local environment [Turesson(1922)]. We consider a PDE model, which can be derived as a large population limit of an individual-based model [Champagnat-Méléard(2007)].
In a first study [Alfaro-Berestycki-Raoul, to appear], we show that in linear environments, and in the presence of climate change, the propagation speed can be computed explicitly. This analysis relies on a careful use of Harnack inequalities. We also try to go beyond this first result: large-scale heterogeneities, such as mountains, play an important role in the dynamics of species. In a few examples, we are able to push the analysis further to provide new ideas in a few biologically relevant situations. This, together with the study of multi-dimensional position spaces, is current work also involving the biologist Ophélie Ronce. Finally, I will discuss some ideas to develop an interface dynamics limit that could be useful to link this type of result to presence/absence maps that are used by field biologists working on those questions.
Abstract (PDF version): Raoul

4. Thomas Giletti: "Spreading and extinction in a multidimensional shifting environment"
Abstract: In this talk we will consider a heterogeneous reaction-diffusion equation in a multidimensional or cylindrical domain, where the reaction term depends on a moving variable $x-ct$. More precisely, we study the large time behaviour of solutions of the following Cauchy problem in the spatial domain (x,y)∈\R×ω, where ω⊂\R^{N-1} and N≧1. Such an equation arises in the modelling of the effect of climate change on species ranges in ecology. In such a context, the unknown $u$ stands for a population density, and the parameter c>0 stands for the climate velocity in the x-direction (the variable $x$ may correspond, for instance, to the latitude). Our key assumptions will be that the reaction term is increasing with respect to $x-ct$, and that the limit of f(x-ct,y,u) as x-ct→+∞ (respectively x-ct→-∞) is a monostable function of $u$ (respectively a negative function of $u$). This means that the favourable habitat zone is receding to the right with a constant speed as time passes. We will see that, under the joint influence of the heterogeneity and a weak Allee effect, whether the solution spreads (i.e. converges to a positive steady state which is close to 1 in the favourable zone) depends not only on the shifting parameter $c$ but also on the size of the initial datum $u_0$. This is in sharp contrast with both the KPP (no Allee effect) equation in a shifting environment [Berestycki-Diekmann-Nagelkerke-Zegeling(2009)], and with the monostable equation in a homogeneous environment [Aronson-Weinberger(1978)], where a hair-trigger effect typically appears. We will further analyze the situation by showing the existence of several sharp thresholds between two reasonable outcomes, namely extinction (i.e. convergence to 0) and spreading of the solution, in terms of both the climate velocity and the initial datum. This is a joint work with Juliette Bouhours.
Abstract (PDF version): Giletti

Organizer: Hiroshi Matano
Contact: matanoms.u-tokyo.ac.jp

For directions to the venue, please see https://www.ms.u-tokyo.ac.jp/access/index.html
In 2009, J. Waldvogel and Peter Leikauf found the remarkable Euler-like polynomial, $$F(m)=m^2+m+234505015943235329417$$ which is prime for $m=0\to20$, but composite for $m=21$. Define, $$F(m)=m^2+m+A$$ such that $F(m)$ is prime for $m=0\to n-1$, but composite for $m=n$. Then the least $\color{brown}{A>41}$ (compare to A164926) are, $$\begin{array}{|c|c|l|} \hline n&\lceil\log_{10}A\rceil&A\\ \hline 1 &2&43 \\ 2 &2 &59 \\ 3 &3 &107 \\ 4 &3 &101 \\ 5 &3 &347 \\ 6 &4 &1607 \\ 7 &4 &1277 \\ 8 &5 &21557 \\ 9 &8 &51867197 \\ 10 &6 &844427 \\ 11 &9 &180078317 \\ 12 &10 &1761702947 \\ 13 &10 &8776320587 \\ 14 &14 &27649987598537 \\ 15 &15 &291598227841757 \\ 16 &15 &521999251772081\,(?) \\ 17 &?? &??\\ \,\vdots\\ 21 &21 &234505015943235329417\\ \hline \end{array}$$ where $\lceil x \rceil$ is the ceiling function. Assuming the prime k-tuples conjecture and Mollin's theorem 2.1 in Prime-Producing Quadratics (1997), this shows that the sequence is defined for $n>0$. Questions: Does anyone have the resources to compute $A(16),\,A(17)$, etc.? It seems the second column grows at a rate comparable to the first. By the time it reaches $n=40$ (comparable to Euler's polynomial), what is a ballpark figure for $A$'s number of decimal digits? $40$? $50$?
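Not the resources for $A(16)$ and beyond, but as a starting point here is a short sympy sketch (added here; pure brute force, fast up to $n=7$) that reproduces the first rows of the table:

from sympy import isprime

def first_composite_index(A):
    # Smallest n >= 0 with n^2 + n + A composite; terminates since F(A) = A*(A+2)
    n = 0
    while isprime(n*n + n + A):
        n += 1
    return n

found = {}
for A in range(42, 2000):
    n = first_composite_index(A)
    if n not in found:  # first occurrence = least A for this n
        found[n] = A

print(sorted(found.items()))
# [(0, 42), (1, 43), (2, 59), (3, 107), (4, 101), (5, 347), (6, 1607), (7, 1277)]
# (n = 0 just means A itself is composite; rows 1-7 match the table)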
Difference between revisions of "Geometry and Topology Seminar" (→Spring 2017)

Revision as of 16:52, 11 January 2017

Spring 2017

Jan 20: Carmen Rovi (University of Indiana Bloomington), "The mod 8 signature of a fiber bundle", host: Maxim
Feb 17: Yair Hartman (Northwestern University), "TBA", host: Dymarz
March 3: Mark Powell (Université du Québec à Montréal), "TBA", host: Kjuchukova
March 24: Spring Break
April 21: Joseph Maher (CUNY), "TBA", host: Dymarz
April 28: Bena Tshishiku (Harvard), "TBA", host: Dymarz
(The remaining dates in the schedule are still unfilled.)

Fall Abstracts

Ronan Conlon
New examples of gradient expanding K\"ahler-Ricci solitons
A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud).

Jiyuan Han
Deformation theory of scalar-flat ALE Kahler surfaces
We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kahler surfaces. This is used to prove that for any scalar-flat Kahler ALE surface, all small deformations of complex structure also admit scalar-flat Kahler ALE metrics. A local moduli space of scalar-flat Kahler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kahler ALE surface which deforms to a minimal resolution of \C^2/\Gamma, where \Gamma is a finite subgroup of U(2) without complex reflections. This is joint work with Jeff Viaclovsky.

Sean Howe
Representation stability and hypersurface sections
We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to \infty. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups. The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs -- for example, we show that the "average" smooth hypersurface in \mathbb{P}^n is \mathbb{P}^{n-1}!

Nan Li
Quantitative estimates on the singular sets of Alexandrov spaces
The definition of quantitative singular sets was initiated by Cheeger and Naber. They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the (k,\epsilon)-singular sets are k-rectifiable and such structure is sharp in some sense. This is joint work with Aaron Naber.
Yu Li
In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature admits long time existence of the Ricci flow, it converges to Euclidean space in the strong sense. By the convergence, the mass drops to zero as time tends to infinity. Moreover, in the three-dimensional case, we use Ricci flow with surgery to give an independent proof of the positive mass theorem. A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature.

Peyman Morteza
We develop a procedure to construct Einstein metrics by gluing the Calabi metric to an Einstein orbifold. We show that our gluing problem is obstructed and we calculate the obstruction explicitly. When our obstruction does not vanish, we obtain a non-existence result in the case that the base orbifold is compact. When our obstruction vanishes and the base orbifold is non-degenerate and asymptotically hyperbolic, we prove an existence result. This is joint work with Jeff Viaclovsky.

Caglar Uyanik
Geometry and dynamics of free group automorphisms
A common theme in geometric group theory is to obtain structural results about infinite groups by analyzing their action on metric spaces. In this talk, I will focus on two geometrically significant groups: mapping class groups and outer automorphism groups of free groups. We will describe a particular instance of how the dynamics and geometry of their actions on various spaces provide deeper information about the groups.

Bing Wang
The extension problem of the mean curvature flow
We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in R^3. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is joint work with Haozhao Li.

Ben Weinkove
Gauduchon metrics with prescribed volume form
Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is joint work with Gabor Szekelyhidi and Valentino Tosatti.

Jonathan Zhu
Entropy and self-shrinkers of the mean curvature flow
The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Together with Ilmanen and White, they conjectured that the round sphere minimises entropy amongst all closed hypersurfaces. We will review the basics of MCF and their theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set.

Yu Zeng
Short time existence of the Calabi flow with rough initial data
Calabi flow was introduced by Calabi back in the 1950s as a geometric flow approach to the existence of extremal metrics. Analytically it is a fourth order nonlinear parabolic equation on the Kaehler potentials which deforms the Kaehler potential along its scalar curvature. In this talk, we will show that the Calabi flow admits a short time solution for any continuous initial Kaehler metric. This is joint work with Weiyong He.
Spring Abstracts Bena Tshishiku "TBA" Archive of past Geometry seminars 2015-2016: Geometry_and_Topology_Seminar_2015-2016 2014-2015: Geometry_and_Topology_Seminar_2014-2015 2013-2014: Geometry_and_Topology_Seminar_2013-2014 2012-2013: Geometry_and_Topology_Seminar_2012-2013 2011-2012: Geometry_and_Topology_Seminar_2011-2012 2010: Fall-2010-Geometry-Topology
Difference between revisions of "Geometry and Topology Seminar" (→Spring Abstracts)

Revision as of 16:54, 11 January 2017

Spring Abstracts

Carmen Rovi
The mod 8 signature of a fiber bundle
In this talk we shall be concerned with the residues modulo 4 and modulo 8 of the signature of a 4k-dimensional geometric Poincare complex. I will explain the relation between the signature modulo 8 and two other invariants: the Brown-Kervaire invariant and the Arf invariant. In my thesis I applied the relation between these invariants to the study of the signature modulo 8 of a fiber bundle. In 1973 Werner Meyer used group cohomology to show that a surface bundle has signature divisible by 4. I will discuss current work with David Benson, Caterina Campagnolo and Andrew Ranicki where we are using group cohomology and representation theory of finite groups to detect non-trivial signatures modulo 8 of surface bundles.

Bena Tshishiku
"TBA"
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak, 1 min ago BTW your program looks very interesting, in particular the way to enter mathematics. One thing that seems to be missing is documentation (at least I did not find it). This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for. For example, upon entering $\frac xy$, will it also find $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$? ******* Is it possible to save a link to a particular search query? For example, in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports. When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to. ******* If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. Which means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string: I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead: One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with a somewhat different phrasing of the title is added. So if you spent reasonable time searching and did not find... In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples). Additionally, som... @MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback; I really appreciate it and will seriously look into those points and improve approach0. Give me just some minutes, I will reply to your feedback in our chat. — Wei Zhong, 1 min ago I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward." BTW those animations with examples of searching look really cool. @MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong, 29 secs ago We are an open-source project hosted on GitHub: http://github.com/approach0 You are welcome to send any feedback on our GitHub issue page! @MartinSleziak Currently there is only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when more people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the most basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not get $x$ because approach0 considers them not structurally identical; however, you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it. @MartinSleziak As for the query link, it needs more explanation. Technically, what Google uses in the link you mention is an HTTP GET request, but for mathematics a GET request may not be appropriate since a query has internal structure; a developer would usually use an HTTP POST request with a JSON-encoded body instead. This makes development much easier, because JSON is richly structured and makes it easy to separate math keywords. @MartinSleziak Right now there are two workarounds for the "query link" problem you raised. First is to use the browser back/forward buttons to navigate among the query history. @MartinSleziak Second is to use the command-line tool 'curl' to get search results for a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it would be helpful to add a GET query link so users can refer to a query; I will write this point into the project TODO and improve this later (it just needs some extra effort though). @MartinSleziak Yes, if you search for \alpha, you will get all \alpha documents ranked top, with different symbols such as "a", "b" ranked after the exact matches. @MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them. @MartinSleziak Yes, you can; Greek letters are tokenized to the same thing as normal alphabetic letters. @MartinSleziak As for integral upper bounds, I think it is a problem in a JavaScript plugin approach0 is using; I also observe this issue. The only thing you can do is use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit. @MartinSleziak Yes, it has a threshold now, but this is easy to adjust in the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts from math.stackexchange. This is a very small number, but I will index more posts/pages when the search engine's efficiency and relevance are tuned. @MartinSleziak As I mentioned, the index is too small currently. You will probably get what you want when this project develops to the next stage, which is to enlarge the index and publish. @MartinSleziak Thank you for all your suggestions; currently I just hope more developers get to know this project. Indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published. So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar, 2 hours ago @GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid, 1 hour ago @quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations, which are valid questions on the main site. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.)
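To make the GET-versus-POST point above concrete, here is a minimal sketch of the two ways a math query could travel to a search backend. Everything here is hypothetical: the endpoint, field names, and keyword layout are illustrative only and are not approach0's actual API.

import json
import urllib.parse

query = r"$\frac{x}{y}$"

# 1) GET: the whole query is flattened into the URL, so all structure is
#    lost and every special character must be percent-encoded.
get_link = "https://example.org/search?" + urllib.parse.urlencode({"q": query})
print(get_link)

# 2) POST with a JSON body: each keyword keeps its own type and content,
#    which is easier for a backend to parse than one encoded string.
post_body = json.dumps({
    "page": 1,
    "kw": [
        {"type": "tex", "str": r"\frac{x}{y}"},  # a math keyword
        {"type": "term", "str": "fraction"},     # an ordinary text keyword
    ],
})
print(post_body)

The trade-off is the one described in the chat: a GET link is shareable and bookmarkable, while a JSON body is easier to process server-side; many engines end up supporting both.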
— Martin Sleziak, 57 mins ago "What is your favorite calculus textbook?" is opinion-based and/or too broad for main. If anything, it is a "poll." On tex.se they have polls ("favorite editor/distro/fonts" etc.) while actual questions on these topics are still on-topic on main. Beyond that, it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid, 7 mins ago @quid I will reply here, since I do not want to digress in the comments too much from the topic of that question. Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main site. Which is why I wrote in my comment: "Although not formulated like that". Book recommendations are certainly accepted on the main site, if they are formulated in the proper way. If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main site (although there should not be). I guess some examples can be found here or here. Perhaps it is better to link the search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed. Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously. I saw this kind of poll for the first time on TeX.SE. The poll there concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc. Academia.SE has some questions which could be classified as "demographic" (including gender). @quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar. But that is only anecdotal. And if I am to believe Slovak Wikipedia, it should be Christus mansionem benedicat. From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov." My attempt at an English translation: The priest writes C+M+B on the door (Christus mansionem benedicat - Let Christ bless this house). However, this is often mistakenly explained as 20-G+M+B-16, after the initial letters of the alleged names of the three kings. As you can see there, Christus mansionem benedicat is translated into Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from the initial letters of the translation. It seems they also have other interpretations in Poland. "A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants and the initials are also believed to stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House"). Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany." BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3] A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar). In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing. On Slovakia specifically it says there: The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
Author Message Lonely-Star Tux's lil' helper Joined: 12 Jul 2003 Posts: 82 Posted: Wed Apr 27, 2005 9:12 am Post subject: Mathematical Symbols in Xfig and gnuplot Hi everybody, Studying physics, I am starting to use Xfig and gnuplot. Now I wonder how I can use symbols like omega or phi in text in xfig or in labels in gnuplot. Any help? Thanks! furanku l33t Joined: 08 May 2003 Posts: 902 Location: Hamburg, Germany Posted: Wed Apr 27, 2005 2:40 pm Post subject: Hi! The good old "make-my-graphs-pretty" question. You have several possibilities. gnuplot 1) The enhanced postscript driver Start gnuplot. Try Code: gnuplot> plot sin(x) title "{/Symbol F}(x)" You will get a window with your graph labeled literally as "{/Symbol F}(x)". But now try Code: gnuplot> set term post enh Terminal type set to 'postscript' Options are 'landscape enhanced monochrome blacktext \ dashed dashlength 1.0 linewidth 1.0 defaultplex \ palfuncparam 2000,0.003 \ butt "Helvetica" 14' gnuplot> set output "test.ps" gnuplot> plot sin(x) title "{/Symbol F}(x)" gnuplot> exit View the resulting file "test.ps" in your favorite PostScript viewer: now you have a Greek capital Phi as the label. To learn more about the enhanced possibilities of the postscript driver, read the file "/usr/share/doc/gnuplot-4.0-r1/psdoc/ps_guide.ps" (or whatever gnuplot version you use). Advantages: easy to use, output file easily included in almost every document. Disadvantages: limited possibilities, looks ugly, wrong fonts when included in other documents (esp. LaTeX). 2) The LaTeX drivers I guess you want to include your graph in a LaTeX file (I hope you have learned LaTeX; if not, do so quickly, it's essential for all physics publications!) Again several possibilities... 2a) The "latex" driver, which uses the pictex environment Code: gnuplot> set term latex Options are '(document specific font)' gnuplot> set output "test.tex" gnuplot> plot sin(x) title "$\Phi(x)$" gnuplot> exit Now gnuplot produced a file "test.tex" which you can include in your LaTeX document with \input{test.tex}. Process your LaTeX document and you'll see the graph labeled with the TeX fonts and all the glory you can use to typeset formulas in LaTeX: fractions, integrals, ... everything you can do in LaTeX can be used. Advantage: beautiful output, fonts fitting the rest of your document. Disadvantage: more complicated to use, limited capabilities of the LaTeX picture environment. 2b) Combined LaTeX and PostScript. Almost like 2a), but now the graph is in PostScript; just the labels are set by LaTeX: Code: gnuplot> set term pslatex Terminal type set to 'pslatex' Options are 'monochrome dashed rotate' gnuplot> set output "test.tex" gnuplot> plot sin(x) title "$\Phi (x)$" gnuplot> exit Now the file "test.tex" will contain PostScript specials to draw the graph; the label is still set by LaTeX. Use it in your LaTeX document like before. Advantage: almost unlimited graphics possibilities due to PostScript. Disadvantage: using PostScript means converting the whole document to PostScript afterwards (but that's normal anyway); pdflatex isn't able to process PostScript (well, VTeX's version can, but it's not open source). 3) The fig driver You export your graph from gnuplot into xfig's "fig" file format, which can be useful if you want to modify your graph afterwards, for example to add some text and arrows (which can also be done in gnuplot but is a pain in the ass...)
Code: gnuplot> set term fig textspecial Terminal type set to 'fig' Options are 'monochrome small pointsmax 1000 landscape inches dashed textspecial fontsize 10 thickness 1 depth 10 version 3.2' gnuplot> set output "test.fig" gnuplot> plot sin(x) title "$\Phi (x)$" gnuplot> exit Open your file "test.fig" in xfig and go on as described below. XFig Xfig offers you, almost like gnuplot, the possibility to add Greek symbols as PostScript fonts, and it also has a "special flag" which is meant for using LaTeX code in your illustration; that code is typeset by LaTeX later when compiling your document. 4) The Symbol PostScript font Click in xfig on the large "T" to get the text tool. Click on "Text font (Default)" in the lower right corner and select "Symbol (Greek)". Click somewhere in the image. Now you can type Greek letters. Unlike in gnuplot, they will appear on the screen. Export your file to PostScript and you can use it in your documents like the files generated by gnuplot described in 1) above. 5) The special text flag Note the option "textspecial" in the "set term fig textspecial" command in 3) above. This tells xfig that text set with this flag has a special meaning in some exported formats. You can set it manually in xfig with the button "Text flags" in the lower bar. Set "Special flag" to "Special" in the appearing dialog. Now click somewhere and type something like "$\int_{-\infty}^\infty e^{-x^2} dx$". Now go to "File -> Export" and select one of "Latex picture" (which is like gnuplot's "latex" terminal described above) or "Combined PS/LaTeX (both parts)" (which is like gnuplot's "pslatex" driver, with the only exception that the LaTeX and PostScript code are stored in two separate files. Don't worry, you will just have to include the file ending with "_t" into your LaTeX document; this will automatically include the other file). [Edit:] It may be necessary to set the "hidden" flag in newer versions of xfig to avoid getting both labels, the one set by xfig and the one from LaTeX, on top of each other. You will see that there is also a "Combined PDF/LaTeX (both parts)" export option, which is useful if you want to generate pdf files from your LaTeX sources directly using pdflatex, since that can't include PostScript graphics. On the other hand, you can still make a dvi file from your LaTeX sources and convert that to pdf using dvipdf, or convert your PostScript files to pdf by epstopdf, or ... You see there are a lot of possibilities; I just mentioned the ones I used, which did a good job for me during my diploma thesis in physics, and still do. Feel free to ask if you still have questions, Frank Last edited by furanku on Thu Feb 14, 2008 10:31 am; edited 2 times in total Lonely-Star Tux's lil' helper Joined: 12 Jul 2003 Posts: 82 Posted: Wed Apr 27, 2005 5:14 pm Post subject: Thanks a lot for your help! (it worked) incognito n00b Joined: 15 Jan 2004 Posts: 3 Posted: Wed Apr 27, 2005 11:30 pm Post subject: lurkers thank you furanku, Great post - hopefully the moderators will consider putting it in the Documents, Tips, and Tricks section. incognito adsmith Veteran Joined: 26 Sep 2004 Posts: 1386 Location: NC, USA Posted: Thu Apr 28, 2005 1:11 am Post subject: There's a script which does all that very nicely and automatically... google for "texfig" furanku l33t Joined: 08 May 2003 Posts: 902 Location: Hamburg, Germany Posted: Thu Apr 28, 2005 8:29 am Post subject: Thanks, incognito, but I guess it's not Gentoo-related enough for the tips and tricks section.
But almost all of the new students in our workgroup come up with this question after a while, so I thought that's a nice occasion to write down what I learned about it and simply give them the URL (I guess at least I have to spellcheck it on the weekend; sorry, I'm not a native English speaker and wrote it in a hurry yesterday...) adsmith, that's a nice little script. It includes your exported fig file in a skeleton LaTeX document and processes and previews that. For larger documents I prefer the method using a makefile which does the necessary conversions (I didn't mention that xfig comes with a separate program called "fig2dev" which can do all the exports xfig can do on the command line). That, combined with the preview-latex (screenshot) mode (which displays all your math and graphics inline in [x]emacs; it's now part of auctex), gives me what is, for my taste, the most effective document-writing environment. But as far as I can see, new users are more attracted by kile (a KDE TeX environment, screenshots) or lyx (screenshots), and in that case texfig is a good help to get the LaTeX labels in your figures right. Thanks for your tip! nixnut Bodhisattva Joined: 09 Apr 2004 Posts: 10974 Location: the dutch mountains Posted: Thu Feb 14, 2008 7:59 pm Post subject: Moved from Other Things Gentoo to Documentation, Tips & Tricks. Tip/trick, so moved here
1. Perturbations As already mentioned, the Jahn-Teller effect has its roots in group theory. The essence of the argument is that the energy of the compound is stabilised upon distortion to a lower-symmetry point group. This distortion may be considered to be a normal mode of vibration, with the corresponding vibrational coordinate $q$ labelling the "extent of distortion". There is one condition on the vibrational mode: it cannot transform as the totally symmetric irreducible representation of the molecular point group, as such a vibrational mode cannot bring about any distortion in the molecular geometry (it may lead to a change in equilibrium bond length, but not in the shape of the molecule). $\require{begingroup} \begingroup \newcommand{\En}[1]{E_n^{(#1)}} \newcommand{\ket}[1]{| #1 \rangle} \newcommand{\n}[1]{n^{(#1)}} \newcommand{\md}[0]{\mathrm{d}} \newcommand{\odiff}[2]{\frac{\md #1}{\md #2}}$ In the undistorted geometry (i.e. $q = 0$), the electronic Hamiltonian is denoted $H_0$. The corresponding unperturbed electronic wavefunction is $\ket{\n{0}}$, and the electronic energy is $\En{0}$. We therefore have $$H_0 \ket{\n{0}} = \En{0}\ket{\n{0}} \tag{1}$$ In general, away from $q = 0$, the Hamiltonian, wavefunction, and energy are all functions of $q$. We can expand them as Taylor series about $q = 0$: $$\begin{align} H &= H_0 + q \left(\odiff{H}{q}\right) + \frac{q^2}{2}\left(\frac{\md^2 H}{\md q^2}\right) + \cdots \tag{2} \\ \ket{n} &= \ket{\n{0}} + q\ket{\n{1}} + \frac{q^2}{2}\ket{\n{2}} + \cdots \tag{3} \\ E_n &= \En{0} + q\En{1} + \frac{q^2}{2}\En{2} + \cdots \tag{4} \end{align}$$ In the new geometry (i.e. $q \neq 0$), the Schrodinger equation must still be obeyed and therefore $$H\ket{n} = E_n \ket{n} \tag{5}$$ By substituting equations $(2)$ through $(4)$ into equation $(5)$, one can compare coefficients of $q$ to reach the results: $$\begin{align} \En{1} &= \left< \n{0} \middle| \odiff{H}{q} \middle| \n{0} \right> \tag{6} \\ \En{2} &= \left< \n{0} \middle| \frac{\md^2 H}{\md q^2} \middle| \n{0} \right> + 2\sum_{m \neq n}\frac{\left|\left<m^{(0)} \middle|(\md H/\md q)\middle|\n{0} \right>\right|^2}{\En{0} - E_m^{(0)}} \tag{7} \end{align}$$ The derivation of equations $(6)$ and $(7)$ will not be discussed further here. 1 Distortions that arise due to the $\En{1}$ term are called first-order Jahn-Teller distortions, and distortions that arise from the $\En{2}$ term are called second-order Jahn-Teller distortions. 2. The first-order Jahn-Teller effect Recall that $$E_n = \En{0} + q\En{1} + \cdots \tag{8}$$ Therefore, if $\En{1} > 0$, then stabilisation may be attained with a negative value of $q$; if $\En{1} < 0$, then stabilisation may be attained with a positive value of $q$. These simply represent distortions in opposite directions along a vibrational coordinate. A well-known example is the distortion of octahedral $\ce{Cu^2+}$: there are two possible choices, one involving axial compression, and one involving axial elongation. These two distortions arise from movement along the same vibrational coordinate, except that one has $q > 0$ and the other has $q < 0$. In order for there to be a first-order Jahn-Teller distortion, we therefore require that $$\En{1} = \left<\n{0}|(\md H/\md q)| \n{0}\right> \neq 0 \tag{9}$$ Within group theory, the condition for the integral to be nonzero is that the integrand must contain a component that transforms as the totally symmetric irreducible representation (TSIR).
Mathematically, $$\Gamma_{\text{TSIR}} \in \Gamma_n \otimes \Gamma_{(\md H/\md q)} \otimes \Gamma_n \tag{10}$$ We can simplify this slightly by noting that the Hamiltonian, $H$, itself transforms as the TSIR. Therefore, $\md H/\md q$ transforms as $\Gamma_q$, and the requirement is that $$\Gamma_{\text{TSIR}} \in \Gamma_n \otimes \Gamma_q \otimes \Gamma_n \tag{11}$$ In all point groups, for any non-degenerate irrep $\Gamma_n$, $\Gamma_n \otimes \Gamma_n = \Gamma_{\text{TSIR}}$. Therefore, if $\Gamma_n$ is non-degenerate, then $$\Gamma_n \otimes \Gamma_q \otimes \Gamma_n = \Gamma_q \neq \Gamma_{\text{TSIR}} \tag{12}$$ and the molecule is stable against a first-order Jahn-Teller distortion. Therefore, all closed-shell molecules ($\Gamma_n = \Gamma_{\text{TSIR}}$) do not undergo first-order Jahn-Teller distortions. However, what happens if $\Gamma_n$ is degenerate? Now, the product $\Gamma_n \otimes \Gamma_n$ contains other irreps apart from the TSIR. 2 If the molecule possesses a vibrational mode that transforms as one of these irreps, then the direct product $\Gamma_n \otimes \Gamma_q \otimes \Gamma_n$ will contain the TSIR. In a rather inelegant article, 3 Hermann Jahn and Edward Teller worked out the direct products for every important point group and found that: stability and degeneracy are not possible simultaneously unless the molecule is a linear one... In other words, if a non-linear molecule has a degenerate ground state, then it is susceptible towards a (first-order) Jahn-Teller distortion. Take, for example, octahedral $\ce{Cu^2+}$. This has a $\mathrm{^2E_g}$ term symbol (see this question) - which is doubly degenerate. The symmetric direct product $\mathrm{E_g \otimes E_g = A_{1g} \oplus E_g}$. Therefore, if we have a vibrational mode of symmetry $\mathrm{E_g}$, then distortion along this vibrational coordinate will occur to give a more stable compound. Recall that the vibrational mode cannot transform as the TSIR, so we can neglect the $\mathrm{A_{1g}}$ term. What does an $\mathrm{e_g}$ vibrational mode look like? Here is a diagram: 4 It's an axial elongation, which happens to match what we know of Cu(II). However, there is a catch. The vibrational mode is doubly degenerate (the other $\mathrm{e_g}$ mode is not shown), and any linear combination of these two degenerate vibrational modes also transforms as $\mathrm{e_g}$. Therefore, the exact form of the distortion can be any linear combination of these two degenerate modes. It can also involve negative coefficients, i.e. it might feature axial compression instead of elongation; there is no way to find that out using arguments purely based on symmetry. On top of that, there's also no indication of how much distortion there is. That depends on (amongst other things) the value of $\En{1}$, and all we have said is that it is nonzero - we have not said how large it is. This is what is meant by "impossible to predict the extent or the exact form of the distortion". 3. The second-order Jahn-Teller effect Pearson has written an article on second-order Jahn-Teller effects. 5 For the second-order term, the energy correction is of the form $$E_n = \En{0} + q\En{1} + \frac{q^2}{2}\En{2} + \cdots \tag{13}$$ Here, the $q^2$ term means that $\En{2}$ has to be negative if we want to see a second-order Jahn-Teller distortion. Unlike the first-order case, if $\En{2} > 0$, there will not be any distortion. The second-order correction to the energy comprises two terms.
The first term, $\left<\n{0}|(\md^2 H/\md q^2)|\n{0}\right>$, is always positive. (For a QM exercise - try to prove this!) It may be interpreted as a restoring force that tries to bring the nuclei back to their original positions, and it is related to the fact that if the electronic state remains unperturbed (i.e. $\ket{n} = \ket{\n{0}}$), the unperturbed nuclear positions represent the most stable nuclear configuration. The second term has the form $$\sum_{m \neq n}\frac{\left|\left<m^{(0)} \middle|(\md H/\md q)\middle|\n{0} \right>\right|^2}{\En{0} - E_m^{(0)}} \tag{14}$$ which may seem like a slight monstrosity, but it is actually much easier to analyse than it looks. The summation over $m$ indicates that we are going to count every single electronic state $\ket{m}$ that is not the ground state $\ket{n}$. Since it is a square modulus, the numerator is either zero or positive, and since $\En{0} < E_m^{(0)}$, the denominator is always negative; therefore, this term is necessarily either zero or negative. If $\En{2}$ is to be negative, then we need the second term to dominate the first. For this to occur, there are two prerequisites: The denominator must be small, i.e. $\ket{m}$ must be a low-lying excited state such that the energy gap $\Delta E = \En{0} - E_m^{(0)}$ is small; The numerator must not be zero, i.e. there must be a low-lying excited state of the appropriate symmetry $\ket{m}$ such that $\Gamma_m \otimes \Gamma_q \otimes \Gamma_n$ contains $\Gamma_{\text{TSIR}}$. The first condition usually means that it will suffice to consider the first few excited states. In many of the examples I know of, the excited state that mixes with the ground state is the first excited state. In such cases one can even simply get rid of the sum and set $\ket{m}$ to be the first excited state. These symmetry requirements are much less restrictive than previously, and second-order Jahn-Teller distortions tend to be much more widely seen than first-order distortions. A small selection of compounds in which second-order Jahn-Teller distortions are important are: p-block hydrides, $\ce{PbO}$, $\ce{Hg^2+}$, $\ce{WMe6}$, $\ce{R2Sn=SnR2}$, and anti-aromatic compounds such as cyclobutadiene. Let us use octahedral $\ce{Hg^2+}$ as an illustration. In undistorted $O_\mathrm{h}$ symmetry, $\ce{Hg^2+}$ has a closed-shell $\mathrm{d^{10}}$ configuration and therefore its electronic ground state is $\Gamma_n = \mathrm{A_{1g}}$. However, upon excitation of one electron from the 5d orbitals (specifically the $\mathrm{e_g}$ set) to the 6s orbital (which transforms as $\mathrm{a_{1g}}$), the term symbol changes to $$\Gamma_m = \mathrm{E_g \otimes A_{1g} = E_g} \tag{15}$$ Therefore, a vibrational mode transforming as $\mathrm{E_g}$ will facilitate a second-order distortion, since $$\Gamma_m \otimes \Gamma_q \otimes \Gamma_n = \mathrm{E_g \otimes E_g \otimes A_{1g}} \tag{16}$$ contains the TSIR. Again, there is no way of knowing the exact form or the extent of the distortion; we only know that it transforms as $\mathrm{E_g}$. In the case of Hg(II), the distortion is manifested as an axial compression to give a "2 short, 4 long" coordination geometry, which is often described as "linear". A second factor that favours the distortion is the extremely small 5d-6s gap in mercury, due to relativistic 5d destabilisation and 6s stabilisation. To see the importance of the small $\Delta E$, consider $\ce{Zn^2+}$, which has a larger 3d-4s gap; linear Zn(II) compounds are rare, while linear Hg(II) compounds are the norm. 
Most of the time, the first excited state arises from promotion of an electron from the HOMO to the LUMO. It is easy to show that if this is the case, $$\Gamma_m \otimes \Gamma_n = \Gamma_{\text{HOMO}} \otimes \Gamma_{\text{LUMO}} \tag{17}$$ The second-order Jahn-Teller distortion can then be viewed as a reduction in symmetry, such that the HOMO and the LUMO, which transformed as different irreps in the undistorted geometry, now transform as the same irrep and therefore mix with each other. In the case of $\ce{Hg^2+}$, this is the mixing of the 5d and 6s orbitals described above. This interpretation using the symmetry of individual orbitals, however, only works when the relevant excited state $\ket{m}$ is derived by excitation of an electron! In some (admittedly very rare) cases, it is possible that both $\ket{n}$ and $\ket{m}$ are derived from the same electronic configuration. This is the case for cyclobutadiene, and the Jahn-Teller effect in cyclobutadiene cannot be rationalised using orbital mixing. $\endgroup$ Notes and references (1) For more details, look up perturbation theory in your quantum mechanics book of choice. In such treatments, the perturbation is usually formulated slightly differently: e.g. $H$ is taken as $H_0 + \lambda V$, and the eigenstates and eigenvalues are expanded as a power series instead of a Taylor series. Notwithstanding that, the principles remain the same. (2) There is a subtlety in that the symmetric direct product must be taken. For example, in the $D_\mathrm{\infty h}$ point group, we have $\Pi \otimes \Pi = \Sigma^+ + [\Sigma^-] + \Delta$. The antisymmetric direct product $\Sigma^-$ has to be discarded. (3) Jahn, H. A.; Teller, E. Stability of Polyatomic Molecules in Degenerate Electronic States. I. Orbital Degeneracy. Proc. R. Soc. A 1937, 161 (905), 220–235. DOI: 10.1098/rspa.1937.0142. n.b. Considering that I don't have a more elegant proof for it, I don't have much of a right to call it inelegant. (4) Albright, T. A.; Burdett, J. K.; Whangbo, M.-H. Orbital Interactions in Chemistry, 2nd ed.; Wiley: Hoboken, NJ, 2013. (5) Pearson, R. G. The second-order Jahn-Teller effect. J. Mol. Struct.: THEOCHEM 1983, 103, 25–34. DOI: 10.1016/0166-1280(83)85006-4.
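The direct-product test in equation (11) (and its second-order analogue) is mechanical enough to script. Below is a sketch using the small point group C3v as a stand-in, since its character table fits in a few lines; the same recipe applies to O_h. Note the sketch uses the full direct product and ignores the symmetric-product subtlety of footnote (2).

# character table of C3v; classes: E, 2C3, 3sigma_v (class sizes 1, 2, 3)
sizes = [1, 2, 3]
chars = {
    'A1': [1,  1,  1],   # the TSIR
    'A2': [1,  1, -1],
    'E':  [2, -1,  0],
}
order = sum(sizes)       # group order = 6

def tsir_multiplicity(*irreps):
    # multiplicity of A1 in the direct product of the given irreps,
    # via the usual character projection formula
    prod = [1] * len(sizes)
    for irr in irreps:
        prod = [p * c for p, c in zip(prod, chars[irr])]
    return sum(s * p * a for s, p, a in zip(sizes, prod, chars['A1'])) // order

print(tsir_multiplicity('E', 'E'))         # 1: E (x) E contains the TSIR
print(tsir_multiplicity('E', 'E', 'E'))    # 1: degenerate state + E mode is JT-active
print(tsir_multiplicity('A1', 'E', 'A1'))  # 0: nondegenerate state is JT-inactive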
Let $\gamma=\gamma_1+\gamma_2+\gamma_3$, where $\gamma_1(t)=e^{it}$ for $0\leq t\leq 2\pi$, $\gamma_2(t)=-1+2e^{-2it}$ for $0\leq t\leq 2\pi$, and $\gamma_3(t)=1-i+e^{it}$ for $\pi/2\leq t\leq 9\pi/2.$ Determine all the values assumed by $n(\gamma, z)$ as $z$ varies over $\mathbb{C}\setminus|\gamma|.$ I do not understand this problem very well, but I have tried the following: $n(\gamma, z)=\frac{1}{2\pi i}\int_{\gamma}\frac{dw}{w-z}=\frac{1}{2\pi i}\left[\int_{\gamma_1}\frac{dw}{w-z}+\int_{\gamma_2}\frac{dw}{w-z}+\int_{\gamma_3}\frac{dw}{w-z}\right]=\frac{1}{2\pi i}[2\pi i-2\pi i+2\pi i]=1$ by Cauchy's integral formula, taking $f(w)=1$, which is analytic. Is this OK? Thank you very much.
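A quick numerical experiment (my addition, not part of the original question) can probe how the winding number varies with the position of $z$; the index of each circle only contributes when $z$ lies inside that circle, so the value depends on where $z$ sits. The sample points below are arbitrary.

import numpy as np

t1 = np.linspace(0, 2*np.pi, 4001)
t3 = np.linspace(np.pi/2, 9*np.pi/2, 4001)
pieces = [np.exp(1j*t1),           # gamma_1: unit circle, once, counterclockwise
          -1 + 2*np.exp(-2j*t1),   # gamma_2: circle |w+1|=2, twice, clockwise
          1 - 1j + np.exp(1j*t3)]  # gamma_3: circle |w-(1-i)|=1, twice

def winding(z):
    # total change of arg(w - z) over all pieces, divided by 2*pi
    total = 0.0
    for w in pieces:
        theta = np.unwrap(np.angle(w - z))
        total += theta[-1] - theta[0]
    return round(total / (2*np.pi))

for z in [0, -2, 1 - 1j, 4]:
    print(z, winding(z))

For example, $z=0$ lies inside $\gamma_1$ and $\gamma_2$ but outside $\gamma_3$, so the pieces contribute $1$, $-2$ and $0$ respectively; the implicit assumption that every test point is inside all three circles is exactly what a computation like this can check.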
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$. Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$... What if $\theta$ is irrational... what did I do wrong? 'cause I understand that second one but I'm having a hard time explaining it in words (Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.) DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something. he based much of his success on principles like this, I can't believe I've forgotten it. it's basically saying that it's a waste of time to throw a parade for a scholar or win him or her over with compliments and awards etc., but this is the biggest source of a sense of purpose in the non-scholar. yeah, there is this thing called the internet, and well, yes, there are better books than others you can study from, provided they are not stolen from you by drug dealers. you should buy a textbook that they base university courses on, if you can save for one. I was working from "Problems in Analytic Number Theory", Second Edition, by M. Ram Murty, prior to the idiots robbing me and taking that with them. it was a fantastic book to self-learn from, one of the best I've had actually. Yeah, I wasn't happy about it either; it was more than $200 USD actually. well, look, if you want my honest opinion, self-study doesn't exist: you are still being taught something by Euclid if you read his works, despite him having died a few thousand years ago, and he is as much a teacher as you'll get. and if you don't plan on reading the works of others, to maintain some sort of purity in the phrase "self study", well, no, you have failed in life and should give up entirely.
but that is a very good book regardless of whether you attend Princeton university or not. yeah, me neither, you are the only one I remember talking to on it, but I have been well and truly banned from this IP address for that forum now, which was, as you might have guessed, for being too polite and sensitive to delicate religious sensibilities. but no, it's not my forum, I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age, which according to your profile at the time you were. i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it. well yeah, it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time. i think i was still holding on to some sort of hope of a career in non-stupidity-related fields, which was at some point abandoned. @TedShifrin thanks for that. in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a clearer way of asking. Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, and about half second-year students who'd taken various first-year calculus paths in college. long time ago tho, even the credits have expired. not the student debt though, so i think they are trying to hint i should go back and start from first year and double said debt, but im a terrible student, it really wasn't worthwhile the first time round considering my rate of attendance then, and how unlikely that would be different going back now. @BalarkaSen yeah, from the number theory i got into in my most recent years, it's bizarre how i almost became allergic to calculus. i loved it back then, and for some reason not quite so once i began focusing on prime numbers. What do you all think of this theorem: the number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of the divisors of $n$ if $n$ is odd, and $24$ times the sum of the odd divisors of $n$ if $n$ is even. A proof of this uses (basically) Fourier analysis, even though it looks like a rather innocuous, albeit surprising, result in pure number theory. @BalarkaSen well, because it was what Wikipedia deemed my interests to be categorized as, i have simply told myself that is what i am studying; it really started with me horsing around, not even knowing what category of math to call it.
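As an aside, the four-square count just quoted (Jacobi's theorem) is easy to sanity-check by brute force for small $n$; the sketch below counts ordered, signed representations.

import itertools

def r4(n):
    # count ordered 4-tuples (a, b, c, d) of integers with a^2+b^2+c^2+d^2 = n
    m = int(n**0.5) + 1
    rng = range(-m, m + 1)
    return sum(1 for a, b, c, d in itertools.product(rng, repeat=4)
               if a*a + b*b + c*c + d*d == n)

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for n in range(1, 13):
    if n % 2:  # n odd: 8 times the sum of all divisors
        predicted = 8 * sum(divisors(n))
    else:      # n even: 24 times the sum of the odd divisors
        predicted = 24 * sum(d for d in divisors(n) if d % 2)
    assert r4(n) == predicted, (n, r4(n), predicted)
print("formula checked for n = 1..12")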
actually, ill show you the exact subject you and i discussed on mmf. that reminds me, you were actually right; i don't know if i would have taken it well at the time tho. yeah, looks like i deleted the stack exchange question on it anyway. i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that that is what it was; that's all i remember lol. @BalarkaSen oh, and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, i agree that pulling up said past conversations isn't productive. absolutely, me too, but would we have it any other way? i mean, i know im like a dog chasing a car as far as any real "purpose" in learning is concerned. i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about. @Daminark The key thing, if I remember correctly, was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$; $2k$ is called the weight) such that the Fourier expansions of $f$ at infinity and at $-1$ have no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$). The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero. I can try to recall more if you're interested. It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, \Im[z] > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane. Paste those two lines, and paste half of the semicircle (from $-1$ to $i$, and then from $i$ to $1$) to the other half by folding along $i$. Yup, that $E_4$ and $E_6$ generate the space of modular forms, that type of thing. I think in general, if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series is orthogonal to the space of cusp forms - there's a general story I don't quite know. Cusp forms vanish at the cusps (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps. So it sort of makes sense. Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp.
Indeed, one basically argues as with the maximum modulus principle in complex analysis. @BalarkaSen no, you didn't come across as pretentious at all. i can only imagine that being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know-it-all types that are in every way detestable; you shouldn't be so hard on your character, you are very humble considering your calibre. You probably don't realise how low the bar drops where integrity of character is concerned; trust me, you wouldn't have come as far as you clearly have if you were a know-it-all. it was actually the best thing for me, at the age of 30, to have met a 10 year old that was well beyond what ill ever realistically become as far as math is concerned. someone like you is going to be accused of arrogance simply because you intimidate many. ignore the good majority of that, mate
Suppose $X$ has the $I(\mu, \tau)$ (inverse Gaussian) distribution with density $$\sqrt{\frac{\tau}{2\pi x^{3}}}\exp\left\{-\frac{\tau}{2x\mu^{2}}(x-\mu)^2\right\}, \quad x>0, \quad \tau,\mu>0.$$ I want to find the distribution of $V = \dfrac{\tau(X-\mu)^{2}}{X\mu^{2}}$. My work: The density of $V$ can be written as follows: $$f(v) = f_{X}(g^{-1}(v))\bigg|\frac{dg^{-1}(v)}{dv}\bigg|$$ Solving for $x$ in terms of $V$, we find that $x + \dfrac{\mu^{2}}{x} = 2\mu + \dfrac{\mu^{2}v}{\tau}$. So, my question is how to get $x$ alone on one side of the equation (I still have the $\frac{\mu^{2}}{x}$ term on my left-hand side). Are there any methods to derive the distribution of $V$?
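One pragmatic first step (my suggestion, not part of the original question) is to simulate $V$ and see what it looks like before attempting the change of variables; note that the map $x \mapsto v$ is two-to-one, which is why the single-branch density formula above runs into trouble. The sketch assumes scipy's invgauss(mu=m/lam, scale=lam) matches the density stated here with m = mu and lam = tau, which is worth double-checking against the scipy documentation.

import numpy as np
from scipy import stats

mu, tau = 2.0, 3.0
# sample X from the inverse Gaussian with mean mu and shape tau (assumed
# scipy parameterization; verify against the docs)
X = stats.invgauss.rvs(mu/tau, scale=tau, size=200_000, random_state=0)
V = tau * (X - mu)**2 / (X * mu**2)

# compare a few empirical quantiles of V with those of chi-square(1)
for q in (0.5, 0.9, 0.99):
    print(q, np.quantile(V, q), stats.chi2.ppf(q, df=1))

The empirical quantiles track $\chi^2_1$ closely, which suggests that the two $x$-branches should be handled jointly (this $V$ is the pivotal quantity behind the Michael-Schucany-Haas method for generating inverse Gaussian variates).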
Mulliken atomic charges can be defined as[1]: $$q_A = Z_A-\sum_{\mu\in A}\left( \mathbf{P\cdot S} \right)_{\mu\mu} \tag{Szabo 3.196}$$ Here I have used the same notation as in Szabo[1], with $\mathbf{P}$ being the density matrix and $\mathbf{S}$ being the overlap matrix. $Z_A$ is the nuclear charge, and using the Greek letter $\mu$ as an index indicates that we are working in the atomic orbital basis (not in the molecular orbital basis). The sum runs over $\mu\in A$, meaning that we only consider atomic orbitals that are centered on the $A$-th atom. We can therefore note that Mulliken charges are only defined when we use atom-centered basis functions (which we most often do). [1] : Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory, Attila Szabo and Neil S. Ostlund
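A minimal numpy sketch of the formula above, with made-up toy matrices (the numbers are illustrative only, not from a real calculation):

import numpy as np

P = np.array([[1.2, 0.4],
              [0.4, 0.8]])   # density matrix in the AO basis (toy values)
S = np.array([[1.0, 0.3],
              [0.3, 1.0]])   # AO overlap matrix (toy values)
Z = np.array([1.0, 1.0])     # nuclear charges, one entry per atom
ao_atom = np.array([0, 1])   # atom on which each AO is centered

gross = np.diag(P @ S)       # gross populations (P.S)_{mu,mu}
q = np.array([Z[A] - gross[ao_atom == A].sum() for A in range(len(Z))])
print(q, "total charge =", q.sum())   # charges sum to Z_tot - Tr(PS)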
I'm trying to find the unique (up to a constant factor) positive solution to the eigenvalue problem: $$f''(x)-2f'(x)=-\lambda f(x) \ , \lambda>0 ,$$ with boundary conditions $f(1)=0$ and $f'(0)=0$. I was able to solve the similar problem $f''(x)-f'(x)=-\lambda f(x)$ with the same boundary conditions. In that case $f(x)=e^{\frac{x}{2}}(k\cos(kx)-\frac{1}{2}\sin(kx))$ where $k\approx 1.166$ is the smallest positive solution of $k\cos(k)-\frac{1}{2}\sin(k)=0$ and $\lambda=\frac{1}{4}(1+4k^2)$. The analogous solution for the case with the $-2f'(x)$ term is $f(x)=e^x(k\cos(kx)-\sin(kx))$. However, there doesn't seem to be a suitable value of $k$ that satisfies the boundary conditions and makes $f$ a positive (or negative) function. Any suggestions on how to proceed?
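One way to proceed numerically (my suggestion) is a shooting method: integrate $f'' = 2f' - \lambda f$ from $x=0$ with $f(0)=1$, $f'(0)=0$, and search for $\lambda$ with $f(1)=0$. Note that the oscillatory ansatz $e^x(k\cos(kx)-\sin(kx))$ silently assumes $\lambda = 1 + k^2 > 1$; the root found below sits exactly at the boundary of that assumption, which suggests checking the double-root candidate $f(x)=e^x(1-x)$, $\lambda = 1$, by hand.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def f_at_1(lam):
    # integrate the ODE f'' = 2 f' - lam f with f(0) = 1, f'(0) = 0
    sol = solve_ivp(lambda x, y: [y[1], 2.0*y[1] - lam*y[0]],
                    (0.0, 1.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]   # value of f at x = 1; we want this to vanish

lam = brentq(f_at_1, 0.5, 2.0)   # bracket chosen by inspecting signs of f(1)
print(lam)                       # ~1.0

Indeed, $f(x)=e^x(1-x)$ satisfies $f'(0)=0$ and $f(1)=0$, is positive on $[0,1)$, and a direct substitution gives $f''-2f'=-f$, so $\lambda=1$; this corresponds to the repeated root $r=1$ of the characteristic equation, the case the complex-root ansatz misses.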
This is likely due to a truncation of your sine wave in the time interval you have chosen, such that the interval does not contain an integer number of sine-wave cycles. I am guessing that the frequency view you provide above is a subsequent measurement (DFT) of the resulting time domain waveform you have created, and not a view of the original frequency function you are using to create the time domain waveform (as the original function in the frequency domain would not have those artifacts!). That said, the specific answer will depend on how exactly you create and repeat/extend the final time domain waveform from the fixed-length IFFT result that you get, but below are some insights on the effects of waveform truncation in the time domain that may help you to understand how you may be getting such artifacts. A direct and concise mathematical description of the frequency effects of waveform truncation can be derived by describing your fixed-length (sampled) waveform in the time domain as an infinite-length, untruncated waveform multiplied by a rectangular (boxcar) window. Multiplication in the time domain is convolution in the frequency domain, so the result of a boxcar window truncation is the convolution of the spectrum you would get without truncation with a sinc function (or, when sampled, the approximation of a sinc function). I like to view the resulting effect by picturing the time domain waveform extending from minus infinity to positive infinity in time, made up of your specific waveform snippet (the IFFT result) continuously repeating in time. For a DFT this analogy is mathematically valid, meaning the DFT over the finite time interval matches the DTFT of the same sequence repeated in time (except that the DTFT is a continuous function and the DFT consists of samples of this function - see below; repetition in the time domain over an interval T results in an FT with discrete impulses in the frequency domain spaced at 1/T). From this view it may be clearer to see the effects of truncation: if we had 1.5 cycles of a sine wave in your time interval, the sine wave would suddenly snap every 1.5 cycles in the repeating waveform, requiring higher frequencies to achieve this "snap" that takes place in time. In contrast, if you had an exact integer number of cycles (for a sampled system, samples 0 to N-1, where sample 0 would be identical to sample N for a pure undistorted sine wave), then the FT result (and DFT and DTFT) would only exist at one frequency location, with 0 elsewhere, as expected for a pure tone. FT, DTFT, DFT Background I reference the FT, DTFT, and DFT above, so I included this primer below to help clear up any confusion for those less familiar. What is shown is the FT (or CTFT), DTFT, and DFT for a pure tone (that in "truth" extends from minus infinity to infinity in time), along with a modulated waveform of finite bandwidth. In the Fourier transforms, the tone is shown in blue while the modulated waveform is shown in red, and $\omega_s$ is the sampling frequency. The CTFT is the Continuous Time Fourier Transform, or Fourier Transform (FT) for short. $$ X(\omega) = \int_{t=-\infty}^\infty x(t)e^{-j\omega t}dt $$ Observe: continuous in time -> aperiodic in frequency aperiodic in time -> continuous in frequency The DTFT is the Discrete Time Fourier Transform; the time interval still goes from minus infinity to infinity, but the system is now sampled. It is a sampled FT, and sampling creates repetition in frequency.
$$ X(\omega) = \sum_{n=-\infty}^\infty x[n]e^{-j\omega n} $$ Observe: discrete in time -> periodic in frequency aperiodic in time -> continuous in frequency The DFT is the Discrete Fourier Transform; the signal is defined over a finite time interval. There is an implied periodicity in time, meaning that if you made a waveform by repeating this one in time from minus infinity to infinity, the DTFT of this periodic waveform would match the DFT at the DFT sample locations (see plot above). $$ X[k] = \sum_{n=0}^{N-1}x[n]\, e^{-j 2\pi k n/N} $$ Observe: discrete in time -> periodic in frequency (implied) periodic in time (implied) -> discrete in frequency *Note: the time domain and frequency domain waveforms are not really periodic, as they only exist over a finite number of samples - but I like to make this interpretation, as if you did repeat the waveform in both domains you would get the same-valued discrete samples (but they would be continuous waveforms with zero values in between). This interpretation has been extremely helpful for me in working with mixed-signal (analog/digital) systems.
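A short numerical illustration of the truncation effect described above (my addition): an integer number of cycles in the window yields a single clean spectral line, while a non-integer count smears energy across many bins.

import numpy as np

N = 256
n = np.arange(N)
clean = np.sin(2*np.pi * 8.0 * n / N)   # exactly 8 cycles in the window
leaky = np.sin(2*np.pi * 8.5 * n / N)   # 8.5 cycles: the ends do not line up

for name, x in (("8.0 cycles", clean), ("8.5 cycles", leaky)):
    mag = np.abs(np.fft.rfft(x))
    significant = np.sum(mag > 0.01 * mag.max())  # bins above 1% of the peak
    print(name, "-> bins with significant energy:", significant)
# the clean tone occupies a single bin; the truncated tone leaks into many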
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Ex.15.1 Q20 Probability Solution - NCERT Maths Class 10 Question Suppose you drop a die at random on the rectangular region shown in figure. What is the probability that it will land inside the circle with diameter \(1\rm{m}\)? Text Solution What is known? A die is dropped at random on the rectangular region as shown in figure. Length of rectangular region \(=3\,\rm{m}\) Breadth of rectangular region \(=2\,\rm{m}\) Diameter of the circle \(=1 \,\rm{m}\) ∴ Radius of the circle \(=0.5\,\rm{m}\) What is unknown? The probability that the die dropped at random will land inside the circle with diameter \(1\,\rm{m}\). Steps: Area of rectangular region \[\begin{align} & = L\times B \\ & = 3\times 2 \\ & = 6\,{\text{m}}^{2} \\\end{align}\] Diameter of circular region \(=1 \rm{m}\), so the radius of the circular region \(=\frac{1}{2}\text{m}\) \[\begin{align}\text{Area of circular region}&= \pi r^{2} \\ & =\pi \times {{\left( \frac{1}{2} \right)}^{2}} \\ &=\frac{\pi }{4}\,{\text{m}}^{2}\end{align}\] \[\begin{align} \text{Probability that it will land inside the circle} & =\frac{\text{Number of favourable outcomes}}{\text{Number of possible outcomes}} \\ & =\frac{\text{Area of circular region}}{\text{Area of rectangular region}} \\ & =\frac{\frac{\pi }{4}}{6} \\ & =\frac{\pi }{24} \\ \end{align}\] The probability that it will land inside the circle is \(\begin{align}\frac{\pi}{24}\end{align}\)
The proof of Lemma $9.7$ in Haim Brezis' Functional Analysis, Sobolev Spaces and Partial Differential Equations argues as follows: For an element $u \in H_0^1(\Omega)$ we define $D_h u= \frac{u(x+h)-u(x)}{|h|}$, for $h \in \mathbb{R}^n$. Let's assume that $D_hu \in H_0^1(\Omega)$ and that $$ \|D_hu\|_{H^1(\Omega)} \leq \| u\|_{H^2(\Omega)}.$$ Thus there exists a sequence $h_n \to 0$ such that $D_{h_n}u$ converges weakly to some $g \in H_0^1(\Omega)$ (since $H_0^1(\Omega)$ is a Hilbert space). Could someone explain to me why we have this result about weak convergence?
Cross-posted on MathOverflow. Ergodic Theorem A random walk on a finite group $G$ driven by a probability $\nu\in M_p(G)$ is ergodic if $\operatorname{supp}(\nu)$ is not concentrated on a proper subgroup $S\subset G$ nor on a coset of a proper normal subgroup $N\triangleleft G$. In this case the convolution powers of $\nu$ converge to the uniform distribution $\pi$ on $G$: $$\nu^{\star k}\rightarrow \pi.$$ Where $\|\cdot \|=\frac12\|\cdot\|_{\ell_1}$, $$(\nu\star \nu)(g)=\sum_{t\in G}\nu(gt^{-1})\nu(t),$$ $d_\alpha$ is the dimension of a representation $\rho_\alpha:G\rightarrow \operatorname{GL}(V)$, $$\hat{\nu}(\rho)=\sum_{t\in G}\nu(t)\rho(t),$$ and $T^*$ denotes the conjugate transpose of $T$ in $\operatorname{GL}(V)$, Diaconis & Shahshahani proved the following: Upper Bound Lemma Where $\operatorname{Irr}(G)\backslash \tau$ is the set of non-trivial unitary irreducible representations of $G$: $$\|\nu^{\star k}-\pi\|^2\leq \frac{1}{4}\sum_{\rho_\alpha\in \operatorname{Irr}(G)\backslash \tau}d_\alpha \operatorname{Tr}[\widehat{\nu}(\rho_\alpha)^k(\widehat{\nu}(\rho_\alpha)^*)^k].$$ The Upper Bound Lemma still holds if the random walk driven by $\nu$ is not ergodic. Question: Can the Upper Bound Lemma be used to prove the Ergodic Theorem? Can the Upper Bound Lemma show that for $\nu^{\star k}$ to converge to $\pi$ it is necessary that $\nu$ is not supported on a subgroup (irreducibility)? I suspect aperiodicity (not concentrated on a coset of a normal subgroup) might be harder. My own MSc thesis should be a good reference for some of this.
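For intuition (my addition, an illustration only, not an answer to the question): on an abelian group such as Z_n every irreducible representation is a 1-dimensional character, so the upper bound lemma reduces to a sum of powers of DFT coefficients of $\nu$, and both sides are easy to compute.

import numpy as np

n = 9                        # odd, so the +-1 walk below is ergodic on Z_n
nu = np.zeros(n)
nu[[1, n - 1]] = 0.5         # step +1 or -1 with probability 1/2 each
nu_hat = np.fft.fft(nu)      # nu_hat[j] = Fourier coefficient at character j

for k in (5, 20, 80):
    conv_k = np.real(np.fft.ifft(nu_hat**k))      # convolution power nu^{*k}
    tv = 0.5 * np.abs(conv_k - 1.0/n).sum()       # ||nu^{*k} - pi||
    ubl = np.sqrt(0.25 * (np.abs(nu_hat[1:])**(2*k)).sum())
    print(k, tv, ubl)
# With even n the same walk is periodic (a parity obstruction), the
# coefficient at j = n/2 has modulus 1, and the bound correctly fails to
# decay; whether observations like this yield a general proof of the
# ergodic theorem is exactly the question above.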
You are likely familiar with the Doppler effect. You might have experienced it as a change in pitch of a car horn, ambulance siren or train whistle as the car/ambulance/train moved past you. In scientific terms, this change in pitch is described as a shift in the frequency of the sound. The equation for the observed sound frequency $f$, as seen by a receiver moving at a speed $v_r$ relative to the air, is\begin{equation} f = \frac{c+v_r}{c+v_s}f_0, \label{1} \end{equation} where the sound is originally emitted at a frequency $f_0$ by a source that moves at a speed $v_s$ relative to the air. Here, $c$ is the speed of sound. We focus on motion along a straight line, where we define $v_r$ as positive when the receiver moves toward the source and $v_s$ as positive when the source moves away from the receiver. A classical application of \eqref{1} is the calculation of the speed of a train moving past a stationary receiver. Let the observed frequency of the train whistle as the train moves towards the receiver be $f_1$ and the observed frequency as the train moves away from the receiver be $f_2$. In this case, we can use \eqref{1} to obtain expressions for $f_1$ and $f_2$: $$ \begin{align} f_1&= \frac{c}{c-v}f_0, \\ f_2&= \frac{c}{c+v}f_0, \end{align} $$ where $v$ is now the speed of the train. We use these two equations to eliminate the unknown $f_0$ and solve the resulting equation for $v$:\begin{equation} v = \frac{f_1 - f_2}{f_1+f_2}c. \label{2} \end{equation} We can now determine the speed of a train $v$ using \eqref{2}. However, there will be a slight twist. We will numerically analyse experimental data in the form of a sound recording of a passing train to compute $f_1$ and $f_2$. Subsequently, this will yield $v$. To do this, we will use an implementation of a powerful tool known as the fast Fourier transform (FFT), found in the numpy library. Essentially, this is nothing but a computationally efficient way of calculating the discrete Fourier transform (DFT), a discrete approximation of the continuous Fourier transform. A sound file is represented in Python as a vector $\vec{x}$ with $N$ elements, where each element is the sound amplitude sampled at time intervals $\Delta t$. The DFT of $\vec{x}$ is then also a vector with $N$ elements. We call it $\vec{X}$. Suppose now that we know $\vec{X}$. Then we can compute each element $x_n$ in $\vec{x}$ by applying the formula $$x_n = \frac{1}{N} \sum_{k=0}^{N-1}X_k \exp\left(i2\pi\frac{k}{N\Delta t}n\Delta t\right).$$ Let us look closer at this expression and try to figure out what it means. What it tells us is quite simply that $\vec{x}$ is a superposition of exponential functions with different frequencies $f_k = \frac{k}{N\Delta t}$ and amplitudes $X_k$. Therefore, we can view the squared magnitudes $|X_k|^2$ as a measure of the "weight of the frequency $f_k$" in $\vec{x}$! Q: How do we calculate $\vec{X}$ to begin with? A: We could apply the formula for the discrete Fourier transform, $$X_k =\sum_{n=0}^{N-1}x_n \exp\left(-i2\pi\frac{k}{N\Delta t}n\Delta t\right).$$ This requires $\mathcal{O}(N^2)$ operations. In contrast, the FFT is a more computationally efficient way to calculate $\vec{X}$, requiring only $\mathcal{O}(N \ln N)$ operations.
There are several FFT algorithms, and many make use of the fact that the exponentials can all be written as $$\left(\exp\left(-\frac{2\pi i}{N}\right)\right)^{kn}.$$ In Python, you can calculate $\vec{x}$ from $\vec{X}$ by using x = numpy.fft.ifft(X), or the other way round, using X = numpy.fft.fft(x). Assume now that we store the sound of the train as it moves towards the observer in the vector sample1. Likewise, we store the sound of the train as it moves away from the observer in sample2. We calculate the FFTs of these signals and store them in the vectors p1 and p2 respectively.

p1 = fft(sample1)
p2 = fft(sample2)

To obtain a measure of the magnitude of the amplitudes, we calculate their absolute values squared, element by element:

P1 = np.absolute(p1)**2
P2 = np.absolute(p2)**2

Finally, we calculate the frequency corresponding to each of the elements in P1 and P2.

f = np.linspace(0, N-1, N)/(N*dt)

All the details discussed above are implemented in the function fftwrapper() below. A few more technical details are required about how to import the audio file etc., but this will not be discussed here. However, you are encouraged to look at the code and see if you can make sense of it.

%matplotlib inline
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft, ifft
import scipy.io.wavfile as wav
from IPython.core.display import HTML, display

# Set common figure parameters:
newparams = {'axes.labelsize': 11, 'axes.linewidth': 1, 'savefig.dpi': 300,
             'lines.linewidth': 1.0, 'figure.figsize': (8, 3),
             'ytick.labelsize': 10, 'xtick.labelsize': 10,
             'ytick.major.pad': 5, 'xtick.major.pad': 5,
             'legend.fontsize': 10, 'legend.frameon': True,
             'legend.handlelength': 1.5}
plt.rcParams.update(newparams)

def fftwrapper():
    """
    Output
    P1 : 1D vector
    P2 : 1D vector
    f  : 1D vector
    Example usage
    (P1, P2, f) = fftwrapper()
    """
    N = 80000        # Number of samplings in sample1 and sample2
    shift1 = 190000  # First index of sample1
    shift2 = 320000  # First index of sample2
    fcutoff = 700    # Highest frequency in returned spectrum

    # Load sound file and convert from stereo to mono
    Fs, ystereo = wav.read('lwrhuntrd-ns197.wav', 'r')
    ymono = (ystereo[:, 0] + ystereo[:, 1])/2
    ymono = ymono/max(abs(ymono))
    deltat = 1/Fs
    sample1 = ymono[shift1:N+shift1-1]
    sample2 = ymono[shift2:N+shift2-1]

    # Do FFTs
    p1 = fft(sample1)
    p2 = fft(sample2)
    P1 = np.absolute(p1)**2
    P2 = np.absolute(p2)**2
    f = np.linspace(0, N-1, N)/(N*deltat)

    # Crop vectors to the sizes we are interested in
    ifcutoff = np.nonzero(abs(f-fcutoff) == min(abs(f-fcutoff)))[0] - 1
    f = f[0:ifcutoff]
    P1 = P1[0:ifcutoff]
    P2 = P2[0:ifcutoff]
    return (P1, P2, f)

We start with a call to fftwrapper():

(P1, P2, f) = fftwrapper()

Then we plot P1 and P2, normalized such that the biggest elements in the plotted vectors are 1:

plt.plot(f, P1/max(P1), f, P2/max(P2))
plt.xlabel(r"$f$ (Hz)")
plt.ylabel(r"$P/P_{max}$")
plt.legend(['Sample 1', 'Sample 2'], loc=2)

The two samples contain frequency peaks. The peaks in sample2 are shifted towards smaller frequencies in relation to the peaks in sample1. This is consistent with what we hear when a train passes by: the sound it makes as it moves away from us is more low-pitched than when it moves towards us. We choose the frequency $f_1$ corresponding to the tallest peak of sample1 in the plot. We see that the corresponding peak in sample2 is also the tallest, and we denote its frequency by $f_2$. We locate $f_1$ and $f_2$ as follows.
f1 = f[np.argmax(P1)]
f2 = f[np.argmax(P2)]
print("f1 = %f, f2 = %f" % (f1, f2))

f1 = 660.397500, f2 = 619.880625

Notice that we could not expect in advance that the tallest peak in sample1 would also be the tallest in sample2. In principle, this is not a given, and we needed the plot to confirm it. Having found $f_1$ and $f_2$, we can calculate $v$ using \eqref{2}. We define the speed of sound $c$ to be 340.29 m/s and do the following:

c = 340.29             # Speed of sound [m/s]
v = (f1-f2)/(f1+f2)*c  # Speed of train [m/s]
v = 3.6*v              # Speed of train [km/h]
print("Speed of train is %0.2f km/h." % v)

Speed of train is 38.77 km/h.

To run this on your own computer, remember to download the sound file 2wa_lwrhuntrd_ns197.wav$^1$ into the same directory as the iPython Notebook file. Listen closely to the sound file. Does it sound reasonable that the train is moving at 38.77 km/h?

$^1$ Courtesy of David Safdy and Greg Lavoie of fwarailfan.net
What is the purpose of normalizing the signal? If we have two signals on hand, how is it used when comparing these two signals?

Normalization is basically bringing the two signals to the same range, or to a predefined range. A typical example of a predefined range is the statistical version of normalization: transforming the signal so that its mean is $0$ and its standard deviation is $1$, which puts it in a canonical form. A simpler variant consists of subtracting the minimum value and dividing by the range (the difference between the maximum and minimum values), which maps the signal onto $[0, 1]$. Transforming all signals to such a canonical form eases and robustifies the process of comparison, as well as serving different needs such as visualization and analysis. In terms of image normalization, the effect would be the following: [un-normalized and normalized example images] Note that the distinction between the colors is clearer and more apparent.

user Fat32 had an answer that he/she deleted that was fine. when comparing two different signals that mean two different things (but equally important or with an equal significance regarding the information the signals carry), would you expect to be comparing two signals, one with values in the ballpark of $\pm 1$ and the other with no samples with magnitude of at least $2^{-8}$ or so? Normalization means that you're not comparing an elephant to a bug. at least not with regard to mass. unless you make the bug look as big as the elephant, and then you start comparing what makes the bug different than the elephant.

This question is too vague. Normalization can be done in many ways and for many different purposes. For instance, in addition to the example of @tbirdal for image processing, I could think of other examples. A very simple one is one where we know the signal is \begin{align} y_1(t) &= x_1(t) + n_1(t) \\ y_2(t) &= x_2(t) + n_2(t) \end{align} where $n_1(t)\sim\mathcal{N}(0,\sigma_1^2)$, $n_2(t)\sim\mathcal{N}(0,\sigma_2^2)$, and $x_1(t)$ and $x_2(t)$ are deterministic. In such a case, in order to compare both signals, a logical normalization would be to compute \begin{gather} z_1(t) = \frac{y_1(t)}{\sigma_1} \\ z_2(t) = \frac{y_2(t)}{\sigma_2}. \end{gather} Now $z_1(t)$ and $z_2(t)$ have noise with variance $1$. The advantage is that now if $z_1(t)$ is larger than $z_2(t)$, you know $z_1(t)$ has a better signal-to-noise ratio (SNR).
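A minimal numpy sketch of the two normalizations described above (the function names are ours, for illustration):

import numpy as np

def minmax_normalize(x):
    # Map the signal onto [0, 1]: subtract the minimum, divide by the range
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def zscore_normalize(x):
    # Canonical statistical form: zero mean, unit standard deviation
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

a = np.array([2.0, 4.0, 6.0, 8.0])        # values in the "elephant" range
b = np.array([-4e-3, 2e-3, 1e-2, 4e-3])   # values in the "bug" range
print(minmax_normalize(a))  # [0.  0.333...  0.666...  1.]
print(minmax_normalize(b))  # now on the same [0, 1] scale as a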
Answer $T_{\min} = \frac{\mu_k mg}{\sqrt{1+\mu_k^2}}$ Work Step by Step In order to find the minimum tension needed, we start from the result of the previous problem: $\frac{T}{mg}=\frac{\mu_k}{\cos\theta+\mu_k\sin\theta}$ $T$ is smallest when the denominator $\cos\theta+\mu_k\sin\theta$ is largest. Setting its derivative with respect to $\theta$ equal to zero gives $-\sin\theta+\mu_k\cos\theta=0$, i.e. $\tan\theta=\mu_k$, for which $\cos\theta+\mu_k\sin\theta=\sqrt{1+\mu_k^2}$. Substituting back gives the answer: $T_{\min} = \frac{\mu_k mg}{\sqrt{1+\mu_k^2}}$
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector.
Centrality classes are determined via the energy ...
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$. Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$... What if $\theta$ is irrational... what did I do wrong? 'cause I understand that second one but I'm having a hard time explaining it in words (Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.) DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something. he based much of his success on principles like this I can't believe I've forgotten it it's basically saying that it's a waste of time to throw a parade for a scholar or win him or her over with compliments and awards etc but this is the biggest source of a sense of purpose in the non-scholar yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a textbook that they base university courses on if you can save for one I was working from "Problems in Analytic Number Theory", Second Edition, by M. Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self-learn from one of the best I've had actually Yeah I wasn't happy about it either it was more than $200 USD actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self-study, well, no you have failed in life and should give up entirely.
but that is a very good book regardless of you attending Princeton university or not yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, as you might have guessed, was for being too polite and sensitive to delicate religious sensibilities but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hope of a career in non-stupidity-related fields which was at some point abandoned @TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a clearer way of asking Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, and about half second-year students who'd taken various first-year calculus paths in college. long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back and start from first year and double said debt but im a terrible student it really wasn't worthwhile the first time round considering my rate of attendance then and how unlikely it is that that would be different going back now @BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers What do you all think of this theorem: the number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times the sum of odd divisors of $n$ if $n$ is even A proof of this uses (basically) Fourier analysis Even though it looks like a rather innocuous albeit surprising result in pure number theory @BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really started with me horsing around not even knowing what category of math you call it.
actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about @Daminark The key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by $(1, 2|0, 1)$ and $(0, -1|1, 0)$, then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$; $2k$ is called the weight) such that the Fourier expansions of $f$ at infinity and $-1$ have no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$). The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero. I can try to recall more if you're interested. It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane Paste those two lines, and paste half of the semicircle (from $-1$ to $i$, and then from $i$ to $1$) to the other half by folding along $i$ Yup, that $E_4$ and $E_6$ generate the space of modular forms, that type of thing I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series is orthogonal to the space of cusp forms - there's a general story I don't quite know Cusp forms vanish at the cusps (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps So it sort of makes sense Regarding that particular result, saying it's a weight $2$ cusp form is like specifying a strong decay rate of the cusp form towards the cusp.
Indeed, one basically argues like the maximum value theorem in complex analysis @BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know-it-all types that are in every way detestable, you shouldn't be so hard on your character you are very humble considering your calibre You probably don't realise how low the bar drops where integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know-it-all it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what i'll ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling @heather well, there's a spectrum so, there's things like New Journal of Physics and Physical Review X which are the open-access branch of existing academic-society publishers As far as the intensity of a single photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density $$u=\frac{\hbar\omega}{V}$$ is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di... Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker 3 mins ago > A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers "serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service" for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty > for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals. @BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work... @BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions. Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley. I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change.
Personally I'm mostly opposed to the idea. @EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results... Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry. Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town... @EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
Imagine we've received a signal, $R$, at a distance of $L$ from a transmitter, and we take the FFT of the received signal. This is what the FFT gives us: \begin{align} R(f) &= A e^{j\phi}, \tag{1} \end{align} where $A$ is the amplitude, which we can obtain with the abs() function applied to the FFT, and $\phi$ is the phase, which we can obtain with the angle() function. Now this is my confusion: if the initial phase, i.e. the phase of the transmitted signal, is $\phi_0$, is it correct to say $\phi = \phi_0 + \theta$, where $\theta$ is a number that is added to the transmitted phase over the link? Or can I also write $\phi_{received} = \phi + \phi_0$, where $\phi_{received}$ is the phase of the received signal? Which one of these is correct? Please take into account that I have no information about the transmitted signal; just assume a transmitted signal with initial phase $\phi_0$. Also, just consider a very simple case, like a line-of-sight signal. I'm using MATLAB for the FFT and angle(). Also, the phase shift between these two received signals should be $\phi_2 - \phi_1 = 2\pi f \tau$. So, can I just go and consider the two signals in the time domain and calculate the time difference, $\tau$, between them? Any thoughts? What am I missing here?
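For the line-of-sight case, a small numerical sketch may help fix the conventions (it is in Python/numpy, but abs()/angle() map directly onto MATLAB's abs/angle; all parameter values here are made up):

import numpy as np

fs = 1000.0     # sampling rate (Hz), assumed
f0 = 50.0       # tone frequency (Hz), assumed
tau = 2.0e-3    # true propagation delay between the two signals (s)
t = np.arange(0, 1, 1/fs)

s1 = np.cos(2*np.pi*f0*t)           # reference signal (phase phi_1)
s2 = np.cos(2*np.pi*f0*(t - tau))   # delayed copy (phase phi_2)

S1, S2 = np.fft.fft(s1), np.fft.fft(s2)
k = int(round(f0*len(t)/fs))        # FFT bin of the tone
dphi = np.angle(S2[k]) - np.angle(S1[k])   # phi_2 - phi_1 = -2*pi*f0*tau here
print(-dphi/(2*np.pi*f0))           # recovers tau = 2.0e-3

Note the sign: with this convention a pure delay shows up as a negative phase difference, and since the phase is only known modulo $2\pi$, $\tau$ is unambiguous only if $|2\pi f_0 \tau| < \pi$.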
Let $c$ denote the inverse of $a+b$, i.e., $ac+bc=ca+cb=1$. Multiplying this by $a$ and $b$ from the left and from the right and using $a^2=b^2=0$, we obtain $abc=a$, $cba=a$, and $cab=b$. Let us show that $R=aR\oplus bR$. It follows from $ac+bc=1$ that $R=aR+bR$. If $ax=by$, then $bax=0$, implying $0=cbax=ax$ in view of $cba=a$. The $R$-modules $aR$ and $bR$ are isomorphic. Indeed, the rule $bcx\mapsto ax$, $x\in R$, defines an $R$-module homomorphism $bR=bcR\to aR$ because $bcx=0$ implies $ax=0$ due to $abc=a$. Its kernel vanishes: $ax=0$ implies $abcx=0$ because $abc=a$ and, hence, $0=cabcx=bcx$. It remains to apply Propositions 5 and 6 on page 52 of "Structure of Rings" by N. Jacobson (1964) and to conclude that $R$ is a ring of $2\times2$ matrices. Edit. At the suggestion of Martin Brandenburg, I reproduce the Propositions quite literally: Proposition 5 $\dots$ Conversely, if $\frak A$ is a ring with an identity $1$ and ${\frak A}={\frak I}_1\oplus\dots\oplus{\frak I}_n$ is a direct decomposition of $\frak A$ into right ideals which are isomorphic $\frak A$-modules, then there exists a set of matrix units $\{e_{ij}\mid i,j=1,\dots,n\}$ such that ${\frak I}_j=e_{jj}{\frak A}$. Proposition 6. Let $\{e_{ij}\mid i,j=1,\dots,n\}$ be a set of matrix units in a ring $\frak A$ with identity $1$, and $\frak B$ the subring consisting of the elements which commute with the $e_{ij}$, $i,j=1,\dots,n$. Then every element of $\frak A$ can be written in one and only one way as $\sum b_{ij}e_{ij}$ where $b_{ij}\in{\frak B}$ for all $i$ and $j$. Hence ${\frak A}\cong{\frak B}_n$. The ring $\frak B$ is isomorphic to $e_{11}{\frak A}e_{11}$. Second thought Edit. Those Propositions are old stuff. It is probably better to prove them here in two lines. For any $R$-module $M$, we have $\text{End}_R(\underbrace{M\oplus\dots\oplus M}_n)\cong\text{Matr}_n(\text{End}_RM)$. On the other hand, $\text{End}_RR\cong R$. Question. The problem looks a bit artificial. Is it a known exercise?
First, expand the logarithm into its Taylor series: \begin{align}S&\equiv \sum_{n\ge 1}n\log(1-e^{-nx}) \\&= -\sum_{n\ge 1} n \sum_{k\ge 1} \frac{e^{-knx}}{k} \\&= -\sum_{k \ge 1} \frac{1}{k}\sum_{n\ge 1}n\,e^{-nkx}\end{align} To sum the inner series, differentiate the following identity with respect to $\beta$, $$\frac{1}{1-e^{-\beta}}-1=\... Well, isn't it the case that any sequence $a_n$ that vanishes as $n \to\infty$ will give rise to the same pressure as the model with no field? Indeed, one can bound $$e^{-\beta ha_nn^2} Z_{\Lambda_n,\beta,0} \leq Z_{\Lambda_n,\beta,ha_n} \leq e^{\beta ha_nn^2} Z_{\Lambda_n,\beta,0},$$ so that $$-\beta h a_n + \frac1{n^2}\log Z_{\Lambda_n,\beta,0} \leq \... If you're looking for intuition about the relationship between forces and connections, I think the best you can do is to think long and hard about the following elementary example (adapted from Moriyasu's book 'Elementary Primer for Gauge Theory'): Consider a vector at position $x$. Call its length $f(x)$. We want to know how this length changes as we go ... See: John David Logan, "First Integrals in the Discrete Variational Calculus," Æquationes Mathematicæ 9, no. 2 (June 1, 1973): 210–20. DOI: 10.1007/BF01832628. The intent of this paper is to show that first integrals of the discrete Euler equation can be determined explicitly by investigating the invariance properties of the discrete Lagrangian. The result ... I think you just need to read more widely to encounter a broad range of number systems being useful in at least theoretical physics. I'll hyperlink to discussions of applications, but just mention the number systems themselves, as some have multiple applications. There are uses for $p$-adic numbers, split-complex numbers, dual numbers, quaternions, split-... The short answer is that $\sin^2(ax)/(ax^2)$ becomes increasingly localized at zero. The effective domain shrinks like $1/a$ while its value at zero is $a$. Moreover, $\int_{-\infty}^\infty \sin^2(ax)/(ax^2)\,\mathrm{d}x = \pi$. The rest is math. On page 50, it says we note that for a one-dimensional manifold that is special Kahler, the Ricci scalar is related to the invariant coupling by $R+4=2\gamma_\text{inv}^2$ and we present a three-dimensional plot of the Ricci scalar in fig. 10 On page 55, it says for large $\psi$, the Ricci scalar of the moduli space differs from its limiting value by ... It may sound old, but "An Introduction to Statistical Physics" by A.J. Pointon is a very handy book to absorb the concept of calculation over phase space from the very beginning. The book is suitable for a one-semester course, designed for last-year undergraduate and beginning graduate students. The exposition of this book is exceptionally clear. It ... Consider a real function $\;f(x)\;$ of the real variable $\;x\in\mathbb{R}\;$ for which \begin{align}f(x)\boldsymbol{=}0 \quad & \text{for any} \quad x\boldsymbol{\ne} x_{0} \quad \textbf{and}\tag{01a}\label{01a}\\\mathcal{I}\boldsymbol{=}\!\!\!\!\int\limits_{\boldsymbol{x_{0}-\varepsilon}}^{\boldsymbol{x_{0}+\varepsilon}}\!\!\!f(x)\mathrm dx\... This may not fully answer your question, but there is a general theorem that the lowest-order ($\ell$) non-vanishing moment is independent of origin. Let's see how this helps. If the monopole moment does not vanish, then its value is independent of origin. This means that the dipole moment is not independent of origin. If we change the origin through $x^i\...
Perhaps the simplest argument is the following. Since the Legendre transformation should be $$ L+H ~=~P_1\dot{Q}_1+ P_2\dot{Q}_2, \tag{1}$$ then $$\frac{\partial (L+H)}{\partial Q_1}~=~0. \tag{2} $$ Now the higher Lagrange equation reads $$\frac{\partial L}{\partial Q_1} - \frac{d}{dt} \frac{\partial L}{\partial \dot{Q}_1} + \frac{d^2}{dt^2} \frac{\partial ...
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero). I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x_1, x_2, \dots, x_n) = x_1 x_2 \cdots x_n$ has $2^{n+1}-2$ non-constant polynomials in $R$ dividing it. But, for $n=2$, I can't find any non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$ I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set up some notation. Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ... Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between the 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!) On Monday, I ask for an update and get told they're working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they'll review my case @Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question. Moreover, the title is vague and doesn't clearly ask a question. And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed. If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then yeah, I'd vote to close myself. but if a title inherently states what the op is looking for I hardly see the fact that it has been explicitly restated as a reason for it to be closed, no it was because I originally had a lot of errors in the expressions when I typed them out in latex, but I fixed them almost straight away lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre I have another problem: If Train A leaves at noon from San Francisco and heads for Chicago going 40 mph, and two hours later Train B leaves the same station, also for Chicago, traveling 60 mph, how long until Train B overtakes Train A? @swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point So yeah, at first glance I'd say the answer key is wrong.
The only way I could see it being correct is if they're including the change of time zones, which I'd find pretty annoying But 240 miles seems waaay too short to cross two time zones So my inclination is to say the answer key is nonsense You can actually show this using only the facts that the derivative of a function is zero if and only if it is constant, that the exponential function differentiates to (almost) itself, and some ingenuity. Suppose that the equation starts in the equivalent form $$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$ (Obvi... Hi there, I'm currently going through a proof of why all general solutions to second-order ODEs look the way they look. I have a question about the linked answer. Where does the term $e^{(r_1-r_2)x}$ come from? It seems like it is taken out of the blue, but it yields the desired result.
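One way to see where the $e^{(r_1-r_2)x}$ enters (a sketch of the standard factorisation trick, filling in the step the linked answer skips): write equation $(1)$ as $(D-r_1)(D-r_2)y=0$ with $D=\frac{d}{dx}$, and set $u=y'-r_2y$. Then $u'=r_1u$, so $u=Ce^{r_1x}$, and the remaining first-order equation $y'-r_2y=Ce^{r_1x}$ is solved with the integrating factor $e^{-r_2x}$: $$\left(ye^{-r_2x}\right)'=Ce^{(r_1-r_2)x}.$$ So the exponential is not pulled out of the blue; it is exactly what the integrating factor produces.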
Forward-backward multiplicity correlations in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/$\psi$ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/$\psi$ have been measured with ALICE for Pb-Pb collisions ...
The derivative of $\sin(\omega_o t)$ is $\omega_o\cos(\omega_o t)$. The Fourier transform of $\sin(\omega_o t)$ is $\frac{\pi}{j}[\delta(\omega-\omega_o) - \delta(\omega+\omega_o)]$. Differentiation in the ... In many of the papers it is said that the derivative filter transfer function is given by: $$H(z) = \dfrac{1}{8T}\left(-z^{-2} - 2z^{-1} + 2z + z^{2}\right)$$ But no one gave the detailed information ... I could understand the Savitzky-Golay filter as being a smoothing filter, but then there also seems to be a Savitzky-Golay differentiation filter, though for some reason the details do not seem to be clear. So ... This question is based on the application of the pdf which was an earlier question of mine asked here Confusion regarding pdf of circularly symmetric complex gaussian rv If $v \sim CN(0,2\sigma^2_v)$ ... Suppose I have a digital signal measured with sampling time, $T_s=1$ sec. If I take its derivative, it will, naturally, have $T_s=1$ sec. But what are the implications if I re-sample this derivative ...
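For concreteness, here is a minimal numpy sketch of that transfer function as a five-point time-domain filter, $y[n] = \frac{1}{8T}\left(-x[n-2]-2x[n-1]+2x[n+1]+x[n+2]\right)$, checked against the analytic derivative of a test tone (the sampling rate and tone frequency are made-up values):

import numpy as np

T = 1.0/200.0    # sampling interval (s), assumed
# Coefficients ordered so that np.convolve(..., 'same') applies them to
# x[n+2], x[n+1], x[n], x[n-1], x[n-2] respectively:
kernel = np.array([1.0, 2.0, 0.0, -2.0, -1.0]) / (8.0*T)

n = np.arange(400)
x = np.sin(2*np.pi*5.0*n*T)                  # 5 Hz test tone
y = np.convolve(x, kernel, mode='same')      # filtered "derivative"

expected = 2*np.pi*5.0*np.cos(2*np.pi*5.0*n*T)
err = np.max(np.abs(y[5:-5] - expected[5:-5]))
print(err / (2*np.pi*5.0))                   # ~1% relative error at this frequency

The filter only approximates $j\omega$ well for frequencies well below the Nyquist frequency, which is the usual regime where this transfer function is quoted.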
Determine for what values of $n$ the number $\frac{n+7}{2n+1}$ is an integer Here's what I've tried. I think I solved the problem just for the positive integers: since $\frac{n+7}{2n+1}$ is a natural number (in this case), $$n+7\geq 2n+1,$$ $$n\leq 6 \rightarrow 0\leq n \leq 6.$$ And the only values that satisfy that condition and generate a positive integer value of $\frac{n+7}{2n+1}$ are $n=0$ and $n=6$. But I have no idea how to find the negative values. How do I solve the other cases? Is there another method to express all the solutions?
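One standard way to cover all integer $n$ at once (a sketch, not necessarily the intended method): if $2n+1$ divides $n+7$, it also divides $2(n+7)-(2n+1)=13$, so $$2n+1 \in \{\pm 1, \pm 13\} \implies n \in \{0, -1, 6, -7\},$$ and each of these indeed gives an integer value ($7$, $-6$, $1$ and $0$, respectively).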
2018-09-02 17:21 Measurement of $P_T$-weighted Sivers asymmetries in leptoproduction of hadrons / COMPASS Collaboration The transverse spin asymmetries measured in semi-inclusive leptoproduction of hadrons, when weighted with the hadron transverse momentum $P_T$, allow for the extraction of important transverse-momentum-dependent distribution functions. In particular, the weighted Sivers asymmetries provide direct information on the Sivers function, which is a leading-twist distribution that arises from a correlation between the transverse momentum of an unpolarised quark in a transversely polarised nucleon and the spin of the nucleon. [...] arXiv:1809.02936; CERN-EP-2018-242.- Geneva : CERN, 2019-03 - 20 p. - Published in : Nucl. Phys. B 940 (2019) 34-53 2018-02-14 11:43 Light isovector resonances in $\pi^- p \to \pi^-\pi^-\pi^+ p$ at 190 GeV/${\it c}$ / COMPASS Collaboration We have performed the most comprehensive resonance-model fit of $\pi^-\pi^-\pi^+$ states using the results of our previously published partial-wave analysis (PWA) of a large data set of diffractive-dissociation events from the reaction $\pi^- + p \to \pi^-\pi^-\pi^+ + p_\text{recoil}$ with a 190 GeV/$c$ pion beam. The PWA results, which were obtained in 100 bins of three-pion mass, $0.5 < m_{3\pi} < 2.5$ GeV/$c^2$, and simultaneously in 11 bins of the reduced four-momentum transfer squared, $0.1 < t' < 1.0$ $($GeV$/c)^2$, are subjected to a resonance-model fit using Breit-Wigner amplitudes to simultaneously describe a subset of 14 selected waves using 11 isovector light-meson states with $J^{PC} = 0^{-+}$, $1^{++}$, $2^{++}$, $2^{-+}$, $4^{++}$, and spin-exotic $1^{-+}$ quantum numbers. [...] arXiv:1802.05913; CERN-EP-2018-021.- Geneva : CERN, 2018-11-02 - 72 p. - Published in : Phys. Rev. D 98 (2018) 092003 2018-02-07 15:23 Transverse Extension of Partons in the Proton probed by Deeply Virtual Compton Scattering / Akhunzyanov, R. (Dubna, JINR) ; Alexeev, M.G. (Turin U.) ; Alexeev, G.D. (Dubna, JINR) ; Amoroso, A. (Turin U. ; INFN, Turin) ; Andrieux, V. (Illinois U., Urbana ; IRFU, Saclay) ; Anfimov, N.V. (Dubna, JINR) ; Anosov, V. (Dubna, JINR) ; Antoshkin, A. (Dubna, JINR) ; Augsten, K. (Dubna, JINR ; CTU, Prague) ; Augustyniak, W. (NCBJ, Swierk) et al. We report on the first measurement of exclusive single-photon muoproduction on the proton by COMPASS using 160 GeV/$c$ polarized $\mu^+$ and $\mu^-$ beams of the CERN SPS impinging on a liquid hydrogen target. [...] CERN-EP-2018-016 ; arXiv:1802.02739. - 2018. - 13 p. 2017-09-19 08:11 Transverse-momentum-dependent Multiplicities of Charged Hadrons in Muon-Deuteron Deep Inelastic Scattering / COMPASS Collaboration A semi-inclusive measurement of charged hadron multiplicities in deep inelastic muon scattering off an isoscalar target was performed using data collected by the COMPASS Collaboration at CERN.
The following kinematic domain is covered by the data: photon virtuality $Q^{2}>1$ (GeV/$c$)$^2$, invariant mass of the hadronic system $W > 5$ GeV/$c^2$, Bjorken scaling variable in the range $0.003 < x < 0.4$, fraction of the virtual photon energy carried by the hadron in the range $0.2 < z < 0.8$, square of the hadron transverse momentum with respect to the virtual photon direction in the range 0.02 (GeV/$c)^2 < P_{\rm{hT}}^{2} < 3$ (GeV/$c$)$^2$. [...] CERN-EP-2017-253; arXiv:1709.07374.- Geneva : CERN, 2018-02-08 - 23 p. - Published in : Phys. Rev. D 97 (2018) 032006 2017-07-08 20:47 New analysis of $\eta\pi$ tensor resonances measured at the COMPASS experiment / JPAC Collaboration We present a new amplitude analysis of the $\eta\pi$ $D$-wave in $\pi^- p\to \eta\pi^- p$ measured by COMPASS. Employing an analytical model based on the principles of the relativistic $S$-matrix, we find two resonances that can be identified with the $a_2(1320)$ and the excited $a_2^\prime(1700)$, and perform a comprehensive analysis of their pole positions. [...] CERN-EP-2017-169; JLAB-THY-17-2468; arXiv:1707.02848.- Geneva : CERN, 2018-04-10 - 9 p. - Published in : Phys. Lett. B 779 (2018) 464-472 2017-01-05 16:00 First measurement of the Sivers asymmetry for gluons from SIDIS data / COMPASS Collaboration The Sivers function describes the correlation between the transverse spin of a nucleon and the transverse motion of its partons. It was extracted from measurements of the azimuthal asymmetry of hadrons produced in semi-inclusive deep inelastic scattering of leptons off transversely polarised nucleon targets, and it turned out to be non-zero for quarks. [...] CERN-EP-2017-003; arXiv:1701.02453.- Geneva : CERN, 2017-09-10 - 11 p. - Published in : Phys. Lett. B 772 (2017) 854-864
You are talking about symmetry in two contexts: the Gaussian and the window. I will talk about these two contexts one by one. Gaussian A symmetric 2-D Gaussian (I assume 2-D, as in 1-D the question of symmetry does not arise) means that it does not have any directional preference (dictated by equal variances $\sigma_x$ and $\sigma_y$). Now the job of the Gabor transform is to give us a joint time-frequency distribution (i.e. telling us about the frequency and phase content of a local image patch). Now consider the following two cases: Symmetric Gaussian ($\sigma_x$ = $\sigma_y$) In this case the Gaussian does not have any directional preference. In the above image, suppose you are centered on the central pixel and consider a patch around it, indicated in the above image by a black rectangle in the center. The following points are important for this image: Doing a Gabor transform over that patch would yield a time-frequency representation which involves all the pixels in the patch, weighted accordingly by the symmetric Gaussian. All the directions at a certain distance from the origin get the same weightage. However, if the Gaussian is oriented at an angle of $135^{\circ}$, then: the white principal-diagonal pixels get almost all the weightage while the other pixels get little weightage. The time-frequency representation will mainly be a time-frequency representation around the principal diagonal. If the values of $\sigma_x$ and $\sigma_y$ are sufficiently small, the Gabor transform will be very close to the time-frequency representation for the straight line which marks the principal diagonal. Thus, for the Gaussian function: a symmetric Gaussian is non-directional; a non-symmetric Gaussian is directionally oriented. A non-symmetric Gaussian will be useful in cases where the time-frequency analysis of a directionally oriented feature in a patch has to be carried out. Window The window size basically represents the time resolution (or space resolution in the case of 2-D signals like images). A narrow window means that the focus on some local patch of the image is high, and hence a more faithful time resolution of the time-frequency analysis will be possible. However, this is limited by Heisenberg's uncertainty principle. Although the Gabor transform attains the theoretical lower bound of the uncertainty principle, it stays within the bounds of the principle. Thus, changing the window size is akin to changing your time resolution. A non-symmetrical window (like $3\times7$) is generally not chosen because we generally do not have a reason or a preference for changing our space resolution in one direction (x or y) relative to the other. In case there is a concern for the same, it can be done. However, I have never yet seen such a situation.
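As a rough illustration of the symmetric versus oriented cases above, here is a minimal numpy sketch of a 2-D Gabor kernel (the function and its default values are ours, not from any particular library):

import numpy as np

def gabor_kernel(sigma_x, sigma_y, theta, freq, size=31):
    # Oriented Gaussian envelope times a cosine carrier along the rotated x-axis
    half = size//2
    y, x = np.mgrid[-half:half+1, -half:half+1]
    xr = x*np.cos(theta) + y*np.sin(theta)    # rotate coordinates by theta
    yr = -x*np.sin(theta) + y*np.cos(theta)
    envelope = np.exp(-0.5*(xr**2/sigma_x**2 + yr**2/sigma_y**2))
    return envelope*np.cos(2*np.pi*freq*xr)

symmetric = gabor_kernel(4.0, 4.0, 0.0, 0.1)             # sigma_x = sigma_y: no directional preference
oriented = gabor_kernel(6.0, 2.0, np.deg2rad(135), 0.1)  # elongated along the 135 degree diagonal

Convolving an image patch with the oriented kernel weights the pixels along the $135^{\circ}$ diagonal most heavily, exactly as described above.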
There are a few methods to synthesize $\ce{SrO2}$ from $\ce{SrO}$ and $\ce{O2}$. One of them uses $\ce{KClO3}$ as the source of $\ce{O2}$ [Ref. 1]. The paper is in German but has an English abstract, which states: Abstract. Single crystals of $\ce{SrO2}$ have been obtained after high pressure/high temperature reaction of a $\ce{SrO}$/$\ce{KClO3}$ mixture at $\pu{20 kbar}$, $\pu{1400 ^\circ C}$. ... From a refinement of the site occupation factor for oxygen a composition $\ce{SrO_{1.95(2)}}$ has been found for the crystal investigated. Note that the $\pu{20 kbar}$ pressure was exerted by the $\ce{O2}$ released by the following reaction at high temperature (Ref. 2): $\ce{2KClO3(s) -> 2KCl(s) + 3O2(g)}$. Experimental information on the $\ce{Sr-O}$ system has been reviewed in Ref. 2, and the stability of $\ce{SrO2}$ and the phase diagram of $\ce{Sr-O}$ are graphically presented (see Figures 1 & 2, respectively). Accordingly, $\ce{SrO}$ under $\pu{0.5 atm}$ of $\ce{O2}$ pressure at $\pu{800 ^\circ C}$ would not produce $\ce{SrO2}$ (see Figure 1: only $\ce{SrO}$ exists under these conditions, $\log_{10} p = \log_{10} 0.5 = -0.30$ and $\frac{1000}{\mathrm{T}} = \frac{1000}{1073.15} = 0.93$). References: K.-J. Range, F. Rau, U. Schießl, U. Klement, Verfeinerung der Kristallstruktur von $\ce{SrO2}$ (Refinement of the Crystal Structure of $\ce{SrO2}$), Z. anorg. allg. Chem., 1994, 620, 879-881 (https://onlinelibrary.wiley.com/doi/pdf/10.1002/zaac.19946200521). D. Risold, B. Hallstedt, L. J. Gauckler, The strontium-oxygen system, Calphad, 1996, 20(3), 353-361 (https://doi.org/10.1016/S0364-5916(96)00037-5).
I need to calculate the length of a curve $y=2\sqrt{x}$ from $x=0$ to $x=1$. So I started by taking $\int\limits^1_0 \sqrt{1+\frac{1}{x}}\, \text{d}x$, and then tried the substitution $u = 1+\frac{1}{x}$, $\text{d}u = -\frac{1}{x^2}\,\text{d}x \Rightarrow -\text{d}u = \frac{1}{x^2}\,\text{d}x$, which would give $-\int \sqrt{u} \,\text{d}u$; but this obviously will not lead to the correct answer, since the factor $\frac{1}{x^2}$ isn't in the original integrand. Wolfram Alpha is doing a lot of steps for this integration, but I don't think that many steps are needed. How would I start with this integration?
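A substitution that does work here (a sketch): let $u=\sqrt{x}$, so $x=u^2$ and $\text{d}x=2u\,\text{d}u$. Then $$\int_0^1 \sqrt{1+\frac{1}{x}}\,\text{d}x = \int_0^1 \frac{\sqrt{u^2+1}}{u}\,2u\,\text{d}u = 2\int_0^1 \sqrt{1+u^2}\,\text{d}u = \Big[u\sqrt{1+u^2}+\ln\big(u+\sqrt{1+u^2}\big)\Big]_0^1 = \sqrt{2}+\ln(1+\sqrt{2}).$$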
I am currently studying the gaseous state and I encountered the following problem. A container with a volume of 3 litres holds $\ce{N2(g)}$ and $\ce{H2O(l)}$ at $\pu{29 ^\circ C}$. The pressure is found to be $\pu{1 atm}$. The water is then split into hydrogen and oxygen by electrolysis according to the reaction $$\ce{H2O (l) -> H2 (g) + 1/2 O2 (g)}$$ After the reaction is complete, the pressure is $\pu{1.86 atm}$. What mass of water was present in the container? The aqueous tension of water at $\pu{29 ^\circ C}$ is $\pu{0.04 atm}$. This is one of the steps of the solution given along with the question (edited to show the units): Amount of substance of nitrogen: $$n(\ce{N2}) = \frac{\pu{0.96 atm} \cdot \pu{3 L}}{\pu{0.0821 atm L mol-1 K-1}\cdot \pu{302 K}} = \pu{0.116 mol}$$ Amount of substance of water: $$n(\ce{H2O}) = \frac{\pu{0.04 atm} \cdot \pu{3 L}}{\pu{0.0821 atm L mol-1 K-1}\cdot \pu{302 K}} = \pu{0.00484 mol}$$ The thing I don't understand here is how the volume of both the nitrogen and the water vapour can be taken as $\pu{3 L}$. Surely each of them occupies only some part of the volume, so the whole volume cannot be occupied by both of them at once?
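A brief note on the point of confusion (standard ideal-gas reasoning, not part of the original solution): a gas has no fixed volume of its own, so each gaseous component expands to fill the entire container. The $\pu{3 L}$ is therefore the volume available to the nitrogen and to the water vapour alike; what the components share out is the pressure, not the volume: $$p_{\text{total}} = p_{\ce{N2}} + p_{\ce{H2O}} = \pu{0.96 atm} + \pu{0.04 atm} = \pu{1 atm}.$$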
OpenCV 4.0.0 Open Source Computer Vision class cv::DenseOpticalFlow class cv::DISOpticalFlow DIS optical flow algorithm. class cv::FarnebackOpticalFlow Class computing a dense optical flow using Gunnar Farneback's algorithm. class cv::KalmanFilter Kalman filter class. class cv::SparseOpticalFlow Base interface for sparse optical flow algorithms. class cv::SparsePyrLKOpticalFlow Class used for calculating a sparse optical flow. class cv::VariationalRefinement Variational optical flow refinement. enum { cv::OPTFLOW_USE_INITIAL_FLOW = 4, cv::OPTFLOW_LK_GET_MIN_EIGENVALS = 8, cv::OPTFLOW_FARNEBACK_GAUSSIAN = 256 } enum { cv::MOTION_TRANSLATION = 0, cv::MOTION_EUCLIDEAN = 1, cv::MOTION_AFFINE = 2, cv::MOTION_HOMOGRAPHY = 3 } int cv::buildOpticalFlowPyramid (InputArray img, OutputArrayOfArrays pyramid, Size winSize, int maxLevel, bool withDerivatives=true, int pyrBorder=BORDER_REFLECT_101, int derivBorder=BORDER_CONSTANT, bool tryReuseInputImage=true) Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK. void cv::calcOpticalFlowFarneback (InputArray prev, InputArray next, InputOutputArray flow, double pyr_scale, int levels, int winsize, int iterations, int poly_n, double poly_sigma, int flags) Computes a dense optical flow using Gunnar Farneback's algorithm. void cv::calcOpticalFlowPyrLK (InputArray prevImg, InputArray nextImg, InputArray prevPts, InputOutputArray nextPts, OutputArray status, OutputArray err, Size winSize=Size(21, 21), int maxLevel=3, TermCriteria criteria=TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 0.01), int flags=0, double minEigThreshold=1e-4) Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids. RotatedRect cv::CamShift (InputArray probImage, Rect &window, TermCriteria criteria) Finds an object center, size, and orientation. Mat cv::estimateRigidTransform (InputArray src, InputArray dst, bool fullAffine) Computes an optimal affine transformation between two 2D point sets. double cv::findTransformECC (InputArray templateImage, InputArray inputImage, InputOutputArray warpMatrix, int motionType=MOTION_AFFINE, TermCriteria criteria=TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 50, 0.001), InputArray inputMask=noArray()) Finds the geometric transform (warp) between two images in terms of the ECC criterion [53]. int cv::meanShift (InputArray probImage, Rect &window, TermCriteria criteria) Finds an object on a back projection image. Mat cv::readOpticalFlow (const String &path) Read a .flo file. bool cv::writeOpticalFlow (const String &path, InputArray flow) Write a .flo to disk. int cv::buildOpticalFlowPyramid ( InputArray img, OutputArrayOfArrays pyramid, Size winSize, int maxLevel, bool withDerivatives = true, int pyrBorder = BORDER_REFLECT_101, int derivBorder = BORDER_CONSTANT, bool tryReuseInputImage = true ) Python: retval, pyramid = cv.buildOpticalFlowPyramid( img, winSize, maxLevel[, pyramid[, withDerivatives[, pyrBorder[, derivBorder[, tryReuseInputImage]]]]] ) Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK. img 8-bit input image. pyramid output pyramid. winSize window size of optical flow algorithm. Must be not less than winSize argument of calcOpticalFlowPyrLK. It is needed to calculate required padding for pyramid levels. maxLevel 0-based maximal pyramid level number.
withDerivatives set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients, then calcOpticalFlowPyrLK will calculate them internally. pyrBorder the border mode for pyramid layers. derivBorder the border mode for gradients. tryReuseInputImage put ROI of input image into the pyramid if possible. You can pass false to force data copying. void cv::calcOpticalFlowFarneback ( InputArray prev, InputArray next, InputOutputArray flow, double pyr_scale, int levels, int winsize, int iterations, int poly_n, double poly_sigma, int flags ) Python: flow = cv.calcOpticalFlowFarneback( prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags ) Computes a dense optical flow using Gunnar Farneback's algorithm. prev first 8-bit single-channel input image. next second input image of the same size and the same type as prev. flow computed flow image that has the same size as prev and type CV_32FC2. pyr_scale parameter specifying the image scale (<1) to build pyramids for each image; pyr_scale=0.5 means a classical pyramid, where each next layer is twice smaller than the previous one. levels number of pyramid layers including the initial image; levels=1 means that no extra layers are created and only the original images are used. winsize averaging window size; larger values increase the algorithm robustness to image noise and give more chances for fast motion detection, but yield a more blurred motion field. iterations number of iterations the algorithm does at each pyramid level. poly_n size of the pixel neighborhood used to find polynomial expansion in each pixel; larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field; typically poly_n = 5 or 7. poly_sigma standard deviation of the Gaussian that is used to smooth derivatives used as a basis for the polynomial expansion; for poly_n=5, you can set poly_sigma=1.1, for poly_n=7, a good value would be poly_sigma=1.5. flags operation flags that can be a combination of the following: The function finds an optical flow for each prev pixel using the [55] algorithm so that \[\texttt{prev} (y,x) \sim \texttt{next} ( y + \texttt{flow} (y,x)[1], x + \texttt{flow} (y,x)[0])\] void cv::calcOpticalFlowPyrLK ( InputArray prevImg, InputArray nextImg, InputArray prevPts, InputOutputArray nextPts, OutputArray status, OutputArray err, Size winSize = Size(21, 21), int maxLevel = 3, TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 0.01), int flags = 0, double minEigThreshold = 1e-4 ) Python: nextPts, status, err = cv.calcOpticalFlowPyrLK( prevImg, nextImg, prevPts, nextPts[, status[, err[, winSize[, maxLevel[, criteria[, flags[, minEigThreshold]]]]]]] ) Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids. prevImg first 8-bit input image or pyramid constructed by buildOpticalFlowPyramid. nextImg second input image or pyramid of the same size and the same type as prevImg. prevPts vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers. nextPts output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of input features in the second image; when OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as in the input.
status output status vector (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding features has been found; otherwise, it is set to 0. err output vector of errors; each element of the vector is set to an error for the corresponding feature; the type of the error measure can be set in the flags parameter; if the flow wasn't found then the error is not defined (use the status parameter to find such cases). winSize size of the search window at each pyramid level. maxLevel 0-based maximal pyramid level number; if set to 0, pyramids are not used (single level), if set to 1, two levels are used, and so on; if pyramids are passed to input then the algorithm will use as many levels as the pyramids have, but no more than maxLevel. criteria parameter specifying the termination criteria of the iterative search algorithm (after the specified maximum number of iterations criteria.maxCount, or when the search window moves by less than criteria.epsilon). flags operation flags: minEigThreshold the algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (this matrix is called a spatial gradient matrix in [22]), divided by the number of pixels in a window; if this value is less than minEigThreshold, then the corresponding feature is filtered out and its flow is not processed, which allows bad points to be removed and gives a performance boost. The function implements a sparse iterative version of the Lucas-Kanade optical flow in pyramids. See [22]. The function is parallelized with the TBB library. RotatedRect cv::CamShift ( InputArray probImage, Rect & window, TermCriteria criteria ) Python: retval, window = cv.CamShift( probImage, window, criteria ) Finds an object center, size, and orientation. probImage Back projection of the object histogram. See calcBackProject. window Initial search window. criteria Stop criteria for the underlying meanShift. returns (in old interfaces) Number of iterations CAMSHIFT took to converge The function implements the CAMSHIFT object tracking algorithm [25]. First, it finds an object center using meanShift and then adjusts the window size and finds the optimal rotation. The function returns the rotated rectangle structure that includes the object position, size, and orientation. The next position of the search window can be obtained with RotatedRect::boundingRect(). See the OpenCV sample camshiftdemo.c that tracks colored objects. Computes an optimal affine transformation between two 2D point sets. src First input 2D point set stored in std::vector or Mat, or an image stored in Mat. dst Second input 2D point set of the same size and the same type as src, or another image. fullAffine If true, the function finds an optimal affine transformation with no additional restrictions (6 degrees of freedom). Otherwise, the class of transformations to choose from is limited to combinations of translation, rotation, and uniform scaling (4 degrees of freedom). The function finds an optimal affine transform [A|b] (a 2 x 3 floating-point matrix) that best approximates the affine transformation between: Two point sets Two raster images. In this case, the function first finds some features in the src image and finds the corresponding features in the dst image. After that, the problem is reduced to the first case.
In the case of point sets, the problem is formulated as follows: you need to find a 2x2 matrix A and a 2x1 vector b so that:
\[[A^*|b^*] = \arg\min_{[A|b]} \sum_i \| \texttt{dst}[i] - A\,{\texttt{src}[i]}^T - b \|^2\]
where src[i] and dst[i] are the i-th points in src and dst, respectively. \([A|b]\) can be either arbitrary (when fullAffine=true) or have the form
\[\begin{bmatrix} a_{11} & a_{12} & b_1 \\ -a_{12} & a_{11} & b_2 \end{bmatrix}\]
when fullAffine=false.

double cv::findTransformECC ( InputArray templateImage, InputArray inputImage, InputOutputArray warpMatrix, int motionType = MOTION_AFFINE, TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 50, 0.001), InputArray inputMask = noArray() )
Python: retval, warpMatrix = cv.findTransformECC( templateImage, inputImage, warpMatrix[, motionType[, criteria[, inputMask]]] )
Finds the geometric transform (warp) between two images in terms of the ECC criterion [53].
templateImage single-channel template image; CV_8U or CV_32F array.
inputImage single-channel input image which should be warped with the final warpMatrix in order to provide an image similar to templateImage; same type as templateImage.
warpMatrix floating-point \(2\times 3\) or \(3\times 3\) mapping matrix (warp).
motionType parameter specifying the type of motion:
criteria parameter specifying the termination criteria of the ECC algorithm; criteria.epsilon defines the threshold of the increment in the correlation coefficient between two iterations (a negative criteria.epsilon makes criteria.maxCount the only termination criterion). Default values are shown in the declaration above.
inputMask An optional mask to indicate valid values of inputImage.
The function estimates the optimum transformation (warpMatrix) with respect to the ECC criterion ([53]), that is
\[\texttt{warpMatrix} = \arg\max_{W} \texttt{ECC}(\texttt{templateImage}(x,y),\texttt{inputImage}(x',y'))\]
where
\[\begin{bmatrix} x' \\ y' \end{bmatrix} = W \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}\]
(the equation holds with homogeneous coordinates for homography). It returns the final enhanced correlation coefficient, that is, the correlation coefficient between the template image and the final warped input image. When a \(3\times 3\) matrix is given with motionType = 0, 1, or 2, the third row is ignored.
Unlike findHomography and estimateRigidTransform, the function findTransformECC implements an area-based alignment that builds on intensity similarities. In essence, the function updates the initial transformation that roughly aligns the images. If this information is missing, the identity warp (unity matrix) is used as an initialization. Note that if images undergo strong displacements/rotations, an initial transformation that roughly aligns the images is necessary (e.g., a simple Euclidean/similarity transform that allows for the images showing approximately the same image content). Use inverse warping in the second image to take an image close to the first one, i.e. use the flag WARP_INVERSE_MAP with warpAffine or warpPerspective. See also the OpenCV sample image_alignment.cpp that demonstrates the use of the function. Note that the function throws an exception if the algorithm does not converge.

int cv::meanShift ( InputArray probImage, Rect & window, TermCriteria criteria )
Python: retval, window = cv.meanShift( probImage, window, criteria )
Finds an object on a back projection image.
probImage Back projection of the object histogram. See calcBackProject for details.
window Initial search window.
criteria Stop criteria for the iterative search algorithm.
returns: Number of iterations meanShift took to converge.
The function implements the iterative object search algorithm. It takes the input back projection of an object and the initial position. The mass center in window of the back projection image is computed and the search window center shifts to the mass center. The procedure is repeated until the specified number of iterations criteria.maxCount is done or until the window center shifts by less than criteria.epsilon. The algorithm is used inside CamShift and, unlike CamShift, the search window's size and orientation do not change during the search. You can simply pass the output of calcBackProject to this function. But better results can be obtained if you pre-filter the back projection and remove the noise. For example, you can do this by retrieving connected components with findContours, throwing away contours with small area (contourArea), and rendering the remaining contours with drawContours.

Read a .flo file.
path Path to the file to be loaded.
The function readOpticalFlow loads a flow field from a file and returns it as a single matrix. The resulting Mat has type CV_32FC2 - floating-point, 2-channel. The first channel corresponds to the flow in the horizontal direction (u), the second to the vertical direction (v).

bool cv::writeOpticalFlow ( const String & path, InputArray flow )
Python: retval = cv.writeOpticalFlow( path, flow )
Write a .flo file to disk.
path Path to the file to be written.
flow Flow field to be stored.
The function stores a flow field in a file, returning true on success and false otherwise. The flow field must be a 2-channel, floating-point matrix (CV_32FC2). The first channel corresponds to the flow in the horizontal direction (u), the second to the vertical direction (v).
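For orientation, here is a minimal usage sketch, not taken from the OpenCV documentation itself: it runs dense Farneback flow and sparse pyramidal Lucas-Kanade flow on two grayscale frames and writes the dense field to a .flo file. The file names frame0.png, frame1.png, and flow.flo are placeholder assumptions.

import cv2 as cv
import numpy as np

# Two 8-bit single-channel frames (assumed to exist on disk).
prev_gray = cv.imread("frame0.png", cv.IMREAD_GRAYSCALE)
next_gray = cv.imread("frame1.png", cv.IMREAD_GRAYSCALE)

# Dense flow with Farneback: pyr_scale=0.5 (classical pyramid), 3 levels,
# a 15x15 averaging window, 3 iterations, poly_n=5 with matching poly_sigma=1.1.
flow = cv.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                   0.5, 3, 15, 3, 5, 1.1, 0)

# Sparse flow with pyramidal Lucas-Kanade on good features to track.
pts = cv.goodFeaturesToTrack(prev_gray, maxCorners=200,
                             qualityLevel=0.01, minDistance=7)
next_pts, status, err = cv.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                                winSize=(21, 21), maxLevel=3)
tracked = next_pts[status.ravel() == 1]  # keep only features whose flow was found

# Persist the dense field; writeOpticalFlow expects a CV_32FC2 matrix.
cv.writeOpticalFlow("flow.flo", flow.astype(np.float32))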
Ex.12.2 Q4 Areas Related to Circles Solution - NCERT Maths Class 10

Question: A chord of a circle of radius \(10\, \rm{cm}\) subtends a right angle at the centre. Find the area of the corresponding (i) minor segment (ii) major sector. (Use \(\pi = 3.14\))

Text Solution

What is known? The radius of the circle and the angle subtended by the chord at the centre.

What is unknown? (i) Area of the minor segment (ii) Area of the major sector

Reasoning: In a circle with radius \(r\) and central angle of degree measure \(\theta\):
(i) Area of the sector \(= \frac{\theta}{360^{\circ}} \times \pi r^2\)
(ii) Area of the segment = Area of the sector − Area of the corresponding triangle, where the area of a right triangle is \(\frac{1}{2} \times \text{base} \times \text{height}\)

Draw a figure to visualize the areas to be found. Here, radius \(r = 10\,\rm{cm}\) and \(\theta = 90^{\circ}\). Let \(AB\) be the chord subtending a right angle at the centre \(O\), with \(P\) on the minor arc and \(Q\) on the major arc. Visually it is clear from the figure that:
(i) Area of minor segment \(APB\) = Area of sector \(OAPB\) − Area of right \(\Delta AOB\)
(ii) Area of major sector \(OAQB = \pi r^2\,-\) Area of minor sector \(OAPB\)

Area of sector \(OAPB = \frac{90^\circ}{360^\circ} \times \pi r^2 = \frac{\pi r^2}{4}\)

Area of right \(\Delta AOB = \frac{1}{2} \times \text{base} \times \text{height} = \frac{1}{2} \times OA \times OB = \frac{1}{2} \times \text{radius} \times \text{radius} \quad (\because OA \perp OB)\)

Steps:
(i) Area of the minor segment \(APB\) = Area of sector \(OAPB\) − Area of right \(\Delta AOB\)
\[\begin{align} &= \frac{{90}^\circ}{{360}^\circ} \times \pi r^2 - \frac{1}{2} \times OA \times OB \;(\because OA \perp OB) \\ &= \frac{\pi r^2}{4} - \frac{1}{2} \times r \times r \\ &= r^2\left(\frac{\pi}{4} - \frac{1}{2}\right) \\ &= r^2\left(\frac{3.14 - 2}{4}\right) \\ &= \frac{10 \times 10 \times 1.14}{4}\ \text{cm}^2 \\ &= 28.5\ \text{cm}^2 \end{align}\]

(ii) Area of the major sector \(OAQB = \pi r^2\,-\) Area of minor sector \(OAPB\)
\[\begin{align} &= \pi r^2 - \frac{\pi r^2}{4} = \frac{3}{4}\pi r^2 \\ &= \frac{3}{4} \times 3.14 \times 100 \\ &= 235.5\ \text{cm}^2 \end{align}\]
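As a quick numeric sanity check, not part of the NCERT solution itself, a few lines of Python reproduce both values with the prescribed \(\pi = 3.14\):

pi = 3.14   # the approximation the exercise prescribes
r = 10.0    # radius in cm

sector_minor = (90 / 360) * pi * r**2     # minor sector OAPB: 78.5 cm^2
triangle = 0.5 * r * r                    # right triangle AOB: 50.0 cm^2
minor_segment = sector_minor - triangle   # (i) minor segment APB: 28.5 cm^2

major_sector = (270 / 360) * pi * r**2    # (ii) major sector OAQB: 235.5 cm^2

print(minor_segment, major_sector)        # -> 28.5 235.5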
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
I am trying to solve the equation $$\left(\sin (x)+\cos (x)-\sqrt{2}\right)\cdot \sqrt{-11 x-x^2-30}=0$$ in the real domain.

First way:
Solve[{(Sin[x] + Cos[x] - Sqrt[2]) Sqrt[-11 x - x^2 - 30] == 0}]
I got
{{x -> -6}, {x -> -5}, {x -> \[Pi]/4}}

Second way:
Solve[{(Sin[x] + Cos[x] - Sqrt[2]) Sqrt[-11 x - x^2 - 30] == 0}, x, Reals] // FullSimplify
I got
{{x -> -6}, {x -> -5}, {x -> -((7 \[Pi])/4)}, {x -> -((7 \[Pi])/4)}}

Third way:
sol = (TrigExpand@Reduce[(Sin[x] + Cos[x] - Sqrt[2]) Sqrt[-11 x - x^2 - 30] == 0, x, Reals] // FullSimplify // Last) /. C[1] -> k
I got
x == -((7 \[Pi])/4)

How do I get the correct solutions?
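For what it's worth, a short numeric check (my addition, not part of the original question) makes the expected answer clear: the radicand \(-11x-x^2-30=-(x+5)(x+6)\) is nonnegative only on \([-6,-5]\), so \(x=\pi/4\) lies outside the real domain, while \(x=-6\), \(x=-5\), and \(x=-7\pi/4\approx-5.50\) are genuine solutions. A Python sketch:

import math

def radicand(x):
    # -11x - x^2 - 30 = -(x + 5)(x + 6): nonnegative only for x in [-6, -5]
    return -11 * x - x**2 - 30

def f(x):
    # left-hand side of the equation; real-valued only where the radicand is >= 0
    return (math.sin(x) + math.cos(x) - math.sqrt(2)) * math.sqrt(radicand(x))

for x in (-6.0, -5.0, -7 * math.pi / 4, math.pi / 4):
    if radicand(x) >= 0:
        print(f"x = {x:+.4f}: f(x) = {f(x):+.2e}")   # ~0 for genuine roots
    else:
        print(f"x = {x:+.4f}: outside the real domain")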
Lecture: H5116, F 8:00-9:40
Section: HGW2403, F(e) 18:30-20:10
Textbook: https://doi.org/10.1017/CBO9781107050884
Lecture: HGX205, M 18:30-21
Section: HGW2403, F 18:30-20
Syllabus

Exercise 01
Prove that \(\neg\Box(\Diamond\varphi\wedge\Diamond\neg\varphi)\) is equivalent to \(\Box\Diamond\varphi\rightarrow\Diamond\Box\varphi\). What have you assumed?
Define strategy and winning strategy for modal evaluation games. Prove the Key Lemma: \(M,s\vDash\varphi\) iff V has a winning strategy in \(G(M,s,\varphi)\).
Prove that modal evaluation games are determined, i.e. either V or F has a winning strategy.
And all exercises for Chapter 2 (see page 23, open minds).

Exercise 02
Let \(T\) with root \(r\) be the tree unraveling of some possible world model, and \(T'\) be the tree unraveling of \(T,r\). Show that \(T\) and \(T'\) are isomorphic.
Prove that the union of a set of bisimulations between \(M\) and \(N\) is a bisimulation between the two models.
We define the bisimulation contraction of a possible world model \(M\) to be the "quotient model". Prove that the relation that links every world \(x\) in \(M\) to its equivalence class \([x]\) is a bisimulation between the original model and its bisimulation contraction.
And exercises for Chapter 3 (see page 35, open minds): 1 (a) (b), 2.

Exercise 03
Prove that modal formulas (under possible world semantics) have the 'Finite Depth Property'.
And exercises for Chapter 4 (see page 47, open minds): 1 – 3.

Exercise 04
Prove the principle of Replacement by Provable Equivalents: if \(\vdash\alpha\leftrightarrow\beta\), then \(\vdash\varphi[\alpha]\leftrightarrow\varphi[\beta]\).
Prove the following statements.
"For each formula \(\varphi\), \(\vdash\varphi\) is equivalent to \(\vDash\varphi\)" is equivalent to "for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable".
"For every set of formulas \(\Sigma\) and formula \(\varphi\), \(\Sigma\vdash\varphi\) is equivalent to \(\Sigma\vDash\varphi\)" is equivalent to "for every set of formulas \(\Sigma\), \(\Sigma\) being consistent is equivalent to \(\Sigma\) being satisfiable".
Prove that "for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable" using the finite version of the Henkin model.
And exercises for Chapter 5 (see page 60, open minds): 1 – 5.

Exercise 05
Exercises for Chapter 6 (see page 69, open minds): 1 – 3.

Exercise 06
Show that "being equivalent to a modal formula" is not decidable for arbitrary first-order formulas.
Exercises for Chapter 7 (see page 88, open minds): 1 – 6. For exercise 2 (a) – (d), replace the existential modality E with the difference modality D. In clause (b) of exercise 4, "completeness" should be "correctness".

Exercise 07
Show that there are infinitely many non-equivalent modalities under T.
Show that GL + Id is inconsistent and that Un proves GL.
Give a complete proof of the fact: in S5, every formula is equivalent to one of modal depth \(\leq 1\).
Exercises for Chapter 8 (see page 99, open minds): 1, 2, 4 – 6.

Exercise 08
Let \(\Sigma\) be a set of modal formulas closed under substitution. Show that \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~(W,R,V'),w\vDash\Sigma\] holds for any valuations \(V\) and \(V'\).
Define a \(p\)-morphism between \((W,R),w\) and \((W',R'),w'\) as a "functional bisimulation", namely a bisimulation regardless of valuation.
Show that if there is a \(p\)-morphism between \((W,R),w\) and \((W',R'),w'\), then for any valuations \(V\) and \(V'\), we have \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~(W',R',V'),w\vDash\Sigma.\]
Exercises for Chapter 9 (see page 99, open minds).

Exercise the last
Exercises for Chapters 10 and 11 (see pages 117 and 125, open minds).
Both QPSK and $4$-QAM constellations have signal points at $45, 135, 225$, and $315$ degrees (note typo in your question). They arise from amplitude modulation (or, if you prefer, phase modulation) of two carrier signals (called the inphase and quadrature carriers) that are orthogonal (meaning that they differ in phase by 90 degrees). The canonical representation of a QPSK or $4$-QAM signal during one symbol interval is $$s(t) = (-1)^{b_I}\cos(2\pi f_c t) - (-1)^{b_Q}\sin(2\pi f_c t)$$ where $\cos(2\pi f_c t)$ and $-\sin(2\pi f_c t)$ are the inphase and quadrature carrier signals at frequency $f_c$ Hz and $b_I, b_Q \in \{0,1\}$ are the two data bits (called the inphase and quadrature data bits, naturally, since they are transmitted on the inphase and quadrature carriers). Notice that the inphase carrier $\cos(2\pi f_c t)$ has amplitude $+1$ or $-1$ according as the inphase data bit has value $0$ or $1$, and similarly the quadrature carrier $-\sin(2\pi f_c t)$ has amplitude $+1$ or $-1$ according as the quadrature data bit has value $0$ or $1$. Some people regard this as an inversion of the normal scheme of things, didactically asserting that positive amplitudes must be associated with $1$ data bits and negative amplitudes with $0$ bits. But if we look at it from the phase modulation perspective, a $0$ bit means that the carrier ($\cos(2\pi f_c t)$ or $-\sin(2\pi f_c t)$ as the case may be) is transmitted with no change in phase, while a $1$ data bit creates a change in phase (we will think of it as a phase delay) of $180$ degrees or $\pi$ radians. Indeed, another way of expressing the QPSK/$4$-QAM signal is as $$s(t) = \cos(2\pi f_c t - b_I\pi) - \sin(2\pi f_c t - b_Q\pi)$$ which makes the phase modulation viewpoint very clear. But, regardless of which viewpoint we use, during a symbol interval, the QPSK/$4$-QAM signal is one of the following four signals: $$\sqrt{2}\cos\left(2\pi f_c t + \frac{\pi}{4}\right),\\ \sqrt{2}\cos\left(2\pi f_c t + \frac{3\pi}{4}\right),\\ \sqrt{2}\cos\left(2\pi f_c t + \frac{5\pi}{4}\right),\\ \sqrt{2}\cos\left(2\pi f_c t + \frac{7\pi}{4}\right)$$ corresponding to $(b_I,b_Q) = (0,0), (1,0), (1,1), (0,1)$ respectively.

Note that the viewpoint taken here is of QPSK as consisting of two BPSK signals on phase-orthogonal carriers. The demodulator thus consists of two BPSK receivers (called the inphase branch and quadrature branch, what else?). An alternative view of QPSK as changing the phase of a single carrier depending on a $4$-valued symbol is developed a little later.

The QPSK/$4$-QAM signal can also be expressed as $$s(t) = \text{Re}\{B \exp(j2\pi f_c t)\} = \text{Re}\{[(-1)^{b_I}+j(-1)^{b_Q}] \exp(j2\pi f_c t)\}$$ where $B$ is the complex-valued baseband symbol taking on values in $\{\pm 1 \pm j\}$ and which, when plotted on the complex plane, gives constellation points distant $\sqrt{2}$ from the origin and at $45, 135, 225$, and $315$ degrees corresponding to data bits $(b_I,b_Q) = (0,0), (1,0), (1,1), (0,1)$ respectively. Note that complementary bit pairs lie diagonally across the circle from each other so that double bit errors are less likely than single bit errors. Note also that the bits naturally occur around the circle in Gray code order; there is no need to massage a given data bit pair $(d_I,d_Q)$ (say $(0,1)$) from "natural representation" (where it means the integer $2 = d_I+2d_Q$: $d_I$ is the LSB and $d_Q$ the MSB here) to "Gray code representation" $(b_I,b_Q) = (1,1)$ of the integer $2$ as some implementations seem to insist on doing.
Indeed, such massaging leads to poorer BER performance since the decoded $(\hat{b}_I,\hat{b}_Q)$ must be unmassaged at the receiver into the decoded data bits $(\hat{d}_I,\hat{d}_Q)$, turning the single channel bit error $$(b_I,b_Q) = (1,1) \to (\hat{b}_I,\hat{b}_Q) = (1,0)$$ into the double data bit error $$(d_I,d_Q) = (0,1) \to (b_I,b_Q) = (1,1)\to (\hat{b}_I,\hat{b}_Q) = (1,0) \to (\hat{d}_I,\hat{d}_Q) = (1,0).$$

If we delay the four possible signals exhibited above by $45$ degrees or $\pi/4$ radians (subtract $\pi/4$ radians from the argument of the cosinusoid), we get $$\sqrt{2}\cos\left(2\pi f_c t + \frac{\pi}{4}\right)\Rightarrow \sqrt{2}\cos\left(2\pi f_c t + 0\cdot\frac{\pi}{2}\right) = \sqrt{2}\cos(2\pi f_c t),\\ \sqrt{2}\cos\left(2\pi f_c t + \frac{3\pi}{4}\right)\Rightarrow \sqrt{2}\cos\left(2\pi f_c t + 1\cdot\frac{\pi}{2}\right) = -\sqrt{2}\sin(2\pi f_c t),\\ \sqrt{2}\cos\left(2\pi f_c t + \frac{5\pi}{4}\right)\Rightarrow \sqrt{2}\cos\left(2\pi f_c t + 2\cdot\frac{\pi}{2}\right) = -\sqrt{2}\cos(2\pi f_c t), \\ \sqrt{2}\cos\left(2\pi f_c t + \frac{7\pi}{4}\right)\Rightarrow \sqrt{2}\cos\left(2\pi f_c t + 3\cdot\frac{\pi}{2}\right) = \sqrt{2}\sin(2\pi f_c t),$$ which give the four constellation points at $0, 90, 180, 270$ degrees referred to by the OP. This form gives us another way of viewing QPSK signaling: a single carrier signal whose phase takes on four values depending on the input symbol, which takes on values in $\{0,1,2,3\}$. We express this in tabular form. $$\begin{array}{|c|c|c|c|c|}\hline (b_I,b_Q) & \text{normal value} ~k & \text{Gray code value} ~\ell & \text{signal as above} & \text{phase-modulated signal}\\\hline (0,0) & 0 & 0 & \sqrt{2}\cos(2\pi f_c t) & \sqrt{2}\cos\left(2\pi f_c t - 0\cdot\frac{\pi}{2}\right)\\ (0,1) & 1 & 1 & \sqrt{2}\sin(2\pi f_c t) & \sqrt{2}\cos\left(2\pi f_c t - 1\cdot\frac{\pi}{2}\right)\\ (1,1) & 3 & 2 & -\sqrt{2}\cos(2\pi f_c t) & \sqrt{2}\cos\left(2\pi f_c t - 2\cdot\frac{\pi}{2}\right)\\ (1,0) & 2 & 3 & -\sqrt{2}\sin(2\pi f_c t) & \sqrt{2}\cos\left(2\pi f_c t - 3\cdot\frac{\pi}{2}\right)\\\hline \end{array}$$ That is, we can regard the QPSK modulator as having input $(b_I,b_Q)$ that it regards as the Gray code representation of the integer $\ell \in \{0,1,2,3\}$, producing the output $$\sqrt{2}\cos\left(2\pi f_c t - \ell\frac{\pi}{2}\right).$$ In other words, the phase of the carrier $\sqrt{2}\cos(2\pi f_c t)$ is modulated (changed from $0$ to $\ell\frac{\pi}{2}$) in response to the input $\ell$.

So how does this work in real life or MATLAB, whichever comes first? If we define a QPSK signal as having value $\sqrt{2}\cos\left(2\pi f_c t - \ell\frac{\pi}{2}\right)$ where the value of $\ell$ is typed in as 0 or 1 or 2 or 3, we will get the QPSK signal described above, but the demodulator will produce the bit pair $(b_I, b_Q)$, and we must remember that the output is $\ell$ in Gray code interpretation; that is, the demodulator output will be $(1,1)$ if $\ell$ happened to have value $2$, and interpreting output $(1,1)$ as $3$ is a decoding error that is not generally discussed in textbooks!
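To make the Gray-mapping point concrete, here is a small baseband simulation sketch in Python/NumPy (my own illustration, not from the answer above). It maps each bit pair directly to the complex symbol $B = (-1)^{b_I} + j(-1)^{b_Q}$ and decodes each bit from the sign of the matching component, so no natural-to-Gray massaging is ever needed:

import numpy as np

rng = np.random.default_rng(0)

# Random inphase and quadrature data bits.
n = 10_000
b_i = rng.integers(0, 2, n)
b_q = rng.integers(0, 2, n)

# QPSK/4-QAM baseband symbols B = (-1)^{b_I} + j(-1)^{b_Q}.
symbols = (-1.0) ** b_i + 1j * (-1.0) ** b_q

# AWGN channel; the noise level 0.5 is an arbitrary illustrative choice.
received = symbols + 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Two independent BPSK decisions: the sign of each component recovers its bit.
b_i_hat = (received.real < 0).astype(int)   # bit 1 <=> negative inphase amplitude
b_q_hat = (received.imag < 0).astype(int)   # bit 1 <=> negative quadrature amplitude

ser = np.mean((b_i_hat != b_i) | (b_q_hat != b_q))   # symbol error rate
print(f"symbol error rate ~ {ser:.4f}")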
List of Errors in The IMO Compendium (second edition)

These are the corrections to The IMO Compendium (Second Edition). Thanks to Alexander Bogomolny, Stijn Cambie, Jürgen Doenhardt, Jim Michael, Alberto Torregrosa, and Jozsef Pelikan. The list was last updated on March 28, 2013.

Page 27, problem 5: Change \( AMKD \) to \( AMCD \). Correct the page number from \( 327 \) to \( 27 \).
Page 52, last footnote: Change the word ``maximum'' to ``maximal''.
Page 235, problem 6, last line: Change the first word ``integers'' to ``integer''.
Page 353, problem 3, first line: Change ``\( \cos 2x=1+\cos^2x \)'' to ``\( \cos2x=2\cos^2x-1 \)''.
Page 730, problem 2, second line: Change ``\( f(x_1)f(y)=f(\varphi(x_1))=f(\varphi(x_2))= \)'' to ``\( f(x_1)f(y)=2f(\varphi(x_1))=2f(\varphi(x_2))= \)''.
Page 764, problem 29: In line 5 change \( (4x^2-1) \) to \( (4x^2-1)^2 \). In line 7 change \( 16y^2 \) to \( 16y^4 \). In line 12 change \( 2l+y \) to \( 2x+l \).
Page 781, problem 11: Change lines 8-21 to: ``Let us now prove the other direction, that the total perimeter is greater than or equal to \( (m+1)2^{m+2} \). Denote by \( R \) the set of rectangles in the partition that do not contain diagonal squares, and let \( Q \) be the sum of the perimeters of the rectangles from \( R \). Since the sum of the perimeters of the diagonal squares is equal to \( 4\cdot 2^m \), it suffices to prove that \( Q\geq m\cdot 2^{m+1} \). Assume that \( n=|R| \). Let \( R_i\subseteq R \) be the set of those rectangles that contain at least one square from the \( i \)th row. Similarly, let \( C_i\subseteq R \) be the set of rectangles that contain squares from the \( i \)th column. Clearly, \( C_i\cap R_i=\emptyset \) and \[ Q=2\left(\sum_{i=1}^{2^m}|R_i|+\sum_{i=1}^{2^m}|C_i|\right).\] Let \( \mathcal F \) be the collection of all subsets of \( R \). Let \( \mathcal F_i \) denote the collection of those subsets \( S \) that satisfy \( R_i\subseteq S \) and \( C_i\cap S=\emptyset \). Since \( \mathcal F_1 \), \( \mathcal F_2 \), \( \dots \), \( \mathcal F_{2^m} \) are disjoint and \( |\mathcal F_i|= 2^{n-|C_i|-|R_i|} \), using Jensen's inequality applied to \( f(x)=2^{-x} \) we obtain \[ 2^{n}=|\mathcal F|\geq 2^{n}\sum_{i=1}^{2^m}2^{-|C_i|-|R_i|}\geq 2^{n}\cdot 2^m\cdot 2^{-\frac1{2^m}\sum(|C_i|+|R_i|)}=2^{n+m-\frac1{2^m}\cdot\frac Q2}.\] This yields \( Q\geq m\cdot 2^{m+1} \), which is the relation we wanted to prove.''
Page 782, problem 12, line 10: Before the sentence ``It is impossible that \( x_i+x_{i+2}\geq i \) \( \dots \),'' add the following paragraph: ``First consider the case when \( x_i\geq 1 \) for some \( i \), say \( i=1 \). Then \( x_3< 1 \) and \( x_4< 1 \). If \( x_2+x_4\geq1 \), Cinderella should empty buckets \( 1 \) and \( 2 \). Otherwise, Cinderella should empty buckets \( 1 \) and \( 5 \). The inequalities \( x_2+x_4\geq 1 \) and \( x_3+x_5\geq 1 \) cannot hold simultaneously.''
Page 782, problem 12, line 16: Replace \( 6 \) by \( 3 \).