Some More Advanced Mathematics

Algebraic Geometry

You can define arbitrary algebraic varieties in Sage, but sometimes nontrivial functionality is limited to rings over \(\QQ\) or finite fields. For example, we compute the union of two affine plane curves, then recover the curves as the irreducible components of the union.

    sage: x, y = AffineSpace(2, QQ, 'xy').gens()
    sage: C2 = Curve(x^2 + y^2 - 1)
    sage: C3 = Curve(x^3 + y^3 - 1)
    sage: D = C2 + C3
    sage: D
    Affine Plane Curve over Rational Field defined by
    x^5 + x^3*y^2 + x^2*y^3 + y^5 - x^3 - y^3 - x^2 - y^2 + 1
    sage: D.irreducible_components()
    [Closed subscheme of Affine Space of dimension 2 over Rational Field defined by:
       x^2 + y^2 - 1,
     Closed subscheme of Affine Space of dimension 2 over Rational Field defined by:
       x^3 + y^3 - 1]

We can also find all points of intersection of the two curves by intersecting them and computing the irreducible components.

    sage: V = C2.intersection(C3)
    sage: V.irreducible_components()
    [Closed subscheme of Affine Space of dimension 2 over Rational Field defined by:
       y,
       x - 1,
     Closed subscheme of Affine Space of dimension 2 over Rational Field defined by:
       y - 1,
       x,
     Closed subscheme of Affine Space of dimension 2 over Rational Field defined by:
       x + y + 2,
       2*y^2 + 4*y + 3]

Thus, e.g., \((1,0)\) and \((0,1)\) are on both curves (visibly clear), as are certain (quadratic) points whose \(y\) coordinates satisfy \(2y^2 + 4y + 3 = 0\).
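As a cross-check outside Sage (plain Python with SymPy), one can confirm that the third, quadratic component really lies on both curves by reducing each curve equation modulo the component's defining polynomials:

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Defining polynomials of the third (quadratic) intersection component.
G = groebner([x + y + 2, 2*y**2 + 4*y + 3], x, y, order='lex')

# Both curve equations reduce to 0 modulo this ideal, so every point of the
# component lies on both the circle and the cubic.
print(G.reduce(x**2 + y**2 - 1)[1])  # remainder 0
print(G.reduce(x**3 + y**3 - 1)[1])  # remainder 0
```

By hand the same reduction reads \(x^2+y^2 = (2y^2+4y+3)+1\) and \(x^3+y^3 = -3(2y^2+4y+3)+1\) after substituting \(x = -y-2\).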
Sage can compute the toric ideal of the twisted cubic in projective 3-space:

    sage: R.<a,b,c,d> = PolynomialRing(QQ, 4)
    sage: I = ideal(b^2-a*c, c^2-b*d, a*d-b*c)
    sage: F = I.groebner_fan(); F
    Groebner fan of the ideal:
    Ideal (b^2 - a*c, c^2 - b*d, -b*c + a*d) of Multivariate Polynomial Ring
    in a, b, c, d over Rational Field
    sage: F.reduced_groebner_bases()
    [[-c^2 + b*d, -b*c + a*d, -b^2 + a*c],
     [-b*c + a*d, -c^2 + b*d, b^2 - a*c],
     [-c^3 + a*d^2, -c^2 + b*d, b*c - a*d, b^2 - a*c],
     [-c^2 + b*d, b^2 - a*c, b*c - a*d, c^3 - a*d^2],
     [-b*c + a*d, -b^2 + a*c, c^2 - b*d],
     [-b^3 + a^2*d, -b^2 + a*c, c^2 - b*d, b*c - a*d],
     [-b^2 + a*c, c^2 - b*d, b*c - a*d, b^3 - a^2*d],
     [c^2 - b*d, b*c - a*d, b^2 - a*c]]
    sage: F.polyhedralfan()
    Polyhedral fan in 4 dimensions of dimension 4

Elliptic Curves

Elliptic curve functionality includes most of the elliptic curve functionality of PARI, access to the data in Cremona's online tables (this requires an optional database package), the functionality of mwrank, i.e., 2-descents with computation of the full Mordell-Weil group, the SEA algorithm, computation of all isogenies, much new code for curves over \(\QQ\), and some of Denis Simon's algebraic descent software.

The command EllipticCurve for creating an elliptic curve has many forms:

EllipticCurve([\(a_1\), \(a_2\), \(a_3\), \(a_4\), \(a_6\)]): Returns the elliptic curve

\[y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6,\]

where the \(a_i\)'s are coerced into the parent of \(a_1\). If all the \(a_i\) have parent \(\ZZ\), they are coerced into \(\QQ\).

EllipticCurve([\(a_4\), \(a_6\)]): Same as above, but \(a_1=a_2=a_3=0\).

EllipticCurve(label): Returns the elliptic curve from the Cremona database with the given (new!) Cremona label. The label is a string, such as "11a" or "37b2". The letter must be lower case (to distinguish it from the old labeling).

EllipticCurve(j): Returns an elliptic curve with \(j\)-invariant \(j\).
EllipticCurve(R, [\(a_1\), \(a_2\), \(a_3\), \(a_4\), \(a_6\)]): Create the elliptic curve over a ring \(R\) with given \(a_i\)'s as above.

We illustrate each of these constructors:

    sage: EllipticCurve([0,0,1,-1,0])
    Elliptic Curve defined by y^2 + y = x^3 - x over Rational Field
    sage: EllipticCurve([GF(5)(0),0,1,-1,0])
    Elliptic Curve defined by y^2 + y = x^3 + 4*x over Finite Field of size 5
    sage: EllipticCurve([1,2])
    Elliptic Curve defined by y^2 = x^3 + x + 2 over Rational Field
    sage: EllipticCurve('37a')
    Elliptic Curve defined by y^2 + y = x^3 - x over Rational Field
    sage: EllipticCurve_from_j(1)
    Elliptic Curve defined by y^2 + x*y = x^3 + 36*x + 3455 over Rational Field
    sage: EllipticCurve(GF(5), [0,0,1,-1,0])
    Elliptic Curve defined by y^2 + y = x^3 + 4*x over Finite Field of size 5

The pair \((0,0)\) is a point on the elliptic curve \(E\) defined by \(y^2 + y = x^3 - x\). To create this point in Sage type E([0,0]). Sage can add points on such an elliptic curve (recall elliptic curves support an additive group structure where the point at infinity is the zero element and three co-linear points on the curve add to zero):

    sage: E = EllipticCurve([0,0,1,-1,0])
    sage: E
    Elliptic Curve defined by y^2 + y = x^3 - x over Rational Field
    sage: P = E([0,0])
    sage: P + P
    (1 : 0 : 1)
    sage: 10*P
    (161/16 : -2065/64 : 1)
    sage: 20*P
    (683916417/264517696 : -18784454671297/4302115807744 : 1)
    sage: E.conductor()
    37

The elliptic curves over the complex numbers are parameterized by the \(j\)-invariant. Sage computes the \(j\)-invariant as follows:

    sage: E = EllipticCurve([0,0,0,-4,2]); E
    Elliptic Curve defined by y^2 = x^3 - 4*x + 2 over Rational Field
    sage: E.conductor()
    2368
    sage: E.j_invariant()
    110592/37

If we make a curve with the same \(j\)-invariant as that of \(E\), it need not be isomorphic to \(E\). In the following example, the curves are not isomorphic because their conductors are different.
    sage: F = EllipticCurve_from_j(110592/37)
    sage: F.conductor()
    37

However, the twist of \(F\) by 2 gives an isomorphic curve.

    sage: G = F.quadratic_twist(2); G
    Elliptic Curve defined by y^2 = x^3 - 4*x + 2 over Rational Field
    sage: G.conductor()
    2368
    sage: G.j_invariant()
    110592/37

We can compute the coefficients \(a_n\) of the \(L\)-series or modular form \(\sum_{n=0}^\infty a_nq^n\) attached to the elliptic curve. This computation uses the PARI C-library:

    sage: E = EllipticCurve([0,0,1,-1,0])
    sage: E.anlist(30)
    [0, 1, -2, -3, 2, -2, 6, -1, 0, 6, 4, -5, -6, -2, 2, 6, -4, 0, -12, 0, -4, 3, 10, 2, 0, -1, 4, -9, -2, 6, -12]
    sage: v = E.anlist(10000)

It only takes a second to compute all \(a_n\) for \(n \leq 10^5\):

    sage: %time v = E.anlist(100000)
    CPU times: user 0.98 s, sys: 0.06 s, total: 1.04 s
    Wall time: 1.06

Elliptic curves can be constructed using their Cremona labels. This pre-loads the elliptic curve with information about its rank, Tamagawa numbers, regulator, etc.

    sage: E = EllipticCurve("37b2")
    sage: E
    Elliptic Curve defined by y^2 + y = x^3 + x^2 - 1873*x - 31833 over Rational Field
    sage: E = EllipticCurve("389a")
    sage: E
    Elliptic Curve defined by y^2 + y = x^3 + x^2 - 2*x over Rational Field
    sage: E.rank()
    2
    sage: E = EllipticCurve("5077a")
    sage: E.rank()
    3

We can also access the Cremona database directly.

    sage: db = sage.databases.cremona.CremonaDatabase()
    sage: db.curves(37)
    {'a1': [[0, 0, 1, -1, 0], 1, 1], 'b1': [[0, 1, 1, -23, -50], 0, 3]}
    sage: db.allcurves(37)
    {'a1': [[0, 0, 1, -1, 0], 1, 1], 'b1': [[0, 1, 1, -23, -50], 0, 3],
     'b2': [[0, 1, 1, -1873, -31833], 0, 1], 'b3': [[0, 1, 1, -3, 1], 0, 3]}

The objects returned from the database are not of type EllipticCurve. They are elements of a database and have a couple of fields, and that's it. There is a small version of Cremona's database, which is distributed by default with Sage, and contains limited information about elliptic curves of conductor \(\leq 10000\).
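Two of the computations above are small enough to cross-check in plain Python (a toy sketch with exact rational arithmetic, not the PARI routines Sage actually calls): the chord-and-tangent group law on \(y^2 + y = x^3 - x\), and the \(L\)-series coefficients \(a_p = p + 1 - \#E(\mathbb{F}_p)\) at primes of good reduction:

```python
from fractions import Fraction as F

# Chord-and-tangent addition on y^2 + y = x^3 - x (curve 37a).
# Affine points only; the point at infinity is not handled in this sketch.
def add(P, Q):
    (x1, y1), (x2, y2) = P, Q
    if P == Q:
        lam = (3 * x1**2 - 1) / (2 * y1 + 1)  # tangent slope, by implicit differentiation
    else:
        lam = (y2 - y1) / (x2 - x1)           # chord slope
    x3 = lam**2 - x1 - x2                     # third intersection with the line
    y3 = -(lam * (x3 - x1) + y1) - 1          # negation on this curve is (x, -y - 1)
    return (x3, y3)

P = (F(0), F(0))
Q = P
for _ in range(9):                            # 10*P by repeated addition
    Q = add(Q, P)
print(Q)                                      # equals (161/16, -2065/64), matching Sage's 10*P

# a_p = p + 1 - #E(F_p), counting affine solutions mod p plus the point at infinity.
def a_p(p):
    affine = sum(1 for x in range(p) for y in range(p)
                 if (y * y + y - (x**3 - x)) % p == 0)
    return p + 1 - (affine + 1)

print([a_p(p) for p in (2, 3, 5, 7, 11, 13)])  # [-2, -3, -2, -1, -5, -2], matching anlist(30)
```

This is a toy for this one curve only; Sage's arithmetic handles the point at infinity, general Weierstrass coefficients, and fast point counting.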
There is also a large optional version, which contains extensive data about all curves of conductor up to \(120000\) (as of October 2005). There is also a huge (2GB) optional database package for Sage that contains the hundreds of millions of elliptic curves in the Stein-Watkins database.

Dirichlet Characters

A Dirichlet character is the extension of a homomorphism \((\ZZ/N\ZZ)^* \to R^*\), for some ring \(R\), to the map \(\ZZ \to R\) obtained by sending those integers \(x\) with \(\gcd(N,x)>1\) to 0.

    sage: G = DirichletGroup(12)
    sage: G.list()
    [Dirichlet character modulo 12 of conductor 1 mapping 7 |--> 1, 5 |--> 1,
     Dirichlet character modulo 12 of conductor 4 mapping 7 |--> -1, 5 |--> 1,
     Dirichlet character modulo 12 of conductor 3 mapping 7 |--> 1, 5 |--> -1,
     Dirichlet character modulo 12 of conductor 12 mapping 7 |--> -1, 5 |--> -1]
    sage: G.gens()
    (Dirichlet character modulo 12 of conductor 4 mapping 7 |--> -1, 5 |--> 1,
     Dirichlet character modulo 12 of conductor 3 mapping 7 |--> 1, 5 |--> -1)
    sage: len(G)
    4

Having created the group, we next create an element and compute with it.

    sage: G = DirichletGroup(21)
    sage: chi = G.1; chi
    Dirichlet character modulo 21 of conductor 7 mapping 8 |--> 1, 10 |--> zeta6
    sage: chi.values()
    [0, 1, zeta6 - 1, 0, -zeta6, -zeta6 + 1, 0, 0, 1, 0, zeta6, -zeta6, 0, -1, 0, 0, zeta6 - 1, zeta6, 0, -zeta6 + 1, -1]
    sage: chi.conductor()
    7
    sage: chi.modulus()
    21
    sage: chi.order()
    6
    sage: chi(19)
    -zeta6 + 1
    sage: chi(40)
    -zeta6 + 1

It is also possible to compute the action of the Galois group \(\text{Gal}(\QQ(\zeta_N)/\QQ)\) on these characters, as well as the direct product decomposition corresponding to the factorization of the modulus.
    sage: chi.galois_orbit()
    [Dirichlet character modulo 21 of conductor 7 mapping 8 |--> 1, 10 |--> -zeta6 + 1,
     Dirichlet character modulo 21 of conductor 7 mapping 8 |--> 1, 10 |--> zeta6]
    sage: go = G.galois_orbits()
    sage: [len(orbit) for orbit in go]
    [1, 2, 2, 1, 1, 2, 2, 1]
    sage: G.decomposition()
    [Group of Dirichlet characters modulo 3 with values in Cyclotomic Field of order 6 and degree 2,
     Group of Dirichlet characters modulo 7 with values in Cyclotomic Field of order 6 and degree 2]

Next, we construct the group of Dirichlet characters mod 20, but with values in \(\QQ(i)\):

    sage: K.<i> = NumberField(x^2+1)
    sage: G = DirichletGroup(20,K)
    sage: G
    Group of Dirichlet characters modulo 20 with values in Number Field in i with defining polynomial x^2 + 1

We next compute several invariants of G:

    sage: G.gens()
    (Dirichlet character modulo 20 of conductor 4 mapping 11 |--> -1, 17 |--> 1,
     Dirichlet character modulo 20 of conductor 5 mapping 11 |--> 1, 17 |--> i)
    sage: G.unit_gens()
    (11, 17)
    sage: G.zeta()
    i
    sage: G.zeta_order()
    4

In this example we create a Dirichlet character with values in a number field. We explicitly specify the choice of root of unity by the third argument to DirichletGroup below.

    sage: x = polygen(QQ, 'x')
    sage: K = NumberField(x^4 + 1, 'a'); a = K.0
    sage: b = K.gen(); a == b
    True
    sage: K
    Number Field in a with defining polynomial x^4 + 1
    sage: G = DirichletGroup(5, K, a); G
    Group of Dirichlet characters modulo 5 with values in the group of order 8 generated by a in Number Field in a with defining polynomial x^4 + 1
    sage: chi = G.0; chi
    Dirichlet character modulo 5 of conductor 5 mapping 2 |--> a^2
    sage: [(chi^i)(2) for i in range(4)]
    [1, a^2, -1, -a^2]

Here NumberField(x^4 + 1, 'a') tells Sage to use the symbol "a" in printing what K is (a Number Field in a with defining polynomial \(x^4 + 1\)). The name "a" is undeclared at this point. Once a = K.0 (or equivalently a = K.gen()) is evaluated, the symbol "a" represents a root of the generating polynomial \(x^4+1\).
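The extension-by-zero in the definition of a Dirichlet character can be mimicked in a few lines of plain Python, here for the conductor-4 character modulo 12 from the earlier session (the dictionary representation is mine, not Sage's):

```python
from math import gcd

# The character modulo 12 sending 7 |--> -1, 5 |--> 1 (conductor 4), stored by
# its values on the units (Z/12Z)^* = {1, 5, 7, 11} and extended to all of Z
# by chi(x) = 0 whenever gcd(x, 12) > 1.
UNIT_VALUES = {1: 1, 5: 1, 7: -1, 11: -1}

def chi(n):
    n %= 12
    return UNIT_VALUES.get(n, 0)   # 0 off the units, per the definition

# Complete multiplicativity on all of Z: chi(m*n) = chi(m)*chi(n).
print(all(chi(m * n) == chi(m) * chi(n) for m in range(24) for n in range(24)))
```

The value at 11 is forced by the homomorphism property: \(11 \equiv 5 \cdot 7 \pmod{12}\), so \(\chi(11) = \chi(5)\chi(7) = -1\).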
Modular Forms

Sage can do some computations related to modular forms, including dimensions, computing spaces of modular symbols, Hecke operators, and decompositions. There are several functions available for computing dimensions of spaces of modular forms. For example,

    sage: dimension_cusp_forms(Gamma0(11),2)
    1
    sage: dimension_cusp_forms(Gamma0(1),12)
    1
    sage: dimension_cusp_forms(Gamma1(389),2)
    6112

Next we illustrate computation of Hecke operators on a space of modular symbols of level \(1\) and weight \(12\).

    sage: M = ModularSymbols(1,12)
    sage: M.basis()
    ([X^8*Y^2,(0,0)], [X^9*Y,(0,0)], [X^10,(0,0)])
    sage: t2 = M.T(2)
    sage: t2
    Hecke operator T_2 on Modular Symbols space of dimension 3 for Gamma_0(1)
    of weight 12 with sign 0 over Rational Field
    sage: t2.matrix()
    [ -24    0    0]
    [   0  -24    0]
    [4860    0 2049]
    sage: f = t2.charpoly('x'); f
    x^3 - 2001*x^2 - 97776*x - 1180224
    sage: factor(f)
    (x - 2049) * (x + 24)^2
    sage: M.T(11).charpoly('x').factor()
    (x - 285311670612) * (x - 534612)^2

We can also create spaces for \(\Gamma_0(N)\) and \(\Gamma_1(N)\).

    sage: ModularSymbols(11,2)
    Modular Symbols space of dimension 3 for Gamma_0(11) of weight 2 with sign 0 over Rational Field
    sage: ModularSymbols(Gamma1(11),2)
    Modular Symbols space of dimension 11 for Gamma_1(11) of weight 2 with
    sign 0 and over Rational Field

Let's compute some characteristic polynomials and \(q\)-expansions.

    sage: M = ModularSymbols(Gamma1(11),2)
    sage: M.T(2).charpoly('x')
    x^11 - 8*x^10 + 20*x^9 + 10*x^8 - 145*x^7 + 229*x^6 + 58*x^5 - 360*x^4 + 70*x^3 - 515*x^2 + 1804*x - 1452
    sage: M.T(2).charpoly('x').factor()
    (x - 3) * (x + 2)^2 * (x^4 - 7*x^3 + 19*x^2 - 23*x + 11) * (x^4 - 2*x^3 + 4*x^2 + 2*x + 11)
    sage: S = M.cuspidal_submodule()
    sage: S.T(2).matrix()
    [-2  0]
    [ 0 -2]
    sage: S.q_expansion_basis(10)
    [
    q - 2*q^2 - q^3 + 2*q^4 + q^5 + 2*q^6 - 2*q^7 - 2*q^9 + O(q^10)
    ]

We can even compute spaces of modular symbols with character.
    sage: G = DirichletGroup(13)
    sage: e = G.0^2
    sage: M = ModularSymbols(e,2); M
    Modular Symbols space of dimension 4 and level 13, weight 2, character
    [zeta6], sign 0, over Cyclotomic Field of order 6 and degree 2
    sage: M.T(2).charpoly('x').factor()
    (x - zeta6 - 2) * (x - 2*zeta6 - 1) * (x + zeta6 + 1)^2
    sage: S = M.cuspidal_submodule(); S
    Modular Symbols subspace of dimension 2 of Modular Symbols space of
    dimension 4 and level 13, weight 2, character [zeta6], sign 0, over
    Cyclotomic Field of order 6 and degree 2
    sage: S.T(2).charpoly('x').factor()
    (x + zeta6 + 1)^2
    sage: S.q_expansion_basis(10)
    [
    q + (-zeta6 - 1)*q^2 + (2*zeta6 - 2)*q^3 + zeta6*q^4 + (-2*zeta6 + 1)*q^5
      + (-2*zeta6 + 4)*q^6 + (2*zeta6 - 1)*q^8 - zeta6*q^9 + O(q^10)
    ]

Here is another example of how Sage can compute the action of Hecke operators on a space of modular forms.

    sage: T = ModularForms(Gamma0(11),2)
    sage: T
    Modular Forms space of dimension 2 for Congruence Subgroup Gamma0(11) of
    weight 2 over Rational Field
    sage: T.degree()
    2
    sage: T.level()
    11
    sage: T.group()
    Congruence Subgroup Gamma0(11)
    sage: T.dimension()
    2
    sage: T.cuspidal_subspace()
    Cuspidal subspace of dimension 1 of Modular Forms space of dimension 2 for
    Congruence Subgroup Gamma0(11) of weight 2 over Rational Field
    sage: T.eisenstein_subspace()
    Eisenstein subspace of dimension 1 of Modular Forms space of dimension 2
    for Congruence Subgroup Gamma0(11) of weight 2 over Rational Field
    sage: M = ModularSymbols(11); M
    Modular Symbols space of dimension 3 for Gamma_0(11) of weight 2 with sign
    0 over Rational Field
    sage: M.weight()
    2
    sage: M.basis()
    ((1,0), (1,8), (1,9))
    sage: M.sign()
    0

Let \(T_p\) denote the usual Hecke operators (\(p\) prime). How do the Hecke operators \(T_2\), \(T_3\), \(T_5\) act on the space of modular symbols?

    sage: M.T(2).matrix()
    [ 3  0 -1]
    [ 0 -2  0]
    [ 0  0 -2]
    sage: M.T(3).matrix()
    [ 4  0 -1]
    [ 0 -1  0]
    [ 0  0 -1]
    sage: M.T(5).matrix()
    [ 6  0 -1]
    [ 0  1  0]
    [ 0  0  1]
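As a plain-Python (SymPy) sanity check on the matrices printed in this section: the weight-12 operator \(T_2\) has characteristic polynomial \((x-2049)(x+24)^2\), with eigenvalues \(\tau(2) = -24\) (from the cusp form \(\Delta\)) and \(1 + 2^{11} = 2049\) (from the Eisenstein series), and the level-11 operators \(T_2\), \(T_3\), \(T_5\) commute, as Hecke operators at distinct primes must:

```python
from sympy import Matrix, symbols, factor

x = symbols('x')

# T_2 on level-1, weight-12 modular symbols, as printed above.
T2w12 = Matrix([[-24, 0, 0], [0, -24, 0], [4860, 0, 2049]])
print(factor(T2w12.charpoly(x).as_expr()))   # factors as (x - 2049)*(x + 24)^2

# T_2, T_3, T_5 on level-11, weight-2 modular symbols, as printed above.
T2 = Matrix([[3, 0, -1], [0, -2, 0], [0, 0, -2]])
T3 = Matrix([[4, 0, -1], [0, -1, 0], [0, 0, -1]])
T5 = Matrix([[6, 0, -1], [0, 1, 0], [0, 0, 1]])
print(T2 * T3 == T3 * T2 and T2 * T5 == T5 * T2 and T3 * T5 == T5 * T3)  # True
```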
I'm currently developing a program to solve 2D transient heat conduction on a square plate using the V-cycle multigrid method. Although my program is able to reach the steady state solution, its computational time is longer than just running the problem with the Gauss-Seidel method.

Problem case: 0.1 m by 0.1 m square plate with fixed temperatures. Top: 20°C; the other three sides: 40°C. Assuming constant material properties, no internal heat generation, and equal grid spacing $\Delta x = \Delta y$.

Parameters: thermal diffusivity $\alpha = 23.1\times 10^{-6}$ (using steel for now as a guide), $\Delta t = 0.01$.

Methodology: implicit finite difference method.

$$T_2(i,j)=\frac{T_1 + F_O \left(T_2(i-1,j) + T_2(i+1,j) + T_2(i,j+1) + T_2(i,j-1)\right)}{1 + 4 F_O}$$

where $F_O$ is the Fourier number $\alpha \Delta t/\Delta x^2$.

Step 1: Pre-smoothing, using the implicit finite difference method shown above. The problem is smoothed by 2 sweeps of red-black Gauss-Seidel.

Step 2: Compute the residual, using

$$\operatorname{res}=\frac{T_1 + F_O \left[T_2(i-1,j) + T_2(i+1,j) + T_2(i,j+1) + T_2(i,j-1)\right]}{1 + 4 F_O} - T_2(i,j)$$

Step 3: The residual is restricted using full weighting.

Step 4: The residual equation is solved with the initial guess of the error set to zero. I solve it using

$$\operatorname{error}(i,j)= \frac{\operatorname{res}(i,j) + F_O \left[\operatorname{error}(i-1,j) + \operatorname{error}(i+1,j) + \operatorname{error}(i,j-1) + \operatorname{error}(i,j+1)\right]}{1 + 4 F_O}$$

I'm not sure if I used the correct equation to solve the residual equation, and as the grid is coarsened $\Delta x$ increases, so the Fourier number ($F_O$) changes. I believe this may be the problem with my program, but I'm not sure, as there is not a lot of information on this (I have been looking for a few weeks). I will really appreciate any help. If anyone needs any other information, please comment.
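On the Fourier-number point: since $F_O = \alpha\,\Delta t/\Delta x^2$, doubling $\Delta x$ at each coarsening divides $F_O$ by 4, so it must be recomputed on every level rather than reused from the fine grid. A minimal sketch of that, plus one red-black sweep of the implicit update (function and variable names are mine, not from the original program):

```python
import numpy as np

def fourier_number(alpha, dt, dx):
    """Fo = alpha*dt/dx^2; recompute on every multigrid level,
    since dx doubles (and Fo quarters) at each coarsening."""
    return alpha * dt / dx**2

def red_black_sweep(T2, T1, fo):
    """One red-black Gauss-Seidel sweep of the implicit update
    T2 = (T1 + Fo*(N+S+E+W)) / (1 + 4*Fo) on interior points only."""
    for parity in (0, 1):                     # red points, then black points
        for i in range(1, T2.shape[0] - 1):
            for j in range(1 + (i + parity) % 2, T2.shape[1] - 1, 2):
                T2[i, j] = (T1[i, j] + fo * (T2[i-1, j] + T2[i+1, j]
                                             + T2[i, j+1] + T2[i, j-1])) / (1 + 4*fo)
    return T2

# Example: Fo on the fine grid vs. one coarsening (dx doubles -> Fo/4).
alpha, dt, dx = 23.1e-6, 0.01, 0.1 / 64
print(fourier_number(alpha, dt, 2 * dx) / fourier_number(alpha, dt, dx))  # 0.25
```

Note the sweep never touches the boundary rows/columns, so the fixed 20°C/40°C boundary values are preserved.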
Question

As a test, I transform a uniform distribution over the unit square. But when I check the transformed distribution with Monte Carlo, it is wrong. What went wrong? Thanks.

Problem

Random variables $\vec{X} = (X_1, X_2)$ follow the uniform distribution over the unit square. In other words, $X_{1} \sim U[0, 1]$ and $X_{2} \sim U[0, 1]$, and $X_{1}$ and $X_{2}$ are independent. The probability density is $\rho_{X}(x_{1}, x_{2}) = 1$.

The transformation $f(\vec{x}) = \vec{y}$ is:

$$y_{1} = \text{sigmoid}(x_{1} + x_{2}) = \frac{1}{1 + e^{-(x_1 + x_2)}}$$
$$y_{2} = \text{sigmoid}(x_{1} - x_{2}) = \frac{1}{1 + e^{-(x_1 - x_2)}}$$

Find the probability density $\rho_{Y}(y_{1}, y_{2})$.

Attempt

The inverse transform $f^{-1}(\vec{y}) = \vec{x}$ is:

$$x_{1} = (a_{1} + a_{2}) / 2$$
$$x_{2} = (a_{1} - a_{2}) / 2$$

where

$$a_{1} = x_{1} + x_{2} = -\log\left(\frac{1}{y_1} - 1\right) = \log(y_{1}) - \log(1 - y_{1})$$
$$a_{2} = x_{1} - x_{2} = -\log\left(\frac{1}{y_2} - 1\right) = \log(y_{2}) - \log(1 - y_{2})$$

The partial derivatives of $a_{1}$ and $a_{2}$ are:

$$\frac{\partial a_{1}}{\partial y_{1}} = \frac{1}{y_{1}} - \frac{1}{1 - y_{1}} \cdot (-1) = \frac{1}{y_{1}(1 - y_{1})}$$
$$\frac{\partial a_{2}}{\partial y_{2}} = \frac{1}{y_{2}(1 - y_{2})}$$

The Jacobian of the inverse transform $f^{-1}(\vec{y}) = \vec{x}$ is:

$$J =\begin{bmatrix} \frac{\partial x_1}{\partial y_1} & \frac{\partial x_1}{\partial y_2} \\ \frac{\partial x_2}{\partial y_1} & \frac{\partial x_2}{\partial y_2} \end{bmatrix} = \begin{bmatrix} \frac{1}{2 y_{1} (1 - y_{1})} & \frac{1}{2 y_{2} (1 - y_{2})} \\ \frac{1}{2 y_{1} (1 - y_{1})} & -\frac{1}{2 y_{2} (1 - y_{2})} \end{bmatrix}$$

The determinant of the Jacobian is:

$$\det(J) = -\frac{1}{2y_{1}(1 - y_{1})y_{2}(1 - y_{2})}$$

The probability density of the transformed distribution is:

$$\rho_{Y}(\vec{y}) = \rho_{X}(f^{-1}(\vec{y})) \lvert \det(J(\vec{y})) \rvert = \frac{1}{2y_{1}(1 - y_{1})y_{2}(1 - y_{2})}$$

Check

- Draw samples of $\vec{X}$
- Transform samples of $\vec{X}$ to samples of $\vec{Y}$
- Compute a weighted 2-dimensional histogram of the samples of $\vec{Y}$; the weights are $1 / \rho_{Y}(\vec{y})$

I expect the weighted histogram to reflect a uniform distribution.

Result of the program

Contrary to my expectation, the result is not a uniform distribution. What went wrong?

Program:

    import numpy as np
    import matplotlib.pyplot as plt

    def sample_x(n_trials, eps=1e-5):
        """Draw samples from X"""
        return np.random.uniform(eps, 1 - eps, size=(2, n_trials))

    def transform_samples(x):
        """Transform samples of X to samples of Y"""
        n_trials = x.shape[1]
        y = np.empty((2, n_trials))
        y[0] = x[0] + x[1]
        y[1] = x[0] - x[1]
        y = 1 / (1 + np.exp(-y))
        return y

    def cal_det_jac(y):
        """Calculate absolute determinant of the Jacobian of the inverse transform"""
        a = y[0] * (y[0] - 1)
        b = y[1] * (y[1] - 1)
        return 1 / (2 * a * b)

    def make_histogram(grid_size, ndim):
        shape = (grid_size,) * ndim
        return np.zeros(shape)

    def update_histogram(histogram, sample, weight):
        """Update the histogram. Assumes the sample is within the unit square"""
        grid_size = histogram.shape[0]
        index = (sample * grid_size).astype(int)
        histogram[index] += weight

    def main():
        # Draw samples from Y
        n_trials = 100000
        x = sample_x(n_trials)
        y = transform_samples(x)

        # Calculate weights of histogram
        w = 1 / cal_det_jac(y)

        # Calculate 2D histogram
        grid_size = 100
        hist = make_histogram(grid_size, 2)
        for i in range(n_trials):
            update_histogram(hist, y[:, i], w[i])

        # Plot
        plt.pcolor(hist)
        plt.show()
        plt.close()

    if __name__ == '__main__':
        main()
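One NumPy detail worth checking in the program above (a guess at the culprit, not a certain diagnosis): in the histogram update, `histogram[index] += weight` with `index` an integer array of shape (2,) performs fancy indexing along the first axis, so it adds the weight to two entire rows rather than to the single cell `(index[0], index[1])`:

```python
import numpy as np

h = np.zeros((4, 4))
idx = np.array([1, 2])

h[idx] += 1.0               # fancy indexing: increments ALL of rows 1 and 2
print(h.sum())              # 8.0, not 1.0

h2 = np.zeros((4, 4))
h2[idx[0], idx[1]] += 1.0   # equivalently h2[tuple(idx)] += 1.0
print(h2.sum())             # 1.0 -- a single cell updated
```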
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs

Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class

I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra

Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Čech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric

It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice

The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly

And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)

It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad

I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)

In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus

One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of

@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students

In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x(t) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...

"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have

Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied by a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2

Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$

Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight.

hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$

for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe + yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$

I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D

Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of

One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).

theorem ...
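The multiplication rule quoted above is easy to machine-check for associativity on sample values (a throwaway sketch with exact rational arithmetic; the value $\delta = 2$ is my arbitrary choice, any rational works):

```python
from fractions import Fraction as F
from itertools import product

DELTA = F(2)   # assumed sample value of delta

# Multiplication rule for a + b*sqrt(delta), represented as the pair (a, b):
# (a, b) (x) (c, d) = (a*c + b*d*delta, b*c + a*d)
def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c + b * d * DELTA, b * c + a * d)

# Spot-check associativity on a grid of small rational coefficients.
vals = [F(-1), F(1, 2), F(3)]
print(all(mul(mul(p, q), r) == mul(p, mul(q, r))
          for p, q, r in product(product(vals, vals), repeat=3)))
```

Of course this only spot-checks; the ring-of-$\Bbb{Q}$ argument above is what turns it into a proof.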
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...)

Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that given some ordering, they are the largest possible, e.g. the surreals are the largest field possible

It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field?

Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement.

Well, take ZFC as an example. CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or deriving CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system

@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder how to show that is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed

Put it another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?

If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book

The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...

O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem

hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:

The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.

In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}.$$

Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework

There's this theorem in Spivak's book of Calculus:

Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'...

and neither Rolle nor the mean value theorem needs the axiom of choice

Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure

Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment

typo: neither Rolle nor the mean value theorem needs the axiom of choice nor an infinite set

> are there palindromes such that the explosion of palindromes is a palindrome

nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
The time measured by the clock you carry with you on your journey is called the proper time, and the proper time is just the length of your world line (give or take a factor of $c$). So if we calculate the length of your world line as you accelerate away then back, that gives us your elapsed time. Better still, the proper time is an invariant, i.e. all observers in all coordinate systems will calculate the same value for it, so we can use whatever coordinate system is most convenient to do the calculation.

I'm going to do the calculation in my inertial frame here on Earth. We use the usual spatial coordinates $x$, $y$ and $z$ and the time coordinate $t$. The geometry of our (flat) spacetime is described by the Minkowski metric:

$$ c^2d\tau^2 = c^2dt^2 - dx^2 - dy^2 - dz^2 $$

For convenience we'll assume that you're moving only along the $x$ axis, so $dy = dz = 0$ and the metric simplifies to:

$$ c^2d\tau^2 = c^2dt^2 - dx^2 \tag{1} $$

What equation (1) is telling us is that if I observe you to move a distance $dx$ in a time $dt$ then the time change on your clock will be $d\tau$. To see this, suppose you're moving at a constant velocity $v$, that is:

$$ \frac{dx}{dt} = v $$

so:

$$ dx = vdt $$

We can substitute this value for $dx$ in equation (1) and we get:

$$ c^2d\tau^2 = c^2dt^2 - v^2dt^2 $$

and with a minor bit of rearranging we get:

$$ d\tau = dt \sqrt{1 - \frac{v^2}{c^2}} = \frac{dt}{\gamma} \tag{2} $$

which you should immediately recognise as the usual expression for time dilation at constant velocity. So far so good.

The problem here is that you are accelerating, so your velocity isn't constant. In that case we write the velocity as a function of time, $v(t)$, in equation (2), and we get the proper time $\tau$ by integrating:

$$ \tau = \int_0^T \sqrt{1 - \frac{v^2(t)}{c^2}} dt \tag{3} $$

It's worth clarifying exactly what equation (3) is telling us.
If I observe you to depart Earth at time $t = 0$ and return at time $t = T$, and during that time I observe your velocity to be some function of time $v(t)$, then equation (3) gives the time $\tau$ measured on the clock you are carrying. What you're asking is how to choose the function for the velocity $v(t)$ to minimise the elapsed time $\tau$, subject to the constraint that the maximum acceleration of your rocket is 1g (or whatever). The rigorous way to do this is to use the calculus of variations and vary the function $v(t)$ to find the stationary value for $\tau$. But this is a hard calculation, and in any case we don't need to go to all that effort because it's obvious that we minimise $\tau$ by making $v(t)$ as big as possible. In other words you have to accelerate as hard as possible. So I'm afraid the answer is precisely the boring one that you were hoping to avoid. You minimise your proper time by accelerating continuously at the highest possible acceleration. Some of the comments have mentioned joshphysics' answer to Derivation of hyperbolic motion in Special Relativity. While equation (3) looks at first glance like an easy way to calculate the elapsed time $\tau$, the problem is that constant acceleration in your frame (constant proper acceleration) is not constant acceleration in my frame. Obviously not, since constant acceleration $a$ as observed in my frame would result in you moving faster than light after a finite time $t = c/a$, and this isn't possible. What Josh calculates is the equation for your motion as observed in my frame when you are undergoing constant proper acceleration. The function $v(t)$ for your motion at constant acceleration turns out to be: $$ v(t) = \frac{at}{\sqrt{1 + \frac{a^2t^2}{c^2}}} $$ To use equation (3) you'd have to use this equation for the velocity and patch together the acceleration and deceleration phases of the journey.
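As a sanity check, here is a short numerical sketch (Python with numpy) that plugs the constant-proper-acceleration $v(t)$ above into equation (3). The one-year journey duration $T$ is an assumed illustration; the integral has the closed form $\tau = (c/a)\,\operatorname{asinh}(aT/c)$, so we can compare the numerical result against it.

```python
import numpy as np

# Integrate equation (3) numerically for the constant-proper-acceleration
# profile v(t) = a t / sqrt(1 + a^2 t^2 / c^2), then compare with the
# closed form tau = (c/a) * asinh(a T / c). T = 1 year is just an example.
c = 299_792_458.0          # speed of light, m/s
a = 9.81                   # 1 g proper acceleration, m/s^2
T = 3.156e7                # one year of Earth (coordinate) time, in seconds

t = np.linspace(0.0, T, 1_000_001)
v = a * t / np.sqrt(1.0 + (a * t / c) ** 2)
f = np.sqrt(1.0 - (v / c) ** 2)                            # integrand of eq. (3)
tau_numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoid rule
tau_exact = (c / a) * np.arcsinh(a * T / c)
print(tau_numeric / T, tau_exact / T)   # the ship ages ≈ 0.88 of an Earth year
```

The agreement between the two values confirms the integration; for a full round trip you would, as noted above, patch together the acceleration and deceleration phases.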
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09). Transverse momentum spectra of $\pi^{\pm}$, $K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
You cannot eliminate the dependence of a solution on the risk aversion parameter (which this author confusingly calls $\lambda$). Perhaps a source of confusion? Typically $\lambda$ is used to denote a Lagrange multiplier in Lagrangian optimization, but the author is using $\lambda$ as a risk tolerance parameter. (In your other linked question, $\lambda$ denotes a Lagrange multiplier.) The author uses an $m \times 1$ vector $\boldsymbol{\gamma}$ for the Lagrange multipliers and the scalar $\lambda$ as the risk tolerance parameter for the utility specification $u(\mathbf{w}) = \mu_p - \frac{1}{2 \lambda} \sigma^2_p$, where $\mu_p$ is the expected portfolio return given portfolio weights $\mathbf{w}$ and $\sigma^2_p$ is the variance of the portfolio return. For a closed-form solution to the optimization problem, the author's goal is to find an expression for the solution that does not use the multipliers. In this case, that means eliminating $\boldsymbol{\gamma}$ (which he does). In your other link, $\lambda$ is used in the typical way as a Lagrange multiplier, so a closed-form solution eliminates $\lambda$. The optimization problem, by the way, is:\begin{equation} \begin{array}{*2{>{\displaystyle}r}} \mbox{maximize (over $\mathbf{w}$)} & \boldsymbol{\mu}'\mathbf{w} - \frac{1}{2 \lambda} \mathbf{w}'\Sigma \mathbf{w} \\ \mbox{subject to} & A \mathbf{w} = \mathbf{b} \end{array}\end{equation} The solution $\mathbf{w}^*$ will be a function of the expected returns $\boldsymbol{\mu}$, the covariance matrix $\Sigma$, and the risk tolerance $\lambda$. This is a simple, pretty standard problem, and you can undoubtedly find other people solving it all over the Internet. Motivation for objective $\boldsymbol{\mu}'\mathbf{w} - \frac{1}{2 \lambda} \mathbf{w}'\Sigma \mathbf{w}$ Let's assume the agent's preferences over various lotteries can be represented by the CARA expected utility $\mathbb{E}[-e^{-aX}]$. Let $X$ be some lottery that's normally distributed with mean $\mu$ and variance $\sigma^2$.
$$X \sim \mathcal{N}(\mu, \sigma^2)$$ You can show that this lottery $X$ has a certainty equivalent value to our agent given by:$$ c(X) = \mu - \frac{1}{2}a\sigma^2 $$(Start with $\mathbb{E}[-e^{-aX}] = \frac{1}{\sqrt{2\pi}\sigma}\int_{- \infty}^{\infty}-e^{-ax-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2}\,dx$ and use the fact that the normal pdf integrates to 1 to find the certainty equivalent.) Here, $a$ is the Arrow-Pratt coefficient of absolute risk aversion. You can also define the risk tolerance $\tau = \frac{1}{a}$. Writing the certainty equivalent with risk tolerance: $$ c(X) = \mu - \frac{1}{2\tau}\sigma^2 $$ Maximizing expected utility with CARA risk aversion over a normally distributed lottery is equivalent to maximizing the certainty equivalent given above. It's a nice, convenient specification that makes the math easy to work with. You can of course point out all kinds of deficiencies which would motivate richer specifications: portfolio returns covary with other variables agents care about; returns aren't normally distributed; and CARA has problems: would you insure the risk of a 1,000 loss the same way if your wealth was 2,000 as if your wealth was 2,000,000,000? Probably not.
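Since the objective above is concave and the constraint linear, the first-order conditions form a linear system, which makes the closed-form solution easy to check numerically. A minimal numpy sketch: the returns, covariances, constraint, and $\lambda$ below are made-up illustrative numbers, not taken from the source.

```python
import numpy as np

# Sketch of the mean-variance problem above:
#   maximize  mu'w - (1/(2*lam)) w' Sigma w   subject to   A w = b.
# The KKT conditions are linear:
#   [ Sigma/lam  A' ] [ w     ]   [ mu ]
#   [ A          0  ] [ gamma ] = [ b  ]
# so one linear solve gives both w* and the multipliers gamma.
mu = np.array([0.08, 0.10, 0.12])          # expected returns (illustrative)
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.15]])     # covariance matrix (illustrative)
A = np.ones((1, 3)); b = np.array([1.0])   # fully-invested constraint: sum w = 1
lam = 2.0                                  # risk tolerance (illustrative)

K = np.block([[Sigma / lam, A.T],
              [A, np.zeros((1, 1))]])
rhs = np.concatenate([mu, b])
sol = np.linalg.solve(K, rhs)
w, gamma = sol[:3], sol[3:]
print(w, w.sum())                          # optimal weights; they sum to 1
```

Here $\boldsymbol{\gamma}$ comes out of the same solve; eliminating it algebraically, as the author does, yields the closed-form expression for $\mathbf{w}^*$ in terms of $\boldsymbol{\mu}$, $\Sigma$, and $\lambda$ alone.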
Mathematical Physics. Title: Superradiance initiated inside the ergoregion. (Submitted on 10 Sep 2015 (v1), last revised 18 Nov 2016 (this version, v3).) Abstract: We consider stationary metrics that have both a black hole and an ergoregion. The class of such metrics contains, in particular, the Kerr metric. We study the Cauchy problem with highly oscillatory initial data supported in a neighborhood inside the ergoregion with some initial energy $E_0$. We prove that when the time variable $x_0$ increases this solution splits into two parts: one with the negative energy $-E_1$, ending at the event horizon in a finite time, and the second part, with the energy $E_2=E_0+E_1>E_0$, escaping, under some conditions, to infinity as $x_0\rightarrow +\infty$. Thus we get the superradiance phenomenon. In the case of the Kerr metric the superradiance phenomenon is "short-lived", since both the solutions with positive and negative energies cross the outer event horizon in a finite time (modulo $O(\frac{1}{k})$), where $k$ is a large parameter. We show that these solutions end on the singularity ring in a finite time. We also study the case of a naked singularity. Submission history: From: Gregory Eskin. [v1] Thu, 10 Sep 2015 15:35:33 GMT (14kb) [v2] Sat, 14 Nov 2015 02:52:38 GMT (21kb) [v3] Fri, 18 Nov 2016 07:36:42 GMT (24kb)
To the extent you can assume your process is a white stationary ergodic process, the variance of the mean is the variance of the process divided by $N$, where $N$ is the number of samples. Assume that you estimate the variance from your sample using $$\sigma_x^2 = \frac{1}{N-1}\sum_{i=1}^N(x_i-\mu_x)^2$$ NOTE: This is the unbiased estimator, since the variance is being estimated from a sample of $N$ elements using a mean that is itself estimated from the sample (see Zwillinger, D. (Ed.). CRC Standard Mathematical Tables and Formulae. Boca Raton, FL: CRC Press, 1995). Otherwise, if the mean were known (which it isn't), there would be a $\frac{1}{N}$ in front of the summation above. You should then be able to derive confidence intervals on the mean itself, with the variance of the mean being $\frac{\sigma_x^2}{N}$. This is seen in the expressions for the mean and variance of $Y$, where $Y$ is the average of $X$ over $N$ samples, and where $\mu_x$ is the mean of $x$ and $\sigma_x^2$ is the variance of $x$: \begin{align}E(Y) &= E\left[\frac{1}{N}\left(X_1+X_2+ \ldots +X_N\right)\right] = \frac{1}{N}E\left[(X_1+X_2+ \ldots +X_N)\right]\\\textrm{Var}(Y) &= \textrm{Var}\left[\frac{1}{N}\left(X_1+X_2+ \ldots +X_N\right)\right] = \frac{1}{N^2}\textrm{Var}\left(X_1+X_2+ \ldots+X_N\right)\end{align} \begin{align}E\left[(X_1+X_2+ \ldots +X_N)\right] &= E[X_1]+E[X_2]+ \ldots E[X_N] = N \mu_x\\\textrm{Var}(X_1+X_2+ \ldots +X_N) &= \textrm{Var}(X_1)+\textrm{Var}(X_2) + \ldots \textrm{Var}(X_N) = N\sigma_x^2\end{align} Therefore\begin{align}E(Y) &= \frac{1}{N}E[(X_1+X_2+ \ldots +X_N)] =\frac{1}{N}N\mu_x = \mu_x\quad\text{and}\\\textrm{Var}(Y) &= \frac{1}{N^2}\textrm{Var}(X_1+X_2+ \ldots +X_N) = \frac{1}{N^2}N\sigma_x^2 = \frac{\sigma_x^2}{N}\end{align} How to make use of the Autocorrelation Function Note that this reduction by $N$ is valid as long as the sequence is white. Once samples are correlated, there will be no further reduction in the variance of the estimate.
You can therefore make use of your autocorrelation to determine the number of samples that will reduce the variance. I do not have an exact calculation for this, but to provide a rough order of magnitude I would use the number of samples at which the normalized autocorrelation drops below 0.5. For example, in your data the total number of samples is 30,000. If the autocorrelation immediately dropped below 0.5 after just one sample, then all samples are independent and your variance estimate would be $\sigma_x^2/30{,}000$. However, if the autocorrelation does not drop below 0.5 until 100 samples, then the variance estimate would be $\sigma_x^2/300$.
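A rough sketch of that heuristic in Python with numpy. The 0.5 threshold is the rule of thumb above, and the synthetic white-noise and AR(1) series are assumed illustrations, not the question's actual data:

```python
import numpy as np

def effective_samples(x, threshold=0.5):
    """Rough effective sample count: N divided by the first lag at which the
    normalized autocorrelation drops below `threshold` (an order-of-magnitude
    heuristic only, per the discussion above)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    acf = np.correlate(xc, xc, mode="full")[n - 1:]   # lags 0 .. n-1
    acf = acf / acf[0]                                # normalize so acf[0] == 1
    below = np.flatnonzero(acf < threshold)
    lag = int(below[0]) if below.size else n          # first lag below threshold
    return n / lag

rng = np.random.default_rng(0)
white = rng.standard_normal(5_000)       # white noise: every sample independent
ar1 = np.empty(5_000)                    # AR(1) with coefficient 0.95: correlated
ar1[0] = rng.standard_normal()
for i in range(1, len(ar1)):
    ar1[i] = 0.95 * ar1[i - 1] + rng.standard_normal()

print(effective_samples(white))   # 5000.0: autocorrelation drops below 0.5 at lag 1
print(effective_samples(ar1))     # far fewer effective samples
```

The variance of the sample mean is then roughly $\sigma_x^2$ divided by this effective sample count rather than by the raw $N$.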
LaTeX typesetting is done by using special tags or commands that provide a handful of ways to format your document. Sometimes the standard commands are not enough to fulfil some specific needs; in such cases new commands can be defined, and this article explains how. Most LaTeX commands are simple words preceded by a special character. In a document there are different types of \textbf{commands} that define the way the elements are displayed. These commands may insert special elements: $\alpha \beta \Gamma$ In the previous example there are different types of commands. For instance, \textbf will make boldface the text passed as a parameter to the command. In mathematical mode there are special commands to display Greek characters. Commands are special words that determine LaTeX's behaviour. Usually these words are preceded by a backslash and may take some parameters. The command \begin{itemize} starts an environment; see the article about environments for a better description. Below the environment declaration is the command \item, which tells LaTeX that this is an item in a list and thus has to be formatted accordingly, in this case by adding a special mark (a small black dot called a bullet) and indenting it. Some commands need one or more parameters to work. The example in the introduction includes a command to which a parameter has to be passed, \textbf; this parameter is written inside braces and is necessary for the command to do something. There are also optional parameters that can be passed to a command to change its behaviour; these optional parameters have to be put inside brackets. In the example above, the command \item[\S] does the same as \item, except that inside the brackets is \S, which replaces the black dot before the line with a special character. LaTeX ships with a huge number of commands for a large number of tasks; nevertheless, it is sometimes necessary to define special commands to simplify repetitive and/or complex formatting.
New commands are defined with the \newcommand statement; let's see an example of the simplest usage. \newcommand{\R}{\mathbb{R}} The set of real numbers is usually represented by a blackboard bold capital R: \( \R \). The statement \newcommand{\R}{\mathbb{R}} has two parameters that define the new command:
\R — the name of the new command.
\mathbb{R} — what the command does, in this case printing a blackboard bold R (\mathbb is provided by the amssymb package).
After the command definition you can see how the command is used in the text. Even though in this example the new command is defined right before the paragraph where it's used, good practice is to put all your user-defined commands in the preamble of your document. It is also possible to create new commands that accept some parameters. \newcommand{\bb}[1]{\mathbb{#1}} Other numerical systems have similar notations. The complex numbers \( \bb{C} \), the rational numbers \( \bb{Q} \) and the integer numbers \( \bb{Z} \). The line \newcommand{\bb}[1]{\mathbb{#1}} defines a new command that takes one parameter:
\bb — the name of the new command.
[1] — the number of parameters the command takes.
\mathbb{#1} — what the command does; #1 is replaced by the first parameter.
User-defined commands are even more flexible than the examples shown above. You can define commands that take optional parameters: \newcommand{\plusbinomial}[3][2]{(#2 + #3)^#1} To save some time when writing many expressions with exponents, we define a new command to make them simpler: \[ \plusbinomial{x}{y} \] And even the exponent can be changed: \[ \plusbinomial[4]{y}{y} \] Let's analyse the syntax of the line \newcommand{\plusbinomial}[3][2]{(#2 + #3)^#1}:
\plusbinomial — the name of the new command.
[3] — the number of parameters.
[2] — the default value for the first (optional) parameter.
(#2 + #3)^#1 — what the command does; #1, #2 and #3 are replaced by the parameters.
If you define a command that has the same name as an already existing LaTeX command, you will see an error message when compiling your document and the command you defined will not work. If you really want to override an existing command, this can be accomplished with \renewcommand: \renewcommand{\S}{\mathbb{S}} The Riemann sphere (the complex numbers plus $\infty$) is sometimes represented by \( \S \) In this example the command \S (see the example in the commands section) is overwritten to print a blackboard bold S.
\renewcommand uses the same syntax as \newcommand.
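Putting the pieces together, a minimal compilable sketch collecting the definitions discussed above might look like the following (it assumes amssymb for \mathbb, as noted earlier):

```latex
% Minimal document collecting the definitions from this article.
\documentclass{article}
\usepackage{amssymb}                            % provides \mathbb
\newcommand{\R}{\mathbb{R}}                     % no parameters
\newcommand{\bb}[1]{\mathbb{#1}}                % one parameter
\newcommand{\plusbinomial}[3][2]{(#2 + #3)^#1}  % 3 parameters, 1st optional, default 2
\renewcommand{\S}{\mathbb{S}}                   % override an existing command
\begin{document}
Real numbers \( \R \), complex numbers \( \bb{C} \),
\[ \plusbinomial{x}{y} \quad \plusbinomial[4]{x}{y} \]
and the sphere \( \S \).
\end{document}
```

All the definitions sit in the preamble, following the good practice mentioned above.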
From a purely theoretical standpoint, the radius of the wheel doesn't matter, but heavy wheels are slow. In an idealized scenario, the skateboard conserves energy. This means that its total energy when it gets to the bottom of the hill is the same regardless of how it goes down. As a wheel rolls down the hill, it picks up kinetic energy. Some of that goes into its translational motion, while some goes into its rotation. The energy in the rotation is essentially wasted from the point of view of going fast. A solid cylinder or disk, for example, will move $1/\sqrt{1.5} \approx 0.82$ times as fast at the bottom of a hill it has rolled down as it would go if it slid down without rolling (and without friction). If you have heavy wheels compared to the weight of the skateboard and rider, then you suffer most of this slowdown. If you have light wheels, the energy of the wheels hardly matters and you can approach the ideal sliding speed. Next we want to know if this matters. On Wikipedia I found that rolling resistances can be about $0.01$. When you roll down an incline of angle $\theta$, you lose about $0.01 \cot\theta$ of the energy you pick up to friction. For now, set the rolling resistance equal to $c$ instead of the number $0.01$. For cylindrical wheels of total mass $m$ and a combined wheels-and-rider mass $M$, a fraction $m/3M$ of the kinetic energy goes into rotational rather than translational motion. Thus, the rotational energy stored in the wheels becomes important when, roughly speaking, $$m/M > 3 c \cot\theta$$ The question we wanted to answer was not when rolling resistance becomes important, but what size wheels are faster. So imagine that $c$ is a function of $R$, the wheel radius, and that $m = \lambda R^2$, saying that we'll consider wheels of the same density and thickness, but different radius.
If we differentiate both sides of the previous expression with respect to $R$, we get a condition for the extra rotational energy stored in the wheels to start being a bad trade-off against any improvement in rolling resistance: $$2\lambda R/M > - 3 c'(R) \cot\theta$$ or $$ R > -\frac{3 M c'(R) \cot\theta}{2\lambda}$$ When this inequality is satisfied, making the wheels larger will slow you down. Otherwise, larger wheels are better. Note that if $c'(R)$ is zero or positive, larger wheels are always worse. Unfortunately, I can't think of good ways to estimate $c'(R)$ without experimentation. One thing I can think of is this (it's highly speculative): a skateboard has pretty hard wheels that probably don't deform much under the weight of a rider, but skateboards are treated roughly and may get some grime in their bearings. So I would guess that friction in the bearing could be the most important factor in rolling resistance for a skateboard. I don't have much experience with them, but I think that if I take a skateboard wheel and spin it while holding it up in the air, it won't spin and spin for a minute or more like a bicycle wheel will. Larger wheels mean fewer rotations and less motion in the bearing, so this would give $c(R) = \alpha/R$ for some constant $\alpha$. This guess would give $$ R > \frac{3 M \alpha \cot\theta}{2\lambda R^2}$$ or $$R > \left(\frac{3 M \alpha \cot\theta}{2\lambda}\right)^{1/3}$$ When I let the density of the wheels be $1\,\mathrm{g/cm^3}$, the mass of the rider be $75\,\mathrm{kg}$, the slope $10^\circ$, the width of the wheels $2\,\mathrm{cm}$, and the rolling resistance $0.01$ when the wheels are $5\,\mathrm{cm}$ tall, I get that larger wheels slow you down when $R > 17\,\mathrm{cm}$. That would indicate that larger wheels are actually better in this case up to a pretty big wheel size (for a skateboard), but take it with a large grain of salt. There are lots of unjustified assumptions in there.
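The closing estimate is easy to reproduce. A small Python sketch under the same assumptions as the text: a solid cylindrical wheel with $m = \lambda R^2$, where $\lambda = \rho \pi t$ for density $\rho$ and width $t$, and $c(R) = \alpha/R$ calibrated so that $c = 0.01$ at $R = 5\,\mathrm{cm}$.

```python
import numpy as np

# Reproduce the closing estimate with the stated numbers: wheel density
# 1 g/cm^3, rider mass 75 kg, 10 degree slope, 2 cm wheel width, and
# rolling resistance c(R) = alpha/R calibrated to c = 0.01 at R = 5 cm.
rho = 1000.0                 # wheel density, kg/m^3
M = 75.0                     # rider mass, kg (rider dominates the total)
theta = np.radians(10)       # slope
width = 0.02                 # wheel width t, m
alpha = 0.01 * 0.05          # c(R) = alpha/R, so alpha = c*R at calibration
lam = rho * np.pi * width    # m = lam * R^2 for a solid cylindrical wheel

# Critical radius from R > (3 M alpha cot(theta) / (2 lambda))^(1/3)
R_crit = (3 * M * alpha / np.tan(theta) / (2 * lam)) ** (1 / 3)
print(R_crit)                # ≈ 0.17 m, matching the ~17 cm figure above
```

Changing the calibration point or the $c(R) = \alpha/R$ guess shifts $R_\text{crit}$, which is exactly why the conclusion deserves the grain of salt mentioned above.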
Nuclear Theory Title: $S$-factor and scattering-parameter extractions from ${}^{3}\mathrm{He} +{}^{4}\mathrm{He} \rightarrow {}^{7}\mathrm{Be} + \gamma$ (Submitted on 16 Sep 2019) Abstract: Previous studies of the reaction ${}^{3}\mathrm{He} +{}^{4}\mathrm{He} \rightarrow {}^{7}\mathrm{Be} + \gamma$ have focused on providing the best central value and error bar for the $S$ factor at solar energies. Measurements of this capture reaction, of the ${}^{3}\mathrm{He}$-${}^{4}\mathrm{He}$ scattering phase shifts, and of properties of ${}^{7}\mathrm{Be}$ have been used to constrain the theoretical models employed. Here we show that much more information than was previously appreciated can be extracted from angle-integrated capture data alone. We use the next-to-leading-order (NLO) amplitude in an effective field theory (EFT) for the reaction to perform the extrapolation. At this order the EFT describes the reaction using an $s$-wave scattering length and effective range, the asymptotic properties of the final bound states, and short-distance contributions to the $E1$ capture amplitude. We extract the multi-dimensional posterior of all these parameters via a Bayesian analysis. We find that properties of the ${}^{7}\mathrm{Be}$ ground and excited states are well constrained. The total $S$ factor is $S(0)= 0.578^{+0.015}_{-0.016}$ keV b, and the branching ratio for excited- to ground-state capture at zero energy is $Br(0)=0.406^{+0.013}_{-0.011}$, both at 68\% degree of belief. This $S(0)$ is broadly consistent with other recent evaluations, including the previously recommended value $S(0)=0.56 \pm 0.03$ keV b, but has a smaller error bar. We also find significant constraints on the scattering parameters, and we obtain constraints on the angular dependence of $S(E)$.
The path forward seems to lie with better measurements of the scattering phase shifts and of the angular dependence of $S(E)$, together with a better understanding of the asymptotic normalization coefficients of the ${}^7\mathrm{Be}$ bound states' wave functions. Data on these could further reduce the uncertainty in $S(0)$. Submission history: From Xilin Zhang; [v1] Mon, 16 Sep 2019 15:40:16 GMT.
I am a beginner in ergodic theory. I have read some lecture notes (such as this and this) in the hope of finding something that helps to prove the ergodicity of a Markov chain taking values in a general state space (say a Polish space). Although I've learned many interesting things, such as the applications of ergodic theory in number theory, I haven't found anything that helps with my original problem, i.e., the ergodicity of Markov chains. I expected to find a sufficient and easy-to-verify condition for the ergodicity of the shift operator, but I only found such a condition (irreducibility) in the case of Markov chains taking discrete values. As for Markov chains taking values in a general state space, it seems to me that all the existing sufficient conditions for ergodicity (such as the small set condition or the drift condition) are established without mentioning any abstract setting of ergodic theory, for example in the book Markov Chains and Stochastic Stability or in this recent paper. And it is difficult to verify these conditions in general. So my questions are: Are there any existing results in ergodic theory which can help to easily establish ergodicity of a general Markov chain? My impression is that ergodic theory is powerful and has been developed for a long time; have I missed some important results? If no such result exists, what is the more hopeful choice if one needs to prove the ergodicity of a general Markov chain? Should one stay with ergodic theory and try to find something applicable to Markov chains, or completely set ergodic theory aside and work to prove small set or drift conditions in one's own setting? Maybe the answer to the second question is opinion-based, but please share your experience with me. I am a PhD student and I would like to know whether it is worth investing a lot of time in one of these directions. If you could give me some related advice, I would also be very thankful.
Thank you very much for your help. Edit to make my question clear: By "ergodicity of a Markov chain taking values in a general state space", I mean the following: there is a Markov chain $(X_n)_{n\geq 1}$ with $X_n \in S$, where $S$ is a Polish space; suppose $\mu$ is a measure on $\mathcal{B}(S)$ and we already know that $\mu$ is invariant for this Markov chain. Then saying this chain is ergodic means that for any $B \in \mathcal{B}(S)$ we have $$\dfrac{1}{n}\sum_{k=1}^n 1_B(X_k) \to \mu(B) \text{ almost surely.}$$ By abstract ergodic theory I mean the following: there is a measure space $(\Omega, \mathcal{B}, \mu)$ and a measure-preserving transformation $T: \Omega \to \Omega$, i.e., $T^{-1}B \in \mathcal{B}$ and $\mu(T^{-1}B) = \mu(B)$; when $T$ is ergodic we have theorems such as Birkhoff's ergodic theorem and many other interesting results. I wish to find some results in this abstract setting such that the Markov chain's ergodicity is an application of them. But I have found nothing in this direction, and every proof of ergodicity of general Markov chains that I know uses no results from abstract ergodic theory. Is ergodic theory useful in proving ergodicity of general Markov chains?
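As a concrete (and hypothetical, chosen purely for illustration) sanity check of the ergodic-average property above, one can simulate an AR(1) chain on $S = \mathbb{R}$, whose invariant law is Gaussian, and compare the time average of $1_B$ with $\mu(B)$:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# AR(1) chain X_{n+1} = a X_n + eps_n with eps_n ~ N(0, 1);
# its invariant law is N(0, 1/(1 - a^2)) and the chain is ergodic.
a = 0.5
n = 200_000
x, hits = 0.0, 0
for _ in range(n):
    x = a * x + rng.standard_normal()
    hits += (-1.0 < x < 1.0)          # indicator 1_B for B = (-1, 1)

time_avg = hits / n
sigma = math.sqrt(1.0 / (1.0 - a * a))
mu_B = math.erf(1.0 / (sigma * math.sqrt(2.0)))   # mu(B) for N(0, sigma^2)
print(time_avg, mu_B)   # the two values should nearly agree
```

This of course only illustrates the definition; it says nothing about how one would prove ergodicity in general.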
One of the axioms of ZF set theory is the axiom of union: $$(\forall x)(\exists y)(\forall z)(z\in y \iff (\exists t)(z\in t\ \&\ t\in x)).$$ The axiom of union (together with the axiom of extensionality) guarantees that the operation $$x\mapsto\bigcup x$$ can be defined for all sets and indeed captures our intuition of the union of all elements of $x$. On the other hand, intersection, understood as the common part of all members of a set, cannot be defined this way, since $\bigcap\emptyset$ poses a problem. However, what is stopping us from using the axiom scheme of separation and defining $$\bigcap x :=\{y\in\bigcup x\ |\ (\forall z)(z\in x\Rightarrow y\in z)\},$$ which is a correct instance of the axiom scheme of separation and captures the notion of the intersection of all elements for nonempty sets $x$, while for the empty set we clearly have $\bigcap\emptyset = \emptyset$ (since $\bigcup\emptyset = \emptyset$)? Addressing comments, and to be more precise: how does one define the intersection of all members of a set in ZF (please provide a formula in the language of ZF) such that the induced operation of intersection is undefined for the empty set? I ask because I have seen that people have this definition in mind (instead of "mine" presented above), but they rarely write it down and just go on claiming that $$\bigcap x$$ can be defined for nonempty sets $x$ and is undefined for $\emptyset$ (see, e.g., Chapter 5 of "Notes on Logic and Set Theory" by P. T. Johnstone).
In machine learning, we often deal with high-dimensional data. For convenience, we often use matrices to represent data, and numerical optimization in machine learning often involves matrix transformations and computations. To make matrix computation more efficient, we often factorize a matrix into several special matrices, such as triangular matrices and orthogonal matrices. In this post, I will review essential matrix concepts used in machine learning. While some matrix concepts may not seem intuitive, the goal of introducing matrices in numerical optimization and machine learning is of course NOT to confuse and intimidate non-mathematicians, but to represent high-dimensional problems more conveniently and concisely. In addition, “smart” matrix transformations, factorizations, and decompositions can make matrix computation much more efficient. Two questions in linear algebra There are essentially two questions in linear algebra: solve the linear equation \(Ax = b\) eigendecomposition and singular value decomposition of a matrix \(A\) In this post, I will discuss the first question, and in the next post, I will focus on the second one. Matrix rank, inverse, singular, and determinant Before we dive deep into matrix transformations, I would like to first review common matrix characteristics. Rank The rank of a matrix \(A\) is the maximal number of linearly independent columns in \(A\), which is equal to the row rank. See [1] for a detailed description. A matrix has full rank if its rank equals the largest possible dimension: $$ rank(A) = \min (m, n), A \in R^{m \times n} $$ For a square matrix \(A\), \(A\) is invertible if and only if \(A\) is full rank, i.e. $$ rank(A) = n$$ Invertible A square matrix \(A \in R^{n \times n} \) is invertible if there exists a matrix \(B \in R^{n \times n} \) such that $$ AB = BA = I_n $$ The inverse of a matrix is written as \(A^{-1}\). Matrix \(A\) is not invertible if \(A\) is not full rank.
Singular A square matrix \(A\) is also called singular if it is not invertible. Determinant The determinant of a square matrix \(A\) is a scalar value calculated from the elements of \(A\). See [2] for a detailed description. \(A\) is singular (not invertible) if and only if its determinant \(det(A)\) is 0. Solve the linear equation \(Ax = b \) 1. Least squares method in matrix format \(A \in R^{m \times n}\) is a known matrix, \(x \in R^{n} \) is unknown, and \(b \in R^{m \times 1}\) is a known vector. Linear regression uses a slightly different notation: \(A \in R^{n \times p}\) with \(n\) as the number of data points (rows) and \(p\) as the number of dimensions for each data point (columns). Using the traditional notation, the linear regression objective, the sum of squared errors, can be written as: \(J(x) = \| Ax - b \|^2 \tag{1} \) \(x^{*} = argmin_{x \in R^n} \| Ax - b \|^2 \tag{2}\) With the linear regression notation, we have \(\hat \beta = argmin_{\beta \in R^p} \| \textbf{X} \beta - y \|^2 \tag{3} \) (1) \(m = n\) When \(m = n\), \(A\) is a square matrix. If \(A\) is invertible, i.e. \(A\) is not singular, Equation 2 is equivalent to solving the linear equation \(Ax = b\), and we have a unique solution: \(x^{*} = A^{-1}b \tag{4} \) (2) \(m > n\) Equation 4 looks quite different from what we derive in linear regression: \(\hat \beta = ( \textbf{X}^T \textbf{X}) ^ {-1} \textbf{X}^Ty, \textbf{X} \in R^{n \times (p+1)} \tag {5} \) The reason is that usually we have \(m > n\), i.e. there are more data points than feature dimensions. The problem is then called “overdetermined” and an exact solution may not exist.
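The rank, determinant, and inverse facts above are easy to check numerically; a small sketch with NumPy (illustrative matrices of my choosing):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # rank 2: full rank, hence invertible
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1: singular

assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(B) == 1
assert abs(np.linalg.det(B)) < 1e-12      # det(B) = 0 iff B is singular

A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))  # A A^{-1} = I_n
```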
In this case, a solution to \(Ax = b\) is defined as the stationary point of the objective function \(J(x)\) from equation 1, computed by setting the first-order derivative of \(J(x)\) to 0: \(\frac {\partial J }{\partial x} = 2A^TAx - 2A^T b = 0 \tag {6.1}\) \(A^TAx = A^Tb \tag {6.2}\) If \(A^TA\) is invertible, we derive the same form as equation 5 in linear regression: \(x^{*} = (A^TA)^{-1}A^Tb \tag {7}\) (3) \(m < n\) When \(m < n\), there are more feature dimensions than data points. The problem is then called “underdetermined” and the solution may not be unique. In this case, we define the solution \(x^{*}\) as: \(x^{*} = argmin_{Ax = b} \frac {1}{2}\|x\|^2 \tag {8} \) Equation 8 is essentially an equality-constrained optimization problem, and we can solve it using Lagrange multipliers, as discussed in later numerical optimization posts. You may have seen similar expressions to equation 8 in regularization. Indeed, regularization is often used when \(n \gg m\) to reduce dimension. (4) Geometric interpretation of least squares Equation 1 can be interpreted as finding the minimal Euclidean distance between two vectors, \(Ax\) and \(b\). The minimal distance is achieved when we project \(b\) onto the column space of \(A\): \(b^{*} = Ax^{*} \tag{9}\) We can write \(b\) as: \(b = b^{*} + r, r \bot b^{*} \tag{10.1} \) \(b = Ax^{*} + r, r \bot Ax^{*} \tag{10.2} \) Therefore, \(x^{*} \) is a solution if and only if \(r\) is orthogonal to the column space of \(A\): \(A^T r = 0, r = b - Ax \tag{11}\) \(A^T(b-Ax) = 0 \tag{12.1} \) $$A^TAx = A^Tb \tag {12.2}$$ Note that although equation 12.2 has the same form as equation 6.2, equation 12.2 is applicable to all least squares problems for all possible \(m, n\). There is always a solution to equation 12.2, and the solution is unique if \(A\) is full rank [3]. Equation 12.2 can be written as \(A_1x = b_1\) $$A_1 = A^TA, b_1 = A^Tb \tag {13}$$ \(A_1 \in R^{n \times n}\) is a square matrix.
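Equations 7 and 8 can be checked against library solvers; a sketch with NumPy on random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined (m > n): normal equations (equation 7) vs. library least squares
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
x_normal = np.linalg.solve(A.T @ A, A.T @ b)       # solve (A^T A) x = A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_normal, x_lstsq)

# Underdetermined (m < n): the pseudoinverse gives the minimum-norm solution of equation 8
A2 = rng.standard_normal((2, 4))
b2 = rng.standard_normal(2)
x_min = np.linalg.pinv(A2) @ b2
assert np.allclose(A2 @ x_min, b2)                 # exact solution with minimal ||x||
```

In practice one prefers `lstsq` (or a QR-based solver) over forming \(A^TA\) explicitly, since the normal equations square the condition number.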
Equation 13 shows that the normal equations are again a square linear system of the general form $$Ax = b$$ 2. Gaussian elimination for \(Ax = b\) This is the most basic approach to solving a linear equation. The idea is to successively eliminate unknowns from the equations, until eventually we have only one equation in one unknown. See references [4][5] for well-elaborated examples. If the square matrix \(A \in R^{n \times n}\) does not have full rank, then there is no unique solution. Gaussian elimination includes 2 stages: Stage 1: forward elimination Step \((k)\): eliminate \(x_k\) from equations \(k+1\) through \(n\) using the multipliers \(m_{i,k} = \frac {a^{(k)}_{i,k}}{a^{(k)}_{k,k}}, i = k+1,k+2,\dots,n \tag{14} \) Here we assume that \(a^{(k)}_{k,k} \neq 0\). Although the original \(A\) may have 0 on its diagonal, we can permute \(Ax = b\) by swapping the order of equations in both \(A\) and \(b\) to make sure all pivot elements are nonzero. The solution of a linear system is unchanged by such a permutation.
For example, we can permute: $$\begin{bmatrix} 0 &1 \\ 1& 2 \\ \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \end{bmatrix} = \begin{bmatrix} 3 \\ 4 \\ \end{bmatrix} $$ to $$\begin{bmatrix} 1 &2 \\ 0& 1 \\ \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \end{bmatrix} = \begin{bmatrix} 4 \\ 3 \\ \end{bmatrix} $$ We then subtract the multiplier of row \(k\) from each row \(i\) in \(A\) and \(b\), yielding new matrix elements: \(A^{(k)} x = b^{(k)} \tag{15} \) At the end of stage 1, we have an upper triangular matrix \(A^{(n-1)}\), denoted \(U\), and a new right-hand side \(b^{(n-1)}\), denoted \(g\): \(Ux = g \tag{16} \) The total cost of \(A \rightarrow U\) is approximately $$\frac {2}{3}n^3$$ divisions: \((n-1) + (n-2) + \dots + 1 = \frac {n(n-1)}{2}\) multiplications: \((n-1)^2 + (n-2)^2 + \dots + 1 = \frac {n(n-1)(2n-1)}{6}\) subtractions: \((n-1)^2 + (n-2)^2 + \dots + 1= \frac {n(n-1)(2n-1)}{6}\) The total cost of \(b \rightarrow g\) is approximately $$ n^2 $$ multiplications: \((n-1) + (n-2) + \dots + 1 = \frac {n(n-1)}{2}\) subtractions: \((n-1) + (n-2) + \dots + 1 = \frac {n(n-1)}{2}\) Stage 2: back substitution We solve equation 16 by back substitution, from the last row to the first: \(x_n = \frac {g_n}{u_{n,n}} \tag{17}\) The total cost of solving \(Ux = g\) is approximately $$ n^2$$ divisions: \(n\) multiplications: \((n-1) + (n-2) + \dots + 1 = \frac {n(n-1)}{2}\) subtractions: \((n-1) + (n-2) + \dots + 1 = \frac {n(n-1)}{2}\) The primary cost of Gaussian elimination is thus transforming \(A\) into an upper triangular matrix \(U\). 3. LU factorization Note that in linear algebra, we tend to use decomposition and factorization interchangeably, as they both mean breaking down a matrix into parts. During Gaussian elimination, we can store the multipliers at each step to form a lower triangular matrix with diagonal values of 1 [4].
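The two elimination stages described above can be condensed into a short routine; a minimal, unoptimized sketch (with row swaps for pivoting, as described above):

```python
import numpy as np

def gaussian_solve(A, b):
    """Solve Ax = b by forward elimination with partial pivoting,
    followed by back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Stage 1: forward elimination
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))   # choose a nonzero pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                  # multiplier m_{i,k} (equation 14)
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Stage 2: back substitution on Ux = g
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The permutation example above: the zero pivot is handled by a row swap
A = np.array([[0.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 4.0])
print(gaussian_solve(A, b))   # matches np.linalg.solve(A, b)
```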
Thus, we can factorize a non-singular square matrix \(A\) into 2 triangular matrices together with a permutation matrix: $$ PA = LU \tag{18}$$ As discussed above, the cost of LU factorization is approximately $$ \frac {2}{3}n^3$$ Another very desirable trait of triangular matrices is that their determinant is the product of the diagonal elements: \(det(L) = 1, det(U) = \prod_i^n u_{i,i} \tag{19} \) \(det(PA) = det(L)det(U) \tag{20}\) Thus we can compute the determinant of \(A\) easily (up to the sign of the permutation) once we have its \(LU\) factorization. To solve \(Ax = b\): factorize \(A = LU\), thus \(LUx = b\) solve \(d\) in \(Ld = b\), using forward substitution solve \(x\) in \(Ux = d\), using back substitution 4. Matrix inverse \(AA^{-1} = I \in R^{n \times n} \tag{21}\) \(I\) is an identity matrix with 1 on the diagonal and 0 everywhere else. We may write \(A^{-1}\) in terms of its columns. In the following example \(n = 2\): \(A^{-1} = \begin{bmatrix} a_1^{-1} & a_2^{-1} \end{bmatrix} \tag{22.1}\) \(a_1^{-1} =\begin {bmatrix} x_{1,1}^{-1} \\ x_{1,2}^{-1} \end{bmatrix}, a_2^{-1} = \begin{bmatrix} x_{2,1}^{-1} \\ x_{2,2}^{-1} \end{bmatrix} \tag{22.2}\) We also write \(I\) as \(n\) column vectors: \(I = \begin{bmatrix} e_1 & e_2 \end{bmatrix} \tag{23.1}\) \(e_1 =\begin {bmatrix} 1 \\ 0 \end{bmatrix}, e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \tag{23.2}\) We then just need to solve \(n\) linear systems separately to get each column \(a_i^{-1} \) of \(A^{-1}\): \(Aa_i^{-1} = e_i, i = 1,2,\dots,n \tag{24}\) 5. Cholesky factorization If \(A\) is a symmetric positive definite matrix, i.e. \(A = A^T \tag{25}\) and \(x^TAx > 0\) for all \(x \neq 0\), we can improve the factorization and reduce the cost by half, to \(\frac {1}{3}n^3\), because \(A = LL^T \tag{26}\) Note that \(L\) here, although still a lower triangular matrix, does not have its diagonal elements equal to 1 as in equation 19. Thus the \(L\) in Cholesky factorization is not the same as the \(L\) in LU factorization.
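The factor-once-then-substitute recipes can be exercised with SciPy's built-in routines; a small sketch (illustrative matrices of my choosing):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, cho_factor, cho_solve

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])          # symmetric positive definite
b = np.array([1.0, 2.0])

# LU route: factor PA = LU once, then forward + back substitution
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)

# Cholesky route: factor A = L L^T, then the same two substitutions
c, low = cho_factor(A)
x_chol = cho_solve((c, low), b)

assert np.allclose(x_lu, x_chol)
assert np.allclose(A @ x_lu, b)
```

Factoring once and reusing the factors is exactly what makes these methods attractive when solving \(Ax = b\) for many right-hand sides, as in the matrix-inverse computation of equation 24.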
To solve \(Ax = b\): factorize \(A = LL^T\), thus \(LL^Tx = b\) solve \(d\) in \(Ld = b\), using forward substitution solve \(x\) in \(L^Tx = d\), using back substitution 6. QR factorization Both LU and Cholesky factorization only apply to square matrices. For rectangular matrices \(A \in R^{m \times n}\), we can use QR factorization: \(A = QR \tag{27}\) \(Q\) is an \(m \times m \) square orthogonal matrix, and its columns are orthogonal unit vectors: \(QQ^T = Q^TQ = I, Q^T = Q^{-1} \tag{28}\) \(R\) is an \(m \times n\) upper triangular matrix. Again we can use a permutation matrix \(P\) for column pivoting. Like LU and Cholesky factorization, QR factorization can also be used to solve linear systems. If \(A\) is square, then since \(det(Q) = \pm 1\): \(det(A) = det(Q) det(R) = \pm\prod_i^n r_{i,i} \tag{29}\) To solve \(Ax = b\): factorize \(A = QR\), thus \(QRx = b\) solve \(d\) in \(Qd = b\), using \(d = Q^Tb\) since \(Q^{-1} = Q^T\) solve \(x\) in \(Rx = d\), using back substitution Take home message To solve linear systems more efficiently, we often factorize a matrix into several special matrices, such as triangular matrices and orthogonal matrices. Reference [1] https://en.wikipedia.org/wiki/Rank_(linear_algebra) [2] https://en.wikipedia.org/wiki/Determinant [3] http://math.ecnu.edu.cn/~jypan/Teaching/MatrixComp/slides_ch03_ls.pdf [4] http://homepage.divms.uiowa.edu/~atkinson/m171.dir/sec_8-1.pdf [5] https://www.youtube.com/watch?v=piCVWXWjjsg
I think the main issue is that you are jumping ahead of yourself. You probably remember, or read somewhere, that the Fourier transform of a $rect$ function is a $sinc$ function. This is true; however, nowhere in this section does he mention the Fourier transform! In fact, what he is doing is not a Fourier transform. What he does in this section is represent a periodic function as a Fourier series: (1) $f(\theta)=\frac{a_0}{2}+\sum_{k=1}^\infty{(a_k \cos k\theta + b_k \sin k\theta)}$ The key here is that this function doesn't contain every frequency. It contains frequency 0 (the DC component) and $\frac{k}{2\pi}$ where $k=1,2,3,\dots$ This is actually very important. This function is always periodic with a period of $2\pi$. The coefficients $a_k$ and $b_k$ are almost sampled values of the Fourier transform (but not quite; why? homework exercise! :p). You have countably many of these coefficients. Later in the book, you will learn that when you sample data in one space (frequency or time), it necessarily makes the counterpart in the other space (time or frequency) periodic. In the first case, he does the Lanczos smoothing derivation, where he averages the function by running a rectangular window through it (convolving with $rect$). What he shows is, not surprisingly, that the coefficients get multiplied by this term: (2) $\frac{\sin (k\pi/N)}{k\pi/N}$ which should look very familiar to you, of course, because it is the $sinc$ function. However, what you are missing is that $k$ is discrete! It is actually a sampled version of the $sinc$ function. Effectively, he convolves a function with a $rect$ and shows that the coefficients of the resulting Fourier series (read loosely as: sampled Fourier transform) form a sampled $sinc$ function. No surprise there. The convolution theorem says convolution in time turns into multiplication in frequency. The Fourier transform of $rect$, which you know, is $sinc$, so convolution by $rect$ is multiplication by $sinc$ in frequency space.
In the next section, he does something different. He takes the Fourier series (read: sampled Fourier transform) and removes all the higher-frequency coefficients. In effect, he takes the Fourier transform, multiplies by a $rect$, and then samples it. For simplicity, he sets all the Fourier coefficients that did not get discarded to $1$. What he's left with is this: (3) $h(\theta)=\frac{\sin((N+1/2)\theta)}{\sin(\theta/2)}$ And you ask, why isn't this a $sinc$ function? Can you answer it now? The quick answer is that what is applied in the frequency domain is not just a truncation; it's a truncation and a sampling operator. What you know is that when you truncate (i.e. multiply by $rect$) in the frequency domain, the time domain gets convolved with a $sinc$ (by the convolution theorem and the Fourier transform of $rect$), but this is without sampling. As for why the formula looks the way it does, there are two ways to look at it. The first, which he shows, is that you can just sum the Fourier series from $-N$ to $N$, and that's what you get. The second, which is more profound and may come up later, is that (*) when you sample a function in one space (say frequency), the corresponding function in the other space (say time) becomes the sum of shifted versions of itself. In fact, it probably won't come up exactly like that. The typical scenario is that sampling the time domain creates periodic replication in the frequency domain (btw: this is the reason for what people call aliasing). However, you can apply the duality property of the Fourier transform to get (*). Does equation (3) make sense now? It is periodic. The closer you get to 0, the more it looks like a $sinc$ function. So, an exercise for you is to derive equation (3) by sampling and truncating in frequency space and applying the inverse transform.
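Equation (3) is easy to check numerically: summing the retained Fourier modes (all coefficients set to 1) reproduces the closed form. A small sketch:

```python
import numpy as np

N = 8
theta = np.linspace(0.1, 6.0, 500)   # stay away from theta = 0 (removable singularity)

# Partial sum with all retained coefficients equal to 1:
# sum_{k=-N}^{N} e^{ik theta} = 1 + 2 sum_{k=1}^{N} cos(k theta)
h_sum = 1.0 + 2.0 * sum(np.cos(k * theta) for k in range(1, N + 1))

# Closed form, equation (3):
h_closed = np.sin((N + 0.5) * theta) / np.sin(theta / 2.0)

assert np.allclose(h_sum, h_closed)   # the two expressions agree
```

Plotting `h_closed` also makes the two remarks above visible: it is $2\pi$-periodic, and near $\theta = 0$ it closely tracks a suitably scaled $sinc$.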
A subordinate Brownian motion is a Lévy process which can be obtained by replacing the time of a Brownian motion by an independent increasing Lévy process. The infinitesimal generator of a subordinate Brownian motion is \(-\phi(-\Delta)\), where \(\phi\) is the Laplace exponent of the subordinator. When \(\phi(\lambda)=\lambda^{\alpha/2}\) for some \(\alpha\in (0, 2)\), we get the fractional Laplacian \(-(-\Delta)^{\alpha/2}\) as a special case. In this talk, I will give a survey of some recent results on sharp two-sided estimates on the Dirichlet heat kernels and Green functions of \(-\phi(-\Delta)\) in smooth domains.
Implementing such a method is not a trivial task. Just FYI, there are multiple programs purpose-built to perform Hartree-Fock calculations. In terms of a good introduction to the theory of Hartree-Fock calculations, I found this pdf extremely helpful. First of all, you have given the expression for the pseudo-exact (pseudo, because we have made the Born-Oppenheimer approximation and assumed a Slater determinant form to get here) 1-electron molecular orbitals $|\chi_a\rangle$, which are not known. Formally, that's not a problem. We can just use some complete set of states $\{|\phi_\mu\rangle\}_{\mu=1...\infty}$ as a basis, and expand the exact (unknown) states by $|\chi_a\rangle = \sum_{\mu=1}^\infty C_{\mu a}|\phi_\mu\rangle$ in unknown coefficients $C_{\mu a}$. Naturally, the requirement of infinitely many basis orbitals is not possible on a computer, so we must instead use a basis large enough that the truncation doesn't matter to within our accuracy guidelines. Following the analysis in the pdf, there are ultimately 4 matrices that we need. $F_{\mu \nu}(C_{\alpha k}) = \langle \phi_\mu | \hat{f} | \phi_\nu \rangle$, the Fock matrix (the parentheses are included here to represent dependence on $C$!) $S_{\mu \nu}= \langle \phi_\mu | \phi_\nu \rangle $, the overlap integral of all of your basis functions (this does not change throughout the iteration) $C_{\alpha k}$, the coefficients that specify the expansion of the $k$th eigenfunction $\mathbf{\varepsilon}$, a diagonal matrix with entries corresponding to the 1-electron eigenenergies of $|\chi_a\rangle$ under the Fock operator $\hat{f}$ The choice of a basis "reduces" the earlier equation to $$ \mathbf{F}(\mathbf{C}) \mathbf{C} = \mathbf{S} \mathbf{C} \mathbf{\varepsilon} $$ This type of equation is known as a generalised eigenvalue equation, and can be solved by most numerical linear algebra packages (e.g. Eigen, SciPy). Or rather, it could be, if $F$ did not depend on $C$.
There are many, many algorithms for dealing with this, but a common approach is the "iteration" many authors refer to. In this setup, we start off with a guess for the form of $C$, get a better value of $C$, and re-substitute and solve again: $$ \mathbf{F}(\mathbf{C}^{(n)}) \mathbf{C}^{(n+1)} = \mathbf{S} \mathbf{C}^{(n+1)} \mathbf{\varepsilon} $$ where the superscripts on the $C$ matrices refer to the iteration number. Why would the method necessarily converge? In general, it doesn't. We essentially hope that our starting guess was sufficiently close to the true solution that the algorithm can find the minimum. This is why it is of critical importance that the basis set is carefully chosen to be 'fairly close' to the true wavefunction from the get-go. A natural first choice is atomic hydrogenic wavefunctions, but these are computationally expensive to integrate, and better results can be achieved with a larger collection of simpler functions. Such basis sets are tabulated for computational chemists at sites such as basis set exchange. What sort of numerical method would I use to solve the Hartree-Fock equation? Ultimately, it comes down to following this procedure: Choose a set of basis orbitals. Compute $\mathbf{S}$. Establish a guess for $C^{(0)}$. Solve $ \mathbf{F}(\mathbf{C}^{(n)}) \mathbf{C}^{(n+1)} = \mathbf{S} \mathbf{C}^{(n+1)} \mathbf{\varepsilon} $. Repeat step 4 until convergence. A note on that convergence: generally, one tests whether the new energies differ from the preceding energies by less than some cutoff, and so establishes that the algorithm isn't "moving" very much in state space. As for why it works at all: conceivably, for sufficiently restricted conditions, step 4 above may define a contraction mapping $\mathcal{F} : M_{N\times M}(\mathbb{C}) \to M_{N\times M}(\mathbb{C})$ on the coefficient matrices $\mathbf{C} \in M_{N\times M}(\mathbb{C})$, and so converge by the Banach fixed point theorem. For a general basis, though, these requirements are not satisfied.
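The iterate-until-the-energies-stop-moving loop can be sketched with SciPy's generalized symmetric eigensolver. The matrices and the density-dependent "Fock" build below are toy placeholders of my own invention; a real calculation would assemble $F$ from one- and two-electron integrals over the chosen basis:

```python
import numpy as np
from scipy.linalg import eigh

def fock(C, H, n_occ=1, g=0.2):
    """Toy Fock build: core term plus a density-dependent diagonal (illustrative only)."""
    D = C[:, :n_occ] @ C[:, :n_occ].T        # density matrix from occupied orbitals
    return H + g * np.diag(np.diag(D))

H = np.array([[-1.0, 0.2], [0.2, -0.5]])     # made-up "core Hamiltonian"
S = np.array([[1.0, 0.1], [0.1, 1.0]])       # made-up overlap matrix

C = np.eye(2)
eps_old = np.full(2, np.inf)
for it in range(200):
    eps, C = eigh(fock(C, H), S)             # solves F C = S C eps (generalized eigenproblem)
    if np.max(np.abs(eps - eps_old)) < 1e-12:    # the "energies stopped moving" test
        break
    eps_old = eps

print(f"converged after {it} iterations; eigenvalues: {eps}")
```

On this mild toy problem the fixed-point iteration converges quickly; production codes add safeguards (damping, DIIS) precisely because the bare iteration need not converge.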
Total Internal Reflection (TIR) is a phenomenon in optics by which light experiences complete reflection at an interface between two media. Most optical fibers use TIR as the guiding principle. Figure 1 explains schematically how TIR takes place at an interface between two media. When an incident light ray (red) hits the interface, it is reflected (green) and/or refracted (blue). Figure 1: Snell's law and total internal reflection. The angle of refraction at the interface of two materials is given by Snell's law: \(n_1 \sin(\theta) = n_2 \sin(\theta^{\prime}), \) where \(n_1\) and \(n_2\) are the refractive indices of the two materials, \(\theta\) is the incident angle of light, and \(\theta^{\prime}\) is the angle of refraction [see Figure 1(a)]. If we assume that material 1 is water (\(n_1 = 1.3\)) and material 2 is air (\(n_2 = 1.0\)), then \(\theta^{\prime}\) is larger than \(\theta\) because \(n_1 > n_2\), and \(\theta^{\prime}\) reaches \(\pi/2\) (i.e. 90 deg) at a certain incident angle \(\theta = \theta_{\mathrm{c}}\) [Figure 1(b)]. This angle is called the critical angle, and is given by the following formula: \( \theta_{\mathrm{c}}=\sin^{-1}\left( \frac{n_2}{n_1} \right). \) At an incident angle \(\theta > \theta_{\mathrm{c}}\) [Figure 1(c)], the refracted ray can no longer exist, and light is totally reflected back into material 1 (water). This phenomenon can also be understood intuitively: when you dive into a swimming pool and try to see above the water surface, you cannot see outside the water at a shallow angle.
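A two-line check of the critical-angle formula, using the water-to-air indices quoted above:

```python
import math

def critical_angle_deg(n1, n2):
    """Critical angle for TIR when going from medium 1 into medium 2; requires n1 > n2."""
    if n1 <= n2:
        raise ValueError("total internal reflection requires n1 > n2")
    return math.degrees(math.asin(n2 / n1))

print(critical_angle_deg(1.3, 1.0))   # water -> air: about 50.3 degrees
```

Any incidence angle steeper than this (measured from the surface normal) is totally reflected, which is why the pool surface looks like a mirror at shallow viewing angles.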
When studying a gauge theory approach to problems in mechanics, I found the following integral: $$P\exp\left[\oint A \ dt\right]=1+\dfrac{1}{2}\oint_{\partial D}\sum_{\mu,\nu}F_{\mu\nu}\gamma^{\mu}(t) \dot{\gamma}^{\nu}(t)dt,$$ where $A$ is the gauge potential and $F$ is the field strength tensor (i.e. the pull-back of the curvature two-form by a certain choice of gauge map). This integral appeared, in the articles I've seen (like this one, on page 564), in the computation of the path-ordered exponential, but I couldn't understand where it comes from. It seems, in this formula, that we are integrating over a path, but $F$ is a $2$-form, so it should be integrated over a $2$-chain. In the article there is a derivation, but I really didn't understand what they did; it doesn't seem very rigorous. Also, when I studied principal fiber bundles and connections on those bundles, I didn't see this integral. I've also searched in some math books and didn't find it. So, where does this integral come from, what does it rigorously mean, and how does it relate to the path-ordered exponential?
Algebraic Geometry Seminar Fall 2016 The seminar meets on Fridays at 2:25 pm in Van Vleck B305. Contents Algebraic Geometry Mailing List Please join the AGS Mailing List to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link). Fall 2016 Schedule
September 16: Alexander Pavlov (Wisconsin), Betti Tables of MCM Modules over the Cones of Plane Cubics (host: local)
September 23: PhilSang Yoo (Northwestern), Classical Field Theories for Quantum Geometric Langlands (host: Dima)
October 7: Botong Wang (Wisconsin), Enumeration of points, lines, planes, etc. (host: local)
October 14: Luke Oeding (Auburn), Border ranks of monomials (host: Steven)
October 28: Adam Boocher (Utah), Bounds for Betti Numbers of Graded Algebras (host: Daniel)
November 4: Lukas Katthaen, Finding binomials in polynomial ideals (host: Daniel)
November 11: Daniel Litt (Columbia), Arithmetic restrictions on geometric monodromy (host: Jordan)
November 18: David Stapleton (Stony Brook), Hilbert schemes of points and their tautological bundles (host: Daniel)
December 2: Rohini Ramadas (Michigan), Dynamics on the moduli space of pointed rational curves (hosts: Daniel and Jordan)
December 9: Robert Walker (Michigan), Uniform Asymptotic Growth on Symbolic Powers of Ideals (host: Daniel)
Abstracts Alexander Pavlov Betti Tables of MCM Modules over the Cones of Plane Cubics Graded Betti numbers are classical invariants of finitely generated modules over graded rings describing the shape of a minimal free resolution. We show that for maximal Cohen-Macaulay (MCM) modules over the homogeneous coordinate ring of a smooth Calabi-Yau variety X, the computation of Betti numbers can be reduced to computing dimensions of certain Hom groups in the bounded derived category D(X). In the simplest case, that of a smooth elliptic curve embedded into the projective plane as a cubic, we use our formula to get explicit answers for the Betti numbers.
In this case we show that there are only four possible shapes of the Betti tables up to a shift in internal degree, and two possible shapes up to a shift in internal degree and taking syzygies. PhilSang Yoo Classical Field Theories for Quantum Geometric Langlands One can study a class of classical field theories in a purely algebraic manner, thanks to the recent development of derived symplectic geometry. After reviewing the basics of derived symplectic geometry, I will discuss some interesting examples of classical field theories, including the B-model, Chern-Simons theory, and Kapustin-Witten theory. Time permitting, I will make a proposal to understand quantum geometric Langlands and other related Langlands dualities in a unified way from the perspective of field theory. Botong Wang Enumeration of points, lines, planes, etc. It is a theorem of de Bruijn and Erdős that n points in the plane determine at least n lines, unless all the points lie on a line. This is one of the earliest results in enumerative combinatorial geometry. We will present a higher-dimensional generalization of this theorem. Let E be a generating subset of a d-dimensional vector space. Let [math]W_k[/math] be the number of k-dimensional subspaces that are generated by a subset of E. We show that [math]W_k\leq W_{d-k}[/math] when [math]k\leq d/2[/math]. This confirms a "top-heavy" conjecture of Dowling and Wilson from 1974 for all matroids realizable over some field. The main ingredients of the proof are the hard Lefschetz theorem and the decomposition theorem. I will also talk about a proof of Welsh and Mason's log-concavity conjecture on the number of k-element independent sets. These are joint works with June Huh. Luke Oeding Border ranks of monomials What is the minimal number of terms needed to write a monomial as a sum of powers? What if you allow limits?
Here are some minimal examples:

$4xy = (x+y)^2 - (x-y)^2$

$24xyz = (x+y+z)^3 + (x-y-z)^3 + (-x-y+z)^3 + (-x+y-z)^3$

$192xyzw = (x+y+z+w)^4 - (-x+y+z+w)^4 - (x-y+z+w)^4 - (x+y-z+w)^4 - (x+y+z-w)^4 + (-x-y+z+w)^4 + (-x+y-z+w)^4 + (-x+y+z-w)^4$

The monomial $x^2y$ has a minimal expression as a sum of 3 cubes: $6x^2y = (x+y)^3 + (-x+y)^3 - 2y^3$. But you can use only 2 cubes if you allow a limit: $3x^2y = \lim_{\epsilon \to 0} \frac{x^3 - (x-\epsilon y)^3}{\epsilon}$. Can you do something similar with $xyzw$? Previously it wasn't known whether the minimal number of powers in a limiting expression for $xyzw$ was 7 or 8. I will answer this and the analogous question for all monomials.

The polynomial Waring problem is to write a polynomial as a linear combination of powers of linear forms in the minimal possible way. The minimal number of summands is called the rank of the polynomial. The solution in the case of monomials was given in 2012 by Carlini-Catalisano-Geramita, and independently shortly thereafter by Buczynska-Buczynski-Teitler. In this talk I will address the problem of finding the border rank of each monomial. Upper bounds on border rank have been known since Landsberg-Teitler (2010) and earlier. We use symmetry-enhanced linear algebra to provide polynomial certificates of lower bounds (which agree with the upper bounds). This work builds on the idea of Young flattenings, which were introduced by Landsberg and Ottaviani, and give determinantal equations for secant varieties and provide lower bounds for border ranks of tensors. We find special monomial-optimal Young flattenings that provide the best possible lower bound for all monomials up to degree 6. For degree 7 and higher these flattenings no longer suffice for all monomials.
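The identities above are easy to machine-check. The following stdlib-only Python sketch verifies them in exact rational arithmetic at random points, and checks numerically that the two-cube expression $(x^3 - (x-\epsilon y)^3)/\epsilon$ tends to $3x^2y$ (rescaling a monomial by a nonzero constant does not change the number of powers needed, so the scaling is immaterial):

```python
from fractions import Fraction
from random import randint

def rnd():
    """A random rational with numerator in [-9, 9] and denominator in [1, 9]."""
    return Fraction(randint(-9, 9), randint(1, 9))

for _ in range(100):
    x, y, z, w = rnd(), rnd(), rnd(), rnd()
    # 4xy as a difference of two squares
    assert 4*x*y == (x + y)**2 - (x - y)**2
    # 24xyz as a sum of four cubes
    assert 24*x*y*z == ((x + y + z)**3 + (x - y - z)**3
                        + (-x - y + z)**3 + (-x + y - z)**3)
    # 192xyzw as a signed sum of eight fourth powers
    assert 192*x*y*z*w == ((x+y+z+w)**4 - (-x+y+z+w)**4 - (x-y+z+w)**4
                           - (x+y-z+w)**4 - (x+y+z-w)**4 + (-x-y+z+w)**4
                           + (-x+y-z+w)**4 + (-x+y+z-w)**4)
    # 6x^2y as a sum of three cubes
    assert 6*x**2*y == (x + y)**3 + (-x + y)**3 - 2*y**3

# Two cubes suffice in the limit: (x^3 - (x - e*y)^3)/e -> 3*x^2*y as e -> 0.
x, y, e = Fraction(2), Fraction(3), Fraction(1, 10**6)
approx = (x**3 - (x - e*y)**3) / e
assert abs(approx - 3*x**2*y) < Fraction(1, 1000)
print("all identities verified")
```

Exact `Fraction` arithmetic makes the polynomial checks genuine identities at the sampled points rather than floating-point coincidences.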
To overcome this problem, we introduce partial Young flattenings and use them to give a lower bound on the border rank of monomials which agrees with Landsberg and Teitler's upper bound. I will also show how to implement Young flattenings and partial Young flattenings in Macaulay2 using Steven Sam's PieriMaps package.

Adam Boocher: Bounds for Betti Numbers of Graded Algebras

Let $R$ be a standard graded algebra over a field. The graded Betti numbers of $R$ provide some measure of the complexity of the defining equations for $R$ and their syzygies. Recent breakthroughs (e.g. Boij-Soederberg theory, the structure of asymptotic syzygies, Stillman's Conjecture) have provided new insights about these numbers, and we have made good progress toward understanding many homological properties of $R$. However, many basic questions remain. In this talk I'll discuss some conjectured upper and lower bounds for the total Betti numbers for different classes of rings. Surprisingly, little is known in even the simplest cases.

Lukas Katthaen (Frankfurt): Finding binomials in polynomial ideals

In this talk, I will present an algorithm which, for a given ideal $J$ in the polynomial ring, decides whether $J$ contains a binomial, i.e., a polynomial having only two terms. For this, we use ideas from tropical geometry to reduce the problem to the Artinian case, and then use an algorithm from number theory. This is joint work with Anders Jensen and Thomas Kahle.

David Stapleton: Hilbert schemes of points and their tautological bundles

Fogarty showed in the 1970s that the Hilbert scheme of $n$ points on a smooth surface is smooth. Interest in these Hilbert schemes has grown since it has been shown that they arise in hyperkähler geometry, geometric representation theory, and algebraic combinatorics. In this talk we will explore the geometry of certain tautological bundles on the Hilbert scheme of points. In particular we will show that these tautological bundles are (almost always) stable vector bundles.
We will also show that every sufficiently positive vector bundle on a curve $C$ is the pullback of a tautological bundle from an embedding of $C$ into the Hilbert scheme of the projective plane.

Rohini Ramadas: Dynamics on the moduli space of pointed rational curves

The moduli space $M_{0,n}$ parametrizes all ways of labeling $n$ distinct points on $\mathbb{P}^1$, up to projective equivalence. Let $H$ be a Hurwitz space parametrizing holomorphic maps, with prescribed branching, from one $n$-marked $\mathbb{P}^1$ to another. $H$ admits two different maps to $M_{0,n}$: a "target curve" map $\pi_t$ and a "source curve" map $\pi_s$. Since $\pi_t$ is a covering map, $\pi_s \circ \pi_t^{-1}$ is a multi-valued map (a Hurwitz correspondence) from $M_{0,n}$ to itself. Hurwitz correspondences arise in topology and Teichmüller theory through Thurston's topological characterization of rational functions on $\mathbb{P}^1$. I will discuss their dynamics via numerical invariants called dynamical degrees.

Robert Walker: Uniform Asymptotic Growth on Symbolic Powers of Ideals

Symbolic powers $I^{(N)}$ in Noetherian commutative rings are mysterious objects from the perspective of an algebraist, while regular powers of ideals $I^s$ are essentially intuitive. However, many geometers tend to like symbolic powers in the case of a radical ideal in an affine polynomial ring over an algebraically closed field of characteristic zero: the $N$-th symbolic power consists of the polynomial functions "vanishing to order at least $N$" on the affine zero locus of that ideal. In this polynomial setting, and much more generally, a challenging problem is determining when, given a family of ideals (e.g., all prime ideals), you have a containment of the type $I^{(N)} \subseteq I^s$ for all ideals in the family simultaneously.
Following breakthrough results of Ein-Lazarsfeld-Smith (2001) and Hochster-Huneke (2002) for, e.g., coordinate rings of smooth affine varieties, there is a slowly growing body of "uniform linear equivalence" criteria: given a suitable family of ideals, the containments $I^{(N)} \subseteq I^s$ hold as long as $N$ is bounded below by a linear function in $s$ whose slope is a positive integer that only depends on the structure of the variety or the ring you fancy. My thesis (arxiv.org/1510.02993, arxiv.org/1608.02320) contributes new entries to this body of criteria, using Weil divisor theory and toric algebraic geometry. After giving a "symbolic powers for geometers" survey, I'll shift to stating the key results of my dissertation in a user-ready form, and give a "comical" example or two of how to use them. At the risk of sounding like Paul Rudd from "Ant-Man," I hope this talk will be awesome.
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.

Hello everyone! I proposed a new Stack Exchange Q&A site here: http://area51.stackexchange.com/proposa ... gevQBdmIA2 Remember to support this proposal by following it and asking more example questions on it! Thank you, Testitem Qlstudio (I'm glad that this is in The Sandbox and not in other forums.)

Supported the proposal. Sadly, it still needs 57 more followers, which is about the total number of active members on these forums. So we need to get everyone to create an account and press "follow". From my experience, it will be very difficult, if not impossible. Moreover, I think some people will refuse to follow on the basis that "Thread for basic questions" already does the same thing. This is quite unfortunate, since I would like to see the proposal work out. On the other hand, there seems to be no time limit for proposals on this site.
Maybe it will collect the necessary number of followers over the years. There are 10 types of people in the world: those who understand binary and those who don't.

Tell me if I should promote this topic to the Patterns forum lol (isn't this advertising? Well at least I found a good place to post this since my past CA proposals kept getting deleted from inactivity). I need one follower a month or the proposal would be deleted.

Alexey_Nigin wrote: Supported the proposal. Sadly, it still needs 57 more followers [...]

Just went and looked this morning. Another ten followers have showed up, so there are 47 to go.

testitemqlstudop wrote: Tell me if i should promote this topic to the Patterns forum...

I'll try linking to this thread from a moderately relevant question-gathering thread over on the Website Discussion forum. The Patterns forum seems like the wrong place to try any promotions. With any luck the link will add a little question-asking energy to the LifeWiki Did-You-Know thread as well as this one. A lot of the questions that have showed up so far on the Stack Exchange board aren't exactly what you'd call "basic questions", really. On the other hand, a few have been showing up that do have known answers, so they would be appropriate for Did-You-Knows...
most of them are there already, though.

Alexey_Nigin wrote: So we need to get everyone to create an account and press "follow". From my experience, it will be very difficult, if not impossible. Moreover, I think some people will refuse to follow on the basis that "Thread for basic questions" already does the same thing.

This proposal's activity seems to be dying out. I expended my last example question to keep it meeting the minimum activity requirements, so I can't do that in the future now. If some more people here could contribute, maybe the proposal could stay alive for a while longer. (Also see my Discussion post on the proposal page.)

$$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$ http://conwaylife.com/wiki/A_for_all Aidan F. Pierce

The suggestion to move the proposal to the Science category was implemented back in March, it looks like. It's a little confusing: there's a link there to a proposal that was deleted due to inactivity. But the original proposal is still alive and well... maybe just barely, but new followers have continued to trickle in very slowly.

A for awesome wrote: This proposal's activity seems to be dying out. [...]

Like A for awesome, I've now asked my fifth question, so I can't help keep the proposal alive after July. And yes, once again I failed to escape from my obsessive focus on B3/S23.
So -- please, could people with more wide-ranging interests sign up, if they haven't already, and ask interesting questions about isotropic or anisotropic rules? I know cellular automata enthusiasts are kind of a rare breed, but I'm sure it's possible to make it from 30 followers to 60 on that proposal, somehow...! Actually it appears that questions about cellular automata are being asked most often not in the Science category, but in the Computer Science category and in Programming Puzzles and Code Golf. Maybe someone who can figure out how to move the proposal should try one more move...?

The proposal was closed a few days ago after a year in the Definition phase, and I took it upon myself to restart it here (or here if you want to give me free reputation), since @testitemqlstudio seems to be inactive recently. If people would be willing to repost some or all of their example questions from last time, hopefully we'll have better luck this time around.

$$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$ http://conwaylife.com/wiki/A_for_all Aidan F. Pierce

Doesn't look good so far, I'm afraid -- I'm getting this at both links:

A for awesome wrote: The proposal was closed a few days ago after a year in the Definition phase, and I took it upon myself to restart it... [...]

This proposal has been deleted. Inactive proposals that do not receive any activity for one month are subject to deletion.
Occasionally, proposals may be removed from Area 51 for reasons of moderation: spam, off topic, abuse, etc. For more information, see the FAQ.
How to use algebra in operations that involve asymptotic notation? Like for example: $C_1(n)= O(n)$, $C_2(n) = O(n^2)$. Is $C_2(n)/C_1(n) = O(n)$?

A function $f(n)$ is said to be $O(g(n))$ if there exists a constant $c$ such that $$ \lim_{n\rightarrow \infty}\dfrac{f(n)}{g(n)} \leq c $$ Knowing this, a few things immediately become apparent:

Addition and subtraction preserve the largest $O$ in the result, thanks to the properties of limits. For instance, $f(n) \pm h(n)$ is $O(g(n))$ if both $f$ and $h$ are individually $O(g(n))$. This is why you can say that if two operations are $O(n)$, then the resulting sum/subtraction is also $O(n)$. If either $f$ or $h$ were $O(n^{2})$, then the result would be $O(n^{2})$. This is in agreement with intuition.

A function that is $O(n)$ is also $O(n^{2})$, $O(n^{3})$, etc., as you can prove from this definition. For practical purposes that is immaterial, as you usually only care about the smallest upper bound.

Multiplication can be simplified in the manner you require. For instance, say I have $f, h$ that are both $O(n)$.
This means $$ \lim_{n \rightarrow \infty}\dfrac{f(n)}{n} < c\;\;\mathrm{and} \; \; \lim_{n \rightarrow \infty}\dfrac{h(n)}{n} < c $$ or, equivalently, $$ \lim_{n \rightarrow \infty} f(n) < \lim_{n\rightarrow \infty} cn\;\;\mathrm{and} \; \; \lim_{n \rightarrow \infty}h(n) < \lim_{n\rightarrow \infty} cn $$ I can then argue that $f \times h$ is $O(n^{2})$, since, when we go to calculate it, this would require: $$ \lim_{n\rightarrow \infty}\dfrac{f \times h}{n^{2}} < c $$ which we can prove by substituting the inequalities above: $$ \lim_{n\rightarrow \infty}\dfrac{f \times h}{n^{2}} < \lim_{n\rightarrow \infty} \dfrac{cn \times cn}{n^{2}} = c^{2},\;\;\mathrm{which\;is\;a\;constant!} $$ In general, the product of two polynomial operations of orders $O(n^{k})$ and $O(n^{p})$ is $O(n^{k+p})$.

Division is a bit trickier. Let's take your example: say we have $f, h$ both $O(n)$. We now try to calculate the limit, using the properties formally stated above, and find: $$ \lim_{n \rightarrow \infty}\dfrac{f/h}{n} < \lim_{n \rightarrow \infty} \dfrac{cn/(cn)}{n} = \lim_{n \rightarrow \infty} \dfrac{1}{n} = 0 < c $$ so it is $O(n)$. But wait a minute! It is also $O(1)$: $$ \lim_{n \rightarrow \infty}\dfrac{f/h}{1} < \lim_{n \rightarrow \infty} \dfrac{cn/(cn)}{1} = \lim_{n \rightarrow \infty} 1 = 1 \leq c $$ so the same operation is somehow both $O(1)$ and $O(n)$ at the same time. No, this is not some quantum spookiness at work. Remember that a function that is $O(n^{c})$ is also $O(n^{k})$ if $k > c$, as we stated above. Correctly speaking, it is not wrong to say either $O(1)$ or $O(n)$ or any other higher power (excluding deities). In such cases, you may want to use the much stronger $\Theta$ definition: a function $f$ is $\Theta(g(n))$ if $f(n)$ is $O(g(n))$ and $g(n)$ is $O(f(n))$, i.e., they grow at the same pace.

You cannot divide asymptotic notations in this way, since big O is only an upper bound.
For example, if you take $C_1(n) = 1$ and $C_2(n) = n^2$ then $C_2(n)/C_1(n)$ is not $O(n)$. If you're confused, think instead of the following question: Big O is very similar to $\leq$, the main difference being that we ignore multiplicative constants.
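The counterexample can be illustrated numerically in a few lines of Python (an illustration, not a proof; `C1` and `C2` mirror the functions named in the question):

```python
# C1(n) = 1 is O(n), and C2(n) = n^2 is O(n^2), yet the quotient
# C2(n)/C1(n) = n^2 is NOT O(n): if it were, (C2(n)/C1(n))/n would stay
# below some constant c, but here the witness equals n and grows forever.

def C1(n):
    return 1

def C2(n):
    return n * n

for n in (10, 1000, 10**6):
    witness = (C2(n) / C1(n)) / n   # bounded iff C2/C1 were O(n)
    print(n, witness)               # witness grows with n, so no bound c exists
```

The same experiment with `C1(n) = n` instead shows the other direction: a quotient of big-O bounds can land anywhere between $O(1)$ and $O(n)$, which is exactly why the division rule fails.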
As has been discussed in many questions around here (e.g. here), relativity tells us only about local properties and behavior of a space-time. There are some exceptions when we make global assumptions: if we have a space of globally and strictly constant positive curvature, non-trivial topology is forced, because the space has to be the 3-sphere $\mathbb{S}^3$. But we can also add nontrivial topology without many constraints. The full richness can be explored e.g. through quotienting the "canonical" space-slices $\mathbb{E}^3,\mathbb{S}^3,\mathbb{H}^3$ (flat Euclidean, 3-sphere, 3-hyperbolic) by a discrete symmetry group $\Gamma$. I.e., for $\Gamma$ a group of discrete translations in all directions "cutting up" $\mathbb{E}^3$, we get a topological 3-torus $\mathbb{E}^3/\Gamma = \mathbb{T}^3$. The intuitive picture is that the space looks locally exactly like our good old flat Euclidean space $\mathbb{E}^3$, but after a certain distance (the translation), we get to the same place. Naturally, as this is the same place, we should find the same things at these places, up to their movement and evolution during the time we weren't there. As for cosmological observation, if we are to detect nontrivial topology with current methods, the space or the non-trivialities must be "sufficiently small". Imagine we are on a sphere and we are restricted by observation to see a very, very small patch of it: there will be no way we can conclude it is a sphere. If we however see beyond, say, one of the discrete translations of $\Gamma$, we should in principle be able to detect multiple images of the same object. The problem is, since light took longer to travel from the image farther away, we will see the more distant object to be "younger" than the closer one, and most probably under a different angle. For a decently large universe with other effects like redshift and obscuring, this is probably a deal-breaker. Nevertheless, the endurance of scientists is endless.
We may detect a repetition in the images when collecting large amounts of data for all visible objects and using certain correlation methods to evaluate them. Certain types of topological non-triviality would then be visible as peaks or "spikes" in the correlation indicators. The deepest image of the universe is the CMB, of which we have a very detailed dataset. The CMB can be viewed as a snapshot of a large sphere at luminous distance $\chi_{CMB}$. If this sphere intersects a topological non-triviality, we should see "circles" or certain pattern repetitions in the CMB. So far, however, the few tests of the data did not reveal any of this. If anything, an $\mathbb{R}^2\times \mathbb{S}^1$ (a "3-tube") topology is conjectured in association with the slightly preferred direction of the CMB. There are possible indirect tests suggested in the discussion by ACuriousMind: a non-trivial topology imposes different boundary conditions on fundamental fields and other possible objects. Note however that what the theory means by $\infty$ is just "very far"; e.g. in particle experiments, "very far" may be a distance of a few meters, not cosmological scales. Effects due to topologically different boundary conditions would most probably play an important role in the very early universe and might provide indirect tests of the cosmic topology. My main sources for this answer are this and this review article.
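The "multiple images" effect in the 3-torus case $\mathbb{E}^3/\Gamma$ can be sketched with a toy stdlib-Python model; the domain size `L` and the source position are made-up illustrative numbers, not cosmological data:

```python
import itertools

# Toy model: in a 3-torus universe E^3/Γ, an observer at the origin sees a
# copy ("image") of a single source at every lattice translate of its
# position.  L and `source` are hypothetical illustrative values.
L = 10.0                  # translation length of Γ along each direction
source = (1.0, 2.0, 3.0)  # comoving position of one object

def image_distances(source, L, n=1):
    """Sorted distances to the lattice images of `source`, with shifts
    i, j, k ranging over {-n, ..., n} along each axis."""
    dists = []
    for i, j, k in itertools.product(range(-n, n + 1), repeat=3):
        img = (source[0] + i * L, source[1] + j * L, source[2] + k * L)
        dists.append(sum(c * c for c in img) ** 0.5)
    return sorted(dists)

# The same object is seen at several distances -- hence at several ages and
# from several directions, which is exactly what a correlation search hunts for:
for d in image_distances(source, L)[:4]:
    print(round(d, 2))
```

Light travel time grows with each distance in the list, so the farther copies show the object at progressively earlier stages of its evolution, as described above.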
I've done a little bit of research and it seems Millikan was able to measure the ratio between the charge of the electron and its mass. But how can one measure one of the two constants to get the value of the other?

The mass-to-charge ratio $m/e$ of the electron was first measured by J.J. Thomson, the discoverer of the electron, using cathode rays in 1897. It should not be surprising that one may measure this ratio even without isolating "individual electrons", because the electric force acting on a charge may be written as $$ F = ma = Ee, \quad a = E\cdot \frac{e}{m} $$ So what was left was just to measure the mass or the charge separately. Millikan and Fletcher did the relevant oil drop experiment in 1909. The electric force $F=Ee$ acting on a single drop with charge $e$ (a single extra or deficit electron) may be calculated when it is set equal to the Stokes drag force from hydrodynamics, $6\pi r\eta v$, where $r$ is the radius of the drop and $v$ its terminal velocity. The viscosity $\eta$ is the most difficult thing to know, but otherwise all quantities are known, so $e$ may be calculated. If one knows the charge and the ratio, one may calculate the mass as $m = e/(e/m)$.
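The final relation $m = e/(e/m)$, together with the Stokes-drag force balance $qE = 6\pi r\eta v$, fits in a few lines of Python. The constants are rounded modern values used only for illustration, and `drop_charge` is a hypothetical helper sketching the force balance, not a reconstruction of the original analysis:

```python
import math

# Combining the two classical measurements (rounded modern constants):
e_over_m = 1.75882e11   # C/kg, Thomson-style charge-to-mass ratio e/m
e = 1.60218e-19         # C, Millikan-style elementary charge

m = e / e_over_m        # m = e / (e/m), as in the answer above
print(f"electron mass ~ {m:.3e} kg")

# Force balance for one oil drop at terminal velocity: q*E = 6*pi*r*eta*v
def drop_charge(E, r, eta, v):
    """Charge inferred from field E (V/m), droplet radius r (m),
    air viscosity eta (Pa*s), and terminal speed v (m/s)."""
    return 6 * math.pi * r * eta * v / E
```

The printed mass comes out near $9.11\times 10^{-31}$ kg, and in the real experiment the hardest input to `drop_charge` is indeed the viscosity, as the answer notes.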
NTS Abstracts, Fall 2019 (latest revision as of 11:35, 14 October 2019)

Sep 5: Will Sawin, "The sup-norm problem for automorphic forms over function fields and geometry"

The sup-norm problem is a purely analytic question about automorphic forms, which asks for bounds on their largest value (when viewed as a function on a modular curve or similar space). We describe a new approach to this problem in the function field setting, which we carry through to provide new bounds for forms in $GL_2$ stronger than what can be proved for the analogous question about classical modular forms. This approach proceeds by viewing the automorphic form as a geometric object, following Drinfeld. It should be possible to prove bounds in greater generality by this approach in the future.

Sep 12: Yingkun Li, "CM values of modular functions and factorization"

The theory of complex multiplication tells us that the values of the j-invariant at CM points are algebraic integers.
The norm of the difference of two such values has a nice and explicit factorization, which was the subject of the seminal work of Gross and Zagier on singular moduli in the 1980s. In this talk, we will recall this classical result, review some recent factorization formulas for other modular functions, and report some progress on a conjecture of Yui and Zagier. This is joint work with Tonghai Yang.

Sep 19: Soumya Sankar, "Proportion of ordinary curves in some families"

An abelian variety in characteristic $p$ is said to be ordinary if its $p$-torsion is as large as possible. In 2012, Cais, Ellenberg and Zureick-Brown made some conjectures about the distribution of the size of the $p$-torsion of an abelian variety. I will talk about some families which do not obey these heuristics, namely Jacobians of Artin-Schreier and superelliptic curves, and discuss the structure of the respective moduli spaces that make it so.

Oct 3: Patrick Allen, "On the modularity of elliptic curves over imaginary quadratic fields"

Wiles's proof of the modularity of semistable elliptic curves over the rationals uses the Langlands-Tunnell theorem as a starting point. In order to feed this into a modularity lifting theorem, one needs to use congruences between modular forms of weight one and modular forms of higher weight. Similar congruences are not known over imaginary quadratic fields, and Wiles's strategy runs into problems right from the start. We circumvent this congruence problem and show that mod 3 Galois representations over imaginary quadratic fields arise from automorphic forms that are the analog of higher weight modular forms. Our argument relies on a 2-adic automorphy lifting theorem over CM fields together with a "2-3 switch." As an application, we deduce that a positive proportion of elliptic curves over imaginary quadratic fields are modular. This is joint work in progress with Chandrashekhar Khare and Jack Thorne.
Oct 10: Borys Kadets, "Sectional monodromy groups of projective curves"

Let $K$ be a field. Fix a projective curve $X \subset \mathbb{P}^r_K$ of degree $d$. A general hyperplane $H \in \mathbb{P}^{r*}$ intersects $X$ in $d$ points; the monodromy of $X \cap H$ as $H$ varies is a subgroup $G_X$ of $S_d$ known as the sectional monodromy group of $X$. When $K=\mathbb{C}$ (or in fact for $\mathrm{char}\, K = 0$), the equality $G_X=S_d$ was shown by Castelnuovo; this large-monodromy fact is important in studying the degree-genus problem for projective curves. I will talk about the behaviour of sectional monodromy groups in positive characteristic. I will show that for a large class of curves the inclusion $G_X \supset A_d$ holds. On the other hand, for a seemingly simple family of curves $X_{m,n}$ given by the equation $x^n=y^mz^{n-m}$ in $\mathbb{P}^2$ I will completely characterize the possibilities for $G_{X_{m,n}}$; the list of possibilities includes the linear groups $\mathrm{AGL}_n(q)$ and $\mathrm{PGL}_2(q)$ as well as some sporadic simple groups.

Oct 17: Yousheng Shi, "Generalized special cycles and theta series"

We study generalized special cycles on Hermitian locally symmetric spaces $\Gamma \backslash D$ associated to the groups $G = U(p, q), \ \mathrm{Sp}(2n, \mathbb R)$ and $\mathrm{O}^*(2n)$. These cycles are algebraic and covered by symmetric spaces associated to subgroups of $G$ which are of the same type. Using the oscillator representation and the thesis of Greg Anderson, we show that Poincare duals of these generalized special cycles can be viewed as Fourier coefficients of a theta series. This gives new cases of theta lifts from the cohomology of Hermitian locally symmetric manifolds associated to $G$ to vector-valued automorphic forms associated to the groups $G' = \mathrm{U}(m, m), \ \mathrm{O}(m, m)$ or $\mathrm{Sp}(m, m)$, which are members of a dual pair with $G$ in the sense of Howe.
This partially generalizes the work of Kudla and Millson on the special cycles on Hermitian locally symmetric spaces associated to the unitary groups.
Mathematics > Optimization and Control

Title: On the Kähler Geometry of Certain Optimal Transport Problems (Submitted on 30 Nov 2018 (v1), last revised 22 Aug 2019 (this version, v4))

Abstract: Let $X$ and $Y$ be domains of $\mathbb{R}^n$ equipped with respective probability measures $\mu$ and $\nu$. We consider the problem of optimal transport from $\mu$ to $\nu$ with respect to a cost function $c: X \times Y \to \mathbb{R}$. To ensure that the solution to this problem is smooth, it is necessary to make several assumptions about the structure of the domains and the cost function. In particular, Ma, Trudinger, and Wang established regularity estimates when the domains are strongly relatively $c$-convex with respect to each other and the cost function has non-negative MTW tensor. For cost functions of the form $c(x,y)= \Psi(x-y)$ for some convex function $\Psi$, we find an associated Kähler manifold whose orthogonal anti-bisectional curvature is proportional to the MTW tensor. We also show that relative $c$-convexity geometrically corresponds to geodesic convexity with respect to a dual affine connection. Taken together, these results provide a geometric framework for optimal transport which is complementary to the pseudo-Riemannian theory of Kim and McCann. We provide several applications of this work. In particular, we find a complete Kähler surface with non-negative orthogonal bisectional curvature that is not a Hermitian symmetric space or biholomorphic to $\mathbb{C}^2$. We also address a question in mathematical finance raised by Pal and Wong on the regularity of pseudo-arbitrages, or investment strategies which outperform the market.
Submission historyFrom: Gabriel Khan [view email] [v1]Fri, 30 Nov 2018 19:45:27 GMT (15kb) [v2]Fri, 12 Apr 2019 16:41:28 GMT (25kb) [v3]Mon, 6 May 2019 18:44:10 GMT (27kb) [v4]Thu, 22 Aug 2019 17:22:57 GMT (28kb)
12:45 --- Lunch --- 14:00 BSM III - James Gainer (University of Florida (US)) (until 16:00) (G31) 14:00 Search for new physics in the low MET monophoton channel with the CMS Detector - Toyoko Orimoto (Northeastern University (US)) (G31) 14:15 Characterizing Invisible Electroweak Particles through Single-Photon Processes in $e^+e^-$ Colliders - Xing Wang (University of Pittsburgh) (G31) 14:30 A Model of Flavor and Flavor Changing - Prof. Stephen Barr (University of Delaware) (G31) 14:45 Searches for new physics in final states with an electron/muon pair at CMS - Andreas Guth (Rheinisch-Westfaelische Tech. Hoch. (DE)) (G31) 15:00 Hunting for Hierarchies in $\mathcal{PSL}_2(7)$ - Michael Perez (University of Flordia) (G31) 15:15 Anarchy In Unified Theories - shaikh saad (High Energy Theory) (G31) 15:30 Spurion Analysis of the Little Flavor Model - Dorota Grabowska (INT/UW) (G31) 14:00 Cosmology III -Prof. Nobuchika Okada (until 16:00) (G26) 14:00 CMB Signals of a Hidden Dark Matter Sector - Mr Sungwoo Hong (University of Maryland at College Park) (G26) 14:15 A Tale of Two Timescales: Mixing, Mass Generation, and Phase Transitions in the Early Universe - Jeff Kost (University of Arizona) (G26) 14:30 Baryogenesis via Mesino Oscillations - Akshay Ghalsasi (University of Washington) (G26) 14:45 Sterile Neutrino dark matter produced after the QCD Phase Transition - Louis Lello (University of Pittsburgh) (G26) 15:00 Tight Scrutiny of Electroweak Phase Transitions. 
- Harikrishnan Ramani (Yang Institute Of Theoretical Physics) (G26) 15:15 Electroweak phase transition and Higgs boson couplings in the scale-invariant two Higgs doublet model - Ms Kaori Fuyuto (Nagoya University) (G26) 15:30 Higgs Relaxation Leptogenesis - Lauren Pearce (University of Minnesota) (G26) 14:00 Dark Matter III - Arthur Kosowsky (until 16:00) () 14:00 Baryonic Dark Matter - Michael Duerr (MPIK Heidelberg) () 14:15 Neutrino Masses and Sterile Neutrino Dark Matter from the PeV Scale - Samuel Roland (University of Michigan) () 14:30 Conformal Inverse Seesaw and Warm Dark Matter - Juri Smirnov (Max Planck Institute for Nuclear Physics) () 14:45 A Dark Side of Neutrino Mass - Dr Wei-Chih Huang (University College London) () 15:00 Aspects of Lepton Flavored Dark Matter - Can Kilic (University of Texas at Austin) () 15:15 Lepton-Flavored Dark Matter - Jennifer Kile (U Florida) () 15:30 Dark Matter in Leptophillic SUSY - Mr David Feld (Rutgers) () 15:45 Neutrino Dark Matter in the Higgs triplet model - Ms Sahar Bahrami (Concordia University) () 14:00 Higgs III - Chien-Yi Chen (Brookhaven National Laboratory) (until 16:00) (G30) 14:00 Search for Higgs bosons beyond the standard model in b-jet final states at CMS - Gregor Mittag (né Hellwig) (Deutsches Elektronen-Synchrotron (DE)) (G30) 14:15 Constraints on the GM model by current LHC data - Prof. Cheng-Wei Chiang (National Central University) (G30) 14:30 Higgs production in the Georgi-Machacek model through vector boson fusion - Andrea Peterson (Carleton University) (G30) 14:45 The low energy theories of the Higgs sector - Daniel Egana (Rutgers University) (G30) 15:00 CMS collaboration: Search for exotic Higgs bosons with the CMS detector - Alexandre Jean N Mertens (Universite Catholique de Louvain (UCL) (BE)) (G30) 15:15 The Hunt for the Rest of the Higgs Bosons - HAO ZHANG (University of California, Santa Barbara) (G30) 15:30 Only one 125 GeV Higgs, is that all? 
- YUN JIANG (UC Davis) (G30) 15:45 Heavy Higgs Bosons at 14TeV and 100TeV - Ms Ying-Ying Li (Hong Kong University of Science and Technology) (G30) 14:00 Neutrinos - Alexander Stuart (until 16:00) (G27) 14:00 Dynamical Pion Collapse and the Coherence of Neutrino Beams - Benjamin Jones (H. H. Wills Physics Laboratory-University of Bristol-Unknown) Benjamin Jones (MIT) (G27) 14:15 The latest result/analysis of Double Chooz Experiment - Mr Guang Yang (Argonne/IIT) (G27) 14:30 Standard and Non-standard Neutrino Oscillations at Daya Bay - Dr David Vanegas Forero (Center for Neutrino Physics, Virginia Tech) (G27) 14:45 Reconciling eV Sterile Neutrinos with Cosmological Bounds - Dr Yong Tang (KIAS) (G27) 15:00 Resonant Neutrino Self-Interaction in IceCube - Ayuki Kamada (University of California, Riverside) (G27) 15:15 On neutrino and charged lepton masses and mixings: A view from the electroweak-scale right-handed neutrino model - Trinh Le (University of Virginia) (G27) 15:30 The improved bounds on the heavy neutrino productions at the 8 TeV LHC - Mr Arindam Das (University of Alabama) (G27) 15:45 Heavy Neutrino Searches at Future Colliders - Bhupal Dev (University of Manchester) (G27) 14:00 SUSY III - Sunghoon Jung (until 16:00) (157) 14:00 Mini-Review: Physics with Electroweakinos - Stefania Gori (157) 14:30 Mono-Boson Search Strategies for Mass Degenerate Particles - Linda Carpenter (Ohio State University) (157) 14:45 Probing Compressed SUSY spectra at the LHC - Zhenyu Han (University of Oregon) (157) 15:00 New search strategies for well tempered neutralino dark matter at the LHC and beyond - Bryan Ostdiek (University of Notre Dame) (157) 15:15 Using soft leptons to hunt quasi-degenerate higgsinos - Azar Mustafayev (University of Hawaii) (157) 15:30 Long-Lived Superparticles with Hadronic Decays at the LHC - Zhen Liu (U of Pittsburgh) (157) 15:45 Search for stealth supersymmetry in events with jets, either photons or leptons, and low missing transverse momentum in pp
collisions at 8 TeV - Benjamin Taylor Carlson (Carnegie-Mellon University (US)) (157) 14:00 Top - Gregory Mahlon (until 16:00) (G28) 14:00 Probing top-quark couplings at NLO accuracy - Cen Zhang (Brookhaven National Laboratory) (G28) 14:15 Top quark pair production measurements using the ATLAS detector at the LHC - Ki Lie (Univ. Illinois at Urbana-Champaign (US)) (G28) 14:30 Top quark pair properties using the ATLAS detector at the LHC - Bruno Galhardo (Universidade de Coimbra (PT)) (G28) 14:45 New measurements of ttW and ttZ at CMS - Andrew Brinkerhoff (University of Notre Dame (US)) (G28) 15:00 Single Top quark production cross section and properties using the ATLAS detector at the LHC - Kevin Sapp (University of Pittsburgh (US)) (G28) 16:00 --- Coffee Break --- 16:30 B Physics - Amarjit Soni (BNL) (until 18:15) (G27) 16:30 CP violation in B and Bs decays - Mr Jack Wimberley (University of Maryland, College Park) (G27) 16:45 The shape of new physics in rare B decays - Rodrigo Alonso (UC San Diego) (G27) 17:00 Studies of charmless B decays at LHCb - Daniel Patrick O'Hanlon (University of Warwick (GB)) (G27) 17:15 Simultaneous Explanation of the $R_K$ and $R(D^{(∗)})$ Puzzles - Shanmuka Shivashankara (University of Mississippi) (G27) 17:30 AdS/QCD predictions for $B\to K^* \mu^+\mu^-$ decay rate and isospin/forward-backward asymmetries.
- Mohammad Ahmady (Mount Allison University) (G27) 17:45 Measurement of the CKM angle $\gamma$ at LHCb - Dr alessandro bertolin (LHCb - INFN (Padova)) (G27) 18:00 New physics searches, spectroscopy and decay properties of b-hadrons with the ATLAS experiment - Steffen Maeland (University of Bergen (NO)) (G27) 16:30 BSM IV - Zhenyu Han (Harvard University) (until 18:15) (G31) 16:30 The Mu2e Experiment at Fermilab - Kevin Lynch (York College/City University of New York) (G31) 16:45 A Bottom-Up Approach to Lepton Flavor and CP Symmetries - Alexander Stuart (SISSA) (G31) 17:00 Searches for excited leptons at CMS - Matthias Klaus Endres (Rheinisch-Westfaelische Tech. Hoch. (DE)) (G31) 17:15 vector like leptons at LHC - Ms nilanjana kumar (Graduate student) (G31) 17:30 Heavy Type III Seesaw Leptons at NLO in QCD - Richard Ruiz (G31) 17:45 Lepton Number Violation and Leptogenesis - Chang-Hun Lee (University of Maryland) (G31) 18:00 The signal strength of the pseudo scalars in the model of electroweak-scaled right-handed neutrinos at the LHC - vinh hoang (university of virginia) (G31) 16:30 BSM V - Azar Mustafayev (University of Hawai'i at Manoa (US)) (until 18:30) (157) 16:30 Search for vector-like quarks at the LHC using the CMS detector - Devdatta Majumder (University of Kansas (KU)) (157) 16:45 Searches for vector-like quarks with the ATLAS detector at the LHC - Ruchika Nayyar (University of Arizona (US)) (157) 17:00 The Oddest Little Higgs: Top partners decaying into jets - Jack Collins (urn:Google) (157) 17:15 Extra dimensions versus supersymmetry at the LHC - Prof.
Satyanarayan Nandi (Oklahoma State University) (157) 17:30 Discovery opportunity of new physics with M2 variables - Doojin Kim (University of Maryland) (157) 17:45 Searches for new physics in diboson resonances and other signatures with the ATLAS detector at the LHC - Mr Campoverde Angel (Stony Brook University) (157) 18:00 Search for heavy resonances in diboson final states with the CMS detector at LHC - Jennifer Ngadiuba (Universitaet Zuerich (CH)) (157) 18:15 Searches for resonant pair production of Higgs bosons using the CMS detector - Souvik Das (University of Florida (US)) (157) 16:30 Dark Matter IV - Brooks Thomas (University of Hawaii) (until 18:30) (G29) 16:30 Non-Abelian Darkness - Gustavo Marques Tavares (Boston University) (G29) 16:45 Cosmological Constraints on Dynamical Dark Matter - Mr Patrick Stengel (University of Hawaii) (G29) 17:00 Unitarity and Bound State Dark Matter - sonia el hedri (JGU Mainz) (G29) 17:15 UltraViolet Freeze-in - Fatemeh Elahi (University of Notre Dame) (G29) 17:30 Higgs portals to galactic center pulsar collapse - Joseph Andrew Bramante (University of Notre Dame (US)) (G29) 17:45 Vector Dark Matter via Higgs Portal - Anthony DiFranzo (UC Irvine / Fermilab) (G29) 18:00 Dark matter explained through two distinct ideas related to Higgs - Shreyashi Chakdar (oklahoma state university) (G29) 18:15 Scalar Dark Matter Mediated via Colored Scalar - Gaurav Mendiratta (Indian Institute of Science, Bangalore,India) (G29) 16:30 Electroweak - Cen Zhang (Brookhaven National Laboratory) (until 17:45) (G28) 16:30 Parton distributions and the W mass measurement - Prof. 
Zack Sullivan (Illinois Institute of Technology) (G28) 16:45 Electroweak Corrections at the LHC - Jia Zhou (SUNY Buffalo) (G28) 17:00 Electroweak Corrections to Vector Boson + b-jet Production - Steven Honeywell (Florida State University) (G28) 17:15 Recent electroweak results from ATLAS - Aparajita Dattagupta (Indiana University (US)) (G28) 17:30 ATLAS measurements of vector boson production - Samuel Webb (University of Manchester (GB)) (G28) 16:30 Higgs IV -Ms Peisi Huang (University of Wisconsin (US)) (until 18:30) (G30) 16:30 Measurements of the properties of the Higgs boson using the ATLAS Detector - Jordan S Webster (University of Chicago (US)) (G30) 16:45 Search for Higgs Bosons produced in association with top quarks with the ATLAS detector - Marine Kuna (Universita e INFN, Roma I (IT)) (G30) 17:00 Combination of couplings of the Higgs boson by the ATLAS experiment with Run 1 data - Cecilia Taccini (Roma Tre Universita Degli Studi (IT)) (G30) 17:15 Beyond-the-Standard Model Higgs Physics using the ATLAS Experiment - Alexander Madsen (Uppsala University (SE)) (G30) 17:30 A pure resonance dip signal due to the imaginary interference in heavy resonance search - Yeo Woong Yoon (Konkuk Univ.) 
(G30) 17:45 The Holographic Twin Higgs - Michael Geller (Technion) (G30) 18:00 Probing Natural Colorless Top Partners - Christopher Verhaaren (University of Maryland, College Park) (G30) 16:30 Tools -Dr Neil Christensen (University of Pittsburgh) (until 18:30) (G26) 16:30 Mini-Review: Matrix Element Techniques - Konstantin Matchev (University of Florida (US)) (G26) 17:00 Searching for new collider resonances through topological models - Mohammad Abdullah (University of California , Irvine) (G26) 17:15 Discovering New Physics with Voronoi Tessellations - James Gainer (University of Florida (US)) (G26) 17:30 On the implementation of the UED model in Pythia, CalcHEP and MadGraph - Ms Dipsikha Debnath (University of Florida) (G26) 17:45 Automated Event Plotting with RHADAManTHUS (Recursively Heuristic Analysis, Display, And Manipulation: The Histogram Utility Suite) - Joel Wesley Walker (Texas A & M University (US)) (G26) 18:00 Recent improvements on Monte Carlo modelling at ATLAS - Rachik Soualah (Universita degli Studi di Udine (IT)) (G26) 18:15 One-loop Feynman integrals made easy with Package-X - Hiren Patel (Max Planck Institute) (G26) 18:45 Banquet (until 21:45) (Soldiers and Sailors Hall and Museum)
Your confusion stems from comparing results in different conventions. Roughly, there is always a phase difference of $\pi$ between the two output ports of a beam splitter, but this can only ever be 'morally' true, because that statement talks about the phase of the EM field at different points, and thus the phase will be different depending on where exactly you fix your measuring point on the two input and the two output ports. In situations where you have two waves co-propagating, their relative phase is perfectly well-defined, but for the ports of the beam splitter you're comparing phases of different beams in different positions and moving in different directions, so the whole thing is impossible without some artificial way to fix the convention. Broadly speaking, there are two separate ways to fix the convention: one with an explicit $\pi$ phase shift,$$\begin{pmatrix} a_{\mathrm{out},1} \\ a_{\mathrm{out},2} \end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} a_{\mathrm{in},1} \\ a_{\mathrm{in},2} \end{pmatrix},$$and one with several explicit $\pi/2$ phases:$$\begin{pmatrix} a_{\mathrm{out},1} \\ a_{\mathrm{out},2} \end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}\begin{pmatrix} a_{\mathrm{in},1} \\ a_{\mathrm{in},2} \end{pmatrix}.$$These two conventions are exactly equivalent, since they can be transformed into one another by adding a $\pi/2$ phase to $a_{\mathrm{in},2}$ and $a_{\mathrm{out},2}$,$$\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix}\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix},$$and experimentally this is equivalent to adding a thin slab of glass on one input and one output port.
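As a quick numerical sanity check (my own sketch, not part of the original answer), one can verify with NumPy that both convention matrices are unitary and that the $\pi/2$ rephasing of the second ports maps one into the other:

```python
import numpy as np

# The two common beam-splitter conventions.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # explicit pi phase shift
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # symmetric pi/2 phases

# Both matrices are unitary: M M^dagger = identity.
for M in (H, B):
    assert np.allclose(M @ M.conj().T, np.eye(2))

# Adding a pi/2 phase (a factor of -i here) on the second input
# and second output port maps one convention into the other.
P = np.diag([1, -1j])
assert np.allclose(H, P @ B @ P)
print("both conventions are unitary and equivalent up to port phases")
```

The check confirms that the two matrices differ only by diagonal phase factors, which is exactly the "slab of glass" freedom described above.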
More importantly, though, you typically don't know the precise value of the optical path length before the $(\mathrm{in}{,}2)$ port or after the $(\mathrm{out}{,}2)$ port, so quibbling about these phases is completely moot. What you do need is for the matrix governing the coupling to be unitary, which comes from a strict requirement of energy conservation. Now, this requirement of unitarity can indeed sound a bit intimidating and exotic, but it is important to note that it has nothing to do with quantum mechanics: it is already present within the hamiltonian classical-mechanics description of the system. More precisely, when we say that the beam splitter can be described by a matrix, we are making two core assertions about the electromagnetic fields we're considering: First, we are asserting that the EM fields we are willing to consider need to be linear combinations of only two pre-specified input modes (mode diagram omitted). Second, we are recognizing that those fields can also be expressed as linear combinations of two output modes (mode diagram omitted), which are characterized by having all of the output energy on only one of the output ports. Both of these sets of modes are bases for the same linear subspace of field modes, which means that each set can be expressed as a linear combination of the other set; in other words, the amplitudes of each set of modes are related via some matrix.
More importantly, when we sit down to describe the (classical) field, we write either$$\mathbf E(\mathbf r,t) = \mathrm{Re}\mathopen{}\left[\sum_{j=1}^2 \alpha_{\mathrm{in},j}(t) \mathbf E_{\mathrm{in},j}(\mathbf r)\right]$$or$$\mathbf E(\mathbf r,t) = \mathrm{Re}\mathopen{}\left[\sum_{j=1}^2 \alpha_{\mathrm{out},j}(t) \mathbf E_{\mathrm{out},j}(\mathbf r)\right],$$where $\alpha_{\mathrm{in},j}(t)$ and $\alpha_{\mathrm{out},j}(t)$ are the complex-valued canonical variables that describe the dynamics of those modes, and which satisfy the dynamical equations$$\frac{\mathrm d^2}{\mathrm dt^2}\alpha_{x,j}(t) = -\omega^2 \alpha_{x,j}(t).$$ The tricky part is the normalization: because $\alpha_{x,j}(t)$ and $\mathbf E_{x,j}(\mathbf r)$ only ever appear (thus far) in the product $\alpha_{x,j}(t) \mathbf E_{x,j}(\mathbf r)$, there is an ambiguity in a common complex factor that can be put on either side, covering both normalization and phase. For the normalization, there is an objective absolute standard that needs to be adhered to: namely, that for each of the modes, the total energy flow across the beam-splitter midline needs to be constant. This is the only way to correctly quantize the system. For the phase, there is no objective or absolute standard - there is a complete phase ambiguity on all four of the $\mathbf E_{x,j}(\mathbf r)$, and correspondingly on the $\alpha_{x,j}(t)$. You're free to pick any phase that you find convenient, but you do need to choose one. And, moreover, the phase conventions that you choose for the $\mathrm{in}$ ports cannot be used to set those for the $\mathrm{out}$ ports, or vice versa, because they refer to completely different modes evaluated at different places. They're completely independent quantities.
Once you fix the total incoming energy flow per mode at $R$ (in joules per second), then the total energy flow can be shown to be both$$F = R\sum_{j=1}^2 |\alpha_{\mathrm{in},j}(t)|^2$$and$$F = R\sum_{j=1}^2 |\alpha_{\mathrm{out},j}(t)|^2,$$and those two energy flows need to match, because of energy conservation. This means, therefore, that the expression of the output canonical variables as linear combinations of the input canonical variables,$$\begin{pmatrix} \alpha_{\mathrm{out},1} \\ \alpha_{\mathrm{out},2} \end{pmatrix}=\begin{pmatrix} c & d \\ e & f \end{pmatrix}\begin{pmatrix} \alpha_{\mathrm{in},1} \\ \alpha_{\mathrm{in},2} \end{pmatrix}=M\begin{pmatrix} \alpha_{\mathrm{in},1} \\ \alpha_{\mathrm{in},2} \end{pmatrix},$$needs to preserve the norm $\sum_{j=1}^2 |\alpha_{x,j}(t)|^2$. Or, in other words, the matrix of the transformation needs to be unitary. (Why unitary and not just having unit norm on each row or each column? Because the beam splitter needs to conserve $\sum_{j=1}^2 |\alpha_{x,j}(t)|^2$ both for cases where the input is on only one of the input ports, but also for cases where there's light on both. If the matrix isn't unitary, there will be a superposition of input beams that will have a different output energy than the sum of the inputs.) All of the above is crucial for a correct hamiltonian classical-field-mechanical description of the beam splitter. Once you've done it correctly (but only after you've done the classical mechanics correctly), you're ready to move on to the quantum mechanics of the system, which is now very easy: since you've done the classical mechanics correctly, all you need to do is to replace the hamiltonian canonical variables with annihilation operators,$$\alpha_{x,j}(t) \mapsto \hat{a}_{x,j}.$$and you're done. 
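The parenthetical point — that unit-norm rows alone are not enough — is easy to demonstrate numerically. Below is a small sketch of my own: a matrix whose rows and columns all have unit norm but which is not unitary conserves energy for single-port inputs, yet fails badly for a superposed input:

```python
import numpy as np

# Every row and column has unit norm, but M is not unitary.
M = np.array([[1, 1], [1, 1]]) / np.sqrt(2)
assert not np.allclose(M.conj().T @ M, np.eye(2))

def energy(a):
    """Total energy flow, proportional to sum_j |alpha_j|^2."""
    return float(np.sum(np.abs(a) ** 2))

# Light on a single input port: energy happens to be conserved.
assert np.isclose(energy(M @ np.array([1, 0])), 1.0)

# A superposition of both inputs: energy is doubled or destroyed,
# which is unphysical -- hence the full unitarity requirement.
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
print(energy(M @ plus), energy(M @ minus))  # 2.0 and 0.0
```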
And, since you had a strict requirement of unitarity on the matrix that interlinked the hamiltonian canonical variables between the input and output sets, you have an identical requirement of unitarity on the matrix that links the input and output annihilation (and therefore creation) operators. What you don't have, because you didn't have it on the classical fields, is any additional restriction on what the phases must be, either of $\pi/2$ or $\pi$ or anything else, because (again) those are just doomed attempts at comparing phases in different places, which cannot be done to any absolute or objective standard.
I've always been confused with Delta hedging. It is well-known that for a (smooth enough) function of $(S,t)$ we have, due to Ito's lemma, that:\begin{eqnarray*}dC = \left(\frac{\partial C}{\partial ... Let's assume the usual Black Scholes assumptions hold. My question is related to an answer on this question. There, the weights ($\Delta_t^1$,$\Delta_t^2$) are derived which form a locally risk free ... Assume there are two stocks $S_1$ with price $p_1(t)$ and $S_2$ with price $p_2(t)$ where $t$ indicates time. Assume, there is a hypothetical derivative $D$, which is such that, price of $D$ at a time ... In his book 'Stochastic Calculus for Finance II' Shreve uses the term: 'Short Option Hedging Portfolio' on page.156 (4.5.3). Can someone please explain this term with some kind of an example?It is ... Question: The following is my derivation of the Black-Scholes equation. Is it correct or am I missing some details (eg assumption)?Let $V$ be value of an option.Suppose value $\Pi$ of a portfolio ...
Your problem can be reduced to the Partition problem (which is weakly NP-complete) without an exponential blowup of the numeric values; so your problem is weakly NP-complete, too. This is the idea: you can view the $2N$ integers as nodes of a graph $G$; the pairs in $P$ identify the edges between the nodes. Clearly $G$ cannot contain cycles of odd length (checkable in polynomial time); otherwise there would be a conflict and at least one element would have to be excluded from the partition $A$ vs $S \setminus A$. Each connected component $H_i$ of $G$ is bipartite, $H_i = H_i' \cup H_i''$, and you can treat it like a single integer $h_i = | \sum H_i' - \sum H_i''|$. Let $w_i$ be the number of elements on the side of $H_i$ with the greater sum, minus the number of elements on the side with the smaller sum, plus $N$: $w_i = N + |H_i'| - |H_i''|$ if $\sum H_i' > \sum H_i''$; $w_i = N + |H_i''| - |H_i'|$ otherwise. Let $a_1,a_2,...,a_m$ be the "isolated" integers. Let $k$ be the minimum value such that $2^k > \sum h_i + \sum a_j$. We finally build a Partition problem with the elements: $B = \{ a_j + 2^k (N + 1) \} \cup \{ h_i + 2^k w_i \}$. It is not hard to show that $B$ has a partition $B = B_1 \cup B_2$ with $\sum B_1 = \sum B_2$ if and only if the original $S$ can be split into two equal-size halves having the same sum and such that no pair in $P$ has both elements on the same side. Very informally: if $h_i$ is placed on one side, it also carries the $w_i$ (positive) "weight" that keeps track of the (absolute) difference between the number of elements of $H_i'$ and $H_i''$; so on the opposite side both the sum and the weight must be "balanced", and this ensures that a solution leads to a valid equal-size partition of the original problem as well.
In the original problem we can derive $A$ from $B_1$: if $h_i$ (or $a_j$) is placed in $B_1$, then the integer nodes of the bipartite component $H_i$ must be arranged in such a way that $A$ receives the elements from $H_i'$ or $H_i''$ according to which has the greater sum. A simple example: given the integers $1,1,2,3,4,7$ and the pairs $(7,2), (7,3)$, we build the corresponding graph $G$ (figure omitted). We then reduce the connected component to a single number $(2,-1)$ (for better clarity we use the notation $(x,y)$ instead of $x + 2^k (N + y)$) and solve the corresponding Partition problem. You can also apply the Partition problem's pseudo-polynomial-time dynamic programming algorithm to the set $B$ to find the solution.
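To make the worked example concrete, here is a small brute-force check of my own (not part of the reduction) confirming that the instance $S=\{1,1,2,3,4,7\}$ with forbidden pairs $(7,2)$ and $(7,3)$ really does admit an equal-size, equal-sum split separating each pair:

```python
from itertools import combinations

S = [1, 1, 2, 3, 4, 7]
pairs = [(7, 2), (7, 3)]  # each pair must end up on opposite sides

def valid_splits(s, forbidden):
    """Yield all equal-size, equal-sum splits respecting the pair constraints."""
    n, total = len(s), sum(s)
    for idx in combinations(range(n), n // 2):
        a = [s[i] for i in idx]
        b = [s[i] for i in range(n) if i not in idx]
        if sum(a) * 2 != total:
            continue
        # A pair is violated if both of its values land on the same side.
        if any((x in a and y in a) or (x in b and y in b) for x, y in forbidden):
            continue
        yield a, b

sols = list(valid_splits(S, pairs))
print(sols[0])  # ([1, 1, 7], [2, 3, 4]) -- both sides sum to 9, size 3
```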
$R$ is taken to be the ring of upper triangular $3 \times 3$ matrices with entries in $\mathbb{R}$. If I view $R$ as a module over itself, are any of its submodules free? And how can I prove that its submodules are projective $R$-modules? Thanks! None of its proper submodules could be free. Think about it: $R$ is a $6$-dimensional $\Bbb R$-algebra. It could not contain even a single copy of itself properly, considering that a single copy would have dimension at least $6$. It certainly contains projective submodules, though. For any idempotent $e$, $eR$ is going to be a summand of $R$, and hence a projective module. Actually, triangular matrix rings over fields are hereditary, meaning that all their left and right ideals are projective. The only proofs I'm aware of for this would be a little long to write out. I know a proof appears in section 2 of Lectures on Modules and Rings, and in First Course in Noncommutative Rings, section 25, both books by T. Y. Lam. The version I like is the one where you show that a ring is right hereditary iff its Jacobson radical is a projective right $R$-module. The algebra you're studying is the path algebra of the quiver $$ \underset{1}{\bullet}\xrightarrow{\alpha}\underset{2}{\bullet} \xrightarrow{\beta}\underset{3}{\bullet} $$ so it's hereditary, as all path algebras are. You can find a proof of this result in a paper by C. M. Ringel, see section 3, page 3.
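The idempotent claim can be seen concretely in a small numerical sketch (my own illustration; the choice $e = E_{11}$, the first matrix unit, is an assumption for the example): $e^2 = e$, the right ideal $eR$ consists of the upper triangular matrices supported on the first row, and $R = eR \oplus (1-e)R$ as right $R$-modules:

```python
import numpy as np

# e = E11 is an idempotent in the ring R of upper triangular 3x3 matrices.
e = np.diag([1.0, 0.0, 0.0])
assert np.allclose(e @ e, e)

rng = np.random.default_rng(1)

def random_upper():
    """A random element of R (upper triangular 3x3 over the reals)."""
    return np.triu(rng.standard_normal((3, 3)))

for _ in range(100):
    r = random_upper()
    # e*r keeps only the first row of r, so eR = "first-row" matrices.
    assert np.allclose(e @ r, np.triu(e @ r))  # e*r still lies in R
    assert np.allclose((e @ r)[1:], 0)         # rows 2 and 3 vanish
    # r decomposes as e*r + (1-e)*r, giving R = eR + (1-e)R as right modules.
    assert np.allclose(r, e @ r + (np.eye(3) - e) @ r)
print("eR is a direct summand of R")
```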
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Assume we'd like to be able to encode variables $x_1,x_2,\cdots,x_r\in \mathbb{N}$, such that $\forall i\in[r]:1\leq x_i\leq N$ and $$\sum_{i=1}^{r}x_i=M$$ It's easy to store the variables using $r\log N$ bits, but this doesn't take advantage of the fact that their sum is bounded. Furthermore, in my application $r$ is not fixed and could vary from $\frac{M}{N}$ to $M$, so I would need $M\log N$ bits in the worst case if I use the naive encoding. So the questions are as follows: How many bits are required to encode the variables, given that an adversary picks the values of $r$ and $x_1,x_2,\cdots,x_r$? Is there a data structure that uses close-to-optimal space for this and supports efficient replacement operations (where replace$(i,j)$ increments the value of $x_j$ by 1 and decrements the value of $x_i$ by 1)? If it helps, you may assume $M=N$ (which is what I need for my application), but I think it's interesting for general $M,N$.
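One way to get a handle on the first question is to count the admissible inputs directly: the base-2 logarithm of that count is the information-theoretic lower bound on the number of bits. A small dynamic-programming sketch (my own, with hypothetical helper names):

```python
import math

def count_tuples(r, M, N):
    """Number of tuples (x_1..x_r) with 1 <= x_i <= N and sum x_i = M."""
    dp = [1] + [0] * M            # dp[s] = number of ways to reach sum s
    for _ in range(r):
        new = [0] * (M + 1)
        for s in range(M + 1):
            if dp[s]:
                for v in range(1, N + 1):
                    if s + v <= M:
                        new[s + v] += dp[s]
        dp = new
    return dp[M]

def min_bits(M, N):
    """Lower bound on bits when the adversary also picks r (ceil(M/N) <= r <= M)."""
    total = sum(count_tuples(r, M, N) for r in range(-(-M // N), M + 1))
    return math.log2(total)

# For r=2, M=3, N=2 the admissible tuples are (1,2) and (2,1).
assert count_tuples(2, 3, 2) == 2
print(min_bits(16, 16))  # 15.0 -- far below the naive M*log2(N) = 64 bits
```

For $M = N$ every composition of $M$ is admissible, so the count is $2^{M-1}$ and the bound is $M-1$ bits, which quantifies how wasteful the naive encoding is.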
This is the problem called Divisibly Yours: We wonder who can find and explain the shortest and neatest proof that $$5^{2n+1} + 11^{2n+1} + 17^{2n+1}$$ is divisible by $33$ for every non-negative integer $n$. One way to solve this problem involves modulo arithmetic. You may have encountered this in the form of clock arithmetic. For example, the hours on a clock face can be considered as a modulo 12 system, with only the numbers from 1 to 12 being possible. If you add 2 hours to 11 o'clock, you get 1 rather than 13. If you add 13 hours to any time, it is effectively the same as adding 1. If you add 12, 24, 36 etc. hours to any time, it is the same as adding 0. Numbers above 12 are not possible; you must convert them into numbers between 1 and 12 (inclusive) by dividing by 12 and taking the remainder. The difference between clock arithmetic and modulo arithmetic is that in modulo arithmetic the numbers begin at 0. Therefore, modulo 12 arithmetic only allows the numbers from 0 to 11. The number 12 would become 0. One crucial feature of the modulo arithmetic system (as regards this problem) is that if a number can be written as $0$ under modulo $n$ arithmetic, then the number is divisible by $n$. In the solution to this problem I shall use modulo $3$ and modulo $11$ arithmetic. Technically, the everyday number system that we use is modulo infinity arithmetic, but this is a moot point. If the expression is divisible by 3 and by 11, it must be divisible by 33. We can show that the expression is divisible by 3 and 11 by showing that the expression is equal to zero under both modulo 3 and modulo 11 arithmetic. We can write $5^{2n+1}$ as $25^n \times 5$.
Under modulo 3 arithmetic, $$25 \equiv 1$$ (as 25 divided by 3 leaves remainder 1) so $$25^n \equiv 1^n \equiv 1$$ Also, $5 \equiv 2$. Therefore $$25^n \times 5 \equiv 1 \times 2 \equiv 2$$ Similarly, $$11^{2n+1} = 121^n \times 11$$ Under modulo 3 arithmetic, $$121 \equiv 1$$ so $$121^n \equiv 1^n \equiv 1$$ and $$11 \equiv 2$$ Therefore $$121^n \times 11 \equiv 1 \times 2 \equiv 2$$ $$17^{2n+1} = 289^n \times 17$$ Under modulo 3 arithmetic, $$289 \equiv 1$$ so $$289^n \equiv 1^n \equiv 1$$ and $$17 \equiv 2$$ Therefore $$289^n \times 17 \equiv 1 \times 2 \equiv 2$$ Under modulo $3$ arithmetic, the expression is equal to $2 + 2 + 2 = 6 \equiv 0$. The expression is therefore divisible by 3. Under modulo 11 arithmetic, $25 \equiv 3$ so $$25^n \equiv 3^n$$ and $$5 \equiv 5$$ Therefore $$25^n \times 5 \equiv 3^n \times 5$$ Also $$121 \equiv 0$$ and $$11 \equiv 0$$ Therefore $$121^n \times 11 \equiv 0$$ (clearly it is divisible by 11). $$289 \equiv 3$$ so $$289^n \equiv 3^n$$ (289 is 3 more than a multiple of 11) and $$17 \equiv 6$$ Therefore $$289^n \times 17 \equiv 3^n \times 6$$ Under modulo 11 arithmetic, the expression is equal to: \begin{eqnarray}(3^n \times 5) + 0 + (3^n \times 6) &=& 3^n \times 11 \\ &\equiv& 3^n \times 0 \\ &=& 0 \end{eqnarray} Therefore the expression is divisible by 11. As the expression is divisible by both 3 and 11, 3 and 11 are included among its prime factors (as both are prime numbers) and so the expression is divisible by $3 \times 11 = 33$.
It is also possible to show that the expression is divisible by 33 by using modulo 33 arithmetic. Under modulo 33 arithmetic, $$25^n \times 5$$ remains the same, $$121^n \times 11 \equiv 22^n \times 11$$ and $$289^n \times 17 \equiv 25^n \times 17$$ Under modulo 33 arithmetic, the expression is equal to: \begin{eqnarray} (25^n \times 5) + (22^n \times 11) + (25^n \times 17) &=& (25^n \times 22) + (22^n \times 11) \\ &=& 11 \times [(25^n \times 2) + 22^n] \end{eqnarray} If this last expression equals a multiple of 33, it is equal to 0. It is clearly divisible by 11, and so will be divisible by 33 if $(25^n \times 2) + 22^n$ is divisible by 3. Under modulo 3 arithmetic, $$25^n \equiv 1^n = 1$$ so $$25^n \times 2 \equiv 1 \times 2 = 2$$ and $$22^n \equiv 1^n = 1$$ Therefore $$(25^n \times 2) + 22^n \equiv 2 + 1 = 3 \equiv 0$$ Therefore $(25^n \times 2) + 22^n$ is divisible by 3. Therefore the expression is divisible by 33.
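The claim is easy to spot-check numerically. This short sketch (mine, not part of the original solution) verifies divisibility by 33 for the first few hundred values of $n$ using built-in modular exponentiation:

```python
# Verify 5^(2n+1) + 11^(2n+1) + 17^(2n+1) == 0 (mod 33) for n = 0..499.
for n in range(500):
    e = 2 * n + 1
    total = pow(5, e, 33) + pow(11, e, 33) + pow(17, e, 33)
    assert total % 33 == 0
print("divisible by 33 for all n checked")
```

For instance, $n=0$ gives $5+11+17=33$ and $n=1$ gives $125+1331+4913=6369=33\times 193$.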
What exactly happens in the case of equi-spaced points? Why does an increase in polynomial order cause the error to rise after a certain point? This is similar to Runge's phenomenon, where, with equi-spaced nodes, the interpolation error goes to infinity as the polynomial degree (i.e. the number of points) increases. One of the roots of this problem can be found in the Lebesgue constant, as noted by @Subodh's comment to @Pedro's answer. This constant relates the interpolation to the best approximation. Some notation: we have a function $f \in C([a,b])$ to interpolate over the nodes $x_k$. In Lagrange interpolation the Lagrange polynomials are defined as: $$L_k(x) = \prod_{i=0, i\neq k}^{n}\frac{x-x_i}{x_k-x_i}$$ With these, the interpolation polynomial $p_n \in P_n$ is defined over the pairs $(x_k, f(x_k))$, written $(x_k, f_k)$ for brevity: $$p_n(x) = \sum_{k=0}^n f_k L_k(x)$$ Now consider a perturbation of the data, for example due to rounding, so we have $\tilde{f}_k$. With this the new polynomial $\tilde{p}_n$ is: $$\tilde{p}_n(x) = \sum_{k=0}^n \tilde{f}_k L_k(x)$$ The error estimates are: $$p_n(x) - \tilde{p}_n(x) = \sum_{k=0}^n (f_k - \tilde{f}_k) L_k(x)$$ $$| p_n(x) - \tilde{p}_n(x) | \leq \sum_{k=0}^n |f_k - \tilde{f}_k| |L_k(x)| \leq \left( \max_k |f_k - \tilde{f}_k| \right) \sum_{k=0}^n |L_k(x)|$$ Now it is possible to define the Lebesgue constant $\Lambda_n$ as: $$\Lambda_n = \max_{x \in [a,b]} \sum_{k=0}^n |L_k(x)|$$ With this the final estimate is: $$|| p_n - \tilde{p}_n ||_{\infty} \leq \left( \max_k |f_k - \tilde{f}_k| \right) \Lambda_n $$ (As a marginal note, we only look at the $\infty$ norm, also because we are over a space of finite measure, so $L^{\infty} \subseteq \dots \subseteq L^1$.) From the above calculation we see that $\Lambda_n$ is: independent of the data (it depends only on the node distribution); an indicator of stability (the smaller it is, the better).
It is also the norm of the interpolation operator with respect to the $|| \cdot ||_\infty$ norm. With the following theorem we can get an estimate of the interpolation error in terms of the Lebesgue constant: Let $f$ and $p_n$ be as above; then $$ || f - p_n ||_{\infty} \leq (1 + \Lambda_n) d_n(f) $$ where $$ d_n(f) = \inf_{q_n \in P_n} || f - q_n ||_{\infty} $$ is the error of the best uniform approximation polynomial. I.e. if $\Lambda_n$ is small, the interpolation error is not far from the error of the best uniform approximation; the theorem compares the interpolation error with the smallest possible error, which is the error of the best uniform approximation. For this reason the behavior of the interpolation depends on the node distribution. There is a lower bound on $\Lambda_n$: for any node distribution there exists a constant $c$ such that $$ \Lambda_n \geq \frac{2}{\pi} \log(n) - c $$ so the constant grows, but how it grows is important. For equi-spaced nodes $$\Lambda_n \approx \frac{2^{n+1}}{en \log(n)} $$ I omitted some details, but we see that the growth is exponential. For Chebyshev nodes $$\Lambda_n \leq \frac{2}{\pi} \log(n) + 4 $$ Also here I omitted some details; there are more accurate and complicated estimates. See [1] for more details. Note that nodes of the Chebyshev family have logarithmic growth, which, by the previous lower bound, is near the best you can obtain. For other node distributions see for example Table 1 of this article. There are a lot of references on interpolation in books. Online, these slides are a nice summary. Also see this open article ([1]), "A Numerical Comparison of Seven Grids for Polynomial Interpolation on the Interval", for various comparisons.
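The growth rates above can be observed directly. The sketch below (my own, evaluating $\sum_k |L_k(x)|$ on a fine grid over $[-1,1]$) compares the Lebesgue constant of equi-spaced and Chebyshev nodes for $n = 10$:

```python
import numpy as np

def lebesgue_constant(nodes, samples=4001):
    """Approximate max over [-1,1] of sum_k |L_k(x)| on a fine grid."""
    x = np.linspace(-1.0, 1.0, samples)
    total = np.zeros_like(x)
    for k, xk in enumerate(nodes):
        Lk = np.ones_like(x)
        for i, xi in enumerate(nodes):
            if i != k:
                Lk *= (x - xi) / (xk - xi)
        total += np.abs(Lk)
    return total.max()

n = 10
equi = np.linspace(-1.0, 1.0, n + 1)                        # equi-spaced
cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))  # Chebyshev

print(lebesgue_constant(equi))  # ~30: already large, and growing exponentially
print(lebesgue_constant(cheb))  # ~2.5: consistent with (2/pi)log(n) + O(1)
```

Increasing `n` makes the contrast dramatic: the equi-spaced constant explodes while the Chebyshev one creeps up logarithmically.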
Ask yourself the following: First, how does the integration by parts affect the solvability of the problem, and the space of solutions? Second, for which space of functions can you build a series of subspaces (the ansatz functions) that you can implement? Let us regard the Poisson problem $u'' = f$ for $f \in L^2$, say, on $[0,1]$, with homogeneous Dirichlet boundary conditions. By integration, the left and the right side of the equation can be regarded as bounded functionals on $L^2$; say, for $\phi \in L^2$ we have $\phi \mapsto \int u'' \phi dx$ and $\phi \mapsto \int f \phi dx$. Since any function in $L^2$ can be $L^2$-approximated by smooth functions with compact support, both integral functionals are completely known if you only know their values for all test functions. But with the test functions, you can perform integration by parts, and transform the left-hand side into the functional $\phi \mapsto -\int u' \phi' dx$. Read this as: "I take a test function $\phi$, compute its differential, and integrate it with $-u'$ over $[0,1]$, and return you the result." But that functional is not defined and bounded on $L^2$, since you cannot take the differential of an arbitrary $L^2$ function - such functions may look extremely strange in general. Still we observe that this functional can be extended to the Sobolev space $H^1$, and it is even a bounded functional on $H_0^1$. That means, given $\phi \in H_0^1$, you can roughly estimate the value of $\int -u' \phi' dx$ by a multiple of the $H_0^1$-norm of $\phi$. And, furthermore, the functional $\phi \mapsto \int f \phi dx$ is, of course, not only defined and bounded on $L^2$, but also defined and bounded on $H_0^1$. Now you can, e.g., apply the Lax-Milgram lemma, as it is presented in any PDE book. A finite element book which describes it as well, using only functional analysis, is e.g. the classic by Ciarlet, or the rather new book by Braess.
The Lax-Milgram lemma gives PDE people a nice tool for pure analysis, and they employ much stranger tools as well for their purposes. Still, these tools are also relevant for numerical analysts, because you can in fact build a discretization for these spaces. For example, in order to have a discrete subspace of $H_0^1$, just take the hat functions. They have no jumps and are piecewise differentiable; their differential is a piecewise constant vector field. This construction works in $d=1,2,3,\dots$, which is fine. But can you come up with an ansatz space whose functions not only have a nice (i.e., square-integrable) gradient, but whose gradients in turn have a (square-integrable) divergence? That is pretty hard in general. So the reason you build weak formulations the way you do is that you want to apply the Lax-Milgram lemma, and you want a formulation whose functions can in fact be implemented. (For the record, neither is Lax-Milgram the last word in that context, nor are $H_0^1$ ansatz spaces the last word in discretization; see, e.g., Discontinuous Galerkin methods.) For the case of mixed boundary conditions, the natural test space may differ from your search space (in the analytic setting), but I have no idea how to describe that without referring to distribution theory, so I stop here. I hope this is helpful.
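To make the hat-function construction concrete, here is a minimal 1-D sketch (the uniform mesh, lumped quadrature, and function names are my own assumptions): solving $-u'' = f$ on $[0,1]$ with homogeneous Dirichlet conditions. The weak form $\int u' \phi' \, dx = \int f \phi \, dx$ needs only first derivatives, so piecewise-linear hats, which are in $H_0^1$ but not $C^1$, are admissible:

```python
import numpy as np

# Solve -u'' = f on [0, 1], u(0) = u(1) = 0, with piecewise-linear
# hat functions on a uniform mesh of n interior nodes.
def solve_poisson_1d(f, n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # Stiffness matrix of the hats: tridiag(-1, 2, -1) / h
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    # Lumped load vector: integral of f * phi_i is approximately h * f(x_i)
    b = h * f(x)
    return x, np.linalg.solve(K, b)

# f = pi^2 sin(pi x) has the exact solution u = sin(pi x)
x, u = solve_poisson_1d(lambda t: np.pi ** 2 * np.sin(np.pi * t), 50)
max_err = np.max(np.abs(u - np.sin(np.pi * x)))
```

With 50 interior nodes the nodal error is already far below a percent, consistent with the expected $O(h^2)$ convergence of linear elements.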
Estimation in Sequential Analysis In the previous post I introduced the Sequential Generalized Likelihood Ratio test, a sequential A/B-test that in most cases requires much smaller sample sizes than the classical fixed sample-size test. In this post I’m going to explain some problems regarding estimation in sequential analysis tests in general, and how they can be solved. Sequential analysis tests, such as the sequential GLR test I wrote about in my previous post, allow us to save time by stopping the test early when possible. However, the fact that the test can stop early has some subtle consequences for the estimates we make after the test is done. Let’s take a look at the average maximum likelihood estimate when applied to the “comparison of proportions” sequential GLR test: It seems like the average estimate is slightly off - to get a better view, let’s take a look at just the bias, i.e. the average ML estimate minus the true difference: The estimates are (almost imperceptibly) biased inwards when the true difference is close to zero, biased outwards when the difference between proportions is relatively large, and then unbiased again at the extreme ends. This is quite unlike fixed sample-size tests, which have no such bias at all. The reason for this difference is that there is an interaction between the stopping time and the estimate - sequential tests stop early when our samples are more extreme than some threshold, which means that the final estimates we get will more often than not be more extreme than the truth. This might become a bit more intuitive if we take a look at a typical sample path for the MLE and the approximate thresholds for stopping the test in terms of the MLE. In this case the true difference is 0.2, and we do a two-sided sequential GLR test with α-level 0.05, β-level 0.10 and an indifference region of size 0.1: As we collect data, the ML estimate jumps around quite a bit before converging towards the true difference.
As it jumps around, it's likely to cross the threshold at a higher point (as seen happening here after around 70 samples) and thus stop the test there. Similarly, when the true difference is close to zero, it will usually stop at values slightly closer to zero than the actual difference. What about the vanishing bias at the extremes? This is because at the most extreme values, the test will almost invariably stop after only a handful of samples, so the interaction between the stopping time and the estimate practically disappears. So what can we do about this problem? Unfortunately, there is no uniformly best estimator we can use as a replacement for the MLE. Some of the estimators suggested to fix the bias have much larger mean squared error than the MLE due to having larger variance. However, a simple and commonly used correction (and what we use in the sequential A/B-testing library SeGLiR) is the Whitehead bias-adjusted estimate. The Whitehead bias-adjusted estimate is based on the fact that we know that: \(E(\hat{\theta}) = \theta + b(\theta)\), where \(\theta\) is the true difference, \(\hat{\theta}\) is our estimate of the difference, and \(b(\theta)\) is the bias of our test at \(\theta\). Given an estimate \(\hat{\theta}\), we can then find an approximately bias-adjusted estimate by solving for \(\tilde{\theta}\) so that: \(\tilde{\theta} + b(\tilde{\theta}) = \hat{\theta}\). This can be done with simple simulation and some optimization. Note that there are also other alternative estimators, such as the conditional MLE, but since the brute-force simulation approach to this would take much more time than the Whitehead bias-adjustment, it's not something I've implemented in SeGLiR currently. One important thing to note is that the bias problem is not specific to the sequential GLR test, or even to sequential frequentist tests.
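The "simulation plus optimization" step can be sketched as follows. Note the stopping rule below is a toy simplification of my own (a running-mean threshold on normal data), not SeGLiR's actual GLR test: we estimate \(b(\theta)\) by Monte Carlo and solve \(\tilde{\theta} + b(\tilde{\theta}) = \hat{\theta}\) by bisection.

```python
import numpy as np

# Toy sequential test: X_i ~ N(theta, 1), stop once the running mean
# crosses +/- 0.5 (after at least 10 samples) or at n_max, and report
# the MLE (the running mean at the stopping time).
rng = np.random.default_rng(0)

def run_test(theta, threshold=0.5, n_max=200, n_min=10):
    x = rng.normal(theta, 1.0, n_max)
    means = np.cumsum(x) / np.arange(1, n_max + 1)
    crossed = np.flatnonzero(np.abs(means[n_min - 1:]) > threshold)
    stop = crossed[0] + n_min - 1 if crossed.size else n_max - 1
    return means[stop]

def bias(theta, reps=1000):
    # Monte Carlo estimate of b(theta) = E[theta_hat] - theta
    return np.mean([run_test(theta) for _ in range(reps)]) - theta

def whitehead_adjust(theta_hat, lo=-1.0, hi=1.0, iters=12):
    # Bisection on t + b(t) - theta_hat
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid + bias(mid) < theta_hat:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta_adj = whitehead_adjust(0.6)   # pulled back toward zero, below the raw MLE
```

Since the outward bias is positive for positive effects in this toy setup, the adjusted estimate comes out below the raw MLE of 0.6.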
In fact, any test with a stopping rule that depends on the parameter we estimate, such as Thompson sampling with a stopping rule (as used by Google Analytics), will have the same problem. John Kruschke discusses this in the context of Bayesian analysis in this blog post. Precision and Confidence intervals So, given that we've bias-corrected the estimates, how precise are the estimates we get? Unfortunately, estimates from sequential analysis tests are often less precise than those from the fixed sample-size test. This is not so surprising, since the tests often stop earlier, and we thus have less data to base the estimates on. To see this for yourself, take a look at the estimates given in this demo. For this reason, it is natural to ask for confidence intervals to bound the estimates in sequential analysis tests. Classical fixed sample-size tests use the normal approximation to create confidence intervals for the estimate. This is usually not possible with sequential analysis tests, since the distribution of the test statistic under a stopping rule is very complex and usually impossible to approximate by common distributions. Instead we can resort to bootstrap confidence intervals, which are simple to simulate. These are unfortunately also sensitive to the bias issues above, so the best option is to use a bias-adjusted confidence interval [1]. Note that since sequential tests stop early and we often have fewer samples, the confidence intervals will usually be wider than for the fixed sample-size test. [1] See Davison & Hinkley: Bootstrap Methods and their Application, chap. 5.3 for details. P-values As a little aside, what about p-values, the statistic everyone loves to hate? When doing classical hypothesis tests, p-values are usually used to describe the significance of the result we find. This is not quite as good an idea in sequential tests as in fixed sample-size tests. The reason is that the p-value is not uniquely defined in sequential tests.
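For reference, a plain percentile bootstrap for a difference of two proportions looks like the sketch below (this is the unadjusted version; the bias-adjusted intervals in Davison & Hinkley need more machinery, and the data and function name here are made up for illustration). Resampling n Bernoulli observations with replacement is exactly a Binomial(n, p-hat) draw, which keeps the sketch short:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_ci(succ_a, n_a, succ_b, n_b, reps=10_000, alpha=0.05):
    # Bootstrap resamples of each group's proportion, then the
    # percentile interval of the difference.
    p_a = rng.binomial(n_a, succ_a / n_a, size=reps) / n_a
    p_b = rng.binomial(n_b, succ_b / n_b, size=reps) / n_b
    return np.quantile(p_a - p_b, [alpha / 2, 1 - alpha / 2])

lo, hi = bootstrap_ci(120, 1000, 100, 1000)  # point estimate: 0.12 - 0.10 = 0.02
```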
The p-value is defined as the probability of getting a result as extreme or more extreme than the one we see, given that the null hypothesis is true. In fixed sample-size tests, a more extreme result is simply a result where the test statistic is, well, more extreme. However, in the sequential setting, we also have the variable of when the test was stopped. So is a more “extreme result” a test that stops earlier? Or a test that stops later, but with a more “extreme” test statistic? There is no definite answer to this. In the statistical literature there are several different ways to “order” the outcomes and thus define what is more “extreme”, but unfortunately there is no consensus on which “ordering” is best, which makes p-values in sequential analysis a somewhat ambiguous statistic. Nevertheless, in SeGLiR we've implemented a p-value via simple simulation, where we assume that a more “extreme result” is any result where the test statistic is more extreme than our result, regardless of when the test was stopped. This is called a Likelihood Ratio ordering and is the ordering suggested by Cook & DeMets in their book referenced below. As we've seen in this post, estimation in sequential tests is a bit trickier than in fixed sample-size tests. Because sequential tests use far fewer samples, estimates may be more imprecise, and because of the interaction with the stopping rule they tend to be biased, though there are ways to mitigate the worst effects of this. In an upcoming post, I'm planning to compare sequential analysis tests with other variants of A/B-tests, such as multi-armed bandits, and give a little guide on when to choose which test. If you're interested, follow me on Twitter for updates. References If you're interested in more details on estimation in sequential tests, here are some recommended books that cover this subject.
While these are mostly about group sequential tests, the solutions are the same as in the case with fully sequential tests (which is what I've described in my posts). C. Jennison & B. Turnbull : Group Sequential Methods with Applications to Clinical Trials, CRC Press 1999 T. Cook & D. DeMets : Introduction to Statistical Methods for Clinical Trials, Chapman & Hall/CRC Press 2007
Word Representation NLP, Natural Language Processing Word embeddings are a way of representing words that lets your algorithms automatically understand analogies, like “man is to woman as king is to queen”, and many other examples. A first option is representing words using a vocabulary and one-hot vectors. One of the weaknesses of this representation is that it treats each word as a thing unto itself, and it doesn’t allow an algorithm to easily generalize across words. You sometimes see plots on the internet visualizing these 300-dimensional (or higher-dimensional) embeddings; to visualize them, algorithms like t-SNE map the embeddings to a much lower-dimensional space. Using Word Embeddings Transfer learning and word embeddings: 1. Learn word embeddings from a large text corpus (1-100B words), or download pre-trained embeddings online. 2. Transfer the embedding to a new task with a smaller training set (say, 100k words). 3. Optional: continue to fine-tune the word embeddings with the new data. Properties of Word Embeddings One of the most fascinating properties of word embeddings is that they can also help with analogy reasoning. The most commonly used similarity function is cosine similarity: \(CosineSimilarity(u,v) = \frac{u \cdot v}{\left \| u \right \|_2 \left \| v \right \|_2} = \cos(\theta)\) Embedding Matrix When you implement an algorithm to learn a word embedding, what you end up learning is an embedding matrix, whose columns are the embeddings of the (say) 10,000 different words in your vocabulary. Learning Word Embeddings It turns out that building a neural language model is a reasonable way to learn a set of embeddings. What’s actually more commonly done is to use a fixed historical window; using a fixed history means you can deal with arbitrarily long sentences, because the input sizes are always fixed. If your goal is to learn an embedding, researchers have experimented with many different types of context.
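The cosine similarity and analogy reasoning described above can be sketched in a few lines (the 3-d vectors below are made up purely for illustration; real embeddings are typically 50-300 dimensional and learned from a corpus):

```python
import numpy as np

def cosine_similarity(u, v):
    # cos(theta) between two word vectors
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy embedding "matrix" as a dict of hypothetical vectors.
E = {
    "man":   np.array([ 1.0,  0.0, 0.3]),
    "woman": np.array([-1.0,  0.0, 0.3]),
    "king":  np.array([ 1.0,  1.0, 0.2]),
    "queen": np.array([-1.0,  1.0, 0.2]),
    "apple": np.array([ 0.0, -1.0, 0.9]),
}

# Analogy reasoning: find w maximizing cos(e_w, e_king - e_man + e_woman)
target = E["king"] - E["man"] + E["woman"]
best = max((w for w in E if w not in {"king", "man", "woman"}),
           key=lambda w: cosine_similarity(E[w], target))
```

With these toy vectors the arithmetic lands exactly on the "queen" vector, which is the behavior real embeddings only approximate.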
If your goal is to build a language model, then it is natural for the context to be a few words right before the target word. But if your goal isn’t to learn a language model per se, you can choose other contexts. Word2Vec The Word2Vec algorithm is a simpler and more computationally efficient way to learn these types of embeddings. Skip-gram model: \(\begin{matrix} \text{Softmax}: & p(t|c) = \frac{e^{\theta ^T_t e_c}}{\sum _{j=1}^{10,000} e^{\theta ^T_j e_c}} \\ \text{Loss function}: & L(\hat y, y) = - \sum _{i=1}^{10,000} y_i \log \hat y _i \end{matrix}\) The primary problem is computational speed: the softmax step is very expensive to calculate, because it needs to sum over your entire vocabulary in the denominator. A few solutions: the hierarchical softmax classifier, and negative sampling. CBOW The Continuous Bag-Of-Words model takes the surrounding context of a middle word and uses the surrounding words to try to predict the middle word. Negative Sampling What we do in this algorithm is create a new supervised learning problem: given a pair of words like orange and juice, predict whether this is a context-target pair. It’s really trying to distinguish between the two distributions from which you might sample a pair of words. How do you choose the negative examples? You could sample the candidate target words according to their empirical frequency, but that over-represents very common words; or you could use 1 over the vocab size and sample the negative examples uniformly at random, but that’s also very non-representative of the distribution of English words. The authors, Mikolov et al., reported that what works best empirically is in between: \(P(w_i) = \frac{f(w_i)^{\frac{3}{4}}}{\sum _{j=1}^{10,000}f(w_j)^{\frac{3}{4}}}\) GloVe Word Vectors GloVe stands for Global Vectors for word representation. Sample pairs of words (context and target words) by picking two words that appear in close proximity to each other in our text corpus.
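The \(f(w)^{3/4}\) negative-sampling heuristic above can be sketched directly (the counts and function name are hypothetical; raising counts to the 3/4 power interpolates between the empirical unigram distribution and a uniform one):

```python
import numpy as np

def sampling_distribution(counts):
    # P(w_i) proportional to f(w_i)^(3/4), normalized over the vocabulary
    p = np.asarray(counts, dtype=float) ** 0.75
    return p / p.sum()

counts = [100, 10, 1]            # hypothetical word frequencies
p = sampling_distribution(counts)
# Frequent words are still sampled more often, but less dominantly than
# their raw frequency share would imply.
```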
So, what the GloVe algorithm does is start off by making that explicit. Sentiment Classification Sentiment classification is the task of looking at a piece of text and telling whether someone likes or dislikes the thing they’re talking about. One of the challenges of sentiment classification is that you might not have a huge labeled training set for it. But with word embeddings, you’re able to build good sentiment classifiers even with only modest-sized labeled training sets. One of the problems with this algorithm is that it ignores word order. More Sophisticated Model Debiasing Word Embeddings Machine learning and AI algorithms are increasingly trusted to help with, or to make, extremely important decisions, so we’d like to make sure that, as much as possible, they’re free of undesirable forms of bias, such as gender bias, ethnicity bias and so on. The first thing we do is identify the direction corresponding to the particular bias we want to reduce or eliminate. The next step is a neutralization step: for every word that’s not definitional, project it to get rid of the bias. The final step is called equalization, in which you take pairs of words such as grandmother and grandfather, or girl and boy, where you want the only difference in their embeddings to be the gender. Finally, the number of pairs you want to equalize is actually also relatively small, and, at least for the gender example, it is quite feasible to hand-pick them.
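The neutralization step described above is just a projection; a minimal sketch, assuming a precomputed bias direction \(g\) (e.g. \(e_{woman} - e_{man}\); the vectors below are made up):

```python
import numpy as np

def neutralize(e, g):
    # Remove the component of word vector e along bias direction g,
    # leaving the part orthogonal to g.
    g_unit = g / np.linalg.norm(g)
    return e - (e @ g_unit) * g_unit

e = np.array([0.5, -1.2, 3.0])   # hypothetical non-definitional word vector
g = np.array([1.0,  0.0, 0.0])   # hypothetical gender direction
e_debiased = neutralize(e, g)    # now orthogonal to g
```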
2019-09-27 09:59 Higgs boson pair production at colliders: status and perspectives / Di Micco, Biagio (Universita e INFN Roma Tre (IT)) ; Gouzevitch, Maxime (Centre National de la Recherche Scientifique (FR)) ; Mazzitelli, Javier (University of Zurich) ; Vernieri, Caterina (SLAC National Accelerator Laboratory (US)) ; Alison, John (Carnegie-Mellon University (US)) ; Androsov, Konstantin (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, Pisa (IT)) ; Baglio, Julien Lorenzo (CERN) ; Bagnaschi, Emanuele Angelo (Paul Scherrer Institut (CH)) ; Banerjee, Shankha (University of Durham (GB)) ; Basler, P (Karlsruhe Institute of Technology) et al. This document summarises the current theoretical and experimental status of the di-Higgs boson production searches, and of the direct and indirect constraints on the Higgs boson self-coupling, with the wish to serve as a useful guide for the next years. The document discusses the theoretical status, including state-of-the-art predictions for di-Higgs cross sections, developments on the effective field theory approach, and studies on specific new physics scenarios that can show up in the di-Higgs final state. [...] LHCHXSWG-2019-005.- Geneva : CERN, 2019 - 274. 2019-04-02 20:51 Simplified Template Cross Sections – Stage 1.1 / Delmastro, Marco (Centre National de la Recherche Scientifique (FR)) ; Berger, Nicolas (Centre National de la Recherche Scientifique (FR)) ; Bertella, Claudia (Chinese Academy of Sciences (CN)) ; Duehrssen-Debling, Michael (CERN) ; Kivernyk, Oleh (Centre National de la Recherche Scientifique (FR)) ; Langford, Jonathon Mark (Imperial College (GB)) ; Milenovic, Predrag (University of Belgrade (RS)) ; Pandini, Carlo Enrico (CERN) ; Tackmann, Frank (Deutsches Elektronen-Synchrotron (DE)) ; Tackmann, Kerstin (Deutsches Elektronen-Synchrotron (DE)) et al.
Simplified Template Cross Sections (STXS) have been adopted by the LHC experiments as a common framework for Higgs measurements. Their purpose is to reduce the theoretical uncertainties that are directly folded into the measurements as much as possible, while at the same time allowing for the combination of the measurements between different decay channels as well as between experiments. [...] arXiv:1906.02754; LHCHXSWG-2019-003; DESY-19-070.- Geneva : CERN, 2019 - 14 p. Fulltext: LHCHXSWG-2019-003 - PDF; 1906.02754 - PDF; 2019-03-27 12:46 Recommended predictions for the boosted-Higgs cross section / Becker, Kathrin (Albert Ludwigs Universitaet Freiburg (DE)) ; Caola, Fabrizio (University of Durham (GB)) ; Massironi, Andrea (CERN) ; Mistlberger, Bernhard (Massachusetts Inst. of Technology (US)) ; Monni, Pier (CERN) ; Chen, Xuan (Zurich U.) ; Frixione, Stefano (INFN e Universita Genova (IT)) ; Gehrmann, Thomas Kurt (Universitaet Zuerich (CH)) ; Glover, Nigel (IPPP Durham) ; Hamilton, Keith Murray (University of London (GB)) et al. In this note we study the inclusive production of a Higgs boson with large transverse momentum. We provide a recommendation for the inclusive cross section based on a combination of state of the art QCD predictions for the gluon-fusion and vector-boson-fusion channels. [...] LHCHXSWG-2019-002.- Geneva : CERN, 2019 - 14.
Fulltext: PDF; 2019-03-01 22:49 Higgs boson cross sections for the high-energy and high-luminosity LHC: cross-section predictions and theoretical uncertainty projections / Calderon Tazon, Alicia (Universidad de Cantabria and CSIC (ES)) ; Caola, Fabrizio (University of Durham (GB)) ; Campbell, John (Fermilab (US)) ; Francavilla, Paolo (Universita & INFN Pisa (IT)) ; Marchiori, Giovanni (Centre National de la Recherche Scientifique (FR)) ; Becker, Kathrin (Albert Ludwigs Universitaet Freiburg (DE)) ; Bertella, Claudia (Chinese Academy of Sciences (CN)) ; Bonvini, Marco (Sapienza Universita e INFN, Roma I (IT)) ; Chen, Xuan (Zuerich University (CH)) ; Frederix, Rikkert (Technische Universität Muenchen (DE)) et al. This note summarizes the state-of-the-art predictions for the cross sections expected for Higgs boson production in the 27 TeV proton-proton collisions of a high-energy LHC, including a full theoretical uncertainty analysis. It also provides projections for the progress that may be expected on the timescale of the high-luminosity LHC and an assessment of the main limiting factors to further reduction of the remaining theoretical uncertainties. LHCHXSWG-2019-001.- Geneva : CERN, 01 - 17. Fulltext: PDF; 2016-07-15 07:28 Analytical parametrization and shape classification of anomalous HH production in EFT approach / Carvalho Antunes De Oliveira, Alexandra (Universita e INFN, Padova (IT)) ; Dall'Osso, Martino (Universita e INFN, Padova (IT)) ; De Castro Manzano, Pablo (Universita e INFN, Padova (IT)) ; Dorigo, Tommaso (Universita e INFN, Padova (IT)) ; Goertz, Florian (CERN) ; Gouzevitch, Maxime (Universite Claude Bernard-Lyon I (FR)) ; Tosi, Mia (CERN) In this document we study the effect of anomalous Higgs boson couplings on non-resonant pair production of Higgs bosons (HH) at the LHC.
We explore the space of the five parameters $\kappa_\lambda$, $\kappa_t$, $c_2$, $c_{g}$, and $c_{2g}$ in terms of the corresponding kinematics of the final state, and describe a suggested partition of the space into a limited number of regions featuring similar phenomenology in the kinematics of HH final state, along with a corresponding set of representative benchmark points. [...] LHCHXSWG-2016-001.- Geneva : CERN, 2016 Fulltext: PDF; 2015-08-03 09:58 Benchmark scenarios for low $\tan \beta$ in the MSSM / Bagnaschi, Emanuele (DESY) ; Frensch, Felix (Karlsruhe, Inst. Technol.) ; Heinemeyer, Sven (Cantabria Inst. of Phys.) ; Lee, Gabriel (Technion) ; Liebler, Stefan Rainer (DESY) ; Muhlleitner, Milada (Karlsruhe, Inst. Technol.) ; Mc Carn, Allison Renae (Michigan U.) ; Quevillon, Jeremie (King's Coll. London) ; Rompotis, Nikolaos (Seattle U.) ; Slavich, Pietro (Paris, LPTHE) et al. The run-1 data taken at the LHC in 2011 and 2012 have led to strong constraints on the allowed parameter space of the MSSM. These are imposed by the discovery of an approximately SM-like Higgs boson with a mass of $125.09\pm0.24$~GeV and by the non-observation of SUSY particles or of additional (neutral or charged) Higgs bosons. [...] LHCHXSWG-2015-002.- Geneva : CERN, 2015 - 24. Fulltext: PDF; 2015-03-20 14:24 Recommendations for the interpretation of LHC searches for $H_5^0$, $H_5^{\pm}$, and $H_5^{\pm\pm}$ in vector boson fusion with decays to vector boson pairs / Zaro, Marco (Paris U., IV ; Paris, LPTHE) ; Logan, Heather (Ottawa Carleton Inst. Phys.) We provide theory input for the interpretation of the LHC searches for the production of Higgs bosons $H_5^0$, $H_5^{\pm}$, and $H_5^{\pm\pm}$ that transform as a fiveplet under the custodial symmetry.
We choose as a benchmark the Georgi-Machacek model, in which isospin-triplet scalars are added to the Standard Model Higgs sector in such a way as to preserve custodial SU(2) symmetry. [...] LHCHXSWG-2015-001.- Geneva : CERN, 30 - 19p. Fulltext: PDF;
In fluid dynamics, gravity waves are waves generated in a fluid medium or at the interface between two media when the force of gravity or buoyancy tries to restore equilibrium. An example of such an interface is that between the atmosphere and the ocean, which gives rise to wind waves. When a fluid element is displaced on an interface or internally to a region with a different density, gravity will try to restore it toward equilibrium, resulting in an oscillation about the equilibrium state or wave orbit.[1] Gravity waves on an air–sea interface of the ocean are called surface gravity waves or surface waves, while gravity waves that are within the body of the water (such as between parts of different densities) are called internal waves. Wind-generated waves on the water surface are examples of gravity waves, as are tsunamis and ocean tides. Wind-generated gravity waves on the free surface of the Earth's ponds, lakes, seas and oceans have a period of between 0.3 and 30 seconds (3 Hz to 0.03 Hz). Shorter waves are also affected by surface tension and are called gravity–capillary waves and (if hardly influenced by gravity) capillary waves. Alternatively, so-called infragravity waves, which are due to subharmonic nonlinear wave interaction with the wind waves, have periods longer than the accompanying wind-generated waves.[2] In the Earth's atmosphere, gravity waves are a mechanism for the transfer of momentum from the troposphere to the stratosphere. Gravity waves are generated in the troposphere by frontal systems or by airflow over mountains. At first, waves propagate through the atmosphere without appreciable change in mean velocity.
But as the waves reach more rarefied (thin) air at higher altitudes, their amplitude increases, and nonlinear effects cause the waves to break, transferring their momentum to the mean flow. This process plays a key role in studying the dynamics of the middle atmosphere. The clouds in gravity waves can look like altostratus undulatus clouds, and are sometimes confused with them, but the formation mechanism is different. The phase velocity $c$ of a linear gravity wave with wavenumber $k$ is given by the formula $$c=\sqrt{\frac{g}{k}},$$ where $g$ is the acceleration due to gravity. When surface tension is important, this is modified to $$c=\sqrt{\frac{g}{k}+\frac{\sigma k}{\rho}},$$ where $\sigma$ is the surface tension coefficient and $\rho$ is the density. The gravity wave represents a perturbation around a stationary state, in which there is no velocity. Thus, the perturbation introduced to the system is described by a velocity field of infinitesimally small amplitude, $(u'(x,z,t),w'(x,z,t))$. Because the fluid is assumed incompressible, this velocity field has the streamfunction representation $$\textbf{u}'=(u',w')=(\psi_z,-\psi_x),$$ where the subscripts indicate partial derivatives. In this derivation it suffices to work in two dimensions $(x,z)$, where gravity points in the negative $z$-direction. Next, in an initially stationary incompressible fluid, there is no vorticity, and the fluid stays irrotational, hence $\nabla\times\textbf{u}'=0$. In the streamfunction representation, $\nabla^2\psi=0$. Next, because of the translational invariance of the system in the $x$-direction, it is possible to make the ansatz $$\psi(x,z,t)=e^{ik(x-ct)}\Psi(z),$$ where $k$ is a spatial wavenumber. Thus, the problem reduces to solving the equation $$\left(D^2-k^2\right)\Psi=0,\qquad D=\frac{d}{dz}.$$ We work in a sea of infinite depth, so the boundary condition is at $z=-\infty$. The undisturbed surface is at $z=0$, and the disturbed or wavy surface is at $z=\eta$, where $\eta$ is small in magnitude.
If no fluid is to leak out of the bottom, we must have the condition $\Psi\to 0$ as $z\to-\infty$. Hence, $\Psi=Ae^{kz}$ on $z\in\left(-\infty,\eta\right)$, where $A$ and the wave speed $c$ are constants to be determined from conditions at the interface. The free-surface condition: at the free surface $z=\eta\left(x,t\right)$, the kinematic condition holds: $$\frac{\partial\eta}{\partial t}+u'\frac{\partial\eta}{\partial x}=w'\left(\eta\right).$$ Linearizing, this is simply $$\frac{\partial\eta}{\partial t}=w'\left(x,0,t\right),$$ where the velocity $w'\left(\eta\right)$ is linearized onto the surface $z=0$. Using the normal-mode and streamfunction representations, this condition is $c\eta=\Psi$, the second interfacial condition. Pressure relation across the interface: for the case with surface tension, the pressure difference over the interface at $z=\eta$ is given by the Young–Laplace equation: $$p=-\sigma\kappa,$$ where $\sigma$ is the surface tension and $\kappa$ is the curvature of the interface, which in a linear approximation is $$\kappa=\eta_{xx}.$$ Thus, $$p=-\sigma\eta_{xx}.$$ However, this condition refers to the total pressure (base + perturbed), thus $$P\left(\eta\right)+p'\left(\eta\right)=-\sigma\eta_{xx}.$$ (As usual, the perturbed quantities can be linearized onto the surface $z=0$.) Using hydrostatic balance, in the form $P=-\rho g z+\text{const.}$, this becomes $$p'=\rho g\eta-\sigma\eta_{xx}\quad\text{on }z=0.$$ The perturbed pressures are evaluated in terms of streamfunctions, using the horizontal momentum equation of the linearised Euler equations for the perturbations, to yield $$p'=\rho c\, D\Psi.$$ Putting this last equation and the jump condition together, $$c\rho\, D\Psi=g\rho\eta+\sigma k^2\eta.$$ Substituting the second interfacial condition $c\eta=\Psi$ and using the normal-mode representation, this relation becomes $$c^2\rho\, D\Psi=g\Psi\rho+\sigma k^2\Psi.$$ Using the solution $\Psi=e^{kz}$, this gives $$c=\sqrt{\frac{g}{k}+\frac{\sigma k}{\rho}}.$$ Since $c=\omega/k$ is the phase speed in terms of the angular frequency $\omega$ and the wavenumber, the gravity wave angular frequency (without surface tension) can be expressed as $$\omega=\sqrt{gk}.$$
The group velocity of a wave (that is, the speed at which a wave packet travels) is given by $$c_g=\frac{d\omega}{dk},$$ and thus for a deep-water gravity wave, $$c_g=\frac{1}{2}\sqrt{\frac{g}{k}}=\frac{1}{2}c.$$ The group velocity is one half the phase velocity. A wave in which the group and phase velocities differ is called dispersive. Gravity waves traveling in shallow water (where the depth is much less than the wavelength) are nondispersive: the phase and group velocities are identical and independent of wavelength and frequency. When the water depth is $h$, $$c=c_g=\sqrt{gh}.$$ Wind waves, as their name suggests, are generated by wind transferring energy from the atmosphere to the ocean's surface, and capillary-gravity waves play an essential role in this effect. There are two distinct mechanisms involved, named after their proponents, Phillips and Miles. In the work of Phillips,[3] the ocean surface is imagined to be initially flat (glassy), and a turbulent wind blows over the surface. When a flow is turbulent, one observes a randomly fluctuating velocity field superimposed on a mean flow (contrast with a laminar flow, in which the fluid motion is ordered and smooth). The fluctuating velocity field gives rise to fluctuating stresses (both tangential and normal) that act on the air-water interface. The normal stress, or fluctuating pressure, acts as a forcing term (much like pushing a swing introduces a forcing term). If the frequency and wavenumber $\left(\omega,k\right)$ of this forcing term match a mode of vibration of the capillary-gravity wave (as derived above), then there is a resonance, and the wave grows in amplitude. As with other resonance effects, the amplitude of this wave grows linearly with time. The air-water interface is now endowed with a surface roughness due to the capillary-gravity waves, and a second phase of wave growth takes place.
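The deep- and shallow-water relations above can be sketched numerically (the constants are typical assumed values for water, and the function names are my own):

```python
import numpy as np

g = 9.81          # gravitational acceleration, m/s^2
sigma = 0.072     # surface tension of water, N/m (assumed typical value)
rho = 1000.0      # density of water, kg/m^3

def phase_speed_deep(k):
    """Deep-water gravity-capillary phase speed c = sqrt(g/k + sigma*k/rho)."""
    return np.sqrt(g / k + sigma * k / rho)

def group_speed_deep_gravity(k):
    """Pure gravity waves in deep water: c_g = (1/2) sqrt(g/k) = c/2."""
    return 0.5 * np.sqrt(g / k)

def speed_shallow(h):
    """Shallow water of depth h: nondispersive, c = c_g = sqrt(g*h)."""
    return np.sqrt(g * h)
```

For long waves (small $k$) the surface-tension term is negligible and the group velocity is half the phase velocity, while in shallow water both speeds collapse to $\sqrt{gh}$.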
A wave established on the surface, either spontaneously as described above or in laboratory conditions, interacts with the turbulent mean flow in a manner described by Miles.[4] This is the so-called critical-layer mechanism. A critical layer forms at a height where the wave speed c equals the mean turbulent flow U. As the flow is turbulent, its mean profile is logarithmic, and its second derivative is thus negative. This is precisely the condition for the mean flow to impart its energy to the interface through the critical layer. This supply of energy to the interface is destabilizing and causes the amplitude of the wave on the interface to grow in time. As in other examples of linear instability, the growth rate of the disturbance in this phase is exponential in time. This Miles–Phillips mechanism can continue until an equilibrium is reached, or until the wind stops transferring energy to the waves (i.e., blowing them along), or until they run out of ocean distance, also known as fetch length.
We have the following generalized eigenvalue (set of) problem(s) $$[K_R(\kappa)]\{u_R\} = \omega^2[M_R(\kappa)]\{u_R\}\quad \forall \kappa \in [\kappa_0, \kappa_1]$$ with \begin{align} &K_R(\kappa) = T^H(\kappa) K T(\kappa)\, ,\\ &M_R(\kappa) = T^H(\kappa) M T(\kappa)\, ,\\ &u = T(\kappa) u_R\, , \end{align} where $K$ and $M$ are sparse and symmetric and come from a PDE, and $T(\kappa)$ is also sparse, non-square, complex, and represents a set of multipoint constraints. Forming the matrices $T(\kappa)$ is relatively cheap compared with the other matrices, since they have a fixed structure and are sparse. If we consider a regular mesh with $n^2$ nodes, we have that: \begin{align} &K\in \mathbb{R}^{n^2\times n^2}\, ,&M\in \mathbb{R}^{n^2\times n^2}\, ,\\ &u\in \mathbb{C}^{n^2}\, , & &\\ &K_R\in \mathbb{C}^{(n^2 - 2n + 1)\times (n^2 - 2n + 1)}\, ,&M_R\in \mathbb{C}^{(n^2 - 2n + 1)\times (n^2 - 2n + 1)}\, ,&\\ &u_R\in \mathbb{C}^{(n^2 - 2n + 1)}\, , & &\\ &T(\kappa) \in \mathbb{C}^{(2n -1)\times(n^2 - 2n + 1)}\, .& & \end{align} We normally handle the problem in one of the following ways: 1. Assemble $K$ and $M$ once and, for each value of $\kappa$, form the product matrices $K_R$ and $M_R$. The main advantage of this method is that we only have to assemble once, but then we lose the sparse nature of the problem. 2. Assemble $K_R$ and $M_R$ from scratch for each $\kappa$ value, conserving the sparse nature of the problem. Question Is there any way of solving the generalized eigenvalue problem $$[T^H(\kappa) K T(\kappa)]\{u_R\} = \omega^2[T^H(\kappa) M T(\kappa)]\{u_R\}\quad \forall \kappa \in [\kappa_0, \kappa_1]\, ,$$ conserving the sparse nature of the system and assembling the matrices $K$ and $M$ only once?
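One way to get both properties is to assemble $K$ and $M$ once and form the triple products $T^H K T$ with sparse matrix products for each $\kappa$, since the product of sparse matrices stays sparse. Here is a SciPy sketch; note the toy 1-D Laplacian, $M = I$, and the Bloch-like constraint $T(\kappa)$ below are stand-ins of my own, not the actual operators from the question:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Assemble K and M once (toy 1-D stand-ins).
n = 200
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
M = sp.identity(n, format="csc")

def reduced_eigs(T, n_modes=6):
    # Sparse triple products: K_R and M_R stay sparse and Hermitian.
    KR = (T.conj().T @ K @ T).tocsc()
    MR = (T.conj().T @ M @ T).tocsc()
    # Shift-invert around 0 to get the smallest eigenvalues.
    return eigsh(KR, k=n_modes, M=MR, sigma=0.0)

# Example constraint T(kappa): tie the last dof to the first with a phase.
kappa = 0.3
rows = np.arange(n)
cols = np.concatenate([np.arange(n - 1), [0]])
vals = np.concatenate([np.ones(n - 1), [np.exp(1j * kappa)]])
T = sp.csc_matrix((vals, (rows, cols)), shape=(n, n - 1))

w, v = reduced_eigs(T)   # 6 smallest generalized eigenvalues, all positive here
```

Since $T(\kappa)$ has a fixed sparsity pattern, the products can also reuse symbolic structure between $\kappa$ values; only the numerical values change.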
The color burst is also an indicator that there is a color signal. This is for compatibility with black-and-white signals: no color burst means a B&W signal, so only the luminance signal is decoded (no chroma). No signal means no color burst, so the decoder falls back to B&W mode. The same idea goes for FM stereo/mono: if there is no 19 kHz subcarrier present, ... In the absence of a valid color burst signal, the "color killer" circuit disables the color difference signals; otherwise you would indeed see colored noise. This is mainly intended for displaying weak signals in B&W without the colored noise. One step further is to mute the entire signal, substitute stable sync signals, and display a blue or black field ... Isn't white noise supposed to have a flat magnitude response? (equal amounts for all frequencies) The expected magnitude response of white noise is flat (this is what JasonR calls the power spectral density). Any particular instance of a white noise sequence will not have precisely flat response (this is what JasonR's comment refers to as the power ... White Gaussian noise in the continuous-time case is not what is called a second-order process (meaning $E[X^2(t)]$ is finite) and so, yes, the variance is infinite. Fortunately, we can never observe a white noise process (whether Gaussian or not) in nature; it is only observable through some kind of device, e.g. a (BIBO-stable) linear filter with transfer ... Linear filtering: The first approach in Peter's answer (i.e. filtering white noise) is a very straightforward approach. In Spectral Audio Signal Processing, JOS gives a low-order filter that can be used to produce a decent approximation, along with an analysis of how well the resulting power spectral density matches the ideal. Linear filtering will always ... You would generate bandlimited Gaussian noise by first generating white noise, then filtering it to the bandwidth that you desire.
As an example:

% design FIR filter to filter noise to half of Nyquist rate
b = fir1(64, 0.5);
% generate Gaussian (normally-distributed) white noise
n = randn(1e4, 1);
% apply the filter to yield bandlimited noise
nb = filter(b, 1, n);

You can use a standard inpainting algorithm. These algorithms replace marked pixels in an image with the pixel values that surround these marked pixels. The challenge here is to detect the grid (my tests seem to show that it is not a completely regular grid). So, I came up with this solution:

from PIL import Image
import requests
from io import BytesIO
...

Yes, you can add AWGN of variance $\sigma^2$ separately to each of the two terms, because the sum of two Gaussians is also a Gaussian and their variances add up. This will have the same effect as adding an AWGN of variance $2\sigma^2$ to the original signal. Here's some more explanation if you're interested. An analytic signal $x(t)=a(t)\sin\left(2\pi f t + \ldots\right)$ ... L1 norm minimization (compressed sensing) can do a relatively better job than conventional Fourier denoising in terms of preserving edges. The procedure is to minimize an objective function $$|x-y|^2 + b|f(y)|$$ where $x$ is the noisy signal, $y$ is the denoised signal, $b$ is the regularization parameter, and $|f(y)|$ is some L1 norm penalty. ... One method that works if there's a relatively strong drum beat is to take the magnitude of the STFT of the waveform, and then auto-correlate it in only the time dimension. The peak of the auto-correlation function will be the beat, or a submultiple of it. This is equivalent to breaking up the signal into a lot of different frequency bands, finding the ...
Intuition: The intuition is this: your noise is some event or events that are rare, and that when compared to other events, look like outliers that shouldn't really be there. For example, if you are measuring the speeds of every car on the highway as they pass by you and plot them, you will see that they are usually in the range of say, $50$ mph to $70$ ... Noise is random, but like most random phenomena, it follows a certain pattern. Different patterns are given different names. Consider rolling a die. This is clearly random. Roll the die 1000 times, keeping track of each result. Then, calculate the histogram of the result; you'll find that you got each of 1, 2, 3, 4, 5 and 6 approximately the same number of ... You could form a statistical test, based on the autocorrelation of the potentially-white sequence. The Digital Signal Processing Handbook suggests the following. This may be implemented in Scilab as below. Running this function over two noise sequences, a white noise one and a lightly filtered white noise one, the following plot results. Script for ... Your question is a bit harsh, because it's kind of vague. I will give you a few points; maybe it will help. What's the same? The intuitions behind both bilateral filtering and anisotropic diffusion are the same: averaging is good to remove random noise; averaging should only concern pixels that belong to the same region (in the sense that they are pixels ... Roughly speaking, they are the amount of noise in your system. Process noise is the noise in the process - if the system is a moving car on the interstate on cruise control, there will be slight variations in the speed due to bumps, hills, winds, and so on. Q tells how much variance and covariance there is. The diagonal of Q contains the variance of each ... Basic dithering without noise shaping: Basic dithered quantization without noise shaping works like this: Figure 1. Basic dithered quantization system diagram.
Noise is zero-mean triangular dither with a maximum absolute value of 1. Rounding is to nearest integer. Residual error is the difference between output and input, and is calculated for analysis only. ... There is another Wikipedia entry on Wiener filtering more applicable to image processing. To summarize (and convert to 2D), given a system: $$y(n,m) = h(n,m) * x(n,m) + v(n,m)$$ where $*$ denotes convolution, $x$ is the (unknown) true image, $h$ is the impulse response of a linear, time-invariant filter, and $v$ is additive unknown noise independent of $x$ ... I read your original question and wasn't quite sure what you were getting at, but it's quite a lot clearer now. The problem you have is that the brain is extremely good at picking out speech and emotion even when the background noise is very high, which is why your existing attempts have only been of limited success. I think the key to getting what you want is ... Starting at an even more basic level than the other (much smarter) answers, I'd like to pick up on this part of the question: "This seems contradictory to me as on one side it is random then on the other side their distribution is considered normally distributed." Perhaps the issue here is what 'random' means? To be clear: 'random' and 'normally-... You might need to consider more advanced techniques. Here are two recent papers on edge-preserving denoising: Edge-Preserving Image Denoising via Optimal Color Space Projection [in color]. This paper preserves edges by decomposing the image into an "optimal" color space and performing wavelet shrinkage. The optimal color space belongs to the luminance/color-... Background: According to the papers below, snoring is characterized by a peak at about 130 Hz, and is wholly concentrated below 12 kHz: Non-invasive Sensors based Human State in Nightlong Sleep Analysis for Home-Care; An efficient fast method of snore detection for sleep disorder investigation; An efficient method for snore/nonsnore classification of sleep sounds ...
You're probably looking for the Hough transform or one of its extensions. The simplest version of this transform is linear and appropriate for detecting straight lines. In the transformed space (Hough space), angles and distances are found as points where curves intersect. Libraries for calculating the Hough transform exist in C++ - OpenCV (has ... Math tools: We can do the calculation using some basic elements of probability theory and Fourier analysis. There are three elements (we denote the probability density of a random variable $X$ at value $x$ as $P_X(x)$): Given a random variable $X$ with distribution $P_X(x)$, the distribution of the scaled variable $Y = aX$ is $P_Y(y) = (1/a)P_X(y/a)$. The ... I'm not sure specifically what you're looking for here. Noise is typically described via its power spectral density, or equivalently its autocorrelation function; the autocorrelation function of a random process and its PSD are a Fourier transform pair. White noise, for example, has an impulsive autocorrelation; this transforms to a flat power spectrum in ... Just as a for-instance, de-clicking might be considered a part of a de-noising system. Removing clicks comes up in digitizing vinyl audio records - dust that cannot be removed without damaging the substrate can cause an audible click in the digitized audio signal. There are systems that can detect and remove these clicks that use model-based estimators to ... De-noising is about the goal, and filtering is about the technique you employ. You can obviously de-noise via filtering. For example, if you know that your system cannot transmit frequencies above a certain threshold, you can apply a low-pass filter. However, you can de-noise by other techniques as well, such as by averaging multiple recordings of a signal. ...
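The last point above, de-noising by averaging multiple recordings, relies on the noise variance of the mean shrinking like $1/N$ (so the noise standard deviation falls by $\sqrt{N}$); a quick sketch, with signal and noise levels chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t)   # underlying repeatable signal
sigma = 0.5                          # noise std per recording
N = 100                              # number of recordings

# Each recording = same signal + independent white Gaussian noise.
recordings = clean + sigma * rng.standard_normal((N, t.size))
averaged = recordings.mean(axis=0)

# Residual noise std of the average drops by roughly sqrt(N) = 10x.
err_single = np.std(recordings[0] - clean)
err_avg = np.std(averaged - clean)
```

This only works when the signal is repeatable across recordings and the noise is independent between them, which is exactly the situation the quoted answer has in mind.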
Just throwing this in here to cover all the possibilities: you might be able to use entropy. I don't know what the entropy level of snoring vs. speech is, but if it is different enough, that may work. http://www.ee.columbia.edu/~dpwe/papers/ShenHL98-endpoint.pdf "Noise" in this context refers to anything unwanted added to the signal; it doesn't necessarily mean it is Gaussian noise, white noise, or any random well-described process. In the context of quantization, it is a purely algebraic argument. One can view quantization as the addition of an unwanted signal ("noise") equal to... the difference between the ... Wiener deconvolution is an approach to solve the deconvolution problem that relies on the filter proposed by Wiener. The equation is the same in denoising and deblurring, except that the filter $G$ (to stick with Wikipedia's notations) that you should use is different. To make things clear: denoising consists in the case where the degradation kernel $H$ is ...
The main problem with your circuit is that the time constant of 330 ohms and 1\$\mu\$F is only \$330\mu sec\$, which is not all that long, and the LED will not be especially bright for that brief time. You have to consider the physiological response of the human eye. Your eye acts as a kind of integrator over a period in the 100msec range, so a very bright pulse of light for a short time (such as \$330\mu sec\$) would indeed be visible, but it would have to be about 300 times brighter than a continuously-on light to achieve the same apparent brightness. So, an LED that is acceptably bright at 2mA would need a 600mA pulse for ~300\$\mu sec\$, or to have a similar chunk of \$ current \cdot time \$ metered out to it. Since that has to come from the inverter output, it's a bit much to ask. You could use much higher value resistors (such as 300K) and feed that to another gate, using the output of that gate to drive the LED. As an alternative, this would be a great application for a dual monostable multivibrator such as a 74HC123. The complexity is not very different (4 resistors, 2 capacitors, 1 chip and no diodes). It's a bit different because this circuit does not stretch the existing pulse; rather, it produces a visible chunk of light on every valid edge (either positive or negative, depending how you wire it) of the input signal. Drive the /A or B input of each multivibrator with the BT signal and tie the other one inactive. (For example, if you want it to trigger on the falling edge, use the /A input and tie the B input high.) The reset input /R is active low, so it should be tied high. You can drive the LEDs by connecting them to Vcc through a suitable current-limiting resistor (such as 330 \$ \Omega\$ ) from the /Q outputs. The time constant from Rx and Cx should result in a pulse that is easily visible, so somewhere in the 200msec range would be good. For the TI part, the time is \$ t_W = R_X \cdot C_X \$, so 470K and 1uF would be reasonable.
The 1uF capacitor has to supply only microamperes to keep the monostable in action, and the monostable output does the heavy lifting - providing ~10mA for ~200msec to the LED.
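The timing arithmetic above is just products of R and C; a quick check of the numbers quoted (330 Ω with 1 µF for the original circuit, 470 kΩ with 1 µF for the '123 using the cited $t_W = R_X C_X$):

```python
# Pulse-stretcher arithmetic from the discussion above.
R1, C1 = 330.0, 1e-6            # original circuit: 330 ohm, 1 uF
tau = R1 * C1                   # = 330 microseconds, too brief to see well

# Monostable pulse width, t_W = Rx * Cx (74HC123 formula cited above)
Rx, Cx = 470e3, 1e-6
t_w = Rx * Cx                   # = 0.47 s, comfortably visible

# The eye integrates over roughly 100 ms, so a 330 us flash needs about
# 0.1 / 330e-6 ~ 300x the current for similar apparent brightness.
brightness_factor = 0.1 / tau
```

The ~300x figure is where the answer's "600 mA pulse for an LED that is acceptably bright at 2 mA" comes from.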
The local standard of rest (LSR) is the restframe circular orbit of a star at the position of the Sun in the azimuthally averaged Galactic potential. The Sun moves with 3D Galactic velocity coordinates of (11, 12, 7) km/s with respect to the LSR (Schonrich et al. 2009), i.e. the Sun has a velocity component (wrt the LSR) of 11 km/s towards the Galactic centre (it of course has a tangential velocity of $>200$ km/s). Sjouwerman et al. (1998) measure the line-of-sight velocities of 229 maser sources towards the Galactic centre, finding a mean velocity wrt the LSR of $4 \pm 5$ km/s. Reid et al. (2007) find a mean line-of-sight velocity, with respect to the LSR, for masers even closer to the Galactic centre of $-22 \pm 28$ km/s. Li et al. (2010) measure velocities for 20 maser sources within 2 pc projected distance of the Galactic centre, finding a mean velocity wrt the LSR of $5 \pm 11$ km/s. I.e. the Sun travels at no more than $\sim 10$ km/s radially with respect to the Galactic centre. The tangential velocity of the Sun can be fixed with respect to the Galactic centre by observing the proper motion of Sgr A* (Reid & Brunthaler 2004), which they find is almost entirely along the Galactic plane (i.e. almost no vertical motion of the Sun wrt the Galactic plane). Assuming a distance of $8.0\pm 0.5$ kpc to the centre (for which there is a variety of evidence), the tangential velocity of the Sun is $241 \pm 15$ km/s, translating to an LSR tangential velocity of $236\pm 15$ km/s wrt the Galactic centre. The Sun's motion wrt the Galactic centre is therefore almost entirely tangential, and in the absence of anything but a nearly axially symmetric Galactic potential, the Sun executes a nearly circular orbit, with epicycles in the radial and vertical directions. The speed of the Sun's orbit and the amplitude and period of its epicycles depend on the size and shape of the Galactic potential.
I suppose you could hypothesise that the Sun was at the apogee of a highly elliptical, or otherwise non-circular, orbit, but then the fact that it has very similar kinematics to 99 per cent of the nearby stars means they too would have to be on highly elliptical orbits with similar apogees (as they are moving in the same potential). But why would this be? Why should stars born over billions of years in different parts of the Galaxy have organised themselves to align their semi-major axes? By far the simplest explanation is that the orbits are close to circular, and that is why the solar peculiar motion is small wrt most stars - but large wrt halo (Population II) stars, which do have highly elliptical orbits and little circular motion. EDIT: Part of the premise of this question is incorrect, since stars do not execute Keplerian orbits in the potential of a disk galaxy (see Why don't stars have Keplerian orbits?). Keplerian orbits apply either to cases where objects orbit a much larger, point-like mass, or are orbiting in a spherically symmetric mass distribution where Newton's shell theorem can be applied. Neither of these is true in detail for the Milky Way, and serious research work does not make these assumptions unless they are shown to be reasonable (e.g. for objects at large distances from the centre of the Galaxy). As it says in the Introduction of the classic "Galactic Dynamics" by Binney & Tremaine, 2nd ed. (2008): "The simplest approximate dynamical description of the Galaxy is obtained by assuming that its mass distribution is spherical. Let the mass interior to radius r be M(r). From Newton's theorems the gravitational acceleration at radius r is equal to that of a point whose mass is the same as the total mass interior to r; thus the inward acceleration is $GM(r)/r^2$, where the gravitational constant $G = 6.674\times 10^{-11}\ \mathrm{m^{3}\,kg^{-1}\,s^{-2}}$.
The central or centripetal acceleration required to hold a body in a circular orbit with speed $v_0$ is $v_0^{2}/r$. Thus the mass interior to the solar radius $R_0$ in this crude model is $M(R_0)=v_0^{2}R_0/G$. The approximation that the mass distribution is spherical is reasonable for the dark halo, but not for the flat stellar disk." The closed/not-closed issue I don't understand: galactic orbits are not closed at all, in the sense that orbital paths do not repeat; they undergo epicycles because the potential is not that of a point mass. A "spiral" orbit would imply that energy was being dissipated somehow (or added, if the spiral were outward) - but the motion of stars in the Galactic potential is essentially collisionless; there is no reason that this should happen.
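The crude spherical estimate $M(R_0)=v_0^{2}R_0/G$ quoted above can be evaluated with the numbers given earlier ($v_0 \approx 236$ km/s, $R_0 \approx 8$ kpc); this is a sketch of the arithmetic, not a serious mass measurement:

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
kpc = 3.0857e19          # metres per kiloparsec
Msun = 1.989e30          # solar mass, kg

v0 = 236e3               # LSR circular speed from the text, m/s
R0 = 8.0 * kpc           # solar galactocentric radius from the text

# Mass interior to the solar circle in the spherical approximation.
M = v0**2 * R0 / G
M_in_suns = M / Msun     # comes out near 1e11 solar masses
```

The result, roughly $10^{11}$ solar masses inside the solar circle, is the standard order-of-magnitude figure this approximation is known for.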
For a universe described by the RW metric, a relation between the scale factor at the time of emission of light and the redshift can be derived, and yields $$ a(t_e) = \frac{1}{1+z} $$ The above equation depends only on the time of emission and the redshift. The above relation implies that the redshift of any light from any source at any distance is the same, and depends only on the scale factor of the universe at the time of emission. In an isotropic and homogeneous universe (on large scales), the scale factor is only a function of time. How does this square with Hubble's law, which states that there is a relation between the redshift and the distance between galaxies? The short answer is that, as you said, the redshift depends upon the scale factor at the time of transmission (as compared to the present). Since light travels at a finite speed, light from more distant sources was transmitted at a different time and hence a different scale factor. Your redshift equation does NOT imply the same redshift for any distance; I think you were just misinterpreting it, forgetting that light we're currently receiving from distant and near (relatively speaking) stars was released at VERY different times (and hence scale factors). The Hubble relation follows directly from the redshift equation for an expanding universe. Define a galaxy to be at a distance $D$, where $D$ changes with the scale factor $$\frac{D(t)}{D_0} = a(t),$$ where $t$ is the time of light emission and $a_0=1$. The recession velocity is $$ v = \dot{D}(t) = D_0 \dot{a}(t).$$ If we write $H = \dot{a}/a$, then $$ v = D_0 H a(t) = HD(t).$$ This is the fundamental Hubble relationship. But the linear relationship with $z$ is an approximation for small $z$ and where $H$ does not change greatly with time: $$ z = a(t)^{-1} -1 \simeq (a_0-a_0H_0t)^{-1} -1 \simeq H_0t.$$ If we say $t \simeq D/c$, then $$cz = H_0 D.$$ However, this relationship is not true at very, very small redshift.
The objects have to be far enough away that their peculiar velocities are small with respect to the "Hubble flow", so that there is a nearly unique relationship between distance, scale factor and time of emission.
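The small-$z$ approximation in the derivation above can be checked numerically: with $a_0=1$, the exact $z = (1-H_0t)^{-1}-1$ versus the linear $z \approx H_0 t$ with $t \approx D/c$ (the value 70 km/s/Mpc for $H_0$ is an assumption for illustration):

```python
import numpy as np

Mpc = 3.0857e22                 # metres per megaparsec
H0 = 70.0 * 1e3 / Mpc           # assumed Hubble constant, s^-1
c = 2.998e8                     # speed of light, m/s

# Lookback time t ~ D/c, scale factor at emission a ~ 1 - H0*t (a0 = 1).
D = np.array([10.0, 100.0, 1000.0]) * Mpc
t = D / c
a_emit = 1.0 - H0 * t

z_exact = 1.0 / a_emit - 1.0    # z = 1/a - 1, as in the text
z_linear = H0 * t               # linearized Hubble law, cz = H0 * D
rel_err = z_exact / z_linear - 1.0
```

At 10 Mpc the two agree to better than one percent; at 1000 Mpc the linear law already underestimates the redshift by tens of percent, which is the "small $z$, slowly varying $H$" caveat above made quantitative.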
K Mazumdar Articles written in Pramana – Journal of Physics Volume 63 Issue 6 December 2004 pp 1359-1365 Amol Dighe, Anirban Kundu, K Agashe, B Anantanarayan, A Chandra, A Datta, P K Das, S P Das, A Dighe, R Forty, D K Ghosh, Y-Y Keum, A Kundu, N Mahajan, S Majhi, G Mazumdar, K Mazumdar, P Mehta, Y Nir, J P Saha, R Singh, N Sinha, R Sinha, A Soni, S Uma Sankar, R Vaidya This is a report of the low energy and flavour physics working group at WHEPP-8, held at the Indian Institute of Technology, Mumbai, India, during 5-16 January 2004. Volume 74 Issue 1 January 2010 pp 39-47 Research Articles The branching ratio for the $B_{s} \rightarrow \ell^{+} \ell^{-}\gamma$ mode is of the same order as $B_{s} \rightarrow \ell^{+} \ell^{-}$, since there is no helicity suppression in the 3-body decay mode. New Physics beyond the Standard Model may affect these rates favourably for experimental observation at the LHC, and simultaneous measurements of the modes $B_{s} \rightarrow \mu^{+} \mu^{-}$ and $B_{s} \rightarrow \mu^{+} \mu^{-} \gamma$ at an LHC experiment will indicate the basic nature of the interaction at play. A simulation study has been performed to evaluate the potential of the CMS detector to observe the more difficult mode $B_{s} \rightarrow \mu^{+} \mu^{-} \gamma$. An upper limit of $2.08 \times 10^{-7}$ on the branching ratio is expected to be achieved corresponding to an integrated luminosity of 10 fb$^{-1}$. Volume 74 Issue 2 February 2010 pp 231-246 Research Articles In several scenarios of Beyond Standard Model physics, the invisible decay mode of the Higgs boson is an interesting possibility. The search strategy for an invisible Higgs boson at the Large Hadron Collider (LHC), using the weak boson fusion process, has been studied in detail, taking into account all possible backgrounds.
Realistic simulations have been used in the context of the CMS experiment to devise a set of event selection criteria which eventually enhances the signal contribution compared to the background processes in characteristic distributions. In the cut-based analysis, the multi-jet background is found to overwhelm the signal in the finally selected sample. With an integrated luminosity of 10 fb$^{-1}$, an upper limit of 36% on the branching ratio can be obtained for a Higgs boson with a mass of 120 GeV/c$^2$ at an LHC energy of 14 TeV. Since the analysis essentially depends on the background estimation, detailed studies have been done to determine the background rates from real data. Volume 75 Issue 3 September 2010 pp 439-448 Research Articles In several scenarios of beyond Standard Model physics a new heavy resonance is invoked which may decay, preferentially, to a pair of taus. Identification of the decay of the Standard Model $Z$ resonance to tau pairs at the LHC, via subsequent decays of the taus to leptons as well as hadrons, is the first step towards the discovery. A method has been suggested to discriminate the $Z \to \tau^{+}\tau^{-} \to$ electron $+$ muon final state against various backgrounds, for the early phase of the 14 TeV LHC. Volume 75 Issue 4 October 2010 pp 639-648 Research Articles As $B_{s}$-mesons will be produced abundantly at the LHC, the observability of the flavour-changing-neutral-current decay mode $B_{s} \rightarrow \phi \mu^{+} \mu^{-}$ has been studied in CMS at the LHC centre-of-mass energy of 10 TeV. With an integrated luminosity of 100 pb$^{-1}$, an upper limit of $6.7 \times 10^{-6}$ on the branching ratio is expected to be obtained. The potential at 7 TeV with a luminosity of 1 fb$^{-1}$ is expected to be better. Volume 76 Issue 3 March 2011 pp 421-430 The Drell–Yan process at the LHC, $q\bar{q} \to Z/ \gamma^{\ast} \to \ell^+ \ell^-$, is one of the benchmarks for confirmation of the Standard Model at the TeV energy scale.
Since the theoretical prediction for the rate is precise and the final state is clean as well as relatively easy to measure, the process can be studied at the LHC even at relatively low luminosity. Importantly, the Drell–Yan process is an irreducible background to several searches for beyond Standard Model physics, and hence the rates at LHC energies need to be measured accurately. In the present study, the methods for measurement of the Drell–Yan mass spectrum and the estimation of the cross-section have been developed for LHC operation at a centre-of-mass energy of 10 TeV and an integrated luminosity of 100 pb$^{-1}$ in the context of the CMS experiment.
The key to the efficiency of an antenna (whether for transmitting or receiving - the two processes are essentially reciprocal) is resonance, and impedance matching with the source / receiver. The size also matters in terms of the relationship between power and current. A nice analysis of the impact of size of an antenna on the power/current relationship is given at this site. Summarizing: The current in a dipole antenna goes linearly from a maximum at the center to zero at the end. Because the amplitude of the generated E-field from a given point is proportional to the current at that point, the average power dissipated is (equation 3A2 from the above link): $$\left<P\right>=\frac{\pi^2}{3c}\left(\frac{I_0 \ell}{\lambda}\right)^2$$ (note - this is in cgs units... more about that later). For the same current, as you double the length of your (much shorter than $\lambda/4$) antenna, you quadruple the power. Directly related to this concept of power is the concept of radiation resistance: if you think of your antenna as a resistor into which you are dissipating power, then you know that $$\left<P\right> = \frac12 I^2 R$$ and combining that with the above equation for power, we see that we can get an expression for the radiation resistance $$R = \frac{2\pi^2}{3c}\left(\frac{\ell}{\lambda}\right)^2$$ This is still in cgs, which will drive most electrical engineers nuts. 
Converting to SI units (so we get resistance in Ohms) we just need a scale factor of $c^2/10^{9}$ (with $c$ in cgs units...); thus we get a simple approximation for radiation resistance in SI units (I now go from $c=2.998\times10^{10}~\rm{cm/s}$ to $c=2.998\times10^8~\rm{m/s}$): $$R = \frac{2\pi^2 c}{3\times 10^{7}}\left(\frac{\ell}{\lambda}\right)^2 \approx 20\pi^2\left(\frac{\ell}{\lambda}\right)^2~\Omega,$$ which agrees nicely with the expression given at this calculator for an electrically short dipole (note - their expression is for $\ell_{eff}$, which is $\ell/2$ for a short dipole; and they use slightly rounded numbers, which is OK since there are some approximations going on anyway). But if we are driving with a 50 Ohm cable, and our antenna represents a much smaller "resistance", then most of the power would be reflected and we don't get a good coupling of power into the antenna (remember - because of reciprocity, everything I say about transmitting is true for receiving... but intuitively the transmission case is so much easier to grasp). So to get good efficiency, we need to make sure there is an impedance match between our antenna and the transmitter / receiver. If you know what frequency you are working at, impedance matching can be done with a simple LC circuit: the series LC represents a low impedance to the antenna, but a high impedance to the receiver. In the process, they convert the large current in the antenna into a large voltage for the receiver (source of image and detailed explanation). This is an example of resonant matching: it works well at a specific frequency. One can use signal transformers to achieve the same thing over a wider range of frequencies - but this loses you some of the advantages of resonance (all frequencies are amplified equally). It remains to be shown what the real effect is of reducing antenna size on the received signal. For this, the most extensive reference I could find was this MIT open course lecture.
Starting on page 121, this shows that the effective length of a dipole determines how much of the incoming energy can be "harvested", and it again shows that the power is proportional to the square of the size. So an antenna that is twice as short will collect four times less power. But that means it will also collect four times less noise. As long as most of the noise in the system comes from "outside", the ratio (SNR) will be the same, and you don't suffer from the smaller antenna. This changes once the antenna becomes so small that other sources of noise become significant. It is reasonable to think that this will happen when the conductive (lossy) resistance of the antenna becomes comparable to the radiation resistance. But since the former scales with the length of the antenna, and the latter with the square of the length, it is obvious there will be a size at which the non-ideal effects will dominate. The better the conductors, and the better the amplifiers, the smaller the antenna can be. Summary: So yes, the power transmitted drops with the square of the length, making a short antenna less efficient as a transmitter (and therefore, as a receiver). Much of the time, though, you care about the signal-to-noise ratio - is there more signal than noise coming from your antenna? For this, we need to look at the Q of the antenna (bandwidth). The higher the Q, the more gain you have at just the frequency of interest (because of resonance); while "noise" is a wide-band phenomenon, "signal" is a narrow-band one, so a high Q amplifies the signal without amplifying (all the) noise. If we can make an antenna with a high Q, then it doesn't matter so much that it is short.
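To make the summary concrete, a sketch evaluating the short-dipole radiation resistance $R \approx 20\pi^2(\ell/\lambda)^2\ \Omega$ and the resulting mismatch against a 50 Ω line; the length ratio is an arbitrary example:

```python
import numpy as np

c = 2.998e8   # speed of light, m/s

def rad_resistance(l_over_lambda):
    # Short-dipole radiation resistance, R = 2*pi^2*c/(3e7) * (l/lambda)^2,
    # i.e. roughly 197 * (l/lambda)^2 ohms (equivalently 20*pi^2*(l/lambda)^2).
    return 2 * np.pi**2 * c / 3e7 * l_over_lambda**2

R = rad_resistance(0.05)          # a dipole 1/20 of a wavelength long

# Power reflection coefficient when driven directly from a 50-ohm line:
Z0 = 50.0
gamma = (R - Z0) / (R + Z0)
reflected_fraction = gamma**2     # most power bounces back without matching
```

For $\ell/\lambda = 0.05$ the radiation resistance is about half an ohm, so over 95% of the power reflects from a 50 Ω feed; this is exactly why the matching network in the answer matters.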
I am reading through documentation related to Funding Valuation Adjustments (FVA) which discusses the risk free rate and funding matters, and the following question came to my mind: in risk neutral valuation theory, why do we require the risk free rate to be risk free? Indeed, let's assume a Black-Scholes framework but with a stochastic risk free interest rate, whose dynamics are specified by the Hull-White model: $$ \begin{align} & dS_t = \mu S_tdt + \sigma_S S_tdW_t^{(S)} \\[6pt] & dr_t = (\theta_t-\alpha r_t) dt + \sigma_rdW_t^{(r)} \\[6pt] &dW_t^{(S)}\cdot dW_t^{(r)}=\rho_{S,r}dt \end{align} $$ The way I see it is that the risk free rate is supposed to be free of credit risk $-$ indeed, in a stochastic rate framework, this rate nonetheless has market risk. However, nowhere in the specifications of the above model does credit risk appear: it seems to me that $(r_t)_{t \geq 0}$ could represent any rate process. I see 2 situations where it could really make sense to speak about a risk free rate: In the original Black-Scholes world, the risk free rate is indeed free of risk because it is the unique process which does not have a random component $-$ it is constant, hence additionally it is also free from market risk. If we were modelling asset prices $(S_t)_{t \geq 0}$ with some jump component $-$ to represent default $-$ and the risk free rate was the unique price process free from this credit risk, then it would also make sense to speak about a risk free rate. Generally speaking, it seems to me that we can speak of a risk free rate when the process $(r_t)_{t \geq 0}$ lacks a type of risk that all other assets have $-$ market risk, credit risk. However, I have the impression that in practice the 2 modelling choices above are not common: jump processes are not widely used for pricing, and complex, hybrid and long-dated derivatives tend to be priced with stochastic rates, if I am not mistaken.
Hence it seems like $(r_t)_{t \geq 0}$ could very well be anything, for example and importantly the option hedger's cost of funding. The only characteristic of the risk free rate I can think of that might justify its importance is the assumption that any market participant can lend and borrow (without limit) at that rate $-$ hence it represents some sort of "average" or "market" funding rate, like Libor for example. But this does not mean it should be risk free; it does not justify the name of the rate. Why then stress so much the risk free part, why does the rate need to be free from risk? Couldn't the process $(r_t)_{t \geq 0}$ simply represent the option writer's cost of funding? What am I missing? P.S.: note that I am not asking why there should be a risk free rate; rather, I am asking why, within the framework of option risk neutral valuation, we have required the "reference" rate under which we discount cash flows in the valuation measure $\mathbb{Q}$ to be free from risk. Edit 1: my question is mostly a theoretical one. From a practical point of view, my thinking is that the choice of rate used for discounting under $\mathbb{Q}$ $-$ hence to price derivatives $-$ is mostly driven by funding considerations; happily, in a collateralised environment, these funding rates (OIS, Fed Funds) happen to be good proxies for a risk free rate, and so there is a matching between theory and practice $-$ though maybe my thinking/belief here is wrong.
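For concreteness, the Black-Scholes-plus-Hull-White dynamics written above can be simulated with a simple Euler-Maruyama scheme; all parameter values here are illustrative assumptions, not from the text, and $\theta_t$ is taken constant for simplicity:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative model parameters (assumptions, not from the question).
mu, sigma_S = 0.05, 0.2          # equity drift and volatility
alpha, sigma_r = 0.1, 0.01       # Hull-White mean reversion and volatility
theta = 0.004                    # constant theta_t, long-run level theta/alpha
rho = -0.3                       # equity-rate correlation rho_{S,r}

T_end, n_steps, n_paths = 1.0, 252, 10000
dt = T_end / n_steps

S = np.full(n_paths, 100.0)      # initial stock price
r = np.full(n_paths, 0.03)       # initial short rate

for _ in range(n_steps):
    # Correlated Brownian increments via a Cholesky-style construction.
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    # Log-Euler step for S (exact for constant coefficients), Euler for r.
    S *= np.exp((mu - 0.5 * sigma_S**2) * dt + sigma_S * np.sqrt(dt) * z1)
    r += (theta - alpha * r) * dt + sigma_r * np.sqrt(dt) * z2
```

Nothing in the simulation distinguishes a "risk free" $r_t$ from any other rate process, which is precisely the observation the question is making: the label comes from the economic role assigned to the rate (unlimited lending/borrowing, no default), not from the SDE itself.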
Motivation: I'm using a 2D regular grid (it's actually a quadtree, but I can still treat it as a finite difference scheme if I weight-average the solution over smaller-scale cells for the purpose of estimating the Laplacian at the neighbouring points) to discretize my Poisson equation, and I'd like to solve it iteratively. The discretized Poisson equation $\Delta f = g$ reads $$ \frac{f_{i+1,j} + f_{i-1,j} + f_{i,j+1} + f_{i,j-1} - 4f_{i,j}}{h^2} = g_{i,j} $$ or, as an iterative method, $$ f_{i,j} = \frac{1}{4} \left( f_{i+1,j} + f_{i-1,j} + f_{i,j+1} + f_{i,j-1} - h^2 g_{i,j} \right) $$ meaning that if the $g_{i,j}$ are given, I traverse the grid and update $f_{i,j}$ based on the neighbour values of $f$ from the previous iteration (and since I don't keep the old values in memory, and always use the most recent values, I think the method is actually called "Gauss-Seidel"). I believe that this iterative method is identical to the Jacobi/Gauss-Seidel method used to solve linear equations (when formulated via a matrix and an RHS vector). Error estimation: There are several formulas to assess the error (let's call it $d$) of the solution. All have the same property: solution converges $\implies$ $d$ goes to zero, but each has some quirks that I will now describe. The first few formulas probably anyone tries are ($N$ is the number of grid points, the occasionally appearing $\varepsilon$ is to make sure denominators are not zero, and $\alpha$ is some chosen exponent): $$ d = \sum_{i,j} \left| f_{i,j} - f^{(old)}_{i,j} \right|^\alpha $$ $$ d = \sum_{i,j} \left| \frac{f_{i,j} - f^{(old)}_{i,j}}{|f_{i,j}| + \varepsilon} \right|^\alpha $$ $$ d = \frac{1}{N} \sum_{i,j} \left| f_{i,j} - f^{(old)}_{i,j} \right|^\alpha $$ However, the problem with these is that they smooth out possible local non-convergence (if one point or group of points is not converging as quickly as the others, it will get killed by the $1/N$ term or something similar).
Hence, I started to use something like the following $$ d = \max_{i, j} \left| f_{i,j} - f^{(\text{old})}_{i,j} \right| $$ (again, with possible variations, like terms divided by the value of $f_{i,j}$ to make it dimensionless, etc.) The problem: even when the cutoff on $d$ is small, like 0.01 (so the computation stops when $d < 0.01$), the solution may still be far from the true solution (certainly farther than within one percent), so it seems like the difference between successive iterations is not enough to truthfully assess the error between the iterative solution and the true solution. None of the papers I have read on this address the question "when to stop the iteration"; it is somehow generally understood that the answer is "when it stops changing", but that might not be enough (sometimes if I let it run for twice as many steps, I still get a much better solution, even though $d$ is already ridiculously small, like $10^{-6}$). Of course, I know it's hard to estimate how far we are from the solution, especially since knowing the true values of the solution would eliminate the need to solve the Poisson equation in the first place. The question: What is the best upper bound $d$ on the error of Jacobi iterations (given we don't know the true solution, only these iterative solutions) that still satisfies: iterative solution converges to the true solution $\iff$ $d \to 0$, and reflects the "closeness" of the approximate solution to the true solution more faithfully? (For example, if $f_{i,j}$ is 5 percent off from the true solution, the value of $d$ should be something more realistic, like $0.05$, rather than $0.001$.)
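To make the mismatch concrete, here is a minimal self-contained sketch of the Gauss-Seidel sweep with the max-update stopping rule (plain Python/NumPy; the manufactured solution, grid size and tolerance are arbitrary choices, not from the original setup):

```python
import numpy as np

def gauss_seidel_poisson(g, h, tol=1e-8, max_iter=20000):
    """Gauss-Seidel sweeps for the 5-point discretization of
    Laplacian(f) = g with f = 0 on the boundary.  Stops when the
    max-norm of the update (the quantity d above) drops below tol."""
    f = np.zeros_like(g)
    n, m = g.shape
    for it in range(max_iter):
        d = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                new = 0.25 * (f[i+1, j] + f[i-1, j] + f[i, j+1] + f[i, j-1]
                              - h**2 * g[i, j])
                d = max(d, abs(new - f[i, j]))
                f[i, j] = new  # in-place update => Gauss-Seidel, not Jacobi
        if d < tol:
            break
    return f, it + 1, d

# Manufactured solution on the unit square: f = sin(pi x) sin(pi y),
# so g = Laplacian(f) = -2 pi^2 f, with f = 0 on the boundary.
n = 17
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing='ij')
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
g = -2.0 * np.pi**2 * exact

f, iters, d = gauss_seidel_poisson(g, h)
err = np.max(np.abs(f - exact))
```

With these numbers the stopping quantity `d` falls below the $10^{-8}$ tolerance while the actual error against the true solution stays near $3\times 10^{-3}$ (dominated by the $h^2$ discretization error), illustrating that a small update $d$ does not by itself bound the distance to the true solution.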
I would like to discuss the stability argument in a bit more detail, since it is correct that static longitudinal stability is the main reason why these aircraft are not often developed; however, the reasoning given in the other posts is incomplete or not entirely correct. First of all, a flying wing indeed has a very small stability margin. This can be addressed in two ways. One is to use unconventional wing designs, which has the problem of largely defeating the efficiency gain of the flying wing configuration. The other, employed by the B-2 Spirit, is to use an active controller to drive the control surfaces; this has the drawback of increasing the complexity of the aircraft, and passing certification tests becomes even harder. some reference. Static longitudinal stability I'm going to explain static longitudinal stability in a bit more detail. First we define stability: to be stable means that whenever a small excitation is applied to the object, the object will "recover" by itself. Longitudinal stability means that an excitation in the longitudinal direction, thus a change in pitch/angle of attack ($\alpha$), needs to be countered by "some" moment. Since an aircraft during cruise is in equilibrium, an increase in angle of attack should lead to a negative response moment, and a reduction in angle of attack should lead to a positive response moment. Or in a mathematical way (definition): $$\frac{\partial M}{\partial\alpha} < 0$$ A simple wing Now let us first look at a simple configuration: just a wing. Since the lift generated by a wing is due to a distributed force, a wing will always have both a lifting force and a lifting moment (except at a single point where the moment is zero; however, this point changes with flying conditions). In aviation we non-dimensionalize these quantities for simplicity's sake, so we have a force coefficient $C_L$ and a moment coefficient $C_M$. On an airfoil there is also a point where the relation between $C_L$ and $C_M$ doesn't change with angle of attack.
This point is called the aerodynamic center; it is a fixed point determined by the airfoil shape, and it is hence used as the reference point for moment calculations. So (by definition): $$\left( \frac{dC_m}{dC_l} \right)_{a.c.} = 0 $$ Now, since a wing always generates more lift at a higher angle of attack, and in fact we consider the $C_L$-$\alpha$ curve to be linear (for stability we consider small changes in angle of attack), the following holds: $$ \frac{d C_L}{d \alpha} = C_{L_\alpha} > 0 $$ Together with the earlier equation, for the wing alone: $$ \frac{d C_M}{d \alpha} = C_{M_\alpha} > 0 $$ conventional aircraft I first wish to address the stability of conventional aircraft at this point, as there seems to be a lot of contradicting information. For this, consider the following configuration (notice that the points where the lift "attaches" to the wing and tail are defined to be the aerodynamic centers for these calculations; we could use any point, but using the a.c. reduces complexity a lot). From the static equilibrium equations: $$W = L_W + L_t$$ $$L_W = \frac{1}{2}\rho V^2 S_w \frac{dC_L}{d\alpha}(\alpha - \alpha_0)$$ (the above is just the lift equation, which defines $C_L$). The lift of the tailplane is more complex, due to the non-negligible downwash ($\epsilon$) of the main wing on the airflow at the tail ($C_l$ = lift coefficient of the tail section). Simplifying, we consider the horizontal tailplane to be a symmetric airfoil, so the lift of the tailplane at $\eta=0$ is zero.
$$L_t = \frac{1}{2}\rho V^2 S_t \left( \frac{d C_l}{d \alpha}\, \alpha \left( 1 - \frac{d \epsilon}{d \alpha} \right) + \frac{d C_l}{d\eta}\eta \right)$$ Similarly, the moment equation can be written: $$M = L_W x_g - (l_t - x_g) L_t$$ Now, from the very first equation again, the partial derivative of the moment with respect to the angle of attack needs to be negative: $$\frac{\partial M}{\partial \alpha} = x_g \frac{\partial L_W}{\partial \alpha} - (l_t - x_g) \frac{\partial L_t} {\partial \alpha}$$ One final definition needs to be made: a distance $h$ from the center of gravity such that the moment equation for the complete aircraft can be written as: $$M = h(L_W + L_t)$$ Solving all equations (see Wikipedia for details) leads to: $$h = \frac{x_g}{c} - \left( 1 - \frac{\partial\epsilon}{\partial \alpha} \right) \frac{C_{l_\alpha}}{C_{L_\alpha}} \frac{l_t S_t}{c S_w}$$ with $c$ the mean aerodynamic chord of the main wing (introduced, once again, to reduce the number of dimensional quantities we work with). For stability (since $C_{M_\alpha}$ needs to be negative), $h$ needs to be negative. Let's analyze the above result: $$\frac{l_t S_t}{c S_w} = V_t$$ This part, called the "tail volume", consists of geometric properties of the aircraft and won't change. $$\left(1 - \frac{\partial\epsilon}{\partial \alpha}\right) \frac{C_{l_\alpha}}{C_{L_\alpha}}$$ involves the stability derivatives, which are difficult to calculate but typically found to be at least $0.5$. So this allows us to estimate the stability margin as: $$h = \frac{x_g}{c} - 0.5\, V_t$$ Note that since the second term is always positive, having a negative $x_g$, or (see image above) having the center of gravity in front of the aerodynamic center of the main wing, will always give a stable configuration. And remember that the aerodynamic center does not change with angle of attack. (The center of gravity can shift during cruise due to fuel consumption, but this is typically mitigated in practice by pumps, and shifting the center of gravity forward always gives a more stable aircraft.)
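As a worked example of the margin estimate $h = x_g/c - 0.5\,V_t$ (the nondimensional form of the relation above), here is a small sketch; all numbers are made up purely for illustration, not real aircraft data:

```python
# Stability-margin sketch using the simplified relation
#   h = x_g / c - 0.5 * V_t,    V_t = l_t * S_t / (c * S_w)
# All numerical values below are illustrative assumptions.

def tail_volume(l_t, S_t, c, S_w):
    """Non-dimensional tail volume V_t."""
    return l_t * S_t / (c * S_w)

def stability_margin(x_g, c, V_t):
    """h < 0 means statically stable (dM/dalpha < 0)."""
    return x_g / c - 0.5 * V_t

# Conventional configuration: tail well behind the main wing.
V_t = tail_volume(l_t=15.0, S_t=30.0, c=4.0, S_w=120.0)   # = 0.9375
h_conv = stability_margin(x_g=0.5, c=4.0, V_t=V_t)        # CG slightly behind wing a.c.

# Flying wing: no tail, so V_t = 0 and the margin is just x_g / c.
h_fw = stability_margin(x_g=0.5, c=4.0, V_t=0.0)

print(h_conv < 0, h_fw > 0)  # conventional stable, flying wing unstable
```

The numbers make the flying-wing point explicit: with the same CG position, the conventional tail volume pushes $h$ well below zero, while setting $V_t=0$ leaves the margin positive (unstable).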
neutral point Now we finally arrive at the neutral point, which was consistently used incorrectly in another answer. The neutral point is, by definition, the center-of-gravity location at which the aircraft is "just" stable, $h=0$: $$\frac{x_g}{c} = 0.5\,V_t$$ From this it follows that the range over which the center of gravity may move lies between the nose of the aircraft (negative $x_g$) and a point determined mainly by the tail volume. The tail volume is most easily influenced by changing either the tail surface area or the distance between the main wing and the tail. Flying wing configuration Finally, back to the original point: the flying wing configuration. A flying wing, by definition, has no tail behind the main wing, so the tail volume is zero. Hence the neutral point of a flying wing is exactly at the aerodynamic center, which for a conventional wing design is at about 1/4 of the chord. Thus a flying wing has, without modifications, an unusably small stability margin. Delta wing and canard I'd also like to make a quick side note on the delta wing and canard configurations, such as on the Concorde or the F-16. These designs are driven by other parameters (shock-wave drag, or something else, like more efficient control due to the absence of downwash). The stability of such aircraft is, however, quite different: while the picture above can still be used, we need to consider that $l_t$ is, by design, negative. This moves the neutral point to always lie in front of the main wing, and many of those designs also have active control surfaces and are inherently unstable. (The name "canard" even comes from this: when the Wright brothers created the first powered aircraft, people in France didn't believe it and called it what we would today call "fake news". The French term for such a story was "canard", so they called the design a "canard".)
Recently I have been studying PSG and I am very puzzled about two statements that appear in Wen's paper. To present the questions clearly, imagine that we use the Schwinger-fermion $\mathbf{S}_i=\frac{1}{2}f_i^\dagger\mathbf{\sigma}f_i$ mean-field method to study a 2D spin-1/2 system and get a mean-field Hamiltonian $H(\psi_i)=\sum_{ij}(\psi_i^\dagger u_{ij}\psi_j+\psi_i^T \eta_{ij}\psi_j+H.c.)+\sum_i\psi_i^\dagger h_i\psi_i$, where $\psi_i=(f_{i\uparrow},f_{i\downarrow}^\dagger)^T$, $u_{ij}$ and $\eta_{ij}$ are $2\times2$ complex matrices, and $h_i$ are $2\times2$ Hermitian matrices. The projection to the spin subspace is implemented by the projection operator $P=\prod _i(2\hat{n}_i-\hat{n}_i^2)$ (note that here $P\neq \prod _i(1-\hat{n}_{i\uparrow}\hat{n}_{i\downarrow})$). My questions are: (1) How does one arrive at Eq. (15)? Eq. (15) means that if $\Psi$ and $\widetilde{\Psi}$ are the mean-field ground states of $H(\psi_i)$ and $H(\widetilde{\psi_i})$, respectively, then $P\widetilde{\Psi}\propto P\Psi$, where $\widetilde{\psi_i}=G_i\psi_i$, $G_i\in SU(2)$. How does one prove this statement? (2) The statement of translation symmetry above Eq. (16), which can be formulated as follows: Let $D:\psi_i\rightarrow \psi_{i+a}$ be the unitary translation operator ($a$ is the lattice vector). If there exists an $SU(2)$ transformation $\psi_i\rightarrow\widetilde{\psi_i}=G_i\psi_i$, $G_i\in SU(2)$, such that $DH(\psi_i)D^{-1}=H(\widetilde{\psi_i})$, then the projected spin state $P\Psi$ has translation symmetry $D(P\Psi)\propto P\Psi$, where $\Psi$ is the mean-field ground state of $H(\psi_i)$. How does one prove this statement? I have been struggling with these two puzzles for several days and still can't understand them. I would very much appreciate your answers, thank you. This post imported from StackExchange Physics at 2014-03-09 08:42 (UCT), posted by SE-user K-boy
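As a quick sanity check on the projection operator, the single-site factor $2\hat{n}_i-\hat{n}_i^2$ can be evaluated numerically (a small NumPy sketch; the four-state Fock basis ordering $|0\rangle, |{\uparrow}\rangle, |{\downarrow}\rangle, |{\uparrow\downarrow}\rangle$ is an arbitrary choice):

```python
import numpy as np

# Occupation-number operator for one site in the Fock basis
# {|0>, |up>, |down>, |up,down>}: eigenvalues 0, 1, 1, 2.
n = np.diag([0.0, 1.0, 1.0, 2.0])

# Single-site factor of the projector P = prod_i (2 n_i - n_i^2).
P = 2.0 * n - n @ n

print(np.diag(P))             # [0. 1. 1. 0.]: keeps only singly occupied states
print(np.allclose(P @ P, P))  # True: P is idempotent, i.e. a projector
```

This makes explicit why $P$ projects onto the physical spin-1/2 subspace: $2n-n^2$ equals 1 only for singly occupied sites and 0 for empty or doubly occupied ones.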
NGC 6397 :: new paper Abstract: see this previous post URL: ApJ Letters website Authors: Brad M. S. Hansen, Jay Anderson, James Brewer, Aaron Dotter, Greg. G. Fahlman, Jarrod Hurley, Jason Kalirai, Ivan King, David Reitzel, Harvey B. Richer, R. Michael Rich, Michael M. Shara, Peter B. Stetson Comments: 56 pages, 30 figures We present the results of a deep Hubble Space Telescope (HST) exposure of the nearby globular cluster NGC 6397, focussing attention on the cluster's white dwarf cooling sequence. This sequence is shown to extend over 5 magnitudes in depth, with an apparent cutoff at magnitude F814W = 27.6. We demonstrate, using both artificial star tests and the detectability of background galaxies at fainter magnitudes, that the cutoff is real and represents the truncation of the white dwarf luminosity function in this cluster. We perform a detailed comparison between cooling models and the observed distribution of white dwarfs in colour and magnitude, taking into account uncertainties in distance, extinction, white dwarf mass, progenitor lifetimes, binarity and cooling model uncertainties. After marginalising over these variables, we obtain values for the cluster distance modulus and age of $\mu_0 = 12.02 \pm 0.06$ and $T_c = 11.47 \pm 0.47$ Gyr (95% confidence limits). Our inferred distance and white dwarf initial-final mass relations are in good agreement with other independent determinations, and the cluster age is consistent with, but more precise than, prior determinations made using the main sequence turnoff method. In particular, within the context of the currently accepted $\Lambda$CDM cosmological model, this age places the formation of NGC 6397 at a redshift z = 3, at a time when the cosmological star formation rate was approaching its peak. New preprint :: NGC 6397 Probing the Faintest Stars in a Globular Star Cluster NGC 6397 is the second closest globular star cluster to the Sun.
Using 5 days of time on the Hubble Space Telescope, we have constructed the deepest ever color-magnitude diagram for this cluster. We see a clear truncation in each of its two major stellar sequences. Faint red main sequence stars run out well above our observational limit and near to the theoretical prediction for the lowest mass stars capable of stable hydrogen-burning in their cores. We also see a truncation in the number counts of faint blue stars, namely white dwarfs. This reflects the limit to which the bulk of the white dwarfs can cool over the lifetime of the cluster. There is also a turn towards bluer colors in the least luminous of these objects. This was predicted for the very coolest white dwarfs with hydrogen-rich atmospheres as the formation of H2 causes their atmospheres to become largely opaque to infrared radiation due to collision-induced absorption. New preprint :: NGC 6397 The Space Motion of the Globular Cluster NGC 6397 Authors: Jasonjot S. Kalirai, Jay Anderson, Harvey B. Richer, Ivan R. King, James P. Brewer, Giovanni Carraro, Saul D. Davis, Gregory G. Fahlman, Brad M. S. Hansen, Jarrod R. Hurley, Sebastien Lepine, David B. Reitzel, R. Michael Rich, Michael M. Shara, Peter B. Stetson Comments: 5 pages including 3 figures, accepted for publication in the Astrophysical Journal Letters. Very minor changes in V2. typos fixed As a by-product of high-precision, ultra-deep stellar photometry in the Galactic globular cluster NGC 6397 with the Hubble Space Telescope, we are able to measure a large population of background galaxies whose images are nearly point-like. These provide an extragalactic reference frame of unprecedented accuracy, relative to which we measure the most accurate absolute proper motion ever determined for a globular cluster. We find mu_alpha = 3.56 +/- 0.04 mas/yr and mu_delta = -17.34 +/- 0.04 mas/yr. 
We note that the formal statistical errors quoted for the proper motion of NGC 6397 do not include possible unavoidable sources of systematic errors, such as cluster rotation. These are very unlikely to exceed a few percent. We use this new proper motion to calculate NGC 6397’s UVW space velocity and its orbit around the Milky Way, and find that the cluster has made frequent passages through the Galactic disk. New preprint :: NGC6397, NGC 6712, NGC 6218 Tidal disruption and the tale of three clusters Authors: Guido De Marchi (ESA), Francesco Paresce (INAF), Luigi Pulone (INAF) Comments: Two pages, one figure, to appear in the proceedings of “Globular Clusters – Guides to Galaxies”, eds. T. Richtler and S. Larsen How well can we tell whether a globular cluster will survive the Galaxy’s tidal forces? This is conceptually easy to do if we know the cluster’s total mass, mass structure and space motion parameters. This information is used in models that predict the probability of disruption due to tidal stripping, disc and bulge shocking. But just how accurate is the information that goes into these models and, therefore, how reliable are their predictions? To understand the virtues and weaknesses of these models, we have studied in detail three globular clusters (NGC 6397, NGC 6712, NGC 6218) whose predicted interaction with the galaxy is very different. We have used deep HST and VLT data to measure the luminosity function of stars throughout the clusters in order to derive a solid global mass function, which is the best tell-tale of the strength and extent of tidal stripping operated by the Galaxy. We indeed find that the global mass functions of the three clusters are different, but not in the way predicted by the models. [abridged] NGC 6397 :: New published paper Abstract In this paper we present a new, accurate determination of the three components of the absolute space velocity of the Galactic globular cluster NGC 6397 ( , ). 
We used three HST/WFPC2 fields with multi-epoch observations to obtain astrometric measurements of objects in three different fields in this cluster. The identification of 33 background galaxies with sharp nuclei allowed us to determine an absolute reference point and measure the absolute proper motion of the cluster. The third component was obtained from radial velocities measured from spectra from the multi-fiber spectrograph FLAMES at UT2-VLT. We find ( , ) = ( , ) mas yr -1 and (0.10) km s -1. Assuming a Galactic potential, we calculate the cluster orbit for various assumed distances and briefly discuss the implications. A&A 456 (2006) 517-522 (Section ‘Galactic structure, stellar clusters, and populations’) NGC 6397 :: HST Press Release Hubble Sees Faintest Stars in a Globular Cluster These clusters formed early in the 13.7-billion-year-old universe. The cluster NGC 6397 is one of the closest globular star clusters to Earth. Seeing the whole range of stars in this area will yield insights into the age, origin, and evolution of the cluster.
I know that we should express complex numbers generally in the standard form $$a+bi:\quad a,b\in\mathbb{R},$$ like $4+5i-2=2+5i$. But how do I express complex numbers like $e^{-i\pi/2}$ or $i+e^{2\pi i}$? Thank you! Hint Use Euler's formula $$ e^{i\theta}=\cos(\theta)+i\sin(\theta).$$ For example, $$e^{-i\pi/2} =\cos(-\pi/2)+i\sin(-\pi/2)=0+i\cdot (-1)=-i. $$ For the reverse process $x+iy= r\cdot e^{i\theta}$, use the formulas $$r=\sqrt{x^2+y^2}\qquad \mbox{and } \qquad \tan(\theta)=\frac{y}{x},\quad -\frac{\pi}{2}<\theta<\frac{\pi}{2}$$ Well, every complex number has a modulus and an angle, corresponding to $r, \theta$ in $re^{i\theta}$. In short, for a complex number $z=a+ib$ we have $r=\sqrt{a^2+b^2}$ and $\theta=\tan^{-1}\frac{b}{a}$. To change it back, all you have to do is use $a=r\cos\theta$, $b=r\sin\theta$. In your question, you can change either the polar form into the Cartesian form or vice versa and you will get your answer. Recall Euler's formula: $e^{ix}=\cos x + i \sin x$. Applying this formula to the given problems, we obtain $e^{-i\pi/2}=-i$ and $i+e^{2\pi i}=i+1=1+i$.
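Both conversions can be checked numerically (a small Python sketch using the standard `cmath` module):

```python
import cmath

# Polar -> rectangular: evaluate e^{i*theta} directly.
z1 = cmath.exp(-1j * cmath.pi / 2)   # e^{-i*pi/2}
z2 = 1j + cmath.exp(2j * cmath.pi)   # i + e^{2*pi*i}

print(z1)   # approximately -1j, i.e. -i (up to floating-point rounding)
print(z2)   # approximately (1+1j)

# Rectangular -> polar: modulus r and angle theta of a + bi.
r, theta = cmath.polar(1 + 1j)
print(r, theta)   # sqrt(2) and pi/4
```

Note that the results carry tiny floating-point residues (e.g. the real part of `z1` is of order $10^{-17}$ rather than exactly zero), which is expected.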
Accepted Manuscripts There are 83 manuscripts. 1. title: Generalized $k$--Fibonacci numbers authors: Sergio Falcon code: MMN-1233 2. title: Krasnoselskii Type Theorems for Multivalued Operators in Generalized Metric Spaces with Applications authors: Cristina Urs code: MMN-1787 3. title: Normal direction curves and their applications authors: Asst.Prof.Dr.Sezai Kızıltuğ, Asst.Prof.Dr.Mehmet Önder, Prof.Dr.Yusuf Yaylı code: MMN-1476 4. title: THIRD HANKEL DETERMINANT FOR CERTAIN SUBCLASS OF p−VALENT ANALYTIC FUNCTIONS authors: VAMSHEE KRISHNA DEEKONDA, RAMREDDY THOUTREDDY code: MMN-1807 5. title: on prime graph of a finite group authors: mohammad reza darafsheh, maryam ghorbani, pedram yousefzadeh code: MMN-1668 6. title: Construction of a common point of Solution set of a Variational Inequality Problem and common Fixed Point set of a $k$-strict pseudocontractive Semigroup authors: Naseer Shahzad, Habtu Zegeye, Mohammed Alghamdi, Maryam Alghamdi code: MMN-1699 7. title: An extension of Tychonoff fixed point theorem with application to the solvability of the infinite systems of integral equations in the Frechet spaces authors: Reza Arab, Reza Allahyari, Ali Shole Haghighi code: MMN-1697 8. title: GENERALIZED FRACTIONAL HERMITE-HADAMARD INEQUALITIES authors: Muhammad Awan code: MMN-1143 9. title: A new class of generalized polynomials associated with Hermite and Poly-Bernoulli polynomials authors: Waseem Waseem code: MMN-1684 10. title: On $e$-convexity authors: Mohammad Hossein Alizadeh, Judit Makó code: MMN-2467 11. title: A STUDY OF A SPECIAL KIND OF N-FIXED POINT EQUATION SYSTEM AND APPLICATIONS authors: Jen-Chih Yao, Yongfu Su, Yinglin Luo, Adrian Petrusel code: MMN-1996 12. title: DILATIONS, MODELS AND SPECTRAL PROBLEMS OF NON-SELF-ADJOINT STURM-LIOUVILLE OPERATORS authors: Bilender P. Allahverdiev code: MMN-2007 13. 
title: Periodic Solutions and Stability of Linear Evolution Equations with Noninstantaneous Impulses authors: JinRong Wang, Michal Feckan code: MMN-2552 14. title: An applications on differential equations of order m authors: Oznur Ozkan Kilic (Corresponding Author), Osman Altintas code: MMN-2236 15. title: Caputo fractional differential inclusions of arbitrary order with nonlocal integro-multipoint boundary conditions authors: Bashir Ahmad, Doa'a Garout, Sotiris K. Ntouyas, Ahmed Alsaedi code: MMN-2241 16. title: Bicomplex Generalized $k-$Horadam Quaternions authors: Cahit KÖME, Yasin YAZLIK, Sure KÖME code: MMN-2628 17. title: THE EXACT BOUNDARIES OF THE APPLICATION OF THE APPROXIMATE SOLUTION OF DIFFERENTIAL EQUATIONS IN THE VICINITY OF THE APPROXIMATE VALUE OF THE MOVABLE SINGULAR POINT IN THE REAL DOMAIN authors: Oleg Kovalchuk, Viktor Orlov, Марина Гуз code: MMN-2297 18. title: Bounding the Convex Combination of Arithmetic and Integral Means in Terms of One-Parameter Harmonic and Geometric Means authors: Yu-Ming Chu, Wei-Mao Qian, Wen Zhang code: MMN-2334 19. title: Multiple solutions of a Dirichlet problem in one-dimensional billiard space authors: Jan Tomeček code: MMN-2407 20. title: Some new inequalities for differentiable h-convex functions and applications authors: Mevlüt Tunç, Ayşegül ACEM code: MMN-2444 21. title: A note on radicals of associative rings and alternative rings authors: Dayantsolmon Dagva, Tumurbat Sodnomkhorloo, Khulan Tumenbayar code: MMN-2601 22. title: Analysis of priority Queue with Repeated Attempts using Generalized Stochastic Petri Nets authors: Sedda Hakmi, Ouiza Lekadir, Djamil Aissani code: MMN-2620 23. title: SOME APPLICATIONS OF GENERALIZED SRIVASTAVA-ATTIYA OPERATOR TO THE BI-CONCAVE FUNCTIONS authors: Şahsene Altınkaya, Sibel Yalçın code: MMN-2947 24. title: Emergence of consensus of multi-agents systems on time scales authors: Urszula Ostaszewska , Ewa Schmeidel, Malgorzata Zdanowicz code: MMN-2704 25. 
title: NEW BOUNDS FOR HERMITE-HADAMARD'S TRAPEZOID AND MID-POINT TYPE INEQUALITIES VIA FRACTIONAL INTEGRALS authors: Mohsen Rostamian Delavar code: MMN-2796 26. title: Comparisons of the exact and the approximate solutions of second-order fuzzy linear boundary value problems authors: Hülya Gültekin Çitil code: MMN-2627 27. title: Existence of fast positive semi-wavefront solutions to monostable integro-differential equations with delay authors: Robert Hakl, Maitere Aguerrea code: MMN-2801 28. title: On integral inequalities of Hermite--Hadamard type for co-ordinated $r$-mean convex functions authors: Dan-Dan Gao, Bo-Yan Xi, Ying Wu, Bai-Ni Guo code: MMN-2828 29. title: Oplus-Supplemented Lattices authors: Celil Nebiyev, Çiğdem Biçer code: MMN-2806 30. title: On central Fubini-like numbers and polynomials authors: Hacène BELBACHIR, Yahia DJEMMADA code: MMN-2809 31. title: A family of lacunary recurrences for Fibonacci numbers authors: Cristina Ballantine, Mircea Merca code: MMN-2526 32. title: Reciprocal Sums of general second order recurrences authors: Didem ERSANLI, Emrah Kiliç code: MMN-2810 33. title: Finite Approximate Controllability of Hilfer Fractional Semilinear Differential Equations authors: JinRong Wang, A. G. Ibrahim, Donal O'Regan code: MMN-2921 34. title: A note on lattices with many sublattices authors: Gábor Czédli, Eszter K. Horváth code: MMN-2821 35. title: A note on Farthest point problem in Banach spaces authors: SUMIT SOM, Ekrem Savas code: MMN-2834 36. title: HERMITE-HADAMARD TYPE INEQUALITIES WITH APPLICATIONS authors: Muhammad Awan code: MMN-2837 37. title: Oscillatory behavior of second order nonlinear difference equations with a nonlinear nonpositive neutral term authors: John Graef, Said Grace code: MMN-2731 38. title: On the Extremal Graphs for Second Zagreb Index with Fixed Number of Vertices and Cyclomatic Number authors: Akbar Ali, Kinkar Ch. Das, Sohail Akhter code: MMN-2382 39. 
title: QUALITATIVE PROPERTIES OF SOLUTIONS OF FUNCTIONAL-DIFFERENTIAL EQUATIONS WITH MAXIMA, OF MIXED TYPE authors: Diana Otrocol code: MMN-1946 40. title: Centralizers of BCI-algebras authors: Arsham Borumand Saeid, A Najafi, E Eslami code: MMN-2632 41. title: A Discontinuous $q$-Fractional Boundary Value Problem with Eigenparameter Dependent Boundary Conditions authors: F. Ayca Cetinkaya code: MMN-2692 42. title: Solutions of homogeneous fractional $p$-Kirchhoff equations in $\mathbb{R}^N$ authors: Phuong Le, Huynh Nhat Vy code: MMN-2869 43. title: Efficient estimate of the remainder for the Dirichlet function $\eta(p)$ for $p\in \R^+$ authors: Vito Lampret code: MMN-2877 44. title: ON CONTRACTIONS VIA SIMULATION FUNCTION ON EXTENDED b-METRIC SPACES authors: Erdal Karapinar code: MMN-2871 45. title: Optimal control problems for some classes of functional-differential equations on the semi-axis authors: Olga Kichmarenko, Olexandr Stanzhytskyi code: MMN-2739 46. title: Analysis of Higher Order Difference Method for a Pseudo-Parabolic Equation with Delay authors: Ilhame Amirali code: MMN-2895 47. title: Certain results on Kenmotsu pseudo-metric manifolds authors: Devaraja Mallesha Naik, Venkatesha V, D.G. Prakasha code: MMN-2905 48. title: Generalized Elastica in SO(3) authors: Ahmet Yücesan, Gözde OZKAN TUKEL, Tunahan TURHAN code: MMN-2900 49. title: BASIC INVARIANTS OF GEOMETRIC MAPPINGS authors: Nenad Vesić code: MMN-2901 50. title: Coincidence point results in b-metric spaces via $\mbox{C}_F$-$s$-simulation function authors: MANU ROHILLA, ANURADHA GUPTA code: MMN-2782 51. title: Best approximation and characterization of Hilbert spaces. authors: Setareh Rajabi code: MMN-2917 52. title: On the monoid of monic binary quadratic forms authors: Orland James Q. Tigas, Jerome Dimabayao, Vadim Ponomarenko code: MMN-2920 53. title: On Pell hybrinomials authors: Mirosław Liana, Anetta Szynal-Liana, Iwona Włoch code: MMN-2971 54. 
title: On a system of difference equations of second order solved in a closed from authors: Youssouf Akrour, Nouressadat Touafek, Yacine Halim code: MMN-2923 55. title: On the generalized bi-periodic Fibonacci and Lucas quaternions authors: Younseok Choo code: MMN-2935 56. title: GENERALIZED Ψ-GERAGHTY-QUASI CONTRACTIONS IN b-METRIC SPACES authors: Edixon Rojas, José Morales code: MMN-2951 57. title: A MODIFIED SHRINKING PROJECTION METHODS FOR NUMERICAL RECKONING FIXED POINTS OF G-NONEXPANSIVE MAPPINGS IN HILBERT SPACES WITH GRAPHS authors: H.A. HAMMAD, W CHOLAMJIAK, D YAMBANGWAI, H Dutta code: MMN-2954 58. title: Pata type best proximity point results in metric spaces authors: Naeem Saleem, Mujahid Abbas, Bandar Bin-Mohsin, Stojan Radenovic code: MMN-2764 59. title: Maia type fixed point results for multivalued F-contractions authors: Ishak Altun, Murat Olgun, Tuğçe Kavuzlu, Özge Biçer code: MMN-2540 60. title: On two bivariate kinds of $(p,q)$-Bernoulli polynomials authors: Patrick Njionou Sadjang, Ugur Duran code: MMN-2587 61. title: Interval Estimation of Kumaraswamy Parameters Based on Progressively Type II Censored Sample and Record Values authors: Hanieh Panahi code: MMN-2649 62. title: Some results on hybrid relatives of the Sheffer polynomials via operational rules authors: Mahvish Ali, Tabinda Nahid, Subuhi Khan code: MMN-2958 63. title: A periodic solution of the coupled matrix Riccati differential equations authors: Abdolrahman Razani, Zahra Goodarzi, M.R. Mokhtarzadeh code: MMN-2972 64. title: Axiomatic system defining an order-embedding between infinite $\sigma$-algebras authors: Nutefe Kwami Agbeko code: MMN-2975 65. title: Notes on explicit and inversion formulas for Chebyshev polynomials of the first two kinds authors: Feng Qi, Da-Wei Niu, Dongkyu Lim code: MMN-2976 66. title: A Study on the Uniform Convergence of Spectral Expansions for Continuous Functions on a Sturm-Liouville Problem authors: Sertac Goktas, Emir A. Maris code: MMN-2982 67. 
title: New generalized midpoint type inequalities for fractional integral authors: Hüseyin Budak, Praveen Agarwal code: MMN-2525 68. title: CHEBYSHEV TYPE INEQUALITIES FOR CONFORMABLE FRACTIONAL INTEGRALS authors: ERHAN SET, Ahmet Ocak Akdemir, İLKER MUMCU code: MMN-2766 69. title: Optimality conditions and duality results for a new class of nonconvex nonsmooth vector optimization problems authors: Tadeusz Antczak, Ram Verma code: MMN-2780 70. title: SOME CONVERGENCE THEOREMS FOR NEW ITERATION SCHEME IN CAT(0) SPACES authors: Javid Ali, Izhar Uddin, Vladimir Rakocevic code: MMN-2792 71. title: A note on sums of a class of series authors: Sungtae Jun, Insuk Kim, Arjun Rathie code: MMN-2732 72. title: On Lie ideals and symmetric generalized $(\alpha, \beta)$-biderivation in prime ring authors: Nadeem ur Rehman, Shuliang Huang code: MMN-2450 73. title: Fixed Point of Continuous Mappings Defined on an Arbitrary Interval authors: osman alagoz, Birol Gunduz, Sezgin Akbulut code: MMN-2509 74. title: Product of Statistical Probability Convergence and Its Applications to Korovkin-type Theorem authors: Bidu Bhusan Jena, Susanta Kumar Paikray code: MMN-3014 75. title: Exponential monomials on hypergroup joins authors: Kedumetse Nadour Vati, László Székelyhidi code: MMN-3011 76. title: A note on skew lie product of prime ring with involution authors: ADNAN ABBASI, Muzibur Mozumder, Nadeem Dar code: MMN-2644 77. title: Reversible and Reflexive Properties for Rings with Involution authors: Usama Aburawash, Muhammad Saad code: MMN-2676 78. title: Some New Integral Inequalities for n- Times Differentiable r-Convex and r-Concave Functions authors: Mahir KADAKAL, Huriye KADAKAL, İmdat İŞCAN code: MMN-2489 79. title: A WEIGHTED COMPANION OF OSTROWSKI'S INEQUALITY USING THREE STEP WEIGHTED KERNEL authors: Sofian Obeidat, Muhammad Amer Latif, ATHER QAYYUM code: MMN-2785 80. 
title: SOME STEFFENSEN-TYPE ITERATIVE SCHEMES FOR THE APPROXIMATE SOLUTION OF NONLINEAR EQUATIONS authors: Farooq Ahmed Shah, Muhammad Aslam Noor, Muhammad Waseem, Ehsan Ul Haq code: MMN-2787 81. title: Szeged Index of a Class of Unicyclic Graphs authors: Xuli Qi code: MMN-2793 82. title: Intermediate Regularity Results For The Solution Of A High Order Parabolic Equation authors: Arezki KHELOUFI code: MMN-3040 83. title: Fractional differential equations of variable order: existence results and numerical method authors: Guo-Cheng Wu, Chuan-Yun Gu, Dumitru Baleanu code: MMN-2730 Forthcoming There are no forthcoming articles yet.
Guiding AMR with adjoint flagging¶ A new approach to flagging cells for refinement was introduced in Clawpack 5.6.0: using the solution to an adjoint problem to determine which cells in the forward solution should be refined, because these cells may have an impact on some specified functional of interest. This approach currently only works for autonomous linear problems, in which case the adjoint problem needs to be solved only once, and shifted versions of the adjoint solution can be used at any time that flagging is performed. The adjoint problem is solved first and snapshots of the adjoint are saved. These are read in at the start of the forward solution, and space-time interpolation is used as needed at each regridding time. The general approach is described in: See Using adjoint flagging in GeoClaw for discussion of the GeoClaw version. Adjoint flagging is appropriate when you are not interested in computing an accurate solution over the entire space-time domain, but rather are interested only in some linear functional applied to the solution at each time (or at a single time, or range of times). In one space dimension this functional has the form \(J(t) = \int_a^b \phi(x)\, q(x,t)\, dx,\) where \(a\leq x \leq b\) is the full computational domain and \(\phi(x)\) is specified by the user as initial data for the adjoint problem that is solved backward in time. For example, if the solution is of interest only over a small range of \(x\) values, say \(x_1 \leq x \leq x_2\), then \(\phi(x)\) might be a box function with value 1 in this interval and 0 elsewhere, or \(\phi(x)\) could be a sharply peaked Gaussian about one location of interest. In order to calculate an accurate solution near the location of interest at the final time \(T\), it may be necessary to refine the solution at other places at earlier times. The adjoint helps to identify where refinement is needed.
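For instance, the box-function choice of \(\phi\) and the resulting functional can be sketched as follows (a minimal NumPy illustration; the interval \([x_1,x_2]=[0.4,0.6]\) and the sample solution are arbitrary choices, not taken from the Clawpack examples):

```python
import numpy as np

# Grid on the full computational domain a <= x <= b.
a, b = 0.0, 1.0
x = np.linspace(a, b, 1001)

# phi: box function, value 1 on the interval of interest [x1, x2], 0 elsewhere.
x1, x2 = 0.4, 0.6
phi = np.where((x >= x1) & (x <= x2), 1.0, 0.0)

# A sample solution q(x, t) at one fixed time (made-up smooth profile).
q = np.sin(np.pi * x)

# Functional J = integral of phi(x) * q(x, t) dx, via the trapezoid rule.
y = phi * q
J = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
print(J)  # close to the exact value (cos(0.4 pi) - cos(0.6 pi)) / pi
```

The same \(\phi\) would be supplied as "initial" data for the adjoint problem solved backward from time \(T\), as described below.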
The adjoint equation is first solved backward in time from the final time \(T\) with initial data \(\hat q(x,T) = \phi(x)\) given by the functional. The waves propagating backward from time \(T\) to some regridding time \(t_r\) in the adjoint solution identify which waves in the forward solution at time \(t_r\) will reach the location of interest at time \(T\). Some examples for AMRClaw are available in

$CLAW/amrclaw/examples/acoustics_1d_adjoint
$CLAW/amrclaw/examples/acoustics_2d_adjoint

In each case the main directory has a subdirectory named adjoint that contains the code that must be run first in order to compute and save snapshots of the adjoint solution. The adjoint/Makefile must point to an appropriate Riemann solver for the adjoint problem, which is a linear hyperbolic PDE with coefficient matrices that are transposes of the coefficient matrices appearing in the forward problem. For variable-coefficient problems it is important to note that if the forward problem is in conservation form then the adjoint is not, and vice versa. For example, in one space dimension, if the forward problem is \(q_t + A(x)q_x = 0\), then the adjoint is \(\hat q_t + (A(x)^T \hat q)_x = 0\). On the other hand, if the forward problem is \(q_t + (A(x)q)_x = 0\), then the adjoint is \(\hat q_t + A(x)^T \hat q_x = 0\). Note that the eigenvalues of \(A\) are unchanged upon transposing, but the left eigenvectors of \(A\) are now the right eigenvectors of \(A^T\), and these must be used in the adjoint Riemann solver. See for example $CLAW/riemann/src/rp1_acoustics_variable_adjoint.f90, used for the example in $CLAW/amrclaw/examples/acoustics_1d_adjoint/adjoint. Boundary conditions may also need to be adjusted in going from the forward to the adjoint equation. The guiding principle is that boundary conditions must vanish during the integration by parts that is used to define the adjoint PDE, as described in more detail in the references.
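The eigen-structure claim above can be sanity-checked numerically. The sketch below uses the constant-coefficient acoustics system with made-up material parameters `K0` and `rho0` (these names and values are illustrative, not from the Clawpack examples):

```python
import numpy as np

# Sanity check: for the acoustics system q_t + A q_x = 0, the left
# eigenvectors of A are the right eigenvectors of A^T, and the
# eigenvalues (wave speeds) are shared between A and A^T.
K0, rho0 = 4.0, 1.0                      # hypothetical bulk modulus / density
A = np.array([[0.0, K0],
              [1.0/rho0, 0.0]])

evals_A = np.sort(np.linalg.eigvals(A))
evals_AT = np.sort(np.linalg.eigvals(A.T))
assert np.allclose(evals_A, evals_AT)    # same wave speeds +/- sqrt(K0/rho0)

# columns of R are right eigenvectors of A^T, i.e. left eigenvectors of A:
w, R = np.linalg.eig(A.T)
for i in range(2):
    l = R[:, i]
    assert np.allclose(l @ A, w[i] * l)  # l is a left eigenvector of A
```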
The functional of interest is defined in the adjoint/qinit.f file that specifies “initial” conditions for the adjoint problem. The adjoint/setrun.py file specifies the final time \(T\) (as clawdata.tfinal) and the output interval via clawdata.num_output_times, as usual. You should specify \(T\) at least as large as the final time of interest in the forward problem, and frequent enough snapshots that interpolation between them is reasonable. You should set clawdata.output_format = 'binary' so that output is in binary format, since the code that reads in these snapshots when solving the forward problem assumes this format. After solving the adjoint equation by running the code in the adjoint subdirectory in the usual manner (e.g. make .output), the code in the main directory can now be used to solve the forward problem, with the adjoint snapshots used to guide AMR. Starting in v5.6.0 a new attribute of clawutil.data.ClawRunData is available named adjointdata. This is an object of class amrclaw.data.AdjointData and has several attributes that should be set.
For example, in $CLAW/amrclaw/examples/acoustics_1d_adjoint they are set as follows:

```python
#------------------------------------------------------------------
# Adjoint specific data:
#------------------------------------------------------------------
# Also need to set flagging method and appropriate tolerances above

adjointdata = rundata.adjointdata
adjointdata.use_adjoint = True

# location of adjoint solution, must first be created:
adjointdata.adjoint_outdir = os.path.abspath('adjoint/_output')

# time period of interest:
adjointdata.t1 = rundata.clawdata.t0
adjointdata.t2 = rundata.clawdata.tfinal

if adjointdata.use_adjoint:
    # need an additional aux variable for inner product:
    rundata.amrdata.aux_type.append('center')
    rundata.clawdata.num_aux = len(rundata.amrdata.aux_type)
    adjointdata.innerprod_index = len(rundata.amrdata.aux_type)
```

The times adjointdata.t1 and adjointdata.t2 determine the time interval over which the functional is of interest, and adjointdata.adjoint_outdir specifies where the adjoint snapshots are found. The flagging method and tolerances are set using, e.g.:

```python
# set tolerances appropriate for adjoint flagging:

# Flag for refinement based on Richardson error estimator:
amrdata.flag_richardson = False
amrdata.flag_richardson_tol = 1e-5

# Flag for refinement using routine flag2refine:
amrdata.flag2refine = True
amrdata.flag2refine_tol = 0.01
```

If amrdata.flag_richardson is True then we attempt to use estimates of the one-step error generated by Richardson extrapolation together with the adjoint to perform flagging. This is still experimental. (Describe in more detail.) Otherwise the inner product of the forward and adjoint solutions is computed, and a cell is flagged for refinement where the magnitude of this inner product is greater than amrdata.flag2refine_tol.
Using adjoint flagging in GeoClaw¶ The references above contain tsunami modeling examples, as does the paper An example can be found in $CLAW/geoclaw/examples/tsunami/chile2010_adjoint Note that GeoClaw solves the nonlinear shallow water equations while the adjoint as implemented in GeoClaw is only suitable for linear problems. To date the adjoint has only been used to guide refinement for waves propagating across the ocean as a way to identify which waves will reach a target location of interest (possibly after multiple reflections). In the deep ocean the tsunami amplitude is very small compared to the water depth and so GeoClaw is essentially solving the linear shallow water equations, linearized about the ocean at rest. Hence the adjoint problem is also solved about the ocean at rest and the adjoint equations take essentially the same form as the forward equations. The adjoint Riemann solver can be found in

$CLAW/riemann/src/rpn2_geoclaw_adjoint_qwave.f
$CLAW/riemann/src/rpt2_geoclaw_adjoint_qwave.f

Note that since the forward problem is solved using an f-wave formulation, the adjoint problem is a variable-coefficient problem in non-conservation form and is solved using the q-wave formulation, in which jumps in the solution vector are split into eigenvectors, rather than jumps in the flux. See the comments in the rpn2 solver for more details.
To get an action of $S_3$ in which $\langle(1,2,3)\rangle$ is the stabilizer of a point, take $X=\{1,-1\}$, and let $S_3$ act by parity: even permutations fix both $1$ and $-1$, odd permutations swap $1$ and $-1$. Then the stabilizer of a single point, $1$, is $A_3 = \langle (1,2,3)\rangle$. In general, if $H$ is a subgroup of $G$, then $G$ acts on the left cosets of $H$ by left multiplication, $g\cdot xH = gxH$. The stabilizer of the coset $H$ is $H$ itself. If you have a partition of the cosets so that $G$ acts on the partition, let $K = \{ g\in G\mid gH\text{ is in the same block as }H\}$ be the collection of all elements that represent cosets in the same part of the partition as $H$. Then $K$ certainly contains $H$. I claim that $K$ is a subgroup of $G$: for if $x\in K$, then $xH$ is the same block of the partition as $H$, hence $x^{-1}(xH)$ is in the same block as $x^{-1}H$. But $x^{-1}(xH) = H$, so $x^{-1}H$ is also in the same block of the partition, hence $x^{-1}\in K$. That is, $K$ is closed under inverses. And if $x,y\in K$, $x^{-1}\in K$. Taking the cosets $x^{-1}H$ and $yH$, which are in the same block, and multiplying by $x$, we conclude that $xyH$ and $H$ are in the same block, so $xy\in K$. Thus, $K$ is a subgroup, $H\leq K \leq G$. Conversely, if $K$ is a subgroup, $H\leq K\leq G$, then $K$ induces a partition of the cosets into blocks for the action of $G$: just take the cosets of $K$, and partition each into cosets of $H$. So systems of blocks in the action of $G$ on the left cosets of $H$ correspond to subgroups $K$, $H\leq K\leq G$. Hence, if $H$ is a maximal subgroup, then the action of $G$ on the cosets is primitive, since the only possible systems of blocks are the trivial and total blocks, corresponding to $H$ and to $G$. In particular, every maximal subgroup of $G$ is the stabilizer of a point in a primitive action of $G$, namely, $H$ is the stabilizer of $H$ in the (left) action of $G$ on the left cosets of $H$ in $G$.
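The parity-action claim can be checked by brute force in plain Python, with permutations of $\{0,1,2\}$ standing in for $\{1,2,3\}$ (a quick illustrative sketch, not part of the argument):

```python
from itertools import permutations

# S_3 acts on X = {1, -1} by parity: even permutations fix both
# points, odd permutations swap them. The stabilizer of the point 1
# should then be exactly A_3, the three even permutations.
def sign(p):
    # parity via inversion count of the permutation tuple
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

def act(p, x):
    return x * sign(p)   # even: x -> x, odd: x -> -x

S3 = list(permutations(range(3)))
stabilizer_of_1 = [p for p in S3 if act(p, 1) == 1]
assert stabilizer_of_1 == [p for p in S3 if sign(p) == 1]  # exactly A_3
assert len(stabilizer_of_1) == 3
```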
Yes, the same holds for infinite graphs. I'll provide here a probably overlong proof of this. Let $G = (V,E)$ be an infinite graph of size $\kappa$, and let $\chi(G) = \lambda$. If $\lambda = \kappa$, then any bijection is chromatic, so we may assume that $\lambda < \kappa$. Let $c:V \rightarrow \lambda$ be a chromatic coloring. We first modify $c$ to make it "minimal" in a certain sense. To do this, we recursively define "color classes" $\langle A_\eta \mid \eta < \lambda \rangle$ of vertices as follows. If $\xi < \lambda$ and $\langle A_\eta \mid \eta < \xi \rangle$ has been defined, let $V_\xi = V \setminus \bigcup_{\eta < \xi} A_\eta$ and define $A_\xi \subseteq V_\xi$ so that $(c^{-1}(\xi) \cap V_\xi) \subseteq A_\xi$ and $A_\xi$ is a maximal independent subset of $V_\xi$. Now define $c^*:V \rightarrow \lambda$ by letting, for every $v \in V$, $c^*(v)$ be the unique $\xi$ such that $v \in A_\xi$. Salient features of $c^*$, easily verified, are: $c^*$ is a chromatic coloring; for all $v \in V$, $c^*(v) \leq c(v)$; for all $v \in V$ and all $\eta < c^*(v)$, there is $u \in V$ such that $\{u,v\} \in E$ and $c^*(u) = \eta$. We now show that there is a well-ordering of $V$ in order type $\kappa$ such that the greedy coloring defined along this well-ordering is $c^*$. Let $V = \{v_\alpha \mid \alpha < \kappa\}$. For all $v \in V$ and all $\eta < c^*(v)$, let $\alpha_{v, \eta}$ be the least ordinal $\alpha$ such that $\{v, v_\alpha\} \in E$ and $c^*(v_\alpha) = \eta$. Define a function $f:V \rightarrow [V]^{<\lambda}$ by letting, for all $v \in V$, $f(v) = \{v_{\alpha_{v, \eta}} \mid \eta < c^*(v)\}$. (Intuitively, $f$ chooses, for each $\eta < c^*(v)$, one element $u$ of $V$ such that $\{u,v\} \in E$ and $c^*(u) = \eta$.) For $v \in V$, let $\mathrm{cl}_f(\{v\})$ denote the closure of $\{v\}$ under $f$, i.e., the smallest $Y \subseteq V$ such that $v \in Y$ and, for all $w \in Y$, $f(w) \subseteq Y$.
Easily, $|\mathrm{cl}_f(\{v\})| < \max\{\lambda, \aleph_0\}$. Now recursively define an increasing sequence $\langle X_\alpha \mid \alpha < \kappa \rangle$ of subsets of $V$ as follows. Let $X_0 = \emptyset$. If $\alpha < \kappa$ and $X_\alpha$ is given, let $X_{\alpha+1} = X_\alpha \cup \mathrm{cl}_f(\{v_\alpha\})$. If $\beta < \kappa$ is a limit ordinal, let $X_\beta = \bigcup_{\alpha < \beta}X_\alpha$. Note that each $X_\alpha$ is closed under $f$ and $|X_\alpha| < \kappa$. Now, for each $\alpha < \kappa$, let $Z_\alpha = X_{\alpha + 1} \setminus X_\alpha$ and note that, for each $v \in V$, there is a unique $\alpha < \kappa$ such that $v \in Z_\alpha$. Let $\prec_\alpha$ be a well-ordering of $Z_\alpha$ such that, if $u,v \in Z_\alpha$ and $c^*(u) < c^*(v)$, then $u \prec_\alpha v$. Define a well-ordering $\prec$ of $V$ by "gluing together" the $\prec_\alpha$'s. More precisely, suppose $u \in Z_\alpha$ and $v \in Z_\beta$. Let $u \prec v$ if and only if $\alpha < \beta$ or $\alpha = \beta$ and $u \prec_\alpha v$. Each initial segment of $\prec$ has order-type less than $\kappa$, so $\prec$ has order-type exactly $\kappa$. Now suppose $v \in V$. Say $v \in Z_\alpha$, and let $\eta < c^*(v)$. By construction, there is $u \in X_{\alpha+1}$ such that $\{u,v\} \in E$ and $c^*(u) = \eta$ (in particular, $v_{\alpha_{v, \eta}}$ is there). If $u \in X_\alpha$, then $u \prec v$. If $u \in Z_\alpha$, then, since $c^*(u) < c^*(v)$, we have $u \prec_\alpha v$, so, again, $u \prec v$. Therefore, by recursion along the order $\prec$, one proves that the greedy coloring defined using $\prec$ is precisely $c^*$.
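A finite analogue of the key mechanism may help: if vertices are ordered so that the color classes of an optimal coloring come in increasing order, the greedy coloring uses at most (hence exactly) $\chi(G)$ colors. A small illustrative sketch (not part of the proof above):

```python
# Greedy coloring along a vertex ordering. If the ordering lists the
# color classes of an optimal proper coloring in increasing order,
# each vertex's already-colored neighbors lie in earlier (or equal)
# classes, so greedy never needs more than chi(G) colors.
def greedy_coloring(adj, order):
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# 5-cycle: chromatic number 3.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
optimal = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}       # a proper 3-coloring
order = sorted(adj, key=lambda v: optimal[v])  # classes in increasing order
result = greedy_coloring(adj, order)
assert max(result.values()) + 1 == 3           # greedy achieves chi(G)
```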
I would like to know how to compute the statistics of the discrete Fourier transform of a noise signal. To illustrate what I mean, I will first explain in detail a computation I have managed to do myself. Suppose we have a discrete time series of values $x_n$ with $n$ from 0 to $N-1$. Each $x_n$ is a random variable, uncorrelated with the others, and Gaussian distributed with width $\sigma$. If I define the discrete Fourier transform $$X_k = \frac{1}{N}\sum_{n=0}^{N-1} x_n e^{-2 \pi i n k / N}$$ then I find that $X_k$ is a complex random variable with real and imaginary parts Gaussian distributed with width $\sigma/\sqrt{2 N}$. I did the computation by using the fact that the distribution of a sum is the convolution of the distributions, etc. Now I want to know how to do this computation in the case that $x_n$ are correlated. How does one approach this problem? I can make the assumption that the process is Markovian.
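For the uncorrelated case described above, the claimed statistics are easy to check numerically. A sketch with numpy ($N$, $\sigma$, and the number of trials are arbitrary choices):

```python
import numpy as np

# Empirical check: for white Gaussian noise x_n with std sigma,
# the normalized DFT X_k = (1/N) sum_n x_n e^{-2 pi i n k / N}
# has real and imaginary parts with std sigma / sqrt(2N)
# (for bins k != 0, N/2, where the imaginary part vanishes).
rng = np.random.default_rng(0)
N, sigma, trials = 256, 1.5, 4000
x = rng.normal(0.0, sigma, size=(trials, N))
X = np.fft.fft(x, axis=1) / N           # note the 1/N normalization
k = 7                                    # any bin away from 0 and N/2
measured = X[:, k].real.std()
expected = sigma / np.sqrt(2 * N)
assert abs(measured - expected) / expected < 0.05
```

For correlated $x_n$ the same Monte-Carlo approach works with a correlated generator, which is a convenient way to check any analytic result one derives.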
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass rather than ace the class I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two distinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
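The multiplication rule above can at least be sanity-checked numerically before writing out the symbolic proof. A small sketch using exact rational arithmetic, with `mul` implementing the stated rule $(a,b)\otimes(c,d) = (ac+bd\delta,\; bc+ad)$ (the helper names are hypothetical; random samples are a check, not a proof):

```python
import random
from fractions import Fraction as F

# Represent a + b*sqrt(delta) as the pair (a, b) and multiply by the
# rule (a,b) * (c,d) = (a*c + b*d*delta, b*c + a*d), then spot-check
# associativity on random rational triples.
def mul(p, q, delta):
    a, b = p
    c, d = q
    return (a*c + b*d*delta, b*c + a*d)

random.seed(1)
delta = F(2)
for _ in range(100):
    x, y, z = [(F(random.randint(-9, 9)), F(random.randint(-9, 9)))
               for _ in range(3)]
    lhs = mul(mul(x, y, delta), z, delta)
    rhs = mul(x, mul(y, z, delta), delta)
    assert lhs == rhs   # associativity holds on every sample
```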
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is over powered...) Extensions is such a powerful idea. I wonder if there exists algebraic structure such that any extensions of it will produce a contradiction. O wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or deriving CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder how to show that is false by finding a finite sentence and procedure that can produce infinity but so far failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, minimising $|P(s)|$ proceeds as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that $0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}$. Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by the ratio test; therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'... and neither Rolle nor the mean value theorem needs the axiom of choice Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment typo: neither Rolle nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
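The partial sums described above (a Liouville-style series) can be computed exactly for any concrete $M$; a small sketch with exact rational arithmetic, taking $b=10$ as an illustrative choice:

```python
import math
from fractions import Fraction as F

# Partial sums S_M = sum_{k=1}^M 1/b^{k!}, computed exactly.
# They increase monotonically and are bounded above, which is the
# concrete content of the finitist construction sketched above.
b = 10
sums = []
s = F(0)
for k in range(1, 6):
    s += F(1, b**math.factorial(k))
    sums.append(s)

assert all(sums[i] < sums[i + 1] for i in range(len(sums) - 1))  # increasing
assert sums[-1] < F(1, 5)   # bounded above (the sum is about 0.110001...)
```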
In order to simulate the following equation using FDM $$u_t(t,x)-u_{xx}(t,x)=0, \quad (t,x) \in (0,1)\times (0,1)$$ $$(u_t(t,x)-u_{x}(t,x))\rvert_{x=0}=0, \quad t \in (0,1)$$ $$(u_t(t,x)+u_{x}(t,x))\rvert_{x=1}=0, \quad t \in (0,1)$$ $$u(0,x)=u_0, \quad x \in (0,1)$$ $$u(0,0) =\mu_0, \quad u(0,1)=\mu_1,$$ where $u_0$ is a given function and $\mu_0,\mu_1$ are constants. I'm struggling with the discretization to obtain the corresponding linear system. For the first equation, taking $u_j^n=u(t_n,x_j)$ and using the explicit Euler scheme, I found $$u_j^{n+1}=ru_{j-1}^n+(1-2r)u_j^n +ru_{j+1}^n,$$ with $r=\dfrac{\Delta t}{(\Delta x)^2}$, but I don't know how to discretize the second and third equations. For the $x$-derivatives I think I should use the centered formula by adding two ghost points as in Neumann b.c., and the same for the time derivatives, but this is a bit complicated and yields many equations. Maybe I should treat the equations like a coupled system through the boundary but I don't know how to do that. What is the right FDM discretization? Thanks for any help. PS: My background in FDM is very basic (just theoretical) and this is my first time simulating a PDE. So, what is your advice for doing this the right way? (methods, languages, programs, etc.)
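One possible approach (a sketch, not necessarily the "right" scheme being asked for): instead of ghost points, treat the two boundary equations with one-sided spatial differences, updating the boundary values with the same explicit Euler time step as the interior. All grid parameters below are illustrative choices:

```python
import numpy as np

# Explicit Euler FDM for u_t = u_xx on (0,1), with the dynamic
# boundary conditions u_t - u_x = 0 at x=0 and u_t + u_x = 0 at x=1
# handled by one-sided (upwind) differences instead of ghost points.
J = 50
dx = 1.0 / J
dt = 0.4 * dx**2          # r = dt/dx^2 = 0.4 < 1/2 for stability
r = dt / dx**2
x = np.linspace(0.0, 1.0, J + 1)
u = np.exp(-50 * (x - 0.5)**2)   # a sample initial condition u_0(x)

for _ in range(200):
    un = u.copy()
    # interior: u_j^{n+1} = r u_{j-1}^n + (1-2r) u_j^n + r u_{j+1}^n
    u[1:-1] = r*un[:-2] + (1 - 2*r)*un[1:-1] + r*un[2:]
    # x=0: u_t = u_x   -> forward difference in space
    u[0] = un[0] + dt * (un[1] - un[0]) / dx
    # x=1: u_t = -u_x  -> backward difference in space
    u[-1] = un[-1] - dt * (un[-1] - un[-2]) / dx

assert np.all(np.isfinite(u))
assert u.max() <= 1.0 + 1e-9   # discrete maximum principle holds for r <= 1/2
```

Each update is a convex combination of old values when $r \le 1/2$, so the scheme is stable and obeys a discrete maximum principle, which is an easy way to sanity-check the boundary treatment.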
My plan is to solve the heat equation in the right half portion of the domain, while having the left half completely isolated with constant temperature. To do so, I model the left half with a very low conductivity. I then apply a heat flux to the top side of the right half and a Dirichlet BC to the bottom side. For the left portion, I simply apply a Dirichlet BC to the left side. I figured that the left portion would be untouched and with constant temperature, given its low conductivity. $$ \nabla \cdot \rho \nabla T = 0 ~ \text{in} ~\Omega=[0,1]\times[0,1] $$ $$ \rho = \begin{cases} 1 ~ \text{if} ~x > 0.5 \\ 10^{-8} ~ \text{else} \end{cases} $$ Boundary conditions on the right half $$ \nabla T \cdot n = 5 ~\text{in} ~ \Gamma_N = \{(x,y)~ \forall ~ x \gt 0.5 ~\text{and}~ y= 1\} $$ $$ T = 0 ~\text{in} ~ \Gamma_{D_1} = \{(x,y)~ \forall ~ x \gt 0.5 ~\text{and}~ y= 0\} $$ Boundary condition on the left half $$ T = 0 ~\text{in} ~ \Gamma_{D_2} = \{(x,y)~ \forall ~ x = 0 \} $$ I am solving this problem with first order Lagrange elements. I was expecting to see my solution in the right half and then having the left half mostly constant with a value of 0 and rapidly changing close to the interface for continuity. This would make sense given that the left half is mostly an insulating material. What I am seeing instead is a smooth transition from the interface to the boundary. Is there something wrong with my mathematical implementation? 
Attaching the FEniCS code if it can help:

```python
from dolfin import *

mesh = UnitSquareMesh(100, 100)
V = FunctionSpace(mesh, 'CG', 1)
t, w = TrialFunction(V), TestFunction(V)

Rho = FunctionSpace(mesh, 'DG', 0)
rho = Function(Rho)
rho.interpolate(Expression("(x[0] > 0.5) + 1e-8", domain=mesh, degree=1))
File("test_rho.pvd") << rho

a = inner(rho*grad(t), grad(w))*dx

top = CompiledSubDomain("x[1] > 1 - 0.01 && x[0] >= 0.5")
bottom_right = CompiledSubDomain("x[0] >= 0.5 && x[1] < 0.01")
left = CompiledSubDomain("x[0] <= 0.001")

meshfunc_ds = MeshFunction("size_t", mesh, mesh.topology().dim() - 1)
TOP, LEFT, BOTTOMRIGHT = 1, 2, 3
top.mark(meshfunc_ds, TOP)
left.mark(meshfunc_ds, LEFT)
bottom_right.mark(meshfunc_ds, BOTTOMRIGHT)
File("test_measures.pvd") << meshfunc_ds

ds = Measure("ds")(subdomain_data=meshfunc_ds)
L = Constant(5.0)*w*ds(TOP)

bc1 = DirichletBC(V, Constant(0.0), meshfunc_ds, BOTTOMRIGHT)
bc2 = DirichletBC(V, Constant(0.0), meshfunc_ds, LEFT)

t_sol = Function(V)
solve(a == L, t_sol, bcs=[bc1, bc2])
File("test_temperature.pvd") << t_sol
```
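A 1-D analogue may explain the observed smooth transition: inside the insulating half, $\nabla\cdot(\rho\nabla T)=0$ with *constant* $\rho$ reduces to $T''=0$ (the constant $\rho$ cancels), so $T$ varies linearly between the Dirichlet value and the interface temperature no matter how small $\rho$ is. Low conductivity limits the *flux*, not the temperature variation. A sketch with a simple finite-volume solve (all parameters illustrative):

```python
import numpy as np

# 1-D two-material problem: -d/dx(rho dT/dx) = 0 on [0,1] with
# T(0) = 0, T(1) = 1, rho = 1e-8 on the left half and 1 on the right.
# Flux continuity forces nearly all the temperature drop into the
# low-conductivity half, where T still varies smoothly (linearly).
n = 200
rho = np.where(np.arange(n) < n // 2, 1e-8, 1.0)  # cell conductivities

A = np.zeros((n + 1, n + 1))
for i in range(1, n):
    A[i, i-1] = -rho[i-1]
    A[i, i]   = rho[i-1] + rho[i]
    A[i, i+1] = -rho[i]
A[0, 0] = A[n, n] = 1.0          # Dirichlet rows
b = np.zeros(n + 1)
b[n] = 1.0
T = np.linalg.solve(A, b)

# nearly all of the drop happens across the insulating half:
assert T[n // 2] > 0.999
```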
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r , \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\, \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is? Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$, the point $z = 0$ a removable singularity a pole an essential singularity a non isolated singularity Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - ..........$ $$ = (1-y), where\ \ y=\frac{1}{2z^2}+\frac{1}{4!... I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $log(x+i)$ to be $$log(x+i)=log(1+x^2)+i(\frac{pi}{2} - arctanx)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $log$ function on it Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
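Assuming the standard defining relations $x = \pm\sqrt{R^2-b^2}\,\sin\sigma$, $z = R\cos\sigma$, one way to visualise these coordinates is to note that curves of constant $R$ are ellipses with foci at $z = \pm b$ on the $z$-axis. A quick numerical check (parameter values arbitrary):

```python
import numpy as np

# Curves of constant R in planar prolate spheroidal coordinates are
# ellipses with semi-axes sqrt(R^2 - b^2) (in x) and R (in z), whose
# foci sit at z = +/- b: the sum of focal distances is constant = 2R.
b = 1.0
R = 2.5
sigma = np.linspace(0.01, np.pi - 0.01, 200)
x = np.sqrt(R**2 - b**2) * np.sin(sigma)
z = R * np.cos(sigma)
d1 = np.hypot(x, z - b)   # distance to focus (0,  b)
d2 = np.hypot(x, z + b)   # distance to focus (0, -b)
assert np.allclose(d1 + d2, 2 * R)
```

Similarly, curves of constant $\sigma$ are the confocal hyperbolae, so $(R,\sigma)$ plays the role that $(r,\theta)$ plays for circles and rays.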
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all the three properties follow if we can prove (1) because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$ proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $∼$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I've ever seen with so many 000000s and 999999s But I think that to prove the implication for transitivity an application of the inference rule MP seems to be necessary. But that would mean that for logics for which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms). This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdot\cdot\cdot+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$I tried it asAs $|f(z)|\leq 1$ for $|z|\leq 1$ we must have coefficient $a_{0},a_{1}\cdot\cdot\cdot a_{n}$ to be zero because by triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
Let's solve the exercise. It's asking for your awareness of periodicity in the DFT and some trigonometry. First, because of the question content, let's state the following: $$\delta[n-d] \overset{DTFT} \longleftrightarrow e^{-j \omega d } \overset{DFT} \longleftrightarrow e^{-j \frac{2 \pi}{N} d k } $$ Then, given our DFT $X[k]$, work it backwards into $x[n]$ as a sum of delayed impulses. $$ \begin{align} X[k] &= 2 \cos(\frac{\pi k}{2}) + 2 \sin(\frac{14 \pi k}{8}) + \sin(\pi k) \\ X[k] &= 2 \cos(\frac{\pi k}{2}) + 2 \sin( \left( 2\pi - \frac{2\pi}{8} \right) k) + 0 \\ X[k] &= 2 \cos(\frac{\pi k}{2}) + 2 \sin(- \frac{2\pi}{8} k) \\ X[k] &= 2 \cos(\frac{\pi k}{2}) - 2\sin(\frac{\pi}{4} k) \\ \end{align}$$ We have used basic trigonometry to reach the above line. Now let's use Euler's formula to expand the cosines and sines into complex exponentials, so that we can use the correspondence of the first line to deduce the time-domain impulses. $$ \begin{align} X[k] &= 2 \cos(\frac{\pi k}{2}) - 2\sin(\frac{\pi}{4} k) \\ X[k] &= 2 \left( \frac{ e^{j \frac{\pi}{2} k} + e^{-j \frac{\pi}{2} k} }{2} \right) -2 \left( \frac{ e^{j \frac{\pi}{4} k} - e^{-j \frac{\pi}{4} k} }{2j} \right) \\X[k] &= \left( e^{j \frac{\pi}{2} k} + e^{-j \frac{\pi}{2} k} \right) +j \left( e^{j \frac{\pi}{4} k} - e^{-j \frac{\pi}{4} k} \right) \\ \end{align}$$ Now the correspondence between $\delta[n-d]$ and its $N=8$ point DFT is: $$\delta[n-d] \overset{DTFT} \longleftrightarrow e^{-j \omega d } \overset{DFT} \longleftrightarrow e^{-j \frac{2 \pi}{8} d k } = e^{-j \frac{\pi}{4} d k } $$ And by using this relationship we can deduce from the last line that the impulses are: $$ \begin{align} X[k] &= e^{j \frac{\pi}{2} k} + e^{-j \frac{\pi}{2} k} +j e^{j \frac{\pi}{4} k} - j e^{-j \frac{\pi}{4} k} \\ x[n] &= \delta[n+2] + \delta[n-2] + j\delta[n+1] -j \delta[n-1] \end{align}$$ One last step remains to complete the answer: representation of $x[n]$ in the range $0 \leq n \leq 7$, as it is the valid range for the formal
relationship between $x[n]$ and its $N$-point DFT $X[k]$. Then we should map those impulses outside of this range into it. As you can see, $\delta[n+1]$ and $\delta[n+2]$ are outside of this range and should be mapped into $0 \leq n \leq 7$ by shifting them right by $N=8$. This shift is the consequence of the inherent periodicity in both $x[n]$ and $X[k]$, as $x[n-N] = x[n]$ etc. So we deduce that $x[n]$ in the range $0 \leq n \leq 7$ is: $$x[n] = \delta[n+2-8] + \delta[n-2] + j\delta[n+1-8] -j \delta[n-1]$$$$x[n] = \delta[n-6] + \delta[n-2] + j\delta[n-7] -j \delta[n-1]$$ which is shown in option (b).
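As a sanity check on the derivation above, one can compare the 8-point DFT of the deduced $x[n]$ against the given $X[k]$ numerically; a quick sketch using numpy:

```python
import numpy as np

# Deduced sequence for n = 0..7:
# x[n] = delta[n-6] + delta[n-2] + j*delta[n-7] - j*delta[n-1]
x = np.array([0, -1j, 1, 0, 0, 0, 1, 1j])

# The DFT given in the exercise, evaluated at integer k = 0..7
k = np.arange(8)
X_given = (2 * np.cos(np.pi * k / 2)
           + 2 * np.sin(14 * np.pi * k / 8)
           + np.sin(np.pi * k))

# np.fft.fft uses the same convention: X[k] = sum_n x[n] e^{-j 2 pi n k / N}
X_fft = np.fft.fft(x)

print(np.allclose(X_fft, X_given))  # True
```

The two agree to floating-point precision, confirming the impulse locations and signs.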
Mathematics > Algebraic Geometry Title: On Landau-Ginzburg systems and $\mathcal{D}^b(X)$ of various toric Fano manifolds with small Picard group (Submitted on 1 May 2016 (v1), last revised 19 May 2017 (this version, v2)) Abstract: For a toric Fano manifold $X$ denote by $Crit(X) \subset (\mathbb{C}^{\ast})^n$ the solution scheme of the Landau-Ginzburg system of equations of $X$. Examples of toric Fano manifolds with $rk(Pic(X)) \leq 3$ which admit full strongly exceptional collections of line bundles were recently found by various authors. For these examples we construct a map $E : Crit(X) \rightarrow Pic(X)$ whose image $\mathcal{E}=\left \{ E(z) \vert z \in Crit(X) \right \}$ is a full strongly exceptional collection satisfying the M-aligned property. That is, under this map, the groups $Hom(E(z),E(w))$ for $z,w \in Crit(X)$ are naturally related to the structure of the monodromy group acting on $Crit(X)$. Submission history: From: Yochay Jerby [view email] [v1] Sun, 1 May 2016 20:17:29 GMT (27kb) [v2] Fri, 19 May 2017 03:14:32 GMT (27kb)
Written in Ex. 8, Ch. 9.1 of the book Advanced Calculus by P. M. Fitzpatrick: Suppose that $\sum\limits_{k=1}^\infty a_k$ and $\sum\limits_{k=1}^\infty b_k$ are series of positive numbers such that $$\lim_{n \to \infty} \frac{a_n}{b_n}=l \ \ \ \text{and} \ l>0.$$ Prove that the series $\sum\limits_{k=1}^\infty a_k$ converges iff the series $\sum\limits_{k=1}^\infty b_k$ converges. Am I correct with the following (sketch of a) proof? 1- For a given $\epsilon_1$ there is $N_1$ such that $\left|\frac{a_n}{b_n} - l\right| < \epsilon_1$ for all $n \ge N_1$. 2- Since the series $\sum\limits_{k=1}^\infty b_k$ converges, for a given $\epsilon_2$ there is $N_2$ such that $\left|b_{n+1}+\dots+b_{n+k}\right|< \epsilon_2$ for all $n \ge N_2$ and for all natural numbers $k$. 3- Define $N = \max {\{N_1,N_2}\}$. 4- From Step (1), $a_{n+k} < (\epsilon_1+l) b_{n+k}$ for all $n \ge N$ and for all natural numbers $k$. [Also, the $a_i$'s and $b_i$'s are all positive.] Thus $a_{n+1}+\dots+a_{n+k} < (\epsilon_1+l) (b_{n+1}+\dots+b_{n+k})< (\epsilon_1+l)\epsilon_2 =: \epsilon_3$. Then the convergence of the series $\sum\limits_{k=1}^\infty b_k$ implies the convergence of the series $\sum\limits_{k=1}^\infty a_k$, by the Cauchy criterion. 5- For the reverse implication, we use the fact that $\lim\limits_{n \to \infty}\frac{a_n}{b_n}=l \iff \lim\limits_{n \to \infty}\frac{b_n}{a_n}= \frac{1}{l}= l' >0$ and repeat the process, this time with the roles of $a$ and $b$ exchanged. 6- The Quotient Property for Sequences holds when the limit in the denominator is nonzero; since the limit $l$ in the denominator is nonzero here, we may use the Quotient Property for Sequences. Thanks.
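A numerical illustration of the limit comparison test, not a proof: the concrete sequences below are chosen just for demonstration, with $a_n/b_n \to l = 2 > 0$ and both series convergent.

```python
# Illustration: a_n = 2/n^2 and b_n = 1/n^2 + 1/n^3, so a_n/b_n -> l = 2 > 0.
# Both series converge, consistent with the statement of the exercise.
N = 200_000
a_partial = sum(2 / n**2 for n in range(1, N + 1))
b_partial = sum(1 / n**2 + 1 / n**3 for n in range(1, N + 1))
ratio = (2 / N**2) / (1 / N**2 + 1 / N**3)

print(round(ratio, 3))      # 2.0  (the ratio a_n/b_n approaches l = 2)
print(round(a_partial, 2))  # 3.29 (partial sums approach pi^2/3)
print(round(b_partial, 2))  # 2.85 (partial sums approach pi^2/6 + zeta(3))
```

Swapping in a divergent $b_n$ (say $1/n$) would force $a_n/b_n \to 0$ or $\infty$ for any convergent $\sum a_k$, which is the content of the hypothesis $l>0$.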
For simplicity, in the following we set the electric charge $e=1$ and consider a lattice spinless free electron system in an external static magnetic field $\mathbf{B}=\nabla\times\mathbf{A}$ described by the Hamiltonian $H=\sum_{ij}t_{ij}c_i^\dagger c_j$, where $t_{ij}=\left | t_{ij} \right |e^{iA_{ij}}$ with the corresponding lattice gauge-field $A_{ij}$. As we know the transformation $\mathbf{A}\rightarrow \mathbf{A}+\nabla\theta$ does not change the physical magnetic field $\mathbf{B}$, and the induced transformation in Hamiltonian reads $$H\rightarrow H'=\sum_{ij}t_{ij}'c_i^\dagger c_j$$ with $t_{ij}'=e^{i\theta_i}t_{ij}e^{-i\theta_j}$. Now my confusion point is: Do these two Hamiltonians $H$ and $H'$ describe the same physics? Or do they describe some same quantum states? Or what common physical properties do they share? I just know $H$ and $H'$ have the same spectrum, thank you very much.
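The statement that $H$ and $H'$ share a spectrum is just the fact that $H' = UHU^\dagger$ with $U$ unitary. A quick numerical sketch (the lattice size, hopping amplitudes and phases below are arbitrary toy data, not a specific physical model):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6  # toy lattice with 6 sites

# Hermitian hopping matrix: t_ij = |t_ij| e^{i A_ij}, with t_ji = t_ij^*
t = rng.random((N, N)) * np.exp(1j * rng.uniform(-np.pi, np.pi, (N, N)))
H = t + t.conj().T

# Gauge transformation t'_ij = e^{i theta_i} t_ij e^{-i theta_j},
# i.e. H' = U H U^dagger with the diagonal unitary U = diag(e^{i theta_i})
theta = rng.uniform(-np.pi, np.pi, N)
U = np.diag(np.exp(1j * theta))
Hp = U @ H @ U.conj().T

# Unitary conjugation leaves the spectrum invariant
same_spectrum = np.allclose(np.linalg.eigvalsh(H), np.linalg.eigvalsh(Hp))
print(same_spectrum)  # True
```

Beyond the spectrum, all gauge-invariant quantities (Wilson loops $\prod e^{iA_{ij}}$ around closed plaquettes, and hence the flux $\mathbf{B}$) coincide, while gauge-dependent quantities such as the phases of individual $t_{ij}$ do not.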
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric. 
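The rough estimate in that last message can be spelled out in two lines (same independence assumption as above, ignoring overlap corrections from renewal theory):

```python
# Probability that a fixed 7-letter word appears at a given position
# in a uniformly random stream of letters a-z
p = (1 / 26) ** 7

# Number of positions needed on average before the word shows up is ~1/p
print(26 ** 7)        # 8031810176, i.e. ~8 billion characters
print(f"{p:.3e}")     # ~1.245e-10, matching the 1.2e-10 estimate above
```

So "a billion characters" is the right order of magnitude, though the expected wait is closer to eight billion.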
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry, if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry, so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap) I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
Let me first run through the setting of my question in an example I understand well; that of modular curves. If $Y_1(N)$ denotes the usual modular curve over the complexes, the quotient of the upper half plane by the congruence subgroup $\Gamma=\Gamma_1(N)$, then there are two kinds of sheaves that one often sees showing up in the theory of automorphic forms in this setting: 1) Locally constant sheaves. The ones showing up typically come from representations of $\Gamma$ of the form $Symm^{k-2}(\mathbf{C}^2)$, with $\Gamma$ acting in the obvious way on $\mathbf{C}^2$. These sheaves---call them $V_k$---are related to classical modular forms of weight $k$ via the Eichler-Shimura correspondence. They only exist for $k\geq2$ (weight 1 forms are not cohomological) and representation-theoretically the sheaves are associated to representations of the algebraic group $SL(2)$ (the reason one starts in weight 2 rather than weight 0 is that there is a correction factor of "half the sum of the positive roots"). 2) Coherent sheaves. The ones showing up here are powers $\omega^k$ of a canonical line bundle $\omega$ coming from the universal elliptic curve. The global sections of $\omega^k$ (which are bounded at the cusps) are classical modular forms of weight $k$. Although there are no classical modular forms of negative weight, the sheaf $\omega^k$ still makes sense for $k<0$ (in contrast to case 1 above). I am much vaguer about what is conceptually going on here. I have it in my mind that here $k$ is somehow a representation of the group $SO(2,\mathbf{R})$. Now my question: what is the generalisation of this to arbitrary, say, PEL Shimura varieties? Part (1) I understand: I can consider algebraic representations of the reductive group I'm working with and for each such gadget I can make a locally constant sheaf. But Part (2) I understand less. I am guessing I can construct a big vector bundle on my moduli space coming from the abelian variety.
Now, given some representation of some group or other, I can build coherent sheaves somehow, possibly by "changing the structure group" somehow. For which representations of which group does this give me a coherent sheaf on the moduli space? Basically---what is the general yoga for supplying natural coherent sheaves on Shimura varieties, which specialises to the construction of $\omega^k$ in the modular curve case, and which explains why $\omega^k$ exists even for $k<0$?
I understand the principle of less airflow, less control, but why is that the case? Because moments of inertia don't change with speed. Control effectiveness means that the controls effect a change in the balance of moments which results in the desired attitude change. The smaller the control deflection for the same change in attitude, the higher their effectiveness. If $\ddot{\Theta}$ is the pitch acceleration, $∆F_H$ the force change on the horizontal tail due to a control deflection, $x$ the lever arm of that control around the center of gravity and $I_y$ the moment of inertia around the lateral axis, the formula for $\ddot{\Theta}$ is: $$\ddot{\Theta} = \frac{∆F_H\cdot x}{I_y}$$ Both $x$ and $I_y$ are fixed, so only $∆F_H$ has the potential to increase pitch acceleration. $∆F_H$ is proportional to Deflection angle $\eta_H$ Tail size $S_H$ (again fixed) dynamic pressure $q = \frac{v^2\cdot \rho}{2}$ A given object will change its attitude more quickly when more force can be created. Therefore, more speed $v$ means more force change and a higher angular acceleration for the same deflection. When deflected, the control surfaces (ailerons, elevator, rudder) cause an aerodynamic moment about the Aerodynamic Centre. A moment has a moment arm and needs to have a length reference - the aerodynamic moments are defined with reference to wing dimensions: wing span for rolling and yawing moments, and Mean Aerodynamic Chord for pitching moments. If we have a look at the pitching moment P: $$ P = C_{r_{\delta e}} \cdot \delta_e \cdot q \cdot S \cdot MAC$$ With: $C_{r_{\delta e}}$ = elevator coefficient (dimensionless) $\delta_e$ = elevator deflection $q$ = dynamic pressure = $\frac {1}{2} \cdot \rho \cdot V^2$ $S$ = wing area MAC = Mean Aerodynamic Chord $C_{r_{\delta e}}$, $S$ and MAC are constants. So: the pitching moment of the aircraft is proportional to elevator deflection, and to the square of the airspeed.
Fly twice as fast, and the pitching moment from a certain elevator deflection will be four times as high. Basically, what keeps your plane suspended above the ground despite gravity pulling it to the surface is the fact that your aircraft constantly pushes (and pulls) air molecules downwards; Newton's third law says that this generates an equal and opposing (i.e. upward) force on your aircraft. In straight and level flight this force is due to the positive angle of attack that the wings make with the relative wind (NOT THE FLIGHT PATH), which essentially forces air molecules downwards: molecules below the wing are deflected downwards along the bottom of the wing while molecules above the wing are pulled downwards along the top surface of the wing as it moves through them. When you go slower you deflect fewer air molecules downwards per unit time, which demands a higher angle of attack in order to keep you suspended; this generally translates to more elevator deflection needed on the pilot's part, or in other words: your controls are less effective. Control authority comes from the size of moments you can generate, which result from forces acting on the plane (the elevator, the ailerons or rudder), which come from pressure differences, which have a squared relation to velocity. If the airflow speed halves, your control authority gets cut in 4. If the airflow speed doubles, you get 4 times the control authority, etc. Here's further explanation if anything isn't quite clear. For control authority, you need to be able to apply your desired moment to the aircraft. Moments are forces acting at some distance from your rotation center. In an aircraft, say you want to roll the aircraft. The ailerons deflecting create a pressure difference between the right and left wings. This ends up as different forces acting basically at the ailerons, creating that roll moment. That's just the basics of roll. Now, for the airflow part.
First, I mentioned that for roll, it's those pressure differences caused by airflow over the wing and the aileron. The forces (the ones we're concerned with here) are created by the pressures on a surface. Remember, pressures are forces over areas. Now, let's look at the pressures. The equation for dynamic pressure is $\frac{\rho V^2}{2}$, that's the density times velocity squared over 2. We will assume our density isn't changing here, so in order to change the pressure, we change the velocity of the flow. BUT, it's squared. Without airflow, it's obvious that no roll moment is created because the velocity is zero. A plane on the ground with no airflow over the wing doesn't try to roll. In general, for roll, pitch and yaw authority (that's all of them), you can consider the feeling when you put your hand out of the window in a moving car. If you deflect air downward, your hand gets pushed up. In reality, it's the difference in pressure between the top and bottom, due to flow speeds. The faster you go, the more airflow, the greater the pressure differences you can generate, because of the squared relation. The slower you go, any flow speed differences might become negligible, meaning no pressure difference, thus no force acting. With some numbers, let's say that at a high speed, the elevator gets deflected. Let's say that the flow over the top is going 100 (arbitrary velocity units), and the flow under is going 110. The pressure on top will be $\frac{\rho}{2}*100^2 = \frac{\rho}{2}*10000$. Let's ignore the $\frac{\rho}{2}$ term, and just be aware that it linearly converts our number into a pressure. So we have 10000 pressure somethings on top, and we have 12100 pressure somethings on bottom (using the same formula). That means we have a net of 2100 pressure somethings pushing up on the tail now. Great, the tail has enough control authority to push the nose down as commanded. Now, let's slow the speeds down by a factor of ten.
The top air is going 10, and the bottom now goes 11. Let's see the pressure change compared to before. The pressure on top will be 100 pressure somethings, and on bottom it will be 121. The resulting net pressure acting on the tail is then 21 pressure units, 100 times less than before, even though the speeds only changed by a factor of ten. Now, you have 100 times less force acting on the tail (resulting in equivalently less moment), and you might not be able to control the pitch as much as you want to. Control surfaces are used to change the effective camber of the airfoil they are controlling. For example, a downward deflected aileron would increase the effective camber of a wing along the aileron's span. An increase in camber will increase the lift generated at a certain airspeed over that area of the wing, causing the desired rolling moment. It is PARTIALLY this change in developed lift that generates adverse yaw, requiring rudder to coordinate turns. At higher airspeeds, the wing is producing more total lift, and is therefore more responsive to changes in camber. Additionally, control surfaces also respond according to Newton's 3rd law - the ailerons deflect the passing airflow in a direction other than parallel to the wing skin, resulting in a reactive force causing roll. As with the camber change, this phenomenon becomes more pronounced at increased airspeeds, and conversely less pronounced with a reduction in airflow. A simplified explanation can be found in the FAA Pilot's Handbook. This can be explained by Newton's second law, $F = m\times a$, and third law: every force has an equal force in the opposite direction. $m$ here is the mass of airflow, $a$ is the acceleration caused to the airflow (seen as the changed direction of the airflow). A force equal to $a\times m$ is exerted on the control surface. More airflow, more mass, more force. The very same reason why an airplane stays in the air in the first place.
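The worked numbers above can be reproduced directly; the flow speeds are the arbitrary illustrative values from the text, and the density is a placeholder since it cancels in the ratio:

```python
rho = 1.0  # arbitrary units; cancels out of the ratio anyway

def q(v):
    # dynamic pressure: rho * v^2 / 2
    return 0.5 * rho * v ** 2

# fast case: flow at 110 under the tail surface, 100 over it
net_fast = q(110) - q(100)
# ten times slower: 11 under, 10 over
net_slow = q(11) - q(10)

print(net_fast / net_slow)  # 100.0: ten times slower -> 100 times less force
```

The factor of 100 for a factor-of-10 speed change is exactly the squared relation the answers keep returning to.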
LambertW function LambertW, also called ProductLog [1], is the solution \(W=W(z)\) of the equations \( \displaystyle \!\!\!\!\!\!\!\!\!\!\!\!\!\!\! (0\mathrm{a}) ~ ~ ~ W'= \frac{W}{(1+W)~z}\) \( \displaystyle \!\!\!\!\!\!\!\!\!\!\!\!\!\!\! (0\mathrm{b}) ~ ~ ~ W(0)=0\) The cut line is drawn from the singularity at \(-1/\mathrm e\) to \(-\infty\). Contents Relation with other functions LambertW is the inverse function of ArcLambertW. In TORI, ArcLambertW is also denoted zex, which is short for z multiplied by the exponential of z. Such a shortening is used in order to simplify the creation of such names as SuZex (superfunction of zex) and AbelZex (Abel function of zex). ArcLambertW is defined by \( \displaystyle \!\!\!\!\!\!\!\!\!\!\!\!\!\!\! (1) ~ ~ ~ \mathrm{ArcLambertW}(z)=z \cdot \exp(z)\) ArcLambertW is an entire function; for real values of the argument, its explicit plot is shown in the figure at right. LambertW is a holomorphic function on the whole complex plane except a half-line along the negative part of the real axis, id est, the domain of holomorphy is \(\mathbb C \backslash \{ x\in \mathbb R : x\le -1/\mathrm{e} \}\). If ProductLog or LambertW appears with two arguments, then the first of them is supposed to be an integer, indicating the branch of the function, extended beyond the cut \(z\in \mathbb R : x \le -1/\mathrm e\). The principal branch is assigned number zero, LambertW[0,z]=LambertW[z]. \( \mathrm{Tania}(z)=\mathrm{LambertW}\Big(\exp(1\!+\!z)\Big)~\), \(~\mathrm{WrightOmega}(z)=\mathrm{LambertW}\Big(\exp(z)\Big)~~\), \(~|\Im(z)|\!<\!\pi\) \( \mathrm{LambertW}(z)=\mathrm{Tania}\Big(\ln(z)-1\Big)=\mathrm{WrightOmega}\Big(\ln(z)\Big)\) \( \mathrm{Doya}_t(z)=\mathrm{LambertW}\big( z~ \mathrm{e} ^{z+t} \big)\) at least for moderate values \(|\Im(z)|\!<\!\pi\). Properties of ArcLambertW ArcLambertW is a real entire function; the expansion at zero follows from the expansion of the exponential, \( \displaystyle \!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
(4) ~ ~ ~ \mathrm{ArcLambertW}(z)=z+z^2+z^3/2+z^4/3!+z^5/4! ~+~ ...\) Along the real axis, \(\mathrm{ArcLambertW}(x)\) monotonically decreases from zero at \(x\!=\! - \infty\) to \(-1/\mathrm{e}\) at \(x\!=\!-1\), has a minimum at \(x\!=\!-1\), and monotonically increases at \(x\!\ge\! -1\). Accordingly, \(\mathrm{LambertW}(z)\) has a branch point at \(z\!=\!-1/\mathrm e\). Properties of LambertW The complex map of LambertW is shown in the figure at right. Let \(f=\mathrm{LambertW}(x\!+\!\mathrm i y)\); then, the lines of constant \(u\!=\!\Re(f)\) and the lines of constant \(v\!=\!\Im(f)\) are drawn in the \(x,y\) plane. The thick lines correspond to the integer values. The cut line corresponds to \(x\!\le\! - 1/\mathrm{e}\). The representation through the Tania function is used to plot the figure. Small argument The expansion at zero can be written by inverting series (4) above: \(\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! (5) ~ ~ \mathrm{LambertW}(z)=\) \(\displaystyle z\sum_{n=0}^{\infty} \frac{(n\!+\!1)^{n-1}}{n!} (-z)^n =\) \(\displaystyle z-z^2 +\frac{3 z^3}{2}\) \(\displaystyle -\frac{8z^4}{3}\) \(\displaystyle +\frac{125z^5}{24}\) \(\displaystyle -\frac{54 z^6}{5}\) \(\displaystyle +\frac{16807 z^7}{720} +... \) The series converges at \(|z|\!<\! 1/\mathrm{e} \!\approx\! 0.367879~\). With 48 terms, at \(|z|\!\le 0.2\), the sum provides at least 16 correct decimal digits. Near \(-1/\mathrm e\) The expansion at the branch point can be written as follows: \(\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
(6)\displaystyle ~ ~ ~ \mathrm{LambertW}\Big(\frac{-1}{\mathrm e} + \frac{t^2}{2\mathrm e}\Big)= \) \(-1 +t\) \(\displaystyle -\frac{t^2}{3}\) \(\displaystyle +\frac{11 t^3}{72}\) \(\displaystyle -\frac{43 t^4}{540}\) \(\displaystyle +\frac{769 t^5}{17280}\) \(\displaystyle - \frac{221 t^6}{8505}\) \(\displaystyle +\frac{680863 t^7}{43545600}\) \(\displaystyle -\frac{1963 t^8}{204120}\) \(\displaystyle +\frac{226287557 t^9}{37623398400}\) \(\displaystyle -\frac{5776369 t^{10}}{1515591000} +..\) Large argument For large values of \(|z|\), using notations \(L\!=\ln(z)\) and \(M\!=\!\ln^2(z)\) the expansion of \(\mathrm{LambertW}(z)\) can be written as follows: \(\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! (8) ~ ~ \mathrm{LambertW}(z)= \) \(\displaystyle L - M +\frac{M}{L} +\frac{M(-2+M)}{2L^2} \) \(\displaystyle +\frac{M(6-9M+2M^2)}{6L^3}\) \(\displaystyle +\frac{M(-12+36M-22M^2+3M^3)}{12 L^4}\) \(\displaystyle +\frac{M(60-300M+350M^2-125M^3+12M^4)}{60L^5}\) \(\displaystyle +\frac{M(-120+900M-1700M^2+1125M^3-274M^4+20 M^5)}{120L^6}\) \(\displaystyle +O\Big( \frac{M}{L} \Big)^7\) where the effective parameter of expansion seems to be \(\varepsilon=\ln^2(z)/\ln(z)\); at \(|\varepsilon|\ll 1\), the asymptotics (8) can be used for the evaluation of LambertW. In (8), as in other articles in TORI, the upper superscript after the function indicates the number of iterations, and the upper superscript after the argument and the closing parenthesis indicates the power that is assumed to be evaluated after the evaluation of the function. However, \(\ln^{-1}(z)=\exp(z)\) should not be confused with \(1/\ln(z)\), nor \(\ln^{2}(z)=\ln(\ln(z))\) should be confused with \(\ln(z)^2\), and so on. Approximation of LambertW In many cases, the series of expansion of LambertW converge slowly, if at all. For the evaluation of LambertW, it can be expressed through the Tania function: \(\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
(9) ~ ~ \mathrm{LambertW}(z)= \mathrm{Tania}(\ln(z)-1)\) This representation is good in the whole complex plane except the negative part of the real axis. Such a representation is used to plot the complex map shown in the figure. For evaluation between \(-1/\mathrm e\) and 0, the generator of the explicit plot of LambertW at the top uses the expansions (5) and (6) above; their implementations can be loaded from the figure. (I could not make a special numerical implementation for LambertW better than that through the Tania function; it is fast and precise.) References http://mathworld.wolfram.com/LambertW-Function.html Weisstein, Eric W. "Lambert W-Function." From MathWorld--A Wolfram Web Resource.
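For readers who want a self-contained evaluator on the real axis, below is a minimal sketch of Halley's iteration for the principal branch. The function name and the starting guesses are ad hoc choices for this illustration; this is not the Tania-based routine described above, and a library implementation should be preferred in practice.

```python
import math

def lambert_w(x, tol=1e-15):
    """Principal branch W(x) for real x >= -1/e, via Halley's iteration
    on f(w) = w e^w - x. Minimal sketch, not production code."""
    if x < -1 / math.e:
        raise ValueError("real principal branch requires x >= -1/e")
    # Crude starting guesses: log-based away from the branch point,
    # square-root behaviour (expansion (6)) near x = -1/e
    if x > -0.25:
        w = math.log(1 + x)
    else:
        w = -1 + math.sqrt(2 * (math.e * x + 1))
    for _ in range(50):
        ew = math.exp(w)
        f = w * ew - x
        # Halley step: w - f / (f' - f f'' / (2 f')),
        # with f' = e^w (w+1) and f'' = e^w (w+2)
        w_next = w - f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    return w

# Defining property W e^W = x
for x in [0.1, 1.0, 10.0]:
    w = lambert_w(x)
    assert abs(w * math.exp(w) - x) < 1e-12

print(lambert_w(1.0))  # the omega constant, approximately 0.567143
```

Near the branch point the quadratic convergence degrades, which is exactly why the article's expansions (5) and (6) are used there instead of iteration.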
Search Now showing items 1-10 of 26 Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... 
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Learn more about it The Cool Solutions Wiki is a Cool Solutions Best Practices Compendium written collaboratively by its readers. You're welcome to add to articles, or start new ones. You are even free to copy, change or distribute articles. This model is called a wiki. A Wiki or wiki (pronounced "wicky", "weekee" or "veekee") is a website (or other hypertext document collection) that allows a user to add content, as on an Internet forum, but also allows that content to be edited by anybody. We will be creating OPEN CALL article topics in the Cool Wiki and instead of emailing the responses in, you can just click Edit, and add your ideas to the article itself. Occasionally we will harvest a particularly excellent or useful version of an article and run it as a regular article in Cool Solutions. This is called "freezing" an article, and it keeps that version stable by halting the change process on it. The wiki version of the article, however, can continue to be edited and modified, and will be part of that ongoing cycle of freezing content in the Cool Solutions Vault. We will also be bringing our most popular articles from the Cool Solutions Vault into the wiki so they can be updated, expanded, and modified by the community. This is called "thawing" an article. Contents 1 How Do I Edit a Page? 2 Change Control 3 Minor edits 4 Summary field for edits 5 The wiki markup 6 New section 7 Links, URLs 8 Images, Media 9 Tables 10 Character formatting 11 Variables 12 Page protection 13 Editing using an external editor How Do I Edit a Page? Editing a Cool Wiki page is very easy. Simply click on the " Edit" tab at the top of a Wiki page. This will bring you to a page with a text box containing the editable text of that page. If you want to experiment, go try it on this sample page. When you've finished, press " Show preview" to see how your changes will look. If you're happy with what you see, then press " Save" and your changes will be immediately applied to the article. 
You can also click on the " Discussion" tab to see the corresponding " Talk" page, which contains comments about the page from other Cool Wiki users. Click on the " +" tab (or " Edit this page") to add a comment. Please do not vandalize the other articles in the Cool Wiki -- it's a waste of time for everyone, and generally considered to be in the worst of taste. Change Control Wikis generally follow a philosophy of making it easy to fix mistakes, rather than making it hard to make them. Thus, while wikis are very open, they also provide various means to verify the validity of recent additions to the body of pages. The most prominent one on almost every wiki is the so-called Recent Changes page, which displays a list of either a specific number of recent edits or a list of all edits that have been made within a given timeframe. Some wikis allow the list to be filtered so that minor edits - or edits that have been made by automatic importing scripts ("bots") - can be excluded. From the change log, two other functions are accessible in most wikis: the revision history, which shows previous versions of the page, and the diff feature, which can highlight the changes between two revisions. The revision history allows an editor to open and save a previous version of the page and thereby restore the original content. The diff feature can be used to decide whether this is necessary or not. A regular user of the wiki can view the diff of a change listed on the "Recent changes" page and, if it is an unacceptable edit, consult the history to restore a previous revision. Minor edits When editing a page, a logged-in user has the option of flagging the edit as a "minor edit". When to use this is somewhat a matter of personal preference. The rule of thumb is that an edit of a page that consists of spelling corrections, formatting, and minor rearranging of text should be flagged as a "minor edit". 
A major edit is basically something that makes the entry worth relooking at for somebody who wants to watch the article rather closely, so any "real" change counts, even if it is a single word. This feature is important, because users can choose to hide minor edits in their view of the Recent Changes page, to keep the volume of edits down to a manageable level. The reason for not allowing a user who is not logged in to mark an edit as minor is that vandalism could then be marked as a minor edit, in which case it would stay unnoticed longer. This limitation is another reason to log in.

Summary field for edits

When you edit a page in the wiki, you should take a minute with each revision to provide a short summary in the Summary field describing the changes you made. This significantly helps someone else get a quick overview of the changes made to a page. It is very "un-user-friendly" to see a long list of versions without any idea of what was changed in each revision, unless you use the "diff" function.

The wiki markup

In the left column of the table below, you can see what effects are possible. In the right column, you can see how those effects were achieved. In other words, to make text look like it looks in the left column, type it in the format you see in the right column. You may want to keep this page open in a separate browser window for reference. If you want to try out things without danger of doing any harm, you can do so in the Sandbox.

Sections, paragraphs, lists and lines

What it looks like / What you type

Start your sections with header lines:

== New section == or <h2> New section </h2>
=== Subsection === or <h3> Subsection </h3>
==== Sub-subsection ==== or <h4> Sub-subsection </h4>

A single newline has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the diff function. But an empty line starts a new paragraph.
A single newline has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the ''diff'' function. But an empty line starts a new paragraph. You can break lines without starting a new paragraph. You can break lines<br> without starting a new paragraph. * Lists are easy to do: ** start every line with a star ** more stars means deeper levels # Numbered lists are also good ## very organized ## easy to follow * You can even do mixed lists *# and nest them *#* like this ; Definition list : definition (1) : definition (2) : definition (3) before the colon improves parsing. A manual newline starts a new paragraph. : A colon indents a line or paragraph. A manual newline starts a new paragraph. IF a line starts with a space THEN it will be formatted exactly as typed; in a fixed-width font; lines won't wrap; ENDIF this is useful for: * pasting preformatted text; * algorithm descriptions; * program source code * ascii art; WARNING If you make it wide, you force the whole page to be wide and hence less readable. Never start ordinary lines with spaces. IF a line starts with a space THEN it will be formatted exactly as typed; in a fixed-width font; lines won't wrap; ENDIF this is useful for: * pasting preformatted text; * algorithm descriptions; * program source code * ascii art; <center>Centered text.</center> A horizontal dividing line: above and below. Useful for separating threads on Talk pages. A horizontal dividing line: above ---- and below. Links, URLs What it looks like What you type To reference other WIKI pages, enclose the page name with double square brackets. For example Novell Industry Standards and Initiatives. To reference other WIKI pages, enclose the page name with double square brackets. For example [[Novell Industry Standards and Initiatives]]. Trivia: Link to a section on a page, e.g. 
Novell Industry Standards and Initiatives#Calendar (links to non-existent sections aren't really broken; they are treated as links to the page, i.e. to the top) [[Novell Industry Standards and Initiatives#Calendar]].

To have a different name displayed: To have a different name displayed: [[CoP - Open Source |Open Source Community of Practice]].

Endings are blended into the link: testing, genes Endings are blended into the link: [[test]]ing, [[gene]]s

When adding a comment to a Talk page, you should sign it. You can do this by adding three tildes for your user name: or four for user name plus date/time: When adding a comment to a Talk page, you should sign it. You can do this by adding three tildes for your user name: : ~~~ or four for user name plus date/time: : ~~~~

The weather in London is a page that doesn't exist yet. the naming conventions page for your project. everyone correctly links to it. [[The weather in London]] is a page that doesn't exist yet.

Redirect one article title to another by putting text like this (#Redirect) in its first line. #REDIRECT [[United States]]

For a special way to link to the page on the same subject in another language or on another wiki, see MediaWiki User's Guide: Interwiki linking. [[MediaWiki User's Guide: Interwiki linking]]

External Links: Novell External Links: [http://www.novell.com Novell]

Or just give the URL: http://www.novell.com. If a URL contains a different character it should be converted; for example, ^ has to be written %5E (to be looked up in ASCII). Or just give the URL: http://www.novell.com.

Images, Media

What it looks like / What you type

A picture: The Novell Icon A picture: [[Image:Nov_red.gif]] or, with alternate text: [[Image:Nov_red.gif|The Novell Icon]]

Web browsers render alternate text when not displaying an image; for example, when the image isn't loaded, or in a text-only browser, or when spoken aloud.
Clicking on an uploaded image displays a description page, which you can also link directly to: Image:Nov_red.gif [[:Image:Nov_red.gif]]

To include links to non-image uploads such as sounds, or to images shown as links instead of drawn on the page, use a "media" link. [[media:Penguin_vs_butterfly_real.ram| Penguin Vs. Butterfly Video]] [[media:Identity_and_Provisioning_Applications_and_Management_Roadmap_rev.ppt | Identity and Provisioning Power Point]] [[media:Securing_Netware_Standard_FINAL.doc| Securing NetWare Word Document]] [[media:Pki_directory_pp.pdf | PKI PDF]]

ISBN 0123456789X

"What links here" and "Related changes" can be linked as: [[Special:Whatlinkshere/How to edit a page]] and [[Special:Recentchangeslinked/How to edit a page]]

Tables

There are two syntaxes available for table creation. The wiki has its own markup; for an example, look at the calendar on the Novell Industry Standards and Initiatives page. Some of the usual HTML elements are also available. To see how tables are created using HTML elements, look at the source for this page.

Character formatting

What it looks like / What you type

''Emphasize'', '''strongly''', '''''very strongly'''''.

You can also write <i>italic</i> and <b>bold</b> if the desired effect is a specific font style rather than emphasis, as in mathematical formulas: :<b>F</b> = <i>m</i><b>a</b>

A typewriter font for technical terms. A typewriter font for <tt>technical terms</tt>.

You can use small text for captions. You can use <small>small text</small> for captions.

You can strike out deleted material and underline new material. You can <strike>strike out deleted material</strike> and <u>underline new material</u>.

Accented and special characters:

À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï Ñ Ò Ó Ô Õ Ö Ø Ù Ú Û Ü ß à á â ã ä å æ ç è é ê ë ì í î ï ñ ò ó ô œ õ ö ø ù ú û ü ÿ ¿ ¡ « » § ¶ † ‡ • — £ ¤ ™ © ® ¢ € ¥

Subscript: x₂ Superscript: x² Subscript: x<sub>2</sub> Superscript: x<sup>2</sup> or x²

ε<sub>0</sub> = 8.85 × 10<sup>−12</sup> C² / J m. 1 [[hectare]] = [[1 E4 m²]]

Greek characters: α β γ δ ε ζ η θ ι κ λ μ ν ξ ο π ρ σ ς τ υ φ χ ψ ω Γ Δ Θ Λ Ξ Π Σ Φ Ψ Ω

Math symbols: ∫ ∑ ∏ √ − ± ∞ ≈ ∝ ≡ ≠ ≤ ≥ → × · ÷ ∂ ′ ″ ∇ ‰ ° ∴ ℵ ø ∈ ∉ ∩ ∪ ⊂ ⊃ ⊆ ⊇ ¬ ∧ ∨ ∃ ∀ ⇒ ⇔ → ↔

x² ≥ 0 true. <i>x</i><sup>2</sup> ≥ 0 true.

<math>\sum_{n=0}^\infty \frac{x^n}{n!}</math> <math>\sum_{n=0}^\infty \frac{x^n}{n!}</math>

<nowiki>Link → (<i>to</i>) the [[FAQ]]</nowiki>

<!-- comment here -->

Variables

Code / Effect
{{CURRENTMONTH}} 10
{{CURRENTMONTHNAME}} October
{{CURRENTMONTHNAMEGEN}} October
{{CURRENTDAY}} 14
{{CURRENTDAYNAME}} Monday
{{CURRENTYEAR}} 2019
{{CURRENTTIME}} 22:46
{{NUMBEROFARTICLES}} 625
__NOTOC__ No Table of Contents
__NOEDITSECTION__ Page edits only

NUMBEROFARTICLES: number of pages in the main namespace which contain a link and are not a redirect, i.e. number of articles, stubs containing a link, and disambiguation pages.

Page protection

Editing using an external editor

A web browser's text area editing capabilities can be quite constricting when making frequent or larger edits. Some browsers can be configured to allow text area editing using an external editor like vi, emacs, kwrite or wordpad.exe.

Mozilla Firefox: The ViewSourceWith Firefox extension allows the use of an external editor.
For a lattice $\mathbb Z^d$, denote by lattice line any line that contains two (and thus infinitely many) lattice points. For $2\le k<n$, define an $(n,k)$-coloring, or $C_d(n,k)$ for short, as an $n$-coloring $\mathbb Z^d\to \{0,1,\dots,n-1\}$ such that on each lattice line, exactly $k$ colors occur, each color being used for at least $2$ of its points. It seems obvious that a $C_d(n,k)$, if it exists, must always be periodic in all directions (i.e. on each lattice line), although I cannot see a simple argument for that. Possibly there are some parallels with the Hales-Jewett theorem. Is there an easy proof for the periodicity?

For a periodic coloring $C_d(n,k)$, let $D$ be a fundamental domain, i.e. a colored minimal cuboid such that its periodic continuation in all coordinate directions yields the given coloring. Some easy examples, all for $k=2$:

For a prime $p$, let $D$ be a $p\times p$ diagonal matrix with $r$ 1's and $p-r$ 2's on the diagonal, where $0<r<p$. This yields a $C_2(3,2)$. A $C_2(3,2)$ is also obtained by $D= \pmatrix{ 0 & 0 & 0 & 1\\ 0 & 2 & 0 & 2 \\ 0 & 0 & 0 & 1\\ 1 & 2 & 1 & 2 }$ or $D= \pmatrix{ 0 & 0 & 0 & 1\\ 1 & 2 & 1 & 2 }$. Note that the last $D$ is not square.

EDIT: this one doesn't work for $d>2$. For a prime $p$, take for $D$ the cube $\{0,1,\dots,p-1\}^d$ with a point $x\in D$ getting color $d-\#x$, where $\#x$ is the number of coordinates equal to zero. It is easy to check that this yields a $C_d(d+1,2)$ for each prime $p$; these colorings will be called "simplex constructions". The special case $d=2$ yields $C_2(3,2)$'s which are again non-isomorphic to the above ones, with e.g. $D= \pmatrix{ 0 & 1 & 1\\ 1 & 2 & 2 \\ 1 & 2 & 2 }$ for $p=3$.

A $C_d(2^d,2)$ is defined by a $2^d$-hypercube $D$ in which each color is used once (and it is not hard to see that all $C_d(2^d,2)$ are of this form). We may call this one the "rainbow construction" and note that it generalizes immediately to a $C_d(k^d,k)$.
For this one, $k$ does not need to be a prime!

Questions: For which triples $(d,n,k)$ do such colorings $C_d(n,k)$ exist? Which periods (i.e. side lengths of $D$) can occur for a given triple? Are there other examples where $D$ is not a hypercube?

It seems like there are no $C_d(n,2)$ for $n>d+1$ unless $n=2^d$. Is there an easy way to prove that?

Now for $k>2$, apart from the rainbow construction, it is much harder to find general constructions. I wonder if it is possible to tweak certain combinatorial designs in order to obtain a suitable $D$ or, for dimension $d=2$, to build one from an appropriate set of Latin squares. Or is there a construction for $k=3$ based on the above simplex constructions? As $k$ grows, searching for small $D$'s by hand feels a bit like solving a sudoku. While a $C_2(6,3)$ is still quite easy to find (e.g. $D= \pmatrix{ 0 & 1 & 2 & 0\\ 1 & 2 & 1 &3 \\ 0 & 1 & 2 & 0\\ 4& 3&4&5 }$), I have the impression that no $C_2(n,3)$ exists for $n>6$, except of course $n=9=3^2$.

The following question seems well motivated: For given naturals $d,k\ge2$, denote by $m(d,k)$ the largest $n<k^d$ such that a $C_d(n,k)$ exists. What are good lower and upper bounds for $m(d,k)$? By the above, we have supposedly $m(d,2)=d+1$ and $m(2,3)=6$.
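Returning to the rainbow construction above: it is easy to sanity-check by machine. Here is a minimal Python sketch (the function name and sampling ranges are my own choices) verifying that, for $d=2$ and $k=2$, the coloring given by the $2\times 2$ rainbow fundamental domain meets exactly two colors on every sampled lattice line:

```python
from math import gcd

# Color Z^2 periodically by the 2x2 "rainbow" fundamental domain:
# each of the 4 colors is used exactly once in D.
def color(x, y):
    return 2 * (x % 2) + (y % 2)

# Sample lattice lines p + t*(a, b) with primitive direction (a, b):
# each such line should meet exactly k = 2 colors.
for a in range(-3, 4):
    for b in range(-3, 4):
        if gcd(a, b) != 1:
            continue  # skip non-primitive and zero directions
        for px in range(2):
            for py in range(2):
                seen = {color(px + t * a, py + t * b) for t in range(-20, 20)}
                assert len(seen) == 2, (a, b, px, py, seen)
print("rainbow C_2(4,2): every sampled lattice line meets exactly 2 colors")
```

Of course the script only samples finitely many lines; the full claim follows since the color along the line $p+t(a,b)$ depends only on $t \bmod 2$, and a primitive direction satisfies $(a,b)\not\equiv(0,0) \pmod 2$.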
Hartogs' theorem says that every function whose undefined locus has codimension at least $2$ can be extended to the whole domain. I saw people saying this corresponds to the (S2) property of a ring, but I can't see why this is true. Can anybody explain this or give a heuristic argument?

Let $\mathscr F$ be a coherent sheaf on a noetherian scheme $X$ and assume that ${\rm supp}\,\mathscr F=X$. Let $Z\subset X$ be a subscheme of codimension at least $2$ and $U=X\setminus Z$. Let $\iota:U\hookrightarrow X$ denote the natural embedding and assume that $\mathscr F_x$ is $S_2$ for every $x\in Z$. Now the $S_2$ assumption implies that $$ \mathscr H^0_Z(X,\mathscr F)= \mathscr H^1_Z(X,\mathscr F)=0 $$ and the Hartogs type extension is equivalent to $$ \iota_*\iota^*\mathscr F\simeq \mathscr F. $$ Finally one has the exact sequence $$ \mathscr H^0_Z(X,\mathscr F) \to \mathscr F\to \iota_*\iota^*\mathscr F \to \mathscr H^1_Z(X,\mathscr F).$$ [See also this MO answer.]

This is an answer to a question of Karl in the comments to my first answer to this question. [EDIT: The following is a minimally simplified version of Proposition 3.3 of Hassett-Kovács04.]

Theorem. Let $X$ be a noetherian scheme, $r\in\mathbb N$, $Z\subseteq X$ a subscheme such that ${\rm codim}_XZ\geq r$, and $\mathscr F$ a coherent $\mathscr O_X$-module such that ${\rm supp}\,\mathscr F=X$ and $\mathscr F_x$ is $S_r$ for every $x\in Z$. Then $$\mathscr H^i_Z(X,\mathscr F)=0\quad\text{for $i=0,\ldots,r-1$}.$$

Proof. Let $x\in Z$ and notice that we have the following equality of functors: $$H^0_x = H^0_x\circ \mathscr H^0_Z,$$ which induces a Grothendieck spectral sequence $$E^{p,q}_2= H^p_x \circ \mathscr H^q_Z \Rightarrow H^{p+q}_x.$$ Now prove the statement using induction on $i$. Suppose $\exists\,\sigma\in\mathscr H^0_Z(X,\mathscr F)$, $\sigma\neq 0$. Let $x\in Z$ be the generic point of an irreducible component of ${\rm supp}\,\sigma$. Then $H^0_x(X, \mathscr H^0_Z(X,\mathscr F))\neq 0$ and hence $H^0_x(X,\mathscr F)\neq 0$.
But this contradicts the assumption that $\mathscr F_x$ is $S_r$. Now suppose that we already know that $$\mathscr H^i_Z(X,\mathscr F)=0\quad\text{for $i=0,\ldots,k-1$}$$ for some $k<r$, and assume that $\mathscr H^k_Z(X,\mathscr F)\neq 0$. By the same argument as above we find a point $x\in Z$ such that $E^{0,k}_2=H^0_x(X,\mathscr H^k_Z(X,\mathscr F))\neq 0$. Since it is an $E^{0,k}$ term, there are no differentials (including on later pages of the spectral sequence) mapping to this term, and all subsequent differentials mapping from this term map to something of the form $E^{p,q}$ with $0\leq q\leq k-1$. However, the latter are zero by the inductive hypothesis. Therefore $H^k_x(X,\mathscr F)\neq 0$, which is again a contradiction to the assumption that $\mathscr F_x$ is $S_r$. Q.E.D.

See also this MO answer.

Here's a somewhat more elementary argument that (S2) implies the Hartogs condition. More precisely, I will show that if $X$ is an (S2) noetherian scheme, then any rational function defined outside a closed subset of codimension two can be extended to the whole domain. (This extension is unique by definition of a rational function.)

Assume, by way of contradiction, that $X$ is an (S2) noetherian scheme and $f$ is a rational function on $X$ that is defined outside a closed set of codimension at least two, but cannot be extended to the whole domain. Let $\mathscr{I}$ be the ideal of denominators of $f$; in other words, over an open affine $\operatorname{Spec} A$, $$I = \{g \in A \mid g f \in A\}.$$ This is well-defined as a sheaf since the ideal of denominators is preserved under flat pullback (and in particular, localization); see this mathoverflow question. If $g \in A$ is a nonzerodivisor, then $g \in I$ if and only if $f = a / g$ for some $a \in A$, hence the name "ideal of denominators."
One can check that the closed subscheme $Z \subset X$ corresponding to $\mathscr{I}$ is, set-theoretically, the "indeterminacy locus of $f$": the smallest closed subset such that $f$ is defined over $X \smallsetminus Z$. By hypothesis, $f$ can be defined outside a closed subset of codimension two, so $\operatorname{codim} Z \geq 2$. Equivalently, whenever $W$ is an irreducible component of $Z$, the local ring $\mathscr{O}_{X,W}$ has dimension at least two. Since $X$ is assumed to be (S2), every maximal regular sequence in $\mathscr{O}_{X,W}$ has length at least two. Since $W$ is an irreducible component of the subscheme corresponding to $\mathscr{I}$, it follows that the radical of $\mathscr{I}_W \subset \mathscr{O}_{X,W}$ is precisely the maximal ideal $\mathfrak{p}$. (Algebraically, $\mathfrak{p}$ is a minimal prime over $I$, and corresponds to the generic point of $W$.) Let $g,h \in \mathfrak{p}$ form a regular sequence (which exists since $X$ is (S2)). Replacing $g$ and $h$ by appropriate powers, we may assume that they are both contained in $\mathscr{I}_W$. By definition of regular sequence, $g$ is a nonzerodivisor. Since $h,g$ is also a regular sequence, $h$ is a nonzerodivisor. Thus, $g$ and $h$, being nonzerodivisors that lie in the ideal of denominators, are in fact denominators of $f$: there exist $a, b \in \mathscr{O}_{X,W}$ such that $$\frac{a}{g} = \frac{b}{h} = f,$$ so that $$ah = bg.$$ Since $g,h$ is a regular sequence, $h$ is a nonzerodivisor mod $g$. When we mod out by $g$, the equation above becomes $ah \equiv 0$, which implies $a \equiv 0 \pmod{g}$. In other words, $a \in (g)$. But since $f = a/g$, this would imply that $f \in \mathscr{O}_{X,W}$, a contradiction since $f$ cannot be extended over $W$.

Look at the exercises of Hartshorne, III.3. See MR1291023 (95k:14008), Hartshorne, "Generalized divisors on Gorenstein schemes", Proposition 1.11.
I am using backward Euler in a FEM scheme for a convection-diffusion problem. On a given mesh, I can take arbitrarily large time steps, as expected. But if I decrease the time step, at some point it generates oscillations in the solution (spikes). Is this a known behavior? It's not mentioned in the textbooks, at least the ones I know. Would the Crank–Nicolson scheme eliminate the issue?

The above mentioned problem is only the simplest one that demonstrates the phenomenon. I attach an image of what is happening in my real problem: transient (here only 2D) incompressible flow. Density $\rho=1$, dynamic viscosity $\eta=1\times 10^{-3}$, flat inflow velocity 1, cylinder diameter 0.1 ($Re=100$). If the time step is 0.0001, such oscillations occur. If the time step is 0.005, the solution becomes smooth and I reproduce von Karman vortices with the proper frequency, so it is highly unlikely I have a bug in the code. This is standard Galerkin FEM with no stabilization, solved with a direct solver. Any other thoughts on how this is possible, and how to know when a time step is "too small"? Thanks for any hints. Dominik
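For what it's worth, the effect already shows up in the simplest setting. Below is a hedged Python sketch (my own model problem, not the poster's code): one backward Euler step for $u_t = u_{xx}$ with linear elements, a consistent mass matrix, $h=1$, and three interior nodes. For a tiny time step the update undershoots below zero, while for $\Delta t = 1$ the matrix $M+\Delta t\,K$ is an M-matrix and the result stays nonnegative:

```python
# Model problem: u_t = u_xx, P1 finite elements on three interior
# nodes with h = 1. Consistent mass matrix M, stiffness matrix K.
M = [[4/6, 1/6, 0.0],
     [1/6, 4/6, 1/6],
     [0.0, 1/6, 4/6]]
K = [[ 2.0, -1.0,  0.0],
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  2.0]]

def solve(A, b):
    """Gaussian elimination without pivoting (fine for these SPD systems)."""
    A = [row[:] for row in A]
    b = list(b)
    n = len(b)
    for i in range(n):
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][k] * x[k] for k in range(i + 1, n))) / A[i][i]
    return x

def backward_euler_step(u0, dt):
    """One step of (M + dt*K) u = M u0."""
    A = [[M[i][j] + dt * K[i][j] for j in range(3)] for i in range(3)]
    rhs = [sum(M[i][j] * u0[j] for j in range(3)) for i in range(3)]
    return solve(A, rhs)

u0 = [1.0, 0.0, 0.0]
print(min(backward_euler_step(u0, 0.01)))  # negative: spurious undershoot
print(min(backward_euler_step(u0, 1.0)))   # nonnegative: M-matrix regime
```

In this model the undershoot disappears once $\Delta t$ is large enough that the off-diagonal entries of $M+\Delta t\,K$ become nonpositive (here around $\Delta t \ge h^2/6$), which is at least consistent with the observation that larger steps give smooth solutions.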
There are lots of statements that have been conditionally proved on the assumption that the Riemann Hypothesis is true. What other conjectures have a large number of proven consequences?

Set theory is of course completely saturated with this feature, since the independence phenomenon means that a huge proportion of the most interesting natural set-theoretic questions turn out to be independent of the basic ZFC axioms. Thus, most of the interesting work in set theory is about the relations between these various independent statements. They typically have the form of implications assuming the truth of a hypothesis not known to be true (and often, known in some sense not to be provably true), and therefore are instances of what you requested. The status of these various hypotheses as conjectures, however, to use the word you use, has given rise to vigorous philosophical debate in the foundations of mathematics and set theory, as to whether or not they have definite truth values and how we could come to know them.

Examples of such hypotheses that are used in this way would include all of the main set-theoretic hypotheses known to be independent. This list would run to several hundred natural statements, but let me list just a few:

This last example is extremely important and a unifying instance of what you requested, for the large cardinal hierarchy is a tower of increasingly strong hypotheses, which we believe to be consistent, but haven't proved, and indeed, provably cannot prove, to be consistent, unless set theory itself is inconsistent. From any level of the large cardinal hierarchy, if consistent, we provably cannot prove the consistency of the higher levels. So in this sense, the large cardinal hierarchy provides enormous iterated towers of your phenomenon. This might seem at first to be a flaw.
Why would we be interested in these large cardinals, if we cannot prove they exist, cannot prove that their existence is consistent, and indeed, can prove that we cannot prove they are consistent, assuming our basic axioms are consistent? The reason is that because of Gödel's incompleteness theorem, we know and expect to find such statements that are not settled, even when we assume Con(ZFC) and more. Thus, we know there is a hierarchy of consistency strength towering above us. The remarkable thing is that this tower turns out to be describable in terms of the very natural infinite combinatorics of large cardinals. These were notions, such as inaccessible, Ramsey and measurable cardinals, that arose from natural questions about infinite combinatorics, independently of any considerations of consistency strength. Some of the most interesting uses of large cardinals have been equiconsistencies between large cardinals and other natural mathematical statements. For example, the impossibility of removing AC from the Vitali construction of a non-measurable set is exactly equiconsistent with the existence of an inaccessible cardinal. And the complete determinacy of infinite integer games (without AC) is equiconsistent with the existence of infinitely many Woodin cardinals.

The standard conjectures (http://en.wikipedia.org/wiki/Standard_conjectures_on_algebraic_cycles) were pretty much designed to be used in this way (and then proved); but proofs are lacking, and some of the results now have non-conditional proofs. There are many related results in the theory of motives.

In number theory, Vandiver's conjecture (http://en.wikipedia.org/wiki/Vandiver%27s_conjecture) has begun to stand out, because of its connection with K-theory (which is another area in which there are large-scale conjectures used in this way). The ABC conjecture and Vojta's conjectures come to mind.
Wikipedia also says: "A famous network of conditional proofs is the NP-complete class of complexity theory."

Resolution of singularities for algebraic varieties in positive characteristic is another example. Many statements in algebraic K-theory have been proven to follow from this conjecture.

What is usually referred to as Lusztig's Conjecture in the modular representation theory of semisimple algebraic groups has been enormously influential, as seen in Jantzen's treatise Representations of Algebraic Groups. It is actually a series of closely related conjectures, from 1979 on, inspired by the (soon proved) Kazhdan-Lusztig Conjecture (1979) on the formal characters of the usually infinite dimensional simple highest weight modules for a complex semisimple Lie algebra: such a character can be written as a $\mathbb{Z}$-linear combination of the known formal characters of Verma modules, whose coefficients are values at 1 of Kazhdan-Lusztig polynomials for the Iwahori-Hecke algebra of the Weyl group $W$. The original characteristic $p$ conjecture has a similar flavor, but with the affine Weyl group (whose translations are multiplied by $p$) replacing $W$ and with the essential proviso that $p$ be not too small. It is expected that the Coxeter number of $W$ will be a suitable lower bound, but so far the partial proofs by Andersen-Jantzen-Soergel, Fiebig, and Bezrukavnikov-Mirkovic do not achieve a reasonable bound. If proved, the conjecture would combine with older results of Curtis and Steinberg to yield all modular irreducible characters of finite groups of Lie type in the defining characteristic (but still with the lower bound on $p$), as well as the formal characters and dimensions of all restricted representations of the Lie algebra of the given semisimple group. Andersen and others have formulated further consequences, in terms of the structure of Weyl modules, the extensions and cohomology of simple or Weyl modules, etc.
(Adapted to general linear groups, there are also implications for modular characters of symmetric groups.) The later conjectures of Lusztig, proved for large enough $p$ in a preprint by Bezrukavnikov and Mirkovic, go further with the non-restricted Lie algebra representations as well, in a unified geometric setting which promises further applications.

ADDED: I should point out that many special cases of the more general results which would follow from Lusztig's Conjecture have in fact been verified, but usually by computational or somewhat ad hoc methods. Plus the existing proofs of the conjecture itself for "large enough" primes, which don't seem improvable without new methods.

In computational complexity there are several conjectures which are stronger than $NP \ne P$ and which have important consequences. To mention a few:

1) The conjecture that factoring is computationally hard is the basis of much theoretical and practical cryptography.

2) More broadly, the conjecture that one-way functions exist has many consequences.

3) The conjecture that the polynomial hierarchy ($PH$) does not collapse has many consequences.

4) Khot's unique games conjecture has many important consequences for hardness of approximation.

5) The exponential time hypothesis ($ETH$) and the strong exponential time hypothesis ($SETH$) are strong forms of $NP \ne P$ with important consequences.

6) There are stronger and stronger versions of the "derandomization" conjecture, with many consequences.

In the representation theory of a reductive algebraic group $G$ in positive characteristic $p$, there is a conjecture known as the Humphreys-Verma conjecture, which states that an indecomposable injective module for a Frobenius kernel $G_r$ of $G$ should lift uniquely to a module for $G$. There is also a refinement of the conjecture, known as Donkin's tilting conjecture, which specifies which $G$-module this lift should be (an indecomposable tilting module with a specified highest weight).
Both conjectures are known to be true when $p\geq 2h-2$ where $h$ is the Coxeter number, and while it is not particularly common to see statements formulated conditional on either conjecture, the condition $p\geq 2h-2$ is exceedingly common, and quite often this condition could be replaced by an assumption that one or both of the above conjectures are true. Related to the answer about NP-complete problems, there are a number of theorems that state "either x is true, or P=NP." The most interesting of these in my opinion are hardness of approximation results. For example: "Given two graphs on $n$ vertices, one with max clique size $n^\alpha$ and one with max clique size $n^{1-\alpha}$, there is no polynomial time algorithm that determines which is which, or P=NP." Most results like this are proven via the PCP Theorem, by showing that if you can approximate a result to a certain extent, you can then convert that into a proof of the statement.
I have the following homework problem: A line of charge $\lambda$ is located on the z-axis. Determine the electric flux for a rectangular surface with corners at coordinates: $(0, R, 0)$, $(w, R, 0)$, $(0,R, L)$, and $(w, R, L)$. This is what I have come up with so far: The line of charge is located on the $z$-axis. We can recall that $\Phi = \int_{S}\vec{E}~\mathrm{d}A$. We initially note that the surface is parallel to the $xz$-plane, ergo we will integrate with respect to $x$ and $z$. Due to the fact that this is an infinite line of charge, there is no change in the field as we vary the distance along the $z$-axis. Our integral is $$\int_0^L\int_0^w\vec{E}~\mathrm{d}x\mathrm{d}z$$ We can recall that $$E=\frac{\lambda}{2\pi\varepsilon_0r}$$ We can see that $r=\sqrt{x^2+R^2}$ by the Pythagorean Theorem. By substitution we have the following integral: $$\int_0^L\int_0^w\frac{\lambda}{2\pi\varepsilon_0\sqrt{x^2+R^2}}\mathrm{d}x\mathrm{d}z = \frac{\lambda L}{2\pi\varepsilon_0}\int_0^w\frac{1}{\sqrt{x^2+R^2}}\mathrm{d}x$$ When I solve this I get: $$\Phi=\frac{\lambda L}{2\pi\varepsilon_0}\sinh^{-1}\left(\frac{w}{R}\right)$$ I am not sure where I am going wrong. I may be doing something conceptually incorrect.
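Not addressing the setup itself, but the final integration step can at least be checked numerically. A small Python sketch (the values $w=2$, $R=0.5$ are arbitrary) comparing composite Simpson quadrature of $\int_0^w (x^2+R^2)^{-1/2}\,\mathrm{d}x$ with the closed form $\sinh^{-1}(w/R)$:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

w, R = 2.0, 0.5
numeric = simpson(lambda x: 1.0 / math.sqrt(x * x + R * R), 0.0, w)
closed_form = math.asinh(w / R)  # antiderivative evaluated at the endpoints
print(numeric, closed_form)
```

The two values agree to many digits, so the antiderivative itself is fine; any error in the result would have to be conceptual rather than in the integration.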
We all know that \begin{equation}\exp(x) = \sum_{n=0}^\infty \frac{x^n}{n!} = 1 + x + \frac12 x^2 + \dots\end{equation} implies that for $|x| \ll 1$, we have $\exp(x) \approx 1 + x$. This means that if we have to evaluate $\exp(x) - 1$ in floating point, for $|x| \ll 1$ catastrophic cancellation can occur. This can be easily demonstrated in Python: >>> from math import (exp, expm1) >>> x = 1e-8 >>> exp(x) - 1 9.99999993922529e-09 >>> expm1(x) 1.0000000050000001e-08 >>> x = 1e-22 >>> exp(x) - 1 0.0 >>> expm1(x) 1e-22 Exact values are \begin{align}\exp(10^{-8}) -1 &= 0.000000010000000050000000166666667083333334166666668 \dots \\\exp(10^{-22})-1 &= 0.000000000000000000000100000000000000000000005000000 \dots\end{align} In general an "accurate" implementation of exp and expm1 should have an error of no more than 1 ULP (i.e. one unit in the last place). However, since attaining this accuracy results in "slow" code, sometimes a fast, less accurate implementation is available. For example in CUDA we have expf and expm1f, where f stands for fast. According to the CUDA C programming guide, app. D, expf has an error of 2 ULP. If you do not care about errors on the order of a few ULPs, usually different implementations of the exponential function are equivalent, but beware that bugs may be hidden somewhere... (Remember the Pentium FDIV bug?) So it is pretty clear that expm1 should be used to compute $\exp(x)-1$ for small $x$. Using it for general $x$ is not harmful, since expm1 is expected to be accurate over its full range: >>> exp(200)-1 == exp(200) == expm1(200) True (In the above example $1$ is well below 1 ULP of $\exp(200)$, so all three expressions return exactly the same floating point number.) A similar discussion holds for the inverse functions log and log1p since $\log(1+x) \approx x$ for $|x| \ll 1$.
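The cancellation in the log case can be demonstrated the same way (the sample value $x = 10^{-10}$ is arbitrary):

```python
from math import log, log1p

x = 1e-10
naive = log(1 + x)      # forming 1 + x already discards most digits of x
accurate = log1p(x)     # evaluates log(1 + x) without forming 1 + x
print(naive)            # wrong already in the 8th significant digit
print(accurate)         # correct to full double precision
```

Here the damage is done before log is even called: rounding `1 + x` to the nearest double perturbs `x` by roughly half an ulp of 1, i.e. about $10^{-16}$, which is a relative error of about $10^{-6}$ in `x` itself.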
In some of my computations I calculate a scalar value $\lambda_h$ (in my case an eigenvalue) depending on a finite element discretization of the domain. Usually we can manage to find estimates of the form $$ |\lambda - \lambda_h| \leq C h^k $$ where $k$ depends on the problem, the degree of the approximation, etc. Question: I was wondering under which circumstances (if any) it is possible to say that the error of a scalar function depending on the FE approximation has a Taylor-like expansion: $$ \lambda_h = \lambda_{exact} + C_1h^{\alpha_1}+C_2h^{\alpha_2}+...$$ where $\alpha_i$ is an increasing sequence. If there are any relevant references, I am interested. Context: This is in connection with an earlier question of mine, regarding the good behavior of some extrapolation techniques applied to my problem. It turns out that the extrapolation I use (Wynn's epsilon algorithm) gives the limit of the sum of $n$ geometric sequences starting from $2n+1$ values. The existence of a Taylor-like expansion for the eigenvalue would justify this behavior, since the first few Taylor coefficients will be cancelled when doing the extrapolation. In the example presented below the initial computations using only finite elements give a convergence of order 2, while the extrapolation gives something like order 6, quickly running towards machine precision. See the attached picture.
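For reference, Wynn's epsilon recursion $\varepsilon^{(n)}_{k+1} = \varepsilon^{(n+1)}_{k-1} + 1/\bigl(\varepsilon^{(n+1)}_{k} - \varepsilon^{(n)}_{k}\bigr)$ is only a few lines of code. A hedged Python sketch (my own implementation, which simply stops when the table converges exactly) recovering the limit of a single geometric error model $s_n = \lambda + C q^n$:

```python
def wynn_epsilon(s):
    """Wynn's epsilon algorithm applied to the sequence s.
    Even columns of the epsilon table are the extrapolated limit
    estimates; return the last entry of the highest even column."""
    prev_prev = [0.0] * (len(s) + 1)  # epsilon_{-1} column, all zeros
    prev = list(s)                    # epsilon_0 column
    best = prev[-1]
    col = 0
    while len(prev) > 1:
        diffs = [prev[j + 1] - prev[j] for j in range(len(prev) - 1)]
        if any(d == 0.0 for d in diffs):
            break                     # table has converged exactly
        cur = [prev_prev[j + 1] + 1.0 / diffs[j] for j in range(len(diffs))]
        prev_prev, prev = prev, cur
        col += 1
        if col % 2 == 0:              # even columns estimate the limit
            best = prev[-1]
    return best

# A single geometric error term s_n = 1 + 3 * 0.5**n is resolved,
# up to roundoff, from just a few values:
seq = [1.0 + 3.0 * 0.5 ** n for n in range(6)]
print(wynn_epsilon(seq))  # close to 1.0
```

For an error with $n$ geometric components, the column $\varepsilon_{2n}$ reproduces the limit in exact arithmetic, which matches the behavior described above.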
If $G$ is a finite group with normal subgroups $M$ and $N$, then $MN$ is a subgroup, called the normal product of $M$ and $N$. If $\mathcal{F}$ is a set of finite groups closed under isomorphism and normal products, then there is a subgroup $O_\mathcal{F}(G)$, called the $\mathcal{F}$-radical, which is the normal product of all normal subgroups of $G$ that are in $\mathcal{F}$. If $\mathcal{F}$ is additionally closed under normal subgroups, then $O_\mathcal{F}(G)$ has the property that a subgroup of $G$ is a subnormal subgroup from $\mathcal{F}$ if and only if it is a subnormal subgroup of $O_\mathcal{F}(G)$. This is standard material on Fitting classes, covered in, say, §6.3 of Kurzweil–Stellmacher's textbook. I'd like to conclude quite generally that $O_\mathcal{F}(MN) = O_\mathcal{F}(M) O_\mathcal{F}(N)$. This is definitely true of direct products, and I'm having a hard time seeing any relevant differences to the normal product. However, I am also having trouble proving it. I get that $O_\mathcal{F}(M) O_\mathcal{F}(N) \leq O_\mathcal{F}(MN)$ in general, simply because $\mathcal{F}$ is closed under normal products. How do I show the reverse containment? If it is not true, is there some extra hypothesis on $\mathcal{F}$ that does it (like subgroup or quotient closure)? The particular case of concern is $\mathcal{F}$ consisting of all $\pi$-groups. I can brute-force it for $\mathcal{F}$ consisting of all solvable $\pi$-groups, and technically all groups my audience was considering were solvable, but I'd prefer a technique that works in general, or some counterexamples to show what extra hypotheses are actually being used. For instance, in my application $\mathcal{F}$ is closed under quotients and all subgroups too (a subgroup-closed (saturated, by Bryce–Cossey) Fitting formation), but I doubt much if any of that extra hypothesis is needed. Edit: Assume $\mathcal{F}$ is quotient closed for the positive answer.
I don't currently have a counterexample for general $\mathcal{F}$, but they are apparently plentiful and well known. Apparently direct products don't work this way for general $\mathcal{F}$ (for any normal Fitting class properly contained in the class of all solvable groups, for instance). Lockett figured out how to fix this in a fairly low-impact way. For any Fitting class $\mathcal{F}$, he associated a class $\mathcal{F}^*$ with the property that $\mathcal{F} \subseteq \mathcal{F}^* = \mathcal{F}^{**} \subseteq \mathcal{F} \mathcal{A}$, that $O_{\mathcal{F}^*}(G \times H) = O_{\mathcal{F}^*}(G) \times O_{\mathcal{F}^*}(H)$, and that $O_{\mathcal{F}^*}(G) = \{ g \in G : (g,h) \in O_\mathcal{F}(G\times G) \}$. Theorem 2.2d shows that if $\mathcal{F}$ is closed under quotients or residual products (the other aspect of being a formation), then $\mathcal{F}=\mathcal{F}^*$, so all Fitting formations work the way I thought. ... Still checking on normal products, which are not directly addressed in the article ...
I am an analyst struggling through some geometry used in physics. Some background: For some Lie group $G$, let $P$ be a principal $G$-bundle over the smooth manifold $M$. Let $\omega$ be a connection 1-form on $P$ (a "principal connection"). This is a Lie algebra-valued 1-form. As for the curvature two-form, either you see definitions with no explanation at all, e.g. "The curvature is given by $\Omega = d\omega + \frac{1}{2}[\omega,\omega]$". This is obviously less than ideal for improving one's intuitive appreciation. Or one defines something called the exterior covariant derivative $D$ (see wiki) and then the curvature is simply the exterior covariant derivative of the connection one-form. The issue: I can't get round the following observation though: From the point of view of the manifold $P$, $\omega$ is just a one-form with values in some vector space which happens to be $\mathfrak{g}$. Usually when you need to covariantly differentiate such an object, you would need a connection in a bundle $E \to P$ with fibre $\mathfrak{g}$, no? Then $\omega$ would be an $E$-valued one-form on $P$, i.e. in $\Gamma(E) \otimes\Omega^1(P)$, and you can differentiate covariantly in the normal way using the connection. Why is this scenario different? The exterior covariant derivative $D$ satisfies $D^2\phi = \Omega\wedge\phi$... So you have some sort of covariant differentiation $D$ which differentiates forms $\eta$ taking values in $\mathfrak{g}$ and for which $D^2$ is some sort of curvature... but of $P \to M$ and not of the bundle in which $\eta$ is taking values. Isn't this strange? Or is this indeed just how things are? This prompts my more precise question: Is the Lie algebra-valued curvature two-form on a principal bundle P the curvature of some vector bundle over P with fibre $\mathfrak{g}$?
The work done in bringing a charge from infinity to the origin is not zero just because the origin is part of the coordinate system; in fact, if a charge $+q$ already sits at the origin, an infinite amount of work is required to bring the charge all the way there. We can easily understand this from the following equation. The electrostatic potential energy, $U_E$, of one point charge $q$ at position $r$ in the presence of a point charge $Q$, taking an infinite separation between the charges as the reference position, is: $$U_{E}(r)=k_{e}{\frac {qQ}{r}},$$ where $k_{e}={\frac {1}{4\pi \varepsilon _{0}}}$ is Coulomb's constant, $r$ is the distance between the point charges $q$ and $Q$, and $q$ and $Q$ are the charges (not the absolute values of the charges — i.e., an electron would have a negative value of charge when placed in the formula). The following outline of proof gives the derivation of this formula from the definition of electric potential energy and Coulomb's law. Outline of proof: By the definition of the position vector $\mathbf{r}$ and the displacement vector $\mathbf{s}$, it follows that $\mathbf{r}$ and $\mathbf{s}$ are also radially directed from $Q$.
So, $\mathbf{E}$ and $\mathrm{d}\mathbf{s}$ must be parallel: $$\mathbf{E}\cdot\mathrm{d}\mathbf{s} = |\mathbf{E}|\,|\mathrm{d}\mathbf{s}|\cos(0) = E\,\mathrm{d}s$$ Using Coulomb's law, the electric field is given by $$|\mathbf{E}| = E = \frac{1}{4\pi\varepsilon_0}\frac{Q}{s^2}$$ and the integral can be easily evaluated: $$U_E(r) = -\int_\infty^r q\mathbf{E}\cdot\mathrm{d}\mathbf{s} = -\int_\infty^r \frac{1}{4\pi\varepsilon_0}\frac{qQ}{s^2}\,\mathrm{d}s = \frac{1}{4\pi\varepsilon_0}\frac{qQ}{r} = k_e\frac{qQ}{r}$$ One point charge $q$ in the presence of $n$ point charges $Q_i$. Edit: Electrostatic potential energy of $q$ due to a $Q_1$ and $Q_2$ charge system: $$U_E = q\frac{1}{4\pi\varepsilon_0}\left(\frac{Q_1}{r_1} + \frac{Q_2}{r_2}\right)$$ The electrostatic potential energy, $U_E$, of one point charge $q$ in the presence of $n$ point charges $Q_i$, taking an infinite separation between the charges as the reference position, is: $$U_E(r) = k_e q \sum_{i=1}^n \frac{Q_i}{r_i}$$
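The superposition formula above is straightforward to evaluate numerically; a minimal Python sketch (the charge values and distances are made up purely for illustration):

```python
from math import pi

EPS0 = 8.8541878128e-12           # vacuum permittivity, F/m
K_E = 1.0 / (4.0 * pi * EPS0)     # Coulomb's constant, ~8.988e9 N m^2 / C^2

def potential_energy(q, charges):
    """U_E of charge q in the field of point charges given as (Q_i, r_i) pairs."""
    return K_E * q * sum(Q / r for Q, r in charges)

# illustrative values: a 1 uC test charge near a 2 uC and a -1 uC source charge
U = potential_energy(1e-6, [(2e-6, 0.1), (-1e-6, 0.2)])
```

Note that as any $r_i \to 0$ the sum diverges, which is the "infinite work" statement made at the start.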
The best way to do this is (as you said) to just use the definition of periodic boundary conditions and set up your equations correctly from the start, using the fact that $u(0)=u(1)$. In fact, even more strongly, periodic boundary conditions identify $x=0$ with $x=1$. For this reason, you should only have one of these points in your solution domain. An open interval does not make sense when using periodic boundary conditions, since there is no boundary. This fact means that you should not place a point at $x=1$, since it is the same as $x=0$. Discretizing with $N+1$ points, you then use the fact that, by definition, the point to the left of $x_0$ is $x_N$ and the point to the right of $x_N$ is $x_0$. Your PDE can then be discretized in space as$$\frac{\partial}{\partial t}\left[\begin{array}{c}u_0 \\u_1 \\\vdots \\u_N\end{array}\right]=\frac{1}{\Delta x^2}\left[\begin{array}{c}u_N - 2u_0 + u_1 \\u_0 - 2u_1 + u_2 \\\vdots \\u_{N-1} - 2u_N + u_0\end{array}\right]$$ This can be written in matrix form as$$\frac{\partial}{\partial t}\vec{u}=\frac{1}{\Delta x^2} \mathbf{A} \vec{u}$$where$$\mathbf{A} = \left[\begin{array}{cccccc}-2 & 1 & 0 & \cdots & 0 & 1 \\1 & -2 & 1 & 0 & \cdots & 0 \\ & \ddots & \ddots & \ddots \\ && \ddots & \ddots & \ddots \\0 & \cdots & 0 & 1 & -2 & 1 \\1 & 0 & \cdots & 0 & 1 & -2\end{array}\right].$$ Of course there is no need to actually create or store this matrix. The finite differences should be computed on the fly, taking care to handle the first and last points specially as needed. As a simple example, the following MATLAB script solves $$\partial_t u = \partial_{xx}u + b(t,x)$$with periodic boundary conditions on the domain $x\in[-1,1)$. The manufactured solution $u_\text{Ref}(t,x) = \exp(-t)\cos(5\pi x)$ is used, meaning $b(t,x) = (25\pi^2-1)\exp(-t)\cos(5\pi x)$. I used forward Euler time discretization for simplicity and computed the solution both with and without forming the matrix. The results are shown below.
clear
% Solve: u_t = u_xx + b
% with periodic boundary conditions

% analytical solution:
uRef = @(t,x) exp(-t)*cos(5*pi*x);
b = @(t,x) (25*pi^2-1)*exp(-t)*cos(5*pi*x);

% grid
N = 30;
x(:,1) = linspace(-1,1,N+1);
% leave off 1 point so initial condition is periodic
% (doesn't have a duplicate point)
x(end) = [];

uWithMatrix = uRef(0,x);
uNoMatrix = uRef(0,x);
dx = diff(x(1:2));
dt = dx.^2/2;

%Iteration matrix:
e = ones(N,1);
A = spdiags([e -2*e e], -1:1, N, N);
A(N,1) = 1;
A(1,N) = 1;
A = A/dx^2;

%indices (left, center, right) for second order centered difference
iLeft = [numel(x), 1:numel(x)-1]';
iCenter = (1:numel(x))';
iRight = [2:numel(x), 1]';

%plot
figure(1)
clf
hold on
h0 = plot(x,uRef(0,x),'k--','linewidth',2);
h1 = plot(x,uWithMatrix);
h2 = plot(x,uNoMatrix,'o');
ylim([-1.2, 1.2])
legend('Analytical solution','Matrix solution','Matrix-free solution')
ht = title(sprintf('Time t = %0.2f',0));
xlabel('x')
ylabel('u')
drawnow

for t = 0:dt:1
    uWithMatrix = uWithMatrix + dt*( A*uWithMatrix + b(t,x) );
    uNoMatrix = uNoMatrix + dt*( ( uNoMatrix(iLeft) ...
                                 - 2*uNoMatrix(iCenter) ...
                                 + uNoMatrix(iRight) )/dx^2 ...
                                 + b(t,x) );
    set(h0,'ydata',uRef(t,x))
    set(h1,'ydata',uWithMatrix)
    set(h2,'ydata',uNoMatrix)
    set(ht,'String',sprintf('Time t = %0.2f',t))
    drawnow
end
I'm trying to implement a path-following algorithm based on MPC (Model Predictive Control), found in this paper: Path Following Mobile Robot in the Presence of Velocity Constraints. Principle: Using the robot model and the path, the algorithm predicts the behavior of the robot over $N$ future steps to compute a sequence of commands $(v,\omega)$ that allows the robot to follow the path without overshooting the trajectory, allowing it to slow down before a sharp turn, etc. $v$: linear velocity. $\omega$: angular velocity. Here is my problem: Before implementing on the mobile robot, I'm trying to compute the needed matrices (using Matlab) to test the efficiency of this algorithm. At the end of the matrix computations, some of them have a dimension mismatch. What I did: The quote below is from §4 (4.1, 4.2, 4.3, 4.4) p6-7 of the paper. 4.1 Model $z_{k+1} = Az_k + B_\phi\phi_k + B_rr_k$ (18) with: $A = \begin{bmatrix} 1 & Tv \\ 0 & 1 \end{bmatrix}$ $B_\phi = \begin{bmatrix} {T^2\over2}v^2\\ Tv \end{bmatrix}$ $B_r = \begin{bmatrix} 0 & -Tv \\ 0 & 0 \end{bmatrix}$ $T$: sampling period $v$: linear velocity $k$: sampling index (i.e. $t= kT$) $z_k:$ the state vector $z_k = (d_k, \theta_k)^T$, position and angle difference to the reference path $r_k:$ the reference vector $r_k = (0, \psi_k)^T$, where $\psi_k$ is the reference angle of the path at step $k$ 4.2 Criterion The predictive receding horizon controller is based on a minimization of the criterion $J= \Sigma^N_{n=0} (\hat{z}_{k+n} - r_{k+n})^T Q(\hat{z}_{k+n} - r_{k+n}) + \lambda\phi^2_{k+n}$, (20) Subject to the inequality constraint $ P\begin{bmatrix} v_n \\ v_n\phi_n \end{bmatrix} \leq q,$ $n=0,..., N,$ where $\hat{z}$ is the predicted output, $Q$ is a weight matrix, $\lambda$ is a scalar weight, and $N$ is the prediction horizon. 4.3 Predictor An n-step predictor $\hat{z}_{k+n|k}$ is easily found by iterating (18).
Stacking the predictions $\hat{z}_{k+n|k}$, $n = 0,\ldots,N$, in the vector $\hat{Z}$ yields $\hat{Z} = \begin{bmatrix} \hat{z}_{k|k} \\ \vdots \\ \hat{z}_{k+N|k}\end{bmatrix} = Fz_k + G_\phi\Phi_k + G_rR_k$ (22) with $\Phi_k = \begin{bmatrix} \phi_k, \ldots, \phi_{k+N}\end{bmatrix}^T$, $R_k = \begin{bmatrix} r_k, \ldots, r_{k+N}\end{bmatrix}^T$, and $F = \begin{bmatrix}I & A & \ldots & A^N \end{bmatrix}^T$ $G_i = \begin{bmatrix} 0 & 0 & \ldots & 0 & 0 \\ B_i & 0 & \ldots & 0 & 0 \\ AB_i & B_i & \ddots & \vdots & \vdots \\ \vdots & \ddots & \ddots & 0 & 0 \\ A^{N-1}B_i & \ldots & AB_i & B_i & 0 \end{bmatrix}$ where index $i$ should be substituted with either $\phi$ or $r$. 4.4 Controller Using the N-step predictor (22) simplifies the criterion (20) to $J_k = (\hat{Z}_k - R_k)^T I_q (\hat{Z}_k - R_k) + \lambda\Phi^T_k\Phi_k$, (23) where $I_q$ is a diagonal matrix of appropriate dimension with instances of $Q$ on the diagonal. The unconstrained controller is found by minimizing (23) with respect to $\Phi$: $\Phi_k = -L_zz_k - L_rR_k$, (24) with $L_z = (\lambda I + G^T_wI_qG_w)^{-1}G^T_wI_qF$ $L_r = (\lambda I + G^T_wI_qG_w)^{-1}G^T_wI_q(G_r - I)$ I'm trying to compute $\Phi_k = -L_zz_k - L_rR_k$, but the dimensions of $L_r$ and $R_k$ do not match for matrix multiplication. Parameters are: $T=0.1s$ $N=10$ $\lambda=0.0001$ $Q=\begin{bmatrix} 1 & 0 \\ 0 & \delta \end{bmatrix}$ with $\delta=0.02$ I get: $R_k$ a (11x2) matrix (N+1 elements of size 2x1, transposed) $G_w$ a (22x11) matrix $G^T_w$ a (11x22) matrix $I_q$ a (22x22) matrix $F$ a (22x2) matrix $G_r$ a (22x22) matrix so the $L_z$ computation gives (according to the matrix sizes): $L_z=(scalar + (11x22)(22x22)(22x11))^{-1} (11x22)(22x22)(22x2)$, an (11x2) matrix. As $z_k$ is a (2x1) matrix, doing $L_zz_k$ from (24) is fine. And the $L_r$ computation gives (according to the matrix sizes): $L_r=(scalar + (11x22)(22x22)(22x11))^{-1} (11x22)(22x22)((22x22) - (22x22))$, an (11x22) matrix.
As $R_k$ is an (11x2) matrix, doing $L_rR_k$ from (24) is not possible: I have an (11x22) matrix multiplied by an (11x2) matrix. I'm sure I'm missing something big here but am unable to see what exactly. Any help appreciated. Thanks
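A quick way to chase down a mismatch like this is to build the stacked matrices numerically and inspect the shapes. The NumPy sketch below uses placeholder values for $v$, $z_k$ and the reference angle (none of this comes from the paper itself); it stacks $R_k$ as a single $2(N{+}1)\times 1$ column of $(0,\psi_k)^T$ pairs rather than an $(N{+}1)\times 2$ array, which is one reading of the stacking that makes $L_rR_k$ conformable:

```python
import numpy as np

T, N = 0.1, 10
v = 1.0                                        # placeholder linear velocity
A  = np.array([[1.0, T * v], [0.0, 1.0]])
Bp = np.array([[0.5 * T**2 * v**2], [T * v]])  # B_phi, 2x1
Br = np.array([[0.0, -T * v], [0.0, 0.0]])     # B_r,   2x2

def stack_F(A, N):
    # F = [I; A; ...; A^N]
    return np.vstack([np.linalg.matrix_power(A, n) for n in range(N + 1)])

def stack_G(A, B, N):
    # block-lower-triangular G with A^(n-1-j) B below the diagonal
    rows = []
    for n in range(N + 1):
        blocks = [np.linalg.matrix_power(A, n - 1 - j) @ B if j < n
                  else np.zeros_like(B) for j in range(N + 1)]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

F  = stack_F(A, N)                    # (22, 2)
Gp = stack_G(A, Bp, N)                # (22, 11)
Gr = stack_G(A, Br, N)                # (22, 22)

lam = 1e-4
Iq = np.kron(np.eye(N + 1), np.diag([1.0, 0.02]))        # (22, 22)
M  = np.linalg.inv(lam * np.eye(N + 1) + Gp.T @ Iq @ Gp)
Lz = M @ Gp.T @ Iq @ F                                   # (11, 2)
Lr = M @ Gp.T @ Iq @ (Gr - np.eye(2 * (N + 1)))          # (11, 22)

# key point: R_k stacked as one long column, shape (22, 1),
# not as an (N+1) x 2 array
Rk = np.vstack([np.array([[0.0], [0.3]])] * (N + 1))
zk = np.array([[0.1], [0.05]])
Phi = -Lz @ zk - Lr @ Rk              # (11, 1): one command per horizon step
```

With $R_k$ stored this way every product in (24) is conformable, so the (11x2) shape reported above for $R_k$ looks like the culprit.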
There is nothing wrong with effective quantum general relativity as a QFT. It works fine and we can get effective answers for calculations. That said, most are not interested in calculating scattering cross sections and amplitudes with quantum GR in a relatively low energy limit; they are interested in the character of spacetime and Planck-scale physics, which effective GR isn't all that useful for. Most would argue that we need a background independent, UV complete theory to answer big questions about the nature of spacetime at the Planck scale. Edit: We can work with just the EH action in an effective approach, but the scattering amplitudes will depend on our energy scale. If we go up to too high an energy scale, our answers won't make any sense unless we add a counter term of higher order to $\mathcal{L}$ multiplied by some a priori unknown coupling constant which has to be determined by experiment. If we start asking about even higher energies we'll need to add an even higher order term to $\mathcal{L}$. If we only use the term in the Lagrangian we know from the classical theory (i.e. the Einstein-Hilbert Lagrangian), we can learn a little about quantum gravity, but not all that much. What to do about this situation is largely up to personal taste. If you ask someone who takes seriously the insights from classical gravity, they will tell you that the problem is not the Lagrangian, but perturbation theory. Perturbation theory destroys the background independence of General Relativity, and we shouldn't be too surprised that perturbation theory isn't a straightforward and easy way to learn everything we want to know about quantum gravity. Imagine in classical gravity trying to find the Schwarzschild solution perturbatively (i.e., linearized gravity); that's clearly going to be extremely troublesome. Question: What is an appropriate cutoff for effective gravity?
In units where $c=\hbar=1$, when one calculates the scattering amplitude for gravitons, one finds that the energy scale dependence goes as $$1+GE^2+(GE^2)^2+\dots$$ where $G$ is Newton's constant. So, we need to start worrying when $GE^2$ gets to be of order 1, because then that series diverges. This tells us that $\Lambda \approx \sqrt{\frac{1}{G}}$, which is just the Planck energy in our unit system! This tells us that an appropriate cutoff for quantum gravity is the Planck scale. This answer is adapted from Quantum Field Theory in a Nutshell by Zee, page 172.
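Restoring $\hbar$ and $c$, the cutoff $\Lambda \approx \sqrt{1/G}$ becomes $\sqrt{\hbar c^5/G}$, the Planck energy. A quick order-of-magnitude check in Python (constants rounded to CODATA-style values):

```python
from math import sqrt

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 / (kg s^2)
eV   = 1.602176634e-19   # J

E_planck = sqrt(hbar * c**5 / G)     # ~2e9 J
E_planck_GeV = E_planck / eV / 1e9   # ~1.22e19 GeV
```

This is some fifteen orders of magnitude above LHC energies, which is why the effective theory works so well at accessible scales.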
This is analogous to the definition of an empty product in mathematics. For a finite non-empty set $S=\{s_1,\ldots,s_n\}$, the product over $S$ can be defined as$$\prod_{s\in S}s=s_1\times \cdots\times s_n.$$For such a product you'd want disjoint unions to map into products: if $R\cap S=\emptyset$, then you want $\prod_{x\in R\cup S}x=\left(\prod_{s\in S}s\right) \times \left(\prod_{r\in R}r\right)$, but for this to make sense you need to be able to handle the empty set, and the only way to make the rules consistent is to set$$\prod_{s\in\emptyset}s=1.$$This essentially says: if there's nothing to multiply, the result is one. (Similarly, empty sums are defined to be zero, for the same reason.) In the case at hand, you could simply say that if there are no units to multiply, then you get one. As Luboš points out, this is the only consistent choice, and it is harmless, as multiplying by one does not change the quantity. Moreover, this empty-product intuition can be carried out to a full formalization of physical dimensions and units as a vector space. The whole works is in this answer of mine, but the essential idea is that positive physical quantities form a vector space over the rationals, where "addition" is multiplication of two quantities and "scalar multiplication" is raising the quantity to a rational power. This vector-space formalism is precisely the reason why dimensional analysis often boils down to a set of linear equations. Moreover, in this vector space the 'zero' is the physical quantity and unit $1$; the vector-space structure makes no sense unless $1$ is both a quantity and a unit. Ultimately, of course, it boils down to convention, so people can just say "I'm going to do this in this other way" and they won't be "wrong" as such. However, in general, the consistent way to assign things is to say that dimensionless quantities have dimension $1$ (modulo whatever square-bracket convention you're using) and unit $1$.
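Python's standard library bakes in exactly these conventions, which makes the consistency argument easy to check:

```python
from math import prod

assert prod([]) == 1   # the empty product is one
assert sum([]) == 0    # the empty sum is zero

# the disjoint-union rule stays consistent even when one part is empty
R, S = [2, 3], []
assert prod(R + S) == prod(R) * prod(S)
```

Any other value for the empty product would break the factorization in the last line.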
To back this up a bit, for those that care about organizational guidance, the BIPM publishes the International Vocabulary of Metrology, which states (§1.8, note 1) that The term "dimensionless quantity" is commonly used and is kept here for historical reasons. It stems from the fact that all exponents are zero in the symbolic representation of the dimension for such quantities. The term "quantity of dimension one" reflects the convention in which the symbolic representation of the dimension for such quantities is the symbol 1 (see ISO 31-0:1992, 2.2.6). This is essentially the same in the ISO document, which has been superseded by ISO 80000-3:2009 (paywalled, but free preview available), which has an essentially identical entry in §3.8. Finally, and as a response to some of the comments by Luboš Motl, this applies to the term "physical dimension" as understood by the majority of physical scientists. There is also an alternative convention, used in high-energy contexts where you work in natural units with $\hbar=c=1$, in which you're left with a single nontrivial dimension, usually taken to be mass (=energy). In that context, it is usual to say a quantity or operator has "dimension $N$" to mean that it has mass dimension $N$ i.e. it has physical dimension $m^N$, but since there's only ever mass as the base quantity it often gets dropped. However, this is very much a corner case with respect to the rest of physical science, and high-energy theorists are remiss if they forget that their "dimension $N$" only works in natural units, which are useless outside of their small domain.
I'm currently having some issues with my routine for the linear advection-diffusion problem. The model problem is as follows: $$ \nabla\cdot(\mathbf{s} u) - \nabla\cdot(\kappa\nabla u) = f \;\;\;\text{in}\;\Omega \subseteq \mathbb{R}^{2}, $$ $$ u = g_{D} \;\;\;\text{on}\;\Gamma_{D}. $$ The boundary is purely Dirichlet in this case. The formulation is: $$ a(u,v) = \sum_{K \in \mathcal{M}_{h}}\int_{K}\kappa\nabla u\cdot\nabla v - \sum_{e \in \mathcal{E}_{h}}\int_{e}\left\{\kappa\nabla u\cdot\mathbf{n}\right\}[[v]] + \sum_{e \in \mathcal{E}_{h}}\int_{e}\left\{\kappa\nabla v\cdot\mathbf{n}\right\}[[u]] + \sum_{e \in \mathcal{E}_{h}}\frac{\sigma}{|e|}\int_{e}[[u]][[v]] $$ $$ b(u,v) = -\sum_{K \in \mathcal{M}_{h}}\int_{K}\mathbf{s}u\cdot\nabla v + \sum_{e \in \mathcal{E}_{\text{int}}}\int_{e}\left\{\mathbf{s}\cdot\mathbf{n}\right\}u^{up}[[v]] + \sum_{e \in \Gamma_{\text{out}}}\int_{e}(\mathbf{s}\cdot\mathbf{n})uv $$ $$ \ell(v) = \int_{\Omega}fv + \sum_{e \in \Gamma_{D}}\int_{e}\left( \kappa\nabla v\cdot\mathbf{n} + \frac{\sigma}{|e|}v\right)g_{D} - \sum_{e \in \Gamma_{\text{in}}}\int_{e}(\mathbf{s}\cdot\mathbf{n})g_{D}v $$ So, we have $$ a(u,v) + b(u,v) = \ell(v). $$ The inflow and outflow boundaries are defined as $\Gamma_{\text{in}} = \left\{e \in \partial\Omega \,|\; \mathbf{s}\cdot\mathbf{n} < 0\right\}$ and $\Gamma_{\text{out}} = \partial\Omega\backslash\Gamma_{\text{in}}$. And $\mathcal{M}_{h}$ is the set of all mesh elements (triangular in my case), $\mathcal{E}_{h}$ the set of all edges of the mesh elements, and $\mathcal{E}_{\text{int}}$ just the set of interior edges. For the linear diffusion problem, my routine has no issues at all. Now if I use the formulation above with the exact solution $u(x,y) = x(x-1)y(y-1)$, some values of $\kappa$ and $\mathbf{s}$ yield suboptimal convergence, when I'm quite positive that they shouldn't. The source I'm primarily using is "Discontinuous Galerkin Methods for Solving Elliptic and Parabolic Equations" by B. Riviere.
For this formulation, the convergence rate (for penalty parameter $\sigma \ge 0$) in the $L^{2}$ norm should be $p$ if $p$ is even and $p+1$ if $p$ is odd, where $p$ is the degree of the basis functions used. In the energy norm, we should have a convergence rate of $p$ in all cases. Now here are the examples I am testing and the results. The exact solution is taken to be $u(x,y) = x(x-1)y(y-1)$ and the source term is chosen accordingly. I take $\Omega = [0,1]^{2}$. 1) $\kappa = 10^{-10}$, $\mathbf{s} = [1,1]^{T}$. $L^{2}$ convergence is $p$ for all $p$ (suboptimal). Energy norm convergence is $p-1$ for all $p$ (suboptimal), with the exception of $p=1$, which has convergence rate $p$. This is for $\sigma = 1$. 2) Same example as above, but this time with $\sigma = 0$. I obtain the proper convergence rates that I stated in the paragraph above. 3) Same example as in 1), with $\sigma = 1$, $\kappa=10^{-10}$ and $\mathbf{s} = [1000,1000]^{T}$. I obtain the proper convergence rates here too. 4) In general, for $\mathbf{s} = [1,1]^{T}$, if I take $\kappa=10^{-k}$ for some positive integer $k > 3$ (approximately speaking), I attain the suboptimal convergence rate. 5) If I multiply the entire model equation by $10^{10}$ on both sides and use the resulting formulation (so $\kappa = 1$ and $\mathbf{s} = [10^{10}, 10^{10}]^{T}$ and the source term $f$ is modified accordingly), I obtain proper convergence rates. Any ideas what is going on here? Is this to be expected or a bug in the code? I've tested on a number of examples and can't really find other examples where the rate is suboptimal. It's very perplexing to me that, in the setting of examples 1) and 2), increasing the advection actually gives the expected convergence. Can any DG experts chime in here?
In the wave equation: $$c^2 \nabla \cdot \nabla u(x,t) - \frac{\partial^2 u(x,t)}{\partial t^2} = f(x,t)$$ Why do we first multiply by a test function $v(x,t)$ before integrating? You're coming at it backwards. The justification is better seen by starting from the variational setting and working towards the strong form. Once you've done this, the concept of multiplying by a test function and integrating can then be applied to problems where you don't start with a minimization problem. So consider the problem where we want to minimize (and working formally and not rigorously at all here): $$ I(u) = \frac {1}{2} \int_\Omega (\nabla u(x))^2 \; dx $$ subject to some boundary conditions on $\partial\Omega$. If we want this $I$ to reach a minimum, we need to differentiate it with respect to $u$, which is a function. There are several now well trod ways to consider this kind of derivative, but one way it's introduced is to compute $$ I'(u(x),v(x))=\lim_{h\rightarrow 0} \frac{d}{dh}I(u(x)+hv(x)) $$ where $h$ is just a scalar. You can see that this is similar to the traditional definition of a derivative for scalar functions of a scalar variable but extended up to functionals like $I$ that give scalars back but have their domain over functions.
If we compute this for our $I$ (mostly using the chain rule), we get $$ I'(u,v) = \int_\Omega \nabla u \cdot \nabla v \; dx $$ Setting this to zero to find the minimum, we get an equation which looks like the weak statement for Laplace's equation: $$ \int_\Omega \nabla u \cdot \nabla v \; dx = 0 $$ Now, if we use the Divergence Theorem (aka multi-dimensional integration by parts), we can take a derivative off of $v$ and put it on $u$ to get $$ -\int_\Omega \nabla \cdot (\nabla u) v \; dx + \text {boundary terms} = 0 $$ Now this really looks like where you start when you want to build a weak statement from a partial differential equation. Given this idea, you can use it for any PDE: just multiply by a test function, integrate, apply the Divergence Theorem, and then discretize. As I mentioned before, I prefer to think about the weak form as a weighted residual. We want to find an approximate solution $\hat{u}$. Let us define the residual as $$R = c^2 \nabla \cdot \nabla \hat{u} - \frac{\partial^2 \hat{u}}{\partial t^2} - f(x,t)$$ For the exact solution, the residual is the zero function over the domain. We want to find an approximate solution that is "good", i.e., one that makes $R$ "small". So, we can try to minimize the norm of the residual (least-squares methods, for example), or some average of it. One way of doing it is to compute the weighted residual, i.e., minimize the weighted residual $$\int\limits_\Omega wR \, d\Omega$$ One important thing about this is that it defines a functional, so you can minimize it. This can work for problems that do not have a variational form. I describe this a little bit more in this post. You can choose the function $w$ in different ways, such as being from the same space as the function $\hat{u}$ (Galerkin methods), Dirac delta functions (collocation methods), or a fundamental solution (Boundary Element Method). If you select the first case, then you will end up with an equation like the one described by @BillBarth.
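To make the recipe concrete, here is a minimal 1D sketch of the whole pipeline (multiply by a test function, integrate by parts, discretize with hat functions). The problem $-u'' = \pi^2\sin(\pi x)$ with homogeneous Dirichlet data is my own toy example, not one taken from the answers above:

```python
import numpy as np

# -u'' = f on (0,1), u(0) = u(1) = 0; weak form: integral of u'v' = integral of f v
n = 50                        # number of elements
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = lambda x: np.pi**2 * np.sin(np.pi * x)   # manufactured source, exact u = sin(pi x)

# stiffness matrix from the u'v' integrals (tridiagonal for hat functions)
K = (np.diag(2.0 * np.ones(n - 1))
     - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h

# lumped load vector approximating the f*phi_i integrals
F = h * f(x[1:-1])

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K, F)

err = np.max(np.abs(u - np.sin(np.pi * x)))  # O(h^2) for linear elements
```

The test functions here are the same hats as the trial functions, i.e. a Galerkin choice of the weight $w$ in the weighted-residual picture.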
$$\begin{cases}x = \sqrt{y^2 - \frac1{16}} + \sqrt{z^2 - \frac1{16}} \\ y = \sqrt{z^2 - \frac1{25}} + \sqrt{x^2 - \frac1{25}} \\ z = \sqrt{x^2 - \frac1{36}} + \sqrt{y^2 - \frac1{36}}\end{cases}$$ Let $x$, $y$ and $z$ be real numbers satisfying the system of equations above. If the value of $x+y+z$ can be expressed as $\dfrac{m}{\sqrt{n}}$, where $m$ and $n$ are positive integers with $n$ square-free, find $m+n$. This problem is from AIME 2006.
Assuming you are sampling at a fixed time step, called \$\Delta\$, and \$t_n\$ denote your sample times (i.e. \$t_n = n \Delta\$), then a traditional estimate of the velocity is$$v(t_{n}) \sim \sum_{k=0}^{n-1} a(t_k) \Delta$$where \$a(t_k)\$ is the acceleration value at time \$t_k\$ as sampled by your accelerometer. It sounds like you are also assuming the initial conditions that \$v(t_0) =0\$ and \$p(t_0) = 0\$. Applying this idea again to get the position via integrating the computed velocity, we obtain$$p(t_{n}) \sim \sum_{k=0}^{n-1} v(t_k) \Delta \sim \sum_{k=0}^{n-1}\sum_{i=0}^{k-1} a(t_i) \Delta^2.$$ This latter summation simplifies a bit to$$\sum_{k=0}^{n-1}\sum_{i=0}^{k-1} a(t_i) \Delta^2 = \sum_{i=0}^{n-2}(n-1-i)a(t_i)\Delta^2 = \left(\sum_{i=0}^{n-3}(n-2-i)a(t_i)\Delta^2\right) + \sum_{i=0}^{n-2}a(t_i)\Delta^2.$$ Now if you notice that the bit in parentheses is our estimate of \$p(t_{n-1})\$, then you see we get the recursive formula \$p(t_{n}) = p(t_{n-1}) + \sum_{i=0}^{n-2} a(t_i)\Delta^2\$. This allows us to easily compute \$p(t_{n})\$ by keeping track of two (and only two) values: \$p(t_{n-1})\$ and an auxiliary variable \$s_n = \sum_{i=0}^{n-2} a(t_i) \Delta^2\$. Note we have the recursive formulas $$s_n = s_{n-1} + a(t_{n-2})\Delta^2$$and$$p(t_n) = p(t_{n-1}) + s_n.$$ Your computation algorithm will go something like this: 1) Initial: \$p(t_0) = p(t_1) = 0\$. \$s_2 = a(t_0)\Delta^2\$. 2) At stage \$n\$: \$s_n = a(t_{n-2})\Delta^2 + s_{n-1}\$; free the memory containing \$s_{n-1}\$; \$p(t_{n}) = p(t_{n-1}) + s_{n}\$. 3) Repeat. A few small remarks: You will probably want to delay multiplying everything by \$\Delta^2\$ until the end. You will have to experiment with the size of floating point data type you need so as not to overflow it. You need not use separate variables for \$s_n\$; in XC you would have a statement similar to \$s \mathrel{+}{=} a(t_{n-2})\Delta^2\$.
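The recursion above is easy to translate into code and to check against the naive double sum. A Python sketch (the acceleration samples are arbitrary test values, not real sensor data):

```python
def positions(accel, dt):
    """p(t_n) via the two-variable recursion:
       s_n = s_{n-1} + a_{n-2} * dt^2,  p_n = p_{n-1} + s_n,
       with p(t_0) = p(t_1) = 0."""
    p, s = 0.0, 0.0
    out = [0.0, 0.0]
    for n in range(2, len(accel) + 1):
        s += accel[n - 2] * dt * dt
        p += s
        out.append(p)
    return out

def positions_direct(accel, dt):
    """Same quantity from the double sum p(t_n) = sum_{k<n} sum_{i<k} a_i dt^2."""
    return [sum(accel[i] for k in range(n) for i in range(k)) * dt * dt
            for n in range(len(accel) + 1)]

accel = [1.0, 2.0, -0.5, 3.0]   # arbitrary sample accelerations
p_rec = positions(accel, 0.1)
p_sum = positions_direct(accel, 0.1)
```

The recursion does O(1) work per sample instead of the O(n) of the double sum, which is the point of keeping only the two running variables.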
If $A$ is a connected set and $\{A_i : i \in I\}$, with $I$ an arbitrary index set (countable or not), is a family of connected sets, how can I show that if $A \cap A_i \neq \emptyset$ for all $i \in I$, then $A \cup (\cup_{i\in I} A_i)$ is connected? I am trying to show that if $A \cap A_i \neq \emptyset~~ \forall i \in I$, then for all $i, j \in I$, $A_i\cap A_j \neq \emptyset$. This would be enough to conclude the result. But I am stuck here. What I tried: Suppose that there are $i,j\in I$ such that $A_i \cap A_j = \emptyset.$ Then, since $A\cap A_i \neq \emptyset$ and $A\cap A_j \neq \emptyset$, it somehow induces me to think that it is possible to obtain a split of $A$. I don't know how to proceed. Thanks
I am trying to figure out the largest possible order that an element of the multiplicative group $\bmod{n}$ can have if $n=p_1^{k_1} \cdot p_2^{k_2} \cdot \dots \cdot p_m^{k_m}$. I can prove that $$U(n) \cong U(p_1^{k_1}) \times U(p_2^{k_2}) \times \dots \times U(p_m^{k_m})$$ Now I know from my number theory book that if $p_i$ is an odd prime, then $p_i^{k_i}$ has a primitive root, which makes $U(p_i^{k_i})$ cyclic. So the largest order of an element in that group is $\phi(p_i^{k_i})$. I also know that $2$ and $4$ have primitive roots, so the largest orders in $U(2)$ and $U(2^2)$ are $\phi(2)$ and $\phi(2^2)$ respectively. However, I am having trouble with $U(2^k)$ where $k\geq 3$. I was able to show that for any $x\in U(2^k), \quad x^{2^{k-2}}\equiv 1 \pmod{2^k}$. But how can I show that there $\textbf{must}$ be an element of order $2^{k-2}$ in $U(2^k)$, $k\geq3$?
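For what it's worth, the standard candidate is $5$: one can show that $5$ has order $2^{k-2}$ modulo $2^k$ for $k \ge 3$, e.g. by proving $5^{2^{k-3}} \equiv 1 + 2^{k-1} \pmod{2^k}$ by induction on $k$. A quick numerical check in Python supports the claim for small $k$:

```python
def order_mod(a, m):
    """Multiplicative order of the unit a modulo m."""
    x, k = a % m, 1
    while x != 1:
        x = x * a % m
        k += 1
    return k

# 5 has order 2^(k-2) in U(2^k) for every k >= 3
checks = {k: order_mod(5, 2**k) for k in range(3, 12)}
```

Of course the numerics only illustrate the pattern; the induction is what proves it for all $k$.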
Consider the following integral equation: $$ \int_0^r f(t) \arcsin \left( \frac{t}{r} \right) \, \mathrm{d}t + \frac{\pi}{2} \int_r^R f(t) \, \mathrm{d} t = r \, \qquad (0<r<R) \, , $$ where $f(t)$ is the unknown function. By differentiating both sides of this equation with respect to $r$, one obtains $$ -\frac{1}{r} \int_0^r \frac{f(t)t \, \mathrm{d}t}{\sqrt{r^2-t^2}} = 1 \, , $$ the solution of which can readily be obtained as $$ f(t) = -1 \, . $$ However, upon substitution of this solution into the original integral equation above, one rather gets an additional $-\pi R/2$ term on the left hand side. I was wondering whether some mathematical details were overlooked in this derivation. Any help would be highly appreciated. An alternative approach that leads to the desired solution is also most welcome. Thank you
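A quick numerical check (midpoint rule, plain Python; names are mine) confirms the discrepancy: with $f \equiv -1$ the left-hand side evaluates to $r - \pi R/2$ rather than $r$, consistent with the extra term reported above:

```python
import math

# Numerical check: with f = -1 the LHS of the integral equation comes out
# to r - pi*R/2, not r. Midpoint-rule quadrature; function name is mine.
def lhs(r, R, f=lambda t: -1.0, n=20000):
    """Evaluate  int_0^r f(t)*arcsin(t/r) dt + (pi/2)*int_r^R f(t) dt."""
    h1 = r / n
    s1 = sum(f((i + 0.5) * h1) * math.asin((i + 0.5) * h1 / r)
             for i in range(n)) * h1
    h2 = (R - r) / n
    s2 = sum(f(r + (i + 0.5) * h2) for i in range(n)) * h2
    return s1 + math.pi / 2.0 * s2

r, R = 0.7, 2.0
print(lhs(r, R), r - math.pi * R / 2.0)   # both ≈ -2.44159
```

The discrepancy is exactly the integration constant lost in the differentiation step: the derivative condition pins down $f$ only up to the original equation holding at one value of $r$.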
closed as no longer relevant by Robin Chapman, Akhil Mathew, Yemon Choi, Qiaochu Yuan, Pete L. Clark Aug 22 '10 at 9:00 $e^{\pi i} + 1 = 0$ Stokes' Theorem Trivial as this is, it has amazed me for decades: $(1+2+3+...+n)^2=(1^3+2^3+3^3+...+n^3)$ $$ \frac{24}{7\sqrt{7}} \int_{\pi/3}^{\pi/2} \log \left| \frac{\tan t+\sqrt{7}}{\tan t-\sqrt{7}}\right| dt\\ = \sum_{n\geq 1} \left(\frac n7\right)\frac{1}{n^2}, $$ where $\left(\frac n7\right)$ denotes the Legendre symbol. Not really my favorite identity, but it has the interesting feature that it is a conjecture! It is a rare example of a conjectured explicit identity between real numbers that can be checked to arbitrary accuracy. This identity has been verified to over 20,000 decimal places. See J. M. Borwein and D. H. Bailey, Mathematics by Experiment: Plausible Reasoning in the 21st Century, A K Peters, Natick, MA, 2004 (pages 90-91). There are many, but here is one. $d^2=0$ Mine is definitely $$1+\frac{1}{4}+\frac{1}{9}+\cdots+\frac{1}{n^2}+\cdots=\frac{\pi^2}{6},$$ an amazing relation between integers and pi. There's lots to choose from. Riemann-Roch and various other formulas from cohomology are pretty neat. But I think I'll go with $$\sum\limits_{n=1}^{\infty} n^{-s} = \prod\limits_{p \text{ prime}} \left( 1 - p^{-s}\right)^{-1}$$ $1+2+3+4+5+\cdots = -1/12$, once suitably regularised of course :-) $$\frac{1}{1-z} = (1+z)(1+z^2)(1+z^4)(1+z^8)\cdots$$ Both sides as formal power series work out to $1 + z + z^2 + z^3 + \cdots$, where all the coefficients are 1.
This is an analytic version of the fact that every positive integer can be written in exactly one way as a sum of distinct powers of two, i.e. that binary expansions are unique. $V - E + F = 2$, Euler's characteristic for connected planar graphs. I'm currently obsessed with the identity $\det (\mathbf{I} - \mathbf{A}t)^{-1} = \exp \text{tr } \log (\mathbf{I} - \mathbf{A}t)^{-1}$. It's straightforward to prove algebraically, but its combinatorial meaning is very interesting. $196884 = 196883 + 1$ For a triangle with angles $a$, $b$, $c$: $$\tan a + \tan b + \tan c = (\tan a) (\tan b) (\tan c)$$ Given a square matrix $M \in SO_n$ decomposed as illustrated with square blocks $A,D$ and rectangular blocks $B,C,$ $$M = \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) ,$$ then $\det A = \det D.$ What this says is that, in Riemannian geometry with an orientable manifold, the Hodge star operator is an isometry, a fact that has relevance for Poincare duality. But the proof is a single line: $$ \left( \begin{array}{cc} A & B \\ 0 & I \end{array} \right) \left( \begin{array}{cc} A^t & C^t \\ B^t & D^t \end{array} \right) = \left( \begin{array}{cc} I & 0 \\ B^t & D^t \end{array} \right). $$ It's too hard to pick just one formula, so here's another: the Cauchy-Schwarz inequality, $\|x\| \|y\| \geq |(x \cdot y)|$, with equality iff $x$ and $y$ are parallel. Simple, yet incredibly useful. It has many nice generalizations (like Holder's inequality), but here's a cute generalization to three vectors in a real inner product space: $$\|x\|^2\|y\|^2\|z\|^2 + 2(x\cdot y)(y\cdot z)(z\cdot x) \geq \|x\|^2(y\cdot z)^2 + \|y\|^2(z\cdot x)^2 + \|z\|^2(x\cdot y)^2,$$ with equality iff one of $x,y,z$ is in the span of the others. There are corresponding inequalities for 4 vectors, 5 vectors, etc., but they get unwieldy after this one. All of the inequalities, including Cauchy-Schwarz, are actually just generalizations of the 1-dimensional inequality $\|x\| \geq 0$, with equality iff $x = 0$, or rather, instantiations of it in the 2nd, 3rd, etc.
exterior powers of the vector space. I always thought this one was really funny: $1 = 0!$ I think that Weyl's character formula is pretty awesome! It's a generating function for the dimensions of the weight spaces in a finite dimensional irreducible highest weight module of a semisimple Lie algebra. $2^n>n$ It has to be the ergodic theorem, $$\frac{1}{n}\sum_{k=0}^{n-1}f(T^kx) \to \int f\:d\mu,\;\;\mu\text{-a.e.}\;x,$$ the central principle which holds together pretty much my entire research existence. Gauss-Bonnet, even though I am not a geometer. Ἐν τοῖς ὀρθογωνίοις τριγώνοις τὸ ἀπὸ τῆς τὴν ὀρθὴν γωνίαν ὑποτεινούσης πλευρᾶς τετράγωνον ἴσον ἐστὶ τοῖς ἀπὸ τῶν τὴν ὀρθὴν γωνίαν περιεχουσῶν πλευρῶν τετραγώνοις. That is: In right-angled triangles the square on the side subtending the right angle is equal to the squares on the sides containing the right angle. The formula $\displaystyle \int_{-\infty}^{\infty} \frac{\cos(x)}{x^2+1} dx = \frac{\pi}{e}$. It is astounding in that we can retrieve $e$ from a formula involving the cosine. It is not surprising if we know the formula $\cos(x)=\frac{e^{ix}+e^{-ix}}{2}$, yet this integral is of a purely real-valued function. It shows how complex analysis actually underlies even the real numbers. It may be trivial, but I've always found $\sqrt{\pi}=\int_{-\infty}^{\infty}e^{-x^{2}}dx$ to be particularly beautiful. For $X$ a based smooth manifold, the category of finite covers over $X$ is equivalent to the category of actions of the fundamental group of $X$ on based finite sets: $\pi\text{-sets} \simeq \text{ét}/X$. The same statement for number fields essentially describes Galois theory. Now the idea that those should be somehow unified was one of the reasons for the development of abstract schemes, a very fruitful topic that is studied in the amazing area of mathematics called abstract algebraic geometry. Also, note that "actions on sets" is very close to "representations on vector spaces", and this moves us in the direction of representation theory.
Now you see, this simple line actually somehow relates number theory and representation theory. How exactly? Well, if I knew, I would write about that, but I'm just starting to learn about those things. (Of course, one of the specific relations hinted at here should be the Langlands conjectures, since we're so close to having L-functions and representations here!) $E[X+Y]=E[X]+E[Y]$ for any two random variables $X$ and $Y$. $\prod_{n=1}^{\infty} (1-x^n) = \sum_{k=-\infty}^{\infty} (-1)^k x^{k(3k-1)/2}$ $ D_A\star F = 0 $ Yang-Mills $\left(\frac{p}{q}\right) \left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2} \frac{q-1}{2}}$. My favorite is the Koike-Norton-Zagier product identity for the j-function (which classifies complex elliptic curves): $$j(p) - j(q) = p^{-1}\prod_{m>0,\ n\geq -1}(1-p^m q^n)^{c(mn)},$$ where $j(q)-744 = \sum_{n\geq -1} c(n)\,q^n = q^{-1} + 196884q + 21493760q^2 + \cdots$ The left side is a difference of power series pure in $p$ and $q$, so all of the mixed terms on the right cancel out. This yields infinitely many identities relating the coefficients of $j$. It is also the Weyl denominator formula for the monster Lie algebra.
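The pentagonal-number identity above (Euler's pentagonal number theorem) is easy to check through any fixed degree; here is a quick sketch in plain Python, truncating both sides at degree $N$ (variable names are mine):

```python
# Check Euler's pentagonal number theorem through degree N:
# prod_{n>=1} (1 - x^n)  =  sum_k (-1)^k x^{k(3k-1)/2}.
N = 30
lhs = [0] * (N + 1)        # coefficients of prod_{n=1}^{N} (1 - x^n), truncated
lhs[0] = 1
for n in range(1, N + 1):  # multiply in the factor (1 - x^n)
    for d in range(N, n - 1, -1):
        lhs[d] -= lhs[d - n]

rhs = [0] * (N + 1)        # coefficients of the pentagonal-number series
for k in range(-N, N + 1):
    e = k * (3 * k - 1) // 2
    if 0 <= e <= N:
        rhs[e] += -1 if k % 2 else 1

print(lhs == rhs)  # → True: the sides agree through degree N
```

Factors $(1-x^n)$ with $n > N$ cannot affect coefficients of degree $\leq N$, so the truncated product is exact in that range.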
The discounted price process of an American put (perpetual) has a $dt$ part in it, which is negative if the price at time $t$ is less than the optimal exercise price. This is the only thing that drags the discounted process down as time goes on and makes the whole process a supermartingale. So when you don't exercise the option at its stopping time, it has a tendency to go down. However, I do not seem to understand the intuition here, as the process goes down only when the price is less than the optimal exercise price, and shouldn't having a lower stock price make the option more valuable? So what am I getting wrong? I would not say there is no link to what you say, but here would be my view. Intuitive explanation If you wait for a delay $h$ before exercising, you lose your exercise right between $t$ and $t+h$, and this leads to a loss in value. Supermartingale property proof (to apply it in your case: $\phi_t=e^{-rt}(L-S_t)^+$) If we denote by $\phi$ the obstacle, and by $\text{Am}(\phi)$ the American perpetual option on pay-off $\phi$, assume there is an optimal strategy $\tau^\star(t)$ to exercise the option knowing you buy the option at time $t$. Allowed strategies are stopping times (meaning you can take your decision only according to what you know at that time) greater than or equal to $t$.
You get: $$\text{Am}(\phi)_t=\mathbb{E}(\phi_{\tau^{\star}(t)}|\mathcal{F}_t)=\sup_{\tau\geq t}\mathbb{E}(\phi_\tau|\mathcal{F}_t)$$ Setting $\tau=\tau^\star(t+h)$ on the right hand side leads you to: $$\text{Am}(\phi)_t\geq \mathbb{E}(\phi_{\tau^\star(t+h)}|\mathcal{F}_t)$$ Using the tower property of conditional expectation: $$\mathbb{E}(\phi_{\tau^\star(t+h)}|\mathcal{F}_t)=\mathbb{E}(\mathbb{E}(\phi_{\tau^\star(t+h)}|\mathcal{F}_{t+h})|\mathcal{F}_t)$$ Using the first equality at $t+h$ rather than at $t$: $$\mathbb{E}(\phi_{\tau^\star(t+h)}|\mathcal{F}_{t+h})=\text{Am}(\phi)_{t+h}$$ Plugging this into the previous inequality leads you to: $$\text{Am}(\phi)_t\geq \mathbb{E}(\text{Am}(\phi)_{t+h}|\mathcal{F}_t)$$ Just to add an intuitive argument to @MJ73550's already very nice answer: When holding an American option - or any option callable by the holder, for that matter - the question you ask yourself before exercising it is whether the proceeds from early exercise (i.e. exercise now to get the option's intrinsic value) are greater than what you could expect to earn if you were to exercise your right later (i.e. the continuation value). At any point in time, the value of your option is thus always the maximum of what you would receive in the 2 above scenarios, since you would like to exercise when it is optimal for you. Without loss of generality, assume you can only exercise at fixed dates separated by an interval $h$ (as would typically be the case for a Bermudan option). Then at any time $t$ you have, for an option expiring at $T$: $ \text{Am}(t,T) = \max( (S_t - K)^+, \mathbb{E}[\text{Am}(t+h, T)] ) \geq \mathbb{E}[\text{Am}(t+h, T)]$ where $\text{Am}(t,T)$ - current option value at $t$ $(S_t - K)^+$ - intrinsic value = proceeds if you were to exercise at $t$ $\mathbb{E}[\text{Am}(t+h, T)]$ - what you can expect to earn if you wait until $t+h$ to make your decision.
whence the supermartingale idea, or as @MJ73550's answer illustrates: the best early exercise strategy over $[t,T]$ is always at least as good as the best early exercise strategy over $[t+h,T]$, since the former interval includes the latter. Some remarks: the above holds for any $h>0$, particularly $h \rightarrow 0^+$; at $t=T$, the continuation and exercise value are the same, since there is no choice left; this process of comparing the continuation and intrinsic values is exactly what you do when using trees or Least-Squares Monte Carlo to price callable options. Ok, so I have been thinking about it, and may have found the solution, but please correct me if I'm wrong. I guess the discounted process goes down because, when the holder of the option doesn't exercise it while the price $S(t)$ is less than the optimal exercise price $L^*$, he is losing cash by not investing in the money market?
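To see the recursion $\text{Am}(t) = \max(\text{intrinsic}, \mathbb{E}[\text{Am}(t+h)])$ concretely, here is a hedged sketch (all parameter values are illustrative choices of mine, not from the question) that prices an American put on a Cox-Ross-Rubinstein binomial tree. At every node the value is the maximum of continuation and intrinsic value, hence never below the discounted expected next-step value, which is the discrete supermartingale property discussed above:

```python
import math

# Hedged sketch: CRR binomial tree for an American put. Backward induction
# takes max(continuation, intrinsic) at each node, so the option value is
# never below the discounted expected value of waiting one more step.
def american_put_tree(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, steps=200):
    h = T / steps
    u = math.exp(sigma * math.sqrt(h))      # up factor
    d = 1.0 / u                             # down factor
    q = (math.exp(r * h) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * h)
    # option values at maturity
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    for n in range(steps - 1, -1, -1):
        for j in range(n + 1):
            cont = disc * (q * values[j + 1] + (1.0 - q) * values[j])
            intrinsic = max(K - S0 * u**j * d**(n - j), 0.0)
            values[j] = max(cont, intrinsic)   # exercise now vs. continue
    return values[0]

print(round(american_put_tree(), 2))
```

With these illustrative inputs the result comes out a bit above the corresponding Black-Scholes European put (≈ 5.57), the difference being the early-exercise premium.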
As the name suggests, supervised learning takes place under the supervision of a teacher, so this learning process is dependent. During the training of an ANN under supervised learning, the input vector is presented to the network, which produces an output vector. This output vector is compared with the desired/target output vector. An error signal is generated if there is a difference between the actual output and the desired/target output vector. On the basis of this error signal, the weights are adjusted until the actual output matches the desired output. Developed by Frank Rosenblatt using the McCulloch-Pitts model, the perceptron is the basic operational unit of artificial neural networks. It employs a supervised learning rule and is able to classify the data into two classes. Operational characteristics of the perceptron: It consists of a single neuron with an arbitrary number of inputs along with adjustable weights, but the output of the neuron is 1 or 0 depending upon the threshold. It also consists of a bias whose weight is always 1. The following figure gives a schematic representation of the perceptron. The perceptron thus has the following three basic elements − Links − It has a set of connection links, each of which carries a weight, including a bias link always having weight 1. Adder − It adds the inputs after they are multiplied with their respective weights. Activation function − It limits the output of the neuron. The most basic activation function is a Heaviside step function that has two possible outputs. This function returns 1 if the input is positive, and 0 for any negative input. A perceptron network can be trained for a single output unit as well as for multiple output units. Step 1 − Initialize the following to start the training − For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning rate must be set equal to 1. Step 2 − Continue steps 3-8 when the stopping condition is not true.
Step 3 − Continue steps 4-6 for every training vector x. Step 4 − Activate each input unit as follows − $$x_{i}\:=\:s_{i}\:(i\:=\:1\:to\:n)$$ Step 5 − Now obtain the net input with the following relation − $$y_{in}\:=\:b\:+\:\displaystyle\sum\limits_{i}^n x_{i}\:w_{i}$$ Here ‘b’ is the bias and ‘n’ is the total number of input neurons. Step 6 − Apply the following activation function to obtain the final output. $$f(y_{in})\:=\:\begin{cases}1 & if\:y_{in}\:>\:\theta\\0 & if \: -\theta\:\leqslant\:y_{in}\:\leqslant\:\theta\\-1 & if\:y_{in}\:<\:-\theta \end{cases}$$ Step 7 − Adjust the weight and bias as follows − Case 1 − if y ≠ t then, $$w_{i}(new)\:=\:w_{i}(old)\:+\:\alpha\:tx_{i}$$ $$b(new)\:=\:b(old)\:+\:\alpha t$$ Case 2 − if y = t then, $$w_{i}(new)\:=\:w_{i}(old)$$ $$b(new)\:=\:b(old)$$ Here ‘y’ is the actual output and ‘t’ is the desired/target output. Step 8 − Test for the stopping condition, which would happen when there is no change in weight. The following diagram is the architecture of the perceptron for multiple output classes. Step 1 − Initialize the following to start the training − For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning rate must be set equal to 1. Step 2 − Continue steps 3-8 when the stopping condition is not true. Step 3 − Continue steps 4-6 for every training vector x. Step 4 − Activate each input unit as follows − $$x_{i}\:=\:s_{i}\:(i\:=\:1\:to\:n)$$ Step 5 − Obtain the net input at each output unit with the following relation − $$y_{inj}\:=\:b_{j}\:+\:\displaystyle\sum\limits_{i}^n x_{i}\:w_{ij}$$ Here ‘b’ is the bias and ‘n’ is the total number of input neurons.
Step 6 − Apply the following activation function to obtain the final output for each output unit j = 1 to m − $$f(y_{inj})\:=\:\begin{cases}1 & if\:y_{inj}\:>\:\theta\\0 & if \: -\theta\:\leqslant\:y_{inj}\:\leqslant\:\theta\\-1 & if\:y_{inj}\:<\:-\theta \end{cases}$$ Step 7 − Adjust the weight and bias for i = 1 to n and j = 1 to m as follows − Case 1 − if $y_{j} \neq t_{j}$ then, $$w_{ij}(new)\:=\:w_{ij}(old)\:+\:\alpha\:t_{j}x_{i}$$ $$b_{j}(new)\:=\:b_{j}(old)\:+\:\alpha t_{j}$$ Case 2 − if $y_{j} = t_{j}$ then, $$w_{ij}(new)\:=\:w_{ij}(old)$$ $$b_{j}(new)\:=\:b_{j}(old)$$ Here ‘y’ is the actual output and ‘t’ is the desired/target output. Step 8 − Test for the stopping condition, which will happen when there is no change in weight. Adaline, which stands for Adaptive Linear Neuron, is a network having a single linear unit. It was developed by Widrow and Hoff in 1960. Some important points about Adaline are as follows − It uses a bipolar activation function. It uses the delta rule for training to minimize the Mean-Squared Error (MSE) between the actual output and the desired/target output. The weights and the bias are adjustable. The basic structure of Adaline is similar to the perceptron, with an extra feedback loop with the help of which the actual output is compared with the desired/target output. After comparison, on the basis of the training algorithm, the weights and bias will be updated. Step 1 − Initialize the following to start the training − For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning rate must be set equal to 1. Step 2 − Continue steps 3-8 when the stopping condition is not true. Step 3 − Continue steps 4-6 for every bipolar training pair s:t. Step 4 − Activate each input unit as follows − $$x_{i}\:=\:s_{i}\:(i\:=\:1\:to\:n)$$ Step 5 − Obtain the net input with the following relation − $$y_{in}\:=\:b\:+\:\displaystyle\sum\limits_{i}^n x_{i}\:w_{i}$$ Here ‘b’ is the bias and ‘n’ is the total number of input neurons.
Step 6 − Apply the following activation function to obtain the final output − $$f(y_{in})\:=\:\begin{cases}1 & if\:y_{in}\:\geqslant\:0 \\-1 & if\:y_{in}\:<\:0 \end{cases}$$ Step 7 − Adjust the weight and bias as follows − Case 1 − if y ≠ t then, $$w_{i}(new)\:=\:w_{i}(old)\:+\: \alpha(t\:-\:y_{in})x_{i}$$ $$b(new)\:=\:b(old)\:+\: \alpha(t\:-\:y_{in})$$ Case 2 − if y = t then, $$w_{i}(new)\:=\:w_{i}(old)$$ $$b(new)\:=\:b(old)$$ Here ‘y’ is the actual output and ‘t’ is the desired/target output. $(t\:-\:y_{in})$ is the computed error. Step 8 − Test for the stopping condition, which will happen when there is no change in weight or when the highest weight change that occurred during training is smaller than the specified tolerance. Madaline, which stands for Multiple Adaptive Linear Neuron, is a network which consists of many Adalines in parallel. It has a single output unit. Some important points about Madaline are as follows − It is just like a multilayer perceptron, where each Adaline acts as a hidden unit between the input and the Madaline layer. The weights and the bias between the input and Adaline layers, as we see in the Adaline architecture, are adjustable. The Adaline and Madaline layers have fixed weights and a bias of 1. Training can be done with the help of the delta rule. The architecture of Madaline consists of “n” neurons of the input layer, “m” neurons of the Adaline layer, and 1 neuron of the Madaline layer. The Adaline layer can be considered as the hidden layer, as it is between the input layer and the output layer, i.e. the Madaline layer. By now we know that only the weights and bias between the input and the Adaline layer are to be adjusted, and the weights and bias between the Adaline and the Madaline layer are fixed. Step 1 − Initialize the following to start the training − For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning rate must be set equal to 1.
Step 2 − Continue steps 3-8 when the stopping condition is not true. Step 3 − Continue steps 4-6 for every bipolar training pair s:t. Step 4 − Activate each input unit as follows − $$x_{i}\:=\:s_{i}\:(i\:=\:1\:to\:n)$$ Step 5 − Obtain the net input at each hidden layer, i.e. the Adaline layer, with the following relation − $$Q_{inj}\:=\:b_{j}\:+\:\displaystyle\sum\limits_{i}^n x_{i}\:w_{ij}\:\:\:j\:=\:1\:to\:m$$ Here ‘b’ is the bias and ‘n’ is the total number of input neurons. Step 6 − Apply the following activation function to obtain the final output at the Adaline and the Madaline layer − $$f(x)\:=\:\begin{cases}1 & if\:x\:\geqslant\:0 \\-1 & if\:x\:<\:0 \end{cases}$$ Output at the hidden (Adaline) unit $$Q_{j}\:=\:f(Q_{inj})$$ Final output of the network $$y\:=\:f(y_{in})$$ i.e. $\:\:y_{in}\:=\:b_{0}\:+\:\sum_{j = 1}^m\:Q_{j}\:v_{j}$ Step 7 − Calculate the error and adjust the weights as follows − Case 1 − if y ≠ t and t = 1 then, $$w_{ij}(new)\:=\:w_{ij}(old)\:+\: \alpha(1\:-\:Q_{inj})x_{i}$$ $$b_{j}(new)\:=\:b_{j}(old)\:+\: \alpha(1\:-\:Q_{inj})$$ In this case, the weights would be updated on $Q_{j}$ where the net input is close to 0, because t = 1. Case 2 − if y ≠ t and t = -1 then, $$w_{ik}(new)\:=\:w_{ik}(old)\:+\: \alpha(-1\:-\:Q_{ink})x_{i}$$ $$b_{k}(new)\:=\:b_{k}(old)\:+\: \alpha(-1\:-\:Q_{ink})$$ In this case, the weights would be updated on $Q_{k}$ where the net input is positive, because t = -1. Here ‘y’ is the actual output and ‘t’ is the desired/target output. Case 3 − if y = t then there would be no change in weights. Step 8 − Test for the stopping condition, which will happen when there is no change in weight or when the highest weight change that occurred during training is smaller than the specified tolerance. Back Propagation Neural (BPN) is a multilayer neural network consisting of the input layer, at least one hidden layer and the output layer. As its name suggests, back propagation will take place in this network.
The error which is calculated at the output layer, by comparing the target output and the actual output, will be propagated back towards the input layer. As shown in the diagram, the architecture of BPN has three interconnected layers having weights on them. The hidden layer as well as the output layer also has a bias, whose weight is always 1, on it. As is clear from the diagram, the working of BPN is in two phases. One phase sends the signal from the input layer to the output layer, and the other phase back-propagates the error from the output layer to the input layer. For training, BPN will use the binary sigmoid activation function. The training of BPN will have the following three phases. Phase 1 − Feed Forward Phase Phase 2 − Back Propagation of error Phase 3 − Updating of weights All these steps will be concluded in the algorithm as follows. Step 1 − Initialize the following to start the training − For easy calculation and simplicity, take some small random values. Step 2 − Continue steps 3-11 when the stopping condition is not true. Step 3 − Continue steps 4-10 for every training pair. Step 4 − Each input unit receives the input signal $x_{i}$ and sends it to the hidden units, for all $i\:=\:1\:to\:n$. Step 5 − Calculate the net input at the hidden unit using the following relation − $$Q_{inj}\:=\:b_{0j}\:+\:\sum_{i=1}^n x_{i}v_{ij}\:\:\:\:j\:=\:1\:to\:p$$ Here $b_{0j}$ is the bias on hidden unit $j$, and ‘p’ is the number of hidden units. Now calculate the net output by applying the following activation function $$Q_{j}\:=\:f(Q_{inj})$$ Send these output signals of the hidden layer units to the output layer units.
Step 6 − Calculate the net input at the output layer unit using the following relation − $$y_{ink}\:=\:b_{0k}\:+\:\sum_{j = 1}^p\:Q_{j}\:w_{jk}\:\:k\:=\:1\:to\:m$$ Here $b_{0k}$ is the bias on output unit $k$, and ‘m’ is the number of output units. Calculate the net output by applying the following activation function $$y_{k}\:=\:f(y_{ink})$$ Step 7 − Compute the error correcting term, in correspondence with the target pattern received at each output unit, as follows − $$\delta_{k}\:=\:(t_{k}\:-\:y_{k})f^{'}(y_{ink})$$ On this basis, update the weight and bias as follows − $$\Delta w_{jk}\:=\:\alpha \delta_{k}\:Q_{j}$$ $$\Delta b_{0k}\:=\:\alpha \delta_{k}$$ Then, send $\delta_{k}$ back to the hidden layer. Step 8 − Now each hidden unit sums its delta inputs from the output units. $$\delta_{inj}\:=\:\displaystyle\sum\limits_{k=1}^m \delta_{k}\:w_{jk}$$ The error term can then be calculated as follows − $$\delta_{j}\:=\:\delta_{inj}f^{'}(Q_{inj})$$ On this basis, update the weight and bias as follows − $$\Delta v_{ij}\:=\:\alpha\delta_{j}x_{i}$$ $$\Delta b_{0j}\:=\:\alpha\delta_{j}$$ Step 9 − Each output unit ($y_{k}$, $k\:=\:1\:to\:m$) updates the weight and bias as follows − $$w_{jk}(new)\:=\:w_{jk}(old)\:+\:\Delta w_{jk}$$ $$b_{0k}(new)\:=\:b_{0k}(old)\:+\:\Delta b_{0k}$$ Step 10 − Each hidden unit ($Q_{j}$, $j\:=\:1\:to\:p$) updates the weight and bias as follows − $$v_{ij}(new)\:=\:v_{ij}(old)\:+\:\Delta v_{ij}$$ $$b_{0j}(new)\:=\:b_{0j}(old)\:+\:\Delta b_{0j}$$ Step 11 − Check for the stopping condition, which may be either the number of epochs reached or the target output matching the actual output. The delta rule works only for the output layer. On the other hand, the generalized delta rule, also called the back-propagation rule, is a way of creating the desired values for the hidden layer.
For the activation function $y_{k}\:=\:f(y_{ink})$, the net inputs on the hidden layer and on the output layer can be given by $$y_{ink}\:=\:\displaystyle\sum\limits_j\:Q_{j}w_{jk}$$ And $\:\:Q_{inj}\:=\:\sum_i x_{i}v_{ij}$ Now the error which has to be minimized is $$E\:=\:\frac{1}{2}\displaystyle\sum\limits_{k}\:[t_{k}\:-\:y_{k}]^2$$ By using the chain rule, we have $$\frac{\partial E}{\partial w_{jk}}\:=\:\frac{\partial }{\partial w_{jk}}(\frac{1}{2}\displaystyle\sum\limits_{k}\:[t_{k}\:-\:y_{k}]^2)$$ $$=\:\frac{\partial }{\partial w_{jk}}\lgroup\frac{1}{2}[t_{k}\:-\:f(y_{ink})]^2\rgroup$$ $$=\:-[t_{k}\:-\:y_{k}]\frac{\partial }{\partial w_{jk}}f(y_{ink})$$ $$=\:-[t_{k}\:-\:y_{k}]f^{'}(y_{ink})\frac{\partial }{\partial w_{jk}}(y_{ink})$$ $$=\:-[t_{k}\:-\:y_{k}]f^{'}(y_{ink})Q_{j}$$ Now let us say $\delta_{k}\:=\:[t_{k}\:-\:y_{k}]f^{'}(y_{ink})$, so that $\frac{\partial E}{\partial w_{jk}}\:=\:-\delta_{k}Q_{j}$. The weights on connections to the hidden unit $Q_{j}$ can be treated similarly: $$\frac{\partial E}{\partial v_{ij}}\:=\:- \displaystyle\sum\limits_{k} \delta_{k}\frac{\partial }{\partial v_{ij}}\:(y_{ink})$$ Putting in the value of $y_{ink}$, we get the following $$\delta_{j}\:=\:\displaystyle\sum\limits_{k}\delta_{k}w_{jk}f^{'}(Q_{inj})$$ so that $\frac{\partial E}{\partial v_{ij}}\:=\:-\delta_{j}x_{i}$. Weight updating can be done as follows − For the output unit − $$\Delta w_{jk}\:=\:-\alpha\frac{\partial E}{\partial w_{jk}}$$ $$=\:\alpha\:\delta_{k}\:Q_{j}$$ For the hidden unit − $$\Delta v_{ij}\:=\:-\alpha\frac{\partial E}{\partial v_{ij}}$$ $$=\:\alpha\:\delta_{j}\:x_{i}$$
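The three phases (feed forward, back propagation of error, weight updating) can be sketched end-to-end in plain Python. This is a minimal illustration on the XOR problem with a binary sigmoid; the layer sizes, learning rate, random seed, and epoch count are illustrative choices of mine rather than values fixed by the text:

```python
import math, random

# Minimal sketch of the BPN algorithm above: one hidden layer, binary
# sigmoid, trained on XOR. All hyperparameters are illustrative.
random.seed(0)
n, p, m = 2, 4, 1                          # input, hidden, output units
alpha = 0.5                                # learning rate

f = lambda x: 1.0 / (1.0 + math.exp(-x))   # binary sigmoid
df = lambda fx: fx * (1.0 - fx)            # f'(x) expressed via fx = f(x)

v  = [[random.uniform(-1, 1) for _ in range(p)] for _ in range(n)]  # input->hidden
bj = [random.uniform(-1, 1) for _ in range(p)]
w  = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(p)]  # hidden->output
bk = [random.uniform(-1, 1) for _ in range(m)]

data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]

def forward(x):
    Q = [f(bj[j] + sum(x[i] * v[i][j] for i in range(n))) for j in range(p)]
    y = [f(bk[k] + sum(Q[j] * w[j][k] for j in range(p))) for k in range(m)]
    return Q, y

def total_error():
    return sum((t[k] - forward(x)[1][k]) ** 2 for x, t in data for k in range(m))

err_start = total_error()
for epoch in range(5000):
    for x, t in data:
        Q, y = forward(x)                                  # phase 1: feed forward
        dk = [(t[k] - y[k]) * df(y[k]) for k in range(m)]  # phase 2: output deltas
        dj = [df(Q[j]) * sum(dk[k] * w[j][k] for k in range(m)) for j in range(p)]
        for j in range(p):                                 # phase 3: update weights
            for k in range(m):
                w[j][k] += alpha * dk[k] * Q[j]
        for k in range(m):
            bk[k] += alpha * dk[k]
        for i in range(n):
            for j in range(p):
                v[i][j] += alpha * dj[j] * x[i]
        for j in range(p):
            bj[j] += alpha * dj[j]

print(err_start, "->", total_error())  # squared error drops as training proceeds
```

With these settings the network typically learns XOR; the reliable observation is that the total squared error decreases from its random-initialization value, exactly the descent the derivation above guarantees for small enough steps.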
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form", like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD)... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now). Isn't there a theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from the isomorphism. @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again. $O(n)$ acts transitively on $S^{n-1}$ with stabilizer at a point $O(n-1)$. For any transitive $G$-action on a set $X$ with stabilizer $H$, $G/H \cong X$ set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.