Latest revision as of 18:10, 13 December 2017

The '''Weeks-Chandler-Andersen perturbation theory''' is based on the following decomposition of the [[intermolecular pair potential]] (in particular, the [[Lennard-Jones model | Lennard-Jones potential]]). The reference system pair potential is given by (Eq. 4 of Ref. 1):

:<math>
\Phi_{\rm repulsive} (r) = \left\{ \begin{array}{ll}
\Phi_{\rm LJ}(r) + \epsilon & {\rm if} \; r < 2^{1/6}\sigma \\
0 & {\rm if} \; r \ge 2^{1/6}\sigma
\end{array} \right.
</math>

and the perturbation potential is given by (Eq. 5 of Ref. 1):

:<math>
\Phi_{\rm attractive} (r) = \left\{ \begin{array}{ll}
-\epsilon & {\rm if} \; r < 2^{1/6}\sigma \\
\Phi_{\rm LJ}(r) & {\rm if} \; r \ge 2^{1/6}\sigma
\end{array} \right.
</math>

==References==
# John D. Weeks, David Chandler and Hans C. Andersen "Role of Repulsive Forces in Determining the Equilibrium Structure of Simple Liquids", Journal of Chemical Physics '''54''' pp. 5237-5247 (1971)
# Dor Ben-Amotz and George Stell "Reformulation of Weeks-Chandler-Andersen Perturbation Theory Directly in Terms of a Hard-Sphere Reference System", Journal of Physical Chemistry B '''108''' pp. 6877-6882 (2004)

[[category: perturbation theory]]
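The decomposition above is easy to tabulate numerically. A minimal sketch in reduced units ($\epsilon = \sigma = 1$ by default; the function names are my own, not from the literature):

```python
import math

def lj(r, eps=1.0, sigma=1.0):
    """Full Lennard-Jones pair potential."""
    sr6 = (sigma/r)**6
    return 4.0*eps*(sr6*sr6 - sr6)

def wca_split(r, eps=1.0, sigma=1.0):
    """Return (repulsive, attractive) parts of the WCA decomposition;
    the split point 2^(1/6)*sigma is the minimum of the LJ potential."""
    r_min = 2.0**(1.0/6.0)*sigma
    if r < r_min:
        return lj(r, eps, sigma) + eps, -eps
    return 0.0, lj(r, eps, sigma)
```

By construction the two parts sum to the full potential at every $r$, the repulsive part vanishes continuously at $2^{1/6}\sigma$, and the attractive part is constant inside the core.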
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\, \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma.$$ I am having a tough time visualising what this is. Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; is the point $z = 0$ (a) a removable singularity, (b) a pole, (c) an essential singularity, or (d) a non-isolated singularity? Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!...$ I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand; it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? Mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function to it. Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. 
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then, since $y=x$, by (1) we would have $y\sim x$, proving (2); (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial, but I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person; not going to classify as RHV yet, as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s. But I think that to prove the implication for transitivity, an inference rule and a use of MP seem to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti? 
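The point being debated — that (1) follows from reflexivity together with substitution of equals, i.e. from the equality axioms of the underlying logic — can be recorded as a short Lean 4 sketch (my own formalization, not from the chat):

```lean
-- `Equivalence r` bundles refl, symm and trans for `r`.
-- Only reflexivity and substitution of equals (`subst`) are used:
theorem eq_implies_rel {α : Type} (r : α → α → Prop)
    (h : Equivalence r) {x y : α} (he : x = y) : r x y := by
  subst he          -- eliminate `y` using the equality `he : x = y`
  exact h.refl x    -- the goal is now `r x x`
```

The proof never touches `h.symm` or `h.trans`, which is exactly the observation that (2) and (3) then come for free.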
@AlessandroCodenotti A precise formulation would help in this case, because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or on the FOL axioms (without equality axioms). This would allow one, in some cases, to define an "equality-like" relation for set theories in which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is of the same order (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$ I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\ldots, a_{n-1}$ all zero, because by the triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested in $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
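The bound hinted at above ($|f|\le K_1x^2$ and $|g|\le K_2x^2$ give $|f-g|\le(K_1+K_2)x^2$) can be spot-checked numerically; the sample functions below are arbitrary choices of mine, and this is an illustration, not a proof:

```python
# If |f(x)| <= K1*x^2 and |g(x)| <= K2*x^2 near 0, then
# |f(x) - g(x)| <= (K1 + K2)*x^2, i.e. f - g is again O(x^2) as x -> 0.
f = lambda x: 3*x**2             # K1 = 3
g = lambda x: x**2 - x**3        # K2 = 2 works on |x| <= 1
K1, K2 = 3, 2
xs = [i/1000 for i in range(-1000, 1001) if i != 0]
ok = all(abs(f(x) - g(x)) <= (K1 + K2)*x*x + 1e-15 for x in xs)
```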
Following Gatheral (The Volatility Surface, chapter 2) we assume the following process: $$ dS_t = S_t(\mu_t\, dt+\sqrt{\nu_t}\,dZ^1_t)$$ $$ d\nu_t= -\lambda(\nu_t-\bar{\nu})dt+\eta\sqrt{\nu_t}\,dZ^2_t$$ where $Z^1,Z^2$ are two Brownian motions such that $d\langle Z^1,Z^2\rangle_t= \rho\, dt$. Using the general valuation PDE for a stochastic volatility model we get for this process the following PDE: $$\frac{\partial V}{\partial t} +\frac{1}{2}\frac{\partial^2 V}{\partial S^2}\nu S^2+\rho\eta\nu S \frac{\partial^2 V}{\partial \nu \partial S} + \frac{1}{2}\eta^2\nu\frac{\partial^2 V}{\partial \nu^2} + rS \frac{\partial V}{\partial S}-rV=\lambda(\nu-\bar{\nu})\frac{\partial V}{\partial \nu}$$ Now, by introducing $F_{t,T}$, the time-$T$ forward of the stock index, $x:=\log{(\frac{F_{t,T}}{K})}$, where $K$ denotes the strike, $\tau:=T-t$, and $C$, the future value to expiration of the European option price (rather than its value today, $V$), the above PDE should transform to $$-\frac{\partial C}{\partial \tau}+\frac{1}{2}\nu C_{11}-\frac{1}{2}\nu C_1+\frac{1}{2}\eta^2\nu C_{22}+\rho\eta\nu C_{12} - \lambda(\nu-\bar{\nu})C_2=0$$ where the subscripts $1,2$ refer to differentiation w.r.t. $x$ and $\nu$. We have $V(S,\nu,t)=C(f(S),\nu,g(t))$, where $g(t):=\tau=T-t$. About the form of $f$ I'm unsure. Using this we get for the first term: $$\frac{\partial V}{\partial t} = \frac{\partial C}{\partial \tau} \frac{\partial \tau}{\partial t}=-\frac{\partial C}{\partial \tau}$$ For $f$ we know $f(S)=\log{\frac{F_{t,T}(S)}{K}}$ (I suppress the time subscript $t$). I've tried $F_{t,T}=S_t\exp{\int_t^T\mu_s\,ds}$, with $\mu_s\equiv 0$. However, I do not see how we can get this PDE in terms of $C$. It would be great if someone could explain the following two points: What is meant by "future value to expiration"? How is $C$ related to $V$ in functional form?
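Toward the second question, one can guess $f(S) = \log(F_{t,T}/K)$ with $F_{t,T} = S e^{r\tau}$ (an assumption on my part, consistent with the $rS\,\partial_S V$ term in the PDE) and let sympy verify the chain-rule identities $S\,\partial_S V = C_1$ and $S^2\,\partial^2_S V = C_{11} - C_1$, which are exactly where the $\frac{1}{2}\nu C_{11} - \frac{1}{2}\nu C_1$ pair in the transformed equation comes from:

```python
import sympy as sp

S, K, r, tau, xv = sp.symbols('S K r tau x', positive=True)
x_of_S = sp.log(S*sp.exp(r*tau)/K)   # assumed transform x = log(F/K), F = S*e^(r*tau)

# Check the identities on a concrete test function C0(x); any smooth C0 works.
C0 = xv**3
V0 = C0.subs(xv, x_of_S)

lhs1 = sp.simplify(S*sp.diff(V0, S))                  # S * dV/dS
rhs1 = sp.diff(C0, xv).subs(xv, x_of_S)               # C0'(x)
lhs2 = sp.simplify(S**2*sp.diff(V0, S, 2))            # S^2 * d2V/dS2
rhs2 = (sp.diff(C0, xv, 2) - sp.diff(C0, xv)).subs(xv, x_of_S)  # C0''(x) - C0'(x)
```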
Mathematics > Combinatorics
Title: Simple expressions for the long walk distance
(Submitted on 1 Dec 2011 (v1), last revised 23 Jul 2012 (this version, v4))
Abstract: The walk distances in graphs are defined as the result of appropriate transformations of the $\sum_{k=0}^\infty(tA)^k$ proximity measures, where $A$ is the weighted adjacency matrix of a connected weighted graph and $t$ is a sufficiently small positive parameter. The walk distances are graph-geodetic; moreover, they converge to the shortest path distance and to the so-called long walk distance as the parameter $t$ approaches its limiting values. In this paper, simple expressions for the long walk distance are obtained. They involve the generalized inverse, minors, and inverses of submatrices of the symmetric irreducible singular M-matrix ${\cal L}=\rho I-A,$ where $\rho$ is the Perron root of $A.$
Submission history: From: Pavel Chebotarev [view email] [v1] Thu, 1 Dec 2011 05:39:33 GMT (7kb) [v2] Fri, 2 Dec 2011 13:20:21 GMT (7kb) [v3] Mon, 16 Apr 2012 16:11:13 GMT (8kb) [v4] Mon, 23 Jul 2012 15:36:38 GMT (9kb)
Julia set

The Julia set of a function is the set of values in the range of holomorphy of the function that fall outside the range of holomorphy under some integer iterate of the function. The Julia set is often denoted by the symbol \(J\) or \(\mathbb J\); the name of the function can be indicated either as a subscript or in parentheses immediately after this symbol. Let \(f\) be a holomorphic function defined on some \(C\subseteq \mathbb C\). Then \(\mathbb J(f) = \{ z \in C : \exists ~ n\in \mathbb N_+ ~:~ f^n(z) \notin C\}\), where the superscript after the name of the function indicates the number of iterations. This notation has been used by Walter Bergweiler at least since 1993 [1]. In this case, the number \(n\) of iterations is supposed to be an integer.

Origin of the name
The term Julia set is chosen after Gaston Julia, who presented results on the invariants of the iteration of functions in 1918 (Gaston Julia, Mémoire sur l'itération des fonctions rationnelles, Journal de mathématiques pures et appliquées, 8e série, tome 1 (1918), p. 47-246, http://portail.mathdoc.fr/JMPA/afficher_notice.php?id=JMPA_1918_8_1_A2_0).

Fatou set
The Fatou set is the complement: \(\mathbb F(f) = \{ z \in C : \forall ~ n\in \mathbb N_+ ~,~ f^n(z) \in C\}\), so that \(\mathbb J(f) \cup \mathbb F(f)=C\). At least for positive integer \(n\), the following relations hold: \[f^n(\mathbb J(f)) = \mathbb J(f)\] \[ f^n(\mathbb F(f)) = \mathbb F(f)\]

Fractal behavior
Usually, the Julia set of any non-trivial function with at least one singularity shows complicated, fractal behaviour; similar structures reproduce again and again, displaced and scaled.

Non-integer iterates
The definition of the Julia set above implies that the function \(f\) is iterated an integer number of times. In principle, a similar set can be considered, assuming that the number \(n\) of the iterate can also take non-integer values.

Generalization and non-holomorphic transforms
The requirements on the iterated function \(F\) may be less strict than assumed above. 
In particular, the requirement of holomorphy can be replaced by a requirement of smoothness, or by a requirement of continuity. Then the point \(z\) belongs to the Julia set if \(F^n(z)\) is a continuous function for every \(n\). In addition, dependence on a parameter can be considered, so that \(F=F_c\); then the initial value \(z_0\) is assumed to be fixed. The number \(c\) is declared to belong to the set \(\mathbb J\) if \(\displaystyle \lim_{n \rightarrow \infty}F_c^n(z_0)\) exists and is a smooth function of \(c\). In this way, there exist many different definitions and meanings of the term Julia set. In many cases, the Julia sets defined in different ways show very similar fractal behavior, and, looking at part of a plot of an approximation of the set, it is difficult to guess which definition was used and which function's iteration the plot corresponds to.

References
1. Walter Bergweiler. Iteration of meromorphic functions. Bull. Amer. Math. Soc. 29 (1993), 151-188. http://www.ams.org/journals/bull/1993-29-02/S0273-0979-1993-00432-4/
2. P. Fatou. Sur les équations fonctionnelles. Bulletin de la S.M.F., tome 48 (1920), p. 208-314. http://archive.numdam.org/ARCHIVE/BSMF/BSMF_1920__48_/BSMF_1920__48__208_1/BSMF_1920__48__208_1.pdf
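For the most common special case, polynomial iteration \(f(z)=z^2+c\), membership in the (filled) Julia set can be probed with the standard escape-time test. A minimal sketch (the bailout radius 2 is specific to this quadratic family, and \(c\) is chosen by the caller):

```python
def escape_time(z, c, max_iter=100, bailout=2.0):
    """Iterate f(z) = z*z + c. Return the first n with |f^n(z)| > bailout,
    or max_iter if the orbit stays bounded; bounded orbits make up the
    filled Julia set, whose boundary is the Julia set J(f)."""
    for n in range(max_iter):
        if abs(z) > bailout:
            return n
        z = z*z + c
    return max_iter
```

For \(c=-1\) the orbit of \(0\) is the 2-cycle \(0\to-1\to 0\), so \(0\) never escapes, while a point well outside the bailout radius escapes immediately.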
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Highlights of experimental results from ALICE (Elsevier, 2017-11) Highlights of recent results from the ALICE collaboration are presented. 
The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ... Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE (Elsevier, 2017-11) We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ... System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE (Elsevier, 2017-11) We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ... Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions (Elsevier, 2017-11) Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ... Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE (Elsevier, 2017-11) The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
Consider an Ornstein-Uhlenbeck position process: $dV_t=dB_t-\lambda V_t\,dt$, $dX_t=V_t\,dt$, where $B_t,V_t,X_t$ are all in $\mathbb{R}^d$ with $d\geq 3$. Let $X_0\neq0$, $V_0=0$. Let $r>0$ and let $S_r$ be the sphere with centre $0$ and radius $r$. Let $H(r)$ be the probability that $X_t$ ever hits $S_r$. The problem is to find $\lim_{r\rightarrow0} H(r)r^{1-d}$. I am thinking about the following: I want to prove that the probability that $X_t$ hits the boundary of the ball more than twice can be ignored, and then integrate the probability density that $X_t$ is on the boundary of $S_r$ at time $t$ over $t$. However, I don't know how to prove that. Another way is to write a PDE for the hitting probability $P(X_t,V_t)$ as a function of the current state; it satisfies $V\cdot \frac{\partial P}{\partial X}-\lambda V\cdot\frac{\partial P}{\partial V}+\frac{1}{2}\Delta_V P=0$. Then maybe we can do some asymptotics on the solution. But again, I have no idea how to do it.
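For orientation, $H(r)$ can be estimated crudely by Monte Carlo with an Euler–Maruyama scheme (my own sketch; $\lambda$, $X_0$, the finite horizon $T$, and the step size are arbitrary illustration values, and a finite horizon only lower-bounds the "ever hits" probability):

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam, dt, T = 3, 1.0, 2e-3, 4.0
x0 = np.array([1.0, 0.0, 0.0])

def hit_probability(r, n_paths=500):
    """Fraction of Euler-Maruyama paths of dV = dB - lam*V dt, dX = V dt
    (V_0 = 0, X_0 = x0) that enter the ball of radius r before time T."""
    X = np.tile(x0, (n_paths, 1))
    V = np.zeros((n_paths, d))
    hit = np.zeros(n_paths, dtype=bool)
    for _ in range(int(T/dt)):
        V += -lam*V*dt + np.sqrt(dt)*rng.standard_normal((n_paths, d))
        X += V*dt
        hit |= np.einsum('ij,ij->i', X, X) < r*r
    return hit.mean()
```

Tabulating the estimates of $H(r)r^{1-d}$ for a few small $r$ gives a sanity check on any conjectured limit, although the $r\to 0$ regime quickly becomes expensive to sample.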
Where did you come up with interpolating the "error"? (And how do you measure the error?) On the first visit to a finer grid, the entire solution $u$ must be interpolated, ideally using a higher-order operator (e.g., a postprocessed/reconstructed solution for FEM). This FMG interpolation is $u^h \gets \mathbb{I}_H^h u^H$. (It's okay to use the normal interpolation $\mathbb{I}_H^h = I_H^h$, but this typically gives up some efficiency, at least for smooth problems.) After FMG interpolation, you just apply one or more V-cycles (or W-cycles, etc.). (Make sure to run at least one smoother before restricting.) The most common choices are linear defect correction, in which only the residual $r^h = A^h u^h - b^h$ is restricted, and the Full Approximation Scheme (FAS), which is natural for nonlinear problems because it avoids global linearization (e.g., Newton or Picard). In FAS, the fine-grid state is restricted using the state restriction operator $\tilde u^H \gets \hat I_h^H \tilde u^h$. State restriction is not required by linear defect correction multigrid (a convenient attribute). The most common state restrictions are nodal injection (for FD and FE) and coarse cell averages (for FV and mixed FE). Now we can write the FAS coarse grid equation (equally valid for nonlinear $A$) as $$A^H u^H = \underbrace{I_h^H b^h}_{b^H} + \underbrace{A^H \hat I_h^H \tilde u^h - I_h^H A^h \tilde u^h}_{\tau_h^H}$$ where we have identified the coarse representation of the right-hand side, $b^H$, and the additional correction $\tau_h^H$, which represents the influence of the fine grid on the coarse grid equation. Note the property that the restriction of the fine grid solution $u^{h*}$ satisfies the coarse grid equation: $A^H \hat I_h^H u^{h*} = b^H + \tau_h^{H*}$. After solving the coarse grid equation, FAS interpolates the change, leading to an updated fine solution $u^h \gets \tilde u^h + I_H^h (u^H - \hat I_h^H \tilde u^h)$.
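To make the bookkeeping concrete, here is a minimal two-grid FAS cycle for the 1D Poisson problem (my own sketch; the operator is linear, so FAS reduces to defect correction, but the coarse equation is assembled in the FAS form, with full weighting as $I_h^H$ and injection as $\hat I_h^H$):

```python
import numpy as np

def poisson(n):
    """Standard 3-point Laplacian on n interior points of (0, 1)."""
    h = 1.0/(n + 1)
    return (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))/h**2

def restrict_fw(v):    # residual/rhs restriction I_h^H: full weighting
    return 0.25*(v[0:-2:2] + 2*v[1:-1:2] + v[2::2])

def restrict_inj(v):   # state restriction \hat I_h^H: injection at coarse nodes
    return v[1::2].copy()

def prolong(vc):       # I_H^h: linear interpolation (zero Dirichlet boundaries)
    v = np.zeros(2*len(vc) + 1)
    v[1::2] = vc
    v[2:-1:2] = 0.5*(vc[:-1] + vc[1:])
    v[0], v[-1] = 0.5*vc[0], 0.5*vc[-1]
    return v

def jacobi(A, u, b, sweeps=3, omega=2.0/3.0):
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega*(b - A @ u)/d
    return u

def fas_two_grid(Af, Ac, u, b):
    u = jacobi(Af, u, b)                             # pre-smooth
    u_inj = restrict_inj(u)                          # \hat I_h^H u~
    tau = Ac @ u_inj - restrict_fw(Af @ u)           # tau correction
    uc = np.linalg.solve(Ac, restrict_fw(b) + tau)   # exact coarse solve
    u = u + prolong(uc - u_inj)                      # interpolate the *change*
    return jacobi(Af, u, b)                          # post-smooth
```

On $-u''=\pi^2\sin(\pi x)$ with 31 fine and 15 coarse points, a handful of cycles reduce the residual by several orders of magnitude; replacing `np.linalg.solve` with a recursive call turns this into a V-cycle.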
I need some help with this question. Let $\Omega$ be an open subset of $\mathbb{R}^{2}$ (say a square) with $\partial{\Omega} =\Gamma_{1} \cup \Gamma_{2} \cup\Gamma_{3} \cup\Gamma_{4}$. A structure occupying this surface is subject to: 1) a weight force $F$, for which I am given the explicit expression at each point of $\Omega$; 2) a pressure force on $\Gamma_{1}$, for which I am given the explicit expression. We have the following problem: $$-\Delta u = F \quad\text{a.e. in}\quad\Omega$$ $$\nabla{u}\cdot{\overrightarrow{n}_{1}} = P \quad\text{on}\quad \Gamma_{1}$$ $$\nabla{u}\cdot{\overrightarrow{n}_{2,3}}= 0\quad\text{on}\quad\Gamma_{2}\cup\Gamma_{3}$$ $$ u = 0 \quad\text{on}\quad \Gamma_{4}.$$ The weak form of this problem is: find $u\in{V}$ such that, for all $\phi\in{V}$, $$\int_{\Omega}{\nabla{u}\cdot\nabla{\phi}}\,dx = \int_{\Omega}{F\phi}\,dx+\int_{\Gamma_{1}}{P\phi}\,d\sigma$$ where $V$ is an appropriate set. My question is: how can I derive a bound for the quantity $\vert{\nabla{u(x)}}\vert_{2}$ at a.e. point of $\Omega$? Note that when using a truncating test function near a point far from the boundary, we lose the contribution of the force $P$.
It is impossible to construct an explicit example of a non-Borel-measurable subset of $ \mathbb{R} $ as any proof of the existence of such a subset must require the Axiom of Choice ($ \mathsf{AC} $). As you may already know, any construction that relies on $ \mathsf{AC} $ is never explicit — $ \mathsf{AC} $ yields only pure existence results. Before we go into a more detailed explanation, let us introduce some notation first. $ \mathsf{ZF} $ — The Zermelo-Fraenkel axioms. $ \mathsf{ZFC} $ — $ \mathsf{ZF} + \mathsf{AC} $. $ \mathsf{DC} $ — Axiom of Dependent Choice. $ \text{Con}(\text{Statement $ P $}) $ — Statement $ P $ is consistent. It is a well-known set-theoretic result (Theorem 10.6 of The Axiom of Choice by Thomas Jech) that$$\text{Con}(\mathsf{ZF}) \Longrightarrow\text{Con}(\mathsf{ZF} + \mathbb{R} \text{ is a countable union of countable sets}).$$Hence, from a model of $ \mathsf{ZF} $, we may construct another model $ M $ of $ \mathsf{ZF} $ in which the statement$$\mathbb{R} \text{ is a countable union of countable sets}$$is true. Observe that $ M $ cannot satisfy $ \mathsf{DC} $, for if it did, then as it is provable within $ \mathsf{ZF} + \mathsf{DC} $ that a countable union of countable sets is countable, $ \mathbb{R} $ would be countable in $ M $. This is contradictory as it is provable within $ \mathsf{ZF} $ that $ \mathbb{R} $ is uncountable (see here). We now reason within $ M $. Let $ S \subseteq \mathbb{R} $; the claim is that $ S $ is Borel-measurable. We have, a priori, a sequence $ (A_{n})_{n \in \mathbb{N}} $ consisting of countable subsets of $ \mathbb{R} $ such that $ \displaystyle \mathbb{R} = \bigcup_{n=1}^{\infty} A_{n} $. Then$$S = \bigcup_{n=1}^{\infty} (S \cap A_{n}).$$As it is provable within $ \mathsf{ZF} $ that a subset of a countable set is countable, each $ S \cap A_{n} $ is countable. Hence, $ S $ is a countable union of countable sets. 
As a $ \sigma $-algebra is by definition closed under a countable union, and as singletons in $ \mathbb{R} $ are Borel-measurable, it follows that a countable subset of $ \mathbb{R} $ is Borel-measurable and that $ S $, being a countable union of countable (hence Borel-measurable) subsets of $ \mathbb{R} $, is Borel-measurable. Therefore,\begin{align}\text{Con}(\mathsf{ZF}) \Longrightarrow\text{Con}(\mathsf{ZF} + \text{Every subset of $ \mathbb{R} $ is Borel-measurable}).\end{align}We now see that it is consistent with $ \mathsf{ZF} $ alone that every subset of $ \mathbb{R} $ is Borel-measurable. However, this situation is inadequate, because without $ \mathsf{DC} $ in $ M $, we cannot develop much of real analysis, e.g., we cannot establish the $ \sigma $-additivity of the standard Borel measure. If $ \mathsf{DC} $ is allowed, what are the implications then? This is explained in the next section. Work done by Robert Solovay and Saharon Shelah has yielded the following result:\begin{align}& ~ \text{Con}(\mathsf{ZFC} + \text{An inaccessible cardinal exists}) \\\iff& ~ \text{Con}(\mathsf{ZF} + \mathsf{DC} + \text{Every subset of $ \mathbb{R} $ is measurable}).\end{align}The forward implication was the first to be proved, by Solovay in his famous paper A Model of Set Theory in Which Every Set of Reals is Lebesgue Measurable. Shelah later proved the backward implication in his paper Can You Take Solovay's Inaccessible Away? One thing is now clear: If one wants a model of $ \mathsf{ZF} $ where $ \mathsf{DC} $ is satisfied so as to be able to do real analysis, and demand at the same time that all subsets of $ \mathbb{R} $ be Borel-measurable, then the price to pay is to posit that the existence of inaccessible cardinals is compatible with $ \mathsf{ZFC} $. Actually, it is not entirely unreasonable to assume that inaccessible cardinals exist. 
In fact, modern category theory takes their existence for granted by postulating the existence of Grothendieck universes (a formal proof that the existence of Grothendieck universes is equivalent to the existence of inaccessible cardinals may be found in SGA 4, in the appendix Univers written by Bourbaki). Hence, let us assume, without losing too much sleep, that a model $ M $ exists that satisfies$$\mathsf{ZF} + \mathsf{DC} +\text{Every subset of $ \mathbb{R} $ is Borel-measurable}.$$ We can now demonstrate that any proof of the existence of a non-Borel-measurable subset of $ \mathbb{R} $ relies on $ \mathsf{AC} $. Assume, for the sake of contradiction, that the proof does not depend on $ \mathsf{AC} $, i.e., it is provable within $ \mathsf{ZF} $ that a non-Borel-measurable subset of $ \mathbb{R} $ exists. Then as $ M $ satisfies $ \mathsf{ZF} $, we obtain (inside $ M $) a non-Borel-measurable subset of $ \mathbb{R} $, which contradicts the fact that $ M $ contains no such set. Therefore, $ \mathsf{AC} $ (or some weak version thereof) is indeed required for the proof.
Show that $X$ is countably compact if and only if every nested sequence $C_1 \supset C_2 \supset \ldots$ of closed nonempty sets of $X$ has a nonempty intersection. Suppose that $X$ is not countably compact. Then there exists a countable open cover $\{U_n\}$ of $X$ that has no finite subcover. Consider the collection of closed sets $C_n = X - (U_1 \cup \ldots \cup U_n)$; if $x \in C_{n+1}$, then $x \in X-U_i$ for every $i=1,\ldots,n+1$, so in particular $x \in X-U_i$ for $i=1,\ldots,n$, and hence the sequence is nested. Moreover, this collection consists entirely of nonempty sets, for if $C_n =X - (U_1 \cup \ldots \cup U_n)$ were empty, then we would have a finite subcover. Hence, $\{C_n\}$ is a nested sequence of closed nonempty sets. By way of contradiction, suppose that the intersection is nonempty, say $x \in X - (U_1 \cup \ldots \cup U_n)$ for every $n$. Since $\{U_n\}$ is a cover, $x$ must be in $\bigcup U_i$, which implies there exists a $k$ such that $x \in U_k \subseteq U_1 \cup \ldots \cup U_k$. This contradicts the fact that $x \in X - (U_1 \cup \ldots \cup U_k)$. Hence, the intersection has to be empty. Now, suppose that there exists a nested sequence $\{C_i\}$ of nonempty closed sets whose intersection is empty. Then $U_i = X-C_i$ forms a collection of open sets, and since $\bigcup U_i = \bigcup (X-C_i) = X - \bigcap C_i = X - \emptyset = X$, we see moreover that it is an open cover. Now, if there were to exist a finite subcover, say $\{U_{k_1},\ldots,U_{k_n} \}$, where $k_1 \le \ldots \le k_n$, then $X \subseteq \bigcup U_{k_i} = X - \bigcap C_{k_i} = X - C_{k_n}$, implying that $C_{k_n} = \emptyset$, which is a contradiction. Hence, there cannot be a finite subcover. How does this sound?
(Log-)convex functions do not need to be twice differentiable, so any proof through $\frac{d^2}{dx^2}$ lacks generality. On the other hand, $e^x$ is both a convex and a log-convex function, and we may wonder when the composition of two convex functions is convex. Assume that $f(x)$ is convex. Then $g(x)=e^{f(x)}$ is convex iff $$ g(\lambda x+(1-\lambda) y) \leq \lambda g(x) + (1-\lambda) g(y)\qquad \forall \lambda\in[0,1] $$which is equivalent to $$ \exp\left[f(\lambda x+(1-\lambda) y)\right]\leq \lambda e^{f(x)}+(1-\lambda)e^{f(y)}\qquad \forall\lambda\in[0,1]. $$By the convexity of $f$ we know that $f(\lambda x+(1-\lambda) y)\leq \lambda f(x)+(1-\lambda)f(y)$. Since $\exp$ is increasing we get$$ \exp\left[f(\lambda x+(1-\lambda) y)\right]\leq \exp\left[\lambda f(x)+(1-\lambda)f(y)\right] $$unconditionally, and since $\exp$ is convex we get$$\exp\left[\lambda f(x)+(1-\lambda)f(y)\right]\leq \lambda e^{f(x)}+(1-\lambda)e^{f(y)}$$as wanted. In other terms, if $a(x),b(x)$ are convex functions and $a(x)$ is (weakly) increasing, then $(a\circ b)(x)$ is a convex function, as shown here, too. It follows that any log-convex function is also convex, being the exponential of a convex function. In general nothing can be said about the composition of a concave function (like $\log$) with a convex function. If we take $f_1(x)=x^2$, $f_2(x)=e^{x^2}$, $f_3(x)=x e^{x\sqrt{x}}$, then over $\mathbb{R}^+$ we have that $f_1,f_2,f_3$ are convex, but $\log f_1$ is concave, $\log f_2$ is convex, and $\log f_3$ is neither convex nor concave.
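The defining inequality can be spot-checked numerically without any differentiability assumptions (random sampling; an illustration of the statement, not a proof, and the sample functions are my own choices):

```python
import math, random

random.seed(0)

def midpoint_convex_on(g, xs, trials=2000, tol=1e-9):
    """Sample the convexity inequality g(l*x+(1-l)*y) <= l*g(x)+(1-l)*g(y)."""
    for _ in range(trials):
        x, y = random.choice(xs), random.choice(xs)
        lam = random.random()
        if g(lam*x + (1 - lam)*y) > lam*g(x) + (1 - lam)*g(y) + tol:
            return False
    return True

pts = [i/10 for i in range(-30, 31)]
f = lambda t: abs(t)            # convex but not twice differentiable at 0
g = lambda t: math.exp(f(t))    # exp of a convex function: should pass
h = lambda t: -t*t              # concave: should fail the sampler
```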
The governing equations are listed on page 4 of my notes, linked here. It's a reproduction of another paper, which solves the equations with COMSOL. The problems arise when I want to solve for consistent initial conditions of the algebraic components, $\phi_1$ and $\phi_2$. They involve a kinetic reaction term for five reactants \begin{equation} a \sum_j i_j \end{equation} \begin{equation} i_j = i_{o,j}\biggl[ \prod_i \biggl(\frac{C_i}{C_{i,o}}\biggr)^{p_{i,j}}\exp\biggl(\frac{\alpha_{aj} F}{RT}(\eta-U_j)\biggr) - \prod_i \biggl(\frac{C_i}{C_{i,o}}\biggr)^{q_{i,j}}\exp\biggl(-\frac{\alpha_{cj} F}{RT}(\eta-U_j)\biggr) \biggr] \end{equation} where $\eta=\phi_1-\phi_2$. A first try at these two algebraic equations with MINPACK gives an $l^2$ norm of the residual around $10^{-2}$ for the one-dimensional case of size 500. Then I recast the equations so that they become one equation in the variable $\eta$ (not two coupled equations); these are listed here on page 4. Remarkably, the $l^2$ norm drops to $10^{-5}$. Nevertheless, when I integrate over time they fail on the very first step. It seems that when the equations are all coupled together with this kinetic term, the convergence of Newton's method fails (I've only tried the solvers DASKR (a DASSL variant) and RADAU5). I'm using the method of lines, and the space discretization is second order; the Neumann boundary conditions are all coupled back into the governing equations with ghost points. The Jacobian matrices are generated by Tapenade. The problem is very close to this one. Furthermore, I think (with my little numerical knowledge) that an equation of this kind \begin{equation} \sigma \nabla^2 \phi_1 = a\sum_j i_j \end{equation} resembles a singularly perturbed equation, for the coefficient $\sigma/a$ can be of order $10^{-10}$ on the sulfur cathode ($a$ is the surface area of the pore walls per unit volume of the total electrode). In the output data there is a one-point drop at the boundary relative to a comparatively smooth inner domain. 
I'm thinking a pseudo-spectral method might solve the problem? Another question: is it perfectly fine to impose boundary conditions (specifically of Neumann type) on a PDE as discretized algebraic equations, without putting them back into the governing equation with ghost points? I feel like the boundary point won't satisfy the governing equation this way. And when implementing in an IDE or DAE solver I need to explicitly specify that they are algebraic terms. One last question: do the initial conditions need to satisfy the boundary conditions? I think in most systems they don't. Otherwise I have to let the applied current be zero first and then at $t>0$ crank it up to some value, creating a discontinuity at the first step. I've found one paper describing this discontinuous initial condition, DAEs not ODEs, page 371, but I still don't know the technique for solving them. I've been working this whole summer vacation from scratch, and this is my very first time solving a real (non-toy) numerical problem. Any suggestion would be greatly appreciated. EDIT: Quad precision without preconditioning solves my first problem regarding the ill-conditioned matrix. The third problem (inconsistent initial and boundary conditions) is also solved, by making the initial guesses for the algebraic variables sufficiently close to their values when the source term jumps. But the problem is that the quad precision implementation is more than 10x slower than 8-byte double precision. Will it help if I reorder the Jacobian matrix so that it is banded (currently the linear solver is LAPACK with a dense matrix)? Or do I just need to learn PETSc and make use of the parallel features and sparse solvers? Also, how fine should the grid be so that the solution can be considered a fine-grid solution (and hence usable for error estimation)? The second problem above in the main text still puzzles me, though. EDIT II: Reaction current $\nabla \cdot i_2$ figure from a review paper on porous electrodes. 
The author defines a dimensionless current density $\delta = \frac{\alpha_a FIL}{RT}\left(\frac{1}{\kappa}+\frac{1}{\sigma}\right)$ which monitors how non-uniform the current distribution can be. When $\delta=100$, the current at the boundary is already very steep. The system I am studying has an even higher value of this parameter, though. EDIT.III I solved the discontinuity problem regarding the Poisson equation. The dimensionless parameter above contains a length $L$. I had put $1$ in my code, which is non-physical, since in consistent units it means a 1 m electrode length. After lowering it to around $10^{-5}$, the source term becomes smooth. Thanks to Geoff's a priori estimate, which made me look more closely at the physical quantities.
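On the banded-Jacobian question in the first EDIT: for a 1-D method-of-lines discretization with a second-order stencil, the Jacobian is essentially banded (tridiagonal per field), and a banded solve is $O(n)$ instead of the $O(n^3)$ dense LU currently used, with $O(n)$ storage instead of $O(n^2)$. A pure-Python sketch of the Thomas algorithm on a made-up second-difference system:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d) in O(n) operations.
    A dense LU of the same matrix costs O(n^3) time, O(n^2) storage."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Second-difference (Laplacian stencil) system of size 5 as a
# stand-in for one block of the method-of-lines Jacobian:
n = 5
a = [0.0] + [1.0] * (n - 1)      # sub-diagonal (a[0] unused)
b = [-2.0] * n                   # main diagonal
c = [1.0] * (n - 1) + [0.0]      # super-diagonal (c[-1] unused)
x = thomas(a, b, c, [1.0] * n)
```

LAPACK's banded routines (the `gb` family) implement the same idea with pivoting; DASKR and RADAU5 both accept banded-Jacobian options, so reordering the unknowns so that the coupling stays within a narrow band is usually worth doing before reaching for quad precision or PETSc.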
I'm trying to solve this advection-diffusion equation (ADE): $$\frac{\partial \phi}{\partial t} + \nabla \cdot (-D \nabla \phi + \mathbf{u} \phi) = 0$$ This ADE is coupled to a Navier-Stokes (NS) solver, from which I get the velocity field ($\mathbf{u}$). My question is about the ADE boundary conditions on the inflow and outflow planes. Because the flow is directed from inflow to outflow, I impose a zero-concentration ($\phi = 0$) boundary condition on the inlet plane, since my source of species is located far from the inflow boundary. I also assume that the dominant transport mechanism at the outlet plane is convection, so that the normal diffusive flux vanishes there ($-D \frac{\partial \phi}{\partial \vec{n}} = 0$). To what extent are these assumptions valid, and can they be justified physically? I should say that the outflow boundary condition has been verified by comparing the simulation results with the analytical solution of the Graetz problem. Any reference, suggestion, or idea is appreciated.
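A minimal 1-D sketch of the two boundary conditions described above ($\phi = 0$ at the inlet, zero diffusive flux at the outlet); the grid, $u$, $D$, $\Delta t$ and the initial blob are all illustrative. A species blob placed away from the inlet advects cleanly out through the outlet without reflecting back:

```python
# 1-D finite-difference sketch: Dirichlet phi = 0 at the inlet,
# zero normal diffusive flux at the outlet, first-order upwinding
# for advection. All parameters are illustrative.
n, dx, u, D, dt = 100, 0.01, 1.0, 1e-4, 0.004
phi = [0.0] * n
for i in range(40, 60):          # species blob far from the inlet
    phi[i] = 1.0

def step(phi):
    new = phi[:]
    for i in range(n):
        left = phi[i - 1] if i > 0 else 0.0          # inlet: phi = 0
        right = phi[i + 1] if i < n - 1 else phi[i]  # outlet: dphi/dn = 0
        adv = u * (phi[i] - left) / dx               # upwind advection
        dif = D * (right - 2.0 * phi[i] + left) / dx ** 2
        new[i] = phi[i] + dt * (dif - adv)
    return new

for _ in range(300):
    phi = step(phi)
# By t = 1.2 the blob has left the domain through the outlet.
```

The zero-gradient outlet treatment is exactly the "convection-dominated outflow" assumption in discrete form; it is well justified when the cell Péclet number at the outlet is large, which is also why the Graetz comparison works.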
Nadeem Ur Rehman Articles written in Proceedings – Mathematical Sciences Volume 124 Issue 4 November 2014 pp 497-500 The main purpose of this paper is to prove the following result: Let $n > 1$ be a fixed integer, let $R$ be an $n!$-torsion-free semiprime ring, and let $f : R \to R$ be an additive mapping satisfying the relation $[f (x), x]_{n} = [[\ldots [[f(x),x],x],\ldots], x] = 0$ for all $x \in R$. Then $[f(x), x] = 0$ holds for all $x \in R$. Since any semisimple Banach algebra (for example, a $C^{\ast}$-algebra) is semiprime, this purely algebraic result might be of some interest from the functional analysis point of view. Volume 126 Issue 3 August 2016 pp 389-398 Research Article Let $R$ be a prime ring of characteristic different from 2 and $m$ a fixed positive integer. If $R$ admits a generalized derivation associated with a nonzero derivation $d$ such that $[F(x), d(y)]_m = [x, y]$ for all $x$, $y$ in some appropriate subset of $R$, then $R$ is commutative. Moreover, we also examine the case where $R$ is a semiprime ring. Finally, we apply the above result to Banach algebras, and we obtain a non-commutative version of the Singer–Wermer theorem.
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP) [1]. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Search for the $^{73}\mathrm{Ga}$ ground-state doublet splitting in the $\beta$ decay of $^{73}\mathrm{Zn}$ / Vedia, V (UCM, Madrid, Dept. Phys.) ; Paziy, V (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Swierk) ; Walters, W B (Maryland U., Dept. Chem.) ; Aprahamian, A (Notre Dame U.) ; Bernards, C (Cologne U. ; Yale U. (main)) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Bucher, B (Notre Dame U. ; LLNL, Livermore) ; Chiara, C J (Maryland U., Dept. Chem. ; Argonne, PHY) et al. The existence of two close-lying nuclear states in $^{73}$Ga has recently been experimentally determined: a 1/2$^−$ spin-parity for the ground state was measured in a laser spectroscopy experiment, while a J$^{\pi} = 3/2^−$ level was observed in transfer reactions. This scenario is supported by Coulomb excitation studies, which set a limit for the energy splitting of 0.8 keV. [...] 2017 - 13 p. - Published in : Phys. Rev. C 96 (2017) 034311
Search for shape-coexisting 0$^+$ states in $^{66}$Ni from lifetime measurements / Olaizola, B (UCM, Madrid, Dept. Phys.) ; Fraile, L M (UCM, Madrid, Dept. Phys.) ; Mach, H (UCM, Madrid, Dept. Phys. ; NCBJ, Warsaw) ; Poves, A (Madrid, Autonoma U.) ; Nowacki, F (Strasbourg, IPHC) ; Aprahamian, A (Notre Dame U.) ; Briz, J A (Madrid, Inst. Estructura Materia) ; Cal-González, J (UCM, Madrid, Dept. Phys.) ; Ghiţa, D (Bucharest, IFIN-HH) ; Köster, U (Laue-Langevin Inst.) et al. The lifetime of the 0$_3^+$ state in $^{66}$Ni, two neutrons below the $N=40$ subshell gap, has been measured. The transition $B(E2;0_3^+ \rightarrow 2_1^+)$ is one of the most hindered E2 transitions in the Ni isotopic chain and it implies that, unlike $^{68}$Ni, there is a spherical structure at low excitation energy. [...] 2017 - 6 p. - Published in : Phys. Rev. C 95 (2017) 061303
Laser spectroscopy of neutron-rich tin isotopes: A discontinuity in charge radii across the $N=82$ shell closure / Gorges, C (Darmstadt, Tech. Hochsch.) ; Rodríguez, L V (Orsay, IPN) ; Balabanski, D L (Bucharest, IFIN-HH) ; Bissell, M L (Manchester U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Garcia Ruiz, R F (Leuven U. ; CERN ; Manchester U.) ; Georgiev, G (Orsay, IPN) ; Gins, W (Leuven U.) ; Heylen, H (Heidelberg, Max Planck Inst. ; CERN) et al. The change in mean-square nuclear charge radii $\delta \left \langle r^{2} \right \rangle$ along the even-A tin isotopic chain $^{108-134}$Sn has been investigated by means of collinear laser spectroscopy at ISOLDE/CERN using the atomic transitions $5p^2\,^1S_0 \rightarrow 5p6s\,^1P_1$ and $5p^2\,^3P_0 \rightarrow 5p6s\,^3P_1$. With the determination of the charge radius of $^{134}$Sn and corrected values for some of the neutron-rich isotopes, the evolution of the charge radii across the $N=82$ shell closure is established. [...] 2019 - 7 p. - Published in : Phys. Rev. Lett. 122 (2019) 192502
Radioactive boron beams produced by isotope online mass separation at CERN-ISOLDE / Ballof, J (CERN ; Mainz U., Inst. Kernchem.) ; Seiffert, C (CERN ; Darmstadt, Tech. U.) ; Crepieux, B (CERN) ; Düllmann, Ch E (Mainz U., Inst. Kernchem. ; Darmstadt, GSI ; Helmholtz Inst., Mainz) ; Delonca, M (CERN) ; Gai, M (Connecticut U. LNS Avery Point Groton) ; Gottberg, A (CERN) ; Kröll, T (Darmstadt, Tech. U.) ; Lica, R (CERN ; Bucharest, IFIN-HH) ; Madurga Flores, M (CERN) et al. We report on the development and characterization of the first radioactive boron beams produced by the isotope mass separation online (ISOL) technique at CERN-ISOLDE.
Despite the long history of the ISOL technique, which exploits thick targets, boron beams have up to now not been available. [...] 2019 - 11 p. - Published in : Eur. Phys. J. A 55 (2019) 65
Inverse odd-even staggering in nuclear charge radii and possible octupole collectivity in $^{217,218,219}\mathrm{At}$ revealed by in-source laser spectroscopy / Barzakh, A E (St. Petersburg, INP) ; Cubiss, J G (York U., England) ; Andreyev, A N (York U., England ; JAEA, Ibaraki ; CERN) ; Seliverstov, M D (St. Petersburg, INP ; York U., England) ; Andel, B (Comenius U.) ; Antalic, S (Comenius U.) ; Ascher, P (Heidelberg, Max Planck Inst.) ; Atanasov, D (Heidelberg, Max Planck Inst.) ; Beck, D (Darmstadt, GSI) ; Bieroń, J (Jagiellonian U.) et al. Hyperfine-structure parameters and isotope shifts for the 795-nm atomic transitions in $^{217,218,219}$At have been measured at CERN-ISOLDE, using the in-source resonance-ionization spectroscopy technique. Magnetic dipole and electric quadrupole moments, and changes in the nuclear mean-square charge radii, have been deduced. [...] 2019 - 9 p. - Published in : Phys. Rev. C 99 (2019) 054317
Investigation of the $\Delta n = 0$ selection rule in Gamow-Teller transitions: The $\beta$-decay of $^{207}$Hg / Berry, T A (Surrey U.) ; Podolyák, Zs (Surrey U.) ; Carroll, R J (Surrey U.) ; Lică, R (CERN ; Bucharest, IFIN-HH) ; Grawe, H ; Timofeyuk, N K (Surrey U.) ; Alexander, T (Surrey U.) ; Andreyev, A N (York U., England) ; Ansari, S (Cologne U.) ; Borge, M J G (CERN ; Madrid, Inst. Estructura Materia) et al. Gamow-Teller $\beta$ decay is forbidden if the number of nodes in the radial wave functions of the initial and final states is different.
This $\Delta n=0$ requirement plays a major role in the $\beta$ decay of heavy neutron-rich nuclei, affecting the nucleosynthesis through the increased half-lives of nuclei on the astrophysical $r$-process pathway below both $Z=50$ (for $N>82$) and $Z=82$ (for $N>126$). [...] 2019 - 5 p. - Published in : Phys. Lett. B 793 (2019) 271-275
Precision measurements of the charge radii of potassium isotopes / Koszorús, Á (KU Leuven, Dept. Phys. Astron.) ; Yang, X F (KU Leuven, Dept. Phys. Astron. ; Peking U., SKLNPT) ; Billowes, J (Manchester U.) ; Binnersley, C L (Manchester U.) ; Bissell, M L (Manchester U.) ; Cocolios, T E (KU Leuven, Dept. Phys. Astron.) ; Farooq-Smith, G J (KU Leuven, Dept. Phys. Astron.) ; de Groote, R P (KU Leuven, Dept. Phys. Astron. ; Jyvaskyla U.) ; Flanagan, K T (Manchester U.) ; Franchoo, S (Orsay, IPN) et al. Precision nuclear charge radii measurements in the light-mass region are essential for understanding the evolution of nuclear structure, but their measurement represents a great challenge for experimental techniques. At the Collinear Resonance Ionization Spectroscopy (CRIS) setup at ISOLDE-CERN, a laser frequency calibration and monitoring system was installed and commissioned through the hyperfine spectra measurement of $^{38–47}$K. [...] 2019 - 11 p. - Published in : Phys. Rev. C 100 (2019) 034304
Evaluation of high-precision atomic masses of A ∼ 50-80 and rare-earth nuclides measured with ISOLTRAP / Huang, W J (CSNSM, Orsay ; Heidelberg, Max Planck Inst.) ; Atanasov, D (CERN) ; Audi, G (CSNSM, Orsay) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cakirli, R B (Istanbul U.) ; Herlert, A (FAIR, Darmstadt) ; Kowalska, M (CERN) ; Kreim, S (Heidelberg, Max Planck Inst. ; CERN) ; Litvinov, Yu A (Darmstadt, GSI) ; Lunney, D (CSNSM, Orsay) et al.
High-precision mass measurements of stable and beta-decaying nuclides $^{52-57}$Cr, $^{55}$Mn, $^{56,59}$Fe, $^{59}$Co, $^{75, 77-79}$Ga, and the lanthanide nuclides $^{140}$Ce, $^{140}$Nd, $^{160}$Yb, $^{168}$Lu, $^{178}$Yb have been performed with the Penning-trap mass spectrometer ISOLTRAP at ISOLDE/CERN. The new data are entered into the Atomic Mass Evaluation and improve the accuracy of masses along the valley of stability, strengthening the so-called backbone. [...] 2019 - 9 p. - Published in : Eur. Phys. J. A 55 (2019) 96
Nuclear charge radii of $^{62−80}$Zn and their dependence on cross-shell proton excitations / Xie, L (Manchester U.) ; Yang, X F (Peking U., SKLNPT ; Leuven U.) ; Wraith, C (Liverpool U.) ; Babcock, C (Liverpool U.) ; Bieroń, J (Jagiellonian U.) ; Billowes, J (Manchester U.) ; Bissell, M L (Manchester U. ; Leuven U.) ; Blaum, K (Heidelberg, Max Planck Inst.) ; Cheal, B (Liverpool U.) ; Filippin, L (U. Brussels (main)) et al. Nuclear charge radii of $^{62−80}$Zn have been determined using collinear laser spectroscopy of bunched ion beams at CERN-ISOLDE. The subtle variations of observed charge radii, both within one isotope and along the full range of neutron numbers, are found to be well described in terms of the proton excitations across the $Z=28$ shell gap, as predicted by large-scale shell model calculations. [...] 2019 - 5 p. - Published in : Phys. Lett. B 797 (2019) 134805
Electromagnetic properties of low-lying states in neutron-deficient Hg isotopes: Coulomb excitation of $^{182}$Hg, $^{184}$Hg, $^{186}$Hg and $^{188}$Hg / Wrzosek-Lipska, K (Warsaw U., Heavy Ion Lab ; Leuven U.) ; Rezynkina, K (Leuven U. ; U. Strasbourg) ; Bree, N (Leuven U.) ; Zielińska, M (Warsaw U., Heavy Ion Lab ; IRFU, Saclay) ; Gaffney, L P (Liverpool U. ; Leuven U. ; CERN ; West Scotland U.)
; Petts, A (Liverpool U.) ; Andreyev, A (Leuven U. ; York U., England) ; Bastin, B (Leuven U. ; GANIL) ; Bender, M (Lyon, IPN) ; Blazhev, A (Cologne U.) et al. The neutron-deficient mercury isotopes serve as a classical example of shape coexistence, whereby at low energy near-degenerate nuclear states characterized by different shapes appear. The electromagnetic structure of even-mass $^{182-188}$Hg isotopes was studied using safe-energy Coulomb excitation of neutron-deficient mercury beams delivered by the REX-ISOLDE facility at CERN. [...] 2019 - 23 p. - Published in : Eur. Phys. J. A 55 (2019) 130
The Challenge Write a program or function that takes no input and outputs a vector of length \$1\$ in a theoretically uniform random direction. This is equivalent to a random point on the sphere described by $$x^2+y^2+z^2=1$$ resulting in a distribution like the one pictured. Output Three floats from a theoretically uniform random distribution for which the equation \$x^2+y^2+z^2=1\$ holds true to precision limits. Challenge remarks The random distribution needs to be theoretically uniform. That is, if the pseudo-random number generator were to be replaced with a true RNG over the real numbers, it would result in a uniform random distribution of points on the sphere. Generating three random numbers from a uniform distribution and normalizing them is invalid: there will be a bias towards the corners of the three-dimensional space. Similarly, generating two random numbers from a uniform distribution and using them as spherical coordinates is invalid: there will be a bias towards the poles of the sphere. Proper uniformity can be achieved by algorithms including but not limited to: Generate three random numbers \$x\$, \$y\$ and \$z\$ from a normal (Gaussian) distribution around \$0\$ and normalize them. Generate three random numbers \$x\$, \$y\$ and \$z\$ from a uniform distribution in the range \$(-1,1)\$. Calculate the length of the vector by \$l=\sqrt{x^2+y^2+z^2}\$. Then, if \$l>1\$, reject the vector and generate a new set of numbers. Else, if \$l \leq 1\$, normalize the vector and return the result.
Generate two random numbers \$i\$ and \$j\$ from a uniform distribution in the range \$(0,1)\$ and convert them to spherical coordinates like so: \begin{align}\theta &= 2 \times \pi \times i\\\phi &= \cos^{-1}(2\times j -1)\end{align} so that \$x\$, \$y\$ and \$z\$ can be calculated by \begin{align}x &= \cos(\theta) \times \sin(\phi)\\y &= \sin(\theta) \times \sin(\phi)\\z &= \cos(\phi)\end{align} Generate three random numbers \$x\$, \$y\$ and \$z\$ from a Provide in your answer a brief description of the algorithm that you are using. Read more on sphere point picking on MathWorld. Output examples [ 0.72422852 -0.58643067 0.36275628] [-0.79158628 -0.17595886 0.58517488] [-0.16428481 -0.90804027 0.38532243] [ 0.61238768 0.75123833 -0.24621596] [-0.81111161 -0.46269121 0.35779156]
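A sketch of the first listed algorithm (three Gaussians, normalized) in Python; the rotational symmetry of the standard normal in 3-D is what makes the normalized direction uniform on the sphere:

```python
import math
import random

def random_unit_vector():
    """Uniform random direction: three standard normals, normalized.
    The 3-D standard normal density depends only on the radius, so
    projecting a sample onto the unit sphere gives a uniform point."""
    while True:
        x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
        l = math.sqrt(x * x + y * y + z * z)
        if l > 1e-12:        # reject the (probability ~0) tiny vectors
            return (x / l, y / l, z / l)
```

Unlike the rejection method, this uses a fixed number of random draws per output vector.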
The Annals of Statistics Ann. Statist. Volume 45, Number 5 (2017), 2151-2189. Phase transitions for high dimensional clustering and related problems Abstract Consider a two-class clustering problem where we observe $X_{i}=\ell_{i}\mu+Z_{i}$, $Z_{i}\stackrel{\mathit{i.i.d.}}{\sim}N(0,I_{p})$, $1\leq i\leq n$. The feature vector $\mu\in R^{p}$ is unknown but is presumably sparse. The class labels $\ell_{i}\in\{-1,1\}$ are also unknown and the main interest is to estimate them. We are interested in the statistical limits. In the two-dimensional phase space calibrating the rarity and strengths of useful features, we find the precise demarcation for the Region of Impossibility and Region of Possibility. In the former, useful features are too rare/weak for successful clustering. In the latter, useful features are strong enough to allow successful clustering. The results are extended to the case of colored noise using Le Cam’s idea on comparison of experiments. We also extend the study on statistical limits for clustering to that for signal recovery and that for global testing. We compare the statistical limits for three problems and expose some interesting insight. We propose classical PCA and Important Features PCA (IF-PCA) for clustering. For a threshold $t>0$, IF-PCA clusters by applying classical PCA to all columns of $X$ with an $L^{2}$-norm larger than $t$. We also propose two aggregation methods. For any parameter in the Region of Possibility, some of these methods yield successful clustering. We discover a phase transition for IF-PCA. For any threshold $t>0$, let $\xi^{(t)}$ be the first left singular vector of the post-selection data matrix. The phase space partitions into two different regions. In one region, there is a $t$ such that $\cos(\xi^{(t)},\ell)\rightarrow 1$ and IF-PCA yields successful clustering. In the other, $\cos(\xi^{(t)},\ell)\leq c_{0}<1$ for all $t>0$. 
Our results require delicate analysis, especially on post-selection random matrix theory and on lower bound arguments. Article information Source Ann. Statist., Volume 45, Number 5 (2017), 2151-2189. Dates Received: March 2015 Revised: June 2016 First available in Project Euclid: 31 October 2017 Permanent link to this document https://projecteuclid.org/euclid.aos/1509436831 Digital Object Identifier doi:10.1214/16-AOS1522 Mathematical Reviews number (MathSciNet) MR3718165 Zentralblatt MATH identifier 06821122 Subjects Primary: 62H30: Classification and discrimination; cluster analysis [See also 68T10, 91C20] 62H25: Factor analysis and principal components; correspondence analysis Secondary: 62G05: Estimation 62G10: Hypothesis testing Citation Jin, Jiashun; Ke, Zheng Tracy; Wang, Wanjie. Phase transitions for high dimensional clustering and related problems. Ann. Statist. 45 (2017), no. 5, 2151--2189. doi:10.1214/16-AOS1522. https://projecteuclid.org/euclid.aos/1509436831 Supplemental materials Supplementary Material for “Phase transitions for high dimensional clustering and related problems”. Owing to space constraints, some technical proofs and discussion are relegated to a supplementary document [27]. It contains proofs of Lemmas 2.1–2.4 and 3.1–3.3, and discusses an extension of the ARW model.
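A toy sketch of the IF-PCA step described in the abstract: threshold columns by $L^2$ norm, then cluster by the sign of the first left singular vector of the post-selection data matrix. The problem sizes, signal strength, and threshold below are illustrative, not the paper's calibration:

```python
import math
import random

# Two-class data X[i] = labels[i]*mu + noise with a sparse, strong mu.
random.seed(1)
n, p, k = 60, 50, 5
mu = [3.0] * k + [0.0] * (p - k)
labels = [1 if i < n // 2 else -1 for i in range(n)]
X = [[l * mu[j] + random.gauss(0.0, 1.0) for j in range(p)] for l in labels]

# IF-PCA feature selection: keep columns whose L2 norm exceeds t
# (t here is an ad-hoc choice, not the paper's data-driven threshold).
t = 1.3 * math.sqrt(n)
keep = [j for j in range(p)
        if math.sqrt(sum(X[i][j] ** 2 for i in range(n))) > t]
Xs = [[X[i][j] for j in keep] for i in range(n)]

# First left singular vector of Xs via power iteration on Xs Xs^T.
v = [random.gauss(0.0, 1.0) for _ in range(n)]
for _ in range(100):
    w = [sum(Xs[i][j] * v[i] for i in range(n)) for j in range(len(keep))]
    v = [sum(Xs[i][j] * w[j] for j in range(len(keep))) for i in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]

# Cluster by sign; the overall sign of v is arbitrary.
pred = [1 if x > 0 else -1 for x in v]
agree = sum(a == b for a, b in zip(pred, labels))
accuracy = max(agree, n - agree) / n
```

With the strong signal used here the parameters sit deep in the Region of Possibility, so the sign of $\xi^{(t)}$ recovers the labels almost perfectly.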
I am trying to derive a Galerkin-type weak formulation for the Stokes equations. I'm having a bit of a problem reconciling the notation in the integration by parts. I know that the answer I'm looking for is: $$ \int_\Omega \Delta\mathbf{u}\cdot\mathbf{v}\,d\Omega = \int_\Gamma (\mathbf{n}\cdot\nabla\mathbf{u})\cdot\mathbf{v}\,d\Gamma - \int_\Omega \nabla\mathbf{u}:\nabla\mathbf{v}\,d\Omega $$ When I integrate by parts myself, starting from the scalar identity $$ \int_\Omega \nabla u\cdot\mathbf{v}\,d\Omega = \int_\Gamma u(\mathbf{v}\cdot\mathbf{n})\,d\Gamma - \int_\Omega u \,\nabla\cdot\mathbf{v}\,d\Omega, $$ I get $$ \int_\Omega\Delta\mathbf{u}\cdot\mathbf{v}\,d\Omega = \int_\Omega (\nabla\cdot (\nabla\mathbf{u}))\cdot\mathbf{v}\,d\Omega = \int_\Gamma \nabla\mathbf{u}\, (\mathbf{v}\cdot\mathbf{n})\,d\Gamma - \int_\Omega\nabla\mathbf{u}\,\nabla\cdot\mathbf{v}\,d\Omega $$ I assume I should be using a dot product for the vector/matrix multiplication, but even so I can't reconcile my answer with what I know the correct answer to be. For instance, the boundary integral should be a scalar, but in my answer $\nabla\mathbf{u}$ is a matrix and $\mathbf{v}\cdot\mathbf{n}$ is a scalar, so I fail to see how their product could be a scalar. I did notice that the formula I used applies to scalar $u$'s. Is there another identity I should be using when $u$ is a vector?
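One way to reconcile the notation (a sketch, assuming the convention $(\nabla\mathbf{u})$ has rows $\nabla u_i$): apply the scalar integration-by-parts identity to each component $u_i$, with $v_i$ as the scalar and $\nabla u_i$ as the vector field, and then sum over $i$:

```latex
% Scalar divergence theorem applied componentwise, for each i:
\int_\Omega \Delta u_i \, v_i \, d\Omega
  = \int_\Omega \big(\nabla\cdot(\nabla u_i)\big)\, v_i \, d\Omega
  = \int_\Gamma (\mathbf{n}\cdot\nabla u_i)\, v_i \, d\Gamma
  - \int_\Omega \nabla u_i \cdot \nabla v_i \, d\Omega .
% Summing over i and using the Frobenius product
%   \nabla\mathbf{u} : \nabla\mathbf{v} = \sum_i \nabla u_i \cdot \nabla v_i :
\int_\Omega \Delta\mathbf{u}\cdot\mathbf{v}\, d\Omega
  = \int_\Gamma (\mathbf{n}\cdot\nabla\mathbf{u})\cdot\mathbf{v}\, d\Gamma
  - \int_\Omega \nabla\mathbf{u} : \nabla\mathbf{v}\, d\Omega .
```

Each boundary term $(\mathbf{n}\cdot\nabla u_i)\,v_i$ is a scalar, which is why the summed boundary integrand is a scalar even though $\nabla\mathbf{u}$ itself is a matrix.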
Summary: it is cheapest and most accurate to use sqrt(fma(c, c, 1)) if you have FMA, and sqrt(1+c*c) otherwise. In my testing, though, the difference is extremely marginal: of the 1065353216 32-bit floating point numbers $0\leq c\leq 1$, the first formula is better 532509 times (0.05%), and worse 382159 times (0.035%), and neither formula has error worse than 1 ulp, so the obvious sqrt(1+c*c) is good enough. It is a little bit of a fine point, and Wikipedia can sometimes be a little unreliable on this. One good way to settle these types of questions is to go to a mature library that already implements hypot, such as openlibm (https://github.com/JuliaLang/openlibm). Quoting the source code comment that explains it (https://github.com/JuliaLang/openlibm/blob/master/src/e_hypot.c): /* __ieee754_hypot(x,y) * * Method : * If (assume round-to-nearest) z=x*x+y*y * has error less than sqrt(2)/2 ulp, than * sqrt(z) has error less than 1 ulp (exercise). * * So, compute sqrt(x*x+y*y) with some care as * follows to get the error below 1 ulp: * * Assume x>y>0; * (if possible, set rounding to round-to-nearest) * 1. if x > 2y use * x1*x1+(y*y+(x2*(x+x1))) for x*x+y*y * where x1 = x with lower 32 bits cleared, x2 = x-x1; else * 2. if x <= 2y use * t1*y1+((x-y)*(x-y)+(t1*y2+t2*y)) * where t1 = 2x with lower 32 bits cleared, t2 = 2x-t1, * y1= y with lower 32 bits chopped, y2 = y-y1. * * NOTE: scaling may be necessary if some argument is too * large or too tiny * * Special cases: * hypot(x,y) is INF if x or y is +INF or -INF; else * hypot(x,y) is NAN if x or y is NAN. * * Accuracy: * hypot(x,y) returns sqrt(x^2+y^2) with error less * than 1 ulps (units in the last place) */ So if you can compute $1+c^2$ sufficiently accurately (and in your case with $|c|\leq 1$ you will not have over-/underflow), the final result will be accurate. 
If you have a modern CPU with a fused multiply-add (FMA) instruction, this is trivial: most languages' standard math libraries expose fma, so you can use sqrt(fma(c, c, 1)) (cost: just one flop plus the cost of a square root), which is even marginally cheaper than what hypot does. The error in fma(c, c, 1) is at most $\frac12$ ulp with round-to-nearest, so you'll get the same accuracy as hypot, with error $<1$ ulp; this is the best option. Regarding the formulas that hypot actually uses, I don't really understand what they're doing. It chooses between $1+c^2$ and $2c+(c-1)^2$ depending on whether $c\leq\frac12$. It almost looks like a special case of double-double arithmetic, where you compute a product accurately by writing it as a sum of two numbers, but I'm not sure. I imagine that if they could use fma, the formulas would be a lot simpler. In my testing they're at most as accurate as the plain $1+c^2$. What would happen if you tried $1+c^2$ directly? The relative error of evaluating $\hat w = \mathrm{fl}(1+\mathrm{fl}(c^2))$ is at most $\epsilon_1\leq \frac34\,\mathrm{ulp}$, which gives the error in $\mathrm{fl}(\sqrt{\hat w})$ as $\sqrt{1+\epsilon_1}-1+\epsilon_2 \leq \tfrac78\,\mathrm{ulp}$, which is less than a single ulp, so it's accurate too, and as good as hypot.
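A quick empirical spot-check of the closing claim, here in double precision rather than the 32-bit floats tested above (Python only gained math.fma recently, so this compares the plain formula against hypot, which is itself accurate to under 1 ulp):

```python
import math
import random

# Measure the disagreement, in ulps, between the naive sqrt(1 + c*c)
# and math.hypot(1, c) over random c in [0, 1]. Since each result is
# within 1 ulp of the exact value, they can differ by at most ~2 ulps.
random.seed(0)
worst = 0.0
for _ in range(100000):
    c = random.random()
    naive = math.sqrt(1.0 + c * c)
    ref = math.hypot(1.0, c)
    worst = max(worst, abs(naive - ref) / math.ulp(ref))
```

In line with the analysis, the observed worst case stays within the combined error budget of the two formulas.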
I have before me a copy of "The Indian Mathematician Ramanujan", by G. H. Hardy (not actually related to me, as far as I know), which appeared in volume 44, number 3 (March 1937) of The American Mathematical Monthly, on pages 137‒155. One of the theorems of Ramanujan stated there is this: If $\displaystyle F(k) = 1 + \left( \frac 1 2 \right)^2 k + \left( \frac{1\cdot3}{2\cdot4} \right)^2 k^2 + \cdots$ and $F(1-k) = \sqrt{210} F(k)$, then \begin{align} k = {} & (\sqrt 2 - 1 )^4 (2-\sqrt 3)^2(\sqrt7 - \sqrt 6)^4 (8-3\sqrt7)^2 \\ & \cdot (\sqrt{10} - 3)^4(4-\sqrt{15})^4(\sqrt{15}-\sqrt{14})^2 (6-\sqrt{35})^2. \tag{13} \end{align} Of this Hardy wrote: An expert on elliptic functions can see at once that $(13)$ is derived somehow from the theory of "complex multiplication", [ . . . ] What does the phrase "complex multiplication" mean in this context?
I understand that the usual way of deriving the Boltzmann distribution involves considering a small system of energy $\epsilon$ embedded in a much larger heat bath of energy $E - \epsilon$, where the total system energy is $E$. The bath has accessible microstates $\Omega_b(E - \epsilon)$, the system has microstates $\Omega_s(\epsilon)$, and the total system + bath has microstates $\Omega_t(E)$. We now state that the probability of finding the system at energy $\epsilon$ is simply \begin{equation} P(\epsilon) = \frac{\Omega_s(\epsilon)\Omega_b(E - \epsilon)}{\Omega_t(E)} \end{equation} $\textbf{Question 1}$: This is usually replaced by \begin{equation} P(\epsilon) = \frac{\Omega_b(E - \epsilon)}{\Omega_t(E)} \end{equation} Why should this be so? Why can we ignore the fact that the number of accessible microstates of the system varies with $\epsilon$? Moving on, the usual trick is to use logarithms, since the $\Omega$ terms are all very large. Thus, \begin{equation} \ln P(\epsilon) = \ln\Omega_b(E - \epsilon) -\ln \Omega_t(E) \end{equation} A Taylor expansion of the first term about $E$ gives us \begin{equation} \ln P(\epsilon) = c - \epsilon\frac{\partial\ln\Omega_b(E)}{\partial E} + O(\epsilon^2), \end{equation} where $c$ is just a constant that depends on the bath and will be fixed when we normalize $P(\epsilon)$. Ignoring the higher-order terms, we get $P(\epsilon) \propto e^{-\beta\epsilon}$, where we identify $\beta = \frac{\partial\ln\Omega_b(E)}{\partial E}$. $\textbf{Question 2}$: Why can we ignore the higher-order Taylor terms? I understand that $\epsilon$ is small compared to the heat bath energy $E$, but why am I comparing it to $E$ to get the right measure of the error in probability?
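For Question 2, one way to estimate the neglected term (a sketch; the bath heat capacity $C_b$ is introduced here, it does not appear in the text above): the quadratic Taylor term involves the derivative of $\beta$ with respect to the bath energy,

```latex
\frac{\epsilon^2}{2}\,\frac{\partial^2 \ln\Omega_b}{\partial E^2}
  = \frac{\epsilon^2}{2}\,\frac{\partial \beta}{\partial E}
  = -\frac{\epsilon^2}{2\, k_B T^2 C_b},
\qquad\text{using}\quad
\beta = \frac{1}{k_B T},\quad
\frac{\partial T}{\partial E} = \frac{1}{C_b}.
```

Since $C_b$ is extensive, this term vanishes relative to $\beta\epsilon$ as the bath grows: the real smallness condition involves the bath's heat capacity (roughly $\epsilon \ll C_b T$), not a direct comparison of $\epsilon$ with $E$, which is why comparing $\epsilon$ to $E$ alone feels unsatisfying.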
I have four partial differential equations representing mass conservation of two compressible fluid phases (marked by subscripts $p1$ and $p2$) in two different continuum media (marked by subscripts $c1$ and $c2$), coupled together by the term $T_{c1c2}$ as shown below. Continuum 1:$$\begin{align}\text{Phase 1: } \frac{\partial }{\partial t}\left(\frac{\phi_{c1}S_{p1c1}}{B_{p1c1}} \right) = \nabla\left(\lambda_{p1c1}\nabla{P_{p1c1}} \right) - \frac{T_{c1c2}}{V} - \frac{q_{p1c1}}{V}, \\\text{Phase 2: } \frac{\partial }{\partial t}\left(\frac{\phi_{c1}S_{p2c1}}{B_{p2c1}} \right) = \nabla\left(\lambda_{p2c1}\nabla{P_{p2c1}} \right) - \frac{T_{c1c2}}{V} - \frac{q_{p2c1}}{V},\end{align}$$ Continuum 2:$$\begin{align}\text{Phase 1: }\frac{\partial }{\partial t}\left(\frac{\phi_{c2}S_{p1c2}}{B_{p1c2}} \right) = \nabla\left(\lambda_{p1c2}\nabla{P_{p1c2}} \right) - \frac{T_{c1c2}}{V} - \frac{q_{p1c2}}{V}, \\\text{Phase 2: } \frac{\partial }{\partial t}\left(\frac{\phi_{c2}S_{p2c2}}{B_{p2c2}} \right) = \nabla\left(\lambda_{p2c2}\nabla{P_{p2c2}} \right) - \frac{T_{c1c2}}{V} - \frac{q_{p2c2}}{V},\end{align}$$ Additionally, there are 4 more linear equations: $$S_{p1c1} + S_{p2c1} = 1$$ $$S_{p1c2} + S_{p2c2} = 1$$ $$P_{p2c1} - P_{p1c1} = P_{cap1}$$ $$P_{p2c2} - P_{p1c2} = P_{cap2}$$ Variables: In these 4 linear equations, $P_{cap1}$ and $P_{cap2}$ are known. Except for $\phi_{c1}$, $\phi_{c2}$ and $V$, which are constants, all other variables vary with time $t$ and position $(x,y)$. Also, $$\frac{1}{B_{p1c1}} = 1+c_{p1}\left(P_{p1c1} - P^{STC}\right) \text{ and } \frac{1}{B_{p2c1}} = 1+c_{p2}\left(P_{p2c1} - P^{STC}\right)$$ where $c_{p1}$, $c_{p2}$ and $P^{STC}$ are constants. Initial condition: Dirichlet constant pressure. Boundary condition: Neumann no-flow. Objective: to solve for the saturations and pressures as functions of space and time, i.e.
$S_{p1c1}$, $S_{p2c1}$, $S_{p1c2}$, $S_{p2c2}$ and $P_{p1c1}$, $P_{p2c1}$, $P_{p1c2}$, $P_{p2c2}$. This system of equations could be discretized into a linear system and solved at each time step; however, that is a complex task and would take a lot of time. I would therefore like to use some kind of "tool" that can solve this coupled system of equations on a mesh grid. I saw an answer in another post suggesting FiPy, but the feedback from the OP was that it's way too slow. Please suggest what would be a good tool to solve this problem.
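Before committing to a tool, it may help to see how small the core of a hand-rolled scheme is for a single decoupled phase. This sketch (illustrative coefficients, not the 8-unknown coupled system, and explicit rather than the implicit stepping a production code would use) shows how the no-flow Neumann condition enters naturally as zero face fluxes in a finite-volume update:

```python
# Explicit conservative finite-volume step for one decoupled phase,
#   poro * dP/dt = d/dx (lam * dP/dx),  with no-flow boundaries.
# All coefficients are illustrative; stability needs
# lam*dt/(poro*dx^2) <= 1/2 (here it is 0.25).
n, dx, dt, lam, poro = 50, 1.0, 0.05, 1.0, 0.2
P = [1.0] * n
P[0] = 2.0                       # a pressure disturbance at one end

def step(P):
    # Face fluxes; the two boundary faces carry zero flux (no-flow).
    flux = [0.0] * (n + 1)
    for f in range(1, n):
        flux[f] = -lam * (P[f] - P[f - 1]) / dx
    return [P[i] - dt / (poro * dx) * (flux[i + 1] - flux[i])
            for i in range(n)]

for _ in range(400):
    P = step(P)
# Total fluid (here, sum of P) is conserved exactly by construction,
# a useful sanity check against any library solver you adopt.
```

Whatever tool you pick, checking it against a conservation invariant like this on a reduced problem is a cheap way to validate the setup.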
So the question goes: if I have a spring with spring constant $k$ and two masses attached to this spring (one on either side), what is the resonant frequency of the system in terms of $m$ and $k$? Diagram of system: [m]-////-[m] Now the real problem I'm having is deciding what forces act on the system in order to come up with my differential equation. I know that for a horizontal spring attached to a wall we can take the differential equation $$m\frac{d^2 x}{d t^2}+kx= 0$$ and then use $A\sin(\omega t+\phi)$ as a solution and say this is true when $\omega= \sqrt{\frac{k}{m}}$. But I was thinking maybe I could just use the differential equation $$m\frac{d^2 x}{d t^2}+2kx= 0$$ but I feel like that may be too simple? Is there something I'm missing? Any help would be appreciated! :) Note: None of this system is undergoing any damping.
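The $2k$ guess can be checked numerically: for equal masses the relative coordinate obeys $\mu\ddot{x} + kx = 0$ with reduced mass $\mu = m/2$, which is the same as $m\ddot{x} + 2kx = 0$, giving $\omega = \sqrt{2k/m}$. A sketch (illustrative $m$, $k$, $L$ and initial stretch) that integrates the two-mass system and measures the frequency of the spring extension:

```python
import math

# Integrate m*x1'' = k*(x2 - x1 - L), m*x2'' = -k*(x2 - x1 - L)
# and measure the oscillation frequency of the spring extension.
m, k, L = 1.0, 4.0, 1.0
x1, x2, v1, v2 = 0.0, L + 0.1, 0.0, 0.0   # slightly stretched, at rest
dt, t = 1e-4, 0.0
crossings = []
prev = x2 - x1 - L
while t < 10.0:
    a1 = k * (x2 - x1 - L) / m
    a2 = -k * (x2 - x1 - L) / m
    v1 += a1 * dt                # symplectic Euler: kick, then drift
    v2 += a2 * dt
    x1 += v1 * dt
    x2 += v2 * dt
    t += dt
    ext = x2 - x1 - L
    if prev < 0.0 <= ext:        # upward zero crossings of the extension
        crossings.append(t)
    prev = ext

period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
omega_measured = 2.0 * math.pi / period
omega_theory = math.sqrt(2.0 * k / m)
```

The measured frequency agrees with $\sqrt{2k/m}$, confirming that the "too simple" equation is in fact correct for the relative motion.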
Can you help me with this mechanics problem?

Note by Kyle Finch, 4 years, 6 months ago

@Raghav Vaidyanathan the problem is in the middle.

I have rotated the image. \(\huge\ddot\smile\) U can tell me the solution???

@Kyle Finch – A disc has an angular velocity?

@Pranjal Jain – \[\text{angular velocity}=\dfrac{\text{relative velocity perpendicular to the line joining them}}{\text{distance between them}}\] By simple geometry, \(\angle AOP=30^\circ\).

Case I: \(A\) is fixed to the ground, so the velocity of \(P\) will be \(\omega R\) at an angle of \(120^\circ\) to the line joining \(AP\); thus, \(\omega_1=\dfrac{\omega R\cos 30^\circ}{\sqrt{3}R}\).

Case II: \(A\) is fixed to the disc; in this case, the velocity of \(A\) will add to the relative velocity. In this case also, the velocity of \(A\) will be \(\omega R\) at an angle of \(60^\circ\) to the line joining \(AP\).
Thus, \(\omega_2=\dfrac{\omega R\cos 30^\circ+\omega R\cos 30^\circ}{\sqrt{3}R}=2\omega_1\Rightarrow \dfrac{\omega_1}{\omega_2}=\dfrac{1}{2}\)

@Pranjal Jain – Well indeed very nice, by the way how was ur mains exam

@Kyle Finch – Not too good. Getting just 275.

@Pranjal Jain – Was the physics section tough??

@Kyle Finch – It was NCERTish. I must not blame questions for my marks. I don't like EM.

@Pranjal Jain – So who among all of you is supposed to get the highest.

@Kyle Finch – Ronak Agrawal (on Brilliant)

@Pranjal Jain – Ok just one last query what makes a problem popular on brilliant. Can u tell ronaks score

@Kyle Finch – He's getting like 315-320. The quality of problem makes it popular!

@Pranjal Jain – No i m asking do the staff member decide or the moderators

@Pranjal Jain – Bhavya Is getting 322. I guess??

@Parth Lohomi – Yes, he does
Inverse problem of finding the coefficient of u in a parabolic equation on the basis of a nonlocal observation condition

Abstract. We consider the problem of reconstructing the coefficient c(x) multiplying u(x, t) in a parabolic equation. To find it, in addition to initial and boundary conditions, we pose a nonlocal observation condition of the form \(\int_0^T {u(x,t)}\, d\mu (t) = \chi (x)\), where the function χ(x) and the measure dµ(t) are known. We obtain sufficient conditions for the uniqueness and solvability of this problem, which have the form of easy-to-verify inequalities. We present examples of inverse problems for which the assumptions of our theorems are necessarily satisfied and an example of a problem that has a nonunique solution.

Keywords: Inverse problem, parabolic equation, monotone operator, direct problem, Sobolev embedding theorem
I have a 2D triangle which deforms with each vertex moving by some small ($\sin(x) \approx \tan(x) \approx x$) displacement vector. The displacement of any point in the triangle is linearly interpolated from the displacements at the vertices. How can I find the rotation angle of any point in the triangle? I want the rotation angle as a linear function of the vertex displacements, and I feel that it should be linear (again, for small displacements only). I tried the following approach but got bogged down in the enormous expressions that appear (hundreds or thousands of terms), and I'm not sure if it's even a correct way. Is there an easier way, or does this simplify somehow? 1) Find the deformation gradient $F$ (2x2 matrix) using the displacements and the derivatives of the interpolation functions. 2) Find the polar decomposition $F=QS$, where $Q$ is orthogonal and $S$ is symmetric. 3) Treat $Q$ as a rotation matrix and extract the rotation angle from it. I also tried more intuitive geometrical approaches, like averaging the rotation angles of all 3 vertices about the point, but they don't seem to give correct-looking results.
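For what it's worth, steps 1-3 collapse in the small-displacement limit: the displacement field is affine over the triangle, so $F$ is the same at every point, and to first order the rotation angle is just half the skew part of the displacement gradient, which is linear in the vertex displacements. A NumPy sketch of this linearized version (the triangle coordinates and test angle are made up):

```python
import numpy as np

def rotation_angle(X, u):
    """Small-rotation angle of a linearly deforming 2D triangle.
    X: (3,2) reference vertex positions; u: (3,2) vertex displacements."""
    D  = np.column_stack([X[1] - X[0], X[2] - X[0]])   # reference edge matrix
    Du = np.column_stack([u[1] - u[0], u[2] - u[0]])   # displacement of the edges
    G  = Du @ np.linalg.inv(D)                         # displacement gradient, F = I + G
    # For small displacements the symmetric part of G is strain and the
    # skew part is the rotation, so the angle is:
    return 0.5 * (G[1, 0] - G[0, 1])

# Check against a pure small rotation by theta
theta = 1e-3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
u = X @ R.T - X                                        # displacement of each vertex
print(rotation_angle(X, u))                            # ~theta
```

Since the result is a fixed linear combination of the six vertex-displacement components, this sidesteps the enormous symbolic expressions from the exact polar decomposition.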
To evaluate detection performance, we plot the miss rate $mr(c) = \frac{fn(c)}{tp(c) + fn(c)}$ against the number of false positives per image $fppi(c)=\frac{fp(c)}{\text{#img}}$ in log-log plots. $tp(c)$ is the number of true positives, $fp(c)$ is the number of false positives, and $fn(c)$ is the number of false negatives, all for a given confidence value $c$ such that only detections with a confidence value greater than or equal to $c$ are taken into account. As commonly applied in object detection evaluation, the confidence threshold $c$ is used as a control variable. By decreasing $c$, more detections are taken into account for evaluation, resulting in more possible true or false positives, and possibly fewer false negatives. We define the log-average miss rate (LAMR) as shown below, where the 9 fppi reference points are equally spaced in log space: $\DeclareMathOperator*{\argmax}{argmax}LAMR = \exp\left(\frac{1}{9}\sum\limits_f \log\left(mr\left(\argmax\limits_{fppi\left(c\right)\leq f} fppi\left(c\right)\right)\right)\right)$ For each fppi reference point the corresponding mr value is used. In the absence of a miss-rate value for a given $f$, the highest existing fppi value is used as the new reference point. This definition enables LAMR to be applied as a single detection performance indicator at image level. At each image, the set of all detections is compared to the ground-truth annotations by utilizing a greedy matching algorithm. An object is considered as detected (true positive) if the Intersection over Union (IoU) of the detection and ground-truth bounding box exceeds a pre-defined threshold. Due to the high non-rigidity of pedestrians, we follow the common choice of an IoU threshold of 0.5. Since no multiple matches are allowed for one ground-truth annotation, in the case of multiple matches the detection with the largest score is selected, whereas all other matching detections are considered false positives.
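The LAMR averaging described above can be sketched in a few lines of Python; note that the reference-point range $[10^{-2}, 10^{0}]$ fppi is an assumption here (the text does not fix it), and the fallback for missing reference points follows the rule stated above:

```python
import numpy as np

def lamr(fppi, mr):
    """Log-average miss rate over 9 log-spaced fppi reference points.
    fppi must be sorted ascending with mr aligned to it."""
    refs = np.logspace(-2, 0, 9)       # assumed reference range [1e-2, 1e0]
    samples = []
    for f in refs:
        ok = np.where(fppi <= f)[0]
        # take mr at the largest fppi not exceeding f; if the reference lies
        # below all fppi values, fall back to the first curve point
        samples.append(mr[ok[-1]] if len(ok) else mr[0])
    # geometric mean of the sampled miss rates (guard against log(0))
    return np.exp(np.mean(np.log(np.maximum(samples, 1e-12))))

fppi = np.array([0.001, 0.01, 0.1, 1.0])
mr   = np.array([0.5, 0.5, 0.5, 0.5])
print(lamr(fppi, mr))                  # a constant curve averages to 0.5
```

This is only a sketch of the averaging step; producing the $mr$/$fppi$ curves themselves requires the matching procedure described next.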
After the matching is performed, all non-matched ground-truth annotations count as false negatives and all non-matched detections as false positives. Neighboring classes and ignore regions are used during evaluation. Neighboring classes involve entities that are semantically similar, for example bicycle and moped riders. Some applications might require their precise distinction (enforce) whereas others might not (ignore). In the latter case, correct/false detections are not credited/penalized during matching. If not stated otherwise, neighboring classes are ignored in the evaluation. In addition to ignored neighboring classes, all person annotations with the tags behind glass or sitting-lying are treated as ignore regions. Further, as mentioned in Section 3.2, EuroCity Persons Dataset Publication, ignore regions are used for cases where no precise bounding box annotation is possible (either because the objects are too small or because there are too many objects in close proximity, which renders instance-based labeling infeasible). Since there is no precise information about the number or the location of objects in an ignore region, all unmatched detections which share an intersection of more than $0.5$ with these regions are not considered as false positives. Note that submissions with a provided publication link and/or code will get prioritized in the list below (COMING SOON).

| Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on |
|---|---|---|---|---|---|---|---|---|---|
| HRNet | Hongsong Wang | 0.061 | 0.138 | 0.287 | 0.183 | ImageNet | no | no | 2019-08-05 17:11:04 |
| YOLOv3_640 | HUI_Tsinghua-Daim... | 0.273 | 0.564 | 0.623 | 0.456 | | no | no | 2019-05-17 04:56:27 |
| SSD | ECP Team | 0.131 | 0.235 | 0.460 | 0.296 | ImageNet | yes | no | 2019-04-02 13:56:14 |
| R-FCN (with OHEM) | ECP Team | 0.163 | 0.245 | 0.507 | 0.330 | ImageNet | yes | no | 2019-04-01 17:10:03 |
| YOLOv3 | ECP Team | 0.097 | 0.186 | 0.401 | 0.242 | ImageNet | yes | no | 2019-04-01 17:08:05 |
| Faster R-CNN | ECP Team | 0.101 | 0.196 | 0.381 | 0.251 | ImageNet | yes | no | 2019-04-01 17:06:33 |

| Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on |
|---|---|---|---|---|---|---|---|---|---|
| HRNet | Hongsong Wang | 0.079 | 0.156 | 0.265 | 0.153 | ImageNet | no | no | 2019-08-05 17:11:04 |
| FasterRCNN with M... | Qihua Cheng | 0.150 | 0.253 | 0.653 | 0.295 | ImageNet | no | no | 2019-07-08 08:48:13 |
| Faster R-CNN | ECP Team | 0.201 | 0.359 | 0.701 | 0.358 | ImageNet | yes | no | 2019-05-02 10:10:01 |
If you have the wavefunctions you automatically have the eigenvalues, since $$-(1/2) \psi''[x] + (1/2) x^2 \psi[x] = \epsilon \psi[x] \tag{1}$$ will converge only when $\epsilon$ is an eigenenergy. If you do not have $\epsilon_n$, there are several methods available, but all are ultimately based on the observation that boundary conditions are critical in producing quantized energy eigenvalues. In the case of the harmonic oscillator, the simplest method uses the boundary condition $\psi(\pm \infty)=0$. In principle, only if the energy is exactly right can one satisfy this condition, for otherwise the solution will diverge exponentially at large $x$. When solving the Schrödinger equation analytically, the sequence is to first search for solutions that satisfy this condition and then deduce the resulting conditions on the energies. In the simplest numerical schemes the sequence is reversed: one "guesses" an energy, solves the resulting differential equation, and checks whether the resulting solution satisfies the boundary condition for sufficiently large $\vert x\vert$. There is always some roundoff error, so the solution will never be strictly $0$ for very large $x$, but practically it is enough to require the solution to remain close to $0$ over a reasonable range of sufficiently large $x$ before it blows up again. Exactly what "very large" means, and exactly what is a "reasonable range", is a matter of numerical accuracy and may depend on the specifics of the integration scheme. A Runge-Kutta integrator of order $8$ will typically do better than one of order $4$, in the sense that the solution from RK8 will remain small for longer at large $x$ than the solution from RK4.
In the case of the harmonic oscillator (or any other symmetric potential), the initial conditions on the value and the derivative at $x=0$ are easy: for the even solutions one can take $\psi(0)=1$ and $\psi'(0)=0$, while for the odd ones $\psi(0)=0$ and $\psi'(0)=1$ are appropriate. These initial conditions do not affect the accuracy of the solution. The solution is not normalized, but the normalization is unimportant for accurately determining the energy. Thus, given an initial guess $\epsilon_0$, one launches the integrator and checks whether the solution blows up for values of $x$ "not large enough". If the solution blows up, one makes another guess, say $\epsilon_1> \epsilon_0$. If the new solution is worse, one redoes the calculation with $\epsilon_1<\epsilon_0$, and proceeds this way by trial and error until a guess value is found for which the solution remains small over the desired range. This is illustrated in the two figures below, which are numerical solutions to $$-(1/2) \psi''[x] + (1/2) x^2 \psi[x] = (n + 1/2) \psi[x]$$ with initial conditions $\psi[0]=1$ and $\psi'[0]=0$. The graph on the left used a Runge-Kutta integrator of order 8, and the one on the right an RK scheme of order 4. In both graphs, the black plots are for the exact value $\epsilon=2$ (in units of $\hbar\omega$), the red plots for $\epsilon=1.95$ and the blue plots for $\epsilon=2.05$. This clearly shows how the integration is quite sensitive to the initial guess energy, how the solution "rapidly" diverges when the guess energy is wrong, and how the asymptotic tail of the solution improves with the order of the integration scheme. (You can see a hint, on the left, that even the exact solution $\epsilon=2$ produces a solution that is about to diverge beyond $\vert x\vert=7$.) Because the number of nodes (or zeroes) of each eigenfunction increases with energy, it is easy to verify that no energy value has been missed. The lowest energy solution will have no node, the first excited state one node, etc.
Thus, if a guess energy produces a valid solution with more nodes than expected, one simply tries a smaller guess. In this way, and with patience, one can find all energy eigenvalues with reasonable accuracy, especially if an accurate integrator is used. There are a variety of other, more sophisticated schemes. One is called the shooting method. Another common method is the matching method: instead of starting at the centre and checking the solution at large $x$, one starts at large $x$ and works backwards towards the centre. When this is done starting from both $+x$ and $-x$, the continuity of the solution and of its derivative is used to decide whether the guess energy is to be kept or rejected.
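The trial-and-error search described above can be automated by bisecting on the sign of $\psi$ at the far boundary, since that sign flips as the guess energy crosses an eigenvalue. A sketch for the even states of the harmonic oscillator, using a hand-rolled RK4 integrator (the integration range and step count are arbitrary choices):

```python
import numpy as np

def psi_end(E, x_max=6.0, steps=1500):
    # RK4 integration of psi'' = (x^2 - 2E) psi (units hbar = m = omega = 1),
    # from the even-state initial conditions psi(0)=1, psi'(0)=0
    h = x_max / steps
    y = np.array([1.0, 0.0])
    f = lambda x, y: np.array([y[1], (x * x - 2.0 * E) * y[0]])
    x = 0.0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y[0]          # sign flips as E crosses an even eigenvalue

def bisect_energy(E_lo, E_hi, iters=40):
    # assumes psi_end changes sign exactly once in [E_lo, E_hi]
    f_lo = psi_end(E_lo)
    for _ in range(iters):
        E_mid = 0.5 * (E_lo + E_hi)
        f_mid = psi_end(E_mid)
        if f_lo * f_mid > 0:
            E_lo, f_lo = E_mid, f_mid
        else:
            E_hi = E_mid
    return 0.5 * (E_lo + E_hi)

print(bisect_energy(0.3, 0.7))   # ground state, ~0.5 in units of hbar*omega
```

Bracketing each eigenvalue (using the node count to choose brackets) and bisecting replaces the manual guess-and-check loop.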
Hello Everybody, Instead of solving the geodesic equations for the Schwarzschild metric, in many books (nearly all books that I consulted), conserved quantities are looked at instead. So take for e.g. Carroll: he looks at the Killing equation and extracts the equation [tex] K_\mu \frac{dx^\mu}{d \lambda}= \mathrm{constant}, [/tex] and he then writes: "In addition we have another constant of the motion for geodesics", and he writes the normalization condition: [tex] \epsilon = -g_{\mu \nu} \frac{dx^\mu}{d \lambda} \frac{dx^\nu}{d \lambda}. [/tex] Now I don't understand why this set of equations is equivalent to the geodesic equations. And I do not understand why we are allowed to use these equations to extract information about the geodesics. Maybe the questions are the same, but I hope you get my point. Any help would be greatly appreciated!!
Despite being able to simply provide a link to wikipedia's already excellent article on Linear Programming, I wanted to provide a short introduction in order to present what I am doing exactly on my thesis in a future article. Linear Programming is best described as a technique in which we want to maximize or minimize an objective function subject to a set of constraints. It is called linear because both the objective function and the constraints are linear inequalities on the variables. And just to make sure that no one is left behind: a linear function on a set of variables \(x_1, x_2, \ldots, x_n\) is a function \(a_0 + a_1 x_1 + a_2 x_2 + \ldots + a_n x_n\), where \(a_0, a_1, a_2, \ldots, a_n\) are numbers. For example, \(4 + 3x_1 - 2x_2\) is a linear function, whereas \(x_1 * x_2 / x_3\) isn't.

Example: diet problem

A classical example for linear programming is the diet problem. We want to determine a diet that satisfies all of our nutritional requirements while keeping it as cheap as possible. We have a set of different foods that fulfill certain requirements by containing a certain amount of proteins, vitamins, etc., and each food has a certain price per kilogram. Needless to say, the values in the following table are completely fictional.

| Food | Proteins per kg | Vitamins per kg | Carbohydrates per kg | Cost per kg |
|---|---|---|---|---|
| Apples | 2 | 4 | 2 | 5 |
| Beef | 10 | 5 | 4 | 20 |
| Cucumbers | 4 | 3 | 3 | 6 |
| Potatoes | 1 | 2 | 8 | 2 |

Suppose also that we need at least 60 units of each nutrient in our diet. Our objective will be to find a solution that specifies how much food of each type we should buy to cover all of our needs while keeping the budget to a minimum. In order to create a linear programming model, we must first define which variables we will be working with. We will define \(x_i\) as how many kg of food \(i\) we will be purchasing; therefore, our variables will be \(x_A, x_B, x_C, x_P\). Now we must specify our restrictions.
We want to consume at least 60 units of each nutrient, and each food provides a certain amount per kg of it. For instance, we may express that we need 60 units of carbs as: $$ 2 x_A + 4 x_B + 3 x_C + 8 x_P \geq 60$$ Note that each kg of apples provides 2 units of carbs, therefore the total amount of carbs provided by apples will be the product \(2 \times x_A\); the same holds for all foods. Similar expressions can be derived for vitamins: $$ 4 x_A + 5 x_B + 3 x_C + 2 x_P \geq 60$$ And for proteins: $$ 2 x_A + 10 x_B + 4 x_C + 1 x_P \geq 60$$ Needless to say, all amounts must be positive, as we are buying food, not selling it. Therefore, we add the constraints: $$ x_A, x_B, x_C, x_P \geq 0 $$ The set of constraints we have built specifies a set of valid or feasible solutions. For example, we will not accept a solution in which we purchase only 20kg of apples, since although that satisfies our vitamin intake, it does not satisfy carbs and proteins. What we have to do now is, from among all feasible solutions, choose one that minimizes how much money we spend. So far, nothing prevents buying a whole supermarket of food from being a valid solution, albeit a very expensive one. In order to prevent that situation, we add an objective function to minimize, which is the cost of all the food we want to buy: $$ c(x) = 5 x_A + 20 x_B + 6 x_C + 2 x_P $$ The purpose of the objective function is to measure how good or bad a particular solution is, whereas the constraints restrict what we consider a valid solution. Putting all of them together, we have constructed a linear programming model for our diet problem.

Generalizing

A linear programming problem consists in the minimization of a linear objective function \(c(x)\) subject to a set of linear constraints, which can be expressed as \(Ax \geq b\), where \(A\) is a matrix in which element \(a_{ij}\) represents how much each unit of variable \(j\) contributes to satisfying demand \(b_i\) for product \(i\).
Note that by using simple algebraic operations we can include all kinds of non-strict linear constraints: \(a(x) \leq \beta\), \(a(x) \geq \beta\) and \(a(x) = \beta\). Therefore, there are different canonical ways to represent a linear programming problem, one of which is the one we have already seen: $$ \min{c(x)} \quad \text{subject to } Ax \geq b$$ We may also express it as a maximization of the objective function: $$ \max{c(x)} \quad \text{subject to } Ax \leq b$$ Or either of them subject to a set of equalities: $$ \max{c(x)} / \min{c(x)} \quad \text{subject to } Ax = b$$ In all canonical forms variables are usually restricted to be positive or zero, which makes sense in most scenarios. In a future post I would like to revisit this subject, using one of the canonical forms to go a little deeper into the economical interpretation of each of the variables and constraints. But for now, we will go straight to the resolution.

Resolution

We have seen how to model an optimization problem using linear programming, but we still need to know how to solve it. Luckily, there are several algorithms that deal with these specific problems, one of the most widely known being simplex. How simplex works deserves a blog post of its own, but for now let's just say that it solves most models incredibly fast. The problem of solving an LPP (linear programming problem) is known to be polynomial, which means that it can be solved within an acceptable timespan. Needless to say, linear programming is an invaluable tool for modeling several real-life scenarios, and has multiple applications within operations research. In future posts I would like to delve deeper into linear programming, which I will do if I have some spare time, but for now I will go on blogging within the scope of my thesis: integer linear programming.
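As an illustration, the diet model above can be handed to an off-the-shelf LP solver. Here is a sketch using SciPy's `linprog`, which minimizes subject to \(A_{ub} x \leq b_{ub}\), so our \(\geq\) rows are multiplied by \(-1\):

```python
import numpy as np
from scipy.optimize import linprog

c = [5, 20, 6, 2]            # cost per kg: apples, beef, cucumbers, potatoes
A_ge = [[2, 10, 4, 1],       # proteins per kg of each food
        [4, 5, 3, 2],        # vitamins per kg
        [2, 4, 3, 8]]        # carbohydrates per kg
b_ge = [60, 60, 60]

# linprog minimizes c @ x subject to A_ub @ x <= b_ub and x >= 0 by default,
# so the >= constraints are negated on both sides
res = linprog(c, A_ub=-np.array(A_ge), b_ub=-np.array(b_ge))
print(res.x, res.fun)        # optimal amounts in kg and the minimum cost
```

The solver returns a feasible point (every nutrient row evaluates to at least 60) together with the minimum value of the objective.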
Why do we care about eigenvalues, eigenvectors, and singular values? Intuitively, what do they tell us about a matrix? When I first studied eigenvalues in college, I regarded it as yet another theoretical math trick that is hardly applicable to my life. Once I passed the final exam, I shelved all my eigen-knowledge to a corner in my memory. Years have passed, and I gradually realize the importance and brilliance of eigenvalues, particularly in the realm of machine learning. In this post, I will discuss how and why we perform eigendecomposition and singular value decomposition in machine learning. In the previous post, I mentioned that there are essentially 2 questions in linear algebra and matrices: solve the linear equation \(Ax = b\); eigendecomposition and singular value decomposition of a matrix \(A\). Here I will focus on the second one. Eigendecomposition and singular value decomposition Matrix is all about transformation and change of coordinate systems. Through rotation, stretch, and scale, we can reveal unique characteristics of a matrix, and decompose a matrix into simple and representative matrices. 1. Eigendecomposition Also called Eigen Value Decomposition (EVD) [1]. Geometric interpretation of EVD This video by 3Blue1Brown provides a very informative geometric interpretation of EVD. Essentially, when we apply a linear transformation \(A\), if a vector \(v \in R^{n}\) in the space changes only by a scalar factor \(\lambda\) after the transformation (it may stretch, but it does not rotate), we call such a vector an eigenvector: \(Av = \lambda v \tag{1} \) Eigenvalues can be used directly to calculate the determinant of a matrix: \(\det(A) = \prod_i^n \lambda_i\tag{2}\) \(A\) is singular if any of its eigenvalues is 0. Eigendecomposition If a square matrix \(A \in R^{n \times n}\) has \(n\) linearly independent eigenvectors \(q_i \in R^{n}\), it can be diagonalized as follows.
For a single eigenvector \(q_i\), from equation 1, we have \(Aq_i = \lambda_i q_i, i = 1,2…n \tag{3} \) For all eigenvectors, we can define a square matrix \(Q \in R^{n \times n}\) with \(q_i\) in each column: \(Q = \begin{bmatrix} q_1 & q_2 & … &q_n \end{bmatrix} \tag{4} \) For all eigenvalues, we can define a diagonal matrix \(\Lambda \in R^{n \times n}\) with the eigenvalues \(\lambda_i\) as the diagonal elements: \(\Lambda = diag(\lambda_1, \lambda_2, … \lambda_n) \tag{5} \) Thus equation 3 with all eigenvalues and eigenvectors of \(A\) can be written as: \(AQ = Q \Lambda \tag{7}\) Because all eigenvectors are linearly independent, \(Q\) has full rank and is invertible with inverse \(Q^{-1}\): \(AQQ^{-1} = Q \Lambda Q^{-1} \tag{8}\) Therefore, we can decompose a full-rank square matrix \(A\) into a square matrix of eigenvectors and a diagonal matrix of eigenvalues [5]: \(A = Q \Lambda Q^{-1} \tag{9} \) If \(A \in R^{n \times n}\) does not have \(n\) linearly independent eigenvectors, then it is not diagonalizable. In this case, \(A\) is called a defective matrix, and there is no diagonal matrix \(\Lambda\) for the eigendecomposition in equation 9. 2. Singular value decomposition (SVD) LU factorization and eigendecomposition mentioned before are only applicable to square matrices. We would like to have a more generic decomposition approach for any rectangular matrix. Geometric interpretation of SVD We start with an \(R^n\) space defined by orthogonal unit vectors \(v\). Shown in the figure above is a 2-dimensional example. Note that \(v_1, v_2\) are unit vectors, \(\|v_i\| = 1\), and they define a sphere \(S\) [2][3]. Now we apply a linear transformation \(A \in R^{n \times n} \) to transform the sphere \(S\) into an output space \(AS\). In this new space, the original sphere \(S\) is transformed into an ellipse defined by 2 unit vectors \(u_1, u_2\), with scalar factors of \(\sigma_1, \sigma_2\) respectively.
Therefore, we have \(Av_i = \sigma_i u_i \tag {10}\) Note that equation 10 looks quite similar to eigendecomposition in equation 3, except that \(v_i \neq u_i\). We can rewrite equation 10 in matrix format with all \(n\) dimensions of \(A\): \(AV = \hat U \hat {\Sigma} \tag{11} \) \(V = \begin{bmatrix} v_1 &v_2 & … &v_n \end{bmatrix} \tag{12} \) \(\hat U = \begin{bmatrix} u_1 & u_2 & … &u_n \end{bmatrix} \tag{13} \) \(\hat {\Sigma} = diag (\sigma_1, \sigma_2, … \sigma_n) \tag{14}\) \(\hat {\Sigma}\) is a diagonal matrix whose sorted diagonal values \(\sigma_i\) are called singular values: \(\sigma_1 \geq \sigma_2 .. \geq \sigma_{\min(m,n)} \geq 0 \tag{15} \) Geometrically, \(V\) is an orthogonal matrix defining an orthonormal basis of the input space: \(VV^T = V^TV = I, V^T = V^{-1}, \det(V) = \pm 1 \tag{16.1}\) In theory, \(V\) can take either real or complex values. In the case of complex values, \(V\) is called a unitary matrix with its conjugate transpose \(V^{*}\): \(VV^{*} = V^{*}V = I\tag{16.2}\) \(A\) is a linear transformation matrix. \(\hat U\) represents orthogonal vectors giving the rotation of the input space, and \(\hat {\Sigma}\) represents the stretch. We can rewrite equation 11 in the real-value case as: \(AVV^T = \hat U \hat {\Sigma}V^T \tag{17} \) \(A = \hat U \hat {\Sigma}V^T \tag{18.1} \) In the complex-value case, we can write it in the following format: \(A = \hat U \hat {\Sigma}V^{*}\tag{18.2} \) Equation 18 is called the reduced singular value decomposition, with \(\hat {\Sigma}\) as a square diagonal matrix. Usually, we can pad \(\hat {\Sigma}\) with zeros and add arbitrary orthogonal columns to \(\hat U\) to form a unitary matrix \(U\), \(UU^T = I\), as shown in the following figure: \(A = U {\Sigma}V^T \tag{19.1}\) \(A = U {\Sigma}V^{*} \tag{19.2}\) Equation 19 is the general format of the singular value decomposition.
Any matrix \(A \in R^{m \times n}\) can be decomposed into these 3 matrices: \(U \in R^{m \times m}, V \in R^{n \times n}\) are orthogonal square matrices (also called unitary matrices if \(U,V\) take complex values), and \(\Sigma \in R^{m \times n}\) is a diagonal matrix. SVD is commonly used in recommendation systems and matrix factorization. 3. Covariance matrix, EVD, SVD The covariance matrix \(C = A^TA\) is used to describe the redundancy and noise in data \(A\). For a matrix \(A \in R^{m \times n}\), multiplying it by its transpose gives us a square and symmetric matrix. The diagonal elements of \(C\) represent variances and the off-diagonal elements represent covariances. \(C\) is likely to be a dense matrix unless all columns in the input \(A\) are independent, i.e. all covariances equal 0. Dense matrices are difficult to transform, invert, and compute with. If we can diagonalize a matrix such that all off-diagonal elements are 0, matrix computation becomes easier and more efficient. As discussed above, both EVD and SVD can diagonalize square matrices. Using SVD, we have: \(A^TA \\ = (U \Sigma V^T )^TU \Sigma V^T \\ =V {\Sigma}^T U^T U \Sigma V^T \\ = V{\Sigma}^T \Sigma V^T \\ = V {\Sigma}^2 V^T \tag {20} \) Since \(V\) is a unitary matrix, \(V^T = V^{-1}\), and equation 20 can be written as: \(A^TA = V {\Sigma}^2 V^{-1} \tag{21}\) Similarly, \(AA^T = U {\Sigma}^2 U^T = U {\Sigma}^2 U^{-1} \tag{22} \) Equations 21 and 22 have the same format as equation 9: eigendecomposition. Indeed, if \(C\) is a symmetric matrix, its eigendecomposition is the same as its singular value decomposition. The eigenvectors of \(C\) are not only linearly independent, but also orthogonal. Let the eigenvalues of \(C = A^TA\) be \(\Lambda\); then: \(\Lambda = {\Sigma}^2 \tag{23.1}\) \(\lambda_i = {\sigma_i}^2 \tag{23.2} \) The covariance matrix with singular value decomposition is used in Principal Component Analysis (PCA) [4], discussed in later posts.
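Equations 9 and 23 are easy to verify numerically; a small NumPy sketch with a random rectangular matrix (the shape is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
C = A.T @ A                         # covariance-style matrix: square, symmetric

# Eigendecomposition (equation 9): C = Q Lambda Q^{-1}
lam, Q = np.linalg.eigh(C)          # eigh: for symmetric matrices, ascending order
print(np.allclose(C, Q @ np.diag(lam) @ np.linalg.inv(Q)))

# Equation 23: eigenvalues of A^T A are the squared singular values of A
sig = np.linalg.svd(A, compute_uv=False)   # descending order
print(np.allclose(lam[::-1], sig ** 2))
```

Both checks print `True`; note the only bookkeeping is that `eigh` sorts ascending while `svd` sorts descending.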
4. Positive definite matrix Another common matrix concept is positive definiteness. It is a characteristic of a symmetric matrix \(M \in R^{n \times n}\), such as the covariance matrix \(A^TA\) or the second-derivative Hessian matrix \(\frac {\partial^2 J(x)}{\partial x^2}\). In optimization, the objective function is convex if the Hessian matrix is positive semi-definite (and strictly convex if it is positive definite). One way to evaluate the definiteness of a matrix is to check its eigenvalues: if all eigenvalues are positive, \(M\) is positive definite; if all eigenvalues are negative, \(M\) is negative definite; if all eigenvalues are non-negative, \(M\) is positive semi-definite; if all eigenvalues are non-positive, \(M\) is negative semi-definite; else, \(M\) is indefinite. Definiteness of a matrix can also be evaluated in the following form: if \(x^TMx > 0,\forall x \in R^n\), \(M\) is positive definite. To see why this agrees with the eigenvalue test, write \(M = Q \Lambda Q^{-1} \tag{24} \) \(x^TMx = x^TQ \Lambda Q^T x = y^T \Lambda y \tag{25.1}\) \(y= Q^T x \tag{25.2}\) Because \(\Lambda = diag(\lambda_1, \lambda_2, … \lambda_n) \): \(x^TMx \\= y^T \Lambda y \\ = \sum_i^n (y_i)^2 \lambda_i \tag{26}\) For a positive definite matrix \(M\), all eigenvalues are bigger than 0, \(\lambda_i > 0\); as a result: \(x^TMx = \sum_i^n (y_i)^2 \lambda_i > 0 \tag{27}\) Similarly, if \(x^TMx < 0, \forall x \in R^n\), \(M\) is negative definite; if \(x^TMx \geq 0 ,\forall x \in R^n\), \(M\) is positive semi-definite; if \(x^TMx \leq 0, \forall x \in R^n\), \(M\) is negative semi-definite; and if \(x^TMx > 0\) for some \(x \in R^n\) while \(x^TMx < 0\) for some other \(x \in R^n\), \(M\) is indefinite. Take home message Eigendecomposition and singular value decomposition factorize a matrix via a diagonal matrix, which not only reveals unique characteristics of the data, but also makes matrix computation more efficient.
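The eigenvalue test above translates directly into code; a sketch (assuming \(M\) symmetric, so `eigvalsh` applies, with a small tolerance standing in for exact zero):

```python
import numpy as np

def definiteness(M, tol=1e-12):
    lam = np.linalg.eigvalsh(M)            # eigenvalues of the symmetric matrix M
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam < -tol):
        return "negative definite"
    if np.all(lam >= -tol):
        return "positive semi-definite"
    if np.all(lam <= tol):
        return "negative semi-definite"
    return "indefinite"                    # eigenvalues of both signs

print(definiteness(np.array([[2.0, 0.0], [0.0, 1.0]])))    # positive definite
print(definiteness(np.array([[1.0, 0.0], [0.0, -1.0]])))   # indefinite
```

The tolerance is needed because floating-point eigenvalues of a semi-definite matrix typically come out as tiny nonzero numbers rather than exact zeros.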
References [1] https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors [2] http://www.cs.cornell.edu/courses/cs322/2008sp/stuff/TrefethenBau_Lec4_SVD.pdf [3] youtube https://www.youtube.com/watch?v=EokL7E6o1AE [4] https://intoli.com/blog/pca-and-svd/ [5] https://en.wikipedia.org/wiki/Orthogonal_matrix
Yes: for white noise perturbations of the 1D dynamical system $\dot{x}=b(x)$, the action functional is $S(\phi)=\frac{1}{2} \int_0^T |b(\phi(s))-\dot{\phi}(s)|^2 ds$ for $\phi \in H^1$ and otherwise infinity. The LDP says that $\sigma^2 \log(P(X_\cdot \in A))$ is asymptotically between $-\inf_{f \in \mathrm{Int}(A)} S(f)$ and $-\inf_{f \in \mathrm{Cl}(A)} S(f)$ as $\sigma \to 0$. In your case $A$ is the exterior of a ball in $C([0,T])$ centered at zero, so these two are the same, and so the problem is (presumably) to compute $I=\inf_{\phi \in C([0,T]) : \| \phi \|_\infty \geq z,\phi(0)=0} \frac{1}{2} \int_0^T |\dot{\phi}(s)+\alpha \phi(s)|^2 ds$. Then the logarithmic asymptotic for your quantity is $e^{-\sigma^{-2} I}$. I suspect the infimum is attained by putting $\phi(T)=z$ (giving yourself the maximum possible amount of time to get to a magnitude of $z$), so that by the Euler-Lagrange equation $\phi(t)=z \frac{\sinh(\alpha t)}{\sinh(\alpha T)}$ is a minimizer; you should check this guess though. If I'm right then that means the minimum of $I$ is $\frac{1}{2}\int_0^T \left ( z \alpha \left ( \frac{\cosh(\alpha s) + \sinh(\alpha s)}{\sinh(\alpha T)} \right ) \right )^2 ds = \frac{\alpha z^2}{2}(\coth(\alpha T)+1)$. This is large compared to $z^2$ for $T \ll \alpha^{-1}$ but quickly settles toward $\alpha z^2$; unsurprisingly this means that in the long run you should expect to see maximum deviations from zero on the order of $\frac{\sigma}{\sqrt{\alpha}}$. The situation is not much different in higher dimensions provided the noise covariance matrix is constant and nonsingular (even if it isn't isotropic). Cf. Random Perturbations of Dynamical Systems by Freidlin and Wentzell, chapter 3 in the 3rd edition, for more details.
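A quick numerical sanity check of the hyperbolic-sine candidate (parameter values here are arbitrary illustrations): for $\phi(t)=z\sinh(\alpha t)/\sinh(\alpha T)$ one has $\dot\phi+\alpha\phi = z\alpha\, e^{\alpha t}/\sinh(\alpha T)$, and Simpson's rule reproduces the closed form $\int_0^T (\dot\phi+\alpha\phi)^2\,ds = \alpha z^2(\coth(\alpha T)+1)$:

```python
import math

# Hypothetical parameters for the candidate minimizer phi(t) = z*sinh(alpha*t)/sinh(alpha*T)
alpha, z, T = 1.3, 2.0, 1.5

def integrand(t):
    # (phi'(t) + alpha*phi(t))^2 = (z*alpha*exp(alpha*t)/sinh(alpha*T))^2
    return (z * alpha * math.exp(alpha * t) / math.sinh(alpha * T)) ** 2

# Composite Simpson's rule on [0, T]
n = 2000
h = T / n
total = integrand(0.0) + integrand(T)
for i in range(1, n):
    total += (4 if i % 2 else 2) * integrand(i * h)
integral = total * h / 3

closed_form = alpha * z**2 * (1.0 / math.tanh(alpha * T) + 1.0)
```

The action carries the extra factor of $\tfrac12$ from the rate functional, i.e. it is half this integral.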
I was playing with N-body simulations of the planetary system of a game called Kerbal Space Program, which itself uses the patched conics approximation. I have read that for long term stability it is best to use symplectic integrators, and one of the simplest is the Störmer-Verlet method: $$ \ddot{\vec{x}}_n = F(\vec{x}_n) $$ $$ \vec{x}_{n+1} = 2\vec{x}_n - \vec{x}_{n-1} + F(\vec{x}_n){\Delta t}^2 $$ $$ t_{n+1} = t_n + \Delta t $$ I thought that I might improve on this method using an approach similar to how it can be derived, starting with the Taylor expansions: $$ \vec{x}(t+\Delta t) = \sum^N_{n=0} \frac{\Delta t^n}{n!}\frac{\partial^n}{\partial t^n}\vec{x}(t) + O(\Delta t^{N+1}) $$ $$ \vec{x}(t+\Delta t) + \vec{x}(t-\Delta t) = \sum^5_{n=0} \frac{\Delta t^n}{n!}\frac{\partial^n}{\partial t^n}\vec{x}(t) + \sum^5_{n=0} \frac{(-\Delta t)^n}{n!}\frac{\partial^n}{\partial t^n}\vec{x}(t) + O(\Delta t^6) $$ $$ \vec{x}(t+\Delta t) + \vec{x}(t-\Delta t) = 2\vec{x}(t) + \Delta t^2 \frac{\partial^2}{\partial t^2}\vec{x}(t) + \frac{\Delta t^4}{12} \frac{\partial^4}{\partial t^4}\vec{x}(t) + O(\Delta t^6) $$ Since the accelerations can be calculated with $F(\vec{x}(t))$, I only had to find an expression for the fourth derivative (jounce or snap). For this I used a finite difference: $$ \frac{\partial^4}{\partial t^4}\vec{x}(t) = \frac{\ddot{\vec{x}}(t+\Delta t) - 2\ddot{\vec{x}}(t) + \ddot{\vec{x}}(t-\Delta t)}{\Delta t^2} + O(\Delta t^2) $$ $$ \vec{x}(t+\Delta t) = 2\vec{x}(t) - \vec{x}(t-\Delta t) + \Delta t^2 \frac{\ddot{\vec{x}}(t+\Delta t) + 10\ddot{\vec{x}}(t) + \ddot{\vec{x}}(t-\Delta t)}{12} + O(\Delta t^6) $$ This method is implicit, since for the next position you need to know the acceleration at that point; however, I thought that the higher order precision might give it some advantage over Störmer-Verlet. Similar to Störmer-Verlet, I assumed that the global error would only drop two orders, such that this method would be a fourth order integrator.
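A minimal sketch of both recursions on the simple harmonic oscillator $\ddot{x}=-x$, with the implicit step solved by fixed-point iteration seeded by a Verlet predictor (the step size, step count and iteration count are arbitrary choices for illustration):

```python
import math

def stoermer_verlet(F, x0, x1, h, steps):
    # x_{n+1} = 2 x_n - x_{n-1} + h^2 F(x_n), applied `steps` times
    xm, x = x0, x1
    for _ in range(steps):
        xm, x = x, 2 * x - xm + h * h * F(x)
    return x

def implicit_variant(F, x0, x1, h, steps, iters=8):
    # x_{n+1} = 2 x_n - x_{n-1} + h^2 (F(x_{n+1}) + 10 F(x_n) + F(x_{n-1})) / 12,
    # solved here by fixed-point iteration (contraction factor ~ h^2/12)
    xm, x = x0, x1
    for _ in range(steps):
        xp = 2 * x - xm + h * h * F(x)                    # Verlet predictor
        for _ in range(iters):                            # implicit corrector
            xp = 2 * x - xm + h * h * (F(xp) + 10 * F(x) + F(xm)) / 12
        xm, x = x, xp
    return x

# Test problem x'' = -x with x(0) = 1, x'(0) = 0; exact solution cos(t)
F = lambda x: -x
h, steps = 0.1, 49
t_end = h * (steps + 1)               # x0 at t=0, x1 at t=h, so 49 updates reach t=5.0
exact = math.cos(t_end)
err_verlet = abs(stoermer_verlet(F, 1.0, math.cos(h), h, steps) - exact)
err_implicit = abs(implicit_variant(F, 1.0, math.cos(h), h, steps) - exact)
```

On this test the implicit variant is several orders of magnitude more accurate than plain Verlet at the same step size, consistent with its higher order.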
Kepler Orbit I did a simple benchmark of an eccentric Kepler orbit (eccentricity of 0.75 and 20 orbits). I found that this method seems to be a fifth order method and Verlet, as suspected, a second order method. I derived this from the slope in this log-log plot of the time-step size versus the error (distance between the numerical and analytical endpoint): The actual slopes are $4.98\pm 0.01$ for my method and $1.98\pm 0.01$ for Verlet, where the errors are a 95% confidence bound. Simple Harmonic Oscillator I also tested this method with a one dimensional simple harmonic oscillator, $\ddot{x}=-x$, with the initial conditions $x(t_0)=1$, $\dot{x}(t_0)=0$, which has the solution $$ x(t) = \cos(t-t_0). $$ To avoid errors in the second required step I used $x(t_0+h)=\cos(h)$. Even with a time step of one-fourth the period of the oscillation, the amplitude, after 5000 periods, only drops to 0.973. Would this be enough to say that this method is symplectic? I have only read about the properties of symplectic integrators, but do not know how to tell whether a method is symplectic or not. I also looked at the error as a function of time-step size and found that this method seemed to be a fourth order method and Verlet a second order method when the end position was at $x_N=x(t_{end})=0$ (maximum speed), but an eighth order method and Verlet a fourth order method when the end position was at $x_N=x(t_{end})=\pm 1$ (minimal speed), see figure below. The actual slopes are $8.00\pm 0.01$ and $3.998\pm 0.003$ for my method and $3.992\pm 0.005$ and $1.997\pm 0.002$ for Verlet, where the errors are again a 95% confidence bound. I think this difference in the order of the error depends on the local dynamics, but somehow for this ODE the error shrinks again when the speed drops to zero. Therefore it seems that my initial guess was correct and this method is only fourth order accurate.
So the performance of my method depends on where it is evaluated and which ODE is used. Is this common among other methods? I have had one course on scientific computing, in which a few integration methods were covered, but none of them were symplectic, so I wonder whether this integrator is symplectic, since Verlet is said to be symplectic. And does this method have a known name, since it was fairly easy to derive? PS: In a similar way an eighth order method can be found, by extending the Taylor expansion by two orders and by using a fourth order approximation of the second derivative of the acceleration and a second order approximation of the fourth derivative of the acceleration: $$ \vec{x}_{n+1} = 2\vec{x}_{n} - \vec{x}_{n-1} + \Delta t^2 \frac{3\ddot{\vec{x}}_{n+1} + 48\ddot{\vec{x}}_{n+1/2} + 78\ddot{\vec{x}}_{n} + 48\ddot{\vec{x}}_{n-1/2} + 3\ddot{\vec{x}}_{n-1}}{180} + O(\Delta t^8) $$ However, I have not yet figured out a good way to find $\ddot{\vec{x}}_{n+1/2}$.
I'm trying to derive the infinitesimal volume element in spherical coordinates. Obviously there are several ways to do this. The way I was attempting it was to start with the Cartesian volume element, $dx\,dy\,dz$, and transform it using $$dxdydz = \left (\frac{\partial x}{\partial r}dr + \frac{\partial x}{\partial \theta }d\theta + \frac{\partial x}{\partial \phi }d\phi \right )\left ( \frac{\partial y}{\partial r}dr + \frac{\partial y}{\partial \theta }d\theta + \frac{\partial y }{\partial \phi}d\phi \right )\left ( \frac{\partial z}{\partial r}dr + \frac{\partial z}{\partial \theta }d\theta + \frac{\partial z}{\partial \phi}d\phi \right )$$ Unfortunately, I can't see how I will arrive at the correct expression, ##r^{2}\sin\theta \,dr\,d\theta\, d\phi##. For one reason, when completely expanded, I get terms with repeated differentials like ##dr^{3}## that don't cancel. Why is my method of derivation invalid?
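For reference, the bookkeeping the naive product misses can be sketched with differential forms: the differentials multiply with the antisymmetric wedge product, not as commuting scalars, so every repeated differential vanishes ($dr \wedge dr = 0$, $d\theta \wedge dr = -dr \wedge d\theta$) and what survives is exactly the Jacobian determinant:

```latex
% Treating dx, dy, dz as 1-forms and expanding with the wedge product:
\begin{align*}
dx \wedge dy \wedge dz
  &= \det
     \begin{pmatrix}
       \partial x/\partial r & \partial x/\partial \theta & \partial x/\partial \phi \\
       \partial y/\partial r & \partial y/\partial \theta & \partial y/\partial \phi \\
       \partial z/\partial r & \partial z/\partial \theta & \partial z/\partial \phi
     \end{pmatrix}
     \, dr \wedge d\theta \wedge d\phi
   = r^2 \sin\theta \; dr \wedge d\theta \wedge d\phi .
\end{align*}
```

In the ordinary (commuting) product the $dr^3$-type terms have nothing to cancel against, which is why the naive expansion fails.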
Voltage-controlled oscillations in neurons

Frank Hoppensteadt (2006), Scholarpedia, 1(11):1599. doi:10.4249/scholarpedia.1599

VCON is an acronym for the voltage controlled oscillator neuron model. It has the form \[\tau\ddot\theta + F(\dot\theta) + A \sin\theta = \omega \] where \(\dot\theta\) corresponds to the membrane potential in an action potential generating region of an axon.

Introduction

Hodgkin and Huxley discovered that voltages control ionic currents in nerve membranes. This led them to describe electrical activity in a neuronal membrane patch in terms of an electronic circuit whose characteristics were determined using empirical data (Hodgkin and Huxley 1952). Due to the complexity of this system, a number of useful heuristics for the Hodgkin-Huxley circuit have been devised, which are reviewed in (Hoppensteadt, 2012). The heuristic presented here is based on a phase-amplitude analysis of the van der Pol model of neural activity, which revealed a rich structure of phase-locking behavior (Flaherty and Hoppensteadt 1978). This work motivated experiments with forced rhythms in squid axons that revealed remarkably similar phase-locking behavior (Guttman et al. 1980). The VCON heuristic emerged from this (serial) collaboration in 1980; it was also found to represent phase-locked loop circuits in electronics (Horowitz and Hill 1989). VCONs share various behaviors with neurons, but direct frequency analysis is possible, which is not the case for earlier models (Hoppensteadt 1997). While a VCON is consistent with numerous observations in neuroscience, it is also constructible as an electronic circuit on scales ranging from nano-scale spin torque oscillators (Macia et al. 2011) to phase-locked loop integrated circuits (Hoppensteadt et al. 1997). This article pertains to the mathematical dynamics of a VCON rather than to its uses in neuroscience or engineering.
Principles The VCON is based on four principles of neuroscience: Neural membranes can separate charges; a resting membrane potential is sustained by cell metabolism; homeostatic mechanisms stabilize the resting potential; and, escapements destabilize the resting potential, sometimes leading to action potentials. These principles have inspired the design of many electronic circuits, among them the Hodgkin-Huxley model (Hodgkin and Huxley 1952) and those shown in Figure 1. However, only the VCON directly models relationships between voltage and frequency. The VCON is used as a model of an action potential generator region of a neuron's membrane in synthesizing and analyzing networks. The model components are: for the separation of charge, a capacitor; for the resting potential, a current source (a battery and resistor); for the homeostatic mechanism (comparable to the role played by potassium channels in the Hodgkin-Huxley model), a voltage controlled oscillator (Hoppensteadt 1997); for the escapement (comparable to the role played by sodium channels in the Hodgkin-Huxley model), a negative differential resistance (NDR) device (e.g., see FitzHugh-Nagumo model and Schmitt et al. (1931)). The circuit depicted at the bottom of Figure 1 shows the arrangement of these elements. The current \(I\) is divided among the separate channels according to their own \(IV\)-characteristics: The current through the capacitor is \(C \dot V\ ;\) the current through the escapement is \(I_N=f(V)\) where \(f\) is an N-shaped function (it describes an NDR device); and, the current through the homeostatic mechanism \(H\) is \(I_H=\alpha\sin\left(\gamma\int_0^t V(t')\,dt'\right)\) (Horowitz and Hill 1989).
Since \(\int V\) appears only inside a periodic function, we define it to be a new angle variable \(\theta(t)=\gamma\int_0^t V(t')\,dt'\ .\) As a result, we have that the instantaneous frequency of current through the homeostatic junction is proportional to \(V\ :\) i.e., \(\dot\theta(t) = \gamma V(t)\) (here and below \(\dot\theta = d\theta/dt\ ,\) etc.). Similar formulas relating frequency and voltage arise in electronics on scales ranging from quantum mechanical Josephson junctions, to phase-locked loop integrated circuits, to rotating electrical machinery of all sizes, and to turbines on regional power grids. The model of the VCON circuit is based on Kirchhoff's law that balances currents, \(I=C\dot V + I_H+I_N\ ,\) and Ohm's Law that describes the current source, \(R I=E-V\ .\) Using the fact that \(\gamma V=\dot\theta\ ,\) we have \[\dot\theta = \gamma V\] \[\tau\dot V = E-V-R(f(V)+\alpha\sin\theta)\] where \(\tau=R C\) is a time constant. Substituting the first equation in the second gives a second order differential equation for \(\theta\ :\) \[\tau\ddot\theta+F(\dot\theta)+A\sin\theta=\omega\] where \(A=R\alpha\gamma \ ,\) \(F(\dot\theta)=\dot\theta+R\gamma f(\dot\theta/\gamma)\) and \(\omega=\gamma E\) (these terms have units of radians/sec). This equation has familiar components: Linearizing \(\sin\theta\approx\theta\) gives Rayleigh's equation (which is essentially equivalent to van der Pol's equation); and, taking \(f\equiv 0\) gives a pendulum equation. While these equations are well known separately, the combined model has certain interesting and useful new features, such as supporting both saddle-node on invariant circle (SNIC) and Andronov-Hopf bifurcations and exhibiting coexistent stable oscillations. Sub- and Super-threshold oscillations The coexistence of stable super- and sub-threshold oscillations is demonstrated in Figure 2. 
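A minimal numerical sketch of the phase equation with the escapement switched off (\(f \equiv 0\), so \(F(\dot\theta)=\dot\theta\)); the parameter values are illustrative, not from the article. With \(\omega > A\) there are no rest states (\(\sin\theta = \omega/A > 1\) has no solution), so the phase keeps advancing, a "running" solution:

```python
import math

# Illustrative parameters for tau*theta'' + theta' + A*sin(theta) = omega (f == 0 case)
tau, A, omega = 1.0, 0.6, 1.2          # omega > A: no equilibria, running solution

def rhs(state):
    theta, v = state                    # v = theta'
    return (v, (omega - v - A * math.sin(theta)) / tau)

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs((state[0] + 0.5 * h * k1[0], state[1] + 0.5 * h * k1[1]))
    k3 = rhs((state[0] + 0.5 * h * k2[0], state[1] + 0.5 * h * k2[1]))
    k4 = rhs((state[0] + h * k3[0], state[1] + h * k3[1]))
    return (state[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            state[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

state, h, T = (0.0, 0.0), 0.01, 200.0
for _ in range(int(T / h)):
    state = rk4_step(state, h)

# Rotation number estimate rho ~ theta(T) / (2*pi*T); positive for a running solution
rho = state[0] / (2 * math.pi * T)
```

The phase \(\theta\) grows roughly linearly in time, and \(\rho\) approximates the output frequency defined later in the article.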
The top oscillation, referred to as being a running periodic solution, wraps around the cylinder \(\mathcal{C} = \{(\theta\, \textrm{ mod }\, 2\pi,V)\}\ ,\) and the lower one remains on one sheet of the cylinder. There is a saddle point lying to the right of the lower oscillation at \((\pi/2+\arcsin 0.6,0.0)\ ,\) as indicated by an asterisk. The separatrix entering it from above is bounded above by the upper oscillation and must come from \(V = -\infty\ .\) The lower oscillation resulted from an Andronov-Hopf bifurcation. In addition, as \(\omega\) increases through the value \(A\ ,\) a saddle-node bifurcation occurs, after which the lower oscillation disappears and either one or two running periodic solutions exist (See Hoppensteadt 2006). There is an unstable equilibrium within the lower orbit at \((\arcsin 0.6, 0.0)\ ,\) as indicated by an asterisk. The output currents in these two cases are proportional to \(\sin\theta(t)\ ;\) in the former case (running periodic solution), the output is a fully developed sinusoid, in the latter (bounded oscillation) the current oscillates but is bounded away from \(\pm 1\ .\) External Forcing External signals can be brought into VCON in several ways. Studies of phase-locked loops suggest parametric forcing (e.g., \(A \propto \cos\mu t\)) and additive inputs (e.g., \(\omega = \Omega(t)\)), which are described first. Parametric forcing of the escapement is demonstrated second. These modified circuits are constructible using off-the-shelf electronics, and they have analogs in neuroscience (Hoppensteadt 1997). The output frequency (hz) is \[\rho=\lim_{t\to\infty} \frac{\theta(t)}{2 \pi t}=\lim_{t\to\infty}\frac \gamma{2\pi t} \int_0^t V(t')\,dt'.\] As a result, the output current is proportional to \(\sin(\rho t + \phi(t))\) where \(\phi(t)\) is a (bounded) oscillatory phase deviation (Hoppensteadt 1997). 
The frequency-response pattern of the VCON \[\ddot\theta+\dot\theta+3(1+\cos 2\pi t) \sin\theta=2\pi \omega\] illustrates phase-locking. Figure 3 suggests that \(\rho\) will be a monotone, continuous function of \(\omega\ ,\) and that it will form a staircase whose treads correspond to phase-locked responses. While this behavior is stable, it is shown here for one set of initial data ((0,0), in this case); other initial data can result in other phase-locking patterns (Flaherty and Hoppensteadt 1978). The frequency \(\rho\) suggests that there is an ongoing sustained oscillation in the electronic circuit. However, since the VCON is pendulum-like, only a few pushes are needed to force it over the top to execute a full oscillation. But the timing of these pushes, which for the circuit are electrical spikes, must be precise. The critical inter-spike interval (ISI) is indicated by the natural period \(2\pi/\rho\ .\) Just a few correctly timed spikes will cause a VCON to produce a full oscillation, similar to the resonate and fire phenomenon. This rapid acquisition time is an important aspect of the VCON, as it shares this property with neurons that are known to respond to a few precisely timed input spikes (Izhikevich et al. 2003). The final simulation demonstrates this sensitivity to the timing of inputs. The simulation in Figure 4 demonstrates the response when properly spaced inputs are applied to the escapement of the VCON \[\ddot\theta+\left(1+p(t)(\dot\theta^2-10)\right)\dot\theta+3\sin\theta=1.2\pi,\] where \(p(t)\) denotes a train of two pulses. Three cases are simulated here: (1) short inter-spike interval (ISI = 3.5), (2) intermediate (ISI = 4.0), and (3) long (ISI = 8.0). The oscillator responds with a full spike-generating oscillation in the second case, but not in the first or third. This demonstrates the phenomenon of sensitivity only to correctly timed input spikes. References Hodgkin A.L., Huxley A.F.
(1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117:500-544. Hoppensteadt F. (2012) Heuristics for the Hodgkin-Huxley system, Math. Biosci. http://dx.doi.org/10.1016/j.mbs.2012.11.006 Flaherty J.E., Hoppensteadt F.C. (1978) Frequency entrainment of a forced van der Pol oscillator, Stud. Appl. Math. 58:5-15 Guttman R., Feldman L., Jacobsson E. (1980) Frequency entrainment of squid axon membrane, J. Membrane Biol. 56:9-18 Hoppensteadt F.C. (1997) Introduction to Mathematics of Neurons, Modeling in the frequency domain (2nd ed.) Camb. U. Press Macia F., Kent A., Hoppensteadt F. (2011) Spin-wave interference patterns created by spin-torque nano-oscillators for memory and computation, arXiv:1009.4116. Nanotechnology 22:095301. DOI:10.1088/0957-4484/22/9/095301. Hoppensteadt F.C., Izhikevich E.M. (1997) Weakly Connected Neural Networks, Springer-Verlag, New York. Hoppensteadt F.C. (2006) Biologically inspired circuits, Int. J. Bifurcation and Chaos, October, 2006. Horowitz P., Hill W. (1989) The Art of Electronics, 2nd ed., Camb. U. Press. Izhikevich E.M., Desai N.S., Walcott E.C., Hoppensteadt F.C. (2003) Bursts as a unit of neural information: Taking advantage of resonance, Trends in Neuroscience, 26:161-167. Schmitt O., Schmitt F. (1931) The Nature of the Nerve Impulse, Am. J. Physiol., Vol. 97 # 2. Internal references Yuri A. Kuznetsov (2006) Andronov-Hopf bifurcation. Scholarpedia, 1(10):1858. Eugene M. Izhikevich and Richard FitzHugh (2006) FitzHugh-Nagumo model. Scholarpedia, 1(9):1349. Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358. Yuri A. Kuznetsov (2006) Saddle-node bifurcation. Scholarpedia, 1(10):1859. Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.
Wave Electricity Mathematics News | Published September 14, 2019 Electrical poles called wave staffs are used for precise engineering measurements, and are installed on research platforms such as the Acqua Alta oceanographic tower in the Adriatic Sea, just offshore of Venice. Storm waves start off as enormous, irregular waves, which then progressively join together into sturdy, clean crested lines called "swell". If the swimmer or surfer does not reach this critical speed, the wave will pass by without them catching it. The mathematics of wave motion is expressed most generally in the wave equation. The period of the wave can be found using the angular frequency: $$ \omega = \frac{2\pi}{T}, \qquad T = \frac{2\pi}{\omega} = \frac{2\pi}{\frac{\pi}{2}\; \mathrm{s}^{-1}} = 4\; \mathrm{s}. $$ A sine wave can represent a sound wave in principle, but not pictorially. The shape of a sine wave is completely different from the "shape" of a sound wave found in nature.
Find the partial derivatives of the superposition \(y(x,t) = A \sin(kx - \omega t) + A \sin(2kx + 2\omega t)\): $$ \begin{aligned} \frac{\partial y(x,t)}{\partial x} &= Ak \cos(kx - \omega t) + 2Ak \cos(2kx + 2\omega t), \\ \frac{\partial^2 y(x,t)}{\partial x^2} &= -Ak^2 \sin(kx - \omega t) - 4Ak^2 \sin(2kx + 2\omega t), \\ \frac{\partial y(x,t)}{\partial t} &= -A\omega \cos(kx - \omega t) + 2A\omega \cos(2kx + 2\omega t), \\ \frac{\partial^2 y(x,t)}{\partial t^2} &= -A\omega^2 \sin(kx - \omega t) - 4A\omega^2 \sin(2kx + 2\omega t). \end{aligned} $$ At first the surface waves will be small, but they will soon increase in size because the wind affects the small surface waves more efficiently than the calm flat sea surface. Moving up and down the wave also allows gravitational potential energy to change into kinetic energy of motion. Wavelength is measured from crest to crest. Duration is the time over which the wind has blown over a given area. Due to the earth's rotation on a tilted axis, the globe is heated unevenly, which causes winds to blow in an attempt to re-establish temperature balance. The wave therefore moves with a constant wave speed of \(v = \frac{\lambda}{T}\). As shown in the diagram, the energy in the wave is stored between the top of the wave and a depth which is about half the wavelength.
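The derivative bookkeeping above can be checked numerically: each term of the superposition \(y(x,t) = A\sin(kx-\omega t) + A\sin(2kx+2\omega t)\) satisfies the linear wave equation \(\partial^2 y/\partial t^2 = v^2\, \partial^2 y/\partial x^2\) with \(v=\omega/k\). A sketch with central finite differences (parameter values and sample point are arbitrary):

```python
import math

# Illustrative parameters; wave speed v = w/k = 2.0
A, k, w = 0.5, 3.0, 6.0

def y(x, t):
    return A * math.sin(k * x - w * t) + A * math.sin(2 * k * x + 2 * w * t)

def second_diff(f, u, h=1e-4):
    # central finite-difference approximation of f''(u)
    return (f(u + h) - 2 * f(u) + f(u - h)) / (h * h)

x0, t0 = 0.37, 1.21                    # arbitrary sample point
ytt = second_diff(lambda t: y(x0, t), t0)
yxx = second_diff(lambda x: y(x, t0), x0)
v = w / k
residual = abs(ytt - v * v * yxx)
```

The residual is at the level of the finite-difference truncation error, confirming that the superposition solves the wave equation.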
Properties of the Wave Function Once surface waves are no longer driven by the wind they gradually begin to give up their energy. 4. In deep water the groups travel at a group speed, which is half of the phase speed. It is also important to notice that the length of the wave changes at a non-constant rate. The geometry of a wave's shape is represented as the ratio of its length and height. A longitudinal wave is one in which the displacements of the medium are back and forth along the same direction as the wave itself. The wave number can be used to find the wavelength: $$ k = \frac{2\pi}{\lambda}, \qquad \lambda = \frac{2\pi}{k} = \frac{2\pi}{6.28\; \mathrm{m}^{-1}} = 1.0\; \mathrm{m}. $$ We can connect the idea of the sine as a function of angle to sine waves dependent on time by considering the "spoke" of a unit circle as it rotates, forming the hypotenuse of various right triangles. The mathematics of wave motion is expressed most commonly in the wave equation. Write the wave function of the second wave: \(y_2(x, t) = A \sin(2kx + 2\omega t)\). Show how the speed of the medium differs from the wave speed (propagation speed). amplitude (A) - the maximum magnitude of the displacement from equilibrium, in SI units of meters. In general, it is the distance from the equilibrium midpoint of the wave to its maximum displacement, or half the total displacement of the wave. Find the second partial derivative with respect to position and the second partial derivative with respect to time.
One rotation around the circle completes one cycle of rising and falling in the wave, as shown in the picture below. These lines also channel the water and increase the power of the wave. The invention of the Cartesian coordinate system quickly led to the graphing of many mathematical relationships such as the sine and cosine ratios. Lesson starter: the relationship between wave velocity (or phase speed) and depth for long surface waves in shallow water is calculated with a formula. "Dropping in" is a term used for stealing another surfer's wave by "dropping in" on him. Consequently, the first storm swells to arrive at the shore are the long-wavelength swells. For instance, the Doppler effect for sound waves is well known, but there is a similar Doppler effect for light waves, and they are described by the same mathematics. The Blue Energy technology was developed by Holly Lam Teng Choy in Singapore. A common illustration is the following question about traveling to surfing spots on different Hawaiian islands: WHEEL IN THE SKY. The top of the wave quickly overtakes the bottom and pitches forward (often taking the novice surfer with it). Sine, Cosine, and the Unit Circle: this animation shows how the values of the sine and cosine change as we sweep around the unit circle. Basically, what the modifier A does is amplify (or boost) the result of the function sin(x), thus producing larger resulting y values. This produces gigantic waves from incoming rogue waves generated by large storms in the Atlantic Ocean.
As the water depth decreases toward the coast into shallow water, the speeds of the crest and trough of the wave are also altered: the crest moves faster than the trough, causing the waves to peak and break. The quantity \(\frac{2\pi}{\lambda}\) is called the wave number. The word trigonometry means "measurement of triangles", and the sine along with the cosine and tangent are called the trigonometric ratios because they originated with the ancient study of triangles. The question is, "How fast does it decrease?" Deep water is where depth D > λ/2. Wave energy = wind speed × wind duration × fetch distance. For example, the differences between the colors you see on this page relate to different wavelengths of light perceived by your eyes. c = wave speed, g = acceleration of gravity (9.8066 m/s/s), d = wavelength or upper-layer depth, p2 = density of water (= 1) and p1 = density of air (= 0.00125). Since the sine function always oscillates between -1 and 1, any coefficient attached to the function will directly affect the amplitude of the wave. A pulse can be defined as a wave that consists of a single disturbance which moves through the medium with a constant amplitude. This narrowing or steepening of the crest becomes more pronounced as the wave amplitude increases, as the wave begins to rise up and break. However, in most diagrams the speed variable is labeled "c" for "celerity", the term oceanographers use to refer to wave speed.
These waves are recognizably distinct when they travel across the ocean on their own as "freak waves," "killer waves," "king waves," "monster waves," or "rogue waves" and can reach the extreme height of approximately 30 meters. Remember that a sound wave causes air molecules to "vibrate" about their at-rest positions. A wave can break when it surges into shallow water, or when two wave systems oppose and combine forces. Trigonometric ratios become wave functions. Bohr's theory had come to grief when even two electrons, as in the helium atom, had to be considered together, but the new quantum mechanics had no difficulty in writing down the equations for two or more. Problem-Solving Strategy: Finding the Characteristics of a Sinusoidal Wave. Here \(\Box\) is the d'Alembertian, which subsumes the second time derivative and the second space derivatives into a single operator. The wind blows on the top part of the wave and delays the top from overtaking the bottom part. Remember that our speed must stay close to the speed of the wave, or we will not be able to keep up with it, and it will pass by beneath us. According to our previous discussion, the sine of angle A in the diagram is equal to the ratio of the opposite side to the hypotenuse.
Images are essential elements in most scientific documents. LaTeX provides several options to handle images and make them look exactly the way you need. This article explains how to include images in the most common formats, how to shrink, enlarge and rotate them, and how to reference them within your document. Below is an example of how to import a picture. \documentclass{article} \usepackage{graphicx} \graphicspath{ {./images/} } \begin{document} The universe is immense and it seems to be homogeneous, in a large scale, everywhere we look at. \includegraphics{universe} There's a picture of a galaxy above \end{document} LaTeX cannot manage images by itself, so we need to use the graphicx package. To use it, we include the following line in the preamble: \usepackage{graphicx} The command \graphicspath{ {./images/} } tells LaTeX that the images are kept in a folder named images under the directory of the main document. The \includegraphics{universe} command is the one that actually includes the image in the document. Here universe is the name of the file containing the image without the extension, so universe.PNG becomes universe. The file name of the image should not contain white spaces nor multiple dots. Note: The file extension is allowed to be included, but it's a good idea to omit it. If the file extension is omitted, LaTeX will search for all the supported formats. For more details see the section about generating high resolution and low resolution images. When working on a document which includes several images it's possible to keep those images in one or more separate folders so that your project is more organised. The command \graphicspath{ {images/} } tells LaTeX to look in the images folder. The path is relative to the current working directory, so the compiler will look for the file in the same folder as the code where the image is included.
The path to the folder is relative by default; if there is no initial directory specified, the path is taken relative to the .tex file, for instance %Path relative to the .tex file containing the \includegraphics command \graphicspath{ {images/} } This is a typically straightforward way to reach the graphics folder within a file tree, but it can lead to complications when .tex files within folders are included in the main .tex file. Then, the compiler may end up looking for the images folder in the wrong place. Thus, it is best practice to specify the graphics path relative to the main .tex file, denoting the main .tex file directory as ./ , for instance %Path relative to the main .tex file \graphicspath{ {./images/} } as in the introduction. The path can also be absolute, if the exact location of the file on your system is specified. For example: %Path in Windows format: \graphicspath{ {c:/user/images/} } %Path in Unix-like (Linux, Mac OS) format \graphicspath{ {/home/user/images/} } Notice that this command requires a trailing slash / and that the path is in between double braces. You can also set multiple paths if the images are saved in more than one folder. For instance, if there are two folders named images1 and images2, use the command \graphicspath{ {./images1/}{./images2/} } If no path is set, LaTeX will look for pictures in the folder where the .tex file that includes the image is saved. If we want to further specify how LaTeX should include our image in the document (length, height, etc.), we can pass those settings in the following format: \begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation. \includegraphics[scale=1.5]{lion-logo} The command \includegraphics[scale=1.5]{lion-logo} will include the image lion-logo in the document; the extra parameter scale=1.5 will do exactly that, scale the image to 1.5 times its real size.
You can also scale the image to a specific width and height. \begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. It also offers rather extensive help documentation. \includegraphics[width=3cm, height=4cm]{lion-logo} As you have probably guessed, the parameters inside the brackets [width=3cm, height=4cm] define the width and the height of the picture. You can use different units for these parameters. If only the width parameter is passed, the height will be scaled to keep the aspect ratio. The length units can also be relative to some elements in the document. If you want, for instance, to make a picture the same width as the text: \begin{document} The universe is immense and it seems to be homogeneous, on a large scale, everywhere we look at. \includegraphics[width=\textwidth]{universe} Instead of \textwidth you can use any other default LaTeX length: \columnsep, \linewidth, \textheight, \paperheight, etc. See the reference guide for a further description of these units. There is another common option when including a picture within your document: rotating it. This can easily be accomplished in LaTeX: \begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. It also offers rather extensive help documentation. \includegraphics[scale=1.2, angle=45]{lion-logo} The parameter angle=45 rotates the picture 45 degrees counter-clockwise. To rotate the picture clockwise use a negative number. The previous section explained how to include images in your document, but the combination of text and images may not look the way we expected. To change this we need to introduce a new environment. In the next example the figure will be positioned right below this sentence. \begin{figure}[h] \includegraphics[width=8cm]{Plot} \end{figure} The figure environment is used to display pictures as floating elements within the document.
This means you include the picture inside the figure environment and you don't have to worry about its placement; LaTeX will position it in such a way that it fits the flow of the document. Sometimes, however, we need more control over the way the figures are displayed. An additional parameter can be passed to determine the figure positioning. In the example \begin{figure}[h], the parameter inside the brackets sets the position of the figure to here. Below is a table of the possible positioning values. h Place the float here, i.e., approximately at the same point it occurs in the source text (however, not exactly at the spot). t Position at the top of the page. b Position at the bottom of the page. p Put on a special page for floats only. ! Override internal parameters LaTeX uses for determining "good" float positions. H Places the float at precisely the location in the LaTeX code. Requires the float package, though it may cause problems occasionally. This is somewhat equivalent to h!. In the next example you can see a picture at the top of the document, despite being declared below the text. In this picture you can see a bar graph that shows the results of a survey which involved some important data studied as time passed. \begin{figure}[t] \includegraphics[width=8cm]{Plot} \centering \end{figure} The additional command \centering will centre the picture. The default alignment is left. It's also possible to wrap the text around a figure. When the document contains small pictures this makes it look better. \begin{wrapfigure}{r}{0.25\textwidth} %this figure will be at the right \centering \includegraphics[width=0.25\textwidth]{mesh} \end{wrapfigure} There are several ways to plot a function of two variables, depending on the information you are interested in. For instance, if you want to see the mesh of a function so it is easier to see the derivative, you can use a plot like the one on the left.
\begin{wrapfigure}{l}{0.25\textwidth} \centering \includegraphics[width=0.25\textwidth]{contour} \end{wrapfigure} On the other hand, if you are only interested in certain values, you can use the contour plot, like the one on the left. For the commands in the example to work, you have to import the package wrapfig. Add to the preamble the line \usepackage{wrapfig}. Now you can define the wrapfigure environment by means of the commands \begin{wrapfigure}{l}{0.25\textwidth} \end{wrapfigure}. Notice that the environment has two additional parameters enclosed in braces: {l} aligns the figure to the left, {0.25\textwidth} sets its width, and \centering centres the picture inside the environment. For a more complete article about image positioning see Positioning images and tables. Captioning images to add a brief description and labelling them for further reference are two important tools when working on a lengthy text. Let's start with a caption example: \begin{figure}[h] \caption{Example of a parametric plot ($\sin (x), \cos(x), x$)} \centering \includegraphics[width=0.5\textwidth]{spiral} \end{figure} It's really easy: just add \caption{Some caption} and inside the braces write the text to be shown. The placement of the caption depends on where you place the command; if it's above the \includegraphics then the caption will be on top of the figure, if it's below then the caption will also be set below the figure. Captions can also be placed right after the figures.
The sidecap package uses code similar to the previous example to accomplish this. \documentclass{article} \usepackage[rightcaption]{sidecap} \usepackage{graphicx} %package to manage images \graphicspath{ {images/} } \begin{SCfigure}[0.5][h] \caption{Using again the picture of the universe. This caption will be on the right} \includegraphics[width=0.6\textwidth]{universe} \end{SCfigure} There are two new commands here. In \usepackage[rightcaption]{sidecap}, the parameter rightcaption establishes the placement of the caption at the right of the picture (you can also use leftcaption). In \begin{SCfigure}[0.5][h] \end{SCfigure}, the positioning parameter h works exactly as in the figure environment. You can do more advanced management of the caption formatting; check the further reading section for references. Figures, just as many other elements in a LaTeX document (equations, tables, plots, etc.) can be referenced within the text. This is very easy: just add a label to the figure or SCfigure environment, then later use that label to refer to the picture. \begin{figure}[h] \centering \includegraphics[width=0.25\textwidth]{mesh} \caption{a nice plot} \label{fig:mesh1} \end{figure} As you can see in figure \ref{fig:mesh1}, the function grows near 0. Also, on page \pageref{fig:mesh1} is the same example. There are three commands that generate cross-references in this example: \label{fig:mesh1}, \ref{fig:mesh1} and \pageref{fig:mesh1}. The \caption is mandatory to reference a figure. Another great characteristic of a LaTeX document is the ability to automatically generate a list of figures. This is straightforward. This command only works on captioned figures, since it uses the caption in the table. The example above lists the images in this article. Important note: when using cross-references your LaTeX project must be compiled twice, otherwise the references, the page references and the table of figures won't work. So far, while specifying the image file name in the \includegraphics command, we have omitted file extensions.
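The command that prints the list of figures is \listoffigures; a minimal sketch (the file name mesh is a placeholder):

```latex
\documentclass{article}
\usepackage{graphicx}
\begin{document}

\listoffigures % prints every captioned figure with its number and page

\begin{figure}[h]
  \centering
  \includegraphics[width=0.25\textwidth]{mesh}
  \caption{a nice plot}
\end{figure}

\end{document}
```

As noted above, compile twice so the list picks up the captions.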
Including the extension is not necessary, and omitting it is often useful. If the file extension is omitted, LaTeX will search for any supported image format in that directory, trying the various extensions in a default order (which can be modified). This is useful for switching between development and production environments. In a development environment (when the article/report/book is still in progress), it is desirable to use low-resolution versions of images (typically in .png format) for fast compilation of the preview. In the production environment (when the final version of the article/report/book is produced), it is desirable to include the high-resolution version of the images. This is accomplished with the \DeclareGraphicsExtensions command. Thus, if we have two versions of an image, venndiagram.pdf (high-resolution) and venndiagram.png (low-resolution), then we can include the following line in the preamble to use the .png version while developing the report: \DeclareGraphicsExtensions{.png,.pdf} The command above will ensure that if two files are encountered with the same base name but different extensions (for example venndiagram.pdf and venndiagram.png), then the .png version will be used first, and in its absence the .pdf version will be used; this is also a good idea if some low-resolution versions are not available. Once the report has been developed, to use the high-resolution .pdf version, we can change the line in the preamble specifying the extension search order to \DeclareGraphicsExtensions{.pdf,.png} Improving on the technique described in the previous paragraphs, we can also instruct LaTeX to generate low-resolution .png versions of images on the fly while compiling the document if there is a PDF that has not been converted to PNG yet.
To achieve that, we can include the following in the preamble after \usepackage{graphicx}: \usepackage{epstopdf} \epstopdfDeclareGraphicsRule{.pdf}{png}{.png}{convert #1 \OutputFile} \DeclareGraphicsExtensions{.png,.pdf} If venndiagram2.pdf exists but not venndiagram2.png, the file venndiagram2-pdf-converted-to.png will be created and loaded in its place. The command convert #1 is responsible for the conversion, and additional parameters may be passed between convert and #1, for example convert -density 100 #1. There are some important things to keep in mind though: the compilation must be run with the --shell-escape option, and for the final version the \epstopdfDeclareGraphicsRule should be removed, so that only high-resolution PDF files are loaded; we'll also need to change the order of precedence. LaTeX units and lengths: pt A point, the default length unit, about 0.3515mm. mm a millimetre. cm a centimetre. in an inch. ex the height of an x in the current font. em the width of an m in the current font. \columnsep distance between columns. \columnwidth width of the column. \linewidth width of the line in the current environment. \paperwidth width of the page. \paperheight height of the page. \textwidth width of the text. \textheight height of the text. \unitlength units of length in the picture environment. About image types in LaTeX: JPG: best choice if we want to insert photos. PNG: best choice if we want to insert diagrams (if a vector version could not be generated) and screenshots. PDF: even though we are used to seeing PDF documents, a PDF can also store images. EPS: EPS images can be included using the epstopdf package (we just need to install the package; we don't need to use \usepackage{} to include it in our document). For more information see
How is the induced drag calculated for a wing with elliptical planform? Is this wing shape the most efficient? Induced drag is caused by the downward deflection of the air streaming around the wing. The resulting aerodynamic force is tilted backwards by half the deflection angle, and the air flows off the wing with an added vertical speed component, producing downwash. Increasing the downwash angle means increasing both lift and the backward tilt, so the induced drag goes up with the square of the lift produced. If you want to minimize induced drag for a given lift, this quadratic dependence means the optimum is reached when the downwash angle is constant over span. How is the induced drag calculated for a wing with elliptical planform? The elliptical, untwisted wing has the same angle of attack and the same lift coefficient over span, and produces the desired constant downwash angle. To simplify things, let's assume the wing is just acting on the air with density $\rho$ flowing with speed $v$ through a circle with a diameter equal to the span $b$ of the wing. If we just look at this stream tube, the mass flow is $$\frac{dm}{dt} = \frac{b^2}{4}\cdot\pi\cdot\rho\cdot v$$ Lift $L$ is then the rate of momentum change caused by the wing. With the downward air speed $v_z$ imparted by the wing, lift is: $$L = \frac{b^2}{4}\cdot\pi\cdot\rho\cdot v\cdot v_z = S\cdot c_L\cdot\frac{v^2}{2}\cdot\rho$$ $S$ is the wing area and $c_L$ the overall lift coefficient. If we now solve for the vertical air speed, we get $$v_z = \frac{S\cdot c_L\cdot\frac{v^2}{2}\cdot\rho}{\frac{b^2}{4}\cdot\pi\cdot\rho\cdot v} = \frac{2\cdot c_L\cdot v}{\pi\cdot AR}$$ with $AR = \frac{b^2}{S}$ the aspect ratio of the wing. Now we can divide the vertical speed by the air speed to calculate the angle by which the air has been deflected by the wing.
Let's call it $\alpha_w$: $$\alpha_w = \arctan\left(\frac{v_z}{v}\right) = \arctan \left(\frac{2\cdot c_L}{\pi\cdot AR}\right)$$ The deflection happens gradually along the wing chord, so the mean local flow angle along the chord is just $\alpha_w / 2$. Lift acts perpendicularly to this local flow, and thus is tilted backwards by $\alpha_w / 2$. In coefficients, lift is $c_L$, and the backwards component is $\alpha_w / 2 \cdot c_L$. Let's call this component $c_{Di}$: $$c_{Di} = \arctan \left(\frac{c_L}{\pi\cdot AR}\right)\cdot c_L$$ For small $\alpha_w$ the arctangent can be replaced by its argument, and we get this familiar-looking equation for the backwards-pointing component of the reaction force: $$c_{Di} = \frac{c_L^2}{\pi\cdot AR}$$ If the circulation over span has an elliptic distribution, the local change in circulation times the local amount of circulation is constant, and the induced drag $c_{Di}$ is at its minimum. If this were different, a higher local $v_z$ would cause a quadratic increase in local induced drag, so the whole wing would create its lift less efficiently. Is this wing shape the most efficient? Only if you ask an aerodynamicist would the answer be yes. An elliptic wing will give you the best ratio of lift to drag, which clearly is one way to express efficiency. In reality, the wing has to lift itself plus a payload, but only lifting the payload should be considered when formulating efficiency. Therefore, pure lift/drag optimization is too narrow. What should count is the best ratio of lift minus wing weight relative to drag. R. T. Jones wrote a NACA Technical Note back in 1950 in which he looked at this problem analytically. Wing weight goes up when much lift is created near the tips, because this lift will cause a disproportionate root bending moment, and the wing spar which has to carry this bending moment is a significant part of the wing structure.
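The closed-form result above is easy to check numerically. A minimal sketch (the function names and the sample values $c_L = 1.0$, $AR = 20$ are illustrative choices, not from the answer):

```python
import math

def downwash_angle(c_l, aspect_ratio):
    """Full deflection angle alpha_w of the stream tube, in radians."""
    return math.atan(2.0 * c_l / (math.pi * aspect_ratio))

def induced_drag_coefficient(c_l, aspect_ratio):
    """Small-angle induced drag coefficient c_Di = c_L^2 / (pi * AR)."""
    return c_l**2 / (math.pi * aspect_ratio)

# Example: a high-aspect-ratio (glider-like) wing
c_l, ar = 1.0, 20.0
alpha_w = downwash_angle(c_l, ar)
c_di = induced_drag_coefficient(c_l, ar)

# The backward tilt alpha_w/2 times c_L agrees with the small-angle formula
print(alpha_w, c_di, math.atan(c_l / (math.pi * ar)) * c_l)
```

For this wing the downwash angle is under a degree, so the small-angle replacement of the arctangent costs almost nothing.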
Therefore, reducing lift at the tips and adding more lift at the root will create a lighter wing for a modest drag increase, resulting in an overall optimum for an almost triangular lift distribution. When compared to an elliptical wing planform, the total wing span of such an optimized wing is bigger for the same overall drag, but this wing will weigh less. Spanwise loading comparison for wings of the same fixed lift, from NACA Technical Note 2249. But this is too easy. Scaling laws must be considered in addition. You know that elephants have much more massive legs relative to their body size than antelopes (or even ants, for an even more drastic comparison), since body mass scales with the cube of linear dimension while structural strength scales only with the square of linear dimension. This means that wing spar weight will be proportionally higher for larger aircraft. As a consequence, insects have more elliptic wings than albatrosses, and model aircraft have optimum wings which are much more elliptic than the optimum wing of an airliner. The optimum shifts from an elliptic load distribution at very small scales to an almost triangular distribution at large scales. For a wing with elliptical span loading, the induced drag can be directly calculated from the lift coefficient. The induced drag coefficient $C_{D_{i}}$ can be calculated as, $C_{D_{i}} = \frac{C_{L}^{2}}{\pi A}$ where $C_{L}$ is the lift coefficient and $A$ is the aspect ratio. Elliptical loading produces the minimum induced drag according to the lifting line theory, when only the span and lift are considered. If other considerations come into play (like wing bending moment), the most efficient shape varies. As for why the best distribution is elliptic, the equations can be readily derived from lifting line theory; basically this is because the downwash is constant along the span. A good way of reasoning why this is so is given in The Minimum Induced Drag of Airfoils by Max Munk, NACA Report No. 
121. If the distribution is the best one, the drag cannot be decreased or increased by transferring one lifting element from its old position to some new position. Now, the share of one element in the drag is composed of two parts. It takes a share in producing a downwash in the neighborhood of other lifting elements and, in consequence, a change in their drag. It has itself a drag, being situated in the downwash produced by the other elements. ... In the case of the lifting straight line, the two downwashes, each produced by one element in the neighborhood of the other, are equal. For this reason, the two drags of the two elements each produced by the other are equal too, and hence the two parts of the entire drag of the wings due to one element. ... hence, the entire drag due to one element is unchanged when the element is transferred from one situation to a new one of the same downwash, and the distribution is best only if the downwash is constant over the whole wing. For this reason, when only the span and lift are considered, the elliptical loading gives the minimum induced drag, as the downwash is constant over the wing. When the constraints are modified, other distributions and wing shapes become more efficient. For example, from On the Minimum Induced Drag of Wings by A. H. Bowers: $\diamond$ Prandtl/Munk (1914): elliptical; constrained only by span and lift; downwash $y = c$. $\diamond$ Prandtl/Horten/Jones (1932): bell shaped; constrained by lift and bending moment; downwash $y = bx + c$. $\diamond$ Klein/Viswanathan (1975): modified bell shape; constrained by lift, moment and shear (minimum structure); downwash $y = ax^{2} + bx + c$.
Baudhayana (fl. c. 800 BCE)[1] was an Indian mathematician, who was most likely also a priest. He is noted as the author of the earliest Sulba Sutra—appendices to the Vedas giving rules for the construction of altars—called the Baudhayana Sulbasûtra, which contained several important mathematical results. He is older than the other famous mathematician Apastamba, and belonged to the Yajurveda school. He is credited with calculating the value of pi to some degree of precision, and with discovering what is now known as the Pythagorean theorem. The mathematics in the Shulbasutra Pythagorean theorem The most notable of the rules (the Sulbasutras do not contain any proofs of the rules which they describe) in the Baudhayana Sulba Sutra says: “dirghasyaksanaya rajjuH parsvamani, tiryaDaM mani, cha yatprthagbhUte kurutastadubhayan karoti.” A rope stretched along the length of the diagonal produces an area which the vertical and horizontal sides make together. This appears to be referring to a rectangle, although some interpretations consider it to refer to a square. In either case, it states that the square of the hypotenuse equals the sum of the squares of the sides. If restricted to right-angled isosceles triangles it would constitute a less general claim, but the text seems to be quite open to unequal sides. If this refers to a rectangle, it is the earliest recorded statement of the Pythagorean theorem. Baudhayana also provides a non-axiomatic demonstration, using a rope measure, of the reduced form of the Pythagorean theorem for an isosceles right triangle: The cord which is stretched across a square produces an area double the size of the original square. Circling the square Another problem tackled by Baudhayana is that of finding a circle whose area is the same as that of a square (the reverse of squaring the circle).
His sutra i.58 gives this construction: “Draw half its diagonal about the centre towards the East-West line; then describe a circle together with a third part of that which lies outside the square.” Explanation: * Draw the half-diagonal of the square, which is larger than the half-side by x = {a \over 2}\sqrt{2}- {a \over 2}. * Then draw a circle with radius {a \over 2} + {x \over 3}, or {a \over 2} + {a \over 6}(\sqrt{2}-1), which equals {a \over 6}(2 + \sqrt{2}). * Now (2+\sqrt{2})^2 \approx 11.66 \approx {36.6\over \pi}, so the area {\pi}r^2 \approx \pi \times {a^2 \over 6^2} \times {36.6\over \pi} \approx a^2. Square root of 2 Baudhayana i.61-2 (elaborated in Apastamba Sulbasutra i.6) gives the length of the diagonal of a square in terms of its sides, which is equivalent to a formula for the square root of 2: samasya dvikarani. pramanam trtiyena vardhayet tac caturthenatmacatustrimsonena savisesah The diagonal of a square. The measure is to be increased by a third and by a fourth decreased by the thirty-fourth. That is its diagonal approximately. \sqrt{2} = 1 + \frac{1}{3} + \frac{1}{3 \cdot 4} - \frac{1}{3 \cdot 4 \cdot 34} = \frac{577}{408} \approx 1.414216 which is correct to five decimal places. Other theorems include: the diagonals of a rectangle bisect each other, the diagonals of a rhombus bisect at right angles, the area of the square formed by joining the midpoints of a square is half that of the original, and the midpoints of a rectangle joined together form a rhombus whose area is half the rectangle, etc. Note the emphasis on rectangles and squares; this arises from the need to specify yajña bhumikas—i.e. the altars on which rituals were conducted, including fire offerings (yajña). Apastamba (c. 600 BC) and Katyayana (c. 200 BC), authors of other sulba sutras, extend some of Baudhayana’s ideas. Apastamba provides a more general proof[citation needed] of the Pythagorean theorem.
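The stated accuracy of Baudhayana's rule is easy to verify with exact fractions; a quick sketch:

```python
from fractions import Fraction
import math

# Baudhayana's rule: 1 + 1/3 + 1/(3*4) - 1/(3*4*34)
approx = (Fraction(1) + Fraction(1, 3)
          + Fraction(1, 3 * 4) - Fraction(1, 3 * 4 * 34))
print(approx)  # 577/408

# Compare with the true square root of 2
error = abs(float(approx) - math.sqrt(2))
print(f"{float(approx):.7f}  (error ~ {error:.1e})")
```

The error is about 2e-6, consistent with the claim that the approximation is correct to five decimal places.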
In this paper we study constrained variational problems that are principally motivated by nonlinear elasticity theory. We examine, in particular, the relationship between the positivity of the Jacobian $\det \nabla u$ and the uniqueness and regularity of energy minimizers $u$ that are either twist maps or shear maps. We exhibit explicit twist maps, defined on two-dimensional annuli, that are stationary points of an appropriate energy functional and whose Jacobian vanishes on a set of positive measure in the annulus. Within the class of shear maps we precisely characterize the unique global energy minimizer $u_{\sigma }: \Omega \to \mathbb{R}^2$ in a model, two-dimensional case. We exploit the Jacobian constraint $\det \nabla u_{\sigma} > 0$ a.e. to obtain regularity results that apply ‘up to the boundary’ of domains with corners.
It is shown that the unique shear map minimizer has the properties that (i) $\det \nabla u_{\sigma }$ is strictly positive on one part of the domain $\Omega$, (ii) $\det \nabla u_{\sigma } = 0$ necessarily holds on the rest of $\Omega$, and (iii) properties (i) and (ii) combine to ensure that $\nabla u_{\sigma }$ is not continuous on the whole domain.
I am working on zero-inflated count data models using the pscl package. I am just wondering why there is no development of models for one-inflated count data! Also, why is there no development of bimodal, say zero-and-2-inflated, count data models? Once I generated one-inflated Poisson data and found that neither the glm with family=poisson model nor the negative binomial (glm.nb) model was good enough to fit the data well. If anyone can shed some light on my thought, eccentric though it might be, it would be very helpful for me. A one-inflated Poisson model for a count $Y_i$ is $$\begin{align}\Pr(Y_i = 1) &= \pi_i +(1-\pi_i)\cdot\mu_i\mathrm{e}^{-\mu_i}\\ \Pr(Y_i = y_i) &= (1-\pi_i)\cdot\frac{\mu_i^{y_i}\mathrm{e}^{-\mu_i}}{y_i!} \qquad \text{when } y_i\neq 1 \end{align}$$ where the Poisson mean $\mu_i$ & Bernoulli probability $\pi_i$ are related to the predictors through appropriate link functions. You can define a similar model to inflate probabilities for any values you choose. Still, zero has a special (& once controversial) place among the counting numbers—in a sense representing the absence of anything to count. And it's the "nothing" vs "something" distinction, rather than the "one" vs "any other count" distinction, that tends to be relevant across a wide range of phenomena we like to model: there's one process that gives a nought, one, two, ... count & another that gives no count at all. The R package VGAM has the function vglm, which can be used to fit all sorts of Poisson-esque models. You can use it to specify a one-inflated model, so something like vglm(Y~X, family=oipospoisson(), data=data). See here for more details.
Suppose that we use the Schwinger-fermion ($\mathbf{S_i}=\frac{1}{2}f_i^\dagger\mathbf{\sigma}f_i$) mean-field theory to study the Heisenberg model on 2D lattices, and now we arrive at the mean-field Hamiltonian of the form $H_{MF}=\sum_{<ij>}(\psi_i^\dagger u_{ij}\psi_j+H.c.)$ with $u_{ij}=t\sigma_z$ ($t>0$), where $\psi_i=(f_{i\uparrow},f_{i\downarrow}^\dagger)^T$, and $\sigma_z$ is the third Pauli matrix. Now let's find the IGG of $H_{MF}$; by definition, the pure gauge transformations in the IGG should satisfy $G_iu_{ij}G_j^\dagger=u_{ij}\Rightarrow G_j=\sigma_zG_i\sigma_z$ on each link $<ij>$—(1). Specifically, consider the IGGs on the following different 2D lattices: (a) Square and honeycomb lattices (unfrustrated): These two lattices can both be viewed as constituted by 2 sublattices denoted as $A$ and $B$. Due to Eq.(1), it's easy to show that for both of these lattices the gauge transformations $G_i$ in the same sublattice are site-independent, while those in different sublattices differ by $G_A=\sigma_zG_B\sigma_z$, and $IGG=SU(2)$. (b) Triangular and Kagome lattices (frustrated): Due to Eq.(1), it's easy to show that for both of these lattices the gauge transformations $G_i$ are global (site-independent) and $G_i=\bigl(\begin{smallmatrix} e^{i\theta }& 0\\ 0& e^{-i\theta }\end{smallmatrix}\bigr)$, which means that $IGG=U(1)$. So my question is: may the same form of mean-field Hamiltonian $H_{MF}$ have different IGGs on different lattices? This post imported from StackExchange Physics at 2014-03-09 08:43 (UCT), posted by SE-user K-boy
$\newcommand{\M}{\mathcal{M}}$ Suppose I have a monoidal simplicial model category $(\M,\otimes,\mathbb{1})$ in which every object is cofibrant, and I want to look at its underlying monoidal quasicategory, which I'll write as $N(\M)$, the simplicial nerve of $\M$. One way to do this is the following construction (following Variant 4.1.3.17 of Lurie's Higher Algebra): First take the sub-simplicial monoidal category of fibrant and cofibrant objects of $\M$, denoted $\M^\circ$. This is still a monoidal simplicial category. Now produce its so-called "category of operators," i.e. the free semi-cartesian monoidal category on its underlying simplicial multicategory. To be explicit, do the following: Let $Multi(\M^\circ)$ be the simplicial multicategory whose objects are the objects of $\M^\circ$ and whose "multimapping" objects are $$Mul(\{m_1,\ldots,m_k\},m)=\coprod_{\alpha\in\Sigma_k} Hom_{\M^\circ}(m_{\alpha(1)}\otimes\cdots\otimes m_{\alpha(k)},m).$$ Now produce the category of operators of this multicategory, which I'll write as $(\M^\circ)^\otimes$. This category has as objects pairs $(\langle k\rangle,\{m_1,\ldots,m_k\})$ where $\langle k\rangle$ is the finite pointed set $\{\ast,1,\ldots,k\}$ and $\{m_1,\ldots,m_k\}$ is a finite list of objects of $Multi(\M^\circ)$ (hence a finite list of objects of $\M^\circ$). This category has mapping objects given by $$Hom_{(\M^\circ)^\otimes}((\langle k\rangle,\{m_1,\ldots,m_k\}),(\langle j\rangle,\{l_1,\ldots,l_j\}))=\coprod_{f:\langle k\rangle\to\langle j\rangle}\prod_{1\leq r\leq j}Mul(\{m_i\}_{i\in f^{-1}(r)},l_r).$$ This construction probably seems a bit unwieldy, but the point is that this final category $(\M^\circ)^\otimes$ admits a forgetful functor $(\M^\circ)^\otimes\to Ass^\otimes$, the category of operators of the associative operad.
In this case, since all of our mapping objects are Kan complexes (this follows from the fact that we took bifibrant objects and that cofibrant objects are closed under tensor product), we get that $N((\M^\circ)^\otimes)\to N(Ass^\otimes)$ defines a monoidal structure on the quasicategory $N(\M^\circ)$. Now, my question is about the following: Suppose my simplicial monoidal model category $\M$ has a tensor product that preserves fibrant objects in addition to cofibrant objects. Then take the opposite category of $\M^\circ$, which we'll write $(\M^\circ)^{op}$, which is still a simplicial monoidal category. Moreover, since the tensor product preserves fibrancy, the mapping objects of $((\M^\circ)^{op})^\otimes$ are Kan complexes, and so we still have a structure map of quasicategories $N(((\M^\circ)^{op})^\otimes)\to N(Ass^\otimes)$ defining a monoidal structure on $N((\M^\circ)^{op})$. My question is the following: is the monoidal structure so defined on $N((\M^\circ)^{op})$ equivalent to the monoidal structure on $N(\M^\circ)^{op}$ obtained by using the Grothendieck construction to get a functor $N(Ass^\otimes)\to Cat_\infty$, composing with $op$ to get a functor $N(Ass^\otimes)\to Cat_\infty\overset{op}{\to} Cat_\infty$, and then reversing the Grothendieck construction? ----(EDIT)---- Perhaps a bit of background. The main point of this whole thing is to take a simplicial monoidal model category and lift strict comonoids and comodules therein to its associated monoidal quasicategory (in the sense of Lurie). These can be relatively easily defined to be strict monoids and modules in the opposite category, but one runs into difficulties using Lurie's set-up to access the monoidal structure on the nerve of the opposite category. When the tensor product preserves fibrancy, it seems like the above construction might be a workaround.
At the very least, it lets you define comodules and comonoids in SOME monoidal quasicategory, but it's not clear that this is equivalent to the canonically defined (up to contractible space of choices) opposite monoidal quasicategory. If anyone has some other idea for lifting comonoids and comodules, I'd be very interested to know about it.
I am currently working on a practice problem for my upcoming exam and I have difficulties getting my head around moment of inertia. If the ball has mass $m$ and is going around in a circle with velocity $v_0$, then I can determine its angular momentum: $\vec{L}=m(\vec{r}_0\times \vec{v}_0)$ Now suppose someone pulls on the string until the ball goes around the circle with radius $\frac{r_0}{2}$. What is its new velocity? I assumed angular momentum is conserved (I am not sure about this). $\implies \vec{L}_1=\vec{L}_2 \iff m(\vec{r}_0\times \vec{v}_0)=m(\frac{\vec{r}_0}{2}\times \vec{v}_1)$ $\iff \vec{r}_0\times \vec{v}_0=\frac{\vec{r}_0}{2}\times \vec{v}_1 \iff |r_0||v_0|\sin(\alpha)=|\frac{r_0}{2}||v_1|\sin(\alpha)$ I also assumed that the angle between any $r$ and $v$ would not change (also not sure about that one). $\implies v_1=2v_0$ This would mean that half the radius implies double the velocity. I am somewhat puzzled by this, because $v=\omega r$ tells me that the velocity should be half of the original velocity unless $\omega$ has changed. This brings me back to moment of inertia. Did the moment of inertia change when the ball went from $r_0$ to $\frac{r_0}{2}$? This would explain the change in angular velocity, since $\vec{L}=I\cdot \vec{\omega}$.
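The relations in the question can be checked with a quick numerical sketch (the sample values for $m$, $r_0$, $v_0$ are arbitrary, chosen only to verify the algebra):

```python
# Point mass on a string: halve the radius while conserving angular momentum.
m, r0, v0 = 1.0, 2.0, 3.0        # arbitrary illustrative values
L0 = m * r0 * v0                 # |L| = m r v for circular motion (r ⟂ v)

r1 = r0 / 2.0
v1 = L0 / (m * r1)               # from conservation: m r1 v1 = m r0 v0

omega0, omega1 = v0 / r0, v1 / r1
I0, I1 = m * r0**2, m * r1**2    # moment of inertia of a point mass

print(v1 == 2 * v0)                # half the radius -> double the speed
print(omega1 == 4 * omega0)        # omega quadruples, so v = omega*r still holds
print(I0 * omega0 == I1 * omega1)  # L = I*omega is conserved while I drops to I/4
```

So yes: the moment of inertia falls to a quarter of its value, the angular velocity quadruples, and both $v = \omega r$ and $L = I\omega$ remain consistent.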
Jansen, Maurice; Qiao, Youming; Sarma M.N., Jayalal. Deterministic Black-Box Identity Testing $\pi$-Ordered Algebraic Branching Programs. Abstract: In this paper we study algebraic branching programs (ABPs) with restrictions on the order and the number of reads of variables in the program. An ABP is given by a layered directed acyclic graph with source $s$ and sink $t$, whose edges are labeled by variables taken from the set $\{x_1, x_2, \ldots, x_n\}$ or field constants. It computes the sum of weights of all paths from $s$ to $t$, where the weight of a path is defined as the product of edge-labels on the path. Given a permutation $\pi$ of the $n$ variables, for a $\pi$-ordered ABP ($\pi$-OABP), for any directed path $p$ from $s$ to $t$, a variable can appear at most once on $p$, and the order in which variables appear on $p$ must respect $\pi$. One can think of OABPs as being the arithmetic analogue of ordered binary decision diagrams (OBDDs). We say an ABP $A$ is of read $r$ if any variable appears at most $r$ times in $A$. Our main result pertains to the polynomial identity testing problem, i.e. the problem of deciding whether a given $n$-variate polynomial is identical to the zero polynomial or not. We prove that over any field $\F$, and in the black-box model, i.e. given only query access to the polynomial, read-$r$ $\pi$-OABP computable polynomials can be tested in $\DTIME[2^{O(r\log r \cdot \log^2 n \log\log n)}]$. In case $\F$ is a finite field, the above time bound holds provided the identity testing algorithm is allowed to make queries to extension fields of $\F$. To establish this result, we combine some basic tools from algebraic geometry with ideas from derandomization in the Boolean domain. Our next set of results investigates the computational limitations of OABPs. It is shown that any OABP computing the determinant or permanent requires size $\Omega(2^n/n)$ and read $\Omega(2^n/n^2)$.
We give a multilinear polynomial $p$ in $2n+1$ variables over some specifically selected field $\mathbb{G}$, such that any OABP computing $p$ must read some variable at least $2^n$ times. We prove a strict separation for the computational power of read-$(r-1)$ and read-$r$ OABPs. Namely, we show that the elementary symmetric polynomial of degree $r$ in $n$ variables can be computed by a size-$O(rn)$ read-$r$ OABP, but not by a read-$(r-1)$ OABP, for any $0 < 2r-1 \leq n$. Finally, we give an example of a polynomial $p$ and two variable orders $\pi \neq \pi'$, such that $p$ can be computed by a read-once $\pi$-OABP, but where any $\pi'$-OABP computing $p$ must read some variable at least $2^n$ times.

BibTeX - Entry

@InProceedings{jansen_et_al:LIPIcs:2010:2872,
  author =    {Maurice Jansen and Youming Qiao and Jayalal Sarma M.N.},
  title =     {{Deterministic Black-Box Identity Testing $\pi$-Ordered Algebraic Branching Programs}},
  booktitle = {IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
  pages =     {296--307},
  series =    {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =      {978-3-939897-23-1},
  ISSN =      {1868-8969},
  year =      {2010},
  volume =    {8},
  editor =    {Kamal Lodaya and Meena Mahajan},
  publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address =   {Dagstuhl, Germany},
  URL =       {http://drops.dagstuhl.de/opus/volltexte/2010/2872},
  URN =       {urn:nbn:de:0030-drops-28728},
  doi =       {10.4230/LIPIcs.FSTTCS.2010.296},
  annote =    {Keywords: ordered algebraic branching program, polynomial identity testing}
}

Keywords: ordered algebraic branching program, polynomial identity testing
Seminar: IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)
Issue date: 2010
Date of publication: 2010
Does an increase in the thickness of a cambered airfoil mean more drag will be produced? Why do slow-flying planes have thick airfoils when it will just slow them down even more? There are several reasons for using thick$^*$ airfoils. Slow planes need high lift coefficients in order to fly slowly. The increased drag coefficient is the price to pay for a higher $C_{L,max}$. Take a look at this diagram: Source: H. Schlichting, E. Truckenbrodt, Aerodynamik des Flugzeuges. Colors added. At low Reynolds numbers the maximum lift coefficient is achieved by the thickest airfoil (18%). At slightly larger Reynolds numbers the 12% thickness airfoil gains the upper hand. However, the thinnest airfoil (9%) has on average about 20% less maximum lift capability than the thicker ones. The following diagram shows polars of symmetric airfoils with different thicknesses. Source: B. Rögner, Flugwissen. I couldn't find the original source Rögner took the diagram from; if anybody knows it, feel free to comment. The image quality is poor, but you can recognize both the higher $C_{L,max}$ and the higher drag coefficient of thick airfoils. The drag coefficient depends on many factors and, as shown in the formula given in Peter's answer, is not proportional to the thickness. The thickness dependence of the drag coefficient, however, can be assumed to be roughly linear: the $\delta^4$ term is one order of magnitude smaller than the term multiplied by $\delta$. $$\frac{c_{D,\text{normal airfoil}}}{c_f}=2+4\cdot\delta+120\cdot\delta^4$$ $$\frac{c_{D,\text{laminar airfoil}}}{c_f}=2+2.4\cdot\delta+140\cdot\delta^4$$ In Hoerner's Fluid Dynamic Drag, Chapter 6.A.2, you can find more formulas that approximate the drag coefficient and take other parameters into account. A good resource for understanding the influence of airfoil parameters on their aerodynamic properties is NACA TM 824. Stall behavior of thicker airfoils is usually more forgiving.
Another reason for using thick airfoils is that they decrease the wing's weight, as thick structures carry the bending loads in a wing more efficiently. If a wing tank is used, thicker airfoils allow for a higher tank volume. $^*$: I interpret thick as >12%, e.g. 15% as found in the wings of many GA airplanes. Yes, and even more precisely it should be worded a bit differently: the drag coefficient grows roughly linearly with airfoil thickness. Airfoil thickness means that the air has to flow around the airfoil. This displacement effect causes the flow around a thick airfoil to speed up more than around an equivalent but thinner airfoil. The thicker airfoil pushes the air aside and around itself more, causing the flow to accelerate and create more friction than the slower flow around a thinner airfoil. This effect is normally approximated with an additional term in the friction drag formula which is proportional to relative thickness. Starting from the friction coefficient along a straight wall, $c_f$, this additional friction drag has been captured in an empirical formula which gave the best fit to a wealth of airfoil drag data, cambered and uncambered. This is the formula for the zero-lift drag coefficient $c_{d0}$ of an airfoil: $$c_{d0} = c_f\cdot \left(2 + 4\cdot\delta + 120\cdot\left(\frac{1}{\sqrt{1-Ma^2}}\right)^3\cdot\delta^4 - 0.09\cdot Ma^2\right)$$ where $\delta$ is the relative thickness of your airfoil and $Ma$ is the Mach number. That slow-flying aircraft use thick airfoils is not generally true. However, larger aircraft want to use thicker airfoils in their wing root in order to make the wing spar lighter. By using a wider distance between the lower and upper spar caps, smaller caps can be used for the same bending strength. In order to maximise lift, leading- and trailing-edge flaps are used. Thicker airfoils make their integration easier, and they allow the wing to carry more fuel due to its higher internal volume.
However, beyond 20% thickness at subsonic speed and 14% at transonic speed, thickness becomes a liability: the flow will separate too early to make thicker airfoils practical.
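For concreteness, here is a small Python sketch of the zero-lift drag formula quoted above (the function name and the sample numbers are mine, not from the answer):

```python
import math

def zero_lift_drag(cf, delta, mach):
    """Zero-lift drag coefficient c_d0 from the empirical thickness and
    compressibility correction quoted above.
    cf: flat-plate friction coefficient, delta: relative thickness t/c,
    mach: flight Mach number (subsonic only)."""
    compress = (1.0 / math.sqrt(1.0 - mach**2)) ** 3
    return cf * (2 + 4 * delta + 120 * compress * delta**4 - 0.09 * mach**2)

# Example: cf ~ 0.003; compare a 9% and an 18% thick airfoil at Ma = 0.2
thin = zero_lift_drag(0.003, 0.09, 0.2)
thick = zero_lift_drag(0.003, 0.18, 0.2)
assert thick > thin          # thicker airfoil -> more zero-lift drag
assert 0.005 < thin < 0.01   # plausible c_d0 magnitude for these inputs
```

Note how the $\delta^4$ term contributes little at these thicknesses, which is why the linear-in-thickness approximation in the first answer is reasonable.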
The Busemann-Petty problem (posed in 1956) has an interesting history. It asks the following question: if $K$ and $L$ are two origin-symmetric convex bodies in $\mathbb{R}^n$ such that the volume of each central hyperplane section of $K$ is less than the volume of the corresponding section of $L$:$$\operatorname{Vol}_{n-1}(K\cap \xi^\perp)\le \operatorname{Vol}_{n-1}(L\cap \xi^\perp)\qquad\text{for all } \xi\in S^{n-1},$$does it follow that the volume of $K$ is less than the volume of $L$: $\operatorname{Vol}_n(K)\le \operatorname{Vol}_n(L)?$ Many mathematicians' gut reaction to the question is that the answer must be yes, and Minkowski's uniqueness theorem provides some mathematical justification for such a belief---Minkowski's uniqueness theorem implies that an origin-symmetric star body in $\mathbb{R}^n$ is completely determined by the volumes of its central hyperplane sections, so these volumes do contain a vast amount of information about the bodies. It was widely believed that the answer to the Busemann-Petty problem must be affirmative, even though it remained an open conjecture. Nevertheless, in 1975 everyone was caught off-guard when Larman and Rogers produced a counter-example showing that the assertion is false in $n \ge 12$ dimensions. Their counter-example was quite complicated, but in 1986, Keith Ball proved that the maximum central hyperplane section of the unit cube has volume $\sqrt{2}$ regardless of the dimension, and a consequence of this is that the centered unit cube and a centered ball of suitable radius provide a counter-example when $n \ge 10$. Some time later Giannopoulos and Bourgain (independently) gave counter-examples for $n\ge 7$, and then Papadimitrakis and Gardner (independently) gave counter-examples for $n=5,6$.
By 1992 only the three and four dimensional cases of the Busemann-Petty problem remained unsolved, since the problem is trivially true in two dimensions and by that point counter-examples had been found for all $n\ge 5$. Around this time theory had been developed connecting the problem with the notion of an "intersection body". Lutwak proved that if the body with smaller sections is an intersection body then the conclusion of the Busemann-Petty problem follows. Later work by Grinberg, Rivin, Gardner, and Zhang strengthened the connection and established that the Busemann-Petty problem has an affirmative answer in $\mathbb{R}^n$ iff every origin-symmetric convex body in $\mathbb{R}^n$ is an intersection body. But the question of whether a body is an intersection body is closely related to the positivity of the inverse spherical Radon transform. In 1994, Richard Gardner used geometric methods to invert the spherical Radon transform in three dimensions in such a way to prove that the problem has an affirmative answer in three dimensions (which was surprising since all of the results up to that point had been negative). Then in 1994, Gaoyong Zhang published a paper (in the Annals of Mathematics) which claimed to prove that the unit cube in $\mathbb{R}^4$ is not an intersection body and as a consequence that the problem has a negative answer in $n=4$. For three years everyone believed the problem had been solved, but in 1997 Alexander Koldobsky (who was working on completely different problems) provided a new Fourier analytic approach to convex bodies and in particular established a very convenient Fourier analytic characterization of intersection bodies. Using his new characterization he showed that the unit cube in $\mathbb{R}^4$ is an intersection body, contradicting Zhang's earlier claim. It turned out that Zhang's paper was incorrect and this re-opened the Busemann-Petty problem again. 
After learning that Koldobsky's results contradicted his claims, Zhang quickly proved that in fact every origin-symmetric convex body in $\mathbb{R}^4$ is an intersection body and hence that the Busemann-Petty problem has an affirmative answer in $\mathbb{R}^4$---the opposite of what he had previously claimed. This later paper was also published in the Annals, so Zhang may be the only person to have published in such a prestigious journal both that $P$ and that $\neg P$!
I've sometimes seen a construction carried out specifically in Minkowski spacetime: one picks the standard metric tensor $$g = -dt^2 + dr^2 + r^2 d\Omega^2$$ and introduces two new coordinate functions $T,R$ defined by $$T=\arctan(t-r)+\arctan(t+r),\quad R=\arctan(t+r)-\arctan(t-r)$$ These coordinates have finite ranges $0\leq R< \pi$ and $|T|+R<\pi$. The metric tensor then acquires the form $$g=\omega^{-2}(T,R)(-dT^2+dR^2+\sin^2 R \,d\Omega^2)$$ with $\omega(T,R) = \cos T + \cos R$. After this, one pictures $M$ sitting inside a bigger manifold described by the full range $-\infty < T < \infty$ and $0\leq R \leq \pi$, which is in this case $\mathbb{R}\times S^3$. Inside $\mathbb{R}\times S^3$ we have a boundary for $M$ which is decomposed as $$\partial M = i^+\cup i^0\cup i^{-}\cup \mathscr{I}^+\cup \mathscr{I}^-$$ where the pieces are, in coordinates: $$i^+ = \{p\in \mathbb{R}\times S^3 : T(p)=\pi, R(p)=0\} \\ i^0 = \{p\in \mathbb{R}\times S^3 : T(p)=0, R(p)=\pi\}, \\ i^- = \{p\in \mathbb{R}\times S^3 : T(p)=-\pi, R(p)=0\}, \\ \mathscr{I}^+ = \{p\in \mathbb{R}\times S^3 : T(p)=\pi - R(p), 0 < R(p) < \pi\}, \\ \mathscr{I}^- = \{p\in \mathbb{R}\times S^3 : T(p)=-\pi + R(p), 0 < R(p) <\pi\}$$ This allows for a precise way to talk about "infinity". Intuitively it seems that $i^+,i^-$ are respectively the far future and far past for massive particles $(t\to \pm \infty)$, while $\mathscr{I}^+,\mathscr{I}^-$ are the analogues for massless particles, and $i^0$ is spatial infinity $(r\to \infty)$. Unfortunately I've never seen this done in a more general and perhaps coordinate-free setting. My question is: given a general spacetime $(M,g)$, how does one define $i^0,i^+,i^-,\mathscr{I}^-,\mathscr{I}^+$?
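As a sanity check on the claimed coordinate ranges, here is a short Python sketch (my own, not from any reference); it uses the fact that $t \pm r = \tan\frac{T \pm R}{2}$, which follows directly from adding and subtracting the two defining relations:

```python
import math
import random

def compactify(t, r):
    """Map Minkowski (t, r) to the finite-range coordinates (T, R)."""
    T = math.atan(t - r) + math.atan(t + r)
    R = math.atan(t + r) - math.atan(t - r)
    return T, R

random.seed(0)
for _ in range(1000):
    t = random.uniform(-50.0, 50.0)
    r = random.uniform(0.0, 50.0)
    T, R = compactify(t, r)
    # finite ranges claimed in the text
    assert 0 <= R < math.pi and abs(T) + R < math.pi
    # the map inverts via t + r = tan((T+R)/2) and t - r = tan((T-R)/2)
    assert math.isclose(math.tan((T + R) / 2), t + r, rel_tol=1e-9, abs_tol=1e-9)
    assert math.isclose(math.tan((T - R) / 2), t - r, rel_tol=1e-9, abs_tol=1e-9)
```

The check also makes the boundary pieces plausible: pushing $t\to\pm\infty$ at fixed $r$ drives $(T,R)\to(\pm\pi,0)$, i.e. towards $i^\pm$, while $r\to\infty$ at fixed $t$ drives $(T,R)\to(0,\pi)$, i.e. towards $i^0$.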
I have the series: $$\sum_{n=1}^{\infty}\frac {1}{25n^2 + 5n - 6}$$ and I am supposed to prove that the series converges and then find its sum. I have to do multiple problems like this. Can I get an example of how to solve problems like these? You don't have to solve this problem in particular if you don't want. I feel an example problem would help immensely though. EDIT: I am so sorry. I entered the wrong series. I was looking at the problem where it was indeed just to prove that the series converges. The series has been updated to one of the questions where both proving that it converges and finding the sum is the requirement. This was what I had initially, and since a clue was given I thought I might as well think about it too: $$\sum_{n=1}^{\infty}\frac {n+2}{n^3 + 3n^2 + 1}$$ However, I can't seem to rewrite the series so I can compare it to $\frac {1}{n^2}$. The denominator can't be factored further, can it? EDIT 2: Attempt at what I originally had. $$\begin{align}&\sum_{n=1}^{\infty}\frac {n+2}{n^3 + 3n^2 +1} \\ &\le\sum_{n=1}^{\infty}\frac {n+2}{n^3} \\ &=\sum_{n=1}^{\infty}\frac {n}{n^3}+\sum_{n=1}^{\infty}\frac {2}{n^3} \end{align}$$ Since these two series converge, the original must converge as well by the comparison test. Does this work as a proof for the original series? I haven't really written a proof using a comparison test before, so I'm not sure if it needs to be more detailed or not.
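In case it helps with the updated series: the denominator factors as $25n^2+5n-6=(5n-2)(5n+3)$, and partial fractions give $\frac{1}{(5n-2)(5n+3)}=\frac{1}{5}\left(\frac{1}{5n-2}-\frac{1}{5n+3}\right)$; since $5n+3 = 5(n+1)-2$, the partial sums telescope and the series sums to $\frac{1}{15}$. A quick exact check of the telescoping identity (a sketch):

```python
from fractions import Fraction

# Partial sums of sum_{n=1}^N 1/((5n-2)(5n+3)) telescope:
#   S_N = (1/5) * (1/3 - 1/(5N+3))  ->  1/15 as N -> infinity
N = 1000
partial = sum(Fraction(1, (5 * n - 2) * (5 * n + 3)) for n in range(1, N + 1))
expected = Fraction(1, 5) * (Fraction(1, 3) - Fraction(1, 5 * N + 3))
assert partial == expected                    # exact telescoping identity
assert abs(float(partial) - 1 / 15) < 1e-4    # numerically close to the limit 1/15
```

Telescoping also settles convergence for free, since the partial sums are explicitly bounded.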
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{s_{NN}}=5.02$ TeV with ALICE (Elsevier, 2017-11). Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...

Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11). Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Probability Seminar (revision as of 12:06, 23 March 2015). Spring 2015. Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu. Thursday, January 15, Miklos Racz, UC-Berkeley Stats Title: Testing for high-dimensional geometry in random graphs Abstract: I will talk about a random geometric graph model, where connections between vertices depend on distances between latent d-dimensional labels; we are particularly interested in the high-dimensional case when d is large. Upon observing a graph, we want to tell if it was generated from this geometric model, or from an Erdos-Renyi random graph. We show that there exists a computationally efficient procedure to do this which is almost optimal (in an information-theoretic sense). The key insight is based on a new statistic which we call "signed triangles". To prove optimality we use a bound on the total variation distance between Wishart matrices and the Gaussian Orthogonal Ensemble. This is joint work with Sebastien Bubeck, Jian Ding, and Ronen Eldan. Thursday, January 22, No Seminar Thursday, January 29, Arnab Sen, University of Minnesota Title: Double Roots of Random Littlewood Polynomials Abstract: We consider random polynomials whose coefficients are independent and uniform on {-1,1}.
We will show that the probability that such a polynomial of degree n has a double root is o(n^{-2}) when n+1 is not divisible by 4 and is of the order n^{-2} otherwise. We will also discuss extensions to random polynomials with more general coefficient distributions. This is joint work with Ron Peled and Ofer Zeitouni. Thursday, February 5, No seminar this week Thursday, February 12, No Seminar this week Thursday, February 19, Xiaoqin Guo, Purdue Title: Quenched invariance principle for random walks in time-dependent random environment Abstract: In this talk we discuss random walks in a time-dependent zero-drift random environment in [math]Z^d[/math]. We prove a quenched invariance principle under an appropriate moment condition. The proof is based on the use of a maximum principle for parabolic difference operators. This is a joint work with Jean-Dominique Deuschel and Alejandro Ramirez. Thursday, February 26, Dan Crisan, Imperial College London Title: Smoothness properties of randomly perturbed semigroups with application to nonlinear filtering Abstract: In this talk I will discuss sharp gradient bounds for perturbed diffusion semigroups. In contrast with existing results, the perturbation is here random and the bounds obtained are pathwise. Our approach builds on the classical work of Kusuoka and Stroock and extends their program developed for the heat semi-group to solutions of stochastic partial differential equations. The work is motivated by and applied to nonlinear filtering. The analysis allows us to derive pathwise gradient bounds for the un-normalised conditional distribution of a partially observed signal. 
The estimates we derive have sharp small-time asymptotics. This is joint work with Terry Lyons (Oxford) and Christian Litterer (Ecole Polytechnique) and is based on the paper D. Crisan, C. Litterer, T. Lyons, "Kusuoka–Stroock gradient bounds for the solution of the filtering equation", Journal of Functional Analysis, 2015. Wednesday, March 4, Sam Stechmann, UW-Madison, 2:25pm Van Vleck B113 Please note the unusual time and room. Title: Stochastic Models for Rainfall: Extreme Events and Critical Phenomena Abstract: In recent years, tropical rainfall statistics have been shown to conform to paradigms of critical phenomena and statistical physics. In this talk, stochastic models will be presented as prototypes for understanding the atmospheric dynamics that leads to these statistics and extreme events. Key nonlinear ingredients in the models include either stochastic jump processes or thresholds (Heaviside functions). First, both exact solutions and simple numerics are used to verify that a suite of observed rainfall statistics is reproduced by the models, including power-law distributions and long-range correlations. Second, we prove that a stochastic trigger, which is a time-evolving indicator of whether it is raining or not, will converge to a deterministic threshold in an appropriate limit. Finally, we discuss the connections among these rainfall models, stochastic PDEs, and traditional models for critical phenomena. Thursday, March 12, Ohad Feldheim, IMA Title: The 3-states AF-Potts model in high dimension Abstract: Take a bounded odd domain of the bipartite graph [math]\mathbb{Z}^d[/math]. Color the boundary of the set by [math]0[/math], then color the rest of the domain at random with the colors [math]\{0,\dots,q-1\}[/math], penalizing every configuration in proportion to the number of improper edges at a given rate [math]\beta\gt 0[/math] (the "inverse temperature"). Q: "What is the structure of such a coloring?"
This model is called the [math]q[/math]-states Potts antiferromagnet (AF), a classical spin glass model in statistical mechanics. The [math]2[/math]-states case is the famous Ising model, which is relatively well understood. The [math]3[/math]-states case in high dimension has been studied for [math]\beta=\infty[/math], when the model reduces to a uniformly chosen proper three-coloring of the domain. Several works by Galvin, Kahn, Peled, Randall and Sorkin established the structure of the model, showing long-range correlations and phase coexistence. In this work, we generalize this result to positive temperature, showing that for large enough [math]\beta[/math] (low enough temperature) the rigid structure persists. This is the first rigorous result for [math]\beta\lt \infty[/math]. In the talk, assuming no acquaintance with the model, we shall give the physical background, introduce all the relevant definitions and shed some light on how such results are proved using only combinatorial methods. Joint work with Yinon Spinka. Thursday, March 19, Mark Huber, Claremont McKenna Math Title: Understanding relative error in Monte Carlo simulations Abstract: The problem of estimating the probability [math]p[/math] of heads on an unfair coin has been around for centuries, and has inspired numerous advances in probability such as the Strong Law of Large Numbers and the Central Limit Theorem. In this talk, I'll consider a new twist: given an estimate [math]\hat p[/math], suppose we want to understand the behavior of the relative error [math](\hat p - p)/p[/math]. In classic estimators, the values that the relative error can take on depend on the value of [math]p[/math]. I will present a new estimate with the remarkable property that the distribution of the relative error does not depend in any way on the value of [math]p[/math]. Moreover, this new estimate is very fast: it takes a number of coin flips that is very close to the theoretical minimum.
Time permitting, I will also discuss new ways to use concentration results for estimating the mean of random variables where normal approximations do not apply. Thursday, March 26, Ji Oon Lee, KAIST Title: Tracy-Widom Distribution for Sample Covariance Matrices with General Population Abstract: Consider the sample covariance matrix [math](\Sigma^{1/2} X)(\Sigma^{1/2} X)^*[/math], where the sample [math]X[/math] is an [math]M \times N[/math] random matrix whose entries are real independent random variables with variance [math]1/N[/math] and [math]\Sigma[/math] is an [math]M \times M[/math] positive-definite deterministic diagonal matrix. We show that the fluctuation of its rescaled largest eigenvalue is given by the type-1 Tracy-Widom distribution. This is a joint work with Kevin Schnelli. Thursday, April 2, No Seminar, Spring Break Thursday, April 9, Elnur Emrah, UW-Madison Title: TBA Abstract: Thursday, April 16, Scott Hottovy, UW-Madison Title: An SDE approximation for stochastic differential delay equations with colored state-dependent noise Abstract: TBA Thursday, April 23, Hoi Nguyen, Ohio State University Title: On eigenvalue repulsion of random matrices Abstract: I will address certain repulsion behavior of roots of random polynomials and of eigenvalues of Wigner matrices, and their applications. Among other things, we show a Wegner-type estimate for the number of eigenvalues inside an extremely small interval for quite general matrix ensembles. Thursday, April 30, TBA Title: TBA Abstract: Thursday, May 7, TBA Title: TBA Abstract:
> Input
> Input
>> 1²
>> (3]
>> 1%L
>> L=2
>> Each 5 4
>> Each 6 7
>> L⋅R
>> Each 9 4 8
> {0}
>> {10}
>> 12∖11
>> Output 13

Try it online! Returns a set of all possible solutions, and the empty set (i.e. \$\emptyset\$) when no solution exists. How it works Unsurprisingly, it works almost identically to most other answers: it generates a list of numbers and checks each one for inverse modulus with the argument. If you're familiar with how Whispers' program structure works, feel free to skip ahead to the horizontal line. If not: essentially, Whispers works on a line-by-line reference system, starting on the final line. Each line is classed as one of two options. Either it is a nilad line, or it is an operator line. Nilad lines start with >, such as > Input or > {0}, and return the exact value represented on that line, i.e. > {0} returns the set \$\{0\}\$. > Input returns the next line of STDIN, evaluated if possible. Operator lines start with >>, such as >> 1² or >> (3], and denote running an operator on one or more values. Here, the numbers used do not reference those explicit numbers; instead they reference the value on that line. For example, ² is the square command (\$n \to n^2\$), so >> 1² does not return the value \$1^2\$, instead it returns the square of line 1, which, in this case, is the first input. Usually, operator lines only work using numbers as references, yet you may have noticed the lines >> L=2 and >> L⋅R. These two values, L and R, are used in conjunction with Each statements. Each statements work by taking two or three arguments, again as numerical references. The first argument (e.g. 5) is a reference to an operator line used as a function, and the rest of the arguments are arrays. We then iterate the function over the array, where the L and R in the function represent the current element(s) in the arrays being iterated over. As an example: Let \$A = [1, 2, 3, 4]\$, \$B = [4, 3, 2, 1]\$ and \$f(x, y) = x + y\$.
Assuming we are running the following code:

> [1, 2, 3, 4]
> [4, 3, 2, 1]
>> L+R
>> Each 3 1 2

We then get a demonstration of how Each statements work. First, when working with two arrays, we zip them to form \$C = [(1, 4), (2, 3), (3, 2), (4, 1)]\$, then map \$f(x, y)\$ over each pair, forming our final array \$D = [f(1, 4), f(2, 3), f(3, 2), f(4, 1)] = [5, 5, 5, 5]\$ Try it online! How this code works Working counter-intuitively to how Whispers works, we start from the first two lines: > Input > Input This collects our two inputs, let's say \$x\$ and \$y\$, and stores them in lines 1 and 2 respectively. We then store \$x^2\$ on line 3 and create a range \$A := [1 ... x^2]\$ on line 4. Next, we jump to the section >> 1%L >> L=2 >> Each 5 4 >> Each 6 7 The first thing executed here is line 7, >> Each 5 4, which iterates line 5 over line 4. This yields the array \$B := [x \: \% \: i \: | \: i \in A]\$, where \$a \: \% \: b\$ is defined as the modulus of \$a\$ and \$b\$. We then execute line 8, >> Each 6 7, which iterates line 6 over \$B\$, yielding an array \$C := [(x \: \% \: i) = y \: | \: i \in A]\$. For the inputs \$x = 5, y = 2\$, we have \$A = [1, 2, 3, ..., 23, 24, 25]\$, \$B = [0, 1, 2, 1, 0, 5, 5, ..., 5, 5]\$ and \$C = [0, 0, 1, 0, 0, ..., 0, 0]\$ We then jump down to >> L⋅R >> Each 9 4 8 which is our example of a dyadic Each statement. Here, our function is line 9, i.e >> L⋅R, and our two arrays are \$A\$ and \$C\$. We multiply each element in \$A\$ with its corresponding element in \$C\$, which yields an array, \$E\$, where each element works from the following relationship: $$E_i =\begin{cases}0 & C_i = 0 \\A_i & C_i = 1\end{cases}$$ We then end up with an array consisting of \$0\$s and the inverse moduli of \$x\$ and \$y\$. In order to remove the \$0\$s, we convert this array to a set (>> {10}), then take the set difference between this set and \$\{0\}\$, yielding, then outputting, our final result.
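For comparison, here is a rough Python rendering of the same brute-force search (the naming is mine; the challenge's "inverse modulus" of \$x\$ and \$y\$ is every \$m\$ with \$x \bmod m = y\$):

```python
def inverse_modulus(x, y):
    """All m in 1..x^2 with x % m == y, mirroring the Whispers program:
    build A = [1..x^2], keep the entries where x % m equals y."""
    return {m for m in range(1, x * x + 1) if x % m == y}

# The worked example from the explanation: x = 5, y = 2
assert inverse_modulus(5, 2) == {3}
# No solution -> empty set, matching the program's empty-set output
assert inverse_modulus(4, 3) == set()
```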
To evaluate detection performance, we plot the miss rate $mr(c) = \frac{fn(c)}{tp(c) + fn(c)}$ against the number of false positives per image $fppi(c)=\frac{fp(c)}{\text{#img}}$ in log-log plots. $tp(c)$ is the number of true positives, $fp(c)$ is the number of false positives, and $fn(c)$ is the number of false negatives, all for a given confidence value $c$, such that only detections with a confidence value greater than or equal to $c$ are taken into account. As commonly applied in object detection evaluation, the confidence threshold $c$ is used as a control variable. By decreasing $c$, more detections are taken into account for evaluation, resulting in more possible true or false positives, and possibly fewer false negatives. We define the log-average miss rate (LAMR) as shown, where the 9 $fppi$ reference points are equally spaced in log space: $\DeclareMathOperator*{\argmax}{argmax}LAMR = \exp\left(\frac{1}{9}\sum\limits_f \log\left(mr\left(\argmax\limits_{fppi\left(c\right)\leq f} fppi\left(c\right)\right)\right)\right)$ For each $fppi$ reference point the corresponding $mr$ value is used. In the absence of a miss-rate value for a given $f$, the highest existing $fppi$ value is used as the new reference point. This definition enables LAMR to be applied as a single detection-performance indicator at image level. At each image, the set of all detections is compared to the ground-truth annotations by utilizing a greedy matching algorithm. An object is considered as detected (true positive) if the Intersection over Union (IoU) of the detection and ground-truth bounding box exceeds a pre-defined threshold. Due to the high non-rigidness of pedestrians, we follow the common choice of an IoU threshold of 0.5. Since no multiple matches are allowed for one ground-truth annotation, in the case of multiple matches the detection with the largest score is selected, whereas all other matching detections are considered false positives.
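The LAMR definition above can be sketched in a few lines of NumPy. Note that the reference-point range $[10^{-2}, 10^{0}]$ is an assumption on my part (a common convention for pedestrian benchmarks); the text only specifies 9 points equally spaced in log space:

```python
import numpy as np

def lamr(fppi, mr):
    """Log-average miss rate over 9 log-spaced fppi reference points.
    fppi must be sorted ascending; mr[i] is the miss rate at fppi[i].
    The reference range [1e-2, 1e0] is an assumed convention."""
    fppi, mr = np.asarray(fppi, float), np.asarray(mr, float)
    picked = []
    for f in np.logspace(-2, 0, 9):
        below = np.nonzero(fppi <= f)[0]
        # mr at argmax{fppi <= f}; fall back to the lowest-fppi point
        # when no curve point lies at or below the reference value
        picked.append(mr[below[-1]] if below.size else mr[0])
    return float(np.exp(np.mean(np.log(picked))))

# A flat miss-rate curve must come back unchanged
curve_fppi = np.logspace(-3, 0, 50)
assert abs(lamr(curve_fppi, np.full(50, 0.2)) - 0.2) < 1e-12
```

The geometric mean (exp of the mean of logs) is what makes LAMR robust across the orders of magnitude spanned by the fppi axis.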
After the matching is performed, all non-matched ground-truth annotations and detections count as false negatives and false positives, respectively.

Neighboring classes and ignore regions are used during evaluation. Neighboring classes involve entities that are semantically similar, for example bicycle and moped riders. Some applications might require their precise distinction (enforce) whereas others might not (ignore). In the latter case, correct/false detections are not credited/penalized during matching. If not stated otherwise, neighboring classes are ignored in the evaluation. In addition to ignored neighboring classes, all person annotations with the tags behind glass or sitting-lying are treated as ignore regions. Further, as mentioned in Section 3.2 of the EuroCity Persons dataset publication, ignore regions are used for cases where no precise bounding box annotation is possible (either because the objects are too small or because there are too many objects in close proximity, which renders instance-based labeling infeasible). Since there is no precise information about the number or the location of objects in an ignore region, all unmatched detections which share an intersection of more than $0.5$ with these regions are not considered false positives.

Note that submissions with a provided publication link and/or code will get prioritized in the list below (COMING SOON).

| Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on |
|---|---|---|---|---|---|---|---|---|---|
| HRNet | Hongsong Wang | 0.061 | 0.138 | 0.287 | 0.183 | ImageNet | no | no | 2019-08-05 17:11:04 |
| Faster R-CNN | ECP Team | 0.101 | 0.196 | 0.381 | 0.251 | ImageNet | yes | no | 2019-04-01 17:06:33 |
| YOLOv3 | ECP Team | 0.097 | 0.186 | 0.401 | 0.242 | ImageNet | yes | no | 2019-04-01 17:08:05 |
| SSD | ECP Team | 0.131 | 0.235 | 0.460 | 0.296 | ImageNet | yes | no | 2019-04-02 13:56:14 |
| R-FCN (with OHEM) | ECP Team | 0.163 | 0.245 | 0.507 | 0.330 | ImageNet | yes | no | 2019-04-01 17:10:03 |
| YOLOv3_640 | HUI_Tsinghua-Daim... | 0.273 | 0.564 | 0.623 | 0.456 | | no | no | 2019-05-17 04:56:27 |

| Method | User | LAMR (reasonable) | LAMR (small) | LAMR (occluded) | LAMR (all) | External data used | Publication URL | Publication code | Submitted on |
|---|---|---|---|---|---|---|---|---|---|
| HRNet | Hongsong Wang | 0.079 | 0.156 | 0.265 | 0.153 | ImageNet | no | no | 2019-08-05 17:11:04 |
| FasterRCNN with M... | Qihua Cheng | 0.150 | 0.253 | 0.653 | 0.295 | ImageNet | no | no | 2019-07-08 08:48:13 |
| Faster R-CNN | ECP Team | 0.201 | 0.359 | 0.701 | 0.358 | ImageNet | yes | no | 2019-05-02 10:10:01 |
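The LAMR definition above can be sketched in a few lines of code. This is my own minimal reading of the formula: the function names, the fallback rule for missing reference points, and the reference range of $10^{-2}$ to $10^{0}$ fppi (a common choice in pedestrian detection benchmarks) are all assumptions, not taken verbatim from this page.

```python
import math

def lamr(fppi, mr, refs=None):
    """Log-average miss rate: for each fppi reference point, take the miss
    rate at the largest fppi not exceeding it; if none exists, fall back to
    the lowest available fppi point (my reading of the fallback rule)."""
    if refs is None:
        # 9 reference points equally spaced in log space between 1e-2 and 1e0
        refs = [10 ** (-2 + 2 * i / 8) for i in range(9)]
    pairs = sorted(zip(fppi, mr))  # sort by ascending fppi
    logs = []
    for f in refs:
        below = [m for (fp, m) in pairs if fp <= f]
        logs.append(math.log(below[-1] if below else pairs[0][1]))
    return math.exp(sum(logs) / len(logs))
```

As a sanity check, a detector with a constant miss rate of 0.5 across all operating points has a LAMR of exactly 0.5.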
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2} \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is.

Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; is the point $z = 0$ (a) a removable singularity, (b) a pole, (c) an essential singularity, or (d) a non-isolated singularity? Since $\cos(1/z) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!...}$

I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...

No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...

The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?

Mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function to it.

Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$.

Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial, but I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."

That is definitely a new person, not going to classify as RHV yet, as other users have already put the situation under control it seems... (comment on many many posts above)

In other news:

> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000
> C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999
> C -3.6485676800000002 0.0734728100000000 -1.4738058999999999
> C -2.9689624299999999 0.9078326800000001 -0.5942069900000000
> C -2.0858929200000000 0.3286240400000000 0.3378783500000000
> C -1.8445799400000003 -1.0963522200000000 0.3417561400000000
> C -0.8438543100000000 -1.3752198200000001 1.3561451400000000
> C -0.5670178500000000 -0.1418068400000000 2.0628359299999999

probably the weirdest bunch of data I have ever seen, with so many 000000s and 999999s

But I think that to prove the implication for transitivity the inference rule and a use of MP seem to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or only on the FOL axioms (without equality axioms). This would allow us in some cases to define an "equality-like" relation for set theories in which we don't have the Axiom of Extensionality.

Can someone give an intuitive explanation of why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.

@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.

@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.

Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$ I tried it as follows: as $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdots,a_{n}$ to be zero because by the triangul...

@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?

Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
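As a toy numerical companion to the triangle-inequality argument in that exchange (entirely my own illustration, not part of the original discussion), one can check on a grid approaching $0$ that the two example combinations stay bounded by a constant times $x^2$:

```python
# Check |f(x)| <= K*x^2 on a grid approaching 0, mirroring the
# bound |f(x)| <= K x^2 from the definition of O(x^2).
def is_big_O_x2(f, K, xs):
    return all(abs(f(x)) <= K * x * x for x in xs)

xs = [10 ** (-k) for k in range(1, 8)]
assert is_big_O_x2(lambda x: 3 * x**2 - x**2, K=3, xs=xs)       # = 2x^2
assert is_big_O_x2(lambda x: x**2 + (x**3 - x**2), K=1, xs=xs)  # = x^3
```

Of course a finite grid proves nothing; it only illustrates why the sum or difference of two $\mathcal{O}(x^2)$ quantities never grows faster than $x^2$.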
I want to implement a Bilinear Finite Volume discretisation of the anisotropic diffusion problem: $$\frac{du}{dt} = \nabla \cdot (\textbf{D} \nabla u)$$ Both my degrees of freedom and the diffusion tensors are given on the vertices of my quadrilateral grid. I can, therefore, use bilinear test/trial functions on each cell to calculate the full gradient. I am puzzled as to how to approximate the diffusion tensor on an internal sub-face of my control volume.

In the 1D case (TPFA), the diffusion coefficient is modeled to be constant over one cell and the degree of freedom is taken to be the same everywhere within the cell. The flux over an interface is made consistent by using the harmonic average of the two given diffusivities: $$f_{A\to B}= \frac{2}{\frac{1}{D_A}+ \frac{1}{D_B}} ~\nabla_x u$$ Usually, the gradient is then approximated by the simple finite difference of the values modeled to be at the cell centers. Sketch to clarify: (image not shown)

So far so good. Now I am in a setting where the grid is not aligned with the principal directions of the diffusion tensors involved (not "K-orthogonal"). The two-point flux approximation would, therefore, fail, because it cannot capture the off-diagonal entries of the diffusion tensors: only the x-derivative can be calculated. In my 2D setting, I do reconstruct the full gradient. I want to calculate four sub-fluxes within my cell, as indicated by the following sketch: (image not shown) Here the black dots indicate where my degrees of freedom lie. The dashed line shows the area where I model the diffusion tensor to be constant (dual grid); the friendly green dot marks the location where I want to evaluate the bottom-most of the four sub-fluxes to properly assemble. Following the procedure from the 1D case, I could use the harmonic average of the diffusion tensors of the involved dual cells A, B.
Let's call it option 1: $$D_{eff} = \left(\frac{\textbf{A}^{-1} + \textbf{B}^{-1} }{2}\right)^{-1}$$ Here the "$-1$" in the exponents indicates the matrix inverse. What puzzles me so much is that by calculating the matrix inverse, the diagonal entries become coupled with the off-diagonal ones. To my understanding, these are different physical processes and I feel awkward to have, say, the component $D_{xx}$ depend on $D_{xy}$, etc.

The second option I see is to do a harmonic average component-wise, to have option 2: $$D_{eff} = \begin{bmatrix} \frac{2}{\frac{1}{D^{A}_{xx}} + \frac{1}{D^{B}_{xx}}} & \frac{2}{\frac{1}{D^{A}_{xy}} + \frac{1}{D^{B}_{xy}}} \\ \frac{2}{\frac{1}{D^{A}_{yx}} + \frac{1}{D^{B}_{yx}}} & \frac{2}{\frac{1}{D^{A}_{yy}} + \frac{1}{D^{B}_{yy}}} \end{bmatrix}$$ If I do it this way, the diagonal component $D_{xx}$ is only dependent on the corresponding entries in $D_A$ and $D_B$.

What is the correct way to average the diffusion tensors at the sub-face marked with the green dot?

UPDATE: Investigating option two, I found that as the off-diagonal entries of the diffusion matrices may be negative, I face the situation where I take the harmonic average of a negative and a positive number, which, according to Wikipedia, is not properly defined. Help appreciated.
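The two candidate averages are easy to write down side by side. This is only a sketch of the two options being compared (the function names and the test tensors are mine), not a recommendation of either:

```python
import numpy as np

def avg_full_inverse(A, B):
    """Option 1: matrix harmonic mean, i.e. inverse of the mean of inverses."""
    return np.linalg.inv((np.linalg.inv(A) + np.linalg.inv(B)) / 2.0)

def avg_componentwise(A, B):
    """Option 2: entrywise harmonic mean (ill-defined when an entry pair
    mixes signs, as the UPDATE above points out)."""
    return 2.0 / (1.0 / A + 1.0 / B)

# Two symmetric positive-definite example tensors (invented values)
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[4.0, 0.5], [0.5, 2.0]])
D1 = avg_full_inverse(A, B)
D2 = avg_componentwise(A, B)
```

A useful sanity check is that both options return $A$ itself when $A = B$, and option 1 preserves symmetry; which of the two is physically correct for anisotropic tensors is exactly the open question above.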
I am trying to use Newton's algorithm for polynomial interpolation. The original polynomial is $p(x) = 3x^2+4x+7$ and the points from which I am trying to interpolate the polynomial are $p(1) = 14$, $p(6) = 139$ and $p(7) = 182$. Now as far as I know the formula for the interpolated polynomial should be $r(x) = a_0+a_1(x-x_0)+a_2(x-x_0)(x-x_1)$. To find $a_0$ I calculate $y_0 = 14 = a_0$. Then, $y_1 = 139 = a_0 + a_1 (x_1-x_0)=14+a_1(6-1)$, so by solving for $a_1$ the result is $a_1 = 25$. At last, $y_2 = 182 = a_0 + a_1(x_1-x_0)+a_2(x_2-x_0)(x_2-x_1)=14+25(6-1)+a_2(7-1)(7-6)$, and solving for $a_2$ results in $a_2=\frac{43}{6}$. By inserting the found values into the formula I get $r(x)=14+25(x-1)+\frac{43}{6}(x-1)(x-6)=\frac{43}{6}x^2-\frac{151}{6}x+32$. This polynomial doesn't go through the last point though: $r(7)=\frac{43}{6}\cdot49-\frac{151}{6}\cdot7+32=207 \ne 182=p(7)$. Am I doing something wrong or does this usually happen with this algorithm?
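For comparison, here is a standard divided-difference computation (my own code, not part of the question). It recovers $a_2 = 3$ and reproduces $p(x)=3x^2+4x+7$ exactly at all three points; the slip in the derivation above is that the $y_2$ equation uses $a_1(x_1-x_0)$ where the Newton form requires $a_1(x_2-x_0)$:

```python
def newton_coeffs(xs, ys):
    """In-place divided-difference table: returns a_0..a_{n-1} of the
    Newton form a_0 + a_1 (x-x_0) + a_2 (x-x_0)(x-x_1) + ..."""
    coeffs = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(coeffs, xs, x):
    """Horner-style evaluation of the Newton form."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

xs, ys = [1, 6, 7], [14, 139, 182]
a = newton_coeffs(xs, ys)   # -> [14, 25.0, 3.0]
```

With these coefficients the interpolant is $14 + 25(x-1) + 3(x-1)(x-6)$, which expands to $3x^2+4x+7$ and hits $r(7)=182$ as required.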
Light diffraction grating: grating applications. Light incident on a diffraction grating is dispersed away from the grating surface at an angle dependent on its wavelength, allowing a grating to be used to select a narrow spectral band from a much wider band. This ability of a grating is particularly useful for laser tuning, especially in the visible region of the spectrum.

A diffraction grating consists of a large number ($N$) of equally spaced narrow slits or lines. A transmission grating has slits, while a reflection grating has lines that reflect light. The more lines or slits there are, the narrower the peaks; the intensity of the principal maxima scales as $I_0 N^2$, and the principal maxima can occur at large angles $\theta$. (Figure comparing the diffraction patterns for $N=2$ and $N=6$ omitted.)

Equipment: spectrometer, diffraction grating, mercury light source, high-voltage power supply. BACKGROUND: A diffraction grating is made by making many parallel scratches on the surface of a flat piece of transparent material. It is possible to put a large number of scratches per centimeter on the material, e.g., the grating to be used has 6,000 lines/cm.

Diffraction gratings and optical spectroscopy. A grating disperses light of different wavelengths to give, for any wavelength, a narrow fringe. This allows precise spectroscopy. Absorption and emission spectra. Gas and incandescent lamps. Physics with animations and video film clips. Physclips provides multimedia education in introductory physics (mechanics) at different levels.

Diffraction gratings are optical components used to separate light into its component wavelengths. Diffraction gratings are used in spectroscopy, or for integration into spectrophotometers or monochromators. Diffraction gratings consist of a series of closely packed grooves that have been engraved or etched into the grating's surface. Diffraction gratings can be used to produce monochromatic light of a required wavelength. Another use is "wavelength tuning" in lasers: the laser output can be varied using a diffraction grating.
Unfortunately, apart from the use of gratings to produce spectra, many of the applications are difficult to describe in simple, non-technical terms.

A diffraction grating is an optical component with a regular pattern. The form of the light diffracted by a grating depends on the structure of the elements and the number of elements present, but all gratings have intensity maxima at angles $\theta_m$ which are given by the grating equation $d\left(\sin\theta_i + \sin\theta_m\right) = m\lambda$, where $d$ is the grating spacing, $\theta_i$ the angle of incidence, and $m$ the diffraction order.

A diffraction grating is a piece of glass with very closely spaced lines ruled on it. HOW CAN IT BE USED: It can be used to split light into different wavelengths with a high degree of accuracy, much more so than a glass prism.

An incandescent lightbulb produces incoherent light. But on Wikipedia, for instance, there is a picture of it producing a rainbow diffraction pattern on the diffraction grating page. Since the bulb is putting out incoherent light, it should be intensities rather than fields that add together.

A diffraction grating with 1000 lines per mm is used in a spectrometer to measure the wavelengths of light emitted from a gas discharge tube. You measure the diffraction angle to be 20° for one emission line.

A diffraction grating is a glass substrate carrying a layer of deposited aluminum that has been pressure-ruled with a large number of fine equidistant grooves, using a diamond edge as a tool. Light falling on such a grating is dispersed into a series of spectra on both sides of the incident beam, the angular dispersion being inversely...

Diffraction gratings are used for the direct viewing and analysis of spectra from different gas tubes and other light sources. The quality of the spectrum produced from our gratings is the brightest possible with a minimum of distracting visual noise.
Spectra of Lights: An Interactive Demonstration with Diffraction Gratings, Page 2. Prerequisites: This activity is commonly used after the "Light Shielding Demonstration" to highlight a crucial part of the light pollution problem. While the "Light Shielding Demonstration" reveals how changing the shielding...

The relationship between the grating spacing $d$, the wavelength of light $\lambda$, and the angle $\theta$ to the $m^{\mathrm{th}}$-order maximum in the diffraction pattern is given by $m\lambda=d \sin\theta$. A spectrometer can be used to study the spectrum of light produced by an object.

Diffraction gratings are used to separate light into its different colors. This double-axis diffraction grating contains 13,500 lines per inch and is mounted in a 2"x2" cardboard frame.

Home Science: Diffraction and the Wavelength of Light. Goal: to use a diffraction grating to measure the wavelength of light from various sources and to determine the track spacing on a compact disc. Lab preparation: light is an electromagnetic wave, like a radio wave, but with very high frequency and very short wavelength.

Diffraction gratings: holographic, 1000 lines/mm and 600 lines/mm. Collimating lens; 100 micron slit / slits built with razor blades / micrometric adjustable slit. Portable spectrometer. Theory of the diffraction grating: a monochromatic light beam that is incident on a grating gives rise to a transmitted beam and various...

In optics, a diffraction grating is an optical component with a periodic structure that splits and diffracts light into several beams travelling in different directions. The emerging coloration is a form of structural coloration. The directions of these beams depend on the spacing of the grating and the wavelength of the light, so that the grating acts as the dispersive element.
We propose, in particular, to measure the pitch of the grating through the measurement of the diffraction produced on the He-Ne laser beam. The Diffraction Grating. When a collimated beam of light passes through an aperture, or if it encounters an obstacle, it spreads out and the resulting pattern contains bright and dark regions.
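The spectrometer scenario mentioned earlier (a 1000 lines/mm grating with a first-order line observed at 20°) can be checked against the grating equation $m\lambda = d\sin\theta$. The function name is mine; the numbers come from that example:

```python
import math

def wavelength_from_angle(lines_per_mm, theta_deg, order=1):
    """Solve the grating equation m*lam = d*sin(theta) for the wavelength."""
    d = 1e-3 / lines_per_mm               # groove spacing in metres
    return d * math.sin(math.radians(theta_deg)) / order

# 1000 lines/mm grating, first-order maximum observed at 20 degrees
lam = wavelength_from_angle(1000, 20.0)   # ~3.42e-7 m, i.e. about 342 nm
```

With $d = 1\,\mu\text{m}$ and $\sin 20° \approx 0.342$, the line sits near 342 nm.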
The SciPy tutorial explicitly states:

Note that ARPACK is generally better at finding extremal eigenvalues: that is, eigenvalues with large magnitudes. In particular, using which = 'SM' may lead to slow execution time and/or anomalous results. A better approach is to use shift-invert mode.

and goes on to describe that. Basically, if $\lambda$ is the smallest-by-magnitude eigenvalue of $A$, then $\nu = \frac{1}{\lambda}$ is the largest-by-magnitude eigenvalue of $A^{-1}$. (With the same trick, you can get the eigenvalue closest to a given $\sigma$ by looking at the largest for $(A-\sigma I)^{-1}$.) Hence, to compute, say, the three smallest-by-magnitude eigenvalues, you can instead use

eigenval, eigenvec = eigsh(A, 3, sigma=0, which='LM')

If you already know that $A$ has a zero eigenvalue and is therefore not invertible (and you have some idea where the next closest are), you can instead use a nonzero shift, e.g., sigma=0.01, to shift your matrix away from singular. If sigma is small enough, that should still get you the ones closest to zero. (Also, as noted in my comment, make sure you have a proper optimized BLAS such as OpenBLAS installed; for sparse eigenvalues, SciPy calls ARPACK (written in Fortran), which in turn makes heavy use of BLAS.)

EDIT: If it's the smallest algebraic eigenvalue you're after, you can use this neat trick: if you have an estimate $\sigma$ of the largest eigenvalue (say, by computing it using which='LM'), you can use that to shift all eigenvalues to be negative, so that the smallest algebraic eigenvalues are also the largest-magnitude ones. Then simply use which='LM' on $A-\sigma I$ and shift back.
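A minimal end-to-end sketch of the shift-invert call described above (the toy matrix is my own; the eigsh keywords are as documented in SciPy):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Toy example: a diagonal sparse matrix with eigenvalues 1..10, so the
# three smallest-magnitude eigenvalues are 1, 2, 3.  CSC format lets the
# shift-invert mode factorize (A - sigma*I) efficiently.
A = diags(np.arange(1.0, 11.0)).tocsc()

# Shift-invert about sigma=0: the largest-magnitude eigenvalues of A^{-1}
# correspond to the smallest-magnitude eigenvalues of A.
vals, vecs = eigsh(A, k=3, sigma=0, which='LM')
```

Note that sigma=0 requires $A$ itself to be invertible, which is exactly why the answer suggests a small nonzero shift for singular matrices.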
Linear algebra¶

Vector spaces¶

The VectorSpace command creates a vector space class, from which one can create a subspace. Note the basis computed by Sage is "row reduced".

sage: V = VectorSpace(GF(2),8)
sage: S = V.subspace([V([1,1,0,0,0,0,0,0]),V([1,0,0,0,0,1,1,0])])
sage: S.basis()
[(1, 0, 0, 0, 0, 1, 1, 0),
(0, 1, 0, 0, 0, 1, 1, 0)]
sage: S.dimension()
2

Matrix powers¶

How do I compute matrix powers in Sage? The syntax is illustrated by the example below.

sage: R = IntegerModRing(51)
sage: M = MatrixSpace(R,3,3)
sage: A = M([1,2,3, 4,5,6, 7,8,9])
sage: A^1000*A^1007
[ 3  3  3]
[18  0 33]
[33 48 12]
sage: A^2007
[ 3  3  3]
[18  0 33]
[33 48 12]

Kernels¶

The kernel is computed by applying the kernel method to the matrix object. The following examples illustrate the syntax.

sage: M = MatrixSpace(IntegerRing(),4,2)(range(8))
sage: M.kernel()
Free module of degree 4 and rank 2 over Integer Ring
Echelon basis matrix:
[ 1  0 -3  2]
[ 0  1 -2  1]

A kernel of dimension one over \(\QQ\):

sage: A = MatrixSpace(RationalField(),3)(range(9))
sage: A.kernel()
Vector space of degree 3 and dimension 1 over Rational Field
Basis matrix:
[ 1 -2  1]

A trivial kernel:

sage: A = MatrixSpace(RationalField(),2)([1,2,3,4])
sage: A.kernel()
Vector space of degree 2 and dimension 0 over Rational Field
Basis matrix:
[]
sage: M = MatrixSpace(RationalField(),0,2)(0)
sage: M
[]
sage: M.kernel()
Vector space of degree 0 and dimension 0 over Rational Field
Basis matrix:
[]
sage: M = MatrixSpace(RationalField(),2,0)(0)
sage: M.kernel()
Vector space of degree 2 and dimension 2 over Rational Field
Basis matrix:
[1 0]
[0 1]

Kernel of a zero matrix:

sage: A = MatrixSpace(RationalField(),2)(0)
sage: A.kernel()
Vector space of degree 2 and dimension 2 over Rational Field
Basis matrix:
[1 0]
[0 1]

Kernel of a non-square matrix:

sage: A = MatrixSpace(RationalField(),3,2)(range(6))
sage: A.kernel()
Vector space of degree 3 and dimension 1 over Rational Field
Basis matrix:
[ 1 -2  1]

The 2-dimensional kernel of a matrix over a cyclotomic field:

sage: K = CyclotomicField(12); a = K.gen()
sage: M = MatrixSpace(K,4,2)([1,-1, 0,-2, 0,-a^2-1, 0,a^2-1])
sage: M
[           1           -1]
[           0           -2]
[           0 -zeta12^2 - 1]
[           0  zeta12^2 - 1]
sage: M.kernel()
Vector space of degree 4 and dimension 2 over Cyclotomic Field of order 12 and degree 4
Basis matrix:
[           0            1            0 -2*zeta12^2]
[           0            0            1 -2*zeta12^2 + 1]

A nontrivial kernel over a complicated base field:

sage: K = FractionField(PolynomialRing(RationalField(),2,'x'))
sage: M = MatrixSpace(K, 2)([[K.gen(1),K.gen(0)], [K.gen(1), K.gen(0)]])
sage: M
[x1 x0]
[x1 x0]
sage: M.kernel()
Vector space of degree 2 and dimension 1 over Fraction Field of Multivariate Polynomial Ring in x0, x1 over Rational Field
Basis matrix:
[ 1 -1]

Other methods for integer matrices are elementary_divisors, smith_form (for the Smith normal form), echelon_form for the Hermite normal form, and frobenius for the Frobenius normal form (rational canonical form). There are many methods for matrices over a field such as \(\QQ\) or a finite field: row_span, nullity, transpose, swap_rows, matrix_from_columns, matrix_from_rows, among many others. See the file matrix.py for further details.

Eigenvectors and eigenvalues¶

How do you compute eigenvalues and eigenvectors using Sage? Sage has a full range of functions for computing eigenvalues and both left and right eigenvectors and eigenspaces. If our matrix is \(A\), then the eigenmatrix_right (resp. eigenmatrix_left) command also gives matrices \(D\) and \(P\) such that \(AP=PD\) (resp. \(PA=DP\)).
sage: A = matrix(QQ, [[1,1,0],[0,2,0],[0,0,3]])
sage: A
[1 1 0]
[0 2 0]
[0 0 3]
sage: A.eigenvalues()
[3, 2, 1]
sage: A.eigenvectors_right()
[(3, [(0, 0, 1)], 1), (2, [(1, 1, 0)], 1), (1, [(1, 0, 0)], 1)]
sage: A.eigenspaces_right()
[(3, Vector space of degree 3 and dimension 1 over Rational Field
User basis matrix:
[0 0 1]),
(2, Vector space of degree 3 and dimension 1 over Rational Field
User basis matrix:
[1 1 0]),
(1, Vector space of degree 3 and dimension 1 over Rational Field
User basis matrix:
[1 0 0])]
sage: D, P = A.eigenmatrix_right()
sage: D
[3 0 0]
[0 2 0]
[0 0 1]
sage: P
[0 1 1]
[0 1 0]
[1 0 0]
sage: A*P == P*D
True

For eigenvalues outside the fraction field of the base ring of the matrix, you can choose to have all the eigenspaces output when the algebraic closure of the field is implemented, such as the algebraic numbers, QQbar. Or you may request just a single eigenspace for each irreducible factor of the characteristic polynomial, since the others may be formed through Galois conjugation. The eigenvalues of the matrix below are $\pm\sqrt{3}i$ and we exhibit each possible output.

Also, currently Sage does not implement multiprecision numerical eigenvalues and eigenvectors, so calling the eigen functions on a matrix from CC or RR will probably give inaccurate and nonsensical results (a warning is also printed). Eigenvalues and eigenvectors of matrices with floating point entries (over CDF and RDF) can be obtained with the "eigenmatrix" commands.
sage: MS = MatrixSpace(QQ, 2, 2)
sage: A = MS([1,-4,1, -1])
sage: A.eigenspaces_left(format='all')
[(-1.732050807568878?*I, Vector space of degree 2 and dimension 1 over Algebraic Field
User basis matrix:
[                        1 -1 - 1.732050807568878?*I]),
(1.732050807568878?*I, Vector space of degree 2 and dimension 1 over Algebraic Field
User basis matrix:
[                        1 -1 + 1.732050807568878?*I])]
sage: A.eigenspaces_left(format='galois')
[(a0, Vector space of degree 2 and dimension 1 over Number Field in a0 with defining polynomial x^2 + 3
User basis matrix:
[     1 a0 - 1])]

Another approach is to use the interface with Maxima:

sage: A = maxima("matrix ([1, -4], [1, -1])")
sage: eig = A.eigenvectors()
sage: eig
[[[-sqrt(3)*%i,sqrt(3)*%i],[1,1]],[[[1,(sqrt(3)*%i+1)/4]],[[1,-(sqrt(3)*%i-1)/4]]]]

This tells us that \(\vec{v}_1 = [1,(\sqrt{3}i + 1)/4]\) is an eigenvector of \(\lambda_1 = - \sqrt{3}i\) (which occurs with multiplicity one) and \(\vec{v}_2 = [1,(-\sqrt{3}i + 1)/4]\) is an eigenvector of \(\lambda_2 = \sqrt{3}i\) (which also occurs with multiplicity one).

Here are two more examples:

sage: A = maxima("matrix ([11, 0, 0], [1, 11, 0], [1, 3, 2])")
sage: A.eigenvectors()
[[[2,11],[1,2]],[[[0,0,1]],[[0,1,1/3]]]]
sage: A = maxima("matrix ([-1, 0, 0], [1, -1, 0], [1, 3, 2])")
sage: A.eigenvectors()
[[[-1,2],[2,1]],[[[0,1,-1]],[[0,0,1]]]]

Warning: Notice how the ordering of the output is reversed, though the matrices are almost the same.

Finally, you can use Sage's GAP interface as well to compute "rational" eigenvalues and eigenvectors:

sage: print(gap.eval("A := [[1,2,3],[4,5,6],[7,8,9]]"))
[ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7, 8, 9 ] ]
sage: print(gap.eval("v := Eigenvectors( Rationals,A)"))
[ [ 1, -2, 1 ] ]
sage: print(gap.eval("lambda := Eigenvalues( Rationals,A)"))
[ 0 ]

Row reduction¶

The row reduced echelon form of a matrix is computed as in the following example.
sage: M = MatrixSpace(RationalField(),2,3)
sage: A = M([1,2,3, 4,5,6])
sage: A
[1 2 3]
[4 5 6]
sage: A.parent()
Full MatrixSpace of 2 by 3 dense matrices over Rational Field
sage: A[0,2] = 389
sage: A
[  1   2 389]
[  4   5   6]
sage: A.echelon_form()
[      1       0 -1933/3]
[      0       1  1550/3]

Characteristic polynomial¶

The characteristic polynomial is a Sage method for square matrices. First a matrix over \(\ZZ\):

sage: A = MatrixSpace(IntegerRing(),2)( [[1,2], [3,4]] )
sage: f = A.charpoly()
sage: f
x^2 - 5*x - 2
sage: f.parent()
Univariate Polynomial Ring in x over Integer Ring

We compute the characteristic polynomial of a matrix over the polynomial ring \(\ZZ[a]\):

sage: R = PolynomialRing(IntegerRing(),'a'); a = R.gen()
sage: M = MatrixSpace(R,2)([[a,1], [a,a+1]])
sage: M
[    a     1]
[    a a + 1]
sage: f = M.charpoly()
sage: f
x^2 + (-2*a - 1)*x + a^2
sage: f.parent()
Univariate Polynomial Ring in x over Univariate Polynomial Ring in a over Integer Ring
sage: M.trace()
2*a + 1
sage: M.determinant()
a^2

We compute the characteristic polynomial of a matrix over the multi-variate polynomial ring \(\ZZ[u,v]\):

sage: R.<u,v> = PolynomialRing(ZZ,2)
sage: A = MatrixSpace(R,2)([u,v,u^2,v^2])
sage: f = A.charpoly(); f
x^2 + (-v^2 - u)*x - u^2*v + u*v^2

It's a little difficult to distinguish the variables. To fix this, we might want to rename the indeterminate "Z", which we can easily do as follows:

sage: f = A.charpoly('Z'); f
Z^2 + (-v^2 - u)*Z - u^2*v + u*v^2

Solving systems of linear equations¶

Using maxima, you can easily solve linear equations:

sage: var('a,b,c')
(a, b, c)
sage: eqn = [a+b*c==1, b-a*c==0, a+b==5]
sage: s = solve(eqn, a,b,c); s
[[a == -1/4*I*sqrt(79) + 11/4, b == 1/4*I*sqrt(79) + 9/4, c == 1/10*I*sqrt(79) + 1/10], [a == 1/4*I*sqrt(79) + 11/4, b == -1/4*I*sqrt(79) + 9/4, c == -1/10*I*sqrt(79) + 1/10]]

You can even nicely typeset the solution in LaTeX:

sage: print(latex(s))
...

To have the above appear onscreen via xdvi, type view(s).
You can also solve linear equations symbolically using the solve command:

sage: var('x,y,z,a')
(x, y, z, a)
sage: eqns = [x + z == y, 2*a*x - y == 2*a^2, y - 2*z == 2]
sage: solve(eqns, x, y, z)
[[x == a + 1, y == 2*a, z == a - 1]]

Here is a numerical NumPy example:

sage: from numpy import arange, eye, linalg
sage: A = eye(10) ## the 10x10 identity matrix
sage: b = arange(1,11)
sage: x = linalg.solve(A,b)

Another way to solve a system numerically is to use Sage's octave interface:

sage: M33 = MatrixSpace(QQ,3,3)
sage: A = M33([1,2,3,4,5,6,7,8,0])
sage: V3 = VectorSpace(QQ,3)
sage: b = V3([1,2,3])
sage: octave.solve_linear_system(A,b) # optional - octave
[-0.333333, 0.666667, 0]
If someone falls from a sixth-storey building, with what force will he land? By this I mean to say that if he weighs 50 kg and was initially at rest before falling, which implies it was a free fall, we can easily calculate the velocity and momentum as we know the acceleration due to gravity... but with what force will he land?

It is not the $mv$ that kills you, but the $\frac{d p}{dt}$. The ground will eventually bring you to rest, in a time $\delta t$. This can be very large for soft materials, like a bouncy castle or sponge, or very short for nearly incompressible materials like concrete and steel. Your momentum change $\Delta p$ between the instant before impact and rest is $0 - mv_{final} = - mv_{final}$. Hence, the ground exerts a force $F$ on you: $$ F = \frac{dp}{dt} \approx -\frac{\Delta p}{\delta t},$$ which is negative as it needs to decelerate you to bring you to rest. $v_{final}$ is calculated with the usual suvat equations. What is the actual physical nature of the force? The resistance of objects to being deformed, hence the $\delta t$ dependence that I mentioned earlier. In essence this boils down to molecular bonds, so it's electromagnetism.
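To put rough numbers on this (my own back-of-the-envelope figures: a sixth storey taken as roughly 18 m, $m = 50$ kg, and two illustrative stopping times), the average force is $mv_{final}/\delta t$:

```python
import math

def impact_force(mass, height, delta_t, g=9.81):
    """Average stopping force |dp/dt| ~ m*v_final / delta_t."""
    v_final = math.sqrt(2 * g * height)   # free-fall speed just before impact
    return mass * v_final / delta_t

# Assumed numbers: 18 m fall, 50 kg person, two very different surfaces
f_hard = impact_force(50, 18, 0.005)   # ~5 ms stop (concrete): ~1.9e5 N
f_soft = impact_force(50, 18, 0.5)     # ~0.5 s stop (soft landing): ~1.9e3 N
```

The two orders of magnitude between the surfaces come entirely from $\delta t$, which is the whole point of the answer above.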
This regime is very unintuitive and it is indeed seen very rarely in physics, though some superconducting circuits do follow a similar equation. It is complicated to deal with mathematically, as taking the limit $m\to0$ changes the character of the ODE, but it can be done if you're careful about it. This limit is most easily understood through the principle that the particle is always travelling at the local terminal velocity. To make this precise, consider the general equation you pose,$$m\ddot x+b\dot x=F(x).\tag1$$Imagine, to begin with, that $F\equiv F_0$ is constant over some stretch of $x$. The general solution is then of the form$$x(t)=\frac {F_0}b t+x(0)+\frac mb \left(1-e^{-\frac bm t}\right)\left(\dot x(0)-\frac{F_0}b\right),$$and it has two constants of integration, $x(0)$ and $\dot x(0)$, as corresponds to a second-order ODE. However, if the particle is very light (though this limit is, and will be throughout, very hard to take, as $m$ carries dimensional information), then the transient term becomes negligible. For one, it gets multiplied by a very small constant, so it gets smaller and smaller. Most importantly, though, the relaxation time of the exponential, $\tau=m/b$, becomes very short. This means that unless you have very sensitive time resolution, by the time you observe the particle, it will have relaxed into the terminal velocity $v_0=F_0/b$. The same principle applies if the particle now crosses into a region that has a somewhat different force $F\equiv F_1$. The particle will take some time $\sim\tau$ to relax into the new terminal velocity, $v_1=F_1/b$, during which time the second-order character of the driving ODE will be observable, but after that all you'll see is motion at the terminal velocity. Similarly, if $F(x)$ is some "slow" function of position, then the particle will always relax into the local terminal velocity before you've had time to realize that it does have some inertia with which to fight the 'air resistance' provided by $b$.
I know of no mechanical system where such a simplified equation could be expected to hold. If you took a mechanical particle subject to electrostatic, magnetic, gravitational, dipolar, or other such external forces, and made the mass very small, then the granularity of the medium providing the damping would become evident much before you reached this regime. Instead, you'd observe Brownian motion. On the other hand, there is a model for superconducting circuits, and in particular for Josephson junctions in certain circumstances, that does follow this equation. This model is known as the resistively-capacitively shunted junction (RCSJ) model, and you can find OK expositions of it here and here. Essentially, a Josephson junction is a very thin layer of insulating material in between two superconductors. The current through the junction, because of quantum mechanical effects, is given in certain regimes by$$I=I_C\sin\varphi,$$where $I_C$ is a critical current and $\varphi$ is the phase difference between the wavefunctions of the Cooper pair condensates on either side, which grows at a rate governed by the voltage $V=\frac\hbar {q_e}\dot\varphi$ across the junction. On the other hand, any Josephson junction still has some finite resistance (through nontrivial effects) and a very small capacitance. The simplest model that captures this is to put all three in parallel. (Circuit diagram omitted.) The current through the resistor is $I=V/R=\cfrac\hbar{q_eR}\dot\varphi$, whereas the current through the capacitor is $I=\dot Q=C\dot V=\cfrac{C\hbar}{q_e}\ddot\varphi$. If you hook up all three elements to a current source at current $I_0$, you'll get an equation of the form $$I_0=I_C\sin(\varphi)+\cfrac\hbar{q_eR}\dot\varphi+\cfrac{C\hbar}{q_e}\ddot\varphi,$$ and this is of the form (1), where the force is provided by a 'tilted washboard' potential.
As it happens, the relevant limit for this equation is the overdamped case, because the "mass", $\frac{\hbar}{q_e}C$, is proportional to the capacitance, and while this is nonzero it is often (but not always) extremely tiny. Thus, you can make a mechanical analogy for this circuit, interpreting the phase $\varphi$ as the position of a 'particle', and here the inertia is often negligible.
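To make the overdamped ($C\to0$) limit concrete, here is a small numerical sketch of my own (a dimensionless toy, not from the original answer): rescaling time by $\hbar/(q_e R I_C)$ reduces the overdamped RCSJ equation to $\dot\varphi = i_0 - \sin\varphi$ with $i_0 = I_0/I_C$. Below the critical current the phase locks (zero DC voltage); above it, the phase runs.

```python
import math

def phase_evolution(i0, phi0=0.0, tau_end=200.0, dt=1e-3):
    """Euler integration of the overdamped (C -> 0) RCSJ equation
    dphi/dtau = i0 - sin(phi), in dimensionless time tau."""
    phi = phi0
    for _ in range(int(tau_end / dt)):
        phi += (i0 - math.sin(phi)) * dt
    return phi

# Below the critical current (i0 < 1) the phase locks at the stable
# fixed point sin(phi) = i0, so the DC voltage (~ dphi/dtau) vanishes.
phi_locked = phase_evolution(i0=0.5)

# Above the critical current (i0 > 1) there is no fixed point: the
# phase runs down the tilted washboard and a finite DC voltage appears.
phi_running = phase_evolution(i0=1.5)
print(math.sin(phi_locked), phi_running)
```

The 'particle always at terminal velocity' picture from the first part of the answer is exactly what the running-phase solution looks like.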
Why can fire not be considered a living thing?? It possesses all the features that a living being does. Please someone guide me... Note by Jahnvi Verma 4 years, 6 months ago Sort by: Dude, bring your maths school copy and RD tomorrow. Thank God you know, I was getting really bored. Yeah, you must have heard people saying that fire destroys everything in its path and it dies out and stuff like that, right? But first tell me, how would you define a Living Thing? I can explain it to you, but you'll be able to understand it better if I explain it along your lines of thought. Think about it, this could very well turn out to be an interesting thing to ponder over :) I would like to know the definition of Living Thing, Sir Azhaghu. It all depends on the perception of the one who views it, I'm sure you get what I mean, don't you? :P @A Former Brilliant Member – Hm maybe.. Relevant Video Here are a few points: Fire does not have cells. Cells are the main component of something we might consider "living". Fire does not act as a rational agent. 
Fire does not seem to act for its own benefit. It will keep on burning until all of its resources have been consumed, leading to its own destruction, which brings us to... The population of fire does not stabilize based on the resources of its environment. Most organisms (excluding us) use resources in a manner in which the resources they use regenerate as fast as, or faster than, the rate at which they use them. Fire cannot evolve. Although it can be argued that large-scale fires "evolve" in a sense, it's not the same thing. The fire doesn't gain new traits, it just gets bigger. fire vacuum Yeah of course Param, by the way it depends....😁 If there is a full vacuum there will be no gas, it will not ionise, electrons will not move, and we will see no effect on the photographic plate. And (2) I didn't understand. Param, are you coming today??? I was thinking the same thing. Sharp 2:00 pm. Lollll whatever....... Yeah I'll definitely come. Yaar, first tell me, are you coming tomorrow?? Yaar, I have quite a few problems, so you'll have to come. Yes, for sure. Yaar, you know, A2 without upgradation... a miracle...😜 Ohkkk. Hello. Nonsense. Yeah... Plzz. Surely... I didn't even open it 😒. I'm not even getting through NCERT, let alone anything else. Yes, that's fine. You consider one direction to be positive and the other negative, whichever you want. I am Param only. That's okay, but in Pradeep they have taken I positive in both cases. Use the derivative and double derivative in Q2. I have done exactly that, but the answer is weird 😝.. Can you send me the solution?? (Send**, not "and".) See part (a). That was not mine 😒 and right now Mum has taken it away. Never mind, just tell me anyway, please. And I don't understand how we can express instantaneous acceleration in terms of velocity. Yeah, that's the problem. The game should have arrived today. *us. Now tell me question 24 of the NCERT back exercise. Which chapter? Ch 3. Forget it 😜 Now?? How was the paper?? 
Sorry yaar, couldn't reply 😞. If I were in your place, I'd be pretty angry. Haven't even started. Feeling burdened and helpless 😠. There is so much TTN work pending. I can't understand anything you're saying?? Do you really think I'd solve it properly? And I've never had a worse paper than this one. Yes, if only I knew the material.... And yes, it was really very lengthy, that's exactly why it went badly. Okay, I'll talk to you later, I have to eat. Good night then, see you tomorrow. Hehe, I think you can perhaps change this note into Jahnvi Verma's Message Board :P What do you say?? Yaar, this is not done. Yaar, this isn't how it works, those people are crazy, and anyway they should mind their own business. And if those people say useless things again, tell me, I'll take care of it. And there's no insult in anything, don't be sad; insults are Mr. Bansal's lot anyway... Yaar, this will just keep going on, something will have to be done, and the fever is fine now. You'll see that for yourself. Yaar, typing here is very irritating. I'll be there by 10:30, when will you come?? ......... ...gnsd. Yaar, please give your email address, we'll talk there. Did 2 in the back exercise, haven't done anything else 😢. May I call you on your Lumia? This Brilliant is too slow, or come on Google. And on page 1/104 there's that vernier question. Did you see page 1/106? What was I saying 😒😐 I'm sorry.. Please tell me which chapter. Yaar, if the lift is at rest, then first take u = 49, a = -10 or -9.8, and displacement 0. Put all of these into the formula s = ut + ½at². Then you'll get the time. Did you get that much?? Hey, are you online?? Yes, yes. The time comes to around 10 s... then when the lift starts moving, the relative displacement for the boy and the ball will stay the same, so again the time will be the same. That means, suppose the boy was at 0 at first, then the ball went to 5; if the boy was at 2 m, then the ball went to 7 m; the relative displacement is the same. Okay? 
Now I don't understand how it will stay the same. Just wait 1 min. When it comes back, how will it stay the same? Yaar, you're not getting it: when the ball comes down, the lift is still going up. That's what I'm doing. At least I'll get the zeros right. I knew you'd say exactly that 😒 and you know, way too much nonsense is going on; I don't feel like studying at all. Have had a fever for the last two days 😞. Yaar, do you think I'm crazy? It's nothing like that, and come to TTN at 12:30 today, there are problems. Yaar, didn't get it.
FINAL EDIT: There is one main question left: according to the answer, we have chosen $\theta=1$, where we could choose $0<\theta<\infty$ as we like. Is this sufficient, if we regard the convergence to $\theta$ as a finite positive random variable? This post seems long, but almost everything is proved except the last step. The unknown part is marked especially. Assumptions Given a Lévy process $U_{t}$ with $E(U_t)=0$ (then $U_t$ is a martingale). Let $U_t$ have finite variance with $Var(U_1)=\sigma^{2}$, so that the limit theorem holds: \begin{align} F_t:=\sqrt{t}\left(\frac{U_t}{t}-E(U_1) \right)=\frac{U_t}{\sqrt{t}}\xrightarrow{d}\mathcal{N}(0,\sigma^{2})\quad as \,\,t\rightarrow \infty.\tag1 \end{align} Let $K_t$ be a non-decreasing positive ($K_{t}>0$ a.s.) process with càdlàg paths with the property that $K_{t}\rightarrow \infty$ almost surely as $t\rightarrow \infty$. I want to show that \begin{align} F_{K_t}:=\frac{U_{K_t}}{\sqrt{K_{t}}} \xrightarrow{d}\mathcal{N}(0,\sigma^{2})\quad as \,\,t\rightarrow \infty. \tag2 \end{align} For this one requires a positive non-random càdlàg function $a(t)$ with $a(t)\rightarrow \infty$ as $t\rightarrow \infty$ such that \begin{align} \frac{K_{t}}{a(t)}\rightarrow \theta\quad P\, a.s. \tag3 \end{align} holds, where $\theta$ is a positive finite random variable. Then the convergence in distribution $F_{t}\xrightarrow{d} \mathcal{N}(0,\sigma^{2})$ implies the convergence in distribution $F_{K_t}\xrightarrow{d} \mathcal{N}(0,\sigma^{2})$. The suggestion for how it should be proved is given in the post below: here he takes $\theta=1$, so that we have $K_{t}\in ((1-\epsilon)a(t),(1+\epsilon)a(t))$ a.s. as $t\rightarrow \infty$. 
Is this legit, given that in general $\theta$ is a positive ($\theta>0$) finite random variable? For small $m$ we have$$P(U_{K_t}<x\sqrt{K_t})\leq P\left(K_{t}\notin ((1-\epsilon) a(t),(1+\epsilon) a(t))\right)+P\left(U_{a(t)}<x\sqrt{(1+\epsilon)a(t)}+m\cdot \sqrt{\epsilon a(t)}\right)+ P\left(\sup_{s\in ((1-\epsilon)a(t),(1+\epsilon)a(t))}|U_{s}-U_{a(t)}|>m\cdot \sqrt{\epsilon a(t)}\right)$$The first term converges to 0 due to (3). The second term converges to $\Phi(x+m)$ (Why?) by the central limit theorem (1). Due to the martingale maximal inequality the third term is bounded by $$\frac{1}{(m\cdot \sqrt{\epsilon a(t)})^{2}}$$ and tends to zero as $a(t)\rightarrow \infty$. Why should this prove (2)? So far we only have that the distribution $P(U_{K_{t}}/\sqrt{K_t}\leq x)$ is bounded by $\Phi(x+m)$. A lower bound converging to $\Phi(x)$ is necessary, I guess? Idea: $$ P(U_{K_t}<x\sqrt{K_t})\geq P(U_{K_t}<x\sqrt{K_t},|U_{a(t)}-U_{K_t}|<m\sqrt{\epsilon a(t)},K_{t}\in ((1-\epsilon)a(t),(1+\epsilon)a(t)))\\ \geq P(U_{a(t)}<x\sqrt{(1-\epsilon)a(t)}-m\sqrt{\epsilon a(t)}) \\ \rightarrow \Phi(x-m) $$ And we have it sandwiched, converging to $\Phi(x)$, right? Btw: the way to arrive at the upper bound is $$ P(U_{K_t}<x \sqrt{K_t})\\ \leq P[U_{K_t}<x \sqrt{K_t},K_{t}\in((1-\epsilon) a(t),(1+\epsilon) a(t))]+P[U_{K_t}<x\sqrt{K_t},K_{t}\notin ((1-\epsilon) a(t),(1+\epsilon) a(t))] \\ \leq P[K_{t}\notin ((1-\epsilon) a(t),(1+\epsilon) a(t))]+ P[U_{K_t}<x \sqrt{K_t},K_{t}\in((1-\epsilon) a(t),(1+\epsilon) a(t))] \\ \leq P[K_{t}\notin ((1-\epsilon) a(t),(1+\epsilon) a(t))] \\ +P[U_{K_{t}}<x\sqrt{(1+ \epsilon)a(t)},|U_{K_t}-U_{a(t)}|\leq m\sqrt{\epsilon a(t)}]+P[|U_{K_t}-U_{a(t)}|> m\sqrt{\epsilon a(t)}] \\ \leq P[U_{a(t)}<x\sqrt{(1+\epsilon)a(t)}+m \sqrt{\epsilon a(t)}] + P\left(\sup_{s\in ((1-\epsilon)a(t),(1+\epsilon)a(t))}|U_{s}-U_{a(t)}|>m\cdot \sqrt{\epsilon a(t)}\right)+P\left(K_{t}\notin ((1-\epsilon) a(t),(1+\epsilon) a(t))\right) $$
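As a sanity check on the claimed limit (not a proof), here is a small Monte Carlo sketch of my own: I take a concrete centered Lévy-type process (a centered random walk observed at integer times, standing in for $U_t$, with $E(U_1)=0$ and $Var(U_1)=1$) and a random time $K_t\approx\theta\,a(t)$ with $\theta$ uniform on $(0.5,1.5)$, and check that $U_{K_t}/\sqrt{K_t}$ has approximately the $\mathcal N(0,1)$ mean and variance:

```python
import math
import random
import statistics

random.seed(0)

def levy_at(n):
    # Sum of n iid centered Exp(1) - 1 increments: a (non-Gaussian)
    # Lévy process observed at integer times, E U_1 = 0, Var U_1 = 1.
    return sum(random.expovariate(1.0) - 1.0 for _ in range(n))

t = 400
samples = []
for _ in range(5000):
    theta = 0.5 + random.random()   # positive finite random limit theta
    k = max(1, int(theta * t))      # random time K_t ~ theta * a(t)
    samples.append(levy_at(k) / math.sqrt(k))

# F_{K_t} = U_{K_t} / sqrt(K_t) should be approximately N(0, 1)
print(statistics.mean(samples), statistics.pvariance(samples))
```

Of course this only probes one choice of $U$, $K$ and $\theta$; it says nothing about the general argument above.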
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
A co-$A_n$ space is a based space $Y$ equipped with a co-action by the Stasheff associahedron operad $K_\bullet$. This means that $Y$ comes with certain maps $c_n: Y \times K_n \to Y^{\vee n}$, $n = 2,3,\dots$ that are inductively described (the definition of $c_n$ uses $c_{n-1}$ as input; the map $c_2$ is a co-$H$ structure). The suspension of a based space $X$ has the structure of a co-$A_\infty$ space. Assume $Y$ is $2$-connected and has the homotopy type of a finite complex. Then Schwaenzl, Vogt and I showed that a co-$A_\infty$ space $Y$ desuspends to a space $X$ in the sense that there's a weak equivalence $\Sigma X \simeq Y$. However we didn't try to check that the given weak equivalence is compatible in the co-$A_\infty$ sense. Part of the problem is that a morphism $f: Y \to Z$ of co-$A_\infty$ spaces should amount to a co-$A_\infty$-structure on its mapping cylinder restricting to the given ones on $Y \times 1$ and $Z$. However, this doesn't form a category: it's an $\infty$-category. Now to my questions: Question 1: is there a documented proof somewhere that the functor which assigns to a based space $X$ its suspension (considered as a co-$A_\infty$ space) induces an equivalence between the homotopy category of $1$-connected spaces and $2$-connected co-$A_\infty$ spaces? Presumably, such a proof should be Hilton-Eckmann dual to one of the main results in the book of Boardman and Vogt. Question 2: Do function spaces coincide up to weak equivalence under this functor? That is, is the map $$\hom_{\text{Top}_*}(X,X') \to \hom_{\text{co-}A_\infty}(\Sigma X,\Sigma X')$$ a weak equivalence under suitable hypotheses on $X$ and $X'$? By $\hom$ in each case, I mean topologized mapping spaces. How would one go about proving a result like this?
I am looking at the following image (from http://www.schoolphysics.co.uk/age16-19/Atomic%20physics/X%20rays/text/X_ray_spectra/index.html): and I want to know, of the two curves, which one represents the element with the higher atomic number. That is, I understand that X-rays are scattered and that the peaks are characteristic of a material. What I am less clear on is why one curve should be above another relative to their atomic numbers (I am going to assume the cutoff voltage on the left is identical even though in this picture it isn't). The reason I am confused is that I am told that $\frac{1}{\sqrt{\lambda}} \propto Z$, which is all very well, but on this graph it would seem to indicate that the upper line is a heavier (higher Z) material. Is that the case, or is intensity actually proportional to $\frac{1}{\sqrt{\lambda}}$ as well? I would have thought the lower curve would be a higher Z since it usually takes less energy to knock electrons off of elements further up the periodic table. Anyhow, whatever clarification might be offered is appreciated.
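For reference, Moseley's law for the $K_\alpha$ line can be written $1/\lambda = \tfrac34 R_\infty (Z-1)^2$, so higher $Z$ means a shorter characteristic wavelength. A quick numerical sketch (the constant is approximate, and this addresses the position of the characteristic peaks, not the height of the continuum):

```python
R_INF = 1.097e7  # Rydberg constant in 1/m (approximate)

def k_alpha_wavelength(Z):
    """Moseley's law for the K-alpha line: 1/lambda = (3/4) R (Z-1)^2."""
    return 1.0 / (0.75 * R_INF * (Z - 1) ** 2)

# Copper (Z = 29) has a shorter K-alpha wavelength than iron (Z = 26):
w_cu = k_alpha_wavelength(29)
w_fe = k_alpha_wavelength(26)
print(w_cu, w_fe)   # w_cu comes out near 0.15 nm
```

So Moseley's law fixes where the peaks sit along the wavelength axis; it says nothing by itself about which continuum curve lies higher.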
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' ("The 'path' only comes into being through the fact that we observe it.") Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric. 
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap) I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
Recently my work has led me to consider octonion algebras. Not having much of a background with non-associative anything, I decided to check out a basic text on the subject, R.D. Schafer's Introduction to Nonassociative Algebras. I was reading it happily for a good while, until I got to the discussion of alternative algebras. A little background: let $A$ be an arbitrary $F$-algebra: i.e., just endowed with an $F$-bilinear product $A \times A \rightarrow A$, no further assumptions. In analogy to the more familiar commutator, it is also useful to define for any $x,y,z \in A$ the associator $[x,y,z] = (xy)z - x(yz)$, the obvious point being that an algebra is associative iff all of its associators identically vanish. But there is a more subtle merit to this: the associator is an $F$-trilinear map from $A^3$ to $A$. From this it follows that it is entirely determined by its values on any $F$-basis $\{e_i\}_{i \in I}$ of $A$. And from that it follows that associativity can be checked on basis elements and moreover that associativity is faithfully preserved by scalar extension: clearly any trilinear map on an $F$-vector space is identically zero iff its extension to some field extension $K/F$ is identically zero. On to alternativity: an $F$-algebra $A$ is said to be alternating if for all $x,y \in A$, $[x,x,y] = [x,y,y] = 0$. (These two identities are easily seen to imply the flexible law $[x,y,x] = 0$.) Note however that these identities are not multilinear any more: e.g. the left alternator $[x,x,y]$ is quadratic in $x$ and linear in $y$. Thus both of the above consequences of multilinearity are in question: is it sufficient to check left alternativity on basis elements, and is a scalar extension of an alternative algebra necessarily alternative? Presumably the first question has a negative answer. Compare for instance the quadratic form $q(x,y) = xy$: it vanishes on the two standard basis elements of $F^2$ yet is nondegenerate. 
What we need to do is linearize, i.e., replace the quadratic form with the associated bilinear form. In the present context, this amounts to replacing the alternating condition with the skew-symmetric condition, i.e.,: for all $x,y,z \in A$, $[x,y,z] = -[y,x,z] = [y,z,x]$. The skew-symmetry condition looks much better: as a pair of equalities among trilinear maps, again it suffices to check it on basis elements and again it is faithfully preserved by scalar extension. As is well-known, alternation implies skew-symmetry and the converse holds when $\frac{1}{2} \in F$. But what about when $F$ has characteristic $2$? In this case, unfortunately (and somewhat embarrassingly) I am not even seeing why if $A$ is an alternating $F$-algebra and $K/F$ is a field extension, then $A_K = A \otimes_F K$ is an alternating $K$-algebra. Schafer does address this in his book: for $x \in A$, we have the left and right multiplication operators $L_x, R_x$ as elements of $\operatorname{End}_K(A)$. Then equation (3.1) asserts that left and right alternating laws are equivalent to $L_{x^2} = (L_x)^2$ and $R_{x^2} = (R_x)^2$. He also says that the skew-symmetry of the associator is equivalent to (3.2), which is: $R_x R_y - R_{xy} = L_{xy} - L_y L_x = L_y R_x - R_x L_y = L_x L_y - L_{yx} = R_y L_x - L_x R_y = R_{yx} - R_y R_x$. (I have no problem with these identities.) Then he says (on the top of p. 28) that "It follows from (3.1) and (3.2) that any scalar extension of an alternative algebra is alternative." Unfortunately I don't follow. Can someone help me out?
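Not an answer to the characteristic-2 question, but the octonion case that motivated it can at least be checked numerically. Here is a small sketch using one standard Cayley–Dickson doubling convention, $(a,b)(c,d) = (ac - \bar d b,\, da + b\bar c)$ (conventions vary; this is an assumption on my part), verifying that random octonions satisfy the alternative laws $[x,x,y]=[x,y,y]=0$ while a generic associator $[x,y,z]$ does not vanish:

```python
import random

def conj(u):
    # Cayley-Dickson conjugation: (a, b)* = (a*, -b)
    if len(u) == 1:
        return u[:]
    n = len(u) // 2
    return conj(u[:n]) + [-t for t in u[n:]]

def mul(u, v):
    # Cayley-Dickson product: (a, b)(c, d) = (ac - conj(d) b, da + b conj(c))
    if len(u) == 1:
        return [u[0] * v[0]]
    n = len(u) // 2
    a, b, c, d = u[:n], u[n:], v[:n], v[n:]
    left = [p - q for p, q in zip(mul(a, c), mul(conj(d), b))]
    right = [p + q for p, q in zip(mul(d, a), mul(b, conj(c)))]
    return left + right

def associator(u, v, w):
    return [p - q for p, q in zip(mul(mul(u, v), w), mul(u, mul(v, w)))]

random.seed(1)
x = [random.uniform(-1, 1) for _ in range(8)]
y = [random.uniform(-1, 1) for _ in range(8)]
z = [random.uniform(-1, 1) for _ in range(8)]

# alternative laws hold (up to float round-off) ...
print(max(abs(t) for t in associator(x, x, y)))
print(max(abs(t) for t in associator(x, y, y)))
# ... but full associativity fails for a generic triple
print(max(abs(t) for t in associator(x, y, z)))
```

Over $\mathbb{R}$ this is of course no substitute for the basis-checking and scalar-extension questions above, which are precisely about what happens away from characteristic 0.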
Simple question about rent split If the rent for a house is £895 per month and is being rented by 2 people, and one has accepted to pay £20 more per week, what is the monthly payment for each of those people? Is it going to be £407.50 for one and £487.50 for the other OR £367.50 and £527.50 for the other? I'm pretty sure it's the first statement. The second has been calculated by my mate who is arguing that £895/2 = 447.50 and £447.50 - £80 = £367.50 and £447.50 + £80 = £527.50. My logic is £895 - 80 = £815 £815 / 2 = £407.50 £407.50 + £80 = £487.50 for one and the rest for the other. First $\displaystyle X+Y=895$ and $\displaystyle X=Y+20\cdot 4=Y+80$ $\displaystyle 2Y+80=895$ $\displaystyle Y=407.5$ The first statement is true That's exactly what I mean. How can I explain this in the simplest way? My mate doesn't understand algebra, I assume. He doesn't understand that you shouldn't divide the whole amount and then take £80 from one share and add £80 to the other, but rather £40 and £40, so that the gap equals £80. Both methods can be made to work, but I would first question the assumption that you both make that a month contains 4 weeks. If you work from the premise that a month has 4 weeks, there is a simple way to explain it. An even split would be $\dfrac{895}{2} = 447.5.$ No difficulty there. Now compute $447.5 + 40 = 487.5.$ Straight forward, no algebra. And compute $447.5 - 40 = 407.5.$ So if you pay 487.5 and your mate pays 407.5 that sums to $487.5 + 407.5 = 895.0.$ Simple arithmetic. And what is the difference in the rent paid? $487.5 - 407.5 = 80 = 4 \times 20.$ Simple arithmetic. If you ignore how you got the answer and just explain why the answer pays the rent and has you paying 80£ more, all you need is very basic arithmetic. So this is the correct answer if what your agreement REALLY involved was a differential of 80 quid a month. 
But it is not correct if your agreement REALLY involved a differential of 20 quid a week, because the answer calculated assumes that there are only 48 weeks in the year when in fact there are just about exactly 52. If the latter is the case, then here is what is called for. Your monthly payment $= 490.83.$ Your mate's monthly payment $= 895 - 490.83 = 404.17.$ Now you are paying together $ 490.83 + 404.17 = 895.$ And you are paying $490.83 - 404.17 = 86.66$ more each month. That makes sense because there are a bit more than 4 weeks in the average month. Now let's see how it works out over a year. Annually, you pay $12 \times 490.83 = 5889.96$ Annually, your mate pays $12 \times 404.17= 4850.04.$ Notice the sum totals $10740 = 12 \times 895.$ In total, you pay $5889.96 - 4850.04 = 1039.92$ more in a year. And $52 \times 20 = 1040.$ At the end of the year you owe him 8p. Quote: But 527.50- 367.50= £160, not £80. To get one that is £80 more than the other you need to add and subtract £80/2= £40 to and from £447.50. Then you would get £447.50- £40= £407.50 and £447.50+ £40= £487.50 as in the first method. Quote: Round up the damn rent to 900. 410 + 490 = 900. Flip a coin to see who gets the extra 5 !! Copyright © 2019 My Math Forum. All rights reserved.
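Both readings of the agreement above can be written out in a few lines of arithmetic (a sketch of the two computations, nothing more):

```python
rent = 895.00
extra_per_week = 20.00

# Reading 1: treat the agreement as an 80 GBP monthly differential
# (4 weeks per month): split evenly, then shift 40 each way.
half = rent / 2
a, b = half + 40, half - 40        # 487.50 and 407.50
print(a, b, a - b)                 # the gap is 80, the sum is the rent

# Reading 2: honour 20 GBP/week over a 52-week year,
# i.e. 52/12 weeks per month on average.
diff_per_month = extra_per_week * 52 / 12   # about 86.67
a2 = (rent + diff_per_month) / 2
b2 = rent - a2
print(round(a2, 2), round(b2, 2))
```

The only real question is which of the two agreements was actually made; the arithmetic itself is trivial either way.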
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... 
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Modular construction of special mixed quantum states Abstract. For a homogeneous quantum network of N subsystems with n levels each we consider separable generalized Werner states. A generalized Werner state is defined as a mixture of the totally mixed state and an arbitrary pure state \(\vert\psi\rangle\): \(\hat{\rho}_{Werner} = (1-\epsilon)\,\hat{1}/n^N+\epsilon\vert\psi\rangle\langle\psi\vert\) with a mixture coefficient \(\epsilon\). For this density operator \(\hat{\rho}_{Werner}\) to be separable, \(\epsilon\) will have an upper bound \(\epsilon_{sep}\leq1\). Below this bound one should alternatively be able to reproduce \(\hat{\rho}_{Werner}\) by a mixture of entirely separable input states. For this purpose we introduce a set of modules, each contributing elementary coherence properties with respect to a generalized coherence vector. Based on these there exists a general step-by-step mixing process for any \(\epsilon_{mix}\leq\epsilon_{max}\). For \(\vert\psi\rangle\) being a cat state it is possible to define an optimal process, which produces states right up to the separability boundary (\(\epsilon_{max} = \epsilon_{sep}\)). Keywords: Coherence, Optimal Process, Quantum State, Mixed State, Pure State References 1. D. Bouwmeester, A. Ekert, A. Zeilinger, The Physics of Quantum Information (Springer, Berlin, 2000) 2. 3. 4. 5. M. Nielsen, I. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000) 6. E. Schrödinger, Naturwissenschaften 23, 807 (1935) 7. 8. 9. 10. 11. R. Jozsa, N. Linden, quant-ph/0201143 (2002) 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. A. Otte, Ph.D. thesis, University of Stuttgart, http://elib.uni-stuttgart.de/opus/volltexte/ 2001/921/ (2001) 22. 23. 24. 25. 26. M. Lewenstein, B. Kraus, J. Cirac, P. Horodecki, Phys. Rev. A 62, 052310 (2000) 27. 28. 29. 30. 
31. On the other hand, classical simulation of quantum computation may well be efficient: cf. the Gottesman–Knill theorem in [5], p. 464 (2000) 32. G. Mahler, V. Weberruß, Quantum Networks, 2nd edn. (Springer, Berlin, 1998) 33. 34.
In a mathematical sense, (differential) topology is imposed on the space-time; it is the "background" of the theory, and the metric cannot change the topology of the background manifold. I.e. every metric should come with a "manual of topology" which specifies things such as coordinate ranges and identified coordinate values. However, this is not always the case: coordinates such as $\phi$ are taken with the $(0,2 \pi]$ identification without it being said, and in many cases there are multiple topological interpretations of a metric even after imposing the conditions sketched below. (btw. see what happens when you do not do the $(0,2 \pi]$ thing.) Nevertheless, the properties of the metric "feed back" into the manifold topology through the person studying the metric, who chooses a different background topology based on physical motivation or convenience. A few examples. Consider the Schwarzschild metric: $${\rm d}s^2 = -(1-2M/r) {\rm d}t^2 + \frac{1}{1-2M/r} {\rm d}r^2 + r^2 {\rm d}\Omega^2$$ By physical argumentation we find out that there is something fishy about the horizon: it has zero volume, particles which fly through it pass through $t=\infty$ but at finite proper time, etc. The coordinate transformation known as Kruskalization (see Kruskal-Szekeres coordinates), which makes the horizon regular, actually changes the differential structure of the manifold. (alternatively see Eddington-Finkelstein coordinates) Another canonical example would be the Kerr metric in Boyer-Lindquist coordinates representing a spinning black hole. There one finds a singularity at the disc $r=0$ with a field jump in the interior parts of the disc $r=0, \theta \neq \pi/2$ and a genuine metric singularity at the edge of the disc $r=0, \theta=\pi/2$. The field jump on the interior, when confronted with the Einstein equations, would mean negative matter density. 
On the other hand, one can choose to avoid the negative matter density by introducing different topology, and by saying that by going through the disc one enters a new $r<0$ region. I.e. going through the "top" of the disc $r=0, \theta<\pi/2$ one does not enter the "bottom" of the disc $r=0, \theta>\pi/2$ but a completely new region and vice versa. In the case of Schwarzschild introducing an $r<0$ region would be contrived; all geodesics end in $r=0$ and there would be no causal communication with $r<0$ whatsoever. However, the nature of the $r=0$ singularity in Kerr allows one to change the topology in a physically natural sense. It is then possible to give physically motivated conditions on the singularities which are "terminal" and which are not allowed at all. Every mathematician knows from theorems such as the hairy-ball theorem that restrictions on singularities of differential systems restrict the allowed topology. It is also in this sense that the properties of the metric such as asymptotic flatness along with such conditions can restrict the topology of the manifold. The exact-solver of Einstein equations then usually starts with one coordinate patch and tries to solve the equations without encountering the "wrong" singularities - this, on the other hand, already determines the topology and I believe it is exactly in this sense in which you have encountered the interactions of the metric/topology.
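One way to see concretely that the Schwarzschild horizon discussed above is only a coordinate singularity: the metric component $g_{rr}$ blows up at $r=2M$, while the Kretschmann curvature invariant, $K = 48M^2/r^6$ for Schwarzschild (a standard result), stays finite there. A small numerical sketch in units with $M=1$:

```python
M = 1.0

def g_rr(r):
    # Schwarzschild metric component; diverges at the horizon r = 2M
    return 1.0 / (1.0 - 2.0 * M / r)

def kretschmann(r):
    # Curvature invariant K = R_abcd R^abcd = 48 M^2 / r^6
    return 48.0 * M**2 / r**6

for r in (2.1, 2.01, 2.001):
    print(g_rr(r), kretschmann(r))
# g_rr grows without bound, while K smoothly approaches
# 48 M^2 / (2M)^6 = 0.75 at the horizon.
```

The genuine singularity at $r=0$, by contrast, makes $K$ itself diverge, and no change of coordinates (or of background topology) can cure that.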
On the Mathematics of Fast Food

Social media is the worst. Every time I log on to Facebook, I'm inundated with half-baked political arguments and advertising masquerading as memes. The latest one is totally not an advertisement for McDonald's. It claims that 98% of people cannot solve a certain math problem. Solving these problems is easy if you have the right tool. The problem is that most people simply never learned what that is. That tool is called linear algebra. The first thing we have to do is get past how stupid the phrase "u r a genius" looks and capture the essence of the system. What really matters here is the first 3 lines describing the relationship of soft drinks, hamburgers, and fries to some unitless numeric value which I assume is associated with some Satanic ritual. It isn't dollars, as these values would make hamburgers cost less than soft drinks, which doesn't make any sense. Satan BitCoins work differently than the currencies we're used to. \(3\,\text{soft drinks} \rightarrow 30\) \(1\,\text{soft drink} + 2\,\text{hamburgers} \rightarrow 20\) \(1\,\text{hamburger} + 2\,\text{fries}\rightarrow 9\) For ease of communication, let's call "soft drinks" \(x\), "hamburgers" \(y\) and "fries" \(z\). We can write our system as a set of 3 simple addition problems. The absence of \(y\) and \(z\) from the first equation means their coefficients are \(0\); let's write them out anyway for consistency. In math, we like to be concise. The system of equations above has way too much redundancy. We'll factor out \(x\), \(y\), and \(z\) into a column vector and give it the totally-not-confusing name \(\vec{x}\), leaving us with just the numbers in \(\mathbf{A}\). Let's do the same thing with the "answer" numbers and call it \(\vec{b}\). Now we rewrite the whole system into some matrix multiplication. Don't worry -- it's definitely the same thing as above.
\(\underset{\mathbf{A}}{\underbrace{ \begin{pmatrix}3 & 0 & 0\\ 1 & 2 & 0\\ 0 & 1 & 2 \end{pmatrix}}} \underset{\vec{x}}{\underbrace{\begin{pmatrix}x\\y\\z\end{pmatrix}}} = \underset{\vec{b}}{\underbrace{\begin{pmatrix}30\\20\\9\end{pmatrix}}} \) The key to all this manipulation is to be able to write this system of equations as a simple one: \(\mathbf{A}\vec{x}=\vec{b}\). Remember, what we're really looking for is \(\vec{x}\) (the values for \(x\), \(y\), and \(z\)), which mathematicians usually write as \(\vec{x}=?\). To do this, we need some manner of getting \(\vec{x}\) by itself, which involves manipulating \(\mathbf{A}\) in some way. That way is the reciprocal, which is the value you multiply a number by to get \(1\). For example, the reciprocal of \(5\) is \(\frac{1}{5}\), which can also be written as \(5^{-1}\). For matrices, we also have reciprocals, but the result of \(\mathbf{A}\mathbf{A}^{-1}\) is usually called \(\mathbf{I}\) instead of just \(1\) (unfortunately, \(\mathbf{1}\) was already taken). In any case, \(\mathbf{I}\) means the same thing as \(1\): Multiplying a matrix with \(\mathbf{I}\) gives you back the same matrix. Let's manipulate our equation to figure out what the hell \(\vec{x}\) is. \(\mathbf{A}^{-1}\mathbf{A}\vec{x}=\mathbf{A}^{-1} \vec{b}\) \(\mathbf{I}\vec{x}=\mathbf{A}^{-1} \vec{b}\) Now all we need is some value for \(\mathbf{A}^{-1}\). The mechanism for discovering this value involves the rote memorization of a boring algorithm. Instead of taking the time to learn the technique, let's just ask Wolfram|Alpha, which happily tells us the answer. I have no idea what the meaning of \(\mathbf{A}^{-1}\) is. All I know is that if I multiply \(\mathbf{A}^{-1}\) by \(\mathbf{A}\), I will get \(\mathbf{I}\). So: What's the correct answer to the original problem? Who cares? Go outside and have fun.
Stop trying to prove your intellect to other people over the internet. Just use Wolfram|Alpha to find the value of \(\vec{x}\). The result is the associated numeric value for soft drinks (\(x=10\)), hamburgers (\(y=5\)), and fries (\(z=2\)). Since we have numbers, a solution probably exists (if you can figure out how to solve \(y + x \times z\) for known values of \(x\), \(y\), and \(z\)). Seem cool? Probably not. This problem is mathematically uninteresting, but there are plenty of interesting problems in the world of math! The important takeaway here is that knowing math does not make someone more or less intelligent. All it means is that they know how to perform some process and speak the appropriate jargon. Linear algebra was not taught in schools until very recently, and usually only in "higher-level" math courses, so not knowing how to approach things like this is perfectly normal. And don't make fun of people who answer with the value of \((y+x) \times z\). Order of operations is a byproduct of representation which is completely irrelevant in the day-to-day life of \(99.999\%\) of people. Stop being an elitist cunt because you can remember something that someone else can't. Remembering doesn't make you special. Nothing makes you special. All you are is food that hasn't died. Or maybe just dust in the wind. I don't know your life.
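For readers who would rather script this than ask Wolfram|Alpha, here is a minimal sketch using NumPy. The matrix and right-hand side are the ones from the post; `np.linalg.solve` factors \(\mathbf{A}\) rather than explicitly forming \(\mathbf{A}^{-1}\), which is the numerically preferred route:

```python
import numpy as np

# Coefficient matrix A and right-hand side b from the puzzle:
# 3 soft drinks -> 30, 1 soft drink + 2 hamburgers -> 20, 1 hamburger + 2 fries -> 9
A = np.array([[3, 0, 0],
              [1, 2, 0],
              [0, 1, 2]], dtype=float)
b = np.array([30, 20, 9], dtype=float)

# Solve A x = b without ever computing A^-1 explicitly
x = np.linalg.solve(A, b)
print(x)  # x ≈ [10, 5, 2]
```

If you really want \(\mathbf{A}^{-1}\) itself, `np.linalg.inv(A)` will give it, but for solving a single system `solve` is both faster and more accurate.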
Solving the acoustical wave equation $$ c_0^2\partial_{xx}p-\partial_{tt}p=0 $$ using the forward-time centered-space FDM is not very convenient because of numerical dispersion etc. What about using a little physics and solving this as a system of first-order equations (the linearized Euler equations), i.e.: $$ \nabla p = -\rho_0\partial_t\vec{v} $$ $$ \partial_t\rho+\rho_0\nabla \cdot\vec{v}=0 $$ $$ c_0^2\partial_t \rho=\partial_tp $$ In the 2D case, let's denote by $u$, $v$ the components of the velocity vector, let $\Delta x \equiv \Delta y$, and discretize: $$ p_{i,j}^{n+1}=p_{i,j}^{n}+\frac{\rho_0 c_0^2 \Delta t}{\Delta x}\left(u_{i,j}^{n}-u_{i,j+1}^{n} + v_{i,j}^{n}-v_{i+1,j}^{n} \right) $$ $$ u_{i,j}^{n+1}=u_{i,j}^{n}+\frac{\Delta t}{\rho_0 \Delta x}\left(p^{n}_{i,j}-p^{n}_{i,j+1} \right) $$ $$ v_{i,j}^{n+1}=v_{i,j}^{n}+\frac{\Delta t}{\rho_0 \Delta x}\left(p^{n}_{i,j}-p^{n}_{i+1,j} \right) $$ Is that correct? Would that help to get a solution with lower numerical noise, dispersion etc.? Would you use forward discretization as above or a centered one? What would the CFL condition look like for this case?
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate? I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol. It just seems like this argument is all about the sets of n-simplices. Which is the trivial part. lol no i mean, i'm following it by context actually so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side @user1732 haha thanks! we had no idea if that'd actually find its way to the internet... @JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels @JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC @IlaRossi i would imagine that this is in goerss--jardine? 
ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes @JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81 @HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary) @JonathanBeardsley what?! i really liked that picture! i wonder why they removed it @HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world @HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)? i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the fappendix here: arxiv.org/pdf/1510.03525.pdf as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit. 
combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$ @JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat) I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open not put all my eggs in one basket, as it were I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality @JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak). There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k... 
@JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad It's enough to show everything works for generating cofaces and codegeneracies the codegeneracies are free, the 0 and nth cofaces are free all of those can be done treating frak{C} as a black box the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit for codimension 1 face inclusions the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation). > Thus, using appropriate tags one can increase ones chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc) or worse just newly created tags, one might miss a chance to give visibility to ones question. I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers. You are asking posts far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself. 
(Other than possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.) I just wanted to mention this, in case it helps you when asking question here. (Although it seems that you're doing fine.) @MartinSleziak even I was not sure what other tags are appropriate to add.. I will see other questions similar to this, see what tags they have added and will add if I get to see any relevant tags.. thanks for your suggestion.. it is very reasonable,. You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them.
This answer tries to be a guide for new members of the (chemical) scientific community with instructions on how to apply the right formatting when writing chemical and mathematical equations. The main problem most members struggle with is: which symbols are written in roman (upright) font?

The Solution

As a rule of thumb: symbols representing physical quantities or mathematical variables are the only things written in italic type.

Guidelines

None of the following is written in italics:

- Unit symbols, e.g. $\mathrm{kg}$, $\mathrm{kJ}$, $\mathrm{mol}$, $\mathrm{K}$
- Chemical formulae. These are written in roman font automatically when using the mhchem commands, i.e. $\ce{...}$.
- Function names such as $\sin$, $\cos$, $\log$, i.e. \sin, \cos, \log. (MathJax recognizes standard mathematical function names and automatically applies the correct style to them.)
- Mathematical constants, the values of which never change, e.g. $\mathrm{e} = 2.718\,281\,8\ldots$, $\mathrm {i}^2 = -1$. (Note that this convention for $\mathrm e$ and $\mathrm i$ is recommended by ISO 80000, IUPAC, NIST, and ACS, but italicizing these expressions is also very common.)
- Descriptive indices such as $_\text{ox}$, $_\text{red}$ or $_\text{tot}$
- Descriptive text
- Symbols for mathematical operators, e.g. $\Delta$ in $\Delta x=x_2-x_1$ and each $\mathrm d$ in $\mathrm df/\mathrm dx$ (derivative of $f$ with respect to $x$). Note that $f$ and $x$ are variables in this context. (Note that this convention for differentials and derivatives is recommended by ISO 80000, IUPAC, NIST, and ACS; it is also used in the rules and style conventions presented in The International System of Units (SI). Nevertheless, italicizing these expressions is also very common.)
- Electronic configurations $\mathrm{(1s)^2 (2s)^2 (2p)^4}$

All of the following is written in italics:

- Symbols representing physical quantities, e.g. $m$ for mass or $V$ for volume, including fundamental physical constants (quantities that are considered to be constant under all circumstances), e.g. the Planck constant $h$ and the Faraday constant $F$
- Mathematical variables, e.g. $x$ and $y$, including expressions such as “the $x$ axis”
- Iterative variables such as $i$ in a sum
- Parameters, such as $a$, $b$, etc., which may be considered constant in a particular context
- Locants in chemical-compound names indicating attachments to heteroatoms, e.g. N,N-dimethylaniline
- Stereochemical descriptors such as (E) or (Z)

Subscripts: When, in a given context, different quantities have the same letter symbol, or when, for one quantity, different applications or different values are of interest, a distinction can be made by use of subscripts. The following principles for the printing of subscripts apply (see also the specific heat capacity $c_p$ under "Examples" below). A subscript that represents a physical quantity or a mathematical variable, such as a running number, is printed in italic type, e.g. the equilibrium constant on a pressure basis $K_p$ and the equilibrium constant on a concentration basis $K_c$. Other subscripts, such as those representing words or fixed numbers, are printed in upright type, e.g. the Avogadro constant $N_\mathrm A$.

There are of course some mixed notations possible:

- Chemical formulae which contain variables, such as the $n$ in the general formula for alkanes ($\ce{C_{$n$}H_{$2n+2$}}$) or the $x$ in this molecular formula for a superconductor: $\ce{LaO_{$1−x$}F_{$x$}FeAs}$
- Point groups, for example $C_n$, $S_{2n}$, $D_n$, $D_{n\mathrm{h}}$, $D_{n\mathrm{d}}$, $C_{n\mathrm{v}}$, $C_{n\mathrm{h}}$, $T$, $T_\mathrm{h}$, $T_\mathrm{d}$, $O$, $O_\mathrm{h}$, $I_\mathrm{h}$, $C_{\infty\mathrm{v}}$, $D_{\infty\mathrm{h}}$.
More detail can be found in the question on the main site: How are point group character tables typeset correctly?

- The symbol $\mathrm pK_\mathrm a$ for the logarithmic acid dissociation constant (read the details under "Examples")

How do I do this?

There are two generic commands that produce roman output, \text{} and \mathrm{}. The main difference between the two commands is how math-mode characters such as the caret ^ or the underscore _ are interpreted: in \text{} they are rendered literally, while in \mathrm{} they are interpreted as usual. The use of \mathrm{} is therefore recommended. (The command \rm should be avoided since it is deprecated and only maintained for backwards compatibility.) The mhchem extension offers two shortcuts, \ce and \pu.

Display math and inline math

There are several possibilities to typeset mathematical formulae. Inline math mode is invoked by bracing the statement with dollar signs, i.e. $...$. A displayed equation, which is centered and typeset a little larger, can be invoked using double dollar signs as braces, i.e. $$...$$. Here it is also possible to force a line break with \\. When dealing with more than one equation, an aligning environment should be used instead, i.e. \begin{align}...\end{align}. The alignment character is &. This can also be used in conjunction with \ce{...} statements.

Examples

Chemical formulae and equations: $$\ce{H2O + HCl <--> H3O+ + Cl-}$$ becomes $$\ce{H2O + HCl <--> H3O+ + Cl-}$$

Units: $$E = 33.4~\mathrm{kJ\, mol^{-1}}$$ becomes $$E = 33.4~\mathrm{kJ\, mol^{-1}}$$ Note that there is a tilde character between the number and the \mathrm{} command, which produces a fixed, non-breaking space.
Or: $$E = \pu{123E6 kJ mol-1}$$ becomes $$E = \pu{123E6 kJ mol-1}$$

Sums and descriptive text: $$m_\text{tot} = \sum_i^N m_i ~\text{for}~N~\text{substances}$$ becomes $$m_\text{tot} = \sum_i^N m_i ~\text{for}~N~\text{substances}$$

Specific heat capacity $c_p$: here the subscript denotes the constant pressure $p$ and is as such written in italics; whereas in the molar heat capacity at constant pressure $C_{\mathrm m,p}$, the subscript $\mathrm m$ does not represent a quantity but the adjective “molar” and is printed in roman (upright) type.

In the symbol $\mathrm pK_\mathrm a$ for the logarithmic acid dissociation constant, the roman symbol $\mathrm p$ is interpreted as an operator ($\mathrm px=-\lg x$), the italic symbol $K$ represents a quantity (the dissociation constant), and the roman subscript $\mathrm a$ represents the word “acid”.

Locants: O-ethyl hexanethioate; N-methylbenzamide

Electronic configurations can easily be written using the \mathrm{} command. The MathJax command \mathrm{(1s)^2 (2s)^2 (2p)^6 (3s)^1} renders as $$\mathrm{(1s)^2 (2s)^2 (2p)^6 (3s)^1}\,.$$ Alternatively you can use \mathrm{[Ne] (3s)^1} to get $$\mathrm{[Ne] (3s)^1}\,.$$

Stereochemical descriptors are preferably italicized through markdown, (*E*) or (*Z*), and not MathJax.

Sources / Further reading:
generation of certain matrices I'd like to create a list of roughly 100-1000 $2 \times 2$ matrices $[A_1,A_2,...,A_N]$ that have the following properties: $\det A_j = 1$ The entries of each $A_j$ are in an imaginary quadratic integer ring, such as $\mathbb{Z}[i]$, or $\mathbb{Z}[\sqrt{-2}]$ For example, the matrix $$\begin{bmatrix} 1&2i \\ 0&1 \end{bmatrix}$$ fits the above specifications when the ring is $\mathbb{Z}[i]$. I know that I probably want to run some kind of loop over the entries of the matrix, but I'm not sure how to do this. Perhaps I want to initially treat the matrices as lists of length 4, and then run an iterative loop over the lists. Then, when the above specifications are met, that list is stored somewhere else. I think I'd also like to put a bound on the "size" of the matrix entries, but that should be easy to do afterwards. Thanks!
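Since I cannot speak to the exact Mathematica idiom, here is a hedged pure-Python sketch of the brute-force loop idea: enumerate entries $a+bi$ with $|a|,|b|$ below a bound and keep the matrices whose determinant is exactly $1$. The `bound` value and the tuple-of-rows representation are arbitrary choices for illustration:

```python
from itertools import product

# Gaussian integers a + b*i with |a|, |b| <= bound (hypothetical small bound;
# raise it to get hundreds or thousands of matrices)
bound = 1
gauss_ints = [complex(a, b)
              for a in range(-bound, bound + 1)
              for b in range(-bound, bound + 1)]

# Enumerate all 2x2 matrices ((a, b), (c, d)) and keep those with det = 1.
# For Z[sqrt(-2)] one would instead enumerate a + b*sqrt(2)*1j, etc.
matrices = []
for a, b, c, d in product(gauss_ints, repeat=4):
    if a * d - b * c == 1:          # exact complex comparison: det must be 1 + 0i
        matrices.append(((a, b), (c, d)))

print(len(matrices))
```

Treating the matrix as a flat 4-tuple of entries, as suggested in the question, is exactly what `product(gauss_ints, repeat=4)` does; the size bound on entries is built in from the start rather than filtered afterwards, which keeps the search space small.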
Consider $A =\left( \begin{array}{ccc} -1 & 2 & 2\\ 2 & 2 & -1\\ 2 & -1 & 2\\ \end{array} \right)$. Find the eigenvalues of $A$. So I know the characteristic polynomial is: $$f_A(\lambda) = (-\lambda)^n+(\operatorname{tr}A)(-\lambda)^{n-1}+...+\det A$$ I found $\det A = -27$, so the characteristic polynomial of $A$ is: $$-\lambda^3+3\lambda^2-c\lambda-27$$ However, the textbook I'm using doesn't give any method for finding the value of $c$. I have the solution to the problem, and the value of $c$ is in fact $-9$, but is there any method to numerically solve for it? If we have a $4\times 4$ matrix, we'll end up with a characteristic polynomial of the form $$\lambda^4-\operatorname{tr} A\,\lambda^3+c_1\lambda^2-c_2\lambda+\det A$$ Similarly, is there a method to solve for $c_1,c_2$ in this case?
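For what it's worth, the coefficient of $\lambda$ here equals the sum of the principal $2\times 2$ minors of $A$ (the characteristic-polynomial coefficients are the elementary symmetric functions of the eigenvalues), and everything can be checked numerically. A sketch with NumPy, where `np.poly` returns the coefficients of $\det(\lambda I - A)$; note this gives $\det A = -27$ for the stated matrix:

```python
import numpy as np

A = np.array([[-1, 2, 2],
              [2, 2, -1],
              [2, -1, 2]], dtype=float)

# Coefficients of det(lambda*I - A) = lambda^3 - tr(A) lambda^2 + c lambda - det(A)
coeffs = np.poly(A)
print(coeffs)              # approx [1, -3, -9, 27]
print(np.linalg.det(A))    # approx -27

# c is the sum of the principal 2x2 minors (delete row i and column i, take det)
c = sum(np.linalg.det(np.delete(np.delete(A, i, 0), i, 1)) for i in range(3))
print(c)                   # approx -9
```

The eigenvalues themselves come out of `np.linalg.eigvals(A)` as $3, 3, -3$, consistent with the polynomial $-(\lambda-3)^2(\lambda+3)$.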
A balanced budget (particularly that of a government) refers to a budget in which revenues are equal to expenditures, so that neither a budget deficit nor a budget surplus exists ("the accounts balance"). More generally, it refers to a budget that has no budget deficit, but could possibly have a budget surplus. [1] A cyclically balanced budget is a budget that is not necessarily balanced year-to-year, but is balanced over the economic cycle, running a surplus in boom years and a deficit in lean years, with these offsetting over time. Balanced budgets and the associated topic of budget deficits are a contentious point within academic economics and within politics. The mainstream economic view is that having a balanced budget in every year is not desirable, budget deficits in lean times being desirable. Most economists have also agreed that a balanced budget would decrease interest rates, [2] increase savings and investment, [2] shrink trade deficits and help the economy grow faster in the longer term. [2]

Economic views

Mainstream economics mainly advocates a cyclically balanced budget, arguing from the perspective of Keynesian economics that budget deficits provide fiscal stimulus in lean times, while budget surpluses provide restraint in boom times. However, Keynesian economics does not advocate fiscal stimulus when the existing government debt is already significant. Alternative currents in the mainstream and branches of heterodox economics argue differently, with some arguing that budget deficits are always harmful, and others arguing that budget deficits are not only beneficial, but also necessary.
Schools which often argue against the effectiveness of budget deficits as cyclical tools include the freshwater school of mainstream economics and neoclassical economics more generally, and the Austrian school of economics. Budget deficits are argued to be necessary by some within Post-Keynesian economics, notably the Chartalist school. Larger deficits, sufficient to recycle savings out of a growing gross domestic product (GDP) in excess of what can be recycled by profit-seeking private investment, are not an economic sin but an economic necessity. [3] Budget deficits can usually be calculated by subtracting the total planned expenditure from the total available budget. This will then show either a budget deficit (a negative total) or a budget surplus (a positive total). Political views United States In the United States, the fiscal conservatism movement believes that balanced budgets are an important goal. Every state other than Vermont has a balanced budget amendment, providing some form of ban on deficits, while the Oregon kicker bans surpluses of greater than 2% of revenue. The Colorado Taxpayer Bill of Rights (the TABOR amendment) also bans surpluses and requires the state to refund taxpayers in the event of a budget surplus. Sweden Following the over-borrowing in both the public and private sectors that led to the Swedish banking crisis of the early 1990s, and under the influence of a series of reports on future demographic challenges, a wide political consensus on fiscal prudence developed. In the year 2000 this was enshrined in a law that stated a goal of a surplus of 2% over the business cycle, to be used to pay off the public debt and to secure the long-term future of the cherished welfare state. Today the goal is 1% over the business cycle, as the retirement pension is no longer considered a government expenditure. Balanced budget multiplier Because of the multiplier effect, it is possible to change aggregate demand (Y) while keeping a balanced budget.
The government increases its expenditures (G), balancing it by an increase in taxes (T). Since only part of the money taken away from households would have actually been used in the economy, the change in consumption expenditure will be smaller than the change in taxes. Therefore, the money which would have been saved by households is instead injected into the economy, itself becoming part of the multiplier process. In general, a change in the balanced budget will change aggregate demand by an amount equal to the change in spending. $$Y_1 = c_0 + c_1 \left( Y_1 - T \right) + I + G \quad\Longrightarrow\quad Y_1 = \frac{1}{1 - c_1} \left( c_0 + I + G - c_1 T \right)$$ Raising spending and taxes by the same amount, $G \to G + \alpha$ and $T \to T + \alpha$, gives $$Y_2 = \frac{1}{1 - c_1} \left( c_0 + I + \left( G + \alpha \right) - c_1 \left( T + \alpha \right) \right)$$ $$\Delta Y = Y_2 - Y_1 = \frac{\alpha}{1 - c_1} \left( 1 - c_1 \right) = \alpha, \qquad \Delta T - \Delta G = \alpha - \alpha = 0$$ Balanced budget multiplier as taxes depend on income: $$Y = C + I + G, \qquad C = b(Y - T), \qquad T = tY, \qquad G = tY$$ $$Y = b(Y - tY) + I + tY$$ $$Y = bY - btY + tY + I$$ $$Y\left(1 - (b - bt + t)\right) = I$$ $$Y = \frac{I}{1 - b + bt - t}, \qquad Y' = \frac{1}{1 - b + bt - t}$$ See also References ^ ^ a b c "Winners and Losers In a Balanced Budget". The Washington Post. 4 May 1997. ^ (Vickrey 1996, Fallacy 1)
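The unit balanced-budget multiplier can be checked numerically; the sketch below uses illustrative parameter values, not data:

```python
# Toy Keynesian cross: Y = c0 + c1*(Y - T) + I + G, solved for equilibrium Y.
def equilibrium_output(c0, c1, I, G, T):
    return (c0 + I + G - c1 * T) / (1.0 - c1)

c0, c1, I, G, T, alpha = 10.0, 0.6, 20.0, 30.0, 30.0, 5.0
Y1 = equilibrium_output(c0, c1, I, G, T)
Y2 = equilibrium_output(c0, c1, I, G + alpha, T + alpha)  # raise G and T together
print(Y2 - Y1)  # approximately alpha: the balanced-budget multiplier is 1
```

Raising G and T by the same amount shifts output by exactly that amount, as the algebra above predicts.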
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues? Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson... Hmm, it seems we cannot just superimpose gravitational waves to create standing waves. The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talks about gravitational waves emitted from a body. After that lecture, I asked the lecturer whether gravitational standing waves are possible, as I imagine the hypothetical scenario of placing a node at the end of the vertical white line. [The Cube] Regarding The Cube, I am thinking about an energy level diagram like this, where the infinitely degenerate level is the lowest energy level when the environment is also taken into account. The idea is that if the possible relaxations between energy levels are restricted so that, to relax from an excited state, the bottleneck must be passed, then we have a very high-entropy, high-energy system confined in a compact volume. Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state, plus the huge degeneracy at higher energy levels, should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings. @Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too.
The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer). Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directed at the close voter, not the question in meta. When you mention my original post, do you think that it's a hopeless mess of confusion? Why? Apart from being off-topic, it seems clear enough to understand, doesn't it? Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full, which is affected by a visual glitch on both the desktop and mobile versions of Safari under the latest OS: \vec{x} results in the arrow being displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks. I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh... @0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% sure that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P) Why were the SI unit prefixes, i.e. \begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align} chosen to be powers of $10$ whose exponents are multiples of $3$? Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\... @ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there. @CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
In a paper by Joos and Zeh, Z. Phys. B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' ("The 'path' only comes into being because we observe it.") Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation; it calls for someone who knows how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am, I will have to adapt ;) I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type on the order of $10^{10}$ characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric.
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry, if you were in our discord you would know. @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry, so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis, and not as a necessity, would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. Since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap). I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition. Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
TL;DR: The $\ce{O-O}$ and $\ce{S-S}$ bonds, such as those in $\ce{O2^2-}$ and $\ce{S2^2-}$, are derived from $\sigma$-type overlap. However, because the $\pi$ and $\pi^*$ MOs are also filled, the $\pi$-type overlap also affects the strength of the bond, although the bond order is unaffected. Bond strengths normally decrease down the group due to poorer $\sigma$ overlap. The first member of each group is an anomaly because for these elements, the $\pi^*$ orbital is strongly antibonding and population of this orbital weakens the bond. Setting the stage The simplest species with an $\ce{O-O}$ bond would be the peroxide anion, $\ce{O2^2-}$, for which we can easily construct an MO diagram. The $\mathrm{1s}$ and $\mathrm{2s}$ orbitals do not contribute to the discussion so they have been neglected. For $\ce{S2^2-}$, the diagram is qualitatively the same, except that $\mathrm{2p}$ needs to be changed to a $\mathrm{3p}$. The main bonding contribution comes from, of course, the $\sigma$ MO. The greater the $\sigma$ MO is lowered in energy from the constituent $\mathrm{2p}$ AOs, the more the electrons are stabilised, and hence the stronger the bond. However, even though the $\pi$ bond order is zero, the population of both $\pi$ and $\pi^*$ orbitals does also affect the bond strength. This is because the $\pi^*$ orbital is more antibonding than the $\pi$ orbital is bonding. (See these questions for more details: 1, 2.) So, when both $\pi$ and $\pi^*$ orbitals are fully occupied, there is a net antibonding effect. This doesn't reduce the bond order; the bond order is still 1. The only effect is to just weaken the bond a little. Comparing the $\sigma$-type overlap The two AOs that overlap to form the $\sigma$ bond are the two $\mathrm{p}_z$ orbitals. The extent to which the $\sigma$ MO is stabilised depends on an integral, called the overlap, between the two $n\mathrm{p}_z$ orbitals ($n = 2,3$). 
Formally, this is defined as $$S^{(\sigma)}_{n\mathrm{p}n\mathrm{p}} = \left\langle n\mathrm{p}_{z,\ce{A}}\middle| n\mathrm{p}_{z,\ce{B}}\right\rangle = \int (\phi_{n\mathrm{p}_{z,\ce{A}}})^*(\phi_{n\mathrm{p}_{z,\ce{B}}})\,\mathrm{d}\tau$$ It turns out that, going down the group, this quantity decreases. This has to do with the $n\mathrm{p}$ orbitals becoming more diffuse down the group, which reduces their overlap. Therefore, going down the group, the stabilisation of the $\sigma$ MO decreases, and one would expect the $\ce{X-X}$ bond to become weaker. That is indeed observed for the Group 14 elements. However, it certainly doesn't seem to work here. That's because we ignored the other two important orbitals. Comparing the $\pi$-type overlap The answer for our question lies in these two orbitals. The larger the splitting of the $\pi$ and $\pi^*$ MOs, the larger the net antibonding effect will be. Conversely, if there is zero splitting, then there will be no net antibonding effect. The magnitude of splitting of the $\pi$ and $\pi^*$ MOs again depends on the overlap integral between the two $n\mathrm{p}$ AOs, but this time they are $\mathrm{p}_x$ and $\mathrm{p}_y$ orbitals. And as we found out earlier, this quantity decreases down the group; meaning that the net $\pi$-type antibonding effect also weakens going down the group. Putting it all together Actually, to look solely at oxygen and sulfur would be doing ourselves a disservice. So let's look at how the trend continues. $$\begin{array}{|c|c|c|c|}\hline\mathbf{X} & \mathbf{BDE(X-X)\ /\ kJ\ mol^{-1}} & \mathbf{X} & \mathbf{BDE(X-X)\ /\ kJ\ mol^{-1}} \\\hline\ce{O} & 144 & \ce{F} & 158 \\\ce{S} & 266 & \ce{Cl} & 243 \\\ce{Se} & 192 & \ce{Br} & 193 \\\ce{Te} & 126 & \ce{I} & 151 \\\hline\end{array}$$(Source: Prof. Dermot O'Hare's web page.) You can see that the trend goes this way: there is an overall decrease going from the second member of each group downwards. 
However, the first member has an exceptionally weak single bond. The rationalisation, based on the two factors discussed earlier, is straightforward. The general decrease in bond strength arises due to weakening $\sigma$-type overlap. However, in the first member of each group, the strong $\pi$-type overlap serves to weaken the bond. I also added the Group 17 elements in the table above. That's because the trend is exactly the same, and it's not a fluke! The MO diagram of $\ce{F2}$ is practically the same as that of $\ce{O2^2-}$, so all of the arguments above also apply to the halogens. How about the double bonds? In order to look at the double bond, we want to find a species that has an $\ce{O-O}$ bond order of $2$. That's not hard at all. It's called dioxygen, $\ce{O2}$, and its MO scheme is exactly the same as above except that there are two fewer electrons in the $\pi^*$ orbitals. Since there are only two electrons in the $\pi^*$ MOs as compared to four in the $\pi$ MOs, overall the $\pi$ and $\pi^*$ orbitals generate a net bonding effect. (After all, this is where the second "bond" comes from.) Since the $\pi$-$\pi^*$ splitting is much larger in $\ce{O2}$ than in $\ce{S2}$, the $\pi$ bond in $\ce{O2}$ is much stronger than the $\pi$ bond in $\ce{S2}$. So, in this case, both the $\sigma$ and the $\pi$ bonds in $\ce{O2}$ are stronger than in $\ce{S2}$. There should be absolutely no question now as to which of the $\ce{O=O}$ or the $\ce{S=S}$ bonds is stronger!
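The electron bookkeeping in the argument above can be summarised in a tiny sketch (occupations taken from the qualitative MO scheme described here; this is bookkeeping, not a quantum-chemical calculation):

```python
def p_block_bond_order(occ):
    """(bonding - antibonding) electrons / 2, counting only the p-derived MOs."""
    bonding = occ.get("sigma", 0) + occ.get("pi", 0)
    antibonding = occ.get("sigma*", 0) + occ.get("pi*", 0)
    return (bonding - antibonding) / 2

peroxide = {"sigma": 2, "pi": 4, "pi*": 4}   # O2^2- or S2^2-: pi and pi* both full
dioxygen = {"sigma": 2, "pi": 4, "pi*": 2}   # O2: two fewer pi* electrons
print(p_block_bond_order(peroxide))  # 1.0: single bond despite the filled pi system
print(p_block_bond_order(dioxygen))  # 2.0: the second bond comes from the pi MOs
```

The bond order alone hides the point of the answer: the filled pi/pi* pair in the peroxides leaves the order at 1 while still weakening the bond, because pi* is more antibonding than pi is bonding.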
I am trying to compare my finite difference solution of the scalar (or simple acoustic) wave equation with an analytic solution. For that purpose I am using the following analytic solution presented in the old paper Accuracy of the finite-difference modeling of the acoustic wave equation - Geophysics 1974 - R. M. Alford et al. The solution, equation (9) in that paper, for a constant velocity medium is given in cylindrical coordinates by: \begin{eqnarray}u_s(\rho, \phi, t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} -i \pi H_0^{(2)}\left(k | \sigma - \sigma_s| \right) F(\omega) e^{i\omega t} \,d\omega \qquad \mathrm{(A)}\end{eqnarray} $H_0^{(2)}$ is the Hankel function of the second kind with order zero. $| \sigma - \sigma_s| $ is the distance from the source to the observation point. Also, $k = \frac{\omega}{C_0}$, where $C_0$ is the medium velocity, comes from transforming to the Fourier domain the original scalar wave equation: \begin{eqnarray} \left[ \nabla^2 - \frac{1}{C_0^2}\frac{\partial ^2}{\partial t^2}\right] u(\rho, \phi, t) = -4 \pi \frac{\delta(\rho-\rho_s)\delta(\phi-\phi_s)f(t)}{\rho} \end{eqnarray} $(\rho_s, \phi_s)$ defines the source location in the $\phi \times \rho$ plane. $F(\omega)$ is the Fourier transform of the source function $f(t)$. The problem: when implementing (A) numerically, I'm not getting meaningful results. The velocity obtained using (A) is totally different from the $C_0$ used as input. My code example is below, where I evaluate (A) using Python. I calculate the solution at zero distance (solution0) and at a distance D=8000 (solutionD). Then I use the global maximum to calculate an approximation to the velocity $C_0$. Probably it's a quite simple error that I am making; if someone can point it out for me. The velocity I am getting shows that maybe equation (9) is wrong, or I am implementing it wrongly (which I believe to be the case).
from scipy.special import hankel2
import numpy as np

# a simple source function f(t): normalised Gaussian derivative
def gaussource(time, wlength, delay=None):
    if delay is None:
        delay = 3 * wlength  # enough delay time
    t = time - delay
    return (2 * np.sqrt(np.e) / wlength) * t * np.exp(-2 * (t ** 2) / (wlength ** 2))

# analytic solution, equation (A) from the paper
def waves(source, distance, c):
    n = len(source)
    sourcew = np.fft.fft(source)  # source in the frequency domain
    hnk_factor = distance / c  # distance to source divided by velocity
    if not distance == 0.0:
        hankelshift = complex(0, -np.pi) * np.array(
            [hankel2(0, hnk_factor * omega) for omega in range(n)])
        hankelshift[0] = 0.  # i*infinity in the limit
        return np.real(np.fft.ifft(hankelshift * sourcew))
    return source

c = 4000.
fc = 5.0
dt = 0.01
t = np.arange(0., 2., dt)  # 2 seconds to evaluate the solution
source = gaussource(t, 1. / fc)  # source
solution0 = waves(source, 0., c)  # solution at zero distance (the source itself)
solutionD = waves(source, 8000., c)  # solution at a distance D = 8000 metres
t0 = solution0.argmax()  # index of the maximum at zero distance
tD = solutionD.argmax()  # index of the maximum at distance D
print("phase velocity approximation", 8000. / ((tD - t0) * dt))
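One assumption worth checking (my reading of the code, not a statement from the paper): in `waves`, `omega` runs over integer FFT bin indices, while equation (A) integrates over physical angular frequency. For a length-n FFT with sample spacing dt, the angular-frequency grid matching numpy's FFT ordering is conventionally built as:

```python
import numpy as np

n, dt = 200, 0.01
# angular frequencies in the same order as np.fft.fft output
# (positive bins first, then negative bins)
omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
print(omega[1])  # 2*pi/(n*dt), the angular-frequency resolution
```

Substituting these for the bare index (taking care with the negative-frequency bins, where the real-signal conjugate symmetry has to be respected when evaluating $H_0^{(2)}$) would be the first thing I'd try.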
Show that L = $\{0^{2^n}\mid n\geq 0\}$ is not a context free language. Let string $s = 0^{2^p}$. Then we know we can write $s$ as $s = uvxyz$. I know that $|vy| > 0$ and $|vxy| \leq p$. So how do I show that $uv^2xy^2z$ is not in $L$? $\{ 0^{2^n} \mid n \ge 0 \}$ is not context-free. To show this, you can use any of the usual techniques to show that a language is not context-free, such as the pumping lemma for context-free languages. The pumping lemma states that if $L$ is context-free, then there exists a pumping length $p$ such that for all $n \ge p$, there exist $u,v,x,y,z$ such that $0^{2^n} = uvxyz$ and $|vy| \ge 1$ and for all $k \ge 0$, $uv^kxy^kz \in L$. Take $n = p$: for all $k \ge 0$, $|uv^kxy^kz| = |uxz| + k |vy|$ must be a power of $2$. This is not possible for large $k$, since it would imply that the distance between consecutive powers of $2$ is never more than $|vy|$. You can also use Parikh's theorem, which states that the set of possible numbers of occurrences of a letter in a context-free language is semi-linear (i.e. it is of the form $\{a p + b \mid a \in \mathbb{N}, b \in B\} \cup C$ for some integer $p$ and some finite sets $B$ and $C$). For a language over a singleton alphabet, this means that the set of lengths of words in the language is semi-linear, which $\{2^n \mid n\in\mathbb{N}\}$ isn't.
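The gap argument is easy to see concretely: the gap between consecutive powers of $2$ is itself a power of $2$, so it eventually exceeds any fixed $|vy|$. A small sketch:

```python
# Gaps between consecutive powers of 2 grow without bound, so pumping
# in steps of a fixed |vy| must eventually land between two powers of 2.
gaps = [2 ** (n + 1) - 2 ** n for n in range(10)]
print(gaps)  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
```

Once the gap exceeds $|vy|$, consecutive pumped lengths $|uxz| + k|vy|$ and $|uxz| + (k+1)|vy|$ cannot both skip over it, so some pumped word has a length that is not a power of $2$.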
The sharp, intense band at 1716 $\mathrm{cm^{-1}}$ is indeed characteristic for a carbonyl group ($\ce{C=O}$ stretch, in this case aldehyde or ketone). The medium and broad bands between 2732-2957 $\mathrm{cm^{-1}}$ suggest that it is more likely an aldehyde. In this case, they would be assigned to the $\ce{=C-H}$ stretch, and the intensity of the broad band fits better to a saturated aldehyde, i.e. there is no double or triple bond in $\alpha, \beta$-position to the $\ce{CHO}$ group. Your assignment of the peaks at 3624 and 3533 $\mathrm{cm^{-1}}$ to a primary amine is reasonable, since primary amines have two bands, while a secondary amine would have only one, weaker band at lower wavenumbers. One of the bands in the area of 700-940 $\mathrm{cm^{-1}}$ can then be assigned to the $\ce{N-H}$ wagging vibration mode. The sharp band at 3416 $\mathrm{cm^{-1}}$ could indicate a terminal alkyne ($\ce{C \equiv C-H}$ stretch), the $\ce{C-H}$ stretch band of this group at approximately 2900 $\mathrm{cm^{-1}}$ would then be included in the broad band at 2957 $\mathrm{cm^{-1}}$. The weak band in the area between 2000-2250 $\mathrm{cm^{-1}}$ can then be assigned to the $\ce{C \equiv C}$ stretch. Sources: 1, 2, 3, 4
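For quick reference, the assignments used in this answer can be collected into a small lookup sketch (ranges are approximate, taken only from the assignments above; the helper function is illustrative):

```python
# Approximate IR band assignments from the answer above (wavenumbers in cm^-1).
assignments = [
    ((1700, 1725), "C=O stretch (aldehyde/ketone)"),
    ((2732, 2957), "aldehyde C-H stretches"),
    ((3533, 3624), "primary amine N-H stretches (two bands)"),
    ((3400, 3430), "terminal alkyne C#C-H stretch"),
    ((2000, 2250), "C#C stretch (weak)"),
]

def assign(wavenumber):
    """Return the candidate assignments whose range contains the band position."""
    return [label for (lo, hi), label in assignments if lo <= wavenumber <= hi]

print(assign(1716))  # ['C=O stretch (aldehyde/ketone)']
```

Real spectra need the full context (band shape, intensity, combination with other bands), so a range lookup like this is only a starting point.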
HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN) ; Bouvard, Aymeric (CERN) ; Charitonidis, Nikolaos (CERN) ; Kadi, Yacine (CERN) / HiRadMat experiments and facility support teams The ever-expanding requirements of high-power targets and accelerator equipment have highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPRB085 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPRB085 Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN) ; Booth, Alexander (U. Sussex (main) ; Fermilab) ; Charitonidis, Nikolaos (CERN) ; Chatzidaki, Panagiota (Natl. Tech. U., Athens ; Kirchhoff Inst. Phys. ; CERN) ; Karyotakis, Yannis (Annecy, LAPP) ; Nowak, Elzbieta (CERN ; AGH-UST, Cracow) ; Ortega Ruiz, Inaki (CERN) ; Sala, Paola (INFN, Milan ; CERN) For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated “H2-VLE” and “H4-VLE”, have been constructed and successfully commissioned. [...] 2019 - 4 p.
- Published in : 10.18429/JACoW-IPAC2019-THPGW064 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW064 The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN ; Illinois U., Urbana (main)) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; Cholak, Serhii (Taras Shevchenko U.) ; D'Alessandro, Gian Luigi (Royal Holloway, U. of London) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) ; Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN’s accelerator complex up to 2040 and its scientific infrastructure through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW063 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW063 The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN) ; Banerjee, Dipanwita (CERN) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; D'Alessandro, Gian Luigi (CERN) ; Doble, Niels (CERN) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward.
The goal of the experiment is to measure ${\rm{BR}}(K_L \rightarrow \pi^0\nu\bar{\nu})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm{BR}}(K^+ \rightarrow \pi^+\nu\bar{\nu})$ of NA62. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW061 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW061 Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe ; CERN) ; Bastian, Yan (CERN) ; Bernhard, Axel (KIT, Karlsruhe) ; Bonura, Marco (U. Geneva (main)) ; Bordini, Bernardo (CERN) ; Bortot, Lorenzo (CERN) ; Favre, Mathieu (CERN) ; Lindstrom, Bjorn (CERN) ; Mentink, Matthijs (CERN) ; Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN’s LHC can be impacted by the circulating beam in case of specific failure cases. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPTS066 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPTS066 Shashlik calorimeters with embedded SiPMs for longitudinal segmentation / Berra, A (INFN, Milan Bicocca ; Insubria U., Varese) ; Brizzolari, C (INFN, Milan Bicocca ; Insubria U., Varese) ; Cecchini, S (INFN, Bologna) ; Chignoli, F (INFN, Milan Bicocca ; Milan Bicocca U.) ; Cindolo, F (INFN, Bologna) ; Collazuol, G (INFN, Padua) ; Delogu, C (INFN, Milan Bicocca ; Milan Bicocca U.) ; Gola, A (Fond. Bruno Kessler, Trento ; TIFPA-INFN, Trento) ; Jollet, C (Strasbourg, IPHC) ; Longhin, A (INFN, Padua) et al.
Effective longitudinal segmentation of shashlik calorimeters can be achieved taking advantage of the compactness and reliability of silicon photomultipliers. These photosensors can be embedded in the bulk of the calorimeter and are employed to design very compact shashlik modules that sample electromagnetic and hadronic showers every few radiation lengths. [...] 2017 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 64 (2017) 1056-1061 Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE) ; Tiberio, A (INFN, Florence ; U. Florence (main)) ; Adriani, O (INFN, Florence ; U. Florence (main)) ; Berti, E (INFN, Florence ; U. Florence (main)) ; Bonechi, L (INFN, Florence) ; Bongi, M (INFN, Florence ; U. Florence (main)) ; Caccia, Z (INFN, Catania) ; D'Alessandro, R (INFN, Florence ; U. Florence (main)) ; Del Prete, M (INFN, Florence ; U. Florence (main)) ; Detti, S (INFN, Florence) et al. The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017 - 22 p. - Published in : JINST 12 (2017) P03023 Baby MIND: A magnetised spectrometer for the WAGASCI experiment / Hallsjö, Sven-Patrik (Glasgow U.) / Baby MIND The WAGASCI experiment being built at the J-PARC neutrino beam line will measure the ratio of cross sections from neutrinos interacting with water and scintillator targets, in order to constrain neutrino cross sections, essential for the T2K neutrino oscillation measurements.
A prototype Magnetised Iron Neutrino Detector (MIND), called Baby MIND, has been constructed at CERN and will act as a magnetic spectrometer behind the main WAGASCI target. [...] SISSA, 2018 - 7 p. - Published in : PoS NuFact2017 (2018) 078 Fulltext: PDF; External link: PoS server In : 19th International Workshop on Neutrinos from Accelerators, Uppsala, Sweden, 25 - 30 Sep 2017, pp.078 ENUBET: High Precision Neutrino Flux Measurements in Conventional Neutrino Beams / Pupilli, Fabio (INFN, Padua) ; Ballerini, G (Insubria U., Como ; INFN, Milan Bicocca) ; Berra, A (Insubria U., Como ; INFN, Milan Bicocca) ; Boanta, R (INFN, Milan Bicocca ; Milan Bicocca U.) ; Bonesini, M (INFN, Milan Bicocca) ; Brizzolari, C (Insubria U., Como ; INFN, Milan Bicocca) ; Brunetti, G (INFN, Padua) ; Calviani, M (CERN) ; Carturan, S (INFN, Legnaro) ; Catanesi, M G (INFN, Bari) et al. The ENUBET project aims at demonstrating that the systematics in neutrino fluxes from conventional beams can be reduced to 1% by monitoring positrons from K$_{e3}$ decays in an instrumented decay tunnel, thus allowing a precise measurement of the $\nu_e$ (and $\overline{\nu}_e$) cross section. This contribution will report the results achieved in the first year of activities. [...] SISSA, 2018 - 8 p. - Published in : PoS NuFact2017 (2018) 087 Fulltext: PDF; External link: PoS server In : 19th International Workshop on Neutrinos from Accelerators, Uppsala, Sweden, 25 - 30 Sep 2017, pp.087
There is existing literature on fundamental domains for Hilbert-Blumenthal surfaces, perhaps most notably what Siegel did. I am interested in whether such fundamental domains have been approached using Dirichlet domains. More specifically... I want to know what a Dirichlet domain for a manifold covering a Hilbert modular variety looks like. What kind of symmetry does it have? Is it geometrically finite? What topologically invariant properties does it have (independent of center)? Is there a useful analogy or method to visualize it? If you already know the answer to this or know of a good reference for it, you probably don't need to read the rest of my question -- unless the question doesn't make sense, in which case you can tell me where my construction falls apart. Let $K$ be a totally real number field, let $\mathbb{Z}_K$ be its ring of integers, and let $r=[K:\mathbb{Q}]>1$. Let $\Gamma=\mathrm{PSL}_2(\mathbb{Z}_K)$ and let $\mathcal{H}^2$ be the upper half-space model for the hyperbolic plane. Then there is a discrete action by isometries $$\Gamma\times(\mathcal{H}^2\times\overset{r}{\cdots}\times\mathcal{H}^2)\rightarrow(\mathcal{H}^2\times\overset{r}{\cdots}\times\mathcal{H}^2)\\ \big(\gamma,(p_1,\dots,p_r)\big)\mapsto \big(\sigma_1(\gamma)(p_1),\dots,\sigma_r(\gamma)(p_r)\big),$$ where the $\sigma_\ell$ are the $r$ Galois automorphisms applied to the entries of $\gamma$, and $\sigma_\ell(\gamma)(p_\ell)$ is the action by Möbius transformation. Then $(\mathcal{H}^2\times\overset{r}{\cdots}\times\mathcal{H}^2)/\Gamma$ is a Hilbert modular variety. Also, by Selberg's lemma, $\Gamma$ contains a torsion-free subgroup $\Delta$ of finite index. Let $M=(\mathcal{H}^2\times\overset{r}{\cdots}\times\mathcal{H}^2)/\Delta$; then $M$ is a $2r$-dimensional manifold.
Now let $p\in\mathcal{H}^2\times\overset{r}{\cdots}\times\mathcal{H}^2$. We find a Dirichlet domain for $M$ by looking at the set of geodesics that connect $p$ to each image in the orbit $\Delta(p)$, then taking a perpendicular bisector of each of these geodesics. We know that the Hilbert modular variety $V=(\mathcal{H}^2\times\overset{r}{\cdots}\times\mathcal{H}^2)/\Gamma$ has finite volume and finitely many cusps, and $M$ is a finite cover of $V$, so the same is true for $M$. Also, since $\Delta$ is torsion-free, its only fixed points are on the boundary. Thus there will be a region containing $p$ enclosed by the set of perpendicular bisectors, except for some finite number of points on the boundary where they meet. This region is the Dirichlet domain for $M$ centered at $p$, which I'll denote by $D(p)$. To make the construction more canonical, let's just take $p=(i,\dots,i)$ (which in each coordinate, in the upper half-space model, is one unit up the half-axis, above $0$ which lies in $\partial\mathcal{H}^2$). What can we say about $D(p)$? What kind of symmetry does it have? How many faces does it have? We know that the projection of the group action onto any one factor is dense, yet the action on the product is discrete because any time an orbit accumulates in one factor, it will diverge to $\infty$ in another. So it does not seem very helpful to try to conceive of a Dirichlet domain by thinking of it as a product of $2$-dimensional ones. What is a good way to conceive of $D(p)$? Perhaps take $r=2$ and just look at Hilbert-Blumenthal surfaces. Edit - March 14, 2017 This has been up for about a week with no answers. So, I was wondering whether the answer to this question is "known," i.e. does there exist a publication where this question is answered?
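In symbols, the region constructed above is the usual Dirichlet domain: writing $d$ for the product metric on $\mathcal{H}^2\times\overset{r}{\cdots}\times\mathcal{H}^2$,

$$D(p)=\left\{x\in\mathcal{H}^2\times\overset{r}{\cdots}\times\mathcal{H}^2 \;:\; d(x,p)\le d(x,\delta\cdot p)\ \text{for all}\ \delta\in\Delta\right\},$$

where each condition $d(x,p)\le d(x,\delta\cdot p)$ cuts out exactly the half-space bounded by the perpendicular bisector of the geodesic from $p$ to $\delta\cdot p$.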
Technocup 2019 - Elimination Round 1 Vasya has a sequence $$$a$$$ consisting of $$$n$$$ integers $$$a_1, a_2, \dots, a_n$$$. Vasya may perform the following operation: choose some number from the sequence and swap any pair of bits in its binary representation. For example, Vasya can transform number $$$6$$$ $$$(\dots 00000000110_2)$$$ into $$$3$$$ $$$(\dots 00000000011_2)$$$, $$$12$$$ $$$(\dots 000000001100_2)$$$, $$$1026$$$ $$$(\dots 10000000010_2)$$$ and many others. Vasya can use this operation any (possibly zero) number of times on any number from the sequence. Vasya calls a sequence good if, using the operation mentioned above, he can obtain a sequence with bitwise exclusive or of all elements equal to $$$0$$$. For the given sequence $$$a_1, a_2, \ldots, a_n$$$ Vasya would like to calculate the number of integer pairs $$$(l, r)$$$ such that $$$1 \le l \le r \le n$$$ and the sequence $$$a_l, a_{l + 1}, \dots, a_r$$$ is good. The first line contains a single integer $$$n$$$ ($$$1 \le n \le 3 \cdot 10^5$$$) — length of the sequence. The second line contains $$$n$$$ integers $$$a_1, a_2, \dots, a_n$$$ ($$$1 \le a_i \le 10^{18}$$$) — the sequence $$$a$$$. Print one integer — the number of pairs $$$(l, r)$$$ such that $$$1 \le l \le r \le n$$$ and the sequence $$$a_l, a_{l + 1}, \dots, a_r$$$ is good.

Input
3
6 7 14
Output
2

Input
4
1 2 1 16
Output
4

In the first example pairs $$$(2, 3)$$$ and $$$(1, 3)$$$ are valid. Pair $$$(2, 3)$$$ is valid since $$$a_2 = 7 \rightarrow 11$$$, $$$a_3 = 14 \rightarrow 11$$$ and $$$11 \oplus 11 = 0$$$, where $$$\oplus$$$ — bitwise exclusive or. Pair $$$(1, 3)$$$ is valid since $$$a_1 = 6 \rightarrow 3$$$, $$$a_2 = 7 \rightarrow 13$$$, $$$a_3 = 14 \rightarrow 14$$$ and $$$3 \oplus 13 \oplus 14 = 0$$$. In the second example pairs $$$(1, 2)$$$, $$$(2, 3)$$$, $$$(3, 4)$$$ and $$$(1, 4)$$$ are valid.
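The statement does not give the solution, but the examples suggest the usual observation: a bit swap preserves the popcount of a number, so a segment can be made to XOR to zero exactly when its total popcount is even and the largest popcount does not exceed the sum of the others. A brute-force sketch under that assumption (hypothetical name `count_good_pairs`; this is quadratic, so it only illustrates the criterion — the full constraints need the prefix-parity trick plus the fact that each popcount is at most 60):

```python
def count_good_pairs(a):
    # popcount is invariant under bit swaps, so a segment is "good" iff
    # (1) the total popcount is even, and
    # (2) the largest popcount is at most the sum of the others
    p = [bin(v).count("1") for v in a]
    total = 0
    for l in range(len(p)):
        s, mx = 0, 0
        for r in range(l, len(p)):
            s += p[r]
            mx = max(mx, p[r])
            if s % 2 == 0 and mx <= s - mx:
                total += 1
    return total
```

On the two samples above this returns 2 and 4, matching the expected outputs.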
> Input
> Input
>> 1²
>> (3]
>> 1%L
>> L=2
>> Each 5 4
>> Each 6 7
>> L⋅R
>> Each 9 4 8
> {0}
>> {10}
>> 12∖11
>> Output 13

Try it online! Returns a set of all possible solutions, and the empty set (i.e. \$\emptyset\$) when no solution exists. How it works Unsurprisingly, it works almost identically to most other answers: it generates a list of numbers and checks each one for inverse modulus with the argument. If you're familiar with how Whispers' program structure works, feel free to skip ahead to the horizontal line. If not: essentially, Whispers works on a line-by-line reference system, starting on the final line. Each line is classed as one of two options. Either it is a nilad line, or it is an operator line. Nilad lines start with >, such as > Input or > {0}, and return the exact value represented on that line, i.e. > {0} returns the set \$\{0\}\$. > Input returns the next line of STDIN, evaluated if possible. Operator lines start with >>, such as >> 1² or >> (3], and denote running an operator on one or more values. Here, the numbers used do not reference those explicit numbers; instead, they reference the value on that line. For example, ² is the square command (\$n \to n^2\$), so >> 1² does not return the value \$1^2\$, instead it returns the square of line 1, which, in this case, is the first input. Usually, operator lines only work using numbers as references, yet you may have noticed the lines >> L=2 and >> L⋅R. These two values, L and R, are used in conjunction with Each statements. Each statements work by taking two or three arguments, again as numerical references. The first argument (e.g. 5) is a reference to an operator line used as a function, and the rest of the arguments are arrays. We then iterate the function over the array, where the L and R in the function represent the current element(s) in the arrays being iterated over. As an example: Let \$A = [1, 2, 3, 4]\$, \$B = [4, 3, 2, 1]\$ and \$f(x, y) = x + y\$.
Assuming we are running the following code:

> [1, 2, 3, 4]
> [4, 3, 2, 1]
>> L+R
>> Each 3 1 2

We then get a demonstration of how Each statements work. First, when working with two arrays, we zip them to form \$C = [(1, 4), (2, 3), (3, 2), (4, 1)]\$, then map \$f(x, y)\$ over each pair, forming our final array \$D = [f(1, 4), f(2, 3), f(3, 2), f(4, 1)] = [5, 5, 5, 5]\$ Try it online! How this code works Working counter-intuitively to how Whispers works, we start from the first two lines: > Input > Input This collects our two inputs, let's say \$x\$ and \$y\$, and stores them in lines 1 and 2 respectively. We then store \$x^2\$ on line 3 and create a range \$A := [1 ... x^2]\$ on line 4. Next, we jump to the section

>> 1%L
>> L=2
>> Each 5 4
>> Each 6 7

The first thing executed here is line 7, >> Each 5 4, which iterates line 5 over line 4. This yields the array \$B := [i \: \% \: x \: | \: i \in A]\$, where \$a \: \% \: b\$ is defined as the modulus of \$a\$ and \$b\$. We then execute line 8, >> Each 6 7, which iterates line 6 over \$B\$, yielding an array \$C := [(i \: \% \: x) = y \: | \: i \in A]\$. For the inputs \$x = 5, y = 2\$, we have \$A = [1, 2, 3, ..., 23, 24, 25]\$, \$B = [0, 1, 2, 1, 0, 5, 5, ..., 5, 5]\$ and \$C = [0, 0, 1, 0, 0, ..., 0, 0]\$ We then jump down to

>> L⋅R
>> Each 9 4 8

which is our example of a dyadic Each statement. Here, our function is line 9, i.e. >> L⋅R, and our two arrays are \$A\$ and \$C\$. We multiply each element in \$A\$ with its corresponding element in \$C\$, which yields an array, \$E\$, where each element works from the following relationship: $$E_i =\begin{cases}0 & C_i = 0 \\A_i & C_i = 1\end{cases}$$ We then end up with an array consisting of \$0\$s and the inverse moduli of \$x\$ and \$y\$. In order to remove the \$0\$s, we convert this array to a set (>> {10}), then take the set difference between this set and \$\{0\}\$, yielding, then outputting, our final result.
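For readers who don't speak Whispers, here is a rough Python analogue of the whole pipeline (the function name `inverse_modulus` is made up; the Whispers program builds the range, filters by the modulus test, and discards the zeros in separate steps, while Python can fuse them into one comprehension):

```python
def inverse_modulus(x, y):
    # mirror of the Whispers program: every i in [1, x^2] with i % x == y,
    # returned as a set; the empty set means "no solution exists"
    return {i for i in range(1, x * x + 1) if i % x == y}
```

For the worked inputs \$x = 5, y = 2\$ this gives \$\{2, 7, 12, 17, 22\}\$, and an impossible pair such as \$x = 3, y = 5\$ gives the empty set.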
Basic Algebra and Calculus

Sage can perform various computations related to basic algebra and calculus: for example, finding solutions to equations, differentiation, integration, and Laplace transforms. See the Sage Constructions documentation for more examples.

In all these examples, it is important to note that the variables in the functions are defined to be var(...). As an example:

sage: u = var('u')
sage: diff(sin(u), u)
cos(u)

If you get a NameError, check to see if you misspelled something, or forgot to define a variable with var(...).

Solving Equations

Solving Equations Exactly

The solve function solves equations. To use it, first specify some variables; then the arguments to solve are an equation (or a system of equations), together with the variables for which to solve:

sage: x = var('x')
sage: solve(x^2 + 3*x + 2, x)
[x == -2, x == -1]

You can solve equations for one variable in terms of others:

sage: x, b, c = var('x b c')
sage: solve([x^2 + b*x + c == 0], x)
[x == -1/2*b - 1/2*sqrt(b^2 - 4*c), x == -1/2*b + 1/2*sqrt(b^2 - 4*c)]

You can also solve for several variables:

sage: x, y = var('x, y')
sage: solve([x+y==6, x-y==4], x, y)
[[x == 5, y == 1]]

The following example of using Sage to solve a system of non-linear equations was provided by Jason Grout: first, we solve the system symbolically:

sage: var('x y p q')
(x, y, p, q)
sage: eq1 = p+q==9
sage: eq2 = q*y+p*x==-6
sage: eq3 = q*y^2+p*x^2==24
sage: solve([eq1,eq2,eq3,p==1],p,q,x,y)
[[p == 1, q == 8, x == -4/3*sqrt(10) - 2/3, y == 1/6*sqrt(10) - 2/3], [p == 1, q == 8, x == 4/3*sqrt(10) - 2/3, y == -1/6*sqrt(10) - 2/3]]

For numerical approximations of the solutions, you can instead use:

sage: solns = solve([eq1,eq2,eq3,p==1],p,q,x,y, solution_dict=True)
sage: [[s[p].n(30), s[q].n(30), s[x].n(30), s[y].n(30)] for s in solns]
[[1.0000000, 8.0000000, -4.8830369, -0.13962039], [1.0000000, 8.0000000, 3.5497035, -1.1937129]]

(The function n prints a numerical approximation, and the argument is the number of bits of precision.)
Solving Equations Numerically

Often, solve will not be able to find an exact solution to the equation or equations specified. When it fails, you can use find_root to find a numerical solution. For example, solve does not return anything interesting for the following equation:

sage: theta = var('theta')
sage: solve(cos(theta)==sin(theta), theta)
[sin(theta) == cos(theta)]

On the other hand, we can use find_root to find a solution to the above equation in the range \(0 < \phi < \pi/2\):

sage: phi = var('phi')
sage: find_root(cos(phi)==sin(phi),0,pi/2)
0.785398163397448...

Differentiation, Integration, etc.

Sage knows how to differentiate and integrate many functions. For example, to differentiate \(\sin(u)\) with respect to \(u\), do the following:

sage: u = var('u')
sage: diff(sin(u), u)
cos(u)

To compute the fourth derivative of \(\sin(x^2)\):

sage: diff(sin(x^2), x, 4)
16*x^4*sin(x^2) - 48*x^2*cos(x^2) - 12*sin(x^2)

To compute the partial derivatives of \(x^2+17y^2\) with respect to \(x\) and \(y\), respectively:

sage: x, y = var('x,y')
sage: f = x^2 + 17*y^2
sage: f.diff(x)
2*x
sage: f.diff(y)
34*y

We move on to integrals, both indefinite and definite. To compute \(\int x\sin(x^2)\, dx\) and \(\int_0^1 \frac{x}{x^2+1}\, dx\):

sage: integral(x*sin(x^2), x)
-1/2*cos(x^2)
sage: integral(x/(x^2+1), x, 0, 1)
1/2*log(2)

To compute the partial fraction decomposition of \(\frac{1}{x^2-1}\):

sage: f = 1/((1+x)*(x-1))
sage: f.partial_fraction(x)
-1/2/(x + 1) + 1/2/(x - 1)

Solving Differential Equations

You can use Sage to investigate ordinary differential equations. To solve the equation \(x'+x-1=0\):

sage: t = var('t')    # define a variable t
sage: x = function('x')(t)    # define x to be a function of that variable
sage: DE = diff(x, t) + x - 1
sage: desolve(DE, [x,t])
(_C + e^t)*e^(-t)

This uses Sage's interface to Maxima [Max], and so its output may be a bit different from other Sage output.
In this case, this says that the general solution to the differential equation is \(x(t) = e^{-t}(e^{t}+c)\).

You can compute Laplace transforms also; the Laplace transform of \(t^2e^t -\sin(t)\) is computed as follows:

sage: s = var("s")
sage: t = var("t")
sage: f = t^2*exp(t) - sin(t)
sage: f.laplace(t,s)
-1/(s^2 + 1) + 2/(s - 1)^3

Here is a more involved example. The displacement from equilibrium for a coupled spring attached to a wall on the left,

|------\/\/\/\/\---|mass1|----\/\/\/\/\/----|mass2|
         spring1                    spring2

is modeled by the system of 2nd order differential equations

\[m_{1}x_{1}'' + (k_{1}+k_{2})x_{1} - k_{2}x_{2} = 0,\]
\[m_{2}x_{2}'' + k_{2}(x_{2}-x_{1}) = 0,\]

where \(m_{i}\) is the mass of object i, \(x_{i}\) is the displacement from equilibrium of mass i, and \(k_{i}\) is the spring constant for spring i.

Example: Use Sage to solve the above problem with \(m_{1}=2\), \(m_{2}=1\), \(k_{1}=4\), \(k_{2}=2\), \(x_{1}(0)=3\), \(x_{1}'(0)=0\), \(x_{2}(0)=3\), \(x_{2}'(0)=0\).

Solution: Take the Laplace transform of the first equation (with the notation \(x=x_{1}\), \(y=x_{2}\)):

sage: de1 = maxima("2*diff(x(t),t, 2) + 6*x(t) - 2*y(t)")
sage: lde1 = de1.laplace("t","s"); lde1
2*((-%at('diff(x(t),t,1),t=0))+s^2*'laplace(x(t),t,s)-x(0)*s)-2*'laplace(y(t),t,s)+6*'laplace(x(t),t,s)

This is hard to read, but it says that

\[-2x'(0) + 2s^{2}X(s) - 2sx(0) - 2Y(s) + 6X(s) = 0\]

(where the Laplace transform of a lower case function like \(x(t)\) is the upper case function \(X(s)\)).
Take the Laplace transform of the second equation:

sage: de2 = maxima("diff(y(t),t, 2) + 2*y(t) - 2*x(t)")
sage: lde2 = de2.laplace("t","s"); lde2
(-%at('diff(y(t),t,1),t=0))+s^2*'laplace(y(t),t,s)+2*'laplace(y(t),t,s)-2*'laplace(x(t),t,s)-y(0)*s

This says

\[-y'(0) + s^{2}Y(s) + 2Y(s) - 2X(s) - sy(0) = 0.\]

Plug in the initial conditions for \(x(0)\), \(x'(0)\), \(y(0)\), and \(y'(0)\), and solve the resulting two equations:

sage: var('s X Y')
(s, X, Y)
sage: eqns = [(2*s^2+6)*X-2*Y == 6*s, -2*X +(s^2+2)*Y == 3*s]
sage: solve(eqns, X,Y)
[[X == 3*(s^3 + 3*s)/(s^4 + 5*s^2 + 4), Y == 3*(s^3 + 5*s)/(s^4 + 5*s^2 + 4)]]

Now take inverse Laplace transforms to get the answer:

sage: var('s t')
(s, t)
sage: inverse_laplace((3*s^3 + 9*s)/(s^4 + 5*s^2 + 4),s,t)
cos(2*t) + 2*cos(t)
sage: inverse_laplace((3*s^3 + 15*s)/(s^4 + 5*s^2 + 4),s,t)
-cos(2*t) + 4*cos(t)

Therefore, the solution is

\[x(t) = \cos(2t) + 2\cos(t), \qquad y(t) = 4\cos(t) - \cos(2t).\]

This can be plotted parametrically using

sage: t = var('t')
sage: P = parametric_plot((cos(2*t) + 2*cos(t), 4*cos(t) - cos(2*t) ),
....:     (t, 0, 2*pi), rgbcolor=hue(0.9))
sage: show(P)

The individual components can be plotted using

sage: t = var('t')
sage: p1 = plot(cos(2*t) + 2*cos(t), (t,0, 2*pi), rgbcolor=hue(0.3))
sage: p2 = plot(4*cos(t) - cos(2*t), (t,0, 2*pi), rgbcolor=hue(0.6))
sage: show(p1 + p2)

Euler's Method for Systems of Differential Equations

In the next example, we will illustrate Euler's method for first and second order ODEs. We first recall the basic idea for first order equations. Given an initial value problem of the form

\[y' = f(x,y), \qquad y(a) = c,\]

we want to find the approximate value of the solution at \(x=b\) with \(b>a\).

Recall from the definition of the derivative that

\[y'(x) \approx \frac{y(x+h)-y(x)}{h},\]

where \(h>0\) is given and small. This and the DE together give \(f(x,y(x))\approx \frac{y(x+h)-y(x)}{h}\).
Now solve for \(y(x+h)\):

\[y(x+h) \approx y(x) + h\cdot f(x,y(x)).\]

If we call \(h \cdot f(x,y(x))\) the "correction term" (for lack of anything better), call \(y(x)\) the "old value of \(y\)", and call \(y(x+h)\) the "new value of \(y\)", then this approximation can be re-expressed as

\[y_{new} \approx y_{old} + h\cdot f(x,y_{old}).\]

If we break the interval from \(a\) to \(b\) into \(n\) steps, so that \(h=\frac{b-a}{n}\), then we can record the information for this method in a table.

\(x\)            \(y\)                        \(h\cdot f(x,y)\)
\(a\)            \(c\)                        \(h\cdot f(a,c)\)
\(a+h\)          \(c+h\cdot f(a,c)\)          …
\(a+2h\)         …
…
\(b=a+nh\)       ???

The goal is to fill out all the blanks of the table, one row at a time, until we reach the ??? entry, which is the Euler's method approximation for \(y(b)\). The idea for systems of ODEs is similar.

Example: Numerically approximate \(z(t)\) at \(t=1\) using 4 steps of Euler's method, where \(z''+tz'+z=0\), \(z(0)=1\), \(z'(0)=0\). We must reduce the 2nd order ODE down to a system of two first order DEs (using \(x=z\), \(y=z'\)) and apply Euler's method:

sage: t,x,y = PolynomialRing(RealField(10),3,"txy").gens()
sage: f = y; g = -x - y * t
sage: eulers_method_2x2(f,g, 0, 1, 0, 1/4, 1)
      t                x            h*f(t,x,y)               y       h*g(t,x,y)
      0                1                  0.00               0            -0.25
    1/4              1.0                -0.062           -0.25            -0.23
    1/2             0.94                 -0.12           -0.48            -0.17
    3/4             0.82                 -0.16           -0.66           -0.081
      1             0.65                 -0.18           -0.74            0.022

Therefore, \(z(1)\approx 0.65\).

We can also plot the points \((x,y)\) to get an approximate picture of the curve. The function eulers_method_2x2_plot will do this; in order to use it, we need to define functions \(f\) and \(g\) which take one argument with three coordinates: (\(t\), \(x\), \(y\)).

sage: f = lambda z: z[2]    # f(t,x,y) = y
sage: g = lambda z: -sin(z[1])    # g(t,x,y) = -sin(x)
sage: P = eulers_method_2x2_plot(f,g, 0.0, 0.75, 0.0, 0.1, 1.0)

At this point, P is storing two plots: P[0], the plot of \(x\) vs. \(t\), and P[1], the plot of \(y\) vs. \(t\). We can plot both of these as follows:

sage: show(P[0] + P[1])

(For more on plotting, see Plotting.)
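The same iteration is easy to check in plain Python (the function `euler_2x2` below is a hand-rolled sketch, not the Sage function); it reproduces the final row of the table above:

```python
def euler_2x2(f, g, t0, x0, y0, h, t_end):
    # naive Euler steps for the first-order system x' = f(t,x,y), y' = g(t,x,y)
    t, x, y = t0, x0, y0
    while t < t_end - 1e-12:
        x, y = x + h * f(t, x, y), y + h * g(t, x, y)
        t += h
    return x, y

# z'' + t*z' + z = 0, z(0) = 1, z'(0) = 0, rewritten with x = z, y = z'
zx, zy = euler_2x2(lambda t, x, y: y,
                   lambda t, x, y: -x - t * y,
                   0.0, 1.0, 0.0, 0.25, 1.0)
print(zx)  # ≈ 0.65, matching the last row of the table
```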
Special functions

Several orthogonal polynomials and special functions are implemented, using both PARI [GAP] and Maxima [Max]. These are documented in the appropriate sections ("Orthogonal polynomials" and "Special functions", respectively) of the Sage reference manual.

sage: x = polygen(QQ, 'x')
sage: chebyshev_U(2,x)
4*x^2 - 1
sage: bessel_I(1,1).n(250)
0.56515910399248502720769602760986330732889962162109200948029448947925564096
sage: bessel_I(1,1).n()
0.565159103992485
sage: bessel_I(2,1.1).n()
0.167089499251049

At this point, Sage has only wrapped these functions for numerical use. For symbolic use, please use the Maxima interface directly, as in the following example:

sage: maxima.eval("f:bessel_y(v, w)")
'bessel_y(v,w)'
sage: maxima.eval("diff(f,w)")
'(bessel_y(v-1,w)-bessel_y(v+1,w))/2'
In previous posts I have presented separately the graph coloring problem, as well as its generalization, the partitioned graph coloring problem, and linear programming. It is time to put both of them together, by modelling an instance of graph coloring using linear programming.

What we need to model

First we have to determine what we want to model. We will start with the standard graph coloring problem and introduce partitions later, so we will need a way to express the statement that we "assign color j to node i" using variables. We will also have to find a way to express the constraints that every node must be colored and that no two adjacent nodes can share the same color, as well as specifying our objective of using as few different colors as possible.

Variables

Even though there are lots of different linear programming models for the coloring problem, we will present the most classic one, which is also the easiest to understand. We will use two different types of binary variables:

\(x_{ij}\) variables that will be true if and only if node \(i\) is assigned color \(j\)

\(w_j\) variables that will be true if at least one node is assigned color \(j\)

Note that what we have here are boolean or binary variables, as we want to specify boolean conditions. This means that all variables are restricted to have integral value and be between 0 and 1. This is why this problem is called a binary linear programming problem, a particular case of integer linear programming. We will later see what happens if we do not impose these restrictions on our variables.

Objective

Our objective will be to use as few different colors as possible, which we can express as minimizing the number of \(w_j\) variables that are true:

$$\min \sum _j w_j $$

Easy enough, and this explains why we had to introduce the artificial \(w_j\) variables. We will see how we relate them with the \(x\) variables through appropriate restrictions.
Constraints

First of all we must specify that every node is assigned exactly one color. Since we have boolean variables for each node-color combination, we simply have to request that the sum over all colors for a single node is equal to one:

$$ \forall i \in V \quad \sum _j x_{ij} = 1 $$

Color conflict constraints will be specified over every pair of adjacent nodes, for every possible color they can be assigned. What we want to express is that, given a color and a pair of adjacent nodes, at most one of them may have that color assigned:

$$ \forall (u,v) \in E, j \in C \quad x_{uj} + x_{vj} \leq 1 $$

This restriction ensures that at most one of the two variables will be true, effectively avoiding color conflicts. As for the \(w_j\) variables, one way to handle them is to simply make sure that if any node is colored with color \(j\) then \(w_j\) is set to true, by setting \(w_j\) as an upper bound for every \(x_{ij}\):

$$ \forall i \in V, j \in C \quad x_{ij} \leq w_j $$

However, if the graph has no isolated nodes, we can take advantage of the color conflict constraints and reuse them to force the \(w_j\) variables to be true if one of the two adjacent nodes uses color \(j\):

$$ \forall (u,v) \in E, j \in C \quad x_{uj} + x_{vj} \leq w_j $$

Using these sets of constraints we have successfully modelled the graph coloring problem.

Generalizing for Partitioned Coloring

Partitioned coloring follows the same rules as standard graph coloring, with the same objective, but with the slight difference that not every node must be colored: only a single node within every partition. The same variables and objective function are used as in the model presented so far, and color conflict constraints are also the same.
The only change will be in the first constraint, which required that every node is colored:

$$ \forall P_k \in P \quad \sum _{i \in P_k} \sum _j x_{ij} = 1 $$

Now we require that the sum over all node-color combinations in each partition equals one, which ensures that exactly one node is assigned exactly one color in every partition. This constitutes the most basic formulation for partition coloring.

An example...

Let's go back to our old diamond partitioned graph:

We know that we will need at most two colors for coloring this graph, as it has two partitions, and the worst case would be having to assign a different color to each partition, so our variables will be all \(x_{ij}\) and \(w_j\) with \(i\) ranging from 1 to 4 and \(j\) from 1 to 2. First of all, our objective function, which minimizes the sum over all colors:

$$\min w_1 + w_2 $$

Coloring each partition comes next; we require that both partitions have exactly one node colored:

$$ x_{11} + x_{12} + x_{21} + x_{22} = 1 $$

$$ x_{31} + x_{32} + x_{41} + x_{42} = 1 $$

Finally, color conflict constraints are applied to every edge-color combination possible in the graph. Note that adjacent nodes within the same partition can be disregarded, as we have already forced that at most one of them can be colored with the previous set of constraints. These two constraints handle nodes \(v_1\) and \(v_4\) for all possible colors:

$$ x_{11} + x_{41} \leq w_1 $$

$$ x_{12} + x_{42} \leq w_2 $$

Now for the other two pairs of adjacent nodes:

$$ x_{11} + x_{31} \leq w_1 $$

$$ x_{12} + x_{32} \leq w_2 $$

$$ x_{21} + x_{31} \leq w_1 $$

$$ x_{22} + x_{32} \leq w_2 $$

With these restrictions we have a complete formulation for the partitioned coloring of this graph. In a future post we will see the optimal values for this system of linear restrictions with and without integrality restrictions, and see why they are so necessary for boolean formulations.
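Because the diamond instance is tiny (8 \(x\) variables and 2 \(w\) variables), the formulation can be checked without any LP solver, by brute force over all 0/1 assignments. The snippet below is only an illustration: the node numbering, partitions and edge list are read off the constraints above, and the names are made up.

```python
from itertools import product

nodes, colors = [1, 2, 3, 4], [1, 2]
partitions = [[1, 2], [3, 4]]
edges = [(1, 4), (1, 3), (2, 3)]  # adjacent pairs from different partitions

def feasible(x, w):
    # exactly one colored node per partition
    ok_parts = all(sum(x[i, j] for i in part for j in colors) == 1
                   for part in partitions)
    # conflict constraints reused to link x and w:  x_uj + x_vj <= w_j
    ok_edges = all(x[u, j] + x[v, j] <= w[j]
                   for (u, v) in edges for j in colors)
    return ok_parts and ok_edges

best = min(
    w[1] + w[2]
    for bits in product([0, 1], repeat=10)  # 8 x-variables, then 2 w-variables
    for x in [{(i, j): bits[2 * (i - 1) + (j - 1)] for i in nodes for j in colors}]
    for w in [{1: bits[8], 2: bits[9]}]
    if feasible(x, w)
)
```

For this instance the optimum is 1: color \(v_2\) and \(v_4\), which are not adjacent, with the same color, and leave the other two nodes uncolored, as partitioned coloring allows.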
Consider a degree $n$ polynomial $P(x)$ with coefficients $c_i \in \{-1,0,1\}$ chosen uniformly and independently. What is the probability that $P(x)$ has a root which is a root of unity?

As discussed in comments, I think for large $n$ the probability that it has a root which is a root of unity is double the probability that 1 is a root. For large $n$, $P(1)$ is a random variable whose distribution is approximately normal and whose variance is $\sigma^2=2n/3$. The probability that $P(1)=0$ is then approximately $1/\sigma\sqrt{2\pi}$. Doubling this gives a probability $\sqrt{3/\pi n}\approx 0.98 n^{-1/2}$. This seems to agree quite well with the final two values in Matt F.'s list: $$\frac{263}{729}=0.361 \qquad \sqrt{\frac{3}{7\pi}}=0.369$$ $$\frac{2267}{6561}=0.346 \qquad \sqrt{\frac{3}{8\pi}}=0.345$$

I get $$\left\{\frac{2}{3},\frac{2}{3},\frac{4}{9},\frac{35}{81},\frac{94}{243},\frac{275}{729} ,\frac{263}{729},\frac{2267}{6561}\right\}$$ for the monic polynomials of degree 1 to 8, using Mathematica:

f[a_, b_] := a x^(b - 1)
PolysOfDegree[n_] := First /@ Table[
    x^n + Plus @@ MapIndexed[f, IntegerDigits[i, 3, n] - 1], {i, 0, 3^n - 1}]
TestFactors[n_] := Table[FactorList[x^i - 1], {i, 1, 2 n + 2}] // Flatten // Union // Rest
HasRootOfUnityAsRoot[poly_] := Or @@ Map[
    PolynomialMod[poly, #] === 0 &, TestFactors[Exponent[poly, x]]]
Prob[n_] := Count[Map[HasRootOfUnityAsRoot, PolysOfDegree[n]], True]/3^n
Table[Prob[n], {n, 1, 8}]

I've enumerated the polynomials of degree $n$, and enumerated the characteristic polynomials of roots of unity of degree up to $2n+2$. Then it's just a matter of testing which are divisible by which.
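The Mathematica enumeration can be cross-checked in a few lines of Python by evaluating each monic polynomial at every $k$-th root of unity for $k \le 2n+2$ (the same degree bound as above). The function names here are made up, and the zero test is numerical, which is safe at these tiny degrees since a nonzero value of such a polynomial at a low-order root of unity is far larger than floating-point noise:

```python
import cmath
from fractions import Fraction
from itertools import product

def has_root_of_unity_root(low_coeffs):
    # low_coeffs = (c_0, ..., c_{n-1}); the polynomial is monic of degree n
    poly = list(low_coeffs) + [1]
    n = len(low_coeffs)
    for k in range(1, 2 * n + 3):          # orders k = 1, ..., 2n+2
        for j in range(k):
            z = cmath.exp(2j * cmath.pi * j / k)
            if abs(sum(c * z ** m for m, c in enumerate(poly))) < 1e-9:
                return True
    return False

def prob(n):
    # exact probability over all 3^n monic degree-n polynomials
    hits = sum(has_root_of_unity_root(cs)
               for cs in product([-1, 0, 1], repeat=n))
    return Fraction(hits, 3 ** n)
```

For degrees 1, 2, 3 this reproduces the first three values of the list above: 2/3, 2/3, 4/9.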
For a University class project we were asked to implement a decision-tree classifier and to use a rule engine in order to develop a recommender system for hands and outcomes in Texas Hold’em poker matches. An additional challenge was to make it in a functional language like Erlang, where no variable state is persisted in memory for long periods of time. Instead, the objective was to use a rule engine more-or-less like a disk database. All of the historical data collected over time is sent between functions in a stateless manner and persisted to disk using the rule engine. With this, I developed an Erlang application with a CLI interface that collects game data and reports win/lose percentages on each round. It works by prompting the user to fill in the cards in his/her hand and on the table for each round, and whether the user won or lost each match. The probabilities of occurrence for each card per round, along with the probability of winning or losing the match with the current cards, are recalculated at the end of every match and organized into a decision tree. Each round, the probability of getting a good hand and winning the match is queried from the decision tree and printed to the user, suggesting that they either raise, call or fold. The application is fully open-source and uses only a single file (‘poker.erl’). A trained model containing a history of matches and hands is also provided (the ‘storage’ file), so you can start testing the application immediately. This application is composed of three modules: a main module, which uses Eresye to store Texas Hold’em rules and hand rankings; a history module, which stores the number of occurrences and the ranking of hands; a tree module, which implements the decision tree and the probability calculation methods. A match is considered the result of seven cards that showed up throughout four rounds.
The four rounds are numbered from 0 to 3:

Round 0: when the user is given 2 cards
Round 1: when the first, second and third cards on the table are shown
Round 2: when the fourth card on the table is shown
Round 3: when the fifth card on the table is shown

We consider that it is only possible to either raise, call or fold between the four rounds. Hence, there are three phases of betting. In this implementation draws are ignored, so as not to influence the collected probabilities of winning or losing by reducing the sample data of both. A rank is a weight of importance associated with a specific set of cards. These ranks are, in order: 1. Royal Flush; 2. Straight Flush; 3. Four Of A Kind; 4. Full House; 5. Flush; 6. Straight; 7. Three Of A Kind; 8. Two Pair; 9. Pair; 10. High Card.

The main module uses the rule-based engine Eresye to store variables per match and Texas Hold’em hand rankings. For each match, the user is prompted to input: the 7 cards that are visible to the user; the current pot value; the final result of the game (‘won’ or ‘lost’). Some additional inputs can be used at any time:

“reset”: discards the current match inputs and starts again from round 0;
“close”: saves the current history data into a local file ‘storage’ and closes the application.

The current match data is only committed to history at the end of the match, so a “reset” will discard the current match. When prompting for a card, two inputs are required: the suit and the value. Cards are then structured as {card, ID, Suit, Value} and compared with other hands (and their ranks) in Eresye. The ID value is randomly attributed using the method random:uniform(), and is used to ensure that the card will not be compared with itself during Eresye’s operations.

Review note (May 20, 2019): In retrospect, using the result of random:uniform() as an ID is a bad idea, as it can lead to collisions (even if they are rare). A unique UUID or a counting integer should have been used instead.
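For illustration, the rank ordering above can be encoded as a simple lookup; this is a hypothetical Python stand-in for the Eresye facts, not the Erlang code from the project:

```python
# hand ranks from strongest (1) to weakest (10), mirroring the list above
RANKS = ["royal flush", "straight flush", "four of a kind", "full house",
         "flush", "straight", "three of a kind", "two pair", "pair", "high card"]

def rank_of(name):
    # 1 = Royal Flush ... 10 = High Card; a smaller number means a stronger hand
    return RANKS.index(name) + 1
```

So, for example, `rank_of("flush")` is 5 and beats `rank_of("pair")`, which is 9.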
The decision tree module implements a decision tree where each node corresponds to a round, and each round has 3 branches pointing to 3 child nodes. Each branch and node stores, respectively:

branch1 - Expected profit on Raise; node1 - Raise value
branch2 - Expected profit on Fold; node2 - Fold value
branch3 - Expected profit on Call; node3 - Call value

Because the probability of winning (PWin) varies according to the user’s hand and the current round, this decision tree is built at the end of each round, and the expected profits are calculated as such:

$$ExpectedProfitRaise = \left(Pot + RaiseValue\right) * PWin$$

$$ExpectedProfitFold = -\left(BetsDoneByPlayer\right)$$

$$ExpectedProfitCall = Pot * PWin$$

BetsDoneByPlayer is the total value of bets done by the user up to that point, or in other words, the number of chips the user has placed in the pot. PWin can be calculated using the History module and its Bayesian network (see below), where we can obtain the number of wins attained with the current hand, but also the possible hands that can be obtained in future rounds knowing the current hand. Assuming that:

c is the current hand;
x is a possible hand in the next round;
t is the total number of matches recorded in history;
function W is the number of historic wins for a specific hand;

then PWin can be obtained using the following expression:

$$P(c) = \frac{W(c)}{t} + \sum_{x} \left( \frac{W(x)}{t} \cdot P\left(x \mid c\right) \right)$$

Once all of the expected profits are calculated, the action that the user should take corresponds to the one that has the highest expected profit. The history module is a data history, implemented using an alternative Eresye engine (different from the one used in the main module). Essentially, using Eresye allows insertion and querying within a single variable that does not need to be passed along functions, something that would otherwise be impossible to do in native Erlang.
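The three expected-profit formulas above translate directly into code. This is a minimal Python sketch (the project's actual implementation is the Erlang tree module; the names and the sample numbers here are made up):

```python
def expected_profits(pot, raise_value, bets_done_by_player, p_win):
    # straight transcription of the three formulas above
    return {
        "raise": (pot + raise_value) * p_win,
        "fold": -bets_done_by_player,
        "call": pot * p_win,
    }

def recommend(pot, raise_value, bets_done_by_player, p_win):
    profits = expected_profits(pot, raise_value, bets_done_by_player, p_win)
    return max(profits, key=profits.get)  # action with the highest expectation
```

For example, with a pot of 100 chips, a raise of 20, 30 chips already bet and PWin = 0.4, the expectations are 48 for raise, -30 for fold and 40 for call, so the recommendation is to raise.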
In addition, the query implementation of Eresye enables the retrieval of tuples without knowing all of their data, using the wildcard '_' (underscore). The data in storage is structured as:

- {round_no, current_rank, previous_round_rank, occurrence_count}: the number of times the user had a ranked hand Y knowing that he/she had a ranked hand X on a previous round (where Rank(Y) >= Rank(X));
- {won/lost, current_rank, occurrence_count}: the number of times the user won or lost with a specific ranked hand;
- {total, total_matches_count}: the total number of matches that were played with the application.

Note that the above data types are identified by the first constant, so in practice we will have 7 data types (one for each of the four rounds, two for won or lost hands, and one for the total match count). By storing the history in the local file 'storage', we have a continually trained model, persistent between different application sessions. The prediction of outcomes is based on conditional probabilities per round. For example, for the following match...

- Round 0 = High Card
- Round 1 = Pair
- Round 2 = Pair
- Round 3 = Two Pair

...where Total is the total number of matches stored in history and the asterisk is a wildcard, the conditional probability for each round is calculated as follows:

Round 0: $$P(Pair | HighCard) = \frac{query(round1, pair, highCard, *)}{Total}$$

Round 1: $$P(Pair | Pair) = \frac{query(round2, pair, pair, *)}{Total}$$

Round 2: $$P(TwoPair | Pair) = \frac{query(round3, twoPair, pair, *)}{Total}$$

This lets us know whether the chance of reaching other, more valuable ranks is high enough for a raise or a call to be worth it. That's all! With all these modules, our knowledge management system for Texas Hold'em outcome prediction is complete! In a way, we used Eresye to store state, essentially getting around the functional and immutable nature of Erlang.
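The per-round conditional probability can be sketched with a plain dict standing in for the Eresye history store (keys mirror the tuples described above; the counts are made-up illustrative data):

```python
# Illustrative history counts, keyed like the stored tuples.
history = {
    ("round1", "pair", "highCard"): 42,
    ("round2", "pair", "pair"): 30,
    ("round3", "twoPair", "pair"): 12,
}
total = 200  # from the {total, total_matches_count} fact

def cond_prob(round_no, current_rank, previous_rank):
    # P(current | previous) = query(round, current, previous, *) / Total
    return history.get((round_no, current_rank, previous_rank), 0) / total

print(cond_prob("round1", "pair", "highCard"))  # 0.21
```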
There might have been better ways to preserve statelessness by passing the complete records along every function, but regardless, the decision-tree classifier is completely stateless and, surprisingly, highly readable and maintainable. Give it a go! And as always - feel free to get in touch.
This is a recently developed approach to solving Maxwell's equations [1] on the surface of nano-structures, based on the idea of principal modes. These generalise the analytical Mie modes of spherical particles to describe arbitrary smooth boundaries and surfaces. This approach is different from more numerical techniques such as finite-difference time-domain (FDTD), as it decomposes the response of the nano-system into sets of distinct pairs of optical modes. The fields at the surface can be decomposed into unique pairs of modes, where one is spatially inside the structure and the other is outside. The two modes in the pair then interact on the surface. The surface integral on the boundary of the nano-structure is used to define the relative orientation in the two sub-spaces (internal and external), $$\int a^\ast \cdot b\;\mathrm{d}s = \langle a | b \rangle = |a| |b| \cos(\xi).$$ The combinations of modes with the smallest angles, i.e. the principal modes, are the sets that are most aligned between the two spaces. For each pair of modes, their correlation (or equivalently the principal angle \(\xi_n\)) gives information about their sensitivity to excitation by energy which couples to that pair (this is similar to phase matching in non-linear optics). This gives a geometric picture of the light interaction in that channel and, using some simple trigonometry, allows the amplitudes of the modes to be solved analytically. On the surface of the system the tangential parts of the incident, internal and scattered light obey $$\left| f_\perp^0 + f_\perp^{\mathrm{internal}} - f_\perp^{\mathrm{scattered}} \right| = 0$$ for each mode pair. An example of using this information about the modes of a system to control its response to light [2] is shown below, where a resonance is actually due to two types of mode interfering.
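As a rough numerical illustration of the principal angle, the surface integral above can be discretised over surface samples. The mode profiles below are made-up placeholder data, not solutions of Maxwell's equations:

```python
import math
import random

# Discretised surface integral <a|b> = sum conj(a)*b*ds over n samples,
# with placeholder complex "mode" data (b is a noisy copy of a).
random.seed(0)
n, ds = 200, 0.05  # number of surface samples and area element
a = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
b = [ai + 0.3 * complex(random.gauss(0, 1), random.gauss(0, 1)) for ai in a]

inner = sum(ai.conjugate() * bi * ds for ai, bi in zip(a, b))
norm_a = math.sqrt(sum(abs(ai) ** 2 * ds for ai in a))
norm_b = math.sqrt(sum(abs(bi) ** 2 * ds for bi in b))
cos_xi = abs(inner) / (norm_a * norm_b)  # |cos(xi)| from the mode overlap
xi = math.acos(min(cos_xi, 1.0))         # the principal angle
print(f"principal angle = {math.degrees(xi):.1f} degrees")
```

Pairs with small xi (strong overlap) are the most aligned and hence the most sensitive to excitation in that channel.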
References

1. F. Papoff and B. Hourahine, "Geometrical Mie theory for resonances in nanoparticles of any shape", Optics Express 19(22), 21432-21444 (2011). doi:10.1364/OE.19.021432. http://strathprints.strath.ac.uk/34708/
2. B. Hourahine and F. Papoff, "Optical control of scattering, absorption and lineshape in nanoparticles", Optics Express 21(17), 20322-20333 (2013). doi:10.1364/OE.21.020322. http://strathprints.strath.ac.uk/44539/
3. D. McArthur, B. Hourahine and F. Papoff, "Enhancing ultraviolet spontaneous emission with a designed quantum vacuum", Optics Express 25(4), 4162-4179 (2017). doi:10.1364/OE.25.004162. http://strathprints.strath.ac.uk/59693/
4. D. McArthur, B. Hourahine and F. Papoff, "Coherent control of plasmons in nanoparticles with nonlocal response", Optics Communications 382, 258-265 (2017). doi:10.1016/j.optcom.2016.07.032. http://strathprints.strath.ac.uk/57291/
5. F. Papoff, D. McArthur and B. Hourahine, "Coherent control of radiation patterns of nonlinear multiphoton processes in nanoparticles", Scientific Reports 5, 12040 (2015). doi:10.1038/srep12040. http://strathprints.strath.ac.uk/53589/
At first I thought maybe sodium sulphate in contact with water produces sulphuric acid which absorbs water, but I do not think that is actually a valid reason. Sodium sulphate reacts readily with water at room temperature to form hydrates, up to sodium sulphate decahydrate, $\ce{Na2SO4\cdot10H2O}$: $$\ce{Na2SO4 + 10H2O -> Na2SO4\cdot10H2O}$$ This means that $\ce{Na2SO4}$ can absorb up to 10 mol of water for every 1 mol of salt that is used, making it one of the most effective drying agents in terms of sheer capacity. Using data from this source we can calculate the Gibbs energy change for the above reaction at different temperatures: $$\mathrm{\Delta G_{20^\circ C} = -1.33~kJ~mol^{-1}}$$ $$\mathrm{\Delta G_{30^\circ C} = +1.28~kJ~mol^{-1}}$$ Since the entropy change for the reaction is negative, we can see that the cooler the solution you are trying to dry is, the more effective $\ce{Na2SO4}$ will be as a drying agent.
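Assuming $\Delta H$ and $\Delta S$ are roughly constant over this narrow range (so $\Delta G = \Delta H - T\Delta S$ is linear in $T$), the two $\Delta G$ values above pin down the crossover temperature where hydration stops being spontaneous. A quick sketch:

```python
# Linear interpolation of dG(T) = dH - T*dS between 20 and 30 degrees C,
# using the two Gibbs energy values quoted above.
T1, T2 = 293.15, 303.15   # 20 C and 30 C in kelvin
dG1, dG2 = -1.33, 1.28    # kJ/mol

dS = -(dG2 - dG1) / (T2 - T1)  # slope of dG vs T is -dS
dH = dG1 + T1 * dS             # recover dH from dG1 = dH - T1*dS

T_cross = dH / dS              # temperature where dG = 0
print(f"hydration spontaneous below about {T_cross - 273.15:.1f} C")
```

Under these assumptions both $\Delta H$ and $\Delta S$ come out negative, and the crossover lands near 25 °C, consistent with the sign change between the two quoted values.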
Faster Force-Directed Graph Drawing with the Well-Separated Pair Decomposition Abstract The force-directed paradigm is one of the few generic approaches to drawing graphs. Since force-directed algorithms can be extended easily, they are used frequently. Most of these algorithms are, however, quite slow on large graphs as they compute a quadratic number of forces in each iteration. We speed up this computation by using an approximation based on the well-separated pair decomposition. We perform experiments on a large number of graphs and show that we can strongly reduce the runtime—even on graphs with fewer than a hundred vertices—without a significant influence on the quality of the drawings (in terms of number of crossings and deviation in edge lengths). 1 Introduction Force-directed algorithms are commonly used to draw graphs. They can be used on a wide range of graphs without further knowledge of the graphs’ structure. The idea is to define physical forces between the vertices of the graph. These forces are applied to the vertices iteratively until stable positions are reached. The well-known spring-embedder algorithm of Eades [5] models the edges as springs. His approach was refined by Fruchterman and Reingold [9]. Between pairs of adjacent vertices they apply attracting forces caused by springs. To prevent vertices getting too close, they apply repulsive forces between all pairs of vertices. Generally, force-directed methods are easy to implement and can be extended well. For example, Fink et al. [7] defined additional forces to draw Metro lines in Metro maps as Bézier curves instead of as polygonal chains. Different aesthetic criteria can be balanced by weighting them accordingly. Force-directed algorithms can in principle be used for relatively large graphs with hundreds of vertices and often yield acceptable results. Unfortunately, force-directed methods are rather slow on such graphs.
This is caused by the computation of the repulsive force for every vertex pair, which yields a quadratic runtime for each iteration. In this paper, we present a new approach to speed this up. Previous Work. There are a lot of techniques to speed up force-directed algorithms. For example, Barnes and Hut [1] use a quadtree, a multi-purpose spatial data structure, to approximate the forces between the vertex pairs. We will compare our algorithm to theirs subsequently. Another approach is the multilevel paradigm introduced by Walshaw [15]. After contracting dense subgraphs, the resulting coarse graph is laid out. Then the vertices are uncontracted and a layout of the whole graph based on the coarse layout is computed. This can be done over several levels. The multilevel paradigm does not rule out our WSPD-based approach; our approach can be applied to each level. Callahan and Kosaraju [4] defined a decomposition for point sets in the plane, the well-separated pair decomposition (WSPD). Given a point set P and a number \(s>0\), this decomposition consists of pairs of subsets \((A_i, B_i)_{i=1, \dots , k}\) of P with two properties. First, for each pair \((p,q) \in P^2\) with \(p \ne q\), there is a unique index \(i \in \{1, \dots , k\}\) such that \(p \in A_i\) and \(q \in B_i\) or vice versa. Second, each pair \((A_i, B_i)\) must be s-well-separated, that is, the distance between the two sets is at least s times the larger of the diameters of the sets. Callahan and Kosaraju showed how to construct a WSPD for a set of n points in \(O(n \log n)\) time where the number k of pairs of sets is linear in n. The WSPD has been used for graph drawing before; Gronemann [10] employed it to speed up the fast multipole multilevel method [11]. While our WSPD is based on the split tree [4], Gronemann’s is based on a quadtree. Our Contribution. We use the WSPD in order to speed up the force-directed algorithm of Fruchterman and Reingold (FR). 
Instead of computing the repulsive forces for every pair of points, we represent every set \(A_1,\dots ,A_k,B_1,\dots ,B_k\) in the decomposition by its barycenter and use the barycenter of a set, say \(A_i\), as an approximation when computing the forces between this set and a point in \(B_i\). Thus, an iteration takes us \(O(n \log n)\) time, instead of \(\varOmega (n^2)\) for the classical algorithm. Additionally, our method is very simple and allows the user to define forces arbitrarily—as long as the total force on a point p is the sum of the forces of point pairs in which p is involved. Hence, our approach can be applied to other force-directed algorithms as well. We don’t consider other techniques such as Multidimensional Scaling (MDS) or multi-level algorithms in this paper, as we only want to show that we can speed up a force-directed graph layout algorithm using the WSPD. We guess that this technique can be applied to other algorithms as well. In the above-mentioned fast multipole method, in contrast, the approximation of the repulsive forces is quite complicated (as Hachul and Jünger [11] point out); it requires the expansion of a Laurent series. 2 Algorithm In this section, we describe our WSPD-based implementation, analyze its asymptotic running time, and give a heuristic speed-up method. Constructing a WSPD. There are various ways to construct an efficient WSPD, that is, a WSPD with a linear number of pairs of sets. We use the split tree as described by Callahan and Kosaraju [4] when introducing the WSPD. Our implementation follows the algorithm FastSplitTree in the textbook of Narasimhan and Smid [13, Sect. 9.3.2]. Given n points, this algorithm constructs a linear-size split tree in \(O(n \log n)\) time. Given the tree, a WSPD with separation constant s can be built in \(O(s^2 n)\) time. The Force-Directed Algorithm. The general principle of a force-directed algorithm is as follows. In every iteration, the algorithm computes forces on the vertices. 
These forces depend on the current position of the vertices in the drawing. The forces are applied as an offset to the position of each vertex. The algorithm terminates after a given number of iterations or when the forces get below a certain threshold. A classical force-directed algorithm such as FR computes, in every iteration, an attractive force for any pair of adjacent vertices and a repulsive force for any pair of vertices. Fruchterman and Reingold [9] use \(F_{\text {attractive}}(u,v) = d^2/c\) and \(F_{\text {repulsive}}(u,v) = -c^2/d\), where c is a constant describing the ideal edge length and \(d=d(u,v)\) is the distance between vertices u and v in the current drawing. Our WSPD-based variant proceeds as follows. In each iteration, we build a split tree T for the current positions of the vertices of G (which are stored in the leaves of T). Each node \(\mu \) of T corresponds to the set of vertices in the leaves of the subtree rooted in \(\mu \). Bottom-up, we compute the barycenters of the sets corresponding to the nodes of T. From T, we compute a WSPD \((A_i,B_i)_i\) for the current vertex positions. Each set \(A_i\) (and \(B_i\)) of the WSPD corresponds to a node \(\alpha _i\) (and \(\beta _i\)) of T. For each pair \((A_i,B_i)\) of the WSPD, we compute \(F_{\text {repulsive}}\) from the barycenter of \(A_i\) to the barycenter of \(B_i\) (and vice versa), and store the results (in an accumulative fashion) in \(\alpha _i\) and \(\beta _i\). Finally, we traverse T top-down. During the traversal, we add to the force of each node the force of its parent node. When we reach the leaves of T, which correspond to the graph vertices, we have computed the resulting force for each vertex. Running Time. We denote the number of vertices of the given graph by n and the number of edges by m. In each iteration, the classical algorithm computes the attractive forces in O(m) time and the repulsive forces in \(O(n^2)\) time. We don’t modify the computation of the attractive forces.
For computing the repulsive forces, the most expensive step is the computation of the split tree T and the WSPD, which takes \(O(n \log n)\) time. The barycenters of the sets corresponding to the nodes of T can be computed bottom-up in linear time. The forces between the pairs of the WSPD can also be computed in linear total time. The same holds for the forces acting on the vertices. In total, hence, an iteration takes \(O(m + n \log n)\) time. Improvements. To speed up our algorithm, we compute a new split tree and the resulting WSPD only every few iterations. To be precise, we only recompute it when \(\lfloor 5 \log i \rfloor \) changes, where i is the current iteration. Thus, the WSPD may not be valid for the current vertex positions. This makes the approximation of the forces more inaccurate, but our experiments show that this method does not change the quality of the drawings significantly, while the running time decreases notably (see Figs. 1, 2 and 3). Implementation. Our Java implementation is based on FRLayout, the FR algorithm implemented in the JUNG library [12]. We slightly optimized the code, which reduced the runtime by a constant factor. Additionally, we removed the frame that bounded the drawing area, as it caused ugly drawings for larger graphs. For our experimental comparison in Sect. 3, we used FRLayout with these modifications. It is this implementation that we then sped up using the WSPD. We recompute the WSPD only every few iterations as described in the previous paragraph. We call the result FR+WSPD. For comparison, we also implemented the quadtree-based speed-up method of Barnes and Hut [1], which we call FR+Quad, and a grid-based approach suggested already by Fruchterman and Reingold [9], which we call FR+Grid. 
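The barycenter approximation at the heart of this scheme can be sketched as follows (a minimal Python sketch illustrating the idea, not the authors' Java implementation): each WSPD pair contributes one FR-style repulsive force between set barycenters, weighted by set size.

```python
# Approximate the repulsive force on the vertices of set A due to set B
# using only B's barycenter, instead of |A|*|B| pairwise forces.
def barycenter(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def repulsive_displacement(A, B, c=1.0):
    # FR repulsion magnitude is c^2/d; one term per (approximated) vertex
    # of B, applied to every vertex of A, directed away from B.
    (ax, ay), (bx, by) = barycenter(A), barycenter(B)
    dx, dy = ax - bx, ay - by
    d = (dx * dx + dy * dy) ** 0.5
    mag = len(B) * c * c / d
    return (mag * dx / d, mag * dy / d)

A = [(0.0, 0.0), (0.0, 1.0)]
B = [(5.0, 0.0), (5.0, 1.0)]
fx, fy = repulsive_displacement(A, B)
print(fx, fy)  # pushes A away from B, i.e. in the negative x direction
```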
To widen the scope of our study, we included some algorithms implemented in C++ in the Open Graph Drawing Framework (OGDF, www.ogdf.net): GEM, FM\(^3\) (with and without multilevel technique, then we call it FM\(^3\) single) of Hachul and Jünger [11], and FRExact (the exact FR implementation in OGDF). 3 Experimental Results We formulate the following hypotheses which we then test experimentally. (H1) The quality of the drawings produced by FR+WSPD is comparable to that of FRLayout. (H2) On sufficiently large graphs, FR+WSPD is faster than FRLayout. We tested our algorithms on two data sets: (i) the Rome graph collection [14] that contains 11528 undirected connected graphs with 10–100 vertices each, and (ii) 40 random graphs that we generated using the EppsteinPowerLawGenerator [6] in JUNG, which yields graphs whose structure is similar to Web graphs. Our graphs had \(2{,}500, 5{,}000, 7{,}500, \dots , 100{,}000\) vertices and 2.5 times as many edges. We considered only the largest connected component of each generated graph. The experiments were performed on an Intel Xeon CPU with 2.67 GHz and 20 GB RAM running Linux. The computer has 16 cores, but we did not parallelize our code. During our experiments, only one core was operating at close-to-full capacity. To test hypothesis (H1), we compared the quality of the drawings of FRLayout, FR+WSPD, FR+Quad, and FR+Grid. In order to vary as few parameters as possible, we kept the size of the graphs constant in this part of the study. We used all Rome graphs with exactly 100 vertices. These 140 graphs have, on average, 135 edges. We first compared the outputs of FR+WSPD for different values (0.01, 0.1, 1) of the separation constant s. The distribution of the results in the plot was roughly the same, that is, the quality of the drawings did not strongly depend on s. Using \(s=1\) was about 30 % slower than \(s=0.1\) or \(s=0.01\).
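For intuition on the separation constant s varied above, the s-well-separated condition from the WSPD definition can be checked brute-force on small point sets (a sketch of the definition only; the split-tree construction avoids this quadratic cost):

```python
import math

# Two point sets A, B are s-well-separated if the distance between them
# is at least s times the larger of their diameters.
def diameter(points):
    return max((math.dist(p, q) for p in points for q in points), default=0.0)

def set_distance(A, B):
    return min(math.dist(p, q) for p in A for q in B)

def well_separated(A, B, s):
    return set_distance(A, B) >= s * max(diameter(A), diameter(B))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(10.0, 0.0), (10.5, 0.5)]
print(well_separated(A, B, s=2.0))  # True: far apart, small diameters
```

A larger s demands pairs that are further apart relative to their size, which makes the barycenter approximation more accurate but yields more pairs.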
Similarly, FR+Quad has a parameter \(\varTheta \) that controls how finely the given point set is subdivided. Increasing \(\varTheta \) decreases the running time. Our experiments confirmed what Barnes and Hut [1] observed: only values of \(\varTheta \) close to 1 give results of a quality similar to that of the unmodified algorithm. The upper scatterplot in Fig. 1 compares variants of FR based on different speed-up techniques. Compared to FRLayout, FR+Quad is slightly worse in terms of uniformity of edge lengths, and FR+WSPD is slightly worse in terms of edge crossings and lies between FRLayout and FR+Quad in terms of edge lengths. FR+Grid is worse in both measures, especially in the number of edge crossings. Hence, there is support for hypothesis (H1). The lower scatterplot in Fig. 1 compares FR+WSPD to the above-mentioned OGDF algorithms. In terms of uniformity of edge lengths, there are two clear clusters: the two FM\(^3\) variants are better than the rest. In terms of crossings, GEM is best, followed by the FM\(^3\) variants, and then by FR+WSPD, which, surprisingly, is better than FRExact. To test hypothesis (H2), we measured the runtimes of all the algorithms on the Rome graphs (Fig. 2) and the random graphs (Fig. 3). In Java, we only measure the time used by the thread running the force-directed algorithm in our Java Virtual Machine; this eliminates the influence of the garbage collector and the JIT compiler on our measurements. In C++, we used an OGDF method for measuring the CPU time. For each graph size, we display the mean runtime over all graphs of that size. The results are as follows. As expected, FR+WSPD (with \(s=0.1\)) is faster than FRLayout on larger graphs. We were surprised, however, to see that FR+WSPD overtakes FRLayout already around \(n \approx 30\). FR+WSPD also turned out to be faster than FR+Quad (with \(\varTheta =1\)) and than FM\(^3\) by a factor of 1.5 to 3. FR+WSPD and FR+Grid are comparable in speed, and twice as fast as GEM.
Recall, however, that FR+Grid tends to produce more edge crossings (Fig. 1). Concerning the comparison between Java and C++, FRExact (in C++) is roughly four times faster than FRLayout (in Java). Conclusion. Our experiments show that the WSPD-based approach speeds up force-directed graph drawing algorithms such as FR considerably without sacrificing the quality of the drawings. The main feature of the new approach is its simplicity. We plan to combine our approach with multi-level techniques in order to draw much larger graphs.

References

6. Eppstein, D., Wang, J.Y.: A steady state model for graph power laws. In: 2nd International Workshop on Web Dynamics (2002). http://arxiv.org/abs/cs/0204001
10. Gronemann, M.: Engineering the fast-multipole-multilevel method for multicore and SIMD architectures. Master's thesis, Department of Computer Science, TU Dortmund (2009)
12. JUNG: Java Universal Network/Graph Framework. http://jung.sourceforge.net. Accessed 2 September 2015
14. Rome Graphs. http://graphdrawing.org/data.html, http://www.graphdrawing.org/download/rome-graphml.tgz. Accessed 2 September 2015
As some people on this site might be aware I don't always take downvotes well. So here's my attempt to provide more context to my answer for whoever decided to downvote. Note that I will confine my discussion to functions $f: D\subseteq \Bbb R \to \Bbb R$ and to ideas that should be simple enough for anyone who's taken a course in scalar calculus to understand. Let me know if I haven't succeeded in some way. First, it'll be convenient for us to define a new notation. It's called "little oh" notation. Definition: A function $f$ is called little oh of $g$ as $x\to a$, denoted $f\in o(g)$ as $x\to a$, if $$\lim_{x\to a}\frac {f(x)}{g(x)}=0$$ Intuitively this means that $f(x)$ becomes negligible compared to $g(x)$ as $x\to a$. Here are some examples: $x\in o(1)$ as $x\to 0$ $x^2 \in o(x)$ as $x\to 0$ $x\in o(x^2)$ as $x\to \infty$ $x-\sin(x)\in o(x)$ as $x\to 0$ $x-\sin(x)\in o(x^2)$ as $x\to 0$ $x-\sin(x)\not\in o(x^3)$ as $x\to 0$ Now what is an affine approximation? (Note: I prefer to call it affine rather than linear -- if you've taken linear algebra then you'll know why.) It is simply a function $T(x) = A + Bx$ that approximates the function in question. Intuitively it should be clear which affine function should best approximate the function $f$ very near $a$. It should be $$L(x) = f(a) + f'(a)(x-a).$$ Why? Well consider that any affine function really only carries two pieces of information: slope and some point on the line. The function $L$ as I've defined it has the properties $L(a)=f(a)$ and $L'(a)=f'(a)$. Thus $L$ is the unique line which passes through the point $(a,f(a))$ and has the slope $f'(a)$. But we can be a little more rigorous. Below I give a lemma and a theorem that tell us that $L(x) = f(a) + f'(a)(x-a)$ is the best affine approximation of the function $f$ at $a$.
Lemma: If a differentiable function $f$ can be written, for all $x$ in some neighborhood of $a$, as $$f(x) = A + B\cdot(x-a) + R(x-a)$$ where $A, B$ are constants and $R\in o(x-a)$, then $A=f(a)$ and $B=f'(a)$. Proof: First notice that because $f$, $A$, and $B\cdot(x-a)$ are continuous at $x=a$, $R$ must be too. Then setting $x=a$ we immediately see that $f(a)=A$. Then, rearranging the equation we get (for all $x\ne a$) $$\frac{f(x)-f(a)}{x-a} = \frac{f(x)-A}{x-a} = \frac{B\cdot (x-a)+R(x-a)}{x-a} = B + \frac{R(x-a)}{x-a}$$ Then taking the limit as $x\to a$ we see that $B=f'(a)$. $\ \ \ \square$ Theorem: A function $f$ is differentiable at $a$ iff, for all $x$ in some neighborhood of $a$, $f(x)$ can be written as $$f(x) = f(a) + B\cdot(x-a) + R(x-a)$$ where $B \in \Bbb R$ and $R\in o(x-a)$. Proof: "$\implies$": If $f$ is differentiable then $f'(a) = \lim_{x\to a} \frac{f(x)-f(a)}{x-a}$ exists. This can alternatively be written $$f'(a) = \frac{f(x)-f(a)}{x-a} + r(x-a)$$ where the "remainder function" $r$ has the property $\lim_{x \to a} r(x-a)=0$. Rearranging this equation we get $$f(x) = f(a) + f'(a)(x-a) -r(x-a)(x-a).$$ Let $R(x-a):= -r(x-a)(x-a)$. Then clearly $R\in o(x-a)$ (confirm this for yourself). So $$f(x) = f(a) + f'(a)(x-a) + R(x-a)$$ as required. "$\impliedby$": Simple rearrangement of this equation yields $$B + \frac{R(x-a)}{x-a}= \frac{f(x)-f(a)}{x-a}.$$ The limit as $x\to a$ of the LHS exists and thus the limit also exists for the RHS. This implies $f$ is differentiable by the standard definition of differentiability. $\ \ \ \square$ Taken together the above lemma and theorem tell us that not only is $L(x) = f(a) + f'(a)(x-a)$ the only affine function whose remainder tends to $0$ as $x\to a$ faster than $x-a$ itself (this is the sense in which this approximation is the best), but also that we can even define the concept of differentiability by the existence of this best affine approximation.
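A quick numerical sanity check of the theorem (a sketch with $f = \sin$ and $a = 0$): the remainder of the best affine approximation satisfies $R(h)/h \to 0$, i.e. $R \in o(x-a)$.

```python
import math

# For f = sin, a = 0: L(x) = f(a) + f'(a)(x - a) = x, and the remainder
# R(h) = f(a + h) - L(a + h) should vanish faster than h itself.
f, df, a = math.sin, math.cos, 0.0

def remainder_ratio(h):
    R = f(a + h) - (f(a) + df(a) * h)
    return R / h

for h in (1e-1, 1e-2, 1e-3):
    print(h, remainder_ratio(h))  # ratio shrinks roughly like h^2/6
```

Since $\sin h - h \approx -h^3/6$ near $0$, the printed ratios decrease by about two orders of magnitude per step, exactly the $o(h)$ behavior the lemma demands.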