Thompson's group $V$, which is a finitely presented infinite simple group, consists of all homeomorphisms of the Cantor set $\{0,1\}^\mathbb N$ that can be described in the following way. A prefix code is a collection of finite words none of which is a prefix of another. A finite prefix code $C$ is maximal if it is not contained in another prefix code. Equivalently, it is maximal if removing the vertices of the binary tree corresponding to $C$ disconnects it. An element of Thompson's group is given by specifying two maximal prefix codes $C_1=\{u_1,\ldots, u_n\}$ and $C_2=\{v_1,\ldots, v_n\}$ of the same size and a permutation $\sigma\in S_n$. The action of the corresponding element $f_{C_1,C_2,\sigma}$ on $x\in \{0,1\}^\mathbb N$ is as follows. There is a unique factorization $x=u_iz$ with $u_i\in C_1$. Then $f_{C_1,C_2,\sigma}(x) = v_{\sigma(i)}z$. Note that the prefix codes $C_1,C_2$ are not uniquely determined by the element of $V$. Let $X=\{0,1\}^\mathbb N$ and let $R$ be an equivalence relation such that $Y=X/R$ is compact Hausdorff and the projection $X\to Y$ is $V$-equivariant. Suppose that $R$ is not the equality relation and has more than one equivalence class. Note that $R$ must be closed as a subset of $X\times X$ by Hausdorffness. Since the complement of $R$ in $X\times X$ is non-empty and open, it contains a basic open set of the form $u\{0,1\}^\mathbb N\times v\{0,1\}^\mathbb N$; that is, there are finite words $u,v$ such that $ux$ and $vy$ are never equivalent for any infinite words $x,y$. By extending the shorter of the two words, we may assume $|u|=|v|=m$ (where $|\cdot|$ is the length). Since $R$ is not the equality relation we can find two distinct infinite words $x_1,x_2$ which are equivalent under $R$. I claim we can find an element $g\in V$ such that $g(x_1)\in u\{0,1\}^\mathbb N$ and $g(x_2)\in v\{0,1\}^\mathbb N$. This will contradict the $V$-equivariance of $X\to Y$: since $x_1$ and $x_2$ are equivalent, equivariance forces $g(x_1)$ and $g(x_2)$ to be equivalent as well. Let $n$ be the length of the longest common prefix of $x_1,x_2$. Let $k>\max\{n,m\}$ and let $C$ be the maximal prefix code consisting of all words of length $k$. Then $x_1,x_2$ have different prefixes $p,q$ of length $k$. Also, since $k>m$, we can find $p',q'\in C$ such that $u$ is a prefix of $p'$ and $v$ is a prefix of $q'$. There is an element $g=f_{C,C,\sigma}\in V$ such that $g$ takes all infinite words $px$ to $p'x$ and $qx$ to $q'x$. It follows that $g(x_1)$ and $g(x_2)$ are not equivalent under $R$, a contradiction.
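For concreteness, here is a minimal Python sketch of this action. The function name and the representation of a point of the Cantor set by a sufficiently long finite prefix are my own illustrative choices, not standard notation:

```python
def thompson_action(C1, C2, sigma, x):
    """Apply f_{C1,C2,sigma} to x, where x is a sufficiently long finite
    prefix (a string of '0'/'1') of an infinite binary word, C1 and C2 are
    maximal prefix codes of equal size, and sigma is a permutation given
    as a list of indices."""
    for i, u in enumerate(C1):
        if x.startswith(u):                    # unique factorization x = u_i z
            return C2[sigma[i]] + x[len(u):]   # f maps u_i z to v_{sigma(i)} z
    raise ValueError("x is shorter than the longest word of C1")

# Example with C1 = {0, 10, 11}, C2 = {00, 01, 1} and sigma the identity:
print(thompson_action(["0", "10", "11"], ["00", "01", "1"], [0, 1, 2], "1101001"))
# prints "101001": the prefix 11 was replaced by 1, and the tail is unchanged
```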
Padmaja, N and Ramakumar, S and Viswamitra, MA (1988) Structure of Puromycin Aminonucleoside. In: Acta Crystallographica, Section C: Crystal Structure Communications, C44 (12). pp. 2176-2178.

Abstract: 3'-Amino-3'-deoxy-N,N-dimethyladenosine, $C_{12}H_{18}N_6O_3$, $M_r = 294.3$, monoclinic, $P2_1$, $a = 4.684(1)$, $b = 10.252(1)$, $c = 14.381(3)$ Å, $\beta = 91.16(2)^\circ$, $V = 690.49$ Å$^3$, $Z = 2$, $D_x = 1.39$ Mg m$^{-3}$, $\lambda$(Cu K$\alpha$) $= 1.5418$ Å, $\mu = 0.84$ mm$^{-1}$, $F(000) = 312$, $T = 295$ K, $R = 0.045$ for 986 observed reflections with $I > 1.5\sigma(I)$. The adenine base is unprotonated at N(1) and is dimethylated at N(6). The nucleoside has a 3'-deoxy-3'-aminofuranosyl ribose sugar. The base is in the anti conformation ($\chi = 2.3^\circ$) with respect to the furanosyl ring, which shows a C(3')-endo-C(2')-exo pucker $(_2T^3)$. O(5') is disordered. The structure of the compound is similar to that of the nucleoside portion of the antibiotic puromycin.

Item Type: Journal Article. Additional Information: Copyright of this article belongs to International Union of Crystallography. Department/Centre: Division of Physical & Mathematical Sciences > Physics. URI: http://eprints.iisc.ac.in/id/eprint/13142
$L^1$ estimates for oscillating integrals and their applications to semi-linear models with $\sigma$-evolution like structural damping

1. School of Applied Mathematics and Informatics, Hanoi University of Science and Technology, No.1 Dai Co Viet road, Hanoi, Vietnam
2. Faculty for Mathematics and Computer Science, TU Bergakademie Freiberg, Prüferstr. 9, 09596, Freiberg, Germany

The paper studies semi-linear models with $\sigma$-evolution like structural damping
$$u_{tt}+ (-\Delta)^\sigma u+ \mu (-\Delta)^\delta u_t = f(u, u_t), \quad u(0, x) = u_0(x), \quad u_t(0, x) = u_1(x)$$
with $\sigma \ge 1$, $\mu>0$ and $\delta \in (\frac{\sigma}{2}, \sigma]$, treating both the case $\delta \in (\frac{\sigma}{2}, \sigma)$ and the case $\delta = \sigma$, for nonlinearities $f(u, u_t)$ of the form $|u|^{p}$ or $|u_t|^{p}$ with $p>1$, within $L^q$-$L^{m}$ frameworks for $q\in (1, \infty)$ and $m\in [1, q)$.

Keywords: Structural damped $\sigma$-evolution equations, visco-elastic equations, $\sigma$-evolution like models, oscillating integrals, global existence, Gevrey smoothing.

Mathematics Subject Classification: Primary: 35B40, 35L76; Secondary: 35R11.

Citation: Tuan Anh Dao, Michael Reissig. $L^1$ estimates for oscillating integrals and their applications to semi-linear models with $\sigma$-evolution like structural damping. Discrete & Continuous Dynamical Systems - A, 2019, 39 (9): 5431-5463. doi: 10.3934/dcds.2019222
One of my favorite data sources is the NYC OpenData site. A while back I noticed a really interesting data set on that site, and I've been mulling over what exactly to do with it for a while. The data set in question is the 2015 Street Tree Census. I knew there was some interesting question that this data could answer, I just had to think of it (or ask my girlfriend to think of it for me).

In this blog post I hope to introduce you to the powerful and simple Metropolis-Hastings algorithm. This is a common algorithm for generating samples from a complicated distribution using Markov chain Monte Carlo, or MCMC. By way of motivation, remember that Bayes' theorem says that given a prior \(\pi(\theta)\) and a likelihood of the data, \(f(x \mid \theta)\), we can calculate $$ \pi(\theta \mid x) = \frac{f(x \mid \theta) \pi(\theta)}{\int f(x \mid \theta) \pi(\theta) \; \mathrm{d}\theta}. $$ We call \(\pi(\theta \mid x)\) the posterior distribution. It's this distribution that we use to make estimates and inferences about the parameter \(\theta\). Often, however, this distribution is intractable; it can't be calculated directly. This usually happens because of a nasty integral in the denominator that isn't easily solvable. In cases like this, the best solution is usually to approximate the posterior using a Monte Carlo method (a minimal sampler sketch appears at the end of this excerpt).

(This is a companion to my post on Paul Gronke's earlyvoting.net) One of the first assignments we had in my Election Sciences course was to take a look at registration data from the Oregon Motor Voter program and try to find interesting patterns. For those who don't know, Oregon Motor Voter is an automatic voter registration program in Oregon. Whenever someone interacts with the Oregon DMV, their voter eligibility is automatically checked, and if they are eligible to vote but not registered, they are automatically added to the rolls.

Looking at the spread of a measles epidemic using social networks.
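Here is the promised Metropolis-Hastings sketch in Python. The target (a standard normal) and all names are illustrative; the key point is that the sampler only ever uses ratios of the unnormalized posterior, so the intractable normalizing integral never has to be computed:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=1.0, rng=None):
    """Random-walk Metropolis sampler: a minimal sketch.

    log_target: log of the unnormalized posterior f(x|theta)*pi(theta).
    Only differences of log_target appear below, so the normalizing
    constant cancels out of the acceptance ratio."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + step * rng.standard_normal()
        # Accept with probability min(1, target(proposal) / target(x)).
        if np.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

# Toy example: sample from a standard normal "posterior".
draws = metropolis_hastings(lambda t: -0.5 * t**2, x0=0.0, n_samples=10_000)
print(draws.mean(), draws.std())  # roughly 0 and 1
```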
This is exercise 2 of the lecture notes by Jeff Erickson on decision tree lower bounds. We say that an array $A[1 \ldots n]$ is $k$-sorted if it can be divided into $k$ blocks, each of size $n/k$ (we assume that $n/k$ is an integer), such that the elements in each block are larger than the elements in earlier blocks and smaller than elements in later blocks. The elements within each block need not be sorted. (a) Describe an algorithm that $k$-sorts an arbitrary array in $O(n \log k)$ time. (b) Prove that any comparison-based $k$-sorting algorithm requires $\Omega(n \log k)$ comparisons in the worst case. (c) Describe an algorithm that completely sorts an already $k$-sorted array in $O(n \log(n/k))$ time. (d) Prove that any comparison-based algorithm to completely sort a $k$-sorted array requires $\Omega(n \log(n/k))$ comparisons in the worst case. The first problem can be solved by modifying the quicksort algorithm. However, I am stuck with the second problem which asks for a lower bound. My attempt is to use the decision tree technique. I think the number of leaves of the decision tree is at least $((\frac{n}{k})!)^{k}$. Therefore, the height of the decision tree must be $$ H \ge \log ((\frac{n}{k})!)^{k} = k \log (\frac{n}{k})! = \Omega(k \frac{n}{k} \log \frac{n}{k}) = \Omega(n \log \frac{n}{k}),$$ which is not the desired result $\Omega(n \log k)$. What is wrong with my argument? And how to establish the lower bound $\Omega(n \log k)$?
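Not part of the original question, but for part (a) the "modified quicksort" idea can be sketched in Python as follows: split the array around its median so that each half needs only $k/2$ blocks, giving expected $O(n \log k)$ time. The sketch assumes $k$ is a power of two dividing $n$, and the function names are mine:

```python
import random

def ksort(a, k):
    """k-sort a in expected O(n log k) time: split around the median so
    each half needs only k/2 blocks (depth log k, O(n) work per level)."""
    if k <= 1:
        return a
    mid = len(a) // 2
    a = partition_at(a, mid)                 # now max(a[:mid]) <= min(a[mid:])
    return ksort(a[:mid], k // 2) + ksort(a[mid:], k // 2)

def partition_at(a, m):
    """Rearrange a so its m smallest elements come first (expected O(n),
    quickselect-style recursion into one side only)."""
    if len(a) <= 1 or m <= 0 or m >= len(a):
        return a
    pivot = random.choice(a)
    lo = [v for v in a if v < pivot]
    eq = [v for v in a if v == pivot]
    hi = [v for v in a if v > pivot]
    if m <= len(lo):
        return partition_at(lo, m) + eq + hi
    if m <= len(lo) + len(eq):
        return lo + eq + hi                  # split point falls inside eq
    return lo + eq + partition_at(hi, m - len(lo) - len(eq))

a = list(range(16)); random.shuffle(a)
b = ksort(a, 4)
print([sorted(b[i:i+4]) for i in range(0, 16, 4)])
# each block of 4 holds the right elements, though not internally sorted
```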
Let $\Delta\subseteq\mathbb R^2$ denote the triangle spanned by $(0,0)$, $(1,0)$ and $(0,1)$ and $$\mathbb P_r(\Delta):=\left\{p:\Delta\to\mathbb R\mid p(x)=\sum_{|\alpha|\le r}\lambda_\alpha x^\alpha\text{ for all }x\in\Delta\text{ for some }(\lambda_\alpha)_{|\alpha|\le r}\subseteq\mathbb R\right\}$$ for $r\in\mathbb N_0$. I'm numerically solving a PDE on a rectangle $\Lambda=(0,a)\times(0,b)$ triangulated using quadratic Lagrange finite elements as depicted in the following figure: In the assembly of the linear system, I need to compute integrals of the form $$\int_\Delta(u^0\circ F_{\tilde\Delta})\psi\;,\tag1$$ where $u^0$ is the solution of the previous time step, $F_{\tilde\Delta}$ is the transformation from a finite element $\tilde\Delta$ to $\Delta$ and $\psi\in\mathbb P_2(\Delta)$. How should I compute these integrals? By definition of the finite element space, $\left.u^0\right|_{\tilde\Delta}\in\mathbb P_2(\tilde\Delta)$. So, the integrand in $(1)$ belongs to $\mathbb P_4(\Delta)$. That's why I guess I should use a quadrature scheme which exactly integrates $\mathbb P_4(\Delta)$-functions. However, since I only know the values of $u^0\circ F_{\tilde\Delta}$ at $a^0:=(0,0)$, $a^1:=(1,0)$, $a^2:=(0,1)$, $a^3:=(1/2,1/2)$, $a^4:=(0,1/2)$ and $a^5:=(1/2,0)$, the Gaussian quadrature ansatz $$\int_\Delta f\approx\frac12\sum_{i=0}^5w_if(a^i)\tag2$$ is somehow limited. In fact, with these $a^i$ it's only possible to exactly integrate $\mathbb P_2(\Delta)$-functions (and we have $w_0=w_1=w_2=0$, $w_3=w_4=w_5=1/3$). So, is there no possibility to do better? Please suppose that we have a fixed mesh (i.e. no adaptive refinement) we need to deal with.
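As a quick check on the last claim, the following SymPy sketch (illustrative only, not part of any assembly routine) verifies that the edge-midpoint weights $w_3=w_4=w_5=1/3$ in $(2)$ integrate $\mathbb P_2(\Delta)$-functions exactly but fail already on $x^4\in\mathbb P_4(\Delta)$:

```python
import sympy as sym

x, y = sym.symbols('x y')
half = sym.Rational(1, 2)
midpoints = [(half, half), (0, half), (half, 0)]   # a^3, a^4, a^5 above

def quad_rule(f):
    # Rule (2) with w0 = w1 = w2 = 0 and w3 = w4 = w5 = 1/3:
    return half * sum(sym.Rational(1, 3) * f.subs({x: px, y: py})
                      for px, py in midpoints)

def exact(f):
    # Exact integral over the reference triangle {x >= 0, y >= 0, x + y <= 1}
    return sym.integrate(sym.integrate(f, (x, 0, 1 - y)), (y, 0, 1))

for f in [sym.Integer(1), x, x*y, x**2, x**4]:
    print(f, exact(f), quad_rule(f), sym.simplify(exact(f) - quad_rule(f)) == 0)
# exact up to degree 2 (and, here, also on x*y and x**2), but 1/30 vs 1/48 for x**4
```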
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI, (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca... I am a bit confused about angular momentum in classical physics. For an orbital motion of a point mass: if we pick a new coordinate system (that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved in the new coordinates, with an additional term that varies with time): $\vec {L'}=\vec{r'} \times \vec{p'}=(\vec{R}+\vec{r}) \times \vec{p}=\vec{R} \times \vec{p} + \vec L$, where the first term varies with time (here $\vec R$ is the constant shift of the coordinate origin, and $\vec p$ is, roughly, rotating). Would anyone be kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet. Is it possible to ever make a time machine? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one.
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$ Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape but this leads me to another question I forgot ages ago If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
I'm making a demonstration cryptosystem using ECC ElGamal. I've currently got a working implementation of Edwards curve operations and a basic ElGamal implementation (encrypts only points on the curve), and in order to perform the mapping operation described in this answer, I need to determine the $y$-coordinate of a point $g$ on an Edwards curve, given its $x$-coordinate. I have a basic understanding of the way an Edwards curve works, but the finite fields are a bit much for me to be confident in properly implementing this based on intuition. Any help is appreciated. The equation for an Edwards curve is $x^2 + y^2 = 1 + dx^2y^2$. Assuming you know the curve parameters then, given that you know $x$, you are solving the above equation for one unknown, namely $y$. We can rewrite this as a quadratic equation in one unknown and solve it. Note that all operations are $\bmod{p}$ where $p$ is the characteristic of the field the curve is defined over (remember that dividing by $z$ is equivalent to multiplying by $z^{-1}\bmod{p}$, square roots also have different rules). $$x^2 + y^2 = 1 + dx^2y^2$$ $$y^2 - dx^2y^2 + x^2 - 1 = 0$$ $$(1 - dx^2)y^2 + (x^2 - 1) = 0$$ Now let $a = (1 - dx^2)$, $b = 0$, and $c = (x^2 - 1)$ (so we have $ay^2 + by + c = 0$ i.e. a quadratic equation). Note that you can also move some terms around in the upper equation and end up at the same result. $$y = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$ $$y = \frac{-0 \pm \sqrt{0 - 4(1 - dx^2)(x^2 - 1)}}{2(1 - dx^2)}$$ $$y = \frac{\pm \sqrt{-4(1 - dx^2)(x^2 - 1)}}{2(1 - dx^2)}$$ $$y = \pm \sqrt{\frac{1 - x^2}{1 - dx^2}}$$ Now substitute in $x$ and $d$. Note that there are two solutions to the equation. Without your $x$-coordinate having some sort of encoding that indicates which $y$ value should be used, it is impossible to recover the "correct" $y$, as both values result in distinct valid points.
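A small Python sketch of this recovery, under the assumption that the field prime satisfies $p \equiv 3 \pmod 4$, in which case a square root of $a$ (when one exists) is $a^{(p+1)/4} \bmod p$; for general $p$ one would use Tonelli-Shanks instead. The toy parameters below are illustrative only, not a real curve choice:

```python
def recover_y(x, d, p):
    """Solve y^2 = (1 - x^2)/(1 - d*x^2) mod p; assumes p is prime and
    p % 4 == 3, so a square root of a (when it exists) is a^((p+1)/4) mod p."""
    num = (1 - x * x) % p
    den = (1 - d * x * x) % p
    a = num * pow(den, -1, p) % p       # division = multiplication by inverse
    y = pow(a, (p + 1) // 4, p)         # candidate square root
    if y * y % p != a:
        raise ValueError("no curve point has this x-coordinate")
    return y, (-y) % p                  # the two solutions +/- y

# Toy parameters (illustrative, not a cryptographically sound curve):
p, d = 1019, 123
for x0 in range(1, 20):                 # print the first x that lies on the curve
    try:
        print(x0, recover_y(x0, d, p))
        break
    except ValueError:
        pass
```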
In the Heisenberg picture, I can define the velocity operator $\hat{V}$ as the operator which satisfies $\hat{V}(t) = \frac{\partial \hat{x}}{\partial t}(t)$ for all $t$. The Heisenberg equation then tells me that $ \hat{V}(t) = \frac{i}{\hbar}[\hat{H}(t), \hat{x}(t)]$. If I want to calculate the time evolution of the velocity operator, a straightforward approach yields: $$\dot{\hat{V}} = \frac{i}{ \hbar}(\dot{\hat{H}}\hat{x}-\hat{x}\dot{\hat{H}} + \hat{H}\dot{\hat{x}} -\dot{\hat{x}}\hat{H} ) = \frac{i}{ \hbar}(\dot{\hat{H}}\hat{x}-\hat{x}\dot{\hat{H}} +[\hat{H}(t), \hat{V}(t)]) $$ What I find strange about that is that the time evolution of $\hat V$ is no longer described by the Heisenberg equation, but instead by some more complicated form, at least if $\dot{\hat{H}}$ doesn't commute with $\hat{x}$. I see problems arising, since I now can't in general make statements on the time evolution of functions $\hat{f}(\hat{x}, \hat{V})$, like for example the Lagrangian $\hat{L}(\hat{x}, \hat{V}, t)$. Since the time evolution of $\hat{V}$ isn't given by the Heisenberg equation, the time evolution of for example $\hat{p}(t) = \frac{\partial L}{\partial \hat{V}}(\hat{x}(t), \hat{V}(t), t)$ also isn't necessarily given anymore by the Heisenberg equation. I know that this is an unusual approach to the topic, since I assume observables to be functions of $\hat{x}$ and $\hat{V}$, and not functions of $\hat{x}$ and $\hat{p}$, but maybe somebody can still tell me where the mistakes in this derivation are, or if it is an additional assumption that $\dot{\hat{H}}$ and $\hat{x}$ always commute. If I did make a mistake, what should I do differently to still approach the topic from the Lagrangian point of view?
I have spent quite some time trying to solve the following quadratic program: $$\min \sum_{i=1}^n (\frac{1}{2}x_i^TQx_i+c_i^Tx_i), \quad \mathrm{s.t. } \quad x_i\ge 0 \quad \forall i,$$where $n$ is very large and $Q$ is a sparse positive semi-definite matrix (given at the end of this post). Although the sub-problem for each $i$ can be easily solved (numerically), since $n$ is very large, it is infeasible to loop and solve the sub-problems separately. I would like to know if there is any special method to deal with this type of problem. -- The matrix $Q$ in my particular problem is given by $Q=A^TA+B^TB$, where $A$ and $B$ are defined as follows. Denote $e=(1,1,\ldots,1)^T\in\mathbb{R}^d$, then $$A=\begin{bmatrix} e & & & \\ & e & & \\ & & \ddots & \\ & & & e \end{bmatrix}$$ and $$B=\begin{bmatrix}\mathrm{diag}(e) & \mathrm{diag}(e) & \cdots & \mathrm{diag}(e)\end{bmatrix}$$ where $e$ appears $d$ times in $A$ and $d$ times in $B$ (i.e. $A,B\in\mathbb{R}^{d\times d^2}$). For example, if $d=3$ we have: $$A=\begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \end{bmatrix}$$ $$B=\begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}$$ and thus $$Q=\begin{bmatrix} M & I & I\\ I & M & I\\ I & I & M \end{bmatrix}$$ where $$M=\begin{bmatrix} 2 & 1 & 1\\ 1 & 2 & 1\\ 1 & 1 & 2 \end{bmatrix},\quad I = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}.$$ The Matlab code for creating $Q$: d = 16; one = ones(d,1); A = kron(eye(d),one'); B = repmat(diag(one),1,d); Q = A'*A + B'*B; Thank you in advance for any suggestions. Update: according to the comments, it was not clear why I do not want to solve the subproblems separately using a loop, so here are some clarifications. If we do a loop and solve each subproblem at each iteration, then it will take too much time if the number of subproblems is huge. For example, if solving a subproblem takes 1 second, then solving 220512 subproblems will take more than 60 hours. Suppose we use some iterative method to solve the subproblems; then, instead of updating each $x_i$ one after another (using the loop), can we update them simultaneously? We can reformulate the problem as: $$\min\quad 1/2x^T\mathrm{diag}(Q,Q,...,Q)x + c^Tx, \quad \mathrm{s.t. } \quad x\ge 0,$$ where $x=(x_1,...,x_n)$ and $c=(c_1,...,c_n)$ and then use a sparse QP solver. I tried using Matlab's solver (an example is given here), however, the sparse matrix $\mathrm{diag}(Q,Q,...,Q)$ takes ~9GB of memory and thus slows down everything. Moreover, in using (naively) a sparse QP solver, the special structure of $\mathrm{diag}(Q,Q,...,Q)$ is not exploited (although it is considered to be sparse by the solver, I think we can exploit even more than that).
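One possibility, sketched below in Python/NumPy (a sketch, not a drop-in solver): since every subproblem shares the same $Q$, stacking the $x_i$ as columns of a matrix $X$ turns all $n$ projected gradient updates into a single sparse matrix product per iteration, and $Q$ is stored exactly once instead of $n$ times. Because $AA^T = BB^T = dI$, we have $\lambda_{\max}(Q)\le 2d$, so $1/(2d)$ is a safe step size for this convex problem:

```python
import numpy as np
import scipy.sparse as sp

d = 16
one = np.ones((1, d))
A = sp.kron(sp.identity(d), one)            # same A as the Matlab snippet
B = sp.hstack([sp.identity(d)] * d)         # same B
Q = (A.T @ A + B.T @ B).tocsr()             # one sparse d^2 x d^2 copy of Q

n = 1000                                    # toy number of subproblems
rng = np.random.default_rng(0)
C = rng.standard_normal((d * d, n))         # column i holds c_i

# Projected gradient descent on all columns at once, step 1/(2d) <= 1/lambda_max(Q):
X = np.zeros((d * d, n))
step = 1.0 / (2 * d)
for _ in range(500):
    X = np.maximum(X - step * (Q @ X + C), 0.0)   # gradient step + projection

print(0.5 * np.sum(X * (Q @ X)) + np.sum(C * X))  # total objective value
```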
Image Denoising and Other Multidimensional Variational Problems

We previously discussed how to solve 1D variational problems with the COMSOL Multiphysics® software and implement complex domain and boundary conditions using a unified constraint enforcement framework. Here, we extend the discussion to multiple dimensions, higher-order derivatives, and multiple unknowns with what we hope will be an enjoyable example: variational image denoising. We conclude this blog series on variational problems with some recommendations for further study.

Variational Problems in Higher Spatial Dimensions

We have considered 1D problems and looked for a function u(x) that minimizes a functional of the form

E(u) = \int_a^b F(x,u,u')\,dx.

Now, let's consider higher spatial dimensions while limiting ourselves to first-order derivatives. Consider the variational problem of minimizing

E(u) = \int_\Omega F(\mathbf{x},u,\nabla u)\,d\Omega,

defined over a fixed domain \Omega. For a neighboring function, u+\epsilon\hat{u}, the functional becomes a function of the scalar parameter \epsilon. As with the 1D case, the necessary first-order optimality condition is

\frac{d}{d\epsilon}E(u+\epsilon\hat{u})\Big|_{\epsilon=0} = 0,

which means

\int_\Omega \left(\frac{\partial F}{\partial u}\,\hat{u}+\frac{\partial F}{\partial(\nabla u)}\cdot\nabla\hat{u}\right)d\Omega = 0.

To obtain variational derivatives, we have been forming what is essentially a function of the scalar parameter \epsilon and using single-variable calculus methods. A quicker formal way is to use the variational derivative notation, as in

\delta E = \int_\Omega \left(\frac{\partial F}{\partial u}\,\delta u+\frac{\partial F}{\partial(\nabla u)}\cdot\delta(\nabla u)\right)d\Omega

for fixed domains. Here, \delta u corresponds to \hat{u} in our previous notation. Notice that since the domain is fixed, we consider a variation of the function and its derivatives only. If the domain can vary with the solution, the variational derivative will have an extra contribution coming from the boundary variation.

Image Denoising: A Multidimensional Variational Problem

In recent decades, variational methods have yielded powerful and rigorous techniques for image processing, such as denoising, deblurring, inpainting, and segmentation. Today, we will use denoising as an example. One technique for image denoising is called total variation minimization. Say you have image data, u_o, that has been corrupted by noise. The image has speckles (sometimes called "salt-and-pepper noise"). You want to recover as much of the original image given by u as possible. As such, the image has to be denoised. An image with erroneous details will have high variation; therefore, the denoised image should have as little variation as possible. A model after Tikhonov measures this total variation as

\int_\Omega \nabla u\cdot\nabla u\,d\Omega.

Minimizing just the total variation will aggressively smooth and return a solution close to a constant, losing legitimate details in the image. To prevent this issue, we also want to minimize the difference between the input data, u_o, and the solution, u, given by a fidelity term

\int_\Omega (u-u_o)^2\,d\Omega.

Now that we have multiple, potentially conflicting objectives, let's attempt a compromise by introducing the functional

E(u) = \int_\Omega \left(\nabla u\cdot\nabla u+\mu\,(u-u_o)^2\right)d\Omega,

where the regularization parameter \mu determines the emphasis on detail versus denoising (this is a user-specified positive number). We are now ready to derive the first-order optimality condition, as discussed in the previous section. Requiring a vanishing variational derivative, we get

\int_\Omega \left(2\nabla u\cdot\nabla\delta u+2\mu\,(u-u_o)\,\delta u\right)d\Omega = 0.

To demonstrate this process, we import an image into COMSOL Multiphysics and add a random noise to corrupt it. Here is what we get after ruining an image of a goose provided by my colleague:

A test image deliberately corrupted by random noise.

The weak form above is given in vector notation. For use in computation, let's write it out in Cartesian coordinates and leave out the common factor 2. This can be entered into the COMSOL® software as shown in the screenshot below.
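As an aside, before continuing with the COMSOL setup: the same Tikhonov gradient flow is easy to prototype in a few lines of NumPy. This is a minimal sketch on a synthetic image; the grid spacing, boundary handling, and parameter values are illustrative choices, not those of the COMSOL model:

```python
import numpy as np

def tikhonov_denoise(u0, mu=0.1, tau=0.2, iters=200):
    """Gradient descent on the discrete Tikhonov functional
    sum(|grad u|^2 + mu*(u - u0)^2): each step moves against the
    variational derivative -2*lap(u) + 2*mu*(u - u0), with the
    common factor 2 absorbed into the step size tau."""
    u = u0.copy()
    for _ in range(iters):
        up = np.pad(u, 1, mode='edge')                 # replicate boundary data
        lap = (up[:-2, 1:-1] + up[2:, 1:-1] +
               up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u)  # 5-point Laplacian
        u += tau * (lap - mu * (u - u0))
    return u

rng = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0   # synthetic test "image"
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
print(np.abs(tikhonov_denoise(noisy) - clean).mean() <
      np.abs(noisy - clean).mean())                     # denoising reduces the error
```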
We keep the data on the edge as is by using the Dirichlet boundary condition u=u_o on all boundaries, and we use a regularization parameter of 1e6. In the simpler algorithms, the regularization parameter is determined by trial and error. We increase the parameter if the resulting image is missing relevant details and reduce it if the result is deemed too noisy.

Specifying a variational problem in 2D.

Below is the denoised image, shown along with the original image; not bad for a rudimentary model!

Denoised image (left) and original image (right).

The Tikhonov regularization smooths too much and does not preserve geometric features such as edges and corners. The so-called ROF model, with a functional given by

E(u) = \int_\Omega \left(|\nabla u|+\mu\,(u-u_o)^2\right)d\Omega,

does better in this regard. The first-order necessary condition for optimality, obtained by setting the variational derivative to zero, as done so far, is

\int_\Omega \left(\frac{\nabla u\cdot\nabla\delta u}{|\nabla u|}+2\mu\,(u-u_o)\,\delta u\right)d\Omega = 0.

Notice that the use of the absolute value in the functional results in a highly nonlinear problem. Also, the denominator |\nabla u| can go to zero in numerical iterations, giving a division-by-zero error. To prevent this, we can add a small positive number to it. In COMSOL Multiphysics, we often use the floating-point relative accuracy eps, which is about 2.2204 \times 10^{-16}. The ROF model preserves edges but reportedly causes the so-called "staircasing effect". Including higher-order derivatives in the functional helps avoid this.

Variational Problems with Higher-Order Derivatives

High-fidelity image processing and other subjects involve variational problems with high-order derivatives. A traditional subject in this respect is the analysis of elastic beams and plates. For example, in Euler beam theory, a beam with Young's modulus E and cross-sectional moment of inertia I, loaded with lateral load f, bends to minimize the total potential energy

\Pi(u) = \int_0^L \left(\frac{EI}{2}\,(u'')^2-fu\right)dx.

For small deformations, the analysis neglects the change of the domain, thus the variational formulation is to find u such that

\int_0^L \left(EI\,u''\,\delta u''-f\,\delta u\right)dx = 0.

Notice that we do not differentiate the Young's modulus in the variational derivative because we are considering a linearly elastic material where material properties are independent of deformation. For nonlinear materials, the contribution to the variational derivative from material properties has to be included. This functionality is built into the Nonlinear Structural Materials Module, an add-on product to COMSOL Multiphysics. The inclusion of higher-order derivatives in the functional does not introduce any conceptual change to the way we find the first-order optimality criteria, but it does have computational implications. Our finite element interpolation functions, picked in the Discretization section of the Weak Form PDE Settings window, need to be of at least the same polynomial order as the highest-order spatial derivative in the variational form. For example, for the beam problem, there is a second-order derivative in the variational form, thus we cannot pick a linear shape function. If we did, all of the second derivatives in our equation would uniformly vanish. We have to use quadratic or higher-order shape functions.

Variational Problems with Multiple Unknowns

So far, we have considered minimizing functionals that depend on only one unknown, u. Often, the functionals contain more than one unknown. Say we have a second unknown, v, and a functional

E(u,v) = \int_\Omega F(\mathbf{x},u,\nabla u,v,\nabla v)\,d\Omega.

The first variation of this functional is

\delta E = \int_\Omega \left(\frac{\partial F}{\partial u}\,\delta u+\frac{\partial F}{\partial(\nabla u)}\cdot\delta(\nabla u)+\frac{\partial F}{\partial v}\,\delta v+\frac{\partial F}{\partial(\nabla v)}\cdot\delta(\nabla v)\right)d\Omega.

Here, we assume the two fields, u and v, are independent of each other and, as such, we can take independent variations in both variables.
Sometimes, this is not the case: There can be a constraint between the variables. The easiest constraint between variables is when one is given explicitly in terms of the other, as in v = g(u). In such cases, we can opt to eliminate v from the problem by considering the functional

\tilde{E}(u) = E(u,g(u)).

Generally, we have constraints of the form g(u,v,\dotsc)=0 and we cannot algebraically invert this expression. In such cases, we use the techniques considered in Part 3 of the blog series, which is about constraint enforcement. For example, for the Lagrange multiplier method, we have an augmented functional given by

L(u,v,\dotsc,\lambda) = E(u,v,\dotsc)+\int_\Omega \lambda\,g(u,v,\dotsc)\,d\Omega,

where \lambda = \lambda(x,y,z) for a distributed constraint or where \lambda is a constant for a global constraint. We use the same Weak Form PDE interface to specify such multifield problems. The question is: Do we use a single interface or an interface for each unknown? It depends on the relationship between the unknowns. On one hand, if u and v are different components of the same physical vector, such as displacement or velocity, we can use the same interface and specify the number of dependent variables in the Dependent Variables section. On the other hand, if u represents temperature and v represents electric potential, we can employ different PDE interfaces. If, for some compelling reason, we have to use different discretizations or scales for different components of the same vector unknown, we can use multiple interfaces just as well. When we have multiple unknowns, we have to carefully choose the interpolation functions for each field (as discussed in a previous blog post on element orders in multiphysics models).

Concluding Remarks

Variational methods provide a unified framework to model a plethora of scientific problems. The Weak Form PDE interface, included in COMSOL Multiphysics, enables you to extend the functionality of the COMSOL® software by bringing in your own variational problems. In this series, we have demonstrated this power by solving problems that include finding the shape of soap films, planning paths for a hiker around a lake, and repairing corrupted images. Bear in mind that several important partial differential equations do not come from minimizing a functional. The Navier-Stokes equation is one example. You can still use the Weak Form PDE interface to solve such problems after deriving their weak forms. An important part of solving variational problems is specifying constraints. The previous three blog posts in this series deal with the mathematical formulation and numerical analysis of constraint enforcement. A warning here is that we have only considered necessary conditions for optimality. Vanishing first-order derivatives are not sufficient for minima. First-order derivatives vanish on maxima as well. In the examples discussed in this series, we consider well-known problems where we know that the solution provides a minimum. When you are working on novel problems, make sure to check the second-order optimality criteria as well as the existence and uniqueness of the minimum (maximum) before cranking up the computation. For this and other more involved analytical and numerical aspects of variational problems, the following references are some of my personal favorites:

A classic text on calculus of variations: I.M. Gelfand and S.V. Fomin (English translation by R.A. Silverman), Calculus of Variations, Dover Publications, Inc., 1963.

More recent texts that discuss several engineering problems: K.W.
Cassel, Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013. J.W. Brewer, Engineering Analysis in Applied Mechanics, Taylor & Francis, 2002.

For constraint enforcement strategies: D.G. Luenberger, Introduction to Linear and Nonlinear Programming, Addison-Wesley Publishing Company, 1973. S.S. Rao, Engineering Optimization: Theory and Practice, John Wiley & Sons Inc., 2009.

Hopefully, this series has given you a taste of modeling variational problems using COMSOL Multiphysics, especially when what you want to model is not already built into the software. Have fun, and feel free to contact us with any questions.
Kakeya problem

Revision as of 13:26, 21 March 2009

Define a Kakeya set to be a subset [math]A\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]a\in{\mathbb F}_3^n[/math] such that [math]a,a+d,a+2d[/math] all lie in [math]A[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math]. Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.

General lower bounds

Trivially, [math]k_n\le k_{n+1}\le 3k_n[/math]. Since the Cartesian product of two Kakeya sets is another Kakeya set, we have [math]k_{n+m} \leq k_m k_n[/math]; this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity. From a paper of Dvir, Kopparty, Saraf, and Sudan it follows that [math]k_n \geq 3^n / 2^n[/math], but this is superseded by the estimates given below.

To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence [math]k_n\gtrsim 3^{(n+1)/2}.[/math]

One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math].

A better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math].
Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus, [math]k_n \ge 3^{6(n-1)/11}.[/math] General upper bounds We have [math]k_n\le 2^{n+1}-1[/math] since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set. This estimate can be improved using an idea due to Ruzsa (seems to be unpublished). Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]r/3+O(\sqrt r)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2r/3+O(\sqrt r)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{r/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: for, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]). Putting all this together, we seem to have [math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math] or [math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math]
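The constructions above are easy to check by brute force for small [math]n[/math]. Here is a short Python sketch (my own, for illustration) verifying that the "missing a 1 or missing a 2" set has size [math]2^{n+1}-1[/math] and is indeed a Kakeya set:

```python
from itertools import product

def is_kakeya(E, n):
    """Check that E contains a line a, a+d, a+2d for every direction d in F_3^n."""
    Eset = set(E)
    def line_in_E(a, d):
        return all(tuple((a[i] + t * d[i]) % 3 for i in range(n)) in Eset
                   for t in (1, 2))
    return all(any(line_in_E(a, d) for a in Eset)
               for d in product(range(3), repeat=n) if any(d))

for n in range(1, 5):
    # Vectors in which the value 1 or the value 2 is missing among the coordinates:
    E = [v for v in product(range(3), repeat=n) if 1 not in v or 2 not in v]
    print(n, len(E) == 2 ** (n + 1) - 1, is_kakeya(E, n))   # sizes 3, 7, 15, 31
```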
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector (Elsevier, 2014-11-10) This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...

Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $pp$ collisions at $\sqrt{s}$ = 8 TeV with the ATLAS detector (Elsevier, 2014-11-10) Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
This post is inspired by a tweet from @standupmaths who sends out puzzles tagged #mathspuzzle irregularly but fairly frequently. I won't put out any spoilers on the blog, but once the solution is out on twitter, I may occasionally muse on these here. It's my first go at both maths blogging and embedding equations into a webpage, so any comments welcome. I'm pretty sure I've gone round the houses on this one, so shortened method / notation welcome (as well as pointing out any errors that I've made, obviously). The problem was stated thus: "#MathsPuzzle: What is the only number 10 →20 that is not a sum of consecutive numbers? (eg. Not 12 because 3 + 4 + 5 = 12)". Tweeters fairly quickly answered 16 and the follow up question (abbreviated) was: how does this generalise? To which the answer is that the only positive integers that cannot be represented as the sum of consecutive integers are the set \(x \in \{2^n\}\) where n is an integer. After finding this result, I tried to express why \(\{2^n\}\) and only \(\{2^n\}\) exhibit this property in 140 characters and failed miserably. So, I decided to blog about it and write the method I followed in my head down, if only to clarify to myself how I came to the result (and whether it was valid). I began by using the result: \[\begin{aligned} \sum_{i=1}^{i=j}{i} = \frac {j \cdot (j+1)}{2} \end{aligned} \] therefore, the sum of integers between j and k is: \[\begin{aligned} \sum_{i=1}^{i=k}{i} \quad – \quad \sum_{i=1}^{i=j}{i} \qquad & = \qquad \frac {k \cdot (k+1)}{2} – \frac {j \cdot (j+1)}{2} \\ & = \qquad \frac {k^2 – j^2 + k – j}{2} \\ & = \qquad \frac {(k+j) \cdot (k – j) + (k – j)}{2} \\ & = \qquad \frac {(k+j+1) \cdot (k-j)}{2} \end{aligned} \] therefore, if x is to be expressed as the sum of some set of consecutive integers, the following must hold: \[\begin{aligned} x \qquad &= \qquad \frac {(k+j+1) \cdot (k-j)}{2} \\ 2x \qquad &= \qquad{(k+j+1) \cdot (k-j)} \qquad (1)\end{aligned} \] We notice that if both \(k\) and \(j\) are odd, or both \(k\) and \(j\) are even, then both \(k + j\) and \(k - j\) are even, therefore \((k+j+1)\) is odd and \((k-j)\) is even. Conversely, if one of \(k\) or \(j\) is even and the other odd, both \((k + j)\) and \((k - j)\) are odd, so \((k+j+1)\) is even and \((k-j)\) is odd. So we may state that if \(x\) is the sum of consecutive integers then we must be able to express \(2x\) as the product of one even and one odd integer. If we next consider any integer as a product of primes, we can say that for any integer \begin{equation}\label{primefactors} i = 2^a \cdot 3^b \cdot 5^c \cdot 7^d \cdots \qquad (2)\end{equation} and so on for all primes. If we set \(i = 2x \) then we can combine (1) and (2) to see that for x to be the sum of consecutive integers, \[\begin{aligned} 2x \qquad &= \qquad {(k+j+1) \cdot (k-j)} \\&= \qquad 2^a \cdot 3^b \cdot 5^c \cdot 7^d \cdots \\ \\ \text{therefore} \\ \\{(k+j+1) \cdot (k-j)}&= \qquad 2^a \cdot 3^b \cdot 5^c \cdot 7^d \cdots \qquad \qquad (3) \end{aligned} \] As all primes are odd except 2, and \((\text{odd}\times\text{odd})\) is always odd, we can say that the right hand side of (3) will always be \((\text{even}\times\text{odd})\) except in the case where \( b = c = d = \cdots = 0 \) where it will be \( 2^a \). However, for \(x\) to be the sum of consecutive integers, this expression is required to be \((\text{even}\times\text{odd})\), which holds in all cases except where \(2x = 2^a \rightarrow x = 2^{a-1} \rightarrow x \in \{2^n\} \)
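A quick brute-force check in Python (my addition, not part of the original post) confirms the conclusion: scanning 1 to 130, the only integers that are not sums of two or more consecutive positive integers are exactly the powers of two.

```python
def is_sum_of_consecutive(x):
    """Brute force: is x the sum of at least two consecutive positive integers?"""
    for j in range(1, x):          # first term of the candidate run
        total = 0
        for k in range(j, x + 1):  # extend the run j, j+1, ..., k
            total += k
            if total == x and k > j:
                return True
            if total >= x:
                break
    return False

print([x for x in range(1, 130) if not is_sum_of_consecutive(x)])
# -> [1, 2, 4, 8, 16, 32, 64, 128], exactly the powers of two; 16 is the only
#    such number between 10 and 20, matching the original puzzle
```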
I am interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1... Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ in which the $x$-th bit is set to 1, exactly $y$ bits are set to 1 including the $x$-th bit, and there are no runs of $k$ consecutive zer... The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we need '$\overline F$ is algebraic over $F$'? That is, if we remove the '$\overline F$ is algebraic over $F$' condition from the definition of algebraic closure, do we get a different result? Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa... @AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works. Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months. Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition or attained from the definition (I don't see how it could be the latter). Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals. I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ... I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side. On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where their definition appears in Hatcher's book?
Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable. Let $g : [0,\frac{1}{2}] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds,$ for all $n ≥ 1.$ Show that $\lim_{n→∞} n!\,g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$. Can you give some hint? My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero. I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of some independent functions of the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0), I get a set of $n$ linear homogeneous equations in the $n$ coefficients. Now instead of directly attempting to solve the equations for the coefficients, I rather look at the secular determinant, which should be zero, since otherwise no nontrivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz, avoiding the necessity to solve for the coefficients. I have trouble formulating the question, but it strikes me that a direct solution of the equations can be circumvented and the values of the functional are instead obtained directly by using the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle. If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome with digitsum(z) = digitsum(x). > Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel. (Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function with Fourier series being divergent at a point. Afterwards, Hermann Amandus Schwarz gave an easier example.) It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
I am struggling with this question: Prove or give a counterexample: If $f : X \to Y$ is a continuous mapping from a compact metric space $X$, then $f$ is uniformly continuous on $X$. Thanks for your help in advance.

The answer is yes: if $f$ is continuous on a compact space then it is uniformly continuous. Let $f: X \to Y$ be continuous, let $\varepsilon > 0$ and let $X$ be a compact metric space. Because $f$ is continuous, for every $x$ in $X$ you can find a $\delta_x$ such that $f(B(\delta_x, x)) \subset B({\varepsilon\over 2}, f(x))$. The balls $\{B(\delta_x, x)\}_{x \in X}$ form an open cover of $X$. So do the balls $\{B(\frac{\delta_x}{2}, x)\}_{x \in X}$. Since $X$ is compact you can find a finite subcover $\{B(\frac{\delta_{x_i}}{2}, x_i)\}_{i=1}^n$. (You will see in a second why we are choosing the radii to be half only.) Now let $\delta_{x_i}' = {\delta_{x_i}\over 2}$. You want to choose a distance $\delta$ such that any two points $x,y$ lie in the same $B(\delta_{x_i}, x_i)$ if their distance is less than $\delta$. How do you do that? Note that now that you have finitely many $\delta_{x_i}'$ you can take the minimum over all of them: $\min_i \delta_{x_i}'$. Consider two points $x$ and $y$ with $d(x,y) < \delta$. Surely $x$ lies in one of the $B(\delta_{x_i}', x_i)$, since they cover the whole space. Now we want $y$ to also lie in $B(\delta_{x_i}, x_i)$, and this is where it comes in handy that we chose a subcover with radii divided by two: if you pick $\delta := \min_i \delta_{x_i}'$, then $y$ will also lie in $B(\delta_{x_i}, x_i)$, because $d(x_i, y) \leq d(x_i, x) + d(x,y) < \frac{\delta_{x_i}}{2} + \min_k \delta_{x_k}' \leq \frac{\delta_{x_i}}{2} + \frac{\delta_{x_i}}{2} = \delta_{x_i}$. Then both $x$ and $y$ map into $B(\frac{\varepsilon}{2}, f(x_i))$, so $d(f(x), f(y)) < \varepsilon$. Hope this helps.

Let $(X, d)$ be a compact metric space, and $(Y, \rho)$ be a metric space. Suppose $f : X \to Y$ is continuous. We want to show that it is uniformly continuous. Let $\epsilon > 0$. We want to find $\delta > 0$ such that $d(x,y) < \delta \implies \rho(f(x), f(y))< \epsilon$. Ok, well since $f$ is continuous at each $x \in X$, there is some $\delta_{x} > 0$ so that $f(B(x, \delta_{x})) \subseteq B(f(x), \frac{\epsilon}{2})$. Now, $\{B(x, \frac{\delta_{x}}{2})\}_{x \in X}$ is an open cover of $X$, so there is a finite subcover $\{B(x_{i}, \frac{\delta_{x_{i}}}{2})\}_{i =1}^{n}$. If we take $\delta := \min_{i} (\frac{\delta_{x_{i}}}{2})$, then we claim $d(x,y) < \delta \implies \rho(f(x), f(y)) < \epsilon$. Why? Well, suppose $d(x,y) < \delta$. Since $x \in B(x_{i}, \frac{\delta_{x_{i}}}{2})$ for some $i$, we get $y \in B(x_{i}, \delta_{x_{i}})$. Why? $d(y, x_{i}) \leq d(y,x) + d(x,x_{i}) < \frac{\delta_{x_{i}}}{2} + \frac{\delta_{x_{i}}}{2} = \delta_{x_{i}}$. Ok, finally, if $d(x,y) < \delta$, then we claim $\rho(f(x), f(y)) < \epsilon$. This is because $\rho(f(x), f(y)) \leq \rho(f(x), f(x_{i})) + \rho(f(x_{i}), f(y)) < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$.
According to the Perron-Frobenius theorem, a real matrix with only positive entries (or one with non-negative entries with a property called irreducibility) will have a unique eigenvector that contains only positive entries. Its corresponding eigenvalue will be real and positive, and will be the eigenvalue with greatest magnitude. I have a situation where I'm interested in such an eigenvector. I'm currently using numpy to find all the eigenvalues, then taking the eigenvector corresponding to the one with largest magnitude. The trouble is that for my problem, when the size of the matrix gets large, the results start to go crazy, e.g. the eigenvector found that way might not have all positive entries. I guess this is due to rounding errors.

Because of this, I'm wondering if there's an algorithm that can give better results by making use of the facts that $(i)$ the matrix has non-negative entries and is irreducible, and $(ii)$ we're only looking for the eigenvector whose entries are positive. Since there are algorithms that can make use of other matrix properties (e.g. symmetry), it seems reasonable to think this might be possible. While writing this question it occurred to me that just iterating $\nu_{t+1} = \frac{A\nu_t}{|A\nu_t|}$ will work (starting with an initial $\nu_0$ with positive entries), but I imagine with a large matrix the convergence will be very slow, so I guess I'm looking for a more efficient algorithm than this. (I'll try it though!) Of course, if the algorithm is easy to implement and/or has been implemented in a form that can easily be called from Python, that's a huge bonus.

Incidentally, in case it makes any kind of difference, my problem is this one. I'm finding that as I increase the matrix size (finding the eigenvector using numpy as described above) it looks like it's converging, but then suddenly starts to jump all over the place. This instability gets worse the smaller the value of $\lambda$.
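The iteration at the end of the question is exactly the power method, and for non-negative irreducible matrices it is the natural baseline; below is a minimal NumPy sketch of my own (not the asker's code). Convergence is geometric with ratio $|\lambda_2/\lambda_1|$, so if it crawls, an Arnoldi-type routine such as scipy.sparse.linalg.eigs(A, k=1, which='LM') is the usual next step, with the returned vector sign-normalized so its entries are positive.

    import numpy as np

    def perron_vector(A, tol=1e-12, max_iter=100_000):
        # Power iteration for the Perron eigenpair of an irreducible
        # non-negative matrix A; returns (eigenvalue, eigenvector).
        n = A.shape[0]
        v = np.full(n, 1.0 / n)              # strictly positive start vector
        lam = 0.0
        for _ in range(max_iter):
            w = A @ v
            lam_new = np.linalg.norm(w, 1)   # 1-norm; equals sum(w) when w >= 0
            w /= lam_new
            if np.linalg.norm(w - v, 1) < tol:
                return lam_new, w
            v, lam = w, lam_new
        return lam, v

    # Example: random positive matrix; the result should have all entries > 0.
    rng = np.random.default_rng(0)
    A = rng.random((50, 50)) + 0.1
    lam, v = perron_vector(A)
    print(lam, v.min() > 0)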
I need to numerically evaluate the integral below: $$\int_0^\infty \mathrm{sinc}'(xr) r \sqrt{E(r)} dr$$ where $E(r) = r^4 (\lambda\sqrt{\kappa^2+r^2})^{-\nu-5/2} K_{-\nu-5/2}(\lambda\sqrt{\kappa^2+r^2})$, $x \in \mathbb{R}_+$ and $\lambda, \kappa, \nu >0$. Here $K$ is the modified Bessel function of the second kind. In my particular case I have $\lambda = 0.00313$, $\kappa = 0.00825$ and $\nu = 0.33$.

I am using MATLAB, and I have tried the built-in functions integral and quadgk, which give me a lot of errors (see below). I have naturally tried numerous other things as well, such as integrating by parts, and summing integrals from $kx\pi$ to $(k+1)x\pi$. So, do you have any suggestions as to which method I should try next?

UPDATE (added questions): I read the paper @Pedro linked to, and I don't think it was too hard to understand. However, I have a few questions: Would it be okay to use $x^k$ as the basis elements $\psi_k$ in the univariate Levin method described? Could I instead just use a Filon method, since the frequency of the oscillations is fixed?

Example code:

    >> integral(@(r) sin(x*r).*sqrt(E(r)),0,Inf)
    Warning: Reached the limit on the maximum number of intervals in use.
    Approximate bound on error is 1.6e+07. The integral may not exist, or
    it may be difficult to approximate numerically to the requested accuracy.
    > In funfun\private\integralCalc>iterateScalarValued at 372
      In funfun\private\integralCalc>vadapt at 133
      In funfun\private\integralCalc at 84
      In integral at 89
    ans = 3.3197e+06
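Since the difficulty is the oscillatory factor, one option is to split it out explicitly so a Fourier-weight quadrature can take the tail. Here is a sketch of mine in Python/SciPy (a MATLAB analogue would need the same decomposition done by hand), using the unnormalized convention $\mathrm{sinc}(u)=\sin(u)/u$, so $\mathrm{sinc}'(u) = \cos(u)/u - \sin(u)/u^2$, and an arbitrary split point a=1.0 that should be tuned:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import kv

    lam, kap, nu = 0.00313, 0.00825, 0.33    # values quoted in the question

    def sqrtE(r):
        s = lam * np.sqrt(kap**2 + r**2)
        return np.sqrt(r**4 * s**(-nu - 2.5) * kv(-nu - 2.5, s))

    def integral_val(x, a=1.0):              # a = ad-hoc split point (tune!)
        def head(r):                         # smooth, non-oscillatory on [0, a]
            u = x * r
            # series guard: sinc'(u) = -u/3 + O(u^3) near u = 0
            dsinc = -u / 3.0 if u < 1e-6 else np.cos(u) / u - np.sin(u) / u**2
            return dsinc * r * sqrtE(r)
        I0, _ = quad(head, 0.0, a)
        # Tail: sinc'(xr) * r = cos(xr)/x - sin(xr)/(x**2 * r), weights split out.
        Ic, _ = quad(lambda r: sqrtE(r) / x, a, np.inf, weight='cos', wvar=x)
        Is, _ = quad(lambda r: -sqrtE(r) / (x**2 * r), a, np.inf, weight='sin', wvar=x)
        return I0 + Ic + Is

    print(integral_val(x=10.0))

With weight='cos' or 'sin' and an infinite upper limit, quad dispatches to QUADPACK's QAWF Fourier integrator, which is designed for exactly this kind of oscillatory tail.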
This editorial corresponds to 2017 Chinese Multi-University Training, BeihangU Contest (stage 1), which was held on Jun 25th, 2017. There are 12 problems in total. You can solve them as a team member or an individual in a 5-hour contest. By the time you join as virtual participants, 770 teams, or even more, will compete with you virtually. The editorial in the English version has been completed, which is a bit different from its Chinese version (mostly because I don't want bad editorials to ruin the contest, lol). However, for the sake of hiding spoilers, editorials are locked and will be shown as the following conditions are met:

Editorials for the easiest 4 problems will be revealed after the replay (all unlocked);
Each of the hardest 5 will be released if the corresponding problem has been solved by at least 5 users or teams on Codeforces::Gym (3 unlocked);
Each of the others will be published when the relevant problem has been solved by at least 10 users or teams in virtual participation (including the replay) on Codeforces::Gym (all unlocked).

Or you can find solutions in comments?

Idea: skywalkert. Solution: The answer is $$$\left \lfloor \log_{10} (2^m - 1) \right \rfloor$$$. Apparently, there does not exist $$$10^k = 2^m$$$ for positive integers $$$k$$$, $$$m$$$, which results in $$$\left \lfloor \log_{10} (2^m - 1) \right \rfloor = \left \lfloor \log_{10} 2^m \right \rfloor = \left \lfloor m \log_{10} 2 \right \rfloor$$$.

Idea: sd0061. Solution: Each letter's contribution to the answer can be treated as a number in the base of $$$26$$$, so the problem is equivalent to multiplying these contributions with weights ranged from $$$0$$$ to $$$25$$$ and thus making the answer maximum. Obviously, the largest contribution matches $$$25$$$, the second-largest matches $$$24$$$, and so on. Doing this greedy algorithm by sorting could be enough, except for the only case where it might cause leading zeros, which is not permitted. Time complexity is $$$\mathcal{O}\left(\sum_{i = 1}^{n}{|s_i|} + \max(|s_1|, |s_2|, \ldots, |s_n|) \log C\right)$$$, where $$$C = 26$$$.

Idea: sd0061. Solution: If we consider, for each color separately, the number of paths covered by that color at least once, we can conclude the answer is the sum of these numbers. Conversely, we can count how many paths are not covered by this color, which is easy to compute. We can apply the idea of virtual trees to solve it directly, in which we do not really need to build them out. What we need to calculate is the total number of paths within any blocks after removing all the nodes of a specified color from the tree. We can maintain, for each node of this color, the number of remaining nodes whose lowest ancestor of this color is this node. Together with the number of nodes which have no ancestor of this color, we can get the answer. The process can be implemented directly by DFS (Depth-First Search) once, so the time complexity is $$$\mathcal{O}(n)$$$.

Idea: chitanda. Solution: A permutation can be decomposed into one or more disjoint cycles, which are found by repeatedly tracing the application of the permutation on some elements. For each element $$$i$$$ in an $$$l$$$-cycle of permutation $$$a$$$, we have $$$f(i) = b_{f(a_i)} = b_{b_{f(a_{a_i})}} = \cdots = b^{(l)}_{f(i)}$$$ (going once around the cycle), which implies $$$f(i)$$$ must be some element in a cycle of permutation $$$b$$$ whose length is a factor of $$$l$$$. Besides, if $$$f(i)$$$ is fixed, the other $$$(l - 1)$$$ numbers in this cycle can be determined by $$$f(i) = b_{f(a_i)}$$$.
Consequently, the answer is $$$\prod_{i = 1}^{k} \sum_{j | l_i} {j \cdot c_j}$$$, where $$$k$$$ is the number of cycles obtained from permutation $$$a$$$, $$$l_i$$$ represents the length of the $$$i$$$-th cycle, and $$$c_j$$$ indicates the number of cycles in permutation $$$b$$$ whose length is equal to $$$j$$$. Due to $$$\sum_{i = 1}^{k} \sum_{j | l_i}{1} \leq \sum_{i = 1}^{k}{2 \sqrt{l_i}} \leq 2 \sqrt{k \sum_{i = 1}^{k}{l_i}} \leq 2 n$$$, the time complexity is $$$\mathcal{O}(n + m)$$$.

Idea: constroy. Solution: The gears with their adjacency relations form a forest. We can consider coaxial gears, which have the same angular velocity, as a block and then consider the effect of gear meshing. If the $$$x$$$-th gear and the $$$y$$$-th one mesh, let their radii be $$$r_x$$$, $$$r_y$$$ respectively, and let their angular velocities be $$$\omega_x$$$, $$$\omega_y$$$ respectively; then we have $$$\ln \omega_y = \ln \omega_x + \ln r_x - \ln r_y$$$. Hence, for a particular gear in a connected component, we can determine the difference of angular velocities between this gear and any other gear in the component, and then maintain the maximum relative difference using a segment tree on the traversal sequence obtained from DFS (Depth-First Search). More specifically, let's fix a gear in each component as the reference gear and maintain the differences for all gears in this component. Let the fixed gear be the root of this component, and then build a segment tree to maintain the maximum value on the rooted tree structure of this component. When a gear is replaced, the angular velocity of its block may be changed if it is the shallowest node in this block, and the angular velocities of other blocks may be changed if these blocks are completely contained in the subtree of the changed node. No matter which case, there is only an interval of the traversal sequence corresponding to the nodes that need updating, by adding a common offset to their recorded values. For each query, we only need to get the difference between the activated gear and the reference gear. Together with the maximum relative difference in the component, the real maximum angular velocity can be determined. In practice, you can maintain $$$\log_2 \omega$$$ to avoid possible precision issues and only convert it into $$$\ln \omega$$$ for the output. Time complexity is $$$\mathcal{O}(n + m + q \log n)$$$.

Idea: constroy. Solution: Let's sort these hints in non-increasing order and remove duplicates. Denote the sorted array as $$${b'}_{1}$$$, $$${b'}_{2}$$$, $$$\ldots$$$, $$${b'}_{m'}$$$ satisfying $$${b'}_{i} < {b'}_{i - 1}$$$ for each $$$i \geq 2$$$, and define $$${b'}_{0} = n$$$. One solution to this problem is to apply a linear selection algorithm, which can find the $$$k$$$-th smallest element in an array of length $$$n$$$ in time complexity $$$\mathcal{O}(n)$$$ and also split the array into three parts that include the smallest $$$(k - 1)$$$ elements, the $$$k$$$-th element and the other elements respectively. Due to $$$b_i + b_j \leq b_k$$$ if $$$b_i \neq b_j$$$, $$$b_i < b_k$$$, $$$b_j < b_k$$$, we have $$$2 {b'}_{i} < {b'}_{i - 2}$$$ for each $$$i \geq 3$$$. We can conclude $$$\sum_{k}{{b'}_{2 k + 1}} < 2 {b'}_{1}$$$, and similarly for the elements in even positions. If the algorithm is completely linear, the time complexity should be $$$\mathcal{O}\left(\sum_{i = 0}^{m' - 1}{{b'}_{i}}\right) = \mathcal{O}(n)$$$.
Since ratings are generated almost at random, we can instead utilize an almost-linear approach, such as the nth_element function in C++. By the way, there exist solutions in expected time complexity $$$\mathcal{O}(\sqrt{n m'})$$$.

Idea: sd0061. Solution: As the graph is a cactus graph, it is no doubt that one edge of each cycle needs to be removed in order to make a spanning tree. Hence, the problem can be rewritten as: we have $$$M$$$ ($$$M \leq \frac{m}{3}$$$) arrays of integers; picking one number from each array yields a sum, and we want the $$$K$$$ largest such sums and the total of these values. It is a classic problem that can be solved by sequence merge algorithms. For example, we can maintain a set of the $$$K$$$ largest values obtained from the first $$$x$$$ arrays, and then merge it with the $$$(x + 1)$$$-th array and find the $$$K$$$ largest new values by using a heap or priority queue. More specifically, let's assume that we are going to merge two non-increasing arrays $$$A$$$, $$$B$$$ and pick the $$$K$$$ largest values of the form $$$(A_i + B_j)$$$. As we know $$$B_j \geq B_{j + 1}$$$, so if we pick $$$(A_i + B_{j + 1})$$$, we must pick $$$(A_i + B_j)$$$ first. That inspires us to set a counter $$$c_i$$$ for each $$$A_i$$$, meaning that the next value we may pick involving $$$A_i$$$ is $$$(A_i + B_{c_i})$$$. Then, we can use a data structure to maintain the set of all candidate $$$(A_i + B_{c_i})$$$ and then query and erase the largest value, which we repeat at most $$$K$$$ times to get all the merged values we need. It seems the above method runs in time complexity $$$\mathcal{O}(MK \log K)$$$, but actually, it can run faster. Let the sizes of these arrays be $$$m_1$$$, $$$m_2$$$, $$$\ldots$$$, $$$m_M$$$ ($$$m_i \geq 3$$$ for each $$$i$$$) respectively. If we instead maintain $$$(A_{c_j} + B_j)$$$ in the data structure, where $$$B$$$ is the next array to be merged, the complexity will be $$$\mathcal{O}\left(\sum_{i = 1}^{M}{K \log{m_i}}\right) = \mathcal{O}\left(K \log{\prod_{i = 1}^{M}{m_i}}\right) = \mathcal{O}\left(M K \log{\frac{\sum_{i = 1}^{M}{m_i}}{M}}\right) = \mathcal{O}\left(M K \log{\frac{m}{M}}\right)$$$. As $$$M \leq \frac{m}{3}$$$, we can conclude the worst complexity is $$$\mathcal{O}(m K)$$$, when $$$M = \frac{m}{3}$$$. By the way, there exist solutions in time complexity $$$\mathcal{O}(K \log K)$$$.

Idea: skywalkert. Solution: The main idea of our standard solution is the ordinary generating function. Let's define the number of ways to choose food of total volume $$$k$$$ as $$$f(k)$$$, and its generating function as $$$F(z) = \sum_{k \geq 0}{f(k) z^k}$$$. After calculating the polynomial $$$(F(z) \bmod z^{2 n + 1})$$$ with coefficients in modulo $$$(10^9 + 7)$$$, we can enumerate one piece of equipment and calculate the answer. Based on the rule of product, we have $$$F(z) = \prod_{i = 1}^{n}{\frac{1 - z^{(a_i + 1) i}}{1 - z^{i}}}.$$$ Due to $$$0 \leq a_1 < a_2 < \ldots < a_n$$$, we know that $$$a_i \geq i - 1$$$ and thus $$$(a_i + 1) i \geq i^2$$$, which implies there are only $$$\mathcal{O}(\sqrt{n})$$$ items $$$\left(1 - z^{(a_i + 1) i}\right)$$$ that are not equivalent to $$$1$$$ in modulo $$$z^{2 n + 1}$$$. Hence, we can calculate the numerator of $$$(F(z) \bmod z^{2 n + 1})$$$ in time complexity $$$\mathcal{O}(n \sqrt{n})$$$. The rest is similar to the generating function of the partition function. That is defined as $$$P(z) = \sum_{k \geq 0}{p(k) z^k} = \prod_{k \geq 1}{\frac{1}{1 - z^k}},$$$ where $$$p(k)$$$ represents the number of distinct partitions of a non-negative integer $$$k$$$.
The pentagonal number theorem states that $$$\prod_{k \geq 1}{(1 - z^k)} = \sum_{j = -\infty}^{\infty}{(-1)^j z^{j (3 j - 1) / 2}},$$$ which can help us calculate the polynomial $$$(P(z) \bmod z^m)$$$ in time complexity $$$\mathcal{O}(m \sqrt{m})$$$. Besides, $$$1 - x^k \equiv 1 \equiv \frac{1}{1 - x^k} \pmod{x^m}$$$ for any integers $$$k \geq m \geq 1$$$, so we have $$$\frac{1}{\prod_{i = 1}^{n}{(1 - z^i)}} \equiv P(z) \prod_{k = n + 1}^{2 n}{(1 - z^k)} \pmod{z^{2 n + 1}},$$$ and we can get the denominator of $$$(F(z) \bmod z^{2 n + 1})$$$ from $$$(P(z) \bmod z^{2 n + 1})$$$ easily. The total time complexity can be $$$\mathcal{O}(n \sqrt{n})$$$ if we calculate the denominator as a polynomial first, and then multiply it with each term included in the numerator one by one. By the way, if you are familiar with polynomial computation, you can solve the problem in time complexity $$$\mathcal{O}(n \log n)$$$.

Idea: chitanda. Solution: The answers for fixed $$$n$$$ form an almost periodic pattern. It is not difficult to find out that the pattern is $$$\underbrace{1, 2, \ldots, n}_{n \text{ numbers}}$$$, $$$\underbrace{1, 2, \ldots, n - 1}_{(n - 1) \text{ numbers}}$$$, $$$\underbrace{1, 2, \ldots, n - 2, n}_{(n - 1) \text{ numbers}}$$$, $$$\underbrace{1, 2, \ldots, n - 1}_{(n - 1) \text{ numbers}}$$$, $$$\underbrace{1, 2, \ldots, n - 2, n}_{(n - 1) \text{ numbers}}$$$, $$$\ldots$$$

Idea: skywalkert. Solution: According to $$$(l_i, r_i)$$$, we can build the only possible Cartesian tree linearly. If there is any contradiction, the answer should be $$$0$$$. Besides, it is not difficult to conclude that the Cartesian tree, if it exists, is unique. More specifically, we can find each node of the tree recursively. For example, the root must cover the interval $$$[1, n]$$$, and if a node $$$u$$$ covers the interval $$$[L, R]$$$, then its left child, if it exists, must cover $$$[L, u - 1]$$$, and its right child, if it exists, must cover $$$[u + 1, R]$$$. If there is no position $$$u$$$ covering the whole interval $$$[L, R]$$$, we have found a contradiction. If there are multiple choices, there will be a contradiction too, but we can just choose any of them as $$$u$$$, because further checking will detect it. Hence, after sorting the given intervals by radix sort, we can construct the tree linearly.

The counting can be done recursively as well. Let $$$s(u)$$$ be the size of the subtree of the node $$$u$$$, which is also equal to $$$(r_u - l_u + 1)$$$, and $$$f(u)$$$ be the number of permutations obtained from $$$1$$$ to $$$s(u)$$$ that can form the same Cartesian tree as the subtree of the node $$$u$$$. If the node $$$u$$$ has two children $$$v_1$$$, $$$v_2$$$, we have $$$f(u) = {s(v_1) + s(v_2) \choose s(v_1)} f(v_1) f(v_2)$$$. By the way, if we explore a bit more, we will find the answer is also equal to $$$\frac{n!}{\prod_{i = 1}^{n}{s(i)}}$$$ (a brute-force check of this identity appears right after this editorial). Time complexity is $$$\mathcal{O}(n)$$$. The bottleneck is on reading the input data.

Hope you can enjoy it. Any comments, as well as downvotes, would be appreciated.
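To make the closing identity for the Cartesian-tree problem concrete, here is a small brute-force check of my own (not part of the original editorial): for every permutation of $$$1..n$$$ it builds the min-Cartesian-tree shape and confirms that the number of permutations sharing that shape equals $$$n! / \prod_i s(i)$$$.

    import itertools, math

    def shape(p):
        # Shape of the min-Cartesian tree of sequence p, as nested tuples:
        # the root is the position of the minimum, children come from the halves.
        if not p:
            return None
        i = min(range(len(p)), key=p.__getitem__)
        return (shape(p[:i]), shape(p[i + 1:]))

    def size_product(p):
        # Product of subtree sizes s(i) over all nodes of the Cartesian tree.
        if not p:
            return 1
        i = min(range(len(p)), key=p.__getitem__)
        return len(p) * size_product(p[:i]) * size_product(p[i + 1:])

    n = 6
    counts = {}
    for perm in itertools.permutations(range(1, n + 1)):
        counts[shape(perm)] = counts.get(shape(perm), 0) + 1
    assert all(counts[shape(p)] == math.factorial(n) // size_product(p)
               for p in itertools.permutations(range(1, n + 1)))
    print("n! / prod s(i) verified for n =", n)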
Fujimura's problem (revision as of 10:31, 4 March 2009)

Let [math]\overline{c}^\mu_n[/math] be the size of the largest subset of the triangular grid [math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math] which contains no equilateral triangles [math](a+r,b,c), (a,b+r,c), (a,b,c+r)[/math] with [math]r \gt 0[/math]; call such sets triangle-free. (It is an interesting variant to also allow negative r, thus allowing "upside-down" triangles, but this does not seem to be as closely connected to DHJ(3).) Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain hyper-optimistic conjecture.

n: 0 1 2 3 4 5
[math]\overline{c}^\mu_n[/math]: 1 2 4 6 9 12

n=0

[math]\overline{c}^\mu_0 = 1[/math]: This is clear.

n=1

[math]\overline{c}^\mu_1 = 2[/math]: This is clear.

n=2

[math]\overline{c}^\mu_2 = 4[/math]: This is clear (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]).

n=3

[math]\overline{c}^\mu_3 = 6[/math]: For the lower bound, delete (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math]. For the upper bound: observe that with only three removals each of these (non-overlapping) triangles must have one removal:

set A: (0,3,0) (0,2,1) (1,2,0)
set B: (0,1,2) (0,0,3) (1,0,2)
set C: (2,1,0) (2,0,1) (3,0,0)

Consider choices from set A:
(0,3,0) leaves the triangle (0,2,1) (1,2,0) (1,1,1).
(0,2,1) forces a second removal at (2,1,0) [otherwise there is a triangle at (1,2,0) (1,1,1) (2,1,0)], but then none of the choices for the third removal work.
(1,2,0) is symmetrical with (0,2,1).

n=4

[math]\overline{c}^\mu_4=9[/math]: The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c = 0 has 9 elements and is triangle-free. (Note that it does contain the equilateral triangle (2,2,0),(2,0,2),(0,2,2), so would not qualify for the generalised version of Fujimura's problem in which [math]r[/math] is allowed to be negative.) Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles. If [math](0,0,4)\in S[/math], there can only be one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] in S for [math]x=1,2,3,4[/math]. Thus there can only be 5 elements in S with [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S can have at most 4 elements in this set. So [math]|S|\leq 4+5=9[/math]. Similarly if S contains (0,4,0) or (4,0,0). So if [math]|S|\gt9[/math], S doesn't contain any of these. Also, S can't contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math].
Similarly for [math](3,0,1), (1,0,3), (1,2,1)[/math] and [math](1,3,0), (3,1,0), (1,1,2)[/math]. So now we have found 6 elements not in S, but [math]|\Delta_4|=15[/math], so [math]|S|\leq 15-6=9[/math]. Remark: curiously, the best constructions for [math]c_4[/math] use only 7 points instead of 9.

n=5

[math]\overline{c}^\mu_5=12[/math]: The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c = 0 has 12 elements and doesn't contain any equilateral triangles. Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], there can only be one of (0,x,5-x) and (x,0,5-x) in S for x=1,2,3,4,5. Thus there can only be 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S can have at most 6 elements in this set. So [math]|S|\leq 6+6=12[/math]. Similarly if S contains (0,5,0) or (5,0,0). So if |S| > 12, S doesn't contain any of these. S can only contain 2 points in each of the following equilateral triangles:

(3,1,1),(0,4,1),(0,1,4)
(4,1,0),(1,4,0),(1,1,3)
(4,0,1),(1,3,1),(1,0,4)
(1,2,2),(0,3,2),(0,2,3)
(3,2,0),(2,3,0),(2,2,1)
(3,0,2),(2,1,2),(2,0,3)

So now we have found 9 elements not in S, but [math]|\Delta_5|=21[/math], so [math]|S|\leq 21-9=12[/math].

n=6

[math]15 \leq \overline{c}^\mu_6 \leq 17[/math]: [math]15 \leq \overline{c}^\mu_6[/math] from the bound for general n. Note that there are eight extremal solutions to [math] \overline{c}^\mu_3 [/math]:

Solution I: remove 300, 020, 111, 003
Solution II: remove 030, 111, 201, 102
Solution III (and 2 rotations): remove 030, 021, 210, 102
Solution III' (and 2 rotations): remove 030, 120, 012, 201

Also consider the same triangular lattice with the point 020 removed, making a trapezoid. Solutions based on I-III are:

Solution IV: remove 300, 111, 003
Solution V: remove 201, 111, 102
Solution VI: remove 210, 021, 102
Solution VI': remove 120, 012, 201

Suppose we can remove all equilateral triangles on our side-7 triangular lattice with only 10 removals. The triangle 141-411-114 must have at least one point removed. Remove 141, and note that because of symmetry any logic that follows also applies to 411 and 114. There are three disjoint triangles 060-150-051, 240-231-330, 042-132-033, so each must have a point removed. (Now only six removals remain.) The remainder of the triangle includes the overlapping trapezoids 600-420-321-303 and 303-123-024-006. If the solutions of these trapezoids come from V, VI, or VI', then 6 points have been removed. Suppose the trapezoid 600-420-321-303 uses solution IV (by symmetry the same logic will work with the other trapezoid). Then there are 3 disjoint triangles 402-222-204, 213-123-114, and 105-015-006. Then 6 points have been removed. Therefore the remaining six removals must all come from the bottom three rows of the lattice. Note this means the "top triangle" 060-330-033 must have only four points removed, so it must conform to either solution I or II, because of the removal of 141. Suppose the solution of the trapezoid 600-420-321-303 is VI or VI'. Both solutions I and II on the "top triangle" leave 240 open, and hence the equilateral triangle 240-420-222 remains. So the trapezoid can't be VI or VI'. Suppose the solution of the trapezoid 600-420-321-303 is V. This leaves an equilateral triangle 420-321-330 which forces the "top triangle" to be solution I. This leaves the equilateral triangle 201-321-222. So the trapezoid can't be V. Therefore the solution of the trapezoid 600-420-321-303 is IV.
Since the disjoint triangles 402-222-204, 213-123-114, and 105-015-006 must all have points removed, that means the remaining points in the bottom three rows (420, 321, 510, 501, 312, 024) must be left open. 420 and 321 force 330 to be removed, so the "top triangle" is solution I. This leaves triangle 321-024-051 open, and we have reached a contradiction.

n = 7

n = 8

n = 9

n = 10

[math]\overline{c}^\mu_{10} \geq 29[/math]: 028, 046, 055, 064, 073, 118, 172, 181, 190, 208, 217, 235, 262, 316, 334, 352, 361, 406, 433, 442, 541, 550, 604, 613, 622, 721, 730, 901, 1000

General n

A lower bound for [math]\overline{c}^\mu_n[/math] is 2n for [math]n \geq 1[/math], by removing (n,0,0), the triangle (n-2,1,1) (0,n-1,1) (0,1,n-1), and all points on the edges of and inside the same triangle. In a similar spirit, we have the lower bound [math]\overline{c}^\mu_{n+1} \geq \overline{c}^\mu_n + 2[/math] for [math]n \geq 1[/math], because we can take an example for [math]\overline{c}^\mu_n[/math] (which cannot be all of [math]\Delta_n[/math]) and add two points on the bottom row, chosen so that the triangle they form has its third vertex outside the original example. An asymptotically superior lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), made of all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero. A trivial upper bound is [math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math], since deleting the bottom row of an equilateral-triangle-free set gives another equilateral-triangle-free set. We also have the asymptotically superior bound [math]\overline{c}^\mu_{n+2} \leq \overline{c}^\mu_n + \frac{3n+2}{2}[/math], which comes from deleting the two bottom rows of a triangle-free set and counting how many vertices are possible in those rows. Another upper bound comes from counting the triangles. There are [math]\binom{n+2}{3}[/math] triangles, and each point belongs to n of them. So you must remove at least (n+2)(n+1)/6 points to remove all triangles, leaving (n+2)(n+1)/3 points as an upper bound for [math]\overline{c}^\mu_n[/math].

Asymptotics

The corners theorem tells us that [math]\overline{c}^\mu_n = o(n^2)[/math] as [math]n \to \infty[/math]. By looking at those triples (a,b,c) with a+2b inside a Behrend set, one can obtain the lower bound [math]\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))[/math].
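As a sanity check on the small values tabulated above, a direct brute-force search (my own sketch, exponential in [math]|\Delta_n|[/math], so only usable for very small n) reproduces 1, 2, 4, 6, 9:

    from itertools import combinations

    def delta(n):
        # Points of the triangular grid {(a,b,c) >= 0 : a+b+c = n}.
        return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

    def triangle_free(S, n):
        P = set(S)
        for r in range(1, n + 1):                # side length of the triangle
            for a in range(n - r + 1):
                for b in range(n - r - a + 1):
                    c = n - r - a - b
                    if ((a + r, b, c) in P and (a, b + r, c) in P
                            and (a, b, c + r) in P):
                        return False
        return True

    def fujimura(n):
        pts = delta(n)
        for k in range(len(pts), 0, -1):         # largest triangle-free size
            if any(triangle_free(S, n) for S in combinations(pts, k)):
                return k
        return 0

    print([fujimura(n) for n in range(5)])       # expected: [1, 2, 4, 6, 9]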
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...

Considering this pseudo-code of a bubblesort:

    FOR i := 0 TO arraylength(list) STEP 1
      switched := false
      FOR j := 0 TO arraylength(list)-(i+1) STEP 1
        IF list[j] > list[j + 1] THEN
          switch(list,j,j+1)
          switched := true
        ENDIF
      NEXT
      IF switch...

Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks: allocation of one block, and freeing a previously allocated block which is not used anymore. Also, as a requiremen...

Rice's theorem tells us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false). But there are other properties of Turing Machines that are not decidabl...

People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in. As f...

Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be? What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...

I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot! However, I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...

This discussion started in my other question "Will Homework Questions Be Allowed?". Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...

There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here? I have a particular example question in mind: http://cstheory.stackexchange.com...

Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity. However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...

Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...

I expect to see pseudo-code and maybe even HPL code on a regular basis. I think syntax highlighting would be a great thing to have. On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...

Sudoku generation is hard enough.
It is much harder when you have to make an application that makes a completely random Sudoku. The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...

I have observed that there are two different types of states in branch prediction: in superscalar execution, where branch prediction is very important and the delay is mainly in execution rather than in fetching; and in the instruction pipeline, where fetching is more of a problem since the inst...

Is there any evidence suggesting that time spent on writing up, or thinking about, the requirements will have any effect on the development time? A study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...

NPI is the class of NP problems with no known polynomial-time algorithm that are not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it, but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...

I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent. On page 436 however, the au...

This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...

This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject. Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...

This is somewhat related to this discussion, but different enough to deserve its own thread, I think. What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science? Example: "How do I get the symme...

EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:

$S \rightarrow a a$
$S \rightarrow b b$
$S \rightarrow a S a$
$S \rightarrow b S b$

EPAL is the 'bane' of many parsing algorithms: I have yet to enc...

Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$. The simple method is that the compu...

Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children: Inductive LTree : Set := Node : list LTree -> LTree. The naive way of d...

I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting, but its only connection to sorting is that the algorithm happens to be a type of sort; it's not about sorting per se. So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic? I see four main classes of questions:

Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.
Proving theorems in a way that can be automated in the chosen formal setting.
Writing a co...

Should topics in applied CS be on topic? These are not really considered part of TCS; examples include:

Computer architecture (Operating system, Compiler design, Programming language design)
Software engineering
Artificial intelligence
Computer graphics
Computer security

Source: http://en.wik...

I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.

I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree, as well as count how many complete branches there are (a parent node with both left and right child nodes), with an assumed global counting variable. So far I have...

It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automaton. But, apparently, Buchi automata are a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...

Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise. Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...

One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two stacks or tapes, have been shown to be equi...

Though in the future it would probably be a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck, because, as you can probably tell from the answers, people seem to be unsure about where exactly you need directions in this case.

What will the policy on providing code be? In my question it was commented that it might not be on topic as it seemed like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didn't ask for working C++ or whatever language. Should we only allow pseudo-code here?...
Binary structures often feature length specifiers; the parser is supposed to read them and then consume the specified amount of symbols. Because of this, the grammar is context-sensitive. What would a context-sensitive grammar for a simple binary format look like? For example, let's consider a length-prefixed array with the following layout:

    64 bits | [size * 8] bits
    size    | data

I suppose the language that corresponds to this format would be: $$n b^n \\ n \in [0, 2^{64}[ \\ b \in [0, 2^8[$$ Left-context-sensitive grammars have the following structure: $$\alpha A \rightarrow \alpha \gamma$$ I don't understand how this formalism could generate or be used to parse a language which contains part of the grammar itself in it. Does the following grammar make any sense? \begin{align} Bit & \rightarrow 0 \\ Bit & \rightarrow 1 \\ Octet & \rightarrow Bit^8 \\ Size & \rightarrow Octet^8 \\ Size \; Data & \rightarrow Size \; Octet^{Size} \\ Array_s & \rightarrow Size \; Data \\ \end{align} I reason that the grammar above matches the structure of left-context-sensitive grammars. It seems clear to me that the next-to-last line of the grammar directly corresponds to the definition above: \begin{align} \alpha & = Size \\ A & = Data \\ \gamma & = Octet^{Size} \\ \end{align} The trickiest non-terminal seems to be $Size$. It is not clear to me exactly how it influences the grammar's semantics. It is simultaneously part of the input and the grammar, serving as the number of repetitions for $Octet$. I have seen many grammar-based approaches to parsing binary file formats. They include formalisms such as attribute grammars, [1] [2] adaptive grammars, the recently introduced data-dependent grammars, [1] [2] parser combinators [1] [2] [3] [4] [5] and even scattered context grammars. [1] All these tools go beyond the definition above, so I keep wondering about the nature of binary file formats. If it is not possible to describe their language with a context-sensitive grammar, does that mean they are more powerful in the Chomsky hierarchy?
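In practice this data dependence is what makes hand-written binary parsers trivial even though the language sits awkwardly in the Chomsky hierarchy; a minimal sketch (my own illustration, not from the cited papers) of parsing the length-prefixed layout above:

    import struct

    def parse_array(buf: bytes) -> bytes:
        # Read the 64-bit size prefix (little-endian, an assumption on my part),
        # then consume exactly that many data bytes -- the "Octet^{Size}" production.
        if len(buf) < 8:
            raise ValueError("truncated size field")
        (size,) = struct.unpack_from("<Q", buf, 0)
        if len(buf) < 8 + size:
            raise ValueError("truncated data field")
        return buf[8:8 + size]

    # Example: a 3-byte payload.
    print(parse_array(struct.pack("<Q", 3) + b"abc"))   # -> b'abc'

The parser consults a previously parsed value (size) to decide how much input the next nonterminal consumes, which is precisely the feature that data-dependent grammars add to the classical formalism.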
So I have an optimization problem of the form $$\text{maximize}\quad f(A):{\bf R}^{K\times K}\rightarrow{\bf R}$$ $$\text{subject to}\quad A^T{\bf 1}={\bf 1},\quad A\geq 0$$ where the constraint set is thus the manifold of left stochastic matrices (each column sums to 1). The objective function $f$ is not concave, and thus my goal is to very quickly find a local maximum (speed is important). The function $f$ is a nasty piece of work; nevertheless the gradient $\nabla f$ is analytically computable. However, even without the constraint, setting the gradient equal to zero and solving for the extrema is intractable, and thus I can't imagine that back-solving for Lagrange multipliers is doable.

The method I've tried is manifold gradient ascent using some code found here from the Manopt package. They refer to the manifold of left stochastic matrices as the multinomial manifold, and the code provides methods for basically projecting the gradient onto the tangent space, and then retracting this new gradient onto the manifold itself. Unfortunately, as far as retractions go it's fairly primitive I think, and it involves entry-wise dividing the projected gradient by the current point and then exponentiating it, which causes overflow unless you make your step-sizes quite small (however, I don't fully understand the intricacies of manifold optimization, so I could be wrong about this). I could compute the Hessian numerically and try to run a 2nd-order optimization on the multinomial manifold, but I'd have to do some reading to figure out exactly how to do this.

Basically, I need something that is fast; speed is more important than accuracy, as I have to do this thousands of times and it's only 1 step in a larger coordinate ascent algorithm. The gradient $\nabla f$ is complicated and thus each evaluation of it is quite costly, but the one saving grace is that the dimension of $A$ is relatively small, almost certainly $<100$. What would be the best approach here?

Update As requested, here is the function and its gradient: \begin{align} f(A)&= \Big[-\psi\big(\sum_{i=1}^KC_{ij}\big)+\sum_{i=1}^KA_{ij}(\psi(C_{ij})-\log C_{ij})\Big]_{j=1:K} \cdot \big(\sum_{n=2}^NA^{n-2}\big)\cdot {\bf x}\\ &+ \sum_{n=1}^N[\log B_{in}]_{i=1:K}^T\cdot A^{n-1}\cdot{\bf x} \end{align} \begin{align} \nabla f(A) &= [\psi(C_{ij}) - \log A_{ij} -1]_{i=1:K,\hspace{1mm}j=1:K}\cdot \text{diagm}\Big(\big(\sum_{n=2}^NA^{n-2}\big)\cdot{\bf x}\Big)\\ &+ \sum_{n=3}^{N}\sum_{r=0}^{n-3}\Big((A^r)^T\cdot\Big[-\psi\big(\sum_{i=1}^KC_{ij}\big)+\sum_{i=1}^KA_{ij}(\psi(C_{ij})-\log C_{ij})\Big]_{j=1:K}^T \cdot {\bf x}^T\cdot (A^{n-3-r})^T\Big)\\ &+ \sum_{n=2}^N\sum_{r=0}^{n-2}(A^r)^T\cdot[\log B_{in}]_{i=1:K}\cdot{\bf x}^T\cdot(A^{n-2-r})^T \end{align} Where $\psi$ is the digamma function, $A$ and $C$ are $K\times K$ matrices, $B$ is a $K\times N$ matrix and ${\bf x}$ is a length-$K$ vector, all real-valued. The function $\text{diagm}$ converts a vector to a diagonal matrix. Also, I checked my derivation of the gradient numerically, so you can be sure it's correct. I'm writing this in Julia, and I tried using the NLopt package both with a gradient-based method and a derivative-free method, but they were even slower than my original manifold approach.
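Since the feasible set is a product of simplices (one per column), one cheap baseline worth comparing against Manopt's retraction is mirror ascent in the entropy geometry, i.e. a multiplicative update followed by column renormalization. This is a sketch of my own under stated assumptions (grad_f is a user-supplied gradient; the quadratic stand-in objective is purely illustrative), not the Manopt algorithm:

    import numpy as np

    def mirror_ascent(A0, grad_f, step=1e-2, iters=500):
        # Exponentiated-gradient ascent over column-stochastic matrices.
        # Each update is A <- A * exp(step * grad) entrywise, then columns are
        # renormalized; iterates stay strictly positive and feasible throughout.
        A = A0.copy()
        for _ in range(iters):
            G = grad_f(A)
            G = G - G.max()          # log-sum-exp-style shift for overflow
                                     # safety; any constant shift cancels in
                                     # the column renormalization below
            A = A * np.exp(step * G)
            A = A / A.sum(axis=0, keepdims=True)
        return A

    # Usage sketch with a stand-in objective f(A) = -||A - C||^2:
    K = 10
    C = np.abs(np.random.default_rng(1).standard_normal((K, K)))
    A0 = np.full((K, K), 1.0 / K)
    A = mirror_ascent(A0, grad_f=lambda A: -2.0 * (A - C))
    print(A.sum(axis=0))             # each column sums to 1

The shift before exponentiating is the standard trick for exactly the overflow the question describes, and it allows much larger step sizes than applying the raw retraction.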
I am looking at finite difference methods to solve the simple equation $u_t=a(x,t)u_{xx}$. There are explicit, implicit, and Crank-Nicolson schemes. The latter is said to be more accurate, since the local truncation error is of second order provided all expansions are done around the point $t^{n+0.5}$. However, the local truncation error basically tells us how well the difference equation approximates the PDE; thus, if I do the expansion of the CN scheme around any other point, I don't have second order anymore.

Question 1: How can I trust the results of the scheme if there is only a single point where second order occurs? On the other hand, if I have the explicit scheme, regardless of around which point I do the Taylor expansion I keep having first order, so I see why it is first order. What I don't understand is:

Question 2: Why is CN supposed to give a global error of second order at the points of the grid when the local error of second order is estimated at a point which is not even on the mesh?

Thanks!

EDIT: Any scheme can be written as $\frac{u^{n+1}-u^n}{\tau}=L_hu^{\theta}$. So, regardless of $\theta$, the RHS is $u_{xx}(x_i,t^{\theta})+\mathcal{O}(h^2)$. So it really boils down to the approximation of the first derivative on the left hand side. And here we have a choice. We can take $\theta=1$ and have the implicit scheme, with the derivative approximated at the point $t^{n+1}$; or we can take the midpoint and have the derivative approximated at that point with higher accuracy. However, the point is not on the grid! What I don't understand is that we expand around a point which is not on the grid, but measure the error at points which are on the grid, where the expansion of the difference equation gives only first order. I am confused about it.
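One way to see that the off-grid expansion point is harmless is to measure the observed global order directly. Here is a tiny experiment of my own on the scalar model problem $u'=\lambda u$ (the spatial part stripped away), where the same trapezoidal-versus-Euler distinction appears:

    import numpy as np

    lam = -1.0   # model problem u' = lam * u, exact solution exp(lam * t)

    def global_error(scheme, dt, T=1.0):
        u = 1.0
        for _ in range(int(round(T / dt))):
            if scheme == "explicit":             # forward Euler, LTE O(dt)
                u += dt * lam * u
            else:                                # Crank-Nicolson / trapezoidal
                u *= (1 + 0.5 * dt * lam) / (1 - 0.5 * dt * lam)
        return abs(u - np.exp(lam * T))

    for scheme in ("explicit", "cn"):
        e1, e2 = global_error(scheme, 1e-2), global_error(scheme, 5e-3)
        print(scheme, "observed order:", round(np.log2(e1 / e2), 2))
    # prints ~1 for explicit and ~2 for cn, even though CN's second-order
    # truncation error is naturally derived at the off-grid point t^{n+1/2}

The point is that the truncation error is a property of the difference operator, not of where you choose to Taylor-expand; the sharpest bound (obtained at $t^{n+1/2}$) is the one that propagates into the global, on-grid error.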
I want to solve a small deformation solid structure problem applying periodic boundary conditions in FEM. The geometry is a square and the equations are: $$ \text{div} \, \sigma = 0 \\ \sigma = f(\epsilon)\\ \epsilon_{ij} = \frac{1}{2}(u_{i,j} + u_{j,i}) \\ u^+ - u^- = c \\ \sigma^+ \cdot \hat{n} + \sigma^- \cdot \hat{n} = 0 $$ The variables with the $.^+$ or $.^-$ correspond to opposite sides of the square. A weak form of the equilibrium equation can be written as: $$ \int_{V} \sigma: \epsilon(\delta v) \,dV = 0 \\ $$ where $\delta v$ is a test function. Discretizing by FE and applying a Newton-Raphson iterative scheme, I wrote a residual of the form: $$ r = \left[ \begin{array}{c} \int_{V} \sigma: \epsilon(\delta v_j) \,dV \quad \text{ for } j = 1...N_\text{int} - N_\text{ext}/2\\ u_j^+ - u_j^- - c \quad \text{ for } j = 1...N_\text{ext}/2\\ \end{array} \right] $$ Question: The weak formulation above does not have the surface contribution, so the correct form is: $$\int_{V} \sigma: \epsilon(\delta v) \,dV - \int_{\partial V} (\sigma \cdot \hat{n}) \cdot \delta v \,dS = 0 \\$$ Given the condition $\sigma^+ \cdot \hat{n} + \sigma^- \cdot \hat{n} = 0$, how does this condition affect the surface term so that it is enforced? (It seems that the term should vanish, but that did not work for me.) If this is not the correct way, how do I enforce the condition $\sigma^+ \cdot \hat{n} + \sigma^- \cdot \hat{n} = 0$? Note: Here I have the octave code run_periodic.m for you to see. The ass_periodic.m function does the job of assembling the residual and Jacobian and tries to enforce the boundary conditions.
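For what it's worth, one standard way to sidestep the surface term entirely is to impose only the displacement tie $u^+ - u^- = c$ by master-slave elimination; if the test space is built to satisfy the same tie (with $c=0$), the paired boundary contributions cancel and the traction condition holds weakly as the natural boundary condition. A small self-contained sketch (my own names and shapes, not the poster's Octave code):

    import numpy as np

    def solve_with_tie(K, f, slave, master, c):
        # Solve K u = f subject to u[slave] = u[master] + c by elimination:
        # write u = T u_r + g, then solve T' K T u_r = T' (f - K g).
        # Assumes the reduced matrix is nonsingular (enough constraints/supports).
        n = K.shape[0]
        keep = [i for i in range(n) if i != slave]
        T = np.zeros((n, n - 1))
        for col, i in enumerate(keep):
            T[i, col] = 1.0
        T[slave, keep.index(master)] = 1.0    # slave dof follows the master dof
        g = np.zeros(n)
        g[slave] = c                          # inhomogeneous part of the tie
        u_r = np.linalg.solve(T.T @ K @ T, T.T @ (f - K @ g))
        return T @ u_r + g

In this setup each paired degree of freedom on opposite edges gets one such tie, and after elimination no explicit $\sigma \cdot \hat{n}$ terms appear in the residual at all.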
I'm trying to work out if there is a way to get siunitx to natively handle the ~ symbol (\sim) in the same way that it can handle the < and > operators before a number, as I would like to be able to use ~ as shorthand for approximately. For example:

    \documentclass[12pt,a4paper]{report}
    \usepackage{siunitx}
    \begin{document}
    % Values output the same, with spacing after the < and before the % symbol
    \SI{< 10}{\percent} \\
    \SI{<10}{\percent} \\
    % When writing in math mode a space is placed after the ~ symbol, but not
    % before the % symbol
    $\sim10\%$ \\
    \end{document}

Ideally I'd like to be able to write something like \SI{~ 10}{\percent} or \SI{\tilde 10}{\percent}, where \tilde is a custom symbol defined using the \DeclareSIUnit\tilde{~} command. But I can't seem to find anything like this for prefix symbols in the documentation. Has anyone else come across a solution to this?
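One possible route, offered as an untested sketch: siunitx's number parser has an input-symbols option for tokens that should be passed through literally, which may be enough here; the option name and behaviour should be checked against the manual for the installed version before relying on it.

    % Untested sketch: treat \sim as a literal token inside parsed numbers.
    \documentclass{article}
    \usepackage{siunitx}
    \sisetup{input-symbols = {\sim}}
    \begin{document}
    \SI{\sim 10}{\percent}   % hoped-for output: ~10 %
    \end{document}

If that does not parse, a plain fallback macro of one's own (giving up on siunitx's number parsing but controlling the spacing) is always available.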
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?

The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...

The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...

Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca...

I am a bit confused about angular momentum in classical physics. For the orbital motion of a point mass: if we pick a new coordinate system (that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved in the new coordinates; there is an additional term that varies with time: $\vec {L'}=\vec{r'} \times \vec{p'} =(\vec{R}+\vec{r}) \times \vec{p} =\vec{R} \times \vec{p} + \vec L$, where the first term varies with time, $\vec R$ being the constant shift of coordinates while $\vec p$ is, roughly, rotating.) Would anyone be kind enough to shed some light on this for me?

From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia. @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title), is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet.

Is it possible to make a time machine ever? Please give an easy answer, a simple one.

A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we have neither the technology to build one nor a definite, proven (or generally accepted) idea of how we could build one.
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series. Although if you like epic fantasy, Malazan Book of the Fallen is fantastic.

@Mithrandir24601 lol it has some love story but it's written by a guy so can't be a romantic novel... besides, what decent stories don't involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with Kate Winslet, can't beat that right? :P variety.com/2016/film/news/…

@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll-worthy and cringy, or boring and predictable with OK writing. A notable exception is Steven Erikson.

@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy, where it's not in the focus so much and just evolves in a reasonable, if predictable, way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots.

@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$. Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$.

Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks

@CooperCape but this leads me to another question I forgot ages ago: If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields each coming from some point in the cloud?
Here goes. The classes that are assigned to real bundles are Stiefel-Whitney classes and Wu classes. The Wu class is a refinement of the Stiefel-Whitney class; it is sort of like a (Steenrod) square root. Both take values in $\mathbb{Z_2}$ cohomology. They can be seen as obstructions to nonvanishing sections defined on subsets of the base. If the bundle $p:E\rightarrow B$ is in addition oriented with fiber a vector space of dimension $r$, then the classes defining the orientation in $H^r(F,F-\{0\})$ can be woven together to form the Euler class. If $X\subset B$ is a cycle of dimension $r$, it assigns to the cycle the intersection number of a generic section over the cycle with the $0$-section.

The Pontryagin classes are integral cohomology classes assigned to real vector bundles by turning the real vector bundle into a complex vector bundle, so that a real basis for each fiber becomes a complex basis for the fibers of the new bundle. Finally, the Chern classes are assigned to complex vector bundles. There are many geometric interpretations of the Chern classes, but there is one that was more illuminating to me than the others when I finally learned it; it's due to Grothendieck. You can build a complex vector bundle by covering the base $B$ with open sets $U_{\alpha}$ along with continuous functions $U_{\alpha}\cap U_{\beta}\rightarrow GL_r\mathbb{C}$ that act like they should be coordinate changes for trivializations. To build the bundle you treat them as such. So... a vector bundle is a lot like a matrix, and a matrix has a characteristic polynomial. The Chern classes of a bundle are the coefficients of a "characteristic polynomial" of the vector bundle. Specifically, if $p:E\rightarrow B$ is a complex $r$-plane bundle, let $P(E)\rightarrow B$ be the bundle whose fibers are the projective $(r-1)$-spaces modeled on the fibers. There is a class $\gamma \in H^2(P(E))$ whose restriction to each fiber is the Chern class of the canonical bundle over that fiber. The cohomology $H^*(P(E))$ is a free module over $H^*(B)$ under the obvious action, with basis $1,\gamma,\gamma^2,\ldots, \gamma^{r-1}$. Hence you can write $\gamma^r+c_{1}\gamma^{r-1}+\ldots +c_{r-1}\gamma+c_r=0$, where the $c_i\in H^{2i}(B)$. These coefficients are the Chern classes of the bundle.
Preprint Series 2001

2001:26 R. DeVore, G. Petrova, and V. Temlyakov. We study the approximation of a function class F in $L_p$ by choosing first a basis B and then using n-term approximation with the elements of B. Into the competition for best bases we enter all greedy (i.e. democratic and unconditional [20]) bases for $L_p$ ...

2001:25 I. Daubechies, R. DeVore, C. Güntürk, and V. Vaishampayan. We analyze mathematically the effect of quantization error in the circuit implementation of Analog to Digital (A/D) converters such as Pulse Code Modulation (PCM) and Sigma-Delta Modulation (ΣΔ). We show that ΣΔ modulation, which is based on oversampling the signal, has a self-correction for quantization error that is not inherited ...

2001:24 I. Daubechies and R. DeVore. Digital signal processing has revolutionized the storage and transmission of audio signals, images and video, in consumer electronics as well as in more scientific settings (such as medical imaging). The main advantage of digital signal processing is its robustness: although all the operations have to be implemented with necessarily not ...

2001:23 S. Dilworth, N. Kalton, D. Kutzarova, and V. Temlyakov. Some new conditions that arise naturally in the study of the Thresholding Greedy Algorithm are introduced for bases of Banach spaces. We relate these conditions to best n-term approximation and we study their duality theory. In particular, we obtain a complete duality theory for greedy bases.

2001:22 J. Griggs. We determine the minimum volume (sum of cardinalities) of an intersecting family of subsets of an n-set, given the size of the family, by solving a simple linear program. From this we obtain a lower bound on the average size of the sets in an intersecting family. This answers ...

2001:21 S. Brenner. Smoothers, mesh dependent norms, interpolation and multigrid (file not available) (Appl. Numer. Math. 43 (2002), 45-56.) New estimates for the nodal interpolation operator for the P finite element are established with respect to mesh dependent norms that are defined in terms of the smoothers in multigrid algorithms. These estimates are useful in the additive approach to the convergence of V-cycle ...

2001:20 H. Wang. In this paper, we analyze the modified method of characteristics (MMOC) and an improved version of the MMOC, named the modified method of characteristics with adjusted advection (MMOCAA), for multidimensional advection-reaction transport equations in a uniform manner. We derive an optimal-order error estimate for these schemes. Numerical results are presented ...

2001:19 J. Liu and H. Wang. We combine an Eulerian-Lagrangian approach and multiresolution analysis to develop unconditionally stable, explicit, multilevel methods for multidimensional linear hyperbolic equations. The derived schemes generate accurate numerical solutions even if large time steps are used. Furthermore, these schemes have the capability of carrying out adaptive compression without introducing mass balance error ...

2001:18 M. Fischermann, A. Hoffmann, D. Rautenbach, L. Székely, and L. Volkmann. The Wiener index of a graph is the sum of all pairwise distances of vertices of the graph.
In this paper we characterize the trees which minimize the Wiener index among all trees of given order and maximum degree, and the trees which maximize the Wiener index among all trees ...

2001:17 E. Matouskova. Let f be a bi-Lipschitz mapping of the Euclidean ball $B_{R^n}$ into $\ell_2$ with both Lipschitz constants close to one. We investigate the shape of $f(B_{R^n})$. We give examples of such a mapping f, which has the Lipschitz constants arbitrarily close ...

2001:16 R. Anstee, R. Ferguson, and J. Griggs. Consider the permutation $\pi=(\pi_1,\ldots,\pi_n)$ of $1,2,\ldots,n$ as being placed on a circle with indices taken modulo n. For given $k<n$ there are n sums of k consecutive entries, and their average is $\frac{k(n+1)}{2}$. We say ...

2001:15 S. Brenner. Convergence of nonconforming V-cycle and F-cycle multigrid algorithms for second order elliptic boundary value problems (file not available) (Mathematics of Computation 73 (2004), 1041-1066, electronically posted August 19, 2003, PII: S0025-5718(03)01578-3) The convergence of V-cycle and F-cycle multigrid algorithms with a sufficiently large number of smoothing steps is established for nonconforming finite element methods for second order elliptic boundary value problems.

2001:14 S. Dilworth, D. Kutzarova, and V. Temlyakov. We consider some theoretical greedy algorithms for approximation in Banach spaces with respect to a general dictionary. We prove convergence of the algorithms for Banach spaces which satisfy certain smoothness assumptions. We compare the algorithms and their rates of convergence when the Banach space is $L_p(T^d)$ ...

2001:13 B. Karaivanov and P. Petrushev. We study nonlinear n-term approximation in $L_p(R^2)$ ($0<p<\infty$) from Courant elements or (discontinuous) piecewise polynomials generated by multilevel nested triangulations of $R^2$ which allow arbitrarily sharp angles. To characterize the rate of approximation we introduce and develop three families of smoothness spaces ...

2001:12 P. Petrushev. We study nonlinear approximation in $L_p(R^d)$ ($0<p<\infty$) from (a) n-term rational functions, and (b) piecewise polynomials generated by different anisotropic dyadic partitions of $R^d$. To characterize the rates of each such piecewise polynomial approximation we introduce a family of smoothness spaces ...

2001:11 M. Al-Lawatia and H. Wang. We develop an Eulerian-Lagrangian substructuring domain decomposition method for the solution of unsteady-state advection-diffusion transport equations. This method reduces to an Eulerian-Lagrangian scheme within each subdomain and to a type of Dirichlet-Neumann algorithm at subdomain interfaces. The method generates accurate and stable solutions that are free of artifacts even if ...

2001:10 M. Steel and L. Székely. In this paper we study inverting random functions under the maximum likelihood estimation (MLE) criterion. In particular, we consider how many independent evaluations of the random function at a particular element of the domain are needed for reliable reconstruction of that element. We provide explicit upper and lower bounds for ...

2001:09 V. Temlyakov. Our main interest in this paper is nonlinear approximation.
The basic idea behind nonlinear approximation is that the elements used in the approximation do not come from a fixed linear space but are allowed to depend on the function being approximated. While the scope of this paper is mostly theoretical ...

2001:08 O. Trifonov. An approach of Swinnerton-Dyer is extended to obtain new upper bounds for the number of lattice points close to a smooth curve. One consequence of these bounds is a new asymptotic result for the distribution of squarefull numbers in short intervals.

2001:07 V. Temlyakov. We discuss one lower estimate for the rate of convergence of the Pure Greedy Algorithm with regard to a general dictionary and another lower estimate for the rate of convergence of the Weak Greedy Algorithm with a special weakness sequence $\tau =\{t\}$, $0<t<1$, with regard to a general dictionary. The ...

2001:06 M. Filaseta and D. Meade. For $w(x)\in{C}[x]$ with $w(x)\not\equiv0$, define the reciprocal of $w(x)$ to be the polynomial $$\tilde{w}(x)=x^{\deg{w}}w\left(\frac{1}{x}\right).$$ We refer to $w(x)$ as being reciprocal if $w(x)=\pm\tilde{w}(x)$ and as being non-reciprocal ...

2001:05 M. Skopina. Wavelet-type systems providing a uniformly convergent expansion for any continuous function on the sphere are found. This construction is transferred to the disk due to some special connections between polynomial bases on the sphere and on the disk.

2001:04 S. Dilworth, R. Howard, and J. Roberts. Let $\Delta_m=\{(t_0,\ldots,t_m)\in{R^{m+1}}:t_i\geq0,\sum^m_{i=0}t_i=1\}$ be the standard m-dimensional simplex. Let $\emptyset\neq{S}\subset\bigcup^\infty_{m=1}\Delta_m$; then a function ...

2001:03 S. Brenner. A new look at FETI (file not available) (Proceedings of the Thirteenth International Conference on Domain Decomposition Methods, N. Debit, M. Garbey, R. Hoppe, J. Périaux, D. Keyes and Y. Kuznetsov, ed., DDM.org, 2001, pp. 41-51) A coordinate-free formulation of the Finite Element Tearing and Interconnecting (FETI) method and a new 3D FETI preconditioner are presented in ...

2001:02 M. Nielsen and D. Zhou. We study the mean size of wavelet packets in $L_p$. An exact formula for the mean size is given in terms of the p-norm joint spectral radius. This will be a corollary of an asymptotic formula for the $L_p$-norms on the subdivision trees. Then the stability ...

2001:01 S. Brenner and Q. He. Lower bounds for three-dimensional nonoverlapping domain decomposition algorithms (file not available) (Numerische Mathematik 93 (2003), 445-470) Lower bounds for the condition numbers of the preconditioned systems are obtained for the wire basket preconditioner and the Neumann-Neumann preconditioner in three dimensions. They show that the known upper bounds are sharp.
I gave a talk (some months ago now) on the history of $\pi$ (which is well discussed in my unreliable history of maths, Cracking Mathematics, available wherever good books are sold.) At one point, I put up a slide generally excoriating degrees as a measurement of angle, and stating that for small $x$, you have $\sin(x^\circ)\approx \frac{\pi}{180}x$. And one member of the audience was Not Having It. He was certain that the $x$ on the right also needed a $^\circ$ after it. Let's start with the idea of units. If I write something like 15m, the m means "this number represents a distance, in the proportion such that 40,000,000 of them make a circumference of the earth" (or whatever the precise definition is these days). My understanding of the $^\circ$ symbol is that - even though angles are dimensionless - it functions as a unit: it states "this number represents an angle, in the proportion such that $360^\circ$ is a complete revolution." The sine function is a mapping from angles to real numbers (in particular, real numbers between -1 and 1). It's usually convenient to deal with angles in radians, which are angles in the proportion such that $2\pi$ represents a complete revolution - and it's so convenient to deal with it this way that we don't normally mention the unit - it's implied. Angles (in radians) generally behave just like numbers, so there's no difficulty. Putting a $^\circ$ on the right-hand side of the equation would state "the right-hand side is an angle" - which it isn't - and that 360 of those angle-units make up a circle - which they don't. It would perhaps have been better to write it as $\frac{\pi}{180^\circ}x^\circ$ - but I stand by what I stated. And I can only apologise to the audience for what must have seemed like an interminable, technical conversation.
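A quick numerical check of the slide's claim (a minimal sketch in Python; the helper name is mine):

```python
import math

def sin_degrees(x_deg):
    # interpret the input as degrees, then apply the radian-based sine
    return math.sin(math.radians(x_deg))

# For small x, sin(x degrees) should be close to (pi/180) * x.
for x in [0.1, 1.0, 5.0]:
    print(x, sin_degrees(x), (math.pi / 180) * x)
```

For $x=1$ the two values agree to roughly four significant figures, and the agreement improves as $x$ shrinks, exactly as the small-angle claim predicts.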
Consider the language $$L_{\times 2} = \big\{ x\bot y\bot z \mid x,y,z\in \{0,1\}^*, \#_0(x)=\#_0(y) \text{ and } |x|+|y|=|z|\big \}$$(where $\#_0(x)$ denotes the number of zeros in $x$). It is easy to decide $L_{\times 2}$ using a HAL machine — observe that the machine needs to keep track of two properties: the number of zeros in $x$ vs $y$, and the length of $x,y$ (vs $z$). It can push a 0 into the heap for every zero it sees in $x$ (and then later pop a 0 for every zero seen in $y$); additionally it pushes a 1 for every bit in $x,y$ (and later pops a 1 for every bit of $z$). Since all the 1s are pushed down the heap, they don't interfere with the 0 count. The $\bot$ serves as a delimiter, and can be practically ignored. Now, let $L = L_{\times 2}^R$ be the reverse language. That is,$$ L = \big\{z\bot y \bot x \mid x,y,z\in \{0,1\}^*, \#_0(x)=\#_0(y) \text{ and } |x|+|y|=|z|\big \}$$We will show that no HAL machine can decide $L$. The intuition is the following. As above, the machine must keep track of both the length of $z$ and the number of zeros in $x,y$. However, in this case it needs to track them simultaneously, and this cannot be done via a heap. In more detail, after reading $z$, the heap contains information about the required length $|x|+|y|$. While reading $y$ the machine must also keep in the heap the number of zeros in $y$. However, this information cannot interfere with the information the heap already holds about the length we expect $x$ to be. Very intuitively, either the information about the number of zeros will be "below" the information about the length of $x$, and then we cannot access it while reading $x$; or it is "above" that information, rendering the latter inaccessible; or the two pieces of information will be "mixed" and become meaningless. More formally, we are going to use some kind of "pumping" argument. That is, we will take a very long input, and show that the "state" of the machine must repeat itself while processing that input, which will allow us to "replace" part of the input once the machine repeats its "state". For the formal proof, we require a simplification of the structure of the HAL machine, namely, that it doesn't contain a "loop" of $\varepsilon$-transitions$^1$. With this assumption we can see that for every input symbol the machine processes, the content of the heap can increase/decrease by at most $c$ (for some large enough constant $c$). Proof. Assume $H$ decides $L$, and consider a long enough input (say, of length $4n$, thus $|x|=|y|=n$, $|z|=2n$, ignoring the $\bot$s hereinafter). To be concrete, fix $z,y$ and assume that $\#_0(y) = n/2$. Observe that there are ${n \choose n/2}$ different $x$'s such that $z\bot y \bot x \in L$. Consider the heap's content immediately after processing $z\bot y$. It contains at most $3nc$ symbols (where each symbol is from a fixed alphabet $\Gamma$), by our assumption. However, there are ${n \choose n/2}$ different $x$'s that should be accepted (which is substantially larger than the number of possible different contents for the heap, as the former increases exponentially while the latter increases only polynomially; see below). Take two inputs $x_1,x_2$ that should be accepted, so that the following holds: the prefix of length $n/2$ of $x_1$ has a different number of zeros than the prefix of $x_2$ of the same length.
By the time the machine reads a prefix of length $n/2$ of the $x$ part, the heap looks the same for both $x_1$ and $x_2$, and also, the machine is in the same state (this must happen for some $x_1,x_2$, for large enough $n$, as there are more than $2^{0.8n}$ different options$^2$ for $x_1,x_2$, and at most $(3.5cn)^{|\Gamma|}|Q|$ different options for heap content and state$^3$). It is clear that the machine must then accept the word $z\bot y \bot x_1^px_2^s$, where $x_1^p$ is the prefix of $x_1$ of length $n/2$ and $x_2^s$ is the suffix of $x_2$ of the same length. Note that the number of zeros in $x_1^px_2^s$ differs from the number of zeros in $x_1$ and in $x_2$ (that is, from $\#_0(y)$), due to the way we chose $x_1$ and $x_2$; thus we have reached a contradiction. $^1$ Does this assumption damage generality? I don't think so, but this indeed requires a proof. If someone sees how to get around this extra assumption, I'd love to know. $^2$ Let's fix $x_1$ so that its prefix (of length $n/2$) has exactly $n/4$ zeros. Recall that using Stirling's approximation we know that $\log {n \choose k} \approx nH(k/n)$ where $H()$ is the binary entropy function. Since $H(1/4) \approx 0.81$ we have ${n \choose n/4} > 2^{0.8n}$ for large enough $n$. $^3$ Assuming alphabet $\Gamma$, there are $|\Gamma|^n$ different strings of length $n$, so if this were a stack we would be stuck. However, pushing "01" into a heap is equivalent to pushing "10" - the heap stores only the sorted version of the content. The number of different sorted strings of length $n$ is ${n+|\Gamma|-1 \choose |\Gamma|-1}\approx n^{|\Gamma|-1}$, for a constant $|\Gamma|$.
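To see the counting gap concretely, here is a small Python illustration (entirely my own; the constants for $c$, $|\Gamma|$ and $|Q|$ are made-up placeholders) comparing the exponentially many prefixes against the polynomially many (heap content, state) pairs:

```python
from math import comb

GAMMA, c, Q = 3, 2, 10  # assumed heap alphabet size, growth bound, state count

for n in [20, 40, 80, 160]:
    prefixes = comb(n, n // 4)  # ~ 2^{0.8 n} prefixes with n/4 zeros
    # sorted heap contents of length <= 3cn (multisets), times control states
    heaps = comb(3 * c * n + GAMMA - 1, GAMMA - 1) * Q
    print(n, prefixes, heaps, prefixes > heaps)
```

Once `prefixes > heaps` (which already happens at modest $n$), the pigeonhole step in the proof applies.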
In my course notes the support of a distribution (continuous linear functional) is defined as follows: Definitions First it defines something like open annihilation sets: An open annihilation set $\omega$ of a distribution $T$ is an open set such that $\langle T, \phi\rangle = 0$ whenever the compact support of $\phi$ is a subset of $\omega$. Then: The support of a distribution $T$ is the complement of the union of all open annihilation sets of $T$ (this union is itself open). There are some examples provided ($\mathcal{D}$ is the function space of $\mathscr{C}^\infty$ functions with compact support): Choose a $\phi \in \mathcal{D}$ such that $0\not \in [\phi]$. Then $\langle \delta , \phi \rangle = \phi(0) = 0$, which implies $[\delta]= \{0\}$. Let $Y$ be the Heaviside distribution. Choose $\phi\in \mathcal{D}$ such that $[\phi]\subseteq ]-\infty, 0[$; then $$\langle Y, \phi\rangle = \int_{-\infty}^{+\infty}Y(x)\phi(x)\operatorname d x = 0$$ which implies $[Y] = [0,+\infty[$. What does it all mean? I find it hard to understand what the support of a distribution really means. For example: What does it mean for a distribution to have compact support? If an ordinary function has compact support I can visualize this as some sort of bump function. But how should I look at the support of a distribution?
In looking at solutions for the currently occurring 3x3-magic-squares-of-squares problem I ran into the question of how I can parametrize $$ a^2 + 2b^2 = c^2 $$ where $a,b,c$ are parametrizable as in the problem of finding all Pythagorean triples, i.e. $a=f_1(n,m), b=f_2(n,m), c=f_3(n,m)$ with $n,m \in \Bbb N$ plus a few additional conditions on $n,m$ such as coprimality (I've just looked at the Wikipedia entry for Pythagorean triples). At the moment I don't even have an idea for an ansatz to that question, but because I'll later have the problem of getting three square terms on the lhs, I'd like to get help with an initial idea of how to approach that. (Surely I could try pattern-detection based on long lists of examples, but possibly this is essentially easy...) Just for a bit more background: the final problem for me is to find a solution in squares for a set of three- and four-term equations of quadratics - or to establish that no solution is possible. Basic condition: all unknowns must be different. The first two three-term equations can be solved by the asked-for parametrization of $-2e^2+i^2=a^2$ and $-2e^2+h^2=b^2$, where of course the $e$ are meant to be equal, and to have different solutions for $a$ and $b$ the unknown $e$ must have a structure of $2nm =2n'm'$ with $n \ne n'$, so it must contain at least 3 prime factors. Here is the set of equations (each row gives the coefficients of $e^2, h^2, i^2$ on the lhs): $$\small \begin{array}{rrr|ll} e^2&h^2&i^2&&\\ \hline -2&0&1&=a^2& \text{three-term equations}\\ -2&1&0&=b^2\\ \hline -1&1&1&=c^2& \text{four-term equations}\\ -2&1&2&=d^2\\ 4&-1&-2&=f^2\\ 3&-1&-1&=g^2\\ \end{array}$$
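For what it's worth, a Pythagorean-style ansatz does seem to carry over. One candidate (my suggestion; I have not worked out the exact coprimality/primitivity conditions on $m,n$) is $a=|m^2-2n^2|$, $b=2mn$, $c=m^2+2n^2$, since expanding gives $(m^2-2n^2)^2+2(2mn)^2=(m^2+2n^2)^2$. A quick check in Python:

```python
# Verify the candidate parametrization of a^2 + 2 b^2 = c^2.
# Whether it produces *all* solutions (and under which conditions
# on m, n) is left open here.
for m in range(1, 30):
    for n in range(1, 30):
        a, b, c = abs(m * m - 2 * n * n), 2 * m * n, m * m + 2 * n * n
        assert a * a + 2 * b * b == c * c, (m, n)
print("identity holds for all tested (m, n)")
```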
For the backward Euler discretization in time: $$ \left( \frac{u^{(k)}-u^{(k-1)}}{\Delta t}, v\right) + a(u^{(k)},v) = \ell(v) $$ where $a(\cdot,\cdot)$ is the bilinear form associated with the discretization in space. I'm considering linear elliptic problems in 2D. So I rearrange the above equation: $$ \left(\frac{1}{\Delta t}u^{(k)},v\right) + a(u^{(k)},v) = \ell(v) + \left(\frac{1}{\Delta t}u^{(k-1)},v\right) $$ since $u^{(k-1)}$ is a known quantity from the previous time step. For non-time-dependent problems, the convergence in space is typically of order $\mathcal{O}(h^{p+1})$, with $p$ being the basis degree. Now if I have a time-dependent problem as given above, how small does my time step have to be in order to test for convergence in space? If I take a large time step, the method is stable but yields an error that is large, on the order of $10^{3}$. Taking my time step extremely small yields proper convergence rates, but I don't think this tells me anything, since for small $\Delta t$ the time-dependent terms will dominate and I'm essentially solving $$ \left(\frac{1}{\Delta t}u^{(k)},v\right) = \left(\frac{1}{\Delta t}u^{(k-1)},v\right) $$ and if the time step is small, we can most likely expect $u^{(k)} \approx u^{(k-1)}$, so solving the system will yield the "correct solution". I'm just wondering if my implementation of backward Euler is incorrect. I know typically we solve the linear system: $$ (\mathbf{M} + \Delta t\mathbf{K})u^{(k)} = \mathbf{M}u^{(k-1)} + \Delta t\mathbf{f} $$ with $\mathbf{M}$ and $\mathbf{K}$ being the mass and stiffness matrices and $\mathbf{f}$ being the source term. But is the first formulation I gave above incorrect? I originally implemented it the first way, then tried the second for comparison, but they yield the same result in both cases, so I'm guessing that's not the issue.
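For reference, a minimal sketch of the time-stepping loop being described (my own illustration; `M`, `K`, `f` are assumed to be assembled elsewhere, and a dense solve stands in for whatever sparse solver is actually used):

```python
import numpy as np

def backward_euler(M, K, f, u0, dt, n_steps):
    """March (M + dt*K) u^k = M u^{k-1} + dt*f forward in time."""
    A = M + dt * K          # constant system matrix; factor once in practice
    u = u0.copy()
    for _ in range(n_steps):
        u = np.linalg.solve(A, M @ u + dt * f)
    return u
```

Note that the two formulations in the question are algebraically the same system (multiply the rearranged weak form by $\Delta t$), which is consistent with the observation that they yield identical results.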
Limits are defined for general topological spaces, even ones that don't have a metric. I'm not sure how to define a derivative without a norm though.

EnumaElish said: IMHO another way to put the question is "can you define limits in a metric space without a norm"? If you can define limits, then you can define derivative and integral, am I not right?

Suppose you have a function f: R^n --> R^m. It seems as though you need a way to associate a number with each displacement vector h in R^n, i.e. a norm. The usual way [tex]f'(a) = \lim_{h \rightarrow 0} \frac{f(a+h) - f(a)}{h}[/tex] doesn't make sense because you can't divide a vector in R^m by a vector in R^n, or any other vector for that matter. I just looked at Spivak's "Calculus on Manifolds", and he uses a normed version [tex]\lim_{|h| \rightarrow 0} \frac{|f(a+h) - f(a) - \lambda(h)|}{|h|} = 0[/tex] to define the derivative of f at a as the unique linear map λ : R^n --> R^m that satisfies the above equation. So I guess a better way to ask my question is: Is there a way to define the derivative of a function from R^n to R^m that doesn't require the use of a norm?
Recall the following Fact 1. The equation $$ a x + b y = c $$ has no solution with $x,y \in \Bbb{Z}$ if $(a,b) \nmid c$, while it admits infinitely many if $d = (a,b) \mid c$. Further, in this case the general solution (i.e. every solution) is of the form $$ x = \frac{b}{d} k + x_0 \quad y = -\frac{a}{d} k + y_0 $$ for any integer $k$, where $(x_0,y_0)$ is a particular solution (e.g. any one given or known solution). Note that you can rewrite your equation as$$2 x_1 + 12 x_2 + 3 x_3 = 2 x_1 + 3 ( 4x_2 + x_3) = 7 \label{eq:orig} \tag{1}$$In particular, every solution $(x_1,x_2,x_3)$ of \eqref{eq:orig} gives you a solution $(x_1, y = 4x_2 + x_3)$ of$$2 x_1 + 3 y = 7 \label{eq:partial_1} \tag{2}$$thus we will first solve this other equation and then for each of these solutions we will look for the corresponding solutions of \eqref{eq:orig}, if there are any. Observe that $(2,3) = 1$, thus \eqref{eq:partial_1} has infinitely many solutions. Furthermore, $x_1 = 2, y = 1$ is a particular solution of \eqref{eq:partial_1}, hence from Fact 1 you know that every solution of \eqref{eq:partial_1} is of the form$$x_1 = 3s + 2 \quad y = -2s + 1 \qquad \forall s \in \Bbb{Z}$$Now, for any fixed $s$ we go on and solve the equation$$4 x_2 + x_3 = -2s + 1 \label{eq:partial_2} \tag{3}$$Again, observe that $(4,1) = 1$ and that $x_2 = -s$, $x_3 = 2s + 1$ is a particular solution of \eqref{eq:partial_2}. Hence it follows from Fact 1 that the general solution is of the form$$x_2 = t - s \quad x_3 = -4t + 2s + 1 \qquad \forall t \in \Bbb{Z}$$Putting it together, we conclude that the general solution of \eqref{eq:orig} is of the form$$x_1 = 3s + 2 \quad x_2 = t - s \quad x_3 = -4t + 2s + 1 \qquad \forall s,t \in \Bbb{Z}$$
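As a quick sanity check of the final formula (my own addition, not part of the original argument):

```python
# Check that x1 = 3s + 2, x2 = t - s, x3 = -4t + 2s + 1
# satisfies 2*x1 + 12*x2 + 3*x3 = 7 on a grid of integers s, t.
for s in range(-10, 11):
    for t in range(-10, 11):
        x1, x2, x3 = 3 * s + 2, t - s, -4 * t + 2 * s + 1
        assert 2 * x1 + 12 * x2 + 3 * x3 == 7
print("general solution verified on the test grid")
```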
I encountered the following concern when teaching indefinite integrals. I believe that many of us may overlook this. May I be wrong? Let's consider the following example. Find the indefinite integral $$ I=\int\dfrac{dx}{x\sqrt{x^{2}-1}}. $$ Some of my students gave the following answer. Let $t=1/x$; then $dx=-1/t^{2}dt$, so we get $$ I=\int\dfrac{-1/t^{2}dt}{\frac{1}{t}\sqrt{\frac{1}{t^{2}}-1}}=\int\dfrac{-dt}{\sqrt{1-t^{2}}}=-\arcsin\left(t\right)+C=-\arcsin\left(\frac{1}{x}\right)+C. $$ Sometimes, I accept this answer since it gives a quick general antiderivative. However, the problem here is that we should write $$ \int\dfrac{-1/t^{2}dt}{\frac{1}{t}\sqrt{\frac{1}{t^{2}}-1}}=\int\dfrac{-\left|t\right|dt}{t\sqrt{1-t^{2}}}. $$ Then we end up with the answer $$ \int\dfrac{dx}{x\sqrt{x^{2}-1}}=\begin{cases} -\arcsin\left(\dfrac{1}{x}\right)+C & \text{for }x>1,\\ \arcsin\left(\dfrac{1}{x}\right)+C & \text{for }x<-1. \end{cases} $$ In your teaching practice, how would you usually proceed? PS. We may encounter the same issue in many other problems. For example, find $\int\sqrt{1-x^{2}}dx$. If we let $x=\sin\left(t\right)$ then $\sqrt{1-\sin^{2}\left(t\right)}$ should be $\left|\cos\left(t\right)\right|$. So now we need to explain a bit here to our naive students. Of course, avoiding these kinds of problems is the quickest way to make our teaching job easier. However, we need to prepare a good way of explaining or handling these types of problems. That's what I want to know.
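One low-effort way to make the sign issue vivid for students is a numeric check (a quick sketch of mine): differentiate $-\arcsin(1/x)$ numerically and compare with the integrand on both branches.

```python
import math

integrand = lambda x: 1.0 / (x * math.sqrt(x * x - 1))
F = lambda x: -math.asin(1.0 / x)      # the candidate antiderivative

def dF(x, h=1e-6):                     # central-difference derivative of F
    return (F(x + h) - F(x - h)) / (2 * h)

print(dF(2.0), integrand(2.0))    # the two values match for x > 1
print(dF(-2.0), integrand(-2.0))  # opposite signs for x < -1
```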
Problem: This is problem 7 from the first chapter of Modern Quantum Mechanics by Sakurai (page 59). Consider a ket space spanned by the eigenkets $\{ \mid a'\rangle \}$ of some Hermitian operator $A$. There is no degeneracy. (a) Prove that $\prod\limits_{a'} (A - a')$ is the null operator. (b) Explain the significance of $\prod\limits_{a'' \neq a'} \frac {(A - a'')}{(a' - a'')}$. (c) Illustrate (a) and (b) using $A$ set equal to $S_z$ of a spin $\frac{1}{2}$ system. My Work Construction of null operator and identity operator from eigenbasis of Hermitian operator: (a) To show that it's a null operator it's sufficient to take some arbitrary $\mid \gamma \rangle$ belonging to the linear span of the eigenbasis $\{ \mid a' \rangle\}$, and show that $$\left(\prod\limits_{a'}(A - a')\right) \mid \gamma \rangle = \left| 0 \right\rangle$$ (equation 1) So, we have $$A \mid a' \rangle = a' \mid a' \rangle$$ (equation 2) and $$\mid \gamma \rangle = \sum_{a'} \langle a'\mid \gamma \rangle \mid a'\rangle$$ (equation 3) Using (3) in the LHS of (1) therefore yields $$\sum_{a'} \langle a' \mid \gamma \rangle \prod\limits_{a''}(A-a'') \mid a' \rangle = \sum_{a'} \langle a' \mid \gamma \rangle \prod\limits_{a''}(a' - a'') \mid a' \rangle = \left| 0 \right\rangle $$ (equation 4) as when $a'' = a'$ inside the continued product, we get a $0$. (b) From the previous calculation it's clear that: $$\prod\limits_{a'' \neq a'} \frac {A - a''}{a' - a''} \mid \gamma \rangle = \sum_{a'} \langle a' \mid \gamma \rangle \cdot 1 \cdot\mid a' \rangle = \mathbb{I} \mid \gamma \rangle$$ $$\Longrightarrow \prod\limits_{a'' \neq a'} \frac {A - a''}{a' -a''} = \mathbb{I}$$ (equation 5) (c) Illustration for $A = S_z$ of a spin $\frac {1}{2}$ system: Let $\mid + \frac {1}{2} \rangle$, $\mid - \frac {1}{2} \rangle$ be the eigenvectors of the $S_z$ operator. The $S_z$ operator can be decomposed as: $$S_z = \mathbb{I}\cdot S_z\cdot \mathbb{I} = \left(\mid + \frac {1}{2} \rangle \langle + \frac {1}{2} \mid + \mid - \frac {1}{2} \rangle \langle - \frac {1}{2} \mid \right) S_z \left( \mid + \frac {1}{2} \rangle \langle + \frac {1}{2} \mid + \mid - \frac {1}{2}\rangle \langle - \frac {1}{2} \mid \right)$$ $$= \langle + \frac {1}{2} \mid S_z \mid + \frac {1}{2} \rangle \cdot\mid + \frac {1}{2}\rangle \langle + \frac {1}{2} \mid + \langle - \frac {1}{2} \mid S_z \mid -\frac {1}{2} \rangle \cdot\mid -\frac {1}{2} \rangle \langle - \frac {1}{2} \mid$$ $$= \frac {\hbar}{2} \left( \mid + \frac {1}{2} \rangle \langle + \frac {1}{2} \mid - \mid - \frac {1}{2} \rangle \langle - \frac {1}{2} \mid \right)$$ (equation 6) The null operator is $\hat{O}= (S_z - \frac {\hbar}{2})\cdot(S_z + \frac {\hbar}{2}) = S_z^2 - \frac {\hbar ^2}{4} \mathbb{I}$ and the identity operators are $\mathbb{I} = \frac {S_z}{\hbar} + \frac {\mathbb{I}}{2}$, $- \frac {S_z}{\hbar} + \frac {\mathbb{I}}{2}$ Where I'm having trouble The problem is with the very last result, on the last line of the page. I am getting $$S_z=\frac{\hbar}{2}\mathbb{I}$$ and also $$S_z=-\frac{\hbar}{2}\mathbb{I}$$ which is clearly not correct. My guiding equation has been equation 5 (which seems to be correct). I have put $A=S_z$ of a spin 1/2 system into equation 5. I cannot locate the flaw in the steps. So, where's the mistake? Please share your views. Note: the I should be sort of double-lined and hollow ($\mathbb{I}$), like in the last sentence of "my work", but the command I found didn't work. Also, the arrow should be the same style. The command I found for that didn't work either.
I replaced them with a normal I and a normal arrow. Here is the link to the work page image, if anyone wants it.
Hermitian operators (or more correctly in the infinite dimensional case, self-adjoint operators) are used not because measurements must use real numbers, but rather because we almost always decide to use real numbers. As the OP mentions at one point, you might choose to use complex numbers to label a two-dimensional screen, and in that case you'll be able to use a so-called normal operator to represent the 2-dimensional observable. (Contrary to what Dirac thought, nothing whatsoever goes wrong here.) It should not be too hard then to accept my next claim: You can use whatever measurement scale you want to measure a quantum observable! You can label pointer positions with items of fruit if you want to, and you can still build a perfectly legitimate observable. There is no question that the reals and complexes have enormous advantages over other more arbitrary measurement scales (due to their rich internal structure which we capitalize on in the functional analysis), but the idea that real numbers are somehow endowed with a prestigious metaphysical status is baloney. How to define an observable with any measurement scale you want to Step 1. Set up a bunch of particle detectors Step 2. Attach a label to each detector The set of labels we'll denote by $\Omega$. Examples include: $$\Omega = \{0,1\},\mathbb{R},\mathbb{C},\{\heartsuit,\clubsuit,\diamondsuit,\spadesuit\}$$ Step 3. Write out the list of all possible events By event, I mean a subset $\Delta$ of $\Omega$ that represents a possible question like "Did a detector in $\Delta$ fire?". We'll label the event structure $\Sigma$, e.g.$$\Sigma = \{\emptyset,\heartsuit,\clubsuit,\diamondsuit,\spadesuit,\heartsuit\clubsuit,\heartsuit\diamondsuit, \cdots,\heartsuit\clubsuit\diamondsuit\spadesuit\}$$ Step 4. Associate each event in $\Sigma$ with a projection operator This is the hard bit, and there's no recipe for it. But you have to make sure that the family of projectors forms a Boolean algebra that perfectly mirrors the natural algebra of $\Sigma$. We'll call the association $\sigma$, so that $\sigma:\Sigma\to\mathscr{P}(\mathscr{H})$. And that's basically it! The object $\sigma$ (technically, a Projection Valued Measure on $\langle\Omega,\Sigma\rangle$) is a quantum observable. It contains all the probabilistic information you need to calculate the probability measure on your chosen measurement scale for any quantum state. For example, suppose the state of the system is $\rho$ and you want the probability that a detector $\Delta\in\Sigma$ fires. The desired probability is just $p=tr[\rho\sigma(\Delta)]$. What the heck does this have to do with Self-Adjoint operators? Are you ready for the climax? Here it is... IF you choose to use the measurement scale $\langle\mathbb{R},\mathscr{B}(\mathbb{R})\rangle$, THEN you will be able to build a self-adjoint operator which is precisely equivalent (in terms of the information it stores) to the PVM you constructed. IF you choose a detection screen calibrated by $\langle\mathbb{C},\mathscr{B}(\mathbb{C})\rangle$, THEN repeat the above sentence replacing 'self-adjoint' with 'normal'. (Neither of the above statements is obvious, by the way. They are famous results in Functional Analysis known as the Spectral Theorems.) IF you choose to be a Fancy Nancy and use $\{\heartsuit,\clubsuit,\diamondsuit,\spadesuit\}$ for a measurement scale (with its power set for the event structure) THEN the fruits of your labors are more modest. 
In particular, you still get the answers to any questions you care to ask, but you don't get any neat operator to give you computational shortcuts. Instead you will forever be doing calculations like $p=tr[\rho\sigma(\heartsuit\clubsuit)]$. I haven't even touched eigenvectors yet, but suffice it to say that they also do not have a fundamental status in the theory. There is no doubt that we can learn something by reading the works of the great masters, but taking that work as the state of play can send you back a century. We've learned a lot since Einstein and Dirac.
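To make the recipe concrete, here is a toy numerical sketch (entirely my own illustration, with whimsical labels in the spirit of the answer) of a two-outcome PVM on a spin-1/2 system and the probability rule $p=\mathrm{tr}[\rho\,\sigma(\Delta)]$:

```python
import numpy as np

# A PVM on a 2-dimensional Hilbert space, labelled by card suits.
sigma = {
    "heart": np.array([[1, 0], [0, 0]], dtype=complex),  # projector onto |0>
    "spade": np.array([[0, 0], [0, 1]], dtype=complex),  # projector onto |1>
}

def projector(event):
    # the projector of an event (a set of labels) is the sum of its projectors
    return sum(sigma[label] for label in event)

theta = 0.3                                   # a pure state cos(t)|0> + sin(t)|1>
psi = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
rho = np.outer(psi, psi.conj())

for event in [{"heart"}, {"spade"}, {"heart", "spade"}]:
    p = np.trace(rho @ projector(event)).real
    print(sorted(event), round(p, 4))         # probabilities sum to 1
```

No self-adjoint operator appears anywhere, yet every question the measurement scale can ask gets an answer, which is exactly the point being made above.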
The difference between Mean Square Error (MSE) and Mean Square Predicted Error (MSPE) is not the mathematical expression, as @David Robinson writes here. MSE measures the quality of an estimator, while MSPE measures the quality of a predictor. But what is curious to me is that the mathematical expressions for the relationship between bias and variance for MSE and MSPE are different: The MSPE can be decomposed into two terms (just like mean squared error is decomposed into bias and variance); however for MSPE one term is the sum of squared biases of the fitted values and another the sum of variances of the fitted values. We have: $MSE(\hat{\theta})=E\left[\left(\hat{\theta}-E(\hat{\theta})\right)^2\right]+\left(E(\hat{\theta})-\theta\right)^2=Var(\hat{\theta}) + Bias(\hat{\theta},\theta)^2$ From Wikipedia we read that for the MSPE we have the following relation: \begin{equation} MSPE(L)=E\left[\sum_i(\hat{g}(x_i)-g(x_i))^2\right]=\sum_i (E[\hat{g}(x_i)]-g(x_i))^2 + \sum_i Var(\hat{g}(x_i)) =\sum_i Bias(\hat{g}(x_i),g(x_i))^2 + \sum_i Var(\hat{g}(x_i)) \end{equation} I'm looking for an intuitive explanation of the expressions for the bias and variance of the MSPE. Is it correct to think of this as each observation/fitted value having its own variance and bias? If so, it seems to me that increasing the number of observations should increase the MSPE (more bias and variance terms in the sums). Should there maybe be a $\frac{1}{n}$ in front of the sums of the biases and variances?
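One way to build intuition (a simulation sketch of my own, not from the cited sources): give each fitted value $\hat g(x_i)$ its own bias and variance by repeatedly refitting a misspecified model, and check that the pointwise decomposition, and hence its sum over $i$, holds.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 20, 5000
x = np.linspace(0, 1, n)
g = np.sin(2 * np.pi * x)                  # true regression function

fits = np.empty((reps, n))
for r in range(reps):                      # refit a straight line many times
    y = g + rng.normal(0, 0.3, n)
    fits[r] = np.polyval(np.polyfit(x, y, 1), x)

bias2 = (fits.mean(axis=0) - g) ** 2       # per-point squared bias
var = fits.var(axis=0)                     # per-point variance
mspe = ((fits - g) ** 2).mean(axis=0)      # per-point MSPE

print(np.allclose(mspe, bias2 + var))      # decomposition holds at each x_i
print(mspe.sum(), bias2.sum() + var.sum()) # ... and after summing over i
```

Adding more points $x_i$ does add more terms to the sums, which is presumably why one often sees the averaged ($\frac{1}{n}$) version reported instead of the raw sum.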
Given a context-free grammar $G$, let $\longrightarrow_G$ be the (one-step) rightmost derivation relation, and $\longrightarrow^*_G$ its reflexive and transitive closure. Let $S$ be the start symbol of $G$, $\mathbf{N}$ the set of non-terminal symbols and $\mathbf{T}$ the set of terminal symbols. For each rewriting rule $A\rightarrow \alpha$ ($A\in\mathbf{N}, \alpha\in (\mathbf{N}\cup\mathbf{T})^*$), define the sets $\operatorname{Left}_G(A)$ and $L_{G,A\rightarrow\alpha}$ by: $$\operatorname{Left}_G(A)=\{\beta\mid \exists w\in \mathbf{T}^*: S\longrightarrow_G^* \beta A w\}\qquad L_{G,A\rightarrow\alpha} = \{\beta\alpha\mid\beta\in \operatorname{Left}_G(A)\}$$ I want to show that for every context-free grammar $G$ and every rule $A\rightarrow \alpha$ of $G$, $L_{G,A\rightarrow\alpha}$ is a regular language over the alphabet $\mathbf{N}\cup\mathbf{T}$. I can use the fact that a language generated by a left-linear context-free grammar is regular. Here a context-free grammar $G$ is said to be left-linear if $\alpha\in \mathbf{T}^*\cup \mathbf{N}\mathbf{T}^*$ holds for every rewriting rule $A\rightarrow \alpha$ of $G$. I've tried from both sides. My first thought was that if I can obtain a grammar for $\operatorname{Left}_G(A)$ from $G$, then maybe I can derive some properties of the $\beta$'s that bring it close to a left-linear grammar, but I just can't, even after thinking for a whole afternoon. Then I thought maybe I can start from the goal language, the words $\beta\alpha$, and cut the rightmost derivations into pieces that fit a left-linear grammar: we have $S\longrightarrow_G^* \beta A\omega$ with $\omega \in \mathbf{T}^*$, and then applying $A\rightarrow\alpha$ gives $\beta\alpha\omega$, or something like that; but since $\alpha \in (\mathbf{N}\cup\mathbf{T})^*$, I don't even know whether I can cut the language that way. I'd appreciate a hint: where might I begin, or is there a property of the relation that I've missed or don't know?
Propulsion of Flexible Polymer Structures in a Rotating Magnetic Field Written by Kevin Tian, AP 225, Fall 2011 --Ktian 17:05, 9 November 2011 (UTC) Title: Propulsion of Flexible Polymer Structures in a Rotating Magnetic Field Authors: Piotr Garstecki, Pietro Tierno, Douglas B Weibel, Francesc Sagués and George M Whitesides Journal: Journal of Physics: Condensed Matter 21 204110 Paper Summary The essential concept behind this paper is the proposal of a new method of propulsion for abiological structures in fluids at low Reynolds numbers. The basic idea is based on the elastic propellers that micro-organisms use as rotational mechanisms to propel themselves through fluids. Taking inspiration from these biological mechanisms, the authors designed flexible, planar polymer structures with a permanent magnetic moment. When placed in the presence of an external, uniform and rotating magnetic field, the structures deform into structures with helical symmetry (essentially meaning they are chiral), causing linear translation through fluids at Reynolds numbers between <math>10^{-1}</math> and 10. An example of the concept can be seen in Figure 1, where we observe that polydimethylsiloxane (PDMS) elastic swimmers, doped with ferromagnetic powder and cured such that there is a permanent magnetic moment, begin to move on their own (so to speak) when placed in a rotating magnetic field (accomplished by a magnetic stir plate). Though a cute design and result, there are a number of potential applications for such methods of propulsion in designing for motion of nano-, micro- and meso-scale devices/objects, as well as in the study of other self-propelled systems. Experimental Details The 'Swimmers' and the System It is mentioned by the authors that a general difficulty in exploiting chiral structures for generating motion has been the fabrication of such objects, especially at smaller length scales (micron and below). However, the unique aspect of this new design is that fabrication involves making a planar object, which is significantly easier to do. It is claimed that the technique and the underlying physics are both scalable. [Note: However this claim is never quite substantiated. I believe there are a multitude of considerations and problems that may arise with scaling, so it's definitely a non-trivial if not misleading claim. It would be interesting to see if there are problems that may arise in using these designs at smaller/larger length scales] For simplicity, the fabricated structures are referred to as 'swimmers' and are approximately 2mm long. The environment the swimmers are placed in can be observed in Figure 2a and Figure 2b. The swimmers are placed in a petri dish of fluid deep enough to accommodate their thickness but not their length (so they're approximately parallel to the dish bottom). Although the swimmers *do* work in the situation illustrated in Figure 2a, which can act as a good qualitative verification of results, for the purposes of characterization the simple magnetic stirrer was less than ideal. This is due to the non-uniformity of the magnetic field it generates, which resulted in additional forces (beyond the hydrodynamic forces) on the swimmer.
Thus, in order to have more control over the magnetic field during characterization (the entire quantitative portion of the experiment), a setup closer to Figure 2b was used, in which 3 electromagnetic coils created a rotating magnetic field that was uniform over the area where the properties/performance of the swimmers was tested. Fabrication The authors (generously) provided a fairly detailed list of materials that were used in the experiment. This is in section 2.1 and will not be reproduced, for brevity. The general fabrication process is illustrated in Figure 3. The fabrication of the swimmers boils down to a single-step soft lithography process with ferromagnetic particles mixed in, in order to magnetize the final structure. The fabrication is essentially as follows: Master and Molds A transparent mask is made with the structure design etched into the mask. The structure design is then transferred onto SU8 photoresist (spun onto a silicon wafer) via photolithography. This involves developing the wafer in propyleneglycol methylethylacrylate (PGMEA) to obtain a "master" with the design embossed in the profile of the SU8. The master is silanized by vapor phase deposition of (tridecafluoro-1,1,2,2-tetrahydrooctyl)-1-trichlorosilane for 3h @ <math>25^\circ C</math>. PDMS pre-polymer mix (10:1 ratio of base to curing agent) is applied onto the wafer and set to cure for 4h @ <math>65^\circ C</math>. This creates molds that can be cut out with a scalpel, peeled away from the master surface and trimmed (Figure 3a). The molds are then treated with oxygen plasma and silanized in the same fashion as the master. Swimmers The molds are now used to fabricate the swimmers. The mold indentations are filled with PDMS. This PDMS mix is admixed with ferrite powder (25% w/w). Excess prepolymer mix is scraped off (Figure 3b). The PDMS is left to cure for 3h @ <math>60^\circ C</math> with an external magnet positioned such that the field is perpendicular to the mold surface (Figure 3c). After curing the structure is carefully released, yielding a single swimmer! General Properties Magnetic dipole oriented perpendicular to the plane of the body. General schematic of the swimmer presented in Figures 3d and 3e. Typical dimensions varied between: 2-10mm length, 1-3mm width, 100-200 <math>\mu m</math> thickness. Petri Dish Experiment Details As in Figure 2b, an external magnetic field was generated with three coils. Each coil had inner diameter 4cm, outer diameter 7cm, and ~1100 turns of 4mm thick Cu wire. Magnetic field rotation was in the x-z plane (whilst the x-y plane was the plane of swimmer motion). To produce a uniform magnetic field in the x-y plane, two of the coils were arranged in a Helmholtz configuration (both aligned on a common axis with separation equal to their radius). A waveform generator (TTi TGA1244) created the rotating magnetic field. The waveform generator was connected to a current amplifier (IMG STA-800). The magnitude of the current through the perpendicular coil systems was adjusted such that the magnetic field rotated uniformly in the plane with a constant field amplitude of <math>H=10 \times 10^3 ~or ~11 \times 10^3 A~m^{-1}</math> (for the range of angular frequencies tested, <math>\Omega<200 s^{-1}</math>). A teslameter (51662DE-Leybold, Germany) was used to measure the magnetic field intensity and uniformity.
The Petri dish was 3.7cm in diameter, filled with fluid mixtures (ethylene glycol, EG (1,2-ethanediol), or a mixture of EG and glycerol) which had a range of viscosities from <math>16 \times 10^{-3}~-~0.3~Pa~s</math>. The dish was positioned directly above the z-coil. Tweezers were used to position swimmers directly in the region of uniform external magnetic field. Results Condition for Propulsion We know that at low Reynolds number, as per the expression for Stokes drag, the force on the fluid has a linear relation to the resulting flow field. Thus it follows that in order for a rotating object to yield translational motion, the object must form a chiral structure about its axis of rotation. This can be achieved either by rotating a static, non-planar shape that already satisfies this chiral criterion, or by rotating an elastic shape that spontaneously deforms into a chiral shape (Figures 4a and 4b respectively). The exact mechanism of propulsion of helical structures is described well by G.I. Taylor's educational movies in fluid dynamics. In essence the helix can be divided into segments that are approximately cylindrical. These segments, when considered flowing through a fluid independently, do not experience isotropic viscous drag. After some reasoning (which can easily be seen with a free-body diagram) one notes that the net motion of a cylinder is not parallel to the applied force. If we then consider all these segments as parts of the helical body, we see that although the body segments have force components in directions perpendicular to the rotational axis, the velocity contains components parallel to it. Since the structure lacks inversion symmetry, due to its chirality, these parallel components do not sum to zero. If one observes the motion of these swimmers in a viscous fluid, one sees a net linear displacement (Figure 4c). Synchronous rotation vs 'Tumbling' It has been noted that in general a swimmer can rotate synchronously with the magnetic field, with the arms deforming due to viscous forces, and translate normally through the fluid. However, from several mechanical considerations of the system, one notes that there is a critical rotational frequency (for a given fluid viscosity, magnetic field strength, and swimmer geometry) above which the swimmer cannot synchronously follow the rotation of the field and the motion devolves into a back-and-forth motion. Thus above this critical frequency there is a decrease in swimmer efficiency. An example of when this "rocking motion" occurs can be seen in Figure 4d, which is at a significantly higher frequency than in 4c; note the lack of as large a net translation as at the lower rotational frequency. This mode of motion is referred to as "tumbling", since it does not contribute to efficient translation. Further characterizations of speed with respect to frequency were performed for various viscosities. Two distinct dynamic regimes were observed: i) the velocity of the swimmers was linearly related to rotational frequency (low freq); ii) the velocity decreased with increasing rotational frequency (high freq). It was noted that the critical rotational frequency increased with magnetic field intensity and decreased with increasing viscosity. This agrees well with the prediction <math>\Omega_C \propto {H \over \eta} </math>, the background of which is not covered in this paper.
Speed According to the linearity of the Stokes equations, a rotating collection of oblique bodies will achieve a net speed dependent only on geometry and rotational frequency, but not on fluid viscosity. In a similar fashion, swimmer speed should be proportional only to the linear speed of the rotating arms. However, this argument assumes that all points of the swimmer contribute a net speed, and that there is a viscous torque influenced by swimmer geometry. Both introduce a viscosity dependency. The authors avoid the second complication (by saying it is difficult to analyze) and focus entirely on the first, by assuming deformation is described by Hooke's law and is approximately proportional to the viscous torque at low speeds/low viscous torques. [Note: This doesn't seem very plausible to me.] If this is true then it is expected that, for a constant rotational frequency, increasing viscosity (and thus structural deformation) will increase the translational speed of the swimmer. The results of this experiment can be observed in Figure 5. As can be noted, there is a distinct decrease of the speed (at the same angular frequency) for increasing viscosity, which the authors describe as "slight". I disagree with this evaluation and say it is rather significant compared to the speeds at which they are traveling. Regardless, this opens up the question of what shape is most efficient for swimming at low Reynolds numbers. The design was modified in arm length, and the speed at various rotational frequencies was tested, as shown in Figure 6. Figure 6a has longer arms, 6b is the original design and 6c has shorter arms. As one can see, at the higher frequencies the longer-arm design was clearly faster than the original design. Though there are some curious effects that are not adequately explained (such as why at low rotational frequencies the longer-arm design is slower than the original), the message is clear: geometry is essential to the design of the ideal swimmer, though what exact specifications are needed for ideal performance is up in the air. Discussions & Conclusion The authors bring up an interesting discussion about the slight change in the way one must engineer solutions to problems in the low Reynolds number realm. In particular, the question is not simply "what is the ideal propeller?" but rather "what structure will deform into an ideal propeller?". The way the authors have designed their swimmers provides an experimental platform for more such deformation-based self-propulsion. Since their platform involves planar objects, further redesigns can be scaled down into even smaller realms, since they did not approach the limits of photolithography and soft lithography. It was only mentioned in the last stages of the paper that there *were* considerations that complicate their claim of scalability. Material choice (for the mechanical properties) and flow fluctuations, and how they affect the propulsion of these structures, were not touched upon but were acknowledged as problems at smaller length scales. Though interesting in demonstration, the use of magnetic fields came with an inconvenience in the form of constraints. It was required that the magnetic field be uniform, otherwise the motion would be complicated by additional forces. The details of these additional forces were not touched upon but are non-trivial considerations. The authors simply note that one potential solution is to increase the magnet size relative to the swimmers (doable when scaling down swimmers).
Nonetheless, the general technique has been proposed and shown to be effective as an approach to developing means of self-propulsion. The novelty of the technique lies in its capability to exploit what was previously inaccessible (or rather just extremely difficult) to fabrication: namely, the full 3D structure that nature seems to have endless access to.
I'm trying to derive an HJB equation from a discrete time setting. At some point, I am left with $$ \lim_{\Delta\to 0} \frac{v(c_{t+\Delta}, u_{t+\Delta}, t+\Delta) - v(c_{t}, u_{t}, t)}{\Delta}$$ and am not sure what to do. If $\Delta$ appeared in only one argument, this would be a partial derivative. My hunch is that this is the total derivative in $t$, but I don't know how to show that. How do I proceed with the expression above?
Recently I realized that the concept of center of mass makes sense in special relativity. Maybe it's explained in the textbooks, but I missed it. However, there's a puzzle regarding the zero mass case. Consider any (classical) relativistic system, e.g. a relativistic field theory. Its state can be characterized by the conserved charges associated with Poincare symmetry. Namely, we have a covector $P$ associated with spacetime translation symmetry (the 4-momentum) and a 2-form $M$ associated with Lorentz symmetry. To define the center of mass of this state we seek a state of the free spinning relativistic point particle with the same values of the conserved charges. This translates into the equations $$x \wedge P + s = M$$$${i_P}s = 0$$ Here $x$ is the spacetime coordinate of the particle and $s$ is a 2-form representing its spin (intrinsic angular momentum). I'm using the spacetime metric $\eta$ implicitly by identifying vectors and covectors. The system is invariant under the transformation $$x'=x+{\tau}P$$ where $\tau$ is a real parameter. For $P^2 > 0$ and any $M$ these equations yield a unique timelike line in $x$-space, which can be identified with the worldline of the center of mass of the system. However, for $P^2=0$ the rank of the system is lower, since it is invariant under the more general transformation $$x'=x+y$$$$s'=s-y \wedge P$$ where $y$ satisfies $y \cdot P = 0$. This has two consequences. First, if a solution exists it yields a null hyperplane rather than a line*. Second, a solution only exists if the following constraint holds: $$i_P (M \wedge P) = 0$$ Are there natural situations in which this constraint is guaranteed to hold? In particular, does it hold for zero mass solutions of common relativistic field theories, for example Yang-Mills theory? I'm considering solutions with finite $P$ and $M$, of course. *For spacetime dimension $D = 3$ a canonical line can be chosen out of this hyperplane by imposing $s = 0$. For $D = 4$ this is in general impossible.
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and $\bar{\rm p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ... Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
1) A nonzero homogeneous polynomial $f(x_1,\cdots, x_n)\in \mathbb C[x_1,\cdots, x_n]$ has a unique degree $d$, which it is unnecessary to call "total" degree. For example $6x_1^3x_2^2-(1+i\sqrt{19})x_2x_3^4+\frac {22} {7}x_3^5 \in \mathbb C[x_1,x_2, x_3]$ is homogeneous of degree $d=5$. 2) A hypersurface $V(f)\subset \mathbb P^{n-1}(\mathbb C)$ of degree $d$ is determined by a homogeneous polynomial $f(x_1,\cdots, x_n)\in \mathbb C[x_1,\cdots, x_n]$ of degree $d$, and two polynomials $f,g\in \mathbb C[x_1,\cdots, x_n]$ determine the same hypersurface $V(f)=V(g)$ if and only if $f=\lambda g$ for some nonzero $\lambda\in \mathbb C$. THAT'S ALL: THAT $f$ IS REDUCIBLE OR NOT IS IRRELEVANT 3) And strangely the definition of "hypersurface" is not very important (which is why I didn't give it to you!): there have been several definitions for more than a century, the latest (dating from the late 1950's) being through the notion of scheme. But the brilliant Italian, German, French, English, ... algebraic geometers of the nineteenth century knew very well that the line $x_1=0$ is completely different from the conic $x_1^2=0$! To sum up: if $f=f_1^{e_1} \cdots f_r^{e_r}$ is the factorization of $f$ into irreducibles, you may consider the polynomial (of lower degree) $f_{red}=f_1 \cdots f_r$, but you must be aware that $V(f)$ and $V(f_{red})$ are different hypersurfaces.
In most cases, cryptography requires values to be uniformly random (in which case discussions of min-entropy are moot) or unpredictable (in which case min-entropy isn't sufficient --- though "conditional" min-entropy might be --- since in these contexts an adversary typically has multiple guesses). The context where I see min-entropy play the greatest role is in randomness extraction. Since physical sources of randomness rarely produce uniformly random bits, we need a way to transform the output of some physical system (presumably containing some amount of entropy, however measured) into a uniformly random bit string. At that point, we can start to do cryptography. The Leftover Hash Lemma tells us how to take an input $X$ and transform it into a value $f_S(X) \in \{0, 1\}^n$ that is "close" to uniform. In particular, it tells us how to construct $f$ such that $n \leq H_{\infty}(X) - 2 \log(1/\epsilon)$, where $\epsilon$ is the statistical distance between the uniform distribution and $f_S(X)$ (technically, the distance between $(S, f_S(X))$ and $(S, U)$, where $U$ and $S$ are uniform and $S$ is implicitly a fixed, public value). We're not making any assumptions on $X$ except for its min-entropy. This is quite nice because in practice we can't precisely characterize the distributions used to provide inputs to RNGs (which vary from device to device in the case of HW RNGs, and are not known a priori in the case of, e.g., /dev/urandom). Of course, estimating $H_{\infty}(X)$ is its own bag of worms...
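To make the parameters concrete, here is a toy sketch of leftover-hash-lemma-style extraction. It uses a uniformly random binary matrix as the 2-universal hash (Toeplitz matrices are the usual space-efficient choice), and the min-entropy figure for the source is an assumption, not something the code can measure:

import secrets
import numpy as np

def extract(x_bits: np.ndarray, seed_matrix: np.ndarray) -> np.ndarray:
    # f_S(x) = S x over GF(2): matrix-vector product mod 2
    return (seed_matrix @ x_bits) % 2

m = 256            # raw input length in bits
eps = 2.0**-40     # target statistical distance
H_inf = 128        # ASSUMED min-entropy of the physical source
n_out = int(H_inf - 2*np.log2(1/eps))   # n <= H_inf - 2 log(1/eps) = 48

S = np.array([[secrets.randbits(1) for _ in range(m)] for _ in range(n_out)])
x = np.array([secrets.randbits(1) for _ in range(m)])  # stand-in raw sample
print(extract(x, S))   # 48 bits, eps-close to uniform given the seed

The lemma's guarantee is only as good as the min-entropy estimate fed into it, which is exactly the "bag of worms" mentioned above.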
I want to build a context sensitive grammar for the language $\{a^{2^n}\mid n\geq 0\}$. I think it should be something like this \begin{align*} S &\to aA \mid a\\ aA&\to aaaA \mid aa \end{align*} This is from Hopcroft and Ullman's book, Example 9.4, pg 220: $$ S \rightarrow ACaB $$ $$ Ca \rightarrow aaC $$ $$ CB \rightarrow DB $$ $$ CB \rightarrow E $$ $$ aD \rightarrow Da $$ $$ AD \rightarrow AC $$ $$ aE \rightarrow Ea $$ $$ AE \rightarrow \epsilon $$ Update: in fact, the given grammar is unrestricted, but it can be modified into a context-sensitive one. However, if your ultimate goal is to prove that the given language is a CSL, then it is enough to construct a linear bounded automaton: a TM whose computation is restricted to the portion of the tape containing the input. In other words, the head of the TM cannot move beyond the input area. LBAs and CSGs are equivalent. You can erase one input symbol at a time starting from the right, and each time you erase a symbol, increment a binary counter (with symbols 0 and 1) stored in place of the erased symbols. When all input symbols have been erased, check whether the counter is of the form 100...0. If yes, accept; otherwise, reject. A counter of the form 100...0 means the input length is a power of 2.
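As a quick illustration of the erase-and-count strategy (my own sketch, simulating the counter at a high level rather than on a tape):

def accepts(word: str) -> bool:
    assert set(word) <= {"a"}
    count = 0
    for _ in word:            # erase one 'a' per step, bumping the counter
        count += 1
    binary = bin(count)[2:]   # the counter the LBA would leave behind
    # accept iff it reads 100...0, i.e. the length is a power of two
    return binary[0] == "1" and set(binary[1:]) <= {"0"}

for i in range(1, 20):
    if accepts("a" * i):
        print(i)              # prints 1, 2, 4, 8, 16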
Let's consider the outer two loops first. For a fixed value of $i$, the number of iterations of the middle loop is exactly $n-1 - (i+1) + 1 = n - i - 1$. Since $i$ ranges from $0$ to $n-1$ (in the outer loop), the overall number of iterations of the middle loop is: $$\sum_{i=0}^{n-1} (n - i - 1) =\sum_{i=0}^{n-1} i = \frac{n(n-1)}{2}.$$ For each of those ...

Each time, add $i$ to $s$ and increase $i$ by one, until $s$ reaches $n$. Hence, if you find the $k$ such that $s = 0 + 1 + 2 + \ldots + k$ equals $n$, you can find the number of loop iterations. As $1 + 2 + \ldots + k = \frac{k(k+1)}{2}$, you need to solve the equation $\frac{k(k+1)}{2} = n$: $$k^2 + k - 2n = 0 \Rightarrow k = \frac{-1 + \sqrt{1+8n}}{2} = \ldots$$

For all $n\geq 1$, $56n^2+106n+48 > 56n^2 > n^2$ and $\log (264n^2+200) > \log 264n^2 > \log n$, so $$(56n^2+106n+48)\log(264n^2+200) > n^2\log n\,,$$ i.e., you can take $c_1=1$. Also, for all $n\geq 1$, $56n^2+106n+48\leq 56n^2+106n^2+48n^2 = 210n^2$ and, for all $n\geq 200$, $264n+200 < 265n$, so $\log(264n^2+200) < \log 265n^2 = 2\log n + \ldots$

I just found the answer myself. In this paper: Lyle Ramshaw, Robert E. Tarjan (2012). "On minimum-cost assignments in unbalanced bipartite graphs". Technical reports, HP research labs. In section 5, the authors show that the Hopcroft-Karp algorithm in fact solves the following problem: given an integer $s$, find matchings with $1,\ldots,s$ edges. The ...

For the first case, you had the right idea, but just made some algebra mistakes.

for i = 1..n
    j = 1
    while j*j <= i:
        j = j + 1

Let $T(n)$ be the time complexity. $$T(n) = \sum_{i=1}^n\sum_{j=1}^{\sqrt{i}} 1 \leq \sum_{i=1}^n\sqrt{n} = n^{3/2} = O(n^{3/2})$$

I'm assuming you meant the pseudocode below, since it is more analogous to ...

Let's take a look at the case of size 7 first. Here, we want to show a linear upper bound for $T(n)$. Thus, we choose the recurrence relation $T(n) \leq T(n/7) + T(5n/7) + dn$ (remember, it is an upper bound). We guess that the solution is of the form $T(n) \leq cn$ for some constant $c$ and prove it with induction: $$T(n) \leq T(n/7) + T(5/7 \cdot \ldots$$
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for

@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?

@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever

I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.

@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).

@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.

@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)

@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?

@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash what did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.

@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)

@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct, but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)

if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)

@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really

@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.

@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...

@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.

MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...

has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable? I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.

@baxx never use word (have a copy just because but I don't use it ;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out

You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit, but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} make a small html file that looks like <!...

@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.

@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
We are allowed to use gauge invariance in quantum mechanics – even quantum mechanical theories with the electromagnetic 4-potential are gauge-invariant theories. However, it's not quite true that all gauge invariant quantities are functions or functionals of $F_{\mu\nu}$. Instead, we may consider the phase$$ \exp\left(i\oint d\vec x\cdot \vec A\right) $$where the integral goes along a circle surrounding the solenoid. The exponential above may be seen to be gauge-invariant (add the appropriate natural factor of $e$ or $e/c$ to the exponent to fit your normalizations) because $\vec A$ changes by $\nabla \lambda$ and the integral changes by the step of $\lambda$ between the beginning and end of the circular contour. But this multiplicative factor is guaranteed to be a multiple of $2\pi$ because the charged fields of unit charge transform by getting multiplied by $\exp(i\lambda)$ and they have to stay single-valued in all directions around the solenoid. So the information given by the exponential – a complex number whose absolute value is one or, equivalently, the integral of $\vec A$ modulo $2\pi$ – remains the same under any gauge transformation. It has observable consequences in quantum mechanics. In particular, it affects the location of the interference patterns behind the solenoid. Equivalently, you may rewrite the contour integral as$$\oint d\vec x\cdot \vec A = \int d\vec S\cdot \vec B $$which only depends on the gauge-invariant field strength $\vec B$. It's the magnetic flux through the solenoid. However, we must know the value of $\vec B$ even in – and especially in – regions that the electron never reaches, where it has a zero probability to be, namely inside the solenoid. Quantum mechanics is sensitive to the magnetic flux because it manifests itself as the relative phase of the wave function of the electron going around the left or right side of the solenoid, respectively. The first explanation of the AB effect only uses quantities measured along the paths of the electron but one needs to use other gauge-invariant objects than the field strength; the second explanation agrees with the proposition that all gauge-invariant entities are functionals of the field strength but one must "nonlocally" consider the field strength's value in forbidden regions in the solenoid, too. They matter in quantum physics. In the classical limit, the interference patterns go away and the whole sensitivity to the magnetic flux is eliminated, too.
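For what it's worth, the gauge-invariant phase is a one-liner numerically; the flux value below is an arbitrary example, and the point of the final comment is that only the flux modulo $2\pi\hbar/q$ is observable:

import numpy as np

hbar = 1.054571817e-34   # J s
q = 1.602176634e-19      # C, unit charge
Phi = 2.5e-15            # Wb, example flux through the solenoid
phase = np.exp(1j * q * Phi / hbar)   # Aharonov-Bohm phase factor
print(np.angle(phase))   # only Phi mod (2 pi hbar / q) shifts the fringes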
Lead Article: Tables of Physics Formulae

This article is a summary of the laws, principles, defining quantities, and useful formulae in the analysis of continuity and conservation equations. To summarize the essentials of physics, this section enumerates the classical conservation laws and continuity equations. All the following conservation laws carry through to modern physics, such as quantum mechanics, relativity and particle physics, though modifications to the conserved quantities may be necessary. Particle physics introduces new conservation laws, many formulated differently using quantum numbers.

For any isolated system (i.e. one independent of external agents/influences) the following laws apply to the whole system. Constituents of the system possessing these quantities may experience changes, but the total amount of the quantity due to all constituents is constant. Two equivalent ways of applying these laws in problems are to consider the quantities before and after an event, or to consider any two points in space and time, and equate the initial state of the system to the final state, since the quantity is conserved.

Corresponding to the conserved quantities are currents, current densities, or other time derivatives. These quantities must also be conserved, since the amount of a conserved quantity associated with a system is invariant in space and time.

Classical Conservation

Mass: $\Delta m = 0$; $M_{\mathrm{system}} = \sum_{i=1}^{N_1} m_i = \sum_{j=1}^{N_2} m_j$. Mass current conservation: $\sum_{i=1}^{N_1} (I_{\mathrm m})_i = \sum_{j=1}^{N_2} (I_{\mathrm m})_j = 0$; mass current density conservation: $\sum_{i=1}^{N_1} (\mathbf j_{\mathrm m})_i = \sum_{j=1}^{N_2} (\mathbf j_{\mathrm m})_j = \mathbf 0$.

Linear momentum: $\Delta \mathbf p = \mathbf 0$; $\sum_{i=1}^{N_1}\mathbf p_i = \sum_{j=1}^{N_2}\mathbf p_j$, which can be written in equivalent ways, most usefully $\sum_{i=1}^{N_1} m_i \mathbf v_i = \sum_{j=1}^{N_2} m_j \mathbf v_j$. Momentum current conservation: $\sum_{i=1}^{N_1}(I_{\mathrm p})_i = \sum_{j=1}^{N_2}(I_{\mathrm p})_j = 0$; momentum current density conservation: $\sum_{i=1}^{N_1}(\mathbf j_{\mathrm p})_i = \sum_{j=1}^{N_2}(\mathbf j_{\mathrm p})_j = \mathbf 0$.

Total angular momentum: $\Delta \mathbf L_{\mathrm{total}} = \mathbf 0$; $\mathbf L_{\mathrm{system}} = \sum_{i=1}^{N_1}\mathbf L_i = \sum_{j=1}^{N_2}\mathbf L_j$, which can be written in equivalent ways, most usefully $\mathbf L_{\mathrm{system}} = \sum_{i=1}^{N_1}(\mathbf I_{\mathrm{ab}}\,\boldsymbol\omega_{\mathrm b})_i = \sum_{j=1}^{N_2}(\mathbf I_{\mathrm{ab}}\,\boldsymbol\omega_{\mathrm b})_j$, or $\mathbf L_{\mathrm{system}} = \sum_{i=1}^{N_1}\mathbf r_i\times\mathbf p_i = \sum_{j=1}^{N_2}\mathbf r_j\times\mathbf p_j$, or $\mathbf L_{\mathrm{system}} = \sum_{i=1}^{N_1} m_i(\mathbf r_i\times\mathbf v_i) = \sum_{j=1}^{N_2} m_j(\mathbf r_j\times\mathbf v_j)$. No current analogue.

Spin angular momentum: $\Delta \mathbf L_{\mathrm{spin}} = \mathbf 0$ (system equations as above). Orbital angular momentum: $\Delta \mathbf L_{\mathrm{orbital}} = \mathbf 0$ (same as above).

Energy: $\Delta E = 0$; $E_{\mathrm{system}} = \sum_i T_i + \sum_j V_j$, or simply $E = T + V$; equivalently $E_{\mathrm{system}} = \sum_{i=1}^{N_1}(T_i+V_i) = \sum_{j=1}^{N_2}(T_j+V_j)$. Power conservation: $\sum_i P_i + \sum_j P_j = 0$; intensity conservation: $\sum_i I_i + \sum_j I_j = 0$.

Charge: $\Delta q = 0$; $Q_{\mathrm{system}} = \sum_{i=1}^{N_1} q_i = \sum_{j=1}^{N_2} q_j$. Electric current conservation: $\sum_{i=1}^{N_1} I_i = \sum_{j=1}^{N_2} I_j = 0$; electric current density conservation: $\sum_{i=1}^{N_1}\mathbf J_i = \sum_{j=1}^{N_2}\mathbf J_j = \mathbf 0$.

Classical Continuity Equations

Continuity equations describe the transport of conserved quantities through a local region of space. They are not additional fundamental laws: they can be derived from the conservation laws.

Hydrodynamics, fluid flow: with $j_{\mathrm m}$ the mass current through a cross-section, $\rho$ the volume mass density, $\mathbf u$ the velocity field of the fluid, and $\mathbf A$ the cross-section: $\nabla\cdot(\rho\mathbf u) + \dfrac{\partial\rho}{\partial t} = 0$; simple case $j_{\mathrm m} = \rho_1\mathbf A_1\cdot\mathbf u_1 = \rho_2\mathbf A_2\cdot\mathbf u_2$.

Electromagnetism, charge: with $I$ the electric current through a cross-section, $\mathbf J$ the electric current density, $\rho$ the volume electric charge density, and $\mathbf u$ the velocity of the charge carriers: $\nabla\cdot\mathbf J + \dfrac{\partial\rho}{\partial t} = 0$; simple case $I = \rho_1\mathbf A_1\cdot\mathbf u_1 = \rho_2\mathbf A_2\cdot\mathbf u_2$.

Quantum mechanics, probability: with $\mathbf j$ the probability current/flux and $P = P(x,t)$ the probability density function: $\nabla\cdot\mathbf j + \dfrac{\partial P}{\partial t} = 0$.

External Links: Conservation Laws; Continuity Equations
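A small numerical illustration of the charge-continuity equation in 1D (my own check, with an arbitrary Gaussian pulse advected at speed $v$, so that $J = \rho v$):

import numpy as np

v, dx, dt = 1.3, 1e-3, 1e-6
x = np.arange(-5, 5, dx)
rho = lambda x, t: np.exp(-(x - v*t)**2)   # advected charge density
J = lambda x, t: v * rho(x, t)             # corresponding current density

drho_dt = (rho(x, dt) - rho(x, -dt)) / (2*dt)   # central difference in t
dJ_dx = np.gradient(J(x, 0.0), dx)              # central difference in x
print(np.max(np.abs(drho_dt + dJ_dx)))  # small, shrinking as dx, dt -> 0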
Consider the following one-shot version of a labour market matching model. Let the labour force be normalized to 1; because there is only one period, all workers start out as unemployed. There is a very large number of firms who can enter the market and search for a worker. Firms who engage in search first have to pay a fixed cost, $k$. If a measure $v$ of firms enters the labour market, a constant-returns-to-scale matching function $m(1,v)$ gives us the total measure of matches in the economy. Within each match, the firm and the worker bargain over the wage, $w$, so that the worker gets a constant proportion of $y$. Denote this proportion by $\beta$, which is interpreted as the bargaining power of the worker. Assume $\frac{k}{y} < 1 - \beta$. Define market tightness as $ b \equiv \frac{1}{v}$ and assume that the arrival rate for a firm is given by $a_{F} = 1 - e^{-b}$. Assuming firms can enter the labour market freely if they pay the entry cost, what is the equilibrium value of $b$? Describe it graphically. Does it always exist? Is it unique? My solution: The value of a vacancy is $$V = -k + a_{F}(b)(J-V),$$ and the value of a filled job is $$J = y-w.$$ If firms enter the labour market freely, then $V=0$. From these two equations I am left with $$ 1 - e^{-b} = a_{F}(b) = \frac{k}{y (1-\beta)}.$$ Graphically, the function looks something like this: From the graph, it looks like $b^*$ is unique, but how do I know whether it always exists?
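For what it's worth, the free-entry condition can be checked numerically; the parameter values below are made up, and the comments spell out the existence argument:

import numpy as np

y, beta, k = 1.0, 0.5, 0.3
rhs = k / (y * (1 - beta))    # = 0.6 < 1, consistent with k/y < 1 - beta

# a_F(b) = 1 - e^{-b} is continuous and strictly increasing, with
# a_F(0) = 0 and a_F(b) -> 1 as b -> infinity, so a solution exists
# iff 0 < rhs < 1 (guaranteed here by k/y < 1 - beta), and monotonicity
# makes it unique:
b_star = -np.log(1 - rhs)
print(b_star)                 # ~0.916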
Suppose we are given a set of symmetric, positive definite matrices $A_1,A_2,\ldots,A_k\in\mathbb{R}^{n\times n}$. Is there any numerical method or reduction to a known problem (e.g. eigenvalue computation) that can solve the following problem? $$ \begin{array}{rl} \min_{v_1,\ldots,v_k\in\mathbb{R}^n} & \sum_{i=1}^kv_i^\top A_iv_i\\ \textrm{subject to }&v_i^\top v_j=\delta_{[i=j]}. \end{array} $$ Note: The problem has some resemblance to the "joint diagonalization" problem, e.g. in this paper. The difference seems to be that in joint diagonalization you use the same basis to diagonalize multiple matrices. Note 2: If the $A_i$'s do not have the same eigenvectors, then the $v_i$'s will not be exactly eigenvectors of the $A_i$'s. Note 3: My intuition is that this will give a set of mutually orthonormal vectors $v_1,\ldots,v_k\in\mathbb{R}^n$, with the property that vector $v_i$ is similar to an eigenvector of $A_i$. But this is not an eigenvalue problem.
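Not an answer to whether a clean spectral reduction exists, but as a baseline the problem can be attacked with a generic local optimizer. Here is a minimal sketch of my own (synthetic data, no global guarantee) that enforces orthonormality by taking the Q-factor of an unconstrained $n\times k$ matrix:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, k = 6, 3
# k synthetic symmetric positive definite matrices
A = [M @ M.T + n*np.eye(n) for M in (rng.standard_normal((n, n)) for _ in range(k))]

def objective(params):
    Q, _ = np.linalg.qr(params.reshape(n, k))   # orthonormal columns
    return sum(Q[:, i] @ A[i] @ Q[:, i] for i in range(k))

res = minimize(objective, rng.standard_normal(n*k), method="BFGS")
V, _ = np.linalg.qr(res.x.reshape(n, k))
print(res.fun)
print(V.T @ V)   # ~ identity, so the constraint holds

A proper treatment would optimize on the Stiefel manifold directly; the QR reparametrization is just the cheapest way to get a feasible local search going.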
Stokes flow through a complex 3D porous medium

This is the 3D equivalent of the 2D porous medium test case. The medium is periodic and described using embedded boundaries. This tests mainly the robustness of the representation of embedded boundaries and the convergence of the viscous and Poisson solvers.

#include "grid/octree.h"
#include "embed.h"
#include "navier-stokes/centered.h"
#include "view.h"
#include "navier-stokes/perfs.h"

We will vary the maximum level of refinement, starting from 5.

int maxlevel = 5;

The porous medium is defined by the union of a random collection of spheres. The number of spheres ns can be varied to vary the porosity.

void porous (scalar cs, face vector fs)
{
  int ns = 700;
  coord pc[ns];
  double R[ns];
  srand (0);
  for (int i = 0; i < ns; i++) {
    foreach_dimension()
      pc[i].x = 0.5*noise();
    R[i] = 0.04 + 0.08*fabs(noise());
  }

Once we have defined the random centers and radii, we can compute the levelset function \phi representing the embedded boundary. Since the medium is periodic, we need to take into account all the sphere images using periodic symmetries. Note that this means that each function evaluation requires 27 times ns evaluations of the function for a sphere. This is expensive but could be improved a lot using a more clever algorithm.

  vertex scalar phi[];
  foreach_vertex() {
    phi[] = HUGE; /* level-set, initialised to a large value before intersections */
    for (double xp = -L0; xp <= L0; xp += L0)
      for (double yp = -L0; yp <= L0; yp += L0)
        for (double zp = -L0; zp <= L0; zp += L0)
          for (int i = 0; i < ns; i++)
            phi[] = intersection (phi[], (sq(x + xp - pc[i].x) +
                                          sq(y + yp - pc[i].y) +
                                          sq(z + zp - pc[i].z) -
                                          sq(R[i])));
  }
  boundary ({phi});
  fractions (phi, cs, fs);

This is necessary to remove degenerate fractions which could cause convergence problems.

  fractions_cleanup (cs, fs);
}

The domain is the periodic unit cube centered on the origin. We turn off the advection term. The choice of the maximum timestep and of the tolerance on the Poisson and viscous solves is not trivial. This was adjusted by trial and error to minimize (possibly) splitting errors and optimize convergence speed.

We define the porous embedded geometry.

  porous (cs, fs);

The gravity vector is aligned with the channel and viscosity is unity.

  const face vector g[] = {1.,0.,0.};
  a = g;
  mu = fm;

The boundary condition is zero velocity on the embedded boundary.

  u.n[embed] = dirichlet(0);
  u.t[embed] = dirichlet(0);
  u.r[embed] = dirichlet(0);

We initialize the reference velocity.
  foreach()
    un[] = u.x[];
}

#if 0 // used for debugging only
coord maxpos (scalar s)
{
  coord pmax = {0,0,0};
  double numax = 0.;
  foreach_leaf()
    if (fabs(s[]) > numax) {
      numax = fabs(s[]);
      pmax = (coord){x,y,z};
    }
  return pmax;
}

event dumps (i += 10) {
  scalar nu[], du[], crappy[];
  foreach() {
    nu[] = norm(u);
    du[] = u.x[] - un[];
    double val;
    crappy[] = (embed_flux (point, u.x, mu, &val) != 0.);
  }
#if !_MPI
  coord numax = maxpos (nu);
  fprintf (stderr, "numax: %g %g %g\n", numax.x, numax.y, numax.z);
  numax = maxpos (p);
#if 0
  Point point = locate (numax.x, numax.y, numax.z);
  fprintf (stderr, "pmax: %g %g %g %g %g\n",
           numax.x, numax.y, numax.z, p[], cs[]);
#endif
  numax = maxpos (du);
  {
    Point point = locate (numax.x, numax.y, numax.z);
    fprintf (stderr, "dumax: %g %g %g\n", numax.x, numax.y, numax.z);
    FILE * fp = fopen ("dumax", "w");
    foreach_neighbor(1) {
      fprintf (fp, "fs %g %g %g %g\n", x - Delta/2., y, z, fs.x[]);
      fprintf (fp, "fs %g %g %g %g\n", x, y - Delta/2., z, fs.y[]);
      fprintf (fp, "fs %g %g %g %g\n", x, y, z - Delta/2., fs.z[]);
      fprintf (fp, "fs %g %g %g %g\n", x + Delta/2., y, z, fs.x[1]);
      fprintf (fp, "fs %g %g %g %g\n", x, y + Delta/2., z, fs.y[0,1]);
      fprintf (fp, "fs %g %g %g %g\n", x, y, z + Delta/2., fs.z[0,0,1]);
      fprintf (fp, "%g %g %g %g\n", x, y, z, cs[]);
    }
    fclose (fp);
  }
#endif
  p.nodump = false;
  dump();
}
#endif

We check for a stationary solution.

event logfile (i++; i <= 500)
{
  double avg = normf(u.x).avg, du = change (u.x, un)/(avg + SEPS);
  fprintf (ferr, "%d %d %d %d %d %d %d %d %.3g %.3g %.3g %.3g %.3g\n",
           maxlevel, i,
           mgp.i, mgp.nrelax, mgp.minlevel,
           mgu.i, mgu.nrelax, mgu.minlevel,
           du, mgp.resa*dt, mgu.resa, statsf(u.x).sum, normf(p).max);

If the relative change of the velocity is small enough we stop this simulation.

  if (i > 1 && (avg < 1e-9 || du < 0.01)) {

We are interested in the permeability k of the medium, which is defined by

\displaystyle U = \frac{k}{\mu}\nabla p = \frac{k}{\mu}\rho g

with U the average fluid velocity.

We output fields and dump the simulation.

    scalar nu[];
    foreach()
      nu[] = norm(u);
    boundary ({nu});
    view (fov = 32.2073, quat = {-0.309062,0.243301,0.0992085,0.914026},
          tx = 0.0122768, ty = 0.0604286, bg = {1,1,1},
          width = 600, height = 600);
    box();
    draw_vof ("cs", "fs", fc = {0.5,0.5,0.5});
    char name[80];
    sprintf (name, "cs-%d.png", maxlevel);
    save (name);
    box();
    isosurface ("u.x", 1e-5, fc = {0.,0.7,0.7});
    sprintf (name, "nu-%d.png", maxlevel);
    save (name);
    view (fov = 19.1765, quat = {0,0,0,1},
          tx = 0.0017678, ty = 0.00157844, bg = {1,1,1},
          width = 600, height = 600);
    squares ("nu", linear = true);
    cells();
    sprintf (name, "cross-%d.png", maxlevel);
    save (name);
    sprintf (name, "dump-%d", maxlevel);
    dump (name);

We stop at level 8.

    if (maxlevel >= 8)
      return 1; /* stop */

We refine the converged solution to get the initial guess for the finer level. We also reset the embedded fractions to avoid interpolation errors on the geometry.

    maxlevel++;
#if TREE
    adapt_wavelet ({cs,u}, (double[]){1e-2,4e-6,4e-6,4e-6}, maxlevel);
#endif
    porous (cs, fs);
    boundary (all); // this is necessary since BCs depend on embedded fractions
    event ("adapt");
  }
}

The gnuplot scripts for the figures:

set xlabel 'Level'
set grid
set ytics format '%.1e'
plot 'out' w lp t ''

set xlabel 'Iterations'
set logscale y
set ytics format '%.0e'
set yrange [1e-8:]
set key center left
plot '../porous3D.ref' u 2:9 w l t '', '' u 2:10 w l t '', \
     '' u 2:11 w l t '', '' u 2:12 w l t '', '' u 2:13 w l t '', \
     'log' u 2:9 w p t 'du', '' u 2:10 w p t 'resp', \
     '' u 2:11 w p t 'resu', '' u 2:12 w p t 'u.x.sum', '' u 2:13 w p t 'p.max'
I responded to a Quora question that asked about solving for the electric field with radially decreasing permittivity. Specifically, solving a problem of the form $\nabla^2 \textbf{E} + \omega^2 \mu_0 \varepsilon(0) e^{-kr} \textbf{E} =0$. My answer was that the problem amounts to solving equations of the type, $$(\nabla^2+f(r)){\boldsymbol{\mathrm{E}}}=0.$$ In Cartesian coordinates ($r^2=x^2+y^2+z^2$): \begin{align}(\nabla^2+f(r))E_x&=0,\\ (\nabla^2+f(r))E_y&=0,\\ (\nabla^2+f(r))E_z&=0.\end{align} If we want a propagating solution in the $z$-direction, we might as well set $E_x=E_y=0$. As for $E_z$, we get: $$\frac{\partial^2}{\partial x^2}E_z+\frac{\partial^2}{\partial y^2}E_z+\frac{\partial^2}{\partial z^2}E_z+f(r)E_z=0.$$ We seek the solution in the form $E_z=\phi_x(x)\phi_y(y)\phi_z(z)$. Then, the equation becomes $$\frac{\partial^2\phi_x}{\partial x^2}\phi_y\phi_z+\frac{\partial^2\phi_y}{\partial y^2}\phi_z\phi_x+\frac{\partial^2\phi_z}{\partial z^2}\phi_x\phi_y+f(r)\phi_x\phi_y\phi_z=0,$$ which translates into the following equations for $\phi_x$, $\phi_y$ and $\phi_z$: \begin{align}\left[\frac{\partial^2}{\partial x^2}+\left(\frac{1}{\phi_y}\frac{\partial^2\phi_y}{\partial y^2}+\frac{1}{\phi_z}\frac{\partial^2\phi_z}{\partial z^2}+f(r)\right)\right]\phi_x&=0,\\ \left[\frac{\partial^2}{\partial y^2}+\left(\frac{1}{\phi_z}\frac{\partial^2\phi_z}{\partial z^2}+\frac{1}{\phi_x}\frac{\partial^2\phi_x}{\partial x^2}+f(r)\right)\right]\phi_y&=0,\\ \left[\frac{\partial^2}{\partial z^2}+\left(\frac{1}{\phi_x}\frac{\partial^2\phi_x}{\partial x^2}+\frac{1}{\phi_y}\frac{\partial^2\phi_y}{\partial y^2}+f(r)\right)\right]\phi_z&=0.\end{align} In other words, the problem is reduced to solving an ODE of the form $$y''+f(x)y=0.$$ I do not know how to solve this type of equation in the general case. However, if $f(x)=A+e^{-kx}$, the solution is given in terms of Bessel functions: $$y(x)=C_1J_\alpha\left(2\frac{e^{-kx/2}}{k}\right)+C_2Y_\alpha\left(2\frac{e^{-kx/2}}{k}\right),$$ where $$\alpha=\frac{2\sqrt{-A}}{k}.$$ This suggests to me that the solution to the problem in the general case would also be in the form of some generalized Bessel functions, i.e., a damped wave of some kind.
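As a sanity check on the Bessel-function formula (my own quick test, with arbitrary values $A=-1$, $k=2$ so that the order $\alpha = 2\sqrt{-A}/k = 1$ is real), one can integrate the ODE numerically and compare against the closed form:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv, yv

A, k = -1.0, 2.0
alpha = 2*np.sqrt(-A)/k           # Bessel order
u = lambda x: 2*np.exp(-k*x/2)/k  # argument of the Bessel functions

C1, C2 = 0.3, -0.7
y_exact = lambda x: C1*jv(alpha, u(x)) + C2*yv(alpha, u(x))

def dy_exact(x):
    # d/dx Z(u(x)) = Z'(u) u'(x) with u' = -(k/2) u and Z' = (Z_{a-1}-Z_{a+1})/2
    du = -(k/2)*u(x)
    dJ = 0.5*(jv(alpha-1, u(x)) - jv(alpha+1, u(x)))
    dY = 0.5*(yv(alpha-1, u(x)) - yv(alpha+1, u(x)))
    return (C1*dJ + C2*dY)*du

rhs = lambda x, s: [s[1], -(A + np.exp(-k*x))*s[0]]   # y'' = -(A+e^{-kx}) y
x0, x1 = 0.0, 5.0
sol = solve_ivp(rhs, (x0, x1), [y_exact(x0), dy_exact(x0)],
                dense_output=True, rtol=1e-10, atol=1e-12)
xs = np.linspace(x0, x1, 7)
print(np.max(np.abs(sol.sol(xs)[0] - y_exact(xs))))   # tiny residual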
Given a potential well of $V = 0$ on the interval $(0,L)$ and $V = \infty$ outside the well, I am working to solve the Time Independent Schrodinger Equation $$\dfrac{d^2}{dx^2} \psi= \dfrac{-2mE}{\hbar^2}\psi=-k^2\psi.$$ While the sine/cosine solution may be easier, my question is how to do it with exponentials. With the ansatz $\psi \propto \exp(\alpha x)$ we can write $\alpha^2=-k^2$ so $\alpha = \pm ik$ and the general solution is $$\psi (x)= Ae^{ikx}+Be^{-ikx}.$$ From the boundary condition $\psi (0)=0$, we see that $$\psi (0)= Ae^{0}+Be^{0}=A+B=0 \Rightarrow B = -A$$ so $$\psi (x)= Ae^{ikx}-Ae^{-ikx}.$$ Then, applying the boundary condition at x=L, $$\psi (L)= A(e^{ikL}-e^{-ikL})=0.$$ I am unsure of how to evaluate this last part. $A= 0$ is not valid, so what is the solution to this last equation?
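A standard step that finishes this off: the bracket is a sine in disguise, $$e^{ikL}-e^{-ikL}=2i\sin(kL)=0 \quad\Rightarrow\quad kL=n\pi,\quad n=1,2,\ldots$$ so $k_n = n\pi/L$, $E_n = \hbar^2 k_n^2/2m$, and $\psi_n(x) = 2iA\sin(k_n x)$, with the remaining constant fixed by normalization rather than by the boundary conditions.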
From Wikipedia: Denote by $\mathbb{S}^n$ the space of all $n \times n$ real symmetric matrices. The space is equipped with the inner product (where ${\rm tr}$ denotes the trace) $$\langle A,B\rangle_{\mathbb{S}^n} = {\rm tr}(A^T B) = \sum_{i=1,j=1}^n A_{ij}B_{ij}.$$ We can rewrite the mathematical program given in the previous section equivalently as $$ \begin{array}{rl} {\displaystyle\min_{X \in \mathbb{S}^n}} & \langle C, X \rangle_{\mathbb{S}^n} \\ \text{subject to} & \langle A_k, X \rangle_{\mathbb{S}^n} \leq b_k, \quad k = 1,\ldots,m \\ & X \succeq 0 \end{array} $$ From Boyd's paper, a semidefinite programming problem is $$\min_{x \in\mathbb R^m} c^T x$$ subject to $$F_0 + \sum_{i=1}^m x_i F_i \succeq 0$$ where $c \in \mathbb R^m$ and $F_0, \ldots, F_m \in \mathbb R^{n\times n}$ are $m + 1$ given symmetric matrices. I was wondering whether the two formulations are equivalent? I am not able to see how they are related. Thanks and regards!
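A sketch of one direction (my own rewriting, so worth double-checking): letting the decision vector $x$ collect the entries of $X$, every constraint of the first form is affine in these entries, so $$\operatorname{diag}\big(X,\; b_1-\langle A_1,X\rangle,\; \ldots,\; b_m-\langle A_m,X\rangle\big)\succeq 0$$ is a block-diagonal matrix, affine in $x$, hence expressible as $F_0+\sum_i x_i F_i$ for suitable symmetric $F_i$; the objective $\langle C,X\rangle$ is likewise linear in $x$. The reverse direction takes more bookkeeping (one introduces $X$ as a slack variable for $F_0+\sum_i x_i F_i$ and pins down its affine dependence on $x$ with linear constraints).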
$$ \newcommand{\bsth}{{\boldsymbol\theta}} \newcommand{\va}{\textbf{a}} \newcommand{\vb}{\textbf{b}} \newcommand{\vc}{\textbf{c}} \newcommand{\vd}{\textbf{d}} \newcommand{\ve}{\textbf{e}} \newcommand{\vf}{\textbf{f}} \newcommand{\vg}{\textbf{g}} \newcommand{\vh}{\textbf{h}} \newcommand{\vi}{\textbf{i}} \newcommand{\vj}{\textbf{j}} \newcommand{\vk}{\textbf{k}} \newcommand{\vl}{\textbf{l}} \newcommand{\vm}{\textbf{m}} \newcommand{\vn}{\textbf{n}} \newcommand{\vo}{\textbf{o}} \newcommand{\vp}{\textbf{p}} \newcommand{\vq}{\textbf{q}} \newcommand{\vr}{\textbf{r}} \newcommand{\vs}{\textbf{s}} \newcommand{\vt}{\textbf{t}} \newcommand{\vu}{\textbf{u}} \newcommand{\vv}{\textbf{v}} \newcommand{\vw}{\textbf{w}} \newcommand{\vx}{\textbf{x}} \newcommand{\vy}{\textbf{y}} \newcommand{\vz}{\textbf{z}} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator\mathProb{\mathbb{P}} \renewcommand{\P}{\mathProb} % need to overwrite stupid paragraph symbol \DeclareMathOperator\mathExp{\mathbb{E}} \newcommand{\E}{\mathExp} \DeclareMathOperator\Uniform{Uniform} \DeclareMathOperator\poly{poly} \DeclareMathOperator\diag{diag} \newcommand{\pa}[1]{ \left({#1}\right) } \newcommand{\ha}[1]{ \left[{#1}\right] } \newcommand{\ca}[1]{ \left\{{#1}\right\} } \newcommand{\norm}[1]{\left\| #1 \right\|} \newcommand{\nptime}{\textsf{NP}} \newcommand{\ptime}{\textsf{P}} \newcommand{\R}{\mathbb{R}} \newcommand{\card}[1]{\left\lvert{#1}\right\rvert} \newcommand{\abs}[1]{\card{#1}} \newcommand{\sg}{\mathop{\mathrm{SG}}} \newcommand{\se}{\mathop{\mathrm{SE}}} \newcommand{\mat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \DeclareMathOperator{\var}{var} \DeclareMathOperator{\cov}{cov} \newcommand\independent{\perp\kern-5pt\perp} \newcommand{\CE}[2]{ \mathExp\left[ #1 \,\middle|\, #2 \right] } \newcommand{\disteq}{\overset{d}{=}} $$ Neural Network Optimization Methods The goal of this post and its related sub-posts is to explore at a high level how the theoretical guarantees of the various optimization methods interact with non-convex problems in practice, where we don’t really know Lipschitz constants, the validity of the assumptions that these methods make, or appropriate hyperparameters. Obviously, a detailed treatment would require delving into intricacies of cutting-edge research. That’s not the point of this post, which just seeks to offer a theoretical survey. I should also caution the reader that I’m not drawing on any of my own experience when discussing “practical” aspects of neural network (NN) optimization, but rather Dr. Goodfellow’s. For the most part, I’ll be summarizing sections 8.5 and 8.6 of the optimization chapter in that book, but I’ll throw in some relevant background and research, too. Further, one departure from practicality that I’ll be making for simplicity is not considering parallelism. All mentioned analyses assume sequential execution, and may not have obvious parallel versions. Even if they do, most bets are off. In part, I’ll also try to address exactly what theoretical guarantees we do have in a NN setting. Lots of work has been done for convex and adversarial online convex optimization, and most NNs are optimized by just throwing such a method at training. Luckily, a lot of very recent work, as of this posting, has addressed exactly what happens in this situation. Setting A NN is a real-valued circuit \(\hat{y}_\bsth\) of computationally efficient, differentiable, and Lipschitz functions parameterized by \(\bsth\). 
This network is trained to minimize a loss, \(J(\bsth)\), based on empirical risk minimization (ERM). This is the hard part, computationally, for training NNs. We are given a set of supervised examples, pairs \(\vx^{(i)},y^{(i)}\) for \(i\in[n]\). Under the assumption that these pairs are coming from some fixed, unknown distribution, some learning can be done by ERM relative to a loss \(\ell\) on our training set, which amounts to the following: \[ \argmin_\bsth J(\bsth) = \argmin_\bsth \frac{1}{n}\sum_{i=1}^n\ell(\hat{y}_\bsth(\vx^{(i)}), y^{(i)})+\Omega(\bsth) \] Above, \(\Omega\) is a regularization term (added to restrict the hypothesis class). Its purpose is generalization. Typically, \(\Omega\) is of the form of an \(L^2\) or \(L^1\) norm. In other cases, it has a more complicated implicit form, such as when we perform model averaging through dropout or weight regularization through early stopping (regularization may also be some kind of smoothing, like gradient clipping). In any case, we will assume that there exist some general strategies for reducing problems with nonzero \(\Omega\) to those where it is zero (see, for example, analysis and references in Krogh and Hertz 1991, Beck and Teboulle 2009, Allen-Zhu 2016). The presence of regularization in general is nuanced, and its application requires deeper analyses for minimization methods, but we will skirt those concerns when discussing practical behavior for the time being. An initial source of confusion about the above machine learning notation is the reuse of variable names in the optimization literature, where instead our parameters \(\bsth\) are points \(\vx\) and our training point errors \(\ell(\hat{y}_\bsth(\vx^{(i)}), y^{(i)})\) are replaced with opaque Lipschitz, differentiable costs \(f_i(\vx)\). We now summarize our general task of (unconstrained) NN optimization of our nonconvex composite (regularized) cost function \(f:\R^d\rightarrow \R\): \[ \argmin_\vx f(\vx) = \argmin_\vx \frac{1}{n}\sum_{i=1}^nf_i(\vx)+\Omega(\vx) \] In a lot of the literature inspiring these algorithms, it’s important to keep straight in one’s head the various types of minimization problems that are being solved, and whether they make assumptions incompatible with the NN environment. Many algorithms are inspired by the general convex \(f\) case. NN losses are usually not convex. Sometimes, full gradients \(\nabla f\) are assumed. A full gradient is intractable for NNs, as it requires going through the entire \(n\)-sized dataset. We are looking for stochastic approximations to the gradient, \(\E\ha{\tilde{\nabla} f}=\nabla f\). Some algorithms assume \(\Omega = 0\), but that’s not usually the case.

Theoretical Convergence

We’ll be looking at gradient descent (GD) optimization algorithms, which assume an initial point \(\vx_0\) and move in nearby directions to reduce the cost. As such, basically all asymptotic rates contain a hidden constant multiplicative term \(f(\vx_0) - \inf f\).

Problem Specification

Before discussing speed, it’s important to know what constitutes a solution. Globally minimizing a possibly non-convex function such as a deep NN loss is NP-hard. Even finding an approximate local minimum of just a quartic multivariate polynomial or showing its convexity is NP-hard (Ahmadi et al 2010). What we do, in theory at least, is instead merely find approximate critical points; i.e., a typical non-convex optimization algorithm would return a point \(\vx_{*}\) that satisfies \(\norm{\nabla f(\vx_*)}\le \epsilon\).
This is an incredibly weak requirement: for NNs, there are significantly more saddle points than local minima, and they have high cost. Luckily, local minima actually concentrate around the global minimum cost for NNs, as opposed to saddles, so recent cutting-edge methods that find approximate local minima are worth keeping in mind. An approximate local minimum \(\vx_*\) has a neighborhood such that any \(\vx\) in that neighborhood will have \(f(\vx)-f(\vx_*)\le \epsilon\). See extended discussion here. We’ll assume that \(f\) is differentiable and Lipschitz. Even though ReLU activations and \(L^1\) regularization may technically invalidate the differentiability, these functions have well-defined subgradients that respect the GD properties we care about. Certain algorithms might further assume \(f\in\mathcal{C}^2\) and that the Hessian is operator-norm Lipschitz or bounded. There are two main runtime costs. The first is the desired degree of accuracy, \(\epsilon\). The second is due to the dimensionality of our input \(d\). Ignoring representation issues, thanks to the circuit structure of \(f\), we can evaluate, for any \(i\in[n]\) and \(\vv\in\R^d\), all of \(f_i(\vx), \nabla f_i(\vx), {\nabla^2 f_i(\vx)} \vv\) in \(O(d)\) time. Finally, since gradients of \(f_i\) approximate gradients of \(f\) only in expectation, reported runtimes are usually worst-case bounds on the work needed before we expect to arrive at an approximate stationary point (expectation taken over the random uniform selection of \(i\) in SGD).

Fundamental Lower Bounds

First, unless \(\ptime = \nptime\), we expect runtime to be at least \(\Omega\pa{\log\frac{d}{\epsilon}}\) due to the aforementioned hardness results. Less obviously, convex optimization lower bounds for smooth functions imply that any first-order non-convex algorithm requires at least \(\Omega(1/\epsilon)\) gradient steps (Bubeck 2014, see also notes here and here). Note that this is nowhere near polynomial time in the bit size of \(\epsilon\)! See also Wolpert and Macready 1997, Blum and Rivest 1988, and a recent re-visiting of the topic in Livni et al 2014. In other words, general non-convex optimization time lower bounds are too broad to apply usefully to NNs, but specific approaches to fixed architectures may be appropriate.

Limitations of Theoretical Descriptions

There are a couple of limitations in using asymptotic, theoretical descriptions of convergence rates to analyze these algorithms. First, the \(\epsilon\) in \(\epsilon\)-approximate critical points above is merely a small piece of the overall generalization error that the NN will experience. As explained in Bousquet and Bottou 2007 (extended version), the generalization error is broken into approximation error (how accurate the entire function class of neural networks for a fixed architecture is in representing the true function we’re learning), estimation error (how far we are from a global optimum among our hypothesis class of functions), and optimization error (our convergence tolerance). As cautioned in the aforementioned paper, the tradeoff between these errors implies that even improvements in optimization convergence rate, like the use of full GD instead of stochastic GD (SGD), may not be helpful if they increase other errors in hidden ways. Second, early stopping might prevent convergence altogether—as mentioned in Goodfellow’s book, gradient norms can increase while training error decreases.
It’s unclear whether we can fold early stopping in as an implicit term in \(\Omega\) and claim that we’re reaching a critical point of this virtual cost function. The fact that theoretical lower-bound rates are not relevant for NN training time (compared to what we see in practice) shows that there is a wide gap between general non-convex, smooth approximate-local-minimum finding and the same problem for NNs.

Existing Algorithms

In the two linked blog posts below, I will review the high-level details of existing algorithms for NN non-convex optimization. Most of these are methods that have been developed for the composite convex smooth optimization problem, so they may not even have any theoretical guarantees for the \(\epsilon\)-approximate critical point or local minimum problem. It turns out that indeed we find a general dichotomy between these GD algorithms:

Algorithms which are practically available, e.g., TensorFlow’s first-order methods, but were initially developed for convex problems, and whose non-convex interpretations are usually only approximate critical point finders

Algorithms which are (as of June 2017) cutting-edge research and not widely available, yet have been designed for finding local minima efficiently in non-convex settings. Nonetheless, they’re still useful to mention since the respective paper implementations might be available and it may be worthwhile to manually implement the optimization, too.

This list of existing algorithms is going to be a bit redundant with the review Ruder 2016, but my intention is to be a bit more comprehensive and rigorous but less didactic in terms of update rules covered. In general, all these rules have the format \(\vx_{t+1}=\vx_t-\eta_t\vg_t\), where \(\eta_t\) is a learning rate and \(\vg_t\) is the gradient descent direction, both making a small local improvement at the \(t\)-th discrete time. Theoretical analysis won’t be presented, but guarantees, assumptions, intuition, and update rules will be described. Proofs will be linked. Technically, most neural networks don’t have smoothness or even differentiability everywhere. While in reality those issues don’t seem to surface in practice, it turns out we can still make some strong statements about first-order optimization methods.
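Since the update rules themselves are deferred to the linked posts, here is a minimal sketch of the shared template \(\vx_{t+1}=\vx_t-\eta_t\vg_t\) with a stochastic gradient, on a synthetic least-squares problem (all data and hyperparameters are made up for illustration):

import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X, w_true = rng.standard_normal((n, d)), rng.standard_normal(d)
y = X @ w_true + 0.01*rng.standard_normal(n)

w = np.zeros(d)
for t in range(1, 20001):
    i = rng.integers(n)                 # uniform sample: E[g] = full gradient
    g = (X[i] @ w - y[i]) * X[i]        # gradient of (1/2)(x_i^T w - y_i)^2
    eta = 0.1 / np.sqrt(t)              # decaying learning rate
    w -= eta * g                        # x_{t+1} = x_t - eta_t g_t
print(np.linalg.norm(w - w_true))       # small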
I've been given the following puzzle. Let $a_1, \ldots, a_{100}$ be real numbers with $a_i=a_{i-1}a_{i+1}$ for $2\leq i \leq 99$. Further suppose that the product of the first $50$ is $27$, and the product of all $100$ numbers is also $27$. Find $a_1+a_2$. I tried the following. Looking at the sequence for a moment, we see: $$ a_2=a_1\, a_3\\ a_3=a_2\,a_4\\ \vdots\\ a_{99}=a_{98}\,a_{100} $$ So $$a_2=\frac 1 {a_{99}} \prod_i a_i =\frac {27}{a_{99}}$$ Looking at the other elements, we find that $$a_3=\frac {a_2} {a_1}, \quad a_4=\frac {a_2}{a_1 a_2}, \quad a_5=\frac {a_2}{a_1a_2a_3},\ \dots,\ a_n=\frac {a_2}{\prod_{i=1}^{n-2} a_i}$$ Then, $$27=\prod_i a_i=\prod_{1\leq i\leq 100} \frac {a_2}{\prod_{k=1}^{i-2}a_k}$$ The last product, I believe, is much more complicated than what I should have gotten... Has anyone run into this puzzle before?
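A quick experiment that may be illuminating: the relation rearranges to $a_{i+1}=a_i/a_{i-1}$, and iterating it numerically (arbitrary nonzero starting values) shows the sequence repeats with period 6, which tames the products considerably:

a1, a2 = 1.7, -0.4
seq = [a1, a2]
for _ in range(10):
    seq.append(seq[-1] / seq[-2])   # a_{i+1} = a_i / a_{i-1}
print(seq)   # ..., a7 == a1, a8 == a2: period 6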
Problem: There are two types of balls, big (B) and small (S), which need to be packed into boxes. One box can contain either: nothing, or 1 S, or 1 B, or 2 S, or 2 B, or 1 B and 2 S. We are given the weight (a float, not an integer) of each ball (in array $w$). There are some constraints on the weights of the boxes and balls: The total weight of a box $ \leq T$. In configurations 4 and 6 above, the weight difference of the two S balls should be $ \leq D$. In configuration 6 above, the total weight of the 2 S balls should be $\geq$ the weight of the 1 B ball. Now I want to minimise the number of boxes used. My approach: I have modelled this as a linear programming problem with binary variables. Let there be $M$ boxes, $N_S$ small balls and $N_B$ big balls. Decision variables: $b1_j$ is a binary variable which is $1$ if box $j$ is in configuration $1$, else $0$; similarly $b2_j, b3_j, b4_j, b5_j, b6_j$ for each box $j$. $x_{ij}$ is a binary variable which is $1$ if the $i$th big ball is in box $j$, else $0$. $y_{ij}$ is a binary variable which is $1$ if the $i$th small ball is in box $j$, else $0$. Constraints: Total weight of any box $\leq T$: $$\sum_{i}x_{ij}w_i + \sum_{i}y_{ij}w_i \leq T \qquad \forall \, j$$ Weight difference constraint: $$y_{ij}w_i-y_{kj} w_k \leq D \qquad \forall \, i,j,k$$ Each ball used exactly once: $$\sum_{j}x_{ij}=1 \qquad \forall \, i$$ $$\sum_{j}y_{ij}=1\qquad \forall \, i$$ Each box is in exactly one configuration: $$b1_j+b2_j+b3_j+b4_j+b5_j+b6_j=1$$ Similarly for the other constraints. I hope you get the way I am trying to solve this. Now the problem is that this approach is taking a long time to solve using PuLP. For just 15 balls and 10 boxes it took minutes, and the time increases exponentially as the number of balls grows. I want to solve this for at least 100 balls. Any help would be appreciated.
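Purely to make the setup concrete, here is a compressed PuLP sketch of a model of this shape, with made-up data. It deliberately omits the configuration variables $b1_j,\ldots,b6_j$ and the 2S-vs-1B rule for brevity, and it rewrites the bare difference constraint with a big-M term so the cap only binds when both small balls share a box; this is my rewrite, not the model from the question:

from pulp import LpProblem, LpMinimize, LpVariable, lpSum

wB = [3.0, 2.5]                 # big-ball weights (made up)
wS = [1.0, 1.2, 0.9, 1.1]       # small-ball weights (made up)
T, D, M = 5.0, 0.3, 4           # capacity, S-difference cap, box count
NB, NS, boxes = range(len(wB)), range(len(wS)), range(M)

prob = LpProblem("balls", LpMinimize)
x = LpVariable.dicts("x", (NB, boxes), cat="Binary")   # big ball i in box j
y = LpVariable.dicts("y", (NS, boxes), cat="Binary")   # small ball i in box j
used = LpVariable.dicts("used", boxes, cat="Binary")

prob += lpSum(used[j] for j in boxes)                  # minimise boxes used
for i in NB: prob += lpSum(x[i][j] for j in boxes) == 1
for i in NS: prob += lpSum(y[i][j] for j in boxes) == 1
for j in boxes:
    prob += (lpSum(wB[i]*x[i][j] for i in NB)
             + lpSum(wS[i]*y[i][j] for i in NS)) <= T
    prob += lpSum(x[i][j] for i in NB) <= 2*used[j]    # at most 2 B per box
    prob += lpSum(y[i][j] for i in NS) <= 2*used[j]    # at most 2 S per box
    for i in NS:
        for k in NS:
            # big-M: the gap constraint is slack unless both S share box j
            prob += (wS[i]*y[i][j] - wS[k]*y[k][j]
                     <= D + T*(2 - y[i][j] - y[k][j]))

prob.solve()
print(sum(used[j].value() for j in boxes))

One common source of slowness in models like this is symmetry among identical boxes; ordering the boxes (e.g. used[j] >= used[j+1]) often helps the solver considerably.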
Let $t_i =$ $1$ if transmitter i is to be constructed and $0$ otherwise, and $c_j =$ $1$ if community j is covered and $0$ otherwise. Max $$z = [10, 15, ..., 10] \cdot c$$ s.t. 1 $$[3.6, 2.3, ..., 3.10] \cdot t \le 15$$ 2 e.g. if $c_1$ is covered, then $t_1$ or $t_3$ is constructed: $$c_1 \to (t_1 \bigvee t_3)$$ $$\iff \neg c_1 \bigvee (t_1 \bigvee t_3)$$ $$\iff 1 - c_1 + t_1 + t_3 \ge 1$$ $$\iff c_1 \le t_1 + t_3$$ Similarly, we have: $$c_2 \le t_1 + t_2$$ $$\vdots$$ $$c_{15} \le t_7$$ 3 e.g. if $t_1$ is constructed, then $c_1$ and $c_2$ are covered: $$t_1 \to (c_1 \bigwedge c_2)$$ $$\iff \neg t_1 \bigvee (c_1 \bigwedge c_2)$$ $$\iff (\neg t_1 \bigvee c_1) \bigwedge (\neg t_1 \bigvee c_2)$$ $$\iff 1 - t_1 + c_1 \ge 1 \ \text{and} \ 1 - t_1 + c_2 \ge 1$$ $$\iff c_1 \ge t_1 \ \text{and} \ c_2 \ge t_1$$ Similarly, we have: $$c_2, c_3, c_5 \ge t_2$$ $$c_1, c_7, c_9, c_{10} \ge t_3$$ $$\vdots$$ $$c_{12}, c_{13}, c_{14}, c_{15} \ge t_7$$ Is that right? From Chapter 3 here. (Update: my try 2 was still wrong, but at least it used existing variables.)

Answer (by Rob Pratt): Yes, \(c_1 \le x_1 + x_3\) correctly models the implication, and your try 4 looks good. By the way, you can derive the linear constraint automatically by rewriting the implication in conjunctive normal form: \begin{align} c_1 &\implies (x_1 \vee x_3) \\ \neg c_1 &\bigvee (x_1 \vee x_3) \\ \neg c_1 &\vee x_1 \vee x_3 \\ (1 - c_1) &+ x_1 + x_3 \ge 1 \\ c_1 &\le x_1 + x_3 \end{align} Your try 2 uses variables that don't exist. Hint: for community 1, you want to model the logical implication \(c_1 \implies (x_1 \vee x_3)\).
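The CNF-to-linear translation is easy to sanity-check by brute force over all 0/1 assignments (a quick script of my own):

from itertools import product

for c1, x1, x3 in product((0, 1), repeat=3):
    implication = (not c1) or (x1 or x3)   # c1 -> (x1 or x3)
    linear = c1 <= x1 + x3                 # the derived constraint
    assert implication == linear
print("equivalent on all 0/1 assignments")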
$$ \newcommand{\bsth}{{\boldsymbol\theta}} \newcommand{\va}{\textbf{a}} \newcommand{\vb}{\textbf{b}} \newcommand{\vc}{\textbf{c}} \newcommand{\vd}{\textbf{d}} \newcommand{\ve}{\textbf{e}} \newcommand{\vf}{\textbf{f}} \newcommand{\vg}{\textbf{g}} \newcommand{\vh}{\textbf{h}} \newcommand{\vi}{\textbf{i}} \newcommand{\vj}{\textbf{j}} \newcommand{\vk}{\textbf{k}} \newcommand{\vl}{\textbf{l}} \newcommand{\vm}{\textbf{m}} \newcommand{\vn}{\textbf{n}} \newcommand{\vo}{\textbf{o}} \newcommand{\vp}{\textbf{p}} \newcommand{\vq}{\textbf{q}} \newcommand{\vr}{\textbf{r}} \newcommand{\vs}{\textbf{s}} \newcommand{\vt}{\textbf{t}} \newcommand{\vu}{\textbf{u}} \newcommand{\vv}{\textbf{v}} \newcommand{\vw}{\textbf{w}} \newcommand{\vx}{\textbf{x}} \newcommand{\vy}{\textbf{y}} \newcommand{\vz}{\textbf{z}} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator\mathProb{\mathbb{P}} \renewcommand{\P}{\mathProb} % need to overwrite stupid paragraph symbol \DeclareMathOperator\mathExp{\mathbb{E}} \newcommand{\E}{\mathExp} \DeclareMathOperator\Uniform{Uniform} \DeclareMathOperator\poly{poly} \DeclareMathOperator\diag{diag} \newcommand{\pa}[1]{ \left({#1}\right) } \newcommand{\ha}[1]{ \left[{#1}\right] } \newcommand{\ca}[1]{ \left\{{#1}\right\} } \newcommand{\norm}[1]{\left\| #1 \right\|} \newcommand{\nptime}{\textsf{NP}} \newcommand{\ptime}{\textsf{P}} \newcommand{\R}{\mathbb{R}} \newcommand{\card}[1]{\left\lvert{#1}\right\rvert} \newcommand{\abs}[1]{\card{#1}} \newcommand{\sg}{\mathop{\mathrm{SG}}} \newcommand{\se}{\mathop{\mathrm{SE}}} \newcommand{\mat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \DeclareMathOperator{\var}{var} \DeclareMathOperator{\cov}{cov} \newcommand\independent{\perp\kern-5pt\perp} \newcommand{\CE}[2]{ \mathExp\left[ #1 \,\middle|\, #2 \right] } \newcommand{\disteq}{\overset{d}{=}} $$ The Semaphore Barrier This is the answer post to the question posed here. A Useful Formalism Reasoning about parallel systems is tough, so to make sure that our solution is correct we’ll have to introduce a formalism for parallel execution. The notion is the following. Given some instructions for threads \(\{t_i\}_{i=0}^{n-1}\), we expect each thread’s individual instructions to execute in sequence, but instructions between threads can be interleaved arbitrarily. In our simplified execution model without recursive functions, it suffices to assume each thread has a fixed set of instructions it will execute. Let this be the sequence \(t_i\), with \(k\)-th instruction \(t_{ik}\), which must be s(j).up or s(j).down for some j. Order of Execution Our parallel machine is free to choose a global order of operations \(g\) among all threads \(\{t_i\}_{i}\), where each \(g_j=t_{ik}\) for all \(j\) and some corresponding \(i,k\). However, the machine has to choose an ordering that is valid. A valid ordering \(g\) satisfies two criteria. The sequencing constraint is as follows: \[ k<m \implies t_{ik} <_g t_{im} \] Above, we define an ordering over operations \(x <_g y\) with respect to some ordering in the natural way: in \(g\), \(x\) comes before \(y\). If a statement holds for all (valid) \(g\), we omit the subscript: the conclusion of the sequencing constraint can be re-written \(t_{ik} < t_{im}\). In addition, for every global ordering of operations \(g\), there’s a corresponding sequence \(s\) (which differs from the un-italicized s(i), the code for the i-th semaphore). The \(j\)-th element in the sequence \(s\) is the state of each semaphore after the \(j\)-th instruction \(g_j\). 
We represent this state as a function from semaphore index to semaphore state. Letting \(s_0=(i\mapsto 0)\): \[s_{j}(i)=s_{j-1}(i)+\begin{cases}1 & g_j=\text{s(i).up}\\ - 1 & g_j=\text{s(i).down}\\ 0 & \text{otherwise}\ \end{cases} \] The above just says that after the i-th semaphore is upped, its value should be 1 more than before, and vice-versa for down. The semaphore constraint requires that the global order \(g\) is chosen such that: \[\forall i,j\,\,\,\,\, s_j(i)\ge 0\] Here, this constraint just makes sure that semaphores actually work as expected: it can’t be that a down call succeeds on a semaphore that had state 0; it should wait until a corresponding up call completes first.

Solution Criteria

A solution (which defines the particular values \(\{t_i\}_{i}\)) must satisfy two criteria.

(Correctness): No thread can finish b.wait() before all threads have called the method: \[\forall i,j,\,\, t_{j1}<t_{i\left\vert t_i\right\vert}\]

(Liveness): Eventually, every thread must complete b.wait(). There must exist at least one valid ordering \(g\) (if there is only one, the parallel processing system is forced to choose it).

Example

Let’s apply the formalism to the warmup solution for two threads:

t0          t1
s(0).up     s(1).up
s(1).down   s(0).down

All the potential orderings respecting sequencing are:

0. s(0).up, s(1).up, s(1).down, s(0).down
1. s(0).up, s(1).up, s(0).down, s(1).down
2. s(1).up, s(0).up, s(1).down, s(0).down
3. s(1).up, s(0).up, s(0).down, s(1).down
4. s(0).up, s(1).down, s(1).up, s(0).down
5. s(1).up, s(0).down, s(0).up, s(1).down

Of these, we notice 4 and 5 violate the semaphore constraint. For 4, the state function after the second step is \(s_2(0)=1, s_2(1)=-1\), and vice-versa for 5. That leaves only 0, 1, 2, 3 as the valid orderings. In turn, we satisfy liveness. Correctness is guaranteed by inspection: the last operations are only executed after the first ones.

Solution 1

\(O(n^2)\) space and \(O(n)\) time. This solution follows directly from reasoning about our formalism. Suppose s(i) was upped only once. For any \(g\) to be valid (no negative values), we must only down it once as well. Moreover, any down is guaranteed to occur after the up, again by the non-negativity requirement. This could be proven formally: every state starts at 0, so if no ups occur before a down, by induction, the state of that semaphore is 0 right before the down and -1 after, a contradiction.

Suppose \(t_{ik}\) is s(ij).up and \(t_{jm}\) is s(ij).down. If we never use s(ij) again, the lemma above holds, in which case for every ordering \(t_{ik} < t_{jm}\). For any sequences \(t_i,t_j\), we must have \(k\in[1, \left\vert t_i\right\vert],m\in[1, \left\vert t_j\right\vert]\). Then by transitivity we conclude: \[t_{i1}\le t_{ik} < t_{jm} \le t_{j\left\vert t_j\right\vert}\] Thus, the presence of s(ij).up on thread i and s(ij).down on thread j guarantees correctness, if applied to all pairs of threads i, j. To guarantee that some ordering exists, we ignore the redundant case s(ii) and sequence our operations in a clear way:

def wait(thread i):
    for all j != i:
        s(ij).up
    for all j != i:
        s(ji).down

This solution is live: an ordering in which all the ups execute (in any order) and then all the downs do exists and is valid.
With 3 threads, this looks like:

t0           t1           t2
s(01).up     s(12).up     s(20).up
s(02).up     s(10).up     s(21).up
s(10).down   s(21).down   s(02).down
s(20).down   s(01).down   s(12).down

In other words, if we represent each pairwise constraint \(\forall i,j,T\triangleq\left\vert t_i\right\vert, t_{j1}<t_{iT}\) explicitly, we get a solution.

Solution 2

\(O(n)\) space and \(O(n)\) time.

This solution can be constructed by augmenting our lemma from before: for any \(g\) to be valid (no negative values), every semaphore must have been upped at least as many times as it has been downed, right before every down. Then, if a single thread is responsible for upping its own semaphore \(n-1\) times, and all other threads down it exactly once, at least one up must have occurred before each of the downs. This lets us recover the transitive inequality from before for correctness. In other words, the following works:

t0          t1          t2
s(0).up     s(1).up     s(2).up
s(0).up     s(1).up     s(2).up
s(1).down   s(2).down   s(0).down
s(2).down   s(0).down   s(1).down

With the same liveness argument, more generally the pseudocode is:

def wait(thread i):
    do n-1 times: s(i).up
    for all j != i: s(j).down

Solution 3

\(O(n)\) space and \(O(1)\) average time, \(O(n)\) worst-case time.

Now we need to start getting a little bit more clever. Previous solutions still performed a quadratic amount of work in total, establishing the quadratic number of inequalities needed for correctness. The goal here will be to get transitivity to do some of our heavy lifting.

def wait(thread i):
    // (1)
    if i < n-1:
        s(i).down
        s(i + 1).up
    else:
        s(0).up
        s(n-1).down
    // (2)
    if i < n-1:
        s(n).down
    else:
        do n-1 times: s(n).up

By the reasoning from before, block (2) guarantees that each s(n).down — the last instruction \(t_{j\left\vert t_j\right\vert}\) of each thread \(j\neq n-1\) — is preceded by at least one s(n).up, i.e. \(t_{(n-1)k} < t_{j\left\vert t_j\right\vert}\) for some \(k\in[3,n+1]\). Then by the sequencing constraint and transitivity we have a global property saying:

\[ \forall j, t_{(n-1)3}<t_{j\left\vert t_j\right\vert} \]

In other words, every thread waits on thread \(n-1\)'s first up of s(n) (eq. 1). Next, we focus on block (1). We apply the lemma from solution 1 for each \(i\) between \(1\) and \(n-2\), which, by virtue of s(i) only being upped once, says that the s(i).down instruction on thread \(i\) follows the s(j + 1).up one on thread \(j\), where \(j = i - 1\). For \(j<n-2\), this statement is \(t_{j2}<t_{i1}\). Next, by the sequencing constraint, we have \(\forall j,t_{j1}<t_{j2}\). Finally, chaining all these inequalities together, we get for \(j<n-1\) (eq. 2):

\[ t_{j1}\le t_{(n-2)1}< t_{(n-2)2} \]

We use the lemma from solution 1 once more on the semaphore s(n-1), upped exactly at \(t_{(n-2)2}\) and downed at \(t_{(n-1)2}\). In turn, we have (eq. 3):

\[ t_{(n-2)2} < t_{(n-1)2}< t_{(n-1)3} \]

Let's recap. All threads already wait on thread \(n-1\). We just need to check that all threads also wait on all threads \(i\) between \(0\) and \(n-2\). For all \(i<n-1\) and all \(j\):

\[ \begin{align} t_{i1} &< t_{(n-2)2} & \text{eq. 2}\\ &<t_{(n-1)3} &\text{eq. 3} \\ &<t_{j\left\vert t_j\right\vert} &\text{eq. 1} \\ \end{align} \]

This finishes the correctness proof. We show liveness by providing an ordering: \(t_{(n-1)1}\) first, then \(t_{i1}, t_{i2}\) for each \(i\) from \(0\) to \(n-2\) in increasing order. Then we let thread \(n-1\) finish, and after that the order doesn't matter.
Here’s what this looks like on 5 threads:

t0          t1          t2          t3          t4
s(0).down   s(1).down   s(2).down   s(3).down   s(0).up
s(1).up     s(2).up     s(3).up     s(4).up     s(4).down
s(5).down   s(5).down   s(5).down   s(5).down   s(5).up
                                                s(5).up
                                                s(5).up
                                                s(5).up

Solution 4

\(O(n)\) space and \(O(1)\) worst-case time.

def wait(thread i):
    // (1)
    if i < n-1:
        s(i).down
        s(i + 1).up
    else:
        s(0).up
        s(n-1).down
    // (2)
    if i > 0:
        s(i).down
        s(i - 1).up
    else:
        s(n-1).up
        s(0).down

The proof is left as an exercise to the reader :)

Solution 5

\(O(1)\) space and \(O(1)\) worst-case time.

This solution works by simulating a mutex with a semaphore, and implementing the barrier with that mutex.

ctr = 0

def wait(thread i):
    if i == 0:
        ctr += 1
        local = ctr
        s(0).up
    else:
        s(0).down
        ctr += 1
        local = ctr
        s(0).up
    if local == n:
        s(1).up
    else:
        s(1).down
        s(1).up

This one introduces control flow that isn't predictable given just i, so our model isn't sufficient to prove that it works.
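If you want to play with these solutions concretely, here is a minimal runnable sketch of Solution 2 in Python — my own translation of the pseudocode onto threading.Semaphore, with a tiny harness; the helper names (barrier_wait, worker) are mine, not from the post:

import threading

n = 4
s = [threading.Semaphore(0) for _ in range(n)]   # s[i] plays the role of s(i)

def barrier_wait(i):
    for _ in range(n - 1):        # do n-1 times: s(i).up
        s[i].release()
    for j in range(n):            # for all j != i: s(j).down
        if j != i:
            s[j].acquire()

log = []

def worker(i):
    log.append(("before", i))
    barrier_wait(i)
    log.append(("after", i))      # must come after every thread's "before"

threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
for t in threads: t.start()
for t in threads: t.join()
assert all(tag == "before" for tag, _ in log[:n])   # correctness, empirically
print(log)

The assertion mirrors the correctness criterion: no "after" entry can appear until all n "before" entries have been logged, because each acquire can only succeed after the corresponding thread has entered the barrier and released.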
Another question from Introducing Monte Carlo Methods with R by Robert and Casella. Exercise 3.6 basically says the following. Suppose $f$ and $g$ are densities. Draw a random sample $X_1, \ldots, X_n \sim g$, then draw $(w_i \mid X_i) \sim \text{Poisson}\big(f(X_i)/g(X_i)\big)$ for $i = 1, 2, \ldots, n$. Also define the $X^*_i$, given $w_1, \ldots, w_n$, as iid draws from the categorical distribution $(X^*_i \mid w_1, \ldots, w_n) \sim \text{Categorical}(\frac{w_1}{\sum_j w_j}, \ldots, \frac{w_n}{\sum_j w_j})$ for all $i$. Based on the fact (which I've managed to show) that $$ E_g\Big(w_i h(X_i)\Big)= E_f\Big(h(X_i)\Big),$$ we're supposed to "deduce that this sampling mechanism is marginally distributed from $f$". In other words, $X^{*}_{i} \sim f$. I'm stuck. What I've tried is (pretend there is an $i$ subscript everywhere): $$ \begin{aligned} \pi(x^*) &= \int_{-\infty}^{\infty} \sum_{w=0}^{\infty} \pi(x^* , w, x) dx\\ &= \int_{-\infty}^{\infty} \sum_{w=0}^{\infty} \pi(x)\pi(w \mid x)\pi(x^* \mid w , x) dx \\ &= \int_{-\infty}^{\infty} \sum_{w=0}^{\infty} g(x)\frac{\exp\Big(-\frac{f(x)}{g(x)}\Big) \Big(\frac{f(x)}{g(x)}\Big)^w}{w!} \frac{w}{\sum_j w_j} dx \end{aligned} $$ The sum is zero for $w=0$, so we change the bounds on the sum. After some manipulation, we get $$ \begin{aligned} \pi(x^*) &= \int_{-\infty}^{\infty} \sum_{w=1}^{\infty} f(x)\frac{\exp\Big(-\frac{f(x)}{g(x)}\Big) \Big(\frac{f(x)}{g(x)}\Big)^{(w-1)}}{(w-1)!} \frac{1}{\sum_j w_j} dx \\ &= \int_{-\infty}^{\infty} \frac{f(x)}{\sum_j w_j} \sum_{w=1}^{\infty} \frac{\exp\Big(-\frac{f(x)}{g(x)}\Big) \Big(\frac{f(x)}{g(x)}\Big)^{(w-1)}}{(w-1)!}dx \\ &= \int_{-\infty}^{\infty} \frac{f(x)}{\sum_j w_j}dx \end{aligned} $$ which seems wrong to me. But I can't tell where I'm going wrong. Any help would be appreciated!
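For a concrete feel for the mechanism, here is a quick simulation sketch (mine, not from the book): it runs the whole scheme once with a normal target and a wider normal proposal, then compares a few quantiles of $X^*$ against those of $f$.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
f = stats.norm(0, 1)     # target density f
g = stats.norm(0, 2)     # instrumental density g (heavier tails, so f/g is bounded)

n = 100_000
x = g.rvs(n, random_state=rng)
w = rng.poisson(f.pdf(x) / g.pdf(x))           # (w_i | X_i) ~ Poisson(f/g)
x_star = rng.choice(x, size=n, p=w / w.sum())  # categorical draw with weights w

for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(q, np.quantile(x_star, q), f.ppf(q))   # should roughly agree

The empirical quantiles of x_star track those of $f$ closely, which is the claim the exercise asks us to deduce.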
So I want to write a document (German) with Charter and Helvetica. The problem is that I need lowercase upright Greek letters in the body as well as in the section headers, in their respective fonts. My attempt with LaTeX:

\documentclass{scrartcl}
\usepackage[utf8]{inputenc}
\usepackage[charter,greeklowercase=upright]{mathdesign}
\usepackage{helvet}
\begin{document}
\section{Text: ä ß $\alpha$ $\beta$ $\mu$ in Helvetica}
Text: ä ß $\alpha$ $\beta$ $\mu$ in Charter
\end{document}

results in:

I guess I would need two things here: 1. a version of Helvetica with matching math support and 2. a way to use more than one math font in LaTeX. Is this possible?

My attempt with XeLaTeX:

\documentclass{scrartcl}
\usepackage{fontspec}
\setmainfont{XCharter}
\setsansfont{Helvetica}
\begin{document}
\section{Text: ä ß α β μ in Helvetica}
Text: ä ß α β μ in Charter
\end{document}

results in:

This is almost as I want it, but XCharter does not have Greek letters. So I guess I would need one of two things here: 1. a version of Charter with Greek letters or 2. a way to substitute those missing glyphs in Charter from a very similar font. I am aware of Charis SIL, but I really don't like its thickness and especially its Greek letters. Any help is much appreciated!
Floating and pendant drops

Floating due to rigid surface tension

The surface tension of a fluid is a true tension that shows up in different phenomena. Water bugs, for example, can float supported by the surface tension. When a paper clip or a bug is floating, the water surface is distorted. Since the water surface acts as a tense membrane, each portion of the perimeter pulls the clip upwards and prevents it from sinking. Therefore what determines whether an object can be supported by surface tension is the perimeter of the distorted surface. This is why surface tension is measured in force per unit length. In particular, the surface tension of water is about 0.0004 pounds/inch. This looks like a small number but is enough to support some objects. For a paper clip, the perimeter is around a couple of inches, which means that the water is capable of supporting a weight of about 0.0008 pounds, which is roughly the weight of a paper clip. If soap is added, the surface tension decreases by roughly a factor of three, which is no longer enough to support the weight of the clip, so it sinks.

Caution: Do not confuse floating due to surface tension with floating due to buoyancy. Ships and wood float because they are less dense than water. A piece of wood will float regardless of its shape. On the other hand, if we change the shape of a paper clip and make it spherical, the perimeter of the deformed surface will be smaller and the surface tension will not be enough to keep it floating.

Liquid-liquid-air interfaces

Neumann’s construction for a liquid drop (2) on a liquid substrate (1):

<math>\frac{\sigma _{1}}{\sin \theta _{1}}=\frac{\sigma _{2}}{\sin \theta _{2}}=\frac{\sigma _{12}}{\sin \theta _{12}}</math>

Small drops

When gravity is negligible (small drops) the drop is spherical and Neumann’s vector relation is satisfied:

<math>\vec{\sigma }_{A}+\vec{\sigma }_{B}+\vec{\sigma }_{AB}=\vec{0}</math>

The relevant capillary lengths are:

<math>\begin{align} & \kappa _{A}^{-1}=\sqrt{\frac{\sigma _{A}}{\rho _{A}g}} \\ & \kappa _{B}^{-1}=\sqrt{\frac{\sigma _{B}}{\rho _{B}g}} \\ & \kappa _{AB}^{-1}=\sqrt{\frac{\sigma _{AB}}{\left( \rho _{A}-\rho _{B} \right)g}} \\ \end{align}</math>

Large drops

Gravity flattens the drop. The “area” of the drop is the volume divided by the thickness:

<math>\frac{\Omega }{e}</math>

The surface energy is:

<math>\left( \sigma _{B}-\sigma _{A}-\sigma _{AB} \right)\cdot \frac{\Omega }{e}=S\cdot \frac{\Omega }{e}</math>

The total energy also depends on the thickness e:

<math>F\left( e \right)=\left[ \frac{1}{2}\rho _{A}g\left( e-{e}' \right)^{2}+\frac{1}{2}\left( \rho _{B}-\rho _{A} \right)g{e}'^{2}-S \right]\frac{\Omega }{e}</math>

The balance of hydrostatic pressures gives:

<math>\rho _{A}ge=\rho _{B}g{e}'</math>

which gives:

<math>F\left( e \right)=\left[ \frac{1}{2}\tilde{\rho }ge^{2}-S \right]\frac{\Omega }{e}</math>

with

<math>\tilde{\rho }=\frac{\rho _{A}}{\rho _{B}}\left( \rho _{B}-\rho _{A} \right)</math>

Minimizing F(e) at constant volume Ω gives:

<math>\frac{1}{2}\tilde{\rho }ge_{c}^{2}=-S</math>

which looks like the solution for the sessile drop except for the density. (Equation 2.8 in de Gennes)

Pendant drops

A drop adopts its shape based on two effects: surface tension, which strives to minimize energy with a spherical form, and the force of gravity, which distorts that spherical shape. There are several methods currently used to measure surface tension: the Pendant Drop Method and the more commonly used Sessile Drop Method. We’ll discuss the former.
The main idea of the pendant drop method is that we let a drop dangle from the end of a capillary tube, taking the shape shown in the figure below (Fig. 2.20). The pressures balancing out in the drop are Laplacian and hydrostatic. Taking <math>C</math> to be the curvature of the surface of the drop, <math>\sigma</math> the surface tension, and <math>\rho</math> the density of the liquid, we obtain the following:

<math>\sigma C=\rho g z\,\!</math>

We can express the curvature in cylindrical coordinates in the following manner:

<math>C=-\frac{r_{zz}}{(1+r_z^2)^{3/2}}+\frac{1}{r(1+{r_z}^2)^{1/2}}</math>

where <math>r_z=\frac{dr}{dz}</math> and <math>r_{zz}=\frac{d^2 r}{dz^2}</math>

Using the two aforementioned formulas, we can solve the problem numerically by treating the surface tension as an adjustable parameter, and slightly changing its value until our results agree with experiments. A similar, but slightly different, equation can be obtained by considering the difference in density between the fluids at the interface:

<math>\sigma = \Delta\rho\, g R_0^2/b </math>

where <math>\sigma</math> is the surface tension, <math>\Delta\rho</math> the difference in density between the fluids at the interface, <math>g</math> the gravitational acceleration, <math>R_0</math> the radius of curvature of the drop at the apex, and <math>b</math> a shape factor. The shape factor <math>b</math> can be defined through the Young-Laplace equation expressed as 3 dimensionless first-order equations, as shown in the figure below.

Another way to determine the value of <math>\sigma</math> is an experiment many of us have done in our early physics classes. A pendant drop on the capillary can break loose when the force of gravity exceeds the capillary force <math>2\pi R \sigma</math>, where <math>R</math> is the inner radius of the capillary. By measuring the weight of the fallen drop, one can in principle compute the value of <math>\sigma</math>. As straightforward as it seems, the experiment runs into numerous complications due to the nature of drop separation. Still, remarkably, it gives us well-calibrated drops whose radius can be calculated in the following manner:

<math>\frac{4}{3}R_g^3 \rho \pi g= 2 \pi \sigma R</math>

<math>R_g=(\frac{3}{2} \frac{\sigma R}{\rho g} )^{1/3}</math>

<math>R_g=(\frac{3}{2} \kappa^{-2} R)^{1/3}</math>

In practice, we often find that only a certain fraction <math>\alpha</math> of the drop falls off. This value can be looked up and is usually around 60%, depending on the type of liquid. Taking this into account, we can write:

<math>R_g=(\frac{3}{2 \alpha} \kappa^{-2} R)^{1/3}</math>

Throbbing Oil Drop Phenomenon

Last year a phenomenon that had been witnessed for centuries was explained at MIT by Professors Roman Stocker and John Bush. The basic idea is that if you mix a surfactant into oil and put a drop of it on the surface of water, the oil drop will expand and contract due to evaporation-induced changes in surface tension.

"Think of the oil detergent drop as a small lens with a rounded bottom. The surfactant in the drop moves to the bottom surface of the lens, where it interacts with the water to decrease the surface tension where oil meets water. This change in tension increases the forces pulling on the outer edges of the drop, causing the drop to expand. The center of the drop is deeper than the edges, so more surfactant settles there, reducing the surface tension correspondingly. This causes the oil and surfactant near the outer edges of the drop to circulate. This circulation creates a shear (think of it as two velocities going in opposite directions), which generates very tiny waves rolling outward toward the edge.
When these waves reach the edge, they cause small droplets to erupt and escape onto the water surface outside the drop. Videomicroscopy - essentially, attaching a video camera to a microscope - was critical in observing this step in the process. Those droplets of oil and surfactant disperse on the water and decrease the surface tension of the water surface, so the drop contracts. As the surfactant evaporates, the surface tension of the water increases again, and the system is reset. Forces pull at the outer edges of the lens, and the cyclical process begins again."

A video of the phenomenon is available at this link, as well as the rest of the release article. Their work was published in the July 25, 2007 Journal of Fluid Mechanics. [2]

Any idea why the interface ruptures in the manner it does and sprays tiny oil droplets into the water phase? If the surfactant is evaporating, it would have to be on the surface, and when it evaporates that part of the surface would shrink back. Would a surfactant molecule evaporating really have that large an effect on the surface of the drop?

Can anyone think of ideas for how this motion could be useful? Some kind of micropump that stops pumping when it runs out of surfactant, or is controlled by its need for air?

Amazing Dynamic Surface Effects

Drops on surfaces can have some very unusual behavior when subject to non-equilibrium forces. Here are two very interesting and fun experiments conducted and published in recent years:

1) The Leidenfrost effect [3] applied to super-heated non-uniform surfaces can actually cause water droplets to go against gravity when the surface is tilted. Results of such experiments were published in 2006. [4] This shows how interactions between heat transfer, the liquid-vapor transition and surface geometry can lead to a very cool effect, where a droplet never comes into contact with the surface it's "sitting on" and keeps moving in a counter-intuitive direction (against gravity).

2) In an even more counter-intuitive fashion, a droplet of liquid will not coalesce with a bath of the same liquid when falling on it, as long as the bath is accelerated in the right direction and at the right amplitude at the expected moment of contact. This leads to very interesting examples of what people call "bouncing droplets" or even "dancing droplets". The full story on this phenomenon can be read in Couder et al.'s 2005 work in PRL [5], and shows how the finite time it takes for a thin film of air under a drop to leave creates forces large enough to lift the drop off the surface of a liquid bath. If you can find the movies of these bouncing and orbiting droplets, please post the links.
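Returning to the drop-weight relations in the pendant-drop section above, here is a quick numeric sketch (my own illustration; the property values for water and the capillary radius are assumed, and α = 0.6 is the rough figure quoted above):

import math

sigma = 0.072      # surface tension of water, N/m
rho = 1000.0       # density of water, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2
R = 0.5e-3         # inner radius of the capillary, m (assumed)
alpha = 0.6        # fraction of the drop that actually detaches

kappa_inv = math.sqrt(sigma / (rho * g))          # capillary length, ~2.7 mm
R_g = (1.5 / alpha * kappa_inv**2 * R) ** (1/3)   # R_g = (3/(2 alpha) kappa^-2 R)^(1/3)
print(f"capillary length = {kappa_inv*1e3:.2f} mm, drop radius = {R_g*1e3:.2f} mm")

For these values the detaching drop comes out at roughly 2 mm in radius, which is the familiar size of a water drop falling from a thin tube.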
I'm trying to analyze the running time of a function. I can express the function in terms of a summation, but I have no clue how to simplify this summation down to get the run-time in terms of big $O$ or $\Theta$. I have this pseudo-code below:

f(n)
    for int i = 1 to n
        for int j = i to n
            doThis(n - j)
        end
    end
end

Assuming that doThis(k) takes $c \cdot k$ operations for some constant $c > 0$, I can express the nested for loop as the summation:

$\sum_{i=1}^n \sum_{j=i}^n c\cdot(n-j)$

How does one simplify this summation? It has been a while since I touched math, so I'm trying to get an idea here. Any help would be appreciated, thank you!
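To get a feel for the order of growth before simplifying, one can count the operations numerically (a quick sketch with $c=1$); the ratio against $n^3$ settles near a constant ($1/6$), which suggests $\Theta(n^3)$:

def count_ops(n):
    # total work: sum_{i=1}^{n} sum_{j=i}^{n} (n - j), with c = 1
    return sum(n - j for i in range(1, n + 1) for j in range(i, n + 1))

for n in (10, 100, 1000):
    print(n, count_ops(n), count_ops(n) / n ** 3)   # ratio tends to 1/6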
Let $\Omega \subset \mathbb R^n$, $n>1$, be a bounded domain with smooth boundary. Let $u \in C^1 (\bar \Omega)$ be harmonic in $\Omega$.

(a) Prove $\max_{\bar \Omega} |\nabla u|^2=\max_{\partial \Omega} |\nabla u|^2$.

(b) Let $u$ satisfy $u \in C^2 (\mathbb R^n)$, $\Delta u =0$ on $\mathbb R^n$. Show that if $u \in L^2 (\mathbb R^n)$, then $u$ is identically equal to zero.

Thoughts: For part (a), I think I need to apply the maximum principle, but based on the maximum principle, do I need to show that $\nabla u$ is harmonic and is in $C^2$?

Further thoughts: I can't comment because of low points. For John Ma: we know the definition of a $C^1$ function, so we can just require $u$ to satisfy $\Delta u =0$.

Final thoughts: Just checked my previous homework, and I think I need to show $|\nabla u |^2$ is subharmonic, then apply the maximum principle for subharmonic functions. Any hints would be appreciated.
So $\psi(x = a)$ is not a state, because it isn't a function --- it's just a number. In fact, evaluating this number, you've just asked "is the state of the particle 0.368?" Hopefully you see that this is a nonsensical statement!

One of the central rules of quantum mechanics is that, when a measurement of an observable $O$ is made, the outcome is one of the eigenvalues of $\hat{O}$, and the system collapses into the corresponding eigenstate of $\hat{O}$. So if position is measured, and found to be $x=a$, then the system is going to collapse into the eigenstate of position with eigenvalue $a$. Informally, this state is the delta function.

$$\psi(x) = \delta(x-a) \qquad \mathrm{after\ measurement}$$

To see this, note that we're looking for a function $\psi$ that, when multiplied by $x$, returns the same function scaled by a factor $a$. At first glance it might appear that there is no such function: $\sin x$ is certainly not proportional to $x \sin x$, and $1/x^2$ looks rather different from $1/x$, etc. The delta function is the only thing that works, because it's only non-zero at one specific point. Thus, multiplying it by $x$ simply scales the spike by the value $a$ — the point where the spike sits — while leaving everything else zero. This is equivalent to multiplying the whole function by $a$. Hence it is an eigenstate.
The group of rotations of an $N$-dimensional space is $SO(N)$. Being a symmetry of nature, classical systems transform according to representations of $SO(N)$. Quantum mechanics, on the other hand, allows systems which transform according to the universal covering groups of classical symmetries. This is the reason why, in three-dimensional quantum theory, we get representations of $SU(2)$ which are not true representations of $SO(3)$ (the half-integer spin representations). More generally, in quantum theory we have representations of $Spin(N)$, the double cover of $SO(N)$, with $Spin(N)/\mathbb{Z}_2 \cong SO(N)$.

However, in the case of two spatial dimensions, $SO(2) \cong U(1)$, and the universal covering of $U(1)$ is not $Spin(2)$ but rather $\mathbb{R}$. In contrast to $SO(2)$ or $U(1)$, which allow only discrete values of the two-dimensional spin, $u = e^{i n \theta}$, $n \in \mathbb{Z}$, $0\le \theta <2 \pi$, the universal covering $\mathbb{R}$ allows a continuum of spin values. This is the basic reason for the fractionalization of spin in two dimensions.
Imagine doing a hypothetical experiment that would lead to the discovery of electron spin. Your laboratory has just purchased a microwave spectrometer with variable magnetic field capability. We try the new instrument with hydrogen atoms, using a magnetic field of $10^4$ Gauss, and look for the absorption of microwave radiation as we scan the frequency of our microwave generator (Figure \(\PageIndex{1}\)). Finally we see absorption at a microwave photon frequency of \(28 \times 10^9\, Hz\) (28 gigahertz).

This result is really surprising from several perspectives. Each hydrogen atom is in its ground state, with the electron in a 1s orbital. The lowest-energy electronic transition that we predict based on existing theory (the transition from the ground state \(\psi _{100}\) to \(\psi _{21m}\)) requires an energy that lies in the vacuum ultraviolet (the lowest Lyman line at 121 nm), not the microwave, region of the spectrum. Furthermore, when we vary the magnetic field we note that the frequency at which the absorption occurs varies in proportion to the magnetic field.

The Zeeman Effect: Breaking Degeneracy

Magnetism results from the circular motion of charged particles. This property is demonstrated on a macroscopic scale by making an electromagnet from a coil of wire and a battery. Electrons moving through the coil produce a magnetic field, which can be thought of as originating from a magnetic dipole or a bar magnet. Electrons in atoms also are moving charges with angular momentum, so they too produce a magnetic dipole, which is why some materials are magnetic. A magnetic dipole interacts with a magnetic field, and the energy of this interaction is given by the scalar product of the magnetic dipole moment and the magnetic field, \(\vec{B}\).

\[E_B = -\vec{\mu}_m \cdot \vec{B} \label {8.4.0}\]

Pieter Zeeman was one of the first to observe the splittings of spectral lines in a magnetic field caused by this interaction. Consequently such splittings are known as the Zeeman effect (Figure \(\PageIndex{2}\)). The \(m_l\) quantum number degeneracy of the hydrogen atom is removed by the externally applied magnetic field. For example, the three hydrogen atom eigenstates \(|\psi _{211} \rangle\), \(|\psi _{21-1} \rangle\), and \(|\psi _{210} \rangle \) are degenerate in zero magnetic field, but have different energies in an externally applied magnetic field (Figure \(\PageIndex{2}\)). The \(m_l = 0\) state, for which the component of angular momentum, and hence also the magnetic moment, in the external field direction is zero, experiences no interaction with the magnetic field. The \(m_l = +1\) state, for which the angular momentum in the z-direction is \(+\hbar\) and the magnetic moment points in the opposite direction, against the field, experiences a raising of energy in the presence of a field. Maintaining the magnetic dipole against the external field direction is like holding a small bar magnet with its poles aligned exactly opposite to the poles of a large magnet. It is a higher-energy situation than when the magnetic moments are aligned with each other.

Electron Spin and the Stern-Gerlach Experiment

To discover new things, experimentalists sometimes must explore new areas in spite of contrary theoretical predictions. Our theory of the hydrogen atom at this point gives no reason to look for absorption in the microwave region of the spectrum.
By doing the crazy experiment outlined above, we discovered that when an electron is in the \(|1s \rangle\) orbital of the hydrogen atom, there are two different states that have the same energy. When a magnetic field is applied, this degeneracy is removed, and microwave radiation can cause transitions between the two states. In the rest of this section, we see what can be deduced from this experimental observation. This experiment actually could be done with electron spin resonance spectrometers available today (Figure \(\PageIndex{1}\)).

To explain our observations, we need a new model for the hydrogen atom. Our original model accounted for the motion of the electron and proton in our three-dimensional world; the new model needs something else that can give rise to an additional Zeeman-like effect. We need a charged particle with angular momentum to produce a magnetic moment, just like that obtained from the orbital motion of the electron. We can postulate that our observation results from a motion of the electron that was not considered in the last section - electron spin. We have a charged particle spinning on its axis. We then have charge moving in a circle, angular momentum, and a magnetic moment, which interacts with the magnetic field and gives us the Zeeman-like effect that we observed (Figure \(\PageIndex{3}\)).

In 1922, Otto Stern and Walther Gerlach performed an experiment which unintentionally led to the discovery that electrons have an intrinsic spin of their own, in addition to their orbital motion in the atom. Today, this electron spin is indicated by the fourth quantum number, also known as the Electron Spin Quantum Number and denoted by \(m_s\). In 1925, Samuel Goudsmit and George Uhlenbeck made the claim that unexamined features of the hydrogen spectrum might be explained by assuming that each electron acts as if it has a spin, which can be denoted by an arrow pointing up (+1/2) or an arrow pointing down (-1/2). The Stern-Gerlach experiment, which demonstrated this, was done with a beam of vaporized silver atoms that split into two beams after passing through a magnetic field (Figure \(\PageIndex{4}\)). The explanation is that an electron has a magnetic field due to its spin. When electrons that have opposite spins are put together, there is no net magnetic field because the two spins cancel each other out. The silver atom used in the experiment has a total of 47 electrons, 23 of one spin type and 24 of the opposite. Because electrons of opposite spin cancel each other out, the one unpaired electron in the atom determines the spin.

Electron "spin" does not Originate from Actual Spinning

The electron's hypothetical surface would have to be moving faster than the speed of light for it to rotate quickly enough to produce the observed angular momentum. Hence, an electron is not simply a spinning ball or ring, and electron spin appears to be an intrinsic angular momentum of the particle rather than a consequence of the rotation of a charged particle as Figure \(\PageIndex{3}\) suggests. Despite this, the term "spin" persists in quantum vernacular.

Spin Eigenstates and Eigenvalues

To describe electron spin from a quantum mechanical perspective, we must have spin wavefunctions and spin operators. The properties of the spin states are deduced from experimental observations and by analogy with our treatment of the states arising from the orbital angular momentum of the electron.
The important feature of the spinning electron is the spin angular momentum vector, which we label \(S\) by analogy with the orbital angular momentum \(L\). We define spin angular momentum operators with the same properties that we found for the rotational and orbital angular momentum operators. After all, angular momentum is angular momentum, no matter if it is orbital or spin in nature. We found that

\[ \hat {L}^2 | Y^{m_l} _l \rangle = l(l + 1) \hbar^2 | Y^{m_l}_l \rangle \label {8.4.1}\]

so by analogy for the spin states, we must have

\[ \hat {S}^2 | \sigma ^{m_s} _s \rangle = s( s + 1) \hbar ^2 | \sigma ^{m_s}_s \rangle \label {8.4.2}\]

where \(\sigma\) is a spin wavefunction with quantum numbers \(s\) and \(m_s\) that obey the same rules as the quantum numbers \(l\) and \(m_l\) associated with the spherical harmonic wavefunction \(Y\). We also found that the projection of the orbital angular momentum on the z-axis is

\[ \hat {L}_z | Y^{m_l}_l \rangle = m_l \hbar | Y^{m_l}_l \rangle \label {8.4.3}\]

so by analogy, we must have a similar projection for the spin angular momentum:

\[ \hat {S}_z | \sigma ^{m_s}_s \rangle = m_s \hbar | \sigma ^{m_s}_s \rangle \label {8.4.4}\]

Since \(m_l\) ranges in integer steps from \(-l\) to \(+l\), by analogy \(m_s\) ranges in integer steps from \(-s\) to \(+s\). In our hypothetical experiment, we observed one absorption transition, which means there are two spin states. Consequently, the two values of \(m_s\) must be \(+s\) and \(-s\), and the difference in \(m_s\) for the two states, labeled f and i below, must be the smallest integer step, i.e., 1. The result of this logic is that

\[\begin{align} m_{s,f} - m_{s,i} &= 1 \nonumber\\[4pt] (+s) - (-s) &= 1 \nonumber\\[4pt] 2s &= 1 \nonumber\\[4pt] s &= \dfrac {1}{2} \label {8.4.5} \end{align}\]

Therefore our conclusion is that the spin quantum number is \(s = 1/2\) and the values for \(m_s\) are +1/2 and -1/2. The two spin states correspond to spinning clockwise and counter-clockwise, with positive and negative projections of the spin angular momentum onto the z-axis. The state with a positive projection, \(m_s = +1/2\), is called \(\alpha\); the other is called \(\beta\). These spin states are arbitrarily labeled \(\alpha\) and \(\beta\), and the associated spin wavefunctions also are designated by \(|\alpha \rangle \) and \(| \beta \rangle\).

From Equation \(\ref{8.4.4}\), the magnitude of the z-component of spin angular momentum, \(S_z\), is given by

\[S_z = m_s \hbar \label {8.4.6}\]

so the value of \(S_z\) is \(+\hbar/2\) for spin state \(\alpha\) and \(-\hbar/2\) for spin state \(\beta\). Hence, we conclude that the \(\alpha\) spin state, where the magnetic moment is aligned against the external field direction, has a greater energy than the \(\beta\) spin state.

Even though we do not know their functional forms, the spin wavefunctions are taken to be normalized and orthogonal to each other.

\[ \int \alpha ^* \alpha \,d \tau _s = \int \beta ^* \beta \,d \tau _s = 1 \label {8.4.7a}\]

or in braket notation

\[ \langle \alpha | \alpha \rangle = \langle \beta | \beta \rangle =1 \label{8.4.7b}\]

and

\[ \int \alpha ^* \beta\, d \tau _s = \int \beta ^* \alpha\, d \tau _s = 0 \label {8.4.8a}\]

or in braket notation

\[ \langle \alpha | \beta \rangle = \langle \beta | \alpha \rangle = 0 \label{8.4.8b}\]

where the integral is over the spin variable \(\tau _s\).

Contributors

Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
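As a closing numeric check, the spin-1/2 picture reproduces the 28 GHz absorption quoted at the start of this section: the energy gap between \(\alpha\) and \(\beta\) in a field \(B\) is \(\Delta E = g_e \mu_B B\) (the two \(m_s\) values differ by 1), so the photon frequency is \(\nu = g_e \mu_B B / h\). A short sketch with standard constants:

# Electron spin resonance frequency for B = 10^4 Gauss = 1 T (sketch)
g_e = 2.0023           # electron g-factor
mu_B = 9.274e-24       # Bohr magneton, J/T
h = 6.626e-34          # Planck constant, J s
B = 1.0                # 10^4 Gauss expressed in tesla

nu = g_e * mu_B * B / h
print(f"resonance frequency = {nu:.3e} Hz")   # ~2.8e10 Hz = 28 GHz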
It is important to understand that the only problem here is to obtain the extrinsic parameters. Camera intrinsics can be measured off-line, and there are lots of applications for that purpose.

What are camera intrinsics?

The camera intrinsic parameters are usually collected in the camera calibration matrix, $K$. We can write

$$K = \begin{bmatrix}\alpha_u&s&u_0\\0&\alpha_v&v_0\\0&0&1\end{bmatrix}$$

where $\alpha_u$ and $\alpha_v$ are the scale factors in the $u$ and $v$ coordinate directions, proportional to the focal length $f$ of the camera: $\alpha_u = k_u f$ and $\alpha_v = k_v f$. $k_u$ and $k_v$ are the number of pixels per unit distance in the $u$ and $v$ directions. $c=[u_0,v_0]^T$ is called the principal point, usually the coordinates of the image center. $s$ is the skew, which is only non-zero if the $u$ and $v$ axes are non-perpendicular. A camera is calibrated when its intrinsics are known. This can be done easily, so it is not considered a goal of computer vision but a trivial off-line step.

What are camera extrinsics?

The camera extrinsics, or external parameters, $[R|t]$ form a $3\times4$ matrix that corresponds to the Euclidean transformation from a world coordinate system to the camera coordinate system. $R$ represents a $3\times3$ rotation matrix and $t$ a translation. Computer vision applications focus on estimating this matrix.

$$[R|t] = \begin{bmatrix} R_{11}&R_{12}&R_{13}&T_x\\R_{21}&R_{22}&R_{23}&T_y\\R_{31}&R_{32}&R_{33}&T_z \end{bmatrix}$$

How do I compute the homography from a planar marker?

A homography is a homogeneous $3\times3$ matrix that relates a 3D plane and its image projection. If we have the plane $Z=0$, the homography $H$ that maps a point $M=(X,Y,0)^T$ on this plane to its corresponding 2D point $m$ under the projection $P=K[R|t]$ satisfies

$$\tilde m = K \begin{bmatrix} R^1 & R^2 & R^3 & t \end{bmatrix} \begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix} = K \begin{bmatrix}R^1&R^2&t\end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$$

$$H = K \begin{bmatrix}R^1 & R^2 & t \end{bmatrix}$$

In order to compute the homography we need world-image point pairs. If we have a planar marker, we can process an image of it to extract features and then detect those features in the scene to obtain matches. We need just 4 pairs to compute the homography using the Direct Linear Transform.

If I have the homography, how can I get the camera pose?

The homography $H$ and the camera pose $K[R|t]$ contain the same information, and it is easy to pass from one to the other. The last column of both is the translation vector. Columns one ($H^1$) and two ($H^2$) of the homography are also columns one ($R^1$) and two ($R^2$) of the camera pose matrix. Only column three, $R^3$, of $[R|t]$ remains, and since $R$ has to be orthogonal, it can be computed as the cross product of columns one and two:

$$R^3 = R^1 \otimes R^2$$

Due to the scale ambiguity of $H$ it is necessary to normalize $[R|t]$, dividing by, for example, element [3,4] of the matrix.
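To make the last step concrete, here is a sketch in Python/NumPy of one common way to implement the decomposition (the function name is mine; as a variant of the normalization above, I fix the scale using the norms of the rotation columns, and re-project onto SO(3) since noisy homographies break exact orthogonality):

import numpy as np

def pose_from_homography(H, K):
    # Remove the intrinsics: K^-1 H ~ [R^1 R^2 t] up to scale
    A = np.linalg.inv(K) @ H
    s = 2.0 / (np.linalg.norm(A[:, 0]) + np.linalg.norm(A[:, 1]))
    r1 = s * A[:, 0]
    r2 = s * A[:, 1]
    t = s * A[:, 2]
    r3 = np.cross(r1, r2)                 # R^3 = R^1 x R^2
    R = np.column_stack([r1, r2, r3])
    # Project R onto the nearest true rotation matrix via SVD
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t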
I know that it can also be evaluated using a Taylor expansion, but I intentionally want to solve it using L'Hôpital's rule: $$ \lim\limits_{x\to 0} \left(\frac{\sin x}{x}\right)^{\frac{1}{1-\cos x}} = \lim\limits_{x\to 0}\exp\left( \frac{\ln(\frac{\sin x}{x})}{1-\cos x} \right)$$ Now, from continuity and L'Hôpital's rule: $$\lim\limits_{x\to 0} \frac{\ln(\frac{\sin x}{x})}{1-\cos x} = \lim\limits_{x\to 0} \frac{\frac{x}{\sin x}\cdot\frac{x\cos x - \sin x}{x^2}}{\sin x} = \lim\limits_{x\to 0}\frac{\frac{x\cos x - \sin x}{x\sin x}}{\sin x}$$ This is where I got stuck. If I'm not mistaken the limit is $-\frac{1}{3}$, so the original one is $e^{-\frac{1}{3}}$. What should I do differently (or what is wrong with my calculation)? Thanks
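For reference, the target value can be double-checked symbolically; this short sympy sketch agrees with $e^{-1/3}$:

import sympy as sp

x = sp.symbols('x')
expr = (sp.sin(x) / x) ** (1 / (1 - sp.cos(x)))
print(sp.limit(expr, x, 0))   # exp(-1/3)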
+ − + − If A is a c-quasirandom set of density <math>\delta</math> and c is sufficiently small, then A contains roughly the same number of [[corners]] as a random subset of <math>[n]</math> of density <math>\delta. </math> + ==A possible definition of quasirandom subsets of <math>[3]^n</math>== ==A possible definition of quasirandom subsets of <math>[3]^n</math>== Latest revision as of 06:08, 8 July 2010 Introduction Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the density increment method or on some kind of generalization of Szemerédi's regularity lemma. In general, one has some kind of parameter associated with a set, which in our case will be the number of combinatorial lines it contains, and one would like a deterministic definition of the word "quasirandom" with the following key property. Every quasirandom set [math]\mathcal{A}[/math] has roughly the same value of the given parameter as a random set of the same density. Needless to say, this is not the only desirable property of the definition, since otherwise we could just define [math]\mathcal{A}[/math] to be quasirandom if it has roughly the same value of the given parameter as a random set of the same density. The second key property is this. Every set [math]\mathcal{A}[/math] that fails to be quasirandom has some other property that we can exploit. These two properties are already discussed in some detail in the article on the density increment method: this article concentrates more on examples of quasirandomness in other contexts, and possible definitions of quasirandomness connected with the density Hales-Jewett theorem. A possible definition of quasirandom subsets of [math][3]^n[/math] As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function. Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n. (So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math]) As ever, it is easy to prove positivity. To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. 
Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect). Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined).
CDS 140a Winter 2014 Homework 8

R. Murray, ACM 101b/AM 125b/CDS 140a, Winter 2014
Issued: 25 Feb 2014 (Tue)
Due: 5 Mar 2014 (Wed) @ noon

Note: In the upper left hand corner of the second page of your homework set, please put the number of hours that you spent on this homework set (including reading).

Perko, Section 4.1, Problem 1:

(a) Consider the two vector fields

<amsmath> f(x) = \begin{bmatrix} -x_2 \\ x_1 \end{bmatrix}, \qquad g(x) = \begin{bmatrix} -x_2 + \mu x_1 \\ x_1 + \mu x_2 \end{bmatrix}.</amsmath>

Show that $\|f-g\|_1 = |\mu| (\max_{x \in K} \|x\| + 1)$, where $K \subset {\mathbb R}^2$ is a compact set containing the origin in its interior.

(b) Show that for $\mu \neq 0$ the systems

<amsmath> \aligned \dot x_1 &= -x_2 \\ \dot x_2 &= x_1 \endaligned \quad\text{and}\quad \aligned \dot x_1 &= -x_2 + \mu x_1 \\ \dot x_2 &= x_1 + \mu x_2 \endaligned</amsmath>

are not topologically equivalent. Hint: Let $\phi_t$ and $\psi_t$ be the flows defined by these two systems and assume that there is a homeomorphism $H:{\mathbb R}^2 \to {\mathbb R}^2$ and a strictly increasing, continuous function $t(\tau)$ mapping $\mathbb R$ onto $\mathbb R$ such that $\phi_{t(\tau)} = H^{-1} \circ \psi_\tau \circ H$. Use the fact that $\lim_{t\to\infty} \phi_t(1,0) \neq 0$ and that for $\mu < 0$, $\lim_{t \to \infty} \psi_t(x) = 0$ for all $x \in {\mathbb R}^2$ to arrive at a contradiction.

Consider the dynamical system

<amsmath> m \ddot q + b \dot q + k q = u(t), \qquad u(t) = \begin{cases} 0 & t = 0, \\ 1 & t > 0, \end{cases} \qquad q(0) = \dot q(0) = 0,</amsmath>

which describes the "step response" of a mass-spring-damper system.

(a) Derive the differential equations for the sensitivities of <amsmath>q(t) \in {\mathbb R}</amsmath> to the parameters <amsmath>b</amsmath> and <amsmath>k</amsmath>. Write out explicit systems of ODEs for computing these, including any initial conditions. (You don't have to actually solve the differential equations explicitly, though it is not so hard to do so.)

(b) Compute the sensitivities and the relative (normalized) sensitivities of the equilibrium value of <amsmath>q_e</amsmath> to the parameters <amsmath>b</amsmath> and <amsmath>k</amsmath>. You should give explicit formulas in terms of the relevant parameters and initial conditions.

(c) Sketch the plots of the relative sensitivities <amsmath>S_{q,b}</amsmath> and <amsmath>S_{q,k}</amsmath> as a function of time for the nominal parameter values <amsmath>m = 1</amsmath>, <amsmath>b = 2</amsmath>, <amsmath>k = 1</amsmath>.

Perko, Section 4.2, Problem 4: Consider the planar system

<amsmath> \aligned \dot x &= \mu x - x^2 \\ \dot y &= -y. \endaligned</amsmath>

Verify that the system satisfies the conditions for a transcritical bifurcation (equation (3) in Section 4.2) and determine the dimensions of the various stable, unstable and center manifolds that occur.

Perko, Section 4.2, Problem 7: Consider the two dimensional system

<amsmath> \aligned \dot x &= -x^4 + 5 \mu x^2 - 4 \mu^2 \\ \dot y &= -y. \endaligned</amsmath>

Determine the critical points and the bifurcation diagram for this system. Draw phase portraits for the various values of $\mu$ and draw the bifurcation diagram.
I'm trying to solve this recurrence relation using the iterative method, and I keep getting a different answer from the one the master theorem gives.

$$\begin{aligned} T(n) &= 5T(n/2) +n^2 \\ &= 5^2 T(n/4) + (5/4)n^2 + n^2 \\ &= 5^3 T(n/8) + (5/4)^2 n^2 + (5/4)n^2 + n^2 \\ &= 5^k T(n/2^k) + n^2 \sum_{i=0}^{k-1}(5/4)^i \end{aligned}$$

Setting $n = 2^k$, i.e. $k = \log_2 n$:

$$\begin{aligned} &=5^{\log_2 n} T(1) + n^2 \cdot \Theta\big(5^{\log_2 n}\big) \\ &= n^{\log_2 5} + n^2 \cdot \Theta\big(n^{\log_2 5}\big) \end{aligned}$$

so $O(n^2 \cdot n^{\log_2 5}) = O(n^{2+\log_2 5})$. Right?

According to the master theorem, $a= 5$, $b=2$, $p=0$, $k=2$, and $5>4$, which gives $\Theta(n^{\log_2 5})$ for this problem. So, do I ignore the $n^2$ in the iterative method? Did I do it incorrectly? If it is correct, then why is there a difference between the master theorem and the iterative method? Does the master theorem have a tighter constraint?
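A quick numerical experiment (a sketch; the constant $T(1)=1$ is arbitrary) tabulates $T(n)/n^{\log_2 5}$ and $T(n)/n^{2+\log_2 5}$: the first ratio approaches a constant while the second goes to zero, consistent with the master theorem's $\Theta(n^{\log_2 5})$.

from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 5 T(n/2) + n^2 for powers of two, with T(1) = 1
    return 1 if n == 1 else 5 * T(n // 2) + n * n

for k in (5, 10, 15, 20, 25):
    n = 2 ** k
    print(n, T(n) / n ** log2(5), T(n) / n ** (2 + log2(5)))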
I have learnt about the Finite Element Method (and a little about other numerical methods), but I don't know the exact definitions of these two errors or the difference between them.

Error estimates usually have the form$$ \|u - u_h\| \leq C(h),$$where $u$ is the exact solution you are interested in, $u_h$ is a computed approximate solution, $h$ is an approximation parameter you can control, and $C(h)$ is some function of $h$ (among other things). In finite element methods, $u$ is the solution of a partial differential equation and $u_h$ would be the finite element solution for a mesh with mesh size $h$, but you have the same structure in inverse problems (with the regularization parameter $\alpha$ in place of $h$) or iterative methods for solving equations or optimization problems (with the iteration index $k$ -- or rather $1/k$ -- in place of $h$). The point of such an estimate is to help answer the question "If I want to get within, say, $10^{-3}$ of the exact solution, how small do I have to choose $h$?" The difference between a priori and a posteriori estimates is in the form of the right-hand side $C(h)$:

In a priori estimates, the right-hand side depends on $h$ (usually explicitly) and $u$, but not on $u_h$. For example, a typical a priori estimate for the finite element approximation of Poisson's equation $-\Delta u = f$ would have the form $$ \|u-u_h\|_{L^2} \leq c h^2 |u|_{H^2},$$ with a constant $c$ depending on the geometry of the domain and the mesh. In principle, the right-hand side can be evaluated prior to computing $u_h$ (hence the name), so you'd be able to choose $h$ before solving anything. In practice, neither $c$ nor $|u|_{H^2}$ is known ($u$ is what you're looking for in the first place), but you can sometimes get order-of-magnitude estimates for $c$ by carefully going through the proofs and for $|u|$ using the data $f$ (which is known). The main use is as a qualitative estimate -- it tells you that if you want to make the error smaller by a factor of four, you need to halve $h$.

In a posteriori estimates, the right-hand side depends on $h$ and $u_h$, but not on $u$. A simple residual-based a posteriori estimate for Poisson's equation would be $$ \|u-u_h\|_{L^2} \leq c h \|f+\Delta u_h\|_{H^{-1}},$$ which could in theory be evaluated after computing $u_h$. In practice, the $H^{-1}$ norm is problematic to compute, so you'd further manipulate the right-hand side to get an element-wise bound $$ \|u-u_h\|_{L^2} \leq c \left(\sum_{K} h_K^2 \|f+\Delta u_h\|_{L^2(K)} + \sum_{F} h_K^{3/2} \|j(\nabla u_h)\|_{L^2(F)}\right),$$ where the first sum is over the elements $K$ of the triangulation, $h_K$ is the size of $K$, the second sum is over all element boundaries $F$, and $j(\nabla u_h)$ denotes the jump of the normal derivative of $u_h$ across $F$. This is now fully computable after obtaining $u_h$, except for the constant $c$. So again the use is mainly qualitative -- it tells you which elements give a larger error contribution than others, so instead of reducing $h$ uniformly, you just select some elements with large error contributions and make those smaller by subdividing them. This is the basis of adaptive finite element methods.
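To make "fully computable" concrete, here is a sketch (my own illustration, not from the answer above) of evaluating the element-wise indicators for 1D Poisson with P1 elements, where $\Delta u_h = 0$ inside each element, so only $f$ and the jumps of $u_h'$ at interior nodes enter:

import numpy as np

def indicators(nodes, uh, f):
    # nodes: mesh points x_0 < ... < x_N; uh: nodal values of the P1 solution
    h = np.diff(nodes)                 # element sizes h_K
    grad = np.diff(uh) / h             # u_h' is constant on each element
    eta = np.zeros(len(h))
    for K in range(len(h)):
        mid = 0.5 * (nodes[K] + nodes[K + 1])
        # h_K^2 * ||f + u_h''||_{L2(K)} with u_h'' = 0, midpoint quadrature
        eta[K] += h[K] ** 2 * abs(f(mid)) * np.sqrt(h[K])
        # h_K^{3/2} * |jump of u_h'| at the element's interior endpoints
        if K > 0:
            eta[K] += h[K] ** 1.5 * abs(grad[K] - grad[K - 1])
        if K < len(h) - 1:
            eta[K] += h[K] ** 1.5 * abs(grad[K + 1] - grad[K])
    return eta

# toy usage: graded mesh, u_h = interpolant of x(1-x), so f = 2 for -u'' = f
nodes = np.linspace(0.0, 1.0, 11) ** 2
uh = nodes * (1.0 - nodes)
print(indicators(nodes, uh, lambda x: 2.0))

An adaptive loop would then refine the elements where eta is largest instead of refining uniformly.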
On page 28 of http://kclpure.kcl.ac.uk/portal/files/12371620/Studentthesis-Mehmet_Akyol_2013.pdf, a new concept is introduced: the "oscillator basis". More precisely, the author defines gamma matrices in the oscillator basis. What does it mean to have gamma matrices in an oscillator basis? The relevant equations from Akyol's thesis are: The Clifford algebra gamma matrices can also be constructed using these basis elements and are defined in the following way (1.10),$$\Gamma_0 = - e_n \wedge +i_{e_n}$$ $$\Gamma_i = e_i\wedge +i_{e_i}$$ $$\Gamma_n = e_n \wedge +i_{e_n}$$ $$\Gamma_{i+n} = i(e_i\wedge -i_{e_i})$$where $i=1, ..., n-1$ and $n=d/2$ in even dimensions. As for (1.31), it is $$\Gamma_{+} = \frac{1}{\sqrt{2}}(\Gamma_n +\Gamma_0) = \sqrt{2}i_{e_n},$$ $$\Gamma_{-} =\frac{1}{\sqrt{2}}(\Gamma_n - \Gamma_0) =\sqrt{2}e_n \wedge,$$ $$\Gamma_{\alpha} = \frac{1}{\sqrt{2}}(\Gamma_{\alpha} +i\Gamma_{\alpha+n})=\sqrt{2}i_{e_{\alpha}},$$ $$\Gamma_{\bar{\alpha}} = \frac{1}{\sqrt{2}}(\Gamma_{\alpha} - i\Gamma_{\alpha+n}) = \sqrt{2}e_{\alpha} \wedge$$ Here $d=2n$. If we take $d=4$, then $n=2$, $i$ takes only the value $1$, and we get: $$\Gamma_0 = - e^2 \wedge +i_{e^2}$$ $$\Gamma_1 = e^1\wedge +i_{e^1}$$ $$\Gamma_2 = e^2 \wedge +i_{e^2}$$ $$\Gamma_3 = i(e^1\wedge -i_{e^1})$$ The oscillator gamma matrices can then be written as $$\Gamma_{+} = \frac{1}{\sqrt{2}}(\Gamma_2 +\Gamma_0) = \sqrt{2}i_{e^2},$$ $$\Gamma_{-} =\frac{1}{\sqrt{2}}(\Gamma_2 - \Gamma_0) =\sqrt{2}e^2 \wedge,$$ $$\Gamma_1 = \frac{1}{\sqrt{2}}(\Gamma_1 +i\Gamma_3)=\sqrt{2}i_{e^1},$$ $$\Gamma_{\bar{1}} = \frac{1}{\sqrt{2}}(\Gamma_1 - i\Gamma_3) = \sqrt{2}e^1 \wedge$$
In an article, I saw an equation with a pictogram where one might typically find a Latin letter. How did they do it? How can I replicate this equation in my own document?

As an addition to A.Ellett's perfect answer: if you want to use other symbols, in order not to copy the one from your PDF, you may get some inspiration here. There are many symbols included in Unicode 6.0.

% arara: lualatex
\documentclass{article}
\usepackage{fontspec}
\usepackage{fontawesome}
\def\faTree{\FA\symbol{"F1BB}}
\begin{document}
\faLeaf \faTree % not yet part of the package
\setmainfont{symbola.ttf}
\symbol{"1F331} \symbol{"1F332} \symbol{"1F333} \symbol{"1F334} \symbol{"1F335}
\symbol{"1F33F} \symbol{"1F341} \symbol{"1F342} \symbol{"1F343} \symbol{"1F384}
\symbol{"2E19}
\end{document}

% arara: lualatex
\documentclass{article}
\usepackage{fontspec}
\usepackage{fontawesome}
\usepackage{mathtools}
\usepackage[bold-style=ISO]{unicode-math}
\usepackage{graphicx}
\newcommand*{\twig}{\reflectbox{\rotatebox{35}{\setmainfont{symbola.ttf}\symbol{"1F33F}}}}
\newcommand*{\leaf}{\ensuremath{\text{\faLeaf}}}
\begin{document}
In the following formula I will use a \twig{} and a \leaf{} combined to $\twig^\leaf$:
\[\mbfitY=f(\mbfitX)+\mbfscrE\approx\twig^\leaf_1(\mbfitX)+\twig^\leaf_2(\mbfitX)+\ldots+\twig^\leaf_m(\mbfitX)+\mbfscrE,\quad\mbfscrE\sim\mscrN_n(\mathbf{0}, \sigma^2 \mbfitI_n)\]
\end{document}

The one symbol can be found in the package textcomp as \textleaf. In your preamble, just enter \usepackage{textcomp} and in the body of the document write \textleaf. The other symbol can be found in the package phaistos as \PHplaneTree.

\documentclass{article}
\usepackage{textcomp}
\usepackage{phaistos}
\newcommand\myleaf{\mbox{\textleaf}}
\newcommand\mytree{\mbox{\PHplaneTree}}
\pagestyle{empty}
\begin{document}
$\mytree^{\myleaf}$
\end{document}

I found that I had to put the two objects in their own boxes to get these symbols to work in math mode. From the command line you can type texdoc symbols-a4 to get a master document showing a variety of different symbols and the packages which provide them.
Recall that the load factor is defined as the average number of elements in a chain, for a hash table $T$ of size $m$ holding $n$ elements. Let $n_j = |T[j]|$ be the length of the chain at slot $j$. In this case the claim is:

$$ \mathbb{E}[n_j] = \alpha = \frac{n}{m}$$

I was wondering: why is the above statement correct under simple uniform hashing (i.e., any given key is equally likely to hash into any of the $m$ slots, so that in particular $\Pr[h(k_i) = h(k_j)] = \frac{1}{m}$)? In particular, I find it difficult to understand exactly what distribution to use (or what random variables to define) in order to compute the expectation. What distribution is the expectation taken over? CLRS talks about this on page 259, but they never quite explain in detail why it is $\frac{n}{m}$ and leave out the probabilistic analysis.

After some thinking I realized that $n_j$ is a random variable, and it depends on how the hash function $h$ distributes keys (and I guess also on the keys one might get). So a realization of $n_j$ will be in the set $\{ 0, \ldots, n \}$, since a chain can hold anywhere from $0$ to all $n$ elements. Hence to calculate the expected length $n_j$ we compute:

$$\mathbb{E}[n_j] = \sum^{n}_{x=0} x \Pr[n_j = x]$$

but my question is, how does one determine the distribution for $x$?
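For what it's worth, here is a sketch of the standard indicator-variable argument, which sidesteps finding the full distribution of $n_j$. Write the chain length as a sum of indicators over keys:

$$n_j = \sum_{i=1}^{n} \mathbf{1}\{h(k_i) = j\}$$

so that by linearity of expectation

$$\mathbb{E}[n_j] = \sum_{i=1}^{n} \Pr[h(k_i) = j] = \sum_{i=1}^{n} \frac{1}{m} = \frac{n}{m}.$$

Here the probability is over the random placement of each key (simple uniform hashing gives $\Pr[h(k_i)=j]=1/m$ for every slot $j$), and linearity of expectation requires no independence between the keys.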
In general, all Krylov methods essentially seek a polynomial that is small when evaluated on the spectrum of the matrix. In particular, the $n$th residual of a Krylov method (with zero initial guess) can be written in the form $$ r_n = P_n (A) b $$ where $P_n$ is some monic polynomial of degree $n$. If $A$ is diagonalizable, with $A=V\Lambda V^{-1}$, we have \begin{eqnarray*}\|r_n\| &\leq& \|V\|\cdot \|P_n(\Lambda)\|\cdot \|V^{-1}\|\cdot \|b\|\\ &=& \kappa(V) \cdot \|P_n(\Lambda)\| \cdot \|b\|. \end{eqnarray*} In the event that $A$ is normal (e.g., symmetric or unitary) we know that $\kappa(V) = 1.$

GMRES constructs such a polynomial through Arnoldi iteration, while CG constructs the polynomial using a different inner product (see this answer for details). Similarly, BiCG constructs its polynomial through the nonsymmetric Lanczos process, while Chebyshev iteration uses prior information on the spectrum (usually estimates of the largest and smallest eigenvalues for symmetric definite matrices).

As a cool example (motivated by Trefethen + Bau), consider a matrix whose spectrum is the circle of radius $1/2$ centered at $1$ in the complex plane. In MATLAB, I constructed this with:

A = rand(200,200);
[Q R] = qr(A);
A = (1/2)*Q + eye(200,200);

If we consider GMRES, which constructs polynomials which actually minimize the residual over all monic polynomials of degree $n$, we can easily predict the residual history by looking at the candidate polynomial $$P_n (z) = (1-z)^n $$ which in our case gives $$ |P_n(z)| = \frac{1}{2^n} $$ for $z$ in the spectrum of $A$. Now, if we run GMRES on a random RHS and compare the residual history with this polynomial, they ought to be quite similar (the candidate polynomial values are smaller than the GMRES residual because $\|b\|_2 > 1$).
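For completeness, a rough Python translation of the experiment (a sketch; recent SciPy spells the tolerance rtol, older versions use tol, and the comparison holds up to the $\kappa(V)$ factor):

import numpy as np
from scipy.linalg import qr
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
Q, _ = qr(rng.random((200, 200)))     # random orthogonal Q
A = 0.5 * Q + np.eye(200)             # eigenvalues on the circle |z - 1| = 1/2

b = rng.random(200)
res = []
x, info = gmres(A, b, rtol=1e-12, restart=200,
                callback=res.append, callback_type='pr_norm')

for k, r in enumerate(res[:10], start=1):
    print(k, r, 2.0 ** -k)            # relative residual vs. the 2^-k prediction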
Difference between revisions of "Quasirandomness" (→Subsets of grids) Line 1: Line 1: + + + Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the [[density increment method]] or on some kind of generalization of [[Szemerédi's regularity lemma]]. Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the [[density increment method]] or on some kind of generalization of [[Szemerédi's regularity lemma]]. Revision as of 10:04, 23 February 2009 Contents Introduction Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the density increment method or on some kind of generalization of Szemerédi's regularity lemma. In general, one has some kind of parameter associated with a set, which in our case will be the number of combinatorial lines it contains, and one would like a deterministic definition of the word "quasirandom" with the following key property. Every quasirandom set [math]\mathcal{A}[/math] has roughly the same value of the given parameter as a random set of the same density. Needless to say, this is not the only desirable property of the definition, since otherwise we could just define [math]\mathcal{A}[/math] to be quasirandom if it has roughly the same value of the given parameter as a random set of the same density. The second key property is this. Every set [math]\mathcal{A}[/math] that failsto be quasirandom has some other property that we can exploit. These two properties are already discussed in some detail in the article on the density increment method: this article concentrates more on examples of quasirandomness in other contexts, and possible definitions of quasirandomness connected with the density Hales-Jewett theorem. Examples of quasirandomness definitions Bipartite graphs Let X and Y be two finite sets and let [math]f:X\times Y\rightarrow [-1,1].[/math] Then f is defined to be c-quasirandom if [math]\mathbb{E}_{x,x'\in X}\mathbb{E}_{y,y'\in Y}f(x,y)f(x,y')f(x',y)f(x',y')\leq c.[/math] Since the left-hand side is equal to [math]\mathbb{E}_{x,x'\in X}(\mathbb{E}_{y\in Y}f(x,y)f(x',y))^2,[/math] it is always non-negative, and the condition that it should be small implies that [math]\mathbb{E}_{y\in Y}f(x,y)f(x',y)[/math] is small for almost every pair [math]x,x'.[/math] If G is a bipartite graph with vertex sets X and Y and [math]\delta[/math] is the density of G, then we can define [math]f(x,y)[/math] to be [math]1-\delta[/math] if xy is an edge of G and [math]-\delta[/math] otherwise. We call f the balanced function of G, and we say that G is c-quasirandom if its balanced function is c-quasirandom. It can be shown that if H is any fixed graph and G is a large quasirandom graph, then the number of copies of H in G is approximately what it would be in a random graph of the same density as G. Subsets of finite Abelian groups If A is a subset of a finite Abelian group G and A has density [math]\delta,[/math] then we define the balanced function f of A by setting [math]f(x)=1-\delta[/math] when x\in A and [math]f(x)=-\delta[/math] otherwise. 
Then A is c-quasirandom if and only if f is c-quasirandom, and f is defined to be c-quasirandom if [math]\mathbb{E}_{x,a,b\in G}f(x)f(x+a)f(x+b)f(x+a+b)\leq c.[/math] Again, we can prove positivity by observing that the left-hand side is a sum of squares. In this case, it is [math]\mathbb{E}_{a\in G}(\mathbb{E}_{x\in G}f(x)f(x+a))^2.[/math] If G has odd order, then it can be shown that a quasirandom set A contains approximately the same number of triples [math](x,x+d,x+2d)[/math] as a random subset A of the same density. However, it is decidedly not the case that A must contain approximately the same number of arithmetic progressions of higher length (regardless of torsion assumptions on G). For that one must use "higher uniformity".

Hypergraphs

Subsets of grids

A function f from [math][n]^2[/math] to [math][-1,1][/math] is c-quasirandom if the "sum over rectangles" is at most c. The sum over rectangles is [math]\mathbb{E}_{x,y,a,b}f(x,y)f(x+a,y)f(x,y+b)f(x+a,y+b)[/math]. Again, it is easy to show that this sum is non-negative by expressing it as a sum of squares. And again, one defines a subset [math]A\subset[n]^2[/math] to be c-quasirandom if it has a balanced function that is c-quasirandom. If A is a c-quasirandom set of density [math]\delta[/math] and c is sufficiently small, then A contains roughly the same number of corners as a random subset of [math][n]^2[/math] of density [math]\delta.[/math]

A possible definition of quasirandom subsets of [math][3]^n[/math]

As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function. Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n. (So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math]) As ever, it is easy to prove positivity.

To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect).
Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined).
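To see the Abelian-group definition in action numerically, here is a small sketch (all parameters hypothetical; it evaluates the quadruple average for the balanced function of a subset of [math]{\mathbb Z}_N[/math] via the sum-of-squares identity above):

import numpy as np

N = 101
rng = np.random.default_rng(1)
A = set(np.flatnonzero(rng.random(N) < 0.5).tolist())   # random subset of Z_N
delta = len(A) / N
f = np.array([1 - delta if x in A else -delta for x in range(N)])

# E_{x,a,b} f(x)f(x+a)f(x+b)f(x+a+b) = E_a ( E_x f(x)f(x+a) )^2
inner = np.array([np.mean(f * np.roll(f, -a)) for a in range(N)])
print((inner**2).mean())   # small (roughly 1/N) for a random set

A structured set, such as an interval, gives a much larger value, which is the sense in which failing to be quasirandom is an exploitable property.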
If a polynomial is "irreducible over the rationals", does it mean that it has no rational roots? I would say yes because otherwise I could divide out the linear factors (i.e. rational roots) but maybe I'm wrong? Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It only takes a minute to sign up.Sign up to join this community If a polynomial is "irreducible over the rationals", does it mean that it has no rational roots? I would say yes because otherwise I could divide out the linear factors (i.e. rational roots) but maybe I'm wrong? Almost. For example, the polynomial $X-\frac23$ is irreducible and yet it has a rational root. However, this (i.e., all linear polynomials) is the only exception, as otherwise your correct reasoning applies. Note however, that the converse is not true: A polynomial may be recucible even if it does not have rational roots; consider e.g., $X^4-5X+6=(X^2-2)(X^2-2)$ A polynomial over the rationals is a polynomial $f(x)\in{\Bbb Q}[x]$. If the polynomial is linear, $f(x)=ax+b$, $a\ne 0$, it is irreducible over the rationals. If the polynomial has degree $\geq 2$, it is irreducible over the rationals provided that it has no root in the rationals.
I am trying to find geodesics for the FRW metric, $$ d\tau^2 = dt^2 - a(t)^2 \left(d\mathbf{x}^2 + K \frac{(\mathbf{x}\cdot d\mathbf{x})^2}{1-K\mathbf{x}^2} \right), $$ where $\mathbf{x}$ is 3-dimensional and $K=0$, $+1$, or $-1$.

Geodesic equation

Using the Christoffel symbols from Weinberg's Cosmology (Eqs. 1.1.17-1.1.20) in the geodesic equation, I get: \begin{align} 0 &= \frac{d^2 t}{d\lambda^2} + a\dot{a} \left[ \left( \frac{d\mathbf{x}}{d\lambda} \right)^2 +\frac{K(\mathbf{x}\cdot \frac{d\mathbf{x}}{d\lambda})^2}{1-K \mathbf{x}^2}\right], &\text{($t$ equation)}\\ 0 &= \frac{d^2\mathbf{x}}{d\lambda^2} + 2 \frac{\dot{a}}{a}\frac{dt}{d\lambda}\frac{d\mathbf{x}}{d\lambda} + \left[ \left(\frac{d\mathbf{x}}{d\lambda}\right)^2 + \frac{K(\mathbf{x} \cdot \frac{d\mathbf{x}}{d\lambda})^2}{1-K\mathbf{x}^2} \right]K\mathbf{x}, &\text{($\mathbf{x}$ equation)} \end{align} where $\lambda$ is the affine parameter, and $\dot{a}=da/dt$.

Variational principle

It should also be possible to get the geodesics by finding the paths that extremize the proper time, i.e. using the Euler-Lagrange equations with a Lagrangian equal to the square root of the $d\tau^2$ I wrote above: $$ L = \frac{d\tau}{dp}= \sqrt{ t'^2 - a(t)^2 \left(\mathbf{x}'^2 + K \frac{(\mathbf{x}\cdot \mathbf{x}')^2}{1-K\mathbf{x}^2} \right) }, $$ where a prime is the derivative with respect to the variable $p$ that parameterizes the path.

When I try this $L$ in the E-L equation for $t$, I get the same equation as above. However, when I try the E-L equation for $\mathbf{x}$, my result does not agree with the geodesic equation. I find $$ \frac{\partial L}{\partial \mathbf{x}} = -\frac{1}{L} \frac{a^2 K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2} \left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right), $$ and $$ \frac{\partial L}{\partial \mathbf{x}'} = -\frac{a^2}{L} \left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right). $$ I write the E-L equation $$\frac{d}{dp}\frac{\partial L}{\partial \mathbf{x}'}=\frac{\partial L}{\partial \mathbf{x}},$$ and then multiply both sides by $dp/d\tau$ to replace $p$ with $\tau$ everywhere and get rid of the $L$'s in the denominators (using the fact that $1/L=dp/d\tau$ and changing the meaning of the primes to mean derivatives with respect to proper time $\tau$). I get $$ \frac{d}{d\tau} \left[ a^2\left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right) \right] = \frac{K (\mathbf{x} \cdot \mathbf{x}')}{1-K\mathbf{x}^2} a^2 \left(\mathbf{x}' + \frac{K(\mathbf{x} \cdot \mathbf{x}')\mathbf{x}}{1-K\mathbf{x}^2}\right). $$ I cannot rearrange this into the formula from the geodesic equation and suspect that the two sets of equations are not equivalent. I've gone through both methods a couple of times but haven't spotted any errors. Can anyone tell me where the inconsistency (if there actually is one) is coming from?

[Interestingly, the E-L equation can be integrated once with an integrating factor of $\sqrt{1-K\mathbf{x}^2}$, whereas I don't see how to do so with the geodesic equation (not that I am very good at solving differential equations).]
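One mechanical way to cross-check the Christoffel route (a sketch using SymPy; this is my own addition, not part of the original derivation): restrict to the purely radial slice $\mathbf{x}=(r,0,0)$, where the spatial part of the metric collapses to $a(t)^2\,dr^2/(1-Kr^2)$, and recompute the Christoffel symbols from scratch:

import sympy as sp

t, r, K = sp.symbols('t r K')
a = sp.Function('a')

x = [t, r]
g = sp.diag(1, -a(t)**2 / (1 - K*r**2))   # radial slice of the FRW metric
ginv = g.inv()

# Gamma^l_{mn} = (1/2) g^{lk} (d_m g_{kn} + d_n g_{km} - d_k g_{mn})
for l in range(2):
    for m in range(2):
        for n in range(m, 2):
            gam = sp.simplify(sum(ginv[l, k]*(sp.diff(g[k, n], x[m])
                                              + sp.diff(g[k, m], x[n])
                                              - sp.diff(g[m, n], x[k]))/2
                                  for k in range(2)))
            if gam != 0:
                print(f"Gamma^{x[l]}_{{{x[m]}{x[n]}}} =", gam)

The same machinery can generate the Euler-Lagrange equations of $L^2$ in the affine parameter (extremizing $\int L\,dp$ and $\int L^2\,dp$ gives the same curves), which is often the quickest way to localize a discrepancy like the one above.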
We have $X=[\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_n]\in\mathbb{R}^{d\times n}$, $H=[\mathbf{h}_1,\mathbf{h}_2,\ldots,\mathbf{h}_n] \in\mathbb{R}^{d\times n}$, and $d<n$. $H$ has rank $r\leq d$ and $X$ has rank $d$. Assume we have $\|H\|_F\leq C$ and $\|\mathbf{x}_i\|_2\leq R$, where $\|\cdot\|_F$ and $\|\cdot\|_2$ are the Frobenius norm and vector 2-norm respectively. What do we know about the upper bound of the Frobenius inner product $\left<H,X\right>_F$ of $H$ and $X$?

What I have so far: by the Cauchy-Schwarz inequality, $\left<H,X\right>_F < \sqrt nCR$. The equality will not hold because we cannot guarantee $\mathbf{x}_i = k\mathbf{h}_i$ for all $i$; this is because $H$ is rank deficient. I think there is a tighter upper bound for $\left<H,X\right>_F$ in terms of the singular values of $X$ and the rank $r$ of $H$. I have also posted the question here.
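One standard route to a rank-aware bound (a sketch of an approach, not a claim that it is the tightest possible) is von Neumann's trace inequality, $\left<H,X\right>_F \le \sum_i \sigma_i(H)\,\sigma_i(X)$, with singular values sorted in decreasing order. Since $H$ has rank $r$, only the first $r$ terms survive, and Cauchy-Schwarz on those $r$ terms gives

$$\left<H,X\right>_F \le \Big(\sum_{i=1}^r \sigma_i(H)^2\Big)^{1/2}\Big(\sum_{i=1}^r \sigma_i(X)^2\Big)^{1/2} \le C\,\Big(\sum_{i=1}^r \sigma_i(X)^2\Big)^{1/2},$$

which involves only the top $r$ singular values of $X$ and recovers the Cauchy-Schwarz bound, since $\sum_{i=1}^r \sigma_i(X)^2 \le \|X\|_F^2 = \sum_i \|\mathbf{x}_i\|_2^2 \le nR^2$.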
Consider the quadratic loss $L(\theta,\delta)=(\theta-\delta)^2$, with prior $\pi(\theta)$ given by $\pi(\theta)\sim U(0,1/2)$. Let $f(x|\theta)=\theta x^{\theta-1}\mathbb{I}_{[0,1]}(x), \theta>0$ be the likelihood. Find the Bayes estimator $\delta^\pi$.

Consider the weighted quadratic loss $L_w(\theta,\delta)=w(\theta)(\theta-\delta)^2$ where $w(\theta)=\mathbb{I}_{(-\infty,1/2)}(\theta)$, with prior $\pi_1(\theta)=\mathbb{I}_{[0,1]}(\theta)$. Let $f(x|\theta)=\theta x^{\theta-1}\mathbb{I}_{[0,1]}(x), \theta>0$ be the likelihood. Find the Bayes estimator $\delta^\pi_1$.

Compare $\delta^\pi$ and $\delta^\pi_1$.

First I noticed that $f(x|\theta)$ is a $Beta(\theta,1)$ density, and I assumed that that is the likelihood, otherwise I don't get any posterior. Then $$\pi(\theta|x)\propto f(x|\theta)\pi(\theta)=\theta x^{\theta-1}\mathbb{I}_{[0,1]}(x)\cdot 2\,\mathbb{I}_{(0,1/2)}(\theta),$$ so the Bayes estimator with respect to quadratic loss is the posterior mean $\mathbb{E}[\theta\mid x]$.

I'm looking in the book The Bayesian Choice, and there is a theorem about the Bayes estimator associated with weighted quadratic loss; it is given by $$\delta^\pi(x)=\frac{\mathbb{E}^\pi[w(\theta)\theta|x]}{\mathbb{E}^\pi[w(\theta)|x]}$$ Can someone explain to me how I calculate it? What I tried is: $$\delta^\pi(x)=\frac{\int \theta w(\theta)f(x|\theta)\pi(\theta)\,d\theta}{\int w(\theta)f(x|\theta)\pi(\theta)\,d\theta}$$ I know that the support is $[0,\frac{1}{2}]$, but when I tried to integrate the numerator $$\int \theta w(\theta)f(x|\theta)\pi(\theta)\,d\theta=\int_0^\frac{1}{2}\theta\cdot\theta x^{\theta-1}\,d\theta=\frac{1}{x}\int_0^\frac{1}{2}\theta^2 x^\theta\, d\theta$$ I didn't get anything tractable.
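For what it's worth, the inner integrals do have closed forms (a sketch; this is my own computation, so double-check the algebra): writing $x^\theta=e^{c\theta}$ with $c=\ln x<0$ for $x\in(0,1)$, integration by parts gives

$$\int_0^{1/2}\theta\,e^{c\theta}\,d\theta=\Big[e^{c\theta}\Big(\frac{\theta}{c}-\frac{1}{c^2}\Big)\Big]_0^{1/2},\qquad \int_0^{1/2}\theta^2 e^{c\theta}\,d\theta=\Big[e^{c\theta}\Big(\frac{\theta^2}{c}-\frac{2\theta}{c^2}+\frac{2}{c^3}\Big)\Big]_0^{1/2}.$$

The Bayes estimator $\delta^\pi_1(x)$ is then the ratio of two such expressions; it is messy but perfectly explicit, and there is no reason to expect it to simplify to anything prettier.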
I found a trigonometric solution to the following problem: Can you find a solution without using trigonometry?

Note by Kazem Sepehrinia, 10 months, 2 weeks ago

Redefine point $A$ to satisfy $MA=MC$ and $\angle MAC=\angle MCA=10^{\circ}$. So now it suffices to prove that $\angle ABM=30^{\circ}$, or $AB=AC$. Let $D$ be a point (below $BC$) such that $MB=MD$, $\angle BMD=40^{\circ}$, and $MD\cap BC\equiv E$. Denote by $I$ the incenter of $\Delta MEC$, and let $F\in BM$ with $ME=MF$ and $G\in BC$ with $FE=FG$. Clearly $\Delta MDA\cong\Delta MBC\Rightarrow\angle MDA=20^{\circ}$, $\angle MAD=40^{\circ}$, $\angle CAD=50^{\circ}$ (1). We have $\angle MFE=70^{\circ}$ and $\angle MIE=90^{\circ}+\frac{\angle MCE}{2}=110^{\circ}$, so the quadrilateral $MIEF$ is cyclic. Also, since $ME$ bisects $\angle IMF$, we get $IE=EF=FG$. Now note that $\Delta BFG\sim\Delta CEI$ by AAS similarity, so $BF=CE$. But due to symmetry in the isosceles $\Delta BMD$, $BF=ED$. Combining the two we get $CE=ED\Rightarrow\angle MDC=30^{\circ}$. Therefore, from result (1), $\angle CDA=50^{\circ}=\angle CAD\Rightarrow CA=CD$. Also $\Delta MCD\cong\Delta MAB\Longrightarrow AB=CD=AC$. And so we are done. $\boxed{Q.E.D.}$

Beautiful solution, thanks :) Have you tried an algebraic solution to this, involving all the unknown angles? P.S. All the methods used still fall under the banner of "trigonometry", as ultimately we are studying triangles.

I used the sine law several times and then a trigonometric equation comes out for the unknown angle $MAC$. Solving this equation gives $MAC=10$ degrees. But I wanted to use only basic geometric rules like parallel lines, isosceles triangles, and the sum of angles of a triangle. For example, if we could prove $AM=MC$ then the problem would be solved.

You can solve it algebraically, as I said, in terms of the unknown angles, of which there are four. Fortunately, all the equations will be linear, so you will need to (hopefully) find four relations from the triangle that allow you to solve using linear algebraic methods.
To start you off, the angles around $M$ add up to a full circle, so you can find a relation between two of the four unknown angles (as one of the angles is easily obtainable); this relation would namely be their sum.

@Kazem Sepehrinia Well, I don't have a solution without trigonometry, but we can definitely solve it quickly by applying the sine rule to triangles $AMC$ and $AMB$ and taking advantage of the fact that $AB=AC$ along with $AM=AM$ (the common side)!

There are four unknown angles (as the fifth can be easily obtained), and four possible relationships involving the unknowns, which turn out to all be linear. This must mean that a linear algebraic solution can easily be obtained.

I found a solution with basic geometry and got the answer 10... is it correct?

Yup, the answer is 10 degrees. Post your solution please...

You've got it right. Please post your solution :)

Please draw step by step with me: extend $AB$ to meet $AC$ at $D$. Again extend $MD$ to reach a point $E$ with $MD=DE$. So, as you can see, $AM=AE$ and triangle $EBM$ is equilateral. Take a point $P$ in triangle $MEB$ such that $\angle PBA=10$. Thus the two triangles $ABP$ and $AMC$ are congruent, so we get that $AP=AM=AE$, $\angle MAP=80$, and $PB=CM$. Draw a circle with radius $AM$. As you can see, $\angle PAM=2\cdot\angle PEM=80\Rightarrow\angle PEM=40\Rightarrow\angle BEP=\angle PBE=20\Rightarrow PB=PE$. $PB=BE$, $PM=PM$ (common) $\Rightarrow$ triangles $MPE$ and $BPM$ are congruent $\Rightarrow\angle EMP=\angle PMB=30\Rightarrow\angle PAE=60$ (inscribed in the circle). $BM=EM$, so triangle $PEA$ is also equilateral $\Rightarrow EP=AM=BP=CM$. After all that, sorry for my English... :)

Nice natural approach :)) There is a bit of a typo in the first line: it should be "$CM$" rather than "$AB$".

I also have a new method; I have proved it without congruence.

Try this, or else: draw a perpendicular from $A$ onto $BC$, and now draw the same figure symmetric about $AP$. Mark the point on $AP$ where the lines intersect as $Q$. Then produce $BM$ till it meets $AC$, and also drop a perpendicular from $M$ onto $AC$ and name it $N$; repeat the same symmetrically on the other side. Then use basic geometry (properties of isosceles triangles and the exterior angle theorem) and solve for all the angles. You will realise that $\angle ANM$ is equal to $\angle AQM$, equal to 70 degrees. Then use properties of similar triangles again and find $2x=180-2(70)\Rightarrow x=20$. This is entirely geometry.

Sorry, but the correct answer is 10 degrees.

$ABC$ is a triangle with angle bisectors $AD$, $BE$, and $CF$. If $\angle FDE=90$ degrees, then find $\angle BAC$?

$120^{\circ}$

DUDE... you beat me to it... I was about to answer!!! XD

How did that come about? Send me the complete solution; that way I would be able to find my mistake.

Take a protractor, scale, and compass; draw the figure and measure.
Suppose $A$ is an array of integers, $|A|=n$, $A=\{a_i \mid 1\leq a_i\leq N, i=1\ldots n\}$. The goal is to find an efficient algorithm $\cal{F}$ to find the maximum element in $A$ with these restrictions:

$\cal{F}$ should not compare any $a_i$ with any $a_j$ ever.

$\cal{F}$ also should not add, subtract, or exploit some fancy facts about integers.

$\cal{F}$ may, however, compare any $a_i$ with some predetermined constant number, and this number may depend on $N$.

Why does such an algorithm exist? Well, counting sort will work; it never compares elements with each other. Its complexity is $O(n+N)$.

I propose this: compare all $a_i$ with $N/2$; let $L=\{a_i\in A \mid a_i<\frac{N}{2}\}$ and $U=\{a_i\in A \mid a_i\geq\frac{N}{2}\}$. If $U$ isn't empty, we may throw $L$ away and compare $U$ with $3N/4$, etc. If $U$ is empty, we have our initial problem but now the upper bound is $N/2$; call the algorithm recursively. Basically, it is binary search on $N$, so it will run in $O(n\lceil\log N\rceil)$ in the worst case. If $N$ is big enough it is better than counting sort.

Two questions: a) Am I correct with my algorithm? I can't see any flaws in the binary-search implementation. b) It is not obvious to me that binary search is the best solution to this problem; is it possible to do better? If not, how does one prove it?

I've added some more explanation to clarify the question. According to the comments (thanks, EvilJS), I used the term "comparison-based sorting" wrongly, so sorry for that.
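For concreteness, here is a runnable sketch of the proposed binary search (my own code, not part of the original question; each round compares elements only against the current threshold, never against each other):

def find_max_by_thresholds(A, N):
    lo, hi = 1, N                     # invariant: max(A) lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        upper = [a for a in A if a > mid]   # comparisons against the constant mid only
        if upper:
            A, lo = upper, mid + 1    # some element exceeds mid: discard the rest
        else:
            hi = mid                  # every element is at most mid
    return lo

print(find_max_by_thresholds([3, 9, 4, 9, 1], 16))   # 9

Each of the $\lceil\log N\rceil$ rounds scans at most $n$ elements, matching the $O(n\lceil\log N\rceil)$ bound above.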
I was experimenting with the triple scalar product and forces in equilibrium when I came to this result. Consider 4 forces $\pmb{F_i}$ for $i=1,2,3,4$, with $\pmb{F_i}=F_i\hat{e_i}$, where $\hat{e_i}$ is the unit vector in the direction of the corresponding force and $F_i$ is the magnitude. $$F_1\hat{e_1}+F_2\hat{e_2}+F_3\hat{e_3}+F_4\hat{e_4}=0$$ Take the scalar product of the system with $\hat{e_3} \times \hat{e_4}$ (the vector product of $\hat{e_3},\hat{e_4}$). $$F_1\hat{e_1}\cdot (\hat{e_3} \times \hat{e_4})+F_2\hat{e_2}\cdot (\hat{e_3} \times \hat{e_4})+F_3\hat{e_3}\cdot (\hat{e_3} \times \hat{e_4})+F_4\hat{e_4}\cdot (\hat{e_3} \times \hat{e_4})=0$$ $$\Rightarrow F_1\hat{e_1}\cdot (\hat{e_3} \times \hat{e_4})+F_2\hat{e_2}\cdot (\hat{e_3} \times \hat{e_4})+0+0=0$$ $$\Rightarrow \frac{F_1}{\hat{e_2}\cdot (\hat{e_3} \times \hat{e_4})}=\frac{F_2}{-\hat{e_1}\cdot (\hat{e_3} \times \hat{e_4})}$$ $$\Rightarrow \frac{F_1}{V_1}=\frac{F_2}{V_2}$$ where $V_1$ is the (signed) volume of the parallelepiped with edges $\hat{e_{2}},\hat{e_{3}},\hat{e_{4}}$, and similarly for $V_2$. This can be extended to any other pair of forces.

If one expresses Lami's theorem as: if one has 3 forces in equilibrium acting at a single point in 2D, then the magnitude of each force is proportional to the area of the parallelogram made from the unit vectors of the other 2 forces; then the similarities between my result and this are pretty clear. This also leads me to conjecture that, for two forces in 1 dimension, each force is proportional to the magnitude of the direction of the other force.

Firstly, is my result correct and valid? Does this theorem have a name, or are there any resources on it I could read? Can this formulation and result be extended to other ($n$?) dimensions?
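A quick numerical check of the claimed proportionality (a sketch; the equilibrium magnitudes are obtained as a null vector of the matrix whose columns are the four unit vectors):

import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(3, 4))
E /= np.linalg.norm(E, axis=0)          # columns e_1..e_4 are unit vectors

_, _, Vt = np.linalg.svd(E)
F = Vt[-1]                              # F solves E @ F = 0, i.e. sum_i F_i e_i = 0

triple = lambda u, v, w: np.dot(u, np.cross(v, w))
e1, e2, e3, e4 = E.T
print(F[0] / triple(e2, e3, e4), F[1] / -triple(e1, e3, e4))   # equal, as derived above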
I have been struggling with this problem for a while: Consider a body of mass $M$ released from a height of $h$ meters above the ground. With what force will it hit the ground?

My attempt: I assume that air resistance is non-existent and that $h$ is small enough that the change in $g$ (the acceleration due to gravity) can be ignored. Let the body hit the ground with velocity $v$. As its initial velocity is $0$, with a little bit of calculation it can be found that $v=\sqrt{2gh}$. So the linear momentum $p$ of the body on striking the ground is $Mv=M\sqrt{2gh}$. But the problem here is that the time for which the body is in contact with the ground is not specifically given. If we assume it to be $\Delta t$, the force exerted by the body comes out to be $\dfrac{\Delta p}{\Delta t}=\dfrac{M\sqrt{2gh}}{\Delta t}$.

Is there any way of calculating the force without involving the time of contact? I know that $|F|=\dfrac{dp}{dt}$, but calculus cannot be used here since we are not given $p$ as a function of time.
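A quick illustration of why $\Delta t$ cannot be eliminated (my own numbers, not part of the original problem): take $M=1\ \mathrm{kg}$ and $h=5\ \mathrm{m}$, so $v=\sqrt{2gh}\approx 9.9\ \mathrm{m/s}$ and $\Delta p\approx 9.9\ \mathrm{kg\,m/s}$. Landing on a cushion with $\Delta t=0.1\ \mathrm{s}$ gives an average force $\bar F=\Delta p/\Delta t\approx 99\ \mathrm{N}$, while hitting concrete with $\Delta t=1\ \mathrm{ms}$ gives $\bar F\approx 9900\ \mathrm{N}$. The impulse $\Delta p$ is fixed by the fall, but the force is a property of the collision, so some model of the contact (a time, a stopping distance, or a stiffness) is unavoidable.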
Is there any hope of solving the following linear system efficiently with an iterative method?

$A \in \mathbb{R}^{n \times n}, x \in \mathbb{R}^n, b \in \mathbb{R}^n \text{, with } n > 10^6$

$Ax=b$ with $ A=(\Delta - K) $, where $\Delta$ is a very sparse matrix with a few diagonals, arising from the discretization of the Laplace operator. On its main diagonal there is $-6$, and there are $6$ other diagonals with $1$ on them. $K$ is a full $\mathbb{R}^{n \times n}$ matrix that consists entirely of ones.

Solving with $A=\Delta$ alone works fine with iterative methods like Gauss-Seidel, because it's a sparse diagonally dominant matrix. I suspect that the problem $A=(\Delta - K)$ is pretty much impossible to solve efficiently for large $n$, but is there any trick to solve it by exploiting the structure of $K$?

EDIT: Would doing something like $\Delta x^{k+1} = b + Kx^{k}$ // solve for $x^{k+1}$ with Gauss-Seidel converge to the correct solution? I read that such a splitting method converges if $\rho(\Delta^{-1} K) < 1$, where $\rho$ is the spectral radius. I manually calculated the eigenvalues of $\Delta^{-1} K$ for some different small values of $n$, and they're all zero except one, which is large in magnitude (about $-500$ for $n=256$). So I guess that wouldn't work.

EDIT: More information about $\Delta$: $\Delta \in \mathbb{R}^{n \times n}$ is symmetric, negative definite, and diagonally dominant. It is created the following way in MATLAB:

n = W*H*D;
e = ones(W*H*D,1);
d = [e,e,e,-6*e,e,e,e];
delta = spdiags(d, [-W*H, -W, -1, 0, 1, W, W*H], n, n);
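Since $K=\mathbf{e}\mathbf{e}^T$ is rank one (with $\mathbf{e}$ the all-ones vector), there is a standard trick: the Sherman-Morrison formula reduces a solve with $\Delta-\mathbf{e}\mathbf{e}^T$ to two solves with $\Delta$ alone, each of which Gauss-Seidel (or any other solver for $\Delta$) can handle. A sketch (with a hypothetical 1-D three-diagonal stand-in for the 7-diagonal stencil, and a direct factorization standing in for the inner solves):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
delta = sp.diags([1, -6, 1], [-1, 0, 1], shape=(n, n), format='csc')
e = np.ones(n)
b = np.random.default_rng(0).normal(size=n)

solve = spla.factorized(delta)   # stand-in for an iterative solve with Delta
y = solve(b)                     # Delta y = b
z = solve(e)                     # Delta z = e
# Sherman-Morrison: (Delta - e e^T)^{-1} b = y + z (e^T y) / (1 - e^T z)
x = y + z * (e @ y) / (1 - e @ z)

print(np.linalg.norm((delta @ x - e * np.sum(x)) - b))   # ~ machine precision

Note also that a matrix-vector product with $A=\Delta-K$ costs only $O(n)$, since $Kx=(\sum_i x_i)\,\mathbf{e}$, so Krylov methods never need to form $K$ explicitly.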
In his book "All of Statistics", Prof. Larry Wasserman presents the following Example (11.10, page 188). Suppose that we have a density $f$ such that $f(x)=c\,g(x)$, where $g$ is a known (nonnegative, integrable) function, and the normalization constant $c>0$ is unknown. We are interested in those cases where we can't compute $c=1/\int g(x)\,dx$. For example, it may be the case that $f$ is a pdf over a very high-dimensional sample space. It is well known that there are simulation techniques that allow us to sample from $f$, even though $c$ is unknown. Hence, the puzzle is: How could we estimate $c$ from such a sample? Prof. Wasserman describes the following Bayesian solution: let $\pi$ be some prior for $c$. The likelihood is $$ L_x(c) = \prod_{i=1}^n f(x_i) = \prod_{i=1}^n \left(c\,g(x_i)\right) = c^n \prod_{i=1}^n g(x_i) \propto c^n \, . $$ Therefore, the posterior $$ \pi(c\mid x) \propto c^n \pi(c) $$ does not depend on the sample values $x_1,\dots,x_n$. Hence, a Bayesian can't use the information contained in the sample to make inferences about $c$. Prof. Wasserman points out that "Bayesians are slaves of the likelihood function. When the likelihood goes awry, so will Bayesian inference". My question for my fellow stackers is: Regarding this particular example, what went wrong (if anything) with Bayesian methodology? P.S. As Prof. Wasserman kindly explained in his answer, the example is due to Ed George.
In QED there are 4 kinds of divergences:

Ultraviolet divergences. Naive calculations depend on the cut-off in such a way that they go to infinity as the cut-off does. However, QED is a perturbatively renormalizable theory, so that non-naive, properly done computations (see regularization and renormalization) give sensible results.

Landau pole. The coupling constant $\alpha={e^2\over \hbar \, c}$, which is the expansion parameter in the perturbative series, grows with energy and goes to infinity for a finite value of the energy. It turns out that this finite value of energy is larger than the electroweak scale, where QED merges with the weak interaction and QED is not a good theory of nature anymore. Therefore, it isn't a real (phenomenological) problem.

Infrared divergences. These are due to the fact that photons are massless. They however cancel out once one takes into account all the effects that contribute to a measurable observable.

Non-convergent series. The $n$-th term of the perturbative expansion is of the form $\left({\alpha\over 2\pi}\right)^{n}\, (2n-1)!!$, so that the series is not convergent but asymptotic, because the factor $(2n-1)!!$ grows very fast for large values of $n$. This means that we cannot give a non-perturbative definition of QFT by summing up all the terms of the series. However, the first terms are meaningful and actually give predictions that accurately agree with observations. The 'first terms' are approximately $n\sim {\pi\over \alpha}\sim 430$, and for this value of $n$, $\left({\alpha\over 2\pi}\right)^{n}\, (2n-1)!!\sim 10^{-187}$. Therefore, as long as we are not interested in a precision of one part in $10^{187}$, this is not a real problem either. Note that QED is the theory of nature that has been confirmed with the greatest precision — one part in $10^{9}$ in the electron's anomalous magnetic dipole moment, for which $n=4$.

For QCD, points 1, 3, and 4 are more or less the same. However, point 2 doesn't apply, since in QCD the coupling constant $\alpha_s$ decreases as the energy increases, and in fact it goes to zero as energy goes to infinity. See asymptotic freedom.

To summarize: infrared divergences are due to not taking into account effects that contribute to the observable magnitude. The asymptotic nature of QFT perturbative expansions prevents a non-perturbative (exact) definition of the theory (through its series), but doesn't entail a practical problem when comparing predictions with measurements. The lack of perturbative divergences and Landau-like poles is a necessary condition for a theory to be well-defined at arbitrarily high energies. However, theories that contain these divergences (ultraviolet or Landau-like poles) can still be very useful at energies below some scale. On the other hand, theories without these divergences (ultraviolet or Landau-like poles), such as QCD, don't have to be valid at all energies as theories of nature.

As M. Brown points out in the comments, there is a relation between instantons and renormalons and the asymptotic nature of the series. Please see these notes and the questions Instantons and Non Perturbative Amplitudes in Gravity and Asymptoticity of Perturbative Expansion of QFT.

Reply to Graviton's comment: In my opinion, a fundamental theory of nature (whatever that means) should have a non-perturbative definition. If the perturbative expansion is not convergent, it cannot provide this non-perturbative definition.
However, in principle, this doesn't necessarily mean that the theory cannot have a non-perturbative definition or an exact solution, but this must be given by other means.
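The numbers quoted in point 4 are easy to reproduce (a sketch; it evaluates $\log_{10}$ of the stated term using $(2n-1)!!=(2n)!/(2^n\,n!)$):

import math

alpha = 1 / 137.035999
LOG10 = math.log(10)

def log10_term(n):
    # log10 of (alpha / (2 pi))^n * (2n - 1)!!
    return (n * math.log10(alpha / (2 * math.pi))
            + (math.lgamma(2*n + 1) - math.lgamma(n + 1)) / LOG10
            - n * math.log10(2))

n_min = min(range(1, 2000), key=log10_term)
print(n_min, log10_term(n_min))   # roughly n ~ 430 and log10(term) ~ -187, as claimed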
Answer b. 8 hours

Work Step by Step

We use the equation for the period to find: $T=\sqrt{\frac{4\pi^2r^3}{GM_E}}$ $T=\sqrt{\frac{4\pi^2(20,200\times10^3)^3}{(6.67\times10^{-11})(5.97\times10^{24})}}\approx \fbox{8 hours}$
Difference between revisions of "Kakeya problem" (2 intermediate revisions by 2 users not shown) Line 3: Line 3: Clearly, we have <math>k_1=3</math>, and it is easy to see that <math>k_2=7</math>. Using a computer, it is not difficult to find that <math>k_3=13</math> and <math>k_4\le 27</math>. Indeed, it seems likely that <math>k_4=27</math> holds, meaning that in <math>{\mathbb F}_3^4</math> one cannot get away with just <math>26</math> elements. Clearly, we have <math>k_1=3</math>, and it is easy to see that <math>k_2=7</math>. Using a computer, it is not difficult to find that <math>k_3=13</math> and <math>k_4\le 27</math>. Indeed, it seems likely that <math>k_4=27</math> holds, meaning that in <math>{\mathbb F}_3^4</math> one cannot get away with just <math>26</math> elements. − == + == Basic Estimates == Trivially, we have Trivially, we have Line 9: Line 9: :<math>k_n\le k_{n+1}\le 3k_n</math>. :<math>k_n\le k_{n+1}\le 3k_n</math>. − Since the Cartesian product of two Kakeya sets is another Kakeya set, + Since the Cartesian product of two Kakeya sets is another Kakeya set, :<math>k_{n+m} \leq k_m k_n</math>; :<math>k_{n+m} \leq k_m k_n</math>; − this implies that <math>k_n^{1/n}</math> converges to a limit as <math>n</math> goes to infinity. + this implies that <math>k_n^{1/n}</math> converges to a limit as <math>n</math> goes to infinity. == Lower Bounds == == Lower Bounds == − − To each of the <math>(3^n-1)/2</math> directions in <math>{\mathbb F}_3^n</math> there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, <math>\binom{k_n}{2}\ge 3\cdot(3^n-1)/2</math>, and hence To each of the <math>(3^n-1)/2</math> directions in <math>{\mathbb F}_3^n</math> there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, <math>\binom{k_n}{2}\ge 3\cdot(3^n-1)/2</math>, and hence − :<math>k_n\ + :<math>k_n\3^{(n+1)/2}.</math> One can derive essentially the same conclusion using the "bush" argument, as follows. Let <math>E\subset{\mathbb F}_3^n</math> be a Kakeya set, considered as a union of <math>N := (3^n-1)/2</math> lines in all different directions. Let <math>\mu</math> be the largest number of lines that are concurrent at a point of <math>E</math>. The number of point-line incidences is at most <math>|E|\mu</math> and at least <math>3N</math>, whence <math>|E|\ge 3N/\mu</math>. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity <math>\mu</math>, we see that <math>|E|\ge 2\mu+1</math>. Comparing the two last bounds one obtains One can derive essentially the same conclusion using the "bush" argument, as follows. Let <math>E\subset{\mathbb F}_3^n</math> be a Kakeya set, considered as a union of <math>N := (3^n-1)/2</math> lines in all different directions. Let <math>\mu</math> be the largest number of lines that are concurrent at a point of <math>E</math>. The number of point-line incidences is at most <math>|E|\mu</math> and at least <math>3N</math>, whence <math>|E|\ge 3N/\mu</math>. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity <math>\mu</math>, we see that <math>|E|\ge 2\mu+1</math>. Comparing the two last bounds one obtains <math>|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}</math>. <math>|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}</math>. − A better bound follows by using the "slices argument". 
Let <math>A,B,C\subset{\mathbb F}_3^{n-1}</math> be the three slices of a Kakeya set <math>E\subset{\mathbb F}_3^n</math>. Form a bipartite graph <math>G</math> with the partite sets <math>A</math> and <math>B</math> by connecting <math>a</math> and <math>b</math> by an edge if there is a line in <math>E</math> through <math>a</math> and <math>b</math>. The restricted sumset <math>\{a+b\colon (a,b)\in G\}</math> is contained in the set <math>-C</math>, while the difference set <math>\{a-b\colon (a,b)\in G\}</math> is all of <math>{\mathbb F}_3^{n-1}</math>. Using an estimate from [http://front.math.ucdavis.edu/math.CO/9906097 a paper of Katz-Tao], we conclude that <math>3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}</math>, leading to <math>|E|\ge 3^{6(n-1)/11}</math>. Thus, + + + + + + + A better bound follows by using the "slices argument". Let <math>A,B,C\subset{\mathbb F}_3^{n-1}</math> be the three slices of a Kakeya set <math>E\subset{\mathbb F}_3^n</math>. Form a bipartite graph <math>G</math> with the partite sets <math>A</math> and <math>B</math> by connecting <math>a</math> and <math>b</math> by an edge if there is a line in <math>E</math> through <math>a</math> and <math>b</math>. The restricted sumset <math>\{a+b\colon (a,b)\in G\}</math> is contained in the set <math>-C</math>, while the difference set <math>\{a-b\colon (a,b)\in G\}</math> is all of <math>{\mathbb F}_3^{n-1}</math>. Using an estimate from [http://front.math.ucdavis.edu/math.CO/9906097 a paper of Katz-Tao], we conclude that <math>3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}</math>, leading to <math>|E|\ge 3^{6(n-1)/11}</math>. Thus, :<math>k_n \ge 3^{6(n-1)/11}.</math> :<math>k_n \ge 3^{6(n-1)/11}.</math> Latest revision as of 00:35, 5 June 2009 A Kakeya set in [math]{\mathbb F}_3^n[/math] is a subset [math]E\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]e\in{\mathbb F}_3^n[/math] such that [math]e,e+d,e+2d[/math] all lie in [math]E[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math]. Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements. Basic Estimates Trivially, we have [math]k_n\le k_{n+1}\le 3k_n[/math]. Since the Cartesian product of two Kakeya sets is another Kakeya set, the upper bound can be extended to [math]k_{n+m} \leq k_m k_n[/math]; this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity. Lower Bounds To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence [math]k_n\ge 3^{(n+1)/2}.[/math] One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. 
On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math].

The better estimate [math]k_n\ge (9/5)^n[/math] is obtained in a paper of Dvir, Kopparty, Saraf, and Sudan. (In general, they show that a Kakeya set in the [math]n[/math]-dimensional vector space over the [math]q[/math]-element field has at least [math](q/(2-1/q))^n[/math] elements.)

A still better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus,

[math]k_n \ge 3^{6(n-1)/11}.[/math]

Upper Bounds

We have [math]k_n\le 2^{n+1}-1[/math], since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set. This estimate can be improved using an idea due to Ruzsa (seems to be unpublished). Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]r/3+O(\sqrt r)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2r/3+O(\sqrt r)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{r/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: for, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]).

Putting all this together, we seem to have

[math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math]

or

[math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math]
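The first upper-bound construction is easy to verify by brute force for small [math]n[/math] (a sketch; the witness line for direction [math]d[/math] can be started at the point with a [math]1[/math] exactly on the support of [math]d[/math]):

import itertools

def check_upper_bound(n):
    # E = vectors in F_3^n in which the digit 1 or the digit 2 does not occur
    E = {v for v in itertools.product(range(3), repeat=n)
         if 1 not in v or 2 not in v}
    for d in itertools.product(range(3), repeat=n):
        if not any(d):
            continue
        e = tuple(1 if di else 0 for di in d)   # starting point of the line
        assert all(tuple((e[i] + t*d[i]) % 3 for i in range(n)) in E
                   for t in range(3)), d
    return len(E)

for n in range(1, 7):
    print(n, check_upper_bound(n), 2**(n + 1) - 1)   # the two counts agree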
Do there exist any Turing complete typed lambda calculi? If so, what are a few examples?

Yes, sure. Many typed lambda calculi accept only strongly normalizing terms, by design, so they cannot express arbitrary computations. But a type system can be anything you like; make it broad enough, and you can express all deterministic computations. A trivial type system that encompasses a Turing-complete fragment of the lambda calculus is the one that accepts every term as well-typed (with a top type). $$\dfrac{}{\Gamma \vdash M : \top}$$

More practically, statically typed functional programming languages have at their core a typed lambda calculus which allows a fixpoint combinator as well-typed. For example, start with the simply typed lambda calculus (or the ML type system or system F or any other type system of your choice) and add a rule that makes some fixpoint combinator like $\mathbf{Y} = \lambda f. (\lambda x. f (x\,x)) (\lambda x. f (x\,x))$ well-typed. $$ \dfrac{\Gamma \vdash f : T \rightarrow T} {\Gamma \vdash \mathbf{Y}\,f : T} \qquad \dfrac{\Gamma \vdash f : T \rightarrow T} {\Gamma \vdash (\lambda x. f (x\,x)) (\lambda x. f (x\,x)) : T} $$

The rules as presented above are rather clumsy, as they make terms like $\mathbf{Y}\,f$ well-typed even though their constituents are not well-typed — they are not fully compositional. A simple fix is to add a fixpoint combinator as a language constant and provide a delta rule for it; then it is a simple matter to have a type system and reduction semantics with type preservation. You do get away from the pure lambda calculus into the realm of lambda calculus with constants. $$\begin{gather*} \dfrac{}{\Gamma \vdash \textbf{fix} : (T \rightarrow T) \rightarrow T} \\ \textbf{fix}\,f \to f (\textbf{fix}\,f) \\ \end{gather*}$$

Sticking to the pure lambda calculus, an interesting type system is the lambda calculus with intersection types. $$ \dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2} {\Gamma \vdash M : T_1 \wedge T_2} (\wedge I) \qquad\qquad \dfrac{} {\Gamma \vdash M : \top} (\top I) $$ Intersection types have interesting properties with respect to normalization:

A lambda-term can be typed without using the $\top I$ rule iff it is strongly normalizing.

A lambda-term admits a type not containing $\top$ iff it has a normal form.

See Characterization of lambda-terms that have union types for an insight as to why intersection types have such a remarkable scope. So you have a type system that defines a Turing-complete language (since every term is well-typed), and a simple characterization of terminating computations. Of course, since this type system characterizes normalization, it is not decidable.

A remark on the rule names $(\top I)$ and $(\wedge I)$: they have no formal meaning, but they are chosen deliberately. The $I$ stands for "introduction", because these are introduction rules — they introduce the symbol ($\wedge$ or $\top$) into the type below the line. Dually, you'll find elimination rules, when a symbol appears above the line but not below. For example, the rule to typecheck a lambda expression in the simply-typed lambda calculus is the introduction rule for $\rightarrow$, and the rule to typecheck an application is the elimination rule for $\rightarrow$.
I am stuck trying to understand how to solve the following question. Could someone please explain for a beginner to average-case analysis?

Consider the following algorithm A, which takes as input a bit string b = b_1, b_2, ..., b_n (each b_i is either 0 or 1) and does the following:

A(b, n)
  k = 0
  FOR i = 1 TO n DO
    IF b_i = 1 THEN k = k + 1
  j = k mod 2
  IF j = 1 THEN
    FOR i = 1 TO k DO PRINT("Hello World")

Assuming each bit string b is equally likely, what is the average number of "Hello World"s that A will print?

The hint I have so far is that I need to express the sum directly on the sample space, where $X$ below represents the number of times "Hello World" is printed and depends on the individual input: $$ E(X) = \sum_{L \in S_n} X(L) \cdot \Pr(L) $$
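For reference, the expectation can be evaluated in closed form (a sketch of one route; this is my own derivation, so verify it against whatever your course expects). The algorithm prints $k$ copies when the number of ones $k$ is odd and $0$ otherwise, so

$$\mathbb{E}[X]=\sum_{k\ \mathrm{odd}} k\binom{n}{k}2^{-n}=2^{-n}\,n\sum_{k\ \mathrm{odd}}\binom{n-1}{k-1}=2^{-n}\,n\,2^{n-2}=\frac{n}{4}\quad (n\ge 2),$$

using $k\binom{n}{k}=n\binom{n-1}{k-1}$ and the fact that the binomial coefficients of order $n-1$ with even index sum to $2^{n-2}$. A brute-force check:

from itertools import product

n = 8
exact = sum(sum(b) * (sum(b) % 2) for b in product((0, 1), repeat=n)) / 2**n
print(exact, n / 4)   # 2.0 2.0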
Background. I'm reading papers about the cutting stock problem (CSP).

Said Ben Messaoud, Chengbin Chu, Marie-Laure Espinouse (2008). Characterization and modelling of guillotine constraints. European Journal of Operational Research 191, 112–126.

D.A. Wuttke, H.S. Heese (2017). Two-dimensional cutting stock problem with sequence dependent setup times. European Journal of Operational Research.

Problem. There is a square canvas whose side is $n\times a$. It is required to cut this canvas into $n^2$ equal squares of side $a$. For $n = 3$, we easily get the solution: $9$ squares with sides $a_1=a_2=\ldots=a_9=a$, where $a_i$ is the side of the $i$-th square, $i \in I = \{1,2,\ldots, n^2\}$. If we remove any square, we will have a classic $8$-puzzle (left figure below, $n=3$).

First, let us omit the constraint that the squares' sides be equal; they can be different: $$a_1 <a_2 <... <a_i <... <a_{n ^ 2}. \tag{1}$$

Second, let us add the following constraints on the squares' sides (right figure above, $n=3$):

The sum of any two consecutive elements of the sequence $(1)$ must be greater than the following element.

Any element of the sequence $(1)$ must be less than half the canvas side, $\frac{n\times a}{2}$.

The first element of the sequence $(1)$ must be greater than half of the side in the equal-squares solution, $\frac{a}{2}$.

The task is to prove: a) the problem has a solution (integer or real); b) the solution is an $(n^2-1)$-puzzle.

Question. How to set up a model for an optimization problem? My attempt: I think I have a case of the guillotine cutting stock problem. I have tried to write the constraints (C1)-(C3):

$$ s.t. \left\{% \begin{array}{ll} a_ {i + 2} < a_i + a_ {i + 1} ,& i = 1,2, ..., n ^ 2-2; \\ a_i <\frac{n\times a}{2}, & \forall i \in I; \\ 0 < a/2 < a_1. \\ \end{array}% \right.$$

Update. Here is the list of software (see at the bottom of the page) to design, test, and solve your own original sliding block puzzles.
Hi, I'd appreciate it if someone could check the following exercise; any suggestions are welcome. Thanks ;)

Let $A$ be a subset of ${\bf{R}}^d$. Show that the following conditions are equivalent:

(i) $A$ is Lebesgue measurable.

(ii) $A$ is the union of an F$_{\sigma}$ and a set of Lebesgue measure zero.

(iii) There is a set $B$ that is an F$_{\sigma}$ and satisfies $\lambda^*(A\triangle B)=0$.

Proof: (i) $\Rightarrow$ (ii) Suppose $\lambda (A)<+\infty$. For each natural number $n$ we can choose a compact set $K_n$ such that $K_n\subset A$ and $\lambda(A)-2^{-n}<\lambda(K_n)$. Let $K=\bigcup_n K_n$. Then $K$ is an F$_{\sigma}$, $K\subset A$, and the relation $$\lambda(K)\ge\lambda(K_n)>\lambda(A)-2^{-n}$$ holds for all $n$, so $\lambda(K)=\lambda(A)$. Then $\lambda(A\setminus K)=0$; furthermore $A=K\cup (A\setminus K)$. Thus the assertion is proved in the case where $\lambda (A)<+\infty$.

If $A$ is an arbitrary Lebesgue measurable set, then $A$ is the union of a sequence $\{A_n\}$ of Lebesgue measurable sets of finite Lebesgue measure (since ${\bf{R}}^d$ is sigma-finite, $A=\bigcup_n A\cap B_n$, where $\{B_n\}$ is a sequence of measurable sets whose union is ${\bf{R}}^d$ and $\lambda (B_n)<+\infty$). For each positive natural number $n$, we have $A_n=F_n\cup Z_n$, where $F_n$ is an F$_{\sigma}$ and $Z_n$ is a set of Lebesgue measure zero. The sets $F$ and $Z$ defined by $F=\bigcup_n F_n$ and $Z=\bigcup_n Z_n$ satisfy: $F$ is an F$_{\sigma}$, $Z$ is a null set, and their union is $A$.

(ii) $\Rightarrow$ (iii) Follows immediately: if $A=F\cup Z$ as in (ii), take $B=F$; then $A\triangle B\subset Z$, so $\lambda^*(A\triangle B)=0$.

(iii) $\Rightarrow$ (i) Every F$_{\sigma}$ is Lebesgue measurable. From the condition $\lambda^*(A\triangle B)=0$ we can derive that $\lambda^*(A\setminus B)=0=\lambda^*(B\setminus A)$. So $A=(B\cup (A\setminus B))\setminus (B\setminus A)$ is Lebesgue measurable (since any null set is Lebesgue measurable).
The Semaphore Barrier

I wanted to share an interview question I came up with. The idea came from my operating and distributed systems classes, where we were expected to implement synchronization primitives and reason about parallelism, respectively. Synchronization primitives can be used to coordinate across multiple threads working on a task in parallel. Most primitives can be implemented through the use of a condition variable and lock, but I was wondering about implementing other primitives in terms of semaphores.

Introduction to the Primitives

Semaphores

Semaphores are a type of synchronization primitive that encapsulate the idea of "thresholding". A semaphore s has two operations: s.up() and s.down(). A semaphore also has an internal non-negative number representing its state. A thread calling s.down() is allowed to continue only if this number is positive, in which case the number is atomically decremented and the thread goes on with its work; otherwise it blocks. s.up() never blocks; it raises the number. If we wanted to make sure that only 5 threads executing f() (a function that we implement) ever printed hello, and we somehow preemptively set the state of semaphore s to 5, then the following code would work:

def f():
    s.down()
    print("hello\n", end='')

Regardless of how many threads call f at the same time, because of the atomic guarantees on s's state, only 5 threads will be let through to print hello. If at any time in the above we had at least 6 threads call f and any thread also call s.up() at some point, eventually 6 hellos would be printed. In the following, we'll assume the OS provides a magic semaphore implementation.
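The example above maps directly onto Python's threading.Semaphore if you want to run it (a sketch; acquire and release play the roles of down and up, and the sleeps just make the ordering visible):

import threading, time

s = threading.Semaphore(5)   # internal count preset to 5

def f():
    s.acquire()              # s.down(): blocks while the count is zero
    print("hello\n", end='')

for _ in range(8):
    threading.Thread(target=f, daemon=True).start()

time.sleep(0.5)              # five hellos appear; three threads stay blocked
s.release()                  # s.up(): lets a sixth hello through
time.sleep(0.5)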
Barriers

A barrier is similar to a semaphore, but it's meant to be a one-off, well, barrier. A barrier is preconfigured to accept n threads. Its API is defined by b.wait(), where a thread waits until n-1 other threads are also waiting on b, and only then are the threads allowed to continue. Barriers are useful when we want to coordinate some work.

Suppose we have 2 threads who want to draw a picture together. Say thread 0 can only draw red and thread 1 can only draw blue, but no two threads can draw on the same half of the screen at the same time. Assuming b has been initialized with n=2, the following would work:

thread 0            thread 1
draw left half      draw right half
b.wait()            b.wait()
draw right half     draw left half

Now, no matter which thread is faster, we'll never violate the condition that 2 threads write on the same half of the screen. The only way thread 0 can be on the left half is if it hasn't crossed the barrier wait yet. The only way thread 1 can be on the left half is if it crossed the barrier, but since n=2, it can only cross the barrier if thread 0 is waiting, in which case thread 0 must have finished drawing on the left half! Similar logic can be applied to the right side; in other words, no side of the screen is ever shared by two threads at any given time, regardless of how fast one thread is compared to the other.

The Challenge

Warm-up

Our goal will be to implement a barrier (namely, fill in what b.wait() does for a given n). Let's focus on the case where we only have n=2 threads. This can be done with two semaphores.

Solution to the Warm-up

As you may have guessed, the only nontrivial semaphore arrangement works. From here on, we let s(i) be the i-th semaphore, initialized with state 0. Similarly, \(t_i\) will refer to the \(i\)-th thread. Here's what we would want b.wait() to do on each thread:

t0            t1
s(0).up       s(1).up
s(1).down     s(0).down

Indeed, \(t_0\) can't advance past b.wait() unless s(1) is upped, which only happens if \(t_1\) calls b.wait(). Symmetric logic shows that our barrier, if implemented to execute those instructions on each thread, will similarly stop \(t_1\) from advancing without \(t_0\) being ready.

The General Problem

Now, here's the main question: Can we implement an arbitrary barrier, capable of blocking n threads, with semaphores and no control flow? With control flow? Now, can we do so efficiently, using as few semaphores as possible? In as little time per thread as possible?

Attempt: Extending the 2-thread Case

Let's try extending our approach from the 2-thread case. Maybe we can just use 3 semaphores now, keeping the "cycle" that seems to be built into the 2-thread example?

t0            t1            t2
s(0).up       s(1).up       s(2).up
s(1).down     s(2).down     s(0).down

But this won't work, unfortunately: suppose \(t_0\) is running slow. Both \(t_1\) and \(t_2\) finish well ahead of time, and each calls b.wait(). Then \(t_2\) ups s(2), after which \(t_1\) can pass through without waiting for \(t_0\) to call b.wait(), a violation of our barrier behavior.

Answer

Not so fast! Try it yourself! How efficient is your solution? There's a couple of them, in increasing order of difficulty. The following list describes the asymptotic space complexity (number of semaphores used) and time complexity (per thread).
- \(O(n^2)\) space and \(O(n)\) time
- \(O(n)\) space and \(O(n)\) time
- \(O(n)\) space and \(O(1)\) average time, \(O(n)\) worst-case time
- \(O(n)\) space and \(O(1)\) worst-case time
- \(O(1)\) space and \(O(1)\) worst-case time

A Note on Thread IDs

The fact that we can write different code for each of the threads to execute in the above examples might seem a bit questionable. However, we can get around this by assuming that we have access to thread IDs. As long as we can procure a thread's procedure given just its ID (and the function procuring such a procedure doesn't take \(O(n^2)\) space), we should be fine. Even if the thread ID isn't available, we can use an atomic counter, which assigns effective thread IDs based on which thread called b.wait() first:

atomic = AtomicInteger(0)

def wait():
    tid = atomic.increment_and_get()
    ...
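For completeness, the two-thread warm-up barrier is easy to try out with real semaphores (a sketch; wait(tid) is exactly the warm-up table, with s(i).up as release and s(1-i).down as acquire):

import threading

s = [threading.Semaphore(0), threading.Semaphore(0)]

def wait(tid):
    s[tid].release()         # s(tid).up
    s[1 - tid].acquire()     # s(1 - tid).down

def worker(tid):
    print(f"t{tid}: before barrier")
    wait(tid)
    print(f"t{tid}: after barrier")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()   # both "after" lines appear only once both threads arrive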