As preparation for an exam about algorithms and complexity, I am currently solving old exercises. One concept I have been struggling with since I first encountered it is amortised analysis. What is amortised analysis, and how do you do it? In our lecture notes, it is stated that "amortised analysis gives bounds for the "average time" needed for certain operations and it can also give a bound for the worst case". That sounds really useful, but when it comes to examples, I have no idea what I have to do, and even after having read the sample solution, I have no idea what they are doing.

Let's add up 1 in base 2, i.e. 0, 1, 10, 11, 100, 101, 110, 111, 1000, ... Using amortised analysis, show that in each step only amortised constantly many bits need to be changed.

(The exercise is originally in German, so I apologise for my possibly not perfectly accurate translation.)

Now the standard solution first defines $\phi(i) := c \cdot \# \{\text{1-bits in the binary representation}\}$ for some constant $c > 0$. I think this is what is called the potential function, which somehow corresponds to the excessive units of time (but I have no idea why I would come up with this particular definition). Assume that we have to change $m$ bits in the $i$-th step. Such a step is always of the form $$\dots \underbrace{0 1 \dots 1}_m \to \dots \underbrace{1 0 \dots 0}_m.$$ This statement is understandable to me; however, again I fail to see the motivation behind it. Then, out of nowhere, they come up with what they call an "estimate" $$a(i) = m + c(\phi(i) - \phi(i-1)) = m + c(-m + 2)$$ and they state that for $c=1$, we get $a(i)=2$, which is what we had to show.

What just happened? What is $a(i)$? Why can we choose $c=1$? In general, if I have to show that in each step only amortised constantly many "units of time" are needed, does that mean I have to show that $a(i)$ is constant?
There are a few other exercises regarding amortised analysis and I don't understand them either. I thought if someone could help me out with this one, I could give the other exercises another try and maybe that'll help me really grasp the concept. Thanks a lot in advance for any help.
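To make the sample solution's bookkeeping concrete, here is a small experiment of my own (not from the lecture notes): it increments a binary counter, records the actual cost $m$ of each step plus the change in potential $\phi(i)-\phi(i-1)$ with $c=1$, and every amortised cost $a(i)$ indeed comes out as exactly 2.

```python
# Empirical check of the amortised claim: incrementing a binary counter
# flips amortised O(1) bits per step, even though a single step may flip many.
# phi(i) = number of 1-bits after step i (the exercise's potential, with c = 1).

def increment(bits):
    """Increment a little-endian bit list in place; return the number of bits flipped."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:   # trailing 1s all flip to 0
        bits[i] = 0
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(1)                      # counter grows by one bit
    else:
        bits[i] = 1                         # first 0 flips to 1
    return flips + 1

def amortized_costs(n):
    bits, costs, phi_prev = [], [], 0
    for _ in range(n):
        m = increment(bits)
        phi = sum(bits)                     # number of 1-bits
        costs.append(m + (phi - phi_prev))  # a(i) = actual cost + potential change
        phi_prev = phi
    return costs

print(set(amortized_costs(1000)))  # → {2}: every amortised cost is exactly 2
```

Since the total actual cost equals the total amortised cost minus the final potential, and the potential is always non-negative, $a(i) = 2$ for all $i$ shows that any $n$ increments flip at most $2n$ bits in total.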
The trust-region approach can be motivated by noting that the quadratic model \(q_k\) is a useful model of the function \(f\) only near \(x_k\). When the Hessian matrix is indefinite, the quadratic function \(q_k\) is unbounded below, so \(q_k\) is obviously a poor model of \(f(x_k+s)\) when \(s\) is large. Therefore, it is reasonable to select the step \(s_k\) by solving the subproblem \[\min \{ q_k(s) : \| D_ks \|_2 \leq \Delta_k \}\] for some \(\Delta_k > 0\) and scaling matrix \(D_k\). The trust-region parameter, \(\Delta_k\), is adjusted between iterations according to the agreement between predicted and actual reduction in the function \(f\) as measured by the ratio \[\rho_k = \frac{f(x_k) - f(x_k + s_k)}{f(x_k) - q_k(s_k)}.\] If there is good agreement, that is, \(\rho_k \approx 1\), then \(\Delta_k\) is increased. If the agreement is poor, i.e., \(\rho_k\) is small or negative, then \(\Delta_k\) is decreased. The decision to accept the step \(s_k\) is also based on \(\rho_k\). Usually, \[x_{k+1} = x_k + s_k\] if \(\rho_k \geq \sigma_0\), where \(\sigma_0\) is small (typically, \(10^{-4}\)); otherwise \(x_{k+1} = x_k\). The following Newton codes use trust-region methods with different algorithms for choosing the step \(s_k\): IMSL, LANCELOT, PORT 3, and PROC NLP. Most of these algorithms rely on the observation that there is a \(\lambda_k\) such that \[(\nabla ^2 f(x_k) + \lambda_k D_k^T D_k ) s_k = - \nabla f(x_k), \quad\quad\quad(1.1)\] where either \(\lambda_k = 0\) and \(\| D_ks_k \|_2 \leq \Delta_k\), or \(\lambda_k > 0\) and \(\| D_ks_k \|_2 = \Delta_k\). An appropriate \(\lambda_k\) is determined by an iterative process in which (1.1) is solved for each trial value of \(\lambda_k\). The algorithm implemented in the TENMIN package extends Newton's method by forming low-rank approximations to the third- and fourth-order terms in the Taylor series approximation.
These approximations are formed by using matching conditions that require storage of the gradient (but not the Hessian) on a number of previous iterations.
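The radius-update and step-acceptance logic described above can be sketched as follows. This is a minimal illustration: the thresholds 0.25/0.75 and the factors 2 and 0.5 are common textbook choices, not the values used by any of the codes named above.

```python
def trust_region_update(rho, delta, sigma0=1e-4, eta_bad=0.25, eta_good=0.75):
    """One iteration's trust-region bookkeeping.

    rho   : agreement ratio rho_k = actual reduction / predicted reduction
    delta : current trust-region radius Delta_k
    Returns (accept_step, new_delta).
    """
    accept = rho >= sigma0          # accept x_{k+1} = x_k + s_k only if rho_k >= sigma_0
    if rho >= eta_good:             # good agreement (rho_k near 1): enlarge the region
        delta = 2.0 * delta
    elif rho < eta_bad:             # poor agreement (rho_k small or negative): shrink it
        delta = 0.5 * delta
    return accept, delta

# a step that reproduces the predicted reduction well is accepted and the radius grows
print(trust_region_update(0.9, 1.0))   # → (True, 2.0)
# a step that increases f is rejected and the radius shrinks
print(trust_region_update(-0.5, 1.0))  # → (False, 0.5)
```

In a full method, this update would run after each solve of the subproblem, e.g. via equation (1.1) above.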
Antisymmetric wave functions instantly imply the Pauli exclusion principle – essentially because $\psi(x,x)=-\psi(x,x)=0$, to write the concept schematically – which implies that the occupation numbers are $N=0,1$ and statistical physics is therefore inevitably governed by the Fermi-Dirac statistics which may be derived from Boltzmann/statistical physics for these occupation numbers. Similarly, symmetric wave functions imply that particles are indistinguishable but the occupation numbers may be $N=0,1,2,3,\dots$. That implies the Bose-Einstein distribution by applying the Boltzmann steps to the multiparticle states with these occupation numbers. For the proofs of the first two paragraphs of my answer, see How to derive Fermi-Dirac and Bose-Einstein distribution using canonical ensemble? The bulk of the spin-statistics theorem is to link the antisymmetric functions with the half-integer spin and symmetric functions with integer spin. It was proved by Pauli and all the evidence available to me suggests that you haven't seen a clear proof because you haven't tried. I won't reproduce the full proof here because I don't believe it would be a good investment of time but I will give a sketch. The Lagrangian for a spin-0 real field $\phi$ has to contain the kinetic term$$\frac{1}{2} \partial_\mu \phi \partial^\mu \phi $$which is dictated by the Lorentz symmetry etc. If $\phi$ were anticommuting, the object above would identically vanish and there would be no dynamics. For spin-1/2 fields, it's the other way around (the Majorana kinetic term would vanish if the fields were not anticommuting), and so on. For the wrong combinations of spin and statistics, Pauli actually showed one can't have positive norms and/or Hamiltonians bounded from below.
Superposition Theorem: The total current in any part of a linear circuit equals the algebraic sum of the currents produced by each source separately. Let the resistance per unit angle (in radians) subtended at the centre of the conducting loop be $\lambda$. The total EMF ($E_{total}$) around a circular path in a time-varying magnetic field is $\dfrac{d\phi}{dt}=A \dfrac{dB}{dt}$ (where $A$ is the area of the circle). We join points A and B on the loop. Suppose AB subtends an angle $\theta$ at the centre C. Considering only the minor arc: $E_{AB,minor}=\dfrac{E_{total}\theta}{2\pi}$, so $i_{AB,minor}=\dfrac{E_{total}\theta}{2\pi \lambda \theta}=\dfrac{E_{total}}{2\pi \lambda }$. For the major arc: $E_{AB,major}=\dfrac{E_{total}(2\pi-\theta)}{2\pi}$, so $i_{AB,major}=\dfrac{E_{total}(2\pi-\theta)}{2\pi \lambda (2\pi-\theta)}=\dfrac{E_{total}}{2\pi \lambda }$. Now, in the connecting wire AB the currents $i_{AB,major}$ and $i_{AB,minor}$ are in opposite directions and cancel each other out, giving a net current of $0$ in the wire by the Superposition Theorem.
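The cancellation above is easy to check numerically: whatever angle the chord subtends, the two arc currents agree, because the EMF and the resistance of each arc are both proportional to the arc's angle. A quick sketch (arbitrary example values for the EMF and $\lambda$):

```python
from math import pi

def branch_currents(E_total, lam, theta):
    """Currents through the minor and major arcs of the loop.

    E_total : total EMF induced around the loop
    lam     : resistance per radian of arc
    theta   : angle (radians) subtended by chord AB at the centre
    """
    E_minor = E_total * theta / (2 * pi)
    i_minor = E_minor / (lam * theta)               # minor-arc resistance = lam * theta
    E_major = E_total * (2 * pi - theta) / (2 * pi)
    i_major = E_major / (lam * (2 * pi - theta))    # major-arc resistance = lam * (2*pi - theta)
    return i_minor, i_major

# the two currents are equal (and oppositely directed in wire AB),
# so the net current in the connecting wire is zero for any theta
i1, i2 = branch_currents(E_total=1.0, lam=2.0, theta=0.7)
print(abs(i1 - i2) < 1e-12)  # → True
```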
$\underline{\text{ Another Method:}}$ Please refer to the following figure: Consider $ΔAED$. By the midpoint theorem, $MN \parallel AD$ and $MN=\frac{1}{2}AD$. Now extend $BM$ and $CN$ to meet one another at $F$, and consider $ΔBFC$, $ΔABE$ and $ΔEDC$. $$∠FBC=∠ABE=∠EDC \text{ and } ∠FCB=∠EAB=∠ECD $$$$\Longrightarrow ΔBFC \sim ΔABE \sim ΔEDC.$$ As $CN$ bisects $\angle ECD$ and halves $ED$, $MC$ bisects $\angle FCB$ and halves $FB$; as $BM$ bisects $\angle ABE$ and halves $AE$, $NB$ bisects $\angle FBC$ and halves $FC$. That means $M$ and $N$ are the midpoints of $FB$ and $FC$, respectively. Hence, by the midpoint theorem, $MN\parallel BC$ and $MN=\frac{1}{2}BC$. Thus, $AD\parallel BC$ and $AD=BC$. Then quadrilateral $ABCD$ is a parallelogram. Consider $ΔABM$ and $ΔCBN$; $ΔCDN$ and $ΔCBM$. $$∠ABM=∠CBN \text{ and } ∠NCD=∠MCB$$$$∠MAB=∠NCB \text{ and } ∠CDN=∠CBM$$$$\Longrightarrow ΔABM \sim ΔCBN \text{ and } ΔCDN \sim ΔCBM.$$ $$\frac{AB}{DC}=1 \implies \frac{AB}{BC}=\frac{DC}{BC}. $$That means the ratio of the corresponding sides of $ΔABM$ and $ΔCBN$ is the ratio of the corresponding sides of $ΔCDN$ and $ΔCBM $. Hence $$\frac{AM}{NC}=\frac{NC}{MC} \implies 3AM^2=NC^2$$$$\implies NC=\sqrt{3}AM $$ Consider $ΔEDC$. Using the angle bisector theorem, as Rex proposed, $$\frac{CE}{CD}=\frac{EN}{ND}=1 \implies CE=CD \implies CN \perp ED.$$ Consider $ΔENC$. Since $EC=2AM$ and $NC=\sqrt{3}AM$ and $CN \perp ED$, $EN=\sqrt{4-3}AM=AM.$ Then $ΔENC$ is $30$-$60$-$90$. Thus, $ΔAEB$ and $ΔEDC$ are $60$-$60$-$60$. Since $AD\parallel BC$, quadrilateral $ABCD$ is a rectangle.
Let $a,b \in \mathbb{N}$ and $a \neq b$. If $(a-1)x^2-(a^2+2)x+a^2+2a=0$ and $(b-1)x^2-(b^2+2)x+b^2+2b=0$ have a common root, then what is the value of $ab$? I tried using Cramer's rule for a common root, but that did not simplify to anything, and on subtracting the equations I again ended up with a quadratic; on solving that quadratic I got $$x=\frac{a^2-b^2\pm\sqrt{(b^2-a^2)^2-4(a-b)(a^2-b^2+2a-2b)}}{2(a-b)}$$ which does not simplify either. Any hints on how I should solve this?
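Not a hint so much as a sanity check I tried: since $a,b$ are natural numbers, a brute-force search over small values shows what the conditions force. (Hypothetical helper code, just for experimentation; every pair found has the same product.)

```python
from math import isclose

def roots(p, q, r):
    """Both roots of p*x^2 + q*x + r = 0 (the discriminant here is always
    a perfect square, so floating point is exact enough for comparison)."""
    d = (q * q - 4 * p * r) ** 0.5
    return [(-q + d) / (2 * p), (-q - d) / (2 * p)]

hits = []
for a in range(2, 30):          # a = 1 would make the first equation linear
    for b in range(2, 30):
        if a == b:
            continue
        r1 = roots(a - 1, -(a * a + 2), a * a + 2 * a)
        r2 = roots(b - 1, -(b * b + 2), b * b + 2 * b)
        if any(isclose(x, y) for x in r1 for y in r2):
            hits.append((a, b, a * b))

print(hits)  # every hit found has a*b == 8
```

This suggests looking for a factored form: plugging $x=a$ into the first equation makes it vanish, so its roots are $a$ and $(a+2)/(a-1)$, and the search above is consistent with that.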
Summary: The Multiple Traveling Salesman Problem (\(m\)TSP) is a generalization of the Traveling Salesman Problem (TSP) in which more than one salesman is allowed. Given a set of cities, one depot where \(m\) salesmen are located, and a cost metric, the objective of the \(m\)TSP is to determine a tour for each salesman such that the total tour cost is minimized and that each city is visited exactly once by only one salesman. The Multiple Traveling Salesman Problem (\(m\)TSP) is a generalization of the Traveling Salesman Problem (TSP) in which more than one salesman is allowed. Given a set of cities, one depot (where \(m\) salesmen are located), and a cost metric, the objective of the \(m\)TSP is to determine a set of routes for \(m\) salesmen so as to minimize the total cost of the \(m\) routes. The cost metric can represent cost, distance, or time. The requirements on the set of routes are: All of the routes must start and end at the (same) depot. Each city must be visited exactly once by only one salesman. The \(m\)TSP is a relaxation of the vehicle routing problem (VRP); if the vehicle capacity in the VRP is a sufficiently large value so as not to restrict the vehicle capacity, then the problem is the same as the \(m\)TSP. Therefore, all of the formulations and solution approaches for the VRP are valid for the \(m\)TSP. The \(m\)TSP is a generalization of the TSP; if the value of \(m\) is 1, then the \(m\)TSP problem is the same as the TSP. Therefore, all of the formulations and solution approaches for the \(m\)TSP are valid for the TSP. Bektas (2006) lists a number of variations on the \(m\)TSP. Multiple depots: Instead of one depot, the multi-depot \(m\)TSP has a set of depots, with \(m_j\) salesmen at each depot \(j\). In the fixed destination version, a salesman returns to the same depot from which he started.
In the non-fixed destination version, a salesman does not need to return to the same depot from which he started, but the same number of salesmen must return as started from a particular depot. The multi-depot \(m\)TSP is important in robotic applications involving ground and aerial vehicles. For example, see Oberlin et al. (2009). Specifications on the number of salesmen: The number of salesmen may be a fixed number \(m\), or the number of salesmen may be determined by the solution but bounded by an upper bound \(m\). Fixed charges: When the number of salesmen is not fixed, there may be a fixed cost associated with activating a salesman. In the fixed charge version of the \(m\)TSP, the overall cost to minimize includes the fixed charges for the salesmen plus the costs for the tours. Time windows: As with the TSP and the VRP, there is a variation of the \(m\)TSP with time windows. Associated with each node is a time window during which the node must be visited by a tour. The \(m\)TSPTW has many applications, such as school bus routing and airline scheduling. Here we present an assignment-based integer programming formulation for the \(m\)TSP. Consider a graph \(G=(V,A)\), where \(V\) is the set of \(n\) nodes, and \(A\) is the set of edges. Associated with each edge \((i,j) \in A\) is a cost (or distance) \(c_{ij}\). We assume that the depot is node 1 and there are \(m\) salesmen at the depot. We define a binary variable \(x_{ij}\) for each edge \((i,j) \in A\); \(x_{ij}\) takes the value 1 if edge \((i,j)\) is included in a tour and \(x_{ij}\) takes the value 0 otherwise. For the subtour elimination constraints, we define an integer variable \(u_i\) to denote the position of node \(i\) in a tour, and we define a value \(p\) to be the maximum number of nodes that can be visited by any salesman.
Objective Minimize \( \sum_{(i,j) \in A} c_{ij} x_{ij}\) Constraints Ensure that exactly \(m\) salesmen depart from node 1 \(\sum_{j \in V: (1,j) \in A} x_{1j} = m\) Ensure that exactly \(m\) salesmen return to node 1 \(\sum_{j \in V: (j,1) \in A} x_{j1} = m\) Ensure that exactly one tour enters each node \(\sum_{i \in V: (i,j) \in A} x_{ij} = 1, \forall j \in V\) Ensure that exactly one tour exits each node \(\sum_{j \in V: (i,j) \in A} x_{ij} = 1, \forall i \in V\) Include subtour elimination constraints (Miller-Tucker-Zemlin) \(u_i - u_j + p \cdot x_{ij} \leq p-1, \forall 2 \leq i \neq j \leq n\) The literature includes a number of alternative formulations. Some of the alternatives to the two-index variable, assignment-based formulation are a two-index variable formulation with the original subtour elimination constraints (see Laporte and Nobert, 1980), a three-index variable formulation (see Bektas, 2006), and a \(k\)-degree center tree-based formulation (see Christofides et al., 1981 and Laporte, 1992). To solve this integer linear programming problem, we can use one of the NEOS Server solvers in the Mixed Integer Linear Programming (MILP) category. Each MILP solver has one or more input formats that it accepts. If we submit this model to XpressMP with a CPU time limit of 1 hour, we obtain a solution with a total cost of 2176 (gap of 4.3%) and the following tours: Tour 1: i13 - i4 - i10 - i20 - i2 - i13 cost = 60 + 39 + 25 + 49 + 87 = 260 Tour 2: i13 - i18 - i14 - i17 - i22 - i11 - i15 - i13 cost = 128 + 32 + 51 + 47 + 63 + 68 + 86 = 475 Tour 3: i13 - i24 - i8 - i28 - i12 - i6 - i1 - i13 cost = 56 + 48 + 64 + 71 + 46 + 60 + 82 = 427 Tour 4: i13 - i29 - i3 - i26 - i9 - i5 - i21 - i13 cost =160 + 60 + 78 + 57 + 42 + 50 + 82 = 529 Tour 5: i13 - i19 - i25 - i7 - i23 - i27 - i16 - i13 cost = 71 + 52 + 72 + 111 + 74 + 48 + 57 = 485 Bektas, T. 2006. The multiple traveling salesman problem: an overview of formulations and solution procedures. 
OMEGA: The International Journal of Management Science 34(3), 209-219. Christofides, N., A. Mingozzi, and P. Toth. 1981. Exact algorithms for the vehicle routing problem, based on spanning tree and shortest path relaxations. Mathematical Programming 20, 255-282. Laporte, G. 1992. The vehicle routing problem: an overview of exact and approximate algorithms. European Journal of Operational Research 59, 345-358. Laporte, G. and Y. Nobert. 1980. A cutting planes algorithm for the \(m\)-salesmen problem. Journal of the Operational Research Society 31, 1017-1023. Oberlin, P., S. Rathinam, and S. Darbha. 2009. A transformation for a heterogeneous, multi-depot, multiple traveling salesman problem. In Proceedings of the American Control Conference, 1292-1297, St. Louis, June 10 - 12, 2009.
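Independent of the MILP formulation above, the problem definition itself can be checked on a toy instance by exhaustive search. The sketch below (made-up symmetric costs, not the NEOS example data) enumerates all ways to split the cities among \(m\) salesmen, each route starting and ending at the depot, and returns the cheapest set of routes:

```python
from itertools import permutations, combinations

def brute_force_mtsp(cost, m):
    """Exhaustive mTSP: node 0 is the depot; every other node is visited
    exactly once by exactly one of the m salesmen; each route starts and
    ends at the depot. Returns (best_cost, best_routes). Tiny n only."""
    cities = list(range(1, len(cost)))
    best_cost, best_routes = float("inf"), None
    for perm in permutations(cities):
        # split the visiting order into m contiguous non-empty routes
        for cuts in combinations(range(1, len(perm)), m - 1):
            bounds = (0, *cuts, len(perm))
            routes = [perm[bounds[k]:bounds[k + 1]] for k in range(m)]
            total = sum(
                cost[0][r[0]]                                      # depot -> first city
                + sum(cost[r[i]][r[i + 1]] for i in range(len(r) - 1))
                + cost[r[-1]][0]                                   # last city -> depot
                for r in routes
            )
            if total < best_cost:
                best_cost, best_routes = total, routes
    return best_cost, best_routes

# symmetric toy cost matrix (made up), depot = node 0, m = 2 salesmen
C = [[0, 3, 4, 5, 6],
     [3, 0, 2, 7, 8],
     [4, 2, 0, 3, 9],
     [5, 7, 3, 0, 4],
     [6, 8, 9, 4, 0]]
best_cost, best_routes = brute_force_mtsp(C, m=2)
print(best_cost, best_routes)  # best_cost == 23 on this instance
```

An MILP solver, as used in the NEOS run above, finds the same optima without the factorial enumeration; the brute force is only useful to validate a formulation on small cases.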
I'm currently following a course in quantum mechanics that uses Griffiths's textbook. Griffiths shows that wave functions are members of a Hilbert space. Since this is an abstract vector space, we can assign a state vector $|{\psi}>$ to each wave function $\psi(x)$. Since our course on linear algebra runs pretty deep, this I understand. Griffiths subsequently defines the inner product between two state vectors $<\phi|\psi> = \int\phi^*(x)\psi(x)dx$, and he derives all sorts of fun things from this inner product; that the function $\psi(x)$ corresponding to $|\psi>$ is the wave function as seen previously in wave mechanics, and that its Fourier transform determines the probability density of momentum. At this point, something happens in both Griffiths and my course that I don't quite comprehend. Both my instructor and the book mention that at this point, the vectors will take precedence above the functions; all quantum mechanical information is from now on fully and completely recorded into the vector. The functions are simply a representation of this vector; one of the many possible representations. Am I correct in saying this? But Griffiths's proof that the coefficients in the position basis of $|\psi>$ form the values of the function $\psi(x)$ is completely based on the definition $<\phi|\psi> = \int\phi^*(x)\psi(x)dx$. How should I then read this? Does he assume that there is a function representation $\psi(x)$ corresponding to the state vector $|\psi>$ (which is in his right, since $L_2$ is isomorphic to the vector space), to subsequently derive the properties of this function? To me it just seems really strange that we have these abstract state vectors, only to define the inner product in terms of one very specific representation of that vector. Edit 6 February 2019: I can (for obvious reasons) not unmark this question as a duplicate, but I can "edit to explain why my question hasn't been answered before."
My question is not about whether the ket psi and the wave function are the exact same--I have written in my question that I understand that the wave function is a representation of the ket (in other words, I've used the answer to the duplicate question to ask my own question). My question is why in so many derivations, it appears that the wave function formulation takes precedence (in e.g. the definition of the inner product). In the answers below, it has been explained that there is no precedence, and that this is simply a shortcut in the theory.
Abstracts Preparation Instructions and guidelines for preparation and submission of abstracts for talks at the Fields Institute Your abstract should be sent electronically as simple LaTeX or plain TeX. You may also use features from the amsmath, amsfonts, or amssymb packages; all other macros should be defined. Please do not redefine standard macros. Send only the body of your abstract, that is, the portion found between the \begin{document} and \end{document} statements. Please indicate your name and the title of your talk SEPARATELY from your abstract. If you are using an on-line registration form, there are separate boxes for this information. If you are sending an e-mail, you should clearly distinguish this information. For example, you could say in an e-mail: Dear Program Coordinator, Here is an abstract for my talk entitled "Some remarks on \(n\)-periodic morphisms." [ include your abstract here ] Sincerely, Jane Smith University of Nancago If the abstract is from a paper with multiple authors, only one of whom is speaking, you can list the remaining author(s) by putting a sentence like "Joint work with ..." somewhere in your abstract. Here is an example of an abstract in an acceptable format: \newcommand{\g}{\mathfrak{g}} Let \(S\) denote the set of all integers \(n\) for which \(n > 0\) and \[ \int_0^n x^n\, dx < 1/2 .\] We will prove that every element of \(S\) is a counterexample to Wiles's Theorem. We relate this to the theory of isomorphisms \(\phi: \g \to \g\) where \(\g\) is a Lie algebra. and here is how it will be rendered in our webpages and event materials: Let \(S\) denote the set of all integers \(n\) for which \(n > 0\) and \[ \int_0^n x^n\, dx < 1/2 .\] We will prove that every element of \(S\) is a counterexample to Wiles's Theorem. We relate this to the theory of isomorphisms \(\phi: \mathfrak g \to \mathfrak g\) where \(\mathfrak g\) is a Lie algebra.
Here are some examples of things that must not appear in your abstract: Your name or the title of your talk. (Indicate these separately). \documentclass, \documentstyle, \bye, \eject, \newpage, \title, \author, \section, \subsection Commands to change margins or alter page numbering. Headings. (For example, don't put an "Abstract" heading at the beginning). Macro definitions that aren't used in your abstract. (In other words, please don't copy the entire macro definition section from your paper just because your abstract uses one of them). Spacing commands such as \vfill, \medskip (except when really necessary), \noindent, etc. Obsolete LaTeX 2.0.9 formatting commands such as \it, \rm, \bf, \em, \cal, \Bbb, \frak. These may still work, but we would prefer you to use the current LaTeX2e constructions: \textit{...}, \textrm{...}, \mathrm{...}, \textbf{...}, \mathbf{...}, \emph{...}, \mathcal{...}, \mathbb{...}, \mathfrak{...}, etc. Here is a quick test you can make on your abstract: Put this line at the beginning (with \usepackage{amsmath}\usepackage{amssymb} before the \begin{document} if your abstract uses these features): \documentclass{article}\begin{document} Put this line at the end: \end{document} Run latex on the resulting file. If you get errors when you do this, your abstract is not in an acceptable format. Remember that these additional lines are only for the test. Do not include them in the abstract you submit. ANY ABSTRACT FAILING THIS TEST WILL BE RETURNED TO THE AUTHOR FOR RE-SUBMISSION.
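Assembled from the test instructions above, the complete test file around the sample abstract would look like the following (the wrapper lines are only for the test; strip them before submitting):

```latex
\documentclass{article}
% add \usepackage{amsmath}\usepackage{amssymb} here, before \begin{document},
% if your abstract uses features from those packages
\begin{document}

\newcommand{\g}{\mathfrak{g}}
Let \(S\) denote the set of all integers \(n\) for which \(n > 0\) and
\[ \int_0^n x^n\, dx < 1/2 .\]
We will prove that every element of \(S\) is a counterexample to
Wiles's Theorem. We relate this to the theory of isomorphisms
\(\phi: \g \to \g\) where \(\g\) is a Lie algebra.

\end{document}
```

If `latex` runs on this file without errors, the body between \begin{document} and \end{document} is what you submit.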
Uncountably many planar embeddings of unimodal inverse limit spaces 1. Faculty of Electrical Engineering and Computing, Unska 3, 10000 Zagreb, Croatia 2. Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria For a point $x$ in the inverse limit space $X$ with a single unimodal bonding map we construct, with the use of symbolic dynamics, a planar embedding such that $x$ is accessible. It follows that there are uncountably many non-equivalent planar embeddings of $X$. Mathematics Subject Classification: Primary: 37B10; Secondary: 37B45, 37E05, 54C25, 54H20. Citation: Ana Anušić, Henk Bruin, Jernej Činč. Uncountably many planar embeddings of unimodal inverse limit spaces. Discrete & Continuous Dynamical Systems - A, 2017, 37 (5): 2285-2300. doi: 10.3934/dcds.2017100
In free space, $\rho=0$ and $J=0$, so there are no electromagnetic sources/sinks. Maxwell's equations thus reduce to: $\nabla\cdot E = 0$ $\nabla\cdot B = 0$ $\nabla\times E = -\frac{\partial B}{\partial t}$ $\nabla\times B = \mu_0\epsilon_0 \frac{\partial E}{\partial t} $ Suppose I was writing a simple simulation to visualize the electromagnetic field in free space. I have seen people talk about waves propagating in free space, and I know that there is no such thing as electromagnetic waves being created from nothing -- it is usually assumed that such electromagnetic waves propagating in free space are plane waves that originated in a charged source extremely far away. However, when I actually implement the "oscillation" in the EM field, where does that oscillation come from -- speaking practically from a coding point of view? Do you just hardwire, e.g., a sinusoidal source at the location you are interested in probing the EM field? And if there were no such "magical" waves propagating in free space, would the EM field just remain smooth, without any oscillations, vibrations, sinusoids, etc.? In other words, can you have a completely stationary electromagnetic field, or would the last two of Maxwell's equations above prevent such stationary EM fields? But then what, in free space, would cause the initial change in the electric or magnetic field to get the oscillations going? To put it one last way: suppose I wrote a simulation involving the 4 Maxwell's equations above (free space). Would the EM field be stationary for all time, and the only way a propagating wave would appear is if I perturbed, say, the electric field which activated a never-ending loop of the curl equations? So if the initial values of E and B were both 0 in my simulation, then they would stay 0 for all time. But if one or both initial values of E and B were non-zero, then the curl equations would be "activated" and result in a never-ending loop of oscillations?
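A minimal 1D finite-difference (FDTD/Yee-style) sketch illustrates exactly the two cases asked about: fields that start identically zero stay zero forever, while a hand-placed initial perturbation of E is carried along indefinitely by the two curl equations. Grid size, step counts, and the normalized units ($c = \Delta x = \Delta t = 1$) are my own illustrative choices:

```python
def fdtd_1d(Ez, Hy, steps):
    """Advance the source-free 1D Maxwell curl equations on a staggered grid.
    Normalized units: c = 1, dx = dt = 1 (the 'magic' time step)."""
    Ez, Hy = Ez[:], Hy[:]
    for _ in range(steps):
        for i in range(len(Hy) - 1):      # update H from the curl of E
            Hy[i] += Ez[i + 1] - Ez[i]
        for i in range(1, len(Ez)):       # update E from the curl of H
            Ez[i] += Hy[i] - Hy[i - 1]
    return Ez, Hy

n = 200
Ez0 = [0.0] * n
Hy0 = [0.0] * n

# case 1: all-zero initial data stays identically zero -- the curl
# equations have nothing to act on, so no wave appears from nothing
Ez, Hy = fdtd_1d(Ez0, Hy0, 100)
print(all(v == 0.0 for v in Ez + Hy))  # → True

# case 2: "hardwire" one initial perturbation of E; the coupled curl
# updates then keep the disturbance propagating with no further source
Ez0[100] = 1.0
Ez, Hy = fdtd_1d(Ez0, Hy0, 50)
print(any(v != 0.0 for v in Ez + Hy))  # → True
```

So yes: in a free-space simulation the only ways to get waves are non-zero initial data or a hardwired source term, and zero initial fields remain zero for all time.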
Question: A pendulum 2.20 m long is released (from rest) at an angle {eq}\theta{/eq} = 30.0 degrees. Determine the tension in the cord at {eq}\theta{/eq} = 15 degrees. Conservation of Energy: The principle of conservation of energy states that for a closed system, the energy before and after an interaction is the same. This means the kinetic energy will be converted solely to potential energy and vice versa. This phenomenon is independent of the path of the process. Answer and Explanation: Between the release angle of 30 degrees and the angle of 15 degrees, the bob falls through a height {eq}h = l(\cos 15^\circ - \cos 30^\circ){/eq}. Using conservation of energy to solve for the speed, we write: {eq}\frac{1}{2}mv^2 = mgl(\cos 15^\circ - \cos 30^\circ){/eq}, where: m is the mass, v is the speed, g is gravitational acceleration on Earth, and l is the length of the cord. {eq}\frac{1}{2}v^2 = (9.8)(2.2)(\cos 15^\circ - \cos 30^\circ) \\ v = 2.08 \ \text{m/s}{/eq} Applying Newton's second law along the cord at 15 degrees, the tension supplies the centripetal force plus the radial component of gravity: {eq}T = m\left(\frac{v^2}{l} + g\cos 15^\circ\right){/eq} Inserting the values of the available parameters, we write: {eq}T = m\left(\frac{(2.08)^2}{2.2} + (9.8)\cos 15^\circ\right) \\ \boxed{T = 11.4m \ \text{N}}{/eq}
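A quick numeric check of the energy bookkeeping (note the relevant height drop is between the release angle 30° and the angle of interest 15°, i.e. $h = l(\cos 15^\circ - \cos 30^\circ)$):

```python
from math import cos, radians, sqrt

g, l = 9.8, 2.20                          # m/s^2, m
th0, th = radians(30.0), radians(15.0)    # release angle, angle of interest

# energy conservation from 30 deg (at rest) down to 15 deg
v2 = 2 * g * l * (cos(th) - cos(th0))     # v^2 at 15 degrees
v = sqrt(v2)

# Newton's second law along the cord: tension = centripetal term + radial gravity
T_per_m = v2 / l + g * cos(th)            # tension per unit mass (N/kg)
print(round(v, 2), round(T_per_m, 1))     # → 2.08 11.4
```

With the mass unspecified, the tension comes out as roughly 11.4m newtons.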
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two distinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector be changed?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than $2$ or $1/2$.

Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$.

Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight.

Hmm... I wonder if you can argue via the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot assume that for the field you are constructing, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$. For example, writing $x = ac+bd\delta$ and $y = bc+ad$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$.

I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D

Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of.

One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
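A quick sanity check of that associativity claim, running the quoted multiplication rule on exact rationals (a sketch; the choice $\delta = 5$ and the helper name `mult` are mine, not from the chat):

```python
from fractions import Fraction
from itertools import product

DELTA = Fraction(5)  # arbitrary non-square rational, chosen just for this check

def mult(p, q):
    """Multiply (a + b*sqrt(DELTA)) and (c + d*sqrt(DELTA)) by the quoted rule,
    representing each element as a coefficient pair (a, b)."""
    a, b = p
    c, d = q
    return (a*c + b*d*DELTA, b*c + a*d)

# Spot-check associativity on all triples from a small sample of elements
samples = [(Fraction(1), Fraction(2)),
           (Fraction(-3, 2), Fraction(1, 3)),
           (Fraction(0), Fraction(7))]
for x, y, z in product(samples, repeat=3):
    assert mult(mult(x, y), z) == mult(x, mult(y, z))
print("associativity holds on all sampled triples")
```

Of course this is only a spot check on finitely many triples, not a proof, but it catches slips in the coefficient bookkeeping before one commits to the symbol-pushing.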
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...)

Extensions is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible field.

It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field?

Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement.

Well, take ZFC as an example. CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or that derive CH, so if your set of axioms contains those, then you can decide the truth value of CH in that system.

@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wanted to show that it is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed.

Put it another way: an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?

If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples escaping every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book.

The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...

Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem. Hmm...

By the fundamental theorem of algebra, every monic complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, minimising $|P(s)|$ will be given as follows:

The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.

In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$

Do these still exist if the axiom of infinity is blown up? Hmmm...
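As an aside, the 0/1 variant of the knapsack problem quoted above has a standard dynamic-programming sketch (the quote's "number of each item" phrasing is the unbounded variant, whose recurrence is similar); this is a generic illustration, not tied to the $\min_P |P(s)|$ discussion:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack: maximum total value with total weight <= capacity."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # → 220 (take the last two items)
```

The table `best[c]` holds the best value achievable with capacity `c` using the items seen so far, giving $O(n \cdot \text{capacity})$ time.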
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, so transcendental numbers can be constructed in a finitist framework.

There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle nor the mean value theorem needs the axiom of choice.

Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure.

Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment.

typo: neither Rolle nor the mean value theorem needs the axiom of choice nor an infinite set

> are there palindromes such that the explosion of palindromes is a palindrome

nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
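The finitist partial sums discussed above can be computed exactly for concrete $b$ and $M$; here is a small sketch (the function name is mine) with $b = 10$, i.e. the classical Liouville constant:

```python
from fractions import Fraction
import math

def liouville_partial_sum(b, M):
    """Exact partial sum sum_{k=1}^M 1 / b^(k!) as a rational number."""
    return sum(Fraction(1, b ** math.factorial(k)) for k in range(1, M + 1))

# The sums increase monotonically and settle digit by digit:
for M in range(1, 4):
    print(M, liouville_partial_sum(10, M))
```

Each step only adds a term $1/b^{M!}$, so the digits already produced never change — which is exactly the "potential infinity" flavor of the construction: any finite prefix of the decimal expansion is reached after finitely many steps.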
Let $M$ be a non-empty subset of a normed space $X$, and let $M^a$ denote the subspace of the dual space $X'$ that consists of all those bounded linear functionals that vanish at each point of set $M$. Now here's Prob. 14, Sec. 2.10 in Introductory Functional Analysis With Applications by Erwin Kreyszig: If $M$ is an $m$-dimensional subspace of an $n$-dimensional normed space $X$, show that $M^a$ is an ($n-m$)-dimensional subspace of $X'$. Formulate this as a theorem about solutions of a system of linear equations. My effort: Since $X$ is finite-dimensional, every linear functional on $X$ is bounded; so we can write $X' = X^*$. Let $\{e_1, \ldots, e_m \}$ be a basis for $M$; extend it to a basis $\{ e_1, \ldots, e_m, \ldots, e_n \}$ for $X$. Then each element of $X$ has a unique representation as a linear combination of the $e_j$s. Suppose that $x \in X$ has the unique representation $$x = \sum_{j=1}^n \xi_j e_j.$$ Then, for any $f \in X'$, we have $$f(x) = \sum_{j=1}^n \xi_j f(e_j).$$ Now, for each $j= 1, \ldots, n$, let $f_j \in X'$ be defined as $$f_j(x) = \xi_j.$$ Then we can write $$f(x) = \sum_{j=1}^n f(e_j) f_j(x).$$ So $$f = \sum_{j=1}^n \alpha_j f_j, \ \mbox{ where } \ \alpha_j \colon= f(e_j) \ \mbox{ for each } \ j= 1, \ldots, n.$$ It can also be shown that the set $\{ f_1, \ldots, f_n \}$ is linearly independent and therefore a basis for $X^* = X'$. Now suppose that $f \in M^a$. Then $$ f(e_j) = 0 \ \mbox{ for each } \ j= 1, \ldots, m.$$ So $$f(x) = \sum_{j=m+1}^n \xi_j f(e_j) = \sum_{j=m+1}^n \alpha_j f_j(x).$$ Thus each $f \in M^a$ can be written as $$f = \sum_{j=m+1}^n \alpha_j f_j.$$ Moreover, the set $\{f_{m+1}, \ldots, f_n \}$, being a subset of a linearly independent set, is also linearly independent and hence forms a basis for $M^a$. So $M^a$ has dimension $n-m$. Is the above proof correct? Now it is my feeling that this result yields the following result: Let $m < n$.
Then any system of $m$ independent homogeneous simultaneous linear equations in $n$ unknowns (with real or complex numbers as coefficients) has exactly $n-m$ linearly independent solutions, i.e. the solution space has dimension $n-m$. Is my conclusion correct? But I'm not exactly sure how to relate the above formulation to this conclusion.
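The linear-system reading of the result can at least be checked numerically: for $m$ independent homogeneous equations in $n$ unknowns, the solution space has dimension $n - m$. A small sketch with exact rational Gaussian elimination (the helper names are mine, not from the problem):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a rational matrix via Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(A[0]) if A else 0):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# m = 2 independent homogeneous equations in n = 4 unknowns
system = [[1, 0, 2, -1],
          [0, 1, 1, 3]]
m, n = rank(system), 4
print(n - m)  # dimension of the solution space: 2
```

Here the rank plays the role of $m$ (the number of genuinely independent equations), matching the rank-nullity theorem behind the annihilator statement.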
Portfolio Optimization

Introduction

We have \(n\) assets or stocks in our portfolio and must determine the amount of money to invest in each. Let \(w_i\) denote the fraction of our budget invested in asset \(i = 1,\ldots,n\), and let \(r_i\) be the return (i.e., the fractional change in price) over the period of interest. We model returns as a random vector \(r \in {\mathbf R}^n\) with known mean \({\mathop{\bf E{}}}[r] = \mu\) and covariance \({\mathop{\bf Var{}}}(r) = \Sigma\). Thus, given a portfolio \(w \in {\mathbf R}^n\), the overall return is \(R = r^Tw\). Portfolio optimization involves a trade-off between the expected return \({\mathop{\bf E{}}}[R] = \mu^Tw\) and the associated risk, which we take as the return variance \({\mathop{\bf Var{}}}(R) = w^T\Sigma w\). Initially, we consider only long portfolios, so our problem is \[ \begin{array}{ll} \underset{w}{\mbox{maximize}} & \mu^Tw - \gamma w^T\Sigma w \\ \mbox{subject to} & w \geq 0, \quad \sum_{i=1}^n w_i = 1 \end{array} \] where the objective is the risk-adjusted return and \(\gamma > 0\) is a risk aversion parameter.

Example

We construct the risk-return trade-off curve for \(n = 10\) assets and \(\mu\) and \(\Sigma^{1/2}\) drawn from a standard normal distribution.
```r
## Problem data
set.seed(10)
n <- 10
SAMPLES <- 100
mu <- matrix(abs(rnorm(n)), nrow = n)
Sigma <- matrix(rnorm(n^2), nrow = n, ncol = n)
Sigma <- t(Sigma) %*% Sigma

## Form problem
w <- Variable(n)
ret <- t(mu) %*% w
risk <- quad_form(w, Sigma)
constraints <- list(w >= 0, sum(w) == 1)

## Risk aversion parameters
gammas <- 10^seq(-2, 3, length.out = SAMPLES)
ret_data <- rep(0, SAMPLES)
risk_data <- rep(0, SAMPLES)
w_data <- matrix(0, nrow = SAMPLES, ncol = n)

## Compute trade-off curve
for(i in seq_along(gammas)) {
    gamma <- gammas[i]
    objective <- ret - gamma * risk
    prob <- Problem(Maximize(objective), constraints)
    result <- solve(prob)

    ## Evaluate risk/return for current solution
    risk_data[i] <- result$getValue(sqrt(risk))
    ret_data[i] <- result$getValue(ret)
    w_data[i,] <- result$getValue(w)
}
```

Note how we can obtain the risk and return by directly evaluating the value of the separate expressions:

```r
result$getValue(risk)
result$getValue(ret)
```

The trade-off curve is shown below. The \(x\)-axis represents the standard deviation of the return. Red points indicate the result from investing the entire budget in a single asset. As \(\gamma\) increases, our portfolio becomes more diverse, reducing risk but also yielding a lower return.

```r
cbPalette <- brewer.pal(n = 10, name = "Paired")
p1 <- ggplot() +
    geom_line(mapping = aes(x = risk_data, y = ret_data), color = "blue") +
    geom_point(mapping = aes(x = sqrt(diag(Sigma)), y = mu), color = "red")
markers_on <- c(10, 20, 30, 40)
nstr <- sprintf("gamma == %.2f", gammas[markers_on])
df <- data.frame(markers = markers_on, x = risk_data[markers_on],
                 y = ret_data[markers_on], labels = nstr)
p1 + geom_point(data = df, mapping = aes(x = x, y = y), color = "black") +
    annotate("text", x = df$x + 0.2, y = df$y - 0.05, label = df$labels, parse = TRUE) +
    labs(x = "Risk (Standard Deviation)", y = "Return")
```

We can also plot the fraction of budget invested in each asset.
```r
w_df <- data.frame(paste0("grp", seq_len(ncol(w_data))), t(w_data[markers_on,]))
names(w_df) <- c("grp", sprintf("gamma == %.2f", gammas[markers_on]))
tidyW <- gather(w_df, key = "gamma", value = "fraction", names(w_df)[-1], factor_key = TRUE)
ggplot(data = tidyW, mapping = aes(x = gamma, y = fraction)) +
    geom_bar(mapping = aes(fill = grp), stat = "identity") +
    scale_x_discrete(labels = parse(text = levels(tidyW$gamma))) +
    scale_fill_manual(values = cbPalette) +
    guides(fill = FALSE) +
    labs(x = "Risk Aversion", y = "Fraction of Budget")
```

Discussion

Many variations on the classical portfolio problem exist. For instance, we could allow long and short positions, but impose a leverage limit \(\|w\|_1 \leq L^{max}\) by changing the constraints to

```r
constr <- list(p_norm(w,1) <= Lmax, sum(w) == 1)
```

An alternative is to set a lower bound on the return and minimize just the risk. To account for transaction costs, we could add a term to the objective that penalizes deviations of \(w\) from the previous portfolio. These extensions and more are described in Boyd et al. (2017). The key takeaway is that all of these convex problems can be easily solved in CVXR with just a few alterations to the code above.
Session Info

```r
sessionInfo()
```

```
## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin18.5.0 (64-bit)
## Running under: macOS Mojave 10.14.5
##
## Matrix products: default
## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats     graphics  grDevices datasets  utils     methods   base
##
## other attached packages:
## [1] tidyr_0.8.3        RColorBrewer_1.1-2 ggplot2_3.1.1
## [4] CVXR_0.99-6
##
## loaded via a namespace (and not attached):
## [1]  gmp_0.5-13.5      Rcpp_1.0.1        compiler_3.6.0
## [4]  pillar_1.4.1      plyr_1.8.4        R.methodsS3_1.7.1
## [7]  R.utils_2.8.0     tools_3.6.0       digest_0.6.19
## [10] bit_1.1-14        evaluate_0.14     tibble_2.1.2
## [13] gtable_0.3.0      lattice_0.20-38   pkgconfig_2.0.2
## [16] rlang_0.3.4       Matrix_1.2-17     yaml_2.2.0
## [19] blogdown_0.12.1   xfun_0.7          withr_2.1.2
## [22] dplyr_0.8.1       Rmpfr_0.7-2       ECOSolveR_0.5.2
## [25] stringr_1.4.0     knitr_1.23        tidyselect_0.2.5
## [28] bit64_0.9-7       grid_3.6.0        glue_1.3.1
## [31] R6_2.4.0          rmarkdown_1.13    bookdown_0.11
## [34] purrr_0.3.2       magrittr_1.5      scales_1.0.0
## [37] htmltools_0.3.6   scs_1.2-3         assertthat_0.2.1
## [40] colorspace_1.4-1  labeling_0.3      stringi_1.4.3
## [43] lazyeval_0.2.2    munsell_0.5.0     crayon_1.3.4
## [46] R.oo_1.22.0
```

References

Boyd, S., E. Busseti, S. Diamond, R. N. Kahn, K. Koh, P. Nystrup, and J. Speth. 2017. "Multi-Period Trading via Convex Optimization." Foundations and Trends in Optimization.

Lobo, M. S., M. Fazel, and S. Boyd. 2007. "Portfolio Optimization with Linear and Fixed Transaction Costs." Annals of Operations Research 152 (1): 341–65.

Markowitz, H. M. 1952. "Portfolio Selection." Journal of Finance 7 (1): 77–91.

Roy, A. D. 1952. "Safety First and the Holding of Assets." Econometrica 20 (3): 431–49.
A fraction represents a part of a whole. For example, it tells how many slices of a pizza are left or eaten with respect to the whole pizza, like one-half or three-quarters. Generally, a fraction has two parts, i.e. the numerator and the denominator. A decimal fraction is a fraction whose denominator is a power of 10, i.e. \(10^1, 10^2, 10^3\), etc. For example: 32/10, 56/100, 325/1000. It can be expressed as a decimal, like 32/10 = 3.2 and 56/100 = 0.56. We can perform all arithmetic operations on fractions by expressing them as decimals. Let's practice some word problems on decimal fractions using various arithmetical operations.

Example 1: A barrel has 56.32 liters capacity. If Supriya used 21.19 liters, how much water is left in the barrel?

Solution: Given, capacity of the barrel = 56.32 liters. Amount of water used = 21.19 liters. Amount of water left in the barrel = 56.32 – 21.19 = 35.13 liters.

Example 2: Megha bought 12 bags of wheat flour, each weighing 4563/100 kg. What will be the total weight?

Solution: Total no. of bags = 12. Weight of each bag = 4563/100 kg = 45.63 kg. Total weight = 45.63 x 12 = 547.56 kg.

Example 3: If the circumference of a circle is 16.09 cm, what will be its diameter (π = 3.14)?

Solution: Given, circumference = 16.09 cm. Circumference of a circle, C = 2πr
\(\Rightarrow 16.09 = 2 \pi r\)
\(\Rightarrow 16.09 = 2 \times 3.14 \times r\)
\(\Rightarrow r = \frac{16.09}{6.28} = \frac{1609}{628}\)
\(\Rightarrow r \approx 2.56 \)
Therefore Diameter = 2r = 2 x 2.56 = 5.12 cm.

Example 4: If the product of 38.46 and another number is 658.17, what is the other number?

Solution: Given, one number = 38.46. Product of the two numbers = 658.17. The other number = 658.17 ÷ 38.46 = \(\frac{65817}{100}\div \frac{3846}{100} = \frac{65817}{3846} \approx 17.11\).

Example 5: Rakesh bought a new bike. He went on a road trip of 165.9 km on the bike. After a week he went for another trip of 102.04 km. What will be the reading on the meter reader of the bike?
Solution: Given, distance travelled on the first trip = 165.9 km. Distance travelled on the second trip = 102.04 km. Total distance travelled = 165.9 + 102.04 = 267.94 km.

To solve more problems on the topic, download BYJU'S – The Learning App from the Google Play Store and watch interactive videos. Also, take free tests to practice for exams. To study other topics, visit www.byjus.com and browse among thousands of interesting articles.
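The worked examples above can be double-checked with exact decimal arithmetic, for instance using Python's `decimal` module (a side note for verification, not part of the original lesson):

```python
from decimal import Decimal

# Example 1: water left in the barrel
print(Decimal("56.32") - Decimal("21.19"))   # 35.13 liters
# Example 2: total weight of 12 bags of 45.63 kg each
print(Decimal("45.63") * 12)                 # 547.56 kg
# Example 5: total distance on the meter reader
print(Decimal("165.9") + Decimal("102.04"))  # 267.94 km
```

Unlike binary floats, `Decimal` represents these quantities exactly, which is why the answers match the hand-worked fractions with denominators 10 and 100.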
ISSN: 1078-0947, eISSN: 1553-5231

Discrete & Continuous Dynamical Systems - A, February 2009, Volume 25, Issue 1. Special Issue Dedicated to Professor Masayasu Mimura on the occasion of his 65th birthday.

Abstract: Professor Masayasu Mimura, known to his friends as "Mayan", was born on Shikoku Island, Japan, on October 11th, 1941. He studied at Kyoto University, where he prepared his doctoral thesis under the supervision of Professor Masaya Yamaguti. Thanks to his advisor's acquaintance with a large circle of scientists, Mayan could keep close relations with a group of biologists and biophysicists - Ei Teramoto and Nanako Shigesada, among others - since his very early days.

Abstract: We study some retention phenomena on the free boundaries associated to some elliptic and parabolic problems of reaction-diffusion type. This is the case, for instance, of the waiting time phenomenon for solutions of suitable parabolic equations. We find sufficient conditions in order to have a discrete version of the waiting time property (the so-called nondiffusion of the support) for solutions of the associated family of elliptic equations and prove how to pass to the limit in order to get this property for the solutions of the parabolic equation.

Abstract: This work is the continuation of our previous paper [6]. There, we dealt with the reaction-diffusion equation $\partial_t u=\Delta u+f(x-cte,u),\qquad t>0,\quad x\in\R^N,$ where $e\in S^{N-1}$ and $c>0$ are given and $f(x,s)$ satisfies some usual assumptions in population dynamics, together with $f_s(x,0)<0$ for $|x|$ large. The interest for such an equation comes from an ecological model introduced in [1] describing the effects of global warming on biological species.
In [6], we proved that existence and uniqueness of travelling wave solutions of the type $u(x,t)=U(x-cte)$ and the large time behaviour of solutions with arbitrary nonnegative bounded initial datum depend on the sign of the generalized principal eigenvalue in $\R^N$ of an associated linear operator. Here, we establish analogous results for the Neumann problem in domains which are asymptotically cylindrical, as well as for the problem in the whole space with $f$ periodic in some space variables, orthogonal to the direction of the shift $e$. The $L^1$ convergence of the solution $u(t,x)$ as $t\to\infty$ is established next. In this paper, we also show that a bifurcation from the zero solution takes place as the principal eigenvalue crosses $0$. We are able to describe the shape of solutions close to extinction, thus answering a question raised by M.~Mimura. These two results are new even in the framework considered in [6]. Another type of problem is obtained by adding to the previous one a term $g(x-c'te,u)$ periodic in $x$ in the direction $e$. Such a model arises when considering environmental change on two different scales. Lastly, we also solve the case of an equation $\partial_t u=\Delta u+f(t,x-cte,u),$ when $f(t,x,s)$ is periodic in $t$. This for instance represents the seasonal dependence of $f$. In both cases, we obtain a necessary and sufficient condition for the existence, uniqueness and stability of pulsating travelling waves, which are solutions with a profile which is periodic in time. Abstract: In this paper we analyze a class of phase field models for the dynamics of phase transitions which extend the well-known Caginalp and Penrose-Fife phase field models. We prove the existence and uniqueness of the solution of a corresponding initial boundary value problem and deduce further regularity of the solution by exploiting the so-called regularizing effect.
Finally we study the long time behavior of the solution and show that it converges algebraically fast to a stationary solution as $t$ tends to infinity. Abstract: In this paper we study the effects of periodically varying heterogeneous media on the speed of traveling waves in reaction-diffusion equations. Under suitable conditions the traveling wave speed of the non-homogenized problem can be calculated in terms of the speed of the homogenized problem. We discuss a variety of examples and focus especially on the influence of the symmetric and antisymmetric part of the diffusion matrix on the wave speed. Abstract: In the two-dimensional Keller-Segel model for chemotaxis of biological cells, blow-up of solutions in finite time occurs if the total mass is above a critical value. Blow-up is a concentration event, where point aggregates are created. In this work global existence of generalized solutions is proven, allowing for measure valued densities. This extends the solution concept after blow-up. The existence result is an application of a theory developed by Poupaud, where the cell distribution is characterized by an additional defect measure, which vanishes for smooth cell densities. The global solutions are constructed as limits of solutions of a regularized problem. A strong formulation is derived under the assumption that the generalized solution consists of a smooth part and a number of smoothly varying point aggregates. Comparison with earlier formal asymptotic results shows that the choice of a solution concept after blow-up is not unique and depends on the type of regularization. This work is also concerned with local density profiles close to point aggregates. An equation for these profiles is derived by passing to the limit in a rescaled version of the regularized model. Solvability of the profile equation can also be obtained by minimizing a free energy functional. 
Abstract: We study the long-time behavior of positive solutions to the problem $u_t-\Delta u=a u-b(x)u^p \mbox{ in } (0,\infty)\times \Omega, \quad Bu=0 \mbox{ on } (0,\infty)\times \partial \Omega,$ where $a$ is a real parameter, $b\geq 0$ is in $C^\mu(\bar{\Omega})$ and $p>1$ is a constant, $\Omega$ is a $C^{2+\mu}$ bounded domain in $R^N$ ($N\geq 2$), and the boundary operator $B$ is of the standard Dirichlet, Neumann or Robin type. Under the assumption that $\overline\Omega_0:=\{x\in\Omega: b(x)=0\}$ has non-empty interior, is connected, has smooth boundary and is contained in $\Omega$, it is shown in [8] that when $a\geq \lambda_1^D(\Omega_0)$, for any fixed $x\in \overline{\Omega}_0$, $\overline{\lim}_{t\to\infty}u(t,x)=\infty$, and for any fixed $x\in \overline{\Omega}\setminus \overline{\Omega}_0$, $$\overline{\lim}_{t\to\infty}u(t,x)\leq \overline{U}_a(x), \quad \underline{\lim}_{t\to\infty}u(t,x)\geq \underline{U}_a(x),$$ where $\underline{U}_a$ and $\overline{U}_a$ denote respectively the minimal and maximal positive solutions of the boundary blow-up problem $-\Delta u=au-b(x)u^p \mbox{ in } \Omega\setminus\overline{\Omega}_0, \quad Bu=0 \mbox{ on } \partial \Omega, \quad u=\infty \mbox{ on } \partial \Omega_0.$ The main purpose of this paper is to show that, under the above assumptions, $\lim_{t\to\infty} u(t,x)=\underline U_a(x)$ for all $x\in \overline\Omega\setminus \overline\Omega_0$. This proves a conjecture stated in [8]. Some extensions of this result are also discussed. Abstract: We consider fully nonlinear weakly coupled systems of parabolic equations on a bounded reflectionally symmetric domain. Assuming the system is cooperative, we prove the asymptotic symmetry of positive bounded solutions. To facilitate an application of the method of moving hyperplanes, we derive Harnack-type estimates for linear cooperative parabolic systems.
Abstract: We start from a basic model for the transport of charged species in heterostructures containing the mechanisms of diffusion, drift and reactions in the domain and at its boundary. Considering limit cases of partly fast kinetics, we derive reduced models. This reduction can be interpreted as a kind of projection scheme for the weak formulation of the basic electro-reaction-diffusion system. We verify assertions concerning invariants and steady states and prove the monotone and exponential decay of the free energy along solutions to the reduced problem and to its fully implicit discrete-time version by means of the results for the basic problem. Moreover, we make a comparison of prolongated quantities with the solutions to the basic model. Abstract: In this work a mathematical model for blood coagulation induced by an activator source is presented. Blood coagulation is viewed as a process resulting in fibrin polymerization, which is considered the first step towards thrombi formation. We derive and study a system for the first moments of the polymer concentrations and the activating variables. Analysis of this last model allows us to identify parameter regions which could lead to thrombi formation, both in homeostatic and pathological situations. Abstract: In this paper we study an existing mathematical model of tumour encapsulation comprising two reaction-convection-diffusion equations for tumour-cell and connective-tissue densities. The existence of travelling-wave solutions has previously been shown in certain parameter regimes, corresponding to a connective tissue wave which moves in concert with an advancing front of the tumour cells. We extend these results by constructing novel classes of travelling waves for parameter regimes not previously treated asymptotically; we term these singular because they do not correspond to regular trajectories of the corresponding ODE system.
Associated with this singularity is a number of further (inner) asymptotic regions in which the dynamics is not governed by the travelling-wave formulation, but which we also characterise. Abstract: We consider a curvature flow in heterogeneous media in the plane: $V= a(x,y) \kappa + b$, where for a plane curve, $V$ denotes its normal velocity, $\kappa$ denotes its curvature, $b$ is a constant and $a(x,y)$ is a positive function, periodic in $y$. We study periodic traveling waves which travel in the $y$-direction with given average speed $c \geq 0$. Four different types of traveling waves are given, whose profiles are straight lines, "V"-like curves, cup-like curves and cap-like curves, respectively. We also show that, as $(b,c)\rightarrow (0,0)$, the profiles of the traveling waves converge to straight lines. These results are connected with spatially heterogeneous versions of Bernshteĭn's Problem and De Giorgi's Conjecture, which are proposed at last. Abstract: The long time behavior for the degenerate Cahn-Hilliard equation [4, 5, 9], $$u_t=\nabla \cdot \left( (1-u^2) \nabla \left[ \frac{\Theta}{2} \{ \ln(1+u)-\ln(1-u)\} - \alpha u - \Delta u \right] \right),$$ is characterized by the growth of domains in which $u(x,t) \approx u_{\pm},$ where $u_\pm$ denote the "equilibrium phases;" this process is known as coarsening. The degree of coarsening can be quantified in terms of a characteristic length scale, $l(t)$, where $l(t)$ is prescribed via a Liapunov functional and a $W^{1, \infty}$ predual norm of $u(x,t).$ In this paper, we prove upper bounds on $l(t)$ for all temperatures $\Theta \in (0, \Theta_c),$ where $\Theta_c$ denotes the "critical temperature," and for arbitrary mean concentrations, $\bar{u}\in (u_{-}, u_{+}).$ Our results generalize the upper bounds obtained by Kohn & Otto [14]. In particular, we demonstrate that transitions may take place in the nature of the coarsening bounds during the coarsening process.
Abstract: The bifurcation structure of the stationary solutions to the Swift-Hohenberg equation with a symmetry-breaking boundary condition is studied. Namely, an SO(2)-breaking perturbation is added to the Neumann or Dirichlet boundary conditions. As a result, half of the secondary bifurcation points change their character by the imperfection of pitchfork bifurcations. Abstract: We solve and characterize the Lagrange multipliers of a reaction-diffusion system in the Gibbs simplex of $\R^{N+1}$ by considering strong solutions of a system of parabolic variational inequalities in $\R^N$. Exploring properties of the two-obstacles evolution problem, we obtain and approximate an $N$-system involving the characteristic functions of the saturated and/or degenerated phases in the nonlinear reaction terms. We also show continuous dependence results and establish sufficient conditions of non-degeneracy for the stability of those phase subregions. Abstract: In this paper, some properties of the minimal speeds of pulsating Fisher-KPP fronts in periodic environments are established. The limit of the speeds at the homogenization limit is proved rigorously. Near this limit, generically, the fronts move faster when the spatial period is enlarged, but the speeds vary only at the second order. The dependence of the speeds on habitat fragmentation is also analyzed in the case of the patch model. Abstract: We consider a simple mathematical model of the distribution of morphogens (signaling molecules responsible for the differentiation of cells and the creation of tissue patterns) proposed by Lander, Nie and Wan in 2002. The model consists of a system of two equations: a PDE of parabolic type modeling the distribution of free morphogens with a dynamic boundary condition, and an ODE describing the evolution of bound receptors. Three biological processes are taken into account: diffusion, degradation and reversible binding.
We prove existence and uniqueness of solutions and describe their asymptotic behavior. Abstract: We consider the following Gierer-Meinhardt system with a precursor $\mu (x)$ for the activator $A$ in $\mathbb{R}^1$: $$A_t=\varepsilon^2 A'' - \mu (x) A+\frac{A^2}{H} \mbox{ in } (-1, 1),$$ $$\tau H_t=D H''-H+ A^2 \mbox{ in } (-1, 1),$$ $$A'(-1)= A'(1)= H'(-1) = H'(1) =0.$$ Such an equation exhibits a typical Turing bifurcation of the second kind, i.e., homogeneous uniform steady states do not exist in the system. We establish the existence and stability of $N$-peaked steady states in terms of the precursor $\mu(x)$ and the diffusion coefficient $D$. It is shown that $\mu (x)$ plays an essential role in both the existence and stability of spiky patterns. In particular, we show that precursors can give rise to instability. This is a new effect which is not present in the homogeneous case.
This simple educational tutorial demonstrates some core concepts of supervised machine learning: what overfitting is and how validation helps to avoid it. In this demo we machine-learn a very simple "hidden law" – one period of a sine function – from noisy data using a very simple model – a 1D polynomial. The demo is inspired by Exercise 1.1 in Christopher Bishop's book "Pattern Recognition and Machine Learning", 2006. © Mantas Lukoševičius 2019 mantas.info. This work is licensed under a Creative Commons Attribution 4.0 International License.

%matplotlib inline
import numpy as np
import math
import matplotlib.pyplot as plt

The hidden law we are trying to learn is $y=f(x)=\sin(2\pi x)$ for $x \in [0,1]$.

hidden_law = np.vectorize(lambda x: math.sin(2*math.pi*x)) # made to run on vectors
x_all = np.arange(0,1,0.01)

Peek at how it looks. In real life this is not possible.

y_hidden = hidden_law(x_all)
plt.plot(x_all,y_hidden,':g');

Generate some noisy data (observations) from the hidden law. Yeah, the world is not perfect. This is all we will use to learn the hidden law.

np.random.seed(42) # to make repeatable
noiseLevel = 0.2
x = np.random.rand(20)
y = hidden_law(x) + noiseLevel*np.random.randn(20)

Split the data 10:10 for training and validation.

x_train = x[:10]; y_train = y[:10]
x_valid = x[10:]; y_valid = y[10:]

Plot the data.

plt.plot(x_all,y_hidden,':g')
plt.plot(x_train,y_train,'.')
plt.plot(x_valid,y_valid,'r.')
plt.legend(['Hidden law', 'Train. data', 'Valid. data']);

maxPolyDegree = 10
polyDegrees = range(maxPolyDegree)

Learn all the different polynomials. A learned polynomial is defined by its parameter (weight) vector $\mathbf{w}$.

polys = []
for polyDegree in polyDegrees:
    polys.append( np.polyfit(x_train,y_train,polyDegree) )

See the weights of the first three.
polys[:3]

[array([-0.21700967]), array([-2.1734858 , 0.91350014]), array([ 2.95148537, -5.04705926, 1.34462422])]

Compute the training and validation errors for each polynomial degree.

trainRMSEs = np.zeros(maxPolyDegree)
validRMSEs = np.zeros(maxPolyDegree)
for polyDegree in polyDegrees:
    y_train_p = np.polyval(polys[polyDegree],x_train)
    trainRMSEs[polyDegree] = np.sqrt(np.mean(np.square(y_train_p - y_train)))
    y_valid_p = np.polyval(polys[polyDegree],x_valid)
    validRMSEs[polyDegree] = np.sqrt(np.mean(np.square(y_valid_p - y_valid)))

Plot training and validation errors vs. polynomial degree.

plt.plot(polyDegrees,trainRMSEs,'b')
plt.plot(polyDegrees,validRMSEs,'r')
plt.axis((0,maxPolyDegree-1,0,1))
plt.legend(['Training','Validation'])
plt.xlabel('Polynomial degree')
plt.ylabel('RMSE');

We see that the 4th-order polynomial is the optimal one. Going to higher degrees we get overfitting: the validation error starts to increase from that point even as the training error continues to approach zero. The 9th-degree polynomial learns the training data perfectly (it goes through the 10 training points) but the validation error is huge:

validRMSEs[9]

1350.8736296423288

plt.figure(figsize=(15, 6))
for polyDegree in polyDegrees:
    plt.subplot( 2,5,polyDegree+1 )
    y_pol = np.polyval(polys[polyDegree],x_all)
    plt.plot(x_all,y_hidden,':g')
    plt.plot(x_train,y_train,'.')
    plt.plot(x_valid,y_valid,'r.')
    plt.plot(x_all,y_pol,'b')
    plt.title(polyDegree)
    #plt.axis((0,1,-1.5,1.5))

We see that indeed the polynomials of degrees 3-5 (-6), which have the lowest validation errors, give the closest match to the hidden law we are trying to model. Validation is a good way to select the model (meta-parameters) even if both training and validation data are not perfect. You can play with the parameters in the code, e.g., noiseLevel, and see how that affects the results.
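The degree selection shown in the plots can also be read off programmatically. A minimal self-contained sketch (it regenerates the same data with the same seed, so it does not depend on the notebook's state):

```python
import numpy as np

# Regenerate the notebook's data (same seed and noise level).
np.random.seed(42)
x = np.random.rand(20)
y = np.sin(2 * np.pi * x) + 0.2 * np.random.randn(20)
x_train, y_train = x[:10], y[:10]
x_valid, y_valid = x[10:], y[10:]

# Fit degrees 0..9 and collect validation RMSEs.
validRMSEs = []
for degree in range(10):
    w = np.polyfit(x_train, y_train, degree)
    resid = np.polyval(w, x_valid) - y_valid
    validRMSEs.append(float(np.sqrt(np.mean(resid ** 2))))

bestDegree = int(np.argmin(validRMSEs))  # degree with the lowest validation error
```

This is exactly the model-selection rule described above: pick the degree minimizing the validation error, not the training error (which only decreases with degree).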
This question is based on this one. Let $X=(X_1,X_2)$ be a bivariate random variable with the joint PDF $$f(x_1,x_2)=\frac{1}{2}I_{[0,4]}(x_1)\, I_{[0,\frac{1}{4}x_1]}(x_2)$$ First: I've asked whether this function is a valid PDF. The answer was that if I integrate $x_2$ from $0$ up to $\frac{x_1}{4}$ I can easily and clearly verify that it is a valid PDF. But now my new question is: is the upper bound, or the maximum value, of $x_2$ equal to $1$ or not? (Since the maximum value $x_1$ can take is $4$.) If the answer is yes, why can we not integrate $x_2$ up to $1$, which in turn would prove that it is not a valid PDF! $$\int_0^4 \int_0^1 0.5 \, dx_2 \, dx_1 = 2 \tag 1$$ Second: I want to find $P(2\leq X_1\leq 3;\frac{1}{2}\leq X_2 \leq \frac{3}{2})$ using the joint CDF, i.e. $$P(2\leq X_1\leq 3;\tfrac{1}{2}\leq X_2 \leq \tfrac{3}{2}) = F(3;\tfrac{3}{2}) -F(3;\tfrac{1}{2})-F(2;\tfrac{3}{2})+F(2;\tfrac{1}{2})$$ I have tried to find the CDF but unfortunately it seems that I am missing a very basic idea, either in probability theory or in calculus (or both). Here is my attempt. By definition $F(b_1,b_2)=P(X_1\leq b_1,X_2 \leq b_2)$. If $b_1 < 0$ and/or $b_2 < 0$, then $$F(b_1,b_2)=0$$ If $b_1 \in [0,4]$ and $b_2 \in [0,\frac{b_1}{4}]$ then $$F(b_1,b_2)=\int_0^{b_1} \int_0^{b_2} \frac{1}{2} \, dx_2 \, dx_1=\frac{1}{2}b_1b_2\tag 2$$ If $b_1 \in [0,4]$ and $b_2 > \frac{b_1}{4}$ then $$F(b_1,b_2)=\frac{1}{16}b_1^{2}$$ If $b_1 > 4$ and $b_2 \in [0,\frac{b_1}{4}]$ then $$F(b_1,b_2)=2b_2$$ Otherwise $F(b_1,b_2)=1$. There is definitely something wrong in this derivation of the CDF: suppose I have to find $P(X_1 \leq 3, X_2 \leq 0.6)$ using this derived CDF, then I will get $3.6 > 1$! Please can someone help me by answering my questions or explaining where I am wrong. Thanks a lot in advance
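For what it's worth, both integrals in question are easy to check numerically. A small sketch with plain midpoint Riemann sums (no special libraries), which integrates the density only where the indicator functions are nonzero, i.e. over $\{0 \le x_2 \le x_1/4\}$:

```python
import numpy as np

# density f(x1, x2) = 1/2 * 1{0 <= x1 <= 4} * 1{0 <= x2 <= x1/4}
def f(x1, x2):
    return 0.5 * ((0 <= x1) & (x1 <= 4)) * ((0 <= x2) & (x2 <= x1 / 4))

# midpoint Riemann sum over [0,4] x [0,1]; the x2 range [0,1] already
# contains the whole support since x1/4 <= 1 there
n = 1000
x1 = np.linspace(0, 4, n, endpoint=False) + 2.0 / n   # cell midpoints
x2 = np.linspace(0, 1, n, endpoint=False) + 0.5 / n
X1, X2 = np.meshgrid(x1, x2)
cell = (4.0 / n) * (1.0 / n)

total = np.sum(f(X1, X2)) * cell        # ~ 1, so f is a valid PDF
mask = (2 <= X1) & (X1 <= 3) & (0.5 <= X2) & (X2 <= 1.5)
prob = np.sum(f(X1, X2) * mask) * cell  # ~ 1/16
```

The first value confirms that integrating $x_2$ only up to $x_1/4$ (the support), not up to $1$, gives total mass $1$; the second agrees with the direct computation $\int_2^3 \frac{1}{2}\bigl(\frac{x_1}{4}-\frac{1}{2}\bigr)\,dx_1 = \frac{1}{16}$, which the four-corner CDF combination must also reproduce.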
Colloquia/Fall18

Mathematics Colloquium

All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated.

Spring 2018

- January 29 (Monday): Li Chao (Columbia), "Elliptic curves and Goldfeld's conjecture". Host: Jordan Ellenberg
- February 2 (Room: 911): Thomas Fai (Harvard), "The Lubricated Immersed Boundary Method". Hosts: Spagnolie, Smith
- February 5 (Monday, Room: 911): Alex Lubotzky (Hebrew University), "High dimensional expanders: From Ramanujan graphs to Ramanujan complexes". Hosts: Ellenberg, Gurevitch
- February 6 (Tuesday 2 pm, Room 911): Alex Lubotzky (Hebrew University), "Groups' approximation, stability and high dimensional expanders". Hosts: Ellenberg, Gurevitch
- February 9: Wes Pegden (CMU), "The fractal nature of the Abelian Sandpile". Host: Roch
- March 2: Aaron Bertram (University of Utah), "Stability in Algebraic Geometry". Host: Caldararu
- March 16 (Room: 911): Anne Gelb (Dartmouth), "Reducing the effects of bad data measurements using variance based weighted joint sparsity". Host: WIMAW
- April 5 (Thursday): John Baez (UC Riverside), "Monoidal categories of networks". Host: Craciun
- April 6: Edray Goins (Purdue), "Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups". Host: Melanie
- April 13: Jill Pipher (Brown), TBA. Host: WIMAW
- April 16 (Monday): Christine Berkesch Zamaere (University of Minnesota), TBA. Hosts: Erman, Sam
- April 25 (Wednesday): Hitoshi Ishii (Waseda University), Wasow lecture, TBA. Host: Tran
- May 4: Henry Cohn (Microsoft Research and MIT), TBA. Host: Ellenberg
Spring Abstracts

January 29: Li Chao (Columbia)
Title: Elliptic curves and Goldfeld's conjecture
Abstract: An elliptic curve is a plane curve defined by a cubic equation. Determining whether such an equation has infinitely many rational solutions has been a central problem in number theory for centuries, which led to the celebrated conjecture of Birch and Swinnerton-Dyer. Within a family of elliptic curves (such as the Mordell curve family y^2=x^3-d), a conjecture of Goldfeld further predicts that there should be infinitely many rational solutions exactly half of the time. We will start with a history of this problem, discuss our recent work (with D. Kriz) towards Goldfeld's conjecture and illustrate the key ideas and ingredients behind this new progress.

February 2: Thomas Fai (Harvard)
Title: The Lubricated Immersed Boundary Method
Abstract: Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method.
We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics.

February 5: Alex Lubotzky (Hebrew University)
Title: High dimensional expanders: From Ramanujan graphs to Ramanujan complexes
Abstract: Expander graphs in general, and Ramanujan graphs in particular, have played a major role in computer science in the last 5 decades and more recently also in pure math. The first explicit construction of bounded degree expanding graphs was given by Margulis in the early 70s. In the mid-80s Margulis and Lubotzky-Phillips-Sarnak provided Ramanujan graphs, which are optimal such expanders. In recent years a high dimensional theory of expanders is emerging. A notion of topological expanders was defined by Gromov in 2010, who proved that the complete d-dimensional simplicial complexes are such. He raised the basic question of the existence of such bounded degree complexes of dimension d>1. This question was answered recently affirmatively (by T. Kaufman, D. Kazhdan and A. Lubotzky for d=2 and by S. Evra and T. Kaufman for general d) by showing that the d-skeleton of (d+1)-dimensional Ramanujan complexes provides such topological expanders. We will describe these developments and the general area of high dimensional expanders.

February 6: Alex Lubotzky (Hebrew University)
Title: Groups' approximation, stability and high dimensional expanders
Abstract: Several well-known open questions, such as: are all groups sofic or hyperlinear?, have a common form: can all groups be approximated by asymptotic homomorphisms into the symmetric groups Sym(n) (in the sofic case) or the unitary groups U(n) (in the hyperlinear case)? In the case of U(n), the question can be asked with respect to different metrics and norms.
We answer, for the first time, one of these versions, showing that there exist finitely presented groups which are not approximated by U(n) with respect to the Frobenius (=L_2) norm. The strategy is via the notion of "stability": some higher dimensional cohomology vanishing phenomenon is proven to imply stability, and using high dimensional expanders, it is shown that some non-residually-finite groups (central extensions of some lattices in p-adic Lie groups) are Frobenius stable and hence cannot be Frobenius approximated. All notions will be explained. Joint work with M. De Chiffre, L. Glebsky and A. Thom.

February 9: Wes Pegden (CMU)
Title: The fractal nature of the Abelian Sandpile
Abstract: The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor. Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation). We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area, and discuss avenues of current and future research.

March 2: Aaron Bertram (Utah)
Title: Stability in Algebraic Geometry
Abstract: Stability was originally introduced in algebraic geometry in the context of finding a projective quotient space for the action of an algebraic group on a projective manifold. This, in turn, led in the 1960s to a notion of slope-stability for vector bundles on a Riemann surface, which was an important tool in the classification of vector bundles.
In the 1990s, mirror symmetry considerations led Michael Douglas to notions of stability for "D-branes" (on a higher-dimensional manifold) that corresponded to no previously known mathematical definition. We now understand each of these notions of stability as a distinct point of a complex "stability manifold" that is an important invariant of the (derived) category of complexes of vector bundles of a projective manifold. In this talk I want to give some examples to illustrate the various stabilities, and also to describe some current work in the area.

March 16: Anne Gelb (Dartmouth)
Title: Reducing the effects of bad data measurements using variance based weighted joint sparsity
Abstract: We introduce the variance based joint sparsity (VBJS) method for sparse signal recovery and image reconstruction from multiple measurement vectors. Joint sparsity techniques employing $\ell_{2,1}$ minimization are typically used, but the algorithm is computationally intensive and requires fine tuning of parameters. The VBJS method uses a weighted $\ell_1$ joint sparsity algorithm, where the weights depend on the pixel-wise variance. The VBJS method is accurate, robust, cost efficient and also reduces the effects of false data.

April 6: Edray Goins (Purdue)
Title: Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups
Abstract: A Belyĭ map [math] \beta: \mathbb P^1(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] is a rational function with at most three critical values; we may assume these values are [math] \{ 0, \, 1, \, \infty \}. [/math] A Dessin d'Enfant is a planar bipartite graph obtained by considering the preimage of a path between two of these critical values, usually taken to be the line segment from 0 to 1. Such graphs can be drawn on the sphere by composing with stereographic projection: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq \mathbb P^1(\mathbb C) \simeq S^2(\mathbb R).
[/math] Replacing [math] \mathbb P^1 [/math] with an elliptic curve [math]E [/math], there is a similar definition of a Belyĭ map [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C). [/math] Since [math] E(\mathbb C) \simeq \mathbb T^2(\mathbb R) [/math] is a torus, we call [math] (E, \beta) [/math] a toroidal Belyĭ pair. The corresponding Dessin d'Enfant can be drawn on the torus by composing with an elliptic logarithm: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq E(\mathbb C) \simeq \mathbb T^2(\mathbb R). [/math] This project seeks to create a database of such Belyĭ pairs, their corresponding Dessins d'Enfant, and their monodromy groups. For each positive integer [math] N [/math], there are only finitely many toroidal Belyĭ pairs [math] (E, \beta) [/math] with [math] \deg \, \beta = N. [/math] Using the Hurwitz Genus formula, we can begin this database by considering all possible degree sequences [math] \mathcal D [/math] on the ramification indices as multisets on three partitions of N. For each degree sequence, we compute all possible monodromy groups [math] G = \text{im} \, \bigl[ \pi_1 \bigl( \mathbb P^1(\mathbb C) - \{ 0, \, 1, \, \infty \} \bigr) \to S_N \bigr]; [/math] they are the "Galois closure" of the group of automorphisms of the graph. Finally, for each possible monodromy group, we compute explicit formulas for Belyĭ maps [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] associated to some elliptic curve [math] E: \ y^2 = x^3 + A \, x + B. [/math] We will discuss some of the challenges of determining the structure of these groups, and present visualizations of group actions on the torus. This work is part of PRiME (Purdue Research in Mathematics Experience) with Chineze Christopher, Robert Dicks, Gina Ferolito, Joseph Sauder, and Danika Van Niel with assistance by Edray Goins and Abhishek Parab.
Good afternoon everyone! I am facing a problem which is straining my memory of linear algebra. I have: Three points with known coordinates, forming a triangle in space. Let the coordinates be R (top), P (left bottom) and Q (right bottom) (only rough positions). I'm not interested in the triangle as such, but in its two lines QP and QR. These lines are tangent to a circle of known radius (basically I'm trying to smooth the angle via a radius, like in CAD). I need the equation of the circle, so I can pick any point I want between P and R to smooth out the angle. The angle is <180°, so there should exist one solution (correct me if I'm wrong). I found an image which illustrates my problem: You can see my points R, P, Q, as well as my circle which is tangent to both rays originating in Q. Please note that PQ does not necessarily have to be horizontal and that the angle $\alpha$ is not always 50°. My goal is to calculate the origin O and thus the complete equation of my circle in the form $\vec{r}(t)=\vec{c}+r\cdot\cos{\varphi}\cdot\vec{a}+r\cdot\sin{\varphi}\cdot\vec{b}$ Plan I have made so far: Calculate $\vec{PR}$ Calculate $a=\arccos{\frac{\vec{QP}\bullet\vec{QR}}{\left|\vec{QP}\right|\cdot\left|\vec{QR}\right|}}$ Calculate $b=\frac{\pi}{2}-a$ From here on it gets tricky. I know that the origin is on the ray splitting the angle at Q exactly in half. If I project that ray onto my line $\vec{PQ}$, will I end up in the exact middle? Couldn't I just do something like "rotate $\frac{\vec{PR}}{2}$ around an axis through P by b degrees ccw, where the axis is perpendicular to the triangle's plane"? I start to get lost here. The perpendicular vector would be $\vec{QP}\times\vec{QR}$, wouldn't it? The German Wikipedia suggests rotating via a rotation matrix $R_{\hat{n}}(\alpha)\vec{x}=\hat{n}(\hat{n}\cdot\vec{x})+\cos\left(\alpha\right)(\hat{n}\times\vec{x})\times\hat{n}+\sin\left(\alpha\right)(\hat{n}\times\vec{x})$ where $\hat{n}$ is the unit normal vector around which to rotate.
Can I use this formula? How do I finally assemble my circle equation? Edit: And yes, I have seen this, but it didn't help :-)
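For reference, the plan can be carried out without any rotation at all: the center lies on the bisector of the angle at Q, at distance $r/\sin(\alpha/2)$ from Q, and the triangle's plane normal gives the frame vectors $\vec a$, $\vec b$ for the parametrization. A sketch (the coordinates of P, Q, R and the radius below are made-up sample values, not from the question):

```python
import numpy as np

# Center O of the circle of radius radius tangent to rays Q->P and Q->R,
# plus an orthonormal in-plane frame (a, b) for
#   r(t) = O + radius*cos(t)*a + radius*sin(t)*b.
P = np.array([0.0, 0.0, 0.0])   # sample points (assumptions)
Q = np.array([4.0, 0.0, 0.0])
R = np.array([3.0, 3.0, 1.0])
radius = 0.5

u = (P - Q) / np.linalg.norm(P - Q)            # unit vector along QP
v = (R - Q) / np.linalg.norm(R - Q)            # unit vector along QR
alpha = np.arccos(np.clip(u @ v, -1.0, 1.0))   # full angle at Q

# The center sits on the angle bisector, r / sin(alpha/2) away from Q.
bisector = (u + v) / np.linalg.norm(u + v)
O = Q + (radius / np.sin(alpha / 2)) * bisector

# Orthonormal frame spanning the triangle's plane.
n = np.cross(u, v)
n = n / np.linalg.norm(n)                      # plane normal (= QP x QR direction)
a = bisector                                   # first in-plane axis
b = np.cross(n, a)                             # second in-plane axis, unit by construction

# Sanity check: the distance from O to each ray equals the radius,
# i.e. the circle really is tangent to both lines.
dist_to_QP = np.linalg.norm(np.cross(O - Q, u))
dist_to_QR = np.linalg.norm(np.cross(O - Q, v))
```

This sidesteps the Rodrigues formula entirely; the projection of O onto QP (the tangency point) lies at distance $r/\tan(\alpha/2)$ from Q, not at the midpoint of PQ in general.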
The answer can be found in Kato's book Perturbation theory for linear operators. I will use Kato's notation. In fact I will answer a more general question where you have an operator which depends analytically on a parameter $x$ (your $\epsilon$). Let such an operator be $T(x)$ and let $$T(x) = \sum_{n=0}^{\infty} x ^n T^{(n)}$$ such that the series converges in a neighborhood of $x=0$. I also write $T=T^{(0)}$. In your case you simply have $T^{(n)}=0$ for $n\ge 2$. We seek the perturbation series of an eigenvalue $\lambda$ of $T$. This means that there exists an eigenprojector of $T$, $P$, such that $$TP = \lambda P +D, $$ where $D$ is a nilpotent term that may arise from the Jordan decomposition. If $m = \mathrm{dim} P$ is the dimension of the range of $P$, then $D^m=0$. Note that for a non-degenerate eigenvalue ($m=1$) we necessarily have $D=0$. Define also $Q=1-P$ and the reduced resolvent $$S = \lim_{z\to \lambda} Q (T - z)^{-1} Q$$ and lastly let's define $$S^{(0)} = -P, \ \ S^{(n)} = S^n, \ \ S^{(-n)} = - D^n, \ \mathrm{for}\ n\ge 1.$$ Let $P(x)$ be the eigenprojector of $T(x)$ analytically connected to $P$. Then one has the following series: $$(T(x) - \lambda) P(x) = D + \sum_{n=1}^{\infty} x^n \tilde{T}^{(n)}$$ with $$\tilde{T}^{(n)} = - \sum_{p=1}^{\infty} (-1)^p \sum_{\mathcal{A}} S^{(k_1)} T^{(n_1)} S^{(k_2)} \cdots S^{(k_p)} T^{(n_p)} S^{(k_{p+1})}, $$ where $\mathcal{A}$ corresponds to the indices satisfying the following constraint $$\mathcal{A} = \left \{ \sum_{i=1}^p n_i = n ; \sum_{j=1}^{p+1} k_j = p; n_j \ge 1; k_j \ge -m+1 \right \}.$$ In the non-degenerate case ($m=1$) this provides the final answer, i.e. \begin{eqnarray}\lambda(x) &=& \lambda + \sum_{n=1}^{\infty} x^n \lambda^{(n)} \\\lambda^{(n)} &=& \mathrm{Tr}\, \tilde{T}^{(n)}.\end{eqnarray} Note that in this case one must have $D=0$. Moreover, taking the trace already kills many terms because of the cyclic property of the trace and the fact that $SP = PS = 0$.
To make contact with possibly more familiar expressions, note that for a self-adjoint unperturbed operator $T$, the reduced resolvent should look familiar: $$S = \sum_{\lambda_j \neq \lambda} \frac{ |j\rangle \langle j|}{ \lambda_j - \lambda},$$ where I denoted by $\lambda_j$ and $|j\rangle$ the eigenvalues and eigenvectors of $T$.
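As a sanity check of the non-degenerate case, the first-order coefficient $\lambda^{(1)} = \langle j|T^{(1)}|j\rangle$ can be compared against exact diagonalization for a small symmetric matrix. A sketch with arbitrary test matrices (my own choice, not from the question):

```python
import numpy as np

# T(x) = T + x*T1 with T symmetric and non-degenerate; then
# lambda(x) = lambda + x*<v, T1 v> + O(x^2) for the unperturbed
# eigenvector v.  Data below is arbitrary test input.
rng = np.random.RandomState(0)
T = np.diag([1.0, 2.0, 4.0])          # unperturbed operator, m = 1 for each eigenvalue
A = rng.randn(3, 3)
T1 = (A + A.T) / 2                    # symmetric perturbation

lam0, vecs = np.linalg.eigh(T)
v = vecs[:, 0]                        # eigenvector for lambda = 1
lam1 = v @ T1 @ v                     # first-order coefficient lambda^(1)

x = 1e-5
lam_exact = np.linalg.eigvalsh(T + x * T1)[0]   # exact smallest eigenvalue
first_order = lam0[0] + x * lam1                # truncated series, error O(x^2)
```

The residual between `lam_exact` and `first_order` shrinks like $x^2$, as the second-order term (built from the reduced resolvent $S$) predicts.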
I have read this page on closure phase and could not understand the following: \begin{align} \psi_1 & = \phi_1 + e_B - e_C \\ \psi_2 & = \phi_2 - e_B \\ \psi_3 & = \phi_3 - e_C. \end{align} If the error in $\phi_3$ is $e_C$ and the error in $\phi_2$ is $e_B$, then why should the error in $\phi_1$ be connected with the errors of $\phi_2$ and $\phi_3$ in the way given? It's because the interference phase is the difference between two antennas; it is the difference of a quantity in two different places. Starting out, there are three different shifts $e_A$, $e_B$ and $e_C$, which are the unknown shifts in time at the three locations. This gives three phase errors. $$ \psi_1 = \phi_1 + e_B - e_C $$ $$ \psi_2 = \phi_2 + e_A - e_B $$ $$ \psi_3 = \phi_3 + e_A - e_C $$ But you just define $e_A=0$, by using the time of the reception at A as your time coordinate. Now you have only two errors and three measurements.
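The practical payoff is that the combination $\psi_1 + \psi_2 - \psi_3$ (the closure phase) is independent of all three clock errors, whatever values they take. A quick numerical check with arbitrary made-up values:

```python
import random

# psi_i are the measured phases, phi_i the true ones, e_A/e_B/e_C the
# unknown per-antenna clock errors; all values here are arbitrary.
random.seed(1)
phi1, phi2, phi3 = (random.uniform(-3, 3) for _ in range(3))
eA, eB, eC = (random.uniform(-1, 1) for _ in range(3))

psi1 = phi1 + eB - eC
psi2 = phi2 + eA - eB
psi3 = phi3 + eA - eC

closure_measured = psi1 + psi2 - psi3
closure_true = phi1 + phi2 - phi3   # eA, eB, eC all cancel algebraically
```

Expanding the sum shows why: $(e_B - e_C) + (e_A - e_B) - (e_A - e_C) = 0$ term by term.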
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra. Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus. One of my professors said it to describe a bunch of REUs: it basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of. @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have. Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector be changed?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2. Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$: for example, writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$. I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of. One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
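The suggested shortcut of pushing everything down to ring arithmetic in $\Bbb{Q}$ can also be spot-checked mechanically with exact rational arithmetic. A small sketch using the multiplication rule above (the choice $\delta = 5$ and the random sampling are my own, purely for illustration):

```python
from fractions import Fraction
import random

delta = Fraction(5)  # arbitrary choice of delta for the field Q(sqrt(delta))

def mul(p, q):
    # (a + b*sqrt(delta)) (x) (c + d*sqrt(delta))
    #   = (a*c + b*d*delta) + (b*c + a*d)*sqrt(delta)
    (a, b), (c, d) = p, q
    return (a * c + b * d * delta, b * c + a * d)

# spot-check associativity on random rational triples, exactly (no floats)
random.seed(0)
rnd = lambda: Fraction(random.randint(-9, 9), random.randint(1, 9))
for _ in range(100):
    alpha = (rnd(), rnd())
    beta = (rnd(), rnd())
    gamma = (rnd(), rnd())
    assert mul(mul(alpha, beta), gamma) == mul(alpha, mul(beta, gamma))
```

This is of course only a check, not a proof, but it catches transcription errors in the long Latex computation instantly.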
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest field possible. It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or deriving CH, so if your set of axioms contains those, then you can decide the truth value of CH in that system. @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wondered whether to show that is false by finding a finite sentence and procedure that can produce infinity, but so far I failed. Put another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book. The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, minimising $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac {p}{q}\right|<\frac{1}{q^{n}}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, and the series converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'(a) = \lim_{x \to a} f'(x)$$ and neither Rolle nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else? I need to finish that book to comment. typo: neither Rolle nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
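The finitist construction above is easy to make concrete: each partial sum $\sum_{k=1}^M 1/b^{k!}$ is an exact rational, and the sequence is visibly monotone. A minimal sketch (my own; the choice $b=10$, whose limit is Liouville's constant, is not from the chat):

```python
from fractions import Fraction

def partial_sum(b, M):
    # S_M = sum_{k=1}^{M} 1 / b**(k!), computed exactly as a rational
    s, fact = Fraction(0), 1
    for k in range(1, M + 1):
        fact *= k                      # fact = k!
        s += Fraction(1, b ** fact)
    return s

sums = [partial_sum(10, M) for M in range(1, 5)]
assert all(x < y for x, y in zip(sums, sums[1:]))   # monotonically increasing
print(float(sums[-1]))                               # ≈ 0.110001, a truncation of Liouville's constant
```

Every quantity here is a finite object (a fraction), so nothing in the computation itself appeals to an actual infinity; only the passage to the limit $L$ does.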
The problem is the following: Consider a particle of mass $m$ confined in a long and thin hollow pipe, which rotates in the $xy$ plane with constant angular velocity $\omega$. The rotation axis passes through one of the ends of the pipe. The objective of the problem is to describe the motion of the particle. My question is about the Lagrangian, Hamiltonian and energy of the particle. Let $r$ be the distance of the particle from the $z$ axis. The kinetic energy $T$ is $$T= \dfrac{1}{2}m\dot{r}^2+\dfrac{1}{2}mr^2 \omega^2.$$ The potential is zero. So the Lagrangian is $$L=T= \dfrac{1}{2}m\dot{r}^2+\dfrac{1}{2}mr^2 \omega^2.\tag{1}$$ We have that $\dfrac{\partial L}{\partial \dot{q}}=p$ and $\dfrac{\partial L}{\partial q}=\dot{p}$. In this case, $\dfrac{\partial L}{\partial \dot{r}}=m \dot{r}=p$ and $\dfrac{\partial L}{\partial r}=mr\omega^2=\dot{p}$. The Hamiltonian is $\sum p\dot{q} -L$, so that $$H=\dfrac{1}{2m}p^2 -\dfrac{1}{2}mr^2\omega^2\tag{2}.$$ According to Landau and Lifshitz, Mechanics, the energy of a system $E$, when the Lagrangian has no explicit dependence on time (page 14), is $$ E =\sum \dot{q} \dfrac{\partial L}{\partial \dot{q}} -L=\sum \dot{q}p -L.\tag{3}$$ Applying this definition to our Lagrangian (1), we get that the energy is equal to the Hamiltonian $$E=\dfrac{1}{2m}p^2 -\dfrac{1}{2}mr^2\omega^2=\dfrac{1}{2}m\dot{r}^2 -\dfrac{1}{2}mr^2\omega^2. \tag{4}$$ But the kinetic energy of the system is just $T=\dfrac{1}{2}m\dot{r}^2+\dfrac{1}{2}mr^2 \omega^2$ and the potential is zero, so the energy should be $$E=T=\dfrac{1}{2}m\dot{r}^2\color{red}{+}\dfrac{1}{2}mr^2 \omega^2\tag{5}$$ The middle sign is different from the sign of equation (4). The questions are: Are my expressions right? Am I missing something? Is L&L's definition of energy right?
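The sign discrepancy between (4) and (5) can at least be confirmed numerically. A minimal sketch (the sample values for $m$, $\omega$, $r$, $\dot r$ are arbitrary, my own): compute $p=\partial L/\partial\dot r=m\dot r$, form $H=p\dot r-L$, and compare with $T$:

```python
# Arbitrary sample values, for illustration only
m, w = 2.0, 3.0          # mass, angular velocity omega
r, rdot = 0.5, 1.5       # radial position and radial velocity

T = 0.5*m*rdot**2 + 0.5*m*r**2*w**2   # kinetic energy, eq. (5)
L = T                                  # potential is zero, so L = T, eq. (1)
p = m*rdot                             # conjugate momentum p = dL/d(rdot)

H = p*rdot - L                         # Legendre transform, eqs. (2)-(4)
assert abs(H - (0.5*m*rdot**2 - 0.5*m*r**2*w**2)) < 1e-12   # matches eq. (4)
assert abs(T - H - m*r**2*w**2) < 1e-12                      # T and H differ by m r^2 w^2
print(H, T)
```

So the "energy" of (3)/(4) really does differ from $T$ by $m r^2\omega^2$; the discrepancy in the question is arithmetic fact, not a computational slip.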
When this MWE is compiled, the appearance of the surd in equation (1) is correct (the parentheses are placeholders to demonstrate the issue without using \left( and \right)). When \left( and \right) are used in equation (2), the surd becomes more vertical, decidedly not what looks best. The same is true for \big and friends.

% MWE
\documentclass{book}
\usepackage{amsmath}
\begin{document}
\begin{align}
\sqrt[3]{3\sqrt{3} \left(3+\frac{11}{3}\frac{\sqrt{2}}{\sqrt{3}}\right)}&= \label{q11401}\\
\sqrt[3]{3\sqrt{3} (3+\frac{11}{3}\frac{\sqrt{2}}{\sqrt{3}})}&= \label{q11402}
\end{align}
\end{document}

How do I maintain the shape of the surd when using enlarged groupings inside the radical? This should not require a custom macro, should it? Or did I miss a step?
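One workaround sometimes suggested for this (a sketch, not a definitive fix, and certainly not the only one): the extensible vertical-rule radical kicks in once the argument's total height plus depth exceeds the largest drawn radical glyph, and \left(...\right) picks delimiters that can be taller than the content itself. Hiding some of the argument's depth from \sqrt with \smash[b] can keep the drawn glyph in play; whether the result looks right depends on the fonts, so treat this as something to experiment with:

```latex
\documentclass{book}
\usepackage{amsmath}
\begin{document}
\begin{equation}
  % \smash[b] hides the group's depth from \sqrt; \vphantom{(} restores
  % a minimal height reference so the spacing does not collapse entirely.
  \sqrt[3]{3\sqrt{3}\,\smash[b]{\left(3+\frac{11}{3}\frac{\sqrt{2}}{\sqrt{3}}\right)}\vphantom{(}}
\end{equation}
\end{document}
```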
Question: Given that $r$ is a root of the quadratic equation $x^2-3x-5=0$, find the value of $2r^2-6r+1$. Thx a lot, because I can't see the pattern of this quadratic equation, so I am struggling a lot to use $r=\frac{3\pm\sqrt{29}}{2}$ in $2r^2-6r+1$. $$x^2-3x=5$$ $r$ is the value that makes that ^ equation correct. So: $$r^2-3r=5$$ $$2r^2-6r=10$$ $$2r^2-6r+1=11$$ One way is to literally find $r$, which isn't too hard since $x^2-3x-5=0$ has roots $x=\frac{3\pm\sqrt{29}}{2}$. The way they probably want you to solve this question is to recognize that $r$ satisfies $r^2-3r-5=0$. Multiplying by $2$ yields $2r^2-6r-10=0$. Can you continue? Hint: $2x^2 - 6x = 2(x^2-3x)$. Even if you don't see the pattern, you can always use the relation $$r^2 - 3r - 5=0$$ to replace $r^2$ by $3r+5$ in the expression you want to evaluate. In the worst case, since the expression to be evaluated is a quadratic polynomial in $r$, if $r$ doesn't get canceled in the simplification, you'll end up with a linear expression instead of a quadratic one, so it's still a good strategy. Thus, \begin{align*} r^2-3r - 5 &= 0\\[4pt] \implies\;r^2 &= 3r +5\\[4pt] \implies\;2r^2-6r+1&=2(3r+5)-6r+1\\[4pt] &=(6r+10)-6r +1\\[4pt] &=11\\[4pt] \end{align*} But here's an observation . . . Let's say at the outset, you were considering using the quadratic formula to solve for $r$. But the coefficients are real and the discriminant is $$b^2 - 4ac = (-3)^2-4(1)(-5) = 29$$ so even without solving, you know the equation has two distinct real roots.
But now, if you change your mind about using the quadratic formula, switching instead to the approach where you replace $r^2$ by $3r+5$, then even before making the substitution, it's clear that the result will simplify to either a constant or a polynomial of degree $1$ in $r$. But if you assume that the answer is unique, then, since the problem didn't specify which root $r$, the expression can't end up being degree $1$ in $r$. That's because a polynomial of degree $1$ can't realize the same value twice, so must yield a different result for each root, contrary to the assumption of uniqueness. The upshot of this observation is that you expect that in the process of simplification, the $r$ terms will cancel. That said, you still have to do the work for the evaluation, but when the $r$ terms do cancel, then, based on this thought process, it shouldn't come as a surprise.
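The cancellation the answers predict is easy to confirm numerically; a quick check (plain Python, both roots):

```python
import math

for sign in (+1, -1):
    r = (3 + sign * math.sqrt(29)) / 2   # the two roots of x^2 - 3x - 5 = 0
    assert abs(r*r - 3*r - 5) < 1e-9     # r really is a root
    print(2*r*r - 6*r + 1)               # 11.0 for either root, as derived above
```

Both roots give the same value, which is exactly the uniqueness argument made in the observation above.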
Suppose that a connected planar simple graph with $e$ edges and $v$ vertices contains no simple circuit with length less than or equal to $4.\;$ Show that $$\frac 53 v -\frac{10}{3} \geq e$$ or, equivalently, $$5(v-2) \geq 3e$$ As Joseph suggests, one of the two formulas you'll want to use for this problem is Euler's formula, which you may know as $$r = e - v + 2 \quad\text{(or}\quad v + r - e = 2)\qquad\qquad\quad (1)$$ where $r$ is the number of regions in a planar representation of $G$ ($e$: number of edges, $v$: number of vertices). (Note: for a convex polyhedron, $r$ corresponds to $F$, the number of faces of the polyhedron.) Now, a connected planar simple graph drawn in the plane divides the plane into regions, say $r$ of them. The degree of each region, including the unbounded region, must be at least five (assuming graph $G$ is a connected planar graph with no simple circuit with length $\leq 4$). For the second formula you'll need: remember that the sum of the degrees of the regions is exactly twice the number of edges in the graph, because each edge occurs on the boundary of a region exactly twice, either in two different regions, or twice in the same region. Because each region $R$ has degree greater than or equal to five, $$2e = \sum_{\text{all regions}\;R} \mathrm{deg}(R) \geq 5r\qquad\qquad\qquad\qquad (2)$$ which gives us $r \leq \frac{2}{5} e$.
Now, using this result from (2), and substituting for r in Euler's formula, (1), we obtain $$e - v + 2 \leq \frac 25 e,$$$$\frac 35 e \leq v - 2,$$and hence, we have, as desired: $$e \leq \frac 53 v - \frac {10}{3} \quad\iff \quad \frac 53 v - \frac{10}{3} \geq e \quad \iff \quad 5(v-2) \geq 3e$$ Try using Euler's "polyhedral formula" - If G is a connected plane graph then V + F - E = 2.
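The inequality doubles as a quick planarity screen; a small sketch (the Petersen-graph application is a standard corollary, not part of the original question):

```python
def girth5_planar_bound(v, e):
    # Necessary condition e <= (5/3)v - 10/3, i.e. 3e <= 5(v - 2), for a
    # connected planar simple graph with no simple circuit of length <= 4
    return 3 * e <= 5 * (v - 2)

print(girth5_planar_bound(5, 5))     # C5, a pentagon: True, consistent with planarity
print(girth5_planar_bound(10, 15))   # Petersen graph (girth 5): False, hence not planar
```

Since the Petersen graph has girth 5 but violates the bound (45 > 40), it cannot be planar, which is one of the classic applications of this exercise.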
Experimental Mathematics, Experiment. Math. Volume 12, Issue 2 (2003), 135-153. The Asymptotic Distribution of Exponential Sums, I. Abstract: Let $f(x)$ be a polynomial with integral coefficients and let, for $c>0$, $$S(f(x),c)=\sum_{j \bmod c} \exp\Bigl(2\pi i\,\frac{f(j)}{c}\Bigr).$$ It has been possible, for a long time, to estimate these sums efficiently. On the other hand, when the degree of $f(x)$ is greater than 2, very little is known about their asymptotic distribution, even though their history goes back to C. F. Gauss and E. E. Kummer. The purpose of this paper is to present both experimental and theoretic evidence for a very regular asymptotic behaviour of $S(f(x),c)$. Article information: Source: Experiment. Math., Volume 12, Issue 2 (2003), 135-153. Dates: First available in Project Euclid: 31 October 2003. Permanent link to this document: https://projecteuclid.org/euclid.em/1067634728 Mathematical Reviews number (MathSciNet): MR2016703. Zentralblatt MATH identifier: 1061.11046. Citation: Patterson, S. J. The Asymptotic Distribution of Exponential Sums, I. Experiment. Math. 12 (2003), no. 2, 135--153. https://projecteuclid.org/euclid.em/1067634728
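The sums in the abstract are easy to experiment with numerically. A minimal sketch (my own choice of example, not from the paper): for an odd prime $c=p$ and $f(x)=x^2$, $S(x^2,p)$ is the classical quadratic Gauss sum, whose absolute value is $\sqrt p$:

```python
import cmath

def S(f, c):
    # S(f, c) = sum over residues j mod c of exp(2*pi*i * f(j) / c)
    return sum(cmath.exp(2j * cmath.pi * f(j) / c) for j in range(c))

g = S(lambda x: x * x, 5)   # quadratic Gauss sum modulo 5
print(abs(g))               # ~ 2.2360679... = sqrt(5)
```

For degree greater than 2 no such closed form is known, which is exactly the regime the paper studies experimentally.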
Skills to Develop

To use the Cochran–Mantel–Haenszel test when you have data from \(2\times 2\) tables that you've repeated at different times or locations. It will tell you whether you have a consistent difference in proportions across the repeats.

When to use it

Use the Cochran–Mantel–Haenszel test (which is sometimes called the Mantel–Haenszel test) for repeated tests of independence. The most common situation is that you have multiple \(2\times 2\) tables of independence; you're analyzing the kind of experiment that you'd analyze with a test of independence, and you've done the experiment multiple times or at multiple locations. There are three nominal variables: the two variables of the \(2\times 2\) test of independence, and the third nominal variable that identifies the repeats (such as different times, different locations, or different studies). There are versions of the Cochran–Mantel–Haenszel test for any number of rows and columns in the individual tests of independence, but they're rarely used and I won't cover them.

Fig. 2.10.1 A pony wearing pink legwarmers

For example, let's say you've found several hundred pink knit polyester legwarmers that have been hidden in a warehouse since they went out of style in 1984. You decide to see whether they reduce the pain of ankle osteoarthritis by keeping the ankles warm. In the winter, you recruit \(36\) volunteers with ankle arthritis, randomly assign \(20\) to wear the legwarmers under their clothes at all times while the other \(16\) don't wear the legwarmers, then after a month you ask them whether their ankles are pain-free or not. With just the one set of people, you'd have two nominal variables (legwarmers vs. control, pain-free vs. pain), each with two values, so you'd analyze the data with Fisher's exact test. However, let's say you repeat the experiment in the spring, with \(50\) new volunteers. Then in the summer you repeat the experiment again, with \(28\) new volunteers.
You could just add all the data together and do Fisher's exact test on the \(114\) total people, but it would be better to keep each of the three experiments separate. Maybe legwarmers work in the winter but not in the summer, or maybe your first set of volunteers had worse arthritis than your second and third sets. In addition, pooling different studies together can show a "significant" difference in proportions when there isn't one, or even show the opposite of a true difference. This is known as Simpson's paradox. For these reasons, it's better to analyze repeated tests of independence using the Cochran-Mantel-Haenszel test.

Null hypothesis

The null hypothesis is that the relative proportions of one variable are independent of the other variable within the repeats; in other words, there is no consistent difference in proportions in the \(2\times 2\) tables. For our imaginary legwarmers experiment, the null hypothesis would be that the proportion of people feeling pain was the same for legwarmer-wearers and non-legwarmer wearers, after controlling for the time of year. The alternative hypothesis is that the proportion of people feeling pain was different for legwarmer and non-legwarmer wearers. Technically, the null hypothesis of the Cochran–Mantel–Haenszel test is that the odds ratios within each repetition are equal to \(1\). The odds ratio is equal to \(1\) when the proportions are the same, and the odds ratio is different from \(1\) when the proportions are different from each other. I think proportions are easier to understand than odds ratios, so I'll put everything in terms of proportions. But if you're in a field such as epidemiology where this kind of analysis is common, you're probably going to have to think in terms of odds ratios.
How the test works

If you label the four numbers in a \(2\times 2\) test of independence like this: \[\begin{matrix} a & b\\ c & d \end{matrix}\] and \[(a+b+c+d)=n\] you can write the equation for the Cochran–Mantel–Haenszel test statistic like this: \[X_{MH}^{2}=\frac{\left \{ \left | \sum \left [ a-(a+b)(a+c)/n \right ] \right | -0.5\right \}^2}{\sum (a+b)(a+c)(b+d)(c+d)/(n^3-n^2)}\] The numerator contains the absolute value of the summed differences between the observed value in one cell (\(a\)) and the expected value under the null hypothesis, \((a+b)(a+c)/n\); so the numerator is the squared sum of deviations between the observed and expected values. It doesn't matter how you arrange the \(2\times 2\) tables; any of the four values can be used as \(a\). You subtract the \(0.5\) as a continuity correction. The denominator contains an estimate of the variance of the squared differences. The test statistic, \(X_{MH}^{2}\), gets bigger as the differences between the observed and expected values get larger, or as the variance gets smaller (primarily due to the sample size getting bigger). It is chi-square distributed with one degree of freedom. Different sources present the formula for the Cochran–Mantel–Haenszel test in different forms, but they are all algebraically equivalent. The formula I've shown here includes the continuity correction (subtracting \(0.5\) in the numerator), which should make the \(P\) value more accurate. Some programs do the Cochran–Mantel–Haenszel test without the continuity correction, so be sure to specify whether you used it when reporting your results.

Assumptions

In addition to testing the null hypothesis, the Cochran-Mantel-Haenszel test also produces an estimate of the common odds ratio, a way of summarizing how big the effect is when pooled across the different repeats of the experiment. This requires assuming that the odds ratio is the same in the different repeats.
You can test this assumption using the Breslow-Day test, which I'm not going to explain in detail; its null hypothesis is that the odds ratios are equal across the different repeats. If some repeats have a big difference in proportions in one direction, and other repeats have a big difference in proportions but in the opposite direction, the Cochran-Mantel-Haenszel test may give a non-significant result. So when you get a non-significant Cochran-Mantel-Haenszel test, you should perform a test of independence on each \(2\times 2\) table separately and inspect the individual \(P\) values and the direction of difference to see whether something like this is going on. In our legwarmer example, if the proportion of people with ankle pain was much smaller for legwarmer-wearers in the winter, but much higher in the summer, and the Cochran-Mantel-Haenszel test gave a non-significant result, it would be erroneous to conclude that legwarmers had no effect. Instead, you could conclude that legwarmers had an effect, it just was different in the different seasons.

Examples

Example

When you look at the back of someone's head, the hair either whorls clockwise or counterclockwise. Lauterbach and Knight (1927) compared the proportion of clockwise whorls in right-handed and left-handed children. With just this one set of people, you'd have two nominal variables (right-handed vs. left-handed, clockwise vs. counterclockwise), each with two values, so you'd analyze the data with Fisher's exact test.
However, several other groups have done similar studies of hair whorl and handedness (McDonald 2011):

Study group           Handedness          Right    Left
white children        Clockwise             708      50
                      Counterclockwise      169      13
                      percent CCW         19.3%   20.6%
British adults        Clockwise             136      24
                      Counterclockwise       73      14
                      percent CCW         34.9%   38.0%
Pennsylvania whites   Clockwise             106      32
                      Counterclockwise       17       4
                      percent CCW         13.8%   11.1%
Welsh men             Clockwise             109      22
                      Counterclockwise       16      26
                      percent CCW         12.8%   54.2%
German soldiers       Clockwise             801     102
                      Counterclockwise      180      25
                      percent CCW         18.3%   19.7%
German children       Clockwise             159      27
                      Counterclockwise       18      13
                      percent CCW         10.2%   32.5%
New York              Clockwise             151      51
                      Counterclockwise       28      15
                      percent CCW         15.6%   22.7%
American men          Clockwise             950     173
                      Counterclockwise      218      33
                      percent CCW         18.7%   16.0%

You could just add all the data together and do a test of independence on the \(4463\) total people, but it would be better to keep each of the \(8\) experiments separate. Some of the studies were done on children, while others were on adults; some were just men, while others were male and female; and the studies were done on people of different ethnic backgrounds. Pooling all these studies together might obscure important differences between them. Analyzing the data using the Cochran-Mantel-Haenszel test, the result is \(X_{MH}^{2}=6.07\), \(1\,d.f.\), \(P=0.014\). Overall, left-handed people have a significantly higher proportion of counterclockwise whorls than right-handed people.

Example

McDonald and Siebenaller (1989) surveyed allele frequencies at the Lap locus in the mussel Mytilus trossulus on the Oregon coast. At four estuaries, we collected mussels from inside the estuary and from a marine habitat outside the estuary. There were three common alleles and a couple of rare alleles; based on previous results, the biologically interesting question was whether the Lap94 allele was less common inside estuaries, so we pooled all the other alleles into a "non-94" class.
There are three nominal variables: allele (\(94\) or non-\(94\)), habitat (marine or estuarine), and area (Tillamook, Yaquina, Alsea, or Umpqua). The null hypothesis is that at each area, there is no difference in the proportion of Lap94 alleles between the marine and estuarine habitats. This table shows the number of \(94\) and non-\(94\) alleles at each location. There is a smaller proportion of \(94\) alleles in the estuarine location of each estuary when compared with the marine location; we wanted to know whether this difference is significant.

Location    Allele        Marine    Estuarine
Tillamook   94                56           69
            non-94            40           77
            percent 94     58.3%        47.3%
Yaquina     94                61          257
            non-94            57          301
            percent 94     51.7%        46.1%
Alsea       94                73           65
            non-94            71           79
            percent 94     50.7%        45.1%
Umpqua      94                71           48
            non-94            55           48
            percent 94     56.3%        50.0%

The result is \(X_{MH}^{2}=5.05\), \(1\,d.f.\), \(P=0.025\). We can reject the null hypothesis that the proportion of Lap94 alleles is the same in the marine and estuarine locations.

Example

Duggal et al. (2010) did a meta-analysis of placebo-controlled studies of niacin and heart disease. They found \(5\) studies that met their criteria and looked for coronary artery revascularization in patients given either niacin or placebo:

Study       Treatment   Revascularization   No revasc.   Percent revasc.
FATS        Niacin                      2           46              4.2%
            Placebo                    11           41             21.2%
AFREGS      Niacin                      4           67              5.6%
            Placebo                    12           60             16.7%
ARBITER 2   Niacin                      1           86              1.1%
            Placebo                     4           76              5.0%
HATS        Niacin                      1           37              2.6%
            Placebo                     6           32             15.8%
CLAS 1      Niacin                      2           92              2.1%
            Placebo                     1           93              1.1%

There are three nominal variables: niacin vs. placebo, revascularization vs. no revascularization, and the name of the study. The null hypothesis is that the rate of revascularization is the same in patients given niacin or placebo.
The different studies have different overall rates of revascularization, probably because they used different patient populations and looked for revascularization after different lengths of time, so it would be unwise to just add up the numbers and do a single \(2\times 2\) test. The result of the Cochran-Mantel-Haenszel test is \(X_{MH}^{2}=12.75\), \(1\,d.f.\), \(P=0.00036\). Significantly fewer patients on niacin developed coronary artery revascularization.

Graphing the results

To graph the results of a Cochran–Mantel–Haenszel test, pick one of the two values of the nominal variable that you're observing and plot its proportions on a bar graph, using bars of two different patterns.

Fig. 2.10.2 Lap94 allele proportions (with 95% confidence intervals) in the mussel Mytilus trossulus at four bays in Oregon. Gray bars are marine samples and empty bars are estuarine samples.

Similar tests

Sometimes the Cochran–Mantel–Haenszel test is just called the Mantel–Haenszel test. This is confusing, as there is also a test for homogeneity of odds ratios called the Mantel–Haenszel test, and a Mantel–Haenszel test of independence for one \(2\times 2\) table. Mantel and Haenszel (1959) came up with a fairly minor modification of the basic idea of Cochran (1954), so it seems appropriate (and somewhat less confusing) to give Cochran credit in the name of this test. If you have at least six \(2\times 2\) tables, and you're only interested in the direction of the differences in proportions, not the size of the differences, you could do a sign test. The Cochran–Mantel–Haenszel test for nominal variables is analogous to a two-way anova or paired t–test for a measurement variable, or a Wilcoxon signed-rank test for rank data. In the arthritis-legwarmers example, if you measured ankle pain on a \(10\)-point scale (a measurement variable) instead of categorizing it as pain/no pain, you'd analyze the data with a two-way anova.
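The test statistic defined in "How the test works" is short enough to compute directly. A sketch in plain Python (my own, using the mussel data above, with \(a\) = marine 94, \(b\) = estuarine 94, \(c\) = marine non-94, \(d\) = estuarine non-94 in each \(2\times 2\) table):

```python
def cmh_statistic(tables, continuity=True):
    # tables: list of (a, b, c, d) cell counts, one tuple per 2x2 table
    num = 0.0   # summed deviations of a from its expectation
    den = 0.0   # summed variance terms
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a - (a + b) * (a + c) / n
        den += (a + b) * (a + c) * (b + d) * (c + d) / (n**3 - n**2)
    corr = 0.5 if continuity else 0.0   # continuity correction
    return (abs(num) - corr) ** 2 / den

mussels = [(56, 69, 40, 77),     # Tillamook
           (61, 257, 57, 301),   # Yaquina
           (73, 65, 71, 79),     # Alsea
           (71, 48, 55, 48)]     # Umpqua
print(round(cmh_statistic(mussels), 2))                     # 5.05, as in the text
print(round(cmh_statistic(mussels, continuity=False), 4))   # 5.3209, the SAS value below
```

Reproducing both the corrected (5.05) and uncorrected (5.3209) values is a useful consistency check between the formula and the SAS output shown later.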
How to do the test

Spreadsheet

I've written a spreadsheet to perform the Cochran–Mantel–Haenszel test, cmh.xls. It handles up to \(50\) \(2\times 2\) tables. It gives you the choice of using or not using the continuity correction; the results are probably a little more accurate with the continuity correction. It does not do the Breslow-Day test.

Web pages

I'm not aware of any web pages that will perform the Cochran–Mantel–Haenszel test.

R

Salvatore Mangiafico's \(R\) Companion has a sample R program for the Cochran-Mantel-Haenszel test, and also shows how to do the Breslow-Day test.

SAS

Here is a SAS program that uses PROC FREQ for a Cochran–Mantel–Haenszel test. It uses the mussel data from above. In the TABLES statement, the variable that labels the repeats must be listed first; in this case it is "location".

DATA lap;
   INPUT location $ habitat $ allele $ count;
   DATALINES;
Tillamook  marine     94      56
Tillamook  estuarine  94      69
Tillamook  marine     non-94  40
Tillamook  estuarine  non-94  77
Yaquina    marine     94      61
Yaquina    estuarine  94      257
Yaquina    marine     non-94  57
Yaquina    estuarine  non-94  301
Alsea      marine     94      73
Alsea      estuarine  94      65
Alsea      marine     non-94  71
Alsea      estuarine  non-94  79
Umpqua     marine     94      71
Umpqua     estuarine  94      48
Umpqua     marine     non-94  55
Umpqua     estuarine  non-94  48
;
PROC FREQ DATA=lap;
   WEIGHT count / ZEROS;
   TABLES location*habitat*allele / CMH;
RUN;

There is a lot of output, but the important part looks like this:

Cochran-Mantel-Haenszel Statistics (Based on Table Scores)

Statistic    Alternative Hypothesis    DF       Value      Prob
---------------------------------------------------------------
    1        Nonzero Correlation        1      5.3209    0.0211
    2        Row Mean Scores Differ     1      5.3209    0.0211
    3        General Association        1      5.3209    0.0211

For repeated \(2\times 2\) tables, the three statistics are identical; they are the Cochran–Mantel–Haenszel chi-square statistic, without the continuity correction.
For repeated tables with more than two rows or columns, the "general association" statistic is used when the values of the different nominal variables do not have an order (you cannot arrange them from smallest to largest); you should use it unless you have a good reason to use one of the other statistics. The results also include the Breslow-Day test of homogeneity of odds ratios:

Breslow-Day Test for Homogeneity of the Odds Ratios
------------------------------
Chi-Square         0.5295
DF                      3
Pr > ChiSq         0.9124

The Breslow-Day test for the example data shows no significant evidence for heterogeneity of odds ratios (\(X^2=0.53\), \(3\,d.f.\), \(P=0.91\)).

References

Cochran, W.G. 1954. Some methods for strengthening the common χ² tests. Biometrics 10: 417-451.
Duggal, J.K., M. Singh, N. Attri, P.P. Singh, N. Ahmed, S. Pahwa, J. Molnar, S. Singh, S. Khosla and R. Arora. 2010. Effect of niacin therapy on cardiovascular outcomes in patients with coronary artery disease. Journal of Cardiovascular Pharmacology and Therapeutics 15: 158-166.
Lauterbach, C.E., and J.B. Knight. 1927. Variation in whorl of the head hair. Journal of Heredity 18: 107-115.
Mantel, N., and W. Haenszel. 1959. Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute 22: 719-748.
McDonald, J.H. 2011. Myths of human genetics. Sparky House Press, Baltimore.
McDonald, J.H. and J.F. Siebenaller. 1989. Similar geographic variation at the Lap locus in the mussels Mytilus trossulus and M. edulis. Evolution 43: 228-231.

Contributor: John H. McDonald (University of Delaware)
Research | Open Access. New parameterized quantum integral inequalities via η-quasiconvexity. Advances in Difference Equations, volume 2019, Article number: 425 (2019). Abstract We establish new quantum Hermite–Hadamard and midpoint type inequalities via a parameter \(\mu \in [0,1]\) for a function F whose \(|{}_{\alpha }D_{q}F|^{u}\) is η-quasiconvex on \([\alpha ,\beta ]\) with \(u\geq 1\). Results obtained in this paper generalize, sharpen, and extend some results in the literature. For example, see (Noor et al. in Appl. Math. Comput. 251:675–679, 2015; Alp et al. in J. King Saud Univ., Sci. 30:193–203, 2018) and (Kunt et al. in Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 112:969–992, 2018). By choosing different values of μ, loads of novel estimates can be deduced. We also present some illustrative examples to show how some consequences of our results may be applied to derive more quantum inequalities. Introduction Quantum calculus is generally described as the ordinary calculus without limits. The q-calculus and h-calculus are the main branches of the quantum calculus. In this article, we shall work within the framework of the q-calculus. Quantum calculus has been found to be useful in many areas of mathematics, such as orthogonal polynomials, basic hypergeometric functions, combinatorics, the calculus of variations, mechanics, and the theory of relativity. Analogues of many results in the classical calculus have been established in the q-calculus sense. We start by presenting some of the recently published results in this direction. But before that, the following definitions are needed in the sequel: A function \(F:[\alpha ,\beta ]\subset \mathbb{R}\to \mathbb {R}\) is termed quasiconvex if, for all \(x,y\in [\alpha ,\beta ]\) and \(\tau \in [0,1]\), we have $$F\bigl(\tau x+(1-\tau )y\bigr)\leq \max \bigl\{ F(x),F(y)\bigr\} .$$ All convex functions are also quasiconvex, but not all quasiconvex functions are convex, so quasiconvexity is a generalization of convexity.
Quasiconvex functions have applications in mathematical analysis, in mathematical optimization, in game theory, and in economics. In 2016, the concept of quasiconvexity was generalized in the following way. Definition 1 ([6]) A function \(F:[\alpha ,\beta ]\to \mathbb{R}\) is called η-quasiconvex on \([\alpha ,\beta ]\) with respect to \(\eta :\mathbb{R}\times \mathbb{R}\to \mathbb{R}\) if $$F\bigl(\tau x+(1-\tau )y\bigr)\leq \max \bigl\{ F(y),F(y)+\eta \bigl(F(x),F(y)\bigr)\bigr\} $$ for all \(x,y\in [\alpha ,\beta ]\) and \(\tau \in [0,1]\). In the field of mathematical analysis, many estimates have been established via this generalized convexity and quasiconvexity. For instance, estimates of the Hermite–Hadamard, trapezoid, midpoint, and Simpson types have all been obtained for this class of functions. We invite the interested reader to see [3, 5, 7, 11, 18]. Theorem 2 ([14]) Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a q-differentiable function on \((\alpha ,\beta )\) with \({}_{\alpha }D_{q}F\) continuous on \([\alpha ,\beta ]\), where \(0< q<1\). If \(|{}_{\alpha }D_{q}F|^{u}\) is quasiconvex on \([\alpha ,\beta ]\) for \(u\geq 1\), then the following inequality holds: Theorem 3 ([13]) Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a q-differentiable function on \((\alpha ,\beta )\) with \({}_{\alpha }D_{q}F\) continuous on \([\alpha ,\beta ]\), where \(0< q<1\). If \(|{}_{\alpha }D_{q}F|^{u}\) is quasiconvex on \([\alpha ,\beta ]\) for \(u> 1\) with \(\frac{1}{u}+\frac{1}{v}=1\), then the following inequality holds: Theorem 4 Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a q-differentiable function on \((\alpha ,\beta )\) with \({}_{\alpha }D_{q}F\) continuous on \([\alpha ,\beta ]\), where \(0< q<1\). If \(|{}_{\alpha }D_{q}F|^{u}\) is quasiconvex on \([\alpha ,\beta ]\) for \(u\geq 1\), then the following q-midpoint type inequality holds: Theorem 5 Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a q-differentiable function on \((\alpha ,\beta )\) with \({}_{\alpha }D_{q}F\) continuous on \([\alpha ,\beta ]\), where \(0< q<1\).
If \(|{}_{\alpha }D_{q}F|^{u}\) is quasiconvex on \([\alpha ,\beta ]\) for \(u> 1\) with \(\frac{1}{u}+\frac{1}{v}=1\), then the following q-midpoint type inequality holds: The goal of this article is to extend Theorems 2–5 to a more general class of functions. We do this by means of a parameter \(\mu \in [0,1]\) and obtain results for a function F whose \(|{}_{\alpha }D_{q}F|^{u}\) is η-quasiconvex on \([\alpha ,\beta ]\) for \(u\geq 1\). Our first result sharpens Theorem 2 (see Remark 17); whereas Theorems 3–5 are special cases of our theorems (see Remarks 19, 21, and 23). In addition, we apply our results to some special means to get more results in this direction. This paper is structured as follows: Sect. 2 contains a quick overview of the quantum calculus. The main results are then framed and justified in Sect. 3. Some illustrative examples are then presented in Sect. 4. Preliminaries In this section, we present a quick overview of the theory of quantum calculus. For an in-depth study of this subject, we invite the interested reader to the book [8]. We start with the following basic definitions. Definition 6 ([21]) Suppose that \(F:[\alpha ,\beta ]\subset \mathbb {R}\to \mathbb {R}\) is a continuous function and \(z\in [\alpha ,\beta ]\). Then the expression is called the q-derivative on \([\alpha ,\beta ]\) of the function at z. We say that F is q-differentiable on \([\alpha ,\beta ]\) provided \({{}_{\alpha }}D_{q}F(z)\) exists for all \(z\in [\alpha ,\beta ]\). Definition 7 ([21]) Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a continuous function. Then the q-integral on \([\alpha ,\beta ]\) is defined as for \(z\in [\alpha ,\beta ]\). Moreover, if \(c\in (\alpha ,z)\), then the q-integral on \([\alpha ,\beta ]\) is defined as Remark 8 1. By taking \(\alpha =0\), the expression in (2) reduces to the well-known q-derivative, \(D_{q}F(z)\), of the function \(F(z)\) defined by $$ D_{q}F(z)=\frac{F(z)-F(qz)}{(1-q)z}. $$ 2.
Also, if \(\alpha =0\), then (3) amounts to the classical q-integral of a function \(F:[0,\infty )\to \mathbb {R}\) defined by$$ \int _{0}^{z}F(r) \, {}_{0} d_{q}r=(1-q)z\sum_{k=0}^{\infty }q^{k}F \bigl(q^{k}z\bigr). $$ Analogues of some known results in the continuous calculus sense are also given in what follows. Theorem 9 ([4]) Let \(F,G:[\alpha ,\beta ]\to \mathbb {R}\) be two continuous functions and suppose \(F(r)\leq G(r)\) for all \(r\in [\alpha ,\beta ]\). Then Theorem 10 ([21]) Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a continuous function. Then Theorem 11 ([21]) Let \(F,G:[\alpha ,\beta ]\to \mathbb {R}\) be continuous functions and \(\gamma \in \mathbb {R}\). Then, for \(z\in [\alpha ,\beta ]\) and \(c\in (\alpha ,z)\), we have Main results The succeeding lemmas will be needed in the proof of our theorems. Lemma 12 ([22]) Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a continuous and q- differentiable function on \((\alpha ,\beta )\) with \(0< q<1\). If \({}_{\alpha } D_{q}F\) is integrable on \([\alpha ,\beta ]\), then for all \(\mu \in [0, 1]\) the following identity holds: Lemma 13 ([22]) Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a continuous and q- differentiable function on \((\alpha ,\beta )\) with \(0< q<1\). If \({}_{\alpha } D_{q}F\) is integrable on \([\alpha ,\beta ]\), then for all \(\mu \in [0, 1]\) the following identity holds: Lemma 14 ([22]) Let \(\lambda ,\mu \in [0,1]\), \(k\in [0,\infty )\), and \(0< q<1\). Then Lemma 15 ([22]) Let \(\lambda ,\mu \in [0,1]\), \(\theta \in [1,\infty )\), and \(0< q<1\). Then Let f be an η-quasiconvex function on \([\alpha ,\beta ]\). We shall use the following notation: Theorem 16 Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a q- differentiable function on \((\alpha ,\beta )\) with \({}_{\alpha }D_{q}F\) continuous on \([\alpha ,\beta ]\) where \(0< q<1\). 
If \(|{}_{\alpha }D_{q}F|^{u}\) is η- quasiconvex on \([\alpha ,\beta ]\) for \(u\geq 1\), then, for all \(\mu \in [0, 1]\), the following inequality holds: Proof The η-quasiconvexity of \(|{}_{\alpha }D_{q}F|^{u}\) on \([\alpha ,\beta ]\) implies that, for all \(\tau \in [0,1]\), one has: Now, putting \(k=0\) and \(\lambda =1\) in Lemma 14, we get Hence, that completes the proof. □ Remark 17 Let \(\eta (x,y)=x-y\) and \(\mu =\frac{1}{1+q}\). Then \(\frac{1}{1+q}>1-q\) and (5) boils down to Theorem 18 Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a q- differentiable function on \((\alpha ,\beta )\) with \({}_{\alpha }D_{q}F\) continuous on \([\alpha ,\beta ]\) where \(0< q<1\). If \(|{}_{\alpha }D_{q}F|^{u}\) is η- quasiconvex on \([\alpha ,\beta ]\) for \(u> 1\) with \(\frac{1}{u}+\frac{1}{v}=1\), then for all \(\mu \in [0, 1]\) the following inequality holds: where \(\varOmega _{q} (1;\mu ;v )\) is defined in Lemma 15. Proof This completes the proof. □ Remark 19 By taking \(\mu =\frac{1}{2}\) and \(\eta (x,y)=x-y\) in (9), we deduce the following: Theorem 20 Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a q- differentiable function on \((\alpha ,\beta )\) with \({}_{\alpha }D_{q}F\) continuous on \([\alpha ,\beta ]\) where \(0< q<1\). If \(|{}_{\alpha }D_{q}F|^{u}\) is η- quasiconvex on \([\alpha ,\beta ]\) for \(u\geq 1\), then for all \(\mu \in [0, 1]\) the following inequality holds: Proof We get, by taking the absolute values of both sides of Lemma 13 and then using Hölder’s inequality and the η-quasiconvexity of \(|{}_{\alpha }D_{q}F|^{u}\) on \([\alpha ,\beta ]\), the following estimates: Now, using Definition 3, we get that for any \(p\geq 0\). 
So, for \(p=1\), Also, using (4) and the fact that \(q\tau <1\), we obtain that Remark 21 Substituting \(\mu =0\), \(\mu =1\), and \(\mu =\frac{1}{2}\) with \(\eta (x,y)=x-y\) in (10), we get, respectively, the following: and Theorem 22 Let \(F:[\alpha ,\beta ]\to \mathbb {R}\) be a q- differentiable function on \((\alpha ,\beta )\) with \({}_{\alpha }D_{q}F\) continuous on \([\alpha ,\beta ]\) where \(0< q<1\). If \(|{}_{\alpha }D_{q}F|^{u}\) is η- quasiconvex on \([\alpha ,\beta ]\) for \(u> 1\) with \(\frac{1}{u}+\frac{1}{v}=1\), then for all \(\mu \in [0, 1]\) the following inequality holds: where \(\varTheta _{q}(v;\mu )=\int _{\mu }^{1} \vert q\tau -1 \vert ^{v}\, {}_{0}d_{q}\tau \). Proof Applying, again, Lemma 13 and Hölder’s inequality, we obtain From relation (4), we deduce that Remark 23 If \(\mu =0\) and \(\mu =1\) with \(\eta (x,y)=x-y\) in Theorem 22, then we get, respectively, the following inequalities: and Application The following special means of real numbers will be used here. 1. Arithmetic mean:$$ \mathcal {A}(u,v)=\frac{u+v}{2}. $$ 2. Generalized logarithmic mean:$$ \mathcal{L}_{m}(u,v)= \biggl[\frac{v^{m+1}-u^{m+1}}{(m+1)(v-u)} \biggr] ^{\frac{1}{m}},\quad m\in \mathbb{N}, u\neq v. $$ Example 24 Let \(0<\alpha <\beta \) and \(0< q<1\). Then Proof Let \(F(x)=x^{2}\). Then, by the properties of the q-integral, we have Also, for \(x\neq \alpha \), For \(x=\alpha \), we have \({}_{\alpha }D_{q}F(\alpha )=\lim_{x\to \alpha } ({}_{\alpha }D _{q}F(x) )=2\alpha \). The function \(|{}_{\alpha }D_{q}F(x)|\) is convex and hence quasiconvex on \([\alpha ,\beta ]\). The desired inequality is obtained by using (7) with \(u=1\). □ If we let \(q\to 1^{-}\) in (18), we obtain Example 25 Let \(0<\alpha <\beta \) and \(0< q<1\). Then Proof If we let \(q\to 1^{-}\), then (19) boils down to Conclusion By introducing a parameter \(\mu \in [0, 1]\), we established some quantum inequalities by means of the η-quasiconvexity. 
Our results sharpen, generalize, and extend some known results as can be seen in Remarks 17, 19, 21, and 23. Some examples are also given to show how new estimates can be obtained from our main results. We anticipate that these novel estimates will stimulate further investigation in this regard. Some recent results concerning quasiconvexity and its generalization can be found in [9, 10, 15,16,17,18,19,20]. References 1. Alomari, M., Darus, M., Dragomir, S.S.: Inequalities of Hermite–Hadamard type for functions whose derivatives absolute values are quasi-convex. RGMIA Res. Rep. Collect. 12(suppl.), Article ID 14 (2009) 2. Alp, N., Sarikaya, M.Z., Kunt, M., Işcan, I.: q-Hermite–Hadamard inequalities and quantum estimates for midpoint type inequalities via convex and quasi-convex functions. J. King Saud Univ., Sci. 30, 193–203 (2018) 3. Awan, M.U., Noor, M.A., Noor, K.I., Safdar, F.: On strongly generalized convex functions. Filomat 31(18), 5783–5790 (2017) 4. Chen, F., Yang, W.: Some new Chebyshev type quantum integral inequalities on finite intervals. J. Comput. Anal. Appl. 21, 417–426 (2016) 5. Delavar, M.R., Dragomir, S.S.: On η-convexity. Math. Inequal. Appl. 20(1), 203–216 (2017) 6. Gordji, M.E., Delavar, M.R., Sen, M.D.L.: On φ-convex functions. J. Math. Inequal. 10(1), 173–183 (2016) 7. Gordji, M.E., Dragomir, S.S., Delavar, M.R.: An inequality related to η-convex functions (II). Int. J. Nonlinear Anal. Appl. 6(2), 26–32 (2015) 8. Kac, V., Cheung, P.: Quantum Calculus. Springer, New York (2002) 9. Kermausuor, S., Nwaeze, E.R.: Some new inequalities involving the Katugampola fractional integrals for strongly η-convex functions. Tbil. Math. J. 12(1), 117–130 (2019) 10. Kermausuor, S., Nwaeze, E.R., Tameru, A.M.: New integral inequalities via the Katugampola fractional integrals for functions whose second derivatives are strongly η-convex. Mathematics 7(2), Article ID 183 (2019) 11.
Khan, M.A., Khurshid, Y., Ali, T.: Hermite–Hadamard inequality for fractional integrals via η-convex functions. Acta Math. Univ. Comen. LXXXVI(1), 153–164 (2017) 12. Kunt, M., Işcan, I., Alp, N., Sarikaya, M.Z.: \((p, q)\)-Hermite–Hadamard inequalities and \(( p, q)\)-estimates for midpoint type inequalities via convex and quasi-convex functions. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 112, 969–992 (2018) 13. Latif, M.A., Kunt, M., Dragomir, S.S., Işcan, I.: Some \((p,q)\)-estimates for Hermite–Hadamard inequalities via convex and quasi-convex functions. https://doi.org/10.13140/RG.2.1.1280.280 14. Noor, M.A., Noor, K.I., Awan, M.U.: Some quantum estimates for Hermite–Hadamard inequalities. Appl. Math. Comput. 251, 675–679 (2015) 15. Nwaeze, E.R.: Inequalities of the Hermite–Hadamard type for quasi-convex functions via the \((k,s)\)-Riemann–Liouville fractional integrals. Fract. Differ. Calc. 8(2), 327–336 (2018) 16. Nwaeze, E.R.: Generalized fractional integral inequalities by means of quasiconvexity. Adv. Differ. Equ. 2019, 262 (2019) 17. Nwaeze, E.R.: Integral inequalities via generalized quasiconvexity with applications. J. Inequal. Appl. 2019, 236 (2019) 18. Nwaeze, E.R., Kermausuor, S., Tameru, A.M.: Some new k-Riemann–Liouville fractional integral inequalities associated with the strongly η-quasiconvex functions with modulus \(\mu \geq 0\). J. Inequal. Appl. 2018, 139 (2018) 19. Nwaeze, E.R., Torres, D.F.M.: Novel results on the Hermite–Hadamard kind inequality for η-convex functions by means of the \((k,r)\)-fractional integral operators. In: Dragomir, S.S., Agarwal, P., Jleli, M., Samet, B. (eds.) Advances in Mathematical Inequalities and Applications (AMIA). Trends in Mathematics, pp. 311–321. Birkhäuser, Singapore (2018) 20. Nwaeze, E.R., Torres, D.F.M.: New inequalities for η-quasiconvex functions. In: Anastassiou, G., Rassias, J. (eds.) Frontiers in Functional Equations and Analytic Inequalities (FEAI). Springer, New York. Accepted 21.
Tariboon, J., Ntouyas, S.K.: Quantum calculus on finite intervals and applications to impulsive difference equations. Adv. Differ. Equ. 2013, 282 (2013) 22. Zhang, Y., Du, T.-S., Wang, H., Shen, Y.-J.: Different types of quantum integral inequalities via \((\alpha ,m)\)-convexity. J. Inequal. Appl. 2018, 264 (2018) Acknowledgements Many thanks to the three anonymous referees for their valuable suggestions and remarks. Availability of data and materials Not applicable. Funding There is no funding to report for this research. Ethics declarations Competing interests The authors declare that there are no competing interests. Additional information Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
If $x,y$ are real numbers such that $x^{2014}+y^{2014}=1$, then prove that:
$$\left(\sum_{n=1}^{1007} \dfrac{x^{2n}+1}{x^{4n}+1} \right) \left(\sum_{n=1}^{1007} \dfrac{y^{2n}+1}{y^{4n}+1} \right)< \dfrac{1}{(1-x)(1-y)}$$
Note by Shivam Jadhav, 4 years ago

There is a typo in the question. Let me explain what it is. For instance, take $x=-1$ and $y=0$. Then we get
$$\sum_{n=1}^{1007} \dfrac{x^{2n}+1}{x^{4n}+1} = 1007, \qquad \sum_{n=1}^{1007} \dfrac{y^{2n}+1}{y^{4n}+1} = 1007, \qquad \dfrac{1}{(1-x)(1-y)} = \dfrac{1}{2}.$$
So the question as stated forces $1007 \times 1007 < 1/2$, which is impossible.
The question should actually be:

If $x$, $y$ are real numbers such that $x^{2014}+y^{2014}=1$, then prove that:
$$\left( \sum_{n=1}^{1007} \dfrac{x^{2n}+1}{x^{4n}+1} \right) \left( \sum_{n=1}^{1007} \dfrac{y^{2n}+1}{y^{4n}+1} \right) < \dfrac{1}{(1-|x|)(1-|y|)}$$

PROOF: Let $|x| = a$ and $|y| = b$, so the constraint becomes $a^{2014} + b^{2014} = 1$. Since
$$a^{4n} + 1 > \dfrac{(a^{2n}+1)^2}{2} = \dfrac{1}{2} \bigl(a^{2n}+1\bigr)\bigl(a^{2n}+1\bigr) > a^{n} \bigl(a^{2n}+1\bigr),$$
it follows that
$$\dfrac{1+a^{2n}}{1+a^{4n}} < \dfrac{1}{a^n}, \qquad \sum_{n=1}^{1007} \dfrac{1+a^{2n}}{1+a^{4n}} < \sum_{n=1}^{1007} \dfrac{1}{a^n} = \left( \dfrac{1}{a^{1007}} - 1 \right) \left( \dfrac{1}{1-a} \right).$$
Similarly,
$$\sum_{n=1}^{1007} \dfrac{1+b^{2n}}{1+b^{4n}} < \left( \dfrac{1}{b^{1007}} - 1 \right) \left( \dfrac{1}{1-b} \right).$$
Multiplying the two, we get
$$\sum_{n=1}^{1007} \dfrac{1+a^{2n}}{1+a^{4n}} \times \sum_{n=1}^{1007} \dfrac{1+b^{2n}}{1+b^{4n}} < \left( \dfrac{1}{a^{1007}} - 1 \right) \left( \dfrac{1}{b^{1007}} - 1 \right) \times \dfrac{1}{(1-a)(1-b)}.$$
So if we prove that $\left( \dfrac{1}{a^{1007}} - 1 \right) \left( \dfrac{1}{b^{1007}} - 1 \right) < 1$, then we are done.
Since $a^{2014} + b^{2014} = 1$ and $0 \leq a, b$, it follows that $a, b < 1$. So $a^{1007} > a^{2014}$ and $b^{1007} > b^{2014}$, which implies $a^{1007} + b^{1007} > a^{2014} + b^{2014} = 1$. Hence
$$1 - a^{1007} - b^{1007} < 0, \qquad \bigl(1 - a^{1007}\bigr)\bigl(1 - b^{1007}\bigr) < a^{1007} b^{1007}, \qquad \left( \dfrac{1}{a^{1007}} - 1 \right) \left( \dfrac{1}{b^{1007}} - 1 \right) < 1.$$
Therefore,
$$\left( \sum_{n=1}^{1007} \dfrac{x^{2n}+1}{x^{4n}+1} \right) \left( \sum_{n=1}^{1007} \dfrac{y^{2n}+1}{y^{4n}+1} \right) = \left( \sum_{n=1}^{1007} \dfrac{a^{2n}+1}{a^{4n}+1} \right) \left( \sum_{n=1}^{1007} \dfrac{b^{2n}+1}{b^{4n}+1} \right) < \dfrac{1}{(1-a)(1-b)} = \dfrac{1}{(1-|x|)(1-|y|)}.$$
Pheww!! Completed. $\blacksquare$

Thank you very much. Surya, explain the $\sum_{n=1}^{1007} \dfrac{1}{a^n}$ part.

Any hint?

A hint is provided.

Shivam, prove the statement below. If you do so, the problem is done. I want to look at its solution.

@Shivam Jadhav @Saarthak Marathe Please post a solution. My curiosity is running out and so is that of some other people. Thanks.

Agree....

I found another solution to this problem. This is my own solution. I did not know how to add PDFs to Brilliant, so I uploaded it to my Google Drive account. Please see the link below.
https://drive.google.com/file/d/0ByQHFlC74eP_dGxzOTBVaFZKTVE/view?usp=sharing

@Shivam Jadhav @Svatejas Shivakumar @Dev Sharma @Surya Prakash @Nihar Mahajan @Pi Han Goh

If the following is proved, then the problem is done. For $x \le 1$, prove that
$$\frac{1-x^{2n}}{1+x^{2n}} \le x, \quad n \in \mathbb{N}.$$

@Shivam Jadhav Please post a solution.

No, not yet. Please wait for some more time. I think I am getting it. @Shivam Jadhav

I think we have got too much time for this. Remember that we have only half an hour in RMO for a question.

@Nihar Mahajan – But once you get such a question by yourself, you will remember the method for your lifetime.

@Saarthak Marathe – Then please post your solution. Too curious :P

Sarthak wants to post the solution.

@Shivam Jadhav @Saarthak Marathe Please post a solution. Almost three weeks are over since this problem was posted and no one has got a solution so far. And BTW, in the problem it should be mentioned that $x$ and $y \neq 1$. It is understood.

@Shivam Jadhav Post the solution.

@Shivam Jadhav Please post the solution!!! I am losing my curiosity!!!!

I think we are unnecessarily wasting our time asking him because he probably doesn't have any solution. @Nihar Mahajan

Yes, even I think so. Can we ask Calvin sir to help us in this problem (provided this problem actually exists :D)? @Nihar Mahajan @Calvin Lin @Daniel Liu @Joel Tan @Harsh Shrivastava @Xuming Liang @Ammar Fathin Sabili @Satyajit Mohanty

@Shivam Jadhav Please refrain from mass tagging. Please try and limit it to five people. Thanks.
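As a numerical sanity check of the corrected inequality (my own illustration, not part of the original thread), the script below picks $a = b = (1/2)^{1/2014}$ so that $a^{2014}+b^{2014}=1$, and verifies both the intermediate geometric-sum bound $\sum_{n=1}^{1007} \frac{1+a^{2n}}{1+a^{4n}} < \frac{a^{-1007}-1}{1-a}$ and the final product inequality:

```python
# Numeric sanity check of the corrected inequality (illustrative only).
a = b = 0.5 ** (1 / 2014)   # then a**2014 + b**2014 == 1

lhs_a = sum((1 + a**(2 * n)) / (1 + a**(4 * n)) for n in range(1, 1008))
lhs_b = sum((1 + b**(2 * n)) / (1 + b**(4 * n)) for n in range(1, 1008))

# Intermediate bound used in the proof: sum_{n=1}^{1007} a^(-n) = (a^(-1007) - 1)/(1 - a)
geom_a = (a**-1007 - 1) / (1 - a)
assert lhs_a < geom_a

# Final inequality with |x|, |y| in the denominator
rhs = 1 / ((1 - a) * (1 - b))
assert lhs_a * lhs_b < rhs
print(lhs_a * lhs_b < rhs)  # True
```

Of course this checks only one point of the constraint surface; the proof above covers all of them.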
Let $p=4q-1$ be a prime with $q$ an odd prime. Let $G=\{0,1,\dots,p-1,\infty\}$. The following law $*$ makes $(G,*)$ a commutative group of order $4q$ with neutral element $\infty$: $$a*b=\begin{cases} a&\text{if }b=\infty\\ b&\text{if }a=\infty\\ \infty&\text{if }a\ne\infty\text{ and }b\ne\infty\text{ and }a+b\equiv 0\pmod p\\ {ab-1\over a+b}\bmod p&\text{otherwise}\end{cases}$$ The inverse of $a$ for law $*$ is $p-a$, with the exceptions of $0$ and $\infty$ which are their own inverse. Proof of associativity requires care, and uses $p\equiv3\bmod4$ at some point. Computation can be simplified by keeping an element of $G$ as an integer fraction $x\over y$ with $x$ and $y$ integers modulo $p$, and the neutral element $\infty$ represented as $x\over0$ with $x\not\equiv0\pmod p$. The group law becomes, without any special case: $${x_a\over y_a}*{x_b\over y_b}={(x_ax_b-y_ay_b)\bmod p\over(x_ay_b+y_ax_b)\bmod p}$$ and we need only 4 multiplications, 1 addition, 1 subtraction, and 2 modular reductions for $a*b$; down to 2 squarings, 1 multiplication, 1 doubling, 1 subtraction, and 2 modular reductions for $a*a$. Since we have a group law, we can define exponentiation. Exponents can be reduced modulo $4q$, the order of the group. Let $g$ be an element of order $q$. It can be found heuristically, perhaps starting from $g=2$ incrementally and checking $g^4\ne\infty$ and $g^q=\infty$ (Poncho's comment gives a faster way when we don't care that $g$ is large). Question: How hard is the Discrete Logarithm Problem in the cyclic subgroup of prime order $q$ generated by $g$? Is it somewhat related to a well known group? Update: The formulas $x_{a*b}=(x_ax_b-y_ay_b)\bmod p$ and $y_{a*b}=(x_ay_b+y_ax_b)\bmod p$ are the same as for complex multiplication in cartesian coordinates, except they are for integers modulo $p$. Now associativity is less surprising. 
By going through polar coordinates, we can express $g^k$ for law $*$ without iteration: that's $((y^{-1}\bmod p)x)\bmod p$ for $x=(g^2+1)^{k/2}\cos(k\cot^{-1}g)$ and $y=(g^2+1)^{k/2}\sin(k\cot^{-1}g)$. The quantities $x$ and $y$ are integers even though the intermediate values use reals. This is computationally impractical, because it needs such extreme precision. And it does not lead to a trivial way to solve the DLP.
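A direct way to convince oneself that the fraction formula works is to implement it. The sketch below is my own illustration with the toy prime $p=11$ (so $q=3$, $p=4q-1$): it lifts an element $a$ of $G$ to the fraction $a/1$ (with $\infty$ as $1/0$), applies the complex-multiplication-shaped law, and brute-force checks neutrality, inverses, and associativity over all triples.

```python
# Toy implementation of the group law on G = {0,...,p-1, oo} for p = 4q-1.
# An element a of G is lifted to the fraction a/1; the neutral element oo is 1/0.
p = 11  # p = 4*3 - 1, i.e. q = 3

OO = 'oo'  # the neutral element, written oo here

def lift(a):
    return (1, 0) if a == OO else (a % p, 1)

def drop(xy):
    x, y = xy
    return OO if y % p == 0 else (x * pow(y, -1, p)) % p  # pow(y, -1, p): Python 3.8+

def star(a, b):
    xa, ya = lift(a)
    xb, yb = lift(b)
    # complex-multiplication shape over Z/p, with no special cases:
    return drop(((xa * xb - ya * yb) % p, (xa * yb + ya * xb) % p))

G = [OO] + list(range(p))  # |G| = p + 1 = 4q

# oo is neutral; a + b = 0 mod p gives oo (so p - a is the inverse of a)
assert all(star(a, OO) == a for a in G)
assert star(2, 9) == OO and star(0, 0) == OO

# brute-force associativity over all |G|^3 triples
assert all(star(star(a, b), c) == star(a, star(b, c))
           for a in G for b in G for c in G)
print(star(2, 3))  # (2*3 - 1)/(2 + 3) = 5/5 = 1 mod 11
```

The inverse check works for every $a \ne 0, \infty$ because $y = a + (p-a) \equiv 0$ while $x \equiv -(a^2+1) \not\equiv 0$, since $a^2 \equiv -1$ has no solution when $p \equiv 3 \pmod 4$.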
Yes, but I don't believe there is a paper on arXiv containing all details (I would love to be corrected!), and if I were refereeing a paper claiming it to be so, I would give the author a hard time. Edit: Per Matthew Titsworth's comment below, Gregor Schaumann's thesis contains these details. Original post: The balanced tensor product is defined by universal property, and this makes everything work out. Let $\mathcal M_n$ denote the bicategory whose objects are $n$-simplices with the following decorations: On each edge (with endpoints $i<j$), put a $C$-$C$ bimodule $M_{ij}$. On each 2-dimensional face (with corners $i<j<k$) put a natural equivalence of bifunctors witnessing $M_{ik}$ as the balanced tensor $M_{ij} \otimes M_{jk}$. On each 3-dimensional face put a natural isomorphism of equivalences. On each 4-dimensional face put an equality of natural isomorphisms. I say "the bicategory" but I will not try to write out precisely the 1- or 2-morphisms. This is one of the things that if I were a referee I would demand. Then the bicategories $\mathcal M_n$ together form a simplicial bicategory $\mathcal M_\bullet$. (For degeneracies, put the standard identity bimodule and standard natural equivalences, etc.) Moreover, $\mathcal M_1 = {_C\mathrm{Mod}_C}$ is the bicategory of $C$-$C$-bimodules and $\mathcal M_0 = \{*\}$ is the terminal bicategory. The work of Douglas, Schommer-Pries, and Snyder implies that this simplicial bicategory satisfies the "Segal condition" that the bifunctor $\mathcal M_n \to (\mathcal M_1)^{\times n}$ remembering only the $n$ bimodules $M_{01},M_{12},M_{23},\dots, M_{(n-1)n}$ is an equivalence of bicategories. So we have constructed a Segal bicategory, and if anything deserves the name "monoidal bicategory", that does. One could then invoke strictification results to give ${_C\mathrm{Mod}_C}$ the structure of Gray monoidal bicategory if one so desired. 
Remark: An alternate approach to giving ${_C\mathrm{Mod}_C}$ a monoidal structure, and indeed to defining the 3-category whose objects are $k$-linear monoidal categories and whose morphisms are bimodules, should be straightforward using my joint work with Scheimbauer. One would need to compare carefully Ostrik's definitions (since you say "in the sense of Ostrik") with, probably, Haugseng's version of bimodules (or the version from Calaque and Scheimbauer, but I don't know of any paper that explains how to handle unpointed bimodules in their framework). I think everyone expects Ostrik's and Haugseng's notions to match, and the requisite ideas to match them go back to the halcyon days of bicategories. Haugseng's definition (as well as the version of Calaque and Scheimbauer) requires that $k$-linear categories (no monoidal anything) comprise a symmetric monoidal 2-category in the sense of $\Gamma$-categories. Building this is no easier than what I outlined above, but not really harder either. If you are willing to grant the existence of symmetric monoidal structure, then the main thing to check is that (a) certain bicategories admit colimits of shape $\Delta$ ("geometric realizations"), and (b) tensor products distribute over colimits of shape $\Delta$. (Probably (b) follows just from universal property considerations.) I have been told that David Jordan and his collaborators have checked these types of properties for the bicategory $\mathrm{Rex}_k$ of small $k$-linear categories and right-exact functors, but I don't know a reference where details are written down.
I was reading about fibre optic cables and it was mentioned that the individual "light pipes" are coated with a material whose refractive index is less than that of the glass. My question is: why a material with smaller $\mu$? According to $\sin \theta = 1/\mu$, if I decrease $\mu$ then $\theta$, which is the critical angle, will increase. Hence data loss will increase! (since these cables are used in communication) Then why are we doing so? Or is my logic incorrect?

According to sin θ=1/μ, if I decrease μ then θ, which is the critical angle, will increase.

Your formula is for propagation from an optically dense medium into air ($n=1$). For optical fiber you should consider the case where both media have non-unity refractive index. The simplest case is step index fiber with a core large enough to allow solution by ray optics. This kind of fiber works on the principle of total internal reflection. Total internal reflection occurs when the incident angle at the interface is greater than the critical angle. The critical angle occurs when $$\sin\theta = \frac{n_2}{n_1}$$ If $n_1 < n_2$, you'd need $\sin\theta > 1$ to solve this equation. Since the sine function is always in the range [-1, 1], there is no such solution. And indeed, total internal reflection doesn't occur when $n_1 < n_2$, and optical fibers are designed with the core index ($n_1$) slightly higher than the cladding index ($n_2$). Even if you talk about fiber small enough that you must consider the waveguide properties of the structure, rather than simply ray optics, you'll find that there is no guided wave solution unless $n_2<n_1$. It sounds like your $\mu$ is $n_1 / n_2$ where light is being transmitted from a medium with index of refraction $n_1$ to an index of refraction $n_2$. Decreasing $n_2$ therefore increases $\mu$, which appears to be your fundamental misunderstanding. Yes, you want $\mu$ as big as possible, and the way you do that (if you cannot increase $n_1$) is by decreasing $n_2$.
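As a concrete illustration of $\sin\theta_c = n_2/n_1$, here is a tiny computation with made-up but typical index values for a silica step-index fiber (core $n_1 = 1.48$, cladding $n_2 = 1.46$ — illustrative numbers, not from the question):

```python
import math

n1 = 1.48  # core index (illustrative value)
n2 = 1.46  # cladding index (illustrative value) -- note n2 < n1

theta_c = math.asin(n2 / n1)            # critical angle at the core/cladding interface
print(round(math.degrees(theta_c), 1))  # ~80.6 degrees

# If n2 >= n1, then n2/n1 >= 1 and asin has no real solution:
# total internal reflection cannot occur.
```

Rays hitting the interface at more than about 80.6° from the normal are totally internally reflected, which is why the small index step still guides light effectively.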
Answer: Undefined

Work Step by Step: Here, $ x=0$; $ y=1$ and $ r=\sqrt {(0)^2+(1)^2}=1$. The trigonometric ratios are as follows: $\tan \theta =\dfrac{y}{x}$. Then, we have $\tan (\pi/2) =\dfrac{1}{0}=$ Undefined.
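A small numerical aside (my own, not part of the textbook solution): if you try to check this "undefined" value in code, note that π/2 is not exactly representable in floating point, so `math.tan` returns a huge finite number rather than raising an error:

```python
import math

# tan(pi/2) is undefined, but the float pi/2 is slightly off the true value,
# so tan() returns a huge finite number instead of failing.
t = math.tan(math.pi / 2)
print(t > 1e15)  # True: roughly 1.6e16, neither infinity nor an error

# The exact reasoning of the exercise: x = 0, y = 1, r = 1,
# and tan(theta) = y/x = 1/0, i.e. division by zero -> undefined.
```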
In interferometry (specifically, in the domain of Fabry-Perot cavities), the function $$f(\phi) = \frac{1}{1 + F \sin^2 \phi},$$ which describes the shape of the resonant structure of the cavity, is often called the "Airy function" (for instance, in Wolfram Mathworld). However, it is obviously quite different from the special functions Ai(x) that usually go by that name. This function resembles probability density function of the wrapped Cauchy distribution. How did it get the name "Airy function"? I've heard that Fabry and Perot gave it this name in one of their original papers (maybe this one? PDF, in French, which I can't read), in honor of (the same) George Biddell Airy who had earlier considered similar interferometers. It would be great if someone could help ferret out the first reference to that function by this name.
Research Open Access Published: The persistence and extinction of a stochastic SIS epidemic model with Logistic growth Advances in Difference Equations volume 2018, Article number: 68 (2018) Abstract The dynamical properties of a stochastic susceptible-infected epidemic model with Logistic growth are investigated in this paper. We show that the stochastic model admits a nonnegative solution by using the Lyapunov function method. We then show that the infected individuals are persistent under some simple conditions. As a consequence, a simple sufficient condition that guarantees the extinction of the infected individuals is presented, together with a couple of illustrative examples. Introduction Some mathematical models (see, for instance, [1–5]) have been employed to describe and understand epidemic transmission dynamics since the work of Kermack and McKendrick [6] was proposed. The classical compartment models were proposed and investigated under some restrictive assumptions, including a constant total population size and a constant recruitment rate for the susceptible individuals. This assumption is relatively reasonable for a short-lasting disease. In reality, however, the population sizes of human beings and other creatures are generally variable rather than constant over the long run. As an example of this phenomenon, Ngonghala et al. [7] pointed out that malaria in developing countries occurs alongside growth of the local population size. Concerning variable population size, some recent works, such as Ngonghala et al. [7], Busenberg and Van den Driessche [8], Wang et al. [9], Zhao et al. [10], Zhu and Hu [11], and Li et al. [12], have considered the effect of population size on epidemic dynamics. We would like to mention the work by Wang et al.
[9], in which they constructed an SIS epidemic model under the assumption that the susceptible individuals follow the Logistic growth: where \(S(t)\) and \(I(t)\) denote the numbers of the susceptible and the infected individuals at time t, respectively; r is the intrinsic growth rate of the susceptible individuals; a is the carrying capacity of the community in the absence of infection; d is the natural death rate; γ represents the recovery rate of the infected individuals; ε is the disease-induced death rate; \(\beta(I)\) is the transmission rate and is given in the following form: All the parameters are assumed to be nonnegative. When \(p=0\), \(\beta(I)\) equals the constant transmission rate β. In this paper, we shall consider the following deterministic SIS endemic model: where \(b=r/a\). Letting \(N(t)\) denote the total population at time t, Wang et al. [9] showed that if \(R_{0}\leq 1\), the disease-free equilibrium \(E_{0}(a,0)\) is globally asymptotically stable. If \(R_{0}>1\), then \(E_{0}(a,0)\) is unstable, and there is a unique endemic equilibrium \(E^{*}(S^{*},I^{*})\) which is globally asymptotically stable. Here The compartment models are inevitably affected by environmental noise. We assume that the transmission coefficient β is subject to environmental white noise, that is, where \(B(t)\) is a standard Brownian motion and σ is the intensity of the white noise. In order to explore the stochastic effect when the constant transmission rate is replaced by a random variable, we consider the corresponding stochastic SIS epidemic model: Throughout this paper, we will work on the complete probability space \((\Omega, \{\mathcal{F}_{t}\}_{t\geq 0},P)\) with its filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\) satisfying the usual conditions (i.e., it is right continuous and \(\mathcal{F}_{0}\) contains all P-null sets).
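A stochastically perturbed SIS model of this kind is typically simulated with the Euler–Maruyama scheme. The sketch below is only an illustration: the drift and diffusion terms (Logistic growth of the susceptibles, mass-action incidence βSI, recovery γI, removal (d+γ+ε)I, and noise σSI dB on the transmission term) are an assumed standard form, and the parameter values are made up, not taken from the paper's examples.

```python
import random

# Illustrative parameters (assumed form, not the paper's examples)
r, a, d, gamma, eps = 0.8, 10.0, 0.1, 0.3, 0.1
beta, sigma = 0.06, 0.02
dt, steps = 0.01, 10_000

S, I = 5.0, 2.0
random.seed(1)
for _ in range(steps):
    dB = random.gauss(0.0, dt ** 0.5)   # Brownian increment over one step
    inc = beta * S * I                  # assumed mass-action incidence
    dS = (r * S * (1 - S / a) - inc + gamma * I) * dt - sigma * S * I * dB
    dI = (inc - (d + gamma + eps) * I) * dt + sigma * S * I * dB
    S = max(S + dS, 0.0)                # crude positivity guard for the sketch
    I = max(I + dI, 0.0)

print(round(S, 2), round(I, 2))
```

With these illustrative values, βa/(d+γ+ε) = 1.2 > 1, so the trajectory settles near the deterministic endemic equilibrium and fluctuates around it; shrinking β or enlarging σ pushes the infected class toward extinction, matching the qualitative picture of the theorems below.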
We will investigate the dynamical properties of the stochastic SIS model from several aspects: the result that stochastic model (9) admits a unique positive solution is established in the next section. Sufficient conditions for the persistence of the infected individuals are then derived. Further, we find a simple condition that leads to the extinction of the infected individuals. Finally, several illustrative examples are carried out to support the main results of this paper. Existence and uniqueness of positive solution Theorem 1 There exists a unique solution \((S(t),I(t))\) of system (9) on \(t\geq 0\) for any initial value \((S(0),I(0))\in \mathbb{R}^{2}_{+}\), and the solution will remain in \(\mathbb{R}^{2}_{+}\) with probability 1, namely \((S(t),I(t))\in \mathbb{R}^{2}_{+} \) for all \(t\geq 0\) almost surely. Proof Since the coefficients of model (9) satisfy local Lipschitz conditions for any initial value \((S(0),I(0))\in \mathbb{R}^{2}_{+}\), there exists a unique local solution on \(t\in[0,\tau_{e})\), where \(\tau_{e}\) is the explosion time. Next, we will show that the solution of model (9) is global. To this end, we need to show that \(\tau_{e}=\infty\) holds almost surely. Let \(k_{0}>0\) be sufficiently large such that \(S(0)\) and \(I(0)\) both lie within the interval \([\frac{1}{k_{0}}, k_{0}]\). For all \(k\geq k_{0}\), we define the stopping time Throughout this paper, we set \(\inf\emptyset=\infty\). Clearly, \(\tau_{k}\) is increasing as \(k\rightarrow\infty\). We set \(\tau_{\infty}=\lim_{k\to \infty} \tau_{k}\); by the definition of the stopping time, we get \(\tau_{\infty}\leq\tau_{e}\) a.s. If we can show that \(\tau_{\infty}=\infty \) a.s., then \(\tau_{e}=\infty\) a.s. From now on, the proof goes by contradiction.
If this statement is false, then there exists a pair of constants \(T>0\) and \(\varepsilon\in(0,1)\) such that \(P\{\tau_{\infty}\leq T\}>\varepsilon\); hence there exists an integer \(k_{1}>k_{0}\) such that We define a \(C^{2}\)-function \(V:\mathbb{R}^{2}_{+}\rightarrow\mathbb{R}_{+}\) as follows: where k is a constant to be determined later. The generalized Itô formula gives that Choose the constant which implies that The remainder of the proof follows that in Zhao et al. [10]. □ Persistence in the mean For convenience, we define the following notation: Lemma 1 ([13], Strong law of large numbers) Let \(M=\{M_{t}\}_{t\geq0}\) be a real-valued continuous local martingale vanishing at \(t=0\). Then and also Theorem 2 Let \((S(t), I(t))\) be a solution of system (9) with any initial value \((S(0),I(0))\in \Pi\). If then the density of the infected individuals obeys the following expression: Proof Integrating both sides of the second equation of model (9) gives that Then the generalized Itô formula applied to model (9) leads to then By the strong law of large numbers for martingales, together with the fact that \(0< S(t)\), \(I(t)< K_{0}\), we obtain therefore, The proof is complete. □ Example 1 Let the parameters of model (9) be and the initial value be \((S(0),I(0))=(5,2)\); then the threshold of model (9) is computed as Extinction In the previous section, we investigated the persistence of the solution to model (9). In this section, we shall prove that the density of the infected individuals will be driven to extinction with a negative exponential power under some simple assumptions. Theorem 3 Let \((S(t), I(t))\) be the solution of model (9) with the initial value \((S(0),I(0))\in \Pi\). If or holds, then the density of the infected individuals will decline to zero exponentially with probability one.
That is to say, or

Proof From the second equation of model (9), we have The fact \(S\leq K_{0}\) leads to the following result: By the strong law of large numbers for martingales, we then have On the other hand, expression (39) can be computed as follows: The proof is complete. □

Example 2 We set the parameters of model (9) to be and the initial value to be \((S(0),I(0))=(35,50)\). It is easy to check that and we take the initial value \((S(0),I(0))=(1,8)\) in order to meet condition (36); after substitution, we then get that which means that the infected individuals tend to zero at the rate of a negative exponential power, as shown in Fig. 3.

Conclusion

This paper has focused on the dynamical properties of a stochastic SIS model with logistic growth. Following the approach of much of the recent literature, we construct a \(C^{2}\)-function to show that the stochastic SIS epidemic model admits a unique positive global solution. Under the general assumptions of this paper, the total population is separated into two compartments: the susceptible and the infected. We also assume that the transmission rate β is perturbed by white noise. The two indicators \(\tilde{R}_{0}\) and \(\breve{R}_{0}\) serve as thresholds: when \(\tilde{R}_{0}> 1\), under some extra conditions, the density of the infected individuals remains persistent; when \(\breve{R}_{0}<1\) holds or (36) is valid, the density of the infected individuals declines to zero in the long run. Several illustrative examples support the main results of this paper.

References

1. Tchuenche, J., Nwagwo, A., Levins, R.: Global behaviour of an SIR epidemic model with time delay. Math. Methods Appl. Sci. 30(6), 733–749 (2007)
2. Chen, H., Sun, J.: Global stability of delay multigroup epidemic models with group mixing and nonlinear incidence rates. Appl. Math. Comput. 218(8), 4391–4400 (2011)
3.
Teng, Z., Wang, L.: Persistence and extinction for a class of stochastic SIS epidemic models with nonlinear incidence rate. Physica A 451, 507–518 (2016) 4. Liu, Q., Chen, Q.: Dynamics of a stochastic SIR epidemic model with saturated incidence. Appl. Math. Comput. 282, 155–166 (2016) 5. Zhao, D.: Study on the threshold of a stochastic SIR epidemic model and its extensions. Commun. Nonlinear Sci. Numer. Simul. 38, 172–177 (2016) 6. Kermack, W.O., McKendrick, A.G.: A contribution to the mathematical theory of epidemics. Proc. R. Soc. Lond. Ser. A, Math. Phys. Sci. 115(772), 700–721 (1927) 7. Ngonghala, C., Teboh-Ewungkem, M., Ngwa, G.: Persistent oscillations and backward bifurcation in a malaria model with varying human and mosquito populations: implications for control. J. Math. Biol. 70(7), 1581–1622 (2015) 8. Busenberg, S., Van den Driessche, P.: Analysis of a disease transmission model in a population with varying size. J. Math. Biol. 28(3), 257–270 (1990) 9. Wang, L., Zhou, D., Liu, Z., Xu, D., Zhang, X.: Media alert in an SIS epidemic model with logistic growth. J. Biol. Dyn. 11(1), 120–137 (2017) 10. Zhao, Y., Jiang, D., Mao, X., Gray, A.: The threshold of a stochastic SIRS epidemic model in a population with varying size. Discrete Contin. Dyn. Syst., Ser. B 20(4), 1277–1295 (2015) 11. Zhu, L., Hu, H.: A stochastic SIR epidemic model with density dependent birth rate. Adv. Differ. Equ. 2015, 1 (2015) 12. Li, X., Gray, A., Jiang, D., Mao, X.: Sufficient and necessary conditions of stochastic permanence and extinction for stochastic logistic populations under regime switching. J. Math. Anal. Appl. 376(1), 11–28 (2011) 13. Mao, X., Marion, G., Renshaw, E.: Environmental Brownian noise suppresses explosions in population dynamics. Stoch. Process. Appl. 97(1), 95–110 (2002) Acknowledgements The authors would like to thank Zhanshuai Miao for his discussion and the anonymous referees for their good comments. 
This work is supported by the National Natural Science Foundation of China (Grant Nos. 11201075, 11601085) and the Natural Science Foundation of Fujian Province of China (Grant Nos. 2016J01015, 2017J01400).

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
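Model (9) itself is not reproduced in this excerpt, so as a purely illustrative companion to the examples above, here is an Euler–Maruyama sketch of a *generic* stochastic SIS model with logistic growth and multiplicative white noise on the transmission term. The drift and diffusion terms, parameter names and values below are all assumptions for illustration, not the paper's actual system (9).

```python
import math
import random

def simulate_sis(S0, I0, r=0.5, K=100.0, beta=0.01, gamma=0.2, mu=0.1,
                 sigma=0.01, T=50.0, dt=0.001, seed=0):
    """Euler-Maruyama for an *assumed* stochastic SIS with logistic growth:
         dS = [r*S*(1 - (S+I)/K) - beta*S*I + gamma*I] dt - sigma*S*I dB,
         dI = [beta*S*I - (mu + gamma)*I] dt + sigma*S*I dB.
    This is a generic model of the type described in the paper; model (9)
    may differ in its exact drift and diffusion terms."""
    rng = random.Random(seed)
    S, I = float(S0), float(I0)
    for _ in range(int(T / dt)):
        dB = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment over dt
        dS = (r * S * (1 - (S + I) / K) - beta * S * I + gamma * I) * dt \
             - sigma * S * I * dB
        dI = (beta * S * I - (mu + gamma) * I) * dt + sigma * S * I * dB
        # Crude clipping at 0 keeps the discretised path in R^2_+;
        # the theorem guarantees positivity only for the exact solution.
        S, I = max(S + dS, 0.0), max(I + dI, 0.0)
    return S, I
```

For instance, `simulate_sis(5.0, 2.0)` mimics the initial value of Example 1; whether the infection persists or dies out in such a simulation depends on the assumed parameters, not on the paper's thresholds.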
Discrete & Continuous Dynamical Systems - A, July 1999, Volume 5, Issue 3 (ISSN: 1078-0947, eISSN: 1553-5231)

Abstract: Consider a propagator defined on a Banach space whose norm satisfies an appropriate exponential bound. To this operator is added a bounded operator which is relatively smoothing in the sense of Vidav. The location of the essential spectrum of the perturbed propagator is then estimated. An application to kinetic theory is given for a system of particles that interact both through collisions and through their charges.

Abstract: In this paper we obtain a lower bound, independent of $\delta$, on the life-span of classical solutions to the following Cauchy problem by using the global iteration method $\delta u_{t t}-\Delta u +u_t = F(u, \nabla u),$ $t = 0 : u = \epsilon u_0(x), u_t = \epsilon u_1(x),$ where $\delta$ and $\epsilon$ are small positive parameters. Moreover, we consider the related singularly perturbed problem as $\delta\to 0$ and show that the perturbation term $\delta u_{t t}$ has an appreciable effect only for short times.

Abstract: In this paper, we use the Lyapunov-Schmidt method and Morse theory to study semilinear elliptic boundary value problems with resonance at infinity, and obtain new multiple solutions theorems.

Abstract: In this paper we develop a theory of k-adic expansion of an integral arithmetic function. Applying this formal language to Lefschetz numbers, or fixed point indices, of iterations of a given map, we reformulate or reprove earlier results of Babienko-Bogatyj, Bowszyc, Chow-Mallet-Paret and Franks. We also give a new characterization of the sequence of Lefschetz numbers of iterations of a map $f$. For a smooth transversal map we obtain a more refined version of Matsuoka's theorem on the parity of the number of orbits of a transversal map. Finally, for any $C^1$-map we show the existence of infinitely many prime periods provided the sequence of Lefschetz numbers of iterations is unbounded.
Abstract: In this paper we improve a general theorem of O.A. Ladyzhenskaya on the dimension of compact invariant sets in Hilbert spaces. We then use this result to prove that the Hausdorff and fractal dimensions of global compact attractors of differential inclusions and reaction-diffusion equations are finite.

Abstract: We study the scaling function of a $C^{1+h}$ expanding circle endomorphism. We find necessary and sufficient conditions for a Hölder continuous function on the dual symbolic space to be realized as the scaling function of a $C^{1+h}$ expanding circle endomorphism. We further represent the Teichmüller space of $C^{1+h}$ expanding circle endomorphisms by the space of Hölder continuous functions on the dual symbolic space satisfying our necessary and sufficient conditions, and study the completion of this Teichmüller space in the universal Teichmüller space.

Abstract: In this paper we prove the existence of a solution for a class of noncoercive Cauchy problems whose prototype is the boundary value problem $\frac{\partial u}{\partial t}-$ div$(|Du|^{p-2}Du) + B(x, t)\cdot |Du|^{\gamma-1}Du = f$ in $\Omega_T$, $u(x, t)=0$ on $\partial\Omega\times (0, T),$ $u(x, 0) = u_0(x)$ in $\Omega,$ under suitable hypotheses on the data.

Abstract: We consider second order systems of the form $\ddot q = \nabla_qW(q, t), t\in \mathbb R, q \in \mathbb R^N,$ where $W(q, t)$ is $\mathbb Z^N$ periodic in $q$ and almost periodic in $t$. Variational arguments are used to prove the existence of heteroclinic solutions joining almost periodic solutions to the system.

Abstract: We consider a thermo-elastic plate equation with rotational forces [Lagnese.1] and with coupled hinged mechanical/Neumann thermal boundary conditions (B.C.). We give a sharp result on the Neumann trace of the mechanical velocity, which is "$\frac{1}{2}$" sharper in the space variable than the result one would obtain by a formal application of trace theory on the optimal interior regularity.
Two proofs by energy methods are given: one which reduces the analysis to sharp wave equation regularity theory, and one which analyzes directly the corresponding Kirchhoff elastic equation. Important implications of this result are noted.

Abstract: Recent work has shown that in the setting of continuous maps on a locally compact metric space the spectrum of the Conley index can be used to conclude that the dynamics of an invariant set is at least as complicated as that of full shift dynamics on two symbols, that is, a horseshoe. In this paper, one considers which spectra are possible and then produces examples which clearly delineate which spectral conditions do or do not allow one to conclude the existence of a horseshoe.

Abstract: An orbital Conley index for non-invariant compact sets of discrete-time dynamical systems is introduced. The construction of this new index uses an algebraic reduction process inspired by Leray. Applications to the detection of periodic orbits and chaos are presented.

Abstract: In this paper, we prove that the zero diffusion limit of the 2-D incompressible Navier-Stokes equations with $L^1(\mathbb R^2)$ initial vorticity is still a weak solution of the corresponding Euler equations.

Abstract: Let $M$ be a closed connected $C^\infty$ Riemannian manifold whose geodesic flow $\phi$ is Anosov. Let $\theta$ be a smooth 1-form on $M$. Given $\lambda\in \mathbb R$ small, let $h_{E L}(\lambda)$ be the topological entropy of the Euler-Lagrange flow of the Lagrangian $L_\lambda (x, v) =\frac{1}{2}|v|^2_x-\lambda\theta_x(v),$ and let $h_F(\lambda)$ be the topological entropy of the geodesic flow of the Finsler metric $F_\lambda(x, v) = |v|_x-\lambda\theta_x(v).$ We show that $h_{E L}''(0) + h''_F(0) = h^2$Var$(\theta)$, where Var$(\theta)$ is the variance of $\theta$ with respect to the measure of maximal entropy of $\phi$ and $h$ is the topological entropy of $\phi$. We derive various consequences from this formula.
Abstract: Of concern is the differentiability of the propagators of the higher order Cauchy problem $u^{(n)}(t) +\sum_{k=0}^{n-1}A_ku^{(k)}(t)=0, t\geq 0,$ $u^{(k)}(0) = u_k, 0\leq k\leq n-1,$ where $A_0, A_1,\ldots, A_{n-1}$ are densely defined closed linear operators on a Banach space. A Pazy-type characterization of the infinitely differentiable propagators of the Cauchy problem is obtained. Moreover, two related sufficient conditions are given.

Abstract: We prove existence and uniqueness of positive solutions of an age-structured population equation of McKendrick type with spatial diffusion in $L^1$. The coefficients may depend on age and position. Moreover, the mortality rate is allowed to be unbounded and the fertility rate is time dependent. In the time periodic case, we estimate the essential spectral radius of the monodromy operator, which gives information on the asymptotic behaviour of solutions. Our work extends previous results in [19], [24], [30], and [31] to the non-autonomous situation. We use the theory of evolution semigroups and extrapolation spaces.

Abstract: In this paper we study a smoothing property of solutions to the Cauchy problem for the nonlinear Schrödinger equations of derivative type: $iu_t + u_{x x} = \mathcal N(u, \bar u, u_x, \bar u_x), \quad t \in \mathbf R,\ x\in \mathbf R;\quad u(0, x) = u_0(x),\ x\in \mathbf R,\qquad$ (A) where $\mathcal N(u, \bar u, u_x, \bar u_x) = K_1|u|^2u+K_2|u|^2u_x +K_3u^2\bar u_x +K_4|u_x|^2u+K_5\bar u u_x^2 +K_6|u_x|^2u_x$, and the functions $K_j = K_j (|u|^2)$, $K_j(z)\in C^\infty ([0, \infty))$. If the nonlinear term is $\mathcal N =\frac{\bar{u} u_x^2}{1+|u|^2}$, then equation (A) appears in the classical pseudospin magnet model [16]. Our purpose in this paper is to consider the case when the nonlinearity $\mathcal N$ depends both on $u_x$ and $\bar u_x$.
We prove that if the initial data $u_0\in H^{3, \infty}$ and the norms $||u_0||_{3, l}$ are sufficiently small for any $l\in \mathbb N$ (when $\mathcal N$ depends on $\bar u_x$), then for some time $T > 0$ there exists a unique solution $u\in C^\infty (([-T, T]\setminus \{0\});\ C^\infty(\mathbb R))$ of the Cauchy problem (A). Here $H^{m, s} = \{\varphi \in \mathbf L^2;\ ||\varphi||_{m, s}<\infty \}$, $||\varphi||_{m, s}=||(1+x^2)^{s/2}(1-\partial_x^2)^{m/2}\varphi||_{\mathbf L^2}$, $\mathbf H^{m, \infty}=\cap_{s\geq 1} H^{m, s}.$
In the book Introduction to Wave Scattering, Localization and Mesoscopic Phenomena (Springer Series in Materials Science) one reads the following

The specific initial condition for the Green function is that at $t=0$, the system is excited by a pulse localized at $\mathbf{r}=\mathbf{r}'$; i.e., the right-hand sides of $(2.4)$ and $(2.7)$ are replaced by $\delta(t)\delta(\mathbf{r}-\mathbf{r}')$, where $\mathbf{r}'$ denotes the source position. The reason for such a source is that it can excite the natural resonances (or eigenfunctions) of all temporal and spatial frequencies in the system (since the source contains all temporal and spatial frequencies), so that the subsequent development may contain information about them. A useful approach for analyzing the Green function is through its frequency components. The left-hand side of the wave equation for the frequency component $\omega$ is given by $(2.8)$, and the same frequency component on the right-hand side is simply $\delta(\mathbf{r}-\mathbf{r}')$ because $\delta(t)$ gives $1$ as the amplitude for every frequency component. Equating both sides yields $$(\nabla^2+\kappa^2)G(\omega,\mathbf{r},\mathbf{r}')=\delta(\mathbf{r}-\mathbf{r}')\tag{2.16}$$ where

For waves of a given energy or frequency, the quantum and classical wave equations can be put in the same form $$\nabla^2\phi+\kappa^2\phi=0,\tag{2.8}$$ where $$\kappa^2=\dfrac{2m(E-V)}{\hbar^2}\tag{2.9}$$ for the quantum case, and $$\kappa^2=\frac{\omega^2}{v^2}\tag{2.10}$$ for the classical case.

I don't see how to deduce eq. $(2.16)$. What does it mean to say that the left-hand side of the wave equation is taken "for the frequency component $\omega$"? Why is it given by $(2.8)$?
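A sketch of what is presumably the intended argument (my reconstruction, filling in the steps the book skips): write the time-dependent Green function and the source as superpositions of frequency components,

$$G(t,\mathbf{r},\mathbf{r}')=\int\frac{d\omega}{2\pi}\,e^{-i\omega t}\,G(\omega,\mathbf{r},\mathbf{r}'),\qquad \delta(t)=\int\frac{d\omega}{2\pi}\,e^{-i\omega t}\cdot 1,$$

the second identity being the sense in which "$\delta(t)$ gives $1$ as the amplitude for every frequency component". Substituting into the time-domain equation, each time derivative acting on $e^{-i\omega t}$ produces a factor $-i\omega$, so $\partial_t^2\to-\omega^2$, and the operator acting on the component $G(\omega,\mathbf{r},\mathbf{r}')$ becomes $\nabla^2+\omega^2/v^2=\nabla^2+\kappa^2$, which is the left-hand side of $(2.8)$. Since the $e^{-i\omega t}$ are linearly independent, the coefficients of each frequency component on the two sides must agree, which is exactly $(2.16)$.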
Now we are ready to formally define probability.

Definition \(\PageIndex{1}\)

A probability measure on the sample space \(S\) is a function, denoted \(P\), from subsets of \(S\) to the real numbers \(\mathbb{R}\), such that the following hold:

1. \(P(S) = 1\)
2. If \(A\) is any event in \(S\), then \(P(A) \geq 0\).
3. If events \(A_1\) and \(A_2\) are disjoint, then \(P(A_1\cup A_2) = P(A_1) + P(A_2)\). More generally, if \(A_1, A_2, \ldots, A_n, \ldots\) is a sequence of pairwise disjoint events, i.e., \(A_i\cap A_j = \varnothing\) for every \(i \neq j\), then $$P(A_1\cup A_2\cup \cdots \cup A_n \cup\cdots) = P(A_1) + P(A_2) + \cdots + P(A_n) + \cdots.$$

So essentially, we are defining probability to be an operation on the events of a sample space, which assigns numbers to events in such a way that the three properties stated in Definition 1.2.1 are satisfied. Definition 1.2.1 is often referred to as the axiomatic definition of probability, where the three properties give the three axioms of probability. These three axioms are all we need to assume about the operation of probability in order for many other desirable properties of probability to hold, which we now state.

Properties of Probability Measures

Let \(S\) be a sample space with probability measure \(P\). Also, let \(A\) and \(B\) be any events in \(S\). Then the following hold.

1. \(P(A^c) = 1 - P(A)\)
2. \(P(\varnothing) = 0\)
3. If \(A \subseteq B\), then \(P(A) \leq P(B)\).
4. \(P(A)\leq 1\)
5. Addition Law: \(P(A \cup B) = P(A) + P(B) - P(A \cap B)\)

Exercise \(\PageIndex{1}\)

Can you prove the five properties of probability measures stated above using only the three axioms of probability measures stated in Definition 1.2.1?

Answer

(1) For the first property, note that by definition of the complement of an event \(A\) we have $$A\cup A^c = S \quad\text{and}\quad A\cap A^c = \varnothing.$$ In other words, given any event \(A\), we can represent the sample space \(S\) as a disjoint union of \(A\) with its complement.
Thus, by the first and third axioms, we derive the first property: $$1 = P(S) = P(A\cup A^c) = P(A) + P(A^c)$$ $$\Rightarrow P(A^c) = 1 - P(A)$$ (2) For the second property, note that we can write \(S = S\cup\varnothing\), and that this is a disjoint union, since anything intersected with the empty set will necessarily be empty. So, using the first and third axioms, we derive the second property: $$1 = P(S) = P(S\cup\varnothing) = P(S) + P(\varnothing) = 1 + P(\varnothing)$$ $$\Rightarrow P(\varnothing) = 0$$ (3) For the third property, note that we can write \(B = A\cup(B\cap A^c)\), and that this is a disjoint union, since \(A\) and \(A^c\) are disjoint. By the third axiom, we have $$P(B) = P(A\cup(B\cap A^c)) = P(A) + P(B\cap A^c). \label{disjoint}$$ By the second axiom, we know that \(P(B\cap A^c) \geq 0\). Thus, if we remove it from the right-hand side of equation \ref{disjoint}, we are left with something smaller, which proves the third property: $$P(B) = P(A) + P(B\cap A^c) \geq P(A) \quad\Rightarrow\quad P(B) \geq P(A)$$ (4) For the fourth property, we will use the third property that we just proved. By definition, any event \(A\) is a subset of the sample space \(S\), i.e., \(A\subseteq S\). Thus, by the third property and the first axiom, we derive the fourth property: $$P(A) \leq P(S) = 1 \quad\Rightarrow\quad P(A) \leq 1$$ (5) For the fifth property, note that we can write the union of events \(A\) and \(B\) as the union of the following two disjoint events: $$A\cup B = A\cup (A^c\cap B),$$ in other words, the union of \(A\) and \(B\) is given by the union of all the outcomes in \(A\) with all the outcomes in \(B\) that are not in \(A\). Furthermore, note that event \(B\) can be written as the union of the following two disjoint events: $$B = (A\cap B) \cup (A^c\cap B),$$ in other words, \(B\) is written as the disjoint union of all the outcomes in \(B\) that are also in \(A\) with the outcomes in \(B\) that are not in \(A\).
We can use this expression for \(B\) to find an expression for \(P(A^c\cap B)\) to substitute into the expression for \(A\cup B\) in order to derive the fifth property: \begin{align} P(B) = P(A\cap B) + P(A^c\cap B) & \Rightarrow P(A^c\cap B) = P(B) - P(A\cap B) \\ P(A\cup B) = P(A) + P(A^c\cap B) & \Rightarrow P(A\cup B) = P(A) + P(B) - P(A\cap B) \end{align} Note that the axiomatic definition (Definition 1.2.1) does not tell us how to compute probabilities. It simply defines a formal, mathematical behavior of probability. In other words, the axiomatic definition describes how probability should theoretically behave when applied to events. To compute probabilities, we use the properties stated above, as the next example demonstrates.

Example \(\PageIndex{1}\)

Continuing in the context of Example 1.1.5, let's define a probability measure on \(S\). Assuming that the coin we toss is fair, the outcomes in \(S\) are equally likely, meaning that each outcome has the same probability of occurring. Since there are four outcomes, and we know that the probability of the sample space must be 1 (first axiom of probability in Definition 1.2.1), it follows that the probability of each outcome is \(\frac{1}{4} = 0.25\). So, we can write $$P(hh) = P(ht) = P(th) = P(tt) = 0.25.$$ The reader can verify that this defines a probability measure satisfying the three axioms. With this probability measure on the outcomes, we can now compute the probability of any event in \(S\) by simply counting the number of outcomes in the event. Thus, we find the probability of events \(A\) and \(B\) previously defined: $$P(A) = P(\{hh, ht, th\}) = \frac{3}{4} = 0.75$$ $$P(B) = P(\{ht, th\}) = \frac{2}{4} = 0.5.$$ We consider the case of equally likely outcomes further in Section 2.1. There is another, more empirical, approach to defining probability, given by using relative frequencies and a version of the Law of Large Numbers.
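The counting computation in this example is easy to mechanize; a small Python sketch using exact arithmetic (the event names follow the example; everything else here is my own illustration, not part of the text):

```python
from fractions import Fraction

# Sample space for two tosses of a fair coin; each outcome has probability 1/4.
S = {"hh", "ht", "th", "tt"}
P = {outcome: Fraction(1, 4) for outcome in S}

def prob(event):
    """P(event) = sum of the probabilities of the outcomes in the event."""
    return sum(P[o] for o in event)

A = {"hh", "ht", "th"}   # event A from the example
B = {"ht", "th"}         # event B from the example

print(prob(A))  # 3/4
print(prob(B))  # 1/2
# Addition Law (property 5): P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
assert prob(A | B) == prob(A) + prob(B) - prob(A & B)
```

Using `Fraction` keeps the results exact, so the axioms and properties can be checked with `==` rather than a floating-point tolerance.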
Relative Frequency Approximation

To estimate the probability of an event \(A\), repeat the random experiment several times (each repetition is called a trial) and count the number of times \(A\) occurred, i.e., the number of times the resulting outcome is in \(A\). Then, we approximate the probability of \(A\) using relative frequency: $$P(A) \approx \frac{\text{number of times}\ A\ \text{occurred}}{\text{number of trials}}.$$

Law of Large Numbers

As the number of trials increases, the relative frequency approximation approaches the theoretical value of \(P(A)\).

This approach to defining probability is sometimes referred to as the frequentist definition of probability. Under this definition, probability represents a long-run average. The two approaches to defining probability are equivalent. It can be shown that using relative frequencies to define a probability measure satisfies the axiomatic definition.
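The relative-frequency idea is easy to demonstrate by simulation; a Python sketch for the coin-toss example (the trial counts and random seed are arbitrary choices of mine):

```python
import random

def relative_frequency(event, trials, seed=42):
    """Estimate P(event) for two fair-coin tosses by relative frequency."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        outcome = rng.choice("ht") + rng.choice("ht")  # e.g. "ht"
        if outcome in event:
            hits += 1
    return hits / trials

# P(A) = 0.75 for A = {hh, ht, th}; by the Law of Large Numbers the
# estimate tends to approach this value as the number of trials grows.
A = {"hh", "ht", "th"}
for n in (100, 10_000, 100_000):
    print(n, relative_frequency(A, n))
```

Note that the estimate fluctuates from run to run (change the seed to see this); only the long-run behaviour is pinned down by the Law of Large Numbers.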
A "logarithmic neuron" is defined as follows [1]: which, for inputs $\left\{ {{x_1},\ldots,{x_n}} \right\}$, yields an output of $z=\prod\limits_{i = 1..n} {x_i^{{w_i}}}$ (in MATLAB, the activation function is considered a separate layer, so I'm going to ignore it from now on). I'm trying to implement a layer of such neurons as a MATLAB class that inherits from nnet.layer.Layer (which is how custom layers should be defined so that they are compatible with the Deep Learning Toolbox), but my present difficulty is deriving expressions for the derivatives of the loss function through the layer. The loss function is defined as: $$ {L_{SPES}} = \frac{1}{2}\left( {\sum\limits_{i = 1..m} {{{\left( {\frac{{t - y}}{t}} \right)}^2}} } \right) \tag{1} $$ so to my understanding, the backward propagated derivative of the loss through the output (regression) layer is therefore: $$ \frac{{\partial L}}{{\partial y}} = \frac{{y - t}}{{{t^2}}} \tag{2} $$ Proceeding to the logarithmic layer, the forward pass can be defined like this: $$ z = \exp \left( {\sum\limits_{i = 1..n} {{w_i} \cdot \ln \left( {{x_i}} \right)} } \right) = \prod\limits_{i = 1..n} {x_i^{{w_i}}} \tag{3} $$ and the article mentions that the equation for weight updates in this layer is: $$ \frac{{\partial L}}{{\partial {w_{ji}}}} = \frac{{\partial L}}{{\partial {y_j}}}\frac{{\partial {y_j}}}{{\partial {v_j}}}\frac{{\partial {v_j}}}{{\partial {w_{ji}}}}\mathop = \limits^? {y_j}\sum\limits_k {{\delta _k}{w_{kj}}{y_i}} \tag{4} $$ My questions are: How can I express $\frac{{\partial L}}{{\partial {\boldsymbol{W}}}}$ as $f\left( {\frac{{\partial L}}{{\partial Z}},\boldsymbol{X},\boldsymbol{Z},\boldsymbol{W}} \right)$? What should be the expressions for $\frac{{\partial L}}{{\partial \boldsymbol{X}}} = g\left( {\frac{{\partial L}}{{\partial Z}},\boldsymbol{X},\boldsymbol{Z},\boldsymbol{W}} \right)$?
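For reference, the single-neuron partials that follow from the forward pass (3) by direct differentiation are $\partial z/\partial w_i = z\ln x_i$ and $\partial z/\partial x_i = w_i z/x_i$, and these can be checked numerically. A minimal Python sketch (scalar case, positive inputs assumed since the logarithm requires them; the MATLAB batch bookkeeping that `nnet.layer.Layer` needs is deliberately left out):

```python
import math

def forward(x, w):
    """Logarithmic neuron: z = prod_i x_i ** w_i = exp(sum_i w_i * ln x_i)."""
    return math.exp(sum(wi * math.log(xi) for xi, wi in zip(x, w)))

def analytic_grads(x, w):
    """dz/dw_i = z * ln(x_i),  dz/dx_i = w_i * z / x_i  (from eq. (3))."""
    z = forward(x, w)
    dz_dw = [z * math.log(xi) for xi in x]
    dz_dx = [wi * z / xi for xi, wi in zip(x, w)]
    return dz_dw, dz_dx

def numeric_grad(f, v, i, h=1e-6):
    """Central finite difference in coordinate i, as a sanity check."""
    vp, vm = list(v), list(v)
    vp[i] += h
    vm[i] -= h
    return (f(vp) - f(vm)) / (2 * h)

x, w = [2.0, 3.0], [0.5, -1.0]
dz_dw, dz_dx = analytic_grads(x, w)
for i in range(2):
    assert abs(dz_dw[i] - numeric_grad(lambda wv: forward(x, wv), w, i)) < 1e-6
    assert abs(dz_dx[i] - numeric_grad(lambda xv: forward(xv, w), x, i)) < 1e-6
```

By the chain rule these give $\partial L/\partial w_i = (\partial L/\partial z)\,z\ln x_i$ and $\partial L/\partial x_i = (\partial L/\partial z)\,w_i z/x_i$, which is the shape of expression the `backward` method needs per neuron; this is my own derivation from (3), not a quote from the article.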
Our paper, Polarisation of Graded Bundles, with Janusz Grabowski and Mikołaj Rotkiewicz, has now been published in SIGMA [1]. In the paper we show that graded bundles (cf. [2]), which are a particular kind of graded manifold (cf. [3]), can be `fully linearised’ or `polarised’. That is, given any graded bundle of degree k, we can associate with it, in a functorial way, a k-fold vector bundle – we call this the full linearisation functor. In the paper [1], we fully characterise this functor. Hopefully, this notion will prove fruitful in applications, as k-fold vector bundles are nice objects that admit various equivalent descriptions.

Graded Bundles

Graded bundles are particular examples of polynomial bundles: that is, we have a fibre bundle whose typical fibres are \(\mathbb{R}^{N}\) and whose admissible changes of local coordinates are polynomial. A little more specifically, a graded bundle \(F\) is a polynomial bundle for which the base coordinates are assigned a weight of zero, while the fibre coordinates are assigned weights in \(\mathbb{N} \setminus 0\). Moreover, we require that admissible changes of local coordinates respect the weight. The degree of a graded bundle is the highest weight that we assign to the fibre coordinates. Any graded bundle admits a series of affine fibrations \(F = F_k \rightarrow F_{k-1} \rightarrow \cdots \rightarrow F_{1} \rightarrow F_{0} =M\), which is locally given by projecting out the higher weight coordinates. For example, a graded bundle of degree 2 admits local coordinates \((x, y ,z)\) of weight 0, 1, and 2, respectively. Changes of coordinates are then, `symbolically’, \(x’ = x'(x)\), \(y’ = y T(x)\), \(z’ = z G(x) + \frac{1}{2} y y H(x)\), which clearly preserve the weight. We then have a series of fibrations \(F_2 \rightarrow F_1 \rightarrow M\), given (locally) by \((x,y,z) \mapsto (x,y) \mapsto (x)\).

Linearisation

The basic idea of the full linearisation is quite simple – I won’t go into details here.
Recall the notion of polarisation of a homogeneous polynomial. The idea is that one adjoins new variables in order to produce a multi-linear form from a homogeneous polynomial. The original polynomial can be recovered by examining the diagonal. As graded bundles are polynomial bundles, and the changes of local coordinates respect the weight, we too can apply this idea to fully linearise a graded bundle. That is, we can enlarge the manifold by including more and more coordinates in just the right way so as to linearise the changes of coordinates. In this way we obtain a k-fold vector bundle from the original graded bundle, which we take to be of degree k. So, how do we decide on these extra coordinates? The method is to differentiate, reduce and project. That is, we should apply the tangent functor as many times as is needed and then look for a substructure thereof. So, let us look at the degree 2 case, which is simple enough to see what is going on. In particular, we only need to differentiate once, but you can quickly convince yourself that for higher degrees we just repeat the procedure. The tangent bundle \( T F_2\) – which we consider as a double graded bundle – admits local coordinates \((\underbrace{x}_{(0,0)}, \; \underbrace{y}_{(1,0)} ,\; \underbrace{z}_{(2,0)} \; \underbrace{\dot{x}}_{(0,1)}, \; \underbrace{\dot{y}}_{(1,1)} ,\; \underbrace{\dot{z}}_{(2,1)})\) The changes of coordinates for the ‘dotted’ coordinates are inherited from the changes of coordinates on \(F_2\): \(\dot{x}’ = \dot{x}\frac{\partial x’}{\partial x}\), \( \dot{y}’ = \dot{y}T(x) + y \dot{x} \frac{\partial T}{\partial x}\), \(\dot{z}’ = \dot{z}G(x) + z \dot{x}\frac{\partial G}{\partial x} + y \dot{y}H(x) + \frac{1}{2}y y \dot{x}\frac{\partial H}{\partial x}\). Thus we have differentiated.
Clearly, we can restrict to the vertical bundle while still respecting the assignment of weights – one inherited from \(F_2\), the other coming from the vector bundle structure of the tangent bundle. In fact, what we need to do is shift the first weight by minus the second weight. Technically, this means that we are no longer dealing with graded bundles: the coordinate \(\dot{x}\) will be of bi-weight (-1,1). However, the amazing thing here is that we can set this coordinate to zero – as we should do when looking at the vertical bundle – and remain in the category of graded bundles. That is, not only is setting \(\dot{x}=0\) well-defined – you can see this from the coordinate transformations – but it also keeps us in the right category. We have performed a reduction of the (shifted) tangent bundle. Thus we arrive at a double graded bundle \(VF_2\) which admits local coordinates \((\underbrace{x}_{(0,0)}, \; \underbrace{y}_{(1,0)} ,\; \underbrace{z}_{(2,0)}, \; \underbrace{\dot{y}}_{(0,1)} ,\; \underbrace{\dot{z}}_{(1,1)})\), and the obvious admissible changes thereof. Now, observe that the bi-weight of \(z\) is (2,0), which is the coordinate with the highest first component of the bi-weight. Thus, as we have the structure of a graded bundle, we can project to a graded bundle of one lower degree, \(\pi : VF_2 \rightarrow l(F_2)\). The resulting double vector bundle is what we will call the linearisation of \(F_2\). So we have constructed a manifold with coordinates \((\underbrace{x}_{(0,0)}, \; \underbrace{y}_{(1,0)}, \; \underbrace{\dot{y}}_{(0,1)} ,\; \underbrace{\dot{z}}_{(1,1)})\), with changes of coordinates \(x’ = x'(x)\), \(y’ = y T(x)\), \( \dot{y}’ = \dot{y}T(x)\), \(\dot{z}’ = \dot{z}G(x) + y \dot{y}H(x)\).
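One can check directly (a small verification not spelled out in the original post) that the substitution \(\dot{y} = y\), \(\dot{z} = 2z\) is compatible with these linearised transformation rules:

\(\dot{y}’ = \dot{y}T(x) = yT(x) = y’\),

\(\dot{z}’ = \dot{z}G(x) + y\dot{y}H(x) = 2zG(x) + y^{2}H(x) = 2\bigl(zG(x) + \tfrac{1}{2}y^{2}H(x)\bigr) = 2z’\),

so this `diagonal’ is preserved by changes of coordinates.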
Then, by comparison with the changes of local coordinates on \(F_2\), you see that we have a canonical embedding of the original graded bundle in its linearisation as a ‘diagonal’, \(\iota : F_2 \rightarrow l(F_2)\), given by setting \(\dot{y} = y\) and \(\dot{z} = 2 z\).

References

[1] Andrew James Bruce, Janusz Grabowski and Mikołaj Rotkiewicz, Polarisation of Graded Bundles, SIGMA 12 (2016), 106, 30 pages.

[2] Janusz Grabowski and Mikołaj Rotkiewicz, Graded bundles and homogeneity structures, J. Geom. Phys. 62 (2012), 21-36.

[3] Th.Th. Voronov, Graded manifolds and Drinfeld doubles for Lie bialgebroids, in Quantization, Poisson Brackets and Beyond (Manchester, 2001), Contemp. Math., Vol. 315, Amer. Math. Soc., Providence, RI, 2002, 131-168.
I am confused about such things as negative velocity, acceleration, and displacement and what the negative indicates.

It is better to understand the sign of a one-dimensional vector as telling you its direction than to try to give it a meaning in words, and acceleration is a great example of why. An object in one-dimensional motion which has a negative acceleration might be ...

- slowing down/stopping if it currently has a positive velocity
- speeding up if it currently has a negative velocity
- getting started if it currently has zero velocity
- changing direction/turning around if it currently has a positive velocity and we watch it long enough for that velocity to become negative
- continuing in the same direction if it currently has a negative velocity.

The point is that most of those day to day phrases ("slowing down", "turning around", etc.) are relative to the current state of motion.

Displacement, velocity and acceleration are vector quantities. Strictly speaking they can't be positive or negative. Instead a vector has a direction in space (as well as a magnitude). For example, a displacement might be 2 km North West. [A displacement of –2 km North West isn't a negative vector, except in a trivial sense; it is a vector of 2 km South East.] It's often useful, though, to consider components of a vector, $\vec{V}$; that is, vectors in chosen directions which add together by the head-to-tail rule to make $\vec{V}$. For example, if $\vec{V}$ = 2 km North West, and the chosen directions for components are East and North, then $\vec{V}$ = $-\sqrt 2$ km East + $\sqrt 2$ km North. Now that we have established fixed directions for our components, we find that we have coefficients, namely $-\sqrt 2$ km and $\sqrt 2$ km, that really can be positive or negative. [Confusingly, these coefficients are also often themselves referred to as 'components'. We shall do this in the next paragraph.]
So when, for example, considering the motion of a stone thrown upwards, we might choose to consider components of displacement, velocity and acceleration in the upwards direction. The acceleration 'component' is then $-9.8\ \text{m s}^{-2}$, but, rather sloppily, we often say simply that (having chosen the upward direction) the stone's acceleration is negative! Note that the stone's upward acceleration component is always $-9.8\ \text{m s}^{-2}$, whether the stone is on its way up and slowing down, or on its way down and speeding up. This follows from the definition of acceleration $$\text{mean acceleration} = \frac{\text{final velocity}-\text{initial velocity}}{\text{time taken to change}}$$ in which due care is taken over the vector subtraction on the top line!

Example. Working with upward components... Suppose we launch the stone vertically with a velocity of $15.0\ \text{m s}^{-1}$. We find that 0.50 s later it has a velocity of $10.1\ \text{m s}^{-1}$, so its acceleration is $a=\frac{10.1\ \text{m s}^{-1}-15.0\ \text{m s}^{-1}}{0.50\ \text{s}}= -9.8\ \text{m s}^{-2}$. At its highest point its velocity is zero, and 0.50 s later its velocity is $-4.9\ \text{m s}^{-1}$. So its acceleration is $a=\frac{-4.9\ \text{m s}^{-1}-0}{0.50\ \text{s}}= -9.8\ \text{m s}^{-2}$. You might care to use the same method to calculate the mean acceleration for the stone's complete flight to and from its starting point (which takes 3.06 seconds). Here a change in direction occurs, but the method still works. [Assume that the stone's speed on returning to its starting point is the same as the speed at which it was thrown.]

Normally, acceleration is represented as a vector: think of an arrow with a direction and a length (or magnitude). For example, velocity is a vector: 23 degrees east of due north at 17 miles per hour. Acceleration could be 23 degrees east of due north at 10 feet per second per second, which means every second its velocity increases by 10 feet per second in that direction.
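The arithmetic in the stone example is simple enough to script; a small Python sketch using the same numbers, with all velocities taken as upward components (the helper function is mine, not from the answer):

```python
def mean_acceleration(v_final, v_initial, dt):
    """Mean acceleration = (final velocity - initial velocity) / time taken,
    with velocities as upward components in m/s and time in s."""
    return (v_final - v_initial) / dt

# On the way up: 15.0 m/s -> 10.1 m/s over 0.50 s
print(mean_acceleration(10.1, 15.0, 0.50))   # -9.8 (m/s^2)
# Just after the top: 0 -> -4.9 m/s over 0.50 s
print(mean_acceleration(-4.9, 0.0, 0.50))    # -9.8 (m/s^2)
# Whole flight: +15.0 m/s at launch, -15.0 m/s on return, 3.06 s in the air
print(mean_acceleration(-15.0, 15.0, 3.06))  # about -9.8 (m/s^2)
```

The third call covers the case where a change in direction occurs: the signed subtraction handles it automatically, which is the point of taking "due care over the vector subtraction on the top line".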
Negative length or magnitude simply reverses the direction of the arrow. So, in the above case, the negative of the acceleration would be 23 degrees west of due south at 10 feet per second per second, which is the same as losing 10 feet per second per second in the direction 23 degrees east of due north. Let's say an object is initially moving straight north at 60 mph and experiences a 5 mph per second negative acceleration in the north direction. In other words, it is accelerating southward at positive 5 mph per second. That means it would continue moving along the north-south line, but would gradually slow down, come to a momentary stop after 12 seconds, reverse direction, and then gain 5 mph every second in the south direction. Now consider another case: the object starts out moving north at 60 mph, but experiences a 1 mph per second positive acceleration in the due-east direction. In this case, the northward component of its velocity does not change, but the eastward component (which is initially zero) increases by 1 mph every second. It will never be going directly east, because it never loses that initial northward component of its velocity; but after 60 seconds it will be moving directly northeast. One last example: if a satellite moves at a constant speed (speed is the magnitude of a velocity vector) in a circular orbit around the earth, its velocity direction is always tangent to the orbital path. Gravity accelerates the satellite toward the center of the Earth at right angles to the satellite's velocity. The direction of the satellite's velocity keeps changing, but the magnitude of its velocity does not change. Let the unit vector in a particular direction be $\hat r$. You may think of this as defining a positive direction.
A displacement is $\vec r = r\, \hat r$, a velocity is $\vec v = v\,\hat r$, and an acceleration is $\vec a = a\,\hat r$, where $r$, $v$ and $a$ are components in the $\hat r$ direction. $r$, $v$ and $a$ can be either positive or negative quantities. For a positive displacement, a positive velocity or a positive acceleration, $r,\, v$ and $a$ are all positive quantities. For a negative displacement, a negative velocity or a negative acceleration, $r,\, v$ and $a$ are all negative quantities. As an example, let the acceleration be $-10\,\hat r\,\rm m\,s^{-2}$; this would be called a negative acceleration because the component of the acceleration, $-10$, in the $\hat r$ direction is negative. However, there is nothing to stop you writing the acceleration as $+10\,(-\hat r)\,\rm m\,s^{-2}$ and calling this a positive acceleration in the $(-\hat r)$ direction. This might be clearer if another unit vector is defined, $\hat R = (-\hat r)$, and then the acceleration is $+10\,\hat R\,\rm m\,s^{-2}$.
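A minimal numpy sketch of this sign bookkeeping (the direction and the value $10$ are just the illustrative numbers used above):

```python
# A "negative acceleration" is a positive magnitude attached to the
# opposite unit vector: -10 * r_hat and +10 * (-r_hat) are the same vector.
import numpy as np

r_hat = np.array([0.0, 1.0])   # chosen positive direction (say, upward)
R_hat = -r_hat                 # the opposite unit vector, R-hat above

a1 = -10.0 * r_hat             # acceleration component -10 in the r-hat direction
a2 = +10.0 * R_hat             # acceleration component +10 in the R-hat direction

print(np.allclose(a1, a2))    # -> True: one physical acceleration, two descriptions
```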
In this month's thrilling installment of Wrong, But Useful, we're joined by @c_j_smith, who is Calvin Smith in real life. We discuss: Number of the Podcast: 5; are fish and chip shop owners good at maths?; two maths puns and a maths joke; are there 'popular' books that 'lead you…

Dear Uncle Colin, Can there be two or more consecutive irrational numbers? - Between A Number And Consecutive… Huh? Hi, BANACH, and thanks for your message! We… have a problem here. When you're dealing with integers, consecutive is really neatly defined: every number has a single successor, a number that's…

One of the many lovely things about Big MathsJam is that I've found My People - I've made several very dear friends there, introduced others to the circle, and get to stay in touch with other maths fans through the year. It's golden. Adam Atkinson is one of those dear…

Dear Uncle Colin, I'm given that $0 \le x \lt 180^o$, and that $\cos(x) + \sin(x) = \frac{1}{2}$. I have to find $p$ and $q$ such that $\tan(x) = -\frac{p + \sqrt{q}}{3}$. Where do I even start? - Some Identity Needing Evaluation. Hi, SINE, and thanks for your message! There…

So far in the Dictionary of Mathematical Eponymy, I've not picked anyone properly famous. I mean, if you're a keen recreational mathematician, you'll have heard of Collatz or Banach; a serious mathematician might know about Daubechies, and a chess enthusiast would conceivably have come across Elo. But everyone has heard…

Dear Uncle Colin, I'm told that $5\times 2^x + 1$ (with $x$ a non-negative integer) is a square number - how do I find $x$? - A Baffling Equation. Logs? Hi, ABEL, and thanks for your message! We're looking for a square number - let's call it $y^2$ - that's…

Aaaages ago, @vingaints tweeted: "This is pretty wild. It feels like what the Basis Representation Theorem is for Integers but for Rational Numbers. Hmm - trying to prove it now. Feels like a tough one. Need to work some examples!" — Ving Aints (@vingAints), September 18, 2018. In…

Dear Uncle Colin, Suppose Team 1 beats Team 2 by a score of 10-7, and Team 2 beats Team 3 by a score of 10-4. How would we predict the score of a match between Team 1 and Team 3? - Make A Team Calculation Happen. Hi, MATCH, and thanks…

"That looks straightforward," I thought. "I'll keep on looking at this geometry puzzle." Nut-uh. A standard pack of 52 cards is shuffled. The cards are turned over one at a time, and you guess whether each will be red or black. How many correct guesses do you expect to make?…

In this month's Wrong, But Useful, we're joined by @sheena2907, who is Sheena in real life. We discuss: Sheena's Number of the Podcast: 3,212; board games - Number Fluxx, Prime Climb; Magic: The Gathering is undecidable!; Oxbridge time surprises; the oddness of the Fibonacci sequence; the heights of women; Big…
Case Study Contents: Introduction; Genome-Scale Metabolic Models; Gene to Protein to Reaction Associations; Flux Balance Analysis; Flux Variability Analysis; OptOrf Strain Design Algorithm; Interactive Demo; References

The design and development of efficient microbial strains for the production of everything from medicine to biofuels is an increasingly relevant and challenging problem. While the rational design of strains with desirable chemical phenotypes can be accomplished through expertise and speculation, these mutant proposals are inherently qualitative and potentially suboptimal in both the required design effort and the resulting strain performance. To refine and expedite the process of creating and evaluating mutant strains, genome-scale metabolic models, in conjunction with strain design algorithms, allow for the fast and facile quantitative generation and analysis of proposed mutants to accomplish expressed engineering objectives.

Metabolic Engineering Problem. Engineers are interested in modifying cells to optimally produce chemicals of interest to them.

Steady State Network Node. By assuming a flow balance across all nodes internal to the cell, one can solve for the steady state flux distribution within a cell. \(v_{j}\) is a reaction flux that involves metabolite A as a product or reactant.

Genome-scale models are a mathematical recapitulation of the chemical transformations within the cell. Once all reactions within the cell have been determined, the stoichiometric coefficients are taken to populate a stoichiometric matrix, \(S_{ij}\). By convention, coefficients of reactants are written as negative values while product coefficients have a positive value. Using this stoichiometric matrix, which can be thought of as the arcs of a network model, one can solve for the steady state behavior of the cell subject to chemical uptake and secretion (which can be considered respectively the source and sink nodes of the network).
From a physical perspective, this amounts to applying the law of conservation of mass to the cell system while assuming no material accumulation. Using Einstein summation notation, this can be written as: \(S_{ij}v_{j}=0 \quad \forall i \in Metabs \quad [Steady~State~Material~Balance] \)

To solve this particular problem, a toy model of Escherichia coli metabolism described by Palsson [1] was used and modeled, for simplicity, assuming growth in glucose minimal media. This model contains only a fraction of the reactions and genes that would be found in more sophisticated E. coli models such as the iJR904 [2] or iAF1260 [3] models. Additionally, this model does not take into account regulatory effects.

Simplified Example of GPR Mapping Within a Network. Genes and the protein products they code for determine the reaction network of a cell.

In order to make informed decisions about the behavior of cells and their mutant strains, one must first understand the function of their genes. While the metabolic model is valuable for predicting phenotypic behavior of a single wild type strain, designing an efficient strain (i.e. removing reactions) requires knowledge of the functions of the cell's numerous genes. To incorporate this knowledge into the models, mappings are made from genes to their proteins to the reactions they catalyze. These maps are commonly referred to as gene to protein to reaction associations (GPRs). Using binary variables, it is possible to model gene loss within the model and cascade the impact of this occurrence to the reaction level. A gene's proteins may have a number of functions and interactions within the cell. A number of relevant phenomena (as they relate to metabolic strain design) include:

One-to-One Interaction: The simplest of interactions. A single gene is associated with a single reaction and vice versa.

Isozymes: Multiple enzymes exist to perform a specific function.
This often will require multiple gene deletions to remove the particular function from the cell.

Multi-functional Protein: The protein is responsible for catalyzing multiple, often related, reactions.

Subunits: Multiple proteins (subunits) are required to create the final functional protein. The deletion of any of the genes related to this protein removes the related reaction.

While outside the scope of this problem, an additional layer of complexity can be added by modeling the impact of genetic regulation using various transcription factor (TF) rules. This is accomplished using integrated metabolic and regulatory models.

Flux balance analysis (FBA) allows one to ascertain the maximal feasible reaction flux that a cell could produce given a set of media conditions and flux constraints. The model system is interrogated via FBA using an LP of the following form: A common assumption in solving for flux distributions is that adaptive evolutionary strains will work to fulfill their biological imperative (i.e., their objective is to propagate to perpetuate their genetic code) [4]. This objective is modeled using a unique reaction flux, the biomass reaction, commonly written as \(\mu\) or \(v_{Bio}\), that behaves as a sink for the network.

Hypothetical Feasible Solution Space. Circles represent the resulting FBA solution for each particular strain.

Flux variability analysis (FVA) allows one to determine if multiple flux distributions exist for a particular optimal solution [5]. For the purposes of this problem, FVA is used to generate the feasible space for the strain's chemical production vs. biomass production. FVA is an LP of the following form: where \(x\) is a parameter used to set biomass production along a point in its domain and \(R\) and \(M\) are sets as defined above. For this particular problem, \( c_{j} = 0 \quad \forall j \in R \backslash COI \) and \( c_{COI} = 1\).
For each point along the domain, the maximum and minimum flux values for the chemical of interest are determined. The resulting figure is effectively a cross-section of the convex hull. Within this region, FBA will always select the flux distribution associated with the rightmost extreme point. If this distribution does not have a unique chemical phenotype, the maximum potential production value will be reported.

OptOrf is a bilevel mixed integer linear programming (MILP) strain design algorithm developed by Kim and Reed [6]. The algorithm works to maximize the production of a user-specified chemical subject to the assumption of maximal cellular growth (i.e., it should be predictive of adaptive evolutionary strains). The MILP is implemented as described below:

Notation:
\( \Delta \) is a user-specified parameter limiting the number of gene deletions allowed.
\( \Delta' \) is a user-specified parameter mandating a minimum number of gene deletions by the algorithm.
\( \delta \) is a user-specified penalty for gene deletions.
\( a_{j} \) is a binary reaction decision variable that indicates whether a reaction is present.
\( ko_{g} \) is a gene knockout decision variable, \(ko_{g} \in \{0,1\} \).
\(GPR_{j,n,g}\) is the set of gene, isozyme, and reaction triples \( \in \{R \times N \times G \}\), where \(N\) is the set of isozyme numbers \((|N| = \max_{j}\) # of isozymes for \(rxn_{j})\) and \(G\) is the set of all genes.
\(Isozyme_{j,n}\) is a binary decision variable that indicates the expression of a gene and, thus, the presence of an enzyme.
\(\pi\) is a relational projection operator.

The final three constraints are rules used to map a gene deletion down to the reaction level using the GPR mappings provided to the model. For this problem, \( c_{j}v_{j} = v_{COI} \). To experiment with the model, see the interactive demo.

References

Palsson, B.O. 2006. Systems Biology: Properties of Reconstructed Networks. Cambridge University Press, New York.
Reed, J.L., T.D. Vo, C.H.
Schilling, and B.O. Palsson. 2003. An expanded genome-scale model of Escherichia coli K-12 (iJR904 GSM/GPR). Genome Biology, 4(9), R54.1-R54.12.
Feist, A.M., C.S. Henry, J.L. Reed, M. Krummenacker, A.R. Joyce, P.D. Karp, L.J. Broadbelt, V. Hatzimanikatis, and B.O. Palsson. 2007. A genome-scale metabolic reconstruction for Escherichia coli K-12 MG1655 that accounts for 1260 ORFs and thermodynamic information. Molecular Systems Biology, 3:121.
Orth, J.D., I. Thiele, and B.O. Palsson. 2010. What is flux balance analysis? Nature Biotechnology, 28(3), 245-248.
Reed, J.L. and B.O. Palsson. 2004. Genome-Scale in Silico Models of E. coli Have Multiple Equivalent Phenotypic States: Assessment of Correlated Reaction Subsets That Comprise Network States. Genome Research, 14(9), 1797-1805.
Kim, J. and J.L. Reed. 2010. OptORF: Optimal metabolic and regulatory perturbations for metabolic engineering of microbial strains. BMC Systems Biology, 4:53.
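As a concrete illustration of the steady-state balance \(S_{ij}v_{j}=0\) and the FBA objective discussed above, here is a minimal Python sketch. The network is a hypothetical three-reaction chain invented for illustration, not the Palsson toy model, and it assumes scipy is available:

```python
# FBA as a linear program on a toy chain (illustrative only):
# v1 uptakes metabolite A, v2 converts A -> B, v3 drains B (the "biomass" sink).
import numpy as np
from scipy.optimize import linprog

S = np.array([
    [1, -1,  0],   # metabolite A: produced by v1, consumed by v2
    [0,  1, -1],   # metabolite B: produced by v2, consumed by v3
])
bounds = [(0, 10), (0, None), (0, None)]   # uptake v1 capped at 10 units

# linprog minimizes, so negate the biomass coefficient to maximize v3.
c = np.array([0.0, 0.0, -1.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # optimal flux distribution, limited by the uptake bound
```

At the optimum every flux equals the uptake cap of 10: with mass balanced at both internal nodes, the chain can do no better than its substrate supply.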
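The GPR knockout logic described earlier (isozymes act as ORs; subunits within an isozyme act as ANDs) can likewise be sketched in plain Python; the gene names and the nested-list rule encoding here are invented for illustration:

```python
# Propagate gene knockouts to the reaction level through a GPR rule,
# encoded as an OR over isozymes, each isozyme an AND over subunit genes.

def reaction_active(gpr, knocked_out):
    """A reaction survives if at least one isozyme keeps all of its subunits."""
    return any(all(g not in knocked_out for g in isozyme) for isozyme in gpr)

gpr_simple = [["geneA"]]             # one-to-one: single gene, single reaction
gpr_iso    = [["geneB"], ["geneC"]]  # isozymes: geneB OR geneC suffices
gpr_sub    = [["geneD", "geneE"]]    # subunits: geneD AND geneE both required

print(reaction_active(gpr_simple, {"geneA"}))        # -> False
print(reaction_active(gpr_iso, {"geneB"}))           # -> True  (geneC remains)
print(reaction_active(gpr_iso, {"geneB", "geneC"}))  # -> False (both isozymes lost)
print(reaction_active(gpr_sub, {"geneE"}))           # -> False (a subunit is lost)
```

This mirrors why removing an isozyme-backed function "often will require multiple gene deletions", while a subunit complex falls to any single deletion.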
The Tracy-Widom law describes, among other things, the fluctuations of the maximal eigenvalue of many large random matrix models. Because of its universal character, it has earned its place on the podium of very famous laws in probability theory. I'd like to discuss which ingredients need to be present in order to expect its apparition. More precisely, the Tracy-Widom law has for cumulative distribution function the Fredholm determinant $$ F(s)=\det(I-A_s) $$ where the operator $A_s$ acts on $L^2(s,+\infty)$ by $$ A_sf(x)=\int A(x,y)f(y)dy,\qquad A(x,y)=\frac{Ai(x)Ai'(y)-Ai(y)Ai'(x)}{x-y}, $$ $Ai$ being the Airy function. It is moreover possible to rewrite $F$ in a more explicit (?) form, involving a solution of the Painlevé II equation. It is known that this distribution describes the fluctuations of the maximal eigenvalue of the GUE, and actually of a large class of Wigner matrices. It curiously also appears in many interacting particle processes, such as ASEP, TASEP, the longest increasing subsequence of uniformly random permutations, polynuclear growth models... (For an introduction, see http://arxiv.org/abs/math-ph/0603038 and references therein. You may jump to (30) if you are in a hurry, and read more about particle models in Section 3.) A natural (but ambitious) question is:

You have $N$ interacting random points $(x_1,\ldots,x_N)$ on $\mathbb{R}$; when can you predict that $x_{\max}^{(N)}=\max_{i=1}^N x_i$ will fluctuate (up to a rescaling) according to the Tracy-Widom law around its large-$N$ limiting value?

Assume that the limiting distribution of the $x_i$'s $$ \mu(dx)=\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{i=1}^N\delta_{x_i}\qquad \mbox{(in the weak topology)} $$ admits a density $f$ on a compact support $S(\mu)$, and write $x_\max=\max S(\mu)$ (which can be assumed to be positive by translation).
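(As a numerical aside, not part of the question itself: the GUE case is easy to probe by simulation. The Python sketch below samples one GUE matrix, in the normalization where entries have unit variance, and checks that its largest eigenvalue sits near the spectral edge $2\sqrt{N}$, around which the Tracy-Widom fluctuations of order $N^{-1/6}$ occur.)

```python
# One GUE sample: Hermitian matrix with independent complex Gaussian entries.
# Its largest eigenvalue concentrates at 2*sqrt(N) for large N.
import numpy as np

rng = np.random.default_rng(0)
N = 300

A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / 2          # Hermitian: E|H_ij|^2 = 1 off the diagonal

lam_max = np.linalg.eigvalsh(H).max()
print(lam_max / np.sqrt(N))       # close to 2, the edge of the semicircle
```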
I have the impression that a necessary condition for the appearance of Tracy-Widom is to satisfy the three following points : 1) (strong repulsion) There exists a strong repulsion between the $x_i$'s (typically, the joint density of the $x_i$'s has a term like $\prod_{i\neq j}|x_i-x_j|$, or at least the $x_i$'s form a determinantal point process). 2) (no jump for $x_\max^{(N)} $) $x_\max^{(N)}\rightarrow x_\max$ a.s. when $N\rightarrow\infty$. 3) (soft edge) The density of $\mu$ vanishes like a square root around $x_\max$, i.e. $f(x)\sim (x_\max-x)^{1/2}$ when $x\rightarrow x_\max$. For TASEP and longest increasing subsequence models, one can see that 1), 2) and 3) hold [since these models are somehow discretizations of random matrix models where everything is explicit (Wishart and GUE respectively)]. For the Wigner matrices, 2) and 3) clearly hold [Wigner's semicircular law], and I guess 1) is ok [because of the local semicircular law]. For ASEP, 1) clearly holds [because of the E of ASEP], 2) and 3) are not so clear to me, but sound reasonable. Do you know any interacting particle model where Tracy-Widom holds but where one of the previous points is cruelly violated ? Of course the condition 1) is pretty vague, and would deserve to be defined precisely. It is a part of the question ! NB : I have a pretty weak physical background, so if by any chance a physicist was lost on MO, I'd love to hear his/her criteria for Tracy-Widom...
I'm trying to get a predictive density and currently getting something which I know can't be true (based on both logic and simulation-based techniques). Here's the relevant information:

$\theta$ is a probability and thus $0 \leq \theta \leq 1$
$p(x|\theta) = 1/\theta$ (uniform) edit: for $0 \leq x \leq \theta$ (and $p(x|\theta) = 0$ otherwise)
$p(\theta) = 6\theta (1-\theta)$ (prior, but we are unable to observe data)
$p(x) = \int_\theta p(x | \theta) \cdot p(\theta) d\theta$ (general way to find the predictive, if I'm not mistaken)

What I have: $\int_0^1 6\theta(1-\theta) \frac{1}{\theta} d\theta$. Unfortunately, solving this seems to just give me 3 (both by integration by parts and by simplifying it first). But the predictive should be a function of $x$ which is monotonically decreasing and concave up, right?

Edit: here are the results of my simulation (100000 trials), for reference/checking.
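A hedged Python simulation sketch (not part of the original post) for the cross-check: since $p(x|\theta)=0$ for $x>\theta$, the predictive integral should run only over $\theta \in [x,1]$, giving $\int_x^1 6(1-\theta)\,d\theta = 3(1-x)^2$, which is indeed decreasing and concave up. The Monte Carlo estimate below checks this at $x=0.5$:

```python
# theta ~ Beta(2,2), whose density is the prior 6*theta*(1-theta);
# then x | theta ~ Uniform(0, theta). Compare a histogram-bin density
# estimate near x = 0.5 against the analytic predictive 3*(1-x)^2.
import random

random.seed(1)
n = 200_000
xs = []
for _ in range(n):
    theta = random.betavariate(2, 2)       # draw from the prior
    xs.append(random.uniform(0, theta))    # draw x given theta

width = 0.10
est = sum(1 for x in xs if 0.45 <= x < 0.55) / (n * width)
exact = 3 * (1 - 0.5) ** 2                 # = 0.75 at x = 0.5
print(est, exact)                          # est should be close to 0.75
```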
Saturating Hinges Fit

Introduction

The following example comes from work on saturating splines in Boyd et al. (2016). Adaptive regression splines are commonly used in statistical modeling, but the instability they exhibit beyond their boundary knots makes extrapolation dangerous. One way to correct this issue for linear splines is to require that they saturate: remain constant outside their boundary. This problem can be solved using a heuristic that is an extension of lasso regression, producing a weighted sum of hinge functions, which we call a saturating hinge.

For simplicity, consider the univariate case with \(n = 1\). Assume we are given knots \(t_1 < t_2 < \cdots < t_k\) where each \(t_j \in {\mathbf R}\). Let \(h_j\) be a hinge function at knot \(t_j\), i.e., \(h_j(x) = \max(x-t_j,0)\), and define \(f(x) = w_0 + \sum_{j=1}^k w_jh_j(x)\). We want to solve \[ \begin{array}{ll} \underset{w_0,w}{\mbox{minimize}} & \sum_{i=1}^m \ell(y_i, f(x_i)) + \lambda\|w\|_1 \\ \mbox{subject to} & \sum_{j=1}^k w_j = 0 \end{array} \] for variables \((w_0,w) \in {\mathbf R} \times {\mathbf R}^k\). The function \(\ell:{\mathbf R} \times {\mathbf R} \rightarrow {\mathbf R}\) is the loss associated with every observation, and \(\lambda \geq 0\) is the penalty weight. In choosing our knots, we set \(t_1 = \min(x_i)\) and \(t_k = \max(x_i)\) so that by construction, the estimate \(\hat f\) will be constant outside \([t_1,t_k]\).

Example

We demonstrate this technique on the bone density data for female patients from Hastie, Tibshirani, and Friedman (2001), section 5.4. There are a total of \(m = 259\) observations. Our response \(y_i\) is the change in spinal bone density between two visits, and our predictor \(x_i\) is the patient's age. We select \(k = 10\) knots roughly evenly spaced across the range of \(X\) and fit a saturating hinge with squared error loss \(\ell(y_i, f(x_i)) = (y_i - f(x_i))^2\).
## Import and sort data
data(bone, package = "ElemStatLearn")
X <- bone[bone$gender == "female",]$age
y <- bone[bone$gender == "female",]$spnbmd
ord <- order(X, decreasing = FALSE)
X <- X[ord]
y <- y[ord]

## Choose knots evenly distributed along domain
k <- 10
lambdas <- c(1, 0.5, 0.01)
idx <- floor(seq(1, length(X), length.out = k))
knots <- X[idx]

In R, we first define the estimation and loss functions:

## Saturating hinge
f_est <- function(x, knots, w0, w) {
  hinges <- sapply(knots, function(t) { pmax(x - t, 0) })
  w0 + hinges %*% w
}

## Loss function
loss_obs <- function(y, f) { (y - f)^2 }

This allows us to easily test different losses and knot locations later. The rest of the set-up is similar to previous examples. We assume that knots is an R vector representing \((t_1,\ldots,t_k)\).

## Form problem
w0 <- Variable(1)
w <- Variable(k)
loss <- sum(loss_obs(y, f_est(X, knots, w0, w)))
constr <- list(sum(w) == 0)
xrange <- seq(min(X), max(X), length.out = 100)
splines <- matrix(0, nrow = length(xrange), ncol = length(lambdas))

The optimal weights are retrieved using separate calls, one per \(\lambda\), as shown below.

for (i in seq_along(lambdas)) {
  lambda <- lambdas[i]
  reg <- lambda * p_norm(w, 1)
  obj <- loss + reg
  prob <- Problem(Minimize(obj), constr)

  ## Solve problem and save spline weights
  result <- solve(prob)
  w0s <- result$getValue(w0)
  ws <- result$getValue(w)
  splines[, i] <- f_est(xrange, knots, w0s, ws)
}

Results

We plot the fitted saturating hinges in Figure 1 below. As expected, when \(\lambda\) increases, the spline exhibits less variation and grows flatter outside its boundaries.
d <- data.frame(xrange, splines)
names(d) <- c("x", paste0("lambda_", seq_len(length(lambdas))))
plot.data <- gather(d, key = "lambda", value = "spline",
                    "lambda_1", "lambda_2", "lambda_3", factor_key = TRUE)
ggplot() +
  geom_point(mapping = aes(x = X, y = y)) +
  geom_line(data = plot.data, mapping = aes(x = x, y = spline, color = lambda)) +
  scale_color_discrete(name = expression(lambda), labels = sprintf("%0.2f", lambdas)) +
  labs(x = "Age", y = "Change in Bone Density") +
  theme(legend.position = "top")

The squared error loss works well in this case, but the Huber loss is preferred when the dataset contains large outliers. To see this, we add 50 randomly generated outliers to the bone density data and re-estimate the saturating hinges.

## Add outliers to data
set.seed(1)
nout <- 50
X_out <- runif(nout, min(X), max(X))
y_out <- runif(nout, min(y), 3*max(y)) + 0.3
X_all <- c(X, X_out)
y_all <- c(y, y_out)

## Solve with squared error loss
loss_obs <- function(y, f) { (y - f)^2 }
loss <- sum(loss_obs(y_all, f_est(X_all, knots, w0, w)))
prob <- Problem(Minimize(loss + reg), constr)
result <- solve(prob)
spline_sq <- f_est(xrange, knots, result$getValue(w0), result$getValue(w))

## Solve with Huber loss
loss_obs <- function(y, f, M) { huber(y - f, M) }
loss <- sum(loss_obs(y_all, f_est(X_all, knots, w0, w), 0.01))
prob <- Problem(Minimize(loss + reg), constr)
result <- solve(prob)
spline_hub <- f_est(xrange, knots, result$getValue(w0), result$getValue(w))

Figure 2 shows the results. For a Huber loss with \(M = 0.01\), the resulting spline is fairly smooth and follows the shape of the original data, as opposed to the spline using squared error loss, which is biased upwards by a significant amount.
d <- data.frame(xrange, spline_hub, spline_sq)
names(d) <- c("x", "Huber", "Squared")
plot.data <- gather(d, key = "loss", value = "spline",
                    "Huber", "Squared", factor_key = TRUE)
ggplot() +
  geom_point(mapping = aes(x = X, y = y)) +
  geom_point(mapping = aes(x = X_out, y = y_out), color = "orange") +
  geom_line(data = plot.data, mapping = aes(x = x, y = spline, color = loss)) +
  scale_color_discrete(name = "Loss", labels = c("Huber", "Squared")) +
  labs(x = "Age", y = "Change in Bone Density") +
  theme(legend.position = "top")

Session Info

sessionInfo()

## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin18.5.0 (64-bit)
## Running under: macOS Mojave 10.14.5
##
## Matrix products: default
## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices datasets utils methods base
##
## other attached packages:
## [1] tidyr_0.8.3 ggplot2_3.1.1 CVXR_0.99-6
##
## loaded via a namespace (and not attached):
## [1] gmp_0.5-13.5 Rcpp_1.0.1 compiler_3.6.0
## [4] pillar_1.4.1 plyr_1.8.4 R.methodsS3_1.7.1
## [7] R.utils_2.8.0 tools_3.6.0 digest_0.6.19
## [10] bit_1.1-14 evaluate_0.14 tibble_2.1.2
## [13] gtable_0.3.0 lattice_0.20-38 pkgconfig_2.0.2
## [16] rlang_0.3.4 Matrix_1.2-17 yaml_2.2.0
## [19] blogdown_0.12.1 xfun_0.7 withr_2.1.2
## [22] dplyr_0.8.1 Rmpfr_0.7-2 ECOSolveR_0.5.2
## [25] stringr_1.4.0 knitr_1.23 tidyselect_0.2.5
## [28] bit64_0.9-7 grid_3.6.0 glue_1.3.1
## [31] R6_2.4.0 rmarkdown_1.13 bookdown_0.11
## [34] purrr_0.3.2 magrittr_1.5 scales_1.0.0
## [37] htmltools_0.3.6 scs_1.2-3 assertthat_0.2.1
## [40] colorspace_1.4-1 labeling_0.3 stringi_1.4.3
## [43] lazyeval_0.2.2 munsell_0.5.0 crayon_1.3.4
## [46] R.oo_1.22.0

References

Boyd, N., T. Hastie, S. Boyd, B. Recht, and M. Jordan. 2016. "Saturating Splines and Feature Selection." arXiv preprint arXiv:1609.06764.
Hastie, T., R. Tibshirani, and J.
Friedman. 2001. The Elements of Statistical Learning. Springer.
Inverse tan is the inverse function of the trigonometric function 'tangent'. It is used to calculate the angle by applying the tangent ratio of the angle, which is the opposite side divided by the adjacent side. Based on this function, the value of $\tan^{-1} 1$ (i.e., arctan 1), $\tan^{-1} 0$, etc. can be determined. It is the naming convention for all inverse trigonometric functions to use the prefix 'arc', and hence the inverse tangent is denoted by arctan. Although it is not uncommon to use $\tan^{-1}$, we will use arctan throughout this article.

Inverse Tangent Formula

Addition Formula: The formula for adding two inverse tangent functions is derived from the tan addition formula. In this formula, by putting $a = \arctan x$ and $b = \arctan y$, we get \(\arctan x + \arctan y = \arctan \frac{x+y}{1-xy}\) (for $xy < 1$).

For Integration: Some of the important formulae for calculating integrals of expressions involving the arctan function are:

\(\int \arctan(x)\,dx = x \arctan(x) - \frac{\ln(x^2+1)}{2} + C\)
\(\int \arctan(ax)\,dx = x \arctan(ax) - \frac{\ln(a^2x^2+1)}{2a} + C\)
\(\int x \arctan(ax)\,dx = \frac{x^2 \arctan(ax)}{2} + \frac{\arctan(ax)}{2a^2} - \frac{x}{2a} + C\)
\(\int x^2 \arctan(ax)\,dx = \frac{x^3 \arctan(ax)}{3} + \frac{\ln(a^2x^2+1)}{6a^3} - \frac{x^2}{6a} + C\)

Calculus of Arctan Function

This section gives the formulae to calculate the derivative and integral of the arctan function.

Derivative: The derivative of arctan x is denoted by \(\frac{d}{dx}\arctan(x)\) and, for complex values of $x$, the derivative is equal to \(\frac{1}{1+x^2}\) for $x \neq -i, +i$.

Integral: For obtaining an expression for the definite integral of the inverse tan function, the derivative is integrated and the value at one point is fixed. The expression is: \(\arctan (x) = \int_{0}^{x}\frac{1}{y^{2}+1}dy\)

Inverse Tan Graph

Relationship Between Inverse Tangent Function and Other Trigonometric Functions

Consider a right triangle whose adjacent and opposite sides have lengths 1 and $x$ respectively. The length of the hypotenuse is therefore \(\sqrt{1+x^{2}}\). For this triangle, if the angle \(\Theta\) is \(\arctan(x)\), then the following relationships hold true for the three basic trigonometric functions:

\(\sin(\arctan(x)) = \frac{x}{\sqrt{1+x^{2}}}\)
\(\cos(\arctan(x)) = \frac{1}{\sqrt{1+x^{2}}}\)
\(\tan(\arctan(x)) = x\)

Inverse Tangent Properties

The basic properties of the inverse tan function, arctan, are listed below:

Notation: $y = \arctan(x)$
Defined as: $x = \tan(y)$
Domain: all real numbers
Range of the principal value in radians: $-\pi/2 < y < \pi/2$
Range of the principal value in degrees: $-90° < y < 90°$

What is the Value of $\tan^{-1}$ Infinity?

To calculate the value of tan inverse of infinity (∞), we check the trigonometry table. From the table we know that the tangent of the angle $\pi/2$ or 90° tends to infinity, i.e., $\tan 90° = \infty$ or $\tan \pi/2 = \infty$. Therefore, $\tan^{-1}(\infty) = \pi/2$ or $\tan^{-1}(\infty) = 90°$.
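A quick numerical spot-check of the triangle identities and the limiting value above (a Python sketch):

```python
# Verify sin/cos/tan of arctan(x) against the right-triangle formulas,
# and the limiting value arctan(x) -> pi/2 as x -> infinity.
import math

for x in [0.3, 1.0, 2.5]:
    t = math.atan(x)
    assert math.isclose(math.sin(t), x / math.sqrt(1 + x**2))
    assert math.isclose(math.cos(t), 1 / math.sqrt(1 + x**2))
    assert math.isclose(math.tan(t), x)

print(math.atan(float("inf")))   # -> 1.5707963267948966, i.e. pi/2
```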
A price-taking firm takes prices as given, but that does not mean that the firm cannot influence prices; it just means that the firm ignores its own impact on prices. Now the question is how sensible it is to assume that firms take prices as given. The usual view is that it is a reasonable assumption when the impact of a firm on prices is small enough ...

This does sound a lot like the "contradiction" that Keen tries to derive. The key to resolving it is to remember that firms are small relative to the market, so $$\frac{\mathrm dQ}{\mathrm dq_i} = 0.$$ One way to justify the above restriction is to assume there is a continuum of firms, so that each firm has zero measure, and $$Q = \int_{j \in I} q_j \, \...

You're looking for gross output; GDP is final output. Per the BEA: Economy-wide, real gross output—principally a measure of an industry's sales or receipts, which includes sales to final users in the economy (GDP) and sales to other industries (intermediate inputs). For calculation purposes, it's generally assumed that a firm purchasing a good (for ...

Aggregate supply is a relationship between the price level and output. It is a function, or a curve, or a table. It is not a single value. If we know a particular price level, then we can determine the level of output that would correspond with it. The GDP for 2006 is determined by plugging the price level of 2006 into the AS curve for 2006, and seeing what output ...

Consider two firms with production functions $A_1 F(k,l)$ and $A_2 F(k,l)$. Both have the same curvature (and are CRS), but one is more productive. Here we can solve the social planner's problem (why?) instead of looking at a competitive equilibrium. He will want to equalize marginal returns across the firms. As one is more efficient, it will get more inputs. ...

At the outset, discovery does not equate to availability.
Huge amounts of platinum (or the material at issue) might be discovered, but that does not necessarily mean that its extraction is feasible or practicable with the currently available techniques. See here and here. Assuming that the discovered material is economically available, then supply would ...

For question 1, I think that investment has a demand-side effect on the economy. I think it isn't a supply-side effect because it isn't stated that the investors are responding to a supply-side shock. So I'm saying that AD shifts right. With shifts in the curves, there will be a change in equilibrium. For example, the process you described of both curves ...

The existence of aggregate demand does not mean that aggregate demand is fulfilled. For instance, if the aggregate planned demand is 100 billion dollars and, due to crisis or war, output is very low at just 10 billion dollars, does the national income equal 100 billion dollars?

The role of the 45-degree line while showing a consumption function is that the 45-degree line translates the values on the x-axis to equal values on the y-axis. The consumption function is drawn on an income and aggregate expenditure plane. Hence the 45-degree line shows consumption + savings, which is aggregate output or aggregate supply or income. The other way to ...

Explaining the shape of the horizontal range: In the very short run, the AS curve is perfectly price-elastic (i.e. on the diagram, it is a horizontal line). It is also referred to as the Keynesian range. In this time period, firms respond to a rise in demand for their product without considering the effects of the rising demand, such as higher prices. This ...

In shifting the $AD_1 \rightarrow AD$ curves to achieve long-run equilibrium, real GDP will have to increase by 200 billion.
Refer to the diagram below, (in purple).$\ MPC = 0.75 \rightarrow MPS = 0.25$In this 3 sector closed-economy model, the multiplier can be calculated using :$$\ K = \frac{1}{MPS+MPT} $$Assuming MPT (Marginal Propensity to Tax) ... Total investment in terms of how much capital is augmented, is always $I = I_{b} + I_{h}$.$(I_{b}^K + I_{h}^K)^{\frac{1}{K}}$ is equal to the amount of the intermediate good $Y$ that we need to allocate for investment. And given the formulation and when $K>1$ we see that we economize on the amount of the intermediate good $Y$,$$Y_I = (I_{b}^K + I_{h}... Short answer: Yes, the SRAS curve will shift after the LRAS shifts to return the short-run equilibrium (SRAS/AD) back in line with the long-run equilibrium (LRAS/AD). The reason the SRAS curve doesn't shift immediately with LRAS is that there are so-called "frictions" or "nominal rigidities" such as contracts and information gaps that prevent firms from ... The SRAS also shifts.SRAS is normally used in models where the supply side does not adjust immediately to the new conditions in the market, i.e. models with nominal rigidities. Examples are: sticky wages, sticky prices.Other instances where adjustments in prices or nominal money supply are not immediate and in which money has a real effect are the Lucas '...
[Question cross posted on stack-exchange] I'm slowly working through Part III of the book, and I'm scratching my head a bit while reading the proof of Lemma 3.2 (here reproduced): Let $X, A$ be a pair of CW-spectra, and $Y, B$ a pair of spectra such that $\pi_*(Y, B) = 0$. Suppose given a map $f: X \to Y$ and a homotopy $h: Cyl(A) \to Y$ from $f|A$ to a map $g: A \to B$. Then the homotopy can be extended over $Cyl(X)$ so as to deform $f$ to a map $X \to B$. Adams begins by choosing representing functions for the maps. We choose $f' : X' \to Y$ and $h' : Cyl(A') \to Y$ for $X'$ cofinal in $X$ and $A'$ cofinal in $A$. However, later in the proof it seems that we must use the (assumed?) fact that $h'$ restricted to the bottom of the cylinder agrees with $f'$. In other words, since $f'$ and $h'$ were chosen at the beginning of the proof, I don't see how we can require $h' \circ i_0 = f'| A'$, where $i_0 : A' \to Cyl(A')$ is inclusion on the bottom of the cylinder. Is there an unstated step such as "If $h' \circ i_0 \neq f' |A'$, then find cofinal sub-spectra and representing functions for which this is true"? Even that seems a bit fishy to me.
Articles 1 - 3 of 3 Full-Text Articles in Physics

Topology Of Smectic Order On Compact Substrates, Xiangjun Xing (Physics). Smectic orders on curved substrates can be described by differential forms of rank one (1-forms), whose geometric meaning is the differential of the local phase field of the density modulation. The exterior derivative of the 1-form is the local dislocation density. Elastic deformations are described by superposition of exact differential forms. Applying this formalism to study smectic order on the torus as well as on the sphere, we find that both systems exhibit many topologically distinct low-energy states, which can be characterized by two integer topological charges. The total number of low-energy states scales as the square root of the substrate area ...

Numerical Results For The Ground-State Interface In A Random Medium, Alan Middleton (Physics). The problem of determining the ground state of a $d$-dimensional interface embedded in a $(d+1)$-dimensional random medium is treated numerically. Using a minimum-cut algorithm, the exact ground states can be found for a number of problems for which other numerical methods are inexact and slow. In particular, results are presented for the roughness exponents and ground-state energy fluctuations in a random bond Ising model. It is found that the roughness exponent $\zeta = 0.41 \pm 0.01, 0.22 \pm 0.01$, with the related energy exponent being $\theta = 0.84 \pm 0.03, 1.45 \pm \dots$

Translational Correlations In The Vortex Array At The Surface Of A Type-II Superconductor, M. Cristina Marchetti, David R. Nelson (Physics). We discuss the statistical mechanics of magnetic flux lines in a finite-thickness slab of type-II superconductor.
The long wavelength properties of a flux-line liquid in a slab geometry are described by a hydrodynamic free energy that incorporates the boundary conditions on the flux lines at the sample's surface as a surface contribution to the free energy. Bulk and surface weak disorder are modeled via Gaussian impurity potentials. This free energy is used to evaluate the two-dimensional structure factor of the flux-line tips at the sample surface. We find that surface interaction always dominates in determining the decay of translational ...
An indefinite nonlinear diffusion problem in population genetics, I: Existence and limiting profiles

1. Tokyo University of Marine Science and Technology, 4-5-7 Konan, Minato-ku, Tokyo 108-8477
2. School of Mathematics, University of Minnesota, Minneapolis, Minnesota 55455
3. School of Mathematics, University of Minnesota, Minneapolis, MN 55455, United States

$$d\Delta u + g(x)u^{2}(1-u) = 0 \ \text{in } \Omega, \qquad 0 \leq u \leq 1 \ \text{in } \Omega, \qquad \frac{\partial u}{\partial\nu} = 0 \ \text{on } \partial\Omega,$$ where $\Delta$ is the Laplace operator, $\Omega$ is a bounded smooth domain in $\mathbb{R}^{N}$ with $\nu$ as its unit outward normal on the boundary $\partial\Omega$, and $g$ changes sign in $\Omega$. This equation models the "complete dominance" case in the population genetics of two alleles. We show that the diffusion rate $d$ and the integral $\int_{\Omega} g \, dx$ play important roles in the existence of stable nontrivial solutions, and the sign of $g(x)$ determines the limiting profile of solutions as $d$ tends to $0$. In particular, a conjecture of Nagylaki and Lou has been largely resolved. Our results and methods cover a much wider class of nonlinearities than $u^{2}(1-u)$, and similar results have been obtained for Dirichlet and Robin boundary value problems as well.

Mathematics Subject Classification: Primary: 35K57; Secondary: 35B2.

Citation: Kimie Nakashima, Wei-Ming Ni, Linlin Su. An indefinite nonlinear diffusion problem in population genetics, I: Existence and limiting profiles. Discrete & Continuous Dynamical Systems - A, 2010, 27 (2): 617-641. doi: 10.3934/dcds.2010.27.617
In his paper "On the origin of inertia", Sciama identifies $$\frac{\Phi + \phi}{c^2} = -\frac{1}{G}.$$ This identity has confused me because I wonder how the right-hand side arises, since $\frac{\phi}{c^2}$ is a dimensionless quantity, often seen in redshift formulas. I take it he really did mean this formula, since he goes on to establish $G\Phi = -c^2$ (which may also be written as $\Phi = -\frac{c^2}{G}$). Taking a look at an application, he gives $$\frac{m}{r^2} = -\left(\frac{\Phi + \phi}{c^2}\right)\frac{dv}{dt}.$$ Since velocity is $\frac{dx}{dt}$, we have $\frac{dv}{dt} = \frac{d^2x}{dt^2}$, which is an acceleration term. The term on the left-hand side, if that is all there is (no other additional constants set to 1), is calculated by stating $F = ma = GM \frac{m}{r^2}$. Divide off $GM$ and it gives $\frac{a}{G} = \frac{m}{r^2}$, in which we do indeed get an inverse Newtonian constant $G^{-1}$ and an acceleration term $a$, which means $-\left(\frac{\Phi + \phi}{c^2}\right)$ also has to be defined in units of $-\frac{1}{G}$. This seems to be because of the difference in dimensions between the potential known as the voltage and the one usually attributed to the Newtonian gravitational $\phi$. Later the dimensions do seem to make sense to me. For Sciama's relationship $$\frac{m}{r^2} = \omega^2 r$$ to be true, he must be assuming $G=1$; with it in normal units, $$\frac{m}{r^2} = \frac{\omega^2 r}{G}.$$ Sciama uses the gravitational definition of the potential in the following way: $$-\frac{M}{r^2} - \frac{\phi}{c^2} \frac{\partial v}{\partial t},$$ with $\phi = \frac{Gm}{r}$ the scalar potential. These dimensions make sense where Sciama has set Newton's constant to 1. So, in regard to $\frac{\Phi + \phi}{c^2} = -\frac{1}{G}$, how does this arise dimensionally?
This is probably quite an obscure question, but hopefully somebody has a simple answer. I'm studying the proof of the topology theorem on black holes due to Hawking and Ellis (Proposition 9.3.2, p. 335 of their famous book; see also Heusler, "Black hole uniqueness theorems", p. 99, Theorem 6.17). Their proof relies critically on a `theorem due to Hodge' which I have had no success in locating. I own Hodge's book, to which they refer, "The theory and applications of harmonic integrals", but cannot find the actual theorem they are using. Specifically, the important expression is (eq. (9.6), p. 336 of Hawking and Ellis): $$p_{b ; d} \hat{h}^{bd} + y_{; bd} \hat{h}^{bd} - R_{ac} Y^{a}_{1} Y^{c}_{2} + R_{adcb} Y^{d}_{1} Y^{c}_{2} Y^{a}_{2} Y^{b}_{1} + p'^{a} p'_{a} \tag{1}$$ They claim one can choose $y$ such that $(1)$ is constant, with sign depending on the integral: $$\int_{\partial \mathscr{B}(\tau)} (- R_{ac} Y^{a}_{1} Y^{c}_{2} + R_{adcb} Y^{d}_{1} Y^{c}_{2} Y^{a}_{2} Y^{b}_{1})$$ In the above: $\partial \mathscr{B}$ is the horizon surface, $Y^{j}_{1}, Y^{\ell}_{2}$ are future-directed null vectors orthogonal to $\partial \mathscr{B}$, $\hat{h}^{ij}$ is the metric induced on $\partial \mathscr{B}$ from the space-time, $p^{a} = - \hat{h}^{ba} Y_{2 c ; b} Y^{c}_{1}$, $y$ is the transformation $\boldsymbol{Y}'_{1} = e^{y} \boldsymbol{Y}_{1}$, $\boldsymbol{Y}'_{2} = e^{-y} \boldsymbol{Y}_{2}$, and finally $p'^{a} = p^{a} + \hat{h}^{a b} y_{; b}$. So $(1) = \text{const}$ becomes a differential equation in $y$. Any ideas on which theorem is invoked? This post imported from StackExchange Physics at 2014-10-16 11:14 (UTC), posted by SE-user Arthur Suvorov
Now showing items 1-2 of 2

D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV with ALICE at the LHC (Elsevier, 2017-11). ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...

ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV (Elsevier, 2017-11). ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\mathrm{T}}$). At central rapidity ($|y| < 0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ...
Log-Concave Distribution Estimation

Introduction

Let \(n = 1\) and suppose \(x_i\) are i.i.d. samples from a log-concave discrete distribution on \(\{0,\ldots,K\}\) for some \(K \in {\mathbf Z}_+\). Define \(p_k := {\mathbf {Prob}}(X = k)\) to be the probability mass function. One method for estimating \(\{p_0,\ldots,p_K\}\) is to maximize the log-likelihood function subject to a log-concavity constraint, i.e., \[ \begin{array}{ll} \underset{p}{\mbox{maximize}} & \sum_{k=0}^K M_k\log p_k \\ \mbox{subject to} & p \geq 0, \quad \sum_{k=0}^K p_k = 1, \\ & p_k \geq \sqrt{p_{k-1}p_{k+1}}, \quad k = 1,\ldots,K-1, \end{array} \] where \(p \in {\mathbf R}^{K+1}\) is our variable of interest and \(M_k\) represents the number of observations equal to \(k\), so that \(\sum_{k=0}^K M_k = m\).

The problem as posed above is not convex. However, we can transform it into a convex optimization problem by defining new variables \(u_k = \log p_k\) and relaxing the equality constraint to \(\sum_{k=0}^K p_k \leq 1\), since the latter always holds tightly at an optimal solution. The result is \[ \begin{array}{ll} \underset{u}{\mbox{maximize}} & \sum_{k=0}^K M_k u_k \\ \mbox{subject to} & \sum_{k=0}^K e^{u_k} \leq 1, \\ & u_k - u_{k-1} \geq u_{k+1} - u_k, \quad k = 1,\ldots,K-1. \end{array} \]

Example

We draw \(m = 25\) observations from a log-concave distribution on \(\{0,\ldots,100\}\). We then estimate the probability mass function using the above method and compare it with the empirical distribution.
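As an aside before the R example: the log-concavity constraint \(p_k \geq \sqrt{p_{k-1}p_{k+1}}\) squares to \(p_k^2 \geq p_{k-1}p_{k+1}\), which is exactly what the substitution \(u_k = \log p_k\) turns into the linear condition \(u_k - u_{k-1} \geq u_{k+1} - u_k\). A minimal sanity check of that condition (in Python rather than this post's R; the function name is ours):

```python
def is_log_concave(p, tol=1e-12):
    """Check p_k^2 >= p_{k-1} * p_{k+1} for every interior index k."""
    return all(p[k] ** 2 + tol >= p[k - 1] * p[k + 1]
               for k in range(1, len(p) - 1))

binom = [1/16, 4/16, 6/16, 4/16, 1/16]   # binomial(4, 1/2) pmf: log-concave
bimodal = [0.4, 0.1, 0.4, 0.05, 0.05]    # two modes: not log-concave
print(is_log_concave(binom), is_log_concave(bimodal))  # True False
```

A unimodal pmf like the binomial passes, while any pmf with two separated modes fails at the dip between them, which is why the constraint rules out the oscillating empirical estimate seen later.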
set.seed(1)

## Calculate a piecewise linear function
pwl_fun <- function(x, knots) {
  n <- nrow(knots)
  x0 <- sort(knots$x, decreasing = FALSE)
  y0 <- knots$y[order(knots$x, decreasing = FALSE)]
  slope <- diff(y0)/diff(x0)
  sapply(x, function(xs) {
    if(xs <= x0[1])
      y0[1] + slope[1]*(xs - x0[1])
    else if(xs >= x0[n])
      y0[n] + slope[n-1]*(xs - x0[n])
    else {
      idx <- which(xs <= x0)[1]
      y0[idx-1] + slope[idx-1]*(xs - x0[idx-1])
    }
  })
}

## Problem data
m <- 25
xrange <- 0:100
knots <- data.frame(x = c(0, 25, 65, 100), y = c(10, 30, 40, 15))
xprobs <- pwl_fun(xrange, knots)/15
xprobs <- exp(xprobs)/sum(exp(xprobs))
x <- sample(xrange, size = m, replace = TRUE, prob = xprobs)
K <- max(xrange)
counts <- hist(x, breaks = -1:K, right = TRUE, include.lowest = FALSE,
               plot = FALSE)$counts

ggplot() +
  geom_histogram(mapping = aes(x = x), breaks = -1:K,
                 color = "blue", fill = "orange")

We now solve the problem with the log-concavity constraint.

u <- Variable(K+1)
obj <- t(counts) %*% u
constraints <- list(sum(exp(u)) <= 1,
                    diff(u[1:K]) >= diff(u[2:(K+1)]))
prob <- Problem(Maximize(obj), constraints)
result <- solve(prob)
pmf <- result$getValue(exp(u))

The above lines transform the variables \(u_k\) to \(e^{u_k}\) before calculating their resulting values. This is possible because exp is a member of the CVXR library of atoms, so it can operate directly on a Variable object such as u. Below are the comparison plots of the pmf and cdf.
dens <- density(x, bw = "sj")
d <- data.frame(x = xrange,
                True = xprobs,
                Optimal = pmf,
                Empirical = approx(x = dens$x, y = dens$y, xout = xrange)$y)
plot.data <- gather(data = d, key = "Type", value = "Estimate",
                    True, Empirical, Optimal, factor_key = TRUE)
ggplot(plot.data) +
  geom_line(mapping = aes(x = x, y = Estimate, color = Type)) +
  theme(legend.position = "top")

d <- data.frame(x = xrange,
                True = cumsum(xprobs),
                Empirical = cumsum(counts) / sum(counts),
                Optimal = cumsum(pmf))
plot.data <- gather(data = d, key = "Type", value = "Estimate",
                    True, Empirical, Optimal, factor_key = TRUE)
ggplot(plot.data) +
  geom_line(mapping = aes(x = x, y = Estimate, color = Type)) +
  theme(legend.position = "top")

From the figures we see that the estimated curve is much closer to the true distribution, exhibiting a similar shape and number of peaks. In contrast, the empirical probability mass function oscillates, failing to be log-concave on parts of its domain. These differences are reflected in the cumulative distribution functions as well.
Session Info sessionInfo() ## R version 3.6.0 (2019-04-26)## Platform: x86_64-apple-darwin18.5.0 (64-bit)## Running under: macOS Mojave 10.14.5## ## Matrix products: default## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib## ## locale:## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8## ## attached base packages:## [1] stats graphics grDevices datasets utils methods base ## ## other attached packages:## [1] tidyr_0.8.3 ggplot2_3.1.1 CVXR_0.99-6 ## ## loaded via a namespace (and not attached):## [1] gmp_0.5-13.5 Rcpp_1.0.1 compiler_3.6.0 ## [4] pillar_1.4.1 plyr_1.8.4 R.methodsS3_1.7.1## [7] R.utils_2.8.0 tools_3.6.0 digest_0.6.19 ## [10] bit_1.1-14 evaluate_0.14 tibble_2.1.2 ## [13] gtable_0.3.0 lattice_0.20-38 pkgconfig_2.0.2 ## [16] rlang_0.3.4 Matrix_1.2-17 yaml_2.2.0 ## [19] blogdown_0.12.1 xfun_0.7 withr_2.1.2 ## [22] dplyr_0.8.1 Rmpfr_0.7-2 ECOSolveR_0.5.2 ## [25] stringr_1.4.0 knitr_1.23 tidyselect_0.2.5 ## [28] bit64_0.9-7 grid_3.6.0 glue_1.3.1 ## [31] R6_2.4.0 rmarkdown_1.13 bookdown_0.11 ## [34] purrr_0.3.2 magrittr_1.5 scales_1.0.0 ## [37] htmltools_0.3.6 scs_1.2-3 assertthat_0.2.1 ## [40] colorspace_1.4-1 labeling_0.3 stringi_1.4.3 ## [43] lazyeval_0.2.2 munsell_0.5.0 crayon_1.3.4 ## [46] R.oo_1.22.0
What’s a decision tree?

A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents an outcome of the test, and each leaf node represents a class label (the decision taken after computing all attributes). The paths from root to leaf represent classification rules.

A decision tree consists of 3 types of nodes:
- Decision nodes - commonly represented by squares
- Chance nodes - represented by circles
- End nodes - represented by triangles

An example of a decision tree goes below: we get a decision tree using training data (Abe, Barb, Colette and Don), and we can then get the Home value of Sally using this decision tree.

Metrics

Information Gain

What’s entropy? The entropy (very common in Information Theory) characterizes the impurity of an arbitrary collection of examples. We can calculate the entropy as follows: \(entropy(p)=-\sum\limits_{i=1}^n{P_i}\log{P_i}\) For example, for the set \(R = \{a, a, a, b, b, b, b, b\}\): \(entropy(R)=-\frac{3}{8}\log\frac{3}{8}-\frac{5}{8}\log\frac{5}{8}\)

What’s information entropy?
In general terms, the expected information gain is the change in information entropy H from a prior state to a state that takes some information as given: \(IG(D,A)=\Delta{Entropy}=Entropy(D)-Entropy(D|A)\) Take the image below as an example. The entropy before splitting: \(Entropy(D)=-\frac{14}{30}\log\frac{14}{30}-\frac{16}{30}\log\frac{16}{30}\approx{0.996}\) The entropy after splitting: \(Entropy(D|A)=-\frac{17}{30}(\frac{13}{17}\log\frac{13}{17}+\frac{4}{17}\log\frac{4}{17})-\frac{13}{30}(\frac{1}{13}\log\frac{1}{13}+\frac{12}{13}\log\frac{12}{13})\approx{0.615}\) So the information gain is: \(IG(D,A)=Entropy(D)-Entropy(D|A)\approx0.381\)

Information Gain Ratio

Problem of the information gain approach: it is biased towards tests with many outcomes (attributes having a large number of values). E.g. an attribute acting as a unique identifier produces a large number of partitions (1 tuple per partition); each resulting partition D is pure (entropy(D)=0), so the information gain is maximized.

What’s information gain ratio? Information gain ratio overcomes the bias of information gain by applying a kind of normalization to information gain using a split information value. The split information value represents the potential information generated by splitting the training data set D into v partitions, corresponding to the v outcomes on attribute A: \(SplitInfo_A(D)=-\sum\limits_{j=1}^v\frac{|D_j|}{|D|}\times\log_2(\frac{|D_j|}{|D|})\) The gain ratio is defined as: \(GainRatio(D, A)=\frac{IG(D, A)}{SplitInfo_A(D)}\) Let's take the second image in this article as an example: \(IG(D, A)\approx0.381\), \(SplitInfo_A(D)=-\frac{17}{30}\log_2\frac{17}{30}-\frac{13}{30}\log_2\frac{13}{30}\approx0.987\), \(GainRatio(D, A)=\frac{IG(D, A)}{SplitInfo_A(D)}\approx0.386\)

Gini Impurity

The Gini index is used in CART.
Using the notation previously described, the Gini index measures the impurity of D, a data partition or set of training tuples, as: \(Gini(D)=1-\sum\limits_{i=1}^m{p_i^2}\) where \(p_i\) is the probability that a tuple in D belongs to class \(C_i\), estimated by \(|C_{i,D}| / |D|\). The sum is computed over the m classes. A Gini index of zero means the partition is pure (all tuples belong to the same class); the larger the index, the more evenly the tuples are spread across classes. Still taking the image above as an example: \(Gini(D_1)=1-(\frac{13}{17})^2-(\frac{4}{17})^2=\frac{104}{289}\approx0.360\) \(Gini(D_2)=1-(\frac{12}{13})^2-(\frac{1}{13})^2=\frac{24}{169}\approx0.142\) \(Gini(D)=\frac{|D_1|}{|D|}Gini(D_1)+\frac{|D_2|}{|D|}Gini(D_2)=\frac{176}{663}\approx0.265\)

Decision Tree Algorithms

ID3 Algorithm

ID3 (Iterative Dichotomiser 3) is an algorithm used to generate a decision tree from a dataset. The ID3 algorithm begins with the original set S as the root node. On each iteration, it iterates through every unused attribute of the set S and calculates the entropy H(S) (or information gain IG(A)) of that attribute. It then selects the attribute which has the smallest entropy (or largest information gain) value. The set S is then split by the selected attribute (e.g. age < 50, 50 <= age < 100, age >= 100) to produce subsets of the data. The algorithm continues to recurse on each subset, considering only attributes never selected before.
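Before moving on, the metric calculations above are only a few lines of code. The sketch below (Python; this tutorial itself shows no code, so the function names and structure are ours) reproduces the worked entropy, information-gain, gain-ratio, and Gini numbers, including the 30-example split whose class counts go from (14, 16) to the partitions (13, 4) and (1, 12):

```python
import math

def entropy(counts):
    """Shannon entropy in bits, computed from class counts."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def gini(counts):
    """Gini impurity computed from class counts."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

def info_gain(total, parts):
    """Entropy before a split minus the weighted entropy after it."""
    n = sum(total)
    return entropy(total) - sum(sum(p) / n * entropy(p) for p in parts)

def split_info(parts):
    """Potential information of the split itself (denominator of gain ratio)."""
    return entropy([sum(p) for p in parts])

# The set R = {a, a, a, b, b, b, b, b}: class counts (3, 5)
print(round(entropy([3, 5]), 3))                      # ≈ 0.954

# The 30-example split: (14, 16) -> (13, 4) and (1, 12)
parts = [[13, 4], [1, 12]]
ig = info_gain([14, 16], parts)
print(round(ig, 3))                                   # ≈ 0.381
print(round(ig / split_info(parts), 3))               # gain ratio ≈ 0.386

# Gini for the same two partitions and their weighted sum
wg = 17 / 30 * gini([13, 4]) + 13 / 30 * gini([12, 1])
print(round(gini([13, 4]), 3), round(gini([12, 1]), 3), round(wg, 3))
```

The printed values agree with the hand calculations above to within rounding (0.954, 0.381, 0.386, 0.360, 0.142, 0.265).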
Recursion on a subset may stop in one of these cases:
- every element in the subset belongs to the same class (+ or -); then the node is turned into a leaf and labelled with the class of the examples;
- there are no more attributes to be selected, but the examples still do not belong to the same class (some are + and some are -); then the node is turned into a leaf and labelled with the most common class of the examples in the subset.

Let's take the following data as an example:

Day  Outlook   Temperature  Humidity  Wind    Play ball
D1   Sunny     Hot          High      Weak    No
D2   Sunny     Hot          High      Strong  No
D3   Overcast  Hot          High      Weak    Yes
D4   Rain      Mild         High      Weak    Yes
D5   Rain      Cool         Normal    Weak    Yes
D6   Rain      Cool         Normal    Strong  No
D7   Overcast  Cool         Normal    Strong  Yes
D8   Sunny     Mild         High      Weak    No
D9   Sunny     Cool         Normal    Weak    Yes
D10  Rain      Mild         Normal    Weak    Yes
D11  Sunny     Mild         Normal    Strong  Yes
D12  Overcast  Mild         High      Strong  Yes
D13  Overcast  Hot          Normal    Weak    Yes
D14  Rain      Mild         High      Strong  No

The information gain is calculated for all four attributes: \(IG(S, Outlook)=0.246\), \(IG(S, Temperature)=0.029\), \(IG(S, Humidity)=0.151\), \(IG(S, Wind)=0.048\). So we'll choose the Outlook attribute for the first split. For the node where Outlook = Overcast, we'll find that all the examples have the same value of Play ball, so it becomes a leaf. The other two nodes must be split once more. Let's take the node where Outlook = Sunny as an example: \(IG(S_{sunny}, Temperature)=0.570\), \(IG(S_{sunny}, Humidity)=0.970\), \(IG(S_{sunny}, Wind)=0.019\). So we'll choose the Humidity attribute for this node. Repeat these steps and you'll get a decision tree as below.

C4.5 Algorithm

C4.5 builds decision trees from a set of training data in the same way as ID3, using an extension to information gain known as gain ratio. Improvements over the ID3 algorithm:
- Handling both continuous and discrete attributes;
- Handling training data with missing attribute values - C4.5 allows attribute values to be marked as ?
for missing;
- Pruning trees after creation.

CART Algorithm

The CART (Classification & Regression Trees) algorithm is a binary decision tree algorithm. It recursively partitions data into 2 subsets so that cases within each subset are more homogeneous, and it allows consideration of misclassification costs, prior distributions, and cost-complexity pruning. Let's take the following data as an example:

TID  Buy House  Marital Status  Taxable Income  Cheat
1    Yes        Single          125K            No
2    No         Married         100K            No
3    No         Single          70K             No
4    Yes        Married         120K            No
5    No         Divorced        95K             Yes
6    No         Married         60K             No
7    Yes        Divorced        220K            No
8    No         Single          85K             Yes
9    No         Married         75K             No
10   No         Single          90K             Yes

Firstly, we calculate the Gini index for Buy House:

           House=Yes  House=No
Cheat=Yes  0          3
Cheat=No   3          4

So the Gini index for this attribute is: \(Gini(House=Yes)=1-(\frac{3}{3})^2-(\frac{0}{3})^2=0\) \(Gini(House=No)=1-(\frac{3}{7})^2-(\frac{4}{7})^2=\frac{24}{49}\) \(Gini(House)=\frac{3}{10}\times{0}+\frac{7}{10}\times\frac{24}{49}=\frac{12}{35}\)

How to deal with attributes with more than two values? We calculate the Gini index for the following splits:
- \(Gini(Marital\ Status=Single/Divorced)\) and \(Gini(Marital\ Status=Married)\)
- \(Gini(Marital\ Status=Single/Married)\) and \(Gini(Marital\ Status=Divorced)\)
- \(Gini(Marital\ Status=Divorced/Married)\) and \(Gini(Marital\ Status=Single)\)

Then choose the split with the minimum Gini index. How to deal with continuous attributes?
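One way to answer this, sketched in Python (our naming; the tutorial itself shows no code), is to scan the candidate midpoints between adjacent sorted values and keep the split with the lowest weighted Gini impurity. On the taxable-income data above it recovers the split near 97 (the exact midpoint 97.5, rather than the rounded value a printed table might show):

```python
def gini(counts):
    """Gini impurity from class counts; empty partitions count as pure."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts) if n else 0.0

def class_counts(labels):
    return [labels.count("Yes"), labels.count("No")]

# (Taxable Income, Cheat) pairs from the table, sorted by income
data = [(60, "No"), (70, "No"), (75, "No"), (85, "Yes"), (90, "Yes"),
        (95, "Yes"), (100, "No"), (120, "No"), (125, "No"), (220, "No")]

best_split, best_gini = None, float("inf")
for i in range(len(data) - 1):
    split = (data[i][0] + data[i + 1][0]) / 2   # candidate midpoint
    left = [c for v, c in data if v <= split]
    right = [c for v, c in data if v > split]
    g = (len(left) * gini(class_counts(left)) +
         len(right) * gini(class_counts(right))) / len(data)
    if g < best_gini:
        best_split, best_gini = split, g

print(best_split, round(best_gini, 3))  # 97.5 0.3
```

Only interior midpoints are scanned here; the two boundary positions (everything in one node) always reproduce the impurity of the whole set, so they can never win.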
For efficient computation, for each attribute:
- Sort the attribute values
- Linearly scan these values, each time updating the count matrix and computing the Gini index
- Choose the split position that has the least Gini index

For the taxable-income attribute, the sorted values are:

Cheat:       No  No  No  Yes  Yes  Yes  No   No   No   No
Tax Income:  60  70  75  85   90   95   100  120  125  220

and the class counts and weighted Gini index at each candidate split position are:

Split  Yes (<=, >)  No (<=, >)  Gini
55     0, 3         0, 7        0.420
65     0, 3         1, 6        0.400
72     0, 3         2, 5        0.375
80     0, 3         3, 4        0.343
87     1, 2         3, 4        0.417
92     2, 1         3, 4        0.400
97     3, 0         3, 4        0.300
110    3, 0         4, 3        0.343
122    3, 0         5, 2        0.375
172    3, 0         6, 1        0.400
230    3, 0         7, 0        0.420

So the split position 97 (between Tax Income = 95 and 100) has the minimum Gini index and is chosen as the split node. The values are split into two nodes: Tax Income <= 97 and Tax Income > 97.

Notes on Overfitting

Overfitting results in decision trees that are more complex than necessary. Training error then no longer provides a good estimate of how well the tree will perform on previously unseen records.

Estimating Generalization Errors

We need ways of estimating the generalization error:
- Re-substitution errors: error on the training set (\(\sum e(t)\))
- Generalization errors: error on the test set (\(\sum e'(t)\))

Optimistic approach: e'(t) = e(t)

Pessimistic approach: for each leaf node, e'(t) = e(t) + 0.5, so the total error is e'(T) = e(T) + N × 0.5 (N: number of leaf nodes). For a tree with 30 leaf nodes and 10 errors on training (out of 1000 instances): Training error = 10 / 1000 = 1%; Generalization error = (10 + 30 × 0.5) / 1000 = 2.5%

How to Address Overfitting

Pre-Pruning (Early Stopping Rule): stop the algorithm before it becomes a fully-grown tree. Typical stopping conditions for a node:
- Stop if all instances belong to the same class
- Stop if all the attribute values are the same

More restrictive conditions:
- Stop if the number of instances is less than some user-specified threshold
- Stop if the class distribution of the instances is independent of the available features (e.g., using a χ2 test)
- Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain)

Post-pruning: grow the decision tree to its entirety, then trim the nodes of the decision tree in a bottom-up fashion. If the generalization error improves after trimming, replace the sub-tree by a leaf node whose class label is determined from the majority class of instances in the sub-tree.

Reference
https://drive.google.com/open?id=0B4DvJHU7zJrcSk01eloxZXNkOW8
https://en.wikipedia.org/wiki/Decision_tree
https://en.wikipedia.org/wiki/Decision_tree_learning
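Returning to the ID3 example: the attribute-selection step on the play-ball table can be verified with the sketch below (Python; the data encoding and helper names are ours). Outlook has the largest information gain, as stated, and the printed values match the ones quoted earlier to within rounding:

```python
import math

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def info_gain(rows, attr, target="Play ball"):
    """Entropy of the target minus weighted entropy after splitting on attr."""
    def dist(subset):
        vals = [r[target] for r in subset]
        return [vals.count(v) for v in set(vals)]
    n = len(rows)
    gain = entropy(dist(rows))
    for v in {r[attr] for r in rows}:
        part = [r for r in rows if r[attr] == v]
        gain -= len(part) / n * entropy(dist(part))
    return gain

cols = ("Outlook", "Temperature", "Humidity", "Wind", "Play ball")
data = [  # the 14-day table above
    ("Sunny", "Hot", "High", "Weak", "No"), ("Sunny", "Hot", "High", "Strong", "No"),
    ("Overcast", "Hot", "High", "Weak", "Yes"), ("Rain", "Mild", "High", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Weak", "Yes"), ("Rain", "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Strong", "Yes"), ("Sunny", "Mild", "High", "Weak", "No"),
    ("Sunny", "Cool", "Normal", "Weak", "Yes"), ("Rain", "Mild", "Normal", "Weak", "Yes"),
    ("Sunny", "Mild", "Normal", "Strong", "Yes"), ("Overcast", "Mild", "High", "Strong", "Yes"),
    ("Overcast", "Hot", "Normal", "Weak", "Yes"), ("Rain", "Mild", "High", "Strong", "No"),
]
rows = [dict(zip(cols, r)) for r in data]
gains = {a: info_gain(rows, a) for a in cols[:-1]}
for a, g in gains.items():
    print(a, round(g, 3))
```

(Humidity comes out as 0.152 here versus the 0.151 quoted above; the tutorial's figure appears to be truncated rather than rounded.)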
The other day we derived Kepler's third law, $$ \left( \frac{T_1}{T_2} \right)^2 = \left( \frac{r_1}{r_2} \right)^3. $$ In order to derive this, you can look at a given planet that revolves around the sun. If you assume that the sun is way heavier than the planet and that the planet moves on a circle, you can simply state the force exerted on the planet by the sun: $$ F_G = G \frac{mM}{r^2}, $$ where $F_G$ is the force on the planet, $G$ is the gravitational constant, $m$ is the mass of the planet, $M$ the mass of the sun and $r$ the distance of the planet from the sun. Our tutor said that you now have to have a centrifugal force $F_c$ that is equal in absolute value to $F_G$, but opposite in direction, so that the sum of the forces is zero. Setting the absolute values equal and plugging in the formulas for the respective forces: $$ |\vec{F_c}| = |\vec{F_G}| $$ $$ m \frac{v^2}{r} = G \frac{mM}{r^2} $$ $$ \frac{v^2}{r} = G \frac{M}{r^2} $$ $$ v^2 = G \frac{M}{r} $$ $$ v = \sqrt{G \frac{M}{r}} $$ where $v$ is the tangential velocity of the planet. Together with $v=\frac{2\pi r}{T}$, where $T$ is the revolution period, you get: $$ \frac{2\pi r}{T} = \sqrt{G \frac{M}{r}} $$ $$ \frac{4\pi^2 r^2}{T^2} = G \frac{M}{r} $$ $$ \frac{r^3}{T^2} = G \frac{M}{4\pi^2} $$ Since the right-hand side is constant for each solar system, you can form a ratio and obtain Kepler's law. So this approach does work. I think that this view is only valid if you construct it from a rotating observation point, i.e. standing on the planet and facing the sun at all times. Our tutor said that this is from a static (with respect to the sun) observation point. In the rotating system, you do have the centrifugal force, but it is not a real force in this sense. So by saying that you have a centrifugal force, I think that you are in this rotating reference frame. And in the rotating frame, the planet is not moving at all.
And by not moving in this rotating frame, it does revolve around the sun in the static frame. Since the planet is not moving, there must not be any force on it. Constructing a force equilibrium is the way to achieve this. But our tutor told me that there is a force equilibrium in the static reference frame as well. I say that if you have an equilibrium there, there is no net force, therefore the planet does only move along a straight line in a static frame. I was then told that although there is no net force, the forces are still there. At that point, I think that this is converting dynamics into statics by adding opposing forces so that everything cancels out. Say we do have a net force (gravity) on the planet. This force is something like this, if the planet has rotated $\theta$ around the sun: $$ \vec{F_G} \propto \left( \begin{matrix} -\cos(\theta) \\ -\sin(\theta) \end{matrix} \right) $$ If you integrate this, you will get the velocity and the position of the planet. $$ \int \vec{F_G} \, \mathrm d\theta \propto \left( \begin{matrix} -\sin(\theta) \\ \cos(\theta) \end{matrix} \right) $$ This seems very good, since it goes along the circle, so this is the tangential velocity. $$ \iint \vec{F_G} \, \mathrm d\theta \propto \left( \begin{matrix} \cos(\theta) \\ \sin(\theta) \end{matrix} \right) $$ And that is simply the position around the unit circle. You can substitute $\theta$ with $\omega t$ if you wish to express the angle with time. So I claim that you need a net force in order for the circular trajectory to appear at all. My tutor told me that if the force is too high (too weak), the planet will spiral into (away from) the sun, and therefore the equilibrium has to be there. That makes no sense to me. What makes sense to me is that if the trajectory is deflected too much (not enough) to form a circle, there is no circle. 
If you have no force, the integration turns out quite different: $$ F = 0 $$ $$ \int F \, \mathrm dt = \underbrace{0 \cdot t}_{a=0} + \underbrace{c_1}_{v_0} $$ $$ \iint F \, \mathrm dt = \underbrace{0 \cdot t^2}_{a=0} + \underbrace{c_1t}_{v_0} + \underbrace{c_2}_{d_0} $$ The latter is your $d = v_0 t + d_0$, which models linear, unaccelerated movement. In the rotating system, $v_0$ is just zero, and it works out. In the static system, that would mean that the planet either remains still or moves on a straight line -- which is not what is really happening. So I claim that you must not have a force equilibrium for the planet to revolve around the sun -- that is the basic disagreement we have. Who is right? Or are we both right, just looking differently at the same problem?
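The claim "a net force is needed for the circle" can be sketched numerically in the static frame. Below is a minimal simulation in illustrative units ($GM = 1$, starting radius $1$): gravity is the only force applied, the initial speed is the circular-orbit speed $\sqrt{GM/r}$, and the radius stays constant, so the trajectory is a circle with no opposing force added anywhere.

```python
import math

# Numerical sketch (illustrative units, G*M = 1): integrate a planet's motion
# with gravity as the ONLY force, starting at the circular-orbit speed.
# If the net force were zero, the planet would move on a straight line;
# with the unbalanced gravitational force, the radius stays ~1: a circle.
GM = 1.0
x, y = 1.0, 0.0                  # start at radius r = 1
vx, vy = 0.0, math.sqrt(GM)      # tangential speed sqrt(GM/r)
dt = 1e-4

for _ in range(100_000):         # about 1.6 revolutions
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3   # gravity, the single net force
    vx += ax * dt; vy += ay * dt          # semi-implicit Euler step
    x += vx * dt; y += vy * dt

print(math.hypot(x, y))  # stays close to 1.0: circular trajectory
```

This is only a sketch of the kinematics, not an answer to the frame question itself, but it shows that the static-frame description needs the unbalanced centripetal force to produce the circle.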
In many situations, additional information about the result of a probability experiment is known (or at least assumed to be known), and given that information the probability of some other event is desired. For this scenario, we compute what is referred to as conditional probability. Definition \(\PageIndex{1}\) For events \(A\) and \(B\), with \(P(B) > 0\), the conditional probability of \(A\) given \(B\), denoted \(P(A\ |\ B)\), is given by $$P(A\ |\ B) = \frac{P(A \cap B)}{P(B)}.$$ In computing a conditional probability we assume that we know the outcome of the experiment is in event \(B\) and then, given that additional information, we calculate the probability that the outcome is also in event \(A\). This is useful in practice given that partial information about the outcome of an experiment is often known, as the next example demonstrates. Example \(\PageIndex{1}\) Continuing in the context of Example 1.2.1, where we considered tossing a fair coin twice, define \(D\) to be the event that at least one tails is recorded: $$D = \{ht, th, tt\}$$ Let's calculate the conditional probability of \(A\) given \(D\), i.e., the probability that at least one heads is recorded given that at least one tails is recorded: $$P(A\ |\ D) = \frac{P(A\cap D)}{P(D)} = \frac{P(\{ht, th\})}{P(\{ht, th, tt\})}= \frac{(2/4)}{(3/4)} = \frac{2}{3} \approx 0.67.$$ Note that in Example 1.2.1 we found the unconditional probability of \(A\) to be \(P(A) = 0.75\). So, knowing that at least one tails was recorded, i.e., assuming event \(D\) occurred, the probability of \(A\) decreased. This is because, if event \(D\) occurs, then the outcome \(hh\) in \(A\) cannot occur, thereby decreasing the chances that event \(A\) occurs. Exercise \(\PageIndex{1}\) Suppose we randomly draw a card from a standard deck of 52 playing cards. If we know that the card is a King, what is the probability that the card is a club? If we instead know that the card is black, what is the probability that the card is a club?
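The coin-toss example can be verified by direct enumeration of the four equally likely outcomes (a small sketch; event names match the example above):

```python
from itertools import product
from fractions import Fraction

# Enumerate the equally likely outcomes of two fair coin tosses and verify
# P(A | D) = P(A ∩ D) / P(D) = 2/3, where A = "at least one heads" and
# D = "at least one tails", as in the example above.
outcomes = [''.join(p) for p in product('ht', repeat=2)]  # hh, ht, th, tt
A = {o for o in outcomes if 'h' in o}
D = {o for o in outcomes if 't' in o}

# For equally likely outcomes this counting form equals P(A ∩ D)/P(D).
p_given = Fraction(len(A & D), len(D))
print(p_given)  # 2/3
```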
Answer In order to compute the necessary probabilities, first note that the sample space is given by the set of cards in a standard deck of playing cards. So the number of outcomes in the sample space is 52. Next, note that the outcomes are equally likely, since we are randomly drawing the card from the deck. For part (a), we are looking for the conditional probability that the randomly selected card is a club, given that it is a King. If we let \(C\) denote the event that the card is a club and \(K\) the event that it is a King, then we are looking to compute $$P(C\ |\ K) = \frac{P(C\cap K)}{P(K)}.\label{condproba}$$ To compute these probabilities, we count the number of outcomes in the following events: \begin{align} \#\ \text{of outcomes in}\ C & = \#\ \text{of clubs in standard deck}\ = 13 \\ \#\ \text{of outcomes in}\ K & = \#\ \text{of Kings in standard deck}\ = 4 \\ \#\ \text{of outcomes in}\ C\cap K & = \#\ \text{of Kings of clubs in standard deck}\ = 1 \end{align} The probabilities in Equation \ref{condproba} are then given by dividing the counts of outcomes in each event by the total number of outcomes in the sample space (by Equation 2.1.8 in the previous section): $$P(C\ |\ K) = \frac{P(C\cap K)}{P(K)} = \frac{(1/52)}{(4/52)} = \frac{1}{4} = 0.25.$$ For part (b), we are looking for the conditional probability that the randomly selected card is a club, given instead that it is black.
If we let \(B\) denote the event that the card is black and, as before, \(C\) the event that it is a club, then we are looking to compute $$P(C\ |\ B) = \frac{P(C\cap B)}{P(B)}.\label{condprobb}$$ To compute these probabilities, we count the number of outcomes in the following events: \begin{align} \#\ \text{of outcomes in}\ B & = \#\ \text{of black cards in standard deck}\ = 26 \\ \#\ \text{of outcomes in}\ C\cap B & = \#\ \text{of clubs in standard deck (all of which are black)}\ = 13 \end{align} The probabilities in Equation \ref{condprobb} are then given by dividing the counts of outcomes in each event by the total number of outcomes in the sample space: $$P(C\ |\ B) = \frac{P(C\cap B)}{P(B)} = \frac{(13/52)}{(26/52)} = \frac{13}{26} = 0.5.$$ Remark: Exercise 2.2.1 demonstrates the following fact. For sample spaces with equally likely outcomes, conditional probabilities are calculated using $$P(A\ |\ B) = \frac{\text{number of outcomes in}\ A\cap B}{\text{number of outcomes in}\ B}.$$ In other words, if we know that the outcome of the probability experiment is in the event \(B\), then we restrict our focus to the outcomes in that event that are also in \(A\). We can think of this as event \(B\) taking the place of the sample space, since we know the outcome must lie in that event. Properties of Conditional Probability As with unconditional probability, we also have some useful properties for conditional probabilities. The first property below, referred to as the Multiplication Law, is simply a rearrangement of the probabilities used to define conditional probability. The Multiplication Law provides a way of computing the probability of an intersection of events when the conditional probabilities are known. Multiplication Law \(P(A \cap B) = P(A\ |\ B) P(B) = P(B\ |\ A) P(A)\) The next two properties are useful when a partition of the sample space exists, where a partition is a way of dividing up the outcomes in the sample space into non-overlapping sets.
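Both parts of the exercise reduce to the counting form of conditional probability. A small sketch that models the deck (the rank/suit encoding below is an illustrative choice, with rank index 12 standing in for "King"):

```python
from fractions import Fraction

# Counting sketch for both parts of the exercise: for equally likely outcomes,
# P(E | F) reduces to |E ∩ F| / |F|.
ranks = list(range(13))          # 13 ranks; index 12 plays the role of "King"
suits = ['club', 'diamond', 'heart', 'spade']
deck = {(r, s) for r in ranks for s in suits}      # 52 equally likely outcomes

C = {c for c in deck if c[1] == 'club'}                 # 13 clubs
K = {c for c in deck if c[0] == 12}                     # the 4 Kings
B = {c for c in deck if c[1] in ('club', 'spade')}      # the 26 black cards

p_club_given_king = Fraction(len(C & K), len(K))
p_club_given_black = Fraction(len(C & B), len(B))
print(p_club_given_king, p_club_given_black)  # 1/4 1/2
```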
A partition is formally defined in the Law of Total Probability below. In many cases, when a partition exists, it is easy to compute the conditional probability of an event in the sample space given an event in the partition. The Law of Total Probability then provides a way of using those conditional probabilities of an event given the partition events to compute the unconditional probability of the event. Following the Law of Total Probability, we state Bayes' Rule, which is really just an application of the Multiplication Law. Bayes' Rule is used to calculate what are informally referred to as "reverse conditional probabilities", which are the conditional probabilities of an event in a partition of the sample space, given any other event. Law of Total Probability Suppose events \(B_1, B_2, \ldots, B_n\) satisfy the following: \(S = B_1 \cup B_2 \cup \cdots \cup B_n\) \(B_i\cap B_j = \varnothing\), for every \(i\neq j\) \(P(B_i)>0\), for \(i=1,\ldots, n\) We say that the events \(B_1, B_2, \ldots, B_n\) partition the sample space \(S\). Then for any event \(A\), we can write $$P(A) = P(A\ |\ B_1) P(B_1) + \cdots + P(A\ |\ B_n) P(B_n).$$ Bayes' Rule Let \(B_1, B_2, \ldots, B_n\) partition the sample space \(S\) and let \(A\) be an event with \(P(A)> 0\). Then, for \(j=1,\ldots, n\), we have $$P(B_j\ |\ A) = \frac{P(A\ |\ B_j) P(B_j)}{P(A)}.$$ A common application of the Law of Total Probability and Bayes' Rule is in the context of medical diagnostic testing. Example \(\PageIndex{2}\) Consider a test that can diagnose kidney cancer. The test correctly detects when a patient has cancer 90% of the time. Also, if a person does not have cancer, the test correctly indicates so 99.9% of the time. Finally, suppose it is known that 1 in every 10,000 individuals has kidney cancer. We find the probability that a patient has kidney cancer, given that the test indicates she does. First, note that we are finding a conditional probability.
If we let \(A\) denote the event that the patient tests positive for cancer, and we let \(B_1\) denote the event that the patient actually has cancer, then we want $$P(B_1\ |\ A).$$ If we let \(B_2 = B_1^c\), then we have a partition of all patients (which is the sample space) given by \(B_1\) and \(B_2\). In the first paragraph of this example, we are given the following probabilities: \begin{align} \textcolor{BurntOrange}{\text{test correctly detects cancer 90% of time:}}\quad & \textcolor{BurntOrange}{P(A\ |\ B_1) = 0.9} \\ \textcolor{goldenrod}{\text{test correctly detects no cancer 99.9% of time:}}\quad & \textcolor{goldenrod}{P(A^c\ |\ B_2) = 0.999} \Rightarrow P(A\ |\ B_2) = 1-P(A^c\ |\ B_2) = 0.001 \\ \textcolor{red}{\text{1 in every 10,000 individuals has cancer:}}\quad & \textcolor{red}{P(B_1) = 0.0001} \Rightarrow P(B_2) = 1 - P(B_1) = 0.9999 \end{align} Since we have a partition of the sample space, we apply the Law of Total Probability to find \(P(A)\): $$P(A) = \textcolor{BurntOrange}{P(A\ |\ B_1)} \textcolor{red}{P(B_1)} + P(A\ |\ B_2) P(B_2) = (\textcolor{BurntOrange}{0.9})(\textcolor{red}{0.0001}) + (0.001)(0.9999) = 0.0010899$$ Next, we apply Bayes' Rule to find the desired conditional probability: $$P(B_1\ |\ A) = \frac{P(A\ |\ B_1) P(B_1)}{P(A)} = \frac{(0.9)(0.0001)}{0.0010899} \approx 0.08$$ This implies that only about 8% of patients that test positive under this particular test actually have kidney cancer, which is not very good.
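The arithmetic of the diagnostic-test example is easy to reproduce; a short sketch using the numbers given above:

```python
# Recompute the diagnostic-test example with the Law of Total Probability
# and Bayes' Rule (all numbers taken from the example above).
p_B1 = 0.0001                 # prior: 1 in 10,000 has the cancer
p_B2 = 1 - p_B1
p_A_given_B1 = 0.9            # test detects cancer 90% of the time
p_A_given_B2 = 1 - 0.999      # false-positive rate = 1 - specificity

# Law of Total Probability over the partition {B1, B2}:
p_A = p_A_given_B1 * p_B1 + p_A_given_B2 * p_B2

# Bayes' Rule for the "reverse" conditional probability:
p_B1_given_A = p_A_given_B1 * p_B1 / p_A
print(round(p_A, 7), round(p_B1_given_A, 2))  # 0.0010899 0.08
```

The low posterior despite the accurate-sounding test is driven by the tiny prior, which is the usual lesson of this example.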
For full proofs and derivations, read here. Abstract This paper generalizes the notion of the geometric distribution to allow for dependent Bernoulli trials generated from dependency generators as defined in Traylor and Hathcock's previous work. The generalized geometric distribution describes a random variable $X$ that counts the number of dependent Bernoulli trials until the first success. The main result of the paper is that $X$ can count dependent Bernoulli trials from any dependency structure and retain the same distribution. That is, if $X$ counts Bernoulli trials with dependency generated by $\alpha_{1} \in \mathscr{C}_{\delta}$, and $Y$ counts Bernoulli trials with dependency generated by $\alpha_{2} \in \mathscr{C}_{\delta}$, then the distributions of $X$ and $Y$ are the same, namely the generalized geometric distribution. Other characterizations and properties of the generalized geometric distribution are given, including the MGF, mean, variance, skew, and entropy. Introduction The standard geometric distribution counts one of two phenomena: The count of i.i.d. Bernoulli trials until the first success The count of i.i.d. Bernoulli trials that resulted in a failure prior to the first success The latter case is simply a shifted version of the former. However, this distribution, in both forms, has limitations because it requires a sequence of independent and identically distributed Bernoulli trials. Korzeniowski [2] originally defined what is now known as first-kind (FK) dependent Bernoulli random variables, and gave a generalized binomial distribution that allowed for dependence among the Bernoulli trials. Traylor [4] extended the work of Korzeniowski to FK-dependent categorical random variables and derived a generalized multinomial distribution in a similar fashion. Traylor and Hathcock [5] extended the notion of dependence among categorical random variables to include other kinds of dependency besides FK dependence, such as sequential dependence.
Their work created a class of vertical dependency structures generated by a set of functions $\mathscr{C}_{\delta}$, whose defining property is known as dependency continuity. In this paper, we derive a generalized geometric distribution from identically distributed but dependent Bernoulli random variables. The main result is that the pdf of the generalized geometric distribution is the same regardless of the dependency structure. That is, for any $\alpha \in \mathscr{C}_{\delta}$ that generates a sequence of identically distributed but dependent Bernoulli trials, the generalized geometric distribution remains unchanged.
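For reference, the first of the two counting phenomena in the i.i.d. baseline case is easy to simulate. The sketch below covers only the *standard* geometric distribution (the paper's dependent generalization is not reproduced here), where $X$ counts trials up to and including the first success, so $E[X] = 1/p$:

```python
import random

# Monte Carlo sketch of the standard geometric distribution (i.i.d. case only):
# X counts Bernoulli(p) trials up to and including the first success.
def geometric_draw(p, rng):
    n = 1
    while rng.random() >= p:  # failure with probability 1 - p: keep flipping
        n += 1
    return n

rng = random.Random(0)
p = 0.25
draws = [geometric_draw(p, rng) for _ in range(200_000)]
print(sum(draws) / len(draws))  # close to 1/p = 4
```

The shifted form mentioned above (failures before the first success) is just `geometric_draw(p, rng) - 1`.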
In this month's podcast, @reflectivemaths and I discuss: Colin's book being available to buy Number of the podcast: Catalan's constant, which is about 0.915 965 (defined as $\frac{1}{1} - \frac{1}{9} + \frac{1}{25} - \frac{1}{49} + \dots = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)^2}$). Not known whether it's rational. Used in combinatorics and is $\int_0^\infty \arctan(e^{-t}) \,\mathrm dt$ Chalkdust magazine Issue 3 is out, and the crossnumber is good. Colin and @christianp have cross-checked their answers using Elaborate Codes. Dave attempts to mock Colin for enjoying maths, and is fighting a losing battle. Dave has been reading about Tupper's Self-Referential Formula, $ \frac{1}{2} < \left\lfloor \operatorname{mod}\left( \left\lfloor \frac{y}{17} \right\rfloor 2^{-17 \lfloor x \rfloor - \operatorname{mod}(\lfloor y \rfloor,\, 17)},\ 2 \right) \right\rfloor $ Dave came across Iva Sallay's Find The Factors game. It's good! Colin refers to Twynam's law, and gets Dave to admit that we should be suspicious about Statistics. @notonlyahatrack points us at the Romanian football team's venture into more interesting shirt numbers: @peterrowlett points us at @stecks's article about @rachelrileyRR's EE advert. (For clarity, as my speech isn't as clear as it might be: the article is by Katie alone, not by Katie and Peter.) Dave's students largely missed an answer in "solve $3x^2 = 147$". Colin thinks it's a bit of a gotcha. Relatively Prime Series 3 didn't reach its Kickstarter goal, and will not happen. @peterrowlett asks us to reveal the secret that Colin writes books. Colin erroneously states that Cracking Mathematics is out soon; it has been delayed until August, for no reason under Colin's control. Gold star for @chrishazell72, who identified that the church in Dave's last puzzle required 81 cards.
This month's puzzle: given an equilateral triangle, what is the probability that a point inside the triangle lies closer to the centre than to any point on the edge? We congratulate ourselves on doing a good show and then Dave Hansens up the ending. * Edited 2016-04-01 to clarify authorship of the Aperiodical article. * Edited 2016-11-18 to correct a typo.
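For the curious, the alternating series for Catalan's constant from the show notes is easy to sum directly (it converges slowly term by term, but the alternating-series error bound makes a large truncation reliable):

```python
# Numerical sketch of Catalan's constant G = sum_{n>=0} (-1)^n / (2n+1)^2,
# the "number of the podcast" above.  Truncating an alternating series leaves
# an error smaller than the first omitted term, here about 6e-14.
G = sum((-1) ** n / (2 * n + 1) ** 2 for n in range(2_000_000))
print(round(G, 6))  # 0.915966
```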
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra". Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus. One of my professors said it to describe a bunch of REUs: basically it boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of. @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have. Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2. Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$. For example, writing $x = ac+bd\delta$ and $y = bc+ad$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$. I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of them. One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
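As a complement to the hand proof of associativity discussed above, the multiplication rule on $\Bbb{Q}(\sqrt{\delta})$ can at least be spot-checked mechanically, representing $a+b\sqrt{\delta}$ as the pair $(a, b)$ (random sampling only, so this is a sanity check rather than a proof):

```python
from fractions import Fraction as F
import random

# Spot-check of associativity for the multiplication rule on Q(sqrt(delta)),
# representing a + b*sqrt(delta) as the pair (a, b).
def mult(p, q, delta):
    # (a + b*sqrt(d)) * (c + e*sqrt(d)) = (a*c + b*e*d) + (b*c + a*e)*sqrt(d)
    return (p[0] * q[0] + p[1] * q[1] * delta, p[1] * q[0] + p[0] * q[1])

rng = random.Random(1)
def rand():
    return F(rng.randint(-9, 9), rng.randint(1, 9))

for _ in range(1000):
    delta = rand()
    alpha, beta, gamma = (rand(), rand()), (rand(), rand()), (rand(), rand())
    assert mult(mult(alpha, beta, delta), gamma, delta) == \
           mult(alpha, mult(beta, gamma, delta), delta)
print("associativity holds on all 1000 samples")
```

Using `Fraction` keeps the arithmetic exact, so the equality test really is the ring identity and not a floating-point coincidence.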
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest ordered field possible. It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or that derive CH, so if your set of axioms contains those, then you can decide the truth value of CH in that system. @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder how to show that it is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book. The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's book of Calculus: Theorem 7 Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment. typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
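The partial sums discussed above are concrete to compute. A sketch with base $b = 10$ (in which case the limit is Liouville's constant, the classic transcendental example), using exact rationals so each partial sum is a genuinely finite object:

```python
from math import factorial
from fractions import Fraction

# Sketch of the partial sums above with b = 10: sum_{k=1}^{M} 1 / b**(k!)
# increases monotonically in M; its limit is Liouville's constant.
def partial_sum(M, b=10):
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

sums = [partial_sum(M) for M in range(1, 6)]
assert all(s < t for s, t in zip(sums, sums[1:]))  # strictly increasing
print(float(sums[-1]))  # ≈ 0.110001...
```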
After an inelastic collision, the sum of the kinetic energies of the bodies is not conserved. The kinetic energy loss is converted into internal energy (related to temperature, elasticity, shape/plasticity and other properties) and heat. In this sense the coefficient of restitution is a GIGO machine (garbage in, garbage out): it just takes the initial energy and the final energy and computes the energy loss. Since energy is conserved in the whole system, the coefficient of restitution must take into account the friction and other variables. To make it clear: the coefficient of restitution takes into account the friction between the surfaces, the drag of the air, the change in the shape of the interacting bodies, the change of temperature of the interacting bodies -- definitely everything that leads to a loss in kinetic energy. Now suppose you have a ball moving towards a wall (we consider a wall because, in this context, it has infinite mass/inertia, so we can discard its dynamics from the equations). We are given that the coefficient of restitution is $e$. Then: $$e=\sqrt{\frac{E_{kin,2}}{E_{kin,1}}}\Rightarrow E_{kin,2}=e^2 E_{kin,1}\Rightarrow |v_2|=e|v_1|$$ The speeds of the ball before and after the collision are related, but we cannot be sure about the velocities. Since part of the energy is lost, the change of shape in the ball or in the wall, the increase in temperature, and other factors will determine how much momentum is absorbed by the wall. Because we cannot control these variables, the velocity (i.e. the angle after the collision) remains unknown. Recall that the kinetic energy and the momentum are related: $$E_{kin} = \frac{p^2}{2m}$$ Therefore: $$e=\sqrt{\frac{E_{kin,2}}{E_{kin,1}}}=\frac{p_2}{p_1}$$ Suppose now we don't have a wall but two balls (A and B) colliding. Now the dynamics is ruled by the conservation of momentum and the conservation of energy (taking into account the coefficient of restitution).
For the sake of simplicity we will consider a head-on collision (so an effectively 1-dimensional problem) with balls of the same mass. Conservation of momentum gives $$p_{1,A}+p_{1,B}=p_{2,A}+p_{2,B}\Rightarrow v_{1,A}+v_{1,B}=v_{2,A}+v_{2,B},$$ and the coefficient of restitution relates the relative velocities of separation and approach: $$e=\frac{v_{2,B}-v_{2,A}}{v_{1,A}-v_{1,B}}$$ So with the expression you gave, we can calculate the coefficient of restitution and from there the final kinetic energy: $$\Delta E = E_{kin,2}-E_{kin,1}=e^2 E_{kin,1}-E_{kin,1} =(e^2-1)E_{kin,1} $$ However, realize that this expression is only valid when the masses are equal in a one-dimensional problem. If the masses differ, or the collision does not happen along a line but at some angle, the previous argument does not hold and some coefficients related to the masses and the angles must be introduced in the formula.
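The two linear relations above (momentum conservation and the restitution law for equal masses) can be solved directly for the final velocities; a small sketch with illustrative initial velocities:

```python
# Head-on, equal-mass collision: solve the two linear relations
#   v2A + v2B = v1A + v1B            (conservation of momentum)
#   v2B - v2A = e * (v1A - v1B)      (coefficient of restitution)
# for the final velocities.
def collide(v1A, v1B, e):
    total = v1A + v1B
    rel = e * (v1A - v1B)
    return (total - rel) / 2, (total + rel) / 2

v1A, v1B, e = 4.0, -2.0, 0.5       # illustrative values
v2A, v2B = collide(v1A, v1B, e)
print(v2A, v2B)  # -0.5 2.5
```

Setting `e = 1` reproduces the elastic equal-mass result (the balls exchange velocities), and `e = 0` gives a perfectly inelastic collision where both move together.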
The deeper problem with this supposition is that it assumes a conceptual identity between the notions of Hamiltonian and energy, and this is an identity that is not correct. That is, discernment needs to be applied to separate the two of these things. Conceptually, energy is a physical quantity that is, in a sense, "nature's money" - the "currency" that you have to expend to produce physical changes in the world. On a somewhat deeper level, energy is to time what momentum is to space. This can be seen across many areas, such as Noether's theorem, which relates the law of conservation of energy to the fact that the history of a system can be translated back and forth in time and still work the same way, i.e. that there is no preferred point in time in the laws of physics, and likewise, the same for momentum with it being translated around in space and still working the same way. It also occurs in relativity, in which the "four-momentum" incorporates energy as its temporal component. The Hamiltonian, on the other hand, is a mathematically modified version of the Lagrangian, obtained through what is called the Legendre transform. The Lagrangian is a way to describe how forces impact the time evolution of a physical system in terms of an optimization process, and the Hamiltonian converts this directly into an often more useful/intuitive differential-equation process. In many cases, the Hamiltonian is equal to the system's total mechanical energy $E_\mathrm{mech}$, i.e. $K + U$, but this is not always so even in classical Hamiltonian mechanics, a fact which indicates and underscores the basic conceptual separation between the two. In quantum mechanics, the "energy is to time what momentum is to space" concept manifests in that energy is the generator of temporal translation, or the generator of evolution, in the same way that momentum is the generator of spatial translation.
In particular, just as we have a "momentum operator" $$\hat{p} := -i\hbar \frac{\partial}{\partial x}$$ which translates a position-space (here using one dimension for simplicity) wave function (the mathematical representation of an agent's restricted information regarding the particle's position) $\psi$ via the somewhat-loose "infinitesimal equation" $$\psi(x - dx) = \psi(x) - \left(\frac{i}{\hbar} \hat{p} \psi\right)(x)\,dx$$ for translating it by a tiny forward nudge $dx$, likewise we would want to have an energy operator $$\hat{E} := i\hbar \frac{\partial}{\partial t}$$ which does the same but for translation with regard to time (the sign change is because we usually consider a temporal advance from $t$ to $t + dt$, whereas we psychologically [perhaps also psycho-culturally] prefer spatial motions to be directed rightward in our descriptions of things). The problem here is that wave functions generally do not contain a time parameter, and at least non-relativistic quantum mechanics treats space and time separately, so the above cannot be a true operator on the system's state space. Rather, it is more of a "pseudo-operator" that we'd "like" to have but can't "really" for this reason. One should note that this is the expression that appears on the right of the Schrödinger equation, which we could thus "better" write as $$\hat{H}[\psi(t)] = [\hat{E}\psi](t)$$ where $\psi$ is now a temporal sequence of wave functions (viz. a "curried function", which becomes an "ordinary" function when you consider the wave functions as basis-independent Hilbert vectors). The Hamiltonian operator $\hat{H}$ is a bona fide operator, which acts only on the "present" configuration information for the system. What this equation is "really" saying is that in order for such a time series to represent a valid physical evolution, the Hamiltonian must also be able to translate it through time.
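As a numerical aside (my own addition, not part of the original answer), the "momentum generates translation" statement can be checked directly: subtracting $\frac{i}{\hbar}(\hat{p}\psi)(x)\,dx$ from $\psi(x)$ reproduces the forward-shifted value $\psi(x - dx)$ to first order in $dx$. A minimal sketch with a Gaussian test function and a finite-difference derivative (all names and parameter values are illustrative):

```python
import cmath

HBAR = 1.0
DX_SHIFT = 1e-4   # the tiny forward nudge dx
H = 1e-5          # finite-difference step for the spatial derivative

def psi(x):
    # a Gaussian wave function with a phase, standing in for a generic psi
    return cmath.exp(-x ** 2) * cmath.exp(2j * x)

def p_hat_psi(x):
    # momentum operator p = -i*hbar*d/dx, via a central finite difference
    dpsi = (psi(x + H) - psi(x - H)) / (2 * H)
    return -1j * HBAR * dpsi

x0 = 0.3
shifted = psi(x0 - DX_SHIFT)                                   # exact translated value
first_order = psi(x0) - (1j / HBAR) * p_hat_psi(x0) * DX_SHIFT  # infinitesimal-equation value
print(abs(shifted - first_order))   # tiny: the error is O(dx^2)
```

The residual shrinks quadratically as `DX_SHIFT` is reduced, which is exactly the sense in which the infinitesimal equation holds.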
The distinction between Hamiltonian and energy manifests in that the Hamiltonian will not translate every time sequence, while the energy pseudo-operator will, just as the momentum operator will translate every spatial wave function. Moreover, many Hamiltonians may be possible that give rise to the same energy spectrum. Because these two things are different, it makes no sense to equate them as operators, as suggested. You can, and should, have $\hat{H}[\psi(t)] = [\hat{E}\psi](t)$, but you should not have $\hat{H} = \hat{E}$!
There is a 2-part problem in my physics book in which part I gives some information about a ball being dropped vertically onto a flat surface, and we are asked to deduce the coefficient of restitution $e$. In part II, we drop the same ball onto an inclined plane, and we are asked to find the location of the second point of contact of the ball with the plane. The inclined plane is presumably made of the same material as the flat surface, although this is not mentioned in the problem. It's a very easy exercise, but there was something in the book's solution which confused me: the book claims (without justification) that the collisions of the ball with the plane obey the law of reflection. To see why this doesn't make sense (to me), let's take a simple example. Ball moving in 2 dimensions, no gravity Consider a ball moving frictionlessly in the horizontal plane which impacts a wall at an angle of incidence $\alpha$ and speed $v_0$. Let's take our reference frame to have the wall along the $y$-axis, with the ball approaching the wall from the positive $x$ direction. During some small time $\Delta t$, the wall exerts an impulse on the ball, and the impulse vector will be parallel to the $x$-axis, pointing in the positive $x$ direction. Hence during the time $\Delta t$ there are no forces acting on the ball in the $y$-direction, and thus momentum is conserved in this direction. Let's suppose the ball bounces off the wall with a new speed $v_1$, and let $e=v_1/v_0$ be the coefficient of restitution of the collision. Note that $e=1$ iff the collision is perfectly elastic. Now since momentum in the $y$-direction is conserved, the angle of reflection $\beta$ must satisfy $mv_0\sin(\alpha)=mv_1\sin(\beta)$, or $\sin(\beta)=\sin(\alpha)/e$. If we impose that the collision is perfectly elastic, we have $\alpha=\beta$, which is the famous law of reflection. But in general, the collision does not obey the law of reflection.
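Following the relation derived above (with the question's definition $e = v_1/v_0$ as a total speed ratio), the angle of reflection follows from $\sin\beta = \sin\alpha/e$. A quick numerical sketch (my own, not from the book; function name is mine):

```python
import math

def reflection_angle(alpha_deg, e):
    """Angle of reflection off a frictionless wall, in degrees, using
    sin(beta) = sin(alpha)/e with e = v1/v0 as defined in the question.
    Returns None when sin(alpha)/e > 1, i.e. no real angle satisfies it."""
    s = math.sin(math.radians(alpha_deg)) / e
    if s > 1:
        return None
    return math.degrees(math.asin(s))

# elastic collision (e = 1): the law of reflection holds
print(reflection_angle(30, 1.0))   # about 30 degrees
# inelastic collision: the ball leaves at a larger angle than it arrived
print(reflection_angle(30, 0.8))   # about 38.7 degrees
```

Note that for grazing incidences with small $e$ the formula has no real solution, which already hints that this definition of $e$ behaves oddly in oblique impacts.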
Ball dropped onto inclined plane Now let's come back to part II of the problem, and drop our ball from some height onto an inclined plane lying in the $xz$ plane with angle of inclination $\alpha$ (gravity acts in the negative $z$ direction). The angle of incidence is thus also $\alpha$, and now we would like to calculate the angle of reflection $\beta$, knowing that the coefficient of restitution of the collision is $e$. The problem now is that during the small time $\Delta t$ of the collision, gravity transmits an impulse of $mg\sin(\alpha)\Delta t$ in the direction parallel to the surface of the incline. Hence $mg\sin(\alpha)\Delta t=mv_1\cos(\beta)-mv_0\cos(\alpha)=mv_0(e\cos(\beta)-\cos(\alpha))$. Unfortunately, we don't know $\Delta t$, so as far as I can see there isn't enough information to determine $\beta$. So is the book wrong? And how would we calculate the angle of reflection off an inclined plane? P.S. Sorry for the verbose question.
I want to know which algorithm is fastest for multiplication of two $n$-digit numbers. Space complexity can be relaxed here! As of now, Fürer's algorithm by Martin Fürer has a time complexity of $n \log(n)\,2^{\Theta(\log^* n)}$, which uses Fourier transforms over complex numbers. His algorithm is actually based on Schönhage and Strassen's algorithm, which has a time complexity of $\Theta(n\log(n)\log(\log(n)))$. Other algorithms which are faster than the grade-school multiplication algorithm are Karatsuba multiplication, which has a time complexity of $O(n^{\log_{2}3}) \approx O(n^{1.585})$, and the Toom-3 algorithm, which has a time complexity of $\Theta(n^{1.465})$. Note that these are the fast algorithms. Finding the fastest algorithm for multiplication is an open problem in computer science. Note that the FFT algorithms listed by avi add a large constant, making them impractical for numbers of less than thousands of bits. In addition to that list, there are some other interesting algorithms, and open questions: Linear time multiplication on a RAM model (with precomputation). Multiplication by a Constant is Sublinear (PDF) - this means a sublinear number of additions, which gives a total of $\mathcal{O}\left(\frac {n^2} {\log n} \right)$ bit complexity. This is essentially equivalent to long multiplication (where you shift/add based on the number of $1$s in the lower number), which is $\mathcal{O}\left({n^2} \right)$, but with an $\mathcal{O}\left(\log n\right)$ speedup. Residue number system and other representations of numbers: multiplication is almost linear time. The downside is that the multiplication is modular, and {overflow detection, parity, magnitude comparison} are all as hard or almost as hard as converting the number back to binary or a similar representation and doing the traditional comparison; this conversion is at least as bad as traditional multiplication (at the moment, AFAIK). Other representations:
Logarithmic representation: multiplication is addition of the logarithmic representations. Example: $$ 16 \times 32 = 2^{\log_2 16 + \log_2 32} = 2^{4+5} = 2^{9} $$ The downside is that conversion to and from the logarithmic representation can be as hard as multiplication or harder, the representation can be fractional/irrational/approximate, etc., and other operations (addition?) are likely more difficult. Canonical representation: represent the numbers as the exponents of their prime factorizations; multiplication is then addition of the exponents. Example: $$ 36 \times 48 = (2^2\cdot 3^2)\times(2^{4}\cdot 3^{1}) = 2^{6}\cdot 3^{3} = 1728 $$ The downside is that this requires the factors, i.e. factorization, a much harder problem than multiplication, and other operations such as addition are likely very difficult.
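To make the asymptotics above concrete, here is a minimal Python sketch of Karatsuba's divide-and-conquer multiplication: each level does three recursive half-size products instead of four, which is where the $O(n^{\log_2 3})$ bound comes from. (This is a textbook sketch, not tuned code.)

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with Karatsuba's algorithm:
    split each number in half and do 3 recursive multiplications instead of 4."""
    if x < 10 or y < 10:              # base case: a single-digit factor
        return x * y
    n = max(len(str(x)), len(str(y))) // 2
    p = 10 ** n
    a, b = divmod(x, p)               # x = a*p + b
    c, d = divmod(y, p)               # y = c*p + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a+b)(c+d) - ac - bd = ad + bc, computed with one multiplication
    mid = karatsuba(a + b, c + d) - ac - bd
    return ac * p * p + mid * p + bd

print(karatsuba(1234, 5678))          # 7006652
```

The trick is purely in the recombination: the cross terms $ad + bc$ are recovered from $(a+b)(c+d)$ without a fourth recursive call.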
First some motivation: most proofs that show that the group of outer automorphisms is residually finite do not only show that the subgroup of inner automorphisms is closed in the profinite topology, they actually show something stronger: that the group of inner automorphisms is closed in the congruence topology, i.e. that a non-inner automorphism can be realised as a non-inner automorphism of some finite quotient. I would like to know whether there is a way to go back. If $N \unlhd G$ is characteristic, the natural projection $\pi_N \colon G \to G/N$ induces a homomorphism $$\tilde\pi_N \colon \mathop{Aut}(G) \to \mathop{Aut}(G/N)$$ given by $\tilde\pi_N(\phi)(gN) = \phi(g)N$ where $g \in G$ and $\phi \in \mathop{Aut}(G)$. If $G$ is finitely generated, one can easily check that for every $K \unlhd_{f.i.} G$ there is $L \unlhd_{f.i.} G$ characteristic such that $L \leq K$, i.e. the profinite topology on $G$ is "generated" by characteristic subgroups of finite index. The congruence topology on $\mathop{Aut}(G)$ is then the topology whose base around $\mathop{id}_G$ is given by $$\mathop{Cong}(\mathop{Aut}(G)) = \{\ker{\tilde\pi_N} \mid N \unlhd_{f.i.} G \mbox{ characteristic} \}.$$ One can easily check that the congruence topology is weaker than the profinite topology on $\mathop{Aut}(G)$. That is, if a set $\Xi \subseteq \mathop{Aut}(G)$ is closed in the congruence topology, then $\Xi$ is closed in the profinite topology. My question is: if $G$ is a finitely generated residually finite group, does $\mathop{Inn}(G)$ being closed in the profinite topology on $\mathop{Aut}(G)$ imply $\mathop{Inn}(G)$ being closed in the congruence topology? Or equivalently, can every non-inner automorphism of $G$ be realised as a non-inner automorphism of some finite quotient of $G$ if $\mathop{Out}(G)$ is residually finite?
A neutron star has radius $10\,$km and mass $2.5\times 10^{29}\,$kg. A meteorite is drawn into its gravitational field. Calculate the speed with which it will strike the surface of the star. Neglect the initial speed of the meteorite. In the solution provided, we use the formula for escape speed, which is $$v = \sqrt{\frac{2GM}{R}}$$ But the derivation of the formula for escape speed assumes a mass that has zero kinetic energy and zero gravitational potential energy at infinity. So, my question is: why can we use the escape speed to calculate the striking speed of a meteorite in this case?
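For what it's worth, the same energy-conservation argument run in reverse (falling from rest at infinity) gives the striking speed, and plugging the given numbers into $v=\sqrt{2GM/R}$ is a one-liner (using $G \approx 6.674\times10^{-11}\,\mathrm{N\,m^2/kg^2}$; this check is my own addition):

```python
import math

G = 6.674e-11   # gravitational constant, N m^2 / kg^2
M = 2.5e29      # mass of the neutron star, kg
R = 10e3        # radius of the neutron star, m

v = math.sqrt(2 * G * M / R)      # escape speed = striking speed from rest at infinity
print(f"{v:.2e} m/s")             # roughly 5.8e7 m/s, about 19% of the speed of light
```

That the answer is a sizeable fraction of $c$ suggests a relativistic treatment would be needed for a more careful calculation.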
I would appreciate if someone can help me answer the following questions. Although I read several papers and documents on the Lambert W function, I could not assess on what set this function (or at least its principal branch) is analytic (or holomorphic). Hence, on what set can one apply the identity theorem to it? I know that the principal branch of the function admits a convergent Taylor series at $0$ with a positive radius of convergence $1/{\rm e}$ given by $$ W_0(z)=\sum\limits_{n=1}^\infty \frac{(-n)^{n-1}}{n!}z^n $$ Hence, $W_0(z)$ is analytic at $0$. However, is it also analytic elsewhere in the complex plane? According to Wikipedia, "The function defined by this series can be extended to a holomorphic function defined on all complex numbers with a branch cut along the interval $(−\infty, −1/\text{e}]$; this holomorphic function defines the principal branch of the Lambert W function". I don't quite understand this statement. How can the extended function be defined on all complex numbers away from the branch cut, if the series only converges for $-\frac{1}{\text{e}}<z<\frac{1}{\text{e}}$ and diverges for all other $z$? From my understanding, a function is said to be analytic on an open set $D$ if the function converges to its Taylor series in a neighborhood of every point in the set $D$. If the Taylor series of the function $W_0(z)$ at an arbitrary $z_0$ is not known to have a closed form, does this mean that this function is not analytic at $z_0$? Or could it be analytic without a known Taylor series expansion for arbitrary $z_0$? And if the Taylor series around an arbitrary point $z_0$ is not known in closed form, how can one obtain the radius of convergence of the series? Does the radius of convergence play any role in applying the identity theorem? In How to derive the Lambert W function series expansion?, there is an example showing how to write the Taylor series expansion of the Lambert W function around $\text{e}$.
However, it is not clear to me if this applies to an arbitrary point $z_0$ (other than $0$ and $\text{e}$), how one can obtain the radius of convergence of this non-closed-form series, and whether one can claim that the function is analytic at $z_0$. If we extend the function to the complex domain, it is known that the Lambert W function has the derivative $\frac{\text{d}W}{\text{d}z}=\frac{W(z)}{z+\text{e}^{W(z)}}=\frac{W(z)}{z(W(z)+1)}$ for $z\notin \{0,-1/\text{e}\}$. If we differentiate this an infinite number of times, and the derivative exists at $z_0$, then the function is infinitely differentiable at $z_0$. In this case, it only remains to prove that the function is equal to its own Taylor series in a neighborhood of every point of its domain (or some open set) for the function to be holomorphic, right? Can this be shown for arbitrary $z_0$? And hence for some open set? I tried to apply the Lagrange inversion theorem to get the Taylor series of $W_0(z)$ at arbitrary $z_0$, but I could not converge to a closed form. In short, in my problem, I need to use the identity theorem on the Lambert W function. However, I need to check first on what set this theorem applies. In other words, on what set is the Lambert W function analytic? Any help is appreciated. Thanks a lot in advance.
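One way to build numerical intuition here (my own addition, not an answer to the analyticity question) is to compare the Taylor series at $0$, summed from $n=1$, against a direct Newton-method solution of $we^w=z$ at a point inside the radius of convergence $1/\mathrm{e}$. A self-contained Python sketch (function names are mine):

```python
import math

def lambertw_series(z, terms=80):
    # principal-branch Taylor series at 0: sum_{n>=1} (-n)^(n-1)/n! * z^n
    return sum((-n) ** (n - 1) / math.factorial(n) * z ** n
               for n in range(1, terms + 1))

def lambertw_newton(z, w=0.0, tol=1e-14):
    # solve w*exp(w) = z by Newton's method (adequate for small z > -1/e)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

z = 0.2                      # inside the radius of convergence 1/e ~ 0.3679
print(lambertw_series(z))    # the two values should agree closely
print(lambertw_newton(z))
```

Trying `z` just above $1/\mathrm{e}$ shows the partial sums of the series blowing up while Newton's method still converges, which is exactly the "extension beyond the disc of convergence" phenomenon the Wikipedia quote describes.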
We colour each side of a square with $k \geq 1$ colours. Find a formula for the number of orbits under the action of $D_{4}=\{ e , r,r^{2},r^{3},s,sr,sr^{2},sr^{3} \}$ on the set of colourings. Now, as far as I know, the orbit of an element $a \in A$ under the action of a group $G$ is defined as the set $ G \cdot a = \{ g \cdot a : g \in G\}$. In our case $A$ is the set of colourings of the square. My attempt: If we number the sides of the square we can represent them as the set $\{1,2,3,4\}$. Now $G$ is generated by $r$ and $s$. No matter the original state of the square, $r$ can only rotate it (by $90$ degrees), and thus the orbit under $\langle r \rangle$ contains at most $4$ elements (since $r^{4}=e$). This is also dependent on the number of colours used (if $k=1$, then the orbit just contains the original square). As you can probably tell, I don't really grasp the problem yet. Any suggestions are welcome.
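A standard tool here is Burnside's lemma: the number of orbits equals the average, over the group, of the number of colourings fixed by each element, which for the four sides of a square under $D_4$ works out to $\frac{1}{8}(k^4 + 2k^3 + 3k^2 + 2k)$. As a hedged sketch (my own code, with my own labelling of the sides), this brute-forces the orbit count and checks it against that formula:

```python
from itertools import product

IDENT = (0, 1, 2, 3)
R = (1, 2, 3, 0)   # rotate the four sides by 90 degrees
S = (0, 3, 2, 1)   # a reflection fixing sides 0 and 2, swapping 1 and 3

def compose(p, q):
    # apply q first, then p
    return tuple(p[q[i]] for i in range(4))

# generate all 8 elements of D4 as the closure of {R, S}
GROUP = {IDENT, R, S}
while True:
    new = {compose(a, b) for a in GROUP for b in GROUP}
    if new <= GROUP:
        break
    GROUP |= new
assert len(GROUP) == 8

def orbit_count(k):
    # count orbits by picking the lexicographically least colouring in each orbit
    reps = set()
    for colouring in product(range(k), repeat=4):
        reps.add(min(tuple(colouring[g[i]] for i in range(4)) for g in GROUP))
    return len(reps)

def burnside(k):
    return (k**4 + 2 * k**3 + 3 * k**2 + 2 * k) // 8

for k in (1, 2, 3, 4):
    assert orbit_count(k) == burnside(k)
print([burnside(k) for k in (1, 2, 3, 4)])   # [1, 6, 21, 55]
```

The four terms in the formula are the fixed-colouring counts by cycle type: $k^4$ for $e$, $k$ each for $r$ and $r^3$, $k^2$ each for $r^2$ and the two diagonal reflections, and $k^3$ each for the two edge-axis reflections.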
Hello, I have the following problem. I am solving an integral with polar coordinates but I am getting the wrong answer; I think I am integrating correctly, so the problem probably comes from the constraints. I have $$\iint_D x\,dx\,dy$$ with $D = \{x^2 + y^2 \le 2y,\ x \ge 0,\ y \ge \frac{1}{2}\}$. I begin by completing the square and get $(x-0)^2 + (y-1)^2 = 1$, a circle centered at $(0,1)$ with radius $1$. So I substitute $x^2 + y^2 = 2y$ with $r^2 = 2r\sin\theta$; dividing both sides by $r$ gives $r = 2\sin\theta$, so $r$ runs from $0$ to $2\sin\theta$. Also, I take $\theta$ from $0$ to $\pi$ because of how the circle is placed. So I get $$\int_0^\pi \int_0^{2\sin\theta} (r\cos\theta)\,r \,dr\,d\theta$$ and after integrating over $r$ I am left with $$\frac{8}{3}\int_0^\pi \sin^3\theta \cos\theta\,d\theta.$$ Using the substitution $u = \sin\theta$, $du = \cos\theta\,d\theta$, I get $$\frac{8}{3}\int_0^0 u^3\,du$$ because the new limits are from $0$ to $0$: $\sin 0 = 0$ and $\sin\pi = 0$. That means the final answer is zero, but it should be $\frac{9}{16}$. I am not good at math so I cannot see where my error is. Thank you for any help in advance. The limits are $x^2+y^2\le 2y \ ,\ x\ge0 \ , y\ge\frac{1}{2} $. You've found the upper limit, i.e. $r = 2\sin\theta$.
For $y\ge\frac{1}{2}$, $r\sin\theta \ge\frac{1}{2}$, so the lower limit of $r$ is $\frac{1}{2\sin\theta}$. Also, $x\ge0$ gives $r\cos\theta\ge0$, i.e. $\cos\theta\ge0$, so $\theta\le\pi/2$. The two limits meet where $2\sin\theta = \frac{1}{2\sin\theta}$, i.e. $\sin\theta = \frac{1}{2}$, so $\theta = \pi/6$. $$I = \int^{\pi/2}_{\pi/6}\int^{2\sin\theta}_{\frac{1}{2\sin\theta}}r^2\cos\theta \ dr\, d\theta = \frac{1}{3}\int^{\pi/2}_{\pi/6}\bigg\{8\sin^3\theta - \frac{1}{8\sin^3\theta}\bigg\}\cos\theta\, d\theta$$ Now let $u = \sin\theta$, $du = \cos\theta\, d\theta$. At $\theta = \pi/6$, $u=1/2$, and at $\theta = \pi/2$, $u=1$. So $$I = \frac{1}{3}\int^{1}_{1/2}\bigg\{8u^3- \frac{1}{8u^3}\bigg\}du = \frac{1}{3}\bigg[8\cdot\frac{1}{4}\Big(1-\frac{1}{16}\Big) - \frac{1}{8}\cdot\frac{-1}{2}\bigg(1 - \frac{1}{(1/2)^2}\bigg)\bigg]$$ $$I = \frac{1}{3}\Big[ \frac{15}{8} - \frac{3}{16}\Big] = \frac{1}{3}\cdot\frac{27}{16} = \frac{9}{16}$$
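As a sanity check on the value $\frac{9}{16} = 0.5625$ (my own addition), a crude midpoint-rule sum of $x$ over the region $x^2+y^2\le 2y$, $x\ge 0$, $y\ge\frac{1}{2}$ lands close to it:

```python
n = 600                  # grid resolution; accuracy is limited by the curved boundary
dx = 1.0 / n             # the region fits inside 0 <= x <= 1
dy = 1.5 / n             # ...and 0.5 <= y <= 2
total = 0.0
for i in range(n):
    x = (i + 0.5) * dx
    for j in range(n):
        y = 0.5 + (j + 0.5) * dy
        if x * x + y * y <= 2 * y:     # inside the circle x^2 + (y-1)^2 <= 1
            total += x * dx * dy
print(total)   # close to 9/16 = 0.5625
```

Increasing `n` shrinks the discrepancy, confirming that the polar-coordinate answer, not the zero from the original limits, is correct.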
This Domino Artwork case study describes the optimization model that underlies the NEOS Domino solver, which constructs pictures from target images using complete sets of double-nine dominoes. Complete sets of double-nine dominoes include one domino for each distinct pair of dot values from 0 to 9. The NEOS Domino solver is an implementation of the work of Robert Bosch of Oberlin College. Background Information In his 2006 article "Opt Art", Robert Bosch of Oberlin College describes some applications of optimization in the area of art. Domino artwork is a type of photomosaic, a picture made up of many smaller pictures. The small pictures can be seen up close but, at a distance, they merge into a recognizable image. To create a photomosaic, we partition a target image and a blank canvas into a grid. Then, given a set \(F\) of building-block photographs, we place each photograph from \(F\) onto one square of the blank canvas grid with the goal of arranging the building blocks to resemble the target image as closely as possible. Optimization is well-suited to assigning the building blocks to the blank canvas grid but choosing the set of building blocks requires an artist's touch! Here, we describe domino artwork, which is a special type of photomosaic in which the building blocks are dominoes. The building-block set in the Domino solver is comprised of complete sets of double-nine dominoes; double-nine domino sets contain one domino for each distinct pair of dot values from 0 to 9. Mathematical Formulation In this section, we present Bosch's integer programming model for constructing domino artwork images. Instead of a set of photographs \(F\), we have a set \(D=\lbrace d = (d_1, d_2): 0 \leq d_1 \leq d_2 \leq 9\rbrace\) of double-nine dominoes. Each domino \(d=(d_1, d_2) \in D\) is black with \(d_1\) dots on one square and \(d_2\) dots on the other square. 
The domino has a brightness value associated with each half; the value corresponds to the number of dots, so it is measured on a scale from 0 to 9, black to white. Each domino can be positioned horizontally or vertically covering two squares of the canvas. A complete set of double-nine dominoes contains 55 dominoes, and exactly \(s\) complete sets must be used. Therefore, we need to partition the target image and the canvas into \(m\) rows and \(n\) columns such that \(mn = 110s\). The decision variable is a yes-no decision for each possible assignment of a domino \(d\) to a pair of adjacent squares of the canvas. This is more complicated than a yes-no decision associated with assigning a single photograph to a single square on a canvas. However, we can introduce a binary variable \(x_{dp}\) for each domino \(d \in D\) and each pair \(p \in P\) where \(P\) is defined as: \[P = \big\{ \lbrace(i,j),(i+1,j) \rbrace : 1\leq i\leq m-1, 1 \leq j \leq n \big\} \\ \cup \big\{ \lbrace (i,j),(i,j+1)\rbrace : 1\leq i \leq m, 1 \leq j \leq n - 1 \big\}.\] Associated with each assignment of a domino \(d\) to a pair of squares \(p\) is a cost \(c_{dp}\). The cost \(c_{dp}\) is calculated based on how closely the dots of domino \(d\) match up with the brightness values of the target image. Let \(\beta_{ij}\in[-0.5,9.5]\) represent the brightness value of square \((i,j)\) on the grid representation of the target image, and recall that \(d=(d_1,d_2)\) represents a domino with \(d_1\) dots on one square and \(d_2\geq d_1\) dots on the other square. An explicit expression for \(c_{dp}\) with \(p=\lbrace (i_1,j_1), (i_2,j_2)\rbrace\in P\) is \[c_{dp} = \min\lbrace \left(d_1-\beta_{i_1j_1}\right)^2 + \left(d_2-\beta_{i_2j_2}\right)^2, \left(d_1-\beta_{i_2j_2}\right)^2 + \left(d_2-\beta_{i_1j_1}\right)^2 \rbrace.\] That is, for each of the two orientations we add the squared differences between dot values and target brightness values, and \(c_{dp}\) is the minimum of these two sums.
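The cost formula translates directly into code. The following small Python sketch of \(c_{dp}\) (my own illustration, not the NEOS implementation) also reports which orientation achieves the minimum, anticipating the orientation-inference trick described later:

```python
def domino_cost(d1, d2, b1, b2):
    """Cost of placing domino (d1, d2) on a pair of squares whose target
    brightness values are (b1, b2); returns (cost, oriented domino)."""
    forward = (d1 - b1) ** 2 + (d2 - b2) ** 2
    reverse = (d1 - b2) ** 2 + (d2 - b1) ** 2
    if forward <= reverse:
        return forward, (d1, d2)
    return reverse, (d2, d1)

# a (0, 9) domino on squares with brightness 8.7 and 0.4:
# flipping it so the 9 covers the bright square is far cheaper
print(domino_cost(0, 9, 8.7, 0.4))   # cost ~0.25 with orientation (9, 0)
```

Because each domino is stored once with \(d_2 \ge d_1\), computing both orientations inside the cost function is what keeps the variable set small.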
Then, the domino optimization model can be written as: \[\begin{array}{rrcll} \text{minimize} & \sum_{d\in D}\sum_{p\in P} c_{dp}x_{dp} & & & \\ \text{subject to} & \sum_{p\in P} x_{dp} &= &s & \forall d\in D, \\ & \sum_{d\in D}\sum_{\substack{p\in P: \\ p\ni (i,j)}} x_{dp} & = & 1 & \forall 1\leq i\leq m, 1\leq j\leq n, \\ & x_{dp}\in\{0,1\} & & & \forall d\in D, p\in P. \end{array}\] The objective function measures the total cost of the domino arrangement. The first set of constraints ensures that all of the dominoes are placed on the canvas (each domino is used exactly \(s\) times). The second set of constraints ensures that each square is covered by exactly one domino. The domino optimization model is an assignment problem with side constraints. Assignment problems (without side constraints) are easy to solve in the sense that they exhibit the integrality property. That is, if we relax the integrality restrictions on the decision variables and solve the problem as a linear program, the optimal solution is guaranteed to be integer-valued. Adding side constraints makes the problem harder to solve in theory; however, many instances can still be solved easily. Implementation Details The main contribution of the NEOS Domino solver is that anyone can create his/her own domino artwork image! Underlying the Domino solver is MATLAB code that processes a target image, creates the GAMS model instance, submits the model to NEOS, reads the solution returned, and constructs the image file with only one click of a button. The NEOS Domino solver was developed by Eli Towle as his final project in ISyE/CS 635: Tools and Environments for Optimization at the University of Wisconsin-Madison. The Domino solver takes as input a JPEG image. There are two optional parameters. Quality (low, medium, high): the default quality is medium. Selecting low produces a lower-quality output image more quickly. Selecting high produces a high-quality output image, but the solver may require a long time to produce the results!
Color (black, white): the default domino set is black dominoes with white dots. Selecting white will result in an image with white dominoes with black dots. Click here if you are ready to create an image! The optimization model solved by the NEOS Domino solver includes one change from the original model to improve the speed of constructing the final image. The original model defined the set of valid location assignments \(P\) as shown above and then defined the cost parameter \(c_{dp}\) based on either possible orientation of domino \(d\) assigned to location \(p\in P\). However, the decision variable \(x_{dp}\) does not distinguish between the two orientations of a domino. In other words, an optimal assignment of domino \((0,9)\) at location \(\lbrace(1,1),(1,2)\rbrace\) could be flipped so that the 9 comes first, but the solution will not make the distinction between these orientations. Therefore, we introduce parameters in the GAMS optimization model that allow orientations to be inferred based on the value of \(c_{dp}\). That is, if \[\begin{align*} \left(d_1-\beta_{i_1j_1}\right)^2 + \left(d_2-\beta_{i_2j_2}\right)^2 \leq \left(d_1-\beta_{i_2j_2}\right)^2 + \left(d_2-\beta_{i_1j_1}\right)^2, \end{align*}\] then the proper orientation is \((d_1,d_2)\). Otherwise, the orientation is \((d_2,d_1)\). As an alternative, we could expand the set \(P\) to \(\hat{P}\) to differentiate between orientations in the solution. In this case, we define \[\begin{align*} \hat{P} = & \lbrace (t_1,t_2) : (t_1,t_2)\in P \text{ or } (t_2,t_1)\in P \rbrace, \end{align*}\] where \(t_1\) and \(t_2\) are tuples. The costs could similarly be altered to reflect the cost of each individual orientation rather than being expressed as the minimum of the two. This expansion effectively doubles the number of decision variables in the model, resulting in slower solve times. As an example, we show below a comparison of solution times for an image of Taylor Swift.
Sets    Infer from \(c_{dp}\)    Augment \(P\) to \(\hat{P}\)
18      11.85s                   15.94s
72      86.32s                   211.70s
162     349.56s                  668.46s

With minimal additional effort, calculating the correct orientations by examining \(c_{dp}\) after the solve is strongly preferred over doubling the number of acceptable orientations. Examples The first image is the target image and the second image is the image created by the NEOS Domino solver (using 896 complete sets of dominoes). The first image is the target image of Taylor Swift and the second image is the image, created by the NEOS Domino solver, constructed in dominoes! The physical construction of the domino portrait is captured here in a time-lapse video.
In transport phenomena the diffusive fluxes for mass, energy and momentum are the constitutive laws: $$\boldsymbol{j}_c=-D\boldsymbol{\nabla}c \quad \boldsymbol{j}_T=-k\boldsymbol{\nabla}T \quad \boldsymbol{\tau}_{\boldsymbol{v}}=-\mu\boldsymbol{\nabla}\boldsymbol{v}$$ with $c$ the mass concentration, $T$ the temperature, $\boldsymbol{v}$ the velocity. The coefficients are the mass diffusion coefficient $D$, the thermal conductivity $k$ and the dynamic viscosity $\mu$. Typically it is useful to view the diffusive flux in terms of the gradients of concentrations of mass, energy and momentum. For the mass diffusive flux this is already the case, as $c$ is the mass concentration; the result is that the units for $D$ are $[m^2/s]$, typical units for diffusion coefficients. A quick dimensional analysis of the other fluxes shows that these aren't in terms of energy and momentum concentration, and that $k$ and $\mu$ aren't diffusion coefficients, i.e. $k=[W/mK]$ and $\mu=[Ns/m^2]$. We can proceed to rewrite the fluxes in terms of energy and momentum concentrations: $$\boldsymbol{j}_T=-\frac{k}{\rho c_p}\boldsymbol{\nabla}\rho c_pT=-\alpha\boldsymbol{\nabla}\epsilon\quad \boldsymbol{\tau}_{\boldsymbol{v}}=-\frac{\mu}{\rho}\boldsymbol{\nabla}\rho\boldsymbol{v}=-\nu\boldsymbol{\nabla}\boldsymbol{p}$$ Here the energy concentration is $\epsilon=[J/m^3]$ and the momentum concentration is $\boldsymbol{p}=[\left(kg\,m/s\right)/m^3]$, with thermal diffusivity $\alpha=[m^2/s]$ and kinematic viscosity $\nu=[m^2/s]$, which shows that these are the respective diffusion coefficients for energy and momentum concentration. The above analysis can only be done under the assumption of incompressibility, and this is where my question originates: Why are the constitutive laws for diffusive fluxes not defined in terms of mass, energy and momentum concentration? Is it simply because the laws were formulated under the assumption of steady-state and incompressibility?
What if incompressibility is not valid? Are the laws then invalid? As a practical example of an issue which then arises: for a compressible medium, should we still write the advection-diffusion heat equation as $$\partial_t\rho c_p T + \boldsymbol{\nabla}\cdot \rho c_p \boldsymbol{u} T = k\nabla^2 T $$ or would it be of the following form: $$\partial_t\rho c_p T + \boldsymbol{\nabla}\cdot \rho c_p \boldsymbol{u} T = \alpha\nabla^2 \rho c_p T$$
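To make the unit bookkeeping above concrete, here are the two derived diffusion coefficients evaluated for room-temperature water (approximate textbook property values; my own illustrative choice, not from the question):

```python
k = 0.6        # thermal conductivity, W/(m K)
rho = 1000.0   # density, kg/m^3
cp = 4186.0    # specific heat capacity, J/(kg K)
mu = 1.0e-3    # dynamic viscosity, Pa s = N s/m^2

alpha = k / (rho * cp)   # thermal diffusivity, m^2/s
nu = mu / rho            # kinematic viscosity, m^2/s

print(f"alpha = {alpha:.3e} m^2/s")   # ~1.4e-7
print(f"nu    = {nu:.3e} m^2/s")      # ~1.0e-6
print(f"Pr    = {nu / alpha:.1f}")    # their ratio is the Prandtl number, ~7 for water
```

Both quantities indeed come out in $[m^2/s]$, and their ratio is the familiar dimensionless Prandtl number, which compares momentum diffusion to energy diffusion.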
How is $29 - 1 = 30$? If also $14 - 1 = 15$, $11 - 1 = 10$, $9 - 1 = 10$. Hint: Guess the answer and be like Minerva. Explanation: If you're using Roman numerals, you can remove I (one) from the representation of the number before the minus sign to get the representation of the number on the right: 29 - 1 = 30: XXIX - I = XXX; 14 - 1 = 15: XIV - I = XV; 11 - 1 = 10: XI - I = X; 9 - 1 = 10: IX - I = X. $$9 - 1 = 10 = 11 - 1,$$ thus $$29 - 1 = 20 + 9 - 1 =\\ = 20 + 11 - 1 = 31 - 1 = 30.$$ Edit Or just using that $9-1 = 10$: $$ 29 - 1 = 20 + 9-1 = 20 + 10 = 30.$$ I suppose you could always round the answer to the nearest multiple of 5, although that has nothing to do with Athena. $$\begin{align} 29-1&=28 \xrightarrow{\text{rounds to }}30\\ 14-1&=13 \xrightarrow{\phantom{\text{________}}} 15\\ 11-1&=10 \xrightarrow{\phantom{\text{________}}} 10\\ 9-1 &= \phantom08 \xrightarrow{\phantom{\text{________}}} 10\\ \end{align}$$ This is mathematically true in $\mathbb{Z}/2\mathbb{Z}$, i.e. $\bmod 2$: $$\begin{align} 29 - 1 \equiv 30 \equiv 0 \pmod 2\\ 14 - 1 \equiv 15 \equiv 1 \pmod 2\\ 11 - 1 \equiv 10 \equiv 0 \pmod 2\\ 9 - 1 \equiv 10 \equiv 0 \pmod 2\\ \end{align}$$ This is inspired by/alternative to Olba12's nice answer: $$\begin{align} 30 &= 10 + 10 + 10\\ &= (11 - 1) + (11 - 1) + (9 - 1)\\ &= 31 - 3\\ &= 31 - 2 - 1\\ &= 29 - 1 \end{align}$$ Hence: $29 - 1 = 30$.
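The Roman-numeral explanation above can be mechanized. A small Python sketch (mine) that converts each number to Roman numerals, deletes the first I, and converts back:

```python
VALS = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    out = []
    for v, sym in VALS:
        while n >= v:
            out.append(sym)
            n -= v
    return "".join(out)

def from_roman(s):
    m = {"I": 1, "V": 5, "X": 10}
    total = 0
    for i, ch in enumerate(s):
        # subtractive notation: a smaller value before a larger one is subtracted
        if i + 1 < len(s) and m[s[i + 1]] > m[ch]:
            total -= m[ch]
        else:
            total += m[ch]
    return total

for n in (29, 14, 11, 9):
    r = to_roman(n)
    print(f"{r} - I = {r.replace('I', '', 1)} "
          f"(so {n} - 1 = {from_roman(r.replace('I', '', 1))})")
```

Removing the first I from XXIX turns the subtractive IX into X, which is exactly why "subtracting one" makes the value jump from 29 to 30.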
Here's another approach that doesn't require redefining ‘$-$’ as a string operation, rather than a mathematical one: $$\begin{align} 29 - 1\ (\text{base}\ 11) &= 30\ (\text{base}\ 10)\\ 14 - 1\ (\text{base}\ 12) &= 15\ (\text{base}\ 10)\\ 11 - 1\ (\text{base}\ 10) &= 10\ (\text{base}\ 10)\\ 9 - 1\ (\text{base}\ 10) &= 10\ (\text{base}\ 8) \end{align}$$ I like the OP's intent better, though. It's a clever puzzle. This is a cheeky answer. Assume the usual rules of arithmetic and logic. We are told that $9 - 1 = 10$, that is, $8 = 10$, which is a contradiction. By the rules of logic, since we have derived a contradiction, we can now derive anything else (like 'magic', cf. the storybook character Minerva), such as that $29 - 1 = 30$. QED It is just that the '$-$' operator has changed its semantics to compute the sum of its operands.
It's been a long time since my last exam on QM, so now I'm struggling with some basic concepts that clearly I didn't understand very well.

1) The Schrödinger equation for a free particle is ##-\frac {\hbar^2}{2m} \frac {\partial ^2 \psi}{\partial x^2} = E \psi## and the solutions are plane waves of the form ##\psi(x) = Ae^{ikx} + Be^{-ikx}##. These functions cannot be normalized, so they do not represent a physical phenomenon, but if I superimpose all of them with an integral over ##k## I get the "true" solution (the wave packet). This implies that a free particle with definite energy does not exist (only superpositions of states with different energies can exist). This bugs me a lot. For example, think about an atom hit by ionizing radiation: at some point an electron will be kicked out of the shell and now, if I wait some time, I have a free electron (so a free particle), and what about its energy? It should be defined by the law of conservation of energy...

2) I'm reading some lecture notes about scattering. Why does everyone take the incoming particle to be described by the state ##\psi_i = e^{i \mathbf k \cdot \mathbf r}## if it is not normalizable? It seems to me they all assume the particle to be inside a box of length ##L## and forget about the normalization constant. But why?
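As a purely illustrative numerical sketch (my own toy construction, arbitrary units): a single plane wave has constant ##|\psi|^2## and so cannot be normalized, but a Gaussian-weighted superposition over ##k## decays in ##x## and has a finite norm, which is the point about wave packets above.

```python
import cmath

# Gaussian weight in k-space, centered on k0 (illustrative, assumed values).
k0, sigma = 5.0, 1.0
dk = 0.05
ks = [k0 - 8.0 + i * dk for i in range(int(16.0 / dk) + 1)]

def psi(x):
    """Superpose plane waves exp(ikx) with Gaussian weights (unnormalized)."""
    return sum(cmath.exp(-((k - k0) ** 2) / (4 * sigma ** 2)) *
               cmath.exp(1j * k * x) for k in ks) * dk

# A single plane wave has |psi|^2 = const everywhere; the packet decays fast.
ratio = abs(psi(3.0)) / abs(psi(0.0))
assert ratio < 1e-2

# The packet's norm integral converges, so the packet can be normalized.
dx = 0.1
norm_sq = sum(abs(psi(-10.0 + i * dx)) ** 2 for i in range(201)) * dx
assert 0 < norm_sq < 100  # finite -> normalizable
print(f"|psi(3)|/|psi(0)| = {ratio:.2e}, norm^2 = {norm_sq:.3f}")
```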
Dear Uncle Colin, I'm told that the graphs of the functions $f(x) = x^3 + (a+b)x^2 + 3x - 4$ and $g(x) = (x-3)^3 + 1$ touch, and I have to determine $a$ in terms of $b$. Where would I even start? - Touching A New Graph Except Numerically Troubling …

Some while ago, I showed a slightly dicey proof of the Basel Problem identity, $\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac {\pi^2}{6}$, and invited readers to share other proofs with me. My old friend Jean Reinaud stepped up to the mark with an exercise from his undergraduate textbook: The French isn't that difficult, …

Dear Uncle Colin, I've been asked to find $\sum_3^\infty \frac{1}{n^2-4}$. Obviously, I can split that into partial fractions, but then I get two series that diverge! What do I do? - Which Absolute Losers Like Infinite Series? Hi, WALLIS, and thanks for your message! Hey! I'm an absolute loser who …

In this month's episode of Wrong, But Useful, Colin and Dave are joined by @niveknosdunk, who is Professor Kevin Knudson in real life. Kevin, along with previous Special Guest Co-Host @evelynjlamb, has recently launched a podcast, My Favorite Theorem. The number of the podcast is 12; Kevin introduces us to …

This post is inspired by a Futility Closet article. Do visit them and subscribe to their excellent podcast! Suppose you're dealt a bridge hand, and someone asks whether you have any aces; you check, and yes! you find an ace. What's the probability you have more than one ace? This …

Dear Uncle Colin, I have been asked to describe how $y = \frac{3x^2-1}{3x+2}$ behaves as $x$ goes to infinity. My first answers, "$y$ goes to infinity" and "$y$ approaches $x$", were both wrong. Any ideas? - Both Options Reasonable, Erroneous Limits Hi, BOREL, and thanks for your message! My first …

One of the more surprising results a mathematician comes across in a university course is that the infinite sum $S = 1 + \frac{1}{4} + \frac{1}{9} + ... + \frac{1}{n^2} + ...$ comes out as $\frac{\pi^2}{6}$. If $\pi^2$s are going to crop up in sums like that, they should be …

Dear Uncle Colin, What is $\frac{1}{\infty}$? - Calculating A Number, Though Outside Reals Hi, CANTOR, and thanks for your message! The short answer: it's undefined. The longer answer: Infinity is not a number. It's not something you're allowed to divide by. The calculation doesn't make sense, and writing it down …

"So," said the Mathematical Ninja, "we meet again." "In fairness," said the student, "this is our regularly-scheduled appointment." The Mathematical Ninja was unable to deny this. Instead, it was time for a demand: "Tell me the square root of 22." "Gosh," said the student. "Between four-and-a-half and five, definitely. 4.7 …
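Two of the sums mentioned in these excerpts are easy to check numerically. A quick sketch (partial sums only, so the tolerances are loose); the second uses the partial-fraction telescoping that WALLIS's question is driving at:

```python
from math import pi

# Basel problem: sum of 1/n^2 converges to pi^2/6.
N = 100_000
basel = sum(1.0 / n**2 for n in range(1, N + 1))
assert abs(basel - pi**2 / 6) < 1e-4  # tail is O(1/N)

# WALLIS's series: 1/(n^2 - 4) = (1/4)(1/(n-2) - 1/(n+2)) telescopes,
# so the sum from n = 3 is (1/4)(1 + 1/2 + 1/3 + 1/4) = 25/48.
wallis = sum(1.0 / (n**2 - 4) for n in range(3, N + 1))
assert abs(wallis - 25.0 / 48.0) < 1e-4
print(f"basel={basel:.6f}, wallis={wallis:.6f}")
```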
In a general integer linear programming problem, we seek to minimize a linear cost function over all \(n\)-dimensional vectors \(x\) subject to a set of linear equality and inequality constraints as well as integrality restrictions on some or all of the variables in \(x\). \[\begin{array}{llll} \mbox{min} & c^Tx & & \\ \mbox{s.t.} & Ax & = & b \\ & x & \geq & 0 \\ & x & \in & Z^n \end{array} \] If only some of the variables \(x_i \in x\) are restricted to take on integer values (and some are allowed to take on real values), then the problem is called a mixed integer linear programming (MILP) problem. If the objective function and/or constraints are nonlinear functions, then the problem is called a mixed integer nonlinear programming (MINLP) problem. If all of the variables \(x_i \in x\) are restricted to take on integer values, then the problem is called a pure integer programming problem. If all of the variables \(x_i \in x\) are restricted to take on binary values (0 or 1), then the problem is called a binary optimization problem, which is a special case of a pure integer programming problem. We use the term MIP to refer to any kind of integer linear programming problem; the other kinds can be viewed as special cases. MIP, in turn, is a particular member of the class of discrete optimization problems. In fact, the problem of determining whether a MIP has an objective value less than a given target is a member of the class of \(\mathcal{NP}\)-Complete problems. Since any \(\mathcal{NP}\)-Complete problem is reducible to any other, virtually any combinatorial problem of interest can be handled in principle by solving some equivalent MIP.

Mixed Integer Linear Programming Solvers available on the NEOS Server

A. Lodi and J. T. Linderoth. (2011) "MILP Software," Encyclopedia for Operations Research and Management Science, Wiley, pp. 3239-3248.
J. T. Linderoth and T. K. Ralphs.
(2005) "Noncommercial Software for Mixed-Integer Linear Programming," in Integer Programming: Theory and Practice, Karlof, J., Ed., CRC Press, pp. 253-303.

Case Studies
Bioengineering: Metabolic Engineering Problem
Puzzles: The Abbott's Window
Puzzles: Rogo
Supply Chain: Bar Crawl Optimization
Supply Chain: Cutting Stock Problem

Textbooks
G.L. Nemhauser and L.A. Wolsey. (1988) Integer and Combinatorial Optimization, John Wiley & Sons, New York.
A. Schrijver. (1984) Linear and Integer Programming, John Wiley & Sons, New York.
L.A. Wolsey. (1998) Integer Programming, John Wiley & Sons, New York.

Journal Publications and Technical Reports
Optimization Online Integer Programming area
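For tiny instances, the pure integer programming formulation given earlier (min \(c^Tx\) s.t. \(Ax = b\), \(x \geq 0\), \(x\) integer) can be made concrete by brute-force enumeration. This is only a sketch with made-up data; real MIP solvers use branch-and-bound and cutting planes rather than enumeration:

```python
from itertools import product

# min c^T x  s.t.  Ax = b, x >= 0, x integer  (toy instance, 2 variables)
c = [1, 2]
A = [[1, 1]]
b = [3]
BOUND = 10  # enumeration bound per variable; real solvers don't need this

best_x, best_cost = None, float("inf")
for x in product(range(BOUND + 1), repeat=len(c)):
    # Check the equality constraints Ax = b.
    if all(sum(row[j] * x[j] for j in range(len(x))) == bi
           for row, bi in zip(A, b)):
        cost = sum(ci * xi for ci, xi in zip(c, x))
        if cost < best_cost:
            best_x, best_cost = x, cost

# With x1 + x2 = 3 and cost x1 + 2*x2, the optimum puts everything on x1.
assert best_x == (3, 0) and best_cost == 3
print(best_x, best_cost)
```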
The following, I believe, is a direct proof of this fact. If a learner is tasked to be $\epsilon$-competitive with a hypothesis $h \in \mathcal H_n$, where $\mathcal H_n$ is agnostic PAC learnable, it should be sufficient to just agnostic PAC learn on $\mathcal H_n$. Suppose $\mathcal H = \bigcup_n \mathcal H_n$ where each $\mathcal H_n$ is agnostic PAC learnable. Fix $\epsilon,\delta,h \in \mathcal H$ where $h \in \mathcal H_n$. (The textbook Understanding Machine Learning implicitly assumes it's efficiently computable to assign $h$ to an $\mathcal H_n$, which I'm fine with.) $\mathcal H_n$ has the uniform convergence property with bound $m_{\mathcal H_n}(\cdot,\cdot)$ by hypothesis. Let $m = m_{\mathcal H_n}(\epsilon/2, \delta)$. Take $S \sim \mathcal D^m$ and run ERM over $\mathcal H_n$ to return an $\epsilon/2$-accurate, $\delta$-confident hypothesis $h' \in \mathcal H_n$. In particular, $|L_S(h') - L_{\mathcal D}(h')| < \epsilon/2$ and $|L_S(h) - L_{\mathcal D}(h)| < \epsilon /2$ with probability $1-\delta$ by uniform convergence, and $L_S(h') \leq L_S(h)$ by the definition of ERM. Putting this together with the triangle inequality, we have $L_{\mathcal D}(h') \leq L_{\mathcal D}(h) + \epsilon$ with probability $1-\delta$, as desired. I think this proof is correct. I'm working through Chapter 7 of Understanding Machine Learning, which prefers using the SRM paradigm to prove this result. It seems like a direct proof is possible; perhaps SRM is important, but it's certainly not needed in this chapter to prove this theorem. Is this correct?
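The inequality chain in that argument (uniform convergence plus the ERM property giving $L_{\mathcal D}(h') \leq \min_h L_{\mathcal D}(h) + \epsilon$) can be watched in action on a toy finite class. This is my own illustrative construction, not the textbook's:

```python
import random

random.seed(0)

# Toy setup: x uniform on {0,...,9}, true label 1[x >= 5], flipped w.p. 0.1.
# Hypothesis class: thresholds h_t(x) = 1[x >= t] for t in {0,...,10}.
def true_loss(t):
    """Exact L_D(h_t) under the noisy labeling above."""
    total = 0.0
    for x in range(10):
        clean = int(x >= 5)
        mismatch = 1.0 if int(x >= t) != clean else 0.0
        # Wrong on clean label w.p. 0.9, wrong via a flip w.p. 0.1.
        total += 0.9 * mismatch + 0.1 * (1.0 - mismatch)
    return total / 10

m = 5000  # sample size; large enough for tight uniform convergence
sample = []
for _ in range(m):
    x = random.randrange(10)
    y = int(x >= 5)
    if random.random() < 0.1:
        y = 1 - y
    sample.append((x, y))

def emp_loss(t):
    return sum(int(x >= t) != y for x, y in sample) / m

erm_t = min(range(11), key=emp_loss)          # ERM output h'
best = min(true_loss(t) for t in range(11))   # best true loss in the class

# Uniform convergence => L_D(h') <= min_h L_D(h) + eps with high probability.
assert true_loss(erm_t) <= best + 0.1
print(erm_t, true_loss(erm_t), best)
```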
Homework Statement
##A## oscillates about the central position ##O## with amplitude ##5\ cm## at a frequency of ##2\ Hz##, such that its displacement in ##cm## as a function of time is governed by ##x=5\sin(4 \pi t)##, where ##t## is measured in seconds. An angular oscillation about ##O## is applied to the disc with amplitude ##0.20\ rad## at a frequency of ##4\ Hz##, such that ##\theta =0.20\sin(8 \pi t)##. Determine the acceleration of A for ##x=0\ cm## and ##x= 5\ cm##.

Homework Equations
##\vec a=\vec a_B + \dot{\vec \omega} \times \vec r + \vec \omega \times (\vec \omega \times \vec r) + 2\,\vec \omega \times \vec v_{rel} + \vec a_{rel}##

The first doubt that comes to my mind is "I have to determine the acceleration with respect to what?", because the problem doesn't say. Then, I have some problems when having to plug the data into the formula for the acceleration. ##\vec a_B=0## because the origin isn't accelerated, ##\dot{\vec \omega} \times \vec r## would use ##x=5\sin(4 \pi t)## (in the second case, ##x=5##), and then what numbers should I plug into ##\vec \omega \times (\vec \omega \times \vec r)##, ##2\,\vec \omega \times \vec v_{rel}## and ##\vec a_{rel}##? I don't understand relative rotational motion very well. I mean, I just have to plug the data into the formula, but I don't know what the data is that I have.
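Once each vector in the five-term relative-acceleration formula is identified, the evaluation is mechanical. A generic helper (my own sketch, tested with made-up values, not the numbers of this problem):

```python
def cross(a, b):
    """3D cross product of tuples a and b."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def add(*vs):
    return tuple(sum(c) for c in zip(*vs))

def accel(a_B, alpha, omega, r, v_rel, a_rel):
    """a = a_B + alpha x r + omega x (omega x r) + 2 omega x v_rel + a_rel."""
    return add(a_B,
               cross(alpha, r),
               cross(omega, cross(omega, r)),
               cross((2*omega[0], 2*omega[1], 2*omega[2]), v_rel),
               a_rel)

# Sanity check: pure uniform rotation leaves only the centripetal term -w^2 R.
w, R = 2.0, 5.0
a = accel((0, 0, 0), (0, 0, 0), (0, 0, w), (R, 0, 0), (0, 0, 0), (0, 0, 0))
assert a == (-w**2 * R, 0.0, 0.0)
print(a)
```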
${{\boldsymbol \chi}_{{c1}}{(3872)}}$ $I^G(J^{PC})$ = $0^+(1^{+ +})$ also known as ${{\mathit X}{(3872)}}$ This state shows properties different from a conventional ${{\mathit q}}{{\overline{\mathit q}}}$ state. A candidate for an exotic structure. See the review on non- ${{\mathit q}}{{\overline{\mathit q}}}$ states. First observed by CHOI 2003 in ${{\mathit B}}$ $\rightarrow$ ${{\mathit K}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}{{\mathit J / \psi}{(1S)}}$ decays as a narrow peak in the invariant mass distribution of the ${{\mathit \pi}^{+}}{{\mathit \pi}^{-}}{{\mathit J / \psi}{(1S)}}$ final state. Isovector hypothesis excluded by AUBERT 2005B and CHOI 2011 . AAIJ 2013Q perform a full five-dimensional amplitude analysis of the angular correlations between the decay products in ${{\mathit B}^{+}}$ $\rightarrow$ ${{\mathit \chi}_{{c1}}{(3872)}}{{\mathit K}^{+}}$ decays, where ${{\mathit \chi}_{{c1}}{(3872)}}$ $\rightarrow$ ${{\mathit J / \psi}}{{\mathit \pi}^{+}}{{\mathit \pi}^{-}}$ and ${{\mathit J / \psi}}$ $\rightarrow$ ${{\mathit \mu}^{+}}{{\mathit \mu}^{-}}$ , which unambiguously gives the $\mathit J{}^{PC} = 1{}^{++}$ assignment under the assumption that the ${{\mathit \pi}^{+}}{{\mathit \pi}^{-}}$ and ${{\mathit J / \psi}}$ are in an ${\mathit S}{\mathrm -wave}$. AAIJ 2015AO extend this analysis with more data to limit ${\mathit D}{\mathrm -wave}$ contributions to $<$ 4$\%$ at 95$\%$ CL. See our note on ``Developments in Heavy Quarkonium Spectroscopy''.
This was supposed to be a comment, but it was getting kind of long, so I've placed it here as an answer instead. It's not directly answering your question, but it may help... In regard to Rick's comment, I think this is what he meant: Suppose I have something like: $$ \lim_{n \rightarrow \infty} \sum_{k=1}^n \dfrac{1}{\sqrt{k^2 + n^2}} $$ Then I can rewrite this as follows: $$ \lim_{n \rightarrow \infty} \dfrac{1}{ \sqrt{1^2 + n^2}} + \dfrac{1}{ \sqrt{2^2 + n^2}} + \dfrac{1}{ \sqrt{3^2 + n^2}} + \cdots + \dfrac{1}{ \sqrt{n^2 + n^2}} $$ but note that $$ \lim_{n \rightarrow \infty} \dfrac{1}{ \sqrt{1^2 + n^2}} + \dfrac{1}{ \sqrt{2^2 + n^2}} + \cdots + \dfrac{1}{ \sqrt{n^2 + n^2}} = \lim_{n \rightarrow \infty} \dfrac{1}{n} \bigg( \dfrac{1}{\sqrt{ \big(\dfrac{1}{n}\big)^2 + 1 }} + \dfrac{1}{\sqrt{ \big(\dfrac{2}{n}\big)^2 + 1 }} + \cdots + \dfrac{1}{\sqrt{ \big(\dfrac{n}{n}\big)^2 + 1 }} \bigg) $$ From here you can see that it is starting to look like how you would define a Riemann sum for the function $ \dfrac{1}{\sqrt{x^2 + 1}} $ from $0$ to $1$ with subdivision points $\dfrac{1}{n}, \dfrac{2}{n}, \cdots$ So now you can evaluate $$ \lim_{n \rightarrow \infty} \sum_{k=1}^n \dfrac{1}{\sqrt{k^2 + n^2}} = \int_0^1 \dfrac{1}{\sqrt{x^2 + 1}} dx = \log \big[ x + \sqrt{x^2 + 1} \big] \bigg|_0^1 = \log(1 + \sqrt{2} ) $$ For your problem, can you think of doing something similar? Hope this helps...
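The limit claimed at the end is easy to confirm numerically; a quick sketch:

```python
from math import sqrt, log

def riemann(n):
    """Right-endpoint Riemann sum for 1/sqrt(x^2+1) on [0,1]:
    (1/n) * sum 1/sqrt((k/n)^2 + 1) = sum 1/sqrt(k^2 + n^2)."""
    return sum(1.0 / sqrt(k*k + n*n) for k in range(1, n + 1))

limit = log(1 + sqrt(2))  # the value of the integral, = arcsinh(1)
assert abs(riemann(100_000) - limit) < 1e-3  # error is O(1/n)
print(riemann(100_000), limit)
```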
As described in Anthony Zee's "Einstein Gravity in a Nutshell", Ch. V Appendix 2, one can think of a vector field $V^{\mu}$ as the velocity field of a fluid in spacetime, $V^{\mu}(X(\tau))$, where the worldline $X(\tau)$ takes the role of the trajectory of a fluid parcel. The Lie derivative of a vector (or more generally tensor) field $W^{\mu}(x)$ along the direction $V^{\mu}$ can then be derived from the mismatch between the "conventional" (Eulerian, in fluid-dynamics language) difference between $W$ at two points $P(x)$ and $Q(\tilde{x})$ and the (Lagrangian) change of $W^{\mu}(x)$ when following the flow from $x$ to $\tilde{x}$ along the worldline (or trajectory): \(\mathcal{L}_V W^{\mu}(x) = V^{\nu}(x)D_{\nu}W^{\mu}(x) - W^{\nu}(x)D_{\nu}V^{\mu}(x)\) where $D_{\nu}$ is the covariant derivative. In fluid dynamics, a Lagrangian conserved quantity can be described by means of the transport theorem \(\frac{d}{dt}\int\limits_{\mathcal{G}(t)} A\,dV = \int\limits_{\mathcal{G}(t)} \Bigl( \frac{\partial A(\vec{r},t)}{\partial t} + \nabla\cdot\bigl(A(\vec{r},t)\,\vec{v}(\vec{r},t)\bigr)\Bigr)dV = 0\) where $\frac{dA}{dt} = \frac{\partial A}{\partial t} + (\vec{v}\cdot\nabla)A$ is the material derivative, $\mathcal{G}(t)$ is the volume of a fluid parcel, $A(\vec r,t)$ is a flow variable, and $\vec{v}(\vec{r},t)$ is the flow field. The Lie derivative and the second term on the r.h.s. of the transport theorem both seem to describe some kind of correction due to the flow which has to be added to obtain the full derivative seen by an observer moving with the flow. In addition, if the integrand on the r.h.s. of the transport theorem vanishes, the quantity $A$ does not change with time, which seems similar to the fact that a tensor quantity $W$ does not change with (proper) time if it is Lie transported. So could the second term in the transport theorem formally be seen as some kind of a Lie derivative along the flow field $\vec{v}(\vec{r},t)$?
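One way to make the analogy precise, sketched under the assumption that $A$ is a scalar and $A\,dV$ is treated as a scalar density (volume-form-valued quantity): the Lie derivative of the density $A\,dV$ along $\vec v$ reproduces exactly the flow-correction term of the transport theorem,

```latex
% Lie derivative of the scalar density A dV along the flow v:
\mathcal{L}_{\vec v}\,(A\,dV)
  = \bigl(\vec v\cdot\nabla A\bigr)\,dV + A\,(\nabla\cdot\vec v)\,dV
  = \nabla\cdot(A\,\vec v)\,dV,
% so for time-dependent A the transport theorem can be written
\frac{d}{dt}\int_{\mathcal G(t)} A\,dV
  = \int_{\mathcal G(t)}
      \Bigl[\frac{\partial (A\,dV)}{\partial t}
        + \mathcal{L}_{\vec v}\,(A\,dV)\Bigr].
```

In this reading, the answer to the closing question is essentially yes: the divergence term is the Lie derivative of the density, not of the scalar $A$ alone (for which $\mathcal{L}_{\vec v}A = \vec v\cdot\nabla A$ lacks the $A\,\nabla\cdot\vec v$ piece).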
On one hand $$(1+t+t^2)^m=\sum_{i=0}^m\binom{m}{i}(1+t)^i(t^2)^{m-i}=\sum_{i=0}^m\binom{m}{i}\Biggl(\sum _{j=0}^i\binom{i}{j}t^j1^{i-j}\Biggr)t^{2m-2i}=\\\sum_{i=0}^m\sum _{j=0}^i\binom{m}{i}\binom{i}{j}t^{j+2m-2i}$$ On the other $$(1+t)^{n-2m}=\sum_{k=0}^{n-2m}\binom{n-2m}{k}t^k1^{n-2m-k}=\sum_{k=0}^{n-2m}\binom{n-2m}{k}t^k$$ Now, let's look at the sum representation we found for $(1+t+t^2)^m$ first (since it's the scarier looking one) and try to simplify it. Suppose we can rewrite this sum as $$\sum_pa_pt^p$$ That would be very convenient as this is a nice single sum rather than a double sum. To do that, we need to find $a_p$ in general. Looking at the original double sum we can see that we can find $a_p$ by finding all pairs of $j$ and $i$ such that $j\leq i$ and $j+2m-2i=p$- then, calculate for each pair, $\binom{m}{i}\binom{i}{j}$, and sum all of these binomial products up. Suppose for our (fixed) $p$ we also fixed $i$ at some value between $0$ and $m$. Well, in that case $j$ could take on at most one value- that would be $p+2i-2m$. We say "at most" because $j$ also needs to be $\leq i$, i.e., $p+2i-2m\leq i\implies i\leq 2m-p$. Don't forget though that $j$ also has to be $\geq 0$- this means $p+2i-2m\geq 0\implies i\geq m-\frac{p}{2}$ ($j$ will always be an integer so we won't worry about that). So if we vary $i$ between $0$ and $m$, $i$ contributes to the coefficient, $a_p$, iff $m-\frac{p}{2}\leq i\leq 2m-p$, and when that is the case, $i$ contributes exactly $\binom{m}{i}\binom{i}{p+2i-2m}$ to $a_p$. In other words, $$a_p=\sum_{i=m-\lfloor\frac{p}{2}\rfloor}^{2m-p}\binom{m}{i}\binom{i}{p+2i-2m}$$ What are the possible values $p$ can take? Well, for each $p$ there must exist $i$ and $j$ in the original double sum's index so that $p=j+2m-2i$ so $p=j+2m-2i\leq i+2m-2i=2m-i\leq 2m$. Also $p=j+2m-2i\geq 0+2m-2i=2m-2i\geq 2m-2m=0$. So $p$ ranges from $0$ to $2m$. 
Overall we have that $$(1+t+t^2)^m=\sum_{p=0}^{2m}a_pt^p$$ where, for each $p$ $$a_p=\sum_{i=m-\lfloor\frac{p}{2}\rfloor}^{2m-p}\binom{m}{i}\binom{i}{p+2i-2m}$$ Time to tackle the big boi. Now we need to find a nice single sum representation for $(1+t+t^2)^m(1+t)^{n-2m}$. Well, we have a nice(ish) single sum for each of the terms individually- let's try smashing them together. $$(1+t+t^2)^m(1+t)^{n-2m}=\Biggl(\sum_{p=0}^{2m}a_pt^p\Biggr)\Biggl(\sum_{k=0}^{n-2m}\binom{n-2m}{k}t^k\Biggr)=\sum_{p=0}^{2m}\sum_{k=0}^{n-2m}a_p\binom{n-2m}{k}t^{p+k}$$ Now, we run the same trick again and suppose that this double sum has some single sum expression like $$\sum_qb_qt^q$$ To find $b_q$ here, we need to find pairs of $p$ and $k$ such that $p+k=q$, $0\leq p\leq 2m$, and $0\leq k\leq n-2m$. Now, suppose we fix some $p$ between $0$ and $2m$. Then $k$, again, has at most one value, i.e., $q-p$, and this one value is only valid if $0\leq k\leq n-2m \implies 0\leq q-p \leq n-2m \implies q-(n-2m)\leq p\leq q$. This gives us that $$b_q=\sum_{p=q-(n-2m)}^qa_p\binom{n-2m}{q-p}$$ (interpreting $a_p$ as $0$ when $p$ falls outside $[0,2m]$). Again, we can refer to the initial double sum to see that $p+k$ will always vary between $0$ and $n$, so $q$ varies between $0$ and $n$. This all means: $$(1+t+t^2)^m(1+t)^{n-2m}=\sum_{q=0}^nb_qt^q$$ where, for each $q$ $$b_q=\sum_{p=q-(n-2m)}^qa_p\binom{n-2m}{q-p}$$ The $b_q$ are the coefficients we desire. To simplify (it's really just making things messier) plug in the general single sum expression for $a_p$ $$b_q=\sum_{p=q-(n-2m)}^q\Biggl(\sum_{i=m-\lfloor\frac{p}{2}\rfloor}^{2m-p}\binom{m}{i}\binom{i}{p+2i-2m}\Biggr)\binom{n-2m}{q-p}=\\\sum_{p=q-(n-2m)}^q\sum_{i=m-\lfloor\frac{p}{2}\rfloor}^{2m-p}\binom{m}{i}\binom{i}{p+2i-2m}\binom{n-2m}{q-p}$$ After some checking now, I'm pretty sure the working is correct but given the number of mistakes I made writing this, there's still a good chance I messed up somewhere. Regardless, this is basically how you can find the coefficients in general.
You'll always end up with a double sum, like the final expression for $b_q$, so you'll probably need to pull out some nifty binomial identities or something to simplify it down to the single sum your question asks for. Or just look at Markus' way simpler answer.
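The final double-sum can be sanity-checked against a direct polynomial expansion. A sketch (my own check, treating out-of-range binomials as zero, with the lower limit on $p$ taken as $q-(n-2m)$ clamped to $[0, 2m]$):

```python
from math import comb

def C(n, k):
    """Binomial coefficient that is 0 outside the valid range."""
    return comb(n, k) if 0 <= k <= n else 0

def coeffs_direct(m, n):
    """Coefficients of (1+t+t^2)^m (1+t)^(n-2m) by repeated multiplication."""
    poly = [1]
    for _ in range(m):  # multiply by (1 + t + t^2)
        poly = [sum(poly[q - s] for s in range(3) if 0 <= q - s < len(poly))
                for q in range(len(poly) + 2)]
    for _ in range(n - 2 * m):  # multiply by (1 + t)
        poly = [sum(poly[q - s] for s in range(2) if 0 <= q - s < len(poly))
                for q in range(len(poly) + 1)]
    return poly

def a(p, m):
    return sum(C(m, i) * C(i, p + 2*i - 2*m)
               for i in range(m - p // 2, 2*m - p + 1))

def b(q, m, n):
    lo = max(0, q - (n - 2*m))   # from 0 <= q - p <= n - 2m
    hi = min(2*m, q)
    return sum(a(p, m) * C(n - 2*m, q - p) for p in range(lo, hi + 1))

m, n = 2, 7
direct = coeffs_direct(m, n)
assert direct == [b(q, m, n) for q in range(n + 1)]
print(direct)
```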
I have a question about an estimate of the surface area of a set. Let $B(r)$ denote the open ball of $\mathbb{R}^{d}$ centered at the origin with radius $r>0$. Let $F:\mathbb{R}^{d} \to \mathbb{R}^{d}$ be a Lipschitz continuous function and define \begin{equation} \text{Lip}(F)=\inf\{L>0:|F(x)-F(y)| \le L|x-y|,x,y \in \mathbb{R}^{d} \}. \end{equation} Question Define $B^{\ast}(r)=F(B(r))$. I want to obtain an upper estimate of $\sigma(B^{\ast}(r))$, where $\sigma(B^{\ast}(r))$ is the surface area of $B^{\ast}(r)$. Strictly speaking, $\sigma(B^{\ast}(r))$ is the $d-1$ dimensional Hausdorff measure of $\partial B^{\ast}(r)$. That is, $\sigma(B^{\ast}(r))=\mathcal{H}^{d-1}(\partial B^{\ast}(r))$. My attempt It is known that the following inequality holds: \begin{equation*} \mathcal{H}^{d-1}(F(E)) \le \text{Lip}(F)^{d-1}\mathcal{H}^{d-1}(E), \end{equation*} for all $E \subset \mathbb{R}^{d}$. But can we show $\partial B^{\ast}(r) \subset F \bigl( \partial B(r) \bigr)$ (if necessary, we may assume $F$ is smooth)? If this is true, we have\begin{align*}\sigma(B^{\ast}(r))&=\mathcal{H}^{d-1}(\partial B^{\ast}(r)) \\& \le \text{Lip}(F)^{d-1}\mathcal{H}^{d-1}(\partial B(r))\\&= \text{Lip}(F)^{d-1}\sigma( B(r)).\end{align*} Can we obtain an upper estimate like $\sigma(B^{\ast}(r)) \le Cr^{d-1}$? Thank you in advance.
In the previous chapter we used one-way ANOVA to analyze data from three or more populations using the null hypothesis that all means were the same (no treatment effect). For example, a biologist wants to compare mean growth for three different levels of fertilizer. A one-way ANOVA tests to see if at least one of the treatment means is significantly different from the others. If the null hypothesis is rejected, a multiple comparison method, such as Tukey’s, can be used to identify which means are different, and the confidence interval can be used to estimate the difference between the different means. Suppose the biologist wants to ask this same question but with two different species of plants while still testing the three different levels of fertilizer. The biologist needs to investigate not only the average growth between the two species (main effect A) and the average growth for the three levels of fertilizer (main effect B), but also the interaction or relationship between the two factors of species and fertilizer. Two-way analysis of variance allows the biologist to answer the question about growth affected by species and levels of fertilizer, and to account for the variation due to both factors simultaneously. Our examination of one-way ANOVA was done in the context of a completely randomized design where the treatments are assigned randomly to each subject (or experimental unit). We now consider analysis in which two factors can explain variability in the response variable. Remember that we can deal with factors by controlling them, by fixing them at specific levels, and randomly applying the treatments so the effect of uncontrolled variables on the response variable is minimized. With two factors, we need a factorial experiment. Table 1. Observed data for two species at three levels of fertilizer. 
This is an example of a factorial experiment in which there are a total of 2 x 3 = 6 possible combinations of the levels for the two different factors (species and level of fertilizer). These six combinations are referred to as treatments and the experiment is called a 2 x 3 factorial experiment. We use this type of experiment to investigate the effect of multiple factors on a response and the interaction between the factors. Each of the n observations of the response variable for the different levels of the factors exists within a cell. In this example, there are six cells and each cell corresponds to a specific treatment. When you compare treatment means for a factorial experiment (or for any other experiment), multiple observations are required for each treatment. These are called replicates. For example, if you have four observations for each of the six treatments, you have four replications of the experiment. Replication demonstrates the results to be reproducible and provides the means to estimate experimental error variance. Replication also provides the capacity to increase the precision for estimates of treatment means. Increasing replication decreases \(s_{\bar y}^2 = \frac {s^2}{r}\), thereby increasing the precision of \(\bar y\). Main Effects and Interaction Effect Main effects deal with each factor separately. In the previous example we have two factors, A and B. The main effect of Factor A (species) is the difference between the mean growth for Species 1 and Species 2, averaged across the three levels of fertilizer. The main effect of Factor B (fertilizer) is the difference in mean growth for levels 1, 2, and 3 averaged across the two species. The interaction is the simultaneous changes in the levels of both factors. If the changes in the level of Factor A result in different changes in the value of the response variable for the different levels of Factor B, we say that there is an interaction effect between the factors.
Consider the following example to help clarify this idea of interaction. Example \(\PageIndex{1}\): Factor A has two levels and Factor B has two levels. In the left box, when Factor A is at level 1, Factor B changes by 3 units. When Factor A is at level 2, Factor B again changes by 3 units. Similarly, when Factor B is at level 1, Factor A changes by 2 units. When Factor B is at level 2, Factor A again changes by 2 units. There is no interaction. The change in the true average response when the level of either factor changes from 1 to 2 is the same for each level of the other factor. In this case, changes in levels of the two factors affect the true average response separately, or in an additive manner. Figure 1. Illustration of interaction effect. Solution The right box illustrates the idea of interaction. When Factor A is at level 1, Factor B changes by 3 units but when Factor A is at level 2, Factor B changes by 6 units. When Factor B is at level 1, Factor A changes by 2 units but when Factor B is at level 2, Factor A changes by 5 units. The change in the true average response when the levels of both factors change simultaneously from level 1 to level 2 is 8 units, which is much larger than the separate changes suggest. In this case, there is an interaction between the two factors, so the effect of simultaneous changes cannot be determined from the individual effects of the separate changes. Change in the true average response when the level of one factor changes depends on the level of the other factor. You cannot determine the separate effect of Factor A or Factor B on the response because of the interaction. Assumptions Note: Basic Assumption The observations on any particular treatment are independently selected from a normal distribution with variance σ2 (the same variance for each treatment), and samples from different treatments are independent of one another. We can use normal probability plots to satisfy the assumption of normality for each treatment. 
The requirement for equal variances is more difficult to confirm, but we can generally check by making sure that the largest sample standard deviation is no more than twice the smallest sample standard deviation. Although not a requirement for two-way ANOVA, having an equal number of observations in each treatment, referred to as a balanced design, increases the power of the test. However, unequal replications (an unbalanced design) are very common. Some statistical software packages (such as Excel) will only work with balanced designs. Minitab will provide the correct analysis for both balanced and unbalanced designs in the General Linear Model component under ANOVA statistical analysis. However, for the sake of simplicity, we will focus on balanced designs in this chapter. Sums of Squares and the ANOVA Table In the previous chapter, the idea of sums of squares was introduced to partition the variation due to treatment and random variation. The relationship is as follows: $$SSTo = SSTr + SSE$$ We now partition the variation even more to reflect the main effects (Factor A and Factor B) and the interaction term: $$SSTo = SSA + SSB +SSAB +SSE$$ where

SSTo is the total sum of squares, with associated degrees of freedom klm − 1
SSA is the Factor A main effect sum of squares, with associated degrees of freedom k − 1
SSB is the Factor B main effect sum of squares, with associated degrees of freedom l − 1
SSAB is the interaction sum of squares, with associated degrees of freedom (k − 1)(l − 1)
SSE is the error sum of squares, with associated degrees of freedom kl(m − 1)

As we saw in the previous chapter, the magnitude of the SSE is related entirely to the amount of underlying variability in the distributions being sampled. It has nothing to do with values of the various true average responses.
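The degrees of freedom partition the same way the sums of squares do; a quick check with the dimensions used later in this chapter (k = 3 varieties, l = 4 densities, m = 3 replicates):

```python
k, l, m = 3, 4, 3  # levels of Factor A, levels of Factor B, replicates per cell

df_total = k * l * m - 1
df_A = k - 1
df_B = l - 1
df_AB = (k - 1) * (l - 1)
df_E = k * l * (m - 1)

# The component df add up to the total df, mirroring
# SSTo = SSA + SSB + SSAB + SSE.
assert df_total == df_A + df_B + df_AB + df_E == 35
print(df_A, df_B, df_AB, df_E, df_total)
```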
SSAB reflects in part underlying variability, but its value is also affected by whether or not there is an interaction between the factors; the greater the interaction, the greater the value of SSAB. The following ANOVA table illustrates the relationship between the sums of squares for each component and the resulting F-statistic for testing the three null and alternative hypotheses for a two-way ANOVA. \(H_0\): There is no interaction between factors \(H_1\): There is a significant interaction between factors \(H_0\): There is no effect of Factor A on the response variable \(H_1\): There is an effect of Factor A on the response variable \(H_0\): There is no effect of Factor B on the response variable \(H_1\): There is an effect of Factor B on the response variable If there is a significant interaction, then ignore the following two sets of hypotheses for the main effects. A significant interaction tells you that the change in the true average response for a level of Factor A depends on the level of Factor B. The effect of simultaneous changes cannot be determined by examining the main effects separately. If there is NOT a significant interaction, then proceed to test the main effects. The Factor A sums of squares will reflect random variation and any differences between the true average responses for different levels of Factor A. Similarly, Factor B sums of squares will reflect random variation and the true average responses for the different levels of Factor B. Table 2. Two-way ANOVA table. Each of the five sources of variation, when divided by the appropriate degrees of freedom (df), provides an estimate of the variation in the experiment. The estimates are called mean squares and are displayed along with their respective sums of squares and df in the analysis of variance table. In one-way ANOVA, the mean square error (MSE) is the best estimate of \(\sigma^2\) (the population variance) and is the denominator in the F-statistic. 
In a two-way ANOVA, it is still the best estimate of \(\sigma^2\). Notice that in each case, the MSE is the denominator in the test statistic and the numerator is the mean sum of squares for each main factor and interaction term. The F-statistic is found in the final column of this table and is used to answer the three alternative hypotheses. Typically, the p-values associated with each F-statistic are also presented in an ANOVA table. You will use the Decision Rule to determine the outcome for each of the three pairs of hypotheses. If the p-value is smaller than α (level of significance), you will reject the null hypothesis. When we conduct a two-way ANOVA, we always first test the hypothesis regarding the interaction effect. If the null hypothesis of no interaction is rejected, we do NOT interpret the results of the hypotheses involving the main effects. If the interaction term is NOT significant, then we examine the two main effects separately. Let’s look at an example. Example \(\PageIndex{2}\): An experiment was carried out to assess the effects of soy plant variety (factor A, with k = 3 levels) and planting density (factor B, with l = 4 levels – 5, 10, 15, and 20 thousand plants per hectare) on yield. Each of the 12 treatments ( k * l) was randomly applied to m = 3 plots ( klm = 36 total observations). Use a two-way ANOVA to assess the effects at a 5% level of significance. Table 3. Observed data for three varieties of soy plants at four densities. It is always important to look at the sample average yields for each treatment, each level of factor A, and each level of factor B.

                         Density
Variety           5       10       15       20     Average (factor A level)
1               9.17    12.40    12.90    10.80    11.32
2               8.90    12.67    14.50    12.77    12.21
3              16.30    18.10    19.87    18.20    18.12
Average
(factor B)     11.46    14.39    15.77    13.92    13.88 (grand mean)

For example, 11.32 is the average yield for variety #1 over all levels of planting densities.
The value 11.46 is the average yield for plots planted with 5,000 plants across all varieties. The grand mean is 13.88. The ANOVA table is presented next.

Source            DF      SS        MSS       F        P
variety            2    327.774   163.887   100.48   <0.001
density            3     86.908    28.969    17.76   <0.001
variety*density    6      8.068     1.345     0.82    0.562
error             24     39.147     1.631
total             35

You begin with the following null and alternative hypotheses: \(H_0\): There is no interaction between factors \(H_1\): There is a significant interaction between factors The F-statistic: $$F_{AB} = \dfrac {MSAB}{MSE} = \dfrac {1.345}{1.631} = 0.82$$ The p-value for the test for a significant interaction between factors is 0.562. This p-value is greater than 5% (α), therefore we fail to reject the null hypothesis. There is no evidence of a significant interaction between variety and density. So it is appropriate to carry out further tests concerning the presence of the main effects. \(H_0\): There is no effect of Factor A (variety) on the response variable \(H_1\): There is an effect of Factor A on the response variable The F-statistic: $$F_{A} = \dfrac {MSA}{MSE} = \dfrac {163.887}{1.631} = 100.48$$ The p-value (<0.001) is less than 0.05 so we will reject the null hypothesis. There is a significant difference in yield between the three varieties. \(H_0\): There is no effect of Factor B (density) on the response variable \(H_1\): There is an effect of Factor B on the response variable The F-statistic: $$F_{B} = \dfrac {MSB}{MSE} = \dfrac {28.969}{1.631} = 17.76$$ The p-value (<0.001) is less than 0.05 so we will reject the null hypothesis. There is a significant difference in yield between the four planting densities.
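The three F-statistics in the example are just ratios of mean squares; recomputing them from the mean squares quoted in the table:

```python
# Mean squares from the soybean ANOVA table in the example.
MSA, MSB, MSAB, MSE = 163.887, 28.969, 1.345, 1.631

F_A = MSA / MSE     # variety main effect
F_B = MSB / MSE     # density main effect
F_AB = MSAB / MSE   # variety*density interaction

assert abs(F_A - 100.48) < 0.01
assert abs(F_B - 17.76) < 0.01
assert abs(F_AB - 0.82) < 0.01
print(round(F_A, 2), round(F_B, 2), round(F_AB, 2))
```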
Here's an outline of the basic ideas. The same critical value, $C$, applies whether $H_0$ is true or $H_1$ is true:

$C=\mu_0+\sigma Z_{1-\alpha/2}/\sqrt{n}$

$C=\mu_1+\sigma Z_\beta/\sqrt{n}$

Setting the two expressions for $C$ equal and rearranging:

$\mu_0+\sigma Z_{1-\alpha/2}/\sqrt{n}=\mu_1+\sigma Z_\beta/\sqrt{n}$

$(\mu_1-\mu_0)/(\sigma/\sqrt{n}) = Z_{1-\alpha/2}- Z_\beta$

$[(\mu_1-\mu_0)/\sigma]\sqrt{n} = Z_{1-\alpha/2}+ Z_{1-\beta}$ (using $Z_\beta = -Z_{1-\beta}$)

$\sqrt{n} = (Z_{1-\alpha/2}+ Z_{1-\beta})/[(\mu_1-\mu_0)/\sigma]$

and then you just square both sides:

$n = (Z_{1-\alpha/2}+ Z_{1-\beta})^2/[(\mu_1-\mu_0)^2/\sigma^2] = (Z_{1-\alpha/2}+ Z_{1-\beta})^2/\delta^2,$ where $\delta = (\mu_1-\mu_0)/\sigma.$

Note: (1) Here I'm doing just the case $\mu_1>\mu_0$. To cover both possibilities, we really need $|\mu_1-\mu_0|$ where I have $(\mu_1-\mu_0)$; the outcome is the same. (2) This ignores a small piece of area to the far left of the second diagram. When $\delta$ is large, this is negligible.
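If it helps, the final formula is easy to check numerically. This is a small Python sketch; the `sample_size` function name and the α = 0.05, 80%-power inputs are illustrative choices of mine, not part of the derivation above:

```python
from statistics import NormalDist
from math import ceil

def sample_size(alpha, beta, delta):
    """n = ((z_{1-alpha/2} + z_{1-beta}) / delta)^2,
    where delta = (mu1 - mu0) / sigma is the standardized effect size."""
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(1 - beta)) / delta) ** 2
    return ceil(n)   # round up to the next whole observation

# 80% power to detect a half-standard-deviation shift at alpha = 0.05
print(sample_size(0.05, 0.20, 0.5))  # 32
```

The raw value here is about 31.4, which rounds up to the familiar n = 32.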
Question: You've observed the following returns on Doyscher Corporation's stock over the past five years: 27.6 percent, 15.4 percent, 33.8 percent, 3.2 percent, and 22.2 percent. The average inflation rate over this period was 3.32 percent and the average T-bill rate over the period was 4.3 percent. (a) What was the average real return on the stock? (b) What was the average nominal risk premium on the stock?

Real rate of return

The real rate of return refers to the inflation-adjusted return of an asset. It is a return that discounts the increase in consumer prices to determine the actual gain on an asset relative to the current purchasing power of the investment.

Answer and Explanation:

Question (a): Formula for calculating the average real return of the stock

{eq}\begin{align*} Average~real~return&=\frac{1+\frac{\sum stock~return}{number~of~periods}}{1+average~inflation~rate}-1\\ &=\frac{1+\frac{.276+.154+.338+.032+.222}{5}}{1+.0332}-1\\ &=\frac{1+\frac{1.022}{5}}{1.0332}-1\\ &=\frac{1.2044}{1.0332}-1\\ &=1.1657-1\\ &=.1657 \end{align*} {/eq}

The average real return of Doyscher Corporation's stock is 16.57 percent.

Question (b): Formula for the average nominal risk premium on the stock

{eq}\begin{align*} Average~nominal~risk~premium&=\frac{\sum stock~return}{number~of~periods}-Average~T-bill~rate\\ &=\frac{.276+.154+.338+.032+.222}{5}-.043\\ &=\frac{1.022}{5}-.043\\ &=.2044-.043\\ &=.1614 \end{align*} {/eq}

The average nominal risk premium of Doyscher Corporation's stock is 16.14 percent.
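As a quick check, the same arithmetic can be reproduced in a few lines of Python (the variable names are mine, not part of the formulas above):

```python
returns = [0.276, 0.154, 0.338, 0.032, 0.222]
avg_nominal = sum(returns) / len(returns)          # arithmetic average return

inflation, t_bill = 0.0332, 0.043

# Fisher relation: (1 + nominal) = (1 + real) * (1 + inflation)
avg_real = (1 + avg_nominal) / (1 + inflation) - 1

# Nominal risk premium: average return minus the average T-bill rate
risk_premium = avg_nominal - t_bill

print(round(avg_real, 4), round(risk_premium, 4))  # 0.1657 0.1614
```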
We've all seen permutations before. If you have ten distinct items, and rearrange them on a shelf, you've just performed a permutation. A permutation is actually a function that performs the arrangement on a set of labeled objects. For simplicity, we can just number the objects and work with permuting the numbers. If we just look at the object numbers 1,\ldots,n, then the set of all permutations of the first n integers, under the operation of composing permutations, forms the symmetric group S_{n}. These groups are so famous they get their own name. These groups are structurally similar to so many other groups that their study runs throughout mathematics. Here we'll take a look at permutations in a bit more depth and see that a certain type of permutation called a cycle behaves exactly the same as integers under modulo arithmetic.

Notation: how do we look at permutations?

Permutations of integers are done by a function \alpha that maps some i in \{1,2,\ldots,n\} to another j in \{1,2,\ldots,n\}. Every number in the set must get mapped somewhere in the set, though the permutation can hold some, all, or no numbers fixed. The permutation that holds all numbers fixed is the identity permutation, which we will call \epsilon. We use a certain kind of notation for permutations that looks like this example of a permutation of the numbers 1, 2, and 3:\begin{pmatrix}1&2&3\\2&3&1\end{pmatrix} The top row is the original set in order. The numbers in the bottom row beneath each top number tell you where the permutation sends each original number. So, the above permutation maps 1 \to 2, 2 \to 3, and 3 \to 1.

Multiple permutations

Now, "products" of permutations work like compositions of functions. You perform the first permutation, and then the next permutation is performed on the result of the first. When we do this, we work right to left, just like in composition of functions.
Quick example: if f(x)=x+5 and g(x) = 2x, then f(g(x)) is calculated by first applying g, then inserting that result into f. So, f(g(2)) = f(2\cdot2) = 2\cdot2 + 5 = 9. Now, let's suppose we have two permutations on the integers \{1,2,3\}: \alpha = \begin{pmatrix}1&2&3\\2&3&1\end{pmatrix} and \beta = \begin{pmatrix}1&2&3\\1&3&2\end{pmatrix}. What is \beta\alpha? We can simply follow a chain. Take 1 for a start. If we apply the permutation \alpha, and then the permutation \beta, where does 1 end up? Well, \alpha(1) = 2. Now we apply \beta to the result of \alpha acting on 1, which is \alpha(1) = 2. Then \beta\alpha(1) = \beta(\alpha(1)) =\beta(2) = 3. So after applying these two permutations, 1 is sent to 3. We can do this for the other two numbers, and we get that \beta\alpha =\begin{pmatrix}1&2&3\\1&3&2\end{pmatrix}\begin{pmatrix}1&2&3\\2&3&1\end{pmatrix} =\begin{pmatrix}1&2&3\\3&2&1\end{pmatrix}

Cycles

Now we'll introduce another concept and notation: that of cycles. A cycle is written just using parentheses instead of the larger notation we used before. [1] The cycle (123) denotes exactly that. Reading left to right, 1 maps to 2, 2 maps to 3, and 3 cycles back around to 1. So (123) is actually just another way of expressing \alpha = \begin{pmatrix}1&2&3\\2&3&1\end{pmatrix}. One other note about the shorthand: every permutation on n integers can be written as the product of disjoint cycles. [2] (134) and (25) are disjoint cycles. We're going to study powers of a single cycle.
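If you want to experiment with products like the \beta\alpha computed above, composition is easy to sketch in Python. Here each permutation is encoded as a dictionary from top-row entries to bottom-row entries (an illustrative encoding of mine, not standard notation):

```python
def compose(beta, alpha):
    """Product beta*alpha: apply alpha first, then beta (right to left)."""
    return {i: beta[alpha[i]] for i in alpha}

alpha = {1: 2, 2: 3, 3: 1}   # two-row notation (1 2 3 / 2 3 1)
beta  = {1: 1, 2: 3, 3: 2}   # two-row notation (1 2 3 / 1 3 2)

# Matches the bottom row (3 2 1) worked out in the text
print(compose(beta, alpha))  # {1: 3, 2: 2, 3: 1}
```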
Powers of cycles work just like powers of numbers: just as 2^3 is 2\cdot2\cdot2, (134)^{3} = (134)(134)(134). [3]

Examples of Powers of Cycles

Let's take an example of a cycle, \alpha = (123), and just compute some powers of it: \alpha^{2} =(123)(123)=\begin{pmatrix}1&2&3\\2&3&1\end{pmatrix}\begin{pmatrix}1&2&3\\2&3&1\end{pmatrix}=\begin{pmatrix}1&2&3\\3&1&2\end{pmatrix}=(132) Now, \alpha^{3} = \alpha\alpha^{2} (again note that we apply another "round" of permutations on the left), and can be calculated in a similar way: \alpha^{3} = (123)(132)=\begin{pmatrix}1&2&3\\1&2&3\end{pmatrix}=\epsilon Interesting notion here: now we're back to the identity permutation. That means if we multiply the cycle (123) on the left of the identity permutation \epsilon, we just get (123). In other words, (123)^{4} = (123). Then that means (123)^{5} = (123)(123)^{4} = (123)(123) = (123)^{2}, and (123)^{6} = \epsilon. Our powers of this cycle form another "cycle" that's starting to look awfully similar to something we've seen before: modulo addition.

Modulo addition and our permutations

The great thing about studying algebra is that by studying the structure of things, we can observe that two groups that looked totally unrelated actually behave very similarly. Notice that the powers of a cycle of length 3, like the ones we computed above, "start over" at powers that are multiples of 3: (123)^{3} = (123)^{6} = \epsilon. So if we paired the cycle (123) with 1, (123)^{2} with 2, and (123)^{3} with 0, then the behavior of this cycle under its "permutation multiplication" is identical to the behavior of \mathbb{Z}_{3}, the integers modulo 3, that we have previously discussed. To extend a bit further, since we can obviously take more powers of the cycle (123) than just stopping at 3, if we wanted to make the correspondence to \mathbb{Z}_{3}, all powers of (123) that are multiples of 3 (which evaluate to the identity permutation) would correspond to 0 (the identity element under modulo addition).
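These powers can be verified with the same dictionary encoding used earlier (a Python sketch of mine; `power` repeatedly composes on the left, matching the convention above):

```python
alpha = {1: 2, 2: 3, 3: 1}           # the cycle (123)
identity = {1: 1, 2: 2, 3: 3}        # the identity permutation

def compose(b, a):
    """Apply a first, then b."""
    return {i: b[a[i]] for i in a}

def power(perm, n):
    """perm multiplied by itself n times (n = 0 gives the identity)."""
    result = identity
    for _ in range(n):
        result = compose(perm, result)
    return result

print(power(alpha, 2))               # {1: 3, 2: 1, 3: 2}, i.e. the cycle (132)
print(power(alpha, 3) == identity)   # True
print(power(alpha, 6) == identity)   # True
```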
The powers of (123) that leave a remainder of 1 when you divide the power by 3 would behave like 1 \in \mathbb{Z}_{3}, and the powers of (123) that leave a remainder of 2 would match with 2 \in \mathbb{Z}_{3}. To write this mathematically, we can actually map the set of all powers of (123), or any cycle of length 3, [4] to \mathbb{Z}_{3}. Let n be the power of a cycle of length 3, which we will now call (a_{1}a_{2}a_{3}) to generalize. Then (a_{1}a_{2}a_{3})^{n} maps to n \bmod 3. With this, we've connected the two groups and learned that powers of cycles of length 3 act just like integers modulo 3.

One more extension

We just discovered that cycles of length 3 under powers (multiplying by itself) act just like integers modulo 3 under modulo addition. The next question we can ask is: does this hold true for cycles of any length? That is, if we have a cycle of length s, does it behave like integers modulo s? In fact, it does. We're going to get a little more general here, so stick with me. Take a generic cycle of length s, and call it \beta = (b_{1}b_{2}\ldots b_{s}), where each b_{i} is some number or element in the permutation cycle. Then we're going to start describing distinct powers of this \beta and watch what happens.\begin{aligned}\beta &=\begin{pmatrix}b_{1}&b_{2}&b_{3}&\ldots&b_{s}\\b_{2}&b_{3}&b_{4}&\ldots & b_{1}\end{pmatrix}\\\beta^{2} &=\begin{pmatrix}b_{1}&b_{2}&b_{3}&\ldots&b_{s}\\b_{3}&b_{4}&b_{5}&\ldots & b_{2}\end{pmatrix}\\\beta^{3} &=\begin{pmatrix}b_{1}&b_{2}&b_{3}&\ldots&b_{s}\\b_{4}&b_{5}&b_{6}&\ldots & b_{3}\end{pmatrix}\\&\vdots\\\beta^{s-1} &=\begin{pmatrix}b_{1}&b_{2}&b_{3}&\ldots&b_{s}\\b_{s}&b_{1}&b_{2}&\ldots & b_{s-1}\end{pmatrix}\\\beta^{s} &=\begin{pmatrix}b_{1}&b_{2}&b_{3}&\ldots&b_{s}\\b_{1}&b_{2}&b_{3}&\ldots & b_{s}\end{pmatrix}=\epsilon\end{aligned} If this looks intimidating, just follow what b_{1} gets permuted to as you take more powers. Applying one \beta to b_{1} gives \beta(b_{1}) = b_{2}.
Then \beta^{2}(b_{1}) = \beta(\beta(b_{1})) = \beta(b_{2}) = b_{3}. Next, \beta^{3}(b_{1}) = b_{4}. Now we start noticing the pattern, just as we did for the cycles of length 3: \beta^{n}(b_{1}) = b_{1+n} while n < s, the length of our cycle. Then \beta^{s}(b_{1}) = b_{1} again and we're now starting over. Do the same thing for any other element in the permutation. Follow it through the powers of \beta and notice that \beta^{n}(b_{j}) = b_{j+n} for any generic j, with the subscripts wrapping around modulo s. Now, when we take powers of the cycle that are larger than s, we just "start over again". So \beta^{s+1} = \beta, \beta^{s+2} = \beta^{2}, and so forth. So \beta^{n} = \beta^{n \bmod s} in general, whether n \geq s or n < s. Now that means we have made the same observation in general that we did for cycles of length 3. We can map powers of cycles of any length s to integers modulo s, and both groups will behave the same way, with the same "starting over" property we observed in modulo arithmetic.

Conclusion

What you've just discovered is a mathematical phenomenon called an isomorphism. We'll discuss the formalities of this in another post. In a nutshell, an isomorphism is the mapping that lets us connect two groups that behave the same way, even if their elements and respective operations have nothing to do with each other. Picture it like two people running with the same stride length and form. Even if they are running in different locations or opposite directions, they're still behaving the same way. Two things that are structurally and behaviorally equivalent, mathematically, are called isomorphic. We just proved (albeit a bit informally) that \mathbb{Z}_{s} is isomorphic to the powers of a permutation cycle of length s. Modulo arithmetic is much easier to compute than powers of permutations, so if we want to study how they work, we can actually study an isomorphic group that is simpler to get the same results. As a more tangible example, molecular structures and symmetries can get really icky to study and play with.
There are simpler groups that are isomorphic to these structures and their symmetries that make their study much easier. For instance, instead of having to build a model and study what happens if you rotate it twice and reflect it across its vertical symmetry once, we can map those actions to a simpler algebraic structure with a simpler multiplication, calculate the result, then map it back using the isomorphism. Abstract mathematics allows us to strip the more complex "fluff" from a problem and reduce it to its skeleton. By looking at the skeletons, we can study the problem's mechanics more simply, and even reduce the problem to an isomorphic one that has been previously solved.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Footnotes

[1] I said once that mathematicians were lazy. We hate writing more than we have to.
[2] Disjoint: a number that shows up in one cycle doesn't show up in another.
[3] Remember that we apply the permutations from right to left.
[4] Numbers are just labels in permutations. If we have a cycle of length 3 that isn't (123), say (489), the behavior is still the same. We can just "label" 4 as the first in the cycle, 8 as the second, and 9 as the third and be right back with what we already know.
In statistics, the researcher checks the significance of the observed result, which is known as the test statistic. For this test, a hypothesis test is also utilized. The P-value or probability value concept is used everywhere in statistical analysis. It determines the statistical significance and the measure of significance testing. In this article, let us discuss its definition, formula, table, interpretation and how to use the P-value to find the significance level, etc., in detail.

P-value Definition

The P-value is known as the probability value. It is defined as the probability of getting a result that is either the same or more extreme than the actual observations. The P-value is known as the level of marginal significance within the hypothesis testing that represents the probability of occurrence of the given event. The P-value is used as an alternative to the rejection point to provide the least significance at which the null hypothesis would be rejected. If the P-value is small, then there is stronger evidence in favour of the alternative hypothesis.

P-value Table

The P-value table shows the hypothesis interpretations:

P-value              Description                                            Hypothesis Interpretation
P-value ≤ 0.05       It indicates the null hypothesis is very unlikely.     Rejected
P-value > 0.05       It indicates the null hypothesis is very likely.       Accepted, or it "fails to reject"
P-value near 0.05    The P-value is near the cut-off; it is considered      The hypothesis needs more attention.
                     marginal.

P-value Formula

We know that the P-value is a statistical measure that helps to determine whether the hypothesis is correct or not. The P-value is a number that lies between 0 and 1. The level of significance (α) is a predefined threshold that should be set by the researcher. It is generally taken as 0.05.
The formula for the calculation of the P-value starts from the test statistic.

Step 1: Find the test statistic \(z = \frac{\hat{p}-p_{0}}{\sqrt{\frac{p_{0}(1-p_{0})}{n}}}\), where \(\hat{p}\) = sample proportion, \(p_{0}\) = assumed population proportion in the null hypothesis, and \(n\) = sample size.

Step 2: Look at the Z-table to find the corresponding level of P from the z value obtained.

P-Value Example

An example to find the P-value is given here.

Question: A statistician wants to test the hypothesis \(H_0\): μ = 120 using the alternative hypothesis \(H_\alpha\): μ > 120 and assuming that α = 0.05. For that, he took the sample values as n = 40, σ = 32.17 and x̄ = 105.37. Determine the conclusion for this hypothesis.

Solution:

We know that \(\sigma_{\bar{x}}=\frac{\sigma }{\sqrt{n}}\).

Now substitute the given values: \(\sigma_{\bar{x}}=\frac{32.17 }{\sqrt{40}}\) = 5.0865.

Now, using the test statistic formula, we get z = (105.37 – 120) / 5.0865, therefore z = -2.8762.

Using the Z-score table, we can find the value of P(z > -2.8762). From the table, we get P(z < -2.8762) = P(z > 2.8762) = 0.003. Therefore P(z > -2.8762) = 1 - 0.003 = 0.997.

P-value = 0.997 > 0.05.

Therefore, from the conclusion, if p > 0.05, the null hypothesis is accepted or fails to be rejected. Hence, the conclusion is "fails to reject \(H_0\)."
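For readers who want to check this example numerically, here is a sketch using Python's standard library (variable names are illustrative). The table value 0.003 is rounded; the unrounded computation gives a p-value of about 0.998 rather than 0.997, but either way it exceeds 0.05:

```python
from statistics import NormalDist
from math import sqrt

mu0, xbar, sigma, n, alpha = 120, 105.37, 32.17, 40, 0.05

se = sigma / sqrt(n)       # standard error of the mean, about 5.0865
z = (xbar - mu0) / se      # test statistic, about -2.8762

# H1: mu > 120 is an upper-tail test, so the p-value is P(Z > z)
p_value = 1 - NormalDist().cdf(z)

print(round(z, 4), round(p_value, 3))                            # -2.8762 0.998
print("reject H0" if p_value <= alpha else "fail to reject H0")  # fail to reject H0
```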
The basic idea of KKT reduction methods, which were already known from Chebyshev approximation, consists of replacing \(P\) (whose objective function \(f\) and constraint functions \(g\) are assumed to be smooth) with a nonlinear system of equations obtained from the KKT local optimality conditions for \(P\). In fact, under suitable conditions, if \(\overline{x}\) is a local minimizer of \(P\), there exist indices \(\overline{t}_{j}\in T_{a}(\overline{x}), j = 1,\dots,q(\overline{x})\), with \(q(\overline{x})\in \mathbb{N}\) depending on \(\overline{x}\), and nonnegative multipliers \(\overline{\lambda}_{j}, j=1,\dots,q(\overline{x})\), such that \[\nabla_{x}f(\overline{x})=\sum_{j=1}^{q(\overline{x})}\overline{\lambda}_{j}\nabla_{x}g(\overline{x},\overline{t}_j).\] We assume also the availability of a description of \(T \subset \mathbb{R}^m\) as \[T=\{t\in \mathbb{R}^m : u_{i}(t) \geq 0, \,\,i=1,\dots, m\},\] where \(u_{i}\) is smooth for all \(i=1,\dots,m\). Observe that \(q(\overline{x})\) is the number of global minima of the lower level problem \(Q(\overline{x})\), provided that \(\nu (Q(\overline{x}))=0\). In that case, \(\overline{t}_{j}\in T_{a}(\overline{x})\) if and only if \(\overline{t}_{j}\) is an optimal solution of the (finite) lower level problem at \(\overline{x}\) \[Q(\overline{x})\,:\, \min_{t} g(\overline{x},t)\,\,\, \text{s.t.}\,\,\, u_{i}(t)\geq 0,\,\, i=1,\dots, m. \] Then, under some constraint qualification, the classical KKT theorem yields the existence of nonnegative multipliers \(\overline{\theta}_{i}^{j}, i=1,\dots,m,\) such that \[\nabla_{t}g(\overline{x},\overline{t}_{j})=\sum_{i=1}^{m}\overline{\theta}_{i}^{j}\nabla_{t}u_{i}(\overline{t}_j)\] and \[\overline{\theta}_{i}^{j}u_{i}(\overline{t}_j)= 0,\,\, i=1,\dots, m.\] Step \(k\): Start with a given \(x_{k}\) (not necessarily feasible). Estimate \(q(x_{k})\).
Apply \(N_{k}\) steps of a quasi-Newton method (for finite systems of equations) to\begin{array}{llll} \nabla_{x}f(x) & = & \sum_{j=1}^{q(x_{k})}\lambda _{j}\nabla _{x}g(x,t_{j}) & \\ g(x,t_{j}) & = & 0 & j=1,...,q(x_{k}) \\ \nabla_{t}g(x,t_{j}) & = &\sum_{i=1}^{m}\theta_{i}^{j}\nabla_{t}u_{i}(t_{j}) & \\ \theta_{i}^{j}u_{i}(t_{j}) &= & 0 & i=1,\dots,m,\,\,j=1,\dots,q(x_{k}) \end{array} with unknowns \(x\), \(t_{j}\), \(\lambda_{j}\), \(\theta_{i}^{j}\), \(i=1,\dots,m\), \(j=1,\dots,q(x_{k})\), leading to iterates \(x_{k,l}\), \(l=1,\dots,N_{k}\). Set \(x_{k+1}=x_{k,N_{k}}\) and \(k=k+1.\) The main advantage of the KKT reduction methods over the discretization and cutting plane methods is their fast local convergence (as they are adaptations of quasi-Newton methods), provided that they start sufficiently close to an optimal solution. The so-called two-phase methods combine a discretization or central cutting plane method providing a rough approximation of an optimal solution (1st phase) and a reduction method improving this approximation (2nd phase). No theoretical result supports the decision to switch to phase 2. For more details, see Glashoff and Gustafson (1983), Hettich and Kortanek (1993), López and Still (2007), and Reemtsen and Görner (1998) in the Semi-infinite Programming References.
Answer

$\cos A = \sin (90^{\circ} -A) = \sin B$

Work Step by Step

Use the cofunction identity $\cos x = \sin (90^{\circ} -x)$. In a right triangle, the two acute angles satisfy $m(\angle B) = 90^{\circ} - m(\angle A)$. Therefore, using the cofunction formula above, $\cos A = \sin (90^{\circ} -A) = \sin B$.
This article is an introduction to the post-Keynesian approach to inflation. It is largely based on Section 8.1.1 of Professor Marc Lavoie's Post-Keynesian Economics: New Foundations (link to my review). Similar to the work on stock-flow consistent models, we start out with what is essentially an accounting identity: a statement that is true by definition. We need to understand the implications of the accounting identity before we worry about the behavioural aspects (which are not pinned down by accounting). (The approach here is quite distinct from conventional approaches; I discussed why post-Keynesians reject conventional inflation theory in an earlier article.)

Markup Pricing

For simplicity, this article only discusses a closed economy; that is, an economy that is not open to external trade. Section 8.1.2 of Post-Keynesian Economics discusses the open economy case. In a monetary exchange economy, pretty much all goods and services that are exchanged based on monetary transactions are the results of some form of labour. Yes, capital goods are typically required, but those capital goods were the result of labour inputs. For those of us worried about resource consumption, commodity inputs are important, but those commodities do not magically appear -- human labour is required to extract them. If one wanted to paraphrase some Teutonic economists, goods and services are the result of human action. We can use Hollywood dystopian fiction to give a good illustration. On Earth, we do not (yet!) pay for access to oxygen; it is available to healthy individuals without any conscious effort. Whereas if we turn to the (presumably) fictional Mars of the Arnold Schwarzenegger version of Total Recall, oxygen was under the control of the rather villainous entity that ruled Mars.
(Admittedly, this breaks down at the end of the film, where oxygen was made available by alien technology.) So oxygen is outside the analysis of economies on Earth, whereas it would be important on (pre-alien intervention) Mars. In a capitalist society, the bulk of output is produced by paid labour. Although there are self-employed individuals (like myself) and worker-owned firms, they are still a minority. The defining characteristic of workers is that they are paid a wage, which is normally fixed nominally (although employees at the top of the hierarchy skim off a lot of profits via bonus schemes). If we assume that all output is the result of wage labour, we can arrive at the identity (due to Weintraub): \[ p = \kappa \frac{w}{y}, \] where: $p$ is the price level; $\kappa$ is the average markup; $w$ is the nominal wage rate; $y$ is the output per worker. For example, if $\kappa$ is 1.1, then the selling price of output is 10% higher than the wage cost to produce it.

Rates of Change

We now turn to the rate of change of prices -- inflation. We denote the percentage change of a variable $x$ as $\hat{x}$, which is defined as: \[ \hat{x}(t) = \frac{ \frac{dx}{dt}}{x(t)}, \] (the rate of change of $x$ divided by $x$ itself). Needless to say, we assume that the variable stays away from zero. (Unfortunately, the hats on my variables seem to be offset when the page is rendered, which is annoying.) If we apply some basic calculus, we can see that: \[ \hat{p}(t) = \hat{w}(t) - \hat{y}(t) + \hat{\kappa}(t). \]

Interpretation

We want to be able to explain persistent inflation; for short periods of time, the three terms in the rate of change identity can do all sorts of things. The argument in Post-Keynesian Economics is that markups cannot rise forever, as that would imply an ever-rising profit share of national income.
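As a sanity check on the rate-of-change identity, here is a small numerical sketch (the wage and productivity figures are invented for illustration). With the markup held constant, inflation is approximately wage growth minus productivity growth:

```python
# Price level from the markup identity p = kappa * w / y
def price(kappa, w, y):
    return kappa * w / y

# Two hypothetical years: wages +4%, productivity +2%, markup unchanged
p0 = price(1.10, 20.00, 2.00)
p1 = price(1.10, 20.80, 2.04)

inflation = p1 / p0 - 1
print(round(inflation, 4))   # 0.0196, i.e. roughly 4% - 2%
```

A rising markup ($\hat{\kappa} > 0$) would add directly to inflation and imply an ever-rising profit share, which is the scenario dismissed above.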
One might cynically note that this is exactly what has been happening for the post-1980 period, but even so, the rise in the price level was much greater than the rise in profits over that period. Instead, we need to look at the first two terms: how much greater wage growth is than the growth of output per worker (${\hat w} - \hat{y}$). The analysis then leads to: why will wage gains outstrip productivity? The post-Keynesian answer is that this will happen if workers' bargaining position improves relative to that of business owners. As I noted previously, coming up with a mathematical formulation of the notion of "workers' bargaining position" is going to be difficult. However, one can see the attraction of this theory. By most accounts, the bargaining position of labour has been crippled as a result of structural changes imposed since the early 1980s. From this standpoint, the deceleration of inflation is no accident.

How Different is This?

If we stick to just the accounting identity that I described in this article, one could reasonably argue that this is not that much different than conventional theories about inflation. The same accounting identity appears, since it is obviously true. The difference is stark if we look at political economy considerations: a viewpoint that has largely been whitewashed from polite economic conversation. If we stick to the mathematical formulation used in mainstream macro, prices are purely the result of marginal productivity; the concept of a struggle for income shares disappears by definition. (One may note that not all mainstream economists entirely buy that story, even if they are otherwise theoretically orthodox.) From a practical consideration, central bankers are obsessed with labour market statistics. Is the unemployment rate below the dreaded NAIRU?
Although the output gap -- as defined by actual GDP versus "potential" -- is allegedly more useful than the "NAIRU gap," one may note that it has barely come up in conversation this cycle (other than being invoked by rate doves). In previous cycles, the output gap was more popular in market analysis. No matter what mainstream economics allegedly says about the drivers of inflation, wage inflation is what matters in practice. However, one may note the absence of the notion of the central bank determining the price level, or the diminished role of expectations. Since administered prices are set by human beings as a markup over costs (a point I noted in an earlier article), their expectations about their future costs obviously matter. For this reason, we should not be surprised that there is a relationship between inflation expectations and realised inflation. However, it is unclear how far we can run with that concept. If one demands a reduced-order inflation model, we need to jump to Section 8.4 of Post-Keynesian Economics -- the conflicting claims model. However, despite my training as an applied mathematician, I am deeply skeptical about such reduced-form models. I have fewer objections to the post-Keynesian version. The reason is that the models contain some fairly arbitrary variables, such as targets for real wages. Why would workers' target for their real wages change? Although one might be able to do some historical analysis, it seems straightforward that such variables are a function of the political economy environment. We can try fitting a model to historical data, but any extrapolation of past trends is obviously contingent on the environment not changing. This is not a feature of mainstream macro, which allegedly captures universal truths about the economy. If we return to my complaints about price index aggregation (detailed in my earlier article), we see that the simplified post-Keynesian models are also hit by them.
(Within the post-Keynesian literature, there is a lot of detailed empirical work on inflation, which would be compatible with the price level aggregation critique.) Although the aggregate accounting identity is true by definition, it is the result of lumping together very different price measures. A good example is the runaway cost inflation in American university tuition: it is certainly not the result of paying line professors too much. We would need to attempt to apply the concepts to the components of the domestic price structure where the price is best explained by a markup over line employee wages; other parts of the CPI will march to different drummers. From a policy standpoint, we see why the government needs to be concerned about wage growth in the context of inflation control. Real output per worker is set by the "technology" of the production process; it is unclear how much we can increase that over time (although every political party tends to promise that this is what their programme will deliver). Although the government should be neutral with regards to the distribution of income between labour and capital, it does seem somewhat implausible to worry about an ever-increasing level of profits with a fixed level of wages. By the process of elimination, this means that policy makers need to ensure that wage growth cannot indefinitely outstrip productivity growth (which is assumed to be outside the control of policy). This logic explains why I would only be concerned about a return of a 1970s-style inflation in the context of changes in the political environment.

Concluding Remarks

From the perspective of my potential book on the business cycle, I doubt that I will go much further down the rabbit hole of inflation theory than this. From the perspective of orthodox mainstream macro, such a decoupling of inflation from the business cycle is insane: the determination of prices is the whole key to the business cycle.
However, the evidence of such a view is sketchy at best, and so I would prefer to move inflation analysis to a later report. (It may be that I will eventually bind my reports into a single big book, but I would rather keep my reports at a manageable size.) (c) Brian Romanchuk 2018
A free-floating planet candidate from the OGLE and KMTNet surveys (2017) Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ... OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary (2017) We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ... OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing (2017) We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ... OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only (2018) We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ... OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function (2018) We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ... OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy (2018) We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge (2018) We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ... Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb (2018) We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ... OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit (2018) We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ... KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion (2018) We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ...
I'm taking a class that has me research a topic and write a report on it. The assignment is to analyze the relationship between quadratic and cubic polynomials and their discriminants. In an attempt to get used to Maple (I've only recently begun to use it), I plotted the region where the discriminant satisfies $b^2-4ac>0$, with the fixed value $a=1$, using the function `inequal(b^2 - 4*c > 0)`. Visually, this makes sense to me: quadratics with $b^2-4ac\geq 0$ have at least one real solution, but those with $b^2-4ac<0$ do not. Looking at the cubic discriminant (found on Wikipedia): $$\Delta = b^2c^2-4ac^3-4b^3d-27a^2d^2+18abcd$$ I plotted this the same way, with the assumption that $a=1$ and $b=0$. I came up with the following: This one doesn't appear to be as easy to understand, however. I plotted the equation $x^3+20x+5$ and got the following: This clearly has a real root, as it crosses the $x$-axis. But the inequality I linked seems to imply that the blue area is the only area with solutions. Given that this is my first time hearing about discriminants for anything except quadratic equations, how am I misinterpreting this data?
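A quick numeric cross-check (done in Python with NumPy rather than Maple) of how the sign of the cubic discriminant relates to the number of real roots. A cubic with real coefficients always has at least one real root; $\Delta>0$ marks three distinct real roots, while $\Delta<0$ marks exactly one real root plus a complex-conjugate pair:

```python
import numpy as np

def cubic_disc(a, b, c, d):
    """Discriminant of a*x^3 + b*x^2 + c*x + d."""
    return b*b*c*c - 4*a*c**3 - 4*b**3*d - 27*a*a*d*d + 18*a*b*c*d

# x^3 + 20x + 5 from the question has Delta < 0: ONE real root,
# not zero real roots, which is the misinterpretation here.
for coeffs in [(1, 0, 20, 5),    # Delta = -32675 < 0 -> 1 real root
               (1, 0, -3, 1)]:   # Delta = 81 > 0     -> 3 distinct real roots
    roots = np.roots(coeffs)
    n_real = int(np.sum(np.abs(roots.imag) < 1e-9))
    print(coeffs, cubic_disc(*coeffs), n_real)
```

So the blue region in the plot is where the cubic has three distinct real roots, not where it has any real root at all.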
We’ll start off with a very basic problem; the goal of this problem is just to get a feel for how minimization problems are solved using the Euler-Lagrange equation. As we go through multiple problems, hopefully we’ll start to notice patterns and general procedures which were taken in each problem to get the result. The problem we’re going to look at here can be stated as follows: for any two points \((x_1,y_1,z_1)\) and \((x_2,y_2,z_2)\) on the surface of a cylinder, we need to find the curve on the cylinder connecting those two points which has the smallest possible arc length \(S\). The shortest path between any two points on a curved surface is called a geodesic, to clarify the title of this article and the video above. You could imagine expressing the generalized coordinates \(q_j\) in Cartesian coordinates as \((x,y,z)\), which will represent any curve on the cylinder between the points \((x_1,y_1,z_1)\) and \((x_2,y_2,z_2)\). For a plane curve \(y(x)\), the arc length \(S\) can be expressed as a functional of these coordinates as $$S=\int_{P_1}^{P_2}\sqrt{1+\biggl(\frac{dy}{dx}\biggr)^2}\,dx$$ where $$L=\sqrt{1+\biggl(\frac{dy}{dx}\biggr)^2}.$$ (As a side note, \(P_1\) and \(P_2\) refer to the starting and ending points of the curve as shown in the video above.) To solve the Euler-Lagrange equations for the coordinates which minimize the functional \(S\), we must take derivatives of \(L\) with respect to our choice of generalized coordinates. A general strategy in these problems is to choose generalized coordinates which make evaluating these derivatives as easy as possible. For this reason, we’ll choose our generalized coordinates to be polar coordinates.
The algebraic and trigonometric manipulations used to express our Cartesian coordinates and arc length in terms of polar coordinates are shown in the video and, for convenience, I’ll also list them below\(^1\): $$x=R\cos\theta \;\Rightarrow\; dx=-R\sin\theta\,d\theta$$ $$y=R\sin\theta \;\Rightarrow\; dy=R\cos\theta\,d\theta$$ $$z=z \;\Rightarrow\; dz=dz$$ $$dS=\sqrt{dx^2+dy^2+dz^2}=\sqrt{R^2\sin^2\theta\,d\theta^2+R^2\cos^2\theta\,d\theta^2+dz^2}=\sqrt{(R\,d\theta)^2+dz^2}=\sqrt{d\theta^2\Bigl[R^2+\Bigl(\frac{dz}{d\theta}\Bigr)^2\Bigr]}$$ $$S=\int_{\theta_1}^{\theta_2}\sqrt{R^2+\Bigl(\frac{dz}{d\theta}\Bigr)^2}\,d\theta$$ The whole point of all that was just to express \(S\) as a functional of the form \(S(z(\theta), z'(\theta), \theta)\) in terms of polar coordinates because, according to our derivation, only then can we solve for the curve \(z(\theta)\) (using the Euler-Lagrange equation) which minimizes \(S\). (It is very important to always know what we’re actually trying to do in these kinds of problems; it is all too easy to get lost in the math and lose track of what we’re actually trying to accomplish.) To find the curve \(z(\theta)\) whose arc length is minimized, we need to solve the Euler-Lagrange equation, evaluating each of the derivatives as shown in the video and below: $$\frac{\partial L}{\partial z}=0$$ $$\frac{\partial L}{\partial z'}=\frac{1}{2}\Bigl(R^2+\Bigl(\frac{dz}{d\theta}\Bigr)^2\Bigr)^{-\frac{1}{2}}(2z')=\frac{z'}{\sqrt{R^2+\bigl(\frac{dz}{d\theta}\bigr)^2}}$$ $$\frac{d}{d\theta}\Biggl(\frac{z'}{\sqrt{R^2+\bigl(\frac{dz}{d\theta}\bigr)^2}}\Biggr)=0$$ Something very nice happened in the bottom equation which, in general, does not happen: the \(\theta\)-derivative of the quantity \(\frac{z'}{\sqrt{R^2+(\frac{dz}{d\theta})^2}}\) is equal to zero, and if the rate of change of that quantity is zero, then it must be a constant: $$\frac{z'}{\sqrt{R^2+(\frac{dz}{d\theta})^2}}=C.$$ (\(C\) represents an arbitrary constant.) To make sure we’re not getting lost in the math, I’ll repeat, and continue to repeat, that our goal is to find \(q_j(\theta)=(z(\theta),\,R=\text{constant})\), which is the curve that minimizes \(S\).
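As a symbolic sanity check (my own addition, using SymPy; \(z'\) is treated as an ordinary symbol, which is valid for the partial derivatives appearing in the Euler-Lagrange equation), we can confirm the derivative of \(L\) and the conservation along the straight-line solution:

```python
import sympy as sp

R, zp = sp.symbols("R zprime", positive=True)  # zp stands in for z'
L = sp.sqrt(R**2 + zp**2)

# dL/dz' should equal z'/sqrt(R^2 + z'^2); L contains no explicit z, so dL/dz = 0.
dL_dzp = sp.diff(L, zp)
assert sp.simplify(dL_dzp - zp / sp.sqrt(R**2 + zp**2)) == 0

# Along z = m*theta + b we have z' = m, so the conserved quantity
# z'/sqrt(R^2 + z'^2) is constant and its theta-derivative vanishes.
theta, m, b = sp.symbols("theta m b", positive=True)
zprime_helix = sp.diff(m * theta + b, theta)   # equals m
conserved = zprime_helix / sp.sqrt(R**2 + zprime_helix**2)
assert sp.diff(conserved, theta) == 0
```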
So what we’re going to do is some algebraic manipulation on the equation above to try to isolate \(z(\theta)\). The first thing that comes to mind is to integrate both sides to get rid of the derivative on the left-hand side. You might try integrating both sides of the equation as it currently stands, but let’s do some algebra first (written below) to simplify it: $$\frac{z'}{\sqrt{R^2+(\frac{dz}{dθ})^2}}=C.$$ $$\frac{z'^2}{R^2+z'^2}=C^2=A$$ $$z'^2=A(R^2+z'^2)$$ $$z'^2(1-A)=AR^2$$ $$z'^2=\frac{AR^2}{1-A}$$ $$z'=\sqrt{\frac{AR^2}{1-A}}$$ Notice that the expression on the right-hand side of the bottom equation is a constant. Let’s represent it by \(m\), as in the video. Since \(m\) is just a constant, when we integrate both sides with respect to the independent variable \(θ\) we get: $$z(θ)=mθ+b.$$ This equation gives us the shortest path between the two points \(P_1\) and \(P_2\): if we unrolled the cylinder and flattened it out into a plane, it would be a straight line. If you imagine rolling this plane back up again, the shape that \(z(θ)\) traces out along the surface of the cylinder will be a helix. The Euler-Lagrange equation can be used to find the geodesic on any curved surface: a procedure similar to the one we used here for the cylinder can be generalized to find the geodesic along any surface. This article is licensed under a CC BY-NC-SA 4.0 license. References 1. The Kaizen Effect. "Lagrangian Mechanics - Lesson 2: Finding Geodesics on Any Surface". Online video clip. YouTube. YouTube, 21 May 2016. Web. 18 May 2017. Notes 1. If you are rusty on trigonometry or the chain rule, I suggest getting freshened up using the Khan Academy’s videos on these topics.
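A quick numeric check of the conclusion (a sketch of my own; the radius \(R=2\) and the endpoints are arbitrary choices, not from the video): the helix \(z=m\theta\) should have a shorter arc length \(S=\int\sqrt{R^2+z'^2}\,d\theta\) than any perturbed curve on the cylinder with the same endpoints.

```python
import math

R = 2.0                       # cylinder radius (arbitrary test value)
theta2, z2 = math.pi, 1.0     # end point; the start is (theta, z) = (0, 0)

def arc_length(z, n=20_000):
    """Numerically integrate S = int_0^theta2 sqrt(R^2 + z'(theta)^2) dtheta."""
    h = theta2 / n
    total = 0.0
    for i in range(n):
        t = i * h
        zp = (z(t + h) - z(t)) / h          # forward-difference estimate of z'
        total += math.sqrt(R * R + zp * zp) * h
    return total

helix = lambda t: (z2 / theta2) * t                      # z = m*theta, m = z2/theta2
bent  = lambda t: (z2 / theta2) * t + 0.3 * math.sin(t)  # same endpoints, perturbed

# The helix is shorter; its exact length is sqrt(R^2*theta2^2 + z2^2),
# i.e. the straight-line distance on the unrolled cylinder.
assert arc_length(helix) < arc_length(bent)
```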
I need to determine whether, given two formal languages $L_1$ and $L_2$, $$(L_1 \cap L_2)^*\subseteq (L_1^* \cap L_2^*).$$ I think that it's true, since this can be rewritten as $$ \bigcup^\infty_{i=0}(L_1 \cap L_2)^i \subseteq \Bigl(\bigcup^\infty_{i=0}L_1^i \cap \bigcup^\infty_{i=0}L_2^i\Bigr)$$ and, in general, $$(A \cap B) \cup (C \cap D) \subseteq (A \cup C) \cap (B \cup D).$$ I may be wrong, since it has been a long time since I have done anything with set theory. Thanks.

Yes, it is true, but the argument offered to support the conjecture is incorrect. A correct proof is as follows: $L_1 \cap L_2 \subseteq L_1$ and therefore $(L_1 \cap L_2)^* \subseteq L_1^*$ (argument: if $A \subseteq B$ then $A^* \subseteq B^*$). Similarly $(L_1 \cap L_2)^* \subseteq L_2^*$. Therefore $(L_1 \cap L_2)^* \subseteq (L_1^* \cap L_2^*)$ (argument: if $A \subseteq B$ and $A \subseteq C$ then $A \subseteq B \cap C$). In any case, you cannot write $(A \cap B)^i \cup (C \cap D)^j = (A^i \cup C^j) \cap (B^i \cup D^j)$. The correct relation is $(A \cap B) \cup (C \cap D) = (A \cup C) \cap (B \cup D) \cap (A \cup D) \cap (B \cup C)$. However, $\bigcup^\infty_{i=0}(L_1 \cap L_2)^i \subseteq (\bigcup^\infty_{i=0}L_1^i \cap \bigcup^\infty_{i=0}L_2^i)$ is correct, because $\bigcup^\infty_{i=0}(L_1 \cap L_2)^i \subseteq \bigcup^\infty_{i=0}L_1^i$ and $\bigcup^\infty_{i=0}(L_1 \cap L_2)^i \subseteq \bigcup^\infty_{i=0}L_2^i$.
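The inclusion can also be sanity-checked on finite slices of the Kleene stars (a sketch; the example languages $L_1$, $L_2$ and the length bound are arbitrary choices). Here $L_1 \cap L_2 = \{a\}$, so the left-hand side is a slice of $a^*$, while the right-hand side also contains words like $ab$ — the inclusion is strict:

```python
def star_up_to(L, k):
    """Finite slice of the Kleene star: all concatenations of words from L of length <= k."""
    words, frontier = {""}, {""}
    while frontier:
        new = {w + x for w in frontier for x in L if len(w + x) <= k}
        frontier = new - words
        words |= new
    return words

L1, L2 = {"a", "b"}, {"a", "ab"}             # L1 ∩ L2 = {"a"}

lhs = star_up_to(L1 & L2, 4)                  # slice of (L1 ∩ L2)*
rhs = star_up_to(L1, 4) & star_up_to(L2, 4)   # slice of L1* ∩ L2*

assert lhs <= rhs                             # the inclusion from the answer
assert "ab" in rhs and "ab" not in lhs        # ...and it is strict here
```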
The central limit theorem says that the standardized sample mean $Z$ converges in distribution to the standard normal: $$Z=\frac{\hat{x}-E(X)}{\text{sd}(\hat{x})} \overset{D}{\rightarrow} N(0,1).$$ Rearranging this at a fixed number of samples $n$ gives, approximately, $$\hat{x} \sim N(E(X),\text{Var}(\hat{x})),$$ where $\text{Var}(\hat{x})=\frac{\text{Var}(X)}{n}$ is the variance of the sample mean. You might interpret this as: the distribution of $\hat{x}$ tends towards a normal with mean $E(X)$ and variance $\frac{\text{Var}(X)}{n}$. For large values of $n$, the variance tends to $0$, which means that the normal distribution of $\hat{x}$ converges to a point mass at $E(X)$. Convergence in distribution to a constant implies convergence in probability to that constant, so $$\hat{x} \overset{P}{\rightarrow} E(X).$$ Of course, this is just the weak law of large numbers. (The strong law states that the convergence is almost sure.)
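A toy simulation illustrates the shrinking variance (a sketch; $X \sim \text{Uniform}(0,1)$ is an arbitrary choice, with $E(X)=1/2$ and $\text{Var}(X)=1/12$):

```python
import random

random.seed(0)

def sample_mean(n):
    """Mean of n i.i.d. draws from Uniform(0, 1): E(X) = 1/2, Var(X) = 1/12."""
    return sum(random.random() for _ in range(n)) / n

# Var(x_hat) = Var(X)/n, so the sample mean concentrates around E(X) = 0.5
# as n grows; the printed deviations shrink roughly like 1/sqrt(n).
for n in (10, 1_000, 100_000):
    print(n, abs(sample_mean(n) - 0.5))
```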