Summary: Given a list of activities required to complete a project, along with the duration of each activity and the dependencies between activities, the objective of the Critical Path Method (CPM) is to determine the minimum time required to complete the project and to identify the critical activities whose durations determine it.

Managing a large-scale project requires coordinating many activities of varying duration and involving numerous dependencies. PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method) are two closely related operations research techniques that use networks to coordinate activities, develop schedules, and monitor the progress of projects. PERT and CPM were developed independently in the 1950s. While the original versions differed in some important ways, the two techniques had much in common. Over time, the techniques have merged and, for the most part, the names are used interchangeably or combined in the single acronym PERT/CPM.

We introduce a small example that will be used to illustrate various aspects of CPM. Suppose we are constructing a new building; the required construction activities are shown in the table below, along with the estimated duration of each activity and any immediate predecessors. An immediate predecessor of an activity \(y\) is an activity \(x\) that must be completed no later than the starting time of activity \(y\). When an activity has more than one immediate predecessor, all of them must be completed before the activity can begin.

Activity  Duration (weeks)  Predecessor(s)
A         2                 none
B         3                 A
C         3                 A
D         4                 C
E         8                 D
F         6                 B, E
G         2                 F

There are many questions to be answered when scheduling a complex project, but two of the most important are: What is the total time required to complete the project if no delays occur? What are the critical bottleneck activities?

A project network is used to represent a project and to show the relationships between the activities. In an activity-on-node (AON) network, each activity is represented by a node.
Each predecessor relationship is represented by an arc; there is an arc from node \(i\) to node \(j\) if the activity at node \(i\) is an immediate predecessor of the activity at node \(j\). The duration of the activity at node \(i\) is recorded next to node \(i\). The figure below shows a network representation of the project information from the table above.

1. What is the total time required to complete the project?

If we add up the times required for all of the activities, we get 28 weeks. However, this is not the best answer to the question, since some of the activities can be performed at the same time. Instead, to determine the total time required, we want to consider the length of each path through the network. A path through the network is a route made up of nodes and arcs that traverses the network from the start node to the finish node. The length of a path is the sum of the durations of the activities on the nodes along the path. In this simple example, there are two paths through the network:

start -> A -> B -> F -> G -> finish, with a length of 13
start -> A -> C -> D -> E -> F -> G -> finish, with a length of 25

Since every path must be completed, the project duration can be no shorter than the longest path through the network. Therefore, the total time required to complete the project equals the length of the longest path through the network -- and this longest path is called the critical path. In the example, the total time to complete the project should be 25 weeks if no delays occur.

2. What are the critical bottleneck activities?

The critical bottleneck activities are the activities that are "critical" to completing the project on time; a delay in a critical bottleneck activity will delay the project completion time. Therefore, the activities on the critical path are the critical bottleneck activities. In our example, the critical bottleneck activities are A, C, D, E, F, and G; the project should be managed to avoid delays in any of these activities.
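For a project this small, the path enumeration described above can be carried out directly. Here is a minimal Python sketch; the activity data comes from the table above, while the successor encoding and function names are our own:

```python
# Durations from the example table
dur = {'A': 2, 'B': 3, 'C': 3, 'D': 4, 'E': 8, 'F': 6, 'G': 2}
# Successor lists derived from the immediate-predecessor column
succ = {'A': ['B', 'C'], 'B': ['F'], 'C': ['D'], 'D': ['E'],
        'E': ['F'], 'F': ['G'], 'G': []}

def paths(node, prefix=()):
    """Yield every path from `node` to the finish of the network."""
    prefix = prefix + (node,)
    if not succ[node]:
        yield prefix
    for nxt in succ[node]:
        yield from paths(nxt, prefix)

all_paths = list(paths('A'))
lengths = [sum(dur[a] for a in p) for p in all_paths]
critical = max(all_paths, key=lambda p: sum(dur[a] for a in p))
print(sorted(lengths))  # the two path lengths, 13 and 25
print(critical)         # the critical path: A, C, D, E, F, G
```

The longest of the enumerated paths is the critical path, and its length (25 weeks) is the project duration. As the next paragraph notes, this brute-force enumeration only scales to small networks.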
There is also a positive aspect to knowing the critical bottleneck activities: a reduction in the duration of a critical bottleneck activity may reduce the time required to complete the project. For small projects, it is possible to enumerate all of the paths and identify the critical path. For larger, more complex projects with many dependencies, a more efficient procedure is required. One approach is to formulate the project scheduling problem as a linear programming problem; due to the underlying network structure, even large problems can be solved efficiently. Thus, given a list of activities required to complete a project, along with the duration of each activity and the dependencies between activities, the objective is to find a schedule that minimizes the total time required to complete the project.

Sets
\(A\) = set of activities, e.g., \(A = \{A, B, C, D, E, F, G\}\)
\(P\) = set of predecessor pairs; element \((i,j)\) means that activity \(i\) is an immediate predecessor of activity \(j\)

Parameters
\(d_i\) = duration of activity \(i\), \(\forall i \in A\)

Decision Variables
\(T\) = total time required to complete the project
\(s_i\) = start time of activity \(i\), \(\forall i \in A\)

Objective Function
minimize \(T\)

Constraints
The total time to complete the project must be greater than or equal to the start time plus the duration of each activity: \(s_i + d_i \leq T, \forall i \in A\)
For each predecessor pair \((i,j)\), the start time of activity \(j\) must be greater than or equal to the start time of activity \(i\) plus the duration of activity \(i\): \(s_i + d_i \leq s_j, \forall (i,j) \in P\)
Start times are nonnegative: \(s_i \geq 0, \forall i \in A\)

To solve this linear programming problem, we can use one of the NEOS Server solvers in the Linear Programming (LP) category. Each LP solver has one or more input formats that it accepts. Here is a GAMS model for the example shown above.
set activity / A*G /;
alias (activity,i,j);
set prec(i,j) / A.(B,C), (B,E).F, C.D, D.E, F.G /;
parameter duration(activity) / A 2, B 3, C 3, D 4, E 8, F 6, G 2 /;

free variable time;
nonnegative variable s(i);

equations
    ctime(i)
    ptime(i,j) ;

ctime(i)..         time =g= s(i) + duration(i);
ptime(prec(i,j)).. s(i) + duration(i) =l= s(j);

model schedule /all/;
solve schedule using lp minimizing time;
display time.l, s.l;

If we submit this LP model to MOSEK, we obtain a solution with an objective function value of 25, meaning that the total time to complete the project is 25 weeks (with no delays). From the values of the \(s(i)\) variables, we obtain a start time for each activity: \(B\) starts at 2, \(C\) starts at 2, \(D\) starts at 5, \(E\) starts at 9, \(F\) starts at 17, and \(G\) starts at 23. What we do not easily obtain from the LP solution is the sequence of nodes on the critical path.

The real benefit of using linear programming for project scheduling comes when we need to consider additional constraints or when we want to consider time-cost trade-offs for individual activities. In some cases, it may be possible to "crash" an activity; that is, the duration of the activity can be reduced by completing it using more costly measures. For example, the duration of an activity might be reduced by working overtime, hiring additional workers, using special equipment, or using more expensive materials. In the extended version of the problem, we assume that there is a set of (duration, cost) pairs associated with each activity. (For example, activity \(x\) has a normal duration of 6 weeks at a cost of $100 or a reduced duration of 4 weeks at a cost of $150.) The objective of the time-cost trade-off problem is to determine how much (if any) to crash each activity in order to reduce the total time to complete the project to a pre-specified value. Chapter 22 of Hillier and Lieberman's textbook provides a detailed example of this method.

Hillier, F. S. and Lieberman, G. J. 2005.
Chapter 22: Project Management with PERT/CPM. Introduction to Operations Research, 8th ed. McGraw-Hill, Boston, MA.

Rutherford, T. F. 2011. Lecture Notes: Linear Programming Formulation Exercises, October 11, 2011. Downloaded from www.cepe.ethz.ch/education/OperationsResearch/formulation.pdf.
I have to admit, I am not familiar with the use of $\mathscr{O}\left[\lambda^{n}\right]$ notation. Apparently it doesn't mean what I thought. In John L. Friedman's lecture notes on Lie derivatives, forms, densities, and integration, he writes:

A vector field $\mathbf{w}$ is Lie-derived by $\mathbf{v}$ if, for small $\lambda$, $\lambda\mathbf{w}$ is dragged along by the fluid flow. To make this precise, we are requiring that the equation [eq. (3)] $$\mathbf{r}\left(t\right)+\lambda\mathbf{w}\left(\mathbf{r}\left(t\right)\right)=\overline{\mathbf{r}}\left(t\right),$$ be satisfied to $\mathscr{O}\left(\lambda\right)$.

At first I thought I knew exactly what that meant. Then I tried to put it into more rigorous terms and realized that I don't really know what it means in this context. Since equation 3 is linear in $\lambda$, I expect it to be accurate to $\mathscr{O}\left(\lambda^{2}\right)$. What does it mean to say equation 3 is "satisfied to $\mathscr{O}\left(\lambda\right)$", or to use $\mathscr{O}\left(\lambda^{2}\right)$ in equation 4 $$\mathbf{v}\left(\mathbf{r}\right)+\lambda\mathbf{v}\cdot\nabla\mathbf{w}\left(\mathbf{r}\right)=\mathbf{v}\left(\overline{\mathbf{r}}\right)=\mathbf{v}\left[\mathbf{r}+\lambda\mathbf{w}\left(\mathbf{r}\right)\right]$$ $$=\mathbf{v}\left(\mathbf{r}\right)+\lambda\mathbf{w}\cdot\nabla\mathbf{v}\left(\mathbf{r}\right)+\mathscr{O}\left(\lambda^{2}\right)?$$ I know it means "don't worry about error terms; they will vanish when the limit is taken." But that's not very satisfying. How is $\mathscr{O}\left(\lambda^{2}\right)$ stated in terms of a Taylor polynomial with a remainder?
Edit to add: If $$\mathbf{r}\left(t\right)+\lambda\mathbf{w}\left(\mathbf{r}\left(t\right)\right)=\overline{\mathbf{r}}\left(t\right),$$ satisfied to $\mathscr{O}\left(\lambda\right),$ means $$\mathbf{r}\left(t\right)+\lambda\mathbf{w}\left(\mathbf{r}\left(t\right)\right)=\overline{\mathbf{r}}\left(t\right)+\mathscr{O}\left(\lambda\right),$$ then $\mathscr{O}\left(\lambda\right)=\mathbf{k}\lambda,$ where $\mathbf{k}\ne\vec{0}$ is a constant. So, $$\mathbf{w}\left(\mathbf{r}\left(t\right)\right)=\lim_{\lambda\to0}\frac{\overline{\mathbf{r}}-\mathbf{r}+\mathscr{O}\left(\lambda\right)}{\lambda}=\frac{d\mathbf{r}}{d\lambda}+\mathbf{k}.$$ Which is pretty clearly not what is intended. If it means $$\overline{\mathbf{r}}\left(t\right)-\mathbf{r}\left(t\right)=\mathscr{O}\left(\lambda\right),$$ then $\mathscr{O}\left(\lambda\right)=\mathbf{w}\left(\mathbf{r}\left(t\right)\right).$ Which makes sense. But that implies $$\mathbf{r}\left(t\right)+\lambda\mathbf{w}\left(\mathbf{r}\left(t\right)\right)+\mathscr{O}\left(\lambda^{2}\right)=\overline{\mathbf{r}}\left(t\right).$$ That is what has me confused. Are equation 3 satisfied to $\mathscr{O}\left(\lambda\right)$ and the last statement above equivalent?
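For reference, the definition I am working from: $g\left(\lambda\right)=\mathscr{O}\left(\lambda^{k}\right)$ as $\lambda\to0$ means there exist constants $C,\lambda_{0}>0$ such that $\left|g\left(\lambda\right)\right|\le C\left|\lambda\right|^{k}$ whenever $\left|\lambda\right|<\lambda_{0}$. In Taylor-with-remainder terms, for a smooth $f$,
$$f\left(\lambda\right)=f\left(0\right)+\lambda f'\left(0\right)+R\left(\lambda\right),\qquad\left|R\left(\lambda\right)\right|\le\tfrac{1}{2}\left(\sup\left|f''\right|\right)\lambda^{2},$$
so the remainder satisfies $R\left(\lambda\right)=\mathscr{O}\left(\lambda^{2}\right)$. If this is not the definition intended in the notes, that may be the source of my confusion.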
The most important thing to understand about planetary orbits is that the orbital energy -- the sum of kinetic and potential energy -- is a constant. This means orbits in a vacuum require no energy source and can continue perpetually unless acted on by an outside force, consistent with Newton's First Law. It also means that an elliptical orbit, an orbit that changes its distance from the parent body as well as its velocity, represents a constant exchange between kinetic and potential energy, in a way consistent with the requirement for energy conservation. First we will show and explain the equations for kinetic and potential orbital energy.

(1) $ \displaystyle e_k = \frac{mv^2}{2}$

Where:
$e_k$ = kinetic energy, joules
$m$ = mass of the moving body, kilograms
$v$ = velocity, m/s

(2) $ \displaystyle e_p = - G \frac{m_1 m_2}{r}$

Where:
$e_p$ = potential orbital energy, joules
$G$ = universal gravitational constant, described above
$m_1$ and $m_2$ = masses separated by distance $r$, kilograms
$r$ = radius (distance) between $m_1$ and $m_2$, meters

How can energy be negative?

(3) $ \displaystyle e_p = \int G \frac{m_1 m_2}{r^2} dr = - G \frac{m_1 m_2}{r} $

This shows that the negative sign for energy arises naturally in the mathematics, but it also follows from the fact that moving toward the parent body should decrease the potential gravitational energy acquired while moving away, and this relationship can only be expressed meaningfully if gravitational potential energy is seen as negative. This aspect of gravitational potential energy has important cosmological implications (see below).
The total orbital energy $e_t$ is the sum of the kinetic energy $e_k$ and the potential orbital energy $e_p$:

(4) $ \displaystyle e_t = e_k + e_p = - G \frac{m_1 m_2}{r} + \frac{m_{2}v^2}{2} = \frac{m_{2} r v^{2} - 2 \, G m_{1} m_{2}}{2 \, r} $

Where:
$e_t$ = total orbital energy, joules
$e_k$ = kinetic energy, joules
$e_p$ = potential energy, joules
$v$ = velocity of orbiting body, m/s
$m_1$ = mass of parent body, kilograms
$m_2$ = mass of orbiting body, kilograms
$r$ = radius (distance) between $m_1$ and $m_2$, meters

Again, for an orbit with no forces apart from gravity, the total orbital energy $e_t$ is a constant. For an elliptical orbit this means the ratio of kinetic to potential energy is constantly changing. As the orbiting body approaches the parent, kinetic energy increases (higher velocity) and potential energy declines. As the orbiting body moves away, the reverse happens.

The Zero-Energy Solution

Because kinetic and potential orbital energies have opposite signs, equation (4) has a zero-energy solution. If the kinetic energy in an orbit is modest, as in a circular orbit or any elliptical orbit, the negative potential energy exceeds the positive kinetic energy. But if the kinetic energy has the value associated with "escape velocity" (see below), the total orbital energy is zero. This solution plays a part in cosmology, in particular the details of the Big Bang. For more on this topic, click here.

Next, we will show some equations derived from those above.

(5) $ \displaystyle v_0 = \sqrt{\frac{GM}{r}} $

Where:
$v_0$ = orbital velocity required for a circular orbit, m/s
$G$ = universal gravitational constant described above
$M$ = mass of the parent body, kilograms
$r$ = distance to the center of the parent body, meters

Notice that this equation doesn't include a mass term for the orbiting body. This means the equation is only reasonably accurate in a case where the orbiting body's mass is much less than that of the parent. Also, real-world natural orbits are rarely circular.
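Equations (4) and (5) can be checked numerically. A short Python sketch; the constants are our own assumed Earth-like values (roughly a 400 km orbit), not figures from the text:

```python
import math

# Assumed Earth-like values (our assumptions, not from the text)
GM = 3.986e14   # G * M for the parent body, m^3/s^2
r = 6.771e6     # orbital radius, m (about 400 km above Earth's surface)
m2 = 1000.0     # orbiting body's mass, kg (arbitrary)

v0 = math.sqrt(GM / r)                # equation (5): circular orbital velocity
e_t = -GM * m2 / r + m2 * v0**2 / 2   # equation (4): total orbital energy

print(round(v0))  # roughly 7.7 km/s

# For a circular orbit the total energy is negative, and equation (4)
# with v = v0 simplifies to e_t = -G*m1*m2 / (2r)
```

Note that $e_t < 0$ here, consistent with the zero-energy discussion above: a circular orbit's kinetic energy is only half the magnitude of its potential energy.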
An artificial example of a circular orbit is a geostationary satellite, which must have a circular orbit to fulfill its purpose (see below for more on this topic).

(6) $ \displaystyle v_e = \sqrt{\frac{2GM}{r}} $

Where:
$v_e$ = minimum velocity required to escape from orbit, m/s
$G$ = universal gravitational constant described above
$M$ = mass of the parent body, kilograms
$r$ = distance to the center of the parent body, meters

Escape velocity has a number of special properties. First, in the escape velocity profile, even though the body never returns, the escaping body's velocity constantly decreases, reaching zero at infinity (the asymptotic result). This means escape velocity has zero net energy ($e_k + e_p = 0$). At any velocity less than escape velocity, the orbiting body has negative net energy ($e_k + e_p < 0$), so it will eventually return. At any velocity greater than escape velocity, the orbiting body escapes with positive net energy ($e_k + e_p > 0$), so it will still have positive energy at an infinite distance. Again, escape velocity plays a part in cosmological theory. These two related equations give an orbital time for a radius argument and the reverse, for circular orbits.
Time for Radius:

(7) $ \displaystyle t_0 = \frac{2 \pi r^{(\frac{3}{2})}}{\sqrt{GM}} $

Where:
$t_0$ = orbital period, seconds
$r$ = radius from the center of the mass being orbited to the orbital height, meters
$G$ = universal gravitational constant described above
$M$ = mass of the body being orbited, kilograms

Radius for Time:

(8) $ \displaystyle r = \frac{(2GM)^{\left(\frac{1}{3}\right)} t_{0}^{\left(\frac{2}{3}\right)}}{2 \, \pi^{\left(\frac{2}{3}\right)}} $

Where:
$r$ = radius from the center of the mass being orbited to the orbital height, meters
$t_0$ = orbital period, seconds
$G$ = universal gravitational constant described above
$M$ = mass of the body being orbited, kilograms

These equations can be used to compute the required altitude for geostationary satellites, but remember these points: The radius arguments are with respect to the center of the orbited body, not the surface. To convert a radius result to an altitude above the surface, subtract the planet's radius from the result. The orbital period for a geostationary satellite must be expressed in sidereal time, to account for the fact that a 24-hour Earth day is measured with respect to the ever-changing direction of the sun, while the true rotation rate of the Earth is measured with respect to the background stars and is therefore not quite equal to 24 hours (it is 23 hours, 56 minutes and 4.091 seconds, more or less).

Analysis by Energy

It's possible to use kinetic equations to find the velocity required to get to a particular altitude, but in the general case, with sufficient altitude that the gravitational force changes, such a solution requires numerical methods. Let's say we want to know what the departure velocity should be for a ballistic flight to a given altitude h.
Neglecting air resistance, we can write relatively simple differential equation terms to solve it:

p(0) = r, the planet's radius
p'(0) = v, the required velocity to achieve height h
p''(t) = -GM/p(t)^2, the changing gravitational acceleration

As it turns out, if we assume that the gravitational acceleration changes enough to influence the outcome, this differential equation is insoluble in closed form -- it must be processed numerically. But there is a much simpler way to solve such a problem: compute the departure energy, compute the energy at altitude h, write an equation that compares the two, and solve for the departure velocity v. We will solve two forms. First, a ballistic flight to an altitude h with no velocity remaining:

(9) $ \displaystyle -\frac{GM}{r} + \frac{v^2}{2} = -\frac{GM}{r+h} $

The left-hand side of equation (9) is the sum of potential and kinetic energy at departure. The right-hand side is the potential energy at altitude h -- the velocity there is zero. By solving for v we get:

(10) $ \displaystyle v = \sqrt{\frac{2 \, G M h}{h r + r^{2}}} $

And if we solve for h we get:

(11) $ \displaystyle h = -\frac{r^{2} v^{2}}{r v^{2} - 2 \, G M} $

Our second example is a ballistic flight that ends in a circular orbit, which means there is substantial velocity at altitude, and therefore kinetic as well as potential energy:

(12) $ \displaystyle -\frac{GM}{r} + \frac{v^2}{2} = -\frac{G M}{2 \, {\left(h + r\right)}} $

As before, the left side represents the potential and kinetic energy at departure, and the right side represents the potential energy plus the kinetic energy of a circular orbit at altitude r+h.
The solution for v:

(13) $ \displaystyle v = \sqrt{\frac{{\left(2 \, h + r\right)} G M}{h r + r^{2}}} $

And the solution for h:

(14) $ \displaystyle h = -\frac{r \, {\left(r v^{2} - G M\right)}}{r v^{2} - 2 \, G M} $
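Equations (6), (9), and (10) can be sanity-checked numerically by confirming that the departure energy equals the energy at the apex. A short Python sketch; the constants are assumed Earth-like values of our own choosing, not figures from the text:

```python
import math

GM = 3.986e14  # G * M for the parent body, m^3/s^2 (assumed Earth value)
r = 6.371e6    # planet radius, m (assumed)
h = 1.0e6      # target altitude, m (arbitrary choice)

# Equation (6): escape velocity has exactly zero net energy, e_k + e_p = 0
ve = math.sqrt(2 * GM / r)
net = ve**2 / 2 - GM / r   # per unit mass; should be (numerically) zero

# Equation (10): departure velocity for a ballistic flight that reaches
# altitude h with no velocity remaining
v = math.sqrt(2 * GM * h / (h * r + r**2))

# Energy balance of equation (9), per unit mass: both sides should agree
departure = -GM / r + v**2 / 2   # energy at departure
apex = -GM / (r + h)             # potential energy at the apex, zero velocity
```

As expected, the ballistic departure velocity for any finite altitude comes out below escape velocity, consistent with the negative-net-energy discussion above.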
Currently, I am self-studying Intro to Algorithms (CLRS) and there is one particular method they outline in the book to solve recurrence relations. The following method can be illustrated with this example. Suppose we have the recurrence $$T(n) = 2T(\sqrt n) + \log n$$ Initially they make the substitution m = lg(n), and then plug it back in to the recurrence and get: $$T(2^m) = 2T(2^{\frac{m}{2}}) + m$$ Up to this point I understand perfectly. This next step is the one that's confusing to me. They now "rename" the recurrence $S(m)$ and let $S(m) = T(2^m)$, which apparently produces $$S(m) = 2S(m/2) + m$$ For some reason it's not clear to me why this renaming works, and it just seems like cheating. Can anyone explain this better?
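One way to see that the renaming is sound: $S$ is just a new name for the composite function $m \mapsto T(2^m)$, so $S(m) = T(2^m)$ and $S(m/2) = T(2^{m/2})$ hold by definition, and substituting those names into $T(2^m) = 2T(2^{m/2}) + m$ yields $S(m) = 2S(m/2) + m$ verbatim. A quick numeric check in Python (the base case $T(n) = 1$ for $n \le 2$ is an arbitrary choice of ours):

```python
import math

def T(n):
    # The recurrence T(n) = 2 T(sqrt(n)) + lg n, with an assumed base case
    if n <= 2:
        return 1.0
    return 2 * T(round(math.sqrt(n))) + math.log2(n)

def S(m):
    # The renaming: S(m) is defined as T(2^m), nothing more
    return T(2 ** m)

# For m a power of two (so every intermediate square root is exact),
# S satisfies the renamed recurrence S(m) = 2 S(m/2) + m
for k in range(1, 5):
    m = 2 ** k
    print(m, S(m), 2 * S(m // 2) + m)
```

Nothing about $T$ changes; the substitution only relabels the argument so the recurrence takes the familiar mergesort shape, whose solution $S(m) = O(m \lg m)$ translates back to $T(n) = O(\lg n \, \lg \lg n)$.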
If the best case for insertion sort and bubble sort is $O(n)$, then how is the lower bound for any comparison sort $\Omega(n\log n)$? I mean, $O(n)$ is obviously smaller than $\Omega(n\log n)$. What am I missing here?

The lower bound of $\Omega(n \log n)$ is a lower bound for the worst-case behavior of a sorting method. Both insertion sort and bubble sort have $O(n^2)$ worst-case complexity. Given an algorithm $A$, we often consider its worst-case time complexity $T(A)$ over all possible inputs $I$: $$T(A) = \max_{I} T(A,I)$$ where $T(A,I)$ is the time for the algorithm $A$ executed on the particular input $I$. For a problem $P$, we may have several algorithms $A$ for it. The time complexity (or lower bound) of a problem $T(P)$ is defined with respect to its best algorithm: $$T(P) = \min_{A} T(A) = \min_{A} \max_{I} T(A,I)$$ For the sorting problem, the (comparison-based) lower bound is $\Omega(n \lg n)$. Both $\texttt{Insertion-Sort}$ and $\texttt{Bubble-Sort}$ have worst-case time complexity of $\Theta(n^2)$, and thus do not match the lower bound. In contrast, both $\texttt{Merge-Sort}$ and $\texttt{Heap-Sort}$ have worst-case time complexity of $\Theta(n \lg n)$, matching the lower bound.
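The gap between best-case and worst-case behavior is easy to see empirically. Here is a Python sketch that counts key comparisons in insertion sort (the instrumentation is our own):

```python
def insertion_sort_comparisons(a):
    """Sort a copy of `a`, returning the number of key comparisons made."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]       # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 100
best = insertion_sort_comparisons(range(n))           # already sorted: n - 1
worst = insertion_sort_comparisons(range(n, 0, -1))   # reversed: n(n-1)/2
print(best, worst)  # 99 4950
```

The same algorithm uses $n - 1$ comparisons on sorted input and $n(n-1)/2$ on reversed input, which is exactly why the $\Omega(n \log n)$ bound, a statement about the worst case, is not contradicted by an $O(n)$ best case.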
Astrophysics > Solar and Stellar Astrophysics

Title: Asteroseismic constraints on the cosmic-time variation of the gravitational constant from an ancient main-sequence star (Submitted on 13 Sep 2019)

Abstract: We investigate the variation of the gravitational constant $G$ over the history of the Universe by modeling the effects on the evolution and asteroseismology of the low-mass star KIC 7970740, which is one of the oldest (~11 Gyr) and best-observed solar-like oscillators in the Galaxy. From these data we find $\dot{G}/G = (2.1 \pm 2.9) \times 10^{-12}~\text{yr}^{-1}$, that is, no evidence for any variation in $G$. We also find a Bayesian asteroseismic estimate of the age of the Universe as well as astrophysical S-factors for five nuclear reactions obtained through a 12-dimensional stellar evolution Markov chain Monte Carlo simulation.

Submission history: From: Earl Bellinger. [v1] Fri, 13 Sep 2019 18:00:01 GMT (408kb,D)
An alternative to considering the law of the excluded middle as an axiom is to consider it as a definition. You can take the definition of a Boolean value to be that it is either true or false; then the mechanics of the logic become simply to determine whether we can prove that a value is a Boolean, at which point you can deduce the excluded middle.

But you asked for an example. Here is one I came across once. There is a duality between sets and functions. For example, one person may write $S \cup T$ and another may write $s(x) \lor t(x)$. It is a worthwhile exercise to convert expressions between their set form and their function form. So let's do that with Russell's self-contradictory set. Set form: $$P \equiv \{x \mid x \not \in x\}$$ Converted to boolean logic and functions, this becomes: $$p \equiv (\lambda q)\, \lnot q(q)$$ Russell's paradox comes from considering $P \in P$. The function form equivalent is to consider $p(p)$: $$p(p) = \bigg((\lambda q)\, \lnot q(q)\bigg)(p) = \lnot p(p)$$ Is $p(p) = \lnot p(p)$ paradoxical? No, because we haven't defined that $(\forall x)\, p(x)$ must be a boolean. We haven't assumed the excluded middle. On the other hand, some logics do assume $(\forall x,y)\,x\in y$ is a boolean, which does assume the excluded middle (and make a heroic attempt to limit set comprehension), and which does result in the definition of $P$ being paradoxical.

There are trade-offs in the design of a logic. If you only use first-order logic, you can assume the excluded middle all day long. If you want to use higher-order logic and partial logics (logics where the domain of functions isn't the universe), then you give up the excluded middle.

Another way of looking at this question (there seem to be many) is in terms of decidability. Gödel established that in every sufficiently expressive axiom/inference system, either there is a grammatically valid statement that is undecidable or the logic is self-contradictory.
Now what happens if we assume every undecidable statement is either true or false? Let $D$ be the set of undecidable statements and let $A$ be the set of all possible assignments of true or false to them: $$\forall d \in D ~~\bigg(d \lor \lnot d\bigg)$$ $$\exists c \in A ~~\forall d \in D ~~ \bigg(d = c_d\bigg)$$ Still being fairly informal, the law of the excluded middle implies that there is at least one assignment to the undecidable statements. But Gödel established that at least one undecidable statement must exist, one that is neither provable nor refutable, if the axiom set is to be consistent. I'm fairly certain that this inevitably leads to a paradox, although given all the encoding associated with the Gödel sentence it might be very complicated and roundabout. Either way, it is just easier (from a consistency point of view) not to assume that all grammatically correct propositions are true or false, even if it does make some proofs harder or nonexistent.
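Incidentally, the function form $p \equiv (\lambda q)\,\lnot q(q)$ from the Russell example can be written directly in an untyped language, where the failure of the excluded middle shows up as non-termination: asking whether $p(p)$ is true or false never yields a boolean at all. A Python sketch (Python's finite recursion limit stands in for genuine divergence):

```python
# Russell's "set" in function form: p = (lambda q) not q(q)
p = lambda q: not q(q)

# Evaluating p(p) unfolds to not p(p), which unfolds to not not p(p), ...
# It never returns a boolean; Python cuts the regress off with RecursionError.
try:
    p(p)
    outcome = "returned a boolean"
except RecursionError:
    outcome = "diverged"
print(outcome)  # diverged
```

This is the operational face of the point above: without assuming that $p(p)$ is a Boolean, $p(p) = \lnot p(p)$ describes a computation with no truth value rather than a contradiction.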
Let $A$ be a finitely generated $\mathbb{Z}$-algebra. Is $\operatorname{Pic}(A)$ finitely generated (as an abelian group)? Thoughts: We may assume that $A$ is reduced since $\operatorname{Pic}(A) = \operatorname{Pic}(A_{\mathrm{red}})$. If $A$ is reduced, then the group of units $A^{\times}$ is a finitely generated abelian group, see e.g. [1, Appendix 1, no. 3] or [4, Théorème 1] (which I learned about through this question). The case $A$ is normal is proved in [3, Chapter 2, Theorem 7.6]. The following argument is from [2, Lemma 9.6]: Let $B$ be the normalization of $A$, set $X := \operatorname{Spec} A$ and $Y := \operatorname{Spec} B$ and let $\pi : Y \to X$ be the normalization morphism. We have the Leray spectral sequence $$ \mathrm{E}_{2}^{p,q} = \mathrm{H}^{p}(X,\mathbf{R}^{q}\pi_{\ast}\mathbb{G}_{m,Y}) \implies \mathrm{H}^{p+q}(Y,\mathbb{G}_{m,Y}) $$ with differentials $\mathrm{E}_{2}^{p,q} \to \mathrm{E}_{2}^{p+2,q-1}$. Since $\pi$ is a finite morphism (e.g. since $\mathbb{Z}$ is Nagata and [5, 030C]), every invertible sheaf on $Y$ can be trivialized on an open cover obtained as the preimage of an open cover of $X$ (e.g. [5, 0BUT]). Hence $\mathbf{R}^{1}\pi_{\ast}\mathbb{G}_{m,Y} = 0$, so we have $\operatorname{Pic}(Y) \simeq \mathrm{H}^{1}(X,\pi_{\ast}\mathbb{G}_{m,Y})$ from the Leray spectral sequence. Set $Q := \pi_{\ast}\mathbb{G}_{m,Y}/\mathbb{G}_{m,X}$; then the long exact sequence in cohomology associated to the sequence $1 \to \mathbb{G}_{m,X} \to \pi_{\ast}\mathbb{G}_{m,Y} \to Q \to 1$ gives an exact sequence $$ \Gamma(Y,\mathbb{G}_{m,Y}) \to \Gamma(X,Q) \stackrel{\partial}{\to} \operatorname{Pic}(X) \to \operatorname{Pic}(Y) $$ where the first and fourth terms are finitely generated. But what can I say about the sheaf $Q$? I know that it is $0$ on a dense open since $\pi$ is an isomorphism on a dense open (e.g. since $A$ is reduced, the regular locus is an open subset containing the generic points [5, 07R5]). 
I should also note that there is a Hartshorne exercise (II, Exercise 6.9) which relates the Picard group of a singular curve (over a field) to that of its normalization.

References:

Bass, Introduction to Some Methods of Algebraic K-Theory, Number 20 in CBMS Regional Conference Series in Mathematics. American Mathematical Society, 1974.
Jaffe, "Coherent functors, with application to torsion in the Picard group", Transactions of the American Mathematical Society, vol. 349, no. 2, 1997, pp. 481–527 link
Lang, Fundamentals of Diophantine Geometry, Springer-Verlag, 1983.
Samuel, "À propos du théorème des unités", Bulletin des Sciences Mathématiques, vol. 90, 1966, pp. 89–96.
Stacks Project link

Keywords: arithmetic scheme, Picard group, finite type $\mathbb{Z}$-algebra
I don't understand how the limits of integration should be defined when doing basic integrals of trig functions. It seems like an arbitrary decision, and I don't understand it. Here's the setup: For the field near a long straight wire carrying a current $I$, show that the Biot-Savart law gives the same result as Ampere's law. Now intuitively, for me at least, with the way that $\theta$ is defined, I would view the angle as becoming smaller as $y$ moves toward negative infinity. So the limits of integration make sense in that regard. But then the cosine doesn't make sense anymore. As $y$ becomes more negative, which corresponds to an angle between $0$ and $\pi/2$, the cosine should always be positive. But because $\cos = \text{adj}/\text{hyp}$, we have $\cos\theta = y/r$, and $y$ would be negative, even though the corresponding angle is between $0$ and $\pi/2$? I know I'm misunderstanding something fundamental; hopefully somebody can help me so I can move on. I've been struggling with this for so long because it's easy enough to arbitrarily assign limits to get the answer you're looking for, but I want to know the right way, and more importantly, why it's the right way.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
With $hash_n$ I mean a standard cryptographic hash like SHA-256, scaled up to produce an arbitrary output length $n$ on the same underlying principles. What is the time complexity class of the following problem? Given an $n \in \mathbb{N}$ and data $d$ with $length(d) = n$, determine whether $\exists p : ((length(p) \le n) \wedge (hash_n(p) = d))$. I find it really hard to find a class for it: it seems like a hard problem, and it is presumably not in $P$ (unless $P = NP$, or unless the hash function has a vulnerability beyond those already known; so SHA-256, which basically just prints the state of a state machine, can be used), but you can't use it to solve any of the classical NP-hard problems. Determining the space complexity is of course trivial: it's only $\mathcal{O}(n)$. This is just one example of a hash decision problem and is meant only as an example; I'm more interested in the general classification of hash decision problems. Edit: You may assume a perfect hash function instead of a commonly used cryptographic hash function: e.g. if you're more comfortable with that, if you don't want to rely on properties of the cryptographic hash functions commonly used in practice, or if what you want to show would be threatened by those properties.
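One thing that can be made concrete: the problem is in NP, because a preimage $p$ is a certificate checkable in time polynomial in $n$. A minimal sketch (Python, using SHA-256 as the stand-in for $hash_n$ with $n$ fixed at 32 bytes; this is my illustration of NP membership, not a classification of the problem):

```python
import hashlib

def verify(certificate: bytes, d: bytes) -> bool:
    """Polynomial-time verifier: accept iff the certificate is a
    short-enough preimage of d under the stand-in hash (SHA-256)."""
    n = len(d)  # here n = 32 bytes, the fixed SHA-256 output length
    return len(certificate) <= n and hashlib.sha256(certificate).digest() == d

# A 'yes' instance together with its certificate:
d_yes = hashlib.sha256(b"abc").digest()
```

The hard direction is of course finding the certificate; the verifier only witnesses membership in NP, not hardness.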
I am reading the book Markov Chains and Stochastic Stability by Meyn and Tweedie. They define Markov chains on a measurable state space $(E,\Sigma)$ (Chapter 3.4) on the space $\Omega = \prod_{i \in \mathbb{N}}E$, with a $\sigma$-algebra $\mathcal{A}$ which is the smallest $\sigma$-algebra containing all cylinder sets with only finitely many factors different from $E$, $$A_1 \times A_2 \times \dots A_n \times E \times E \times \dots$$ Then they define the Markov chain as a family of random variables $(X_n)_{n \in \mathbb{N}}$ where for $\omega=(x_n)_{n \in \mathbb{N}}\in \Omega$ they set $$X_n(\omega)=x_n .$$ Thus, all Markov chains are defined on the same set $\Omega$ and the random variables $(X_n)$ are also always the same. Now when they talk about a certain initial distribution $\mu$ and transition kernel $p(x,A)$, they associate a Markov chain to it by constructing a specific measure $\mathbb{P}_\mu$. Thus, by this definition, two Markov chains differ only in the probability measure on the probability space. My problem is that in the book they define the term $$ \mathcal{F}_n = \sigma(X_0,\dots,X_n) \subseteq \mathcal{B}(X^{n+1})$$ and they say: which is the smallest $\sigma$-field for which the random variables $\{X_0,\dots,X_n\}$ are measurable. In many cases $\mathcal{F}_n$ will coincide with $\mathcal{B}(X^{n+1})$, although this depends in particular on the initial measure $\mu$ chosen for a particular chain. How can $\mathcal{F}_n$ depend on the initial measure? The random variables are already defined as $X_n(\omega)=x_n$, and thus the measurability of $\{X_0,\dots,X_n\}$ depends only on $\Sigma$ and $\mathcal{A}$; where does the initial measure $\mu$ come into play? After seeing the answers, I think it is a good idea to provide my question with an example.
Let's consider the case where $E=\{1,2\}$ and $\Omega = E \times E$. Then the random variables $X_0$ and $X_1$ are already defined as above; in particular, $X_0$ is defined as $$ X_0 ((1,1))=X_0((1,2))=1$$ and $$X_0((2,1))=X_0((2,2))=2.$$ Now if $\mathbb{P}_\mu$ is a measure under which $X_0 = 1$ almost surely, then we must have $$ \mathbb{P}_\mu[\{(1,1),(1,2)\}]=1.$$ But this is completely independent of defining $\mathcal{F}_0$ (or $\mathcal{F}_n$). In this case we always have $$\mathcal{F}_0 = \{\{(1,1),(1,2)\},\{(2,1),(2,2)\},\Omega,\emptyset \} $$ which does not depend on $\mu$. It seems to me that the answers assume that $\mathbb{P}_\mu[\{(2,1),(2,2)\}]=0$ somehow implies that this set should not belong to $\mathcal{F}_0$, but I think this is not correct. Update:
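The point of the example can even be checked mechanically: $\sigma(X_0)$ is a purely set-theoretic object, computed without ever mentioning a measure. A small sketch (Python; the enumeration is mine):

```python
from itertools import product

E = [1, 2]
Omega = list(product(E, E))  # the sample space E x E

def X0(omega):
    return omega[0]

# sigma(X_0) is generated by the preimages X_0^{-1}({e}); for a
# two-point E, adding the empty set and Omega already closes it
# under complements and unions. No measure mu appears anywhere.
preimage = {e: frozenset(w for w in Omega if X0(w) == e) for e in E}
sigma_X0 = {frozenset(), frozenset(Omega), preimage[1], preimage[2]}
```

Running this reproduces exactly the four-element $\mathcal{F}_0$ written above, for every choice of $\mu$.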
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2} \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is. Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!... I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it. Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then, since $y=x$, by (1) we would have $y\sim x$, proving (2); (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial, but I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s. But I think that to prove the implication for transitivity, the inference rule MP seems to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
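For the statement at the outset, the proof really only uses reflexivity together with the substitution property of equality. A machine-checked sketch in Lean 4 (my formalization, one possible precise reading of the claim):

```lean
-- If r is reflexive, then x = y implies r x y.
-- `subst` is exactly the equality-elimination (substitution) step.
theorem eq_implies_rel {α : Type} (r : α → α → Prop)
    (hrefl : ∀ a, r a a) {x y : α} (h : x = y) : r x y := by
  subst h
  exact hrefl x
```

This also makes the dependence explicit: in a logic lacking this substitution rule for equality, the argument does not go through.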
@AlessandroCodenotti A precise formulation would help in this case, because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or on the FOL axioms (without equality axioms). This would allow one, in some cases, to define an "equality-like" relation for set theories in which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation of why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is of the same order (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$ I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdots, a_{n-1}$ to be zero, because by triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested in $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near $0$? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
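The triangle-inequality point can be illustrated numerically: if $|f(x)|\le K_1x^2$ and $|g(x)|\le K_2x^2$, then $|f(x)-g(x)|\le (K_1+K_2)x^2$, which is all the identity asserts. A throwaway check (Python, using the example functions from the chat plus bounds of my own choosing):

```python
def f(x):
    return 3 * x * x            # |f(x)| <= 3 x^2 everywhere

def g(x):
    return x * x - x ** 3       # |g(x)| <= 2 x^2 for |x| <= 1

def bounded_by(h, K, xs):
    """Check |h(x)| <= K * x^2 on the sample points xs."""
    return all(abs(h(x)) <= K * x * x for x in xs)

xs = [10.0 ** -k for k in range(1, 10)]  # x -> 0 from the right
```

The difference $f-g$ stays bounded by a constant times $x^2$; no cancellation is claimed, only that the bound's constant may grow.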
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content. A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: viewtopic.php?p=44724#p44724 Like this: [/url][/wiki][/url] [/wiki] [/url][/code] Many different combinations work. To reproduce, paste the above into a new post and click "preview". x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X I wonder if this works on other sites? (Remove/Change ) Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Related:[url=http://a.com/] [/url][/wiki] My signature gets quoted. This too. And my avatar gets moved down Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Saka wrote: Related: [ Code: Select all [wiki][url=http://a.com/][quote][wiki][url=http://a.com/]a[/url][/wiki][/quote][/url][/wiki] ] My signature gets quoted. This too. And my avatar gets moved down It appears to be possible to quote the entire page by repeating that several times. 
I guess it leaves <div> and <blockquote> elements open and then autofills the closing tags in the wrong places. Here, I'll fix it: [/wiki][url]conwaylife.com[/url] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: It appears I fixed @Saka's open <div>. x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce toroidalet Posts: 1019 Joined: August 7th, 2016, 1:48 pm Location: my computer Contact: A for awesome wrote:It appears I fixed @Saka's open <div>. what fixed it, exactly? "Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life." -Terry Pratchett A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: toroidalet wrote: A for awesome wrote:It appears I fixed @Saka's open <div>. what fixed it, exactly? The post before the one you quoted. The code was: Code: Select all [wiki][viewer]5[/viewer][/wiki][wiki][url]conwaylife.com[/url][/wiki] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Aidan, could you fix your ultra quote? Now you can't even see replies and the post reply button. Also, a few more ones with unique effects popped up.
Apart from Aidan Mode, there is now: -Saka Quote -Daniel Mode -Aidan Superquote We should write descriptions for these: -Aidan Mode: A combination of url, wiki, and code tags that leaves the page shattered in pieces. Future replies are large and centered, making the page look somewhat old-ish. -Saka Quote: A combination of a diluted Aidan Mode and quotes, leaves an open div and blockquote that quotes the entire message and signature. Enough can quote entire pages. -Daniel Mode: A derivative of Aidan Mode that adds code tags and pushes things around rather than scrambling them around. Pushes bottom bar to the side. Signature gets coded. -Aidan Superquote: The most lethal of all. The Aidan Superquote is a broken superquote made of lots of Saka Quotes, not normally allowed on the forums by software. Leaves the rest of the page white and quotes. Replies and post reply button become invisible. I would not like new users playing with this. I'll write articles on my userpage. Last edited by Saka on June 21st, 2017, 10:51 pm, edited 1 time in total. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA I actually laughed at the terminology. "IT'S TIME FOR MY ULTIMATE ATTACK. I, A FOR AWESOME, WILL NOW PRESENT: THE AIDAN SUPERQUOTE" shoots out lasers This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.) Current rule interest: B2ce3-ir4a5y/S2-c3-y fluffykitty Posts: 638 Joined: June 14th, 2014, 5:03 pm There's actually a bug like this on XKCD Forums. Something about custom tags and phpBB. Anyways, [/wiki] I like making rules Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Here's another one.
It pushes the avatar down all the way to the signature bar. Let's name it... -Fluffykitty Pusher Unless we know your real name that's going to be it lel. It's also interesting that it makes a code tag with purple text. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Probably the simplest ultra-page-breaker: Code: Select all [viewer][wiki][/viewer][viewer][/wiki][/viewer] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X A for awesome wrote: Probably the simplest ultra-page-breaker: Code: Select all [viewer][wiki][/viewer][viewer][/wiki][/viewer] Screenshot? New one yay. -Aidan Bomb: The smallest ultra-page breaker. Leaks into the bottom and pushes the pages button, post reply, and new replies to the side. Last edited by Saka on June 21st, 2017, 10:20 pm, edited 1 time in total. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA Someone should create a phpBB-based forum so we can experiment without mucking about with the forums. This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X The testing grounds have now become similar to actual military testing grounds. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) fluffykitty Posts: 638 Joined: June 14th, 2014, 5:03 pm We also have this thread. Also, is now officially the Fluffy Pusher. Also, it does bad things to the thread preview when posting. And now, another pagebreaker for you: Code: Select all [wiki][viewer][/wiki][viewer][/viewer][/viewer] Last edited by fluffykitty on June 22nd, 2017, 11:50 am, edited 1 time in total. I like making rules 83bismuth38 Posts: 453 Joined: March 2nd, 2017, 4:23 pm Location: Still sitting around in Sagittarius A... Contact: Oh my, I want to quote somebody and now I have to look in a different scrollbar to type this. Interesting thing, though, is that it's never impossible to fully hide the entire page -- it will always be in a nested scrollbar. EDIT: oh also, the thing above is kinda bad. not horrible though -- I'd put it at a 1/13 on the broken scale. Code: Select all x = 8, y = 10, rule = B3/S23 3b2o$3b2o$2b3o$4bobo$2obobobo$3bo2bo$2bobo2bo$2bo4bo$2bo4bo$2bo! No football of any dui mauris said that.
Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet Code: Select all [quote][wiki][viewer][/wiki][/viewer][wiki][/quote][/wiki] This doesn't do good things Edit: Code: Select all [wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url] Neither does this ^ What ever up there likely useless Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet Code: Select all [viewer][wiki][/viewer][wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url][viewer][/wiki][/viewer] I get about five different scroll bars when I preview this Edit: Code: Select all [viewer][wiki][quote][viewer][wiki][/viewer][/wiki][viewer][viewer][wiki][/viewer][/wiki][/quote][viewer][wiki][/viewer][/wiki][quote][viewer][wiki][/viewer][viewer][wiki][/viewer][/wiki][/wiki][/viewer][/quote][/viewer][/wiki] Makes a really long post and makes the rest of the thread large and centred Edit 2: Code: Select all [url][quote][quote][quote][wiki][/quote][viewer][/wiki][/quote][/viewer][/quote][viewer][/url][/viewer] Just don't do this (Sorry I'm having a lot of fun with this) ^ What ever up there likely useless cordership3 Posts: 127 Joined: August 23rd, 2016, 8:53 am Location: haha long boy Here's another small one: Code: Select all [url][wiki][viewer][/wiki][/url][/viewer] fg Moosey Posts: 2491 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board.
Contact: Code: Select all [wiki][color=#4000BF][quote][wiki]I eat food[/quote][/color][/wiki][code][wiki] [/code] Is a pinch broken Doesn’t this thread belong in the sandbox? I am a prolific creator of many rather pathetic googological functions My CA rules can be found here Also, the tree game Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?" 77topaz Posts: 1345 Joined: January 12th, 2018, 9:19 pm Well, it started out as a thread to document "Bugs & Errors" in the forum's code... Moosey Posts: 2491 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact: 77topaz wrote:Well, it started out as a thread to document "Bugs & Errors" in the forum's code... Now it's half an aidan mode testing grounds. Also, fluffykitty's messmaker: Code: Select all [viewer][wiki][*][/viewer][/*][/wiki][/quote] I am a prolific creator of many rather pathetic googological functions My CA rules can be found here Also, the tree game Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?" PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Don't worry about this post, it's just gonna push conversation to the next page so I can test something while actually being able to see it. (The testing grounds in the sandbox crashed golly) Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf
I'm looking for a Lorentz covariant expression of Noether charges and I found this article: https://arxiv.org/abs/hep-th/0701268, section II-A in particular. Consider specifically eq. (20-21), they claim: $$Q_\mu=\frac{1}{2}(\phi, P_\mu \rhd \phi),$$ $Q$ is the conserved charge, "$\rhd$" is just an "acting on" symbol and the inner product is defined by ($\varepsilon$ is the sign function): $$ (\phi_1, \phi_2)=\int d^4p \, \delta(p^2-m^2) \varepsilon(p_0)\tilde{\phi}_1^*(-p)\tilde{\phi}_2(p). $$ $\tilde{\phi}$ is the Fourier transform of $\phi$. Hence the Noether charge is \begin{equation} \label{1} \tag{1} Q_\mu = \frac{1}{2}\int d^4p \, \delta(p^2-m^2) \varepsilon(p_0)p_\mu\tilde{\phi}^*(-p)\tilde{\phi}(p). \end{equation} Now I'm struggling to get the well-known quantised expression in QFT: $$ Q_\mu= \int d^3p\,\, p_\mu \,\,a^\dagger (\vec{p}) \,a (\vec{p}), \,\,\,\,[a^\dagger (\vec{p}), \,a (\vec{q})]=-\delta^3 (\vec{p}-\vec{q}) $$ from (1), by plugging in the usual scalar Klein-Gordon field with creation and annihilation operators. If I'm not wrong, (1) in coordinate space looks like $$ \int d^4x \,\,d^4y \,\, \phi (x) \Delta (x-y)\left( -i\frac{\partial \phi(y)}{\partial y^\mu}\right)=Q_\mu, \tag{2} $$ where $\Delta$ is the usual commutator function $$\Delta (x-y)=\int d^4p \,\, \varepsilon(p_0) \delta (p^2-m^2)e^{-ip\cdot (x-y)}.$$ Just by substituting in (2) $$\phi(x)= \int \frac{d^3p}{\sqrt{2\omega}_\vec{p}}(a(\vec{p}) e^{-ip\cdot x}+a^\dagger(\vec{p}) e^{ip\cdot x} )$$ I don't seem to be getting the right answer. Maybe I'm doing some calculation wrong or have misinterpreted the article. Any help would be greatly appreciated! UPDATE For instance, writing $\phi(x)=\int d^4p \,\, a(p) \delta(p^2-m^2) e^{-ip \cdot x}$ gives $\tilde{\phi}(p)=a(p)\delta(p^2-m^2)$ and (1) becomes $$ Q_\mu= \frac{1}{2} \int d^4p \,\, \varepsilon(p_0) \, p_\mu a(-p)a(p)\,\,\delta(p^2-m^2)\delta(p^2-m^2)\delta(p^2-m^2). $$ Is this right? How to work out the three deltas?
I could use the identity $\delta(x)f(x)=\delta(x)f(0)$ with $f=\delta$ twice to get $$ \delta(p^2-m^2)\delta(p^2-m^2)\delta(p^2-m^2)=\delta(p^2-m^2)\delta(0)\delta(0)=\delta(p^2-m^2)\cdot \mathcal{S}, $$ where $\mathcal{S}$ is an (infinite) surface contribution which I currently fail to see how to cancel out. What am I missing?
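One textbook identity may help organize the computation (it is standard, not specific to the article): the mass-shell delta decomposes over the positive- and negative-energy sheets,

$$\delta(p^2-m^2)=\frac{1}{2\omega_{\vec p}}\Big[\delta(p^0-\omega_{\vec p})+\delta(p^0+\omega_{\vec p})\Big],\qquad \omega_{\vec p}=\sqrt{\vec p^{\,2}+m^2},$$

so that for any smooth $f$,

$$\int d^4p\,\delta(p^2-m^2)\,\varepsilon(p^0)\,f(p)=\int\frac{d^3p}{2\omega_{\vec p}}\Big[f(\omega_{\vec p},\vec p)-f(-\omega_{\vec p},\vec p)\Big].$$

With this measure, the on-shell data should carry the delta only once; an ansatz of the form $\tilde{\phi}(p)=a(p)\,\delta(p^2-m^2)$ inserts it a second time under an integral that already contains $\delta(p^2-m^2)$, which is where the ill-defined $\delta(0)$ factors come from.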
I am currently studying the Massive Thirring Model (MTM) with the Lagrangian $$ \mathcal{L} = \imath {\bar{\Psi}} (\gamma^\mu {\partial}_\mu - m_0 )\Psi - \frac{1}{2}g: \left( \bar{\Psi} \gamma_\mu \Psi \right)\left( \bar{\Psi} \gamma^\mu \Psi \right): . $$ and Hamiltonian $$ \int \mathrm{d}x\, \imath \Psi^\dagger \sigma_z \partial_x \Psi + m_0 \Psi^\dagger \Psi + 2g \Psi^\dagger_1 \Psi^\dagger_2 \Psi_2\Psi_1 $$ Due to the infinite set of conservation laws, particle production is said to be absent from this theory. However, why isn't it sufficient to show that particle production is absent if the number operator $$N=\int \mathrm{d}x\, \Psi^\dagger \Psi$$ commutes with the Hamiltonian? Also, by particle production being absent, is that just a statement that all Feynman diagrams with self-energy insertions evaluate to 0 but all other Feynman diagrams are possible?
In a nutshell A compressor blade works best in subsonic flow. Supersonic flow introduces additional drag sources which should be avoided if efficiency is important. Thus, the intake has to slow down the air to a Mach number between 0.4 and 0.5. Note that the high circumferential speed of a large fan blade will still mean that its tips work at around Mach 1.5, but the subsequent compressor stages will operate in subsonic conditions. A scramjet is possible with fuels with supersonic flame front speeds and rapid mixing of fuel and air. If the engine burned regular kerosene, the flame would be blown out like a candle if the internal airspeed were supersonic, and even if flame holders kept the flame in place, most combustion would take place only after the fuel-air mixture has left the engine, due to the slow mixing of kerosene and air. By using hydrogen, a stable combustion can be achieved even in supersonic flow. Due to the high flight speeds, compression is possible by a cascade of shocks, so no moving turbomachinery is needed in ramjets and scramjets. Background: Maximum heating of air All jets decelerate air in their intake in order to increase air pressure. This compression heats the air, and in order to achieve a combustion which produces thrust, this heating must be restricted. If air is heated above approx. 6,000 K, adding more energy will result in dissociation of the gas with little further heat increase. Since thrust is produced by expanding air through heating, burning air that enters the combustion process already at 6,000 K will not achieve much thrust. If the air enters the intake at Mach 6, it must not be decelerated below approx. Mach 2 to still achieve combustion with a meaningful temperature increase - that is why scramjets are used in hypersonic vehicles. Full disclosure: Oxygen starts to dissociate already between 2,000 and 4,000 K, depending on pressure, while Nitrogen dissociates mainly above 8,000 K.
The 6,000 K figure above is a rough compromise for the boundary where adding more energy starts to make less and less sense. Of course, even a 6,000 K flame temperature is a challenge for the materials of the combustion chamber, and ceramics with film cooling are mandatory. The equation for the stagnation temperature $T_0$ of air shows how important the flight speed $v$ is: $$T_0 = T_{\infty} + \frac{v^2}{2 c_p} = T_{\infty} \cdot \left(1 + \frac{\kappa - 1}{2}\cdot Ma^2 \right)$$ $T_{\infty}$ is the ambient temperature, $c_p$ the specific heat at constant pressure and $\kappa$ the ratio of specific heats. For two-atomic gases (like oxygen and nitrogen), $\kappa$ is 1.405. The temperature rise grows with the square of flight speed, so at Mach 2 the factor of heat increase over ambient is only about 1.8, while at Mach 6 this becomes about 8.3. Even at 220 K air temperature, the air will be heated to roughly 1,800 K when it is ideally compressed in case of a hypersonic vehicle traveling at Mach 6, and still higher flight Mach numbers push the stagnation temperature toward the dissociation boundary. Note that real compression processes will heat air even more due to friction. Compression with shocks Supersonic flow is slowed down by a pressure rise along the flow path. Since no "advance warning" of what is coming is possible, this pressure rise is sudden: Pressure jumps from a fixed value ahead to a higher, fixed value past the jump. This is called a shock. The energy for the pressure rise is taken from the kinetic energy of the air, so past the shock all other parameters (speed, density and temperature) take on new values. F-16 air intake (picture source) The simplest shock is a straight shock. This can be found at the face of pitot intakes like the one of the F-16 (see the picture above) in supersonic flight. More common are oblique shocks which are tilted according to the Mach number of the free flow.
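As a quick numeric check of the constant-$\kappa$ stagnation relation (ideal gas, $\kappa$ held fixed at 1.405; real high-temperature air deviates from this once dissociation sets in):

```python
def stagnation_factor(mach, kappa=1.405):
    """T0 / T_infinity from the adiabatic stagnation relation
    T0 = T_inf * (1 + (kappa - 1)/2 * Ma^2)."""
    return 1.0 + (kappa - 1.0) / 2.0 * mach ** 2

# At Mach 2 the factor is about 1.8; at Mach 6 about 8.3,
# so 220 K ambient air stagnates near 1,800 K at Mach 6.
```

The quadratic growth in Mach number is the whole story here: doubling the flight speed roughly quadruples the temperature rise over ambient.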
They happen on leading and trailing edges, fuselage noses and contour changes in general: Whenever something bends the airflow due to its displacement effect, the mechanism for this bending of the flowpath is an oblique shock. straight and oblique shock (own work) The index 1 denotes conditions ahead of the shock, and 2 those downstream of the shock. For weak straight shocks the product of the speed ahead of the shock $v_1$ and the speed past the shock $v_2$ equals the square of the speed of sound: $$v_1\cdot v_2 = a^2$$If $Ma_1 > 1$, then $Ma_2$ must be smaller than 1, so the flow is always decelerated to subsonic speed by a straight shock. The same equation works for the normal speed component $v_n$ ahead and past a weak oblique shock: $$v_{1n}\cdot v_{2n} = a^2$$Note that the tangential component $v_t$ is unaffected by the shock! Only the normal component is reduced. Now the speed $v_2$ is still supersonic, but lower than $v_1$, so a weak oblique shock produces a modest increase of pressure, density and temperature. The angle of the oblique shock wave is determined by the Mach number ahead of the shock. Supersonic intakes Weak shocks are desired, because they produce only small losses due to friction. Pitot intakes with their single, straight shocks work well at low supersonic speeds, but incur higher losses at higher Mach numbers. As a rule of thumb, a pitot intake is the best compromise at speeds below Mach 1.6. If the design airspeed is higher, more complex and heavier intakes are needed to decelerate the air efficiently. This is done by a sequence of weak, oblique shocks and by means of a wedge intake. The picture below shows the intake of the supersonic Concorde airliner: Concorde intake (picture source) Gradually increasing the angle of the wedge is causing a cascade of ever steeper, oblique shocks which gradually decelerate the air. The design goal is to position this cascade of shocks caused by the wedge on top such that they hit the lower intake lip. 
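The claim that a straight (normal) shock always leaves subsonic flow behind can be checked with the classic normal-shock relation for a calorically perfect gas (a standard gas-dynamics formula derived from the same conservation laws as the Prandtl relation; a sketch, not tied to any particular intake):

```python
def mach_after_normal_shock(m1, kappa=1.405):
    """Downstream Mach number M2 for a normal shock with upstream
    Mach number M1 >= 1 (calorically perfect gas):
    M2^2 = (1 + (k-1)/2 * M1^2) / (k * M1^2 - (k-1)/2)."""
    num = 1.0 + (kappa - 1.0) / 2.0 * m1 ** 2
    den = kappa * m1 ** 2 - (kappa - 1.0) / 2.0
    return (num / den) ** 0.5
```

For oblique shocks the same formula applies to the normal velocity component only, which is why the total downstream flow can remain supersonic.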
This is done by a moveable contour of the upper intake geometry and/or the lip. The goal is to achieve a uniform speed over the intake cross section and not to waste any of the compressed air to the flow around the intake. See the picture of the Eurofighter intake below for an example of a moveable intake lip (which admittedly is mainly for increasing the capture area at low speed and for avoiding flow separation even with a small intake lip radius). Eurofighter intake (picture source) Once the air has entered the intake, it is only mildly supersonic and can be further decelerated by a final, straight shock at the narrowest point of the intake. After that point, the intake contour is gradually widened, such that the air decelerates further without separation. To achieve this, a very even flow across the intake area is mandatory, and even the slight disturbance caused by the boundary layer of anything which is ahead of the intake must be avoided. This is achieved by a splitter plate which is clearly visible in the pictures of the F-16 and Eurofighter intakes. The splitter plate of the Eurofighter intake is even perforated to suck away the early boundary layer there. The deceleration of the intake flow results in a significant pressure rise: In case of the Concorde at Mach 2.02 cruise, the intake caused a pressure rise by a factor of more than 6, so the engine compressor had to add "only" a factor of 12, such that the pressure in the combustion chamber of the four Olympus 593 engines was 80 times that of the ambient pressure (admittedly, this ambient pressure was only 76 mbar in the cruise altitude of 18 km). This pressure increase means that a supersonic intake must be built like a pressure vessel, and the rectangular face of the intake must quickly be changed to a round cross section downstream to keep the mass of the intake structure low. 
Intakes at higher speed Going faster means the intake pressure rise increases with the square of flight speed: in the case of the SR-71 intake at Mach 3.2, the pressure at the engine face was already almost 40 times higher than the ambient pressure. Now it becomes clear that going faster than Mach 3.5 does away with the need for a turbocompressor: at these speeds a properly designed intake can achieve enough compression by itself for the combustion to produce enough thrust, and going above Mach 5 will require restraint in slowing down the intake flow in order to keep enough temperature margin for combustion, which means supersonic flow in the combustion chamber.
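As a sanity check on the pitot-vs-wedge claim, the standard normal-shock and theta-beta-M relations (γ = 1.4) can be evaluated numerically. The Mach 2 flight speed and 10° wedge angle below are illustrative choices, not any particular aircraft's geometry, and the bisection bracket assumes the weak-shock solution lies below 64°, which holds for these values:

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def normal_shock(m1, g=GAMMA):
    """Mach number and total-pressure ratio p02/p01 across a normal shock."""
    m2 = math.sqrt((1 + 0.5 * (g - 1) * m1**2) / (g * m1**2 - 0.5 * (g - 1)))
    pt = ((0.5 * (g + 1) * m1**2 / (1 + 0.5 * (g - 1) * m1**2)) ** (g / (g - 1))
          * ((g + 1) / (2 * g * m1**2 - (g - 1))) ** (1 / (g - 1)))
    return m2, pt

def theta_from_beta(m1, beta, g=GAMMA):
    """Flow deflection angle for a given shock angle (theta-beta-M relation)."""
    return math.atan(2 / math.tan(beta) * (m1**2 * math.sin(beta)**2 - 1)
                     / (m1**2 * (g + math.cos(2 * beta)) + 2))

def weak_oblique_shock(m1, theta):
    """Weak-branch shock angle by bisection; returns (M2, p02/p01)."""
    lo, hi = math.asin(1 / m1) + 1e-9, math.radians(64.0)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if theta_from_beta(m1, mid) < theta:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    mn2, pt = normal_shock(m1 * math.sin(beta))
    return mn2 / math.sin(beta - theta), pt

M = 2.0
_, pitot = normal_shock(M)                              # single straight shock
m_mid, pt1 = weak_oblique_shock(M, math.radians(10.0))  # 10 deg wedge shock
_, pt2 = normal_shock(m_mid)                            # final straight shock
wedge = pt1 * pt2
print(f"pitot intake recovery at Mach 2: {pitot:.3f}")
print(f"wedge intake recovery at Mach 2: {wedge:.3f}")
```

Even this crude two-shock wedge keeps noticeably more total pressure than the single straight shock, which is the whole point of the more complex intake.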
In his answer on cstheory.SE, Lev Reyzin directed me to Robert Schapire's thesis, which improves the bound to $O(n^2 + n\log m)$ membership queries in section 5.4.5. The number of counterexample queries remains unchanged. The algorithm Schapire uses differs in what it does after a counterexample query. Sketch of the improvement At the highest level, Schapire forces $(S,E,T)$ from Angluin's algorithm to satisfy the extra condition that for a closed $(S,E,T)$ and each $s_1, s_2 \in S$, if $s_1 \neq s_2$ then $row(s_1) \neq row(s_2)$. This guarantees that $|S| \leq n$ and also makes the consistency property of Angluin's algorithm trivial to satisfy. To ensure this, he has to handle the results of a counterexample differently. Given a counterexample $z$, Angluin simply added $z$ and all its prefixes to $S$. Schapire does something more subtle: he instead adds a single element $e$ to $E$. This new $e$ makes $(S,E,T)$ not closed in Angluin's sense, and the update to restore closure will introduce at least one new string into $S$ while keeping all rows distinct. The condition on $e$ is: $$\exists s, s' \in S, a \in \Sigma \quad \text{s.t.} \quad row(s) = row(s'a) \; \text{and} \; o(\delta(q_0,se)) \neq o(\delta(q_0,s'ae))$$ where $o$ is the output function, $q_0$ is the initial state, and $\delta$ the transition function of the true 'unknown' DFA. In other words, $e$ must serve as a witness to distinguish the future of $s$ from that of $s'a$. To extract this $e$ from $z$ we do a binary search for a split $z = p_ir_i$ with $0 \leq |p_i| = i < |z|$ such that the behavior of our conjectured machine differs based on one input character. In more detail, we let $s_i$ be the string corresponding to the state reached in our conjectured machine by following $p_i$. We use binary search (this is where the $\log m$ comes from) to find a $k$ such that $o(\delta(q_0,s_kr_k)) \neq o(\delta(q_0,s_{k+1}r_{k+1}))$.
In other words, $r_{k+1}$ distinguishes two states that our conjectured machine finds equivalent and thus satisfies the condition on $e$, so we add it to $E$.
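A toy sketch of this binary search (the target DFA, the parity hypothesis, and the access strings below are invented for illustration; the inner function `o(i)` plays the role of a membership query on $s_i r_i$):

```python
def true_dfa_accepts(w):
    """Stand-in for membership queries: the unknown DFA accepts strings
    whose number of 'a's is divisible by 3."""
    return w.count('a') % 3 == 0

# A (deliberately wrong) conjectured machine tracking parity of 'a's,
# with an access string for each of its two states.
access = {0: '', 1: 'a'}

def hyp_state(w):
    return w.count('a') % 2

def find_suffix(z):
    """Binary search for the suffix r_{k+1} that distinguishes two states
    the conjectured machine wrongly merges."""
    def o(i):
        # membership query on s_i r_i: access string of the hypothesis
        # state reached on the prefix p_i, followed by the remaining suffix
        return true_dfa_accepts(access[hyp_state(z[:i])] + z[i:])
    lo, hi = 0, len(z)          # invariant: o(lo) == o(0) != o(len(z)) == o(hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if o(mid) == o(lo):
            lo = mid
        else:
            hi = mid
    return z[hi:]               # the new experiment e to add to E

z = 'aaa'                       # counterexample: true DFA accepts, hypothesis rejects
print(find_suffix(z))           # → 'a'
```

Each probe costs one membership query, so locating the split point takes $O(\log|z|)$ queries, which is where the $\log m$ term comes from.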
Read this article at https://www.pugetsystems.com/guides/943 Machine Learning and Data Science: Introduction Written on April 28, 2017 by Dr Donald Kinghorn This is the first post in a series that I have wanted to do for some time. Machine Learning and Data Science, fascinating! This is one of the most interesting areas of endeavor today. It has truly blossomed over the last few years. There are numerous high quality tools and frameworks for handling and working with data sets, a seemingly endless number of application domains, and lots and lots of data. Data Science has become "a thing". Many universities are offering graduate programs in "Data Science". It has existed for ages as part of Statistics, Informatics, Business Intelligence/Analytics, Applied Mathematics, etc., but it is now taking on a multidisciplinary life of its own. In this series I'll be exploring the algorithms and tools of Machine Learning and Data Science. It will be tutorials, guides, how-tos, reviews, "real world" applications, and whatever I feel like writing about. It will be documenting my own journey. Even though I am familiar with the mathematics, many of the algorithms and tools are new to me. This is partly a revival of the fascination I had with neural networks as a graduate student in the mid-1990s but never had the time to pursue. It's about time I did it! What is Machine Learning I wrote a blog post in 2016 titled What is Machine Learning. I looked at it again and decided I like the definition I came up with, so here it is. Machine Learning -- Machine Learning (ML) is a multidisciplinary field focused on implementing computer algorithms capable of drawing predictive insight from static or dynamic data sources using analytic or probabilistic models and refinement via training and feedback. It makes use of pattern recognition, artificial intelligence learning methods, and statistical data modeling.
Learning is achieved in two primary ways. Supervised learning, where the desired outcome is known and an annotated data set or measured values are used to train/fit a model to accurately predict values or labels outside of the training set. This is basically "regression" for values and "classification" for labels. Unsupervised learning uses "unlabeled" data and seeks to find inherent partitions or clusters of characteristics present in the data. ML draws from the fields of computer science, statistics, probability, applied mathematics, optimization, information theory, graph and network theory, biology, and neuroscience. What is Data Science In my view Machine Learning is the primary "What", and Data Science is the primary "How". Data Science has historically been just another name for Statistics. However, I feel that it has become much more than that! After about 2010 everything changed. It used to be that the largest data set a statistician might have to work with was the US census data. That's a few hundred gigabytes; not small, but not so much by today's measure of "big data". The reality of any "real world" Machine Learning project is that a large part of the time and effort is going to be put into getting usable data ready for some machine learning algorithm. There is also the problem of just trying to understand what useful information your data has to offer and how you are going to model it. Data visualization is also an important part of that. Here's my take on a definition for "Data Science": Data Science -- Data Science is an emerging field of study establishing formalism and "best practices" for manipulating and extracting useful information from, possibly massive, data sets.
It is a widely multidisciplinary field incorporating elements of Statistics and Probability; Machine Learning and Artificial Intelligence; Data Visualization; Numerical Analysis, Optimization, and Applied Mathematics; Computer Science, DevOps, Programming, and HPC; and Domain-Specific Knowledge (Sciences, Business, Finance, etc.). It is defining best practices for Data Analysis and Reproducible Research. Reproducible Research This is important! There is a crisis in research reproducibility. The further away research literature is from peer-reviewed pure mathematics, the more likely it is to be irreproducible or just plain junk. The Data Science community seems to be making a serious effort to use and establish reproducible methodology: Literate Programming (documentation and code together); source control and Continuous Integration (DevOps!); environment reproducibility via containerization with Docker ;-) (the wink is for those of you that have read my Docker posts); and well-maintained open source tools. The Data Science community is remarkably "open" considering that a lot of Data Science gets done by large commercial interests. Jupyter Notebooks In keeping with the spirit of reproducibility I will be writing this series of blog posts as Jupyter notebooks and making them available on GitHub. This blog post so far is markdown code in a notebook cell.
In Jupyter I can easily add LaTeX code like this little formula $$R = \sum_{t=0}^{q-1}\frac{1}{q}\gamma_{3}\left( p-q \right) \gamma_{3}\left( t \right) \left( 1-\frac{c}{ab}\right)^{q-t}$$ and have it display nicely in the notebook. I can write Python and execute it in the document:

print("Hello from Python3")

Hello from Python3

Make a plot:

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

N = 50
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = np.pi * (15 * np.random.rand(N))**2

plt.scatter(x, y, s=area, c=colors, alpha=0.7, cmap='PuBu')
plt.show()

Good stuff like that ... and then use the notebook to generate HTML for the blog post. Happy computing! --dbk Tags: Machine Learning, Data Science, Python, Jupyter notebook, Programming
If R is a commutative ring (with unit) then we have an affine scheme Spec(R), which is an object of the category of ringed topological spaces. Is there any way of characterising this object relative to the category of ringed topological spaces? The underlying space of an affine scheme is compact and the structure sheaf is a ring, but these statements hardly go any way towards characterising an affine scheme. I am not looking for an answer that is necessarily strictly tied to the structure of the category of ringed topological spaces - just something that is topological and/or about the algebraic structure of the structure sheaf. A non-answer is: 'An affine scheme is a ringed topological space of the form Spec R for some commutative ring R.' Thanks for any pointers, Christopher An affine scheme can be characterized in the category of locally ringed spaces (one needs the "locally" if I remember correctly). A l.r.s. $X$ is an affine scheme iff $Hom(Y,X)$ functorially equals $Hom(\Gamma(X,\mathcal{O}_X),\Gamma(Y,\mathcal{O}_Y))$ for $Y$ a l.r.s. In other words, the affine scheme construction is the construction of a right adjoint to $\Gamma: ( l.r.s. ) \to ( rings )^{op}$. Look at Eisenbud and Harris, The Geometry of Schemes, page 21.
The conditions for a ringed space $(X,\mathcal{O})$ to be isomorphic to $Spec(R)$, where $R=\mathcal{O}(X)$, are: 1) For each $f\in R$, let $U_f\subset X$ be the set of $x$ such that $f$ maps to a unit in the stalk $\mathcal{O}_x$. Then $\mathcal{O}(U_f)=R[f^{-1}]$. 2) The stalks $\mathcal{O}_x$ are local rings. 3) The natural map $X\to \left| Spec(R)\right|$ that takes $x$ to the pre-image in $R$ of the maximal ideal in $\mathcal{O}_x$ is a homeomorphism. The characterizations mentioned so far are purely formal. There is a nontrivial cohomological characterization by Serre of affine schemes within quasi-compact quasi-separated schemes $X$ (see EGA II, 5.2). Namely, $X$ is affine iff $\Gamma : \mathrm{Qcoh}(X) \to \mathrm{Ab}$ is exact (i.e. $\mathcal{O}_X$ is a projective object in $\mathrm{Qcoh}(X)$) iff all cohomology groups $H^i(X,F)$ vanish, where $F$ is a quasi-coherent sheaf on $X$ and $i>0$. Actually it suffices to take $i=1$ and $F$ a finite type ideal of $\mathcal{O}_X$. Maybe you will be interested in the thesis of Mel Hochster. He characterizes the image of Spec in Top. The article based on it is called Prime Ideal Structure in Commutative Rings and appeared in the Transactions of the AMS in 1969. There is a purely category-theoretical way to describe affine schemes. Consider an arbitrary scheme $X$. For any ring $R$ we can consider the set of $R$-points of $X$, that is $X(R)=\mathrm{Hom}_{Schm}\left( \mathrm{Spec}\;R,X \right)$. This is a functor $X: Ring \to Set$. For an affine $X=\mathrm{Spec}\;A$ it equals $X(R)= \mathrm{Hom}_{Ring}\left( A, R \right)$. Affine schemes are exactly such representable functors $Ring\to Set$. There is also a concrete theorem describing affine schemes: if $X$ is a scheme and $\mathcal{J}$ is a nilpotent quasicoherent sheaf of ideals of $\mathcal{O}_X$, then $X$ is affine iff the closed subscheme $V(\mathcal{J})$ of $X$ is affine (see Demazure and Gabriel, p. 80).
For a spacetime $M$, the Green's function for the operator $\Delta+m^2$ is the following distribution on $M\times M$: $$G(x,y):=\langle \phi(x)\phi(y)\rangle=\int_{C^\infty(M)}\mathcal D\phi\,\phi(x)\phi(y) e^{-S(\phi)/\hbar},~~~~S(\phi)=\int_{x\in M}\phi(\Delta+m^2)\phi$$ If we wanted to compute this correlator in the Hamiltonian approach to QFT, we would interpret $\phi(x),\phi(y)$ as the appropriate time-dependent operators, but would have to specify the state of the field, i.e. we would have to write $$G(x,y):=\langle \phi(x)\phi(y)\rangle=\left<\psi\right|\phi(x)\phi(y)\left|\psi\right>$$ for some state $\left|\psi\right>$. What is that state? In general, $|\psi\rangle$ is the ground state of your theory, i.e., the state with the lowest energy. In free theories, $|\psi\rangle$ is written $|0\rangle$, and is defined through $H_0|0\rangle=0$, or equivalently, $a_{\boldsymbol k}|0\rangle=0$, with $a_{\boldsymbol k}$ the annihilation operator. In interacting theories, $|\psi\rangle$ is written $|\Omega\rangle$, and is defined as the eigenket of $H$ with the lowest possible eigenvalue (energy). Note that we assume (it is an axiom of any QFT) that $H$ is bounded below, i.e., $|\Omega\rangle$ always exists. In general, we define $H=H_0+H_I$, with $H_0$ the free (quadratic) part of the Hamiltonian, and $H_I$ the interaction. Note that both $|0\rangle$ and $|\Omega\rangle$ are well-defined, but in general they are different, $|0\rangle\neq|\Omega\rangle$ (unless $H_I=0$). With this, we define the propagator of any theory as $$ \Delta(x,y)\equiv \langle 0|\mathcal T\ \phi_0^\dagger(x)\phi_0(y)|0\rangle $$ and the correlator (or $n$-point function) as $$ G(x,y)\equiv \langle \Omega|\mathcal T\ \phi^\dagger(x)\phi(y)|\Omega\rangle $$ Finally, we can use Wick's theorem, together with the Dyson series and the Gell-Mann and Low formula, to write $G(x,y)$ in terms of $\Delta(x,y)$. Note that the propagator is independent of interactions (see e.g.
this answer of mine) and $G(x,y)$ is not, but if $H_I=0$, then $\Delta(x,y)=G(x,y)$.
Let $\pi:X\to S=\operatorname{Spec } O_K$ be an arithmetic surface in the sense of Arakelov geometry. Here $K$ is a number field, $\pi$ is a flat map, and $X$ is a projective surface. For any coherent sheaf $\mathscr F$ on $X$ we have the determinant of cohomology: $$\det R\pi_\ast\mathscr F\in \operatorname{Pic }S$$ Moreover let $\omega_{X/S}$ be the usual dualizing sheaf. Can you please explain how I can get the following "duality formula"? $$\det R\pi_\ast\mathscr F\cong\det R\pi_\ast(\omega_{X/S}\otimes \mathscr F^\vee)$$ (I think one should also assume the flatness of $\mathscr F$ over $\mathscr O_S$.) Does it follow from some property of the determinant of cohomology? I've found the equation in Robin de Jong's PhD thesis; I'll post it below even though I think there is a typo in the main formula:
Case Study Contents In statistics, a probit model (binary dependent variable case) is a type of regression in which the dependent variable can take only two values (0/1), for example, married or not married. The name comes from probability and unit. The purpose of the model is to estimate the probability that an observation with particular characteristics will fall into a specific category. As an example, consider the purchase of fluid milk by Mexican households as it relates to the concern about the lack of an adequate intake of calcium, especially by children. We can apply probit regression to the Encuesta Nacional de Ingresos y Gastos de los Hogares (ENIGH) data (2002). For descriptions of variables in the data file, see here. Assume for each observation $t$, the net utility gained from the consumption of fluid milk $U_t^*$, which is not observable, is related to a set of exogenous variables $x_t$ (an $I \times 1$ vector, where $I$ is the total number of exogenous variables). Then, we are interested in the coefficients $\beta$, which describe this relationship in the following latent model (as well as in the related probit model), assuming the error term $\mu_t$ follows a standard normal distribution, i.e., $\mu_t \sim N(0,1)$: \begin{equation} U_t^* = x'_t\beta + \mu_t. \end{equation} This latent model is equivalent to the probit model \begin{equation} y_t = x'_t\beta + \mu_t, \end{equation} when the relationship between the latent utility variable $U_t^*$ and the observable response (0/1) variable of whether a household purchases fluid milk, $y_t$, satisfies: \[ y_t = \left\{ \begin{aligned} 1 & \quad \text{if $U^*_t > 0$ } \\ 0 & \quad \text{otherwise}. \end{aligned} \right.\] Note that in the above model, the $j^{th}$ element of the coefficient vector $\beta$, $\beta_j$ ($j \in \{1,2,\dots, I\}$), measures the change in the latent utility $U_t^*$ for a unit change in $x^j_t$ (the $j^{th}$ element of $x_t$); the implied marginal effect on the conditional probability $\Pr(y_t = 1|x_t)$ is $\phi(x'_t\beta)\beta_j$, where $\phi(\cdot)$ is the standard normal density.
To further develop this regression model, in addition to i.i.d. normally distributed error terms, we assume that the conditional probability takes the normal form: $$\Pr(y_t = 1|x_t) = \Phi(x'_t\beta),$$ where $\Phi(\cdot)$ is the standard normal CDF. A standard statistical textbook such as Greene (2011) shows that the estimator $\hat{\beta}$ can be calculated by maximizing the following log-likelihood function: $$\ln\mathcal{L}(\beta) = \sum_t \left[ y_t\ln\Phi(x'_t\beta) + (1-y_t)\ln\bigl(1-\Phi(x'_t\beta)\bigr)\right].$$ In order to report standard regression outcomes such as the t-statistic, p-value, and others as calculated in Example 1, we need the estimated covariance matrix of the estimator $\hat{\beta}$, i.e., $\hat{V}_{\hat{\beta}}$, which is based on the inverse of the negative Hessian matrix according to Greene (2011), $$\hat{V}_{\hat{\beta}} = (-\hat{H})^{-1},$$ where $\hat{H} = \nabla^2\ln\mathcal{L}(\beta)|_{\hat{\beta}}$ is the estimated Hessian of the log-likelihood function $\ln\mathcal{L}(\beta)$ at the solution point $\hat{\beta}$. GAMS provides a mechanism to generate the Hessian matrix $H$ at the solution point. As we can see from this maximum likelihood example in GAMS with gdx data, probit_gdx.gms, we rely on the convertd solver with options dictmap and hessian, generating a dictionary map from the solver to GAMS and the Hessian matrix at the solution point, then saving them in the data files dictmap.gdx and hessian.gdx respectively. Combining information from these two files provides the Hessian matrix $H$ at the solution point $\hat{\beta}$. We have implemented a discrete choice model in GAMS that can be solved using the NEOS Server. Click here to experiment with the demo of example 3. ENIGH data. Available for download at Instituto Nacional de Estadística y Geografía. Kalvelagen, Erwin. 2007. Least Squares Calculations with GAMS. Available for download at http://www.amsterdamoptimization.com/pdf/ols.pdf. Greene, William. 2011. Econometric Analysis, 7th ed. Prentice Hall, Upper Saddle River, NJ.
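A minimal sketch of the same probit maximum-likelihood estimation in Python on synthetic data (not the ENIGH data), using scipy's BFGS optimizer; BFGS conveniently returns an approximation to the inverse Hessian of the negative log-likelihood, which is exactly the covariance estimate $(-\hat H)^{-1}$:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic data: intercept plus one regressor, true beta = (0.5, -1.0)
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, -1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)  # latent model

def negloglik(beta):
    """Negative probit log-likelihood -ln L(beta)."""
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)  # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(negloglik, np.zeros(2), method='BFGS')
beta_hat = res.x
# res.hess_inv approximates the inverse Hessian of -ln L, i.e. (-H)^{-1}
se = np.sqrt(np.diag(res.hess_inv))
print("estimates: ", beta_hat)
print("std errors:", se)
```

With 5000 observations the estimates land close to the true coefficients; in a production setting one would use the analytic Hessian at the solution (as the GAMS workflow above does) rather than the BFGS approximation.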
Hello guys! I was wondering if you knew some books/articles that have a good introduction to convexity in the context of variational calculus (functional analysis). I was reading Young's "Calculus of Variations and Optimal Control Theory" but I'm not that far into the book and I don't know if skipping chapters is a good idea. I don't know of a good reference, but I'm pretty sure that just means that second derivatives have consistent signs over the region of interest. (That is certainly a sufficient condition for Legendre transforms.) @dm__ yes have studied bells thm at length ~2 decades now. it might seem airtight and has stood the test of time over ½ century, but yet there is some fineprint/ loopholes that even phd physicists/ experts/ specialists are not all aware of. those who fervently believe like Bohm that no new physics will ever supercede QM are likely to be disappointed/ dashed, now or later... oops lol typo bohm bohr btw what is not widely appreciated either is that nonlocality can be an emergent property of a fairly simple classical system, it seems almost nobody has expanded this at length/ pushed it to its deepest extent. hint: harmonic oscillators + wave medium + coupling etc But I have seen that convexity is associated with minimizers/maximizers of the functional, whereas the sign of the second variation is not a sufficient condition for that. That kind of makes me think that those concepts are not equivalent in the case of functionals... @dm__ generally think sampling "bias" is not completely ruled out by existing experiments. some of this goes back to CHSH 1969. there is unquestioned reliance on this papers formulation by most subsequent experiments. am not saying its wrong, think only that theres very subtle loophole(s) in it that havent yet been widely discovered. there are many other refs to look into for someone extremely motivated/ ambitious (such individuals are rare).
en.wikipedia.org/wiki/CHSH_inequality @dm__ it stands as a math proof ("based on certain assumptions"), have no objections. but its a thm aimed at physical reality. the translation into experiment requires extraordinary finesse, and the complex analysis starts with CHSH 1969. etc While it's not something usual, I've noticed that sometimes people edit my question or answer with a more complex notation or incorrect information/formulas. While I don't think this is done with malicious intent, it has sometimes confused people when I'm either asking or explaining something, as... @vzn what do you make of the most recent (2015) experiments? "In 2015 the first three significant-loophole-free Bell-tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them “loophole-free” in the sense that all remaining conceivable loopholes like superdeterminism require truly exotic hypotheses that might never get closed experimentally." @dm__ yes blogged on those. they are more airtight than previous experiments. but still seem based on CHSH. urge you to think deeply about CHSH in a way that physicists are not paying attention. ah, voila even wikipedia spells it out! amazing > The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. > A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment. ↑ suspect entire general LHV theory of QM lurks in these loophole(s)! 
there has been very little attn focused in this area... :o how about this for a radical idea? the hidden variables determine the probability of detection...! :o o_O @vzn honest question, would there ever be an experiment that would fundamentally rule out nonlocality to you? and if so, what would that be? what would fundamentally show, in your opinion, that the universe is inherently local? @dm__ my feeling is that something more can be milked out of bell experiments that has not been revealed so far. suppose that one could experimentally control the degree of violation, wouldnt that be extraordinary? and theoretically problematic? my feeling/ suspicion is that must be the case. it seems to relate to detector efficiency maybe. but anyway, do believe that nonlocality can be found in classical systems as an emergent property as stated... if we go into detector efficiency, there is no end to that hole. and my beliefs have no weight. my suspicion is screaming absolutely not, as the classical is emergent from the quantum, not the other way around @vzn have remained civil, but you are being quite immature and condescending. I'd urge you to put aside the human perspective and not insist that physical reality align with what you expect it to be. all the best @dm__ ?!? no condescension intended...? am striving to be accurate with my words... you say your "beliefs have no weight," but your beliefs are essentially perfectly aligned with the establishment view... Last night dream, introduced a strange reference frame based disease called Forced motion blindness. It is a strange eye disease where the lens is such that to the patient, anything stationary wrt the floor is moving forward in a certain direction, causing them have to keep walking to catch up with them. At the same time, the normal person think they are stationary wrt to floor. The result of this discrepancy is the patient kept bumping to the normal person. 
In order to not bump, the person has to walk at the apparent velocity as seen by the patient. The only known way to cure it is to remo… And to make things even more confusing: Such disease is never possible in real life, for it involves two incompatible realities to coexist and coinfluence in a pluralistic fashion. In particular, as seen by those not having the disease, the patient kept ran into the back of the normal person, but to the patient, he never ran into him and is walking normally It seems my mind has gone f88888 up enough to envision two realities that with fundamentally incompatible observations, influencing each other in a consistent fashion It seems my mind is getting more and more comfortable with dialetheia now @vzn There's blatant nonlocality in Newtonian mechanics: gravity acts instantaneously. Eg, the force vector attracting the Earth to the Sun points to where the Sun is now, not where it was 500 seconds ago. @Blue ASCII is a 7 bit encoding, so it can encode a maximum of 128 characters, but 32 of those codes are control codes, like line feed, carriage return, tab, etc. OTOH, there are various 8 bit encodings known as "extended ASCII", that have more characters. There are quite a few 8 bit encodings that are supersets of ASCII, so I'm wary of any encoding touted to be "the" extended ASCII. If we have a system and we know all the degrees of freedom, we can find the Lagrangian of the dynamical system. What happens if we apply some non-conservative forces in the system? I mean how to deal with the Lagrangian, if we get any external non-conservative forces perturbs the system?Exampl... @Blue I think now I probably know what you mean. Encoding is the way to store information in digital form; I think I have heard the professor talking about that in my undergraduate computer course, but I thought that is not very important in actually using a computer, so I didn't study that much. 
What I meant by use above is what you need to know to be able to use a computer, like you need to know LaTeX commands to type them. @AvnishKabaj I have never had any of these symptoms after studying too much. When I have intensive studies, like preparing for an exam, after the exam, I feel a great wish to relax and don't want to study at all and just want to go somehwere to play crazily. @bolbteppa the (quanta) article summary is nearly popsci writing by a nonexpert. specialists will understand the link to LHV theory re quoted section. havent read the scientific articles yet but think its likely they have further ref. @PM2Ring yes so called "instantaneous action/ force at a distance" pondered as highly questionable bordering on suspicious by deep thinkers at the time. newtonian mechanics was/ is not entirely wrong. btw re gravity there are a lot of new ideas circulating wrt emergent theories that also seem to tie into GR + QM unification. @Slereah No idea. I've never done Lagrangian mechanics for a living. When I've seen it used to describe nonconservative dynamics I have indeed generally thought that it looked pretty silly, but I can see how it could be useful. I don't know enough about the possible alternatives to tell whether there are "good" ways to do it. And I'm not sure there's a reasonable definition of "non-stupid way" out there. ← lol went to metaphysical fair sat, spent $20 for palm reading, enthusiastic response on my leadership + teaching + public speaking abilities, brought small tear to my eye... or maybe was just fighting infection o_O :P How can I move a chat back to comments?In complying to the automated admonition to move comments to chat, I discovered that MathJax is was no longer rendered. This is unacceptable in this particular discussion. I therefore need to undo my action and move the chat back to comments. hmmm... 
actually the reduced mass comes out of using the transformation to the center of mass and relative coordinates, which have nothing to do with Lagrangian... but I'll try to find a Newtonian reference. One example is a spring of initial length $r_0$ with two masses $m_1$ and $m_2$ on the ends such that $r = r_2 - r_1$ is it's length at a given time $t$ - the force laws for the two ends are $m_1 \ddot{r}_1 = k (r - r_0)$ and $m_2 \ddot{r}_2 = - k (r - r_0)$ but since $r = r_2 - r_1$ it's more natural to subtract one from the other to get $\ddot{r} = - k (\frac{1}{m_1} + \frac{1}{m_2})(r - r_0)$ which makes it natural to define $\frac{1}{\mu} = \frac{1}{m_1} + \frac{1}{m_2}$ as a mass since $\mu$ has the dimensions of mass and since then $\mu \ddot{r} = - k (r - r_0)$ is just like $F = ma$ for a single variable $r$ i.e. an spring with just one mass @vzn It will be interesting if a de-scarring followed by a re scarring can be done in some way in a small region. Imagine being able to shift the wavefunction of a lab setup from one state to another thus undo the measurement, it could potentially give interesting results. Perhaps, more radically, the shifting between quantum universes may then become possible You can still use Fermi to compute transition probabilities for the perturbation (if you can actually solve for the eigenstates of the interacting system, which I don't know if you can), but there's no simple human-readable interpretation of these states anymore @Secret when you say that, it reminds me of the no cloning thm, which have always been somewhat dubious/ suspicious of. it seems like theyve already experimentally disproved the no cloning thm in some sense.
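The reduced-mass claim in the exchange above can be checked numerically: integrate the two coupled equations of motion and compare the measured oscillation period of $r = r_2 - r_1$ with $2\pi\sqrt{\mu/k}$. The masses and spring constant below are arbitrary illustrative values:

```python
import math

# Two masses joined by a spring; arbitrary illustrative parameters.
m1, m2, k, r0 = 1.0, 3.0, 4.0, 1.0
mu = 1.0 / (1.0 / m1 + 1.0 / m2)          # reduced mass

x1, x2 = 0.0, r0 + 0.3                    # start with the spring stretched
v1, v2 = 0.0, 0.0
dt, t = 1e-4, 0.0
crossings = []
prev = x2 - x1 - r0
while len(crossings) < 3:                 # collect three zero crossings of r - r0
    f = k * (x2 - x1 - r0)                # spring pulls the masses together
    v1 += dt * (f / m1)                   # semi-implicit Euler (Euler-Cromer)
    v2 += dt * (-f / m2)
    x1 += dt * v1
    x2 += dt * v2
    t += dt
    cur = x2 - x1 - r0
    if prev > 0 >= cur or prev < 0 <= cur:
        crossings.append(t)
    prev = cur

period = 2 * (crossings[2] - crossings[1])      # half a period between crossings
print(period, 2 * math.pi * math.sqrt(mu / k))  # the two should agree
```

The measured period matches $2\pi\sqrt{\mu/k}$, confirming that the relative coordinate behaves like a single oscillator of mass $\mu$.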
I'm trying to understand $\lambda$'s role in both the Poisson and Exponential Distributions and how it is used to find probabilities (yes, I have read the other post regarding this topic, it didn't quite do it for me). What (I think) I understand: Poisson Distribution - discrete $\lambda$ is defined as the average number of successes (however "success" is defined, given the problem context) per unit of time or space PMF: $~~P(X = k;\lambda) = \frac{ \lambda^ke^{-\lambda} }{k!} $ $P(X\leq k) = P(X = 0)~+~P(X = 1)~+~P(X = 2)~+~\ldots~+~P(X = k)$ $P(X< k) = P(X = 0)~+~P(X = 1)~+~P(X = 2)~+~\ldots~+~P(X = k~-~1)$ $P(X\geq k) = 1~-~P(X<k)$ $P(X > k) = 1~-~P(X\leq k)$ Exponential Distribution - continuous $\lambda$ is defined as the average time/space between events (successes) that follow a Poisson Distribution Where my understanding begins to fade: PDF: $~~f(x; \lambda)~=~ \lambda e^{-\lambda x} $ CDF: $ P(X \leq x; \lambda)~=~1~-~e^{-\lambda x} $ $ P(X > x; \lambda) ~=~ 1 ~-~P(X \leq x; \lambda)~=~e^{-\lambda x}$ Where I think the misunderstanding lies: As of now I'm assuming $\lambda$ can be interchanged between the two distributions. Is this the case? I have briefly read about "re-parameterizing" and I think that might be the key, but I don't know what that process is referring to. How do I do this and how does it affect the PMF and CDF of the exponential distribution? This all came from a problem asking: Given a random variable X that follows an exponential distribution with lambda = 3, find P(X > 8). My approach was $e^{-3\cdot 8}$, which gives a probability that seems far too low.
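For what it's worth, a quick numerical cross-check of both the $P(X > 8)$ computation and the Poisson-exponential link (note that scipy parameterizes the exponential by `scale` $= 1/\lambda$, the mean waiting time, rather than by the rate $\lambda$):

```python
import math
from scipy.stats import expon, poisson

lam = 3.0                          # rate: average number of events per unit time
# scipy's expon takes scale = 1/lambda (the mean waiting time), not the rate
p_gt_8 = expon.sf(8, scale=1 / lam)
print(p_gt_8, math.exp(-lam * 8))  # both ~3.8e-11: tiny, but correct

# Consistency with the Poisson: waiting longer than t for the next event
# is the same as seeing zero events in an interval of length t
t = 0.5
print(expon.sf(t, scale=1 / lam), poisson.pmf(0, lam * t))  # both e^{-1.5}
```

So $e^{-3\cdot 8}$ really is the right answer: with three events per unit time on average, waiting more than 8 units for the next one is extraordinarily unlikely.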
So what are spin-networks? Briefly, they are graphs with representations ("spins") of some gauge group (generally SU(2) or SL(2,C) in LQG) living on each edge. At each non-trivial vertex, one has three or more edges meeting up. What is the simplest purpose of the intertwiner? It is to ensure that angular momentum is conserved at each vertex. For the case of a four-valent vertex we have four spins: $(j_1,j_2,j_3,j_4)$. There is a simple visual picture of the intertwiner in this case. Picture a tetrahedron enclosing the given vertex, such that each edge pierces precisely one face of the tetrahedron. Now, the natural prescription for what happens when a surface is punctured by a spin is to associate the Casimir of that spin $ \mathbf{J}^2 $ with the puncture. The Casimir for spin $j$ has eigenvalues $ j (j+1) $. You can also see these as energy eigenvalues for the quantum rotor model. These eigenvalues are identified with the area associated with a puncture. In order for the said edges and vertices to correspond to a consistent geometry it is important that certain constraints be satisfied. For instance, for a triangle we require that the edge lengths satisfy the triangle inequality $ a + b \gt c $ and the angles should add up to $ \angle a + \angle b + \angle c = \kappa \pi$, with $\kappa = 1$ if the triangle is embedded in a flat space and $\kappa \ne 1$ denoting the deviation of the space from zero curvature (positively or negatively curved). In a similar manner, for a classical tetrahedron, it is now the sums of the areas of the faces which should satisfy "closure" constraints. For a quantum tetrahedron these constraints translate into relations between the operators $j_i$ which endow the faces with area. Now, for a triangle, specifying its three edge lengths $(a,b,c)$ completely fixes the angles and there is no more freedom. However, specifying all four areas of a tetrahedron does not fix all the freedom.
The tetrahedron can still be bent and distorted in ways that preserve the closure constraints (not so for a triangle!). These are the physical degrees of freedom that an intertwiner possesses - the various shapes that are consistent with a tetrahedron with face areas given by the spins, or more generally a polyhedron for n-valent vertices. Some of the key players in this arena include, among others, Laurent Freidel, Eugenio Bianchi, E. Magliaro, C. Perini, F. Conrady, J. Engle, Rovelli, R. Pereira, K. Krasnov and Etera Livine. I hope this provides some intuition for these structures. Also, I should add that at present I am working on a review article on LQG for and by "the bewildered". This post imported from StackExchange Physics at 2014-04-01 16:52 (UCT), posted by SE-user user346 I reserve the right to use any or all of the contents of my answers to this and other questions on physics.se in said work, with proper acknowledgements to all who contribute with questions and comments. This legalese is necessary so nobody comes after me with a bullsh*t plagiarism charge when my article does appear :P
Terence Tao has just uploaded a preprint to the arXiv with a claimed proof of the Erdős discrepancy problem. Here's the abstract of his paper, which is titled "The Erdős discrepancy problem". We show that for any sequence $f: {\bf N} \to \{-1,+1\}$ taking values in $\{-1,+1\}$, the discrepancy $$ \sup_{n,d \in {\bf N}} \left|\sum_{j=1}^n f(jd)\right| $$ of $f$ is infinite. This answers a question of Erdős. In fact the argument also applies to sequences $f$ taking values in the unit sphere of a real or complex Hilbert space. The argument uses three ingredients. The first is a Fourier-analytic reduction, obtained as part of the Polymath5 project on this problem, which reduces the problem to the case when $f$ is replaced by a (stochastic) completely multiplicative function ${\bf g}$. The second is a logarithmically averaged version of the Elliott conjecture, established recently by the author, which effectively reduces to the case when ${\bf g}$ usually pretends to be a modulated Dirichlet character. The final ingredient is (an extension of) a further argument obtained by the Polymath5 project which shows unbounded discrepancy in this case. As Terence mentioned in the abstract, his proof builds on the work of the collaborative Polymath5 project, which took place on Timothy Gowers's blog. We last talked about this problem in February 2014, when Boris Konev and Alexei Lisitsa made the news with an attack using SAT solvers (that post contains a good video by James Grime explaining the problem, by the way). Tao refers to this work in an example, which seems to have guided his thinking about the proof. The comments field on Tao's arXiv submission mentions that he's submitted it to the newly-created arXiv overlay journal Discrete Analysis, which was only announced last week. What a coup for open access mathematics!
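To make the quantity in the abstract concrete, here is a small brute-force sketch (my own illustration, not from the paper) that computes the discrepancy of a finite initial segment of a $\pm 1$ sequence. The example sequence is the classic completely multiplicative one built from the non-principal character mod 3, which is known to have discrepancy growing only logarithmically; Tao's theorem says even this discrepancy is unbounded.

```python
def discrepancy(f, N):
    """max over d >= 1 and n with n*d <= N of |f(d) + f(2d) + ... + f(nd)|."""
    best = 0
    for d in range(1, N + 1):
        s = 0
        for j in range(1, N // d + 1):
            s += f(j * d)
            best = max(best, abs(s))
    return best

def f(n):
    """Completely multiplicative: strip factors of 3, then +1 if n = 1 (mod 3), else -1."""
    while n % 3 == 0:
        n //= 3
    return 1 if n % 3 == 1 else -1

d9, d40 = discrepancy(f, 9), discrepancy(f, 40)
print(d9, d40)
```

For this $f$ the partial sums along any homogeneous arithmetic progression reduce, by complete multiplicativity, to the partial sums of $f$ itself, so the discrepancy grows like $\log_3 N$ rather than like $\sqrt{N}$ for a random sign sequence.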
More information The Erdős discrepancy problem by Terence Tao, on the arXiv The Erdos discrepancy problem via the Elliott conjecture on Tao’s blog a week ago, which seems to have been the big breakthrough. Explanation and notes on the Erdős discrepancy problem, on the Polymath wiki.
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma.$$ I am having a tough time visualising what this is. Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; is the point $z = 0$ (a) a removable singularity, (b) a pole, (c) an essential singularity, or (d) a non-isolated singularity? Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = 1-y$, where $y=\frac{1}{2z^2}+\frac{1}{4!... I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? Mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it. Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then, since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial, but I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s. But I think that to prove the implication for transitivity a use of MP (modus ponens) seems to be necessary. But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case, because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or only on the FOL axioms (without equality axioms). This would allow, in some cases, defining an "equality-like" relation for set theories in which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1$. I have to prove that $f(z)=z^{n}$. I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\ldots,a_{n-1}$ be zero, because by triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested in $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
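The rule discussed in the chat, $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$ as $x\to 0$, can be checked mechanically with sympy's series arithmetic; this is a toy illustration of my own, not from the discussion:

```python
import sympy as sp

x = sp.symbols('x')

# Order-term arithmetic: subtracting two O(x^2) quantities is still O(x^2)
diff_order = sp.O(x**2) - sp.O(x**2)

# Concrete instance: cos(x) and exp(-x^2/2) agree through order x^2,
# so their difference is absorbed into the O(x^4) error term
a = sp.cos(x).series(x, 0, 4)
b = sp.exp(-x**2 / 2).series(x, 0, 4)
print(diff_order, a - b)
```

The second pair matches the chat's point exactly: each function is $1 - x^2/2 + \mathcal{O}(x^4)$, so their difference is not merely $\mathcal{O}(x^2)$ but in fact $\mathcal{O}(x^4)$ — the big-O bound is an upper bound, not an exact size.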
Difference between revisions of "Probability Seminar" (→May 7, Tuesday Van Vleck 901, 2:25pm,, Duncan Dauvergne (Toronto)) (→May 7, Tuesday Van Vleck 901, 2:25pm,, Duncan Dauvergne (Toronto)) Line 114: Line 114: Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems. Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. 
Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems. − == + == <span style="color:red">'''Tuesday''' </span>Van Vleck 901, 2:25pm,, Duncan Dauvergne (Toronto) == Revision as of 14:52, 30 April 2019 Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu January 31, Oanh Nguyen, Princeton Title: Survival and extinction of epidemics on random graphs with general degrees Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it survives for exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$.
Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly. Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University Title: When particle systems meet PDEs Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems. Title: Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2. February 14, Timo Seppäläinen, UW-Madison Title: Geometry of the corner growth model Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah). February 21, Diane Holcomb, KTH Title: On the centered maximum of the Sine beta process Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process.
The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette. Title: Quantitative homogenization in a balanced random environment Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison). Wednesday, February 27 at 1:10pm Jon Peterson, Purdue Title: Functional Limit Laws for Recurrent Excited Random Walks Abstract: Excited random walks (also called cookie random walks) are model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. 
In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina. March 21, Spring Break, No seminar March 28, Shamgar Gurevitch UW-Madison Title: Harmonic Analysis on GLn over finite fields, and Random Walks Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M). April 4, Philip Matchett Wood, UW-Madison Title: Outliers in the spectrum for products of independent random matrices Abstract: For a fixed positive integer m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity.
Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke. April 11, Eviatar Procaccia, Texas A&M Title: Stabilization of Diffusion Limited Aggregation in a Wedge. Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems. April 18, Andrea Agazzi, Duke Title: Large Deviations Theory for Chemical Reaction Networks Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction.
Under the assumption of positive recurrence these results also allow for the estimation of transitions times between metastable states of this class of processes. April 25, Kavita Ramanan, Brown Title: Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu. Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown Title: Tales of Random Projections Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. 
Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems. Tuesday, May 7, Van Vleck 901, 2:25pm, Duncan Dauvergne (Toronto)
Case Study Contents In the Life Cycle Consumption Problem, the lifetime budget constraint restricted consumption by equating the present value of consumption to the present value of wage income. The Life Cycle Consumption Problem with Assets extends the Life Cycle Consumption Problem in that the consumer is allowed to have assets, i.e., the consumer can borrow money and repay it in future periods or save money and spend it in other periods. The objective of the life cycle consumption problem with assets is to determine how much one can consume in each period so as to maximize utility. The model includes a constraint on the minimum asset level; if the minimum asset level is zero, the consumer is not allowed to borrow. To formulate the life cycle problem with assets, we start with the same notation as the formulation for the life cycle consumption problem and add notation for assets. Set P = set of periods = {1..\(n\)} Parameters \(w(p,n)\) = wage income function \(r\) = interest rate \(\beta\) = discount factor \(a_{min}\) = minimum asset level Decision Variables \(c_p\) = consumption in period \(p\), \(\forall p \in P\) \(a_p\) = assets available at the beginning of period \(p\), \(\forall p \in P\) Objective Function Let \(c_p\) be consumption in period \(p\), where "life" begins at \(p=1\) and continues to \(p=n\). Let \(u()\) be the utility function and let \(u(c_p)\) be the utility value associated with consuming \(c_p\). Utility in future periods is discounted by a factor of \(\beta\). Then, the objective function is to maximize the total discounted utility: maximize \(\sum_{p \in P} \beta^{p-1} u(c_p)\) Constraints The life cycle consumption model with assets tracks the assets available at the beginning of each period. The constraint on consumption now is defined in terms of assets (wealth) and consumption.
The wealth at the beginning of period \(p+1\) is at most the wealth at the beginning of period \(p\) plus the net savings in period \(p\) (wage income minus consumption), multiplied by the gross return \(R = 1 + r\) on savings; since utility is increasing in consumption, this inequality binds at an optimum. There is one constraint for each period: \[a_{p+1} \leq R(a_p + w(p,n) - c_p), \forall p \in P.\] The model assumption is that initial wealth is zero \((a_1 = 0)\) and that terminal wealth is non-negative \((a_{n+1} \geq 0)\). There is a minimum asset level, \(a_{min} \leq 0\). If \(a_{min} = 0\), then no borrowing is allowed. Otherwise, an individual can borrow as long as s/he can repay the amount before period \(n\). Therefore, there is one lower bound constraint for each period: \(a_p \geq a_{min}, \forall p \in P.\) Also, the amount consumed in each period should be positive (bounded below by a small \(\epsilon\) so that utility is well-defined): \(c_p \geq \epsilon, \forall p \in P.\) Demo and Examples To solve your own life cycle consumption with assets problems, check out the Life Cycle Consumption with Assets Problem demo.

$Title Life Cycle Consumption - with explicit modeling of savings
Set p period /1*15/ ;
Scalar B discount factor /0.96/ ;
Scalar i interest rate /0.10/ ;
Scalar R gross interest rate ;
R = 1 + i ;
Scalar amin minimum asset level /0/ ;
$macro u(c) (-exp(-c))
Parameter w(p) wage income in period p ;
w(p) = ((15 - p.val)*p.val) / 15 ;
Parameter lbnds(p) lower bounds of consumption / 1*15 0.0001 / ;
Positive Variable c(p) consumption expenditure in period p ;
Variable a(p) assets (or savings) at the beginning of period p ;
Variable Z objective ;
Equations budget(p) lifetime budget constraint , obj objective function ;
budget(p) .. a(p+1) - R*(a(p) + w(p) - c(p)) =l= 0 ;
obj .. Z =e= sum(p, power(B, p.val - 1)*u(c(p))) ;
Model LifeCycleConsumptionSavings /budget, obj/ ;
c.lo(p) = lbnds(p) ;
a.lo(p) = amin ;
a.fx('1') = 0 ;
Solve LifeCycleConsumptionSavings using nlp maximizing Z ;
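For readers without GAMS, the same model can be sketched in Python with scipy. This is my own translation of the program above (same parameter values), using SLSQP in place of a full NLP solver, and imposing the asset recursion with equality since the budget constraint binds at an optimum:

```python
import numpy as np
from scipy.optimize import minimize

n, beta, R, a_min = 15, 0.96, 1.10, 0.0
w = np.array([(n - p) * p / n for p in range(1, n + 1)])  # wage income w(p)

def assets(c):
    """Asset path: a[0] = a_1 = 0, then a[p+1] = R*(a[p] + w[p] - c[p])."""
    a = np.zeros(n + 1)
    for p in range(n):
        a[p + 1] = R * (a[p] + w[p] - c[p])
    return a

def neg_utility(c):
    # maximize sum_p beta^(p-1) * u(c_p) with u(c) = -exp(-c)
    return -sum(beta**p * (-np.exp(-c[p])) for p in range(n))

res = minimize(neg_utility, x0=np.full(n, 1.0),
               bounds=[(1e-4, None)] * n,          # c_p >= epsilon
               constraints=[{'type': 'ineq',
                             'fun': lambda c: assets(c)[1:] - a_min}],
               method='SLSQP')
c_opt, a_path = res.x, assets(res.x)
```

Because assets are affine in consumption, the inequality constraints are linear and SLSQP handles them comfortably. With \(\beta R = 1.056 > 1\), the optimal consumption profile rises over time, exactly as the Euler equation for this utility predicts.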
Let there be given a (configuration) manifold $M$. Often in physics one assumes that a constraint function $\chi$ obeys the following regularity conditions: $\chi: \Omega\subseteq M \to \mathbb{R}$ is defined in an open neighborhood $\Omega$ of the constrained submanifold $C\subset M$; $\chi$ is (sufficiently$^1$ many times) differentiable in $\Omega$; The gradient $\vec{\nabla} \chi$ is non-vanishing on the constrained submanifold $C\subset M$. Here it is implicitly understood that $\chi$ vanishes on the constrained submanifold $C\subset M$, i.e. $$C\cap \Omega ~=~\chi^{-1}(\{0\})~:=~\{x\in\Omega \mid \chi(x)=0\}.$$ [Also we imagine that the full constrained submanifold $C\subset M$ is covered by a family $(\Omega_{\alpha})_{\alpha\in I}$ of open neighborhoods, each with a corresponding constraint function $\chi_{\alpha}: \Omega_{\alpha}\subseteq M \to \mathbb{R}$, and such that the constraint functions $\chi_{\alpha}$ and $\chi_{\beta}$ are compatible in neighborhood overlaps $\Omega_{\alpha}\cap \Omega_{\beta}$.] Since there (locally) is only one constraint, the constrained submanifold will be a hypersurface, i.e. of codimension 1. [More generally, there could be more than one constraint: Then the above regularity conditions should be modified accordingly. See e.g. Ref. 1 for details.] The above regularity conditions are strictly speaking not always necessary, but they greatly simplify the general theory of constrained systems, e.g. in cases where one would like to use the inverse function theorem, the implicit function theorem, or to reparametrize the constraints $\chi\to\chi^{\prime}$. [The rank condition (3.) can be tied to the non-vanishing of the Jacobian $J$ in the inverse function theorem.] Quantum mechanically, reparametrizations of constraints may induce a Faddeev-Popov-like determinantal factor in the path integral. Example 1a: OP's 1st example (v1)$$\tag{1a} \chi(x,y)~=~x^2+y^2-\ell^2$$would fail condition 3 if $\ell=0$.
If $\ell=0$, then $C=\{(0,0)\}\subset M=\mathbb{R}^2$ is just the origin, which has codimension 2. On the other hand, the $\chi$-constraint satisfies the regularity conditions 1-3 if $\ell>0$. Example 1b: OP's 1st example (v3) $$\tag{1b} \chi(x,y)~=~\sqrt{x^2+y^2}-\ell$$is not differentiable at the origin $(x,y)=(0,0)$, and hence would fail condition 2 if $\ell=0$. On the other hand, the $\chi$-constraint satisfies the regularity conditions 1-3 if $\ell>0$. Example 2a: Assume $\ell>0$. OP's 2nd example (v1) $$\tag{2a} \chi(x,y)~=~\sqrt{x^2+y^2-\ell^2}$$would fail conditions 1 and 2. The square root is not well-defined on one side of the constrained submanifold $C$. Example 2b: Assume $\ell>0$. OP's 2nd example (v3) $$\tag{2b} \chi(x,y)~=~(x^2+y^2-\ell^2)^2$$ would fail condition 3 since the gradient $\vec{\nabla} \chi$ vanishes on the constrained submanifold $C$. References: M. Henneaux and C. Teitelboim, Quantization of Gauge Systems, 1994; Subsection 1.1.2. -- $^1$ Exactly how many times differentiable depends on application.
Probability Seminar

Revision as of 14:52, 30 April 2019

Spring 2019: Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements, please send an email to join-probsem@lists.wisc.edu

January 31, Oanh Nguyen, Princeton Title: Survival and extinction of epidemics on random graphs with general degrees Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability.
When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly. Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University Title: When particle systems meet PDEs Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems. Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$. February 14, Timo Seppäläinen, UW-Madison Title: Geometry of the corner growth model Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah). February 21, Diane Holcomb, KTH Title: On the centered maximum of the Sine beta process Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields.
Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette. Title: Quantitative homogenization in a balanced random environment Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison). Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue Title: Functional Limit Laws for Recurrent Excited Random Walks Abstract: Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit.
In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina. March 21, Spring Break, No seminar March 28, Shamgar Gurevitch, UW-Madison Title: Harmonic Analysis on GLn over finite fields, and Random Walks Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison Title: Outliers in the spectrum for products of independent random matrices Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries, in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke. April 11, Eviatar Procaccia, Texas A&M Title: Stabilization of Diffusion Limited Aggregation in a Wedge. Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems. April 18, Andrea Agazzi, Duke Title: Large Deviations Theory for Chemical Reaction Networks Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory.
This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence, these results also allow for the estimation of transition times between metastable states of this class of processes. April 25, Kavita Ramanan, Brown Title: Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs Abstract: Many applications can be modeled as a large system of homogeneous interacting particles on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu. Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown Title: Tales of Random Projections Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research.
Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems. Tuesday, May 7, Van Vleck 901, 2:25pm, Duncan Dauvergne (Toronto)
The title says it all. I have come up with a solution, but I cannot figure out some details. Please help me out and comment on my solution. Feel free to leave your own solution so that I can also learn from you. Show that $[0,1] \cap \mathbb{Q}$ is not compact in $[0,1]$. Solution: Let $S =[0,1] \cap \mathbb{Q}$. Note that $S$ is countable. (Is it? If yes, how to prove it rigorously? If not, is the remaining part of my solution still true?) Then we write $S=\{ r_n : n \in \mathbb{N} \}$. We now construct an open cover that does not have a finite subcover. Consider $r_n \in S$, where $r_n \neq 0, 1$; then we define $p = \min \left\{\frac{d(r_{n-1},r_n)}{2}, \frac{d(r_n, r_{n+1})}{2}\right\}$. Then $$\left(\bigcup_{r_n \in S,\ r_n \neq 0,1}B(r_n,p)\right)\cup \{0\}\cup \{1\}$$ is an open cover of $S$. But there exists no finite subcover. Thus, $S$ is not compact in $[0,1]$. Apart from the above question, if I change the metric space from $[0,1]$ to $\mathbb{R}$, can I still make the same conclusion? This confuses me as I did not really consider the metric space in my solution! Thanks in advance.
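On the countability worry in the solution above: $S$ is countable because the rationals in $[0,1]$ can be listed explicitly, ordering by denominator and then numerator. A throwaway sketch of that enumeration (names are mine):

```python
from fractions import Fraction
from itertools import islice
from math import gcd

def rationals_01():
    """List Q ∩ [0,1] with no repeats: 0, 1, then reduced p/q by increasing q."""
    yield Fraction(0)
    yield Fraction(1)
    q = 2
    while True:
        for p in range(1, q):
            if gcd(p, q) == 1:      # only reduced fractions, so no repeats
                yield Fraction(p, q)
        q += 1

first = list(islice(rationals_01(), 8))
print(first)   # 0, 1, 1/2, 1/3, 2/3, 1/4, 3/4, 1/5
```

Such a surjection $\mathbb{N}\to S$ is exactly what countability means, so the enumeration $S=\{r_n : n\in\mathbb{N}\}$ in the solution is legitimate.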
Note: for those more interested in the application and implementation review and discussion, this section can be skipped.

Formal Model of Artificial Packet Chemistry

Artificial Chemistry

Stochastic Molecular Collisions. Every single molecule is worked with: a sample of molecules from the reaction vessel $\mathcal{P}$ is drawn, and the algorithm checks to see if a particular rule $r \in \mathcal{R}$ applies.

Differential Rate Equations: This approach seeks to describe the dynamics of a chemical system using concentrations of molecular species. The rules under this algorithm take a species approach: $$r: a_{1}s_{1} + a_{2}s_{2} + \ldots + a_{N}s_{N} \longrightarrow b_{1}s_{1} + b_{2}s_{2} + \ldots + b_{N}s_{N}$$ Here, the $s_{i}$'s are species, not individual molecules. The coefficients are stoichiometric factors of the reaction. They are simply indicator functions to denote whether species $s_{i}$ is a reactant or product. That is, $a_{i} = 1$ if and only if $s_{i}$ is a reactant in the rule $r$, and $b_{i} = 1$ if and only if $s_{i}$ is a product in the rule $r$. It is this form of $\mathcal{A}$ that Meyer and Tschudin [11] utilize in their packet chemistry. The change of overall concentration (concentration denoted $c_{s_{i}}$) is given by a system of differential equations $$\frac{\text{d}c_{s_{i}}}{\text{d}t} = (b_{i}-a_{i})\prod_{j=1}^{N}c_{s_{j}}^{a_{j}}, \quad i=1,\ldots,N$$ according to the Law of Mass Action discussed earlier. There may be multiple rules/reactions $r \in \mathcal{R}$ that affect the concentration of species $s_{i}$, so $$\frac{\text{d}c_{s_{i}}}{\text{d}t} = \sum_{r\in \mathcal{R}}\left[(b_{i}^{r}-a_{i}^{r})\prod_{j=1}^{N}c_{s_{j}}^{a_{j}^{r}}\right], \quad i=1,\ldots,N$$

Others: There are other options, such as metadynamics (where the number of species and thus differential equations may change over time), mixed approaches, or symbolic analysis of the differential equations. As this article would be far too cumbersome if it discussed all of these, they are omitted, but may be found in [1].
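The rate-equation formalism above is straightforward to put on a computer: each rule $(a, b)$ contributes $(b_i - a_i)\prod_j c_{s_j}^{a_j}$ to $\dot c_{s_i}$. A minimal sketch with plain Euler integration (the two-species reversible example and all names are mine, not from [11]):

```python
import numpy as np

def mass_action_rates(c, reactions):
    """dc/dt from the law of mass action: each rule (a, b) with 0/1
    stoichiometric vectors contributes (b - a) * prod_j c_j**a_j."""
    dc = np.zeros_like(c)
    for a, b in reactions:
        dc += (b - a) * np.prod(c ** a)
    return dc

# Hypothetical example: two species with rules r1: s1 -> s2 and r2: s2 -> s1.
reactions = [(np.array([1.0, 0.0]), np.array([0.0, 1.0])),
             (np.array([0.0, 1.0]), np.array([1.0, 0.0]))]

c = np.array([1.0, 0.0])
for _ in range(10_000):                 # crude Euler integration, dt = 0.01
    c = c + 0.01 * mass_action_rates(c, reactions)
print(c)                                # both concentrations relax toward 0.5
```

Total concentration is conserved here because $\sum_i (b_i - a_i) = 0$ for each rule, which is a useful sanity check on any implementation.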
Artificial Packet Chemistry

Figure 3 from Meyer and Tschudin [11] gives an explicit example to help solidify these abstract ideas. The network consists of 4 nodes, so $V = \{n_{1}, n_{2}, n_{3}, n_{4}\}$. Each node has a bidirectional link with its neighbors, so $E = \{n_{1}n_{2}, n_{2}n_{1}, n_{2}n_{3}, n_{3}n_{2}, n_{2}n_{4}, n_{4}n_{2}, n_{3}n_{4}, n_{4}n_{3}\}$. In this case, we only have one species of molecule (one queue) per node, so $\mathcal{S} = \{X_{1}, X_{2}, X_{3}, X_{4}\}$. The set of reactions is simply a first-order reaction per arc: $\mathcal{R} = \{r_{a,b}: X_{a} \to X_{b} : ab \in E\}$.

From a review standpoint, I would have liked to see a less trivial example, such as one with multiple queues in a node, and rules that may keep packets in a node instead of just transmitting. These types of scenarios would be interesting to model this way, and would better demonstrate the power of this approach.

Continuation

The next post in the series will discuss the mathematical analyses of the artificial packet chemistry described here.

References

Dittrich, P., Ziegler, J., and Banzhaf, W. Artificial chemistries – a review. Artificial Life 7 (2001), 225–275.
Feinberg, M. Complex balancing in general kinetic systems. Archive for Rational Mechanics and Analysis 49 (1972).
Gadgil, C., Lee, C., and Othmer, H. A stochastic analysis of first-order reaction networks. Bulletin of Mathematical Biology 67 (2005), 901–946.
Gibson, M., and Bruck, J. Efficient stochastic simulation of chemical systems with many species and many channels. Journal of Physical Chemistry 104 (2000), 1876–1889.
Gillespie, D. The chemical Langevin equation. Journal of Chemical Physics 113 (2000).
Gillespie, D. The chemical Langevin and Fokker-Planck equations for the reversible isomerization reaction. Journal of Physical Chemistry 106 (2002), 5063–5071.
Horn, F. On a connexion between stability and graphs in chemical kinetics. Proceedings of the Royal Society of London 334 (1973), 299–330.
Kamimura, K., Hoshino, H., and Shishikui, Y. Constant delay queuing for jitter-sensitive IPTV distribution on home network. IEEE Global Telecommunications Conference (2008).
Laidler, K. Chemical Kinetics. McGraw-Hill, 1950.
McQuarrie, D. Stochastic approach to chemical kinetics. Journal of Applied Probability 4 (1967), 413–478.
Meyer, T., and Tschudin, C. Flow management in packet networks through interacting queues and law-of-mass-action scheduling. Technical report, University of Basel.
Pocher, H. L., Leung, V., and Gilles, D. An application- and management-based approach to ATM scheduling. Telecommunication Systems 12 (1999), 103–122.
Tschudin, C. Fraglets – a metabolistic execution model for communication protocols. Proceedings of the 2nd Annual Symposium on Autonomous Intelligent Networks and Systems (2003).

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
A puzzle via @CmonMattTHINK (Matt Enlow): There is a line that is tangent to the curve y = x^4 - x^3 at two distinct points. What is its equation? (Can you find it without calculus?) #iteachmath #math #maths #mathchat #mathschat — Matt Enlow (@CmonMattTHINK) December 21, 2018 (I think we… Read More → Dear Uncle Colin, I have a little problem. You see, there’s this bird, A, in its nest at time $t=0$ - the nest is at $(20, -17)$ - and it travels with a velocity of $-6\bi + 7\bj$ (in the appropriate units). But there’s another bird, B, whose nest is… Read More → My cunning plan, back last August, was sadly foiled: @christianp refused to rise to the bait. I'd written a post about finding the smallest number such that moving its final digit to the front of the number doubles its value. It turned out, to my surprise, to be 17 digits… Read More → Dear Uncle Colin, I’m told that the graphs of the functions $f(x) = x^3 + (a+b)x^2 + 3x -4$ and $g(x) = (x-3)^3 + 1$ touch - and I need to express $a$ in terms of $b$. Can you help? - Can’t Understand Basic Introductory Calculus Hi, CUBIC, and thanks… Read More → I… I… I… *Looks up Ito’s Lemma* *Reaches for bargepole, then doesn’t touch it.* I… I… I… Oh! It says here, there’s a thing called Ivory’s Theorem! What is Ivory’s Theorem? Despite the main paper I could find about it calling it “the famous Ivory’s Theorem”, it was fairly tricky… Read More → Dear Uncle Colin, I need to find the area between the curves $y=16x$, $y= \frac{4}{x}$ and $y=\frac{1}{4}x$, as shown. How would you go about that? Awkward Regions, Exhibit A Hi, AREA, and thanks for your message! As usual, there are several possible approaches here, but I’m going to write up… Read More → The Mathematical Ninja peered at the problem sheet: Given that $(1+ax)^n = 1 - 12x + 63x^2 + \dots$, find the values of a and n Barked: “$n=8$ and $a=-\frac{3}{2}$.” The student sighed.
“I get no marks if I just write down the answer.” Snarled: “You get no… Read More → Dear Uncle Colin, A seven-digit integer has 870,720 as its last six digits. It is the product of six consecutive even integers. What is the missing first digit? Please Reveal Our Digit! Underlying Calculation Too Hi, PRODUCT, and thanks for your message! There are several approaches to this (as usual)… Read More → Once upon a MathsJam, Barney Maunder-Taylor showed up with a curious object, a wedge with a circular base. Why? Well, if you held a light above it, it cast a circular shadow. From one side, the shadow was an equilateral triangle; along the third axis, a rectangle. A lovely thing. Read More → Dear Uncle Colin, I have to show that $\prod_{n=1}^\infty \frac{(2n+1)^2 - 1}{(2n+1)^2} > \frac{3}{4}$. How would you do that? Partial Results Obtained Don’t Undeniably Create Truth Hi, PRODUCT, and thanks for your message! That’s a messy one. I can see two reasonable approaches: one is to take the whole thing… Read More →
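Returning to the double-tangent puzzle at the top of this digest: a line $y = mx + c$ is tangent to $y = x^4 - x^3$ at two distinct points exactly when $x^4 - x^3 - (mx + c)$ is a perfect square $(x^2 + ax + b)^2$, which is the calculus-free route. Matching coefficients is a mechanical job for sympy (a sketch; symbol names are mine):

```python
import sympy as sp

x, m, c, a, b = sp.symbols('x m c a b')

# Double tangency <=> the difference curve-minus-line is a perfect square.
resid = sp.expand(x**4 - x**3 - (m*x + c) - (x**2 + a*x + b)**2)
sol = sp.solve([resid.coeff(x, k) for k in range(4)], [a, b, m, c], dict=True)[0]
print(sol[m], sol[c])   # -1/8 -1/64, i.e. the line y = -x/8 - 1/64
```

The touch points are the roots of $x^2 - \frac{x}{2} - \frac{1}{8}$, which are real, so the line really does meet the quartic tangentially at two distinct points.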
Case Study

Similar to the probit model we introduced in Example 3, a logit (or logistic regression) model is a type of regression where the dependent variable is categorical. It could be binary or multinomial; in the latter case, the dependent variable of a multinomial logit could be either ordered or unordered. On the other hand, the logit differs from the probit in several key assumptions. This example covers the case of binary logit, where the dependent variable can take only two values (0/1). Greene (1992) estimated a model of consumer behavior where he examined whether or not an individual had experienced a major derogatory report in his/her credit history. The file credit.gdx contains information on the credit history of a sample of more than 1,000 individuals. For descriptions of variables in the data file, see here. In order to examine the determinants of whether a credit card holder experiences a derogatory credit report, we set up the following discrete choice model similar to what we did in Example 3: $$y_t = x'_t\beta + \mu_t,$$ where $y_t$ is a discrete (0/1) response variable satisfying: \[ y_t = \left\{ \begin{aligned} 1 & \quad \text{if number of major derogatory credit reports $> 0$ } \\ 0 & \quad \text{otherwise,} \end{aligned} \right.\] and $x_t$ is a vector of exogenous variables. Therefore, the conditional probability $\Pr(y_t = 1|x_t)$ measures the chance that the observed outcome for the dependent variable is the "noteworthy" possible outcome -- here the probability of receiving major derogatory credit reports given exogenous variables. $\mu_t$ is the error term of observation $t$, while the coefficient vector $\beta$ measures the marginal effect on the conditional probability $\Pr(y_t = 1|x_t)$ of a unit change in the data $x_t$ (as introduced in Example 3). Having the model, we can then estimate it using the logit model specification and maximum likelihood techniques.
Unlike in the probit model, we now assume that the error term $\mu_t$ follows an i.i.d. logistic distribution, and the conditional probability takes the logistic form: $$\Pr(y_t = 1|x_t) = \frac{\exp(x'_t\beta)}{1+\exp(x'_t\beta)}.$$ A standard statistical textbook such as Greene (2011) would show that the estimator $\hat{\beta}$ could be calculated through maximizing the following log-likelihood function $\ln\mathcal{L}(\beta)$: $$\ln\mathcal{L}(\beta) = \sum_{t}\left[y_t\, x'_t\beta - \ln\left(1 + \exp(x'_t\beta)\right)\right].$$ Similar to Example 3, we report estimated variances based on the diagonal elements of the covariance matrix $\hat{V}_{\hat{\beta}}$ along with t-statistics and p-values. Check out the demo of example 4 to experiment with a discrete choice model for estimating and statistically testing the logit model. Greene, William. 1992. A Statistical Model for Credit Scoring. Working Paper #92-29, Department of Economics, Stern School of Business, New York University, New York. Kalvelagen, Erwin. 2007. Least Squares Calculations with GAMS. Available for download at http://www.amsterdamoptimization.com/pdf/ols.pdf. Greene, William. 2011. Econometric Analysis, 7th ed. Prentice Hall, Upper Saddle River, NJ.
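The estimation step described above can be sketched in a few lines on synthetic data (everything here, the data, names, and coefficient values, is illustrative and not the credit.gdx example). Newton-Raphson on the logit log-likelihood uses the standard score $X'(y - p)$ and information matrix $X'WX$ with $W = \mathrm{diag}(p_t(1 - p_t))$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the credit data: intercept plus two regressors.
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0, -2.0])
p = 1.0 / (1.0 + np.exp(-(X @ beta_true)))    # Pr(y_t = 1 | x_t), logistic form
y = rng.binomial(1, p)

def logit_mle(X, y, iters=25):
    """Maximize the logit log-likelihood by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (y - p)                    # score vector X'(y - p)
        H = (X * (p * (1 - p))[:, None]).T @ X  # information matrix X'WX
        beta = beta + np.linalg.solve(H, grad)
    return beta, H

beta_hat, H = logit_mle(X, y)
# beta_hat is close to beta_true for a sample this large; the diagonal of
# inv(H) estimates the variances used for the t-statistics mentioned above.
```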
Back to Combinatorial Optimization

Perhaps the most famous combinatorial optimization problem is the Traveling Salesman Problem (TSP). Given a complete graph on \(n\) vertices and a weight function defined on the edges, the objective of the TSP is to construct a tour (a circuit that passes through each vertex exactly once) of minimum total weight. The TSP is an example of a hard combinatorial optimization problem; the decision version of the problem is \(\mathcal{NP}\)-complete.

Integer Programming Formulation

The TSP can be formulated as an integer linear programming problem. Let \(N\) be the set of vertices (cities) and let \(c_{ij}\) be the weight of the edge connecting vertex \(i\) and vertex \(j\). Let decision variable \(x_{ij}\) take the value 1 if the tour includes the edge connecting \(i\) and \(j\) and the value 0 otherwise.

Minimize \( \sum_{i \in N} \sum_{j \in N} c_{ij} x_{ij}\)

subject to:

enter each vertex exactly once: \(\sum_{i \in N} x_{ij} = 1, \quad \forall j \in N\)

exit each vertex exactly once: \(\sum_{j \in N} x_{ij} = 1, \quad \forall i\in N\)

subtour elimination: \(\sum_{i, j \in S} x_{ij} \leq |S| - 1, \quad \forall S \subset N,\ S \neq \emptyset\)

integrality: \(x_{ij} \in \{0, 1\}\)

A NEOS version of the Concorde solver is available for users to solve TSP instances online. Stephan Mertens's TSP Algorithms in Action uses Java applets to illustrate some simple heuristics and compare them to optimal solutions on 10-25 node problems. Onno Waalewijn has constructed Java TSP applets exhibiting the behavior of different methods for heuristic and exhaustive search on various test problems. The TSP Package for R provides infrastructure for specifying instances of a TSP and its (possibly optimal) solution as well as several heuristics to find good solutions. It also provides an interface to the Concorde solver.
TSP Solver for Google Maps API is a component for Google Maps API developers to compute the fastest route that visits a given set of locations. For practical purposes, the traveling salesman problem is only the simplest case of what are generally known as vehicle-routing problems. Commercial software packages for vehicle routing, or more generally for supply chain management, may have TSP routines. OR/MS Today periodically publishes a vehicle routing software survey. The most recent survey appeared in the February 2014 issue. The Traveling Salesman Problem website provides information on the history, applications, and current research on the TSP as well as information about the Concorde solver. Travelling Salesman Problem on Wikipedia provides some information on the history, solution approaches, and related problems. TSPLIB provides a library of sample instances for the TSP and related problems. Applegate, D. L., Bixby, R. E., Chvátal, V., and Cook, W. J. 2006. The Traveling Salesman Problem: A Computational Study. Princeton University Press, Princeton, NJ. Cook, W. J. 2011. In Pursuit of the Travelling Salesman: Mathematics at the Limits of Computation. Princeton University Press, Princeton, NJ. Lawler, E. L., Lenstra, J. K., Rinnooy Kan, A. H. G., and Shmoys, D. B. 1985. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization. John Wiley and Sons, New York. Reinelt, G. 1994. The Traveling Salesman: Computational Solutions for TSP Applications. Springer-Verlag, Heidelberg.
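For intuition about what the formulation above computes, a tiny instance can be solved by exhaustive enumeration; this is only viable for very small \(n\), and the weights below are made up for illustration:

```python
from itertools import permutations

def tsp_brute_force(c):
    """Exact TSP by trying every tour that starts and ends at vertex 0.
    c[i][j] is the weight of edge (i, j); runtime is O((n-1)!)."""
    n = len(c)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(c[tour[k]][tour[k + 1]] for k in range(n))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# A 4-city symmetric instance.
c = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(tsp_brute_force(c))   # ((0, 1, 3, 2, 0), 18)
```

Realistic instances need the subtour-elimination machinery above (or a dedicated solver such as Concorde), since the number of tours grows factorially.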
The concept

I think your problem may be coming from a misunderstanding about when you can apply conservation of energy and momentum. In a fixed inertial frame, energy and momentum are conserved separately, so $E_i = E_f$ and $\vec{p}_i = \vec{p}_f$. If you select the rest frame of the initial particle as your working frame for the whole calculation, you can use this fact. When using different reference frames for the before and after pictures, energy and momentum are not conserved separately. Only the magnitude of the total energy-momentum four-vector is: $\tilde{p}_i\cdot\tilde{p}_i = \tilde{p}_f\cdot\tilde{p}_f$, where $\tilde{p}\rightarrow(E/c, \vec{p})$ is a four-vector. The four-vector statement is where your energy equation comes from: $$\tilde{p}\cdot\tilde{p} = -\frac{E^2}{c^2} + p^2.$$ Working in the rest frame of the system in question, $\vec{p}=0$ and $E = m c^2$: $$\tilde{p}\cdot\tilde{p} = -m^2\, c^2.$$ Finally, since $\tilde{p}\cdot\tilde{p}$ is the same in all reference frames: $$E^2 = p^2 c^2 + m^2 c^4.$$

Applying it to your problem

To start your problem you need to pick a reference frame for each state, before/after decay. Let's pick the rest frame of the initial particle, so $E_i = m_1 c^2$ and $\vec{p}_i = 0$. If we use the same reference frame for the final state, we can apply energy and momentum conservation separately. After the decay, particle 2 is moving, so $E_2 = \gamma m_2 c^2$, where $\gamma$ is the Lorentz factor. The massless particle 3 also carries energy; we can get it by applying the energy-momentum equation to it alone: $${E_3}^2 = {p_3}^2 c^2 + 0.$$ So $$E_i = E_f \implies m_1 c^2 = \gamma m_2 c^2 + |p_3| c.$$ We still need particle 3's momentum, but we can get that from momentum conservation: $$\vec{p}_i = \vec{p}_f = 0 \implies \vec{p}_2 = -\vec{p}_3.$$ All of the motion happens in one dimension, and $p_2 = \gamma m_2 v$. From here we can put it all together and solve for the speed of particle 2 (or $\gamma$ if you prefer).
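Carrying the algebra to the end (in units with $c=1$): the two conservation equations give $m_1 = \gamma m_2 (1+v)$, hence $v = (r^2-1)/(r^2+1)$ with $r = m_1/m_2$. A numerical check, with illustrative masses chosen by me:

```python
import math

m1, m2 = 5.0, 3.0        # illustrative masses, units with c = 1

r = m1 / m2
v = (r**2 - 1) / (r**2 + 1)          # speed of the massive daughter
gamma = 1.0 / math.sqrt(1.0 - v**2)

E2 = gamma * m2                      # E_2 = gamma m_2
p2 = gamma * m2 * v                  # p_2 = gamma m_2 v
E3 = p2                              # massless particle: E_3 = |p_3| = |p_2|

print(v, E2 + E3)                    # v = 8/17 here; total energy returns m1
assert math.isclose(E2 + E3, m1)                     # m_1 = gamma m_2 + |p_3|
assert math.isclose(E2, (m1**2 + m2**2) / (2 * m1))  # standard two-body result
```

The second assertion checks the closed form $E_2 = (m_1^2 + m_2^2)c^2/(2 m_1)$, which follows from the same equations and is a handy cross-check.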
Let $a\in \mathbb{R}$ and let a function $f_a\ :\ [0,1]\to \mathcal{H}(\mathbb{R})$ be given by $$f_a(x)=[a,a+x].$$ I have to show that $f_a$ is continuous. Definition: Let $(X,\tau)$ be a topological space and $\mathcal{H}(X)$ be the set of all non-empty compact subsets of $X$. For $V,U_0,U_1,U_2,\dots,U_n\in \tau$, let $$ \mathscr{B}_X^\mathcal{H}(V,U_0,U_1,U_2,\dots,U_n)=\{K\in \mathcal{H}(X) : K\subseteq V \text{ and } K\cap U_i\neq \emptyset,\ i=0,1,\dots,n \}. $$ The topology generated by the above basis is called the Vietoris topology. I am unable to prove that $f_a$ is continuous.
What follows is an edited and expanded version of comments, and a list of examples, that I posted 12 June 2001 (and later, on 26 September 2007, in a more abbreviated form) in the Math Forum discussion group AP-calculus. I believe rationalizing the denominator was originally positioned so early in the curriculum --- algebra 1 and geometry for division by $\sqrt{n},$ and algebra 2 for division by something like $m + \sqrt{n}$ --- partly for reasons having to do with numerical calculation, and partly for reasons having to do with algebraic combination and simplification of exact numerical values. Incidentally, if you look at textbooks written 50 to 150 years ago, you don't really see much of an expectation that radicals were numerically approximated (this view being based on worked examples in the text and answers to exercises), except in trigonometry texts. However, the numerical aspect became much more important in applications outside of mathematics (mainly in science courses), so I suspect what happened is that the training in appropriately rewriting radical expressions, so that square root tables and such could be easily used, was left to the math courses. I personally think there has been too much emphasis on rationalizing the denominator in the past 40 years (perhaps in the past 20 years the emphasis has been more appropriate), especially in classes below the precalculus level, but I also think it's easy to forget just how often the technique of rationalization shows up in math, even if we restrict ourselves to the lower undergraduate level. As for me, when departmental and/or course supervisor constraints allowed me to do so, I DID NOT REQUIRE answers to be in denominator-rationalized form in high school or college algebra classes, or in precalculus classes. However, I felt it was an important skill for anyone getting at least as far as calculus.
Thus, in calculus courses, I tried to make up for this inattention to rationalization (both by me and by other teachers) by working the topic in at a number of places. I did this mainly by working examples in class and by assigning problems (with an appropriate hint) like #1-6 below. MISCELLANEOUS LIST OF EXAMPLES FOR RATIONALIZING 1. These limits can be evaluated without taking derivatives if you first apply a binomial rationalization step:$$\lim_{x \rightarrow 1}\frac{x^2 - 1}{\sqrt{2x+2\,} \; - \; 2} \;\;\; \text{and} \;\;\; \lim_{x \rightarrow 0}\frac{1 - \cos x}{x} $$ 2. To rewrite$$ \ln \left( \frac{x + \sqrt{x^2 - 1}}{x - \sqrt{x^2 - 1}} \right) \;\;\; \text{as} \;\;\; 2\ln\left (x + \sqrt{x^2 - 1}\right),$$it helps if you first rationalize the numerator. Note: Putting $x = \sec \theta$ gives an identity that is sometimes useful. 3. To differentiate $x^{\frac{1}{2}},$ $x^{\frac{1}{3}},$ etc. using the limit definition of the derivative, you'll want to rationalize numerators. 4. The derivative of$$ \frac{\sqrt{a-x} \; + \; \sqrt{a+x}}{\sqrt{a-x} \; - \; \sqrt{a+x}} $$is much easier to put into the more useful form$$ \frac{a^2 \; + \; a\sqrt{a^2 - x^2}}{x^2\sqrt{a^2 - x^2}}$$ if you rationalize the denominator BEFORE differentiating. 5. Let $a \neq 0,$ $b,$ and $c$ be real number constants. To verify that$$ \lim_{n \rightarrow \infty} \left( \sqrt{an^2 + bn} \; - \; \sqrt{an^2 + cn} \right) \;\;\; = \;\;\; \frac{1}{2\sqrt{a}}(b-c),$$it helps to rationalize the numerator first. 6. 
The linearization of$$\frac{1+x}{1-x} \;\; \text{at} \;\; x=0$$is easy if you begin by multiplying both the numerator and denominator by $1+x.$ After doing this, you get to ignore the $x^2$ terms that appear additively with constants or with multiples of $x.$ The result will be $1 + 2x.$ More generally, rationalization ideas can be used to obtain the quotient rule for derivatives by multiplying/dividing by an appropriate conjugate and ignoring all but first order terms, and similar methods can be used to approximate $f(x+h,\,y+k)$ for rational functions $f(x,y)$ when $h$ and $k$ are close to $0.$ In the same way, one can show by rationalization methods that for $\delta$ and $\epsilon$ near $0,$ we have$$\frac{1}{1 + \delta} \; \approx \; 1 - \delta \;\;\; \text{and} \;\;\; \frac{1}{1-\delta} \; \approx \; 1 + \delta \;\;\; \text{and} \;\;\; \frac{1+\epsilon}{1+\delta} \; \approx \; 1 + \epsilon - \delta $$These and other approximations are discussed in Philip L. Alger's 1957 text Mathematics for Science and Engineering (see pp. 145-155 of Chapter 6: Numerical Calculations) and in William Charles Brenke's 1917 text Advanced Algebra (see Chapter IX, Section 146: Useful Approximations, pp. 126-127). These approximations are often more important for giving approximations that are valid over a range of variable values than for giving individual and isolated numerical approximations. This is especially useful when an exact algebraic form is difficult to work with, such as in a differential equation (recall the pendulum equation). 
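These first-order approximations are easy to check numerically. A minimal sketch in Python (the specific values of $\delta$ and $\epsilon$ below are mine, purely for illustration; the errors are second order, as the discussion above suggests):

```python
# First-order ("rationalization-style") approximations for small delta, epsilon:
#   1/(1+d) ~ 1-d,   1/(1-d) ~ 1+d,   (1+e)/(1+d) ~ 1+e-d
# Each error below should be on the order of d^2 or d*e, i.e. second order.

def approx_errors(d, e):
    """Absolute errors of the three first-order approximations."""
    return (
        abs(1 / (1 + d) - (1 - d)),
        abs(1 / (1 - d) - (1 + d)),
        abs((1 + e) / (1 + d) - (1 + e - d)),
    )

errs = approx_errors(0.01, 0.02)

# The linearization (1+x)/(1-x) ~ 1+2x behaves the same way:
lin_err = abs((1 + 0.01) / (1 - 0.01) - (1 + 2 * 0.01))
```

With $\delta = 0.01$ every error comes out around $10^{-4}$, consistent with a second-order remainder.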
For instance,$$\tanh M \;\; = \;\; \frac{e^{M} \; - \; e^{-M}}{e^{M} \; + \; e^{-M}} \;\; = \;\; \frac{1 \; - \; e^{-2M}}{1 \; + \; e^{-2M}} \;\; \approx \;\; 1 - 2e^{-2M}$$is an approximation that is correct to $16$ decimal places when $M = 10.$ This particular approximation for $\tanh M$ is obtained in the same manner I've just shown, and then used to find the lowest eigenvalue in the high barrier limit for a quantum mechanical particle confined to a double potential well, in Charles S. Johnson and Lee G. Pedersen's 1974 Problems and Solutions in Quantum Chemistry and Physics (see Problem 4.8(b) on pp. 105-106). Another example can be found in Jerry B. Marion's 1970 text Classical Dynamics of Particles and Systems (see p. 270). Marion uses the approximation$$\theta \;\; = \;\; \frac{2\pi}{1 - \frac{\delta}{\alpha}} \;\; \approx \;\; 2\pi\left(1 + \frac{\delta}{\alpha}\right) \;\; = \;\; 2\pi + \frac{2\pi\delta}{\alpha}$$near the end of a derivation of the precession of Mercury's orbit as predicted by Einstein's Theory of Relativity. The term $\frac{2\pi\delta}{\alpha}$ represents the approximate precession per orbit, which in Mercury's case accumulates to approximately $43$ arcseconds per century. 7. To express the quotient of two complex numbers in rectangular form, when each of the complex numbers is given in rectangular form, you'll want to use a "rationalization of the denominator" technique. Related to this is finding the real and imaginary parts of a rational function of a complex variable (e.g. verifying the Cauchy-Riemann equations, finding a harmonic conjugate of a rational function, investigating certain orthogonal families of curves, etc.). 8. For numerical purposes (e.g. reducing round-off errors during a computer computation), the quadratic formula$$ x \;\; = \;\; \frac{-b \; \pm \; \sqrt{b^2 - 4ac}}{2a}$$is in some cases more usefully expressed as$$ x \;\; = \;\; \frac{2c}{-b \; \pm \; \sqrt{b^2 - 4ac}}$$ 9. 
To show that ${\mathbb Q}[\sqrt{2}]$ (i.e. real numbers of the form $r + s\sqrt{2}$ where $r$ and $s$ are rational numbers) is a field, a "rationalization of the denominator" technique is useful when verifying the multiplicative inverse part of the definition of a field. 10. Rationalizing techniques are useful to obtain non-radical forms for the general equations of a hyperbola and an ellipse directly from their geometric definitions. Related to this is the general idea of rationalizing an algebraic equation (say, for an algebraic curve or an algebraic surface -- see Cayley's 1868 paper On Polyzomal Curves, otherwise the Curves $\sqrt{U} + \sqrt{V} +$ &c. $=0,$ which begins on p. 470 here, for some eye-opening stuff) and of solving radical equations. 11. It is easy to find a simple expression for the following sum if each denominator is rationalized:$$\frac{1}{\sqrt{1} + \sqrt{2}} \; + \; \frac{1}{\sqrt{2} + \sqrt{3}} \; + \;\frac{1}{\sqrt{3} + \sqrt{4}} \; + \; \cdots \; + \; \frac{1}{\sqrt{n} + \sqrt{n+1}}$$
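For item 11, rationalizing each denominator turns $\frac{1}{\sqrt{k}+\sqrt{k+1}}$ into $\sqrt{k+1}-\sqrt{k}$, so the sum telescopes to $\sqrt{n+1}-1$. A quick numerical confirmation in Python (the choice $n = 99$ is my own, purely for illustration):

```python
import math

def radical_sum(n):
    """Sum of 1/(sqrt(k) + sqrt(k+1)) for k = 1..n, computed term by term."""
    return sum(1 / (math.sqrt(k) + math.sqrt(k + 1)) for k in range(1, n + 1))

# After rationalizing, each term equals sqrt(k+1) - sqrt(k), so the
# whole sum telescopes to sqrt(n+1) - 1.
n = 99
closed_form = math.sqrt(n + 1) - 1   # sqrt(100) - 1 = 9
direct = radical_sum(n)
```

The two values agree to within floating-point rounding.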
If $A\subseteq B$ are affine domains over an algebraically closed field $k$ of characteristic zero, such that $Q(A)$ is algebraically closed in $Q(B)$, how can one show that $Q(A)$ is also algebraically closed in the field of fractions of $Q(A)\otimes_kB$? The history behind this problem: starting from the fact that $Q(A)$ is algebraically closed in $Q(B)$, I intend to conclude that a general fiber of the morphism Spec $B \rightarrow$ Spec $A$ is irreducible. To do so, by the first Bertini theorem, as in Shafarevich's Basic Algebraic Geometry, vol. 1, it suffices to show that $Q(A)\otimes_k B$ is geometrically irreducible over $Q(A)$, which, in turn, by Zariski-Samuel's Commutative Algebra, vol. 2 (see page 230, thm. 39), is equivalent to showing that $Q(A)$ is algebraically closed in the field of fractions of $Q(A)\otimes_kB$. Now, for the proof, it's easy to see that $Q(A)$ is algebraically closed in $C:=Q(A)\otimes_kB$. But I cannot settle the case where some $x/y\in Q(C)$ might be algebraic over $Q(A)$, where both $x$ and $y$ go to 0 under the natural map $Q(A)\otimes_k B \rightarrow Q(B)$. By the way, if $B=k[x_1,...,x_n]/\mathfrak{p}$, with $\mathfrak{p}\in$ Spec $k[x_1,...,x_n]$, then the field of fractions of $Q(A)\otimes_k B$ is equal to the residue field of $Q(A)[x_1,...,x_n]$ at the point $\mathfrak{p}Q(A)[x_1,...,x_n]$.
Answer

The rectangular form of $z=6\left( \cos \frac{2\pi }{3}+i\sin \frac{2\pi }{3} \right)$ is $z=-3+3i\sqrt{3}$.

Work Step by Step

The rectangular form of a complex number is $z=a+ib$, where $a$ and $b$ are rectangular coordinates. In polar form the complex number $z=a+ib$ is represented as $z=r\left( \cos \theta +i\sin \theta \right)$, where $r$ is the distance of the complex number from the origin and $\theta$ is the respective angle. Here, the given complex number is in polar form with $r=6$ and $\theta =\frac{2\pi }{3}$. Since $\cos \frac{2\pi }{3}=-\frac{1}{2}$ and $\sin \frac{2\pi }{3}=\frac{\sqrt{3}}{2}$, therefore $\begin{align} & z=6\left( \cos \frac{2\pi }{3}+i\sin \frac{2\pi }{3} \right) \\ & =6\left( -\frac{1}{2}+i\frac{\sqrt{3}}{2} \right) \\ & =-\frac{6}{2}+i\frac{6\sqrt{3}}{2} \\ & =-3+3i\sqrt{3} \end{align}$
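The same polar-to-rectangular conversion can be checked with Python's standard cmath module (a sketch of my own, not part of the original solution):

```python
import cmath
import math

# z = 6 * (cos(2*pi/3) + i*sin(2*pi/3)), converted to rectangular form.
z = cmath.rect(6, 2 * math.pi / 3)

# Expected exact value from the worked solution: -3 + 3*sqrt(3)*i
expected = complex(-3, 3 * math.sqrt(3))
```

The two agree up to floating-point rounding in the trigonometric evaluations.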
There is no way to represent all real numbers without errors if each number is to have a finite representation. There are uncountably many real numbers but only countably many finite strings of 1's and 0's that you could use to represent them with.

It all depends what you want to do. For example, what you show is a great way of representing rational numbers. But it still can't represent something like $\pi$ or $e$ perfectly. In fact, many languages such as Haskell and Scheme have built-in support for rational numbers, storing them in the form $\frac{a}{b}$ where $a,b$ are integers. The main reason ...

In typical floating point implementations, the result of a single operation is produced as if the operation was performed with infinite precision, and then rounded to the nearest floating-point number. Compare $a+b$ and $b+a$: the result of each operation performed with infinite precision is the same, therefore these identical infinite precision results ...

It's because 0.1 = 1/10 = 1/(2 × 5) cannot be represented exactly in binary floating point. This happens in C too: printf("%.20f", fmod(4.0, 0.1)); prints 0.09999999999999978351. Specifically, the only numbers that binary floating point can represent exactly are dyadic fractions of the form $a/2^b$, where $a$ and $b$ are integers. Even more ...

The usual trick to avoid this underflow is to compute with logarithms, using the identity $$ \log \prod_{i=1}^n p_i = \sum_{i=1}^n \log p_i. $$ That is, instead of using probabilities, you use their logarithms. Instead of multiplying them, you add them. Another approach, which is not so common, is to normalize the product manually. Instead of keeping just ...

Ilmari Karonen gets it right in the other answer. But it gets even worse than that: arithmetic operations involving floating-point numbers don't necessarily behave the same as the operators we're used to from mathematics. For instance, we're used to addition being associative, so that $a + (b + c) = (a + b) + c$. 
This doesn't generally hold using floating-point ...

Assuming round-to-nearest and that $N > 0$, then $N \cdot R < N$ always. (Be careful not to convert an integer that's too large.) Let $c\, 2^{-q} = N$, where $c \in [1, 2)$ is the significand and $q$ is the integer exponent. Let $1 - 2^{-s} = R$ and derive the bound$$N R = c\, 2^{-q}(1 - 2^{-s}) \le c\, 2^{-q} - 2^{-q - s},$$with equality if and only if $c =...

There actually is some research on improving the numerical stability of floating point expressions, the Herbie project. Herbie is a tool to automatically improve the accuracy of floating point expressions. It's not quite comprehensive, but it will find a lot of accuracy-improving transformations automatically. Cheers, Alex Sanchez-Stern

Two's complement makes sense when the two entities in question have the same "units" and the same "width". By width I mean that, say, if you're adding an N bit number and an M bit number, where N and M are different, then you'd better not use two's complement. For floating point numbers, we have the problem of units: if the exponents are different, then we are ...

Your idea does not work because a number represented in base $b$ with mantissa $m$ and exponent $e$ is the rational number $m \cdot b^{e}$; thus your representation works precisely for rational numbers and no others. You cannot represent $\sqrt{2}$, for instance. There is a whole branch of computable mathematics which deals with exact real arithmetic. Many ...

There are many effective rational number implementations, but one that has been proposed many times and can even handle some irrationals quite well is continued fractions. Quote from Continued Fractions by Darren C. Collins: Theorem 5-1. The continued fraction expression of a real number is finite if and only if the real number is rational. Quote ...

You cannot meaningfully test floating point values for equality. 
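The point about equality testing can be illustrated with Python's math.isclose, which compares to within a tolerance you state explicitly (a minimal sketch; the tolerance value is illustrative, not prescriptive):

```python
import math

# Exact comparison fails because 0.1, 0.2 and 0.3 are not exactly
# representable in binary floating point:
exact_equal = (0.1 + 0.2 == 0.3)          # False: LHS is 0.30000000000000004

# Approximate comparison succeeds once we say what "close enough" means:
approx_equal = math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)   # True
```

Choosing the tolerance is the caller's job, which is exactly the point the answer above is making.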
A floating point value does not represent a real number, it represents a range of real numbers, but it fails to store the width of this interval. All you can do with floating point values is to test them for approximate equality, and it's up to you to define the approximation you're willing to ...

This is best explained with the base 10 equivalent: scientific notation. In scientific notation you have a mantissa and an exponent such that the value is $\mathrm{mantissa} \cdot 10^{\mathrm{exponent}}$. In a computer floating point number the mantissa always has the same number of significant digits (a double precision has around 16 digits) and the exponent is bounded. ...

As long as you execute the same machine code on the different machines and as long as the settings for the floating point unit are identical, you will get identical results. However, you cannot execute the same machine code on both Intel and ARM, so this answer is only hypothetical. Even on different Intel processors you have to take special care that exactly ...

There's nothing fundamentally hard about computing $\sin(10^{99})$. You simply compute $x = 10^{99} \bmod 2\pi$, then compute $\sin(x)$. (Why is this valid? It's because $\sin(x)=\sin(y)$ if $x\equiv y \pmod{2\pi}$.) It's not too hard to compute $x$ if you use a numerical representation that has enough digits of precision, and then to compute $\sin(x)$ ...

"Assuming multiplication between two numbers uses one FLOP, the number of operations for $x^n$ will be $n-1$. However, is there a faster way to do this ..." There most certainly is a faster way to do this for non-negative integer powers. For example, $x^{14}=x^{8}x^{4}x^{2}$. It takes one multiplication to compute $x^2$, one more to compute $x^4$, one more to ... 
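The square-and-multiply idea in that last answer can be sketched as follows (assuming a non-negative integer exponent; the function name is mine):

```python
def fast_pow(x, n):
    """Compute x**n with O(log n) multiplications by binary exponentiation."""
    result = 1
    base = x
    while n > 0:
        if n & 1:            # current low bit of n is set: fold this power in
            result *= base
        base *= base         # square: base runs through x^1, x^2, x^4, x^8, ...
        n >>= 1
    return result

# For n = 14 (binary 1110) this multiplies in x^2, x^4 and x^8, exactly the
# x^14 = x^8 * x^4 * x^2 decomposition from the answer, using far fewer than
# the 13 multiplications of the naive method.
```

The same loop works for any associative multiplication (matrices, modular arithmetic, etc.).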
Firstly, you need to decide if the floating point numbers you are working with are "normalized" or not. Think about how binary numbers are represented in scientific notation. Consider the number 100101.010101. The expression given above is not in scientific notation, but what if we change that? To re-write 100101.010101 in scientific notation, we move ...

IEEE floating point format has a sign bit, an 11-bit exponent (ranging from -1022 to 1023) and a 52-bit mantissa with an implicit "1" in the 53rd bit. Thus, the largest integer that can be represented without rounding is the binary number with 53 "1"s, $2^{53}-1$ = 9,007,199,254,740,991 ≈ 9e15 < 1e16. After that you start having to round off low-order ...

There are a number of "exact real" suggestions in the comments (e.g. continued fractions, linear fractional transformations, etc). The typical catch is that while you can compute answers to a formula, equality is often undecidable. However, if you're just interested in algebraic numbers, then you're in luck: the theory of real closed fields is complete, o-...

Yes, the difference is constant. It is not really constant, but approximately, yes. With exceptions. With binary floating point numbers, the expression $(f(r)-r)/r$ is constant within a factor of $2$. It is between $\frac{1}{2^m}$ and $\frac{1}{2^{m-1}}$, where $m$ is the number of bits in the mantissa. For rounding error calculations, you can assume it is ...

The == operator invokes a floating point compare instruction; this considers two numbers equal if neither is NaN and they are exactly the same, or if they are positive and negative zero. Notably Infinity == Infinity, despite the mathematical dubiety of considering infinities equal, and Infinity - Infinity returning NaN. So if the numbers are not exactly equal, ...

Generally speaking, normalized means "put in scientific notation." That just means the mantissa should never start with 0, and should be less than the base. In binary that means the mantissa must be "1". 
Since the mantissa of a normalized binary floating point number is always 1, we don't need to store the 1. The first mantissa bit is hidden in the ...

Java uses IEEE 754 binary floating point representation, which dedicates 23 binary digits to the mantissa, normalized to begin with the first significant digit (omitted, to save space). $0.00004_{10} = 0.00000000000000101001111100010110101100010001110001101101000111..._{2} = [1.]\color{red}{01001111100010110101100}010001110001101101000111..._{2} \...

The binary floating point format supported by computers is essentially similar to the decimal scientific notation used by humans. A floating-point number consists of a sign, a mantissa (fixed width), and an exponent (fixed width), like this: +/- 1.0101010101 × 2^12345 (sign, mantissa, exponent). Regular scientific notation has a similar format: +/- 1.23456 × 10^...

It may be that sqr(sqrt(7)) is displayed as 7, but it isn't actually exactly equal to 7. That's something you need to check. What you see is not always what you get. It may be that sqr(sqrt(7)) is exactly 7, by "coincidence" (not really coincidence, more like "unpredictable with my limited knowledge"). Take any floating point number x, 1 ≤ x < 4. ...

Using n-1 multiplications would be rather daft. For example, if n = 1024, you just square x ten times. Worst case is $2\log_2 n$ multiplications. You can look up Donald Knuth, The Art of Computer Programming, for some details on how to do it faster. There are some situations, like n = 1023, where you would square x ten times giving x^1024, then divide by x.

"But why is a sign bit necessary for floating point numbers?" False assumption: it isn't necessary. I'm pretty sure I've met floating point formats which used 2's complement for the mantissa, but I'd have to dig out the names. I'm far from being a specialist in numerical analysis, but I get that having a signed zero is important for them. It's probably easier ... 
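One consequence of the 53-bit significand mentioned in an earlier answer can be checked directly in Python, whose float is an IEEE 754 double (a quick sketch of my own):

```python
# The largest integer n such that every integer in [0, n] is exactly
# representable in a 64-bit double is 2**53 - 1 = 9,007,199,254,740,991.
exact_limit = 2 ** 53 - 1
still_exact = (float(exact_limit) == exact_limit)   # True: no rounding yet

# At 2**53 the spacing between adjacent doubles grows to 2, so adding 1.0
# rounds (ties-to-even) right back to the same value:
big = float(2 ** 53)
one_lost = (big + 1.0 == big)                       # True: the +1 is rounded away
two_kept = (big + 2.0 == big)                       # False: 2.0 is a full step
```

This is the same "after that you start having to round off low-order bits" behavior described above, made concrete.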
From Wikipedia: "The two's-complement system has the advantage that the fundamental arithmetic operations of addition, subtraction, and multiplication are identical to those for unsigned binary numbers..." Two's-complement is a representation of negative numbers that just so happens to be very convenient. That's the whole reason to use it at all. A ...
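The "identical arithmetic" claim from that quote can be demonstrated by doing 8-bit arithmetic with a plain unsigned add and reinterpreting the result afterwards (a sketch; the helper names are mine):

```python
def to_signed8(u):
    """Reinterpret an 8-bit unsigned value as two's-complement signed."""
    return u - 256 if u >= 128 else u

def add8(a, b):
    """8-bit add exactly as an adder circuit does it: wrap modulo 256."""
    return (a + b) & 0xFF

# -1 is stored as 0xFF. The very same unsigned adder gives the correct
# signed results; only the *interpretation* of the bits changes:
assert to_signed8(add8(0xFF, 0x01)) == 0    # -1 + 1 == 0
assert to_signed8(add8(0xFE, 0x03)) == 1    # -2 + 3 == 1
```

No separate "signed add" instruction is needed, which is precisely the advantage Wikipedia describes.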
From K. Huang's Statistical Mechanics, par. 2.2: Suppose a droplet of liquid is placed in an external medium that exerts a pressure $P$ on the droplet. Then the work done by the droplet on expansion is empirically given by $$dW=P dV - \sigma da$$ where $da$ is the increase in the surface area of the droplet and $\sigma$ the coefficient of surface tension. The first law now takes the form $$dU = dQ - P dV + \sigma da$$ Integrating this, we obtain for the internal energy of a droplet of radius $r$ the expression $$U = \frac 4 3 \pi r^3 u_\infty + 4 \pi \sigma r^2$$ where $u_\infty$ is the internal energy per unit volume of an infinite droplet. I don't understand this last passage. I do understand the second term, since $$\sigma \int da = 4 \pi \sigma r^2$$ but frankly I don't understand where the first term is coming from. Why should $$\int dQ - \int P dV = \frac 4 3 \pi r^3 u_\infty$$ hold?
Distance of a Point from a Line - The general equation of a line is given by Ax + By + C = 0. Consider a point P in the Cartesian plane having the co-ordinates (x1, y1). The distance from the point to the line, in the Cartesian system, is given by the length of the perpendicular between the point and the line. In the figure given below, the distance between the point P and the line L can be calculated by figuring out the length of the perpendicular. Draw PQ from P to the line L. The line L makes intercepts on the x-axis and the y-axis at the points N and M respectively, so the three relevant points are \(P(x_1, y_1)\), \(N\left(-\frac{C}{A}, 0\right)\) and \(M\left(0, -\frac{C}{B}\right)\).

The area of Δ MPN can be given as: Area of Δ MPN \(= \frac{1}{2}~×~Base~×~Height = \frac{1}{2}~×~MN~×~PQ\)

\(\Rightarrow PQ = \frac{2~×~Area~of~\Delta MPN}{MN}\) ...(i)

In terms of co-ordinate geometry, writing \(P(x_1, y_1)\), \(N(x_2, y_2) = \left(-\frac{C}{A}, 0\right)\) and \(M(x_3, y_3) = \left(0, -\frac{C}{B}\right)\), the area of the triangle is given as:

Area of Δ MPN \(= \frac{1}{2} \left| x_{1} (y_{2}-y_{3}) + x_{2} (y_{3}-y_{1}) + x_{3} (y_{1}-y_{2})\right| = \frac{1}{2} \left| x_{1} \left(0 + \frac{C}{B}\right) + \left(-\frac{C}{A}\right) \left( -\frac{C}{B} - y_{1}\right) + 0\,( y_{1}-0 )\right|\)

Solving this expression we get: \(2~×~Area~of~\Delta MPN = \left|\frac{C}{AB}\right| \left| Ax_1 + By_1 + C \right|\) ...(ii)

Using the distance formula, we can find out the length of the side MN of ΔMPN: \(MN = \sqrt{\left ( 0 + \frac{C}{A} \right )^{2} + \left ( \frac{C}{B} - 0 \right )^{2}} = \left|\frac{C}{AB}\right| \sqrt{A^{2} + B^{2}}\) ...(iii)

Substituting (ii) and (iii) into (i), the length of the perpendicular comes out to be: \(PQ = \frac{\left| Ax_1 + By_1 + C \right|}{\sqrt{A^2 + B^2}}\)

This length is generally represented by \(d\), the distance of the point from the line. Distance Between Two Parallel Lines - The distance between two parallel lines is equal to the perpendicular distance between the two lines. 
We know that the slopes of two parallel lines are the same; therefore the equations of two parallel lines can be given as: \(y = mx + c_1\) and \(y = mx + c_2\).

Take a point \(A(x_1, y_1)\) on one of the lines, say \(y = mx + c_2\), so that \(y_1 = mx_1 + c_2\). The perpendicular distance from \(A\) to the other line, \(mx - y + c_1 = 0\), is the required distance between the two lines:

\(d = \frac{\left| mx_1 - y_1 + c_1 \right|}{\sqrt{1 + m^2}}\)

\(\Rightarrow d = \frac{\left| mx_1 - (mx_1 + c_2) + c_1 \right|}{\sqrt{1 + m^2}}\)

\(\Rightarrow d = \frac{\left| c_1 - c_2 \right|}{\sqrt{1 + m^2}}\)

Thus we can conclude that the distance between two parallel lines is given by: \(d = \frac{\left| c_1 - c_2 \right|}{\sqrt{1 + m^2}}\)

If we consider the general form of the equation of a straight line, and the lines are given by:

\( L_1: Ax + By + C_1 = 0\)

\( L_2: Ax + By + C_2 = 0\)

Then the distance between them is given by: \(d = \frac{\left| C_1 - C_2 \right|}{\sqrt{A^2 + B^2}}\)

Thus we can now easily calculate the distance between two parallel lines and the distance between a point and a line.
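Both formulas translate directly into code. A minimal sketch in Python (the function names and the example line are mine, for illustration):

```python
import math

def point_line_distance(a, b, c, x1, y1):
    """Distance from the point (x1, y1) to the line a*x + b*y + c = 0."""
    return abs(a * x1 + b * y1 + c) / math.hypot(a, b)

def parallel_lines_distance(a, b, c1, c2):
    """Distance between a*x + b*y + c1 = 0 and a*x + b*y + c2 = 0."""
    return abs(c1 - c2) / math.hypot(a, b)

# Example: the line 3x + 4y + 5 = 0 is at distance |5| / 5 = 1 from the
# origin, and 3x + 4y + 10 = 0 is parallel to it at distance |5 - 10| / 5 = 1.
d_point = point_line_distance(3, 4, 5, 0, 0)
d_lines = parallel_lines_distance(3, 4, 5, 10)
```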
My first question is: what is the proper definition of the logarithmic function $f(z)=\ln{z}$, where $z\in \mathbb{C}$? Quoting Wikipedia: a complex logarithm function is an "inverse" of the complex exponential function, just as the natural logarithm $\ln{x}$ is the inverse of the real exponential function $e^x$. In Calculus Vol. 1 by Tom M. Apostol, the function $\ln{x}$ is defined as $\ln{x}=\int_{1}^{x}{\frac1t\;dt}$ $\color{blue}{\star}$ and the function $e^x$ is defined to be its inverse (rather than the opposite). Some reasons why it is so, as per the book and what I have understood: we can define $e^2 = e\times e$, but how can we give such a definition to $e^{\sqrt{2}}$ or $\large e^{\sqrt{2+\sqrt[3]{3+\sqrt[5]{5}}}}$, or more generally to the function $a^x$ when the domain is $\mathbb{R}$? Since the function $a^x$ is not properly defined on a real domain, how can we think about the definition of its inverse (the way Wikipedia and some other books define the natural logarithm)? So if we define $\ln{x}$ as in $\color{blue}{\star}$, it solves all the problems (a proper definition of $\ln{x}$ on the real domain, a definition of the exponential function on the real domain, and getting rid of otherwise-circular proofs of some basic theorems in limits involving the logarithm and exponential functions). Thinking along the same lines: just as $e^z$ has problems with its definition, how can we define its inverse? Doing a bit of research on the internet, I found some bits of information. The first mention of the natural logarithm was by Nicholas Mercator in his work Logarithmotechnia published in 1668, although the mathematics teacher John Speidell had already in $\color{red}{1619}$ compiled a table on the natural logarithm.[3] It was formerly also called hyperbolic logarithm,[4] as it corresponds to the area under a hyperbola. It is also sometimes referred to as the Napierian logarithm, although the original meaning of this term is slightly different. 
The exponential function arises whenever a quantity grows or decays at a rate proportional to its current value. One such situation is continuously compounded interest, and in fact it was this that led Jacob Bernoulli in $\color{red}{1683}$[4] to the number now known as e. Later, in 1697, Johann Bernoulli studied the calculus of the exponential function. The dates of these discoveries (as per the sources) suggest such a definition ($\color{blue}{\star}$). So, summing up, I have two questions. 1. What is a proper definition of $\ln{z}$ when $z\in \mathbb{C}$? 2. Is the definition of the logarithm on the real domain that I have mentioned ($\color{blue}{\star}$) the best/correct one? (I ask because I have only seen it defined this way in a few places.)
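For what it's worth, numerical libraries settle question 1 by picking the principal branch, $\operatorname{Log} z = \ln|z| + i\,\operatorname{Arg} z$ with $\operatorname{Arg} z \in (-\pi, \pi]$, which inverts $e^z$ only on the strip $-\pi < \operatorname{Im} z \le \pi$. A quick check with Python's cmath (illustrative only, not an answer to the deeper definitional question):

```python
import cmath
import math

# Principal branch values: log(-1) = i*pi and log(i) = i*pi/2.
log_minus_one = cmath.log(-1)
log_i = cmath.log(1j)

# log does NOT undo exp outside the principal strip: exp(3*pi*i) = -1,
# and taking log lands back at i*pi, not at 3*pi*i.
z = complex(0, 3 * math.pi)
back = cmath.log(cmath.exp(z))
```

This multivaluedness is exactly why "an inverse of $e^z$" needs the scare quotes in the Wikipedia phrasing.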
Implementation of CERN secondary beam lines T9 and T10 in BDSIM / D'Alessandro, Gian Luigi (CERN; JAI, UK); Bernhard, Johannes (CERN); Boogert, Stewart (JAI, UK); Gerbershagen, Alexander (CERN); Gibson, Stephen (JAI, UK); Nevay, Laurence (JAI, UK); Rosenthal, Marcel (CERN); Shields, William (JAI, UK). CERN has a unique set of secondary beam lines, which deliver particle beams extracted from the PS and SPS accelerators after their interaction with a target, reaching energies up to 400 GeV. These beam lines provide a crucial contribution for test beam facilities and host several fixed target experiments. [...] 2019, 3 p. Published in: 10.18429/JACoW-IPAC2019-THPGW069. In: 10th International Particle Accelerator Conference (IPAC 2019), Melbourne, Australia, 19-24 May 2019, pp. THPGW069.

HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN); Bouvard, Aymeric (CERN); Charitonidis, Nikolaos (CERN); Kadi, Yacine (CERN); for the HiRadMat experiments and facility support teams. The ever-expanding requirements of high-power targets and accelerator equipment have highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN, has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019, 4 p. Published in: 10.18429/JACoW-IPAC2019-THPRB085. In: IPAC 2019, Melbourne, Australia, pp. THPRB085.

Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN); Booth, Alexander (U. Sussex; Fermilab); Charitonidis, Nikolaos (CERN); Chatzidaki, Panagiota (Natl. Tech. U., Athens; Kirchhoff Inst. Phys.; CERN); Karyotakis, Yannis (Annecy, LAPP); Nowak, Elzbieta (CERN; AGH-UST, Cracow); Ortega Ruiz, Inaki (CERN); Sala, Paola (INFN, Milan; CERN). For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated "H2-VLE" and "H4-VLE", were constructed and successfully commissioned. [...] 2019, 4 p. Published in: 10.18429/JACoW-IPAC2019-THPGW064. In: IPAC 2019, Melbourne, Australia, pp. THPGW064.

The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN; Illinois U., Urbana); Bernhard, Johannes (CERN); Brugger, Markus (CERN); Charitonidis, Nikolaos (CERN); Cholak, Serhii (Taras Shevchenko U.); D'Alessandro, Gian Luigi (Royal Holloway, U. of London); Gatignon, Laurent (CERN); Gerbershagen, Alexander (CERN); Montbarbon, Eva (CERN); Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN's accelerator complex up to 2040 and its scientific infrastructure through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019, 4 p. Published in: 10.18429/JACoW-IPAC2019-THPGW063. In: IPAC 2019, Melbourne, Australia, pp. THPGW063.

The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN); Banerjee, Dipanwita (CERN); Bernhard, Johannes (CERN); Brugger, Markus (CERN); Charitonidis, Nikolaos (CERN); D'Alessandro, Gian Luigi (CERN); Doble, Niels (CERN); Gatignon, Laurent (CERN); Gerbershagen, Alexander (CERN); Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward. The goal of the experiment is to measure ${\rm BR}(K_L \rightarrow \pi^0\nu\bar{\nu})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm BR}(K^+ \rightarrow \pi^+\nu\bar{\nu})$ of NA62. [...] 2019, 4 p. Published in: 10.18429/JACoW-IPAC2019-THPGW061. In: IPAC 2019, Melbourne, Australia, pp. THPGW061.

Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe; CERN); Bastian, Yan (CERN); Bernhard, Axel (KIT, Karlsruhe); Bonura, Marco (U. Geneva); Bordini, Bernardo (CERN); Bortot, Lorenzo (CERN); Favre, Mathieu (CERN); Lindstrom, Bjorn (CERN); Mentink, Matthijs (CERN); Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN's LHC can be impacted by the circulating beam in specific failure cases. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019, 4 p. Published in: 10.18429/JACoW-IPAC2019-THPTS066. In: IPAC 2019, Melbourne, Australia, pp. THPTS066.

Shashlik calorimeters with embedded SiPMs for longitudinal segmentation / Berra, A (INFN, Milan Bicocca; Insubria U., Varese); Brizzolari, C (INFN, Milan Bicocca; Insubria U., Varese); Cecchini, S (INFN, Bologna); Chignoli, F (INFN, Milan Bicocca; Milan Bicocca U.); Cindolo, F (INFN, Bologna); Collazuol, G (INFN, Padua); Delogu, C (INFN, Milan Bicocca; Milan Bicocca U.); Gola, A (Fond. Bruno Kessler, Trento; TIFPA-INFN, Trento); Jollet, C (Strasbourg, IPHC); Longhin, A (INFN, Padua) et al. Effective longitudinal segmentation of shashlik calorimeters can be achieved taking advantage of the compactness and reliability of silicon photomultipliers. These photosensors can be embedded in the bulk of the calorimeter and are employed to design very compact shashlik modules that sample electromagnetic and hadronic showers every few radiation lengths. [...] 2017, 6 p. Published in: IEEE Trans. Nucl. Sci. 64 (2017) 1056-1061.

Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE); Tiberio, A (INFN, Florence; U. Florence); Adriani, O (INFN, Florence; U. Florence); Berti, E (INFN, Florence; U. Florence); Bonechi, L (INFN, Florence); Bongi, M (INFN, Florence; U. Florence); Caccia, Z (INFN, Catania); D'Alessandro, R (INFN, Florence; U. Florence); Del Prete, M (INFN, Florence; U. Florence); Detti, S (INFN, Florence) et al. The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017, 22 p. Published in: JINST 12 (2017) P03023.

Baby MIND: A magnetised spectrometer for the WAGASCI experiment / Hallsjö, Sven-Patrik (Glasgow U.), for the Baby MIND collaboration. The WAGASCI experiment being built at the J-PARC neutrino beam line will measure the ratio of cross sections from neutrinos interacting with water and scintillator targets, in order to constrain neutrino cross sections, essential for the T2K neutrino oscillation measurements. A prototype Magnetised Iron Neutrino Detector (MIND), called Baby MIND, has been constructed at CERN and will act as a magnetic spectrometer behind the main WAGASCI target. [...] SISSA, 2018, 7 p. Published in: PoS NuFact2017 (2018) 078. In: 19th International Workshop on Neutrinos from Accelerators, Uppsala, Sweden, 25-30 Sep 2017, pp. 078.
I will use the notation of this question. So, if $X$ is a (nice) topological space and $G$ is an abelian group, we can form its $G$-linearization $G[X]$. In McCord's article, this was denoted $B(G,X)$. In that question it is mentioned that $\pi_*(G[X])=\tilde{H}_*(X;G)$. But more is true: the functor from Spaces to Graded Abelian Groups that maps a space $X$ to $\pi_*(G[X])$ is a homology theory, isomorphic to singular homology with coefficients in $G$. This says that the isomorphism commutes with the boundary maps in long exact sequences. And we have a good grip on what the boundary for $\pi_*(G[-])$ is, actually: if $A\to X$ is a cofibration, then McCord proves that $$G[A]\to G[X]\to G[X/A]$$ is a fibration (actually, a principal bundle). The boundary map in homology for $A\to X$ corresponds to the boundary map in homotopy for this fibration. With this set-up, we have that $H_*(-;G)$ is naturally isomorphic to the composition $\pi_* \circ G[-]$. If I understand correctly what I've heard, it is actually true that for any spectrum $E$ we have that $E_*$ (the associated homology functor from Spaces to Graded Abelian Groups) decomposes as a composition $\pi_* \circ E[-]$, where $E[-]$ is some functor from spaces to spaces which maps cofibrations to fibrations. Question 1: what further hypotheses do we need on these functors to make the correspondence with the category of homology theories be a 1-1 correspondence? We certainly need $E[-]$ to map a point to a point (or to something contractible). Is this enough? So, for $E=HG$, the Eilenberg-Mac Lane spectrum of $G$, we have that $E[-]=G[-]$: for $H\mathbb Z$, for example, $E[-]$ is the infinite symmetric product functor. For $E$ the sphere spectrum, we have $E[-]=Q$. Question 2: what can be said about other spectra? Is there a clean description of how to get $E[-]$ from $E$ in general? Or maybe in other well-known cases, $KO, KU, MO, MU, K(n)$, etc...
Subsidiary question for the comments: any further references to this point of view are welcome.
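To spell out the compatibility with boundary maps (a sketch in the question's own notation, using that $\tilde H_n(X/A;G)\cong H_n(X,A;G)$ for a cofibration $A\to X$): the homotopy long exact sequence of McCord's fibration maps to the homology long exact sequence of the pair,

```latex
\begin{array}{ccccccc}
\cdots \longrightarrow & \pi_n(G[X]) & \longrightarrow & \pi_n(G[X/A]) &
\xrightarrow{\ \partial\ } & \pi_{n-1}(G[A]) & \longrightarrow \cdots \\
 & \big\downarrow{\scriptstyle\cong} & & \big\downarrow{\scriptstyle\cong} & & \big\downarrow{\scriptstyle\cong} & \\
\cdots \longrightarrow & \tilde H_n(X;G) & \longrightarrow & \tilde H_n(X/A;G) &
\xrightarrow{\ \partial\ } & \tilde H_{n-1}(A;G) & \longrightarrow \cdots
\end{array}
```

with all squares commuting; this is the precise sense in which the isomorphism respects boundary maps.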
In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the individual events. Boole's inequality is named after George Boole. Formally, for a countable set of events $A_1, A_2, A_3, \dots$, we have $$\mathbb{P}\left(\bigcup_i A_i\right) \le \sum_i \mathbb{P}(A_i).$$ In measure-theoretic terms, Boole's inequality follows from the fact that a measure (and certainly any probability measure) is σ-sub-additive.

Proof using induction

Boole's inequality may be proved for finite collections of $n$ events using the method of induction. For the case $n=1$, it follows that $$\mathbb{P}(A_1) \le \mathbb{P}(A_1).$$ For the inductive step, assume that for some $n$, $$\mathbb{P}\left(\bigcup_{i=1}^n A_i\right) \le \sum_{i=1}^n \mathbb{P}(A_i).$$ Since $\mathbb{P}(A\cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A\cap B)$, and because the union operation is associative, we have $$\mathbb{P}\left(\bigcup_{i=1}^{n+1} A_i\right) = \mathbb{P}\left(\bigcup_{i=1}^n A_i\right) + \mathbb{P}(A_{n+1}) - \mathbb{P}\left(\bigcup_{i=1}^n A_i \cap A_{n+1}\right).$$ Since $$\mathbb{P}\left(\bigcup_{i=1}^n A_i \cap A_{n+1}\right) \ge 0$$ by the first axiom of probability, we have $$\mathbb{P}\left(\bigcup_{i=1}^{n+1} A_i\right) \le \mathbb{P}\left(\bigcup_{i=1}^n A_i\right) + \mathbb{P}(A_{n+1}),$$ and therefore $$\mathbb{P}\left(\bigcup_{i=1}^{n+1} A_i\right) \le \sum_{i=1}^n \mathbb{P}(A_i) + \mathbb{P}(A_{n+1}) = \sum_{i=1}^{n+1} \mathbb{P}(A_i).$$

Proof without using induction

For any events $A_1, A_2, A_3, \dots$ in our probability space we have $$\mathbb{P}\left(\bigcup_i A_i\right) \le \sum_i \mathbb{P}(A_i).$$ One of the axioms of a probability space is that if $B_1, B_2, B_3, \dots$ are disjoint subsets of the probability space then $$\mathbb{P}\left(\bigcup_i B_i\right) = \sum_i \mathbb{P}(B_i);$$ this is called countable additivity. Furthermore, if $B \subset A$ then $\mathbb{P}(B) \le \mathbb{P}(A)$. Indeed, from the axioms of a probability distribution, $$\mathbb{P}(A) = \mathbb{P}(B) + \mathbb{P}(A-B),$$ and both terms on the right are nonnegative. Now we have to modify the sets $A_i$ so they become disjoint: $$B_i = A_i - \bigcup_{j=1}^{i-1} A_j.$$ Then $B_i \subset A_i$, and $$\bigcup_{i=1}^\infty B_i = \bigcup_{i=1}^\infty A_i.$$ Therefore, we can deduce $$\mathbb{P}\left(\bigcup_i A_i\right) = \mathbb{P}\left(\bigcup_i B_i\right) = \sum_i \mathbb{P}(B_i) \le \sum_i \mathbb{P}(A_i).$$

Bonferroni inequalities

Boole's inequality may be generalized to find upper and lower bounds on the probability of finite unions of events. These bounds are known as Bonferroni inequalities, after Carlo Emilio Bonferroni; see Bonferroni (1936).[1] Define $$S_1 := \sum_{i=1}^n \mathbb{P}(A_i)$$ and $$S_2 := \sum_{1\le i<j\le n} \mathbb{P}(A_i \cap A_j),$$ as well as $$S_k := \sum_{1\le i_1<\cdots<i_k\le n} \mathbb{P}(A_{i_1} \cap \cdots \cap A_{i_k})$$ for all integers $k$ in $\{3,\dots,n\}$. Then, for odd $k$ in $\{1,\dots,n\}$, $$\mathbb{P}\left(\bigcup_{i=1}^n A_i\right) \le \sum_{j=1}^k (-1)^{j-1} S_j,$$ and for even $k$ in $\{2,\dots,n\}$, $$\mathbb{P}\left(\bigcup_{i=1}^n A_i\right) \ge \sum_{j=1}^k (-1)^{j-1} S_j.$$ Boole's inequality is recovered by setting $k=1$. When $k=n$, equality holds and the resulting identity is the inclusion–exclusion principle.

References

Bonferroni, Carlo E. (1936), "Teoria statistica delle classi e calcolo delle probabilità", Pubbl. d. R. Ist. Super. di Sci. Econom.
e Commerciali di Firenze (in Italian), 8: 1–62, Zbl 0016.41103
Dohmen, Klaus (2003), Improved Bonferroni Inequalities via Abstract Tubes. Inequalities and Identities of Inclusion–Exclusion Type, Lecture Notes in Mathematics, 1826, Berlin: Springer-Verlag, pp. viii+113, ISBN 3-540-20025-8, MR 2019293, Zbl 1026.05009
Galambos, János; Simonelli, Italo (1996), Bonferroni-Type Inequalities with Applications, Probability and Its Applications, New York: Springer-Verlag, pp. x+269, ISBN 0-387-94776-0, MR 1402242, Zbl 0869.60014
Galambos, János (1977), "Bonferroni inequalities", Annals of Probability, 5 (4): 577–581, doi:10.1214/aop/1176995765, JSTOR 2243081, MR 0448478, Zbl 0369.60018
Galambos, János (2001) [1994], "Bonferroni inequalities", in Hazewinkel, Michiel (ed.), Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
This article incorporates material from Bonferroni inequalities on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
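The alternating Bonferroni bounds are easy to verify on a small finite example. The events below are arbitrary choices of mine (a Python sketch, not from the article): odd truncations over-estimate the union, even truncations under-estimate it, and the full alternating sum is exact.

```python
import itertools

# Three overlapping events in a uniform space of 10 outcomes
omega = set(range(10))
A = [set(range(0, 5)), set(range(3, 8)), set(range(6, 10))]
P = lambda E: len(E) / len(omega)

union = P(set().union(*A))

def S(k):
    """Partial sum S_k: probabilities of all k-fold intersections."""
    return sum(P(set.intersection(*combo))
               for combo in itertools.combinations(A, k))

# Truncated inclusion-exclusion sums for k = 1, ..., n
bounds = [sum((-1) ** (j - 1) * S(j) for j in range(1, k + 1))
          for k in range(1, len(A) + 1)]

assert union <= bounds[0]               # k=1 (odd): Boole's inequality
assert union >= bounds[1]               # k=2 (even): lower bound
assert abs(union - bounds[2]) < 1e-9    # k=n: inclusion-exclusion, exact
```

Here $S_1 = 1.4$ already exceeds the union's probability of 1, while the $k=2$ truncation $1.4 - 0.4 = 1.0$ happens to be tight for these events.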
Start with the unperturbed gravitational potential for a uniform sphere of mass M and radius R, interior and exterior: $$ \phi^0_\mathrm{in} = {-3M \over 2R} + {M\over 2R^3} (x^2 + y^2 + z^2) $$$$ \phi^0_\mathrm{out} = {- M\over r} $$ Add a quadrupole perturbation, and you get $$ \phi_\mathrm{in} = \phi^0_\mathrm{in} + {\epsilon M\over R^3} D $$$$ \phi_\mathrm{out} = \phi^0_\mathrm{out} + {M\epsilon R^2\over r^5} D $$ $$ D = x^2 + y^2 - 2 z^2 $$ The scale factors of M and R are just to make $\epsilon$ dimensionless, the falloff of $D\over r^5$ is just so that the exterior solution solves Laplace's equation, and the matching of the solutions is to ensure that on any ellipsoid near the sphere of radius R, the two solutions are equal to order $\epsilon$. The reason this works is that the $\phi^0$ solutions are matched both in value and in first derivative at r=R, so they stay matched in value to leading order even when perturbed away from a sphere. The order $\epsilon$ quadrupole terms are equal on the sphere, and therefore match to leading order. The ellipsoid I will choose solves the equation: $$ r^2 + \delta D = R^2 $$ The z-diameter is increased by a fraction $\delta$, while the x-diameter is decreased by $\delta/2$, so that the fractional difference between the polar and equatorial diameters is $3\delta/2$. To leading order $$ r = R - {\delta D \over 2R}$$ We already matched the values of the inner and outer solutions, but we still need to match the derivatives. Taking the "d": $$ d\phi_\mathrm{in} = {M\over R^3} (rdr) + {\epsilon M\over R^3} dD $$$$ d\phi_\mathrm{out} = {M\over r^3} (rdr) + {MR^2\epsilon \over r^5} dD - {5\epsilon R^2 M D\over r^7} (rdr) $$ $$ rdr = x dx + y dy + z dz $$$$ dD = 2 x dx + 2ydy - 4z dz $$ To first order in $\epsilon$, only the first term of the second equation is modified by the fact that r is not constant on the ellipsoid.
Specializing to the surface of the ellipsoid: $$ d\phi_\mathrm{out}|_\mathrm{ellipsoid} = {M\over R^3} (rdr) + {3\delta M D \over 2 R^5}(rdr) + {\epsilon M \over R^3} dD - {5\epsilon M D \over R^5} (rdr)$$ Equating the in and out derivatives, the parts proportional to $dD$ cancel (as they must--- the tangential derivatives are equal because the two functions are equal on the ellipsoid). The rest must cancel too, so $$ {3\over 2} \delta = 5 \epsilon $$ So you find the relation between $\delta$ and $\epsilon$. The solution for $\phi_\mathrm{in}$ gives $$ \phi_\mathrm{in} + {3M\over 2R} = {M\over 2R^3}( r^2 + {3\over 5} \delta D ) $$ which means, looking at the expression in parentheses, that the equipotentials are 60% as squooshed as the ellipsoid. Now there is a condition that this is balanced by rotation, meaning that the ellipsoid is an equipotential once you add the centrifugal potential: $$ - {\omega^2\over 2} (x^2 + y^2) = -{\omega^2 \over 3} (x^2 + y^2 + z^2) -{\omega^2\over 6} (x^2 + y^2 - 2z^2) $$ To make the $\delta$ ellipsoid an equipotential requires that ${\omega^2\over 6}$ equal the remaining ${2\over 5} {M\over 2R^3}\delta$, so that, calling $M\over R^2$ (the acceleration of gravity) by the name "g", and $\omega^2 R$ by the name "C" (centrifugal), $$\delta = {5\over 6} {C \over g} $$ The actual difference between equatorial and polar diameters is found by multiplying by 3/2 (see above): $$ {3\over 2} \delta = {5\over 4} {C\over g} $$ instead of the naive estimate of ${C\over 2g}$. So the naive estimate is multiplied by two and a half for a uniform density rotating sphere. Nonuniform interior: primitive model The previous solution is both interior and exterior for a rotating uniform ellipsoid; it is exact in r, and only leading order in the deviation from spherical symmetry. So it immediately extends to give the shape of the Earth for a nonuniform interior mass distribution.
The estimate with a uniform density is surprisingly good, and this is because there are competing effects largely cancelling out the correction for non-uniform density. The two competing effects are: 1) the interior distribution is more elliptical than the surface, because the interior solution feels all the surrounding elliptical Earth deforming it, with extra density deforming it more; 2) the ellipticity of the interior is suppressed by the $1/r^3$ falloff of the quadrupole solution of Laplace's equation, which is $1/r^2$ faster than the usual potential. So although the interior is somewhat more deformed, the falloff more than compensates, and the effect of the interior extra density is to make the Earth more spherical, although not by much. These competing effects are what shift the correction factor from 2.5 to 2, which is actually quite small considering that the interior of the Earth is extremely nonuniform, with the center more than three times as dense as the outer parts. The exact solution is a little complicated, so I will start with a dopey model. This assumes that the Earth is a uniform ellipsoid of mass M and ellipticity parameter $\delta$, plus a point source in the middle (or a sphere, it doesn't matter) accounting for the extra mass in the interior, of mass M'. The interior potential is given by superposition.
With the centrifugal potential: $$ \phi_{int} = - {M'\over r} - {3M\over 2R} + {M\over 2R^3}(r^2 - {3\over 5} \delta D) + {\omega^2\over 2} r^2 - {\omega^2\over 6} D $$ This has the schematic form of spherical plus quadrupole (including the centrifugal force inside F and G) $$ \phi_{int} = F(r) + G(r) D $$ The condition that the $\delta$ ellipsoid is an equipotential is found by replacing $r$ with $R - {\delta D\over 2R}$ inside F(r), and setting the D-part to zero: $$ {F'(R) \delta \over 2R} = G(R) $$ In this case, you get the equation below, which reduces to the previous case when $M'=0$: $$ {M'\over M+M'}\delta + {M\over M+M'} (\delta - {3\over 5} \delta) = - {C\over 3 g } $$ where $C=\omega^2 R$ is the centrifugal acceleration, and $ g= {M+M'\over R^2} $ is the gravitational acceleration at the surface. I should point out that the spherical part of the centrifugal potential ${\omega^2\over 2} r^2$ always contributes a subleading term proportional to $\omega^2\delta$ to the equation and should be dropped. The result is $$ {3\over 2} \delta = {1\over 2 (1 - {3\over 5} {M\over M+M'}) } {C\over g} $$ So if you choose M' to be .2 M, you get the correct answer: the extra equatorial radius is twice the naive amount of ${C\over 2g}$. This says that the potential at the surface of the Earth is only modified from the uniform ellipsoid estimate by adding a sphere with 20% of the total mass at the center. This is somewhat small, considering the nonuniform density in the interior contains about 25% of the mass of the Earth (the perturbing mass is twice the density at half the radius, so about 25% of the total). The slight difference is due to the ellipticity of the core. Nonuniform mass density II: exact solution The main thing neglected in the above is that the center is also nonspherical, and so adds to the nonspherical D part of the potential on the surface.
This effect mostly counteracts the general tendency of extra mass at the center to make the surface more spherical, although imperfectly, so that there is a correction left over. You can consider it as a superposition of uniform ellipsoids of mean radius s, with ellipticity parameter $\delta(s)$ for $0<s<R$ increasing as you go toward the center. Each is uniform on the interior, with mass density $|\rho'(s)|$, where $\rho(s)$ is the extra density of the Earth at distance s from the center, so that $\rho(R)=0$. These ellipsoids are superposed on top of a uniform density ellipsoid of density $\rho_0$ equal to the surface density of the Earth's crust. I will consider $\rho(s)$ and $\rho_0$ known, so that I also know $|\rho'(s)|$, its (negative) derivative with respect to s, which is the density of the ellipsoid you add at s, and I also know: $$ M(r) = \int_0^r 4\pi \rho(s) s^2 ds $$ The quantity $M(s)$ is the additional mass in the interior, as compared to a uniform Earth at crust density. Note that $M(s)$ is not affected by the ellipsoidal shape to leading order, because all the nested ellipsoids are quadrupole perturbations, and so contain the same volume as spheres. Each of these concentric ellipsoids is itself an equipotential surface for the centrifugal potential plus the potential from the interior and exterior ellipsoids.
So once you know the form of the potential of all these superposed ellipsoids, which is of the form spherical + quadrupole + centrifugal quadrupole (the centrifugal spherical part always gives a subleading correction, so I omit it): $$ \phi_\mathrm{int}(r) = F(r) + G(r) D - {\omega^2 \over 6} D $$ you know that the potential is constant on each of these nested ellipsoids, $$ F\left(s - {\delta(s) D \over 2s}\right) + G(s) D - {\omega^2\over 6} D = \mathrm{const}, $$ so that the equation demanding that this is an equipotential at any s is $$ {\delta(s) F'(s) \over 2s} - G(s) + {\omega^2\over 6} = 0 $$ To find the form of F and G, you first express the interior/exterior solution for a uniform ellipsoid in terms of the density $\rho$ and the radius R: $$ {\phi_\mathrm{int}\over 4\pi} = - {\rho R^2\over 2} + {\rho\over 6} r^2 + {\rho \delta\over 10} D $$ $$ {\phi_\mathrm{ext}\over 4\pi} = - {\rho R^3 \over 3 r} + {\rho\delta R^5\over 10 r^5} D $$ You can check the sign and numerical value of the coefficients using the 3/5 rule for the interior equipotential ellipsoids, the separate matching of the spherical and D perturbations at r=R, and dimensional analysis. I put a factor $4\pi$ on the bottom of $\phi$ so that the right hand side solves the constant-free form of Laplace's equation. Now you can superpose all the ellipsoids, by setting $\delta$ on each ellipsoid to be $\delta(s)$, setting $\rho$ on each ellipsoid to be $|\rho'(s)|$, and $R$ to be $s$. I am only going to give the interior solution at r (doing integration by parts on the spherical part, where you know what the answer is going to turn out to be, and throwing away an additive constant C): $$ {\phi_\mathrm{int}(r)\over 4\pi} - C = {\rho_0\over 6} r^2 + {\rho_0 \delta(R)\over 10} D - {M(r)\over 4\pi r} + {1\over 10r^5} \left(\int_0^r |\rho'(s)| \delta(s) s^5 \,ds\right) D + {1\over 10} \left(\int_r^R |\rho'(s)|\delta(s)\, ds\right) D $$ The first two terms are the interior solution for constant density $\rho_0$.
The third term is the total spherical contribution, which is just as in the spherically symmetric case. The fourth term is the superposed exterior potential from the ellipsoids inside r, and the last term is the superposed interior potential from the ellipsoids outside r. From this you can read off the spherical and quadrupole parts: $$ F(r) = {\rho_0\over 6} r^2 + {M(r)\over r} $$$$ G(r) = {\rho_0\delta(R)\over 10} + {1\over 10r^5} \int_0^r |\rho'(s)|\delta(s) s^5 \,ds + {1\over 10} \int_r^R |\rho'(s)|\delta(s)\, ds $$ So the integral equation for $\delta(s)$ asserts that the $\delta(r)$ shape is an equipotential at any depth: $$ {F'(r)\delta(r)\over 2r} - G(r) + {\omega^2 \over 6} = 0 $$ This equation can be solved numerically for any mass profile in the interior, to find $\delta(R)$. This is difficult to do by hand, but you can get qualitative insight. Consider an ellipsoidal perturbation inside a uniform density ellipsoid. If you let this mass settle along an equipotential, it will settle to the same ellipsoidal shape as the surface, because the interior solution for the uniform ellipsoid is quadratic, and so has exact nested ellipsoids of the same shape as equipotentials. But this extra density will contribute less than its share of elliptical potential to the surface, diminishing as the third power of the ratio of the radius of the Earth to the radius of the perturbation. But it will produce stronger ellipses inside, so that the interior is always more elliptical than the surface.
Oblate Core Model The exact solution is too difficult for paper and pencil calculations, but looking [here]( http://www.google.com/imgres?hl=en&client=ubuntu&hs=dhf&sa=X&channel=fs&tbm=isch&prmd=imvns&tbnid=hjMCgNhAjHnRiM:&imgrefurl=http://www.springerimages.com/Images/Geosciences/1-10.1007_978-90-481-8702-7_100-1&docid=ijMBfCAOC1GhEM&imgurl=http://img.springerimages.com/Images/SpringerBooks/BSE%253D5898/BOK%253D978-90-481-8702-7/PRT%253D5/MediaObjects/WATER_978-90-481-8702-7_5_Part_Fig1-100_HTML.jpg&w=300&h=228&ei=ZccgUJCTK8iH6QHEuoHICQ&zoom=1&iact=hc&vpx=210&vpy=153&dur=4872&hovh=182&hovw=240&tx=134&ty=82&sig=108672344460589538944&page=1&tbnh=129&tbnw=170&start=0&ndsp=8&ved=1t:429,r:1,s:0,i:79&biw=729&bih=483 ), you see that it is sensible to model the Earth as two concentric ellipsoids of mean radius $R$ and $R_1$, with masses $M_0$ and $M_1$ and ellipticity parameters $\delta$ and $\delta_1$. I will take $$ R_1 = {R\over 2} $$ and $$ M_1 = {M_0\over 4} $$ that is, the inner sphere has a radius of about 3200 km, with twice the density, which is roughly accurate. Superposing the potentials and finding the equations for the $\delta$s (the two-point truncation of the integral equation), you find $$ -\delta + {3\over 5} {M_0\over M_0 + M_1} \delta + {3\over 5} {M_1\over M_0 + M_1} \delta_1 ({R_1\over R})^2 = {C\over 3g} $$ $$ {M_0 \over M_0 + M_1} (-\delta_1 + {3\over 5} \delta) + {M_1 \over M_0 + M_1}( -\delta_1 + {3\over 5} \delta_1) = {C\over 3g} $$ Where $$ g = {M_0+ M_1\over R^2}$$$$ C = \omega^2 R $$ are the gravitational and centrifugal accelerations at the surface, as usual. Using these parameters, and defining $\epsilon = {3\delta\over 2}$ and $\epsilon_1={3\delta_1\over 2}$, one finds: $$ - 1.04 \epsilon + .06 \epsilon_1 = {C\over g} $$$$ - 1.76 \epsilon_1 + .96 \epsilon = {C\over g} $$ (these are exact decimal fractions, with denominators of 100 and 25).
Subtracting the two equations gives: $$ \epsilon_1 = {\epsilon\over .91} $$ (still exact fractions), which gives the equation $$ (-1.04 + {.06\over .91} ) \epsilon = {C\over g}$$ so that the factor in front is $.974$, instead of the naive 2. This gives an excess of the equatorial over the polar diameter of 44.3 km, as opposed to the observed 42.73 km, which is close enough that the model essentially explains everything you wanted to know. The value of $\epsilon_1$ is also interesting: it tells you that the Earth's core is 9% more eccentric than the outer ellipsoid of the Earth itself. Given that the accuracy of the model is at the 3% level, this should be very accurate.
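The decimal coefficients quoted above can be reproduced with exact rational arithmetic. The following sketch (variable names are mine, not the author's) also checks the point-core model's two limits from the previous section:

```python
from fractions import Fraction as F

# Point-core ("dopey") model: (3/2)*delta in units of C/g vs. M'/M
def bulge_factor(Mp_over_M):
    m = F(1) / (1 + Mp_over_M)            # M/(M+M')
    return 1 / (2 * (1 - F(3, 5) * m))

assert bulge_factor(F(0)) == F(5, 4)      # uniform sphere: 5/4 of C/g
assert bulge_factor(F(1, 5)) == 1         # M' = 0.2 M: exactly C/g

# Two-shell model: multiply both equations by 3 and substitute
# delta = (2/3) eps, delta_1 = (2/3) eps_1, so each coefficient of
# delta picks up an overall factor of 2
w0, w1, r2 = F(4, 5), F(1, 5), F(1, 4)    # mass weights and (R1/R)^2
a1 = 2 * (-1 + F(3, 5) * w0)              # surface equation, eps term
b1 = 2 * F(3, 5) * w1 * r2                # surface equation, eps_1 term
a2 = 2 * F(3, 5) * w0                     # core equation, eps term
b2 = 2 * (-1 + F(3, 5) * w1)              # core equation, eps_1 term
assert (a1, b1) == (F(-26, 25), F(3, 50))     # -1.04 and .06
assert (a2, b2) == (F(24, 25), F(-44, 25))    # .96 and -1.76

ratio = (a1 - a2) / (b2 - b1)             # eps_1 / eps
assert ratio == F(100, 91)                # i.e. eps_1 = eps / .91
factor = a1 + b1 * ratio                  # coefficient of eps alone
assert abs(abs(float(factor)) - 0.974) < 1e-3
```

The exact value of the final coefficient is $-2216/2275 \approx -0.9741$, confirming the quoted $.974$.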
At the experimental level, those condensed matter Majorana degrees of freedom are the pioneering examples (assuming that the claims are true). The only other Majorana fields we know in the world around us are the neutrino fields but even though there are strong theoretical reasons to think that the neutrino species we know are Majorana and not Dirac, we can't really experimentally show it is the case. Theoretical physics is full of Majorana fields, however. The world sheet fermions in string theory are Majorana fermions – well, in 2 dimensions, much like in 10 dimensions and any $8k+2$ dimensions, one may impose the Majorana and Weyl conditions simultaneously so we're working with Majorana-Weyl fermions. Similarly, there are lots of hypothetical Majorana (but not Weyl at the same moment) fields in $d=4$ according to supersymmetry (and some other models of new physics). The superpartners to any bosonic field of the Standard Model – the Higgs and the gauge bosons – are Majorana fermions. Neutralinos may be the lightest example: they may be the lightest superpartners (LSPs) and in many models, they account for most of the dark matter. This dark matter would annihilate in pairs, something we expect for Majorana excitations that naturally carry a conserved ${\mathbb Z}_2$ quantum number. I would slightly disagree that the absence of a $U(1)$ charge is equivalent to the Majorana condition. This identification of the two conditions holds in one direction and it is "economic" in the other but it doesn't have to be the case. The neutrinos could be Majorana but they could still refuse to carry any conserved $U(1)$ charges. 
Majorana degrees of freedom mean that the fields transform as Majorana spinors (spinor representations that obey a reality condition); more generally, these fermions are the canonical momenta to themselves, so that one may write $\{\theta_a,\theta_b\}=\delta_{ab}$ anticommutators without any daggers, while $\theta^\dagger=\theta$ for these components. This post imported from StackExchange Physics at 2014-04-05 04:16 (UCT), posted by SE-user Luboš Motl
Assemble a formula using the numbers $2$, $0$, $1$, and $8$ in any order that equals 109. You may use the operations $x + y$, $x - y$, $x \times y$, $x \div y$, $x!$, $\sqrt{x}$, $\sqrt[\leftroot{-2}\uproot{2}x]{y}$ and $x^y$, as long as all operands are either $2$, $0$, $1$, or $8$. Operands may of course also be derived from calculations, e.g. $10+(\sqrt{8*2})!$. You may also use brackets to clarify order of operations, and you may concatenate two or more of the four digits you start with (such as $2$ and $8$ to make the number $28$) if you wish. You may only use each of the starting digits once, and you must use all four of them. I'm afraid that concatenation of numbers from calculations is not permitted, but answers with concatenations which get $109$ will get plus one from me. Double, triple, etc. factorials (n-druple-factorials), such as $4!! = 4 \times 2$, are not allowed, but factorials of factorials are fine, such as $(4!)! = 24!$. I will upvote answers with double, triple and n-druple-factorials which get 109, but will not mark them as correct. Here are some examples to this problem: Use 2 0 1 and 8 to make 67 Make numbers 93 using the digits 2, 0, 1, 8 Make numbers 1 - 30 using the digits 2, 0, 1, 8 Many thanks to the authors of these questions for inspiring this question.
I cannot find the mistake in this sequence of operations. If $\Sigma$ is a positive definite matrix, we can write $\Sigma = C \Lambda C '$, where $C$ is orthonormal $(CC' = C'C = I)$ and $\Lambda$ is diagonal with all diagonal elements positive. Consider $\Lambda^{*}$, the diagonal matrix whose diagonal elements are the reciprocal square roots of the corresponding diagonal elements of $\Lambda$, and let $H = C \Lambda^{*} C'$. I would like to prove that $$H'H = \Sigma ^{-1}$$ First, $H' = H$. Write $\Lambda = \hbox{diag}\{ d_1, \dots , d_n\}$ and $\Lambda^{*} = \hbox{diag}\{ \sqrt{d_1}, \dots , \sqrt{d_n}\}$. With this in hand, we have $$H'H = H H = C \Lambda^{*} C' C \Lambda^{*} C' = C \Lambda^{*} I \Lambda^{*} C' = C \Lambda^{*}\Lambda^{*} C' = C \Lambda C' = \Sigma$$ Any idea where the error is?
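For what it's worth, both computations can be checked numerically on a small example. The sketch below (plain Python; the 2×2 matrix and all names are arbitrary choices of mine) contrasts a $\Lambda^*$ built from reciprocal square roots with one built from plain square roots:

```python
import math

# Sigma = C diag(d) C' for the symmetric matrix [[2, 1], [1, 2]],
# whose eigenvalues are 3 and 1 with orthonormal eigenvectors below
s = 1 / math.sqrt(2)
C = [[s, s], [s, -s]]
d = [3.0, 1.0]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

# Reciprocal square roots, as in the prose definition of Lambda*
Lam_recip = [[1 / math.sqrt(d[0]), 0], [0, 1 / math.sqrt(d[1])]]
H = matmul(matmul(C, Lam_recip), transpose(C))
HtH = matmul(transpose(H), H)
Sigma_inv = [[2/3, -1/3], [-1/3, 2/3]]  # inverse of [[2, 1], [1, 2]]
assert all(abs(HtH[i][j] - Sigma_inv[i][j]) < 1e-12
           for i in range(2) for j in range(2))

# Plain square roots, as in the displayed diag{sqrt(d_i)}
Lam_sqrt = [[math.sqrt(d[0]), 0], [0, math.sqrt(d[1])]]
H2 = matmul(matmul(C, Lam_sqrt), transpose(C))
H2tH2 = matmul(transpose(H2), H2)
Sigma = [[2.0, 1.0], [1.0, 2.0]]
assert all(abs(H2tH2[i][j] - Sigma[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The two branches give $\Sigma^{-1}$ and $\Sigma$ respectively, which locates the discrepancy between the prose definition and the displayed one.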
One of the most powerful areas of electrical engineering that flourished in the 20th century is the field of signal processing. The field is broad and rich in some beautiful mathematics, but by way of introduction, here we’ll take a look at some basic properties of signals and how we can use these properties to find a nice compact representation of operations on them. As a motivating application, we’ll use what we study today to apply certain effects to audio signals. In particular, we’ll take a piece of audio, and be able to make it sound like it’s being played in a cathedral, or in a parking garage, or even through a metal spring. First things first: what is a signal? For this discussion we’ll limit ourselves to looking at the space \ell = \{x : \mathbb{Z} \rightarrow \mathbb{R}\} – the set of functions which take an integer and return a real number. Another way to think of a signal then is as an infinite sequence of real numbers. We’re limiting ourselves to functions where the domain is discrete (the integers), rather than continuous (the real numbers), since in many applications we’re looking at signals that represent some measurement taken at a bunch of different times 1. It’s worth noting that any signal that’s been defined on a countable domain \{..., t_{n-1}, t_n, t_{n+1},...\} can be converted to one defined on the integers via an isomorphism. We like to place one further restriction on the signals, in order to make certain operations possible. We restrict the space to the so-called finite-energy signals: \ell_2 = \{x \in \ell : \sum_{n=-\infty}^{\infty} x(n)^2 < \infty\}. This restriction makes it much easier to study and prove things about these functions, while still giving us lots of useful signals to study, without having to deal with messy things like infinities. In practice, when dealing with audio we usually have a signal with a finite length and range, so this finite-energy property is trivially true. Studying signals is only useful if we can also define operations on them.
We’ll study the interaction of signals with systems, which take one signal and transform it into another – essentially, a function operating on signals. Here, we’ll say that a system H : \ell_2 \rightarrow \ell_2 takes an input signal x(t) and produces output signal H\{x(t)\} = y(t). Linearity and Time Invariance There are certain properties that are useful for systems to have. The first is linearity. A system H is considered linear if for every pair of inputs x_1, x_2 \in \ell_2, and for any scalar values \alpha, \beta \in \mathbb{R}, we have H\{\alpha x_1 + \beta x_2\} = \alpha H\{x_1\} + \beta H\{x_2\} This is very useful, because it allows us to break down a signal into simpler parts, study the response of the system to each of those parts, and understand the response to the more complex original signal. The next property we’re going to impose on our systems is time-invariance: \forall s \in \mathbb{Z}, H\{x(n)\} = y(n) \Rightarrow H\{x(n-s)\} = y(n-s) This means that shifting the input by s corresponds to a similar shift in the output. In our example of playing music in a cathedral, we expect our system to be time-invariant, since it shouldn’t matter whether we play our music at noon or at midnight, we’d expect it to sound the same. However, if we were playing in a building that, for example, lowered a bunch of sound-dampening curtains at 8pm every night, then the system would no longer be time-invariant. So what are some more concrete examples of systems that are linear and time-invariant? Let’s consider an audio effect which imitates an echo – it outputs the original signal, plus a quieter, delayed version of that signal. We might express such a system as H_{\Delta, k}\{x(n)\} = x(n) + k x(n-\Delta) where \Delta \in \mathbb{Z} is the time delay of the echo (in terms of the number of samples), and k \in \mathbb{R} is the relative volume of the echoed signal. We can see that this system is time-invariant, because there is no appearance of the time variable outside of the input.
If we replaced k by a function k(n) = \sin(n), for example, we would lose this time invariance. Additionally, the system is plainly linear:\begin{aligned}H_{\Delta, k}\{\alpha x_1(n) + \beta x_2(n)\} &= \alpha x_1(n) + \beta x_2(n) + \alpha k x_1(n-\Delta) + \beta k x_2(n-\Delta) \\&= \alpha H_{\Delta, k}\{x_1(n)\} + \beta H_{\Delta, k}\{x_2(n)\}\end{aligned} A common non-linearity in audio processing is called clipping — we limit the output to be between -1 and 1: H\{x(n)\} = \max(\min(x(n), 1), -1). This is clearly non-linear since doubling the input will not generally double the output. The Kronecker delta signal There is a very useful signal that I would be remiss not to mention here: the Kronecker delta signal. We define this signal as\delta(n) = \begin{cases} 1 & n = 0 \\ 0 & n \neq 0 \end{cases} The delta defines an impulse, and we can use it to come up with a nice compact description of linear, time-invariant systems. One property of the delta is that it can be used to “extract” a single element from another signal, by multiplying:\forall s \in \mathbb{Z}, \delta(n-s)x(s) = \begin{cases} x(n) & n=s \\ 0 & n \neq s\end{cases} Similarly, we can then write any signal as an infinite sum of these multiplications:x(n) = \sum_{s=-\infty}^{\infty} \delta(n-s)x(s) = \sum_{s=-\infty}^{\infty}\delta(s)x(n-s) Why would we want to do this? Let H be a linear, time-invariant system, and let h(n) = H\{\delta(n)\}, the response of the system to the delta signal. Then we have\begin{aligned} H\{x(n)\} &= H\left\{\sum_{s=-\infty}^{\infty} \delta(n-s)x(s)\right\}\\&=\sum_{s=-\infty}^{\infty}H\{\delta(n-s)\}x(s) \text{ by linearity}\\&=\sum_{s=-\infty}^{\infty}h(n-s)x(s) \text{ by time-invariance}\end{aligned} We can write any linear, time-invariant system in this form. We call the function h the impulse response of the system, and it fully describes the behaviour of the system.
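To make the derivation concrete, here is a small self-contained check (plain Python; the echo parameters and signal values are arbitrary choices, not from the original post): we measure the echo system’s impulse response, then verify that convolving with it reproduces the system’s output.

```python
def echo(x, delay=3, k=0.5):
    """The echo system H{x}(n) = x(n) + k*x(n - delay), applied to a
    finite signal treated as zero outside its support."""
    def get(s, n):
        return s[n] if 0 <= n < len(s) else 0.0
    return [get(x, n) + k * get(x, n - delay) for n in range(len(x) + delay)]

# Impulse response: the system's output for a Kronecker delta input
delta = [1.0] + [0.0] * 7
h = echo(delta)                    # 1 at n=0, 0.5 at n=3, zero elsewhere

def convolve(x, h):
    """y(n) = sum_s h(n - s) x(s)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        y[n] = sum(h[n - s] * x[s] for s in range(len(x))
                   if 0 <= n - s < len(h))
    return y

x = [1.0, -2.0, 0.5, 3.0]
y_direct = echo(x)                 # apply the system directly
y_conv = convolve(x, h)            # convolve with the impulse response
assert all(abs(a - b) < 1e-12 for a, b in zip(y_direct, y_conv))
```

The agreement is exactly the statement derived above: knowing h is enough to know what H does to any input.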
This operation where we’re summing up the product of shifted signals is called a convolution, and it appears in lots of different fields of math 2. Firing a Gun in The Math Citadel The power of this representation of a system is that if we want to understand how it will act on any arbitrary signal, it is sufficient to understand how it responds to an impulse. To demonstrate this, we’ll look at the example of how audio is affected by the environment it is played in. Say we were a sound engineer, and we wanted to get an instrument to sound like it was being played in a big, echoing cathedral. We could try to find such a place and actually record the instrument, but that could be expensive, requiring a lot of setup and time. Instead, if we could record the impulse response of that space, we could apply the convolution to a recording we made back in a studio. How do we capture an impulse response? We just need a loud, very short audio source – firing a pistol or popping a balloon are common choices. To demonstrate, here are some example impulse responses, taken from OpenAirLib, and their effects on different audio signals. First, here is the unprocessed input signal – a short piece of jazz guitar. Here is the same clip, as if it were played in a stairwell at the University of York: first the impulse response, then the processed audio. That sounds a little different, but we can try a more extreme example: the gothic cathedral of York Minster. Again, here is the impulse response, followed by the processed signal. In this case, we have a much more extreme reverberation effect, and we get the sound of a guitar in a large, ringing room. For our last example, we’ll note that impulse responses don’t have to be natural recordings; they could instead be entirely synthetic. Here, I’ve simply reversed the first impulse response from the stairwell, which creates a pre-echo effect that doesn’t exist naturally.
This is just one of the most basic examples of what can be done with signal processing, but I think it’s a particularly good one – by defining some reasonable properties for signals and systems, we’re able to derive a nice compact representation that also makes a practical application very simple. Footnotes In this post, the results for continuous functions are largely the same, just replacing the sums with integrals, and the Kronecker delta signal with the Dirac delta distribution; however, the proofs for these statements are much more involved in the continuous setting. You may have seen the phrase convolution in the context of machine learning. Convolutional neural networks are a popular tool in image processing, and make heavy use of convolution defined on 2-dimensional signals.
It is an easy consequence of Serre's open image theorem that, for the torsion point count on elliptic curves, the following possibilities arise. If $E/\bar{\mathbb{Q}}$ is an elliptic curve without CM, then the number of torsion points $x \in E$ with $[\mathbb{Q}(x):\mathbb{Q}] \leq d$ is $\asymp d^{3/2}$ as $d \to \infty$. If $E/\bar{\mathbb{Q}}$ is an elliptic curve with CM, then this number is $\asymp d^2$ as $d \to \infty$. This appears on page 44 of Serre's book, Lectures on the Mordell-Weil Theorem. As stated there, it is easy to show that, for a $g$-dimensional abelian variety, the corresponding count is bounded by $O(d^{N})$ with $N = N(g) < \infty$. Should we expect, for $g$-dimensional abelian varieties, the count to always be asymptotic to $d^{\alpha}$ with $\alpha$ drawn from a finite set of exponents depending on $g$? Are there any clues as to the spectrum of those exponents? Second, I wanted to ask about any results on the uniform torsion count problem giving bounds that are uniform in $E$ but still polynomial in $d$. (Thus I am not asking about Merel's uniform boundedness theorem and its relatives.) For example, restricting to $g$-dimensional abelian varieties over $\bar{\mathbb{Z}}$ (that is, with integral moduli, or with everywhere potentially good reduction), or at least to the abelian schemes over the spectra of rings of integers of number fields of bounded degree, it is clear a priori that there is a uniform exponential bound in $d$. Should we expect this uniform bound to be strengthened to a polynomial bound, or even (for elliptic curves) to $Cd^{2+\epsilon}$? What about the generalization where we count the small algebraic points, say those of (canonical) height $< 1/d$?
Multiobjective optimization considers optimization problems involving more than one objective function to be optimized simultaneously. Multiobjective optimization problems arise in many fields, such as engineering, economics, and logistics, when optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. For example, developing a new component might involve minimizing weight while maximizing strength, or choosing a portfolio might involve maximizing the expected return while minimizing the risk. Typically, there does not exist a single solution that simultaneously optimizes each objective. Instead, there exists a (possibly infinite) set of Pareto optimal solutions. A solution is called nondominated or Pareto optimal if none of the objective functions can be improved in value without degrading one or more of the other objective values. Without additional subjective preference information, all Pareto optimal solutions are considered equally good. In mathematical terms, a multiobjective optimization problem can be formulated as \[ \begin{align} \min &\left(f_1(x), f_2(x),\ldots, f_k(x) \right) \\ \text{s.t. } &x\in X, \end{align} \] where the integer \(k\geq 2\) is the number of objectives and the set \(X\) is the feasible set of decision vectors. The feasible set is typically defined by some constraint functions. In addition, the vector-valued objective function is often defined as \(f:X\to\mathbb R^k, \ f(x)= (f_1(x),\ldots,f_k(x))^T\). An element \(x^*\in X\) is a feasible solution. A feasible solution \(x^1\in X\) is said to (Pareto) dominate another solution \(x^2\in X\) if \(f_i(x^1)\leq f_i(x^2)\) for all indices \(i \in \left\{ {1,2,\dots,k } \right\}\) and \(f_j(x^1) < f_j(x^2)\) for at least one index \(j \in \left\{ {1,2,\dots,k } \right\}\). A solution \(x^1\in X\) is called Pareto optimal if there does not exist another solution that dominates it.
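The dominance test above translates directly into code. Below is a minimal brute-force sketch (my own illustration, not a library API) that checks Pareto dominance for minimization and filters a list of objective vectors down to its nondominated set:

```python
def dominates(f1, f2):
    # f1 Pareto-dominates f2 (minimization): no worse in every objective,
    # strictly better in at least one
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def pareto_front(points):
    # keep exactly the nondominated points; brute force, O(n^2) comparisons
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))  # [(1, 5), (2, 3), (4, 1)]
```

Here (3, 4) is dominated by (2, 3) and (5, 5) by (1, 5); the three survivors are mutually incomparable, illustrating why, absent preference information, they are all "equally good".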
Online Resources
- International Society on Multiple Criteria Decision Making (MCDM)
- Optimization Online's Other Topics area (includes multi-criteria optimization)
- Wikipedia entry on Multi-objective optimization
References
- Marler, R. T. and Arora, J. S. 2004. Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization 26, 369-395.
- Zitzler, E., Laumanns, M., and Bleuler, S. 2004. A Tutorial on Evolutionary Multiobjective Optimization. In Metaheuristics for Multiobjective Optimization, X. Gandibleux et al., eds., Springer-Verlag, Berlin, pp. 3-37.
I have a question about the one-loop computation of the wave-function renormalization factor in SQCD. According to Seiberg duality, the following electric $\mathrm{SQCD}_{e}$ \begin{gather} S_{e}(\mu)=\frac{1}{2g_{e}(\mu)^{2}}\left(\int d^{4}x\int d^{2}\theta\mathrm{Tr}(\mathbb{W^{\alpha}\mathbb{W}_{\alpha}})+\int d^{4}x\int d^{2}\bar{\theta}\mathrm{Tr}(\overline{\mathbb{W}}^{\dot{\alpha}}\overline{\mathbb{W}}_{\dot{\alpha}})\right)+ \\ +\frac{Z_{Q}(\Lambda_{e},\mu)}{4}\sum_{f=1}^{F}\int d^{4}x\int d^{2}\theta \int d^{2}\bar{\theta}\left(\widetilde{Q}^{\dagger}_{f}e^{V}\widetilde{Q}_{f}+Q^{\dagger}_{f}e^{-V}Q_{f}\right), \end{gather} with gauge group $SU(N)$ and $F$ flavors, is dual to the magnetic $\mathrm{SQCD}_{m}$ \begin{gather} S_{m}(\mu)=\frac{1}{2g_{m}(\mu)^{2}}\left(\int d^{4}x\int d^{2}\theta\mathrm{Tr}(\mathbb{W^{\alpha}\mathbb{W}_{\alpha}})+\int d^{4}x\int d^{2}\bar{\theta}\mathrm{Tr}(\overline{\mathbb{W}}^{\dot{\alpha}}\overline{\mathbb{W}}_{\dot{\alpha}})\right)+ \\ +\frac{Z_{q}(\Lambda_{m},\mu)}{4}\sum_{f=1}^{F}\int d^{4}x\int d^{2}\theta \int d^{2}\bar{\theta}\left(\tilde{q}^{\dagger}_{f}e^{V}\tilde{q}_{f}+q^{\dagger}_{f}e^{-V}q_{f}\right)+ \\ +\frac{Z_{T}(\Lambda_{m},\mu)}{4}\int d^{4}x\int d^{2}\theta \int d^{2}\bar{\theta}T^{\dagger}T+ \\ +\lambda(\Lambda_{m})\left(\int d^{4}xd^{2}\theta\mathrm{tr}(qT\tilde{q})+\int d^{4}xd^{2}\bar{\theta}\mathrm{tr}(\tilde{q}^{\dagger}T^{\dagger}q^{\dagger})\right), \end{gather} with gauge group $SU(F-N)$ and $F$ flavors in the IR. In the above expressions, $\Lambda_{e}$ and $\Lambda_{m}$ are respectively the UV cutoffs, factors $Z_{Q}$, $Z_{q}$, and $Z_{T}$ are wave-function renormalization constants, $T$ in $\mathrm{SQCD}_{m}$ is an $SU(F-N)$-gauge singlet, and the trace $\mathrm{tr}$ is taken over both flavor and color indices. The Yukawa coupling constant $\lambda(\Lambda_{m})$ is independent of the scale $\mu$ because of the famous Non-Renormalization Theorem. 
My questions are about the one-loop computation of the above wave-function renormalization constant. In QCD, we know that in the minimal-subtraction scheme the wave-function renormalization factor for the kinetic term of the fermion is given by $$Z=1-C_{2}(R)\frac{g^{2}}{8\pi^{2}}\frac{1}{\epsilon}+\mathcal{O}(g^{4}),$$ where $C_{2}(R)$ is the second Casimir of the representation $R$ that the quark field is in. This can be found in many standard QFT textbooks, such as Srednicki, equation (73.3). In this paper, there is a similar formula (equation (7) on page 4) for the wave-function renormalization constant, $$Z_{Q}(\Lambda,\mu)=1+C_{2}(R)\frac{g^{2}}{4\pi^{2}}\log\frac{\mu}{\Lambda}+\mathcal{O}(g^{4}), \tag{7}$$ given in the Wilsonian approach. 1. Could you enlighten me on how to derive equation (7) in the Wilsonian approach? 2. Please also tell me how to derive equations (13) and (14) on page 6, \begin{align} Z_{q}(\Lambda,\mu)&=1+\left(\frac{g^{2}}{4\pi^{2}}C_{2}(R)-\frac{\lambda^{2}}{8\pi^{2}}F\right)\log\frac{\mu}{\Lambda}+\mathcal{O}(g^{4},g^{2}\lambda^{2},\lambda^{4}), \tag{13} \\ Z_{T}(\Lambda,\mu)&=1-\frac{\lambda^{2}}{8\pi^{2}}(F-N)\log\frac{\mu}{\Lambda}+\mathcal{O}(g^{4},g^{2}\lambda^{2},\lambda^{4}), \tag{14} \end{align} for the magnetic dual theory. My final question is a stupid one. As far as I could understand from what I have read about Seiberg duality, the conjecture claims that in the conformal window $$\frac{3}{2}N<F<3N$$ both $\mathrm{SQCD}_{e}$ and $\mathrm{SQCD}_{m}$ are conformal and flow to the same IR fixed point. In $\mathrm{SQCD}_{e}$, the NSVZ $\beta$ function is given by \begin{align} \beta(g_{e})&=-\frac{g_{e}^{3}}{16\pi^{2}}\frac{3N-F(1-\gamma(g_{e}))}{1-\frac{Ng^{2}_{e}}{8\pi^{2}}}, \\ \gamma(g_{e})&=-\frac{g_{e}^{2}}{8\pi^{2}}\frac{N^{2}-1}{N}+\mathcal{O}(g_{e}^{4}), \end{align} whose zero (the Banks-Zaks fixed point) is at $$(g^{\ast}_{e})^{2}=\frac{8\pi^{2}}{3}\frac{N}{N^{2}-1}\epsilon,$$ when $F=3N-\epsilon N$ with small enough $\epsilon$.
On the other hand, in the dual theory $\mathrm{SQCD}_{m}$, the paper shows that the dual Banks-Zaks fixed point is at (equations (15) and (16)) \begin{align} \frac{(g_{m}^{\ast})^{2}}{8\pi^{2}}&=\epsilon\frac{F-N}{(F-N)^{2}-1}\left(1+2\frac{F}{F-N}\right), \tag{15} \\ \frac{(\lambda^{\ast})^{2}}{8\pi^{2}}&=2\epsilon\frac{1}{F-N}. \tag{16} \end{align} 3. Since the Yukawa coupling constant $\lambda$ should not run along the RG flow, does the above fixed point $\lambda^{\ast}$ imply that one must tune $\lambda$ to $\lambda^{\ast}$ in the UV so that the theory flows to a fixed point in the IR? 4. Is Seiberg duality claiming that $g_{e}^{\ast}$ and $(g_{m}^{\ast},\lambda^{\ast})$ are actually the same point in theory space? Worrying that my question will get no answers, I have also posted it here.
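As a quick numerical sanity check on the electric fixed point (my own sketch, not from the paper): plugging $(g^{\ast}_{e})^{2}$ into the one-loop $\gamma$ gives $\gamma(g^{\ast}_{e}) = -\epsilon/3$, and then the NSVZ numerator $3N - F(1-\gamma)$ with $F = (3-\epsilon)N$ reduces to $N\epsilon^{2}/3$, i.e. it vanishes to first order in $\epsilon$, as a Banks-Zaks zero should:

```python
import math

def gamma(g2, N):
    # one-loop anomalous dimension: gamma = -(g^2 / 8 pi^2) * (N^2 - 1) / N
    return -g2 / (8 * math.pi**2) * (N**2 - 1) / N

def nsvz_numerator(eps, N):
    # numerator of the NSVZ beta function, 3N - F(1 - gamma), evaluated at
    # the claimed fixed point g*^2 = (8 pi^2 / 3) * N / (N^2 - 1) * eps
    F = (3 - eps) * N
    g2_star = (8 * math.pi**2 / 3) * N / (N**2 - 1) * eps
    return 3 * N - F * (1 - gamma(g2_star, N))

N = 5
for eps in (1e-2, 1e-3):
    # the ratio tends to N/3, confirming the numerator is O(eps^2)
    print(eps, nsvz_numerator(eps, N) / eps**2)
```

So the leading-order zero is exact up to $\mathcal{O}(\epsilon^{2})$ corrections, which is the expected accuracy of the one-loop $\gamma$.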
The problem is of the same type as the one described in Try to comment: Fail review audit on Meta Mathematics Stack Exchange. As a result, I’m not going to post my problem again so as not to create another duplicate question. Background Suppose that $A$ and $B$ are singular and nonsingular matrices respectively. Simplify $\det((A+B)^2−(A−B)^2)$. Problem A wrong solution with a vote of -2 is chosen by Daniel. Why can this happen? Possible explanation That’s because he’s correctly done the expansion until $\det(2AB + 2BA)$. Raison d’être of this post Having spent time on typing a comment, I worry that it will automatically disappear sooner or later if the accepted answer is deleted. Therefore, I back it up here. Consider However, if $A = 0$ and $B = I_3$, then the answer is clearly zero. As a result, we can’t deduce further from $\det(2(AB + BA))$. Lessons learnt The generation of a random matrix/array of integers using randi([imin, imax], m, n). For more details, you may read GNU Octave’s manual. Background Two years ago, I thought about a group of 689 elements. 1 I only managed to show the existence of such a group. Problem Inspired by the use of Sylow III to show that a group of order 15 has only one structure: $\Z_{15} \cong \Z_3 \times \Z_5$, I wondered if $\Z_{689}$ is the only possible structure for a group of order 689. Problem In my notes, the external semidirect product $G_1 \rtimes_\gamma G_2$ of two groups $G_1$ and $G_2$ with respect to a homomorphism $\gamma: G_2 \to \Aut G_1$, is defined as \begin{multline} \forall\, x_1,y_1 \in G_1, \forall\, x_2,y_2 \in G_2, (x_1,x_2) \times_{G_1 \rtimes_\gamma G_2} (y_1,y_2) \\ = (x_1 \times_{G_1} \gamma(x_2)(y_1), x_2 \times_{G_2} y_2). \end{multline} Why don’t we write $(x_1,y_1)$ and $(x_2,y_2)$ instead? Background I’m recently enhancing the $\rm \LaTeX$ code for inline limits. For the reason for doing so, you may refer to the external link of my recent linklog Inline Limit Rendering.
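The point that $\det(2(AB+BA))$ is not determined by the hypotheses can also be checked numerically. Here is a small illustration (in Python rather than the post's Octave; the matrices are my own choice) showing the determinant can be nonzero even though $A$ is singular, complementing the $A=0$ case where it is zero:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0]])  # singular: det(A) = 0
B = np.array([[1.0, 1.0], [0.0, 1.0]])  # nonsingular

# (A+B)^2 - (A-B)^2 expands to 2(AB + BA)
M = (A + B) @ (A + B) - (A - B) @ (A - B)
assert np.allclose(M, 2 * (A @ B + B @ A))

print(np.linalg.det(M))  # nonzero here, so det(A) = 0 alone forces nothing
```

With these matrices $2(AB+BA) = \begin{pmatrix}8 & 18\\ 8 & 20\end{pmatrix}$, whose determinant is 16; together with the $A = 0$ example, this shows the expression cannot be simplified to a single value.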
Problem In the previous post in this series, written over one year ago, I have included a code block which enables deferred MathJax loading. However, I manually added this chunk of code in the HTML file generated by kramdown, which created the problem described in the next subsection. A problem with Vim’s folding arose. Solution Firstly, save the code for loading MathJax in the previous post in this series in a separate file ~/script.html. Then use the following commands within Vim in order to avoid leaving the current buffer and to improve efficiency.
9,$w! ~/temp.mkd
!kramdown ~/temp.mkd > ~/temp.html
!cat ~/{temp,script}.html > ~/test.html
The digit 9 in the first command isn’t exact. Change it to any line number that separates the yaml front matter from the post content. Problem In the past, I knew two ways of writing a limit using $\rm \LaTeX$. $\lim_{x \to 1} \frac{1}{x^2}$ looks OK. $\lim_{x \to 1} \frac{1}{x^2}$ $\displaystyle \lim_{x \to 1} \frac{1}{x^2}$ looks better, but it occupies more than one line’s vertical space. $\displaystyle \lim_{x \to 1} \frac{1}{x^2}$ For option (1), including limits in inline equations by _ doesn’t look good since $x \to 1$ isn’t placed at the bottom of $\lim$. If we want the text to occupy less space to save paper, then option (2) isn’t good. In order to see another drawback of this option, I have written some long (and meaningless) sentences here, so that the fraction in this paragraph appears in the middle. Although I seldom write in English, I have tried my best to illustrate my ideas with words. The vertical space created by the fraction in display style $\displaystyle \frac{1}{x^2}$ doesn’t match the line separation of other lines in the paragraph. If you have already reached this line but you don’t understand what I’m saying, I’ll write more so as to wrap the fraction with a chunk of text. Goal To create an inline limit $\lim\limits_{x \to 1} \frac{1}{x^2}$ which looks better in the middle of a paragraph. Fames ac turpis egestas.
Duis ultricies urna. Etiam enim urna, pharetra suscipit, varius et, congue quis, odio. Donec lobortis, elit bibendum euismod faucibus, $\lim\limits_{x \to 1} \frac{1}{x^2}$ velit nibh egestas libero, vitae pellentesque elit augue ut massa. Praesent vel ligula. Nam venenatis neque quis mauris. Proin felis. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Aliquam quam. Background I reviewed my old post on power means inequalities. Problem As the MathJax tutorial on Math Meta SE pointed out, the correct $\rm \LaTeX$ syntax for | in {} denoting a set should be \mid. However, the | in {} doesn’t match the fraction. $$\max\left\{\frac{1}{a_i} \mid i = 1,\dots,k \right\}$$ gives Goal I need to change it back to Background Same as the previous post in this series, except that I ran this command from M$ Win* 10. Problem Similar to the previous post.
Owner@Owner-PC MINGW64 /c/github/blog2 (gh-pages)
$ jekyll serve
WARN: Unresolved specs during Gem::Specification.reset:
      pygments.rb (~> 0.6.0)
      jekyll-watch (~> 1.1)
WARN: Clearing out unresolved specs.
Please report a bug if this causes problems.
C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/bundler-1.7.2/lib/bundler/resolver.rb:357:in `resolve': Could not find gem 'jekyll (~> 3.1) x64-mingw32' in the gems available on this machine.
(Bundler::GemNotFound)
 from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/bundler-1.7.2/lib/bundler/resolver.rb:164:in `start'
 from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/bundler-1.7.2/lib/bundler/resolver.rb:129:in `resolve'
 from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/bundler-1.7.2/lib/bundler/definition.rb:193:in `resolve'
 from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/bundler-1.7.2/lib/bundler/definition.rb:132:in `specs'
 from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/bundler-1.7.2/lib/bundler/definition.rb:177:in `specs_for'
 from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/bundler-1.7.2/lib/bundler/definition.rb:166:in `requested_specs'
 from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/bundler-1.7.2/lib/bundler/environment.rb:18:in `requested_specs'
 from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/bundler-1.7.2/lib/bundler/runtime.rb:13:in `setup'
 from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/bundler-1.7.2/lib/bundler.rb:121:in `setup'
 from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/jekyll-2.5.3/lib/jekyll/plugin_manager.rb:37:in `require_from_bundler'
 from C:/Ruby200-x64/lib/ruby/gems/2.0.0/gems/jekyll-2.5.3/bin/jekyll:16:in `<top (required)>'
 from C:/Ruby200-x64/bin/jekyll:23:in `load'
 from C:/Ruby200-x64/bin/jekyll:23:in `<main>'
Goal To know the page layout of a website, notably my blogs, on mobile devices before publishing it. Problem I used ifconfig to check the IP address of the desktop where the preview site was hosted. It’s 192.168.1.5. When I typed in this address followed by a colon and the port number 4000, the browser said “connection timeout” after loading for a while. Background A preview of a blog is often needed before it’s published. Problem However, it is possible that one doesn’t like to type localhost in the address bar, and would like to use other names. Solution The idea is in Local Setup – edit your hosts file in How to test localhost from any device on your network, written by Wes Bos.
On *nix, the file path is still the same as on OSX, but many users would rather use Vim for editing /etc/hosts. One may even use sed with the -i flag and the sudo privilege in order to manipulate this file directly.
sudo sed -i "3i127.0.0.1\tblogtest.com" /etc/hosts
for insertion of “127.0.0.1 blogtest.com” before the third line.
sudo sed -i "3c127.0.0.1\tblogtest.com" /etc/hosts
for changing the third line to “127.0.0.1 blogtest.com”. Note that the tab is escaped as \t in these two commands.
Every time you're asked to differentiate $a^x$ and you say $xa^{x-1}$, a kitten dies. A kitten that had been raised lovingly by an orphan, her only friend in the whole wide world - DEAD, just because you couldn't be bothered to learn how to do calculus properly. I hope the Read More → It's Friday, which means... It's free for all Friday time! Any questions you've got, about maths or anything else, pop them in the comments. I'm in the middle of the deadline doldrums (the next Dummies book won't write itself, you know!) but I'll pop in once in a while to Read More → If I could wave a magic wand and overhaul just one thing to make the world a better place, I'd have a tough choice. Would I get rid of the QWERTY keyboard in favour of a more sensible layout? Would I make the English language fonetik? Would I take maths Read More → There's a very typical question in C3 papers that looks something like: "Express $3 \sin (x) + 5 \cos (x)$ in the form $R \sin (x + \alpha)$." (Sometimes it's a different trig function, or it may have a minus sign in it, but the same principle works for any Read More → It's C1 day (and C2 for many)! I hope it's gone well if you were sitting it today. Anything that bothered you about the paper? It's your chance to pick my brain about whatever's on your mind about maths. Got a question you've been struggling with? Got a topic you Read More → Today's big question is about poker. For some reason, statistics books shy away from gambling, reasoning that it's somehow harmful or evil. It's true, it can be addictive (although so can, for instance, using the computer or reading books) but it's actually the whole reason statistics exists.
(This Italian lad, Read More → Part I: Basic shapes of Core 1 graphs There are a handful of basic shapes of graphs you need to know about for C1, namely: - reciprocal graphs ($y = \frac{1}{x}$) - reciprocal-square graphs ($y = \frac{1}{x^2}$) - straight line graphs ($y = x$) - you know this one, right? Read More → Welcome to the first FFAF of the new year! It's your chance to pick my brain about whatever's on your mind about maths. Got a question you've been struggling with? Got a topic you just can't get your head around? Found a great maths resource I might not have seen? Read More → I turned over the paper and froze. Up until that point I'd been convinced that maths was, and always would be, easy. As such, I'd done next to no revision for this paper, which I needed to do well in to qualify for my next year of studies. This, I Read More →
Given a language $L$, I am looking for a method to evaluate the advantage of an automaton for deciding $L$. My goal is to decide a language $L$ (and maybe it is not decidable by automata). If one constructs an automaton $A_{1}$ whose language is $L_{1}$, I want to know the advantage of $A_{1}$ for deciding $L$. If there is a distance or metric between two languages, $d(\cdot, \cdot)$, I can define $\mathrm{Adv}_{L}(A_{1}) = d(L_{1}, L)$. Thus, we can say $A_{1}$ is better than $A_{2}$ with respect to $L$ if $\mathrm{Adv}_{L}(A_{1}) > \mathrm{Adv}_{L}(A_{2})$. In my opinion, the following conditions should be satisfied at least. 1. Non-negativity and identity of indiscernibles: $d(L_{1}, L_{2}) \geq 0$, with $d(L_{1}, L_{2}) = 0$ iff $L_{1} = L_{2}$. 2. Symmetry: $d(L_{1}, L_{2}) = d(L_{2}, L_{1})$. 3. If $L_{1} \cap L \subseteq L_{2} \cap L$ and $L_{1} \setminus L = L_{2} \setminus L$, then $d(L,L_{1}) \leq d(L,L_{2})$. 4. If $L_{1} \cap L = L_{2} \cap L$ and $L_{1} \setminus L \subseteq L_{2} \setminus L$, then $d(L,L_{1}) \geq d(L,L_{2})$. Note that I am not sure whether conditions 3 and 4 above are suitable. The following may be helpful.
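One concrete candidate worth playing with (my own illustration, not from the question, and using the usual convention that smaller distance means closer) is the symmetric-difference measure restricted to a finite sample of words:

```python
def sample_distance(L1, L2, universe):
    # normalized symmetric difference, restricted to a finite sample of words;
    # 0 iff the two languages agree on the sample, and symmetric by construction
    return len((L1 ^ L2) & universe) / len(universe)

U = {"a", "b", "ab", "ba", "aa", "bb"}   # finite sample of the word universe
L = {"ab", "ba"}                          # the target language, restricted to U
L1 = {"ab"}                               # captures part of L, no extra words
L2 = {"ab", "aa"}                         # same part of L, plus one word not in L

assert sample_distance(L1, L1, U) == 0                            # identity
assert sample_distance(L1, L2, U) == sample_distance(L2, L1, U)   # symmetry
assert sample_distance(L, L1, U) < sample_distance(L, L2, U)      # extra junk costs
```

This satisfies the metric axioms on the sample; whether its monotonicity behavior matches the intended direction of conditions 3 and 4 depends on whether $d$ is meant as a distance or as a similarity score.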
The concept of order of integration is not uniquely defined, and there exist multiple definitions that to a large degree overlap. Some of the more hand-wavy definitions rather imprecisely define it as "the number of differences needed to achieve stationarity". Engle and Granger (1987) and Johansen (1995) define the concept more precisely: A series with no deterministic component which has a stationary, invertible ARMA representation after differencing $d$ times is said to be integrated of order $d$ (Engle and Granger, 1987). A stochastic process $Y_t$ which satisfies $Y_t-E(Y_t)=\sum_{i=0}^\infty C_i\epsilon_{t-i}$ is called $I(0)$ if $\sum_{i=0}^\infty C_iz^i$ converges for $|z|<1$ and $\sum_{i=0}^\infty C_i\neq 0$ (Johansen, 1995). One important implication stemming from these definitions is that an MA(1) process with $\theta=1$ is not $I(0)$ (because it is not invertible, and because $\sum_{i=0}^\infty C_i=1-1=0$). So according to these definitions, not all weakly stationary processes are $I(0)$. The definition on Wikipedia is therefore neither correct nor incorrect; it is merely one definition. For more discussion, see Davidson (2009). Davidson, James (2009) "When is a time series I(0)?". Chapter 13 of The Methodology and Practice of Econometrics, a festschrift for David F. Hendry, edited by Jennifer Castle and Neil Shepherd, Oxford University Press. https://people.exeter.ac.uk/jehd201/WhenisI0.pdf Engle, Robert & Granger, Clive (1987) Co-Integration and Error Correction: Representation, Estimation, and Testing. Econometrica, 55(2), 251-276. doi:10.2307/1913236 Johansen, Soren (1995) "Likelihood-Based Inference in Cointegrated Vector Autoregressive Models", Oxford University Press. https://www.oxfordscholarship.com/view/10.1093/0198774508.001.0001/acprof-9780198774501
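The non-invertible MA(1) in question is exactly what overdifferencing produces: differencing white noise (which is already $I(0)$) yields an MA(1) with a unit root in the MA polynomial. A short simulation (my own illustration) shows the tell-tale lag-1 autocorrelation of $-1/2$:

```python
import numpy as np

rng = np.random.default_rng(42)
eps = rng.standard_normal(200_000)  # white noise: already I(0)
x = np.diff(eps)                    # overdifferenced: MA(1) with a unit MA root

# theoretical lag-1 autocorrelation of this MA(1) is -1/2
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(r1, 3))  # close to -0.5
```

Seeing a sample lag-1 autocorrelation near $-0.5$ after differencing is a classic diagnostic that the series was differenced once too often, i.e. that the differenced series is stationary but not $I(0)$ in the Engle-Granger/Johansen sense.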
Dear Uncle Colin, I got stuck on this sector question, which asks for the radius of circle $P$, which touches sector $ABC$ as shown. I'm given that $ABC$ is a sector of a circle with centre $A$ with radius 12cm, and that angle $BAC$ is $\frac{\pi}{3}$. My answer was 3.8cm, Read More → Another horde of zombies lumbered into view. "What are they saying?" asked the first, readying the shotgun as he'd done a hundred times before. "Something about the calculator exam," said the second. "It's hard to make out." He pulled some spare shells from his bag. "Calculator papers are easier!" groaned Read More → Dear Uncle Colin, I'm struggling with a STEP question. Any ideas? Given: 1. $q^2 - pr = -3k$ 2. $r^2 - qp = -k$ 3. $p^2 - rq = k$ Find p, q and r in terms of k. - Simultaneous Triple Equation Problem Hi, STEP, and thanks for your Read More → An implicit differentiation question dealt with $y^4 - 2x^2 + 8xy^2 + 9 = 0$. Differentiating it is easy enough for a competent A-level student - but what does the curve look like? That requires a bit more thought. My usual approach to sketching a function uses a structure I Read More → Dear Uncle Colin, How would you find $\sqrt[4]{923521}$ without a calculator? -- Some Quite Recherché Technique Hi, SQRT! I have a few possible techniques here. The first is "do some clever stuff with logarithms", the second is "do some clever stuff with known squares" and the last is "do some Read More → In this episode of Wrong, But Useful: We're joined by @ajk_44, who is Alison Kiddle from NRICH in real life. We ask Alison: how long has NRICH been going? How do you tell which problems you've covered before? Colin's number of the podcast is 13,532,396,179 (he mistakenly calls it quadrillions Read More → I imagine, if one put one's mind to it, one could acquire copies of this year's paper online - however, many schools plan to use it as a mock for next year's candidates.
In view of that, and at the request of my top-secret source, I'm not sharing the actual Read More → Dear Uncle Colin, I've been asked to solve Chebyshev's equation using a series expansion: $(1-x^2)\diffn{2}{y}{x} - x\dydx + p^2 y = 0$ assuming $y=C_0 + C_1 x + C_2 x^2 + ...$. I end up with the relation $C_{N+2} = \frac{C_N \left(N^2 -p^2\right)}{(N+2)(N+1)}$, but the given answer has a + Read More → This is a rolling update post for responses to this morning's post. The Admirable Adam Atkinson has emailed to suggest an answer I hadn't considered: 100. "I could imagine many programming languages would say 100. You start with the first term, 100. discover that the "next" number, 101, is outside Read More → "Counting is hard. This is what I keep saying." - @realityminus3 It all stemmed from an arithmetic series problem with a known sum, but an unknown number of terms. As these things are prone to do, it led to a quadratic equation; as those things are prone to do, that Read More →
I've been thinking of the relation between elliptic curves of large rank and quadratic imaginary fields with large 3-rank class groups. There are quite a few papers constructing infinitely many quadratic imaginary fields with class groups having torsion subgroups of the form $\mathbb{Z}_p^r$, by using elliptic or hyperelliptic curves with such rational torsion. I have read these papers intently. In this question I would like to focus on the other way around - constructing elliptic curves with large rank using quadratic imaginary fields with large 3-rank class groups. I do not recall seeing such a construction (excluding trivial examples for rank 2). Let's get into the specifics of this specific question: Let $D$ be a negative squarefree number, $K$ the associated quadratic field, and to make notation light assume that $O_K = \mathbb{Z}[\sqrt{D}]$. Say the class group has 3-rank $r$ generated by ideals $I_i$, $I_i^3 = (a_i+b_i\sqrt{D})$, $N(I_i)=x_i$, $i=0,...,r-1$. We have the following equations: $$a_i^2+b_i^2|D| = x_i^3$$ If the numbers $b_i$ were all equal, these equations would give $r$ points $(x_i,a_i)$ on the elliptic curve $$y^2=x^3-b^2|D|$$ And if all was relatively normal, these points would be independent because of the independence in the class group. If the $b_i$'s aren't equal, we can try to move around in the ideals' classes. Meaning, we can multiply the equations by cubes of integers in $O_K$. Before diving into the arbitrary-large-case, let's start with $r=2$. Therefore the question is this: Let $I_1, I_2$ be two ideals of order 3 in the class group of a quadratic imaginary order $\mathbb{Z}[\sqrt{D}]$, generating a subgroup of order 9. $I_i^3=(a_i+b_i\sqrt{D})$. Do there exist $\alpha, \beta \in \mathbb{Z}[\sqrt{D}]$ such that: $$\text{Im}( (a_1+b_1\sqrt{D})\alpha^3) = \text{Im}((a_2+b_2\sqrt{D})\beta^3)$$ P.S. The question is simple to state and does not immediately concern elliptic curves, even if the motivation does. 
I am not looking for papers on construction of large rank elliptic curves or quadratic class groups.
(Ed. Note: A pdf version of this article is attached at the end of the post for offline reading.) Introduction and Preliminaries Graphs are objects like any other, mathematically speaking. We can define operations on two graphs to make a new graph. We’ll focus in particular on one type of graph product – the Cartesian product – and its elegant connection with matrix operations. We mathematically define a graph G to be a set of vertices coupled with a set of edges that connect those vertices. We write G = (V_{G}, E_{G}). As an example, the left graph in Figure 1 has three vertices V_{G} = \{v_{1}, v_{2}, v_{3}\}, and two edges E_{G} = \{v_{1}v_{2}, v_{2}v_{3}\}. The order of G is the number of vertices, and the size of G is the number of edges. So our graph G has order n=3 and size m=2. This graph in particular has a special name, P_{3}, because it’s a special type of graph called a path that consists of 3 vertices. Two vertices that are connected by an edge are adjacent, denoted with the symbol \sim. Cartesian Product of Graphs Now that we’ve dispensed with the necessary terminology, we shall turn our attention to performing operations on two graphs to make a new graph – in particular, a type of graph multiplication called the Cartesian product. Suppose we have two graphs, G and H, with orders n_{G} and n_{H} respectively. Formally, we define the Cartesian product G \times H to be the graph with vertex set V_{G} \times V_{H}. Pause here to note what we mean by this. The vertex set of the graph Cartesian product is the Cartesian product of the vertex sets of the two graphs: V_{G \times H} = V_{G} \times V_{H}. This means that the Cartesian product of two graphs has n_{G}n_{H} vertices. As an example, P_{3} \times P_{2} has 3\cdot 2 = 6 vertices. The vertices of G \times H are ordered pairs formed by V_{G} and V_{H}.
For P_{3} \times P_{2} we have, V_{P_{3} \times P_{2}} = \{(v_{1}, x_{1}),(v_{1}, x_{2}),(v_{2}, x_{1}),(v_{2}, x_{2}), (v_{3}, x_{1}),(v_{3}, x_{2}) \} The edge set of G \times H is defined as follows: (v_{i}, x_{k}) is adjacent to (v_{j}, x_{l}) if v_{i} = v_{j} and x_{k} \sim x_{l}, or x_{k} = x_{l} and v_{i} \sim v_{j} Let’s create P_{3} \times P_{2} as an example: The red edges are due to condition (1) above, and the remaining edges to condition (2). Interestingly enough, this operation is commutative up to isomorphism (a relabeling of vertices that maintains the graph structure). This will be examined in a later section. Graphs and Matrices We can also operate on graphs using matrices. The pictures above are one way to represent a graph. An adjacency matrix is another. The adjacency matrix A_{G} of a graph G of order n is an n \times n square matrix whose entries a_{ij} are given by a_{ij} = \begin{cases} 1 & v_{i} \sim v_{j} \\ 0 & \text{otherwise} \end{cases} The adjacency matrix for P_{3} and P_{2} respectively areA_{P_{3}} = \begin{bmatrix} 0 & 1 & 0\\ 1 & 0 & 1\\0 & 1 & 0\end{bmatrix} \qquad A_{P_{2}} = \begin{bmatrix} 0 & 1\\1 & 0\end{bmatrix} Note that a relabeling of vertices simply permutes the rows and columns of the adjacency matrix. Adjacency Matrix of the Cartesian Product of Graphs What is the adjacency matrix of G \times H? If G and H have orders n_{G} and n_{H} respectively, we can show thatA_{G \times H} = A_{G} \otimes I_{n_{H}} + I_{n_{G}} \otimes A_{H} where \otimes is the Kronecker product, and I_{m} is the m\times m identity matrix. Recall that the Kronecker product of two matrices A_{m \times n} \otimes B_{k \times l} is the mk \times nl matrix given byA \otimes B = \begin{bmatrix} a_{11}B & a_{12}B &\ldots & a_{1n}B \\ a_{21}B & a_{22}B & \ldots & a_{2n}B \\ \vdots & & \ddots& \vdots \\ a_{m1}B & a_{m2}B & \ldots & a_{mn}B\end{bmatrix} In general, the Kronecker product is not commutative, so A \otimes B \neq B \otimes A.
We can prove the formula for A_{G \times H} above using standard matrix theory, but this really wouldn’t be that illuminating. We’d see that the statement is indeed true, but we would gain very little insight into how this affects the graph and why. We shall break apart the formula to see why the Kronecker product is needed, and what the addition is doing. Let us return to finding P_{3} \times P_{2}. By our formula, A_{P_{3} \times P_{2}} = A_{P_{3}} \otimes I_{2} + I_{3} \otimes A_{P_{2}}. Notice first why the dimensions of the identity matrices “cross” each other; that is, we use the identity matrix of order n_{P_{2}} in the Kronecker product with A_{P_{3}}, and vice versa n_{P_{3}} for A_{P_{2}}. This ensures we get the correct dimension for the adjacency matrix of P_{3} \times P_{2}, which is 6 \times 6. We’ll walk through the computation one term at a time. First,\begin{aligned}A_{P_{3}} \otimes I_{2} &= \begin{bmatrix} 0 & 1 & 0\\ 1 & 0 & 1\\0 & 1 & 0\\\end{bmatrix} \otimes \begin{bmatrix} 1 & 0\\0 & 1\end{bmatrix} \\&= \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0\\\end{bmatrix}\end{aligned} This matrix is the adjacency matrix of a 6-vertex graph given below: Notice that we now have two copies of P_{3} in one graph. Similarly, I_{3} \otimes A_{P_{2}} is\begin{aligned}I_{3} \otimes A_{P_{2}} &= \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 1 & 0\end{bmatrix}\end{aligned} Here the second Kronecker product term has made three copies of P_{2}. The last step is to add these two matrices together to obtain the adjacency matrix for P_{3} \times P_{2}. Graphically, what’s happening? 
This addition term overlays our 3 copies of P_{2} (usually written 3P_{2} or P_{2} \cup P_{2} \cup P_{2}) onto the respectively labeled vertices of our two copies of P_{3} (written 2P_{3} or P_{3} \cup P_{3}). That is,\begin{aligned}A_{P_{3} \times P_{2}} &= A_{P_{3}} \otimes I_{2} + I_{3} \otimes A_{P_{2}}\\&= \begin{bmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 0 \\1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1\\ 0 & 0 & 0 & 1 & 1 & 0\end{bmatrix}\end{aligned} which is the adjacency matrix of our originally obtained graph. Note that here the vertex labels aren’t ordered pairs. That’s OK. Technically we shouldn’t have labeled the vertices of the two graphs identically, but the goal was to illustrate the action of the Cartesian product. We can always relabel the vertices. Formally, the overlaying of 3P_{2} onto the vertices of 2P_{3} would be forming those ordered-pair vertices. Commutativity of the Graph Cartesian Product A natural question for any operation is whether or not it possesses the property of commutativity. An operation is commutative if the order in which we take it produces the same result. That is, if \square is an operation, a\square b = b\square a. Does G \times H yield the same graph as H \times G? In a way, yes. Let’s examine what happens if we switch the order of the graph Cartesian product and take P_{2} \times P_{3}. We won’t go quite as thoroughly through each step, and instead give the results. We know the adjacency matrix of P_{2} \times P_{3} is given byA_{P_{2} \times P_{3}} = A_{P_{2}} \otimes I_{3} + I_{2} \otimes A_{P_{3}} \begin{aligned}A_{P_{2}} \otimes I_{3} &= \begin{bmatrix}0 & 0 & 0 & 1 & 0 & 0\\0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0\end{bmatrix}\end{aligned} Notice again we have 3 copies of P_{2}, but the labeling and drawing is now different. 
If we create a mapping \phi that relabels v_{4} as v_{2}, v_{6} as v_{4}, and v_{2} as v_{6}, then we’d get back the graph we drew from the adjacency matrix formed by I_{3} \otimes A_{P_{2}}. Thus, the two graphs are structurally equivalent; it just looks different due to labeling and redrawing. The mapping \phi is called an isomorphism because it transforms one graph into the other without losing any structural properties, and the two graphs are isomorphic. This implies that A_{P_{2}} \otimes I_{3} \neq I_{3} \otimes A_{P_{2}}, but if we permute rows and columns, we can transform one matrix into the other. If the labelings on the vertices didn’t matter, then the operation is commutative up to isomorphism. If the labelings matter (more common in engineering), then we do not get the “same” graph by switching the order of the Kronecker product. We’ll do the same thing and examine I_{2} \otimes A_{P_{3}} and compare it to A_{P_{3}}\otimes I_{2}:\begin{aligned}I_{2} \otimes A_{P_{3}} &= \begin{bmatrix}0 & 1 & 0 & 0 & 0 & 0\\1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0\end{bmatrix}\end{aligned} The corresponding graph is Notice again that here we have two copies of P_{3} just as when we created the graph formed by the matrix A_{P_{3}}\otimes I_{2}, but the labels are again permuted. We have generated an isomorphic graph. Finally, we add the two matrices together (or follow the definition to create the last step of the Cartesian product by overlaying vertices and fusing them together).\begin{aligned}A_{P_{2}} \otimes I_{3} + I_{2} \otimes A_{P_{3}} &= \begin{bmatrix}0 & 1 & 0 & 1 & 0 & 0\\1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 0 \end{bmatrix}\end{aligned} This is isomorphic to P_{3} \times P_{2}. We can show this by finding a mapping \phi that “untangles” P_{2} \times P_{3} by relabeling vertices. 
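The isomorphism claim can also be checked at the matrix level: the relabeling (v_{i}, x_{k}) \mapsto (x_{k}, v_{i}) corresponds to a permutation matrix P that conjugates one adjacency matrix into the other. A short NumPy sketch (same illustrative indexing as before: (v_{i}, x_{k}) \mapsto 2i + k in one product, (x_{k}, v_{i}) \mapsto 3k + i in the other):

```python
import numpy as np

A_P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # P3
A_P2 = np.array([[0, 1], [1, 0]])                   # P2
I2, I3 = np.eye(2, dtype=int), np.eye(3, dtype=int)

A_32 = np.kron(A_P3, I2) + np.kron(I3, A_P2)  # P3 x P2, vertex (v_i, x_k) -> 2*i + k
A_23 = np.kron(A_P2, I3) + np.kron(I2, A_P3)  # P2 x P3, vertex (x_k, v_i) -> 3*k + i

# The relabeling (v_i, x_k) -> (x_k, v_i) as a permutation matrix:
P = np.zeros((6, 6), dtype=int)
for i in range(3):
    for k in range(2):
        P[3 * k + i, 2 * i + k] = 1

# Conjugating by P turns one adjacency matrix into the other,
# so the two products are isomorphic.
assert (P @ A_32 @ P.T == A_23).all()
```

This is exactly the "permute rows and columns" remark made earlier: the two adjacency matrices differ only by a simultaneous row/column permutation.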
We can also redraw the ladder structure we saw with P_{3} \times P_{2} and label the vertices so that we get the structure of P_{2} \times P_{3}. Our conclusion is that the graph Cartesian product is “sort of” commutative as an operation. If the vertices are unlabeled, or the labels don’t matter, then the operation is commutative because we can always relabel vertices or “untangle” the structure. If the vertices are labeled and the labels matter, then we don’t get the same graph back when we switch the order in the Cartesian product. Thus, we can say that the graph Cartesian product is commutative up to isomorphism. Continuation A natural question after looking through all of this is the following: Given an adjacency matrix, can we decompose it into a Cartesian product of two graphs? The next article will address this. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV (Springer-verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at s√=7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ... 
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Stationary wave solutions for new developed two-waves’ fifth-order Korteweg–de Vries equation. Advances in Difference Equations, volume 2019, Article number: 263 (2019). Abstract In this work, we present a new two-waves’ version of the fifth-order Korteweg–de Vries model. This model describes the propagation of moving two-waves under the influence of dispersion, nonlinearity, and phase velocity factors. We seek possible stationary wave solutions to this new model by means of the Kudryashov-expansion method and the sine–cosine function method. Also, we provide a graphical analysis to show the effect of phase velocity on the motion of the obtained solutions. Introduction Stationary wave solutions for nonlinear equations play an important role in understanding many mathematical models arising in physics and applied sciences. These solutions have been developed and categorized to fit many of the physical settings studied (see [1]). For example, the authors of [2,3,4] used rogue soliton-waves to study the coupled variable-coefficient fourth-order nonlinear Schrödinger equations in an inhomogeneous optical fiber and coupled Sasa–Satsuma equations. Hu et al. in [5] explored the mixed lump–kink and rogue wave–kink solutions for a \((3+1)\)-dimensional B-type Kadomtsev–Petviashvili equation in fluid mechanics. Further, many interesting soliton-type solutions for physical applications that arise in plasma, surface waves of finite depth, and optical fiber were studied by researchers in, e.g., [6,7,8,9]. The model of interest here is the generalized fifth-order KdV equation \(w_{t}+a_{1} w^{2} w_{x}+a_{2} w_{x} w_{xx}+a_{3} w w_{xxx}+w_{xxxxx}=0, \quad (1.1)\) where \(w=w(x,t)\) and \(a_{1}\), \(a_{2}\), \(a_{3}\) are some arbitrary constants. The fifth-order KdV equation (1.1) is a hybrid mathematical model with wide applications to surface and internal waves in fluids [11], as well as to waves in other media [12,13,14]. Special cases of (1.1) are widely used in different branches of sciences such as fluid physics, plasma physics, and quantum theory. 
For instance, when \(a_{1}=\frac{3}{10} a_{3}^{2}\), \(a_{2}=2 a _{3}\), \(a_{3}=10\), equation (1.1), called Lax equation, was studied in [15]. Also, when \(a_{1}=\frac{2}{5} a_{3}^{2}\), \(a_{2}=a _{3}\), \(a_{3}=5\), this equation is called Sawada–Kotera equation and was solved in [16]. In addition, the authors of [17] obtained the solution to (1.1) for \(a_{1}= \frac{1}{5}a_{3}^{2}\), \(a_{2}=a_{3}\), \(a_{3}=10\), which is known as the Kaup–Kupershmidt equation. Later on, under the assumption \(a_{1}=\frac{2}{9}a_{3}^{2}\), \(a_{2}=2a_{3}\), \(a_{3}=3\), the Ito equation was investigated in [18]. For the case \(a_{1}=45\), \(a_{2}=- \frac{75}{2}\), \(a_{3}=-15\), it is called Kaup–Kupershmidt–Parker–Dye equation [19]. The solution to Caudrey–Dodd–Gibbon equation was found in [20] provided that \(a_{1}=180\), \(a_{2}=30\), \(a _{3}=30\). Finally, for \(a_{1}=45\), \(a_{2}=-15\), \(a_{3}=-15\), (1.1) is called Sawada–Kotera–Parker–Dye equation, which was explored in [21]. The purpose of listing the aforementioned classifications of (1.1) is to highlight the importance and merit of studying new versions of the model, and also to explore its physical features. Now, we proceed to present for the first time the two-waves’ version of (1.1) by applying the operators respectively, on the expressions \(a_{1} w^{2} w_{x}+a_{2} w_{x} w_{xx}+a _{3} w w_{xxx}\) and \(w_{xxxxx}\) and extending the term \(w_{t}\) into the expression \(w_{tt}-s^{2} w_{xx}\). Therefore, the two-wave fifth-order KdV (TWfKdV) is where α, β, and s are the nonlinearity, dispersion, and phase velocity, respectively, with \(|\alpha | \leq 1\), \(|\beta | \leq 1\), and \(s \geq 0\). If we set \(s=0\) in (1.3) and integrate once with respect to time t, the TWfKdV equation is reduced to the fifth-order KdV equation (1.2) for the description of a single-wave propagating in one direction only. To learn about constructing two-mode equations, the reader is advised to read [22,23,24,25,26,27,28,29,30,31]. 
The two-wave equation (1.3) describes the spread of moving two-waves under the influence of dispersion, nonlinearity, and phase velocity factors. We aim to seek possible solutions for (1.3) by implementing two techniques, the Kudryashov-expansion method and sine–cosine function method. Also, we study the effect of phase velocity on the motion of the obtained solutions. Both techniques require converting (1.3) by means of the new variable \(\zeta =x-c t\) into the differential equation where \(w=w(\zeta )\). Kudryashov solutions of TWfKdV where variable Z satisfies the differential equation Solving (2.2) gives where d is a nonzero free constant. The index n is to be determined by applying the order-balance procedure of the linear term \(w^{(5)}\) against the nonlinear term \(w^{2} w'\), which gives that \(n=2\). Therefore, we can write (2.1) as and Now, we insert (2.2) through (2.6) into (1.4) to get a finite polynomial in Z. By setting each coefficient of \(Z^{i}\) to zero, a nonlinear algebraic system with unknowns \(A_{0}\), \(A_{1}\), \(A_{2}\), μ, c is obtained. We cannot solve the resulting system unless we consider some restrictions on the coefficients \(a_{1}\), \(a_{2}\), \(a_{3}\) and the parameters α, β. Kudryashov-Case I The first solution for the TWfKdV (1.3) exists when the coefficients are assigned as and the two-mode parameters have the relation Hence, Therefore, the first obtained solution is Figure 1 presents 3D plots of the two-waves depicted in (2.8) upon increasing the phase velocity s. Figure 2 is a 2D plot of (2.8) when coordinate x is fixed. It can be seen that these two waves can be regarded as left–right waves (having opposite directions). Kudryashov-Case II When we take the coefficients and the two-wave parameters satisfy \(\alpha =\beta \), then the second solution for (1.3) is reached. 
Accordingly, Thus, the second obtained solution is Kudryashov-Case III It is worth mentioning that when the two-waves’ parameters satisfy \(\alpha =\beta =\pm 1\), the third solution for TWfKdV (1.3) (with no restrictions on the coefficients \(a_{1}\), \(a_{2}\), \(a_{3}\)) is obtained. So, which gives that the third obtained solution is Sine–cosine solution of TWfKdV or To determine the values of A, p, μ and c, we substitute (3.1) or (3.2) in (1.4), and then collect the coefficients of same powers of \(\sin ^{i}\) or \(\cos ^{i}\) and set each to zero. In fact, we have an algebraic system with \(\alpha =\beta \), namely Solving (3.3) requires \(p=-2\), and the TWfKdV’s coefficients are So, we deduce that the wave speed c is Therefore, two periodic-type solutions are Conclusion A new two-wave version of the generalized fifth-order KdV problem is established. This new model possesses two directional waves with interacting phase velocity. We obtained different solutions of this new model under particular choices of the coefficients \(a_{1}\), \(a_{2}\), \(a_{3}\), and the constraint condition \(\alpha =\beta =d\) with \(|d|<1\). Also, we studied the impact of increasing the phase velocity on the shape of spreading its two-waves. The following findings are recorded: For \(a_{1}=\frac{6 \mu ^{2} a_{3}}{A_{1}}\), \(a_{2}=\frac{60 \mu ^{2}-A_{1} a_{3}}{A_{1}}\), \(a_{3}= \mathtt{free}\), and \(\alpha =\beta \), the TWfKdV is a soliton-type. For \(a_{1}=\frac{(183- 7\sqrt{849}) \mu ^{4}}{8 A_{0}^{2}}\), \(a_{2}=\frac{(443- 7\sqrt{849}) \mu ^{2}}{8 A_{0}}\), \(a_{3}= \frac{-13 \mu ^{2}}{A_{0}}\), and \(\alpha =\beta \), the TWfKdV is a kink-type. For arbitrary \(a_{1}\), \(a_{2}\), \(a_{3}\) and \(\alpha =\beta =\pm 1\), the TWfKdV is a kink-type. For \(a_{1}=-\frac{6 a_{3} \mu ^{2}}{A}\), \(a_{2}=-\frac{A a_{3} +60 \mu ^{2}}{A}\) and \(\alpha =\beta \), the TWfKdV is a singular periodic-type. 
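The display equations (2.2) and (2.3) did not survive extraction here. In Kudryashov's method [35] the auxiliary function is \(Z(\zeta )=1/(1+d e^{\zeta })\), which satisfies \(Z'=Z^{2}-Z\); treating that standard form as an assumption taken from [35] rather than recovered from this text, a quick SymPy check confirms it:

```python
import sympy as sp

zeta, d = sp.symbols('zeta d', nonzero=True)

# Standard Kudryashov auxiliary function (assumed form, cf. [35],
# since the displayed equations (2.2)-(2.3) are missing above):
Z = 1 / (1 + d * sp.exp(zeta))

# It solves the auxiliary ODE Z' = Z**2 - Z, which is what turns (1.4)
# into a finite polynomial in Z under the ansatz w = A0 + A1*Z + A2*Z**2.
assert sp.simplify(sp.diff(Z, zeta) - (Z**2 - Z)) == 0
```

This identity is what lets every derivative of the ansatz be rewritten as a polynomial in Z, so that collecting powers of Z yields the algebraic system described above.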
We may say that these two-waves could be useful in many physical and engineering applications, for example, they can be used as barrier waves to strengthen the transmission of different signals’ data. Also, if a large amount of data is difficult to pass on to a single router, it can be distributed on two routers. References 1. Gao, X.Y.: Mathematical view with observational/experimental consideration on certain \((2+1)\)-dimensional waves in the cosmic/laboratory dusty plasmas. Appl. Math. Lett. 91, 165–172 (2019) 2. Du, Z., Tian, B., Chai, H.P., Sun, Y., Zhao, X.H.: Rogue waves for the coupled variable-coefficient fourth-order nonlinear Schrödinger equations in an inhomogeneous optical fiber. Chaos Solitons Fractals 10(9), 90–98 (2018) 3. Liu, L., Tian, B., Yuan, Y.Q., Du, Z.: Dark–bright solitons and semirational rogue waves for the coupled Sasa–Satsuma equations. Phys. Rev. E 97, 052217 (2018) 4. Zhang, C.R., Tian, B., Wu, X.Y., Yuan, Y.Q., Du, X.X.: Rogue waves and solitons of the coherently-coupled nonlinear Schrödinger equations with the positive coherent coupling. Phys. Scr. 93, 095202 (2018) 5. Hu, C.C., Tian, B., Wu, X.Y., Yuan, Y.Q., Du, Z.: Mixed lump–kink and rogue wave–kink solutions for a \((3+1)\)-dimensional B-type Kadomtsev–Petviashvili equation in fluid mechanics. Eur. Phys. J. Plus 133, 40 (2018) 6. Zhao, X.H., Tian, B., Xie, X.Y., Wu, X.Y., Sun, Y., Guo, Y.J.: Solitons, Bäcklund transformation and Lax pair for a \((2+1)\)-dimensional Davey–Stewartson system on surface waves of finite depth. Waves Random Complex Media 28, 356–366 (2018) 7. Yuan, Y.Q., Tian, B., Liu, L., Wu, X.Y., Sun, Y.: Solitons for the \((2+1)\)-dimensional Konopelchenko–Dubrovsky equations. J. Math. Anal. Appl. 460, 476–486 (2018) 8. Du, X.X., Tian, B., Wu, X.Y., Yin, H.M., Zhang, C.R.: Lie group analysis, analytic solutions and conservation laws of the \((3+1)\)-dimensional Zakharov–Kuznetsov–Burgers equation in a collisionless magnetized electron–positron–ion plasma. Eur. 
Phys. J. Plus 133, 378 (2018) 9. Gao, X.Y.: Looking at a nonlinear inhomogeneous optical fiber through the generalized higher-order variable-coefficient Hirota equation. Appl. Math. Lett. 73, 143–149 (2017) 10. Bilige, S., Chaolu, T.: An extended simplest equation method and its application to several forms of the fifth-order KdV equation. Appl. Math. Comput. 216(11), 3146–3153 (2010) 11. Khusnutdinova, K.R., Stepanyants, Y.A., Tranter, M.R.: Soliton solutions to the fifth-order Korteweg–de Vries equation and their applications to surface and internal water waves. Phys. Fluids 30, 022104 (2018) 12. Grimshaw, R.H.J., Khusnutdinova, K.R., Moore, K.R.: Radiating solitary waves in coupled Boussinesq equations. IMA J. Appl. Math. 82(4), 802–820 (2017) 13. Khusnutdinova, K.R., Zhang, X.: Long ring waves in a stratified fluid over a shear flow. J. Fluid Mech. 794, 17–44 (2016) 14. Khusnutdinova, K.R., Zhang, X.: Nonlinear ring waves in a two-layer fluid. Physica D 333, 208–221 (2016) 15. Lax, P.D.: Integrals of nonlinear equations of evolution and solitary waves. Commun. Pure Appl. Math. 62, 467–490 (1968) 16. Sawada, K., Kotera, T.: A method for finding N-soliton solutions for the KdV equation and KdV-like equation. Prog. Theor. Phys. 51, 1355–1367 (1974) 17. Kaup, D.: On the inverse scattering problem for the cubic eigenvalue problems of the class \(\psi _{3x} + 6Q\psi _{x} + 6R_{\psi } = \lambda \psi \). Stud. Appl. Math. 62, 189–216 (1980) 18. Ito, M.: An extension of nonlinear evolution equations of the KdV (mKdV) type to higher orders. J. Phys. Soc. Jpn. 49, 771–778 (1980) 19. Kupershmidt, B.A.: A super KdV equation: an integrable system. Phys. Lett. A 102, 213–215 (1984) 20. Aiyer, R.N., Fuchssteiner, B., Oevel, W.: Solitons and discrete eigenfunctions of the recursion operator of non-linear evolution equations: the Caudrey–Dodd–Gibbon–Sawada–Kotera equations. J. Phys. A, Math. Gen. 19, 3755–3770 (1986) 21. 
Parker, A., Dye, J.M.: Boussinesq-type equations and switching solitons. Proc. Inst. NAS Ukr. 43(1), 344–351 (2002) 22. Korsunsky, S.V.: Soliton solutions for a second-order KdV equation. Phys. Lett. A 185, 174–176 (1994) 23. Wazwaz, A.M.: Two-mode Sharma–Tasso–Olver equation and two-mode fourth-order Burgers equation: multiple kink solutions. Alex. Eng. J. (2017). https://doi.org/10.1016/j.aej.2017.04.003 24. Alquran, M., Jarrah, A.: Jacobi elliptic function solutions for a two-mode KdV equation. J. King Saud Univ., Sci. (2017). https://doi.org/10.1016/j.jksus.2017.06.010 25. Syam, M., Jaradat, H.M., Alquran, M.: A study on the two-mode coupled modified Korteweg–de Vries using the simplified bilinear and the trigonometric-function methods. Nonlinear Dyn. 90(2), 1363–1371 (2017) 26. Jaradat, H.M., Syam, M., Alquran, M.: A two-mode coupled Korteweg–de Vries: multiple-soliton solutions and other exact solutions. Nonlinear Dyn. 90(1), 371–377 (2017) 27. Alquran, M., Jaradat, H.M., Syam, M.: A modified approach for a reliable study of new nonlinear equation: two-mode Korteweg–de Vries–Burgers equation. Nonlinear Dyn. 91(3), 1619–1626 (2018) 28. Jaradat, I., Alquran, M., Ali, M.: A numerical study on weak-dissipative two-mode perturbed Burgers’ and Ostrovsky models: right-left moving waves. Eur. Phys. J. Plus 133, 164 (2018) 29. Jaradat, I., Alquran, M., Momani, S., Biswas, A.: Dark and singular optical solutions with dual-mode nonlinear Schrodinger’s equation and Kerr-law nonlinearity. Optik 172, 822–825 (2018) 30. Wazwaz, A.M.: Multiple soliton solutions and other exact solutions for a two-mode KdV equation. Math. Methods Appl. Sci. 40(6), 1277–1283 (2017) 31. Hong, W.P., Jung, Y.D.: New non-traveling solitary wave solutions for a second-order Korteweg–de Vries equation. Z. Naturforsch. 54a, 375–378 (1999) 32. Alquran, M., Jaradat, I., Baleanu, D.: Shapes and dynamics of dual-mode Hirota–Satsuma coupled KdV equations: exact traveling wave solutions and analysis. Chin. 
J. Phys. 58, 49–56 (2019) 33. Abu Irwaq, I., Alquran, M., Jaradat, I.: New dual-mode Kadomtsev–Petviashvili model with strong–weak surface tension: analysis and application. Adv. Differ. Equ. 2018, 433 (2018) 34. Alquran, M., Jaradat, I.: Multiplicative of dual-waves generated upon increasing the phase velocity parameter embedded in dual-mode Schrödinger with nonlinearity Kerr laws. Nonlinear Dyn. 96(1), 115–121 (2019) 35. Kudryashov, N.A.: One method for finding exact solutions of nonlinear differential equations. Commun. Nonlinear Sci. Numer. Simul. 17(6), 2248–2253 (2012) 36. Alquran, M., Al-Khaled, K.: The tanh and sine–cosine methods for higher order equations of Korteweg–de Vries type. Phys. Scr. 84, 025010 (2011) 37. Alquran, M.: Solitons and periodic solutions to nonlinear partial differential equations by the sine–cosine method. Appl. Math. Inf. Sci. 6(1), 85–88 (2012) 38. Alquran, M., Qawasmeh, A.: Classifications of solutions to some generalized nonlinear evolution equations and systems by the sine–cosine method. Nonlinear Stud. 20(2), 261–270 (2013) Acknowledgements The authors would like to thank the editor and the anonymous referees for their in-depth reading and insightful comments on an earlier version of this paper. Competing interests The authors declare that they have no competing interests.
Disclaimer: This answer is based on the assumption that $\mbox{PSPACE} \neq \mbox{NP}$, a hypothesis most scientists strongly believe, but we have yet to prove. This means that there is a possibility that these problems are in $\mbox{NP}$ and thus also $\mbox{NP}$-complete. I would say the simplest are True quantified Boolean formula and Generalized geography, both $\mbox{PSPACE}$-complete. TQBF asks, given a quantified Boolean formula, whether the formula is true; for example, the formula $\forall x \exists y \forall z . \; [(x \lor y) \land z]$ is false, because setting $z$ to false yields a false statement. Generalized Geography is a fun game (see Word chain) where you have a list of strings (e.g. city names): Player 1 starts by saying a name, and Player 2 responds with a name starting with the letter the previous name ended with. Then it's Player 1's turn again, and so on until someone gets stuck (this game is recommended as a drinking game where the objects are bands/artists, movies, cities, capitals, famous mathematicians or whatever floats your boat; the one who cannot respond within a reasonable time must of course drink). The formal problem is stated as the question "does Player 1 have a winning strategy".
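For small instances, TQBF can be decided by brute force over both branches of every quantifier: exponential time, but only linear space, which is the intuition for TQBF being in $\mbox{PSPACE}$. A minimal Python sketch (the string encoding of the quantifier prefix is just my convention for the example):

```python
def tqbf(prefix, formula, assignment=()):
    """Evaluate a fully quantified Boolean formula by brute force.

    prefix:  string over {'A', 'E'}: 'A' = forall, 'E' = exists,
             one character per variable, outermost quantifier first.
    formula: a function of one bool per quantified variable.
    """
    if not prefix:
        return formula(*assignment)
    branches = (tqbf(prefix[1:], formula, assignment + (b,))
                for b in (False, True))
    # 'A' needs both branches to hold, 'E' needs at least one.
    return all(branches) if prefix[0] == 'A' else any(branches)

# forall x exists y forall z . (x or y) and z  -- false, as argued above:
print(tqbf('AEA', lambda x, y, z: (x or y) and z))  # prints False
```

The recursion depth equals the number of variables, so the space used is linear even though the running time is exponential.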
This is a HW question, so I'm not expecting any answers, just some general guidance/help. Definition. Given $s\in\left\{ 0,1\right\} ^{n}$ with $s\neq0$, a function $f:\left\{ 0,1\right\} ^{n}\to\left\{ 0,1\right\} ^{n}$ will be called s-difference-preserving if for all $x,y\in\left\{ 0,1\right\} ^{n}$ s.t. $x\neq y$: $x\oplus y=s\iff f\left(x\right)=f\left(y\right)$ (where $\oplus$ is the bitwise xor operation). Given a difference-preserving function $f$ (that is, a function that is s-difference-preserving for some $s$), we consider algorithms that find such an $s$. We need to prove that the worst-case (over inputs) expected (over the algorithm's randomness) complexity of every such randomized (Las Vegas) algorithm is $\Omega\left(\sqrt{2^{n}}\right)$. Here, complexity means query complexity: the number of times we need to query $f$, i.e., ask for the value $f\left(x\right)$ at an input $x$ of our choice. What I know and tried so far: I assume this is a classic problem for Yao's principle. So I want to find a distribution, and an optimal deterministic algorithm for it (optimal on average, on that distribution), such that its query complexity is $\Omega\left(\sqrt{2^{n}}\right)$ on average on that distribution. Given an $s$, it's easy to create an s-difference-preserving function, since any $s$ defines a partition of $\left\{ 0,1\right\} ^{n}$ into pairs. So we just need to choose a different value for every pair $x,y$ s.t. $x\oplus y=s$. I also know $f\left(0\right)=f\left(s\right)$, and that to find $s$ for a function, it's enough to find a pair $x,y$ s.t. $f\left(x\right)=f\left(y\right)$, and then calculate $x\oplus y$. I thought of defining $\forall i\in\left[\sqrt{2^{n}}\right]:\ s_{i}=i\cdot\sqrt{2^{n}}-1$, and creating an $s_{i}$-difference-preserving function for every $i\in\left[\sqrt{2^{n}}\right]$. Define a uniform distribution on them and have the deterministic algorithm check the values of every $s_{i}$. This is done in $\sqrt{2^{n}}$ queries on average. 
But I don't know how to prove this algorithm is optimal for that distribution. Furthermore, I suspect it isn't, since then I would be able to do the same with $s_{i},\ \forall i\in\left[2^{n}\right]$ and “prove” a lower bound of $2^{n}$, which is probably not true. I would love any help with this. Also, not as part of the HW, but because I'm interested (these are questions we don't need to submit a solution to): Can you think of a deterministic algorithm for the general problem? Can you think of a Monte Carlo randomized algorithm (with 0.5 chance of success)? I hope this is not too long, and written clearly enough. Would appreciate any help. Thanks!
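Not an answer to the lower-bound question, but regarding the last question: a Monte Carlo algorithm follows from the observation above that any collision $f(x)=f(y)$ reveals $s=x\oplus y$, so one can sample random inputs and wait for a birthday collision, which takes $O(\sqrt{2^{n}})$ queries with constant success probability. A Python sketch (the construction of $f$ and the query budget $4\cdot 2^{n/2}$ are illustrative choices, not part of the HW):

```python
import random

def make_difference_preserving(n, s):
    """Build a random s-difference-preserving f on {0,1}^n: the pair
    {x, x XOR s} shares one value, and distinct pairs get distinct values."""
    outputs = random.sample(range(2 ** n), 2 ** n)  # plenty of distinct values
    f, nxt = {}, 0
    for x in range(2 ** n):
        if x not in f:
            f[x] = f[x ^ s] = outputs[nxt]
            nxt += 1
    return f

def find_s(f, n, budget=None):
    """Monte Carlo collision search: query f at random points; any collision
    f(x) = f(y) with x != y reveals s = x XOR y.  With ~sqrt(2^n) queries
    this succeeds with constant probability."""
    seen = {}
    budget = budget or 4 * 2 ** (n // 2)
    for _ in range(budget):
        x = random.randrange(2 ** n)
        y = f[x]                      # one query
        if y in seen and seen[y] != x:
            return x ^ seen[y]
        seen[y] = x
    return None                       # failed this run; retry to boost success

n, s = 10, 0b1011001110
f = make_difference_preserving(n, s)
print(find_s(f, n))  # usually prints 718 (= 0b1011001110); occasionally None
```

Repeating the search a constant number of times pushes the success probability above any fixed threshold such as 0.5, at the cost of a constant factor in queries.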
Thanks for this clarification. I still have a couple of questions: 1). I am not sure if I understand why you say the "relevant forward period is identical". For the swaption case, the vol is on the forward swap rate, set in one year's time (relative to today), for a future two-year period [1, 3]. For the cap case, we have two forward rates, one expiring in one year for the period [1,2], the other expiring in two years for the period [2,3]. Is this right? 2). I can intuitively understand why the swap rate is an average of forward libors. However, how can I show this mathematically? I am asking because the swap rate formula is a ratio of zero coupon bond prices: swap rate = ( P(t, Ta) - P(t, Tb) ) / sum ( tau * P(t, Ti) ) Thanks! Perhaps, think of a swap like this: Floating leg [$] PV_{float}(t) = \sum_{i=1}^{n}{NF(t,T_{i-1}) \tau_{i} DF(t,T_{i})}[$] where [$]F(t,T)[$] is the index forward rate (some IBOR rate) and [$]DF(t,T_{i})[$] is the discount factor from [$]T_{i}[$] to [$]t[$]. So, the swap rate is an economic equivalent of what the market thinks should be the value of a string of IBOR forwards. Let's make the simplifying assumptions: (1) The duration of the LIBOR deposit matches the coupon period. (2) Index-forward rates are equal to discount forward rates. Denoting the discount forward rate between the period [$](T_{i-1},T_{i})[$] by [$]R(t,T_{i-1},T_{i})[$], [$]F(t,T_{i-1})=R(t,T_{i-1},T_{i})=\frac{1}{\tau_{i}}\left(\frac{DF(t,T_{i-1})}{DF(t,T_{i})}-1\right)[$] Substituting this, we would get a formula similar to what you stated: [$] PV_{float} = NDF(t,T_{0})-NDF(t,T_{n})[$] Maybe that helps. Instead of using any of the above formulae, it's nice to price a few swaps in Excel, make a schedule for the fixed leg and the floating legs, and try to find the par swap rate, to get an intuitive feel. Post 2008, the simplifying assumptions no longer hold. Swap pricing is far more complex.
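For question 2), the algebra is the telescoping identity [$]\sum_{i=1}^{n} \tau_{i} DF(t,T_{i}) R(t,T_{i-1},T_{i}) = DF(t,T_{0}) - DF(t,T_{n})[$], so dividing both sides by the annuity [$]\sum_{i} \tau_{i} DF(t,T_{i})[$] shows the par swap rate is exactly the discount-weighted average of the forwards. A toy numerical check in Python (single curve, annual periods; the discount factors are made up for illustration):

```python
# Toy single-curve example (numbers are made up for illustration):
# annual fixed and floating periods, tau_i = 1.
DF = [1.00, 0.97, 0.93, 0.90]          # DF(t, T_0), ..., DF(t, T_3)
tau = [1.0, 1.0, 1.0]

# Index forwards from discount factors: F_i = (DF_{i-1}/DF_i - 1)/tau_i
F = [(DF[i] / DF[i + 1] - 1.0) / tau[i] for i in range(len(tau))]

annuity = sum(t * df for t, df in zip(tau, DF[1:]))

# Par swap rate from the zero-coupon-bond formula you quoted ...
S_bonds = (DF[0] - DF[-1]) / annuity

# ... equals the annuity-weighted average of the forward rates,
# because sum_i tau_i DF_i F_i telescopes to DF_0 - DF_n.
S_avg = sum(t * df * f for t, df, f in zip(tau, DF[1:], F)) / annuity

assert abs(S_bonds - S_avg) < 1e-12
print(round(S_bonds, 6))  # prints 0.035714
```

The weights are [$]\tau_{i} DF(t,T_{i})/\sum_{j}\tau_{j} DF(t,T_{j})[$], which is why the swap rate is a weighted (not simple) average of the forward LIBORs.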
I thought about it, and the second answer I proposed earlier is not all that bad. I'm going to treat the points $A$, $B$, $C$, $Q$, as coordinate triples, so that I can add and subtract them as vectors. I'm also going to write $r_2$ for $R$, the radius of the second sphere, and $r_1$ for the radius of the first. Here goes:$$\newcommand{\a}{\mathbf a}\newcommand{\b}{\mathbf b}\newcommand{\c}{\mathbf c}\newcommand{\u}{\mathbf u}\newcommand{\v}{\mathbf v}\newcommand{\w}{\mathbf w}$$\begin{align}r_1 &= \|A - C\| \\r_2 & = R\\\a &= (A - Q)/r_2 \\\b &= (B - Q)/r_2\\\c &= (C - Q)/r_2.\end{align} In doing this, we've changed to a coordinate system in which the second sphere is at the origin, and has radius 1; we'll convert back at the end. Let \begin{align}\u &= \frac{r_2}{r_1}(\a - \c) \\\v &= \frac{r_2}{r_1}(\b - \c) \\\v' &= \v - (\u \cdot \v) \u \\\w & = \v' / \| \v' \|\end{align} Note that $\|u\| = \|v \| = \|w\| = 1$. And $\u \cdot \w = 0$. Now the arc from $\u$ to $\w$ passes through $\v$ at some angle $T$. What angle? \begin{align}T &= \arccos ({\v \cdot \u})\end{align}Let \begin{align}\gamma(t) = \c + r_1\cos(t) \u + r_1\sin(t) \w\end{align}for $0 \le t \le T$. Then $\gamma$ parameterizes the arc from $\a$ to $\b$ on the (translated and scaled) first sphere. We seek points where \begin{align}\| \gamma(t) \|^2 &= 1.\end{align}because those are the ones that lie on the second sphere (in the new coordinate system). Writing this out, we have\begin{align}1 &= \| \gamma(t) \|^2 \\&= \gamma(t) \cdot \gamma(t)\\&= (\c + r_1\cos(t) \u + r_1\sin(t) \w) \cdot (\c + r_1\cos(t) \u + r_1\sin(t) \w) \\&= (\c\cdot \c) + 2r_1\cos(t) (\u \cdot \c) + 2 r_1\sin(t) (\w \cdot \c) + r_1^2\cos^2(t) (\u \cdot \u) + 2r_1^2\cos(t)\sin(t) (\u \cdot \w) + r_1^2\sin^2(t) (\w \cdot \w) \end{align}Fortunately, $\u$ and $\w$ are orthogonal, and each of $\u$ and $\w$ is a unit vector. 
So, writing $\rho = r_1/r_2$ for the radius of the first sphere in our scaled coordinates, we have\begin{align}1 &= \| \gamma(t) \|^2 \\&= (\c\cdot \c) + 2\rho\cos(t) (\u \cdot \c) + 2 \rho\sin(t) (\w \cdot \c) + \rho^2\cos^2(t) + \rho^2\sin^2(t) \\&= (\c\cdot \c) + 2\rho\cos(t) (\u \cdot \c) + 2 \rho \sin(t) (\w \cdot \c) + \rho^2 \\\end{align} Now we write $s = \tan(\frac{t}{2})$, which turns out to mean that $\sin t = \frac{2s}{1+s^2}$ and $\cos t = \frac{1-s^2}{1+s^2}$. So our equation becomes\begin{align}1 &= (\c\cdot \c) + 2\rho\frac{1-s^2}{1+s^2} (\u \cdot \c) + 2 \rho\frac{2s}{1+s^2} (\w \cdot \c) + \rho^2 \\\end{align} Multiplying through by $1+s^2$ gives\begin{align}1+s^2 &= (1+s^2)(\c\cdot \c) + 2\rho(1-s^2)(\u \cdot \c) + 4\rho s (\w \cdot \c) + \rho^2(1+s^2) \\0 &= -1 -s^2 + (1+s^2)(\c\cdot \c) + 2\rho(1-s^2)(\u \cdot \c) + 4\rho s (\w \cdot \c) + \rho^2(1+s^2) \\0 &= s^2 (\|\c\|^2 - 1 - 2 \rho\, \u \cdot \c + \rho^2) + 4\rho s (\w \cdot \c) + 2\rho(\u \cdot \c) + (\c \cdot \c) - 1 + \rho^2\end{align}which is a quadratic $As^2 + Bs + C = 0$, where the coefficients are\begin{align}A &= \|\c\|^2 - 1 - 2 \rho (\u \cdot \c) + \rho^2\\B &= 4\rho(\w \cdot \c)\\C &= 2\rho(\u \cdot \c) + (\c \cdot \c) - 1 + \rho^2\end{align} (My apologies here for reusing the names $A, B, C$.) This quadratic can be solved with the quadratic formula to get solutions $s_1, s_2$. If the discriminant is negative ($B^2 - 4AC < 0$), the solutions are complex and the arc does not intersect the sphere. Otherwise, they are real, but may be identical. Let's see how to translate them back into an overall solution to the problem. At this point, now that we've solved the quadratic, "A, B, C" go back to their original meanings as points in 3-space, OK? From $s_i$ ($i = 1, 2$ henceforth), we can compute$$t_i = 2\arctan(s_i),$$inverting the substitution $s = \tan(t/2)$. If either value of $t_i$ is outside the interval $[0, T]$, we ignore it -- it does not correspond to a point of intersection of the arc and the sphere.
From this, we can compute $\cos t_i$ and $\sin t_i$, and thus compute$$\gamma_i = \c + \rho\cos t_i \,\u + \rho\sin t_i \,\w$$(with $\rho = r_1/r_2$), and then, going back to the original coordinate system, compute$$X_i = r_2\gamma_i + Q$$and the points $X_i$ are the intersections you seek.
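The whole recipe is short enough to code up. Here is an illustrative NumPy sketch following the derivation above (same names: the arc runs from A to B on the sphere centered at C, and we intersect with the sphere of radius R about Q). It does not handle degenerate cases such as a vanishing quadratic leading coefficient or B not lying exactly on the first sphere.

```python
import numpy as np

def arc_sphere_intersections(A, B, C, Q, R):
    """Intersections of the arc from A to B (on the sphere centered at C,
    radius |A-C|) with the sphere of radius R centered at Q."""
    A, B, C, Q = map(np.asarray, (A, B, C, Q))
    r1, r2 = np.linalg.norm(A - C), R
    rho = r1 / r2
    # Scale/translate so the second sphere is the unit sphere at the origin.
    a, b, c = (A - Q) / r2, (B - Q) / r2, (C - Q) / r2
    u = (a - c) / rho
    v = (b - c) / rho
    vp = v - np.dot(u, v) * u                 # part of v orthogonal to u
    w = vp / np.linalg.norm(vp)
    T = np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))   # arc angle
    # Quadratic in s = tan(t/2):  A2 s^2 + B2 s + C2 = 0
    cc, uc, wc = np.dot(c, c), np.dot(u, c), np.dot(w, c)
    A2 = cc - 1 - 2 * rho * uc + rho**2       # assumed nonzero here
    B2 = 4 * rho * wc
    C2 = 2 * rho * uc + cc - 1 + rho**2
    disc = B2**2 - 4 * A2 * C2
    if disc < 0:
        return []                             # arc misses the second sphere
    pts = []
    for s in ((-B2 + np.sqrt(disc)) / (2 * A2),
              (-B2 - np.sqrt(disc)) / (2 * A2)):
        t = 2 * np.arctan(s)                  # invert s = tan(t/2)
        if 0 <= t <= T:
            g = c + rho * np.cos(t) * u + rho * np.sin(t) * w
            pts.append(r2 * g + Q)            # back to original coordinates
    return pts
```

For example, the quarter arc from (1,0,0) to (0,1,0) on the unit sphere about the origin meets the unit sphere about (1.5,0,0) at the single point (0.75, sqrt(0.4375), 0).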
Proceedings of the Japan Academy, Series A, Mathematical Sciences, Volume 87, Number 9 (2011), 162-166. On the set of points where Lebesgue’s singular function has the derivative zero Abstract Let $L_{a}(x)$ be Lebesgue’s singular function with a real parameter $a$ ($0<a<1, a \neq 1/2$). As is well known, $L_{a}(x)$ is strictly increasing and has a derivative equal to zero almost everywhere. However, what sets of $x \in [0,1]$ actually have $L_{a}'(x)=0$ or $+\infty$? We give a partial characterization of these sets in terms of the binary expansion of $x$. As an application, we consider the differentiability of the composition of Takagi’s nowhere differentiable function and the inverse of Lebesgue’s singular function. First available in Project Euclid: 4 November 2011. Permanent link: https://projecteuclid.org/euclid.pja/1320417394. DOI: doi:10.3792/pjaa.87.162. MathSciNet: MR2863359. Zentralblatt MATH: 1236.26007. Subjects: Primary 26A27 (nondifferentiability); Secondary 26A15 (continuity and related questions), 26A30 (singular functions, Cantor functions), 60G50 (sums of independent random variables; random walks). Citation: Kawamura, Kiko. On the set of points where Lebesgue’s singular function has the derivative zero. Proc. Japan Acad. Ser. A Math. Sci. 87 (2011), no. 9, 162--166. doi:10.3792/pjaa.87.162. https://projecteuclid.org/euclid.pja/1320417394
I have provided two starting points to solve my problem:

Option 1

I am trying to move the arrow head to a higher point (somewhere in the middle of the upper lobe of the p orbital) and make the boron atom come on top of the p orbital, not underneath, in the following figure. My code so far is:

\documentclass[english]{article}
\usepackage[T1]{fontenc}
\usepackage[latin9]{inputenc}
\makeatletter
\usepackage{chemfig}
\usepackage{tikz}
\usepackage{chemmacros}
\usechemmodule{orbital}
\makeatother
\usepackage{babel}
\begin{document}
\setchemfig{cram width=3pt}
\chemsetup[orbital]{overlay, opacity=.55}
\schemestart
\chemfig{H>:[:40,0.8]@{N}\Lewis{2:,N}(<[:-110,0.8]H)-[:-40]H}
\+
\chemfig{F-@{B}B(-[2,0.1,,,draw=none]{{\orbital[phase=-]{p}}})(<[:-20]F)<:[:20]F}
\arrow{->}
\chemfig{H>:[:-40,0.8]\chembelow[0.5pt]{N}{\scriptstyle\hspace{4.5mm}\oplus}(<[:110,0.8]H)(-[:40]H)-[:-90]\chemabove[0.5pt]{B}{\scriptstyle\hspace{4.5mm}\ominus}(<:[:-40,0.8]F)(<[:-70,0.8]F)-[:-140]F}
\schemestop
\chemmove[shorten <=2pt]{\draw (N) ..controls +(90:1cm) and +(90:2cm).. (B);}
\end{document}

How would I do the things I suggested above: (1) change the position of the arrow so it points to the middle of the upper lobe of the p orbital, and (2) put the boron atom on top of the p orbital?

Option 2

Alternatively, instead of the chemmacros orbital, I would be happy with placing the following TikZ picture where the boron is:

\begin{tikzpicture}[scale=1]
\node (B) at (0,0) {B};
\draw (-0.1,0.2) ..controls +(120:1) and +(60:1).. (0.1,0.2);
\draw[fill=black!10] (-0.1,-0.2) ..controls +(-120:1) and +(-60:1).. (0.1,-0.2);
\end{tikzpicture}

Unfortunately, when I try to include it in the compound, the reaction scheme arrow goes to the bottom left corner of the page and the tikzpicture covers most of the compound. I would like the compound to appear normally, and the arrow to again go into somewhere in the middle of the upper lobe of the p orbital (the shape defined by \draw (-0.1,0.2) ..controls +(120:1) and +(60:1).. (0.1,0.2);).
Case Study Contents In the Life Cycle Consumption with Assets Problem, we extended the basic model to allow the consumer to borrow and save. The Life Cycle Consumption with Assets and Labor Supply Problem further extends the basic model to include labor decisions. In the previous models, the wage income in each period was a function of the number of periods and the current period. In this model, the consumer decides how much to work in each period as well as how much to consume in each period in order to maximize total utility. The utility function now is a function of both consumption and labor, where consuming more and working less increases the individual's utility. The objective of the life cycle consumption with assets and labor supply problem is to determine how much to work and how much to consume in each period so as to maximize total utility. To formulate the life cycle consumption with assets and labor supply problem, we start from the formulation for the life cycle consumption with assets problem and modify it to include the labor supply decisions. Set P = set of periods = {1..\(n\)} Parameters \(w(p,n)\) = wage income function \(r\) = interest rate \(\beta\) = discount factor \(a_{min}\) = minimum asset level Decision Variables \(c_p\) = consumption in period \(p\), \(\forall p \in P\) \(a_p\) = assets available at the beginning of period \(p\), \(\forall p \in P\) \(l_p\) = labor supply in period \(p\), \(\forall p \in P\) Objective Function Let \(u()\) be the utility function and let \(u(c_p, l_p)\) be the utility value associated with consuming \(c_p\) and working \(l_p\) in period \(p\). Utility in future periods is discounted by a factor of \(\beta\). 
Then, the objective function is to maximize the total discounted utility: maximize \(\sum_{p \in P} \beta^{p-1} u(c_p, l_p)\) Constraints The life cycle consumption with assets and labor supply model again tracks the assets available at the beginning of each period, but now the wage income depends on the labor supply. That is, the wealth at the beginning of period \(p+1\) equals the wealth at the beginning of period \(p\) minus consumption plus the wage income multiplied by the labor supply, all multiplied by the return \(R\) on savings (where \(R = 1 + r\)). There is one constraint for each period: \(a_{p+1} \leq R(a_p - c_p + w(p,n)l_p), \forall p \in P.\) The model assumption is that initial wealth is zero (\(a_1 = 0\)) and that terminal wealth is non-negative (\(a_{n+1} \geq 0\)). As in the life cycle model with assets, there is a minimum asset level, \(a_{min} \leq 0\). If \(a_{min} = 0\), then no borrowing is allowed. Otherwise, an individual can borrow as long as s/he can repay the amount before period \(n\). Therefore, there is one lower bound constraint for each period: \(a_p \geq a_{min}, \forall p \in P.\) Also, the amount consumed in each period should be strictly positive, which is enforced with a small lower bound \(\epsilon > 0\): \(c_p \geq \epsilon, \forall p \in P.\) To solve your own life cycle consumption with assets and labor supply models, check out the Life Cycle Consumption with Assets and Labor Supply demo.
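The same formulation can be sketched outside of GAMS. The following is an illustrative Python/SciPy version (not part of the case study's tooling), using the utility \(u(c,l) = -e^{-c} - l^2\) and wage profile \(w(p) = (n-p)p/n\) from the GAMS listing, with a shortened horizon of \(n = 5\) periods and the budget identity applied as an equality to roll assets forward:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch of the labor-supply model; n = 5 is a shortened
# horizon, u(c,l) = -exp(-c) - l^2, w(p) = (n - p)*p/n as in the text.
n, beta, R, amin = 5, 0.96, 1.10, 0.0
w = np.array([(n - p) * p / n for p in range(1, n + 1)])

def assets(x):
    """Roll the budget identity forward: a_{p+1} = R(a_p + w_p*l_p - c_p)."""
    c, l = x[:n], x[n:]
    a = np.zeros(n + 1)                      # a[0] is a_1 = 0
    for p in range(n):
        a[p + 1] = R * (a[p] + w[p] * l[p] - c[p])
    return a

def neg_utility(x):
    c, l = x[:n], x[n:]
    disc = beta ** np.arange(n)
    return -np.sum(disc * (-np.exp(-c) - l**2))   # minimize the negative

# a_p >= amin for p = 2..n and terminal wealth a_{n+1} >= 0 (amin = 0 here)
cons = [{"type": "ineq", "fun": lambda x: assets(x)[1:] - amin}]
bounds = [(1e-4, None)] * n + [(0.0, None)] * n   # c_p >= eps, l_p >= 0
x0 = np.concatenate([np.full(n, 0.3), np.full(n, 0.5)])  # feasible start
res = minimize(neg_utility, x0, bounds=bounds, constraints=cons,
               method="SLSQP")
```

The decision vector stacks consumption and labor; eliminating the asset variables via the recursion keeps the problem small for SLSQP.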
$Title Life Cycle Consumption - with savings and elastic labor supply

Set p period /1*15/ ;

Scalar B discount factor /0.96/ ;
Scalar i interest rate /0.10/ ;
Scalar R gross interest rate ;
R = 1 + i ;
Scalar amin minimum asset level /0/ ;

$macro u(c,l) (-exp(-c) - power(l,2))

Parameter w(p) wage income in period p ;
w(p) = ((15 - p.val)*p.val) / 15 ;
Parameter lbnds(p) lower bounds of consumption / 1*15 0.0001 / ;

Positive Variables
   c(p) consumption expenditure in period p ,
   l(p) labor supply at period p ;
Variable a(p) assets (or savings) at the beginning of period p ;
Variable Z objective ;

Equations
   budget(p) lifetime budget constraint ,
   obj objective function ;

budget(p) .. a(p+1) - R*(a(p) + w(p)*l(p) - c(p)) =l= 0 ;
obj .. Z =e= sum(p, power(B, p.val - 1)*u(c(p),l(p))) ;

Model LifeCycleConsumptionSavings /budget, obj/ ;

c.lo(p) = lbnds(p) ;
a.lo(p) = amin ;
a.fx('1') = 0 ;

Solve LifeCycleConsumptionSavings using nlp maximizing Z ;
Planck's law Until stars were formed a few hundred million years after the Big Bang (BB), the brightness of the Universe was extremely homogeneous and given by a near-perfect blackbody Planck spectrum with a temperature of $T = T_0(1+z)$, where $T_0=2.725\,\mathrm{K}$ is the current temperature of the CMB, and $z$ is the redshift corresponding to the time $t$. That is, the brightness at wavelength $\lambda$ is:$$B_\lambda(\lambda,T) = \frac{2hc^2}{\lambda^5} \frac{1}{e^{hc/\lambda k_\mathrm{B} T} - 1},$$where $h$, $c$, $k_\mathrm{B}$ are Planck's constant, the speed of light, and Boltzmann's constant, respectively. Perceived brightness I'm going to assume that you're human, and hence that the brightness you're interested in is the optical wavelength region, i.e. around $\lambda\sim550\,\mathrm{nm}$. The peak of a Planck spectrum shifts toward higher frequencies the higher the temperature is, and hence the ratio of optical-to-UV/X-ray/gamma radiation will decrease. But regardless, the absolute brightness will always increase at any wavelength for larger temperatures. At $t\simeq10\,\mathrm{s}$, what later becomes our observable Universe was already $\sim30$ light-years in radius (although the observable Universe of that epoch was only 8000 km). The scale factor (ratio of the size at that time to the current size) was thus $a\sim7\times10^{-10}$, the corresponding redshift $z\sim1.4\times10^9$, and hence the temperature of the Universe was $T \sim 3.7\times10^9\,\mathrm{K}$. Plugging into the equation above, and dividing by $4\pi$ to get the brightness per solid angle, I get that the brightness in the optical was$$B_\lambda(550\,\mathrm{nm},3.7\times10^9\,\mathrm{K}) \simeq 3\times 10^{19}\,\mathrm{W}\,\mathrm{m}^{-3}\,\mathrm{sr}^{-1}.$$So, what does this number mean? To get a feeling of how it would look, we can compare the amount of light received from a human field of view, to the light received when looking directly at the Sun. Andersen et al. 
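The quoted figure is easy to reproduce. Here is a short check of the Planck brightness at 550 nm for \(T = 3.7\times10^9\,\mathrm{K}\), dividing by \(4\pi\) as done in the text:

```python
import math

# Planck brightness B_lambda at 550 nm for T = 3.7e9 K (CODATA constants),
# divided by 4*pi per the convention used in the answer above.
h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23
lam, T = 550e-9, 3.7e9
B = (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))
B_per_sr = B / (4 * math.pi)
# B_per_sr is about 2.7e19 W m^-3 sr^-1, i.e. ~3e19 as quoted.
```

Note that `expm1` is used because \(hc/\lambda k_\mathrm{B}T \sim 10^{-5}\) here, deep in the Rayleigh-Jeans regime, where `exp(x) - 1` would lose precision.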
(2018) did exactly this, in order to calculate the period of time during which a human could see anything in the early Universe. They found that while the Universe became too dim for a human to sense any light around $t=5.7$ million years after BB, it was as bright as looking at the Sun when the Universe was around $T\simeq1600\,\mathrm{K}$, just over 1 million years after BB, and so had a brightness in the optical of around $B_\lambda(550\,\mathrm{nm},1600\,\mathrm{K}) = 1.5\times 10^7\,\mathrm{W}\,\mathrm{m}^{-3}\,\mathrm{sr}^{-1}$, a factor of $1.7\times10^{12}$ smaller than at $t\simeq10\,\mathrm{s}$. In other words, ten seconds after the Big Bang the Universe was a trillion times brighter than looking at the Sun. Brightness today Today, the spectrum of the Universe is no longer a Planck spectrum, but is given by a mixture of cosmological and astrophysical processes. In this answer about the cosmic background radiation, you can see that the optical peak is roughly two orders of magnitude dimmer than the CMB peak which, in turn, from Planck's law today has a brightness $\sim10^{24}$ times smaller than at $t\simeq10\,\mathrm{s}$. "Optical light" is here defined as a much broader region than what a human would actually see (very roughly ten times as broad), so the perceived brightness would be another order of magnitude lower. Hence, today the Universe is 27 orders of magnitude less bright than at the photon epoch. Brightness and color through the history of the Universe The figure below shows, in green, the brightness of the CMB as a function of time after the Big Bang. A secondary $x$ axis on top shows the corresponding temperature of the Universe.
The background color shows the color of the Universe as it would be perceived by a human being, calculated by convolving the spectrum of the radiation with the response function of the human eye: For the first few tens of thousands of years, the Universe is a pale sapphire blue, turning white as it reaches the temperature of the Sun ($T_\odot \simeq 5\,780\,\mathrm{K}$). At $t\sim200\,\mathrm{Myr}$, stars begin to form and the calculation of the spectrum becomes more complicated (so I've grayed it out), but today the Universe has reached a cosmic latte (Baldry et al. 2001). Note that, as mentioned above, only between $t\sim1\,\mathrm{Myr}$ and $t\sim6\,\mathrm{Myr}$, where the temperature was $1600\,\mathrm{K} \gtrsim T \gtrsim 500\,\mathrm{K}$, could you actually see anything; prior to this epoch you'd go blind, and after this epoch it'd be too dim (but you could in principle still see the color using sunglasses/binoculars, respectively). At $T\lesssim hc/\lambda k_\mathrm{B} \simeq 30\,000\,\mathrm{K}$ the exponential factor in the Planck law blows up, so the brightness at $\lambda$ quickly decreases. The brightness is further decreased by the fact that, at $t\sim52\,\mathrm{kyr}$, the Universe transitions from being radiation-dominated to being matter-dominated, and so the expansion goes from $a(t)\propto t^{1/2}$ to $a(t)\propto t^{2/3}$, i.e. faster. At the time of recombination ($t\sim379\,\mathrm{kyr}$), the Universe is somewhat brighter than at $t\sim1\,\mathrm{Myr}$. Here, $T\sim3000\,\mathrm{K}$, so $B_\lambda(550\,\mathrm{nm},3000\,\mathrm{K}) = 3\times 10^{10}\,\mathrm{W}\,\mathrm{m}^{-3}\,\mathrm{sr}^{-1}$, or a billion times less bright than at $t\simeq10\,\mathrm{s}$. A note on decoupling and mean free path Before the photons decoupled from matter, they scattered frequently on free electrons, so their mean free path was small compared to the size of the (observable) Universe.
It is often said that the Universe was "foggy" until decoupling, but I think people often overestimate this fogginess. The mean free path is $\ell = 1 / (n_e \sigma_\mathrm{T})$, where $n_e$ is the number density of free electrons and $\sigma_\mathrm{T}\simeq6.65\times 10^{-25}\,\mathrm{cm}^2$ is the Thomson cross section of the electron. Calculating $n_e$ from the ionization state of the gas, this works out$^\dagger$ to roughly 2000 light-years just before recombination begins to kick in at a time $t\simeq200\,000\,\mathrm{yr}$ after BB, 20 light-years at $t\simeq50\,000\,\mathrm{yr}$ when matter started dominating over radiation, 16 kilometers at $t\simeq15\,\mathrm{min}$ when nucleosynthesis ended, and 20 meters at $t\simeq10\,\mathrm{s}$, when leptons and antileptons annihilated and the photon epoch began. But this scattering is not really important for how bright the Universe was. Photons arrive at your eye all the time, and if they have scattered multiple times since their point of origin, you won't know where they were created, but you will still see them. $^\dagger$ I wrote a Python code called timeline to calculate $\ell$ and other quantities of the Universe as a function of time, available on GitHub here.
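The mean-free-path formula itself is a one-liner. In this sketch the electron density is an illustrative value I back out from the quoted 20-meter path length at \(t\simeq10\,\mathrm{s}\); it is not a number taken from the text:

```python
# Mean free path against Thomson scattering: l = 1 / (n_e * sigma_T).
sigma_T = 6.65e-25                 # Thomson cross section [cm^2]

def mfp_cm(n_e):
    """Photon mean free path in cm for electron density n_e [cm^-3]."""
    return 1.0 / (n_e * sigma_T)

# An (assumed) electron density of ~7.5e20 cm^-3 reproduces the quoted
# l ~ 20 m = 2000 cm at t ~ 10 s.
```

Inverting the same formula at the other quoted epochs gives a feel for how quickly the electron density drops as the Universe expands.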
The question is: Using the Young-Laplace equation (if applicable), find the surface tension (dynes/cm) for water at 20 degrees Celsius with 2.5 psi. Round to the nearest tenth. Well, I didn't use the Young-Laplace equation, though I'm not sure if it's needed. What I did was use the Eötvös rule and its special case for water to solve the question. The equation is: $$\gamma = 0.07275\;\frac{N}{m}\;\times\;(1-0.002\times(T-291K))$$ I converted 20 Celsius to Kelvin (293 K) and substituted it into the equation to get: $$\gamma = 0.07275\;\frac{N}{m}\;\times\;(1-0.002\times(293K-291K))= 0.072459\frac{N}{m}$$ which ends up being about $72.46\frac{dynes}{cm}$. However, I think I may be wrong, as this does not account for the pressure at all. Am I right or wrong? And is there a better/correct way of doing this?
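For what it's worth, the arithmetic checks out; here is the same estimate as a two-line computation (note the pressure from the problem statement never enters this formula):

```python
# Eotvos-style estimate for water at 20 C (T = 293 K, reference 291 K).
gamma = 0.07275 * (1 - 0.002 * (293 - 291))   # surface tension in N/m
gamma_dyn_cm = gamma * 1000                   # 1 N/m = 1000 dyn/cm
# gamma_dyn_cm is about 72.46 dyn/cm, matching the value above.
```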
Parabolic systems with non continuous coefficients Dipartimento di Matematica e Informatica, Viale Andrea Doria, 6, 95128 Catania, Italy The paper studies parabolic systems $$Tu_i = f_i(y), \qquad i = 1,\ldots,N,$$ where the known terms $f_i$ are in Lebesgue spaces and the parabolic operator $T$ has the form $$(Tu)_i = \partial_t u_i - \sum_{j=1}^{N}\sum_{|\alpha|=2s} a^{\alpha}_{ij}(y)\,D^{\alpha} u_j(y) + \sum_{j=1}^{N}\sum_{|\alpha|\leq 2s-1} b^{\alpha}_{ij}(y)\,D^{\alpha} u_j(y),$$ with discontinuous coefficients. Mathematics Subject Classification: Primary: 35J30, 35J45, 35B01; Secondary: 35J3. Citation: Maria Alessandra Ragusa. Parabolic systems with non continuous coefficients. Conference Publications, 2003, 2003 (Special): 727-733. doi: 10.3934/proc.2003.2003.727
Mathematics Colloquium All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated. Spring 2018 January 29 (Monday): Li Chao (Columbia), "Elliptic curves and Goldfeld's conjecture"; host: Jordan Ellenberg. February 2 (Room 911): Thomas Fai (Harvard), "The Lubricated Immersed Boundary Method"; hosts: Spagnolie, Smith. February 5 (Monday, Room 911): Alex Lubotzky (Hebrew University), "High dimensional expanders: From Ramanujan graphs to Ramanujan complexes"; hosts: Ellenberg, Gurevitch. February 6 (Tuesday 2 pm, Room 911): Alex Lubotzky (Hebrew University), "Groups' approximation, stability and high dimensional expanders"; hosts: Ellenberg, Gurevitch. February 9: Wes Pegden (CMU), "The fractal nature of the Abelian Sandpile"; host: Roch. March 2: Aaron Bertram (University of Utah), "Stability in Algebraic Geometry"; host: Caldararu. March 16 (Room 911): Anne Gelb (Dartmouth), "Reducing the effects of bad data measurements using variance based weighted joint sparsity"; host: WIMAW. April 5 (Thursday): John Baez (UC Riverside), TBA; host: Craciun. April 6: Edray Goins (Purdue), "Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups"; host: Melanie. April 13: Jill Pipher (Brown), TBA; host: WIMAW. April 16 (Monday): Christine Berkesch Zamaere (University of Minnesota), TBA; hosts: Erman, Sam. April 25 (Wednesday): Hitoshi Ishii (Waseda University), Wasow lecture, TBA; host: Tran. May 4: Henry Cohn (Microsoft Research and MIT), TBA; host: Ellenberg.
Spring Abstracts January 29 Li Chao (Columbia) Title: Elliptic curves and Goldfeld's conjecture Abstract: An elliptic curve is a plane curve defined by a cubic equation. Determining whether such an equation has infinitely many rational solutions has been a central problem in number theory for centuries, which led to the celebrated conjecture of Birch and Swinnerton-Dyer. Within a family of elliptic curves (such as the Mordell curve family y^2=x^3-d), a conjecture of Goldfeld further predicts that there should be infinitely many rational solutions exactly half of the time. We will start with a history of this problem, discuss our recent work (with D. Kriz) towards Goldfeld's conjecture, and illustrate the key ideas and ingredients behind this new progress. February 2 Thomas Fai (Harvard) Title: The Lubricated Immersed Boundary Method Abstract: Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method.
We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics. February 5 Alex Lubotzky (Hebrew University) Title: High dimensional expanders: From Ramanujan graphs to Ramanujan complexes Abstract: Expander graphs in general, and Ramanujan graphs in particular, have played a major role in computer science in the last 5 decades and more recently also in pure math. The first explicit construction of bounded degree expanding graphs was given by Margulis in the early '70s. In the mid '80s Margulis and Lubotzky-Phillips-Sarnak provided Ramanujan graphs, which are optimal such expanders. In recent years a high dimensional theory of expanders is emerging. A notion of topological expanders was defined by Gromov in 2010, who proved that the complete d-dimensional simplicial complexes are such. He raised the basic question of the existence of such bounded degree complexes of dimension d>1. This question was answered recently in the affirmative (by T. Kaufman, D. Kazhdan and A. Lubotzky for d=2 and by S. Evra and T. Kaufman for general d) by showing that the d-skeleton of (d+1)-dimensional Ramanujan complexes provides such topological expanders. We will describe these developments and the general area of high dimensional expanders. February 6 Alex Lubotzky (Hebrew University) Title: Groups' approximation, stability and high dimensional expanders Abstract: Several well-known open questions, such as "are all groups sofic or hyperlinear?", have a common form: can all groups be approximated by asymptotic homomorphisms into the symmetric groups Sym(n) (in the sofic case) or the unitary groups U(n) (in the hyperlinear case)? In the case of U(n), the question can be asked with respect to different metrics and norms.
We answer, for the first time, one of these versions, showing that there exist finitely presented groups which are not approximated by U(n) with respect to the Frobenius (=L_2) norm. The strategy is via the notion of "stability": a certain higher dimensional cohomology vanishing phenomenon is proven to imply stability, and using high dimensional expanders, it is shown that some non-residually-finite groups (central extensions of some lattices in p-adic Lie groups) are Frobenius stable and hence cannot be Frobenius approximated. All notions will be explained. Joint work with M. De Chiffre, L. Glebsky and A. Thom. February 9 Wes Pegden (CMU) Title: The fractal nature of the Abelian Sandpile Abstract: The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor. Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation). We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area, and discuss avenues of current and future research. March 2 Aaron Bertram (Utah) Title: Stability in Algebraic Geometry Abstract: Stability was originally introduced in algebraic geometry in the context of finding a projective quotient space for the action of an algebraic group on a projective manifold. This, in turn, led in the 1960s to a notion of slope-stability for vector bundles on a Riemann surface, which was an important tool in the classification of vector bundles.
In the 1990s, mirror symmetry considerations led Michael Douglas to notions of stability for "D-branes" (on a higher-dimensional manifold) that corresponded to no previously known mathematical definition. We now understand each of these notions of stability as a distinct point of a complex "stability manifold" that is an important invariant of the (derived) category of complexes of vector bundles of a projective manifold. In this talk I want to give some examples to illustrate the various stabilities, and also to describe some current work in the area. March 16 Anne Gelb (Dartmouth) Title: Reducing the effects of bad data measurements using variance based weighted joint sparsity Abstract: We introduce the variance based joint sparsity (VBJS) method for sparse signal recovery and image reconstruction from multiple measurement vectors. Joint sparsity techniques employing $\ell_{2,1}$ minimization are typically used, but the algorithm is computationally intensive and requires fine tuning of parameters. The VBJS method uses a weighted $\ell_1$ joint sparsity algorithm, where the weights depend on the pixel-wise variance. The VBJS method is accurate, robust, cost efficient and also reduces the effects of false data. April 6 Edray Goins (Purdue) Title: Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups Abstract: A Belyĭ map [math] \beta: \mathbb P^1(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] is a rational function with at most three critical values; we may assume these values are [math] \{ 0, \, 1, \, \infty \}. [/math] A Dessin d'Enfant is a planar bipartite graph obtained by considering the preimage of a path between two of these critical values, usually taken to be the line segment from 0 to 1. Such graphs can be drawn on the sphere by composing with stereographic projection: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq \mathbb P^1(\mathbb C) \simeq S^2(\mathbb R). 
[/math] Replacing [math] \mathbb P^1 [/math] with an elliptic curve [math]E [/math], there is a similar definition of a Belyĭ map [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C). [/math] Since [math] E(\mathbb C) \simeq \mathbb T^2(\mathbb R) [/math] is a torus, we call [math] (E, \beta) [/math] a toroidal Belyĭ pair. The corresponding Dessin d'Enfant can be drawn on the torus by composing with an elliptic logarithm: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq E(\mathbb C) \simeq \mathbb T^2(\mathbb R). [/math] This project seeks to create a database of such Belyĭ pairs, their corresponding Dessins d'Enfant, and their monodromy groups. For each positive integer [math] N [/math], there are only finitely many toroidal Belyĭ pairs [math] (E, \beta) [/math] with [math] \deg \, \beta = N. [/math] Using the Hurwitz Genus formula, we can begin this database by considering all possible degree sequences [math] \mathcal D [/math] on the ramification indices as multisets on three partitions of N. For each degree sequence, we compute all possible monodromy groups [math] G = \text{im} \, \bigl[ \pi_1 \bigl( \mathbb P^1(\mathbb C) - \{ 0, \, 1, \, \infty \} \bigr) \to S_N \bigr]; [/math] they are the "Galois closure" of the group of automorphisms of the graph. Finally, for each possible monodromy group, we compute explicit formulas for Belyĭ maps [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] associated to some elliptic curve [math] E: \ y^2 = x^3 + A \, x + B. [/math] We will discuss some of the challenges of determining the structure of these groups, and present visualizations of group actions on the torus. This work is part of PRiME (Purdue Research in Mathematics Experience) with Chineze Christopher, Robert Dicks, Gina Ferolito, Joseph Sauder, and Danika Van Niel, with assistance by Edray Goins and Abhishek Parab.
What is a Triangle? A triangle is a polygon with three sides, in which the sum of the lengths of any two sides is always greater than the third side; this is known as the triangle inequality. In other words, a triangle can be defined as any closed figure with three sides whose interior angles sum to 180°. Being a closed figure, a triangle can have different shapes, and each shape is described by its angles. Types of Triangles: Acute angle triangle: When all three interior angles are less than 90°, it is called an acute angle triangle. Right angle triangle: When one of the angles is exactly 90°, it is called a right angle triangle. Obtuse angle triangle: When one of the angles is greater than 90°, it is called an obtuse angle triangle. Right Angled Triangle A right-angled triangle is one of the most important shapes in geometry and is the basis of trigonometry. A right-angled triangle has three sides, the "base", the "height" and the "hypotenuse", with the angle between the base and the height being 90°. These three sides give rise to the most important theorem for right triangles, the Pythagorean theorem: if squares are drawn on the three sides, the area of the biggest square (on the hypotenuse) is equal to the sum of the areas of the two smaller squares. Equivalently, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the base and the height. As we now have a general idea about the shape and basic properties of a right-angled triangle, let us discuss the area of a triangle. Right Angle Triangle Properties Let us discuss the properties of a right angle triangle. One angle is always 90°, a right angle. The side opposite the 90° angle is the hypotenuse. The hypotenuse is always the longest side.
The sum of the other two interior angles is equal to 90°. The two sides adjacent to the right angle are called the base and the perpendicular. The area of a right angle triangle is equal to half the product of the two sides adjacent to the right angle, i.e., Area of Right Angle Triangle = ½ (Base × Perpendicular). If we drop a perpendicular from the right angle to the hypotenuse, we get three similar triangles. If we draw a circumcircle passing through all three vertices, the radius of this circle is equal to half the length of the hypotenuse. If one of the angles is 90° and the other two angles are 45° each, the triangle is called an Isosceles Right Angled Triangle, in which the sides adjacent to the 90° angle are equal in length to each other. These were the general properties of a right angle triangle; its construction is also very easy. Keep learning with BYJU'S to get more such study materials related to different topics of Geometry and other subjects. Area of Right Angled Triangle Area is two-dimensional and is measured in square units. It can be defined as the amount of space taken up by a two-dimensional object. The area of a triangle can be calculated by two formulas: area \(= \frac{a \times b}{2}\), and Heron's formula, area \(= \sqrt{s(s-a)(s-b)(s-c)}\), where \(s\) is the semi-perimeter, calculated as \(s = \frac{a+b+c}{2}\), and \(a, b, c\) are the sides of the triangle. Let us calculate the area of a triangle using the figure given below. Fig 1: Let us drop a perpendicular to the base b in the given right angle triangle. Now let us duplicate the triangle to make a second copy. Fig 2: The two copies together form the shape of a parallelogram, as shown in the figure. Fig 3: Let us move the yellow shaded region to the beige colored region as shown in the figure. Fig 4: It now takes the shape of a rectangle.
Now, by the property of area, the area of a rectangle is the product of its two adjacent sides. Hence area = b×h (for a rectangle). Therefore, the area of a right angle triangle is half of that, i.e. \(= \frac{b \times h}{2}\). For a right-angled triangle, the base is always perpendicular to the height. When the triangle is labelled by its vertices rather than by its side lengths, the area of a right-angled triangle can be calculated by the given formula: \(= \frac{bc \times ba}{2}\), where a, b, c are the vertices of the triangle, with the angle at b always being 90°, and bc, ba denoting the lengths of the two sides meeting at b. To learn more interesting facts about triangles stay tuned with BYJU'S.
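The two area formulas above can be checked against each other with a short script (illustrative only, not part of the original lesson):

```python
import math

def right_triangle_area(base, height):
    """Area of a right triangle: half the product of the two legs."""
    return base * height / 2

def herons_area(a, b, c):
    """Heron's formula: area of any triangle from its three side lengths."""
    s = (a + b + c) / 2            # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# For the 3-4-5 right triangle both formulas give the same area:
print(right_triangle_area(3, 4))   # 6.0
print(herons_area(3, 4, 5))        # 6.0
```

Heron's formula works for any triangle, while the ½ × base × perpendicular form is the special case where two sides meet at the right angle.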
Document contents Dr. Dennis Amelunxen Contact City University of Hong Kong Department of Mathematics G6613 (Green Zone), 6/F Academic 1 Tat Chee Avenue, Kowloon Tong, Hong Kong Personal homepage sites.google.com/site/dennisamelunxen/home Publications in the research group Citation key BA-Robust-Smoothed-Analysis-Of-A-Condition-Number-For-Linear-Programming Author Peter Bürgisser and Dennis Amelunxen Pages 221-251 Year 2012 ISSN 0025-5610 DOI 10.1007/s10107-010-0346-x Journal Mathematical Programming Volume 131 Number 1-2, Ser. A Abstract We perform a smoothed analysis of the GCC-condition number $C(A)$ of the linear programming feasibility problem $\exists x\in\mathbb R^{m+1}\ Ax < 0$. Suppose that $\bar{A}$ is any matrix with rows $\bar{a}_i$ of Euclidean norm $1$ and, independently for all $i$, let $a_i$ be a random perturbation of $\bar{a}_i$ following the uniform distribution in the spherical disk in $S^m$ of angular radius $\arcsin\sigma$ and centered at $\bar{a}_i$. We prove that $E(\ln C(A)) = O(mn / \sigma)$. A similar result was shown for Renegar's condition number and Gaussian perturbations by Dunagan, Spielman, and Teng [arXiv:cs.DS/0302011]. Our result is robust in the sense that it easily extends to radially symmetric probability distributions supported on a spherical disk of radius $\arcsin\sigma$, whose density may even have a singularity at the center of the perturbation. Our proofs combine ideas from a recent paper of Bürgisser, Cucker, and Lotz (Math. Comp. 77, No. 263, 2008) with techniques of Dunagan et al.
It is not possible to losslessly compress all files of size $n$ using a single algorithm, as there are more files of size $n$ (namely $2^n$) than files of size $p < n$ (namely $2^n - 1$ in total). Via the pigeonhole principle, if we tried to compress all files of size $n$ with a single algorithm, there would be at least one file it was impossible to compress. If we wanted to be able to compress files of differing lengths $n_k$, the fraction of files of length $n_k$ we can compress, for each $n_k$, becomes even smaller. Today, when reading a story about how a file that was several gigabytes when compressed uncompressed to one gigabyte, I had an idea for a universal compression algorithm. Let $a_i$ be a compression algorithm. Let $g_j$ be a file; $|g_j|$ denotes the length of $g_j$. Let $f(a_i, g_j)$ be a function that returns $|g_j| - |a_i(g_j)|$. Let $S_N = \{g_j : |g_j| \le N\}$. Let $A$ be a set of algorithms such that $\forall \, g_j \in S_N \, \exists \, a_i \in A : f(a_i, g_j) > \lceil\log_2{\#A}\rceil$, where $\#A$ denotes the number of elements in $A$. Let $m$ be the length of the label of the chosen compression algorithm; the first $m$ bits of every compressed file denote the compression algorithm chosen, so $m = \lceil\log_2{\#A}\rceil$. Then you can compress every $g_j \in S_N$ by iterating through $A$ until you find $a_i$ with $f(a_i,g_j) - m > 0$. Even better: for each $g_j$, let $a_j$ be the corresponding compression algorithm, and let $h(a_i, g_j) = f(a_i,g_j) - m$. $${ \, \forall \, g_j \in S_N, \quad a_j = \displaystyle{ \underset{a_i \in A} { \operatorname{argmax} } } \, h(a_i, g_j)}$$ Is there a reason why the above is not done? While the above is an algorithm, and one could argue that the pigeonhole principle thus applies, this does not imply what it may at first seem to imply. The above algorithm, call it $a^v$, is a little different.
Let $a_i: S_N \to Y_N^i$ denote that algorithm $a_i$ maps the family of files $S_N = \{g_j : |g_j| \le N\}$ to another family of files $Y_N^i = \{y_j : y_j = a_i(g_j)\}$. So $\forall a_i \in A$, $a_i: S_N \to Y_N^i$. However, $a^v : S_{N+m} \to Y_{N+m}^v$, so $a^v$ compresses a different family of files from each $a_i \in A$. The pigeonhole principle merely states that $a^v$ cannot compress all files of length $N+m$; this is irrelevant, since $a^v$ only intends to compress a small subset of the files of length $N+m$ (those whose first $m$ bits are the labels of some $a_i \in A$).
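The counting at the heart of the pigeonhole argument, and the label overhead $m = \lceil \log_2 \#A \rceil$, can be sketched numerically (the family size of 300 algorithms below is made up purely for illustration):

```python
import math

def count_files(n):
    """Number of distinct bit strings of length exactly n."""
    return 2 ** n

def count_shorter_files(n):
    """Number of distinct bit strings of length 0..n-1 (2^0 + ... + 2^(n-1))."""
    return 2 ** n - 1

# Pigeonhole: strictly fewer shorter outputs than n-bit inputs, so any single
# injective compressor must fail to shorten at least one n-bit file.
n = 16
print(count_files(n), count_shorter_files(n))   # 65536 65535

# Label overhead for a family A of algorithms: m = ceil(log2 #A) bits.
num_algorithms = 300   # hypothetical size of A, for illustration only
m = math.ceil(math.log2(num_algorithms))
print(m)   # 9
```

The overhead $m$ grows only logarithmically in $\#A$, which is why the scheme looks tempting; the catch, as the text goes on to explain, is which family of files the combined algorithm actually acts on.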
I see that this question has been bumped to the home page again. I recently answered a similar question at https://math.stackexchange.com/a/2920136/575517 so I'll give the essence of it here, in case anyone finds it helpful. This question, "How do I know that the simulation run has reached equilibrium?", is often glossed over and left as a "rule of thumb". As discussed in other answers, usually one requires that an "equilibration" or "burn-in" time should be at least as long as the correlation time $\tau_A$ of the variable $A$ of interest or, even better, of all variables of interest (taking the longest $\tau$). One discards that part of the trajectory, and starts accumulating averages thereafter. Problems here are that one needs an estimate of $\tau_A$ beforehand, and also that this argument is loosely based on linear response theory: namely, that the relaxation of a slightly perturbed state to equilibrium occurs on a timescale given by $\tau_A$, which is a property of the equilibrium time correlation function. There's no guarantee that the relaxation from an arbitrarily prepared initial configuration will follow this law, even though $\tau_A$ might provide a reasonable guide. However, I'm aware of at least one paper where an attempt has been made, by John Chodera, to tackle it objectively: https://www.biorxiv.org/content/early/2015/07/04/021659 which was also published in J Chem Theo Comp, 12, 1799 (2016). I won't try to reproduce the mathematics here, but the basic idea is to use the procedure for estimating statistical errors in correlated sequences of data - which involves estimating the correlation time (or the statistical inefficiency, which is the spacing between effectively independent samples) - and applying it to the interval $(t_0,t_{\text{max}})$ which covers the period between the (proposed) end of the equilibration period, $t_0$, and the end of the whole dataset, $t_{\text{max}}$.
This calculation behaves in a predictable way if the dataset is at equilibrium: the fluctuations $\langle\delta A_{\Delta t}^2\rangle$ of a finite-time average $$\delta A_{\Delta t} = A_{\Delta t}-\langle A\rangle, \qquad A_{\Delta t} = \frac{1}{\Delta t} \int_t^{t+\Delta t} dt' \, A(t')$$ depend in a known way on the averaging interval $\Delta t$. The time origins $t$ are chosen within the interval of interest, $(t_0,t_{\text{max}}-\Delta t)$; the fluctuations are calculated from all the periods $\Delta t$ lying within that interval. The method systematically carries out this calculation as a function of $t_0$, reducing it from an initial high value towards the start of the run. At some point, it is expected that deviations from the expected behaviour are seen. That point is taken to be the end of the equilibration period. Anyway, reading that paper should be helpful in clarifying this issue. The author also provides a piece of Python software to implement the calculation automatically, so it may be of practical use as well.
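As a rough illustration of the idea (not Chodera's actual implementation; his Python package should be preferred in practice), one can scan candidate equilibration times $t_0$ and keep the one that maximizes the number of effectively uncorrelated samples in the remainder of a synthetic trajectory:

```python
import numpy as np

def statistical_inefficiency(a):
    """g = 1 + 2 * sum of normalized autocorrelations, truncated at the
    first non-positive estimate (a common, simple truncation rule)."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    da = a - a.mean()
    var = np.dot(da, da) / n
    if var == 0:
        return 1.0
    g = 1.0
    for t in range(1, n // 2):
        c = np.dot(da[:-t], da[t:]) / ((n - t) * var)
        if c <= 0:
            break
        g += 2.0 * c * (1.0 - t / n)
    return g

def detect_equilibration(a, step=50):
    """Pick the t0 maximizing the effective number of uncorrelated
    samples (len - t0)/g in a[t0:], in the spirit of Chodera's method."""
    best_t0, best_neff = 0, 0.0
    for t0 in range(0, len(a) - step, step):
        g = statistical_inefficiency(a[t0:])
        neff = (len(a) - t0) / g
        if neff > best_neff:
            best_t0, best_neff = t0, neff
    return best_t0, best_neff

# Synthetic trajectory: a slow exponential transient (timescale ~100 steps)
# relaxing onto stationary white noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
traj = 5.0 * np.exp(-t / 100.0) + rng.normal(0.0, 1.0, size=t.size)
t0, neff = detect_equilibration(traj)
print(t0, neff)   # t0 lands a few transient timescales into the run
```

The transient inflates the apparent correlation time when included, so discarding it increases the effective sample count; the maximizing $t_0$ marks the proposed end of equilibration.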
ISSN: 1531-3492 eISSN: 1553-524X Discrete & Continuous Dynamical Systems - B December 2013, Volume 18, Issue 10 Special issue on Chemotaxis Abstract: Self-organization of microorganisms through the oriented movement of individuals along chemical gradients has caught researchers' imagination and interest for a long time. In fact, the process of aggregation of cells is a first step in the transition from individuals to a collective. Chemotaxis has been identified to play an important role in areas as diverse as ecological species (e.g. slime molds) and bacteria (E. coli), embryogenesis, immune response, wound healing, and cancer development. The first mathematical model for chemotaxis was introduced by Patlak (1953) and later became known as the Keller and Segel (1970) system of equations. The model has become a hot topic not only for the description of biological phenomena, but also mathematically. Sophisticated mathematical analysis has developed, and it is the purpose of this special issue of DCDS-B to showcase some of the interesting and challenging mathematical questions that currently arise in the analysis of chemotaxis models. For more information please click the "Full Text" above. Abstract: We prove that for radially symmetric functions in a ring $\Omega = \{ x \in \mathbb{R}^n,\ n \geq 2 : r \leq |x| \leq R \}$ a special type of Trudinger-Moser-like inequality holds. Next we show how to infer from it a lack of blowup of radially symmetric solutions to a Keller-Segel system in $\Omega$. Abstract: In a recent study (K.J. Painter and T. Hillen, Spatio-temporal chaos in a chemotaxis model, Physica D, 240 (4), 363-375, 2011) a model for chemotaxis incorporating logistic growth was investigated for its pattern formation properties. In particular, a variety of complex spatio-temporal patterning was found, including stationary, periodic and chaotic.
Complicated dynamics appear to arise through a sequence of "merging and emerging" events: the merging of two neighbouring aggregates or the emergence of a new aggregate in an open space. In this paper we focus on a time-discrete dynamical system motivated by these dynamics, which we call the merging-emerging system (MES). We introduce this new class of set-valued dynamical systems and analyse its capacity to generate similar "pattern formation" dynamics. The MES shows remarkably close correspondence with patterning in the logistic chemotaxis model, strengthening our assertion that the characteristic length scales of merging and emerging are responsible for the observed dynamics. Furthermore, the MES describes a novel class of pattern-forming discrete dynamical systems worthy of study in its own right. Abstract: This paper gives the gradient estimate for solutions to the quasilinear non-degenerate parabolic-parabolic Keller-Segel system (KS) on the whole space $\mathbb{R}^N$. The gradient estimate for (KS) on bounded domains is known as an application of Amann's existence theory in [1]. However, in the whole-space case it seems necessary to derive the gradient estimate directly. The key to the proof is a modified Bernstein's method. The result is useful to obtain the whole-space version of the global existence result by Tao-Winkler [13], except for the boundedness. Abstract: This paper gives a blow-up result for the quasilinear degenerate Keller-Segel systems of parabolic-parabolic type. It is known that the system is globally solvable in the case where $q < m + \frac{2}{N}$ ($m$ denotes the intensity of diffusion and $q$ denotes the nonlinearity) without any restriction on the size of initial data, and in the case where $q \geq m + \frac{2}{N}$ and the initial data are "small". However, there is no result when $q \geq m + \frac{2}{N}$ and the initial data are "large".
This paper discusses this case and shows that there exist blow-up energy solutions arising from initial data with large negative energy. Abstract: In this paper, the pattern formation of the attraction-repulsion Keller-Segel (ARKS) system is studied analytically and numerically. By the Hopf bifurcation theorem as well as the local and global bifurcation theorems, we rigorously establish the existence of time-periodic patterns and steady-state patterns for the ARKS model in the full parameter regimes, which are identified by a linear stability analysis. We also show that when the chemotactic attraction is strong, a spiky steady-state pattern can develop. Explicit time-periodic rippling wave patterns and spiky steady-state patterns are obtained numerically by carefully selecting parameter values based on our theoretical results. The study in the paper asserts that chemotactic competitive interaction between attraction and repulsion can produce periodic patterns, which are impossible for the chemotaxis model with a single chemical (either chemo-attractant or chemo-repellent). Abstract: We construct the global bounded solutions and the attractors of a parabolic-parabolic chemotaxis-growth system in two- and three-dimensional smooth bounded domains. We derive new $L_p$ and $H^2$ uniform estimates for these solutions. We then construct the absorbing sets and the global attractors for the dynamical systems generated by the solutions. We also show the existence of exponential attractors by applying the existence theorem of Eden-Foias-Nicolaenko-Temam. Abstract: We study the well-posedness of a model of individual clustering. Given $p>N\geq 1$ and an initial condition in $W^{1,p}(\Omega)$, the local existence and uniqueness of a strong solution is proved. We next consider two specific reproduction rates and show global existence if $N=1$, as well as the convergence to steady states for one of these rates.
Abstract: In this paper we consider a general system of reaction-diffusion equations and introduce a comparison method to obtain qualitative properties of its solutions. The comparison method is applied to study the stability of homogeneous steady states and the asymptotic behavior of the solutions of different systems with a chemotactic term. The theoretical results obtained are slightly modified to be applied to problems where the systems are coupled in the differentiated terms and/or contain nonlocal terms. We obtain results concerning the global stability of the steady states by comparison with solutions of ordinary differential equations. Abstract: In this paper we present an implicit finite element method for a class of chemotaxis models, where a new linearized flux-corrected transport (FCT) algorithm is modified in such a way as to keep the density of on-surface living cells nonnegative. Level set techniques are adopted for an implicit description of the surface and for the numerical treatment of the corresponding system of partial differential equations. The presented scheme is able to deliver a robust and accurate solution for a large class of chemotaxis-driven models. The numerical behavior of the proposed scheme is tested on the blow-up model on a sphere and an ellipsoid, and on the pattern-forming dynamics model of Escherichia coli on a sphere. Abstract: This paper deals with the repulsion chemotaxis system $$ \left\{ \begin{array}{ll} u_t=\Delta u +\nabla \cdot (f(u)\nabla v), & x\in\Omega, \ t>0, \\ v_t=\Delta v +u-v, & x\in\Omega, \ t>0, \end{array} \right. $$ under homogeneous Neumann boundary conditions in a smooth bounded convex domain $\Omega\subset\mathbb{R}^n$ with $n\ge 3$, where $f(u)$ is the density-dependent chemotactic sensitivity function satisfying $$ f \in C^2([0, \infty)), \quad f(0)=0, \quad 0 < f(u) \le K(u+1)^{\alpha} \ \text{for all } u > 0 $$ with some $K>0$ and $\alpha>0$.
It is proved that under the assumptions that $0\not\equiv u_0\in C^0(\bar{\Omega})$ and $v_0\in C^1(\bar{\Omega})$ are nonnegative and that $\alpha<\frac{4}{n+2}$, the classical solutions to the above system are uniformly-in-time bounded. Moreover, it is shown that for any given $(u_0, v_0)$, the corresponding solution $(u, v)$ converges to $(\bar{u}_0, \bar{u}_0)$ as time goes to infinity, where $\bar{u}_0 :=\frac{1}{|\Omega|} \int_{\Omega} u_0$.
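None of the schemes above is reproduced here, but a naive explicit finite-difference sketch of a 1-D parabolic-parabolic Keller-Segel system (periodic domain, illustrative parameters only) shows the conservative structure that more sophisticated methods such as the FCT scheme are designed to preserve:

```python
import numpy as np

# Naive explicit sketch (illustrative parameters, not from any of the papers):
#   u_t = u_xx - chi * (u v_x)_x     (cell density)
#   v_t = v_xx + u - v               (chemoattractant)
# on a periodic 1-D grid, chemotactic term written in conservative flux form.
n, L = 100, 10.0
dx = L / n
dt = 0.2 * dx**2            # explicit diffusion stability: dt/dx^2 <= 1/2
chi = 1.0

rng = np.random.default_rng(1)
u = 1.0 + 0.01 * rng.standard_normal(n)   # small perturbation of the flat state
v = np.ones(n)

def lap(w):
    return (np.roll(w, -1) - 2 * w + np.roll(w, 1)) / dx**2

def step(u, v):
    vx_face = (np.roll(v, -1) - v) / dx        # v_x at cell faces
    u_face = 0.5 * (u + np.roll(u, -1))        # u averaged to faces
    flux = chi * u_face * vx_face
    div_flux = (flux - np.roll(flux, 1)) / dx
    u_new = u + dt * (lap(u) - div_flux)
    v_new = v + dt * (lap(v) + u - v)
    return u_new, v_new

mass0 = u.sum() * dx
for _ in range(500):
    u, v = step(u, v)
print(abs(u.sum() * dx - mass0) < 1e-10)   # True: total cell mass is conserved
```

Because both the diffusion and the chemotaxis terms are written as differences of face fluxes, the discrete total mass of $u$ telescopes to a constant on the periodic grid, the discrete analogue of $\frac{d}{dt}\int_\Omega u = 0$.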
From Wikipedia and my textbook(*), the definition of a.s. convergence is as follows: $X_n$ converges to $X$ almost surely if \begin{equation} P\left(\omega \in \Omega:\lim_{n\to \infty} X_n(\omega) = X(\omega)\right) = 1. \end{equation} In this definition, $\omega$ is mentioned explicitly. (There are other definitions of a.s. convergence which do not mention $\omega$; these are also what I do not understand.) However, some problems in my homework confused me a lot. These two problems do not mention a sample space $\Omega$. The second problem does not say that $X_n$ almost surely converges to some $X$, yet it asks the reader to judge whether $X_n$ converges almost surely. This is what I cannot understand. Another related question also confused me. Consider the sequence of random variables $X_n$ defined by \begin{equation} X_n(\omega) = \begin{cases} 0, & 0<\omega\le \frac{1}{2} \\ 1, & \frac{1}{2}<\omega \le 1 \end{cases} \quad \text{for $n$ even}, \qquad X_n(\omega) = \begin{cases} 1, & 0<\omega\le \frac{1}{2} \\ 0, & \frac{1}{2}<\omega \le 1 \end{cases} \quad \text{for $n$ odd}. \end{equation} Of course $X_n$ converges neither a.s. nor in probability. However, if the sample space $\Omega$ is omitted, as in \begin{equation} X_n = \begin{cases} 0, & \text{with probability } \frac{1}{2} \\ 1, & \text{with probability } \frac{1}{2} \end{cases} \end{equation} then intuitively $X_n$ "must converge" since its distribution is constant, which contradicts the previous conclusion. Those are my questions, which can be summarized in the title. Could anyone help me? Thanks in advance. * hajek.ece.illinois.edu/ECE534Notes.html
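The distinction the example is driving at can be made concrete: fixing $\omega$ and following the whole trajectory $n \mapsto X_n(\omega)$ shows that it oscillates forever, even though each individual $X_n$ has the same Bernoulli(1/2) distribution. A small sketch:

```python
def X(n, omega):
    """The sequence from the question, on the sample space Omega = (0, 1]."""
    if n % 2 == 0:                      # n even
        return 0 if omega <= 0.5 else 1
    else:                               # n odd
        return 1 if omega <= 0.5 else 0

# For every fixed omega the trajectory oscillates 0,1,0,1,... forever, so the
# event {omega : lim X_n(omega) exists} is empty (no a.s. convergence), even
# though each X_n individually is Bernoulli(1/2).
print([X(n, 0.3) for n in range(8)])   # [0, 1, 0, 1, 0, 1, 0, 1]
print([X(n, 0.7) for n in range(8)])   # [1, 0, 1, 0, 1, 0, 1, 0]
```

The "with probability 1/2" description only pins down the marginal distribution of each $X_n$; almost sure convergence is a statement about the joint behaviour along $\omega$-trajectories, which the marginals alone do not determine.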
Maybe this should be in the classical physics section, but there is a connection with GR so I'm posting here. I've been looking at Lagrangians for oscillating systems because I'm interested in gravitational time dilation and stumbled on this. The Lagrangian for a free particle is just the kinetic energy, and I wondered what would happen if I allowed the inertial mass term to vary locally, so that [tex]L = \frac{1}{2}m(x)\dot{x}^2[/tex] which leads to [tex]\frac{\partial L}{\partial x} = 0, \frac{\partial L}{\partial\dot{x}} = m(x)\dot{x} [/tex] and [tex]\frac{d}{dt}\left(m(x)\dot{x}\right) = m'(x)\dot{x}^2 + \ddot{x}m(x) [/tex] where the ' indicates differentiation wrt x. For force-free motion this is zero, which gives the EOM [tex]\ddot{x} = \frac{-m'(x)\dot{x}^2}{m(x)}[/tex] This is interesting, because we have motion here without a force, i.e. coordinate acceleration. Now, making the assertion that [tex]m(x) = m_0f(x)[/tex] where [itex]m_0[/itex] is a constant, we can say [tex]\ddot{x} = \frac{-f'(x)\dot{x}^2}{f(x)}[/tex] and the acceleration is independent of [itex]m_0[/itex]. This is a strong clue that the motion is being caused by gravity. The EOM also looks like the GR equivalent, where the coefficients of the velocities are the affine connections. In order to conform to the known gravitational time dilation effect, which is proportional to [tex](g_{00})^{-1}[/tex], we have to say [tex]m(x) = (g_{00})^{-1}[/tex]. Doing the calculation for the Schwarzschild spacetime one finds [tex]\frac{-m'(x)}{m(x)} = -\left(1-\frac{2m}{r}\right)^{-1}\frac{m}{r^2} = \Gamma^x_{xx}[/tex]. We get exactly the spatial part of the GR EOM. What's happening is that translational symmetry of the Lagrangian is broken, giving rise to a gauge field that looks a lot like gravity, and seems to reproduce to some degree the EOM of GR. This is known already (teleparallel gravity), but I haven't seen it done by allowing the inertial mass to be local. My questions are: 1. Have I made a mistake in my calculation? 2. Can anyone point me to related work? 3. Did you spot the pun on 'connection'?
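A quick numerical check of the poster's EOM (with a made-up profile $f(x) = 1 + x^2$, chosen only for illustration): since the equation is exactly $\frac{d}{dt}(m(x)\dot{x}) = 0$, the quantity $p = f(x)\dot{x}$ should be conserved along any numerical solution.

```python
import numpy as np

# Hypothetical mass profile f(x) = 1 + x^2 (any smooth positive f will do).
f  = lambda x: 1.0 + x**2
fp = lambda x: 2.0 * x      # f'(x)

def accel(x, xd):
    """The poster's force-free EOM: xdd = -f'(x) * xd^2 / f(x)."""
    return -fp(x) * xd**2 / f(x)

def rk4_step(state, dt):
    """One RK4 step of the equivalent first-order system (x, xdot)."""
    def deriv(s):
        return np.array([s[1], accel(s[0], s[1])])
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

state = np.array([0.0, 1.0])       # x(0) = 0, xdot(0) = 1
p0 = f(state[0]) * state[1]        # conserved quantity m(x)*xdot / m_0
for _ in range(2000):
    state = rk4_step(state, 0.01)
x, xd = state
print(abs(f(x) * xd - p0))   # ~0: f(x)*xdot stays conserved along the motion
```

The conservation is exact in the continuum ($\frac{d}{dt}(f\dot{x}) = f'\dot{x}^2 + f\ddot{x} = 0$ by the EOM), so any residual is pure integrator error.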
The Lagrangian (or variational) formulation of the Euler-Lagrange equations and the Hamiltonian formulation are equivalent. This equivalence can be made quite explicit and goes a bit deeper than the standard treatments show. The equivalence can be established in several steps, which I'll try to outline below with references. The Hamiltonian formalism is a special case of the Lagrangian formalism. There is a generic operation that can be performed on variational problems: adjunction and elimination of auxiliary fields or variables. Given a Lagrangian $L(x,y)$, the variable $y$ (which could be vector valued) is called auxiliary if the Euler-Lagrange equations obtained from the variation of $y$ can be algebraically solved for in terms of the remaining variables and their derivatives, $y=y(x)$. The important point here is that we need not solve any differential equations to obtain $y(x)$. The Lagrangian $L'(x) = L(x,y(x))$ gives a new variational principle with the auxiliary field $y$ eliminated. The critical points of $L'(x)$ are in one-to-one correspondence with those of the original $L(x,y)$. The adjunction of an auxiliary field is the reverse operation. Given $L(x)$, we look for another Lagrangian $L'(x,y)$ where $y$ is auxiliary and its elimination gives $L'(x,y(x))=L(x)$. It is straightforward to check that, given a Lagrangian $L(x)$ with Hamiltonian $H(x,p)$, the new Lagrangian $L'(x,p) = p\dot{x} - H(x,p)$, associated to Hamilton's Least Action Principle, is a special case of adjoining some auxiliary fields, namely the momenta $p$. The elimination $p=p(x)$ is precisely the inverse Legendre transform. The moral here is that the Legendre transform is not sacred. I learned this point of view from the following paper of Barnich, Henneaux and Schomblond (PRD, 1991).
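The adjunction/elimination step can be carried out symbolically for the simplest case $L = \frac{1}{2}m\dot{x}^2 - V(x)$. The sketch below (assuming SymPy is available) performs the Legendre transform, adjoins $p$ as an auxiliary variable via $L' = p\dot{x} - H$, and eliminates it again to recover $L$:

```python
import sympy as sp

m = sp.symbols('m', positive=True)
x, xd, p = sp.symbols('x xdot p')
V = sp.Function('V')

L = sp.Rational(1, 2) * m * xd**2 - V(x)      # original Lagrangian

# Legendre transform: p = dL/d(xdot), solve algebraically for xdot,
# then H = p*xdot - L.
xd_of_p = sp.solve(sp.Eq(p, sp.diff(L, xd)), xd)[0]
H = sp.expand((p * xd - L).subs(xd, xd_of_p))
print(H)   # kinetic + potential: p**2/(2*m) + V(x)

# Adjoin p as an auxiliary variable: L' = p*xdot - H. Varying p gives a
# purely algebraic equation; eliminating p recovers the original L.
Lp = p * xd - H
p_of_xd = sp.solve(sp.Eq(sp.diff(Lp, p), 0), p)[0]
L_back = sp.expand(Lp.subs(p, p_of_xd))
print(sp.simplify(L_back - L))   # 0
```

Note that solving for $\dot{x}(p)$ and for $p(\dot{x})$ involves no differential equations, which is exactly what makes $p$ auxiliary in the sense defined above.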
This point of view is particularly helpful in higher-order field theories (multiple independent variables, second- or higher-order derivatives in the Lagrangian), where a unique notion of Legendre transform is lacking. The phase space is the space of solutions of the Euler-Lagrange equations. When the Euler-Lagrange equations have a well-posed initial value problem, solutions can be put into one-to-one correspondence with initial data. The initial data are uniquely specified by the canonical position and momentum variables that are commonly used to define the canonical phase space in the Hamiltonian picture. This is a rather common identification nowadays and can be found in many places, so I won't give a specific reference. If the Euler-Lagrange equations do not have a well-posed initial value problem, the definitions change somewhat on either side, but an equivalence can still be established. See the references in the next step for more details. Any Lagrangian defines a (pre)symplectic form (current). This is, unfortunately, less well known than it should be. The (pre)symplectic structure of classical mechanics and field theory can be defined straightforwardly, directly from the Lagrangian. In case the equations of motion are degenerate, the form is degenerate and hence only presymplectic; otherwise it is symplectic. There is more than one way to do this, but a particularly transparent one is referred to as the covariant phase space method. A very nice (though not original) reference is Lee & Wald (JMP, 1990). See also this nLab page, which also has a more extensive reference list. Applying this method to the Lagrangian of Hamilton's Least Action Principle gives the standard symplectic form in terms of the canonical position and momentum coordinates. Briefly, and without going into the details of differential forms on jet spaces where this is most easily formalized, the construction is as follows.
Denote by $d$ the space-time exterior differential (space-time is just 1-dimensional if you only have time) and by $\delta$ the field-variation exterior differential. Without dropping boundary or total divergence terms, the total variation of the Lagrangian can be expressed as $\delta L(x) = \mathrm{EL}\,\delta x + d\theta$. Here the Lagrangian is a space-time volume form, $\mathrm{EL}$ denotes the Euler-Lagrange equations, and $d\theta$ consists of all the terms that are usually dropped during partial integration. Clearly $\theta$ is a 1-form in terms of field variations and, as a space-time form, of one degree lower than a volume form (a.k.a. a current). Taking another exterior field variation, we get $\omega=\delta\theta$, which is the desired (pre)symplectic current. If $\Sigma$ is a Cauchy surface (in particular, it is codimension-1), the (pre)symplectic form is defined as $\Omega=\int_\Sigma \omega$, now a 2-form in terms of field variations, as expected. It can be shown that the (pre)symplectic current is closed, $d\omega=0$, when evaluated on solutions of the Euler-Lagrange equations. Hence, by Stokes' theorem, $\Omega$ is independent of the choice of $\Sigma$. When space-time is 1-dimensional, $\Omega$ is just $\omega$ evaluated at a particular time. A (pre)symplectic form (current) defines a Lagrangian. As alluded to in the question, Hamilton's equations of motion are often expressed in a special form that highlights a certain geometrical structure that is not obvious in the original Lagrangian form. It is well known from the symplectic formulation of classical mechanics that this structure can be seen as a consequence of the fact that they correspond to time evolution generated by a Hamiltonian via the Poisson bracket. The symplectic form is then preserved by the evolution. There are analogous statements for field theory. In fact, no variational formulation is necessary to discuss this geometrical structure of the equations.
What is quite remarkable is that a kind of converse to this statement holds as well. Namely, given a system of (partial) differential equations, if there is a conserved (pre)symplectic current $\omega$ on the space of solutions ($\omega$ is field dependent and $d\omega=0$ when evaluated on solutions; see the previous step), then a subsystem of the equations is derived from a variational principle. There is a subtlety here. Even if there exists a conserved $\omega$, if it is degenerate (not symplectic), other independent equations need to be added to the Euler-Lagrange equations of the corresponding variational principle to obtain a system of equations equivalent to the original one. Once the Lagrangian of this variational principle is known, the Hamiltonian and symplectic forms can be defined in the usual way and the original system of equations recast in the canonical Hamilton form. To my knowledge, the above observation first appeared in Henneaux (AnnPhys, 1982) for ODEs and in Bridges, Hydon & Lawson (MathProcCPS, 2010) for PDEs. The calculation demonstrating this observation is given in a bit more detail on this nLab page. Another way to look at this result is to consider a conserved (pre)symplectic form as a certificate for the solution of the inverse problem of the calculus of variations. A final note about the usefulness of the Hamiltonian formulation, despite the fact that, as a consequence of the above discussion, it's not strictly necessary: any symplectic manifold has local coordinates in which the symplectic form is canonical (Darboux's theorem), and the Legendre transform identifies this choice of coordinates explicitly.
No, your first equation is not a consequence of your second one, nor vice versa. The third one combines the first two. The antisymmetric permutation symbols in n (spacetime) dimensions obey $$ \epsilon^{j_1 \dots j_n}\epsilon_{i_1 \dots i_n} = n! \, \delta_{[ i_1}^{j_1} \dots \delta_{i_n ]}^{j_n} ,$$ where [...] denotes complete antisymmetrization of the lower indices. As a result, contracting n-2 pairs, that is, all indices but the leading two pairs, yields $$ \epsilon^{\mu\nu j_3 \dots j_n}\epsilon_{\alpha\beta j_3 \dots j_n} = -(n-2)! \, (\delta_{ \alpha}^{\mu} \delta_{ \beta}^{\nu}- \delta_{ \beta}^{\mu} \delta_{ \alpha}^{\nu} ), $$ antisymmetric in all up and down free index pairs. Your n=4 has a coefficient -2, while n=3 and n=2 have a coefficient -1, n=5 a coefficient -6, etc. This is all pure combinatorics, logically independent of the Lorentz group. Now, the Lorentz group has an n(n-1)/2-dimensional Lie algebra (so 6-dimensional for n=4), indexed by the corresponding antisymmetric pairs of indices $\mu\nu$, etc., $$[J^{\mu\nu},J^{\rho \sigma}]=i(J^{\mu\rho} g^{\nu\sigma} +J^{\nu\sigma} g^{\mu\rho}- J^{\nu\rho} g^{\mu\sigma}-J^{\mu\sigma} g^{\nu\rho}),$$ a unique form bearing all single and pairwise antisymmetries of the (pairwise antisymmetric) commutator on the left. In general it is represented by m × m matrices of all types, whose indices are suppressed here. But when it is represented by a "fundamental" set of n × n matrices, 4 × 4 ones in your case, with indices $\alpha\beta$ (antisymmetric, but symmetrized by the action of the metric if one of the two is 0--see below), it is straightforward, albeit tedious, brute force to check that your first and third equations in fact satisfy it. I'd like to speculate that the 3rd eqn form is easier than the first, but I'd be lying... it's subjective.
In any case, this is only one of several bases in that dimension used, with the SO(3) subgroup of rotations represented by sparse antisymmetric matrices and the three boosts in the coset by sparse symmetric ones. In the mixed rotation/boost basis much of the compact structure you are observing is gone. For n=3, your first formula still works for the 3 × 3 generators, but the third one is supplanted by $$(J^{\mu\nu})_{\alpha\beta}=- i \epsilon^{\mu\nu\rho }\epsilon_{\alpha\beta\rho },\\(M^\lambda)_{\alpha\beta}\equiv g^{\lambda\kappa}\epsilon_{\kappa\mu\nu} (J^{\mu\nu})_{\alpha\beta}=-2ig^{\lambda\kappa}\epsilon_{\kappa\alpha\beta} ,$$one rotation (antisymmetric matrix $(M^0)^\alpha_{~~\beta}$); and two boosts ($(M^1)^\alpha_{~~\beta}, ~ (M^2)^\alpha_{~~\beta}$, now symmetric matrices, on account of the uneven action of the metric in raising the last index!). * Intuition? I'm nonlinear at that... If you think of SO(n) instead of its noncompact brother the homogeneous Lorentz group SO(1, n-1), then the metric is the identity, and co/contra technically equivalent, so all is antisymmetric matrices and your 3rd equation is arguably more direct. It is yet another reformulation for your toolbox... Does it expedite Pauli-Lubanski for you?
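As a sanity check, the Euclidean version of the contraction identity above (all indices down, no metric signs; in Minkowski signature an overall minus appears because det g = −1) can be verified numerically for n = 4. A sketch using NumPy:

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation, computed by sorting with transpositions."""
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

# Build the rank-4 permutation symbol epsilon_{mnab}
n = 4
eps = np.zeros((n,) * n)
for p in permutations(range(n)):
    eps[p] = perm_sign(p)

# Contract the last n-2 = 2 index pairs:
# expect (n-2)! (d_mr d_ns - d_ms d_nr) in the Euclidean case
lhs = np.einsum('mnab,rsab->mnrs', eps, eps)
d = np.eye(n)
rhs = 2 * (np.einsum('mr,ns->mnrs', d, d) - np.einsum('ms,nr->mnrs', d, d))
assert np.allclose(lhs, rhs)
```

Raising two indices with the Minkowski metric flips the overall sign, giving the −2 quoted in the answer.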
Let \( A \) be any real square matrix (not necessarily symmetric). Prove that: $$ (x'A x)^2 \leq (x'A A'x)(x'x) $$ The key point in proving this inequality is to recognize that \( x'A A'x \) is the squared norm of \( A'x \). Proof: If \( x=0 \), then the inequality is trivial. Suppose \( x \neq 0 \). \( \frac{x'A x}{x'x} = \frac{(A'x)'x}{\| x \|^2} = (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} \) Because \( \frac{x}{\| x \|} \) is a unit vector, \( A'\frac{x}{\| x \|} \) can be viewed as a scaling and rotation of \( \frac{x}{\| x \|} \) by \( A' \). Thus the norm of \( A'\frac{x}{\| x \|} \) is \( \alpha \) for some \( \alpha \geq 0 \), and \( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|}=\alpha \cos(\beta) \) for some \( -\pi \leq \beta \leq \pi \), where \( \beta \) is the angle between \( \frac{x}{\| x \|} \) and \( A'\frac{x}{\| x \|} \). Now: \( ( \frac{x'A x}{x'x} )^2 \) \(= ( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} )^2 \) \( =\alpha^2 \cos^2(\beta) \) \( \leq \alpha^2 \) \(= (A'\frac{x}{\| x \|})'A'\frac{x}{\| x \|} \) \(= \frac{(A'x)'A'x}{\| x \|^2} \) \(= \frac{x'A A'x}{x'x} \) Finally, multiplying both sides by \( (x'x)^2 \) completes the proof.
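A quick numerical sanity check of the inequality, a sketch over random matrices and vectors (with a small tolerance for floating-point error):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(1, 6))
    A = rng.standard_normal((n, n))      # arbitrary real square matrix
    x = rng.standard_normal(n)
    lhs = (x @ A @ x) ** 2               # (x'Ax)^2
    rhs = (x @ A @ A.T @ x) * (x @ x)    # (x'AA'x)(x'x)
    assert lhs <= rhs + 1e-9
```

Equality holds exactly when \( A'x \) is parallel to \( x \) (i.e. \( \cos^2\beta = 1 \)), e.g. for \( A = I \).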
Line search methods generate the iterates by setting \(x_{k+1} = x_k + \alpha_k d_k \,\) where \(d_k\) is a search direction and \(\alpha_k > 0\) is chosen so that \(f(x_{k+1}) < f(x_k).\) Most line search versions of the basic Newton method generate the direction \(d_k\) by modifying the Hessian matrix \(\nabla^2f(x_k)\) to ensure that the quadratic model \(q_k\) of the function has a unique minimizer. The modified Cholesky decomposition approach adds positive quantities to the diagonal of \(\nabla^2f(x_k)\) during the Cholesky factorization. As a result, a diagonal matrix \(E_k\) with nonnegative diagonal entries is generated such that \(\nabla^2f(x_k) + E_k\) is positive definite. Given this decomposition, the search direction \(d_k\) is obtained by solving \((\nabla^2f(x_k) + E_k)d_k = -\nabla f(x_k)\) After \(d_k\) is found, a line search procedure is used to choose an \(\alpha_k > 0\) that approximately minimizes \(f\) along the ray \(\{x_k + \alpha d_k : \alpha > 0\}\). The following Newton codes use line-search methods: GAUSS, NAG Fortran Library, NAG C Library, and OPTIMA. The algorithms used in these codes for determining \(\alpha_k\) rely on quadratic or cubic interpolation of the univariate function \(\phi(\alpha) = f(x_k + \alpha d_k)\) in their search for a suitable \(\alpha_k\). An elegant and practical criterion for a suitable \(\alpha_k\) is to require \(\alpha_k\) to satisfy the sufficient decrease condition: \(f(x_k + \alpha_k d_k) \leq f(x_k) + \mu \alpha_k \nabla f(x_k)^T d_k\) and the curvature condition: \(| \nabla f(x_k + \alpha_k d_k)^T d_k| \leq \eta | \nabla f(x_k)^T d_k| \) where \(\mu\) and \(\eta\) are two constants with \(0 < \mu < \eta < 1\). The sufficient decrease condition guarantees, in particular, that \(f(x_{k+1}) < f(x_k)\), while the curvature condition requires that \(\alpha_k\) be not too far from a minimizer of \(\phi\).
Requiring an accurate minimizer is generally wasteful of function and gradient evaluations, so codes typically use \(\mu=0.001\) and \(\eta=0.9\) in these conditions.
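As an illustration (not taken from any of the codes listed above, and simplified to enforce only the sufficient decrease condition, not the curvature condition), a minimal backtracking line search might look like this; all names are mine:

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, d, mu=1e-3, shrink=0.5, alpha=1.0):
    """Return a step length satisfying the sufficient decrease condition
    f(x + a d) <= f(x) + mu * a * grad_f(x)' d, by halving alpha."""
    fx, slope = f(x), grad_f(x) @ d
    assert slope < 0, "d must be a descent direction"
    while f(x + alpha * d) > fx + mu * alpha * slope:
        alpha *= shrink
    return alpha

# Example: one step on f(x) = x1^2 + 10 x2^2 along steepest descent
f = lambda x: x[0] ** 2 + 10 * x[1] ** 2
g = lambda x: np.array([2 * x[0], 20 * x[1]])
x = np.array([1.0, 1.0])
d = -g(x)
a = backtracking_line_search(f, g, x, d)
assert f(x + a * d) < f(x)
```

Production codes replace the simple halving with the quadratic or cubic interpolation of \(\phi\) described above.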
I am currently working on gravitational waves, and as in every lecture on general relativity I derived the symmetric trace-free perturbation of a Minkowski metric with the Lorenz gauge condition, known as the "quadrupole formula": $$h_{ij}^{STF}=\frac{2}{r}\frac{d^{2}I_{ij}^{STF}}{dt^{2}} $$ where $ I_{ij}^{STF}$ is the symmetric trace-free part of the quadrupole moment tensor of the source. (Correct me if this is mistaken or incomplete.) The thing is: how can we calculate the quadrupole moment tensor of an object, and particularly of a compact binary? (one of those which emit the gravitational waves detected by LIGO). I have come across the direct formula: $$I_{ij}=\int d^{3}x \rho x_{i} x_{j}$$ for the quadrupole moment but I don't think we can use that directly, since e.g. we don't have an accurate equation of state for neutron stars or other compact objects. Some people seem to have proceeded differently (and in a more general way, using a multipolar expansion here, eq. 50), but that's very technical. Are there clever tricks to compute this moment? Or at least some (massive?) approximations that would give at least a qualitative yet directly understandable physical result/expression? EDIT: It just struck me that I am not familiar with the concept of quadrupole, so maybe that's my issue here... I am going to do some research and maybe re-edit this post. EDIT 2: I did some research on the quadrupole moment and multipole expansion in general. I think I got the physical meaning of this expansion for a single compact object (e.g. why it is meaningful to proceed that way for the ringdown oscillation of a BH). Still, I could not find anything directly related to my question. Any help would be highly appreciated, even if it is "You have to go brute force for this computation, boy"
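One standard "massive approximation" for the inspiral phase is to ignore the internal structure entirely and treat the binary as two point masses on a Newtonian orbit, so the integral collapses to a sum, $I_{ij} = \sum_a m_a\, x^a_i x^a_j$. A sketch for an equal-mass circular binary (all names and numerical values here are mine and purely illustrative):

```python
import numpy as np

# Equal-mass circular binary treated as two point masses
# (Newtonian orbit, internal structure ignored). Illustrative values only.
m = 1.4 * 1.989e30        # one neutron-star mass, kg
a = 1.0e7                 # orbital radius of each body, m
w = 100.0                 # orbital angular frequency, rad/s

def I(t):
    """Quadrupole moment I_ij = sum_a m_a x_i x_j of the two point masses."""
    x1 = a * np.array([np.cos(w * t), np.sin(w * t), 0.0])
    x2 = -x1              # the companion, diametrically opposite
    return m * (np.outer(x1, x1) + np.outer(x2, x2))

def d2I_STF(t, h=1e-6):
    """Second time derivative (central differences), then the trace removed."""
    M = (I(t + h) - 2.0 * I(t) + I(t - h)) / h ** 2
    return M - np.eye(3) * np.trace(M) / 3.0
```

Here $I_{xx} = m a^2 (1 + \cos 2\omega t)$, so the exact second derivative at $t=0$ is $-4 m a^2 \omega^2$, which the finite difference reproduces; the $\cos 2\omega t$ dependence is also why the waves come out at twice the orbital frequency.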
I like Brilli the ant, but what does she look like? Maybe the participants of Brilliant can contribute images? Note by Arndt Jonasson 6 years, 3 months ago Okay, I did this in under five minutes, so it's a little rusty. But here is my take on Brilli the Ant. Hope you like it! EDIT: I see my Brilli the Ant has become quite popular since the last time I saw her! Thanks to everyone! I've also changed my short biography on my profile and made a special place for Brilli. This is great!! You brought a laugh into the office this early morning :) I decided to add the image to the Sum of 5 squares problem. Five-minute graphic design is my favorite kind! I think this is not the right place to say this, but my query (Changing name of school, placed under the Support section) remains unanswered to date. Please see the discussion. Good creativity :D :D Thanks! Nice one Mursalin! :) Yo Mursalin! GREAT JOB done buddy!! ROFLMAO Mursalin, my friend, why are you so brilliant? My design of Brilli is not even half as good as Mursalin's. Anyway, here it is.
:) LOL Seriously you have a good sense of humor.........I guess she looks like some growing mathematician !!!!! great, funny nice one. :D How about this one: http://www.picgifs.com/graphics/a/ants/graphics-ants-950864.gif or http://www.picgifs.com/graphics/a/ants/graphics-ants-003322.gif Those look good too, thanks for finding them. Sadly I can't easily upload GIFs into the current problems. Good work Mursalin!!! I never thought about this pun (brilliant ---> Brilli ant) lol!!! Good thinking five minutes Great imagination. I DUNNO TOO!!! I find it weird that a troll face is saying "I DUNNO" on a Brilliant forum. $\sum_{i=1}^{\infty} \frac{1}{x^i} = \frac{1}{x-1}$ What's the formula for? The solution? @Tan Li Xuan – Sum of a geometric series. Kinda irrelevant.
There aren't separate parts to the time dilation: a gravitational part and a motion part. The time dilation cannot be divided up like that. There is just the proper time along some curve. In this case that curve is a circular orbit. @JohnRennie That certainly makes sense, but it seems to disagree with that text from Wikipedia, and the graph, which claims that at a certain altitude the gravitational time dilation exactly cancels the velocity time dilation, so that the time rate is in sync with time at the Earth's surface. I don't see how we can get that cancellation effect from your formula. We know that for GPS we need to make a correction for both general and special relativity: general relativity predicts that clocks go slower deeper in a gravitational field (the clock aboard a GPS satellite runs faster than a clock down on Earth), while special relativity predicts that a movin... Consider a mass on Earth. The time dilation on the surface of Earth is$$T' = T \sqrt{1 - \frac{2GM}{rc^2}}$$Now if the mass is moving around the Earth at a velocity v w.r.t. Earth, what will be the time dilation within the mass as seen from the Earth? The object could be in free fall, orbit o... @JohnRennie I agree with that. I've just been skimming through the PDF by Neil Ashby, linked in this answer physics.stackexchange.com/a/390402/123208 It looks quite good, and even though it's fairly short it has derivations for most of the stuff it uses. I gave a partial answer to a homework question here and was firmly told in comments that even partial answers to homework questions are not allowed. Is that correct? As far as I can see, the guidelines on homework questions only prohibit complete answers. Follow-up question: if any answer at ... Single-photon interferometric experiments are a thing (indeed, people had to learn how to do those before they could do all the spiffy delayed-choice experiments and Bell's inequality experiments that drive discussions of entanglement).
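To see where the cancellation comes from, take the combined rate factor of a clock on a circular orbit, $\sqrt{1 - 2GM/(rc^2) - v^2/c^2}$ with $v^2 = GM/r$, i.e. $\sqrt{1 - 3GM/(rc^2)}$, and set it equal to the rate factor $\sqrt{1 - 2GM/(Rc^2)}$ of a surface clock (Earth's rotation neglected in this sketch). The factors of $GM/c^2$ cancel and the crossover sits at $r = 3R/2$:

```python
# Altitude at which a circular-orbit clock ticks at the same rate as a
# surface clock (v^2 = GM/r assumed; Earth's rotation neglected).
R = 6.371e6                    # Earth radius, m

# Equal rate factors: 1 - 3GM/(r c^2) = 1 - 2GM/(R c^2)  =>  r = 1.5 R
r = 1.5 * R
altitude_km = (r - R) / 1e3
print(f"cancellation altitude ≈ {altitude_km:.0f} km")  # roughly 3200 km
```

That is about half an Earth radius of altitude, consistent with the crossover shown in the Wikipedia graph mentioned above.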
Townsend's QM book has a bunch of references in the first few chapters but my copy is in another state. — dmckee ♦ 46 mins ago @dmckee gaseous, I imagine? I can't quite see how you liquefy a book, but then again, you never know I am not familiar with any notation used in physics. The paper "Non-hermitian random matrix theory: Method of hermitian reduction" by Joshua Feinberg and A. Zee (Nuclear Physics B, 1997) states: A basic tool in studying hermitian random matrices $\phi$ (henceforth all matrices will be take...
$1^0 = 1\to 1 =1$ $x^1=x\to x=x\;\forall x$. $9^2 = 81\to 8+1=9$ $8^3=512\to 5+1+2=8$. $7^4=2401\to 2+4+0+1=7$ $46^5 = 205962976\to 2+0+5+9+6+2+9+7+6=46$ $64^6 = 68719476736\to 6+8+7+1+9+4+7+6+7+3+6 = 64$ $68^7= 6722988818432\to 6+7+2+2+9+8+8+8+1+8+4+3+2 = 68$ $54^8 = 72301961339136\to 7+2+3+0+1+9+6+1+3+3+9+1+3+6=54$ $71^9 = 45848500718449031$ $\downarrow$ $4+5+8+4+8+5+0+0+7+1+8+4+4+9+0+3+1 = 71$ Conjecture: Given a positive integer $b$, there exists a positive integer $a$ such that the digit sum of $a^b$ is equal to $a$. Can this be proven? I don't know how to prove it; it was about 3:45 in the morning and I couldn't go to sleep, so I just went on my calculator and messed around because I was bored. That's when I noticed this cool property and decided to share it here. It is now 4:35am so... I gotta go to bed. I'll see you in some hours and hopefully gather the time to work on this. Sorry about that! Oh, incidentally, I also noticed that the digit sum of $29^5$ is $23$ and the digit sum of $23^5$ is $29$, so... there are cycles here. Same for $31$ and $34$. Also, the digit sum of $13^2$ is $16$ and the digit sum of $16^2$ is $13$. These cycles seem to only have two numbers involved, but I think regarding the seventh power, there are more than two involved (start with $72^7$ I think), however there is also a cycle between two numbers of seventh powers (between $44$ and $62$). Does this help? I don't know. I have to go to bed. Good night! Thank you in advance.
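The observation is easy to explore by machine. Note that $a=1$ satisfies the property trivially for every $b$ (the digit sum of $1^b$ is $1$), so the interesting witnesses are $a \ge 2$. A short search sketch:

```python
def digit_sum(n: int) -> int:
    return sum(int(c) for c in str(n))

def witness(b: int, limit: int = 1000):
    """Smallest a with 2 <= a <= limit and digit_sum(a**b) == a, else None."""
    for a in range(2, limit + 1):
        if digit_sum(a ** b) == a:
            return a
    return None

# Check some examples from the post, and the 5th-power cycle 23 <-> 29
assert digit_sum(9 ** 2) == 9 and digit_sum(46 ** 5) == 46
assert digit_sum(29 ** 5) == 23 and digit_sum(23 ** 5) == 29
print([witness(b) for b in range(2, 10)])
```

The smallest witnesses need not match the post's examples; e.g. $28^5 = 17210368$ has digit sum $28$, so a smaller fifth-power witness than $46$ exists.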
Let me start by saying that there isn't necessarily a very good selection of answers to this question. I am teaching some very basic mathematics (see below for module content) to a group of Computing Students who will not fail this class. I am trying to teach the material as well as I can. For example, all of our arithmetic of fractions will be born out of the definition that $\frac{1}{n}$ is a number that when multiplied by $n\neq 0$ gives one... and one is the number that when multiplied by any number gives the same number: $1\times m=m$. So I am happy with that side of things. I am just wondering is there any point in adding in applications to computing (specifically computing)? Are there any good examples? I do not have a computing background and am happy explaining to them why they should be adept at all of this stuff. Three good examples I have are units of data for conversion of units, a crude way of drawing lines on a pixel screen for coordinate geometry and Moore's Law for curve fitting. Module Outline The Fundamentals of Arithmetic with Applications --- The arithmetic of fractions. Decimal notation and calculations. Instruction on how to use a calculator. Ratio and proportion. Percentages. Tax calculations, simple and compound interest. Mensuration to include problems involving basic trigonometry. Approximation, error estimation, absolute, relative and relative percentage error. The calculation of statistical measures of location and dispersion to include arithmetic mean, median, mode, range, quartiles and standard deviation. Basic Algebra --- The laws of algebra expressed literally and illustrated both numerically and geometrically. Algebraic manipulation and simplification to include the factorisation of reducible quadratics. Transposition of formulae. Function notation with particular emphasis on functions of one variable. Indices and Logarithms --- Indices with a discussion of scientific notation and orders of magnitude. Conversion of units. 
Logarithms and their use in the solution of indicial (exponential) equations. Discussion of the number e and natural logarithms. Graphs --- Graphs of quantities which are in direct proportion and in inverse proportion. Graphs of simple linear, exponential and logarithmic functions. Reduction of non-linear relations to linear form to allow for the estimation of parameters.
Your initial statement, that $X$ admits a complex structure if and only if $N_J=0$, is not correct. I think you mean that $J$ comes from a complex structure if and only if $N_J=0$; that is correct, but in principle $X$ could be a complex manifold and $J$ a non-integrable almost complex structure. But that's just a minor mis-statement; I know it's not what you meant. So let me make a suggestion that could be more to the point (though I don't have an answer to your question). The vanishing of $N_J$ is equivalent to saying that the associated $\bar \partial$ operator satisfies $\bar \partial^2 = 0$. This is a condition that gives the existence of a lot of holomorphic functions in a neighborhood (under sufficient regularity assumptions). When $\bar \partial ^2 \neq 0$, the system of conditions for finding local holomorphic functions is too overdetermined, and in general you will have no holomorphic functions at all. Constructing these holomorphic functions, when $\bar \partial ^2 =0$, is a PDE problem called the Newlander-Nirenberg theorem. Perhaps if you follow the proof of that theorem, you could tell if the vanishing of $\bar \partial ^2$ (or maybe $N_J$) along certain directions related to your divisor could allow you to construct the functions you seek. There is yet another statement equivalent to integrability; it is the following: The almost complex structure $J$ has eigenvalues $\pm \sqrt{-1}$, so the complexification $T_X \otimes \mathbb{C}$ of $T_X$ splits as a direct sum $T^{1,0}_X\oplus T^{0,1}_X$ of eigenspaces with eigenvalues $\sqrt{-1}$ and $-\sqrt{-1}$ respectively. Integrability means that $T^{1,0}_X$ is closed under (the complexification of the) Lie bracket. If you assume the underlying structure is real analytic, then this integrability condition can be used with the complex version of Frobenius' integrability condition to give you (locally) a complex manifold in the complexified tangent bundle that projects onto the base. This gives the complex structure of $X$.
Perhaps this picture could help you construct the divisor from holomorphic functions in the real analytic case; then if you get a good answer in this case, you can follow the method of Newlander-Nirenberg to adapt things in the smooth case.
The area of the whole square is 36 cm². The area of one quadrant = \(0.25 \pi r^2= 0.25*\pi*(2\sqrt3)^2=3\pi\;\;cm^2\) Let the bottom of this diagram be the x-axis, with (0,0) at the bottom left corner; then the equation of the bottom left quadrant is \(x^2+y^2=12\\ y^2=12-x^2\\ y=\sqrt{12-x^2}\) The area of the blue/green overlap is \(A=\displaystyle 2\int_3^{2\sqrt3}\;\;\sqrt{12-x^2}\;\;dx\\ =2\left[ 0.5x\sqrt{12-x^2}+6 * asin(\frac{x}{2\sqrt3} ) \right]_3^{2\sqrt3} \qquad \text{Wolfram|Alpha}\\ =2\left[ 0.5(2\sqrt3)\sqrt{0}+6 * asin(1 ) \right]-2\left[ 1.5\sqrt{3}+6 * asin(\frac{3}{2\sqrt3} ) \right]\\ =2\left[ 6 * \frac{\pi}{2} \right]-2\left[ 1.5\sqrt{3}+6 * asin(\frac{\sqrt3}{2} ) \right]\\ =6\pi-2\left[ 1.5\sqrt{3}+6 * \frac{\pi}{3} \right]\\ =6\pi-2\left[ 1.5\sqrt{3}+2\pi \right]\\ =6\pi-3\sqrt{3}-4\pi \\ =2\pi-3\sqrt3\) So the desired area in the middle is \(middle \;area =36-[\pi*(2\sqrt3)^2-4*(2\pi-3\sqrt3)]\\ middle \;area =36-[12\pi-8\pi+12\sqrt3]\\ middle \;area =36-[4\pi+12\sqrt3]\\ middle \;area =36-12\sqrt3-4\pi \;\;cm^2\\ middle \;area \approx 2.65 cm^2\\\) Thanks, Melody... Here's another way without using Calculus Positioning the circles as Melody did : The intersection of two circles at the left side of the figure occurs at A = (sqrt(3) , 3) Similarly, the intersection of the two circles at the bottom occurs at ( 3, sqrt (3) ) So the distance between these two points is AD = sqrt [ 24 - 12 sqrt (3) ] cm Therefore...using symmetry....we can construct a square AFGD of this side with an area of [ 24 - 12sqrt(3)] cm^2 (1) Looking at the segment AE...
the slope of this segment = 3/sqrt (3) = sqrt (3) Looking at the slope of DE....the slope of this segment is sqrt (3) / 3 = 1 / sqrt (3) So arctan (sqrt (3) ) - arctan (1/sqrt(3) ) = 60° - 30° = angle AED = 30° So the area of sector AED is (1/2)(12)(pi/6) = pi cm^2 And the area of triangle AED = (1/2)(12)sin(30) = 3 cm^2 So....the area between the sector and the triangle is [pi - 3] cm^2 And using symmetry....4 of these areas = [4pi -12] cm^2 (2) So....the shaded area is (1) - (2) = [ 24 - 12sqrt (3)] - [ 4pi - 12 ] cm^2 = [ 36 - 12sqrt (3) - 4pi] cm^2 (exact) ≈ 2.65 cm^2 (rounded)
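Both derivations can be checked numerically; this sketch recomputes the lens (overlap) area with a crude midpoint sum and then the middle area:

```python
from math import pi, sqrt

r = 2 * sqrt(3)                     # quadrant radius, cm
lens = 2 * pi - 3 * sqrt(3)         # overlap of two adjacent quadrants

# Crude midpoint-rule check of  2 * integral from 3 to 2*sqrt(3) of sqrt(12 - x^2) dx
N = 100000
a, b = 3.0, 2 * sqrt(3)
h = (b - a) / N
approx = sum(2 * sqrt(max(12 - (a + (i + 0.5) * h) ** 2, 0.0)) * h
             for i in range(N))
assert abs(approx - lens) < 1e-5

middle = 36 - (pi * r ** 2 - 4 * lens)   # square minus (4 quadrants - overlaps)
print(round(middle, 2))                  # 2.65
```

The closed form \(36 - 12\sqrt3 - 4\pi \approx 2.649\) agrees with both answers above.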
Blow-up theorems of Fujita type for a semilinear parabolic equation with a gradient term. Advances in Difference Equations, volume 2018, Article number: 128 (2018) Abstract This paper deals with the existence and non-existence of the global solutions to the Cauchy problem of a semilinear parabolic equation with a gradient term. The blow-up theorems of Fujita type are established and the critical Fujita exponent is determined by the behavior at infinity of the three variable coefficients associated to the gradient term and the diffusion–reaction terms, respectively, as well as by the spatial dimension. Introduction In the paper, we investigate the blow-up theorems of Fujita type for the following Cauchy problem: where \(p>1\), \(-2<\lambda_{1}\leq\lambda_{2}\), \(0\leq u_{0}\in C_{0}(\mathbb {R}^{n})\) and \(b\in C^{1}([0,+\infty))\) satisfies and in the case that \(-n-\lambda_{1}<\kappa\leq+\infty\), b also satisfies The critical exponents for nonlinear diffusion equations have attracted extensive attention since 1966, when Fujita [1] proved that, for the Cauchy problem of Eq. (1.1) with \(b\equiv0\) and \(\lambda_{1}=\lambda_{2}=0\), the nontrivial nonnegative solution blows up in a finite time if \(1< p< p_{c}=1+2/n\), whereas it exists globally for small initial data and blows up in a finite time for large ones if \(p>p_{c}=1+2/n\). This result reveals that the exponent p of the nonlinear reaction plays a remarkable role in affecting the properties of solutions. We call \(p_{c}\) with the above properties the critical Fujita exponent and the similar result a blow-up theorem of Fujita type. There have been many kinds of extensions of Fujita’s results since then, such as different types of parabolic equations and systems with or without degeneracies or singularities, various geometries of domains, different nonlinear reactions or nonhomogeneous boundary sources, etc.
One can see the survey papers [2, 3] and the references therein, and more recent work [4–18]. For the Cauchy problem of with \(\mathbf{b}_{0}\) being a nonzero constant vector, Aguirre and Escobedo [19] showed that As to Neumann exterior problems, Levine and Zhang [20] investigated the critical Fujita exponent of the homogeneous Neumann exterior problem of (1.1) with \(b\equiv0\) and \(\lambda_{1}=\lambda_{2}=0\), and proved that \(p_{c}\) is still \(1+2/n\). In [21], Zheng and Wang considered the homogeneous Neumann exterior problem of (1.1) with and formulated the critical Fujita exponent as Moreover, the general case of b is considered in [8] if \(0\le \lambda_{1}\leq\lambda_{2}\le p\lambda_{1}+(p-1)n\) and \(\kappa\ge0\). That is to say, if \(1< p< p_{c}\), there does not exist any nontrivial nonnegative global solution, whereas if \(p>p_{c}\), there exist both nontrivial nonnegative global and blow-up solutions. The technique used in this paper is mainly inspired by [11, 18, 21, 22]. To prove the blow-up of solutions, we use precise energy integral estimates instead of constructing subsolutions. For the global existence of nontrivial solutions, we construct a nontrivial global supersolution. It should be noted that we have to seek a complicated supersolution and do some precise calculations in order to overcome the difficulty arising from the non-self-similar construction of (1.1). Furthermore, the properties of such models proved in the paper provide a theoretical foundation for numerical simulation involving difference schemes. The paper is organized as follows. Some preliminaries and main results are introduced in Sect. 2, such as the local well-posedness of the problem (1.1), (1.2) and some auxiliary lemmas to be used later, as well as the blow-up theorems of Fujita type. The main results are proved in Sect. 3. Preliminaries and main results Definition 2.1 and Definition 2.2 Otherwise, u is called a global solution.
For \(0\le u_{0}\in C_{0}(\mathbb {R}^{n})\) and \(b\in C^{1}([0,+\infty))\) and \(p>1\), one can establish the existence, uniqueness and the comparison principle for solutions to the problem (1.1), (1.2) locally in time by use of the classical theory on parabolic equations (see, e.g., [23]). Theorem 2.1 Assume that \(b\in C^{1}([0,+\infty))\) satisfies (1.3) and (1.4) with \(-\infty\le\kappa<+\infty\). If \(1< p< p_{c}\) with \(p_{c}\) given by (1.5), then, for any nontrivial \(0\le u_{0}\in C_{0}(\mathbb {R}^{n})\), the solution to the problem (1.1), (1.2) must blow up in a finite time. Theorem 2.2 Assume that \(b\in C^{1}([0,+\infty))\) satisfies (1.3) and (1.4) with \(-n<\kappa\le+\infty\). If \(p>p_{c}\) with \(p_{c}\) given by (1.5), then there exist both nontrivial nonnegative global and blow-up solutions to the problem (1.1), (1.2). Proofs of main results Lemma 3.1 with Then there exist three numbers \(R_{0}>0\), \(\delta>1\) and \(M_{0}>0\) depending only on n and b, such that, for any \(R>R_{0}\), in the distribution sense, where \(B_{r}\) denotes the open ball in \(\mathbb {R}^{n}\) with radius r and centered at the origin. Remark 3.1 For the case \(\kappa=+\infty\), one can prove that (3.1) holds for each fixed \(R>0\), but \(\delta>1\) and \(M_{0}>0\) depend also on R. Proof of Theorem 2.1 Let \(\eta_{R}\), h, \(R_{0}\), δ and \(M_{0}\) be introduced in Lemma 3.1. It follows from \(-\infty\le\kappa<+\infty\) and \(1< p< p_{c}\) that Fix \(\tilde{\kappa}>\kappa\) to satisfy which, together with (1.3), shows that there exists \(R_{1}>1\) such that For any \(R>R_{1}\), one can get and where Therefore, where \(\chi_{[0,\delta R]}\) is the characteristic function of the interval \([0,\delta R]\), while \(K>0\) depends only on n, b, \(R_{1}\), δ and κ̃.
Let u be the solution to the problem (1.1), (1.2), and denote For any \(R>\max\{R_{0}, R_{1}\}\), Lemma 3.1 implies The Hölder inequality and (3.3) yield with \(M_{2}>0\) depending only on n, b, \(R_{1}\), δ and κ̃, and Note that (3.2) implies while \(w_{R}(0)\) is nondecreasing with respect to \(R\in(0,+\infty)\) and Therefore, there exists \(R_{2}>0\) such that, for any \(R>R_{2}\), Since \(p>1\), there exists \(T_{*}>0\) such that It follows from \(\operatorname{supp}\eta_{R}( \vert x \vert )=\overline{B}_{\delta R}\) that i.e., u blows up in a finite time. □ with and \(\tau>0\) will be determined. If \(U\in C^{1,1}([0,+\infty))\) with \(U'\leq0\) in \((0,+\infty)\) satisfies Lemma 3.2 with \(A\in C^{1,1}([0,+\infty))\) satisfies \(A(0)=0\) and where \(0< l<1\) will be determined, with \(\kappa_{1}\), \(\kappa_{2}\) satisfying Proof The choice of \(\kappa_{1}\), \(\kappa_{2}\) leads to \(A_{2}<\beta<A_{1}\). Fix Additionally, (1.3) allows us to choose \(\tau>0\) sufficiently large such that Due to \(-2<\lambda_{1}\leq\lambda_{2}\), \(p>1\) and the definition of the function \(A(r)\), Let \(\varepsilon>0\) be sufficiently small such that Proof of Theorem 2.2 The comparison principle and Lemma 3.2 show that there exists a nontrivial global solution to the problem (1.1), (1.2). We will show that the problem also admits a blow-up solution. Fix \(R>R_{0}\). Assume that u is a solution to the problem (1.1), (1.2). Lemma 3.1 and Remark 3.1 imply that which implies If \(u_{0}\) is so large that then (3.19) leads to The same argument as in the proof of Theorem 2.1 shows that u must blow up in a finite time. □ Remark 3.2 For the critical case \(p=p_{c}\) with \(-n-\lambda_{1}<\kappa<+\infty\), we need an additional condition (see [18]) that Similar to the proof in the critical case in [18, 21], one can show the blow-up of the solutions to the problem (1.1), (1.2) for the critical case \(p=p_{c}\) with \(-n-\lambda _{1}<\kappa<+\infty\) if (3.20) holds. References 1.
Fujita, H.: On the blowing up of solutions of the Cauchy problem for \(u_{t}=\Delta u+u^{1+\alpha}\). J. Fac. Sci. Univ. Tokyo Sect. I 13, 109–124 (1966) 2. Deng, K., Levine, H.: The role of critical exponents in blow-up theorems: the sequel. J. Math. Anal. Appl. 243(1), 85–126 (2000) 3. Levine, H.: The role of critical exponents in blow-up theorems. SIAM Rev. 32(2), 262–288 (1990) 4. Andreucci, D., Cirmi, G., Leonardi, S., Tedeev, A.: Large time behavior of solutions to the Neumann problem for a quasilinear second order degenerate parabolic equation in domains with noncompact boundary. J. Differ. Equ. 174, 253–288 (2001) 5. Fira, M., Kawohl, B.: Large time behavior of solutions to a quasilinear parabolic equation with a nonlinear boundary condition. Adv. Math. Sci. Appl. 11(1), 113–126 (2001) 6. Guo, W., Wang, X., Zhou, M.: Asymptotic behavior of solutions to a class of semilinear parabolic equations. Bound. Value Probl. 2016, 68 (2016) 7. Meier, P.: On the critical exponent for reaction–diffusion equations. Arch. Ration. Mech. Anal. 109(1), 63–71 (1990) 8. Nie, Y., Zhou, M., Zhou, Q., Na, Y.: Fujita type theorems for a class of semilinear parabolic equations with a gradient term. J. Nonlinear Sci. Appl. 10(4), 1603–1612 (2017) 9. Qi, Y., Wang, M.: Critical exponents of quasilinear parabolic equations. J. Math. Anal. Appl. 267, 264–280 (2002) 10. Wang, C., Zheng, S., Wang, Z.: Critical Fujita exponents for a class of quasilinear equations with homogeneous Neumann boundary data. Nonlinearity 20(6), 1343–1359 (2007) 11. Wang, C., Zheng, S.: Critical Fujita exponents of degenerate and singular parabolic equations. Proc. R. Soc. Edinb. A 136(2), 415–430 (2006) 12. Wang, C., Zheng, S.: Fujita-type theorems for a class of nonlinear diffusion equations. Differ. Integral Equ. 26(5–6), 555–570 (2013) 13. Wang, Z., Yin, J., Wang, C.: Critical exponents of the non-Newtonian polytropic filtration equation with nonlinear boundary condition. Appl. Math. Lett. 
20(2), 142–147 (2007) 14. Winkler, M.: A critical exponent in a degenerate parabolic equation. Math. Methods Appl. Sci. 25, 911–925 (2002) 15. Zhang, Q.: A general blow-up result on nonlinear boundary-value problems on exterior domains. Proc. R. Soc. Edinb. A 131(2), 451–475 (2001) 16. Zheng, S., Song, X., Jiang, Z.: Critical Fujita exponents for degenerate parabolic equations coupled via nonlinear boundary flux. J. Math. Anal. Appl. 298, 308–324 (2004) 17. Zhou, M., Li, H., Guo, W., Zhou, X.: Critical Fujita exponents to a class of non-Newtonian filtration equations with fast diffusion. Bound. Value Probl. 2016, 146 (2016) 18. Zhou, Q., Nie, Y., Han, X.: Large time behavior of solutions to semilinear parabolic equations with gradient. J. Dyn. Control Syst. 22(1), 191–205 (2016) 19. Aguirre, J., Escobedo, M.: On the blow-up of solutions of a convective reaction diffusion equation. Proc. R. Soc. Edinb. A 123(3), 433–460 (1993) 20. Levine, H., Zhang, Q.: The critical Fujita number for a semilinear heat equation in exterior domains with homogeneous Neumann boundary values. Proc. R. Soc. Edinb. A 130(3), 591–602 (2000) 21. Zheng, S., Wang, C.: Large time behaviour of solutions to a class of quasilinear parabolic equations with convection terms. Nonlinearity 21(9), 2179–2200 (2008) 22. Qi, Y.: The critical exponents of parabolic equations and blow-up in \(\mathbb {R}^{n}\). Proc. R. Soc. Edinb. A 128(1), 123–136 (1998) 23. Friedman, A.: Partial Differential Equations of Parabolic Type. Prentice Hall, New York (1964) Acknowledgements This work is supported by the National Natural Science Foundation of China (11571137 and 11601182). Ethics declarations Competing interests The authors declare that they have no competing interests.
We examine inkdots placed on the input string as a way of providing advice to finite automata, and establish the relations between this model and the previously studied models of advised finite automata. The existence of an infinite hierarchy of classes of languages that can be recognized with the help of increasing numbers of inkdots as advice is shown. The effects of different forms of advice on the succinctness of the advised machines are examined. We also study randomly placed inkdots as advice to probabilistic finite automata, and demonstrate the superiority of this model over its deterministic version. Even very slowly growing amounts of space can become a resource of meaningful use if the underlying advised model is extended with access to secondary memory, while it is famously known that such small amounts of space are not useful for unadvised one-way Turing machines. Section: Automata, Logic and Semantics By a tight tour in a $k$-uniform hypergraph $H$ we mean any sequence of its vertices $(w_0,w_1,\ldots,w_{s-1})$ such that for all $i=0,\ldots,s-1$ the set $e_i=\{w_i,w_{i+1}\ldots,w_{i+k-1}\}$ is an edge of $H$ (where operations on indices are computed modulo $s$) and the sets $e_i$ for $i=0,\ldots,s-1$ are pairwise different. A tight tour in $H$ is a tight Euler tour if it contains all edges of $H$. We prove that the problem of deciding if a given $3$-uniform hypergraph has a tight Euler tour is NP-complete, and that it cannot be solved in time $2^{o(m)}$ (where $m$ is the number of edges in the input hypergraph), unless the ETH fails. We also present an exact exponential algorithm for the problem, whose time complexity matches this lower bound, and the space complexity is polynomial. In fact, this algorithm solves a more general problem of computing the number of tight Euler tours in a given uniform hypergraph. 
Section: Analysis of Algorithms The PASEP (Partially Asymmetric Simple Exclusion Process) is a probabilistic model of moving particles, which is of great interest in combinatorics, since its partition function turns out to count certain tableaux. These tableaux have several variants such as permutation tableaux, alternative tableaux, tree-like tableaux, Dyck tableaux, etc. We introduce in this context certain excursions in Young's lattice, which we call stammering tableaux (by analogy with oscillating tableaux, vacillating tableaux, hesitating tableaux). Some natural bijections make a link with rook placements in a double staircase, chains of Dyck paths obtained by successive addition of ribbons, Laguerre histories, Dyck tableaux, etc. Section: Combinatorics We discuss cellular automata over arbitrary finitely generated groups. We call a cellular automaton post-surjective if for any pair of asymptotic configurations, every pre-image of one is asymptotic to a pre-image of the other. The well-known dual concept is pre-injectivity: a cellular automaton is pre-injective if distinct asymptotic configurations have distinct images. We prove that pre-injective, post-surjective cellular automata are reversible. Moreover, on sofic groups, post-surjectivity alone implies reversibility. We also prove that reversible cellular automata over arbitrary groups are balanced, that is, they preserve the uniform measure on the configuration space. Section: Automata, Logic and Semantics An irreversible $k$-threshold process (also known as $k$-neighbor bootstrap percolation) is a dynamic process on a graph where vertices change color from white to black if they have at least $k$ black neighbors. An irreversible $k$-conversion set of a graph $G$ is a subset $S$ of vertices of $G$ such that the irreversible $k$-threshold process starting with $S$ black eventually changes all vertices of $G$ to black.
We show that deciding the existence of an irreversible 2-conversion set of a given size is NP-complete, even for graphs of maximum degree 4, which answers a question of Dreyer and Roberts. In contrast, we show that for graphs of maximum degree 3, the minimum size of an irreversible 2-conversion set can be computed in polynomial time. Moreover, we find an optimal irreversible 3-conversion set for the toroidal grid, simplifying constructions of Pike and Zou. Section: Graph Theory Tree-like tableaux are certain fillings of Ferrers diagrams originally introduced by Aval et al., which are in simple bijection with permutation tableaux coming from Postnikov's study of the totally nonnegative Grassmannian and with alternative tableaux introduced by Viennot. In this paper, we confirm two conjectures of Gao et al. on the refined enumeration of non-occupied corners in tree-like tableaux and symmetric tree-like tableaux via intermediate structures of alternative tableaux, linked partitions, type $B$ alternative tableaux and type $B$ linked partitions. Section: Combinatorics In this work, we study conditions for the existence of length-constrained path-cycle decompositions, that is, partitions of the edge set of a graph into paths and cycles of a given minimum length. Our main contribution is the characterization of the class of all triangle-free graphs with odd distance at least $3$ that admit a path-cycle decomposition with elements of length at least $4$. As a consequence, it follows that Gallai's conjecture on path decomposition holds in a broad class of sparse graphs. Section: Graph Theory A pair of non-adjacent edges is said to be separated in a circular ordering of vertices if the endpoints of the two edges do not alternate in the ordering. The circular separation dimension of a graph $G$, denoted by $\pi^\circ(G)$, is the minimum number of circular orderings of the vertices of $G$ such that every pair of non-adjacent edges is separated in at least one of the circular orderings.
This notion was introduced by Loeb and West in a recent paper. In this article, we consider two subclasses of planar graphs, namely $2$-outerplanar graphs and series-parallel graphs. A $2$-outerplanar graph has a planar embedding such that the subgraph obtained by removal of the vertices of the exterior face is outerplanar. We prove that if $G$ is $2$-outerplanar then $\pi^\circ(G) = 2$. We also prove that if $G$ is a series-parallel graph then $\pi^\circ(G) \leq 2$. Section: Graph Theory Let $G$ be a simple graph with a perfect matching. Deng and Zhang showed that the maximum anti-forcing number of $G$ is no more than the cyclomatic number. In this paper, we obtain a new upper bound on the maximum anti-forcing number of $G$ and investigate the extremal graphs. If $G$ has a perfect matching $M$ whose anti-forcing number attains this upper bound, then we say $G$ is an extremal graph and $M$ is a nice perfect matching. We obtain an equivalent condition for the nice perfect matchings of $G$ and establish a one-to-one correspondence between the nice perfect matchings and the edge-involutions of $G$, which are the automorphisms $\alpha$ of order two such that $v$ and $\alpha(v)$ are adjacent for every vertex $v$. We demonstrate that all extremal graphs can be constructed from $K_2$ by applying two expansion operations, and that $G$ is extremal if and only if one factor in a Cartesian decomposition of $G$ is extremal. As examples, we have that all perfect matchings of […] Section: Graph Theory Let $[K_n,f,\pi]$ be the (global) SDS map of a sequential dynamical system (SDS) defined over the complete graph $K_n$ using the update order $\pi\in S_n$, in which all vertex functions are equal to the same function $f\colon\mathbb F_2^n\to\mathbb F_2^n$. Let $\eta_n$ denote the maximum number of periodic orbits of period $2$ that an SDS map of the form $[K_n,f,\pi]$ can have.
We show that $\eta_n$ is equal to the maximum number of codewords in a binary code of length $n-1$ with minimum distance at least $3$. This result is significant because it represents the first interpretation of this fascinating coding-theoretic sequence other than its original definition. Section: Combinatorics We describe a new type of sufficient condition for a balanced bipartite digraph to be hamiltonian. Let $D$ be a balanced bipartite digraph and $x,y$ be distinct vertices in $D$. $\{x, y\}$ dominates a vertex $z$ if $x\rightarrow z$ and $y\rightarrow z$; in this case, we call the pair $\{x, y\}$ dominating. In this paper, we prove that a strong balanced bipartite digraph $D$ on $2a$ vertices contains a hamiltonian cycle if, for every dominating pair of vertices $\{x, y\}$, either $d(x)\ge 2a-1$ and $d(y)\ge a+1$ or $d(x)\ge a+1$ and $d(y)\ge 2a-1$. The lower bound in the result is sharp. Section: Graph Theory In this paper we introduce and study a new family of combinatorial simplicial complexes, which we call immediate snapshot complexes. Our construction and terminology are strongly motivated by theoretical distributed computing, as these complexes are combinatorial models of the standard protocol complexes associated to the immediate snapshot read/write shared-memory communication model. In order to define the immediate snapshot complexes we need a new combinatorial object, which we call a witness structure. These objects index the simplices in the immediate snapshot complexes, while a special operation on them, called ghosting, describes the combinatorics of taking the simplicial boundary. We develop the theory of witness structures and use it to prove several combinatorial as well as topological properties of the immediate snapshot complexes. Section: Distributed Computing and Networking A binary triangle of size $n$ is a triangle of zeroes and ones, with $n$ rows, built with the same local rule as the standard Pascal triangle modulo $2$.
A binary triangle is said to be balanced if the absolute difference between the numbers of zeroes and ones that constitute this triangle is at most $1$. In this paper, the existence of balanced binary triangles of size $n$, for all positive integers $n$, is shown. This is achieved by considering periodic balanced binary triangles, i.e., balanced binary triangles in which each row, column, or diagonal is a periodic sequence. Section: Combinatorics We deal with the problem of maintaining a shortest-path tree rooted at some process r in a network that may be disconnected after topological changes. The goal is then to maintain a shortest-path tree rooted at r in its connected component, V_r, and make all processes of the other components detect that r is not part of their connected component. We propose, in the composite atomicity model, a silent self-stabilizing algorithm for this problem working in semi-anonymous networks, where edges have strictly positive weights. This algorithm does not require any a priori knowledge about global parameters of the network. We prove its correctness assuming the distributed unfair daemon, the most general daemon. Its stabilization time in rounds is at most 3nmax+D, where nmax is the maximum number of non-root processes in a connected component and D is the hop-diameter of V_r. Furthermore, if we additionally assume that edge weights are positive integers, then it stabilizes in a polynomial […] Section: Distributed Computing and Networking Fix an integer partition lambda that has no more than n parts. Let beta be a weakly increasing n-tuple with entries from {1,...,n}. The flagged Schur function indexed by lambda and beta is a polynomial generating function in x_1, ..., x_n for certain semistandard tableaux of shape lambda. Let pi be an n-permutation. The type A Demazure character (key polynomial, Demazure polynomial) indexed by lambda and pi is another such polynomial generating function.
Reiner and Shimozono and then Postnikov and Stanley studied coincidences between these two families of polynomials. Here their results are sharpened by the specification of unique representatives for the equivalence classes of indices for both families of polynomials, extended by the consideration of more general beta, and deepened by proving that the polynomial coincidences also hold at the level of the underlying tableau sets. Let R be the set of lengths of columns in the shape of lambda that are less than n. Ordered set partitions of […] Section: Combinatorics In connection with Fulkerson's conjecture on cycle covers, Fan and Raspaud proposed a weaker conjecture: For every bridgeless cubic graph $G$, there are three perfect matchings $M_1$, $M_2$, and $M_3$ such that $M_1\cap M_2 \cap M_3=\emptyset$. We call the property specified in this conjecture the three matching intersection property (the 3PM property for short). We study this property on matching covered graphs. The main results are a necessary and sufficient condition and its applications to the characterization of special graphs, such as Halin graphs and 4-regular graphs. Section: Graph Theory Total dominating set, connected vertex cover and Steiner tree are well-known graph problems. Despite the fact that they are NP-complete to optimize, it is easy (even trivial) to find solutions, regardless of their size. In this paper, we study a variant of these problems by adding conflicts, i.e., pairs of vertices that cannot both be in a solution. This new constraint leads to situations where it is NP-complete to decide if there exists a solution avoiding conflicts. This paper gives NP-completeness proofs for deciding the existence of a solution on different restricted classes of graphs and conflicts, improving recent results. We also propose polynomial-time constructions in several restricted cases, and we introduce a new parameter, the stretch, to capture the locality of the conflicts. Section: Graph Theory
We essentially examine the conditions under which the Law of Large Numbers holds for the sum $$\frac 1n \sum_{i=1}^n x_{ki}u_i,\;\;\; E(u_i)=0$$ for every regressor $k=1,\ldots,K$, and we also assume a finite variance for $u_i$. Now, when the $x_{ki}$'s are non-stochastic, they are just a sequence of real numbers, and we might as well re-write the sum as $$\frac 1n \sum_{i=1}^n a_{ki}u_i,\;\;\; E(u_i)=0$$ I write it like this to stress that the only source of randomness here is the $u_i$'s, and so what we are looking at is the average of independent but non-identically distributed random variables $z_{ki}=a_{ki}u_i$. This is Chebyshev's Law of Large Numbers, and it requires that the variance of each random variable be finite. This means that we need $$\text{Var}(z_{ki}) = a_{ki}^2 \text{Var}(u_i) < \infty,\;\;\; \forall i$$ Markov generalized this LLN to possibly non-independent random variables, where we require that $$\text{Var}\left(\frac 1n \sum_{i=1}^n a_{ki}u_i\right) \to 0$$ Under independence of the $u_i$'s, covariances are zero and we have $$\text{Var}\left(\frac 1n \sum_{i=1}^n a_{ki}u_i\right) = \text{Var}(u_i)\cdot \frac 1{n^2}\sum_{i=1}^n a_{ki}^2$$ and so the sufficient condition we need is $$\frac 1{n^2}\sum_{i=1}^n a_{ki}^2 \to 0$$ or, reverting to the original notation, $$\frac 1{n^2}\sum_{i=1}^n x_{ki}^2 \to 0$$
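As a quick numerical sanity check of the sufficient condition, here is a small Python sketch. The bounded weights $a_i = \sin(i)$ and the Uniform$(-1,1)$ noise are made-up choices for illustration (not from the question): for bounded weights, $\frac{1}{n^2}\sum_i a_i^2$ decays like $1/n$, and the weighted average concentrates around $E(u_i)=0$.

```python
import math
import random

def weighted_average(n, seed=0):
    """(1/n) * sum_i a_i * u_i with bounded weights a_i = sin(i)
    and i.i.d. zero-mean noise u_i ~ Uniform(-1, 1)."""
    rng = random.Random(seed)
    return sum(math.sin(i) * rng.uniform(-1, 1) for i in range(1, n + 1)) / n

def sufficient_condition(n):
    """The quantity (1/n^2) * sum_i a_i^2, which must tend to 0."""
    return sum(math.sin(i) ** 2 for i in range(1, n + 1)) / n ** 2

# sin(i)^2 averages to about 1/2, so this decays roughly like 0.5 / n:
print(sufficient_condition(100))    # ~ 5e-3
print(sufficient_condition(10000))  # ~ 5e-5
# Consequently the variance of the average vanishes and the LLN kicks in:
print(abs(weighted_average(100000)))  # close to 0
```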
The Ackermann function is a computable function which grows very, very quickly as its inputs grow. For example, while $A(1,2)$, $A(2,2)$, and $A(3,2)$ are equal to $4$, $7$, and $29$, respectively, $A(4,2) \approx 2 \times 10^{19728}$. The Ackermann function can be defined as follows: $$A(m,n) = \begin{cases} n+1 & \text{if } m = 0 \\ A(m-1, 1) & \text{if } m > 0 \text{ and } n = 0 \\ A(m-1, A(m, n-1)) & \text{if } m > 0 \text{ and } n > 0. \end{cases}$$ What are the last 3 digits of $A(4,5)$?
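The recurrence translates directly into code. The sketch below (with memoization and a raised recursion limit, both needed even for modest inputs) reproduces the small values quoted above; it does not answer the last-3-digits puzzle, since $A(4,5)$ is astronomically large and must be handled via the well-known closed forms such as $A(3,n) = 2^{n+3} - 3$.

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the nested recursion gets deep quickly

@lru_cache(maxsize=None)
def A(m, n):
    """Ackermann function, computed directly from its three-case definition."""
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

print(A(1, 2), A(2, 2), A(3, 2))  # 4 7 29
# Direct recursion is hopeless beyond m = 3: already A(4, 2) = 2**65536 - 3.
# The closed form A(3, n) = 2**(n + 3) - 3 agrees with the recursion:
print(A(3, 5) == 2 ** 8 - 3)  # True
```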