Note that the question only makes sense when $T\geq T_{\rm c}$, since the 2-point function does not decay when $T<T_{\rm c}$ (neither in the half-plane, nor in the full plane). I don't know how to prove the result when $T=T_{\rm c}$, although it is quite plausible that the technology developed around SLE makes it possible nowadays. The decay will not be exponential in this case, of course. On the other hand, it is possible to prove the result when $T>T_{\rm c}$. I will write $\langle\cdot\rangle_{\rm full}$ for the expectation in the full plane. You can first use Griffiths' inequality to prove that $$\langle \sigma_{00} \sigma_{N0} \rangle \leq \langle \sigma_{00} \sigma_{N0} \rangle _{\rm full}.$$ To obtain a lower bound, use again Griffiths' inequality to show that $$\langle \sigma_{00} \sigma_{N0} \rangle \geq \langle \sigma_{00} \sigma_{0M} \rangle \langle \sigma_{0M} \sigma_{NM} \rangle \langle \sigma_{NM} \sigma_{N0} \rangle \geq e^{-c M} \langle \sigma_{0M} \sigma_{NM} \rangle.$$ Choose $M=N^\alpha$ for some arbitrary $\tfrac12<\alpha<1$. Then, it follows from the analysis developed here that $$\langle \sigma_{0M} \sigma_{NM} \rangle = (1+o(1)) \langle \sigma_{0M} \sigma_{NM} \rangle_{\rm full} = (1+o(1)) \langle \sigma_{00} \sigma_{N0} \rangle_{\rm full},$$ where the term $o(1)$ vanishes as $N\to\infty$. We thus have $$\langle \sigma_{00} \sigma_{N0} \rangle \geq e^{-c M} \langle \sigma_{00} \sigma_{N0} \rangle_{\rm full}.$$ The conclusion follows, since the decay is exponential in the full plane and $\lim_{N\to\infty}M/N = 0$. (Note that the factor $e^{-cM}$ is very poor; with enough work, it should however be possible to compute precisely the order of the prefactor, but this is not necessary for our purpose here.) 
To conclude, I have no idea whether the decay of this 2-point function can be computed explicitly and directly, but the problem is easily solved using the many techniques that mathematical physicists have developed over the last decades to analyze the Ising model nonperturbatively, with full mathematical rigor. Actually, the above proof of the equality of the rates of decay remains true when $d\geq 3$ (and you work in a half-space), since it has been proved that the decay in the full space is exponential when $T>T_{\rm c}$, and the Ornstein-Zernike result I use above also remains valid in higher dimensions. So one can go much further than what is doable with exact computations.
Quite a lot of algebraic number theory was invented through trying to prove Fermat's Last Theorem and other Diophantine problems. For example, if I asked you to solve the equation $x^2 - y^2 = 5$ in integers, it is very simple: you can factorise $(x+y)(x-y) = 5$ and solve the problem by linking it to the divisors of $5$, obtaining the solutions $x = \pm 3, y=\pm 2$. Now say I ask you to solve the equation $y^3 = x^2 + 2$ in integers. This is not so easy if we work entirely in $\mathbb{Z}$ and use elementary methods. However, if we shift focus to a bigger ring of numbers, $\mathbb{Z}[\sqrt{-2}] = \{a+b\sqrt{-2}\,|\,a,b\in\mathbb{Z}\}$, then the problem again turns into a multiplicative one: $(x + \sqrt{-2})(x- \sqrt{-2}) = y^3$, so that solving the original Diophantine equation is really the same as solving a "product"-style equation in $\mathbb{Z}[\sqrt{-2}]$. The point of (basic) algebraic number theory is to study rings like this. How do the elements in these rings factorise? In our problem above it turns out that the ring $\mathbb{Z}[\sqrt{-2}]$ has properties that are strikingly close to those of $\mathbb{Z}$. In fact the elements in this ring "factorise uniquely" into irreducible elements (the analogue of prime numbers in $\mathbb{Z}$). The phrase "factorise uniquely" does not have quite the meaning you might think: we have to allow for multiplication by units (things that "divide $1$"). It is the ordering of $\mathbb{Z}$ that allows us to consider unique factorisations into "positive primes". There is also a notion of coprimality. This allows us to solve our problem, since for odd $x$ it can be shown that $x\pm\sqrt{-2}$ are coprime in $\mathbb{Z}[\sqrt{-2}]$. But their product is a cube, so (as in $\mathbb{Z}$) we must have $x + \sqrt{-2} = (a+b\sqrt{-2})^3$ for some $a+b\sqrt{-2}\in \mathbb{Z}[\sqrt{-2}]$. Comparing coefficients lets you find the possibilities for $a,b$, hence for $x$. 
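As a quick numerical sanity check (not part of the argument above, which is what actually proves completeness), a brute-force search over a small box recovers the integer solutions of $y^3 = x^2 + 2$:

```python
# Brute-force search for integer solutions of y^3 = x^2 + 2.
# Illustrative only: the algebraic argument in Z[sqrt(-2)] is what
# shows these are the *only* solutions.
solutions = [
    (x, y)
    for x in range(-1000, 1001)
    for y in range(0, 101)          # y^3 <= 10^6, so |x| <= 1000 suffices
    if y**3 == x**2 + 2
]
print(solutions)  # [(-5, 3), (5, 3)]
```

Consistent with the method sketched above: $x = \pm 5$, $y = 3$, i.e. $27 = 25 + 2$.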
The idea of Lamé and Kummer was to study FLT in the same way, by considering the factorisation (for $\zeta$ a primitive $p$-th root of unity) $z^p = x^p + y^p = (x + y)(x + \zeta y) \cdots (x + \zeta^{p-1} y)$, forming yet another product equation, now in the ring $\mathbb{Z}[\zeta]$. Now this is not the entire story, since some of the rings we study in algebraic number theory do not have unique factorisation. For example the ring $\mathbb{Z}[\sqrt{-5}]$ does not, since $6 = 2\times 3 = (1+\sqrt{-5})(1 - \sqrt{-5})$ gives two totally different factorisations of $6$. In fact the ring $\mathbb{Z}[\zeta]$ fails to have unique factorisation already for $p=23$, so FLT could not be solved entirely by the above method. What stopped factorisation being unique was that the ring wasn't big enough to break everything down into the same pieces. Fortunately we can restore unique factorisation without having to extend the ring! Kummer and Dedekind realised that by considering the "multiples" of an element as an object in its own right, we can reformulate factorisation in a way that becomes unique up to ordering. In modern language these objects are called ideals of a ring. There is a notion of a prime ideal, capturing the notion of a prime number. The different factorisations of $6$ above can be explained as reorderings of the prime ideals in the factorisation of the ideal generated by $6$. These prime ideals are NOT generated by one element, so they don't correspond to "multiples" of something in $\mathbb{Z}[\sqrt{-5}]$; they correspond more to "multiples" of something that doesn't exist in the ring, but would exist after making an extension. Kummer was able to prove a huge number of cases of FLT using this ideal theory; this is outlined in many books. The focus of algebraic number theory then turns to studying these algebraic constructions. We see that in a given "nice" ring, certain prime numbers may factorise further, whereas others don't. 
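The failure of unique factorisation in $\mathbb{Z}[\sqrt{-5}]$ can be made concrete with the norm $N(a+b\sqrt{-5}) = a^2+5b^2$, which is multiplicative. A short script (the helper names are my own) checks that no element has norm $2$ or $3$, so $2$, $3$ and $1\pm\sqrt{-5}$ are all irreducible even though $2\cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5})$:

```python
# Norm on Z[sqrt(-5)]: N(a + b*sqrt(-5)) = a^2 + 5b^2 (multiplicative).
def norm(a, b):
    return a * a + 5 * b * b

# All elements (a, b) of a given norm, searched over a modest box.
# Since a^2 + 5b^2 = n forces |a|, |b| <= sqrt(n), a small bound suffices.
def elements_of_norm(n, bound=10):
    return [(a, b) for a in range(-bound, bound + 1)
                   for b in range(-bound, bound + 1)
                   if norm(a, b) == n]

# Any proper factor of 2 or 3 would have norm 2 or 3 -- but none exist:
print(elements_of_norm(2), elements_of_norm(3))  # [] []
# while N(2) = 4, N(3) = 9 and N(1 +/- sqrt(-5)) = 6.
print(norm(2, 0), norm(3, 0), norm(1, 1), norm(1, -1))
```

So the two factorisations of $6$ really are into irreducibles, and only the passage to prime ideals reconciles them.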
For example, in $\mathbb{Z}[i]$ we find that a prime $p$ factorises further if and only if $p=2$ or $p \equiv 1$ mod $4$. The factorisation of $2$ is different from the others in that $2 = -i(1+i)^2$ is not "square-free". All the others factorise into two distinct factors. We say that $2$ ramifies, primes $p\equiv 1$ mod $4$ split, and primes $p\equiv 3$ mod $4$ are inert. This congruence description of the factorisation of primes is in some sense really explained by the values of the Legendre symbol $\left(\frac{-1}{p}\right)$, which also explains sums of two squares! Working in similar rings gives you the entire quadratic reciprocity law. The goal of class field theory is to explain the splitting of primes in ANY extension of "number fields", to get similar characterisations in terms of congruences. In fact I just told a lie: we cannot yet do this for ANY extension; class field theory does it for abelian extensions (ones with abelian Galois group), but nevertheless it is quite a strong theory with many applications (for example it solves the question of which primes can be written as $x^2 + ny^2$). In the case of abelian extensions of $\mathbb{Q}$ we find that there are simple congruence conditions mod some integer $N$ that completely describe the splitting behaviour of primes! Another side of class field theory is the Chebotarev density theorem, which states, essentially, that each splitting type occurs infinitely often, with a definite density. This is a huge generalisation of Dirichlet's theorem on primes in arithmetic progressions... in fact it provides infinitely many Dirichlet-type theorems, one for each abelian extension. These days the (mostly unproved) Langlands programme is supposed to be filling in the gaps for non-abelian extensions, but it is very difficult and not yet completely understood. When it is fully understood, it will be a holy grail of number theory, characterising the splitting of primes in vast generality. 
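The splitting law in $\mathbb{Z}[i]$ is easy to check numerically: an odd prime splits exactly when it is a sum of two squares, which happens exactly when $p \equiv 1 \pmod 4$. A small illustrative script (function names are my own):

```python
# Classify small primes by their behaviour in Z[i] and verify that the
# mod-4 congruence agrees with the two-squares criterion.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def sum_of_two_squares(p):
    # p = a^2 + b^2 with a, b >= 1  <=>  p is not inert in Z[i]
    return any(round((p - a * a) ** 0.5) ** 2 == p - a * a and p - a * a >= 1
               for a in range(1, int(p**0.5) + 1))

for p in (x for x in range(2, 60) if is_prime(x)):
    if p == 2:
        kind = "ramified"   # 2 = -i(1+i)^2
    elif p % 4 == 1:
        kind = "split"      # e.g. 5 = (2+i)(2-i)
    else:
        kind = "inert"
    # congruence classification agrees with the two-squares test:
    assert (p % 4 != 3) == sum_of_two_squares(p)
    print(p, kind)
```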
Anyway, I hope this somewhat rushed introduction will whet your appetite. The book I first started with was Stewart & Tall, Algebraic Number Theory and Fermat's Last Theorem; this is a good book to start you off. Also Lang, Algebraic Number Theory; Cox, Primes of the Form $x^2 + ny^2$; and Childress, Class Field Theory are good ones to start with for class field theory.
This is related to waves & optics. I am given the Fourier transform of a function (the spectrum of frequencies for a pulse of sound) $$\hat f(w) = \operatorname{sinc}\big((w-w_0)\tau\big),$$ and now a filter is attached so that it only allows waves with frequency $w_0$ to pass through, and I am asked to find what $f(t)$ is like after we apply the filter. The question hints at using Fourier series to solve it, but I am unaware of how I can use the Fourier transform together with the series (or of any relations between them). The solution states that $f(t)$, written as a Fourier series, is $$f(t) = \sum \hat f(w)\cos(wt),$$ i.e. that the transform gives the coefficients of this series. Why is this true?
Topologies on Sets Definition: Let $X$ be a set. A Topology on $X$ is a collection $\tau$ of subsets of $X$ that satisfies the following properties: 1) $X, \emptyset \in \tau$. 2) If $\{ U_i : i \in I \}$ is any arbitrary collection of subsets of $X$ such that $U_i \in \tau$ for all $i \in I$, then the union $\displaystyle{\bigcup_{i \in I} U_i \in \tau}$. 3) If $\{ U_1, U_2, ..., U_n \}$ is any finite collection of subsets of $X$ such that $U_i \in \tau$ for all $i \in \{ 1, 2, ..., n \}$, then the intersection $\displaystyle{\bigcap_{i=1}^{n} U_i \in \tau}$. The pair $(X, \tau)$ is called a Topological Space. Note that (2) states that the union of any ARBITRARY collection of sets in $\tau$ is also contained in $\tau$, while (3) states that the intersection of any FINITE collection of sets in $\tau$ is also contained in $\tau$. In other words, a topology $\tau$ on $X$ is closed under arbitrary unions and under finite intersections. It can easily be shown that for any set $X$, both $\tau = \{ \emptyset, X \}$ and $\tau = \mathcal P (X)$ (the power set of $X$, which is the collection of all subsets of $X$) are topologies on $X$. These topologies are given a special name. Definition: The Trivial Topologies on $X$ are $\tau = \{ \emptyset, X \}$ and $\tau = \mathcal P (X)$. We now give an example of a set $X$ and a nontrivial topology $\tau$ on $X$. Consider the following set:(1) And the following collection of subsets of $X$:(2) We now show that $\tau$ is a topology on $X$. We see that $\emptyset, X \in \tau$, so (1) is satisfied. We now list all of the unions of sets from $\tau$:(3) Each of these unions is contained in $\tau$, so (2) is satisfied. We now list all of the intersections of sets from $\tau$:(4) Each of these intersections (which are finite since $X$ is finite) is contained in $\tau$, so (3) is satisfied, and hence $\tau$ is a topology on $X$.
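To make the axioms concrete, here is a small checker for finite examples. The set $X = \{a, b, c\}$ and the collections below are my own illustration, not the unnamed example referred to in the text:

```python
from itertools import combinations

def is_topology(X, tau):
    """Check the three topology axioms for a finite collection tau of subsets of X."""
    tau = {frozenset(U) for U in tau}
    X = frozenset(X)
    # 1) X and the empty set belong to tau.
    if X not in tau or frozenset() not in tau:
        return False
    # For a *finite* tau, closure under pairwise unions and intersections
    # already implies closure under arbitrary unions / finite intersections.
    for U, V in combinations(tau, 2):
        if U | V not in tau or U & V not in tau:
            return False
    return True

X = {"a", "b", "c"}
print(is_topology(X, [set(), {"a"}, {"a", "b"}, X]))   # True
print(is_topology(X, [set(), {"a"}, {"b"}, X]))        # False: {a} U {b} not in tau
```

The failing example shows why axiom (2) has real content: the collection misses the union $\{a\} \cup \{b\} = \{a, b\}$.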
The Wannier-Bloch resonance states are metastable states of a quantum particle in a space-periodic potential plus a homogeneous field. Here we analyze the states of a quantum particle in a space- and time-periodic potential. In this case the dynamics of the classical counterpart of the quantum system is either quasiregular or chaotic, depending on the driving frequency. It is shown that both the quasiregular and the chaotic motion can support quantum resonances. The relevance of the obtained result to the problem of a crystal electron under the simultaneous influence of d.c. and a.c. electric fields is briefly discussed. PACS: 73.20Dx, 73.40Gk, 05.45.+b We study the statistics of the Wigner delay time and resonance width for a Bloch particle in a.c. and d.c. fields in the regime of quantum chaos. It is shown that after appropriate rescaling the distributions of these quantities have the universal character predicted by the random matrix theory of chaotic scattering. Anwendungen effizienter Verfahren in Automation - Universität Karlsruhe auf der SPS97 in Nürnberg - (1998) [Applications of efficient methods in automation - University of Karlsruhe at SPS97 in Nuremberg] We present a parallel path planning method that is able to automatically handle multiple goal configurations as input. There are two basic approaches, goal switching and bi-directional search, which are combined in the end. Goal switching dynamically selects a favourite goal depending on some distance function. The bi-directional search supports the backward search direction from the goal to the start configuration, which is probably faster. The multi-directional search with goal switching combines the advantages of goal switching and bi-directional search. Altogether, the planning system is enabled to select one of the preferable goal configurations by itself. All concepts are experimentally validated on a set of benchmark problems consisting of an industrial robot arm with six degrees of freedom in a 3D environment. 
We consider the task of complete spatial coverage of regions by mobile robots. The regions may lie in completely known, partially known, or unknown environments. The solution is based on a method from computer graphics for filling image regions. The method has a local viewpoint and thus permits the use of sensor data and the handling of unforeseen obstacles. The regions can be specified off-line by maps or built up on-line from sensor data. Nevertheless, a complete coverage in which each part of the area is processed exactly once is guaranteed. This is validated with examples in a graphical visualisation of the real-time control of the robot. This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretisation, the new approach usually shows linear, and sometimes even superlinear, speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds. A practical distributed planning and control system for industrial robots is presented. The hierarchical concept consists of three independent levels. Each level is modularly implemented and supplies an application programming interface (API) to the next higher level. 
At the top level, we propose an automatic motion planner. The motion planner is based on a best-first search algorithm and needs no essential off-line computations. At the middle level, we propose a PC-based robot control architecture, which can easily be adapted to any industrial kinematics and application. Based on a client/server principle, the control unit establishes an open user interface for including application-specific programs. At the bottom level, we propose a flexible and modular concept for the integration of the distributed motion control units based on the CAN bus. The concept allows an on-line adaptation of the control parameters according to the robot's configuration. This implies high accuracy for the path execution and improves the overall system performance. We present a parallel control architecture for industrial robot cells. It is based on closed functional components arranged in a flat communication hierarchy. The components may be executed by different processing elements, and each component itself may run on multiple processing elements. The system is driven by the instructions of a central cell control component. We set up necessary requirements for industrial robot cells and possible parallelization levels. These are met by the suggested robot control architecture. As an example we present a robot work cell and a component for motion planning, which fits well into this concept. This paper is based on a path planning approach we reported earlier for industrial robot arms with 6 degrees of freedom in an on-line given 3D environment. It has on-line capabilities, searching in an implicit and discrete configuration space and detecting collisions in the Cartesian workspace by distance computation based on the given CAD model. Here, we present different methods for specifying the C-space discretization. 
Besides the usual uniform and heuristic discretization, we investigate two versions of an optimal discretization for a user-predefined Cartesian resolution. The different methods are experimentally evaluated. Additionally, we provide a set of 3-dimensional benchmark problems for a fair comparison of path planners. For each benchmark, the run-times of our planner are between only 3 and 100 seconds on a Pentium PC with 133 MHz. In this paper, the problem of path planning for robot manipulators with six degrees of freedom in an on-line provided three-dimensional environment is investigated. As a basic approach, the best-first algorithm is used to search in the implicit discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. The basic approach is extended by three simple mechanisms and results in a heuristic hierarchical search. This is done by adjusting the step size of the search to the distance between the robot and the obstacles. As a first step, we show encouraging experimental results with two degrees of freedom for five typical benchmark problems. This paper presents a new approach to parallel path planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on a best-first search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on polyhedral models of the robot and the obstacles. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel path planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. 
With optimal discretisation, the new approach usually shows very good speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds. This paper presents a new approach to parallel motion planning for industrial robot arms with six degrees of freedom in an on-line given 3D environment. The method is based on the A*-search algorithm and needs no essential off-line computations. The algorithm works in an implicitly discrete configuration space. Collisions are detected in the Cartesian workspace by hierarchical distance computation based on the given CAD model. By decomposing the 6D configuration space into hypercubes and cyclically mapping them onto multiple processing units, a good load distribution can be achieved. We have implemented the parallel motion planner on a workstation cluster with 9 PCs and tested the planner in several benchmark environments. With optimal discretisation, the new approach usually shows linear speedups. In on-line provided environments with static obstacles, the parallel planning times are only a few seconds. This paper discusses the problem of automatic off-line programming and motion planning for industrial robots. First, a new concept consisting of three steps is proposed. In the first step, a new method for on-line motion planning is introduced. The motion planning method is based on the A*-search algorithm and works in the implicit configuration space. During the search, collisions are detected in the explicitly represented Cartesian workspace by hierarchical distance computation. In the second step, the trajectory planner has to transform the path into a time- and energy-optimal robot program. The practical application of these two steps strongly depends on a method for robot calibration with high accuracy, thus mapping the virtual world onto the real world, which is discussed in the third step. 
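The abstracts above repeatedly combine A*/best-first search over a discretised configuration space with collision checks. Stripped down to a 2D grid with blocked cells, the search idea looks roughly like this (a generic textbook A*, not the authors' implementation):

```python
import heapq

def astar(start, goal, blocked, width, height):
    """A* on a 4-connected grid; 'blocked' is a set of (x, y) obstacle cells."""
    def h(p):  # admissible Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, cell, path)
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in blocked:
                heapq.heappush(frontier,
                               (cost + 1 + h((nx, ny)), cost + 1,
                                (nx, ny), path + [(nx, ny)]))
    return None  # no collision-free path exists

# Toy "environment": a vertical wall with gaps at the top and bottom rows.
path = astar((0, 0), (4, 4), blocked={(2, 1), (2, 2), (2, 3)}, width=5, height=5)
print(len(path) - 1)  # number of moves in the shortest path
```

In the papers, the role of `blocked` is played by hierarchical Cartesian distance computation against a CAD model, and the grid is the (6D) discretised configuration space.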
We prove that there exists a positive \(\alpha\) such that for any integer \(d\ge 3\) and any topological types \(S_1,\dots,S_n\) of plane curve singularities satisfying \(\mu(S_1)+\dots+\mu(S_n)\le\alpha d^2\), there exists a reduced irreducible plane curve of degree \(d\) with exactly \(n\) singular points of types \(S_1,\dots,S_n\), respectively. This estimate is optimal with respect to the exponent of \(d\). In particular, we prove that for any topological type \(S\) there exists an irreducible polynomial of degree \(d\le 14\sqrt{\mu(S)}\) having a singular point of type \(S\). A formalism is developed for calculating the quasienergy states and spectrum for time-periodic quantum systems when a time-periodic dynamical invariant operator with a nondegenerate spectrum is known. The method, which circumvents the integration of the Schrödinger equation, is applied to an integrable class of systems, where the global invariant operator is constructed. Furthermore, a local integrable approximation for more general non-integrable systems is developed. Numerical results are presented for the double-resonance model. We consider N coupled linear oscillators with time-dependent coefficients. An exact complex-amplitude/real-phase decomposition of the oscillatory motion is constructed. This decomposition is further used to derive N exact constants of motion which generalise the so-called Ermakov-Lewis invariant of a single oscillator. In the Floquet problem of periodic oscillator coefficients we discuss the existence of periodic complex amplitude functions in terms of existing Floquet solutions. When gripping deformable or fragile workpieces, the gripping speed and the gripping force are of particular importance. In this work a universal controller for pneumatic grippers is described which allows a simple adjustment of these quantities via two voltage-controlled proportional valves. 
This arrangement is used for an analysis of the influence of gripping force and gripping speed when gripping cables and cable harnesses, which has proved to be robust and unproblematic. Enhancing the quality of surgical interventions is one of the main goals of surgical robotics. Thus we have devised a surgical robotic system for maxillofacial surgery which can be used as an intelligent intraoperative surgical tool. Up to now a surgeon preoperatively plans an intervention by studying two-dimensional X-rays, thus neglecting the third dimension. In the course of the special research programme "Computer and Sensor Aided Surgery", a planning system has been developed at our institute which allows the surgeon to plan an operation on a three-dimensional computer model of the patient. Transposing the preoperatively planned bone cuts, bore holes, cavities, and milled surfaces during surgery still proves to be a problem, as no adequate means are at hand: the actual performance of the surgical intervention and the surgical outcome depend solely on the experience and the skill of the operating surgeon. In this paper we present our approach to a surgical robotic system to be used in maxillofacial surgery. Special stress is laid upon the modelling of the environment in the operating theatre and the motion planning of our surgical robot. The quasienergy spectrum of a periodically driven quantum system is constructed from classical dynamics by means of the semiclassical initial value representation using coherent states. For the first time, this method is applied to explicitly time-dependent systems. For an anharmonic oscillator system with mixed chaotic and regular classical dynamics, the entire quantum spectrum (both regular and chaotic states) is reproduced semiclassically with surprising accuracy. In particular, the method is capable of accounting for the very small tunneling splittings. 
The dispersions of dipolar (Damon-Eshbach modes) and exchange-dominated spin waves are calculated for in-plane magnetized thin and ultrathin cubic films with (111) crystal orientation, and the results are compared with those obtained for the other principal planes. The properties of these magnetic excitations are examined from the point of view of Brillouin light scattering experiments. Attention is paid to the variation of the spin-wave frequency as a function of the magnetization direction in the film plane for different film thicknesses. Interface anisotropies and the bulk magnetocrystalline anisotropy are included in the calculation. A quantitative comparison between an analytical expression obtained in the limit of small film thickness and wave vector and the full numerical calculation is given.
This is a two-part question relating firstly to the change-of-measure density used in Girsanov's theorem and secondly to the stochastic exponential. Whilst reading notes relating to Girsanov it is stated that the change-of-measure density martingale may be written \begin{align} \rho_t = \exp \left[- \int_{0}^{t} \lambda_s \, dW_s - \tfrac{1}{2}\int_{0}^{t} \lambda_{s}^{2} \, ds \right]. \end{align} It is stated that using Itô's Lemma it is straightforward to verify that the stochastic differential of $\rho_t$ is given by \begin{align} d\rho_t = -\rho_t \, \lambda_t \, dW_t. \end{align} I think I've solved this and applied Itô as follows (treating $\lambda$ as a constant, so that the integrals collapse): \begin{align} \rho_t &= \exp\left[-\lambda W_t - \tfrac{1}{2} \, \lambda^{2} \, t \right]\\ d\rho_t &= \frac{\partial\rho_t}{\partial t} dt + \frac{\partial \rho_t}{\partial W} dW_t + \tfrac{1}{2} \frac{\partial^2 \rho_t}{\partial W^{2}} (dW_{t})^2\\ &= -\tfrac{1}{2} \lambda^{2}\exp\left[\dots\right] dt - \lambda\exp\left[\dots\right] dW_{t} + \tfrac{1}{2} \lambda^2 \exp\left[\dots\right] (dW_{t})^{2}\\ &= -\tfrac{1}{2}\,\lambda^{2}\,\rho_{t}\,dt - \lambda\,\rho_{t}\,dW_{t} + \tfrac{1}{2}\,\lambda^{2}\,\rho_{t}\, dt\\ &= -\lambda\,\rho_{t}\,dW_{t} \end{align} $\textbf{Question 1}$ - Is this correct? The stochastic exponential is stated as \begin{align} \mathcal{E}_t(X) = \exp\left[ X_t - \tfrac{1}{2} \langle X,X \rangle_{t} \right]. \end{align} $\textbf{Question 2}$ - I believe $\langle X,X \rangle_t$ is the quadratic variation; should I interpret this the same way as the $\int_{0}^{t} \lambda_{s}^{2}\,ds$ term in the change-of-measure density above? It is stated (Filipović, Term-Structure Models) that if $X_t$ is a continuous local martingale with $X_0 = 0$, then using Itô one can see that $d\mathcal{E}_t(X) = \mathcal{E}_t(X)\,dX_t$. 
The solution I have is as follows: \begin{align} d\mathcal{E}_t(X) &= \exp\left[X_t - \tfrac{1}{2}\langle X \rangle_t \right]\left(dX_t - \tfrac{1}{2}d\langle X \rangle_t \right) + \tfrac{1}{2}\exp\left[X_t - \tfrac{1}{2}\langle X \rangle_t \right]d\langle X \rangle_t\\ &= \exp\left[X_t - \tfrac{1}{2}\langle X \rangle_t \right] dX_t\\ &= \mathcal{E}_{t}(X)\,dX_t\\ \text{obviously $\mathcal{E}_0(X) = 1$.} \end{align} $\textbf{Question 3}$ - I have difficulty understanding the notation here and cannot see how Itô has been applied in this case (as I cannot see the $dt$ and $dW$ terms). I'd appreciate any help showing me how Itô has been applied here (and why it is obvious that $\mathcal{E}_{0}(X) = 1$). Many thanks, John
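As a numerical sanity check (my own sketch, not from the notes or from Filipović): with a constant $\lambda$, $\rho_t = \exp(-\lambda W_t - \tfrac12\lambda^2 t)$ solves $d\rho_t = -\rho_t\lambda\,dW_t$, and its martingale property means $\mathbb{E}[\rho_T]$ should stay at $\rho_0 = \mathcal{E}_0 = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T, n_steps, n_paths = 0.5, 1.0, 200, 20000
dt = T / n_steps

# Simulate Brownian paths and evaluate the exact solution
# rho_T = exp(-lam * W_T - 0.5 * lam^2 * T) on each path.
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
rho_T = np.exp(-lam * W[:, -1] - 0.5 * lam**2 * T)

print(rho_T.mean())  # close to 1.0, up to Monte Carlo error
```

The $-\tfrac12\lambda^2 t$ term in the exponent is exactly what cancels the Itô correction and keeps the mean at 1; dropping it would give $\mathbb{E}[e^{-\lambda W_T}] = e^{\lambda^2 T/2} > 1$.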
Focus Questions The following questions are meant to guide our study of the material in this section. After studying this section, we should understand the concepts motivated by these questions and be able to write precise, coherent answers to these questions. What is de Moivre’s Theorem and why is it useful? If \(n\) is a positive integer, what is an \(n\)th root of a complex number? How many \(n\)th roots does a complex number have? How do we find all of the \(n\)th roots of a complex number? The trigonometric form of a complex number provides a relatively quick and easy way to compute products of complex numbers. As a consequence, we will be able to quickly calculate powers of complex numbers, and even roots of complex numbers. Beginning Activity Let \(z = r(\cos(\theta) + i\sin(\theta))\). Use the trigonometric form of \(z\) to show that \[z^{2} = r^{2}(\cos(2\theta) + i\sin(2\theta)) \label{eq1}\] De Moivre’s Theorem The result of Equation \ref{eq1} is not restricted to squares of a complex number. If \(z = r(\cos(\theta) + i\sin(\theta))\), then it is also true that \[ \begin{align*} z^{3} &= zz^{2} \\[4pt] &= (r)(r^{2})(\cos(\theta + 2\theta) +i\sin(\theta + 2\theta)) \\[4pt] &= r^{3}(\cos(3\theta) + i\sin(3\theta)) \end{align*}\] We can continue this pattern to see that \[ \begin{align*} z^{4} &= zz^{3} \\[4pt] &= (r)(r^{3})(\cos(\theta + 3\theta) +i\sin(\theta + 3\theta)) \\[4pt] &= r^{4}(\cos(4\theta) + i\sin(4\theta)) \end{align*}\] The equations for \(z^{2}\), \(z^{3}\), and \(z^{4}\) establish a pattern that is true in general; this result is called de Moivre’s Theorem. De Moivre’s Theorem Let \(z = r(\cos(\theta) + i\sin(\theta))\) be a complex number and \(n\) any integer. Then \[z^{n} = r^{n}(\cos(n\theta) +i\sin(n\theta)) \label{DeMoivre}\] It turns out that de Moivre’s Theorem works for negative integer powers as well. Exercise \(\PageIndex{1}\) Write the complex number \(1 - i\) in polar form. 
Then use de Moivre’s Theorem (Equation \ref{DeMoivre}) to write \((1 - i)^{10}\) in the complex form \(a + bi\), where \(a\) and \(b\) are real numbers and do not involve the use of a trigonometric function. Answer In polar form, \[1 - i = \sqrt{2}(\cos(-\dfrac{\pi}{4}) + i\sin(-\dfrac{\pi}{4}))\] So \[(1 - i)^{10} = (\sqrt{2})^{10}(\cos(-\dfrac{10\pi}{4}) + i\sin(-\dfrac{10\pi}{4})) = 32(\cos(-\dfrac{5\pi}{2}) + i\sin(-\dfrac{5\pi}{2})) = 32(0 - i) = -32i\] Roots of Complex Numbers De Moivre’s Theorem is very useful in calculating powers of complex numbers, even fractional powers. We illustrate with an example. Example \(\PageIndex{1}\): Roots of Complex Numbers We will find all of the solutions to the equation \(x^{3} - 1 = 0\). These solutions are also called the roots of the polynomial \(x^{3} - 1\). Solution To solve the equation \(x^{3} - 1 = 0\), we add 1 to both sides to rewrite the equation in the form \(x^{3} = 1\). Recall that to solve a polynomial equation like \(x^{3} = 1\) means to find all of the numbers (real or complex) that satisfy the equation. We can take the real cube root of both sides of this equation to obtain the solution \(x_{0} = 1\), but every cubic polynomial should have three solutions. How can we find the other two? If we draw the graph of \(y = x^{3} - 1\) we see that the graph intersects the \(x\)-axis at only one point, so there is only one real solution to \(x^{3} = 1\). That means the other two solutions must be complex and we can use de Moivre’s Theorem to find them. To do this, suppose \[z = r[\cos(\theta) + i\sin(\theta)]\] is a solution to \(x^{3} = 1\). Then \[1 = z^{3} = r^{3}(\cos(3\theta) + i\sin(3\theta)). \nonumber \] Taking the modulus of both sides gives \(r^{3} = 1\), and since \(r\) is a nonnegative real number, \(r = 1\). We then reduce the equation \(x^{3} = 1\) to the equation \[1 = \cos(3\theta) + i\sin(3\theta),\] which has solutions when \(\cos(3\theta) = 1\) and \(\sin(3\theta) = 0\). 
This will occur when \(3\theta = 2\pi k\), or \(\theta = \dfrac{2\pi k}{3}\), where \(k\) is any integer. The distinct integer multiples of \(\dfrac{2\pi}{3}\) on the unit circle occur when \(k = 0\) and \(\theta = 0\), \(k = 1\) and \(\theta = \dfrac{2\pi}{3}\), and \(k = 2\) with \(\theta = \dfrac{4\pi}{3}\). In other words, the solutions to \(x^{3} = 1\) should be \[ \begin{align*} x_{0} &= \cos(0) + i\sin(0) = 1 \\[4pt] x_{1} &= \cos(\dfrac{2\pi}{3}) + i\sin(\dfrac{2\pi}{3}) = -\dfrac{1}{2} + \dfrac{\sqrt{3}}{2}i \\[4pt] x_{2} &= \cos(\dfrac{4\pi}{3}) + i\sin(\dfrac{4\pi}{3}) = -\dfrac{1}{2} - \dfrac{\sqrt{3}}{2}i \end{align*}\] We already know that \(x^{3}_{0} = 1^{3} = 1\), so \(x_{0}\) actually is a solution to \(x^{3} = 1\). To check that \(x_{1}\) and \(x_{2}\) are also solutions to \(x^{3} = 1\), we apply DeMoivre’s Theorem (Equation \ref{DeMoivre}): \[x^{3}_{1} = [\cos(\dfrac{2\pi}{3}) + i\sin(\dfrac{2\pi}{3})]^{3} = \cos(3(\dfrac{2\pi}{3})) + i\sin(3(\dfrac{2\pi}{3})) = \cos(2\pi) + i\sin(2\pi) = 1\], and \[x^{3}_{2} = [\cos(\dfrac{4\pi}{3}) + i\sin(\dfrac{4\pi}{3})]^{3} = \cos(3(\dfrac{4\pi}{3})) + i\sin(3(\dfrac{4\pi}{3})) = \cos(4\pi) + i\sin(4\pi) = 1\] Thus, \(x^{3}_{1} = 1\) and \(x^{3}_{2} = 1\) and we have found three solutions to the equation \(x^{3} = 1\). Since a cubic can have only three solutions, we have found them all. The general process of solving an equation of the form \(x^{n} = a + bi\), where \(n\) is a positive integer and \(a + bi\) is a complex number works the same way. Write \(a + bi\) in trigonometric form \[a + bi = r[\cos(\theta) + i\sin(\theta)] \nonumber \] and suppose that \(z = s[\cos(\alpha) + i\sin(\alpha)]\) is a solution to \(x^{n} = a + bi\).
Then \[a + bi = z^{n}\nonumber\] \[r[\cos(\theta) + i\sin(\theta)] = (s[\cos(\alpha) + i\sin(\alpha)])^{n}\nonumber\] \[r[\cos(\theta) + i\sin(\theta)] = s^{n}[\cos(n\alpha) + i\sin(n\alpha)]\nonumber\] Using the last equation, we see that \[s^{n} = r\] and \[\cos(\theta) + i\sin(\theta) = \cos(n\alpha) + i\sin(n\alpha)\nonumber\] Therefore, \[s^{n} = r\] and \[n\alpha = \theta + 2\pi k\nonumber\] where \(k\) is any integer. This gives us \[s = \sqrt[n]{r}\] and \[\alpha = \dfrac{\theta + 2\pi k}{n}\nonumber\] We will get \(n\) different solutions for \(k = 0, 1, 2, ..., n - 1\), and these will be all of the solutions. These solutions are called the \(n\)th roots of the complex number \(a + bi\). We summarize the results. Roots of Complex Numbers Let \(n\) be a positive integer. The \(n\)th roots of the complex number \(r[\cos(\theta) + i\sin(\theta)]\) are given by \[\sqrt[n]{r}[\cos(\dfrac{\theta + 2\pi k}{n}) + i\sin(\dfrac{\theta + 2\pi k}{n})]\] for \(k = 0, 1, 2, ..., (n - 1)\). If we want to represent the \(n\)th roots of \(r[\cos(\theta) + i\sin(\theta)]\) using degrees instead of radians, the roots will have the form \[\sqrt[n]{r}[\cos(\dfrac{\theta + 360^\circ k}{n}) + i\sin(\dfrac{\theta + 360^\circ k}{n})]\nonumber\] for \(k = 0, 1, 2, ..., (n - 1)\). Example \(\PageIndex{2}\): Square Roots of 1 As another example, we find the complex square roots of 1. In other words, we find the solutions to the equation \(z^{2} = 1\). Of course, we already know that the square roots of \(1\) are \(1\) and \(-1\), but it will be instructive to utilize our general result and see that it gives the same result. Note that the trigonometric form of \(1\) is \[1 = \cos(0) + i\sin(0)\] so the two square roots of \(1\) are \[\sqrt{1}[\cos(\dfrac{0 + 2\pi(0)}{2}) + i\sin(\dfrac{0 + 2\pi(0)}{2})] = \cos(0) +i\sin(0) = 1\] and \[\sqrt{1}[\cos(\dfrac{0 + 2\pi(1)}{2}) + i\sin(\dfrac{0 + 2\pi(1)}{2})] = \cos(\pi) +i\sin(\pi) = -1\] as expected.
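Both DeMoivre's Theorem and the general root formula are easy to sanity-check numerically. Here is a minimal Python sketch; the helper `nth_roots` is our own illustrative name, not part of the text:

```python
import cmath
import math

# De Moivre with r = sqrt(2), theta = -pi/4, n = 10 reproduces
# (1 - i)^10 = -32i from Exercise 1.
r, theta, n = math.sqrt(2), -math.pi / 4, 10
power = r**n * complex(math.cos(n * theta), math.sin(n * theta))
assert abs(power - (1 - 1j) ** 10) < 1e-9
assert abs(power - (-32j)) < 1e-9

def nth_roots(w: complex, n: int) -> list[complex]:
    """All n solutions of z**n == w, via
    r**(1/n) * (cos((theta + 2*pi*k)/n) + i*sin((theta + 2*pi*k)/n))."""
    r, theta = abs(w), cmath.phase(w)
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

# The three cube roots of unity from Example 1.
roots = nth_roots(1, 3)
assert all(abs(x**3 - 1) < 1e-9 for x in roots)
assert abs(roots[1] - complex(-1 / 2, math.sqrt(3) / 2)) < 1e-9
assert abs(roots[2] - complex(-1 / 2, -math.sqrt(3) / 2)) < 1e-9
```

Writing `cmath.exp(1j * angle)` for \(\cos(\text{angle}) + i\sin(\text{angle})\) is just Euler's formula in code; it keeps the root formula to one line.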
Exercise \(\PageIndex{2}\) 1. Find all solutions to \(x^{4} = 1\). (The solutions to \(x^{n} = 1\) are called the \(n\)th roots of unity, with unity being the number 1.) 2. Find all sixth roots of unity. Answer 1. We find the solutions to the equation \(z^{4} = 1\). Let \(\omega = \cos(\dfrac{2\pi}{4}) + i\sin(\dfrac{2\pi}{4}) = \cos(\dfrac{\pi}{2}) + i\sin(\dfrac{\pi}{2})\). Then \(\omega^{0} = 1\), \(\omega = i\), \(\omega^{2} = \cos(\dfrac{2\pi}{2}) + i\sin(\dfrac{2\pi}{2}) = -1\), and \(\omega^{3} = \cos(\dfrac{3\pi}{2}) + i\sin(\dfrac{3\pi}{2}) = -i\). So the four fourth roots of unity are \(1, i, -1,\) and \(-i\). 2. We find the solutions to the equation \(z^{6} = 1\). Let \(\omega = \cos(\dfrac{2\pi}{6}) + i\sin(\dfrac{2\pi}{6}) = \cos(\dfrac{\pi}{3}) + i\sin(\dfrac{\pi}{3})\). Then \(\omega^{0} = 1\), \(\omega = \dfrac{1}{2} + \dfrac{\sqrt{3}}{2}i\), \(\omega^{2} = \cos(\dfrac{2\pi}{3}) + i\sin(\dfrac{2\pi}{3}) = -\dfrac{1}{2} + \dfrac{\sqrt{3}}{2}i\), \(\omega^{3} = \cos(\dfrac{3\pi}{3}) + i\sin(\dfrac{3\pi}{3}) = -1\), \(\omega^{4} = \cos(\dfrac{4\pi}{3}) + i\sin(\dfrac{4\pi}{3}) = -\dfrac{1}{2} - \dfrac{\sqrt{3}}{2}i\), and \(\omega^{5} = \cos(\dfrac{5\pi}{3}) + i\sin(\dfrac{5\pi}{3}) = \dfrac{1}{2} - \dfrac{\sqrt{3}}{2}i\). So the six sixth roots of unity are \(1, \dfrac{1}{2} + \dfrac{\sqrt{3}}{2}i, -\dfrac{1}{2} + \dfrac{\sqrt{3}}{2}i, -1, -\dfrac{1}{2} - \dfrac{\sqrt{3}}{2}i\), and \(\dfrac{1}{2} - \dfrac{\sqrt{3}}{2}i\). Now let’s apply our result to find roots of complex numbers other than \(1\).
Example \(\PageIndex{3}\): Roots of Other Complex Numbers We will find the solutions to the equation \[x^{4} = -8 + 8\sqrt{3}i \nonumber\] Solution Note that we can write the right hand side of this equation in trigonometric form as \[-8 + 8\sqrt{3}i = 16(\cos(\dfrac{2\pi}{3}) + i\sin(\dfrac{2\pi}{3}))\] The fourth roots of \(-8 + 8\sqrt{3}i\) are then \[x_{0} = \sqrt[4]{16}[\cos(\dfrac{\dfrac{2\pi}{3} + 2\pi(0)}{4}) + i\sin(\dfrac{\dfrac{2\pi}{3} + 2\pi(0)}{4})] = 2[\cos(\dfrac{\pi}{6}) + i\sin(\dfrac{\pi}{6})] = 2(\dfrac{\sqrt{3}}{2} + \dfrac{1}{2}i) = \sqrt{3} + i\] \[x_{1} = \sqrt[4]{16}[\cos(\dfrac{\dfrac{2\pi}{3} + 2\pi(1)}{4}) + i\sin(\dfrac{\dfrac{2\pi}{3} + 2\pi(1)}{4})] = 2[\cos(\dfrac{2\pi}{3}) + i\sin(\dfrac{2\pi}{3})] = 2(-\dfrac{1}{2} + \dfrac{\sqrt{3}}{2}i) = -1 + \sqrt{3}i\] \[x_{2} = \sqrt[4]{16}[\cos(\dfrac{\dfrac{2\pi}{3} + 2\pi(2)}{4}) + i\sin(\dfrac{\dfrac{2\pi}{3} + 2\pi(2)}{4})] = 2[\cos(\dfrac{7\pi}{6}) + i\sin(\dfrac{7\pi}{6})] = 2(-\dfrac{\sqrt{3}}{2} - \dfrac{1}{2}i) = -\sqrt{3} - i\] \[x_{3} = \sqrt[4]{16}[\cos(\dfrac{\dfrac{2\pi}{3} + 2\pi(3)}{4}) + i\sin(\dfrac{\dfrac{2\pi}{3} + 2\pi(3)}{4})] = 2[\cos(\dfrac{5\pi}{3}) + i\sin(\dfrac{5\pi}{3})] = 2(\dfrac{1}{2} - \dfrac{\sqrt{3}}{2}i) = 1 - \sqrt{3}i\] Exercise \(\PageIndex{3}\) Find all fourth roots of \(-256\), that is, find all solutions of the equation \(x^{4} = -256\).
Answer Since \(-256 = 256[\cos(\pi) + i\sin(\pi)]\) we see that the fourth roots of \(-256\) are \[x_{0} = \sqrt[4]{256}[\cos(\dfrac{\pi + 2\pi(0)}{4}) + i\sin(\dfrac{\pi + 2\pi(0)}{4})] = 4[\cos(\dfrac{\pi}{4}) + i\sin(\dfrac{\pi}{4})] = 4[\dfrac{\sqrt{2}}{2} + \dfrac{\sqrt{2}}{2}i] = 2\sqrt{2} + 2i\sqrt{2}\] \[x_{1} = \sqrt[4]{256}[\cos(\dfrac{\pi + 2\pi(1)}{4}) + i\sin(\dfrac{\pi + 2\pi(1)}{4})] = 4[\cos(\dfrac{3\pi}{4}) + i\sin(\dfrac{3\pi}{4})] = 4[-\dfrac{\sqrt{2}}{2} + \dfrac{\sqrt{2}}{2}i] = -2\sqrt{2} + 2i\sqrt{2}\] \[x_{2} = \sqrt[4]{256}[\cos(\dfrac{\pi + 2\pi(2)}{4}) + i\sin(\dfrac{\pi + 2\pi(2)}{4})] = 4[\cos(\dfrac{5\pi}{4}) + i\sin(\dfrac{5\pi}{4})] = 4[-\dfrac{\sqrt{2}}{2} - \dfrac{\sqrt{2}}{2}i] = -2\sqrt{2} - 2i\sqrt{2}\] \[x_{3} = \sqrt[4]{256}[\cos(\dfrac{\pi + 2\pi(3)}{4}) + i\sin(\dfrac{\pi + 2\pi(3)}{4})] = 4[\cos(\dfrac{7\pi}{4}) + i\sin(\dfrac{7\pi}{4})] = 4[\dfrac{\sqrt{2}}{2} - \dfrac{\sqrt{2}}{2}i] = 2\sqrt{2} - 2i\sqrt{2}\] Summary In this section, we studied the following important concepts and ideas: DeMoivre's Theorem Let \(z = r(\cos(\theta) + i\sin(\theta))\) be a complex number and \(n\) any integer. Then \[z^{n} = (r^{n})(\cos(n\theta) +i\sin(n\theta)) \nonumber \] Roots of Complex Numbers Let \(n\) be a positive integer. The \(n\)th roots of the complex number \(r[\cos(\theta) + i\sin(\theta)]\) are given by \[\sqrt[n]{r} \left[\cos \left(\dfrac{\theta + 2\pi k}{n}\right) + i\sin \left(\dfrac{\theta + 2\pi k}{n}\right) \right] \nonumber \] for \(k = 0, 1, 2, ..., (n - 1)\).
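The root computations in Example 3 and Exercise 3 can be double-checked by raising each claimed root to the fourth power. A short Python sketch:

```python
import math

# Example 3: the fourth roots of -8 + 8*sqrt(3)*i should be
# sqrt(3)+i, -1+sqrt(3)i, -sqrt(3)-i, and 1-sqrt(3)i.
w = complex(-8, 8 * math.sqrt(3))
assert abs(abs(w) - 16) < 1e-9  # modulus 16, as in the trigonometric form
for x in [complex(math.sqrt(3), 1), complex(-1, math.sqrt(3)),
          complex(-math.sqrt(3), -1), complex(1, -math.sqrt(3))]:
    assert abs(x**4 - w) < 1e-9

# Exercise 3: the fourth roots of -256 are ±2*sqrt(2) ± 2*sqrt(2)*i.
s = 2 * math.sqrt(2)
for x in [complex(s, s), complex(-s, s), complex(-s, -s), complex(s, -s)]:
    assert abs(x**4 - (-256)) < 1e-9
```

Each assertion verifies \(x^{4}\) agrees with the right-hand side up to floating-point error.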
The Closure of a Set Equals the Union of the Set and its Accumulation Points Recall from The Closure of a Set in a Topological Space page that if $(X, \tau)$ is a topological space and $A \subseteq X$ then the closure of $A$ is the smallest closed subset containing $A$, denoted $\bar{A}$. We will now look at a very nice theorem which relates the set $A$, the set of accumulation points $A'$, and the closure $\bar{A}$. Theorem: Let $(X, \tau)$ be a topological space and $A \subseteq X$. Then $\bar{A} = A \cup A'$. Proof: Let $x \in \bar{A}$. If $x \in A$ then $x \in A \cup A'$ immediately, so suppose $x \in \bar{A} \setminus A$. We claim that $x \in A'$. Suppose instead that there were an open set $U \in \tau$ with $x \in U$ and $A \cap U \setminus \{ x \} = \emptyset$. Since $x \not \in A$, this means $A \cap U = \emptyset$, so $A \subseteq X \setminus U$. But $X \setminus U$ is closed and contains $A$, so $\bar{A} \subseteq X \setminus U$, contradicting the fact that $x \in \bar{A}$ and $x \in U$. Hence for all $U \in \tau$ with $x \in U$ we have that $A \cap U \setminus \{ x \} \neq \emptyset$, so $x \in A'$ and $x \in A \cup A'$. So, $\bar{A} \subseteq A \cup A'$. (Alternatively, since a set is closed if and only if it contains all of its accumulation points, one can check that $A \cup A'$ contains all of its accumulation points and is therefore closed. Since $\bar{A}$ is the smallest closed set containing $A$ we have that $\bar{A} \subseteq A \cup A'$.) Now suppose that $A \cup A' \not \subseteq \bar{A}$. Then there exists an $x \in A \cup A'$ such that $x \not \in \bar{A}$. If $x \in A$ then since $A \subseteq \bar{A}$ we have that $x \in \bar{A}$ which is a contradiction. Hence $x \in A' \setminus A$. So then $x$ is an accumulation point of $A$, so for all $U \in \tau$ with $x \in U$ we have that $A \cap U \setminus \{ x \} \neq \emptyset$. Since $A \subseteq \bar{A}$ we have that then for all $U \in \tau$ with $x \in U$ that $\bar{A} \cap U \setminus \{ x \} \neq \emptyset$.
Therefore $x$ is an accumulation point of $\bar{A}$. But $\bar{A}$ is closed and by definition contains all of its accumulation points, so $x \in \bar{A}$, a contradiction. Hence the assumption that $A \cup A' \not \subseteq \bar{A}$ was false, and so $A \cup A' \subseteq \bar{A}$. We conclude that $\bar{A} = A \cup A'$. $\blacksquare$
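For intuition, the theorem can be checked exhaustively on a small finite topological space. The space below ($X = \{1, 2, 3\}$ with $\tau = \{\emptyset, \{1\}, \{1,2\}, X\}$) is our own toy example, not from the page:

```python
from itertools import combinations

# Toy topological space: X = {1, 2, 3}, tau = {∅, {1}, {1,2}, X}.
X = frozenset({1, 2, 3})
tau = [frozenset(), frozenset({1}), frozenset({1, 2}), X]
closed_sets = [X - U for U in tau]

def closure(A):
    """Smallest closed set containing A: intersect all closed supersets."""
    result = X
    for C in closed_sets:
        if A <= C:
            result = result & C
    return result

def accumulation_points(A):
    """Points x with (A ∩ U) \\ {x} nonempty for every open U containing x."""
    return frozenset(x for x in X
                     if all((A & U) - {x} for U in tau if x in U))

# Verify the theorem closure(A) == A ∪ A' on every subset of X.
subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]
for A in subsets:
    assert closure(A) == A | accumulation_points(A)
```

For instance, with $A = \{2\}$ the closed supersets are $\{2,3\}$ and $X$, so $\bar{A} = \{2,3\}$, and $3$ is the only accumulation point (the only open set containing $3$ is $X$), matching the theorem.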
The Dimension of a Sum of Subspaces Examples 1 Recall from The Dimension of a Sum of Subspaces page that if $V$ is a finite-dimensional vector space and if $U_1$ and $U_2$ are subspaces of $V$ then: $$\mathrm{dim} (U_1 + U_2) = \mathrm{dim} (U_1) + \mathrm{dim} (U_2) - \mathrm{dim} (U_1 \cap U_2) \quad (1)$$ We will now look at some example problems regarding this important formula for finite-dimensional vector spaces. Example 1 Consider the vector space $\mathbb{R}^7$, and suppose that $U$ and $W$ are subspaces of $\mathbb{R}^7$ such that $\mathrm{dim} (U) = 3$ and $\mathrm{dim} (W) = 4$ and that $U + W = \mathbb{R}^7$. Show that $\mathbb{R}^7 = U \oplus W$. We are already given that $\mathbb{R}^7 = U + W$, so to show that $\mathbb{R}^7 = U \oplus W$ we only need to prove that $U \cap W = \{ 0 \}$. Note that $\mathrm{dim} (U + W) = \mathrm{dim} (\mathbb{R}^7) = 7$, $\mathrm{dim} (U) = 3$ and $\mathrm{dim}(W) = 4$. Using the formula above, we see that: $$\mathrm{dim} (U \cap W) = \mathrm{dim} (U) + \mathrm{dim} (W) - \mathrm{dim} (U + W) = 3 + 4 - 7 = 0 \quad (2)$$ So $\mathrm{dim} (U \cap W) = 0$ which implies that $U \cap W = \{ 0 \}$. Thus $\mathbb{R}^7 = U \oplus W$. Example 2 Consider the vector space $\mathbb{R}^{11}$, and suppose that $U$ and $W$ are subspaces of $\mathbb{R}^{11}$ such that $\mathrm{dim} (U) = 7$ and $\mathrm{dim} (W) = 8$. Show that $U \cap W \neq \{ 0 \}$. We note that since $U$ and $W$ are both subspaces of $\mathbb{R}^{11}$ then the sum $U + W$ is also a subspace of $\mathbb{R}^{11}$ and so $\mathrm{dim} (U + W) \leq 11$. Using the dimension formula from above, we see that: $$\mathrm{dim} (U \cap W) = \mathrm{dim} (U) + \mathrm{dim} (W) - \mathrm{dim} (U + W) \geq 7 + 8 - 11 = 4 \quad (3)$$ Since $\mathrm{dim} (U \cap W) \geq 4$ we see that it is not possible for $\mathrm{dim} (U \cap W) = 0$ so $U \cap W \neq \{ 0 \}$.
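Both examples can be verified computationally with explicit coordinate subspaces. A Python sketch (the bases of standard basis vectors chosen below are our own illustrative ones, and `U + W` in the code is list concatenation, i.e. stacking the two spanning sets):

```python
from fractions import Fraction

def rank(rows):
    """Row rank by Gaussian elimination over the rationals (exact)."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def e(i, n):
    """i-th standard basis vector of R^n."""
    v = [0] * n
    v[i] = 1
    return v

# Example 1: U = span(e1..e3), W = span(e4..e7) in R^7.
# dim(U ∩ W) = dim U + dim W - dim(U + W) = 3 + 4 - 7 = 0.
U = [e(i, 7) for i in range(3)]
W = [e(i, 7) for i in range(3, 7)]
assert rank(U) + rank(W) - rank(U + W) == 0

# Example 2: dims 7 and 8 inside R^11 force dim(U ∩ W) >= 7 + 8 - 11 = 4;
# here the intersection is span(e4..e7), of dimension exactly 4.
U2 = [e(i, 11) for i in range(7)]
W2 = [e(i, 11) for i in range(3, 11)]
assert rank(U2) + rank(W2) - rank(U2 + W2) == 4
```

Using `Fraction` keeps the elimination exact, so the ranks (and hence the dimensions) carry no floating-point ambiguity.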
Multiple positive solutions of a Sturm-Liouville boundary value problem with conflicting nonlinearities Guglielmo Feltrin, SISSA - International School for Advanced Studies, via Bonomea 265, 34136 Trieste, Italy $$u'' + \sum_{i = 1}^{m} \alpha_i\, a_i(x)\, g_i(u) - \sum_{j = 1}^{m+1} \beta_j\, b_j(x)\, k_j(u) = 0$$ Keywords: superlinear indefinite problems, positive solutions, Sturm-Liouville boundary conditions, multiplicity results, Leray-Schauder topological degree. Mathematics Subject Classification: 34B15, 34B18, 47H11. Citation: Guglielmo Feltrin. Multiple positive solutions of a Sturm-Liouville boundary value problem with conflicting nonlinearities. Communications on Pure & Applied Analysis, 2017, 16 (3): 1083-1102. doi: 10.3934/cpaa.2017052
Hints will display for most wrong answers; explanations for most right answers. You can attempt a question multiple times; it will only be scored correct if you get it right the first time. To see ten new questions, reload the page. I used the official objectives and sample test to construct these questions, but cannot promise that they accurately reflect what’s on the real test. Some of the sample questions were more convoluted than I could bear to write. See terms of use. See the MTEL Practice Test main page to view questions on a particular topic or to download paper practice tests. MTEL General Curriculum Mathematics Practice Question 1 Which of the following is closest to the height of a college student in centimeters? 1.6 cm Hint: This is more the height of a Lego toy college student -- less than an inch! 16 cm Hint: Less than knee high on most college students. 160 cm Hint: Remember, a meter stick (a little bigger than a yard stick) is 100 cm. Also good to know is that 4 inches is approximately 10 cm. 1600 cm Hint: This college student might be taller than some campus buildings! Question 2 Here is a student's work on several multiplication problems: 58 x 22 Hint: This problem involves regrouping, which the student does not do correctly. 16 x 24 Hint: This problem involves regrouping, which the student does not do correctly. 31 x 23 Hint: There is no regrouping with this problem. 141 x 32 Hint: This problem involves regrouping, which the student does not do correctly. Question 3 An above-ground swimming pool is in the shape of a regular hexagonal prism, is one meter high, and holds 65 cubic meters of water. A second pool has a base that is also a regular hexagon, but with sides twice as long as the sides in the first pool. This second pool is also one meter high. How much water will the second pool hold? \( \large 65\text{ }{{\text{m}}^{3}}\) Hint: A bigger pool would hold more water. 
\( \large 65\cdot 2\text{ }{{\text{m}}^{3}}\) Hint: Try a simpler example, say doubling the sides of the base of a 1 x 1 x 1 cube. \( \large 65\cdot 4\text{ }{{\text{m}}^{3}}\) Hint: If we think of the pool as filled with 1 x 1 x 1 cubes (and some fractions of cubes), then scaling to the larger pool changes each 1 x 1 x 1 cube to a 2 x 2 x 1 prism, or multiplies volume by 4. \( \large 65\cdot 8\text{ }{{\text{m}}^{3}}\) Hint: Try a simpler example, say doubling the sides of the base of a 1 x 1 x 1 cube. Question 4 Exactly one of the numbers below is a prime number. Which one is it? \( \large511 \) Hint: Divisible by 7. \( \large517\) Hint: Divisible by 11. \( \large519\) Hint: Divisible by 3. \( \large521\) Question 5 In which table below is y a function of x? Hint: If x=3, y can have two different values, so it's not a function. Hint: If x=3, y can have two different values, so it's not a function. Hint: If x=1, y can have different values, so it's not a function. Hint: Each value of x always corresponds to the same value of y. Question 6 Use the four figures below to answer the question that follows: How many of the figures pictured above have at least one line of reflective symmetry? \( \large 1\) \( \large 2\) Hint: The ellipse has 2 lines of reflective symmetry (horizontal and vertical, through the center) and the triangle has 3. The other two figures have rotational symmetry, but not reflective symmetry. \( \large 3\) \( \large 4\) Hint: All four have rotational symmetry, but not reflective symmetry. Question 7 I. \(\large \dfrac{1}{2}+\dfrac{1}{3}\) II. \( \large .400000\) III. \(\large\dfrac{1}{5}+\dfrac{1}{5}\) IV. \( \large 40\% \) V. \( \large 0.25 \) VI. \(\large\dfrac{14}{35}\) Which of the lists below includes all of the above expressions that are equivalent to \( \dfrac{2}{5}\)? I, III, V, VI Hint: I and V are not at all how fractions and decimals work. III, VI Hint: These are right, but there are more. 
II, III, VI Hint: These are right, but there are more. II, III, IV, VI Question 8 At a school fundraising event, people can buy a ticket to spin a spinner like the one below. The region that the spinner lands in tells which, if any, prize the person wins. If 240 people buy tickets to spin the spinner, what is the best estimate of the number of keychains that will be given away? 40 Hint: "Keychain" appears on the spinner twice. 80 Hint: The probability of getting a keychain is 1/3, and so about 1/3 of the spins will land on a keychain. 100 Hint: What is the probability of winning a keychain? 120 Hint: That would be the answer for getting any prize, not a keychain specifically. Question 9 There are 15 students for every teacher. Let t represent the number of teachers and let s represent the number of students. Which of the following equations is correct? \( \large t=s+15\) Hint: When there are 2 teachers, how many students should there be? Do those values satisfy this equation? \( \large s=t+15\) Hint: When there are 2 teachers, how many students should there be? Do those values satisfy this equation? \( \large t=15s\) Hint: This is a really easy mistake to make, which comes from transcribing directly from English, "1 teacher equals 15 students." To see that it's wrong, plug in s=2; do you really need 30 teachers for 2 students? To avoid this mistake, insert the word "number": "Number of teachers equals 15 times number of students" is more clearly problematic. \( \large s=15t\) Question 10 Kendra is trying to decide which fraction is greater, \( \dfrac{4}{7}\) or \( \dfrac{5}{8}\). Which of the following answers shows the best reasoning? \( \dfrac{4}{7}\) is \( \dfrac{3}{7}\) away from 1, and \( \dfrac{5}{8}\) is \( \dfrac{3}{8}\) away from 1. Since eighths are smaller than sevenths, \( \dfrac{5}{8}\) is closer to 1, and is the greater of the two fractions. \( 7-4=3\) and \( 8-5=3\), so the fractions are equal. Hint: Not how to compare fractions.
By this logic, 1/2 and 3/4 are equal, but 1/2 and 2/4 are not. \( 4\times 8=32\) and \( 7\times 5=35\). Since \( 32<35\) , \( \dfrac{5}{8}<\dfrac{4}{7}\) Hint: Starts out as something that works, but the conclusion is wrong. 4/7 = 32/56 and 5/8 = 35/56. The cross multiplication gives the numerators, and 35/56 is bigger. \( 4<5\) and \( 7<8\), so \( \dfrac{4}{7}<\dfrac{5}{8}\) Hint: Conclusion is correct, logic is wrong. With this reasoning, 1/2 would be less than 2/100,000. If you found a mistake or have comments on a particular question, please contact me (please copy and paste at least part of the question into the form, as the numbers change depending on how quizzes are displayed). General comments can be left here.
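The two valid lines of reasoning in the last question (cross-multiplication and distance from 1) can be checked concretely with Python's `fractions.Fraction`:

```python
from fractions import Fraction

# Cross-multiplication: for positive denominators, a/b < c/d iff a*d < c*b,
# since a/b = (a*d)/(b*d) and c/d = (c*b)/(b*d) share the denominator b*d.
a, b, c, d = 4, 7, 5, 8
assert (a * d < c * b) == (Fraction(a, b) < Fraction(c, d))
assert Fraction(4, 7) < Fraction(5, 8)  # 32/56 < 35/56

# The "distance from 1" argument: 4/7 is 3/7 below 1 and 5/8 is 3/8 below 1;
# since 3/8 < 3/7, 5/8 is closer to 1 and hence larger.
assert Fraction(1, 1) - Fraction(4, 7) == Fraction(3, 7)
assert Fraction(1, 1) - Fraction(5, 8) == Fraction(3, 8)
assert Fraction(3, 8) < Fraction(3, 7)
```

Note the cross-multiplication shortcut pairs each numerator with the *other* fraction's denominator; comparing `a*d` with `c*b` in the wrong order is exactly the error flagged in the hint.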
Abstract: Bordered Floer homology is an invariant for three-manifolds with boundary, defined in collaboration with Robert Lipshitz and Dylan Thurston. The invariant associates a DG algebra to a parameterized surface, and a module over that algebra to a three-manifold with boundary. I will explain how methods from bordered Floer homology can be used to give a tidy description of knot Floer homology. This is joint work with Zoltan Szabo. Manchester building, Hebrew University of Jerusalem, (Room 209) Title: Arithmetic of Double Torus Quotients and the Distribution of Periodic Torus Orbits. Abstract: In this talk I will describe some new arithmetic invariants for pairs of torus orbits on inner forms of PGLn and SLn. These invariants allow us to significantly strengthen results towards the equidistribution of packets of periodic torus orbits on higher rank S-arithmetic quotients. An important aspect of our method is that it applies to packets of periodic orbits of maximal tori which are only partially split. A signature of chaotic behavior in dynamical systems is sensitive dependence on initial conditions, and Lyapunov exponents measure the rates at which nearby orbits diverge. One might expect that geometric expansion or stretching in a map would lead to positive Lyapunov exponents. This, however, is very difficult to prove - except for maps with invariant cones (or a priori separation of expanding and contracting directions). Consider a sequence of random walks on $\mathbb{Z}/p\mathbb{Z}$ with symmetric generating sets $A = A(p)$. I will describe known and new results regarding the mixing time and cut-off. For instance, if the sequence $|A(p)|$ is bounded then the cut-off phenomenon does not occur, and more precisely I give a lower bound on the size of the cut-off window in terms of $|A(p)|$. A natural conjecture for random walk on a graph is that the total variation mixing time is bounded by the maximum degree times the diameter squared.
Title: Equidistribution of expanding translates of curves in homogeneous spaces and Diophantine approximation. Abstract: We consider an analytic curve $\varphi: I \rightarrow \mathbb{M}(n\times m, \mathbb{R}) \hookrightarrow \mathrm{SL}(n+m, \mathbb{R})$ and embed it into some homogeneous space $G/\Gamma$, and translate it via some diagonal flow Manchester building, Hebrew University of Jerusalem, (Room 209) Title: Indistinguishability of trees in uniform spanning forests. Abstract: The uniform spanning forest (USF) of an infinite connected graph G is the weak limit of the uniform spanning tree measure taken on exhausting finite subgraphs of G. It is easy to see that it is supported on spanning subgraphs of G with no cycles, but it need not be connected. Indeed, a classical result of Pemantle ('91) asserts that when $G = \mathbb{Z}^d$, the USF is almost surely a connected tree if and only if $d = 1, 2, 3, 4$. Manchester building, Hebrew University of Jerusalem, (Room 209) Title: Positive entropy actions of countable groups factor onto Bernoulli shifts. Abstract: I will prove that if a free ergodic action of a countable group has positive Rokhlin entropy (or, less generally, positive sofic entropy) then it factors onto all Bernoulli shifts of lesser or equal entropy. This extends to all countable groups the well-known Sinai factor theorem from classical entropy theory. As an application, I will show that for a large class of non-amenable groups, every positive entropy free ergodic action satisfies the measurable von Neumann conjecture. We consider (complex) Gaussian analytic functions on a horizontal strip, whose distribution is invariant with respect to horizontal shifts (i.e., "stationary"). Let N(T) be the number of zeroes in [0,T] x [a,b]. We present an extension of a result by Wiener, concerning the existence and characterization of the limit N(T)/T as T approaches infinity, as well as characterize the growth of the variance of N(T).
For a given deterministic measure we construct a random measure on the Brownian path whose expectation is the given measure. For the construction we introduce the concept of weak convergence of random measures in probability. The machinery can be extended to more general sets than the Brownian path. Equilibrium states of affine iterated function systems Motivated by the long-standing problem of finding sharp lower estimates for the Hausdorff dimension of self-affine sets, I will describe some recent results on the equilibrium states of the singular value function. These equilibrium states arise as candidates for the measures of maximal Hausdorff dimension on self-affine sets. In particular I will discuss a sufficient condition for uniqueness of the equilibrium state (from joint work with Antti Käenmäki) and an unconditional bound for the number of ergodic equilibrium states (from joint work with Jairo Bochi). In his influential disjointness paper, H. Furstenberg proved that weakly-mixing systems are disjoint from irrational rotations (and in general, Kronecker systems), a result that inspired much of the modern research in dynamics. Recently, A. Venkatesh managed to prove a quantitative version of this disjointness theorem for the case of the horocyclic flow on a compact Riemann surface. I will discuss Venkatesh's disjointness result and present a generalization of this result to more general actions of nilpotent groups, utilizing structural results about nilflows proven by Green-Tao-Ziegler.
655 results for "part". Given two complex numbers, find by inspection the one that is a root of a given quartic real polynomial and hence find the other roots. Question Needs to be tested CC BY Published Last modified 10/10/2019 13:43 No subjects selected No topics selected No ability levels selected The student is asked to factorise a quadratic $x^2 + ax + b$. A custom marking script uses pattern matching to ensure that the student's answer is of the form $(x+a)(x+b)$, $(x+a)^2$, or $x(x+a)$. To find the script, look in the Scripts tab of part a. Question Draft CC BY Published Last modified 03/10/2019 16:00 No subjects selected No topics selected No ability levels selected Three graphs are given with areas underneath them shaded. The student is asked to calculate their areas, using integration. Q1 has a polynomial. Q2 has exponentials and fractional functions. Q3 requires solving a trig equation and integration by parts. Question Draft CC BY Published Last modified 26/09/2019 09:26 No subjects selected No topics selected No ability levels selected A mathematical expression part whose answer is the product of two matrices, $X \times Y$. By setting the "variable value generator" option for $X$ and $Y$ to produce random matrices, we can ensure that the order of the factors in the student's answer matters: $X \times Y \neq Y \times X$. Question Draft CC BY Published Last modified 24/09/2019 15:27 No subjects selected No topics selected No ability levels selected Factorise $x^2+cx+d$ into 2 distinct linear factors and then find $\displaystyle \int \frac{ax+b}{x^2+cx+d}\;dx,\;a \neq 0$ using partial fractions or otherwise. Question Draft CC BY Published Last modified 19/09/2019 11:55 No subjects selected No topics selected No ability levels selected
I'm working on a project, and I have to use the cumulative and conditional expected value of the variations of a stock following a Geometric Brownian Motion. I know that the cumulative probability is as follows: $$ \mathbb{E}\left[ \mathbb{1}_{ \frac{S_{i+1}}{S_{i}} < z}\right] = \mathbb{P} \left[ \frac{S_{i+1}}{S_{i}} < z \right] = \Phi\left(\frac{\log(z) - (r- \frac{\sigma^2}{2})(t_{i+1}-t_i)}{\sigma \sqrt{t_{i+1}-t_i}}\right) $$ where $\Phi$ is the standard normal cumulative distribution function. But I couldn't find the expression of the conditional (truncated) expected value: $$ \mathbb{E}\left[\frac{S_{i+1}}{S_i} \mathbb{1}_{\frac{S_{i+1}}{S_i}<z}\right] $$
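One way to sanity-check the quoted cumulative formula, and to test any candidate closed form for the truncated expectation, is Monte Carlo simulation of the lognormal ratio. A hedged Python sketch with arbitrary illustrative parameters; the candidate used in the last lines is the standard lognormal partial-expectation identity $\mathbb{E}[X\,\mathbb{1}_{X<z}] = e^{r\Delta t}\,\Phi\big((\log z - (r+\sigma^2/2)\Delta t)/(\sigma\sqrt{\Delta t})\big)$, stated here for verification rather than taken from the question:

```python
import math
import random

random.seed(1)

# Illustrative parameters (not from the question)
r, sigma, dt, z = 0.05, 0.2, 1.0, 1.1
mu = (r - sigma**2 / 2) * dt   # mean of log(S_{i+1}/S_i)
sd = sigma * math.sqrt(dt)     # std dev of log(S_{i+1}/S_i)

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Closed form quoted in the question
p_exact = Phi((math.log(z) - mu) / sd)

# Monte Carlo over the lognormal ratio X = S_{i+1}/S_i
n = 200_000
ratios = [math.exp(mu + sd * random.gauss(0, 1)) for _ in range(n)]
p_mc = sum(x < z for x in ratios) / n
assert abs(p_mc - p_exact) < 0.01

# Candidate for E[X 1_{X<z}]: the lognormal partial expectation
m_exact = math.exp(r * dt) * Phi((math.log(z) - (r + sigma**2 / 2) * dt) / sd)
m_mc = sum(x for x in ratios if x < z) / n
assert abs(m_mc - m_exact) < 0.01
```

The same simulation loop works for any other candidate expression: replace `m_exact` and compare against `m_mc`.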
J. D. Hamkins, “book review of G.~Tourlakis, Lectures in Logic and Set Theory, vols.~I & II,” Bulletin of Symbolic Logic, vol. 11, iss. 2, p. 241, 2005. @ARTICLE{Hamkins2005:TourlakisBookReview, AUTHOR = "Joel David Hamkins", TITLE = "book review of {G.~Tourlakis}, {Lectures in Logic and Set Theory}, vols.~{I \& II}", JOURNAL = "Bulletin of Symbolic Logic", YEAR = "2005", volume = "11", number = "2", pages = "241", month = "June", note = "", abstract = "", keywords = "book-review", source = "", url = "http://jdh.hamkins.org/tourlakisbookreview/", file = F, } Review of George Tourlakis, Lectures in Logic and Set Theory, volumes 1 and 2, Cambridge studies in advanced mathematics, vol. 83. Cambridge University Press, Cambridge, UK, 2003. This is a detailed two-volume development of mathematical logic and set theory, written from a formalist point of view, aimed at a spectrum of students from the third-year undergraduate to junior graduate level. Volume 1 presents the heart of mathematical logic, including the Completeness and Incompleteness theorems along with a bit of computability theory and accompanying ideas. Tourlakis aspires to include “the absolutely essential topics in proof, model and recursion theory” (vol. 1, p. ix). In addition, for the final third of the volume, Tourlakis provides a proof of the Second Incompleteness Theorem “right from Peano’s axioms,…gory details and all,” which he conjectures “is the only complete proof in print [from just Peano arithmetic] other than the one that was given in Hilbert and Bernays (1968)” (vol. 1, p. x). In the opening page of Chapter II, Tourlakis provides a lucid explanation of the proof in plain language, before diving into the details and emerging a hundred pages later with the provability predicate, the derivability conditions and a complete proof. 
Tempering his formalist tendencies, Tourlakis speaks “the formal language with a heavy ‘accent’ and using many ‘idioms’ borrowed from ‘real’ (meta)mathematics and English,” in a mathematical argot (vol. 1, p. 39). In his theorems and proofs, therefore, he stays close to the formal language without remaining inside it. But let me focus on volume 2, a stand-alone development of axiomatic set theory, containing within it a condensed version of volume 1. The book emphasizes the formal foundations of set theory and, like the first volume, gives considerable attention to the details of the elementary theory. Tourlakis is admirably meticulous in maintaining the theory/metatheory distinction, with a careful explanation of the role of inductive arguments and constructions in the metatheory (vol. 2, p. 20) and a correspondingly precise treatment of axioms, theorems and their respective schemes throughout. What is more, he sprinkles the text with philosophical explanations of the theory/metatheory interaction, giving a clear account, for example, of how it is that we may use apparently set theoretic arguments in the metatheory without circularity (vol. 1, p. 10-12). After developing the logical background, he paints the motivating picture of the cumulative hierarchy, the process by which we imagine sets to be built, with Russell’s paradox as a cautionary tale. In Chapter III, the axioms of set theory march forward in succession. He presents them gradually, motivating them from the cumulative hierarchy and deriving consequences as they appear. This treatment includes the Axiom of Choice, which he motivates, impressively, by developing Gödel’s constructible universe $L$ sufficiently to see that the Axiom of Choice holds there. Later, he revisits the constructible universe more formally, and by the end of the book his formal set theoretic development encompasses even the sophisticated topic of forcing.
The book culminates in Cohen’s relative consistency proof, via forcing, of the failure of the Continuum Hypothesis. Interestingly, Tourlakis’ version of ZFC set theory, like Zermelo’s, allows for (without insisting on) the existence of urelements, atomic objects that are not sets, but which can be elements of sets. His reason for this is philosophical and pedagogical: he finds “it extremely counterintuitive, especially when addressing undergraduate audiences, to tell them that all their familiar mathematical objects — the ‘stuff of mathematics’ in Barwise’s words — are just perverse ‘box-in-a-box-in-a-box…’ formulations built from an infinite supply of empty boxes” (vol. 2, p. xiii). The enrichment of the theory to allow urelements requires only minor modifications of the usual ZFC axioms, such as the restriction of Extensionality to the sets and not the urelements. The application of the definition $a\subseteq b\iff\forall z(z\in a\implies z\in b)$ even when $a$ or $b$ are urelements, however, causes some peculiarities, such as the consequence that urelements are subsets of every object, including each other. Consequently, the axiom asserting that the urelements form a set (Axiom III.3.1), is actually deducible via the Comprehension Axiom from Tourlakis’ version of the Power Set Axiom, which asserts that for every object $a$ there is a set $b$ such that $\forall x(x\subseteq a\implies x\in b)$, since any such $b$ must contain all urelements. At times, the author employs what some might take as an exaggerated formal style. For example, after introducing the Pairing Axiom, stating that for any $a$ and $b$ there is $c$ with $a\in c$ and $b\in c$, he considers Proposition III.5.3, the trivial consequence that $\{a,b\}$ is a set. His first proof of this is set out in eleven numbered steps, with duly noted uses of the Leibniz axiom and modus ponens.
To be sure, he later adopts what he calls a “relaxed” proof style, but even so, in the “Informal” Example III.9.2, he fills a page with tight reasoning and explicit appeals to the deduction theorem, the principle of auxiliary constants and more, to show merely that if $x$ is a set and $x\subseteq\{\emptyset\}$, then $x=\emptyset$ or $x=\{\emptyset\}$. Similar examples of formality can be found on pages 118, 120, 183-184 and elsewhere in volume 2, as well as volume 1. The preface of volume 2 explains that the book weaves a middle path between those set theory books that merely build set-theoretic tools for use elsewhere and those that aim at research in set theory. But I question this assessment. Many of the topics constituting what I take to be the beginnings of the subject appear only very late in the book. For example, the von Neumann ordinals appear first on page 331; Cantor’s theorem on the uncountability of $P(\omega)$ occurs on page 455; the Cantor-Bernstein theorem appears on page 463; the definitions of cardinal successor and $\aleph_\alpha$ wait until page 465; and the definition of cofinality does not appear until page 478, with regular and singular cardinals on page 479. Perhaps it was the elaborate formal development of the early theory that has pushed this basic part of set theory to the end of the book. This may not be a problem, but I worry that students may wrongly understand these topics to constitute “advanced” set theory, when surely the opposite is true. Furthermore, many other elementary topics, which one might expect to find in a set theory text aimed in part at graduate students, do not appear in the text at all. This includes closed unbounded sets, stationary sets, $\omega_1$-trees (such as Souslin trees or Kurepa trees), Borel sets, regressive functions, Martin’s axiom, the diamond principle and even ultrafilters. Large cardinals are not mentioned beyond the inaccessible cardinals. 
The omission of ultrafilters is particularly puzzling, given the author’s claim to have included “all the fundamental tools of set theory as needed elsewhere in the mathematical sciences” (vol. 2, p. xii). Certainly ultrapowers are one of the most powerful and successful such tools, whose fundamental properties remain deeply connected with logic. In the final chapter, the author provides a formal account of the foundations of forcing, with useful explanations again of the important theory/metatheory interaction arising in connection with it. Because his account of forcing is based on countable transitive models, some set theorists may find it old-fashioned. This way of forcing tends to push much of the technique into the metatheory, which Tourlakis adopts explicitly (vol. 2, p. 519), and can sometimes limit forcing to its role in independence results. A more contemporary view of forcing makes sense within ZFC of forcing over $V$, for example via the Boolean-valued models $V^{\mathbb B}$, and allows one sensibly to discuss the possibilities achievable by forcing over any given model of set theory. Despite my reservations, I welcome Tourlakis’ addition to the body of logic texts. Readers with a formalist bent especially will gain from it.
Antonella Altamura and Marco Bee spotted that the language of the discussion on the tail index for ARCH type data was not correct. It said that \begin{equation*} \Gamma(\iota/2+1/2)=\sqrt{\pi}(2\alpha)^{-\iota/2} \end{equation*} was the unconditional distribution of $Y$, which of course does not make sense. Instead it should say that the value of the tail index $\iota$ can be found by solving for $\iota$ in the equation. Stefano Soccorsi pointed out that one of the kurtosis equations on page 37 is wrong. The rest of it is correct, as is the final kurtosis value. The typo was in the second equation below: \begin{align*} E(Y^4) &= 3 E\left(\left(\omega+\alpha Y_{t-1}^2\right)^2\right) \\ &= 3 \omega^2+ 6\alpha \omega E(Y^2)+3 \alpha^2 E(Y^4)\\ &= 3 \omega^2+ 6\alpha \omega \frac{\omega}{1-\alpha}+3 \alpha^2 E(Y^4) \end{align*} Stefano Soccorsi pointed out that the sequential moments equation on page 19 is wrong. It should be: \begin{equation*} \frac{1}{t}\sum\limits_{i=1}^{t}x_i^m. \end{equation*} My FM320 students Hongshen Chen, Yida Li and Yanfei Zhou pointed out that the discussion in Section 8.3.2 could be clearer, so I repeat the relevant parts of the section here with more clarification. We need to calculate the probability of two consecutive violations, $p_{11}$, as well as the probability of a violation if there was no violation on the previous day, i.e. $p_{01}$. More generally, where $i$ and $j$ are either 0 or 1: \begin{equation*} p_{ij}=\Pr \left( \eta_{t}=j|\eta_{t-1}=i\right). \end{equation*} The violation process can be represented as a Markov chain with two states, so the first order transition probability matrix is defined as: \begin{equation*} \Pi_1=\left( \begin{array}{cc} 1-p_{01} & p_{01} \\ 1-p_{11} & p_{11} \end{array} \right) . 
\end{equation*} The likelihood function is: \begin{equation} L_1(\Pi_1) =\left( 1-p_{01}\right) ^{v_{00}}p_{01}^{v_{01}}\left( 1-p_{11}\right) ^{v_{10}}p_{11}^{v_{11}} \tag{8.5}\label{eq:risk2:lik:bt:int} \end{equation} where $v_{ij}$ is the number of observations in which $j$ follows $i$. The maximum likelihood (ML) estimates are obtained by maximizing the likelihood function, which is simple since the parameters are ratios of the counts of the outcomes: \begin{gather*} \hat{\Pi}_{1}= \begin{pmatrix} \frac{v_{00}}{v_{00}+v_{01}} & \frac{v_{01}}{v_{00}+v_{01}} \\\\[-2mm] \frac{v_{10}}{v_{10}+v_{11}} & \frac{v_{11}}{v_{10}+v_{11}} \\ \end{pmatrix} . \end{gather*} Under the null hypothesis of no clustering, the probability of a violation tomorrow does not depend on whether there is a violation today, so $p_{01}=p_{11}=p$ and the transition matrix is simply: \begin{align*} \Pi_{2} & =\left( \begin{array}{cc} 1-p & p \\ 1-p & p \end{array} \right) \end{align*} and the ML estimate is: \[ \hat{p} =\frac{v_{01}+v_{11}}{v_{00}+v_{10}+v_{01}+v_{11}}, \] so \begin{align*} \hat{\Pi}_{2} & =\left( \begin{array}{cc} 1-\hat{p} & \hat{p} \\ 1-\hat{p} & \hat{p} \end{array} \right) \end{align*} The likelihood function then is \begin{equation} L_2(\Pi_2) =\left( 1-p\right) ^{v_{00}+v_{10}} p^{v_{01}+v_{11}} .\tag{8.6} \label{eq:risk2:lik:bt:int2} \end{equation} Note that in \eqref{eq:risk2:lik:bt:int2} we impose independence but do not in \eqref{eq:risk2:lik:bt:int}. Replace the $p_{ij}$ by the estimated values $\hat{p}_{ij}$. The LR test is then: \begin{equation*} LR=2\left( \log L_1\left( \hat{\Pi}_{1}\right) -\log L_2\left( \hat{\Pi}_{2}\right) \right) \overset{\rm asymptotic}{\sim}\chi _{\left( 1\right) }^{2}. \end{equation*} Example 4.4 could be clearer; it is not strictly wrong, but it is better to have the weights on the right side of the inequality as well, i.e. $$ VaR^{5\%}(0.5 X+ 0.5 Y) \approx 50 > 0.5 VaR^{5\%}(X) + 0.5 VaR^{5\%}(Y) = 0+0. $$ My FM320 student Emily Wong spotted a typo in line 3, Example 4.5. 
The VaRs in the equation are missing a minus, and should be $$ 0 > - VaR_1 > -VaR_0$$ My FM320 student and summer intern Yiying Zhong spotted a typo in Chapter 4, page 86. The ES equation at the bottom of the page says $$ ES = - [Q|Q \le -VaR(p)]$$ but is missing the expectation: $$ ES = - E[Q|Q \le -VaR(p)]$$ It had been pointed out that Listings 3.3 and 3.4 might be better if they had y[i-1] inside the loop. It is not wrong as it is, but this is better. My FM447 student Gevorg Saakyan pointed out a few typos. The tail index on page 1 uses the letter where it should use . Chapter 1, page 8: the date in the code comment does not correspond to the date in the code. Chapter 8.2, Backtesting the S&P 500, pages 147-148: the dates in the text do not correspond with the dates in the code; the code should use February 2, which means that the number of observations is not 4000. My FM320 student, Chetan Varsani, noted that it might be better to reverse the first line of the table, i.e. ARCH(1) and ARCH(4). The code in Listings 8.9 to 8.12 is correct, but it can have numerical problems when the sample size is large. It is better to do the code in logs. Here is Listing 8.9 with that alternative, and it would be straightforward to make the same adjustment to the other three. Gian Giacomo pointed out that the likelihoods in the independence discussion are mislabeled, so restricted is unrestricted, and vice versa. My FM320 student Richard Dunn spotted that below the equation the $\Delta P_2$ was incorrectly defined. It should be: $$\Delta P_2=P_2-P_1$$ My FM320 students Jocelyn Tete and Chi Li independently spotted incorrect references: the references to the four VaR models reported in Table 8.2/8.3 should have been to Tables 8.1 and 8.2 instead. My FM320 student Chi Li spotted yet another problem in the cursed Figure 8.1, where MW should be MA. My FM320 student Chi Li spotted that in Section 2.6.4, second paragraph, last sentence: "This is consistent with the residual analysis in Table 2.4." This should instead be Table 2.5. 
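The independence test of Section 8.3.2 above is straightforward to code. A minimal sketch (my own, not one of the book's listings; the function name and the `scipy` helpers are my choices) that computes the transition counts $v_{ij}$, the two log-likelihoods from (8.5) and (8.6), and the LR statistic:

```python
import numpy as np
from scipy.special import xlogy   # xlogy(x, y) = x*log(y), with 0*log(0) = 0
from scipy.stats import chi2

def independence_lr(eta):
    """LR test of independence for a 0/1 violation sequence, as in eqs. (8.5)-(8.6)."""
    eta = np.asarray(eta)
    # transition counts v[i, j]: number of times j follows i
    v = np.zeros((2, 2))
    for i, j in zip(eta[:-1], eta[1:]):
        v[i, j] += 1
    p01 = v[0, 1] / (v[0, 0] + v[0, 1])
    p11 = v[1, 1] / (v[1, 0] + v[1, 1])
    p = (v[0, 1] + v[1, 1]) / v.sum()
    logL1 = (xlogy(v[0, 0], 1 - p01) + xlogy(v[0, 1], p01)
             + xlogy(v[1, 0], 1 - p11) + xlogy(v[1, 1], p11))
    logL2 = xlogy(v[0, 0] + v[1, 0], 1 - p) + xlogy(v[0, 1] + v[1, 1], p)
    LR = 2 * (logL1 - logL2)
    return LR, chi2.sf(LR, df=1)
```

When $\hat{p}_{01}=\hat{p}_{11}$ the statistic is exactly zero; clustered violations push $\hat{p}_{11}$ above $\hat{p}_{01}$ and the statistic up.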
My FM320 student Richard Dunn spotted that the second equation under Section 5.3.4 is missing a minus in front. My FM320 student Richard Dunn spotted a typo on page 90: the 10th word on the 5th line should be 'variance' instead of 'variable'. My FM320 student Alexander Stampfer found a number of small typos. The last paragraph on page 44 says “The restricted log-likelihood minus the unrestricted log-likelihood”, which should be reversed. Page 149: there is a bracket missing in the Matlab code in the very last line of the page for EWMA. My FM320 student Akash Jhunjhunwala points out that on page 96, in the last line of the first paragraph, the 1% and 5% levels correspond to the 4th and 20th values respectively, not the 5th and 25th as suggested in the brackets. My FM320 student, Bide Liu, spotted that I should have $p_{ji}$ and not $p_{ij}$ in the penultimate equation on page 155. It should read $$p_{ji}= \Pr (\eta_t=i | \eta_{t-1} = j)$$ My FM320 student, Bide Liu, spotted that the LR test in (8.4) has the wrong order in the first line. It should be the unrestricted minus the restricted, i.e. something like $$ LR=2(\log L_U (\hat{p})- \log L_R (p)) $$ My FM320 student Amith Bhattacharyya spotted that on the penultimate line on page 42 standard deviation should replace variance. My class teacher Marcela Valenzuela and Ehsan Ramezanifar, University of Tehran, both spotted a typo in the middle of page 144, where the subscript T should be E. My FM320 student Daniel Payne pointed out that in Example 4.3, third line from the bottom, the subscript on the weight is wrong; it is right in the preceding line, so in both cases it should be: $$ (w_X \sigma_X + w_Y \sigma_Y)^2$$ My FM320 student Ken Starling pointed out that I refer to Table 1.2 on page 104, but cite different numbers, so instead of 0.021% and 1.1% for mean and vol, use 0.019% and 1.16%. 
My FM320 student Ken Starling pointed out that in the penultimate line on page 85 the left side of the interval should be ( since infinity is not included, so $$ (- \infty ,-VaR(p)] $$ My FM320 student Daniel Payne pointed out that the words restricted and unrestricted are reversed at the bottom of page 44. It should read: "The unrestricted log-likelihood minus the restricted log-likelihood" My FM320 student Dominic Clark pointed out that I could have been clearer on page 39 in the last text paragraph. It's not wrong, but a better way is: where $\sigma$ indicates volatility... My FM320 student Ken Starling pointed out a missing + in the 3rd equation from the bottom on page 38; it should be $$\sigma^2= E(\omega+\alpha Y_{t-1}^2 +\beta \sigma_{t-1}^2) =\omega+\alpha \sigma^2 +\beta \sigma^2. $$ My FM320 student Yong Bin Ng pointed out a typo in the equation at the top of page 37. It should be: $$ E(Y^4)=3E\left[(\omega+\alpha Y_{t-1}^2)^2\right]=3(\omega^2+2\alpha \omega \sigma^2 + \alpha^2 E(Y^4))$$ Also, in case you were wondering how to derive the equation: we use previous results on page 36, the independence of the $Y$'s and $Z$, and properties of the normal distribution, as follows. \begin{aligned} E(Y^4)&=E(Y_t^4)\\ &=E(\sigma_t^4 Z_t^4)\\ &=E(\sigma_t^4)E(Z_t^4)\\ &=E(\sigma_t^4)3(E(Z_t^2))^2\\ &=3E((\sigma_t^2)^2)\\ &=3E\left[(\omega+\alpha Y_{t-1}^2)^2\right]\\ &=3(\omega^2+2\alpha \omega \sigma^2 + \alpha^2 E(Y_{t-1}^4))\\ &=3\omega^2+6\alpha \omega \sigma^2 + 3\alpha^2 E(Y^4)\\ &=3\omega^2+6\alpha \omega \frac{\omega}{1-\alpha} + 3\alpha^2 E(Y^4) \end{aligned} then $$ E(Y^4)(1-3\alpha^2)(1-\alpha) =3\omega^2(1-\alpha)+6\alpha \omega^2 $$ and $$ E(Y^4)=\frac{3\omega^2(1+\alpha)}{(1-3\alpha^2)(1-\alpha)} $$ My FM320 student Han Wang pointed out that Table 1.5 is not right. It is supposed to have the two-tailed probability of outcomes, but the 1% number is 1 minus the one-tailed probability, and the rest are one-tailed. So, here are the correct numbers. 
Set the volatility to 1.16 as per Table 1.2, and get (each entry computed like the last one, e.g. 2*pnorm(-23, sd=1.16) in R):
1%: 0.3886496
2%: 0.08468295
3%: 0.009703866
5%: 1.630002e-05
15%: 3.007448e-38
23%: 1.721029e-87
Item 2 on page 133 has an incorrect second index for the y. It should be not . Philippe Mueller spotted a bug in the legend of Figure 8.1. It is an unwieldy figure, with almost too much going on, and hard to see in black and white. The color plot below is much clearer, and hopefully correct. Also, I labelled returns as volatility. Guess the pic was cursed. Finally, one could specify the probability, 1%. In any case, here is the correct version. I do thank Oliver Linton for spotting a typo in the equation of ES for the normal at the bottom of page 103 and the top of page 104. The setup and derivation are correct, but somehow the became . The correct equation (bottom of page 103) is $$\text{ES}=-\frac{\sigma \phi(-\text{VaR}(p))}{p}$$ and the corresponding equation at the top of page 104 is $$\text{ES}=-\varphi \frac{\sigma \phi(-\text{VaR}(p))}{p}$$
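Two of the corrected results above can be sanity-checked by simulation: the ARCH(1) fourth-moment formula implies kurtosis $3(1-\alpha^2)/(1-3\alpha^2)$, and the magnitude of the normal ES is $\sigma\phi(\Phi^{-1}(p))/p$ (the corrected equation, up to the book's sign convention). A minimal sketch; the parameter values and function names are my own, not from the book:

```python
import numpy as np
from scipy.stats import norm

def arch1_kurtosis(alpha):
    # From E(Y^4) = 3 omega^2 (1+alpha) / ((1-3 alpha^2)(1-alpha)) and
    # sigma^2 = omega / (1-alpha): kurtosis = 3 (1-alpha^2) / (1-3 alpha^2)
    return 3 * (1 - alpha**2) / (1 - 3 * alpha**2)

def es_normal(p, sigma):
    # |ES| = -E[Q | Q <= -VaR(p)] for Q ~ N(0, sigma^2), VaR(p) = -sigma*Phi^{-1}(p)
    return sigma * norm.pdf(norm.ppf(p)) / p

rng = np.random.default_rng(42)

# simulate an ARCH(1) path and compare the sample kurtosis with the formula
omega, alpha, T = 0.1, 0.2, 1_000_000
z = rng.standard_normal(T)
y = np.empty(T)
y[0] = np.sqrt(omega / (1 - alpha)) * z[0]   # start at the unconditional variance
for t in range(1, T):
    y[t] = np.sqrt(omega + alpha * y[t - 1]**2) * z[t]
sample_kurt = np.mean(y**4) / np.mean(y**2)**2

# Monte Carlo check of the normal ES magnitude
sigma, p = 1.16, 0.05
q = sigma * rng.standard_normal(1_000_000)
var_p = -sigma * norm.ppf(p)                 # VaR(p) as a positive number
es_mc = -q[q <= -var_p].mean()
```

For $\alpha=0.2$ the formula gives kurtosis $3\cdot0.96/0.88\approx3.27$, which the simulated path should roughly reproduce.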
No. Not at all. If a language has Godel numbering (and certainly the language of set theory with only $\in$ has that), then asserting that a theory in that language is consistent is a number theoretic statement. Namely, it's a statement about integers. Of course, we need to assume that the numbers encoding the axioms of the theory make a definable set of integers, but we can always assume that (or else we can add a predicate to the language of Peano arithmetic whose interpretation is this collection of codes). Now the statement of $\operatorname{Con}(T)$ is really just saying that there is no proof that $\exists x(x\neq x)$ from the axioms of $T$, or rather there is no code for a proof of the Godel number of $\exists x(x\neq x)$ (or some other form of false statement) from the Godel numbers of the axioms in $T$. All that we really need from our theory is to allow us to internalize first-order logic. This can be done with theories much, much weaker than $\sf ZFC$. And if you look at some set theory books, you might find there that set theory can be developed within a theory as weak as $\sf PRA$ (which is a weak fragment of Peano arithmetic). Of course in that sort of context we can't talk about models, we can't say that a theory is consistent if and only if it has a model. But consistency is in its essence syntactical and requires us to be able to talk about proofs, not about interpretations and satisfaction. So there's no harm there. Regarding the edit, let me point out that while both "completeness" and "incompleteness" have similar names, they talk about different types of completeness, and they are not quite related. The incompleteness theorems tell us that $\sf ZF$ cannot prove its own consistency. This is, essentially, a theorem about proofs and syntax. But as a consequence of this theorem we know that assuming that $\sf ZF$ is consistent at all, then the theory $\sf ZF+\lnot\operatorname{Con}(ZF)$ is consistent. 
The completeness theorem then tells us that this theory has a model $(M,E)$. And this model $M$ is such that there is no $(N,E')\in M$ such that $M$ "thinks" that $N$ is a model of $\sf ZF$. There are a lot of very delicate points here about internal and external properties of these models. And of course, in order to talk about the existence of a model we need to be able to talk about models, which are sets, so we need some rudimentary set theory at our disposal.
Metric Spaces Are Compact Spaces If and Only If They're Countably Compact
Recall from The Lebesgue Number Lemma page that if $(X, d)$ is a metric space that is also a BW space, then for every open cover $\mathcal F$ of $X$ there exists an $\epsilon > 0$, called a Lebesgue number, such that for all $x \in X$ there exists a $U \in \mathcal F$ such that $B(x, \epsilon) \subseteq U$. We used this very important lemma to prove a very nice result on the Metric Spaces Are Compact Spaces If and Only If They're BW Spaces page. We proved that if $(X, d)$ is a metric space, then $X$ is compact if and only if $X$ is a BW space. The first direction of this statement is somewhat trivial, as we have already seen that a compact space $X$ is a BW space from the Compact Spaces as BW Spaces page; however, the converse of this result is very useful nevertheless. We will now look at a nice consequence of these results. We will see that a metric space $X$ is compact if and only if $X$ is countably compact. Therefore, in a metric space $X$, the concepts of compactness and countable compactness are in essence the same. Theorem 1: Let $X$ be a metric space. Then $X$ is compact if and only if $X$ is countably compact. Proof: $\Rightarrow$ Let $X$ be a compact metric space. Then trivially, $X$ is also countably compact (since every open cover $\mathcal F$ of $X$ has a finite subcover $\mathcal F^*$, and a finite subcover is in particular countable). $\Leftarrow$ Let $X$ be a countably compact metric space. We know that every metric space is Hausdorff, so $X$ is both Hausdorff and countably compact. So by the theorem referenced on the Hausdorff Spaces Are BW Spaces If and Only If They're Countably Compact page, we have that $X$ is a BW space. Since $X$ is a BW metric space, we have from the theorem referenced above that $X$ is a compact metric space. $\blacksquare$
The word ‘Trigonometry’ is derived from the Greek words ‘trigōnon’ (triangle) and ‘metron’ (measure), and the subject was developed to solve geometric problems involving triangles. It is used to measure the sides of a triangle. An angle is a measure of rotation of a given ray about its initial point; the original ray is called the initial side, and the final position of the ray after the rotation is called the terminal side. If the rotation of a ray is in an anticlockwise direction, then the angle is positive, and if the rotation is in a clockwise direction, then the angle is negative. The two types of conventions used for measuring angles are degree measure and radian measure. In degree measure, an angle equal to 1/360 of a complete revolution is said to have a measure of one degree, denoted by the symbol 1°. Tan 0 Degree Value In a right angled triangle, the side opposite the right angle is called the hypotenuse, the side opposite the angle of interest is called the opposite side, and the remaining side is called the adjacent side, which forms a side of both the right angle and the angle of interest. The tangent function of an angle is equal to the length of the opposite side divided by the length of the adjacent side: Tan θ = Opposite side / Adjacent side. Representing the tangent function in terms of the sin and cos functions gives Tan θ = Sin θ / Cos θ. Deriving the Value of Tan Degrees To find the value of tan 0 degrees, use the sine and cosine functions, because the tan function is the ratio of the sine function and the cos function. We can easily learn the values of tangent degrees with the help of the sine and cosine functions: just knowing the values of the sine functions, we can find the values of the cos and tan functions.
There is an easy way to remember the tangent values of other angles, like tan 30° and tan 60°. Start with the sine values: Sin 0° = \(\sqrt{\frac{0}{4}}\), Sin 30° = \(\sqrt{\frac{1}{4}}\), Sin 45° = \(\sqrt{\frac{2}{4}}\), Sin 60° = \(\sqrt{\frac{3}{4}}\), Sin 90° = \(\sqrt{\frac{4}{4}}\). Now simplify all the sine values obtained and put them in tabular form:
Angle: 0°, 30°, 45°, 60°, 90°
Sin: 0, 1/2, \(\frac{1}{\sqrt{2}}\), \(\frac{\sqrt{3}}{2}\), 1
Now find the cosine function values, using Cos 0° = Sin 90°, Cos 30° = Sin 60°, Cos 45° = Sin 45°, Cos 60° = Sin 30°, Cos 90° = Sin 0°:
Angle: 0°, 30°, 45°, 60°, 90°
Cos: 1, \(\frac{\sqrt{3}}{2}\), \(\frac{1}{\sqrt{2}}\), 1/2, 0
Since the tangent function is the ratio of the sine and cosine functions, the values of tan are obtained by dividing the sin value by the cos value at each angle. Hence, the tan 0 degree value is given as tan 0° = Sin 0° / Cos 0° = 0 / 1 = 0, and similarly for tan 30°, tan 45°, tan 60° and tan 90°. So the tabular column that represents the tan function is:
Angle: 0°, 30°, 45°, 60°, 90°
Sin: 0, 1/2, \(\frac{1}{\sqrt{2}}\), \(\frac{\sqrt{3}}{2}\), 1
Cos: 1, \(\frac{\sqrt{3}}{2}\), \(\frac{1}{\sqrt{2}}\), 1/2, 0
Tan: 0, \(\frac{1}{\sqrt{3}}\), 1, \(\sqrt{3}\), not defined
In the same way, we can derive other values of tan degrees like 180°, 270° and 360°. The trigonometry table above defines these values of tan along with the other trigonometric ratios. 
Sample problems: Question 1: Find the value of tan 15°. Solution: Tan 15° = tan (45° – 30°). We know the formula tan(A – B) = (tan A – tan B) / (1 + tan A tan B). Now substitute the values of tan 30° and tan 45°: tan 15° = \((1-\frac{1}{\sqrt{3}})/(1+\frac{1}{\sqrt{3}})\). Therefore, \(\tan 15^{\circ}=(\sqrt{3}-1)/(\sqrt{3}+1)\). Question 2: Prove that \(\frac{\sin (x + y)}{\sin (x-y)}=\frac{\tan x+\tan y}{\tan x-\tan y}\). Solution: L.H.S. = \(\frac{\sin (x + y)}{\sin (x-y)}=\frac{\sin x\cos y +\cos x\sin y}{\sin x\cos y-\cos x\sin y}\). Now divide the numerator and denominator by \(\cos x \cos y\) to get \(\frac{\tan x+\tan y}{\tan x-\tan y}\) = R.H.S. Hence proved.
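Not part of the original solutions, but the table and the tan 15° computation are easy to verify numerically, e.g.:

```python
import math

# sin/cos values from the table above, checked against the standard library
for deg, (s, c) in {0: (0.0, 1.0),
                    30: (0.5, math.sqrt(3) / 2),
                    45: (1 / math.sqrt(2), 1 / math.sqrt(2)),
                    60: (math.sqrt(3) / 2, 0.5),
                    90: (1.0, 0.0)}.items():
    assert math.isclose(math.sin(math.radians(deg)), s, abs_tol=1e-12)
    assert math.isclose(math.cos(math.radians(deg)), c, abs_tol=1e-12)

# tan 15 degrees via the subtraction formula tan(A-B) = (tanA - tanB)/(1 + tanA*tanB)
tan15 = (1 - 1 / math.sqrt(3)) / (1 + 1 / math.sqrt(3))
assert math.isclose(tan15, (math.sqrt(3) - 1) / (math.sqrt(3) + 1))
assert math.isclose(tan15, math.tan(math.radians(15)))
```

Rationalising the answer also gives the neat closed form $\tan 15^{\circ}=2-\sqrt{3}$.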
Let $\varphi: R \rightarrow S$ be a (unital) ring homomorphism. So every left $S$-module $M$ has also a left $R$-module structure via $\varphi$ and in general we have $$ \text{End}_S(M) \subseteq \text{End}_R(M)$$ My question is: Is there a necessary and sufficient condition on $\varphi$ such that the above inclusion becomes an equality for every $M$? Note that $\varphi$ being surjective is sufficient but not necessary as $\mathbb{Z} \hookrightarrow \mathbb{Q}$ also has the desired property. This led me to believe $\varphi$ being an epimorphism in the category $\mathsf{Ring}$ is the condition I want but I couldn't show that or find a counterexample. Edit: We can generalize the situation with $\mathbb{Z} \hookrightarrow \mathbb{Q}$: If $R$ is commutative and $D$ is a multiplicatively closed subset of $R$, the localization $R \rightarrow D^{-1}R$ has the desired property. Moreover ring homomorphisms with this "equal endomorphisms" property are closed under composition. So any composition of surjections and localizations also works. Unfortunately there are commutative ring epimorphisms which cannot be factored to surjections and localizations, as can be seen here. (Some time I will try to work out some of the examples given there to see whether they fail to equalize the endomorphisms or not) Edit: In general a necessary condition is that the centralizer of the image $\varphi(R)$ in $S$ should be equal to the center of $S$: We always have $\mathbf{Z}(S) \subseteq \mathbf{C}_{S}(\varphi (R))$. For the reverse inclusion, let $u \in \mathbf{C}_S(\varphi(R))$. Then letting M be the left regular $S$-module $_S S$, the map $\alpha: S \rightarrow S$ given by $\alpha(s) = us$ lies in $\text{End}_R(M)$. Now by assumption $\alpha$ is also an $S$-endomorphism. We know that $S$-endomorphisms of $M$ are right multiplications by elements of $S$. Say $\alpha$ is given by right multiplication by $t \in S$. Then evaluating at $1$, we get $u = t$. 
Thus left and right multiplications by $u$ coincide, that is $u \in \mathbf{Z}(S)$. Note that by above, when $R$ is commutative we get$$\varphi(R) \subseteq \mathbf{C}_S(\varphi(R)) = \mathbf{Z}(S)$$This means $S$ is an $R$-algebra via $\varphi$. So in this case the question can be rephrased as "Let $k$ be a commutative ring and $S$ be a $k$-algebra. When can we say that $k$-linear endomorphisms of $S$-modules are always $S$-linear?" Maybe this has an easy answer when $k$ is a field. A relevant (perhaps more natural) question to mine is when all homomorphisms are the same, not just endomorphisms. That is, can we find a (nice) condition on $\varphi$ such that for every pair of $S$-modules $M$ and $N$ the inclusion $$\text{Hom}_S(M,N) \subseteq \text{Hom}_R(M,N)$$ becomes an equality?
Let $f$ be a non-negative Riemann integrable function on $[a,b]$. If $f$ equals zero except on a null set, then $\int_a^b f = 0$. Let $A$ be the null set and $M=\sup\left\{f(x):x\in[a,b]\right\}$. For any $\epsilon>0$, there exists a sequence of intervals $(I_k)$ such that $A\subset \bigcup_{k=1}^\infty I_k$ and $\sum_{k=1}^\infty|I_k|<\epsilon$. My approach has 2 steps. Step 1: Prove that there exists an interval $I$ such that $\{\bigcup_{k=N}^\infty I_k\} \cap A\subset I$ for some $N$. Step 2: After Step 1, $f$ on $[a,b]\setminus I$ is either zero or a nonzero value covered by $I_K$ for some $K<N$, which means those nonzero values can be covered by finitely many intervals. Hence, it is easy to find a suitable $\delta$ to make the Riemann sum in this step smaller than $0.5\epsilon$. For Step 1, consider $\frac{\epsilon}{2M}$: there is a sequence of intervals $(I_k)$ such that $A\subset \bigcup_{k=1}^\infty I_k$ and $\sum_{k=1}^\infty|I_k|<\frac{\epsilon}{2M}$. Define $I_k=[a_k,b_k]$, and assume that the $I_k$ are in an order such that $\sup\{f(x):x\in I_k\}\le\sup\{f(x):x\in I_{k+1}\}$; otherwise we rearrange the sequence of intervals. Since $A$ is bounded between $a$ and $b$, $\sup_{x\in[a,b]}A$ is well defined, so there exists $x_N\in A$, and thus $x_N\in I_N$ for some $N$, such that $x_N\gt \sup_{x\in[a,b]}A-\frac{\epsilon}{4M}$. Note that $|I_N|\lt \frac{\epsilon}{2M}$, because the infinite sum is less than $\frac{\epsilon}{2M}$, so $a_N\gt \sup_{x\in[a,b]}A-\frac{\epsilon}{2M}$, so there exists an interval $I$ such that $I_N \cap A\subset I$. For $N+1$, since $x_N\le \sup\{f(x):x\in I_N\}\le \sup\{f(x):x\in I_{N+1}\}$, there exists $x\ge x_N \ge \sup_{x\in[a,b]}A-\frac{\epsilon}{2M}$, so again $I_{N+1} \cap A\subset I$. Then, we can do induction following the "N+1" argument and yield $\{\bigcup_{k=N}^\infty I_k\} \cap A\subset I$ for some $N$. 
$|I|\lt \frac{\epsilon}{2M}$, so the Riemann sum on this interval is less than $\frac{\epsilon}{2}$; combining this with Step 2, the proof is done. If I am right, the assumption that $f$ is Riemann integrable is not needed, and the null-set assumption can be weakened: the covering intervals only need to have total length smaller than any $\epsilon\gt 0$? If I am wrong, please teach me how to prove it. Any help would be appreciated.
I am reading about testing independence in two-way contingency tables from Mood, Graybill and Boes's Introduction to the Theory of Statistics and am confused about testing independence. We have a two-way contingency table. We assume that the cells in the table follow a multinomial distribution with parameters $n$ (known) and $p_{i,j}$ (unknown) for $1 \leqslant i \leqslant r$ and $1 \leqslant j \leqslant s$, where $p_{i,j}$ is the probability of getting cell $(i,j)$ for a single trial. Let $N_{i,j}$ be the random variable representing the count in cell $(i,j)$. We want to use the generalised likelihood ratio $\Lambda$ to test $H_0: p_{i,j} = p_{i.}p_{.j}$. $\Lambda$ turns out to equal $\frac{(\prod_{i} {n_{i.}}^{n_{i.}})(\prod_{j} {n_{.j}}^{n_{.j}})}{n^n \prod_{i,j} {n_{i,j}}^{n_{i,j}}}$ (the $n_{i.}$ and the $n_{.j}$ are the marginal totals). The distribution of this under $H_0$ is not unique, since $H_0$ is composite and $\Lambda$ involves the unknown parameters; this makes formulating a test with a fixed Type-I error size difficult. What the book does is to use the marginals $N_{i.}$ and $N_{.j}$. The book computes the joint probability mass function of the $N_{i,j}$ under $H_0$, which is $\frac{n!}{\prod_{i,j} {n_{i,j}!}}(\prod_{i} {p_{i.}}^{n_{i.}})(\prod_{j} {p_{.j}}^{n_{.j}})$. Then it computes the joint probability mass function of the marginals $N_{i.}$ and $N_{.j}$ under $H_0$, which is $\frac{(n!)^2}{(\prod_i {n_{i.}!})(\prod_j {n_{.j}!})}(\prod_{i} {p_{i.}}^{n_{i.}})(\prod_{j} {p_{.j}}^{n_{.j}})$. Then it computes the conditional distribution of the $N_{i,j}$ given the marginals $N_{i.}$ and $N_{.j}$, which turns out to be $\frac{(\prod_i {n_{i.}!})(\prod_j {n_{.j}!})}{n! \prod_{i,j} {n_{i,j}!}}$. The marginals are therefore (jointly) sufficient. Write $T$ for the marginals $N_{i.}$ and $N_{.j}$.
The book then explains how for each $t$ (which would be a specific set of marginals in this instance) we can find $\lambda_0 (t)$ that satisfies $\int_{0}^{\lambda_0 (t)} f_{\Lambda | T=t}(\lambda | t) \, d\lambda = 0.05$, because $f_{\Lambda | T=t}(\lambda | t)$ does not involve any unknown parameters ($T$ being a sufficient statistic). So our test is 'conditional': we observe $T$, then we observe $\Lambda$ and reject $H_0$ if $\Lambda$ is below $\lambda_0 (t)$. What I don't understand is how do we find $\lambda_0 (t)$? The book is vague about this. I think we are supposed to compute $f_{\Lambda | T=t}(\lambda | t)$, but how do we do this? Are we supposed to compute it from the conditional distribution of the $N_{i,j}$ given the marginals $N_{i.}$ and $N_{.j}$, which was shown to be $\frac{(\prod_i {n_{i.}!})(\prod_j {n_{.j}!})}{n! \prod_{i,j} {n_{i,j}!}}$? This sounds like a computational nightmare. The book does mention a 'large-sample approximation', but I am not so certain what that refers to.
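For what it's worth, the large-sample approximation usually meant here is that $-2\ln\Lambda$ is asymptotically $\chi^2$-distributed with $(r-1)(s-1)$ degrees of freedom under $H_0$, which sidesteps the exact conditional calculation. A quick sketch with a made-up $2\times 2$ table (Python; the counts are purely hypothetical):

```python
import math

# Hypothetical 2x2 table; the counts are made up for illustration only.
table = [[10, 20], [20, 50]]
n = sum(sum(row) for row in table)
row_tot = [sum(row) for row in table]
col_tot = [sum(col) for col in zip(*table)]

# log Lambda from the closed form in the text:
# log L = sum n_i. log n_i. + sum n_.j log n_.j - n log n - sum n_ij log n_ij
log_lam = (sum(r * math.log(r) for r in row_tot)
           + sum(c * math.log(c) for c in col_tot)
           - n * math.log(n)
           - sum(x * math.log(x) for row in table for x in row))

# Equivalent "G-statistic" form:
# -2 log Lambda = 2 * sum n_ij log( n_ij * n / (n_i. * n_.j) )
g = 2 * sum(x * math.log(x * n / (row_tot[i] * col_tot[j]))
            for i, row in enumerate(table) for j, x in enumerate(row))

assert abs(-2 * log_lam - g) < 1e-9
# For an r x s table, -2 log Lambda is compared against a chi-square
# distribution with (r-1)(s-1) degrees of freedom.
```

With these particular counts $-2\ln\Lambda \approx 0.22$, far below any usual $\chi^2_1$ critical value, so independence would not be rejected.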
Here we’ll look at some more probability calculations before moving on to permutations, combinations and the binomial theorem. More Probability… If you recall from the last part, we used set notation to describe a general way of calculating simple probability: \(P(A)={{|A|} \over {|S|}},\) where \(A\) is a set of events and \(S\) is the set of all possible outcomes. Here’s a sample space set describing two dice: \[S=\{1,2,3,4,5,6\}^2\] …for all \(|S|=6^2=36\) possible outcomes. This set contains ordered tuples – \((1,2)\) and \((2,1)\) are distinct – so let’s directly enumerate the whole set: \begin{array}{l} (1,1),(1,2),(2,1),(1,3),(3,1),(1,4),(4,1),(1,5),(5,1),(1,6),(6,1) \\ (2,2),(2,3),(3,2),(2,4),(4,2),(2,5),(5,2),(2,6),(6,2) \\ (3,3),(3,4),(4,3),(3,5),(5,3),(3,6),(6,3) \\ (4,4),(4,5),(5,4),(4,6),(6,4) \\ (5,5),(5,6),(6,5) \\ (6,6) \end{array} Each tuple represents a possible roll of a pair of dice. Normally we roll a pair of dice at the same time so that we don’t care what the order is – a five and a three are read the same as a three and a five, for example. However, if we roll each die individually you can see that the order counts – I may roll a one first followed by a six, or the other way around. Notice how the doubles occur only once in the above table but the different rolls occur twice – (3,6) and (6,3), for example. This leads to a counter-intuitive probability problem. You’d naturally think that any pair of numbers (including the doubles) would have an equal chance, but this is not so, as can be seen here: \[A=\{(1,1),(2,2),(3,3),(4,4),(5,5),(6,6)\}\] …which is an event set for all the possible double throws. If we calculate the probability: \[P(A) = {{|A|} \over {|S|}} = {6 \over 36} \approx 0.17\] …a 0.17 or 17% chance of rolling any double, and as there are 30 possible outcomes for rolling two different numbers: \[P(A) = {{|A|} \over {|S|}} = {30 \over 36} \approx 0.83\] …or an 83% chance of rolling anything other than a double.
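These counts are easy to check mechanically. A quick sketch in Python (not part of the original lesson) that enumerates the same sample space:

```python
from itertools import product
from fractions import Fraction

# The ordered sample space S = {1,...,6}^2 from the text.
S = list(product(range(1, 7), repeat=2))
assert len(S) == 36

# Event set A: all the possible double throws.
doubles = [(a, b) for (a, b) in S if a == b]

def P(A):
    """P(A) = |A| / |S| for equally likely outcomes."""
    return Fraction(len(A), len(S))

print(P(doubles))        # 1/6  (≈ 0.17, any double)
print(1 - P(doubles))    # 5/6  (≈ 0.83, anything other than a double)
```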
Finally, you can see that you are twice as likely to roll a particular pair of different numbers as a particular double: \[A=\{(6,6)\} {\;\;\;\;} B=\{(5,3),(3,5)\}\] \[P(A) = {1 \over 36} \approx 0.03 {\;\;\;\;} P(B) = {2 \over 36} = {1 \over 18} \approx 0.06\] …a one in thirty-six chance of rolling a particular double versus a one in eighteen chance of rolling two particular different numbers. This explains why doubles are so highly prized in dice games. The Sum Of Probabilities If set \(S\) contains all of the possible outcomes from some random event, like tossing a coin, then if we sum up all of the probabilities of those outcomes we get the number 1 or 100%. We can see this easily with a coin sample set: \(S=\{\text{Heads,Tails}\}\). I think that we can all work out that there is a 50% chance that the outcome is either heads or tails. The sum of both these probabilities is, of course, 100%. Mathematically, we can write it like this: \[\sum _{x \in S} p(x) = 1\] …where the function \(p(x)\) returns a probability value between 0 and 1, inclusive \((p(x)\in[0,1])\), for any event \(x\) inside of set \(S\). The sigma \(\sum\) symbol simply means to add all of the probabilities for all possible events together: \[p(\text{Heads}) + p(\text{Tails}) = 0.5 + 0.5 = 1\] The event set is a subset of the sample space set. If \(A\) is the event set then \(A \subseteq S\). This, logically, means that the event set \(A\) can be any element of the power set of \(S\), \(\wp(S)\) – including the empty set, \(\emptyset\). In that case, we haven’t flipped a coin so we have no chance of winning or losing!
🙂 All of the possible event sets (subsets) \(A\) from a heads/tails set \(S\) give us these probabilities for \(p(x)\): \[p(\emptyset)=0 {\;\;} p(\text{Heads})=0.5 {\;\;} p(\text{Tails})=0.5 {\;\;} p(\{\text{Heads, Tails}\})=1\] …and therefore: \[P(A) = \sum _{x \in A} p(x)\] In the next lesson we’ll look at the factorial function, its interaction with the prime numbers and ways to solve it for positive real numbers. © Doc Mike Finnley 2019
Recall from The nth Convergent of an Infinite Continued Fraction page that if $\langle a_0; a_1, a_2, ... \rangle$ is an infinite simple continued fraction with $a_0 \in \mathbb{Z}$ and $a_n \in \mathbb{N}$ for $n \geq 1$ then the $n^{\mathrm{th}}$ convergent of this infinite simple continued fraction is defined to be:(1) We say that $\langle a_0; a_1, a_2, ... \rangle$ converges if $\displaystyle{\lim_{n \to \infty} r_n}$ exists. We also proved the following identities:(2) We will now prove that every infinite simple continued fraction of the form above converges. Theorem 1: Let $\langle a_0; a_1, a_2, ... \rangle$ be an infinite simple continued fraction where $a_0 \in \mathbb{Z}$ and $a_n \in \mathbb{N}$ for $n \geq 1$. Then: a) If $(r_n)$ is the sequence of $n^{\mathrm{th}}$ convergents then $r_0 < r_2 < r_4 < ... < r_5 < r_3 < r_1$. b) $\displaystyle{\lim_{n \to \infty} r_n}$ exists and $\displaystyle{r_{2j} < \lim_{n \to \infty} r_n < r_{2j-1}}$ for all $j$. Proof of a) For all $j \geq 0$ we have that: If $j$ is even, then $(-1)^j = 1$, and since $a_j > 0$ and $k_j, k_{j-2} > 0$, we have that: Hence $r_0 < r_2 < r_4 < ...$. $(*)$ If $j$ is odd, then $(-1)^{j} = -1$, and since $a_j > 0$ and $k_j, k_{j-2} > 0$, we have that: Hence $... < r_5 < r_3 < r_1$. $(**)$ Now for all $j \geq 0$ we have that: If $j$ is even, then $(-1)^{j-1} = -1$, and since $k_j, k_{j-1} > 0$ we have that $r_j - r_{j-1} < 0$, i.e., $r_j < r_{j-1}$ for all even $j$. This combined with $(*)$ and $(**)$ above shows that: Proof of b) Consider the subsequences $(r_{2n})$ and $(r_{2n-1})$ of $(r_n)$. Observe from (a) that $(r_{2n})$ is an increasing sequence bounded above by $r_1$ and so $(r_{2n})$ converges to some $M \in \mathbb{R}$. Similarly, $(r_{2n-1})$ is a decreasing sequence bounded below by $r_0$ and so $(r_{2n-1})$ also converges to some $L \in \mathbb{R}$. For all $j$ we have that $r_{2j} \leq M$ and $L \leq r_{2j-1}$, and: But the right-hand side tends to $0$ as $j \to \infty$.
Hence $L = M$, so $\displaystyle{\lim_{n \to \infty} r_n}$ exists and is such that for all $j$:
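The interlacing in part (a) is easy to observe numerically. A small sketch (Python, using the standard convergent recurrences $h_j = a_j h_{j-1} + h_{j-2}$ and $k_j = a_j k_{j-1} + k_{j-2}$) for the continued fraction $\langle 1; 1, 1, \dots \rangle$ of the golden ratio:

```python
from fractions import Fraction

def convergents(a, n):
    """First n convergents r_j = h_j / k_j of <a0; a1, a2, ...>."""
    h_prev, h = 1, a[0]          # h_{-1} = 1, h_0 = a_0
    k_prev, k = 0, 1             # k_{-1} = 0, k_0 = 1
    rs = [Fraction(h, k)]
    for ai in a[1:n]:
        h_prev, h = h, ai * h + h_prev
        k_prev, k = k, ai * k + k_prev
        rs.append(Fraction(h, k))
    return rs

# <1; 1, 1, 1, ...> converges to the golden ratio (1 + sqrt(5)) / 2.
r = convergents([1] * 12, 12)
evens, odds = r[0::2], r[1::2]
assert all(x < y for x, y in zip(evens, evens[1:]))   # r0 < r2 < r4 < ...
assert all(x > y for x, y in zip(odds, odds[1:]))     # ... < r5 < r3 < r1
assert max(evens) < min(odds)                          # evens sit below odds
```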
I'm not sure about your calculation, but Matlab yields the correct result. In this problem, the common approach is to use the Routh–Hurwitz criterion and search for a row of zeros, which indicates the possibility of imaginary-axis roots. First, convert the system to the closed-loop transfer function $$\frac{K}{s^4 + 10s^3 + 88s^2 + 256s + K}$$ The Routh table is $$\begin{matrix}s^4 &&&& 1 &&&& 88 &&&& K \\s^3 &&&& 10 &&&& 256 \\s^2 &&&& 62.4 &&&& K \\s^1 &&&& \frac{15974.4-10K}{62.4} \\s^0 &&&& K \end{matrix}$$ The \$ s^1 \$ row is the only row that can yield a row of zeros. Setting its entry to zero, we obtain $$\begin{align}15974.4 - 10K &= 0 \\K &= \frac{15974.4}{10} = 1597.44\end{align}$$ Now we take the row above \$s^1\$ and construct the auxiliary polynomial $$\begin{align}62.4 s^2 + K &= 0 \\62.4 s^2 + 1597.44 &= 0 \\ s^2 &= \frac{-1597.44}{62.4} = -25.6 \\s_{1,2} &= \pm j \sqrt{25.6} \\s_{1,2} &= \pm j 5.0596 \\\end{align}$$ The root locus crosses the imaginary axis at \$\pm j5.0596\$ at the gain \$K=1597.44\$. Consequently, the gain \$K\$ must be less than 1597.44 for the system to be stable.
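A quick numerical cross-check (Python; this just evaluates the closed-loop denominator at the predicted crossing, it is not how Matlab computes it):

```python
import math

K = 1597.44

def den(s):
    """Closed-loop characteristic polynomial s^4 + 10 s^3 + 88 s^2 + 256 s + K."""
    return s**4 + 10*s**3 + 88*s**2 + 256*s + K

# Predicted imaginary-axis crossing at s = ±j*sqrt(25.6) ≈ ±j5.0596.
w = math.sqrt(25.6)
for s in (complex(0, w), complex(0, -w)):
    assert abs(den(s)) < 1e-6   # both are (numerically) roots at this gain
print(round(w, 4))              # 5.0596
```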
I am using TTR in R and I am trying to understand the Yang–Zhang volatility estimator (without drift). The following equations seem to imply a single value: $$ \sigma = \sqrt{{\sigma_o^2}+k\sigma_c^2+(1-k)\sigma_{rs}^2} $$ $$\sigma_o^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(\ln{\frac{o_i}{c_{i-1}}}\right)^2$$ $$\sigma_c^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left(\ln{\frac{c_i}{o_i}}\right)^2$$ $$\sigma_{rs}^2 = \frac{1}{N-1}\sum_{i=1}^{N}\left[\left(\ln{\frac{h_i}{c_i}}\right)\left(\ln{\frac{h_i}{o_i}}\right) + \left(\ln{\frac{l_i}{c_i}}\right)\left(\ln{\frac{l_i}{o_i}}\right)\right]$$ $$k = \frac{0.34}{1.34 + \frac{N+1}{N-1}}$$ However when I run volatility(my_data, n = 100, calc = "yang.zhang") I get a vector with a bunch of NAs in front of it. What is my volatility estimate? Is it the last value in the data frame, and if so, what are the remaining values at the other data points? I apologize if this is trivial - but I can't seem to find anything in the TTR documentation. Thank you!
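For intuition, here is a rough Python sketch of a rolling Yang–Zhang estimate over windows of n bars. This mirrors the shape of TTR's output (one value per bar, with leading NAs until a full window is available), though TTR's exact scaling and annualization conventions may differ, so treat it as illustrative only:

```python
import math

def yang_zhang(o, h, l, c, n):
    """Rolling Yang-Zhang volatility (no drift) over windows of n bars.

    Returns a list aligned with the input: the first n entries are None,
    mimicking the leading NAs that TTR's volatility() produces.
    """
    N = len(c)
    out = [None] * N
    k = 0.34 / (1.34 + (n + 1) / (n - 1))
    for t in range(n, N):
        idx = range(t - n + 1, t + 1)          # the last n bars
        so2 = sum(math.log(o[i] / c[i - 1]) ** 2 for i in idx) / (n - 1)
        sc2 = sum(math.log(c[i] / o[i]) ** 2 for i in idx) / (n - 1)
        srs = sum(math.log(h[i] / c[i]) * math.log(h[i] / o[i])
                  + math.log(l[i] / c[i]) * math.log(l[i] / o[i])
                  for i in idx) / (n - 1)
        out[t] = math.sqrt(so2 + k * sc2 + (1 - k) * srs)
    return out

# Sanity check: a constant price series has zero volatility
# everywhere the window is full.
flat = [100.0] * 30
vol = yang_zhang(flat, flat, flat, flat, n=10)
assert vol[:10] == [None] * 10      # leading "NAs"
assert all(v == 0.0 for v in vol[10:])
```

So the vector is a rolling estimate: each entry uses the previous n bars, and the NAs mark bars where a full window does not yet exist; the last value is simply the estimate for the most recent window.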
I asked the following question in MSE for which I couldn't get any answer yet. I thought this would be a better place for that question. In statistical manifolds $S=\{p_\theta\}$, $\theta=(\theta_1,\dots,\theta_n)$, the Riemannian metric usually defined is the Fisher information metric $$g_{ij}=g(\partial_i,\partial_j)=\int \partial_i(\log p_\theta)\, \partial_j(\log p_\theta)~p_\theta~dx$$ The associated connection coefficients are defined by $$\Gamma_{ij}^k=\int \partial_i\partial_j(\log p_\theta)\,\partial_k (\log p_\theta)~ p_{\theta}~dx$$ where $\partial_i=\frac{\partial}{\partial\theta_i}$. My question is, what is the intuition behind defining these? Is there a way to prove using the above metric and connection that the linear family of probability distributions $$L=\{p:\int f_i(x)p(x)~dx=m_i, i=1,\dots,k\}$$ intersects "orthogonally" the associated exponential family $$\mathcal{E}=\{p:p(x)=c(\theta)q(x)\exp(-\sum_{i=1}^k\theta_i f_i(x))\}$$ in the sense that $L\cap\mathcal{E}=\{p^*\}$ where $p^*$ satisfies $$D(p\|q)=D(p\|p^*)+D(p^*\|q)$$ for every $p\in L, q\in \mathcal{E}$? I recently came to know about the connection between the Fisher information metric and the relative entropy: $$D( p(\cdot , a+da) \,\|\, p(\cdot , a) )\approx\frac{1}{2} g_{ij}\, da^{i} da^{j}$$ Would this be a backbone in establishing the above result?
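That last relation is easy to verify numerically in one dimension. A small sketch (Python) for the Bernoulli family, whose Fisher information is $I(\theta)=1/(\theta(1-\theta))$ — my own example, not from the post:

```python
import math

def kl_bernoulli(p, q):
    """Relative entropy D( Bernoulli(p) || Bernoulli(q) )."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

theta, d = 0.3, 1e-3
fisher = 1 / (theta * (1 - theta))   # Fisher information of Bernoulli(theta)

kl = kl_bernoulli(theta + d, theta)
approx = 0.5 * fisher * d ** 2       # (1/2) g_ij da^i da^j in one dimension

# The quadratic approximation matches to within a fraction of a percent.
assert abs(kl / approx - 1) < 0.02
```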
Section 6.1 Exercises 1. Sketch a graph of \(f\left(x\right)=-3\sin \left(x\right)\). 2. Sketch a graph of \(f\left(x\right)=4\sin \left(x\right)\). 3. Sketch a graph of \(f\left(x\right)=2\cos \left(x\right)\). 4. Sketch a graph of \(f\left(x\right)=-4\cos \left(x\right)\). For the graphs below, determine the amplitude, midline, and period, then find a formula for the function. 5. 6. 7. 8. 9. 10. For each of the following equations, find the amplitude, period, horizontal shift, and midline. 11. \(y=3\sin (8(x+4))+5\) 12. \(y=4\sin \left(\frac{\pi }{2} (x-3)\right)+7\) 13. \(y=2\sin (3x-21)+4\) 14. \(y=5\sin (5x+20)-2\) 15. \(y=\sin \left(\frac{\pi }{6} x+\pi \right)-3\) 16. \(y=8\sin \left(\frac{7\pi }{6} x+\frac{7\pi }{2} \right)+6\) Find a formula for each of the functions graphed below. 17. 18. 19. 20. 21. Outside temperature over the course of a day can be modeled as a sinusoidal function. Suppose you know the temperature is 50 degrees at midnight and the high and low temperature during the day are 57 and 43 degrees, respectively. Assuming t is the number of hours since midnight, find a function for the temperature, D, in terms of t. 22. Outside temperature over the course of a day can be modeled as a sinusoidal function. Suppose you know the temperature is 68 degrees at midnight and the high and low temperature during the day are 80 and 56 degrees, respectively. Assuming t is the number of hours since midnight, find a function for the temperature, D, in terms of t. 23. A Ferris wheel is 25 meters in diameter and boarded from a platform that is 1 meter above the ground. The six o’clock position on the Ferris wheel is level with the loading platform. The wheel completes 1 full revolution in 10 minutes. The function \(h(t)\) gives your height in meters above the ground t minutes after the wheel begins to turn. a. Find the amplitude, midline, and period of \(h\left(t\right)\). b. Find a formula for the height function \(h\left(t\right)\). c.
How high are you off the ground after 5 minutes? 24. A Ferris wheel is 35 meters in diameter and boarded from a platform that is 3 meters above the ground. The six o’clock position on the Ferris wheel is level with the loading platform. The wheel completes 1 full revolution in 8 minutes. The function \(h(t)\) gives your height in meters above the ground t minutes after the wheel begins to turn. a. Find the amplitude, midline, and period of \(h\left(t\right)\). b. Find a formula for the height function \(h\left(t\right)\). c. How high are you off the ground after 4 minutes? Answer 1. 3. 5. Amp: 3. Period = 2. Midline: \(y = -4\). \(f(t) = 3\sin(\pi t) - 4\) 7. Amp: 2. Period = \(4\pi\). Midline: \(y = 1\). \(f(t) = 2\cos(\dfrac{1}{2} t) + 1\) 9. Amp: 2. Period = 5. Midline: \(y = 3\). \(f(t) = -2\cos(\dfrac{2\pi}{5} t) + 3\) 11. Amp: 3, Period = \(\dfrac{\pi}{4}\), Shift: 4 left, Midline: \(y = 5\) 13. Amp: 2, Period = \(\dfrac{2\pi}{3}\), Shift: 7 right, Midline: \(y = 4\) 15. Amp: 1, Period = 12, Shift: 6 left, Midline: \(y = -3\) 17. \(f(x) = 4\sin(\dfrac{\pi}{5} (x + 1))\) 19. \(f(x) = \cos(\dfrac{\pi}{5} (x + 2))\) 21. \(D(t) = 50 - 7 \sin(\dfrac{\pi}{12}t)\) 23. a. Amp: 12.5. Midline: \(y = 13.5\). Period: 10 b. \(h(t) = -12.5 \cos(\dfrac{\pi}{5}t) + 13.5\) c. \(h(5) = 26\) meters
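As a check on problem 23, the height function from the answer can be evaluated directly (a Python sketch, not part of the exercise set):

```python
import math

def h(t, diameter=25, platform=1, period=10):
    """Height above ground for a rider boarding at the six o'clock position."""
    r = diameter / 2                 # amplitude = radius = 12.5
    midline = platform + r           # midline = 1 + 12.5 = 13.5
    return -r * math.cos(2 * math.pi * t / period) + midline

assert h(0) == 1.0                   # boarding height at the platform
assert abs(h(5) - 26.0) < 1e-9       # top of the wheel after half a revolution
assert abs(h(10) - 1.0) < 1e-9       # back to the platform after one revolution
```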
Periodic attractors of nonautonomous flat-topped tent systems ISEL - Instituto Superior de Engenharia de Lisboa, Mathematics Department and CIMA - Research Centre for Mathematics and Applications, Rua Conselheiro Emídio Navarro, 1, 1959-007 Lisboa, Portugal In this work we will consider a family of nonautonomous dynamical systems $x_{k+1} = f_k(x_k,\lambda)$, $\lambda \in [-1,1]^{\mathbb{N}_0}$, generated by a one-parameter family of flat-topped tent maps $g_{\alpha}(x)$, i.e., $f_k(x,\lambda) = g_{\lambda_k}(x)$ for all $k\in \mathbb{N}_0$. We will reinterpret the concept of attractive periodic orbit in this context, through the existence of some periodic, invariant and attractive nonautonomous sets and establish sufficient conditions over the parameter sequences for the existence of such periodic attractors. Mathematics Subject Classification: Primary: 37B55, 37E05, 37G35; Secondary: 37E15. Citation: Luís Silva. Periodic attractors of nonautonomous flat-topped tent systems. Discrete & Continuous Dynamical Systems - B, 2019, 24 (4) : 1867-1874. doi: 10.3934/dcdsb.2018243
In Boyd's Convex Optimization, p. 243: for any optimization problem ... for which strong duality obtains, any pair of primal and dual optimal points must satisfy the KKT conditions, i.e. $\mathrm{strong ~ duality} \implies \mathrm{KKT ~ is ~ a ~ necessary ~ condition ~ for ~ an ~ optimal ~ solution}$. And on p. 244: (When the primal problem is convex) if $\tilde{x}, \tilde{\lambda}, \tilde{\mu}$ are any points that satisfy the KKT conditions, then $\tilde{x}$ and $(\tilde{\lambda}, \tilde{\mu})$ are primal and dual optimal, with zero duality gap. If the duality gap is 0, the problem satisfies strong duality. And in the 3rd paragraph: If a convex optimization problem ... satisfies Slater’s condition, then the KKT conditions provide necessary and sufficient conditions for optimality. For me this means (for any convex problem, KKT is already sufficient for optimality): $$\mathrm{KKT} \implies \mathrm{optimal ~ with ~ zero ~ duality ~ gap} \implies \mathrm{strong ~ duality} \implies \mathrm{KKT ~ is ~ also ~ necessary}$$ so KKT is necessary and sufficient for any convex problem? (Since the zero duality gap already gives strong duality, without needing Slater's condition.)
I am learning set theory and I am curious whether we could keep unrestricted comprehension while blocking Russell's paradox using the axiom of regularity/foundation. To my very limited knowledge, the axiom of regularity is not needed to block paradoxes, since if ZF without regularity is inconsistent, then adding regularity would not make it consistent (and the restricted separation schema has already done the work). It seems to me that the axiom of regularity also blocks the existence of a universal set, and since it says no set can be a member of itself, it should also block the construction of the Russell set (since $\forall x (x\not\in x)$ describes the universal set under the axiom of regularity...?) Adding an axiom can never make an inconsistent system consistent. Axioms can't block paradoxes. You should think of this the other way round: the axioms of regularity and replacement are designed to strengthen ZF in useful ways while avoiding inconsistency. Rob wrote correctly: you cannot revive a dead person by giving him an extra leg to stand on. But here is a direct proof: Consider the Russell class, $\{x\mid x\notin x\}$. By comprehension this is a set, and by regularity it is in fact the set of all sets. But now it must be a member of itself, and this is a contradiction to regularity.
Table of Contents Locally Connected and Locally Path Connected Topological Spaces Recall from the Connected and Disconnected Topological Spaces page that a topological space $X$ is said to be connected if it is not disconnected. Also recall from the Path Connected Topological Spaces page that a topological space $X$ is said to be path connected if for every pair of distinct points $x, y \in X$ there exists a continuous function $\alpha : [0, 1] \to X$, called a path from $x$ to $y$, such that $\alpha(0) = x$ and $\alpha(1) = y$. Sometimes a topological space may not be connected or path connected, but may be connected or path connected in a small open neighbourhood of each point in the space. We define these new types of connectedness and path connectedness below. Definition: Let $X$ be a topological space and let $x \in X$. We say that $X$ is Locally Connected at $x$ if for every neighbourhood $U$ of $x$ there exists a connected neighbourhood $V$ of $x$ such that $x \in V \subseteq U$. $X$ is said to be Locally Connected on all of $X$ if $X$ is locally connected at every $x \in X$. Definition: Let $X$ be a topological space and let $x \in X$. We say that $X$ is Locally Path Connected at $x$ if for every neighbourhood $U$ of $x$ there exists a path connected neighbourhood $V$ of $x$ such that $x \in V \subseteq U$. $X$ is said to be Locally Path Connected on all of $X$ if $X$ is locally path connected at every $x \in X$. For example, consider the topological space $\mathbb{R}$ with the usual topology. Consider the following topological subspace of $\mathbb{R}$ which we denote by $N$:(1) Clearly $N$ is disconnected. To see this, let $\displaystyle{A = \bigcup_{n \in \mathbb{Z}, n < 0} (n, n+1)}$ and let $\displaystyle{B = \bigcup_{n \in \mathbb{Z}, n \geq 0} (n, n+1)}$. Then it's not hard to verify that $\{ A, B \}$ is a separation of $N$. However, $N$ is locally connected on all of $N$. To see this, let $x \in N$. Then $x \in (n, n+1)$ for some $n \in \mathbb{Z}$.
Let $\delta = \min \{ x - n, n + 1 - x \}$. Then $(x - \delta, x + \delta) \subseteq (n, n+1)$ is a connected neighbourhood of $x$. Since $x$ was arbitrary, we see that $N$ is locally connected on all of $N$. For another example, consider the following topological subspace of $\mathbb{R}^2$ which we denote by $X$:(2) The following diagram illustrates this space: It's not hard to see that $X$ is path connected. However, $X$ is not locally path connected everywhere. Take a point $\mathbf{a} = (a, 0) \in X$ where $a \neq 0$: Then there will always exist an open neighbourhood of $\mathbf{a}$ that contains no path connected neighbourhood of $\mathbf{a}$, as illustrated above. For an explicit example, $X \cap B(\mathbf{a}, \mid a \mid)$ is such an open neighbourhood: it is not path connected.
187 results for "into". Question · Ready to use · CC BY · Published · Last modified 02/10/2019 16:11.
- Putting a pair of linear equations into matrix notation and then solving by finding the inverse of the coefficient matrix.
- What is the value of the expression given a choice of n?
- Factorise $x^2+cx+d$ into 2 distinct linear factors and then find $\displaystyle \int \frac{ax+b}{x^2+cx+d}\;dx,\;a \neq 0$, using partial fractions or otherwise.
- Try these questions as a little refresher on what you did in first year. These are the type of thing you should know going into second year. If you find any questions tricky then Maths Cafe is a great place to go and get a little support.
- Construct a line through two points in a GeoGebra worksheet. Change the line by setting the positions of the two points when the worksheet is embedded into the question.
- Find the $x$ and $y$ components of a force which is applied at an angle to a particle. Resolve using $F \cos \theta$. The force acts in the positive $x$ and positive $y$ direction.
A differential equation is called a partial differential equation if it contains partial derivatives. The order of a differential equation is the order of its highest derivative. Differential Equation formula: a first-order linear equation has the form \(\frac{dy}{dt} + p(t)y = g(t)\), where p(t) and g(t) are continuous functions. Its solution is y(t) = \(\frac{\int \mu (t)g(t)\,dt + c}{\mu (t)}\), where \(\mu (t) = e^{\int p(t)\,dt}\). Differential Equation formula question. Question 1: Solve \(\frac{dv}{dt}\) = 9.8 – 0.196v, v(0) = 48. Solution: Step 1: Find the general solution: \(v(t) = 50 + ce^{-0.196t}\). Step 2: Find the value of c: 48 = v(0) = 50 + c, so c = –2, giving \(v(t) = 50 - 2e^{-0.196t}\).
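A quick numeric sanity check of the worked answer (Python; uses a central difference to approximate the derivative):

```python
import math

def v(t):
    """Candidate solution v(t) = 50 - 2 e^{-0.196 t}."""
    return 50 - 2 * math.exp(-0.196 * t)

# Initial condition: v(0) = 50 - 2 = 48.
assert v(0) == 48.0

# v should satisfy dv/dt = 9.8 - 0.196 v; check at several times.
eps = 1e-6
for t in (0.0, 1.0, 5.0, 20.0):
    dvdt = (v(t + eps) - v(t - eps)) / (2 * eps)   # central difference
    assert abs(dvdt - (9.8 - 0.196 * v(t))) < 1e-6
```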
Learning Objectives In this section, you will: Use the Law of Cosines to solve oblique triangles. Solve applied problems using the Law of Cosines. Use Heron’s formula to find the area of a triangle. Suppose a boat leaves port, travels \(10\) miles, turns \(20\) degrees, and travels another \(8\) miles as shown in Figure \(\PageIndex{1}\). How far from port is the boat? Unfortunately, while the Law of Sines enables us to address many non-right triangle cases, it does not help us with triangles where the known angle is between two known sides, a SAS (side-angle-side) triangle, or when all three sides are known, but no angles are known, a SSS (side-side-side) triangle. In this section, we will investigate another tool for solving oblique triangles described by these last two cases. Using the Law of Cosines to Solve Oblique Triangles The tool we need to solve the problem of the boat’s distance from the port is the Law of Cosines, which defines the relationship among angle measurements and side lengths in oblique triangles. Three formulas make up the Law of Cosines. At first glance, the formulas may appear complicated because they include many variables. However, once the pattern is understood, the Law of Cosines is easier to work with than most formulas at this mathematical level. Understanding how the Law of Cosines is derived will be helpful in using the formulas. The derivation begins with the Generalized Pythagorean Theorem, which is an extension of the Pythagorean Theorem to non-right triangles. Here is how it works: An arbitrary non-right triangle \(ABC\) is placed in the coordinate plane with vertex \(A\) at the origin, side \(c\) drawn along the x-axis, and vertex \(C\) located at some point \((x,y)\) in the plane, as illustrated in Figure \(\PageIndex{2}\). Generally, triangles exist anywhere in the plane, but for this explanation we will place the triangle as noted. We can drop a perpendicular from \(C\) to the x-axis (this is the altitude or height).
Recalling the basic trigonometric identities, we know that \(\cos \theta=\dfrac{x(adjacent)}{b(hypotenuse)}\) and \(\sin \theta=\dfrac{y(opposite)}{b(hypotenuse)}\) In terms of \(\theta\), \(x=b \cos \theta\) and \(y=b \sin \theta\). The \((x,y)\) point located at \(C\) has coordinates \((b \cos \theta, b \sin \theta)\). Using the side \((x−c)\) as one leg of a right triangle and \(y\) as the second leg, we can find the length of hypotenuse \(a\) using the Pythagorean Theorem. Thus, \(\begin{array}{ll} a^2={(x−c)}^2+y^2 \\[4pt] \;\;\;\;\; ={(b \cos \theta−c)}^2+{(b \sin \theta)}^2 & \text{Substitute }(b \cos \theta) \text{ for }x \text{ and }(b \sin \theta)\text{ for }y \\[4pt] \;\;\;\;\;\; =(b^2{\cos}^2 \theta−2bc \cos \theta+c^2)+b^2 {\sin}^2 \theta & \text{Expand the perfect square.} \\[4pt] \;\;\;\;\; =b^2{\cos}^2 \theta+b^2{\sin}^2 \theta+c^2−2bc \cos \theta & \text{Group terms noting that }{\cos}^2 \theta+{\sin}^2 \theta=1 \\[4pt] \;\;\;\;\; =b^2({\cos}^2 \theta+{\sin}^2 \theta)+c^2−2bc \cos \theta & \text{Factor out }b^2 \\[4pt] \end{array}\) \(a^2=b^2+c^2−2bc \cos \theta \) The formula derived is one of the three equations of the Law of Cosines. The other equations are found in a similar fashion. Keep in mind that it is always helpful to sketch the triangle when solving for angles or sides. In a real-world scenario, try to draw a diagram of the situation. As more information emerges, the diagram may have to be altered. Make those alterations to the diagram and, in the end, the problem will be easier to solve. Example \(\PageIndex{1}\): Finding the Unknown Side and Angles of a SAS Triangle Find the unknown side and angles of the triangle in Figure \(\PageIndex{4}\). Solution First, make note of what is given: two sides and the angle between them. This arrangement is classified as SAS and supplies the data needed to apply the Law of Cosines. Each one of the three laws of cosines begins with the square of an unknown side opposite a known angle. 
For this example, the first side to solve for is side \(b\), as we know the measurement of the opposite angle \(\beta\). \(\begin{array}{ll} b^2=a^2+c^2−2ac \cos \beta \\[4pt] b^2={10}^2+{12}^2−2(10)(12)\cos(30°) & \text{Substitute the measurements for the known quantities.} \\[4pt] b^2=100+144−240 \left(\dfrac{\sqrt{3}}{2}\right) & \text{Evaluate the cosine and begin to simplify.} \\[4pt] b^2=244−120\sqrt{3} \\[4pt] b=\sqrt{244−120\sqrt{3}} & \text{Use the square root property.} \\[4pt] b≈6.013 \end{array}\) Because we are solving for a length, we use only the positive square root. Now that we know the length \(b\), we can use the Law of Sines to fill in the remaining angles of the triangle. Solving for angle \(\alpha\), we have \(\begin{array}{cc} \dfrac{\sin \alpha}{a}=\dfrac{\sin \beta}{b} \\[4pt] \dfrac{\sin \alpha}{10}=\dfrac{\sin(30°)}{6.013} \\[4pt] \sin \alpha=\dfrac{10\sin(30°)}{6.013} & \text{Multiply both sides of the equation by }10. \\[4pt] \alpha={\sin}^{−1}\left(\dfrac{10\sin(30°)}{6.013}\right) & \text{Find the inverse sine of } \dfrac{10\sin(30°)}{6.013}. \\[4pt] \alpha≈56.3° \end{array}\) The other possibility for \(\alpha\) would be \(\alpha=180°-56.3°≈123.7°\). In the original diagram,\(\alpha\) is adjacent to the longest side, so \(\alpha\) is an acute angle and, therefore, \(123.7°\) does not make sense. Notice that if we choose to apply the Law of Cosines, we arrive at a unique answer. We do not have to consider the other possibilities, as cosine is unique for angles between \(0°\) and \(180°\). Proceeding with \(\alpha≈56.3°\), we can then find the third angle of the triangle. \[\begin{align*} \gamma&= 180^{\circ}-30^{\circ}-56.3^{\circ}\\ &\approx 93.7^{\circ} \end{align*}\] The complete set of angles and sides is \(\alpha≈56.3°\) \(a=10\) \(\beta=30°\) \(b≈6.013\) \(\gamma≈93.7°\) \(c=12\) Exercise \(\PageIndex{1}\) Find the missing side and angles of the given triangle: \(\alpha=30°\), \(b=12\), \(c=24\). 
Answer \(a≈14.9\), \(\beta≈23.8°\), \(\gamma≈126.2°\). Example \(\PageIndex{2}\): Solving for an Angle of a SSS Triangle Find the angle \(\alpha\) for the given triangle if side \(a=20\), side \(b=25\), and side \(c=18\). Solution For this example, we have no angles. We can solve for any angle using the Law of Cosines. To solve for angle \(\alpha\), we have \(\begin{array}{ll} a^2=b^2+c^2−2bc \cos \alpha \\[4pt] {20}^2={25}^2+{18}^2−2(25)(18)\cos \alpha & \text{Substitute the appropriate measurements.} \\[4pt] 400=625+324−900 \cos \alpha & \text{Simplify in each step.} \\[4pt] 400=949−900 \cos \alpha \\[4pt] −549=−900 \cos \alpha & \text{Isolate }\cos \alpha. \\[4pt] \dfrac{−549}{−900}=\cos \alpha \\[4pt] 0.61≈\cos \alpha & \text{Find the inverse cosine.} \\[4pt] \alpha≈52.4° \end{array}\) See Figure \(\PageIndex{5}\). Analysis Because the inverse cosine can return any angle between \(0\) and \(180\) degrees, there will not be any ambiguous cases using this method. Exercise \(\PageIndex{2}\) Given \(a=5\), \(b=7\), and \(c=10\), find the missing angles. Answer \(\alpha≈27.7°\), \(\beta≈40.5°\), \(\gamma≈111.8°\) Solving Applied Problems Using the Law of Cosines Just as the Law of Sines provided the appropriate equations to solve a number of applications, the Law of Cosines is applicable to situations in which the given data fits the cosine models. We may see these in the fields of navigation, surveying, astronomy, and geometry, just to name a few. Example \(\PageIndex{3A}\): Using the Law of Cosines to Solve a Communication Problem On many cell phones with GPS, an approximate location can be given before the GPS signal is received. This is accomplished through a process called triangulation, which works by using the distances from two known points. Suppose there are two cell phone towers within range of a cell phone.
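The SSS case amounts to rearranging the Law of Cosines for the cosine of the desired angle; a sketch with the example's sides:

```python
import math

# The SSS example above: a = 20, b = 25, c = 18; solve for alpha.
a, b, c = 20.0, 25.0, 18.0

cos_alpha = (b**2 + c**2 - a**2) / (2*b*c)  # Law of Cosines rearranged
alpha = math.degrees(math.acos(cos_alpha))

print(cos_alpha, alpha)  # 0.61, ~52.4 degrees
```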
The two towers are located \(6000\) feet apart along a straight highway, running east to west, and the cell phone is north of the highway. Based on the signal delay, it can be determined that the signal is \(5050\) feet from the first tower and \(2420\) feet from the second tower. Determine the position of the cell phone north and east of the first tower, and determine how far it is from the highway. Solution For simplicity, we start by drawing a diagram similar to Figure \(\PageIndex{6}\) and labeling our given information. Using the Law of Cosines, we can solve for the angle \(\theta\). Remember that the Law of Cosines uses the square of one side to find the cosine of the opposite angle. For this example, let \(a=2420\), \(b=5050\), and \(c=6000\). Thus, \(\theta\) is the angle opposite side \(a=2420\). \[\begin{align*} a^2 & =b^2+c^2−2bc \cos \theta \\[4pt] {(2420)}^2 &={(5050)}^2+{(6000)}^2−2(5050)(6000) \cos \theta \\[4pt] \cos \theta &≈ 0.9183 \\[4pt] \theta &≈ {\cos}^{−1}(0.9183) \\[4pt] \theta &≈ 23.3° \end{align*}\] To answer the questions about the phone’s position north and east of the tower, and the distance to the highway, drop a perpendicular from the position of the cell phone, as in Figure \(\PageIndex{7}\). This forms two right triangles, although we only need the right triangle that includes the first tower for this problem. Using the angle \(\theta=23.3°\) and the basic trigonometric identities, we can find the solutions. Thus \[\begin{align*} \cos(23.3°) &= \dfrac{x}{5050} \\[4pt] x &= 5050\cos(23.3°) \\[4pt] x &≈ 4638.15\, feet\\[4pt] \sin(23.3°) &= \dfrac{y}{5050} \\[4pt] y &= 5050\sin(23.3°) \\[4pt] y &≈1997.5 \, feet \end{align*}\] The cell phone is approximately \(4638\) feet east and \(1998\) feet north of the first tower, and \(1998\) feet from the highway.
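A sketch of the triangulation computation, using the rounded \(\theta=23.3°\) exactly as the text does:

```python
import math

# Tower-triangle sides from the example: a = 2420, b = 5050, c = 6000 feet.
a, b, c = 2420.0, 5050.0, 6000.0

cos_theta = (b**2 + c**2 - a**2) / (2*b*c)
theta = round(math.degrees(math.acos(cos_theta)), 1)  # 23.3, as in the text

x = b * math.cos(math.radians(theta))  # feet east of the first tower
y = b * math.sin(math.radians(theta))  # feet north (= distance to highway)

print(theta, x, y)  # 23.3, ~4638.2, ~1997.5
```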
Example \(\PageIndex{3B}\): Calculating Distance Traveled Using a SAS Triangle Returning to our problem at the beginning of this section, suppose a boat leaves port, travels \(10\) miles, turns \(20\) degrees, and travels another \(8\) miles. How far from port is the boat? The diagram is repeated here in Figure \(\PageIndex{8}\). Solution The boat turned \(20\) degrees, so the obtuse angle of the non-right triangle is the supplementary angle, \(180°−20°=160°\). With this, we can utilize the Law of Cosines to find the missing side of the obtuse triangle—the distance of the boat to the port. \[\begin{align*} x^2 &= 8^2+{10}^2−2(8)(10)\cos(160°) \\[4pt] x^2 &= 314.35 \\[4pt] x &= \sqrt{314.35} \\[4pt] x&≈17.7\, miles \end{align*}\] The boat is about \(17.7\) miles from port. Using Heron’s Formula to Find the Area of a Triangle We already learned how to find the area of an oblique triangle when we know two sides and an angle. We also know the formula to find the area of a triangle using the base and the height. When we know the three sides, however, we can use Heron’s formula instead of finding the height. Heron of Alexandria was a geometer who lived during the first century A.D. He discovered a formula for finding the area of oblique triangles when three sides are known. HERON’S FORMULA Heron’s formula finds the area of oblique triangles in which sides \(a\), \(b\), and \(c\) are known. \[Area=\sqrt{s(s−a)(s−b)(s−c)}\] where \(s=\dfrac{(a+b+c)}{2}\) is one half of the perimeter of the triangle, sometimes called the semi-perimeter. Exercise \(\PageIndex{3}\) Use Heron’s formula to find the area of a triangle with sides of lengths \(a=29.7\) ft, \(b=42.3\) ft, and \(c=38.4\) ft. Answer Area = \(552\) square feet Example \(\PageIndex{5}\): Applying Heron’s Formula to a Real-World Problem A Chicago city developer wants to construct a building consisting of artist’s lofts on a triangular lot bordered by Rush Street, Wabash Avenue, and Pearson Street.
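The boat computation is a one-liner worth checking:

```python
import math

# Two legs of the journey with the 160-degree included angle from the text.
x = math.sqrt(8**2 + 10**2 - 2*8*10*math.cos(math.radians(160)))
print(x)  # ~17.73 miles
```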
The frontage along Rush Street is approximately \(62.4\) meters, along Wabash Avenue it is approximately \(43.5\) meters, and along Pearson Street it is approximately \(34.1\) meters. How many square meters are available to the developer? See Figure \(\PageIndex{10}\) for a view of the city property. Solution Find the measurement for \(s\), which is one-half of the perimeter. \[\begin{align*} s&= \dfrac{(62.4+43.5+34.1)}{2}\\ s&= 70\; m\\ \text{Apply Heron's formula.}\\ Area&= \sqrt{70(70-62.4)(70-43.5)(70-34.1)}\\ Area&= \sqrt{506,118.2}\\ Area&\approx 711.4 \end{align*}\] The developer has about \(711.4\) square meters. Exercise \(\PageIndex{4}\) Find the area of a triangle given \(a=4.38\) ft , \(b=3.79\) ft, and \(c=5.22\) ft. Answer about \(8.15\) square feet Key Equations Law of Cosines \(a^2=b^2+c^2−2bc \cos \alpha\) \(b^2=a^2+c^2−2ac \cos \beta\) \(c^2=a^2+b^2−2ab \cos \gamma\) Heron’s formula \(Area=\sqrt{s(s−a)(s−b)(s−c)}\) where \(s=\dfrac{(a+b+c)}{2}\) Key Concepts The Law of Cosines defines the relationship among angle measurements and lengths of sides in oblique triangles. The Generalized Pythagorean Theorem is the Law of Cosines for two cases of oblique triangles: SAS and SSS. Dropping an imaginary perpendicular splits the oblique triangle into two right triangles or forms one right triangle, which allows sides to be related and measurements to be calculated. See Example \(\PageIndex{1}\) and Example \(\PageIndex{2}\). The Law of Cosines is useful for many types of applied problems. The first step in solving such problems is generally to draw a sketch of the problem presented. If the information given fits one of the three models (the three equations), then apply the Law of Cosines to find a solution. See Example \(\PageIndex{3}\) and Example \(\PageIndex{4}\). Heron’s formula allows the calculation of area in oblique triangles. All three sides must be known to apply Heron’s formula. See Example \(\PageIndex{5}\) and Example \(\PageIndex{6}\).
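Heron's formula wraps naturally into a small function; checking it against the example and Exercise 3 above:

```python
import math

def heron(a, b, c):
    """Area of a triangle from its three side lengths via Heron's formula."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron(62.4, 43.5, 34.1))  # ~711.4 (the city-lot example)
print(heron(29.7, 42.3, 38.4))  # ~552 (Exercise 3)
```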
Equipartition of energy for nonautonomous wave equations

1. Department of Mathematical Sciences, The University of Memphis, Dunn Hall, 337, Memphis, TN 38152, USA
2. Department of Mathematical Sciences, The University of Memphis, Dunn Hall, 343, Memphis, TN 38152, USA
3. Department of Mathematics, Statistics and Physics, Federal University of Rio Grande, Av. Italia, Km 08, Campus Carreiros, Rio Grande, RS 96203-900, Brazil

Consider the abstract wave equation $u''(t)+A^2u(t)=0$ in a Hilbert space $\mathcal{H}$, with kinetic, potential, and total energies $K(t)=\|u'(t)\|^2$, $P(t)=\|Au(t)\|^2$, and $E(t)=K(t)+P(t)$. The total energy is conserved, $E(t)=E(0)$, and equipartition of energy, $\lim_{t \to \pm\infty}K(t) = \lim_{t \to \pm\infty}P(t) = \frac{E(0)}{2}$, is tied to a decay condition of the form $e^{itA}\longrightarrow 0$ (in a suitable sense) on the operator $A$.

Mathematics Subject Classification: Primary: 34G10, 35L90; Secondary: 76D33.

Citation: Gisèle Ruiz Goldstein, Jerome A. Goldstein, Fabiana Travessini De Cezaro. Equipartition of energy for nonautonomous wave equations. Discrete & Continuous Dynamical Systems - S, 2017, 10 (1) : 75-85. doi: 10.3934/dcdss.2017004
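The paper's result concerns the limits of $K(t)$ and $P(t)$ as $t\to\pm\infty$ in the abstract operator setting. As a far more modest sanity check (a hypothetical single-mode analogue with time averages, not the theorem itself), the averaged kinetic and potential energies of one harmonic mode $u''+\omega^2 u=0$ over a period are each $E/2$:

```python
import math

# One mode u(t) = cos(w t): K(t) = u'(t)^2 = w^2 sin^2(wt),
# P(t) = w^2 u(t)^2 = w^2 cos^2(wt), and E = K + P = w^2 is conserved.
w = 2.0
E = w**2

N = 10_000  # uniform samples over one full period
ts = [2 * math.pi / w * k / N for k in range(N)]
K_avg = sum((w * math.sin(w * t)) ** 2 for t in ts) / N
P_avg = sum((w * math.cos(w * t)) ** 2 for t in ts) / N

print(K_avg, P_avg)  # both equal E/2 = 2.0 (up to rounding)
```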
Let $G$ be a finite solvable group and $N$ a normal subgroup of $G$ such that $G/N$ is not abelian. Assume moreover that for every prime $p$, $G$ has at most $5$ conjugacy classes whose sizes are multiples of $p$. Let $G/N$ be a Frobenius group with kernel $K/N$ of order $5$ and complement isomorphic to $G/K$ of order $2$. Let $N/L$ be a chief factor of $G$; since $G$ is solvable, $N/L$ is an elementary abelian $p$-group. If $|N/L|>2$, can we say that $C_{G/L}(N/L)=N/L$? Let $C=C_{G/L}(N/L)$. Clearly $N/L\leq C$, so suppose $N/L<C$. Then $K/L\leq C$, and thus $K/L$ is abelian (it centralizes $N/L$, and $K/N$ is cyclic). Since $K/L$ has index $2$ in $G/L$, the conjugacy classes (in $G/L$) of elements of $K/L$ have size either $1$ or $2$. Since $G/N\cong D_5$, the center of $G/L$ is contained in $N/L$, and thus every element of $K/L\setminus N/L$ lies in a conjugacy class (in $G/L$) of size exactly $2$. Since $|N/L|\geq 3$, that gives at least $4|N/L|\geq 12$ such elements, split into at least $6$ classes of size exactly $2$. Lifting back to $G$, these yield at least $6$ conjugacy classes whose sizes are multiples of $2$, contradicting the hypothesis that there are at most $5$ such classes.
May 2nd, 2015, 05:23 AM # 1 Newbie Joined: May 2015 From: Imperium Romanum Posts: 13 Thanks: 0 Need help with Lagrange multiplier... Hi Everyone, How do I work out the partial derivative for 1.) and the workings to the solution 2.)? Your help is much appreciated! May 2nd, 2015, 06:21 AM # 2 Math Team Joined: Jan 2015 From: Alabama Posts: 3,264 Thanks: 902 Write $\displaystyle f(x,y)= \sqrt{x^2+ y^2}= (x^2+ y^2)^{1/2}$. Think of that as $\displaystyle f(u)=u^{1/2}$ and $\displaystyle u(x,y)= x^2+ y^2$. By the chain rule, $\displaystyle \frac{\partial f}{\partial x}= \frac{df}{du}\frac{\partial u}{\partial x}$ and $\displaystyle \frac{\partial f}{\partial y}= \frac{df}{du}\frac{\partial u}{\partial y}$. Here $\displaystyle \frac{df}{du}= (1/2)u^{1/2- 1}= (1/2)u^{-1/2}$, $\displaystyle \frac{\partial u}{\partial x}= 2x$, and $\displaystyle \frac{\partial u}{\partial y}= 2y$. So $\displaystyle \frac{\partial f}{\partial x}= (1/2)u^{-1/2}(2x)= x(x^2+ y^2)^{-1/2}= \frac{x}{\sqrt{x^2+ y^2}}$ and $\displaystyle \frac{\partial f}{\partial y}= (1/2)u^{-1/2}(2y)= y(x^2+ y^2)^{-1/2}= \frac{y}{\sqrt{x^2+ y^2}}$. For your second question, since $\displaystyle 2+ \frac{1}{\sqrt{x^2+ y^2}}$ is always positive, you can divide both sides by it, leaving $x= y$. You have the constraint $x+ y= 1$, so that $x= y= 1/2$. Last edited by Country Boy; May 2nd, 2015 at 06:31 AM.
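Those partials are easy to verify numerically with central finite differences at a hypothetical point (3, 4), where the exact values are 3/5 and 4/5:

```python
import math

# f(x, y) = sqrt(x^2 + y^2); exact partials are x/f and y/f.
f = lambda x, y: math.hypot(x, y)
x, y, h = 3.0, 4.0, 1e-6

fx_num = (f(x + h, y) - f(x - h, y)) / (2 * h)  # central difference in x
fy_num = (f(x, y + h) - f(x, y - h)) / (2 * h)  # central difference in y

print(fx_num, x / f(x, y))  # both ~ 0.6
print(fy_num, y / f(x, y))  # both ~ 0.8
```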
I could use some advice on extensions of the CIR model. The standard CIR model reads $dr(t)=\kappa(\theta-r(t))dt + \sigma \sqrt{r(t)} dW(t)$. A possible extension, if we would like the short rate to also take negative values, is a displaced version, in which $r(t)+\alpha$, with $\alpha>0$, follows a CIR model. Further, to fit the initial term structure, one could also consider the CIR++ (see Brigo et al.), in which $r(t)=x(t)+\phi(t)$, where $x$ is CIR and $\phi(t)$ is deterministic and chosen to fit the initial term structure. My question is whether it would make sense to consider a displaced CIR++, that is, $r(t)+\alpha=x(t)+\phi(t)$. My immediate thought is that $\alpha$ does not provide any additional value for the model, since the $\phi$-function already makes it possible for the short rate to be negative. Is that correct?
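A small simulation sketch supporting that intuition (all parameter values hypothetical): a displaced CIR++ with shift $\alpha$ and a plain CIR++ whose deterministic function is $\phi-\alpha$ produce the identical short-rate path, since $r(t) = x(t) + (\phi(t) - \alpha)$, so $\alpha$ is simply absorbed into $\phi$.

```python
import math, random

def cir_path(x0, kappa, theta, sigma, dt, n, rng):
    """Euler scheme for dx = kappa*(theta - x) dt + sigma*sqrt(x) dW."""
    x = [x0]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x.append(x[-1] + kappa * (theta - x[-1]) * dt
                 + sigma * math.sqrt(max(x[-1], 0.0)) * dw)
    return x

phi = lambda t: 0.01 + 0.002 * t  # hypothetical deterministic shift
alpha = 0.03
dt, n = 1 / 252, 252

x = cir_path(0.02, 0.5, 0.04, 0.1, dt, n, random.Random(42))

# displaced CIR++: r(t) + alpha = x(t) + phi(t)
r_displaced = [(xi + phi(i * dt)) - alpha for i, xi in enumerate(x)]
# plain CIR++ with phi_tilde = phi - alpha
r_plain = [xi + (phi(i * dt) - alpha) for i, xi in enumerate(x)]

diff = max(abs(p - q) for p, q in zip(r_displaced, r_plain))
print(diff)  # ~0: alpha is absorbed into phi
```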
Binomial Tree Simulation The binomial model is a discrete grid generation method from \(t=0\) to \(T\). At each point in time (\(t+\Delta t\)) we can move up with probability \(p\) and down with probability \((1-p)\). As the probabilities of an up and a down movement remain constant throughout the generation process, we end up with a recombining binary tree, or binary lattice. Whereas a balanced binary tree with height \(h\) has \(2^{h+1}-1\) nodes, a binomial lattice of height \(h\) has \(\sum_{i=1}^{h+1}i\) nodes (level \(i\) contains \(i+1\) nodes). The algorithm to generate a binomial lattice of \(M\) steps (i.e. of height \(M\)) given a starting value \(S_0\), an up movement \(u\), and down movement \(d\), is: STATE S(0,0) = S(0) FOR i=1 to M FOR j=0 to i STATE S(j,i) = S(0)*u^j*d^(i-j) ENDFOR ENDFOR We can write this function in R and generate a graph of the lattice. A simple lattice generation function is below: [source lang=”R”] # Generate a binomial lattice # for a given up, down, start value and number of steps genlattice <- function(X0=100, u=1.1, d=.75, N=5) { X <- c() X[1] <- X0 count <- 2 for (i in 1:N) { for (j in 0:i) { X[count] <- X0 * u^j * d^(i-j) count <- count + 1 } } return(X) } [/source] We can generate a sample lattice of 5 steps using symmetric up-and-down values: [source lang=”R”] > genlattice(N=5, u=1.1, d=.9) [1] 100.000 90.000 110.000 81.000 99.000 121.000 72.900 89.100 108.900 133.100 65.610 [12] 80.190 98.010 119.790 146.410 59.049 72.171 88.209 107.811 131.769 161.051 [/source] In this case, the output is a vector of alternate up and down state values.
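For readers without R at hand, here is a line-for-line Python port of `genlattice` (same flat ordering as the R output above):

```python
def genlattice(X0=100.0, u=1.1, d=0.75, N=5):
    """Recombining binomial lattice as a flat list: the root first, then
    level i holding the i+1 values X0 * u^j * d^(i-j) for j = 0..i."""
    X = [X0]
    for i in range(1, N + 1):
        for j in range(i + 1):
            X.append(X0 * u**j * d**(i - j))
    return X

S = genlattice(N=5, u=1.1, d=0.9)
print(len(S))  # 21 nodes, matching sum_{i=1}^{6} i
```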
We can nicely graph a binomial lattice given a tool like graphviz, and we can easily create an R function to generate a graph specification that we can feed into graphviz: [source lang=”R”] dotlattice <- function(S, labels=FALSE) { shape <- ifelse(labels == TRUE, "plaintext", "point") cat("digraph G {", "\n", sep="") cat("node[shape=",shape,", samehead, sametail];","\n", sep="") cat("rankdir=LR;","\n") cat("edge[arrowhead=none];","\n") # Create a dot node for each element in the lattice for (i in 1:length(S)) { cat("node", i, "[label=\"", S[i], "\"];", "\n", sep="") } # The number of levels in a binomial lattice of length N # is `$\frac{\sqrt{8N+1}-1}{2}$` L <- ((sqrt(8*length(S)+1)-1)/2 - 1) k <- 1 for (i in 1:L) { tabs <- rep("\t",i-1) j <- i while (j>0) { cat("node",k,"->","node",(k+i),";\n",sep="") cat("node",k,"->","node",(k+i+1),";\n",sep="") k <- k + 1 j <- j - 1 } } cat("}", sep="") } [/source] This will simply output a dot script to the screen. We can capture this script and save it to a file by invoking: [source lang=”R”] > x <- capture.output(dotlattice(genlattice(N=8, u=1.1, d=0.9))) > cat(x, file="/tmp/lattice1.dot") [/source] We can then invoke dot from the command-line on the generated file: [source lang=”bash”] $ dot -Tpng -o lattice.png -v lattice1.dot [/source] The resulting graph looks like the following: If we want to add labels to the lattice vertices, we can add the labels attribute: [source lang=”R”] > x <- capture.output(dotlattice(genlattice(N=8, u=1.1, d=0.9), labels=TRUE)) > cat(x, file="/tmp/lattice1.dot") [/source]
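For comparison, a hypothetical Python rendition of the same DOT-generation idea (node numbering and the level formula follow the R code; `dotlattice` here is just our own name for the sketch):

```python
import math

def dotlattice(S, labels=False):
    """Return a graphviz DOT description of a flat binomial lattice S."""
    shape = "plaintext" if labels else "point"
    lines = ["digraph G {",
             "node[shape=%s];" % shape,
             "rankdir=LR;",
             "edge[arrowhead=none];"]
    for i, s in enumerate(S, start=1):
        lines.append('node%d[label="%s"];' % (i, s))
    # number of levels in a lattice of length N: (sqrt(8N+1)-1)/2
    levels = (math.isqrt(8 * len(S) + 1) - 1) // 2
    k = 1
    for i in range(1, levels):  # each non-terminal level i has i nodes
        for _ in range(i):
            lines.append("node%d->node%d;" % (k, k + i))
            lines.append("node%d->node%d;" % (k, k + i + 1))
            k += 1
    return "\n".join(lines)

dot = dotlattice([100.0, 90.0, 110.0, 81.0, 99.0, 121.0])
print(dot.count("->"))  # 6 edges in a 3-level lattice
```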
Question 1: The Boltzmann entropy $S_B=k_B\ln\Omega(E)$ is valid only for the microcanonical ensemble. In the microcanonical ensemble, all accessible microstates (accessible = they have energy $E$, at least with some $\delta E$ uncertainty) have equal probability. So if $r$ is an index that labels microstates, we have $$ p_r=C, $$ where $C$ is a constant. The normalization means that $C=1/\Omega(E)$, where $\Omega(E)$ is the number of accessible microstates. Let us define a more general entropy as $$ S_S=-k_B\sum_r p_r\ln(p_r), $$ valid for an arbitrary probability distribution $p_r$. What happens if $p_r=C$? Then we have $$ S_S=-k_B\Omega(E)C\ln(C)=-k_B\ln\left(\frac{1}{\Omega(E)}\right)=k_B\ln(\Omega(E))=S_B. $$ The only thing that is questionable is what's with $k_B$ and $\ln$ instead of $\log_2$. The point is, the multiplicative constant doesn't really matter, and different logarithms are related by multiplicative constants. What we have is that in the microcanonical ensemble, temperature is defined as $$ \frac{1}{T}=\frac{\partial S_B}{\partial E}. $$ Early phenomenological thermodynamists on the other hand had no clue about what temperature actually is, so they invented a unit, $K$, for it. From the perspective of equilibrium statistical mechanics, it is far more natural to measure temperature in units of energy, rather than Kelvin. So, whatever multiplicative constants happen to be in the formula for entropy, they essentially act as conversion factors between units of temperature and units of energy. And aside from multiplicative factors, $S_S$ is Shannon-entropy. So they are essentially the same, with the understanding that Boltzmann's entropy is a special case for microcanonical ensembles. Interesting tidbit: Consider Shannon-entropy $S_S$ as a functional of probability distribution: $$ S_S[p]=-\sum_r p_r\ln(p_r). $$ Here I set $k_B=1$. What are the critical points of this functional?
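A two-line numerical check of that reduction (with a hypothetical $\Omega=8$ and $k_B=1$): the Shannon entropy of the uniform distribution over $\Omega$ states is exactly $\ln\Omega$.

```python
import math

def shannon_entropy(p):
    """S = -sum_r p_r ln p_r with k_B = 1; p = 0 terms contribute nothing."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

omega = 8
uniform = [1 / omega] * omega
print(shannon_entropy(uniform), math.log(omega))  # both ln(8) ~ 2.079
```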
We can do calculus of variations, but we only vary probability distributions, so we need to enforce $\sum_r p_r=1$. The functional to be varied is then $$ F[p]=-\sum_r p_r\ln(p_r)+\gamma\left(1-\sum_r p_r\right) $$ where $\gamma$ is a Lagrange-multiplier. After variation we get $$ \delta F[p]=-\sum_r\left(\delta p_r\ln(p_r)+p_r\frac{1}{p_r}\delta p_r+\gamma\delta p_r\right). $$ Setting this to 0 gives $$ \ln(p_r)=-(1+\gamma)\Rightarrow p_r=e^{-1-\gamma}=C, $$ where $C$ can be determined from normalization. So basically, the microcanonical ensemble is precisely the one which maximizes entropy. Question 2: I am not sure which is Gibbs entropy (probably the "modified" Shannon entropy?), but they are basically all the same in different formulations and different conventions for temperature. Von Neumann entropy is of course quantum mechanical, but it reduces to usual entropy if you diagonalize the density matrix. If you are curious about the meaning of entropy, I think you should drop strict information theory and just look at probability theory. It is probably simpler to consider the negative of entropy, $I=\sum_r p_r\ln(p_r)$. It can be seen that this essentially measures how much knowledge you'd gain if you were to know which state the system is in. Assume the probability distribution is such that only one state has nonzero probability, so $p_r=1$ for a specific $r$ and 0 for the rest. Then entropy is zero. And indeed, since that state is the only realizable state, you gain absolutely no information if somebody tells you what its state is. On the other hand, if all states are equiprobable, then you have absolutely no basis to "guess" the state of the system without knowing anything about it. If someone tells you the state of the system, you gain quite a lot of information.
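The maximization claim is also easy to probe numerically: any normalized perturbation of the uniform distribution (a hypothetical random one is used here) only lowers the entropy.

```python
import math, random

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

n = 6
uniform = [1 / n] * n
rng = random.Random(0)

for _ in range(1000):
    # random normalized perturbation of the uniform distribution
    q = [1 / n + rng.uniform(-0.5 / n, 0.5 / n) for _ in range(n)]
    total = sum(q)
    q = [qi / total for qi in q]
    assert entropy(q) <= entropy(uniform) + 1e-12

print(entropy(uniform))  # ln(6): the uniform distribution is the maximizer
```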
If the probability distribution is "spiky", then this $I$ quantity is lower than if it was even, because if you "guess" that the state of the system is in the "spiky" domain, you'd be more often right than not. So I somewhat retract my statement and say that it isn't so much about how much knowledge one gained if they were told the state of the system (but clearly, it is related), but rather, how likely it is that you can guess which state of the system is realized, just by knowing the distribution. For a "spiky" distribution, the system is likely near the spike, so it is pretty guessable. For a system that is evenly distributed, your guess is worthless. It is a measure of "spikiness", a measure of how evenly the system is distributed over its accessible states. Question 3: I cannot really answer this directly, mainly because my knowledge of information theory isn't that high, so I'll only say what I already said in 1, that the multiplicative constant of units $J/K$ is only needed to make contact with what phenomenological thermodynamists of old defined as temperature. In the microcanonical ensemble, the entropy is given by the logarithm of the number of accessible microstates, which depend on energy. Inverse temperature is the response of entropy to a change in energy, so temperature should have dimensions of energy. With that said, if you defined entropy in the microcanonical ensemble as $$ S=\log_2(\Omega(E)), $$ then temperature would have units of $J/\text{bit}$, if you'd like. Then, if you defined $k_B$ with units of energy, then the unit of temperature would be $1/\text{bit}$. Edit - clarifications: I cannot shake the nagging feeling that I did not answer this question satisfyingly, so I'd like to clarify certain points. Related to Question 3, I think (but I might be wrong, not an expert in this field) that relating temperature to information is somewhat futile, at least beyond superficialities.
Temperature is only defined in a meaningful way for equilibrium systems. Specifically, temperature is only defined for microcanonical ensembles. Realize that it is not meaningful to talk about microcanonical ensembles that do not describe equilibrium systems. Non-equilibrium systems have time-dependent probability distributions. But a microcanonical ensemble is in a very specific distribution (even distribution), so you cannot have time-evolution if this even-ness is to be kept. For all other ensembles, temperature is defined by being in equilibrium with another system such that they, together, form a microcanonical ensemble. On the other hand, entropy/information is meaningful as soon as you have a probability distribution. Related to the interpretation of entropy, I think it is probably best not to think about it either in the context of information theory or thermodynamics. Even if those two fields were the main inspiration for the concept of entropy, it is a concept in probability theory. Both information theorists and thermodynamists use entropy for their own nefarious purposes, so it is best to abstract it away. Entropy is simply a number associated to a probability distribution. I thought things through and I think I can give a better recount of what it means than I did in the main answer. Instead of considering $S=-\sum_r p_r\ln(p_r)$, let us consider $I_r=-\ln(p_r)$ where $p_r$ is the probability of a specific state. Let us call this the "information" of the state $r$. Since $p_r$ may take on values between $0$ and $1$, and $I_r$ is a monotonically decreasing function of $p_r$, we need to consider only the limiting cases, 0 and 1. If $p_r=1$, $I_r=0$. In this case, the system is not probabilistic, but deterministic. Thus, there is no information to be gained if a wizard suddenly told us that the system is in $r$. It is trivial. No information content. On the other hand, if $p_r=0$, $I_r=\infty$. This case is singular, so it is difficult to interpret.
Basically, if a wizard told us that the system is in $r$, he'd be lying. But if we consider the case when $p_r=\epsilon$ very small, but nonzero, $I_r$ approaches infinity. If a wizard told us that the system is in $r$, a very unlikely state, we'd be surprised. It would, in some sense, net us a great deal of information, since it is very unlikely that the system is in $r$. Entropy is then $$ S=\sum_r p_rI_r=\left< I\right>, $$ the expectation value of information. So it is a kind of "average information content" of the distribution. If the distribution is even, we know very little about the state of the system, since it can be in any. If the distribution is spiky, we pretty much know that the system is near the spike. Entropy parametrizes our ignorance about the system, if we only know the distribution and nothing else.
$\vec{B}=\nabla \times \vec{A}\tag1$ This is true because at every point $\nabla\cdot\vec{B}=0. \tag2$ In free space points, $\displaystyle \vec{B}=\dfrac{\mu_0}{4 \pi}\int_C \dfrac{I\, dl \times\hat{r}}{r^2}\tag3$ and consequently $\nabla \cdot\vec{B}=0$. At the points on the circuit, there is a singularity and we cannot directly apply equation $(3)$, i.e. the Biot–Savart law. So in this case how can $\nabla \cdot\vec{B}=0$? Edit (my understanding), @garyp: I am a graduate student, so I may seem a bit naive. Anyway, please tell me whether I am understanding this the right way. Here I am not considering the circuit as three dimensional. By considering it as one dimensional, the (closed) circuit becomes equivalent to a magnetic dipole layer of infinitesimal thickness. By using the inverse-square law of magnetic poles, we can find the magnetic field (intensity) at any point outside the magnetic dipole layer (even at points infinitely close to the dipole layer). Let's first see the magnetic field due to an element of the dipole layer at a point infinitely close to it: $$\vec{B}=\mu_0\vec{H}= k \ M\ \left[ \dfrac{\hat{r_1}}{r_1^2}-\dfrac{\hat{r_2}}{r_2^2} \right] dS'$$ (where $M$ is the magnetic pole density and $S'$ is the surface of the magnetic dipole layer). Now, using the divergence formula in spherical coordinates, $\nabla\cdot\big(f(r)\,\hat{r}\big)=\dfrac{1}{r^2}\dfrac{\partial\left(r^2 f(r)\right)}{\partial r}$, with $f(r)=1/r^2$: \begin{align} \nabla \cdot \vec{B} &= k \ M\ \left[ \nabla \cdot \dfrac{\hat{r_1}}{r_1^2}-\nabla \cdot \dfrac{\hat{r_2}}{r_2^2} \right] dS' \\ &= k \ M\ \left\{ \left[ \dfrac{1}{r^2} \dfrac{\partial}{\partial r}\left( r^2 \cdot \dfrac{1}{r^2} \right) \right]_{r=r_1} - \left[ \dfrac{1}{r^2} \dfrac{\partial}{\partial r}\left( r^2 \cdot \dfrac{1}{r^2} \right) \right]_{r=r_2} \right\} dS' \\ &= 0 \qquad (r_1, r_2 \neq 0), \end{align} since the two factors of $r^2$ cancel and $\partial(1)/\partial r=0$. Therefore the divergence (at points outside the dipole layer) due to each element of the dipole layer is zero. That is, the divergence (at points outside the dipole layer) due to the magnetic dipole layer is zero. Thus we see that the magnetic field may blow up at points infinitely close to the dipole layer, but its divergence will still be zero. The divergence of the magnetic field due to a (closed) circuit is therefore zero everywhere except at points on the (closed) circuit. Now comes the key point: since we know the divergence of the magnetic field due to a (closed) circuit, even at points infinitely close to the circuit, is zero, we ignore the circuit and say $\nabla \cdot \vec{B}=0$ everywhere on $\mathbb R^3$.
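The pointwise claim, $\nabla\cdot(\hat{r}/r^2)=0$ away from the singularity, is easy to check numerically with central finite differences at a hypothetical field point:

```python
import math

def B(x, y, z):
    """Inverse-square field of a single pole: r_hat / r^2 = r_vec / r^3."""
    r3 = (x * x + y * y + z * z) ** 1.5
    return (x / r3, y / r3, z / r3)

def divergence(f, x, y, z, h=1e-5):
    """Central-finite-difference estimate of div f at (x, y, z)."""
    return ((f(x + h, y, z)[0] - f(x - h, y, z)[0])
            + (f(x, y + h, z)[1] - f(x, y - h, z)[1])
            + (f(x, y, z + h)[2] - f(x, y, z - h)[2])) / (2 * h)

div = divergence(B, 1.0, 2.0, 2.0)  # hypothetical point away from the origin
print(div)  # ~0
```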
Places text as a title, xlabel, or ylabel on a group of subplots. Returns a handle to the label and a handle to the axis. [ax,h]=suplabel(text,whichLabel,supAxes) returns handles to both the axis and the label. ax=suplabel(text,whichLabel,supAxes) returns a handle to the axis only. suplabel(text) with one input argument assumes whichLabel='x' whichLabel is any of 'x', 'y', or 't', specifying whether the text is to be the xlabel, ylabel, or title respectively. supAxes is an optional argument specifying the Position of the "super" axes surrounding the subplots. supAxes defaults to [.075 .075 .85 .85] specify supAxes if labels get chopped or overlay subplots EXAMPLE: subplot(2,2,1);ylabel('ylabel1');title('title1') subplot(2,2,2);ylabel('ylabel2');title('title2') subplot(2,2,3);ylabel('ylabel3');xlabel('xlabel3') subplot(2,2,4);ylabel('ylabel4');xlabel('xlabel4') [ax,h1]=suplabel('super X label'); [ax,h2]=suplabel('super Y label','y'); [ax,h3]=suplabel('super Title' ,'t'); set(h3,'FontSize',30) SEE ALSO: text, title, xlabel, ylabel, zlabel, subplot, suptitle (Matlab Central) Very useful for presentation/manuscript plots. However, it doesn't seem to accept the 'interpreter' field/text properties, so the "_" in my label text create whimsical underscores. After using the 'suplabel' routine, I got problems to manipulate the image view (zoom in, zoom out, etc). Great! Thanks for this. Would be nice to add yyaxes functionality, to allow for two y-axis suplabels, one on the right and one on the left. Thanks gooooooood Thanks! Exactly what I needed To reflect my previous comment: uistack(ax,'bottom'); This line might be an even better solution to the problem Thanks, it works great. However, if you use it on a figure which has 'visible' 'off' then it will turn into visible.
You can fix this by changing line 86 from: for k=1:length(currax), axes(currax(k));end % restore all other axes to for k=1:length(currax), set(gcf,'CurrentAxes',currax(k));end % restore all other axes Thanks =] This is a very useful function. I give it a 5 star Does a great job. However, if you change line 65 from: if ~isstr(text) | ~isstr(whichLabel) to if ((iscell(text) & ~isstr([text{:}])) | (~iscell(text) &~isstr(text) )) | ~isstr(whichLabel) the function will also display multiline titles contained in a cell array with Matlab R2015b, suplabel changes all my other single plot titles fontsize. Has anyone else had this problem ? Thanks ! Works great in R2016a! Awesome. This should be a standard function. R2016a. Very useful! MATLAB should pre-include this in next version! When I don't fill every column space in the figure with a subplot, the title gets centred above the created sub-plots rather than the whole figure Window. e.g.: figure subplot(2,3,1); subplot(2,3,2); [ax,h]=suplabel('super Title' ,'t'); How can I prevent this or move the title to the centre of the figure window? Thanks Very useful; however, it is awkward to adjust the axes together, and isn't compatible with plotyy. Useful script. Added new argument(handles) and then ah=handles if present. This then allows choice of handles to be used to define location of axes titles. Therefore allowing titles of plots in a grid that have common titles in rows/columns but not common titles for all plots. For example, this type of plot could be created. http://walkingwithrichard.files.wordpress.com/2013/03/different-speeds-time-normalised2.png First of all thank you very much for the excellent package. My question is the following: Is there any way to horizontally align the ylabels? The problem is that my Y-axis numbers go from 0.1 to 0.0001, causing the ylabels to be wherever but not nicely aligned. this is really useful!!! Response to Will (Nov 2014): "Handles of type Legend cannot be made the current Axes." 
Before 2014b, I used to add the line currax=[currax;findobj(gcf,'tag','legend')]; as suggested by the comment by Jeff (Mar 2012) so that the legends do not get deleted. I found out that after 2014b, adding Legends into the axes list created problems. But now, I can remove the line with no ill-effects (so it seems). This is giving me problems with R2014b. I get the following error: Error using axes Handles of type Legend cannot be made the current Axes. Error in suplabel (line 87) for k=1:length(currax), axes(currax(k));end % restore all other axes I think it has something to do with this: http://www.mathworks.com/help/matlab/graphics_transition/why-are-colorbars-and-legends-not-valid-axes-handles.html Thanks for this function, very helpful! I have a problem when saving the figure though. I use it together with the tight_subplot function and I noticed that when I modify the positions of the labels (either x, y, or t) and save it as a figure (.fig - haven't tried other formats), it saves without the modifications, which is a bit frustrating. Do you have any idea on why this might be happening? Thanks!! Great script. Very useful to document a scene with subplots. However, it crashes if no supAxes is supplied and no visible axis is in the figure. This case happens if, for example, a figure is composed of subplots filled with images with imshow. Great script. Is there any way to use \bar and \tilde with it? It appears not to be possible. E.g., suplabel('\xi','t') works fine but suplabel('\bar{\xi}','t') can't interpret the string. The handle doesn't appear to have an 'Interpreter' field either. Hi everyone~ I read the code again and found what was wrong... I should put in the argument [.08 .08 .5 .5]. When I changed in the function supAxes=[.08 .08 .5 .5] (line 46), it was overwritten (lines 49-57). Thanks... Hi, I changed the position by doing supAxes=[.08 .08 .5 .5]; but the title position doesn't change... Does anybody have the same thing or know what's going on?
Any thoughts are appreciated. Thanks! if (nargout < 2), clear h; end if (nargout < 1), clear ax; end Hint: if you need to specify the optional supAxes argument, do set(ax,'Visible','on'), redimension it manually/visually, then do get(ax,'Position') to obtain the desired value. I just plugged in suplabel('my title','t') on my subplot-containing figure and it worked! Works pretty well for my purpose. Problems with plotyy: calling this after plotyy removes the 2nd axis plot, as per Omar above. Can't seem to get it to work with a pre call either. The titles/labels can be moved away from the figure edges by manually changing the value of the "axBuf=.04;" to "axBuf=.001;", this fixed the issue for me. It appears that the code allows TeX commands (like '\fontsize{14}TEXT HERE') for the y-axis command, but not x... Excellent, thank you for this. Very easy to use. I added the following after line 39 in order to preserve subplot legends: currax=[currax;findobj(gcf,'tag','legend')]; Any way to get the super label to not be squished at the top of the figure window, and closer to the subplots? Top of super title is nearly cut off, while there is a chunk of space between the super title and the subplots. I tried [ax,h1]=suplabel([' ',' ','Title'],'t'); to put some empty lines above the title, but it's still squished at the top of the window... This is fantastic - a huge time saver and so easy to use. Thank you! Does not work with plotyy... it removes the 2nd axis plot And I have some code that changes that figure creation time using suplabel from 30 seconds back down to 5.2 seconds.
Apply the following to suplabel, replacing the loop through ah (lines 74 to 82) with: %%%%% AH=get(ah); ii=find(strcmp({AH(:).Visible}.','on')); thisPos=reshape([AH(ii).Position],4,length(ii)).'; leftMin=min(thisPos(:,1)); bottomMin=min(thisPos(:,2)); leftMax=max(thisPos(:,1)+thisPos(:,3)); bottomMax=max(thisPos(:,2)+thisPos(:,4)); %%%%% and replacing line 110, kudos to Daniel Golden for his suggestion above, with %%%%% ch = get(get(ax, 'parent'), 'children'); set(get(ax, 'parent'), 'children', [ch(2:end); ch(1)]); %%%%% Now (almost) all multi-handle axis operations are vectorized. I do not know how to do it yet, but I will try to put up a unified diff patch if I can figure out how to attach a file. Has anyone given thought to vectorizing or otherwise speeding up suplabel? I have figures with many subplots (of histograms), e.g., subplot(6,8,i); and using suplabel to create the x- and y-labels changes the figure creation time from 5 seconds to 30 seconds. Using Lars' advice I can use multi-line and latex!! Nice function Can't seem to get multi-line to work... posted in NewsReader Thank you very much.. I now have some awesomely titled plots :) A very useful script!!! Thank you a lot! When I want three titles in the super title, how should it be? L1 = ['Energy spectrum function E = f (FFT (\kappa) ^ 2)']; L2 = ['height']; L3 = ['Regression'] [AX4, h3] = suplabel ([L1, L2, L3], 't'); thanks When I want three subtitles in the super title, how should it be? L1 = ['Energy spectrum function E = f (FFT (\kappa) ^ 2)']; L2 = ['height']; L3 = ['Regression'] [AX4, h3] = suplabel ([L1, L2, L3], 't'); thanks!! Is there a way to have a second "super Y label" that corresponds to subplots 2 and 4 of the example figure? Thanks. I want to have a super-legend. When the subplots are all multi-lined with the same grouping scheme, their legends will be the same. Putting a legend in each will be too busy and take too much space. Is there a way to put the common legend outside the plot matrix?
The problem with zooming is that suplabel places its axis on top of all other axes in the figure. You can fix this by adding the following lines under the ax=axes('Units','Normal','Position',supAxes,'Visible','off'); line: ch = get(get(ax, 'parent'), 'children'); set(get(ax, 'parent'), 'children', [ch(2:end); ch(1)]); This moves the "suplabel" axis to the bottom of the figure's axes list and allows you to mess with the original axes on the figure. Quite useful and simple to use. You might consider changing the argument checking to allow for cell string (multi-line) labels: if ~(ischar(text) || (iscell(text) && all(cellfun(@ischar,text)))) error('text must be a string or a cell string') end if ~ischar(whichLabel) error('whichLabel must be a string') end I have seen another thing that is sad: no Greek letter is allowed Example: suplabel('log_{10} (Re \epsilon *)','y'); does not write the epsilon, right? I have a problem with it: if I only do 4 subplots out of 6, for example 1, 2, 4, 5 and not 3 and 6, the title is not centered any more.... sad :( GREAT! works perfect thanks A quick solution to a common problem. Zooming would be nice -- I haven't checked the underpinnings, but if anyone is interested in taking a look. Decorates plots with extra labels and survives the zoom button. Potentially useful for me, but not until the zoom/rotate issue is fixed. Good tool, but like all other comments, I cannot zoom/rotate the image afterwards. Useful tool, but as Tom Van Grotel said, I can't zoom in on the subplots! Setting 'HitTest' to 'Off' did not work =( I cannot rotate an image when I use this function. Axes below the "Ghost" axes made by SUPLABEL are not editable (zoom, scale, rotate). Hint to improve this: 'HitTest' set to 'Off', but this still did not solve it. Exactly what I was looking for! Thanks for this. Excellent! Nope... not working Thank you--just what I was looking for! Very easy to use.
Just to follow up on comment 3) above, the default values are [0.08 0.08 0.84 0.84], not a major change. 1) nice little helper; 2) it would be more convenient if supaxes would default to the outer boundaries of all axes on the canvas if it is not defined by the user (see mtit on the FEX); 3) supaxes' default values do not correspond to those shown in the help section 1.5.0.0 Now allows text to be a cell array of strings to allow for multiline labels. 1.3.0.0 Restores visible axes at exit. Now zoomable, etc. at exit. 1.2.0.0 Modified to restore visible axes on exit. Now zoomable, etc. on exit. 1.1.0.0 added capability for right side y-label 1.0.0.0 Default behavior now detects existing axes. Updated default values.
Contact Info Pure Mathematics University of Waterloo 200 University Avenue West Waterloo, Ontario, Canada N2L 3G1 Departmental office: MC 5304 Phone: 519 888 4567 x33484 Fax: 519 725 0160 Email: puremath@uwaterloo.ca Alessandro Portaluri, University of Turin "Existence and Stability Results in Celestial Mechanics" Is the solar system stable? This is maybe one of the oldest open questions in dynamical systems. It is still a lively and very active research field starting from Newton, Lagrange, Maxwell, Poincaré and Birkhoff (only to mention a few), who proved several astonishing results in this direction. Michael Deveau, Department of Pure Mathematics, University of Waterloo "Computability Theory and Some Applications" Pawel Sarkowicz, Department of Pure Mathematics, University of Waterloo "A Fourier Series Approach to the Isoperimetric Problem" We will discuss the isoperimetric problem, which is a question of relating the area of an enclosed space to its perimeter (at least in the plane). We will see how this inequality comes to fruition and what its optimal solution is using Fourier series. Time permitting, we will look at generalizations of the problem. MC 5501 Ehsaan Hossain, Department of Pure Mathematics, University of Waterloo "Localisation and the Nullstellensatz" Jeff defined the basic open sets $D_f$. We'll see that in fact $D_f\simeq \mathrm{Spec}(R_f)$ where $R_f$ is a localisation. We might be able to finish the proof that $\mathrm{Spec}(R)$ is Hausdorff iff $\mathrm{Kdim}(R)=0$. Lastly, we can show that if $A$ is an affine algebra then the closed points are dense in $\mathrm{Spec}(A)$. MC 5479 Speaker 1: Shubham Dwivedi, Department of Pure Mathematics, University of Waterloo "Differential Harnack estimates" We will discuss differential Harnack estimates, including Hamilton’s matrix Harnack estimate for solutions of the heat equation and the Li-Yau inequality. If time permits, we will discuss Harnack estimates for the Ricci flow.
Speaker 2: Spiro Karigiannis, Department of Pure Mathematics, University of Waterloo "Bubble Tree Convergence for Harmonic Maps" Sam Harris, Department of Pure Mathematics, University of Waterloo "Quantum XOR games and Connes' embedding problem" Greg Patchell, Department of Pure Mathematics, University of Waterloo "Model Theory of von Neumann Algebras II" Mohammad Mahmoud, Department of Pure Mathematics, University of Waterloo "Degrees of Categoricity of Trees" Ehsaan Hossain, Department of Pure Mathematics, University of Waterloo "Zariski Topology 101" We'll (at least partially) answer the following questions: when is Spec(R) compact? Hausdorff? connected? irreducible? noetherian? Also, the basic open sets that Jeff described last time can be interpreted as localisations --- we will talk about that if time permits. MC 5479 Speaker 1: Christopher Lang, Department of Pure Mathematics, University of Waterloo "Using Group Actions to Simplify Nahm Data" The Nahm equations are a system of differential equations for $u(k)$-valued functions on $(a,b)\subset\mathbb{R}$. Solutions of the Nahm equations are called Nahm data. By imposing certain conditions on the Nahm data, the ADHM-Nahm procedure gives rise to monopoles in $\mathbb{R}^3$. Elaborating on [1], we examine how the actions of $\mathbb{R}^3$, $u(k)$, and $\mathrm{SU}(2)$ simplify the Nahm data. Samuel Harris, University of Waterloo Pawel Sarkowicz, Department of Pure Mathematics, University of Waterloo This week we will define elementary substructures and prove the Downward (and possibly Upward) Löwenheim-Skolem Theorem(s). To that end, we will introduce the notion of separable languages.
MC 5403 Michael Hartz, FernUniversität in Hagen "Dilations in finite dimensions and matrix convexity" Aasaimani Thamizhazhagan, Department of Pure Mathematics, University of Waterloo "On the invertible elements of Fourier-Stieltjes algebra" Robert Xu Yang, Department of Pure Mathematics, University of Waterloo "Sidon and Kronecker-like sets in harmonic analysis" Let $G$ be a compact abelian group and $\Gamma$ be its discrete dual group. In this thesis we study various types of interpolation sets. Jeffrey Samuelson, Department of Pure Mathematics, University of Waterloo "Introducing affine schemes" We will introduce the notion of an affine scheme and describe the Zariski topology, after which we will discuss several examples. MC 5479 Mahmoud Filali, University of Oulu "Arens irregularity in harmonic analysis" Arens irregularity of a Banach algebra is due to elements in its Banach dual which are not weakly almost periodic. Mohammad Mahmoud, Department of Pure Mathematics, University of Waterloo "Degrees of Categoricity and the Isomorphism Problem" Jaspar Wiart, RICAM, Austrian Academy of Sciences Samuel Harris, University of Waterloo Gregory Patchell, University of Waterloo "Model Theory of Tracial von Neumann Algebras" This talk is the latest in a series of seminars on the model theory of C*-algebras. This week, we axiomatize tracial von Neumann algebras, tracial factors, and II$_1$ factors. We define local classes of algebras and determine whether several classes of finite factors are local and/or axiomatizable. We follow Farah, Hart, and Sherman's work in their series of papers titled “Model Theory of Operator Algebras.” MC 5403 Mohammad Mahmoud, Department of Pure Mathematics, University of Waterloo "Degrees of Categoricity, the Isomorphism Problem, and the Turing Ordinal"
To do it for a particular number of variables is very easy to follow. Consider what you do when you integrate a function of x and y over some region. Basically, you chop up the region into boxes of area ${\rm d}x~{\rm d}y$, evaluate the function at a point in each box, multiply it by the area of the box, and sum over the boxes. This can be notated a bit sloppily as: $$\sum_{b \in \text{Boxes}} f(x,y) \cdot \text{Area}(b)$$ What you do when changing variables is to chop the region into boxes that are not rectangular, but instead chop it along lines that are defined by some function, call it $u(x,y)$, being constant. So say $u=x+y^2$, this would be all the parabolas $x+y^2=c$. You then do the same thing for another function, $v$, say $v=y+3$. Now in order to evaluate the expression above, you need to find "area of box" for the new boxes - it's not ${\rm d}x~{\rm d}y$ anymore. As the boxes are infinitesimal, the edges cannot be curved, so they must be parallelograms (adjacent lines of constant $u$ or constant $v$ are parallel.) The parallelograms are defined by two vectors - the vector resulting from a small change in $u$, and the one resulting from a small change in $v$. In component form, these vectors are ${\rm d}u\left\langle\frac{\partial x}{\partial u}, ~\frac{\partial y}{\partial u}\right\rangle $ and ${\rm d}v\left\langle\frac{\partial x}{\partial v}, ~\frac{\partial y}{\partial v}\right\rangle $. To see this, imagine moving a small distance ${\rm d}u$ along a line of constant $v$. What's the change in $x$ when you change $u$ but hold $v$ constant? The partial of $x$ with respect to $u$, times ${\rm d}u$. Same with the change in $y$. (Notice that this involves writing $x$ and $y$ as functions of $u$, $v$, rather than the other way round. The main condition of a change in variables is that both ways round are possible.)
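As a quick numerical sanity check of these tangent vectors (a sketch of mine, not part of the original answer), take the example $u=x+y^2$, $v=y+3$ from above. Inverting gives $x=u-(v-3)^2$, $y=v-3$, so one expects $\partial x/\partial u=1$, $\partial y/\partial u=0$, $\partial x/\partial v=-2(v-3)$, $\partial y/\partial v=1$:

```python
# Numerical check of the tangent-vector claim, for the example
# u = x + y^2, v = y + 3, inverted as x = u - (v - 3)^2, y = v - 3.

def x_of(u, v):
    return u - (v - 3.0) ** 2

def y_of(u, v):
    return v - 3.0

def partials(u, v, h=1e-6):
    """Central finite differences for the partials of (x, y) in (u, v)."""
    dx_du = (x_of(u + h, v) - x_of(u - h, v)) / (2 * h)
    dy_du = (y_of(u + h, v) - y_of(u - h, v)) / (2 * h)
    dx_dv = (x_of(u, v + h) - x_of(u, v - h)) / (2 * h)
    dy_dv = (y_of(u, v + h) - y_of(u, v - h)) / (2 * h)
    return dx_du, dy_du, dx_dv, dy_dv

u, v = 2.0, 5.0
dx_du, dy_du, dx_dv, dy_dv = partials(u, v)
print(dx_du, dy_du, dx_dv, dy_dv)  # close to 1, 0, -2*(v-3) = -4, 1
```

The finite differences agree with the closed-form partials, which is all the "small change in $u$ along a line of constant $v$" argument is saying.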
The area of a parallelogram bounded by $\langle x_0,~ y_0\rangle $ and $\langle x_1,~ y_1\rangle $ is $\vert y_0x_1-y_1x_0 \vert$ (or the absolute value of the determinant of a 2 by 2 matrix formed by writing the two column vectors next to each other).* So the area of each box is $$\left\vert\frac{\partial x}{\partial u}{\rm d}u\frac{\partial y}{\partial v}{\rm d}v - \frac{\partial y}{\partial u}{\rm d}u\frac{\partial x}{\partial v}{\rm d}v\right\vert$$ or $$\left\vert \frac{\partial x}{\partial u}\frac{\partial y}{\partial v} - \frac{\partial y}{\partial u}\frac{\partial x}{\partial v}\right\vert~{\rm d}u~{\rm d}v$$ which you will recognise as being $\vert\mathbf J\vert~{\rm d}u~{\rm d}v$, where $\mathbf J$ is the Jacobian determinant. So, to go back to our original expression, $$\sum_{b \in \text{Boxes}} f(x,y) \cdot \text{Area}(b)$$ becomes $$\sum_{b \in \text{Boxes}} f(u, v) \cdot \vert\mathbf J\vert \cdot {\rm d}u~{\rm d}v$$ where $f(u, v)$ is exactly equivalent to $f(x, y)$ because $u$ and $v$ can be written in terms of $x$ and $y$, and vice versa. As the number of boxes goes to infinity, this becomes an integral in the $uv$ plane. To generalize to $n$ variables, all you need is that the area/volume/equivalent of the $n$ dimensional box that you integrate over equals the absolute value of the determinant of an $n$ by $n$ matrix of partial derivatives. This is hard to prove, but easy to intuit. *To prove this, take two vectors of magnitudes $A$ and $B$, with angle $\theta$ between them.
Then write them in a basis such that one of them points along a specific direction, e.g.: $$A\left\langle \frac{1}{\sqrt 2}, \frac{1}{\sqrt 2}\right\rangle \text{ and } B\left\langle \frac{1}{\sqrt 2}(\cos(\theta)+\sin(\theta)),~ \frac{1}{\sqrt 2} (\cos(\theta)-\sin(\theta))\right\rangle $$ Now perform the operation described above ($y_0x_1 - y_1x_0$) and you get $$\begin{align} & AB\cdot \frac12 \cdot (\cos(\theta)+\sin(\theta)) - AB \cdot \frac12 \cdot (\cos(\theta) - \sin(\theta)) \\ = & \frac 12 AB(\cos(\theta)+\sin(\theta)-\cos(\theta)+\sin(\theta)) \\ = & AB\sin(\theta) \end{align}$$ This value, $AB\sin(\theta)$, is how you find the area of a parallelogram - the product of the lengths of the sides times the sine of the angle between them.
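The determinant formula for the area can likewise be checked numerically (the magnitudes, angle, and rotation below are arbitrary choices of mine): span a parallelogram by two vectors, rotate both by any angle, and compare $\vert y_0x_1-y_1x_0\vert$ with $AB\sin(\theta)$:

```python
import math

# The parallelogram spanned by v0 and v1 has area |y0*x1 - y1*x0|,
# which should equal A*B*sin(theta) for magnitudes A, B and angle theta.
A, B, theta = 2.0, 3.0, 0.7

# Place the first vector along the x-axis and the second at angle theta,
# then rotate the whole picture by an arbitrary angle phi: the determinant
# is unchanged, which is why the footnote's choice of basis is harmless.
phi = 1.1

def rot(x, y, ang):
    return (x * math.cos(ang) - y * math.sin(ang),
            x * math.sin(ang) + y * math.cos(ang))

x0, y0 = rot(A, 0.0, phi)
x1, y1 = rot(B * math.cos(theta), B * math.sin(theta), phi)

det_area = abs(y0 * x1 - y1 * x0)
sine_area = A * B * math.sin(theta)
print(det_area, sine_area)  # the two agree
```

Rotating both vectors leaves the determinant fixed, which is exactly the basis-independence the footnote relies on.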
Unitriangular matrix group:UT(3,p)

Revision as of 14:47, 18 September 2012

This article is about a family of groups with a parameter that is prime. For any fixed value of the prime, we get a particular group. View other such prime-parametrized groups

Definition

As a group of matrices

Given a prime <math>p</math>, the group <math>UT(3,p)</math> is defined as the [[unitriangular matrix group]] of [[unitriangular matrix group of degree three|degree three]] over the [[prime field]] <math>\mathbb{F}_p</math>. Explicitly, it has the form:

<math>\left \{ \begin{pmatrix} 1 & a_{12} & a_{13} \\ 0 & 1 & a_{23} \\ 0 & 0 & 1 \\\end{pmatrix} \mid a_{12},a_{13},a_{23} \in \mathbb{F}_p \right \}</math>

The analysis given below does not apply to the case . For , we get the dihedral group:D8, which is studied separately.

As a semidirect product

This group of order can also be described as a semidirect product of the elementary abelian group of order by the cyclic group of order , where the generator of the cyclic group of order acts via the automorphism: In this case, for instance, we can take the subgroup with as the elementary abelian subgroup of order and the subgroup with as the cyclic subgroup of order .
Definition by presentation The group can be defined by means of the following presentation: where denotes the identity element. These commutation relations resemble Heisenberg's commutation relations in quantum mechanics, and so the group is sometimes called a finite Heisenberg group. Generators correspond to matrices: In coordinate form We may define the group as the set of triples over the prime field , with the multiplication law given by: . The matrix corresponding to triple is: Families These groups fall into the more general family of unitriangular matrix groups. The unitriangular matrix group can be described as the group of unipotent upper-triangular matrices in , which is also a -Sylow subgroup of the general linear group . This further can be generalized to where is the power of a prime . is the -Sylow subgroup of . These groups also fall into the general family of extraspecial groups. Elements Further information: element structure of unitriangular matrix group:UT(3,p) Summary Item Value number of conjugacy classes order Agrees with general order formula for : conjugacy class size statistics size 1 ( times), size ( times) orbits under automorphism group Case : size 1 (1 conjugacy class of size 1), size 1 (1 conjugacy class of size 1), size 2 (1 conjugacy class of size 2), size 4 (2 conjugacy classes of size 2 each) Case odd : size 1 (1 conjugacy class of size 1), size ( conjugacy classes of size 1 each), size ( conjugacy classes of size each) number of orbits under automorphism group 4 if 3 if is odd order statistics Case : order 1 (1 element), order 2 (5 elements), order 4 (2 elements) Case odd: order 1 (1 element), order ( elements) exponent 4 if if odd Conjugacy class structure Note that the characteristic polynomial of all elements in this group is , hence we do not devote a column to the characteristic polynomial.
For reference, we consider matrices of the form: Nature of conjugacy class Jordan block size decomposition Minimal polynomial Size of conjugacy class Number of such conjugacy classes Total number of elements Order of elements in each such conjugacy class Type of matrix identity element 1 + 1 + 1 + 1 1 1 1 1 non-identity element, but central (has Jordan blocks of size one and two respectively) 2 + 1 1 , non-central, has Jordan blocks of size one and two respectively 2 + 1 , but not both and are zero non-central, has Jordan block of size three 3 if odd 4 if both and are nonzero Total (--) -- -- -- -- -- Arithmetic functions Compare and contrast arithmetic function values with other groups of prime-cube order at Groups of prime-cube order#Arithmetic functions For some of these, the function values are different when and/or when . These are clearly indicated below. Arithmetic functions taking values between 0 and 3 Function Value Explanation prime-base logarithm of order 3 the order is prime-base logarithm of exponent 1 the exponent is . Exception when , where the exponent is . 
nilpotency class 2 derived length 2 Frattini length 2 minimum size of generating set 2 subgroup rank 2 rank as p-group 2 normal rank as p-group 2 characteristic rank as p-group 1 Arithmetic functions of a counting nature Function Value Explanation number of conjugacy classes elements in the center, and each other conjugacy class has size number of subgroups when , when See subgroup structure of unitriangular matrix group:UT(3,p) number of normal subgroups See subgroup structure of unitriangular matrix group:UT(3,p) number of conjugacy classes of subgroups for , for See subgroup structure of unitriangular matrix group:UT(3,p) Subgroups Further information: Subgroup structure of unitriangular matrix group:UT(3,p) Table classifying subgroups up to automorphisms Automorphism class of subgroups Representative Isomorphism class Order of subgroups Index of subgroups Number of conjugacy classes Size of each conjugacy class Number of subgroups Isomorphism class of quotient (if exists) Subnormal depth (if subnormal) trivial subgroup trivial group 1 1 1 1 prime-cube order group:U(3,p) 1 center of unitriangular matrix group:UT(3,p) ; equivalently, given by . 
group of prime order 1 1 1 elementary abelian group of prime-square order 1 non-central subgroups of prime order in unitriangular matrix group:UT(3,p) Subgroup generated by any element with at least one of the entries nonzero group of prime order -- 2 elementary abelian subgroups of prime-square order in unitriangular matrix group:UT(3,p) join of center and any non-central subgroup of prime order elementary abelian group of prime-square order 1 group of prime order 1 whole group all elements unitriangular matrix group:UT(3,p) 1 1 1 1 trivial group 0 Total (5 rows) -- -- -- -- -- -- -- Tables classifying isomorphism types of subgroups Group name GAP ID Occurrences as subgroup Conjugacy classes of occurrence as subgroup Occurrences as normal subgroup Occurrences as characteristic subgroup Trivial group 1 1 1 1 Group of prime order 1 1 Elementary abelian group of prime-square order 0 Prime-cube order group:U3p 1 1 1 1 Total -- Table listing number of subgroups by order Group order Occurrences as subgroup Conjugacy classes of occurrence as subgroup Occurrences as normal subgroup Occurrences as characteristic subgroup 1 1 1 1 1 1 0 1 1 1 1 Total Linear representation theory Further information: linear representation theory of unitriangular matrix group:UT(3,p) Item Value number of conjugacy classes (equals number of irreducible representations over a splitting field) . See number of irreducible representations equals number of conjugacy classes, element structure of unitriangular matrix group of degree three over a finite field degrees of irreducible representations over a splitting field (such as or ) 1 (occurs times), (occurs times) sum of squares of degrees of irreducible representations (equals order of the group) see sum of squares of degrees of irreducible representations equals order of group lcm of degrees of irreducible representations condition for a field (characteristic not equal to ) to be a splitting field The polynomial should split completely. 
For a finite field of size , this is equivalent to . field generated by character values, which in this case also coincides with the unique minimal splitting field (characteristic zero) Field where is a primitive root of unity. This is a degree extension of the rationals. unique minimal splitting field (characteristic ) The field of size where is the order of mod . degrees of irreducible representations over the rational numbers 1 (1 time), ( times), (1 time) Orbits over a splitting field under the action of the automorphism group Case : Orbit sizes: 1 (degree 1 representation), 1 (degree 1 representation), 2 (degree 1 representations), 1 (degree 2 representation) Case odd : Orbit sizes: 1 (degree 1 representation), (degree 1 representations), (degree representations) number: 4 (for ), 3 (for odd ) Orbits over a splitting field under the multiplicative action of one-dimensional representations Orbit sizes: (degree 1 representations), and orbits of size 1 (degree representations) Subgroup-defining functions Subgroup-defining function Subgroup type in list Isomorphism class Comment Center (2) Group of prime order Commutator subgroup (2) Group of prime order Frattini subgroup (2) Group of prime order The maximal subgroups of order intersect here. Socle (2) Group of prime order This subgroup is the unique minimal normal subgroup, i.e., the monolith, and the group is monolithic. Also, minimal normal implies central in nilpotent. Quotient-defining function Quotient-defining function Isomorphism class Comment Inner automorphism group Elementary abelian group of prime-square order It is the quotient by the center, which is of prime order. Abelianization Elementary abelian group of prime-square order It is the quotient by the commutator subgroup, which is of prime order. Frattini quotient Elementary abelian group of prime-square order It is the quotient by the Frattini subgroup, which is of prime order.
GAP implementation GAP ID For any prime , this group is the third group among the groups of order . Thus, for instance, if , the group is described using GAP's SmallGroup function as: SmallGroup(343,3) Note that we don't need to compute ; we can also write this as: SmallGroup(7^3,3) As an extraspecial group For any prime , we can define this group using GAP's ExtraspecialGroup function as: ExtraspecialGroup(p^3,'+') For , it can also be constructed as: ExtraspecialGroup(p^3,p) where the argument indicates that it is the extraspecial group of exponent . For instance, for : ExtraspecialGroup(5^3,5) Endomorphisms Automorphisms The automorphisms essentially permute the subgroups of order containing the center, while leaving the center itself unmoved. Related groups For any prime , there are (up to isomorphism) two non-abelian groups of order . One of them is this, and the other is the semidirect product of the cyclic group of order by a group of order acting by power maps (with the generator corresponding to multiplication by ).
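Several facts quoted above for UT(3,p) (order $p^3$, center of order $p$, $p^2+p-1$ conjugacy classes, exponent $p$ for odd $p$) can be checked by brute force for a small prime. A minimal sketch in Python, using $p=3$ as an illustrative case (the code is not part of the wiki page):

```python
from itertools import product

p = 3  # any small odd prime works here

def mat(a12, a13, a23):
    """Unitriangular 3x3 matrix over F_p, stored as a tuple of rows."""
    return ((1, a12, a13), (0, 1, a23), (0, 0, 1))

def mul(M, N):
    """Matrix product mod p."""
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(3)) % p
                       for j in range(3)) for i in range(3))

G = [mat(a, b, c) for a, b, c in product(range(p), repeat=3)]
I = mat(0, 0, 0)

# Inverses, by brute force (the group is tiny).
INV = {g: next(h for h in G if mul(g, h) == I) for g in G}

# The center should consist of the matrices with a12 = a23 = 0.
center = [g for g in G if all(mul(g, h) == mul(h, g) for h in G)]

# Count conjugacy classes.
seen, classes = set(), 0
for g in G:
    if g not in seen:
        seen |= {mul(mul(h, g), INV[h]) for h in G}
        classes += 1

# Exponent: for odd p, every element should have order dividing p.
def power(g, n):
    r = I
    for _ in range(n):
        r = mul(r, g)
    return r

print(len(G), len(center), classes)  # 27 3 11 for p = 3
assert all(power(g, p) == I for g in G)
```

For $p=3$ this reproduces order $27$, a center of order $3$, and $3^2+3-1=11$ conjugacy classes ($p$ central classes of size 1 plus $p^2-1$ classes of size $p$), matching the tables above.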
If I understand correctly, a distribution in the exponential family... $$\underline X\sim f_{\underline\theta}(\underline x) = \exp\{\sum\limits_{i}\eta_i(\underline\theta)T_i(\underline x)-B(\underline\theta)\}~h(\underline x)$$ ...where $\underline\eta(\centerdot)$ is a (possibly vector-valued) parameter, $\underline T(\centerdot)$ is a corresponding vector of sufficient statistics, $B(\underline\theta)$ is the log partition, and $h(\underline x)$ is the base measure, and $\underline T(\underline x)$ itself has the following distribution: $$\underline T(\underline x) =\underline t\sim g_{\underline\theta}(\underline t) = \exp\{\sum\limits_{i}\eta_i(\underline\theta)t_i-B(\underline\theta)\}~h^*(\underline t)$$ ...where $h^*(\centerdot)$ is not necessarily the same function as the one from $f_{\underline\theta}(\underline x)$ above. My questions are: Are the other two functions ($B(\underline\theta)$ and $\underline\eta(\underline\theta)$) identical to the ones from $f_{\underline\theta}(\underline x)$? What is the general strategy for finding $h^*(\centerdot)$?
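One concrete case can be checked numerically (this example is mine, not part of the question): for $n$ i.i.d. Bernoulli($\theta$) observations, $T(\underline x)=\sum_i x_i$ is Binomial($n,\theta$), the same $\eta$ and $B$ reappear in $g$, and $h^*(t)=\binom{n}{t}$, which is exactly the number of $\underline x$ with $T(\underline x)=t$, i.e. $h^*$ sums $h$ over the level set of $T$:

```python
import math

# Bernoulli(theta)^n: f(x) = exp(eta * T(x) - B) * h(x) with
# eta = log(theta / (1 - theta)), B = n * log(1 + e^eta), h(x) = 1.
# Claim: T ~ Binomial(n, theta) uses the SAME eta and B, with h*(t) = C(n, t).
n, theta = 10, 0.3
eta = math.log(theta / (1 - theta))
B = n * math.log(1 + math.exp(eta))

for t in range(n + 1):
    claimed = math.exp(eta * t - B) * math.comb(n, t)            # exponential-family form
    direct = math.comb(n, t) * theta**t * (1 - theta)**(n - t)   # Binomial pmf
    assert abs(claimed - direct) < 1e-12

total = sum(math.exp(eta * t - B) * math.comb(n, t) for t in range(n + 1))
print(total)  # close to 1, so the claimed form is a genuine pmf
```

This matches the general pattern: $\eta$ and $B$ carry over unchanged, and $h^*$ is obtained by aggregating the base measure $h$ over $\{\underline x : T(\underline x)=\underline t\}$ (a sum in the discrete case, a surface integral in the continuous case).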
In set theory, we have the phenomenon of the universal definition. This is a property $\phi(x)$, first-order expressible in the language of set theory, that necessarily holds of exactly one set, but which can in principle define any particular desired set that you like, if one should simply interpret the definition in the right set-theoretic universe. So $\phi(x)$ could be defining the set of real numbers $x=\mathbb{R}$ or the integers $x=\mathbb{Z}$ or the number $x=e^\pi$ or a certain group or a certain topological space or whatever set you would want it to be. For any mathematical object $a$, there is a set-theoretic universe in which $a$ is the unique object $x$ for which $\phi(x)$. The universal definition can be viewed as a set-theoretic analogue of the universal algorithm, a topic on which I have written several recent posts: Let’s warm up with the following easy instance. Theorem. Any particular real number $r$ can become definable in a forcing extension of the universe. Proof. By Easton’s theorem, we can control the generalized continuum hypothesis precisely on the regular cardinals, and if we start (by forcing if necessary) in a model of GCH, then there is a forcing extension where $2^{\aleph_n}=\aleph_{n+1}$ just in case the $n^{th}$ binary digit of $r$ is $1$. In the resulting forcing extension $V[G]$, therefore, the real $r$ is definable as: the real whose binary digits conform with the GCH pattern on the cardinals $\aleph_n$. QED Since this definition can be settled in a rank-initial segment of the universe, namely, $V_{\omega+\omega}$, the complexity of the definition is $\Delta_2$. See my post on Local properties in set theory to see how I think about locally verifiable and locally decidable properties in set theory. If we push the argument just a little, we can go beyond the reals. Theorem.
There is a formula $\psi(x)$, of complexity $\Sigma_2$, such that for any particular object $a$, there is a forcing extension of the universe in which $\psi$ defines $a$. Proof. Fix any set $a$. By the axiom of choice, we may code $a$ with a set of ordinals $A\subset\kappa$ for some cardinal $\kappa$. (One well-orders the transitive closure of $\{a\}$ and thereby finds a bijection $\langle\mathop{tc}(\{a\}),\in\rangle\cong\langle\kappa,E\rangle$ for some $E\subset\kappa\times\kappa$, and then codes $E$ to a set $A$ by an ordinal pairing function. The set $A$ tells you $E$, which tells you $\mathop{tc}(\{a\})$ by the Mostowski collapse, and from this you find $a$.) By Easton’s theorem, there is a forcing extension $V[G]$ in which the GCH holds at all $\aleph_{\lambda+1}$ for a limit ordinal $\lambda<\kappa$, but fails at $\aleph_{\kappa+1}$, and such that $\alpha\in A$ just in case $2^{\aleph_{\alpha+2}}=\aleph_{\alpha+3}$ for $\alpha<\kappa$. That is, we manipulate the GCH pattern to exactly code both $\kappa$ and the elements of $A\subset\kappa$. Let $\psi(x)$ assert that $x$ is the set that is decoded by this process: look for the first stage where the GCH fails at $\aleph_{\lambda+2}$, and then extract the set $A$ of ordinals, and then check if $x$ is the set coded by $A$. The assertion $\psi(x)$ did not depend on $a$, and since it can be verified in any sufficiently large $V_\theta$, the assertion $\psi(x)$ has complexity $\Sigma_2$. QED Let’s try to make a better universal definition. As I mentioned at the outset, I have been motivated to find a set-theoretic analogue of the universal algorithm, and in that computable context, we had a universal algorithm that could not only produce any desired finite set, when run in the right universe, but which furthermore had a robust interaction between models of arithmetic and their top-extensions: any set could be extended to any other set for which the algorithm enumerated it in a taller universe.
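(As an aside, the first step of the proof, coding a set by a set of ordinals from which it can be recovered, has a simple finite analogue: the Ackermann coding of hereditarily finite sets by natural numbers, $\text{code}(x)=\sum_{y\in x}2^{\text{code}(y)}$. The sketch below is purely illustrative and simulates only the coding step, nothing about forcing:)

```python
def encode(s):
    """Hereditarily finite set (nested frozensets) -> natural number."""
    return sum(2 ** encode(e) for e in s)

def decode(n):
    """Natural number -> hereditarily finite set, reading off binary digits."""
    return frozenset(decode(i) for i in range(n.bit_length()) if (n >> i) & 1)

# The first few von Neumann ordinals as hereditarily finite sets.
zero = frozenset()             # 0 = {}
one = frozenset({zero})        # 1 = {0}
two = frozenset({zero, one})   # 2 = {0, 1}

codes = [encode(zero), encode(one), encode(two)]
print(codes)  # [0, 1, 3]
assert all(decode(encode(s)) == s for s in (zero, one, two))
```

The "set of bit positions" here plays the role of the set of ordinals $A\subset\kappa$ in the proof: the coded object is fully recoverable from which positions are marked.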
Here, I’d like to achieve the same robustness of interaction with the universal definition, as one moves from one model of set theory to a taller model. We say that one model of set theory $N$ is a top-extension of another $M$, if all the new sets of $N$ have rank totally above the ranks occurring in $M$. Thus, $M$ is a rank-initial segment of $N$. If there is a least new ordinal $\beta$ in $N\setminus M$, then this is equivalent to saying that $M=V_\beta^N$. Theorem. There is a formula $\phi(x)$, such that (1) in any model of ZFC, there is a unique set $a$ satisfying $\phi(a)$; and (2) for any countable model $M\models\text{ZFC}$ and any $a\in M$, there is a top-extension $N$ of $M$ such that $N\models \phi(a)$. Thus, $\phi(x)$ is the universal definition: it always defines some set, and that set can be any desired set, even when moving from a model $M$ to a top-extension $N$. Proof. The previous manner of coding will not achieve property (2), since the GCH pattern coding started immediately, and so it would be preserved to any top-extension. What we need to do is to place the coding much higher in the universe, so that in the top-extension $N$, it will occur in the part of $N$ that is totally above $M$. So consider the following process. In any model of set theory, let $\phi(x)$ assert that $x$ is the empty set unless the GCH holds at all sufficiently large cardinals, and indeed $\phi(x)$ is false unless there is a cardinal $\delta$ and ordinal $\gamma<\delta^+$ such that the GCH holds at all cardinals above $\aleph_{\delta+\gamma}$. In this case, let $\delta$ be the smallest such cardinal for which that is true, and let $\gamma$ be the smallest ordinal working with this $\delta$. So both $\delta$ and $\gamma$ are definable. Now, let $A\subset\gamma$ be the set of ordinals $\alpha$ for which the GCH holds at $\aleph_{\delta+\alpha+1}$, and let $\phi(x)$ assert that $x$ is the set coded by the set $A$.
It is clear that $\phi(x)$ defines a unique set, in any model of ZFC, and so (1) holds. For (2), suppose that $M$ is a countable model of ZFC and $a\in M$. It is a fact that every countable model of ZFC has a top-extension, by the definable ultrapower method. Let $N_0$ be a top-extension of $M$. Let $N=N_0[G]$ be a forcing extension of $N_0$ in which the set $a$ is coded into the GCH pattern very high up, at cardinals totally above $M$, and such that the GCH holds above this coding, in such a way that the process described in the previous paragraph would define exactly the set $a$. So $\phi(a)$ holds in $N$, which is a top-extension of $M$ as no new sets of small rank are added by the forcing. So statement (2) also holds. QED The complexity of the definition is $\Pi_3$, mainly because in order to know where to look for the coding, one needs to know the ordinals $\delta$ and $\gamma$, and so one needs to know that the GCH always holds above that level. This is a $\Pi_3$ property, since it cannot be verified locally only inside some $V_\theta$. A stronger analogue with the universal algorithm — and this is a question that motivated my thinking about this topic — would be something like the following: Question. Is there a $\Sigma_2$ formula $\varphi(x)$, that is, a locally verifiable property, with the following properties? (1) In any model of ZFC, the class $\{x\mid\varphi(x)\}$ is a set. (2) It is consistent with ZFC that $\{x\mid\varphi(x)\}$ is empty. (3) For any countable model $M\models\text{ZFC}$ in which $\{x\mid\varphi(x)\}=a$ and any set $b\in M$ with $a\subset b$, there is a top-extension $N$ of $M$ in which $\{x\mid\varphi(x)\}=b$. An affirmative answer would be a very strong analogue with the universal algorithm and Woodin’s theorem about which I wrote previously. The idea is that the $\Sigma_2$ properties $\varphi(x)$ in set theory are analogous to the computably enumerable properties in computability theory.
Namely, to verify that an object has a certain computably enumerable property, we run a particular computable process and then sit back, waiting for the process to halt, until a stage of computation arrives at which the property is verified. Similarly, in set theory, to verify that a set has a particular $\Sigma_2$ property, we sit back watching the construction of the cumulative set-theoretic universe, until a stage $V_\beta$ arrives that provides verification of the property. This is why in statement (3) we insist that $a\subset b$, since the $\Sigma_2$ properties are always upward absolute to top-extensions; once an object is placed into $\{x\mid\varphi(x)\}$, then it will never be removed as one makes the universe taller. So the hope was that we would be able to find such a universal $\Sigma_2$ definition, which would serve as a set-theoretic analogue of the universal algorithm used in Woodin’s theorem. If one drops the first requirement, and allows $\{x\mid \varphi(x)\}$ to sometimes be a proper class, then one can achieve a positive answer as follows. Theorem. There is a $\Sigma_2$ formula $\varphi(x)$ with the following properties. (1) If the GCH holds, then $\{x\mid\varphi(x)\}$ is empty. (2) For any countable model $M\models\text{ZFC}$ where $a=\{x\mid \varphi(x)\}$ and any $b\in M$ with $a\subset b$, there is a top-extension $N$ of $M$ in which $N\models\{x\mid\varphi(x)\}=b$. Proof. Let $\varphi(x)$ assert that the set $x$ is coded into the GCH pattern. We may assume that the coding mechanism of a set is marked off by certain kinds of failures of the GCH at odd-indexed alephs, with the pattern at intervening even-indexed regular cardinals forming the coding pattern. This is $\Sigma_2$, since any large enough $V_\theta$ will reveal whether a given set $x$ is coded in this way. And because of the manner of coding, if the GCH holds, then no set is coded. Also, if the GCH holds eventually, then only a set-sized collection is coded.
Finally, any countable model $M$ where only a set is coded can be top-extended to another model $N$ in which any desired superset of that set is coded. QED Update. Originally, I had proposed an argument for a negative answer to the question, and I was actually a bit disappointed by that, since I had hoped for a positive answer. However, it now seems to me that the argument I had written is wrong, and I am grateful to Ali Enayat for his remarks on this in the comments. I have now deleted the incorrect argument. Meanwhile, here is a positive answer to the question in the case of models of $V\neq\newcommand\HOD{\text{HOD}}\HOD$. Theorem. There is a $\Sigma_2$ formula $\varphi(x)$ with the following properties: (1) In any model of $\newcommand\ZFC{\text{ZFC}}\ZFC+V\neq\HOD$, the class $\{x\mid\varphi(x)\}$ is a set. (2) It is relatively consistent with $\ZFC$ that $\{x\mid\varphi(x)\}$ is empty; indeed, in any model of $\ZFC+\newcommand\GCH{\text{GCH}}\GCH$, the class $\{x\mid\varphi(x)\}$ is empty. (3) If $M\models\ZFC$ thinks that $a=\{x\mid\varphi(x)\}$ is a set and $b\in M$ is a larger set with $a\subset b$, then there is a top-extension $N$ of $M$ in which $\{x\mid \varphi(x)\}=b$. Proof. Let $\varphi(x)$ hold if there is some ordinal $\alpha$ such that every element of $V_\alpha$ is coded into the GCH pattern below some cardinal $\delta_\alpha$, with $\delta_\alpha$ as small as possible with that property, and $x$ is the next set coded into the GCH pattern above $\delta_\alpha$. This is a $\Sigma_2$ property, since it can be verified in any sufficiently large $V_\theta$. In any model of $\ZFC+V\neq\HOD$, there must be some sets that are not coded into the $\GCH$ pattern, for if every set were coded that way then there would be a definable well-ordering of the universe and we would have $V=\HOD$. So in any model of $V\neq\HOD$, there is a bound on the ordinals $\alpha$ for which $\delta_\alpha$ exists, and therefore $\{x\mid\varphi(x)\}$ is a set. So statement (1) holds.
Statement (2) holds, because we may arrange it so that the GCH itself implies that no set is coded at all, and so $\varphi(x)$ would always fail. For statement (3), suppose that $M\models\ZFC+\{x\mid\varphi(x)\}=a\subseteq b$ and $M$ is countable. In $M$, there must be some minimal rank $\alpha$ for which there is a set of rank $\alpha$ that is not coded into the GCH pattern. Let $N$ be an elementary top-extension of $M$, so $N$ agrees that $\alpha$ is that minimal rank. Now, by forcing over $N$, we can arrange to code all the sets of rank $\alpha$ into the GCH pattern above the height of the original model $M$, and we can furthermore arrange so as to code any given element of $b$ just above that coding. Iterating this, we can arrange the coding above the height of $M$ so that exactly the elements of $b$ satisfy $\varphi(x)$, but no more. In this way, we will ensure that $N\models\{x\mid\varphi(x)\}=b$, as desired. QED I find the situation unusual, in that often results from the models-of-arithmetic context generalize to set theory with models of $V=\HOD$, because the global well-order means that models of $V=\HOD$ have definable Skolem functions, which is true in every model of arithmetic and which sometimes figures implicitly in constructions. But here, we have the result of Woodin’s theorem generalizing from models of arithmetic to models of $V\neq\HOD$. Perhaps this suggests that we should expect a fully positive solution for models of set theory. Further update. Woodin and I have now established the fully general result of the universal finite set, which subsumes much of the preliminary early analysis that I had earlier made in this post. Please see my post, The universal finite set.
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class rather than do well in it. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra. Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus. One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of. @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have. Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector $(x,y)$, and how will the magnitude of that vector change?
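The eigenvalue question above is quick to explore numerically. The two eigenvalues are not stated in the chat; assuming they are $2$ and $1/2$, as the operator-norm remark that follows suggests, the diagonalized matrix stretches one eigendirection and shrinks the other:

```python
# Hypothetical eigenvalues suggested by the discussion: 2 and 1/2.
LAMBDAS = (2.0, 0.5)

def apply_diag(v):
    """Apply diag(2, 1/2) to a vector v = (x, y) written in the eigenbasis."""
    return (LAMBDAS[0] * v[0], LAMBDAS[1] * v[1])

def norm(v):
    """Euclidean magnitude of v."""
    return (v[0] ** 2 + v[1] ** 2) ** 0.5
```

Along the first eigenvector the magnitude doubles; along the second it halves; for a general vector the ratio lands strictly between those extremes.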
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2. Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$. Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$: for example, writing $x = ac+bd\delta$ and $y = bc+ad$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$. I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of. One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set). theorem ...
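The multiplication rule quoted above is easy to mechanize, and a few lines of exact rational arithmetic will spot-check the associativity computation without any LaTeX bookkeeping. Here $\delta=2$ is an arbitrary example value; the general proof still goes through the ring axioms of $\Bbb{Q}$ as described:

```python
from fractions import Fraction

DELTA = Fraction(2)  # example choice of delta; any rational non-square works the same way

def mult(p, q):
    """Multiply a + b*sqrt(delta) by c + d*sqrt(delta).
    Elements are pairs (a, b) of Fractions; uses the rule
    (a, b) * (c, d) = (a*c + b*d*delta, b*c + a*d)."""
    a, b = p
    c, d = q
    return (a * c + b * d * DELTA, b * c + a * d)
```

Spot-checking `mult(mult(x, y), z) == mult(x, mult(y, z))` on particular triples confirms instances; the symbolic argument in the chat handles all cases at once.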
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible field. It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or which derive CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system. @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I have wondered whether to show that is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed. Put another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
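Cantor's diagonalization argument mentioned above has a finite shadow that runs mechanically: given any $n$ binary sequences, flipping the diagonal entries produces a sequence that differs from the $i$-th one in position $i$, so it cannot appear in the list. A minimal sketch:

```python
def diagonal_flip(rows):
    """Given n binary rows (each of length >= n), return a row that differs
    from rows[i] at position i, hence is absent from the list."""
    return [1 - rows[i][i] for i in range(len(rows))]
```

This is exactly the structure that the Isabelle/HOL proof automates; in the infinite case the same flip defeats any purported surjection from a set onto its power set.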
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book. The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^{n}}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle nor the mean value theorem need the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment. typo: neither Rolle nor the mean value theorem need the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
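The partial sums in the finitist construction above can be computed exactly with rational arithmetic; with $b=2$ the limit is Liouville's constant. A sketch (purely illustrative; the finitist point is that each partial sum is itself a finite object):

```python
from fractions import Fraction
from math import factorial

def liouville_partial(b, M):
    """Exact partial sum sum_{k=1}^{M} 1/b^(k!) as a Fraction."""
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))
```

The values are monotonically increasing and bounded above by $\sum_{m\ge1} b^{-m} = 1/(b-1)$, matching the convergence claim; for instance `liouville_partial(2, 3)` is $49/64$.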
The transistor going into saturation isn't a property of the transistor itself, but instead a property of the circuit surrounding the transistor, with the transistor as part of it. The simplest case to imagine is an NPN switch. I'll present two different such switch circuits to make the above point concretely clear. (Schematic created using CircuitLab.) Let's assume a perfect \$\beta=100\$ in both cases shown. We are going to increase the current source's current into the base (of either or both NPNs) from \$0\:\mu\textrm{A}\$ to \$100\:\mu\textrm{A}\$. In both cases, the relationship of \$I_C=\beta\cdot I_B=100\cdot I_B\$ will be considered to hold until some other limitation forces this relationship to change. In the left circuit, as the base current rises, the permitted collector current also rises. When \$I_B=90\:\mu\textrm{A}\$, then \$I_C=9\:\textrm{mA}\$ and the voltage drop across \$R_1\$ will be \$I_C\cdot R_1=9\:\textrm{mA}\cdot 1\:\textrm{k}\Omega=9\:\textrm{V}\$. This works fine, as the collector voltage will then be \$V_C=10\:\textrm{V}-I_C\cdot R_1=1\:\textrm{V}\$. This voltage is between the ground rail of \$0\:\textrm{V}\$ and the power supply rail of \$10\:\textrm{V}\$, and the transistor isn't yet in saturation, as I still expect \$V_C\ge V_B\$ (given \$V_E=0\:\textrm{V}\$.) But as \$I_B\$ increases still further towards \$I_B=100\:\mu\textrm{A}\$, I would expect the collector voltage to drop still further to a final case where \$V_C=0\:\textrm{V}\$ as the voltage drop across \$R_1\$ reaches a full \$10\:\textrm{V}\$. This assumes that the collector actually can achieve such a situation. But it can't. Before discussing why, let's move to the right side. Looking at the right side schematic, you see the same thing except that \$R_2=10\:\textrm{k}\Omega\$, instead. Other details remain the same.
In this case, though, the voltage drop across \$R_2\$ will reach \$9\:\textrm{V}\$ when \$I_B=9\:\mu\textrm{A}\$ and \$I_C=900\:\mu\textrm{A}\$, and a full \$10\:\textrm{V}\$ voltage drop when \$I_B=10\:\mu\textrm{A}\$. Assuming this is even possible, what should then happen as the \$I_2\$ current source increases still further? Well... nothing more can happen. There's no possible way for a still larger voltage drop across the collector resistor, \$R_2\$. To do so would require \$Q_2\$'s collector to move into a negative voltage value with respect to ground. But there are no sources for that negative voltage available and while \$Q_2\$'s guts might be able to produce voltages in between \$0\:\textrm{V}\$ and \$10\:\textrm{V}\$, \$Q_2\$'s guts can't manufacture voltages outside that range out of thin air. It just doesn't happen. So the process stops here. More base current achieves nothing. You can apply it, of course. There's nothing to stop \$I_2\$ from continuing right on up to a full \$100\:\mu\textrm{A}\$. So that works just fine. But the collector voltage just cannot continue its downward direction any more. So the collector current just stops rising, regardless of the base current. The result is that the effective \$\beta\$ drops from 100 to some lower value, then. All this said, though, a real BJT can't even cause the collector voltage to match up exactly with its emitter voltage. The base-collector diode can go into a forward biased mode in order to allow the collector to drop. And it has to do that, if it is going to squeeze out the last remaining additional dribbles of collector current so that the voltage drop across the collector resistor can rise just a little bit more. But at some point before the collector voltage reaches the emitter voltage, the process halts. There must be at least a small voltage difference remaining, just to operate at all.
This might cause the base-emitter diode to be forward biased with \$800\:\textrm{mV}\$ while the base-collector diode is forward biased with \$600\:\textrm{mV}\$, so that \$V_{CE}=200\:\textrm{mV}\$. But the base-collector diode cannot be more forward biased than the base-emitter diode, because doing so would require the BJT to present an impossible collector voltage that it cannot observe and cannot just create out of thin air. (At least, in the circuits I've shown above.) At this point, it should also be clear that the external circuit matters. These two circuits were identical except for the collector load. The limitation of collector current depends upon the collector resistor value, as well as the BJT. So saturation is best not seen as just an internal detail of the BJT; it also depends upon what is surrounding the BJT. Another way of saying this is that the transistor enters gradually into saturation as the collector current, coupled with the collector load external to it, causes the collector voltage to move in such a way that the base-collector diode transitions from being reverse-biased into becoming forward-biased. While the BC junction is still reverse-biased, the transistor is in active mode. Once the BC junction transitions into forward-biased, the BJT is in saturated mode. However, saturation is gradual in the sense that \$\beta\$ gradually declines and doesn't suddenly change (it's not a switch-like effect) -- in the above examples where the base current is gradually changed. For design purposes, if you want a switch behavior then you anticipate the above process and just design around some value of \$\beta\$ that you want to achieve for your switch. If you look it up on a datasheet, there will usually be a curve showing just how small of a difference between the collector and emitter is achievable given some desired \$\beta\$.
Or, at least, an example for \$\beta=10\$ which is usually considered to be a highly saturated case for most (but not all) BJTs. Since the external circuit can be designed to force a low \$\beta\$ result, that all works fine. (Of course, you still have to keep in mind dissipation and other limitations for the BJT.) Hopefully, that helps.
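The two circuits walked through above fit a toy model: \$I_C\$ tracks \$\beta\cdot I_B\$ until the drop across the collector resistor would exceed the supply, at which point \$I_C\$ clamps and the effective \$\beta\$ falls. A sketch in code, idealized in that it ignores \$V_{CE(sat)}\$, the few hundred millivolts a real BJT keeps, as discussed above:

```python
def collector_point(i_b, r_c, vcc=10.0, beta=100.0):
    """Idealized NPN switch: return (I_C, V_C) for base current i_b (A) and
    collector resistor r_c (ohms). I_C follows beta*i_b until V_C would go
    below the 0 V rail, where it clamps."""
    i_c = min(beta * i_b, vcc / r_c)  # the collector can't swing below ground
    v_c = vcc - i_c * r_c
    return i_c, v_c
```

With the left circuit's \$R_1=1\:\textrm{k}\Omega\$, a \$90\:\mu\textrm{A}\$ base current gives \$V_C=1\:\textrm{V}\$ (still active); with \$R_2=10\:\textrm{k}\Omega\$ the clamp engages at \$I_B=10\:\mu\textrm{A}\$, and pushing on to \$100\:\mu\textrm{A}\$ leaves \$I_C\$ stuck at \$1\:\textrm{mA}\$, i.e. an effective \$\beta\$ of 10.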
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
It should come as no surprise that we can use this reasoning about division in the “Dots & Boxes” model in other bases as well. The following picture shows that working in base 5, $$1432_{five} \div 13_{five} = 110_{five} R2_{five},\; \text{meaning}\; 1432_{five} = 110_{five} \cdot 13_{five} + 2_{five} \ldotp$$ Think / Pair / Share Carefully explain the connection between the picture and the equation shown above. Show in the picture where you see \(1432_{five}\) from the equation. Where do you see \(13_{five}\)? Where do you see \(110_{five}\) and \(2_{five}\)? Example: \(1432_{five} \div 13_{five}\) Here’s where we left off the division, with a remainder of 2: Now we can unexplode one of those two remaining dots. Then we’re able to make another group of \(13_{five}\). Once again, there are two dots left over, not in any group. So let’s unexplode one of them. And we still have two dots left over. Why not do it again? It seems like we’re going to be doing the same thing forever: Start with two dots in some box. Unexplode one of the dots, so you have one dot in your original box and five in the box to the right. Form a group of \(13_{five}\). That uses the one dot in your original box and three dots in the box to the right. So you have two dots left in a box. Unexplode one of the dots, so you have one dot in your original box and five in the box to the right. This feels familiar… We conclude: $$1432_{five} \div 13_{five} = 110.111 \ldots_{five} = 110. \bar{1}_{five} \ldotp$$ Think / Pair / Share The equation $$1432_{five} \div 13_{five} = 110. \bar{1}_{five} \ldotp$$ is a statement in base five. What is it saying in base ten? “\(1432_{five}\)” is the number $$1 \cdot 125 + 4 \cdot 25 + 3 \cdot 5 + 2 \cdot 1 = 242_{ten} \ldotp$$ What is \(13_{five}\) in base 10? Be sure to explain your answer. What is \(110. \bar{1}_{five}\) in base 10? Explain how you got your answer. Translate the equation above to a statement in base ten and check that it is correct.
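The requested base-ten translation can be checked mechanically. Note that \(0. \bar{1}_{five}\) is the geometric series \(\sum_{k \geq 1} 5^{-k} = \frac{1}{5-1} = \frac{1}{4}\). A quick sketch:

```python
def from_base(digits, b):
    """Value of a numeral given as a list of digits, most significant first."""
    value = 0
    for d in digits:
        value = value * b + d
    return value

dividend = from_base([1, 4, 3, 2], 5)  # 1432_five -> 242
divisor = from_base([1, 3], 5)         # 13_five   -> 8
whole = from_base([1, 1, 0], 5)        # 110_five  -> 30
repeat = 1 / (5 - 1)                   # 0.111..._five = 1/4
```

Indeed \(242 \div 8 = 30.25 = 30 + \frac{1}{4}\), confirming the base-five equation as a base-ten statement.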
Problem 2 Draw pictures to compute \(8 \div 3\) in a base ten system, and show the answer is \(2. \bar{6}\). Draw the pictures to compute \(8_{nine} \div 3_{nine}\) in a base 9 system, and write the answer as a decimal. (Or is it a “nonimal”?) Problem 3 Draw the pictures to compute \(1 \div 11\) in a base ten system, and show the answer is \(0. \overline{09}\). Draw the base 3 pictures to compute \(1_{three} \div 11_{three}\), and write the answer as a decimal (“trimal”?) number. Draw the base four pictures to compute \(1_{four} \div 11_{four}\), and write the answer as a decimal (“quadimal”?) number. Draw the base six pictures to compute \(1_{six} \div 11_{six}\), and write the answer as a decimal (“heximal”?) number. Describe any patterns you notice in the computations above. Do you have a conjecture of a general rule? Can you prove your general rule is true? Problem 4 Remember that the fraction \(\frac{2}{5}\) represents the division problem \(2 \div 5\). (This is all written in base ten.) What is the decimal expansion (in base ten) of the fraction \(\frac{2}{5}\)? Rewrite the base-ten fraction \(\frac{2}{5}\) as a base four division problem. Then find the decimal expansion for that fraction in base four. Rewrite the base-ten fraction \(\frac{2}{5}\) as a base five division problem. Then find the decimal expansion for that fraction in base five. Rewrite the base-ten fraction \(\frac{2}{5}\) as a base seven division problem. Then find the decimal expansion for that fraction in base seven. Barry said that in base fifteen, the division problem looks like $$2_{fifteen} \div 5_{fifteen},$$and the decimal representation would be \(0.6_{fifteen}\). Check Barry’s answer. Is he right? Problem 5 Expand each of the following as a “decimal” number in the base given. (The fraction is given in base ten.) 
$$\begin{split} (a)\; \frac{1}{9}\; \text{in base 10} \quad \qquad &(b)\; \frac{1}{2}\; \text{in base 3} \\ (c)\; \frac{1}{3}\; \text{in base 4} \quad \qquad &(d)\; \frac{1}{4}\; \text{in base 5} \\ (e)\; \frac{1}{5}\; \text{in base 6} \quad \qquad &(f)\; \frac{1}{6}\; \text{in base 7} \\ (g)\; \frac{1}{7}\; \text{in base 8} \quad \qquad &(h)\; \frac{1}{8}\; \text{in base 9} \end{split}$$ Do you notice any patterns? Any conjectures? Problem 6 (Challenge) What fraction has decimal expansion \(0. \bar{3}_{seven}\)? How do you know you are right?
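All of the division problems above can be run through one routine: the “unexplode and form groups” loop is ordinary long division tracking remainders, and the expansion repeats exactly when a remainder recurs (so the repeating block's length is at most the number of possible remainders). A sketch:

```python
def expansion(num, den, base):
    """Fractional digits of num/den (taking num mod den) in the given base:
    returns (non-repeating digits, repeating block)."""
    seen = {}                     # remainder -> index at which it first appeared
    digits = []
    r = num % den
    while r and r not in seen:
        seen[r] = len(digits)
        r *= base                 # "unexplode" into the next box
        digits.append(r // den)   # how many groups of den we can now form
        r %= den
    if not r:
        return digits, []         # terminating expansion
    start = seen[r]
    return digits[:start], digits[start:]
```

For instance `expansion(1, 3, 4)` gives `([], [1])`, i.e. \(0. \bar{1}_{four}\), and `expansion(2, 5, 15)` gives `([6], [])`, confirming Barry's \(0.6_{fifteen}\).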
One general rule about technical papers--especially those found on the Web--is that the reliability of any statistical or mathematical definition offered in them varies inversely with the number of unrelated non-statistical subjects mentioned in the paper's title. The page title in the first reference offered (in a comment to the question) is "From Finance to Cosmology: The Copula of Large-Scale Structure." With both "finance" and "cosmology" appearing prominently, we can be pretty sure that this is not a good source of information about copulas! Let's instead turn to a standard and very accessible textbook, Roger Nelsen's An Introduction to Copulas (Second Edition, 2006), for the key definitions. ... every copula is a joint distribution function with margins that are uniform on [the closed unit interval $[0,1]$]. [At p. 23, bottom.] For some insight into copulas, turn to the first theorem in the book, Sklar's Theorem: Let $H$ be a joint distribution function with margins $F$ and $G$. Then there exists a copula $C$ such that for all $x,y$ in [the extended real numbers], $$H(x,y) = C(F(x),G(y)).$$ [Stated on pp. 18 and 21.] Although Nelsen does not name it as such, he does define the Gaussian copula in an example: ... if $\Phi$ denotes the standard (univariate) normal distribution function and $N_\rho$ denotes the standard bivariate normal distribution function (with Pearson's product-moment correlation coefficient $\rho$), then ... $$C(u,v) = \frac{1}{2\pi\sqrt{1-\rho^2}}\int_{-\infty}^{\Phi^{-1}(u)}\int_{-\infty}^{\Phi^{-1}(v)}\exp\left[\frac{-\left(s^2-2\rho s t + t^2\right)}{2\left(1-\rho^2\right)}\right]ds\,dt$$ [at p. 23, equation 2.3.6]. From the notation it is immediate that this $C$ indeed is the joint distribution for $(u,v)$ when $(\Phi^{-1}(u), \Phi^{-1}(v))$ is bivariate Normal.
We may now turn around and construct a new bivariate distribution having any desired (continuous) marginal distributions $F$ and $G$ for which this $C$ is the copula: simply use this particular $C$ in the characterization of copulas above, $H(x,y) = C(F(x),G(y))$. So yes, this looks remarkably like the formulas for a bivariate normal distribution, because it is bivariate normal for the transformed variables $(\Phi^{-1}(F(x)),\Phi^{-1}(G(y)))$. Because these transformations are nonlinear whenever $F$ and $G$ are not already (univariate) Normal CDFs themselves, the resulting distribution is not (in these cases) bivariate normal. Example Let $F$ be the distribution function for a Beta$(4,2)$ variable $X$ and $G$ the distribution function for a Gamma$(2)$ variable $Y$. By using the preceding construction we can form the joint distribution $H$ with a Gaussian copula and marginals $F$ and $G$. To depict this distribution, here is a partial plot of its bivariate density on the $x$ and $y$ axes: The dark areas have low probability density; the light regions have the highest density. All the probability has been squeezed into the region where $0\le x \le 1$ (the support of the Beta distribution) and $0 \le y$ (the support of the Gamma distribution). The lack of symmetry makes it obviously non-normal (and without normal margins), but it nevertheless has a Gaussian copula by construction. FWIW it has a formula and it's ugly, also obviously not bivariate Normal: $$\frac{1}{\sqrt{3}}2 \left(20 (1-x) x^3\right) \left(e^{-y} y\right) \exp \left(w(x,y)\right)$$ where $w(x,y)$ is given by $$\text{erfc}^{-1}\left(2 (Q(2,0,y))^2-\frac{2}{3} \left(\sqrt{2} \text{erfc}^{-1}(2 (Q(2,0,y)))-\frac{\text{erfc}^{-1}(2 (I_x(4,2)))}{\sqrt{2}}\right)^2\right).$$ ($Q$ is a regularized Gamma function and $I_x$ is a regularized Beta function.)
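The construction just described is easy to simulate. The following Python sketch (using NumPy and SciPy; the variable names and the choice $\rho = 0.6$ are ours for illustration) draws from the bivariate normal, pushes the coordinates through $\Phi$ to obtain the Gaussian copula, and then applies the Beta$(4,2)$ and Gamma$(2)$ quantile functions to impose the desired marginals:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.6   # illustrative correlation; the text's plot does not state its rho

# Step 1: sample (Z1, Z2) bivariate normal with correlation rho.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=100_000)

# Step 2: (u, v) = (Phi(Z1), Phi(Z2)) has uniform margins and, by
# construction, the Gaussian copula.
u, v = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])

# Step 3: impose the desired marginals via their quantile functions.
x = stats.beta(4, 2).ppf(u)    # Beta(4, 2) marginal, supported on [0, 1]
y = stats.gamma(2).ppf(v)      # Gamma(2) marginal, supported on [0, inf)

# The margins match F and G, but (x, y) is not bivariate normal.
print(x.mean(), y.mean())      # close to 2/3 and 2, the Beta and Gamma means
```

A scatterplot of `(x, y)` reproduces the qualitative shape of the density described above: all mass in the strip $0\le x\le 1$, $y\ge 0$, with the dependence inherited from $\rho$.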
diff options Diffstat (limited to 'docs') -rw-r--r-- docs/tutorial/solver.md 79 1 files changed, 78 insertions, 1 deletions diff --git a/docs/tutorial/solver.md b/docs/tutorial/solver.md index 17f793e..b150f64 100644 --- a/docs/tutorial/solver.md +++ b/docs/tutorial/solver.md @@ -6,7 +6,14 @@ title: Solver / Model Optimization The solver orchestrates model optimization by coordinating the network's forward inference and backward gradients to form parameter updates that attempt to improve the loss. The responsibilities of learning are divided between the Solver for overseeing the optimization and generating parameter updates and the Net for yielding loss and gradients. -The Caffe solvers are Stochastic Gradient Descent (SGD), Adaptive Gradient (ADAGRAD), and Nesterov's Accelerated Gradient (NESTEROV). +The Caffe solvers are: + +- Stochastic Gradient Descent (`SGD`), +- AdaDelta (`ADADELTA`), +- Adaptive Gradient (`ADAGRAD`), +- Adam (`ADAM`), +- Nesterov's Accelerated Gradient (`NESTEROV`) and +- RMSprop (`RMSPROP`) The solver @@ -104,6 +111,32 @@ If learning diverges (e.g., you start to see very large or `NaN` or `inf` loss v [ImageNet Classification with Deep Convolutional Neural Networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf). *Advances in Neural Information Processing Systems*, 2012. +### AdaDelta + +The **AdaDelta** (`solver_type: ADADELTA`) method (M. Zeiler [1]) is a "robust learning rate method". It is a gradient-based optimization method (like SGD). The update formulas are + +$$ +\begin{align} +(v_t)_i &= \frac{\operatorname{RMS}((v_{t-1})_i)}{\operatorname{RMS}\left( \nabla L(W_t) \right)_{i}} \left( \nabla L(W_t) \right)_i +\\ +\operatorname{RMS}\left( \nabla L(W_t) \right)_{i} &= \sqrt{E[g^2]_t + \varepsilon} +\\ +E[g^2]_t &= \delta{E[g^2]_{t-1} } + (1-\delta)g_{t}^2 +\end{align} +$$ + +and + +$$ +(W_{t+1})_i = +(W_t)_i - \alpha +(v_t)_i. +$$ + +[1] M.
Zeiler + [ADADELTA: AN ADAPTIVE LEARNING RATE METHOD](http://arxiv.org/pdf/1212.5701.pdf). + *arXiv preprint*, 2012. + ### AdaGrad The **adaptive gradient** (`solver_type: ADAGRAD`) method (Duchi et al. [1]) is a gradient-based optimization method (like SGD) that attempts to "find needles in haystacks in the form of very predictive but rarely seen features," in Duchi et al.'s words. @@ -124,6 +157,28 @@ Note that in practice, for weights $$ W \in \mathcal{R}^d $$, AdaGrad implementa [Adaptive Subgradient Methods for Online Learning and Stochastic Optimization](http://www.magicbroom.info/Papers/DuchiHaSi10.pdf). *The Journal of Machine Learning Research*, 2011. +### Adam + +The **Adam** method (`solver_type: ADAM`), proposed in Kingma et al. [1], is a gradient-based optimization method (like SGD). It includes an "adaptive moment estimation" ($$m_t, v_t$$) and can be regarded as a generalization of AdaGrad. The update formulas are + +$$ +(m_t)_i = \beta_1 (m_{t-1})_i + (1-\beta_1)(\nabla L(W_t))_i,\\ +(v_t)_i = \beta_2 (v_{t-1})_i + (1-\beta_2)(\nabla L(W_t))_i^2 +$$ + +and + +$$ +(W_{t+1})_i = +(W_t)_i - \alpha \frac{\sqrt{1-(\beta_2)^t}}{1-(\beta_1)^t}\frac{(m_t)_i}{\sqrt{(v_t)_i}+\varepsilon}. +$$ + +Kingma et al. [1] proposed to use $$\beta_1 = 0.9, \beta_2 = 0.999, \varepsilon = 10^{-8}$$ as default values. Caffe uses the values of `momentum, momentum2, delta` for $$\beta_1, \beta_2, \varepsilon$$, respectively. + +[1] D. Kingma, J. Ba. + [Adam: A Method for Stochastic Optimization](http://arxiv.org/abs/1412.6980). + *International Conference for Learning Representations*, 2015. + ### NAG **Nesterov's accelerated gradient** (`solver_type: NESTEROV`) was proposed by Nesterov [1] as an "optimal" method of convex optimization, achieving a convergence rate of $$ \mathcal{O}(1/t^2) $$ rather than $$ \mathcal{O}(1/t) $$.
@@ -149,6 +204,28 @@ What distinguishes the method from SGD is the weight setting $$ W $$ on which we [On the Importance of Initialization and Momentum in Deep Learning](http://www.cs.toronto.edu/~fritz/absps/momentum.pdf). *Proceedings of the 30th International Conference on Machine Learning*, 2013. +### RMSprop + +The **RMSprop** method (`solver_type: RMSPROP`), suggested by Tieleman in a Coursera course lecture, is a gradient-based optimization method (like SGD). The update formulas are + +$$ +(v_t)_i = +\begin{cases} +(v_{t-1})_i + \delta, &(\nabla L(W_t))_i(\nabla L(W_{t-1}))_i > 0\\ +(v_{t-1})_i \cdot (1-\delta), & \text{else} +\end{cases} +$$ + +and + +$$ +(W_{t+1})_i =(W_t)_i - \alpha (v_t)_i. +$$ + +If successive gradients for a weight agree in sign, its step size $$(v_t)_i$$ is increased by $$\delta$$; if they oscillate, it is reduced by a factor of $$1-\delta$$. The default value of $$\delta$$ (`rms_decay`) is set to $$\delta = 0.02$$. + +[1] T. Tieleman, and G. Hinton. + [RMSProp: Divide the gradient by a running average of its recent magnitude](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). + *COURSERA: Neural Networks for Machine Learning. Technical report*, 2012. + ## Scaffolding The solver scaffolding prepares the optimization method and initializes the model to be learned in `Solver::Presolve()`.
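As an aside (not part of the patch above), the Adam update formulas in the documented section can be sketched in plain NumPy. This is an illustrative toy, not Caffe's C++ implementation, and the function name is ours:

```python
import numpy as np

def adam_step(w, grad, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update following the formulas in the tutorial text."""
    m = beta1 * m + (1 - beta1) * grad             # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2        # second-moment estimate
    correction = np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)  # bias correction
    w = w - alpha * correction * m / (np.sqrt(v) + eps)
    return w, m, v

# Toy run: minimize L(w) = w^2 (gradient 2w) starting from w = 1.
w, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * w, m, v, t, alpha=0.05)
print(w)   # settles within a few multiples of alpha of the minimum at 0
```

Note how the normalized update $m_t/(\sqrt{v_t}+\varepsilon)$ keeps the step size on the order of `alpha` regardless of the raw gradient magnitude, which is why Adam oscillates in a small band around the minimum rather than converging exactly with a fixed `alpha`.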
In Exercises \((2.2E.1)\) to \((2.2E.12)\), find the general solution. Exercise \(\PageIndex{1}\) \(y''+5y'-6y=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{2}\) \(y''-4y'+5y=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{3}\) \(y''+8y'+7y=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{4}\) \(y''-4y'+4y=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{5}\) \(y''+2y'+10y=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{6}\) \(y''+6y'+10y=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{7}\) \(y''-8y'+16y=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{8}\) \(y''+y'=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{9}\) \(y''-2y'+3y=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{10}\) \(y''+6y'+13y=0 \) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{11}\) \(4y''+4y'+10y=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{12}\) \(10y''-3y'-y=0\) Answer Add texts here. Do not delete this text first. In Exercises \((2.2E.13)\) to \((2.2E.17)\), solve the initial value problem. Exercise \(\PageIndex{13}\) \(y''+14y'+50y=0, \quad y(0)=2,\quad y'(0)=-17\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{14}\) \(6y''-y'-y=0, \quad y(0)=10,\quad y'(0)=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{15}\) \(6y''+y'-y=0, \quad y(0)=-1,\quad y'(0)=3\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{16}\) \(4y''-4y'-3y=0, \quad y(0)=\displaystyle{13\over 12},\quad y'(0)=\displaystyle{23 \over 24}\) Answer Add texts here. Do not delete this text first. 
Exercise \(\PageIndex{17}\) \(4y''-12y'+9y=0, \quad y(0)=3,\quad y'(0)=\displaystyle{5\over 2}\) Answer Add texts here. Do not delete this text first. In Exercises \((2.2E.18)\) to \((2.2E.21)\), solve the initial value problem and graph the solution. Exercise \(\PageIndex{18}\) \(y''+7y'+12y=0, \quad y(0)=-1,\quad y'(0)=0\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{19}\) \(y''-6y'+9y=0, \quad y(0)=0,\quad y'(0)=2\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{20}\) \(36y''-12y'+y=0, \quad y(0)=3,\quad y'(0)=\displaystyle{5\over2}\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{21}\) \(y''+4y'+10y=0, \quad y(0)=3,\quad y'(0)=-2\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{22}\) (a) Suppose \(y\) is a solution of the constant coefficient homogeneous equation \begin{equation}\label{eq:2.2E.1} ay''+by'+cy=0. \end{equation} Let \(z(x)=y(x-x_0)\), where \(x_0\) is an arbitrary real number. Show that \begin{eqnarray*} az''+bz'+cz=0. \end{eqnarray*} (b) Let \(z_1(x)=y_1(x-x_0)\) and \(z_2(x)=y_2(x-x_0)\), where \(\{y_1,y_2\}\) is a fundamental set of solutions of \eqref{eq:2.2E.1}. Show that \(\{z_1,z_2\}\) is also a fundamental set of solutions of \eqref{eq:2.2E.1}. (c) The statement of Theorem \((2.2.1)\) is convenient for solving an initial value problem \begin{eqnarray*} ay''+by'+cy=0, \quad y(0)=k_0,\quad y'(0)=k_1, \end{eqnarray*} where the initial conditions are imposed at \(x_0=0\). However, if the initial value problem is \begin{equation}\label{eq:2.2E.2} ay''+by'+cy=0, \quad y(x_0)=k_0,\quad y'(x_0)=k_1, \end{equation} where \(x_0\ne0\), then determining the constants in \begin{eqnarray*} y=c_1e^{r_1x}+c_2e^{r_2x}, \quad y=e^{r_1x}(c_1+c_2x),\mbox{ or } y=e^{\lambda x}(c_1\cos\omega x+c_2\sin\omega x) \end{eqnarray*} (whichever is applicable) is more complicated.
Use part (b) to restate Theorem \((2.2.1)\) in a form more convenient for solving \eqref{eq:2.2E.2}. Answer Add texts here. Do not delete this text first. In Exercises \((2.2E.23)\) to \((2.2E.28)\), use a method suggested by Exercise \((2.2E.22)\) to solve the initial value problem. Exercise \(\PageIndex{23}\) \(y''+3y'+2y=0, \quad y(1)=-1,\quad y'(1)=4\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{24}\) \(y''-6y'-7y=0, \quad y(2)=-\displaystyle{1\over3},\quad y'(2)=-5\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{25}\) \(y''-14y'+49y=0, \quad y(1)=2,\quad y'(1)=11\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{26}\) \(9y''+6y'+y=0, \quad y(2)=2,\quad y'(2)=-\displaystyle{14\over3}\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{27}\) \(9y''+4y=0, \quad y(\pi/4)=2,\quad y'(\pi/4)=-2\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{28}\) \(y''+3y=0, \quad y(\pi/3)=2,\quad y'(\pi/3)=-1\) Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{29}\) Prove: If the characteristic equation of \begin{equation}\label{eq:2.2E.3} ay''+by'+cy=0 \end{equation} has a repeated negative root or two roots with negative real parts, then every solution of \eqref{eq:2.2E.3} approaches zero as \(x\to\infty\). Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{30}\) Suppose the characteristic polynomial of \(ay''+by'+cy=0\) has distinct real roots \(r_1\) and \(r_2\). Use a method suggested by Exercise \((2.2E.22)\) to find a formula for the solution of \begin{eqnarray*} ay''+by'+cy=0, \quad y(x_0)=k_0,\quad y'(x_0)=k_1. \end{eqnarray*} Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{31}\) Suppose the characteristic polynomial of \(ay''+by'+cy=0\) has a repeated real root \(r_1\).
Use a method suggested by Exercise \((2.2E.22)\) to find a formula for the solution of \begin{eqnarray*} ay''+by'+cy=0, \quad y(x_0)=k_0,\quad y'(x_0)=k_1. \end{eqnarray*} Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{32}\) Suppose the characteristic polynomial of \(ay''+by'+cy=0\) has complex conjugate roots \(\lambda\pm i\omega\). Use a method suggested by Exercise \((2.2E.22)\) to find a formula for the solution of \begin{eqnarray*} ay''+by'+cy=0, \quad y(x_0)=k_0,\quad y'(x_0)=k_1. \end{eqnarray*} Answer Add texts here. Do not delete this text first. Exercise \(\PageIndex{33}\) Suppose the characteristic equation of \begin{equation}\label{eq:2.2E.4} ay''+by'+cy=0 \end{equation} has a repeated real root \(r_1\). Temporarily, think of \(e^{rx}\) as a function of two real variables \(x\) and \(r\). (a) Show that \begin{equation}\label{eq:2.2E.5} a{\partial^2\over\partial x^2}(e^{rx})+b{\partial \over\partial x}(e^{rx}) +ce^{rx}=a(r-r_1)^2e^{rx}. \end{equation} (b) Differentiate \eqref{eq:2.2E.5} with respect to \(r\) to obtain \begin{equation}\label{eq:2.2E.6} a{\partial\over\partial r}\left({\partial^2\over\partial x^2}(e^{rx})\right)+b{\partial\over\partial r}\left({\partial \over\partial x}(e^{rx})\right) +c(xe^{rx})=[2+(r-r_1)x]a(r-r_1)e^{rx}. \end{equation} (c) Reverse the orders of the partial differentiations in the first two terms on the left side of \eqref{eq:2.2E.6} to obtain \begin{equation}\label{eq:2.2E.7} a{\partial^2\over\partial x^2}(xe^{rx})+b{\partial\over\partial x}(xe^{rx})+c(xe^{rx})=[2+(r-r_1)x]a(r-r_1)e^{rx}. \end{equation} (d) Set \(r=r_1\) in \eqref{eq:2.2E.5} and \eqref{eq:2.2E.7} to see that \(y_1=e^{r_1x}\) and \(y_2=xe^{r_1x}\) are solutions of \eqref{eq:2.2E.4}. Answer Add texts here. Do not delete this text first.
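The characteristic-equation method used throughout these exercises can be spot-checked with a computer algebra system. A sketch using SymPy (assuming it is available), applied to Exercise \((2.2E.1)\):

```python
import sympy as sp

x, r = sp.symbols('x r')
y = sp.Function('y')

# Exercise 2.2E.1: y'' + 5y' - 6y = 0
ode = y(x).diff(x, 2) + 5 * y(x).diff(x) - 6 * y(x)
sol = sp.dsolve(ode, y(x))
print(sol)   # y(x) = C1*exp(-6*x) + C2*exp(x), up to the labeling of constants

# The exponents are the roots of the characteristic polynomial r^2 + 5r - 6.
print(sp.solve(r ** 2 + 5 * r - 6, r))   # the roots are -6 and 1
```

The same pattern (change the ODE, read off the roots) verifies the repeated-root and complex-root cases in the later exercises.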
Exercise \(\PageIndex{34}\) In calculus you learned that \(e^u\), \(\cos u\), and \(\sin u\) can be represented by the infinite series \begin{equation}\label{eq:2.2E.8} e^u=\sum_{n=0}^\infty {u^n\over n!} =1+{u\over 1!}+{u^2\over 2!}+{u^3\over 3!}+\cdots+{u^n\over n!}+\cdots \end{equation} \begin{equation}\label{eq:2.2E.9} \cos u=\sum_{n=0}^\infty (-1)^n{u^{2n}\over(2n)!} =1-{u^2\over2!}+{u^4\over4!}+\cdots+(-1)^n{u^{2n}\over(2n)!} +\cdots, \end{equation} and \begin{equation}\label{eq:2.2E.10} \sin u=\sum_{n=0}^\infty (-1)^n{u^{2n+1}\over(2n+1)!} =u-{u^3\over3!}+{u^5\over5!}+\cdots+(-1)^n {u^{2n+1}\over(2n+1)!} +\cdots \end{equation} for all real values of \(u\). Even though you have previously considered \eqref{eq:2.2E.8} only for real values of \(u\), we can set \(u=i\theta\), where \(\theta\) is real, to obtain \begin{equation}\label{eq:2.2E.11} e^{i\theta}=\sum_{n=0}^\infty {(i\theta)^n\over n!}. \end{equation} Given the proper background in the theory of infinite series with complex terms, it can be shown that the series in \eqref{eq:2.2E.11} converges for all real \(\theta\). (a) Recalling that \(i^2=-1,\) write enough terms of the sequence \(\{i^n\}\) to convince yourself that the sequence is repetitive: \begin{eqnarray*} 1,i,-1,-i,1,i,-1,-i,1,i,-1,-i,1,i,-1,-i,\cdots. \end{eqnarray*} Use this to group the terms in \eqref{eq:2.2E.11} as \begin{eqnarray*} e^{i\theta}&=&\left(1-{\theta^2\over2!}+{\theta^4\over4!}+\cdots\right) +i\left(\theta-{\theta^3\over3!}+{\theta^5\over5!}+\cdots\right)\\ &=&\sum_{n=0}^\infty (-1)^n{\theta^{2n}\over(2n)!} +i\sum_{n=0}^\infty (-1)^n{\theta^{2n+1}\over(2n+1)!}. \end{eqnarray*} By comparing this result with \eqref{eq:2.2E.9} and \eqref{eq:2.2E.10}, conclude that \begin{equation}\label{eq:2.2E.12} e^{i\theta}=\cos\theta+i\sin\theta. \end{equation} This is Euler's Identity.
(b) Starting from \begin{eqnarray*} e^{i\theta_1}e^{i\theta_2}=(\cos\theta_1+i\sin\theta_1)(\cos\theta_2+i\sin\theta_2), \end{eqnarray*} collect the real part (the terms not multiplied by \(i\)) and the imaginary part (the terms multiplied by \(i\)) on the right, and use the trigonometric identities \begin{eqnarray*} \cos(\theta_1+\theta_2)&=&\cos\theta_1\cos\theta_2-\sin\theta_1\sin\theta_2\\ \sin(\theta_1+\theta_2)&=&\sin\theta_1\cos\theta_2+\cos\theta_1\sin\theta_2 \end{eqnarray*} to verify that \begin{eqnarray*} e^{i(\theta_1+\theta_2)}=e^{i\theta_1}e^{i\theta_2}, \end{eqnarray*} as you would expect from the use of the exponential notation \(e^{i\theta}\). (c) If \(\alpha\) and \(\beta\) are real numbers, define \begin{equation}\label{eq:2.2E.13} e^{\alpha+i\beta}=e^\alpha e^{i\beta}=e^\alpha(\cos\beta+i\sin\beta). \end{equation} Show that if \(z_1=\alpha_1+i\beta_1\) and \(z_2=\alpha_2+i\beta_2\) then \begin{eqnarray*} e^{z_1+z_2}=e^{z_1}e^{z_2}. \end{eqnarray*} (d) Let \(a\), \(b\), and \(c\) be real numbers, with \(a\ne0\). Let \(z=u+iv\) where \(u\) and \(v\) are real-valued functions of \(x\). Then we say that \(z\) is a solution of \begin{equation}\label{eq:2.2E.14} ay''+by'+cy=0 \end{equation} if \(u\) and \(v\) are both solutions of \eqref{eq:2.2E.14}. Use Theorem \((2.2.1)\) (c) to verify that if the characteristic equation of \eqref{eq:2.2E.14} has complex conjugate roots \(\lambda\pm i\omega\) then \(z_1=e^{(\lambda+i\omega)x}\) and \(z_2=e^{(\lambda-i\omega)x}\) are both solutions of \eqref{eq:2.2E.14}. Answer Add texts here. Do not delete this text first.
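Euler's identity and the addition law in part (b) are easy to spot-check numerically; a small Python sketch using only the standard library:

```python
import cmath
import math

theta1, theta2 = 0.7, 1.9   # arbitrary test angles

# Euler's identity (2.2E.12): e^{i*theta} = cos(theta) + i*sin(theta)
lhs = cmath.exp(1j * theta1)
rhs = complex(math.cos(theta1), math.sin(theta1))
print(abs(lhs - rhs))   # ~0, up to floating-point roundoff

# Part (b): e^{i(theta1 + theta2)} = e^{i*theta1} * e^{i*theta2}
diff = (cmath.exp(1j * (theta1 + theta2))
        - cmath.exp(1j * theta1) * cmath.exp(1j * theta2))
print(abs(diff))        # ~0 as well
```

Of course, a numerical check is not a proof; the exercise asks for the series and trigonometric-identity arguments.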
Since $\frac{\sin(x)}{x}\to 1$ as $x \rightarrow 0$, I wondered about the infinite product: $$\prod^{\infty}_{n=1} n \sin \left( \frac{1}{n} \right)=\sin(1) \cdot 2 \sin\left( \frac{1}{2} \right) \cdot 3 \sin\left( \frac{1}{3} \right) \dots$$ By numerical experiment in Mathematica it seems to converge (to a non-zero value, I mean), even if very slowly: $$P(14997)= 0.755371783$$ $$P(14998)= 0.755371782$$ $$P(14999)= 0.755371782$$ $$P(15000)= 0.755371781$$ I can prove the convergence by the integral test for the series: $$\sum^{\infty}_{n=1} \ln\left( n \sin \left( \frac{1}{n} \right) \right)$$ $$\int^{\infty}_{1} \ln\left( x \sin \left( \frac{1}{x} \right) \right) dx=\int^{1}_{0} \frac{1}{y^2} \ln \left( \frac{\sin (y)}{y} \right) dy=-0.168593$$ I think the integral test can work with a negative function as long as it's monotone; otherwise I can just put the minus sign before the infinite sum. By the way, this is a related question about the convergence of the sum above. But I'm more interested in the infinite product itself. I'm not sure if the value of this infinite product can be found and how to go about it. Is it zero or not? Any thoughts would be appreciated.
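For what it's worth, the slow convergence is easy to see numerically. Since $\log\left(n\sin\frac1n\right) = -\frac{1}{6n^2} + O(n^{-4})$, the tail beyond $N$ contributes a factor of roughly $e^{-1/(6N)}$, so the partial products creep down toward the limit. A quick Python check (the helper name `P` mirrors the question's notation):

```python
import math

def P(N):
    """Partial product prod_{n=1}^{N} n*sin(1/n)."""
    p = 1.0
    for n in range(1, N + 1):
        p *= n * math.sin(1.0 / n)
    return p

for N in (100, 1000, 10000):
    print(N, P(N))
# Every factor is < 1, so P(N) is decreasing; it approaches roughly 0.7554,
# at a rate of about exp(-1/(6N)), which matches the question's table.
```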
base Geometric construction of the global base of the quantum modified algebra of However the compatibility of the canonical base of the modified algebra and of the geometric base given by intersection cohomology sheaves on the affine flag variety was never proved. We prove that these determinantal semi-invariants span the space of all semi-invariants for any quiver and any infinite base field. For $\gamma\in\mathbb{R}$ let $C(\gamma)$ be the set of all $f\in{\mathcal S}^\prime$ for which $\sum_{n=0}^{\infty}\,|(f,h_n)|\,(n+1)^{\gamma}<\infty$, where $(h_n)_{n=0,1,\dots}$ is the orthonormal base of Hermite functions. A gene regulatory mechanism has been proposed in which steroid hormones and certain other drugs bind to nuclear receptor proteins followed by transfer to DNA where they are inserted between base pairs. The amylose 1 was selectively converted into 2 without protection of the other hydroxyl groups by allylation at the all O-6 positions in excess base. The base-catalyzed condensation of thioureas (1-3a-i) with acetone was carried out in the presence of bromine to afford the corresponding 1-(isomeric methyl) benzoyl-3-aryl-4-methyl-imidazole-2-thiones (4-6a-i) in good yield. Based on having a cushioned pair-base space and compact strongly monotonically T2 space, some results (Theorems 1-3) are obtained. The generation of good pseudo-random numbers is the base of many important fields in scientific computing, such as randomized algorithms and numerical solution, of stochastic differential equations. By searching the appropriate base for uniform structure, it is shown that the topological transformation group is topologically equivalent to an isometric one if it is uniformly equicontinuous. Reactions between the dendrimers and acid-processed silica gel took place, with toluene reflux and organic base as catalyst.
Synthesis and characterization of rare earth complexes with Phthalaldehyde-lysine Schiff base These results might serve as a base for quality evaluation of Magnetitum (Cishi). Studies on solid state synthesis and the oxygenation property of cobalt (II) Schiff base (vanilline polyamine) complexes According to the electrochemical equation deduced in this paper, the binding constant of 1.36 × 105 (mol/L)-1 and the binding size of 1.94 (base pairs) of CFX with ctDNA were obtained by nonlinear fit analysis of the electrochemical data. Polymerization of styrene catalyzed by rare earth Schiff base complexes The rare earth Schiff base complex Nd (H2Salen)2Cl3·2C2H5OH was synthesized by a simple and convenient method and characterized by IR and elemental analysis. A circle was defined as the parametric space and a non-uniform B-splines defined on the unit circle were used as base functions. To make RMAC fit WSN better, we designed an easy and efficient routing protocol base station flooding (BSF) and then integrated it with a MAC protocol timing out MAC (TMAC) [1], while traditionally BSF and TMAC work separately at two layers. Furthermore, we also discussed the best bases' choice for the time-frequency representation of HRIRs, and the results show that local cosine bases are more propitious to HRIRs' adaptive approximation than wavelet and wavelet packet base.
V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review) @ARTICLE{GitmanHamkinsKaragila:KM-set-theory-does-not-prove-the-class-Fodor-theorem, author = {Victoria Gitman and Joel David Hamkins and Asaf Karagila}, title = {Kelley-Morse set theory does not prove the class {F}odor theorem}, journal = {}, year = {}, volume = {}, number = {}, pages = {}, month = {}, note = {manuscript under review}, abstract = {}, keywords = {under-review}, eprint = {1904.04190}, archivePrefix = {arXiv}, primaryClass = {math.LO}, source = {}, doi = {}, url = {http://wp.me/p5M0LV-1RD}, } V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review) Abstract. We show that Kelley-Morse (KM) set theory does not prove the class Fodor principle, the assertion that every regressive class function $F:S\to\newcommand\Ord{\text{Ord}}\Ord$ defined on a stationary class $S$ is constant on a stationary subclass. Indeed, it is relatively consistent with KM for any infinite $\lambda$ with $\omega\leq\lambda\leq\Ord$ that there is a class function $F:\Ord\to\lambda$ that is not constant on any stationary class. Strikingly, it is consistent with KM that there is a class $A\subseteq\omega\times\Ord$, such that each section $A_n=\{\alpha\mid (n,\alpha)\in A\}$ contains a class club, but $\bigcap_n A_n$ is empty. Consequently, it is relatively consistent with KM that the class club filter is not $\sigma$-closed. The class Fodor principle is the assertion that every regressive class function $F:S\to\Ord$ defined on a stationary class $S$ is constant on a stationary subclass of $S$. This statement can be expressed in the usual second-order language of set theory, and the principle can therefore be sensibly considered in the context of any of the various second-order set-theoretic systems, such as Gödel-Bernays (GBC) set theory or Kelley-Morse (KM) set theory. 
Just as with the classical Fodor’s lemma in first-order set theory, the class Fodor principle is equivalent, over a weak base theory, to the assertion that the class club filter is normal. We shall investigate the strength of the class Fodor principle and try to find its place within the natural hierarchy of second-order set theories. We shall also define and study weaker versions of the class Fodor principle. If one tries to prove the class Fodor principle by adapting one of the classical proofs of the first-order Fodor’s lemma, then one inevitably finds oneself needing to appeal to a certain second-order class-choice principle, which goes beyond the axiom of choice and the global choice principle, but which is not available in Kelley-Morse set theory. For example, in one standard proof, we would want for a given $\Ord$-indexed sequence of non-stationary classes to be able to choose for each member of it a class club that it misses. This would be an instance of class-choice, since we seek to choose classes here, rather than sets. The class choice principle $\text{CC}(\Pi^0_1)$, it turns out, is sufficient for us to make these choices, for this principle states that if every ordinal $\alpha$ admits a class $A$ witnessing a $\Pi^0_1$-assertion $\varphi(\alpha,A)$, allowing class parameters, then there is a single class $B\subseteq \Ord\times V$, whose slices $B_\alpha$ witness $\varphi(\alpha,B_\alpha)$; and the property of being a class club avoiding a given class is $\Pi^0_1$ expressible. Thus, the class Fodor principle, and consequently also the normality of the class club filter, is provable in the relatively weak second-order set theory $\text{GBC}+\text{CC}(\Pi^0_1)$. This theory is known to be weaker in consistency strength than the theory $\text{GBC}+\Pi^1_1$-comprehension, which is itself strictly weaker in consistency strength than KM. 
But meanwhile, although the class choice principle is weak in consistency strength, it is not actually provable in KM; indeed, even the weak fragment $\text{CC}(\Pi^0_1)$ is not provable in KM. Those results were proved several years ago by the first two authors, but they can now be seen as consequences of the main result of this article (see Corollary 15). In light of that result, however, one should perhaps not have expected to be able to prove the class Fodor principle in KM. Indeed, it follows similarly from arguments of the third author in his dissertation that if $\kappa$ is an inaccessible cardinal, then there is a forcing extension $V[G]$ with a symmetric submodel $M$ such that $V_\kappa^M=V_\kappa$, which implies that $\mathcal M=(V_\kappa,\in, V^M_{\kappa+1})$ is a model of Kelley-Morse, and in $\mathcal M$, the class Fodor principle fails in a very strong sense. In this article, adapting the ideas of Karagila to the second-order set-theoretic context and using similar methods as in Gitman and Hamkins’s previous work on KM, we shall prove that every model of KM has an extension in which the class Fodor principle fails in that strong sense: there can be a class function $F:\Ord\to\omega$, which is not constant on any stationary class. In particular, in these models, the class club filter is not $\sigma$-closed: there is a class $B\subseteq\omega\times\Ord$, each of whose vertical slices $B_n$ contains a class club, but $\bigcap B_n$ is empty. Main Theorem. Kelley-Morse set theory KM, if consistent, does not prove the class Fodor principle. Indeed, if there is a model of KM, then there is a model of KM with a class function $F:\Ord\to \omega$, which is not constant on any stationary class; in this model, therefore, the class club filter is not $\sigma$-closed. We shall also investigate various weak versions of the class Fodor principle. Definition.
For a cardinal $\kappa$, the class $\kappa$-Fodor principle asserts that every class function $F:S\to\kappa$ defined on a stationary class $S\subseteq\Ord$ is constant on a stationary subclass of $S$. The class ${<}\Ord$-Fodor principle is the assertion that the $\kappa$-class Fodor principle holds for every cardinal $\kappa$. The bounded class Fodor principle asserts that every regressive class function $F:S\to\Ord$ on a stationary class $S\subseteq\Ord$ is bounded on a stationary subclass of $S$. The very weak class Fodor principle asserts that every regressive class function $F:S\to\Ord$ on a stationary class $S\subseteq\Ord$ is constant on an unbounded subclass of $S$. We shall separate these principles as follows. Theorem. Suppose KM is consistent. There is a model of KM in which the class Fodor principle fails, but the class ${<}\Ord$-Fodor principle holds. There is a model of KM in which the class $\omega$-Fodor principle fails, but the bounded class Fodor principle holds. There is a model of KM in which the class $\omega$-Fodor principle holds, but the bounded class Fodor principle fails. $\text{GB}^-$ proves the very weak class Fodor principle. Finally, we show that the class Fodor principle can neither be created nor destroyed by set forcing. Theorem. The class Fodor principle is invariant by set forcing over models of $\text{GBC}^-$. That is, it holds in an extension if and only if it holds in the ground model. Let us conclude this brief introduction by mentioning the following easy negative instance of the class Fodor principle for certain GBC models. This argument seems to be a part of set-theoretic folklore. Namely, consider an $\omega$-standard model of GBC set theory $M$ having no $V_\kappa^M$ that is a model of ZFC. A minimal transitive model of ZFC, for example, has this property. Inside $M$, let $F(\kappa)$ be the least $n$ such that $V_\kappa^M$ fails to satisfy $\Sigma_n$-collection.
This is a definable class function $F:\Ord^M\to\omega$ in $M$, but it cannot be constant on any stationary class in $M$, because by the reflection theorem there is a class club of cardinals $\kappa$ such that $V_\kappa^M$ satisfies $\Sigma_n$-collection. Read more by going to the full article: V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review)
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation; it calls for someone who knows how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am, I will have to adapt ;) I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type roughly $26^7 \approx 8\times 10^9$ characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric.
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry, if you were in our discord you would know @ooolb It's unlikely to be $-\infty$, since $H^n$ has bounded geometry, so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incompleteness is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could do better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and Red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. Since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap). I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition. Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational-waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
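The back-of-the-envelope COVFEFE estimate a few messages above is easy to sanity-check in a couple of lines. This is a sketch; the observation that the pattern has no self-overlap (so the expected waiting time is exactly $26^7$ characters for uniform random A–Z typing) is mine, not from the chat:

```python
# Probability that a fixed 7-letter window of uniform A-Z text spells
# a given 7-letter word, and the expected wait until it first appears.
p = (1 / 26) ** 7

# COVFEFE has no proper prefix equal to a suffix, so the expected
# number of characters typed until it first appears is exactly 26**7.
expected_wait = 26 ** 7

print(f"p = {p:.3e}")                      # ~1.2e-10, matching the chat estimate
print(f"expected wait = {expected_wait}")  # about 8 billion characters
```

So "roughly a billion characters" in the chat is off by a factor of about eight, but the order of magnitude of the probability estimate checks out.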
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...

Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...

Rapidity and transverse-momentum dependence of the inclusive J/$\psi$ nuclear modification factor in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27)
The measurement of primary π±, K±, p and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...

Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...

Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...

Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...

K*(892)$^0$ and $\phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV (Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ...

Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...

Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-02)
The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ...

Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2013-11)
We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ...

Beauty production in pp collisions at √s = 2.76 TeV measured via semi-electronic decays (Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y| < 0.8 and transverse momentum 1 < pT < 10 GeV/c, in pp ...
Pseudorapidity dependence of the anisotropic flow of charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2016-11)
We present measurements of the elliptic ($\mathrm{v}_2$), triangular ($\mathrm{v}_3$) and quadrangular ($\mathrm{v}_4$) anisotropic azimuthal flow over a wide range of pseudorapidities ($-3.5 < \eta < 5$). The measurements ...

Correlated event-by-event fluctuations of flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2016-10)
We report the measurements of correlations between event-by-event fluctuations of amplitudes of anisotropic flow harmonics in nucleus–nucleus collisions, obtained for the first time using a new analysis method based on ...

Centrality dependence of $\psi$(2S) suppression in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Springer, 2016-06)
The inclusive production of the $\psi$(2S) charmonium state was studied as a function of centrality in p-Pb collisions at the nucleon-nucleon center of mass energy $\sqrt{s_{\rm NN}}$ = 5.02 TeV at the CERN LHC. The ...

Transverse momentum dependence of D-meson production in Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-03)
The production of prompt charmed mesons D$^0$, D$^+$ and D$^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb–Pb collisions at the centre-of-mass energy per nucleon pair, $\sqrt{s_{\rm NN}}$, of ...

Multiplicity and transverse momentum evolution of charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions at the LHC (Springer, 2016)
We report on two-particle charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions as a function of the pseudorapidity and azimuthal angle difference, $\mathrm{\Delta}\eta$ and $\mathrm{\Delta}\varphi$ respectively. ...
Charge-dependent flow and the search for the chiral magnetic wave in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2016-04)
We report on measurements of a charge-dependent flow using a novel three-particle correlator with ALICE in Pb-Pb collisions at the LHC, and discuss the implications for observation of local parity violation and the Chiral ...

Pseudorapidity and transverse-momentum distributions of charged particles in proton-proton collisions at $\sqrt{s}$ = 13 TeV (Elsevier, 2016-02)
The pseudorapidity ($\eta$) and transverse-momentum ($p_{\rm T}$) distributions of charged particles produced in proton-proton collisions are measured at the centre-of-mass energy $\sqrt{s}$ = 13 TeV. The pseudorapidity ...

Differential studies of inclusive J/$\psi$ and $\psi$(2S) production at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Springer, 2016-05)
The production of J/$\psi$ and $\psi(2S)$ was measured with the ALICE detector in Pb-Pb collisions at the LHC. The measurement was performed at forward rapidity ($2.5 < y < 4$) down to zero transverse momentum ($p_{\rm T}$) ...

Anisotropic flow of charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2016-04)
We report the first results of elliptic ($v_2$), triangular ($v_3$) and quadrangular flow ($v_4$) of charged particles in Pb-Pb collisions at $\sqrt{s_{_{\rm NN}}}=$ 5.02 TeV with the ALICE detector at the CERN Large ...
I’m working on a number theory proof that has been giving me trouble for a while. I will explain the problem and the attempts I’ve made. Let $x\in \mathbb{R}$ and $d \in \mathbb{Z}$ where both $x, d > 0$ (i.e. positive values). Prove that the number of positive integers, say $k$, that are $\leq x$ and divisible by $d$ is $[\frac{x}{d}]$ (given that $[x]$ is the greatest integer function). So I’ve decided to try a proof by contradiction, but I don’t think I’m doing it correctly; I will list the steps I’ve taken below. Suppose not; that is, suppose that the number $k$ of integers divisible by $d$ and less than or equal to $x$ does not equal $[\frac{x}{d}]$, i.e. $k \neq [\frac{x}{d}]$. This would imply that $k > [\frac{x}{d}]$ or $k < [\frac{x}{d}]$, but both of these cases lead to contradictions. If $k > [\frac{x}{d}]$ then that implies that $[\frac{x}{d}]$ does not produce the greatest integer, because if it did, each of the integers counted by $k$ would be covered by a multiple of $d$ up to $[\frac{x}{d}]$. If $k < [\frac{x}{d}]$ then that implies that not all values counted by $k$ are $\leq x$ and divisible by $d$, but this is the definition of the values counted by $k$. Therefore both cases are false and $k = [\frac{x}{d}]$. Now I have a feeling this is incorrect, but I’m not sure where to go from here or whether my solution is correct. Any help would be appreciated.
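As a numerical sanity check of the claim (not a proof), one can count the positive multiples of $d$ up to $x$ directly and compare with $[\frac{x}{d}]$. This is a quick sketch with arbitrarily chosen test values:

```python
import math

def count_multiples(x: float, d: int) -> int:
    """Count positive integers k <= x that are divisible by d."""
    return sum(1 for k in range(1, math.floor(x) + 1) if k % d == 0)

# Spot-check the identity: count of multiples equals floor(x/d).
for x, d in [(10.5, 3), (100.0, 7), (6.0, 6), (2.9, 5)]:
    assert count_multiples(x, d) == math.floor(x / d)
```

For instance, with $x = 10.5$ and $d = 3$ the multiples are $3, 6, 9$, which is $3 = [10.5/3]$ of them.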
Efficient decoding of interleaved subspace and Gabidulin codes beyond their unique decoding radius using Gröbner bases

1. Institute of Communications and Navigation, German Aerospace Center (DLR), D-82234 Oberpfaffenhofen, Germany
2. Institute for Communications Engineering, Technical University of Munich (TUM), D-80290 Munich, Germany

An interpolation-based decoding scheme for $L$-interleaved subspace codes is presented. The scheme can be used as a (not necessarily polynomial-time) list decoder as well as a polynomial-time probabilistic unique decoder. Both interpretations allow decoding of interleaved subspace codes beyond half the minimum subspace distance. Both schemes can decode $\gamma$ insertions and $\delta$ deletions up to $\gamma + L\delta \leq L({{n}_{t}}-k)$, where ${{n}_{t}}$ is the dimension of the transmitted subspace and $k$ is the number of data symbols from the field ${{\mathbb{F}}_{{{q}^{m}}}}$. Further, a complementary decoding approach is presented which corrects $\gamma$ insertions and $\delta$ deletions up to $L\gamma + \delta \leq L({{n}_{t}}-k)$. Both schemes use properties of minimal Gröbner bases for the interpolation module that allow predicting the worst-case list size right after the interpolation step. An efficient procedure for constructing the required minimal Gröbner basis using the general Kötter interpolation is presented. A computationally- and memory-efficient root-finding algorithm for the probabilistic unique decoder is proposed. The overall complexity of the decoding algorithm is at most $\mathcal{O}\left( {{L}^{2}}n_{r}^{2} \right)$ operations in ${{\mathbb{F}}_{{{q}^{m}}}}$, where ${{n}_{r}}$ is the dimension of the received subspace and $L$ is the interleaving order. The analysis as well as the efficient algorithms can also be applied to accelerate the decoding of interleaved Gabidulin codes.
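To see how the two decoding regions quoted in the abstract trade off insertions against deletions, here is a small sketch that enumerates the admissible $(\gamma,\delta)$ pairs for each condition. The parameter values $L$, $n_t$, $k$ are chosen arbitrarily for illustration and are not from the paper:

```python
# Enumerate (gamma, delta) pairs satisfying the two decoding conditions
# from the abstract: gamma + L*delta <= L*(n_t - k)   (scheme A)
#                    L*gamma + delta <= L*(n_t - k)   (scheme B)
L, n_t, k = 2, 8, 3          # illustrative values only
bound = L * (n_t - k)        # = 10 here

region_a = {(g, d) for g in range(bound + 1) for d in range(bound + 1)
            if g + L * d <= bound}
region_b = {(g, d) for g in range(bound + 1) for d in range(bound + 1)
            if L * g + d <= bound}

# Scheme A tolerates more insertions, scheme B more deletions.
print(max(g for g, d in region_a), max(d for g, d in region_a))  # 10 5
print(max(g for g, d in region_b), max(d for g, d in region_b))  # 5 10
```

The asymmetry makes the complementary nature of the two approaches explicit: each region is the mirror image of the other under swapping $\gamma$ and $\delta$.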
Keywords: Subspace codes, rank-metric codes, interleaved Gabidulin codes, probabilistic unique decoding, interpolation-based decoding.
Mathematics Subject Classification: Primary: 94B35, 94B05.
Citation: Hannes Bartz, Antonia Wachter-Zeh. Efficient decoding of interleaved subspace and Gabidulin codes beyond their unique decoding radius using Gröbner bases. Advances in Mathematics of Communications, 2018, 12 (4): 773-804. doi: 10.3934/amc.2018046
References:
[1] [2]
[3] H. Bartz,
[4] H. Bartz, M. Meier and V. Sidorenko, Improved Syndrome Decoding of Interleaved Subspace Codes, in
[5] H. Bartz and V. Sidorenko, List and probabilistic unique decoding of folded subspace codes, in
[6] H. Bartz and V. Sidorenko, On list-decoding schemes for punctured Reed-Solomon, Gabidulin and subspace codes, in
[7] H. Bartz and A. Wachter-Zeh, Efficient interpolation-based decoding of interleaved subspace and Gabidulin codes, in
[8] [9]
[10] T. Etzion and N. Silberstein, Error-correcting codes in projective spaces via rank-metric codes and Ferrers diagrams,
[11] [12] [13] [14]
[15] V. Guruswami and C. Xing, List decoding Reed-Solomon, algebraic-geometric, and Gabidulin subcodes up to the Singleton bound,
[16] R. A. Horn and C. R. Johnson,
[17] A. Kohnert and S. Kurz, Construction of large constant dimension codes with a prescribed minimum distance, in
[18] [19] [20]
[21] R. Lidl and H. Niederreiter,
[22] P. Loidreau and R. Overbeck, Decoding rank errors beyond the error correcting capability, in
[23]
[24] H. Mahdavifar and A. Vardy, List-Decoding of Subspace Codes and Rank-Metric Codes up to Singleton Bound, in
[25] J. N. Nielsen,
[26]
[27] S. Puchinger, J. S. R. Nielsen, W. Li and V. Sidorenko, Row reduction applied to decoding of rank metric and subspace codes,
[28] [29]
[30] V. R. Sidorenko and M. Bossert, Decoding interleaved Gabidulin codes and multisequence linearized shift-register synthesis, in
[31] V. R. Sidorenko, L. Jiang and M.
Bossert, Skew-feedback shift-register synthesis and decoding interleaved Gabidulin codes,
[32] D. Silva,
[33] D. Silva, F. R. Kschischang and R. Kötter, A rank-metric approach to error control in random network coding,
[34]
[35] A. L. Trautmann, F. Manganiello and J. Rosenthal, Orbit codes - A new concept in the area of network coding, in
[36]
[37] A.-L. Trautmann, N. Silberstein and J. Rosenthal, List decoding of lifted Gabidulin codes via the Plücker embedding, in
[38]
[39] A. Wachter-Zeh and A. Zeh, List and unique error-erasure decoding of interleaved Gabidulin codes with interpolation techniques,
[40] B. Wang, R. J. McEliece and K. Watanabe, Kötter interpolation over free modules, in
[41]
[42] H. Xie, Z. Yan and B. W. Suter, General linearized polynomial interpolation and its applications, in

[Table: number of transmissions vs. observed decoding failures in simulation; column layout lost in extraction.]
[Table: decoding regions and operation counts for the schemes of Li-Sidorenko-Silva [20,31], Wachter-Zeh-Zeh [39], Guruswami-Xing [15], Bartz-Meier-Sidorenko [4], and this contribution; column layout lost in extraction.]
Here is a proof in the real case. For general fields, this only gives a lower bound of $\lceil n/2 \rceil$, though the correct lower bound should indeed be $n$. For more, take a look at the monograph Algebraic complexity theory by Bürgisser, Clausen and Shokrollahi.

Model

It will be easier to give a lower bound on computing the inner product of a vector with itself; this is a potentially easier task. Consider an arithmetic "straight-line" program for computing the squared norm $\sum_{i=1}^n x_i^2$. Such a program consists of the following operations: addition, subtraction, multiplication, division, all of which have the form $a \gets b \circ c$. We are also allowed to "load" input variables and arbitrary field constants. The final answer must be found in some variable. Since we don't care about space, we can assume that every operation assigns its output to a new variable. For simplicity we will not allow division in what follows, though divisions can always be eliminated (this is a famous result of Strassen) without too much trouble. Here is an example of such a program:

$t_1 \gets x_1 \cdot x_1$
$t_2 \gets x_2 \cdot x_2$
...
$t_n \gets x_n \cdot x_n$
$s_2 \gets t_1 + t_2$
$s_3 \gets s_2 + t_3$
...
$s_n \gets s_{n-1} + t_n$

The variable $s_n$ contains the squared norm.

Normal form (1)

The first step is coming up with a program in normal form. Let $t$ be a variable occurring in the program. The value of $t$ is always a fixed polynomial in the inputs (if we allowed divisions, it could be a more general power series). We write $t$ as a sum $t = t^{(0)} + t^{(1)} + t^{(2)} + t'$, where
$t^{(0)}$ is a constant,
$t^{(1)} = \sum_i c_i x_i$,
$t^{(2)} = \sum_{i \leq j} d_{ij} x_i x_j$, and
$t'$ consists of all monomials of degree larger than 2.
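To make the model concrete, here is a small sketch (my own illustration, not part of the proof) of a straight-line program represented as a list of operations and evaluated on concrete inputs; the program is exactly the squared-norm example above, for $n = 3$:

```python
# A straight-line program is a list of (dest, op, src1, src2) instructions.
# Operands may be input names, previously assigned variables, or constants.
def run(program, inputs):
    env = dict(inputs)
    for dest, op, a, b in program:
        x = env[a] if isinstance(a, str) else a
        y = env[b] if isinstance(b, str) else b
        env[dest] = x * y if op == "*" else x + y
    return env

# Squared-norm program for n = 3: t_i <- x_i * x_i, then accumulate.
prog = [
    ("t1", "*", "x1", "x1"),
    ("t2", "*", "x2", "x2"),
    ("t3", "*", "x3", "x3"),
    ("s2", "+", "t1", "t2"),
    ("s3", "+", "s2", "t3"),
]
env = run(prog, {"x1": 1, "x2": 2, "x3": 3})
print(env["s3"])  # 14 = 1^2 + 2^2 + 3^2
```

Note that every operation writes to a fresh variable, matching the "we don't care about space" convention, and that the three multiplications here are exactly the $n$ multiplications the lower bound says are necessary over the reals.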
We will convert an arbitrary program into one which computes only the parts $t^{(0)}, t^{(1)}, t^{(2)}$, and furthermore contains no more "essential" multiplications than the original program; a multiplication is not essential if one of the operands is a constant. Since the output variable $o$ satisfies $o = o^{(2)}$, the new program also computes the squared norm. The base cases are when $t$ is constant or an input variable. In both cases no computation is needed, since each part is either zero, a constant, or an input variable. We replace the operation $a \gets b + c$ with the sequence $a^{(0)} \gets b^{(0)} + c^{(0)}$, $a^{(1)} \gets b^{(1)} + c^{(1)}$, $a^{(2)} \gets b^{(2)} + c^{(2)}$. Subtraction is similar. The most interesting operation is $a \gets b \cdot c$, which we replace by the sequence

$a^{(0)} \gets b^{(0)} \cdot c^{(0)}$
$a^{(1')} \gets b^{(0)} \cdot c^{(1)}$
$a^{(1'')} \gets b^{(1)} \cdot c^{(0)}$
$a^{(1)} \gets a^{(1')} + a^{(1'')}$
$a^{(2')} \gets b^{(0)} \cdot c^{(2)}$
$a^{(2'')} \gets b^{(1)} \cdot c^{(1)}$
$a^{(2''')} \gets b^{(2)} \cdot c^{(0)}$
$a^{(2'''')} \gets a^{(2')} + a^{(2'')}$
$a^{(2)} \gets a^{(2''')} + a^{(2'''')}$

The only essential multiplication is the one computing $a^{(2'')}$. It is routine to check that the new program indeed computes $t^{(0)}, t^{(1)}, t^{(2)}$ correctly for all $t$ occurring in the original program.

Normal form (2)

Consider the new program. It is not hard to prove by induction that for each original variable $t$:
$t^{(0)}$ is a constant.
$t^{(1)}$ is a linear combination of the inputs, i.e. of the form $\sum_i c_i x_i$.
$t^{(2)}$ is a linear combination of the results of essential multiplications (multiplications of the form $s^{(1)} \cdot r^{(1)}$).
In particular, if there are $m$ multiplications in the original program then there are at most $m$ essential multiplications in the new program, and so it is semantically equivalent to the following program:

Compute $2m$ linear combinations $\ell_1,\ldots,\ell_{2m}$ of the inputs.
Compute $m$ essential multiplications $p_i \gets \ell_{i_1} \cdot \ell_{i_2}$, for $1 \leq i \leq m$.
Output a linear combination $\sum_{i=1}^m c_i p_i$.

In fact, without loss of generality we can assume that $c_i = 1$ (absorb the constant into one of the two factors).

Rank lower bound

Given a program computing the squared norm $\sum_{i=1}^n x_i^2$ using $m$ multiplications, we have shown how to obtain a representation$$\sum_{i=1}^n x_i^2 = \sum_{j=1}^m \ell_j r_j,$$where $\ell_j, r_j$ are linear combinations of the $x_i$. Replace $\ell_j, r_j$ with their coefficient vectors of length $n$, say column vectors. The representation then implies the matrix identity$$I_n + S = \sum_{j=1}^m \ell_j r_j^T,$$where $S$ is a skew-symmetric matrix (we get $S$ since $x_i x_j$ can be represented in two ways for $i \neq j$). If we are working over the reals, then all eigenvalues of $S$ are either zero or purely imaginary, and so $I_n + S$ has full rank. The right-hand side implies that the rank is at most $m$, and we conclude that $m \geq n$. When working over the complex numbers, we can have shortcuts such as$$(x_1 + ix_2)(x_1 - ix_2) = x_1^2 + x_2^2.$$The same method still shows that $n/2$ multiplications are needed, and indeed the optimal number of multiplications to compute the squared norm over $\mathbb{C}$ is $\lceil n/2 \rceil$.
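The key linear-algebra fact in the rank lower bound, namely that $I_n + S$ has full rank over the reals for any skew-symmetric $S$, is easy to check numerically. A quick sketch with a randomly generated skew-symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
S = A - A.T                 # skew-symmetric: S.T == -S

# Eigenvalues of a real skew-symmetric matrix are 0 or purely imaginary,
# so every eigenvalue of I + S is 1 + i*mu != 0, hence full rank.
M = np.eye(n) + S
print(np.linalg.matrix_rank(M))  # 8
```

In fact $\det(I_n + S) = \prod (1 + \mu_k^2) \geq 1$ over the conjugate pairs $\pm i\mu_k$, so the matrix is not merely invertible but uniformly well-conditioned away from singularity.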
Wikipedia offers the following definition for an (embedded) submanifold: An embedded submanifold (also called a regular submanifold) is an immersed submanifold for which the inclusion map is a topological embedding. I've been wondering if one could not equivalently define a submanifold like this: $(\ast)$ Let $M$ be a smooth $m$-manifold and let $N$ be a subset. Then $N$ is an $n$-submanifold if and only if it is itself an $n$-manifold when endowed with the subspace topology. Is this definition equivalent to the usual definition with embedding? Edit! It appears that "my" definition is in fact common, too. I seem to have found it in this book on page 5: 1.5. Definition. A subset $M \subset \mathbb R^n$ is called a differentiable submanifold of $\mathbb R^n$ of dimension $m \le n$ if to each $x\in M$ there corresponds an invertible germ $\widetilde{\phi}:(\mathbb R^n,x) \to (\mathbb R^n,0)$ such that $\widetilde{\phi} (M,x) = (\mathbb R^m,0)\subset (\mathbb R^n,0)$ ($\mathbb R^m$ linearly embedded in $\mathbb R^n$ for $m \le n$). This is exactly the same as $(\ast)$ if in $(\ast)$ we let $M = \mathbb R^n$, right?
Starting from the following definition of the stress-energy tensor for a perfect fluid in special relativity: $${\displaystyle T^{\mu \nu }=\left(\rho+{\frac {p}{c^{2}}}\right)\,v^{\mu }v^{\nu }-p\,\eta ^{\mu \nu }\,}\quad(1)$$ with $$v^{\nu}=\dfrac{\text{d}x^{\nu}}{\text{d}\tau}$$ and $$V^{\nu}=\dfrac{\text{d}x^{\nu}}{\text{d}t}$$ (we have $v^{\nu}=\gamma\,V^{\nu}$). So, finally, I have to obtain the following relation: $$\dfrac{\partial \vec{V}}{\partial t} + (\vec{V}.\vec{grad})\vec{V} = -\dfrac{1}{\gamma^2(\rho+\dfrac{p}{c^2})} \bigg(\vec{grad}\,p+\dfrac{\vec{V}}{c^2}\dfrac{\partial p}{\partial t}\bigg)\quad(2)$$ To get this relation, I must use the conservation law $$\partial_{\mu}T^{\mu\nu}=0\quad(3)$$ for $\nu=i$ and $\nu=0$. If someone could help me find equation $(2)$ from $(1)$ and $(3)$, it would be nice to indicate the tricks to apply. EDIT 1: For the moment, here is where I am: I recognize in the left member of the wanted relation $(2)$ the Lagrangian derivative: $$\dfrac{\text{D}\,\vec{V}}{\text{d}t}=\dfrac{\partial \vec{V}}{\partial t} + (\vec{V}.\vec{\nabla})\vec{V}\quad(4)$$ and I can rewrite $(1)$ with the $V^{\mu}$ components as: $$T^{\mu\nu}=\left(\rho+\dfrac{p}{c^{2}}\right)\,\gamma^2\,V^{\mu}V^{\nu }-p\,\eta^{\mu\nu}\,\quad(5)$$ But from this point, I don't know how to make the link between $(4)$, $(5)$, $(3)$ (the divergence of the stress-energy tensor equal to zero), and $(1)$ ... Any help is welcome.
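For what it's worth, here is a sketch of the standard route from $(1)$ and $(3)$ to $(2)$, using the mostly-minus signature $\eta^{\mu\nu} = \mathrm{diag}(1,-1,-1,-1)$ implicit in $(1)$ and writing $x^0 = ct$ (details to be checked against the question's conventions):

```latex
% From (5): T^{00} = (\rho + p/c^2)\gamma^2 c^2 - p,
%           T^{0i} = (\rho + p/c^2)\gamma^2 c V^i,
%           T^{ij} = (\rho + p/c^2)\gamma^2 V^i V^j + p\,\delta^{ij}.
% The \nu = 0 component of (3) gives
\partial_t\!\left[\Big(\rho+\tfrac{p}{c^2}\Big)\gamma^2\right]
  + \partial_j\!\left[\Big(\rho+\tfrac{p}{c^2}\Big)\gamma^2 V^j\right]
  = \frac{1}{c^2}\,\frac{\partial p}{\partial t},
% while the \nu = i components give
\partial_t\!\left[\Big(\rho+\tfrac{p}{c^2}\Big)\gamma^2 V^i\right]
  + \partial_j\!\left[\Big(\rho+\tfrac{p}{c^2}\Big)\gamma^2 V^i V^j\right]
  + \partial_i p = 0.
% Expand the \nu = i equation with the product rule, writing it as
% V^i times the bracket appearing in the \nu = 0 equation plus the
% convective terms, then substitute the \nu = 0 equation:
\Big(\rho+\tfrac{p}{c^2}\Big)\gamma^2
  \left[\frac{\partial V^i}{\partial t} + (V^j\partial_j)V^i\right]
  = -\,\partial_i p - \frac{V^i}{c^2}\frac{\partial p}{\partial t},
% which, divided by \gamma^2(\rho + p/c^2), is exactly (2).
```

The "trick", in short: the $\nu=0$ equation is used to eliminate the time and space derivatives of $(\rho + p/c^2)\gamma^2$ from the expanded $\nu=i$ equation, leaving only the Lagrangian derivative $(4)$ of $\vec{V}$ on the left.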
Sep 02, 2019
This is a very intensive and wonderful course on CNNs. No other course in the MOOC world can be compared to this course's capability of simplifying complex concepts and visualizing them to get intuition.

By Gyuho S • Apr 25, 2019
This course is definitely tougher than the first three courses. Challenging but worth it.

By Farzeen H • Jan 12, 2019
Amazing! Feels like AI is getting tamed in my hands. Course lectures and assignments are excellent. To those who are not well versed with Python (numpy) and TensorFlow, it would be better to brush up.

By David B C S • Dec 17, 2018
Great course, easy to understand and very useful. The explanations are very clear, as is expected from the professor. The purpose of the course is for you to have a practical comprehension of CNNs; it will give you the necessary tools to implement your own networks, but it will not get into the specifics of each model. Nevertheless, all of the resources are referenced, which makes it very easy for you to dig deeper into any specific topic covered in the course.

By Aleksa G • Jan 13, 2019
Great course for kickoff into the world of CNNs. Gives a nice overview of existing architectures and certain applications of CNNs as well as giving some solid background in how they work internally.

By Ed B • Nov 03, 2017
Wonderful course. Covers a wide array of immediately appealing subjects: from object detection to face recognition to neural style transfer, intuitively motivating relevant models like YOLO and ResNet.

By Xinwei B • Feb 13, 2019
When I was doing the programming assignments, I felt that some parts were quite difficult since I had no background in either Keras or TensorFlow.
It was helpful that one of the previous courses included a tutorial on the basics of TensorFlow, but for Keras I felt there was a gap between what I had and what was needed for the assignments. So I would suggest a more thorough tutorial for Keras. Maybe several short tutorials on the implementations and ideas of TensorFlow and Keras would help a lot. by Stefan J • Dec 30, 2018 Theoretical material was great, as always. However, the programming assignments were poorly commented in some cases, which results in unnecessary confusion. by fabrizio f • Dec 17, 2018 Very good; however, most of the effort goes into learning and applying programming (TensorFlow, Keras) rather than actually thinking about the DL models and practicing different scenarios. by Joshua M • Jul 31, 2019 Content is great, but videos could be trimmed to cut retakes. A big issue is that guidance for the programming assignments abruptly drops off from extreme hand-holding to being thrown in the deep end. by Markus B • Dec 05, 2018 Great course. The only improvement I'd wish for is a better introduction to the concepts of TensorFlow and Keras. by Sergei S • Apr 29, 2019 Some parts of the course seemed incomplete to me; I wanted more information on why things work exactly as described. The last week's assignments have a number of uncertainties/bugs.
by Huijun P • Apr 18, 2019 Great lectures, but the programming assignments feel as if they are testing your proficiency with TensorFlow, which is neither formally covered in the lectures nor the most intuitive framework to understand. You'll spend so much time digging through convoluted TensorFlow documentation and Q&A to debug your code that you would rather learn TensorFlow formally first, then take this course, and still end up finishing faster than by going through this course alone. But it is only the programming assignments that assume you are already familiar with the TensorFlow framework; if you only go over the video lectures, the course gives a great overview of how CNNs work and many useful algorithms that can be applied to an assortment of situations. by Sriram G • Feb 10, 2019 Homework is too canned and does not promote deeper understanding. by Michael J • Jan 02, 2019 A short (but cogent) overview of CNNs with a ton of references to read through and much more interesting assignments (than previous courses). I really enjoyed this course; I got a ton of exposure from it. by Devjyoti M • Apr 22, 2019 This is one of the best courses on CNNs. It gives a very deep understanding of the concepts and helps you understand the brains behind CNNs and how they work in application-based environments. by Daniel G • Feb 14, 2018 Too much hand-holding during assignments, although the directions are still very good. Obviously the issue with the final programming assignment needs to be addressed. Fantastic lecture material, as always. by Tian Q • Jan 01, 2019 Excellent introductory course on CNNs. The basic ideas and key components are explained clearly. The coding assignments helped me understand the algorithms in every little detail. by Anne R • Oct 09, 2019 Out of the four courses I have taken in the deeplearning.ai sequence, this is the best one!
This course got to the heart of the methods that researchers are implementing and also dropped you into programming with TensorFlow. As noted by some other reviewers, there are places where more instruction could be helpful, but I felt that this course struck a good balance between informative and challenging, and also between concepts and hands-on implementation. A couple of the prior courses could be merged, and then this would be the 2nd or 3rd course in the sequence, which would be much better at getting students to complete the projects and all courses. by Cosmin D • Jan 04, 2019 Good content; the videos have the occasional editing hiccups that also affect other courses in this specialisation. Assignments could be a little bit harder but do a reasonable job of familiarising you with useful deep learning frameworks. by Sai B A • Oct 09, 2019 The course content is great. I felt like the programming assignments should have more information on running the TensorFlow sessions, and (optional) information for people who are not familiar with TensorFlow would be great. by Ralph J R F • Apr 27, 2019 I think it's a good idea to remove repeated parts in the videos. Also, put all the pieces together to give a better overview of the object detection solution. by Alberto B • Feb 08, 2019 The assignments are very badly explained. by divya p p • Feb 18, 2019 Dear instructors, this is the most frustrating course of all your courses so far. The instructions completely misguided the candidates from the YOLO implementation onwards. Up to then you presented the course very well, but when it came to the most important topics, we had to focus on syntactical errors, when we were supposed to spend time on understanding the algorithms at this level. I don't know why you took this 180-degree turn. If you intentionally designed the course this way, then fine; otherwise, you should seriously think about reworking the instructions. A few links to hints took us to pages on GitHub with just folders.
I am sure many learners here have the same opinion; I can see this in the forum postings. From YOLO onwards, you were not giving the big picture of the task. This is confusing: we were lost about where we were heading by the middle of the assignment. With all due respect to your highly precious time, I request you to enhance the assignment instructions. All the motivation I got from the previous course, I am losing because of this one. Personally, I find YOLO easy to understand, but the instructions were misguiding and confusing the candidates. This is my honest feedback, as I very much like this course, and I am going forward to the 5th course in this series. Last but not least, thank you for making this high-quality knowledge available to the public with easy access via Coursera. by Basile B • Apr 30, 2018 The IoU validation problem is known, but nothing has been done to resolve it; there are video editing problems; and there is an unreadable formula in the Python notebook for art generation (example: $$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum _{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{ij} - G^{(G)}_{ij})^2\tag{2} $$). What happened? It was great so far... =( by ENRIQUE A C A • Nov 18, 2018 Excellent course!!!
The second formula is wrong: the outside parts are equal to each other, but the middle part is merely proportional to (and not necessarily equal to) the outside parts. The likelihood is defined by $L(\theta \mid y) = k(y) p(y \mid \theta) \propto p(y \mid \theta)$ where $k$ is some constant-of-proportionality that does not depend on $\theta$. This means you have: $$p(\theta \mid y) = \frac{p(y \mid \theta) p(\theta)}{p(y)} = \frac{k(y) L(\theta \mid y) p(\theta)}{p(y)} \propto \frac{L(\theta \mid y) p(\theta)}{p(y)}.$$ Using the law of total probability you also have $p(y) = \int p(y \mid \theta) p(\theta) d\theta$ which gives: $$p(\theta \mid y) = \frac{p(y \mid \theta) p(\theta)}{p(y)} = \frac{k(y) L(\theta \mid y) p(\theta)}{k(y) \int L(\theta \mid y) p(\theta) d\theta} = \frac{L(\theta \mid y) p(\theta)}{\int L(\theta \mid y) p(\theta) d\theta}.$$ In the special case where $k(y) = 1$ you have $L(\theta \mid y) = p(y \mid \theta)$ and so in this case you get the second equation you specified. However, it is common when using likelihood functions to use a constant-of-proportionality that effectively removes multiplicative terms that do not depend on $\theta$.
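To see concretely that the constant of proportionality $k(y)$ cancels, here is a small numeric sketch (the grid, prior, and likelihood below are invented for illustration, not taken from the question):

```python
import numpy as np

# Discretize theta, pick any likelihood p(y|theta), and check that
# multiplying it by a constant k(y) leaves the posterior unchanged.
theta = np.linspace(0.01, 0.99, 99)          # grid over theta
prior = np.ones_like(theta) / len(theta)     # flat prior p(theta)
p_y_given_theta = theta**3 * (1 - theta)**2  # e.g. 3 successes, 2 failures

def posterior(likelihood, prior):
    """Normalize likelihood * prior; the denominator is the discrete
    analogue of the integral of L(theta|y) p(theta) d theta."""
    unnorm = likelihood * prior
    return unnorm / unnorm.sum()

post1 = posterior(p_y_given_theta, prior)          # k(y) = 1
post2 = posterior(42.0 * p_y_given_theta, prior)   # any k(y) > 0

print(np.allclose(post1, post2))  # True: the constant cancels
```

The normalization step divides out any $\theta$-free factor, which is exactly why working with $L(\theta \mid y) \propto p(y \mid \theta)$ is harmless.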
Journal of Symbolic Logic, Volume 65, Issue 3 (2000), 1223-1240. Fragments of Heyting Arithmetic. Abstract: We define classes $\Phi_n$ of formulae of first-order arithmetic with the following properties: (i) Every $\varphi \in \Phi_n$ is classically equivalent to a $\Pi_n$-formula ($n \neq 1$, $\Phi_1 := \Sigma_1$). (ii) $\bigcup_{n\in \omega} \Phi_n = \mathscr{L}_A$. (iii) $I\Pi_n$ and $i\Phi_n$ (i.e., Heyting arithmetic with the induction schema restricted to $\Phi_n$-formulae) prove the same $\Pi_2$-formulae. We further generalize a result by Visser and Wehmeier, namely that prenex induction within intuitionistic arithmetic is rather weak: after closing $\Phi_n$ both under existential and universal quantification (we call these classes $\Theta_n$), the corresponding theories $i\Theta_n$ still prove the same $\Pi_2$-formulae. In a second part we consider $i\Delta_0$ plus collection principles. We show that both the provably recursive functions and the provably total functions of $i\Delta_0 + \{\forall x \leq a \exists y \varphi(x,y) \rightarrow \exists z \forall x \leq a \exists y \leq z \varphi(x,y) \mid \varphi \in \mathscr{L}_A\}$ are polynomially bounded. Furthermore we show that the contrapositive of the collection schema gives rise to instances of the law of excluded middle, and hence $i\Delta_0 + \{B\varphi, C\varphi \mid \varphi \in \mathscr{L}_A\} \vdash PA$. Citation: Burr, Wolfgang. Fragments of Heyting Arithmetic. J. Symbolic Logic 65 (2000), no. 3, 1223-1240. MR1791374; Zbl 0966.03052. https://projecteuclid.org/euclid.jsl/1183746179
The Kunen inconsistency

The Kunen inconsistency, the theorem showing that there can be no nontrivial elementary embedding from the universe to itself, remains a focal point of large cardinal set theory, marking a hard upper bound at the summit of the main ascent of the large cardinal hierarchy, the first outright refutation of a large cardinal axiom. On this main ascent, large cardinal axioms assert the existence of elementary embeddings $j:V\to M$ where $M$ exhibits increasing affinity with $V$ as one climbs the hierarchy. The $\theta$-strong cardinals, for example, have $V_\theta\subset M$; the $\lambda$-supercompact cardinals have $M^\lambda\subset M$; and the huge cardinals have $M^{j(\kappa)}\subset M$. The natural limit of this trend, first suggested by Reinhardt, is a nontrivial elementary embedding $j:V\to V$, the critical point of which is accordingly known as a Reinhardt cardinal. Shortly after this idea was introduced, however, Kunen famously proved that there are no such embeddings, and hence no Reinhardt cardinals in ZFC. Since that time, the inconsistency argument has been generalized by various authors, including Harada [1] (p. 320-321), Hamkins, Kirmayer and Perlmutter [2], Woodin [1] (p. 320-321), Zapletal [3] and Suzuki [4, 5].

There is no nontrivial elementary embedding $j:V\to V$ from the set-theoretic universe to itself.

There is no nontrivial elementary embedding $j:V[G]\to V$ of a set-forcing extension of the universe to the universe, and neither is there $j:V\to V[G]$ in the converse direction. More generally, there is no nontrivial elementary embedding between two ground models of the universe.

More generally still, there is no nontrivial elementary embedding $j:M\to N$ when both $M$ and $N$ are eventually stationary correct.

There is no nontrivial elementary embedding $j:V\to \text{HOD}$, and neither is there $j:V\to M$ for a variety of other definable classes, including gHOD and the $\text{HOD}^\eta$, $\text{gHOD}^\eta$.
If $j:V\to M$ is elementary, then $V=\text{HOD}(M)$.

There is no nontrivial elementary embedding $j:\text{HOD}\to V$. More generally, for any definable class $M$, there is no nontrivial elementary embedding $j:M\to V$.

There is no nontrivial elementary embedding $j:\text{HOD}\to\text{HOD}$ that is definable in $V$ from parameters.

It is not currently known whether the Kunen inconsistency may be undertaken in ZF. Nor is it known whether one may rule out nontrivial embeddings $j:\text{HOD}\to\text{HOD}$ even in ZFC.

Metamathematical issues

Kunen formalized his theorem in Kelley-Morse set theory, but it is also possible to prove it in the weaker system of Gödel-Bernays set theory. In each case, the embedding $j$ is a GBC class, and the elementarity of $j$ is asserted as $\Sigma_1$-elementarity, which implies $\Sigma_n$-elementarity when the two models have the same ordinals.

Reinhardt cardinal

Although the existence of Reinhardt cardinals has now been refuted in ZFC and GBC, the term is used in the ZF context to refer to the critical point of a nontrivial elementary embedding $j:V\to V$ of the set-theoretic universe to itself.

Super Reinhardt cardinal

A super Reinhardt cardinal $\kappa$ is a cardinal which is the critical point of elementary embeddings $j:V\to V$ with $j(\kappa)$ as large as desired.

References

Kanamori, Akihiro. The higher infinite: Large cardinals in set theory from their beginnings. Second edition, Springer-Verlag, Berlin, 2009 (paperback reprint of the 2003 edition).

Zapletal, Jindrich. A new proof of Kunen's inconsistency. Proc. Amer. Math. Soc. 124(7):2203-2204, 1996.

Suzuki, Akira. Non-existence of generic elementary embeddings into the ground model. Tsukuba J. Math. 22(2):343-347, 1998.

Suzuki, Akira. No elementary embedding from $V$ into $V$ is definable from parameters. J. Symbolic Logic 64(4):1591-1594, 1999.
The Open and Closed Sets of a Topological Space Consider a topological space $(X, \tau)$. We will now define exactly what the open and closed sets of this topological space are. Definition: Let $(X, \tau)$ be a topological space. If $A \subseteq X$ is such that $A \in \tau$ then $A$ is said to be Open. A subset $A \subseteq X$ is said to be Closed if $A^c = X \setminus A$ is open. If $A \subseteq X$ is both open and closed, then $A$ is said to be Clopen. By the definition above, a set $A$ is closed if and only if $X \setminus A$ is open. From this, we get a criterion for whether or not a set is open. Proposition 1: Let $(X, \tau)$ be a topological space and let $A \subseteq X$. Then $A$ is open if and only if $X \setminus A$ is closed. Proof: $\Rightarrow$ Suppose that $A$ is open. Since $(X \setminus A)^c = A$ and $A$ is open, we see that $X \setminus A$ is closed. $\Leftarrow$ Suppose that $X \setminus A$ is closed. Then by definition, $(X \setminus A)^c = A$ is open. $\blacksquare$ The following theorem follows directly from the definition of closed sets above and the definition of a topological space. Theorem 2: Let $(X, \tau)$ be a topological space. Then: a) $\emptyset$ and $X$ are closed sets. b) If $\{ U_i \}_{i \in I}$ is an arbitrary collection of closed subsets of $X$ for some index set $I$ then $\displaystyle{\bigcap_{i \in I} U_i}$ is closed. c) If $\{ U_1, U_2, ..., U_n \}$ is a finite collection of closed subsets of $X$ then $\displaystyle{\bigcup_{i=1}^{n} U_i}$ is closed. Proof of a) The complement of $\emptyset$ is $\emptyset^c = X$, which is open, so $\emptyset$ is closed. Similarly, the complement of $X$ is $X^c = \emptyset$, which is open. Therefore $\emptyset$ and $X$ are closed. $\blacksquare$ Proof of b) Let $\{ U_i \}_{i \in I}$ be an arbitrary collection of closed subsets of $X$ for some index set $I$. Consider the intersection $\displaystyle{\bigcap_{i \in I} U_i}$.
The complement of this set (using De Morgan's laws) is: $$\left ( \bigcap_{i \in I} U_i \right )^c = \bigcup_{i \in I} U_i^c$$ Since $U_i$ is closed for each $i \in I$ we have that $U_i^c$ is open for each $i \in I$. Therefore the complement above is the union of an arbitrary collection of open sets, which is open. Therefore $\displaystyle{\bigcap_{i \in I} U_i}$ is closed. $\blacksquare$ Proof of c) Let $\{ U_1, U_2, ..., U_n \}$ be a finite collection of closed subsets of $X$. Consider the union $\displaystyle{\bigcup_{i=1}^{n} U_i}$. The complement of this set (using De Morgan's laws) is: $$\left ( \bigcup_{i=1}^{n} U_i \right )^c = \bigcap_{i=1}^{n} U_i^c$$ Since $U_i$ is closed for each $i \in \{ 1, 2, ..., n \}$ we must have that $U_i^c$ is open for each $i \in \{ 1, 2, ..., n \}$. Therefore the complement above is the intersection of a finite collection of open sets, which is open. Therefore $\displaystyle{\bigcup_{i=1}^{n} U_i}$ is closed. $\blacksquare$ Example 1 As proven in the theorem above, if $(X, \tau)$ is a topological space then the whole set $X$ and the empty set $\emptyset$ are always closed in the topological space. By definition, they are also always open. Therefore, $X$ and $\emptyset$ are always clopen sets. Sometimes $X$ and $\emptyset$ are the only clopen sets for a particular topology $\tau$ on $X$, but in general, a topological space may have many clopen sets. Example 2 Consider the set $X = \{ a, b, c \}$ and the nested topology $\tau = \{ \emptyset, \{ a \}, \{ a, b \}, \{ a, b, c \} \}$. Then all elements of $\tau$ are open, and the sets $\{ b, c \}$ and $\{ c \}$ are closed sets since: $$\{ b, c \}^c = \{ a \} \in \tau \quad \text{and} \quad \{ c \}^c = \{ a, b \} \in \tau$$ Further we note that $\emptyset^c = X$ and $X^c = \emptyset$, so by definition $\emptyset$ and $X$ are both open AND closed, i.e., clopen! In general, it is possible that other subsets of $X$ are both open and closed, or neither.
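The classification in Example 2 can also be checked mechanically; the following sketch (written for this note, with helper names of my own choosing) enumerates every subset of $X$ and tests it against $\tau$:

```python
from itertools import chain, combinations

# Classify every subset of X = {a, b, c} under the nested topology tau:
# a set is open iff it belongs to tau, closed iff its complement does.
X = frozenset('abc')
tau = {frozenset(), frozenset('a'), frozenset('ab'), X}

def subsets(s):
    """All subsets of s, as frozensets (the power set)."""
    items = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

open_sets = {A for A in subsets(X) if A in tau}
closed_sets = {A for A in subsets(X) if X - A in tau}  # closed iff complement open
clopen_sets = open_sets & closed_sets

print(closed_sets == {frozenset(), frozenset('c'), frozenset('bc'), X})  # True
print(clopen_sets == {frozenset(), X})                                   # True
```

As the example states, the closed sets are exactly $\emptyset$, $\{c\}$, $\{b,c\}$, $X$, and only $\emptyset$ and $X$ are clopen here.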
Complex Roots of The Characteristic Equation Consider the following second order linear homogeneous differential equation $a \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + cy = 0$ where $a$, $b$, and $c$ are constants. Recall that the characteristic equation for this differential equation is the quadratic polynomial $ar^2 + br + c = 0$ and that the roots of this polynomial, $r_1$ and $r_2$, are either both real and distinct, both complex and conjugates of each other, or the same real root repeated twice. In the case when $r_1$ and $r_2$ were two distinct real roots, we saw that the general solution to the second order linear homogeneous differential equation $a \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + cy = 0$ was $y = Ce^{r_1 t} + De^{r_2 t}$ where $C$ and $D$ are constants. We will now look at the case where the roots $r_1$ and $r_2$ are complex numbers. Suppose that $r_1$ and $r_2$ are both complex number roots of the characteristic equation $ar^2 + br + c = 0$. One important theorem regarding polynomials is that a polynomial with real coefficients has the property that if $r_1$ is a complex root of the polynomial, then the complex conjugate of $r_1$ is also a root of the polynomial. In this case, since $r_1$ is a complex root of the characteristic equation, we must have that $r_2$, the second root, is the complex conjugate of $r_1$, that is, $r_1 = \overline{r_2}$. We thus have that the roots $r_1$ and $r_2$ can both be written as $r_1 = \lambda + \mu i$ and $r_2 = \lambda - \mu i$ where $\lambda, \mu \in \mathbb{R}$.
Therefore we have that two solutions to our differential equation are: $$y_1 = e^{(\lambda + \mu i)t} \quad \text{and} \quad y_2 = e^{(\lambda - \mu i)t} \tag{1}$$ As of the moment, the solutions given above are not that useful to us, so we will make use of perhaps one of the most famous formulas in mathematics, known as Euler's Formula: $$e^{i\theta} = \cos \theta + i \sin \theta \tag{2}$$ With this in hand, we see that our solution $y_1 = e^{(\lambda + \mu i)t}$ can be rewritten as: $$y_1 = e^{\lambda t} e^{\mu i t} = e^{\lambda t} \left ( \cos (\mu t) + i \sin (\mu t) \right ) \tag{3}$$ Furthermore, noting that $\cos (-\mu t) = \cos (\mu t )$ and $\sin (- \mu t) = - \sin (\mu t)$, we have that our solution $y_2 = e^{(\lambda - \mu i)t}$ can be rewritten as: $$y_2 = e^{\lambda t} e^{-\mu i t} = e^{\lambda t} \left ( \cos (\mu t) - i \sin (\mu t) \right ) \tag{4}$$ Once again, the solutions above are nicer, though they still involve the complex number $i$, while our original differential equation has only real coefficients. Fortunately, the following theorem will allow us to write our solutions in terms of real-valued functions. Theorem 1: If $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = 0$ is a second order linear homogeneous differential equation where $p$ and $q$ are continuous real-valued functions and if $y = u(t) + v(t) i$ is a complex-valued solution to the differential equation above, then $y = u(t)$ and $y = v(t)$ are also solutions to this differential equation. Proof: Let $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = 0$ be a second order linear homogeneous differential equation where $p$ and $q$ are continuous, and suppose that $y = u(t) + v(t) i$ is a complex-valued solution. Plugging $y = u(t) + v(t)i$ into our differential equation, we get: $$\left ( u''(t) + p(t) u'(t) + q(t) u(t) \right ) + \left ( v''(t) + p(t) v'(t) + q(t) v(t) \right ) i = 0 \tag{5}$$ Note that the left-hand side of the equation above is a complex number for each $t$. However, a complex number $z$ is equal to $0$ if and only if $\mathrm{Re} (z) = 0$ and $\mathrm{Im} (z) = 0$. From above, this implies that both: $$u''(t) + p(t) u'(t) + q(t) u(t) = 0 \quad \text{and} \quad v''(t) + p(t) v'(t) + q(t) v(t) = 0 \tag{6}$$ Therefore $y = u(t)$ and $y = v(t)$ are both solutions to our second order linear homogeneous differential equation $\frac{d^2 y}{dt^2} + p(t) \frac{dy}{dt} + q(t) y = 0$.
$\blacksquare$ From Theorem 1 above, we thus see that $y_1 = e^{\lambda t} \cos (\mu t)$ and $y_2 = e^{\lambda t} \sin (\mu t)$ are both solutions to our second order linear homogeneous differential equation. Therefore, if the roots of the characteristic equation $ar^2 + br + c = 0$ are the complex conjugates $r = \lambda \pm \mu i$, then for constants $C$ and $D$ the general solution to our second order linear homogeneous differential equation is given by: $$y = e^{\lambda t} \left ( C \cos (\mu t) + D \sin (\mu t) \right ) \tag{7}$$
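As a sanity check of the general solution above, the following numerical sketch (coefficients and constants chosen arbitrarily for illustration) verifies that $y = e^{\lambda t}(C\cos(\mu t) + D\sin(\mu t))$ solves $a y'' + b y' + c y = 0$ for $a=1$, $b=2$, $c=5$, whose characteristic roots are $r = -1 \pm 2i$:

```python
import numpy as np

# For r^2 + 2r + 5 = 0 the roots are r = -1 +/- 2i, so lambda = -1, mu = 2.
# We check the ODE residual using central finite differences.
a, b, c = 1.0, 2.0, 5.0
lam, mu = -1.0, 2.0
C, D = 3.0, -1.5  # arbitrary constants in the general solution

def y(t):
    return np.exp(lam * t) * (C * np.cos(mu * t) + D * np.sin(mu * t))

t = np.linspace(0.1, 5.0, 200)
h = 1e-5
y1 = (y(t + h) - y(t - h)) / (2 * h)            # approximate y'
y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2    # approximate y''

residual = a * y2 + b * y1 + c * y(t)
print(np.max(np.abs(residual)))  # tiny: zero up to finite-difference error
```

Any choice of $C$ and $D$ gives the same tiny residual, as the theory predicts.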
Let $X$ be a Hausdorff space which is locally compact at $x \in X$. Show that for each open nbd $U$ of $x$ there exists an open nbd $V$ of $x$ such that $\overline{V}$ is compact and $\overline{V} \subset U$. My work: Since $X$ is Hausdorff and locally compact, $X$ is regular. Let $U$ be an open nbd of $x$. By assumption $X$ is locally compact, so there exists some open nbd $W$ of $x$ such that $\overline{W}$ is compact. Now consider the open set $W \cap U$; this is non-empty since $x$ lies in the intersection. By regularity, find an open set $V$ such that: $x\in V \subset \overline{V} \subset W \cap U$. Then in particular $\overline{V} \subset U$. But also $\overline{V} \subset W \subset \overline{W}$. Since $\overline{W}$ is compact, $\overline{V}$ is a closed subset of a compact set, hence compact. Is the above OK? Thank you.
This is a heuristic explanation of Witten's statement, without going into the subtleties of axiomatic quantum field theory, such as vacuum polarization or renormalization. A particle is characterized by a definite momentum plus possibly other quantum numbers. Thus, one-particle states are by definition states with definite eigenvalues of the momentum operator; they can have further quantum numbers. These states should exist even in an interacting field theory, describing a single particle away from any interaction. In a local quantum field theory, these states are associated with local field operators: $$| p, \sigma \rangle = \int e^{ipx} \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x$$ where $\psi$ is the field corresponding to the particle and $\sigma$ describes the set of quantum numbers additional to the momentum. A symmetry generator $Q$, being the integral of a charge density according to Noether's theorem, $$Q = \int j_0(x') d^3x'$$ should generate a local field when it acts on a local field: $[Q, \psi_1(x)] = \psi_2(x)$. (In the case of internal symmetries $\psi_2$ depends linearly on the components of $\psi_1(x)$; in the case of space-time symmetries it depends on the derivatives of the components of $\psi_1(x)$.) Thus in general: $$[Q, \psi_{\sigma}(x)] = \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}(x)$$ where the dependence of the coefficients $C_{\sigma\sigma'}$ on the momentum operator $\nabla$ is due to the possibility that $Q$ contains a space-time symmetry. Thus for an operator $Q$ satisfying $Q|0\rangle = 0$, we have $$ Q | p, \sigma \rangle = \int e^{ipx} Q \psi_{\sigma}^{\dagger}(x) |0\rangle d^4x = \int e^{ipx} [Q , \psi_{\sigma}^{\dagger}(x)] |0\rangle d^4x = \int e^{ipx} \sum_{\sigma'} C_{\sigma\sigma'}(i\nabla)\psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) \int e^{ipx} \psi_{\sigma'}^{\dagger}(x) |0\rangle d^4x = \sum_{\sigma'} C_{\sigma\sigma'}(p) | p, \sigma' \rangle $$ Thus the action of the operator
$Q$ on the one-particle states furnishes a representation. The fact that $Q$ commutes with the Hamiltonian is responsible for the energy degeneracy of its action, i.e., the states $| p, \sigma \rangle$ and $Q| p, \sigma \rangle$ have the same energy. This post imported from StackExchange Physics at 2015-06-16 14:50 (UTC), posted by SE-user David Bar Moshe
If $A \sim B$ and $B \sim C$. Prove that $A \sim C$ What I have: We know there exist functions $f,g$ such that $f:A\to B$ and $g:B \to C$ where $f$ and $g$ are bijective. We thus require to show that there exists a function $h:A \to C$ where $h$ is bijective. Can anyone please give me some hints that might point me in the right direction? I do not want a complete answer, simply a nudge in the right direction. UPDATE: This is my (very rough) attempt at the proof. We know there exist functions $f,g$ such that $f:A\to B$ and $g:B \to C$ where $f$ and $g$ are bijective. We thus require to show that there exists a function $h:A \to C$ where $h$ is bijective. Define $h = g \circ f$, that is, $h(x) = g(f(x))$. We must now show that $h$ is bijective. $\underline{RTP:}$ If $f$ and $g$ are surjective, then $g \circ f$ is surjective. $\underline{Proof:}$ We are given that $f$ and $g$ are surjective (since they are bijective). Suppose $c \in C$; then since $g$ is surjective, we know $\exists b \in B$ such that $g(b) = c$. Similarly, since $f$ is surjective, $\exists a \in A$ such that $f(a) = b$. Thus \begin{align}g \circ f(a) &= g(f(a)) \\ &= g(b) \\ &= c\end{align} Hence we have shown that $h$ is surjective. We must now show that $h$ is injective. $\underline{RTP:}$ If $f$ and $g$ are injective, then $g\circ f$ is also injective. $\underline{Proof:}$ We are given that $f$ and $g$ are injective (since they are bijective), and suppose $$g \circ f(x_1) = g \circ f(x_2)$$ We thus have that $$g(f(x_1))= g(f(x_2))$$ Since $g$ is injective, we know that this is true if and only if $f(x_1) = f(x_2)$. Similarly, since $f$ is injective, it follows that $$x_1 = x_2$$ Thus $g \circ f$ is injective. We have thus proven that there exists a function $h = g \circ f : A \to C$ that is bijective. Hence we can conclude that $A \sim C$.
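The two RTP steps can also be checked on a concrete finite example; the sets and maps below are invented purely for illustration:

```python
# Represent bijections f: A -> B and g: B -> C as dicts and verify that
# the composite h = g o f is itself a bijection from A to C.
A = {1, 2, 3}
B = {'x', 'y', 'z'}
C = {10, 20, 30}

f = {1: 'x', 2: 'y', 3: 'z'}     # a bijection A -> B
g = {'x': 20, 'y': 30, 'z': 10}  # a bijection B -> C

def is_bijection(func, dom, cod):
    """func is a bijection iff it is defined exactly on dom, its values
    fill all of cod (surjective), and no value repeats (injective)."""
    values = [func[a] for a in dom]
    return set(func) == dom and set(values) == cod and len(set(values)) == len(values)

h = {a: g[f[a]] for a in A}  # h = g o f

print(is_bijection(f, A, B), is_bijection(g, B, C), is_bijection(h, A, C))
# prints: True True True
```

Of course this only checks one instance; the proof above is what establishes it for arbitrary sets.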
In addition to the excellent answers already given, there are a few subtleties one should explicitly point out. There are two main concepts for integration, the first being indefinite integration, that is, finding the antiderivative of a function, and the second being definite integration, finding the measure of the (signed) area enclosed by the graph of a function and the x-axis. There are various ways to codify these concepts in a rigorous mathematical language. For example, if we know that $f$ is a real function over $[a, b]$, $a,b \in \mathbb{R}$, we use Riemann's definition of $$\int_{a}^b f(x) \, \operatorname{d}\!x \ ,$$ which can be found in any elementary calculus textbook. Provided that $f$ meets some specific conditions, we say that $f$ is Riemann-integrable over $[a,b]$ and we assign the above symbolic expression a unique real value. (There is also a theorem according to which if $f$ is Riemann-integrable over $[a,b]$, it's also Riemann-integrable over any closed subinterval of $[a,b]$.) But what about indefinite integration? Remember that, given a real function $h$ over an interval $I$, a function $H:I \to \mathbb{R}$ is called an antiderivative of $h$ just in case $H'=h$. You should note that whenever $H$ is an antiderivative of $h$, $H+c$ is likewise an antiderivative of $h$, and you can prove that any antiderivative of $h$ is of the form $H+c$ for some constant $c$. Now, get ready for the hard truth... While acknowledging that there is an infinite number of antiderivatives for $h$ (all the functions $H+c$), mathematicians go mad, break their own rules, and proudly state: $$\int h(x) \, \operatorname{d}\!x = H(x),$$ without any hint of shame! Sometimes, we even go as far as claiming that ‘the indefinite integral of $h(x)$ is $H(x)$ for all $x \in I$’! There is not really such a thing as the indefinite integral of a function, for there is not such a thing as the antiderivative of a function; there are uncountably many of them.
This is a clear abuse of notation/terminology, but due to historical reasons, it is accepted as the standard to this day. To illustrate the above-mentioned facts, some books prefer to write $$\int h(x) \, \operatorname{d}\!x = H(x)+c,$$ and say that $c$ is an arbitrary constant, called the constant of integration. This is a good way to remind us that if we have an integral equation like this: $$\int h(x) \, \operatorname{d}\!x + x^2 = 6\ln x + 42,$$ we get the equivalent statement $$\exists c \in \mathbb{R} \quad H(x) + c + x^2 = 6\ln x +42.$$ $$\star \ \star \ \star$$ Now you may understand what exactly that peculiar constant is, and why it's not supposed to match a specific function, as you say in your question. There are no constants ‘existing’ in integrable functions! I'll finish this long (too long?) post by noticing that the integral $$\int_{a}^x f'(t) \, \operatorname{d}\!t$$ in question is not just a definite integral (definite integrals equal a real number) but a function $$F(x) = \int_{a}^x f'(t) \, \operatorname{d}\!t, \ x \in D,$$ where $f'$ should be Riemann-integrable over $D$ (recall also that $a \in D$, and $D$ should be a closed and bounded interval). After all this, you can use the Fundamental Theorem of Calculus and find that $$F(x) = f(x) - f(a), \ x \in D,$$ as already mentioned by others. I know I wrote a lot, but I think this helps the OP figure out some common misconceptions that underlie her/his question. Any comments/corrections welcome!
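To make that last point concrete, here is a small numeric sketch (the function $f$ and the base point $a=1$ are chosen here for illustration, not taken from the question) showing that $F(x)=\int_a^x f'(t)\,\operatorname{d}\!t$ agrees with $f(x)-f(a)$:

```python
import math

# Approximate F(x) = \int_a^x f'(t) dt by the trapezoidal rule and compare
# with f(x) - f(a): the definite integral recovers the antiderivative only
# up to the constant f(a).
def f(t):
    return math.sin(t) + t**2

def fprime(t):
    return math.cos(t) + 2*t

def F(x, a=1.0, n=100_000):
    """Trapezoidal approximation of the integral of f' from a to x."""
    h = (x - a) / n
    s = 0.5 * (fprime(a) + fprime(x)) + sum(fprime(a + k*h) for k in range(1, n))
    return s * h

x = 2.5
print(F(x), f(x) - f(1.0))  # the two values agree to high accuracy
```

Changing the base point $a$ shifts $F$ by a constant, which is precisely the constant of integration in disguise.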
a) Regular implies normal. Okay, maybe that's too high-tech. If $f=g+hy$ lies in the integral closure, then it satisfies the quadratic polynomial $(z-g)^2 - h^2(x^3-x)$. If the coefficients are polynomials (which they must be, by Gauss's Lemma), then $g$ is a polynomial, and $h^2 (x^3-x)$ is a polynomial. But $x^3-x$ is square-free (here we use that the characteristic is not $2$), so $h$ is a polynomial and $f\in R$. b) Since $y^2,x^3 \in P^2$, it is straightforward that $x\in P^2$ and $P^2 = (x)$. c) (Note: it is sufficient to address this problem for $R\otimes_k \overline{k}$, so we assume that $k$ is algebraically closed to avoid some technicalities.) First of all, $X=\operatorname{Spec}k[x,y]/(y^2-x^3+x)$ is an open subset of an elliptic curve $\overline{X} = X\cup \{\infty\}$. This is a rather important observation, because it implies that $R=k[x,y]/(y^2-x^3+x)$ doesn't have any principal prime ideals whatsoever (except, of course, for $(0)$). We will demonstrate this, real mathematician-style, by looking at the global behavior of a hypothetical generator of $P$. If $P=(f)$ is principal, then $f$ extends to a rational function on $\overline{X}$ with a simple pole at $\infty$. Therefore $1/f$ is a non-constant rational function on $\overline{X}$ with only a simple pole at $P$. This implies that $l(P) \geq 2$. Here $l(P)$ is shorthand for the dimension of the space of rational functions on $\overline{X}$ which have at worst a simple pole at $P$, and are regular elsewhere. In general, $l(D)$ is defined for $D$ a Weil divisor, i.e. a formal sum of points. For example, $l(P+2Q-3R)$ counts the number of linearly independent functions which must have a triple zero at $R$, and are allowed to have a pole of order $1$ at $P$, and $2$ at $Q$. The sum of the coefficients of $D$ is called its degree.
We will calculate $l(P)$ exactly using the Riemann-Roch theorem, which says that, for any Weil divisor $D$, $l(D)-l(K-D) = \operatorname{deg}{D} + 1 - g$, where $g$ is the genus of $\overline{X}$, and $K$ is the divisor associated to any differential form on $\overline{X}$. For an elliptic curve, we must have $K=0$ and $g=1$, so this equation takes the simple form $l(D)-l(-D) = \operatorname{deg}{D}$. We set $D=P$. Since $\operatorname{deg}{P} = 1$, and $l(-P)=0$ (in fact, $l(-D)=0$ for any divisor of positive degree), we conclude that $l(P) = 1$. Since earlier we found that $l(P)\geq 2$ for any principal maximal ideal on $X$ (in fact, the proof works for any projective curve with a single point removed), we have our contradiction. (The above does have a nice, concrete interpretation: when you strip away the machinery, the argument is that, if $P=(f)$, then the map $k[x]\to R$ sending $x$ to $f$ is an isomorphism. We use basic facts about divisors to avoid some difficult calculations, and genus to show that $R$ cannot be a polynomial ring.)
Suppose $i$ and $j$ are indices which take values $1, \dots, m$, and for each $i$ and $j$, we have a number $a_{ij}$. Note that $(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}$. If we were to sum over all possible values of $(i, j)$ we would have $$\sum_{(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}}a_{ij}$$ which could also be written as $$\sum_{i \in \{1, \dots, m\}}\sum_{j\in\{1, \dots, m\}}a_{ij}$$ or more commonly $$\sum_{i=1}^m\sum_{j=1}^ma_{ij}.$$ Sometimes, we are not interested in all possible pairs of indices, but only those pairs which satisfy some condition. In the example you are looking at, the pairs of indices of interest are $(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}$ such that $i \leq j$. One way to denote the sum over all such pairs of indices is $$\sum_{\substack{(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}\\ i \leq j}}a_{ij}$$ but this is rather cumbersome. It would be much more helpful if we could write it as a double sum as above. To do this, note that we can list all suitable pairs of indices, by first fixing $i \in \{1, \dots, m\}$ and then allowing $j$ to vary from $i$ to $m$ (as these are the only possible values of $j$ with $i \leq j$). Doing this, we obtain the double sum $$\sum_{i=1}^m\sum_{j=i}^ma_{ij}.$$ Note, we could also have fixed $j \in \{1, \dots, m\}$ and then allowed $i$ to vary from $1$ to $j$ (as these are the only possible values of $i$ with $i \leq j$). Doing this, we obtain an alternative double sum $$\sum_{j=1}^m\sum_{i=1}^ja_{ij}.$$ The notation that you are asking about is yet another way to express the sum. That is, $$\mathop{\sum\sum}_{i \leq j}a_{ij} = \sum_{\substack{(i, j) \in \{1, \dots, m\}\times\{1, \dots, m\}\\ i \leq j}}a_{ij} = \sum_{i=1}^m\sum_{j=i}^ma_{ij} = \sum_{j=1}^m\sum_{i=1}^ja_{ij}.$$
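The equivalence of these ways of writing the sum is easy to check numerically; a quick sketch with hypothetical entries $a_{ij} = 10i + j$ (chosen only for illustration):

```python
m = 4
# hypothetical entries a_ij = 10*i + j, 1-indexed as in the text
a = {(i, j): 10 * i + j for i in range(1, m + 1) for j in range(1, m + 1)}

# sum over all pairs (i, j) with i <= j, written three equivalent ways
s_pairs = sum(a[i, j] for i in range(1, m + 1)
                      for j in range(1, m + 1) if i <= j)
s_row = sum(a[i, j] for i in range(1, m + 1) for j in range(i, m + 1))
s_col = sum(a[i, j] for j in range(1, m + 1) for i in range(1, j + 1))

assert s_pairs == s_row == s_col  # all three orderings agree
```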
Perhaps you should revisit the definitions of random variable and distribution to clarify things. For random variables, I like the one on Wikipedia for its simplicity. A random variable $X : \Omega \rightarrow E$ is a measurable function from a set of possible outcomes $\Omega$ to a measurable space $E$. On the other hand, the cumulative distribution function $F_Z$ of a random variable $Z$, as you know, represents the probability that $Z$ takes a value within a specific region, that is, $F_Z(x)=\Pr(Z\leq x)$. The probability density function $f_Z$ of $Z$, that is, your first plot, can be defined simply as $$f_Z(x)=\frac{d}{dx}F_Z(x)$$ And of course, $F_Z$ is such that $\lim_{x\rightarrow \infty}F_Z(x)=\int_{-\infty}^{\infty} f_Z(t)\,dt=1$. Therefore, for any continuous random variable, we can come up with whichever density we want, provided it is nonnegative and integrates to $1$. It can have one, two, four or infinitely many modes, and the corresponding random variable can be represented as a single variable or as a mixture of infinitely many, differently distributed, variables. So, are two modes indicative of two variables? That's up to you. You should propose a mixture model for your data if you feel that is consistent with your understanding of the phenomena behind them. But are two modes indicative of two random variables? Well, just bear in mind that that is a well-defined mathematical concept, so you just need to go to the definition to see what you can take for granted and what not (no, the answer is no).
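To make this concrete, here is a sketch of a single random variable whose density has two modes (the 50/50 mixture of $N(-2,1)$ and $N(2,1)$ is my own illustrative choice, not taken from your plots):

```python
import numpy as np

# Density of a 50/50 mixture of N(-2,1) and N(2,1): ONE random variable, two modes.
def f(x):
    phi = lambda x, mu: np.exp(-(x - mu) ** 2 / 2) / np.sqrt(2 * np.pi)
    return 0.5 * phi(x, -2.0) + 0.5 * phi(x, 2.0)

# f is nonnegative and integrates to 1 (numerical check on a wide grid)
xs = np.linspace(-12, 12, 200001)
dx = xs[1] - xs[0]
total = float(np.sum(f(xs)) * dx)
assert abs(total - 1.0) < 1e-6

# Sampling Z: one draw per realization, even though a latent component is involved
rng = np.random.default_rng(0)
comp = rng.integers(0, 2, size=100_000)               # hidden component label
z = rng.normal(np.where(comp == 1, 2.0, -2.0), 1.0)   # a single variable Z
```

The two modes at $\pm 2$ belong to one well-defined random variable $Z$; the component label is latent, and whether you model it explicitly is exactly the modelling choice described above.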
A field $\phi(z)$ has conformal weight $h$ if it transforms under $z\rightarrow z_1(z)$ as $$ \phi(z) = \tilde{\phi}(z_1)\left(\frac{dz_1}{dz}\right)^h $$ The (classical) scaling dimension can be obtained for each field appearing in the Lagrangian by making use of the constraint that the action has to be dimensionless, resulting for example in $$ [\phi] = [A^{\mu}] = 1 $$ for a scalar and a gauge field, or $$ [\Psi_D] = [\Psi_M] = [\chi] = [\eta] = \frac{3}{2} $$ for Dirac, Majorana, and Weyl spinors. Are these two concepts of scaling dimension and conformal weight somehow related?
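For reference, the scalar entry $[\phi]=1$ above comes from the standard counting in $d=4$ with natural units (mass dimensions, so $[x]=-1$):

```latex
S=\int d^4x\,\tfrac12\,\partial_\mu\phi\,\partial^\mu\phi ,\qquad
[S]=0,\quad [d^4x]=-4,\quad [\partial_\mu]=+1
\;\Longrightarrow\; -4+2\bigl(1+[\phi]\bigr)=0
\;\Longrightarrow\; [\phi]=1 .
```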
Sources of Error It is always important to acknowledge possible sources of error - especially when it comes to applied mathematics dealing with biology, physics, chemistry, engineering, economics, etc… We will now outline some of the sources of error. Calculation Errors. This is the most obvious type of error, arising as a result of an incorrect calculation. For example, $1 + 1 = 3$. Other types of calculation errors can result from assumptions. Rounding/Truncation Errors. We have already discussed this type of error on the Truncation of Floating-Point Numbers and Rounding of Floating-Point Numbers pages. These sorts of errors arise frequently when an exact value cannot be used directly, for example, using $3.14$ to represent $\pi$. It is a lot easier to use $3.14$ to represent $\pi$; however, the lost precision may produce a large amount of error when applied to many of the elementary geometry formulas for area and volume, such as $A = \pi r^2$ (the area formula for a circle) or $V = \pi r^2 h$ (the volume of a right circular cylinder). Modelling Errors. Many phenomena in the world can be represented with mathematical models. Of course, the world is so complex that it would be virtually impossible to create a mathematical model that represented some situation with complete accuracy. As a result, many mathematical models have errors associated with missing variables or with perturbations in the normalcy of a model. Approximation Errors. There are many instances in mathematics where we approximate some value with another value. One common example is $\pi \approx \frac{22}{7}$. Another common example is using power series or Taylor polynomials to represent functions. For example, the geometric series below represents the function $f(x) = \frac{1}{1 - x}$ for $\mid x \mid < 1$: \begin{align} \quad \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + ...
= \frac{1}{1 - x} \end{align} Thus we have that $\frac{1}{1 - x}$ can be approximated by a finite number of consecutive terms (starting at the first term) of the series given above; that is, for $\mid x \mid < 1$ we have that: \begin{align} \frac{1}{1 - x} \approx 1 + x + x^2 + ... + x^n \end{align}
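For the geometric series the truncation error is available in closed form, so the approximation above can be checked directly; the remainder after the $x^n$ term is $x^{n+1}/(1-x)$:

```python
x, n = 0.5, 10
exact = 1 / (1 - x)
approx = sum(x ** k for k in range(n + 1))   # 1 + x + ... + x^n
remainder = x ** (n + 1) / (1 - x)           # analytic truncation error
assert abs((exact - approx) - remainder) < 1e-12
```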
Let $m^*$ denote the outer measure corresponding to the Lebesgue measure on $\mathbb{R}$, i.e., $$m^*(A)=\inf\{\sum_{n=1}^\infty l(I_n):A\subset\bigcup_{n=1}^\infty I_n\},$$ where $A\subset\mathbb{R}$, $I_n\subset\mathbb{R}$ is a bounded open interval for $n=1,2,\dots$ and $l((a,b))$ is the length of the interval $(a,b)$. Let $0<\rho<1$. Prove that if $E\subset\mathbb{R}$ and for all intervals $(a,b)$ we have that $m^*(E\cap(a,b))\leq\rho(b-a)$, then $E$ has zero Lebesgue measure. Commonly, I would add some comments and thoughts about the question, but I'm pretty stuck on this one.
Keywords positive linear maps, geometric mean, sector matrix, norm inequality Abstract Ando proved that if $A, B$ are positive definite, then for any positive linear map $\Phi$ it holds that \begin{eqnarray*} \Phi(A\sharp_\lambda B)\le \Phi(A)\sharp_\lambda \Phi(B), \end{eqnarray*} where $A\sharp_\lambda B$, $0\le\lambda\le 1$, denotes the weighted geometric mean of $A, B$. Using the recently defined geometric mean for accretive matrices, Ando's result is extended to sector matrices. Some norm inequalities are considered as well. Recommended Citation Tan, Fuping and Chen, Huimin (2019), "Inequalities for sector matrices and positive linear maps", Electronic Journal of Linear Algebra, Volume 35, pp. 418-423. DOI: https://doi.org/10.13001/1081-3810.4041
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting Author Message Michael Spivak Joined: 10 Oct 2005 Posts: 52 Posted: Tue Oct 11, 2005 3:46 pm Post subject: new version of fonts We are making a new version of the MTPro fonts, which will have Times-Italic-like characters designed into them, so that there will be no need for virtual fonts. This is a good opportunity to ask for new characters, etc. PLEASE, IF YOU SUBMIT A REQUEST AS A GUEST, ADD AN EMAIL ADDRESS, SO THAT I CAN CONTACT YOU IF I HAVE QUESTIONS ABOUT IT!!! My email is mikespivak@aol.com. The following have been suggested: :=, = with ^ accent above it, updownarrows and downuparrows---and I think I'll add updownharpoons, downupharpoons, upharpoons, downharpoons. Also, slanted \sum, \prod [and presumably \coprod]. We could have \usum, \slsum, etc. to specify upright or slanted \sum, etc., while \uoperators would normally make \sum mean \usum, etc., while \sloperators would normally make \sum mean \slsum, etc. Last edited by Michael Spivak on Wed Nov 02, 2005 9:59 am; edited 2 times in total AnnaD Guest Posted: Thu Oct 27, 2005 6:26 pm Post subject: Michael, Here are several things I came across: 1. \rightarrow with \sim on top of it (looks like \simeq but with an arrow) 2. \ast as a big math operator with limits (variable sizes: for in-text mode and display) 3. variable-length corner ( __| ) which works similar to \framebox (vertical line is variable too) 4. wide dual math accents (like \Hat{\Bar{}}, for example) with smaller gap between them kolchin Joined: 27 Oct 2005 Posts: 15 Location: Moscow, Russia Posted: Thu Oct 27, 2005 8:56 pm Post subject: Hello Michael, I would like to see \& as a big math operator with limits, of variable size for text- and display modes.
Nase Joined: 28 Oct 2005 Posts: 1 Posted: Fri Oct 28, 2005 1:28 am Post subject: Re: new version of fonts Michael Spivak wrote: We are making a new version of the MTPro fonts, which will have Times-Italic-like characters designed into them, so that there will be no need for virtual fonts. This is a good opportunity to ask for new characters, etc. Dear Michael, something that would be desirable in analysis is a mean value integral "\mint". What I need is an integral sign which is crossed horizontally in the middle by a short bar. I have got some TeXnical solution, but this hack only works in the "\nolimits" case: \newcommand{\meanbar}[1]{% \setbox0 = \hbox{$#1 \int$} \hbox to 0pt{% \thinspace \hskip 0.1\wd0 \raise 0.5\ht0 \hbox{% \lower 0.5\dp0 \hbox{\rule{0.8\wd0}{2\linethickness}} }% \hss }% } \newcommand{\palette}[1]{% \mathchoice{#1 \displaystyle}% {#1 \textstyle}% {#1 \scriptstyle}% {#1 \scriptscriptstyle}% } \newcommand{\mean}{\palette \meanbar} \newcommand{\mint}{\mean \int} Thank you and all the other folks from PCTeX for developing a fairly complete mathematical Times font family, especially for adding suitable fonts for smaller design sizes! Best regards, Jens _________________ Jens Andre Griepentrog WIAS Berlin (Germany) Guest Posted: Fri Oct 28, 2005 12:25 pm Post subject: I would suggest a symbol similar to \hbar, but for d (i.e., \dbar). This is useful in thermodynamics as an inexact differential. Michael Spivak Joined: 10 Oct 2005 Posts: 52 Posted: Fri Oct 28, 2005 1:14 pm Post subject: AnnaD wrote: Michael, Here are several things I came across: 1. \rightarrow with \sim on top of it (looks like \simeq but with an arrow) 2. \ast as a big math operator with limits (variable sizes: for in-text mode and display) 3. variable-length corner ( __| ) which works similar to \framebox (vertical line is variable too) 4. wide dual math accents (like \Hat{\Bar{}}, for example) with smaller gap between them 1. should be simple 2. 
A \bigast should be OK, but do you have a sample to show how large it should be? 3. Don't know about \framebox (presumably from LaTeX? about which I also don't know anything), but this should be doable completely in TeX, without any need for a font character. 4. Will get back to you about dual math accents later. Michael Spivak Joined: 10 Oct 2005 Posts: 52 Posted: Fri Oct 28, 2005 1:15 pm Post subject: kolchin wrote: Hello Michael, I would like to see \& as a big math operator with limits, of variable size for text- and display modes. Should it look just like a Time &? Although \& would be a convenient name for the user, it would be simpler to implement things if it had another name, like \ampersand, or \bigampersand, or perhaps you have another name in mind. Michael Spivak Joined: 10 Oct 2005 Posts: 52 Posted: Fri Oct 28, 2005 1:18 pm Post subject: Re: new version of fonts Nase wrote: something that would be desirable in analysis is a mean value integral "\mint". OK, but perhaps it should be called \barint? Michael Spivak Joined: 10 Oct 2005 Posts: 52 Posted: Fri Oct 28, 2005 1:20 pm Post subject: Anonymous wrote: I would suggest a symbol similar to \hbar, but for d (i.e., \dbar). This is useful in thermodynamics as an inexact differential. OK. I presume you want a regular d, not a barred partial sign ("eth"). Guest Posted: Fri Oct 28, 2005 2:57 pm Post subject: AnnaD wrote: Michael, wide dual math accents (like \Hat{\Bar{}}, for example) with smaller gap between them I will add \widehatdown#1#2, which puts a \widehat on #2, but moves it down by #1. So, for example, \widehatdown{2pt}{\widehat{a+b+c+d+e+f+g+h}} will look better. jp Guest Posted: Sat Oct 29, 2005 12:06 pm Post subject: Symbols Hi, what I would like to see is the contraction operator, for example for differential forms: \omega _| X . 
:= is definitely very welcome, as would be =: and :<=> jürgen pöschel Uni Stuttgart kolchin Joined: 27 Oct 2005 Posts: 15 Location: Moscow, Russia Posted: Sat Oct 29, 2005 12:07 pm Post subject: Michael Spivak wrote: kolchin wrote: Hello Michael, I would like to see \& as a big math operator with limits, of variable size for text- and display modes. Should it look just like a Time &? Although \& would be a convenient name for the user, it would be simpler to implement things if it had another name, like \ampersand, or \bigampersand, or perhaps you have another name in mind. Yes, it should look like a big & in the Times font; the name I like is \bigvarland (big alternative logical "and"). Best regards and many thanks. Andrei Steklov Inst. Math. Michael Spivak Joined: 10 Oct 2005 Posts: 52 Posted: Sat Oct 29, 2005 4:44 pm Post subject: Re: Symbols jp wrote: Hi, what I would like to see is the contraction operator, for example for differential forms: \omega _| X . := is definitely very welcome, as would be =: and :<=> jürgen pöschel Uni Stuttgart Can add these. Do you mean literally :<=> or is that an abbreviation for one or more symbols? kolchin Joined: 27 Oct 2005 Posts: 15 Location: Moscow, Russia Posted: Sun Oct 30, 2005 12:46 am Post subject: another lowercase "z" in math Dear Michael, Is it possible to make lowercase mathematical italic "z" look as in Adobe Times PS font in my printer, with a swash? Al Freed Guest Posted: Tue Nov 01, 2005 12:45 pm Post subject: more blackboard fonts Hi Michael, As long as you're soliciting a wish list, here is mine: 1) blackboard bold Greek fonts, upper and lower case 2) slanted blackboard bold fonts, medium weight, upper and lower case Thanks, Al Freed All times are GMT - 7 Hours You can post new topics in this forum You can reply to topics in this forum You cannot edit your posts in this forum You cannot delete your posts in this forum You cannot vote in polls in this forum Powered by phpBB © 2001, 2005 phpBB Group
I am working on using a feedforward multi-layered perceptron as a function approximator for the pressure distribution of a groundwater system. I am essentially trying to solve a boundary value problem with an ANN. From the mass balance equation of groundwater flow I know that the pressure, $P$, is dependent on the position ($x$, in 1D) and the time, $t$, since the start of the simulation. In my simple case I therefore want to find a neural network $N(x,t,\hat{p})$, where $\hat{p}$ are the network's weights and biases, such that: \begin{equation} P(x,t) \approx N(x,t, \hat{p}). \end{equation} I will then train my network using $[x,t]$ inputs and verifying either that the mass balance equals zero, or specific boundary or initial conditions. To be a bit more clear, the following could be my problem definitions: Mass balance equation at any point and time \begin{equation} \frac{\partial^2 P}{\partial x^2} - \frac{\partial P}{\partial t} = 0 \end{equation} Initial condition (uniform water pressure at $t = 0$ at any point) \begin{equation} P(x,0) = 0 \end{equation} Boundary condition (injecting water at position $x = 0$, through time) \begin{equation} \frac{\partial P}{\partial x}(0,t) = 0.5 \end{equation} As you can see, because my network approximates the pressure $P$, I can use the derivatives of the network to verify the boundary conditions and the mass balance. I can then use the sum of squared errors as my cost function. So, for example, the error for inputs $[x = 0, t = 1]$ can be computed by taking the difference between the first derivative of the network with respect to $x$ and the target value of $0.5$, and squaring it. So far I have used only a single hidden layer but I wish to extend it to multiple layers. The hidden layers of my network should have a sigmoid, $S$, activation function and the output layer a linear one, $f(x) = x$.
So far, based on a paper, I have come up with the expression for the first derivative of a hidden layer with respect to its input neuron $l$: \begin{equation} \frac{\partial P}{\partial l} = \sum_{i=1}^m w_{i}\,\frac{dS}{dl}(Z_i). \end{equation} Am I correct in thinking that the diagonal of the Hessian matrix can be described by \begin{equation} \frac{\partial^2 P}{\partial l^2} = \sum_{i=1}^m w_{i}^2\,\frac{d^2S}{dl^2}(Z_{i})? \end{equation} Here $Z_{i}$ is the neuron value obtained from weights and biases before activation, and $w_i$ is the synaptic weight. Would that be easy to compute using Python? It seems doable to me, but I have read a lot about how costly it is to compute the Hessian matrix; as I am only interested in its diagonal, can I obtain it using the equation above?
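A closed-form expression like this is easy to sanity-check against finite differences. A minimal sketch (the toy network and variable names are mine, not from the paper; note that with the parametrization $Z_i = v_i l + b_i$, where $v_i$ are the input weights, the chain rule contributes a factor $v_i^2$ rather than $w_i^2$ to the second derivative):

```python
import numpy as np

# Toy single-hidden-layer net N(l) = sum_i w_i * S(Z_i), with Z_i = v_i*l + b_i
# (v_i: input weights, w_i: output weights -- hypothetical names, for illustration)
rng = np.random.default_rng(0)
m = 5
v, b, w = rng.normal(size=m), rng.normal(size=m), rng.normal(size=m)

def S(z):                       # sigmoid activation
    return 1.0 / (1.0 + np.exp(-z))

def N(l):                       # forward pass, linear output layer
    return float(w @ S(v * l + b))

def second_derivative(l):
    z = v * l + b
    s = S(z)
    # S'(z) = s(1-s), S''(z) = s(1-s)(1-2s); the chain rule brings in v**2
    return float(w @ (s * (1 - s) * (1 - 2 * s) * v ** 2))

# Check against a central finite difference -- as cheap as a few forward passes
l0, h = 0.3, 1e-4
fd = (N(l0 + h) - 2 * N(l0) + N(l0 - h)) / h ** 2
assert abs(fd - second_derivative(l0)) < 1e-5
```

For a single hidden layer the diagonal entry costs no more than a forward pass, so there is no need to form the full Hessian.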
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting Author Message stubner Joined: 14 Mar 2006 Posts: 7 Posted: Wed Apr 19, 2006 2:19 pm Post subject: absolute values Hi everybody, it seems I haven't used absolute values much lately, since only yesterday I found that things like $|x|$ or $|o|$ look off balance to me. The space to the right of the latter looks larger than the space to the left of the letter. In order to test this systematically, I have taken some code from testfont.tex (without fully understanding it ;-): \documentclass{article} \renewcommand{\rmdefault}{ptm} \usepackage[slantedGreek]{mtpro2} \def\math{\def\ii{i} \def\jj{j} \def\\##1{|##1|+}\mathtrial \def\\##1{##1_2+}\mathtrial \def\\##1{##1^2+}\mathtrial \def\\##1{##1/2+}\mathtrial \def\\##1{2/##1+}\mathtrial \def\\##1{##1,{}+}\mathtrial \def\\##1{d##1+}\mathtrial \let\ii=\imath \let\jj=\jmath \def\\##1{\hat##1+}\mathtrial} \newcount\skewtrial \skewtrial='177 \def\mathtrial{$\\A \\B \\C \\D \\E \\F \\G \\H \\I \\J \\K \\L \\M \\N \\O \\P \\Q \\R \\S \\T \\U \\V \\W \\X \\Y \\Z \\a \\b \\c \\d \\e \\f \\g \\h \\\ii \\\jj \\k \\l \\m \\n \\o \\p \\q \\r \\s \\t \\u \\v \\w \\x \\y \\z \\\alpha \\\beta \\\gamma \\\delta \\\epsilon \\\zeta \\\eta \\\theta \\\iota \\\kappa \\\lambda \\\mu \\\nu \\\xi \\\pi \\\rho \\\sigma \\\tau \\\upsilon \\\phi \\\chi \\\psi \\\omega \\\vartheta \\\varpi \\\varphi \\\Gamma \\\Delta \\\Theta \\\Lambda \\\Xi \\\Pi \\\Sigma \\\Upsilon \\\Phi \\\Psi \\\Omega \\\partial \\\ell \\\wp$\par} \def\mathsy{\begingroup\skewtrial='060 % for math symbol font tests \def\mathtrial{$\\A \\B \\C \\D \\E \\F \\G \\H \\I \\J \\K \\L \\M \\N \\O \\P \\Q \\R \\S \\T \\U \\V \\W \\X \\Y \\Z$\par} \math\endgroup} \begin{document} \math \end{document} IMO most lower case letters look off-center to me, with too much space to the right of the letter (f, v, w, e are exceptions). The Greeks are fine, while the uppercase letters are mixed (R has too much space to the left, U and M on the right).
The other tests (besides absolute values) look fine. Other opinions? cheerio ralf jautschbach Joined: 17 Mar 2006 Posts: 11 Posted: Wed Apr 19, 2006 3:52 pm Post subject: Re: absolute values stubner wrote: Hi everybody, it seems I haven't used absolute values much lately, since only yesterday I found that things like $|x|$ or $|o|$ look off balance to me. The space to the right of the latter looks larger than the space to the left of the letter. ralf |i| and |\pi|, for example, seem to have too much space on the right. |\eta| looks like there is not enough space on the right. Jochen Michael Spivak Joined: 10 Oct 2005 Posts: 52 Posted: Thu Apr 20, 2006 4:04 pm Post subject: Basically, I want to reiterate the remark I made in the last post to the "firstimpressions" posting by stubner. If you start looking carefully at any mathematical typesetting (as opposed to just reading it) you will find thousands of non-optimal things. Some of these are actually due to the design of TeX (see some remarks of mine in the "spacing" posting by zeller), and some to the varying circumstances of individual characters. All sorts of things that one would never even notice while reading a mathematics paper can stand out when one looks at things a character at a time, and sometimes one becomes overly concerned. (The link http://support.pctex.com/files/JWPXMWRZTYLV/abs.pdf shows Computer Modern and MTPro2 characters inside absolute values and parentheses, and I think that you will find cases where CM is spaced better than MTPro2, but also cases where the opposite is true.) For example, although I agree that |M| and |U| have too much space to the right of the letters, I wouldn't agree that |R| has too much space to the left of the R, or at most just a tiny extra bit of space. By contrast, in Computer Modern, the |R| definitely has this problem to a much greater degree. Notice, moreover, that in MTPro2, (M) and (U) and (R) look nicely balanced.
Of course, that's partly because of the character of the right parenthesis---it has a top piece that extends backwards, unlike almost all characters! In Computer Modern this doesn't pose as great a problem mainly because the ) is much thinner and unshaped. The case of |i|, where there is certainly more space on the right, is also instructive. Notice that the dot on the Times-Italic i is very close to being the rightmost part of the character, while in CM it is nowhere near the right, because of the curlicue at the bottom. For this reason, I had to make the italic correction of the i rather big; otherwise, superscripts would be very close to the dot, making reading very unpleasant. Since the italic correction is always added to the i, this gives the extra space before the | or the ). Naturally, I had to compensate for this by adding more negative kerning between the i and all other characters, but you can't kern with the ), as I've mentioned before, in one of the two postings I mentioned. Similarly, if you compare x^i in CM and MTPro2, you'll see that the superscript i in CM has a curlicue to the left, which keeps it separated from the x, while in MTPro2, I needed to make a greater italic correction to the x in order to get superscripts adequately far away. TeX has \scriptspace to determine extra space after a subscript or superscript; alas, that it does not also have a \prescriptspace, to determine some extra space _before_ superscripts! (And similarly, see one of the previously mentioned postings, the spacing in scriptstyle and scriptscriptstyle should be more flexible.) At any rate, for now, I'll leave things as they are. Possibly in a future release I'll try to address some of these questions, though it simply isn't possible to optimize all spacing. 
Let $f:\mathbb{R}\rightarrow\bar{\mathbb{R}}$ be Lebesgue integrable. Prove that for $\epsilon >0$ there exists a finite interval $[a,b]$ such that $$\left|\int{f(x)}dx-\int_{a}^b f(x)dx\right|<\epsilon.$$ My attempt: If $f$ is integrable on $[a,b]$, then for any $\epsilon > 0$ there exists $\delta > 0$ such that for any measurable set $D \subset [a,b]$ with measure $\mu(D) < \delta$ we have $$\left|\int_{D} f(x)dx\right|<\epsilon/2.$$ This is where I'm stuck. Can someone help me?
The following are both plausible messages, but have a completely different meaning: SOS HELP = ...---... .... . .-.. .--. => ...---.........-...--. ; I AM HIS DATE = .. .- -- .... .. ... -.. .- - . => ...---.........-...--. You don't need a separator because Huffman codes are prefix-free codes (also, unhelpfully, known as "prefix codes"). This means that no codeword is a prefix of any other codeword. For example, the codeword for "e" in your example is 10, and you can see that no other codewords begin with the digits 10. This means that you can decode greedily by reading the ... Quoting David Richerby from the comments: Since ⋅ represents E and − represents T, any Morse message without spaces can be interpreted as a string in $\{E,T\}^*$. Further, since A, I, M, and N are represented by the four possible combinations of two Morse characters (⋅-, ⋅⋅, --, -⋅, respectively), any message without spaces can also be interpreted as a ... This answer isn't as long as it looks; this site just puts a lot of spacing between list items! Update: Actually it's getting pretty long... Morse Code isn't "officially" binary, ternary, quaternary, quinary, or even 57-ary (if I count correctly). Arguing about which one it is without context is not productive. It is up to you to define which of those five ... Morse code is a prefix ternary code (for encoding 58 characters) on top of a prefix binary code encoding the three symbols. This was a much shorter answer when accepted. However, considering the considerable misunderstandings between users, and following a request from the OP, I wrote this much longer answer. The first "nutshell" section gives you the gist ... It is enough to observe that certain short combinations of letters give ambiguous decodings. A single ambiguous sequence suffices, but I can see the following: ATE ~ PEA ~ ITMO ~ OM, etc.
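The ambiguity in the opening example can be verified mechanically; a quick sketch using the standard International Morse codewords for the letters involved:

```python
# standard International Morse codewords for the letters used in the example
MORSE = {'S': '...', 'O': '---', 'H': '....', 'E': '.', 'L': '.-..',
         'P': '.--.', 'I': '..', 'A': '.-', 'M': '--', 'D': '-..', 'T': '-'}

def encode(message):
    """Concatenate codewords with no separators (the ambiguous encoding)."""
    return ''.join(MORSE[c] for c in message if c != ' ')

# two very different messages collapse to the same dot-dash string
assert encode('SOS HELP') == encode('I AM HIS DATE') == '...---.........-...--.'
```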
As David Richerby notes in the comments, any letter is equivalent to a string of Es and Ts, which makes Morse Code ambiguous as a way of encoding arbitrary sequences of ... What you are missing is that you need to consider all bit strings of length 3 or less. That is: if in a compression scheme for bit strings of length 3 or less we compress one of the 3-bit strings to a 2-bit string, then some string of length 3 or less will have to expand to 3 bits or more. A lossless compression scheme is a function $C$ from finite bit strings to finite bit ... It's helpful to imagine it as a tree. You are simply traversing the tree until you hit a leaf node, and then restarting from the root. From the algorithm which does Huffman coding, you can see that this sort of structure is created in the process. https://en.wikipedia.org/wiki/File:HuffmanCodeAlg.png What you need is a random number between 0 and ${ 64 \choose n } - 1$. The problem then is to turn this into the bit pattern. This is known as enumerative coding, and it's one of the oldest deployed compression algorithms. Probably the simplest algorithm is from Thomas Cover. It's based on the simple observation that if you have a word that is $n$ bits long,... Let's look at a slightly different way of thinking about Huffman coding. Suppose you have an alphabet of three symbols, A, B, and C, with probabilities 0.5, 0.25, and 0.25. Because the probabilities are all inverse powers of two, this has a Huffman code which is optimal (i.e. it's identical to arithmetic coding). We will use the canonical code 0, 10, 11 for ... The Hamming distance being 3 means that any two code words must differ in at least three bits. Suppose that 10111 and 10000 are codewords and you receive 10110. If you assume that only one bit has been corrupted, you conclude that the word you received must have been a corruption of 10111: hence, you can correct a one-bit error. However, if you assume that ...
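The greedy "walk the tree until you hit a leaf" decoding can be sketched without an explicit tree by growing a prefix and looking it up; here with the canonical three-symbol code 0, 10, 11 mentioned above:

```python
code = {'0': 'A', '10': 'B', '11': 'C'}  # canonical Huffman code from the text

def decode(bits):
    out, cur = [], ''
    for bit in bits:
        cur += bit
        if cur in code:        # prefix-freeness makes the first match unambiguous
            out.append(code[cur])
            cur = ''
    assert cur == '', 'input ended in the middle of a codeword'
    return ''.join(out)

assert decode('0101011') == 'ABBC'   # 0 | 10 | 10 | 11
```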
Morse code is actually a ternary code, not a binary code, so the spaces are necessary. If spaces were not there, a lot of ambiguity would result, not so much with the entire message, but with individual letters. For example, 2 dots is an I, but 3 dots is an S. If you are transcribing and you hear two dots, do you immediately write "I" or do you wait until ...

Yes, there is such a set. You are actually on the right track to find the following example. Let $C = \{c : |c|=6 \text{ and there is an even number of 1's in } c\}$. You can check the following:

$|C|=32$.
$d(u,v)\geq2$ for all $u,v\in C$, $u\not=v$. (In fact, $d(u,v)=2$, 4, or 6.)

Here are four related exercises, listed in the order of increasing ...

Many widely used codes for binary data are concatenated codes, which are composed by using two error-correcting codes. The inner code is over a binary alphabet, and the outer code is over an alphabet whose symbols correspond to the codewords of the inner code. This allows you to use the superior power of larger alphabet sizes to encode binary messages ...

There are zillions of papers in coding theory, proposing zillions of codes. Most of them are not used, for several reasons:

Some of the codes are non-constructive.
Some of the codes don't have efficient decoding procedures.
Some of the codes have bad parameters.

The main reason, however, is that practitioners don't spend their time reading the coding ...

Just an additional note to Andrej's good answer: you can also take a look at Kolmogorov complexity. Definition: given a string $s$, its Kolmogorov complexity $C(s)$ relative to a fixed model of computation is the length of the shortest program (e.g. Turing machine) that outputs $s$. Informally, $C(s)$ measures its information content or its degree of ...
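The claims about the even-weight code $C$ above are easy to verify by brute force (an illustrative check, not part of the original answer):

```python
from itertools import product

# C: all length-6 binary words with an even number of 1s.
C = [w for w in product([0, 1], repeat=6) if sum(w) % 2 == 0]

def d(u, v):
    """Hamming distance: the number of coordinates in which u and v differ."""
    return sum(a != b for a, b in zip(u, v))

dists = {d(u, v) for u in C for v in C if u != v}
print(len(C), sorted(dists))  # -> 32 [2, 4, 6]
```

The distances come out even because the difference of two even-weight words again has even weight.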
All words of even parity form a linear code with $2^{n-1}$ codewords and minimum distance $2$. More generally, if $A_2(n,d)$ is the maximum size of a code of length $n$ and minimum distance $d$, then $A_2(n,2d) = A_2(n-1,2d-1)$.

Here's a more efficient way of doing it. Let's map all the strings of length $n$ into strings of length $n+O(\sqrt{n} \log n)$ with no consecutive run of $0$'s of length more than $\sqrt{n}$. We then add a string $1 0^{a}1$ at the end, where $a \geq \sqrt{n} + 1$. Our mapping isn't always going to give us the same length string, so $a$ can vary. How do ...

If a single-bit error correction is attempted, the ordering presented in the example guarantees that the syndrome vector (the result of the multiplication of the checking matrix and the received data), if interpreted as an integer, will indicate the position of the error. Otherwise, a lookup table would have to be used.

Your problem is known as calculating the minimal distance of a (binary) linear code, and is NP-hard, as shown by Vardy. It is even NP-hard to approximate within any constant factor, as shown by Dumer, Micciancio and Sudan.

The above sequence is read as a concatenation of 5 numbers. You start from the left side and read the first unary code; it lets you know the length of the first number. The 2nd number starts right after the 1st, and you interpret it the same way. First, read the first unary code: it is 1110, so the first number is "1110:001", which is 9. The next ...

Well, these examples are not easy to read, but they are indeed correct. E.g. 1: the word to code has 4 bits; let's call them $b_1 b_2 b_3 b_4$. To this "data" word add 3 parity bits (adding is over $GF(2)$): $p_1 = b_1 \oplus b_2 \oplus b_3$, $p_2 = b_1 \oplus b_2 \oplus b_4$, and $p_3 = b_1 \oplus b_3 \oplus b_4$. [Remark: for odd parity, add "1" ...]

Despite my initial thoughts on this, it turns out this question can be formalized in a way that admits a fairly precise answer (modulo a couple of definition issues).
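The parity-bit construction quoted above ($p_1 = b_1 \oplus b_2 \oplus b_3$, etc.) can likewise be checked by brute force; enumerating all 16 data words confirms that the resulting 7-bit code has minimum distance 3, so it can correct any single-bit error (an illustrative check, not part of the original answer):

```python
from itertools import product

def encode(b1, b2, b3, b4):
    # Parity bits over GF(2), exactly as in the quoted construction.
    p1 = b1 ^ b2 ^ b3
    p2 = b1 ^ b2 ^ b4
    p3 = b1 ^ b3 ^ b4
    return (b1, b2, b3, b4, p1, p2, p3)

codewords = [encode(*bits) for bits in product([0, 1], repeat=4)]
dmin = min(sum(x != y for x, y in zip(u, v))
           for u in codewords for v in codewords if u != v)
print(dmin)  # -> 3, so any single-bit error can be corrected
```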
The answer turns out to be 3 or 4, i.e. ternary or quaternary. The crowd-pleaser "everything goes from 2 to 57" answer is correct only in the sense that if someone asks you for a ...

A binary code is a set of vectors in $\mathbb{F}_2^n$ for some $n$. Presumably the context in which you encountered this construction is a motivation for it. It's a particular case of a more general construction known as a Cayley graph, though perhaps this particular case has a specific name. You are right that all arithmetic is done in $\mathbb{F}_2$. There ...

It's not unsuitable, it is just not optimal. That's because letters in human-readable text are not independent, but quite strongly correlated. That correlation can be used to get huge savings. For example, the letters q and Q are almost always followed by u. Comma and period are almost always followed by a space character, and period-space is almost always ...

The probability $P$ that a uniformly chosen random prime between $1$ and $n^c$ satisfies $a \equiv b \pmod{p}$ is the number of primes in this range that satisfy $a \equiv b \pmod{p}$ divided by the total number of primes in this range. Writing $[C] = 1$ if $C$ is true and $[C] = 0$ if $C$ is false, and $\pi(x)$ for the number of primes less than $x$: $$P = \...

Your counterexample is wrong. Your list of compressed values has some hidden information which indeed makes the average length longer than 3 bits. The extra information is the length of the output string. With our eyes we can see from your table that the first output string is only 1 bit long, and the others are 3 bits, but you're cheating if you don't ...

A few notes not covered in the other (good) answers, which generally don't presume prior knowledge, with citations of related material (to me an intrinsic part of computer science): this general theory falls into the CS category of text segmentation and also "word splitting"/"disambiguation", although there the theory is a bit different; it's about splitting sequences of ...
Right, the code $VT_0(4)$ has Levenshtein distance (edit distance) 4: to get from one codeword to another you must do 2 deletions and 2 insertions. Therefore, the code can correct one deletion. Indeed, if 101 was received, the only possible way to get this message, assuming one deletion, is if 1001 was sent. Decoding can be done in several ways: the ...

Here is a lower bound and an asymptotically matching construction, at least for some ranges of the parameters. Denote by $m$ the number of columns, and suppose for simplicity that $p \leq n/2$. We start with a lower bound on $m$. Let $X$ be the encoding of a symbol chosen uniformly at random. Let $X_1,\ldots,X_m$ be the individual coordinates, and let $w_i \...
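For concreteness: the Varshamov-Tenengolts code is standardly defined as $VT_a(n) = \{x \in \{0,1\}^n : \sum_i i \cdot x_i \equiv a \pmod{n+1}\}$ (an assumption here, since the answer above doesn't restate the definition). Under that definition, a brute-force check confirms that the single-deletion balls of $VT_0(4)$ are pairwise disjoint, which is exactly single-deletion correctability:

```python
from itertools import product

n = 4
# VT_0(n): words whose weighted sum of 1-positions is 0 modulo n + 1 (assumed
# standard definition of the Varshamov-Tenengolts code).
code = [x for x in product([0, 1], repeat=n)
        if sum(i * b for i, b in enumerate(x, start=1)) % (n + 1) == 0]

def deletions(x):
    """All words obtainable from x by deleting exactly one symbol."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

# Deletion balls are pairwise disjoint, so one deletion is always correctable.
assert all(deletions(u).isdisjoint(deletions(v))
           for u in code for v in code if u != v)
print(code)  # -> [(0, 0, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 1, 1)]
```

In particular, the received word 101 lies in the deletion ball of 1001 and of no other codeword, matching the decoding example above.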
The Direct Product of an Arbitrary Collection of Groups

Recall from The Direct Product of Two Groups page that if $(G, \cdot)$ and $(H, *)$ are groups then the direct product of these groups is another group, $G \times H$, with the operation defined for all $(g_1, h_1), (g_2, h_2) \in G \times H$ by:

\begin{align} \quad (g_1, h_1)(g_2, h_2) = (g_1 \cdot g_2, h_1 * h_2) \end{align}

We will now extend this notion to an arbitrary collection of groups. Let $\{ G_i: i \in I \}$ be an arbitrary collection of groups. The cartesian product $\displaystyle{\prod_{i \in I} G_i}$ is defined to be:

\begin{align} \quad \prod_{i \in I} G_i = \left \{ f : I \to \bigcup_{i \in I} G_i \ \biggr | \ f(i) \in G_i \ \text{for all} \ i \in I \right \} \end{align}

For example, if $I = \{ 1, 2 \}$ then $\{ G_i : i \in I \} = \{ G_1, G_2 \}$, and an element $(g, h) \in G_1 \times G_2$ can be thought of as the function $f : \{ 1, 2 \} \to G_1 \cup G_2$ given by $f(1) = g$, $f(2) = h$, so that $(g, h) = (f(1), f(2))$. Note that if $I$ is a countably infinite set, i.e., if $I = \mathbb{N}$, then $\prod_{i=1}^{\infty} G_i$ can be thought of as the set of all sequences $(g_n)$ with $g_n \in G_n$ for each $n \in \mathbb{N}$. So we now construct the direct product for an arbitrary collection of groups.

Proposition 1: Let $\{ G_i : i \in I \}$ be an arbitrary collection of groups. For all $f, g \in \prod_{i \in I} G_i$ define $fg : I \to \bigcup_{i \in I} G_i$ for each $i \in I$ by $(fg)(i) = f(i)g(i)$. Then $\prod_{i \in I} G_i$ with this operation is a group.

Proof: For all $f, g \in \prod_{i \in I} G_i$ we have for each $i \in I$ that $(fg)(i) = f(i)g(i) \in G_i$ since each $G_i$ is a group, and so $fg \in \prod_{i \in I} G_i$, i.e., $\prod_{i \in I} G_i$ is closed under this operation.

Let $f, g, h \in \prod_{i \in I} G_i$. Then for each $i \in I$ we have by the associativity of the operation in $G_i$ that:

\begin{align} \quad (f(gh))(i) = f(i)(g(i)h(i)) = (f(i)g(i))h(i) = ((fg)h)(i) \end{align}

Since this holds true for each $i \in I$ we see that $f(gh) = (fg)h$, so the operation on $\prod_{i \in I} G_i$ is associative.

For each $i \in I$ let $e_{G_i}$ denote the identity in $G_i$. Let $e : I \to \bigcup_{i \in I} G_i$ be defined for each $i \in I$ by $e(i) = e_{G_i}$.
Then $e \in \prod_{i \in I} G_i$ and for all $f \in \prod_{i \in I} G_i$ we have for all $i \in I$ that:

\begin{align} \quad (ef)(i) = e(i)f(i) = e_{G_i}f(i) = f(i) \quad \text{and} \quad (fe)(i) = f(i)e(i) = f(i)e_{G_i} = f(i) \end{align}

So $ef = f$ and $fe = f$, i.e., $e$ is the identity element for the operation on $\prod_{i \in I} G_i$.

Lastly, for each $f \in \prod_{i \in I} G_i$ let $f^{-1} : I \to \bigcup_{i \in I} G_i$ be defined for all $i \in I$ by $f^{-1}(i) = [f(i)]^{-1}$. Then for each $i \in I$ we have that:

\begin{align} \quad (ff^{-1})(i) = f(i)[f(i)]^{-1} = e_{G_i} \quad \text{and} \quad (f^{-1}f)(i) = [f(i)]^{-1}f(i) = e_{G_i} \end{align}

So $ff^{-1} = f^{-1}f = e$. Thus every $f \in \prod_{i \in I} G_i$ has an inverse $f^{-1} \in \prod_{i \in I} G_i$. Therefore $\prod_{i \in I} G_i$ with this operation forms a group. $\blacksquare$

Definition: Let $\{ G_i : i \in I \}$ be an arbitrary collection of groups. The Direct Product of these groups is defined to be the set $\prod_{i \in I} G_i$ with the operation defined for all $f, g \in \prod_{i \in I} G_i$ by $fg : I \to \bigcup_{i \in I} G_i$ where $(fg)(i) = f(i)g(i)$ for all $i \in I$.
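As a concrete finite instance of Proposition 1, here is the direct product $\mathbb{Z}_2 \times \mathbb{Z}_3$ with the componentwise operation, with the group axioms checked by brute force (an illustrative sketch, not part of the page above):

```python
from itertools import product

# The direct product Z_2 x Z_3 with the componentwise operation
# (fg)(i) = f(i)g(i), a small concrete instance of the construction above.
moduli = (2, 3)

def op(f, g):
    return tuple((a + b) % m for a, b, m in zip(f, g, moduli))

G = list(product(range(2), range(3)))
e = (0, 0)  # the identity: e(i) = e_{G_i} in each component

assert all(op(e, f) == f == op(f, e) for f in G)             # identity
assert all(any(op(f, g) == e for g in G) for f in G)         # inverses
assert all(op(f, op(g, h)) == op(op(f, g), h)
           for f in G for g in G for h in G)                 # associativity
print(len(G))  # -> 6
```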
The Factorization of Polynomials with Real Coefficients

We are about to look at an important way to factor polynomials with real coefficients, but before we do, we must first look at the following proposition.

Proposition 1: A quadratic polynomial with real coefficients $\alpha, \beta \in \mathbb{R}$ such that $p(x) = x^2 + \alpha x + \beta$ can be factored as $p(x) = (x - \lambda_1)(x - \lambda_2)$ where $\lambda_1, \lambda_2 \in \mathbb{R}$ if and only if $\alpha^2 ≥ 4 \beta$.

Proof: $\Rightarrow$ Suppose that $p(x) = (x - \lambda_1)(x - \lambda_2)$. Then when we expand $p(x)$ we get that: \begin{align} \quad p(x) = x^2 - \lambda_2 x - \lambda_1 x + \lambda_1 \lambda_2 \\ \quad p(x) = x^2 + (-\lambda_1 - \lambda_2)x + \lambda_1 \lambda_2 \end{align} Therefore $\alpha = -\lambda_1 - \lambda_2$ and $\beta = \lambda_1 \lambda_2$. Now notice that $(\lambda_1 - \lambda_2)^2 = \lambda_1^2 - 2 \lambda_1 \lambda_2 + \lambda_2^2 ≥ 0$ and so $\lambda_1^2 + \lambda_2^2 ≥ 2 \lambda_1 \lambda_2$, thus: \begin{align} \quad \alpha^2 = (-\lambda_1 - \lambda_2)(-\lambda_1 - \lambda_2) = \lambda_1^2 + \lambda_2^2 + 2 \lambda_1 \lambda_2 ≥ 2 \lambda_1 \lambda_2 + 2 \lambda_1 \lambda_2 = 4 \lambda_1 \lambda_2 = 4 \beta \end{align} $\Leftarrow$ Now suppose that $\alpha^2 ≥ 4 \beta$. Then there exists a real number $c \in \mathbb{R}$ such that $c^2 = \frac{\alpha^2}{4} - \beta$.
Notice that we can rewrite the polynomial $x^2 + \alpha x + \beta$ as: \begin{align} \quad x^2 + \alpha x + \beta = \left ( x + \frac{\alpha}{2} \right )^2 - \left ( \frac{\alpha^2}{4} - \beta \right ) = \left ( x + \frac{\alpha}{2} \right )^2 - c^2 = \left ( \left ( x + \frac{\alpha}{2} \right) + c \right ) \left ( \left ( x + \frac{\alpha}{2} \right ) - c \right ) \\ = \left ( x - \left ( - \frac{\alpha}{2} - c \right ) \right) \left ( x - \left ( - \frac{\alpha}{2} + c \right ) \right ) \end{align} Thus we have the desired factorization with $\lambda_1 = - \frac{\alpha}{2} - c$ and $\lambda_2 = - \frac{\alpha}{2} + c$. $\blacksquare$ We are now ready to look at the following theorem, which gives us a way to factor any polynomial with real coefficients. Theorem 1: Let $p(x) \in \wp ( \mathbb{R} )$ be a non-constant polynomial. Then $p(x)$ has a unique factorization $p(x) = c(x - \lambda_1)(x - \lambda_2)...(x - \lambda_m)(x^2 + \alpha_1x + \beta_1)(x^2 + \alpha_2x + \beta_2)...(x^2 + \alpha_nx + \beta_n)$ where $\lambda_1, \lambda_2, ..., \lambda_m \in \mathbb{R}$ are the real roots of $p(x)$ and $c, \alpha_1, \alpha_2, ..., \alpha_n, \beta_1, \beta_2, ..., \beta_n \in \mathbb{R}$. Proof: Let $p(x) \in \wp ( \mathbb{R} )$ be a non-constant polynomial with $\mathrm{deg} (p) = r$.
Since $p(x)$ is a polynomial of real coefficients, it is also a polynomial of complex coefficients since all real numbers are complex numbers, and from The Factorization of Polynomials with Complex Coefficients page we have that for $\tau_1, \tau_2, ..., \tau_r \in \mathbb{C}$ (which are the roots of $p(x)$), $p(x)$ can be uniquely factored as: \begin{align} \quad p(x) = (x - \tau_1)(x - \tau_2)...(x - \tau_r) \end{align} Now if some root $\tau_i$ where $1 ≤ i ≤ r$ is a complex number whose imaginary part is nonzero (that is $\tau_i$ is a non-real complex root), then we saw from the Pairs of Complex Roots for Polynomials with Real Coefficients page that the complex conjugate $\bar{\tau_i}$ is also a non-real complex root of $p(x)$ and so $\bar{\tau_i} = \tau_j$ for $i \neq j$ and $1 ≤ j ≤ r$. The product of $(x - \tau_i)(x - \tau_j) = (x - \tau_i)(x - \bar{\tau_i})$ will yield one of our quadratic factors, $x^2 + \alpha_k x + \beta_k$ where $1 ≤ k ≤ n$. To guarantee that such a factorization $c(x - \lambda_1)(x - \lambda_2)...(x - \lambda_m)(x^2 + \alpha_1x + \beta_1)(x^2 + \alpha_2x + \beta_2)...(x^2 + \alpha_nx + \beta_n)$ exists, we must show that for all non-real $\tau$ that $(x - \tau)$ and $(x - \bar{\tau})$ appear the same number of times in such a factorization so that we can group them to make our quadratic factors. We note that for $q(x) \in \wp ( \mathbb{C} )$ where $\mathrm{deg} (q) = r - 2$ we can write $p(x)$ as follows: \begin{align} p(x) = (x - \tau)(x - \bar{\tau}) q(x) = (x^2 - 2 \Re (\tau) x + \mid \tau \mid^2 ) q(x) \end{align} If we can show that $q(x)$ has real coefficients then this will imply that $(x - \tau)$ appears as many times as $(x - \bar{\tau})$. 
Note that $x^2 - 2 \Re (\tau) x + \mid \tau \mid^2 \neq 0$ for all real $x$ (its discriminant $4 \Re(\tau)^2 - 4 \mid \tau \mid^2$ is negative since $\tau$ is non-real), and so: \begin{align} \quad q(x) = \frac{p(x)}{x^2 - 2 \Re (\tau) x + \mid \tau \mid^2} \end{align} Now since $p(x) \in \wp (\mathbb{R})$ and $x^2 - 2 \Re (\tau) x + \mid \tau \mid^2$ has real coefficients, for all $x \in \mathbb{R}$ we have that $q(x) \in \mathbb{R}$, and so $q(x)$ is a polynomial whose range on $\mathbb{R}$ contains only real numbers. Let's write the polynomial $q(x)$ as: \begin{equation} q(x) = a_0 + a_1x + ... + a_{r-2} x^{r-2} \end{equation} If we take the imaginary parts of both sides of this equation for real $x$ (by using the properties of the imaginary part of numbers), then we have that: \begin{align} \quad \Im (q(x)) = \Im (a_0 + a_1x + ... + a_{r-2} x^{r-2}) \\ \quad 0 = \Im (a_0) + \Im (a_1)x + ... + \Im (a_{r-2}) x^{r-2} \end{align} But this implies that $\Im (a_0) = \Im (a_1) = ... = \Im (a_{r-2}) = 0$, for otherwise the left-hand side of the above equation would not equal $0$ for all real $x$. Thus $a_0, a_1, ..., a_{r-2} \in \mathbb{R}$. Therefore, such a factorization exists. Now we only need to show that this factorization is unique. This is easy to show. Since $p(x)$ is a polynomial with real coefficients, $p(x)$ is also a polynomial with complex coefficients, and so two different factorizations would lead to a contradiction of the theorem we proved on The Factorization of Polynomials with Complex Coefficients page. $\blacksquare$
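The pairing step in the proof can be illustrated numerically: for a non-real root $\tau$, the quadratic $x^2 - 2\Re(\tau)x + \mid\tau\mid^2$ has real coefficients and vanishes at both $\tau$ and $\bar{\tau}$ (the particular $\tau$ below is chosen arbitrarily for illustration):

```python
# For a non-real root tau, (x - tau)(x - conj(tau)) expands to the real
# quadratic x^2 - 2 Re(tau) x + |tau|^2, as used in the proof above.
tau = 2 + 3j  # an arbitrary non-real root, chosen for illustration

alpha = -2 * tau.real   # coefficient of x
beta = abs(tau) ** 2    # constant term

def q(x):
    return x * x + alpha * x + beta

assert abs(q(tau)) < 1e-9
assert abs(q(tau.conjugate())) < 1e-9
print(alpha, round(beta))  # -> -4.0 13
```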
The general solution of second-order Cauchy-Euler equation $$x^2y''(x)+pxy'(x)+qy(x)=0\tag1$$ is given by $$y(x)=c_1 x^{\alpha_1}+c_2 x^{\alpha_2},\tag2$$ where $$\alpha_{1,2}=\frac{1-p}2\pm\frac{\sqrt{(1-p)^2-4q}}2.\tag3$$ But when $q=\frac14(1-p)^2$, i.e. when $\alpha_1=\alpha_2=\alpha$, the general solution somehow gets a logarithmic term, without which the generality of $(2)$ is lost: $$y(x)=c_1x^\alpha+c_2x^\alpha\ln x.\tag4$$ I know how to derive this result e.g. by the method of reduction of order, but it doesn't seem to give much intuition on the origin of this logarithm. What is an intuitive explanation of where this logarithm comes from and why the general solution suddenly stops being general for particular combinations of $p$ and $q$?
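One can at least verify the claimed repeated-root solution numerically. Taking $p = -3$ and $q = \frac14(1-p)^2 = 4$, so that $\alpha = 2$ is a double root, a finite-difference check (a sketch of my own, not part of the question) confirms that $y(x) = x^2 \ln x$ satisfies $(1)$:

```python
import math

# Check numerically that y(x) = x^2 ln(x) solves x^2 y'' + p x y' + q y = 0
# in the repeated-root case p = -3, q = (1 - p)^2 / 4 = 4, alpha = (1 - p)/2 = 2.
p, q, alpha = -3.0, 4.0, 2.0

def y(x):
    return x ** alpha * math.log(x)

def residual(x, h=1e-4):
    # Central finite differences for y' and y''.
    y1 = (y(x + h) - y(x - h)) / (2 * h)
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return x * x * y2 + p * x * y1 + q * y(x)

assert all(abs(residual(x)) < 1e-3 for x in (0.5, 1.0, 2.0, 5.0))
print("x^2 ln x solves the repeated-root equation")
```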
So after the other answer and comment I really, really hope you meant working with ListPlot. Otherwise I'm sad :D - So it's not an answer with a pretty short code, but it works well. To uniformly distribute your points, we want the length between adjacent points to be constant. So we formulate: $$l=\sqrt{(\Delta x)^2+(\Delta y)^2}$$$$l=\sqrt{(x_1-x_2)^2+(1/x_1-1/x_2)^2}$$ Solving this for $x_2$ gives two solutions: one backward (the first) and one forward solution (represented by Root). So if we define an $l$ and a $y_{min,max}$-value l = 1/5; (*length between points*) yBorder = 2; (*max absolute y value*) sol = x2 /. Solve[l == Sqrt[(x1 - x2)^2 + (1/x1 - 1/x2)^2], x2, Reals]; we can solve for the points, as long as they are in the range, using Reap and Sow in a Do from both sides, starting at $x=3$ and $x=-3$: xPoints = Reap[ Do[ x = 3*(-1)^(i + 1); Sow[x]; y = 1/x; While[Abs[y] <= yBorder, x = N[sol[[i]] /. {x1 -> x}]; Sow[x]; y = 1/x]; , {i, 1, 2}] ][[2, 1]]; Getting everything together and plotting it: l = 1/5; (*length between points*) yBorder = 2; (*max absolute y value*) xBorder = 3; sol = x2 /. Solve[l == Sqrt[(x1 - x2)^2 + (1/x1 - 1/x2)^2], x2, Reals]; xPoints = Reap[ Do[ x = xBorder*(-1)^(i + 1); Sow[x]; y = 1/x; While[Abs[y] <= yBorder, x = N[sol[[i]] /. {x1 -> x}]; Sow[x]; y = 1/x]; , {i, 1, 2}] ][[2, 1]]; yPoints = 1/xPoints; points = Transpose[{xPoints, yPoints}]; ListPlot[points] Attention: you should also take the AspectRatio and PlotRange into account, since otherwise it seems that for bigger $y$-values the distance between adjacent points shrinks or grows, which is not actually the case. I've done it for you (which was easier than I thought): it uses the fact that we just need to stretch one of the coordinates to the PlotRange and AspectRatio: $$l=\sqrt{(\Phi\cdot (y_{max}-y_{min})/(x_{max}-x_{min})\cdot\Delta x)^2+(\Delta y)^2}$$ where $\Phi$ is the inverse AspectRatio, which is by default the GoldenRatio.
l = 1; (*length between points*) yBorder = 10; (*max absolute y value*) xBorder = 50; sol = x2 /. Solve[l == Sqrt[(yBorder/xBorder)^2* GoldenRatio^2*(x1 - x2)^2 + (1/x1 - 1/x2)^2], x2, Reals]; xPoints = Reap[ Do[ x = xBorder*(-1)^(i + 1); Sow[x]; y = 1/x; While[Abs[y] <= yBorder, x = N[sol[[i]] /. {x1 -> x}]; Sow[x]; y = 1/x]; , {i, 1, 2}] ][[2, 1]]; yPoints = 1/xPoints; points = Transpose[{xPoints, yPoints}]; ListPlot[points, PlotRange -> Full]
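The same stepping idea can be sketched outside Mathematica; here is a plain Python version (my own sketch, not the answer's code) that solves the chord-length condition by bisection instead of Solve, for the positive branch only and ignoring the AspectRatio correction:

```python
# The same stepping idea in Python: walk along y = 1/x so that consecutive
# points are a fixed chord length l apart, solving for each next x by bisection.
def next_x(x1, l):
    """Find x2 > x1 with sqrt((x1 - x2)^2 + (1/x1 - 1/x2)^2) = l."""
    def chord(x2):
        return ((x1 - x2) ** 2 + (1 / x1 - 1 / x2) ** 2) ** 0.5
    lo, hi = x1 + 1e-12, x1 + l  # chord is monotone here and chord(hi) >= l
    for _ in range(100):
        mid = (lo + hi) / 2
        if chord(mid) < l:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

l, x = 0.2, 0.2
points = []
while x < 3:
    points.append((x, 1 / x))
    x = next_x(x, l)
```

Bisection works here because, for $x_2 > x_1 > 0$, both $(x_1-x_2)^2$ and $(1/x_1-1/x_2)^2$ grow monotonically in $x_2$.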
The coordinates of an event in spacetime are given by the 4-vector $(ct, \mathbf{r})$, where $\mathbf{r}$ gives the spatial coordinates of the event. This 4-vector can be seen as the 4-displacement of a worldline from the defined origin of the reference frame we're in at time $t$. It seems sensible that $\frac{d}{dt}(ct,\mathbf{r})$ should give the 4-velocity of the worldline, but instead everything I've read has stated that we differentiate with respect to the worldline's proper time $\tau$ instead, and yet I so far haven't seen any explanation as to why. This answer here on the Stack Exchange simply says we do it because it maintains Lorentz invariance. However, why would proper time be invariant under the Lorentz transformation when other times aren't? Consider $\mathbf{x}^\mu=(ct,x,y,z)^T$, which I differentiate with respect to time $t$ to get $\mathbf{v}=(c,u_x,u_y,u_z)$. Let's check if this is Lorentz invariant: $$ \begin{bmatrix} \gamma & -\beta\gamma & 0 & 0 \\ -\beta\gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} c\\ u_x\\ u_y\\ u_z \end{bmatrix} =\begin{bmatrix} c\gamma-\beta\gamma u_x\\ -\beta\gamma c+\gamma u_x\\ u_y\\ u_z \end{bmatrix} =\mathbf{v'} $$ $$ (c\gamma-\beta\gamma u_x)^2-(-\beta\gamma c+\gamma u_x)^2 = c^2\gamma^2 - 2\beta\gamma^2 c u_x + \beta^2\gamma^2 u_x^2 - \beta^2\gamma^2 c^2 + 2\beta\gamma^2 c u_x - \gamma^2 u_x^2 = c^2\gamma^2(1-\beta^2)-u_x^2\gamma^2(1-\beta^2)=c^2-u_x^2 \\ \therefore \mathbf{v'}\cdot\mathbf{v'}=c^2-u_x^2-u_y^2-u_z^2=\mathbf{v}\cdot\mathbf{v} $$ Therefore, the Minkowski norm of $\mathbf{v}$ is Lorentz invariant. Why then, do we reject it as the velocity 4-vector?
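The final equality in the question can also be checked numerically: applying a boost matrix to any 4-tuple preserves its Minkowski norm (a quick sketch in units with $c = 1$; the sample components are arbitrary):

```python
import math

# A boost along x preserves the Minkowski norm v.v = v0^2 - v1^2 - v2^2 - v3^2
# of any 4-tuple, which is the equality computed in the question.
def boost(v, beta):
    g = 1 / math.sqrt(1 - beta ** 2)
    v0, v1, v2, v3 = v
    return (g * v0 - beta * g * v1, -beta * g * v0 + g * v1, v2, v3)

def minkowski(v):
    return v[0] ** 2 - v[1] ** 2 - v[2] ** 2 - v[3] ** 2

v = (1.0, 0.3, 0.1, 0.2)   # (c, u_x, u_y, u_z) in units with c = 1; sample values
vp = boost(v, 0.6)
assert abs(minkowski(vp) - minkowski(v)) < 1e-12
print(minkowski(v), minkowski(vp))
```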
Expected number of substitutions

Let's assume a haploid mutation rate of $\mu$ (i.e., per haploid genome per generation) and a diploid population of constant size $N$, hence $2N$ gene copies. For simplicity, I will assume an absence of selection because otherwise we would need to discuss what kind of selection you want to consider. Over $t$ generations, there are therefore $2 N \mu t$ mutations in the population. A fraction $\frac{1}{2N}$ of them drift to fixation. Therefore, $\frac{2 N \mu t}{2N} = \mu t$ mutations were fixed. If we consider a sequence with a mutation rate of $\mu = 10^{-4}$ (about 10 kilobases, if it is a human genome) and $t = 10^5$ generations, then there were 10 substitutions (on average). Of course, the assumption of no selection is likely wrong, and violation of this assumption could well drastically affect this estimate. However, you would need to describe precisely what type of selection pressures (and what selection coefficient and dominance coefficient) you want to consider for further calculations.

Probability of k substitutions

As substitution events are independent, the probability that exactly $k$ substitutions happen, given the expected number of substitutions ($\mu t$), is given by the Poisson distribution with rate $\mu t$. The probability $P(k \mid \mu, t)$ that $k$ substitutions have happened in $t$ generations, given the mutation rate $\mu$, is therefore $$P(k \mid \mu, t) = \frac{\left(\mu t\right)^k e^{-\mu t} }{k!}$$ Consequently, the probability of no substitution is obtained by setting $k$ to zero: $$P(k=0 \mid \mu, t) = e^{-\mu t}$$ Again, if $\mu = 10^{-4}$ (about 10 kilobases, if it is a human genome), then the probability that zero substitutions have happened in $t = 10^5$ generations is $e^{-10^{-4} \cdot 10^5} = e^{-10} ≈ 4.5\cdot 10^{-5}$
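The numbers above can be reproduced directly (a quick sketch; $\mu$ and $t$ are the example values from the text):

```python
import math

# Expected number of substitutions and Poisson probabilities, using the
# example values from the text: mu = 1e-4 per generation, t = 1e5 generations.
mu, t = 1e-4, 1e5
rate = mu * t  # expected substitutions = mu * t = 10

def p_k(k):
    """Poisson probability of exactly k substitutions in t generations."""
    return rate ** k * math.exp(-rate) / math.factorial(k)

print(rate)    # -> 10.0
print(p_k(0))  # e^-10, about 4.5e-5
```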
Examples of Expressing Integers as a Sum of Two Squares

Recall from the Expressing Integers as a Sum of Two Squares page that if $n \in \mathbb{N}$ and $n$ has prime power factorization $n = 2^{\alpha}p_1^{e_1}p_2^{e_2}...p_k^{e_k}q_1^{f_1}q_2^{f_2}...q_l^{f_l}$ where $p_i \equiv 1 \pmod 4$ for each $1 \leq i \leq k$ and $q_j \equiv 3 \pmod 4$ for each $1 \leq j \leq l$, then $n$ can be expressed as a sum of two squares if and only if $f_j$ is even for each $1 \leq j \leq l$. In other words, $n$ cannot be expressed as a sum of two squares if some prime $p \equiv 3 \pmod 4$ appears in the factorization of $n$ to an odd power. We will now look at some examples of expressing integers as a sum of two squares.

Example 1

Determine if $600$ can be expressed as a sum of two squares and if so, find a representation for $600$. The prime power factorization of $600$ is:

\begin{align} \quad 600 = 2^3 \cdot 3 \cdot 5^2 \end{align}

Note that $3 \equiv 3 \pmod 4$ but its exponent $1$ is not even, and so $600$ cannot be expressed as a sum of two squares.

Example 2

Determine if $1034$ can be expressed as a sum of two squares and if so, find a representation for $1034$. The prime power factorization of $1034$ is:

\begin{align} \quad 1034 = 2 \cdot 11 \cdot 47 \end{align}

Note that $11 \equiv 3 \pmod 4$ but its exponent $1$ is not even, so again, $1034$ cannot be expressed as a sum of two squares.

Example 3

Determine if $98765$ can be expressed as a sum of two squares and if so, find a representation for $98765$. The prime power factorization of $98765$ is:

\begin{align} \quad 98765 = 5 \cdot 19753 \end{align}

Since $5 \equiv 1 \pmod 4$ and $19753 \equiv 1 \pmod 4$ we have that $98765$ can be written as a sum of two squares. Namely:

\begin{align} \quad 98765 = 13^2 + 314^2 \end{align}
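The examples above can be checked with a short brute-force search (an illustrative sketch, not part of the page): for each $a \leq \sqrt{n}$, test whether $n - a^2$ is a perfect square.

```python
import math

def two_squares(n):
    """Return (a, b) with a^2 + b^2 = n and a <= b, or None if impossible."""
    for a in range(math.isqrt(n) + 1):
        b2 = n - a * a
        b = math.isqrt(b2)
        if b * b == b2 and a <= b:
            return (a, b)
    return None

print(two_squares(600))    # -> None  (the factor 3 appears to an odd power)
print(two_squares(1034))   # -> None  (the factors 11 and 47 appear to odd powers)
print(two_squares(98765))  # -> (13, 314), since 13^2 + 314^2 = 98765
```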
Most of the algorithms for estimating the volume of a convex polyhedron $K \subset R^d$ assume the existence of an affine transform $T$ with the property that $$ B \subset TK \tilde{\subset}\ \sigma B$$ where $B$ is the unit ball in $d$ dimensions, and $\sigma$ is $O(\sqrt{d})$. (Update: the $\tilde{\subset}$ indicates that the containment is true except for an $\epsilon$-fraction of $K$) The algorithms that I've seen for computing this transform are quite tricky. They require a bootstrap sampling process to extract a few points from inside $K$ which are then used to define the transformation. However, the fact that such a transformation exists is folklore, and my question was: Is there a simple algorithm (with possibly a weaker bound on $\sigma$) to compute the affine transform, given only a membership oracle for $K$ ?
Skills to Develop

In this section, we strive to understand the ideas generated by the following important questions: What is a sequence? What does it mean for a sequence to converge? What does it mean for a sequence to diverge?

We encounter sequences every day. Your monthly rent payments, the annual interest you earn on investments, a list of your car’s miles per gallon every time you fill up; all are examples of sequences. Other sequences with which you may be familiar include the Fibonacci sequence \[1, 1, 2, 3, 5, 8, . . . \nonumber\] in which each entry is the sum of the two preceding entries, and the triangular numbers \[1, 3, 6, 10, 15, 21, 28, 36, 45, 55, . . .\nonumber\] which are numbers that correspond to the number of vertices seen in the triangles in Figure 8.1.

Figure 8.1: Triangular numbers

Sequences of integers are of such interest to mathematicians and others that they have a journal 1 devoted to them and an on-line encyclopedia 2 that catalogs a huge number of integer sequences and their connections. Sequences are also used in digital recordings and digital images. To this point, most of our studies in calculus have dealt with continuous information (e.g., continuous functions). The major difference we will see now is that sequences model discrete instead of continuous information. We will study ways to represent and work with discrete information in this chapter as we investigate sequences and series, and ultimately see key connections between the discrete and continuous.

Preview Activity \(\PageIndex{1}\)

Suppose you receive \($5000\) through an inheritance. You decide to invest this money into a fund that pays \(8\%\) annually, compounded monthly. That means that each month your investment earns \(\frac{0.08}{12} \cdot P\) additional dollars, where \(P\) is your principal balance at the start of the month. So in the first month your investment earns \(5000 \left( \dfrac{0.08}{12} \right)\) or \($33.33\).
If you reinvest this money, you will then have \($5033.33\) in your account at the end of the first month. From this point on, assume that you reinvest all of the interest you earn.

a. How much interest will you earn in the second month? How much money will you have in your account at the end of the second month?

b. Complete Table 8.1 to determine the interest earned and total amount of money in this investment each month for one year.

c. As we will see later, the amount of money \(P_n\) in the account after month \(n\) is given by \[P_n = 5000 \left( 1 + \dfrac{0.08}{12} \right)^n. \nonumber\] Use this formula to check your calculations in Table 8.1. Then find the amount of money in the account after \(5\) years.

d. How many years will it be before the account has doubled in value to \($10000\)?

Month   Interest earned   Total amount of money in the account
0       $0                $5000.00
1       $33.33            $5033.33
2
3
4
5
6
7
8
9
10
11
12

Table 8.1: Interest

[1] The Journal of Integer Sequences at http://www.cs.uwaterloo.ca/journals/JIS/
[2] The On-Line Encyclopedia of Integer Sequences at http://oeis.org/

Sequences

As our discussion in the introduction and Preview Activity \(\PageIndex{1}\) illustrate, many discrete phenomena can be represented as lists of numbers (like the amount of money in an account over a period of months). We call any such list a sequence. In other words, a sequence is nothing more than a list of terms in some order. To be able to refer to a sequence in a general sense, we often list the entries of the sequence with subscripts, \[s_1, s_2, \ldots, s_n, \ldots, \nonumber\] where the subscript denotes the position of the entry in the sequence. More formally,

Definition: Sequences

A sequence is a list of terms \(s_1, s_2, s_3, \ldots\) in a specified order.

As an alternative to Definition 8.1, we can also consider a sequence to be a function \(f\) whose domain is the set of positive integers. In this context, the sequence \(s_1, s_2, s_3, \ldots\)
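The month-by-month computation in Preview Activity \(\PageIndex{1}\) can be checked with a short script. This is only an illustrative sketch of the arithmetic described above (the variable names are our own); it reinvests each month's interest and compares the result with the closed-form formula \(P_n = 5000(1 + 0.08/12)^n\).

```python
# Reinvest interest each month, as in Table 8.1, and check against
# the closed-form formula P_n = 5000 * (1 + 0.08/12)**n.
principal = 5000.0
rate = 0.08 / 12            # monthly interest rate

balance = principal
for n in range(1, 13):
    interest = balance * rate   # interest earned this month
    balance += interest         # reinvest it
    print(f"Month {n:2d}: interest ${interest:6.2f}, balance ${balance:8.2f}")

# The loop result should agree with the formula for n = 12:
formula_12 = principal * (1 + rate) ** 12
print(round(balance, 2), round(formula_12, 2))
```

The same formula answers part (d): the balance first exceeds \($10000\) when \((1 + 0.08/12)^n \geq 2\), which happens after roughly \(104\) months, a bit under \(9\) years.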
would correspond to the function \(f\) satisfying \(f(n) = s_n\) for each positive integer \(n\). This alternative view will be useful in many situations. We will often write the sequence \(s_1, s_2, s_3, \ldots\) using the shorthand notation \(\{s_n\}\). The value \(s_n\) (alternatively \(s(n)\)) is called the \(n\)th term in the sequence. If the terms are all \(0\) after some fixed value of \(n\), we say the sequence is finite. Otherwise the sequence is infinite. We will work with both finite and infinite sequences, but focus more on the infinite sequences. With infinite sequences, we are often interested in their end behavior and the idea of convergent sequences.

Activity \(\PageIndex{1}\)

a. Let \(s_n\) be the \(n\)th term in the sequence \(1, 2, 3, \ldots\). Find a formula for \(s_n\) and use appropriate technological tools to draw a graph of entries in this sequence by plotting points of the form \((n, s_n)\) for some values of \(n\). Most graphing calculators can plot sequences; directions follow for the TI-84. In the MODE menu, highlight SEQ in the FUNC line and press ENTER. In the Y= menu, you will now see lines to enter sequences. Enter a value for nMin (where the sequence starts), a function for u(n) (the \(n\)th term in the sequence), and the value of u(nMin). Set your window coordinates (this involves choosing limits for \(n\) as well as the window coordinates XMin, XMax, YMin, and YMax). The GRAPH key will draw a plot of your sequence.

b. Using your knowledge of limits of continuous functions as \(x \rightarrow \infty\), decide if this sequence \(\{s_n\}\) has a limit as \(n \rightarrow \infty\). Explain your reasoning.

c. Let \(s_n\) be the \(n\)th term in the sequence \(1, \frac{1}{2}, \frac{1}{3}, \ldots\). Find a formula for \(s_n\). Draw a graph of some points in this sequence. Using your knowledge of limits of continuous functions as \(x \rightarrow \infty\), decide if this sequence \(\{s_n\}\) has a limit as \(n \rightarrow \infty\). Explain your reasoning.
d. Let \(s_n\) be the \(n\)th term in the sequence \(2, \frac{3}{2}, \frac{4}{3}, \frac{5}{4}, \ldots\). Find a formula for \(s_n\). Using your knowledge of limits of continuous functions as \(x \rightarrow \infty\), decide if this sequence \(\{s_n\}\) has a limit as \(n \rightarrow \infty\). Explain your reasoning.

Next we formalize the ideas from Activity \(\PageIndex{1}\).

Activity \(\PageIndex{2}\)

a. Recall our earlier work with limits involving infinity in Section 2.8. State clearly what it means for a continuous function \(f\) to have a limit \(L\) as \(x \rightarrow \infty\).

b. Given that an infinite sequence of real numbers is a function from the positive integers to the real numbers, apply the idea from part (a) to explain what you think it means for a sequence \(\{s_n\}\) to have a limit as \(n \rightarrow \infty\).

c. Based on your response to (b), decide if the sequence \(\left\{ \frac{1+n}{2+n} \right\}\) has a limit as \(n \rightarrow \infty\). If so, what is the limit? If not, why not?

In Activities \(\PageIndex{1}\) and \(\PageIndex{2}\) we investigated the notion of a sequence \(\{s_n\}\) having a limit as \(n\) goes to infinity. If a sequence \(\{s_n\}\) has a limit as \(n\) goes to infinity, we say that the sequence converges, or is a convergent sequence. If the limit of a convergent sequence is the number \(L\), we use the same notation as we did for continuous functions and write \[ \lim_{n \rightarrow \infty} s_n = L. \nonumber\] If a sequence \(\{s_n\}\) does not converge, then we say that the sequence \(\{s_n\}\) diverges. Convergence of sequences is a major idea in this section and we describe it more formally as follows.

Definition: Convergence

A sequence \(\{s_n\}\) of real numbers converges to a number \(L\) if we can make all values of \(s_k\) for \(k \geq n\) as close to \(L\) as we want by choosing \(n\) to be sufficiently large.
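The definition of convergence can be seen at work numerically on the sequence \(\left\{ \frac{1+n}{2+n} \right\}\) from Activity \(\PageIndex{2}\). The following sketch (illustrative only) prints terms for large \(n\); the distance from \(1\) is exactly \(\frac{1}{2+n}\), which we can make as small as we want by taking \(n\) large.

```python
# Terms of s_n = (1+n)/(2+n). Each term differs from 1 by 1/(2+n),
# so the terms can be made as close to 1 as we like.
def s(n):
    return (1 + n) / (2 + n)

for n in [1, 10, 100, 10_000]:
    print(n, s(n), abs(s(n) - 1))
```

For instance, to guarantee \(|s_k - 1| < 10^{-6}\) for all \(k \geq n\), it suffices to choose \(n > 10^6\), exactly as the definition requires.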
Remember, the idea of a sequence having a limit as \(n \rightarrow \infty\) is the same as the idea of a continuous function having a limit as \(x \rightarrow \infty\). The only new wrinkle here is that our sequences are discrete rather than continuous. We conclude this section with a few more examples in the following activity.

Activity \(\PageIndex{3}\)

Use graphical and/or algebraic methods to determine whether each of the following sequences converges or diverges.

a. \(\left\{ \frac{1+2n}{3n-2} \right\}\)

b. \(\left\{ \frac{5+3^n}{10+2^n} \right\}\)

c. \(\left\{ \frac{10^n}{n!} \right\}\) (where \(!\) is the factorial symbol and \(n! = n(n-1)(n-2) \cdots (2)(1)\) for any positive integer \(n\); by convention, we define \(0!\) to be \(1\)).

Summary

In this section, we encountered the following important ideas: A sequence is a list of objects in a specified order. We will typically work with sequences of real numbers, and we can also think of a sequence as a function from the positive integers to the set of real numbers. A sequence \(\{s_n\}\) of real numbers converges to a number \(L\) if we can make every value of \(s_k\) for \(k \geq n\) as close as we want to \(L\) by choosing \(n\) sufficiently large. A sequence diverges if it does not converge.

Contributors

Matt Boelkins (Grand Valley State University), David Austin (Grand Valley State University), Steve Schlicker (Grand Valley State University)
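A numerical look at the three sequences in Activity \(\PageIndex{3}\) can guide the algebraic work. The sketch below (illustrative, not a substitute for the reasoning the activity asks for) prints several terms of each: the first levels off near \(\frac{2}{3}\), the second grows roughly like \(\left(\frac{3}{2}\right)^n\) and so diverges, and in the third the factorial eventually overwhelms \(10^n\), driving the terms to \(0\).

```python
import math

# Several terms of each sequence from Activity 8.3.
def a(n):
    return (1 + 2 * n) / (3 * n - 2)      # approaches 2/3

def b(n):
    return (5 + 3 ** n) / (10 + 2 ** n)   # grows like (3/2)**n: diverges

def c(n):
    return 10 ** n / math.factorial(n)    # n! eventually dominates: approaches 0

for n in [1, 5, 20, 50]:
    print(n, a(n), b(n), c(n))
```

Note that \(c(n)\) first increases (up through \(n = 10\), while \(10/n \geq 1\)) before the factorial takes over, a reminder that the early terms of a sequence say nothing about its limit.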