Let $q_i \in Q = \mathbb R_+$ denote the quantity produced by firm $i \in \{1,2\}$, and let $\pi_i(q_1,q_2) = (1-q_1-q_2)q_i$ denote the profit of firm $i$. A Nash equilibrium $(q_1^*,q_2^*) \in Q^2$ satisfies \begin{align} &\pi_1(q_1^*,q_2^*) \geq \pi_1(q_1,q_2^*) \quad \forall q_1 \in Q\\ &\pi_2(q_1^*,q_2^*) \geq \pi_2(q_1^*,q_2) \quad \forall q_2 \in Q. \end{align} We consider symmetric equilibria of the form $q^* = q_1^* = q_2^*$ and therefore apply the symmetric-opponents-form approach. Define $\pi(q,q^*) = \pi_1(q,q^*)$. There is a unique symmetric root of the first-order condition $\pi_q(q^*,q^*) = 0$, namely $q^* = \frac{1}{3}$.

Claim: The candidate $q = \frac{1}{3}$ is the unique symmetric maximizer of $\pi(q,q^*)$.

Problem: The candidate might be a minimum or a saddle point.

The idea: In economic settings, equilibrium quantities are restricted by individual rationality, i.e. $\pi(q^*,q^*) = (1-2q^*)q^* \geq 0$ implies $q^* \in [0,\frac{1}{2}]$. Since $\pi(\frac{1}{3},\frac{1}{3}) = \frac{1}{9} > 0$, the claim follows.

Edit: I edit the question to clarify the issue further. Suppose I have no information about the concavity of $\pi(q,q^*)$ with respect to $q$.

A general argument: We need to distinguish 4 cases.

1. $q^*$ is a saddle point, $\pi(q^*,q^*) > 0$ and $\pi(\infty,q^*) = \infty$.
2. $q^*$ is a saddle point, $\pi(q^*,q^*) < 0$ and $\pi(\infty,q^*) = -\infty$.
3. $q^*$ is a minimum, $\pi(q^*,q^*) < 0$ and $\pi(\infty,q^*) = \infty$.
4. $q^*$ is a maximum, $\pi(q^*,q^*) > 0$ and $\pi(\infty,q^*) = -\infty$.

Since only case 4 applies here, $\frac{1}{3} = \arg\max_q\pi(q,q^*)$.
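A quick numerical sanity check of the claim (my own sketch; the grid resolution is an arbitrary choice and not part of the argument): it evaluates $\pi(q, q^*)$ with $q^* = \tfrac13$ over the individually rational range and confirms the maximizer sits at $q = \tfrac13$.

```python
import numpy as np

def profit(q, q_opp):
    # pi(q, q*) = (1 - q - q*) q, the deviating firm's profit against the opponent's q*
    return (1.0 - q - q_opp) * q

q_star = 1.0 / 3.0
grid = np.linspace(0.0, 0.5, 200001)   # individual rationality bounds q* to [0, 1/2]
best = grid[np.argmax(profit(grid, q_star))]
print(best)                             # ~0.3333: the candidate is a best response to itself
```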
This turned out to be very obvious based on the definition of a prime number, so obvious I felt like not bothering, but I still want to see if there is a problem with the proof so far, seeing as I am in general just terrible at proofs. I observed that the greatest common divisor of the largest divisor of $n \cdot p_n$ and the largest divisor of $n\cdot\bigl\lfloor \frac{p_n}{n} \bigr\rfloor$ is always equal to $n$. I then assumed this must be because $\bigl\lfloor \frac{p_n}{n} \bigr\rfloor$ and $p_n$ share no common divisor greater than 1, i.e. they are coprime. So the question became: Prove that $p_n$ and $\bigl\lfloor \frac{p_n}{n} \bigr\rfloor$ must be coprime. And this is what I have done so far: since $\lnot (n \mid p_n)$ holds for all natural $n > 1$, we have $\frac{p_n}{n}-\bigl\lfloor \frac{p_n}{n} \bigr\rfloor \ne 0$ for all natural $n > 1$. Since we also know that $\gcd(k,p_n)=1$ for all $1 \leq k \leq p_n -1$, and that $1 \leq \bigl\lfloor \frac{p_n}{n} \bigr\rfloor \lt p_n -1$ for all natural $n \gt 1$ (the lower bound holds because $p_n > n$), we can substitute $k=\bigl\lfloor \frac{p_n}{n} \bigr\rfloor$ and therefore $\gcd\bigl(\bigl\lfloor \frac{p_n}{n} \bigr\rfloor,p_n\bigr)=1$, hence showing that $\bigl\lfloor \frac{p_n}{n} \bigr\rfloor$ and $p_n$ are coprime.
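A small empirical check of the statement (my own sketch; `sympy.prime(n)` returns the $n$-th prime). The check starts at $n = 2$, since for $n = 1$ we get $\lfloor p_1/1\rfloor = p_1$ and the gcd is $p_1$ itself.

```python
from math import gcd
from sympy import prime

# verify gcd(floor(p_n / n), p_n) = 1 for the n-th prime p_n, over 2 <= n < 500
assert all(gcd(prime(n) // n, prime(n)) == 1 for n in range(2, 500))
print("coprime for all checked n")
```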
What is a Functor? Definition and Examples, Part 1

Next up in our mini series on basic category theory: functors! We began this series by asking What is category theory, anyway? and last week walked through the precise definition of a category along with some examples. As we saw in example #3 in that post, a functor can be viewed as an arrow/morphism between two categories.

...every sufficiently good analogy is yearning to become a functor. - John Baez

What is a functor? More precisely, a functor $F:\mathsf{C}\to\mathsf{D}$ from a category $\mathsf{C}$ to a category $\mathsf{D}$ consists of some data that satisfies certain properties.

The Data
- an object $F(x)$ in $\mathsf{D}$ for every object $x$ in $\mathsf{C}$
- a morphism $F(x)\overset{F(f)}{\longrightarrow}F(y)$ in $\mathsf{D}$ for every morphism $x\overset{f}{\longrightarrow}y$ in $\mathsf{C}$

The Properties
- $F$ respects composition, i.e. $F(g\circ f)=F(g)\circ F(f)$ in $\mathsf{D}$ whenever $g$ and $f$ are composable morphisms in $\mathsf{C}.$
- $F$ sends identities to identities, i.e. $F(\text{id}_x)=\text{id}_{F(x)}$ for all objects $x$ in $\mathsf{C}.$

Example #1: a functor between groups

We noted last week that every group $G$ can be viewed as a one-object category called $\mathsf{B}G$. Suppose then that $\mathsf{B}G$ and $\mathsf{B}H$ are two such categories. What would a functor $F:\mathsf{B}G\to\mathsf{B}H$ look like? To start, it must send the single object ${\color{RubineRed}\bullet}$ of $\mathsf{B}G$ to the single object ${\color{ProcessBlue}\bullet}$ of $\mathsf{B}H$. Moreover, any morphism ${\color{RubineRed}\bullet}\overset{g}{\longrightarrow} {\color{RubineRed}\bullet}$, which, you'll remember, is just a group element $g\in G$, must map to a morphism ${\color{ProcessBlue}\bullet}\overset{F(g)}{\longrightarrow} {\color{ProcessBlue}\bullet}$. That is, $F(g)$ must be an element of $H$. The composition property requires that $F(g\circ g')=F(g)\circ F(g')$ for all group elements $g,g'\in G$. And finally, if ${\color{RubineRed}e}$ denotes the identity morphism on ${\color{RubineRed}\bullet}$ (i.e. the identity element in $G$), then we must have $F({\color{RubineRed}e})={\color{ProcessBlue}e}$ where ${\color{ProcessBlue}e}$ is the identity morphism on ${\color{ProcessBlue}\bullet}$ (i.e. the identity element in $H$). So what's a functor $F:\mathsf{B}G\to\mathsf{B}H$? It's precisely a group homomorphism from $G$ to $H$! So in this example, a functor is just a function (which happens to be compatible with the group structure). But what if the domain/codomain of a functor has more than one object in it?

Example #2: the fundamental group

There is a functor $\pi_1:\mathsf{Top}\to\mathsf{Group}$ that associates to every topological space* $X$ a group $\pi_1(X)$, called the fundamental group of $X$, and which sends every continuous function $X\overset{f}{\longrightarrow}Y$ to a group homomorphism $\pi_1(X)\overset{\pi_1(f)}{\longrightarrow}\pi_1(Y)$. The elements of $\pi_1(X)$ are (homotopy classes of) maps of the circle into the space $X$. But don't worry if you're not familiar with the phrase "homotopy classes of." Just know that informally $\pi_1$ is a "hole-detector"---it keeps track of the various loops in $X$. And knowing that $\pi_1$ is functorial allows you to prove cool things like Brouwer's fixed point theorem. For instance, see the proof of Theorem 1.3.3 here. Now it turns out that the fundamental group is an invariant of a topological space.
In other words, if $X$ and $Y$ are homeomorphic spaces then their fundamental groups are isomorphic. (Even better, this sentence is still true if we replace "homeomorphic" by "homotopic" which is a slightly weaker notion of "sameness.") This gives us a handy way to distinguish spaces: if $X$ and $Y$ have nonisomorphic fundamental groups, then $X$ and $Y$ are guaranteed to be topologically different. (Actually, there's nothing special about the number 1 here. For each $n\geq 1$, there is a functor $\pi_n:\mathsf{Top}\to\mathsf{Group}$ that sends $X$ to $\pi_n(X)$ which is again, informally, the group whose elements are maps of the $n$-dimensional sphere into $X$. The groups $\pi_n(X)$ for $n>1$ are called the higher homotopy groups of $X$. And when $X$ is a sphere, cool and spooky things happen.) In practice it can be helpful to think of a functor $F:\mathsf{C}\to\mathsf{D}$ as encoding an invariant of some sort. This is because if two objects $x$ and $y$ are "the same" in $\mathsf{C}$, then $F(x)$ and $F(y)$ must be "the same" in $\mathsf{D}$. By "the same," I'm referring to the precise notion of isomorphic: In any category, a morphism $x\overset{f}{\longrightarrow}y$ is said to be an isomorphism if there exists a morphism $y\overset{g}{\longrightarrow}x$ so that $g\circ f=\text{id}_x$ and $f\circ g=\text{id}_y$. Isomorphisms in $\mathsf{Set}$ are called bijections, isomorphisms in $\mathsf{Top}$ are called homeomorphisms, isomorphisms in $\mathsf{Man}$ (the category of smooth manifolds and smooth maps) are called diffeomorphisms, isomorphisms in $\mathsf{Vect}_{\mathbb{k}}$ (the category of $\mathbb{k}$-vector spaces with linear transformations) are called, well, isomorphisms, and so on.** So we can rephrase the above more formally: "If $f$ is an isomorphism between $x$ and $y$ then $F(f)$ is an isomorphism between $F(x)$ and $F(y)$." (The proof isn't tricky - it follows directly from the definitions!) Or, to put it simply, functors preserve isomorphisms. Well, I have a couple more examples of functors that I'm excited to share with you---one of them makes an appearance in a familiar equation from calculus!---but I'll save them for tomorrow. Stay tuned! *Technically, they are pointed topological spaces, i.e. spaces where you declare a certain point, called a basepoint, to be special/distinguished. So really, $\pi_1$ is from $\mathsf{Top}_*$ to $\mathsf{Group}$ where $\mathsf{Top}_*$ denotes the category of pointed spaces with basepoint-preserving maps as morphisms. **Fun fact: Categories in which every morphism is an isomorphism are given a special name: groupoids. (Can you guess why?) Every group, then, is precisely a one-object groupoid. And yes! The fundamental groupoid of a topological space is a thing! Perhaps we'll chat about it in a future post....
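To make Example #1 above concrete, here is a small sketch (my own illustration, not from the post): it treats a functor $\mathsf{B}\mathbb{Z}\to\mathsf{B}\mathbb{Z}_2$ as the group homomorphism $k\mapsto k\bmod 2$ and checks the two functor properties on a handful of elements, using the fact that composition of morphisms in a one-object group category is just the group operation.

```python
import itertools

# F on morphisms: a map from (Z, +) to (Z_2, +)
F = lambda k: k % 2

# composition of morphisms in BG / BH is the respective group operation
compose_Z = lambda g, h: g + h
compose_Z2 = lambda g, h: (g + h) % 2

# Property 1: F(g . g') = F(g) . F(g') for all tested group elements
assert all(F(compose_Z(g, h)) == compose_Z2(F(g), F(h))
           for g, h in itertools.product(range(-5, 6), repeat=2))

# Property 2: F sends the identity of Z (namely 0) to the identity of Z_2
assert F(0) == 0
```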
What is a Functor? Definitions and Examples, Part 2

Continuing yesterday's list of examples of functors, here is...

Example #3: the chain rule

Let $\mathsf{E}$ be the category whose objects are the Euclidean spaces $\mathbb{R},\mathbb{R}^2,\mathbb{R}^3,\ldots,\mathbb{R}^n,\ldots$ and let's suppose each has a special point in it. That is, an object will not just be the space $\mathbb{R}^n$ but $\mathbb{R}^n$ together with a basepoint $x_0$. (Just let $x_0$ be your favorite $n$-dimensional vector. Pick the zero-vector if you like). In this category the morphisms $\mathbb{R}^n\overset{f}{\longrightarrow}\mathbb{R}^m$ are the differentiable functions that preserve basepoints: if $y_0$ is the basepoint in $\mathbb{R}^m$ and $x_0$ is the basepoint in $\mathbb{R}^n$, then $f(x_0)=y_0$. Let $\mathsf{M}$ be the category whose objects are positive integers and where a morphism $n\to m$ is an $m\times n$ matrix with real entries. In other words, there is an arrow from $n$ to $m$ for each such matrix. So here, composition of morphisms corresponds to matrix multiplication. Define $D:\mathsf{E}\to\mathsf{M}$ as follows:
- on objects in $\mathsf{E}$: let $D$ send $\mathbb{R}^n$ to its dimension, $n$
- on morphisms in $\mathsf{E}$: let $D$ send a differentiable function $f:\mathbb{R}^n\to\mathbb{R}^m$ to its $m\times n$ Jacobian matrix, evaluated at $x_0\in\mathbb{R}^n.$

The $ij^{\text{th}}$ entry of the matrix $Df\big\vert_{x_0}$ is $\dfrac{\partial f_i}{\partial x_j}(x_0)$ where $f_i$ is the $i^{\text{th}}$ component function of $f=(f_1,f_2,\ldots,f_m)$ and where $x_j$ is the $j^{\text{th}}$ standard coordinate of $\mathbb{R}^n$. Now, is $D$ a functor? If so, then it must respect composition. Let's check this. Suppose $\mathbb{R}^m\overset{g}{\longrightarrow}\mathbb{R}^k$ is another basepoint-preserving differentiable function. Does the following equality hold? $$D(g\circ f)\big\vert_{x_0} = Dg\big\vert_{f(x_0)}\cdot Df\big\vert_{x_0}$$ Of course it does! It's the chain rule from multivariable calculus! To find the Jacobian of $g\circ f$ at $x_0$, we simply multiply the Jacobian of $f$ at $x_0$ on the left by the Jacobian of $g$ at $f(x_0)$. And in the special case when $n=m=1$, we recover the familiar single-variable equation: $$(g\circ f)'(x_0)=g'(f(x_0))\cdot f'(x_0).$$ And a quick check will show that if $\mathbb{R}^n\overset{f}{\longrightarrow}\mathbb{R}^n$ is the identity function, then $Df\big\vert_{x_0}$ is indeed the $n\times n$ identity matrix. Thanks to calculus, our $D$ is indeed a functor.

Example #4: contravariant functors

There is a functor $F:\mathsf{Vect}_{\mathbb{k}}\to \mathsf{Vect}_{\mathbb{k}}$ where $\mathsf{Vect}_{\mathbb{k}}$ consists of all vector spaces over a field $\mathbb{k}$ with linear transformations as morphisms. This $F$ sends a vector space $V$ to $\text{hom}(V,\mathbb{k})$, the space of linear functions $V\to\mathbb{k}$. This space of maps is called the dual space of $V$ and is sometimes denoted $V^*$. What do you think this functor does to morphisms? That is, given a linear map $V\overset{f}{\longrightarrow} W$, how can we get a map $F(f)$ between $\text{hom}(V,\mathbb{k})$ and $\text{hom}(W,\mathbb{k})$? Well, a map $F(f):\text{hom}(V,\mathbb{k})\to \text{hom}(W,\mathbb{k})$ would have to take a linear transformation $\varphi$ with domain $V$ and send it to one with domain $W$, and it would have to do it using $f:V\to W$ somehow. I'm not sure if there's a way to do this---the arrows simply don't line up. BUT, if we instead started with a linear transformation on $W$, say $\varphi:W\to\mathbb{k}$, we could obtain a linear transformation on $V$ by precomposing with $f$!
So let's define $F(f):=f^*$, typically called a pullback, to be the function $f^*$ that takes a linear transformation $\varphi:W\to\mathbb{k}$ and sends it to $\varphi\circ f:V\to\mathbb{k}$, in other words, $f^*(\varphi)=\varphi\circ f$. See how this assignment of $F$ reverses arrows? We give functors with this arrow-flipping property a special name-- contravariant. The "normal" functors (as defined yesterday) are then called covariant. Formally, a contravariant functor $F:\mathsf{C}^{op}\to\mathsf{D}$ (sometimes you'll see the "op" placed on the $\mathsf{C}$ to remind you, "Hey! this functor's contravariant!") has the same data and properties as a covariant functor, except that the composition property is now $F(f\circ g)=F(g)\circ F(f)$ whenever $f$ and $g$ are composable morphisms in $\mathsf{C}$. For practice, you could try checking that the functor $F:\mathsf{Vect}_{\mathbb{k}}\to\mathsf{Vect}_{\mathbb{k}}$ we defined above is indeed a contravariant functor. (We only discussed its action on morphisms and objects but didn't check that it satisfies the functorial properties.) Example #5: representable functors If you're familiar with some basic topology, then you might like our last example: There is a contravariant functor $\mathscr{O}$ from $\mathsf{Top}^{op}$ to $\mathsf{Set}$ that associates to each topological space $X$ its set $\mathscr{O}(X)$ of open subsets. On morphisms, $\mathscr{O}$ takes a continuous function $f:X\to Y$ to $f^{-1}:\mathscr{O}(Y)\to\mathscr{O}(X)$ where $f^{-1}$ is a function sending an open set $U$ in $Y$ to the open set $f^{-1}(U)$ in $X$. (We know $f^{-1}(U)$ is open since $f$ is continuous!) Now here's what's cool: it turns out that $\mathscr{O}$ is a very, very special kind of functor---it is representable. I won't go into the details here, but in short, for any category $\mathsf{C}$, a (covariant or contravariant) functor $F:\mathsf{C}\to\mathsf{Set}$ is said to be representable if there is an object $c$ in $\mathsf{C}$ so that for all objects $x$ in $C$, the elements of $F(x)$ are really maps $x\to c$ (or maps $c\to x$, depending on the "variance" of $F$). In this case we say $F$ is represented by the object $c$. I know this may sound like a head-scratcher, but we've seen this before! Remember our discussion on that measly space with two points, a.k.a. the Sierpinski space, $S$? In that post, we discovered that for any topological space $X$, continuous functions $X\to S$ are in one-to-one correspondence with the open subsets of $X$. Friends, that's precisely the statement that $\mathscr{O}$ is a representable functor, represented by the one and only Sierpinski space! Who knew two little dots could be so special? Pretty cool, huh? I'd love to chat more about representability on the blog, but to do so, we first need to understand natural transformations. And that's what we'll do next week! References: Category Theory in Context by Emily Riehl
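As a small addendum to Example #4 (my own sketch, not from the post): if a linear map $f:V\to W$ is represented by a matrix $A$ and covectors are row vectors, the pullback $f^*$ acts by right-multiplication, and the contravariant composition law $F(g\circ f)=F(f)\circ F(g)$ falls out of associativity of matrix multiplication.

```python
import numpy as np

def pullback(A):
    """F(f) = f^*: hom(W, k) -> hom(V, k), sending phi to phi o f.
    Covectors phi are row vectors; f is the linear map with matrix A."""
    return lambda phi: phi @ A

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))    # f: V -> W  (dim V = 2, dim W = 3)
B = rng.normal(size=(4, 3))    # g: W -> U  (dim U = 4)
phi = rng.normal(size=(1, 4))  # a covector on U

lhs = pullback(B @ A)(phi)               # F(g o f)(phi)
rhs = pullback(A)(pullback(B)(phi))      # (F(f) o F(g))(phi)
assert np.allclose(lhs, rhs)             # arrows reverse: F(g o f) = F(f) o F(g)
```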
Essay and Opinion: A new graph density
Version 1, released on 08 April 2015 under a Creative Commons Attribution 4.0 International License.
Authors' affiliation: Laboratoire Electronique, Informatique et Image (Le2i), CNRS UMR6306, Université de Bourgogne, Arts et Métiers ParisTech.
Keywords: community discovery, density, graph properties, graph theory, metric spaces.

Abstract

For a given graph $G$ we propose the non-classical definition of its true density: $\rho(G) = \mathcal{M}ass (G) / \mathcal{V}ol (G)$, where the $\mathcal{M}ass$ of the graph $G$ is the total mass of its links and nodes, and $\mathcal{V}ol (G)$ is a size-like graph characteristic, defined as a function from all graphs to $\mathbb{R} \cup \infty$. We show how the graph density $\rho$ can be applied to evaluate communities, i.e. "dense" clusters of nodes.

Background and motivation

Take a simple graph $G = (V, E)$ with $n$ nodes and $m$ links. The standard definition of graph density, i.e. the ratio between the number of its links and the number of all possible links between $n$ nodes, is not very suitable when we are talking about the true density in the physical sense. More precisely, by "the true density" we mean: $\rho(G) = \mathcal{M}ass (G) / \mathcal{V}ol (G) \,,$ where the $\mathcal{M}ass$ of the graph $G$ equals the total mass of its links and nodes, and the $\mathcal{V}ol$ is a size-like characteristic of $G$. Consider again the usual graph density: $D = \frac{2 m}{n \left( n - 1 \right)}$. Rewriting $D$ in the "mass divided by volume" form, one obtains the following definitions of graph mass and volume: \begin{align*} \mathcal{M}ass_D (G) &= 2 m \,, \\ \mathcal{V}ol_D (G) &= n \left( n - 1 \right) ; \end{align*} Note that $\mathcal{V}ol_D (G)$ depends only on the number of nodes, so it is a very rough estimate of the actual graph volume. Moreover, any function of the number of nodes (and the number of links) will give somewhat strange results, because we neglect the actual graph structure in this way. In the next section of this article we give a formal definition of the actual graph volume. For the moment, just take a look at Fig. 1, where different graphs with 6 nodes and 6 links are shown. Intuitively, graph $C$ is larger (more voluminous) than $B$ and $A$. But it is not clear which graph is larger: $A$ or $B$.

True graph density

$\mathcal{M}ass (G)$

It seems a good idea to define $\mathcal{M}ass(G)$ as the total mass of its nodes and links. The simplest way consists in assuming that the mass of one link (or node) equals $1$: \begin{equation} \tag{MASS} \mathcal{M}ass(G) = n + m \label{eq:mass} \end{equation}

$\mathcal{V}ol (G)$

We cannot use any classical measure (e.g. Lebesgue-like) to define the volume of a graph $G$, because all measures are additive. Let us explain why additivity is bad. Observing that $G$ is the union of its links and nodes, and assuming that the volume of a link (node) equals one, we obtain: \[ \mathcal{C}lassical\mathcal{V}ol(G) = n + m \,,\] where $m$ is the number of links in $G$, and $n$ equals the number of nodes. The graph structure disappears again, and we should find "another definition of volume". A clever person can develop a notion of "volume" for any given metric space. Since any graph can be regarded as a metric space, we can use this as a solution to our problem. Here we briefly describe how Feige in his paper [2] defined the volume of a finite metric space $(S,d)$ of $n$ points.
A function $\phi : S \to \mathbb{R}^{n-1}$ is a contraction if for every $u,v \in S$, $d_{\mathbb{R}} \big( \phi (u), \phi (v) \big) \le d(u,v)$, where $d_{\mathbb{R}}$ denotes the usual Euclidean distance between points in $\mathbb{R}^{n-1}$. Feige's volume $\mathit{Vol} \big( (S,d) \big)$ is the maximum $(n-1)$-dimensional Euclidean volume of a simplex that has the points of $\{\phi(s) \mid s \in S \}$ as vertices, where the maximum is taken over all contractions $\phi : S \to \mathbb{R}^{n-1}$. Sometimes, in order to calculate Feige's volume, we need to modify the original metric. Abraham et al. deeply studied Feige-like embeddings in [1]. Another approach is to find a good mapping $g : S \to \mathbb{R}^{n-1}$, trying to preserve the original distances as much as possible, and to calculate $\mathit{Vol} \big( (S,d ) \big)$ as the volume of the convex envelope that contains all $\{g(s) \mid s \in S\}$. The interested reader can refer to Matoušek's book [3], which gives a good introduction to such embeddings. But we should note that not all finite metric spaces can be embedded into Euclidean space with exact preservation of distances. In this paper we choose another approach: instead of doing approximative embeddings, we compute the "volume" directly.

First of all, let us introduce some natural properties that must be satisfied by the graph volume. A graph volume is a function from the set of all graphs $\mathcal{G}$ to $\mathbb{R} \cup \infty$: \[ \mathcal{V}ol : \mathcal{G} \to \mathbb{R} \cup \infty \,,\] Note that our volume has no such parameter as dimension. The absence of dimension allows us to directly compare the volumes of any two graphs. Let the volume of any complete graph be equal to $1$: \begin{equation} \tag{I} \mathcal{V}ol (K_x) = 1 \label{eq:I} \end{equation} Then, for any disconnected graph, denoted by $G_{\bullet^\bullet_\bullet}$, let the volume be equal to infinity: \begin{equation} \tag{II} \mathcal{V}ol (G_{\bullet^\bullet_\bullet}) = \infty \label{eq:II} \end{equation} Intuitively, here one can make an analogy with a gas. Since gas molecules are "not connected", they fill an arbitrarily large container in which they are placed. When we add a new edge between two existing vertices, the new volume (after edge addition) cannot be greater than the original volume: \begin{equation} \tag{III} \mathcal{V}ol (G) \ge \mathcal{V}ol (G + e) \label{eq:III} \end{equation} When we add a new vertex $v^1$ with degree $1$, the new volume cannot be less than the original one: \begin{equation} \tag{IV} \mathcal{V}ol (G) \le \mathcal{V}ol (G + v^1) \label{eq:IV} \end{equation} For a given graph $G = (V, E)$ the eccentricity $\epsilon(v)$ of a node $v$ equals the greatest distance between $v$ and any other node from $G$: \[ \epsilon(v) = \max_{u \in V}{d(v,u)} \,, \] where $d(v,u)$ denotes the length of a shortest path between $v$ and $u$. Finally, we define the volume of a graph $G$ as the geometric mean of all eccentricities: \begin{equation} \mathcal{V}ol (G) = \sqrt[|V|]{\prod_{v \in V} \epsilon(v)} \tag{VOLUME} \label{eq:volume} \end{equation} Obviously, properties \ref{eq:I}, \ref{eq:II} and \ref{eq:III} hold for this definition. But property \ref{eq:IV} still needs to be proved or disproved. Reconsidering the graphs from Fig. 1, we have $\mathcal{V}ol(A) = \sqrt[6]{3^3 2^3} \approx 2.45$, $ \mathcal{V}ol(B) = \sqrt[6]{3^3 2^3} \approx 2.45$ and $\mathcal{V}ol(C) = \sqrt[6]{3^6} = 3$.

Possible applications

Quality of communities

Consider two graphs $A$ and $B$.
We say that $A$ is better than $B$ if and only if $\rho(A) > \rho(B)$. Using this notion one can define the quality of a graph partition.

The volume of finite metric spaces

Our approach can be applied to calculate the "volume" of any finite metric space $(S,d)$: \[ \mathcal{V}ol \big( (S,d) \big) = \sqrt[|S|]{\prod_{s \in S} \epsilon(s)} \,,\] where $\epsilon(s) = \max_{p \in S}{d(s,p)}$.

References

[1] I. Abraham, Y. Bartal, O. Neiman, and L. J. Schulman, Volume in general metric spaces, in Proceedings of the 18th Annual European Conference on Algorithms: Part II, ESA'10, Berlin, Heidelberg, 2010, Springer-Verlag, pp. 87–99.
[2] U. Feige, Approximating the bandwidth via volume respecting embeddings, Journal of Computer and System Sciences, 60 (2000), pp. 510–539.
[3] J. Matoušek, Lectures on Discrete Geometry, Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2002.
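A small computational sketch of the definitions above (my own illustration; it assumes $\mathcal{M}ass = n + m$ and the eccentricity-based $\mathcal{V}ol$, and uses networkx). The 6-cycle, whose eccentricities are all 3, reproduces the value $\mathcal{V}ol = 3$ computed for graph $C$ above, and a complete graph has $\mathcal{V}ol = 1$ as required by property (I).

```python
import math
import networkx as nx

def true_density(G):
    """rho(G) = Mass(G) / Vol(G), with Mass = n + m and
    Vol = |V|-th root of the product of node eccentricities."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    if not nx.is_connected(G):
        return 0.0  # Vol is infinite for disconnected graphs (property II), so rho = 0
    ecc = nx.eccentricity(G)                      # dict: node -> eccentricity
    vol = math.prod(ecc.values()) ** (1.0 / n)
    return (n + m) / vol

print(true_density(nx.complete_graph(4)))  # Vol(K_4) = 1, Mass = 10, rho = 10.0
print(true_density(nx.cycle_graph(6)))     # 6 nodes, 6 links, Vol = 3, rho = 4.0
```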
A remark on minimal nodal solutions of an elliptic problem in a ball
DOI: http://dx.doi.org/10.12775/TMNA.2004.025

Abstract

Consider the equation $-\Delta u = u_{+}^{p-1}-u_{-}^{q-1}$ in the unit ball $B$ with a homogeneous Dirichlet boundary condition. We assume $2< p,q< 2^{*}$. Let $\varphi(u)=(1/2)\int_{B} |\nabla u|^{2} dx-(1/p)\int_{B}u_{+}^{p}dx -(1/q)\int_{B}u_{-}^{q}dx$ be the functional associated to this equation. The nodal Nehari set is defined by $\mathcal M=\{u\in H^{1}_{0}(B): u_{+}\neq 0,\ u_{-}\neq 0,\ \langle\varphi'(u_{+}),u_{+}\rangle= \langle\varphi'(u_{-}),u_{-}\rangle=0\}$. Now let $\mathcal M_{\text{\rm rad}}$ denote the subset of $\mathcal M$ consisting of radial functions and let $\beta_{\text{\rm rad}}$ be the infimum of $\varphi$ restricted to $\mathcal M_{\text{\rm rad}}$. Furthermore fix two disjoint half balls $B^{+}$ and $B^{-}$ and denote by $\mathcal M_{h}$ the subset of $\mathcal M$ consisting of functions which are positive in $B^{+}$ and negative in $B^{-}$. We denote by $\beta_{h}$ the infimum of $\varphi$ restricted to $\mathcal M_{h}$. In this note we are interested in obtaining inequalities between $\beta_{\text{\rm rad}}$ and $\beta_{h}$. This problem is related to the study of symmetry properties of least energy nodal solutions of the equation under consideration. We also consider the case of the homogeneous Neumann boundary condition.

Keywords: Nodal solutions; Symmetry; Elliptic semilinear equations
Perhaps it will help to discuss this in a broader context, so let me explain this concept with regard to the resolution principle, which is used to check if some formula is unsatisfiable. Resolution for propositional formulae is rather straightforward and is based on the Conjunctive Normal Form. We want to apply resolution to first-order formulae, i.e. in a sense reduce FOL to propositional logic. One way to do that is to convert the original formula via Skolemization into a formula of the form $$\forall x_1x_2...x_nM,$$ where $M$ (called the 'matrix') is a formula without quantifiers and in CNF. To do this conversion we must get rid of existential quantifiers. The rule you provided does that in the special case where the current existential quantifier is not in the scope of any universal quantifiers. The intuition here is: if it is known that there exists some $\nu$ for which $\alpha$ holds, then why don't we "pretend" we know that $\nu = k$, where $k$ is some fresh constant. It needs to be 'fresh' in order not to introduce any dependencies between unrelated things. And the reason we are allowed to "pretend" we know $\nu$ is that there is a theorem which states that Skolemization preserves unsatisfiability. Note: recall that we only want to check if a formula is satisfiable/unsatisfiable, so the Skolemized formula is not necessarily equivalent to the original one.
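A small worked illustration (my own example, not from the original question): the sentence $\exists x\,\forall y\, P(x,y)$ has its existential quantifier outside the scope of any universal quantifier, so the rule replaces $x$ by a fresh constant $c$, giving $\forall y\, P(c,y)$. If the quantifiers are nested the other way, as in $\forall y\,\exists x\, P(y,x)$, the witness may depend on $y$, so Skolemization instead introduces a fresh function symbol and yields $\forall y\, P(y,f(y))$. In both cases the result is equisatisfiable with, but not logically equivalent to, the original formula.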
Dominated Convergence Theorem

The Basic Idea

Given a sequence of functions $\{f_n\}$ which converges pointwise to some limit function $f$, it is not always true that $$\int \lim_{n\to\infty}f_n = \lim_{n\to\infty}\int f_n.$$ (Take this sequence for example.) The Monotone Convergence Theorem (MCT), the Dominated Convergence Theorem (DCT), and Fatou's Lemma are three major results in the theory of Lebesgue integration which answer the question "When do $\displaystyle{ \lim_{n\to\infty} }$ and $\int$ commute?" The MCT and DCT tell us that if you place certain restrictions on both the $f_n$ and $f$, then you can go ahead and interchange the limit and integral. Fatou's Lemma, on the other hand, says "Here's the best you can do if you don't make any extra assumptions about the functions." Today we're discussing the Dominated Convergence Theorem. First we'll look at a counterexample to see why "domination" is a necessary condition, and we'll close by using the DCT to compute $$\lim_{n\to\infty}\int_{\mathbb{R}}\frac{n\sin(x/n)}{x(x^2+1)}\,dx.$$

From English to Math

The Dominated Convergence Theorem: If $\{f_n:\mathbb{R}\to\mathbb{R}\}$ is a sequence of measurable functions which converge pointwise almost everywhere to $f$, and if there exists an integrable function $g$ such that $|f_n(x)|\leq g(x)$ for all $n$ and for all $x$, then $f$ is integrable and $$\int_{\mathbb{R}}f=\lim_{n\to\infty}\int_\mathbb{R} f_n.$$

Why is domination necessary?

Let's see where things can go wrong if a sequence $\{f_n\}$ is not dominated by any function. Take, for instance, the sequence of functions $\{f_n\}$ where for each $n\in\mathbb{N}$ we define $$f_n(x)=n\chi_{(0,1/n]}(x)=\begin{cases} n, &\text{if $0< x\leq \frac{1}{n} $}\\0, &\text{else.}\end{cases}$$ Then $f_n\to 0$ pointwise, as is evident by looking at graphs of the first few $f_n$. (We've also discussed this sequence before.) But notice there is no integrable function $g$ such that $|f_n(x)|\leq g(x)$ for $x\in(0,1]$ and for ALL $n$. This is because for large values of $n$, the height of $f_n$ is tending towards infinity; in other words, the $f_n$ are unbounded. Since the area of each rectangle is 1, we see that the integral and the limit do not commute in this example. Explicitly: $$1=\lim_{n\to\infty}\int_0^1 f_n(x)\;dx\quad \neq\quad \int_0^1\lim_{n\to\infty}f_n(x)\;dx=0$$ where we have used the fact that $\displaystyle{\lim_{n\to\infty}f_n(x)=0}$ and that the Riemann and Lebesgue integrals coincide in this case.

An example using the DCT

Compute the following integral: $$\lim_{n\to\infty}\int_{\mathbb{R}} \frac{n\sin(x/n)}{x(x^2+1)}\;dx.$$ Solution. Let $x\in\mathbb{R}$ and begin by defining $$ f_n(x)=\frac{n\sin(x/n)}{x(x^2+1)}\qquad \text{for each $n\in\mathbb{N}$.}$$ Observe that each $f_n$ is measurable* and the sequence $\{f_n\}$ converges pointwise to $\frac{1}{1+x^2}$ for all $x\neq 0$: \begin{align*} \lim_{n\to\infty} f_n(x) &=\lim_{n\to\infty} \left(\frac{\sin(x/n)}{x/n}\right)\frac{1}{1+x^2}\\ &=\frac{1}{1+x^2} \end{align*} since $\displaystyle{ \lim_{n\to\infty} \frac{\sin(x/n)}{x/n} }=1$ for a fixed $x$. From this we also see that $g(x)=\frac{1}{1+x^2}$ works as a dominating function. Indeed, $g$ is integrable on $\mathbb{R}$ (as we'll verify below) and \begin{align*} |f_n(x)|&=\bigg|\frac{\sin(x/n)}{x/n}\cdot\frac{1}{1+x^2} \bigg|\\ &=\frac{|\sin(x/n)|}{|x/n|}\cdot\frac{1}{1+x^2}\\ &\leq \frac{1}{1+x^2}\\ &=g(x) \end{align*} since $|\sin(x)|\leq |x|$ for all $x$.
Thus we apply the DCT to conclude \begin{align*} \lim_{n\to\infty}\int_{\mathbb{R}} \frac{n\sin(x/n)}{x(x^2+1)}\;dx &= \lim_{n\to\infty}\int_{-\infty}^{\infty} \frac{n\sin(x/n)}{x(x^2+1)}\;dx\\[4pt] &=\int_{-\infty}^{\infty}\frac{1}{1+x^2}\;dx\\[4pt] &=\tan^{-1}(x)\bigg|_{-\infty}^{\infty}\\[4pt] &=\pi. \end{align*} Footnote: *If $g$ is a continuous function and $f$ is a measurable function, then their composition $g\circ f$ is measurable. And if $g$ and $f$ are both measurable, then so is $fg$.
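A numerical cross-check of the result (my own sketch): it rewrites the integrand via `numpy.sinc` to sidestep the removable singularity at $x=0$ and evaluates the integral with `scipy.integrate.quad` over infinite limits.

```python
import numpy as np
from scipy.integrate import quad

def f_n(x, n):
    # n*sin(x/n) / (x*(x^2+1)) = sinc(x/(n*pi)) / (1+x^2), since np.sinc(t) = sin(pi t)/(pi t)
    return np.sinc(x / (n * np.pi)) / (1.0 + x**2)

for n in (1, 10, 100, 1000):
    value, _ = quad(f_n, -np.inf, np.inf, args=(n,))
    print(n, value)        # the values approach pi = 3.14159... as n grows
```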
The Annals of Statistics, Volume 25, Number 6 (1997), 2512-2546.

Optimal pointwise adaptive methods in nonparametric estimation

Abstract

The problem of optimal adaptive estimation of a function at a given point from noisy data is considered. Two procedures are proved to be asymptotically optimal for different settings. First we study the problem of bandwidth selection for nonparametric pointwise kernel estimation with a given kernel. We propose a bandwidth selection procedure and prove its optimality in the asymptotic sense. Moreover, this optimality holds not only among kernel estimators with a variable bandwidth: the resulting estimator is asymptotically optimal among all feasible estimators. The important feature of this procedure is that it is fully adaptive and it "works" for a very wide class of functions obeying a mild regularity restriction. With it the attainable accuracy of estimation depends on the function itself and is expressed in terms of the "ideal adaptive bandwidth" corresponding to this function and a given kernel. The second procedure can be considered as a specialization of the first one under the qualitative assumption that the function to be estimated belongs to some Hölder class $\Sigma (\beta, L)$ with unknown parameters $\beta, L$. This assumption allows us to choose a family of kernels in an optimal way and the resulting procedure appears to be asymptotically optimal in the adaptive sense in any range of adaptation with $\beta \leq 2$.

Article information

Source: Ann. Statist., Volume 25, Number 6 (1997), 2512-2546.
First available in Project Euclid: 30 August 2002.
Permanent link: https://projecteuclid.org/euclid.aos/1030741083
Digital Object Identifier: doi:10.1214/aos/1030741083
Mathematical Reviews number (MathSciNet): MR1604408
Zentralblatt MATH identifier: 0894.62041

Citation: Lepski, O. V.; Spokoiny, V. G. Optimal pointwise adaptive methods in nonparametric estimation. Ann. Statist. 25 (1997), no. 6, 2512--2546. doi:10.1214/aos/1030741083.
Ineffable

Ineffable cardinals were introduced by Jensen and Kunen in [1] and arose out of their study of $\diamondsuit$ principles. An uncountable regular cardinal $\kappa$ is ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ is stationary. Equivalently, an uncountable regular $\kappa$ is ineffable if and only if for every function $F:[\kappa]^2\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^2$ is constant [1]. This second characterization strengthens a characterization of weakly compact cardinals, which requires only that there exist such an $H$ of size $\kappa$. If $\kappa$ is ineffable, then $\diamondsuit_\kappa$ holds and there cannot be a slim $\kappa$-Kurepa tree [1]. A $\kappa$-Kurepa tree is a tree of height $\kappa$ having levels of size less than $\kappa$ and at least $\kappa^+$-many branches. A $\kappa$-Kurepa tree is slim if every infinite level $\alpha$ has size at most $|\alpha|$.

Ineffable cardinals and the constructible universe

Ineffable cardinals are downward absolute to $L$. In $L$, an inaccessible cardinal $\kappa$ is ineffable if and only if there are no slim $\kappa$-Kurepa trees. Thus, for inaccessible cardinals, in $L$, ineffability is completely characterized using slim Kurepa trees. [1]

Ramsey cardinals are stationary limits of completely ineffable cardinals and they are weakly ineffable, but the least Ramsey cardinal is not ineffable. Ineffable Ramsey cardinals are limits of Ramsey cardinals, because ineffable cardinals are $Π^1_2$-indescribable and being Ramsey is a $Π^1_2$-statement. The least strongly Ramsey cardinal also is not ineffable, but super weakly Ramsey cardinals are ineffable. $1$-iterable (=weakly Ramsey) cardinals are weakly ineffable and stationary limits of completely ineffable cardinals. The least $1$-iterable cardinal is not ineffable. [2, 4]

Relations with other large cardinals

Measurable cardinals are ineffable and stationary limits of ineffable cardinals. $\omega$-Erdős cardinals are stationary limits of ineffable cardinals, but not ineffable since they are $\Pi_1^1$-describable. [3] Ineffable cardinals are $\Pi^1_2$-indescribable [1]. Ineffable cardinals are limits of totally indescribable cardinals. [1] ([5] for proof) For a cardinal $κ=κ^{<κ}$, $κ$ is ineffable iff it is normal 0-Ramsey. [6]

Weakly ineffable cardinal

Weakly ineffable cardinals (also called almost ineffable) were introduced by Jensen and Kunen in [1] as a weakening of ineffable cardinals. An uncountable regular cardinal $\kappa$ is weakly ineffable if for every sequence $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ there is $A\subseteq\kappa$ such that the set $S=\{\alpha<\kappa\mid A\cap \alpha=A_\alpha\}$ has size $\kappa$. If $\kappa$ is weakly ineffable, then $\diamondsuit_\kappa$ holds. Weakly ineffable cardinals are downward absolute to $L$. [1] Weakly ineffable cardinals are $\Pi_1^1$-indescribable. [1] Ineffable cardinals are limits of weakly ineffable cardinals.
Weakly ineffable cardinals are limits of totally indescribable cardinals. [1] ([5] for proof) For a cardinal $κ=κ^{<κ}$, $κ$ is weakly ineffable iff it is genuine 0-Ramsey. [6]

Subtle cardinal

Subtle cardinals were introduced by Jensen and Kunen in [1] as a weakening of weakly ineffable cardinals. An uncountable regular cardinal $\kappa$ is subtle if for every $\langle A_\alpha\mid \alpha<\kappa\rangle$ with $A_\alpha\subseteq \alpha$ and every closed unbounded $C\subseteq\kappa$ there are $\alpha<\beta$ in $C$ such that $A_\beta\cap\alpha=A_\alpha$. If $\kappa$ is subtle, then $\diamondsuit_\kappa$ holds.

Subtle cardinals are downward absolute to $L$. [1] Weakly ineffable cardinals are limits of subtle cardinals. [1] Subtle cardinals are stationary limits of totally indescribable cardinals. [1, 7] The least subtle cardinal is not weakly compact as it is $\Pi_1^1$-describable. $\alpha$-Erdős cardinals are subtle. [1]

If $δ$ is a subtle cardinal, then:
- the set of cardinals $κ$ below $δ$ that are strongly uplifting in $V_δ$ is stationary; [8]
- for every class $\mathcal{A}$, in every club $B ⊆ δ$ there is $κ$ such that $\langle V_δ, \mathcal{A} ∩ V_δ \rangle \models \text{“$κ$ is $\mathcal{A}$-shrewd.”}$ [9] (The set of cardinals $κ$ below $δ$ that are $\mathcal{A}$-shrewd in $V_δ$ is stationary.)
- there is an $\eta$-shrewd cardinal below $δ$ for all $\eta < δ$. [9]

Ethereal cardinal

Ethereal cardinals were introduced by Ketonen in [10] (the information in this section is from there).

Definition: A regular cardinal $κ$ is called ethereal if for every club $C$ in $κ$ and sequence $(S_α|α < κ)$ of sets such that for $α < κ$, $|S_α| = |α|$ and $S_α ⊆ α$, there are elements $α, β ∈ C$ such that $α < β$ and $|S_α ∩ S_β| = |α|$. I.e., symbolically(?): $$κ \text{ – ethereal} \overset{\text{def}}{⟺} \left( κ \text{ – regular} ∧ \left( \forall_{C \text{ – club in $κ$}} \forall_{S : κ → \mathcal{P}(κ)} \left( \forall_{α < κ} |S_α| = |α| ∧ S_α ⊆ α \right) ⟹ \left( \exists_{α, β ∈ C} α < β ∧ |S_α ∩ S_β| = |α| \right) \right) \right)$$

Properties:
- Every subtle cardinal is obviously ethereal.
- Every ethereal cardinal is weakly inaccessible.
- A strongly inaccessible cardinal is ethereal if and only if it is subtle.
- If $κ$ is ethereal and $2^\underset{\smile}{κ} = κ$, then $\diamond(κ)$ (diamond principle) holds (where $2^\underset{\smile}{κ} = \bigcup \{ 2^α | α < κ \}$ is the weak power of $κ$).

To be expanded.

$n$-ineffable cardinal

The $n$-ineffable cardinals for $2\leq n<\omega$ were introduced by Baumgartner in [11] as a strengthening of ineffable cardinals. A cardinal is $n$-ineffable if for every function $F:[\kappa]^n\rightarrow 2$ there is a stationary $H\subseteq\kappa$ such that $F\upharpoonright [H]^n$ is constant. $2$-ineffable cardinals are exactly the ineffable cardinals. An $(n+1)$-ineffable cardinal is a stationary limit of $n$-ineffable cardinals. [11] A cardinal $\kappa$ is totally ineffable if it is $n$-ineffable for every $n$. A $1$-iterable cardinal is a stationary limit of totally ineffable cardinals. (This follows from material in [4].)

Helix

(The information in this subsection comes from [7] unless noted otherwise.) For $k \geq 1$ we define:
- $\mathcal{P}(x)$ is the powerset (set of all subsets) of $x$.
- $\mathcal{P}_k(x)$ is the set of all subsets of $x$ with exactly $k$ elements.
- $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$ is regressive iff for all $A \in \mathcal{P}_k(\lambda)$, we have $f(A) \subseteq \min(A)$.
- $E$ is $f$-homogeneous iff $E \subseteq \lambda$ and for all $B,C \in \mathcal{P}_k(E)$, we have $f(B) \cap \min(B \cup C) = f(C) \cap \min(B \cup C)$.
- $\lambda$ is $k$-subtle iff $\lambda$ is a limit ordinal and for all clubs $C \subseteq \lambda$ and regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogeneous $A \in \mathcal{P}_{k+1}(C)$.
- $\lambda$ is $k$-almost ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogeneous $A \subseteq \lambda$ of cardinality $\lambda$.
- $\lambda$ is $k$-ineffable iff $\lambda$ is a limit ordinal and for all regressive $f:\mathcal{P}_k(\lambda) \to \mathcal{P}(\lambda)$, there exists an $f$-homogeneous stationary $A \subseteq \lambda$.

$0$-subtle, $0$-almost ineffable and $0$-ineffable cardinals can be defined as "uncountable regular cardinals" because for $k \geq 1$ all three properties imply being uncountable regular cardinals. For $k \geq 1$, if $\kappa$ is a $k$-ineffable cardinal, then $\kappa$ is $k$-almost ineffable and the set of $k$-almost ineffable cardinals is stationary in $\kappa$. For $k \geq 1$, if $\kappa$ is a $k$-almost ineffable cardinal, then $\kappa$ is $k$-subtle and the set of $k$-subtle cardinals is stationary in $\kappa$. For $k \geq 1$, if $\kappa$ is a $k$-subtle cardinal, then the set of $(k-1)$-ineffable cardinals is stationary in $\kappa$. For $k \geq n \geq 0$, all $k$-ineffable cardinals are $n$-ineffable, all $k$-almost ineffable cardinals are $n$-almost ineffable and all $k$-subtle cardinals are $n$-subtle.

Completely ineffable cardinal

Completely ineffable cardinals were introduced in [5] as a strengthening of ineffable cardinals. Define that a collection $R\subseteq P(\kappa)$ is a stationary class if
- $R\neq\emptyset$,
- for all $A\in R$, $A$ is stationary in $\kappa$,
- if $A\in R$ and $B\supseteq A$, then $B\in R$.
A cardinal $\kappa$ is completely ineffable if there is a stationary class $R$ such that for every $A\in R$ and $F:[A]^2\to2$, there is $H\in R$ such that $F\upharpoonright [H]^2$ is constant.

Relations:
- Completely ineffable cardinals are downward absolute to $L$. [5]
- Completely ineffable cardinals are limits of ineffable cardinals. [5]
- There are stationarily many completely ineffable, greatly Erdős cardinals below any Ramsey cardinal. [13]
- The following are equivalent: [6] (i) $κ$ is completely ineffable; (ii) $κ$ is coherent $<ω$-Ramsey; (iii) $κ$ has the $ω$-filter property.
- Every completely ineffable cardinal is a stationary limit of $<ω$-Ramseys. [6]
- Completely Ramsey cardinals and $ω$-Ramsey cardinals are completely ineffable. [6]
- $ω$-Ramsey cardinals are limits of completely ineffable cardinals. [2]

References

[1] Jensen, Ronald and Kunen, Kenneth. Some combinatorial properties of $L$ and $V$. Unpublished, 1969.
[2] Holy, Peter and Schlicht, Philipp. A hierarchy of Ramsey-like cardinals. Fundamenta Mathematicae 242:49-74, 2018.
[3] Jech, Thomas J. Set Theory. Third millennium edition, revised and expanded. Springer-Verlag, Berlin, 2003.
[4] Gitman, Victoria. Ramsey-like cardinals. The Journal of Symbolic Logic 76(2):519-540, 2011.
[5] Abramson, Fred; Harrington, Leo; Kleinberg, Eugene; and Zwicker, William. Flipping properties: a unifying thread in the theory of large cardinals. Ann. Math. Logic 12(1):25-58, 1977.
[6] Nielsen, Dan Saattrup and Welch, Philip. Games and Ramsey-like cardinals. 2018.
[7] Friedman, Harvey M.
Subtle cardinals and linear orderings. 1998.
[8] Hamkins, Joel David and Johnstone, Thomas A. Strongly uplifting cardinals and the boldface resurrection axioms. 2014.
[9] Rathjen, Michael. The art of ordinal analysis. 2006.
[10] Ketonen, Jussi. Some combinatorial principles. Trans. Amer. Math. Soc. 188:387-394, 1974.
[11] Baumgartner, James. Ineffability properties of cardinals. I. In: Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday), Vol. I, pp. 109-130. Colloq. Math. Soc. János Bolyai, Vol. 10, Amsterdam, 1975.
[12] Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
[13] Sharpe, Ian and Welch, Philip. Greatly Erdős cardinals with some generalizations to the Chang and Ramsey properties. Ann. Pure Appl. Logic 162(11):863-902, 2011.
Settling is the process by which particulates settle to the bottom of a liquid and form a sediment. Settling velocity (\(v_{p}\), m/s) describes the rate at which a particle moves through the liquid, either due to gravity or due to centrifugal force. For a small spherical particle settling under gravity (Stokes' law):

\(v_{p}=\frac{2}{9}\frac{(\rho_{p}-\rho _{f})}{\mu }gR^{2}\)

\(g\) - gravitational acceleration (m/s\(^2\))
\(R\) - radius of the spherical particle (m)
\(\rho _{p}\) - mass density of the particles (kg/m\(^3\))
\(\rho _{f}\) - mass density of the fluid (kg/m\(^3\))
\(\mu\) - dynamic viscosity (Pa s)

The settling velocity increases with a higher density difference between the particles and the water, a larger particle size (size times density difference determines the buoyant mass) and a lower water viscosity.

Execution

Settling velocity can be calculated based on the parameters of the medium used for the experimental setup and the physical characteristics of the particles.

Contact: Frank von der Kammer
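A minimal numerical sketch of the formula above; the parameter values (a 10 µm quartz sphere in water) are illustrative assumptions, not values from the text.

```python
# Stokes settling velocity v_p = (2/9) * (rho_p - rho_f) / mu * g * R^2
g = 9.81         # gravitational acceleration, m/s^2
rho_p = 2650.0   # particle density (quartz), kg/m^3  -- assumed example value
rho_f = 1000.0   # fluid density (water), kg/m^3      -- assumed example value
mu = 1.0e-3      # dynamic viscosity of water, Pa*s   -- assumed example value
R = 10e-6        # particle radius, m                 -- assumed example value

v_p = (2.0 / 9.0) * (rho_p - rho_f) / mu * g * R**2
print(f"settling velocity: {v_p:.2e} m/s")   # roughly 3.6e-4 m/s
```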
I'd like to draw a closed plot path filled with a pattern and surrounded by arrows. When I use @Jake's solution to add arrows (Decorate a path with little arrows parallel to it), the pattern made with the method proposed by @Domenico Camasta vanishes (Modified pattern does not see the pattern color option). The command postaction=decorate causes the pattern filling the area not to be created. What can I do to avoid executing \addplot twice for the same path in the code below? Why does removing enlargelimits=true affect the distance between the arrows and the plot path? How can I trim both axes at the origin of the coordinate system? The code is the following:

\documentclass{standalone}
\usepackage[utf8]{inputenc}
\usepackage{pgfplots}
\usepackage{tikz}
\pgfplotsset{compat=newest}
\usetikzlibrary{decorations.markings}
\usetikzlibrary{patterns}
\usepgfplotslibrary{fillbetween}
\definecolor{mygreen}{rgb}{0.00000,0.60000,0.00000}
\makeatletter
\pgfdeclarepatternformonly[\LineSpace]{my north east lines}%
  {\pgfqpoint{-1pt}{-1pt}}%
  {\pgfqpoint{\LineSpace}{\LineSpace}}%
  {\pgfqpoint{\LineSpace}{\LineSpace}}%
  {\pgfsetcolor{\tikz@pattern@color}
   \pgfsetlinewidth{0.4pt}
   \pgfpathmoveto{\pgfqpoint{0pt}{0pt}}
   \pgfpathlineto{\pgfqpoint{\LineSpace + 0.1pt}{\LineSpace + 0.1pt}}
   \pgfusepath{stroke}
  }
\makeatother
\newdimen\LineSpace
\tikzset{
  line space/.code={\LineSpace=#1},
  line space=9pt
}
\begin{document}
\begin{tikzpicture}
\begin{axis}[%
  width=10cm,height=7cm,
  axis on top=true,
  axis x line=middle,
  axis y line=middle,
  xtick=\empty,ytick=\empty,
  extra x ticks={0,0.1},
  extra x tick labels={$x_{min}$,$x_{max}$},
  extra y ticks={600},
  extra y tick labels={$F_{max}$},
  extra y tick style={yticklabel style={xshift=0.8ex, anchor=west}},
  xlabel={piston position},
  ylabel={force},
  enlargelimits=true]
\addplot[
  name path=A,color=blue,solid,forget plot,smooth,
  decoration={
    markings,
    mark=between positions 0.1 and 1 step 3em with {\draw [-latex] (-2mm,0) -- (2mm,0);},
    raise=0.6cm
  },
  postaction=decorate]
table[row sep=crcr]{%
  0     60\\
  0.005 450\\
  0.025 550\\
  0.1   600\\
} node [above=0.8ex,pos=0.96] {$F_{\rightarrow}(x)$};
\addplot[
  name path=B,color=mygreen,solid,forget plot,smooth,
  decoration={
    markings,
    mark=between positions 0.1 and 1 step 3em with {\draw [-latex] (-2mm,0) -- (2mm,0);},
    raise=0.6cm
  },
  postaction=decorate]
table[row sep=crcr]{%
  0.1   600\\
  0.095 200\\
  0.065 100\\
  0     60\\
} node [below=1.0ex,pos=0.93] {$F_{\leftarrow}(x)$};
\addplot fill between[of=A and B, split,
  every segment no 1/.style={pattern=my north east lines,pattern color=gray}
];
\node[fill=white] at (0.05,350) {\textcolor{gray}{dissipated energy}};
\end{axis}
\end{tikzpicture}
\end{document}
If the particle density of suspended particulate matter (SPM) for a given size class, \(\rho _{\text{SPM},n}\) (kg/m\(^3\)) (which includes the inflow of "new" SPM from erosion or other tributaries), is greater than the density of water \(\rho _{\text{w}}\) (kg/m\(^3\)), then the settling of that sediment size class is triggered. The settling velocity \(w_{\text{SPM},n}\) for an individual SPM size class is calculated as:

\(w_{\text{SPM},n} = \begin{cases} \frac{\eta}{d_n} d_{*,n}^3 \left( 38.1 + 0.93d_{*,n}^{12/7} \right)^{-7/8} & \mbox{if } \rho_{\text{SPM},n} > \rho_{\text{w}} \\ 0 & \mbox{if } \rho_{\text{SPM},n} \leq \rho_{\text{w}} \end{cases}\)

where \(d_{*,n} = (\Delta g/\eta^2)^{1/3} d_n\) is an effective (dimensionless) particle diameter, \(d_n\) is the particle diameter of the size class (m), \(g\) is the gravitational acceleration (m/s\(^2\)), \(\eta\) is the kinematic viscosity of water (m\(^2\)/s) and \(\Delta = (\rho_{\text{SPM},n} - \rho_{\text{w}})/\rho_{\text{w}}\) is the relative density difference.

Execution

The calculated settling velocity is used to compute a settling rate constant \(k_{\text{settle},n} = w_{\text{SPM},n} / D \) (/s), where \(D\) is the depth of the water column (m). On a timestep of length \(\delta t\), the mass of SPM in the size class lost to the bed sediment due to settling (kg) is thus \(\mathbf{j}_{\text{SPM,dep},n} = \mathbf{m}_{\text{SPM},n} k_{\text{settle},n} \delta t\), where \(\mathbf{m}_{\text{SPM},n}\) (kg) is the mass of suspended sediment in the size class within the river reach being simulated. The deposited mass per unit area of the bed sediment (kg/m\(^2\)) is given by \(\mathbf{M}_{\text{dep},n} = \mathbf{j}_{\text{SPM,dep},n} / (l f_{\text{m}} W)\), where \(l\) is the linear reach length (m), \(W\) is the reach width (m) and \(f_{\text{m}}\) is a factor to account for meandering.

Read more

Consult the NanoFASE Library to see abstracts of these deliverable reports.

Read also

Fentie, B., Yu, B., & Rose, C. W. (2004). Comparison of Seven Particle Settling Velocity Formulae for Erosion Modelling. Paper presented at the 13th International Soil Conservation Organisation Conference, Brisbane. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.506.5319&rep=rep1&type=pdf
Zhiyao, S., Tingting, W., Fumin, X., & Ruijie, L. (2008). A simple formula for predicting settling velocity of sediment particles. Water Science and Engineering, 1(1), 37-43.

Contact: Sam Harrison
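A hedged sketch of the size-class settling calculation described above. All parameter values are illustrative assumptions (not taken from the text), and \(\Delta\) is taken as the relative density difference \((\rho_{\text{SPM},n}-\rho_{\text{w}})/\rho_{\text{w}}\).

```python
# Settling of one SPM size class: velocity, rate constant and deposited mass.
g = 9.81           # gravitational acceleration, m/s^2
rho_w = 1000.0     # water density, kg/m^3
rho_spm = 2650.0   # SPM density for this size class, kg/m^3 (assumed)
eta = 1.0e-6       # kinematic viscosity of water, m^2/s (assumed)
d = 50e-6          # particle diameter of the size class, m (assumed)
D = 2.0            # water column depth, m (assumed)
dt = 3600.0        # timestep length, s (assumed)
m_spm = 100.0      # suspended mass in the size class, kg (assumed)

if rho_spm > rho_w:
    delta = (rho_spm - rho_w) / rho_w                   # relative density difference
    d_star = (delta * g / eta**2) ** (1.0 / 3.0) * d    # effective (dimensionless) diameter
    w = (eta / d) * d_star**3 * (38.1 + 0.93 * d_star**(12.0 / 7.0)) ** (-7.0 / 8.0)
else:
    w = 0.0                                             # no settling if not denser than water

k_settle = w / D               # settling rate constant, 1/s
j_dep = m_spm * k_settle * dt  # mass lost to the bed sediment this timestep, kg
print(w, k_settle, j_dep)
```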
Let $f : [a,+\infty[ \rightarrow \mathbb{R}_+$ be a positive function, locally Riemann-integrable. It is assumed that the improper integral $\int_{a}^{+\infty} f(x) dx$ is convergent. For all $n \in \mathbb{N}$, set $f_n = 1_{[a,n]} f$. Using the monotone convergence theorem, we would like to prove that $f$ is Lebesgue-integrable on $[a,+\infty[$ and that $\int_{[a,+\infty[} f d\lambda(x) = \int_a^{+\infty} f(x) dx$. I can see that $\lim_{n \rightarrow +\infty} f_n = f$. Then, $\int_{[a,+\infty[} f d \lambda = \lim_{n \rightarrow +\infty} \int_{[a,n]} f_n d\lambda$. To me, it seems obvious that $\int_{[a,n]} |f_n| d \lambda = \int |1_{[a,n]} f| d \lambda < +\infty$, but I can't prove it. Could someone help me?
$L$ is a linear operator acting on a Hilbert space $V$ of dimension $n$, $L: V \to V$. The trace of a linear operator is defined as the sum of the diagonal entries of any matrix representation in the same input and output basis of $V$. But if $L$ is a linear operator acting on $V \otimes V$ and I want to take the partial trace over the first/second system, it makes sense to me when the operator is expressed in Dirac notation, e.g. a linear operator acting on $H \otimes H$, where $H$ is a 2-dimensional Hilbert space, in Dirac notation is $$L_{AB} = |01\rangle \langle 00 | +|00\rangle \langle 10 | $$ $$tr_A(L_{AB})=|1\rangle \langle 0 |$$ $$tr_B(L_{AB})=|0\rangle \langle 1 |$$ Here $\{|0\rangle , |1\rangle \}$ is an orthonormal basis for $H$. But how is the partial trace found and defined in terms of the matrix representation of the linear operator? Do the input and output basis have to be the same to define the partial trace, similar to the definition of the trace?

Let $H_A \otimes H_B$ be your Hilbert space, and $O$ be an operator acting on this composite space. Then $O$ can be written as $$ O = \sum_{i,j} c_{ij} M_i \otimes N_j$$ where the $M_i$'s and $N_j$'s act on $H_A$ and $H_B$ respectively. Then the partial trace over $H_A$ is defined as $$tr_{H_A}(O) = \sum_{i,j} c_{ij} tr(M_i) N_j ,$$ and similarly for $H_B$. To take the partial trace you need to build the sum over the matrix elements w.r.t. the same input and output basis, as you probably already did to calculate the partial traces you gave. In Dirac notation this is often written as: $$ tr_A(L_{AB}) =\sum_i \langle i|_A L_{AB} |i\rangle_A=\langle0|0\rangle\langle 0|0\rangle (|1\rangle\langle0|)_B+\langle1|0\rangle\langle 1|1\rangle \left(|0\rangle\langle0|\right)_B\\ =(|1\rangle\langle0|)_B $$ What is implicit in this notation is that you leave the part of the operator which acts on the space $B$ untouched. In principle what you do is multiply the square matrix by rectangular matrices to obtain a smaller matrix: $$ tr_A(L_{AB})=\sum_i [(\langle i|\otimes id)L_{AB}(|i\rangle\otimes id)] $$ If you want to think of matrices, just represent the tensor products via Kronecker products: $$ tr_A(L_{AB})= \begin{pmatrix}1&0&0&0\\0&1&0&0\end{pmatrix}\cdot \begin{pmatrix}0&0&1&0\\1&0&0&0\\0&0&0&0\\0&0&0&0\end{pmatrix}\cdot \begin{pmatrix}1&0\\0&1\\0&0\\0&0\end{pmatrix}=\begin{pmatrix}0&0\\1&0\end{pmatrix} $$ (I just wrote the surviving term (where $i=0$).)
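Here is a small numerical sketch of the matrix-representation recipe above (my own illustration). It assumes the standard ordered product basis $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ (row-major Kronecker convention), reshapes the $4\times 4$ matrix into a 4-index tensor, and contracts the appropriate pair of indices; it reproduces the two partial traces quoted in the question.

```python
import numpy as np

dA, dB = 2, 2
# L_AB = |01><00| + |00><10| in the ordered basis |00>, |01>, |10>, |11>
L = np.zeros((dA * dB, dA * dB))
L[1, 0] = 1.0   # |01><00|
L[0, 2] = 1.0   # |00><10|

T = L.reshape(dA, dB, dA, dB)       # T[a, b, a', b'] = <ab| L |a'b'>
tr_A = np.einsum('abad->bd', T)     # contract the two A indices
tr_B = np.einsum('abcb->ac', T)     # contract the two B indices

print(tr_A)   # [[0. 0.] [1. 0.]]  =  |1><0|
print(tr_B)   # [[0. 1.] [0. 0.]]  =  |0><1|
```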
I wanted to better understand DFAs. I wanted to build upon a previous question: Creating a DFA that only accepts number of a's that are multiples of 3. But I wanted to go a bit further. Is there any way we can have a DFA that accepts number of a's that are multiples of 3 but does NOT have the sub...

Let $X$ be a measurable space and $Y$ a topological space. I am trying to show that if $f_n : X \to Y$ is measurable for each $n$, and the pointwise limit of $\{f_n\}$ exists, then $f(x) = \lim_{n \to \infty} f_n(x)$ is a measurable function. Let $V$ be some open set in $Y$. I was able to show th...

I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...

Consider a non-UFD that only has 2 units ($-1,1$) and the minimum difference between 2 elements is $1$. Also there are only a finite number of elements for any given fixed norm. (Maybe that follows from the other 2 conditions?) I wonder about counting the irreducible elements bounded by a lower...

How would you make a regex for this? L = {w $\in$ {0, 1}* : w is 0-alternating}, where 0-alternating means either all the symbols in odd positions within w are 0's, or all the symbols in even positions within w are 0's, or both. I want to construct an NFA from this, but I'm struggling with the regex part
Please assume that this graph is a highly magnified section of the derivative of some function, say $F(x)$. Let's denote the derivative by $f(x)$. Let's denote the width of a sample by $h$ where $$h\rightarrow0.$$ Now, for finding the area under the curve between the bounds $a ~\& ~b $ we can a...

@Ultradark You can try doing a finite difference to get rid of the sum and then compare term by term. Otherwise I am terrible at anything to do with primes; I don't know the identities of $\pi(n)$ well

@Silent No, take for example the prime 3. 2 is not a residue mod 3, so there is no $x\in\mathbb{Z}$ such that $x^2-2\equiv 0$ mod $3$. However, you have two cases to consider. The first where $\binom{2}{p}=-1$ and $\binom{3}{p}=-1$ (In which case what does $\binom{6}{p}$ equal?) and the case where one or the other of $\binom{2}{p}$ and $\binom{3}{p}$ equals 1. Also, probably something useful for congruence, if you didn't already know: If $a_1\equiv b_1 \pmod p$ and $a_2\equiv b_2 \pmod p$, then $a_1a_2\equiv b_1b_2 \pmod p$

Is there any book or article that explains the motivations of the definitions of group, ring, field, ideal etc. of abstract algebra and/or gives a geometric or visual representation of Galois theory?

Jacques Charles François Sturm ForMemRS (29 September 1803 – 15 December 1855) was a French mathematician. == Life and work == Sturm was born in Geneva (then part of France) in 1803. The family of his father, Jean-Henri Sturm, had emigrated from Strasbourg around 1760 - about 50 years before Charles-François's birth. His mother's name was Jeanne-Louise-Henriette Gremay. In 1818, he started to follow the lectures of the academy of Geneva. In 1819, the death of his father forced Sturm to give lessons to children of the rich in order to support his own family. In 1823, he became tutor to the son...

I spent my career working with tensors. You have to be careful about defining multilinearity, domain, range, etc. Typically, tensors of type $(k,\ell)$ involve a fixed vector space, not so many letters varying. UGA definitely grants a number of masters to people wanting only that (and sometimes admitted only for that). You people at fancy places think that every university is like Chicago, MIT, and Princeton.

hi there, I need to linearize a nonlinear system about a fixed point. I've computed the Jacobian matrix but one of the elements of this matrix is undefined at the fixed point. What is a better approach to solve this issue? The element is (24*x_2 + 5*cos(x_1)*x_2)/abs(x_2). The fixed point is x_1=0, x_2=0

Consider the following integral: $\int \frac{1}{4}\cdot\frac{1}{1+(u/2)^2}\,dx$ Why does it matter if we put the constant 1/4 in front of the integral versus keeping it inside? The solution is $\frac{1}{2}\arctan(u/2)$. Or am I overlooking something? *it should be du instead of dx in the integral **and the solution is missing a constant C of course

Is there a standard way to divide radicals by polynomials? Stuff like $\frac{\sqrt a}{1 + b^2}$? My expression happens to be in a form I can normalize to that, just the radicand happens to be a lot more complicated. In my case, I'm trying to figure out how to best simplify $\frac{x}{\sqrt{1 + x^2}}$, and so far, I've gotten to $\frac{x \sqrt{1+x^2}}{1+x^2}$, and it's pretty obvious you can move the $x$ inside the radical. My hope is that I can somehow remove the polynomial from the bottom entirely, so I can then multiply the whole thing by a square root of another algebraic fraction.
Complicated, I know, but this is me trying to see if I can skip calculating Euclidean distance twice going from atan2 to something in terms of asin for a thing I'm working on. "... and it's pretty obvious you can move the $x$ inside the radical" To clarify this in advance, I didn't mean literally move it verbatim, but via $x \sqrt{y} = \text{sgn}(x) \sqrt{x^2 y}$. (Hopefully, this was obvious, but I don't want to confuse people about what I meant.) Ignore my question. I'm coming to the realization that it's just not working how I would've hoped, so I'll just go with what I had before.
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the build in PDF previewer in Firefox\documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d... @Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue. with the laptop I lose access to the company network and anythign I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first @yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself “for fun”. 
I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
The procedure you suggest really cannot get too far. Here is an abstract result explaining what I mean (see Aspects of incompleteness by Per Lindström): Given an r.e. sequence of r.e. theories that interpret arithmetic and whose union is consistent, there is a $\Pi^0_1$ formula that is not provable by any of them. That a sequence is r.e. means here that we have a way of listing all pairs $(\phi,n)$ with $\phi$ an axiom of the $n$-th theory. (The connection comes by realizing that $\mathsf{Con}(T)$ is $\Pi^0_1$ for r.e. $T$.) Arguing specifically within your setting: There is a natural way of iterating the consistency operator, to define a sequence $T_\alpha$, $\alpha<\omega_1^{CK}$. Although each $T_\alpha$ is strictly stronger than its predecessors, their union is still pretty weak: The theory $U_1=\mathsf{ZFC}+$"$\mathsf{ZFC}$ has an $\omega$-model" is stronger than all of them. This theory can also be iterated $\omega_1^{CK}$ times ($U_2$ would be $U_1+$"$U_1$ has an $\omega$-model", etc). All these theories are strictly weaker than $S_1=\mathsf{ZFC}+$"$\mathsf{ZFC}$ has a transitive set model", which, again, we can iterate $\omega_1^{CK}$ times. All these theories are strictly weaker than asserting that some $V_\alpha$ is a model of $\mathsf{ZFC}$ ($\alpha$ is what Joel Hamkins calls a worldly cardinal). Worldliness can also be iterated, and the resulting theories are strictly weaker than asserting that there is a weakly inaccessible cardinal. (Stopping at $\omega_1^{CK}$ is just an artifact of my wanting to keep all the theories recursive. I don't quite see how to make sense of iterating these theories beyond the recursive ordinals. What do we mean by $\mathsf{Con}(T)$ in such a case, since $T$ is no longer r.e.? Of course, we can make sense of this semantically, and just require the existence of models (after some pains formalizing this, as the relevant language in which the theories are formulated would evolve with the ordinal), but even then it seems to me we need to argue that the models are sufficiently correct to see the relevant ordinals, and this seems a distraction from the main goal.) Above, I mentioned that we iterate the consistency operator "naturally". By contrast, Solomon Feferman and others have investigated ways of iterating "consistency" operators, so that the claims above fail, with the "$\omega$-th step" being considerably stronger than just $\mathsf{ZFC}+\mathrm{Con}(\mathsf{ZFC})+\mathrm{Con}(\mathrm{Con}(\mathsf{ZFC}))+\dots$ There are also other alternatives, where we "add" to $\mathsf{ZFC}$ all true $\Pi^0_1$ statements, then to this "add" all true $\Pi^0_2$-statements, etc. This is typically formalized via reflection sequences, see here.
It is claimed that the credit default swap (CDS) spread should approximate the spread of the risky par bond yield or coupon rate over the riskless bond on the same entity. This comes about when we assume the discount factor $B(t)=e^{-rt}$ with constant riskless interest rate $r$, together with an infinitesimal coupon period. However, this is not true in general, and it is dubious under what conditions this holds even approximately. Let us examine this claim mathematically. In general, for par bond coupon rate $c$ and CDS spread $s$, $$c-s=\frac{\int_0^T P(t)\mathrm d(-B(t))}{\sum_{i=1}^n \delta_iB_iP_i} \ge 0,$$ where $P(t)$ is the survival probability and $B(t)$ the discount factor at time $t$, $t_i$ is the $i$'th coupon date, and $P_i=P(t_i)$ and $B_i=B(t_i)$. It is true that $B(t)\searrow 0 \Longrightarrow c-s\searrow 0$, for any given $P$. However, one can devise decreasing $P$ and $B$ such that $c-s$ is unbounded from above in the set of admissible $P$ and $B$ (decreasing positive functions on $[0,\infty)$ taking value $1$ at $t=0$) for any given $\{\delta_i>0\}_{i=1}^n$. Consider very small $P_i$. The riskless par bond coupon rate for the same coupon schedule is $$c_0=\frac{\int_0^T \mathrm d(-B(t))}{\sum_{i=1}^n \delta_iB_i}.$$ So $$c-s-c_0=\frac{\int_0^T P(t)\mathrm d(-B(t))}{\sum_{i=1}^n \delta_iB_iP_i}-\frac{\int_0^T \mathrm d(-B(t))}{\sum_{i=1}^n \delta_iB_i}.$$ We already know from the previous paragraph that the above expression is unbounded from above. To explore the range of the above expression, consider the following case: $$ P(t) = \begin{cases} 1, & t=0 \\ P_1, & t\in (0,t_1] \\ 0, & t\in (t_1,\infty) \end{cases}, \quad B(t) = \begin{cases} 1, & t=0 \\ B_1, & t\in (0,t_1] \\ 0, & t\in (t_1,\infty) \end{cases}. $$ Then $$c-s-c_0 = \frac{1}{\delta_1B_1}-1-\frac{1}{\delta_1B_1}=-1.$$ So $c-s-c_0$ ranges at least from $-1$ to positive infinity. Therefore, all we can say is that in a very low interest rate regime, like the recent one, $c$ is not much higher than $s$. Then what additional conditions are imposed, or what specific models are assumed, to justify the folklore statement that $c-s\approx c_0$? Can someone provide a mathematical derivation or supply some references to that effect? Many authors have cited Darrell Duffie's paper as the source of the claim. However, I do not see a mathematical derivation or justification there --- the aforementioned link is a draft version, not the published one, and perhaps therein lies the rub. One of the papers making such a claim is this. There Assumption 4 is the most pertinent, acknowledging the assumption of constant interest rate, presumably implying $B(t)=e^{-rt}$ with $r$ a positive constant, just as I have stated in the first paragraph. How good an approximation is this? This and this are two other papers amongst many making the claim. None of these authors are "dubious" folks. Can anyone elucidate?
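To get a rough numerical feel for the objects above, here is a small sketch of my own (not from any of the cited papers) that evaluates $c-s$ and $c_0$ from the two formulas under the simplest textbook assumptions: a flat riskless rate so that $B(t)=e^{-rt}$, a flat hazard rate so that $P(t)=e^{-\lambda t}$, and quarterly coupons. The function names and all parameter values are illustrative choices only.

import numpy as np

def c_minus_s(r, lam, T, freq=4):
    """(c - s) = int_0^T P(t) d(-B(t)) / sum_i delta_i B_i P_i in the flat-curve case."""
    t = np.arange(1, int(T * freq) + 1) / freq              # coupon dates t_i
    delta = np.full_like(t, 1.0 / freq)                     # accrual fractions delta_i
    numer = r * (1.0 - np.exp(-(r + lam) * T)) / (r + lam)  # closed form of the integral
    denom = np.sum(delta * np.exp(-(r + lam) * t))
    return numer / denom

def c_0(r, T, freq=4):
    """Riskless par coupon with the same schedule."""
    t = np.arange(1, int(T * freq) + 1) / freq
    delta = np.full_like(t, 1.0 / freq)
    return (1.0 - np.exp(-r * T)) / np.sum(delta * np.exp(-r * t))

for r in [0.001, 0.02, 0.08]:
    for lam in [0.01, 0.05]:
        lhs, rhs = c_minus_s(r, lam, 5.0), c_0(r, 5.0)
        print(f"r={r:.3f} lam={lam:.2f}:  c-s={lhs:.4f}  c_0={rhs:.4f}  gap={lhs - rhs:+.4f}")

In this flat-curve toy setting the gap $c-s-c_0$ comes out small for every rate and hazard combination tried, which suggests (but of course does not prove) that the folklore approximation leans on smooth, slowly varying discount and survival curves at least as much as on low rates.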
David Xiao dxiao@cs.princeton.edu

1 Introduction

LaTeX is the standard mathematical typesetting program. This document is for people who have never used LaTeX before and just want a quick crash course to get started. I encourage all students in mathematics and theoretical computer science to learn LaTeX so you can use it to typeset your problem sets; your TAs will love you if you do. For a more comprehensive introduction, check out http://ctan.tug.org/tex-archive/info/lshort/english/lshort.pdf.

3 Basic rules

Basic LaTeX is just text with typesetting commands. Typesetting commands are usually preceded by \, and any arguments are usually placed inside curly braces {}. LaTeX wraps text in adjacent lines as if they were part of the same paragraph. To start a new paragraph, insert an extra return:

Source:
This is one paragraph.

This is another.

Output:
This is one paragraph.

This is another.

4 Starting a new document

The most basic (empty) document has only three parts:

\documentclass{article}
\begin{document}
\end{document}

To start a new document, you can just take the LaTeX file for this document and delete the stuff between the begin and end document commands, leaving the \maketitle command (this prints out the title and your name). You will see before the \begin{document} that there are commands for the title and author of the document. Change the names between the curly braces to the name of the problem set and your name. (The other stuff that precedes \begin{document} is useful stuff that you don't need to worry about for now, just leave it as is.) Then, save the file under a different name, for example problem-set-1.tex.

5 Compiling

Suppose our file is named mylatexfile.tex. To compile it, simply invoke latex mylatexfile.tex in your Unix shell. This will compile the file, assuming there are no errors.[1] If there are errors, you can quit the compiler by hitting x and then enter. Unfortunately LaTeX compiler errors are very unhelpful in determining the nature of the problem, but they usually correctly point you to the line where the error occurred. Once it successfully compiles, you will get a file named mylatexfile.dvi. Typically we convert this either into a Postscript or PDF file, which may be done by the programs dvips mylatexfile.dvi and dvipdf mylatexfile.dvi. Sometimes dvips requires an extra option: -o output-file-name, specifying the output file.

[1] You may have to invoke latex twice if you are using labels and references. See Section 6.4.

6 Organization

One important thing to do is to organize your document well.

6.1 Sectioning

There are two sectioning commands that will be useful for you: \section{Name of section} and \subsection{Name of subsection}. Use these to separate different problems or subproblems in the assignment.

6.2 Tables

You can put stuff into tables by using the tabular environment. For example:

Source:
\begin{tabular}{r|cl}
1st column & 2nd column & 3rd column\\
\hline
a & b & c
\end{tabular}

Output:
1st column | 2nd column   3rd column
         a | b            c

Note that the command is called tabular and not table. Important points:

The {r|cl} after \begin{tabular} indicates the alignment of the three columns: right, center, and left. This is mandatory as it specifies the layout of the table. For more columns, type more alignment commands, e.g. for a table with 5 columns all aligned to the right, you would use rrrrr.

The vertical bar | between the r and c indicates that a vertical line should be drawn between those columns.

The & separates the columns in the body of the table.
A \\ signifies the end of each line of the table. The command \hline means that a horizontal line should be inserted.

6.3 Lists

You can put stuff into ordered and unordered lists by using the enumerate and itemize environments, respectively. For example:

Source:
Unordered list:
\begin{itemize}
\item This is one item.
\item This is another.
\end{itemize}

Output:
Unordered list:
- This is one item.
- This is another.

7 Math

The reason we use LaTeX is because it is so powerful in typesetting mathematical expressions. It wins hands down versus word processors like Word.

To give an equation a number and have it referable, use the equation environment and use a \label command:

Source:
The following is an important equation:
\begin{equation}
\label{emc}
E = mc^2
\end{equation}
Please memorize Equation \ref{emc}.

Output:
The following is an important equation:
$E = mc^2$ (7.1)
Please memorize Equation 7.1.

To typeset several equations together and have them properly aligned, use the align environment:

Source:
Some important equations:
\begin{align}
\label{einstein}
E & = mc^2 \\
\label{newton}
F & = ma \\
\label{euler}
e^{i \pi} & = -1
\end{align}

Output:
Some important equations:
$E = mc^2$ (7.2)
$F = ma$ (7.3)
$e^{i\pi} = -1$ (7.4)

The equations are aligned along the & and each line is terminated by \\. To suppress the equation numbering (i.e. if the equations won't be referred to) use align* instead of align.

Superscript and subscript are done using the ^ and _ characters. Note that if you want multiple characters in the super/subscript then you need to surround them with curly braces: $e^i\pi = -1$ gives $e^i\pi = -1$ whereas $e^{i\pi} = -1$ gives $e^{i\pi} = -1$.

Fractions are done using $\frac{1}{2}$, which gives $\frac{1}{2}$. To do a binomial coefficient, use $\binom{n}{k}$, which gives $\binom{n}{k}$.

Modular arithmetic can be written using the \pmod{n} and \bmod{n} commands. The first puts parentheses and a lot of space around the mod and the second does not.

$\forall$ and $\exists$ are written as \forall and \exists. $\neq$, $\geq$, and $\leq$ are \neq, \geq, and \leq. $\cdot$ (e.g. for multiplication) is \cdot. $\circ$ is \circ. $\cup$ and $\cap$ are \cup and \cap. $\oplus$ is written with \oplus. Large $\bigcup$, $\bigcap$, $\bigoplus$ signs that behave like summations (see below for summations) are written as \bigcup, \bigcap, \bigoplus.

R is produced with \drawnr. Z, R, etc. are produced using \Z, \R, etc. E is produced with \Exp. P, NP, etc. are produced using \P, \NP, etc. P is \calP. (These are macros provided by this document's preamble rather than base LaTeX.) $\ell$ (as opposed to l) is produced with \ell. {} are done with \{ and \}. $\approx$ is produced with \approx. $\hat{x}$ and $\bar{x}$ are done with \hat{x} and \bar{x}. A longer bar may be written using \overline{\SAT}, which produces $\overline{\SAT}$.

Source:
$$\Pr\left[\sum_{i=1}^k X_i > c \right] \leq 2^{-\Omega(c^2 k)}$$

Output:
$\Pr\left[\sum_{i=1}^k X_i > c \right] \leq 2^{-\Omega(c^2 k)}$

Arrays are like tables, except they must be used in place of tables when in math mode: instead of using \begin{tabular} and \end{tabular}, use \begin{array} and \end{array}. Again, you must give a column specification for how the columns are to be laid out.

Spacing is very different in math mode, so text in the middle of a formula is set strangely. If you want to have text in the middle of the formula, use the \text{some text} command. For example, $\P \neq \NP \text{ implies that } \SAT \notin \P$ produces $\P \neq \NP$ implies that $\SAT \notin \P$.

Summations and products are done using \sum and \prod respectively. Parameters can be given for the summation/product as well:

Source:
$$\sum_{i=1}^\infty \frac{1}{2^i} = 1$$

Output:
$\sum_{i=1}^\infty \frac{1}{2^i} = 1$
I would like your help to understand the definition of obedience in an incomplete information game at p.7 of this paper. Let me summarise the definition provided in the paper. There are $N\in \mathbb{N}$ players, with $i$ denoting a generic player. There is a finite set of states $\Theta$, with $\theta$ denoting a generic state. A basic game $G$ consists of for each player $i$, a finite set of actions $A_i$, where we write $A\equiv A_1\times A_2\times ... \times A_N$, and a utility function $u_i: A\times \Theta \rightarrow \mathbb{R}$. a full support prior $\psi\in \Delta(\Theta)$. An information structure $S$ consists of for each player $i$, a finite set of signals $T_i$, where we write $T\equiv T_1\times T_2\times ... \times T_N$. a signal distribution $\pi: \Theta \rightarrow \Delta(T)$. A decision rule of the incomplete information game $(G,S)$ is a mapping $$ \sigma: T\times \Theta\rightarrow \Delta(A) $$ One way to mechanically understand the notion of the decision rule is to view $\sigma$ as the strategy of an omniscient mediator who first observes the realization $\theta\in \Theta$ chosen according to $\psi$ and the realization $t\in T$ chosen according to $\pi(\cdot|\theta)$; and then picks a probability distribution from $\Delta(A)$, and privately announces to each player $i$, her lottery to play. For players to have an incentive to follow the mediator's recommendation in this scenario, it would have to be the case that the recommended lottery was always preferred to any other lottery conditional on the signal $t_i$ that player $i$ had received. This is reflected in the following "obedience" condition. Definition: The decision rule $\sigma$ is obedient for $(G,S)$ if, for each $i=1,...,N$, $t_i\in T_i$, and $a_i\in A_i$, we have$$\sum_{a_{-i}, t_{-i}, \theta} \psi(\theta) \pi(t_i,t_{-i}| \theta) \sigma(a_i, a_{-i}|t_i, t_{-i}, \theta) u_i(a_i, a_{-i},\theta)$$$$\geq \sum_{a_{-i}, t_{-i}, \theta} \psi(\theta) \pi(t_i,t_{-i}| \theta) \sigma(a_i, a_{-i}|t_i, t_{-i}, \theta) u_i(\tilde{a}_i, a_{-i},\theta)$$$\forall \tilde{a}_i\in A_i$. My question: I don't understand fully the LHS (or, equivalently, the RHS) expression $$\sum_{a_{-i}, t_{-i}, \theta} \psi(\theta) \pi(t_i,t_{-i}| \theta) \sigma(a_i, a_{-i}|t_i, t_{-i}, \theta) u_i(a_i, a_{-i},\theta)$$ Saying it in words, this is the sum over $A_{-i}, T_{-i}, \Theta$ of $$ \text{[Prob that the mediator observes $(\theta, t)$]}\times\\ \text{[Prob that the mediator suggests players to play $a$ under $\sigma$, given that he observes $(\theta, t)$]}\times\\ \text{[Profit that player $i$ gets from obeying the mediator, given that the other players obey too]} $$ Let the product just written be denoted by $(\star)$. Why in the second component of $(\star)$ we DO NOT condition on $a_i$ by taking $$ \text{[Prob that the mediator suggests the other players to play $a_{-i}$ under $\sigma$, given that he observes $(\theta, t)$ and given that he suggests player $i$ to play $a_i$]} $$ in place of $$ \text{[Prob that the mediator suggests players to play $a$ under $\sigma$, given that he observes $(\theta, t)$]} $$
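To make the two sides of the inequality concrete, here is a small brute-force check that I wrote (it is not code from the paper): it literally evaluates both sums for every player $i$, signal $t_i$, recommendation $a_i$ and deviation $\tilde a_i$ in a toy example. The prior, the signal structure, the decision rule and the payoffs are all invented for illustration. Note that, exactly as in the definition, the weight on each term uses $\sigma(a_i,a_{-i}\mid t,\theta)$ with the recommended $a_i$, while only the utility argument is swapped for the deviation $\tilde a_i$.

import itertools
import numpy as np

Theta, T1, T2, A1, A2 = range(2), range(2), range(2), range(2), range(2)
psi = np.array([0.5, 0.5])                      # full-support prior on Theta

# Signal distribution pi(t1, t2 | theta): signals perfectly reveal the state.
pi = np.zeros((2, 2, 2))                        # indexed [t1, t2, theta]
for th in Theta:
    pi[th, th, th] = 1.0

# Decision rule sigma(a1, a2 | t1, t2, theta): recommend the state-matching action.
sigma = np.zeros((2, 2, 2, 2, 2))               # indexed [t1, t2, theta, a1, a2]
for t1, t2, th in itertools.product(T1, T2, Theta):
    sigma[t1, t2, th, th, th] = 1.0

def u(i, a1, a2, th):
    # Coordination payoff: player i gets 1 if her own action matches the state.
    return 1.0 if (a1, a2)[i] == th else 0.0

def is_obedient():
    for i in (0, 1):
        Ti, Ai = (T1, A1) if i == 0 else (T2, A2)
        Tj, Aj = (T2, A2) if i == 0 else (T1, A1)
        for ti, ai, ai_dev in itertools.product(Ti, Ai, Ai):
            lhs = rhs = 0.0
            for tj, aj, th in itertools.product(Tj, Aj, Theta):
                t = (ti, tj) if i == 0 else (tj, ti)
                a = (ai, aj) if i == 0 else (aj, ai)
                a_dev = (ai_dev, aj) if i == 0 else (aj, ai_dev)
                # The weight uses the *recommended* profile a, as in the definition.
                w = psi[th] * pi[t[0], t[1], th] * sigma[t[0], t[1], th, a[0], a[1]]
                lhs += w * u(i, a[0], a[1], th)
                rhs += w * u(i, a_dev[0], a_dev[1], th)
            if lhs < rhs - 1e-12:
                return False
    return True

print(is_obedient())   # True: the fully revealing, state-matching rule is obedient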
Hydejack offers a few additional features to markup your content. Don’t worry, these are merely CSS classes added with kramdown’s {:...} syntax, so that your content remains compatible with other Jekyll themes. Table of Contents A word on building speeds Adding a table of contents Adding message boxes Adding large text Adding large images Adding image captions Adding large quotes Adding faded text Adding tables Scroll table Flip table Small tables Adding code blocks Adding math Inline Block A word on building speeds If building speeds are a problem, try using the --incremental flag, e.g. bundle exec jekyll serve --incremental From the Jekyll docs (emphasis mine): Enable the experimental incremental build feature. Incremental build only re-builds posts and pages that have changed, resulting in significant performance improvements for large sites, but may also break site generation in certain cases. The breakage occurs when you create new files or change filenames. Also, changing the title, category, tags, etc. of a page or post will not be reflected in pages other then the page or post itself. This makes it ideal for writing new posts and previewing changes, but not setting up new content. Adding a table of contents You can add a generated table of contents to any page by adding {:toc} below a list. Example: see above Markdown: * this unordered seed list will be replaced by toc as unordered list{:toc} Adding message boxes You can add a message box by adding the message class to a paragraph. Example: Markdown: **NOTE**: You can add a message box.{:.message} Adding large text You can add large text by adding the lead class to the paragraph. Example: You can add large text. Markdown: You can add large text.{:.lead} Adding large images You can make an image span the full width by adding the lead class. Example: Markdown: ![Full-width image](https://placehold.it/800x100){:.lead data-width="800" data-height="100"} Adding image captions You can add captions to images by adding the figure class to the paragraph containing the image and a caption. A caption for an image. Markdown: ![Full-width image](https://placehold.it/800x100){:.lead data-width="800" data-height="100"}A caption for an image.{:.figure} For better semantics, you can also use the figure/ figcaption HTML5 tags: <figure> <img alt="An image with a caption" src="https://placehold.it/800x100" class="lead" data-width="800" data-height="100" /> <figcaption>A caption to an image.</figcaption></figure> Adding large quotes You can make a quote “pop out” by adding the lead class. Example: You can make a quote “pop out”. Markdown: > You can make a quote "pop out".{:.lead} Adding faded text You can gray out text by adding the faded class. Use this sparingly and for information that is not essential, as it is more difficult to read. Example: I’m faded, faded, faded. Markdown: I'm faded, faded, faded.{:.faded} Adding tables Adding tables is straightforward and works just as described in the kramdown docs, e.g. Default aligned Left aligned Center aligned Right aligned First body part Second cell Third cell fourth cell Markdown: | Default aligned |Left aligned| Center aligned | Right aligned ||-----------------|:-----------|:---------------:|---------------:|| First body part |Second cell | Third cell | fourth cell | However, it gets tricker when adding large tables. 
In this case, Hydejack will break the layout and grant the table the entire available screen width to the right: Default aligned Left aligned Center aligned Right aligned Default aligned Left aligned Center aligned Right aligned Default aligned Left aligned Center aligned Right aligned Default aligned Left aligned Center aligned Right aligned First body part Second cell Third cell fourth cell First body part Second cell Third cell fourth cell First body part Second cell Third cell fourth cell First body part Second cell Third cell fourth cell Second line foo strong baz Second line foo strong baz Second line foo strong baz Second line foo strong baz Third line quux baz bar Third line quux baz bar Third line quux baz bar Third line quux baz bar Second body Second body Second body Second body 2 line 2 line 2 line 2 line Footer row Footer row Footer row Footer row Scroll table If the extra space still isn’t enough, the table will receive a scrollbar. It is browser default behavior to break the lines inside table cells to fit the content on the screen. By adding the scroll-table class on a table, the behavior is changed to never break lines inside cells, e.g: Default aligned Left aligned Center aligned Right aligned Default aligned Left aligned Center aligned Right aligned Default aligned Left aligned Center aligned Right aligned Default aligned Left aligned Center aligned Right aligned First body part Second cell Third cell fourth cell First body part Second cell Third cell fourth cell First body part Second cell Third cell fourth cell First body part Second cell Third cell fourth cell Second line foo strong baz Second line foo strong baz Second line foo strong baz Second line foo strong baz Third line quux baz bar Third line quux baz bar Third line quux baz bar Third line quux baz bar Second body Second body Second body Second body 2 line 2 line 2 line 2 line Footer row Footer row Footer row Footer row You can add the scroll-table class to a markdown table by putting {:.scroll-table} in line directly below the table. To add the class to a HTML table, add the it to the class attribute of the table tag, e.g. <table class="scroll-table">. Flip table Alternatively, you can “flip” (transpose) the table. Unlike the other approach, this will keep the table head (now the first column) fixed in place. You can enable this behavior by adding flip-table or flip-table-small to the CSS classes of the table. The -small version will only enable scrolling on “small” screens (< 1080px wide). 
Example: Default aligned Left aligned Center aligned Right aligned Default aligned Left aligned Center aligned Right aligned Default aligned Left aligned Center aligned Right aligned Default aligned Left aligned Center aligned Right aligned First body part Second cell Third cell fourth cell First body part Second cell Third cell fourth cell First body part Second cell Third cell fourth cell First body part Second cell Third cell fourth cell Second line foo strong baz Second line foo strong baz Second line foo strong baz Second line foo strong baz Third line quux baz bar Third line quux baz bar Third line quux baz bar Third line quux baz bar 4th line quux baz bar 4th line quux baz bar 4th line quux baz bar 4th line quux baz bar 5th line quux baz bar 5th line quux baz bar 5th line quux baz bar 5th line quux baz bar 6th line quux baz bar 6th line quux baz bar 6th line quux baz bar 6th line quux baz bar 7th line quux baz bar 7th line quux baz bar 7th line quux baz bar 7th line quux baz bar 8th line quux baz bar 8th line quux baz bar 8th line quux baz bar 8th line quux baz bar 9th line quux baz bar 9th line quux baz bar 9th line quux baz bar 9th line quux baz bar 10th line quux baz bar 10th line quux baz bar 10th line quux baz bar 10th line quux baz bar You can add the flip-table class to a markdown table by putting {:.flip-table} in line directly below the table. To add the class to a HTML table, add the it to the class attribute of the table tag, e.g. <table class="flip-table">. Small tables If a table is small enough to fit the screen even on small screens, you can add the stretch-table class to force a table to use the entire available content width. Note that stretched tables can no longer be scrolled. Default aligned Left aligned Center aligned Right aligned First body part Second cell Third cell fourth cell You can add the stretch-table class to a markdown table by putting {:.stretch-table} in line directly below the table. To add the class to a HTML table, add the it to the class attribute of the table tag, e.g. <table class="stretch-table">. Adding code blocks To add a code block without syntax highlighting, simply indent 4 spaces (regular markdown). For code blocks with code highlighting, use ~~~<language>. This syntax is also supported by GitHub. For more information and a list of supported languages, see Rouge. Example: // Example can be run directly in your JavaScript console// Create a function that takes two arguments and returns the sum of those// argumentsvar adder = new Function("a", "b", "return a + b");// Call the functionadder(2, 6);// > 8 Markdown: ~~~js// Example can be run directly in your JavaScript console// Create a function that takes two arguments and returns the sum of those// argumentsvar adder = new Function("a", "b", "return a + b");// Call the functionadder(2, 6);// > 8~~~ Adding math Hydejack supports math blocks via KaTeX. Why KaTeX instead of MathJax? KaTeX is faster and more lightweight at the cost of having less features, but for the purpose of writing blog posts, this should be a favorable tradeoff. Before you add math content, make sure you have the following in your config file: kramdown: math_engine: mathjax # this is not a typo math_engine_opts: preview: true preview_as_code: true Inline Example: Lorem ipsum f(x) = x^2. Markdown: Lorem ipsum $$ f(x) = x^2 $$. 
Block Example: \begin{aligned} \phi(x,y) &= \phi \left(\sum_{i=1}^n x_ie_i, \sum_{j=1}^n y_je_j \right) \\[2em] &= \sum_{i=1}^n \sum_{j=1}^n x_i y_j \phi(e_i, e_j) \\[2em] &= (x_1, \ldots, x_n) \left(\begin{array}{ccc} \phi(e_1, e_1) & \cdots & \phi(e_1, e_n) \\ \vdots & \ddots & \vdots \\ \phi(e_n, e_1) & \cdots & \phi(e_n, e_n) \end{array}\right) \left(\begin{array}{c} y_1 \\ \vdots \\ y_n \end{array}\right)\end{aligned} Markdown: $$\begin{aligned} \phi(x,y) &= \phi \left(\sum_{i=1}^n x_ie_i, \sum_{j=1}^n y_je_j \right) \\[2em] &= \sum_{i=1}^n \sum_{j=1}^n x_i y_j \phi(e_i, e_j) \\[2em] &= (x_1, \ldots, x_n) \left(\begin{array}{ccc} \phi(e_1, e_1) & \cdots & \phi(e_1, e_n) \\ \vdots & \ddots & \vdots \\ \phi(e_n, e_1) & \cdots & \phi(e_n, e_n) \end{array}\right) \left(\begin{array}{c} y_1 \\ \vdots \\ y_n \end{array}\right)\end{aligned}$$ Continue with Scripts
Let $$(a_n)_{n\in\mathbb{N}}$$ be a sequence and $$M$$ a real number. We say that the sequence is bounded from above by $$M$$ if all the terms are smaller than or equal to $$M$$, that is to say, if $$$a_n \leq M$$$ for all $$n$$. Similarly, we say that the sequence is bounded from below by $$M$$ if all the terms are greater than or equal to $$M$$, that is to say, if $$$a_n \geq M$$$ for all $$n$$. We call $$M$$ an upper bound or a lower bound of the sequence, respectively. Let's consider the sequence $$a_n=\dfrac{1}{n}$$. We have $$0 < \dfrac{1}{n} \leq 1$$ for all $$n$$. Therefore, the sequence is bounded from above by $$1$$ and bounded from below by $$0$$. We observe that the upper bound is part of the sequence, $$a_1=1$$, and therefore it is the best possible bound. But the lower bound does not belong to the set of terms. This might make us think that it is not the best possible bound. Let's verify that this is not so, i.e. that no positive lower bound exists: suppose that $$M>0$$ were a lower bound. Then we can consider an index $$n$$ greater than $$\dfrac{1}{M}$$, so that $$\dfrac{1}{n} < M$$, which contradicts our hypothesis on $$M$$. Anyway, in many cases it is enough to find a bound even though it is not the best possible one. In fact, if a sequence is bounded from above by $$M$$ then it is also bounded from above by every number greater than $$M$$. As with monotonicity, boundedness cannot be used to classify all sequences, since a sequence does not necessarily admit any bound. Let's consider the sequence $$a_n=n$$, which is bounded from below, for example by $$0$$, but does not admit any upper bound, since for every $$M$$ we can always find a greater $$n$$. Knowing whether a sequence admits an upper or a lower bound is not usually a simple problem either, and it can be even harder to actually find a bound, even knowing that the sequence is bounded. In the case of monotonic sequences, the first term serves as a bound: if we have an increasing sequence then the first term is a lower bound of the sequence, and if the sequence is decreasing then the first term is an upper bound. Another criterion to verify whether a sequence admits a bound is to check whether all the terms of the sequence are positive, or negative, in which case the sequence would be, respectively, bounded from below by $$0$$ or bounded from above by $$0$$. Calculation using functions When a sequence is given by its general term, we can check whether it is monotonic or bounded using the function that defines the general term. In this case, the properties of the function are also valid for the sequence. More specifically: If the function is monotonic for values greater than $$1$$, then the sequence is monotonic too. It can even happen that the sequence is monotonic even though the function is not. For example, let's consider the sequence $$\Big(\Big(n-\dfrac{3}{2}\Big)^2\Big)_{n\in\mathbb{N}}$$. The function $$f(x)=\Big(x-\dfrac{3}{2}\Big)^2$$ is not increasing for all values greater than $$1$$, for example $$$f(1)=\dfrac{1}{4} > 0 =f\Big(\dfrac{3}{2}\Big).$$$ But we can verify that the sequence is increasing, since $$$\Big(n-\dfrac{3}{2}\Big)^2 \leq \Big(n-\dfrac{1}{2}\Big)^2,$$$ and the right-hand side is precisely $$a_{n+1}$$. If the function is bounded from above, or below, for values greater than 1, then the sequence is also bounded from above, or below, respectively. These results allow us to use methods from differential calculus in our work with sequences. In particular, monotonicity can often be determined from the sign of the derivative of the function defining the general term.
As a last comment, we can ask whether the converse of the previous results is true, that is, whether the results obtained for the sequence are also valid for the function. The answer to this question is negative in general. For example, let's consider the function $$f(x)=x\cdot\sin(2\pi x)$$ and compute the corresponding sequence by evaluating the function at an integer $$n$$: $$$f(n)=n\cdot\sin(2\pi n)=0.$$$ That is, the corresponding sequence is the constant zero sequence, which is increasing, decreasing, and bounded both from above and from below. Now we are going to see that the function is none of those things. We evaluate the function at the points $$n+\dfrac{1}{4}$$ for $$n$$ integer: $$$f\Big(n+\dfrac{1}{4}\Big)=\Big(n+\dfrac{1}{4}\Big)\cdot\sin\Big(2\pi n+\dfrac{2\pi}{4}\Big)=\Big(n+\dfrac{1}{4}\Big)\cdot\sin\Big(\dfrac{\pi}{2}\Big)=n+\dfrac{1}{4}.$$$ We see therefore that the function is not bounded from above, since $$f\Big(n+\dfrac{1}{4}\Big)=n+\dfrac{1}{4}$$ grows without bound as $$n$$ increases. We now evaluate the function at the points $$n+\dfrac{3}{4}$$ for $$n$$ integer: $$$f\Big(n+\dfrac{3}{4}\Big)=\Big(n+\dfrac{3}{4}\Big)\cdot\sin\Big(2\pi n+2\pi\dfrac{3}{4}\Big)=\Big(n+\dfrac{3}{4}\Big)\cdot\sin\Big(\dfrac{3\pi}{2}\Big)=-\Big(n+\dfrac{3}{4}\Big).$$$ In the same way we see that the function is not bounded from below, since $$$f\Big(n+\dfrac{3}{4}\Big)=-\Big(n+\dfrac{3}{4}\Big)$$$ decreases without bound as $$n$$ increases. Also, since the points $$n+\dfrac{1}{4}$$ and $$n+\dfrac{3}{4}$$ alternate along the real line, the function cannot be monotonic, since at the former the function is positive and at the latter it is negative.
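A quick numerical illustration of the last example (my own, not part of the original text): evaluating $$f(x)=x\sin(2\pi x)$$ at the integers gives the zero sequence, while the values at $$n+\dfrac{1}{4}$$ and $$n+\dfrac{3}{4}$$ grow without bound in opposite directions.

import numpy as np

f = lambda x: x * np.sin(2 * np.pi * x)
n = np.arange(1, 6)
print(np.round(f(n), 12))          # ~[0 0 0 0 0]  -> constant (zero) sequence
print(np.round(f(n + 0.25), 12))   # n + 1/4       -> grows without bound
print(np.round(f(n + 0.75), 12))   # -(n + 3/4)    -> decreases without bound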
We review the Metropolis algorithm — a simple Markov Chain Monte Carlo (MCMC) sampling method — and its application to estimating posteriors in Bayesian statistics. A simple python example is provided.

Introduction

One of the central aims of statistics is to identify good methods for fitting models to data. One way to do this is through the use of Bayes’ rule: If $\textbf{x}$ is a vector of $k$ samples from a distribution and $\textbf{z}$ is a vector of model parameters, Bayes’ rule gives \begin{eqnarray} \tag{1} \label{Bayes} p(\textbf{z} \vert \textbf{x}) = \frac{p(\textbf{x} \vert \textbf{z}) p(\textbf{z})}{p(\textbf{x})}. \end{eqnarray} Here, the probability at left, $p(\textbf{z} \vert \textbf{x})$ — the “posterior” — is a function that tells us how likely it is that the underlying true parameter values are $\textbf{z}$, given the information provided by our observations $\textbf{x}$. Notice that if we could solve for this function, we would be able to identify which parameter values are most likely — those that are good candidates for a fit. We could also use the posterior’s variance to quantify how uncertain we are about the true, underlying parameter values. Bayes’ rule gives us a method for evaluating the posterior — now our goal: We need only evaluate the right side of (\ref{Bayes}). The quantities shown there are the likelihood $p(\textbf{x} \vert \textbf{z})$, the prior $p(\textbf{z})$, and the normalization $p(\textbf{x})$. It turns out that the last term above can sometimes be difficult to evaluate analytically, and so we must often resort to numerical methods for estimating the posterior. Monte Carlo sampling is one of the most common approaches taken for doing this. The idea behind Monte Carlo is to take many samples $\{\textbf{z}_i\}$ from the posterior (\ref{Bayes}). Once these are obtained, we can approximate population averages by averages over the samples. For example, the true posterior average $\langle\textbf{z} \rangle \equiv \int \textbf{z} p(\textbf{z} \vert \textbf{x}) d \textbf{z}$ can be approximated by $\overline{\textbf{z}} \equiv \frac{1}{N}\sum_i \textbf{z}_i$, the sample average. By the law of large numbers, the sample averages are guaranteed to approach the distribution averages as $N \to \infty$. This means that Monte Carlo can always be used to obtain very accurate parameter estimates, provided we take $N$ sufficiently large — and that we can find a convenient way to sample from the posterior. In this post, we review one simple variant of Monte Carlo that allows for posterior sampling: the Metropolis algorithm.

Metropolis Algorithm

Iterative Procedure

Metropolis is an iterative, try-accept algorithm. We initialize the algorithm by selecting a parameter vector $\textbf{z}$ at random. Following this, we repeatedly carry out the following two steps to obtain additional posterior samples: Identify a next candidate sample $\textbf{z}_j$ via some random process. This candidate selection step can be informed by the current sample’s position, $\textbf{z}_i$. For example, one could require that the next candidate be selected from those parameter vectors a given step-size distance from the current sample, $\textbf{z}_j \in \{\textbf{z}_k: \vert \textbf{z}_i - \textbf{z}_k \vert = \delta \}$. However, while the candidate selected can depend on the current sample, it must not depend on any prior history of the sampling process.
Whatever the process chosen (there’s some flexibility here), we write $t_{i,j}$ for the rate of selecting $\textbf{z}_j$ as the next candidate given that the current sample is $\textbf{z}_i$. Once a candidate is identified, we either accept or reject it via a second random process. If it is accepted, we mark it down as the next sample, then go back to step one, using the current sample to inform the next candidate selection. Otherwise, we mark the current sample down again, taking it as a repeat sample, and then use it to return to the candidate search step, as above. Here, we write $A_{i,j}$ for the rate of accepting $\textbf{z}_j$, given that it was selected as the next candidate, starting from $\textbf{z}_i$.

Selecting the trial and acceptance rates

In order to ensure that our above process selects samples according to the distribution (\ref{Bayes}), we need to appropriately set the $\{t_{i,j}\}$ and $\{A_{i,j}\}$ values. To do that, note that at equilibrium one must see the same number of hops from $\textbf{z}_i$ to $\textbf{z}_j$ as hops from $\textbf{z}_j$ to $\textbf{z}_i$ (if this did not hold, one would see a net shifting of weight from one to the other over time, contradicting the assumption of equilibrium). If $\rho_i$ is the fraction of samples the process takes from state $i$, this condition can be written as \begin{eqnarray} \label{inter} \rho_i t_{i,j} A_{i,j} = \rho_j t_{j,i} A_{j,i}. \tag{3} \end{eqnarray} To select a process that returns the desired sampling weight, we solve for the ratio $\rho_i / \rho_j$ in (\ref{inter}) and then equate this to the ratio required by (\ref{Bayes}). This gives \begin{eqnarray} \tag{4} \label{cond} \frac{\rho_i}{\rho_j} = \frac{t_{j,i} A_{j,i}}{t_{i,j} A_{i,j}} \equiv \frac{p(\textbf{x} \vert \textbf{z}_i)p(\textbf{z}_i)}{p(\textbf{x} \vert \textbf{z}_j)p(\textbf{z}_j)}. \end{eqnarray} Now, the single constraint above is not sufficient to pin down all of our degrees of freedom. In the Metropolis case, we choose the following working balance: The trial rates between states are set equal, $t_{i,j} = t_{j,i}$ (but remain unspecified — left to the discretion of the coder on a case-by-case basis), and we set $$ \tag{5} A_{i,j} = \begin{cases} 1, & \text{if } p(\textbf{z}_j \vert \textbf{x}) > p(\textbf{z}_i \vert \textbf{x}) \\ \frac{p(\textbf{x} \vert \textbf{z}_j)p(\textbf{z}_j)}{p(\textbf{x} \vert \textbf{z}_i)p(\textbf{z}_i)} \equiv \frac{p(\textbf{z}_j \vert \textbf{x})}{p(\textbf{z}_i \vert \textbf{x})}, & \text{else}. \end{cases} $$ This last equation says that we choose to always accept a candidate sample if it is more likely than the current one. However, if the candidate is less likely, we only accept it a fraction of the time — with rate equal to the relative probability ratio of the two states. For example, if the candidate is only $80\%$ as likely as the current sample, we accept it $80\%$ of the time. That’s it for Metropolis — a simple MCMC algorithm, guaranteed to satisfy (\ref{cond}), and therefore to equilibrate to (\ref{Bayes})! An example follows.

Coding example

The following python snippet illustrates the Metropolis algorithm in action. Here, we take 15 samples from a Normal distribution of variance one and true mean also equal to one. We pretend not to know the mean (but assume we do know the variance), assume a uniform prior for the mean, and then run the algorithm to obtain two hundred thousand samples from the mean’s posterior.
The histogram at right summarizes the results, obtained by dropping the first 1% of the samples (to protect against bias towards the initialization value). Averaging over the samples returns a mean estimate of $\mu \approx 1.4 \pm 0.5$ (95% confidence interval), consistent with the true value of $1$.

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

# Take some samples
true_mean = 1
X = np.random.normal(loc=true_mean, size=15)
total_samples = 200000

# Function used to decide move acceptance: with a uniform prior on the mean,
# the posterior is proportional to this likelihood product.
def posterior_numerator(mu):
    prod = 1
    for x in X:
        prod *= np.exp(-(x - mu) ** 2 / 2)
    return prod

# Initialize MCMC, then iterate
z1 = 0
posterior_samples = [z1]
while len(posterior_samples) < total_samples:
    z_current = posterior_samples[-1]
    z_candidate = z_current + np.random.rand() - 0.5
    rel_prob = posterior_numerator(z_candidate) / posterior_numerator(z_current)
    if rel_prob > 1:
        posterior_samples.append(z_candidate)
    else:
        trial_toss = np.random.rand()
        if trial_toss < rel_prob:
            posterior_samples.append(z_candidate)
        else:
            posterior_samples.append(z_current)

# Drop the first 1% of the samples as burn-in
thinned_samples = posterior_samples[2000:]
plt.hist(thinned_samples)
plt.title("Histogram of MCMC samples")
plt.show()

Summary

To summarize, we have reviewed the application of MCMC to Bayesian statistics. MCMC is a general tool for obtaining samples from a probability distribution. It can be applied whenever one can conveniently specify the relative probability of two states — and so is particularly apt for situations where only the normalization constant of a distribution is difficult to evaluate, precisely the problem with the posterior (\ref{Bayes}). The method entails carrying out an iterative try-accept algorithm, where the rates of trial and acceptance can be adjusted, but must be balanced so that the equilibrium distribution that results approaches the desired form. The key equation enabling us to strike this balance is (\ref{inter}) — the zero flux condition (aka the detailed balance condition to physicists) that holds between states at equilibrium.
Can any telescope see someone walking on Mars? How much time dilation would there be? What is the theoretical best resolution? From here, discussing images of Mars taken by Hubble near its closest approach to Earth: The telescope snapped these pictures between April 27 and May 6, 1999, when Mars was 87 million kilometres from Earth. From this distance the telescope could see Martian features as small as 19 kilometres wide. Theoretically Our resolution is limited by the diffraction limit: $$ \theta = \frac{1.22 \times\lambda}{d}$$ where $\lambda$ is the light's wavelength, $d$ is our aperture size and $\theta$ is the angular resolution. We can express $\theta$ in terms of an object's distance $s$ and radius $r$, and use a small angle approximation: $$\theta= \arctan(r/s) \approx \frac{r}{s}$$ If we want to resolve a ~1 m human from 87 million km, we would need a telescope aperture on the order of 50-60 km in diameter (for visible light). Note: techniques like interferometry can 'bypass' the diffraction limit to some extent, but imaging small objects at very large distances is inherently very hard. It's not exactly "from Earth", but the Mars Reconnaissance Orbiter has an instrument that should be able to just barely detect the presence of a person on the surface. They would be about a single pixel wide, so you should be able to detect their moving around but not much else. https://mars.nasa.gov/mro/mission/instruments/hirise/ Coverage would be intermittent due to the orbiter not being overhead at all times.
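As a sanity check of the numbers quoted above, the following short computation applies the Rayleigh criterion with an assumed visible wavelength of 550 nm; the wavelength choice is mine, while the distance and target size are the ones used in the answer.

wavelength = 550e-9          # m, visible (green) light -- an assumption
distance   = 87e9            # m, Earth-Mars distance quoted for the 1999 images
target     = 1.0             # m, roughly the size of a person

theta = target / distance               # required angular resolution (small angle)
aperture = 1.22 * wavelength / theta    # Rayleigh criterion: d = 1.22 * lambda / theta
print(f"required aperture: {aperture / 1e3:.0f} km")   # ~58 km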
Two Tricks Using Eisenstein's Criterion Today we're talking about Eisenstein's (not Einstein's!) Criterion - a well-known test to determine whether or not a polynomial is irreducible. Theorem (Eisenstein's Criterion) Let $R$ be a UFD and $p\in R$ be a prime. If $f(x)=x^n+a_{n-1}x^{n-1}+\cdots + a_0\in R[x]$ is a monic polynomial such that $p|a_i$ for each $0\leq i \leq n-1$ and $p^2\nmid a_0$, then $f(x)$ is irreducible in $R[x]$ )and hence in $F[x]$ where $F$ is the field of fractions of $R$). Surely you've seen a standard type of example which makes use of this theorem: $x^6+10x^2+15x+5$ is irreducible over $\mathbb{Z}$ since $5$ divides $10,15$ and $5$ but $5^2$ does not divide $5$. But sometimes the use of Eisenstein's Criterion might not be so obvious and a trick may be needed. Today we consider two such examples. Trick #1 Let $p$ be a prime integer. Prove $\Phi_p(x)=\frac{x^p-1}{x-1}$ is irreducible in $\mathbb{Z}[x]$. $\Phi_p(x)$ is called the cyclotomic $p$th polynomial and is special because its roots are precisely the primitive $p$th roots of unity. That $\Phi_p(x)$ is irreducible is quite an important fact (which, for instance, turns up frequently in Galois theory), but at first glace it's not obvious that Eisenstein's Criterion is even applicable. For that we need a trick (or, a fact, really): $p(x)$ is irreducible $\quad \Longleftrightarrow\quad$ $p(x+1)$ is irreducible. for any polynomial $p(x)\in R[x]$ for any UFD $R$. (This works because any factorization of $p(x)$, say $p(x)=f(x)g(x)$, corresponds to a factorization of $p(x+1)$, namely $p(x+1)=f(x+1)g(x+1)$.) So $$\Phi_p(x+1)=\frac{(x+1)^p-1}{x}=\frac{ \sum_{k=0}^p\binom{p}{k}x^{p-k} \: -1 }{x}.$$ Notice that the numerator is equal to $x^p+px^{p-1}+\frac{1}{2}p(p-1)x^{p-2}+\cdots+\frac{1}{2}p(p-1)x^2+px$ and so the above becomes $$x^{p-1}+px^{p-2}+\frac{1}{2}p(p-1)x^{p-3}+\cdots+\frac{1}{2}p(p-1)x+p.$$ This satisfies the Eisenstein criterion at the prime $p$. The key is simply that each of the binomial coefficients $\binom{p}{k}$ for $1\leq k \leq p-1$ is divisible by $p$ since the $p$ in the $p!$ of the numerator never gets canceled. Similarly, we can show that $x^4+1$ is "Eisenstein at 2" by using the same trick. One can see this quickly by noting that fourth row of Pascal's triangle is 1-4-6-4-1 and so the coefficents of $(x+1)^4+1$ must be $1-4-6-4-2$. Trick #2 Prove $x^2+y^2-1$ is irreducible in $\mathbb{Q}[x,y]$ Here we want to regard $\mathbb{Q}[x,y]$ as $\mathbb{Q}[x][y]$ so that our UFD is $R=\mathbb{Q}[x]$, and we want to view $x^2+y^2-1=y^2+(x+1)(x-1)$ as a monic polynomial in the variable $y$ with constant term $x^2-1\in R$. If we can show that $x+1$ (or similarly $x-1$) is a prime in $R$, then we can say $y^2+x^2-1$ is irreducible over $R$ since $x+1$ divides $(x+1)(x-1)$ but $(x+1)^2$ does not. Indeed $x+1$ is prime. To see this, recall that it's enough to show $x+1$ is irreducible since we're in a UFD. This can be easily done with a degree argument.* OR if you're feeling extra fancy, you can take the high route: Notice that $$\mathbb{Q}[x]/(x+1)\cong \mathbb{Q}.$$ To see this, let $\varphi:\mathbb{Q}[x]\to\mathbb{Q}$ be the ring homomorphism given by $\varphi(p(x))=p(-1)$. Then $\varphi$ is surjective (any $q\in \mathbb{Q}$ has preimage $p(x)=x+q+1$) and $\ker\varphi$ is the ideal $(x+1)=\{(x+1)f(x):f(x)\in\mathbb{Q}[x]\}$. 
That $\mathbb{Q}[x]/(x+1)\cong \mathbb{Q}$ follows directly from the First Isomorphism Theorem.Thus since $\mathbb{Q}$ is a field, so is $\mathbb{Q}[x]/(x+1)$ which implies $(x+1)$ is a maximal ideal (as $\mathbb{Q}[x]$ is a commutative ring) and therefore $x+1\in \mathbb{Q}[x]$ is irreducible. (This is "using a sledge hammer to beat a flea"!) Footnote: * Here 'tis: Suppose $x+1$ factors in $\mathbb{Q}[x]$ as $x+1=f(x)g(x)$. Then we must have $1=\deg f(x)+\deg g(x)$ which can only happen if, say, $\deg f(x)=1$ and $\deg g(x)=0$. This means $f(x)$ is linear and $g(x)=g\in \mathbb{Q}$ is a nonzero unit of $\mathbb{Q}[x]$ (i.e. it's a constant). But this is precisely what it means for $x+1$ to be irreducible.
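If you want to convince yourself of Trick #1 computationally, here is a small sanity check I put together with sympy (my own sketch, not part of the post): it expands $\Phi_p(x+1)=\frac{(x+1)^p-1}{x}$ and verifies the Eisenstein conditions at $p$ for the first few primes.

from sympy import symbols, expand, quo, Poly

x = symbols('x')

def eisenstein_at_p(p):
    phi_shifted = quo(expand((x + 1)**p - 1), x)   # exact polynomial division by x
    coeffs = Poly(phi_shifted, x).all_coeffs()     # leading coefficient first
    lead, rest, const = coeffs[0], coeffs[1:], coeffs[-1]
    # Eisenstein at p: monic, p divides every lower coefficient, p^2 does not divide a_0
    return lead == 1 and all(c % p == 0 for c in rest) and const % (p**2) != 0

print([eisenstein_at_p(p) for p in [2, 3, 5, 7, 11, 13]])   # all True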
First, you need to ensure that the pixel has the minimal value in its 8-neighborhood. SIFT, SURF and other keypoint detectors filter these pixels in a step called "non-maximal suppression". This is basically a necessary condition for the second-order approximation we will use to determine the sub-pixel location. If you imagine the scale space slice $I$ as a 3D surface, the pixel lies in a valley surrounded by pixels with higher value. The shape can be well approximated by a paraboloid, and the corrected pixel position is at its minimum. The pixel value $I(p)$ at location $p=(x,y)^{\textbf{T}}$ can be approximated by a second order Taylor expansion: $$I(p+\delta)\approx I(p) + \delta^{\textbf{T}}g+\frac{1}{2}\delta^{\textbf{T}}H\delta$$ where $g$ and $H$ are the gradient (2-vector of first order derivatives) and the Hessian ($2\times 2$ matrix of second-order derivatives) at $(x,y)$. The $\delta$ is the sub-pixel correction we want to compute. Since the paraboloid has its minimum at the point where its derivatives are zero, we take the derivative of the right-hand side of the above Taylor expansion (with respect to $\delta$) and set it equal to zero: $$g+H\delta=0$$ From this we can compute the correction vector: $$\delta=-H^{-1}g$$ Finally, the desired sub-pixel location is given by $p + \delta$. The gradient and Hessian are obtained using finite differences: $$g_{1}=\left( I(x+1,y)-I(x-1,y) \right)\cdot 1/2$$$$g_{2}=\left( I(x,y+1)-I(x,y-1) \right)\cdot 1/2$$$$H_{1,1}=I(x+1,y)+I(x-1,y)-2I(x,y)$$$$H_{2,2}=I(x,y+1)+I(x,y-1)-2I(x,y)$$$$H_{1,2}=H_{2,1}=\left(I(x+1,y+1)+I(x-1,y-1)-I(x+1,y-1)-I(x-1,y+1)\right)\cdot 1/4$$ So the algorithm is:
1. Check if the pixel is a local minimum (surrounded by pixels with higher value).
2. Compute $g$ and $H$ using the finite differencing formulas above.
3. Compute $\delta$.
You can repeat steps 2 and 3 for extra accuracy, but just one step is fine for our needs. The value at the sub-pixel location $I(p+\delta)$ can be obtained by well known bilinear or bicubic interpolation. We did the interpolation with respect to $x,y$, but you can perform this interpolation with respect to scale as well. This will lead to a 3-vector gradient and a $3\times 3$ Hessian, but the same Taylor expansion formula holds. You can take a look at the OpenSURF implementation in C++ or C#, which has this implemented. The above derivation is actually Newton's method as used in multivariate optimization.
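Here is a minimal sketch of the 2-D refinement step in code; it is my own illustration, not a drop-in replacement for OpenSURF or any particular library, and the indexing convention (rows are y, columns are x) is an assumption stated in the comments.

import numpy as np

def subpixel_offset(patch):
    """patch: 3x3 array of image values, patch[1, 1] being the candidate pixel.
    Convention (assumed): patch[dy + 1, dx + 1], i.e. rows are y and columns are x."""
    gx = (patch[1, 2] - patch[1, 0]) / 2.0
    gy = (patch[2, 1] - patch[0, 1]) / 2.0
    hxx = patch[1, 2] + patch[1, 0] - 2.0 * patch[1, 1]
    hyy = patch[2, 1] + patch[0, 1] - 2.0 * patch[1, 1]
    hxy = (patch[2, 2] + patch[0, 0] - patch[2, 0] - patch[0, 2]) / 4.0
    g = np.array([gx, gy])
    H = np.array([[hxx, hxy], [hxy, hyy]])
    delta = -np.linalg.solve(H, g)   # correction (dx, dy); add it to (x, y)
    return delta

# Example: samples of a paraboloid whose true minimum sits at (0.3, -0.2) off-grid.
xs, ys = np.meshgrid([-1, 0, 1], [-1, 0, 1])
patch = (xs - 0.3) ** 2 + (ys + 0.2) ** 2
print(subpixel_offset(patch))        # ~[ 0.3 -0.2]

For the separable paraboloid in the example the quadratic model is exact, so the recovered offset matches the true minimum; on real image patches, many implementations reject the result when the offset exceeds roughly half a pixel and move the candidate instead.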
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and neither a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases. @TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good. It is in fact difficult, I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors). Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $n!+1=m^2$, where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11... $\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474. Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, we have $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus if $n!$ has $k$ trailing zeros, then $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$.
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}.\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac18-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$, as $\min\{x+\frac18-\frac18\ln(8\pi x)\}>0$ in the domain. Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$ We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better) @TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that For distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$ It is of anticipation that there will be much fewer solutions for incr...
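The key step above, that the number of digits of $n!$ exceeds $2k$ for $n>10$ where $k$ is the number of trailing zeros of $n!$, is easy to check empirically; the following short script (mine, not from the discussion) verifies it over a range of $n$.

from math import factorial

def trailing_zeros(n):
    # Legendre: k = sum over i of floor(n / 5^i)
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

for n in range(11, 200):
    digits = len(str(factorial(n)))
    k = trailing_zeros(n)
    assert digits > 2 * k, (n, digits, k)
print("digits(n!) > 2k for all 11 <= n < 200")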
Should I construct the disks $D_n$ / the function $f$ explicitly or just show the existence? Showing the existence of $f$ suffices. But often the easiest way to show existence is giving an explicit example. As for the sequence of disks, I think an explicit construction is way beyond the scope of the exercise (if at all possible), but one can show that such a sequence exists. Take your favourite countable dense subset of the open unit disk - for example the set of points with rational real and imaginary part - and enumerate it $\{ p_k : k \in \mathbb{N}\setminus \{0\}\}$. Let $K_0 = \varnothing$ and $m_0 = 0$. For every $k \in \mathbb{N}\setminus \{0\}$, if $p_k \in K_{k-1}$, set $K_k = K_{k-1}$ and $m_k = m_{k-1}$. Otherwise choose a radius $0 < r < \frac{1}{2k^2}$ such that $\overline{D(p_k;r)} \subset U\setminus K_{k-1}$ and set $m_k = m_{k-1}+1$, $\alpha_{m_k} = p_k$, $r_{m_k} = r$ and $K_k = K_{k-1} \cup \overline{D(p_k;r)}$. By construction, each $K_k$ is a compact subset of $U$, so there are $m > k$ with $p_m \notin K_k$, hence you get an infinite sequence of disjoint disks. Since for each $k$ we have $p_k \in K_k$, and hence $p_k \in \overline{D(\alpha_j;r_j)}$ for some $j \leqslant k$, the closure of the union $V$ of the open disks contains a dense subset of $U$, hence $V$ is dense in $U$. The upper bound on the chosen radii forces $\sum r_n < +\infty$. Having shown the existence of such disks, we note that by the standard estimate we have $$\biggl\lvert\int_{\lvert z-a \rvert = r} f(z)\,dz\biggr\rvert \leqslant 2\pi r\cdot \lVert f\rVert_\infty$$ and hence $$\lvert L(f)\rvert \leqslant 2\pi \Biggl(1 + \sum_{n = 1}^\infty r_n\Biggr)\cdot \lVert f\rVert_\infty,$$ so $L$ is continuous. If $R$ is a rational function whose poles lie outside $K$, then each pole of $R$ lies either outside the closed unit disk or in exactly one of the disks $D_n$. The poles outside the closed unit disk don't contribute to $L(R)$ since neither $\Gamma$ nor any $\gamma_n$ winds around these poles, and for the poles in $D_m$, only $\gamma_m$ and $\Gamma$ wind around these poles, both with winding number $1$, so the integrals cancel for the principal part of $R$ in these poles. The partial fraction decomposition of $R$ thus shows $L(R) = 0$. For the last part, a simple continuous function whose integral over circles doesn't vanish is the conjugation. We have $$\int_{\lvert z-a\rvert = r} \overline{z}\,dz = 2\pi ir^2,$$ so $$L(z \mapsto \overline{z}) = 2\pi i \Biggl(1 - \sum_{n = 1}^\infty r_n^2\Biggr).$$ In the construction above, it is easily seen that $\sum r_n^2 < 1$, so $f(z) = \overline{z}$ works. If it were possible to choose the radii such that $\sum r_n^2 = 1$ (1), then $K$ would be a null set, and by the Hartogs-Rosenthal theorem (thanks to A.G. for pointing it out), every continuous function on $K$ would be a uniform limit of rational functions, so then $L(f) = 0$ for all $f\in C(K)$. But - thanks again to A.G. for digging up the reference - if one has a sequence of disjoint disks in $U$ such that $\sum r_n^2 = 1$, then it follows that $\sum r_n = +\infty$, so under the constraints of the exercise we have $\sum r_n^2 < 1$. A proof of this using approximation theory for holomorphic functions is contained in Mergelyan's 1952 paper (2), and an elementary proof of a generalisation is in Oscar Wesler's 1960 paper. (1) We have $\sum r_n^2 \leqslant 1$ since $V$ has measure $\pi \sum r_n^2$ as a disjoint union of disks, and since $V \subset U$, that measure is bounded by $\pi$. (2) S. N. 
Mergelyan, Uniform approximations to functions of a complex variable, Uspehi Mat. Nauk. vol. 7 (1952) pp. 31-122; Amer. Math. Soc. Transl. no. 101 (1954) p. 21.
Constructible universe. The Constructible universe (denoted $L$) was invented by Kurt Gödel as a transitive inner model of $\text{ZFC}+\text{GCH}$ (assuming the consistency of $\text{ZFC}$) showing that $\text{ZFC}$ cannot disprove $\text{GCH}$. It was then shown to be an important model of $\text{ZFC}$ because it satisfies various other axioms, thus showing them to be consistent with $\text{ZFC}$. The idea is that $L$ is built up by ranks like $V$. $L_0$ is the empty set, and $L_{\alpha+1}$ is the set of all easily definable subsets of $L_\alpha$. The assumption that $V=L$ (also known as the Axiom of constructibility) is undecidable from $\text{ZFC}$, and implies many axioms which are consistent with $\text{ZFC}$. A set $X$ is constructible iff $X\in L$. $V=L$ iff every set is constructible. Definition $\mathrm{def}(X)$ is the set of all "easily definable" subsets of $X$ (specifically the $\Delta_0$ definable subsets). More specifically, a subset $x$ of $X$ is in $\mathrm{def}(X)$ iff there is a first-order formula $\varphi$ and $v_0,v_1...v_n\in X$ such that $x=\{y\in X:\varphi^X[y,v_0,v_1...v_n]\}$. Then, $L_\alpha$ and $L$ are defined as follows: $L_0=\emptyset$ $L_{\alpha+1}=\mathrm{def}(L_\alpha)$ $L_\beta=\bigcup_{\alpha<\beta} L_\alpha$ if $\beta$ is a limit ordinal $L=\bigcup_{\alpha\in\mathrm{Ord}} L_\alpha$ The Relativized constructible universes $L_\alpha(W)$ and $L_\alpha[W]$ $L_\alpha(W)$ for a class $W$ is defined the same way except $L_0(W)=\text{TC}(\{W\})$ (the transitive closure of $\{W\}$). $L_\alpha[W]$ for a class $W$ is defined in the same way as $L$ except using $\mathrm{def}_W(X)$, where $\mathrm{def}_W(X)$ is the set of all $x\subseteq X$ such that there is a first-order formula $\varphi$ and $v_0,v_1...v_n\in X$ such that $x=\{y\in X:\varphi^X[y,W,v_0,v_1...v_n]\}$ (because the relativization of $\varphi$ to $X$ is used and $\langle X,\in\rangle$ is not used, this definition makes sense even when $W$ is not in $X$). $L[W]=\bigcup_{\alpha\in\mathrm{Ord}}L_\alpha[W]$ is always a model of $\text{ZFC}$, and always satisfies $\text{GCH}$ past a certain cardinality. $L(W)=\bigcup_{\alpha\in\mathrm{Ord}}L_\alpha(W)$ is always a model of $\text{ZF}$ but need not satisfy $\text{AC}$ (the axiom of choice). In particular, $L(\mathbb{R})$ is, under large cardinal assumptions, a model of the axiom of determinacy. However, Shelah proved that if $\lambda$ is a strong limit cardinal of uncountable cofinality then $L(\mathcal{P}(\lambda))$ is a model of $\text{AC}$. The difference between $L_\alpha$ and $V_\alpha$ For $\alpha\leq\omega$, $L_\alpha=V_\alpha$. 
However, $|L_{\omega+\alpha}|=\aleph_0 + |\alpha|$ whilst $|V_{\omega+\alpha}|=\beth_\alpha$. Unless $\alpha$ is a $\beth$-fixed point, $|L_{\omega+\alpha}|<|V_{\omega+\alpha}|$. Although $L_\alpha$ is quite small compared to $V_\alpha$, $L$ is a tall model, meaning $L$ contains every ordinal. In fact, $V_\alpha\cap\mathrm{Ord}=L_\alpha\cap\mathrm{Ord}=\alpha$, so the ordinals in $V_\alpha$ are precisely those in $L_\alpha$. If $0^{\#}$ exists (see below), then every uncountable cardinal $\kappa$ has $L\models$ "$\kappa$ is totally ineffable" (and therefore the smallest actually totally ineffable cardinal $\lambda$ has many more large cardinal properties in $L$). However, if $\kappa$ is inaccessible and $V=L$, then $V_\kappa=L_\kappa$. Furthermore, $V_\kappa\models (V=L)$. In the case where $V\neq L$, it is still true that $V_\kappa^L=L_\kappa$, although $V_\kappa^L$ will not be $V_\kappa$. In fact, $\mathcal{P}(\omega)\not\in V_\kappa^L$ if $0^{\#}$ exists. Statements True in $L$ Here is a list of statements true in the $L$ of any model of $\text{ZF}$: $\text{ZFC}$ (and therefore the Axiom of Choice) $\text{GCH}$ $V=L$ (and therefore $V$ $=$ $\text{HOD}$) The diamond principle The clubsuit principle The falsity of Suslin's hypothesis Determinacy of $L(\R)$ Main article: axiom of determinacy Using other logic systems than first-order logic When using second order logic in the definition of $\mathrm{def}$, the new hierarchy is called $L_\alpha^{II}$. Interestingly, $L^{II}=\text{HOD}$. When using $\mathcal{L}_{\kappa,\kappa}$, the hierarchy is called $L_\alpha^{\mathcal{L}_{\kappa,\kappa}}$, and $L\subseteq L^{\mathcal{L}_{\kappa,\kappa}}\subseteq L(V_\kappa)$. Finally, when using $\mathcal{L}_{\infty,\infty}$, it turns out that the result is $V$. Chang's Model is $L^{\mathcal{L}_{\omega_1,\omega_1}}$. Chang proved that $L^{\mathcal{L}_{\kappa,\kappa}}$ is the smallest inner model of $\text{ZFC}$ closed under sequences of length $<\kappa$. Silver indiscernibles To be expanded. Silver cardinals A cardinal $κ$ is Silver if in a set-forcing extension there is a club in $κ$ of generating indiscernibles for $V_κ$ of order-type $κ$. This is a very strong property downwards absolute to $L$, e.g.:[1] Every element of a club $C$ witnessing that $κ$ is a Silver cardinal is virtually rank-into-rank. If $C ∈ V[H]$, a forcing extension by $\mathrm{Coll}(ω, V_κ)$, is a club in $κ$ of generating indiscernibles for $V_κ$ of order-type $κ$, then each $ξ ∈ C$ is $< ω_1$-iterable. Sharps $0^{\#}$ (zero sharp) is a $\Sigma_3^1$ real number which, under the existence of many Silver indiscernibles (a statement independent of $\text{ZFC}$), has a number of properties that contradict the axiom of constructibility and implies that, in short, $L$ and $V$ are "very different". Technically, under the standard definition of $0^\#$ as a (real number encoding a) set of formulas, $0^\#$ provably exists in $\text{ZFC}$, but lacks all its important properties. Thus the expression "$0^\#$ exists" is to be understood as "$0^\#$ exists and there are uncountably many Silver indiscernibles". Definition of $0^{\#}$ Assume there is an uncountable set of Silver indiscernibles. Then $0^{\#}$ is defined as the set of all Gödel numberings of first-order formulas $\varphi$ such that $L_{\aleph_{\omega}}\models\varphi(\aleph_0,\aleph_1...\aleph_n)$ for some $n$. 
"$0^{\#}$ exists" is used as a shorthand for "there is an uncountable set of Silver indiscernibles"; since $L_{\aleph_\omega}$ is a set, $\text{ZFC}$ can define a truth predicate for it, and so the existence of $0^{\#}$ as a mere set of formulas would be trivial. It is interesting only when there are many (in fact proper class many) Silver indiscernibles. Similarly, we say that "$0^{\#}$ does not exist" if there are no Silver indiscernibles. Implications, equivalences, and consequences of $0^\#$'s existence If $0^\#$ exists then: $L_{\aleph_\omega}\prec L$ and so $0^\#$ also corresponds to the set of the Gödel numberings of first-order formulas $\varphi$ such that $L\models\varphi(\aleph_0,\aleph_1...\aleph_n)$ In fact, $L_\kappa\prec L$ for every Silver indiscernible, and thus for every uncountable cardinal. Given any set $X\in L$ which is first-order definable in $L$, $X\in L_{\omega_1}$. This of course implies that $\aleph_1$ is not first-order definable in $L$, because $\aleph_1\not\in L_{\omega_1}$. This is already a disproof of $V=L$ (because $\aleph_1$ is first-order definable). For every $\alpha\in\omega_1^L$, every Silver indiscernible (and in particular every uncountable cardinal) in $L$ is a Silver cardinal, $\alpha$-iterable, $\geq$ an $\alpha$-Erdős, totally ineffable and completely remarkable and has most other virtual large cardinal properties and other large cardinal properties consistent with $V=L$.[1][2] There are only countably many reals in $L$, i.e. $|\R\cap L|=\aleph_0$ in $V$. By elementary-embedding absoluteness results (The hypothesis can be weakened, because one can chop at off the universe at any Silver indiscernible and use reflection.):[3] Therefore every Silver indiscernible is virtually $A$-extendible in $L$ for every definable class $A$ and is the critical point of virtual rank-into-rank embeddings with targets as high as desired and fixed points as high above the critical sequence as desired. The following statements are equivalent: There is an uncountable set of Silver indiscernibles (i.e. "$0^\#$ exists") There is a proper class of Silver indiscernibles (unboundedly many of them). There is a unique well-founded remarkable E.M. set (see below). Jensen's Covering Theorem fails (see below). $L$ is thin, i.e. $|L\cap V_\alpha|=|\alpha|$ for all $\alpha\geq\omega$. $\Sigma^1_1$-determinacy (lightface form). $\aleph_\omega$ is regular (hence weakly inaccessible) in $L$. There is a nontrivial elementary embedding $j:L\to L$. There is a proper class of nontrivial elementary embeddings $j:L\to L$. There is a nontrivial elementary embedding $j:L_\alpha\to L_\beta$ with $\text{crit}(j)<|\alpha|$. The existence of $0^\#$ is implied by: Chang's conjecture Both $\omega_1$ and $\omega_2$ being singular (requires $\neg\text{AC}$). The negation of the singular cardinal hypothesis ($\text{SCH}$). The existence of an $\omega_1$-iterable cardinal or of a $\omega_1$-Erdős cardinal. The existence of a weakly compact cardinal $\kappa$ such that $|(\kappa^+)^L|=\kappa$. The existence of some uncountable regular cardinal $\kappa$ such that every constructible $X\subseteq\kappa$ either contains or is disjoint from a closed unbounded set. Note that if $0^{\#}$ exists then for every Silver indiscernible (in particular for every uncountable cardinal) there is a nontrivial elementary embedding $j:L\rightarrow L$ with that indiscernible as its critical point. Thus if any such embedding exists, then a proper class of those embeddings exists. 
Nonexistence of $0^\#$, Jensen's Covering Theorem EM blueprints and alternative characterizations of $0^\#$ An EM blueprint (Ehrenfeucht-Mostowski blueprint) $T$ is any theory of the form $\{\varphi:(L_\delta;\in,\alpha_0,\alpha_1...)\models\varphi\}$ for some ordinal $\delta>\omega$ and $\alpha_0<\alpha_1<\alpha_2...$ are indiscernible in the structure $L_\delta$. Roughly speaking, it's the set of all true statements about $\alpha_0,\alpha_1,\alpha_2...$ in $L_\delta$. For an EM blueprint $T=\{\varphi:(L_\delta;\in,\alpha_0,\alpha_1...)\models\varphi\}$, the theory $T^{-}$ is defined as $\{\varphi:L_\delta\models\varphi\}$ (the set of truths about any definable elements of $L_\delta$). Then, the structure $\mathcal{M}(T,\alpha)=(M(T,\alpha);E)\models T^{-}$ has a very technical definition, but it is indeed uniquely (up to isomorphism) the only structure which satisfies the existence of a set $X$ of $\mathcal{M}(T,\alpha)$-ordinals such that: $X$ is a set of indiscernibles for $\mathcal{M}(T,\alpha)$ and $(X;E)\cong\alpha$ ($X$ has order-type $\alpha$ with respect to $\mathcal{M}(T,\alpha)$) For any formula $\varphi$ and any $x<y<z...$ with $x,y,z...\in X$, $\mathcal{M}(T,\alpha)\models\varphi(x,y,z...)$ iff $\mathcal{M}(T,\alpha)\models\varphi(\alpha_0,\alpha_1,\alpha_2...)$ where $\alpha_0,\alpha_1...$ are the indiscernibles used in the EM blueprint. If $<$ is an $\mathcal{M}(T,\alpha)$-definable $\mathcal{M}(T,\alpha)$-well-ordering of $\mathcal{M}(T,\alpha)$, then: $$\mathcal{M}(T,\alpha)=\{\min{}_<^{\mathcal{M}(T,\alpha)}\{x:\mathcal{M}(T,\alpha)\models\varphi[x,a,b,c...]\}:\varphi\in\mathcal{L}_\in\text{ and } a,b,c...\in X\}$$ $0^\#$ is then defined as the unique EM blueprint $T$ such that: $\mathcal{M}(T,\alpha)$ is isomorphic to a transitive model $M(T,\alpha)$ of ZFC for every $\alpha$ For any infinite $\alpha$, the set of indiscernibles $X$ associated with $M(T,\alpha)$ can be made cofinal in $\text{Ord}^{M(T,\alpha)}$. The $L_\delta$-indiscernables $\beta_0<\beta_1...$ can be made so that if $<$ is an $M(T,\alpha)$-definable well-ordering of $M(T,\alpha)$, then for any $(m+n+2)$-ary formula $\varphi$ such that $\min_<^{M(T,\alpha)}\{x:\varphi[x,\beta_0,\beta_1...\beta_{m+n}]\}<\beta_m$, then: $$\min{}_<^{M(T,\alpha)}\{x:\varphi[x,\beta_0,\beta_1...\beta_{m+n}]\}=\min{}_<^{M(T,\alpha)}\{x:\varphi[x,\beta_0,\beta_1...\beta_{m-1},\beta_{m+n+1}...\beta_{m+2n+1}]\}$$ If the EM blueprint meets 1. then it is called well-founded. If it meets 2. and 3. then it is called remarkable. If $0^\#$ exists (i.e. there is a well-founded remarkable EM blueprint) then it happens to be equivalent to the set of all $\varphi$ such that $L\models\varphi[\kappa_0,\kappa_1...]$ for some uncountable cardinals $\kappa_0,\kappa_1...<\aleph_\omega$. This is because the associated $M(T,\alpha)$ will always have $M(T,\alpha)\prec L$ and furthermore $\kappa_0,\kappa_1...$ would be indiscernibles for $L$. $0^\#$ exists interestingly iff some $L_\delta$ has an uncountable set of indiscernables. If $0^\#$ exists, then there is some uncountable $\delta$ such that $M(0^\#,\omega_1)=L_\delta$ and $L_\delta$ therefore has an uncountable set of indiscernables. On the other hand, if some $L_\delta$ has an uncountable set of indiscernables, then the EM blueprint of $L_\delta$ is $0^\#$. Sharps of arbitrary sets Generalisations $0^\P$ (zero pistol) is connected with strong cardinals. 
$¬ 0^\P$ (not zero pistol) means that a core model may be built with a strong cardinal, but that there is no class of indiscernibles for it that is closed and unbounded in $\mathrm{Ord}$.[5] $0^¶$ is “the sharp for a strong cardinal”, meaning the minimal sound active mouse $\mathcal{M}$ with $\mathcal{M} | \mathrm{crit}(\dot F^{\mathcal{M}}) \models \text{“There exists a strong cardinal”}$, with $\dot F^{\mathcal{M}}$ being the top extender of $\mathcal{M}$.[6] References: Jech, Thomas J. Set Theory (The 3rd Millennium Ed.). Springer, 2003. user46667, Gödel's Constructible Universe in Infinitary Logics (A Possible Approach to HOD Problem), URL (version: 2014-03-17): https://mathoverflow.net/q/156940 Chang, C. C. (1971), "Sets Constructible Using $\mathcal{L}_{\kappa,\kappa}$", Axiomatic Set Theory, Proc. Sympos. Pure Math., XIII, Part I, Providence, R.I.: Amer. Math. Soc., pp. 1–8. Gitman, Victoria and Schindler, Ralf. Virtual large cardinals. Bagaria, Joan, Gitman, Victoria and Schindler, Ralf. Generic Vopěnka's Principle, remarkable cardinals, and the weak Proper Forcing Axiom. Arch. Math. Logic 56(1-2):1–20, 2017. Gitman, Victoria and Hamkins, Joel David. A model of the generic Vopěnka principle in which the ordinals are not Mahlo. 2018. arXiv. Kanamori, Akihiro and Awerbuch-Friedlander, Tamara. The compleat 0†. Mathematical Logic Quarterly 36(2):133-141, 1990. Sharpe, Ian and Welch, Philip. Greatly Erdős cardinals with some generalizations to the Chang and Ramsey properties. Ann. Pure Appl. Logic 162(11):863–902, 2011. Nielsen, Dan Saattrup and Welch, Philip. Games and Ramsey-like cardinals. 2018. arXiv.
Although the atomic theory proposed by John Dalton created a basic structure of the atom, the general idea of molecules was not yet clear. In 1809, French chemist Joseph-Louis Gay-Lussac and others began doing numerous experiments with gases, measuring the amounts of gas that actually reacted. They found that two volumes of hydrogen gas reacted with one volume of oxygen gas to form two volumes of water vapour, and that one volume of hydrogen gas reacted with one volume of chlorine gas to form two volumes of hydrogen chloride gas. \(2H_{2}+O_{2}\rightarrow2H_{2}O\) \(H_{2}+Cl_{2}\rightarrow2HCl\) In 1811, Avogadro proposed the following law: “Equal volumes of ideal or perfect gases, at the same temperature and pressure, contain the same number of particles, or molecules.” This law was later confirmed experimentally. On the basis of Avogadro's law, it became possible to compare the relative weights of various molecules and atoms. According to Avogadro's Law: \(\frac{V_1}{n_1}=\frac{V_2}{n_2}=Constant\) n: Number of moles, V: Volume, T: Temperature (Constant), P: Pressure (Constant) Example: The reaction in which hydrogen and oxygen combine to form water can be displayed as the following. Avogadro's Constant: The number of molecules in one mole, that is, the number of atoms in exactly 12 grams of carbon-12. \(Avogadro's\:Constant=N_A=6.0221367\times10^{23}\:mol^{-1}\)
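Since the figure referred to by the "Example" sentence above is not reproduced here, the following short worked illustration (with made-up volumes of my own) shows what Avogadro's law predicts for that reaction: because equal volumes contain equal numbers of molecules at fixed \(T\) and \(P\), the volume ratio equals the mole ratio in \(2H_{2}+O_{2}\rightarrow2H_{2}O\), namely \(2:1:2\). So if, say, 4 L of hydrogen react completely, then \(V_{O_2}=4\:L\times\tfrac{1}{2}=2\:L\) of oxygen are consumed and \(V_{H_2O}=4\:L\times\tfrac{2}{2}=4\:L\) of water vapour are formed.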
I am struggling with the proof that if $G$ has a faithful complex irreducible representation then $Z(G)$ is cyclic: "Let $\rho:G \rightarrow GL(V)$ be a faithful complex irreducible representation. Let $z \in Z(G)$. Consider the map $\phi_z: v \mapsto zv$ for all $v \in V$. This is a G-endomorphism on $V$, hence is multiplication by a scalar $\mu_z$" I keep coming across the term G-homomorphism. For instance in Schur's Lemma: "...Then any G-homomorphism $\theta:V \rightarrow W$ is 0 or an isomorphism". What exactly is a G-homomorphism? Then the map $Z(G) \rightarrow \mathbb{C}^\times, z \mapsto \mu_z$, is a representation of $Z(G)$ and is faithful (since $\rho$ is). Thus $Z(G)$ is isomorphic to a finite subgroup of $\mathbb{C}^\times$, hence is cyclic. What is the justification for this last sentence?
Random dynamics of non-autonomous semi-linear degenerate parabolic equations on $\mathbb{R}^N$ driven by an unbounded additive noise. School of Mathematics and Statistics, Chongqing Technology and Business University, Chongqing 400067, China. In this paper, we study the dynamics of a non-autonomous semi-linear degenerate parabolic equation on $\mathbb{R}^N$ driven by an unbounded additive noise. The nonlinearity has $(p,q)$-exponent growth and the degeneracy means that the diffusion coefficient $σ$ is unbounded and allowed to vanish at some points. Firstly we prove the existence of pullback attractor in $L^2(\mathbb{R}^N)$ by using a compact embedding of the weighted Sobolev space. Secondly we establish the higher-attraction of the pullback attractor in $L^δ(\mathbb{R}^N)$, which implies that the cocycle is absorbing in $L^δ(\mathbb{R}^N)$ after a translation by the complete orbit, for arbitrary $δ∈[2,∞)$. Thirdly we verify that the derived $L^2$-pullback attractor is in fact a compact attractor in $L^p(\mathbb{R}^N)\cap L^q(\mathbb{R}^N)\cap D_0^{1,2}(\mathbb{R}^N,σ)$, mainly by means of the estimate of difference of solutions instead of the usual truncation method. Keywords: Non-autonomous degenerate semilinear parabolic equations, unbounded additive noise, pullback attractor, higher-order attraction, asymptotically compact. Mathematics Subject Classification: Primary: 60H15, 35B40, 35B41; Secondary: 37H10. Citation: Wenqiang Zhao. Random dynamics of non-autonomous semi-linear degenerate parabolic equations on $\mathbb{R}^N$ driven by an unbounded additive noise. Discrete & Continuous Dynamical Systems - B, 2018, 23 (6): 2499-2526. doi: 10.3934/dcdsb.2018065
I have to express the following problem as a semidefinite program $$\begin{array}{ll} \text{minimize} & F(x,y) := x + y +1\\ \text{subject to} & (x-1)^2+y^2 \leq 1 \tag{1}\end{array}$$ Only affine equality conditions should be used. The hint was to examine the structure of $\mathbb{S}^2_+$, the cone of symmetric positive semidefinite matrices. The characteristic polynomial of such a matrix is $$C=\lambda^2-(a_{11}+a_{22})\lambda - a_{12}a_{21}$$ which has a similar form to the rewritten condition (1) $x^2-2x+y^2\leq0$. If $a_{11}=a_{22}=1,\lambda = x$ and $a_{12}=-a_{21}=y$ the characteristic polynomial would be $x^2-2x+y^2=0$. Is this useful? My problem is, that I have no clue how to formulate a $\leq$ with equality conditions.
I have been trying to find the algorithmic complexity of a problem that I have. I am almost sure it is either NP-hard or NP-complete but I cannot find any proof. Recently, I found that my problem can be something similar to a special instance of the Maximum Capacity Representatives problem, which is NP-complete. However, the objective function to optimize in my case is different than the one in the MCR problem. The problem that I am trying to solve is the following: INSTANCE: Disjoint sets $S_1, \ldots, S_m$ and, for any $i \neq j$, $x \in S_i$, and $y \in S_j$, a nonnegative capacity $c(x,y)$. SOLUTION: A system of representatives $T$, i.e., a set $T$ such that, for any $i$, $\vert T \cap S_i\vert=1$. MEASURE: $\min \{c(x,y): x,y \in T \}$. And my goal is to maximize $\min \{c(x,y): x,y \in T \}$. Do you know any way to determine the complexity of my problem? Is there any well known problem in the literature that can be reduced to this one?, Thanks in advance.
Update: This implementation is now a package called CompoundMatrixMethod, hosted on github. It can be installed easily by evaluating: Needs["PacletManager`"] PacletInstall["CompoundMatrixMethod", "Site" -> "http://raw.githubusercontent.com/paclets/Repository/master"] This version also includes a function ToMatrixSystem which converts a system of ODEs to matrix form (and linearises if necessary), including the boundary conditions. This eliminates the need to set the matrices directly, and also specifies which variable is the eigenvalue, simplifying the notation. Please use the package rather than the code below. I've written an implementation of the Compound Matrix Method that suits my purposes, and so I'll put it here for other people. A good explanation of this method are available here. Basically the Compound Matrix Method takes an $n$ by $n$ eigenvalue problem of the form $$\mathbf{y}' = A(x, \lambda) \mathbf{y}, \quad a \leq x \leq b, \\ B(x,\lambda) \mathbf{y} = \mathbf{0}, \quad x=a, \\ C(x,\lambda) \mathbf{y} = \mathbf{0}, \quad x=b,$$ and converts it to a larger system of determinants that satisfy a different matrix equation $$ \mathbf{\phi}' = Q(x, \lambda) \mathbf{\phi}.$$This removes a lot of the stiffness from the equations, as well as being able to also remove the exponential growth terms that dominate away from an eigenvalue. The code is written for general size $n$, and I've used it for $n=10$. The first time you run the code for a particular size $n$ the general form of matrix $\mathbf{Q}$ will be calculated, for $n=10$ this takes about 3 minutes for me, after that the matrix will be cached. The matching should be independent of the choice of matching point, but you can change it in the code to check that. reprules = ϕ[a_List] :> Signature[a] ϕ[Sort[a]]; minorsDerivs[list_?VectorQ,len_?NumericQ] := Sum[Sum[AA[y, z] ϕ[list /. y -> z], {z, Union[Complement[Range[len], list], {y}]}], {y, list}] /. reprules qComponents[n_?NumericQ, len_?NumericQ] := qComponents[n, len] = Coefficient[Table[minorsDerivs[ii, len], {ii, Subsets[Range[len], {len/2}]}] /.Thread[Subsets[Range[len], {len/2}] -> Range[Binomial[len, len/2]]], \[Phi][n]] Evans[{λ_/;!NumericQ[λ], λλ_?NumericQ}, Amat_?MatrixQ, bvec_?MatrixQ, cvec_?MatrixQ, {x_ /;!NumericQ[x], xa_?NumericQ, xb_?NumericQ,xmatch_:False}] := Module[{ya, yb, ϕpa, ϕmb, valsleft, valsright, ϕpainit, ϕmbinit, posint, negint, ϕmvec, ϕpvec, det, QQ, len, subsets,matchpt}, len = Length[Amat]; If[(xa <= xmatch <= xb && NumericQ[xmatch]), matchpt = xmatch, matchpt = (xb - xa)/2]; If[!EvenQ[len], Print["Matrix A does not have even dimension"]; Abort[]]; If[Length[Amat] != Length[Transpose[Amat]],Print["Matrix A is not a square matrix"]; Abort[]]; subsets = Subsets[Range[len], {len/2}]; ya = NullSpace[bvec]; If[Length[ya] != len/2, Print["Rank of matrix B is not correct"];Abort[]]; yb = NullSpace[cvec]; If[Length[yb] != len/2, Print["Rank of matrix C is not correct"];Abort[]]; ϕmvec = Table[ϕm[i][x], {i, 1, Length[subsets]}]; ϕpvec = Table[ϕp[i][x], {i, 1, Length[subsets]}]; ϕpa = (Det[Transpose[ya][[#]]] & /@ subsets); ϕmb = (Det[Transpose[yb][[#]]] & /@ subsets); valsleft = Select[Eigenvalues[Amat /. x -> xa /. λ -> λλ], Re[#] > 0 &]; valsright = Select[Eigenvalues[Amat /. x -> xb /. λ -> λλ], Re[#] < 0 &]; ϕpainit = Thread[Through[Array[ϕp, {Length[subsets]}][xa]] == ϕpa]; ϕmbinit = Thread[Through[Array[ϕm, {Length[subsets]}][xb]] == ϕmb]; QQ = Transpose[Table[qComponents[i, len], {i, 1, Length[subsets]}]] /. AA[i_, j_] :> Amat[[i, j]] /. 
λ -> λλ; posint = NDSolve[{Thread[D[ϕpvec,x] == (QQ - Total[Re@valsleft] IdentityMatrix[Length[QQ]]).ϕpvec], ϕpainit}, Array[ϕp, {Length[subsets]}], {x, xa, xb}][[1]]; negint = NDSolve[{Thread[D[ϕmvec,x] == (QQ - Total[Re@valsright] IdentityMatrix[Length[QQ]]).ϕmvec], ϕmbinit}, Array[ϕm, {Length[subsets]}], {x, xa, xb}][[1]]; det = Total@Table[ϕm[i][x] ϕp[Complement[Range[len], i]][x] (-1)^(Total[Range[len/2] + i]) //. reprules /. Thread[subsets -> Range[Length[subsets]]], {i, subsets}]; Exp[-Integrate[Tr[Amat], {x, xa, matchpt}]] det /. x -> matchpt /. posint /. negint] For a simple 2nd order eigenvalue problem, $y''(x) + \lambda y(x) = 0, y(0)=y(L)=0$, the roots can be found analytically as $n \pi/L, n \in \mathbb{Z}$. Here the matrix $A$ is {{0,1}, {-\[Lambda]^2, 0}}, and the BCs are DiagonalMatrix[{1, 0}]: Plot[Evans[{λ, λλ}, {{0, 1}, {-λ^2, 0}}, DiagonalMatrix[{1, 0}], DiagonalMatrix[{1, 0}], {x, 0, 2}], {λλ, 0.1, 20}] Changing the boundary conditions is straight forward, so for a Robin BCs like $y(0)+2y'(0)=0$ the corresponding matrix $B$ would be {{1, 2}, {0, 0}}. For the first 4th order example in the linked notes $$\epsilon^4 y''''(x) + 2 \epsilon^2 \lambda \frac{d}{dx}\left[\sin(x) \frac{dy}{dx}\right]+y =0, \\ y(0) = y''(0) = y'(\pi/2) = y'''(\pi/2) = 0,$$ the matrices are given by: A1={{0,1,0,0}, {0,0,1,0}, {0,0,0,1}, {-1/ϵ^4, -2 ω Cos[x]/ϵ^2, -2 ω Sin[x]/ϵ^2, 0}}; B1 = DiagonalMatrix[{1,0,1,0}]; C1 = DiagonalMatrix[{0,1,0,1}]; Evans[{ω, 1}, A1 /. ϵ-> 0.1, B1, C1, {x, 0, Pi/2}] (* -0.650472 *) And we can then vary the value of $\omega$ to see the roots: Plot[Evans[{ω, ωω}, A1 /.ϵ->0.1, B1, C1, {x, 0, Pi/2}], {ωω, 1, 3}] For a 10x10 example similar to my original question (that has positive eigenvalues): A2 = {{0, 1, 0, 0, 0, 0, 5, 0, -5, 0}, {0, 0, 1, 0, 0, 0, 0, 0, 0, 0}, {0, 0, 0, 1, 0, 0, 0, 0, 0, 0}, {-625 ω, -(125/2), 2, 0, 0, 3, -300, 0, 300, 0}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0}, {0, 0, 0, -1.5, 1/2, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 1, 0, 0}, {0, -169, 0, 0, 0, 0, 9175 + 694 ω, 0, 811, 0}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1}, {0, 672, 0, 0, 0, 0, 3222, 0, -709 + 694 ω, 0}}; B2 = C2 = DiagonalMatrix[{0, 1, 1, 0, 1, 0, 0, 1, 0, 1}]; Evans[{ω, 1}, A2, B2, C2, {x, 0, 1}] (* 0.672945 *) We can plot and see some positive eigenvalues: ListPlot[Table[{ωω,Evans[{ω, ωω}, A2, B2, C2, {x, 0, 1}]},{ωω,0.1,1,0.01}] And then FindRoot will find one: FindRoot[Evans[{ω, ωω}, A2, B2, C2, {x, 0, 1}],{ωω,0.5}] The eigenfunctions can be extracted from this method if required, but I haven't coded that here. The subtraction of the dominant growing eigenvalues from $Q$ may not be suitable for all problems, but is really useful when it works. It will also use exact numbers if you give them in the original matrices, so it'll be faster if you give an approximate number.
You already have a correct answer, but I'd like to show you a little trick that allows you to derive the solution without solving any integrals. You've already figured out the analytical expression for $X_1(j\omega)$, which is an important step. Now you could observe that this expression contains a factor $j\omega$. Remember that multiplication by $j\omega$ in the frequency domain corresponds to differentiation in the time domain: $$\mathcal{F}\left\{\frac{dx(t)}{dt}\right\}\Longleftrightarrow j\omega X(j\omega)\tag{1}$$ This means that you could obtain the function $x_1(t)$ by first deriving the inverse Fourier transform of $$X(j\omega)=\frac{X_1(j\omega)}{j\omega}=3[u(\omega+3\pi)-u(\omega-3\pi)]\tag{2}$$ which is a simple rectangular function. Its corresponding time-domain function is $$x(t)=\frac{3\sin(3\pi t)}{\pi t}\tag{3}$$ This is a basic Fourier transform relationship, which you should know by heart. The desired function $x_1(t)$ is now given by the derivative of $(3)$: $$\begin{align}x_1(t)=\frac{dx(t)}{dt}&=\frac{(3\pi)^2 t\cos(3\pi t)-3\pi\sin(3\pi t)}{(\pi t)^2}\\&=\frac{3}{\pi t^2}[3\pi t\cos(3\pi t)-\sin(3\pi t)]\tag{4}\end{align}$$
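As a quick numerical cross-check of this derivation (my own addition, not part of the original answer), one can compare equation $(4)$ with a finite-difference derivative of equation $(3)$; the agreement is limited only by the finite-difference step. A small Python sketch:

import numpy as np

t = np.linspace(0.1, 2.0, 2000)                  # avoid t = 0, where the closed forms are 0/0
x  = 3 * np.sin(3 * np.pi * t) / (np.pi * t)     # equation (3)
x1 = 3 / (np.pi * t**2) * (3 * np.pi * t * np.cos(3 * np.pi * t) - np.sin(3 * np.pi * t))  # equation (4)
print(np.max(np.abs(x1 - np.gradient(x, t))))    # small (~1e-3), i.e. finite-difference error only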
Fermat's little theorem says that the congruence $a^p \equiv a \pmod{p}$ holds if $p$ is a prime number. The congruence $a^{n+1} \equiv a \pmod{n}$ holds for all integers $a$ for some positive integers $n$; how can we characterize the positive integers $n$ for which $a^{n+1} \equiv a \pmod{n}$ holds? This is one of Don Zagier's problems: http://www-groups.dcs.st-and.ac.uk/~john/Zagier/Problems.html First Day 1. I claim that this holds for $n$ if and only if $n$ is square-free and for all prime divisors $p$ of $n$, we have $p-1\mid n$. Let $n=p_1^{k_1}\cdots p_r^{k_r}$ be the prime factor decomposition of $n$. If at least one of the $k_i$ is greater than $1$, we see that $a:=p_1\cdots p_r$ is a counter-example, since its $(n+1)$-th power will be congruent to $0$ modulo $n$. Hence $n$ must necessarily be square-free: $n=p_1\cdots p_r$. Now for any $a\in\mathbb{Z}$, the congruence $a^{n+1}\equiv a\hspace{3pt}(\mathrm{mod}\hspace{3pt}n)$ holds by definition if and only if $n\mid a^{n+1}-a$, that is, if and only if for all $i=1,\ldots,r$, $p_i\mid a(a^n-1)$. So, for each $i=1,\ldots,r$, precisely one of the two statements $p_i\mid a$ and $p_i\nmid a \wedge \mathrm{ord}_{p_i}(a)\mid n$ must be satisfied, where $\mathrm{ord}_p$ denotes the multiplicative order modulo $p$. Choosing, for $i=1,\ldots,r$, $a_i$ to be a primitive root modulo $p_i$, we see that necessarily $p_i-1\mid n$ for all $i=1,\ldots,r$, and this is sufficient as well.
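A brute-force check of this characterization for small $n$ (my own addition; the function names are arbitrary) takes only a few lines of Python and reproduces the known values $n = 1, 2, 6, 42, 1806$ below $2000$:

def holds_for_all_a(n):
    # a^(n+1) ≡ a (mod n) for every residue a
    return all(pow(a, n + 1, n) == a % n for a in range(n))

def criterion(n):
    # n squarefree and (p - 1) | n for every prime p dividing n
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0 or n % (p - 1) != 0:
                return False
        p += 1
    return m == 1 or n % (m - 1) == 0

print([n for n in range(1, 2000) if holds_for_all_a(n)])                # [1, 2, 6, 42, 1806]
print(all(holds_for_all_a(n) == criterion(n) for n in range(1, 2000)))  # True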
Let $T = {\mathbb R}/{\mathbb Z}$ be the $1$-torus. Let $a_{ij}$ be integers, $1 \leq i \leq m$, $1 \leq j \leq n$ and $A$ the $m \times n$ matrix whose $(i,j)$ entry is $a_{ij}$. Consider the following system of $m$ linear equations: $$\left\{\begin{array}{rl} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n & = \overline{0}\\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n & = \overline{0}\\ \vdots &\\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n & = \overline{0} \end{array}\right.$$ where $x_1,x_2,...,x_n \in T$. The set of solutions $S$ is obviously a subgroup of $T^n$. Let $S_0$ be the connected component containing the trivial solution $(0,0,...,0)$. I would like to understand the quotient group $S/S_0$. By "understand" I mean compute it in an algorithmic way, i.e., something which can be implemented on a computer. What is the right way to think about something like this? Thank you. Remark: For instance, if $m=n$ and $A$ is invertible, then it is fairly easy to show that $S/S_0$ has order $|\det A|$. (One way to do it is geometric, taking the cup product of the Poincaré duals of the codimension-1 submanifolds corresponding to solutions of the individual equations.) Of course, this doesn't come close to answering the question. P.S. My motivation for asking this question comes from algebraic geometry.
This question may be more suited for physics.stackexchange, but I saw this post was recommended for StackOverflow or Computational Science, so I'm asking my question here. I am trying to write a program in Matlab to evaluate an expression with fermionic annihilation and creation operators. Define $$\{f, g \} \equiv fg + gf$$ then the fermionic second quantization operators satisfy $$\{a_i,a_j \} = \{ a_i^\dagger, a_j^\dagger\} = 0 \qquad \{a_i^\dagger, a_j\} = \delta_{ij}$$ $\forall i, j \in \{1, 2, ..., N \}$ for some given $N$. $W(t)$ is a $2N \times 2N$ matrix that I know that gives me the time evolution of the operators in the Heisenberg picture; \begin{align*} a_i(t) &= \sum_{j=1}^N \left(W_{2i-1,2j-1}(t)a_j + W_{2i-1,2j}(t)a_j^\dagger\right)\\ a_i^\dagger(t) &= \sum_{j=1}^N \left(W_{2i,2j-1}(t)a_j + W_{2i,2j}(t)a_j^\dagger\right) \end{align*} i.e. $$\left(\begin{matrix}a_1(t)\\a_1^\dagger(t)\\\vdots\\a_N(t)\\a_N^\dagger(t) \end{matrix}\right) = W(t)\left(\begin{matrix}a_1\\a_1^\dagger\\\vdots\\a_N\\a_N^\dagger \end{matrix}\right)$$ I want to write a function that takes in $W(t)$ (evaluated at a given $t$) and $j$ and returns $$\langle \psi | \left(\prod_{i=1}^{j-1}\left(1-2a_i^\dagger(t)a_i(t) \right) \right)\left(a_j(t)^\dagger + a_j(t) \right) | \psi \rangle$$ where $$| \psi \rangle = \left(\prod_{i=1}^N\frac{1+a_i^\dagger}{\sqrt{2}} \right)|0\rangle = 2^{-N/2}\left(|0...0\rangle + |0...1 \rangle + |0...1 \ 0 \rangle + |0 ...1 \ 1 \rangle + ... + |1...1 \rangle \right)$$ $|\psi\rangle$ is just a normalized, equally weighted linear combination of all states. Note that it does not change with time. The time evolution is encoded in the operators. My thought was to use Matlab's symbolic algebra, and try and code for what each operator does to a state. But the symbolic algebra does not always preserve order, which is crucial because the operators don't always commute. Also, trying to just naively go through is so computationally expensive and grows exponentially with $N$. Any ideas?
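One concrete way to experiment with this setup (this is my own suggestion, not something from the question, and it uses Python rather than Matlab purely for illustration) is to represent the $a_i$, $a_i^\dagger$ explicitly as $2^N\times 2^N$ matrices via the Jordan–Wigner construction; the exponential growth in $N$ mentioned in the question is then explicit in the matrix size. A minimal sketch, with all names mine:

import numpy as np
from functools import reduce

def annihilation(i, N):
    # Jordan-Wigner: a_i = sz ⊗ ... ⊗ sz ⊗ a ⊗ I ⊗ ... ⊗ I, with sz on the i modes before mode i
    sz = np.diag([1.0, -1.0])
    a = np.array([[0.0, 1.0], [0.0, 0.0]])   # lowers |1> to |0> on a single mode
    I = np.eye(2)
    return reduce(np.kron, [sz] * i + [a] + [I] * (N - i - 1))

N = 3
ops = [annihilation(i, N) for i in range(N)]
for i in range(N):                            # check {a_i, a_j} = 0 and {a_i, a_j^dagger} = delta_ij
    for j in range(N):
        assert np.allclose(ops[i] @ ops[j] + ops[j] @ ops[i], 0)
        acomm = ops[i] @ ops[j].T + ops[j].T @ ops[i]
        assert np.allclose(acomm, np.eye(2**N) if i == j else np.zeros((2**N, 2**N)))

# |psi> = prod_i (1 + a_i^dagger)/sqrt(2) |0>; signs of individual components depend on the ordering
vac = np.zeros(2**N); vac[0] = 1.0
psi = vac.copy()
for i in range(N):
    psi = (psi + ops[i].T @ psi) / np.sqrt(2)
# expectation values like <psi| O |psi> are then plain vector-matrix products: psi @ O @ psi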
Find all primitive roots modulo $13$. We show $2$ is a primitive root first. Note that $\varphi(13)=12=2^2\cdot3$. So the order of $2$ modulo $13$ is $2,3,4,6$ or $12$. \begin{align} 2^2\not\equiv1\mod{13}\\ 2^3\not\equiv1\mod{13}\\ 2^4\not\equiv1\mod{13}\\ 2^6\not\equiv1\mod{13}\\ 2^{12}\equiv1\mod{13} \end{align} Hence $2$ has order $12$ modulo 13 and is therefore a primitive root modulo $13$. Now note all even powers of $2$ can't be primitive roots as they are squares modulo $13$. $(*)$ There are $\varphi(12)=4$ primitive roots modulo $13$. These must therefore be $$2,2^5=6,2^7=11,2^{11}=7\mod{13}.$$ Questions: Why do we only check whether $2^d\equiv1\pmod{13}$ for divisors $d$ of $\varphi(13)$? Does the line marked $(*)$ mean they can be written in the form $a^2$? Why does this mean they can't be a primitive root? I thought $\varphi(12)$ counts the number of integers coprime to $12$. Why does this now suddenly tell us the number of primitive roots modulo $13$? How have these powers been plucked out of thin air? I understand even powers can't be primitive roots, and we have shown $2^3$ can't be a primitive root above, but what about $2^9$?
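As a quick sanity check of this list (my own addition), a few lines of Python confirm that the primitive roots modulo $13$ are exactly $2, 6, 7, 11$:

def order(a, p):
    # multiplicative order of a modulo prime p
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

p = 13
print([a for a in range(1, p) if order(a, p) == p - 1])   # [2, 6, 7, 11]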
Suppose we have a function $f(x,y)$ differentiable as many times as you like in $\mathbb{R}^2$. The gradient is given by $$ \nabla f (x,y) = \left(f_x,f_y \right)^T $$ The cosine and sine of such a vector are given by $$ \left\{ \begin{array}{l} \cos \alpha = \frac{f_x}{\lVert \nabla f \rVert} \\ \sin \alpha = \frac{f_y}{\lVert \nabla f \rVert} \\ \end{array} \right. , $$ I also define $u_\alpha = (\cos \alpha, \sin \alpha)^T$. I want to compute $ \nabla_{\alpha} \left( \lVert \nabla f \rVert \right) $ which should be given by $$ \nabla_{\alpha} \left( \lVert \nabla f \rVert \right) = \langle \nabla \left( \lVert \nabla f \rVert \right) , u_{\alpha} \rangle = \frac{f_{xx} f_x}{\lVert \nabla f \rVert} \cdot \frac{f_x}{\lVert \nabla f \rVert} + \frac{f_{yy} f_y}{\lVert \nabla f \rVert} \cdot \frac{f_y}{\lVert \nabla f \rVert} = \\ \left(f_{xx} + f_{yy} \right) \cdot \left( \frac{f_x^2}{\lVert \nabla f \rVert^2} + \frac{f_y^2}{\lVert \nabla f \rVert^2} \right) = f_{xx} + f_{yy} = \nabla^2 f $$ The question is: is this derivation of the Laplacian operator rigorous? The reason for my question is given by the following quote, taken from a computer vision book (the topic is edge detection): For many applications, however, we wish to thin such a continuous gradient image to only return isolated edges, i.e., as single pixels at discrete locations along the edge contours. This can be achieved by looking for maxima in the edge strength (gradient magnitude) in a direction perpendicular to the edge orientation, i.e., along the gradient direction. Finding this maximum corresponds to taking a directional derivative of the strength field in the direction of the gradient and then looking for zero crossing. The desired directional derivative is equivalent to the dot product between a second gradient operator and the result of the first... The gradient dot product with the gradient is called the Laplacian. Thank you.
Following the work of Lu (1967) (Full text available here!) I got stuck trying to derive the elasticity of substitution between factors. He uses the formula developed by Allen, which, when the production function is linear and homogeneous, is the following: $$\sigma =\frac{\frac{\partial V}{\partial K}\frac{\partial V}{\partial L}}{V\frac{\partial^2 V}{\partial K\partial L}}$$ The partial derivatives with respect to K and L and the cross second-order partial derivative are the following: $$(1)\frac{\partial V}{\partial K}=\frac{dY}{dX}=\frac{1}{X}\left ( Y-\alpha X^{-\frac{c}{b}}Y^{\frac{1}{b}} \right )=\frac{1}{K}\left ( V-\frac{\partial V}{\partial L}\cdot L \right )$$ $$(2)\frac{\partial V}{\partial L}=Y-X\frac{dY}{dX}=\alpha X^{-\frac{c}{b}}Y^{\frac{1}{b}}$$ $$(3)\frac{\partial^2 V}{\partial K \partial L}=\frac{\alpha}{bL} X^{-\frac{c}{b}-1}Y^{\frac{1}{b}-1}\left ( X\frac{dY}{dX}-cY \right )$$ And the expected result (the one the author gets) is: $$\sigma =\frac{b}{1-\frac{cf}{cf'}}$$ Is there anyone who can help me with this? Thanks.
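I cannot check Lu's specific algebra here, but as a sanity check on Allen's formula itself (my own addition), one can verify symbolically that it returns $\sigma=1$ for a Cobb-Douglas function, which is linear and homogeneous. A short Python/SymPy sketch, with all symbol names mine:

import sympy as sp

K, L, a = sp.symbols('K L a', positive=True)
V = K**a * L**(1 - a)                                      # Cobb-Douglas, linearly homogeneous
sigma = sp.diff(V, K) * sp.diff(V, L) / (V * sp.diff(V, K, L))
print(sp.simplify(sigma))                                  # prints 1, as expected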
In number theory, it is often interesting to ask whether a statement can be proved without resorting to prime numbers (hence the "prime free" part). We are allowed to use the concept of coprime numbers, though. Now I have seen a prime free proof of $$\phi(mn)=\phi(m)\phi(n)$$ when $gcd(m,n):=(m,n)=1$. There is a generalization for any $m$ and $n$ given by $$\phi(mn) = \frac{d\phi(m)\phi(n)}{\phi(d)}$$ where $d=(m,n)$. So I started proving this as follows (no primes allowed): Proof: $$\phi(mn) = \sum_{k=1}^{mn}1_{\left\{(k,mn)=1\right\}}$$$$= \sum_{k=1}^{mn}1_{\left\{(k,m)=1\right\}}1_{\left\{(k,n)=1\right\}}$$ Let $k=(q-1)m+r$ where $q\in\{1,2,\cdots, n\}$ and $r\in\{1,2,\cdots, m\}$. Then we get $$\phi(mn) = \sum_{r=1}^m\sum_{q=1}^n 1_{\{((q-1)m+r,m)=1\}}1_{\{((q-1)m+r,n)=1\}}$$$$=\sum_{r=1}^m1_{\{(r,m)=1\}}\sum_{q=1}^n1_{\{((q-1)m+r,n)=1\}}$$ Given $r$, consider the collection $\{(q-1)m+r\}_{q=1}^n$ modulo $n$. Now pick any $q_1 \in \{1,2,...n\}$. Let $q_2=q_1+\frac{n}{d}$. Then we can easily see $q_1m\equiv q_2m \mbox{ mod } n$. Hence we get $d$ repetitions of the residues arising for $q=1$ to $n/d$: $$\sum_{q=1}^n1_{\{((q-1)m+r,n)=1\}} = d\sum_{q=1}^{n/d}1_{\{((q-1)m+r,n)=1\}}$$ So if I can show $$\sum_{q=1}^{n/d}1_{\{((q-1)m+r,n)=1\}} =\frac{\phi(n)}{\phi(d)},$$ I would be done. Note that RHS is an integer (although I am looking ahead and claiming it, I'm not using that here yet). Unfortunately, I do not know what to do at this point. It's like I should expand the sum in some way and then divide it again or something like that... I'd be grateful if someone could offer some useful advice on this matter. Most books I know use the prime number representation to prove it but I think it can be done without it. I have given an answer below. I think it is correct but I am open to ideas on how to improve it. Update: I have corrected some errors. Now that I've proved this in a prime-free way, the following are also prime free as corollaries: a) $d|n \Rightarrow \phi(d) | \phi(n)$. b) If $lcm(m,n) := [m,n]$, then $\phi(m)\phi(n) = \phi((m,n))\phi([m,n])$.
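For what it is worth, the generalized identity itself is easy to confirm numerically in a prime-free way, counting coprime residues directly (this check is mine and is of course not a proof):

from math import gcd

def phi(n):
    # prime-free definition: count the residues 1..n coprime to n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for m in range(1, 40):
    for n in range(1, 40):
        d = gcd(m, n)
        assert d * phi(m) * phi(n) == phi(m * n) * phi(d)   # phi(mn) = d*phi(m)*phi(n)/phi(d)
print("identity verified for 1 <= m, n < 40")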
Caveat: it can be misleading to write $\sigma$ for the noise variance instead of $\sigma^2$. The formula for a linear combination of "signals" (noise included), considered as a sum of random variables, is given in SE.Stats: Variance of Uncorrelated Variables: for observation $Y$ composed of signal $X$ and noise $N$: $$Y=X+N$$ one has: $$\mathop{var}(Y)=\mathop{var}(X)+\mathop{var}(N)+2\mathop{cov}(X,N)\,.$$ If the signal and the noise are correlated, you do not have a lot of options. If not, based on power formulas like: $$\sigma^\mathrm{dB} = 10 \log \left(\frac{\sigma}{\sigma^\mathrm{ref}}\right)$$ you can get: $$\sigma_Y^\mathrm{dB} = 10 \log \left(10 ^ {\frac{\sigma_X^\mathrm{dB}}{10}}+10 ^ {\frac{\sigma_N^\mathrm{dB}}{10}}\right)$$ In your case, if I am not making a huge mistake, this gives about 40 dB, since the noise is very low with respect to the signal. What is not clear to me is your purpose. If you create a noise with this total "variance", I am afraid you won't reach the expected result.
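The dB bookkeeping in the last formula amounts to adding the powers in linear units and converting back; a tiny Python helper (my own, with illustrative levels since the original signal and noise values are not shown here) makes the "about 40 dB" statement concrete for a 40 dB signal and 10 dB noise, assuming they are uncorrelated:

import math

def combine_db(signal_db, noise_db):
    # add the powers (variances) in linear units, then convert back to dB
    return 10 * math.log10(10 ** (signal_db / 10) + 10 ** (noise_db / 10))

print(combine_db(40.0, 10.0))   # ~40.004 dB: a much weaker noise barely moves the total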
This is how I'd approach the problem. Please point out any issues with this method as it is based on my own approach (I have no textbook to reference this to). Based on the information you have, you would run a regression of log output on log-labor, log-capital and log-human capital. This would give you a model like this: $$\ln(Y)=\beta_0+\beta_1\ln(L)+\beta_2\ln(K)+\beta_3\ln(H)+\mu$$ In terms of a more "economic looking" equation, we take the expectation of this equation and then raise $e$ to the power of both sides, giving us our production function. $$\mathbb{E}[\ln(Y)]=\mathbb{E}[\beta_0+\beta_1\ln(L)+\beta_2\ln(K)+\beta_3\ln(H)+\mu]$$ $$\ln(Y)=\beta_0+\beta_1\ln(L)+\beta_2\ln(K)+\beta_3\ln(H)$$ Recall that we view $\beta_0$, the coefficient standing in for the omitted variable $\ln(A)$, as the rate of technological change. $$\exp\{\ln(Y)\}=\exp\{\beta_0+\beta_1\ln(L)+\beta_2\ln(K)+\beta_3\ln(H)\}$$ (footnotes 1, 2) $$Y=A^{\beta_0}L^{\beta_1}K^{\beta_2}H^{\beta_3}$$ Using this form you can more comfortably calculate the elasticity of substitution between $L$ and $K$. If your elasticity of substitution is greater than or equal to 1 you have a labor-saving process; however, if the elasticity of substitution is less than 1, we have a process which is either a human-capital-augmenting or a TFP-augmenting process (footnote 3). Hope this helps 1. https://en.wikipedia.org/wiki/Solow_residual#Regression_analysis_and_the_Solow_residual 2. the actual "quantity" of $A$ can be calculated by $$A=\left(\frac{Y}{L^{\beta_1}K^{\beta_2}H^{\beta_3}}\right)^{\frac{1}{\beta_0}}$$ 3. this is of course assuming that either $\beta_3>0$ and/or $\beta_0>0$.
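A minimal sketch of the estimation step on synthetic data (entirely my own; the "true" coefficients, noise level and variable names below are made up) would look like this in Python; the fitted vector estimates $(\beta_0,\beta_1,\beta_2,\beta_3)$, and $A$ can then be recovered as described in footnote 2:

import numpy as np

rng = np.random.default_rng(0)
n_obs = 500
L_, K_, H_ = rng.lognormal(size=(3, n_obs))                    # synthetic inputs
b0, b1, b2, b3 = 0.3, 0.5, 0.3, 0.2                            # assumed "true" parameters
lnY = b0 + b1*np.log(L_) + b2*np.log(K_) + b3*np.log(H_) + 0.05*rng.standard_normal(n_obs)

X = np.column_stack([np.ones(n_obs), np.log(L_), np.log(K_), np.log(H_)])
beta_hat, *_ = np.linalg.lstsq(X, lnY, rcond=None)             # OLS estimates of (b0, b1, b2, b3)
print(beta_hat)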
First of all, here is a noise diode model for a PN junction from sourceforgeIt consists of the capacitance of the diode \$ c_d \$, the conduction (or resistance \$g_d = \frac{1}{r_d} \$) and a noise current source \$i_d\$. The resistance is a standard thermal noise source: \$S_{r_d} = 4 k T r_d\$ k is the Boltsmann constant T is absolute temperature and \$ r_d\$ is the resistance of the device The current noise source has shot noise and flicker noise: \$S_{i_d} = 2qI_d\Delta f + \frac{\alpha_H I_d}{f^{\gamma}N}\$ The first term is for shot noise, it is dependent on the electron charge \$q\$ which is equal to \$1.60217662 × 10^{-19}\$ coulombs and \$ I_d \$ is equal the current through the diode. The second term is for flicker (1/f) noise, and I'll gloss over the details becasue these parameters are unlikely to be found in a datasheet but is more complicated as \$\alpha_H\$ is dependent on semiconductor junction and has a value from \$5 × 10^{-6} \$ to \$ 2 × 10^{-3}\$ and \$\gamma\$ and \$\alpha\$ are material constants, but you would have to model these from experimental data for that device. However if you are using the device with AC coupling (and ignoring DC) you wouldn't have to worry about the low frequency 1/f noise. The noise source \$S_{i_d}\$ is also correleated, which I will not go into detail here either but the sourceforge link has the math there An ideal diode (pn- or schottky-diode) generates shot noise. Both types of current (field and diffusion) contribute independently to it. That is, even though the two currents flow in different directions ("minus" in dc current equation), they have to be added in the noise equation (current is proportional to noise power spectral density). Taking into account the dynamic conductance \$ g_d\$ in parallel to the noise current source, the noise wave correlation matrix writes as follows. (The sourceforge link has an explanation of this but so does Fundamentals of Industrial Electronics 11.2.2 (shot noise) and 11.2.4 (flicker noise) Here is another link for the Fundamentals of Industrial Electronics) Source: PN Junction diode sourceforge If you want to see what noise looks like, here is a great figure fromEDN. This is from a 12V zener and the noise has been gained up by a factor of 10. Notice that they cut the frequency of the graph off at 1Hz, if we were to see lower than this, 1/f noise would start to dominate. There is also a high frequency cuttoff above 100kHz that is not seen from the capacitance of the circuit. Figure 2 The power spectral density of the noise output is very flat from 1Hz to 100kHz. As a comparison, the noise of an LM317 regulator is also plotted, as this regulator is normally thought of as being very noisy. Now on to your questions. How does the quiescent current effect the noise density? and How much will the noise density vary with time and temperature at a given quiescent current? This will likely determine whether I need to incorporate AGC into my design. The quiescent current affects both shot noise and flicker noise through value \$I_d\$ although it would probably be easier to sum up the total currents through the device into \$I_d\$ and then model the shot and flicker noise. The problem is modeling a zener diode will require you to measure noise parameters as they are not measured by manufacturers (even noise curves are not provided on most datasheets). Since you have to take measurements, the easiest thing to do is to build the circuit with all these ideas in hand and measure the noise at the end of the day. 
At minimum, you could model the shot and thermal noise, and from 1 or 10Hz to the \$c_d\$ cutoff the noise should be white. What configuration will maximize the avalanche noise produced by the Zener? Operating it in the avalanche mode or in the reverse diode voltage part of the curve will maximize the avalanche noise. However, a large change in current \$I_d\$ will only produce a small change in voltage, because it is operating on the steepest part of the curve. Source: Zener Diode Tutorial Would using a current mirror in place of a resistor to bias the Zener increase or decrease the noise density? I would say that it would increase the current noise density. A resistor is typically enough to regulate the voltage, and a resistor is typically how the current is supplied to the zener. Adding a current mirror will add 1/f noise that a resistor doesn't have, so it would probably not be beneficial to use a current mirror instead of a resistor. A zener/resistor combination is typically the source for most voltage regulators. The trick is to keep the current into the resistor as constant as possible (while measuring the voltage with an op amp). Would a voltage or a transconductance amplifier be more effective at amplifying avalanche noise? A voltage amplifier is the way it is commonly done. Transconductance amplifiers are less available, have lower impedance, and have more noise (from what I've seen in available ICs), all of which are undesirable characteristics.
Let $A\colon L^2([0,1]) \rightarrow L^2([0,1]^2)$ be given by $(Af)(x,y)=\displaystyle\sum_{k=1}^\infty \frac{1}{k\pi^2}\int_0^1 f(s)\sin(k\pi s)\sinh(k\pi y)\,\mathrm{d}s\;\sin(k\pi x)$ where $f\in C^1([0,1])$. I want to show that $A$ is not bounded. What I can tell for sure is that $\displaystyle \int_0^1 f(s)\sin(k\pi s)\sinh(k\pi y)\,\mathrm{d}s$ is bounded. This follows from a theorem which requires the absolute value of the kernel to be bounded in $L^1([0,1])$ for both $x$ and $y$. Now I would have to show that $\displaystyle (Bg)(x,y)=\sum_{k=1}^\infty \frac{1}{k\pi^2} g(y)\sin(k\pi x)$ is in general not bounded in $L^2([0,1]^2)$ for $g\in L^2([0,1])$. How can I do this? I don't have much background in functional analysis; I only know how to deal with product spaces, to be exact. I can only imagine that the $L^2$ norm would be integrating over two variables.
Huge cardinal

Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is an $\omega_2$-saturated $\sigma$-ideal on $\omega_1$". It is now known that only a Woodin cardinal is needed for this result. However, the consistency of the existence of an $\omega_2$-complete $\omega_3$-saturated $\sigma$-ideal on $\omega_2$, as far as the set theory world is concerned, still requires an almost huge cardinal. [1]

Definitions

Their formulation is similar to that of superstrong cardinals: a huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal. More precisely, the definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties n-$P_0$ and n-$P_1$, n-$P_0$ has less consistency strength than n-$P_1$, which has less consistency strength than (n+1)-$P_0$, and so on. This phenomenon is seen only around the n-fold variants as of modern set theoretic concerns. [2] Although they are very large, there is a first-order definition which is equivalent to n-hugeness, so the $\theta$-th n-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability.

Elementary embedding definitions

In the following, $j:V\to M$ denotes a nontrivial elementary embedding of the universe $V$ into a transitive class $M$ with critical point $\kappa$.

$\kappa$ is almost n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$).
$\kappa$ is n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$).
$\kappa$ is almost n-huge iff it is almost n-huge with target $\lambda$ for some $\lambda$.
$\kappa$ is n-huge iff it is n-huge with target $\lambda$ for some $\lambda$.
$\kappa$ is super almost n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost n-huge with target $\lambda$ (that is, the target can be made arbitrarily large).
$\kappa$ is super n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is n-huge with target $\lambda$.
$\kappa$ is almost huge, huge, super almost huge, and superhuge iff it is almost 1-huge, 1-huge, etc. respectively.

Ultrahuge cardinals

A cardinal $\kappa$ is $\lambda$-ultrahuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $j(\kappa)>\lambda$, $M^{j(\kappa)}\subseteq M$ and $V_{j(\lambda)}\subseteq M$. A cardinal is ultrahuge if it is $\lambda$-ultrahuge for all $\lambda\geq\kappa$. [1] Notice how similar this definition is to the alternative characterization of extendible cardinals. Furthermore, this definition can be extended in the obvious way to define $\lambda$-ultra n-hugeness and ultra n-hugeness, as well as the "almost" variants.

Ultrafilter definition

The first-order definition of n-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$.
A cardinal $\kappa$ is n-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\lambda_2...<\lambda_{n-1}<\lambda_n=\lambda$ such that: $$\forall i<n(\{x\subseteq\lambda:\text{order-type}(x\cap\lambda_{i+1})=\lambda_i\}\in U)$$ where $\text{order-type}(X)$ is the order-type of the poset $(X,\in)$. [1] $\kappa$ is then super n-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is n-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses n-hugeness) then there is an ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$, i.e. it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$; all members of the $\lambda_k$ sequence are. As an example, $\kappa$ is 1-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$ such that $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}\in U$. The reason why this would be so surprising is that the collection of all sets $x\subseteq\lambda$ of order-type $\kappa$ would be in the ultrafilter; that is, every set containing $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}$ as a subset is considered a "large set."

Coherent sequence characterization of almost hugeness

Consistency strength and size

Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the n-fold variants) known as the double helix. This phenomenon is when for one n-fold variant, letting a cardinal be called n-$P_0$ iff it has the property, and another variant, n-$P_1$, n-$P_0$ is weaker than n-$P_1$, which is weaker than (n+1)-$P_0$. [2] In the consistency strength hierarchy, here is where these lie (top being weakest):

measurable = 0-superstrong = 0-huge
n-superstrong
n-fold supercompact
(n+1)-fold strong, n-fold extendible
(n+1)-fold Woodin, n-fold Vopěnka
(n+1)-fold Shelah
almost n-huge
super almost n-huge
n-huge
super n-huge
ultra n-huge
(n+1)-superstrong

All huge variants lie at the top of the double helix restricted to some natural number n, although each is bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 is preceded by a stationary set of n-huge cardinals, for all n. [1] Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every (n+1)-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super n-huge" [1], in fact it contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is ultra n-huge". Every n-huge cardinal is m-huge for every $m<n$. Similarly with almost n-hugeness, super n-hugeness, and super almost n-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost huge cardinal implies the consistency of Vopěnka's principle). [1] Every ultra n-huge is super n-huge and a stationary limit of super n-huge cardinals. Every super almost (n+1)-huge is ultra n-huge and a stationary limit of ultra n-huge cardinals.
In terms of size, however, the least n-huge cardinal is smaller than the least supercompact cardinal (assuming both exist). [1] This is because n-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties. Thus for any $\kappa$ which is supercompact and has an n-huge cardinal above it, $\kappa$ "reflects downward" that n-huge cardinal: there are $\kappa$-many n-huge cardinals below $\kappa$. On the other hand, the least super n-huge cardinals have both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least super almost huge. While not every $n$-huge cardinal is strong, if $\kappa$ is almost $n$-huge with targets $\lambda_1,\lambda_2...\lambda_n$, then $\kappa$ is $\lambda_n$-strong as witnessed by the generated $j:V\prec M$. This is because $j^n(\kappa)=\lambda_n$ is measurable and therefore $\beth_{\lambda_n}=\lambda_n$ and so $V_{\lambda_n}=H_{\lambda_n}$, and because $M^{<\lambda_n}\subset M$, $H_\theta\subset M$ for each $\theta<\lambda_n$ and so $\cup\{H_\theta:\theta<\lambda_n\} = \cup\{V_\theta:\theta<\lambda_n\} = V_{\lambda_n}\subset M$. Every almost $n$-huge cardinal with targets $\lambda_1,\lambda_2...\lambda_n$ is also $\theta$-supercompact for each $\theta<\lambda_n$, and every $n$-huge cardinal with targets $\lambda_1,\lambda_2...\lambda_n$ is also $\lambda_n$-supercompact.

The $\omega$-huge cardinals

A cardinal $\kappa$ is almost $\omega$-huge iff there is some transitive model $M$ and an elementary embedding $j:V\prec M$ with critical point $\kappa$ such that $M^{<\lambda}\subset M$ where $\lambda$ is the smallest cardinal above $\kappa$ such that $j(\lambda)=\lambda$. Similarly, $\kappa$ is $\omega$-huge iff the model $M$ can be required to have $M^\lambda\subset M$. Sadly, $\omega$-huge cardinals are inconsistent with ZFC by a version of Kunen's inconsistency theorem. Now, $\omega$-hugeness is used to describe critical points of I1 embeddings.

Relative consistency results

Hugeness of $\omega_1$

In [2] it is shown that if $\text{ZFC +}$ "there is a huge cardinal" is consistent then so is $\text{ZF +}$ "$\omega_1$ is a huge cardinal" (with the ultrafilter characterization of hugeness).

Generalizations of Chang's conjecture

Cardinal arithmetic in $\text{ZF}$

If there is an almost huge cardinal then there is a model of $\text{ZF+}\neg\text{AC}$ in which every successor cardinal is Ramsey. It follows that for all ordinals $\alpha$ there is no injection $\aleph_{\alpha+1}\to 2^{\aleph_\alpha}$. This in turn implies the failure of the square principle at every infinite cardinal (and consequently $\text{AD}^{L(\mathbb{R})}$, see determinacy). [3]

References

Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009. (Paperback reprint of the 2003 edition.)

Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
The possibility of recursive self-improvement is often brought up as a reason to expect that an intelligence explosion is likely to result in a singleton - a single dominant agent controlling everything. Once a sufficiently general artificial intelligence can make improvements to itself, it begins to acquire a compounding advantage over rivals, because as it increases its own intelligence, it increases its ability to increase its own intelligence. If returns to intelligence are not substantially diminishing, this process could be quite rapid. It could also be difficult to detect in its early stages because it might not require a lot of exogenous inputs. However, this argument only holds if self-improvement is not only a rapid route to AI progress, but the fastest route. If an AI participating in the broader economy could make advantageous trades to improve itself faster than a recursively self-improving AI could manage, then AI progress would be coupled to progress in the broader economy. If algorithmic progress (and anything else that might seem more naturally a trade secret than a commodity component) is shared or openly licensed for a fee, then a cutting-edge AI can immediately be assembled whenever profitable, making a single winner unlikely. However, if leading projects keep their algorithmic progress secret, then the foremost project could at some time have a substantial intelligence advantage over its nearest rival. If an AI project attempting to maximize intelligence growth would devote most of its efforts towards such private improvements, then the underlying dynamic begins to resemble the recursive self-improvement scenario. This post reviews a prior mathematization of the recursive self-improvement model of AI takeoff, and then generalizes it to the case where AIs can allocate their effort between direct self-improvement and trade.

A recalcitrance model of AI takeoff

In Superintelligence, Nick Bostrom describes a simple model of how fast an intelligent system can become more intelligent over time by working on itself. This exposition loosely follows the one in the book. We can model the intelligence of the system as a scalar quantity $I$, and the work, or optimization power, applied to the system in order to make it more intelligent, as another quantity $W$. Finally, at any given point in the process, it takes some amount of work to augment the system's intelligence by one unit. Call the marginal cost of intelligence in terms of work recalcitrance, $R$, which may take different values at different points in the process. So, at the beginning of the process, the rate at which the system's intelligence increases is determined by the equation $\frac{dI}{dt} = \frac{W}{R}$. We then add two refinements to this model. First, assume that intelligence is nothing but a type of optimization power, so $I$ and $W$ can be expressed in the same units. Second, if the intelligence of the system keeps increasing without limit, eventually the amount of work it will be able to put into things will far exceed that of the team working on it, so that $W \approx I$. $R$ is now the marginal cost of intelligence in terms of applied intelligence, so we can write $\frac{dI}{dt} = \frac{I}{R}$.

Constant recalcitrance

The simplest model assumes that recalcitrance is constant, $R = k$. Then $\frac{dI}{dt} = \frac{I}{k}$, or $I = I_0 e^{t/k}$. This implies exponential growth.

Declining recalcitrance

Superintelligence also considers a case where work put into the system yields increasing returns.
Prior to takeoff, where $W$ is roughly constant, this would look like a fixed team of researchers with a constant budget working on a system that always takes the same interval of time to double in capacity. In this case we can model recalcitrance as $R = k/I$, so that $\frac{dI}{dt} = \frac{I^2}{k}$, so that $I = \frac{1}{c - t/k}$ for some constant $c$, which implies that the rate of progress approaches infinity as $t$ approaches $ck$; a singularity. How plausible is this scenario? In a footnote, Bostrom brings up Moore's Law as an example of increasing returns to input, although (as he mentions) in practice it seems like increasing resources are being put into microchip development and manufacturing technology, so the case for increasing returns is far from clear-cut. Moore's law is predicted by the experience curve effect, or Wright's Law, where marginal costs decline as cumulative production increases; the experience curve effect produces exponentially declining costs under conditions of exponentially accelerating production. This suggests that in fact accelerating progress is due to an increased amount of effort put into making improvements. Nagy et al. 2013 show that for a variety of industries with exponentially declining costs, it takes less time for production to double than for costs to halve. Since declining costs also reflect broader technological progress outside the computing hardware industry, the case for declining recalcitrance as a function of input is ambiguous.

Increasing recalcitrance

In many cases where work is done to optimize a system, returns diminish as cumulative effort increases. We might imagine that high intelligence requires high complexity, and more intelligent systems require more intelligence to understand well enough to improve at all. If we model diminishing returns to intelligence as $R = I$, then $\frac{dI}{dt} = 1$. In other words, progress is a linear function of time and there is no acceleration at all.

Generalized expression

The recalcitrance model can be restated as a more generalized self-improvement process with the functional form $\frac{dI}{dt} = I^k$:

$k = 0$: Increasing recalcitrance, constant progress
$0 < k < 1$: Increasing recalcitrance, polynomial progress
$k = 1$: Constant recalcitrance, exponential progress
$k > 1$: Declining recalcitrance, singularity

Deciding between trade and self-improvement

Some inputs to an AI might be more efficiently obtained if the AI project participates in the broader economy, for the same reason that humans often trade instead of making everything for themselves. This section lays out a simple two-factor model of takeoff dynamics, where an AI project chooses how much to engage in trade. Suppose that there are only two inputs into each AI: computational hardware available for purchase, and algorithmic software that the AI can best design for itself. Each AI project is working on a single AI running on a single hardware base. The intelligence of this AI depends both on hardware progress and software progress, and holding either constant, the other has diminishing returns. (This is broadly consistent with trends described by Grace 2013.) We can model this as $I = H^{\alpha} S^{\beta}$, where $H$ is hardware, $S$ is software, and $\alpha, \beta < 1$. At each moment in time, the AI can choose whether to allocate all its optimization power to making money in order to buy hardware, improving its own algorithms, or some linear combination of these. Let the share of optimization power devoted to algorithmic improvement be $\sigma$. Assume further that hardware earned and improvements to software are both linear functions of the optimization power invested, so $\frac{dH}{dt} = c_H (1-\sigma) I$, and $\frac{dS}{dt} = c_S \sigma I$. What is the intelligence-maximizing allocation of resources $\sigma$?
This problem can be generalized to finding the $\sigma$ that maximizes the growth of $f(H^{\alpha}S^{\beta})$ for any monotonic function $f$. This is maximized whenever $H^{\alpha}S^{\beta}$ is maximized. (Note that this is no longer limited to the case of diminishing returns.) This generalization is identical to the Cobb-Douglas production function in economics. If $\alpha+\beta=1$ then this model predicts exponential growth, if $\alpha+\beta>1$ it predicts a singularity, and if $\alpha+\beta<1$ then it predicts polynomial growth. The intelligence-maximizing value of $\sigma$ is $\frac{\beta}{\alpha+\beta}$. In our initial toy model $I = H^{\alpha}S^{\beta}$, where $\alpha=\beta$, that implies that no matter what the price of hardware, as long as it remains fixed and the indifference curves are shaped the same, the AI will always spend exactly half its optimizing power working for money to buy hardware, and half improving its own algorithms.

Changing economic conditions

The above model makes two simplifying assumptions: that the application of a given amount of intelligence always yields the same amount in wages, and that the price of hardware stays constant. This section relaxes these assumptions.

Increasing productivity of intelligence

We might expect the productivity of a given AI to increase as the economy expands (e.g. if it discovers a new drug, that drug is more valuable in a world with more or richer people to pay for it). We can add a term exponentially increasing over time to the amount of hardware the application of intelligence can buy: $\frac{dH}{dt} = c_H e^{gt} (1-\sigma) I$. This does not change the intelligence-maximizing allocation of intelligence between trading for hardware and self-improving.

Declining hardware costs

We might also expect the long-run trend in the cost of computing hardware to continue. This can again be modeled as an exponential process over time, with the amount of hardware bought per unit of money growing at some rate $h$. The new expression for the growth of hardware is $\frac{dH}{dt} = c_H e^{(g+h)t} (1-\sigma) I$, identical in functional form to the expression representing wage growth, so again we can conclude that $\sigma = \frac{\beta}{\alpha+\beta}$.

Maximizing profits rather than intelligence

AI projects might not reinvest all available resources in increasing the intelligence of the AI. They might want to return some of their revenue to investors if operated on a for-profit basis. (Or, if autonomous, they might invest in non-AI assets where the rate of return on those exceeded the rate of return on additional investments in intelligence.) On the other hand, they might borrow if additional money could be profitably invested in hardware for their AI. If the profit-maximizing strategy involves less than 100% reinvestment, then whatever fraction of the AI's optimization power is reinvested should still follow the intelligence-maximizing allocation rule $\sigma = \frac{\beta}{\alpha+\beta}$, where $\sigma$ is now the share of reinvested optimization power devoted to algorithmic improvements. If the profit-maximizing strategy involves a reinvestment rate of slightly greater than 100%, then at each moment the AI project will borrow some amount (net of interest expense on existing debts) $B$, so that the total optimization power available is $I + B$. Again, whatever fraction of the AI's optimization power is reinvested should still follow the intelligence-maximizing allocation rule $\sigma = \frac{\beta}{\alpha+\beta}$, where $\sigma$ is now the share of economically augmented optimization power devoted to algorithmic improvements. This strategy is no longer feasible, however, once $\frac{B}{I+B} > \frac{\alpha}{\alpha+\beta}$. Since by assumption hardware can be bought but algorithmic improvements cannot, at this point additional monetary investments will shift the balance of investment towards hardware, while 100% of the AI's own work is dedicated to self-improvement.
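As a rough illustration of the growth regimes described above, here is a sketch that numerically integrates $\frac{dI}{dt}=I^k$ for a few exponents; the integrator, constants, and cap are arbitrary choices, not code from the original post.

# Sketch: forward-Euler integration of dI/dt = I**k for a few exponents k,
# showing linear, polynomial, exponential, and finite-time-singular growth.
import numpy as np

def grow(k, I0=1.0, dt=1e-3, t_max=5.0):
    """Integrate dI/dt = I**k from I0, stopping early if I blows up."""
    I, t, path = I0, 0.0, []
    while t < t_max and I < 1e9:
        path.append((t, I))
        I += dt * I**k
        t += dt
    return path

for k in (0.0, 0.5, 1.0, 1.5):
    t_end, I_end = grow(k)[-1]
    print(f"k={k}: reached I={I_end:.3g} at t={t_end:.2f}")
# k=0 grows linearly, 0<k<1 polynomially, k=1 exponentially,
# and k>1 hits the cap in finite time (a "singularity").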
A multipole expansion is a mathematical series representing a function that depends on angles—usually the two angles on a sphere. These series are useful because they can often be truncated, meaning that only the first few terms need to be retained for a good approximation to the original function. Multipole expansions are very frequently used in the study of electromagnetic and gravitational fields, where the fields at distant points are given in terms of sources in a small region. The multipole expansion with angles is often combined with an expansion in radius. Such a combination gives an expansion describing a function throughout three-dimensional space. The multipole expansion is expressed as a sum of terms with progressively finer angular features. For example, the initial term—called the zeroth, or monopole, moment—is a constant, independent of angle. The following term—the first, or dipole, moment—varies once from positive to negative around the sphere. Higher-order terms (like the quadrupole and octupole) vary more quickly with angles. A multipole moment usually involves powers (or inverse powers) of the distance to the origin, as well as some angular dependence. Setting up the System Consider an arbitrary charge distribution \( \rho (\mathbf {r} ')\). We wish to find the electrostatic potential due to this charge distribution at a given point \( \mathbf {r} \). We assume that this point is at a large distance from the charge distribution, that is, if \( \mathbf {r} '\) varies over the charge distribution, then \( r \gg r'\). Now, the Coulomb potential for a charge distribution is given by \[ V(\mathbf {r} ) ={\dfrac {1}{4\pi \epsilon _{0}}}\int _{V'}{\dfrac {\rho (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r'} |}}dV' \label{eq2} \] Here, \[ \begin{align} | \mathbf {r} -\mathbf {r'} |&= \sqrt{ |r^{2}-2\mathbf {r} \cdot \mathbf {r} '+r'^{2}|} \\[4pt] &=r \sqrt{ \left|1-2 \dfrac {\hat {\mathbf {r}} \cdot \mathbf {r} '}{r} +\left( \dfrac {r'}{r} \right)^2 \right|} \end{align}\] where \[ \hat {\mathbf {r} } = \mathbf {r} /r \] Thus, using the fact that \( r \) is much larger than \( r'\), we can write \[ \dfrac {1}{|\mathbf {r} -\mathbf {r'} |} = \dfrac {1}{r} \dfrac {1}{ \sqrt{\left|1-2 \dfrac {\hat {\mathbf {r} }\cdot \mathbf {r} '}{r} +\left(\dfrac {r'}{r}\right)^2 \right|}} \label{eq6}\] and using the binomial expansion (see below), \[ \dfrac {1} {\sqrt{ \left|1-2 \dfrac {\hat {\mathbf {r}} \cdot \mathbf {r} '}{r} + \left( \dfrac {r'}{r} \right)^{2} \right|} } =1+{\dfrac {\hat {\mathbf {r} }\cdot \mathbf {r} '}{r}}+{\dfrac {1}{2r^{2}}}\left(3({\hat {\mathbf {r} }}\cdot \mathbf {r} ')^{2}-r'^{2}\right)+O\left({\dfrac {r'}{r}}\right)^{3} \label{eq10}\] (we neglect the third and higher order terms). Binomial Theorem The binomial theorem can be used to expand specific functions into an infinite series: \[\begin{align} (1+ x )^s &= \sum_{n=0}^{\infty} \dfrac{s!}{n! (s-n)!} x^n \\[4pt] &= 1 + \dfrac{s}{1!} x + \dfrac{s(s-1)}{2!} x^2 + \dfrac{s(s-1)(s-2)}{3!} x^3 + \ldots \end{align}\]
Equation \ref{eq6} can be rewritten as \[ \dfrac {1}{|\mathbf {r} -\mathbf {r'} |} = \dfrac {1}{r} \dfrac {1}{ \sqrt{1 + \epsilon } } \label{eq6B}\] where \[\epsilon = -2 \dfrac {\hat {\mathbf {r} }\cdot \mathbf {r} '}{r} +\left(\dfrac {r'}{r}\right)^2\] Applying the Binomial Theorem (with \(s=-1/2\)) to Equation \ref{eq6B} results in \[ \dfrac {1}{|\mathbf {r} -\mathbf {r'} |} = \dfrac {1}{r} \left( 1 - \dfrac{1}{2} \epsilon + \dfrac{3}{8} \epsilon^2 - \dfrac{5}{16} \epsilon^3 + \ldots \right)\] which is where Equation \ref{eq10} originates from. The Expansion Inserting Equation \ref{eq10} into Equation \ref{eq2} shows that the potential can be written as \[ V(\mathbf {r} )={\dfrac {1}{4\pi \epsilon _{0}r}}\int _{V'}\rho (\mathbf {r} ')\left(1+{\dfrac {\hat {\mathbf {r} }\cdot \mathbf {r} '}{r}}+{\dfrac {1}{2r^{2}}}\left(3({\hat {\mathbf {r} }}\cdot \mathbf {r} ')^{2}-r'^{2}\right)+O\left({\dfrac {r'}{r}}\right)^{3}\right)dV' \] We write this as \[ V(\mathbf {r} )=V_{\text{mon}}(\mathbf {r} )+V_{\text{dip}}(\mathbf {r} )+V_{\text{quad}}(\mathbf {r} )+\ldots \label{expand}\] The first (the zeroth-order) term in the expansion is called the monopole moment, the second (the first-order) term is called the dipole moment, the third (the second-order) is called the quadrupole moment, the fourth (third-order) term is called the octupole moment, and the fifth (fourth-order) term is called the hexadecapole moment. Given the limitation of Greek numeral prefixes, terms of higher order are conventionally named by adding "-pole" to the number of poles—e.g., 32-pole (i.e., dotriacontapole) and 64-pole (hexacontatetrapole). These moments can be expanded thusly \[ \begin{align} V_{\text{mon}}(\mathbf {r} ) &={\dfrac {1}{4\pi \epsilon _{0}r}}\int _{V'}\rho (\mathbf {r} ')dV' \\[4pt] V_{\text{dip}}(\mathbf {r} ) &={\dfrac {1}{4\pi \epsilon _{0}r^{2}}}\int _{V'}\rho (\mathbf {r} ')\left({\hat {\mathbf {r} }}\cdot \mathbf {r} '\right)dV' \\[4pt] V_{\text{quad}}(\mathbf {r} ) &={\dfrac {1}{8\pi \epsilon _{0}r^{3}}}\int _{V'}\rho (\mathbf {r} ')\left(3\left({\hat {\mathbf {r} }}\cdot \mathbf {r} '\right)^{2}-r'^{2}\right)dV' \end{align}\] and so on. In principle, a multipole expansion provides an exact description of the potential and generally converges under two conditions: if the sources (e.g. charges) are localized close to the origin and the point at which the potential is observed is far from the origin; or the reverse, i.e., if the sources are located far from the origin and the potential is observed close to the origin. In the first (more common) case, the coefficients of the series expansion are called exterior multipole moments or simply multipole moments whereas, in the second case, they are called interior multipole moments. The Monopole Observe that \[ V_{\text{mon}}(\mathbf {r}) =\dfrac {1}{4\pi \epsilon _0 r}\int _{V'}\rho (\mathbf {r} ')dV' = \dfrac{q}{4\pi \epsilon _0 r}\] is determined by a scalar (actually the total charge \(q\) in the distribution) and is called the electric monopole term. It is the potential of a point charge \(q\) located at the origin. The Dipole If a charge distribution has a net total charge, it will tend to look like a monopole (point charge) from large distances. We can write \[ V_{\text{dip}}(\mathbf {r} )={\dfrac {\hat {\mathbf {r} }}{4\pi \epsilon _{0}r^{2}}}\cdot \int _{V'}\rho (\mathbf {r} ')\mathbf {r} 'dV' \] The vector \[ \mathbf {p} =\int _{V'}\rho (\mathbf {r} ')\mathbf {r} 'dV' \] is called the electric dipole.
And its magnitude is called the dipole moment of the charge distribution. This term describes the potential produced by the dipole-like (linear) geometry of the charge distribution. Quadrupole Let \( \hat {\mathbf {r} }\) and \( \mathbf {r} '\) be expressed in Cartesian coordinates as \( (r_{1},r_{2},r_{3})\) and \( (x_{1},x_{2},x_{3})\). Then, \( ({\hat {\mathbf {r} }}\cdot \mathbf {r} ')^{2}=(r_{i}x_{i})^{2}=r_{i}r_{j}x_{i}x_{j}\). We define a dyad to be the tensor \( {\hat {\mathbf {r} }}{\hat {\mathbf {r} }}\) given by \[ \left({\hat {\mathbf {r} }}{\hat {\mathbf {r} }}\right)_{ij}=r_{i}r_{j} \] Define the quadrupole tensor as \[ T=\int _{V'} \rho (\mathbf {r} ') \left(3\, \mathbf {r} '\mathbf {r} '-\mathbf {I} r'^{2}\right)dV' \] Then, we can write \( V_{\text{quad}}\) as the tensor contraction \[ V_{\text{quad}}(\mathbf {r} )= \dfrac {1} {8\pi \epsilon _{0}r^{3}}\, {\hat {\mathbf {r}}}{\hat {\mathbf {r}}} : T\] This term describes the three-dimensional distribution of charge that produces a quadrupole electrical potential.
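A small numerical sketch (not from the original page) comparing the exact potential of a few point charges with the monopole, dipole, and quadrupole terms derived above; the charges, positions, and observation point are arbitrary, and units with \(1/(4\pi\epsilon_0)=1\) are used to keep the numbers simple.

# Compare the exact potential of a few point charges near the origin with the
# monopole + dipole + quadrupole approximation at a distant observation point.
import numpy as np

charges = np.array([1.0, -0.4, 0.7])
positions = np.array([[0.1, 0.0, 0.0],
                      [0.0, 0.2, -0.1],
                      [-0.1, -0.1, 0.1]])          # all close to the origin

r = np.array([10.0, 4.0, 3.0])                     # distant observation point
rnorm = np.linalg.norm(r)
rhat = r / rnorm

# Exact potential (sum over point charges)
V_exact = np.sum(charges / np.linalg.norm(r - positions, axis=1))

# Multipole moments
q_tot = charges.sum()                                          # monopole
p = (charges[:, None] * positions).sum(axis=0)                 # dipole vector
T = sum(qi * (3 * np.outer(ri, ri) - np.dot(ri, ri) * np.eye(3))
        for qi, ri in zip(charges, positions))                 # quadrupole tensor

V_mon = q_tot / rnorm
V_dip = np.dot(rhat, p) / rnorm**2
V_quad = rhat @ T @ rhat / (2 * rnorm**3)

print(V_exact, V_mon + V_dip + V_quad)             # should agree to high accuracy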
Let's start with the mathematical definitions. Discrete signal power is defined as$$P_s = \sum_{n=-\infty}^{\infty}\left|s[n]\right|^2.$$ We can apply this notion to noise $w$ on top of some signal to calculate $P_w$ in the same way. The signal to noise ratio (SNR) is then simply$$P_{SNR}=\frac{P_s}{P_w}$$ If we've received a noise-corrupted signal $x[n] = s[n]+w[n]$ then we compute the SNR as follows $$P_{SNR}=\frac{P_s}{P_w} = \frac{P_s}{\sum_n\left|x[n]-s[n]\right|^2}.$$ Here $\sum_n\left|x[n]-s[n]\right|^2$ is simply the squared error between the original and corrupted signals. Note that if we scaled the definition of power by the number of points in the signal, this would have been the mean squared error (MSE), but since we're dealing with ratios of powers, the result stays the same. Let us now interpret this result. This is the ratio of the power of the signal to the power of the noise. Power is in some sense the squared norm of your signal. It shows how much squared deviation you have from zero on average. You should also note that we can extend this notion to images by simply summing twice, over the rows and columns of your image, or by simply stretching your entire image into a single vector of pixels and applying the one-dimensional definition. You can see that no spatial information is encoded into the definition of power. Now let's look at peak signal to noise ratio. This definition is $$P_{PSNR}=\frac{\max_n(s^2[n])}{\text{MSE}}.$$ If you stare at this for long enough you will realize that this definition is really the same as that of $P_{SNR}$ except that the numerator of the ratio is now the maximum squared intensity of the signal, not the average one. This makes this criterion less strict. You can see that $P_{PSNR} \ge P_{SNR}$ and that they will only be equal to each other if your original clean signal is constant everywhere, at its maximum amplitude. Notice that although the variance of a constant signal is null, its power is not; the level of such a constant signal does make a difference in SNR but not in PSNR. Now, why does this definition make sense? It makes sense because in the case of SNR we're comparing how strong the signal is to how strong the noise is. We assume that there are no special circumstances. In fact, this definition is adapted directly from the physical definition of electrical power. In the case of PSNR, we're interested in the signal peak because we can be interested in things like the bandwidth of the signal, or the number of bits we need to represent it. This is much more content-specific than pure SNR and can find many reasonable applications, image compression being one of them. Here we're saying that what matters is how well high-intensity regions of the image come through the noise, and we're paying much less attention to how we're performing under low intensity.
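A short sketch of both quantities for a toy 1-D signal (not from the original answer; the signal and noise level are made up):

# Compute SNR and PSNR, as defined above, for a noisy sinusoid.
import numpy as np

rng = np.random.default_rng(0)
s = np.sin(2 * np.pi * 0.01 * np.arange(1000))      # clean signal
x = s + 0.1 * rng.standard_normal(s.shape)          # noise-corrupted signal

err = x - s
snr = np.sum(s**2) / np.sum(err**2)                 # ratio of powers
psnr = np.max(s**2) / np.mean(err**2)               # peak power over MSE

print("SNR  (dB):", 10 * np.log10(snr))
print("PSNR (dB):", 10 * np.log10(psnr))            # PSNR >= SNR, as noted above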
Consider the Correlated Random Effects model $y_{it} = \alpha + x_{it}\beta + \bar x_i \gamma + w_i + \epsilon_{it} $ where $x_{it}$ is a scalar explanatory variable. The correlated random effects GLS estimator $ \hat \beta_{CRE} $ is the OLS estimator of $\beta$ in the quasi-demeaned regression $\tilde y_{it} = \delta + \tilde x_{it} \beta + \bar x_i \rho + u_{it} $, where $\tilde y_{it} = y_{it} - \theta \bar y_i , \tilde x_{it} = x_{it} - \theta \bar x_i $ and $\theta = 1 - (\sigma^2_\epsilon/ (\sigma^2_\epsilon + T\sigma^2_w))^{1/2} $ Question: I need to show that the residuals from the regression of $x_{it} - \bar x_i $ on a constant and $\bar x_i $ are just $x_{it} - \bar x_i $ itself. Attempt: Regress $x_{it} - \bar x_i = a + c\,\bar x_{i} + \tilde r_{it} $ and rearrange to get the residuals, $\tilde r_{it} = (x_{it} - \bar x_i) - (\hat a + \hat c\,\bar x_{i})$. I'm not sure how to proceed.
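A quick numerical sketch of the claim (not part of the question): regressing $x_{it}-\bar x_i$ on a constant and $\bar x_i$ yields coefficients of (essentially) zero, so the residuals are the regressand itself. The panel dimensions and data-generating process below are arbitrary.

# Check that the within-transformed regressor is orthogonal to (1, xbar_i).
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 5
x = rng.normal(size=(N, T)) + rng.normal(size=(N, 1))   # individual effects in x
xbar = x.mean(axis=1, keepdims=True)

y = (x - xbar).ravel()                                   # "dependent" variable
Z = np.column_stack([np.ones(N * T), np.repeat(xbar.ravel(), T)])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)

print(coef)                        # both coefficients are numerically zero
resid = y - Z @ coef
print(np.allclose(resid, y))       # residuals equal x_it - xbar_i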
I have read quite a few times now that the only coordinate system whose coordinate basis is orthonormal is the Cartesian coordinate system. Even though this makes sense, more or less, I never succeed when I try to prove it for myself. Could anyone give me a hint, or a sketch of a part of the proof, so I can work from there? EDIT: by "coordinate basis", I mean the basis you get by doing $$ \vec{e}_\alpha = \frac{\partial\vec{x}}{\partial q^\alpha} $$ with $\vec{x}$ the position vector and $(q^1,...,q^n)$ the coordinates of a point in $\mathbb{R}^n$ according to your coordinate system.
In the STT5100 course, last week, we saw how to use Monte Carlo simulations. The idea is that in statistics we observe a sample \{y_1,\cdots,y_n\}, and more generally, in econometrics, \{(y_1,\mathbf{x}_1),\cdots,(y_n,\mathbf{x}_n)\}. But let's get back to statistics (without covariates) to illustrate. We assume that observations y_i are realizations of an underlying random variable Y_i. We assume that the Y_i are i.i.d. random variables, with (unknown) distribution F_{\theta}. Consider here some estimator \widehat{\theta} – which is just a function of our sample, \widehat{\theta}=h(y_1,\cdots,y_n). So \widehat{\theta} is a real-valued number. Then, in mathematical statistics, in order to derive properties of the estimator \widehat{\theta}, like a confidence interval, we must define \widehat{\theta}=h(Y_1,\cdots,Y_n), so that now, \widehat{\theta} is a real-valued random variable. What is puzzling for students is that we use the same notation, and I have to agree, that's not very clever. So now, \widehat{\theta} is a random variable. There are two strategies here. In classical statistics, we use probability theory to derive properties of \widehat{\theta} (the random variable): at least the first two moments, but if possible the distribution. An alternative is to go for computational statistics. We have only one sample, \{y_1,\cdots,y_n\}, and that's a pity. But maybe we can create another one, \{y_1^{(1)},\cdots,y_n^{(1)}\}, as realizations of F_{\theta}, and another one \{y_1^{(2)},\cdots,y_n^{(2)}\}, another one \{y_1^{(3)},\cdots,y_n^{(3)}\}, etc. From those counterfactuals, we can now get a collection of estimators, \widehat{\theta}^{(1)},\widehat{\theta}^{(2)}, \widehat{\theta}^{(3)}, etc. Instead of using mathematical tricks to calculate \mathbb{E}(\widehat{\theta}), compute \frac{1}{k}\sum_{s=1}^k\widehat{\theta}^{(s)}. That's what we saw last Friday. I did also mention briefly that looking at densities is lovely, but not very useful to assess goodness of fit, to test for normality, for instance. In this post, I just wanted to illustrate this point. And actually, creating counterfactuals can be a good way to see it. Consider here the height of male students,

Davis=read.table(
"http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
Davis[12,c(2,3)]=Davis[12,c(3,2)]
X=Davis$height[Davis$sex=="M"]

We can visualize its distribution (density and cumulative distribution)

u=seq(155,205,by=.5)
par(mfrow=c(1,2))
hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
lines(u,dnorm(u,178,6.5),col="black")
Xs=sort(X)
n=length(X)
p=(1:n)/(n+1)
plot(Xs,p,type="s",col="blue")
lines(u,pnorm(u,178,6.5),col="black")

Since it looks like a normal distribution, we can add the density of a Gaussian distribution on the left, and the cdf on the right. Why not test it properly? To be a little bit more specific, I do not want to test if it's a Gaussian distribution, but if it's a \mathcal{N}(178,6.5^2).
In order to see if this distribution is relevant, one can use Monte Carlo simulations to create counterfactuals

hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
Y=rnorm(n,178,6.5)
hist(Y,col=rgb(1,0,0,.3))
lines(density(Y),col="red",lwd=2)
Ys=sort(Y)
plot(Xs,p,type="s",col="white",lwd=2,axes=FALSE,xlab="",ylab="",xlim=c(155,205))
polygon(c(Xs,rev(Ys)),c(p,rev(p)),col="yellow",border=NA)
lines(Xs,p,type="s",col="blue",lwd=2)
lines(Ys,p,type="s",col="red",lwd=2)

We can see on the left that it is hard to assess normality from the density (histogram and also kernel-based density estimator). One can hardly think of a valid distance between two densities. But if we look at the graph on the right, we can compare the empirical cumulative distribution function \widehat{F} obtained from \{y_1,\cdots,y_n\} (the blue curve), and some counterfactual, \widehat{F}^{(s)}, obtained from \{y_1^{(s)},\cdots,y_n^{(s)}\} generated from F_{\theta_0} – where \theta_0 is the value we want to test. As suggested above, we can compute the yellow area, as suggested in the Cramér–von Mises test, or the Kolmogorov–Smirnov distance.

d=rep(NA,1e5)
for(s in 1:1e5){
d[s]=ks.test(rnorm(n,178,6.5),"pnorm",178,6.5)$statistic
}
ds=density(d)
plot(ds,xlab="",ylab="")
dks=ks.test(X,"pnorm",178,6.5)$statistic
id=which(ds$x>dks)
polygon(c(ds$x[id],rev(ds$x[id])),c(ds$y[id],rep(0,length(id))),col=rgb(1,0,0,.4),border=NA)
abline(v=dks,col="red")

If we draw 100,000 counterfactual samples, we can visualize the distribution (here the density) of the distance used as a test statistic, \widehat{d}^{(1)}, \widehat{d}^{(2)}, etc., and compare it with the one observed on our sample, \widehat{d}. The proportion of samples where the test statistic exceeds the one observed

mean(d>dks)
[1] 0.78248

is the computational version of the p-value

ks.test(X,"pnorm",178,6.5)

One-sample Kolmogorov-Smirnov test
data: X
D = 0.068182, p-value = 0.8079
alternative hypothesis: two-sided

I thought about all that a couple of days ago, since I got invited for a panel discussion on "coding", and why "coding" helped me as a professor. And this is precisely why I like coding: in statistics, you either manipulate abstract objects, like random variables, or you actually use some lines of code to create counterfactuals, and generate fake samples, to quantify uncertainty. The latter is interesting, because it helps to visualize complex quantities. I do not claim that maths is useless, but coding is really nice, as a starting point, to understand what we are talking about (which can be very useful when there is a lot of confusion about notation).
This library is a customizable 2D math rendering tool for calculators. It can be used to render 2D formulae, either from an existing structure or TeX syntax.

\frac{x^7 \left[X,Y\right] + 3\left|\frac{A}{B}\right>} {\left\{\frac{a_k+b_k}{k!}\right\}^5}+ \int_a^b \frac{\left(b-t\right)^{n+1}}{n!} dt+ \left(\begin{matrix} \frac{1}{2} & 5 \\ -1 & a+b \end{matrix}\right)

List of currently supported elements:
Fractions (\frac)
Subscripts and superscripts (_ and ^)
Automatic delimiters (\left and \right)
Big operators (\sum, \prod and \int)
Vectors (\vec) and limits (\lim)
Square roots (\sqrt)
Matrices (\begin{matrix} ... \end{matrix})

Some features are only partially implemented; see the TODO.md file for what is left to finish them and for more features to come.

First specify the platform you want to use:
cli is for command-line tests, with no visualization (PC)
sdl2 is an SDL interface with visualization (PC)
fx9860g builds the library for fx-9860G targets (calculator)
fxcg50 builds the library for fx-CG 50 targets (calculator)

For calculator platforms, you can use --toolchain to specify a different toolchain than the default sh3eb and sh4eb. The install directory of the library is guessed by asking the compiler; you can override it with --prefix. Example for an SDL setup:

% ./configure --platform=sdl2

Then you can make the program, and if it's a calculator library, install it. You can later delete Makefile.cfg to reset the configuration, or just reconfigure as needed.

% make
% make install   # fx9860g and fxcg50 only

Before using the library in a program, a configuration step is needed. The library does not have drawing functions and instead requires that you provide some, namely:
a pixel-drawing function (TeX_intf_pixel)
a line-drawing function (TeX_intf_line)
a text-size function (TeX_intf_size)
a text-drawing function (TeX_intf_text)

The rendering functions are available in fxlib; for monospaced fonts the text-size function can be implemented trivially. In gint, the four can be defined as wrappers for dpixel(), dline(), dsize() and dtext().

The type of formulae is TeX_Env. To parse and compute the size of a formula, use the TeX_parse() function, which returns a new formula object (or NULL if a critical error occurs). The second parameter display is set to non-zero to use display mode (similar to \[ .. \] in LaTeX) or zero to use inline mode (similar to $ .. $ in LaTeX).

char *code = "\\frac{x_7}{\\left\\{\\frac{\\frac{2}{3}}{27}\\right\\}^2}";
struct TeX_Env *formula = TeX_parse(code, 1);

The size of the formula can be queried through formula->width and formula->height. To render, specify the location of the top-left corner and the drawing color (which will be passed to all primitives):

TeX_draw(formula, 0, 0, BLACK);

The same formula can be drawn several times. When it is no longer needed, free it with TeX_free():

TeX_free(formula);
Yesterday I went to the ETH Zurich library and got hold of Wilhelm Blaschke's 1917 paper Über affine Geometrie III: Eine Minimumeigenschaft der Ellipse. Leipziger Berichte 69 (1917), pages 3–12. In this paper he proves that any convex body $B$ in the plane contains a triangle $T$ with$${\rm area}(T)\geq{3\sqrt{3}\over 4\pi}{\rm area}(B)\ .$$The proof uses so-called Steiner symmetrization. One symmetrization step (in $y$-direction) consists in replacing$$B:=\bigl\{(x,y)\>\bigm|\>a\leq x\leq b, \quad c(x)\leq y\leq d(x)\bigr\}$$by$$B':=\bigl\{(x,y)\>\bigm|\>a\leq x\leq b, \quad c(x)-m(x)\leq y\leq d(x)-m(x)\bigr\}\ ,$$where$$m(x):={c(x)+d(x)\over2}\ ;$$see the figure below, taken from Blaschke's paper. By Cavalieri's principle we can conclude that$${\rm area}(B')={\rm area}(B)\ .$$Denote the area of a maximal triangle in $B$ and $B'$ by $\Delta$ and $\Delta'$ respectively. Draw in $B'$ two mirror copies $T'$, $\bar T'$ of maximal triangles, and let $T$, $\bar T$ be their inverse images in $B$. Then one shows with a simple computation that$$2\Delta\geq{\rm area}(T)+{\rm area}(\bar T)={\rm area}(T')+{\rm area}(\bar T')=2\Delta'\ .$$Symmetrizing repeatedly in different directions we obtain a sequence of convex bodies $(B_n)_{n\geq0}$. All $B_n$ have the same area, and the areas of their maximal triangles are monotonically decreasing. By choosing a suitable sequence of directions we can make sure that the $B_n$ converge to a circular disk $D$ having the same area as $B_0=B$. (For the proof of this Blaschke refers to his monograph Kreis und Kugel, Leipzig 1916). It follows that$$\Delta(B)\geq \Delta(D)={3\sqrt{3}\over 4\pi}{\rm area}(D)={3\sqrt{3}\over 4\pi}{\rm area}(B)\ .$$
Let $X\in\mathbb{R}^{n\times p}$ denote a matrix with $p$ linearly-independent columns, and let $L\in\mathbb{R}^{n\times n}$ denote a symmetric matrix. Furthermore, let $D\in\mathbb{R}^{n\times n}$ denote a diagonal matrix with positive entries. Now suppose that we are interested in computing solutions to the symmetric generalized-definite eigenproblem, $$ A v = \lambda B v, $$ where the symmetric matrices $A$ and $B$ (with $B$ also positive-definite) are implicitly given by the matrices $X$, $L$, and $D$ above, via $A = X^T L X$ and $B = X^T D X$. I ran into a paper which claimed that we may solve an equivalent standard eigenvalue problem involving a modification of $X$. In particular, if we orthogonalize $X$ with respect to $D$, say $\tilde X^T D \tilde X = I$, then eigenpairs of $$ \tilde X^T L \tilde X u = \lambda u $$ are also eigenpairs for the original generalized eigenproblem. How can we show this to be true?
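A numerical sketch of the claim (not from the paper in question): D-orthogonalize $X$ via a Cholesky factor of $B=X^TDX$ and compare the resulting standard eigenvalues with those of the generalized problem. The sizes and random matrices below are arbitrary.

# Check eigenvalue equivalence numerically.
import numpy as np
from scipy.linalg import eigh, cholesky

rng = np.random.default_rng(0)
n, p = 50, 5
X = rng.normal(size=(n, p))
L = rng.normal(size=(n, n)); L = (L + L.T) / 2     # symmetric
D = np.diag(rng.uniform(0.5, 2.0, size=n))         # positive diagonal

A = X.T @ L @ X
B = X.T @ D @ X                                    # SPD since X has full column rank

# D-orthogonalize X: with B = R^T R (Cholesky), Xt = X R^{-1} gives Xt^T D Xt = I
R = cholesky(B, lower=False)
Xt = X @ np.linalg.inv(R)
lam_standard = np.linalg.eigvalsh(Xt.T @ L @ Xt)

lam_generalized = eigh(A, B, eigvals_only=True)
print(np.allclose(lam_standard, lam_generalized))  # True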
Fit the Matern Cluster Point Process by Minimum Contrast Using Pair Correlation

Fits the Matern Cluster point process to a point pattern dataset by the Method of Minimum Contrast using the pair correlation function.

Usage

matclust.estpcf(X, startpar=c(kappa=1,scale=1), lambda=NULL, q = 1/4, p = 2, rmin = NULL, rmax = NULL, ..., pcfargs=list())

Arguments

X: Data to which the Matern Cluster model will be fitted. Either a point pattern or a summary statistic. See Details.
startpar: Vector of starting values for the parameters of the Matern Cluster process.
lambda: Optional. An estimate of the intensity of the point process.
q, p: Optional. Exponents for the contrast criterion.
rmin, rmax: Optional. The interval of \(r\) values for the contrast criterion.
…: Optional arguments passed to optim to control the optimisation algorithm. See Details.
pcfargs: Optional list containing arguments passed to pcf.ppp to control the smoothing in the estimation of the pair correlation function.

Details

This algorithm fits the Matern Cluster point process model to a point pattern dataset by the Method of Minimum Contrast, using the pair correlation function. The argument X can be either
a point pattern: an object of class "ppp" representing a point pattern dataset. The pair correlation function of the point pattern will be computed using pcf, and the method of minimum contrast will be applied to this.
a summary statistic: an object of class "fv" containing the values of a summary statistic, computed for a point pattern dataset. The summary statistic should be the pair correlation function, and this object should have been obtained by a call to pcf or one of its relatives.
The algorithm fits the Matern Cluster point process to X, by finding the parameters of the Matern Cluster model which give the closest match between the theoretical pair correlation function of the Matern Cluster process and the observed pair correlation function. For a more detailed explanation of the Method of Minimum Contrast, see mincontrast. The Matern Cluster point process is described in Moller and Waagepetersen (2003, p. 62). It is a cluster process formed by taking a pattern of parent points, generated according to a Poisson process with intensity \(\kappa\), and around each parent point, generating a random number of offspring points, such that the number of offspring of each parent is a Poisson random variable with mean \(\mu\), and the locations of the offspring points of one parent are independent and uniformly distributed inside a circle of radius \(R\) centred on the parent point, where \(R\) is equal to the parameter scale. The named vector of starting values can use either R or scale as the name of the second component, but the latter is recommended for consistency with other cluster models. The theoretical pair correlation function of the Matern Cluster process is $$ g(r) = 1 + \frac 1 {4\pi R \kappa r} h(\frac{r}{2R}) $$ where the radius R is the parameter scale and $$ h(z) = \frac {16} \pi [ z \mbox{arccos}(z) - z^2 \sqrt{1 - z^2} ] $$ for \(z <= 1\), and \(h(z) = 0\) for \(z > 1\). The theoretical intensity of the Matern Cluster process is \(\lambda = \kappa \mu\). In this algorithm, the Method of Minimum Contrast is first used to find optimal values of the parameters \(\kappa\) and \(R\). Then the remaining parameter \(\mu\) is inferred from the estimated intensity \(\lambda\). If the argument lambda is provided, then this is used as the value of \(\lambda\).
Otherwise, if X is a point pattern, then \(\lambda\) will be estimated from X. If X is a summary statistic and lambda is missing, then the intensity \(\lambda\) cannot be estimated, and the parameter \(\mu\) will be returned as NA. The remaining arguments rmin, rmax, q, p control the method of minimum contrast; see mincontrast. The Matern Cluster process can be simulated, using rMatClust. Homogeneous or inhomogeneous Matern Cluster models can also be fitted using the function kppm. The optimisation algorithm can be controlled through the additional arguments "..." which are passed to the optimisation function optim. For example, to constrain the parameter values to a certain range, use the argument method="L-BFGS-B" to select an optimisation algorithm that respects box constraints, and use the arguments lower and upper to specify (vectors of) minimum and maximum values for each parameter.

Value

An object of class "minconfit". There are methods for printing and plotting this object. It contains the following main components:
par: Vector of fitted parameter values.
fit: Function value table (object of class "fv") containing the observed values of the summary statistic (observed) and the theoretical values of the summary statistic computed from the fitted model parameters.

References

Moller, J. and Waagepetersen, R. (2003). Statistical Inference and Simulation for Spatial Point Processes. Chapman and Hall/CRC, Boca Raton.

Waagepetersen, R. (2007). An estimating function approach to inference for inhomogeneous Neyman-Scott processes. Biometrics 63, 252--258.

See Also

Aliases

matclust.estpcf

Examples

data(redwood)
u <- matclust.estpcf(redwood, c(kappa=10, R=0.1))
u
plot(u, legendpos="topright")

Documentation reproduced from package spatstat, version 1.59-0, License: GPL (>= 2)
Preprints (rote Reihe) of the Department of Mathematics

306
In this paper we study the space-time asymptotic behavior of the solutions and derivatives to the incompressible Navier-Stokes equations. Using moment estimates we obtain that strong solutions to the Navier-Stokes equations which decay in \(L^2\) at the rate of \(\|u(t)\|_2 \leq C(t+1)^{-\mu}\) will have the following pointwise space-time decay \[|D^{\alpha}u(x,t)| \leq C_{k,m} \frac{1}{(t+1)^{ \rho_o}(1+|x|^2)^{k/2}} \] where \( \rho_o = (1-2k/n)( m/2 + \mu) + 3/4(1-2k/n)\), and \(|\alpha| = m\). The dimension \(n\) satisfies \(2 \leq n \leq 5\), with \(0\leq k\leq n\) and \(\mu \geq n/4\).

300

319
The Kallianpur-Robbins law describes the long term asymptotic behaviour of the distribution of the occupation measure of a Brownian motion in the plane. In this paper we show that this behaviour can be seen at every typical Brownian path by choosing either a random time or a random scale according to the logarithmic laws of order three. We also prove a ratio ergodic theorem for small scales outside an exceptional set of vanishing logarithmic density of order three.

299
We propose a new discretization scheme for solving ill-posed integral equations of the third kind. Combining this scheme with Morozov's discrepancy principle for Landweber iteration, we show that for some classes of equations such a method requires a number of arithmetic operations of smaller order than the collocation method to approximately solve an equation with the same accuracy.
What is the difference between the three terms below?
percentile
quantile
quartile

0 quartile = 0 quantile = 0 percentile
1 quartile = 0.25 quantile = 25 percentile
2 quartile = 0.5 quantile = 50 percentile (median)
3 quartile = 0.75 quantile = 75 percentile
4 quartile = 1 quantile = 100 percentile

Percentiles go from $0$ to $100$. Quartiles go from $1$ to $4$ (or $0$ to $4$). Quantiles can go from anything to anything. Percentiles and quartiles are examples of quantiles.

In order to define these terms rigorously, it is helpful to first define the quantile function, which is also known as the inverse cumulative distribution function. Recall that for a random variable $X$, the cumulative distribution function $F_X$ is defined by the equation$$F_X(x) := \Pr(X \le x).$$The quantile function is defined by the equation$$Q(p)\,=\,\inf\left\{ x\in \mathbb{R} : p \le F(x) \right\}.$$ Now that we have got these definitions out of the way, we can define the terms:
percentile: a measure used in statistics indicating the value below which a given percentage of observations in a group of observations fall. Example: the 20th percentile of $X$ is the value $Q_X(0.20)$.
quantile: values taken from regular intervals of the quantile function of a random variable. For instance, for some integer $k \geq 2$, the $k$-quantiles are defined as the values $Q_X(j/k)$ for $j = 1, 2, \ldots, k - 1$. Example: the 5-quantiles of $X$ are the values $Q_X(0.2), Q_X(0.4), Q_X(0.6), Q_X(0.8)$.
It may be helpful for you to work out an example of what these definitions mean when, say, $X \sim U[0,100]$, i.e. $X$ is uniformly distributed from 0 to 100.

References from Wikipedia: from the wiki page https://en.wikipedia.org/wiki/Quantile, some q-quantiles have special names:
The only 2-quantile is called the median
The 3-quantiles are called tertiles or terciles → T
The 4-quantiles are called quartiles → Q
The 5-quantiles are called quintiles → QU
The 6-quantiles are called sextiles → S
The 8-quantiles are called octiles → O (as added by @NickCox - now on wiki page also)
The 10-quantiles are called deciles → D
The 12-quantiles are called duodeciles → Dd
The 20-quantiles are called vigintiles → V
The 100-quantiles are called percentiles → P
The 1000-quantiles are called permilles → Pr

The difference between quantile, quartile and percentile becomes obvious.
Percentile: the percent of the population which lies below that value.
Quantile: the cut points dividing the range of a probability distribution into continuous intervals with equal probability. There are $q-1$ of the $q$-quantiles, one for each integer $k$ satisfying $0 < k < q$.
Quartile: a quartile is a special case of quantile; quartiles cut the data set into four equal parts, i.e. $q=4$, so we have the first quartile Q1, the second quartile Q2 (median) and the third quartile Q3.
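As a quick illustration (not part of the original answers), numpy exposes the same cut points under both indexings:

# The 25th/50th/75th percentiles and the 0.25/0.5/0.75 quantiles are the same
# numbers; they are also the 1st, 2nd (median) and 3rd quartiles of the sample.
import numpy as np

x = np.random.default_rng(42).normal(size=1000)

print(np.percentile(x, [25, 50, 75]))      # percentiles (0-100 scale)
print(np.quantile(x, [0.25, 0.5, 0.75]))   # quantiles   (0-1 scale)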
To use Power Substitution, you should be familiar with Integration by Substitution (or U-Substitution) and derivatives. The key thing to remember is: If \(u=f(x)\), then \(du=f'(x) \, dx\). For example: If \(u=5{x}^{3}-4x+5\), then \(du=({15x}^{2}-4) \, dx\). The goal of power substitution is to replace any nth root (which is difficult to integrate) with an integer power, which is much easier to integrate. Using Power Substitution Let's illustrate this method using an example: \(\int \frac{1}{2+\sqrt{x}} \, dx\) Since this integral has an nth root, we decide to use Power Substitution. Let's substitute the term that has the nth root with \(u\), then solve for \(x\), and finally find the differential \(dx\): Let \(u=\sqrt{x}\), so \(x={u}^{2}\) and \(dx=2u \, du\). Now, let's substitute \(u\) into our original integral: \(\int \frac{1}{2+u}\times 2u \, du\) After simplifying, we have: \(2\int \frac{u}{2+u} \, du\) Now we have an easier integral without any square root. You can proceed to solve this. Just don't forget to substitute back \(u=\sqrt{x}\) after the integration. We will not actually solve the integral in this tutorial. For the full answer with steps, see the full solution here. After all, the key is to recognize when to use power substitutions and what to set the variable \(u\) to. Hope you now have a good idea on how to do these. What's Next At Cymath, we believe that learning by examples is one of the best ways to get better in calculus and problem solving in general. You can now try solving other integrals at the top of this page using Power Substitution. You can also subscribe to Cymath Plus, which offers ad-free and more in-depth help, from pre-algebra to calculus.
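As a quick cross-check of the example above (this code is not part of the tutorial; it simply uses SymPy to confirm that the integrand has an elementary antiderivative consistent with the substitution):

# Verify the example integral with SymPy and check the result by differentiation.
import sympy as sp

x = sp.symbols('x', positive=True)
F = sp.integrate(1 / (2 + sp.sqrt(x)), x)
print(F)                                                    # antiderivative
print(sp.simplify(sp.diff(F, x) - 1 / (2 + sp.sqrt(x))))    # 0, so F is correct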
My topology professor told me in a discussion that the suspension spectrum $\operatorname{colim} \Omega^n \Sigma^n S^0$ is the same as the monoid $G$, where $G=\operatorname{colim} G_n$ and $G_n$ is the monoid of self homotopy equivalences of $S^n$. I just want to ask if this is correct or whether I heard her wrong. My first guess is that $G=\pi_0 \Omega^\infty \Sigma^\infty S^0$ is the correct statement. A homotopy equivalence of $S^k$ is an element $\alpha=\pm 1 \in \pi_k(S^k)=\pi_0( \Omega^k \Sigma^k S^0)$. This gives a map $G_k \to \pi_0\Omega^\infty \Sigma^\infty S^0$. But since the connected components of $\Omega^\infty \Sigma^\infty S^0$ are not contractible (the stable $k$-stems $\pi_k^S(S^0)$ are nontrivial for $k>0$), I can't identify $\pi_0 \Omega^\infty \Sigma^\infty S^0$ with $\Omega^\infty \Sigma^\infty S^0$. Hence I don't see any possible way of even finding a map $G \to \Omega^\infty \Sigma^\infty S^0$. Do any of you?
Let $f: [0,\infty) \to [0,\infty)$ be a continuous function such that $f(f(x)) = x^2, \forall x \in [0,\infty)$. Prove that $\displaystyle{\int_{0}^{1}{(f(x))^2dx} \geq \frac{3}{13}}$. All I know about this function is that $f$ is bijective, it is strictly increasing*, $f(0) = 0, f(1) = 1, f(x^2) = (f(x))^2, \forall x \in [0, \infty)$ and $f(x) \leq x, \forall x \in [0, 1]$**. With all these, I am not able to show that $\displaystyle{\int_{0}^{1}{(f(x))^2dx} \geq \frac{3}{13}}$. *Suppose that $f$ is strictly decreasing. Then, $\forall x \in (0,1), x^2 < x \implies f(x^2) > f(x) \iff (f(x))^2 > f(x) \iff f(x) > 1$, which is false because, if we substitute $x$ with $0$ and with $1$ in $f(x^2) = (f(x))^2$ we get that $f(0) \in \{0,1\}$ and $f(1) \in \{0,1\}$. So $f$ is strictly increasing. **Suppose that there exists $x_0 \in [0, 1]$ such that $f(x_0) > x_0$. Then, $x_0^2 = f(f(x_0)) > f(x_0) > x_0$, which is false. Then $f(x) \leq x, \forall x \in [0,1]$. Edit: I have come up with an idea to use Riemann sums, but I reach a point where I cannot continue. Let $\epsilon < 1$. Then $f(\epsilon) = x_1$ and $f(x_1) = \epsilon^2$. And now $(f(\epsilon))^2 = f(\epsilon^2) = x_2$ and so on. Now we will use the Riemann sum: We will take the partition $\Delta = (1 > \epsilon > \epsilon ^2 > ... \epsilon ^{2^n} >0 ) $and the intermediate points will be the left margin of each interval. Then we have: $\displaystyle{\int_{0}^{1}{(f(x))^2 dx}} = \displaystyle{ \lim_{\epsilon \to 1}{\lim_{n \to \infty}{\sum_{k = 0}^{n}{(\epsilon^{2^k} - \epsilon^{2^{k+1}})\epsilon^{2^{k+1}}}}}}$ I do not know how to compute this.
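A quick numerical probe of the inner sum (my own code, and not a proof of anything): it simply evaluates $\sum_{k=0}^{n}(\epsilon^{2^k}-\epsilon^{2^{k+1}})\epsilon^{2^{k+1}}$ for a few values of $\epsilon$. One caution: the gap between consecutive partition points is $t(1-t)$ with $t=\epsilon^{2^k}$, and for $\epsilon$ close to $1$ some $t$ always lands in $[\tfrac12,\tfrac1{\sqrt2})$, so the mesh of this particular partition does not shrink as $\epsilon\to1$; identifying the double limit with the integral therefore needs a separate argument.

def inner_sum(eps, n):
    # sum_{k=0}^{n} (eps^(2^k) - eps^(2^(k+1))) * eps^(2^(k+1))
    t = eps
    total = 0.0
    for _ in range(n + 1):
        t2 = t * t
        total += (t - t2) * t2
        t = t2
    return total

for eps in (0.9, 0.99, 0.999, 0.9999):
    print(eps, inner_sum(eps, 200))   # n = 200 already behaves like n -> infinity here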
I've found a number of closely related functions, each of which takes three complex numbers $z_1,\,z_2$, and $z_3$ (which we can consider as the vertices of a triangle) as its arguments, and outputs a complex number $w$. A few of these functions represent well-known triangle centers $w$, such as the centroid (that is, $w=(z_1+z_2+z_3)/3)$), or the zeros and critical points of the polynomial $R(z) := (z-z_1)(z-z_2)(z-z_3)$. Other functions among them are less obvious, such as the following point $w=p$: $$p := \frac{(z_2+z_3) z_1^2+\left(z_2^2-6z_2z_3+z_3^2\right) z_1+z_2z_3(z_2+z_3)\sqrt{-3(z_1-z_2)^2(z_1-z_3)^2(z_2-z_3)^2}}{2 \left(z_1^2+z_2^2+z_3^2-z_1z_2-z_1z_3-z_2z_3\right)}$$ As far as I can tell, by comparing the point $p$ to different triangle centers in GeoGebra, it seems to be the first isodynamic point to a high degree of precision for the triangles that I've thrown at it so far. My problem is twofold: $(1)$ How do I prove that $p$ is (or isn't) the first isodynamic point of the triangle ($z_1,\,z_2,\,z_3$)? $(2)$ How do I (numerically) match $w$ with likely triangle centers based on the Encyclopedia of Triangle Centers? GeoGebra uses a small portion of the material in the ETC, but unfortunately, it fails to implement some important triangle centers with low indices. Suggestions for other, similar resources are welcome, too. As for $(1)$, I think a viable starting point might be to convert $p$ into barycentric or normalized trilinear coordinates and directly compare the result to the coordinates given in the ETC, but I'm not sure how to do this. It seems to be slightly easier to use Cartesian coordinates as a starting point, but Mathematica (for example) generally doesn't seem to be very inclined to return the real and imaginary parts of $p$ (for general $z_k$, e.g. when you let $z_k = a_k + i b_k$ for real numbers $a_k,\,b_k,\,k=1,2,3,$ and simplify the expressions). Regarding $(2)$, the following website can supposedly be used to compare points to almost all entries in the ETC: https://faculty.evansville.edu/ck6/encyclopedia/Search_6_9_13.html Unfortunately, I seem to get inconsistent results from my attempts to implement the algorithm on the website above in Mathematica. More specifically, certain triangle centers that I've tested (such as the centroid) yield the correct coordinates in the list, while other obvious ones that I've tested are either not listed, or have the wrong index. I suspect that I've misunderstood the information on the website in some elementary way, so feel free to correct my algorithm (for $p$ above, in the example that follows): $(i)$ Choose $z_1,\,z_2,\,z_3$ as the vertices of a triangle with side lengths $6,\,9,$ and $13$, e.g. $z_1 = 0,\,z_2 = 9,$ and $z_3 = (-26/9) + (8/9)\sqrt{35}i$. $(ii)$ Solve the following linear system of equations for $u,v,w$: $$\begin{cases} u\,\text{Re}(z_1) + v\,\text{Re}(z_2) + w\,\text{Re}(z_3) = \text{Re}(p)\\ u\,\text{Im}(z_1) + v\,\text{Im}(z_2) + w\,\text{Im}(z_3) = \text{Im}(p)\\ u + v + w = 1, \end{cases}$$ where $u:v:w$ are the barycentric coordinates for $p$. (Here, $p$ can be approximated to a high degree of precision, e.g. to avoid problems with Mathematica...) $(iii)$ Let $a=6,\,b=9$, and $c=13$ (or, more ideally, just use the Pythagorean theorem...), and define $x = u/a,\,y = v/b,\,z = w/c$. Furthermore, calculate the area of the triangle as $A = 4\sqrt{35}$. $(iv)$ Calculate $kx = 2Ax/(ax+by+cz) = 2Ax/(u+v+w)$, which should be the sought "coordinate" in the table. 
Carrying out the algorithm above yields $kx \approx 0.14368543660$, whereas the coordinate for the first isodynamic point (with index $15$ in the table) is $\sim 3.10244402065$. Perhaps $p$ isn't actually the first isodynamic point, but I like to think that I've done something wrong, like interpreting $a,\,b,$ and $c$ as a triangle's side lengths...
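For part $(2)$, here is a small numerical sketch of steps $(ii)$–$(iv)$ as described above (my own code; the helper names are made up, and the centroid is the only point I use as a sanity check):

import numpy as np

def barycentrics(z1, z2, z3, w):
    # Step (ii): normalized barycentric coordinates (u, v, w) of the point w.
    M = np.array([[z1.real, z2.real, z3.real],
                  [z1.imag, z2.imag, z3.imag],
                  [1.0,     1.0,     1.0    ]])
    return np.linalg.solve(M, np.array([w.real, w.imag, 1.0]))

# Step (i): the 6-9-13 reference triangle with the vertices from the question.
z1, z2, z3 = 0 + 0j, 9 + 0j, (-26 / 9) + (8 / 9) * np.sqrt(35) * 1j

# Steps (iii)-(iv), exactly as written in the question.
a, b, c = 6.0, 9.0, 13.0
area = 4 * np.sqrt(35)

def kx(w):
    u, v, w_ = barycentrics(z1, z2, z3, w)
    x, y, z = u / a, v / b, w_ / c
    return 2 * area * x / (a * x + b * y + c * z)

print(kx((z1 + z2 + z3) / 3))                      # centroid: about 2.6293 with these choices
print(abs(z2 - z3), abs(z3 - z1), abs(z1 - z2))    # 13, 6, 9: the side opposite z1 is 13, not 6

One thing worth checking before concluding that $p$ is not in the list: with the vertices of step $(i)$, the side opposite $z_1$ has length $13$, while the side of length $6$ is opposite $z_2$. If the ETC search convention ties its coordinate to the vertex opposite the side of length $6$, this labelling mismatch would leave highly symmetric centers (like the centroid and incenter) unaffected but change the value for every other center, so it may be worth permuting the roles of the vertices (or of $a,\,b,\,c$) in steps $(ii)$–$(iv)$.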
I personally like the following approach. Define the tensor algebra $T^{\circ}(V)=\bigoplus V^{\otimes n} = k\oplus V \oplus V\otimes V +\cdots$. Because there natural isomorphisms $V^{\otimes m}\otimes V^{\otimes n} \cong V^{\otimes m+n}$, and tensor products distributes over direct sums, we have a natural multiplication on $T^{\circ}(V)$. Because tensor products are associative, this multiplication is associative. Because we have a natural isomorphism $k\otimes_k V \cong V$, the multiplication is unital. Further more, because tensor products (over $k$) are $k$-linear, all the maps in sight are $k$-linear. Thus, $T^{\circ}(V)$ is a noncommutative, associative, unital $k$-algebra. There is a grading on $T^{\circ}(V)$ where $V^{\otimes n}$ has degree $n$, and so $T^{\circ}(V)$ is graded. If we quotient out by a homogeneous ideal, the result will still be a noncommutative, graded, associative, unital $k$-algebra. So let's quotient out by the (two sided) ideal generated by all elements of the form $v\otimes v$ where $v\in V$. We call the quotient ring the exterior algebra of $V$ and denote it $\bigwedge^{\circ}(V)$. You can actually define the product to be the wedge product, and you get all the properties (except skew symmetry) from the properties of $T^{\circ}(V)$. Skew symmetry comes from the defining relations in the ideal we quotiented by. So we can get all the basic properties of the wedge product without passing to combinatorics. Of course, you still have to link this definition of wedge product with whatever other definition you are using, and since I don't know your definition, I can't really say if there will be any combinatorics involved in that. Unfortunately, we have replaced combinatorics (which, while messy, is elementary) with tensor products and graded rings, which are slightly higher level. In particular, this is probably not an appropriate definition for an undergraduate class in calculus on manifolds, at least one that doesn't presuppose some algebra.
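If it helps to see the defining relations of the quotient in action, here is a small toy model of $\bigwedge^{\circ}(V)$ for a finite-dimensional $V$ (my own illustrative code; multivectors are stored as coefficients on sorted tuples of basis indices, so the sign bookkeeping that the algebraic construction packages away reappears here as a permutation sign):

def perm_sign(idx):
    # Sign of the permutation sorting idx, or 0 if an index repeats (v ^ v = 0).
    idx = list(idx)
    sign = 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] == idx[j]:
                return 0
            if idx[i] > idx[j]:
                sign = -sign
    return sign

def wedge(u, v):
    # Wedge product of multivectors stored as {sorted index tuple: coefficient}.
    out = {}
    for I, a in u.items():
        for J, b in v.items():
            s = perm_sign(I + J)
            if s:
                key = tuple(sorted(I + J))
                out[key] = out.get(key, 0) + s * a * b
    return {k: c for k, c in out.items() if c != 0}

e1, e2 = {(1,): 1}, {(2,): 1}          # basis vectors of V, labelled by indices
print(wedge(e1, e2))                   # {(1, 2): 1}
print(wedge(e2, e1))                   # {(1, 2): -1}  -> skew symmetry
print(wedge(e1, e1))                   # {}            -> v ^ v = 0
print(wedge({(1,): 1, (2,): 2}, e1))   # {(1, 2): -2}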
The log function, also called the logarithm function, turns up in many mathematical problems. The logarithmic function is used to reduce the complexity of problems by turning multiplication into addition and division into subtraction, using the properties of logarithmic functions. Here, the value of log infinity is worked out. In general, the logarithm is classified into two types. They are
Common Logarithmic Function
Natural Logarithmic Function
The log function with base 10 is called the common logarithmic function and the log with base e is called the natural logarithmic function. The logarithmic function is defined as follows: if log\(_a\) b = x, then a\(^x\) = b, where x is the logarithm of the number 'b' and 'a' is the base of the log function, usually taken to be either 'e' or '10'. The base 'a' should be a positive number not equal to 1. What is Infinity? In terms of log infinity log(y), as the number y increases without bound, log(y) also increases without bound, although at a slower rate. The symbol used to denote infinity is "∞". How to calculate the value of Log Infinity? Now, let us discuss how to find the value of log infinity using the common log function and the natural log function. Value of log 10 infinity: The log function of infinity to the base 10 is denoted as "log 10 ∞" or "log ∞". According to the definition of the logarithmic function, the base is a = 10 and 10\(^x\) = b. As the value of b approaches infinity, the value of x also approaches infinity, so Log 10 infinity = ∞. Value of log e infinity: The natural log function of infinity is denoted as "log e ∞". It is also known as the log function of infinity to the base e. The natural log of ∞ is also represented as ln(∞). Log e ∞ = ∞ (or) ln(∞) = ∞. Both the common logarithm and the natural logarithm of infinity have the same value. Sample Problem Question: Evaluate the following limits \(\lim_{x\rightarrow \infty }e^{x}\), \(\lim_{x\rightarrow -\infty }e^{x}\), \(\lim_{x\rightarrow \infty }e^{-x}\), \(\lim_{x\rightarrow -\infty }e^{-x}\). Solution:
(1) As x takes the value infinity, e\(^x\) becomes e\(^{\infty}\) = ∞, so \(\lim_{x\rightarrow \infty }e^{x}\) = ∞.
(2) As x takes the value negative infinity, e\(^x\) becomes e\(^{-\infty}\) = 0, so \(\lim_{x\rightarrow -\infty }e^{x}\) = 0.
(3) As x takes the value infinity, e\(^{-x}\) becomes e\(^{-\infty}\) = 0, so \(\lim_{x\rightarrow \infty }e^{-x}\) = 0.
(4) As x takes the value negative infinity, e\(^{-x}\) becomes e\(^{-(-\infty)}\) = ∞, so \(\lim_{x\rightarrow -\infty }e^{-x}\) = ∞.
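These values and the sample limits can be checked with a computer algebra system; a brief SymPy sketch (my own, not part of the original article):

import sympy as sp

x = sp.symbols('x')

print(sp.limit(sp.log(x), x, sp.oo))       # oo  (natural log of infinity)
print(sp.limit(sp.log(x, 10), x, sp.oo))   # oo  (common log of infinity)
print(sp.limit(sp.exp(x), x, sp.oo))       # oo
print(sp.limit(sp.exp(x), x, -sp.oo))      # 0
print(sp.limit(sp.exp(-x), x, sp.oo))      # 0
print(sp.limit(sp.exp(-x), x, -sp.oo))     # oo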
Hello, I want to describe all involutions of the full matrix ring over a field and all involutions of the matrix polynomial ring. Is it true or false that every involution of the full matrix ring $T = M_n(R)$ over a field $R$ has the following form $$ A \to C^{-1}A^TC, $$ for all $A\in M_n(R)$ and some fixed matrix $C$? What can we say about involutions of the matrix-polynomial ring $T[x]$? Every involution of $T$ is of the form $\phi(A)=C^{-1}\sigma(A^T)C$ for an invertible matrix $C$, and a field automorphism $\sigma\colon R\to R$ satisfying $\sigma^2=\iota$. Note that, for an involution $\phi$, the map $\theta\colon T\to T$ defined by $\theta(A)=\phi(A^T)$ is a ring automorphism. So $\theta$ preserves the center of $T$. As the center of $T$ consists of the scalar matrices $\lambda I$ for $\lambda\in R$, we have $\theta(\lambda I)=\sigma(\lambda)I$ for a field automorphism $\sigma$. As $\phi\phi(\lambda I)=\lambda I$, we have $\sigma^2(\lambda)=\lambda$. Then, $\chi(A)=\phi(\sigma(A^T))$ is an automorphism of $T$, which preserves $\lambda I$ for each $\lambda\in R$, so is an automorphism of $R$-algebras. So, $\chi$ is an inner automorphism, meaning it is of the form $\chi(A)=C^{-1}AC$ for an invertible $C$. Therefore, $\phi(A)=C^{-1}\sigma(A^T)C$. In fact, $\phi\phi(A)=A$, so $C^{-1}\sigma(C)^TA\sigma(C)^{-T}C=A$, showing that $C^{-1}\sigma(C)^T$ is in the center of $T$, so is equal to $\lambda I$ for some $\lambda\in R$. So, $C^T=\lambda \sigma(C)$. Taking the transpose, $C=C^{TT}=\lambda\sigma(C^T)=\lambda^2 C$. So, $\lambda=\pm1$ and either $C^T=\sigma(C)$ or $C^T=-\sigma(C)$.
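As a small numerical illustration of the claimed form (my own sketch over $R=\mathbb{R}$ with $\sigma=\mathrm{id}$ and a symmetric $C$, i.e. the $\lambda=+1$ case, so it only exercises the transpose-and-conjugation part):

import numpy as np

rng = np.random.default_rng(1)
n = 4

C = rng.standard_normal((n, n))
C = C + C.T + 10 * np.eye(n)        # symmetric (C^T = C) and comfortably invertible
C_inv = np.linalg.inv(C)

def phi(A):
    # Candidate involution A -> C^{-1} A^T C (here sigma is the identity).
    return C_inv @ A.T @ C

A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

print(np.allclose(phi(A @ B), phi(B) @ phi(A)))   # True: phi is an anti-automorphism
print(np.allclose(phi(phi(A)), A))                # True: phi squares to the identity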
In electromagnetism we say that all the electromagnetic interactions are governed by the 4 golden rules of Maxwell. But I want to know: is this(to assume that there is no requirement of any other rule)only an assumption, a practical observation, or is there a deeper theoretical point behind it? Could there be a deeper theory behind assuming that there is not requirement of rules other than Maxwell's equations? In electromagnetism we say that all the electromagnetic interactions are governed by the 4 golden rules of Maxwell. But I want to know that is this only an assumption It is not an assumption, it is an elegant way of joining the diverse laws of electrictity and magnetism into one mathematical framework. or a practical observation The laws of electricity and magnetism were described mathematically by fitting observations and always being validated, i.e. correct, in their predictions. Maxwell's equations not only incorporated them but also by unifying electricity and magnetism mathematically give predictions that have never been falsified. So yes, they are a mathematical model fitting observations, a very elegant model. or there exist any theoretical point behind it? Physics is about observations and the derivation of mathematical models, theories, that will fit them and will also predict new observations to be measured and evaluate the theory. Physics is not about philosophy or mathematics, it is about describing nature using mathematics as a tool. If there exists a "theoretical point" it is that theoretical physicists try to unify in one mathematical model all the known observations, i.e. continue on what Maxwell has done in unifying electricity and magnetism, by unifying the weak with the electromagnetic, and proposing a unification with the strong in similar mathematical frameworks. The goal being in unifying also gravity, all four forces in one mathematical model The Maxwell equations only approximately describe electromagnetism, even in a pure vacuum. This is a consequence of quantum electrodynamics. One can derive corrections to the Maxwell equations; this was first done by Heisenberg an Euler in the regime where the fields only change appreciably over distances much larger than the electron Compton wavelength, see here. Depending on how "basic" you consider an equation to be to electromagnetism, you could consider other equations to be important enough to be thought of as basic, given the type of situation. 
For instance, when dealing with electromagnetism in media (typically linear media), the Constitutive Relations also apply and are necessary: $$\overrightarrow{D} = \varepsilon_{0} \overrightarrow{E} + \overrightarrow{P}$$ $$\overrightarrow{H} = \frac{\overrightarrow{B}}{\mu_{0}} - \overrightarrow{M}$$ Or, if you happen to be relating currents to charges, you may want to use the Continuity Equation (although you can derive this by taking the divergence of Ampere's Law with the Maxwell Correction): $$\nabla \cdot \overrightarrow{J} + \frac{\partial \rho}{\partial t} = 0$$ Furthermore, if you are dealing with point charge with mass in addition to electromagnetic fields, the Lorentz Force Equation will be needed (although you can derive this from Newton's 2nd Law, Lagrangian Mechanics, or Hamiltonian Mechanics): $$\overrightarrow{F} = q(\overrightarrow{E}+ \overrightarrow{v}\times \overrightarrow{B})$$ But if you are being strict, and want to have the most bare-bones version of Maxwell's Equations (in a vacuum), you can get away with only two equations, those of the vector and scalar potentials: $$\overrightarrow{B} = \nabla \times \overrightarrow{A}$$ $$\overrightarrow{E} =- \nabla \Phi - \frac{ \partial \overrightarrow{A}}{ \partial t}$$ And by taking the divergence and curl of each of these equations, you can recover the four Maxwell's Equations. The number of equations you need really boils down to what type of problem you are trying to solve. I appreciate @ACuriousMind's comments on my question and thank him/her for pointing out the link that he/she has pointed out. I also apologize for being somewhat reluctant about mathematics in my comments when I posted the question. The question was whether Maxwell's equations are the only equations that govern the electromagnetic interactions. With the assumption that the Lorentz force law is given (that is to say that it has been determined experimentally beyond any doubt), the question reduces to asking "Whether Maxwell's equations determine the electromagnetic fields completely?". Now, I think the answer is obvious and is answered by the famous and beautiful Helmholtz theorem. With all due respect, I wonder why no other answers chose to mention this simple and conclusive response and instead some chose to patronize the OP about how science works and what physics is about. The Helmholtz theorem states that if $\nabla \cdot \vec{M}= U$ and $\nabla \times \vec{M}=\vec{V}$, $U$ and $\vec{V}$ both go to zero as $r \to \infty$ faster than $\displaystyle\frac{1}{r^2}$, and $\vec{M}$ itself goes to zero as $r \to \infty$ then $\vec{M}$ is uniquely and consistently determined in terms of $U$ and $\vec{V}$. Once identifying $\vec{M}$ with $\vec{E}$ and then with $\vec{B}$, it is clear that the Helmholtz theorem dictates that the electromagnetic field is uniquely determined by Maxwell's equations and thus, there cannot be any additional new law of electrodynamics. Because there is nothing left to be described.
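As a small symbolic check of the potential formulation given above (that $\overrightarrow{B} = \nabla \times \overrightarrow{A}$ and $\overrightarrow{E} = -\nabla \Phi - \partial \overrightarrow{A}/\partial t$ automatically reproduce the two homogeneous Maxwell equations, $\nabla\cdot\overrightarrow{B}=0$ and $\nabla\times\overrightarrow{E}=-\partial\overrightarrow{B}/\partial t$), here is a SymPy sketch of mine with arbitrary smooth potentials:

import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

R = CoordSys3D('R')
t = sp.symbols('t')

def ddt(v):
    # Componentwise time derivative of a sympy.vector Vector.
    return (v.dot(R.i).diff(t) * R.i + v.dot(R.j).diff(t) * R.j + v.dot(R.k).diff(t) * R.k)

Phi = sp.Function('Phi')(R.x, R.y, R.z, t)
A = (sp.Function('A_x')(R.x, R.y, R.z, t) * R.i
     + sp.Function('A_y')(R.x, R.y, R.z, t) * R.j
     + sp.Function('A_z')(R.x, R.y, R.z, t) * R.k)

B = curl(A)
E = -gradient(Phi) - ddt(A)

print(sp.simplify(divergence(B)))                                         # 0
print([sp.simplify((curl(E) + ddt(B)).dot(b)) for b in (R.i, R.j, R.k)])  # [0, 0, 0]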
Frankly, complex angles just aren't worth the hassle. As the answer you've linked to points out, they're just a complicated way to re-parametrize the evanescent wave$$e^{i\mathbf k\cdot \mathbf r}= e^{i(k_x,0,i\kappa)\cdot (x,y,z)}=e^{ik_x x}e^{-\kappa z}\ \ \stackrel{\rm as}{=} \ \ e^{ik(\sin(\theta),0,\cos(\theta))\cdot\mathbf r}.$$ For the total-internal-reflection case that you ask about, take the form above for the evanescent wave in medium $2$ for $z>0$, so that you're required to have$$k_x^2 -\kappa^2 = k_2^2 = \frac{n_2^2}{n_1^2} k_1^2.$$With that notation in place, any angle that fulfills the equations\begin{align}k_2 \sin(\theta) & = k_{x,1} \\k_2 \cos(\theta) & = i\kappa,\end{align}or equivalently\begin{align}\sin(\theta) & = \frac{n_1}{n_2}\sin(\theta_i) \\\cos(\theta) & = i\sqrt{ \frac{n_1^2}{n_2^2}\sin^2(\theta_i)-1 },\end{align}is an acceptable solution $-$ so long as those two conditions are satisfied, even if you get different $\theta$s, the consequences for the real-world waveforms are exactly identical, and the $\theta$s cannot be differentiated on the basis of any physical observable. On the other hand, it's important to note that both of these conditions need to be satisfied, or you run the risk of having an incorrect sign for $\kappa$ and, with that, an exponentially-increasing wave where you were meant to have a quickly-decaying evanescent wave. To fully characterize the complex angle, let's work our conditions some more, though, by expanding out the angle into its real and imaginary parts, $\theta = \theta_\mathrm{re}+i \theta_\mathrm{im}$ (with $\theta_\mathrm{re},\theta_\mathrm{im} \in \mathbb R$),\begin{align}\sin(\theta_\mathrm{re})\cosh(\theta_\mathrm{im}) + i\cos(\theta_\mathrm{re})\sinh(\theta_\mathrm{im}) & = \frac{n_1}{n_2}\sin(\theta_i) \\\cos(\theta_\mathrm{re})\cosh(\theta_\mathrm{im}) - i\sin(\theta_\mathrm{re})\sinh(\theta_\mathrm{im}) & = i\sqrt{ \frac{n_1^2}{n_2^2}\sin^2(\theta_i)-1 },\end{align}and then requiring that the real and imaginary parts of these equations be satisfied simultaneously,\begin{align}\sin(\theta_\mathrm{re})\cosh(\theta_\mathrm{im}) & = \frac{n_1}{n_2}\sin(\theta_i) \\\cos(\theta_\mathrm{re})\sinh(\theta_\mathrm{im}) & = 0 \\\cos(\theta_\mathrm{re})\cosh(\theta_\mathrm{im}) & = 0 \\\sin(\theta_\mathrm{re})\sinh(\theta_\mathrm{im}) & = - \sqrt{ \frac{n_1^2}{n_2^2}\sin^2(\theta_i)-1 }.\end{align}Since $\theta_\mathrm{im}$ is real, which guarantees that $\cosh(\theta_\mathrm{im})>0$, we conclude from $\cos(\theta_\mathrm{re})\cosh(\theta_\mathrm{im}) = 0$ that$$\cos(\theta_\mathrm{re}) = 0$$and therefore that$$\theta_\mathrm{re} = \frac{\pi}{2} + n \pi, \ \ n\in \mathbb Z,$$as a necessary condition, which then implies that $\sin(\theta_\mathrm{re}) = (-1)^n$, so we still need to solve both of\begin{align}\cosh(\theta_\mathrm{im}) & = (-1)^n\frac{n_1}{n_2}\sin(\theta_i) \\\sinh(\theta_\mathrm{im}) & = - (-1)^n\sqrt{ \frac{n_1^2}{n_2^2}\sin^2(\theta_i)-1 }.\end{align}Here the second equation is always solvable, as$$\theta_\mathrm{im} = - (-1)^n\mathrm{arcsinh} \mathopen{}\left(\sqrt{ \frac{n_1^2}{n_2^2}\sin^2(\theta_i)-1 }\right),$$and this will guarantee the correct sign for $\kappa$, but the first equation is only possible if $(-1)^n\sin(\theta_i)>0$, where you can conclude that $n$ is even if the incidence angle is counted as positive. 
This is a reasonable convention, but it is not guaranteed, and a more general solution is to adjust the offset $n$ according to the sign of the incidence angle as$$(-1)^n = \mathrm{sgn}(\theta_i),$$giving $\theta_\mathrm{re} = \frac{\pi}{2} \ (\mathrm{mod} \ 2\pi)$ for $\theta_i> 0$, and choosing $n=-1$ as the principal representative for the opposite case, $\theta_\mathrm{re} = -\frac{\pi}{2} \ (\mathrm{mod}\ 2\pi)$ for $\theta_i< 0$. This means, then, that the correct solution is\begin{align}\theta_\mathrm{re} & = \mathrm{sgn}(\theta_i) \frac{\pi}{2}\\\theta_\mathrm{im} & = - \mathrm{sgn}(\theta_i) \mathrm{arcsinh} \mathopen{}\left(\sqrt{ \frac{n_1^2}{n_2^2}\sin^2(\theta_i)-1 }\right),\end{align}or in other words$$\theta = \mathrm{sgn}(\theta_i) \left[\frac{\pi}{2} - i\: \mathrm{arcsinh} \mathopen{}\left(\sqrt{ \frac{n_1^2}{n_2^2}\sin^2(\theta_i)-1 }\right)\right] \mod 2\pi$$as the general solution. I've phrased things like this because they're a surefire way to guarantee the correct signs, but it's clunkier than it needs to be, so it can be simplified a bit to swap that arcsinh-of-a-square-root for a simpler arccosh: to do that, start by writing\begin{align}\cosh(\theta_\mathrm{im}) & = (-1)^n\frac{n_1}{n_2}\sin(\theta_i) = \frac{n_1}{n_2}\left|\sin(\theta_i)\right|\end{align}(since the $(-1)^n$ factor is there to guarantee positivity), allowing you to write\begin{align}\theta_\mathrm{im} = \pm \mathrm{arccosh} \mathopen{}\left(\frac{n_1}{n_2}\left|\sin(\theta_i)\right|\right)\end{align}up to a sign ambiguity which we've resolved above. With the sign resolution from above, though, we get as our final result$$\theta = \mathrm{sgn}(\theta_i) \left[\frac{\pi}{2} - i\: \mathrm{arccosh} \mathopen{}\left(\frac{n_1}{n_2}\left|\sin(\theta_i)\right|\right)\right] \mod 2\pi.$$
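A quick numerical check of the final formula (my own sketch with made-up indices $n_1=1.5$, $n_2=1.0$ and an incidence angle beyond the critical angle):

import numpy as np

n1, n2 = 1.5, 1.0
theta_i = np.deg2rad(60.0)          # critical angle is arcsin(n2/n1), about 41.8 degrees

m = (n1 / n2) * abs(np.sin(theta_i))
theta = np.sign(theta_i) * (np.pi / 2 - 1j * np.arccosh(m))

# The two defining conditions on the complex angle:
print(np.sin(theta), (n1 / n2) * np.sin(theta_i))                              # equal (real)
print(np.cos(theta), 1j * np.sqrt((n1 / n2) ** 2 * np.sin(theta_i) ** 2 - 1))  # equal (purely imaginary)

# The sign comes out right: Im(cos(theta)) > 0, i.e. kappa > 0, a decaying evanescent wave.
print(np.imag(np.cos(theta)) > 0)   # True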
I am not very big in mathematics yet(will be hopefully), naive set theory has a problem with Russell's paradox, how do they defeat this sort of problem in mathematics? Is there a greater form of set theory than naive set theory that beats this problem? (Maybe something like superposition if is both or neither)? The Zermelo-Fraenkel Axioms for set theory were developed in response. The key axiom which obviates Russell's Paradox is the Axiom of Specification, which, roughly, allows new sets to be built based on a predicate (condition), but only quantified over some set. That is, for some predicate $p(x)$ and a set $A$ the set $\{x \in A : p(x)\}$ exists by the axiom, but constructions of the form: $\{x : p(x) \}$ (not quantified over any set) are not allowed. Thus the contradictory set $\{x : x \not\in x\}$ is not allowed. If you consider $S = \{x \in A: x \not\in x\}$, there's no contradiction. The same logic as in Russell's paradox gives us that $S \not\in S$, but then the conclusion is simply that $S \not\in A$, instead of any contradiction. There are other problems with Russell's program which led to Gödel's work, which is also something you should check out, but ZF is where Russell's Paradox was fixed. I'm aware of two principal approaches to solving Russell's paradox: One is to reject $x \not\in x$ as a formula. This is what Russell himself did with his theory of types: inhabitants on the universe are indexed with natural numbers (types), and $x \in y$ is only a valid formula if the type of $y$ is one greater than the type of $x$. Basically, this slices the universe into "elements", "sets of elements", "sets of sets of elements", etc, and then there is no slice that admits a concept of membership of itself. One drawback of this approach is that we then have, e.g. many empty sets of different types. Sometimes this is addressed by introducing type-shifting automorphisms, so that you can somehow recognise that the empty sets are all really the same. Another approach is to say that comprehension formulae are permitted only if you could give types to the variables, but you don't actually have to do so – this is the basis of Quine's New Foundations set theory. NF even has a universal set, but still manages to avoid Russell messing things up. This actually leads to a slightly odd situation where in NF the universal set $V$ actually does satisfy $V \in V$, and does not satisfy $V \not\in V$, so we don't forbid this kind of consideration entirely, it's just not permitted when building sets by comprehension. The other main approach is to permit $x \not\in x$, but reject the formation of the set $\{x : x\not\in x\}$. Some argued that this set is problematic because it is in some sense far too large, a substantial slice of the whole universe – if we only stuck to sensible things like $\mathbb N$ and $\mathbb R$ and $\aleph_\omega$ then everything would be fine. Zermelo–Fraenkel set theory (ZF) proposed to build sets by more concrete means: start with things you know are sets, like the empty set, and operations that you know make sets, like union and power set, and just keep applying those operations, and take all the sets you can prove to exist in this way. ZF only allows selection of subsets of existing sets by arbitrary properties, that is $\{x \in A : p(x) \}$ instead of the more general comprehension $\{x : p(x)\}$ that ruined everything. As a sidenote, ZF actually includes the Axiom of Foundation, which implies that $x\not\in x$ is true for all $x$. 
But there are also theories like ZF but without Foundation in which some $x$ do contain themselves. It lead to the development of a number of solutions in logic and set theory, as well as other insights. You can find a nice introduction to the topic in the Stanford Encyclopedia of Philosophy section on Russell's Paradox in Contemporary Logic. To the best of my knowledge, the Theory of Types http://en.wikipedia.org/wiki/Type_theory , was designed specifically to address this. Basically you restrict the objects you can refer to , by the choice of types. Another way of escaping from Russell's paradox, not yet mentioned in the answers here, is Morse-Kelley set theory. In MK set theory, there are good collections (sets) and bad collections (classes) and only the good collections (sets) are allowed to be members; unless $x$ is a set, $x\notin y$ holds for all $y$. Suppose $\Phi$ is some property. In naïve set theory, you can construct $$\{x\mid \Phi(x)\}$$ the set of all $x$ with property $\Phi$; this is the principle that causes the trouble of Russell's paradox. In ZF set theory, this unrestricted comprehension is forbidden; all you can have is $$\{x\in S\mid \Phi(x)\}$$ which is the subset of $S$ for which $\Phi$ holds. MK goes a different way. $\{x\mid \Phi(x)\}$ is allowed, but it only represents the class of all sets $x$ with property $\Phi$. That is, it is the collection of all $x$ such that $x$ has property $\Phi$ and $x$ is a set. Now take $\Phi(x) = “x\notin x”$ and let $S = \{x\mid x\notin x\}$. In naïve set theory we have $S\in S$ if and only if $S\notin S$, which is absurd. But in MK set theory we have $S\in S$ if and only if $S\notin S$ and $S$ is a set. There's no contradiction here; we've merely proved that $S$ is not a set. Since $S$ is not a set, it is not allowed to be a member of any collection, so $S\notin T$ is true for all $T$, and in particular $S\notin S$. Everything is fine. In MK set theory one can also construct a universal class $V$ which contains all sets—but $V$ is not itself a set and is not a member of itself. The way around Russell's paradox which Georg Cantor chose (and if you read Russell's letter describing the paradox to Frege, who fell into it, so to speak--this is found on pp 124-5 of van Heijenoort's book "From Frege to Goedel: A Source Book in Mathematical Logic, 1879-1931"--you find that Russell held that his paradox showed "that under certain circumstances a definable collection [Menge--(the German word for set, my comment)] does not form a totality", and therefore falls into Cantor's means of avoiding the set-theoretical paradoxes (there are others, namely Cantor's, Burali-Forti's, and Curry's paradoxes)) is this: you can divide the notion of 'collection' into those collections that form a consistent, completed totality (these are the sets), and those which do not (these are called proper classes). You might consider reading Penelope Maddy's paper "Proper Classes" (Journal of Symbolic Logic, Volume 48, Number 1, March 1983, pp. 113-139, although I found a pdf file of it on the Web, look for it using author and title) as an introduction. You might also want to look at my mathstackexchange question (867626) "A Question Regarding Consistent Fragments of Naive (Ideal) Set Theory". In the comments section you will find a link to a paper titled "Maximal consistent sets of naive comprehension" by Luca Incurvati and Julien Murzi. You asked "how do they defeat this sort of problem in mathematics?". 
Incurvati's and Murzi's paper show that even though one has maximal consistent fragments of naive Comprehension (call it COMP, the axiom that gets one in trouble with Russell's paradox) there will be incompatible maximal consistent sets of naive Comprehension, that is, there exists in COMP (in a first-order language (V,$\in$) the axiom COMP, i.e. ($\exists$y)(x)(x$\in$y iff $\phi$(x)) is an axiom schema, that is, a collection of first-order sentences with distinct first-order formulae $\phi$) two maximally consistent subcollections of COMP, $COMP_1$ and $COMP_2$ such that there exists a first-order formula $\psi$(x) such that $\psi$(x)$\in$$COMP_1$ and $\lnot$$\psi$(x)$\in$$COMP_2$ so even if you rid yourself of the problem caused by Russell's paradox, you still have for Naive Set Theory another sort of inconsistency--that caused by the independence results. Finally, you might want to look at Cantor's letter to Dedekind (in van Heijenoort, pp. 113-117). There he discusses the distinction between "consistent multiplicities [sets]" and "inconsistent multiplicities [now called proper classes]" and also discusses Cantor's Paradox, and discovers the Burali-Forti paradox (in fact, in that letter, Cantor shows (pg. 115 of van Heijenoort) that the collection $\Omega$ of all ordinals "is an inconsistent, absolutely infinite multiplicity" and therefore not a set). Suppose there exists a set $r$ such that, for any object $x$, $x\in r$ if and only if $x\notin x$. This leads to the obvious contradiction that $r\in r$ if and only if $r\notin r$. Therefore, no such set can exist. This is only a problem if, as in naive set theory, such a set must exist simply because you can define it. So, a consistent set theory cannot allow such a set to exist. The widely used Zermelo-Fraenkel axioms of set theory have been particularly successful in this regard. After over a century of intensive scrutiny, no such contradictions have been shown to arise from these axioms. They build set thoery on axioms, as ZF or ZFC, and then prove (if possible) that there would be no contradiction inside the builded theory (read on Godel 193?, and Cohen 1963). Russell's paradox is based on the naive assumption that the set of all sets does exist. They defeat it with the opposite assumption, that the set of all sets does not exist. Sets are instead "built" starting from the empty set, and step by step, being careful not to build too big sets in a single step. Certain axioms are introduced, e.g. pairs, power set, replacement, comprehension, which allow one to form a new set, based on old sets, and restrict the formation of sets defined just by "a property". Thus, the property of a set being a member (or not being a member) of itself is not allowed to be the defining property for a new set. But, for any given set $A$ one may form the set $B=\{a\in A: a$ is not a member of itself$\}$. Then $B$ is a set, but $B$ need not be an element of $A$ so even if $B$ is not a member if itself,the definition of $B$ would not imply that $B$ is a member of itself, which formally resolves the paradox. Most set theories solve this problem by giving up on unlimited comprehension. Unlimited comprehension posits the existence of "a set of all sets such that..." where you can state any clear criterion that talks about sets and their membership. Unlimited comprehension gives you the set of all sets, and also a set of all sets that aren't a member of themselves. 
Giving up on unlimited comprehension is a smart thing to do, because the combination of unlimited comprehension and classical two valued logic (true versus not true) gives you Russell's Paradox. Several such approaches are well covered in excellent sister answers. There is however also another way out - or there at least might be. Keep unlimited comprehension. Throw away the classical logic instead. Suppose we adopt a multi-valued logic where a truth value can be true ($1$), false ($0$), or anything in between, such as one half. Suppose that the negation of a statement whose truth value is $1\over 2$ is also $1\over 2$. (This isn't the only way to define negation over partial truths, but we need to pick one.) Instead of classical sets we now have fuzzy sets where membership is a matter of degree. Can we have a set of all sets that do not contain themselves? Yes, we can. The big question: Does this set contain itself? Answer: $1\over 2$. As you can see, this isn't exactly a superposition of the big question being answered by both $0$ and $1$, but it comes close.
Problem 1. Here's an example: take $A=\mathbb{N}$, $B=\{0,1,2,3\}$, and let $f\colon A\to B$ map the natural numbers to their remainder when divided by $4$ (so $f(3) = 3$, $f(15) = 3$, $f(21) = 1$, etc). Let $R$ be the equivalence relation on $B$ given by $aRb$ if and only if $a^2\equiv b^2 \pmod{4}$. The relation $S$ is defined as follows: first map to $B$, then check $R$ for the images. If the images are related, then the original elements are related; if the images are not related, then the originals are not related. For example, to see whether $3S10$ holds, we take $f(3)=3$ and $f(10)=2$, and check whether $3R2$ holds or not; since $3^2 \not\equiv 2^2\pmod{4}$, then $3\not R2$; that is, $f(3)\not Rf(10)$, so $3\not S 10$. On the other hand, $7S9$ holds: because $f(7) = 3$, $f(9) = 1$, and $f(7)^2 = 9 \equiv 1 = f(9)^2\pmod{4}$, so $f(7)Rf(9)$ is true. That's the idea. Now, for a general case. First convince yourself that $S$ is in fact an equivalence relation on $A$. Once you do that, in order to check (a), you want to see if $f([a]_S)\subseteq [f(a)]_R$. Well, take an element in $f([a]_S)$; this is a $b\in B$ such that $b=f(x)$ for some $x\in[a]_S$. That is, $xSa$ is true, and $f(x)=b$. So you want to see whether $b\in[f(a)]_R$; that is, whether $bRf(a)$. So, from assuming $xSa$ holds, you want to see whether you can always conclude that $f(x)Rf(a)$. For (b), you proceed similarly; now take $b\in [f(a)]_R$, and you want to see whether $b\in f([a]_S)$. That is, assume that $bRf(a)$; must there exist an $x\in A$ such that $xSa$ and $f(x)=b$? If so, prove it; if not, give a specific counterexample with specific $A$, $B$, $f$, $R$, and $S$, and an explicit $a$ and $b$. Problem 2. (a) If your sets are finite, then your guesses are right; if your sets are not necessarily finite, you're in trouble. Consider $A=B=\mathbb{N}$ and $f(n) = 2n$. For (b), you know there is a $1-1$ function from $A$ to $B$; see if you can construct a $1-1$ function from $A\times C$ to $B\times C$ (and see if you can spot why you need $C\neq\emptyset$). This will show that $|A|\leq|B|$ implies $|A\times C|\leq|B\times C|$. If you also know that there is no surjection between $A$ and $B$, then again you are going to have two different cases, depending on whether $C$ is "really big" (compared to $A$ and $B$) or not. Problem 3. The set $A^B$ is the set of all functions from $B$ to $A$. So $\mathbb{Q}^{\mathbb{N}}$ is the set of all functions from $\mathbb{N}$ to $\mathbb{Q}$; this is the set of all rational sequences (sequences, each term a rational number). The set $\{0,1\}^{\mathbb{N}}$ is the set of all binary sequences (sequences, each term either $0$ or $1$). Hint. Think about binary expansion of real numbers between $0$ and $1$, or even better, the ternary expansion of the elements of the Cantor set to deal with $\{0,1\}^{\mathbb{N}}$. For $\{0,1\}^*$, can you count how many sequences of length $n$ there are, for each $n$? Add them all up then. For $\mathbb{Q}^{\mathbb{N}}$, can you exhibit at least as many sequences as there are real numbers? Problem 4. You are comparing sets by inclusion. We say set $A$ is "smaller than or equal to" set $B$ if and only if $A\subseteq B$. So, for example, the set $\{3\}$ is smaller than or equal to the set $\{1,2,3,4,5\}$, but not smaller than the set $\{1,2\}$.
When using this order, note that it is possible for you to have two sets, neither of which is smaller than or equal to the other (for example, neither of $\{3\}$ and $\{1,2\}$ is smaller than the other; they are incomparable). The maximum among the sets you are given would be a set that is greater than or equal (contains) all of the sets you are given; a minimum would be a set that is smaller than or equal (is contained in) all of the sets you are given. They may or may not exist. "Greatest" is the same as "maximum" in this context; "smallest" the same as minimum. Problem 5. An equivalence relation is a collection of ordered pairs that is reflexive, symmetric, and transitive. A partial order is a collection of ordered pairs that is reflexive, * anti*symmetric, and transitive. So the question is: how many equivalence relations on $\mathbb{N}$ are also antisymmetric? Or: how many relations are all of reflexive, transitive, symmetric, and antisymmetric? Think about what having both symmetry and antisymmetry means.
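For Problem 1's concrete example at the top (remainders mod $4$), a quick brute-force check can make the relation $S$ tangible; a small sketch of my own:

A = range(20)
f = lambda n: n % 4                          # f : A -> B = {0, 1, 2, 3}
R = lambda a, b: (a * a - b * b) % 4 == 0    # the equivalence relation on B
S = lambda x, y: R(f(x), f(y))               # the pulled-back relation on A

print(S(7, 9))     # True:  f(7) = 3, f(9) = 1, and 3^2 is congruent to 1^2 mod 4
print(S(3, 10))    # False: f(3) = 3, f(10) = 2, and 9 is not congruent to 4 mod 4

# The equivalence classes of S on {0, ..., 19}, grouped by the R-class of f(x):
classes = {}
for x in A:
    classes.setdefault(f(x) ** 2 % 4, []).append(x)
print(list(classes.values()))   # the even numbers and the odd numbers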
Probability distributions of clique numbers are an active area of research. Most of the known results are in terms of bounds or asymptotic behavior. The following might all be of interest: Your problem can at least be reduced to an arguably simpler combinatorial problem. Let $a(n,k,c)$ denote the number of graphs with $n$ vertices, $k$ edges, and clique number $c$. Let $N(n)=\frac{n(n-1)}{2}$ be the total number of places one of those edges could be placed. The probability of a graph having $k$ edges is modeled by the binomial distribution $$q(n,k)={N(n)\choose k}p^k(1-p)^{N(n)-k}.$$ The probability of a graph with $k$ edges having clique number $c$ is $$r(n,k,c)=\frac{a(n,k,c)}{\binom{N(n)}{k}}.$$ The probability of a graph having $k$ edges AND having clique number $c$ is then $$\begin{aligned}b(n,k,c)&=r(n,k,c)q(n,k)\\&=a(n,k,c)p^k(1-p)^{N(n)-k}.\end{aligned}$$ To find the probability of having clique number $c$, we simply sum over the variable $k$ to obtain $$\begin{aligned}s(n,c)&=\sum_{k=0}^{N(n)}b(n,k,c)\\&=\sum_{k=0}^{N(n)}a(n,k,c)p^k(1-p)^{N(n)-k}.\end{aligned}$$ The expected value follows the standard procedure. $$\begin{aligned}E(n)&=\sum_{c=1}^nc\cdot s(n,c)\\&=\sum_{c=1}^nc\sum_{k=0}^{N(n)}a(n,k,c)p^k(1-p)^{N(n)-k}.\end{aligned}$$ The only really troublesome part is the combinatorial question. How does one count $a(n,k,c)$? One of the neat properties that we discovered in this process, though, is that the solution is a polynomial in $p$ of maximum degree $N(n)$ with integer coefficients. This suggests a relationship to generating functions and chromatic polynomials which might allow low-order versions to be reasoned about in terms of roots and other qualitative properties. The roots are kind of weird even for the small examples you presented, and this could easily be a dead end as well.
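The quantities above can be brute-forced for very small $n$, which is handy for sanity-checking any closed form for $a(n,k,c)$. A sketch (my own code, pure Python; it enumerates all $2^{N(n)}$ labelled graphs, so it is only feasible for tiny $n$):

from itertools import combinations
from math import comb

def clique_number(n, edges):
    edges = set(edges)
    for size in range(n, 0, -1):
        for verts in combinations(range(n), size):
            if all((u, v) in edges for u, v in combinations(verts, 2)):
                return size
    return 0

def a_table(n):
    # a[(k, c)] = number of labelled graphs on n vertices with k edges and clique number c.
    pairs = list(combinations(range(n), 2))
    a = {}
    for k in range(len(pairs) + 1):
        for es in combinations(pairs, k):
            c = clique_number(n, es)
            a[(k, c)] = a.get((k, c), 0) + 1
    return a

def expected_clique_number(n, p):
    N = comb(n, 2)
    return sum(c * cnt * p**k * (1 - p)**(N - k) for (k, c), cnt in a_table(n).items())

print(expected_clique_number(4, 0.5))   # exact E(4) for G(4, 1/2)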
It is known that $\log(p_1),\cdots,\log(p_n)$ are linearly independent over $\mathbb{Q}$, where $p_i$ denotes the $i$-th prime. For a number $1 \le k \le n$ let $Log(k)$ denote the vector with respect to this basis for the numbers $1,\cdots,n$. For a subset $A$ of $1,\cdots,n$ let $rank(A)$ denote the rank of the matrix built from the vectors $Log(k)$ for the numbers $k$ in $A$. Through experimentation I found the following explicit "formula" for the prime-counting function: $$ \Pi(n)= (-1)^{n+1} \sum rank(A) \cdot (-1)^{|A|}$$ where $A$ runs through the subsets of $1\cdots n$ of size $< n$. But I have no proof for this. The formula above has some similarity with the inclusion-exclusion principle: $$\left| \bigcup_{i=1}^n A_i\right| = \sum_{\emptyset\neq J\subseteq\{1,\ldots,n\}}(-1)^{|J|-1} \left |\bigcap_{j\in J} A_j\right|$$ But how does one define the sets $A_i$? The formula also has some similarity with the Euler characteristic. For a set of (possibly repeating) vectors, define: $$\chi(v_1,\cdots,v_n) = \sum_{A} rank(A) \cdot (-1)^{|A|}$$ where $A$ runs through the subsets (with repetition) of $v_1,\cdots,v_n$. It is not difficult to show that if those vectors are linearly independent then $$\chi=0$$ My conjecture is that if $\chi(v_1,\cdots,v_n)=0$ then $$\chi(v_1,\cdots,v_n,w)=0$$ for any vector $w$. This would prove the prime-counting formula. If you can think of a proof, that would be great. Related-but-not-duplicate question: https://mathoverflow.net/questions/325880/is-this-line-of-thought-using-linear-algebra-to-get-number-theoretic-results-a Here is some Sage code to play around with:
MAXN=100
def Log(a,N=MAXN): return vector([valuation(a,p) for p in primes(N)])
def findsubsets(S,m): return Set(S).subsets(m)
def getMatrixA(someSubset,N=MAXN): return matrix([Log(xx,N=N) for xx in someSubset])
def eulerCharPrimePi(n,N=MAXN): return (-1)^(n+1)*sum([ getMatrixA(x).rank()*(-1)^k if k < n else 0 for k in range(0,n+1) for x in findsubsets(range(1,n+1),k) ])
A couple of questions about Lemma 3.5. Lemma 3.5. (Gauss): Let $p \in M$ and $v \in T_p M$ such that $\exp_p v$ is defined. Let $w \in T_p M \approx T_v(T_p M)$. Then $$\left\langle (d \exp_p)_v (v), (d \exp_p)_v (w) \right\rangle = \left\langle v, w\right\rangle \;\;\;\;\; (2)$$ Proof. Write $w = w_T + w_N$, where $w_T$ is parallel to $v$ and $w_N$ is normal to $v$. Since $d \exp_p$ is linear and, by the definition of $\exp_p$, $$\left\langle (d \exp_p)_v (v), (d \exp_p)_v (w_T) \right\rangle = \left\langle v, w_T\right\rangle$$ it suffices to prove (2) for $w = w_N$. It is clear that we can assume $w_N \neq 0$. The very first bit I don't get: Since $\exp_p v$ is defined, there exists $\epsilon > 0$ such that $\exp_p u$ is defined for $$ u = tv(s), \; 0 \leq t \leq 1, \; -\epsilon < s < \epsilon $$ where $v(s)$ is a curve in $T_p M$ with $v(0) = v, v'(0) = w_N$, and $\left| v(s) \right| = const$. Why is there such an $\epsilon$ for which $\exp_p u$ is defined for all $u = tv(s)$? And continuing: We can, therefore, consider the parametrized surface $$ f : A \to M, \;\;\;\; A = \left\{ (t,s) ; 0 \leq t \leq 1, -\epsilon < s < \epsilon \right\} $$ given by $$f(t,s) = \exp_p tv(s)$$ Observe that the curves $t \mapsto f(t,s_0)$ are geodesics. To prove (2) for $w = w_N$, observe first that: $$ \left\langle \frac{\partial f}{\partial s}, \frac{\partial f}{\partial t} \right\rangle(1,0) = \left\langle (d \exp_p)_v (w_N), (d \exp_p)_v (v) \right\rangle \;\;\; (3) $$ Where does (3) come from? In addition, for all $(t,s)$, we have $$ \frac{\partial}{\partial t}\left\langle \frac{\partial f}{\partial s}, \frac{\partial f}{\partial t} \right\rangle = \left\langle \frac{D}{\partial t}\frac{\partial f}{\partial s}, \frac{\partial f}{\partial t} \right\rangle + \left\langle \frac{\partial f}{\partial s}, \frac{D}{\partial t} \frac{\partial f}{\partial t} \right\rangle $$ The last term of the expression above is zero, since $\frac{\partial f}{\partial t}$ is the tangent vector of a geodesic. From the symmetry of the connection, the first term of the sum is transformed into $$ \left\langle \frac{D}{\partial t} \frac{\partial f}{\partial s}, \frac{\partial f}{\partial t}\right\rangle = \left\langle \frac{D}{\partial s} \frac{\partial f}{\partial t}, \frac{\partial f}{\partial t}\right\rangle = \frac{1}{2} \frac{\partial}{\partial s} \left\langle \frac{\partial f}{\partial t} , \frac{\partial f}{\partial t}\right\rangle = 0 $$ It follows that $\left\langle \frac{\partial f}{\partial s}, \frac{\partial f}{\partial t}\right\rangle$ is independent of $t$. Since $$ \lim_{t\to 0} \frac{\partial f}{\partial s}(t,0) = \lim_{t\to 0} (d \exp_p)_{tv}\, t w_N = 0 $$ we conclude $\left\langle \frac{\partial f}{\partial s}, \frac{\partial f}{\partial t}\right\rangle(1,0) = 0$, which together with (3) proves the lemma. Why is $\left\langle \frac{\partial f}{\partial s}, \frac{\partial f}{\partial t}\right\rangle$ independent of $t$? And why is the computed limit $0$? There are still a couple of questions, but they might get clarified once I understand the ones I've asked. Thank you so much.
Radioactivity is the process by which the nucleus of an unstable atom loses energy by emitting radiation, including alpha particles, beta particles, gamma rays and conversion electrons Although radioactivity is observed as a natural occurring process, it can also be artificially induced typically via the bombarding atoms of a specific element by radiating particles, thus creating new atoms. Introduction Ernest Rutherford was a prominent New Zealand scientist, and a winner of the Nobel Prize in chemistry in 1908. Amongst his vast list of discoveries, Rutherford was also the first to discover artificially induced radioactivity. Through the bombardment of alpha particles against nuclei of \(\ce{^{14}N}\) with 7 protons/electrons, Rutherford produced \(\ce{^{17}O}\) (8 protons/electrons) and protons (Figure \(\PageIndex{1}\)). Through this observation, Rutherford concluded that atoms of one specific element can be made into atoms of another element. If the resulting element is radioactive, then this process is called artificially induced radioactivity Rutherford was the first researcher to create protons outside of the atomic nuclei and the \(\ce{^{17}O}\) isotope of oxygen, which is nonradioactive. Similarly, other nuclei when bombarded with alpha particles will generate new elements (Figure \(\PageIndex{2}\)) that may be radioactive and decay naturally or that may be stable and persist like \(\ce{^{17}O}\) . Before this discovery of artificial induction of radioactivity, it was a common belief that atoms of matter are unchangeable and indivisible. After the very first discoveries made by Ernest Rutherford, Irene Joliot-Curie and her husband, Frederic Joliot, a new point of view was developed. The point of view that although atoms appear to be stable, they can be transformed into new atoms with different chemical properties. Today over one thousand artificially created radioactive nuclides exist, which considerably outnumber the nonradioactive ones created. Note: Irene Joliet-Curie and Frederic Joliot Irene Joliet-Curie and her husband Frédéric both were French scientists who shared winning the Nobel Prize award in chemistry in 1935 for artificially synthesizing a radioactive isotope of phosphorus by bombarding aluminum with alpha particles. \(\ce{^{30}P}\) with 15 protons was the first radioactive nuclide obtained through this method of artificially inducing radioactivity. \[ \ce{^27_13Al + ^4_2He \rightarrow ^30_15P + ^1_0n}\] \[ \ce{^30_15P \rightarrow ^30_14Si + ^0_{-1}\beta}\] Activation (or radioactivation) involves making a radioactive isotope by neutron capture, e.g. the addition of a neutron to a nuclide resulting in an increase in isotope number by 1 while retaining the same atomic number (Figure \(\PageIndex{3}\)). Activation is often an inadvertent process occurring inside or near a nuclear reactor, where there are many neutrons flying around. For example, Cobalt-59 has a large neutron capture cross-section, making it likely that Co-59 in or near a nuclear reactor will capture a neutron forming the radioactive isotope Co-60. \[\ce{ ^1_0n + ^{59}Co \rightarrow ^{60}Co }\] The \(\ce{ ^{60}Co}\) isotope is unstable (half life of 5.272 years) and disintegrates into \(\ce{ ^{60}Ni }\) via the emission of \(\beta\) particle and \(\gamma\) radiation Figure \(\PageIndex{4}\). Example \(\PageIndex{1}\): Neutron Bombardment Write a nuclear equation for the creation of 56Mn through the bombardment of 59Co with neutrons. 
SOLUTION An unknown particle is produced along with 56Mn; in order to find the mass number (A) of the unknown we must subtract the mass number of the manganese atom from the mass number of the cobalt atom plus the neutron being thrown. In simpler terms, A = 59 + 1 − 56 = 4. Now, by referring to a periodic table to find the atomic numbers of Mn and Co, and then subtracting the atomic number of Mn from that of Co (the neutron carries no charge), we obtain the atomic number of the unknown particle: Z = 27 + 0 − 25 = 2. Thus, the unknown particle has A = 4 and Z = 2, which makes it a helium nucleus (an alpha particle), and the nuclear equation is as follows: \[ \ce{^{59}_{27}Co + ^1_0n \rightarrow ^{56}_{25}Mn + ^{4}_{2}\alpha } \nonumber\] Example \(\PageIndex{2}\): Carbon Bombardment Write a nuclear equation for the production of \(\ce{^{147}_{63}Eu}\) by the bombardment of \(\ce{^{139}_{57}La}\) with \(\ce{^{12}_6C}\). SOLUTION Like the above example, you must first find the mass number of the unknown particle: A = 139 + 12 − 147 = 4. Thus, the mass number of the unknown particle is 4. Again by referring to a periodic table and finding the atomic numbers of lanthanum, carbon and europium, we are able to calculate the atomic number of the unknown particle: Z = 57 + 6 − 63 = 0. The atomic number of the unknown particle equals zero, therefore 4 neutrons are emitted, and the nuclear equation is written as follows: \[ \ce{^{139}_{57}La + ^{12}_6C \rightarrow ^{147}_{63}Eu + 4 ^{1}_{0}n } \nonumber\] Summary Induced radioactivity occurs when a previously stable material has been made radioactive by exposure to specific radiation. Most radioactivity does not induce other material to become radioactive. This induced radioactivity was discovered by Irène Curie and F. Joliot in 1934. It is also known as man-made radioactivity. The phenomenon by which even light elements are made radioactive by artificial or induced methods is called artificial radioactivity.
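The arithmetic in both examples is just conservation of mass number and atomic number; a small helper function (my own sketch) makes that bookkeeping explicit:

def missing_particle(reactants, products):
    # Each particle is a (mass_number, atomic_number) pair; returns (A, Z) of the missing product.
    A = sum(a for a, z in reactants) - sum(a for a, z in products)
    Z = sum(z for a, z in reactants) - sum(z for a, z in products)
    return A, Z

# Example 1: 59Co + n -> 56Mn + ?
print(missing_particle([(59, 27), (1, 0)], [(56, 25)]))     # (4, 2): an alpha particle

# Example 2: 139La + 12C -> 147Eu + ?
print(missing_particle([(139, 57), (12, 6)], [(147, 63)]))  # (4, 0): four neutrons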
From what little I've been able to gather, the complex envelope preserves all information required for the reconstruction of the original signal and that it basically sends a low pass version of a bandpass signal.(Correct me if I'm wrong).I need an explanation for how the complex envelope is being created as per this text book? And also of how the original signal is obtained at the end? Let’s get one thing straight. You cannot recover the original real and imaginary parts of a complex signal based only on the magnitude values of the complex signal. That’s equivalent to saying, “I have the magnitude of a complex number. How do I compute that complex number’s real and imaginary parts?” There are an infinite number of possible correct answers to that question! However, as MBaz rightly said, you don’t have any “envelope” signal in your block diagrams. And by the way, there are no such things as “complex envelope” signals. All “envelope” signals are real-only. Having said all of that, you can learn everything about your two block diagrams by writing down the algebraic equations for all their various signals. (Your textbook should have done that for you.) Then pay attention to all the various trigonometric identities in your math reference book. It’s a mild pain to do all of that, but it’s terrifically educational. And another thing, your two block diagrams are not the normal “complex modulator” and “complex demodulator” used in modern digital communications systems. In a normal digital comms system the real-valued gI(t)/2 and gQ(t)/2 would be added (or subtracted, I forget) to produce a new real-valued signal that’s transmitted via an antenna. Have a look at Figure 1 of the following PDF file: http://www.testequity.com/documents/pdf/keysight/complex-modulation-generation-wp.pdf The complex envelope seems a not-so-well-defined concept to me. Some consider it an equivalent of the analytic signal (see What exactly is complex envelope?), from a real input. In other words, a complex signal whose real part is the real signal $f(t)$, and the imaginary part the Hilbert transform $\mathcal{H}$: $g(t) = f(t) + \imath\mathcal{H}(f(t))$. What I understand from your diagram is that $g(t)$ is already analytic. For others (see Analytic signal/Complex envelope/baseband), it denotes a modulation of the analytic signal, generally to shift the center frequency: $g_c(t) = g(t)e^{-\imath \omega_c t}$. This definition is not unique, as it depends on $\omega_c$. If this center frequency is shifted toward $0$, then a low-pass filtering can extract the baseband low-frequency component of the "complex signal". Forgetting the $\frac{1}{2}$ factor, this would correspond to the upper part and the lower part of your diagram. You can find a graphical explanation in Figure 2-1 from Complex band-bass filters for analytic signal generation and their application, where the left subfigure relates to your diagram, and the right one to the above formula for $g_c(t)$, with $w_c = 2\pi f_c$. To me, an envelope can be complex if one defines it properly, althought this is counter-intuitive as the traditional envelope is usually real and non-negative, or somehow symmetric about the time-axis. Does this transform preserve all the information from the input? In some cases, possibly. This could be stated as "In general, a real-valued bandpass signal/system has a complex-valued lowpass equivalent" (Mikko Valkama, Complex-valued signals and systems). 
This, of course, depends on the properties of the signal and an appropriate choice of $\omega_c$ and the low-pass filter. I do not see where you talk about magnitude, so I do not think the question is about "recovering a complex signal from magnitude only". However, this is an important topic in signal processing, and can be performed in some cases with additional conditions on signals.
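To see the "analytic signal, then shift to baseband" recipe numerically, here is a short NumPy/SciPy sketch (my own example with made-up parameters; scipy.signal.hilbert returns the analytic signal $f + \imath\mathcal{H}(f)$):

import numpy as np
from scipy.signal import hilbert

fs, fc = 10_000.0, 1_000.0                 # sample rate and carrier frequency in Hz
t = np.arange(0, 0.05, 1 / fs)

envelope = 1 + 0.5 * np.cos(2 * np.pi * 50 * t)     # slowly varying real envelope
phase = 0.3 * np.sin(2 * np.pi * 30 * t)            # slowly varying phase
x = envelope * np.cos(2 * np.pi * fc * t + phase)   # real bandpass signal

g = hilbert(x)                             # analytic signal g = x + j*H{x}
g_c = g * np.exp(-2j * np.pi * fc * t)     # complex lowpass equivalent, shifted to baseband

print(np.max(np.abs(np.abs(g_c) - envelope)))   # small (up to edge effects): |g_c| tracks the envelope
print(np.max(np.abs(np.real(g_c * np.exp(2j * np.pi * fc * t)) - x)))   # ~1e-16: x is recovered exactly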
import sys
sys.path.append('../code')
from init_mooc_nb import *
init_notebook()
As usual, start by grabbing the notebooks of this week (w8_general). They are once again over here. You have learned how to map a winding number onto counting the zeros of an eigenproblem in a complex plane. This can be applied to other symmetry classes as well. Let's try to calculate the invariant in the 1D symmetry class DIII. If you look in the table, you'll see it's the same invariant as the scattering invariant we've used for the quantum spin Hall effect, $$ Q = \frac{\textrm{Pf } h(k=0)}{\textrm{Pf } h(k=\pi)} \sqrt{\frac{\det h(k=\pi)}{\det h(k=0)}} $$ In this paper (around Eq. 4.13), have a look at how to use analytic continuation to calculate $\sqrt{h}$, and implement the calculation of this invariant without numerical integration, like we did before. In order to test your invariant, you'll need a topologically non-trivial system in this symmetry class. You can obtain it by combining a Majorana nanowire with its time-reversed copy. This is a hard task; if you go for it, try it out, but don't hesitate to ask for help in the discussion below. The analytic continuation from $e^{ik}$ to a complex plane is also useful in telling if a system is gapped. Using the mapping of a 1D Hamiltonian to the eigenvalue problem, implement a function which checks if there are propagating modes at a given energy. Then implement an algorithm which uses this check to find the lowest and the highest energy states for a given 1D Hamiltonian $H = h + t e^{ik} + t^\dagger e^{-ik}$ (with $h$, $t$ arbitrary matrices, of course). Jeffrey C. Y. Teo, C. L. Kane: We develop a unified framework to classify topological defects in insulators and superconductors described by spatially modulated Bloch and Bogoliubov de Gennes Hamiltonians. We consider Hamiltonians H(k,r) that vary slowly with adiabatic parameters r surrounding the defect and belong to any of the ten symmetry classes defined by time reversal symmetry and particle-hole symmetry. The topological classes for such defects are identified, and explicit formulas for the topological invariants are presented. We introduce a generalization of the bulk-boundary correspondence that relates the topological classes of defect Hamiltonians to the presence of protected gapless modes at the defect. Many examples of line and point defects in three dimensional systems will be discussed. These can host one dimensional chiral Dirac fermions, helical Dirac fermions, chiral Majorana fermions and helical Majorana fermions, as well as zero dimensional chiral and Majorana zero modes.
This approach can also be used to classify temporal pumping cycles, such as the Thouless charge pump, as well as a fermion parity pump, which is related to the Ising non-Abelian statistics of defects that support Majorana zero modes. Hint: The most general classification. Fan Zhang, C. L. Kane We discover novel topological pumps in the Josephson effects for superconductors. The phase difference, which is odd under the chiral symmetry defined by the product of time-reversal and particle-hole symmetries, acts as an anomalous adiabatic parameter. These pumping cycles are different from those in the "periodic table", and are characterized by $Z\times Z$ or $Z_2\times Z_2$ strong invariants. We determine the general classifications in class AIII, and those in class DIII with a single anomalous parameter. For the $Z_2\times Z_2$ topological pump in class DIII, one $Z_2$ invariant describes the coincidence of fermion parity and spin pumps whereas the other one reflects the non-Abelian statistics of Majorana Kramers pairs, leading to three distinct fractional Josephson effects. Hint: Beyond classification. M. B. Hastings, T. A. Loring We apply ideas from $C^*$-algebra to the study of disordered topological insulators. We extract certain almost commuting matrices from the free Fermi Hamiltonian, describing band projected coordinate matrices. By considering topological obstructions to approximating these matrices by exactly commuting matrices, we are able to compute invariants quantifying different topological phases. We generalize previous two dimensional results to higher dimensions; we give a general expression for the topological invariants for arbitrary dimension and several symmetry classes, including chiral symmetry classes, and we present a detailed $K$-theory treatment of this expression for time reversal invariant three dimensional systems. We can use these results to show non-existence of localized Wannier functions for these systems. We use this approach to calculate the index for time-reversal invariant systems with spin-orbit scattering in three dimensions, on sizes up to $12^3$, averaging over a large number of samples. The results show an interesting separation between the localization transition and the point at which the average index (which can be viewed as an "order parameter" for the topological insulator) begins to fluctuate from sample to sample, implying the existence of an unsuspected quantum phase transition separating two different delocalized phases in this system. One of the particular advantages of the $C^*$-algebraic technique that we present is that it is significantly faster in practice than other methods of computing the index, allowing the study of larger systems. In this paper, we present a detailed discussion of numerical implementation of our method. Hint: The non-commutative invariants. I. C. Fulga, F. Hassler, A. R. Akhmerov The topological invariant of a topological insulator (or superconductor) is given by the number of symmetry-protected edge states present at the Fermi level. Despite this fact, established expressions for the topological invariant require knowledge of all states below the Fermi energy. Here, we propose a way to calculate the topological invariant employing solely its scattering matrix at the Fermi level without knowledge of the full spectrum. Since the approach based on scattering matrices requires much less information than the Hamiltonian-based approaches (surface versus bulk), it is numerically more efficient.
In particular, it is better suited for studying disordered systems. Moreover, it directly connects the topological invariant to transport properties, potentially providing a new way to probe topological phases. Hint: All about scattering. Do you know of another paper that fits into the topics of this week, and that you think is good? Then you can get bonus points by reviewing that paper instead! MoocSelfAssessment() MoocSelfAssessment description In the live version of the course, you would need to share your solution and grade yourself. Do you have questions about what you read? Would you like to suggest other papers? Tell us: MoocDiscussion("Reviews", "General classification") Discussion General classification is available in the EdX version of the course.
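For the propagating-mode exercise mentioned above, here is a minimal sketch of one way such a check could be set up (an illustration under our own conventions, not the course's reference solution). Writing the Bloch factor as $\lambda = e^{ik}$, a mode at energy $E$ satisfies the quadratic eigenproblem $[t\lambda^2 + (h-E)\lambda + t^\dagger]\psi = 0$, which can be linearized into a generalized eigenvalue problem; propagating modes correspond to eigenvalues $\lambda$ on the unit circle.

# Minimal sketch (not the course's reference solution): check for propagating
# modes of H(k) = h + t e^{ik} + t^dag e^{-ik} at a given energy by solving the
# linearized Bloch eigenproblem and looking for eigenvalues with |lambda| = 1.
import numpy as np
from scipy.linalg import eig

def propagating_modes(h, t, energy, tol=1e-6):
    """True if H(k) has propagating modes at `energy` (Bloch factors on the unit circle)."""
    n = h.shape[0]
    zero, one = np.zeros((n, n), dtype=complex), np.eye(n, dtype=complex)
    # Linearization of  t lam^2 psi + (h - E) lam psi + t^dag psi = 0.
    A = np.block([[zero, one],
                  [-t.conj().T, -(h - energy * one)]])
    B = np.block([[one, zero],
                  [zero, t]])
    lam = eig(A, B, right=False)
    lam = lam[np.isfinite(lam)]            # drop eigenvalues sent to infinity
    return bool(np.any(np.abs(np.abs(lam) - 1) < tol))

# Example: a single-band chain h = 0, t = 1 has dispersion E(k) = 2 cos k,
# so it supports propagating modes for |E| < 2 and none for |E| > 2.
h = np.array([[0.0]]); t = np.array([[1.0]])
print(propagating_modes(h, t, 1.0))   # True
print(propagating_modes(h, t, 3.0))   # False

Scanning this check over energies then gives one way to locate the band edges of a given 1D Hamiltonian.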
What is the best method for ranking items that have positive and negative reviews? Some sites, including reddit, have adopted an algorithm suggested by Evan Miller to generate their item rankings. However, this algorithm can sometimes be unfairly pessimistic about new, good items. This is especially true of items whose first few votes are negative — an issue that can be “gamed” by adversaries. In this post, we consider three alternative ranking methods that can enable high-quality items to more easily bubble up. The last is the simplest, yet still gives good results: one simply seeds each item’s vote count with a suitable fixed number of hidden “starter” votes. Introduction — a review of Evan Miller’s post In an insightful prior post, Evan Miller (EM) considered the problem of ranking items that had been reviewed as positive or negative (up-voted or down-voted, represented by a 1 or a 0, respectively) by a sample of users. He began by illustrating that two of the more readily arrived-at solutions to this problem are highly flawed. To review: Bad method 1: Rank item $i$ by $n_i(1) - n_i(0)$, its up-vote count minus its down-vote count. Issue: If one item has garnered 60 up-votes and 40 down-votes, it will get the same score as an item with only 20 votes, all positive. Yet, the latter has a 100% up-vote rate (20 for 20), suggesting that it is of very high quality. Despite this, the algorithm ranks the two equally. Bad method 2: Rank item $i$ by $\hat{p} \equiv n_i(1)/[n_i(0) + n_i(1)]$, its sample up-vote rate (average rating). Issue: If any one item has only one vote, an up-vote, it will be given a perfect score by this algorithm. This means that it will be ranked above all other items, despite the fact that a single vote is not particularly informative/convincing. In general, this method can work well, but only once each item has a significant number of votes. To avoid the issues of these two bad methods (BMs), EM suggests scoring and ranking each item by the lower limit of its up-vote-rate confidence interval. This is the Wilson lower bound (E. B. Wilson, 1927), $$\tag{1} \label{emsol} p_{W} = \frac{\hat{p} + \frac{z_{\alpha/2}^2}{2n} - z_{\alpha/2} \sqrt{\frac{\hat{p}(1-\hat{p}) + \frac{z_{\alpha/2}^2}{4n} }{n}}}{1 + \frac{z_{\alpha/2}^2}{n}}, $$ where $\hat{p}$ is again the sample up-vote rate, $z_{\alpha/2}$ is a positive constant that sets the size of the confidence interval used, and $n$ is the total number of votes that have so far been recorded. The score $p_{W}$ approaches $\hat{p}$ once an item has a significant number of votes — it consequently avoids the pitfall of BM1 above. By construction, it also avoids the pitfall of BM2. With both of these pitfalls avoided, the EM method can sometimes provide a reasonable, practical ranking system. Potential issue with (\ref{emsol}) Although (\ref{emsol}) does a good job of avoiding the pitfall associated with BM2, it can do a poor job of handling a related pitfall: If a new item has only a few votes, and these each happen to be down-votes, its sample up-vote rate will be $\hat{p} = 0$. In this case, (\ref{emsol}) gives $$\label{problem} \tag{2} p_{W} = \left .\frac{\hat{p} + \frac{z_{\alpha/2}^2}{2n} - z_{\alpha/2} \sqrt{\frac{\hat{p}(1-\hat{p}) + \frac{z_{\alpha/2}^2}{4n} }{n}}}{1 + \frac{z_{\alpha/2}^2}{n}}\right \vert_{\hat{p} = 0} = 0. $$ Now, $p_W$ is always between $0$ and $1$, so (\ref{problem}) implies that any new, quickly-down-voted item will immediately be ranked below all others.
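To see this pitfall numerically, here is a minimal sketch of the Wilson lower bound (\ref{emsol}); the helper name is ours, and $z_{\alpha/2} = 1.281$ is the value quoted later in the post.

# Minimal sketch of the Wilson lower bound, Eq. (1); the function name is ours.
import math

def wilson_lower(up, down, z=1.281):
    """Lower limit of the Wilson confidence interval for the up-vote rate."""
    n = up + down
    if n == 0:
        return 0.0
    p_hat = up / n
    return (p_hat + z**2 / (2 * n)
            - z * math.sqrt((p_hat * (1 - p_hat) + z**2 / (4 * n)) / n)) / (1 + z**2 / n)

print(wilson_lower(0, 1))    # 0.0  -- a single early down-vote pins the item to the bottom
print(wilson_lower(0, 3))    # 0.0  -- more early down-votes, still exactly zero
print(wilson_lower(1, 0))    # ~0.38
print(wilson_lower(20, 0))   # ~0.92 -- 20-for-20 scores well, as it should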
This is extremely harsh and potentially unfair. For example, consider the case of a newly-opened restaurant: If an adversary were to quickly down-vote this restaurant on some ranking site — the day of its opening — the new restaurant would be ranked below all others, including the adversary. This would occur even if the new restaurant were of very high true quality. This could have potentially damaging consequences, for both the restaurant and the ranking site — whose lists should provide only the best recommendations! An ideal ranking system should explicitly take into account the large uncertainty present when only a small number of votes have been recorded. The score (\ref{emsol}) does a good job of this on the high $\hat{p}$ end, but a poor job on the low $\hat{p}$ end. This approach may be appropriate for cases where one is risk-averse on the high end only, but in general one should protect against both sorts of quick, strong judgements. Below we consider some alternative, Bayesian ranking solutions. The last is easy to understand and implement: One simply gives each item a hidden number of up- and down-votes to start with. These hidden “starter” votes can be chosen in various ways — they serve to simply bias new items towards an intermediate value early on, with the bias becoming less important as more votes come in. This approach avoids each of the pitfalls we have discussed. Bayesian formulation Note: This section and the next are both fairly mathematical. They can be skipped for those wishing to focus on the application method only. To start our Bayesian analysis, we begin by positing a beta distribution as the prior for the up-vote rate, $$\tag{3}\label{beta} P(p) = \tilde{\mathcal{N}} p^a (1-p)^b. $$ Here, $\tilde{\mathcal{N}}$ is a normalization factor and $a$ and $b$ are some constants (we suggest methods for choosing their values in the discussion section). The function $P(p)$ specifies an initial guess — in the absence of any reviews for an item — for what we think the probability is that it will have up-vote rate $p$. If item $i$ actually has been reviewed, we can update our guess for its distribution using Bayes’ rule: $$\begin{align} \tag{4} \label{BR} P(p \vert n_i(1), n_i(0)) =\frac{ P( n_i(1), n_i(0) \vert p ) P(p)}{P(n_i(1), n_i(0))} = \mathcal{N} p^{n_i(1)+a}(1-p)^{n_i(0)+b}. \end{align} $$ Here, we have evaluated $ P( n(1), n(0) \vert p )$ using the binomial distribution, we’ve plugged in (\ref{beta}) for $P(p)$, and we’ve collected all $p$-independent factors into the new normalization factor $\mathcal{N}$. The formula (\ref{BR}) provides the basis for the three ranking methods discussed below. Three Bayesian ranking systems Let’s rank! Bayesian method 1: Choose the ordering that is most likely. It is a simple matter to write down a formal expression for the probability of any ranking. For example, given two items we have $$ P(p_1 > p_2) = \int_0^1 dp_1 \int_0^{p_1} dp_2 P(p_1) P(p_2). \tag{5} \label{int} $$ Plugging in (\ref{BR}) for the $P(p_i)$’s, this can be evaluated numerically. Evaluating the probability for the opposite ordering, we can then choose that which is most likely to be correct. $\bullet$ Pros: Approach directly optimizes for the object we’re interested in, the ranking — very appealing! $\bullet$ Cons: Given $N$ items, one has $N!$ integrals to carry out — untenable for large $N$. $\bullet$ Note: See possiblywrong’s post here for some related, interesting points. Bayesian method 2: Rank item $i$ by its median $p$-value.
Sorting by an item score provides an approach that will scale well even at large $N$. A natural score to consider is an item’s median $p$-value: that which it has a $50/50$ shot of being larger (or smaller) than. Using (\ref{BR}), this satisfies $$\tag{6}\label{m2} \frac{\int_0^{p_{med}} p^{n_i(1)+a}(1-p)^{n_i(0)+b} dp}{\int_0^{1} p^{n_i(1)+a}(1-p)^{n_i(0)+b} dp} = 1/2. $$ The integral at left actually has a name — it’s called the incomplete beta function. Using a statistics package, it can be inverted to give $p_{med}$. For example, if we set $a = b = 1$, an item with a single up-vote and no down-votes would get a score of $0.614$. In other words, we’d guess there’s a 50/50 shot that the item’s up-vote rate falls above this value, so we’d rank it higher than any other item whose $p$ value is known to be smaller than this. $\bullet$ Pros: Sorting is fast. Gives intuitive, meaningful score for each item. $\bullet$ Cons: Inverting (\ref{m2}) can be somewhat slow, e.g. $\sim 10^{-3}$ seconds in Mathematica. $\bullet$ Note: EM also derived this score function, in a follow-up to his original post. However, he motivated it in a slightly different way — see here. Bayesian method 3: Rank item $i$ by its most likely (aka MAP) $p$-value. The most likely $p$-value for each item provides another natural score function. To find this, we simply set the derivative of (\ref{BR}) to zero, $$ \begin{align} \partial_p p^{n_i(1)+a}(1-p)^{n_i(0)+b} &= \left (\frac{n_i(1)+a}{p} - \frac{n_i(0)+b}{1-p} \right ) p^{n_i(1)+a}(1-p)^{n_i(0)+b} = 0 \\ \to p = \tilde{p} &\equiv \frac{n_i(1)+a}{(n_i(1)+a) + (n_i(0)+b)}. \tag{7} \label{final} \end{align} $$ This form $\tilde{p}$ is interesting because it resembles the sample mean $\hat{p}$ considered above. However, the actual number of up- and down-votes, $n_i(1)$ and $n_i(0)$, are supplemented in (\ref{final}) by $a$ and $b$, respectively. We can thus interpret these values as effective “starter votes”, given to each item before any real reviews are recorded. Their effect is to bias our guess for $p$ towards the prior’s peak value, with the bias being most strong when $a$ and $b$ are chosen large and/or when we have few actual votes present. For any non-zero choices, (\ref{final}) avoids each of the pitfalls discussed above. Further, it approaches the true up-vote rate in the limit of large review sample sizes, as required. $\bullet$ Pros: Sorting is fast. Simple method for avoiding the common pitfalls. $\bullet$ Cons: Have to pick $a$ and $b$ — see below for suggested methods. Discussion We consider each of the four ranking methods we’ve discussed here to be interesting and useful — the three Bayesian ranking systems, as well as EM’s original system, which works well when one only needs to protect against false positives (again, we note that Bayesian method 2 was also considered by EM in a follow-up to his original post). In practice, the three Bayesian approaches will each tend to return similar, but sometimes slightly different rankings. With regards to “correctness”, the essential point is that each method is well-motivated and avoids the common pitfalls. However, the final method is the easiest to apply, so it might be the most practical. To apply the Bayesian methods, one must specify the $a$ and $b$ values defining the prior, (\ref{BR}). We suggest three methods for choosing these: 1) Choose these values to provide a good approximation to your actual distribution, fitting only to items for which you have good statistics.
2) A/B test to get the ranking that optimizes some quantity you are interested in, e.g. clicks. 3) Heuristics: For example, if simplicity is key, choose $a = b = 1$, which biases towards an up-vote rate of $0.5$. If a conservative estimate is desired for new items, one can set $b$ larger than $a$. Finally, if you want to raise the number of actual votes required before the sample rates dominate, simply increase the values of $a$ and $b$ accordingly. To conclude, we present some example output in the table below. We show values for the Wilson score $p_W$, with $z_{\alpha/2}$ set to $1.281$ in (\ref{emsol}) (the value reddit uses), and the seed score $\tilde{p}$, with $a$ and $b$ set to $1$ in (\ref{final}). Notice that the two scores are in near-agreement for the last item shown, which has already accumulated a fair number of votes. However, $p_W$ is significantly lower than $\tilde{p}$ for each of the first three items. For example, the third has an up-vote rate of $66\%$, but is only given a Wilson score of $0.32$: This means that it would be ranked below any mature item having an up-vote rate at least this high — including fairly unpopular items liked by only one in three! This observation explains why it is nearly impossible to have new comments noticed on a reddit thread that has already hit the front page. Were reddit to move to a ranking system that was less pessimistic about new comments, its mature threads might remain dynamic.

up-votes | down-votes | $p_W$ ($z_{\alpha/2}= 1.281$) | $\tilde{p}$ ($a=b=1$)
1 | 0 | 0.38 | 0.67
1 | 1 | 0.16 | 0.5
2 | 1 | 0.32 | 0.6
40 | 10 | 0.72 | 0.79

Cover image from USDA Forest Service.
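To make the two Bayesian scores above concrete, here is a minimal sketch assuming SciPy; note that the prior $p^a(1-p)^b$ in (\ref{beta}) corresponds to a Beta$(a+1, b+1)$ density, so the posterior shape parameters below are shifted by one. The $p_W$ column of the table comes from the Wilson sketch given earlier.

# Minimal sketch of the median score (Eq. 6) and the MAP / "starter votes" score (Eq. 7).
# The posterior in Eq. (4) is proportional to p^(n1+a) (1-p)^(n0+b), i.e. Beta(n1+a+1, n0+b+1).
from scipy.stats import beta

def median_score(up, down, a=1, b=1):
    """Posterior median up-vote rate (inverts the incomplete beta function)."""
    return beta.ppf(0.5, up + a + 1, down + b + 1)

def map_score(up, down, a=1, b=1):
    """Most likely posterior up-vote rate, Eq. (7)."""
    return (up + a) / (up + a + down + b)

for up, down in [(1, 0), (1, 1), (2, 1), (40, 10)]:
    print(up, down, round(median_score(up, down), 3), round(map_score(up, down), 2))
# The last column reproduces the tilde-p values in the table (0.67, 0.5, 0.6, 0.79),
# and median_score(1, 0) returns the 0.614 quoted in the discussion of method 2.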
This tag should be applied to questions concerning acid and base reactions. An acid is capable of donating a hydron/proton (Brønsted acid) or capable of forming a covalent bond with an electron pair (Lewis acid). A base, on the other hand, is a chemical species/molecular entity having an available pair of electrons capable of forming a covalent bond with a hydron/proton (Brønsted base) or with the vacant orbital of some other species (Lewis base). This tag should be applied to questions concerning acid and base reactions. According to the IUPAC Gold Book, an acid is a molecular entity or chemical species capable of donating a hydron (proton) (Brønsted acid) or capable of forming a covalent bond with an electron pair (Lewis acid). Brønsted acid (source) A molecular entity capable of donating a hydron (proton) to a base (i.e. a 'hydron donor') or the corresponding chemical species. For example: $\ce{H2O, H3O+, CH3CO2H, H2SO4, HSO4^{−}, HCl, CH3OH, NH3}$. Lewis acid (source) A molecular entity (and the corresponding chemical species) that is an electron-pair acceptor and therefore able to react with a Lewis base to form a Lewis adduct, by sharing the electron pair furnished by the Lewis base. For example: $\ce{\underset{Lewis~acid}{(H3C)3B} + \underset{Lewis~base}{:NH3} -> \underset{Lewis~adduct}{(H3C)3\overset{\small{~~\ominus}}{B}-\overset{\small{\oplus}}{N}H3}}$ In conjunction with this, the definition of a base is a chemical species or molecular entity having an available pair of electrons capable of forming a covalent bond with a hydron (proton) (Brønsted base) or with the vacant orbital of some other species (Lewis base). Brønsted base (source) A molecular entity capable of accepting a hydron (proton) from an acid (i.e. a 'hydron acceptor') or the corresponding chemical species. For example: $\ce{{}^{-}OH, H2O, CH3CO2^{−}, HSO4^{−}, SO4^{2−}, Cl^{−}}$. Lewis base (source) A molecular entity (and the corresponding chemical species) able to provide a pair of electrons and thus capable of coordination to a Lewis acid, thereby producing a Lewis adduct. The two kinds of molecules or chemical species are closely related, as they form so-called conjugate acid–base pairs in the Brønsted sense, or, as mentioned previously, a Lewis adduct in the alternative description. conjugate acid–base pair The Brønsted acid $\ce{BH+}$ formed on protonation of a base $\ce{B}$ is called the conjugate acid of $\ce{B}$, and $\ce{B}$ is the conjugate base of $\ce{BH+}$. (The conjugate acid always carries one unit of positive charge more than the base, but the absolute charges of the species are immaterial to the definition.) For example: the Brønsted acid $\ce{HCl}$ and its conjugate base $\ce{Cl^{−}}$ constitute a conjugate acid–base pair. The reactivity of acids and bases is dependent on the $\mathrm{p}\ce{H}$ of a solution and, as a consequence, every reaction of acids and bases will change this property of a solution. In aqueous solution at $T = 25~^\circ\mathrm{C}$, acids usually have a $\mathrm{p}\ce{H}$ less than 7 and bases have a $\mathrm{p}\ce{H}$ greater than 7; this is because the neutrality of an aqueous solution is governed by the autoprotolysis of water. $$\ce{2H2O <=> H3+O + {}^{-}OH}$$ Therefore, neutrality of a solution is achieved when the activities of hydronium and hydroxide ions are equal, $$a(\ce{H3+O})=a(\ce{{}^{-}OH}),$$ or in simpler terms, when the concentrations of hydronium and hydroxide ions are equal, i.e.
$$[\ce{H3+O}]=[\ce{{}^{-}OH}].$$ A fairly good answer that covers acids and bases in general can be found in this question.
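For reference, at $T = 25~^\circ\mathrm{C}$ the autoprotolysis equilibrium above gives the familiar numerical form of this neutrality condition: $$K_\mathrm{w} = a(\ce{H3+O})\,a(\ce{{}^{-}OH}) \approx 10^{-14},$$ so a neutral solution has $a(\ce{H3+O}) = a(\ce{{}^{-}OH}) \approx 10^{-7}$ and hence $\mathrm{p}\ce{H} = -\log_{10} a(\ce{H3+O}) = 7$.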
I have come across a scalar control algorithm for a permanent magnet synchronous motor. I hadn't heard about it before, so I decided to develop a dynamic model to further analyze this control algorithm. Unfortunately I don't have Matlab with the SimPowerSystems library, so I decided to create this model in Scilab. I have been using the equations given below for the PMSM simulation: \$\frac{\mathrm{d}i_d}{\mathrm{d}t} = \frac{1}{L_d}\cdot(u_d-R\cdot i_d+L_q\cdot\omega_e\cdot i_q), \\ \frac{\mathrm{d}i_q}{\mathrm{d}t} = \frac{1}{L_q}\cdot(u_q-R\cdot i_q-L_d\cdot\omega_e\cdot i_d - \omega_e\cdot\psi_m), \\ \frac{\mathrm{d}\omega_m}{\mathrm{d}t} = \frac{1}{J}\cdot(1.5\cdot p\cdot\psi_m\cdot i_q - T_l),\\ \omega_e = p\cdot\omega_m\$ where \$p\$ is the number of pole pairs. The parameters of the model: \$L_d = 1.365\cdot10^{-3}\,H \\ L_q = 1.365\cdot10^{-3}\,H \\ R = 0.416\,\Omega \\ \psi_m = 0.166\,Wb \\ p = 2 \\ J = 3.4\cdot10^{-4}\,kg\cdot m^2 \\ kF= \frac{3.2}{(2\cdot\pi\cdot\frac{1200}{60})^2}\,\frac{N\cdot m}{(rad\cdot s^{-1})^2}\$ The motor is loaded by the torque given below, \$T_l = kF\cdot\omega^2,\$ which is removed in a stepwise manner at \$t=0.4\,s\$. The results are the following. I have doubts about the correctness of my model, because the actual motor speed differs from the reference speed. I expected them to be equal, because the modeled machine is a synchronous motor. Does anybody know where I made a mistake? Thanks in advance for any suggestions.
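A minimal sketch (in Python/SciPy rather than Scilab) of how the dq model above can be integrated, using the question's parameters. The applied voltages u_d, u_q are placeholders only, since the actual scalar (V/f) control law is not specified here.

# Minimal sketch of integrating the dq equations above; the constant dq voltages
# below are placeholders, not the scalar-control law from the question.
import numpy as np
from scipy.integrate import solve_ivp

Ld = Lq = 1.365e-3            # H
R = 0.416                     # ohm
psi_m = 0.166                 # Wb
p = 2                         # pole pairs
J = 3.4e-4                    # kg m^2
kF = 3.2 / (2 * np.pi * 1200 / 60) ** 2   # N m / (rad/s)^2

def rhs(t, x, ud=0.0, uq=30.0):
    i_d, i_q, w_m = x
    w_e = p * w_m
    T_l = kF * w_m**2 if t < 0.4 else 0.0         # load removed at t = 0.4 s
    di_d = (ud - R * i_d + Lq * w_e * i_q) / Ld   # note: resistive term uses i_d
    di_q = (uq - R * i_q - Ld * w_e * i_d - w_e * psi_m) / Lq
    dw_m = (1.5 * p * psi_m * i_q - T_l) / J
    return [di_d, di_q, dw_m]

sol = solve_ivp(rhs, (0.0, 0.8), [0.0, 0.0, 0.0], max_step=1e-4)
print("final mechanical speed [rad/s]:", sol.y[2, -1])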
Example: Let $f(x,y)=x^2+\cos{y}$. The rate of change at $f$ at $(1,0)$ in the direction of $<1,1>$ is: A. $1$ B. $\sqrt{2}$ C. $\frac{\sqrt{3}}{2}$ D. $\pi$ E. $0$ I'm confused on how to start this. Am I supposed to find the gradient, plug in $(1,0)$ and take the dot product of this with $<1,1>$? Thanks!
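For reference, the proposed approach works once the direction vector is normalized to unit length; carrying it out gives $$\nabla f(x,y) = (2x,\,-\sin y), \qquad \nabla f(1,0) = (2,\,0), \qquad \mathbf{u} = \tfrac{1}{\sqrt{2}}\langle 1,1\rangle,$$ $$D_{\mathbf{u}}f(1,0) = \nabla f(1,0)\cdot\mathbf{u} = \frac{2}{\sqrt{2}} = \sqrt{2},$$ which is choice B.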
How can two angles of a triangle be equal to $90°$? If two angles were $90°$, this would mean that the two sides would be parallel and the angle of the third side would be equal to 0. Thus, there would be only two vertices and this wouldn't be a triangle at all, ultimately making $\sin 90° = 1$ impossible. Consider polar coordinates, $(r \cos\theta, r \sin \theta)$ in a unit circle such that $r=1$. Then we can see that the mapping in the first quadrant inside the unit circle is just $(\cos \theta, \sin \theta)$. Now, consider a moving point $A$ starting from $\theta=0°$ to $\theta=90°$ on the circumference of the circle. Now, $OA$ is the hypotenuse of the right-angled triangle inside the circle. It is clear that at $\theta=90°$, the hypotenuse and the perpendicular are the same line along the $y$-axis (i.e., they coincide). So, $\sin(90°)=\frac{p}{h}=1$. As per your argument, the case is two sides coinciding rather than being parallel, because by the Pythagorean theorem we have $h^2=p^2+b^2$, so if $p$ increases then $b$ must decrease, and $h=p$ iff $b=0$. Here $h$ is the radius of the unit circle, $p$ is the perpendicular (i.e. the height) dropped from the point on the circumference to the $x$-axis, and $b$ is the distance from the origin to the foot of that perpendicular. You can very well admit the concept of flat triangles (coming in two flavors, with a $0°$ and a $180°$ angle). Anyway, the usual definition of the sine does not involve a triangle. Rather, the trigonometric circle and the projection of a point at a given angle onto the vertical axis. Indisputably, $\sin90°=1$. Similarly, $\cos90°=0$. As noted in a comment, this answer to a previous similar question may well help. To answer your question directly: I offer that mathematicians made $\sin(x)$ “well defined” for values of $x$ that are smaller than $0^{\circ}$ or larger than $90^{\circ}$ (that is, outside the range of values that are valid for an interior angle of a triangle) when it became useful to do so. “Well defined” is a technical term that means, roughly, that everyone agrees on the answer. As a simple example, imagine an object that is moving in the plane at constant velocity. The velocity can be described as a speed $v$ and an angle $\theta$ measured counterclockwise from the $x$ axis. Now suppose that we ask the question, “How fast is the object moving in the $y$ direction?” If $\theta$ is less than $90^{\circ}$ then trigonometry gives the answer $v \sin(\theta)$. For values of $\theta$ equal to or larger than $90^{\circ}$, it’s possible to calculate an answer by subtracting an appropriate multiple of $90^{\circ}$ to bring $\theta$ back into the range of valid triangle interior angles. But that’s tedious. The answer “should” be $v \sin(\theta)$ for any direction $\theta$. To make this answer work, the definition of $\sin$ has to be extended from the range of valid triangle interior angles $(0^{\circ},90^{\circ})$ to the range of valid directions $[0^{\circ},360^{\circ})$. In this extended definition, $\sin(90^{\circ})=1$. A trivial Pythagorean triple, generated by Euclid's formula with $m=n$, such as $(A,B,C)=(0,2,2)$, is a vertical line ($A=0$) on the $y$ axis. Since $\frac{C}{B}=1$, $\sin90^\circ=1$.
Introduction (Hoffman & Johnson, 2016) proposed an elegantdecomposition of the evidence lower bound, or ELBO, objective commonlyused to train unsupervised latent-variable models characterized by aprobabilistic decoder $p(x | z)$ and prior $p(z)$ over latentvariable $z$. Such models have the form: \begin{equation} p_{\theta}(x) = \int p_{\theta} (x | z) p(z) dz \end{equation} A common way of writing the ELBO objective, for variational approximation $q_{\phi}(z | x)$ to the true posterior $p_{\theta}(z | x)$ is like, \begin{equation} \label{eq:term-by-term} L({\theta}, {\phi}) = \frac{1}{N} \sum_{n=1}^{N} \mathbb{E}_{q(z_n | x_n)}[\log (p(x_n|z_n))] - KL(q(z_n | x_n) \parallel p(z_n)) \end{equation} which they call the average term-by-term reconstruction minus the KL divergence to the prior,where $n$ indexes observations.This leads to a natural question: what is the best value for the KL term totake? This can be interpreted as a regularizer that is minimized when$q(z_n | x_n) = p(z_n)$, whichencourages maximum use of the code space $z$ by raising the entropy of$q(z)$ to that of the prior. Aside: one of the most common choices for thisprior is a multivariate normal: the maximum entropy distribution for a givenvariance.Now, clearly we do not want this KL term to be exactly zero, asthis implies independence between inputs $x_n$ and their codes $z_n$. Forexamples of this behaviour see the autodecoder in(Alemi et al., 2018), where the generated samples areof high quality, but do not match the class of the input source. Breaking up the KL divergence term: \begin{equation} \frac{1}{N} \sum_{n=1}^{N} KL[q(z_n | x_n) \parallel p(z_n)] = D_{KL}(q(z) \parallel p(z)) + \log(N) - \mathbb{E}_{q(z)}[H[q(n|z)]] \end{equation} Notice that $\log(N) - \mathbb{E}_{q(z)}[H[q(n|z)]]$, looks an awful lot like the mutual information of the form $I(n;z)=H(n)-H(n|z)$, where we’re treating the indices $n$ to the $N$ input samples $X_N$ as a uniform random variable, the maximum entropy distribution for a constraint on the values. Thus, we have: \begin{equation} = D_{KL}(q(z) \parallel p(z)) + I(n;z) \end{equation} where $q(z)$ can safely be made close to the prior without losing modelingpower. Derivation Using $p(z | n) = p(z)$, $q(z | n) = q(z | x_n)$, $q(n) = p(n) = 1/N$, \begin{equation} \frac{1}{N} \sum_{n=1}^{N} KL(q(z_n | x_n) \parallel p(z_n)) = \sum_n q(n, z) \log \frac{q(n, z)}{p(n, z)} \end{equation} Now, splitting up the log \begin{equation} = KL(q(z) \parallel p(z)) + \mathbb{E}_{q(z)}[KL(q(n | z) \parallel p(n))] \end{equation} With Bayes rule $q(n)q(z|n) = q(z)q(n|z)$, and expanding some terms Since we have a uniform distribution over $n$, the i.i.d. sample indices, $H(n) = \log N$. \begin{equation} = KL(q(z) \parallel p(z)) + (\log N - \mathbb{E}_{q(z)}[H(q(n|z))]) \end{equation} \begin{equation} = KL(q(z) \parallel p(z)) + I(n; z) \end{equation} Getting back to the original question, we now have clear expressions for the quantities we want to minimize: $KL(q(z) \parallel p(z))$, and maximize: $I(n; z)$. Now, we have some context for an inequality from (Alemi et al., 2018), where D is distortion—the first term in Eq.\eqref{eq:term-by-term}—and H is $H(X)$, the entropy of the source distribution. R is the rate, the description length or average number of bits transmitted—but not necessarily received. 
\begin{equation} H - D \leq I(X; Z) \leq R \end{equation} Since $KL(q(z) \parallel p(z))$ can be computed analytically for known parametric families, we have a way of estimating $I(X; Z)$ by simply subtracting this term from the ‘‘KL divergence to the prior’’ in Eq.\eqref{eq:term-by-term}, and replacing each $x$ by its index $n$ from the finite training sample. Sweeping $\beta$ as in the $\beta$-VAE obtained by breaking up Eq.\eqref{eq:term-by-term}, we obtain a concave rate-distortion (RD) curve. This says that reducing distortion, which in this case can be interpreted as increasing the quality of the reconstructed images, is only possible by increasing the complexity of the representations and, vice versa, reducing the rate is only possible if one is willing to accept higher distortion. What are the practical consequences of this trade-off? After all, the cost of bandwidth and computing is at an all-time low; can’t we just use arbitrarily high-complexity representations to minimize distortion? No, not if we’d like to use the generative model for one of its main functions: to generate novel samples simply by sampling from the prior. See the ‘‘autoencoder’’ example in Fig. 4 b) of (Alemi et al., 2018) where R=156.0 and D=4.8. Thus, we want to be somewhere in between the ‘‘semantic encoder’’ and ‘‘semantic decoder’’, which corresponds to places on the curve where both R and D are changing rapidly. This makes the model capable of both reconstructing natural inputs, and ensures most of the code space maps to plausible samples when sampling from the prior unconditionally. References Alemi et al., "Fixing a Broken ELBO," in International Conference on Machine Learning, 2018. "Formal Limitations on the Measurement of Mutual Information," 2018. Hoffman & Johnson, "ELBO Surgery: Yet Another Way to Carve up the Variational Evidence Lower Bound," in NIPS Workshop on Advances in Approximate Bayesian Inference, 2016.
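To make the surgery above concrete, here is a minimal numerical sketch of our own (a toy check, not code from the referenced papers): take $N$ one-dimensional Gaussian encoders $q(z|x_n)$ and a standard normal prior, compute the average per-datum KL in closed form, and estimate the marginal KL and $I(n;z)$ by Monte Carlo; the two sides of the decomposition should agree.

# Toy numerical check of  (1/N) sum_n KL(q(z|x_n) || p(z)) = KL(q(z) || p(z)) + I(n; z)
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(0)
N = 50                                     # number of "observations"
mu = rng.normal(0.0, 2.0, size=N)          # toy per-datum encoder means
sigma = np.full(N, 0.5)                    # toy per-datum encoder std devs

# Average per-datum KL, in closed form for Gaussians vs the standard normal prior.
lhs = np.mean(np.log(1.0 / sigma) + (sigma**2 + mu**2) / 2.0 - 0.5)

# Monte Carlo: draw (n, z) ~ q(n, z) with n uniform and z ~ q(z|x_n).
S = 20000
n_idx = rng.integers(0, N, size=S)
z = rng.normal(mu[n_idx], sigma[n_idx])
log_q_cond = norm.logpdf(z, mu[n_idx], sigma[n_idx])                    # log q(z|x_n)
log_q_marg = logsumexp(norm.logpdf(z[:, None], mu, sigma), axis=1) - np.log(N)  # log q(z)
log_p = norm.logpdf(z)                                                  # log p(z)

kl_marginal = np.mean(log_q_marg - log_p)        # KL(q(z) || p(z))
mutual_info = np.mean(log_q_cond - log_q_marg)   # I(n; z), bounded above by log N

print(f"average per-datum KL : {lhs:.3f}")
print(f"marginal KL + I(n;z) : {kl_marginal + mutual_info:.3f}")
print(f"I(n;z) = {mutual_info:.3f}, log N = {np.log(N):.3f}")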
Let's assume a photon is moving in the $z$ direction and the state of the photon is represented by $$|\psi\rangle = \alpha |x\rangle + \beta |y\rangle. $$ This photon will pass through three polarizers. The first is oriented along the $y$ direction (the horizontal), the second at 45 degrees, and the third along the $x$ direction. I have written the matrix representation of the photon as $$\begin{pmatrix} \cos \theta \\ \sin \theta \end{pmatrix}$$ and let the states along the $x$ and $y$ axes be represented by $$\begin{pmatrix} 1 \\ 0 \end{pmatrix} \hspace{0.3cm}\text{and} \hspace{0.3cm}\begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$ I understand what the probability of getting each state is after entering each polarizer. My problem is that I want to write the polarization state in terms of the full eigenket space. Could you please show me how to write the ket state whenever polarized light passes through a polarizer? You can give me an example; I'm fine with that. You are welcome to improve the question or ask me so I can add more information to the question.
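As a generic illustration of the standard projection rule (not tied to the question's particular sequence of polarizers): a polarizer at angle $\theta$ acts as the projector $$P_\theta = |\theta\rangle\langle\theta|, \qquad |\theta\rangle = \cos\theta\,|x\rangle + \sin\theta\,|y\rangle,$$ so the unnormalized state after the polarizer is $P_\theta|\psi\rangle = (\alpha\cos\theta + \beta\sin\theta)\,|\theta\rangle$, the transmission probability is $|\langle\theta|\psi\rangle|^2 = |\alpha\cos\theta+\beta\sin\theta|^2$, and the normalized post-measurement ket is simply $|\theta\rangle$. For the $45^\circ$ polarizer, $|\theta\rangle = \tfrac{1}{\sqrt2}(|x\rangle + |y\rangle)$.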
"One-Line" Proof: Fundamental Group of the Circle Once upon a time I wrote a six-part blog series on why the fundamental group of the circle is isomorphic to the integers. (You can read it here, though you may want to grab a cup of coffee first.) Last week, I shared a proof* of the same result. In one line . On Twitter. I also included a fewer-than-140-characters explanation. But the ideas are so cool that I'd like to elaborate a little more. As you might guess, the tools are more sophisticated than those in the original proof, but they make frequent appearances in both topology and category theory, so I think it's worth a blog post. (Or six. Heh.) To keep the discussion at a reasonable length, I'll have to assume the reader has some familiarity with So without further ado, I present Theorem : The fundamental group of the circle is isomorphic to ℤ. Proof: Let's take a closer look at each of the three isomorphisms. The Loop-Suspension Adjunction There are two important functors in topology called based loop $\Omega$ and reduced suspension $\Sigma$: The loop functor $\Omega$ assigns to each pointed space $X$ (that is, a space with a designated basepoint) the space $\Omega X$ of based loops in $X$, i.e. loops that start and end at the basepoint of $X$. On the other hand, $\Sigma$ assigns to each $X$ the (reduced) suspension $\Sigma X$ of $X$. This space is the smash product of $X$ with $S^1$. In general it might not be easy to draw a picture of $\Sigma X$, but when an $n$-dimensional sphere, it turns out that $\Sigma S^n$ is homeomorphic to $S^{n+1}$ for $n\geq 0$. So for $n=1$ the picture is The loop-suspension adjunction is a handy, categorical result which says that $\Omega$ and $\Sigma$ interact very nicely with each other: up to homotopy, maps out of suspension spaces are the same as maps in to loop spaces. More precisely, for all pointed topological spaces $X$ and $Y$ there is a natural isomorphism Here I'm using the notation $[A,B]$ to indicate the set of homotopy classes of basepoint-preserving maps from $A\to B$. (Two based maps are homotopic if there is a basepoint-preserving homotopy between them. This is an equivalence relation, and the equivalence classes are given the name homotopy classes.) This, together with the observation that the $n$th homotopy group $\pi_n(X)$ is by definition $[S^n,X]$, yields the following: And that's the first isomorphism above!** Remark: The $\Omega-\Sigma$ adjunction is just one example of a general categorial construction. Two functors are said to form an if they are - adjunction very loosely speaking - dual to each other. I had planned to blog about adjunctions after our series on natural transformations but ran out of time! In the mean time, I recommend looking at chapter 4 of Emily Riehl's Category Theory in Context for a nice discussion. adjunction The Homotopy Equivalence Next, let's say a word about why $\Omega S^1$ and $\mathbb{Z}$ are homotopy equivalent. This equivalence will immediately imply the second isomorphism above since $\pi_0$ (and in fact each $\pi_n$) is a functor, and functors preserve isomorphisms. (That is, $\pi_0$ sends homotopy equivalent spaces to isomorphic sets.***) Now, why are $\Omega S^1$ and $\mathbb{Z}$ homotopy equivalent? It's a consequence of the Claim: A homotopy equivalence between fibrations induces a homotopy equivalence between fibers. Eh, that was a mouthful, I know. Let's unwind it. 
Roughly speaking, a map $p:E\to B$ of topological spaces is called a fibration over $B$ if you can always lift a homotopy in $B$ to a homotopy in $E$, provided the initial "slice" of the homotopy in $B$ has a lift. And the preimage $p^{-1}(b)\subset E$ of a point $b\in B$ is called the fiber of $b$. So the claim is that if $p:E\to B$ and $p':E'\to B$ are two fibrations over $B$, and if there is a map between them that is a homotopy equivalence (we'd need to properly define what this entails, but it can be done), then there is a homotopy equivalence between the fibers $p^{-1}(b)$ and $p'^{-1}(b)$. Example #1 The familiar map $\mathbb{R}\to S^1$ that winds $\mathbb{R}$ around the circle by $x\mapsto e^{2\pi ix}$ is a fibration, and the fiber above the basepoint $1\in S^1$ is $\mathbb{Z}$. Incidentally, we proved this in the original six-part series that I mentioned earlier! Example #2 The based path space $\mathscr{P}S^1$ of the circle gives another example. This is the space of all paths in $S^1$ that start at the basepoint $1\in S^1$. The map $\mathscr{P}S^1\to S^1$ which sends a path to its end point is a fibration. What's the fiber above $1$? By definition, a path is in the fiber if and only if it starts and ends at 1. But that's precisely a loop in $S^1$! So the fiber above 1 is $\Omega S^1$. These examples give us two fibrations over the circle: $\mathbb{R}\to S^1$ and $\mathscr{P} S^1\to S^1$. And it gets even better. Both $\mathbb{R}$ and $\mathscr{P}S^1$ are contractible and therefore homotopy equivalent! By the claim above, $\Omega S^1$ and $\mathbb{Z}$ must be homotopy equivalent, too. This gives us the second isomorphism above. Pretty neat, right? If you're interested in the details of the claim and ideas used here, take a look at J. P. May's A Concise Course in Algebraic Topology, chapter 7.5. By the way, there is a dual notion to fibrations called cofibrations. (Roughly: a map is a cofibration if you can extend - rather than lift - homotopies.) And both of these topological notions have abstract, categorical counterparts -- also called (co)fibrations -- which play a central role in model categories. The Integers are Discrete The third isomorphism is relatively simple: we just have to think about what $\pi_0(X)$ really is. Recall that $\pi_0(X)=[S^0,X]$ consists of homotopy classes of basepoint-preserving maps $S^0\to X$. But $S^0$ is just two points, say $-1$ and $+1$, and one of them, say $-1$, must map to the basepoint of $X$. So a basepoint-preserving map $S^0\to X$ is really just a choice of a point in $X$. And any two such maps are homotopic when there's a path between the corresponding points! So $\pi_0(X)$ is the set of path components of $X$. It follows that $\pi_0(\mathbb{Z})\cong\mathbb{Z}$ since there are $\mathbb{Z}$-many path-components in $\mathbb{Z}$. And that's precisely the third isomorphism above! And with that, we conclude. QED Okay, okay, I suppose with all the background and justification, this isn't an honest-to-goodness one-line proof. But I still think it's pretty cool! Especially since it calls on some nice constructions in topology and category theory. Well, as promised in my previous post I'm (supposed to be) taking a small break from blogging to prepare for my oral exam. But I had to come out of hiding to share this with you - I thought it was too good not to! Until next time! **You might worry that $\pi_0(\Omega S^1)$ is just a set with no extra structure. But it's actually a group!
To see this, note that there is a multiplication on $\Omega S^1$ given by loop concatenation. It's not associative, but it is up to homotopy. (So loop spaces are not groups. They are, however, $A_\infty$ spaces.) So in general, sets of the form $[X,\Omega Y]$ are groups. For more, see May's book, section 8.2. ***Yes, sets. Not groups. In general, $\pi_0(X)$ is merely a set (unlike $\pi_n(X)$ for $n\geq 1$, which is always a group). But we're guaranteed that $\pi_0(\mathbb{Z})$ is a group since it's isomorphic to $\pi_0(\Omega S^1)$ (and see the second footnote).
Let $G$ be a topological group the underlying set of which is infinite (e.g., $(\mathbb{R}\,;+)$ or $(\mathbb{Z}\,;+)$), and let $H$ be a topological group the underlying set of which is finite (e.g., the group $P(n\,;\mathbb{R})$ of $(n\times n)$ permutation matrices). My questions are: Is it possible to have a non-trivial group homomorphism $\phi:G\longrightarrow H$? If so, is it possible to have a $\phi$ such that the mapping $G\ni x\mapsto \phi(x)\in H$ is continuous? Specifically, I am interested in the case $G=\mathbb{R}$ and $H=P(n\,;\mathbb{R})$. Thank You.
I am proving this; all improvements are welcome. If I am completely wrong, tell me. Let $h:M\rightarrow N$ and $i:N\rightarrow O$. Prove that if $i \circ h$ is injective, then $h$ is injective. This is my proof. Assume that $i \circ h$ is one-to-one. This means that if $i(h(a_1))=i(h(a_2))$ then $a_1=a_2$. Assume, for contradiction, that $h$ is not injective; then there exist $b_1 \neq b_2$ with $h(b_1) = h(b_2)$. Applying $i$ to both sides gives $i(h(b_1))=i(h(b_2))$ even though $b_1 \neq b_2$. This contradicts $i \circ h$ being injective. Therefore $h$ must be injective.
As can be shown, there are no interacting helicity-3 (and higher) particles (i.e., massless spin-3 or higher particles) in the soft limit (small momenta of the emitted particles of given helicity). Can this result tell us something about the high-momentum limit (i.e., the principal possibility of interaction of the corresponding particles with other fields in the high-energy limit)? The proof of the statement above can be done in the following way. First we assume an arbitrary process with external lines which correspond to particles that can interact with our massless particles ("charged" particles). Then we modify one of the external lines by adding an external line of the massless particle, after which we take the soft limit and sum over all possible ways of emission of this particle (the external line of this particle may begin on each external line of the charged particles). As can be shown, the total amplitude takes the form $$ M = M_{0} \sum_{n} M^{\mu_{1}...\mu_{2m}}_{n}(p_{n}, q)\varepsilon_{\mu_{1}...\mu_{2m}}(q). $$ Here $M_{0}$ refers to the amplitude without emission of our massless particle, while the other factor corresponds to the "emission" part: $$ M^{\mu_{1}...\mu_{2m}}_{n}(p_{n},q) = f_{n}\eta_{n}\frac{p_{n}^{\mu_{1}}...p_{n}^{\mu_{2m}}}{(p_{n} \cdot q) - i\varepsilon}, $$ where $\eta_{n} = \pm 1$ ($+1$ for emission from an outgoing particle and $-1$ for emission from an ingoing particle), $f_{n}$ is the coupling constant of the interaction of the $n$-th charged particle with our massless particle, and $\varepsilon_{\mu_{1}...\mu_{2m}}(q)$ is just the polarization tensor of the massless particle with momentum $q$. The requirement of Lorentz invariance of the process leads us to the statement that $$ \tag 1 M^{\mu_{1}...\mu_{2m} }q_{\mu_{1}} = 0 \Rightarrow \sum_{n} f_{n}\eta_{n}p_{n}^{\mu_{1}}...p_{n}^{\mu_{2m - 1}} = 0. $$ There is no non-trivial conserved object (like the total momentum or the total charge) of rank $l \geqslant 3$ built only from momenta, so the only way to satisfy $(1)$ without forbidding all non-trivial processes is to set all of the $f_{n}$ to zero. Analogous reasoning for the helicity 1 and 2 cases leads to the proof of charge conservation and the equivalence principle, without any additional statements about the gauge invariance of the corresponding interaction theories. The details can be found in Weinberg's QFT (the chapter about infrared photons).
What are the differences in the provided functionality between these (the ones I named and possibly others) TeX plugins? Here are a couple of related questions on SO: I will not provide an in-depth comparison, as I only have limited experience with the different plugins. I know a couple of plugins well (LaTeX-Box and LaTeX-Suite), and I know one plugin very well, since I am developing it myself: vimtex. Thus, I will mostly write about vimtex. However, I will first point out some references that might be of interest to others. There are a lot of plugins for Vim. These are the ones I've recognized as being at least semi-popular: vimtex is based on LaTeX-Box. It started out after I had contributed bug fixes and updates to LaTeX-Box for some time. I realized that the plugin could be written in a much more modern way if I wrote it from scratch. I first stripped most features and built a more robust and modular "engine". I then added features, and I think today it has most of the features of LaTeX-Box and then some. Instead of giving a full list of features (see instead here for that), I will rather try to point out some of the differences between vimtex and other plugins. However, I want to provide some bold claims: Since vimtex is based on LaTeX-Box, it obviously has similar principles. The idea is to keep things simple and to solve problems that are not already solved by other, better plugins. It utilizes latexmk to compile the LaTeX documents, and it builds upon the internal Vim plugin for syntax highlighting. There is currently one important feature in LaTeX-Box that is missing in vimtex: single-shot compilation with callback. The reason that this feature is not in vimtex is simply because it is complicated, and I never found a way to implement it that is simple enough for my own preference (suggestions are welcome, please don't hesitate to open issues or pull requests). The main difference between vimtex and LaTeX-Suite is probably that vimtex does not try to implement a full-fledged IDE for LaTeX inside Vim. E.g.: latexmk for compilation with a callback feature to get instant feedback on compilation errors I can't do a comparison, as Vim-LaTeX is the only LaTeX plugin I've used. I have been using Vim-LaTeX for almost a year, so I will talk about Vim-LaTeX alone. There are many features present in Vim-LaTeX. I don't remember all of them; I'll just talk about features that I know and use constantly. Note: these reflect my limited user experience, which may be very misleading. I'm not a seasoned Vim user, and I know nothing about vimscript. <C-j> jumping The IMAP() function and the <C-j> jumping functions are provided separately as a plugin, imaps.vim, in the Vim-LaTeX bundle. They are powerful features and could be very useful even when you are not writing LaTeX. The IMAP() function provides a more natural way to do insert mode mappings and templating in general than the built-in imap and iabbrev, IMO. <C-j> jumping is utilized by many Vim-LaTeX completion features. A jumping point is indicated by <++>. Built-in insert mode key mappings are implemented as IMAP() calls.
For example, you can find a long list of useful IMAP() calls in the main.vim file:

call IMAP ('__', '_{<++>}<++>', "tex")
call IMAP ('()', '(<++>)<++>', "tex")
call IMAP ('[]', '[<++>]<++>', "tex")
call IMAP ('{}', '{<++>}<++>', "tex")
...
call IMAP ('((', '\left( <++> \right)<++>', "tex")
call IMAP ('[[', '\left[ <++> \right]<++>', "tex")
call IMAP ('{{', '\left\{ <++> \right\}<++>', "tex")
...

Then when you type, say, (), the cursor will automatically end up between the parentheses, replacing the first <++>. After you have finished typing inside, you hit <C-j> and bang, the cursor moves out of the parentheses and you just keep typing forward. Once you are used to it, it begins to form a typing flow which is kinda addictive... You see from the above that a \left \right pair can be typed easily with a double stroke of its opening bracket, and <C-j> jumping keeps the typing flowing. One major glitch of the IMAP() and <C-j> machinery is that it messes up your last-change history. (One bug I have wished to fix for a long time.) Therefore, you may encounter unexpected behavior when trying to repeat your last change by . if your "supposed last change" contains these function calls. You can do all kinds of mappings using IMAP(), from simple key mappings to more complex templating. Here are some examples of my mappings (ftplugin/tex.vim):

call IMAP('*EEQ',"\\begin{equation*}\<CR><++>\<CR>\\end{equation*}<++>",'tex')
call IMAP('DEF',"\\begin{definition}[<++>]\<CR><++>\<CR>\\end{definition}<++>",'tex')
call IMAP('BIC','\binom{<++>}{<++>}<++>','tex')
call IMAP('PVERB','\PVerb{<++>}<++>','tex')
call IMAP('VERB','\verb|<++>|<++>','tex')

An interesting fact about the imaps.vim plugin is that it's a global plugin, which implies its potential usage beyond LaTeX. Indeed, I do use <++> and <C-j> jumping (combined with other plugins) to build code snippet templates in C. <F5> <F7> Insertion of Commands and Environments One disadvantage of IMAP() is that the key combination cannot be used in normal text anymore (unless you undo the mapping by u). In cases where you want to trigger the mapping only on demand, the <F5> and <F7> keys come in handy. These two keys are used for triggering environment and inline command insertion, respectively, and they behave differently based on the mode and customizations from the user. In Insert/Normal Mode, when the cursor is attached to a word or is inside the word, pressing <F5> will by default insert a basic environment of the form \begin{word}<Cursor>\end{word}<++> based on the word; pressing <F7> will by default insert a basic inline command of the form \word{}<++> based on the word. By "by default", I mean you can customize the behavior of a specific word when triggered by <F5>/ <F7>. Here are some of my settings (.vimrc):

let g:Tex_Com_newcommand = "\\newcommand{<++>}[<++>]{<++>}<++>"
let g:Tex_Com_latex = "{\\LaTeX}<++>"
let g:Tex_Com_D = "\\D{<++>}{<++>}<++>"

In Insert/Normal Mode, when the cursor is not attached to anything (a.k.a. alone), pressing <F5>/ <F7> will prompt a menu from which to select the environment/command to insert. Or you can type the name of the desired environment/command at the bottom. Personally, I rarely use <F5>/ <F7> this way. Pressing <F5>/ <F7> after visually selecting a piece of text will prompt a menu for wrapping the text. The selected text will then be wrapped in the environment/command you selected or typed. In Insert/Normal Mode, when the cursor is in the scope of an environment/command, pressing <Shift>+<F5>/<F7> will prompt a menu for changing the environment/command. `a to `z and corresponding capitals.
`8 for \infty, `< for \le, `I for \int_{<++>}^{<++>}<++>, etc. " twice gets a pair of TeX double quotes. So to type literal " character, you have to use . \item tag. \left \right pair by `(, `[ and `{. Tex_FoldedSections, Tex_FoldedMisc, and Tex_FoldedEnvironments. Sometimes the built-in mappings have just gone too far or are not quit what you want. You can override the built-in mappings by redefining them in after/ftplugin/tex.vim: call IMAP('`|','\abs{<++>}<++>','tex')call IMAP('ETE',"\\begin{table}\<CR>\\centering\<CR>\\caption{<+Caption text+>}\<CR>\\label{tab:<+label+>}\<CR>\\begin{tabular}{<+dimensions+>}\<CR><++>\<CR>\\end{tabular}\<CR>\\end{table}<++>",'tex')call IMAP('==','==','tex')call IMAP('`\','`\','tex') I always need to switch between pdflatex and xelatex engine. Thus, I have the following lines in my .vimrc: "switch to pdflatexfunction SetpdfLaTeX() let g:Tex_CompileRule_pdf = 'pdflatex --interaction=nonstopmode -synctex=1 -src-specials $*'endfunctionnoremap <Leader>lp :<C-U>call SetpdfLaTeX()<CR>"switch to xelatexfunction SetXeLaTeX() let g:Tex_CompileRule_pdf = 'xelatex --interaction=nonstopmode -synctex=1 -src-specials $*'endfunctionnoremap <Leader>lx :<C-U>call SetXeLaTeX()<CR> This is a messy and complicated topic. With certain PDF viewer and a certain amount of luck, it can be very easy. But it's mainly a matter of google search. Overall, I think it will work well if you are willing to invest some time to tame the beast. That being said, had I time and adequate knowledge, I will surely thin off the overhead features and explore the potentials of integration with other plugins.
Posted by Nate on August 24, 2017 A Data-Driven Approach to LaTeX Autocomplete Autocomplete Nearly anywhere you go on the web today you will find some sort of autocomplete feature. Start typing into Google and you get immediate suggestions related to your query. If you code in other languages, many IDEs have built-in, or configurable, autocomplete tools that complete variables, functions, methods, etc., with varying degrees of success. At best, these tools speed up the process of programming by actively bug checking, suggesting variables of the correct type, methods of the correct object/class, and can sometimes offer documentation when opening functions. These tools allow the user to focus their time on more valuable concepts and ideas, rather than syntax. When learning a new programming language, especially if it is your first language, it can be difficult to remember syntax, and \(\mathrm{\LaTeX}\) is no exception. “Was it \product, \times, \mult, \prod or something else to produce \(\prod\)?” These questions are often asked by new users and can range from being a rather minor nuisance, to a painstakingly slow and annoying time sink. By the way, it is \prod ☺. To help combat the aforementioned issues (and others), Overleaf has included a default list of commands which it will suggest. Just type \ into the editor to see the dropdown list. The list is by no means comprehensive, but it does offer most of the frequently used commands that are needed to build a basic \(\mathrm{\LaTeX}\) document. What can we do to improve? When suggesting commands, as of now, we simply use a fuzzy search using Fuse.js, which works well in some cases, but surely does not account for the popularity of commands. For example, when typing \c, the fuzzy search ranks commands beginning with c first, and hence columnbreak is the first completion. While yes, according to the algorithm this is a good match, it isn't the best for productivity. Wouldn't it be nice if chapter, cite, caption, and centering were suggested before that? To make this happen we have begun studying which commands are being used frequently in publicly available \(\mathrm{\LaTeX}\) documents. Fortunately, there are many collections of public \(\mathrm{\LaTeX}\) documents that we can use, such as the arXiv, and also the Overleaf Gallery, which contains just under 8000 .tex documents (we define a document as a single .tex file). For this blog post, we’re going to use the Overleaf gallery, which contains a mixture of research articles, presentations, and CVs. It also contains a large number of \(\mathrm{\LaTeX}\) examples and \(\mathrm{\LaTeX}\) templates, which are not necessarily the most representative documents, but it provides a good starting point, and as we will see, a useful one. The reason we want chapter, cite, caption, and centering to be ranked ahead of columnbreak is because they are used more. So to find the commands that should top the suggestion list one might think to simply look at raw counts of commands. Doing this for our given corpus we find some odd commands in the top ten list we didn't quite expect (pgf and pdfglyphtounicode). After a little investigation we found these commands were appearing tens of thousands of times in very few documents and nowhere else. To avoid such extreme cases we weight commands by the number of documents they appear in (see Methodology for details). It is not too surprising that we find many of \(\mathrm{\LaTeX}\)’s structural commands in the top ten list, but perhaps textbf is somewhat surprising.
I guess bolding is more fashionable than italicizing. Another feature which might spark some interest is the relatively high frequency of chapter when excluding no appearances. This is because of the context in which this command appears. Often, when writing documents with multiple chapters, authors will break these chapters into separate files and have main.tex call the respective chapters in via \chapter{foo}\input{chapters/foo}. This somewhat artificially inflates the frequency of the chapter command (and possibly other commands). We say artificially, because really the input files are all a part of the same project, and they should be considered together. This, however, has not yet been done in our analysis. We can produce an analogous bar plot for environments (anything that starts with \begin{…} and ends with \end{…}) where we view an entire environment as a single entity. The following shows mostly what we would expect, with document appearing the most often among all documents, but rather peculiarly, we see the frequency of the frame environment (from the beamer package) is very large when we have enforced a single appearance. This means that while it's not the most frequently occurring environment, when it is used, it constitutes nearly 40% of the document's environments! With this data we will rank commands based on their corpus frequency so you spend less time looking for your command, and more time focusing on what’s important. Now you may say: Hold on, so even once I start my document environment, the next time I open an environment I will be suggested \begin{document} as the number one completion? Well, it takes some fine tuning. In particular, one thing we can do is look at the median number of times these commands are used in a document (excluding no appearances). Doing this gives us a better picture of how many times commands are being used in a given document. So if a command is usually being used one time per document, then we probably shouldn’t continue suggesting it after that one use (or at least push it down the list). Below you can see the median number of uses of commands in documents in which they appear. Use the dropdown menu to toggle between the top 10 commands and environments! Dissociating the Data We have a fair amount of data at this point, and while what we have seen thus far is helpful, we can do better. \(\mathrm{\LaTeX}\) documents should not solely be considered as a stream of input tokens; rather they have logical structure. We would very likely get cleaner, more representative data if we took this into account. \(\mathrm{\LaTeX}\) Structure \(\mathrm{\LaTeX}\) documents are built up from smaller pieces, namely commands and environments. Given we have already studied the global use of commands and environments, it is then important to look more closely at how these are used together. Preamble Ideally we want to suggest commands based on the context of the cursor's position within your document. An important example of context is the \(\mathrm{\LaTeX}\) document preamble: there, it is highly unlikely that you will need to use commands such as \section{…}, \chapter{…}, or many math commands. Wouldn’t it be nice if we didn’t suggest them? We can perform a very similar analysis as above to find the commands which occur in the preamble of all the documents (that is, commands that occur before \begin{document}, ignoring documents that contained no document environment).
If you have ever composed a preamble and loaded some packages, then it is unlikely that these results will come as a surprise. The long tail on the above plot is attributed to the fact that preambles, while sharing some structure, can vary wildly based on which packages are loaded. It is often the case that commands used in the preamble depend on which packages have already been loaded; we'll address that point in a minute.

Environments

Just as above, we can study which commands are used most frequently in given environments. In particular, we explore the top 10 as case studies, and these can be viewed in the following plot's dropdown menu. Here we begin to see some real structure emerging from this data. We see much more definitive trends, such as the item command being used extremely heavily in list-like environments, the includegraphics command being used heavily in the figure environment, and so on. This data will allow us to provide context-sensitive autocomplete suggestions based on which document element is currently being edited, providing a much more effective and efficient editing experience. An important feature to note is the seemingly high frequency of the begin and end commands appearing within environments. Naturally, this suggests that documents often have nested environments, which can be common in \(\mathrm{\LaTeX}\) documents, depending on which environments are being used; for example, \begin{table}\begin{tabular}…. If we could understand these nesting patterns we would even be able to provide context-aware environment suggestions! Of course we have a very finite data set, so we can only take this so far.

Packages

For future work, we will begin to explore links between which packages have been loaded and which commands are used most frequently in conjunction with those packages, with the goal of suggesting commands based on the packages you have loaded.

What's Next?

While getting this data is one thing, implementing it is another. We've already started to improve ShareLaTeX's autocomplete (since we've now joined forces): now, along with suggesting commands you have already used in your document, it will suggest the top 100 most frequent commands as indicated in the analysis above! While we acknowledge this data set is not completely representative, it has given us a great bird's-eye view of what .tex documents look like and how people are using the language. In order to obtain data with more predictive power, we are continuing to study the structure and use of \(\mathrm{\LaTeX}\) documents. Along with this, we plan to add more corpora to our existing Overleaf Gallery, such as source files from the arXiv and maybe even GitHub.

Methodology

In order to compute the frequencies plotted in the "What can we do to improve?" section, let's establish a bit of notation. Let the corpus, or collection of documents, be \(\mathsf{D}\) and the collection of all commands used in \(\mathsf{D}\) be \(\mathsf{C}_\mathsf{D}\). Fun fact: there are roughly 15,000 unique commands used throughout this corpus and over 900,000 total command uses! For each command \(\mathsf{c}\) in \(\mathsf{C}_\mathsf{D}\), we can calculate its local frequency with respect to each document \(\mathsf{d}\in\mathsf{D}\) as the simple ratio \[f_\mathsf{c,d} = \frac{n_\mathsf{c}}{N_\mathsf{d}}\] where \(n_\mathsf{c}\) is the number of times the command \(\mathsf{c}\) appears in document \(\mathsf{d}\), and \(N_\mathsf{d}\) is the total number of command uses in the document \(\mathsf{d}\).
Note that \(f_\mathsf{c,d}\) will be 0 whenever the command \(\mathsf{c}\) does not appear in document \(\mathsf{d}\). We can now calculate the global frequency of each command in the given corpus by averaging all local frequencies: \[f_\mathsf{c} = \frac{1}{|\mathsf{D}|} \sum_{\mathsf{d}\in\mathsf{D}} f_{\mathsf{c,d}}\] where \(|\mathsf{D}|\) is the number of documents in the corpus. This method of calculating frequencies weights commands not only by how many uses they have, but also by how many documents we find them in. This gives an effective measure of the permeability of commands through a wide range of documents, and it is what you see plotted above in the lighter green. With this information alone we can rank commands based on how often they are used. What is also interesting to look at, once we have this data, is how often the most used commands are used in the documents that they do appear in. So we define a modified frequency \(\tilde{f}_{!\mathsf{c}}\) that depends on the set \(\mathsf{D}_\mathsf{c}\) consisting of all documents \(\mathsf{d}\) in which the command \(\mathsf{c}\) is found (that is, \(\mathsf{D}_\mathsf{c} = \{\mathsf{d}\in\mathsf{D}\,\mid\,\mathsf{c}\in\mathsf{d}\}\)): \[\tilde{f}_{!\mathsf{c}} = \frac{1}{|\mathsf{D}_\mathsf{c}|} \sum_{\mathsf{d}\in\mathsf{D}_\mathsf{c}} f_{\mathsf{c,d}}\] This quantity says more about how commands are used within the documents they actually appear in, rather than taking a corpus-wide view. In the plots above, it is represented with the darker shade of green. Note that the plots are sorted by their corpus, or global, frequency.
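For readers who want to experiment, here is a rough sketch of how these two quantities could be computed from a collection of .tex sources. This is purely illustrative: the command-extraction regex and the corpus handling are simplifications of my own, not Overleaf's actual pipeline.

```python
import re
from collections import Counter

CMD_RE = re.compile(r"\\([A-Za-z]+)")   # naive: grabs backslash-command names

def command_counts(tex_source):
    """Count command uses (n_c) in a single document."""
    return Counter(CMD_RE.findall(tex_source))

def frequencies(corpus):
    """corpus maps document name -> .tex source string.
    Returns (f_global, f_cond): the corpus frequency f_c and the
    conditional frequency restricted to documents containing c."""
    local = {}                                   # doc -> {command: f_{c,d}}
    for name, source in corpus.items():
        counts = command_counts(source)
        total = sum(counts.values()) or 1        # N_d
        local[name] = {c: n / total for c, n in counts.items()}

    all_cmds = {c for freqs in local.values() for c in freqs}
    n_docs = len(corpus)

    f_global, f_cond = {}, {}
    for c in all_cmds:
        per_doc = [freqs.get(c, 0.0) for freqs in local.values()]
        present = [x for x in per_doc if x > 0]
        f_global[c] = sum(per_doc) / n_docs      # averaged over all of D
        f_cond[c] = sum(present) / len(present)  # averaged over D_c only
    return f_global, f_cond
```

Sorting the commands by f_global then reproduces the kind of ranking discussed above, while f_cond corresponds to the darker bars.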
This question arises when looking at a certain constant associated to (a certain Banach algebra built out of) a given compact group, and specializing to the case of finite groups, in order to try and do calculations for toy examples. It feels like the answer should be (more) obvious to those who play around with finite groups more than I do, or who at least know more of the literature.

To be more precise: let $G$ be a finite group; let $d(G)$ be the maximum degree of an irreducible complex representation of $G$; and (with apologies to Banach-space theorists reading this) let $K_G$ denote the order of $G$ divided by the number of conjugacy classes. Some easy but atypical examples: if $G$ is abelian, then $K_G=1=d(G)$; if $G=Aff(p)$ is the affine group of the finite field $F_p$, $p$ a prime, then $$K_{Aff(p)}=\frac{p(p-1)}{p}=p-1=d(Aff(p)).$$

Question. Does there exist a sequence $(G_n)$ of finite groups such that $d(G_n)\to\infty$ while $$\sup_n K_{G_n} <\infty\,?$$

To give some additional motivation: when $d(G)$ is small compared to the order of $G$, we might regard this as saying that $G$ is not too far from being abelian. (In fact, we can be more precise and say that $G$ has an abelian subgroup of small index, although I can't remember the precise dependence at the time of writing.) Naively, then, is it the case that having $K_G$ small compared to the order of $G$ also implies that $G$ is not too far from being abelian?

Other thoughts. Since the number of conjugacy classes in $G$ is equal to the number of mutually inequivalent complex irreps of $G$, and since $|G|=\sum_\pi d_\pi^2$, we see that $K_G$ is also equal to the mean square of the degrees of the complex irreps of $G$. Now it is very easy, given any large positive $N$, to find a sequence $a_1,\dots, a_m$ of strictly positive integers such that $$\frac{1}{m}\sum_{i=1}^m a_i^2 \quad\text{is small while}\quad \max_i a_i > N,$$ so the question is whether we can do so in the context of degrees of complex irreps -- and if not, why not? The example of $Aff(p)$ shows that we can find examples with only one large irrep, but as seen above such groups won't give us a counterexample.
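Added: for what it's worth, symmetric groups do not help. Here is a rough Python sketch (entirely my own, using the hook length formula for the irreducible degrees of $S_n$) which shows $K_{S_n}$ and $d(S_n)$ growing together; it is only meant to give a feel for the two quantities on small examples.

```python
from math import factorial

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def degree(shape):
    """Number of standard Young tableaux of the given shape (hook length formula),
    i.e. the degree of the corresponding irreducible representation of S_n."""
    n = sum(shape)
    conj = [sum(1 for row in shape if row > j) for j in range(shape[0])]
    prod = 1
    for i, row in enumerate(shape):
        for j in range(row):
            prod *= (row - j) + (conj[j] - i) - 1   # hook length of cell (i, j)
    return factorial(n) // prod

for n in range(2, 9):
    degs = [degree(p) for p in partitions(n)]
    K = factorial(n) / len(degs)    # |G| divided by the number of conjugacy classes
    d = max(degs)                   # largest irreducible degree
    print(n, round(K, 2), d)        # both columns grow quickly
```

Of course this only rules out the most obvious family; the question is whether some other family can keep $K_{G_n}$ bounded while $d(G_n)$ blows up.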
By logic without equality I mean those kinds of logics where equality is treated as a binary relation satisfying some axioms, as opposed to logics where equality is a logical symbol satisfying some inference rules. In my understanding there should be no difference at the syntactic level between logics with or without equality, since the syntax should not be able to distinguish a logic-primitive predicate from any other predicate (but in case I am wrong please correct me). So I am wondering what the differences are at the semantic level between these kinds of logics. In particular I am wondering whether all the definitions and theorems of classical model theory can be transported verbatim to the without-equality logics. If possible I would also appreciate references that treat the subject.

Addendum: After Andrej Bauer's answer I realize it would be better to add some specification. I am basically interested in logical theories where every theory comes equipped with an axiom schema for substitution, that is, a family of axioms of the form $$\forall x,y.\ \bigl((x=y) \land \varphi(x)\bigr) \rightarrow \varphi(y)$$ where $\varphi$ is a formula of the language. In short, I am curious to know what changes if we change the semantics of the equality symbol.

Addendum 2: It also seems, from the answer provided so far, that the model theory should remain unchanged, but it does not look like that to me. First of all, while quotienting by the equivalence relation should provide an (elementarily?) equivalent model, it does not (necessarily) preserve the homomorphisms: consider a set with an equivalence relation that identifies all elements; the corresponding quotient structure should be the terminal model (in the sense of category theory), but the first model clearly does not have to be a terminal object in the category. Also, it seems to me that one could lose the characterisation of homomorphisms as mappings that preserve atomic formulas, since, if we drop the requirement of interpreting the equality symbol as the identity relation, then we could have mappings that preserve atomic formulas but, for some operation symbols, do not respect the external equality: i.e. we could have a mapping $f \colon A \to B$ and an operation symbol $o$ such that $$B \models f(o^A(a)) = o^B(f(a))$$ but $f(o^A(a))$ and $o^B(f(a))$ are not identical. It seems rather difficult to find references that deal with this kind of phenomenon, which is why I would really appreciate it if someone could provide some pointers. Thanks in advance.
One form of Gronwall's inequality is the following: if $\alpha(x),u(x)$ are non-negative continuous functions on $[0,1]$ and $$\forall x\in [0,1],\quad u(x)\leq C+\int_{0}^{x}[\alpha(s)u(s)+K]\,ds \quad (C,K\geq0),$$ then $u(x)\leq[C+Kx]e^{\int_{0}^{x}\alpha(s)\,ds}$.

One form of the comparison theorem is the following. Assume that $f(x,y),F(x,y)$ are continuous on a domain $\Omega\supset [0,1]\times\mathbb{R}$ and $f(x,y)<F(x,y)$ for all $(x,y)\in\Omega$, that $y=\phi(x)$ and $y=\varphi(x)$ are solutions to $y'=f(x,y)$ and $y'=F(x,y)$ respectively, and that $\phi(0)=\varphi(0)$. Then $\phi(x)<\varphi(x)$ for all $x\in(0,1]$.

Are there any interpretations of Gronwall's inequality in view of the comparison theorem? I am not sure if they have any connections besides the fact that Gronwall's inequality can be used to prove the comparison theorem. Will someone be kind enough to give some comments on this? Thank you very much!
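Edit: here is a heuristic I have been toying with, in case it sharpens the question (only a sketch, and it assumes extra smoothness of $u$ that the integral form of Gronwall's inequality does not require). The Gronwall bound is exactly what a comparison against the linear ODE $v'=\alpha(x)v+K$, $v(0)=C$, would give: the solution of this ODE is $$v(x)=e^{\int_0^x\alpha(s)\,ds}\left(C+K\int_0^x e^{-\int_0^s\alpha(r)\,dr}\,ds\right)\leq (C+Kx)\,e^{\int_0^x\alpha(s)\,ds},$$ using $\alpha\geq 0$ so that $e^{-\int_0^s\alpha}\leq 1$. So Gronwall's conclusion $u(x)\leq (C+Kx)e^{\int_0^x\alpha(s)\,ds}$ can be read as saying that a function satisfying the integral version of $u'\leq \alpha u+K$ stays below (a bound for) the solution of the corresponding equality ODE, i.e. a comparison-type statement that does not need $u$ to be differentiable.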
Just this week I started a Data Structures and Algorithms course at my alma mater, San Jose State. And in just this short week I have noticed that one's ability to properly evaluate one's code will take you a long way. Moreover, the skills developed through practicing the analysis of algorithms can go far in other disciplines and studies. A firm understanding of, and comfort with, summations is fundamental to properly analyzing one's algorithms. In fact, coming from an EE background, I have seen summations many times, and I have always felt as though I understood their underlying principles and implications. With that being said, I want to refresh myself; more specifically, to create a document that I can later refer to and add to if needed.

Prior to this class, I had always thought of summations as discrete representations of analog integrals. However, when I naively assumed in the first reading that the trivial summation $$\sum_{i=1}^{n} i$$ translated directly into the continuous integral $$\int_1^n i \, di,$$ I quickly realized I was wrong once I did the math. For $$\sum_{i=1}^{n} i = \frac{n(n+1)}{2} \quad (1) \qquad \text{and} \qquad \int_1^n i \, di = \frac{n^2-1}{2}. \quad (2)$$ At that point I figured it would be a great investment to really look at how the definite (continuous) integral relates back to the discrete summation.

Definition: Much of what I read in preparation for this post can be found here. Thank you, UC Davis. According to the resources linked above, the definition of the definite integral in terms of discrete summations is $$\int_a^b f(x)\,dx = \lim_{n\to\infty} \sum_{i=1}^{n} f(c_{i})\cdot \Delta x_{i} \quad (3)$$ where $$\Delta x_{i} = \frac{b-a}{n}, \text{ the length of each subinterval,} \quad (4)$$ and $$c_{i} = a + \left(\frac{b-a}{n}\right)i, \text{ the right endpoint of the } i\text{th subinterval.} \quad (5)$$

Objective: Solve one of the sample problems from the link using (3). Then work backwards from (1) with (3) to get the equivalent continuous-integral representation (2).

Solved Example: I decided to solve Problem 2. It had a form similar to the continuous equation I thought eq. (1) would yield, so I figured working through a similar example would offer more insight. The UC Davis site contains solutions, so for an in-depth solution I suggest visiting their site. There are also many more solved examples to practice with.

Problem 2: Use the limit definition of the definite integral to evaluate $\int_0^1 (2x + 3)\, dx$. With $f(x) = 2x + 3$, $a=0$, and $b=1$, we get $$x = c_{i} = a + \left(\frac{b-a}{n}\right)i = \frac{i}{n}$$ and $$\Delta x_{i} = \frac{b-a}{n} = \frac{1}{n}.$$ Combining these in the right-hand side of eq. (3), we obtain $$\lim_{n\to\infty} \sum_{i=1}^n \left(\frac{2i}{n^2} + \frac{3}{n}\right) = \lim_{n\to\infty} \left( \frac{2}{n^2} \sum_{i=1}^n i + \sum_{i=1}^n \frac{3}{n} \right).$$ From here, one can view the detailed solution at the UC Davis link. I merely present the problem up to this point to show how I set it up using eqs. (3), (4), and (5).
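As a quick numerical sanity check (a rough sketch of my own, not part of the UC Davis material), the snippet below evaluates the right-endpoint Riemann sum from eqs. (3)-(5) for $f(x)=2x+3$ on $[0,1]$ and watches it approach the exact value 4, and then compares (1) with (2) to show why the naive sum-to-integral translation fails.

```python
def riemann_sum(f, a, b, n):
    """Right-endpoint Riemann sum with n equal subintervals, as in eqs. (3)-(5)."""
    dx = (b - a) / n
    return sum(f(a + dx * i) * dx for i in range(1, n + 1))

f = lambda x: 2 * x + 3

# Converges to 4 = int_0^1 (2x + 3) dx as n grows.
for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(f, 0, 1, n))

# Comparing (1) with (2): the gap is (n + 1) / 2, which is unbounded,
# so the sum is not simply "the integral in disguise".
for n in (10, 100, 1000):
    exact_sum = n * (n + 1) // 2        # eq. (1)
    integral = (n ** 2 - 1) / 2         # eq. (2)
    print(n, exact_sum, integral, exact_sum - integral)
```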
Does there exist a triple of distinct numbers $a,b,c$ such that $$(a-b)^5 + (b-c)^5 + (c-a)^5 = 0\,?$$

SOURCE: Inequalities (PDF) (Page Number 4; Question Number 220.1)

I tried expanding the brackets and I ended up with this messy equation: $$-5 a^4 b + 5 a^4 c + 10 a^3 b^2 - 10 a^3 c^2 - 10 a^2 b^3 + 10 a^2 c^3 + 5 a b^4 - 5 a c^4 - 5 b^4 c + 10 b^3 c^2 - 10 b^2 c^3 + 5 b c^4 = 0$$ There is no hope of setting $a=b$ or $a=c$, as the question specifically asks for distinct numbers. So, at last, I started collecting, grouping, factoring and manipulating the terms around, but could find nothing. Wolfram|Alpha gives a solution as: $$c=\dfrac{1}{2}\big(\pm\sqrt{3}\sqrt{-(a-b)^2} + a+b\big)$$ How can this solution be found? Another thing I notice about the solution is that it contains a negative term inside the square root, so does that mean that the solution involves complex numbers and that there is no solution with $(a,b,c)\in \mathbb{R}^3$? I am very confused about how to continue. Can anyone provide a solution/hint on how to 'properly' solve this problem? Thanks in advance! :)
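Edit: sketching one possible line of attack here, in case it helps others see where the Wolfram|Alpha answer comes from (this is only an outline, using the substitution $x=a-b$, $y=b-c$, $z=c-a$, so that $x+y+z=0$). With $e_1=x+y+z=0$, $e_2=xy+yz+zx$ and $e_3=xyz$, Newton's identities give $$x^5+y^5+z^5=-5\,e_2e_3=-5\,(xy+yz+zx)\,xyz.$$ Since $a,b,c$ are distinct, $xyz\neq 0$, so the equation forces $xy+yz+zx=0$. Substituting $z=-(x+y)$ gives $$xy-(x+y)^2=-(x^2+xy+y^2)=0 \quad\Longrightarrow\quad y=\frac{-1\pm i\sqrt{3}}{2}\,x,$$ which is non-real whenever $x\neq0$. Unwinding the substitution recovers the Wolfram|Alpha expression for $c$ and shows there is no real triple of distinct numbers.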
B. KRISHNAREDDY
Articles written in Journal of Astrophysics and Astronomy
Volume 40, Issue 2, April 2019, Article ID 0009

We report optical observations of TGSS J1054 $+$ 5832, a candidate high-redshift ($z = 4.8 \pm 2$) steep-spectrum radio galaxy, in $r$ and $i$ bands, using the faint object spectrograph and camera mounted on the 3.6-m Devasthal Optical Telescope (DOT). The source, previously detected at 150 MHz with the Giant Metrewave Radio Telescope (GMRT) and at 1420 MHz with the Very Large Array, has a known counterpart in near-infrared bands with a $K$-band magnitude of AB 22. The source is detected in the $i$-band with an AB 24.3 $\pm$ 0.2 magnitude in the DOT images presented here. The source remains undetected in the $r$-band image at a 2.5$\sigma$ depth of AB 24.4 mag over a $1.2^{\prime\prime}\times 1.2^{\prime\prime}$ aperture. An upper limit to the $i-K$ color is estimated to be $\sim$2.3, suggesting youthfulness of the galaxy with active star formation. These observations highlight the importance and potential of the 3.6-m DOT for detections of faint galaxies.
Revista Matemática Iberoamericana, Volume 22, Number 2 (2006), 591-648.

Riesz transforms for symmetric diffusion operators on complete Riemannian manifolds

Abstract

Let $(M, g)$ be a complete Riemannian manifold and $L=\Delta -\nabla \phi \cdot \nabla$ a Markovian symmetric diffusion operator with invariant measure $d\mu(x)=e^{-\phi(x)}d\nu(x)$, where $\phi\in C^2(M)$ and $\nu$ is the Riemannian volume measure on $(M, g)$. A fundamental question in harmonic analysis and potential theory asks whether or not the Riesz transform $R_a(L)=\nabla(a-L)^{-1/2}$ is bounded in $L^p(\mu)$ for all $1<p<\infty$ and for certain $a\geq 0$. An affirmative answer to this problem has many important applications in elliptic or parabolic PDEs, potential theory, probability theory, the $L^p$-Hodge decomposition theory, and in the study of the Navier-Stokes equations and boundary value problems. Using some new interplays between harmonic analysis, differential geometry and probability theory, we prove that the Riesz transform $R_a(L)=\nabla(a-L)^{-1/2}$ is bounded in $L^p(\mu)$ for all $a>0$ and $p\geq 2$ provided that $L$ generates an ultracontractive Markovian semigroup $P_t=e^{tL}$, in the sense that $P_t 1=1$ for all $t\geq 0$ and $\|P_t\|_{1, \infty} < Ct^{-n/2}$ for all $t\in (0, 1]$ for some constants $C>0$ and $n > 1$, and satisfies $$ (K+c)^{-}\in L^{{n\over 2}+\epsilon}(M, \mu) $$ for some constants $c\geq 0$ and $\epsilon>0$, where $K(x)$ denotes the lowest eigenvalue of the Bakry-Emery Ricci curvature $Ric(L)=Ric+\nabla^2\phi$ on $T_x M$, i.e., $$ K(x)=\inf\{Ric(L)(v, v): v\in T_x M, \|v\|=1\}, \quad\forall\ x\in M. $$ Examples of diffusion operators on complete non-compact Riemannian manifolds with unbounded negative Ricci curvature or Bakry-Emery Ricci curvature are given for which the Riesz transform $R_a(L)$ is bounded in $L^p(\mu)$ for all $p\geq 2$ and for all $a>0$ (or even for all $a\geq 0$).

Article information

Source: Rev. Mat. Iberoamericana, Volume 22, Number 2 (2006), 591-648.
Dates: First available in Project Euclid: 26 October 2006
Permanent link to this document: https://projecteuclid.org/euclid.rmi/1161871349
Mathematical Reviews number (MathSciNet): MR2294791
Zentralblatt MATH identifier: 1119.53022
Subjects: Primary: 31C12 (Potential theory on Riemannian manifolds), 53C20 (Global Riemannian geometry, including pinching), 58J65 (Diffusion processes and stochastic analysis on manifolds), 60H30 (Applications of stochastic analysis to PDE, etc.)

Citation: Li, Xiang Dong. Riesz transforms for symmetric diffusion operators on complete Riemannian manifolds. Rev. Mat. Iberoamericana 22 (2006), no. 2, 591--648. https://projecteuclid.org/euclid.rmi/1161871349
Monotone Convergence Theorem

Given a sequence of functions $\{f_n\}$ which converges pointwise to some limit function $f$, it is not always true that $$\int \lim_{n\to\infty}f_n = \lim_{n\to\infty}\int f_n.$$ (Take this sequence for example.) The Monotone Convergence Theorem (MCT), the Dominated Convergence Theorem (DCT), and Fatou's Lemma are three major results in the theory of Lebesgue integration which answer the question "When do $\displaystyle{ \lim_{n\to\infty} }$ and $\int$ commute?" The MCT and DCT tell us that if you place certain restrictions on both the $f_n$ and $f$, then you can go ahead and interchange the limit and integral. Fatou's Lemma, on the other hand, says "Here's the best you can do if you don't make any extra assumptions about the functions." Last week we discussed Fatou's Lemma. Today we'll look at an example which uses the MCT. And next week we'll cover the DCT.

Monotone Convergence Theorem: If $\{f_n:X\to[0,\infty)\}$ is a sequence of measurable functions on a measurable set $X$ such that $f_n\to f$ pointwise almost everywhere and $f_1\leq f_2\leq \cdots$, then $$\lim_{n\to\infty}\int_X f_n=\int_X f.$$

In this statement the $f_n$ are nondecreasing, but the theorem holds for a nonincreasing sequence as well, provided $\int_X f_1<\infty$. Let's look at an example which, on the surface, looks quite nasty. But thanks to the MCT, it's not bad at all.

Example

Let $X$ be a measure space with a positive measure $\mu$ and let $f:X\to[0,\infty]$ be a measurable function. Prove that $$\lim_{n\to\infty}\int_X n\log\left(1+\frac{f}{n}\right)d\mu \;=\;\int_X f\;d\mu.$$

Proof. Begin by defining \begin{align*}f_n&=n\log\left(1+\frac{f}{n}\right)\\&=\log\left(1+\frac{f}{n}\right)^n\end{align*} and note that each $f_n$ is nonnegative (since $f\geq 0$ implies $1+\frac{f}{n}\geq 1$, and $\log$ is nonnegative on $[1,\infty)$) and measurable (since the composition of a continuous function with a measurable function is measurable). Further, $f_1\leq f_2\leq\cdots$. Indeed, $\log$ is an increasing function and, for a fixed $x\in X$, the sequence $\left(1+\frac{f(x)}{n}\right)^n$ is increasing. In fact*, it increases to $e^{f(x)}$. In other words, $$\lim_{n\to\infty}f_n(x)=\lim_{n\to\infty}\log\left(1+\frac{f(x)}{n}\right)^n=\log e^{f(x)}=f(x).$$ Hence, by the Monotone Convergence Theorem, $$\lim_{n\to\infty}\int_X f_n\;d\mu=\int_X f\;d\mu$$ as desired.

Not so bad, huh?

Remark

Did you know that the MCT has a "continuous cousin"? (Well, maybe it's more like a second cousin.) Have you come across Dini's Theorem before?

Dini's Theorem: If $\{f_n:X\to\mathbb{R}\}$ is a nondecreasing sequence of continuous functions on a compact metric space $X$ which converges pointwise to a continuous function $f:X\to\mathbb{R}$, then the convergence is uniform.

Here we have a monotone sequence of continuous - instead of measurable - functions which converges pointwise to a limit function $f$ on a compact metric space. By Dini's Theorem, the convergence is actually uniform. So IF the $f_n$ are also Riemann integrable, then we can conclude** $$\lim_{n\to\infty}\int_Xf_n=\int_Xf.$$ Perhaps this doesn't surprise us too much: we've seen before that continuity and measurability are analogous notions (to a certain extent)!

Footnotes

*Recall from elementary calculus: $\displaystyle{\lim_{n\to\infty} \left(1+\frac{x}{n}\right)^n=e^x}$ for any $x\in\mathbb{R}$.

** See Rudin's Principles of Mathematical Analysis (3ed.), Theorem 7.16.
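As an aside, here is a quick numerical sanity check of the example (a rough sketch of my own, not part of the proof): take $X=[0,1]$ with Lebesgue measure and $f(x)=x$, so the claimed limit is $\int_0^1 x\,dx=\tfrac12$, and approximate each integral with a simple midpoint rule.

```python
import math

def integral(g, a=0.0, b=1.0, steps=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / steps
    return sum(g(a + (k + 0.5) * h) for k in range(steps)) * h

f = lambda x: x   # the chosen f on X = [0, 1]

# The values increase toward 0.5 = int_0^1 f dx, just as the MCT predicts.
for n in (1, 10, 100, 1000):
    f_n = lambda x, n=n: n * math.log(1 + f(x) / n)
    print(n, integral(f_n))
```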