texlive 2012 Debian package:

```latex
\documentclass[]{article}%
\usepackage{amsmath}
\begin{document}
\begin{align}
& \uparrow\sum F
\end{align}
\end{document}
```

Compiling with `htlatex report.tex "htm,mathml" " -cunihtf"` gives

```
/usr/share/texmf/tex/generic/tex4ht/html-mml.4ht)
(/usr/share/texmf/tex/generic/tex4ht/html4-uni.4ht))
(./report.aux)
! Bad mathchar (79119).
<argument> & \uparrow \sum F
l.8 \end{align}
?
```

But when compiling as `htlatex report.tex "htm,xhtml" " -cunihtf"` or `htlatex report.tex "htm" " -cunihtf"` there is no error. What should I do? Is this a bug in htlatex? Any workaround?

The problem happens when using \uparrow with mathml and when using align. Without align there is no error:

```latex
\documentclass[]{article}%
\usepackage{amsmath}
\begin{document}
$\uparrow\sum F$
\end{document}
```

Now `htlatex report.tex "htm,mathml" " -cunihtf"` gives no error.

Update: FYI, if you are using htlatex with mathml, watch out for align. The workaround I found for now is to use eqnarray; with that the errors went away. I am trying to compile to HTML code generated by Scientific Word, which generated the align LaTeX code. So, as a temporary fix, I had to edit the code and change all the align environments to eqnarray to get the HTML generated.
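Spelling out the workaround described above, the substitution amounts to no more than swapping the environment name; this is a minimal sketch of the edited document (the align/amsmath combination is replaced by eqnarray, which here produces the same single displayed equation):

```latex
\documentclass[]{article}%
\usepackage{amsmath}
\begin{document}
% eqnarray instead of align: avoids the "Bad mathchar" error when
% compiling with: htlatex report.tex "htm,mathml" " -cunihtf"
\begin{eqnarray}
& \uparrow\sum F
\end{eqnarray}
\end{document}
```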
Gödel's System T in TypeScript

Recently, I’ve been reading Bove and Dybjer’s paper “Dependent Types at Work”, where Kurt Gödel’s System T is briefly described. It is a type system based on the Simply Typed Lambda Calculus and includes booleans and natural numbers. The unusual thing about it is that it allows us to perform only primitive recursion, which considerably limits the number of possible programs we can write, but on the other hand guarantees that these programs always terminate. This means that System T is not Turing complete, as we can express only a subset of the total computable functions.

How come we only have primitive recursion? In the Untyped Lambda Calculus we can define fixed point combinators which allow us to simulate recursion. So if System T is based on the lambda calculus, how come we can’t have non-primitive recursion? The reason is the type system. Let’s recall from the article “On Recursive Functions” the \(\omega\) combinator, which applies a term to itself: \[\omega := \lambda x.xx\] However, here we’re dealing with types. What would the type of this term be? Let’s assume the second \(x\) in \(xx\) to be of type \(\alpha\). That means the first \(x\) should also be of type \(\alpha\). Here we arrive at a contradiction, because the first \(x\) is a function, so it should have a type \(\alpha \to \beta\) for some \(\beta\). Both terms should have the same type, hence the contradiction. Every fixed point combinator involves some kind of self-application; therefore, it cannot be expressed in the Simply Typed Lambda Calculus, which makes our language less powerful but, on the other hand, more predictable.

Building Blocks

System T includes predefined constants for True, False and Zero, as well as the Succ, Cases and Rec combinators, which represent the successor function, if-then-else, and primitive recursion respectively. Having this in our arsenal, we can now build some abstractions.
In the paper, the authors define the primitives and some operators in Agda and leave some additional tasks to the reader. So I tried to implement this system in TypeScript, which turned out to be a fun exercise.

Primitives

Bool: this is quite trivial, as we can use TypeScript’s boolean type.

Nat: the set of natural numbers. This is tricky, as the definition of Nat in System T is Nat: Zero | Succ. So it can be either zero or the successor function, iterated n number of times. To give an intuition: Zero == 0, Succ(Zero) == 1, Succ(Succ(Zero)) == 2, etc. For the sake of simplicity, I decided to use TypeScript’s number type, but only for representing the numbers. We’re not allowed to use their built-in properties, like arithmetic operations, comparison, etc. We’re going to construct them from the ground up using the predefined primitives.

Succ: Nat → Nat we define as:

```typescript
const Succ = (x: number): number => x + 1;
```

Cases<T>: Bool → T → T → T - think of it as a conditional (if-then-else) expression. System T allows polymorphic functions, which we implement using TypeScript’s generics.

```typescript
function Cases<T>(cond: boolean, a: T, b: T): T {
  return cond ? a : b;
}
```

It’s easy to see that Cases<number>(true, 1, 2) == 1 and Cases<number>(false, 1, 2) == 2.

Rec<T>: Nat → T → (Nat → T → T) → T - this is called Gödel’s Recursor.

```typescript
function Rec<T>(sn: number, s: T, t: (z: number, acc: T) => T): T {
  return sn === Zero ? s : t(sn - 1, Rec(sn - 1, s, t));
}
```

It might seem confusing at first, but its reduction is straightforward:

Rec 0 s t → s
Rec sn s t → t n (Rec n s t)

Rec<T> is a polymorphic function that takes three arguments (or four if we count the type). sn is the natural number on which we perform the recursion; think of sn as the successor of n. s is the element returned in the base case, whereas t is the function called on each recursive step. z enumerates each recursive step, and acc is the value which we “accumulate” over the recursion.
Later in the examples, we’ll see that we can use Rec<T> as a higher order function too.

Arithmetic Operators

This is where it gets interesting. We construct the addition operator in the following way:

```typescript
function add(x: number, y: number): number {
  return Rec<number>(x, y, (z, acc) => Succ(acc));
}
```

Seems weird? Let’s see what happens when we invoke add(2, 2):

t := λz.λacc.Succ acc
add 2 2 → Rec 2 2 t
→ t 1 (Rec 1 2 t)
→ t 1 (t 0 (Rec 0 2 t))
→ t 1 (t 0 2)
→ t 1 3
→ 4

Using it as a building block we can define multiplication:

```typescript
function multiply(x: number, y: number): number {
  return Rec<number>(y, Zero, (z, acc) => add(x, acc));
}
```

As well as exponentiation:

```typescript
function exp(x: number, y: number): number {
  return Rec<number>(y, 1, (z, acc) => multiply(x, acc));
}
```

We can also define the predecessor function:

```typescript
function pred(x: number): number {
  return Rec<number>(x, Zero, (z, acc) => z);
}
```

This is a bit different from what we’ve been defining so far, and it might not be so obvious why it works. t is a function that takes two arguments and returns the first one. Let’s walk through the reduction sequence of pred(3):

t := λz.λw.z
pred 3 → Rec 3 0 t
→ t 2 (Rec 2 0 t)
→ t 2 (t 1 (Rec 1 0 t))
→ t 2 (t 1 (t 0 (Rec 0 0 t)))
→ t 2 (t 1 (t 0 0))
→ t 2 (t 1 0)
→ t 2 1
→ 2

Subtraction is very similar to addition; we just have to replace Succ with pred. You can check it out yourself.

Boolean Operators

Use the Cases<T> combinator with the true and false constants to construct the logical operators:

```typescript
function not(x: boolean): boolean {
  return Cases<boolean>(x, false, true);
}

function and(x: boolean, y: boolean): boolean {
  return Cases<boolean>(x, y, false);
}

function or(x: boolean, y: boolean): boolean {
  return Cases<boolean>(x, true, y);
}
```

Based on this, you can try implementing the xor operator. Now it’s time to compare numbers. For that we’re going to reuse some of the functions we implemented so far and define isZero: Nat → Bool.
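Before moving on, here is one possible solution to the two exercises just mentioned (subtraction and xor). The concrete definitions are my own, not from the post; the primitives defined earlier are repeated so the snippet runs standalone:

```typescript
// Primitives from earlier in the post, repeated for self-containment.
const Zero = 0;
const Succ = (x: number): number => x + 1;

function Cases<T>(cond: boolean, a: T, b: T): T {
  return cond ? a : b;
}

function Rec<T>(sn: number, s: T, t: (z: number, acc: T) => T): T {
  return sn === Zero ? s : t(sn - 1, Rec(sn - 1, s, t));
}

// pred, as defined in the post.
function pred(x: number): number {
  return Rec<number>(x, Zero, (z, acc) => z);
}

// Subtraction: like add, but applying pred instead of Succ.
// The recursion runs on y, so subtract(x, y) computes x - y.
// Note this is truncated subtraction: subtract(2, 5) === 0.
function subtract(x: number, y: number): number {
  return Rec<number>(y, x, (z, acc) => pred(acc));
}

// xor via nested Cases: true exactly when the arguments differ.
function xor(x: boolean, y: boolean): boolean {
  return Cases<boolean>(x, Cases<boolean>(y, false, true), y);
}
```

The choice to recurse on y (rather than mirror add literally) makes subtract(x, y) compute x − y truncated at zero, which is the behavior the comparison operators defined next rely on.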
isZero uses the recursor: if we’re in the base case (x equals Zero) we return true, otherwise false.

```typescript
const isZero = (x: number): boolean => {
  return Rec<boolean>(x, true, (z, acc) => false);
};

function eq(x: number, y: number): boolean {
  return and(isZero(subtract(x, y)), isZero(subtract(y, x)));
}

function gt(x: number, y: number): boolean {
  return not(isZero(subtract(x, y)));
}

function lt(x: number, y: number): boolean {
  return not(isZero(subtract(y, x)));
}
```

Now, reusing the operators above, it is straightforward to define “greater than or equal to” ≥ and “less than or equal to” ≤.

Beyond Primitive Recursion

Now to our last and most interesting example. In System T we can express total computable functions using primitive recursion, but can we express the ones that are not primitive recursive?

Ackermann

The Ackermann function is one of the earliest discovered examples of a total computable function that is not primitive recursive. All primitive recursive functions are total and computable, but the Ackermann function illustrates that not all total computable functions are primitive recursive. You can find this neat visual example of its execution. Let’s try to implement it within our type system! We’ll start by defining an operator for function composition:

```typescript
type OneArityFn<T, K> = (x: T) => K;

function compose<T, K, V>(f: OneArityFn<K, V>, g: OneArityFn<T, K>): OneArityFn<T, V> {
  return x => f(g(x));
}
```

Observe that compose is a higher order function. It takes two functions f and g and returns a new function that takes an input x, applies g to it, and passes the result g(x) to f. Now, using Gödel’s recursor, let’s define a repeater function:

```typescript
function repeat<T>(f: OneArityFn<T, T>, n: number): OneArityFn<T, T> {
  return Rec<OneArityFn<T, T>>(
    n,
    x => x,
    (z, acc) => compose(f, acc));
}
```

Simply put, given a function f and a number n, repeat will invoke f on its own output n times. Think of it as composing f with itself n times.
For example, repeat(f, 3) will result in x => f(f(f(x))). This is all we need to define ackermann:

```typescript
function ackermann(x: number): OneArityFn<number, number> {
  return Rec<OneArityFn<number, number>>(
    x,
    Succ,
    (z, acc) => y => repeat(acc, y)(acc(Succ(Zero))));
}

ackermann(1)(1); // => 3
```

It turns out ackermann is, in fact, a higher-order primitive recursive function, hence the partial application. We can see in its definition that we decrement the first argument on each step (which guarantees that it’ll terminate), and based on its value we compose a new execution branch which involves a separate primitive recursor. The result is constructed via finite composition of the successor function Succ. Think of acc as an accumulated composition of the successor function.

Conclusion

Gödel’s System T has been influential in defining the Curry-Howard isomorphism, or in other words, in establishing the relationship between computer programs and mathematical proofs. We can think of a type system as a set of axioms, and of type checkers as automatic theorem provers based on these axioms. We have seen that using a language with a particular type system always comes with tradeoffs. Type systems with less expressive power reduce the number of possible programs we can write, but on the other hand provide additional safety and potential performance benefits. In some cases, a strongly typed language can be detrimental to our project’s long term success; in other cases, it might provide little to no additional value. That’s why, when picking a language for a specific task, we have to carefully consider what’s going to best serve our needs.

Further Reading and References

Full Code Reference on GitHub
Bove, Dybjer, “Dependent Types at Work”
Dowek, “Gödel’s System T as a precursor of modern type theory”
Gödel’s System T in Agda
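As a closing sketch, here is a self-contained snippet that fills in the comparisons ≥ and ≤ left to the reader and spot-checks ackermann against known values of the Ackermann function (A(1, n) = n + 2, A(2, n) = 2n + 3). The subtract helper is my own exercise solution, not part of the post:

```typescript
// Primitives and operators from the post, repeated for self-containment.
const Zero = 0;
const Succ = (x: number): number => x + 1;
function Cases<T>(cond: boolean, a: T, b: T): T { return cond ? a : b; }
function Rec<T>(sn: number, s: T, t: (z: number, acc: T) => T): T {
  return sn === Zero ? s : t(sn - 1, Rec(sn - 1, s, t));
}
const pred = (x: number): number => Rec<number>(x, Zero, (z, acc) => z);
// Exercise solution (my own): truncated subtraction, recursing on y.
const subtract = (x: number, y: number): number =>
  Rec<number>(y, x, (z, acc) => pred(acc));
const isZero = (x: number): boolean =>
  Rec<boolean>(x, true, (z, acc) => false);
const not = (x: boolean): boolean => Cases<boolean>(x, false, true);

// The comparisons left to the reader: x >= y iff y - x truncates to zero.
const geq = (x: number, y: number): boolean => isZero(subtract(y, x));
const leq = (x: number, y: number): boolean => isZero(subtract(x, y));

type OneArityFn<T, K> = (x: T) => K;
function compose<T, K, V>(f: OneArityFn<K, V>, g: OneArityFn<T, K>): OneArityFn<T, V> {
  return x => f(g(x));
}
function repeat<T>(f: OneArityFn<T, T>, n: number): OneArityFn<T, T> {
  return Rec<OneArityFn<T, T>>(n, x => x, (z, acc) => compose(f, acc));
}
function ackermann(x: number): OneArityFn<number, number> {
  return Rec<OneArityFn<number, number>>(
    x, Succ, (z, acc) => y => repeat(acc, y)(acc(Succ(Zero))));
}

// Spot checks: ackermann(1)(1) === 3, ackermann(2)(2) === 7.
```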
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism': the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse:

Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that $$ g \circ f = 1_x \quad \textrm{ and } \quad f \circ g = 1_y. $$

I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\).

Puzzle 140. Prove that any morphism has at most one inverse.

Puzzle 141. Give an example of a morphism in some category that has more than one left inverse.

Puzzle 142. Give an example of a morphism in some category that has more than one right inverse.

Now we're ready for isomorphisms!

Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse.

Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\).

Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)?
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism.

We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\).

Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic.

Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\).

Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions.

Puzzle 145.
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto.

Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\).

So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them. One more example:

Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism.

This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \quad \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \quad \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle.

Puzzle 147.
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples!

Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
Since the fppf cohomology group ${\rm{H}}^1(F, \alpha_p) = F/F^p$ is visibly uncountable (where $p = {\rm{char}}(F) > 0$), perhaps you meant to assume $G$ is smooth (and then fppf cohomology coincides with etale cohomology, which in turn coincides with Galois cohomology as in the title of the question). The cohomology set can be infinite even when $G$ is (smooth and) connected and commutative: see Example 11.3.3 in the book "Pseudo-reductive groups". But it is always countable for smooth $G$. Indeed, any right $G$-torsor $E$ over $F$ is $F$-smooth (since $G$ is) and hence is split by a finite separable extension $F'/F$ (by the Zariski-local structure theorem for smooth schemes over a field, or by more hands-on means), so it suffices to show that there are only countably many such $F'/F$ and that for each such $F'$ the set ${\rm{H}}^1(F'/F,G)$ of (isom. classes of) right $G$-torsors split by $F'/F$ is countable. In fact, each ${\rm{H}}^1(F'/F,G)$ is finite (so the infinitude for smooth affine $G$ is really caused by lack of control on splitting fields of torsors), but this is very hard to prove in general (see below), so let me first prove it is countable by more elementary means. To check that $F$ has only countably many separable extensions $F'$ of degree below any specific bound we can apply the usual Krasner argument (as the space of separable Eisenstein polynomials of a given degree, while generally non-compact, has a countable base of open sets). Now fix a finite separable extension $F'/F$ and consider the inclusion $G \hookrightarrow \mathscr{G} := {\rm{R}}_{F'/F}(G_{F'})$ of $F$-groups (using Weil restriction from $F'$ down to $F$). We are interested in the kernel of the induced map on ${\rm{H}}^1$'s (as this is identified with the restriction map ${\rm{H}}^1(F,G) \rightarrow {\rm{H}}^1(F',G)$ due to the non-abelian Shapiro Lemma). By Corollary 1 to Prop. 
36 in section 5.4 of Chapter I of Serre's "Galois cohomology" book, this kernel is identified with the quotient set $\mathscr{G}(F)\backslash X(F)$ where $X$ is the smooth coset space $\mathscr{G}/G$ equipped with its natural left $\mathscr{G}$-action. But the natural quotient mapping $\mathscr{G} \rightarrow X$ is a smooth morphism (hence surjective on tangent spaces at $F$-points of the source), so likewise for the orbit map $\mathscr{G} \rightarrow X$ through any $x_0 \in X(F)$. Hence, by the $F$-analytic implicit function theorem it follows that the induced map on $F$-points $\mathscr{G}(F) \rightarrow X(F)$ defined by $g \mapsto g.x_0$ has open image, so all $\mathscr{G}(F)$-orbits in $X(F)$ are open. Since $X(F)$ has a countable base for its topology, we conclude that the space $\mathscr{G}(F)\backslash X(F)$ of $\mathscr{G}(F)$-orbits in $X(F)$ is countable. The preceding countability proof was of elementary nature. To prove that each ${\rm{H}}^1(F'/F,G)$ is finite, we first note that this is elementary if $G$ is finite (etale). Indeed, it is harmless to increase $F'$ to be a Galois extension that splits $G$, and then this ${\rm{H}}^1$ coincides with ${\rm{H}}^1({\rm{Gal}}(F'/F),G(F'))$ where both the Galois group and the coefficient group are finite, so finiteness is clear in such cases. In general, we may and do increase $F'$ so that it splits the finite etale $G/G^0$ and so that the natural map $G(F') \rightarrow (G/G^0)(F')$ is surjective. Thus, we have an exact sequence of pointed sets $${\rm{H}}^1(F'/F,G^0) \rightarrow {\rm{H}}^1(F'/F,G) \rightarrow {\rm{H}}^1(F'/F,G/G^0),$$ where the ${\rm{H}}^1$'s in this diagram are for the Galois group ${\rm{Gal}}(F'/F)$ with coefficients in the groups of $F'$-points of $G^0$, $G$, and $G/G^0$ respectively. By the known finiteness of the final term, it suffices to prove finiteness of the fiber through each point of the middle term.
By the "twisting" method in Galois cohomology and the canonicity of the identity component (including its compatibility with ground field extension), each fiber is identified with an analogous "kernel" at the cost of replacing $G$ with an $F'/F$-form. In this way, the proof of finiteness of ${\rm{H}}^1(F'/F,G)$ is reduced to the case when the smooth $G$ is connected. As the OP noted, in the connected reductive case the finiteness is known (as even ${\rm{H}}^1(F,G)$ is finite in such cases, ultimately by deep results of Bruhat-Tits to prove vanishing in the simply connected semisimple case, with the general connected reductive case reducing to this via finiteness of $n$-torsion in Brauer groups of local fields and finiteness of degree-1 Galois cohomology of tori, which in turn rests on duality theorems and local class field theory). In the general smooth connected affine case one has to use the structure theory of pseudo-reductive groups to reduce the problem separately to the connected reductive case over finite (possibly inseparable) extensions of $F$ and to the connected solvable case (where one has to use Tits' structure theory of wound unipotent groups to treat the unipotent case). This final part of the argument is given in section 7.1 of the paper "Finiteness theorems for algebraic groups over function fields" in Compositio Math. 148 (2012) (see Prop. 7.1.2). In that section one also finds a finiteness result generalizing the stronger result in the connected reductive case, namely that if $G$ is pseudo-reductive and generated by its maximal $F$-tori (e.g., pseudo-reductive and perfect) then ${\rm{H}}^1(F,G)$ is finite.
Journal of Symbolic Logic, Volume 48, Issue 3 (1983), 529–538.

Classifying Positive Equivalence Relations

Abstract. Given two (positive) equivalence relations $\sim_1, \sim_2$ on the set $\omega$ of natural numbers, we say that $\sim_1$ is $m$-reducible to $\sim_2$ if there exists a total recursive function $h$ such that for every $x, y \in \omega$, we have $x \sim_1 y$ iff $hx \sim_2 hy$. We prove that the equivalence relation induced in $\omega$ by a positive precomplete numeration is complete with respect to this reducibility (and, moreover, a "uniformity property" holds). This result allows us to state a classification theorem for positive equivalence relations (Theorem 2). We show that there exist nonisomorphic positive equivalence relations which are complete with respect to the above reducibility; in particular, we discuss the provable equivalence of a strong enough theory: this relation is complete with respect to the reducibility, but it does not correspond to a precomplete numeration. From this fact we deduce that an equivalence relation on $\omega$ can be strongly represented by a formula (see Definition 8) iff it is positive. At last, we interpret the situation from a topological point of view. Among other things, we generalize a result of Visser by showing that the topological space corresponding to a partition into e.i. sets is irreducible, and we prove that the set of equivalence classes of true sentences is dense in the Lindenbaum algebra of the theory.

Citation: Bernardi, Claudio; Sorbi, Andrea. Classifying Positive Equivalence Relations. J. Symbolic Logic 48 (1983), no. 3, 529–538. MR716612; Zbl 0528.03030.

First available in Project Euclid: 6 July 2007. Permanent link to this document:
https://projecteuclid.org/euclid.jsl/1183741310
Definition: Right Angle

Definition

In the words of Euclid: "When a straight line set up on a straight line makes the adjacent angles equal to one another, each of the equal angles is right, and the straight line standing on the other is called a perpendicular to that on which it stands."

The measurement of a right angle is $\dfrac{180^\circ} 2 = 90^\circ$ or $\dfrac \pi 2$ radians.

In the above diagram, the line $CD$ has been constructed so as to be a perpendicular to the line $AB$.
Given a convex hexagon $ABCDEF$. All its sides are equal (it can be irregular). Furthermore, $AD = BE = CF$. How can I prove that a circle can be inscribed in this hexagon?

Let $O$ be the intersection point of diagonals $AD$ and $BE$ (see diagram below) and set $a=OB$, $b=OD$, $d=AD=BE$, $\theta=\angle AOB=\angle DOE$. Then by the cosine rule we have: $$ a^2+(d-b)^2-2a(d-b)\cos\theta=b^2+(d-a)^2-2b(d-a)\cos\theta, $$ which simplifies to: $$ (b-a)(1-\cos\theta)=0, \quad\hbox{that is:}\quad a=b. $$ It follows that $OBC\cong ODC$ and $OAF\cong OEF$, so that $\angle COB = \angle COD = \angle AOF = \angle FOE = (\pi-\theta)/2$ and the points $C$, $O$, $F$ are collinear. By repeating the same argument given above for diagonals $AD$ and $CF$ we get $OF=a$, $OC=d-a$. It follows that all six triangles in the diagram are congruent to each other, and in particular they all have the same altitude from their common vertex $O$. A circle with center $O$ and that common altitude as radius will therefore touch all six red sides of the hexagon. Notice that if $a\ne d/2$ the hexagon is NOT regular. All six angles with vertex at $O$ are however congruent, and thus each measures 60°.

EDIT. One could prove that $a=b$ even without trigonometry. Triangles $ABO$ and $EDO$ have congruent angles at vertex $O$ and congruent sides opposite those angles. In addition, the other two sides of each triangle have the same difference (both equal to $d-a-b$). It is well known that if two triangles have the same base $PQ$ and congruent angles opposite to $PQ$, then the third vertex of both triangles lies on the arc of a well defined circle having $PQ$ as a chord. Moreover, if the difference of the distances of the third vertex from $P$ and $Q$ is the same for both triangles, then the third vertex must also belong to a well defined hyperbola having $P$ and $Q$ as foci. This third vertex is then one of the two intersections between circle arc and hyperbola, and both possible triangles are congruent to each other.

Lemma.
Diagonal $BE$ is the angle bisector of both angles $\angle \, ABC$ and $\angle \, DEF$; (1)

Diagonal $CF$ is the angle bisector of both angles $\angle \, BCD$ and $\angle \, EFA$; (2)

Diagonal $AD$ is the angle bisector of both angles $\angle \, FAB$ and $\angle \, CDE$; (3)

Finally, the three diagonals $AD, \, BE$ and $CF$ intersect at a common point $I$.

Proof. Look at quadrilateral $ACDF$. Reflect point $A$ in the line $DF$ and let $A^*$ be the symmetric image of $A$ with respect to $DF$. Then $$ \angle \, DA^*F = \angle \, DAF $$ as well as $FA^* = FA = CD$ and $A^*D = AD = CF$. Since $A^*D = CF$ and $FA^*=CD$, the quadrilateral $FCDA^*$ is a parallelogram and thus $$\angle \,DCF = \angle \, DA^*F = \angle \, DAF.$$ Therefore quadrilateral $ACDF$ is inscribed in a circle, and since $FA = CD$, the quadrilateral $ACDF$ is in fact an isosceles trapezoid with $AC$ parallel to $DF$. For that reason, the parallel segments $AC$ and $DF$ have a common orthogonal bisector, and since $BA = BC$ and $ED = EF$, the points $B$ and $E$ must lie on that orthogonal bisector, i.e. the line $BE$ is the orthogonal bisector of the segments $AC$ and $DF$ simultaneously. However, since triangles $ABC$ and $DEF$ are isosceles, the orthogonal bisector $BE$ is at the same time the angle bisector of both angles $\angle \, ABC$ and $\angle \, DEF$. Observe that since the line $BE$ is the orthogonal bisector of the two parallel sides $AC$ and $DF$ of the isosceles trapezoid $ACDF$, its two diagonals $AD$ and $CF$ intersect in a common point which lies on $BE$. The rest of the lemma follows analogously.

Completing the proof: Observe that by the Lemma, the lines $AD, \, BE$ and $CF$ all meet at a common point, which we denote by $I$. Consequently, again by the Lemma, $I$ is the common intersection point of all angle bisectors of the angles at the vertices of the hexagon $ABCDEF$. Therefore, there is a circle inscribed in the hexagon with point $I$ as incenter. As a consequence we get that $IA=IC=IE$ and $IB = ID = IF$.
In a question, I have to find the acceleration of a fluid parcel in a steady line vortex. I am given that $u_\theta=\frac{A_0}{r}$. For a steady line vortex the parcels follow circular paths, so in cylindrical coordinates $u_r=u_z=0$, and the acceleration comes from the Lagrangian derivative $$\frac{D\vec{u}}{Dt}=\frac{\partial\vec{u}}{\partial{t}}+(\vec{u}\cdot\vec{\nabla})\vec{u}.$$ Evaluating this for the velocity given above gives a zero acceleration, as the only term from $\vec{u}\cdot\vec{\nabla}$ that gives a non-zero derivative is $\partial/\partial{r}$, but then $u_r=0$ kills it. However, this doesn't ring true, since just considering circular motion gives $\vec{a}=-\frac{u_\theta^2}{r}\hat{r}=-\frac{A_0^2}{r^3}\hat{r}$. If I use the identity $(\vec{u}\cdot\vec{\nabla})\vec{u}=\vec{\omega}\times\vec{u}+\vec{\nabla}\left(\frac{\vec{u}\cdot\vec{u}}{2}\right)$, then it can be shown that the vorticity is zero and the expected acceleration is obtained. How can this not work when evaluating the acceleration before the identity is used!? Three of us have puzzled over this and got nowhere!
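For reference, the discrepancy above is usually traced to the component form of the convective term: in cylindrical coordinates the unit vectors \(\hat{r}\) and \(\hat{\theta}\) themselves depend on \(\theta\), so the radial component of \((\vec{u}\cdot\vec{\nabla})\vec{u}\) carries a centripetal contribution that a naive term-by-term evaluation misses:

```latex
\left[(\vec{u}\cdot\vec{\nabla})\vec{u}\right]_r
  = u_r\frac{\partial u_r}{\partial r}
  + \frac{u_\theta}{r}\frac{\partial u_r}{\partial \theta}
  + u_z\frac{\partial u_r}{\partial z}
  - \frac{u_\theta^2}{r}
\quad\longrightarrow\quad
  -\frac{u_\theta^2}{r} = -\frac{A_0^2}{r^3}
\quad\text{when } u_r = u_z = 0,\ u_\theta = \frac{A_0}{r},
```

which reproduces the circular-motion result \(\vec{a}=-\frac{u_\theta^2}{r}\hat{r}\).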
Normal modes are used to describe the different vibrational motions in molecules. Each mode can be characterized by a different type of motion, and each mode has a certain symmetry associated with it. Group theory is a useful tool for determining which symmetries the normal modes contain and for predicting whether these modes are IR and/or Raman active. Consequently, IR and Raman spectroscopy are often used to measure vibrational spectra.

Degrees of Freedom

In general, a normal mode is an independent motion of atoms in a molecule that occurs without causing movement of any of the other modes. Normal modes, as implied by their name, are orthogonal to each other. In order to discuss the quantum-mechanical equations that govern molecular vibrations, it is convenient to convert Cartesian coordinates into so-called normal coordinates. Vibrations in polyatomic molecules are represented by these normal coordinates. A molecule can have three types of degrees of freedom and a total of 3N degrees of freedom, where N equals the number of atoms in the molecule. These degrees of freedom can be broken down into three categories.

Translational: These are the simplest of the degrees of freedom. They entail the movement of the entire molecule's center of mass. This movement can be completely described by three orthogonal vectors and thus contains 3 degrees of freedom.

Rotational: These are rotations around the center of mass of the molecule, and like the translational movement they can be completely described by three orthogonal vectors. This again means that this category contains only 3 degrees of freedom. However, in the case of a linear molecule only two degrees of freedom are present, because rotation about the molecular axis has a negligible moment of inertia.

Vibrational: These are any other types of movement not assigned to rotational or translational movement, and thus there are 3N - 6 vibrational degrees of freedom for a nonlinear molecule and 3N - 5 for a linear molecule.
These vibrations include bending, stretching, wagging and many other aptly named internal movements of a molecule. These various vibrations arise from the numerous combinations of different stretches, contractions, and bends that can occur between the bonds of atoms in the molecule.

                      Total DOF   Translational   Rotational   Vibrational
Nonlinear molecules   3N          3               3            3N - 6
Linear molecules      3N          3               2            3N - 5

Each of these degrees of freedom is able to store energy. However, in the case of rotational and vibrational degrees of freedom, energy can only be stored in discrete amounts, due to the quantized energy levels described by quantum mechanics. In the case of rotations, the energy stored depends on the rotational inertia of the gas along with the corresponding quantum number describing the energy level. Example \(\PageIndex{1}\): Ethane vs. Carbon Dioxide Ethane, \(C_2H_6\), has eight atoms (\(N=8\)) and is a nonlinear molecule, so of the \(3N=24\) degrees of freedom, three are translational and three are rotational. The remaining 18 degrees of freedom are internal (vibrational). This is consistent with: \[3N - 6 = 3(8) - 6 = 18\] Carbon dioxide, \(CO_2\), has three atoms (\(N=3\)) and is a linear molecule, so of the \(3N=9\) degrees of freedom, three are translational and two are rotational. The remaining 4 degrees of freedom are vibrational. This is consistent with: \[3N - 5 = 3(3) - 5 = 4\] The normal modes of vibration are: asymmetric, symmetric, wagging, twisting, scissoring, and rocking for polyatomic molecules. Figure \(\PageIndex{1}\): Six types of vibrational modes (symmetrical stretching, asymmetrical stretching, wagging, twisting, scissoring, and rocking). Taken from http://en.wikipedia.org/wiki/Infrared_spectroscopy with permission from copyright holder.
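The counting rule above is easy to encode; a minimal sketch (Python is an arbitrary choice here):

```python
def vibrational_dof(n_atoms, linear=False):
    """Number of vibrational degrees of freedom: 3N - 6 (nonlinear) or 3N - 5 (linear)."""
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_dof(8))               # ethane, C2H6 -> 18
print(vibrational_dof(3, linear=True))  # carbon dioxide, CO2 -> 4
```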
Normal Modes If there is no external field present, the energy of a molecule does not depend on the position of its center of mass (its translational degrees of freedom) nor on its orientation in space (its rotational degrees of freedom). The potential energy of the molecule is therefore made up of its vibrational degrees of freedom only, of which there are \(3N-6\) (or \(3N-5\) for linear molecules). The difference in potential energy is given by: \[ \begin{align} \Delta V &= V(q_1,q_2,q_3,...,q_n) - V(0,0,0,...,0) \label{1} \\[4pt] &= \dfrac{1}{2} \sum_{i=1}^{N_{vib}} \sum_{j=1}^{N_{vib}} \left(\dfrac{\partial^2 V}{\partial q_i\partial q_j} \right) q_iq_j \label{2} \\[4pt] &= \dfrac{1}{2}\sum_{i=1}^{N_{vib}} \sum_{j=1}^{N_{vib}} f_{ij} q_iq_j \label{3} \end{align}\] where \(q\) represents a displacement from equilibrium and \(N_{vib}\) the number of vibrational degrees of freedom. For simplicity, the anharmonic terms are neglected in this equation (i.e., higher-order terms are ignored). A theorem of classical mechanics states that the cross terms can be eliminated from the above equation (the details of the theorem are very complex and will not be discussed here). By using matrix algebra, a new set of coordinates \(\{Q_j\}\) can be found such that \[\Delta{V} = \dfrac{1}{2} \sum_{j=1}^{N_{vib}}{F_jQ_j^2} \label{4}\] Note that there are no cross terms in this new expression. These new coordinates are called normal coordinates or normal modes. With these new normal coordinates in hand, the Hamiltonian operator for vibrations can be written as follows: \[\hat{H}_{vib} = -\sum_{j=1}^{N_{vib}} \dfrac{\hbar^2}{2\mu_j} \dfrac{d^2}{dQ_j^2} + \dfrac{1}{2} \sum_{j=1}^{N_{vib}}F_jQ_j^2 \label{5}\] The total wavefunction is a product of the individual wavefunctions and the energy is the sum of the independent energies.
This leads to: \[ \hat{H}_{vib} = \sum_{j=1}^{N_{vib}} \hat{H}_{vib,j} = \sum_{j=1}^{N_{vib}} \left( -\dfrac{\hbar^2}{2 \mu_j}\dfrac{d^2}{dQ_j^2} + \dfrac{1}{2} F_jQ_j^2 \right) \label{6}\] and the wavefunction is then \[ \psi_{vib}(Q_1, Q_2, \ldots, Q_{N_{vib}}) = \psi_{vib,1}(Q_1)\, \psi_{vib,2}(Q_2)\, \psi_{vib,3}(Q_3) \cdots \psi_{vib,N_{vib}}(Q_{N_{vib}}) \label{7}\] and the total vibrational energy of the molecule is \[E_{vib} = \sum_{j=1}^{N_{vib}} h\nu_j \left (v_j + \dfrac{1}{2}\right) \label{8}\] where \(v_j= 0,1,2,3...\) The consequence of the result stated in the above equations is that each vibrational mode can be treated within the harmonic oscillator approximation. There are \(N_{vib}\) harmonic oscillators corresponding to the total number of vibrational modes present in the molecule. Pictorial description of normal coordinates using CO The normal coordinate \(q\) is used to follow the path of a normal mode of vibration. As shown in Figure \(\PageIndex{2}\), the displacement of the C atom, denoted by \(\Delta r(\text{C})\), and the displacement of the O atom, denoted by \(\Delta r(\text{O})\), occur at the same frequency. The displacement of each atom is measured from its equilibrium position in the ground vibrational state, \(r_0\). Contributors Kristin Kowolik, University of California, Davis
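The total vibrational energy formula (Equation 8) is a simple sum over independent modes; a minimal sketch (the mode frequencies below are illustrative placeholders, not measured values):

```python
h = 6.62607015e-34  # Planck constant, J s

def vibrational_energy(freqs_hz, quanta):
    """E_vib = sum_j h * nu_j * (v_j + 1/2) for independent harmonic modes."""
    return sum(h * nu * (v + 0.5) for nu, v in zip(freqs_hz, quanta))

# Hypothetical molecule with three modes, all in the ground state (v_j = 0):
freqs = [4.0e13, 2.0e13, 2.0e13]
zero_point = vibrational_energy(freqs, [0, 0, 0])  # zero-point energy: half a quantum per mode
print(zero_point)
```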
A little bit of background: As stated in your notes, this equation is obtained using the Fraunhofer diffraction formula (which is an approximation to the more general Rayleigh-Sommerfeld formula). According to the Fraunhofer diffraction integral, given the transmittance function of the diffractive screen:$$U(x',y')=\cases{1\quad\quad\text{on the aperture}\\0\quad \quad\quad \text{otherwise}}$$ the diffracted field observed on a plane located at a distance $z$ from the screen would be$$U_O(x,y) = \frac{e^{jkz}e^{j\frac{k}{2z}(x^2+y^2)}}{j\lambda z}\mathcal F\{U(x',y')\}|_{(f_x,f_y) }$$which is just the Fourier transform of the transmittance function, $\mathcal F\{U(x',y')\}$, calculated at the frequencies $(f_x,f_y)=(\dfrac{x}{\lambda z},\dfrac{y}{\lambda z})$, multiplied by some factor. Therefore, within the scope of validity of the Fraunhofer (or far-field) approximation, you just have to Fourier transform the transmittance function in order to find the diffraction pattern. In your problem, we have a single slit with a width of $w_x$, illuminated by a normally incident plane wave with amplitude $A$ and intensity $A^2$. Therefore, the transmittance function would be the rectangular function $\mathrm {rect}(\dfrac{x'}{w_x})$. Looking up its Fourier transform in some table, we find the diffracted field using the above formula:$$U(x) = A\frac{e^{jkz}e^{j\frac{k}{2z}x^2}}{j\lambda z}\times w_x\mathrm {sinc}(\frac{w_xx}{\lambda z})$$ and for the intensity:$$\boxed{I(x)=|U(x)|^2=A^2\frac{w_x^2}{\lambda ^2 z^2}\mathrm {sinc}^2(\frac{w_xx}{\lambda z})}$$ where $\mathrm {sinc}(x) = \frac{\sin (\pi x)}{\pi x}$. This is essentially the same formula as in your question, because in the Fraunhofer approximation you can use $\sin\theta\approx \tan \theta= \dfrac{x}{z}$. Therefore, $I_0$ in your question would be $A^2\dfrac{w_x^2}{\lambda ^2 z^2}$.
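A short numeric sketch of the boxed result (the values of $A$, $w_x$, $\lambda$ and $z$ are arbitrary illustrative choices; note that `np.sinc` already uses the $\sin(\pi x)/(\pi x)$ convention above):

```python
import numpy as np

# Illustrative values: amplitude, slit width (m), wavelength (m), screen distance (m)
A, wx, lam, z = 1.0, 1.0e-4, 500e-9, 1.0

def intensity(x):
    """Single-slit Fraunhofer intensity I(x) = A^2 wx^2/(lam^2 z^2) sinc^2(wx x / (lam z))."""
    return A**2 * wx**2 / (lam**2 * z**2) * np.sinc(wx * x / (lam * z))**2

I0 = intensity(0.0)          # central maximum, A^2 wx^2 / (lam^2 z^2)
first_zero = lam * z / wx    # first null of the pattern
print(I0, intensity(first_zero))
```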
I was trying to understand the final section of the paper "Revisiting Baselines for Visual Question Answering". The authors state that their model performs better with a binary loss in comparison to a softmax loss. I wonder what a binary loss actually means in this case? I think the softmax loss is the same thing as binary cross-entropy. Could someone explain what exactly a binary loss is? Thanks. There is a nice explanation here: Binary Cross-Entropy Loss is also called Sigmoid Cross-Entropy loss. It is a Sigmoid activation plus a Cross-Entropy loss. Unlike Softmax loss, it is independent for each vector component (class), meaning that the loss computed for every vector component is not affected by other component values. The term binary stands for number of classes = 2. I think that the binary loss is the one based on Shannon entropy, $-\sum_ip_i\ln p_i$, while the softmax is based on the Boltzmann distribution: $\frac{e^{z_i}}{\sum_j e^{z_j}}$ Softmax itself shouldn't be a loss function though. It's the probability-like function that you use to pick the output of the classifier. For instance, you could use it to calculate probabilities $\hat p_{ij}$ of classifier outcomes for categories $i$ for sample $j$. Then you can use the entropy-based loss function to evaluate the fit: $-\sum_ip_{ij}\ln\hat p_{ij}$, where $p_{ij}$ is a binary outcome of a category $i$.
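To make the distinction concrete, here is a minimal numeric sketch (not the paper's actual implementation): the binary (sigmoid) loss treats each class as an independent yes/no decision, while the softmax loss couples all the classes into a single distribution:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_loss(logits, targets):
    """Sigmoid cross-entropy, summed over classes: each class is an independent yes/no decision."""
    p = sigmoid(logits)
    return -np.sum(targets * np.log(p) + (1 - targets) * np.log(1 - p))

def softmax_loss(logits, target_index):
    """Softmax cross-entropy: classes compete, probabilities sum to 1."""
    z = logits - logits.max()                 # stabilize the exponentials
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target_index]

logits = np.array([2.0, -1.0, 0.5])
print(binary_loss(logits, np.array([1.0, 0.0, 0.0])))
print(softmax_loss(logits, 0))
```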
Given normally distributed integers with a mean of 0 and a standard deviation $\sigma$ around 1000, how do I compress those numbers (almost) perfectly? Given the entropy of the Gaussian distribution, ...

A few days ago this appeared on HN: http://www.patrickcraig.co.uk/other/compression.htm. This refers to a challenge from 2001, where someone was offering a prize of \$5000 for any kind of reduction to ...

I'm trying to measure how much non-redundant (actual) information my file contains. Some call this the amount of entropy. Of course there is the standard $p(x) \log p(x)$, but I think that Shannon was ...

Suppose that $\Omega$ is a finite probability space, with measure $P$ (we can take $P$ uniform). Let $C \in \{\pm 1 \}$ be a random variable on $\Omega$, the classifier. Let $H(C) = H(C, \Omega, P) = ...

I have a generator of files with approximately 7 bits/byte entropy. The files are about 30KB each in length. I'd like to use these as sources of entropy to generate random numbers. Theoretically I ...
I will assume the $X_i$ independent in this post. To get a feel for the problem, recall the special case of the uniform distribution, corresponding to $\alpha=1$, i.e. $\mathrm{Beta}(1,1) \stackrel{d}{=} U(0,1)$. The sum of $n$ iid uniform variables was studied by J.O. Irwin and P. Hall, and the result is known as the Irwin-Hall distribution, aka the uniform sum distribution. Already for $n=3$ the density of the sum of three standard uniform variables approximates the normal quite well. The same approximation will work well for larger values of $n$ in your case as well. To write it out we need to compute the mean and variance of the sum:$$ \mu_n = \mathbb{E}\left(\sum_{k=1}^n X_k\right) = n \frac{\alpha}{\alpha+1} \qquad \sigma_n^2 = \mathbb{Var}\left(\sum_{k=1}^n X_k\right) = \sum_{k=1}^n \mathbb{Var}(X_k) = \frac{n \alpha}{\alpha+2} \frac{1}{(\alpha+1)^2}$$ Thus the quantile function approximation is:$$ Q_n(q) \approx n \frac{\alpha}{\alpha+1} + \frac{1}{\alpha+1} \sqrt{ \frac{n \alpha}{\alpha+2} } Q_{\mathcal{N}(0,1)}(q)$$ For $n=2$ the CDF can be worked out exactly, and can be inverted using numerical algorithms. Added: The normal approximation can be truncated to the $(0,n)$ interval to improve accuracy:$$ Q_{Y_n}(q) = \mu_n + \sigma_n Q_{N(0,1)}( (1-q) \Phi(-\mu_n/\sigma_n) + q \Phi((n-\mu_n)/\sigma_n) )$$
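As a quick Monte Carlo sanity check of the quantile approximation (the seed and sample size are arbitrary choices, and $X_i \sim \mathrm{Beta}(\alpha, 1)$ is assumed, consistent with the mean and variance above), a sketch for the uniform case $\alpha = 1$:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n, alpha = 10, 1.0                     # alpha = 1: Beta(1, 1) = U(0, 1), the Irwin-Hall case

mu = n * alpha / (alpha + 1)
sigma = np.sqrt(n * alpha / (alpha + 2)) / (alpha + 1)

def q_approx(q):
    """Normal approximation Q_n(q) = mu_n + sigma_n * Q_{N(0,1)}(q)."""
    return mu + sigma * NormalDist().inv_cdf(q)

# Empirical quantiles of the sum, from simulation
sums = rng.beta(alpha, 1.0, size=(200_000, n)).sum(axis=1)
print(q_approx(0.5), np.quantile(sums, 0.5))   # both close to n/2 = 5 for alpha = 1
print(q_approx(0.9), np.quantile(sums, 0.9))
```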
Differential and Integral Equations, Volume 20, Number 12 (2007), 1423-1433. Positive solutions for a class of infinite semipositone problems Abstract We analyze the positive solutions to the singular boundary value problem \[-\Delta u = \lambda[ f(u)-1/u^{\alpha}]; \quad x \in \Omega\] \[u = 0; \quad x \in \partial\Omega,\] where $f$ is a $C^2$ function in $(0,\infty)$, $f(0)\geq 0$, $f'>0$, $\lim_{s\rightarrow\infty}\frac{f(s)}{s}=0$, $\lambda$ is a positive parameter, $\alpha \in (0,1)$ and $\Omega$ is a bounded region in $R^{n}$, $n \geq 1$, with $C^{2+\gamma}$ boundary for some $\gamma \in (0,1)$. In the case $n=1$ we use the quadrature method, and for $n>1$ we use the method of sub-super solutions to establish our results. Article information Source Differential Integral Equations, Volume 20, Number 12 (2007), 1423-1433. Dates First available in Project Euclid: 20 December 2012 Permanent link to this document https://projecteuclid.org/euclid.die/1356039073 Mathematical Reviews number (MathSciNet) MR2377025 Zentralblatt MATH identifier 1212.35129 Subjects Primary: 35J60: Nonlinear elliptic equations Secondary: 35J25: Boundary value problems for second-order elliptic equations Citation Ramaswamy, Mythily; Shivaji, R.; Ye, Jinglong. Positive solutions for a class of infinite semipositone problems. Differential Integral Equations 20 (2007), no. 12, 1423--1433. https://projecteuclid.org/euclid.die/1356039073
Prediction and Forecasting Yes, you are correct: when you view this as a problem of prediction, a Y-on-X regression will give you a model such that, given an instrument measurement, you can make an unbiased estimate of the accurate lab measurement without doing the lab procedure. Put another way, if you are just interested in $E[Y|X]$ then you want Y-on-X regression. This may seem counter-intuitive because the error structure is not the "real" one. Assuming that the lab method is a gold-standard, error-free method, then we "know" that the true data-generating model is $X_i = \beta Y_i + \epsilon_i$ where the $Y_i$ and $\epsilon_i$ are independent and identically distributed, with $E[\epsilon_i]=0$. We are interested in getting the best estimate of $E[Y_i|X_i]$. Because of our independence assumption we can rearrange the above: $Y_i = \frac{X_i - \epsilon_i}{\beta}$ Now, taking expectations given $X_i$ is where things get hairy: $E[Y_i|X_i] = \frac{1}{\beta} X_i - \frac{1}{\beta} E[\epsilon_i|X_i]$ The problem is the $E[\epsilon_i|X_i]$ term - is it equal to zero? It doesn't actually matter, because you can never see it, and we are only modelling linear terms (the argument extends up to whatever terms you are modelling). Any dependence between $\epsilon$ and $X$ can simply be absorbed into the constant we are estimating. Explicitly, without loss of generality we can let $\epsilon_i = \gamma X_i + \eta_i$ where $E[\eta_i|X] = 0$ by definition, so that we now have $Y_i = \frac{1}{\beta} X_i - \frac{\gamma}{\beta} X_i - \frac{1}{\beta} \eta_i$ $Y_i = \frac{1-\gamma}{\beta} X_i - \frac{1}{\beta} \eta_i $ which satisfies all the requirements of OLS, since $\eta$ is now exogenous. It doesn't matter in the slightest that the error term also contains a $\beta$, since neither $\beta$ nor $\sigma$ is known anyway and must be estimated.
We can therefore simply replace those constants with new ones and use the normal approach: $Y_i = {\alpha} X_i + \eta_i $ Notice that we have NOT estimated the quantity $\beta$ that I originally wrote down - we have built the best model we can for using X as a proxy for Y. Instrument Analysis The person who set you this question clearly didn't want the answer above, since they say X-on-Y is the correct method, so why might they have wanted that? Most likely they were considering the task of understanding the instrument. As discussed in Vincent's answer, if you want to know how the instrument behaves, then X-on-Y is the way to go. Going back to the first equation above: $X_i = \beta Y_i + \epsilon_i$ The person setting the question could have been thinking of calibration. An instrument is said to be calibrated when it has expectation equal to the true value - that is, $E[X_i|Y_i] = Y_i$. Clearly, in order to calibrate $X$ you need to find $\beta$, and so to calibrate an instrument you need to do X-on-Y regression. Shrinkage Calibration is an intuitively sensible requirement of an instrument, but it can also cause confusion. Notice that even a well-calibrated instrument will not be showing you the expected value of $Y$! To get $E[Y|X]$ you still need to do the Y-on-X regression, even with a well-calibrated instrument. This estimate will generally look like a shrunk version of the instrument value (remember the $\gamma$ term that crept in). In particular, to get a really good estimate of $E[Y|X]$ you should include your prior knowledge of the distribution of $Y$. This then leads to concepts such as regression to the mean and empirical Bayes. Example in R One way to get a feel for what is going on here is to make some data and try the methods out. The code below compares X-on-Y with Y-on-X for prediction and calibration, and you can quickly see that X-on-Y is no good for the prediction model but is the correct procedure for calibration.
library(data.table)
library(ggplot2)

N = 100
beta = 0.7
c = 4.4

DT = data.table(Y = rt(N, 5), epsilon = rt(N, 8))
DT[, X := beta*Y + c + epsilon]

YonX = DT[, lm(Y~X)]  # Y = alpha_1 X + alpha_0 + eta
XonY = DT[, lm(X~Y)]  # X = beta_1 Y + beta_0 + epsilon

YonX.c = YonX$coef[1]  # c = alpha_0
YonX.m = YonX$coef[2]  # m = alpha_1

# For X on Y we will need to rearrange after the fit.
# Fitting model X = beta_1 Y + beta_0
# Y = X/beta_1 - beta_0/beta_1
XonY.c = -XonY$coef[1]/XonY$coef[2]  # c = -beta_0/beta_1
XonY.m = 1.0/XonY$coef[2]            # m = 1/beta_1

ggplot(DT, aes(x = X, y = Y)) + geom_point() +
  geom_abline(intercept = YonX.c, slope = YonX.m, color = "red") +
  geom_abline(intercept = XonY.c, slope = XonY.m, color = "blue")

# Generate a fresh sample
DT2 = data.table(Y = rt(N, 5), epsilon = rt(N, 8))
DT2[, X := beta*Y + c + epsilon]
DT2[, YonX.predict := YonX.c + YonX.m * X]
DT2[, XonY.predict := XonY.c + XonY.m * X]

cat("YonX sum of squares error for prediction: ", DT2[, sum((YonX.predict - Y)^2)])
cat("XonY sum of squares error for prediction: ", DT2[, sum((XonY.predict - Y)^2)])

# Generate lots of samples at the same Y
DT3 = data.table(Y = 4.0, epsilon = rt(N, 8))
DT3[, X := beta*Y + c + epsilon]
DT3[, YonX.predict := YonX.c + YonX.m * X]
DT3[, XonY.predict := XonY.c + XonY.m * X]

cat("Expected value of X at a given Y (calibrated using YonX) should be close to 4: ", DT3[, mean(YonX.predict)])
cat("Expected value of X at a gievn Y (calibrated using XonY) should be close to 4: ", DT3[, mean(XonY.predict)])

ggplot(DT3) +
  geom_density(aes(x = YonX.predict), fill = "red", alpha = 0.5) +
  geom_density(aes(x = XonY.predict), fill = "blue", alpha = 0.5) +
  geom_vline(xintercept = 4.0, size = 2) +
  ggtitle("Calibration at 4.0")

The two regression lines are plotted over the data, and then the sum of squares error for Y is measured for both fits on a new sample.
> cat("YonX sum of squares error for prediction: ", DT2[, sum((YonX.predict - Y)^2)])
YonX sum of squares error for prediction: 77.33448
> cat("XonY sum of squares error for prediction: ", DT2[, sum((XonY.predict - Y)^2)])
XonY sum of squares error for prediction: 183.0144
Alternatively, a sample can be generated at a fixed Y (in this case 4) and the average of those estimates taken. You can now see that the Y-on-X predictor is not well calibrated, having an expected value much lower than Y. The X-on-Y predictor is well calibrated, having an expected value close to Y.
> cat("Expected value of X at a given Y (calibrated using YonX) should be close to 4: ", DT3[, mean(YonX.predict)])
Expected value of X at a given Y (calibrated using YonX) should be close to 4: 1.305579
> cat("Expected value of X at a gievn Y (calibrated using XonY) should be close to 4: ", DT3[, mean(XonY.predict)])
Expected value of X at a gievn Y (calibrated using XonY) should be close to 4: 3.465205
The distribution of the two predictions can be seen in a density plot.
Because a problem is in NP if, and only if, "yes" instances have succinct certificates. Informally, this means that the solution to any instance is at most polynomially large in the size of the instance, and if you're given an instance and something that is claimed to be a solution to it, you can check in polynomial time that it really is a solution. More formally, a language $L\subseteq\Sigma^*$ is in NP if, and only if, there is a relation $R\subseteq \Sigma^*\times\Sigma^*$ such that: there is a polynomial $p$ such that, for all $(x,y)\in R$, $|y|\leq p(|x|)$; the problem of determining whether $(x,y)\in R$ is in P; $L = \{x\mid \exists y\,(x,y)\in R\}$. For example, consider graph 3-colourability. You can describe a graph on $n$ vertices just by writing out its adjacency matrix, which requires $n^2$ bits. If a graph is 3-colourable, you can describe a colouring in $2n$ bits: just list the colour of each vertex in turn (say, $00$ for red, $01$ for green, $10$ for blue). So, if a string $x$ describes a graph, and that graph is 3-colourable, each 3-colouring can be described in a string $y$ with length $2\sqrt{|x|}\leq 2|x|$. Furthermore, if I give you a description of a graph and a description of a 3-colouring, you can easily check in polynomial time that it really is a 3-colouring: just check that adjacent vertices always have different colours. So, the relation$$\{(x,y)\mid x\text{ describes a graph $G$ and $y$ describes a 3-colouring of }G\}$$proves that 3-colourability is in NP. So, to show that Masyu is in NP, we just need to construct the corresponding relation $R$. We can describe an $n\times n$ instance of Masyu in $2n^2$ bits: for each square in turn, write $00$ if it's blank, $01$ if it contains a black circle and $10$ if it's a white circle.
We can describe a solution in $3n^2$ bits: for each square in turn, write $000$ if the square isn't on the solution path, and $100$, $101$, $110$ and $111$ if it's on the path and the next square is the one to the right, left, up and down, respectively. It's easy to check in polynomial time that a claimed solution really is a solution: just find a square that's on the path, follow the path and check it satisfies the criteria.
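To make the certificate checking concrete, here is a sketch of a polynomial-time verifier for 3-colouring certificates (the function name and encoding are our own illustration; the graph is given as an adjacency matrix, the colouring as a list of colours $0$, $1$, $2$):

```python
def is_valid_3_colouring(adj, colouring):
    """Polynomial-time certificate check: adjacent vertices must receive different colours."""
    n = len(adj)
    if len(colouring) != n or any(c not in (0, 1, 2) for c in colouring):
        return False
    return all(not adj[i][j] or colouring[i] != colouring[j]
               for i in range(n) for j in range(i + 1, n))

# A 4-cycle is 3-colourable (indeed 2-colourable); the second certificate colours
# two adjacent vertices the same, so it must be rejected.
cycle4 = [[0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]
print(is_valid_3_colouring(cycle4, [0, 1, 0, 1]))  # True
print(is_valid_3_colouring(cycle4, [0, 0, 1, 1]))  # False
```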
I found out that the symbol for union, ∪, was created in 1895 by Giuseppe Peano in his Formulario Mathematico, but of course the use of the word "union" in mathematics was older. Do you have a source for the earliest occurrence? Intersection and union. The symbols $\cap$ and $\cup$ were used by Giuseppe Peano (1858-1932) for intersection and union in 1888 in Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann (Cajori vol. 2, page 298). See page 2: Colla scrittura $A \cup B \cup C \cup \ldots$ intenderemo la minima classe che contiene le classi $A, B, C,\ldots$, ossia la classe formata dagli enti che sono o $A$ o $B$ o $C$, ecc. Il segno $\cup$ si leggerà o; l'operazione indicata col segno $\cup$ chiamasi in logica disgiunzione; noi la diremo anche addizione logica; le classi $A, B,\ldots$ si diranno i termini della somma $A \cup B \cup \ldots$ [With the symbol $A \cup B \cup C \cup \ldots$ we mean the least class containing the classes $A, B, C,\ldots$, i.e. the class formed by the entities that are either $A$ or $B$ or $C$, etc. The symbol $\cup$ will be read "or"; the operation denoted by the symbol $\cup$ is named in logic disjunction; we will call it also logical sum; the classes $A, B,\ldots$ will be called terms of the sum $A \cup B \cup \ldots$.] For the term, we have to search: W&R's Principia is still under Peano's influence; see page 27: "Similarly the logical sum of two classes $\alpha$ and $\beta$ ..." Some early occurrences are: Felix Hausdorff, Set theory (Engl. transl. (1957) of the 3rd German ed. 1937), page 18: "If $A, B$, are two sets, then by their sum, or union ..." [but it is necessary to check the earlier German editions.] Waclaw Sierpinski, Algèbre des ensembles (1951), page 62: "somme (ou réunion) des ensembles".
Based on this question: What is the homology groups of the torus with a sphere inside? I'm trying to find the fundamental group of this space using the Seifert–van Kampen theorem. If $U$ is the torus and $V$ is the sphere, then $U\cap V$ is the circle, and thus we have the following fundamental groups: $\pi_1(U)=\mathbb Z\times\mathbb Z$ $\pi_1(V)=0$ $\pi_1(U\cap V)=\mathbb Z$ If we use group presentation notation we have: $\pi_1(U)=\langle\alpha,\beta\mid \alpha\beta=\beta\alpha\rangle$ $\pi_1(V)=\langle\emptyset\mid\emptyset\rangle$ $\pi_1(U\cap V)=\langle\gamma\mid\emptyset\rangle$ Thus, using the Seifert–van Kampen theorem: $\pi_1(X)=\langle\alpha,\beta\mid\alpha\beta=\beta\alpha,\beta\rangle$ Note that I added $\beta$ above because, when we go around the generator of $S^1$ (which is $U\cap V$), we traverse one of the generators of the torus $U$. Thus the fundamental group of this space is $\mathbb Z\times \{0\}$, which is $\mathbb Z$ itself. Is my approach correct? Thanks a lot!
Answer The two points of intersection are: $(3, \frac{\pi}{3})$ and $(3, \frac{5\pi}{3})$ Work Step by Step $r = 3$ $r = 2+2\cos\theta$ To find the points of intersection, we can equate the expressions for $r$: $3 = 2+2\cos\theta$ $2\cos\theta = 1$ $\cos\theta = \frac{1}{2}$ $\theta = \frac{\pi}{3}, \frac{5\pi}{3}$ We can find $r$: $r = 3$ The two points of intersection are: $(3, \frac{\pi}{3})$ and $(3, \frac{5\pi}{3})$
Dini criterion for the convergence of Fourier series A criterion first proved by Dini for the convergence of Fourier series in [Di]. Theorem. Consider a summable $2\pi$-periodic function $f$ and a point $x\in \mathbb R$. If there is a number $S$ and a $\delta>0$ such that \begin{equation}\label{e:Dini}\int_0^\delta |f(x+u) + f(x-u)-2S| \frac{du}{u} < \infty\end{equation} then the Fourier series of $f$ converges to $S$ at $x$. Cp. with Section 38 of Chapter I in volume 1 of [Ba] and Section 6 of Chapter II in volume 1 of [Zy]. Observe that, if \eqref{e:Dini} holds, then the right and left limits $f (x^+)$ and $f(x^-)$ of $f$ at $x$ exist and $S= \frac{f(x^+)+f(x^-)}{2}$. From Dini's statement it is possible to conclude several classical corollaries, for instance: the convergence of the Fourier series of $f$ to $f(x)$ at every point where $f$ is differentiable; the convergence of the Fourier series of $f$ to $f$ when $f$ is Hölder continuous. It is also a sharp statement in the following sense. If $\omega: ]0, \infty[\to ]0, \infty[$ is a continuous function such that $\frac{\omega (t)}{t}$ is not integrable in a neighborhood of the origin, then there is a continuous $2\pi$-periodic function $f:\mathbb R \to \mathbb R$ such that $|f(t)-f(0)|\leq \omega (|t|)$ for every $t$ and the Fourier series of $f$ diverges at $0$. References [Ba] N.K. Bary, "A treatise on trigonometric series", Pergamon, 1964. [Di] U. Dini, "Serie di Fourier e altre rappresentazioni analitiche delle funzioni di una variabile reale", Pisa (1880). [Ed] R.E. Edwards, "Fourier series". Vol. 1. Holt, Rinehart and Winston, 1967. [Zy] A. Zygmund, "Trigonometric series", 1–2, Cambridge Univ. Press (1988) MR0933759 Zbl 0628.42001 How to Cite This Entry: Dini criterion. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Dini_criterion&oldid=28457
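As an illustration (not part of the original entry): $f(x)=|x|$ on $[-\pi,\pi]$ is Lipschitz, hence Hölder continuous, so by the corollary above its Fourier series converges at every point. A short numeric check of the partial sums at $x=0$, where $f(0)=0$:

```python
import math

def fourier_partial_sum_abs(x, N):
    """Partial Fourier sum of f(x) = |x| on [-pi, pi]:
    a_0/2 = pi/2, a_k = (2/(pi k^2)) * ((-1)^k - 1), b_k = 0 (f is even)."""
    s = math.pi / 2
    for k in range(1, N + 1):
        a_k = (2.0 / (math.pi * k * k)) * ((-1) ** k - 1)
        s += a_k * math.cos(k * x)
    return s

# The partial sums at x = 0 shrink toward f(0) = 0 as N grows.
print(fourier_partial_sum_abs(0.0, 10))
print(fourier_partial_sum_abs(0.0, 1000))
```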
In the review Foundations of Black Hole Accretion Disk Theory, the authors define an effective potential for the Kerr geometry as (Chap. 2, eqn. 23) $$\mathcal{U}_{eff}=-\frac{1}{2}\ln\left|g^{tt}-2lg^{t\phi}+l^2g^{\phi\phi}\right|$$ where $l=\dfrac{\mathcal{L}}{\mathcal{E}}=-\dfrac{u_\phi}{u_t}$ is the specific angular momentum, $\mathcal{L}=p_\phi$ is the angular momentum and $\mathcal{E}=-p_t$ is the energy. It is mentioned that this form of the potential is chosen because, using the potential $\mathcal{U}_{eff}$, the rescaled energy $\mathcal{E}^*=\ln\mathcal{E}$ and $V=u^ru_r+u^\theta u_\theta \ll u^\phi u_\phi$, slightly non-circular motion can be characterized by the equation $$\frac{1}{2}V^2=\mathcal{E}^*-\mathcal{U}_{eff}$$ The form of this equation is indeed similar to that of the Newtonian equation. But nothing is mentioned in the paper regarding the derivation of the effective potential. Also, I couldn't understand why they rescaled the energy as $\mathcal{E}^*=\ln\mathcal{E}$. My Questions: How to derive the effective potential $\mathcal{U}_{eff}$? Hints to the derivation would be sufficient. What is the logic behind the rescaling of the conserved energy $\mathcal{E}^*=\ln\mathcal{E}$?
Definition:Jacobian/Determinant Definition Let $U$ be an open subset of $\R^n$. Let $\mathbf f = \left({f_1, \ldots, f_m}\right): U \to \R^m$ be differentiable at $\mathbf x \in U$, with $m = n$ so that the Jacobian matrix is square. Then the Jacobian determinant of $\mathbf f$ at $\mathbf x$ is defined as: $\displaystyle \map \det {\mathbf J_{\mathbf f} } := \begin{vmatrix} \map {\dfrac {\partial f_1} {\partial x_1} } {\mathbf x} & \cdots & \map {\dfrac {\partial f_1} {\partial x_n} } {\mathbf x} \\ \vdots & \ddots & \vdots \\ \map {\dfrac {\partial f_m} {\partial x_1} } {\mathbf x} & \cdots & \map {\dfrac {\partial f_m} {\partial x_n} } {\mathbf x} \end{vmatrix}$ Also defined as Also known as This concept is often called just the Jacobian of $\mathbf f$ at $\mathbf x$. However, this can allow it to be confused with the Jacobian matrix, so it is advised to use the full name unless context establishes which is meant. Some sources present this as $\map \det {J_f}$, but boldface for matrices is usual, and standard on $\mathsf{Pr} \infty \mathsf{fWiki}$. Other sources present it as either $\mathbf J_{\mathbf f}$ or $J_f$, allowing a further source of confusion between this and the Jacobian matrix. Also see Source of Name This entry was named for Carl Gustav Jacob Jacobi. Sources 1989: Ephraim J. Borowski and Jonathan M. Borwein: Dictionary of Mathematics... (previous) ... (next): Entry: Jacobian or Jacobian determinant 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.) ... (previous) ... (next): Entry: Jacobian 2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.) ... (previous) ... (next): Entry: Jacobian
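As a numerical illustration (not part of the original entry; helper names are ours), the Jacobian determinant of the polar-coordinates map $(r, \theta) \mapsto (r \cos \theta, r \sin \theta)$ is known to be $r$, which can be checked with finite differences:

```python
import numpy as np

def jacobian_det(f, x, h=1e-6):
    """Approximate det(J_f) at x via central differences (assumes f maps R^n to R^n)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)  # column j: partials w.r.t. x_j
    return np.linalg.det(J)

# Polar-coordinates map (r, theta) -> (r cos theta, r sin theta); det J = r.
polar = lambda v: np.array([v[0] * np.cos(v[1]), v[0] * np.sin(v[1])])
print(jacobian_det(polar, [2.0, 0.7]))  # close to r = 2
```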
Definition:Normal Subgroup/Definition 1 Definition Let $G$ be a group. Let $N$ be a subgroup of $G$. $N$ is a normal subgroup of $G$ if and only if: $\forall g \in G: g \circ N = N \circ g$ where $g \circ N$ denotes the subset product of $g$ with $N$. The statement that $N$ is a normal subgroup of $G$ is represented symbolically as $N \lhd G$. To use the notation introduced in the definition of the conjugate: $N \lhd G \iff \forall g \in G: N^g = N$ It is usual to describe a normal subgroup of $G$ as normal in $G$. Some sources refer to a normal subgroup as an invariant subgroup or a self-conjugate subgroup. This arises from Definition 6: $\forall g \in G: \paren {n \in N \iff g \circ n \circ g^{-1} \in N}$ $\forall g \in G: \paren {n \in N \iff g^{-1} \circ n \circ g \in N}$ Some sources use distinguished subgroup. Also see Sources 1955: John L. Kelley: General Topology... (previous) ... (next): Chapter $0$: Algebraic Concepts 1965: J.A. Green: Sets and Groups... (previous) ... (next): $\S 6.6$. Normal subgroups 1965: Seth Warner: Modern Algebra... (previous) ... (next): $\S 11$ 1966: Richard A. Dean: Elements of Abstract Algebra... (previous) ... (next): $\S 1.10$ 1972: A.G. Howson: A Handbook of Terms used in Algebra and Analysis... (previous) ... (next): $\S 5$: Groups $\text{I}$: Subgroups 1978: Thomas A. Whitelaw: An Introduction to Abstract Algebra... (previous) ... (next): $\S 49$. Normal subgroups 1996: John F. Humphreys: A Course in Group Theory... (previous) ... (next): Chapter $7$: Normal subgroups and quotient groups: Proposition $7.4 \ \text{(d)}$
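As a computational illustration (not part of the original entry; helper names are ours), the conjugation form of the definition can be checked directly for small groups: in $S_3$, the alternating subgroup $A_3$ is normal, while the subgroup generated by a single transposition is not:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]]: apply q first, then p; permutations as tuples."""
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def is_normal(N, G):
    """N is normal in G iff g n g^{-1} lies in N for all g in G, n in N (equivalently gN = Ng)."""
    return all(compose(compose(g, n), inverse(g)) in N for g in G for n in N)

S3 = set(permutations(range(3)))

def is_even(p):
    inversions = sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
    return inversions % 2 == 0

A3 = {p for p in S3 if is_even(p)}  # alternating subgroup, index 2
H = {(0, 1, 2), (1, 0, 2)}          # generated by the transposition swapping 0 and 1

print(is_normal(A3, S3), is_normal(H, S3))  # True False
```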
I was given this answer: I was told that $$\tan(x) = 2$$ Then, they said that from this statement they could deduce: $$\cos(x) = \frac{1}{\sqrt{5}}$$ $$\sin(x) = \frac{2}{\sqrt{5}}$$ Now, I understand that if I do $$\tan^{-1}(2) \approx 63.4^\circ$$ then after that I can get decimal values for cosine and sine. However, I don't know how they got the precise fractions. Does anyone know how?
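One way to see the exact fractions: for a first-quadrant $x$ with $\tan(x)=2$, a right triangle with opposite side $2$ and adjacent side $1$ has hypotenuse $\sqrt{1^2+2^2}=\sqrt{5}$, giving $\cos(x)=1/\sqrt{5}$ and $\sin(x)=2/\sqrt{5}$. A quick numeric confirmation:

```python
import math

x = math.atan(2)  # an angle in the first quadrant with tan x = 2
print(math.cos(x), 1 / math.sqrt(5))  # both about 0.4472
print(math.sin(x), 2 / math.sqrt(5))  # both about 0.8944
```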
Problem Is there an irrational $\alpha\in\mathbb{R}\backslash\mathbb{Q}$ such that the set $S= \{\,\{2^N\alpha\} :N\,\in\mathbb{N}\}$ is not dense in $[0,1]$? Here $\{x\}=x-\lfloor x\rfloor$ is the fractional part of $x$. Questions very similar to this have been asked in the past. However, right now I can't adjust them and put the two sticks together. Context I was trying to prove the following proposition --- which isn't actually true when $n>2$. Let $n>2$ and let $O_n$ be the (proper) subset of $[0,1]$ of irrational numbers containing only $0$s and $1$s in their base-$n$ expansion. If we take $\alpha\in O_n$ we then have that $$\{n^N\alpha\}\in O_n\text{ for all }N\in\mathbb{N}$$ and so $S$ cannot be dense in $[0,1]$ when $n>2$. Proposition Suppose that $f:\mathbb{C}\rightarrow\mathbb{C}$ is a power mapping $f(z)=z^n$ for some $n\geq 2$. Then the dynamical system $(\mathbb{C},f)$ exhibits the following four behaviours: If $z_0\in\mathbb{D}:=\{z\in\mathbb{C}:|z|<1\}$ then $z_n\rightarrow 0$. If $z_0\in\mathbb{C}\backslash\bar{\mathbb{D}}:=\{z\in\mathbb{C}:|z|>1\}$ then $z_n\rightarrow\infty$. If $z_0\in\mathbb{T}=\{z\in\mathbb{C}:|z|=1\}$ then there are two behaviours: a. If $z_0=e^{q\,2\pi i}$ with $q\in\mathbb{Q}$ then $z_0$ is eventually periodic. b. If $z_0=e^{\alpha\,2\pi i}$ with $\alpha\in\mathbb{R}\backslash\mathbb{Q}$ then $z_0$ has a dense orbit. In fact, $(\mathbb{T},f)$ is a chaotic mapping. Proof: 1, 2 and the first part of 3 were illustrated in class for $n=4$. Here we prove them for a general $n\geq 2$. Assume that $z_0=e^{\alpha\cdot\,2\pi i}$ with $\alpha\in\mathbb{R}\backslash \mathbb{Q}$. Claim 1: $z_0$ is not eventually periodic. Proof by Contradiction: Suppose that $z_0=e^{\alpha\cdot2\pi i}$ is eventually periodic. That is, there exists an iterate of $z_0$, say $f^N(z_0)$, that is periodic.
Note that $$f^N(z_0)=z_0^{n^N}=\left(e^{\alpha\cdot 2\pi i}\right)^{n^N}=e^{n^N\alpha\cdot2\pi i}.$$ Now if this point is periodic then there exists an $M\geq 1$ such that $$f^M(e^{n^N\alpha\cdot2\pi i})=e^{n^N\alpha\cdot2\pi i}$$ $$\Rightarrow e^{n^Nn^M\alpha\cdot2\pi i}=e^{n^N \alpha\cdot2\pi i}$$ If these two complex numbers are equal then their arguments/angles must differ only by a multiple of $360^\circ$; i.e. $k\cdot 2\pi$ for $k\in\mathbb{Z}$: $$n^{M}n^N\alpha\cdot 2\pi-n^N\alpha\cdot 2\pi=k\cdot 2\pi$$ $$\Rightarrow n^{N+M}\alpha-n^N\alpha=k$$ $$\displaystyle\alpha=\frac{k}{n^{N+M}-n^N},$$ but this is a contradiction as $\alpha$ is irrational. Hence $z_0$ is not eventually periodic $\bullet$ Claim 2: Two iterates of $z_0$ can be found that are arbitrarily close together. Proof : Note that the length of the unit circle is $2\pi$. Suppose that we divide the unit circle into $N$ arcs $A_1,A_2,\dots,A_N$ each of equal length $\displaystyle \frac{2\pi}{N}$ (to be careful, suppose that they contain their 'right' endpoints but not their 'left'). Now, we know that $z_0$ is not eventually periodic, so the first $N+1$ iterates of $z_0$ are all distinct: $$z_0,z_1,z_2,z_3,\dots,z_N.$$ Note that there are $N$ arcs but $N+1$ iterates, so by the Pigeonhole Principle there is an arc $A_i$ containing two iterates of $z_0$ --- say $z_i$ and $z_j$. Take $N\rightarrow \infty$ so that the arcs become arbitrarily small and the points become arbitrarily close together $\bullet$ Claim 3: There is a number $M$ such that $z_0$ and $f^M(z_0)$ are arbitrarily close together. Proof: We know that for any $N$ we can find two iterates of $z_0$ that are at most (radially) $\displaystyle \frac{2\pi }{N}$ apart. Denote these by $z_i=f^i(z_0)$ and $z_j=f^j(z_0)$... what I was hoping to do here actually doesn't work. Hmmm..
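Claim 2 is really just the pigeonhole principle on the circle; here is a small Python sketch of it (my own illustration, with $\alpha=\sqrt{2}-1$ chosen arbitrarily and only 30 iterates so that floating-point doubling stays trustworthy):

```python
import math

alpha = math.sqrt(2) - 1          # an irrational angle (illustrative choice)
K = 30                            # few enough iterates that doubles stay accurate

# Fractional parts of 2^k * alpha, i.e. the angles of the iterates z_k for n = 2.
pts = []
x = alpha
for _ in range(K):
    pts.append(x)
    x = (2 * x) % 1.0

# Pigeonhole: K points on a circle of circumference 1 leave some gap <= 1/K,
# so two iterates are at most 1/K apart (radially: 2*pi/K).
pts_sorted = sorted(pts)
gaps = [b - a for a, b in zip(pts_sorted, pts_sorted[1:])]
gaps.append(1.0 - pts_sorted[-1] + pts_sorted[0])   # wrap-around gap
min_gap = min(gaps)
print(min_gap)   # guaranteed <= 1/30
```

Increasing K makes the guaranteed bound 1/K, and hence the closest pair of iterates, as small as you like.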
I'm having trouble understanding some of the subtleties of working with bras and kets when considering the standard 1-D quantum oscillator. Say I am given a state vector at $t=0$, $$|\Psi(t=0)\rangle = A \sum_{q=0}^{Q_o} \frac{1}{q+i}|\phi_q\rangle$$ where $Q_o$ is a positive finite integer. I am given the number operator $$\hat{N}|\phi_q\rangle = q|\phi_q\rangle$$ where $\hat{N} = \hat{a}^\dagger\hat{a}$, and the energy eigenvalues of the quantum oscillator are known to be $$E_q = (q+\frac{1}{2})\hbar\omega_o$$ for $q = 0,1,2,3,\dots$ Now, I am tasked to answer the following questions regarding the state vector $|\Psi(t=0)\rangle$: [1] Find an equation for $A$ which normalizes the state vector $|\Psi(t=0)\rangle$ In wave mechanics, the coefficient for normalization is typically found by enforcing that $\int|\Psi|^2 dx = 1$ for all space, and then solving for the coefficient within $\Psi$. If I am to let $Q_o = 100$, $$|\Psi(t=0)\rangle = |\Psi(0) \rangle = A \sum_{q=0}^{100} \frac{1}{q+i}|\phi_q\rangle$$ I know that a normalized state vector will satisfy $$\langle \Psi(t)|\Psi(t)\rangle = 1$$ So if I have $$\langle \Psi(0)| = (|\Psi(0)\rangle)^* = A^*\sum_{q=0}^{100} \frac{1}{q-i}\langle\phi_q|$$ then, $$\langle \Psi(0)|\Psi(0)\rangle = (A^*\sum_{q=0}^{100} \frac{1}{q-i}\langle\phi_q|)(A \sum_{q=0}^{100} \frac{1}{q+i}|\phi_q\rangle)=1$$ Now if $A = A^*$, am I able to state that $$A^2\sum_{q=0}^{100}\sum_{q=0}^{100}\frac{1}{q+i}\frac{1}{q-i}\langle\phi_q|\phi_q\rangle = 1$$ where $$\langle\phi_q|\phi_q\rangle = \langle q|\hat{N} |q\rangle = q\langle q|q\rangle = q$$ (as $\langle q|q\rangle = 1$ due to orthonormality), and then solve for $A$? Or am I horribly mistaken in terms of how bras and kets work? From what I have so far, I would obtain that $$A^2\sum_{q=0}^{100}\sum_{q=0}^{100}\frac{q}{(q+i)(q-i)} = 1$$ and $$A = \sqrt{\sum_{q=0}^{100}\sum_{q=0}^{100}\frac{(q+i)(q-i)}{q}}$$ Hopefully someone will be able to clarify my misunderstandings here.
[2] Evaluate $\langle \phi_q|\hat{N}$ Am I able to simply state here that $$\langle \phi_q|\hat{N} = \sum_{q}N_{p,q}\langle \phi_q| = \sum_{q}N_{p,q}\langle q|\hat{a}^\dagger$$ where $N_{p,q}$ are the matrix elements of the number operator $\hat{N}$? I don't quite know what I am supposed to end up with as an answer, or how to proceed further with the sum produced, provided it is even correct. [3] What is the state vector for times $t \geq 0$, and the expectation value of $\hat{a}^\dagger$ for these times? From my understanding, I can simply tack on time dependence to the original state vector. For example, $$|\Psi(t)\rangle = |\phi_q\rangle e^{\frac{-iE_qt}{\hbar}}$$ Then, will the expectation value of $\hat{a}^\dagger$ just be $$\langle \Psi(t)|\hat{a}^\dagger|\Psi(t)\rangle$$ I'm not sure how to actually calculate this expectation value, but looking at my notes, I see that $$\langle p|\hat{a}^\dagger|q\rangle = \sqrt{q+1}\delta_{p,q+1}$$ Does this imply that I can write $$\langle \hat{a}^\dagger\rangle = \langle \phi_q e^{-i\omega_o t}|\hat{a}^\dagger|\phi_q e^{-i\omega_o t}\rangle = \sqrt{q+1}e^{-i\omega_o t}$$? Sorry for the length of the question, but hopefully it will make it easy to figure out where I'm going wrong in my approach to these problems. Thanks.
For $1\leq p<\infty$ an approach to defining fractional Sobolev spaces is via Sobolev--Slobodeckij spaces, a generalisation of Hölder continuity. For example, letting $U\subset\mathbb{R}^n$, $ \left\|u\right\|^p_{W^\mu_p(U)} = \left\|u\right\|^p_{W^{\lfloor\mu\rfloor}_p(U)} + \sum \int_U \int_U \frac{|D^\alpha u(x)-D^\alpha u(y)|^p}{|x-y|^{n+p[\mu]}}\,dx\,dy $ where $[\mu]=\mu-\lfloor\mu\rfloor$ and the sum is taken over all multi-indices $\alpha$ with $|\alpha|=\lfloor\mu\rfloor$. This is from Chapter 14 of The Mathematical Theory of Finite Element Methods by Susanne C. Brenner and L. Ridgway Scott. Does the above hold for $p=\infty$? For example, for $p=\infty$ do we have (or something similar) $ \left\|u\right\|_{W^\mu_\infty(U)} = \left\|u\right\|_{W^{\lfloor\mu\rfloor}_\infty(U)} + \sup \sup_{x,y \in U} \frac{|D^\alpha u(x)-D^\alpha u(y)|}{|x-y|^{[\mu]}} $ where the first $\sup$ is taken over all multi-indices $\alpha$ with $|\alpha|=\lfloor\mu\rfloor$? Can this be shown by considering the limit of the case $p<\infty$ as $p\rightarrow\infty$? Thanks in advance.
The geometric analogue to darij grinberg's answer is that you are effectively asking whether the product of two particular irreducible algebraic varieties is also irreducible. Here is how the translation works. The two polynomial rings $K[x_1, \dots, x_m]$ and $K[y_1, \dots, y_n]$ are the coordinate rings of the affine spaces $\mathbb{A}^m_K$ and $\mathbb{A}^n_K$, and their prime ideals $P$ and $Q$ cut out, respectively, irreducible subvarieties $X$ and $Y$ in these. The combined ring $K[x_1, \dots, x_m, y_1, \dots, y_n]$ is the ring of $\mathbb{A}^{m + n}_K$, which is isomorphic to the product $\mathbb{A}^m_K \times_K \mathbb{A}^n_K$. Considering $P$ and $Q$ as prime ideals in the combined ring is the same as forming the varieties $X \times_K \mathbb{A}^n_K$ and $\mathbb{A}^m_K \times_K Y$, and their intersection is itself isomorphic to $X \times_K Y$. You want to know whether this is necessarily irreducible. Over an algebraically closed field the answer is "yes". This formally follows from the theorem darij cited, but you can also understand it geometrically in that if, for example, $K = \mathbb{C}$, you can drop the ${}_K$ from the product and convince yourself that since the slices of $X \times Y$ over $Y$ are irreducible (isomorphic to $X$) and of equal dimension, the product itself is also irreducible. (To understand the second condition, think about the reducible variety $\{0\} \times \mathbb{A}^1 \cup \mathbb{A}^1 \times \{0\}$ and its map to $\mathbb{A}^1$ in, say, the second factor.) Over a non-closed field you have darij's example. It's hard to give a totally geometric picture for this sort of thing, since geometry takes place over a closed field, but you can think about it in a sort of quasi-pictorial way.
Namely, the real affine line $\mathbb{A}^1_\mathbb{R}$ (with coordinate ring $\mathbb{R}[x]$) has two kinds of points: those in $\mathbb{R}$, corresponding to prime ideals $(x - a)$, and conjugate pairs of those in $\mathbb{C}$, corresponding to prime ideals $(x^2 + 2bx + c)$ with $b^2 < c$. This means that $\mathbb{A}^1_\mathbb{R}$ is like $\mathbb{A}^1_\mathbb{C}$ with conjugate pairs of points being grouped together into one point. In $\mathbb{A}^2_\mathbb{R}$, things are more complicated because of dimension, but for closed points, it's similar: each coordinate is like a pair of conjugate complex numbers grouped together. Suppose you have points like $\{z, \bar{z}\}$ and $\{w, \bar{w}\}$ in $\mathbb{A}^1_\mathbb{R}$. Their product, taken in the naive sense, is going to look like four points:$$ \{ (z, w), (\bar{z}, \bar{w}), (z, \bar{w}), (\bar{z}, w) \}. $$The first two and the last two are conjugate pairs, but (in the absence of coincidences like one of them being real) not conjugate to each other, so you will get two distinct points in $\mathbb{A}^2_\mathbb{R}$, which is not an irreducible set. This corresponds to the fact that, as darij computed, the coordinate ring of this product is $\mathbb{C} \otimes_\mathbb{R} \mathbb{C}$, which is isomorphic to $\mathbb{C} \oplus \mathbb{C}$ since $\mathbb{C}$ is Galois of degree 2 over $\mathbb{R}$. And in fact, everything I've said about conjugates in the last few paragraphs has used this fact implicitly.
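The four-point bookkeeping above can even be checked mechanically. A small Python sketch (purely illustrative, with $z = w = i$) groups the four naive product points into orbits under coordinate-wise conjugation, i.e. under the action of $\mathrm{Gal}(\mathbb{C}/\mathbb{R})$:

```python
# The four "naive" product points for z = i, w = i (neither of them real).
z, w = 1j, 1j
points = [(z, w), (z.conjugate(), w.conjugate()),
          (z, w.conjugate()), (z.conjugate(), w)]

def conj(p):
    """Coordinate-wise complex conjugation: the Galois action of Gal(C/R)."""
    return (p[0].conjugate(), p[1].conjugate())

# Group the points into orbits {p, conj(p)}.
orbits = set()
for p in points:
    orbits.add(frozenset({p, conj(p)}))

print(len(orbits))   # 2 orbits, i.e. two distinct points of A^2_R: reducible
```

The two orbits are exactly the two conjugate pairs described above, matching the decomposition $\mathbb{C} \otimes_\mathbb{R} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$.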
I have a doubt about how the time step is calculated in kinetic Monte Carlo simulations. One state with index $i$ is connected to $N$ other states, indexed by $j=1,\dots,N$, by transitions that happen with rates $r_{ij}$ (from $i$ to $j$). At each iteration of the algorithm, one of the $N$ transitions is randomly chosen (with the right probability), and the time step is cleverly calculated using a random number $u$ in $(0,1]$ by $\Delta t = \frac{\ln(1/u)}{R}$ where $R$ is the sum of all the rates of the transitions leaving state $i$: $R=\sum_{j=1}^N r_{ij}$. In this way $\Delta t$ is exponentially distributed with mean $1/R$. My doubt: I understand that the time between events (or transitions) $\Delta t$ should be exponentially distributed because the events are Poisson distributed. But why should the mean of the time between events be $1/R$? If the algorithm in a particular iteration picks the particular transition $i$ to $j_o$, with rate $r_{ij_o}$, I would say that the average residence time in state $i$ before leaving to state $j_o$ should be $1/r_{ij_o}$, and not $1/R$. So instead of the $\Delta t$ above, I would use in that particular iteration a time step $\Delta t=\frac{\ln(1/u)}{r_{ij_o}}$. Is this wrong?
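To make the algorithm I described concrete, here is a minimal Python sketch of one KMC step (the rates are made-up illustrative values); note that the time increment uses the total rate $R$ regardless of which transition was drawn:

```python
import math
import random

random.seed(0)
rates = [0.5, 1.2, 0.3]        # illustrative rates r_ij out of state i
R = sum(rates)                 # total escape rate

def kmc_step():
    """One KMC iteration: choose transition j with probability r_ij / R,
    then advance time by an Exp(R)-distributed increment."""
    u1 = random.random() * R
    acc = 0.0
    chosen = 0
    for j, r in enumerate(rates):
        acc += r
        if u1 <= acc:
            chosen = j
            break
    u2 = 1.0 - random.random()          # u in (0, 1], avoids log(1/0)
    dt = math.log(1.0 / u2) / R         # exponentially distributed, mean 1/R
    return chosen, dt

steps = [kmc_step() for _ in range(200000)]
mean_dt = sum(dt for _, dt in steps) / len(steps)
print(mean_dt, 1.0 / R)        # empirical mean close to 1/R = 0.5
```

With 200000 steps the empirical mean of $\Delta t$ comes out very close to $1/R = 0.5$, as the algorithm claims.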
I don't think it's correct to talk about probabilities in the context of random forests, as random forest classifiers do not attempt to produce accurate probabilities (see Olson et al.). It's better to view the output as some score in the interval $[0,1]$. In random forest classification, each class $c_i, i \in \{1, \dots, k\}$ gets assigned a score $s_i$ such that $\sum{s_i} = 1$. The model outputs the label of the class $c_i$ where $s_i = \max(s_1, \dots, s_k)$. So in order to adjust the thresholds, you can weight the scores $s_i$ by some weights $w_i$, such that you output the label of class $c_i$ with $s_i^* = \max(s_1^*, \dots, s_k^*) = \max(s_1 \times w_1, \dots, s_k \times w_k)$. (If you want the $s_i^*$ to add up to $1$, you need to normalize them.) I'm not sure how, or if, answering questions such as "With maximal 1% probability we will be predicting B though it is actually A" is possible for more than 2 classes, but for two classes, e.g. A and B, you can approach this by formulating a hypothesis test. Given a sample X, in the case of binary classification there are exactly two hypotheses in the universe: (1) X belongs to class A or (2) X belongs to class B. For simplicity, I will assume that the classes are balanced. Sample X corresponds to a score $s_x \in [0, 1]$ and $s_x$ follows a different distribution depending on whether the true class of X is A or B. Let's say that the mean score of samples in class A is bigger than the mean score of samples in class B, i.e. $\mu_A > \mu_B$. Given your random forest model and a test set, you can calculate the empirical distribution of $s_x$ under A and $s_x$ under B. Now, say you observed a sample X with score $s_x$. What is the probability that this sample belongs to class A? To answer this, you can simply calculate the lower tail of the score distribution under class A at $s_x$, i.e. the percentage of samples in class A with a score $\leq s_x$.
The resulting p-value corresponds to the probability that a sample X with score $s_x$ or lower truly belongs to the class A. You can do the same for class B by calculating the upper tail. Similarly, for a given significance level $\alpha$, e.g. $\alpha = 0.01$, you can calculate a score $s_\alpha$ such that the chance that a sample X with score $s_x \leq s_\alpha$ belongs to class A is less than $1\%$. You see, since there are only two possible labels for a sample X, i.e. A or B, you can formulate a hypothesis test $H_0: X \in A, H_a: X \in B$ or vice versa. Now, in case you have more than 2 classes, this is no longer possible, since you can only reject a given hypothesis, i.e. that a sample X belongs to A. In the binary case, rejecting A corresponds to accepting B, since there are only two possible outcomes, but for e.g. 3 classes, rejecting A corresponds to accepting "B or C"! Keep in mind that the procedure above only works for balanced classes. You can possibly extend this approach to the imbalanced case, but it will likely be more complicated and I just wanted to outline a possible general approach.
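A minimal sketch of the tail/threshold procedure above (the class-A scores are synthetic, drawn from an invented distribution, purely to make the mechanics concrete):

```python
import random

random.seed(1)

# Synthetic held-out scores for class A: high scores, clipped to [0, 1].
scores_A = sorted(min(1.0, max(0.0, random.gauss(0.7, 0.1))) for _ in range(10000))

def lower_tail_p(s_x, scores):
    """Fraction of class-A samples with score <= s_x (the lower-tail p-value)."""
    return sum(s <= s_x for s in scores) / len(scores)

def threshold(alpha, scores):
    """A score s_alpha whose lower tail under class A has mass ~alpha."""
    k = int(alpha * len(scores))
    return scores[k - 1] if k > 0 else float("-inf")

s_alpha = threshold(0.01, scores_A)
print(s_alpha, lower_tail_p(s_alpha, scores_A))
```

Any new sample scoring at or below `s_alpha` would then be assigned to B with at most roughly a 1% chance of truly being an A, under the balanced-class assumption.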
I think your question implicates another question (which is also mentioned in some comments here), namely: Why are all energy eigenvalues of states with a different angular momentum quantum number $\ell$ but with the same principal quantum number $n$ (e.g. $3s$, $3p$, $3d$) degenerate in the hydrogen atom but non-degenerate in multi-electron atoms? Although AcidFlask already gave a good answer (mostly on the non-degeneracy part) I will try to elaborate on it from my point of view and give some additional information. I will split my answer into three parts: The first will address the $\ell$-degeneracy in the hydrogen atom, in the second I will try to explain why this degeneracy is lifted, and in the third I will try to reason why $3s$ states are lower in energy than $3p$ states (which are in turn lower in energy than $3d$ states). $\ell$-degeneracy of the hydrogen atom's energy eigenvalues The non-relativistic electron in a hydrogen atom experiences a potential that is analogous to the Kepler problem known from classical mechanics. This potential (aka Kepler potential) has the form $\frac{\kappa}{r}$, where $r$ is the distance between the nucleus and the electron, and $\kappa$ is a proportionality constant. Now, it is known from physics that symmetries of a system lead to conserved quantities (Noether's theorem). For example, from the rotational symmetry of the Kepler potential follows the conservation of the angular momentum, which is characterized by $\ell$.
But while the length of the angular momentum vector is fixed by $\ell$ there are still different possibilities for the orientation of its $z$-component, characterized by the magnetic quantum number $m$, which are all energetically equivalent as long as the system maintains its rotational symmetry. So, the rotational symmetry leads to the $m$-degeneracy of the energy eigenvalues for the hydrogen atom. Analogously, the $\ell$-degeneracy of the hydrogen atom's energy eigenvalues can also be traced back to a symmetry, the $SO(4)$ symmetry. The system's $SO(4)$ symmetry is not a geometric symmetry like the one explored before but a so-called dynamical symmetry which follows from the form of the Schroedinger equation for the Kepler potential. (It corresponds to rotations in a four-dimensional cartesian space. Note that these rotations do not operate in some physical space.) This dynamical symmetry conserves the Laplace-Runge-Lenz vector $\hat{\vec{M}}$ and it can be shown that this conserved quantity leads to the $\ell$-independent energy spectrum with $E \propto \frac{1}{n^2}$. (A detailed derivation, though in German, can be found here.) Why is the $\ell$-degeneracy of the energy eigenvalues lifted in multi-electron atoms? As the $m$-degeneracy of the hydrogen atom's energy eigenvalues can be broken by destroying the system's spherical symmetry, e.g. by applying a magnetic field, the $\ell$-degeneracy is lifted as soon as the potential appearing in the Hamilton operator deviates from the pure $\frac{\kappa}{r}$ form. This is certainly the case for multi-electron atoms since the outer electrons are screened from the nuclear Coulomb attraction by the inner electrons and the strength of the screening depends on their distance from the nucleus. (Other factors, like spin and relativistic effects, also lead to a lifting of the $\ell$-degeneracy even in the hydrogen atom.) Why do states with the same $n$ but lower $\ell$ values have lower energy eigenvalues?
Two effects are important here: The centrifugal force puts an "energy penalty" onto states with higher angular momentum.${}^{1}$ So, a higher $\ell$ value implies a stronger centrifugal force, which pushes electrons away from the nucleus. The concept of centrifugal force can be seen in the radial Schroedinger equation for the radial part $R(r)$ of the wave function $\Psi(r, \theta, \varphi) = R(r) Y_{\ell,m} (\theta, \varphi )$ \begin{equation}\bigg( \frac{ - \hbar^{2} }{ 2 m_{\mathrm{e}} } \frac{ \mathrm{d}^{2} }{ \mathrm{d} r^{2} } + \underbrace{ \frac{ \hbar^{2} }{ 2 m_{\mathrm{e}} } \frac{ \ell (\ell + 1) }{ r^{2} } }_{ = ~ V^{\ell}_{\mathrm{cf}} (r) } - \frac{ Z e^{2} }{ 2 m_{\mathrm{e}} r } - E \bigg) r R(r) = 0\end{equation} The radial part experiences an additional $\ell$-dependent potential $V^{\ell}_{\mathrm{cf}} (r)$ that pushes the electrons away from the nucleus. Core repulsion (Pauli repulsion), on the other hand, puts an "energy penalty" on states with a lower angular momentum. That is because the core repulsion acts only between electrons with the same angular momentum.${}^{1}$ So it acts more strongly on the low-angular-momentum states since there are more core shells with lower angular momentum. Core repulsion is due to the condition that the wave functions must be orthogonal, which in turn is a consequence of the Pauli principle. Because states with different $\ell$ values are already orthogonal by their angular motion, there is no Pauli repulsion between those states. However, states with the same $\ell$ value feel an additional effect from core orthogonalization.
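A small numerical illustration of the centrifugal point (my own sketch, not part of the original argument): in atomic units the effective radial potential is $V_{\mathrm{eff}}(r) = \frac{\ell(\ell+1)}{2r^2} - \frac{Z}{r}$, whose minimum lies at $r = \ell(\ell+1)/Z$, so higher $\ell$ indeed sits farther from the nucleus:

```python
def v_eff(r, ell, Z=1.0):
    """Effective radial potential in atomic units (hbar = m_e = e = 1)."""
    return ell * (ell + 1) / (2.0 * r * r) - Z / r

def r_min(ell, Z=1.0):
    """Grid search for the radius minimising V_eff (analytically l(l+1)/Z)."""
    grid = [0.01 * k for k in range(1, 5000)]
    return min(grid, key=lambda r: v_eff(r, ell, Z))

minima = [r_min(ell) for ell in (1, 2, 3)]
print(minima)   # roughly [2.0, 6.0, 12.0]: higher l sits farther out
```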
The "accidental" $\ell$-degeneracy of the hydrogen atom can be described as a balance between centrifugal force and core repulsion, which both act against the nuclear Coulomb attraction. In the real atom the balance between centrifugal force and core repulsion is broken: the core electrons are contracted compared to the outer electrons because there are fewer inner electron shells screening the nuclear attraction from the core shells than from the valence electrons. Since the inner electron shells are more contracted than the outer ones, the core repulsion is weakened whereas the effects due to the centrifugal force remain unchanged. The reduced core repulsion in turn stabilizes the states with lower angular momenta, i.e. lower $\ell$ values. So, $3s$ states are lower in energy than $3p$ states, which are in turn lower in energy than $3d$ states. Of course, one has to be careful when using results of the hydrogen atom to describe effects in multi-electron atoms, as AcidFlask mentioned. But since only a qualitative description is needed this might be justifiable. I hope this somewhat lengthy answer is helpful. If something is wrong with my arguments I'm happy to discuss those points.
One way to solve this without using the Laplace transform is by taking this back to the differential equation which produced this transfer function. The transfer function: $$\dfrac{Y(s)}{r(s)}=\dfrac{6s+100}{s^2+12s+100} $$ This could be re-written as: $$(s^2+12s+100)Y(s)=(6s+100)r(s) $$ Back in the time domain: $$y''+12y'+100y=6r'+100r $$ Here, \$r(t)\$ is the unit step function. The derivative of the step function is the Dirac delta function, so the differential equation becomes: $$y''+12y'+100y=6\delta(t)+100, \text{for } t>0 $$ Now you have a differential equation, which you can readily solve. The tricky part here is the Dirac delta function, but remember it's zero everywhere but at \$t=0\$. Then for \$t>0\$: $$y''+12y'+100y=100, \text{for } t>0 $$ After solving the differential equation (homogeneous and particular solutions added together): $$y(t)=1+K_1e^{-6t}\cos(8t)+K_2e^{-6t}\sin(8t) \tag1$$ And the constants are found by using the initial conditions. Here is the trick: you have to account for the impulse (the Dirac delta function) right at \$t=0\$ (we have disregarded it so far). Laplace transforms are usually defined from \$t=0^-\$ in order to include impulses (such as the Dirac delta function). So we need to include this impulse if we want to get the same result one would with the Laplace transform. This is a math problem from here on (I won't explain it extensively) but you can read up on it and get familiar with the details. In general, when you have a differential equation with a forcing function which includes a Dirac delta, you need to find the solution for the case \$t>0\$ (just what we did before in (1)), with the initial conditions \$y(0)=0\$, \$y'(0)=a/m\$ (for this case, \$a=6\$ and \$m=1\$). This is what brings in the fact that you have a Dirac delta as part of the forcing function. \$a\$ is just the coefficient multiplying the Dirac delta function: 6 in this problem.
\$m\$ is the coefficient of the highest-order derivative (\$y''\$ is the highest order here and its coefficient is 1). With those initial conditions (\$y(0)=0\$, \$y'(0)=6\$), you can find the values of the constants \$K_1\$ and \$K_2\$. These turn out to be \$K_1=-1\$ and \$K_2=0\$. Therefore: $$y(t)=1-e^{-6t}\cos(8t) $$ It's a lot simpler to solve this problem with the Laplace transform, but it is still possible to find a solution using time-domain analysis. Also, the nature of the Dirac delta function is what makes this problem a bit trickier.
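As a cross-check (my own sketch, pure time domain, no Laplace tables), numerically integrating \$y''+12y'+100y=100\$ with the impulse-adjusted initial conditions \$y(0)=0\$, \$y'(0)=6\$ reproduces \$y(t)=1-e^{-6t}\cos(8t)\$:

```python
import math

def rk4_solve(t_end, h=1e-4):
    """RK4 integration of y'' + 12 y' + 100 y = 100 with y(0)=0, y'(0)=6."""
    def f(y, v):
        return v, 100.0 - 12.0 * v - 100.0 * y   # y' = v, v' = 100 - 12v - 100y
    y, v, t = 0.0, 6.0, 0.0
    while t < t_end - 1e-12:
        k1y, k1v = f(y, v)
        k2y, k2v = f(y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = f(y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = f(y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return y

def analytic(t):
    return 1.0 - math.exp(-6.0 * t) * math.cos(8.0 * t)

print(rk4_solve(1.0), analytic(1.0))   # both ~1.00036
```

The two agree to far better than plotting accuracy, which confirms the constants \$K_1=-1\$, \$K_2=0\$.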
Answer The hill makes an angle of $3.8^{\circ}$ with the horizontal. Work Step by Step Let $\theta$ be the angle the hill makes with the horizontal. The car's weight of $2800~lb$ is directed straight down. The force of $186~lb$ is directed up the hill at an angle of $\theta$ above the horizontal. We can draw a right triangle with the $2800~lb$ weight as the hypotenuse and the force of $186~lb$ as the side opposite the angle $\theta$. We can then find $\theta$: $\sin\theta = \frac{186}{2800}$ $\theta = \arcsin\left(\frac{186}{2800}\right)$ $\theta = 3.8^{\circ}$ The hill makes an angle of $3.8^{\circ}$ with the horizontal.
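A one-line numeric verification of the arithmetic (Python, for illustration only):

```python
import math

# arcsin(186 / 2800), converted from radians to degrees
theta = math.degrees(math.asin(186 / 2800))
print(round(theta, 1))   # 3.8
```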
I'm studying Chapter 5 of Discrete-Time Signal Processing, 3rd edition, by Alan Oppenheim, and I'm having serious difficulties understanding how he obtained equation 5.57. For those who don't have this book: in this part he analyzes the magnitude, phase and group delay of $$ 1- re^{j \theta} e^{-j \omega} $$ which could be either a pole or a zero factor depending on whether it appears in the denominator or numerator of the frequency response. Here $r$ is an arbitrary magnitude and $\theta$ an arbitrary angle. So far I have been able to understand how the phase expression is obtained, as the phase is defined as $$\arctan \frac {\Im}{\Re}$$ and $$ 1- re^{j \theta} e^{-j \omega} = 1- r[ \cos (\theta - \omega) + j \sin(\theta - \omega) ] $$ and as cosine is even and sine is odd $$ = 1- r[ \cos (\omega- \theta) - j \sin(\omega- \theta) ] = 1- r\cos (\omega- \theta) + j r\sin(\omega- \theta) $$ so the resulting phase expression is $$ \arctan \frac {r \sin(\omega- \theta)}{1- r \cos (\omega- \theta)}$$ which coincides with equation 5.56 in the book. But when it comes to finding the group delay (which is the negative derivative of this expression with respect to $\omega$) I'm not obtaining what it says in the book. Moreover, I evaluated the expression in Matlab and again I do not get the book's result. According to the book the group delay is $$ \frac{r^2 - r\cos (\omega- \theta)}{1 + r^2 - 2r\cos (\omega- \theta)}$$ How did they get there? Can you help me?
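In case it helps, here is how I would cross-check the book's formula numerically (NumPy instead of Matlab; $r=0.9$ and $\theta=0.5$ are arbitrary values):

```python
import numpy as np

r, theta = 0.9, 0.5                      # arbitrary pole/zero parameters, r < 1
w = np.linspace(-np.pi, np.pi, 20001)

# Phase of 1 - r e^{j theta} e^{-j w}, as in equation 5.56
phase = np.arctan2(r * np.sin(w - theta), 1.0 - r * np.cos(w - theta))

# Group delay two ways: numerically as -d(phase)/dw, and via the closed form 5.57
grd_numeric = -np.gradient(phase, w)
grd_book = (r**2 - r * np.cos(w - theta)) / (1.0 + r**2 - 2.0 * r * np.cos(w - theta))

err = np.max(np.abs(grd_numeric[10:-10] - grd_book[10:-10]))  # ignore grid edges
print(err)
```

The maximum discrepancy is tiny, so 5.57 really is $-\frac{d}{d\omega}$ of 5.56; the difficulty must lie in carrying out the quotient-rule differentiation by hand.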
In Griffiths' Intro to QM [1] he gives the eigenfunctions of the Hermitian operator $\hat{x}=x$ as being $$g_{\lambda}\left(x\right)~=~B_{\lambda}\delta\left(x-\lambda\right)$$ (cf. last formula on p. 101). He then says that these eigenfunctions are not square integrable because $$\int_{-\infty}^{\infty}g_{\lambda}\left(x\right)^{*}g_{\lambda}\left(x\right)dx ~=~\left|B_{\lambda}\right|^{2}\int_{-\infty}^{\infty}\delta\left(x-\lambda\right)\delta\left(x-\lambda\right)dx ~=~\left|B_{\lambda}\right|^{2}\delta\left(\lambda-\lambda\right) ~\rightarrow~\infty$$ (cf. second formula on p. 102). My question is, how does he arrive at the final term, more specifically, where does the $\delta\left(\lambda-\lambda\right)$ bit come from? My total knowledge of the Dirac delta function was gleaned earlier on in Griffiths and extends to just about understanding $$\tag{2.95}\int_{-\infty}^{\infty}f\left(x\right)\delta\left(x-a\right)dx~=~f\left(a\right)$$ (cf. second formula on p. 53). References: D.J. Griffiths, Introduction to Quantum Mechanics,(1995) p. 101-102.
An important remark: since the function $f(x) = \begin{cases} 1 & x \in \mathbb{Q} \\ 0 & x \in \mathbb{R} \setminus \mathbb{Q} \end{cases}$ is clearly Lebesgue-integrable (being equal to $0$ almost everywhere), you are probably asking about Riemann integrability. In this case, though, you have a problem: the concept of Riemann integrability is defined only for functions defined on bounded intervals, and $f$ is defined on $\Bbb R$. I believe that what you meant to ask about is the following slightly modified function: $g(x) = \begin{cases} 1 & x \in [a,b] \cap \mathbb{Q} \\ 0 & x \in [a,b] \setminus \mathbb{Q} \end{cases}$. In this case, an answer can be given using Lebesgue's criterion of Riemann integrability: a bounded function is Riemann-integrable on $[a,b]$ if and only if its set of discontinuities has Lebesgue measure $0$. Let $x \in [a,b] \cap \Bbb Q$. Then $g(x)=1$. Pick a sequence of irrational numbers $(x_n)_{n\in\Bbb N}$ in $[a,b]$ with $x_n \to x$. Then $g(x_n)=0 \not\to 1=g(x)$, so $g$ is not continuous at $x$. Let $x \in [a,b] \setminus \Bbb Q$. Then $g(x)=0$. Pick a sequence of rational numbers $(x_n)_{n\in\Bbb N}$ in $[a,b]$ with $x_n \to x$. Then $g(x_n)=1 \not\to 0=g(x)$, so $g$ is not continuous at $x$. The above argument shows that $g$ is discontinuous at every point of $[a,b]$, which is a set of Lebesgue measure $b-a \ne 0$, therefore Lebesgue's criterion tells us that $g$ is not Riemann-integrable on $[a,b]$.
ADDED (29 May, 2013) As has been pointed out in the comments, there has been great progress since this answer was first written, and the conjectures below have now been proved, thanks to ground-breaking work of Agol, Kahn--Markovic and Wise. Here's a brief summary of some of the highlights. (Shameless self-promotion: see this survey article for too many further details, including definitions of some of the terms.) Haglund--Wise define the notion of special (non-positively curved) cube complex. If a closed hyperbolic 3-manifold $M$ is homotopy equivalent to a special cube complex then $M$ satisfies L (largeness, defined below). Agol proves that if $M$ is homotopy equivalent to a special cube complex then $M$ also satisfies VFC (the Virtually Fibred Conjecture, also defined below). Kahn--Markovic prove SSC (the Surface Subgroup Conjecture, also defined below), using mixing properties of the geodesic flow. In fact, they construct enough surfaces to show that $M$ is homotopy equivalent to a cube complex. Wise proves (independently of Kahn--Markovic) that if $M$ contains an embedded, geometrically finite surface then $M$ is special. Agol uses a very deep theorem of Wise (the Malnormal Special Quotient Theorem) to prove a conjecture (also of Wise), which states that word-hyperbolic fundamental groups of non-positively curved cube complexes are special. All the properties below follow. It's quite a story, and many other names have gone unmentioned. There were also very important contributions by Sageev (whose thesis initiated the programme of using cube complexes to attack these problems), Groves--Manning, Bergeron--Wise, Hsu--Wise and another very deep paper of Haglund--Wise. To extend these results to the cusped hyperbolic case you need results of Hruska--Wise and Sageev--Wise. Finally, it turns out that similar results hold for all non-positively curved 3-manifolds, a result established by Liu and Przytycki--Wise. Let $M$ be a finite-volume hyperbolic 3-manifold. 
(Some of these extend, suitably restated, to larger classes of 3-manifolds. But it follows from Geometrisation that the hyperbolic case is often the most interesting. These are all trivial or trivially false in the elliptic case, for example.) The Surface Subgroup Conjecture (SSC). $\pi_1M$ contains a subgroup isomorphic to the fundamental group of a closed hyperbolic surface. (Recently proved by Kahn and Markovic.) The Virtually Haken Conjecture (VHC). $M$ has a finite-sheeted covering space with an embedded incompressible subsurface. Virtually positive first Betti number (VPFB). $M$ has a finite-sheeted covering space $\widehat{M}$ with $b_1(\widehat{M})\geq 1$. Virtually infinite first Betti number (VIFB). $M$ has finite-sheeted covering spaces $\widehat{M}_k$ with $b_1(\widehat{M}_k)$ arbitrarily large. Largeness (L). $\pi_1(M)$ has a finite-index subgroup that surjects a non-abelian free group. The Virtually Fibred Conjecture (VFC). $M$ has a finite sheeted cover that is homeomorphic to the mapping torus of a (necessarily pseudo-Anosov) surface automorphism. This is false for graph manifolds. There are fairly easy implications $L\Rightarrow VIFB \Rightarrow VPFB \Rightarrow VHC \Rightarrow SSC$. Also, a fortiori, $VFC\Rightarrow VPFB$. Recently, Daniel Wise announced a proof that $VHC\Rightarrow VFC$. His proof also shows that, if $M$ has an embedded geometrically finite subsurface, then we get $L$ and other nice properties. This list is similar to the one that Agol links to in the comments. Also, I suppose it's exactly what Daniel Moskovich meant by 'The Virtually Fibred Conjecture, and related problems'. I thought some people might be interested in a little more detail. Paul Siegel asks in comments: 'Would it be correct to guess that the "virtually _ conjecture" problems can be translated into a question about the large scale geometry of the fundamental group?' 
Certainly, it's true that most of these can be translated into an assertion about how (some finite-index subgroup of) $\pi_1M$ splits as an amalgamated product, HNN extension or, more generally, as a graph of groups. The equivalence uses the Seifert--van Kampen Theorem in one direction, and something like Proposition 2.3.1 of Culler--Shalen in the other. Rephrased like this, some of the above conjectures turn out as follows. The Virtually Haken Conjecture (VHC). $M$ has a finite-sheeted covering space $\widehat{M}$ such that $\pi_1(\widehat{M})$ splits. Virtually positive first Betti number (VPFB). $M$ has a finite-sheeted covering space $\widehat{M}$ such that $\pi_1(\widehat{M})$ splits as an HNN extension. Largeness (L). $M$ has a finite-sheeted covering space $\widehat{M}$ such that $\pi_1(\widehat{M})$ splits as a graph of groups with underlying graph of negative Euler characteristic. The Virtually Fibred Conjecture (VFC). $M$ has a finite-sheeted covering space $\widehat{M}$ such that $\pi_1(\widehat{M})$ can be written as a semi-direct product $\pi_1(\widehat{M}) \cong K\rtimes\mathbb{Z}$ with $K$ finitely generated. (Here we invoke Stallings' theorem that a 3-manifold whose fundamental group has finitely generated commutator subgroup is fibred.) I don't think I know a way to rephrase $VIFB$ in terms of splittings of $\pi_1$. Often, when people say 'the large scale geometry of $\pi_1$' they're talking about properties that are invariant under quasi-isometry. I'm really not sure whether these splitting properties (or, more exactly, 'virtually having these splitting properties') are invariant under quasi-isometry. Perhaps something like the work of Mosher--Sageev--Whyte does the trick?
Let us define the following matrix: $C=AB$, where $B$ is a block-diagonal matrix with $N$ blocks, $B_1$, $B_2$, …, $B_N$, each of dimensions $M \times M$. I know that $B_k = I_M - \mu R_k$, where $R_k$ is a Hermitian matrix and $\mu$ some positive constant. Moreover, I know that the entries of the matrix $A$ are non-negative real numbers. I also know that the matrix $A$ is right stochastic, i.e., the sum of the elements in each row equals one. In particular, the matrix has the following structure: $A=blkdiag\{A_g \otimes I_{M-1},I_N\}$, where $\otimes$ denotes the Kronecker product, $I_{M-1}$ is an $(M-1)\times(M-1)$ identity matrix, $I_N$ is an $(N\times N)$ identity matrix, $A_g$ denotes an $(N\times N)$ right stochastic matrix with non-negative real entries and $blkdiag\{.\}$ denotes a block-diagonal matrix. Would all this information help to get a result independent of the dimensions of $A$, i.e., $MN$? Can I say that the spectral radius of $C$ is smaller than one for some values of $\mu$? If so, can I determine the range of values of $\mu$ under which the spectral radius of $C$ is smaller than one?
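Not an answer, but the structure in the question is easy to experiment with numerically. A minimal sketch (the sizes $N$, $M$, the value of $\mu$, and the matrices $A_g$, $R_k$ are all invented for illustration) that builds $C=AB$ and computes its spectral radius:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, mu = 3, 4, 0.1  # toy sizes; the question leaves N, M, mu general

# Hypothetical instance of the structure described above.
A_g = rng.random((N, N))
A_g /= A_g.sum(axis=1, keepdims=True)          # right stochastic rows
A = np.block([
    [np.kron(A_g, np.eye(M - 1)), np.zeros((N * (M - 1), N))],
    [np.zeros((N, N * (M - 1))), np.eye(N)],
])                                             # A = blkdiag(A_g x I_{M-1}, I_N)

blocks = []
for _ in range(N):                             # B_k = I_M - mu * R_k
    X = rng.random((M, M)) + 1j * rng.random((M, M))
    R = (X + X.conj().T) / 2                   # Hermitian R_k
    blocks.append(np.eye(M) - mu * R)
B = np.zeros((N * M, N * M), dtype=complex)
for k, Bk in enumerate(blocks):
    B[k * M:(k + 1) * M, k * M:(k + 1) * M] = Bk

rho = lambda X: np.abs(np.linalg.eigvals(X)).max()
# rho(A) = 1 exactly, since the spectrum of A is that of A_g (Perron root 1)
# together with the eigenvalue 1 of I_N; rho(A @ B) depends on mu and the R_k.
print(rho(A), rho(A @ B))
```

This only probes single instances, of course; whether $\rho(C)<1$ holds for a range of $\mu$ is exactly what the question asks in general.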
Here's a quote from Andrew Gilpin (1993) advocating Maurice Kendall's $τ$ over Spearman's $ρ$ for theoretical reasons: [Kendall's $τ$] approaches a normal distribution more rapidly than $ρ$, as $N$, the sample size, increases; and $τ$ is also more tractable mathematically, particularly when ties are present. I can't add much about Goodman-Kruskal $γ$, other than that it seems to produce ever-so-slightly larger estimates than Kendall's $τ$ in a sample of survey data I've been working with lately... and of course, noticeably lower estimates than Spearman's $ρ$. However, I also tried calculating a couple of partial $γ$ estimates (Foraita & Sobotka, 2012), and those came out closer to the partial $ρ$ than the partial $τ$... It took a fair amount of processing time though, so I'll leave the simulation tests or mathematical comparisons to someone else... (who would know how to do them...) As ttnphns implies, you can't conclude that your $ρ$ estimates are better than your $τ$ estimates by magnitude alone, because their scales differ (even though the limits don't). Gilpin cites Kendall (1962) as describing the ratio of $ρ$ to $τ$ as roughly 1.5 over most of the range of values. The two get closer gradually as their magnitudes increase, so as both approach 1 (or -1), the difference becomes infinitesimal. Gilpin gives a nice big table of equivalent values of $ρ$, $r$, $r^2$, $d$, and $Z_r$ out to the third digit for $τ$ at every increment of .01 across its range, just like you'd expect to see inside the cover of an intro stats textbook. He based those values on Kendall's specific formulas, which are as follows:$$\begin{aligned}r &= \sin\bigg(\tau\cdot\frac \pi 2 \bigg) \\\rho &= \frac 6 \pi \arcsin \bigg(\frac{\sin(\tau\cdot\frac \pi 2)} 2 \bigg)\end{aligned}$$(I simplified this formula for $ρ$ from the form in which Gilpin wrote it, which was in terms of Pearson's $r$: $\rho = \frac 6 \pi \arcsin\big(\frac r 2\big)$.)
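For anyone who wants to apply the conversions rather than look them up in Gilpin's table, they are one-liners. A sketch using the standard bivariate-normal relations $r=\sin(\pi\tau/2)$ and $\rho=\frac6\pi\arcsin(r/2)$:

```python
import math

def r_from_tau(tau):
    """Pearson r equivalent of Kendall's tau (bivariate-normal assumption)."""
    return math.sin(tau * math.pi / 2)

def rho_from_tau(tau):
    """Spearman's rho equivalent: rho = (6/pi) * arcsin(r/2), r as above."""
    return (6 / math.pi) * math.asin(r_from_tau(tau) / 2)

# The rho/tau ratio is about 1.5 for small tau and shrinks toward 1
# as both statistics approach their limits.
for tau in (0.1, 0.3, 0.5, 0.9):
    print(tau, round(rho_from_tau(tau), 3), round(rho_from_tau(tau) / tau, 3))
```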
Maybe it would make sense to convert your $τ$ into a $ρ$ and see how the computational change affects your effect size estimate. Seems that comparison would give some indication of the extent to which the problems that Spearman's $ρ$ is more sensitive to are present in your data, if at all. More direct methods surely exist for identifying each specific problem individually; my suggestion would produce more of a quick-and-dirty omnibus effect size for those problems. If there's no difference (after correcting for the difference in scale), then one might argue there's no need to look further for problems that only apply to $ρ$. If there's a substantial difference, then it's probably time to break out the magnifying lens to determine what's responsible. I'm not sure how people usually report effect sizes when using Kendall's $τ$ (to the unfortunately limited extent that people worry about reporting effect sizes in general), but since it seems likely that unfamiliar readers would try to interpret it on the scale of Pearson's $r$, it might be wise to report both your $τ$ statistic and its effect size on the scale of $r$ using the above conversion formula...or at least point out the difference in scale and give a shout out to Gilpin for his handy conversion table. References Foraita, R., & Sobotka, F. (2012). Validation of graphical models. gmvalid Package, v1.23. The Comprehensive R Archive Network. URL: http://cran.r-project.org/web/packages/gmvalid/gmvalid.pdf Gilpin, A. R. (1993). Table for conversion of Kendall's Tau to Spearman's Rho within the context of measures of magnitude of effect for meta-analysis. Educational and Psychological Measurement, 53(1), 87-92. Kendall, M. G. (1962). Rank correlation methods (3rd ed.). London: Griffin.
How to show that the series converges: $$\frac{1}{2^2\log{2}}-\frac{1}{3^2\log{3}}+\frac{1}{4^2\log{4}}-\dots$$ The series can be written as $$\sum_{n=1}^\infty (-1)^{n+1}\frac{1}{(n+1)^{2}\log{(n+1)}}$$ I want to use Leibniz's test. Here $u_n=\frac{1}{(n+1)^{2}\log{(n+1)}}\to 0$ as $n\to \infty$. How do I show that $u_n$ is monotone decreasing? Is there any other method besides Leibniz's test?
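For what it's worth, the hypotheses of the Leibniz test are easy to sanity-check numerically, and for an alternating series with decreasing terms the consecutive partial sums bracket the limit. A small sketch (the cut-off $10^4$ is arbitrary):

```python
import math

def u(n):
    # u_n = 1 / ((n+1)^2 * log(n+1)), the term of the alternating series
    return 1 / ((n + 1) ** 2 * math.log(n + 1))

# u_n should be strictly decreasing (numerator constant, denominator increasing)
assert all(u(n) > u(n + 1) for n in range(1, 10_000))

# Consecutive partial sums S_k then bracket the limit ever more tightly.
S, partial = 0.0, []
for n in range(1, 10_001):
    S += (-1) ** (n + 1) * u(n)
    partial.append(S)
print(partial[-2], partial[-1])  # the limit lies between these two values
```

Of course this is no substitute for the proof; monotonicity follows immediately because $(n+1)^2\log(n+1)$ is increasing in $n$.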
How can Hawking radiation with a finite (greater than zero) temperature come from the event horizon of a black hole? Redshifted thermal radiation still has a Planck spectrum, but with a lower temperature (remember the CMB, with its temperature redshifted by the expansion of the universe). Now, the redshift at the event horizon is infinite (time is frozen for a distant observer), so the temperature of the radiation would be zero for him/her, that is, no radiation would be detected. Oh, one of my favourite questions. Let me try to explain why black holes radiate: To understand the Hawking radiation, you need to know about the Bogoliubov-Valatin transformation, which is often used to diagonalise Hamiltonians, and which was actually developed in the context of superconductivity and superfluidity. If you have the creation and annihilation operators $$a^\dagger$$ and $$a,$$ you can define new operators \begin{align} b&=ua+va^\dagger\\ b^\dagger&=u^*a^\dagger+v^*a. \end{align} Just defining new operators is fun, but only really helpful if certain conditions are met. The Bogoliubov-Valatin transformation is the canonical mapping from the set of $a$-operators to the set of $b$-operators: \begin{align}\left[b,b^\dagger\right]&=\left[ua+va^\dagger,u^*a^\dagger+v^*a\right]\\&=\left[ua,u^*a^\dagger+v^*a\right]+\left[va^\dagger,u^*a^\dagger+v^*a\right]\\&=\left[ua,u^*a^\dagger\right]+\left[ua,v^*a\right]+\left[va^\dagger,u^*a^\dagger\right]+\left[va^\dagger,v^*a\right]\\&=|u|^2\left[a,a^\dagger\right]+uv^*\left[a,a\right]+u^*v\left[a^\dagger,a^\dagger\right]+|v|^2\left[a^\dagger,a\right]\\&=|u|^2\left[a,a^\dagger\right]+uv^*\cdot0+u^*v\cdot0-|v|^2\left[a,a^\dagger\right]\\&=\left(|u|^2-|v|^2\right)\left[a,a^\dagger\right]\\&=|u|^2-|v|^2,\end{align}where we have to choose $u$ and $v$ in a way that makes our transformation indeed canonical, i.e. such that $|u|^2-|v|^2=1$. This is a transformation of the phase space.
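As a quick numerical illustration (not part of the original answer), one can represent $a$ and $a^\dagger$ as truncated matrices on a finite Fock space and check that the condition $|u|^2-|v|^2=1$ preserves the commutator, here with the standard bosonic parametrization $u=\cosh r$, $v=\sinh r$:

```python
import numpy as np

D = 30                                    # Fock-space truncation dimension
a = np.diag(np.sqrt(np.arange(1, D)), 1)  # annihilation: a|n> = sqrt(n)|n-1>
adag = a.conj().T

r = 0.7                                   # squeezing parameter (illustrative)
u, v = np.cosh(r), np.sinh(r)             # cosh^2 - sinh^2 = 1 automatically
b = u * a + v * adag
bdag = u * adag + v * a

comm = b @ bdag - bdag @ b                # should equal the identity...
# ...except in the last row/column, an artifact of truncating the Fock space.
print(np.max(np.abs(comm[:-1, :-1] - np.eye(D - 1))))
```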
This transformation can be used to transform one coordinate system into another one, coordinate systems which are accelerated compared to each other! If we have one observer in the past and another one in the future, both observing a region where in between them a black hole came into existence, their coordinate systems are accelerated (compared to each other), because the black hole curves space-time. The past-observer sees a vacuum, maybe a star that will become a black hole, but otherwise a vacuum. The future-observer sees a strongly curved space-time full of radiation. But why? The reason is the uncertainty principle! We don't know the energy state of the vacuum; it depends on our coordinate system! I am not talking here about the position-momentum uncertainty, I am talking about the energy-time uncertainty! The higher the precision of your energy measurement, the higher the uncertainty of your time measurement. And because the vacuum can have different energy states, it can also spontaneously create particles - the Hawking radiation. Summary: A black hole radiates because different observers can observe different energy states of the vacuum around it. This has to do with the coordinate system and the uncertainty principle. The vacuum energy depends on both. So, the redshift precisely at the event horizon might be infinite, but not a bit away from it.
I am sure it's just your eyelashes creating a filtering effect, but if you look at a bright(ish) light source such as a lightbulb while squinting, it looks like you are seeing straight light rays emanating from the lightbulb. I assume you can't see an individual ray of light like this? As Qmechanic noted, a question pretty similar to this has already been asked, but I figured you might be interested in a non-diffractive answer. As is obvious from women's hair-dying commercials, human hair has a small degree of reflective sheen to it, and so when you squint, it seems plausible that as the eyelashes intermesh over each other, some light which otherwise would not enter your pupil will strike your eyelashes, be reflected, and enter your eyes from a different angle than the original light source. One simple way to model this is to treat the eyelash mesh as a semi-transparent isotropic diffusive scattering surface with absorption. An ideal example of this would be a thin sheet of plastic with absorbing dye and scattering $\text{TiO}_2$ particles dispersed throughout it. In this model, a fraction $T<1$ of the light is transmitted without any collision or angular deviation, a fraction $A$ is absorbed by the eyelashes, and a fraction $S$ is scattered isotropically from the scattering plane, with $$T+A+S=1.$$ For concreteness, let the $YZ$-plane be the scattering surface, let the eye (modeled as a tiny square of area $\alpha$ in the $YZ$ plane) be located at $(-d,0,0)$ and put a point source of light at $(r,0,0)$ with emission intensity $\mathcal{I}r^2$ per steradian (so that the light intensity remains constant regardless of distance $r$).
Then from the light due to the point source which passes through without being absorbed or scattered, the eye receives an amount of light $$P_\text{source}=T\mathcal{I}r^2\frac{\alpha}{(r+d)^2}\rightarrow T\mathcal{I}\alpha$$ in the limit $r\rightarrow\infty$ (ie, as the point source is moved to infinity but its apparent brightness is kept constant). Meanwhile, a patch of diffusing surface located at $(0,y,z)$ with unit area will receive an amount of light $$\mathcal{I}r^2\frac{r}{\left(r^2+y^2+z^2\right)^{3/2}}\rightarrow\mathcal{I}$$ which will then be isotropically reradiated with intensity $\frac{S\mathcal{I}}{4\pi}$ per steradian. From this, the eye will detect an amount $$P_\text{diffuse}=\alpha\frac{S\mathcal{I}}{4\pi}\frac{d}{\left(d^2+y^2+z^2\right)^{3/2}}.$$ Now all that remains is to convert these to solid angle intensities as detected by the eye. For the diffuse light, note that when looking at the point $(0,y,z)$ on the diffusing panel, a solid angle $d\Omega$ looks at a patch of area $$a=4\pi d\Omega\frac{\left(d^2+y^2+z^2\right)^{3/2}}{d}$$ and thus the apparent visual brightness due to the diffusing surface $I_\text{diffuse}$ is given by $$I_\text{diffuse}d\Omega=aP_\text{diffuse}=\alpha S\mathcal{I}d\Omega$$ which is exactly the apparent brightness of a Lambertian surface, and is independent of viewing angle $\Omega$. Meanwhile, the apparent brightness of the point source is given by $$I_\text{source}(\Omega)=P_\text{source}\delta(\Omega)=\alpha T\mathcal{I}\delta(\Omega)$$ where the angular coordinates have been chosen so that $\Omega=0$ points towards the light source and where $\delta$ is the Dirac delta function. It is the $I_\text{diffuse}$ which gives rise to the light halo which surrounds bright objects when you squint at them. 
In particular, by convolving the object's visual profile with the response function previously obtained, one sees that you visualize the original object (with brightness reduced by a factor of $T$) along with a uniformly smeared-out halo. As the linked question (Qmechanic) suggests, diffraction effects cause some of that streaking. There are other cool things that can happen -- the same link mentions eyelash filter effects. I remember observing that back in the analog TV days: I could squint at a slightly noisy TV image and actually clean up the image, because I'd filtered out the high-frequency components which were due to background noise. BTW, an "individual ray of light" is not something you could see as a line, because it'd be pointing straight at your eyeball. The fact that you see a streak means you're perceiving light arriving from a continuous range of angles. Okay, so I just witnessed this phenomenon and I'm not 100% convinced it's entirely from diffraction, because, like Carl Witthoft said, "...an individual ray of light is not something you could see as a line, because it'd be pointing straight at your eyeball." I was looking at an orange colored light. I noticed that when my eyes focused at a certain distance away from my body, not directed at the light source, I could see what appeared to be focused 'lines' of light, like a laser would shine. I assume, since the light is coming from a bulb which has a curved surface, that I was viewing light that was diffracted by particles in the air at a certain radial distance from my eyeball. Since my vision was not focused on the light source, I don't see how DumpsterDoofus's explanation of the diffuse halo which surrounds light sources is a well-defined explanation of what I or Kyle Kanos was seeing. It looks like diffracted light that you can focus on; like being able to focus a microscope on distinct, well-defined additive light-wave interference.
The key is that instead of focusing my vision on the effect of diffraction leading to local collections of additive interfering light reflected off of a surface perpendicular to my eye, my eye(s) (I could close one eye and focus even better on this phenomenon, without squinting) were focusing on light that was being emitted from the source at an angle with respect to my eyeball. Instead of this: https://upload.wikimedia.org/wikipedia/commons/f/f1/Wavepanel.png I was seeing something more like this: http://reednightingale.com/projects/physical/laserSpirograph/DSCN0025.JPG So, I suppose that I was seeing light perpendicular to my eye that was being refracted off of particles in the air after that light was diffracted by particles in the air at a different position (than that of the refracting particles). The latter position being the focal distance of my vision which led to me being able to 'see' the 'rays of light' which were being emitted at an angle from the light source w/r/t my eyeball.
Vor's answer gives the standard definition. Let me try to explain the difference a bit more intuitively. Let $M$ be a bounded-error probabilistic polynomial-time algorithm for a language $L$ that answers correctly with probability at least $p\geq\frac{1}{2}+\delta$. Let $x$ be the input and $n$ the size of the input. What distinguishes an arbitrary $\mathsf{PP}$ algorithm from a $\mathsf{BPP}$ algorithm is the positive gap between the probability of accepting $x\in L$ and the probability of accepting $x\notin L$. The essential thing about $\mathsf{BPP}$ is that the gap is at least $n^{-O(1)}$. I will try to explain why this distinction is significant and allows $\mathsf{BPP}$ to be considered a class of efficient algorithms (even conjectured to be equal to $\mathsf{P}$) whereas $\mathsf{PP}$ is considered inefficient (in fact $\mathsf{PP}$ contains $\mathsf{NP}$). All of this comes from this gap. Let's start by looking at $\mathsf{PP}$ more carefully. Note that if an algorithm uses at most $r(n)$ random bits during its execution and its error probability is smaller than $2^{-r(n)}$, then the error probability is actually $0$: there cannot be any choice of random bits that makes the algorithm answer incorrectly. Furthermore, an algorithm with running time $t(n)$ cannot use more than $t(n)$ random bits, so if the error of a probabilistic algorithm with worst-case running time $t(n)$ is smaller than $2^{-t(n)}$, the algorithm in fact never errs. With a similar argument we can show that the case where the difference between the probability of accepting an $x\in L$ and the probability of accepting an $x\notin L$ is too small is similar to the case where we have almost no gap, as in the $\mathsf{PP}$ case. Let's now move towards $\mathsf{BPP}$. In probabilistic algorithms, we can boost the probability of answering correctly. Say we want to boost the correctness probability to $1-\epsilon$, e.g. with error probability $\epsilon=2^{-n}$ (exponentially small error).
The idea is simple: run $M$ several times and take the majority answer. How many times should we run $M$ to get the error probability down to at most $\epsilon$? $\Theta(\delta^{-1} \lg (1/\epsilon))$ times. The proof is given at the bottom of this answer. Now let's take into consideration that the algorithms we are discussing need to be polynomial-time. That means that we cannot run $M$ more than polynomially many times. In other words, $\Theta(\delta^{-1} \lg (1/\epsilon)) = n^{O(1)}$, or more simply $$\delta^{-1} \lg (1/\epsilon) = n^{O(1)}$$ This relation categorizes the bounded-error probabilistic algorithms into classes depending on their error probability. There is no difference between the error probability $\epsilon$ being $2^{-n}$ or a positive constant (i.e. one that doesn't change with $n$) or $\frac{1}{2}-n^{-O(1)}$. We can get from one of these to the other ones while remaining inside polynomial time. However, if $\delta$ is too small, say $0$, $2^{-n}$, or even $n^{-\omega(1)}$, then we don't have a way of boosting the correctness probability and reducing the error probability sufficiently to get into $\mathsf{BPP}$. The main point here is that in $\mathsf{BPP}$ we can efficiently reduce the error probability exponentially, so we are almost certain about the answers, and that is what makes us consider this class of algorithms as efficient algorithms. The error probability can be reduced so much that a hardware failure is more likely, or even a meteor falling on the computer is more likely, than the probabilistic algorithm making an error. That is not true for $\mathsf{PP}$: we don't know any way of reducing the error probability, and we are left almost as if we are answering by tossing a coin (not completely, the probabilities are not exactly half and half, but they are very close to it).
This section gives the proof that to obtain error probability $\epsilon$, when we start with an algorithm with gap $(\frac{1}{2}-\delta,\frac{1}{2}+\delta)$, we should run $M$ $\Theta(\delta^{-1} \lg (1/\epsilon))$ times. Let $N_k$ be the algorithm that runs $M$ for $k$ times and then answers according to the majority answer. For simplicity, let's assume that $k$ is odd so we don't have ties. Consider the case that $x \in L$. The case $x \notin L$ is similar. Then $$\mathsf{Pr}\{M(x) \text{ accepts}\} = p \geq \frac{1}{2} + \delta$$ To analyze the correctness probability of $N_k$ we need to estimate the probability that the majority of the $k$ runs accept. Let $X_i$ be $1$ if the $i$th run accepts and $0$ if it rejects. Note that each run is independent of the others, as they use independent random bits. Thus the $X_i$s are independent Boolean random variables where $$\mathbb{E}[X_i] = \mathsf{Pr}\{X_i=1\} = \mathsf{Pr}\{M(x)\text{ accepts}\} = p \geq \frac{1}{2}+\delta$$ Let $Y = \Sigma_{i=1}^k X_i$. We need to estimate the probability that the majority accepts, i.e. the probability that $Y\geq\frac{k}{2}$. $$\mathsf{Pr}\{N_k(x) \text{ accepts}\} = \mathsf{Pr}\{Y \geq \frac{k}{2}\}$$ How to do it? We can use the Chernoff bound, which tells us the concentration of probability near the expected value. For any random variable $Z$ with expected value $\mu$, we have $$\mathsf{Pr}\{|Z-\mu| > \alpha\mu\} < e^{-\frac{\alpha^2}{4}\mu}$$ which says that the probability that $Z$ is $\alpha\mu$ far from its expected value $\mu$ decreases exponentially as $\alpha$ increases. We will use it to bound the probability of $Y < \frac{k}{2}$. Note that by linearity of expectation we have $$\mathbb{E}[Y] = \mathbb{E}[\Sigma_{i=1}^k X_i] = \Sigma_{i=1}^k \mathbb{E}[X_i] = kp \geq \frac{k}{2} + k\delta$$ Now we can apply the Chernoff bound. We want an upper-bound on the probability of $Y< \frac{k}{2}$.
The Chernoff bound will give an upper-bound on the probability of $|Y-(\frac{k}{2}+k\delta)| > k\delta$, which is sufficient. We have $$Pr\{|Y - kp| > \alpha kp\} < e^{-\frac{\alpha^2}{4}kp}$$ and if we pick $\alpha$ such that $\alpha kp = k\delta$ we are done, so we pick $\alpha = \frac{\delta}{p} \leq \frac{2\delta}{2\delta+1}$. Therefore we have $$Pr\{Y < \frac{k}{2} \} \leq Pr\{|Y - (\frac{k}{2}+k\delta)| > k\delta\} \leq Pr\{|Y - kp| > \alpha kp\} < e^{-\frac{\alpha^2}{4}kp}$$ and if you do the calculations you will see that $$\frac{\alpha^2}{4}kp \leq \frac{\delta^2}{4\delta+2}k = \Theta(k\delta)$$ so we have $$Pr\{Y < \frac{k}{2} \} < e^{-\Theta(k\delta)}$$ We want the error to be at most $\epsilon$, so we want $$e^{-\Theta(k\delta)} \leq \epsilon$$ or in other words $$\Theta(\delta^{-1} \lg (1/\epsilon)) \leq k $$ One essential point here is that in the process we will use many more random bits and the running time will also increase, i.e. the worst-case running-time of $N_k$ will be roughly $k$ times the running-time of $M$. Here the midpoint of the gap was $\frac{1}{2}$. But in general this doesn't need to be the case. We can adopt a similar method for other values by taking other fractions in place of the majority for accepting.
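To see the majority-vote amplification concretely, the boosted success probability can be computed exactly from the binomial distribution. A sketch (the single-run bias 0.6 and the repetition counts are arbitrary illustrative choices):

```python
from math import comb

def majority_success(p, k):
    """Probability that a majority of k independent runs is correct,
    when each run is correct with probability p (k odd, so no ties)."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

p = 0.6                       # single-run correctness: gap delta = 0.1
for k in (1, 11, 101, 1001):  # error shrinks exponentially with k
    print(k, 1 - majority_success(p, k))
```

Running this shows the error dropping from 0.4 to something astronomically small by $k=1001$, which is the whole point: polynomially many repetitions buy an exponentially small error as long as the gap $\delta$ is at least inverse-polynomial.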
Primitive Pythagorean triplets $a^2 = b^2 + c^2$, $\gcd(b,c) = 1$, are given by $a = r^2 + s^2$, $b = r^2 - s^2$ and $c = 2rs$, where $r > s$ are natural numbers with $\gcd(r,s)=1$ and $r-s$ odd. Let the $n$-th primitive triplet be the one formed by the $n$-th smallest pair in increasing order of $(r,s)$. Claim 1: Let $\mu_n$ be the arithmetic mean of the ratio of the perimeter to the hypotenuse of the first $n$ primitive Pythagorean triplets; then, $$ \lim_{n \to \infty}\mu_n = \frac{\pi}{2} + \log 2$$ Claim 2: Let $\mu_x$ be the arithmetic mean of the ratio of the perimeter to the hypotenuse of all primitive Pythagorean triplets in which no side exceeds $x$; then, $$ \lim_{x \to \infty}\mu_x = 1 + \frac{4}{\pi}$$ Update 8-Oct-2019: Claim 2 has been proved on MathOverflow. Data for claim 1: From the plot of $\mu_n$ vs. $n$ for $n \le 5 \times 10^8$ we observe that $\mu_n$ is approaching a limiting value which is somewhere between $2.263942$ and $2.263944$. The midpoint of the distribution of $\mu_n$ agrees with the above closed form to $6$ decimal places. Claim 2 has similar data. Question: Are these limits known? If not, can they be proved or disproved? Sage code for claim 1:
r = 2
s = 1
n = 0
total = 0
max_r = 10^20
while(r <= max_r):
    s = 1
    while(s < r):
        a = r^2 + s^2
        b = r^2 - s^2
        if(gcd(a,b) == 1):
            c = 2*r*s
            if(gcd(b,c) == 1):
                n = n + 1
                total = total + ((a+b+c)/a).n()
                if(n%10^5 == 0):
                    print(n, total/n)
        s = s + 1
    r = r + 1
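Claim 1 can also be probed in plain Python, mirroring the Sage code above (the cut-off $r\le 300$ is arbitrary; the conjectured limit is $\frac\pi2+\log 2\approx 2.263944$):

```python
from math import gcd, pi, log

# Enumerate primitive triplets in increasing order of (r, s), as in Claim 1,
# and track the running mean of perimeter / hypotenuse.
total = 0.0
count = 0
for r in range(2, 301):
    for s in range(1, r):
        if gcd(r, s) == 1 and (r - s) % 2 == 1:   # primitivity conditions
            a, b, c = r*r + s*s, r*r - s*s, 2*r*s
            total += (a + b + c) / a
            count += 1

mean = total / count
print(count, mean, pi / 2 + log(2))
```

Even at this modest cut-off the running mean agrees with the conjectured closed form to a couple of decimal places.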
Volume 56, Issue 2, Pages 117-120, 2016. 2466-1384 (pISSN), 2466-1392 (eISSN). Appearance of osteoporosis in rat experimental autoimmune encephalomyelitis Ahn, Meejung; Kang, Sohi; Park, Channam; Kim, Jeongtae; Jung, Kyungsook; Yang, Miyoung; Kim, Sung-Ho; Moon, Changjong; Shin, Taekyun Received: 2016.03.24; Accepted: 2016.05.26; Published: 2016.06.30 Abstract Experimental autoimmune encephalomyelitis (EAE) in Lewis rats is characterized by transient paralysis followed by recovery. To evaluate whether transient paralysis in EAE affects bone density, tibiae of EAE rats were morphologically investigated using micro-computed tomography and histology. The parameters of bone health were significantly reduced in EAE rats at the peak stage relative to those of controls (p < 0.05). The reduction in bone density remained unchanged even in the recovery stage. Collectively, the present data suggest that osteoporosis occurs in paralytic rats with monophasic EAE, possibly through disuse of the hindlimbs and/or autoimmune inflammation. Keywords: autoimmunity; experimental autoimmune encephalomyelitis; micro-computed tomography; osteoporosis
While I was reading about charge conjugation I found some (apparently) contradictory facts. For example, Itzykson & Zuber say (page 153): "Up to a phase, $\cal C$ interchanges particles and antiparticles with the same momentum, energy and helicity", while Zee says (page 101): "You can easily convince yourself that the charge conjugate of a left handed field is a right handed field and vice versa". How can this be possible (in the case $m=0$, when chirality coincides with helicity)? In conclusion: what is the handedness of the charge conjugate of a left handed field? In order to convince myself I worked out two contradictory proofs. Let $U$ be the charge conjugation operator in Fock space, and $C$ the matrix that realizes charge conjugation on spinors: $\Psi^c = U^\dagger\Psi U = C\bar{\Psi}^t$. Then: 1) $U^\dagger P_L\Psi U = P_L U^\dagger\Psi U = P_L C\bar{\Psi}^t$ (because $U$ acts only upon creation and annihilation operators), and so here we proved that the charge conjugate of a left handed field is still left handed; 2) $U^\dagger P_L\Psi U = C\overline{(P_L\Psi)}^t = C\gamma^0 P_L\Psi^* = P_RC\bar{\Psi}^t$ (using the Pauli-Dirac as well as the Weyl representation of the gamma matrices), and so here we proved that the charge conjugate of a left handed field is instead right handed. Can you help me? Note added: does a massless Majorana fermion exist?
Moser-lower.tex \section{Lower bounds for the Moser problem}\label{moser-lower-sec} In this section we discuss lower bounds for $c'_{n,3}$. Clearly we have $c'_{0,3}=1$ and $c'_{1,3}=2$, so we focus on the case $n \ge 2$. Observe that if $\{w(1),w(2),w(3)\}$ is a geometric line in $[3]^n$, then $w(1), w(3)$ both lie in the same sphere $S_{i,n}$ (which was defined in Section \ref{notation-sec}), and that $w(2)$ lies in a lower sphere $S_{i-r,n}$ for some $1 \leq r \leq i \leq n$. Furthermore, $w(1)$ and $w(3)$ are separated by Hamming distance $r$. As a consequence, we see that $S_{i-1,n} \cup S_{i,n}^o$ (or $S_{i-1,n} \cup S_{i,n}^e$) is a Moser set for any $1 \leq i \leq n$, since any two distinct elements of $S_{i,n}^o$ are separated by a Hamming distance of at least two. This leads to the lower bound $$ c'_{n,3} \geq \binom{n}{i-1} 2^{i-1} + \binom{n}{i} 2^{i-1} = \binom{n+1}{i} 2^{i-1}.$$ It is not hard to see that $\binom{n+1}{i+1} 2^{i} > \binom{n+1}{i} 2^{i-1}$ if and only if $3i < 2n+1$, and so this lower bound is maximised when $i = \lceil \frac{2n+1}{3} \rceil$ for $n \geq 2$, giving the formula \eqref{binom}. This leads to the lower bounds $$ c'_{2,3} \geq 6; c'_{3,3} \geq 16; c'_{4,3} \geq 40; c'_{5,3} \geq 120$$ which give the right lower bounds for $n=2,3$, but are slightly off for $n=4,5$. {\bf where was this bound first observed?} Applying Stirling's formula, we see that this lower bound takes the form \begin{equation}\label{cpn3} c'_{n,3} \geq C 3^n / \sqrt{n} \end{equation} for some absolute constant $C>0$. One can do slightly better by considering the sets $$ A := S_{i-1,n} \cup S_{i,n}^o \cup A'$$ where $A' \subset S_{i+1,n}$ has the property that any two elements in $A'$ are separated by a Hamming distance of at least three, or have a Hamming distance of exactly one but their midpoint lies in $S_{i,n}^e$.
By the previous discussion we see that this is a Moser set, and we have the lower bound $$ c'_{n,3} \geq \binom{n+1}{i} 2^{i-1} + |A'|.$$ This gives some improved lower bounds for $c'_{n,3}$: \begin{itemize} \item By taking $n=4$, $i=3$, and $A' = \{ 1111, 1333, 3333\}$, we obtain $c'_{4,3} \geq 43$; \item By taking $n=5$, $i=4$, and $A' = \{ 13111, 13113, 31311, 13333 \}$, we obtain $c'_{5,3} \geq 124$. \end{itemize} This gives the lower bounds in Theorem \ref{moser} up to $n=5$. One could continue this sort of procedure for higher $n$, but the improvements over \eqref{binom} are rather minor, and in particular we have been unable to locate a bound which is asymptotically better than \eqref{cpn3}. The lower bound $c'_{6,3} \geq 353$ was located by a genetic algorithm; see Appendix \ref{genetic-alg}.
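The line-free property of the basic construction can be brute-force checked for small $n$. A sketch for $n=3$, $i=3$ (the definitions below are assumptions, since Section \ref{notation-sec} is not reproduced here: we take $S_{i,n}$ to be the points of $[3]^n$ with exactly $i$ coordinates different from $2$, with the superscript $o$ restricting to an odd number of $3$s):

```python
from itertools import product

def sphere(i, n, parity=None):
    # Points of [3]^n with exactly i coordinates != 2, optionally with the
    # number of 3s having the given parity (1 = odd, 0 = even).
    return {w for w in product((1, 2, 3), repeat=n)
            if sum(1 for x in w if x != 2) == i
            and (parity is None or w.count(3) % 2 == parity)}

def geometric_lines(n):
    # A geometric line comes from a template in {1,2,3,*}^n with at least
    # one wildcard *, the wildcards being replaced simultaneously by 1, 2, 3.
    for t in product((1, 2, 3, '*'), repeat=n):
        if '*' in t:
            yield [tuple(v if x == '*' else x for x in t) for v in (1, 2, 3)]

n, i = 3, 3
A = sphere(i - 1, n) | sphere(i, n, parity=1)
assert len(A) == 16  # binom(n+1, i) * 2^(i-1) = 4 * 4 = 16
assert all(not set(line) <= A for line in geometric_lines(n))
print("Moser set of size", len(A), "in [3]^3 verified")
```

This confirms the bound $c'_{3,3}\geq 16$ from the construction, matching the table above.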
He explains this by citing the fact that the square of the wave function, which gives the probability density, is maximum at the origin. Not exactly. The answer by @dsva explains why this is wrong and I'll expand on that in a minute, but first note that it's easy to see why this is wrong: if the electron were most likely to be found at the origin, it would be likely to interact with the nucleus, something we expect not to happen. (In fact there is a small chance of this, because the nucleus has a non-zero size.) Essentially he's missing the need to sum over the spherical shell at each radius: the volume element is $dV = 4\pi r^2dr$, which vanishes at $r=0$. The probability of finding the electron in a region of zero radial size is zero, but we can evaluate the relative probability for two radii. For 1s orbitals that is: $$\frac {r_1^2 e^{-2{r_1}/a}}{r_1^2 e^{-2{r_1}/a}+r_2^2 e^{-2{r_2}/a}}$$ and $$\frac {r_2^2 e^{-2{r_2}/a}}{r_1^2 e^{-2{r_1}/a}+r_2^2 e^{-2{r_2}/a}}$$ For $r_1=0$ we clearly get a relative probability of zero compared to any $r_2\neq 0$. At the same time, we all agree that the Bohr radius is the distance at which the probability of finding the electron is maximum for the 1s orbital. This is because of what an expectation value is and how it is calculated in QM. An expectation value is an average over the entire space of the effect of an operator on a wavefunction. It is not simply the operator multiplied by the probability density. In short, an expectation value is an average over all space, whereas the probability density is not.
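A quick numeric check of this argument (Python; a sketch using the 1s radial probability density $r^2|\psi_{1s}|^2 \propto r_{\vphantom{1}}^2 e^{-2r/a}$, in units where the Bohr radius $a=1$; function names are ours):

```python
import math

def radial_density(r, a=1.0):
    # Unnormalised 1s radial probability density: r^2 |psi|^2 ~ r^2 exp(-2r/a)
    return r ** 2 * math.exp(-2 * r / a)

def relative_prob(r1, r2, a=1.0):
    # Relative probability of finding the electron at radius r1 rather than r2
    w1, w2 = radial_density(r1, a), radial_density(r2, a)
    return w1 / (w1 + w2)
```

As claimed, `relative_prob(0.0, r2)` is exactly zero for any `r2 != 0`, and `radial_density` peaks at `r = a`, the Bohr radius.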
Bulletin of the Belgian Mathematical Society - Simon Stevin Bull. Belg. Math. Soc. Simon Stevin Volume 14, Number 4 (2007), 641-652. Achievement of continuity of $(\varphi,\psi)$-derivations without linearity Abstract Suppose that $\frak A$ is a $C^*$-algebra acting on a Hilbert space $\frak K$, and $\varphi, \psi$ are mappings from $\frak A$ into $B(\frak K)$ which are not assumed to be necessarily linear or continuous. A $(\varphi, \psi)$-derivation is a linear mapping $d: \frak A \to B(\frak K)$ such that $$d(ab)=\varphi(a)d(b)+d(a)\psi(b)\quad (a,b\in \frak A).$$ We prove that if $\varphi$ is a multiplicative (not necessarily linear)\ $*$-mapping, then every $*$-$(\varphi,\varphi)$-derivation is automatically continuous. Using this fact, we show that every $*$-$(\varphi,\psi)$-derivation $d$ from $\frak A$ into $B(\frak K)$ is continuous if and only if the $*$-mappings $\varphi$ and $\psi$ are left and right $d$-continuous, respectively. Article information Source Bull. Belg. Math. Soc. Simon Stevin, Volume 14, Number 4 (2007), 641-652. Dates First available in Project Euclid: 15 November 2007 Permanent link to this document https://projecteuclid.org/euclid.bbms/1195157133 Digital Object Identifier doi:10.36045/bbms/1195157133 Mathematical Reviews number (MathSciNet) MR2384460 Zentralblatt MATH identifier 1138.46041 Citation Hejazian, S.; Janfada, A. R.; Mirzavaziri, M.; Moslehian, M. S. Achievement of continuity of $(\varphi,\psi)$-derivations without linearity. Bull. Belg. Math. Soc. Simon Stevin 14 (2007), no. 4, 641--652. doi:10.36045/bbms/1195157133. https://projecteuclid.org/euclid.bbms/1195157133
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse: Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that $$ g \circ f = 1_x \quad \textrm{ and } \quad f \circ g = 1_y .$$ I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\). Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse. Now we're ready for isomorphisms! Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\). Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)?
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism. We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\). Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions. Puzzle 145.
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\). So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them. One more example: Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism. This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147.
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples! Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
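To make Puzzle 148 concrete, here is a toy sketch in Python (all data and names are invented for illustration). The schema is the free category on a graph with one edge \(e : v_1 \to v_2\); the functors \(F, G\) are 'databases' given by a set for each vertex and a function for the edge; a natural transformation \(\alpha\) is one map per object, and naturality is checked componentwise:

```python
# Two "databases" F and G on the schema with a single arrow e : v1 -> v2.
F_objects = {"v1": ["a", "b"], "v2": ["x", "y"]}
F_e = {"a": "x", "b": "y"}          # F applied to the edge e

G_objects = {"v1": [1, 2], "v2": [10, 20]}
G_e = {1: 10, 2: 20}                # G applied to the edge e

# Candidate natural transformation: one map per object of the schema.
alpha = {"v1": {"a": 1, "b": 2}, "v2": {"x": 10, "y": 20}}

def naturality_holds():
    # The naturality square for e commutes iff
    # alpha_v2(F_e(s)) == G_e(alpha_v1(s)) for every s in F(v1)
    return all(alpha["v2"][F_e[s]] == G_e[alpha["v1"][s]]
               for s in F_objects["v1"])
```

Here each component of \(\alpha\) is a bijection and the square commutes, so \(\alpha\) is a natural isomorphism: the two databases hold different values but have identical structure.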
Review Questions For Loops Q09.01 Create a for loop to print out the numbers 1 to 10. Q09.02 Create a for loop to print out the numbers -1 to -10, starting at -1 and ending at -10. Q09.03 Create a for loop to print out all the letters in the word 'love'. Q09.04 Use a for loop to sum the elements in the list [1,3,5,8,12]. Print the sum to the user. Q09.05 The first 10 terms of the Fibonacci sequence are below: Create the Fibonacci sequence using a for loop. Print out the first 20 terms of the Fibonacci sequence on one line. Q09.06 This problem is about Fizz Buzz, a programming task that is sometimes used in interviews. (a) Use a for loop to print out the numbers 1 to 30. (b) Use a for loop to print out all the numbers 1 to 30, but leave out any number which is divisible by 3, such as 3, 6 and 9. (c) Use a for loop to print out all the numbers 1 to 30, but leave out any number which is divisible by 5, such as 5, 10 and 15. (d) Use a for loop to print out all the numbers 1 to 30, but insert the word fizz for any number that is divisible by 3, insert the word buzz for any number that is divisible by 5, and insert the words fizz buzz for any number that is divisible by both 3 and 5, like 15. Q09.07 Imagine you can see the future of investing and over the next four years, the interest rate of return on investments is going to be 0.02, 0.03, 0.015, 0.06. Prompt the user for an initial investment with Python's input() function and use the formula below to calculate how much the investment will be worth after four years. new balance = old balance + old balance × interest rate Note the first "old balance" is the person's initial investment. Q09.08 A geometric series is a series that has a common ratio between the terms. The sum of the geometric series that starts at 1/2 and has a common ratio of 1/2 approaches the value 1. The formula that shows the sum of a geometric series which approaches 1 is below.
Use the geometric series above to approximate the value of 1 after 10 terms are added. Print out how far off the geometric series approximation is from 1. Q09.09 A Taylor Series is an infinite series of mathematical terms that, when summed together, approximate a mathematical function. A Taylor Series can be used to approximate the mathematical functions e^x, sine, and cosine. The Taylor Series expansion for the function e^x is below: Write a program that asks a user for a number x, then calculates e^x using the Taylor Series expansion. Calculate 20 terms. Q09.10 Write a function called lt100() that accepts 1 variable as input: a 1D NumPy array. The output of lt100() will be a single 1D NumPy array. Use a for loop to go through the input NumPy array 1 element at a time, starting with element 0 and going upward. If the element's value is less than 100, put the element into the output 1D NumPy array. If a value of nan (Python's not-a-number) is encountered, stop adding elements to the output variable, but do not raise an exception. If nan appears in the first element, the function should return an empty 1D NumPy array, not the None object. Researching the numpy.append() function will help. Use np.nan for testing your code with a nan value. Q09.11 Write a function called get_bigger() that accepts two 1D NumPy arrays for input: A and B. A and B will contain the same number of elements. Use a for loop to iterate through both A and B one element at a time. If A's current element is greater than B's, put A's element into the output 1D NumPy array variable C. Otherwise, put the sum of the 2 elements into C. C will have the same number of elements as both A and B. For example, if A contains [5, -10, 1] and B contains [2, -10, 8], then C should contain [5, -20, 9]. Q09.12 Write a function called str_add() that accepts 1 variable as its input: a list of strings. Use a for loop to add the character 'G' to the end of each string in the list. Return the list of altered strings as the only output.
For example, if the list ['hello', 'bye'] was passed into your function, the output should be ['helloG', 'byeG']. Q09.13 Write a loop that prints out your name 20 times. Each time your name is printed, it should be on a new line. Q09.14 Write a loop that prints out " around and" 20 times. Each time " around and" is printed, it should be on the same line. As in around and around and around and around and around .... Q09.15 The Appendix contains a section on ASCII character codes. Use code similar to the code shown in the Appendix to print out the ASCII character code and resulting character for just the letters a to z and A to Z, and the digits 0 to 9. Q09.16 Use a for loop to print out the days of the week. Print each day of the week on its own line. Q09.17 Use a for loop to print out the spelling of the word mississippi with one letter on each line. Q09.18 An employee starts with an annual salary of 58 thousand dollars. Print out the employee's salary each year for five years if the employee receives a 2.5 percent (0.025) raise each year. Q09.19 The factorial function is the product of integers between 1 and n. The formula for factorial is below: Write a Python function that contains a for loop to find 5 factorial (5!) and 20 factorial (20!). Q09.20 Create a for loop that prints out all of the even numbers between 1 and 100. Q09.21 Use Python's input() function to ask a user for an integer between 1 and 10. Then use a for loop to print out all of the multiples of that number between 1 and 100. Use your program to print out all of the multiples of 9 between 1 and 100. Q09.22 The Leibniz approximation for the value of \pi is below. Use 15 terms in the Leibniz approximation to calculate the value of \pi. Compute the error of the Leibniz approximation with 15 terms compared to Python's math.pi function. Hint: (-1)^i will alternate in sign as i steps through integers. Q09.23 Use a for loop to print two columns on each line.
In each line, give an SI prefix, then show which power the prefix corresponds to. Start with "nano" and "10^-9" and end on "giga" and "10^9". An example of the table is below.
nano 10^-9
micro 10^-6
milli 10^-3
... ...
mega 10^6
giga 10^9
Q09.24 Python's bin() function converts an integer into its binary representation (a number represented as 1's and 0's). Use the bin() function to build a table of values from 1 to 10 showing the binary representation of each number. Hint: use bin(i)[2:] to remove 0b from the output of the bin() function. An example of the table is below.
0 0
1 1
2 10
3 11
4 100
... ...
Q09.25 Iodine-131 is a radioactive isotope of iodine that has a half-life of about 8 days. This means that after 8 days 100g of iodine-131 will decay to 100g/2 = 50g, and after 16 days, 100g of iodine-131 will decay to (100g/2)/2 = 25g. Use a for loop to calculate the mass of 100g of iodine-131 left after 1 year of radioactive decay. Q09.26 Use a for loop to ask a user for five numbers. Use another for loop to find the largest of the five numbers and print it back to the user. Q09.27 Use a for loop to ask a user for three exam grades. Print back to the user the average of the three grades. Q09.28 Use a for loop to ask a user for 10 numbers. Print back to the user the mean, median and mode of the numbers. Hint: Python's statistics module is part of the Standard Library. statistics.mean(), statistics.median() and statistics.mode() are three functions present in the statistics module. Q09.29 Write a program that requests a word from a user and then counts the number of vowels in the word. The English vowels are a, e, i, o, u, y. Hint: the code 'a' in ['a','e','i','o','u','y'] and 'a' in 'aeiouy' both return True. While Loops Q09.40 Use a while loop to sum the elements in the list [1,3,5,8,12]. Print the sum to the user. Q09.41 Use a while loop to print out the numbers between 1 and 100 that have whole number square roots. Q09.42 Create a program that prompts a user for test scores.
Continue to prompt the user for additional test scores until the user types 'q'. When the user types 'q', the program stops asking the user for test scores and prints out the following statistics about the test scores the user entered:
mean
median
standard deviation
Q09.43 Use a while loop to validate user input. Ask the user to enter a value between 0 and 1. Print back to the user "Try again" if the user's input is invalid. Print "Your number works!" when the user enters valid input. Q09.44 Use a while loop to validate user input. Ask the user to enter a day of the week. Keep asking the user for a day of the week until they enter one. When the user enters a day of the week, print "Yup, it's <day of the week>". Q09.45 Write a program to play the game higher/lower. Tell the user you have picked a random integer between 1 and 20. The code below creates a random integer n between 1 and 20:
from random import randint
n = randint(1, 20)
(a) Ask the user to enter a number (one time) and tell the user if the random number is higher or lower. Print higher if the random number is higher than the user's guess, print lower if the random number is lower than the user's guess. Print You guessed it: <random number> if the user guesses the random number. (b) Modify your program so that the program keeps printing higher or lower after each guess until the user guesses the random number. When the user guesses the random number print You guessed it: <random number>. (c) Extend your higher/lower game to record the number of guesses the user enters to guess the random number. When the user guesses the random number, print You guessed it: <random number> in <number of tries>. Q09.46 A Taylor Series is an infinite series of mathematical terms that, when summed together, approximate a mathematical function. A Taylor Series can be used to approximate e^x, sine, and cosine.
The Taylor Series expansion for the function e^x is below: Write a program that asks a user for a number x, then calculates e^x using the Taylor Series expansion. Continue to add terms to the Taylor Series until the result from the Taylor Series is within 0.001 of the value of e^x calculated with Python's math.exp() function. Errors, Explanations, and Solutions Run the following code snippets. Explain the error in your own words. Then rewrite the code snippet to solve the error. Q09.80
n = [1 2 3]
for n[1] == 2:
    n = n + 1
end
Q09.81
while x in [1, 2, 3]: print(x)
Q09.82
n = 1
while 1 == n
    print('valid')
    n = n +1
Q09.83
for i in range(3):
print(i)
Q09.84
for i in range(5,1): print(i)
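For reference, one possible shape of the loop that Q09.46 asks for (a sketch, not the only solution; the function name and argument defaults are ours):

```python
import math

def exp_taylor(x, tol=0.001):
    # Add Taylor terms x**n / n! one at a time until the running sum
    # is within tol of the reference value math.exp(x)
    total, term, n = 0.0, 1.0, 0
    while abs(total - math.exp(x)) >= tol:
        total += term
        n += 1
        term *= x / n   # next term: x**n / n!
    return total
```

For example, `exp_taylor(1.0)` approaches the value of e to within 0.001 after a handful of terms.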
Quasirandomness Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the density increment method or on some kind of generalization of Szemerédi's regularity lemma. In general, one has some kind of parameter associated with a set, which in our case will be the number of combinatorial lines it contains, and one would like a deterministic definition of the word "quasirandom" with the following key property. Every quasirandom set [math]\mathcal{A}[/math] has roughly the same value of the given parameter as a random set of the same density. Needless to say, this is not the only desirable property of the definition, since otherwise we could just define [math]\mathcal{A}[/math] to be quasirandom if it has roughly the same value of the given parameter as a random set of the same density. The second key property is this. Every set [math]\mathcal{A}[/math] that fails to be quasirandom has some other property that we can exploit. These two properties are already discussed in some detail in the article on the density increment method: this article concentrates more on examples of quasirandomness in other contexts, and possible definitions of quasirandomness connected with the density Hales-Jewett theorem.
Examples of quasirandomness definitions
Bipartite graphs
Let X and Y be two finite sets and let [math]f:X\times Y\rightarrow [-1,1].[/math] Then f is defined to be c-quasirandom if [math]\mathbb{E}_{x,x'\in X}\mathbb{E}_{y,y'\in Y}f(x,y)f(x,y')f(x',y)f(x',y')\leq c.[/math] Since the left-hand side is equal to [math]\mathbb{E}_{x,x'\in X}|\mathbb{E}_{y\in Y}f(x,y)f(x',y)|^2,[/math] it is always non-negative, and the condition that it should be small implies that [math]\mathbb{E}_{y\in Y}f(x,y)f(x',y)[/math] is small for almost every pair [math]x,x'.[/math] If G is a bipartite graph with vertex sets X and Y and [math]\delta[/math] is the density of G, then we can define [math]f(x,y)[/math] to be [math]1-\delta[/math] if xy is an edge of G and [math]-\delta[/math] otherwise. We call f the balanced function of G, and we say that G is c-quasirandom if its balanced function is c-quasirandom.
Subsets of finite Abelian groups
Hypergraphs
Subsets of grids
A possible definition of quasirandom subsets of [math][3]^n[/math]
(To be continued.)
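The bipartite definition above is easy to compute with matrices. Writing F for the [math]|X|\times|Y|[/math] matrix of values [math]f(x,y)[/math], the left-hand side of the definition equals [math]\|FF^T\|_F^2/(|X|^2|Y|^2)[/math], which follows from the rewriting displayed above. A sketch (Python/NumPy; function names are ours):

```python
import numpy as np

def c4_average(F):
    # E_{x,x' in X} E_{y,y' in Y} f(x,y) f(x,y') f(x',y) f(x',y'),
    # computed as ||F F^T||_F^2 / (|X|^2 |Y|^2)
    X, Y = F.shape
    G = F @ F.T            # G[x, x'] = sum_y f(x, y) f(x', y)
    return float((G ** 2).sum()) / (X ** 2 * Y ** 2)

def balanced_function(A):
    # A is a 0/1 bipartite adjacency matrix with edge density delta;
    # the balanced function is 1 - delta on edges and -delta on non-edges
    return A - A.mean()
```

For instance, the balanced function of a complete bipartite graph is identically zero, so `c4_average` returns 0: a complete bipartite graph is c-quasirandom for every c.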
2019-09-04 12:06 Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019 2019-08-15 17:39 LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p.
In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018 2019-08-15 17:36 Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018 2019-02-12 14:01 XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) Recent years have seen a resurgence of interest in searches for exotic states, motivated by precision spectroscopy studies of beauty and charm hadrons which have led to the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018 2019-01-21 09:59 Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...]
LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018 Rekord szczegółowy - Podobne rekordy 2019-01-15 14:22 Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018 Rekord szczegółowy - Podobne rekordy 2019-01-10 15:54 Rekord szczegółowy - Podobne rekordy 2018-12-20 16:31 Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process such that Online data are immediately available offline for physics analysis (Turbo analysis), the computing capacity of the HLT farm has been used simultaneously for different workflows : synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. 
Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018 Rekord szczegółowy - Podobne rekordy 2018-12-14 16:02 The Timepix3 Telescope andSensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to with stand a radiation dose up to $8 \times 10^{15} 1 ~\text {MeV} ~\eta_{eq} ~ \text{cm}^{−2}$, with the additional challenge of a highly non uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8. Rekord szczegółowy - Podobne rekordy
I am looking into how to calculate the sample size needed for a specific coefficient to be significantly different from zero. I have 4 independent variables and 1 dependent variable, and I have 50 observations so far. I would like to know how many more observations I need to make one of the coefficients significant, because it is trending. I have read the paper by Ken Kelley and Scott E. Maxwell, "Sample Size for Multiple Regression: Obtaining Regression Coefficients That Are Accurate, Not Simply Significant". They give this formula to estimate sample size: $N = \Big(\frac{z_{(1-\alpha/2)}}{w}\Big)^2\Big(\frac{1-R^2}{1-R^2_{XX_j}}\Big) + p + 1$ where $R^2$ represents the population multiple correlation coefficient predicting the criterion (dependent) variable $Y$ from the $p$ predictor variables, and $R^2_{XX_j}$ represents the population multiple correlation coefficient predicting the $j$th predictor from the remaining $p - 1$ predictors. The calculated $N$ should be rounded up to the next larger integer for the sample size. The $w$ in the equation is the desired half-width of the confidence interval. $R^2$ is easy to calculate, but I don't know how to calculate $R^2_{XX_j}$. Do you guys have any idea?
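For what it's worth, here is a small sketch in Python of how $R^2_{XX_j}$ can be obtained: regress the $j$th predictor on the remaining $p-1$ predictors and take the ordinary $R^2$ of that fit (equivalently, $R^2_{XX_j} = 1 - 1/\mathrm{VIF}_j$). The function names and the random data below are my own illustrative choices, not from the paper:

```python
import numpy as np
from statistics import NormalDist

def r2_xj(X, j):
    """R^2 from regressing predictor j on the remaining predictors
    (equivalently 1 - 1/VIF_j)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])   # include an intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def kelley_maxwell_n(r2_model, r2_xxj, w, p, alpha=0.05):
    """N = (z/w)^2 * (1 - R^2)/(1 - R^2_XXj) + p + 1, rounded up."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return int(np.ceil((z / w) ** 2 * (1 - r2_model) / (1 - r2_xxj) + p + 1))

# Hypothetical data: 50 observations, 4 predictors
X = np.random.default_rng(1).standard_normal((50, 4))
print(kelley_maxwell_n(r2_model=0.5, r2_xxj=r2_xj(X, 0), w=0.2, p=4))
```

In practice you would replace the random `X` with your own predictor matrix and plug in your estimated model $R^2$ and the half-width $w$ you want for the coefficient's confidence interval.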
Consider the simplest possible case, in which the time reversal operator $\hat{\mathrm{T}}$ is given by the operation of complex conjugation $\hat{\mathrm{K}}$. We can view $\hat{\mathrm{T}}$ as an antiunitary operator on the Hilbert space of our quantum system. In particular, we can determine its action on the momentum operator $\hat{\mathbf{p}}$ as $$ \hat{\mathrm{T}} \hat{\mathbf{p}} \hat{\mathrm{T}}^{-1} = - \hat{\mathbf{p}}. \hspace{2cm}(1) $$ Question: How can we describe this operation within the phase space formulation of quantum mechanics? Can the form of this operation be derived from the Wigner transformation? Some ideas: In this formulation, all non-commutativity of quantum mechanics is absorbed into the Moyal product, denoted by the binary $\star$ operation. The relation above is transformed to its phase-space counterpart $$ \mathrm{T} \star \mathbf{p} \star \mathrm{T}^{-1} = - \mathbf{p}. \hspace{2cm}(2) $$ It seems to me that the phase-space representation $\mathrm{T}$ of the operator $\hat{\mathrm{T}}$ should be rather non-trivial. My suspicion is that it will still contain the operation of complex conjugation (because the relations above should also hold when we consider the canonical momentum in electromagnetic fields). The difficulty arises because $\mathbf{p}$ is now a real $c$-number and not a complex operator. Thus, $\mathrm{T}$ must have a dependence on position and momentum. Conjecture: Based on these observations, I would conjecture that $$ \mathrm{T} = \mathcal{U}(\mathbf{x},\mathbf{p})\mathrm{K}, $$ where $\mathcal{U}$ is a yet to be determined $\star$-unitary function on phase space. One thing which can easily be done is a translation $t$ by $\mathbf{p}'$, i.e., $$ t(\mathbf{p'})\star \mathbf{p} \star t(\mathbf{p'})^{-1} = \mathbf{p} +\mathbf{p}'. $$ This operation is easily defined. After we take the limit $\mathbf{p'}\to -2 \mathbf{p}$ we get the desired result. Can we incorporate this into the conjectured form?
EDIT 1: Following the discussion in the comment section, we could interpret statement (2) above as an integral operator, call it $\tilde{\mathrm{T}}$, which acts via the functional $$ \tilde{\mathrm{T}}[f(\mathbf{x},\mathbf{p})] = \int \text{d}\mathbf{x}'\,\text{d}\mathbf{p}'\, \delta(\mathbf{x}-\mathbf{x}')\delta(\mathbf{p}+\mathbf{p}') f(\mathbf{x}',\mathbf{p}'). $$ Because of its operational definition, it is clear that this coincides with the Wigner transformation $\mathcal{W}$ of Eq. (1). It remains to show that we can write this in terms of a $\star$-product, i.e. $$ \tilde{\mathrm{T}}[f(\mathbf{p})] = \mathcal{W} [ \hat{\mathrm{T}} f(\hat{\mathbf{p}}) \hat{\mathrm{T}}^{-1} ] \overset{\text{to show}}{=} \mathrm{T} \star f( \mathbf{p} ) \star \mathrm{T}^{-1}. $$ EDIT 2: I might have found some additional constraints. If we assume that $\mathrm{T} \star \mathrm{T}^{-1} = \mathrm{T}^{-1} \star \mathrm{T} = 1 $, Eq. (2) can be used to derive $$ \left\lbrace \mathrm{T} ~\overset{\star}{,} ~\mathbf{p} \right\rbrace= 0, $$ which is only fulfilled for $\mathrm{T} = 0$, which is not a solution to the original problem. This means $\mathrm{T}^{-1}$ cannot coincide with the $\star$-inverse of $\mathrm{T}$.
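As a numerical sanity check of the integral-operator form in EDIT 1 (a sketch, not part of the derivation itself): complex conjugation of the wavefunction flips the momentum argument of its Wigner function, $W_{\psi^*}(x,p) = W_\psi(x,-p)$. Below is a minimal Python check on a Gaussian wave packet; the grid and packet parameters are arbitrary choices of mine.

```python
import numpy as np

# Gaussian wave packet with mean momentum p0 (hbar = 1, arbitrary choice)
p0 = 1.3
psi = lambda x: np.pi**-0.25 * np.exp(-x**2 / 2 + 1j * p0 * x)
psi_T = lambda x: np.conj(psi(x))   # time-reversed state: complex conjugation

def wigner(state, x, p, y):
    """W(x,p) = (1/pi) * Int dy  state*(x+y) state(x-y) exp(2ipy),
    evaluated by simple quadrature on the grid y."""
    integrand = np.conj(state(x + y)) * state(x - y) * np.exp(2j * p * y)
    return (np.sum(integrand) * (y[1] - y[0])).real / np.pi

y = np.linspace(-10.0, 10.0, 4001)
for x0, pp in [(0.4, 0.7), (-1.1, -0.2)]:
    assert abs(wigner(psi_T, x0, pp, y) - wigner(psi, x0, -pp, y)) < 1e-8
```

This is consistent with the functional $\tilde{\mathrm{T}}$ above, but it does not by itself settle whether the momentum flip can be written as a $\star$-conjugation.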
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse: Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that \(g \circ f = 1_x\) and \(f \circ g = 1_y\). I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\). Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse. Now we're ready for isomorphisms! Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\). Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)?
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism. We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g : y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g : y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g : y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\). Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions. Puzzle 145.
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\). So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them. One more example: Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism. This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147.
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples! Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
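A concrete finite illustration of Puzzles 145-146 — isomorphisms in \(\mathbf{Set}\) are exactly the bijections — can be sketched in Python, modelling functions between finite sets as dictionaries (a toy example of mine, not part of the course notes):

```python
# Functions between finite sets modelled as Python dicts
def compose(g, f):
    """g after f: apply f, then g."""
    return {x: g[f[x]] for x in f}

def identity(xs):
    return {x: x for x in xs}

f = {1: 'a', 2: 'b', 3: 'c'}            # a bijection {1,2,3} -> {'a','b','c'}
f_inv = {v: k for k, v in f.items()}    # read the inverse off the pairs

assert compose(f_inv, f) == identity({1, 2, 3})        # f^{-1} . f = 1_x
assert compose(f, f_inv) == identity({'a', 'b', 'c'})  # f . f^{-1} = 1_y
```

If `f` failed to be one-to-one, the dict comprehension for `f_inv` would silently drop pairs and one of the two assertions would fail, matching the fact that only bijections have two-sided inverses.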
So to recap, you are starting with a covector $T_a$, and then you take what is called the "exterior derivative" to get a rank (0,2) tensor $A_{ab}$ defined by $A_{\mu \nu} = T_{\mu,\nu} - T_{\nu,\mu}$. Here the comma in the subscript denotes the partial derivative, so that $T_{\mu,\nu}=\dfrac{\partial}{\partial y^\nu} T_\mu$. Notice also that we are considering the partial derivative and not the covariant derivative (we may not even have a metric). I am using Greek indices as coordinate indices and Roman indices as abstract indices. Now suppose we have a primed coordinate system, related to the unprimed system by the covector transformation law $T'_{\nu'} =J^\nu_{\nu'} T_\nu$, where $T'$ denotes the components of the tensor in the transformed coordinate system. Then we would expect a rank (0,2) tensor to transform according to the law $A'_{\mu'\nu'} =J^\mu_{\mu'}J^\nu_{\nu'} A_{\mu \nu}$. But is this true for $A$ as we have defined it? Let's find out. In the unprimed coordinate system, the components of $A$ are given by $$A_{\mu \nu}=T_{\mu,\nu}-T_{\nu,\mu}$$ Now in the primed coordinate system the components of $A$ are given by $$A'_{\mu' \nu'}=T'_{\mu',\nu'}-T'_{\nu',\mu'}= \partial_{\nu'} T'_{\mu'} - \partial_{\mu'} T'_{\nu'} = \partial_{\nu'} J^{\mu}_{\mu'} T_\mu-\partial_{\mu'} J^{\nu}_{\nu'} T_\nu.$$ Now the partial derivative operator transforms according to the rule $\partial_{\mu'}= J^{\mu}_{\mu'} \partial_\mu$, so we get $$A'_{\mu' \nu'}= J^{\nu}_{\nu'}\partial_{\nu} J^{\mu}_{\mu'} T_\mu-J^{\mu}_{\mu'}\partial_{\mu} J^{\nu}_{\nu'} T_\nu=J^{\nu}_{\nu'} J^{\mu}_{\mu'} \partial_{\nu} T_\mu + T_\mu J^{\nu}_{\nu'}\partial_{\nu} J^{\mu}_{\mu'} - J^{\mu}_{\mu'} J^{\nu}_{\nu'} \partial_{\mu} T_\nu-T_\nu J^{\mu}_{\mu'}\partial_{\mu} J^{\nu}_{\nu'} \\= J^{\mu}_{\mu'} J^{\nu}_{\nu'} A_{\mu \nu} + T_\mu J^{\nu}_{\nu'}\partial_{\nu} J^{\mu}_{\mu'}- T_\nu J^{\mu}_{\mu'}\partial_{\mu} J^{\nu}_{\nu'},$$ where the second to last equality is by the product rule, and the last
equality groups the first and third terms of the second last expression using the definition of $A_{\mu \nu}$. But the first term in the last expression is all we are supposed to get according to our expectation $A'_{\mu'\nu'} =J^\mu_{\mu'}J^\nu_{\nu'} A_{\mu \nu}$. Therefore, we must show that $T_\mu J^{\nu}_{\nu'}\partial_{\nu} J^{\mu}_{\mu'}- T_\nu J^{\mu}_{\mu'}\partial_{\mu} J^{\nu}_{\nu'}=0$. This is the crucial step, which only works because $A$ is an exterior derivative. Moving back to primed derivatives and changing a dummy index, we have $$T_\mu J^{\nu}_{\nu'}\partial_{\nu} J^{\mu}_{\mu'}- T_\nu J^{\mu}_{\mu'}\partial_{\mu} J^{\nu}_{\nu'} =T_\mu \partial_{\nu'} J^{\mu}_{\mu'}- T_\nu \partial_{\mu'} J^{\nu}_{\nu'} = T_\mu \partial_{\nu'} J^{\mu}_{\mu'}- T_\mu \partial_{\mu'} J^{\mu}_{\nu'}=T_\mu \left( \partial_{\nu'} J^{\mu}_{\mu'} - \partial_{\mu'} J^{\mu}_{\nu'}\right),$$ so all that remains is to show that $\partial_{\nu'} J^{\mu}_{\mu'} = \partial_{\mu'} J^{\mu}_{\nu'}$. But remember that the coordinate transformation Jacobian matrix $J$ is given by $J^{\mu}_{\mu'}= \dfrac{\partial y^\mu}{\partial y'^{\mu'}},$ so that $\partial_{\nu'} J^{\mu}_{\mu'} = \dfrac{\partial^2 y^\mu}{\partial y'^{\nu'} \partial y'^{\mu'}} = \dfrac{\partial^2 y^\mu}{\partial y'^{\mu'} \partial y'^{\nu'}} = \partial_{\mu'} J^{\mu}_{\nu'},$ and we are done. Going forward you will learn that the exterior derivative can operate on totally antisymmetric tensors of any rank, and will always give a result that transforms correctly under changes of coordinates. Other expressions involving partial derivatives do not have cancellations of the extra terms (involving derivatives of the Jacobian matrix), so these expressions do not transform correctly and aren't really tensors; for them you must use the covariant derivative instead of partial derivatives.
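The cancellation argued above can be checked symbolically. Here is a sketch with SymPy, using the polar-to-Cartesian change of coordinates and an arbitrarily chosen covector field (both are illustrative assumptions of mine): the exterior derivative computed directly in the primed chart agrees with the tensor transformation law applied to the Cartesian components.

```python
import sympy as sp

# Polar -> Cartesian change of coordinates: unprimed y^mu = (x, y),
# primed y'^{mu'} = (r, theta).  The covector field T is an arbitrary example.
r, th = sp.symbols('r theta', positive=True)
X, Y = sp.symbols('X Y')
cart = [X, Y]
unprimed = [r * sp.cos(th), r * sp.sin(th)]    # x(r,theta), y(r,theta)
primed = [r, th]
to_polar = {X: unprimed[0], Y: unprimed[1]}

T = [X * Y, X**2]                              # Cartesian components T_x, T_y
Tc = [t.subs(to_polar) for t in T]             # ... expressed in polar coords

# Jacobian J^mu_{mu'} = d y^mu / d y'^{mu'}
J = [[sp.diff(unprimed[m], primed[mp]) for mp in range(2)] for m in range(2)]

# Covector law: T'_{mu'} = J^mu_{mu'} T_mu
Tp = [sum(J[m][mp] * Tc[m] for m in range(2)) for mp in range(2)]

# A'_{mu'nu'} computed directly in the primed chart ...
Ap_direct = [[sp.diff(Tp[mp], primed[nu]) - sp.diff(Tp[nu], primed[mp])
              for nu in range(2)] for mp in range(2)]

# ... versus the tensor law applied to the Cartesian A_{mu nu}
A = [[sp.diff(T[m], cart[n]) - sp.diff(T[n], cart[m]) for n in range(2)]
     for m in range(2)]
Ap_law = [[sum(J[m][mp] * J[n][nu] * A[m][n].subs(to_polar)
               for m in range(2) for n in range(2))
           for nu in range(2)] for mp in range(2)]

for i in range(2):
    for k in range(2):
        assert sp.simplify(Ap_direct[i][k] - Ap_law[i][k]) == 0
```

Repeating the run with a symmetrized combination such as $T_{\mu,\nu} + T_{\nu,\mu}$ in place of $A_{\mu\nu}$ makes the assertions fail, which is the point of the answer: only the antisymmetrized derivative transforms tensorially.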
I am studying a paper of Berestycki, Caffarelli, and Nirenberg - Monotonicity for elliptic equations in unbounded Lipschitz domains - and I don't understand a convergence claim in the proof of Lemma 3.4. Suppose that $u\in C^2(\Omega)\cap C(\overline\Omega)$ is such that $$ \left\{ \begin{array}{rl} \Delta u+f(u)=0, & in \ \Omega,\\ u=0, & on \ \partial\Omega,\\ 0<u<1, & in \ \Omega, \end{array} \right. $$ where $\Omega=\{x=(x',x_n)\in\mathbb{R}^n\times\mathbb{R};x_n>\varphi(x')\}$, with $\varphi$ a Lipschitz function. LEMMA 3.4: For any $h>0$, the solution $u$ is bounded away from $1$ in $\Omega_h=\{x\in\Omega;\varphi(x')<x_n<\varphi(x')+h\}$. Sketch of the proof: Suppose by contradiction that there exists a sequence $(x'^j,x_n^j)_j=(x^j)_j\subset\Omega_h$ such that $u(x^j)\rightarrow1$. By the translation $T^j(x)=x-x^j$ we move the set $\Omega$ to $\Omega^j$, given by $$\Omega^j=\{z=(z',z_n)\in\mathbb{R}^n;z_n>\varphi^j(z')=\varphi(z'+x'^j)-x_n^j\}.$$ It is easy to verify that the functions $\varphi^j$ are Lipschitz continuous and uniformly bounded on compact sets, so by the Arzelà-Ascoli Theorem a subsequence of $\varphi^j$ tends to a function $\widehat\varphi$. For each set $\Omega^j$ you have a shifted solution $$u^j(z',z_n)=u(z'+x'^j,z_n+x_n^j),$$ satisfying $$ \left\{ \begin{array}{rl} \Delta u^j+f(u^j)=0, & in \ \Omega^j,\\ u^j=0, & on \ \partial\Omega^j,\\ 0<u^j<1, & in \ \Omega^j, \end{array} \right. $$ Finally, my doubt: in the article, they say that the shifted solutions converge uniformly on compact subsets of $$\widehat\Omega=\{x\in\mathbb{R}^n;x_n>\widehat\varphi(x')\},$$ to a solution $\widehat u$ that satisfies $$ \left\{ \begin{array}{rl} \Delta \widehat u+f(\widehat u)=0, & in \ \widehat\Omega,\\ \widehat u=0, & on \ \partial\widehat\Omega,\\ \end{array} \right.
$$ If the initial sequence $(x^j)_j$ is bounded, I can argue that this holds (because on compact sets the shifted solutions and their first and second derivatives would be uniformly bounded and equicontinuous, so one could apply the Arzelà-Ascoli theorem). But I think the sequence could be unbounded; I don't know. Can someone help me with this argument? Thank you.
Voronin's Universality Theorem (for the Riemann zeta-function) according to Wikipedia: Let $U$ be a compact subset of the "critical half-strip" $\{s\in\mathbb{C}:\frac{1}{2}<Re(s)<1\}$ with connected complement. Let $f:U \rightarrow\mathbb{C}$ be continuous and non-vanishing on $U$ and holomorphic on $U^{int}$. Then $\forall\varepsilon >0$ $\exists t=t(\varepsilon)$ $\forall s\in U: |\zeta(s+it)-f(s)|<\varepsilon $. (Q1) Is this the accurate statement of Voronin's Universality Theorem? If so, are there any (recent) generalisations of this statement with respect to, say, the shape of $U$ or the conditions on $f$? (If I am not mistaken, the theorem dates back to 1975.) (Q2) Historically, were the Riemann zeta-function and Dirichlet L-functions the first examples of functions on the complex plane with such "universality"? Are there any examples of functions (on the complex plane) with such properties beyond the theory of zeta- and L-functions? (Q3) Is there any known general argument why such functions (on $\mathbb{C}$) "must" exist, i.e. in the sense of a non-constructive proof of existence? (with the Riemann zeta-function being considered as a proof of existence by construction). (Q4) Is anything known about the structure of the class of functions with such a universality property, say, on some given strip in the complex plane? (Q5) Are there similar examples when dealing with $C^r$-functions from some open subset of $\mathbb{R}^n$ into $\mathbb{R}^m$? Thanks in advance and Happy New Year!
Here is a closely related pair of examples from operator theory, von Neumann's inequality and the theory of unitary dilations of contractions on Hilbert space, where things work for 1 or 2 variables but not for 3 or more. In one variable, von Neumann's inequality says that if $T$ is an operator on a (complex) Hilbert space $H$ with $\|T\|\leq1$ and $p$ is in $\mathbb{C}[z]$, then $\|p(T)\|\leq\sup\{|p(z)|:|z|=1\}$. Szőkefalvi-Nagy's dilation theorem says that (with the same assumptions on $T$) there is a unitary operator $U$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T^n=PU^n|_H$ for each positive integer $n$. These results extend to two commuting variables, as Ando proved in 1963. If $T_1$ and $T_2$ are commuting contractions on $H$, Ando's theorem says that there are commuting unitary operators $U_1$ and $U_2$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T_1^{n_1}T_2^{n_2}=PU_1^{n_1}U_2^{n_2}|_H$ for each pair of nonnegative integers $n_1$ and $n_2$. This extension of Sz.-Nagy's theorem has the extension of von Neumann's inequality as a corollary: If $T_1$ and $T_2$ are commuting contractions on a Hilbert space and $p$ is in $\mathbb{C}[z_1,z_2]$, then $\|p(T_1,T_2)\|\leq\sup\{|p(z_1,z_2)|:|z_1|=|z_2|=1\}$. Things aren't so nice in 3 (or more) variables. Parrott showed in 1970 that 3 or more commuting contractions need not have commuting unitary dilations. Even worse, the analogues of von Neumann's inequality don't hold for $n$-tuples of commuting contractions when $n\geq3$. Some have considered the problem of quantifying how badly the inequalities can fail. Let $K_n$ denote the infimum of the set of those positive constants $K$ such that if $T_1,\ldots,T_n$ are commuting contractions and $p$ is in $\mathbb{C}[z_1,\ldots,z_n]$, then $\|p(T_1,\ldots,T_n)\|\leq K\cdot\sup\{|p(z_1,\ldots,z_n)|:|z_1|=\cdots=|z_n|=1\}$. 
So von Neumann's inequality says that $K_1=1$, and Ando's Theorem yields $K_2=1$. It is known in general that $K_n\geq\frac{\sqrt{n}}{11}$. When $n>2$, it is not known whether $K_n\lt\infty$. See Paulsen's book (2002) for more. On page 69 he writes: The fact that von Neumann’s inequality holds for two commuting contractions but not three or more is still the source of many surprising results and intriguing questions. Many deep results about analytic functions come from this dichotomy. For example, Agler [used] Ando’s theorem to deduce an analogue of the classical Nevanlinna–Pick interpolation formula for analytic functions on the bidisk. Because of the failure of a von Neumann inequality for three or more commuting contractions, the analogous formula for the tridisk is known to be false, and the problem of finding the correct analogue of the Nevanlinna–Pick formula for polydisks in three or more variables remains open.
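As a quick numerical illustration of von Neumann's inequality in one variable (my own sketch, not from Paulsen's book), one can take a random matrix scaled to a contraction, apply a sample polynomial, and compare against the supremum of the polynomial on a grid over the unit circle:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
T = A / np.linalg.norm(A, 2)    # scale to spectral norm 1: a contraction

# p(z) = z^2 + z/2 + 1/4, applied to the matrix and on the unit circle
pT = T @ T + 0.5 * T + 0.25 * np.eye(6)
z = np.exp(2j * np.pi * np.linspace(0.0, 1.0, 2000, endpoint=False))
sup_circle = np.abs(z**2 + 0.5 * z + 0.25).max()

# von Neumann: ||p(T)|| <= sup_{|z|=1} |p(z)|
assert np.linalg.norm(pT, 2) <= sup_circle + 1e-9
```

For two commuting contractions the analogous check would illustrate Ando's theorem; for three or more, the text above explains that counterexamples exist, so no such inequality can be verified in general.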
Earlier we showed how unity factors can be used to express quantities in different units of the same parameter. For example, a density can be expressed in g/cm³ or lb/ft³. Now we will see how conversion factors representing mathematical functions, like D = m/V, can be used to transform quantities into different parameters. For example, what is the volume of a given mass of gold? Unity factors and conversion factors are conceptually different, and we'll see that the "dimensional analysis" we develop for unit conversion problems must be used with care in the case of functions. When we are referring to the same object or sample of material, it is often useful to be able to convert one parameter into another. For example, in our discussion of fossil-fuel reserves we find that 318 Pg (3.18 × 10¹⁷ g) of coal, 28.6 km³ (2.86 × 10¹⁰ m³) of petroleum, and 2.83 × 10³ km³ (2.83 × 10¹² m³) of natural gas (measured at normal atmospheric pressure and 15°C) are available. But none of these quantities tells us what we really want to know ― how much heat energy could be released by burning each of these reserves? Only by converting the mass of coal and the volumes of petroleum and natural gas into their equivalent energies can we make a valid comparison. When this is done, we find that the coal could release 7.2 × 10²¹ J, the petroleum 1.1 × 10²¹ J, and the gas 1.1 × 10²¹ J of heat energy. Thus the reserves of coal are more than three times those of the other two fuels combined. It is for this reason that more attention is being paid to the development of new ways of using coal resources than to oil or gas. Conversion of one kind of quantity into another is usually done with what can be called a conversion factor, but the conversion factor is based on a mathematical function (D = m/V) or mathematical equation that relates parameters.
Since we have not yet discussed energy or the units (joules) in which it is measured, an example involving the more familiar quantities mass and volume will be used to illustrate the way conversion factors are employed. The same principles apply to finding how much energy would be released by burning a fuel, and that problem will be encountered later. For helpful context about the above discussion, check out the following Crash Course Chemistry video: https://youtu.be/hQpQ0hxVNTg Suppose we have a rectangular solid sample of gold which measures 3.04 cm × 8.14 cm × 17.3 cm. We can easily calculate that its volume is 428 cm³, but how much is it worth? The price of gold is about 5 dollars per gram, and so we need to know the mass rather than the volume. It is unlikely that we would have available a scale or balance which could weigh accurately such a large, heavy sample, and so we would have to determine the mass of gold equivalent to a volume of 428 cm³. This can be done by manipulating the equation which defines density, ρ = m/V. If we multiply both sides by V, we obtain \[V \times \rho =\dfrac{m}{V}\times V = m\label{1}\] \[m = V \times \rho \] or \[\text{mass} = \text{volume} \times \text{density} \] Taking the density of gold from a reference table, we can now calculate \[\text{Mass}= m =V \rho =\text{428 cm}^{3}\times \dfrac{\text{19}\text{.32 g}}{\text{1 cm}^{3}}=8.27\times \text{10}^{3}\text{ g}=\text{8}\text{.27 kg}\] This is more than 18 lb of gold. At the price quoted above, it would be worth over 40 000 dollars! The formula which defines density can also be used to convert the mass of a sample to the corresponding volume. If both sides of Eq. \(\ref{1}\) are multiplied by 1/ρ, we have \[\dfrac{\text{1}}{\rho }\times m=V \rho \times \dfrac{\text{1}}{\rho }=V \] \[V=m \times \dfrac{\text{1}}{\rho }\label{2}\] Notice that we used the mathematical function D = m/V to convert from mass to volume or vice versa in these examples.
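The gold calculation above can be reproduced in a few lines (a sketch; 19.32 g/cm³ is the reference density of gold and 5 $/g is the price quoted in the text):

```python
# m = V * rho for the rectangular gold bar discussed above
volume_cm3 = 3.04 * 8.14 * 17.3           # ~428 cm^3
density_gold_g_per_cm3 = 19.32            # reference density of gold
mass_g = volume_cm3 * density_gold_g_per_cm3
value_usd = mass_g * 5                    # at the ~5 $/g price quoted above
print(round(volume_cm3), round(mass_g))   # prints: 428 8271
```

The mass comes out near 8.27 × 10³ g, and the value exceeds 40 000 dollars, matching the worked example.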
How does this differ from the use of unity factors to change units of one parameter? An Important Caveat A mistake sometimes made by beginning students is to confuse density with concentration, which also may have units of g/cm³. By dimensional analysis, this looks perfectly fine. To see the error, we must understand the meaning of the function \[ C = \dfrac{m}{V}\] In this case, \(V\) refers to the volume of a solution, which contains both a solute and solvent. Given that the concentration of gold in an alloy is 10 g of gold per 100 cm³ of alloy, it is incorrect (although dimensionally consistent as far as conversion factors go) to calculate the volume of gold in 20 g of the alloy as follows: \[20 \text{ g} \times \dfrac{\text{100 cm}^{3}}{\text{10 g}} = 200 \text{ cm}^{3} \] It is only possible to calculate the volume of gold if the density of the alloy is known, so that the volume of alloy represented by the 20 g can be calculated. This volume multiplied by the concentration gives the mass of gold, which then can be converted to a volume with the density function. The bottom line is that using a simple unit cancellation method does not always lead to the expected results, unless the mathematical function on which the conversion factor is based is fully understood. Example \(\PageIndex{1}\): Volume of Ethanol A solution of ethanol with a concentration of 0.1754 g/cm³ has a density of 0.96923 g/cm³ and a freezing point of -9 °F [1]. What is the volume of ethanol (D = 0.78522 g/cm³ at 25 °C) in 100 g of the solution?
Solution The volume of 100 g of solution is \[ V = m \div D = 100 \text{ g} \div 0.96923 \text{ g/cm}^{3} = 103.17 \text{ cm}^{3}\] The mass of ethanol in this volume is \[ m = V \times C = 103.17 \text{ cm}^{3} \times 0.1754 \text{ g/cm}^{3} = 18.097 \text{ g}\] \[ \text{The volume of ethanol } = m \div D = 18.097 \text{ g} \div 0.78522 \text{ g/cm}^{3} = 23.05 \text{ cm}^{3}\] Note that we cannot calculate the volume of ethanol by \[\dfrac {0.96923 \text{ g/cm}^{3} \times 100 \text{ cm}^{3}}{0.78522 \text{ g/cm}^{3}} \normalsize = 123.4 \text{ cm}^{3}\] even though this equation is dimensionally correct. Note: This result required knowing when to use the function C = m/V and when to use the function D = m/V as conversion factors. Pure dimensional analysis could not reliably give the answer, since both functions have the same dimensions. Example \(\PageIndex{2}\): Volume of Benzene Find the volume occupied by a 4.73-g sample of benzene. Solution The density of benzene is 0.880 g cm⁻³. Using Eq. \(\ref{2}\), \[\text{Volume = }V\text{ = }m\text{ }\times \text{ }\dfrac{\text{1}}{\rho }\text{ = 4}\text{.73 g }\times \text{ }\dfrac{\text{1 cm}^{\text{3}}}{\text{0}\text{.880 g}}\text{ = 5}\text{.38 cm}^{\text{3}}\] Note: Taking the reciprocal of \(\Large\tfrac{\text{0}\text{.880 g}}{\text{1 cm}^{3}}\) simply inverts the fraction ― 1 cm³ goes on top, and 0.880 g goes on the bottom. The two calculations just done show that density is a conversion factor which changes volume to mass, and the reciprocal of density is a conversion factor changing mass into volume. This can be done because the mathematical formula defining density relates it to mass and volume. Algebraic manipulation of this formula gave us expressions for mass and for volume [Eqs. \(\ref{1}\) and \(\ref{2}\)], and we used them to solve our problems.
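The ethanol example can be written out as a short calculation that makes the role of each function explicit, including the dimensionally consistent but physically wrong shortcut (a sketch using the numbers from the example):

```python
# Chaining the two functions C = m/V (concentration) and D = m/V (density)
density_solution = 0.96923   # g/cm^3, whole solution
conc_ethanol = 0.1754        # g ethanol per cm^3 of solution
density_ethanol = 0.78522    # g/cm^3, pure ethanol

mass_solution = 100.0                             # g
v_solution = mass_solution / density_solution     # cm^3 of solution
m_ethanol = v_solution * conc_ethanol             # g of ethanol
v_ethanol = m_ethanol / density_ethanol           # cm^3 of ethanol

# The dimensionally "correct" but physically wrong shortcut from the text:
wrong = density_solution * 100 / density_ethanol  # not the volume of ethanol
print(round(v_ethanol, 2), round(wrong, 1))       # prints: 23.05 123.4
```

Both results carry units of cm³, which is exactly why unit cancellation alone cannot flag the error; only knowing which m/V each factor represents does.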
If we understand the function D = m/V and heed the caveat above, we can devise appropriate conversion factors by unit cancellation, as the following example shows: Example \(\PageIndex{3}\): Volume of Mercury A student weighs 98.0 g of mercury. If the density of mercury is 13.6 g/cm³, what volume does the sample occupy? Solution We know that volume is related to mass through density. Therefore \[ V = m \times \text{ conversion factor}\] Since the mass is in grams, we need to get rid of these units and replace them with volume units. This can be done if the reciprocal of the density is used as a conversion factor. This puts grams in the denominator so that these units cancel: \[V=m\times \dfrac{\text{1}}{\rho }=\text{98}\text{.0 g}\times \dfrac{\text{1 cm}^{3}}{\text{13}\text{.6 g}}=\text{7}\text{.21 cm}^{3}\] If we had multiplied by the density instead of its reciprocal, the units of the result would immediately show our error: \(V=\text{98}\text{.0 g}\times \dfrac{\text{13}\text{.6 g}}{\text{1 cm}^{3}}=\text{1}\text{.33}\times \text{10}^{3}\text{ g}^{2}/\text{cm}^{3}\;\) (no cancellation!) It is clear that square grams per cubic centimeter are not the units we want. Using a conversion factor is very similar to using a unity factor — we know the conversion factor is correct when units cancel appropriately. A conversion factor is not unity, however. Rather it is a physical quantity (or the reciprocal of a physical quantity) which is related to the two other quantities we are interconverting. The conversion factor works because of the relationship [i.e. the definition of density given by Eqs. \(\ref{1}\) and \(\ref{2}\) includes the relationships between density, mass, and volume], not because it has a value of one. Once we have established that a relationship exists, it is no longer necessary to memorize a mathematical formula. The units tell us whether to use the conversion factor or its reciprocal.
Without such a relationship, however, mere cancellation of units does not guarantee that we are doing the right thing. A simple way to remember relationships among quantities and conversion factors is a "road map" of the type shown below: \[\text{Mass }\overset{density}{\longleftrightarrow}\text{ volume or }m\overset{\rho }{\longleftrightarrow}V\text{ }\] This indicates that the mass of a particular sample of matter is related to its volume (and the volume to its mass) through the conversion factor, density. The double arrow indicates that a conversion may be made in either direction, provided the units of the conversion factor cancel those of the quantity which was known initially. In general the road map can be written \[\text{First quantity }\overset{\text{conversion factor}}{\longleftrightarrow}\text{ second quantity}\] As we come to more complicated problems, where several steps are required to obtain a final result, such road maps will become more useful in charting a path to the solution. Example \(\PageIndex{4}\): Volume to Mass Conversion Black ironwood has a density of 67.24 lb/ft³. If you had a sample whose volume was 47.3 ml, how many grams would it weigh? (1 lb = 454 g; 1 ft = 30.5 cm). Solution The road map \[V\xrightarrow{\rho }m\text{ }\] tells us that the mass of the sample may be obtained from its volume using the conversion factor, density.
Since milliliters and cubic centimeters are the same, we use the SI units for our calculation: \[ \text{Mass} = m = 47.3 \text{ cm}^{3} \times \dfrac{\text{67}\text{.24 lb}}{\text{1 ft}^{3}}\] Since the volume units are different, we need a unity factor to get them to cancel: \[m\text{ = 47}\text{.3 cm}^{\text{3}}\text{ }\times \text{ }\left( \dfrac{\text{1 ft}}{\text{30}\text{.5 cm}} \right)^{\text{3}}\text{ }\times \text{ }\dfrac{\text{67}\text{.24 lb}}{\text{1 ft}^{\text{3}}}\text{ = 47}\text{.3 cm}^{\text{3}}\text{ }\times \text{ }\dfrac{\text{1 ft}^{\text{3}}}{\text{30}\text{.5}^{\text{3}}\text{ cm}^{\text{3}}}\text{ }\times \text{ }\dfrac{\text{67}\text{.24 lb}}{\text{1 ft}^{\text{3}}}\] We now have the mass in pounds, but we want it in grams, so another unity factor is needed: \[m\text{ = 47}\text{.3 cm}^{\text{3}}\text{ }\times \text{ }\dfrac{\text{1 ft}^{\text{3}}}{\text{30}\text{.5}^{\text{3}}\text{ cm}^{\text{3}}}\text{ }\times \text{ }\dfrac{\text{67}\text{.24 lb}}{\text{1 ft}^{\text{3}}}\text{ }\times \text{ }\dfrac{\text{454 g}}{\text{ 1 lb}}\text{ = 50}\text{.9 g}\] In subsequent chapters we will establish a number of relationships among physical quantities. Formulas will be given which define these relationships, but we do not advocate slavish memorization and manipulation of those formulas. Instead we recommend that you remember that a relationship exists, perhaps in terms of a road map, and then adjust the quantities involved so that the units cancel appropriately. Such an approach has the advantage that you can solve a wide variety of problems by using the same technique.
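As a quick numerical check of the road-map approach, the chain of conversion factors in the ironwood example can be spelled out step by step. This is our own Python sketch using the values from the text:

```python
# Road-map calculation for the ironwood example: volume -> mass,
# chaining unity factors so the units cancel at each step.
volume_cm3 = 47.3            # sample volume (1 ml = 1 cm^3)
density_lb_per_ft3 = 67.24   # density of black ironwood
cm_per_ft = 30.5             # unity factor: 1 ft = 30.5 cm
g_per_lb = 454.0             # unity factor: 1 lb = 454 g

volume_ft3 = volume_cm3 / cm_per_ft**3     # cm^3 -> ft^3
mass_lb = volume_ft3 * density_lb_per_ft3  # ft^3 -> lb via density
mass_g = mass_lb * g_per_lb                # lb -> g
print(round(mass_g, 1))  # 50.9
```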
Difference between revisions of "Upper and lower bounds" Revision as of 11:38, 16 February 2009 Upper and lower bounds for [math]c_n[/math] for small values of n. [math]c_n[/math] is the size of the largest subset of [math][3]^n[/math] that does not contain a combinatorial line. A spreadsheet for all the latest bounds on [math]c_n[/math] can be found here.
In this page we record the proofs justifying these bounds.

n                  0   1   2   3    4    5     6     7
[math]c_n[/math]   1   2   6   18   52   150   450   [1302,1350]

Basic constructions

For all [math]n \geq 1[/math], a basic example of a mostly line-free set is [math]D_n := \{ (x_1,\ldots,x_n) \in [3]^n: \sum_{i=1}^n x_i \neq 0 \ \operatorname{mod}\ 3 \}[/math]. (1) This has cardinality [math]|D_n| = 2 \times 3^{n-1}[/math]. The only lines in [math]D_n[/math] are those with

* a number of wildcards equal to a multiple of three;
* a number of 1s not equal to the number of 2s modulo 3.

One way to construct line-free sets is to start with [math]D_n[/math] and remove some additional points. Another useful construction proceeds by using the slices [math]\Gamma_{a,b,c} \subset [3]^n[/math] for [math](a,b,c)[/math] in the triangular grid [math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c = n \}[/math], (2) where [math]\Gamma_{a,b,c}[/math] is defined as the strings in [math][3]^n[/math] with [math]a[/math] 1s, [math]b[/math] 2s, and [math]c[/math] 3s. Note that [math]|\Gamma_{a,b,c}| = \frac{n!}{a! b! c!}.[/math] (3) Given any set [math]B \subset \Delta_n[/math] that avoids equilateral triangles [math] (a+r,b,c), (a,b+r,c), (a,b,c+r)[/math], the set [math]\Gamma_B := \bigcup_{(a,b,c) \in B} \Gamma_{a,b,c}[/math] (4) is line-free and has cardinality [math]|\Gamma_B| = \sum_{(a,b,c) \in B} \frac{n!}{a! b! c!},[/math] (5) and thus provides a lower bound for [math]c_n[/math]: [math]c_n \geq \sum_{(a,b,c) \in B} \frac{n!}{a! b! c!}.[/math] (6) All lower bounds on [math]c_n[/math] have proceeded so far by choosing a good set of B and applying (6). Note that [math]D_n[/math] is the same as [math]\Gamma_{B_n}[/math], where [math]B_n[/math] consists of those triples [math](a,b,c) \in \Delta_n[/math] in which [math]a \neq b\ \operatorname{mod}\ 3[/math]. Note that if one takes a line-free set and permutes the alphabet [math]\{1,2,3\}[/math] in any fashion (e.g.
replacing all 1s by 2s and vice versa), one also gets a line-free set. This potentially gives six examples from any given starting example of a line-free set, though in practice there is enough symmetry that the total number of examples produced this way is less than six. (These six examples also correspond to the six symmetries of the triangular grid [math]\Delta_n[/math] formed by rotation and reflection.) Another symmetry comes from permuting the [math]n[/math] indices in the strings of [math][3]^n[/math] (e.g. replacing every string by its reversal). But the sets [math]\Gamma_B[/math] are automatically invariant under such permutations and thus do not produce new line-free sets via this symmetry.

The basic upper bound

Because [math][3]^{n+1}[/math] can be expressed as the union of three copies of [math][3]^n[/math], we have the basic upper bound [math]c_{n+1} \leq 3 c_n.[/math] (7) Note that equality only occurs if one can find an [math]n+1[/math]-dimensional line-free set such that every n-dimensional slice has the maximum possible cardinality of [math]c_n[/math].

n=0 [math]c_0=1[/math]: This is clear.

n=1 [math]c_1=2[/math]: The three sets [math]D_1 = \{1,2\}[/math], [math]\{2,3\}[/math], and [math]\{1,3\}[/math] are the only two-element sets which are line-free in [math][3]^1[/math], and there are no three-element sets.

n=2 [math]c_2=6[/math]: There are four six-element sets in [math][3]^2[/math] which are line-free, which we denote [math]x[/math], [math]y[/math], [math]z[/math], and [math]w[/math] and are displayed graphically as follows.

    13 .. 33        .. 23 33        13 23 ..        13 23 ..
x = 12 22 ..    y = 12 .. 32    z = .. 22 32    w = 12 .. 32
    .. 21 31        11 21 ..        11 .. 31        .. 21 31

[math]z[/math] is also the same as [math]D_2[/math]. Combining this with the basic upper bound (7) we see that [math]c_2=6[/math].
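For small n these claims are easy to check by brute force. Below is an illustrative Python sketch (our code, not from the wiki) that builds [math]D_n[/math], confirms [math]|D_n| = 2 \times 3^{n-1}[/math], and verifies that every combinatorial line contained in [math]D_4[/math] has a multiple of three wildcards and numbers of 1s and 2s that differ mod 3:

```python
from itertools import product

def D(n):
    """Points of [3]^n whose coordinate sum is nonzero mod 3."""
    return {p for p in product((1, 2, 3), repeat=n) if sum(p) % 3 != 0}

def lines(n):
    """All combinatorial lines in [3]^n: a template over {1,2,3,'*'}
    with at least one '*', instantiated with '*' -> 1, 2, 3."""
    for tmpl in product((1, 2, 3, '*'), repeat=n):
        if '*' in tmpl:
            yield tmpl, [tuple(v if c == '*' else c for c in tmpl)
                         for v in (1, 2, 3)]

n = 4
d = D(n)
assert len(d) == 2 * 3 ** (n - 1)
inside = 0
for tmpl, pts in lines(n):
    if all(p in d for p in pts):
        inside += 1
        # a line survives only with 3, 6, ... wildcards and #1s != #2s mod 3
        assert tmpl.count('*') % 3 == 0
        assert (tmpl.count(1) - tmpl.count(2)) % 3 != 0
print(inside)  # 8 such lines in D_4
```

The same check with n=2 finds no line at all inside [math]D_2[/math], consistent with [math]z = D_2[/math] being line-free.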
n=3 [math]c_3=18[/math]: We describe a subset [math]A[/math] of [math][3]^3[/math] as a string [math]abc[/math], where [math]a, b, c \subset [3]^2[/math] correspond to strings of the form [math]1**[/math], [math]2**[/math], [math]3**[/math] in [math][3]^3[/math] respectively. Thus for instance [math]D_3 = xyz[/math], and so from (7) we have [math]c_3=18[/math]. It turns out that [math]D_3 = xyz[/math] is the only 18-element line-free subset of [math][3]^3[/math]. To create a 17-element set, the only way is to remove a single element from one of xyz, yzx, or zxy.

n=4 [math]c_4=52[/math]: Indeed, divide a line-free set in [math][3]^4[/math] into three blocks of [math][3]^3[/math]. If two of them are of size 18, then they must both be xyz, and the third block can have at most 6 elements, leading to an inferior bound of 42. So the best one can do is [math]18+17+17=52[/math]. In fact, there are exactly three ways to get to 52, namely

xyz yz'x zxy'
y'zx zx'y xyz
z'xy xyz yzx'

where

x' is x with either 2222 or 3333 removed (depending on whether the x' appears in the second block or the third)
y' is y with either 1111 or 3333 removed
z' is z with either 1111 or 2222 removed

The second example here can also be described as [math]D_4[/math] with 1111 and 2222 removed.

n=5 [math]c_5=150[/math]: We have the upper bound [math]c_5 \leq 154[/math] [The following needs to be formatted etc.] Recall that there is just one pattern to fit 18 points in a cube; the three square slices of this pattern (along any axis) are x, y and z. To fit 17 points in a cube, the only way is to remove one point from either xyz, yzx or zxy. This makes the proof in [math][3]^5[/math] (243 points) easier because the slices are formed by removing points from

yzx zxy xyz
zxy xyz yzx
xyz yzx zxy

Now the major diagonal of the cube is yyy, and six points must be removed from that. Four of the off-diagonal cubes must also lose points. That leaves 152 points, which contradicts the 155 points we started with.
We have the lower bound [math]c_5 \geq 150[/math] [The following needs to be formatted also.] One way to get 150 is to start with [math]D_5[/math] and remove the slices [math]\Gamma_{0,4,1}, \Gamma_{0,5,0}, \Gamma_{4,0,1}, \Gamma_{5,0,0}[/math]. Another pattern of 150 points is this: Take the 450 points in [math][3]^6[/math] which are (1,2,3), (0,2,4) and permutations, then select the 150 whose final coordinate is 1. That gives this many points in each cube:

17 18 17
17 17 18
12 17 17

An integer programming method has established the upper bound [math]c_5\leq 150[/math], with 12 extremal solutions. This file contains the extremisers, one point per line, with different extremisers separated by a line with "—". This is the linear program, readable by GNU's glpsol linear programming solver, which also quickly proves that 150 is the optimum. Each variable corresponds to a point in the cube, numbered according to their lexicographic ordering. If a variable is 1 then the point is in the set, if it is 0 then it is not in the set. There is one linear inequality for each combinatorial line, stating that at least one point must be missing from the line.

n=6 [math]c_6=450[/math]: The upper bound follows since [math]c_6 \leq 3 c_5[/math]. The lower bound can be formed by gluing together all the slices [math]\Gamma_{a,b,c}[/math] where (a,b,c) is a permutation of (0,2,4) or (1,2,3).

n=7 [math]1302 \leq c_7 \leq 1350[/math]: The upper bound follows since [math]c_7 \leq 3 c_6[/math]. The lower bound can be formed by removing 016,106,052,502,151,511,160,610 from [math]D_7[/math].

Larger n

The following construction gives lower bounds for the number of triangle-free points. There are on the order of [math]2.7 \sqrt{\log(N)/N}\,3^N[/math] points for large N (N ~ 5000). It applies when N is a multiple of 3. For N=3M-1, restrict the first digit of a 3M sequence to be 1. So this construction has exactly one-third as many points for N=3M-1 as it has for N=3M.
For N=3M-2, restrict the first two digits of a 3M sequence to be 12. This leaves roughly one ninth of the points for N=3M-2 as for N=3M. The current lower bounds for [math]c_{3m}[/math] are built like this, with abc being shorthand for [math]\Gamma_{a,b,c}[/math]:

[math]c_3[/math] from (012) and permutations
[math]c_6[/math] from (123,024) and perms
[math]c_9[/math] from (234,135,045) and perms
[math]c_{12}[/math] from (345,246,156,02A,057) and perms (A=10)
[math]c_{15}[/math] from (456,357,267,13B,168,04B,078) and perms (B=11)

To get the triples in each row, add 1 to the triples in the previous row; then include new triples that have a zero. A general formula for these points is given below. I think that they are triangle-free. (For N<21, ignore any triple with a negative entry.) There are thirteen groups of points in the centre, that are the same for all N=3M:

(M-7, M-3, M+10) and perms
(M-7, M, M+7) and perms
(M-7, M+3, M+4) and perms
(M-6, M-4, M+10) and perms
(M-6, M-1, M+7) and perms
(M-6, M+2, M+4) and perms
(M-5, M-1, M+6) and perms
(M-5, M+2, M+3) and perms
(M-4, M-2, M+6) and perms
(M-4, M+1, M+3) and perms
(M-3, M+1, M+2) and perms
(M-2, M, M+2) and perms
(M-1, M, M+1) and perms

There is also a string of points, that is slightly different for odd and even N:

For N=6K:
(2x, 2x+2, N-4x-2) and permutations (x=0..K-4)
(2x, 2x+5, N-4x-5) and perms (x=0..K-4)
(2x, 3K-x-4, 3K+x+4) and perms (x=0..K-4)
(2x, 3K-x-1, 3K+x+1) and perms (x=0..K-4)
(2x+1, 2x+5, N-4x-6) and perms (x=0..K-5)
(2x+1, 2x+8, N-4x-9) and perms (x=0..K-5)
(2x+1, 3K-x-1, 3K-x) and perms (x=0..K-5)
(2x+1, 3K-x-4, 3K-x+3) and perms (x=0..K-5)

For N=6K+3: the thirteen points mentioned above, and:
(2x, 2x+4, N-4x-4) and perms, x=0..K-4
(2x, 2x+7, N-4x-7) and perms, x=0..K-4
(2x, 3K+1-x, 3K+2-x) and perms, x=0..K-4
(2x, 3K-2-x, 3K+5-x) and perms, x=0..K-4
(2x+1, 2x+3, N-4x-4) and perms, x=0..K-4
(2x+1, 2x+6, N-4x-7) and perms, x=0..K-4
(2x+1, 3K-x, 3K-x+2) and perms, x=0..K-4
(2x+1, 3K-x-3,
3K-x+5) and perms, x=0..K-4

For N=6K: An alternate construction: First define a sequence of all positive numbers which, in base 3, do not contain a 1. Add 1 to all multiples of 3 in this sequence. This sequence does not contain a length-3 arithmetic progression. It starts 1,2,7,8,19,20,25,26,55, … Second, list all the (abc) triples for which the larger two differ by a number from the sequence, excluding the case when the smaller two differ by 1, but then including the case when (a,b,c) is a permutation of N/3+(-1,0,1)

Asymptotics

DHJ(3) is equivalent to the upper bound [math]c_n = o(3^n)[/math]. In the opposite direction, observe that if we take a set [math]S \subset [3n][/math] that contains no 3-term arithmetic progressions, then the set [math]\bigcup_{(a,b,c) \in \Delta_n: a+2b \in S} \Gamma_{a,b,c}[/math] is line-free. From this and the Behrend construction it appears that we have the lower bound [math]c_n \geq 3^{n-O(\sqrt{\log n})}[/math] though this has to be checked. Numerics suggest that the first large n construction given above gives a lower bound of roughly [math]2.7 \sqrt{\log(n)/n} \times 3^n[/math], which would asymptotically be inferior to the Behrend bound. The second large n construction had numerical asymptotics for [math]\log(c_n/3^n)[/math] close to [math]1.2-\sqrt{\log(n)}[/math] between n=1000 and n=10000, consistent with the Behrend bound.

Numerical methods

A greedy algorithm was implemented here. The results were sharp for [math]n \leq 3[/math] but were slightly inferior to the constructions above for larger n.
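The base-3 sequence used in the alternate construction is easy to generate programmatically. The sketch below is our own code (it treats 0 as one of the numbers, so that the list starts with 1 as in the text) and checks the no-progression property on the first terms:

```python
def seq(count):
    """First `count` terms: nonnegative n whose base-3 digits avoid 1,
    with 1 added to every multiple of 3 (0 is included so that the
    list starts with 1, matching the text)."""
    out, n = [], 0
    while len(out) < count:
        m, ok = n, True
        while m:
            if m % 3 == 1:   # a digit 1 in base 3 disqualifies n
                ok = False
                break
            m //= 3
        if ok:
            out.append(n + 1 if n % 3 == 0 else n)
        n += 1
    return out

terms = seq(9)
print(terms)  # [1, 2, 7, 8, 19, 20, 25, 26, 55]
# no three listed terms form an arithmetic progression
s = set(terms)
assert all(2 * b - a not in s for a in terms for b in terms if b > a)
```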
As dkaeae indicated in his comment, a right-moving or staying Turing machine (TM) is essentially a deterministic finite automaton (DFA). Here's a proof. Let $M$ be such a machine, whose transition rules are of the form $\delta(q, \gamma)=(t, \beta, d)$ where $q$ is the current state, $\gamma$ is the contents of the current cell, $t$ is the new state, $\beta$ is the symbol that replaces $\gamma$ in the current cell, and $d$ is the direction to move the head, which is either $R$ or $S$, meaning moving right or staying put respectively. Let us check the behavior of $M$ when it has just applied a transition rule of the form $\delta(q, \gamma)=(t, \beta, S)$. Now $M$ is in state $t$ on top of $\beta$. In the next step, $M$ will read the symbol $\beta$, applying the unique rule on the pair $(t,\beta)$. Suppose that rule is $\delta(t, \beta)=(u, \alpha, d)$. Then $M$ will change state to $u$, rewrite the current cell to $\alpha$ and move in the direction of $d$. We have found that once $M$ is in state $q$ on top of $\gamma$, it will change state to $u$, rewrite the current cell to $\alpha$ and move in the direction of $d$. So we can replace the transition rule $\delta(q, \gamma)=(t, \beta, S)$ by $\delta(q, \gamma)=(u, \alpha, d)$ in the specification of $M$ without changing the language accepted by $M$. Suppose instead that the rule on $(t, \beta)$ is not defined. Then $M$ will halt. We can remove the transition rule $\delta(q, \gamma)=(t, \beta, S)$ from the specification of $M$ without changing the language accepted by $M$. Applying the replacement or removal above repeatedly until there is no rule in $M$ that tells $M$ to stay in the same cell, we find that $L(M)$ is the language accepted by a right-moving only TM. This question and answer tells us a right-moving only TM is essentially a DFA. Basically, a rule in a right-moving only TM, $\delta(q, \gamma)=(t, \beta, R)$, corresponds to $\delta(q, \gamma)=t$, a rule in the corresponding DFA.
Since the language of a DFA is decidable, so is $L(M)$.
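The rule-rewriting argument can be sketched as a small program operating on a transition table. This is an illustrative Python sketch with made-up state and symbol names; it follows each chain of stay-moves to the first right-move, and drops rules whose chain halts with no applicable rule. As an extra case not needed in the proof above, a chain of stay-moves that loops forever never accepts, so those rules are dropped as well.

```python
# Transition table: delta maps (state, symbol) -> (new_state, written, direction),
# where direction is "R" (move right) or "S" (stay put).
def eliminate_stay_moves(delta):
    """Collapse every chain of stay-moves into the first right-move.

    Rules whose stay-chain reaches an undefined rule (the machine halts)
    are removed; a stay-chain that revisits a (state, symbol) pair loops
    forever and never accepts, so those rules are removed too."""
    out = {}
    for key, rule in delta.items():
        seen = set()
        while rule is not None and rule[2] == "S":
            t, b, _ = rule            # after this rule, M is in state t reading b
            if (t, b) in seen:        # infinite stay-loop
                rule = None
                break
            seen.add((t, b))
            rule = delta.get((t, b))  # the unique rule applied next, if any
        if rule is not None:
            out[key] = rule
    return out

delta = {
    ("q0", "a"): ("q1", "b", "S"),    # stay-move: gets collapsed
    ("q1", "b"): ("q2", "b", "R"),
    ("q2", "a"): ("q2", "a", "R"),
}
# ("q0", "a") now jumps straight to ("q2", "b", "R"); only "R" moves remain
print(eliminate_stay_moves(delta))
```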
In mathematics, we often write relations between $a$ and $b$ in the form $aRb$. I mean this both in the sense that we write that string to represent an abstract relation, as well as using that form to write expressions with particular relations. In almost every case, these are read as "$a$ [relation] $b$." For a few examples, we have $a:=b$, "is defined to be" $a\geq b$, "is greater than or equal to" $a\in b$, "in / is an element of" $a\subseteq b$, "is a subset of" $a\to b$, "maps to / is mapped to" $a=O(b)$, "is big-O of" Notably, every relation on this list is antisymmetric, so the ordering of $a$ first and then $b$ is important. This list is extremely incomplete, and there are dozens more. The correct reading of the symbol $|$ is "divides / is a divisor of." When interpreted in this way, $a|b$ aka "$a$ divides $b$" fits this very well established pattern perfectly. Although it might be counter-intuitive to someone who has more experience with arithmetic than mathematics, it's actually a manifestation of a highly standardized pattern.
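As a tiny illustration (our own Python snippet, not part of the original answer), the divisibility reading of $a|b$ behaves exactly like the other order relations on the list: on the positive integers it is antisymmetric.

```python
# "a | b" read as "a divides b": b is an integer multiple of a.
def divides(a, b):
    return b % a == 0

assert divides(7, 42)        # 7 | 42 holds
assert not divides(42, 7)    # the order of a and b matters
# antisymmetry on a small range: a|b and b|a imply a == b
for a in range(1, 50):
    for b in range(1, 50):
        if divides(a, b) and divides(b, a):
            assert a == b
```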
Triple integral examples Example 1 A cube has sides of length 4. Let one corner be at the origin and the adjacent corners be on the positive $x$, $y$, and $z$ axes. If the cube's density is proportional to the distance from the xy-plane, find its mass. Solution: The density of the cube is $f(x,y,z) = kz$ for some constant $k$. If $\dlv$ is the cube, the mass is the triple integral \begin{align*} \iiint_\dlv kz\,dV &= \int_0^4 \int_0^4 \int_0^4 kz\,dx\,dy\,dz\\ &= \int_0^4 \int_0^4 \left(\left.kxz \right|_{x=0}^{x=4}\right) dy\,dz\\ &= \int_0^4 \int_0^4 4 k z \,dy\,dz\\ &= \int_0^4 \left(\left. 4kzy \right|_{y=0}^{y=4}\right) dz\\ &= \int_0^4 16 kz\, dz = 8kz^2\Big|_{z=0}^{z=4} = 128k \end{align*} If distance is in cm and $k=1$ gram per cubic cm per cm, then the mass of the cube is 128 grams. Example 2 Evaluate the integral \begin{align*} \int_0^1 \int_0^x \int_0^{1+x+y} f(x,y,z) dz \, dy\, dx \end{align*} where $f(x,y,z)=1$. Solution: \begin{align*} &\int_0^1 \int_0^x \int_0^{1+x+y} dz \, dy\, dx\\ &\qquad= \int_0^1 \int_0^x \left(z\Big|_{z=0}^{z=1+x+y}\right)dy\, dx\\ &\qquad= \int_0^1 \int_0^x (1+x+y) dy\, dx\\ &\qquad= \int_0^1 \biggl[y + yx + \frac{y^2}{2}\biggr]_{y=0}^{y=x} dx\\ &\qquad= \int_0^1 \biggl(x + x^2 + \frac{x^2}{2}\biggr) dx\\ &\qquad= \int_0^1 \biggl(x + \frac{3x^2}{2} \biggr) dx\\ &\qquad= \biggl[\frac{x^2}{2} + \frac{x^3}{2} \biggr]_0^1 = \frac{1}{2} + \frac{1}{2} = 1\end{align*} Note: when we integrate $f(x,y,z)=1$, the integral $\iiint_\dlv dV$ is the volume of the solid $\dlv$. Example 3a Set up the integral of $f(x,y,z)$ over $\dlv$, the solid “ice cream cone” bounded by the cone $z=\sqrt{x^2+y^2}$ and the half-sphere $z = \sqrt{1-x^2-y^2}$, pictured below. Ice cream cone region. The ice cream cone region is bounded above by the half-sphere $z=\sqrt{1-x^2-y^2}$ and bounded below by the cone $z=\sqrt{x^2+y^2}$. Solution: We'll use the shadow method to set up the bounds on the integral.
This means we'll write the triple integral as a double integral on the outside and a single integral on the inside of the form\begin{gather*} \iint_{\textit{shadow}} \int_{\textit{bottom}}^{\textit{top}} f(x,y,z).\end{gather*}We'll let the $z$-axis be the vertical axis so that the cone $z=\sqrt{x^2+y^2}$ is the bottom and the half-sphere $z = \sqrt{1-x^2-y^2}$ is the top of the ice cream cone $\dlv$. Hence, $\dlv$ is the region between these two surfaces:\begin{align} \sqrt{x^2+y^2} \le z \le \sqrt{1-x^2-y^2}. \label{zinequalities}\end{align}These inequalities give the range of $z$ as a function of $x$ and $y$ and thus form the bounds of the inner integral, which will be an integral with respect to $z$ of the form\begin{gather*} \int_{\textit{bottom}}^{\textit{top}} f(x,y,z)dz= \int_{\sqrt{x^2+y^2}}^{\sqrt{1-x^2-y^2}} f(x,y,z)dz.\end{gather*} The whole region $\dlv$ is the set of points satisfying the inequalities \eqref{zinequalities} while $x$ and $y$ range over the shadow of the ice cream cone that is parallel to the $xy$-plane, as illustrated by the cyan circle below. Ice cream cone region with shadow. The ice cream cone region is bounded above by the half-sphere $z=\sqrt{1-x^2-y^2}$ and bounded below by the cone $z=\sqrt{x^2+y^2}$. The two surfaces intersect along a circle defined by $x^2+y^2=1/2$ and $z=1/\sqrt{2}$, which is the widest part of the ice cream cone. Therefore, the shadow of the ice cream cone region parallel to the $xy$-plane is the disk of radius $1/\sqrt{2}$ described by $x^2+y^2 \le 1/2$. The shadow parallel to the $xy$-plane is the maximal range of $x$ and $y$ over all points inside $\dlv$. Inside the ice cream cone, the maximal range of $x$ and $y$ occurs where the two surfaces meet, i.e., where the “ice cream” (the half-sphere) meets the cone. From the figure, you can see that the surfaces meet in a circle, and the range of $x$ and $y$ is the disk that is the interior of that circle.
The surfaces meet when $ \sqrt{x^2+y^2} = \sqrt{1-x^2-y^2}$, which means $x^2+y^2 = 1-x^2-y^2$ or \begin{align*} x^2+y^2 = \frac{1}{2}. \end{align*} In other words, for any point $(x,y,z)$ in the ice cream cone, the inequality \begin{align} x^2+y^2 \le \frac{1}{2} \label{shadow} \end{align} is satisfied. This inequality describes the shadow of the ice cream cone, which is the set of points $(x,y)$ that lie in a disk of radius $1/\sqrt{2}$, as illustrated below. Now we've reduced the rest of the task of finding bounds for the triple integral to the much simpler task of finding bounds for a double integral over the shadow described by inequality \eqref{shadow}. We'll let $y$ be the inner integral of the double integral, meaning we need to describe the range of $y$ in the shadow as a function of $x$. To do this, we simply rewrite inequality \eqref{shadow} in terms of $y$ as \begin{align*} -\sqrt{1/2 - x^2} \le y \le \sqrt{1/2 - x^2}. \end{align*} This range of $y$ as a function of $x$ gives the bounds on the inner integral of the double integral. Finally, for the bounds on the outer integral, we need the maximal range of $x$ alone. Given that $x^2+y^2 \le 1/2$, the maximal range occurs when $y=0$ so that $x^2 \le 1/2$. We can write the maximal range of $x$ as \begin{align*} -1/\sqrt{2} \le x \le 1/\sqrt{2}. \end{align*} The double integral with respect to $x$ and $y$ becomes \begin{gather*} \iint_{\textit{shadow}} \cdots dy\,dx = \int_{-1/\sqrt{2}}^{1/\sqrt{2}} \int_{-\sqrt{1/2-x^2}}^{\sqrt{1/2-x^2}} \cdots dy\,dx \end{gather*} We have determined all the limits on the iterated integral.
Putting the bottom/top limits together with the shadow limits, the ice cream cone can be described by the inequalities \begin{gather*} -1/\sqrt{2} \le x \le 1/\sqrt{2}\\ -\sqrt{1/2 - x^2} \le y \le \sqrt{1/2 - x^2}\\ \sqrt{x^2+y^2} \le z \le \sqrt{1-x^2-y^2} \end{gather*} and the integral of the function $f(x,y,z)$ over $\dlv$ is \begin{align} \iiint_\dlv f\, dV = \int_{-1/\sqrt{2}}^{1/\sqrt{2}} \int_{-\sqrt{1/2-x^2}}^{\sqrt{1/2-x^2}} \int_{\sqrt{x^2+y^2}}^{\sqrt{1-x^2-y^2}} f(x,y,z) dz\,dy\,dx. \label{icecreamintegral} \end{align} Example 3b Find the volume of the ice cream cone of Example 3a. Solution: Simply set $f(x,y,z)=1$ in equation \eqref{icecreamintegral}. The volume of the ice cream cone $\dlv$ is given by the integral \begin{align*} \iiint_\dlv dV = \int_{-1/\sqrt{2}}^{1/\sqrt{2}} \int_{-\sqrt{1/2-x^2}}^{\sqrt{1/2-x^2}} \int_{\sqrt{x^2+y^2}}^{\sqrt{1-x^2-y^2}} dz\,dy\,dx. \end{align*} We won't attempt to evaluate this integral in rectangular coordinates. Once you've learned how to change variables in triple integrals, you can read how to compute the integral using spherical coordinates. Example 4 Find the volume of the tetrahedron bounded by the coordinate planes and the plane through $(2,0,0)$, $(0,3,0)$, and $(0,0,1)$. A tetrahedron. The tetrahedron is bounded by the coordinate planes ($x=0$, $y=0$, and $z=0$) and the plane through the three points (2,0,0), (0,3,0), and (0,0,1). Solution: We know the equation for three of the surfaces of the tetrahedron, as they are the equations for the coordinate planes: $x=0$, $y=0$, and $z=0$. As an initial step, we can find the equation for the angled plane. You can follow the procedure in the second forming plane example to calculate that the plane is given by the equation\begin{align} 3x + 2y + 6z = 6. \label{plane_equation}\end{align} To find the limits of the tetrahedron, we'll use the shadow method again, but this time, we'll think of the $y$-axis as being the vertical axis.
You can imagine the sun that is casting the shadow as being at some point far on the positive $y$-axis. With this orientation, the shadow of the tetrahedron is the maximal range of $x$ and $z$ over the tetrahedron. Since the tetrahedron gets wider in the $x$ and $z$ directions as $y$ decreases, the shadow of the tetrahedron is exactly the base of the tetrahedron in the $xz$-plane (the plane $y=0$), which is the triangle pictured below. We approach the integral over this shadow as a double integral. In this shadow (and consequently in the tetrahedron itself), the total range of $z$ is \begin{align*} 0 \le z \le 1. \end{align*} To find the range of $x$ for each value of $z$, you can calculate from the figure of the shadow that the upper limit of $x$ is the line $z=1-x/2$ or $x=2(1-z)$. Given that the lower limit on $x$ is zero, the range of $x$ in the shadow for a given $z$ is \begin{align*} 0 \le x \le 2(1 - z). \end{align*} Alternatively, you could see that the upper limit on $x$ corresponds to the plane given by equation \eqref{plane_equation} when $y=0$. Plugging $y=0$ into equation \eqref{plane_equation} yields $3x + 6z=6$ or $x = 2(1-z)$. For each value of $x$ and $z$ in the shadow, we need to integrate $y$ from the bottom to the top (viewing $y$ as the vertical axis). The bottom from this perspective is in the plane $y=0$, and the top is the angled plane of equation \eqref{plane_equation}, which we can solve for $y$ to write as $y=3(1-x/2 -z)$. Hence, for a given $z$ and $x$, the range of $y$ is \begin{align*} 0 \le y \le 3\left(1 - \frac{x}{2} - z\right). \end{align*} To find the volume, we integrate the function 1 over this region: \begin{align*} &\int_0^1 \int_0^{2(1-z)} \int_0^{3(1- x/2 - z)} dy \, dx \, dz\\ &\qquad = \int_0^1 \int_0^{2(1-z)} 3\left(1 - \frac{x}{2} - z \right) dx \, dz\\ &\qquad = \int_0^1 3\left.\left[x - \frac{x^2}{4} -zx\right]_{x=0}^{x=2(1-z)}\right. 
dz\\ &\qquad = \int_0^1 3\left(2(1-z) - (1-z)^2 - 2z(1-z)\right) dz\\ &\qquad = \int_0^1 3(1 - 2z +z^2) dz\\ &\qquad = 3 \left.\left[ z - z^2 + \frac{z^3}{3} \right]_0^1\right.\\ &\qquad = 3\left(1 - 1 +\frac{1}{3}\right) = 3\left(\frac{1}{3}\right) = 1. \end{align*} Example 5 Change the order of $x$ and $y$ in the integral we derived above, \begin{align*} \int_0^1 \int_0^{2(1-z)} \int_0^{3(1- x/2 - z)} dy \, dx \, dz, \end{align*} so that the order will be $dx \, dy \, dz$. Solution: One way to change the order of integration is to build up the graph of the tetrahedron from the limits of the integral, and then repeat the procedure of Example 4 but let the shadow be cast from the positive $x$-axis. Instead, we'll illustrate an alternative procedure of calculating the new limits directly from the inequalities of the old limits. If $y$ is to be the middle integral, we need limits of $y$ in terms of $z$ (independent of $x$). For a given $z$, over what range can $y$ vary? From the above limits, we know \begin{align*} 0 \le y \le 3\left(1 - \frac{x}{2} - z\right). \end{align*} The range is largest when $x=0$, so \begin{align*} 0 \le y \le 3\left(1 - z\right) \end{align*} Then, given $z$ and $y$, we need to know the range of $x$. The following relationship must still be true: \begin{align*} y \le 3\left(1 - \frac{x}{2} - z\right). \end{align*} We can rewrite this relationship in terms of $x$ as \begin{align*} \frac{3x}{2} \le 3 - 3z -y, \end{align*} or \begin{align*} x \le 2\left(1 - z - \frac{y}{3}\right). \end{align*} Since we also know $x \ge 0$, the new limits of integration are \begin{align*} \int_0^1\int_0^{3(1-z)} \int_0^{2(1-z-y/3)} dx\, dy \, dz. \end{align*}
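Both orders of integration for the tetrahedron can be checked numerically. The following is a rough Python sketch of ours (a simple midpoint rule, not part of the original text); each iterated integral should come out to the volume 1 computed above.

```python
# Midpoint-rule check: the two iterated integrals for the tetrahedron
# should both give volume 1.
N = 200
h = 1.0 / N

# Order dy dx dz: z in [0,1], x in [0, 2(1-z)], y in [0, 3(1 - x/2 - z)]
vol = 0.0
for i in range(N):
    z = (i + 0.5) * h
    x_top = 2.0 * (1.0 - z)
    for j in range(N):
        x = (j + 0.5) * x_top / N
        vol += 3.0 * (1.0 - x / 2.0 - z) * (x_top / N) * h

# Order dx dy dz: z in [0,1], y in [0, 3(1-z)], x in [0, 2(1 - z - y/3)]
vol2 = 0.0
for i in range(N):
    z = (i + 0.5) * h
    y_top = 3.0 * (1.0 - z)
    for j in range(N):
        y = (j + 0.5) * y_top / N
        vol2 += 2.0 * (1.0 - z - y / 3.0) * (y_top / N) * h

print(round(vol, 4), round(vol2, 4))  # both approximately 1.0
```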
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content. A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: viewtopic.php?p=44724#p44724 Like this: [/url][/wiki][/url] [/wiki] [/url][/code] Many different combinations work. To reproduce, paste the above into a new post and click "preview". x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X I wonder if this works on other sites? (Remove/Change ) Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Related:[url=http://a.com/] [/url][/wiki] My signature gets quoted. This too. And my avatar gets moved down A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Saka wrote: Related: [ Code: Select all [wiki][url=http://a.com/][quote][wiki][url=http://a.com/]a[/url][/wiki][/quote][/url][/wiki] ] My signature gets quoted. This too. And my avatar gets moved down It appears to be possible to quote the entire page by repeating that several times.
I guess it leaves <div> and <blockquote> elements open and then autofills the closing tags in the wrong places. Here, I'll fix it: [/wiki][url]conwaylife.com[/url] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: It appears I fixed @Saka's open <div>. x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce toroidalet Posts: 1019 Joined: August 7th, 2016, 1:48 pm Location: my computer Contact: A for awesome wrote:It appears I fixed @Saka's open <div>. what fixed it, exactly? "Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life." -Terry Pratchett A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: toroidalet wrote: A for awesome wrote:It appears I fixed @Saka's open <div>. what fixed it, exactly? The post before the one you quoted. The code was: Code: Select all [wiki][viewer]5[/viewer][/wiki][wiki][url]conwaylife.com[/url][/wiki] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Aidan, could you fix your ultra quote? Now you can't even see replies and the post reply button. Also, a few more ones with unique effects popped up.
Apart from Aidan Mode, there is now: -Saka Quote -Daniel Mode -Aidan Superquote We should write descriptions for these: -Aidan Mode: A combination of url, wiki, and code tags that leaves the page shattered in pieces. Future replies are large and centered, making the page look somewhat old-ish. -Saka Quote: A combination of a diluted Aidan Mode and quotes, leaves an open div and blockquote that quotes the entire message and signature. Enough can quote entire pages. -Daniel Mode: A derivative of Aidan Mode that adds code tags and pushes things around rather than scrambling them around. Pushes bottom bar to the side. Signature gets coded. -Aidan Superquote: The most lethal of all. The Aidan Superquote is a broken superquote made of lots of Saka Quotes, not normally allowed on the forums by software. Leaves the rest of the page white and quotes. Replies and post reply button become invisible. I would not like new users playing with this. I'll write articles on my userpage. Last edited by Saka on June 21st, 2017, 10:51 pm, edited 1 time in total. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA I actually laughed at the terminology. "IT'S TIME FOR MY ULTIMATE ATTACK. I, A FOR AWESOME, WILL NOW PRESENT: THE AIDAN SUPERQUOTE" shoots out lasers This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.) Current rule interest: B2ce3-ir4a5y/S2-c3-y fluffykitty Posts: 638 Joined: June 14th, 2014, 5:03 pm There's actually a bug like this on XKCD Forums. Something about custom tags and phpBB. Anyways, [/wiki] I like making rules Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X Here's another one.
It pushes the avatar down all the way to the signature bar. Let's name it... -Fluffykitty Pusher Unless we know your real name that's going to be it lel. It's also interesting that it makes a code tag with purple text. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Probably the simplest ultra-page-breaker: Code: Select all [viewer][wiki][/viewer][viewer][/wiki][/viewer] x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X A for awesome wrote: Probably the simplest ultra-page-breaker: Code: Select all [viewer][wiki][/viewer][viewer][/wiki][/viewer] Screenshot? New one yay. -Aidan Bomb: The smallest ultra-page breaker. Leaks into the bottom and pushes the pages button, post reply, and new replies to the side. Last edited by Saka on June 21st, 2017, 10:20 pm, edited 1 time in total. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA Someone should create a phpBB-based forum so we can experiment without mucking about with the forums. This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y Saka Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X The testing grounds have now become similar to actual military testing grounds. Airy Clave White It Nay Code: Select all x = 17, y = 10, rule = B3/S23 b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo! (Check gen 2) fluffykitty Posts: 638 Joined: June 14th, 2014, 5:03 pm We also have this thread. Also, is now officially the Fluffy Pusher. Also, it does bad things to the thread preview when posting. And now, another pagebreaker for you: Code: Select all [wiki][viewer][/wiki][viewer][/viewer][/viewer] Last edited by fluffykitty on June 22nd, 2017, 11:50 am, edited 1 time in total. I like making rules 83bismuth38 Posts: 453 Joined: March 2nd, 2017, 4:23 pm Location: Still sitting around in Sagittarius A... Contact: oh my, i want to quote somebody and now i have to look in a different scrollbar to type this. interesting thing, though, is that it's never impossible to fully hide the entire page -- it will always be in a nested scrollbar. EDIT: oh also, the thing above is kinda bad. not horrible though -- i'd put it at a 1/13 on the broken scale. Code: Select all x = 8, y = 10, rule = B3/S23 3b2o$3b2o$2b3o$4bobo$2obobobo$3bo2bo$2bobo2bo$2bo4bo$2bo4bo$2bo! No football of any dui mauris said that.
Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet Code: Select all [quote][wiki][viewer][/wiki][/viewer][wiki][/quote][/wiki] This doesn't do good things Edit: Code: Select all [wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url] Neither does this ^ What ever up there likely useless Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet Code: Select all [viewer][wiki][/viewer][wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url][viewer][/wiki][/viewer] I get about five different scroll bars when I preview this Edit: Code: Select all [viewer][wiki][quote][viewer][wiki][/viewer][/wiki][viewer][viewer][wiki][/viewer][/wiki][/quote][viewer][wiki][/viewer][viewer][wiki][/viewer][/wiki][/wiki][/viewer][/quote][/viewer][/wiki] Makes a really long post and makes the rest of the thread large and centred Edit 2: Code: Select all [url][quote][quote][quote][wiki][/quote][viewer][/wiki][/quote][/viewer][/quote][viewer][/url][/viewer] Just don't do this (Sorry I'm having a lot of fun with this) ^ What ever up there likely useless cordership3 Posts: 127 Joined: August 23rd, 2016, 8:53 am Location: haha long boy Here's another small one: Code: Select all [url][wiki][viewer][/wiki][/url][/viewer] fg Moosey Posts: 2486 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board.
Contact: Code: Select all [wiki][color=#4000BF][quote][wiki]I eat food[/quote][/color][/wiki][code][wiki] [/code] Is a pinch broken Doesn't this thread belong in the sandbox? I am a prolific creator of many rather pathetic googological functions My CA rules can be found here Also, the tree game Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?" 77topaz Posts: 1345 Joined: January 12th, 2018, 9:19 pm Well, it started out as a thread to document "Bugs & Errors" in the forum's code... Moosey Posts: 2486 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact: 77topaz wrote:Well, it started out as a thread to document "Bugs & Errors" in the forum's code... Now it's half an aidan mode testing grounds. Also, fluffykitty's messmaker: Code: Select all [viewer][wiki][*][/viewer][/*][/wiki][/quote] I am a prolific creator of many rather pathetic googological functions My CA rules can be found here Also, the tree game Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?" PkmnQ Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode Don't worry about this post, it's just gonna push conversation to the next page so I can test something while actually being able to see it. (The testing grounds in the sandbox crashed golly) Code: Select all x = 12, y = 12, rule = AnimatedPixelArt 4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX 2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW EqWE$2.4vX! i like loaf
Parametrized curve arc length examples Example 1 Write a parameterization for the straight-line path from the point (1,2,3) to the point (3,1,2). Find the arc length. Solution: The vector from (1,2,3) to (3,1,2) is $\vc{d} = (3,1,2)-(1,2,3) = (2,-1,-1)$. We can parametrize the line segment by \begin{align*} \dllp(t) = (1,2,3) + t (2,-1,-1), \qquad 0 \le t \le 1 \end{align*} To find arc length, we calculate \begin{align*} \dllp'(t) &= (2,-1,-1)\\ \| \dllp'(t)\| &= \sqrt{2^2+(-1)^2 + (-1)^2} = \sqrt{6}\\ \end{align*} Therefore, the length of the line segment is \begin{align*} \int_a^b \| \dllp'(t)\| dt = \int_0^1 \sqrt{6} dt = \sqrt{6} \end{align*} Clearly, it was silly to calculate the length this way. We knew the length of the line segment must be $\| \vc{d} \| = \sqrt{6}$. But this simply illustrates the method of calculating arc length of parametrized curves. Example 2 Another parameterization for the line segment of example 1 is \begin{align*} \adllp(t) = (1,2,3) + (e^t-1) (2,-1,-1), \quad 0 \le t \le \log 2. \end{align*} Find the length of the line segment using this parametrization. It might not be obvious that $\adllp$ parametrizes the same line segment as $\dllp$ from example 1. To see this fact, notice that $(e^t-1)$ is zero when $t=0$, and it is 1 when $t=\log 2$. (As mathematicians often do, we are using $\log t$ to represent the natural logarithm, which is often written as $\ln t$. Hence, $e^{\log 2} = 2$, as required for this example.) Indeed, a particle with position $\adllp(t)$ at time $t$ does move along the straight line from (1,2,3) to (3,1,2) as $t$ goes from 0 to $\log 2$. It just doesn't move at a constant speed. You can read about another example where particles move along the same curve but at different speeds. Solution: We simply use the definition of arc length to find the length of the line segment using this parametrization.
We calculate \begin{align*} \adllp'(t) &= e^t (2,-1,-1)\\ \|\adllp'(t)\| &= e^t \|(2,-1,-1)\| = e^t \sqrt{6}, \end{align*} so the arc length is \begin{align*} \int_a^b \| \adllp'(t)\| dt &= \int_0^{\log 2} e^t \sqrt{6} dt\\ &= \sqrt{6} (e^{\log 2}- e^0)\\ &= \sqrt{6} (2-1) = \sqrt{6}, \end{align*} which agrees with example 1. Examples 1 and 2 illustrate an important principle. The length of a curve does not depend on its parametrization. Of course, this makes sense, as the distance a particle travels along a particular route doesn't depend on its speed. Example 3 Find the arc length of the helix parametrized by $\dllp(t) = (\cos t, \sin t, t)$ for $0 \le t \le 6\pi$. (This was the example used in the introduction to arc length.) Solution: We calculate \begin{align*} \dllp'(t) &= (-\sin t, \cos t, 1)\\ \|\dllp'(t)\| &= \sqrt{\sin^2 t + \cos^2 t + 1^2} = \sqrt{2}. \end{align*} The length is \begin{align*} \int_0^{6\pi} \sqrt{2} dt = \left.\left.\sqrt{2} t\right|_0^{6\pi}\right. = 6\sqrt{2} \pi \approx 26.7. \end{align*}
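All three examples can also be checked numerically by approximating each curve with many short chords. This sketch is my own addition rather than part of the text, and the helper function and step counts below are illustrative choices.

```python
import math

def arc_length(p, a, b, n=20000):
    """Approximate the length of t -> p(t), a <= t <= b, by a polygonal path."""
    total, prev = 0.0, p(a)
    for i in range(1, n + 1):
        cur = p(a + (b - a) * i / n)
        total += math.dist(prev, cur)   # length of one small chord
        prev = cur
    return total

# Example 1: constant-speed parametrization of the segment from (1,2,3) to (3,1,2).
seg1 = arc_length(lambda t: (1 + 2*t, 2 - t, 3 - t), 0, 1)

# Example 2: the same segment traversed at non-constant speed via e^t - 1.
seg2 = arc_length(lambda t: (1 + 2*(math.exp(t) - 1),
                             2 - (math.exp(t) - 1),
                             3 - (math.exp(t) - 1)), 0, math.log(2))

# Example 3: the helix (cos t, sin t, t) for 0 <= t <= 6*pi.
helix = arc_length(lambda t: (math.cos(t), math.sin(t), t), 0, 6 * math.pi)
```

Both segment computations approach the same value, sqrt(6), illustrating that the length does not depend on the parametrization, and the helix length approaches 6*sqrt(2)*pi.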
Line integrals as circulation The vector line integral introduction explains how the line integral $\dlint$ of a vector field $\dlvf$ over an oriented curve $\dlc$ “adds up” the component of the vector field that is tangent to the curve. In this sense, the line integral measures how much the vector field is aligned with the curve. If the curve $\dlc$ is a closed curve, then the line integral indicates how much the vector field tends to circulate around the curve $\dlc$. In fact, for an oriented closed curve $\dlc$, we call the line integral the “circulation” of $\dlvf$ around $\dlc$: \begin{align*} \dlint = \text{circulation of $\dlvf$ around $\dlc$}. \end{align*} Sometimes one might write the integral as \begin{align*} \oint_{\dlc} \dlvf \cdot d\lis \end{align*} to emphasize that the integral is around a closed curve, but we tend to omit the circle decoration on the integral sign since it is redundant. The circulation can be positive or negative, depending on the orientation of $\dlc$ compared to the flow of $\dlvf$. For example, let $\dlvf(x,y) = (y,-x)$ and $\dlc$ be the ellipse \begin{align*} \frac{x^2}{4}+\frac{y^2}{9} =1 \end{align*} oriented counterclockwise. As shown in the graph, the vector field appears to circulate in the clockwise direction, tending to point in the opposite direction of the orientation of the curve. We expect the circulation $\dlint$ to be negative. We can compute the circulation by parametrizing $\dlc$ by \begin{align*} \dllp(t) = (2\cos t, 3 \sin t) \end{align*} for $0 \le t \le 2\pi$. Since $\dllp'(t) = (-2\sin t, 3 \cos t)$, the line integral is \begin{align*} \dlint &= \plint{0}{2\pi}{\dlvf}{\dllp}\\ &= \int_0^{2\pi} \dlvf(2\cos t, 3 \sin t) \cdot (-2\sin t, 3 \cos t) dt\\ &= \int_0^{2\pi} (3 \sin t, -2\cos t) \cdot (-2\sin t, 3 \cos t) dt\\ &=\int_0^{2\pi} (-6 \sin^2t -6 \cos^2t) dt = \int_0^{2\pi} -6 dt = -12\pi. \end{align*} In other cases, the circulation may not be so obvious in a picture. 
For example, let $\dlvf(x,y) = (-y,0)$ and let $\dlc$ be the counterclockwise oriented unit circle, as pictured below. In this case, there is no obvious circulation of $\dlvf$. However, if you look closely at the alignment of the vector field, you will see that it tends to align with the orientation of the curve. The circulation of $\dlvf$ around $\dlc$ is positive. We verify this by calculating the circulation directly. Parametrizing the unit circle by $\dllp(t)=(\cos t, \sin t)$ for $0 \le t \le 2\pi$, the circulation is \begin{align*} \dlint &= \plint{0}{2\pi}{\dlvf}{\dllp}\\ &= \int_0^{2\pi} \dlvf(\cos t, \sin t) \cdot (-\sin t, \cos t) dt\\ &= \int_0^{2\pi} (- \sin t, 0) \cdot (-\sin t, \cos t) dt\\ &=\int_0^{2\pi} \sin^2t dt = \int_0^{2\pi} \frac{1-\cos 2t}{2} dt = \pi. \end{align*} Circulation in vector calculus Circulation plays an important role in vector calculus. Circulation defined by line integrals forms the basis for the “microscopic circulation” of the curl of a vector field. Three of the four fundamental theorems of vector calculus involve circulation. The link between the “microscopic circulation” of the curl and the circulation defined by line integrals forms the basis of Green's theorem and Stokes' theorem. Lack of circulation can be thought of as the defining property of conservative vector fields.
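Both circulations above are easy to verify numerically. The sketch below is my own illustration (the function names are assumptions, not standard notation): it evaluates the parametrized line integral with a midpoint Riemann sum and reproduces the values -12*pi and pi computed above.

```python
import math

def circulation(F, c, dc, n=10000):
    """Midpoint Riemann sum for the line integral of F around c(t), 0 <= t <= 2*pi."""
    dt = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        Fx, Fy = F(*c(t))            # vector field at the point c(t)
        vx, vy = dc(t)               # velocity (tangent) vector c'(t)
        total += (Fx * vx + Fy * vy) * dt
    return total

# F(x,y) = (y, -x) around the counterclockwise ellipse x^2/4 + y^2/9 = 1.
ellipse = circulation(lambda x, y: (y, -x),
                      lambda t: (2 * math.cos(t), 3 * math.sin(t)),
                      lambda t: (-2 * math.sin(t), 3 * math.cos(t)))

# F(x,y) = (-y, 0) around the counterclockwise unit circle.
circle = circulation(lambda x, y: (-y, 0.0),
                     lambda t: (math.cos(t), math.sin(t)),
                     lambda t: (-math.sin(t), math.cos(t)))
```

The first value is negative (the field opposes the orientation) and the second is positive, matching the qualitative reading of the two pictures.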
Search Now showing items 1-10 of 24 Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Let $p>1.$ Define the set $$C=\left\{\mu=\{\mu_n\}_{n=0}^\infty: \{\mu_n\}\subseteq (0,1), \;\{\mu_n\} \textrm{ decreasing, } \sum_{n=0}^\infty\mu_n(n+1)^p =1\right\}.$$ Determine $$\inf_{\mu\in C}\; \sum_{n=0}^\infty\mu_n(n+1).$$ This problem arises when we try to find a sharp upper bound on the dual norm of the derivative of the penalty function in the Borwein-Preiss Variational Principle. I tried to use the Chebyshev inequality (after a transformation), but only obtained the trivial bound $\inf\geq 0$. If you can find a better lower bound, I will be happy too.
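Not an answer, but a numerical experiment one might run (the flat-plus-geometric-tail trial family and all names below are my own choices, not from the problem): rescale a nearly flat decreasing sequence so the constraint holds, and watch the objective as the flat part lengthens.

```python
def objective(p, N, ratio=0.5):
    """Objective sum mu_n (n+1) for a trial mu that is flat up to index N with a
    geometric tail (keeping mu decreasing and positive), rescaled so that
    sum mu_n (n+1)^p = 1. The series is truncated at N + 500 terms, which is a
    numerical convenience, not part of the problem."""
    mu = [1.0 if n <= N else ratio ** (n - N) for n in range(N + 500)]
    constraint = sum(m * (n + 1) ** p for n, m in enumerate(mu))
    scale = 1.0 / constraint  # scale < 1, so every rescaled mu_n lies in (0, 1)
    return sum(scale * m * (n + 1) for n, m in enumerate(mu))

p = 2.0
values = [objective(p, N) for N in (10, 100, 1000)]
# The objective keeps shrinking as N grows, roughly like N**(1 - p).
```

For p = 2 the values decrease steadily toward 0, which at least suggests that no obvious candidate attains the infimum; this is evidence, not a proof.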
This question is very similar to this one, but the difference is that I'm asking for a strong deformation retraction. Notation/Definitions: (all maps are by definition continuous) A homotopy between maps $f,g\!:X\rightarrow Y$ is a map $H(x,t)\equiv H_t(x):X\!\times\!I\rightarrow Y$, $H_0\!=\!f$, $H_1\!=\!g$; denoted $H\!:\!f\!\simeq\!g$. For $A\!\subseteq\!X$, $f|_A\!=\!g|_A$, a homotopy relative to $A$ is a homotopy $H\!:\!f\!\simeq\!g$, $\forall t\!:H_t|_A\!=\!f|_A\!=\!g|_A$; denoted $H\!:\!f\!\simeq\!g \,(\mathrm{rel}\,A)$. Spaces $X$ and $Y$ are homotopy equivalent, denoted $X\!\simeq\!Y$, if there is an $f\!:X\rightarrow Y$ and $g\!:Y\rightarrow X$, such that $f\circ g\simeq id_Y$ and $g\circ f\simeq id_X$; then $f$ is an homotopy equivalence and $g$ is its homotopy inverse. A deformation retraction of $X$ onto $A\!\subseteq\!X$, denoted $H\!:X\searrow A$, is $H\!:id_X\!\simeq\!r$, where $r\!:X\rightarrow X$ is a retraction, i.e. $r(X)\!=\!A$, $r|_A\!=\!id_A$. A strong deformation retraction of $X$ onto $A\!\subseteq\!X$, denoted $H\!:X\searrow\searrow A$, is $H\!:id_X\!\simeq\!r\,(\mathrm{rel}\,A)$, where $r$ is a retraction. Furthermore, $H^-$ denotes the inverse homotopy, i.e. $H(x,1-t)$ and $H\ast H'$ denotes the product homotopy, i.e. $H(x,2t)$ for $0\leq t\leq1/2$ and $H'(x,2t-1)$ for $1/2\leq t\leq1$ (under the condition that $H_1=H'_0$, i.e. "the endfunctions match"), read from left to right (contrary to composition of maps).Lastly, $\big(\! \begin{smallmatrix} \scriptstyle x&\!\scriptstyle \mapsto \! &\scriptstyle f(x)\\ \scriptstyle y&\!\scriptstyle \mapsto \! &\scriptstyle g(y)\end{smallmatrix}\!\big)$simply denotes a mapping, defined on two spaces. 
Proposition: $X\!\simeq\!Y$ $\;\;\;\Leftrightarrow\;\;\;$ $\exists Z\supseteq X,Y:$ $\;X\swarrow Z\searrow Y$ Proof: $(\Leftarrow):$ If $r\!:Z\rightarrow X$ is a retraction, $i\!:X\hookrightarrow Z$ the inclusion, and $r\!\simeq\!id_Z$, then $r\circ i\!=\!id_X$ and $i\circ r\!\simeq\!id_Z$, so $Z\!\simeq\!X$. Similarly $Z\!\simeq\!Y$, and by transitivity, $X\!\simeq Y$. $(\Rightarrow):$ Let $f\!:X\!\rightarrow\!Y$ be the homotopy equivalence with homotopy inverse $g$ and define $Z\!:=\!Z_f\!=\!(X\!\times\!I)\coprod Y/_{(x,0)\sim f(x)}$, the mapping cylinder of $f$. Clearly $$\left(\! \begin{matrix} (x,s,t)&\!\!\!\! \mapsto \!\!\!\! & (x,s(1-t))\\ (y,t) &\!\!\!\! \mapsto \!\!\!\! & y\end{matrix}\!\right):Z_f\searrow\searrow Y.$$ Let $\widetilde{H}\!\!:f\!\circ\!g\!\simeq\!id_Y$ and $\widehat{H}\!\!:g\!\circ\!f\!\simeq\!id_X$. Then define $r\!:Z_f\!\rightarrow\!Z_f$, $r\!:=\!\big(\! \begin{smallmatrix} \scriptstyle(x,s)&\!\scriptstyle \mapsto \! &\scriptstyle(\widehat{H}(x,s),1)\\ \scriptstyle y &\!\scriptstyle \mapsto \! &\scriptstyle (g(y),1)\end{smallmatrix}\!\big)$. We see that $r$ is well defined (respects $(x,0)=f(x)$), $r(Z_f)\!=\!X\!\times\!\{1\}$, $r|_{X\!\times\!\{1\}}\!=\!id_{X\!\times\!\{1\}}$, which makes $r$ a retraction of $Z_f$ onto $X\!\times\!\{1\}$. Finally, we construct a homotopy $$H\!:=\!\big(\! \begin{smallmatrix} (x,s,t)&\! \mapsto \! &(x,s(1-t))\\ (y,t) &\! \mapsto \! & y\end{smallmatrix}\!\big)\ast\big(\! \begin{smallmatrix} (x,s,t)&\! \mapsto \! & \widetilde{H}^-(f(x),t)\\ (y,t) &\! \mapsto \! & \widetilde{H}^-(y,t)\end{smallmatrix}\!\big)\ast\big(\! \begin{smallmatrix} (x,s,t)&\! \mapsto \! &(\widehat{H}(x,st),t)\\ (y,t) &\!\mapsto \! &(g(y),t)\end{smallmatrix}\!\big):id_{Z_f}\!\simeq\!r.$$ Notice that each of the three homotopies is well defined on $Z_f$ (respects $(x,0)=f(x)$) and "their endfunctions match", so $H$ is well defined. Therefore $H\!\!:Z_f\,\searrow\,X\!\times\!\{1\}\!\approx\!X$.
$\blacksquare$ Question: How can I prove: $X\!\simeq\!Y$ $\;\;\;\Rightarrow\;\;\;$ $\exists Z\supseteq X,Y:$ $\;X\swarrow\swarrow Z\searrow\searrow Y$? Comment: I was really hoping that it is possible to change this proof in order to get a homotopy $H$ that doesn't move the points of $X\times\{1\}$, i.e. get a strong deformation retraction. Request: Please, no advanced answers, since I'm just starting to learn Algebraic Topology. I don't know any obstruction theory/(co)fibrations/...
Search Now showing items 1-7 of 7 The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... π0 and η meson production in proton-proton collisions at √s=8 TeV (Springer, 2018-03-26) An invariant differential cross section measurement of inclusive π^0 and η meson production at mid-rapidity in pp collisions at √s=8 TeV was carried out by the ALICE experiment at the LHC. The spectra of π^0 and η mesons ... 
Longitudinal asymmetry and its effect on pseudorapidity distributions in Pb–Pb collisions at √sNN = 2.76 TeV (Elsevier, 2018-03-22) First results on the longitudinal asymmetry and its effect on the pseudorapidity distributions in Pb–Pb collisions at √sNN = 2.76 TeV at the Large Hadron Collider are obtained with the ALICE detector. The longitudinal ... Production of deuterons, tritons, 3He nuclei, and their antinuclei in pp collisions at √s=0.9, 2.76, and 7 TeV (American Physical Society, 2018-02) Invariant differential yields of deuterons and antideuterons in pp collisions at √s = 0.9, 2.76 and 7 TeV and the yields of tritons, 3He nuclei, and their antinuclei at √s = 7 TeV have been measured with the ALICE ...
Learning Objectives To convert a value reported in one unit to a corresponding value in a different unit using conversion factors. During your studies of chemistry (and physics also), you will note that mathematical equations are used in many different applications. Many of these equations have a number of different variables with which you will need to work. You should also note that these equations will often require you to use measurements with their units. Algebra skills become very important here! Converting Between Units with Conversion Factors A conversion factor is a factor used to convert one unit of measurement into another. A simple conversion factor can be used to convert meters into centimeters, or a more complex one can be used to convert miles per hour into meters per second. Since most calculations require measurements to be in certain units, you will find many uses for conversion factors. What always must be remembered is that a conversion factor has to represent a fact; this fact can either be simple or much more complex. For instance, you already know that 12 eggs equal 1 dozen. A more complex fact is that the speed of light is \(1.86 \times 10^5\) miles/\(\text{sec}\). Either one of these can be used as a conversion factor depending on what type of calculation you might be working with (Table \(\PageIndex{1}\)).

Table \(\PageIndex{1}\): English unit = metric unit (quantity)
1 ounce (oz) = 28.35 grams (g) (mass*)
1 fluid ounce (oz) = 2.96 mL (volume)
2.205 pounds (lb) = 1 kilogram (kg) (mass*)
1 inch (in) = 2.54 centimeters (cm) (length)
0.6214 miles (mi) = 1 kilometer (km) (length)
1 quart (qt) = 0.95 liters (L) (volume)
*Pounds and ounces are technically units of force, not mass, but this fact is often ignored by the non-scientific community.

Of course, there are other ratios which are not listed in Table \(\PageIndex{1}\). They may include: Ratios embedded in the text of the problem (using words such as per or in each, or using symbols such as / or %).
Conversions in the metric system, as covered earlier in this chapter. Common knowledge ratios (such as 60 seconds \(=\) 1 minute). If you learned the SI units and prefixes described, then you know that 1 cm is 1/100th of a meter. \[ 1\; \rm{cm} = \dfrac{1}{100} \; \rm{m} = 10^{-2}\rm{m}\] or \[100\; \rm{cm} = 1\; \rm{m}\] Suppose we divide both sides of the equation by \(1 \text{m}\) (both the number and the unit): \[\mathrm{\dfrac{100\:cm}{1\:m}=\dfrac{1\:m}{1\:m}}\] As long as we perform the same operation on both sides of the equals sign, the expression remains an equality. Look at the right side of the equation; it now has the same quantity in the numerator (the top) as it has in the denominator (the bottom). Any fraction that has the same quantity in the numerator and the denominator has a value of 1: \[ \dfrac{ \text{100 cm}}{\text{1 m}} = \dfrac{ \text{1000 mm}}{\text{1 m}}= \dfrac{ 1\times 10^6 \mu \text{m}}{\text{1 m}}= 1\] We know that 100 cm is 1 m, so we have the same quantity on the top and the bottom of our fraction, although it is expressed in different units. Performing Dimensional Analysis Dimensional analysis is amongst the most valuable tools physical scientists use. Simply put, it is the conversion between an amount in one unit to the corresponding amount in a desired unit using various conversion factors. This is valuable because certain measurements are more accurate or easier to find than others. The use of units in a calculation to ensure that we obtain the final proper units is called dimensional analysis. Here is a simple example. How many centimeters are there in 3.55 m? Perhaps you can determine the answer in your head. If there are 100 cm in every meter, then 3.55 m equals 355 cm. To solve the problem more formally with a conversion factor, we first write the quantity we are given, 3.55 m. Then we multiply this quantity by a conversion factor, which is the same as multiplying it by 1. 
We can write 1 as \(\mathrm{\frac{100\:cm}{1\:m}}\) and multiply: \[ 3.55 \; \rm{m} \times \dfrac{100 \; \rm{cm}}{1\; \rm{m}}\] The 3.55 m can be thought of as a fraction with a 1 in the denominator. Because m, the abbreviation for meters, occurs in both the numerator and the denominator of our expression, they cancel out: \[\dfrac{3.55 \; \cancel{\rm{m}}}{ 1} \times \dfrac{100 \; \rm{cm}}{1 \; \cancel{\rm{m}}}\] The final step is to perform the calculation that remains once the units have been canceled: \[ \dfrac{3.55}{1} \times \dfrac{100 \; \rm{cm}}{1} = 355 \; \rm{cm}\] In the final answer, we omit the 1 in the denominator. Thus, by a more formal procedure, we find that 3.55 m equals 355 cm. A generalized description of this process is as follows: quantity (in old units) × conversion factor = quantity (in new units) You may be wondering why we use a seemingly complicated procedure for a straightforward conversion. In later studies, the conversion problems you will encounter will not always be so simple. If you can master the technique of applying conversion factors, you will be able to solve a large variety of problems. In the previous example, we used the fraction \(\frac{100 \; \rm{cm}}{1 \; \rm{m}}\) as a conversion factor. Does the conversion factor \(\frac{1 \; \rm m}{100 \; \rm{cm}}\) also equal 1? Yes, it does; it has the same quantity in the numerator as in the denominator (except that they are expressed in different units). Why did we not use that conversion factor? If we had used the second conversion factor, the original unit would not have canceled, and the result would have been meaningless. Here is what we would have gotten: \[ 3.55 \; \rm{m} \times \dfrac{1\; \rm{m}}{100 \; \rm{cm}} = 0.0355 \dfrac{\rm{m}^2}{\rm{cm}}\] For the answer to be meaningful, we have to construct the conversion factor in a form that causes the original unit to cancel out. Figure \(\PageIndex{1}\) shows a concept map for constructing a proper conversion. 
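The cancellation rule can be mimicked in code. This is only an illustration of the bookkeeping (the function and unit labels are mine, not a standard library): a conversion factor is accepted only when its denominator unit matches the unit being converted, just as the units must cancel on paper.

```python
def convert(value, unit, factor, new_unit, old_unit):
    """Multiply a (value, unit) quantity by the conversion factor
    factor new_unit / 1 old_unit, insisting that the old unit cancels."""
    if unit != old_unit:
        raise ValueError(f"units do not cancel: have {unit}, factor is per {old_unit}")
    return value * factor, new_unit

# 3.55 m x (100 cm / 1 m) = 355 cm
length, unit = convert(3.55, "m", 100, "cm", "m")

# Using the upside-down factor (1 m / 100 cm) fails, mirroring the
# meaningless m^2/cm result discussed above.
try:
    convert(3.55, "m", 1 / 100, "m", "cm")
    cancelled = True
except ValueError:
    cancelled = False
```

The check for matching units plays the same role as visually confirming that the original unit cancels before doing any arithmetic.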
The general steps in performing dimensional analysis:
1. Identify the "given" information in the problem. Look for a number with units to start this problem with.
2. What is the problem asking you to "find"? In other words, what unit will your answer have?
3. Use ratios and conversion factors to cancel out the units that aren't part of your answer, and leave you with units that are part of your answer.
4. When your units cancel out correctly, you are ready to do the math. You are multiplying fractions, so you multiply the top numbers and divide by the bottom numbers in the fractions.
Significant Figures in Conversions How do conversion factors affect the determination of significant figures? Numbers in conversion factors based on prefix changes, such as kilograms to grams, are not considered in the determination of significant figures in a calculation because the numbers in such conversion factors are exact. Exact numbers are defined or counted numbers, not measured numbers, and can be considered as having an infinite number of significant figures. (In other words, 1 kg is exactly 1,000 g, by the definition of kilo-.) Counted numbers are also exact. If there are 16 students in a classroom, the number 16 is exact. In contrast, conversion factors that come from measurements (such as density, as we will see shortly) or that are approximations have a limited number of significant figures and should be considered in determining the significant figures of the final answer. Example \(\PageIndex{1}\) Example \(\PageIndex{2}\) Steps for Problem Solving The average volume of blood in an adult male is 4.7 L. What is this volume in milliliters? A hummingbird can flap its wings once in 18 ms. How many seconds are in 18 ms? Identify the "given" information and what the problem is asking you to "find." Given: 4.7 L Find: mL Given: 18 ms Find: s List other known quantities \(1\, mL = 10^{-3} L \) \(1 \,ms = 10^{-3} s \) Prepare a concept map and use the proper conversion factor.
Cancel units and calculate. \( 4.7 \cancel{\rm{L}} \times \dfrac{1 \; \rm{mL}}{10^{-3}\; \cancel{\rm{L}}} = 4,700\; \rm{mL}\) or \( 4.7 \cancel{\rm{L}} \times \dfrac{1,000 \; \rm{mL}}{1\; \cancel{\rm{L}}} = 4,700\; \rm{mL}\) \( 18 \; \cancel{\rm{ms}} \times \dfrac{10^{-3}\; \rm{s}}{1 \; \cancel{\rm{ms}}} = 0.018\; \rm{s}\) or \( 18 \; \cancel{\rm{ms}} \times \dfrac{1\; \rm{s}}{1,000 \; \cancel{\rm{ms}}} = 0.018\; \rm{s}\) Think about your result. The amount in mL should be 1000 times larger than the given amount in L. The amount in s should be 1/1000 the given amount in ms. Exercise \(\PageIndex{1}\) Perform each conversion.
101,000. ns to seconds
32.08 kg to grams
1.53 grams to cg
Answer a: \(1.01000 \times 10^{-4}\; \rm{s} \)
Answer b: \(3.208 \times 10^{4}\; \rm{g} \)
Answer c: \(1.53 \times 10^{2}\; \rm{cg} \)
Summary Conversion factors are used to convert one unit of measurement into another. Dimensional analysis (unit conversions) involves the use of conversion factors that will cancel units you don't want and produce units you do want. Further Reading/Supplemental Links Tutorial: Vision Learning: Unit Conversions & Dimensional Analysis http://visionlearning.com/library/mo...?mid=144&1=&c3=
I used Hückel's method along with a Linear Combination of Atomic Orbitals (LCAO) to calculate an estimate for the orbital energies of cyclobutadiene ($\ce{C4H4}$) and butadiene ($\ce{C4H6}$). For butadiene, I ended up with the result $E = \alpha \pm 1.62 \beta$ and $E = \alpha \pm 0.62 \beta$, where $\alpha$ and $\beta$ are both negative (and are the Coulomb integral and the exchange integral, respectively). I'm fairly confident with the method and interpretation of these integrals so far, but I don't know how to use these results to "guess and draw" the shapes and phases of the molecular orbitals, and I'm also not sure how to tie in what $\alpha$ and $\beta$ represent. As I understand things, in these molecules, hybridisation of the orbitals occurs and $\mathrm{sp^2}$ orbitals form, taking up three electrons from each carbon atom and leaving one electron left over in the third p orbital. This p orbital is the one I assume is portrayed in the below images. Could someone please give me a hint towards the following:
- relating the molecular orbital shapes/phases with the Coulomb integral (the overlap of one carbon's electron wavefunction with a neighbouring carbon's potential) and the exchange integral (the overlap of two electrons' wavefunctions in the field of one of the carbons)
- how to get any sort of information about the phases of the different orbitals. I understand that $E = \alpha + 2\beta$ will have the lowest energy, as both $\alpha$ and $\beta$ are negative, but how does that correspond to the phase configurations shown below?
- Why does the size of the circles in the first image vary?
Note: I've been sent here from the Physics Stack Exchange, please be nice. :) Thanks for any help! This is my first post so please also let me know if the posting etiquette passes muster.
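Tying the quoted numbers to a calculation: for a linear chain of $n$ carbons, the Hückel secular determinant has the standard closed-form roots $E_k = \alpha + 2\beta\cos\frac{k\pi}{n+1}$, $k = 1,\dots,n$ (for a ring such as cyclobutadiene the analogous formula is $E_k = \alpha + 2\beta\cos\frac{2\pi k}{n}$). A minimal sketch, with a function name of my own choosing:

```python
import math

# Hückel orbital energies for a linear chain of n p-orbitals:
# E_k = alpha + 2*beta*cos(k*pi/(n+1)), k = 1..n.  We return just the
# coefficient x of beta, i.e. E = alpha + x*beta.
def huckel_chain(n):
    return [2 * math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]

print(huckel_chain(4))
# ≈ [1.618, 0.618, -0.618, -1.618]: the E = alpha ± 1.62 beta and
# E = alpha ± 0.62 beta levels quoted for butadiene.
```

Since $\alpha$ and $\beta$ are negative, the $\alpha + 1.62\beta$ root is the most strongly bound orbital, and the ordering of these coefficients matches the ordering of the orbital energies.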
How much weight can you put on a bike tire? What does it depend on?
closed as unclear what you're asking by Kyle Kanos, ACuriousMind♦, JamalS, John Rennie, Qmechanic♦ Apr 12 '15 at 14:50
The answer depends on the size of the tire, the pressure to which you inflate it, and how much deformation you are willing to tolerate. Just to illustrate, I will take a road ("racing") bike as an example - using rough numbers so you see how the math is done. The tire might be just 20 mm wide, and inflated to 8 bar. Adding weight to the tire will barely increase the pressure inside, but it will increase the contact area - mostly the length of the contact patch. The tire will look a little bit "flat": We can compute the area of the patch, and the degree of flattening, as a function of the contact angle $\theta$ in the above diagram: $$\sin\theta = \frac{s}{2r}$$ The degree of flattening is given by $$f = r - h = r(1-\cos\theta)$$ For example, a 100 kg person might load each wheel with 50 kg (500 N). At a pressure of 8 bar (8 kgf/cm²), you need a contact area of about 50/8 ≈ 6 cm² to support that much weight; this means the patch would be 3 cm long (it is 2 cm wide). For a wheel that is 70 cm in diameter, that subtends an angle of about 2.5 degrees, and gives a "flattening" of less than a mm. This is why a road bike rolls so smoothly over a flat surface - the less distortion, the less friction.
Using the same equations to determine a "maximum" load, let's assume we can tolerate a distortion of 5 mm; we can now work backwards: $$f = 5\ \mathrm{mm}\\ \cos\theta = 1 - \frac{5}{350}\\ \theta \approx 10°\\ s = 2r\sin\theta \approx 120\ \mathrm{mm}$$ With the contact patch 12 cm long, it has an area of 2 × 12 = 24 cm² and can support a force of 24 × 8 ≈ 200 kg. But it would get pretty hard to pedal. Note that when you go over a sudden bump, you can quite easily hit these kinds of distortions - and if you land a jump, you will also significantly increase the (transient) force supported by the tire. Note also (per my linked answer (2) below) that the pressure in the tire will hardly change because of these distortions - the fractional change in volume of the tire is very small. Overloading the tire will lead to:
- lots of friction (hard to pedal)
- excessive wear (the distorted patch "rubs" on the ground)
- possible puncture ("snakebite" when your tire gets pinched between the ground and the rims)
- broken spokes (especially if the wheel is warped and spoke tension is uneven)
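The two formulas above combine into a few lines of code. This is a sketch with the same numbers (70 cm wheel, 20 mm wide tire, 8 bar, 5 mm flattening); the function name is mine:

```python
import math

# Contact-patch geometry from the answer: cos(theta) = 1 - f/r gives the
# contact angle for a flattening f, then s = 2*r*sin(theta) is the patch
# length, and (patch area) * (pressure) is the supportable load.
def patch(r_mm, width_mm, pressure_kgf_cm2, flatten_mm):
    theta = math.acos(1 - flatten_mm / r_mm)
    s_mm = 2 * r_mm * math.sin(theta)
    area_cm2 = (s_mm / 10) * (width_mm / 10)
    return s_mm, area_cm2 * pressure_kgf_cm2

s_mm, load_kgf = patch(r_mm=350, width_mm=20, pressure_kgf_cm2=8, flatten_mm=5)
# s_mm ≈ 118 (the "120 mm" above); load_kgf ≈ 189 (the rough "200 kg")
```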
Currently I am trying to improve my knowledge of thermodynamics and I have stumbled upon the following description of a thermodynamic system in which I don't fully understand the calculation: A container with an ideal gas is given in which a piston with a certain weight resides and the movement of the piston is frictionless. The container is linked to a work reservoir and is surrounded by a thermostat with diathermic walls so that any exchange of heat does not change the temperature. -> the process is isothermal The first law states: $dU=\delta{Q}+\delta{W}=\delta{Q}-pdV=\left(\frac{\partial{U}}{\partial{T}}\right)_V dT +\left(\frac{\partial{U}}{\partial{V}}\right)_T dV$ Because the gas is an ideal gas, $\left(\frac{\partial{U}}{\partial{V}}\right)_T = 0$, so: $dU=\delta{Q}+\delta{W}=\delta{Q}-pdV=\left(\frac{\partial{U}}{\partial{T}}\right)_V dT $ For an isothermal process $dT=0$ and thus: $\delta{Q}=-\delta{W}$ Let the gas in the container have a pressure of $p_{1}$ and the piston exert a pressure of $p_{p}$ where $p_{1}\gg p_{p}$. Here follows my first question: Is it right to say that the system seeks equilibrium and thus the force exerted by the gas will change until it is the same as the force exerted on the piston, leading to the same pressure? And how can one explain this behaviour? The next question I have regards the work done by the system: The textbook from which I have this example states that the work done is given by: $W=\int_{V_1}^{V_2}{p_pdV}= p_p\cdot \left(V_2-V_1\right)$ I can't fully comprehend why the pressure used in this equation is the constant pressure exerted by the piston. Due to $dU=0$ we see that the state equation equals 0 and thus I would say that the way we obtain a change in the system does matter. I am somewhat fixated on the idea that the work done is the scalar product of the force exerted and the path on which the force takes effect.
Hence I can't let go of the idea that if we split the container into, say, 100 compartments, then in the first compartment there exists a force $F_1$ and thus a pressure $p_1$ which leads to an energy $j_1$, and in the next compartment the same holds true for $F_2, p_2, j_2$, and as such I would add them. I know that it is wrong but I can't understand the other idea fully either.
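A numeric sketch may make the constant-$p_p$ point concrete. The numbers below are mine, not from the textbook; the gas expands irreversibly against the fixed piston pressure until its own pressure has dropped to $p_p$:

```python
# Irreversible isothermal expansion against a constant external pressure.
# Because dU = 0 for an isothermal ideal gas, Q = -W: the heat drawn from
# the thermostat equals the work the gas does on the surroundings.
R = 8.314          # J/(mol*K)
n, T = 1.0, 300.0  # illustrative values
p1 = 10e5          # initial gas pressure, Pa
p_p = 1e5          # constant piston pressure, Pa  (p1 >> p_p)

V1 = n * R * T / p1         # ideal-gas law before the expansion
V2 = n * R * T / p_p        # equilibrium: gas pressure has fallen to p_p
W_by_gas = p_p * (V2 - V1)  # work done BY the gas, at the piston pressure
Q_absorbed = W_by_gas       # heat absorbed from the thermostat

print(W_by_gas)  # ≈ 2245 J; a reversible path would instead give n*R*T*ln(10)
```

The work is computed with $p_p$ because that is the only pressure the gas actually pushes against at the moving boundary; the gas's own (higher, ill-defined during the irreversible expansion) pressure never enters the work integral.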
Answer $$\frac{\sec x}{\csc x}=\tan x$$ $\text{E}$ is the answer. Work Step by Step $$A=\frac{\sec x}{\csc x}$$ 2 Reciprocal Identities related to $\sec\theta$ and $\csc\theta$ state that $$\sec\theta=\frac{1}{\cos\theta}$$ $$\csc\theta=\frac{1}{\sin\theta}$$ That means we can rewrite $A$ as follows: $$A=\frac{\frac{1}{\cos x}}{\frac{1}{\sin x}}$$ $$A=\frac{\sin x}{\cos x}$$ Also, from Quotient Identities, we know $$\tan\theta=\frac{\sin\theta}{\cos\theta}$$ Therefore, $$A=\tan x$$ $\text{E}$ is the answer.
Basically, the problem is: For a set $S$ of positive numbers, find a minimal number $d$ that is not a divisor of any element of $S$, i.e. $\forall x \in S,\ d \nmid x$. Denote $n = |S|$ and $C = \max(S) $. Consider the function $F(x) = $ the least prime number not dividing $x$. It is easy to see that $F(x) \leq \log x$. And for a set $S$, let $F(S) = $ the least prime that doesn't divide any element of $S$. We have an upper bound $$F(S) \leq F(\operatorname{lcm}(S)) \leq F(C^n) \leq n \log C.$$ Therefore a simple brute-force algorithm, which enumerates all numbers from $1$ to $n \log C$ and checks if it doesn't divide any element of $S$, is polynomial and has time complexity $O(n^2 \log C)$. The other way to solve the problem is to compute all factors for every element of $S$ and use them in brute-force algorithm to check if $x$ is an answer in $O(1)$ time. This algorithm has time complexity $O(n \cdot \min (\sqrt{C}, n \log C) + n \log C)$ and uses $O(n \log C)$ memory, because we don't need to compute and store factors greater than $n \log C$. For small $n$ and $C$ it performs better. In detail, the algorithm consists of two parts: Construct a set $\hat{S}$ composed of all factors of all elements of $S$, i.e. $$\forall x \in S\ \forall f \le n \cdot \log C, \ (f \mid x \rightarrow f \in \hat{S})$$ This can be done in $O(n \cdot \min (\sqrt{C}, n \log C))$ time and $O(n \log C)$ memory. (Where does this come from? For any element of $S$, we can factor it using either trial factorization with all numbers up to $\sqrt{C}$ or all primes up to $n \log C$, whichever is smaller; thus each element of $S$ can be factored in time $O(\min (\sqrt{C}, n \log C))$ time.) Find minimal number $d \notin \hat{S}$. This step requires $O(|\hat{S}|) = O(n \log C)$ time, if checking whether $x \in \hat{S}$ can be done in $O(1)$ time. I have two questions that I'm interested in: Is there a faster algorithm to solve the problem? 
For given $n$ and $C$, how can we construct a set $S$ with maximal least common non-divisor?
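For concreteness, the simple brute-force algorithm described above looks like this in code (a direct sketch, nothing optimized):

```python
# Scan d = 1, 2, 3, ... and return the first d that divides no element
# of S.  The scan terminates within O(n log C) candidates, since the
# least *prime* non-divisor F(S) <= n*log(C) is an upper bound for the
# least non-divisor.
def least_non_divisor(S):
    d = 1
    while True:
        if all(x % d != 0 for x in S):
            return d
        d += 1

print(least_non_divisor([2, 3, 4]))  # 5
print(least_non_divisor([6]))        # 4 (1, 2, 3 and 6 all divide 6)
```

Note the answer need not be prime, as the second example shows.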
Astrophysics > Cosmology and Nongalactic Astrophysics
Title: First results from the HAYSTAC axion search
(Submitted on 2 Jan 2018)
Abstract: The axion is a well-motivated cold dark matter (CDM) candidate first postulated to explain the absence of $CP$ violation in the strong interactions. CDM axions may be detected via their resonant conversion into photons in a "haloscope" detector: a tunable high-$Q$ microwave cavity maintained at cryogenic temperature, immersed in a strong magnetic field, and coupled to a low-noise receiver. This dissertation reports on the design, commissioning, and first operation of the Haloscope at Yale Sensitive to Axion CDM (HAYSTAC), a new detector designed to search for CDM axions with masses above $20$ $\mu\mathrm{eV}$. I also describe the analysis procedure developed to derive limits on axion CDM from the first HAYSTAC data run, which excluded axion models with two-photon coupling $g_{a\gamma\gamma} \gtrsim 2\times10^{-14}$ $\mathrm{GeV}^{-1}$, a factor of 2.3 above the benchmark KSVZ model, over the mass range $23.55 < m_a < 24.0$ $\mu\mathrm{eV}$. This result represents two important achievements. First, it demonstrates cosmologically relevant sensitivity an order of magnitude higher in mass than any existing direct limits. Second, by incorporating a dilution refrigerator and Josephson parametric amplifier, HAYSTAC has demonstrated total noise approaching the standard quantum limit for the first time in a haloscope axion search.
Submission history: From: Benjamin Brubaker [view email] [v1] Tue, 2 Jan 2018 21:00:14 GMT (3706kb,D)
I am trying to construct a LRT to test the hypothesis $H_0: p \ge p_0$ and $H_1: p < p_0$ where $\alpha = .1$ and $p_0 = .6$ and give a critical region. Attempt: $\lambda(x) = \frac{"restricted" MLE}{"unrestricted" MLE} = \frac{L(\hat{\theta_0}|X)}{L(\hat{\theta}|X)}$ I am looking for $P(X \in \Re) \le \alpha$ and $P(\lambda(x) \le c) \le \alpha$ where $C \in (0,1)$ $Reject: \{ x: \lambda(x) \le c \}$ My confusion, is with this little information, how can I calculate $\frac{L(\hat{\theta_0}|X)}{L(\hat{\theta}|X)}$.
Upper bound of $e$ $$\frac{1}{7!}\approx0.000198\ldots<0.000457\ldots\approx\frac{1}{3^7}$$ Therefore, for $n\geq7$, $\frac{1}{n!}<\frac{1}{3^n}$ $$\begin{aligned}e&=\sum_{0\leq n\leq7}\frac{1}{n!}+\sum_{8\leq n}\frac{1}{n!}\\e&<\sum_{0\leq n\leq 7}\frac{1}{n!}+\sum_{n=8}^\infty\frac{1}{3^n}\\e&<\sum_{0\leq n\leq 7}\frac{1}{n!}+\frac{1}{3^8}\sum_{n=0}^\infty\frac{1}{3^n}\\e&<2.71825\ldots+\frac{1}{3^8}\left(\frac{1}{1-1/3}\right)\\e&<2.71849\end{aligned}$$ Upper bound of $\gamma$ This part's tricky since we need a relatively high precision (at least $4$ decimals) for $\gamma$ but it's also quite hard to bound from above. It also appears that we can't escape long computations at this stage, but they're well within the remit of someone to do by hand, in a day. The computations I've chosen require finding binomial coefficients (e.g. from Pascal's triangle), finding their reciprocals, and summing around $200$ numbers. I think this should be feasible without a calculator, even with just a couple of hours. Euler managed to do it in the $1700$s so surely we can too! Let $S(k)=\sum_{0\leq j\leq k-1}\binom{2^{k-j}+j}{j}^{-1}$. As $k\to\infty$, $S(k)$ appears to approach $1$ from above, though I haven't found a proof of this yet. $S(20)<1.004902$ so for $k\geq20$, $S(k)<1.004902$. $$\begin{aligned}\gamma&=\sum_{1\leq k}\frac{S(k)}{2^k}\\\gamma&<\sum_{1\leq k\leq20}\frac{S(k)}{2^k}+\sum_{20< k}\frac{1.004902}{2^k}\\\gamma&<\sum_{1\leq k\leq20}\frac{S(k)}{2^k}+1.004902\sum_{k=21}^\infty\frac{1}{2^k}\\\gamma&<\sum_{1\leq k\leq20}\frac{S(k)}{2^k}+\frac{1.004902}{2^{21}}\sum_{k=0}^\infty\frac{1}{2^k}\\\gamma&<\sum_{1\leq k\leq20}\frac{S(k)}{2^k}+\frac{1.004902}{2^{21}}\left(\frac{1}{1-1/2}\right)\\\gamma&<0.57721470\ldots+0.000000958\ldots\\\gamma&<0.57721567\end{aligned}$$ Lower bound of $\pi$ and Final Result By Archimedes, $\pi>3.14$ so we can show $$2\gamma e<2\cdot2.71849\cdot0.57721567<3.14<\pi$$ Hence, $\frac{e}{\pi}<\frac{1}{2\gamma}$, as desired.
We could also find a lower bound for $\pi$ by truncating an alternating series. But many such series require vast numbers of terms to even get the $2$ decimal places that we need. For example, the series $\frac{2}{\pi}=\sum_{0\leq k}\frac{(-1)^k(4k+1)((2k-1)!!)^3}{((2k)!!)^3}$, due to Ramanujan, still takes a few hundred terms to reach a lower bound of $3.14$, which is far too long to do by hand. It was tricky finding a way of bounding $\gamma$ since Vacca's series $\gamma=\sum_{1\leq n}\frac{(-1)^n\lfloor\log_2{n}\rfloor}{n}$ and $\gamma<\int_1^{N}\left(\frac{1}{\lfloor x \rfloor}-\frac{1}{x}\right)\,\mathrm{d}x+\int_{N}^{+\infty}\frac{\mathrm{d}x}{x(x-1)}$ both converge too slowly.
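The arithmetic in the $e$ bound is easy to double-check mechanically. A verification sketch, not part of the argument itself:

```python
import math

# Partial sum of 1/n! for n = 0..7, plus the geometric tail bound
# (1/3^8) * 1/(1 - 1/3) used in the derivation above.
partial = sum(1 / math.factorial(n) for n in range(8))
tail = (1 / 3**8) * (1 / (1 - 1 / 3))
bound = partial + tail

print(partial)  # 2.71825...
print(bound)    # 2.71848...; e = 2.71828... sits safely below it
assert math.e < bound
```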
Main Page
The Problem
Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].
[math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]
The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers. Some background to this project can be found here, and general discussion on massively collaborative "polymath" projects can be found here.
Threads
(1-199) A combinatorial approach to density Hales-Jewett (inactive)
(200-299) Upper and lower bounds for the density Hales-Jewett problem (active)
(300-399) The triangle-removal approach (inactive)
(400-499) Quasirandomness and obstructions to uniformity (inactive)
(500-599) Possible proof strategies (active)
(600-699) A reading seminar on density Hales-Jewett (active)
We are also collecting bounds for Fujimura's problem. Here are some unsolved problems arising from the above threads.
Bibliography
M. Elkin, "An Improved Construction of Progression-Free Sets", preprint.
H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem for k=3", Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241.
H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem", J.
Anal. Math. 57 (1991), 64–119. B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint. K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished.
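Returning to the definitions at the top of the page: combinatorial lines and line-free sets are small enough to check by machine for tiny [math]n[/math]. A sketch (function names are mine):

```python
from itertools import product

# A combinatorial line in [3]^n comes from a template over {1,2,3,'x'}
# with at least one wildcard 'x', instantiated by substituting 1, 2, 3
# for every wildcard simultaneously.
def lines(n):
    for template in product((1, 2, 3, 'x'), repeat=n):
        if 'x' in template:
            yield tuple(
                tuple(v if c == 'x' else c for c in template)
                for v in (1, 2, 3)
            )

def is_line_free(subset, n):
    s = set(subset)
    return not any(all(p in s for p in line) for line in lines(n))

# For n = 1 the only line is {(1,), (2,), (3,)}, so any 2-point subset
# is line-free and c_1 = 2.
assert is_line_free([(1,), (2,)], 1)
assert not is_line_free([(1,), (2,), (3,)], 1)
```

Enumerating templates gives [math]4^n - 3^n[/math] lines in [math][3]^n[/math] (7 for [math]n=2[/math]), so this brute force is only usable for very small [math]n[/math].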
Last time we studied meets and joins of partitions. We observed an interesting difference between the two. Suppose we have partitions \(P\) and \(Q\) of a set \(X\). To figure out if two elements \(x , x' \in X\) are in the same part of the meet \(P \wedge Q\), it's enough to know if they're in the same part of \(P\) and the same part of \(Q\), since $$ x \sim_{P \wedge Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ and } x \sim_Q x'. $$ Here \(x \sim_P x'\) means that \(x\) and \(x'\) are in the same part of \(P\), and so on. However, this does not work for the join! $$ \textbf{THIS IS FALSE: } \; x \sim_{P \vee Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ or } x \sim_Q x' . $$ To understand this better, the key is to think about the "inclusion" $$ i : \{x,x'\} \to X , $$ that is, the function sending \(x\) and \(x'\) to themselves thought of as elements of \(X\). We'll soon see that any partition \(P\) of \(X\) can be "pulled back" to a partition \(i^{\ast}(P)\) on the little set \( \{x,x'\} \). And we'll see that our observation can be restated as follows: $$ i^{\ast}(P \wedge Q) = i^{\ast}(P) \wedge i^{\ast}(Q) $$ but $$ \textbf{THIS IS FALSE: } \; i^{\ast}(P \vee Q) = i^{\ast}(P) \vee i^{\ast}(Q) . $$ This is just a slicker way of saying the exact same thing. But it will turn out to be more illuminating! So how do we "pull back" a partition? Suppose we have any function \(f : X \to Y\). Given any partition \(P\) of \(Y\), we can "pull it back" along \(f\) and get a partition of \(X\) which we call \(f^{\ast}(P)\). Here's an example from the book: For any part \(S\) of \(P\) we can form the set of all elements of \(X\) that map to \(S\). This set is just the preimage of \(S\) under \(f\), which we met in Lecture 9. We called it $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S \}. $$ As long as this set is nonempty, we include it in our partition \(f^{\ast}(P)\).
So beware: we are now using the symbol \(f^{\ast}\) in two ways: for the preimage of a subset and for the pullback of a partition. But these two ways fit together quite nicely, so it'll be okay. Summarizing: Definition. Given a function \(f : X \to Y\) and a partition \(P\) of \(Y\), define the pullback of \(P\) along \(f\) to be this partition of \(X\): $$ f^{\ast}(P) = \{ f^{\ast}(S) : \; S \in P \text{ and } f^{\ast}(S) \ne \emptyset \} . $$ Puzzle 40. Show that \( f^{\ast}(P) \) really is a partition using the fact that \(P\) is. It's fun to prove this using properties of the preimage map \( f^{\ast} : P(Y) \to P(X) \). It's easy to tell if two elements of \(X\) are in the same part of \(f^{\ast}(P)\): just map them to \(Y\) and see if they land in the same part of \(P\). In other words, $$ x\sim_{f^{\ast}(P)} x' \textrm{ if and only if } f(x) \sim_P f(x') $$ Now for the main point: Proposition. Given a function \(f : X \to Y\) and partitions \(P\) and \(Q\) of \(Y\), we always have $$ f^{\ast}(P \wedge Q) = f^{\ast}(P) \wedge f^{\ast}(Q) $$ but sometimes we have $$ f^{\ast}(P \vee Q) \ne f^{\ast}(P) \vee f^{\ast}(Q) . $$ Proof. To prove that $$ f^{\ast}(P \wedge Q) = f^{\ast}(P) \wedge f^{\ast}(Q) $$ it's enough to prove that they give the same equivalence relation on \(X\). That is, it's enough to show $$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P) \wedge f^{\ast}(Q) } x'. $$ This looks scary but we just follow our nose. First we rewrite the right-hand side using our observation about the meet of partitions: $$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P)} x' \textrm{ and } x\sim_{f^{\ast}(Q) } x'. $$ Then we rewrite everything using what we just saw about the pullback: $$ f(x) \sim_{P \wedge Q} f(x') \textrm{ if and only if } f(x) \sim_P f(x') \textrm{ and } f(x) \sim_Q f(x'). $$ And this is true, by our observation about the meet of partitions! 
So, we're really just stating that observation in a new language. To prove that sometimes $$ f^{\ast}(P \vee Q) \ne f^{\ast}(P) \vee f^{\ast}(Q) , $$ we just need one example. So, take \(P\) and \(Q\) to be these two partitions: They are partitions of the set $$ Y = \{11, 12, 13, 21, 22, 23 \}. $$ Take \(X = \{11,22\} \) and let \(i : X \to Y \) be the inclusion of \(X\) into \(Y\), meaning that $$ i(11) = 11, \quad i(22) = 22 . $$ Then compute everything! \(11\) and \(22\) are in different parts of \(i^{\ast}(P)\): $$ i^{\ast}(P) = \{ \{11\}, \{22\} \} . $$ They're also in different parts of \(i^{\ast}(Q)\): $$ i^{\ast}(Q) = \{ \{11\}, \{22\} \} .$$ Thus, we have $$ i^{\ast}(P) \vee i^{\ast}(Q) = \{ \{11\}, \{22\} \} . $$ On the other hand, the join \(P \vee Q \) has just two parts: $$ P \vee Q = \{\{11,12,13,22,23\},\{21\}\} . $$ If you don't see why, figure out the finest partition that's coarser than \(P\) and \(Q\) - that's \(P \vee Q \). Since \(11\) and \(22\) are in the same parts here, the pullback \(i^{\ast} (P \vee Q) \) has just one part: $$ i^{\ast}(P \vee Q) = \{ \{11, 22 \} \} . $$ So, we have $$ i^{\ast}(P \vee Q) \ne i^{\ast}(P) \vee i^{\ast}(Q) $$ as desired. \( \quad \blacksquare \) Now for the real punchline. The example we just saw was the same as our example of a "generative effect" in Lecture 12. So, we have a new way of thinking about generative effects: the pullback of partitions preserves meets, but it may not preserve joins! This is an interesting feature of the logic of partitions. Next time we'll understand it more deeply by pondering left and right adjoints. But to warm up, you should compare how meets and joins work in the logic of subsets: Puzzle 41. Let \(f : X \to Y \) and let \(f^{\ast} : PY \to PX \) be the function sending any subset of \(Y\) to its preimage in \(X\). Given \(S,T \in P(Y) \), is it always true that $$ f^{\ast}(S \wedge T) = f^{\ast}(S) \wedge f^{\ast}(T ) ? 
$$ Is it always true that $$ f^{\ast}(S \vee T) = f^{\ast}(S) \vee f^{\ast}(T ) ? $$ To read other lectures go here.
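The whole computation above fits in a few lines of code. The lecture's \(P\) and \(Q\) come from pictures not reproduced here, so the partitions below are stand-ins of my own choosing that exhibit the same failure of the join under pullback:

```python
# Partitions are lists of sets.  `pullback` implements f*: group the
# elements of X by which part of P their image lands in.  `join` is the
# finest common coarsening, built by merging overlapping parts.

def same_part(P, a, b):
    return any(a in part and b in part for part in P)

def pullback(f, X, P):
    parts = []
    for x in X:
        for part in parts:
            if same_part(P, f(x), f(next(iter(part)))):
                part.add(x)
                break
        else:
            parts.append({x})
    return parts

def join(P, Q):
    parts = [set(p) for p in P] + [set(q) for q in Q]
    merged = True
    while merged:
        merged = False
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                if parts[i] & parts[j]:
                    parts[i] |= parts.pop(j)
                    merged = True
                    break
            if merged:
                break
    return parts

P = [{11, 12}, {13}, {21}, {22}, {23}]   # stand-in partition of Y
Q = [{12, 22}, {11}, {13}, {21}, {23}]   # stand-in partition of Y
X = {11, 22}
f = lambda x: x                          # the inclusion i : X -> Y

join_of_pullbacks = join(pullback(f, X, P), pullback(f, X, Q))
pullback_of_join = pullback(f, X, join(P, Q))

print(join_of_pullbacks)  # [{11}, {22}]: 11 and 22 stay separate
print(pullback_of_join)   # [{11, 22}]: the join glues them via 12
```

The element 12 plays the role of the "connecting" element: neither \(P\) nor \(Q\) alone relates 11 to 22, but their join does, and the two-element subset \(X\) cannot see why.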
Inline equations
Inline equations are wrapped between `\(` and `\)`. `$` wrapping also works, but it is not preferred as it comes with restrictions like "there should be no whitespace between the equation and the `$` delimiters". So `$ a=b $` will not work (it will look like: $ a=b $), but `$a=b$` will work (it will look like: \(a=b\)). On the other hand, both `\(a=b\)` (it will look like: \(a=b\)) and `\( a=b \)` (it will look like: \( a=b \)) will work. One-per-line equations are wrapped between `\[` and `\]` or `$$` delimiters. For example, below in Org:
LaTeX formatted equation: \( E = -J \sum_{i=1}^N s_i s_{i+1} \)
will look like this in Hugo rendered HTML:
LaTeX formatted equation: \( E = -J \sum_{i=1}^N s_i s_{i+1} \)
Don't see this in Markdown; see what it looks like after Hugo has processed it. — ox-hugo does some heavy escaping to get around a Blackfriday issue with supporting MathJax syntax equations (that Org supports). Here's another example, taken from (org) LaTeX fragments. Below in Org:
If $a^2=b$ and \( b=2 \), then the solution must be either $$ a=+\sqrt{2} $$ or \[ a=-\sqrt{2} \]
renders to:
If \(a^2=b\) and \( b=2 \), then the solution must be either \[ a=+\sqrt{2} \] or \[ a=-\sqrt{2} \]
(Note that the last two equations show up on their own lines.)
LaTeX Environments
ox-hugo supports LaTeX environments. So below in an Org buffer:
\begin{equation}\label{eq:1}C = W\log_{2} (1+\mathrm{SNR})\end{equation}
will export as:
\begin{equation} \label{eq:1} C = W\log_{2} (1+\mathrm{SNR}) \end{equation}
Equation referencing will also work. So `\ref{eq:1}` will render as \ref{eq:1}.
Examples
You can find many examples by looking at tests tagged equations.
Dice permutations are calculated as "Permutations with repetition/replacement," which means, intuitively speaking, that if you roll 2d6, getting a 6 on the first die does not prevent you from getting a 6 on the second die. (As opposed to "Permutations without repetition/replacement." An example of that is putting six numbered slips of paper into an urn and drawing out two of them, literally without replacing the first slip before you draw out the second -- now, getting a 6 on the first slip does prevent you from getting a 6 on the second slip.) Calculating permutations with repetition is very easy. I'll build up the intuition in a few short steps: Consider rolling 1d6. The number of permutations is, trivially, 6: \begin{array}{r|llllll}\text{Die One} & \text{1} & \text{2} & \text{3} & \text{4} & \text{5} & \text{6} \\\hline\end{array} I'll note in passing that \$ 6^1 \$ happens to be \$ 6 \$ and that we could use this for any single roll of some die with Y faces: $$ \text{1dY} \rightarrow Y^1 = Y $$ Consider rolling 2d6: \begin{array}{r|llllll}\text{Die Two\Die One} & \text{1} & \text{2} & \text{3} & \text{4} & \text{5} & \text{6} \\\hline1 & \text{1,1} & \text{1,2} & \text{1,3} & \text{1,4} & \text{1,5} & \text{1,6}\\2 & \text{2,1} & \text{2,2} & \text{2,3} & \text{2,4} & \text{2,5} & \text{2,6}\\3 & \text{3,1} & \text{3,2} & \text{3,3} & \text{3,4} & \text{3,5} & \text{3,6}\\4 & \text{4,1} & \text{4,2} & \text{4,3} & \text{4,4} & \text{4,5} & \text{4,6}\\5 & \text{5,1} & \text{5,2} & \text{5,3} & \text{5,4} & \text{5,5} & \text{5,6}\\6 & \text{6,1} & \text{6,2} & \text{6,3} & \text{6,4} & \text{6,5} & \text{6,6}\\\end{array} You can count the entries in the table and come up with 36. But you can also think of this as extending the table for a 1d6 roll: Each of the six entries in the first table is turned into its own new, separate list with six unique entries.
This is because no matter what we roll on the first die, we can get any of the six values on the second die. So we can simply calculate 6 times 6 entries = 36. If we generalize that to two dice, each with Y sides, that leaves us with: $$ \text{2dY} \rightarrow Y^2 $$ Consider rolling 3d6. I'm not going to draw the big table, but extending what we did above, we turn each of the 36 table entries into, again, its own unique list of six more entries. (Because again, no matter what we roll on the first two dice, we can roll any of the six values for the third one.) For the specific case of 3d6 we would have 36 times 6 = 216 entries. This basic thought process holds for any die with Y sides that we toss X times, i.e., $$ \text{XdY} \rightarrow Y^X $$ Note that the X and the Y swap places on either side of the expression! This is not a mistake. But what about something weird, like 1d6 + 1d4? The same basic process: \begin{array}{r|llllll}\text{Die Two\Die One} & \text{1} & \text{2} & \text{3} & \text{4} & \text{5} & \text{6} \\\hline1 & \text{1,1} & \text{1,2} & \text{1,3} & \text{1,4} & \text{1,5} & \text{1,6}\\2 & \text{2,1} & \text{2,2} & \text{2,3} & \text{2,4} & \text{2,5} & \text{2,6}\\3 & \text{3,1} & \text{3,2} & \text{3,3} & \text{3,4} & \text{3,5} & \text{3,6}\\4 & \text{4,1} & \text{4,2} & \text{4,3} & \text{4,4} & \text{4,5} & \text{4,6}\\\end{array} Just note here that each of the six entries from the first die is only matched with FOUR entries from the second die. In the expression below, I'm using Y values with subscripts $$ \text{1dY}_1 + \text{1dY}_2 \rightarrow Y_1^1 \times Y_2^1 $$ It doesn't matter how weird we get from there, as long as we're talking about simple dice, we just keep multiplying by the number of faces on the new die we've just added. 
The full formula in all its generality becomes: $$ \text{X}_1\text{dY}_1 + \text{X}_2\text{dY}_2 + \dots + \text{X}_N\text{dY}_N \rightarrow Y_1^{X_1} \times Y_2^{X_2} \dots Y_N^{X_N}$$ As a final note, the addition of a constant (e.g., 3d6 + 6) changes nothing, and the constant can be ignored for finding the number of permutations. Technically, one might consider it "a single sided die" and multiply by one, but that's a bit precious. For your specific example, there are 2,985,984 permutations: \begin{array}{r|llll}\ N & X_N & Y_N & {Y_N ^ {X_N}} \\\hline1 & \text{4} & \text{2} & \text{16} \\2 & \text{4} & \text{4} & \text{256} \\3 & \text{6} & \text{3} & \text{729}\\\hline\text{product} & \text{ } & \text{ } & \text{2985984}\\\end{array}
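As a final check, the general formula is one loop in code (a small sketch; the function name is mine):

```python
# Number of permutations for a dice pool X1dY1 + X2dY2 + ...: the
# product of Y_i ** X_i, one factor per group of dice.  Constant
# modifiers (like the +6 in 3d6+6) contribute nothing.
def pool_permutations(pool):
    """pool: list of (X, Y) pairs, e.g. 3d6 -> [(3, 6)]."""
    total = 1
    for x, y in pool:
        total *= y ** x
    return total

print(pool_permutations([(3, 6)]))                  # 216
print(pool_permutations([(1, 6), (1, 4)]))          # 24, the 1d6 + 1d4 case
print(pool_permutations([(4, 2), (4, 4), (6, 3)]))  # 2985984, the table's total
```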
Hi, I'm trying to carry out the following indefinite integral: $$\int \frac{1}{e^{-q} - q} \, dq$$ Mathematica is not helping me, and I think it is not solvable by the substitution method. Any idea on how to solve it? Thanks in advance.
According to Wolfram: $\displaystyle{\frac{1}{e^{-x}-x}=1+2x+\frac{7x^2}{2}+\frac{37x^3}{6}+\cdots}$ therefore the integral can be written like that: $$\displaystyle{\int \frac{1}{e^{-x}-x}\, dx=\int \left(1+2x+\frac{7x^2}{2}+\frac{37x^3}{6}+\cdots \right )\, dx=x+x^2+\frac{7x^3}{6}+\frac{37x^4}{24}+\cdots}$$
According to http://calculus-geometry.hubpages.com/hub/List-of-Functions-You-Cannot-Integrate-No-Antiderivatives, the function $$f: x \mapsto \frac{1}{e^x + x}$$ does not have an antiderivative expressible in terms of elementary functions, i.e. as algebraic combinations and compositions of polynomials, $\ln$, and $\exp$. Your integrand is simply $f(-x)$. However, many functions of this kind have definite integrals in closed form, for instance $$\int_{0}^{\infty} \frac{\mathrm{d}x}{e^x + 1} = \ln(2).$$ Mathematica tells us there is no such expression for your integrand, however.
It may not be possible to get a nice clean answer. To me, it looks like your best bet is to use a Taylor series. To refresh your memory, the Taylor series of a function centered at $x=a$ is: $$\sum _{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n,$$ where $f^{(n)}(a)$ is the $n^{\mathrm{th}}$ derivative of $f$ at $a$. Since the partial sums of a Taylor series are polynomials, it should be simple to come up with the indefinite integral of the Taylor series of your function.
Case $1$: $|xe^x|<1$ Then $\int\dfrac{1}{e^{-x}-x}dx$ $=\int\dfrac{e^x}{1-xe^x}dx$ $=\int\sum\limits_{n=0}^\infty x^ne^{(n+1)x}~dx$ $=\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\dfrac{(-1)^{n-k}n!x^ke^{(n+1)x}}{(n+1)^{n-k+1}k!}+C$ (the last step can be obtained from http://en.wikipedia.org/wiki/List_of_integrals_of_exponential_functions)
For the standard Kalman filter we must assume $\boldsymbol{\theta}_t$ and $y_t$ are jointly Gaussian. So in short, we know they are jointly Gaussian because we assumed it in the first place. This assumption is made for convenience in estimation, and I do not see any obvious reason why you would want to relax it given a linear, Gaussian, state space model. The more general question you pose, given two Gaussian distributed variables, how do I test that they are jointly Gaussian?, is a more interesting question. In this setting, it is generally easier to specify a particular class of alternative distributions and compare model fit than to conduct formal hypothesis testing. For example, let $X_1,..X_n \stackrel{iid}{\sim}N(\mu_x,\sigma_x)$ and $Y_1,..,Y_n\stackrel{iid}{\sim}N(\mu_y,\sigma_y)$; it is then much easier to compare the fit of a bivariate normal model to the fit of, say, a bivariate Clayton copula than to test the hypothesis $H_0:$ $(\mathbf{X,Y})$ is bivariate normal vs $H_a:$ $(\mathbf{X,Y})$ is not bivariate normal. However, you may still be interested in such a hypothesis. In this case, residual diagnostic testing can be used to assess the validity of the joint normality assumption given a set of parameters, which comes very close to the above hypothesis test. To do this we begin with a likelihood. If we assume $(\mathbf{X,Y})$ are bivariate normal, then we can estimate the univariate parameters $\mu_x,\mu_y,\sigma_x,\sigma_y$ and the correlation coefficient, $\rho$, via maximum likelihood. $$L(\mu_x,\mu_y,\sigma_x,\sigma_y,\rho|\mathbf{X,Y})=\prod_{i=1}^n\boldsymbol{\phi}(X_i,Y_i|\mu_x,\mu_y,\sigma_x,\sigma_y,\rho)$$ where $\boldsymbol{\phi}$ is the bivariate normal pdf.
Given the maximum likelihood estimates $\hat \mu_x,\hat\mu_y,\hat\sigma_x,\hat\sigma_y,\hat\rho$, we can generate a generalized residual for each $i$ by first calculating $u_i=\boldsymbol{\Phi}(X_i,Y_i|\hat \mu_x,\hat\mu_y,\hat\sigma_x,\hat\sigma_y,\hat\rho)$ where $\boldsymbol{\Phi}$ is the bivariate normal cdf. By the probability integral transform, it should be the case that $u_i \sim uniform(0,1)$ and, in turn, the generalized residual $r_i=\Phi^{-1}(u_i)$, where $\Phi$ is the univariate standard normal cdf, should be distributed standard normal. You can then test $r_1,..r_n$ for standard normality using the KS-test, Jarque-Bera test or whatever else you may prefer. The idea is that if $(\mathbf{X,Y})$ are truly bivariate normal with correctly estimated parameters, then $r_1,..r_n$ should be distributed standard normal. If we reject normality in $r_1,..r_n$, then we in effect reject that $(X_i,Y_i) \sim \mathcal{N}(\hat \mu_x,\hat\mu_y,\hat\sigma_x,\hat\sigma_y,\hat\rho)$. One nuance is that the hypothesis test is conditional on the estimated parameters, so we would not be testing the "unconditional" hypothesis $H_0:$ $(\mathbf{X,Y})$ is bivariate normal vs $H_a:$ $(\mathbf{X,Y})$ is not bivariate normal exactly. In a Bayesian setting one could probably find a way to integrate over parameter uncertainty, but given a sufficient amount of data the difference between the conditional and unconditional test becomes negligible.
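As a sketch of the residual idea (using a conditional, Rosenblatt-style decomposition of the bivariate normal in place of the joint CDF, so everything stays in the standard library; all data and parameters below are simulated and hypothetical):

```python
import math
import random

def std_normal_cdf(z):
    # Phi(z) via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def generalized_residuals(xs, ys, mx, my, sx, sy, rho):
    """Map each (x, y) to two residuals that should be iid N(0,1) if the
    data are bivariate normal with the given parameters: r1 standardizes x,
    r2 standardizes y | x (Rosenblatt decomposition)."""
    out = []
    for x, y in zip(xs, ys):
        r1 = (x - mx) / sx
        cond_mean = my + rho * sy / sx * (x - mx)
        cond_sd = sy * math.sqrt(1.0 - rho * rho)
        out.extend([r1, (y - cond_mean) / cond_sd])
    return out

# Simulate bivariate normal data with known (made-up) parameters
random.seed(0)
mx, my, sx, sy, rho = 1.0, -2.0, 2.0, 0.5, 0.7
xs, ys = [], []
for _ in range(5000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(mx + sx * z1)
    ys.append(my + sy * (rho * z1 + math.sqrt(1 - rho * rho) * z2))

r = generalized_residuals(xs, ys, mx, my, sx, sy, rho)
mean = sum(r) / len(r)
var = sum(v * v for v in r) / len(r) - mean * mean
u = [std_normal_cdf(v) for v in r]  # probability integral transform
print(round(mean, 2), round(var, 2), round(sum(u) / len(u), 2))
```

With the true parameters, `mean` and `var` should land near 0 and 1, and the transformed values `u` near uniform(0,1); one would then feed `r` to a KS or Jarque-Bera test as described above.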
Main Page (revision as of 18:14, 13 February 2009)

A spreadsheet containing the latest upper and lower bounds for <math>c_n</math> can be found [http://spreadsheets.google.com/ccc?key=p5T0SktZY9DsU-uZ1tK7VEg here]. Here are the proofs of our [[upper and lower bounds]] for these constants. We are also collecting bounds for [[Fujimura's problem]].

The Problem

Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].

[math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]

The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.

Useful background materials

Some background to the project can be found here.
General discussion on massively collaborative "polymath" projects can be found here. A cheatsheet for editing the wiki may be found here. Finally, here is the general Wiki user's guide Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (final call) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (inactive) (500-599) Possible proof strategies (active) (600-699) A reading seminar on density Hales-Jewett (active) (700-799) Bounds for the first few density Hales-Jewett numbers, and related quantities (arriving at station) Here are some unsolved problems arising from the above threads. Here is a tidy problem page. Bibliography Density Hales-Jewett H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished. Behrend-type constructions M. Elkin, "An Improved Construction of Progression-Free Sets ", preprint. B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint. K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint. Triangles and corners M. Ajtai, E. Szemerédi, Sets of lattice points that form no squares, Stud. Sci. Math. Hungar. 9 (1974), 9--11 (1975). MR369299 I. Ruzsa, E. Szemerédi, Triple systems with no six points carrying three triangles. Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), Vol. II, pp. 939--945, Colloq. Math. Soc. János Bolyai, 18, North-Holland, Amsterdam-New York, 1978. MR519318 J. Solymosi, A note on a question of Erdős and Graham, Combin. Probab. Comput. 13 (2004), no. 2, 263--267. 
MR 2047239
Observation of electromagnetic Dalitz decays J/\psi \to P e^+e^-

2014 (English). In: PHYSICAL REVIEW D, ISSN 2470-0010, Vol. 89, no. 9, article id 092008. Article in journal (refereed), published.

Abstract [en]: Based on a sample of (225.3\pm2.8)\times 10^{6} J/\psi events collected with the BESIII detector, the electromagnetic Dalitz decays of J/\psi \to P e^+e^- (P=\eta'/\eta/\pi^0) are studied. By reconstructing the pseudoscalar mesons in various decay modes, the decays J/\psi \to \eta' e^+e^-, J/\psi \to \eta e^+e^- and J/\psi \to \pi^0 e^+e^- are observed for the first time. The branching fractions are determined to be \mathcal{B}(J/\psi\to \eta' e^+e^-) = (5.81\pm0.16\pm0.31)\times10^{-5}, \mathcal{B}(J/\psi\to \eta e^+e^-) = (1.16\pm0.07\pm0.06)\times10^{-5}, and \mathcal{B}(J/\psi\to \pi^0 e^+e^-)=(7.56\pm1.32\pm0.50)\times10^{-7}, where the first errors are statistical and the second ones systematic.

Place, publisher, year, edition, pages: 2014. Vol. 89, no. 9, article id 092008
National Category: Subatomic Physics
Identifiers: URN: urn:nbn:se:uu:diva-288072; DOI: 10.1103/PhysRevD.89.092008; OAI: oai:DiVA.org:uu-288072; DiVA id: diva2:923895

Note. Funding: The BESIII Collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts No. 11125525, No. 11235011, No. 11322544, No. 11335008, and No. 11425524; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics; the Collaborative Innovation Center for Particles and Interactions; Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts No. 11179007, No. U1232201, and No. U1332201; CAS under Contracts No. KJCX2-YW-N29 and No.
KJCX2-YW-N45; 100 Talents Program of CAS; National 1000 Talents Program of China; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Collaborative Research Center Contract No. CRC-1044; Istituto Nazionale di Fisica Nucleare, Italy; Joint Funds of the National Science Foundation of China under Contract No. U1232107; Ministry of Development of Turkey under Contract No. DPT2006K-120470; Russian Foundation for Basic Research under Contract No. 14-07-91152; The Swedish Research Council; US Department of Energy under Contracts No. DE-FG02-04ER41291, No. DE-FG02-05ER41374, No. DE-SC0012069, and No. DESC0010118; US National Science Foundation; University of Groningen and the Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt; and WCU Program of National Research Foundation of Korea under Contract No. R32-2008-000-10155-0.
I have a question on page 655 of Peskin and Schroeder. The second equation of (19.23) is discussed here, but the first equation of (19.23) is still a mystery: $$ \underset{\epsilon \to 0}{\text{symm lim}}\left\{\frac{\epsilon^{\mu}}{\epsilon^2}\right\} = 0 $$ How can we understand this? Thanks. Look at (19.27): $$ \bar\psi(x+\varepsilon/2)\,\Gamma\,\psi(x-\varepsilon/2) = \frac{-i}{2\pi} \mathrm{tr} \left[ \frac{\gamma^{\alpha}\epsilon_{\alpha}}{\epsilon^2} \Gamma \right]\tag{19.27} $$ where the two fermion fields are contracted. And note the first sentence of the paragraph just below (19.27): Because the contraction of fermion fields is singular as $\epsilon \to 0$, the terms of order $\epsilon$ in the last line of (19.25) can give a finite contribution. That is, when one puts $\Gamma =I$ in (19.27), one should get a divergent quantity. So the first expression in (19.23) is misprinted; it should be replaced by $$ \underset{\epsilon \to 0}{\text{symm lim}}\Bigl\{\frac{\epsilon^{\mu}}{\epsilon^2}\Bigr\} \to \infty$$
It is obvious that the flux through the whole sphere is $\iiint_V 3r^2 dV = \frac{12}5\pi R^5 = 7500\pi$; I guess you'll have no problem getting this. Then we subtract from $7500\pi$ the flux in the region with $|x|,|y|,|z|\ge 4$. We notice that this will create 6 circular holes in the sphere. By symmetry the flux through all 6 holes should be the same, so we only need to compute one. I believe there is no other way than computing $\iint_S \mathbf F\cdot \hat{\mathbf n}\,dS$ directly. This method does not really use the divergence theorem, but I doubt there is any simpler method. The extra cap can be described in spherical coordinates as $\left\{r=5\wedge\theta \in \left[0, \tan^{-1}\frac34\right)\right\}$. The vector field can be rewritten as \begin{align}\mathbf F&= x^3\hat{\mathbf x} + y^3\hat{\mathbf y} + z^3 \hat{\mathbf z} \\&= (r\sin\theta\cos\phi)^3 (\sin\theta\cos\phi \hat{\mathbf r}+\dotsb) + (r\sin\theta\sin\phi)^3 (\sin\theta\sin\phi \hat{\mathbf r}+\dotsb) + (r\cos\theta)^3 (\cos\theta \hat{\mathbf r}+\dotsb) \\&= r^3 \left( \sin^4\theta(\cos^4\phi + \sin^4\phi) + \cos^4\theta \right) \hat{\mathbf r} + \dotsb.\end{align}Because $\hat{\mathbf n}dS=r^2 \sin\theta\, d\phi \, d\theta\,\hat{\mathbf r}$ on the spherical surface, we are left with\begin{align}\iint_S \mathbf F\cdot \hat{\mathbf n} \,dS&= \int_0^{2\pi}\int_0^{\tan^{-1}(3/4)} r^3 \left( \sin^4\theta(\cos^4\phi + \sin^4\phi) + \cos^4\theta \right) \cdot r^2 \sin\theta\, d\theta \, d\phi \\&= 5^5 \int_0^{\tan^{-1}(3/4)}\int_0^{2\pi} \left( \sin^5\theta(\cos^4\phi + \sin^4\phi) + \cos^4\theta\sin\theta \right) d\phi \, d\theta \\&= 5^5\pi \int_0^{\tan^{-1}(3/4)} \left( \frac32 \sin^5\theta + 2\cos^4\theta\sin\theta\right)d\theta \\&= 5^5\pi \left( \frac32 \cdot \frac{428}{46875} + 2\cdot \frac{2101}{15625} \right) \\&= \frac{4416}5 \pi,\end{align}hence the final answer is$$ 7500\pi - 6\times\frac{4416}5 \pi = \frac{11004}5\pi \approx 6914.0171.
$$ As "verification", I did arrive at the same answer by numerical integration. You should need to show yourself how to carry out the integrals of $\sin^5\theta$ etc.
Let $(X,d)$ be the metric space with $d(f,g)=\sup |f(x)-g(x)|$, where $X$ is the set of continuous functions on $[0,1/2]$. Show that $\Phi:X\rightarrow X$, $$\Phi(f)(x)=\int_0^x \frac{1}{1+f(t)^2}dt,$$ has a unique fixed point $f$ with $f(0)=0$. b) Show that it satisfies $\frac{df}{dx}=\frac{1}{1+f(x)^2}$. My attempt: I assumed $(X,d)$ is complete. I need to show that: $$d(\Phi(f),\Phi(g))=\sup \Big|\int_0^x \frac{1}{1+f(t)^2}dt-\int_0^x \frac{1}{1+g(t)^2}dt\Big|\leq\alpha d(f,g)$$ Now, $$\sup \Big|\int_0^x \frac{1}{1+f(t)^2}dt-\int_0^x \frac{1}{1+g(t)^2}dt\Big|=\sup \Big|\int_0^x \frac{g(t)^2-f(t)^2}{(1+f(t)^2)(1+g(t)^2)}dt\Big|$$ Now ideally I need to somehow bound this integral in terms of $d(f,g)=\sup|f(x)-g(x)|$, but I don't know how. Then I would integrate and need the resulting constant to be $<1$, which will be my contraction constant.
An object $C$ in an additive category $\mathcal{C}$ admitting all filtered direct limits is called "of finite type" if the canonical map $$\underrightarrow{\lim} Hom_{\mathcal{C}}(C,F(i))\to Hom_{\mathcal{C}}(C,\underrightarrow{\lim}F)$$ is injective for every directed poset $I$ and every functor $F:I\to \mathcal{C}$. In the case $\mathcal{C}=$ Mod-$R$, prove that this definition is equivalent to the definition of "finitely generated". The exercise has this strange hint: use the fact that if $\mathcal{F}$ is the set of finitely generated submodules of a module $C$, then $$C\Big/{\sum_{A\in\mathcal{F}}A}=\underrightarrow{\lim}\,C/A$$ I say that the hint is strange because $\displaystyle {\sum_{A\in\mathcal{F}}A}=C$
In the previous lab we switched from the generative approach of classifier design to the discriminative one. In particular, we implemented the Perceptron algorithm. In this lab, we will continue examining the discriminative domain by implementing the popular Support Vector Machines (SVM) classifier. Despite its simplicity, the perceptron algorithm has several drawbacks. The first drawback is that it finds just some separating hyperplane (decision boundary), but ideally we would like to find the optimal separating hyperplane (at least in some sense). Another important drawback is that the perceptron cannot deal with noisy data. The SVM is designed to handle both of these drawbacks. Given an annotated training data set$$\mathcal{T} = \{ (\mathbf{x}_i, y_i)\}_{i = 1}^{m} \quad,\ \text{where} \ \mathbf{x}_i \in \mathbb{R}^n,\ y_i \in \{-1, 1\}$$the SVM primal task is formulated as$$ \mathbf{w^*}, b^*, \xi_i^* = \arg\min_{\mathbf{w}, b, \xi_i} \left( \frac{1}{2} \| \mathbf{w} \|^2 + C \sum_{i=1}^{m}{\xi_i} \right)$$subject to \begin{align} \langle\mathbf{w}, \mathbf{x}_i\rangle + b & \ge +1 - \xi_i, \quad y_i = +1, \\ \langle\mathbf{w}, \mathbf{x}_i\rangle + b & \le -1 + \xi_i, \quad y_i = -1, \\ \xi_i & \ge 0,\ \forall i.\end{align} Here, and in the following text, $\langle\mathbf{a}, \mathbf{b}\rangle$ denotes the dot product $\mathbf{a}^\top \mathbf{b}$. As we can see, the SVM is posed purely as an optimization task. The first term in the above optimisation task maximises the so-called margin, $2/||\mathbf{w}||$, by actually minimising $||\mathbf{w}||$. The margin is the distance between the separating hyperplane and the closest training points (intuitively, the further the separating hyperplane is from the training data, the better it generalises to unseen data). Thus the SVM is sometimes referred to as a maximum margin classifier. A more detailed explanation can be found in [1, Section 3 - Linear Support Vector Machines].
The sum in the minimisation task is over the so-called slack variables, $\xi_i$. They allow some training points to be incorrectly classified (compare the separating hyperplane equation in the optimisation constraints with the one in the Perceptron) by paying a certain penalty controlled by the user-specified constant $C$. This enables the SVM to deal with noisy data (linearly non-separable due to noise). Although intuitive, the primal task formulation is difficult to optimise. Thus, the parameters of the optimal separating hyperplane are usually sought by solving the dual problem. The reason for solving the dual task instead is that the separating hyperplane is fully specified by only a small number of points (small compared to the usually very numerous training set), called support vectors. The derivation of the dual task can be found e.g. in [1] or [2]. Note that to obtain the dual formulation, the student should be familiar with Lagrange multipliers. The dual task is formulated as$$\mathbf{\alpha^*} = \arg\max_{\mathbf{\alpha}} \left( \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_i \alpha_j y_i y_j \langle\mathbf{x}_i, \mathbf{x}_j\rangle \right),$$ subject to \begin{align} C \ge \alpha_i & \ge 0,\quad i = 1, \dots, m \\ \sum_{i=1}^{m} \alpha_i y_i & = 0\end{align} The transformation of the dual solution to the primal variables is done as follows: \begin{align} \mathbf{w} & = \sum_{i=1}^{m} \alpha_i y_i \mathbf{x}_i, \\ b & = \frac{1}{| I |} \sum_{i \in I} \Big(y_i - \sum_{j=1}^{m} \alpha_j y_j \langle \mathbf{x}_j, \mathbf{x}_i \rangle \Big),\end{align} where $I$ denotes the set of support vector indices. Note that the original feature vectors $\mathbf{x}_i$ appear in the dual task only in the form of dot products. This is very important and will be used in the next assignment!
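For intuition, the dual task above can be solved on a toy problem with a few lines of pairwise coordinate ascent (a stripped-down SMO scheme; this is a hypothetical NumPy sketch, not the QP solver the assignment uses):

```python
import numpy as np

def svm_dual_smo(X, y, C, sweeps=50):
    """Minimal SMO-style pairwise coordinate ascent on the SVM dual.
    X: (n_features, m) array with one training point per column,
    y: labels in {-1, +1}. Returns (w, b, sv_idx), 0-based indices."""
    m = X.shape[1]
    K = X.T @ X                      # linear-kernel Gram matrix
    a = np.zeros(m)
    for _ in range(sweeps):
        for i in range(m):
            for j in range(i + 1, m):
                eta = K[i, i] + K[j, j] - 2 * K[i, j]
                if eta <= 0:
                    continue
                g = (a * y) @ K      # g[k] = sum_m a_m y_m <x_m, x_k>
                # box for the pair, keeping the constraint sum a_k y_k = 0
                if y[i] != y[j]:
                    L, H = max(0.0, a[j] - a[i]), min(C, C + a[j] - a[i])
                else:
                    L, H = max(0.0, a[i] + a[j] - C), min(C, a[i] + a[j])
                aj_new = a[j] + y[j] * ((g[i] - y[i]) - (g[j] - y[j])) / eta
                aj_new = min(H, max(L, aj_new))
                a[i] += y[i] * y[j] * (a[j] - aj_new)
                a[j] = aj_new
    w = X @ (a * y)                  # primal weights from the dual solution
    sv_idx = np.flatnonzero(a > 1e-8)
    b = np.mean(y[sv_idx] - w @ X[:, sv_idx])
    return w, b, sv_idx

# Toy data from the assignment below (points stored as columns)
X = np.array([[1, 2, 1, -1, -1, -2],
              [1, 1, 2, -1, -2, -1]], float)
y = np.array([1, 1, 1, -1, -1, -1], float)
w, b, sv = svm_dual_smo(X, y, C=1.0)
print(w, b, sv)  # roughly [0.5 0.5], 0.0, support vectors {0, 3}
```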
Let us rewrite the dual task in matrix form: $$ \mathbf{\alpha^*} = \arg\max_{\mathbf{\alpha}} \left( \langle\mathbf{\alpha}, \mathbf{1}\rangle - \frac{1}{2} \mathbf{\alpha}^\top \mathbf{H} \mathbf{\alpha} \right), $$ subject to \begin{align} \langle\mathbf{\alpha}, \mathbf{y}\rangle & = 0, \\ \mathbf{0} \le \mathbf{\alpha} & \le \mathbf{C}\end{align} As you can see, the dual task is formulated as a quadratic programming task. Having found the parameters of the optimal separating hyperplane, the classification of some data $\mathbf{x}$ using the SVM is performed by a simple dot product and testing the sign of the result \begin{align}h(\mathbf{x}; \mathbf{w}) & = \text{sign}(\langle\mathbf{x}, \mathbf{w}\rangle + b), \end{align} which is very fast.

To fulfil this assignment, you need to submit these files (all packed in a single .zip file) into the upload system: answers.txt, assignment07.m, my_svm.m, classif_lin_svm.m, compute_TstErr.m, compute_measurements_2d, and the images linear_svm.png, ocr_svm_tst.png, ocr_svm_classif.png. Start by downloading the template of the assignment.

Implement [ w, b, sv_idx ] = my_svm( X, y, C, options ), which solves the dual task with the gsmo solver and also returns the support vector indices sv_idx, and classif = classif_lin_svm(X, model), where model carries the learned w and b. Test my_svm on the toy data (data_33rpz_svm_toy.mat); the .tmax field of options defaults to inf:

X = [1, 2, 1, -1, -1, -2; 1, 1, 2, -1, -2, -1];
y = [1, 1, 1, -1, -1, -1];
C = 1;
[w, b, sv_idx] = my_svm(X, y, C)

Settings of QP solver
nrhs : 11
nlhs : 5
tmax : 2147483647
tolKKT : 0.001000
n : 6
verb : 1
t=1, KKTviol=2.000000, tau=0.250000, tau_lb=0.000000, tau_ub=1.000000, Q_P=-0.250000
t=2, KKTviol=0.000000, tau=0.250000, tau_lb=0.000000, tau_ub=1.000000, Q_P=-0.250000

w = 0.5000 0.5000
b = 0
sv_idx = 1 4

In classif_lin_svm you can reuse linclass; visualise the result with ppatterns and pboundary and save it as linear_svm.png:

model.W = w;
model.b = b;
model.fun = 'linclass';
ppatterns(X, y);
pboundary(model);

Similarly to the Parzen window task, the optimal value of the hyper-parameter $C$ can be found using the cross-validation technique.
To measure the quality of the parameter we will compute the average error on the test data over all folds and select the value of $C$ which minimizes this error. Do the following: generate the data with [X, y] = compute_measurements_2d(input_struct), split them with crossval, and implement TstErr = compute_TstErr(itrn, itst, X, y, C) to compute the average test error; save the resulting plots as ocr_svm_tst.png and ocr_svm_classif.png. Fill the correct answers into your answers.txt file. For the following questions it is assumed that you run rand('seed', 42); always before splitting the training set with the crossval function, so the dataset split is unambiguously defined.

Try to combine several binary SVM classifiers into a multiclass one. Use the "one versus all" technique (see e.g. this link for the general idea). Use the MNIST data to create a multiclass classifier of images of numerals. The file contains 30 000 examples of normalized images of digits stored in the matrix $X$ and the corresponding labels (the label is the digit itself) $y \in \{0, 1, \dots, 9\}$. Use the actual pixel values as features; in this case your feature space will be 784-dimensional. To display one example, you can use this code: imshow(reshape(X(:, 1), imsize)', []); Display the resulting classification (i.e. the montage of all images classified as $0, 1, \dots, 9$). Hint: use all relevant functions which you have implemented in this assignment. Hint: use some reasonable subset of the data for training and testing; training with 30 000 examples would need extensive amounts of RAM and would take too much time. Preserve the proportions of the classes when subsampling.

[1] Christopher J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition
[2] Text of exercise from previous course
[3] Lecture slides
[4] Quadratic programming
[5] Lagrange multiplier
I am confused about what, I believe, refers to passive and active transformations in QM. What I have understood so far is that the matrix elements $\langle \psi| \hat{H}|\phi\rangle$ should remain unchanged under transformations; this implies that $\hat{H} = \hat{U}^\dagger \hat{H} \hat{U}$. On the other hand, under a certain transformation $\hat{U}$, an arbitrary operator $\hat{\Omega}$ transforms as $\hat{U} \hat{\Omega} \hat{U}^\dagger$. 1) Why should the matrix elements of the Hamiltonian remain unchanged under a transformation? Is that the same as saying: "The evolution of the state is governed by the Hamiltonian; under a transformation the system should still evolve in the same way, hence the matrix elements should be conserved"? 2) What does $\hat{H} = \hat{U}^\dagger \hat{H} \hat{U}$ mean? Is that talking about how the Hamiltonian transforms? Or is it only a condition on the transformation $\hat{U}$? 3) What is $\hat{\Omega} \to \hat{U} \hat{\Omega} \hat{U}^\dagger$ as opposed to $\hat{U}|\phi\rangle$? Does an active transformation mean applying $\hat{U}$ to every state, while a passive transformation means leaving the vectors as they are and transforming the operators as $\hat{\Omega} \to \hat{U} \hat{\Omega} \hat{U}^\dagger$? Or does applying a transformation mean doing both of the above?
Research, Open Access, Published: A numerical solution of a singular boundary value problem arising in boundary layer theory. SpringerPlus volume 5, Article number: 198 (2016). Article in journal (refereed).

Abstract. In this paper, a second-order nonlinear singular boundary value problem is presented, which is equivalent to the well-known Falkner–Skan equation. The one-dimensional third-order boundary value problem on the interval \([0,\infty )\) is equivalently transformed into a second-order boundary value problem on the finite interval \([\beta , 1]\). The finite difference method is utilized to solve the singular boundary value problem, and the amount of computational effort is significantly less than for other numerical methods. The numerical solutions obtained by the finite difference method are in agreement with those obtained by previous authors.

Background. The well-known nonlinear third-order Falkner–Skan equation is one of the nonlinear two-point boundary value problems (BVPs) on infinite intervals. This problem arises in the study of laminar boundary layers exhibiting similarity in fluid mechanics. The solutions of the one-dimensional third-order boundary value problem described by the Falkner–Skan equation are the similarity solutions of the two-dimensional incompressible laminar boundary layer equations (Cheng 1977; Merkin 1980; Salama 2004; Postelnicu and Pop 2011; Mosayebidorcheh 2013). Consider the following differential equation (Aly et al. 2003): subject to the boundary conditions where \(\beta \ge 0\) and \(f'(+\infty ):=\lim \nolimits _{t\rightarrow +\infty }f'(t)\). The nonlinear BVP (1–2) with \(\beta =0\) was studied in Aly et al. (2003) and Nazar et al. (2004), and comes from the study of a plane mixed convection boundary-layer flow near a semi-infinite vertical surface, with a prescribed power law of the distance from the leading edge for the temperature. Several interesting results already exist for BVP (1–2).
For example, there exists a unique convex solution (i.e., one with \(f''(\eta )>0\)) for \(\lambda >0\) and \(0<\beta <1\) (Brighi and Hoernel 2006); there also exists a unique concave solution (i.e., one with \(f''(\eta )<0\)) for \(\lambda >0\) and \(\beta >1\). Unfortunately, the case \(\lambda \le 0\) was not considered there, and few results are available for it. Numerical analysis is currently an important technique for the solution of the Falkner–Skan equation. One key problem for numerical techniques is how to deal with the infinite boundary. Early approaches mainly used shooting or invariant imbedding (Cebeci and Keller 1971; Na 1979). Asaithambi presented an asymptotic condition and truncated the infinite boundary condition at an unknown \(\eta _{\infty }\) (Asaithambi 1998, 2004, 2005). The Adomian decomposition method was developed to obtain series solutions instead of truncating the infinite boundary (Elgazery 2005; Alizadeh et al. 2009). Yang and Hu (2008) transformed the problem to a singular boundary value problem on a finite interval and proposed a Galerkin finite element method. Based on the ideas of Yang (2003) and Lan and Yang (2008), the purpose of this paper is to transform the problem mentioned above to a singular boundary value problem on a finite interval and to develop a finite difference method which is much more effective and simpler than the other existing methods for BVP (1–2), and which requires much less computational effort.

Transformation formula. Lan and Yang (2008) established the equivalence between the Falkner–Skan equation and a singular integral equation. In this paper, BVP (1–2) is transformed to a second-order singular boundary value problem, and the solution of BVP (1–2) is characterized by \(f''(0)\).
Let \(0<\beta <1\) and \(f''(\eta )>0\ (\eta \ge 0)\); the function \( t = f'(\eta )\) is then strictly increasing on the interval \([0,+\infty )\), and its inverse function \(\eta =g(t)\) exists and is strictly increasing on the interval \( [\beta ,1)\). Then we have \( g(\beta )=0,\ g(1-0)=+\infty \), and Differentiating Eq. (3) with respect to t yields Differentiating Eq. (4) with respect to t yields According to Eq. (3), we obtain \(tg'(t)=f'(g(t))g'(t)\), i.e., Integrating Eq. (6) from \( \beta \) to t with respect to s, we get It follows from \(f(g(\beta ))=0\) that Differentiating Eq. (9), we have and On the other hand, according to Eq. (4) and the boundary condition \(f'(+\infty )=1\), we obtain the boundary condition

Numerical solutions of the boundary value problem. Equation (10) can be changed to the following equivalent form subject to the boundary conditions In this paper, the numerical solution of Eq. (13) with boundary conditions (14, 15) is based on the finite difference method. The interval \([\beta , 1]\) is divided into N subintervals with step size \(h=\frac{\;1-\beta \;}{\;N\;}\), and we define \(t_{j}=\beta +jh\) for \(j=0, 1, \ldots , N\). Let \(w_j\) denote the value of \(w(t_{j})\) for \(j=0, 1, \ldots , N\). Setting \(t=t_{j}\), the finite difference formulation of Eq. (13) reads for \(j=1, 2, \ldots , N-1\). The boundary condition (14) corresponds to And the discretization of boundary condition (15) reads The discretization formulation (16–18) is a nonlinear equation system, so Newton's iteration method is recommended to compute approximate solutions. We now proceed to describe the iterative process for the solution of the nonlinear system (16–18). Let \({\mathbf{w}}^{T}=[w_{0}\quad \cdots \quad w_{N}]\), and where and for \(j=1, 2, \ldots , N-1\). Newton's iteration method is recommended to solve the nonlinear system (22).
Given \(\lambda \) and initial values \(w_{j}^{0},\ j=0,1,2,\ldots , N\), the k-th Newton iterates \({\mathbf{w} }^{k}=[w_{0}^k,w_{1}^k, \ldots , w_{N}^k]^T,\ k=1,2,\ldots ,\) can be obtained by solving system (22). Newton's method for the solution of Eq. (22) proceeds to yield subsequent iterates for w as where \(\triangle {\mathbf{w}}^{k}\) satisfies the equation The iterative process described by Eqs. (23, 24) is repeated in succession until \(\Vert \triangle {\mathbf{w}}^{k}\Vert _{\infty }<\varepsilon \) for some prescribed error tolerance \(\varepsilon \). The algorithm is then given as: Step 1. Input the values \(\lambda \), the number of subintervals N and the stopping tolerance \(\varepsilon \). Step 2. Initialize \(\beta \), \(k\leftarrow 0\), the step size \(h \leftarrow \frac{1-\beta }{N}\) and \({\mathbf{w}}_{N}\leftarrow {\mathbf 0} \). Step 3. Solve Eq. (24) for \(\triangle {\mathbf{w}}^{k}\) and update \({\mathbf{w}}^{k+1}\) by Eq. (23). Step 4. Repeat step 3 until \(\Vert \triangle {\mathbf{w}}^{k}\Vert _{\infty }<\varepsilon \) is satisfied.

Results and discussion. The Falkner–Skan equation has two parameters \(\beta \) and \(\lambda \), and Aly et al. (2003) obtained numerical solutions for various \(\beta \) and \(\lambda \). The numerical solutions of the equation have also been simulated using Galerkin finite element methods for various values of \(\beta \) and \(\lambda \) (Yang and Hu 2008). In order to demonstrate the reliability and efficiency of the proposed method, numerical results have been obtained by solving the boundary value problem (13–15) with different parameters \(\lambda \) and \(\beta \). A comparison of the accuracy of the computed \(f''(0)(=w(\beta ))\) is made between our method and the Galerkin finite element method proposed in Yang and Hu (2008); the errors are shown in Table 1. In the numerical simulation, we choose \(h=10^{-3}\) and \(\varepsilon = 10^{-10}\), respectively. By virtue of the equivalent Eqs. (13–15), we obtain the numerical solution \(f''(0)(= w(\beta ))= 0.4695998\).
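The iteration in Steps 1–4 is ordinary Newton's method for a nonlinear system. Stripped of the particular discretisation (which depends on Eq. (13), not reproduced here), the loop can be sketched as follows; the two-equation toy system is made up solely to exercise the stopping rule \(\Vert \triangle \mathbf{w}^{k}\Vert _{\infty }<\varepsilon \):

```python
import numpy as np

def newton_system(F, J, w0, eps=1e-10, kmax=50):
    """Newton's method for F(w) = 0: solve J(w^k) dw = -F(w^k),
    set w^{k+1} = w^k + dw, and stop when ||dw||_inf < eps."""
    w = np.asarray(w0, float)
    for _ in range(kmax):
        dw = np.linalg.solve(J(w), -F(w))
        w = w + dw
        if np.max(np.abs(dw)) < eps:
            return w
    raise RuntimeError("Newton iteration did not converge")

# Hypothetical toy system: w0^2 + w1 = 3, w0 + w1^2 = 5, with a root at (1, 2)
F = lambda w: np.array([w[0] ** 2 + w[1] - 3.0, w[0] + w[1] ** 2 - 5.0])
J = lambda w: np.array([[2 * w[0], 1.0], [1.0, 2 * w[1]]])
root = newton_system(F, J, [1.5, 1.5])
print(root)  # converges quadratically to [1.0, 2.0]
```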
It can be seen from Fig. 1, where \(f''(0)(=w(\beta ))\) is plotted as a function of \(\beta \) in the range \(0\le \beta \le 1\) with curves drawn for the values \(\lambda = -0.30, -0.25, -0.20, -0.18, -0.15, -0.10\), that \(f''(0)(=w(\beta ))\) changes smoothly with \(\beta \). As \(\lambda \) increases, the results also increase in the range \(0\le \beta \le 1\). Figure 2 shows the characteristics of the numerical solutions \(f''(0)(=w(\beta ))\) for \(\beta =\) 0.0–0.9 obtained by solving the boundary value problem (13–15). The solutions indicate that \(f''(0)(=w(\beta ))\) decreases with increasing \(\beta \), i.e., \(f''(0)(=w(\beta ))\) is a decreasing function of the parameter \(\beta \). For each fixed value of \(\lambda \), \(f''(0)(=w(\beta ))\) decreases with increasing \(\beta \) in the range [0, 1], and in particular, when \(\beta = 0\) and \(\lambda = 0\), the classical Blasius solution is obtained (Aly et al. 2003). Because Eq. (13) is a second-order boundary value problem, the amount of computational effort used by the finite difference method is significantly less than that of other numerical methods for the third-order differential equation, which essentially solve two or more initial value problems during each iteration (Asaithambi 2004). In general, the numerical simulation shows that the initial guess \({\mathbf{w}}^{0}\) can be far away from the exact value. For each fixed value of \({\mathbf{w}}^{0}\), the method in this paper required 2–6 iterations to solve system (22) to the desired accuracy.

Conclusions. In this work, we have demonstrated the effectiveness of the finite difference method for the Falkner–Skan equation. Applying an equivalent transformation to the Falkner–Skan equation, a third-order boundary value problem on an infinite interval is transformed into a second-order boundary value problem on a finite interval.
By using the finite difference method and Newton's iteration, the numerical solution has been calculated. The comparisons studied in this paper indicate that the values of the Newton iteration for \(f''(0)(=w(\beta ))\) are in excellent agreement with the results obtained by previous authors. Therefore, the method presented in this work shows its validity and great potential for the solution of Falkner–Skan equations arising in science and engineering.

References

Alizadeh E, Farhadi M, Sedighi K, Ebrahimi-Kebria HR, Ghafourian A (2009) Solution of the Falkner–Skan equation for wedge by Adomian Decomposition Method. Commun Nonlinear Sci Numer Simul 14(3):724–733
Aly EH, Elliott L, Ingham DB (2003) Mixed convection boundary-layer flow over a vertical surface embedded in a porous medium. Eur J Mech B Fluids 22(6):529–543
Asaithambi A (1998) A finite-difference method for the solution of the Falkner–Skan equation. Appl Math Comput 92(2):135–141
Asaithambi A (2004) Numerical solution of the Falkner–Skan equation using piecewise linear functions. Appl Math Comput 159(1):267–273
Asaithambi A (2005) Solution of the Falkner–Skan equation by recursive evaluation of Taylor coefficients. J Comput Appl Math 176(1):203–214
Brighi B, Hoernel JD (2006) On the concave and convex solutions of a mixed convection boundary layer approximation in a porous medium. Appl Math Lett 19(1):69–74
Cebeci T, Keller HB (1971) Shooting and parallel shooting methods for solving the Falkner–Skan boundary-layer equation. J Comput Phys 7(2):289–300
Cheng P (1977) Combined free and forced convection flow about inclined surfaces in porous media. Int J Heat Mass Transf 20(77):807–814
Elgazery NS (2005) Numerical solution for the Falkner–Skan equation. Chaos Solitons Fractals 35(4):738–746
Lan KQ, Yang GC (2008) Positive solutions of the Falkner–Skan equation arising in the boundary layer theory. Can Math Bull 51(3):386–398
Merkin JH (1980) Mixed convection boundary layer flow on a vertical surface in a saturated porous medium. J Eng Math 14(4):301–313
Mosayebidorcheh S (2013) Solution of the boundary layer equation of the power-law pseudoplastic fluid using differential transform method. Math Probl Eng 70(2):717–718
Na TY (1979) Computational methods in engineering boundary value problems. Academic Press, New York
Nazar R, Amin N, Pop I (2004) Unsteady mixed convection boundary-layer flow near the stagnation point on a vertical surface in a porous medium. Int J Heat Mass Transf 47(12–13):2681–2688
Postelnicu A, Pop I (2011) Falkner–Skan boundary layer flow of a power-law fluid past a stretching wedge. Appl Math Comput 217(9):4359–4368
Salama AA (2004) Higher order method for solving free boundary-value problems. Numer Heat Transf B Fundam 45(4):385–394
Yang GC (2003) Existence of solutions to the third-order nonlinear differential equations arising in boundary layer theory. Appl Math Lett 16(3):827–832
Yang GC, Hu JC (2008) Numerical results of a singular boundary value problems related with mixed convection equation arising in boundary layer. Nonlinear Anal Forum 1(1):103–108

Acknowledgements

This research is supported by the Scientific Research Fund of Sichuan Provincial Education Department (Grant No. 13ZB0086) and the Scientific Research Foundation of CUIT (Grant No. KYTZ201425). The authors would like to thank the referees for their careful reading of the original manuscript and their constructive comments.

Competing interests

The authors declare that they have no competing interests.
Quasirandomness

Latest revision as of 06:08, 8 July 2010

Introduction

Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the density increment method or on some kind of generalization of Szemerédi's regularity lemma. In general, one has some kind of parameter associated with a set, which in our case will be the number of combinatorial lines it contains, and one would like a deterministic definition of the word "quasirandom" with the following key property.

Every quasirandom set [math]\mathcal{A}[/math] has roughly the same value of the given parameter as a random set of the same density.

Needless to say, this is not the only desirable property of the definition, since otherwise we could just define [math]\mathcal{A}[/math] to be quasirandom if it has roughly the same value of the given parameter as a random set of the same density. The second key property is this.

Every set [math]\mathcal{A}[/math] that fails to be quasirandom has some other property that we can exploit.

These two properties are already discussed in some detail in the article on the density increment method: this article concentrates more on examples of quasirandomness in other contexts, and possible definitions of quasirandomness connected with the density Hales-Jewett theorem.
A possible definition of quasirandom subsets of [math][3]^n[/math] As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function. Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n. (So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math]) As ever, it is easy to prove positivity. To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect). 
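The distribution used in this definition is easy to sample, so the fourth-moment expectation can at least be estimated empirically. Below is a hypothetical Python sketch; the function names are mine, and since the text deliberately leaves several variants open, the exact endpoint distribution of the intervals (uniform start and uniform length) is an assumption.

```python
import random

def random_interval(perm):
    """A random interval in pi([n]) that may wrap around mod n.
    Assumption: the text does not fix the endpoint distribution, so the
    start and the length are simply chosen uniformly here."""
    n = len(perm)
    start = random.randrange(n)
    length = random.randrange(n + 1)
    return frozenset(perm[(start + i) % n] for i in range(length))

def quadruple_mean(f, n, samples=1000):
    """Monte Carlo estimate of E f(A,B) f(A,B') f(A',B) f(A',B'), where the
    quadruple (A, A', B, B') is drawn as in the text: first permute [n]
    randomly, then take four random wrap-around intervals."""
    total = 0.0
    for _ in range(samples):
        perm = list(range(1, n + 1))
        random.shuffle(perm)
        A, A2, B, B2 = (random_interval(perm) for _ in range(4))
        total += f(A, B) * f(A, B2) * f(A2, B) * f(A2, B2)
    return total / samples
```

With f identically 1 the estimate is exactly 1; plugging in the balanced function f built from a set [math]\mathcal{A}[/math] as described above would give an empirical quasirandomness parameter c for that set.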
Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined).
This question already has an answer here: I came across this in proving that $\sqrt{3}$ is irrational — how to show that if $p^2$ is divisible by $3$, then $p$ is divisible by $3$?

Take the contrapositive statement: prove that if $p$ is not divisible by $3$, then $p^2$ is not divisible by $3$. Assume that for some $p$, $3 \nmid p$; then the only two cases are $p \equiv 1\pmod{3}$ and $p \equiv 2\pmod{3}$. Calculate $p^2 \bmod 3$ in each case and get the conclusion.

Even if you don't know (don't remember) any number theory, you can always write $p=3m+k$ where $m$ is an integer and $k\in\{0,1,2\}$. Then, $$ p^2=(3m+k)^2=9m^2+6mk+k^2. $$ In order for this to be divisible by $3$, $k^2$ has to be divisible by $3$. You can manually check that only $k=0$ works.

I would take an approach by looking at the prime factorization of $p^2$. Since it's a square, the powers in its prime factorization must all be even numbers. Since it's divisible by $3$, it has a $3$ raised to some nonzero even power. It follows from this that $p$ must also have $3$ in its prime factorization.

This is a particular instance of Euclid's lemma. It is an easy consequence of prime factorization, but without assuming prime factorization it is slightly less easy.
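Both mod-3 arguments above reduce to a finite case check. For anyone who wants to see it mechanically, a few lines of Python confirm the residues — illustrative only, of course; the proof is the case analysis itself:

```python
# Finite sanity check of the case analysis (not a proof): for p = 3m + k,
# p^2 mod 3 depends only on k, and k^2 mod 3 is nonzero for k in {1, 2}.
residues = {k: (k * k) % 3 for k in range(3)}
# residues == {0: 0, 1: 1, 2: 1}, so 3 | p^2 forces k = 0, i.e. 3 | p

# exhaustive check over a range of integers
for p in range(1, 1000):
    if p % 3 != 0:
        assert p * p % 3 == 1   # p^2 is not divisible by 3
    else:
        assert p * p % 3 == 0
```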
Spreading speeds and traveling waves of a parabolic-elliptic chemotaxis system with logistic source on $\mathbb{R}^N$

Department of Mathematics and Statistics, Auburn University, Auburn University, AL 36849, USA

$$\begin{cases}u_{t}=\Delta u-\chi\nabla\cdot(u\nabla v)+u(1-u), & x\in\mathbb{R}^N,\\ 0=\Delta v-v+u, & x\in\mathbb{R}^N,\end{cases}$$

$$\lim_{t\to\infty}\sup_{|x|\le ct}\big[|u(x,t;u_0)-1|+|v(x,t;u_0)-1|\big]=0\quad\forall\, 0<c<c_-^*(\chi)$$

$$\lim_{t\to\infty}\sup_{|x|\le ct}\big[u(x,t;u_0)+v(x,t;u_0)\big]=0\quad\forall\, c>c_+^*(\chi).$$

$$\lim_{\chi\to 0}c^*(\chi)=\lim_{\chi\to 0}c_+^*(\chi)=\lim_{\chi\to 0}c_-^*(\chi)=2.$$

Keywords: Parabolic-elliptic chemotaxis system, logistic source, comparison principles, spreading speed, traveling wave solution.

Mathematics Subject Classification: 35B35, 35B40, 35K57, 35Q92, 92C17.

Citation: Rachidi B. Salako, Wenxian Shen. Spreading speeds and traveling waves of a parabolic-elliptic chemotaxis system with logistic source on $\mathbb{R}^N$. Discrete & Continuous Dynamical Systems - A, 2017, 37 (12) : 6189-6225. doi: 10.3934/dcds.2017268