The original Curry-Howard correspondence is an isomorphism between intuitionistic propositional logic and the simply-typed lambda calculus. There are, of course, other Curry-Howard-like isomorphisms; Phil Wadler famously pointed out that the double-barrelled name "Curry-Howard" predicts other double-barrelled names like "Hindley-Milner" and "Girard-Reynolds"...
The vast majority of proof systems don't allow for infinite, circular proofs, but they do so by making their languages non-Turing-complete. In a normal functional language, the only way to make a program go on forever is with recursion, and in terms of theory, usually we look at recursion as the $Y$ combinator, a program of type $\forall a \ldotp (a \to a) \...
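The $Y$ combinator mentioned above can even be written down in an untyped, eagerly evaluated language, where it must take its call-by-value form (often called $Z$); a Python sketch, purely for illustration:

```python
# Z combinator: the call-by-value fixed-point combinator,
# Z = λf.(λx. f (λv. x x v)) (λx. f (λv. x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined with no explicit self-reference: recursion is
# supplied entirely by the fixed-point combinator.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
```

This is exactly the program that a sound, non-Turing-complete type system must reject: giving `Z` the type $\forall a \ldotp (a \to a) \to a$ would let every type be inhabited.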
Let me articulate the Curry-Howard-Lambek correspondence with a bit of jargon which I'll explain. Lambek showed that the simply typed lambda calculus with products was the internal language of a cartesian closed category. I'm not going to spell out what a cartesian closed category is, though it isn't difficult; instead, what the above statement says is that you ...
Long story short: no you can't. A foreign function is like a black box and the type you ascribe to it is a promise you make: in the Curry-Howard correspondence that would correspond to adding an axiom to your theory. That being said, there are ways. In Coq for instance, there are various formalisations of the C standard (e.g. Robbert Krebbers' work). ...
The Curry-Howard correspondence relates type systems to logical deduction systems. Among other things, it maps: programs to proofs; program evaluation to transformations on proofs; inhabited types to true propositions; type systems to logical deduction systems. If the type system admits a Y combinator, then that means that the corresponding logical deduction system is ...
Proving the correctness of a program in the form of a proof that's nothing but the program itself: this is not quite how the Curry-Howard correspondence works. First one has to show that the language of choice actually corresponds to some consistent logic. Different languages correspond to different logics, and many languages correspond to inconsistent ...
Short answer: yes. Long answer: For $\mathrm{Type}:\mathrm{Type}$, non-termination at the type level is trivial. You can take a constant $X:\mathrm{False}\rightarrow \mathrm{Type}$. Then if you take the inconsistent term $\bot : \mathrm{False}$ you have $$ X\ \bot : \mathrm{Type} $$ which is non-terminating at the type level. You might complain that this has ...
Logic programming is proof search for some logic. Traditionally, this is the Horn clause fragment of first-order logic. Languages like lambdaProlog extend this to (intuitionistic) hereditary Harrop formulas. There are also languages like Lolli, LolliMon, and Olli that work in fragments of linear logic (ordered linear logic in the last case). The concepts ...
Proofs in Haskell? Okay, first let's talk about the Curry-Howard correspondence. This says that one can view theorems as types and proofs as programs. However, it says nothing about which specific logic a particular programming language represents. In particular, Haskell lacks dependent types. That means that it can't express statements with "forall x" or "...
The fact of the matter is, if a proof exists, then a Curry-Howard version of the program exists too. That doesn't mean that it's easy to find, though. Undecidability still holds for Curry-Howard: if your types are advanced enough to capture logic, then there's no algorithm which takes in a type and outputs a program of that type, if it exists. Just like ...
For Agda, I think, as stated in the other question, that the fact that function types (or rather Π-types) are built-in while pairs (or even Σ-types) aren't so much is a reasonable argument that they are more fundamental in Agda. That said, even there it's not completely clear cut. For example, Σ-types/pairs are introduced in Agda via inductive data type ...
Think of this in terms of the Curry-Howard isomorphism. What would this type look like as a theorem? For any propositions $A$ and $B$, $A \implies B$. Clearly this is not true; if it were, then we could take $A=\top$ and $B = \bot$, and now true implies false! So, if it's not a true theorem, there's no proof of it, so the type is uninhabited. In a language ...
I know for pretty sure that there is a function with the type $f: \forall \alpha, \beta . \alpha \rightarrow \beta$, but I can't wrap my head over it.No, that type is not inhabited. There are no functions having that type in a typed lambda calculus, provided it is (weakly) normalizing.The intuition is that, as you mention, the associated proposition is ...
How would you prove inside the pure CoC that the induction principle holds of the Church numerals? See Thomas Streicher's Independence of the induction principle and the axiom of choice in the pure calculus of constructions.
The λ-calculus was invented to be a logic and foundation of mathematics (1-4). The most well-known logic to use λ-calculus for formulae (as opposed to proofs in the Curry-Howard approach) is HOL (= Higher-Order Logic). The most well-developed implementation of HOL is Isabelle/HOL (5). To the extent that you believe logic can represent ...
One way to interpret types as logic is as the existence conditions for values of the return type. So f :: a -> [a] is the theorem that, if there exists a value of type a, then there exists a value of type [a]. The implementation of the function is the proof of the proposition. Here's a more detailed explanation: basically, data constructors let us build ...
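Under this reading, any implementation of f :: a -> [a] is a proof, and there can be several distinct proofs of the same theorem. A Python sketch with type hints standing in for the Haskell types (the function names are mine):

```python
from typing import TypeVar, List

A = TypeVar("A")

# One inhabitant of a -> [a]: the only data available is the argument
# itself, so the result list can contain nothing but copies of it.
def singleton(x: A) -> List[A]:
    return [x]

# Another inhabitant of the same type: the empty list also "proves" [a],
# since the nil constructor needs no value of type a at all.
def nil(x: A) -> List[A]:
    return []

print(singleton(42))  # [42]
```

Both functions are proofs of "if a value of type a exists, a value of type [a] exists"; parametricity guarantees neither can inspect or fabricate values of type a.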
Some other usages of the type Unit (I'm sure the list is not exhaustive):(1) The value of type Unit is used to simulate functions of arity 0 in strict languages that don't have zero-argument functions, like in OCaml: f (). Essentially this is just for deferring computations.(2) It also can be used to instantiate some parametrically polymorphic type when ...
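Usage (1), deferring a computation behind a unit argument like OCaml's f (), can be mimicked in Python with a zero-argument lambda playing the role of the unit-taking function (a hand-rolled illustration):

```python
# In OCaml one writes `let f () = expensive ()`: the unit argument
# defers evaluation. The Python analogue is a thunk, i.e. a
# zero-argument callable.
log = []

def expensive():
    log.append("ran")
    return 42

thunk = lambda: expensive()   # nothing has run yet

assert log == []              # deferred: not evaluated at definition time
result = thunk()              # forcing the thunk, like applying f ()
assert result == 42 and log == ["ran"]
```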
TLDR: A sound logic corresponds to a non-Turing-complete lambda calculus, so the Church-Turing thesis doesn't apply. It's important to remember that most dependently typed programming languages aren't Turing-complete. When you allow for non-halting programs, your logic becomes unsound. So the Curry-Howard isomorphism doesn't really apply to … Consider ...
I'm not sure where you see the dissonance. The Church-Turing thesis is a hypothesis stating that Turing Machines (equiv. Lambda Calculus or Recursive Functions) can do anything that we'd think of as computable. The Curry-Howard correspondence is a much stronger statement that certain types of intuitionistic logic are structurally identical to things kind ...
In some sense it doesn't matter what the function does, as long as it takes the correct types and produces something of the correct type. The trick is that when you start talking about the Curry-Howard correspondence, the types are much more precise and specified than what we'd normally deal with day-to-day. Moreover the Curry-Howard correspondence says ...
To explain why I'm uncomfortable with Newsham's and (especially) Piponi's data wrappers ... (This is going to be more question than answer, but perhaps it'll work towards explaining what's wrong with my IsNat even though it seems very similar to Newsham's.) Piponi, page 17 on, has: data Proposition = Proposition :-> Proposition | Symbol ...
"Theorems for free" are so-called because they follow from the type of the program, without looking at the program code! "Contracts" are clearly not free theorems, because they depend on the code of the program, not merely the type. However, the connection you do want to make is between types and specifications. Specifications are in a way "more ...
List types are a bit strange as propositions. They don't really correspond to anything directly familiar, but it is easy to see what they are equivalent to. Because nil exists, you can always prove [a] for any a, so list types are always very easy to prove; in particular, they are trivially equivalent to any tautology that has already been proven. So ...
I can think of a couple:If a language makes a distinction between functions that return a value, and those that don't, it becomes difficult to stitch functions together. You have one set of functions that do return a value, and one set that doesn't. You end up having to write somewhat duplicated higher order functions. One set of higher order functions ...
The programs that you describe are very good at searching for zeroes in an interval; they can find all of the zeroes of the form $s+it$ between $t=0$ and $t=10^9$, say, and show that all these zeroes have $s=\frac12$. But that upper bound is critical, because it defines the search space. RH is the statement that all of the zeroes lie on the critical line, ...
As you observe, restricting the domain of a variable has exactly the same effect as applying a unary constraint to it.One situation where you might prefer to use unary constraints rather than restricted domains is when you want to control very tightly the relations that are allowed to be used in constraints. For example, if you want to investigate the ...
There is a result recently published in the Annals of Pure and Applied Logic in which Church-encoded data are realizers of their own induction principle. In this system, the induction principles for natural numbers, trees, lists... are derivable. The core calculus doesn't have any datatype constructors packaged in. It starts at an extrinsic (Curry-style) ...
The difference between $s\to t$ and $\forall x : s. t$ is that in the second case, the return type of the function depends on the input.For example, in most languages, you can make a function that takes a natural number and returns an array of that size and it will have type $\operatorname{nat} \to \operatorname{nat} \operatorname{array}$. But if you want ...
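Python can't check this statically, but the runtime contract of the dependent type $\Pi(n : \operatorname{nat}).\ \operatorname{nat} \operatorname{array}(n)$, namely that the length of the result is fixed by the argument, can be sketched (make_array is a made-up name):

```python
def make_array(n: int) -> list:
    # The dependently typed version would have type Π(n : nat). nat array(n):
    # the *length* of the result appears in the return type and is
    # determined by the argument n.
    return [0] * n

# The property a dependent type checker would enforce at compile time;
# here it can only be tested at runtime.
for n in (0, 1, 5):
    assert len(make_array(n)) == n
```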
Even though a domain may be considered just another type of constraint, there do exist good reasons to keep them separated, and it may be easier to think of them from a pure mathematical standpoint. Domains should in a sense be seen as the definition of the variable in terms of Type - e.g. Integer or Real etcetera. The domains can also be seen as the Master ...
Good Bye 2018
Bob is a pirate looking for the greatest treasure the world has ever seen. The treasure is located at the point $$$T$$$, whose coordinates are to be found out.
Bob travelled around the world and collected clues of the treasure location at $$$n$$$ obelisks. These clues were in an ancient language, and he has only decrypted them at home. Since he does not know which clue belongs to which obelisk, finding the treasure might pose a challenge. Can you help him?
As everyone knows, the world is a two-dimensional plane. The $$$i$$$-th obelisk is at integer coordinates $$$(x_i, y_i)$$$. The $$$j$$$-th clue consists of $$$2$$$ integers $$$(a_j, b_j)$$$ and belongs to the obelisk $$$p_j$$$, where $$$p$$$ is some (unknown) permutation on $$$n$$$ elements. It means that the treasure is located at $$$T=(x_{p_j} + a_j, y_{p_j} + b_j)$$$. This point $$$T$$$ is the same for all clues.
In other words, each clue belongs to exactly one of the obelisks, and each obelisk has exactly one clue that belongs to it. A clue represents the vector from the obelisk to the treasure. The clues must be distributed among the obelisks in such a way that they all point to the same position of the treasure.
Your task is to find the coordinates of the treasure. If there are multiple solutions, you may print any of them.
Note that you don't need to find the permutation. Permutations are used only in order to explain the problem.
The first line contains an integer $$$n$$$ ($$$1 \leq n \leq 1000$$$) — the number of obelisks, that is also equal to the number of clues.
Each of the next $$$n$$$ lines contains two integers $$$x_i$$$, $$$y_i$$$ ($$$-10^6 \leq x_i, y_i \leq 10^6$$$) — the coordinates of the $$$i$$$-th obelisk. All coordinates are distinct, that is $$$x_i \neq x_j$$$ or $$$y_i \neq y_j$$$ will be satisfied for every $$$(i, j)$$$ such that $$$i \neq j$$$.
Each of the next $$$n$$$ lines contains two integers $$$a_i$$$, $$$b_i$$$ ($$$-2 \cdot 10^6 \leq a_i, b_i \leq 2 \cdot 10^6$$$) — the direction of the $$$i$$$-th clue. All coordinates are distinct, that is $$$a_i \neq a_j$$$ or $$$b_i \neq b_j$$$ will be satisfied for every $$$(i, j)$$$ such that $$$i \neq j$$$.
It is guaranteed that there exists a permutation $$$p$$$, such that for all $$$i,j$$$ it holds $$$\left(x_{p_i} + a_i, y_{p_i} + b_i\right) = \left(x_{p_j} + a_j, y_{p_j} + b_j\right)$$$.
Output a single line containing two integers $$$T_x, T_y$$$ — the coordinates of the treasure.
If there are multiple answers, you may print any of them.
Input
2
2 5
-6 4
7 -2
-1 -3
Output
1 2
Input
4
2 2
8 2
-7 0
-2 6
1 -14
16 -12
11 -18
7 -14
Output
9 -12
As $$$n = 2$$$, we can consider all permutations on two elements.
If $$$p = [1, 2]$$$, then the obelisk $$$(2, 5)$$$ holds the clue $$$(7, -2)$$$, which means that the treasure is hidden at $$$(9, 3)$$$. The second obelisk $$$(-6, 4)$$$ would give the clue $$$(-1,-3)$$$ and the treasure at $$$(-7, 1)$$$. However, both obelisks must give the same location, hence this is clearly not the correct permutation.
If the hidden permutation is $$$[2, 1]$$$, then the first clue belongs to the second obelisk and the second clue belongs to the first obelisk. Hence $$$(-6, 4) + (7, -2) = (2,5) + (-1,-3) = (1, 2)$$$, so $$$T = (1,2)$$$ is the location of the treasure.
In the second sample, the hidden permutation is $$$[2, 3, 4, 1]$$$.
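The statement never asks for an algorithm, but one standard observation solves it outright: summing the guaranteed identity $T = (x_{p_j} + a_j,\ y_{p_j} + b_j)$ over all $j$ makes the permutation vanish, leaving $n \cdot T = \sum_i (x_i, y_i) + \sum_j (a_j, b_j)$. A sketch (the function name is mine):

```python
def find_treasure(obelisks, clues):
    # Summing T = (x_{p_j} + a_j, y_{p_j} + b_j) over all j: since p is a
    # permutation, every obelisk appears exactly once, so
    # n*T = sum(obelisks) + sum(clues), and division is exact by the
    # problem's guarantee.
    n = len(obelisks)
    tx = sum(x for x, _ in obelisks) + sum(a for a, _ in clues)
    ty = sum(y for _, y in obelisks) + sum(b for _, b in clues)
    return tx // n, ty // n

print(find_treasure([(2, 5), (-6, 4)], [(7, -2), (-1, -3)]))  # (1, 2)
```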
This is a question from the book
Introduction to Set Theory (Hrbacek and Jech), chapter 5, question 2.6.
(Show that) The cardinality of the set of all discontinuous functions is $2^{2^{\aleph_0}}$. [Hint: Using exercise 2.5, show that $|\mathbb{R}^\mathbb{R}-C|=2^{2^{\aleph_0}}$ whenever $|C|\leq 2^{\aleph_0}$.]
Exercise 2.5, as referred to in the hint, provides the following result:
For $n>0$, $n \cdot 2^{2^{\aleph_0}} = \aleph_0 \cdot 2^{2^{\aleph_0}} = 2^{\aleph_0} \cdot 2^{2^{\aleph_0}} = 2^{2^{\aleph_0}} \cdot 2^{2^{\aleph_0}} = (2^{2^{\aleph_0}})^n = (2^{2^{\aleph_0}})^{\aleph_0} = (2^{2^{\aleph_0}})^{2^{\aleph_0}}=2^{2^{\aleph_0}}$.
The only way I could answer this question was by making use of two theorems stated later in the same textbook. First, it is easy to prove that $|\mathbb{R}^\mathbb{R}-C|$ is an infinite set, since if it was finite we would have\begin{equation}2^{2^{\aleph_0}}=|\mathbb{R}^\mathbb{R}|=|\mathbb{R}^\mathbb{R}-C|+|C| \leq n+2^{\aleph_0}=2^{\aleph_0}< 2^{2^{\aleph_0}},\end{equation}which is a contradiction. Now I make use of a theorem requiring the axiom of choice:
For every infinite set $S$ there exists a unique aleph $\aleph_\alpha$ such that $|S|=\aleph_\alpha$. So I let $|\mathbb{R}^\mathbb{R}-C|=\aleph_\alpha$ for some ordinal $\alpha$. Now assume $\aleph_\alpha < 2^{2^{\aleph_0}}$. Here I make use of another theorem, which I am not sure I am allowed to do: For every $\alpha$ and $\beta$ such that $\alpha \leq \beta$, we have $\aleph_\alpha+\aleph_\beta=\aleph_\beta$. So I basically then say either $|\mathbb{R}^\mathbb{R}-C|+|C|=\aleph_\alpha$ or $|\mathbb{R}^\mathbb{R}-C|+|C|=2^{\aleph_0}$, depending on whether $\aleph_\alpha \leq 2^{\aleph_0}$ or vice versa.

The assumption I am making is that $2^{\aleph_0}=\aleph_1$, which as I understand it is basically equivalent to the continuum hypothesis, as we are saying that the least uncountable number is $\aleph_1$ (but by the previous theorem requiring the axiom of choice this seems reasonable?). So anyway, assuming I can do that, we again have a contradiction, since then we are saying $|\mathbb{R}^\mathbb{R}|=2^{\aleph_0}<2^{2^{\aleph_0}}$ or $|\mathbb{R}^\mathbb{R}|=\aleph_\alpha < 2^{2^{\aleph_0}}$.

My main problem with my proof is that I am not making use of the result given in exercise 2.5 as suggested in the hint. Can anyone help with a proof making use of this result (stated in the second block quote above)?
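For comparison, here is one way the hint's appeal to exercise 2.5 may be intended; this reconstruction is my own, and it needs neither CH nor trichotomy. By 2.5, $2^{2^{\aleph_0}} \cdot 2^{2^{\aleph_0}} = 2^{2^{\aleph_0}}$, so we may fix a bijection $h:\mathbb{R}^\mathbb{R}\to\mathbb{R}^\mathbb{R}\times\mathbb{R}^\mathbb{R}$. The projection of $h[C]$ to the first coordinate has at most $|C|\leq 2^{\aleph_0}$ elements, and $2^{\aleph_0}<2^{2^{\aleph_0}}$ by Cantor's theorem, so the projection is not all of $\mathbb{R}^\mathbb{R}$: there is some $f_0$ avoiding it. Then the entire row $\{f_0\}\times\mathbb{R}^\mathbb{R}$ is disjoint from $h[C]$, so $h^{-1}[\{f_0\}\times\mathbb{R}^\mathbb{R}]\subseteq \mathbb{R}^\mathbb{R}-C$, giving
\begin{equation}2^{2^{\aleph_0}}=|\{f_0\}\times\mathbb{R}^\mathbb{R}|\leq|\mathbb{R}^\mathbb{R}-C|\leq|\mathbb{R}^\mathbb{R}|=2^{2^{\aleph_0}},\end{equation}
and the result follows by Cantor-Bernstein.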
Let $(X,\tau)$ be a topological space and $K$ a closed subspace. If $U$ is a compact subset of $(X,\tau)$, then $U\cap K$ is compact.
Attempted proof:
Since $U$ is compact there is a finite subcovering such that $U\subset \bigcup\limits_{i=1}^{n}B_i$, with $B_i\in \tau$ for all $1\leqslant i\leqslant n$.
$U\cap K\subset \left(\bigcup\limits_{i=1}^{n}B_i\right)\cap K=\bigcup\limits_{i=1}^{n}(B_i\cap K)$
$B_i\cap K$ is open for the subspace topology for all $i$.
Hence $\bigcup\limits_{i=1}^{n}(B_i\cap K)$ is a finite subcovering of $U\cap K$, proving $U\cap K$ to be compact in the subspace topology.
Question:
Is my proof right? If not how should I correct it?
Thanks in advance!
A Mathematical Program with Equilibrium Constraints (MPEC) is a constrained optimization problem in which the constraints include equilibrium constraints, such as variational inequalities or complementarity conditions. MPECs arise in applications in engineering design, in economic equilibrium and in multilevel games. MPECs are difficult to solve because the feasible region is not necessarily convex or connected.

A special case of an MPEC is a Mathematical Program with Complementarity Constraints (MPCC), in which the equilibrium constraints are complementarity constraints.
\[\begin{array}{llll}
\mbox{minimize}_x & f(x) & & \\
\mbox{subject to} & g(x) & \geq & 0 \\
& h(x) & = & 0 \\
& 0 \leq x_1 \perp x_2 & \geq & 0
\end{array}
\]
The complementarity constraint \(0 \leq x_1 \perp x_2 \geq 0\) can be written equivalently as \(x_1 \geq 0, x_2 \geq 0, x_1^T x_2 = 0\).

Online and Software Resources
CPNET: Complementarity Problem Net
GAMS World MPEC World includes MPEC Library of models and other information
GAMS World MPSGE World includes MPSGE Library of economic equilibrium models and other information
MacMPEC is a collection of MPEC test problems in AMPL
MacEPEC is a small collection of Equilibrium Problems with Equilibrium Constraints (EPEC) test problems in AMPL

References
Luo, Z.-Q., Pang, J.-S., and Ralph, D. 1996. Mathematical Programs with Equilibrium Constraints. Cambridge University Press.
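As a toy illustration of the equivalence just stated (a feasibility check only, not an MPEC solver; the function name is mine):

```python
def complementarity_feasible(x1, x2, tol=1e-9):
    """Check 0 <= x1 ⟂ x2 >= 0, i.e. x1 >= 0, x2 >= 0, and x1^T x2 = 0."""
    # Nonnegativity of both vectors (up to a numerical tolerance).
    if any(v < -tol for v in x1) or any(v < -tol for v in x2):
        return False
    # With nonnegativity, x1^T x2 = 0 forces componentwise x1[i]*x2[i] = 0:
    # at each index, at least one of the pair must vanish.
    dot = sum(a * b for a, b in zip(x1, x2))
    return abs(dot) <= tol

assert complementarity_feasible([0.0, 3.0], [2.0, 0.0])   # disjoint supports
assert not complementarity_feasible([1.0], [1.0])          # both strictly positive
assert not complementarity_feasible([-1.0], [0.0])         # negativity violates the bound
```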
ISSN: 1078-0947
eISSN: 1553-5231
Discrete & Continuous Dynamical Systems - A
June 2006 , Volume 16 , Issue 2
A special Issue Dedicated to Anatole Katok On the Occasion of his 60th Birthday
Abstract:
This issue of Discrete and Continuous Dynamical Systems is dedicated to Anatole Katok and was conceived on the occasion of his 60th birthday. Anatole Katok was born in Washington, D.C. in 1944. In 1959 he placed second in the Moscow Mathematical Olympiad, and the year after entered Moscow State University, earning his mathematics doctorate in 1968 from Y. Sinai. After working in the department of mathematical methods at the Central Economics and Mathematics Institute for 10 years he emigrated with his family, moving via Vienna, Rome and Paris to the University of Maryland. The position in the US allowed him to travel, attend and organize conferences, collaborate with other mathematicians and supervise students.
From this time on, he organized more conferences, special years and other events than anybody else in the dynamics community. During his five years at Maryland Katok was instrumental in the development of their dynamical systems school, and after moving to first Caltech and then Penn State he founded a strong group in dynamical systems at each of these institutions. The schools at Maryland and Penn State have become leading world centers. He has always been active in mentoring younger generations. During his student years he devoted much energy to mathematics olympiads and circles, at Penn State he has been the driving force behind the Mathematics Advanced Study Semesters program for especially strong mathematics undergraduates, and he has supervised more than two dozen doctoral students.
Abstract:
The nonadditive thermodynamic formalism is a generalization of the classical thermodynamic formalism, in which the topological pressure of a single function $\phi$ is replaced by the topological pressure of a sequence of functions $\Phi=(\phi_n)_n$. The theory also includes a variational principle for the topological pressure, although with restrictive assumptions on $\Phi$. Our main objective is to provide a new class of sequences, the so-called almost additive sequences, for which it is possible not only to establish a variational principle, but also to discuss the existence and uniqueness of equilibrium and Gibbs measures. In addition, we give several characterizations of the invariant Gibbs measures, also in terms of an averaging procedure over the periodic points.
Abstract:
The existence of stable manifolds for nonuniformly hyperbolic trajectories is well know in the case of $C^{1+\alpha}$ dynamics, as proven by Pesin in the late 1970's. On the other hand, Pugh constructed a $C^1$ diffeomorphism that is not of class $C^{1+\alpha}$ for any $\alpha$ and for which there exists no stable manifold. The $C^{1+\alpha}$ hypothesis appears to be crucial in some parts of smooth ergodic theory, such as for the absolute continuity property and thus in the study of the ergodic properties of the dynamics. Nevertheless, we establish the existence of invariant stable manifolds for nonuniformly hyperbolic trajectories of a large family of maps of class at most $C^1$, by providing a condition which is weaker than the $C^{1+\alpha}$ hypothesis but which is still sufficient to establish a stable manifold theorem. We also consider the more general case of sequences of maps, which corresponds to a nonautonomous dynamics with discrete time. We note that our proof of the stable manifold theorem is new even in the case of $C^{1+\alpha}$ nonuniformly hyperbolic dynamics. In particular, the optimal $C^1$ smoothness of the invariant manifolds is obtained by constructing an invariant family of cones.
Abstract:
Adapting techniques of Misiurewicz, for $1\leq r < \infty$ we give an explicit construction of $C^r$ maps with positive residual entropy. We also establish the behavior of symbolic extension entropy with respect to joinings, fiber products, products, powers and flows.
Abstract:
We call an ordered set $\mathbf{c} = (c(i): i \in \mathbb{N})$, of nonnegative extended real numbers $c(i)$, a universal skyscraper template if it is the distribution of first return times for every ergodic measure preserving transformation $T$ of an infinite Lebesgue measure space. If $\sum_i c(i)<\infty$, we give a family of examples of ergodic infinite measure preserving transformations that do not admit $\mathbf{c}$ as a skyscraper template.
If the distribution $\mathbf{c}$ satisfies $\gcd\{i: c(i) >0 \} = 1$, and if either of the conditions $c(I) = \infty$ (for some integer $I$), or $\inf_i \{c(i)\} > 0$ is satisfied, then $\mathbf{c}$ is a universal skyscraper template.
Abstract:
Suppose $G$ is an infinite Abelian group that factorizes as the direct sum $G = A \oplus B$: i.e., the $B$-translates of the single tile $A$ evenly tile the group $G$ ($B$ is called the tile set). In this note, we consider conditions for another set $C \subset G$ to tile $G$ with the same tile set $B$. In an earlier paper, we answered a question of Sands regarding such tilings of $G$ when $A$ is a finite tile. We now consider extensions of Sands's question when $A$ is infinite. We offer two approaches to this question. The first approach involves a combinatorial condition used by Tijdeman and Sands. This condition completely characterizes when a set $C$ can tile $G$ with the tile set $B$; the condition is applied to simplify the proofs and extend some of Sands's results [8]. The second approach is measure theoretic and follows Eigen, Hajian, and Ito's work on exhaustive weakly wandering sets for ergodic infinite measure preserving transformations.
Abstract:
A configuration (i.e., a pair of points) in a Riemannian space $X$ is secure if all connecting geodesics can be blocked by a finite subset of $X$. A space is secure if all of its configurations are secure. Secure spaces seem to be rare.
If $X$ is an insecure space, it is natural to ask how big the set of insecure configurations is. We investigate this problem for flat surfaces, in particular for translation surfaces and polygons, from the viewpoint of measure theory.
Here is a sample of our results. Let $X$ be a lattice translation surface or a lattice polygon. Then the following dichotomy holds: i) The surface (polygon) $X$ is arithmetic. Then all configurations in $X$ are secure; ii) The surface (polygon) $X$ is nonarithmetic. Then almost all configurations in $X$ are insecure.
Abstract:
Let $M_{\phi}$ denote the set of Borel probability measures invariant under a topological action $\phi$ on a compact metrizable space $X$. For a continuous function $f:X\to\mathbb{R}$, a measure $\mu\in M_{\phi}$ is called $f$-maximizing if $\int f\, d\mu = \sup\{\int f\, dm : m\in M_{\phi}\}$. It is shown that if $\mu$ is any ergodic measure in $M_{\phi}$, then there exists a continuous function whose unique maximizing measure is $\mu$. More generally, if $\mathcal E$ is a non-empty collection of ergodic measures which is weak$^*$ closed as a subset of $M_{\phi}$, then there exists a continuous function whose set of maximizing measures is precisely the closed convex hull of $\mathcal E$. If moreover $\phi$ has the property that its entropy map is upper semi-continuous, then there exists a continuous function whose set of equilibrium states is precisely the closed convex hull of $\mathcal E$.
Abstract:
We construct the simplest chaotic system with a two-point attractor on the plane.
Abstract:
The topological entropy of piecewise affine maps is studied. It is shown that singularities may contribute to the entropy only if there is angular expansion and we bound the entropy via the expansion rates of the map. As a corollary, we deduce that non-expanding conformal piecewise affine maps have zero topological entropy. We estimate the entropy of piecewise affine skew-products. Examples of abnormal entropy growth are provided.
Abstract:
We consider the horocycle flow associated to a $\mathbb{Z}^d$-cover of a compact hyperbolic surface. Such flows have no finite invariant measures, and infinitely many infinite ergodic invariant Radon measures. We prove that, up to normalization, only one of these infinite measures admits a generalized law of large numbers, and we identify such laws.
Abstract:
We study the nonadditive thermodynamic formalism for the class of almost-additive sequences of potentials. We define the topological pressure $P_Z(\Phi)$ of an almost-additive sequence $\Phi$, on a set $Z$. We give conditions which allow us to establish a variational principle for the topological pressure. We state conditions for the existence and uniqueness of equilibrium measures, and for subshifts of finite type the existence and uniqueness of Gibbs measures. Finally, we compare the results for almost-additive sequences to the thermodynamic formalism for the classical (additive) case [10] [11] [3], the sequences studied by Barreira [1], Falconer [5], and that of Feng and Lau [7], [6].
Abstract:
We show that the iterated images of a Jacobian pair $f:\mathbb{C}^2 \rightarrow \mathbb{C}^2$ stabilize; that is, all the sets $f^k(\mathbb{C}^2)$ are equal for $k$ sufficiently large. More generally, let $X$ be a closed algebraic subset of $\mathbb{C}^N$, and let $f:X\rightarrow X$ be an open polynomial map with $X-f(X)$ a finite set. We show that the sets $f^k(X)$ stabilize, and for any cofinite subset $\Omega \subseteq X$ with $f(\Omega) \subseteq \Omega$, the sets $f^k(\Omega)$ stabilize. We apply these results to obtain a new characterization of the two dimensional complex Jacobian conjecture related to questions of surjectivity.
Abstract:
In the paper, we discuss two questions about degree $d$ smooth expanding circle maps, with $d \ge 2$. (i) We characterize the sequences of asymptotic length ratios which occur for systems with Hölder continuous derivative. The sequence of asymptotic length ratios are precisely those given by a positive Hölder continuous function $s$ (solenoid function) on the Cantor set $C$ of $d$-adic integers satisfying a functional equation called the matching condition. In the case of the $2$-adic integer Cantor set, the functional equation is
$s(2x+1) = \frac{s(x)}{s(2x)}\left(1+\frac{1}{s(2x-1)}\right)-1.$
We also present a one-to-one correspondence between solenoid functions and affine classes of exponentially fast $d$-adic tilings of the real line that are fixed points of the $d$-amalgamation operator. (ii) We calculate the precise maximum possible level of smoothness for a representative of the system, up to diffeomorphic conjugacy, in terms of the functions $s$ and $cr(x)=(1+s(x))/(1+(s(x+1))^{-1})$. For example, in the Lipschitz structure on $C$ determined by $s$, the maximum smoothness is $C^{1+\alpha}$ for $0 < \alpha \le 1$ if and only if $s$ is $\alpha$-Hölder continuous. The maximum smoothness is $C^{2+\alpha}$ for $0 < \alpha \le 1$ if and only if $cr$ is $(1+\alpha)$-Hölder. A curious connection with Mostow type rigidity is provided by the fact that $s$ must be constant if it is $\alpha$-Hölder for $\alpha > 1$.
Abstract:
We prove that if a diffeomorphism on a compact manifold preserves a nonatomic ergodic hyperbolic Borel probability measure, then there exists a hyperbolic periodic point such that the closure of its unstable manifold has positive measure. Moreover, the support of the measure is contained in the closure of all such hyperbolic periodic points. We also show that if an ergodic hyperbolic probability measure does not locally maximize entropy in the space of invariant ergodic hyperbolic measures, then there exist hyperbolic periodic points that satisfy a multiplicative asymptotic growth and are uniformly distributed with respect to this measure.
We have shown that a holomorphic map \(f: G\to \mathbb{C}\) can be expressed as a power series, which bears a certain similarity to polynomials; and a feature of polynomials is that if \(a\) is a root, or zero, of a polynomial \(p\), we can factor \(p\) such that \(p(z)=(z-a)^n q(z)\) where \(q\) is another polynomial with the property that \(q(a)\neq 0\). Now, does this similarity with polynomials extend to factorization? In fact it does, as we shall see.
Let \(f: G\to \mathbb{C}\) be a holomorphic map that is not identically zero, with \(G\subseteq \mathbb{C}\) a domain and \(f(a)=0\). It is our claim that there exists a smallest natural number \(n\) such that \(f^{(n)}(a)\neq 0\). So suppose that there is no such \(n\), i.e. that \(f^{(k)}(a)=0\) for all \(k\in\mathbb{N}\). Let \(B_\rho(a)\) be the largest open ball with center \(a\) contained in \(G\); since we have that \[f(z)=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k\] we then have that \(f\) is identically zero on \(B_\rho(a)\). Fix a point \(z_0\in G\) and let \(\gamma : [0,1]\to G\) be a continuous curve from \(a\) to \(z_0\). By the paving lemma there is a finite partition \(0=t_1 < t_2 <\cdots <t_m=1\) and an \(r>0\) such that \(B_r(\gamma(t_k))\subseteq G\) for all \(k\) and \(\gamma([t_{k-1},t_k])\subseteq B_r(\gamma(t_k))\). Note that \(B_r(\gamma(t_1))=B_r(a)\subseteq B_\rho(a)\) so \(f\) is identically zero on \(B_r(\gamma(t_1))\), but since \(\gamma([t_1,t_2])\subseteq B_r(\gamma(t_1))\) we must have that \(f\) is identically zero on \(B_r(\gamma(t_2))\), and so on finitely many times until we reach \(\gamma(t_m)\) and conclude that \(f\) is identically zero on \(B_r(\gamma(t_m))=B_r(z_0)\), and since \(z_0\) was chosen to be arbitrary we must conclude that \(f\) is identically zero on all of \(G\). A contradiction.
Now, let \(n\) be the smallest natural number such that \(f^{(n)}(a)\neq 0\); then we must have that \(f^{(k)}(a)=0\) for \(k < n\). We then get, for \(z\in B_\rho(a)\): \[\begin{split} f(z) &=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=n}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{n+k} \\&=(z-a)^n \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}, \end{split}\] now, let \(\tilde{f}(z)=\sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}\) and note that \(\tilde{f}\) is non-zero and holomorphic on \(B_\rho(a)\). We then define a map \(g\) given by \[g(z)=\begin{cases} \tilde{f}(z), & z\in B_\rho(a) \\ \frac{f(z)}{(z-a)^n}, & z\in G\setminus \{a\}\end{cases}\] and note that \[f(z)=(z-a)^n g(z),\] showing the existence of a factorization with our desired properties. Showing that this representation is unique is left as an exercise 😉
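A concrete instance (my own illustration, using only the standard cosine series): take \(f(z)=1-\cos z\) and \(a=0\). Then \(f(0)=f'(0)=0\) but \(f''(0)=\cos 0=1\neq 0\), so \(n=2\), and indeed \[f(z) = 1-\cos z = \frac{z^2}{2!}-\frac{z^4}{4!}+\cdots = (z-0)^2\left(\frac{1}{2}-\frac{z^2}{24}+\cdots\right) = (z-0)^2 g(z),\] with \(g(0)=\tfrac12\neq 0\), exactly as the theorem predicts.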
References: Complex analysis. Copenhagen: Department of Mathematical Sciences, University of Copenhagen.
After
years, I still find myself having trouble really internalizing the meaning of various differentials in integrals—specifically, when they come about via reasoning about physical phenomena. When I come back for review, I fall prey to the same problems I had when originally learning the material. It's not necessarily that I don't understand the correct solution; more often than not, I don't understand why the incorrect one is incorrect. Example 1: Consider the process of deriving the moment of inertia of a thin disk of mass $M$ and radius $R$. My immediate thought is "I'd like to derive this by summing the moments of inertia of concentric rings of various radii."
$$I = \sum m_i r_i^2$$ ... where $m_i$ is the mass of a particle on the ring and $r_i$ will be the distance of the particle from the center of the ring (or, the ring's radius).
The moment of inertia of
one point on the circumference of the ring of radius $r$ is:
$$dI_{ring} = dm ~ r^2 = \left(\frac{m}{\pi R^2}\right)r^2$$
... but shouldn't there be a $dr$ somewhere? It's about here where I flounder around trying to figure out why I don't have a $dr$, what $dr$ really means (so that I can insert it into the appropriate place), whether I should actually have $dr$ or $dm$, etc. Then I ask myself "... what am I really summing over—what would the bounds of integration be?" (From $0$ to $2\pi r$, because I'm summing over tiny pieces of the circumference? Am I confusing myself by using $dr$ and not, say, $dS$?)
Example 2: Let's say I get past all that, and find the moment of inertia of a thin hoop of radius $r$ to be $I = \left( \frac{2mr}{R^2}\right)r^2$. I'd now like to sum these hoops to form a disk. So...
$$dI_{disk} = \left( \frac{2mr}{R^2}\right)r^2$$
... where I'd like $r$ to vary from $0$ to $R$. Again, what about $dr$ (or $d\text{[whatever]}$)? Well, I know I want $r$ to vary, so... my integral should look something like...
$$\int_0^R dI$$
... right?
$$\int_0^R \left( \frac{2mr}{R^2}\right)r^2 = \int_0^R \frac{2mr^3}{R^2}$$ ... $dr$?
This will go on for a long time, until I inevitably make a post on Stack Exchange asking for help.
I've read through many examples, and have walked myself through many derivations that fall into this category—and I understand them fully when I do. The problem is, the knowledge that I gain from doing this doesn't seem to generalize. I can't seem to intuit a kind of general rule of thumb for these types of problems, and it's particularly frustrating.
Will someone
please help elucidate what this god damned mysterious differential is in such a way that, perhaps, provides a general rule of thumb?
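For what it's worth, here is the kind of numerical sanity check I end up writing (my own sketch, not an authoritative derivation): summing rings with $dm = \left(\frac{2M}{R^2}\right) r \, dr$ really does land on $I = \frac{1}{2}MR^2$:

```python
# Riemann-sum check: I = sum over rings of dm * r^2, with dm = (2M/R^2) r dr.
M, R, N = 3.0, 2.0, 100_000      # arbitrary mass and radius, number of rings
dr = R / N
I = 0.0
for i in range(N):
    r = (i + 0.5) * dr            # midpoint radius of the i-th ring
    dm = (2 * M / R**2) * r * dr  # mass of a thin ring of width dr
    I += dm * r**2
# I converges to 0.5 * M * R**2 as dr -> 0
```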
Any ket vector \(|\psi⟩\) can be multiplied by a number \(z\) (which, in general can be real or complex) to produce a new vector \(|\phi⟩\):
$$z|\psi⟩=|\phi⟩.$$
In general, \(z=x+iy\). Sometimes the number \(z\) will just be a real number with no imaginary part, but in general it can have both a real and imaginary part. The complex conjugate of the number \(z\) is represented by \(z^*\) and is obtained by changing the sign of the imaginary part of \(z\); so \(z^*=x-iy\). Let’s look at some examples of complex numbers and their complex conjugates. Suppose that \(z\) is any real number with no imaginary part: \(z=x+(i·0)=x\). The complex conjugate of any real number is \(z^*=x-(i·0)=x\). In other words, taking the complex conjugate \(z^*\) of any real number \(z=x\) just gives the real number back. Suppose, however, that \(z=x+iy\) is any complex number and we take the complex conjugate twice. Let’s see what we get. \(z^*\) is just \(z^*=x-iy\) (a new complex number). If we take the “complex conjugate of the complex conjugate,” we have \((z^*)^*=x+iy=z\). For any complex number \(z\), \((z^*)^*=z\).
If we multiply any complex number \(z\) by its complex conjugate \(z^*\), we’ll get
$$zz^*=(x+iy)(x-iy)=x^2-ixy+ixy-i^2y^2=x^2+y^2.$$
For any complex number \(z\) the product \(z^*z\) is always a real number that is greater than or equal to zero. This product is called the modulus squared and, in a very rough sense, represents the length squared of a vector in a complex space. We can write the modulus squared as \(|z|^2=zz^*\). From fig. # we can also see that any complex number can be represented by \(z=x+iy=rcosθ+irsinθ=re^{iθ}\). The complex conjugate of this is \(z^*=x-iy=rcosθ-irsinθ=re^{-iθ}\). The modulus squared of any vector in the complex plane is given by \(|z|^2=zz^*=(re^{iθ})(re^{-iθ})=r^2\). If \(|z|^2=r^2=1\) and hence \(|z|=r=1\), then the magnitude of the complex vector is 1 and the vector is called normalized.
Any vector \(|A⟩\) can be expressed as a column vector: \(\begin{bmatrix}A_1 \\⋮ \\A_N\end{bmatrix}\). To multiply \(|A⟩\) by any number \(z\) we simply multiply each of the components of the column vector by \(z\) to get \(z|A⟩=z\begin{bmatrix}A_1 \\⋮ \\A_N\end{bmatrix}=\begin{bmatrix}zA_1 \\⋮ \\zA_N\end{bmatrix}\). We can add two complex vectors \(|A⟩\) and \(|B⟩\) to get a new complex vector \(|C⟩\). Each of the new components of \(|C⟩\) is obtained by adding the components of \(|A⟩\) and \(|B⟩\) to get \(|A⟩+|B⟩=\begin{bmatrix}A_1 \\⋮ \\A_N\end{bmatrix}+\begin{bmatrix}B_1 \\⋮ \\B_N\end{bmatrix}=\begin{bmatrix}A_1+B_1 \\⋮ \\A_N+B_N\end{bmatrix}=|C⟩\). For every ket vector \(|A⟩=\begin{bmatrix}A_1 \\⋮ \\A_N\end{bmatrix}\) there is an associated bra vector which is the complex conjugate of \(|A⟩\) and is given by \(⟨A|=\begin{bmatrix}A_1^* &... &A_N^*\end{bmatrix}\). The inner product between any two vectors \(|A⟩\) and \(|B⟩\) is written as \(⟨B|A⟩\). The outer product between any two vectors \(|A⟩\) and \(|B⟩\) is written as \(|A⟩⟨B|\). The rule for taking the inner product between any two such vectors is
$$⟨B|A⟩=\begin{bmatrix}B_1^* &... &B_N^*\end{bmatrix}\begin{bmatrix}A_1 \\⋮ \\A_N\end{bmatrix}=B_1^*A_1\text{+ ... +}B_N^*A_N.$$
Whenever you take the inner product of a vector \(|A⟩\) with itself you get
$$⟨A|A⟩=\begin{bmatrix}A_1^* &... &A_N^*\end{bmatrix}\begin{bmatrix}A_1 \\⋮ \\A_N\end{bmatrix}=A_1^*A_1\text{+ ... +}A_N^*A_N.$$
We learned earlier that the product between any number \(z\) (which can be a real number but is in general a complex number) and its complex conjugate \(z^*\) (written as \(zz^*\)) is always a real number that is greater than or equal to zero. This means that each of the terms \(A_i^*A_i\) is greater than or equal to zero and, therefore, \(⟨A|A⟩\) will always equal a real number greater than or equal to zero.
Suppose we have some arbitrary matrix \(\textbf{M}\) whose elements are given by
$$\textbf{M}=\begin{bmatrix}m_{11} &... &m_{N1} \\⋮ &\ddots &⋮ \\m_{1N} &... &m_{NN}\end{bmatrix}.$$
To find the transpose of this matrix (written as \(\textbf{M}^T\)) we interchange the order of the two lower indices of each element. (Another way of thinking about it is that we “reflect” each element about the diagonal.) When we do this we get
$$\textbf{M}^T=\begin{bmatrix}m_{11} &... &m_{1N} \\⋮ &\ddots &⋮ \\m_{N1} &... &m_{NN}\end{bmatrix}.$$
The Hermitian conjugate of a matrix (represented by \(\textbf{M}^†\)) is obtained by first taking the transpose of the matrix and then taking the complex conjugate of each element to get
$$\textbf{M}^†=\begin{bmatrix}m_{11}^* &... &m_{1N}^* \\⋮ &\ddots &⋮ \\m_{N1}^* &... &m_{NN}^*\end{bmatrix}.$$
We represent observables/measurables as linear Hermitian operators. In our electron spin example, the observable/measurable is given by the linear Hermitian operator \(\hat{σ}_r\).
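The component rules above are easy to check numerically. A minimal sketch with plain Python complex numbers (the helper names are ours, not standard notation):

```python
def bra(ket):
    """Bra <A| associated with ket |A>: conjugate each component."""
    return [a.conjugate() for a in ket]

def inner(ketB, ketA):
    """Inner product <B|A> = B_1^* A_1 + ... + B_N^* A_N."""
    return sum(b.conjugate() * a for b, a in zip(ketB, ketA))

def dagger(M):
    """Hermitian conjugate: transpose, then conjugate each element."""
    return [[M[j][i].conjugate() for j in range(len(M))]
            for i in range(len(M[0]))]

A = [1 + 2j, 3 - 1j]
# <A|A> = |1+2i|^2 + |3-i|^2 = 5 + 10 = 15: real and non-negative
norm2 = inner(A, A)
```

Note that `norm2` comes out with zero imaginary part, as the text's argument about the terms \(A_i^*A_i\) predicts.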
There are a few notational conveniences of the lambda calculus that seem to be missing in the HoTT calculus. Here are some ideas about how to add them. First, a kind of "reverse lambda" operator that allows us to refer to arbitrary elements of a type without using a name, just like lambda does with functions. Second, an explicit type former for enumerated types, that allows us to express such types without using a name. Lastly, a pipeline application operator that allows us to string expressions together.
A PDF version of this article is at HoTT Without Vars. This is revision 2.
Type Specialization
The type formers of HoTT (×, →, Π, Σ and the rest) are very similar to the lambda operator λ. Just as λ turns an open formula into the name of the associated function (λx.x + 1 names the function associated with x + 1), the type formers turn expressions into the names of types. For example, × turns A, B into the type name A × B; → turns A, B into the type name A → B, and so forth.
Lambda expressions like λx.x + 1 are called lambda abstractions; by extension, we can call type formation expressions type abstractions. To “undo” a lambda abstraction, we use application: (λx.x + 1)(2) = 3. Since the opposite of abstraction is specialization, it is tempting to think of application as specialization; but application involves more than mere specialization. If we think of a function (that is, a lambda abstraction) as a set of ordered pairs, then its specialization should be a particular ordered pair; but application does not deliver a pair, it projects the second element of a pair. So application can be thought of as a combination of specialization and projection.
Lambda abstractions are treated not only as the names of functions, but as function definitions. That’s what allows us to write, e.g. f = λx.x + 1.
There is no application operator for type abstractions. We cannot undo A → B by writing (A → B)(x), for example. But we can specialize type abstractions; that is just what expressions like a : A do. One (classical) way to interpret such an expression is to say that a denotes an “element” of A. Such an interpretation presupposes a denotational semantics and thus implicates a notion of choice; a more detailed gloss might say that “:” chooses an element of A and assigns it to a. Under a pragmatic interpretation that eschews denotational semantics, we might simply stipulate that a : A means that a is a token of type A and leave it at that. Either way, we have introduced a variable, a, in the service of specializing a type abstraction.
Now one of the great advantages of the lambda notation is that it allows us to dispense with function names. In fact this notational convenience was Church’s original motivation in introducing λ; he did not realize until later that it formed the basis of a powerful computation/logical calculus. Without the lambda operator we would have to define and name every function we need to use, so that we can later refer to it by name. With the lambda calculus we can just write out the function definition wherever we need it.
In principle we should be able to do the same thing – that is, dispense with intermediate names – for type abstractions and specializations. But the HoTT notation does not currently support this; if you want to work with an arbitrary element of a type, you must name it using the “:” operator. And enumerated types (e.g. the boolean type) cannot be specified without naming.
To address this notational deficiency we propose a specialization operator for type abstractions, ƛ (this is U+019B, lambda with stroke; in latex, I use reversed lambda). Its meaning is exactly analogous to the lambda operator, but instead of abstracting it specializes. λ turns a particular formula into an abstraction; ƛ turns a type abstraction into a particular. Under a classical interpretation, the operator “selects” an element of the type to which it is applied; under a pragmatic interpretation, the combination of the operator symbol and a type symbol is a symbol of that type.
1) f : A→B (f names an arbitrary function of type A→B)
2) ƛA→B (ƛA→B : A→B)
3) ƛN (an arbitrary natural number)
The usefulness of this becomes more evident with Π and Σ types. From the definition of path induction, p. 49:
4) \(C:Π_{(x,y:A)}(x=_Ay)→U\) same as: \(C :\equiv ƛ \bigg(Π_{(x,y:A)} ƛ\big( (x=_Ay)\to\mathcal{U}\big)\bigg)\) 5) \(c:Π_{(x:A)}C(x,x,refl_x)\) same as: \(c :\equiv ƛ \bigg(Π_{(x:A)} ƛ\ C(x,x,refl_x)\bigg)\)
These may be rewritten as follows using the pipeline application operator “|” described below; briefly, x|f = f (x) :
6) \(c:≡ƛ\Pi_{(x:A)}\bigg(\ x\to\Sigma_{(x,x:A)}(x=_Ax)\ \bigg\rvert\ ƛ\Pi_{(x,y:A)}(x= y)→U\bigg)\) incorrect 6a) \(c:≡ƛ\Pi_{(x:A)}\ x\toƛ\bigg(ƛ\Sigma_{(x,x:A)}(x=_Ax)\ \bigg\rvert\ ƛ\Pi_{(x,y:A)}(x= y)→U\bigg)\) correct? 7) \(c:≡ƛ\Pi_{(x:A)}\bigg(\ x\to\Sigma_{(x:A)}\Big(\Sigma_{(x:A)}(x=_Ax)\Big)\ \bigg\rvert\ ƛ\Pi_{(x,y:A)}(x= y)→U\bigg)\) incorrect 7a) \(c:≡ƛ\Pi_{(x:A)}\ x\to\ ƛ\bigg(ƛ\Sigma_{(x:A)}\Big(\Sigma_{(x:A)}(x=_Ax)\Big)\ \bigg\rvert\ ƛ\Pi_{(x,y:A)}(x= y)→U\bigg)\) correct?
These are equivalent (I think); (7) is just more explicit than (6). They are slightly different than (4) and (5) in that they do not use \(refl_x\). In both cases, the entire meaning can be read off the one expression, just as in the case of a lambda definition; it can be glossed in order: c is an element of a function type that takes any x : A, produces a dependent triple of the form (x, x, p), where p is any proof that x = x, and feeds that into a function that maps such triples to a new type.
This notation seems to make it possible to render structure more explicitly; whether or not this is a better way to write this is beside the point; the ƛ operator at least makes it possible.
(Of course, the symbol is arbitrary but motivated - it’s a reverse lambda. We may find a better symbol than this; e.g. upside down lambda: or the like).
Caveat: You cannot just replace named elements with ƛ expressions willy-nilly, since the latter pick out arbitrary elements. Where a name occurs multiple times in a context it must mean the same thing each time; e.g. in the formula for \(ind=_A\) on p. 49 (reproduced below).
Definitions: Implicit v. Explicit
A specialization like a : A says that a is an arbitrary token of type A. But arbitrary does not mean indefinite; if we adopt the classical perspective and treat a : A as meaning that a denotes an arbitrary element of A, we have not said that a is undefined. On the contrary, if it is to denote at all, it must denote something definite. So we might say that a denotes a definite but unknown element of A. Since it is definite, we can say that a is implicitly defined. If we then offer a “defining equation” for a, it becomes explicitly defined.
In other words, typing judgments of the form
a : A implicitly define their token component.
The difference is clear in the case of function types:
8) f : N→N
9) f :≡ λx:N.x+1
Expression (8) implicitly defines
f by giving its type explicitly and “judging” that f has that type; that is, by assigning f to a specialization of its type. Expression (9) then makes that definition explicit.
Since the term “definition” thus has two meanings, we will generally use the term “specification” and leave it to the reader to determine from context which kind of definition is involved. For example, we might say “Given the specification a : A”, meaning that a : A specifies that a names an implicitly defined element of A.

An Explicit Operator for Enumerated Types

Enumerated types like 2 do not have an explicit abstraction operator. We propose Λ. This allows us to specify “anonymous” enumerated types, e.g.
Λ{Mon, Tues, Wed, Thu, Fri, Sat, Sun}
specifies a “Day” type without naming it. To give it a name we can write:
Day :≡ Λ{Mon, Tues, Wed, Thu, Fri, Sat, Sun}
Pipeline Application
10) \(ƛ\Sigma_{(x,y:A)}(x=_A y)\ \Big\rvert\ ƛ\Pi_{(x,y:A)}(x=_Ay)→U\)
11) \(\Sigma_{(x,y:A)}(x=_Ay)\ \Big\rvert\ \Pi_{(x,y:A)}(x=_Ay)→U\)
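The pipeline operator is simply function application with the argument written first, applied left to right. A minimal executable rendering in Python (the helper name pipe is ours):

```python
def pipe(x, *fs):
    """x | f | g  ==  g(f(x)): feed x through the functions left to right."""
    for f in fs:
        x = f(x)
    return x

result = pipe(2, lambda x: x + 1, lambda x: x * 10)  # (2 + 1) * 10 == 30
```

This is the same device as the `|>` operator found in several functional languages; with no functions supplied, `pipe(x)` is just `x`.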
12) \(C:\Pi_{(x,y:A)}(x=_Ay)→U\)
13) \(C(a,b,p)\) where \(a,b:A\) and \(p:(a=_Ab)\)

Universe Operator

For lack of a better term. The standard way to express a family is B : A → U. This is fine if B and A are relatively simple; B(a) is a type. But if B and/or A are complex this can be awkward. Sometimes we may want a name to refer to members of a type family, if only for expository purposes; e.g. B(a) :≡ P. The proposal here is that we introduce a new operator that allows us to name an element of U: PU. Read this as “P is an element drawn from U”.
This allows us to say e.g. B : A → PU
This is to be interpreted as “B takes each element of A to an element of U, which we name P”; it does not mean “B takes each element of A to an element of P”.
In a barter exchange market, agents bring items and seek to exchange their
items with one another. Agents may agree to a k-way exchange involving a cycle
of k agents. A barter exchange market can be represented by a digraph where the
vertices represent items and the edges out of a vertex indicate the items that
an agent is willing to accept in exchange for that item. It is known that the
problem of finding a set of vertex-disjoint cycles with the maximum total
number of vertices (MAX-SIZE-EXCHANGE) can be solved in polynomial time. We
consider a barter exchange where each agent may bring multiple items, and items
of the same agent are represented by vertices with the same color. A set of
cycles is said to be tropical if for every color there is a cycle that contains
a vertex of that color. We show that the problem of determining whether there
exists a tropical set of vertex-disjoint cycles in a digraph
(TROPICAL-EXCHANGE) is NP-complete and APX-hard. This is equivalent to
determining […]
Section: Analysis of Algorithms
A mixed dominating set for a graph $G = (V,E)$ is a set $S\subseteq V \cup E$
such that every element $x \in (V \cup E) \backslash S$ is either adjacent or
incident to an element of $S$. The mixed domination number of a graph $G$,
denoted by $\gamma_m(G)$, is the minimum cardinality of mixed dominating sets
of $G$. Any mixed dominating set with the cardinality of $\gamma_m(G)$ is
called a minimum mixed dominating set. The mixed dominating set (MDS) problem
is to find a minimum mixed dominating set for a graph $G$ and is known to be an
NP-complete problem. In this paper, we present a novel approach to find all of
the mixed dominating sets, called the AMDS problem, of a graph with bounded
tree-width $tw$. Our new technique of assigning power values to edges and
vertices, and combining with dynamic programming, leads to a fixed-parameter
algorithm of time $O(3^{tw^{2}}\times tw^2 \times |V|)$. This shows that MDS is
fixed-parameter tractable with respect to tree-width. In addition, […]
Section: Graph Theory
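The mixed domination definition above can be made concrete with a brute-force checker. A sketch (ours, not from the paper) where vertices are integers and edges are frozensets of their two endpoints:

```python
from itertools import chain

def is_mixed_dominating(V, E, S):
    """Check that every element of (V union E) not in S is adjacent or
    incident to some element of S."""
    def touches(x, s):
        if isinstance(x, frozenset) and isinstance(s, frozenset):
            return bool(x & s)            # adjacent edges share an endpoint
        if isinstance(x, frozenset):
            return s in x                 # vertex s incident to edge x
        if isinstance(s, frozenset):
            return x in s                 # edge s incident to vertex x
        return frozenset((x, s)) in E     # adjacent vertices
    return all(any(touches(x, s) for s in S)
               for x in chain(V, E) if x not in S)

# Path 1 - 2 - 3: the single middle vertex {2} is a mixed dominating set.
V = {1, 2, 3}
E = {frozenset((1, 2)), frozenset((2, 3))}
```

On this path, `{2}` dominates every remaining vertex and edge, so gamma_m of the path on three vertices is 1.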
This is the first of three papers that develop structures which are counted
by a "parabolic" generalization of Catalan numbers. Fix a subset R of
{1,..,n-1}. Consider the ordered partitions of {1,..,n} whose block sizes are
determined by R. These are the "inverses" of (parabolic) multipermutations
whose multiplicities are determined by R. The standard forms of the ordered
partitions are referred to as "R-permutations". The notion of 312-avoidance is
extended from permutations to R-permutations. Let lambda be a partition of N
such that the set of column lengths in its shape is R or R union {n}. Fix an
R-permutation pi. The type A Demazure character (key polynomial) in x_1, ..,
x_n that is indexed by lambda and pi can be described as the sum of the weight
monomials for some of the semistandard Young tableau of shape lambda that are
used to describe the Schur function indexed by lambda. Descriptions of these
"Demazure" tableaux developed by the […]
Section: Combinatorics
For oriented graphs $G$ and $H$, a homomorphism $f: G \rightarrow H$ is
locally-injective if, for every $v \in V(G)$, it is injective when restricted
to some combination of the in-neighbourhood and out-neighbourhood of $v$. Two
of the possible definitions of local-injectivity are examined. In each case it
is shown that the associated homomorphism problem is NP-complete when $H$ is a
reflexive tournament on three or more vertices with a loop at every vertex, and
solvable in polynomial time when $H$ is a reflexive tournament on two or fewer
vertices.
Section: Graph Theory
In this paper, we study a parameter that is squeezed between arguably the two
most important domination parameters, namely the domination number, $\gamma(G)$, and
the total domination number, $\gamma_t(G)$. A set $S$ of vertices in $G$ is a
semitotal dominating set of $G$ if it is a dominating set of $G$ and every
vertex in $S$ is within distance $2$ of another vertex of $S$. The semitotal
domination number, $\gamma_{t2}(G)$, is the minimum cardinality of a semitotal
dominating set of $G$. We observe that $\gamma(G)\leq \gamma_{t2}(G)\leq
\gamma_t(G)$. In this paper, we give a lower bound for the semitotal domination
number of trees and we characterize the extremal trees. In addition, we
characterize trees with equal domination and semitotal domination numbers.
Section: Graph Theory
We study the biased $(1:b)$ Maker--Breaker positional games, played on the
edge set of the complete graph on $n$ vertices, $K_n$. Given Breaker's bias
$b$, possibly depending on $n$, we determine the bounds for the minimal number
of moves, depending on $b$, in which Maker can win in each of the two standard
graph games, the Perfect Matching game and the Hamilton Cycle game.
Section: Graph Theory
The 1-2 Conjecture raised by Przybylo and Wozniak in 2010 asserts that every undirected graph admits a 2-total-weighting such that the sums of weights "incident" to the vertices yield a proper vertex-colouring. Following several recent works bringing related problems and notions (such as the well-known 1-2-3 Conjecture, and the notion of locally irregular decompositions) to digraphs, we here introduce and study several variants of the 1-2 Conjecture for digraphs. For every such variant, we raise conjectures concerning the number of weights necessary to obtain a desired total-weighting in any digraph. We verify some of these conjectures, while we obtain close results towards the ones that are still open.
Section: Graph Theory
For a connected graph $G$ of order at least $2$ and $S\subseteq V(G)$, the
\emph{Steiner distance} $d_G(S)$ among the vertices of $S$ is the minimum size
among all connected subgraphs whose vertex sets contain $S$. Let $n$ and $k$ be
two integers with $2\leq k\leq n$. Then the \emph{Steiner $k$-eccentricity
$e_k(v)$} of a vertex $v$ of $G$ is defined by $e_k(v)=\max
\{d_G(S)\,|\,S\subseteq V(G), \ |S|=k, \ \text{and} \ v\in S\}$. Furthermore, the
\emph{Steiner $k$-diameter} of $G$ is $sdiam_k(G)=\max \{e_k(v)\,|\, v\in
V(G)\}$. In this paper, we investigate the Steiner distance and Steiner
$k$-diameter of Cartesian and lexicographical product graphs. Also, we study
the Steiner $k$-diameter of some networks.
Section: Graph Theory
By means of inversion techniques and several known hypergeometric series
identities, summation formulas for the Fox-Wright function are explored. They give
some new hypergeometric series identities when the parameters are specified.
Section: Combinatorics
We study a recently introduced generalization of the Vertex Cover (VC)
problem, called Power Vertex Cover (PVC). In this problem, each edge of the
input graph is supplied with a positive integer demand. A solution is an
assignment of (power) values to the vertices, so that for each edge one of its
endpoints has value as high as the demand, and the total sum of power values
assigned is minimized. We investigate how this generalization affects the
parameterized complexity of Vertex Cover. On the positive side, when
parameterized by the value of the optimal P, we give an O*(1.274^P)-time
branching algorithm (O* is used to hide factors polynomial in the input size),
and also an O*(1.325^P)-time algorithm for the more general asymmetric case of
the problem, where the demand of each edge may differ for its two endpoints.
When the parameter is the number of vertices k that receive positive value, we
give O*(1.619^k) and O*(k^k)-time algorithms for the symmetric and asymmetric
cases […]
Section: Discrete Algorithms
In this paper, we facilitate the reasoning about impure programming languages by annotating terms with “decorations” that describe what computational (side) effect evaluation of a term may involve. In a point-free categorical language, called the “decorated logic”, we formalize the mutable state and the exception effects first separately, exploiting a nice duality between them, and then combined. The combined decorated logic is used as the target language for the denotational semantics of the IMP+Exc imperative programming language, and allows us to prove equivalences between programs written in IMP+Exc. The combined logic is encoded in Coq, and this encoding is used to certify some program equivalence proofs.
Section: Automata, Logic and Semantics
We present two families of Wilf-equivalences for consecutive and
quasi-consecutive vincular patterns. These give new proofs of the
classification of consecutive patterns of length $4$ and $5$. We then prove
additional equivalences to explicitly classify all quasi-consecutive patterns
of length $5$ into 26 Wilf-equivalence classes.
Section: Combinatorics
Dominating broadcasting is a domination-type structure that models a
transmission antenna network. In this paper, we study a limited version of this
structure, that was proposed as a common framework for both broadcast and
classical domination. In this limited version, the broadcast function is upper
bounded by an integer $k$ and the minimum cost of such function is the
dominating $k$-broadcast number. Our main result is a unified upper bound on
this parameter for any value of $k$ in general graphs, in terms of both $k$ and
the order of the graph. We also study the computational complexity of the
associated decision problem.
Section: Graph Theory
Rectangulations are partitions of a square into axis-aligned rectangles. A
number of results provide bijections between combinatorial equivalence classes
of rectangulations and families of pattern-avoiding permutations. Other results
deal with local changes involving a single edge of a rectangulation, referred
to as flips, edge rotations, or edge pivoting. Such operations induce a graph
on equivalence classes of rectangulations, related to so-called flip graphs on
triangulations and other families of geometric partitions. In this note, we
consider a family of flip operations on the equivalence classes of diagonal
rectangulations, and their interpretation as transpositions in the associated
Baxter permutations, avoiding the vincular patterns { 3{14}2, 2{41}3 }. This
complements results from Law and Reading (JCTA, 2012) and provides a complete
characterization of flip operations on diagonal rectangulations, in both
geometric and combinatorial terms.
Section: Combinatorics
A graph $G$ is {\em matching-decyclable} if it has a matching $M$ such that
$G-M$ is acyclic. Deciding whether $G$ is matching-decyclable is an NP-complete
problem even if $G$ is 2-connected, planar, and subcubic. In this work we
present results on matching-decyclability in the following classes: Hamiltonian
subcubic graphs, chordal graphs, and distance-hereditary graphs. In Hamiltonian
subcubic graphs we show that deciding matching-decyclability is NP-complete
even if there are exactly two vertices of degree two. For chordal and
distance-hereditary graphs, we present characterizations of
matching-decyclability that lead to $O(n)$-time recognition algorithms.
Section: Graph Theory
A digraph such that every proper induced subdigraph has a kernel is said to be
\emph{kernel perfect} (KP for short) if the digraph itself has a kernel, and
\emph{critical kernel imperfect} (CKI for short) if it does not have a kernel.
The unique CKI-tournament is $\overrightarrow{C}_3$ and the unique
KP-tournaments are the transitive tournaments, however bipartite tournaments
are KP. In this paper we characterize the CKI- and KP-digraphs for the
following families of digraphs: locally in-/out-semicomplete, asymmetric
arc-locally in-/out-semicomplete, asymmetric $3$-quasi-transitive and
asymmetric $3$-anti-quasi-transitive $TT_3$-free and we state that the problem
of determining whether a digraph of one of these families is CKI is polynomial,
giving a solution to a problem closely related to the following conjecture
posted by Bang-Jensen in 1998: the kernel problem is polynomially solvable for
locally in-semicomplete digraphs.
Section: Graph Theory
We consider a relaxation of the concept of well-covered graphs, which are
graphs with all maximal independent sets of the same size. The extent to which
a graph fails to be well-covered can be measured by its independence gap,
defined as the difference between the maximum and minimum sizes of a maximal
independent set in $G$. While the well-covered graphs are exactly the graphs of
independence gap zero, we investigate in this paper graphs of independence gap
one, which we also call almost well-covered graphs. Previous works due to
Finbow et al. (1994) and Barbosa et al. (2013) have implications for the
structure of almost well-covered graphs of girth at least $k$ for $k\in
\{7,8\}$. We focus on almost well-covered graphs of girth at least $6$. We show
that every graph in this class has at most two vertices each of which is
adjacent to exactly $2$ leaves. We give efficiently testable characterizations
of almost well-covered graphs of girth at least $6$ having exactly one or
exactly two […]
Section: Graph Theory
We present a polynomial-time algorithm that solves a nonstandard variation
of the well-known PARTITION-problem: Given positive integers $n, k$ and $t$
such that $t \geq n$ and $k \cdot t = {n+1 \choose 2}$, the algorithm
partitions the elements of the set $I_n = \{1, \ldots, n\}$ into $k$ mutually
disjoint subsets $T_j$ such that $\cup_{j=1}^k T_j = I_n$ and $\sum_{x \in
T_{j}} x = t$ for each $j \in \{1,2, \ldots, k\}$. The algorithm needs
$\mathcal{O}(n \cdot ( \frac{n}{2k} + \log \frac{n(n+1)}{2k} ))$ steps to
insert the $n$ elements of $I_n$ into the $k$ sets $T_j$.
Section: Discrete Algorithms
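A brute-force backtracking sketch (ours; far slower than the paper's near-linear algorithm) illustrates the problem being solved:

```python
def equal_sum_partition(n, k):
    """Partition {1, ..., n} into k blocks, each summing to t = n(n+1)/(2k).
    Returns the blocks, or None if the backtracking search fails."""
    t = n * (n + 1) // (2 * k)
    items = list(range(n, 0, -1))          # place large numbers first
    blocks, sums = [[] for _ in range(k)], [0] * k
    def place(i):
        if i == len(items):
            return True
        x = items[i]
        for j in range(k):
            if sums[j] + x <= t:
                blocks[j].append(x); sums[j] += x
                if place(i + 1):
                    return True
                blocks[j].pop(); sums[j] -= x
            if sums[j] == 0:               # skip symmetric empty blocks
                break
        return False
    return blocks if place(0) else None

# n = 8, k = 3: t = 12, e.g. {8, 4}, {7, 5}, {6, 3, 2, 1}
parts = equal_sum_partition(8, 3)
```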
A $\textit{sigma partitioning}$ of a graph $G$ is a partition of the vertices
into sets $P_1, \ldots, P_k$ such that for every two adjacent vertices $u$ and
$v$ there is an index $i$ such that $u$ and $v$ have different numbers of
neighbors in $P_i$. The $\textit{ sigma number}$ of a graph $G$, denoted by
$\sigma(G)$, is the minimum number $k$ such that $ G $ has a sigma partitioning
$P_1, \ldots, P_k$. Also, a $\textit{ lucky labeling}$ of a graph $G$ is a
function $ \ell :V(G) \rightarrow \mathbb{N}$, such that for every two adjacent
vertices $ v $ and $ u$ of $ G $, $ \sum_{w \sim v}\ell(w)\neq \sum_{w \sim
u}\ell(w) $ ($ x \sim y $ means that $ x $ and $y$ are adjacent). The $\textit{
lucky number}$ of $ G $, denoted by $\eta(G)$, is the minimum number $k $ such
that $ G $ has a lucky labeling $ \ell :V(G) \rightarrow \mathbb{N}_k$. It was
conjectured in [Inform. Process. Lett., 112(4):109--112, 2012] that it is $
\mathbf{NP} $-complete to decide whether $ \eta(G)=2$ for a given […]
Section: Graph Theory
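The lucky number of a small graph can be computed by exhaustive search. A brute-force sketch (ours) where adj maps each vertex to its neighbour list:

```python
from itertools import product

def lucky_number(V, adj):
    """Smallest k admitting a labeling l : V -> {1, ..., k} under which
    adjacent vertices have different neighbour-label sums."""
    def lucky(lab):
        s = {v: sum(lab[w] for w in adj[v]) for v in V}
        return all(s[u] != s[v] for u in V for v in adj[u])
    k = 1
    while True:
        if any(lucky(dict(zip(V, labels)))
               for labels in product(range(1, k + 1), repeat=len(V))):
            return k
        k += 1

# Cycle C4: a constant labeling fails (all sums equal), alternating 1,2 works.
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```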
Condensed Matter > Disordered Systems and Neural Networks
Title: Hydrodynamics of disordered marginally-stable matter
(Submitted on 8 May 2019 (v1), last revised 18 Aug 2019 (this version, v2))
Abstract: We study the vibrational spectra and the specific heat of disordered systems using an effective hydrodynamic framework. We consider the contribution of diffusive modes, i.e. the 'diffusons', to the density of states and the specific heat. We prove analytically that these new modes provide a constant term in the vibrational density of states $g(\omega)$. This contribution is dominant at low frequencies with respect to the Debye propagating modes. We compare our results with numerical simulation data and random matrix theory. Finally, we compute the specific heat and we show the existence of a linear-in-$T$ scaling $C(T) \sim c\,T$ at low temperatures due to the diffusive modes. We analytically derive the coefficient $c$ in terms of the diffusion constant $D$ of the quasi-localized modes and we obtain perfect agreement with numerical data. The linear-in-$T$ behavior in the specific heat is stronger the more localized the modes, and crosses over to a $T^{3}$ (Debye) regime at a temperature $T^{*}\sim \sqrt{v^{3}/D}$, where $v$ is the speed of sound. Our results suggest that the anomalous properties of glasses and disordered systems can be understood effectively within a hydrodynamic approach which accounts for diffusive quasi-localized modes generated via disorder-induced scattering.

Submission history: From Matteo Baggioli. [v1] Wed, 8 May 2019 18:30:15 GMT; [v2] Sun, 18 Aug 2019 13:38:14 GMT.
Let \(P\) be an arbitrary
semi-infinite programming (SIP) problem. We associate with any nonempty set \(S \subset T\) the problem \[P(S) : \min_{x} f(x)\,\,\text{s.t.}\,\, g(x,t) \geq 0\, \text{for all}\, t \in S.\]
Obviously, \(P(T)\) coincides with \(P\), and \(P(S)\) is a finite program when \(S\) is finite.
Discretization methods generate sequences of points in \(\mathbb{R}^n\) converging to an optimal solution of \(P\) by solving a sequence of problems of the form \(P(T_{k})\), where \(T_{k}\) is a nonempty finite subset of \(T\) for \(k=1,2,\dots\). Let \(\varepsilon > 0\) be a fixed small scalar called the accuracy.
Step \(k\) : Let \(T_{k}\) be given.
Compute a solution \(x_{k}\) of \(P(T_{k})\). Stop if \(x_{k}\) is feasible within the fixed accuracy \(\varepsilon\), i.e., \(g(x_{k},t) \geq -\varepsilon\) for all \( t \in T\). Otherwise, replace \(T_{k}\) with a new grid \(T_{k+1}\).
Obviously, \(x_{k}\) is infeasible before optimality. Grid discretization methods select a priori sequences of grids, \(T_{1},T_{2},\ldots,\) usually satisfying \(T_{k} \subset T_{k+1}\) for all \(k\). Alternative discretization approaches generate the sequence \(T_{1},T_{2},\ldots,\) inductively. For instance, the classical Kelley cutting plane approach used in convex SIP consists of taking \(T_{k+1} = T_{k}\cup \{t_{k}\}\), for some \(t_k\in T\), or \(T_{k+1}=(T_{k}\cup \{t_{k}\}) \backslash\{t_{k}'\}\) for some \(t_{k}'\in T_{k}\) (if an elimination rule is included).
Convergence of discretization methods requires \(P\) to be continuous. The main difficulties with these methods are undesirable jamming in the proximity of an optimal solution and the increasing size of the auxiliary problems \(P(T_{k})\) (unless elimination rules are implemented). These methods are only efficient for problems with low-dimensional index sets.
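The cutting-plane discretization loop above can be sketched on a toy problem. All details here are invented for illustration: minimize \(f(x)=x\) subject to \(g(x,t)=x-t\geq 0\) for all \(t\in[0,1]\), whose optimal value is \(x^{*}=1\).

```python
# Toy sketch of the Kelley-style discretization scheme described above.
# Hypothetical SIP: min f(x) = x  s.t.  g(x, t) = x - t >= 0 for all t in [0, 1].
import numpy as np

def solve_subproblem(T_k):
    # P(T_k): min x s.t. x >= t for all t in T_k, so the solution is max(T_k)
    return max(T_k)

eps = 1e-3                               # accuracy (note eps > 0)
T_dense = np.linspace(0.0, 1.0, 10001)   # fine sample standing in for T
T_k = [0.0]                              # initial finite grid T_1
for k in range(100):
    x_k = solve_subproblem(T_k)
    g = x_k - T_dense                    # g(x_k, t) over the sample of T
    if g.min() >= -eps:                  # feasible within accuracy eps: stop
        break
    T_k.append(T_dense[np.argmin(g)])    # T_{k+1} = T_k ∪ {most violated t_k}

print(x_k)  # 1.0, the optimal value
```

In this tiny example a single cut suffices; in general the subproblem solver and the feasibility check over \(T\) are the expensive steps.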
For more details, see Gustafson (1979), Hettich and Zencke (1982), Hettich (1986), Hettich and Kortanek (1993), Reemtsen and Görner (1998), and López and Still (2007) in the Semi-infinite Programming References. |
Let \( A \) be any real square matrix (not necessarily symmetric). Prove that: $$ (x'A x)^2 \leq (x'A A'x)(x'x) $$
The key point in proving this inequality is to recognize that \( x'A A'x \) is the squared vector norm \( \|A'x\|^2 \).
Proof:
If \( x=0 \), then the inequality is trivial.
Suppose \( x \neq 0 \).
\( \frac{x'A x}{x'x}
= \frac{(A'x)'x}{\| x \|^2} = (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} \)
Because \( \frac{x}{\| x \|} \) is a unit vector, \( A'\frac{x}{\| x \|} \) can be considered as a scaling and rotation of \( \frac{x}{\| x \|} \) by \( A' \). Thus the resulting vector \( A'\frac{x}{\| x \|} \) has some norm \( \alpha \geq 0 \), and \( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|}=\alpha \cos(\beta) \) for some \( 0 \leq \beta \leq \pi \), the angle between the vector before and after premultiplying by \( A' \).
Now:
\( ( \frac{x'A x}{x'x} )^2 \)
\(= ( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} )^2 \)
\( =\alpha^2 \cos^2(\beta) \)
\( \leq \alpha^2 \)
\(= (A'\frac{x}{\| x \|})'A'\frac{x}{\| x \|} \)
\(= \frac{(A'x)'A'x}{\| x \|^2} \)
\(= \frac{x'A A'x}{x'x} \)
Finally, multiplying both sides by \( (x'x)^2 \) completes the proof. |
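The inequality is also easy to spot-check numerically. The sketch below (random matrices, arbitrary dimension 4, all choices hypothetical) is a sanity check, not part of the proof; it rests on the same Cauchy-Schwarz observation.

```python
# Numerical spot-check of (x'Ax)^2 <= (x'AA'x)(x'x) for random A and x;
# the inequality is Cauchy-Schwarz applied to the vectors A'x and x.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.standard_normal((4, 4))   # arbitrary, not necessarily symmetric
    x = rng.standard_normal(4)
    lhs = (x @ A @ x) ** 2
    rhs = (x @ A @ A.T @ x) * (x @ x)
    assert lhs <= rhs + 1e-9          # small tolerance for floating point
print("all checks passed")
```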
The Trapezoidal Rule is given by:
$$\int_a^b f(x) ~ dx = \dfrac{b-a}{2n}(f(x_0) + 2f(x_1) + \ldots + 2f(x_{n-1}) + f(x_n))$$
The error term is given by:
$$|e_n| \le \dfrac{\max_{[a,b]} |f''(x)|}{12 n^2} (b-a)^3$$
We have:
$$f(x) = \dfrac{\sin(x)}{x}, x \in (0,1)$$
We find the second derivative:
$$f''(x) = \frac{2 \sin (x)}{x^3}-\frac{2 \cos (x)}{x^2}-\frac{\sin (x)}{x}$$
A plot of $f(x), f'(x), f''(x)$ shows that the maximum of $|f''(x)|$ on $(0,1)$ is about $0.34375$, attained near $x = 0$.
To find the number of iterations, we find $n$ from the error bound, thus:
$$|e_n| \le \dfrac{\max_{[a,b]} |f''(x)|}{12 n^2} (b-a)^3 = \dfrac{0.34375}{12 n^2} (1-0)^3\le 10^{-4} \implies n \ge 16.9251 $$
So, we choose $n = 17$.
Doing $17$ steps of the Trapezoidal Rule yields:
$$\int_0^1 \dfrac{\sin(x)}{x} ~dx \approx 0.9459962252$$
Using WA, we get the value as:
$$ \int_0^1 \dfrac{\sin(x)}{x} ~dx \approx 0.9460830704$$
So, our error estimate produces:
$$ \Delta =0.9460830704 - 0.9459962252 = 0.0000868452$$
This satisfies our requirement of less than $10^{-4}$ error.
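For reference, the computation above can be reproduced with a short script. The only extra assumption is using the limiting value $f(0)=1$ at the removable singularity of $\sin(x)/x$.

```python
# Composite trapezoidal rule with n = 17 for f(x) = sin(x)/x on [0, 1],
# using the limit f(0) = 1 at the removable singularity.
import math

def f(x):
    return 1.0 if x == 0 else math.sin(x) / x

a, b, n = 0.0, 1.0, 17
h = (b - a) / n
approx = h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))
print(abs(approx - 0.9460830704) < 1e-4)  # True: error below 1e-4
```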
Aside: Sometimes using the second derivative as an error bound can have issues, see
Deriving the Trapezoidal Rule Error. Sometimes it is better to take a much more pessimistic value for the max (for example, $1$ in this problem) and avoid estimating the second derivative altogether.
Let $M$ be a compact smooth manifold (closed, for simplicity), $n\in\mathbb{N}$, and equip the space of embeddings $Emb(M,\mathbb{R}^n)$ with the Whitney-$C^{\infty}$-topology. (The weak and the strong one are the same as $M$ is assumed to be compact.) Fix an embedding $j\in Emb(M,\mathbb{R}^n)$ and a tubular neighbourhood of the embedded $j(M)\subseteq\mathbb{R}^n$.
Are small normal deformations of $j$ open in $Emb(M,\mathbb{R}^n)$?
To be more precise: Equip the normal sections $\Gamma(N(j(M)))$ with the Whitney-$C^{\infty}$ topology and let $U\subset\Gamma(N(j(M)))$ be the open (!) set of all sections $s$ such that the induced normal deformation $\{m+s(m)\mid m\in j(M)\}$ of $M$ lies in the tubular neighbourhood. Every such section $s\in U$ gives me an embedding $[m\mapsto j(m)+s(j(m))]\in Emb(M,\mathbb{R}^n)$. Do these embeddings form an open subset of $Emb(M,\mathbb{R}^n)$?
Some naive thoughts:
Consider the unit circle in the plane. Every open neighbourhood of the standard embedding of the circle should contain a small rotation of the circle, which does not come from a normal deformation of the circle. This suggests that the claim is false, but I seem to remember seeing a statement of this flavor ("small normal deformations of embeddings are open") somewhere, although I cannot figure out where it was. |
I promise I’m actually a probability theorist, despite many of my posts being algebraic in nature. Algebra, as we’ve seen in several other posts, elegantly generalizes many things in basic arithmetic, leading to highly lucrative applications in coding theory and data protection. Some definitions in mathematics may not have obvious “practical use”, but turn out to yield theorems and results so powerful we can use them to send image data cleanly from space.
This is a continuation of the exploration of the algebra behind many aspects of coding theory and data protection, and we will continue to develop the fundamentals necessary to appreciate the elegance and simplicity of code transmission and data protection. Here we will discuss the notion of a
coset of a subgroup of a finite group G. For a discussion of the definition of a group, please check out this post. Recall that a group is a set G combined with an operation we will denote \cdot that satisfies the following axioms:
(1)
Closure under \cdot: If a and b are in G, then their product (or sum) [1] a\cdot b is in G
(2)
Associativity: Let a,b and c be elements of G. Then we can group the operation however we like. That is, (a\cdot b)\cdot c = a\cdot (b\cdot c)
(3)
Existence of an identity element: There must be some element e in G that, when multiplied on either the left or right by any element a in G, returns that element. Mathematically, there exists e \in G such that for any a \in G, e\cdot a = a\cdot e = a
(4)
Existence of an inverse: For each element a in G, there is another element a^{-1} in G such that multiplying a^{-1} on either the left or right by a yields the identity element e. Formally, for each a \in G there exists a^{-1} \in G such that a\cdot a^{-1} = a^{-1}\cdot a = e
Some quick examples of groups:
(\mathbb{R}, +), all real numbers under regular addition: the identity element is 0, and the inverse of any real number a is -a; \mathbb{Z}_{n}, the integers modulo n under modulo addition; and the n \times n matrices under matrix addition.
The number of elements in a group G can be either finite or infinite and is denoted |G|. In \mathbb{Z}_{3} = \{0,1,2\}, |\mathbb{Z}_{3}| = 3. |\mathbb{R}| is infinite [2]. We’re going to stick with finite groups; that is, groups with a finite number of elements.
Subgroups
A
subset T of a set S is simply a set made from some of the elements in S. If S = \{0,1,2,3,4,5\}, then T = \{0,2,4\} is a subset of S, and we denote it T \subseteq S. The even integers are a subset of all integers. The rational numbers are a subset of all real numbers. The set of square matrices is a subset of all matrices.
If our set has an operation and is actually a group G, then a subset of this group that also fits the definition of a group is called a
subgroup, and we write H \leq G. Not all subsets of a group are subgroups. To determine if a subset of a group is actually a subgroup, we just have to verify properties (1) and (4) above for the subset.
Let’s focus on finite groups for now. Obviously, we can have subgroups of infinite groups, but it’s easier to get used to these ideas on finite groups first. One possible way to construct a subgroup of a group is to pick an element a \in G. We can take powers of this element and collect them together in a nice bag to create a subgroup called H, or, for fun, denoted \langle a \rangle: H = \langle a\rangle = \{a, a^{2} = a\cdot a, a^{3} = a\cdot a \cdot a,\ldots \}
Now, are we ever done taking powers of a? Actually, yes. In a finite group, there are only a finite number of elements total, so we have to stop taking powers at some point. Moreover, we’ll get to the point where some power of a actually gives us the identity element e, and then any further powers just start us over. Let’s take an explicit example.
Suppose we take \mathbb{Z}_{6} = \{0,1,2,3,4,5\} under modulo addition. Pick 2. Now let’s see what happens when we create a subgroup from powers of 2 under modulo 6 addition.
First, we just have 2. Then 2^{2} = (2+ 2) \bmod 6 = 4 [3]. Next, 2^{3} = (2+2+2) \bmod 6 = 6 \bmod 6 \equiv 0. Hey look! Our identity element. That means that 2^{4} = 2^{3}\cdot 2 = 0 + 2 = 2. Now we’ve started over. That means there are only 3 distinct powers of 2 under modulo 6 addition: H = \langle 2\rangle = \{0,2,4\}. This is the subgroup generated by 2, which is why we use the notation \langle 2\rangle: 2 and its powers under this group operation construct the subgroup.
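The construction above is easy to mirror in code. Here is a small sketch (the helper name is mine) for \mathbb{Z}_n under addition mod n, where "powers" mean repeated addition:

```python
# Cyclic subgroup <a> of Z_n under addition mod n; "powers" of a are the
# repeated additions a, a+a, a+a+a, ... taken until they start to cycle.
def generated_subgroup(a, n):
    powers = []
    x = a % n
    while x not in powers:
        powers.append(x)
        x = (x + a) % n
    return sorted(powers)

print(generated_subgroup(2, 6))  # [0, 2, 4], matching <2> in Z_6
```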
We call |\langle a \rangle | the
order of a. Put another way, the order of an element in a group is the smallest positive power required to yield the identity element. So, under modulo 6 addition, 2^{3} \equiv 0, so the order of 2 is 3, which is the size of the subgroup generated by 2.
Coset Decomposition using Subgroups
We can take a subgroup of a
finite group G and partition the group relative to that subgroup in a nice, methodical way [4]. Let’s take a subgroup generated by some element h \in G. That is, let our subgroup H = \langle h \rangle = \{h_{1}, h_{2},\ldots,h_{c} = e\}, where c is the order of h. We know the order of the element must be finite, because the group itself is finite and closed under its operation. Now let’s construct a pretty array:
First, we’ll put the elements of our subgroup H on the first row:\begin{array}{ccccc}h_{1}&h_{2}&h_{3}&\ldots&h_{c}\end{array}
Now, pick any g \in G you want that
is not in the subgroup H. Let’s name it g_{2}. Put it at the start of the second row; each element of the second row is formed by multiplying the column header h_{i} on the left by g_{2}:\begin{array}{ccccc}h_{1}&h_{2}&h_{3}&\ldots&h_{c}\\g_{2}&g_{2}\cdot h_{2}& g_{2} \cdot h_{3}&\ldots&g_{2} \cdot h_{c}\end{array}
That was easy enough. Now pick another g_{3}\in G that hasn’t shown up in the first two rows. Then we’ll create the third row in the same way as we created the second:\begin{array}{ccccc}h_{1}&h_{2}&h_{3}&\ldots&h_{c}\\g_{2}&g_{2}\cdot h_{2}& g_{2} \cdot h_{3}&\ldots&g_{2} \cdot h_{c}\\g_{3}&g_{3}\cdot h_{2}& g_{3} \cdot h_{3}&\ldots&g_{3} \cdot h_{c}\end{array}
We may continue making the array in this way until all of our elements are used up. Then we end up with the elements of our group arranged nicely into an m \times c array:\begin{array}{ccccc}h_{1}&h_{2}&h_{3}&\ldots&h_{c}\\g_{2}&g_{2}\cdot h_{2}& g_{2} \cdot h_{3}&\ldots&g_{2} \cdot h_{c}\\g_{3}&g_{3}\cdot h_{2}& g_{3} \cdot h_{3}&\ldots&g_{3} \cdot h_{c}\\\vdots&&&&\\g_{m}&g_{m}\cdot h_{2}& g_{m} \cdot h_{3}&\ldots&g_{m} \cdot h_{c}\end{array}
Each row in this array forms what we call a
left coset of H in G. You take an element of the group and multiply each element of the subgroup on the left by that element. Notice that for this finite group we get m left cosets. If we had instead multiplied each element of the subgroup on the right by those g_{j}‘s (forming h\cdot g_{j}), we would call the rows right cosets. If the operation is commutative (a\cdot b = b\cdot a), then left and right cosets are the same thing.
We can prove that every element in a finite group shows up in this array, and that no element appears more than once in the array [5]. What we’ve actually done is partition the finite group into these rows using our subgroup H. This partitioning is called a coset decomposition.
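The array construction can be sketched directly in code. This hypothetical helper decomposes \mathbb{Z}_6 into cosets of H = \{0,2,4\}, one coset per row, exactly as in the array above:

```python
# Coset decomposition of a finite group: each not-yet-seen element g starts a
# new row [g*h_1, g*h_2, ...]; every element lands in exactly one row.
def coset_decomposition(G, H, op):
    rows, seen = [], set()
    for g in G:
        if g in seen:
            continue
        coset = [op(g, h) for h in H]
        rows.append(coset)
        seen.update(coset)
    return rows

rows = coset_decomposition(range(6), [0, 2, 4], lambda g, h: (g + h) % 6)
print(rows)  # [[0, 2, 4], [1, 3, 5]] -- two cosets of size 3, and 2 * 3 = 6
```

Note that the row count times the row length recovers the group order, which is exactly Lagrange's theorem discussed next.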
(Remark. In fact, the cosets of a subgroup of a finite group are the equivalence classes of an equivalence relation on the group. This very elegant fact will play a large role in coding theory syndromes in a future post. For a discussion of equivalence relations, please see the video below.)
Lagrange’s Theorem
For finite groups, this coset decomposition yields Lagrange’s theorem, a particularly beautiful theorem that relates the size of a subgroup, the number of cosets of that subgroup, and the size of the group. Because mathematicians hate writing more than we have to, let’s write the number of left (or right) cosets of H in G as [G:H].
Lagrange’s Theorem
Suppose |G| is finite, and H \leq G [6]. Then |G| = [G:H]\cdot |H|.
The number of cosets with respect to H times the size of H gives us the size of the group. Take a look back at our array we created. That’s a visual illustration of this theorem. The number of rows gives us the number of cosets, and the number of columns is the size of our subgroup H. All group elements appeared in the array, so we can see that the number of cosets times the size of the subgroup gives us the size of the original group G.
Conclusion
Cosets exist for infinite groups as well, though they cannot be constructed quite the same way we did above, since we would never “finish” filling up rows and columns. For those, we just create them by multiplying some element g \in G that isn’t in a subgroup H (which may not be finite anymore) on the left or right by all elements in the subgroup. We write a general left coset as gH = \{gh : h \in H\}
The product of that g with all elements of the subgroup H. If the group is infinite, we don’t necessarily get a coset decomposition like we had before, and Lagrange’s Theorem doesn’t apply anymore.
Sticking with finite groups can yield some extremely powerful uses, though. We’ll need all of this (and some linear algebra) when we discuss how to store and decode a binary code that may be transmitted over a noisy channel. The fact that the coset decomposition yields a set of equivalence classes will allow us to drastically cut the number of bits we need to store to fully represent a binary code and its parity check equations. This will allow us to use very large block lengths for code transmission and data protection, as we will see in future posts.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Footnotes
[1] Remember that operations aren’t just the typical addition or multiplication you use every day. We use addition and multiplication as general operations, and the product is just the name for one element “operation” another.
[2] There are different kinds of infinite, and different sizes of infinity. Cardinality of infinite sets gets weird, and we’ll explore this later.
[3] Remember that the operation is addition, so 2 “operation” 2 is 2+2 modulo 6.
[4] This only works for finite groups.
[5] I’m not going to prove this here. The proof is done by contradiction, and is useful and necessary, but not particularly illuminating to this discussion.
[6] Remember, that’s our notation for “H is a subgroup of G”. |
Some tricks I've seen:
Tricks with notable products
$(a + b)^2 = a^2 + 2ab + b^2$
This formula can be used to compute squares. Say that we want to compute $46^2$. We use $46^2 = (40+6)^2 = 40^2+2\cdot40\cdot6 +6^2 = 1600 + 480 + 36 = 2116$. You can also use this method for negative $b$:$ 197^2 = (200 - 3)^2 = 200^2 - 2\cdot200\cdot3 + 3^2 = 40000 - 1200 + 9 = 38809 $
The last subtraction can be kind of tricky: remember to do it right to left, and take out the common multiples of 10:$ 40000 - 1200 = 100(400-12) = 100(398-10) = 100(388) = 38800 $The hardest thing here is to keep track of the amount of zeroes, this takes some practice!
Also note that if we're computing $(a+b)^2$ where $a$ is a multiple of $10^k$ and $b$ is a single-digit number, we already know the last $k$ digits of the answer: they are the digits of $b^2$, padded with zeroes on the left as needed. We can use this even if $a$ is only a multiple of 10: the last digit of $(10a + b)^2$ (where $a$ and $b$ are single digits) is the last digit of $b^2$. So we can write that down (or make a mental note that we have the final digit) and worry about the more significant digits.
Also useful for things like $46\cdot47 = 46^2 + 46 = 2116 + 46 = 2162$. When both numbers are even or both are odd, you might want to use:
$(a+b)(a-b) = a^2 - b^2$Say, for example, we want to compute $23 \cdot 27$. We can write this as $(25 - 2)(25 + 2) = 25^2 - 2^2 = (20 + 5)^2 = 20^2 + 2\cdot20\cdot5 + 5^2 - 4 = 400 + 200 + 25 - 4 = 621$.
Divisibility checks
Already covered by Theodore Norvell. The basic idea is that if you represent numbers in a base $b$, you can easily tell if numbers are divisible by $b - 1$, $b + 1$ or prime factors of $b$, by some modular arithmetic.
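As a concrete base-10 instance of that idea (digit sums for $b-1 = 9$, alternating digit sums for $b+1 = 11$), here is a small sketch; the helper names are mine:

```python
# Base-10 divisibility checks: n is divisible by 9 iff its digit sum is,
# and by 11 iff its alternating digit sum (from the units digit) is.
def digit_sum(n):
    return sum(int(d) for d in str(n))

def alt_digit_sum(n):
    # alternate signs starting with + at the units digit, since 10 = -1 mod 11
    return sum((-1) ** i * int(d) for i, d in enumerate(str(n)[::-1]))

n = 918273
print(n % 9 == digit_sum(n) % 9)        # True
print(n % 11 == alt_digit_sum(n) % 11)  # True
```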
Vedic math
A guy in my class gave a presentation on Vedic math. I don't really remember everything, and there are probably more cool things in the book, but I remember an algorithm for multiplication that you can use to multiply numbers in your head.
This picture shows a method called lattice or gelosia multiplication and is just a way of writing our good old-fashioned multiplication algorithm (the one we use on paper) in a nice way. Please notice that the picture and the Vedic algorithm are not tied: I added the picture because I think it helps you appreciate and understand the pattern that is used in the algorithm. The gelosia notation shows this in a much nicer way than the traditional notation.
The algorithm the guy explained is essentially the same algorithm as we would use on paper. However, it structures the arithmetic in such a way that we never have to remember too many numbers at the same time.
Let's illustrate the method by multiplying $456$ with $128$, as in the picture. We work from right to left: we first compute the least significant digits and work our way up.
We start by multiplying the least significant digits:
$6 \cdot 8 = 48$: the least significant digit is $8$; remember the $4(0)$ for the next round (of course, I don't mean zero times four here but four, or forty, whatever you prefer: be consistent though, if you include the zero here to make forty, you've got to do it everywhere).$ 8 \cdot 5(0) = 40(0) $
$ 2(0) \cdot 6 = 12(0) $ $ 4(0) + 40(0) + 12(0) = 56(0) $: our next digit (to the left of the $8$) is $6$: remember the $5(00)$
$ 8 \cdot 4(00) = 32(00) $
$ 2(0) \cdot 5(0) = 10(00) $ $ 1(00) \cdot 6 = 6(00) $ $ 5(00) + 32(00) + 10(00) + 6(00) = 53(00) $: our next digit is a $3$, remember the $5(000)$
Pfff... starting with 2-digit numbers is a better idea, but I wanted to do this longer one to make the structure of the algorithm clear. You can do this much faster once you have practiced, since you don't have to write it all down.
$ 2(0) \cdot 4(00) = 8(000) $
$ 1(00) \cdot 5(0) = 5(000)$ $ 5(000) + 8(000) + 5(000) = 18(000)$: next digit is an $8$, remember the $1(0000)$
$ 1(00) \cdot 4(00) = 4(0000) $
$ 1(0000) + 4(0000) = 5(0000) $: the most significant digit is a $5$.
So we have $58368$.
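The whole procedure can be written down as a short routine. This is a sketch of the same diagonal-by-diagonal scheme (least significant diagonal first), with the carries handled exactly as in the steps above; the function name is mine:

```python
# Multiply two numbers by summing the digit products along each diagonal
# (least significant first) and carrying, as in lattice/gelosia multiplication.
def lattice_multiply(x, y):
    xs = [int(d) for d in str(x)][::-1]   # digits, least significant first
    ys = [int(d) for d in str(y)][::-1]
    result, carry = [], 0
    for k in range(len(xs) + len(ys) - 1):
        total = carry + sum(xs[i] * ys[k - i]
                            for i in range(len(xs)) if 0 <= k - i < len(ys))
        result.append(total % 10)         # the digit for this position
        carry = total // 10               # what we "remember" for the next round
    while carry:
        result.append(carry % 10)
        carry //= 10
    return int("".join(map(str, result[::-1])))

print(lattice_multiply(456, 128))  # 58368, matching the worked example
```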
Quadratic equations
There are multiple ways to solve a quadratic equation in your head. The easiest are quadratics with integer coefficients. If we have $x^2 + ax + c = 0$, try to find $r_{1, 2}$ such that $r_1 + r_2 = -a$ and $r_1r_2 = c$. It is also possible to solve for non-integer solutions this way, but it is usually too hard to actually come up with them.
Another way is just to try divisors of the constant term. By the rational root theorem (google it, I can't link anymore
sigh), all rational solutions to $x^n + \ldots + c = 0$ with integer coefficients must be integer divisors of $c$. If $c$ is a fraction $\frac{p}{q}$, the candidate solutions are of the form $\frac{a}{b}$ where $a$ divides $p$ and $b$ divides $q$.
If this all fails, we can still put the abc-formula (the quadratic formula) in a much easier form:
$ ux^2 + vx + w = 0 $
$ x^2 + \frac{v}{u}x + \frac{w}{u} = 0 $ and, writing $a = -\frac{v}{u}$ and $b = -\frac{w}{u}$, $ x^2 - ax - b = 0 $
$ x^2 = ax + b $
(This is the form that I found easiest to use!) $ (x - \frac{a}{2})^2 = (\frac{a}{2})^2 + b $ $ x = \frac{a\pm\sqrt{a^2 + 4b}}{2} = \frac{a}{2} \pm \sqrt{(\frac{a}{2})^2 + b} $
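A quick check of the rearranged formula on a made-up example, $x^2 = 5x - 6$, whose roots are $2$ and $3$:

```python
# Checking x = a/2 ± sqrt((a/2)^2 + b) for the form x^2 = a x + b,
# on the hypothetical example x^2 = 5x - 6 (so a = 5, b = -6).
import math

a, b = 5.0, -6.0
r1 = a / 2 + math.sqrt((a / 2) ** 2 + b)
r2 = a / 2 - math.sqrt((a / 2) ** 2 + b)
print(r1, r2)  # 3.0 2.0
```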
I'm sure there are also a lot of techniques for estimating products and the like, but I'm not really familiar with them.
Tricks that aren't really usable but still pretty cool
See this excerpt from Feynman's "Surely you're joking, Mr. Feynman!" about how he managed to amaze some of his colleagues, and also this video from Numberphile. |
I happen to have revised our calculus syllabus for first year biology majors about one year ago (in a French university, for that matter). I benefited a lot from my wife's experience as a math-friendly biologist.
The main point of the course is to get students able to deal with
quantitative models. For example, my wife studied the movement of cells under various circumstances.
A common model postulates that the average distance $d$ between two
positions of a cell at times $t_0$ and $t_0+T$ is given by
$$d = \alpha T^\beta$$ where $\alpha>0$ is a speed parameter and
$\beta\in[\frac12,1]$ is a parameter that measures how the movement
fits between a Brownian motion ($\beta=\frac12$)
and a purely ballistic motion ($\beta=1$).
This simple model is a great example to show how calculus can be relevant to biology.
My first point might be specific to recent French students: first-year students are often not even proficient enough with basic algebraic manipulations to do anything relevant with such a model. For example, even asking how $d$ changes when $T$ is multiplied by a constant requires knowing how to deal with exponents. In fact, we even had serious issues with the mere use of percentages.
One of the main points of our new calculus course is to be able to
estimate uncertainties: in particular, given that $T=T_0\pm \delta T$, $\alpha=\alpha_0\pm\delta\alpha$ and $\beta=\beta_0\pm\delta\beta$, we ask them to estimate $d$ up to order one (i.e. using first-order Taylor series). This already involves derivatives of multivariable functions, and is an important computation when you want to draw conclusions from experiments.
Another important point of the course is the
use of logarithms and exponentials, in particular to interpret log or log-log graphs. For example, in the above model, it takes a (very) little habit to see that taking logs is a good thing to do: $\log d = \beta\log T+\log \alpha$ so that plotting your data in log-log chart should give you a line (if the models accurately represent your experiments).
This then interacts with
statistics: one can find the linear regression in log-log charts to find estimates for $\alpha$ and $\beta$. But then one really gets an estimate of $\beta$ and... $\log\alpha$, so one should have a sense of how badly this uncertainty propagates to $\alpha$ (one-variable first-order Taylor series: easy peasy).
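That pipeline (take $(T, d)$ pairs, fit a line in log-log coordinates, read off $\beta$ as the slope and $\log\alpha$ as the intercept) can be sketched as follows; the parameter values and noise level are invented for the demo:

```python
# Simulate noisy (T, d) data from d = alpha * T**beta and recover beta and
# log(alpha) by a linear fit in log-log coordinates.
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 2.0, 0.7                    # made-up "true" parameters
T = np.linspace(1.0, 50.0, 40)
d = alpha * T**beta * np.exp(rng.normal(0.0, 0.05, T.size))  # multiplicative noise

slope, intercept = np.polyfit(np.log(T), np.log(d), 1)
beta_hat, alpha_hat = slope, np.exp(intercept)   # the fit gives log(alpha), not alpha
print(beta_hat, alpha_hat)   # close to 0.7 and 2.0
```

Note that the exponentiation at the last step is exactly where the uncertainty on $\log\alpha$ gets (nonlinearly) propagated to $\alpha$.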
The other main goal of the course is to get them able to deal with some (ordinary) differential equations. The motivating example I chose was offered to me by the chemist of our syllabus meeting.
A common model for the kinetics of a chemical reaction$$A + B \to C$$is the second-order model: one assumes that the speed of the reaction is proportional to the product of the concentrations of the species A and B. This leads to a not-so-easy differential equation of the form$$ y'(t) = (a-y(t))(b-y(t)).$$This is a
first-order ODE with separable variables. One can solve it explicitly (a luxury!) by dividing by the right-hand side, integrating in $t$, doing the change of variable $u=y(t)$ on the left-hand side, resolving the resulting rational function into partial fractions, and remembering that log is an antiderivative of the reciprocal function (and adjusting for the various constants that appeared in the process). Then, you need some algebraic manipulations to put the resulting equation into the form $y(t) = \dots$. Unfortunately, and of course, we are far from being able to properly cover all this material, but we try to get the students able to follow this road later on, with their chemistry teachers.
In fact, I would love to be able to do more quantitative analysis of differential equations, but it is difficult to teach since it quickly goes beyond a few recipes. For example, I would like them to become able to tell at a glance the
variations of solutions to$$y'(t)=a\cdot y(t)-b \sqrt{y(t)}$$(a model of population growth for colonies of small living entities organized in circles, where death occur mostly on the edge - note how basic geometry makes an appearance here to explain the model) in terms of the initial value. Or to be able to realize that solutions to$$y'(t)=\sqrt{y(t)}$$must be sub-exponential (and what that even means...). For this kind of goals, one must first aim to basic proficiency in calculus.
To sum up,
dealing with any quantitative model needs a fair bit of calculus, in order to have a sense of what the model says, to use it with actual data, to analyze experimental data, to interpret it, etc.
To finish with a controversial point, it seems to me that, at least in my environment, biologists tend to underestimate the usefulness of calculus (and statistics, and more generally mathematics) and that improving the basic understanding of mathematics among biologists-to-be can only be beneficial. |
Our teacher taught us Gauss's law of magnetism, $\Phi_B=\oint_S \vec{B} \cdot d\vec{s}=0$ (valid because magnetic field lines form closed loops, right?), which denies the existence of magnetic monopoles.
However, it does not rule out the existence of a magnet with no poles, and I understood how. Then he told us that one such example is a
toroid. Question:
But, I am not able to understand
why does a toroid have no poles? Shouldn't it have poles similar to a circular loop? Why does it not? (Please explain this one in detail.)
Also, while thinking about the above,
what about a long current carrying conductor? Does it have a pole or not?
(Sorry for the rudimentary question, but high-school physics is always “take this and we get that,” and never explains why. It bothers me.)
Thank you! |
Quasi-Newton methods, or variable metric methods, can be used when the Hessian matrix is difficult or time-consuming to evaluate. Instead of obtaining an estimate of the Hessian matrix at a single point, these methods gradually build up an approximate Hessian matrix by using gradient information from some or all of the previous iterates \(x_k\) visited by the algorithm. Given the current iterate \(x_k\) and the approximate Hessian matrix \(B_k\) at \(x_k\), the linear system
\[B_kd_k = -\nabla f(x_k)\]
is solved to generate a direction \(d_k\). The next iterate is then found by performing a line search along \(d_k\) and setting \(x_{k+1} = x_k + \alpha_k d_k\). The question is then: How can we use the function and gradient information from points \(x_k\) and \(x_{k+1}\) to improve the quality of the approximate Hessian matrix \(B_k\)? In other words, how do we obtain the new approximate Hessian matrix \(B_{k+1}\) from the previous approximation \(B_k\)?
The key to this question depends on what is sometimes called the
fundamental theorem of integral calculus. If we define \[s_k = x_{k+1} - x_k, \quad\quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k),\] then this theorem implies that \[ \left\{ \int_0^1 \nabla^2 f(x_k + ts_k) dt \right\} s_k = y_k.\]
The matrix in braces can be interpreted as the average of the Hessian matrix on the line segment \([x_k, x_k + s_k]\). This result states that when this matrix is multiplied by the vector \(s_k\), the resulting vector is \(y_k\). In view of these observations, we can make \(B_{k+1}\) mimic the behavior of \(\nabla^2f\) by enforcing the quasi-Newton condition
\[B_{k+1}s_k = y_k.\]
This condition can be satisfied by making a simple low-rank update to \(B_k\). The most commonly used family of updates is the Broyden class of rank-two updates, which have the form
\[B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k} + \phi_k \, (s_k^T B_k s_k) \, v_k v_k^T,\]
where \(\phi_k \in [0,1]\) and
\[v_k = \left[ \frac{y_k}{y_k^T s_k} - \frac{B_k s_k}{s_k^T B_k s_k} \right].\]
The choice \(\phi_k = 0\) gives the Broyden-Fletcher-Goldfarb-Shanno update, which practical experience, and some theoretical analysis, has shown to be the method of choice in most circumstances. The Davidon-Fletcher-Powell update, which was proposed earlier, is obtained by setting \(\phi_k = 1\). These two update formulae are known universally by their initials BFGS and DFP, respectively.
Updates in the Broyden class preserve positive definiteness as long as \(y_k^T s_k > 0\). Although this condition holds automatically if \(f\) is strictly convex, it can be enforced for all functions by requiring that \(\alpha_k\) satisfy the curvature condition. Some codes avoid enforcing the curvature condition by skipping the update if \(y_k^T s_k \leq 0\).
GAUSS, IMSL, MATLAB, NAG (Fortran), NAG (C), OPTIMA, and PROC NLP implement quasi-Newton methods. These codes differ in the choice of update (usually BFGS), line-search procedure, and the way in which \(B_k\) is stored and updated. We can update \(B_k\) either by updating the Cholesky decomposition of \(B_k\) or by updating the inverse of \(B_k\). In either case, the cost of updating the search direction by solving the system
\[B_k d_k = -\nabla f(x_k)\] is on the order of \(n^2\) operations. Updating the Cholesky factorization is widely regarded as more reliable, while updating the inverse of \(B_k\) is less complicated. Indeed, if we define \[H_k = B_k^{-1},\] then a BFGS update of \(B_k\) is equivalent to the following update of \(H_k\): \[H_{k+1} = \left( I - \frac{s_k y_k^T}{y_k^T s_k} \right) H_k \left( I - \frac{y_k s_k^T}{y_k^T s_k} \right) + \frac{s_k s_k^T}{y_k^T s_k}.\]
When we store \(H_k\) explicitly, the direction \(d_k\) is obtained from the matrix-vector product
\[d_k = -H_k \nabla f(x_k).\]
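The inverse-Hessian BFGS update above is straightforward to implement. The sketch below assumes \(y_k^T s_k > 0\) (so positive definiteness is preserved) and checks that the updated \(H\) satisfies the secant condition \(H_{k+1} y_k = s_k\):

```python
# Inverse-Hessian BFGS update: H+ = (I - s y^T/(y^T s)) H (I - y s^T/(y^T s))
#                                   + s s^T/(y^T s),  assuming y^T s > 0.
import numpy as np

def bfgs_inverse_update(H, s, y):
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Quick check on data from a quadratic: with s given and y = A s, the
# updated H must map y back to s (the secant condition).
A = np.array([[2.0, 0.3], [0.3, 1.0]])
H = np.eye(2)
s = np.array([1.0, -0.5])
y = A @ s
H = bfgs_inverse_update(H, s, y)
print(bool(np.allclose(H @ y, s)))  # True: secant condition holds
```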
The availability of quasi-Newton methods renders steepest-descent methods obsolete. Both types of algorithms require only first derivatives, and both require a line search. The quasi-Newton algorithms require slightly more operations to calculate an iterate and somewhat more storage, but in almost all cases, these additional costs are outweighed by the advantage of superior convergence.
At first glance, quasi-Newton methods may seem unsuitable for large problems because the approximate Hessian matrices and inverse Hessian matrices are generally dense. This is not the case, since the explicit storage of \(B_k\) or \(H_k\) as \(n \times n\) matrices is not necessary. For example, the above expression for the BFGS update of \(H_k\) makes it clear that we can compute
\[H_k \nabla f(x_k)\] if we know the initial matrix \(H_0\), the subsequent update vectors \(s_i, y_i\), and their inner products \(y_i^T s_i\) for \(0 \leq i < k\). If \(H_0\) is chosen to be a diagonal matrix, the necessary information can be stored in about \(2nk\) words of memory. Limited-memory quasi-Newton methods make use of these ideas to cut down on storage for large problems. They store only the \(s_i\) and \(y_i\) vectors from the previous few iterates (typically, five) and compute the vector \(d_k\) by a recursion that requires roughly \(16nm\) operations. The L-BFGS code is an implementation of the limited-memory BFGS algorithm. The codes M1QN2 and M1QN3 are the same as L-BFGS, except that they allow the user to specify a preconditioning technique. |
Research | Open Access

On a nonlocal 1-D initial value problem for a singular fractional-order parabolic equation with Bessel operator

Advances in Difference Equations, volume 2019, Article number: 254 (2019)
Abstract
In this paper, we obtain some results on the existence and uniqueness of a generalized solution for a singular fractional initial boundary value problem in the Caputo sense subject to Neumann and weighted integral conditions. We show that the a priori estimate (energy inequality) method can be successfully applied to obtain a priori estimates for the solution of initial fractional boundary value problems, as in the classical case. The obtained results contribute to the development of the functional analysis method and enrich the as yet limited literature on nonlocal fractional mixed problems in the Caputo sense.
Introduction
The one-dimensional fractional-order diffusion (heat) equation has become a model problem for linear and nonlinear fractional and nonfractional partial differential equations of parabolic type [5, 8, 10, 18, 19]. Although mathematical models in two and three dimensions are of great significance for applications, the majority of recent papers are devoted to fractional-order diffusion equations in the one-dimensional case; papers dealing with multidimensional fractional diffusion equations are still not numerous. For fractional parabolic equations, the fractional derivative appearing in the equation is interpreted physically as the degree of memory in the diffusing material [9]. Many authors have studied analytically and numerically various models of time-fractional differential equations; see, for example, [2, 3, 4, 7, 13, 20].
Many physical phenomena can be modeled in terms of local and nonlocal initial boundary value problems where the classical time and space derivatives are present, but, unfortunately, many others cannot be modeled by such problems. Different methods have been used to solve fractional diffusion equations. We can cite, for example, the works [11, 17].
In this paper, we apply the traditional functional analysis method, the so-called energy inequality method based mainly on some a priori bounds and on the density of the range of the operator generated by the considered problem for a fractional singular equation with Bessel operator and Caputo fractional derivative of order \(0<\alpha <1\) (see [6]).
In the literature, there are many papers using the functional analysis method to prove the well-posedness of mixed problems (with local or nonlocal boundary conditions) in the classical sense, such as [14, 15, 21], but in the fractional case only a few papers use this method. Our work can therefore be considered a contribution to the development of the functional analysis method for proving the well-posedness of mixed problems of fractional order. We would also like to mention that the positivity of the fractional derivative operator helps us obtain a priori bounds for solutions of certain classes of fractional initial and boundary value problems.
This paper is organized as follows. In Sect. 2, we pose the problem and recall the types of fractional derivatives used in the paper. In Sect. 3, we introduce some function spaces, give some useful tools, and write the given problem in operator form. In Sect. 4, by choosing an appropriate functional differential operator multiplier, we establish an a priori estimate, from which we deduce the uniqueness of the solution and its continuous dependence on the data of the posed problem. In Sect. 5, we prove the main result concerning the solvability of the given problem. With some modifications of the classical energy inequality method, we show that the range of the operator generated by the studied problem is dense in the weighted Hilbert space \(H=L_{x}^{2}(0,1) \times L_{x}^{2}(Q)\), where \(Q=(0,1)\times (0,T)\), \(T<\infty \).
Problem setting
We consider the governing equation of Caputo time fractional order subject to initial and boundary conditions of integral and Neumann types in the domain \(Q=(0,1)\times (0,T)\), \(T<\infty \). By \(\partial _{t}^{\alpha }\mathcal{\theta }\) we denote the Caputo time fractional derivative. The initial boundary value problem is nonlocal in the time derivative and in one of the boundary conditions:
The functions \(Y(x,t)\) and \(f(x,t)\) are given functions, which will be specified later.
The time fractional derivative of order \(0<\alpha <1\) is taken in the Caputo sense. For a differentiable function it is defined by
\[\partial _{t}^{\alpha }\theta (x,t)=\frac{1}{\Gamma (1-\alpha )}\int _{0}^{t}\frac{\partial \theta (x,s)}{\partial s}\,\frac{ds}{(t-s)^{\alpha }},\]
where Γ is the gamma function.
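To make the definition concrete, here is a small numerical sketch (our own illustration, not part of the paper): the standard L1 finite-difference approximation of the Caputo derivative, which can be checked against the known value \(\partial_t^{\alpha} t^2 = \frac{2}{\Gamma(3-\alpha)}\,t^{2-\alpha}\).

```python
import math
import numpy as np

def caputo_l1(u, tau, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0,1)
    at the last grid point, given samples u[0..n] with uniform step tau."""
    n = len(u) - 1
    k = np.arange(n)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)   # L1 weights
    du = np.diff(u)[::-1]        # u[n-k] - u[n-k-1] for k = 0..n-1
    return (b @ du) / (tau ** alpha * math.gamma(2 - alpha))
```

For \(u(t)=t^2\) on \([0,1]\) with \(\alpha = 1/2\), the approximation closely agrees with the exact value \(2/\Gamma(5/2)\).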
We also need to use the Riemann–Liouville fractional integral of order \(0<\alpha <1\), defined by
\[\mathcal{I}_{t}^{\alpha }\theta (x,t)=\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{\alpha -1}\theta (x,s)\,ds.\]
Preliminaries
We need the following function spaces and tools. We denote by \(C^{2,1}(\overline{Q})\) the set of functions that, together with their partial derivatives of orders 2 and 1 in x and t, respectively, are continuous on Q̅; by \(C^{m}(0,T)\) the space of m-times differentiable functions; and by \(C_{0}^{\infty }(0,T)\) the space of infinitely differentiable functions with support in \((0,T)\). We use the usual space \(L^{2}(0,T)\) of measurable square-integrable functions on \((0,T)\).

Lemma 3.1
([1])
For any absolutely continuous function \(\beta (s)\) on the interval \([0,T]\), we have the inequality

Lemma 3.2
([1])
Let a nonnegative absolutely continuous function \(\mathcal{R}(s)\) satisfy the inequality for almost all \(s\in [ 0,T]\), where \(c_{1}\) is a positive constant, and \(c_{2}(s)\) is an integrable nonnegative function on \([0,T]\). Then where are the Mittag–Leffler functions.
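The Mittag–Leffler functions appearing in the lemma are defined by the series \(E_{\alpha }(z)=\sum_{k\geq 0} z^{k}/\Gamma (\alpha k+1)\). A minimal sketch (ours, for illustration only; adequate for moderate \(|z|\)) is:

```python
import math

def mittag_leffler(alpha, z, terms=120):
    """Truncated series for the one-parameter Mittag-Leffler function
    E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1)."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))
```

For \(\alpha =1\) it reduces to the exponential, \(E_{1}(z)=e^{z}\), which gives a quick sanity check.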
Young’s inequality with ε: for any \(\varepsilon >0\), we have the inequality
\[aY\leq \frac{\varepsilon ^{p}}{p}a^{p}+\frac{1}{q\varepsilon ^{q}}Y^{q},\qquad \frac{1}{p}+\frac{1}{q}=1,\ p,q>1,\]
which is the generalization of the Cauchy inequality with ε:
\[aY\leq \frac{\varepsilon }{2}a^{2}+\frac{1}{2\varepsilon }Y^{2},\]
where a and Y are nonnegative numbers.
Poincaré-type inequalities [14].
To establish the existence and uniqueness of the solution of problem (2.1), we write it in an equivalent operator form.
The solution of problem (2.1) can be regarded as the solution of the operator equation
where \(\mathcal{M}=(\mathcal{L},l_{1})\), and the operator \(\mathcal{M}\) acts from \(\mathcal{S}\) to H with domain of definition
where \(\mathcal{S}\) is a Banach space of functions θ endowed with the finite norm
and H is the weighted Hilbert space \(L_{x}^{2}(Q)\times L_{x}^{2}(0,1)\) consisting of vector-valued functions \(\mathcal{F}=(f,\omega )\) with finite norm

A priori estimate for the solution and its consequences
We establish an a priori bound for the solution of problem (2.1), from which we deduce its uniqueness.
Theorem 4.1

Suppose that the function Y satisfies where \(C_{0}\) and \(C_{1}\) are positive constants, and \(f\in L_{x}^{2}(Q)\). Then there exists a positive constant μ such that the following a priori estimate holds for all \(\mathcal{\theta }\in D(\mathcal{M})\), where \(\mu =\mu (\delta,\sigma,d)\) is given by

Proof
Consider the identity
The terms on the right-hand side of (4.11) can be estimated in the following way:
By choosing \(\varepsilon _{1}=2C_{0}\), \(\varepsilon _{2}=\frac{1}{2}\), and \(\varepsilon _{3}=3C_{0}\), inequality (4.20) becomes
where
Replacing t by τ and integrating both sides of (4.21) with respect to τ from 0 to t, we obtain
where
we obtain
where
It is easy to see that
where
Now since the right-hand side of (4.31) does not depend on t, the a priori estimate (4.2) follows by taking the upper bound of both sides with respect to t over \([0,T]\). Note that the uniqueness and continuous dependence of the solution on the data of problem (2.1) follow from the a priori bound (4.2). □

Existence of the solution
The a priori estimate (4.2) shows that the unbounded operator \(\mathcal{M}\) has an inverse \(\mathcal{M}^{-1}:\mathcal{R}(\mathcal{M})\rightarrow \mathcal{S}\). Since \(\mathcal{R}(\mathcal{M})\) is a subset of H, we can construct its closure \(\overline{\mathcal{M}}\) so that estimate (4.2) holds for this extension and \(\mathcal{R}(\overline{\mathcal{M}})\) coincides with the whole space H. Hence we have the following:

Corollary 5.1

The operator \(\mathcal{M}:\mathcal{S}\rightarrow H\) admits a closure (the proof is similar to that in [14]). Estimate (4.2) can then be extended to for all \(\mathcal{\theta }\in D(\overline{\mathcal{M}})\).

Corollary 5.2
\(\mathcal{R}(\overline{\mathcal{M}}\mathcal{)}\)
is a closed subset in H, \(\mathcal{R}(\mathcal{M)=R}(\overline{ \mathcal{M}}\mathcal{)}\), and \(\overline{\mathcal{M}}^{-1}=\overline{ \mathcal{M}^{-1}}\).
We are now ready to give the result on the existence of the solution of problem (2.1).
Theorem 5.3

Suppose that the conditions of Theorem 4.1 are satisfied. Then for all \(\mathcal{F}=(f,\omega )\in H\), there exists a unique strong solution \(\theta =\overline{\mathcal{M}}^{-1}\mathcal{F}=\overline{\mathcal{M}^{-1}}\mathcal{F}\) of problem (2.1).

Proof

Estimate (5.1) asserts that if a strong solution of (2.1) exists, then it is unique and depends continuously on the data. By Corollary 5.2, to prove that problem (2.1) admits a strong solution for any \(\mathcal{F}=(f,\omega )\in H\), it suffices to show that the range \(\mathcal{R}(\mathcal{M})\) of the operator \(\mathcal{M}\) is dense in H. For this, we use a density argument and first consider the following particular case. □

Theorem 5.4

Suppose that the conditions of Theorem 4.1 are satisfied and that for some function \(\psi \in L^{2}(Q)\) and all functions \(\theta \in \mathcal{D(M)}\) such that \(l_{1}\theta =\theta (x,0)=0\) we have Then ψ vanishes a.e. in Q.

Proof
Identity (5.2) is equivalent to
Assume that a function \(\gamma (x,t)\) satisfies the boundary and initial conditions in (2.1) and that γ, \(\gamma _{x}\), and \(\frac{\partial }{\partial x} ( x\int _{0}^{t}\gamma (x,s)\,ds ) \in L^{2}(Q_{t})\). We then set
Equation (5.3) then becomes
We now introduce the function
Equation (5.5) then reduces to
Discarding the last three terms on the left-hand side of (5.15), we obtain
where
and
Then
Inequality (5.19) implies that
Then from (5.20) it follows that the function \(\psi =\int _{0}^{t} \gamma (x,s)\,ds-\mathcal{I}_{x}^{2} ( \xi \int _{0}^{t}\gamma ( \xi,s)\,ds ) \) is zero a.e. in Q.
To complete the proof of Theorem 5.3, assume that for \((\varPsi,\omega _{1}) \in R(\mathcal{M})^{\bot }\), we have
We must show that \(\varPsi =0\) and \(\omega _{1}=0\). Putting \(\theta \in D(\mathcal{M})\) satisfying the condition \(l_{1}\theta =\theta (x,0)=0\) into (5.20), we get
We present the following example to illustrate our main results.
Example
In the considered problem (2.1), we set
and
where

The function Y satisfies assumptions (4.1) with \(C_{0}=1\) and \(C_{1}=n(n-1)(T^{\nu }+1)\). The inclusion \(f\in L_{x}^{2}(Q)\) holds, and we can easily verify that the function
satisfies the fractional differential equation in (2.1) and the initial and boundary conditions with the initial condition \(\omega (x)=\frac{6x ^{2}-12x+5}{5}\), which satisfies the compatibility conditions
Moreover, \(\theta \in L_{x}^{2}(Q)\), and
Conclusion
The existence and uniqueness of a generalized solution for a singular fractional initial boundary value problem in the Caputo sense subject to Neumann and weighted integral conditions are established. The method of energy inequalities is successfully applied to obtain a priori estimates for the solution of the initial fractional boundary value problem, as in the classical case. The obtained results contribute to the development of the functional analysis method and enrich the as yet limited literature on nonlocal fractional mixed problems in the Caputo sense.
References

1. Alikhanov, A.A.: A priori estimates for solutions of boundary value problems for fractional-order equations. Differ. Equ. 46(5), 660–666 (2010)
2. Béla, J.S., Izsák, F.: A finite difference method for fractional diffusion equations with Neumann boundary conditions. Open Math. 13, 581–600 (2015)
3. Beshtokov, M.K.H.: To boundary-value problems for degenerating pseudoparabolic equations with Gerasimov–Caputo fractional derivative. Russ. Math. 62(10), 1–14 (2018). https://doi.org/10.3103/S1066369X18100018
4. Beshtokov, M.K.H.: Local and nonlocal boundary value problems for degenerating and nondegenerating pseudoparabolic equations with a Riemann–Liouville fractional derivative. Differ. Equ. 54(6), 758–774 (2018). https://doi.org/10.1134/S0012266118060058
5. Cannon, J.R.: The One-Dimensional Heat Equation. Cambridge University Press, Cambridge (1984)
6. Caputo, M.: Elasticità e Dissipazione. Zanichelli, Bologna (1969)
7. El-Sayed, A.M.A., Gaber, M.: The Adomian decomposition method for solving partial differential equations of fractional order in finite domains. Phys. Lett. A 359, 175–182 (2006)
8. Friedman, A.: Partial Differential Equations of Parabolic Type. Prentice-Hall, Englewood Cliffs (1964)
9. Gorenflo, R., Mainardi, F., Moretti, D., Paradisi, P.: Time fractional diffusion: a discrete random walk approach. Nonlinear Dyn. 29, 129–143 (2002)
10. Huy Tuan, N., Tran Bao, N., Tatar, S.: Recovery of the solute concentration and dispersion flux in an inhomogeneous time fractional diffusion equation. J. Comput. Appl. Math. 342, 96–118 (2018)
11. Jafari, H., Daftardar-Gejji, V.: Solving linear and non-linear fractional diffusion and wave equations by Adomian decomposition. Appl. Math. Comput. 180, 488–497 (2006)
12. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006)
13. Liu, F., Anh, V., Turner, I., Zhuang, P.: Time fractional advection dispersion equation. J. Appl. Math. Comput. 13, 233–245 (2003)
14. Mesloub, S.: A nonlinear nonlocal mixed problem for a second order parabolic equation. J. Math. Anal. Appl. 316, 189–209 (2006)
15. Mesloub, S., Bouziani, A.: On a class of singular hyperbolic equations with a weighted integral condition. Int. J. Math. Math. Sci. 22(3), 511–519 (1999)
16. Podlubny, I.: Fractional Differential Equations. Academic Press, San Diego (1999)
17. Schneider, W.R., Wyss, W.: Fractional diffusion and wave equations. J. Math. Phys. 30, 134–144 (1989)
18. Wei, T., Li, Y.S.: Identifying a diffusion coefficient in a time-fractional diffusion equation. Math. Comput. Simul. 151, 77–95 (2018)
19. Widder, D.V.: The Heat Equation. Academic Press, New York (1975)
20. Xianjuan, L., Chuanju, X.: A space-time spectral method for the time fractional diffusion equation. SIAM J. Numer. Anal. 47(3), 2108–2131 (2009)
21. Yurchuk, N.I.: Mixed problem with an integral condition for certain parabolic equations. Differ. Uravn. 22(12), 2117–2126 (1986)

Acknowledgements
The authors wish to thank anonymous referees for their comments and valuable suggestions.
Funding
The authors would like to extend their sincere appreciation to the Deanship of Scientific Research at King Saud University for funding this work through Research Group No. RG-1435-043.
Ethics declarations

Competing interests
The authors declare that they have no competing interests.
Additional information

Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article

MSC: 35D35, 35L20

Keywords: Solvability of the problem; Weighted integral conditions; Fractional differential equation; Initial boundary value problem
Revision as of 08:45, 23 May 2019

Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it survives for exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).

April 4, Philip Matchett Wood, UW-Madison
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries as in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M

Title: Stabilization of Diffusion Limited Aggregation in a Wedge.
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth, and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title:
Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
April 25, Kavita Ramanan, Brown
Title:
Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown
Title:
Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.
Tuesday , May 7, Van Vleck 901, 2:25pm, Duncan Dauvergne (Toronto)
Title:
The directed landscape
Abstract: I will describe the construction of the full scaling limit of (Brownian) last passage percolation: the directed landscape. The directed landscape can be thought of as a random scale-invariant `directed' metric on the plane, and last passage paths converge to directed geodesics in this metric. The directed landscape is expected to be a universal scaling limit for general last passage and random growth models (i.e. TASEP, the KPZ equation, the longest increasing subsequence in a random permutation). Joint work with Janosch Ortmann and Balint Virag.
The Problem
Say you want to evaluate the expectation of a function over a random variable
\(E[f(x)] = \int p(x)f(x)dx\),
or perhaps find the max of a probability distribution, typically the posterior,
\(\arg \max p(x|y)\),
where calculating the derivative is intractable since it depends on some feature that comes from an algorithm or an unknown function.
These situations typically make us resort to approximate methods. One way of solving the expectation is to draw \(N\) random samples from \(p(x)\) and when \(N\) is sufficiently large we can approximate the expectation or the max by
\(E[f(x)] \approx \frac{1}{N} \sum\limits_{i=1}^{N}f(x_i)\).
We can apply the same strategy to finding \(\arg \max p(x|y)\) by sampling from \(p(x|y)\) and taking the max value in the set of samples. When we have acquired a large enough amount of samples, we can be fairly sure of having found the max.
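As a tiny illustration of the sample average (our own example, with an assumed \(p(x)=\mathcal{N}(0,1)\) and \(f(x)=x^2\), so the exact expectation is 1):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal(100_000)   # N draws from p(x) = N(0, 1)
estimate = np.mean(samples ** 2)         # Monte Carlo estimate of E[x^2] = 1
```

With \(N = 10^5\) the estimate is within a couple of standard errors of 1, and the error shrinks like \(1/\sqrt{N}\).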
This way of approximating invariably leads to the question of how to sample from a distribution \(p(x)\) (or \(p(x|y)\)). But how can we sample from a pdf that is too complicated to draw from directly?
A common way of sampling from a troublesome distribution is to use some kind of Markov chain Monte Carlo (MCMC) method. The core idea of MCMC is to generate a Markov chain that converges to the desired distribution after a sufficient number of samples.
Markov Chains & Detailed Balance
A Markov chain is a sequence of random variables such that the current state depends only on the previous state,
\(p(X_n|X_{n-1},X_{n-2},X_{n-3}...,X_{1}) = p(X_n|X_{n-1}) \).
To simulate a Markov chain we must formulate a transition kernel, \(T(x_i,x_j)\). The transition kernel is the probability of moving from a state \(x_i\) to a state \(x_j\); it can be either discrete or continuous. In undergraduate courses one often finds the discrete case, formulated as a transition matrix.
Convergence for a Markov chain means that it has a stationary distribution, \(\pi\). A stationary distribution implies that if we run the Markov chain for long enough, the samples we get from each run will always form the same distribution.

So how do we know when the kernel has a stationary distribution? Detailed balance is a sufficient but not necessary condition. Detailed balance essentially says that the probability flow from state \(x_i\) to \(x_j\) equals the flow from \(x_j\) back to \(x_i\). Observe though that this does not mean that all states are equally probable. We state detailed balance more formally as follows:
\(\pi(x_i) T(x_i,x_j) = \pi(x_j)T(x_j,x_i), \; \forall x_i,x_j\).
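A quick numerical illustration (our own toy example): a random walk on a weighted graph is a classic reversible chain, with \(\pi_i \propto \sum_j w_{ij}\), and its kernel satisfies detailed balance exactly even though the states are not equally probable.

```python
import numpy as np

# Symmetric edge weights of a small graph (toy example)
w = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])

T = w / w.sum(axis=1, keepdims=True)   # transition kernel T(x_i, x_j)
pi = w.sum(axis=1) / w.sum()           # stationary distribution

flow = pi[:, None] * T                 # flow[i, j] = pi_i * T(x_i, x_j)
# detailed balance: flow is symmetric, pi_i T(x_i,x_j) == pi_j T(x_j,x_i)
```

Here \(\pi = (1/4,\, 5/12,\, 1/3)\) is far from uniform, yet every pairwise flow balances.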
The MCMC Idea
The key idea for sampling from a distribution, \(p(x)\), using MCMC should be clear from the above. We put \(p(x)\) as our stationary distribution, find a transition kernel, \(T\), that fulfills the detailed balance condition, and generate samples from the Markov chain. By detailed balance the Markov chain will converge to our desired distribution, and we can use the samples for calculating the expectation or finding the max, or for whatever other reason we wanted samples from the distribution.
Finding Detailed Balance
How do we find a transition kernel that fulfills the detailed balance condition? Say we pick any distribution (transition kernel), \(T(x_i|x_j)\), that we can easily sample from. In general it will not be balanced; say moving from some state \(x_i\) to state \(x_j\) is a bit more probable, that is,
\(\pi(x_i) T(x_i,x_j) > \pi(x_j)T(x_j,x_i)\).
We can compensate for this by multiplying in another transition term, \(A(x_i,x_j)\) such that,
\(\pi(x_i) T(x_i,x_j) A(x_i,x_j) = \pi(x_j)T(x_j,x_i)\).
This means that our balancing term will have the form,
\(A(x_i,x_j) = \frac{\pi(x_j)T(x_j,x_i)}{\pi(x_i) T(x_i,x_j)}\).
Suppose we are in state \(x_i\) and have generated a sample \(x_j\) from \(T\). \(A(x_i,x_j)\) then becomes the probability of transitioning to state \(x_j\) rather than staying in state \(x_i\). All we have to do to make the decision is to generate a uniform sample, \(u\), on \([0,1]\), and if \(u < A(x_i,x_j)\) we move to state \(x_j\). Sometimes we will have \(A(x_i,x_j) > 1\); this means that moving from \(x_i\) to \(x_j\) is likely while the reverse is unlikely, and the proposal is accepted automatically every time (the acceptance probability is effectively capped at 1).
The Full Metropolis-Hastings Algorithm
The full MH algorithm thus takes the following form:
(0) For \(n = 1,2,\ldots, N\), set current state to \(x_i\) and do
(1) Sample from the proposal distribution \(q(x_j|x_i)\).
(2) Evaluate \(\alpha = \frac{q(x_i|x_j)p(x_j)}{q(x_j|x_i)p(x_i)}\)
(3) Accept the proposal sample \(x_j\) with probability \(\min[1,\alpha]\); otherwise keep the current sample \(x_i\) as the new sample.
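The steps above can be sketched in Python. This is a minimal illustration, not a reference implementation: the target here is a standard normal, \(p(x) \propto e^{-x^2/2}\), and the proposal is a symmetric Gaussian random walk, so the \(q\)-ratio in step (2) cancels and \(\alpha = p(x_j)/p(x_i)\):

```python
import math
import random

def metropolis_hastings(log_p, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: log_p is the log target density (up to a constant)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, step)           # step (1): propose
        alpha = math.exp(log_p(x_new) - log_p(x))  # step (2): acceptance ratio
        if rng.random() < alpha:                   # step (3): accept or reject
            x = x_new
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2/2 (up to the normalizing constant)
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # should be close to 0 and 1
```

Note that rejected proposals still append the current state: repeated states are part of the sample and are what gives the chain the correct stationary distribution.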
ISSN: 1078-0947, eISSN: 1553-5231
Discrete & Continuous Dynamical Systems - A
November 2010 , Volume 27 , Issue 4
A special issue Dedicated to Roger Temam on the Occasion of his 70th Birthday Part I
Abstract:
Born in Tunis on May 19, 1940, Roger Temam moved to Paris in 1957 to study at the University of Paris, which was at that time the only university in Paris, known as La Sorbonne. He wrote his doctoral thesis under the supervision of Professor Jacques-Louis Lions and became a professor at the University of Paris-Sud XI at Orsay in 1968. There, he founded, together with Professors Jacques Deny and Charles Goulaouic, the Laboratory of Numerical and Functional Analysis which he directed from 1972 to 1988. He was also a Maître de Conférences at the famous Ecole Polytechnique from 1968 to 1986.
In 1983, Roger Temam co-founded the SMAI, the French Applied and Industrial Mathematical Society, analogous to SIAM, and served as its first president. He initiated the ICIAM conference series and was head of the Steering Committee of its first meeting held in Paris in 1987. He was also the Editor-in-Chief of the mathematical journal M2AN from 1986 to 1997, and he is or has been on the editorial board of such journals as Asymptotic Analysis, Discrete and Continuous Dynamical Systems, Journal of Differential Equations, Physica D, Communications in PDEs and SIAM Journal of Numerical Analysis.
Abstract:
We analyze the vortex core structure inside spherical ferromagnetic particles through both a bifurcation analysis and numerical simulations. Based on properties of the solution and simplifying assumptions, specific numerical algorithms are developed. Numerical results are provided showing the applicability of the methods.
Abstract:
We study here a number of mathematical problems related to our recently introduced neoclassical theory for electromagnetic phenomena in which charges are represented by complex valued wave functions as in the Schrödinger wave mechanics. In the non-relativistic case the dynamics of elementary charges is governed by a system of nonlinear Schrödinger equations coupled with the electromagnetic fields, and we prove that if the wave functions of charges are well separated and localized their centers converge to trajectories of the classical point charges governed by Newton's equations with the Lorentz forces. We also found exact solutions in the form of localized accelerating solitons. Our studies of a class of time multiharmonic solutions of the same field equations show that they satisfy Planck-Einstein relation and that the energy levels of the nonlinear eigenvalue problem for the hydrogen atom converge to the well-known energy levels of the linear Schrödinger operator when the free charge size is much larger than the Bohr radius.
Abstract:
A rigorous study of universal laws of 2-D turbulence is presented for time independent forcing at all length scales. Conditions for energy and enstrophy cascades are derived, both for a general force, and for one with a large gap in its spectrum. It is shown in the gap case that either a direct cascade of enstrophy or an inverse cascade of energy must hold, provided the gap modes of the velocity has a nonzero ensemble average. Partial rigorous support for 2-D analogs of Kolmogorov's 3-D dissipation law, as well as the power law for the distribution of energy are given.
Abstract:
Under assumptions on smoothness of the initial velocity and the external body force, we prove that there exist $T_0 > 0$, $V^* > 0$ and a unique family of strong solutions $u_\nu$ of the Euler or Navier-Stokes initial-boundary value problem on the time interval $(0, T_0)$, depending continuously on the viscosity coefficient $\nu$ for $0 \leq \nu < V^*$. The solutions of the Navier-Stokes problem satisfy a Navier-type boundary condition. We give information on the rate of convergence of the solutions of the Navier-Stokes problem to the solution of the Euler problem for $\nu \to 0+$.
Abstract:
We consider a class of non-linear partial differential systems like
$-\operatorname{div}(a(x)\nabla u_{\nu}) + \lambda u_{\nu} = H_{\nu}(x, Du)$
with applications for the solution of stochastic differential games with $N$ players, where $N$ is an arbitrary but positive number. The Hamiltonian $H$ of the non-linear system satisfies a quadratic growth condition in $D u$ and contains interactions between the players in the form of non-compact coupling terms $\nabla u_{i} \cdot\nabla u_j$. A $L^{\infty}\cap H^1$-estimate and regularity results are shown, mainly in two-dimensional space. The coupling arises from cyclic non-market interaction of the control variables.
Abstract:
The evolution equation
$u_t - u_{xxt} + u_x - uu_t + u_x\int_x^{+\infty}u_t\,dx' = 0, \qquad (1)$
was developed by Hirota and Satsuma as an approximate model for unidirectional propagation of long-crested water waves. It possesses solitary-wave solutions just as do the related Korteweg-de Vries and Benjamin-Bona-Mahony equations. Using the recently developed theory for the initial-value problem for (1) and an analysis of an associated Liapunov functional, nonlinear stability of these solitary waves is established.
Abstract:
We revisit the Near Equidiffusional Flames (NEF) model introduced by Matkowsky and Sivashinsky in 1979 and consider a simplified, quasi-steady version of it. This simplification allows, near the planar front, an explicit derivation of the front equation. The latter is a fully nonlinear parabolic pseudodifferential equation of fourth order. First, we study the (orbital) stability of the null solution. Second, introducing a parameter ε, we rescale both the dependent and independent variables and prove rigorously the convergence to the solution of the Kuramoto-Sivashinsky equation as ε $\to 0$.
Abstract:
In this article, we discuss the numerical solution of a constrained minimization problem arising from the stress analysis of elasto-plastic bodies. This minimization problem has the flavor of a generalized non-smooth eigenvalue problem, with the smallest eigenvalue corresponding to the load capacity ratio of the elastic body under consideration. An augmented Lagrangian method, together with finite element approximations, is proposed for the computation of the optimum of the non-smooth objective function, and the corresponding minimizer. The augmented Lagrangian approach allows the decoupling of some of the nonlinearities and of the differential operators. Similarly an appropriate Lagrangian functional, and associated Uzawa algorithm with projection, are introduced to treat non-smooth equality constraints. Numerical results validate the proposed methodology for various two-dimensional geometries.
Abstract:
In this article, we investigate a water wave model with a nonlocal viscous term
$u_t + u_x + \beta u_{xxx} + \frac{\sqrt{\nu}}{\sqrt{\pi}}\int_0^t \frac{u_t(s)}{\sqrt{t-s}}\,ds + uu_x = \nu u_{xx}.$
The wellposedness of the equation and the decay rate of solutions are investigated theoretically and numerically.
Abstract:
We consider a non-autonomous reaction-diffusion system of two equations having in one equation a diffusion coefficient depending on time ($\delta =\delta (t)\geq 0,t\geq 0$) such that $\delta (t)\rightarrow 0$ as $t\rightarrow +\infty $. The corresponding Cauchy problem has global weak solutions, however these solutions are not necessarily unique. We also study the corresponding "limit'' autonomous system for $\delta =0.$ This reaction-diffusion system is partly dissipative. We construct the trajectory attractor A for the limit system. We prove that global weak solutions of the original non-autonomous system converge as $t\rightarrow +\infty $ to the set A in a weak sense. Consequently, A is also the trajectory attractor of the original non-autonomous reaction-diffusion system.
Abstract:
We consider a finite element space semi-discretization of the Cahn-Hilliard equation with dynamic boundary conditions. We prove optimal error estimates in energy norms and weaker norms, assuming enough regularity on the solution. When the solution is less regular, we prove a convergence result in some weak topology. We also prove the stability of a fully discrete problem based on the backward Euler scheme for the time discretization. Some numerical results show the applicability of the method.
Abstract:
A semilinear integrodifferential equation of hyperbolic type is studied, where the dissipation is entirely contributed by the convolution term accounting for the past history of the variable. Within a novel abstract framework, based on the notion of minimal state, the existence of a regular global attractor is proved.
Abstract:
We study the long time behavior of the solution of a stochastic PDEs with random coefficients assuming that randomness arises in a different independent scale. We apply the obtained results to $2D$- Navier-Stokes equations.
Abstract:
The method of group foliation can be used to construct solutions to a system of partial differential equations that, as opposed to Lie's method of symmetry reduction, are not invariant under any symmetry of the equations. The classical approach is based on foliating the space of solutions into orbits of the given symmetry group action, resulting in rewriting the equations as a pair of systems, the so-called automorphic and resolvent systems, involving the differential invariants of the symmetry group, while a more modern approach utilizes a reduction process for an exterior differential system associated with the equations. In each method solutions to the reduced equations are then used to reconstruct solutions to the original equations. We present an application of the two techniques to the one-dimensional Korteweg-de Vries equation and the two-dimensional Flierl-Petviashvili (FP) equation. An exact analytical solution is found for the radial FP equation, although it does not appear to be of direct geophysical interest.
Abstract:
We study the long time behavior, and, in particular, the existence of attractors for the Navier-Stokes-Fourier system under energetically insulated boundary conditions. We show that the attractor consists of static solutions determined uniquely by the total mass and energy of the fluid.
Abstract:
The three-dimensional incompressible Navier-Stokes equations are considered along with their weak global attractor, which is the smallest weakly compact set which attracts all bounded sets in the weak topology of the phase space of the system (the space of square-integrable vector fields with divergence zero and appropriate periodic or no-slip boundary conditions). A number of topological properties are obtained for certain regular parts of the weak global attractor. Essentially two regular parts are considered, namely one made of points such that all weak solutions passing through it at a given initial time are strong solutions on a neighborhood of that initial time, and one made of points such that at least one weak solution passing through it at a given initial time is a strong solution on a neighborhood of that initial time. Similar topological results are obtained for the family of all trajectories in the weak global attractor.
Abstract:
The paper is devoted to the study of a mathematical model for the thermomechanical evolution of metallic shape memory alloys. The main novelty of our approach consists in the fact that we include the possibility for these materials to exhibit voids during the phase change process. Indeed, it has recently been proved in the engineering paper [60] that voids may appear when the mixture is produced by the aggregation of powders. Hence, the composition of the mixture varies (under either thermal or mechanical actions) in this way: the martensites and the austenite transform into one another whereas the voids volume fraction evolves. The first goal of this contribution is hence to state a PDE system capturing all these modelling aspects, in order then to establish the well-posedness of the associated initial-boundary value problem.
Answer
$$x=\frac{\pi }{3}+2\pi n,\:x=\frac{2\pi }{3}+2\pi n$$
Work Step by Step
Solving for a general solution to the equation, we find: $$\sin \left(x\right)=\frac{\sqrt{3}}{2} \\ x=\frac{\pi }{3}+2\pi n,\:x=\frac{2\pi }{3}+2\pi n$$
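As a quick numerical check (added here, not part of the original answer), both solution families can be verified to satisfy $\sin(x) = \frac{\sqrt{3}}{2}$ for several integer values of $n$:

```python
import math

# Check both families x = pi/3 + 2*pi*n and x = 2*pi/3 + 2*pi*n
for n in range(-3, 4):
    for x in (math.pi / 3 + 2 * math.pi * n, 2 * math.pi / 3 + 2 * math.pi * n):
        assert math.isclose(math.sin(x), math.sqrt(3) / 2, abs_tol=1e-9)
print("ok")
```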
Answer
The solutions are $$x=\{\frac{\pi}{18},\frac{7\pi}{18},\frac{13\pi}{18},\frac{19\pi}{18},\frac{25\pi}{18},\frac{31\pi}{18}\}$$
Work Step by Step
$$\cot3x=\sqrt3$$ over interval $[0,2\pi)$ 1) Interval $[0,2\pi)$ can be written as $$0\le x\lt2\pi$$ That means, for $3x$, the interval would be $$0\le3x\lt6\pi$$ or $$3x\in[0,6\pi)$$ 2) Now consider again the equation $$\cot3x=\sqrt3$$ Over the interval $[0,6\pi)$, there are 6 values whose $\cot$ equals $\sqrt3$, which are $\frac{\pi}{6},\frac{7\pi}{6},\frac{13\pi}{6},\frac{19\pi}{6},\frac{25\pi}{6},\frac{31\pi}{6}$, meaning that $$3x=\{\frac{\pi}{6},\frac{7\pi}{6},\frac{13\pi}{6},\frac{19\pi}{6},\frac{25\pi}{6},\frac{31\pi}{6}\}$$ So $$x=\{\frac{\pi}{18},\frac{7\pi}{18},\frac{13\pi}{18},\frac{19\pi}{18},\frac{25\pi}{18},\frac{31\pi}{18}\}$$ (In fact, a value whose $\tan$ equals $\frac{\sqrt3}{3}$ would at the same time have $\cot$ equaling $\sqrt3$)
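A short numerical check (added, not in the original answer) confirms that each listed value lies in $[0, 2\pi)$ and satisfies $\cot 3x = \sqrt3$, i.e. $\tan 3x = \frac{1}{\sqrt3}$:

```python
import math

# The six candidate solutions x = k*pi/18 for k in {1, 7, 13, 19, 25, 31}
solutions = [k * math.pi / 18 for k in (1, 7, 13, 19, 25, 31)]
for x in solutions:
    assert 0 <= x < 2 * math.pi                 # inside the required interval
    assert math.isclose(math.tan(3 * x), 1 / math.sqrt(3), abs_tol=1e-9)
print("all", len(solutions), "solutions check out")
```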
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
The integral of a function f(x) is denoted by F(x), and it is represented by:
∫f(x)dx = F(x) + C
Here the L.H.S. of the equation means the integral of f(x) with respect to x.
F(x) is called the anti-derivative or primitive.
f(x) is called the integrand.
dx is called the integrating agent.
C is called constant of integration or arbitrary constant.
x is the variable of integration.
The anti-derivatives of basic functions are known to us. The integrals of these functions can be obtained readily. But this integration technique is limited to basic functions and in order to determine the integrals of various functions, different methods of integration are used. Among these methods of integration let us discuss integration by substitution.
INTEGRATION BY SUBSTITUTION
In this method of integration, a given integral is transformed into a simpler integral by substituting the independent variable with another one.
Take for example an integral whose independent variable is x, i.e. \(\int \sin (x^{3}).3x^{2}.dx\) … (i)
In the integral given above, the independent variable can be transformed into another variable, say t.
Substituting \( x^{3} = t \) … (ii)
Differentiating the above equation gives
\(3x^{2}.dx = dt\) … (iii)
Substituting (ii) and (iii) in (i), we have
\(\int \sin (x^{3}).3x^{2}.dx = \int \sin t . dt\)
Thus the integration of the above equation will give
\(\int \sin t .dt = -\cos t + c \)
Again putting back the value of t from equation (ii), we get
\(\int \sin (x^{3}).3x^{2}.dx = -\cos (x^{3}) + c\)
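As a quick numerical sanity check (added here, not part of the original text), the derivative of the antiderivative \(-\cos(x^3)\) should reproduce the integrand \(\sin(x^3)\cdot 3x^2\); a central finite difference confirms this at a few sample points:

```python
import math

def F(x):
    # Antiderivative found by substitution: -cos(x^3)
    return -math.cos(x ** 3)

def f(x):
    # Original integrand: sin(x^3) * 3x^2
    return math.sin(x ** 3) * 3 * x ** 2

h = 1e-6
for x in (0.3, 0.7, 1.1):
    numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference approximates F'(x)
    assert math.isclose(numeric, f(x), rel_tol=1e-5, abs_tol=1e-6)
print("ok")
```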
The General Form of integration by substitution is:
\(\int f(g(x)).g'(x).dx = \int f(t).dt\)
where t = g(x)
Usually the method of integration by substitution is extremely useful when we make a substitution for a function whose derivative is also present in the integrand. Doing so, the function simplifies and then the basic formulas of integration can be used to integrate the function. To understand this concept better let us look into an example.
Example:Find the integration of \(\int \frac{e^{\tan^{-1}x}}{1+x^{2}}.dx\)
Let \(t = \tan^{-1}x\) … (i)
\(\Rightarrow dt = \frac{1}{1+x^{2}}.dx\)
\(I = \int e^{t}. dt\)
\(= e^{t} + C\) … (ii)
Substituting the value of (i) in (ii), we have
\(I = e^{\tan^{-1}x} + C\)
Example: Find the integration of \(\int \cos (x^{2} - 5).2x.dx\)
Let \(x^{2} - 5 = t\) … (i)
\(\Rightarrow 2x .dx= dt\)
Substituting these values, we have
\(I = \int \cos(t).dt\)
\(= \sin t + C\) … (ii)
Substituting the value of (i) in (ii), we have
\(= \sin(x^{2}- 5) + C\)
This is the required integration for the given function.
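The same finite-difference sanity check (added here, on the assumption that verifying \(F'(x) = f(x)\) numerically is convincing enough) works for both worked examples: \(\frac{d}{dx}e^{\tan^{-1}x} = \frac{e^{\tan^{-1}x}}{1+x^{2}}\) and \(\frac{d}{dx}\sin(x^{2}-5) = 2x\cos(x^{2}-5)\):

```python
import math

# (antiderivative, integrand) pairs for the two worked examples
pairs = [
    (lambda x: math.exp(math.atan(x)),
     lambda x: math.exp(math.atan(x)) / (1 + x * x)),
    (lambda x: math.sin(x * x - 5),
     lambda x: 2 * x * math.cos(x * x - 5)),
]
h = 1e-6
for F, f in pairs:
    for x in (0.5, 1.0, 2.0):
        numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference for F'(x)
        assert math.isclose(numeric, f(x), rel_tol=1e-5, abs_tol=1e-6)
print("ok")
```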
Some tricks I've seen:
Tricks with notable products
$(a + b)^2 = a^2 + 2ab + b^2$
This formula can be used to compute squares. Say that we want to compute $46^2$. We use $46^2 = (40+6)^2 = 40^2+2\cdot40\cdot6 +6^2 = 1600 + 480 + 36 = 2116$. You can also use this method for negative $b$:$ 197^2 = (200 - 3)^2 = 200^2 - 2\cdot200\cdot3 + 3^2 = 40000 - 1200 + 9 = 38809 $
The last subtraction can be kind of tricky: remember to do it right to left, and take out the common multiples of 10:$ 40000 - 1200 = 100(400-12) = 100(398-10) = 100(388) = 38800 $The hardest thing here is to keep track of the amount of zeroes; this takes some practice!
Also note that if we're computing $(a+b)^2$ where $a$ is a multiple of $10^k$ (with $k \ge 2$) and $b$ is a single-digit number, we already know the last $k$ digits of the answer: they are the digits of $b^2$, padded with zeroes on the left. We can use a version of this even if $a$ is only a multiple of 10: the last digit of $(10a + b)^2$ (where $a$ and $b$ consist of a single digit) is the last digit of $b^2$. So we can write that down (or maybe only make a mental note that we have the final digit) and worry about the more significant digits.
Also useful for things like $46\cdot47 = 46^2 + 46 = 2116 + 46 = 2162$. When both numbers are even or both numbers are odd, you might want to use:
$(a+b)(a-b) = a^2 - b^2$Say, for example, we want to compute $23 \cdot 27$. We can write this as $(25 - 2)(25 + 2) = 25^2 - 2^2 = (20 + 5)^2 - 4 = 400 + 200 + 25 - 4 = 621$.
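Both identities can be sketched as code (the function names are my own, and `product_by_difference` assumes the two numbers have the same parity so their midpoint is an integer):

```python
def square_by_parts(n, base):
    # (base + b)^2 = base^2 + 2*base*b + b^2, with b = n - base (may be negative)
    b = n - base
    return base * base + 2 * base * b + b * b

def product_by_difference(m, n):
    # m * n = (a - b)(a + b) = a^2 - b^2, where a is the midpoint
    # (assumes m and n are both even or both odd, so a and b are integers)
    a, b = (m + n) // 2, (n - m) // 2
    base = (a // 10) * 10  # nearest lower multiple of ten for the inner square
    return square_by_parts(a, base) - b * b

print(square_by_parts(46, 40))        # -> 2116
print(square_by_parts(197, 200))      # -> 38809
print(product_by_difference(23, 27))  # -> 621
```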
Divisibility checks
Already covered by Theodore Norvell. The basic idea is that if you represent numbers in a base $b$, you can easily tell if numbers are divisible by $b - 1$, $b + 1$ or prime factors of $b$, by some modular arithmetic.
Vedic math
A guy in my class gave a presentation on Vedic math. I don't really remember everything, and there probably are more cool things in the book, but I remember an algorithm for multiplication that you can use to multiply numbers in your head.
This picture shows a method called lattice or gelosia multiplication and is just a way of writing our good old-fashioned multiplication algorithm (the one we use on paper) in a nice way. Please notice that the picture and the Vedic algorithm are not tied: I added the picture because I think it helps you appreciate and understand the pattern that is used in the algorithm. The gelosia notation shows this in a much nicer way than the traditional notation.
The algorithm the guy explained is essentially the same algorithm as we would use on paper. However, it structures the arithmetic in such a way that we never have remember too many numbers at the same time.
Let's illustrate the method by multiplying $456$ with $128$, as in the picture. We work from right to left: we first compute the least significant digits and work our way up.
We start by multiplying the least significant digits:
$6 \cdot 8 = 48$: the least significant digit is $8$, remember the $4(0)$ for the next round (of course, I don't mean zero times four here but four, or forty, whatever you prefer: be consistent though, if you include the zero here to make forty, you've got to do it everywhere).$ 8 \cdot 5(0) = 40(0) $
$ 2(0) \cdot 6 = 12(0) $ $ 4(0) + 40(0) + 12(0) = 56(0) $: our next digit (to the left of the $8$) is $6$: remember the $5(00)$
$ 8 \cdot 4(00) = 32(00) $
$ 2(0) \cdot 5(0) = 10(00) $ $ 1(00) \cdot 6 = 6(00) $ $ 5(00) + 32(00) + 10(00) + 6(00) = 53(00) $: our next digit is a $3$, remember the $5(000)$
Pfff... starting with 2-digit numbers is a better idea, but I wanted to do this longer one to make the structure of the algorithm clear. You can do this much faster if you have practiced, since you don't have to write it all down.
$ 2(0) \cdot 4(00) = 8(000) $
$ 1(00) \cdot 5(0) = 5(000)$ $ 5(000) + 8(000) + 5(000) = 18(000)$: next digit is an $8$, remember the $1(0000)$
$ 1(00) \cdot 4(00) = 4(0000) $
$ 1(0000) + 4(0000) = 5(0000) $: the most significant digit is a $5$.
So we have $58368$.
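The round-by-round scheme above can be sketched as a small function (the name and structure are my own): for each output position we sum the digit products that land there, plus the carry "remembered" from the previous round:

```python
def digit_multiply(x, y):
    # Least significant digits first, exactly as in the mental scheme above
    xd = [int(d) for d in str(x)][::-1]
    yd = [int(d) for d in str(y)][::-1]
    digits, carry = [], 0
    for pos in range(len(xd) + len(yd) - 1):
        # sum all digit products that land on this position, plus the carry
        total = carry + sum(xd[i] * yd[pos - i]
                            for i in range(len(xd))
                            if 0 <= pos - i < len(yd))
        digits.append(total % 10)   # the digit we write down
        carry = total // 10         # the part we "remember" for the next round
    if carry:
        digits.append(carry)
    return int("".join(map(str, digits[::-1])))

print(digit_multiply(456, 128))  # -> 58368
```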
Quadratic equations
There are multiple ways to solve a quadratic equation in your head. The easiest are quadratic with integer coefficients. If we have $x^2 + ax + c = 0$, try to find $r_{1, 2}$ such that $r_1 + r_2 = -a$ and $r_1r_2 = c$. It is also possible to solve for non-integer solutions this way, but it is usually too hard to actually come up with solutions this way.
Another way is just to try divisors of the constant term. By the rational root theorem (google it, I can't link anymore
sigh) all solutions to $x^n + ... + c = 0$ need to be divisors of $c$. If $c$ is a fraction $\frac{p}{q}$, the solutions need to be of the form $\frac{a}{b}$ where $a$ divides $p$ and $b$ divides $q$.
If this all fails, we can still put the abc-formula in a much easier form:
$ ux^2 + vx + w = 0 $
$ x^2 + \frac{v}{u}x + \frac{w}{u} = 0 $ $ x^2 - ax - b = 0 $ (writing $a = -\frac{v}{u}$ and $b = -\frac{w}{u}$)
$ x^2 = ax + b $
(This is the form that I found easiest to use!) $ (x - \frac{a}{2})^2 = (\frac{a}{2})^2 + b $ $ x = \frac{a\pm\sqrt{a^2 + 4b}}{2} = \frac{a}{2} \pm \sqrt{(\frac{a}{2})^2 + b} $
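The rearranged form $x^2 = ax + b$ translates directly into code (a minimal sketch assuming real roots):

```python
import math

def solve_quadratic(a, b):
    # Solves x^2 = a*x + b via x = a/2 +- sqrt((a/2)^2 + b); assumes real roots
    half = a / 2
    root = math.sqrt(half * half + b)
    return half - root, half + root

# x^2 = x + 6, i.e. x^2 - x - 6 = 0, has roots -2 and 3
print(solve_quadratic(1, 6))  # -> (-2.0, 3.0)
```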
I'm sure there are also a lot of techniques for estimating products and the like, but I'm not really familiar with them.
Tricks that aren't really usable but still pretty cool
See this excerpt from Feynman's "Surely you're joking, Mr. Feynman!" about how he managed to amaze some of his colleagues, and also this video from Numberphile. |
It's my understanding that when something is going near the speed of light in reference to an observer, time dilation occurs and time goes slower for that fast-moving object. However, when that object goes back to "rest", it has genuinely aged compared to the observer. It's not like time goes slow for a while, and then speeds back to "normal," so that the age of the observer once again matches the object. The time dilation is permanent. Why wouldn't the same thing happen with length contraction? Since the two are so related, you'd think if one is permanent, the other would be also. And from everything I've read so far, length contraction is not permanent. An object will be at rest touching an observer, go far away near light speed, return to touching the observer, and be the same length it was at the beginning. It shortens, and then grows long again, as if its shrinkage was an illusion the whole time. Did I just not read the right things or what? Were my facts gathered incorrectly?
Time dilation is a comparison of rates. When an object is moving fast with respect to you, its clock rate is slow, and when it comes to rest with respect to you its clock rate returns to normal. The time difference between the two clocks at this point is due to the accumulation of these different clock rates. That is the leftover effect of the time dilation, but not the time dilation itself.
Length contraction, like time dilation, exists when there is relative motion and goes away when there is no relative motion, but there isn't any "accumulation" with length contraction, so there is nothing to be "left over".
The way I see it, time dilation is the real effect here.
Length contraction (in SR) is just a consequence of the fact that the "length" of a rod is the distance between simultaneous positions of the rod's endpoints. But two observers with different velocities will have different ideas about what simultaneous is, and this means they measure different lengths.
The best paradox to think about here is the "ladder" or "train" paradox. I think if you have got your head around that you understand length contraction.
It's my understanding that when something is going near the speed of light in reference to an observer, time dilation occurs and time goes slower for that fast-moving object.
According to the 'something', it is the observer's clock that runs slower and it is the observer's rulers that are contracted. That is to say, the time dilation and length contraction are symmetrical. Neither clock can objectively be said to be running slow and neither ruler can objectively be said to be contracted.
However, when that object goes back to "rest",
Now the symmetry is lost; the object's accelerometer registered non-zero acceleration for some time while the observer remained inertial. This means that there is now an objective difference between the object (non-inertial) and observer (inertial) and, thus, an objective difference in elapsed times.
Why wouldn't the same thing happen with length contraction?
In fact, in SR, acceleration of an extended object must be handled with great care. If an object is not to stretch or compress during acceleration, different parts of the object must have different (proper) accelerations.
See, for example, this question for additional information and links.
Length contraction effects may be permanent in the same way as time dilation! You just have to choose the right example.
Example: An astronaut is traveling at v = 0.99c to an exoplanet; according to the Earth frame he travels 198 light years in 200 years. According to his frame (reciprocal gamma = 0.141) he travels 27.9 light years in 28.2 years. After his arrival on the exoplanet, he is permanently younger than (and outliving) his twin brother on Earth, and he is permanently at a distance of 198 light years from Earth, a distance which he could never have traveled without length contraction.
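The numbers in this example are easy to verify (a quick check, with $c = 1$ so distances are in light years and times in years):

```python
import math

v = 0.99                           # fraction of c
inv_gamma = math.sqrt(1 - v * v)   # reciprocal Lorentz factor
earth_time = 198 / v               # years in the Earth frame: 200
print(round(inv_gamma, 3))                # -> 0.141
print(round(earth_time * inv_gamma, 1))   # traveler's proper time -> 28.2 years
print(round(198 * inv_gamma, 1))          # contracted distance -> 27.9 light years
```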
There is actually an equivalent to "total elapsed proper time" along time-like curves in spacetime (which can represent the worldlines of particles moving slower than light), and that is the "proper distance" along a space-like curve (which cannot be any real particle's worldline). See the spacetime wikipedia article for more on time-like vs. space-like, in particular the basic concepts section dealing with different types of intervals in special relativity, and the spacetime in general relativity section generalizing that discussion.
The simplest physical interpretation of proper time on a time-like curve is just the total elapsed time on an ideal clock that has that curve as its worldline. But just as an arbitrary curve can be approximated as a polygonal shape consisting of a series of straight segments connected at their endpoints, so an arbitrary time-like curve can be approximated as a series of short inertial segments, which could represent bits of the worldlines of a bunch of different inertial clocks which cross paths with one another at the points where the segments join. Then if you add the time elapsed by each inertial clock on each segment, this is approximately the proper time on the whole curve. Analogously, an arbitrary space-like curve can be approximated by a series of space-like segments, and the endpoints of each segment can be events at either end of a short inertial ruler which is moving at just the right velocity so that its plane of simultaneity is parallel to the segment. Then the total proper distance is just the sum of the proper lengths of the rulers for all the segments. But this will probably only make sense if you already have a decent familiarity with spacetime diagrams in special relativity.
To give a mathematical example, suppose we are dealing with curves in SR which can be described in the coordinates of some inertial frame, and suppose the curves only vary their position along the x-axis so we can ignore the y and z space coordinates, and just describe the curves by some x(t) function. Then a time-like curve is one where $\frac{dx}{dt} < c$ everywhere, and a space-like curve is one where $\frac{dx}{dt} > c$ everywhere. If the time-like curve is approximated by a "polygonal" path made up of a series of inertial segments that each have a constant velocity $v$ over a time-interval $\Delta t$ in the inertial frame, then the elapsed proper time on each segment is $\sqrt{1 - v^2/c^2} \Delta t$ (this is just the time dilation equation), and the total proper time along the whole polygonal path is the sum or the proper time for each each segment. In the limit as the time-intervals become infinitesimal, this sum becomes an integral, and in this limit the error in the polygonal approximation goes to zero, so the actual proper time along the curve is $\int \sqrt{1 - v(t)^2/c^2} \, dt$.
Similarly, the space-like curve can be approximated by a polygonal path made up of a series of space-like segments whose endpoints have a spatial interval of $\Delta x$ between them, and with each segment having a constant value of $v^{\prime} = \frac{dx}{dt}$, where $v^{\prime} > c$. Each segment will be parallel to the simultaneity plane of a ruler moving at a slower-than-light speed $v = \frac{c^2}{v^{\prime}}$, and if the ruler's ends line up with the endpoints of the spacelike segment, that means the ruler has a contracted length of $\Delta x$ in the inertial frame we're using, which means the ruler's proper length is $\frac{1}{\sqrt{1 - v^2/c^2}} \Delta x$ (this is just the length contraction equation). So the total proper distance along the polygonal path is just the sum of the proper length for each ruler, and in the limit as the rulers' proper lengths become infinitesimal the sum becomes an integral and the error goes to zero, so the actual proper distance along the curve is $\int \frac{1}{\sqrt{1 - v(t)^2/c^2}} \, dx$.
So, you can see that in the first integral for proper time the factor in the integral is the same one that appears in the time dilation equation $dt_{proper} = \sqrt{1 - v^2/c^2} \, dt$, and in the second integral for proper distance the factor in the integral is the same one that appears in the length contraction equation $dx_{proper} = \frac{1}{\sqrt{1 - v^2/c^2}} \, dx$.
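The limiting argument above is easy to check numerically. Here is a minimal sketch (Python, in units with $c = 1$, using a made-up velocity profile $v(t) = 0.8\,t/T$) comparing the polygonal sum of proper times against the integral:

```python
import numpy as np
from scipy.integrate import quad

c, T = 1.0, 10.0                       # units with c = 1; total coordinate time
v = lambda t: 0.8 * c * t / T          # hypothetical velocity profile, |v| < c

# polygonal approximation: many short inertial segments of duration dt,
# each contributing sqrt(1 - v^2/c^2) * dt of proper time
t = np.linspace(0.0, T, 100_001)
dt = np.diff(t)
tm = 0.5 * (t[:-1] + t[1:])            # evaluate v at segment midpoints
tau_poly = float(np.sum(np.sqrt(1.0 - (v(tm) / c) ** 2) * dt))

# the limiting integral from the text
tau_int, _ = quad(lambda s: np.sqrt(1.0 - (v(s) / c) ** 2), 0.0, T)

print(tau_poly, tau_int)               # the polygonal sum converges to the integral
```

With 100,000 segments the two values agree to many decimal places, which is exactly the statement that the polygonal approximation error vanishes in the limit.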
There is an asymmetry between space and time, and that is the reason why time dilation can be permanent while length contraction cannot. The reason is that you can travel back and forth in space but not in time. In turn, this is related to an asymmetry between time and space in relativity, not in the laws of motion but in the theory itself: we cannot go continuously from moving below the speed of light to moving faster than the speed of light, thus any physical reference frame is always moving either forward or backward in time, but cannot change from one to the other. If it could, then length contraction could be made permanent.
Why could you make length contraction permanent if you could move backwards in time?
To see why you need to travel backwards in time remember the twin paradox:
a thought experiment in special relativity involving identical twins, one of whom makes a journey into space in a high-speed rocket and returns home to find that the twin who remained on Earth has aged more. This result appears puzzling because each twin sees the other twin as moving, and so, according to an incorrect naive application of time dilation and the principle of relativity, each should paradoxically find the other to have aged more slowly
You can explain why the twin paradox breaks the symmetry, that is, makes the time difference permanent for one of them, by looking at the figure below. There the traveling twin changes inertial reference frames in the middle of the trip to initiate the return. In the graph the change in speed is instantaneous, thus the acceleration is infinite and the break in the symmetry is instantaneous: one point for the traveler maps into a segment for the stationary twin. If the change in speed were not instantaneous, we would see that a small time segment for the traveling twin (that is, during the acceleration) maps into a much larger time segment for the stationary twin, so the stationary twin will be correct.
Now relabel the graph $ct$ to $x$ and $x$ to $ct$. You get exactly the same problem, but with $t$ and $x$ reversed. Thus, to make length contraction permanent you need, instead of a traveler in space, a traveler in time. It should work like this: in a stationary reference frame, two meter sticks, stick one and stick two, have the same length. But now one of them (stick two) moves with traveler two. In the traveler's reference frame, stick two will be longer than stick one. The stationary stick, stick one, will likewise be measured as the longer one in the stationary frame, and the same holds in the different reference frame of the return leg. No symmetry breaking so far. But if the traveler is allowed to move back in time when returning, an asymmetry similar to the one in the twin paradox will occur, the final result being that the stationary stick was correct, and the returning stick will be shorter than the original.
Putting CuriousOne's comment into an answer,
In the theory of relativity, time dilation is an actual difference of elapsed time between two events as measured by observers either moving relative to each other or differently situated from gravitational masses. Wikipedia
I see that such a definition might be misleading, as it talks about time dilation in an "elapsed time" sense. Although I can't say it is technically wrong, perhaps a better way to understand it would be in terms of the rate at which time passes for observers moving relative to each other.
Just like length contraction, time dilation, interpreted as a difference in the rate of time flow, also disappears as the observers again come to rest relative to each other. But elapsed time is a cumulative quantity: that difference can't be restored.
Total time or any such concept may not be covered by General Relativity or any such present theory, as far as my limited knowledge goes.
No, your facts weren't gathered incorrectly, your reasoning is just incorrect. It doesn't even take knowledge of physics to answer the question, just logical reasoning. (Don't take my language as a personal insult, I'm just trying to be clear.)
"It's not like time goes slow for a while, and then speeds back to "normal," so that the age of the observer once again matches the object."
Well, actually, yes, time does go back to "normal" when the moving clock comes back to rest. (All relative to the observer, of course). Once the clock comes to rest, it will once again tick at its normal rate, which is faster than the rate it ticked at when it was moving.
The clock will be behind, however, because it spent some time being a slowpoke. Your idea that once the clock begins to tick at its normal rate it will somehow "catch up" with the other clock is incorrect.
That's like saying if one marathon runner spends an hour walking, while his competitor runs, once he starts running again, he'll immediately catch up to the other guy. No, he'll be behind because of the time he spent walking while the other guy was running. Same idea with the clocks.
Time dilation does disappear as relative velocity approaches zero. The things experienced during the time experienced do not disappear; cells which have died remain dead and second hands which have ticked ahead do not reverse direction. To undo those things would require time reversal.
Since we as humans only perceive time in one direction, time reversal is irrelevant: if an object is travelling in direction a at 1 m per (−s), we would perceive and record it as travelling in direction −a at 1 m/s, or in direction a at −1 m/s. We always record and perceive time as forward moving, but it can just as easily be seen in the other direction.
As you say, time dilation and length contraction occur when two frames of reference (observer and observed) travel at two different speeds. Both of these effects "go away" if the two frames of reference subsequently travel at the same speed; that is, time will pass at the same rate and two yardsticks will have the same length.
But the EFFECTS of these relativistic effects are permanent in BOTH cases. For time dilation it's easy to imagine (i.e. the "old twin" scenario that you mentioned). So here's an example for length contraction:
Imagine that there's an immense opaque disc between you (on Earth) and a big stellar nebula. The disc is big, but not so big that it completely obscures the nebula. Some of the photons coming from the nebula are blocked by the disc.
Now imagine the same scenario, but the disc is travelling very fast tangentially to you and the nebula. In other words, it's moving across your field of vision. Now, at such immense distances, you won't be able to easily see the disc moving, but it WILL be length-contracted. So
fewer photons from the nebula will be blocked by the disc, allowing you to see more of the nebula. This is a permanent effect! Those extra photons that slipped by the foreshortened disc are now streaming out into the universe, interacting with things and hitting retinas (maybe yours) long after the disc slows down (relative to you).
Nothing that happens in the universe ever really "goes away". I didn't even mention the increased mass of the disc, which will distort the paths of those photons and everything else around it. Every distortion is "permanent" in that respect.
Any change in time at all is only "permanent" because of the second law of thermodynamics and the resulting arrow of time.
protected by Qmechanic♦ Dec 20 '14 at 16:49
Kerr black holes usually (excluding the extrema $a=0$, $a=1$) have an ellipsoidal ergosphere due to their rotation.
So why does the photon sphere not also have an ellipsoidal form?
So why does the photon sphere not have an ellipsoidal form?
It does have an ellipsoidal form, or, more exactly, that of an oblate spheroid; in Boyer-Lindquist coordinates where
$${x} = \sqrt {r^2 + a^2} \sin\theta\cos\phi \ , \ \ {y} = \sqrt {r^2 + a^2} \sin\theta\sin\phi \ , \ \ {z} = r \cos\theta$$
the r-coordinate of all the possible photon orbits is constant, but if you transform the pseudospherical coordinate system into the Cartesian background space the constant r does indeed transform into an ellipsoid.
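Concretely, a constant-$r$ surface under the transformation above satisfies $\frac{x^2+y^2}{r^2+a^2} + \frac{z^2}{r^2} = 1$, the equation of an oblate spheroid. A small numerical check (Python; the values of $r$ and $a$ are arbitrary sample values):

```python
import numpy as np

r, a = 2.0, 0.9                          # arbitrary Boyer-Lindquist radius and spin
theta = np.linspace(0.0, np.pi, 200)
phi = np.linspace(0.0, 2.0 * np.pi, 200)
th, ph = np.meshgrid(theta, phi)

# Boyer-Lindquist -> Cartesian background coordinates
x = np.sqrt(r**2 + a**2) * np.sin(th) * np.cos(ph)
y = np.sqrt(r**2 + a**2) * np.sin(th) * np.sin(ph)
z = r * np.cos(th)

# every point of the constant-r surface lies on the oblate spheroid
lhs = (x**2 + y**2) / (r**2 + a**2) + z**2 / r**2
print(float(np.max(np.abs(lhs - 1.0))))  # zero up to floating-point rounding
```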
For comparison see the ergospheres and horizons of a rotating black hole in pseudospherical $r, \theta, \phi$ (left, for comparison see Fig. 3 in Nigel Sharp's paper
"On embeddings of the Kerr geometry") and $x,y,z$ (right, comparison at Fig. 3 in Matt Visser's paper "The Kerr spacetime: A brief introduction" ) coordinates:
While the horizons have a constant $r$ in Boyer-Lindquist coordinates, they don't have a constant $R$ in Cartesian coordinates. That of course also goes for photon orbits:
As you can see, the shape of the photon trajectory in Cartesian background space is not spherical, but an oblate spheroid. In Fig. 4 of Edward Teo's paper
"Spherical photon orbits around a Kerr black hole" (pdf), you can find the same orbit, but in pseudospherical Boyer-Lindquist coordinates, where the orbit looks spherical again.
I don't see much discussion in the linked article about the photon sphere being a true sphere -- they talk about the photon orbit at $r=3M$, but this is only valid for an equatorial orbit (and I'd expect that it would depend on the value of $a$, and you'd have a different radius for corotation and anti-rotation), and when you go away from the equatorial plane, you'll have different values for the orbit.
I'll admit that I'm talking from intuition here; I haven't done the calculation for closed null geodesics in the Kerr spacetime.
Short, non-rambly answer: I don't know if the Kerr hole even has a photon sphere in the sense of the Schwarzschild case, and if it did, my intuition screams that it will not be a sphere. (Also note, $r=const, t=const$ does not define a sphere in the Kerr spacetime.)
Photon "spheres" of the Kerr metric are not yet found. That needs a lot of work. Two closed photon circles are known.
I have a problem I solved using kinematics/Newton's 2nd law.
It gives the mass of a walker as 55 kg. It then says she starts from rest and walks 20 m in 7 s. It wants to know the horizontal force acting on her.
From kinematics for constant acceleration, I know $\vec{r}=\frac{a}{2}t^2\hat{i}$. Plugging in the known time and the known distance I solved for the acceleration and then I could get the force by multiplying the acceleration by the walker's mass. So I got the problem right... but then I got to wondering: Was there a way to do this problem using energy? I have in mind $\vec{F}\cdot\Delta\vec{r}=\Delta K$. I tried but I don't know the final velocity (from the given information).
Edit: I realized after looking at some of the feedback that I do know the final velocity (because the linear dependence of velocity on time means the average velocity must be half the final velocity). Therefore, you can see below that I have posted the answer I was hoping to write back when I wished I knew the final velocity.
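For reference, here is the arithmetic both ways (a minimal sketch in Python; the numbers come from the problem statement):

```python
m, d, t = 55.0, 20.0, 7.0        # mass (kg), distance (m), time (s)

# kinematics: d = (1/2) a t^2  =>  a = 2 d / t^2, then F = m a
a = 2.0 * d / t**2
F_kin = m * a

# energy: with constant acceleration the average velocity d/t is half the
# final velocity, so v_f = 2 d / t, and the work-energy theorem F d = (1/2) m v_f^2
v_f = 2.0 * d / t
F_energy = 0.5 * m * v_f**2 / d

print(F_kin, F_energy)           # both routes give about 44.9 N
```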
Dear Uncle Colin, I wanted to work out $3^{41}\mod 13$: Wolfram|Alpha says it's 9, but MATLAB says it's 8. They can't both be right! What gives? MATLAB Obviously Doesn't Understand Logical Operations Hi, MODULO! First up, when computers disagree, the best thing to do is check by hand. Luckily, you… Read More →
"Hm," I thought, "that's odd." I don't often work in degrees, but the student's syllabus insisted. And $\sin(50º)$ came up. It's 0.7660, to four decimal places. But... I know that $\sin\left(\frac 13 \pi\right)$, er, sorry, $\sin(60º)$ is 0.8660 -- a difference that's pretty close to $\frac 1{10}$. Which got me… Read More →
In this month's festive WBU, Colin and Dave discuss... Bill's second birthday and train set Colin's latest book is due out any day. First number of the podcast: 998 (days since our first recording) Colin gloats about Spoof My Proof and Dave chews on sour grapes Second number of the… Read More →
Dear Uncle Colin, I'm having trouble getting my head around sum notation! I can't tell whether $\sum_{n=0}^{6}{5}$ means $0\times5 + 1\times 5 + ... + 6\times 5$ or $0 + 1 + 2 + ... + 6$ or $5 + 5 + 5 ... + 5$. Wolfram|Alpha just gives me… Read More →
A question, some time ago, from my favourite Egyptologist on Twitter: Is applied maths less satisfying than pure? @ColinTheMathmo @icecolbeveridge #mathcult — Liz Hind (@drbhind) April 14, 2014 There's a very simple answer to this question: it doesn't just depend on who you're asking, it may also depend on what… Read More →
Dear Uncle Colin, @CmonMattTHINK unearthed the challenge to prove that: $\tan\left( \frac 3{11}\pi \right) + 4 \sin\left( \frac 2{11}\pi \right) = \sqrt {11}$. Wolfram Alpha says it's true, but I can barely get started on the proof and I'm worried no-one will like me. Grr, Really Obnoxious Trigonometry Has Evidently… Read More →
I've always had a soft spot for the Countdown numbers game, a challenge pitched just perfectly right for my mental capacities. It would take a very good numbers game to displace Countdown in my heart. Mathador is a very good numbers game1 with similarities to Countdown: you're given a batch… Read More →
The following puzzle/trick came up on Futility Closet, one of my favourite sites for recreations. Here's how they describe it: Arrange cards with values ace through 9 in a row, in counting order, with the ace on the left. Take up a card from one end of the row —… Read More →
Dear Uncle Colin, Do you have any advice about Cambridge interviews? I have one coming up. How About Wanting Kings (If Not, Girton) Hello, HAWKING! My Cambridge interviews were a couple of decades ago, so I don't know how current my advice is. A few things: be ready to demonstrate… Read More →
In terms of tone and style, The Joy of $x$ is an absolute delight -- Strogatz has a knack for finding the right analogy and the right anecdote that is the envy of maths writers everywhere. It's an enjoyable read, and I'd recommend it to anyone who thinks "I'd love… Read More →
From: Fons Adriaensen
Subject: Re: [Discuss-gnuradio] Delay locked loop for the two-clock problem
Date: Thu, 27 Oct 2016 22:35:32 +0000
User-agent: Mutt/1.5.21 (2010-09-15)

On Wed, Oct 26, 2016 at 11:54:17PM +0200, Marcus Müller wrote:
> > The actual frequency of the clock used to measure time doesn't
> > matter as long as it has reasonable short term stability (and both sides
> > use the same clock of course).
> Exactly; that's what I was worried about. I don't have any data on the
> frequency stability of PC clocks – but I'm 100% sure a USRP's oscillator
> should be better
Only the short-term stability matters. If W and R are the two ends of the
buffer, we obtain timestamps (tW_n, kW_n) and (tR_n, kR_n), where the
't' are read from time_now(), and the 'k' are cumulative counts of samples,
bytes, or whatever items. Now if the control loop is at the reading side
(it doesn't have to be), then whenever a block is read we obtain (tR, kR).
What we want then is kW(tR), so we can compute kW(tR) - kR, which is the
'logical' number of items buffered at time tR. 'Logical' meaning that instead
of block writes and reads we assume imaginary constant rate writes and reads.
Given two pairs (tW_n, kW_n) and (tW_n+1, kW_n+1), all we need is linear
interpolation to find kW(tR). In practice, all 't' will be a small fraction
of a second apart, so only the short term stability of time_now() matters.
The only thing we need to ensure is that the 't' are a sufficient number
of clock ticks apart, so that their difference isn't dominated by round-off
error.
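[As a toy illustration of that interpolation step (Python; all numbers below are made up, assuming a writer running at a nominal 48000 items/s):]

```python
def kW_at(t, p0, p1):
    """Linearly interpolate the writer's cumulative item count at time t
    from two (timestamp, count) pairs (tW_n, kW_n) and (tW_n+1, kW_n+1)."""
    (t0, k0), (t1, k1) = p0, p1
    return k0 + (k1 - k0) * (t - t0) / (t1 - t0)

# hypothetical writer timestamps, one quarter second apart in item terms
kW = kW_at(1.25, (1.0, 48_000), (1.5, 72_000))   # -> 60000.0
buffered = kW - 55_000   # 'logical' fill if the reader saw kR = 55000 at tR = 1.25
```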
Any jitter of the clock used by time_now() (and any round-off error) has
exactly the same effect as jitter of the actual write/read event times,
and is filtered by the DLL. And whatever remains is filtered again by
the main control loop (see example below).
> Hm, at 100MS/s, the integration periods to get stable rate estimates
> relative to CPU clock would probably get pretty long, sample-wise,
> wouldn't they?
It doesn't depend on the sample rate. What gets timestamped are the
block write and read operations on the buffer. It doesn't matter what
the block contains, 256 samples at 48 kHz or 25600 at 4.8 MHz. What
matters is the average block period, and how much variation this has.
> In other words, while we still need to aggregate samples
> to get a block of samples temporally long enough for the CPU time
> estimate to be stable, buffers are already flowing over.
You mean when the system starts processing? We don't wait, but just
start assuming the actual rate is the nominal average one. After
the first iteration a one-time correction to the buffer state is
made so it corresponds to the target value of kW - kR. After that
the control loop takes over.
> Also, I'm still
> confused: Let's say we have two rates that we need to match, $r_1$ and
> $r_2$, with $\frac{r_1}{r_2} - 1 = \epsilon_{1,2}$ for pretty small
> values of $\epsilon_{1,2}$, i.e. relatively well matched. If we now use
> a third rate, $r_3$ (namely, the clock resolution of the PC), whose
> $\epsilon_{1,3}, \epsilon_{2,3} \gg \epsilon_{1,2}$, how does that work
> out? I feel like that will add more jitter, mathematically?
The rates being 'well matched' is the normal situation. It doesn't
matter what the resampling ratio is. All the control loop does is
apply a small correction to the nominal ratio which is known a priori.
> >> I think it'll be a little unlikely to implement this as a block that you
> >> drop in somewhere in your flow graph.
Really there is no problem with that under the assumption stated
previously.
In RF engineering terms, the resampling does indeed add some phase
noise, but only within the loop bandwidth (0.1 Hz is a typical value).
It is really similar to a PLL, the phase noise of the LO within the
PLL bandwidth is added to the signal. If you have another PLL
downstream with a lower bandwidth that one may well fail to lock.
But there is really no reason to do adaptive resampling in the
RF domain. Just before the audio sink is the right place.
> > In theory it would be possible. The requirement then (assuming RF in and
> > audio out) is that everything upstream is directly or indirectly triggered
> > by the RF clock, and everything downstream by the audio clock. Don't know
> > if that's possible in GR (still have to discover the internals).
> Not really, there's no direct triggering.
It doesn't have to be direct.
A module will execute (i.e. its work() is called) when it has sufficient
input and sufficient space in its output buffers. Whenever that happens,
it is triggered by some other module providing input or space for output.
So in the end everything is triggered by events produced by the HW, even
if it may take some time for these events to 'ripple through'.
> > The only assumption for this to work is that there is no 'choking point',
> > i.e. all modules are fast enough to keep up with the signal.
>
> But that assumption fails with GNU Radio in general! There's always
> faster and slower blocks.
You seem to misunderstand what I mean by 'no choking point'. It just
means that on average your CPUs can perform the work that is required.
If that is the case, then
1. the system will be idle part of the time, just waiting for more
input, and
2. at any point there will be a well defined and on average constant
and known data rate (at least for sampled signals).
> ... and we're back at the question of how much we can trust the CPU
> clock as a base for estimating latencies :)
All modern PCs have a clock that is guaranteed to be continuous,
monotonic and having a sub-microsecond resolution. Of course this
is not the sort of clock you'd use to generate an RF signal (phase
noise could well be horrible). But whatever jitter this clock has
is orders of magnitude less than the time jitter of the events it
is used to timestamp.
An example may make this a bit more clear. Assume we are receiving
an audio stream from the network and need to resample this to the
actual sample rate of our sound card. Let's say we have 200 packets
per second (of 5 ms each). For a cross-atlantic link typical jitter
on the arrival time of the packets will be some tens of milliseconds.
Every now and then a packet will arrive 300 ms late. That means we
need the average fill state of our buffer to be at least 300 ms if
we want to avoid interruptions in the signal. This sets the target
value for kW - kR (as above) and the buffer size (a bit more).
Now assume a packet does arrive 300 ms late. So the error seen by
the DLL is 300 ms. Now the value of w1 is 2 * pi * B * dt, with B
the bandwidth of the DLL and dt = 5 ms. Let's set B to 0.1 Hz,
then w1 ~= 1/300. So of the 300 ms error 1 ms remains in t0, which
is the value seen by the main control loop. If this has a similar
bandwidth, the effective error that remains is again divided by
300, so we have something close to 3 microseconds. This error is
passed on to the resampler which will try to correct it with a
time constant of around 10 / B, i.e. one second. So the relative
correction to the resampling ratio will be around 3 ppm.
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
Theorem. $\int_0^\infty \sin x \phantom. dx/x = \pi/2$.
Poof. For $x>0$ write $1/x = \int_0^\infty e^{-xt} \phantom. dt$,and deduce that $\int_0^\infty \sin x \phantom. dx/x$ is$$\int_0^\infty \sin x \int_0^\infty e^{-xt} \phantom. dt \phantom. dx= \int_0^\infty \left( \int_0^\infty e^{-tx} \sin x \phantom. dx \right)\phantom. dt= \int_0^\infty \frac{dt}{t^2+1},$$which is the arctangent integral for $\pi/2$, QED.
The theorem is correct, and usually obtained as an application ofcontour integration, or of Fourier inversion ($\sin x / x$ is a multiple ofthe Fourier transform of the characteristic function of an interval).The poof, which is the first one I saw(given in a footnote in an introductory textbook on quantum physics),is not correct, because the integral does not converge absolutely.One can rescue it by writing $\int_0^M \sin x \phantom. dx/x$as a double integral in the same way, obtaining$$\int_0^M \sin x \frac{dx}{x} =\int_0^\infty \frac{dt}{t^2+1}- \int_0^\infty e^{-Mt} (\cos M + t \cdot \sin M) \frac{dt}{t^2+1}$$and showing that the second integral approaches $0$ as $M \rightarrow \infty$;but this detour makes for a much less appealing alternative to the usualproof by complex or Fourier analysis.
Still the double-integral trick can be used legitimately to evaluate$\int_0^\infty \sin^m x \phantom. dx/x^n$ for integers $m,n$ such thatthe integral converges absolutely (that is, with $2 \leq n \leq m$;NB unlike the contour or Fourier approach this technique appliesalso when $m \not\equiv n \bmod 2$).Write $(n-1)!/x^n = \int_0^\infty t^{n-1} e^{-xt} \phantom. dt$ to obtain$$\int_0^\infty \sin^m x \frac{dx}{x^n} = \frac1{(n-1)!} \int_0^\infty t^{n-1} \left( \int_0^\infty e^{-tx} \sin^m x \phantom. dx \right)\phantom. dt,$$in which the inner integral is a rational function of $t$,and then the integral with respect to $t$ is elementary.For example, when $m=n=2$ we find$$\int_0^\infty \sin^2 x \frac{dx}{x^2}= \int_0^\infty t \frac2{t^3+4t} dt= 2 \int_0^\infty \frac{dt}{t^2+4} = \frac\pi2.$$As a bonus, we recover a correct proof of our starting theorem byintegration by parts:
$$\frac\pi2 = \int_0^\infty \sin^2 x \frac{dx}{x^2} = \int_0^\infty \sin^2 x \phantom. d(-1/x) = \int_0^\infty \frac1x d(\sin^2 x) = \int_0^\infty 2 \sin x \cos x \frac{dx}{x};$$since $2 \sin x \cos x = \sin 2x$, the desired$\int_0^\infty \sin x \phantom. dx/x = \pi/2$follows by a linear change of variable.
Exercise Use this technique to prove that$\int_0^\infty \sin^3 x \phantom. dx/x^2 = \frac34 \log 3$,and more generally$$\int_0^\infty \sin^3 x \frac{dx}{x^\nu} = \frac{3-3^{\nu-1}}{4} \cos \frac{\nu\pi}{2} \Gamma(1-\nu)$$when the integral converges. [Both are in Gradshteyn and Ryzhik,page 449, formula 3.827; the $\nu=2$ case is 3.827#3, credited toD. Bierens de Haan, Nouvelles tables d'intégrales définies,Amsterdam 1867; the general case is 3.827#1, from Gröbner andHofreiter's Integraltafel II, Springer: Vienna and Innsbruck 1958.]
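Both the $m=n=2$ theorem and the $\nu=2$ exercise are easy to confirm numerically; a sketch (Python with scipy, summing the integral period by period so the oscillatory tails behave):

```python
import numpy as np
from scipy.integrate import quad

def f2(x):  # sin^2 x / x^2, with the removable singularity at 0
    return 1.0 if x == 0.0 else (np.sin(x) / x) ** 2

def f3(x):  # sin^3 x / x^2
    return 0.0 if x == 0.0 else np.sin(x) ** 3 / x ** 2

# sum over intervals [k*pi, (k+1)*pi]; the neglected tail beyond 4000*pi
# contributes O(1/T) for f2 and only O(1/T^2) for f3
I2 = sum(quad(f2, k * np.pi, (k + 1) * np.pi)[0] for k in range(4000))
I3 = sum(quad(f3, k * np.pi, (k + 1) * np.pi)[0] for k in range(4000))

print(I2, np.pi / 2)            # agree up to the small truncated tail
print(I3, 0.75 * np.log(3))     # agree even more closely
```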
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class rather than do well in it
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue from the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot assume that for the field you are calculating in, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
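Since the discussion is about verifying associativity from the multiplication rule, a quick numerical spot-check can complement the algebra. Below is a minimal Python sketch (the names and the choice $\delta = 2$ are mine, purely for illustration) that represents $a + b\sqrt{\delta}$ as a pair of rationals and checks associativity on a small grid:

```python
from fractions import Fraction
from itertools import product

DELTA = Fraction(2)  # an illustrative choice of delta; any rational value works

def mult(p, q):
    # (a + b*sqrt(delta)) * (c + d*sqrt(delta))
    #   = (a*c + b*d*delta) + (b*c + a*d)*sqrt(delta)
    a, b = p
    c, d = q
    return (a * c + b * d * DELTA, b * c + a * d)

# spot-check (p*q)*r == p*(q*r) over a small grid of rational pairs
grid = [(Fraction(a), Fraction(b)) for a in range(-2, 3) for b in range(-2, 3)]
associative = all(
    mult(mult(p, q), r) == mult(p, mult(q, r))
    for p in grid for q in grid for r in grid
)
```

Exact `Fraction` arithmetic avoids floating-point noise, so equality here is a genuine check of the ring identity on the sampled points — a sanity check, not the general proof.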
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need machinery like Lie algebras and morphisms to make sense of them
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to their power set, or more generally from any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
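For readers who want to see the diagonal argument itself in concrete form, here is a small Python sketch (plain Python, not Isabelle) that exhaustively checks, for a three-element set, that no function into the power set ever hits Cantor's diagonal set:

```python
from itertools import combinations, product

S = [0, 1, 2]
# the power set of S, as frozensets
P = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

# enumerate every function f: S -> P(S); Cantor's diagonal set
# D = {x : x not in f(x)} is never in the image, so no f is a surjection
diagonal_always_missed = True
for images in product(P, repeat=len(S)):
    f = dict(zip(S, images))
    D = frozenset(x for x in S if x not in f[x])
    if D in images:
        diagonal_always_missed = False
        break
```

The finite check mirrors the general proof: if $D = f(x_0)$ for some $x_0$, then $x_0 \in D \iff x_0 \notin D$, a contradiction.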
Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible ordered field
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH, or that imply CH, so if your set of axioms contains one of those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wondered whether one could show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put another way, an equivalent formulation of that (possibly open) problem is:
> Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity, however, is not good enough, as implicitly pointed out by the many users who engaged with my rambles and always managed to find counterexamples escaping every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy-of-infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
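The variant described above, where you choose how many copies of each item to take, is the unbounded knapsack; a standard dynamic-programming sketch (the function name is mine):

```python
def unbounded_knapsack(weights, values, capacity):
    """Max total value using any number of copies of each item, total weight <= capacity."""
    # best[w] = best achievable value with weight budget w
    best = [0] * (capacity + 1)
    for w in range(1, capacity + 1):
        for wt, val in zip(weights, values):
            if wt <= w:
                best[w] = max(best[w], best[w - wt] + val)
    return best[capacity]
```

For example, with weights (2, 3), values (3, 4) and capacity 7, the best packing takes items of weight 2, 2 and 3 for a total value of 10.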
Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = c\prod_{k=1}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^{n}}.$$
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, and the series converges by the ratio test
therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
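The partial sums above are exactly computable, so the construction can be illustrated directly; a small Python sketch with exact rationals (taking base $b = 10$, which gives Liouville's constant):

```python
from fractions import Fraction
from math import factorial

def liouville_partial_sum(b, M):
    # M-th partial sum of sum_{k >= 1} 1 / b^(k!), as an exact rational
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

sums = [liouville_partial_sum(10, M) for M in range(1, 5)]
# monotonically increasing, and bounded above by the geometric bound
# sum_{k >= 1} 1/10^k = 1/9
increasing = all(s < t for s, t in zip(sums, sums[1:]))
bounded = all(s < Fraction(1, 9) for s in sums)
```

Each partial sum is a finite object; only the claim that the increasing bounded sequence has a limit goes beyond what the code exhibits.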
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else I need to finish that book before commenting
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome
LaTeX supports many worldwide languages by means of some special packages. This article explains how to import and use those packages to create documents in Italian.
Contents
The Italian language has some accented words. For this reason the preamble of your document must be modified accordingly to support these characters and some other features.
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[italian]{babel}
\usepackage[T1]{fontenc}
\begin{document}
\tableofcontents
\vspace{2cm} %Add a 2cm space
\begin{abstract}
Questo è un breve riassunto dei contenuti del documento scritto in italiano.
\end{abstract}
\section{Sezione introduttiva}
Questa è la prima sezione, possiamo aggiungere alcuni elementi aggiuntivi e tutto digitato correttamente. Inoltre, se una parola è troppo lunga e deve essere troncato babel cercherà per troncare correttamente a seconda della lingua.
\section{Teoremi Sezione}
Questa sezione è quello di vedere cosa succede con i comandi testo definendo
\[ \lim x = \sin{\theta} + \max \{3.52, 4.22\} \]
\end{document}
There are two packages in this document related to the encoding and the special characters. These packages will be explained in the next sections.
If you are looking for instructions on how to use more than one language in a single document, for instance English and Italian, see the International language support article.
Modern computer systems allow you to input letters of national alphabets directly from the keyboard. In order to handle the variety of input encodings used for different groups of languages and/or on different computer platforms, LaTeX employs the inputenc package to set up the input encoding. In this case the package properly displays characters in the Italian alphabet. To use this package add the next line to the preamble of your document: \usepackage[utf8]{inputenc} The recommended input encoding is utf-8. You can use other encodings depending on your operating system.
For proper LaTeX document generation you must also choose a font encoding which supports the specific characters of the Italian language; this is accomplished by the fontenc package: \usepackage[T1]{fontenc} Even though the default encoding works well for Italian, using this specific encoding will avoid glitches with some specific characters. The default LaTeX encoding is OT1.
To extend the default LaTeX capabilities, for proper hyphenation and for translating the names of document elements, import the babel package for the Italian language. \usepackage[italian]{babel} As you can see in the example in the introduction, instead of "abstract" and "Contents" the Italian words "Sommario" and "Indice" are used.
Sometimes, for formatting reasons, some words have to be broken up into syllables separated by a - (hyphen) to continue the word on a new line. For example, matematica could become mate-matica. The package babel, whose usage was described in the previous section, usually does a good job of breaking up words correctly, but if this is not the case you can use a couple of commands in your preamble.
\usepackage{hyphenat} \hyphenation{mate-mati-ca recu-perare}
The first command imports the package hyphenat and the second line is a list of space-separated words with defined hyphenation rules. On the other hand, if you want a word not to be broken automatically, use the {\nobreak word} command within your document.
For more information see
I am running a Kernel Ridge Regression in R. Mathematically, the minimization problem to be solved is the following:
$$ \min_{\boldsymbol{\beta} \in \mathbb{R}^{d}} \ \sum_{i = 1}^{n} (y_{i} - \left \langle \boldsymbol{\beta}, \phi(\boldsymbol{x_{i}}) \right \rangle )^{2} + \lambda \left \| \boldsymbol{\beta} \right \|^{2}$$
In particular, I found the function krr() from the listdtr package (here official documentation) to be particularly interesting. Indeed, apart from its efficiency, considering the Gaussian kernel, it estimates a specific $\gamma_{j}$ for each of the $k$ variables. Considering for instance two individuals, $\boldsymbol{x_{i}}$ and $\boldsymbol{x_{h}}$, the associated Gaussian kernel is:
$$K(\boldsymbol{x_{i}}, \boldsymbol{x_{h}}) = \exp \left \{ -\sum_{j = 1}^{k}\gamma_{j}(x_{i,j} - x_{h,j})^{2} \right \}$$
where, for the sake of completeness of the description:
$$\left \langle \phi(\boldsymbol{x_{i}}),\phi(\boldsymbol{x_{h}}) \right \rangle = K(\boldsymbol{x_{i}}, \boldsymbol{x_{h}})$$
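To make the per-feature kernel concrete, here is a minimal pure-Python sketch of the formula above (my own re-implementation for illustration, not listdtr's internal code; the function name is mine):

```python
from math import exp

def ard_gaussian_kernel(x_i, x_h, gamma):
    # K(x_i, x_h) = exp(-sum_j gamma_j * (x_{i,j} - x_{h,j})^2),
    # with one bandwidth parameter gamma_j per feature
    return exp(-sum(g * (a - b) ** 2 for g, a, b in zip(gamma, x_i, x_h)))
```

Identical points give K = 1, and a larger gamma_j makes the kernel decay faster along feature j, which is why matching each estimated gamma to its variable matters.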
In presenting the code, please note that "holdout" is the set used to train the model and estimate lambda and the gammas (35% of the dataset), while "estim" is the remaining 65%, used to compute the MSE.
The code to perform the computation is the following:
library(listdtr)
set.seed(42)
krr_values <- krr(x = as.matrix(X_34_holdout), y = as.matrix(y_34_holdout))
test_predict_krr <- predict(krr_values, as.matrix(X_34_estim))
MSE_KRR_each_gamma <- colMeans((y_34_estim - test_predict_krr)^2)
Now, I have a few questions (consider that the source code is not available, or at least I could not find it):
1) In determining lambda and the gammas, what are the hyperparameter sets considered?
2) I understand I can extract the gammas (one for each of my 8 features) via:
krr_values$model$gamma
# realnumber1 realnumber2 realnumber3 realnumber4 realnumber5
# realnumber6 realnumber7 realnumber8
The problem now is: how do I know which of the variables each of them relates to? For instance, one of them is about 20, but running the same script with two different seeds, once I got about 18 in the fourth position and another time about 20 in the second position. It is clearly the $\gamma_{j}$ associated with the same variable, but how can I then match up all of the others?
A reply from someone who already has some experience with the aforementioned function would be particularly helpful, but any comment or reply will be highly appreciated.
I am an analyst at heart, despite my recent run of algebra posts. Augustin-Louis Cauchy is arguably one of the most influential mathematicians in history, pioneering rigor in the study of calculus and almost singlehandedly inventing complex analysis and real analysis, though he also made contributions to number theory, algebra, and physics.
One of the fundamental areas he studied was sequences and their notion of convergence. Suppose I give you a sequence of numbers, and ask you: what happens to this sequence if I keep appending terms forever? Would the path created by the sequence lead somewhere?
Let’s start with a nice, basic sequence, and assume we are living in the space of real numbers: s = (1, 1/2, 1/3, 1/4, 1/5, 1/6,\ldots)
That tuple notation denotes an ordered sequence of stuff.[1] The individual elements are terms of the sequence. The first term in s above is 1, called s_{1}. So s_{6} = 1/6.[2] The ellipses tell you that there is no end to the sequence; it goes on forever. First question: what’s the pattern?
If we’re going to study sequences, we need to know how to refer to one in general. What’s the 100th term of the sequence I showed you? The millionth? The general nth term of the sequence is a function of its index.[3]
In our sequence above, s_{1} = 1, s_{2} = 1/2, s_{3} = 1/3… notice a pattern? Can you see how to use the index to generate terms of the sequence? s_{n} = 1/n; that is, the nth term of the sequence is obtained by dividing 1 by the index n. So we can actually represent the sequence a bit more compactly: s = \left(\frac{1}{n}\right)
This tells us how to generate the entire sequence, term by term. Now we can speak about a generic sequence a with terms a_{n}, where n is the index.
Studying sequences – the Cauchy property
One of the biggest questions we can ask of a sequence is regarding convergence. Where do the terms lead, if anywhere? Convergence theory of both sequences and series (the sums of sequences) is quite a rabbit hole to dive down, but in short, we want to know if there is some “destination” of the sequence. Proving convergence is required; it’s not enough to just say where the terms go and call it good. For now, though, we’re going to play with a couple of sequences to see if we can get an intuition for what happens. (1, 1/2, 1/3, 1/4, 1/5,\ldots)
This one is a sequence of fractions, given by the general term s = \left(\frac{1}{n}\right). Where do you think these terms lead? If you guessed 0, then you are right.[4]
Let’s try another one.
(1,0,1,0,1,0,\ldots)
This sequence alternates between 0 and 1 forever. So where does it lead? In this case, nowhere. It’s just an eternal pong match between 0 and 1.
One last example:
(1, 1, 2, 3, 5, 8, 13, \ldots)
This one is the famous Fibonacci sequence, as I’m sure most of you recognized. The next term is generated by adding the two previous terms. So if we kept generating terms of the sequence forever, where would we go? The terms just keep getting bigger and bigger, so we would never actually converge to anything finite. We’d be walking down this Fibonacci tunnel forever, never actually reaching some number we would converge to.
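The three example sequences can be generated programmatically; a short Python sketch for playing with them:

```python
def harmonic(n):
    # terms of (1, 1/2, 1/3, ...)
    return 1 / n

def alternating(n):
    # terms of (1, 0, 1, 0, ...), starting at n = 1
    return n % 2

def fibonacci_terms(count):
    # first `count` Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, ...
    a, b = 1, 1
    out = []
    for _ in range(count):
        out.append(a)
        a, b = b, a + b
    return out
```

Printing a few hundred terms of each makes the three behaviors — settling toward 0, ponging forever, and blowing up — easy to see for yourself.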
All three of these are sequences, but only one of them has a very useful and famous property: the Cauchy property.
In a nutshell, a sequence that has the Cauchy property (or is Cauchy) has terms that get arbitrarily close to each other after a certain point. Put formally: for every tiny \epsilon > 0 there exists some number N such that for any two indices m and n that are greater than this N, we have |s_{m} - s_{n}| < \epsilon.
Let’s take this apart to understand what this definition means.
For every \epsilon > 0
This means that I should be able to pick any number, no matter how tiny, and there is a corresponding natural number N that depends on this epsilon. I could also write N_{\epsilon} to explicitly show this dependence. Let’s work with our sequence s = \left(\frac{1}{n}\right). If you picked \epsilon = \frac{1}{4}, then after which index are any two terms less than 1/4 apart? Is it N = 2?
Well, the difference between any two subsequent terms is obviously going to get smaller, so the largest difference between two subsequent terms would be between 1/2 and 1/3.[5]
But wait. Let’s go back to the definition. It says that we have to find an N such that the condition holds for any two terms afterwards. That means we can’t just check the difference between terms next to each other. We have to know that it’s true for any two, such as the second and fifth, or the fourth and twentieth. All the terms have to be bunching together after a certain point.
So in our case, for example, 1/2 - 1/5 = 3/10 > 1/4, so N = 2 isn’t our N.
How do we actually show a sequence is Cauchy?
Testing every possible \epsilon and finding its corresponding N is going to be impossible. It’s a pain just to do it for \epsilon = \frac{1}{4} above. This means we do need to attack this problem abstractly. Showing a sequence is Cauchy is actually pretty formulaic, because we have to satisfy a definition. We’ll walk through it on our sequence s = \left(\frac{1}{n}\right), because this exercise forces us to really understand the definition.
First, the definition says “for every \epsilon > 0”. That means we need to grab one, a generic one. So let’s do that. We say:
Let \epsilon > 0 to start out. Now we have one. The goal is to find the N in terms of our generic \epsilon such that any two terms of s = \left(\frac{1}{n}\right) with indices greater than this N have a difference no larger than our chosen (but arbitrary) \epsilon.
So let’s set up what we need: \left|\frac{1}{n}-\frac{1}{m}\right| < \epsilon. We’ll need to just play with it algebraically to get somewhere.\begin{aligned}\left|\frac{1}{n}-\frac{1}{m}\right| &= \left|\frac{m-n}{mn}\right|\end{aligned}
All I did in the line above was combine the fractions. Let’s see here: if I subtract n from m in the numerator, that’s less than not subtracting anything at all (we may assume m > n without loss of generality). That means that\begin{aligned}\left|\frac{1}{n}-\frac{1}{m}\right| &= \left|\frac{m-n}{mn}\right|\\&<\left|\frac{m}{mn}\right|\\&= \frac{1}{n}\end{aligned}
Why did I do this? I want to bound what looked icky by something that looks nicer to work with.[6] Now, how do we ensure that \frac{1}{n} is less than an \epsilon we don’t know? This is where our N comes in. What should N be in terms of our \epsilon to guarantee this?
If N = \frac{1}{\epsilon}, and n > N, then \frac{1}{n} < \frac{1}{N} = \frac{1}{1/\epsilon} = \epsilon.
That means that for N = \frac{1}{\epsilon}, and for any two m,n that are greater than N \begin{aligned}\left|\frac{1}{n}-\frac{1}{m}\right| &= \left|\frac{m-n}{mn}\right|\\&<\left|\frac{m}{mn}\right|\\&= \frac{1}{n}\\&<\epsilon\end{aligned}
We’re done! We took a generic \epsilon and found the corresponding N that ensures any two terms with an index greater than N are no further than \epsilon apart. Because we didn’t specify anything, we showed that this is true no matter what \epsilon you picked. Therefore, we conclude that the sequence s = \left(\frac{1}{n}\right) is a Cauchy sequence.
[Some notes: I want to point out here that a Cauchy sequence doesn’t have to be super neat. As long as it “calms down” after a certain finite point, it can be wild and crazy. For example, the first 10 terms of a sequence can be 1,000,000, and from the 11th term onward it can be something like \left(\frac{1}{n^{2}}\right). When we talk about sequences, we actually don’t care at all about the first bits of a sequence. It’s the tail we care about, almost always.]
So what does this give us?
The Cauchy property actually yields quite a few things that can help us when we study convergence of both sequences and series. Here are a few things we can prove if we know a sequence is Cauchy:
(1) Every Cauchy sequence of real or complex numbers is bounded. (2) A Cauchy sequence that has a convergent subsequence is itself convergent.
A subsequence is a sequence made from selecting certain terms of the sequence to make a new one, like all the odd terms, or all the evens, or every third one, etc. This can reduce the problem of showing convergence of a complicated sequence if we can find a subsequence that leads somewhere.
(3) Every Cauchy sequence of real numbers converges.
This is actually a very specific case of a more general statement, that every Cauchy sequence in n-dimensional real space is convergent. This is extremely helpful when we want to show a sequence converges, but we can’t really figure out what the limit might be by intuition.
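As a finite sanity check of the proof above for s = (1/n), the following sketch takes the proof’s choice N = ⌈1/\epsilon⌉ and verifies the Cauchy condition over a window of indices (a spot-check over finitely many pairs, not a substitute for the argument itself):

```python
from math import ceil

def spot_check_cauchy(eps, horizon=300):
    # the proof's choice of N for s_n = 1/n: any N >= 1/eps works
    N = ceil(1 / eps)
    indices = range(N + 1, N + horizon)
    # every pair of terms past N should be within eps of each other
    return all(abs(1 / n - 1 / m) < eps for n in indices for m in indices)

all_pass = all(spot_check_cauchy(eps) for eps in (0.25, 0.1, 0.01))
```

The check passes for every eps tried because past index N, all terms lie in the shrinking interval (0, 1/(N+1)], exactly the bound the proof exploits.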
Conclusion
This post was meant to give a slight taste of mathematical analysis in the form of studying sequences of numbers with the Cauchy property. Of course, we can get more general than this, and study Cauchy sequences of anything we like: functions, groups, vectors, and more abstract objects than that. We’ll tackle more concepts in analysis in future posts.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Footnotes
1. We can make sequences of anything we want to: matrices, numbers, functions, or even more general objects. We’ll stick with numbers for this discussion.
2. Some people start indexing terms at 0. Either one is fine; just be aware of which one you have or choose.
3. Actually, a sequence is a function that maps an index, typically the natural (counting) numbers, onto some other space.
4. Again, we can prove this formally, but I want to develop a sense of intuition here without getting bogged down in epsilons and scaring people off.
5. We left out the first term, because we want to see if, after the second term, all terms are less than 1/4 apart.
6. In my experience, analysis is a game of bounding arguments. It’s a neat way to sidestep some really nasty arithmetic or calculus.
I'm reading the paper MST Construction in O(log log n) Communication Rounds in a Clique and trying to understand the correctness analysis, in page 5.
It shows, by induction on k (the phase number), that the spanning tree of every cluster F (from phase k) is indeed an MST fragment of that cluster of vertices.
The claim is of course trivial for the base case, as the clusters are singletons of vertices. To show the induction step, it suffices to show that each edge $e$ (where $X\left(e\right)=\left(F_{1},F_{2}\right),\ F_{1}\in\hat{F_{1}},\ F_{2}\in\hat{F_{2}}$, where $F_{1},F_{2}$ are the original clusters, that is, the connected components of the previous round, now contracted into one vertex, and $\hat{F_{1}},\hat{F_{2}}$ are their super clusters, that is, the connected components containing them, created as part of this procedure in phase $k$) chosen by the procedure Const_Frags is indeed a minimum weight outgoing edge (MWOE) of either $H_{1}=\underset{F\in\hat{F_{1}}}{\bigcup}F$ or $H_{2}=\underset{F\in\hat{F_{2}}}{\bigcup}F$. They assume wlog that $e$ was chosen as part of $F_1$'s $\mu$ lightest outgoing weight edges, and want to show that $e$ is indeed the MWOE of $H_1$. So they assume otherwise, that there exists $e'$ of lower weight than $e$ that is outgoing from $H_1$.
What I don't understand is: why do they assume that $e'$ is an edge of cluster $F_1$ as well? The only thing we know about $e'$ is that it is outgoing from $H_1$. This means that it can be outgoing from another cluster, $F'\in\hat{F_{1}}\backslash\left\{ F_{1}\right\}$. I would appreciate any help! Thanks!
I was trying to calculate the electric field on any point of the $z$-axis of a layer with the following properties:
It has a thickness of $a$
It is placed along the XY plane
It is infinite in extension
It has a uniform density of charge $\rho_o$
So I decided to do the following: let's imagine that the entire layer is made out of concentric pipe segments centered at $(0,0,0)$:
(Here you can see why I'm not a renowned artist.) Each segment has a thickness of $dr$ and therefore a volume of $dV=(2\pi r) a\, dr$. It also has a charge $dq=\rho_o dV$.
For any point $(x,y,z)$ outside of the layer itself the electric field must point upwards, and it must be the sum of the fields from all the infinitesimal rings under it. The further out a ring is, the bigger the angle $\theta$ between $dE$ and the $z$-axis. So we have:
$$E=\int_S dE = \int_S \frac{1}{4\pi \epsilon_o} \frac{\rho_o dV}{r^2+z^2}\cos{\theta} = \int_S \frac{1}{4\pi \epsilon_o} \frac{\rho_o dV}{r^2+z^2}\frac{z}{\sqrt{r^2+z^2}}$$
Taking the constants out, integrating and simplifying we get:
$$E= \frac{\rho_o a}{2\epsilon_o}$$
Now, I don't think what I got is right even if I'm not able to see where I failed. The field apparently doesn't depend on how close or far away we are from the layer, and that seems wrong. So I either made a math mistake or my premises were wrong. Where did I get it wrong and how?
PS: I realize I could express the field in vectorial form, but it's the process of deriving the equation above that I'm interested in. |
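The surprising z-independence can be checked numerically. The sketch below (my own discretization of the same ring integral, in units where $\rho_o = a = \epsilon_o = 1$, treating the layer as thin relative to $z$, as the derivation effectively does) sums the ring contributions with the midpoint rule:

```python
def field_on_axis(z, rho=1.0, a=1.0, eps0=1.0, r_max=5000.0, n=200000):
    # sum ring contributions dE_z = (1/(4*pi*eps0)) * dq * z / (r^2 + z^2)^(3/2)
    # with dq = rho * a * (2*pi*r) * dr; the 4*pi and 2*pi partly cancel below
    dr = r_max / n
    total = 0.0
    for k in range(n):
        r = (k + 0.5) * dr  # midpoint of the k-th ring
        total += r * dr * z / (r * r + z * z) ** 1.5
    return (rho * a / (2.0 * eps0)) * total
```

With a finite r_max the exact value is (rho*a/(2*eps0)) * (1 - z/sqrt(r_max^2 + z^2)), which tends to rho*a/(2*eps0) as r_max grows, independent of z: the constancy of the field really is a feature of the infinite sheet, not a calculation mistake.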
Implementation of CERN secondary beam lines T9 and T10 in BDSIM / D'Alessandro, Gian Luigi (CERN ; JAI, UK) ; Bernhard, Johannes (CERN) ; Boogert, Stewart (JAI, UK) ; Gerbershagen, Alexander (CERN) ; Gibson, Stephen (JAI, UK) ; Nevay, Laurence (JAI, UK) ; Rosenthal, Marcel (CERN) ; Shields, William (JAI, UK) CERN has a unique set of secondary beam lines, which deliver particle beams extracted from the PS and SPS accelerators after their interaction with a target, reaching energies up to 400 GeV. These beam lines provide a crucial contribution for test beam facilities and host several fixed target experiments. [...] 2019 - 3 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW069 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW069
HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN) ; Bouvard, Aymeric (CERN) ; Charitonidis, Nikolaos (CERN) ; Kadi, Yacine (CERN)/HiRadMat experiments and facility support teams The ever-expanding requirements of high-power targets and accelerator equipment has highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPRB085 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPRB085
Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN) ; Booth, Alexander (U. Sussex (main) ; Fermilab) ; Charitonidis, Nikolaos (CERN) ; Chatzidaki, Panagiota (Natl. Tech. U., Athens ; Kirchhoff Inst. Phys. ; CERN) ; Karyotakis, Yannis (Annecy, LAPP) ; Nowak, Elzbieta (CERN ; AGH-UST, Cracow) ; Ortega Ruiz, Inaki (CERN) ; Sala, Paola (INFN, Milan ; CERN) For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated “H2-VLE” and “H4-VLE”, have been constructed and successfully commissioned. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW064 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW064
The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN ; Illinois U., Urbana (main)) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; Cholak, Serhii (Taras Shevchenko U.) ; D'Alessandro, Gian Luigi (Royal Holloway, U. of London) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) ; Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN’s accelerator complex up to 2040 and its scientific infrastructure through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW063 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW063 Record dettagliato - Record simili 2019-10-09 06:00
The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN) ; Banerjee, Dipanwita (CERN) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; D'Alessandro, Gian Luigi (CERN) ; Doble, Niels (CERN) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward. The goal of the experiment is to measure ${\rm{BR}}(K_L \rightarrow \pi^0v\bar{v})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm{BR}}(K^+ \rightarrow \pi^+v\bar{v})$ of NA62. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW061 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW061
Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe ; CERN) ; Bastian, Yan (CERN) ; Bernhard, Axel (KIT, Karlsruhe) ; Bonura, Marco (U. Geneva (main)) ; Bordini, Bernardo (CERN) ; Bortot, Lorenzo (CERN) ; Favre, Mathieu (CERN) ; Lindstrom, Bjorn (CERN) ; Mentink, Matthijs (CERN) ; Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN’s LHC can be impacted by the circulating beam in case of specific failure cases. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPTS066 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPTS066
Shashlik calorimeters with embedded SiPMs for longitudinal segmentation / Berra, A (INFN, Milan Bicocca ; Insubria U., Varese) ; Brizzolari, C (INFN, Milan Bicocca ; Insubria U., Varese) ; Cecchini, S (INFN, Bologna) ; Chignoli, F (INFN, Milan Bicocca ; Milan Bicocca U.) ; Cindolo, F (INFN, Bologna) ; Collazuol, G (INFN, Padua) ; Delogu, C (INFN, Milan Bicocca ; Milan Bicocca U.) ; Gola, A (Fond. Bruno Kessler, Trento ; TIFPA-INFN, Trento) ; Jollet, C (Strasbourg, IPHC) ; Longhin, A (INFN, Padua) et al. Effective longitudinal segmentation of shashlik calorimeters can be achieved taking advantage of the compactness and reliability of silicon photomultipliers. These photosensors can be embedded in the bulk of the calorimeter and are employed to design very compact shashlik modules that sample electromagnetic and hadronic showers every few radiation lengths. [...] 2017 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 64 (2017) 1056-1061
Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE) ; Tiberio, A (INFN, Florence ; U. Florence (main)) ; Adriani, O (INFN, Florence ; U. Florence (main)) ; Berti, E (INFN, Florence ; U. Florence (main)) ; Bonechi, L (INFN, Florence) ; Bongi, M (INFN, Florence ; U. Florence (main)) ; Caccia, Z (INFN, Catania) ; D'Alessandro, R (INFN, Florence ; U. Florence (main)) ; Del Prete, M (INFN, Florence ; U. Florence (main)) ; Detti, S (INFN, Florence) et al. The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017 - 22 p. - Published in : JINST 12 (2017) P03023
Baby MIND: A magnetised spectrometer for the WAGASCI experiment / Hallsjö, Sven-Patrik (Glasgow U.)/Baby MIND The WAGASCI experiment being built at the J-PARC neutrino beam line will measure the ratio of cross sections from neutrinos interacting with water and scintillator targets, in order to constrain neutrino cross sections, essential for the T2K neutrino oscillation measurements. A prototype Magnetised Iron Neutrino Detector (MIND), called Baby MIND, has been constructed at CERN and will act as a magnetic spectrometer behind the main WAGASCI target. [...] SISSA, 2018 - 7 p. - Published in : PoS NuFact2017 (2018) 078 Fulltext: PDF; External link: PoS server In : 19th International Workshop on Neutrinos from Accelerators, Uppsala, Sweden, 25 - 30 Sep 2017, pp.078
We have shown that a holomorphic map \(f: G\to \mathbb{C}\) can be expressed as a power series, which bears a certain similarity to polynomials, and a feature of polynomials is that if \(a\) is a root, or zero, of a polynomial \(p\), we can factor \(p\) as \(p(z)=(z-a)^n q(z)\) where \(q\) is another polynomial with the property that \(q(a)\neq 0\). Now, does this similarity with polynomials extend to factorization? In fact it does, as we shall see.
Let \(f: G\to \mathbb{C}\) be a holomorphic map that is not identically zero, with \(G\subseteq \mathbb{C}\) a domain and \(f(a)=0\). It is our claim that there exists a smallest natural number \(n\) such that \(f^{(n)}(a)\neq 0\). So suppose there is no such \(n\), that is, \(f^{(k)}(a)=0\) for all \(k\in\mathbb{N}\). Let \(B_\rho(a)\) be the largest open ball with center \(a\) contained in \(G\); since we have that \[f(z)=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k\] it follows that \(f\) is identically zero on \(B_\rho(a)\). Fix a point \(z_0\in G\) and let \(\gamma : [0,1]\to G\) be a continuous curve from \(a\) to \(z_0\). By the paving lemma there is a finite partition \(0=t_1 < t_2 <\cdots <t_m=1\) and an \(r>0\) such that \(B_r(\gamma(t_k))\subseteq G\) for all \(k\) and \(\gamma([t_{k-1},t_k])\subseteq B_r(\gamma(t_k))\). Note that \(B_r(\gamma(t_1))=B_r(a)\subseteq B_\rho(a)\), so \(f\) is identically zero on \(B_r(\gamma(t_1))\), but since \(\gamma([t_1,t_2])\subseteq B_r(\gamma(t_1))\) we must have that \(f\) is identically zero on \(B_r(\gamma(t_2))\), and so on finitely many times until we reach \(\gamma(t_m)\) and conclude that \(f\) is identically zero on \(B_r(\gamma(t_m))=B_r(z_0)\). Since \(z_0\) was chosen arbitrarily, we must conclude that \(f\) is identically zero on all of \(G\). A contradiction.
Now, let \(n\) be the smallest natural number such that \(f^{(n)}(a)\neq 0\); then we must have that \(f^{(k)}(a)=0\) for \(k < n\). We then get, for \(z\in B_\rho(a)\): \[\begin{split} f(z) &=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=n}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{n+k} \\&=(z-a)^n \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}. \end{split}\] Now, let \(\tilde{f}(z)=\sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}\) and note that \(\tilde{f}\) is holomorphic on \(B_\rho(a)\) with \(\tilde{f}(a)=\frac{f^{(n)}(a)}{n!}\neq 0\). We then define a map \(g\) given by \[g(z)=\begin{cases} \tilde{f}(z), & z\in B_\rho(a) \\ \frac{f(z)}{(z-a)^n}, & z\in G\setminus \{a\}\end{cases}\] which is well defined since the two expressions agree on \(B_\rho(a)\setminus\{a\}\) by the computation above, and note that \[f(z)=(z-a)^n g(z),\] showing the existence of a factorization with our desired properties. Showing that this representation is unique is left as an exercise 😉
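The order \(n\) of the zero can even be estimated numerically: if \(f(z)=(z-a)^n g(z)\) with \(g(a)\neq 0\), then \(|f(a+\varepsilon)|\approx |g(a)|\,\varepsilon^n\) for small \(\varepsilon\), so the slope of \(\log|f|\) against \(\log\varepsilon\) recovers \(n\). A small Python sketch (the test functions and radii are my own illustrative choices):

```python
import cmath, math

def zero_order(f, a, eps1=1e-2, eps2=1e-3):
    # If f(z) = (z - a)^n g(z) with g(a) != 0, then |f(a + eps)| ~ |g(a)| eps^n,
    # so the log-log slope between two small radii estimates n.
    m1 = abs(f(a + eps1))
    m2 = abs(f(a + eps2))
    return round(math.log(m1 / m2) / math.log(eps1 / eps2))

# sin(z) - z = -z^3/6 + z^5/120 - ... has a zero of order 3 at 0,
# while exp(z) - 1 = z + z^2/2 + ... has a simple zero there.
print(zero_order(lambda z: cmath.sin(z) - z, 0))  # 3
print(zero_order(lambda z: cmath.exp(z) - 1, 0))  # 1
```

The rounding step works because \(\log|g|\) varies slowly compared with \(n\log\varepsilon\) once \(\varepsilon\) is small.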
References
Berg, C. (2016).
Complex analysis. Copenhagen: Department of Mathematical Sciences, University of Copenhagen.
Suppose you have an open map \(p\) between topological spaces and a subset \(A\) of \(p\)’s domain such that \(p(A)\) is open. Can you then conclude that \(A\) is open? Nope! Consider the spaces \(X=\{x_1,x_2\}\) and \(Y=\{y_1,y_2\}\) with topologies \(\tau_X=\{\varnothing, X, \{x_1\}\}\) and \(\tau_Y=\{\varnothing,Y,\{y_1\}\}\), respectively, and let \(p: X\times Y\to X\) be the projection onto the first factor. This is an open map. If we consider \(A=X\times\{y_2\}\) we see that \(A\) is not open in \(X\times Y\), but we have that \(p(A)=p(X\times\{y_2\})= X\), which is trivially open in \(X\).
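Since everything here is finite, a computer can enumerate the product topology and check each claim directly. A small Python sketch of the counterexample above (the encoding of open sets as frozensets is my own):

```python
from itertools import chain, combinations

X, Y = ("x1", "x2"), ("y1", "y2")
open_X = [frozenset(), frozenset({"x1"}), frozenset(X)]
open_Y = [frozenset(), frozenset({"y1"}), frozenset(Y)]

# Basis of the product topology on X x Y: products of open sets.
basis = [frozenset((x, y) for x in U for y in V) for U in open_X for V in open_Y]

# The open sets of X x Y are all unions of basis elements.
open_XY = set()
for combo in chain.from_iterable(combinations(basis, k) for k in range(len(basis) + 1)):
    open_XY.add(frozenset().union(*combo))

def p(S):
    """Projection onto the first factor."""
    return frozenset(x for (x, y) in S)

# p is an open map: the image of every open set is open in X.
assert all(p(U) in set(open_X) for U in open_XY)

A = frozenset((x, "y2") for x in X)   # A = X x {y2}
assert A not in open_XY               # A is not open in X x Y ...
assert p(A) == frozenset(X)           # ... yet p(A) = X is open.
```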
I came across this little problem recently: if \(X\) is a topological space with exactly two components, and given an equivalence relation \(\sim\), what can we say about the quotient space \(X/{\sim}\)? It turns out that \(X/{\sim}\) is connected if and only if there exist \(x,y\in X\), with \(x\) and \(y\) in separate components, such that \(x\sim y\).
Suppose first that there exist \(x,y\in X\) in separate components such that \(x\sim y\). Let \(C_1\) and \(C_2\) be the two components of \(X\), say \(x\in C_1\) and \(y\in C_2\), and let \(p: X \to X/{\sim}\) be the natural projection. Since \(p\) is a quotient map it is surely continuous, and since the image of a connected space under a continuous function is connected, both \(p(C_1)\) and \(p(C_2)\) are connected. But since \(x\sim y\) we have that \(p(C_1)\cap p(C_2)\neq \varnothing\), so \(X/{\sim}\) consists of a single component, because \[p(C_1)\cup p(C_2) = p(C_1\cup C_2)=p(X)=X/{\sim},\] as wanted.
To show the reverse implication, we prove the contrapositive: if no \(x\in C_1\) and \(y\in C_2\) satisfy \(x\sim y\), then \(X/{\sim}\) is not connected. Assume the hypothesis and note that \(p(C_1)\) and \(p(C_2)\) are then disjoint connected subspaces whose union equals all of \(X/{\sim}\) (since \(p\) is surjective). Moreover, each \(p(C_i)\) is open in \(X/{\sim}\): since no equivalence crosses the components, \(p^{-1}(p(C_i))=C_i\), which is open in \(X\) because a space with exactly two components has clopen components. So \(X/{\sim}\) is the union of two disjoint non-empty open sets, showing that \(X/{\sim}\) is not connected. As wanted.
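A finite toy case can be checked mechanically. Below, \(X=\{a,b\}\) carries the discrete topology, so it has exactly two components; gluing a point of one component to a point of the other yields a connected quotient, while the trivial relation does not. (The encoding of topologies as sets of frozensets is my own.)

```python
from itertools import combinations

def quotient_opens(classes, opens):
    # Open sets of the quotient X/~, whose points are the equivalence
    # classes: a set of classes is open iff its union is open in X.
    out = []
    for k in range(len(classes) + 1):
        for combo in combinations(classes, k):
            if frozenset().union(*combo) in opens:
                out.append(frozenset(combo))
    return out

def is_connected(points, opens):
    # A space is connected iff its only clopen subsets are trivial.
    total = frozenset(points)
    clopens = [U for U in opens if total - U in opens]
    return all(U in (frozenset(), total) for U in clopens)

# X = {a, b} with the discrete topology: two components, {a} and {b}.
opens_X = {frozenset(), frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})}

# Trivial relation: no points of different components are identified.
classes_id = [frozenset({"a"}), frozenset({"b"})]
assert not is_connected(classes_id, quotient_opens(classes_id, opens_X))

# Glue a ~ b, one point from each component: the quotient is connected.
classes_glue = [frozenset({"a", "b"})]
assert is_connected(classes_glue, quotient_opens(classes_glue, opens_X))
```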
It’s soon exam time, so I’m practicing proofs in complex analysis. Right now that means Cauchy’s integral formula for \(n\)’th derivatives.
Let \(G\) be a domain of the complex numbers and \(f: G\to \mathbb{C}\) a holomorphic function. We first want to show that \(f\) can be expressed as a power series, such that $$f(z)=\sum^\infty_{n=0} a_n(z-a)^n$$ for some \(a\in\mathbb{C}\); let \(B_\rho(a)\) be the largest open ball at \(a\) contained in \(G\). We claim that $$a_n = \frac{1}{2\pi i} \oint\frac{f(z)}{(z-a)^{n+1}}dz,$$ where the integral is taken over a circle \(\partial B_r(a)\) with \(0<r<\rho\). By the Cauchy integral formula we have, for a fixed \(z_0\in B_r(a)\), $$f(z_0)=\frac{1}{2\pi i} \oint \frac{f(z)}{z-z_0}dz,$$ and by elementary calculations we can, for \(z\in \partial B_r(a)\), write $$\frac{1}{z-z_0} = \frac{1}{z-a} \frac{1}{1-\frac{z_0 -a}{z-a}}=\frac{1}{z-a}\sum^\infty_{n=0} \left(\frac{z_0-a}{z-a}\right)^n,$$ since \(|z_0-a|<|z-a|=r\), and from above we then have $$\begin{split} f(z_0)& = \frac{1}{2\pi i} \oint \frac{f(z)}{z-z_0}dz \\ &=\frac{1}{2\pi i}\oint\sum^\infty_{n=0} \frac{f(z)(z_0-a)^n}{(z-a)^{n+1}}dz\\ &=\frac{1}{2\pi i}\sum^\infty_{n=0}\oint \frac{f(z)(z_0-a)^n}{(z-a)^{n+1}}dz\\&=\frac{1}{2\pi i}\sum^\infty_{n=0}\oint \frac{f(z)}{(z-a)^{n+1}}dz\,(z_0-a)^n\\ &=\sum^\infty_{n=0}a_n (z_0-a)^n,\end{split}$$ as wanted. We see that \(f\) is given by a power series and thus infinitely complex differentiable, and the derivatives satisfy $$f^{(n)}(a)=\frac{n!}{2\pi i}\oint\frac{f(z)}{(z-a)^{n+1}}dz,$$ as desired.
The observant reader will have noticed that I didn’t check that the sums were uniformly convergent, which is needed in order to switch the sum and integral signs, but this is an easy application of the Weierstraß \(M\)-test.
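These coefficient integrals are also easy to check numerically: the midpoint rule on a parametrized circle converges extremely fast for analytic integrands. A Python sketch, with the function, center, and radius chosen purely for illustration:

```python
import cmath, math

def contour_coefficient(f, a, n, radius=1.0, samples=2000):
    """Approximate a_n = (1/2*pi*i) * integral of f(z)/(z-a)^(n+1) dz over a
    circle of the given radius around a, via the midpoint rule on the
    parametrization z(t) = a + r e^{it}, dz = i r e^{it} dt."""
    total = 0j
    step = 2 * math.pi / samples
    for k in range(samples):
        t = (k + 0.5) * step
        z = a + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * step
        total += f(z) / (z - a) ** (n + 1) * dz
    return total / (2j * math.pi)

# f(z) = e^z has f^(n)(0) = 1 for every n, so a_n should equal 1/n!.
for n in range(4):
    assert abs(contour_coefficient(cmath.exp, 0, n) - 1 / math.factorial(n)) < 1e-9
```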
References
Berg, C. (2016).
Complex analysis. Copenhagen: Department of Mathematical Sciences, University of Copenhagen.
I was trying to solve the equation $x=1+\sqrt{x}$ for real $x$, though I didn't solve it correctly. I'm curious as to why that is, and what else I need to initially consider in the domain of the function.
I started off by recognising that $x \geq 0$ for the square root to be real (I know when $x=0$ it is not a solution). Squaring both sides and rearranging; $$x^2 -3x + 1=0$$ Finding the solutions to this equation you obtain; $x=\frac{3\pm\sqrt{5}}{2}$. Both of these solutions to that equation are greater than zero, but only $x=\frac{3 + \sqrt{5}}{2}$ is the solution to the original. Why is that?
Is there some other "domain" restriction I must consider? Or for every question that involves a root, must I numerically test the candidates (is there no way to get around this)?
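For concreteness, here is a quick numerical check of the two candidates. Rewriting the equation as $\sqrt{x} = x - 1$ shows that any genuine solution must also satisfy $x \geq 1$; the smaller root violates this, which is why squaring introduced it.

```python
import math

# The two candidates from x^2 - 3x + 1 = 0.
root_plus = (3 + math.sqrt(5)) / 2    # ~ 2.618
root_minus = (3 - math.sqrt(5)) / 2   # ~ 0.382

def solves_original(x):
    return math.isclose(x, 1 + math.sqrt(x))

assert solves_original(root_plus)
assert not solves_original(root_minus)   # 1 + sqrt(0.382...) ~ 1.618, not 0.382
# Squaring sqrt(x) = x - 1 also admits sqrt(x) = -(x - 1); the smaller root
# solves that sign-flipped equation instead (note it has x < 1, so x - 1 < 0).
```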
Thanks
This is only a comment about classes of rings in which no counter-examples can be found.
Since a one-dimensional GCD domain is a Bézout domain [1, Corollary 3.9], GCD domains will not provide any counter-example.
We will show that Dedekind domains will not provide any counter-example either. To do so, we will use the following well-known lemma (see e.g., Matsumura's "Commutative Ring Theory") and two subsequent claims.
Lemma. Let $R$ be a commutative ring with identity. Let $\mathfrak{a}, \mathfrak{b}, \mathfrak{c}$ be ideals of $R$. Assume that $\mathfrak{c}$ is co-maximal with both $\mathfrak{a}$ and $\mathfrak{b}$, i.e., $\mathfrak{a} + \mathfrak{c} = \mathfrak{b} + \mathfrak{c} = R$. Then $\mathfrak{c}$ is co-maximal with $\mathfrak{ab}$.
Proof. $R = (\mathfrak{a} + \mathfrak{c})(\mathfrak{b} + \mathfrak{c}) \subseteq \mathfrak{ab} + \mathfrak{c} \subseteq R$.
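In the special case $R = \mathbb{Z}$, where $(n) + (m) = (\gcd(n, m))$, co-maximality of $(a)$ and $(c)$ is just $\gcd(a, c) = 1$, so the lemma reduces to a familiar fact that can be checked by brute force (a small illustrative sketch):

```python
from math import gcd

# In Z, the ideal sum (n) + (m) equals (gcd(n, m)), so (a) and (c) are
# co-maximal exactly when gcd(a, c) == 1.  The lemma then says:
#   gcd(a, c) == 1 and gcd(b, c) == 1  imply  gcd(a*b, c) == 1.
for a in range(1, 40):
    for b in range(1, 40):
        for c in range(1, 40):
            if gcd(a, c) == 1 and gcd(b, c) == 1:
                assert gcd(a * b, c) == 1
```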
The following claim is a general fact about commutative domains.
Claim 1. Let $R$ be a commutative domain with identity. Let $M(R)$ be the submonoid of $R \setminus \{0\}$ generated by the units of $R$ together with the prime elements $p$ such that $Rp$ is a maximal ideal of $R$. Let $S(R)$ be the subset of $R \setminus \{0\}$ consisting of the non-zero elements $a$ such that $Ra + Rb$ is principal for every $b \in R$. Then $M(R) \subseteq S(R)$.
Proof. Let $a = up_1^{\alpha_1} \cdots p_n^{\alpha_n} \in M(R)$ where $u$ is a unit of $R$ and where the elements $p_i$ are distinct prime elements such that $Rp_i$ is a maximal ideal for every $i$. Let us show by induction on $s = \alpha_1 + \cdots + \alpha_n$ that $a$ belongs to $S(R)$. If $s = 0$, it is immediate. Let us suppose that $s > 0$ and let $b \in R$. We can certainly assume that $b \neq 0$. Let $\beta_i$ be the largest integer such that $p_i^{\beta_i}$ divides both $a$ and $b$ and set $d = p_1^{\beta_1} \cdots p_n^{\beta_n}$. If $d$ is not a unit, then the induction hypothesis applies to $a/d$ and yields a Bézout relation with $b/d$ so that $a \in S(R)$. Otherwise, the element $b$ cannot be divided by any of the $p_i$. Thus $Rb$ is co-maximal with $Rp_i$ for every $i$. By the above lemma, we deduce that $Rb$ is co-maximal with $Ra$, and hence $a \in S(R)$.
Our last claim establishes that no counter-example to OP's condition can be found in a Dedekind domain.
Claim 2. Let $R$ be a Dedekind domain and let $a \in R$ be such that $Ra \cap Rb$ is principal for every $b \in R$. Then $a \in M(R)$.
Proof. Let $a$ be a non-zero element in $R \setminus M(R)$. By hypothesis, there is a maximal ideal $\mathfrak{m}$ appearing in the decomposition of $Ra$ which is not principal. By the Chinese Remainder Theorem, we can find $b \in R$ such that $b \in \mathfrak{m} \setminus \mathfrak{m}^2$ and $b$ doesn't belong to any of the other maximal ideals containing $a$. In the group of fractional ideals of $R$, we have $Ra \cap Rb = (Rab)\mathfrak{m}^{-1}$ so that $Ra \cap Rb$ cannot be principal.
[1] P. Sheldon, "Prime ideals in GCD domains".
Following Barwise and Perry (1985, p. 153), I distinguish between Soames’s argument and Soames’s derivation. Soames’s derivation shows that any circumstantialist theory committed to certain natural semantic assumptions, including that names, indexicals, and variables are directly referential, predicts that (5) entails (6):
I will not reproduce all of Soames’s derivation here. To illustrate the role of the Representation Thesis, I will focus on one fragment that reveals Soames’s argument to be a fineness of grain argument. This fragment relies on the semantic theses Compositionality, Conjunction, Direct Reference, and Existential Quantification:
Compositionality
If S\(_{1}\) and S\(_{2}\) are non-intensional sentences/formulas with the same grammatical structure, which differ only in the substitution of constituents with the same semantic contents (relative to their respective contexts and assignments), then the semantic contents of S\(_{1}\) and S\(_{2}\) will be the same (relative to those contexts and assignments).
Conjunction
A sentence ⌜A and B⌝ is true at a circumstance w, relative to a context c and assignment function f, if and only if both A and B are true at w relative to c and f.
Direct Reference
Proper names, indexicals (relative to contexts), and variables (relative to assignments) are directly referential.
Existential Quantification
A sentence ⌜∃v \(\phi\)⌝ is true at a circumstance w, relative to a context c and assignment f, if and only if \(\phi \) is true at w relative to c and some v-variant of f (where a v-variant of f is a function that differs from f at most in what it assigns to v).
Given these assumptions, the fragment comprises the following lemmas:
Lemma 1
(11) is true at every circumstance at which (10) is true:
$$\begin{aligned}&\text{`Hesperus' refers to Hesperus, and `Phosphorus' refers to Hesperus.}\end{aligned}$$
(10)
$$\begin{aligned}&\exists x (\text{`Hesperus' refers to } x \text{ and `Phosphorus' refers to } x). \end{aligned}$$
(11)
Proof
Assume that (10) is true at some circumstance w relative to some context c and assignment f. By Direct Reference, both the name ‘Hesperus’ and the variable ‘x’ relative to the ‘x’-variant of f that assigns Venus to ‘x’ directly refer to Venus. So via Compositionality, the truth of (10) at w relative to c and f guarantees the truth at w of the open formula
$$\begin{aligned} \hbox {`Hesperus' refers to } x \hbox { and `Phosphorus' refers to } x \end{aligned}$$
relative to c and the ‘x’-variant of f that assigns Venus to ‘x’, because the formulas have the same content (relative to c and the respective assignments). So by the right-to-left direction of Existential Quantification, (11) is true at w relative to c and f. \(\square \)
Lemma 2
The conjunction ⌜(10) and (11)⌝ is true at exactly the circumstances at which (10) is true.

Proof
Left to right: assume that ⌜(10) and (11)⌝ is true at a circumstance w (relative to c and f, omitted henceforth). Then by Conjunction, (10) is true at w. Right to left: assume that (10) is true at w. Then by Lemma 1, (11) is true at w. Hence both (10) and (11) are true at w, and so by Conjunction is ⌜(10) and (11)⌝. \(\square \)
The set of circumstances at which a sentence is true is the truth-conditional content of the sentence. Thus Lemma 2 entails that (10) and ⌜(10) and (11)⌝ are a representational pair. Given Circumstantialism (and hence the Representation Thesis), it follows that (10) and ⌜(10) and (11)⌝ express the same proposition.

It is at this stage that Soames’s argument becomes a fineness of grain argument. Soames’s derivation as a whole goes through only if the result above, that (10) and ⌜(10) and (11)⌝ express the same proposition, holds. Distinguishing between the propositions expressed by (10) and ⌜(10) and (11)⌝ would block the problematic derivation, and Soames argues that other attempts to block the derivation face independent problems. If these further arguments are correct, we have independent grounds for distinguishing propositions that Circumstantialism identifies.
Ripley (2012), building on Priest’s (2005) semantics for impossible and open worlds, shows how to reject Soames’s argument. The account of quantification in Priest illustrates this. Ripley adopts Priest’s notion of a matrix:

Call a formula a matrix, if all its free terms are variables, no free variable has multiple occurrences and—for the sake of definiteness—the free variables that occur in it are the least variables greater than all the variables bound in the formula, in some canonical ordering, in ascending order from left to right. (Priest 2005, p. 17)
Priest and Ripley adopt the following notational convention: where C is any matrix containing exactly the variables \(v_{i}, \ldots, v_{j}\) free, and \(t_{i}, \ldots , t_{j}\) is a sequence of terms (some of which may be variables), \(C(t_{i},\ldots , t_{j})\) is the unique formula that results from substituting \(t_{i}\) for \(v_{i}\), ..., and \(t_{j}\) for \(v_{j}\). Given this convention, every formula is the result of substituting a unique sequence of terms in a unique matrix. The unique matrix from which a formula A results via the appropriate substitution of terms is called the matrix, \(\overline{A}\), of A. So \(\overline{(11)}\) is (12) (assuming a natural alphabetic ordering of the variables), and (13) is a notational variant of (11) given the convention above:
$$\begin{aligned}&\exists x (y \hbox { refers to }x\hbox { and }z\hbox { refers to }x).\end{aligned}$$
(12)
$$\begin{aligned}&\exists x (y \hbox { refers to }x\hbox { and }z\hbox { refers to }x)(\hbox{`Hesperus'}, \hbox{`Phosphorus'}) \end{aligned}$$
(13)
At logically impossible circumstances, Priest and Ripley treat matrices as n-place predicates that are assigned arbitrary extensions. For atomic formulas, Ripley introduces a general purpose denotation function, or \( [\![ \, \, ]\!]\), that maps terms to objects or individuals and maps predicates (and matrices) to intensions (functions from circumstances to extensions). Each denotation function \( [\![ \, \, ]\!]\) also determines a unique assignment f of values to variables. We can now state the rule for impossible quantification from Ripley (2012, p. 110):

Impossible Quantification
For a quantified sentence \(A = \overline{A}(t_{1},t_{2},\ldots ,t_{n})\): A is true at a circumstance w (relative to a context c—but I’ll follow Ripley in ignoring this while discussing his argument) if and only if \(\left\langle [\![ \,t_{1}\, ]\!], [\![ \,t_{2}\, ]\!],\ldots , [\![ \,t_{n}\, ]\!] \right\rangle \in [\![ \,\overline{A}\, ]\!](w)\).
To maintain the proper behavior of existential quantification at logically possible circumstances, we must restrict the denotation of the matrix in various ways. Assume that we have done so. (Alternatively, assume that Impossible Quantification only applies at logically impossible circumstances, and that at all other circumstances, quantifiers are governed by rules like Existential Quantification above.) Now let w be a circumstance at which (10) is true, but at which the extension of the matrix (12) of (11) does not include the pair \(\left\langle \hbox {`Hesperus'}, \hbox {`Phosphorus'} \right\rangle \). Then (13) is not true at w. But since (13) is a notational variant of (11), (11) is also not true at w. Thus w is both logically impossible and open. Given circumstances such as w, the proof of Lemma 1 fails. This blocks Soames’s derivation.
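To make the structure of this move concrete, here is a deliberately crude computational model (entirely my own encoding, not Ripley's or Priest's formalism): circumstances are dictionaries, possible circumstances evaluate the existential via the domain, and impossible circumstances consult an arbitrary extension assigned to the matrix. At the impossible circumstance below, (10) holds while (11) fails, which is exactly the situation that breaks the proof of Lemma 1.

```python
# Each circumstance fixes (a) an extension for the binary "refers to"
# relation and (b) an arbitrary extension for the matrix (12), the latter
# consulted only at logically impossible circumstances.
DOMAIN = ["Venus"]

def true_10(circ):
    # (10): 'Hesperus' refers to Hesperus and 'Phosphorus' refers to Hesperus.
    return (("'Hesperus'", "Venus") in circ["refers"]
            and ("'Phosphorus'", "Venus") in circ["refers"])

def true_11(circ):
    if circ["impossible"]:
        # Impossible Quantification: look up the matrix's arbitrary extension.
        return ("'Hesperus'", "'Phosphorus'") in circ["matrix_12"]
    # Existential Quantification at possible circumstances.
    return any(("'Hesperus'", d) in circ["refers"]
               and ("'Phosphorus'", d) in circ["refers"] for d in DOMAIN)

possible = {"impossible": False,
            "refers": {("'Hesperus'", "Venus"), ("'Phosphorus'", "Venus")},
            "matrix_12": set()}
impossible = dict(possible, impossible=True)  # same "refers"; pair left out of matrix

assert true_10(possible) and true_11(possible)          # Lemma 1 behaves here...
assert true_10(impossible) and not true_11(impossible)  # ...but fails here.
```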
I recently came came across this in a textbook (NCERT class 12, chapter: wave optics, pg. 367, example 10.4(d)) of mine while studying Young's double slit experiment. It says a condition for the formation of an interference pattern is $$\frac{s}{S} < \frac{\lambda}{d}$$ where $s$ is the size of ...
The accepted answer is clearly wrong. The OP's textbook refers to 's' as the "size of the source" and then gives a relation involving it, but the accepted answer conveniently assumes 's' to be the fringe width and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being the accepted answer), only to realise it proved something entirely different and trivial.
This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ...
I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $ie(P_A+P_B)^{\mu}$; external boson: $1$; photon: $\epsilon_{\mu}$. Multiplying these will give the inv...
I am now studying the history of the discovery of electricity, so I am searching for each scientist on Google, but I am not getting good answers for some of them. Can you suggest a good app for studying the history of these scientists?
I am working on correlation in quantum systems. Consider an arbitrary finite-dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under an assumption that fulfils continuity. My question is whether it would be possib...
@EmilioPisanty Sup. I finished Part I of Q is for Quantum. I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc.
Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/…
You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago
So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball.
@ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why?
@AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially...
@vzn for physics/simulation, you may use Blender, which is very accurate. If you want to experiment with lenses and optics, then you may use Mistibushi Renderer; those are made for accurate scientific purposes.
@RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians *, but I haven't read it myself
@AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that?
@ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions...
When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former.
@RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that
And that is what I mean by "the basics".
Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers
@RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... Kurzgesagt optimistic nihilismyoutube.com/watch?v=MBRqu0YOH14
The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one handed): one form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for...
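The counting rule above amounts to a tiny algorithm: within January to July, odd-numbered months land on knuckles (31 days); restarting the count at August makes the even-numbered months the knuckle months from then on, which is why July and August are both 31 days. A sketch:

```python
def knuckle_days(month, leap=False):
    """Days in a Gregorian month via the knuckle rule (month is 1-12)."""
    if month == 2:
        return 29 if leap else 28
    if month <= 7:                       # Jan..Jul: knuckles on odd months
        return 31 if month % 2 == 1 else 30
    return 31 if month % 2 == 0 else 30  # Aug..Dec: knuckles on even months

lengths = [knuckle_days(m) for m in range(1, 13)]
print(lengths)  # [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
```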
@vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world.
@Slereah It's like the brain has a limited capacity on math skills it can store.
@NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life"
I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money
It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge
Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction? I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it |
Let $X$ be an algebraic scheme and $\mathscr C$ a cone on $X\times\mathbf A^1$, and let $C_t$ denote the restriction of $\mathscr C$ to $X\times\{t\}$ ($t=0,1$, or whatever). The claim in Fulton's Intersection Theory is that when $\mathscr C$ is flat over $\mathbf A^1$, then $$s(C_0)=s(C_1)\in A_\ast X,$$ where the Segre class $s(C)$ of a cone $C$ on a scheme $X$ is defined as $$s(C)=q_\ast\left(\sum_{i\geq0}c_1(\mathscr O(1))^i\smallfrown [P(C\oplus 1)]\right)$$ where $q:P(C\oplus 1)\to X$ is the projection.
If $C_i$ are the irreducible components of a cone $\mathscr C$ on $X$, then $s(C)=\sum_i m_is(C_i)$ and $[P(C\oplus 1)]=\sum_i m_i[P(C_i\oplus1)]$, where $m_i$ are the geometric multiplicities, so I believe we can restrict our attention to the class $[V]$ of an irreducible component of $\mathscr C$. In general a variety is flat over a nonsingular curve iff its generic point maps to the generic point of the curve. If this is not the case, then the image of $V$ under projection to $\mathbf A^1$ is a point, say $t$. But then $i_t^\ast[V]=0\ne [V]=[V_t]$, where I take $V_t$ to be the fiber and $i_t^\ast$ to be the Gysin morphism $A_\ast(\mathscr C)\to A_{\ast-1}(C_t)$, where $C_t\subset\mathscr C$ is the divisor on $\mathscr C$ with local equation $t$ (aka the fiber). For if $V=V_t$, then $i_t^\ast[V]$ is computed by restricting $\mathscr O_{\mathscr C}(C_t)$ to $V$ and then taking a corresponding Cartier divisor on $V$. But $\mathscr O_{\mathscr C}(C_t)$ restricted to $C_t$ is the normal bundle, which is trivial, so $i_t^\ast[V]=0$.
So I see why flatness is needed. My question is: wouldn't the statement $$s(i_0^\ast\mathscr C)=s(i_1^\ast\mathscr C)\in A_\ast X$$ hold without the assumption of flatness? When $\mathscr C$ is flat over $\mathbf A^1$ it restricts to the original statement.
I was trying to compute the product
$$ P_{a,b} = \prod_{n=1}^\infty(an + b), $$
after I computed
$$ P_{1,b} = \prod_{n=1}^\infty(n + b) = \frac{\sqrt{2\pi}}{\Gamma(b+1)}, $$
and the well-known
$$ \prod_{n=1}^\infty a = \exp\left\{\log(a)\sum_{n=1}^\infty n^0 \right\} = \exp\left\{\log(a)\zeta(0) \right\} = a^{-1/2}. $$
So I have
$$ P_{a,b} = \prod_{n=1}^\infty a \prod_{n=1}^\infty\left(n + \frac{b}{a} \right) = a^{-1/2}\frac{\sqrt{2\pi}}{\Gamma\left(1+\frac{b}{a}\right)}. $$
However, I found the article Quine, Heydari and Song 1993, which states the same $P_{1,b}$ as mine but
$$ P_{a,b} = a^{-1/2 - b/a}\frac{\sqrt{2\pi}}{\Gamma \left( 1+\frac{b}{a}\right )}. \tag{18} $$
Of course this formula is not compatible with splitting the infinite product into a product of infinite products, but it, rather than mine, seems to work when computing some partition function by path integrals such as
$$ \int\mathcal{D}[\phi,\phi^\dagger]\exp\left\{-\int_0^\beta\mathrm{d}t\phi^\dagger(t)(\partial_t + w)\phi(t) \right\}, $$
with $\phi,\phi^\dagger$ bosonic fields. Notice that in this case
$$ \phi(t) = \sum_{n=-\infty}^\infty\phi_n e^{\frac{2\pi i}{\beta}n t} $$
so that the evaluation of the path integral boils down to a Gaussian one.
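For what it's worth, the extra factor $a^{-b/a}$ in (18) can be traced numerically. Writing $Z(s)=\sum_{n\geq1}(an+b)^{-s}=a^{-s}\zeta_H(s,1+b/a)$ and using the special values $\zeta_H(0,q)=\tfrac12-q$ and $\zeta_H'(0,q)=\log\Gamma(q)-\tfrac12\log 2\pi$ (Lerch's formula), the regularized product $e^{-Z'(0)}$ reproduces (18); the point is that the exponent of $a$ is $\zeta_H(0,1+b/a)=-\tfrac12-\tfrac{b}{a}$ rather than $\zeta(0)=-\tfrac12$, which is exactly where the naive splitting loses a factor. A quick Python check (my own sketch):

```python
import math

# Zeta-regularized product P_{a,b} = prod_{n>=1} (a n + b), computed as
# exp(-Z'(0)) with Z(s) = a^{-s} zeta_H(s, 1 + b/a), using
#   zeta_H(0, q)  = 1/2 - q
#   zeta_H'(0, q) = log Gamma(q) - (1/2) log(2 pi)   (Lerch's formula).

def regularized_product(a, b):
    q = 1 + b / a
    zeta0 = 0.5 - q                    # = -1/2 - b/a, not just -1/2
    zeta0_prime = math.lgamma(q) - 0.5 * math.log(2 * math.pi)
    # -Z'(0) = log(a) * zeta_H(0, q) - zeta_H'(0, q)
    return math.exp(math.log(a) * zeta0 - zeta0_prime)

def quine_heydari_song(a, b):
    # Formula (18): a^{-1/2 - b/a} * sqrt(2 pi) / Gamma(1 + b/a)
    return a ** (-0.5 - b / a) * math.sqrt(2 * math.pi) / math.gamma(1 + b / a)

for a, b in [(1.0, 0.5), (2.0, 1.0), (3.0, 0.25)]:
    assert math.isclose(regularized_product(a, b), quine_heydari_song(a, b))
```

For $a=1$ this collapses to $\sqrt{2\pi}/\Gamma(1+b)$, agreeing with $P_{1,b}$ above.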
Can anyone help me?
Here's a question that's creating some doubt for me.

Suppose there are two big spheres A and B, of mass M and mass 4M, each of radius R, separated by a distance of 6R. An object of mass m is projected from the surface of A. What should be the minimum velocity with which the body should be projected so that it just reaches the surface of B?

A try at the question:
We'll first find the neutral point, where the gravitational forces of the two objects cancel each other.

For this, we give the object/satellite a little displacement $d\vec{r}$ in the direction from A to B. Also, a unit vector $\hat{r}$ is assigned in the same direction. Let the gravitational forces of the masses be $\vec{F_A}$ and $\vec{F_B}$. They are given below:
$\vec{F_A}$ = - $G\frac{Mm}{r^2}\hat{r}$
$\vec{F_B}$ = $G\frac{4Mm}{x^2}\hat{r}$
Negative sign is not present in $\vec{F_B}$ because $\hat{r}$ is in the direction of the force $\vec{F_B}$.
Now, at the neutral point P:

$\vec{F_A}$ = - $\vec{F_B}$

$F_A = F_B$

$G\frac{Mm}{r^2}$ = $G\frac{4Mm}{x^2}$ [Here, $x=6R-r$]

$G\frac{Mm}{r^2}$ = $G\frac{4Mm}{(6R-r)^2}$

$4r^2 = (6R-r)^2$

$r = 2R$
From this point, P (r = 2R), the gravitational force $F_B$ is sufficient to attract the satellite so that it reaches the surface of B.
Let $W_A$ and $W_B$ be the work done by the gravitational forces $\vec{F_A}$ and $\vec{F_B}$ separately from the surface of A to the point P.
Work done by force $F_A$:
$dW_A = \vec{F_A}\cdot d\vec{r}$
$dW_A = F_A \, dr \cos 180°$
$dW_A = -F_A \, dr$ ---------Eq(a)
In equation (a), for the limits: when the object is at the surface of A, r = R; and when the object is at the neutral point P, r = 2R.
$$\int \, dW_A = \int\limits_{R}^{2R} - F_A \, dr$$
$$W_A = - \int\limits_{R}^{2R} G\frac{Mm}{r^2} \, dr$$ $$W_A = -GMm \int\limits_{R}^{2R} \frac{1}{r^2} \, dr$$ $$W_A = -GMm \biggl[\frac{-1}{r}\biggr]_{R}^{2R} $$ $$W_A = -GMm \biggl[\frac{-1}{2R}-\frac{-1}{R}\biggr] $$ $$W_A = -\frac{GMm}{2R} $$
$$W_A = {\color{violet}{\int\limits_{R}^{2R} - F_A \, dr}} = {\color{pink}{-\frac{GMm}{2R}}} $$
Both the violet and the pink expressions agree with each other, implying that the work done by the gravitational force $\vec{F_A}$ is negative.
Work done by force $F_B$:
$dW_B = \vec{F_B}\cdot d\vec{r}$
$dW_B = F_B \, dr \cos 0°$
$dW_B = F_B \, dr$ ---------Eq(b)
In equation (b), for the limits: when the object is at the surface of A, r = R, so x = 6R - r = 5R; and when the object is at the neutral point P, r = 2R, so x = 6R - r = 4R.
$$\int \, dW_B = \int\limits_{R}^{2R} F_B \, dr$$
$$W_B = \int\limits_{R}^{2R} G\frac{4Mm}{(6R - r)^2} \, dr$$ $$W_B = 4GMm\int\limits_{5R}^{4R} \frac{1}{x^2} \, dx$$ $$W_B = 4GMm \biggl[\frac{-1}{x} \biggr]_{5R}^{4R} $$ $$W_B = 4GMm \biggl[\frac{-1}{4R}-\frac{-1}{5R}\biggr] $$ $$W_B = 4GMm \biggl[\frac{-1}{20R} \biggr] $$ $$W_B = -\frac{GMm}{5R} $$
$$W_B = {\color{orange}{\int\limits_{R}^{2R} F_B \, dr}} = {\color{cyan}{-\frac{GMm}{5R}}}$$
So, here's my doubt:
The orange equation implies that the work done by the gravitational force $\vec{F_B}$ is positive (why?), since it was derived from equation (b), and in that equation the angle between the force $\vec{F_B}$ and the displacement $d\vec{r}$ was 0°. And it is equal to the cyan expression. But the cyan expression carries a negative sign, which tells us that the work done by the gravitational force $\vec{F_B}$ is negative.
Hence, the orange equation is not consistent with the cyan one. My doubt: why?
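For reference, here is a direct numerical evaluation (my own check, midpoint rule, in units where G = M = m = R = 1) of the orange integral $\int_R^{2R} F_B\,dr = \int_1^2 \frac{4}{(6-r)^2}\,dr$ exactly as written, which may help pin down where the sign enters:

```python
# Midpoint-rule evaluation of the orange integral, with G = M = m = R = 1.
def F_B(r):
    return 4.0 / (6.0 - r) ** 2

n = 100000
h = 1.0 / n                        # integrate r from 1 to 2
W_orange = sum(F_B(1.0 + (i + 0.5) * h) * h for i in range(n))
print(W_orange)                    # ~0.2 = GMm/(5R), positive
```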
Well, there's a lot more to be done, since we need to find the velocity with which the satellite should be projected. But before that, I need to add the two works, $W_A$ and $W_B$, and set the sum equal to the kinetic energy of the satellite to find the velocity of the body (if I'm not wrong).
But I'm stuck here. So please help.
OK, I have done a lot in this post, but the doubt is quite similar to the last one asked in this post: Work Done by Gravitational Force.
The difference is only that in that post, I had a problem with the direction of the radial vector $d\vec{r}$. Here, I don't think there's any problem with that.
So, please tell me why the orange equation is not consistent with the cyan one? |
There's a lot more to this question than OP imagines.
If $F$ were a continuous force, then from the geometry, $F_1$ could easily be calculated with trigonometry:
$$F_1=F\cos (\pi-2\alpha)$$
This creates counterclockwise torque about the forward pivot point of the stand:
$$\tau_1=F_1r$$
Which tries to topple the stand.
The weight $mg$ provides an opposing clockwise torque $\tau_2$:
$$\tau_2=mgr\cos\alpha$$
If there is a net positive torque:
$$\tau_{net}=\tau_1-\tau_2>0$$
Then angular acceleration around the forward pivot point will occur, as per Newton. The ensemble will topple because, as rotation proceeds, $\tau_2$ actually vanishes.
But that's far from the end of it.
I'm designing a throwing target that will behave like a person when struck.
This suggests that the target will be struck by a mass-bearing projectile. In that case $F$ is not constant but an impact force, a short-lived impulse. Its size and duration cannot be calculated accurately or easily because they depend on how elastic the collision is: I assume the projectile will bounce off the target (only in the case of a very sticky target would that not be true).
Possibly the easiest approach would be to assume the projectile transfers some of its kinetic energy to the ensemble, so that it will have a certain amount of rotational kinetic energy $K_R$ immediately after impact:
$$K_R=\frac12 I\omega_0^2$$
where $I$ is the moment of inertia of the ensemble about the forward pivot point and $\omega_0$ is the angular velocity of the ensemble immediately after impact.
Since the ensemble is now rotating, the previously mentioned $\tau_2$ provides a decelerating torque:
$$\tau_2=I\dot{\omega}$$
As the point mass $m$ increases in height during the rotation, its potential energy $U$ increases. When the forward bar has become vertical, the height increase $\Delta y$ is:
$$\Delta y=r-h\:\text{with }h=r\sin\alpha$$
And the corresponding change in potential energy is:
$$\Delta U=mg\Delta y=mg(r-h)=mgr(1-\sin\alpha)$$
Since rotational kinetic energy is converted to potential energy during the rotation, the ensemble will topple if:
$$K_R>\Delta U$$
Or:
$$\frac12 I\omega_0^2>mg(r-h)$$
This is the condition for toppling.
Let's assume a projectile of mass $M$ is thrown at the target at speed $v$. Its kinetic energy would be:
$$K_P=\frac12 Mv^2$$
Now we assume a fraction $\epsilon$ of this is transferred to the ensemble. Its rotational kinetic energy $K_R$ would then become:
$$K_R=\frac12 \epsilon Mv^2$$
The remaining fraction of the kinetic energy is carried off by the bouncing projectile:
$$K_P=K_R+K'_P\implies K'_P=\frac12 (1-\epsilon)Mv^2$$
And the condition for toppling:
$$\frac12 \epsilon Mv^2>mg(r-h)=mgr(1-\sin\alpha)$$
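As a quick numerical sketch of this condition (all the numbers below are hypothetical placeholders, not values from the problem), one can solve the inequality at equality for the threshold speed $v$:

```python
import math

def min_throw_speed(M, m, r, alpha, eps, g=9.81):
    """Smallest projectile speed v satisfying
    (1/2) * eps * M * v**2 > m * g * r * (1 - sin(alpha))."""
    return math.sqrt(2.0 * m * g * r * (1.0 - math.sin(alpha)) / (eps * M))

# Hypothetical numbers: 150 g projectile, 2 kg effective target mass,
# pivot distance 0.5 m, lean angle 60 degrees, 30% energy transfer.
v = min_throw_speed(M=0.15, m=2.0, r=0.5, alpha=math.radians(60), eps=0.3)
print(v)   # threshold speed in m/s for these placeholder numbers
```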
The problem remains to find a useful expression for $\epsilon$ from all relevant parameters. |
I want to build a big PDF of multiple TeX files which will all have the same preamble. They'll have \newcommand and \renewcommand as well as \usepackage and \documentclass, because each file should be standalone and compiled as a smaller PDF. But then, I want to take all of the files we have and put them in a main document. I've looked at the other questions:
Include file with preamble to another tex file
Make a .tex file that combines complete .tex documents in subdirectories
http://www.faqoverflow.com/tex/79594.html
http://ctan.mackichan.com/macros/latex/contrib/standalone/standalone.pdf
and the standalone package. I'm not sure I'll be able to do what I want with these.
Each child document needs to stand on its own, with its own preamble, table of contents, etc. But they should all be included in a master document with a global table of contents that ignores (or copies) all the preambles.
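One route worth testing for the master document is the standalone package (as opposed to the standalone class used in the children). If your version of the package supports the subpreambles option, it will try to merge the child preambles into the master; whether it copes with your particular preamble commands is something to verify. The child file names here are placeholders:

```latex
\documentclass[11pt]{article}
% The standalone *package* lets \input read complete standalone
% documents; subpreambles=true asks it to merge their preambles.
\usepackage[subpreambles=true]{standalone}

\begin{document}
\tableofcontents % global table of contents for the combined PDF
\input{chapter1}  % each chapter is itself a compilable document
\input{chapter2}
\end{document}
```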
Below is example code for one of the children files.
\documentclass[11pt]{article}
\usepackage{amsmath, amsfonts, amssymb, latexsym, multirow}
\usepackage{fullpage, graphicx, subfig, float, hyperref, enumerate}
\usepackage[parfill]{parskip}
\usepackage{pdflscape}% for large figures
\usepackage{cancel}
\linespread{1.3}
\hypersetup{backref, pdfpagemode=FullScreen, colorlinks=true}
\renewcommand{\dag}{^\dagger}
\renewcommand{\d}{\text{d}}
\newcommand{\D}{\text{D}}
\newcommand{\bra}{\langle}
\newcommand{\ket}{\rangle}
\newcommand{\comment}[1]{}
\newcommand{\p}{\partial}
\newcommand{\eq}[1]{\begin{align*}#1\end{align*}}
\begin{document}
\noindent
\fbox{\begin{minipage}{6.4in}
  \medskip
  \textbf{Book} \hfill \textbf{Author}
  \begin{center}
    {\Large Chapter \#} \\[3mm]
  \end{center}
  \today \hfill Subauthor
  \medskip
\end{minipage}}
\bigskip
\tableofcontents
\newpage
\section{}
\subsection{}
\end{document} |
Differential Topology Lecture 2 Notes
From last lecture, we know topological manifolds are Hausdorff, second-countable, and locally Euclidean; i.e., $\forall x \in M$, $\exists U = N(x)$ such that some $\phi : U \to \tilde{U} \subseteq \mathbb{R}^n$ is a homeomorphism.
Definition
A pair $(U, \phi)$ is called a chart.
Definition
A topological manifold with boundary is a Hausdorff, second-countable space such that $\forall x \in M$, $\exists U = N(x)$ such that $U$ is homeomorphic to $\tilde{U} \subseteq \mathbb{H}^n$ (the half-space, $\mathbb{H}^n = \{ x \in \mathbb{R}^n \mid x_n \ge 0 \}$).
Smooth Manifolds
If we have an $n$-dimensional topological manifold, $X$, when is $f: X \to \mathbb{R}^n$ smooth?
We have a chart $(U, \phi)$, so we can consider the function $f \circ \phi^{-1} \colon \tilde{U} \to \mathbb{R}^n$, which makes sense because $\tilde{U} \subseteq \mathbb{R}^n$ is open. We say $f$ is smooth if $f \circ \phi^{-1}$ is smooth in the ordinary multivariable sense, for every chart.
We now look at our second numerical characteristic associated to random variables.
Definition\(\PageIndex{1}\)
The variance of a random variable \(X\) is given by $$\sigma^2 = Var(X) = E[(X-\mu)^2],\notag$$ where \(\mu\) denotes the expected value of \(X\). The standard deviation of \(X\) is given by $$\sigma = \text{SD}(X) = \sqrt{Var(X)}.\notag$$
In words, the variance of a random variable is the average of the squared deviations of the random variable from its mean (or expected value). Notice that the variance of a random variable will result in a number with units squared, but the standard deviation will have the same units as the random variable. Thus, the standard deviation is easier to interpret, which is why we make a point to define it. The variance and standard deviation give us a measure of spread for random variables. The standard deviation is interpreted as a measure of how "spread out'' the possible values of \(X\) are with respect to the mean of \(X\).
As with expected values, for many of the common probability distributions, the variance is given by a parameter or a function of the parameters for the distribution. For example, if continuous random variable \(X\) has a normal distribution with parameters \(\mu\) and \(\sigma\), then \(Var(X) = \sigma^2\), i.e., the parameter \(\sigma\) gives the standard deviation. Again, the normal case explains the notation used for variance and standard deviation.
Example \(\PageIndex{1}\)
Suppose \(X_1\sim\text{normal}(0, 2^2)\) and \(X_2\sim\text{normal}(0, 3^2)\). So, \(X_1\) and \(X_2\) are both normally distributed random variables with the same mean, but \(X_2\) has a larger standard deviation. Given our interpretation of standard deviation, this implies that the possible values of \(X_2\) are more "spread out'' from the mean. This is easily seen by looking at the graphs of the pdf's corresponding to \(X_1\) and \(X_2\) given in Figure 1.
Figure 1: Graph of normal pdf's: \(X_1\sim\text{normal}(0,2^2)\) in blue, \(X_2\sim\text{normal}(0,3^2)\) in red
Theorem 3.7.1 tells us how to compute variance, since it is given by finding the expected value of a function applied to the random variable. First, if \(X\) is a discrete random variable with possible values \(x_1, x_2, \ldots, x_i, \ldots\), and frequency function \(p(x_i)\), then the variance of \(X\) is given by
$$Var(X) = \sum_{i} (x_i - \mu)^2\cdot p(x_i).\notag$$ If \(X\) is continuous with pdf \(f(x)\), then $$Var(X) = \int\limits^{\infty}_{-\infty}\! (x-\mu)^2\cdot f(x)\, dx.\notag$$ The above formulas follow directly from Definition 3.8.1. However, there is an alternate formula for calculating variance, given by the following theorem, that is often easier to use.
Theorem \(\PageIndex{1}\)
\(Var(X) = E[X^2] - \mu^2\)
Proof
By the definition of variance,
\begin{align*}
Var(X)&= E[(X-\mu)^2]\\
&= E[X^2+\mu^2-2X\mu]\\
&= E[X^2]+E[\mu^2]-E[2X\mu]\\
&= E[X^2] + \mu^2-2\mu E[X] \quad (\text{Note: since } \mu \text{ is constant, we can pull it out of the expected value})\\
&= E[X^2] + \mu^2-2\mu^2\\
&= E[X^2] -\mu^2
\end{align*}
Example \(\PageIndex{2}\)
Continuing in the context of Example 23, we calculate the variance and standard deviation of the random variable \(X\) denoting the number of heads obtained in two tosses of a fair coin. Using the alternate formula for variance, we need to first calculate \(E[X^2]\), for which we use Theorem 3.8.1:
$$E[X^2] = 0^2\cdot p(0) + 1^2\cdot p(1) + 2^2\cdot p(2) = 0 + 0.5 + 1 = 1.5.\notag$$ In Example 23, we found that \(\mu = E[X] = 1\). Thus, we find \begin{align*} Var(X) &= E[X^2] - \mu^2 = 1.5 - 1 = 0.5 \\ \Rightarrow\ \text{SD}(X) &= \sqrt{Var(X)} = \sqrt{0.5} \approx 0.707 \end{align*}
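As a quick numerical cross-check of this example, a few lines of Python reproduce the shortcut computation from the pmf of \(X\):

```python
import math

# pmf of X = number of heads in two tosses of a fair coin (Example 2)
pmf = {0: 0.25, 1: 0.5, 2: 0.25}

mu = sum(x * p for x, p in pmf.items())        # E[X] = 1
ex2 = sum(x * x * p for x, p in pmf.items())   # E[X^2] = 1.5
var = ex2 - mu ** 2                            # shortcut formula: 0.5
sd = math.sqrt(var)                            # ~0.707
print(var, sd)
```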
Example \(\PageIndex{3}\)
Continuing with Example 24, we calculate the variance and standard deviation of the random variable \(X\) denoting the time a person waits for an elevator to arrive. Again, we use the alternate formula for variance and first find \(E[X^2]\) using Theorem 3.8.1:
$$E[X^2] = \int\limits^1_0\! x^2\cdot x\, dx + \int\limits^2_1\! x^2\cdot (2-x)\, dx = \int\limits^1_0\! x^3\, dx + \int\limits^2_1\! (2x^2 - x^3)\, dx = \frac{1}{4} + \frac{11}{12} = \frac{7}{6}.\notag$$ In Example 24, we found that \(\mu = E[X] = 1\). Thus, we have \begin{align*} Var(X) &= E[X^2] - \mu^2 = \frac{7}{6} - 1 = \frac{1}{6} \\ \Rightarrow\ \text{SD}(X) &= \sqrt{Var(X)} = \frac{1}{\sqrt{6}} \approx 0.408 \end{align*}
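For readers who want to double-check the integral, here is a midpoint-rule approximation of \(E[X^2]\) for the pdf used above (\(f(x)=x\) on \([0,1]\), \(f(x)=2-x\) on \([1,2]\)):

```python
# Midpoint Riemann sum for E[X^2] with the triangular elevator-wait pdf.
def f(x):
    return x if x <= 1.0 else 2.0 - x

n = 100000
h = 2.0 / n
ex2 = sum(((i + 0.5) * h) ** 2 * f((i + 0.5) * h) * h for i in range(n))
var = ex2 - 1.0 ** 2   # E[X] = 1 from Example 24
print(ex2, var)        # ~7/6 and ~1/6
```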
Given that the variance of a random variable is defined to be the expected value of squared deviations from the mean, variance is not linear, as expected value is. We do have the following useful property of variance, though.
Theorem \(\PageIndex{2}\)
Let \(X\) be a random variable, and \(a, b\) be constants. Then the following holds:
$$Var(aX + b) = a^2Var(X).\notag$$ Proof
Let \(\mu = E[X]\). Then \(E[aX+b] = a\mu + b\) by linearity of expected value, so by the definition of variance,
\begin{align*}
Var(aX+b) &= E\left[(aX+b-(a\mu+b))^2\right]\\
&= E\left[a^2(X-\mu)^2\right]\\
&= a^2E\left[(X-\mu)^2\right] = a^2Var(X).
\end{align*}
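The theorem is easy to verify numerically; here is a sketch reusing the coin-toss distribution from Example 2 with illustrative constants \(a=3\), \(b=7\) (my choice, not from the text):

```python
# Verify Var(aX + b) = a^2 Var(X) on a small discrete distribution.
pmf = {0: 0.25, 1: 0.5, 2: 0.25}   # X = heads in two fair coin tosses
a, b = 3.0, 7.0

def variance(pairs):
    """Var = E[X^2] - (E[X])^2 over (value, probability) pairs."""
    pairs = list(pairs)
    mu = sum(x * p for x, p in pairs)
    return sum(x * x * p for x, p in pairs) - mu ** 2

var_x = variance(pmf.items())
var_y = variance((a * x + b, p) for x, p in pmf.items())
print(var_x, var_y)   # 0.5 and 4.5 = 3^2 * 0.5
```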
Theorem 3.8.2 follows easily from a little algebraic manipulation. Note that the "\(+\ b\)'' disappears in the formula. There is an intuitive reason for this. Namely, the "\(+\ b\)'' corresponds to a horizontal shift of the frequency function or pdf of the random variable. Such a transformation is not going to affect the spread, i.e., the variance will not change. |
In this lesson, we’ll use the concept of the definite integral to calculate the arc length of a curve. Let’s say that the curve \(P_1P_n\) in Figure 1 is an arbitrary curve. Let’s subdivide this curve into \(n\) smaller arcs \(P_1P_2,P_2P_3,…,P_{n-1}P_n\) as illustrated in Figure 1. By drawing a straight line from \(P_i\) to \(P_{i+1}\) for each arc \(P_iP_{i+1}\) (where \(i=1,…,n\)), we obtain \(n\) chords whose lengths will be roughly equal to the lengths of the corresponding arcs \(P_iP_{i+1}\) when \(n\) is very large. If we use the notation \(L_i(P_iP_{i+1})\) to represent the length of each chord drawn in Figure 1, then by summing the lengths of the chords \(P_1P_2,P_2P_3,…,P_{n-1}P_n\), we can obtain a rough estimate of the total arc length of the curve \(P_1P_n\). Thus,
$$\text{Arc length of }P_1P_n≈L_1(P_1P_2)+…+L_n(P_{n-1}P_n)$$
or, equivalently,
$$\text{Arc length of }P_1P_n≈\sum_{i=1}^nL_i(P_iP_{i+1}).\tag{1}$$
Let’s briefly review the concept of a limit. If I write the limit,
$$\lim_{z→k}\text{ (‘something’)}=?,$$
then the thing that the limit is equal to is whatever the “something” gets closer and closer to equaling as \(z\) approaches \(k\). Now, as \(n→∞\) (which is to say, as the number of subdivisions in the arc \(P_1P_n\) approaches infinity), what does the sum \(\sum_{i=1}^nL_i(P_iP_{i+1})\) get closer and closer to equaling? Answering this question is very important since whatever \(\sum_{i=1}^nL_i(P_iP_{i+1})\) gets closer and closer to equaling as \(n→∞\) must be the thing that the limit \(\lim_{n→∞}\sum_{i=1}^nL_i(P_iP_{i+1})\) is equal to. Well, as the number of subdivisions becomes greater and greater, the length of each chord, \(L_i(P_iP_{i+1})\), will get closer and closer to equaling the length of each arc \(P_iP_{i+1}\). Thus, as the number of subdivisions keeps increasing and as \(n→∞\), the sum \(\sum_{i=1}^nL_i(P_iP_{i+1})\) will get closer and closer to equaling the exact arc length of the curve \(P_1P_n\). Thus,
$$\lim_{n→∞}\sum_{i=1}^nL_i(P_iP_{i+1})=\text{Arc length of curve }P_1P_n.$$
Since it is typical to denote the arc length of a curve with the letter \(s\), we’ll rewrite the above equation as
$$s=\lim_{n→∞}\sum_{i=1}^nL_i(P_iP_{i+1}).\tag{2}$$
Equation (2) represents, conceptually, how the arc length can be obtained by taking the infinite sum of the lengths of infinitesimally small chords. But since a definite integral involves taking the limit of a sum of the form
$$S_n=\sum_{i=1}^ng(x_i)Δx,\tag{3}$$
we must find a way to represent Equation (1) of the same form as Equation (3). Then, after that, by taking the limit of the sum as \(n→∞\) as we did in Equation (2), we’ll obtain a definite integral of the form
$$\int_a^bg(x)dx.$$
Let’s try to represent the quantity \(L_i(P_iP_{i+1})\) in the same form as \(f(x_i)Δx_i\). In Figure 2, I have drawn a “zoomed in” image of the \(i^{th}\) chord \(P_iP_{i+1}\). As you can see from Figure 2, the typical chord \(P_iP_{i+1}\) can be expressed in terms of \(x\) and \(y\) as
$$L_i(P_iP_{i+1})=\sqrt{(Δx_i)^2+(Δy_i)^2}.\tag{4}$$
Equation (4) brought us one step closer to expressing \(L_i(P_iP_{i+1})\) in the form \(f(x_i)Δx_i\); the only problem is the \(Δy_i\) term in Equation (4). Let’s try to represent the radical in Equation (4) entirely in terms of \(x\) by doing some algebraic manipulations. If we multiply the right-hand side of Equation (4) by \(\frac{\sqrt{(Δx_i)^2}}{\sqrt{(Δx_i)^2}}\) (\(=1\)), then we can rewrite the radical in Equation (4) as
$$\frac{\sqrt{(Δx_i)^2}}{\sqrt{(Δx_i)^2}}\sqrt{(Δx_i)^2+(Δy_i)^2}=\sqrt{\frac{1}{(Δx_i)^2}\biggl((Δx_i)^2+(Δy_i)^2\biggr)}Δx_i$$
$$=\sqrt{1+\biggl(\frac{Δy_i(x)}{Δx_i }\biggr)^2}Δx_i $$
Thus,
$$L_i(P_iP_{i+1})=\sqrt{1+\biggl(\frac{Δy_i(x)}{Δx_i }\biggr)^2}Δx_i.\tag{5}$$
What is nice about Equation (5) is that it can be expressed entirely in terms of \(x\), since the slope, \(Δy_i(x)/Δx_i\), of the chord \(P_iP_{i+1}\) can be represented entirely in terms of \(x\). Substituting Equation (5) into (2), we have
$$s=\lim_{n→∞}\sum_{i=1}^n\sqrt{1+\biggl(\frac{Δy_i(x)}{Δx_i }\biggr)^2}Δx_i.\tag{6}$$
The sum in Equation (6) is precisely the same form as Equation (3). Thus, the limit in Equation (6) must be equal to the definite integral
$$\lim_{n→∞}\sum_{i=1}^n\sqrt{1+\biggl(\frac{Δy_i(x)}{Δx_i }\biggr)^2}Δx_i=\int_a^b\sqrt{1+(f’(x))^2}dx.\tag{7}$$
(As the number \(n\) of subdivisions of the arc \(P_1P_n\) approaches infinity, the length of the typical chord, \(L_i(P_iP_{i+1})\), becomes infinitesimally small. This means that the other side lengths \(Δx_i\) and \(Δy_i\) in the right triangle in Figure 2 must also become infinitesimally small. In other words, as \(n→∞\), \(Δx_i→dx\) and \(Δy_i→dy\). Thus, \(Δy_i/Δx_i→dy/dx\) and the slope becomes the derivative \(f’(x)\). I thought I would take the time to explain this since it’s a common source of confusion as to where the \(f’(x)\) in Equation (7) comes from.)
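As a numerical illustration of Equations (1) and (7) (my own example, not from the lesson): for \(f(x)=\cosh x\) we have \(\sqrt{1+(f’(x))^2}=\cosh x\), so the exact arc length on \([0,1]\) is \(\sinh 1\), and the chord sum converges to it as \(n\) grows:

```python
import math

def chord_sum(f, a, b, n):
    """Sum of chord lengths over n subdivisions -- Equation (1)."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(math.hypot(xs[i + 1] - xs[i], f(xs[i + 1]) - f(xs[i]))
               for i in range(n))

approx = chord_sum(math.cosh, 0.0, 1.0, 10000)
exact = math.sinh(1.0)
print(approx, exact)
```

Note that the chord sum always slightly underestimates the arc length, since a straight chord is the shortest path between its endpoints.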
The definite integral in Equation (7) is used to calculate the arc length \(s\) of any arbitrary curve. Provided that you know the function \(f’(x)\), you’ll be able to determine the arc length so long as the definite integral in Equation (7) is solvable in closed form. If it is not (indeed, most integrals are not), then you’ll have to resort to numerical methods (ideally using a computer) to estimate its value. |
I have a very technical question on deriving a Ward identity directly from a given explicit form of the "conserved current". Let me emphasize that I do not start with a priori knowledge of the symmetry transformation that corresponds to the given "conserved current"; I am given an explicit expression for the "conserved current" from the beginning. Just for clarity, let me define a "conserved current" $\Theta_{\mu\nu}$ given as \begin{equation} \Theta_{\mu\nu}=\partial_{\mu}\phi\partial_{\nu}\phi-g_{\mu\nu}\cal{L}\,, \end{equation} where $\phi$ is a scalar field and I have a simple massive $\lambda\phi^4$ theory in mind. This is nothing but the conventional energy-momentum tensor, and we already know that the corresponding symmetry transformation is space-time translation. However, I don't want to use any additional information except the explicit expression for the current and the canonical commutation relation given as \begin{equation} [\partial_t\phi(\boldsymbol{x},t),\phi(\boldsymbol{y},t)]=-i\delta^3(\boldsymbol{x}-\boldsymbol{y})\,, \end{equation} to arrive at the result: \begin{equation} k^{\mu}\Gamma^n_{\mu\nu}(k;p_1,...,p_n)=-i\underset{P}{\sum}(p_{1}+k)_{\nu}G^{n}\left(p_{1}+k,p_{2},...,p_{n}\right)\,, \end{equation} where the sum is over cyclic permutations of the indices $1$ to $n$. This kind of procedure seems to be done in Eqs. (2.8-2.13) of this renowned reference: http://inspirehep.net/record/61135, although they do not show it explicitly.
Define a function given by \begin{equation} i\Gamma^n_{\mu\nu}(x;x_1,...,x_n)=\frac{\delta}{\delta \cal{J}^{\mu\nu}(x)}G^n(x_1,...,x_n)\big|_{\cal{J}^{\mu\nu}=0}\,, \end{equation} where $G^n$ is the $n$-point Green's function. In the above equation, I am thinking of the following replacement of the Lagrangian inside the $e^{iS}$ of the path integral: \begin{equation} \cal{L}\rightarrow\cal{L}+\Theta_{\mu\nu}\cal{J}^{\mu\nu}\,. \end{equation} Now let's say I want to find an expression for $k^\mu\Gamma^n_{\mu\nu}$ ($\Gamma^n_{\mu\nu}$ here is Fourier transformed into momentum space), i.e., a Ward identity. In principle, I should be able to explicitly do this using only the equal-time canonical commutation relation.
Now here is my question. When explicitly taking $\partial^{\mu}\Gamma^n_{\mu\nu}$ in position space, I need to evaluate an expression like \begin{equation} \left\langle T_\ast(\square_x\phi(x)\partial^x_\nu\phi(x)\phi(x_1)...\phi(x_n))\right\rangle\,\,\,\,\,(\star), \end{equation} where $T_*$ is the "covariant" time ordering that is different from the usual time ordering $T$. For example, the difference is highlighted as $$T_*(\partial_x^\mu\phi(x)\partial_y^\nu\phi(y))=\partial^\mu_x\partial^\nu_y T(\phi(x)\phi(y)).$$ (Note that $T_*$ ordering appears because the definition of $\Gamma^n_{\mu\nu}$ is through a path integral.) To evaluate this $T_*$ ordered object using canonical commutation relation given above, I must first take out all the derivatives out of the $T_*$ ordering to make it into a $T$ ordering, e.g., \begin{equation} (\textrm{differential operator})\times\langle T(\phi(x)^{m}\phi(x_1)...\phi(x_n))\rangle\,. \end{equation} in position space. Then using the definition of $T$-ordering, I could differentiate the $\theta$-functions and continue.
It is not so clear to me how to do this for the expression $(\star)$, since stripping the derivatives out of the $T_*$-ordering seems hard to do. I feel like this is a very technical question, in the sense that I may simply not be coming up with the relevant algebraic manipulations.
Any comments are appreciated. |
Case Study Contents
The
life cycle consumption problem is a generalization of the three-period life cycle problem in that the number of periods can vary from 1 to \(n\). Since the number of periods is variable in the general life cycle problem, the model takes as input a wage function instead of a set of discrete values. The wage function returns the wage for the current period \(p\) as a function of the total number of periods \(n\). The objective of the life cycle consumption problem is to determine how much one can consume in each period so as to maximize utility subject to the lifetime budget constraint.
The formulation for the general life cycle problem generalizes the formulation of the three-period life cycle problem from three periods to \(n\) periods.
Set
P = set of periods = \(\{1, \dots, n\}\)
Parameters
\(w_p\) = wage income in period \(p\), \(\forall p \in P\)
\(r\) = interest rate
\(\beta\) = discount factor
Decision Variables
\(c_p\) = consumption in period \(p\), \(\forall p \in P\)
Objective Function
Let \(c_p\) be consumption in period \(p\), where "life" begins at \(p=1\) and continues to \(p=n\). Let \(u()\) be the utility function and let \(u(c_p)\) be the utility value associated with consuming \(c_p\). Utility in future periods is discounted by a factor of \(\beta\). Then, the objective function is to maximize the total discounted utility:
maximize \(\sum_{p \in P} \beta^{p-1 }u(c_p)\)
Constraints
The main constraint in the life cycle model is the lifetime budget constraint, which asserts that, over the life cycle, the present value of consumption equals the present value of wage income. From above, \(r\) is the interest rate; therefore, \(R = 1 + r\) is the gross interest rate. If I invest 1 dollar in this period, then I receive \(R\) dollars in the next period. The expression for the present value of the consumption stream over the life cycle is
\[\sum_{p \in P} \frac{c_p}{R^{p-1}}.\]
Similarly, the expression for the present value of the wage income stream over the lifecycle is
\[\sum_{p \in P} \frac{w_p}{R^{p-1}}.\]
The lifetime budget constraint states that the present value of the consumption stream must equal (or be less than) the present value of the wage income stream:
\[\sum_{p \in P} \frac{c_p}{R^{p-1}} \leq \sum_{p \in P} \frac{w_p}{R^{p-1}}.\]
To avoid numerical difficulties, we add constraints requiring the consumption variables to take a non-negative value:
\(c_p \geq 0.0001, \forall p \in P\)
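Before looking at the GAMS model below, it may help to see the budget arithmetic in a few lines of Python, mirroring the data used in the GAMS code (\(n=10\), \(r=0.10\), \(w_p = (10-p)p/10\)). The flat consumption plan computed here is only a budget-feasibility check, not the utility-maximizing solution:

```python
# Present-value bookkeeping for the lifetime budget constraint.
n, r = 10, 0.10
R = 1.0 + r
w = [(10 - p) * p / 10 for p in range(1, n + 1)]   # wage in period p

def present_value(stream, R):
    """sum_p stream[p] / R^(p-1), with periods numbered from 1."""
    return sum(x / R ** k for k, x in enumerate(stream))

PVw = present_value(w, R)
annuity = present_value([1.0] * n, R)
c_flat = PVw / annuity   # flat consumption that exactly exhausts the budget
print(PVw, c_flat)
```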
To solve the general life cycle consumption problem, we need to specify a utility function and the values of the parameters. As in the case of the three-period life cycle problem, the solution of the general life cycle consumption problem specifies the amount that Joey should consume in each period to maximize his utility.
To solve your own life cycle consumption problems, check out the Life Cycle Consumption demo.
$Title Life Cycle Consumption
Set p period /1*10/ ;
Scalar B discount factor /0.96/;
Scalar i interest rate /0.10/ ;
Scalar R gross interest rate ;
R = 1+i ;
$macro u(c) (-exp(-c))
Parameter w(p) wage income in period p ;
w(p) = ((10 - p.val)*p.val) / 10 ;
Parameter lbnds(p) lower bounds of consumption
/ 1*10 0.0001 / ;
Positive Variables
c(p) consumption expenditure in period p , PVc present value of consumption expenditures , PVw present value of wage income ;
Variable Z objective ;
Equations
defPVc definition of PVc , defPVw definition of PVw , budget lifetime budget constraint , obj objective function ;
defPVc ..
PVc =e= sum(p, c(p) / power(R, p.val - 1)) ;
defPVw ..
PVw =e= sum(p, w(p) / power(R, p.val - 1)) ;
budget ..
PVc =l= PVw ;
obj ..
Z =e= sum(p, power(B, p.val - 1)*u(c(p))) ;
Model LifeCycleConsumption /defPVc, defPVw, budget, obj/ ;
c.lo(p) = lbnds(p) ;
Solve LifeCycleConsumption using nlp maximizing Z ; |
Here is how to implement your solution. Let $A = \langle Q, q_0, F, \delta \rangle$ be a DFA for $L$. We will construct an NFA $A' = \langle Q', q'_0, F', \delta' \rangle$ as follows:
$Q' = \{q'_0\} \cup Q^3$. The state $(q_1,q_2,q_3)$ means that we have guessed that when $A$ finishes reading the first copy of $w$, it will be in state $q_1$; the first copy of $A$, started at $q_0$, is at state $q_2$; and the second copy of $A$, started at $q_1$, is at state $q_3$.
$F' = \{(q_1,q_1,q_2) : q_1 \in Q, q_2 \in F\}$. Thus we accept if the first copy of $A$ is in the guessed state, and the second copy of $A$ is at an accepting state.
$\delta'(q'_0,\epsilon) = \{(q,q_0,q) : q \in Q\}$. This initializes the simulation of the two copies of $A$.
$\delta'((q_1,q_2,q_3),a) = \{(q_1,\delta(q_2,a),\delta(q_3,a))\}$. This simulates both copies of $A$, while keeping the guessed state.
We leave the reader the formal proof that $L(A') = \sqrt{L(A)}$.
Here is another solution, which creates a DFA. We now run $|Q|$ copies of $A$ in parallel, starting at each state of $A$:
$Q' = Q^Q$.
$q'_0 = q \mapsto q$, the identity function.
$\delta'(f,a) = q \mapsto \delta(f(q),a)$.
$F' = \{ f \in Q' : f(f(q_0)) \in F \}$.
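A quick way to sanity-check the second construction is to simulate it directly: track $f(q)=\delta(q,w)$ for every state and test whether $f(f(q_0))\in F$. (The example DFA below, over $\{a\}$ and accepting strings whose length is divisible by 3, is my own illustration.)

```python
def in_sqrt_language(delta, q0, F, w):
    """Accept w iff ww is in L(A), by tracking f(q) = delta*(q, w)."""
    f = {q: q for q in delta}            # identity: f after reading epsilon
    for a in w:
        f = {q: delta[f[q]][a] for q in delta}
    return f[f[q0]] in F                 # delta*(q0, ww) = f(f(q0))

# Example DFA over {a}: accepts strings of length divisible by 3.
delta = {0: {'a': 1}, 1: {'a': 2}, 2: {'a': 0}}
# ww has length 2|w|, so w is accepted iff |w| is divisible by 3.
print(in_sqrt_language(delta, 0, {0}, 'aaa'))   # True
print(in_sqrt_language(delta, 0, {0}, 'a'))     # False
```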
What is the meaning of the condition $f(f(q_0)) \in F$? After reading a word $w$, the automaton $A'$ is in a state $f$ given by $f(q) = \delta(q,w)$. Thus $f(f(q_0)) = \delta(\delta(q_0,w),w) = \delta(q_0,w^2)$. |
Difference between revisions of "Probability Seminar"
Excited random walks (also called cookie random walks) are model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
== March 7, TBA ==
== March 14, TBA ==
Revision as of 01:08, 26 February 2019
Spring 2019
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911 , Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 14, TBA
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
I would like to use Alegreya font since I like it very much. I'm using XeLaTeX because I want to write the source directly in UTF8.
(EDITED)
However this font doesn't seem to have math support. As an MWE consider the following:
\documentclass{article}
\usepackage{unicode-math}
\usepackage{fontspec}
\setmainfont[
  SmallCapsFont={Alegreya SC},
  ItalicFeatures={SmallCapsFont=AlegreyaSC-Italic},
  BoldFeatures={SmallCapsFont=AlegreyaSC-Bold},
  BoldItalicFeatures={SmallCapsFont=AlegreyaSC-BoldItalic},
  Ligatures=TeX,
]{Alegreya}
\begin{document}
\textsc{Example Document}

This is an example document where I like to put some math:
$$i \neq \mathrm{i}$$
also
$$∫ \neq ∑$$
finally
$$e^θ = \cos \Re θ + i\sin \Im θ$$
\end{document}
QUESTIONS
I noticed that the roman font invoked by \mathrm and by operators like \sin is taken from Alegreya, while the other symbols are standard Computer Modern Math, since that is the default choice and I didn't use \setmathfont. Is it possible to make XeLaTeX use Alegreya Italic in math? How can it be done? What are the disadvantages of doing this?
I noticed that the Euler font in LaTeX harmonizes very well with Alegreya. Is it possible to use it in XeLaTeX? (I found that the Neo Euler project seems inactive.)
Would you suggest another math font to use with Alegreya? I tried the standard XeLaTeX math fonts (STIX, XITS, TeX Gyre Something), which are well designed but do not match well with Alegreya.
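Not part of the question, but a sketch of the usual workaround: since Alegreya ships no OpenType MATH table, one loads a genuine math font with \setmathfont and optionally borrows just the math-italic Latin letters from Alegreya via unicode-math's range option. Asana Math below is only a placeholder choice, and the borrowed-letter trick is an approximation whose spacing is not guaranteed.

```latex
\documentclass{article}
\usepackage{unicode-math}% loads fontspec itself
\setmainfont{Alegreya}[
  SmallCapsFont={Alegreya SC},
  Ligatures=TeX,
]
% Alegreya has no MATH table, so a real math font supplies the symbols.
% Asana Math is a placeholder; any OpenType math font works here.
\setmathfont{Asana Math}
% Borrow the italic Latin letters for variables from Alegreya
% (an approximation; metrics and spacing may be imperfect):
\setmathfont{Alegreya Italic}[range={it/{latin,Latin}}]
\begin{document}
Variables now match the text font: $e^{\theta}=\cos\theta+i\sin\theta$.
\end{document}
```

Compile with xelatex (or lualatex); if the borrowed letters look off, dropping the last \setmathfont line keeps a consistent, if less matching, math setup.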
Nope: correct for binary (thanks, that was the name I confused with barrier options!). But since the call payoff $(x-K)^+$ has second-order derivatives that are measure-valued, for any $N$ and any probability measure [$]\mu[$] (with technical assumptions here) there exists a sampling sequence [$]x^1,\ldots,x^N[$] such that
[$] \left| \int_{R^D} (x_d-K)^+ d\mu(x) - \frac{1}{N} \sum_{n=1}^N (x_d^n-K)^+ \right| \le \frac{C}{N^2} [$]
for any strike K, d = 1..D.
OK, so just so that I get it straight: 1. the call payoff (S - K)+ is not BV and there is NO sampling sequence for it which converges faster than 1/N; 2. the binary payoff Heaviside(S - K) is BV and there IS a sampling sequence for it which converges faster than 1/N. Correct?
More precisely, without proving this result, one cannot pretend to break the curse of dimensionality, whether one uses deep tralala networks or not.
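As a concrete one-dimensional toy illustration of the two rates being discussed (my own sketch, not from the thread): for U ~ Uniform(0,1) and the call payoff (U − K)^+, plain i.i.d. sampling gives the usual 1/√N error, while a carefully chosen deterministic sequence (the midpoints of the N quantile cells of the measure) is dramatically more accurate.

```python
import random

random.seed(0)
K, N = 0.5, 10_000
exact = (1 - K) ** 2 / 2          # E[(U-K)^+] for U ~ Uniform(0,1)

# Plain i.i.d. Monte Carlo: error decays like 1/sqrt(N).
mc = sum(max(random.random() - K, 0.0) for _ in range(N)) / N

# A carefully chosen deterministic sequence (midpoints of the N
# quantile cells of the measure): far faster decay for this payoff.
det = sum(max((n + 0.5) / N - K, 0.0) for n in range(N)) / N

print(abs(mc - exact), abs(det - exact))
```

The deterministic error here is essentially zero because the payoff is piecewise linear and the kink sits on a cell boundary; the point is only the gap between the two error scales, not the exact constants.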
I frequently hear that Kepler, using his equations of orbital motion, could predict the orbits of all the planets to a high degree of accuracy -- except Mercury. I've heard that Mercury's motion couldn't be properly predicted until general relativity came around. But what does general relativity have to do with Mercury's orbit?
This web page has a nice discussion on it: http://archive.ncsa.illinois.edu/Cyberia/NumRel/EinsteinTest.html
Basically the orbit's eccentricity would precess around the sun. Classical stellar mechanics (or Newtonian gravity) couldn't account for all of that. It basically had to do with (and forgive my crude wording) the sun dragging the fabric of space-time around with it.
Or as the web page says:
Mercury's Changing Orbit
In a second test, the theory explained slight alterations in Mercury's orbit around the Sun.
Daisy petal effect of precession
For almost two centuries astronomers had been aware of a small flaw in Mercury's orbit around the Sun, as predicted by Newton's laws. As the closest planet to the Sun, Mercury orbits a region in the solar system where spacetime is disturbed by the Sun's mass. Mercury's elliptical path around the Sun shifts slightly with each orbit such that its closest point to the Sun (or "perihelion") shifts forward with each pass. Newton's theory had predicted an advance only half as large as the one actually observed. Einstein's predictions exactly matched the observation.
For more detail that goes beyond a simple layman answer, you can check this page out and even download an app that lets you play with the phenomenon: http://www.fourmilab.ch/gravitation/orbits/
And of course, the ever handy Wikipedia has this covered as well: http://en.wikipedia.org/wiki/Tests_of_general_relativity#Perihelion_precession_of_Mercury Although, truth be told, I think I said it better (i.e. more elegantly) than the Wiki page does. But then I may be biased.
Mercury's orbit is elliptical. The orientation of this ellipse's long axis slowly rotates around the sun. This process is known as the "precession of the perihelion of Mercury" in astronomical jargon. It's a total of 5600 arcseconds of rotation per century.
The precession is mostly a result of totally classical behavior; almost all of the movement of the perihelion (about 5030 arcseconds per century) is present in a two-body system with point masses for the Sun and Mercury. Another 530 arcseconds per century are caused by gravitational effects of the other planets.
That leaves 40 arcseconds per century of unexplained movement. The observed value of 5599.7 arcseconds per century is measured very accurately, to within 0.04 arcseconds per century, so this is a significant deviation.
It turns out that 43 arcseconds per century are expected to result from general relativity. One hand-wavey way of explaining this is that the curvature of spacetime itself by the two bodies (Sun and Mercury) causes some changes to the gravitational potential, so it isn't really exactly $\frac{GMm}{r}$.
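The budget above can be reproduced with a few lines. The GR term follows from the standard Schwarzschild perihelion-advance formula, Δφ = 6πGM/(c²a(1−e²)) per orbit; the constants below are textbook values for Mercury, supplied by me rather than taken from the answer.

```python
import math

# Textbook values for Mercury (assumptions, not from the answer above).
GM_sun = 1.32712e20       # m^3 s^-2
c = 2.99792458e8          # m/s
a = 5.7909e10             # semi-major axis, m
e = 0.2056                # eccentricity
period_days = 87.969

# GR perihelion advance per orbit (Schwarzschild, no frame dragging):
dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))  # radians/orbit

orbits_per_century = 36525 / period_days
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(round(arcsec, 1))   # about 43 arcseconds per century

# Bookkeeping from the text: two-body + planetary + GR vs. observed 5599.7.
total = 5030 + 530 + arcsec
print(round(total))
```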
I'd like to add a clarification to the other answers, some of which seem to imply that the precession of Mercury's orbital perihelion is owing to general relativistic frame dragging. In particular, the statement that the Sun drags the fabric of space time around with it could be, in my opinion, misleading because most of the precession is NOT owing to "frame dragging", which is otherwise known as the Lense-Thirring Effect.
A nonrotating Sun would also beget the observed anomalous precession, whose non-Newtonian component almost wholly arises from the inverse cubic term in the effective potential coming from the solution of the Einstein Field Equations for the Schwarzschild Metric. This metric assumes the central body (the Sun in this case) is stationary and nonrotating. It is this cubic term that leads to the celebrated triumph of GR over Newtonian gravity, which does not imply this cubic term.
This metric is equivalent to what Einstein's own analysis used to declare that general relativity explains the anomalous precession. He did not account for the Lense-Thirring frame dragging owing to the Sun's rotation, which is a much smaller effect even than that of the cubic term.
Afternote: Einstein's own method did not solve for a metric; historically, as noted by Physics SE user Stan Liou (thanks Stan):
...[Einstein] used an approximation scheme without writing any metric for his second approximation--but his potential did indeed have an inverse-cube term. Other than via Schwarzschild, a modernized approach would use a stationary PPN metric (so no frame dragging here either):
$$\mathrm{d}s^2 = -(1+2\beta\Phi)\mathrm{d}t^2 + (1-2\gamma\Phi)\mathrm{d}\Sigma_\text{Euclid}^2$$
with the perihelion shift scaling proportionally to $(2-\beta+2\gamma)/3$ of the correct GTR value, which has $\beta=\gamma=1$.
Einstein's solution, contesting Newton's laws, was challenged by several scientists including Dr. Thomas Van Flandern, an astronomer who worked at the U.S. Naval Observatory in Washington. According to them, Einstein knew this figure (43" of arc) in advance and "adjusted" the arguments of his equation so that the previously known result was achieved, because he knew this would be a critical test for his Theory of General Relativity. See http://ldolphin.org/vanFlandern/, www.metaresearch.org, and "The Greatest Standing Errors in Physics and Mathematics" at http://milesmathis.com/merc.html. Better to believe in Newton's laws. The mass that caused the precession of Mercury will be shown shortly, in 2014.
Here's an example that shows that $\mathcal{M}_{\geq 0}(M)$ is not closed under addition in general. There may be easier examples, but this is the only one I know.
Let $\pi : \widetilde{M} \to M$ be a covering map. Suppose $g$ is a Riemannian metric on $\widetilde{M}$ such that $f^*g = g$ for all deck transformations $f$. Then there is a Riemannian metric $h$ on $M$ such that $\pi^*h = g$. Note that $s_g = s_{\pi^*h} = \pi^*s_h =s_h\circ \pi$. As $\pi$ is surjective, the functions $s_g$ and $s_h$ are determined by one another.
Now, let $X$ be a non-spin complex surface arising as a complete intersection in some complex projective space (e.g. a smooth hypersurface of $\mathbb{CP}^3$ with odd degree $d \geq 5$), and let $N = S^2\times S^2/\mathbb{Z}_2$ where the $\mathbb{Z}_2$ action is generated by $\sigma(x, y) = (-x, -y)$. Set $M = X\# N$. Then the universal cover of $M$ is $\widetilde{M} = X\# X \#(S^2\times S^2)$, and $\pi : \widetilde{M} \to M$ is a double covering.
In Scalar curvature, covering spaces, and Seiberg-Witten theory, LeBrun showed that $Y(M) < 0$ and $Y(\widetilde{M}) > 0$ where $Y$ denotes the Yamabe invariant. In particular, $\widetilde{M}$ admits a positive scalar curvature metric, while $M$ does not admit a metric with non-negative scalar curvature.
Let $f : \widetilde{M} \to \widetilde{M}$ be the non-trivial deck transformation of $\pi$. If $g$ is a positive scalar curvature metric on $\widetilde{M}$, then $f^*g \neq g$, otherwise there would be a positive scalar curvature metric $h$ on $M$ with $\pi^*h = g$. Note that $f^*g$ is another Riemannian metric on $\widetilde{M}$, and as $s_{f^*g} = s_g\circ f$, it also has positive scalar curvature. Now consider the metric $\tilde{g} := g + f^*g$. As $f^*\tilde{g} = \tilde{g}$, there is a Riemannian metric $h$ on $M$ with $\pi^*h = \tilde{g}$. As $M$ does not admit metrics with non-negative scalar curvature and $s_{\tilde{g}} = s_h\circ\pi$, the metric $\tilde{g}$ does not have non-negative scalar curvature.
So $g, f^*g \in \mathcal{M}_{> 0}(\widetilde{M}) \subset \mathcal{M}_{\geq 0}(\widetilde{M})$, but $g + f^*g \not\in \mathcal{M}_{\geq 0}(\widetilde{M})$.
If you only wanted an example to show (the weaker result) that $\mathcal{M}_{> 0}(M)$ is not closed under addition in general, then this follows from earlier work by Bérard Bergery. He pointed out that there are examples of finite coverings $\pi : \widetilde{M} \to M$ such that $\widetilde{M}$ admits positive scalar metrics, but $M$ doesn't.
For example, one could take $M = (S^2\times\mathbb{RP}^7)\#\Sigma$ where $\Sigma$ is an exotic $9$-sphere with $\alpha(\Sigma) \neq 0$; here $\alpha$ denotes the Hitchin-Lichnerowicz obstruction $\alpha : \Omega^{\text{spin}}_n \to KO_n(\text{pt})$. Hitchin showed that if $M$ is a spin manifold which admits metrics of positive scalar curvature, then $\alpha(M) = 0$; in particular, $\alpha(S^2\times\mathbb{RP}^7) = 0$. Now $M$ is spin and $\alpha(M) = \alpha(S^2\times\mathbb{RP}^7) + \alpha(\Sigma) = \alpha(\Sigma) \neq 0$, so $M$ does not admit metrics of positive scalar curvature. On the other hand $\widetilde{M} = (S^2\times S^7)\#\Sigma\#\Sigma$ is diffeomorphic to $S^2\times S^7$ (because $\Theta_9 = \mathbb{Z}_2\oplus\mathbb{Z}_2\oplus\mathbb{Z}_2$) which does admit metrics of positive scalar curvature.
In the same way as we did above, we can use this example to construct two metrics $g, f^*g \in \mathcal{M}_{> 0}(S^2\times S^7)$ with $g + f^*g \not\in \mathcal{M}_{> 0}(S^2\times S^7)$.
A parabola is a U-shaped plane curve where any point is at an equal distance from a fixed point (known as the focus) and from a fixed straight line (known as the directrix). The parabola is an integral part of the conic sections topic, and all its concepts are covered here, including the following:
Definition
Standard Equation
Latus Rectum
Parametric Coordinates
General Equations
Tangent to a Parabola
Normal to a Parabola
Focal Chord Properties
Focal Chord, Tangent and Normal Properties
Forms
Questions

What is a Parabola?
A section of a right circular cone by a plane parallel to a generator of the cone is a parabola. It is the locus of a point which moves so that its distance from a fixed point (the focus) is equal to its distance from a fixed line (the directrix).
Standard Equation of Parabola
The simplest equation of a parabola is y² = x, when the directrix is parallel to the y-axis. In general, if the directrix is parallel to the y-axis, the standard equation of a parabola is

y² = 4ax

If the directrix is parallel to the x-axis, the standard equation becomes

x² = 4ay

Apart from these two, the parabola may also open in the negative direction, giving y² = −4ax or x² = −4ay. Thus, the four equations of a parabola are:

y² = 4ax
y² = −4ax
x² = 4ay
x² = −4ay

Parabola Equation Derivation
In the above equation, “a” is the distance from the origin to the focus. Below is the derivation for the parabola equation. First, refer to the image given below.
From definition,
\(\frac{SP}{PM}=1\)
SP = PM
\(\sqrt{{{\left( x-a \right)}^{2}}+{{y}^{2}}}=\left| \frac{x+a}{1} \right|\)
(x − a)² + y² = (x + a)²

⇒ \(y^{2}=4ax\)
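The key identity behind the last step, (x + a)² − (x − a)² = 4ax, can be spot-checked numerically (my own check, not part of the lesson):

```python
# The squaring step: (x - a)^2 + y^2 = (x + a)^2 reduces to y^2 = 4ax,
# because (x + a)^2 - (x - a)^2 = 4ax identically in x and a.
for x in (0.0, 0.7, 2.0, 5.5):
    for a in (0.5, 1.0, 3.0):
        assert abs(((x + a)**2 - (x - a)**2) - 4*a*x) < 1e-9
print("(x + a)^2 - (x - a)^2 = 4ax")
```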
This is the standard equation of a parabola.

Latus Rectum of Parabola
The latus rectum of a parabola is the chord that passes through the focus and is perpendicular to the axis of the parabola.
LSL′ = latus rectum = \(2\left( \sqrt{4a.a} \right)\) = 4a (length of the latus rectum)
Note: Two parabolas are said to be equal if their latus recta are equal.

Parametric Coordinates of Parabola
For the parabola y² = 4ax, the easiest way to represent the coordinates of a point on it is x = at² and y = 2at, since for any value of t the coordinates (at², 2at) always satisfy the parabola equation y² = 4ax. So any point on the parabola y² = 4ax can be written as (at², 2at), where t is a parameter.
Focal Chord and Focal Distance

Focal chord: Any chord that passes through the focus of the parabola is a focal chord of the parabola.

Focal distance: The focal distance of any point P(x, y) on the parabola y² = 4ax is the distance between the point P and the focus.
PM = a + x
PS = Focal distance = x + a
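Both claims, that (at², 2at) lies on the parabola and that the focal distance is x + a, are easy to spot-check numerically (my own snippet, not from the lesson):

```python
import math

# Check that (a t^2, 2 a t) lies on y^2 = 4 a x, and that its distance
# to the focus (a, 0) equals x + a (the focal distance).
for a in (0.5, 1.0, 3.0):
    for t in (0.2, 1.0, 2.7):
        x, y = a * t * t, 2 * a * t
        assert abs(y * y - 4 * a * x) < 1e-9
        ps = math.hypot(x - a, y)      # distance from (x, y) to focus
        assert abs(ps - (x + a)) < 1e-9
print("focal distance = x + a")
```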
General Equations of Parabola
Equation of a parabola by definition: for focus (α, β) and directrix ℓx + my + n = 0, SP = PM gives

\({{(x-\alpha )}^{2}}+{{(y-\beta )}^{2}}=\frac{{{(\ell x+my+n)}^{2}}}{{{\ell }^{2}}+{{m}^{2}}}\)

The general equation of second degree, ax² + 2hxy + by² + 2gx + 2fy + c = 0, represents a parabola if \(\Delta \ne 0\) and \({{h}^{2}}=ab\).
Position of a point with respect to parabola
For the parabola \(S\equiv {{y}^{2}}-4ax=0\) and a point \(P({{x}_{1}},{{y}_{1}})\), define
\({{S}_{1}}={{y}_{1}}^{2}-4a{{x}_{1}}\). Then:
\({{S}_{1}}<0\,\,\,\,\,(inside\,curve)\)
\({{S}_{1}}=0\,\,\,\,\,(on\,curve)\)
\({{S}_{1}}>0\,\,\,\,(outside\,curve)\)
Intersection of a straight line with the parabola y² = 4ax

Let the straight line be y = mx + c, where m is the slope. Substituting into y² = 4ax:

(mx + c)² − 4ax = 0
m²x² + 2x(mc − 2a) + c² = 0

This is of the form Ax² + Bx + C = 0, with discriminant D = B² − 4AC = 16a(a − mc).

D = 0: the line is tangent to the curve, and \(c={}^{a}/{}_{m}\)
D > 0, i.e. mc − a < 0: the line intersects the curve in two points
D < 0, i.e. mc − a > 0: the line does not touch the curve
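The discriminant algebra can be verified numerically; it simplifies to D = 16a(a − mc), which is the source of the three cases above (a quick check of my own):

```python
# Substituting y = m x + c into y^2 = 4 a x gives
#   m^2 x^2 + 2(mc - 2a) x + c^2 = 0,
# whose discriminant simplifies to D = 16 a (a - mc).
for a in (0.5, 1.0, 2.0):
    for m in (0.5, 1.5):
        for c in (0.2, 1.0, 4.0):
            A, B, C = m * m, 2 * (m * c - 2 * a), c * c
            D = B * B - 4 * A * C
            assert abs(D - 16 * a * (a - m * c)) < 1e-9
        # tangency: D vanishes exactly when c = a/m
        ct = a / m
        Bt = 2 * (m * ct - 2 * a)
        assert abs(Bt * Bt - 4 * m * m * ct * ct) < 1e-9
print("D = 16a(a - mc); tangent when c = a/m")
```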
Tangent to a Parabola
Tangent at the point (x₁, y₁) on the parabola y² = 4ax. The equation of the tangent:

\(y{{y}_{1}}-{{y}_{1}}^{2}=2a(x-{{x}_{1}})\)

\(y{{y}_{1}}-4a{{x}_{1}}=2a(x-{{x}_{1}})\) (using \({{y}_{1}}^{2}=4a{{x}_{1}}\))

\(\Rightarrow y{{y}_{1}}=2a(x+{{x}_{1}})\) at the point \(({{x}_{1}},{{y}_{1}})\)
Tangent in slope (m) form:

For y² = 4ax, let the equation of the tangent be y = mx + c.

From the previous illustration, y = mx + c touches the curve at exactly one point when \(c\text{ }=~{}^{a}/{}_{m}\).

Equation of the tangent: y = mx + \(~{}^{a}/{}_{m}\)

So the point of tangency is \(\left( {}^{a}/{}_{{{m}^{2}}},\frac{2a}{m} \right)\)
Tangent in parameter form, at (at², 2at):

ty = x + at², where t is the parameter.
Pair of tangents from an external point (x₁, y₁):

Let y² = 4ax be the parabola and P(x₁, y₁) an external point. The pair of tangents from P is given by

SS₁ = T²

where \(S\equiv {{y}^{2}}-4ax,\,\,\,{{S}_{1}}\equiv {{y}_{1}}^{2}-4a{{x}_{1}},\,\,\,T\equiv y{{y}_{1}}-2a(x+{{x}_{1}})\).
Chord of contact:
The equation of the chord of contact of tangents from a point P(x₁, y₁) to the parabola y² = 4ax is given by T = 0,

i.e., yy₁ − 2a(x + x₁) = 0.
Normal to the parabola:
Normal at the point P(x₁, y₁): since the normal is perpendicular to the tangent, the slope of the normal is

\({}^{-1}/{}_{Slope\,of\,Tangent}\)

so the slope of the normal at P(x₁, y₁) is \(\frac{-{{y}_{1}}}{2a}\), and the equation of the normal is \(y-{{y}_{1}}=\frac{-{{y}_{1}}}{2a}(x-{{x}_{1}})\).
Normal in terms of m:

Let m be the slope of the normal, \(m=-\frac{dx}{dy}\). For \({{y}^{2}}=4ax\),

\(m=\frac{-{{y}_{1}}}{2a}\), so \({{y}_{1}}=-2am\) and \({{x}_{1}}=a{{m}^{2}}\).

Hence the equation of the normal at the point (am², −2am) is

\(y=mx-2am-a{{m}^{3}}\)
Normal at the point (at², 2at), with t the parameter:

y = −tx + 2at + at³

Important Properties of Focal Chords

If the chord joining \(P=(at_{1}^{2},2a{{t}_{1}})\) and \(Q=(at_{2}^{2},2a{{t}_{2}})\) is a focal chord of the parabola \({{y}^{2}}=4ax\), then \({{t}_{1}}{{t}_{2}}=-1\).
If one extremity of a focal chord is \((at_{1}^{2},2a{{t}_{1}})\), then the other extremity is \(\left( \frac{a}{t_{1}^{2}},-\frac{2a}{{{t}_{1}}} \right)\).
If the point \(P(a{{t}^{2}},2at)\) lies on the parabola \({{y}^{2}}=4ax\), then the length of the focal chord through P is \(a{{(t+1/t)}^{2}}\).
The length of the focal chord which makes an angle θ with the positive x-axis is \(4a\cos e{{c}^{2}}\theta\).
The semi-latus rectum is the harmonic mean of SP and SQ, where P and Q are the extremities of the focal chord, i.e., \(2a=\frac{2SP\times SQ}{SP+SQ}\,or\,\frac{1}{SP}+\frac{1}{SQ}=\frac{1}{a}\).
The circle described on a focal radius as diameter touches the tangent at the vertex.
The circle described on a focal chord as diameter touches the directrix.
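The first and third of these properties can be spot-checked numerically: collinearity of P, the focus S = (a, 0), and Q forces t₁t₂ = −1, and the chord length then equals a(t + 1/t)² (my own snippet, not part of the lesson):

```python
import math

a = 1.5
for t in (0.4, 1.0, 2.5):
    t1, t2 = t, -1.0 / t                 # the claimed focal-chord pairing
    P = (a * t1 * t1, 2 * a * t1)
    Q = (a * t2 * t2, 2 * a * t2)
    S = (a, 0.0)
    # P, S, Q collinear: cross product of S->P and S->Q vanishes.
    cross = (P[0] - S[0]) * (Q[1] - S[1]) - (P[1] - S[1]) * (Q[0] - S[0])
    assert abs(cross) < 1e-9
    # Focal-chord length equals a (t + 1/t)^2.
    length = math.hypot(P[0] - Q[0], P[1] - Q[1])
    assert abs(length - a * (t + 1 / t) ** 2) < 1e-9
print("t1*t2 = -1 and |PQ| = a(t + 1/t)^2")
```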
Important Properties of Focal Chord, Tangent and Normal of a Parabola

(i) The tangent at any point P on a parabola bisects the angle between the focal chord through P and the perpendicular from P on the directrix.
(ii) The portion of a tangent to a parabola cut off between the directrix and the curve subtends a right angle at the focus.
(iii) Tangents at the extremities of any focal chord intersect at right angles on the directrix.
(iv) Any tangent to a parabola and the perpendicular on it from the focus meet on the tangent at its vertex.
Four Common Forms of a Parabola:

Form:                   y² = 4ax   y² = −4ax   x² = 4ay   x² = −4ay
Vertex:                 (0, 0)     (0, 0)      (0, 0)     (0, 0)
Focus:                  (a, 0)     (−a, 0)     (0, a)     (0, −a)
Equation of directrix:  x = −a     x = a       y = −a     y = a
Equation of the axis:   y = 0      y = 0       x = 0      x = 0
Tangent at the vertex:  x = 0      x = 0       y = 0      y = 0

Practice Problems on Parabola

Illustration 1: Find the vertex, axis, directrix, tangent at the vertex and the length of the latus rectum of the parabola \(2{{y}^{2}}+3y-4x-3=0\).

Solution: The given equation can be re-written as \({{\left( y+\frac{3}{4} \right)}^{2}}=2\left( x+\frac{33}{32} \right)\)
which is of the form \({{Y}^{2}}=4aX\)where \(Y=y+\frac{3}{4},\,X=x+\frac{33}{32},\,4a=2\).
Hence the vertex is \(X=0,Y=0\) i.e. \(\left( -\frac{33}{32},-\frac{3}{4} \right)\).
The axis is \(y+\frac{3}{4}=0\Rightarrow y=-\frac{3}{4}\).
The directrix is \(X=-a\)
\(\Rightarrow x+\frac{33}{32}+\frac{1}{2}=0\Rightarrow x=-\frac{49}{32}\)
The tangent at the vertex is \(X=0\,or\,x+\frac{33}{32}=0\Rightarrow x=-\frac{33}{32}\).
Length of the latus rectum = 4a = 2.
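The completing-the-square step above is easy to sanity-check symbolically (a small sketch; sympy assumed available):

```python
import sympy as sp

x, y = sp.symbols('x y')

# original conic and its completed-square form from the worked solution
original = 2*y**2 + 3*y - 4*x - 3
completed = 2*((y + sp.Rational(3, 4))**2 - 2*(x + sp.Rational(33, 32)))

# the two expressions agree identically
print(sp.simplify(original - completed))  # 0
```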
Illustration 2: Find the equation of the parabola whose focus is (3, -4) and directrix is x – y + 5 = 0. Solution: Let P(x, y) be any point on the parabola. Then
\(\sqrt{{{(x-3)}^{2}}+{{(y+4)}^{2}}}=\frac{\left| x-y+5 \right|}{\sqrt{1+1}}\)
\(\Rightarrow {{(x-3)}^{2}}+{{(y+4)}^{2}}=\frac{{{(x-y+5)}^{2}}}{2}\)
\(\Rightarrow {{x}^{2}}+{{y}^{2}}+2xy-22x+26y+25=0\)
\(\Rightarrow {{(x+y)}^{2}}=22x-26y-25\)
Illustration 3: Find the equation of the parabola having focus (-6, -6) and vertex (-2, 2). Solution: Let S(-6, -6) be the focus and A(-2, 2) the vertex of the parabola. On SA produced take a point K\(({{x}_{1}},{{y}_{1}})\) such that SA = AK. Draw KM perpendicular to SK at K. Then KM is the directrix of the parabola.
Since A bisects SK, \(\left( \frac{-6+{{x}_{1}}}{2},\frac{-6+{{y}_{1}}}{2} \right)=(-2,2)\)
\(\Rightarrow -6+{{x}_{1}}=-4\,and\,-6+{{y}_{1}}=4\,or\,({{x}_{1}},{{y}_{1}})=(2,10).\)
Hence the equation of the directrix KM is y – 10 = m(x – 2) ……(1)
Also gradient of \(SK=\frac{10-(-6)}{2-(-6)}=\frac{16}{8}=2;\,m=\frac{-1}{2}\)
So that equation (1) becomes
\(y-10=-\frac{1}{2}(x-2)\) or \(x+2y-22=0\) is the directrix.
Next, let PM be a perpendicular on the directrix KM from any point P(x, y) on the parabola.
From SP = PM, the equation of the parabola is
\(\sqrt{{{(x+6)}^{2}}+{{(y+6)}^{2}}}=\frac{\left| x+2y-22 \right|}{\sqrt{{{1}^{2}}+{{2}^{2}}}}\)
Illustration 4: Find the coordinates of the focus, axis of the parabola, the equation of directrix and the length of the latus rectum for \({{y}^{2}}=12x\). Solution: The given equation is \({{y}^{2}}=12x\).
Here, the coefficient of x is positive. Hence, the parabola opens towards the right.
On comparing this equation with \({{y}^{2}}=4ax\), we get \(4a=12\) or \(a=3\).
Coordinates of the focus are given by (a, 0) i.e., (3, 0).
Since the given equation involves \({{y}^{2}}\), the axis of the parabola is the x-axis.
Equation of directrix is \(x=-a\), i.e., \(x=-3\).
Length of latus rectum = 4a = 4 × 3 = 12.
Illustration 5: Find the coordinates of the focus, axis of the parabola, the equation of directrix and the length of the latus rectum for \({{x}^{2}}=-16y\). Solution: The given equation is \({{x}^{2}}=-16y\).
Here, the coefficient of y is negative. Hence, the parabola opens downwards.
On comparing this equation with \({{x}^{2}}=-4ay\), we get \(-4a=-16\) or \(a=4\).
Coordinates of the focus = (0, -a) = (0, -4).
Since the given equation involves \({{x}^{2}}\), the axis of the parabola is the y-axis.
Equation of directrix: \(y=a\), i.e., \(y=4\).
Length of latus rectum = 4a = 16.
Illustration 6: If the parabolas \({{y}^{2}}=4x\) and \({{x}^{2}}=32y\) intersect at (16, 8) at an angle θ, then find the value of θ. Solution: The slope of the tangent to \({{y}^{2}}=4x\) at (16, 8) is given by
\({m}_{1}={\left( \frac{dy}{dx} \right)}_{(16,8)}={{\left( \frac{4}{2y} \right)}_{(16,8)}}=\frac{2}{8}=\frac{1}{4}\)
The slope of the tangent to \({{x}^{2}}=32y\) at (16, 8) is given by
\({m}_{2}={\left( \frac{dy}{dx} \right)}_{(16,8)} ={{\left( \frac{2x}{32} \right)}_{(16,8)}}=1\)
∴ \(\tan \theta =\frac{1-(1/4)}{1+(1/4)}=\frac{3}{5}\)
\(\Rightarrow \,\,\,\,\,\theta ={{\tan }^{-1}}\left( \frac{3}{5} \right)\)
Illustration 7: Find the equation of the common tangent of \({{y}^{2}}=4ax\) and \({{x}^{2}}=4ay\). Solution: The equation of a tangent to \({{y}^{2}}=4ax\) having slope m is \(y=mx+\frac{a}{m}\).
It will touch \({{x}^{2}}=4ay\) if \({{x}^{2}}=4a\left( mx+\frac{a}{m} \right)\) has equal roots. Thus, \(16{{a}^{2}}{{m}^{2}}=-16\frac{{{a}^{2}}}{m}\,\,\,\Rightarrow \,m=-1\)
Thus, common tangent is y + x + a = 0.
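The tangency of x + y + a = 0 to both parabolas can be verified by checking that each substituted quadratic has a vanishing discriminant (a sketch; sympy assumed available):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
y_t = -x - a   # candidate common tangent: x + y + a = 0

# tangency <=> the substituted quadratic in x has a double root
d1 = sp.discriminant(y_t**2 - 4*a*x, x)   # intersect with y^2 = 4ax
d2 = sp.discriminant(x**2 - 4*a*y_t, x)   # intersect with x^2 = 4ay

print(d1, d2)  # 0 0
```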
Illustration 8: Find the equation of the normal to the parabola \({{y}^{2}}=4x\) passing through the point (15, 12). Solution: Here a = 1, so the equation of the normal having slope m is
\(y=mx-2m-{{m}^{3}}\)
If it passes through the point (15, 12) then
\(12=15m-2m-{{m}^{3}}\)
\(\Rightarrow \,\,\,\,\,{{m}^{3}}-13m+12=0\)
\(\Rightarrow \,\,\,\,\,\left( m-1 \right)\left( m-3 \right)\left( m+4 \right)=0\)
\(\Rightarrow \,\,\,\,\,m=1,\,3,\,-4\)
Hence, equations of normal are:
\(y=x-3,\,y=3x-33\,and\,y+4x=72\)
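The cubic for the slopes and the resulting normals can be checked numerically (a quick sketch using numpy):

```python
import numpy as np

# slopes of normals to y^2 = 4x (a = 1) through (15, 12): m^3 - 13m + 12 = 0
slopes = np.roots([1, 0, -13, 12])
print(sorted(np.round(slopes.real, 6)))   # [-4.0, 1.0, 3.0]

# each normal y = m x - 2m - m^3 must pass through (15, 12)
for m in slopes.real:
    assert abs((m*15 - 2*m - m**3) - 12) < 1e-9
```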
Illustration 9: Find the point on \({{y}^{2}}=8x\) where the line x + y = 6 is a normal. Solution: The slope m of the normal x + y = 6 is -1 and a = 2.
The normal to the parabola at the point \((a{{m}^{2}},-2am)\) is
\(y=mx-2am-a{{m}^{3}}\)
\(\Rightarrow \,\,\,\,\,y=-x+4+2=-x+6\) at the point (2, 4)
\(\Rightarrow \,\,\,\,\,x+y=6\) is the normal at (2, 4)
Illustration 10: Tangents are drawn to \({{y}^{2}}=4ax\) at the points where the line lx + my + n = 0 meets this parabola. Find the intersection of these tangents. Solution: Let the tangents intersect at P (h, k). Then lx + my + n = 0 will be the chord of contact. That means lx + my + n = 0 and yk – 2ax – 2ah = 0, the chord of contact, represent the same line.
Comparing the ratios of coefficients, we get
\(\frac{k}{m}=\frac{-2a}{l}=\frac{-2ah}{n}\)
\(\Rightarrow \,\,\,\,\,h=\frac{n}{l},\,k=-\frac{2am}{l}\)
Illustration 11: If the chord of contact of tangents from a point P to the parabola \({{y}^{2}}=4ax\) touches the parabola \({{x}^{2}}=4by\), then find the locus of P. Solution: The chord of contact of the parabola \({{y}^{2}}=4ax\) w.r.t. the point \(P({{x}_{1}},{{y}_{1}})\) is
\(y{{y}_{1}}=2a(x+{{x}_{1}})\) ……(1)
This line touches the parabola \({{x}^{2}}=4by\).
Solving line (1) with parabola, we have
\({{x}^{2}}=4b\left[ \left( 2a/{{y}_{1}} \right)\left( x+{{x}_{1}} \right) \right]\)
or \({{y}_{1}}{{x}^{2}}-8abx-8ab{{x}_{1}}=0\)
According to the question, this equation must have equal roots.
\(\Rightarrow \,\,\,\,\,D=0\,\)
\(\Rightarrow \,\,\,\,64{{a}^{2}}{{b}^{2}}+32ab{{x}_{1}}{{y}_{1}}=0\)
\(\Rightarrow \,\,\,\,\,{{x}_{1}}{{y}_{1}}=-2ab\) or \(xy=-2ab\), which is the required locus.
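A quick numerical spot-check of the locus (the values of a, b and \(x_1\) below are arbitrary):

```python
# spot-check the locus x*y = -2ab with arbitrary a, b (values are illustrative)
a, b = 1.5, 2.0
x1 = 3.0
y1 = -2*a*b / x1          # choose P = (x1, y1) on the claimed locus

# discriminant of y1*x^2 - 8ab*x - 8ab*x1 = 0 must vanish (tangency)
D = (8*a*b)**2 + 4*y1*8*a*b*x1
print(D)  # 0.0
```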
This class of problems lies in RE, so its name is "RE".
$$\begin{align*}\operatorname{Prob}(M \text{ accepts}) &= \operatorname{Prob}\big((\exists n)(M \text{ accepts after exactly $n$ steps})\big)\\ &=\sum_n \operatorname{Prob}(M \text{ accepts after exactly $n$ steps})\\ &=\lim_N \sum_{n\leq N} \operatorname{Prob}(M \text{ accepts after exactly $n$ steps})\\ &=\lim_N \;\operatorname{Prob}(M \text{ accepts in at most $N$ steps}) \\ &= \lim_n \; \operatorname{Prob}(M \text{ accepts in at most $n$ steps})\,. \end{align*}$$
For all $m$ and $n$ with $m\leq n$, and for all randomness strings $r$: $M$ accepts in at most $m$ steps if and only if it accepts after exactly $t$ steps for some $t\leq m$. But then $t\leq n$, so this can only occur if $M$ accepts in at most $n$ steps with randomness string $r$.
For all $m$ and $n$, with $m\leq n$, the probability that $M$ accepts in at most $m$ steps does not exceed the probability that it accepts in at most $n$ steps.
$$\begin{align*} &\tfrac12 < \operatorname{Prob}(M \text{ accepts}) \\ &\iff \tfrac12 < \lim_n \; \operatorname{Prob}(M \text{ accepts in at most $n$ steps})\\ &\iff (\exists n)\left(\tfrac12 < \operatorname{Prob}(M \text{ accepts in at most $n$ steps})\right)\,. \end{align*}$$
Therefore, a machine that loops over the positive integers $n$ and accepts if and only if $\tfrac12 < \operatorname{Prob}(M \text{ accepts in at most $n$ steps})$ will accept exactly the inputs that $M$ has a probability greater than $\tfrac12$ of accepting.
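The looping machine can be illustrated with a toy sketch. Everything here is made up for illustration: the "machine" is just a function of a finite randomness string, not a real Turing machine, and $\operatorname{Prob}$ is computed exactly by enumerating all $2^n$ randomness strings:

```python
from itertools import product
from fractions import Fraction

def prob_accept_within(M, n):
    # exact acceptance probability over all 2^n randomness strings of length n
    strings = list(product((0, 1), repeat=n))
    return Fraction(sum(M(r) for r in strings), len(strings))

def semi_decide(M, max_n=30):
    # accept as soon as some finite n already certifies Prob > 1/2
    for n in range(1, max_n + 1):
        if prob_accept_within(M, n) > Fraction(1, 2):
            return n
    return None  # the true semi-decider would keep looping forever

# toy machine: accepts at the first step whose random bit is 1
toy_M = lambda r: int(any(r))

print(semi_decide(toy_M))  # 2  (Prob within 1 step = 1/2, within 2 steps = 3/4)
```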
What is the exact difference between the wavenumber and the propagation constant of an electromagnetic wave propagating in a medium such as a transmission line? I am a bit confused. Does it have to do with loss in the medium?
See "Mathematical descriptions of opacity", Wikipedia.
The propagation constant has a real and an imaginary part. One of those is equal to the angular wavenumber, the other is proportional to the absorption coefficient.
Which is which (which is the real part and which is the imaginary part) depends on what definition you're using for the term "propagation constant". There is more than one definition in common use.
Here is the consensus for microwave engineering. Other fields of science may vary.
$\newcommand{\j}{{\rm{j}}}$$\newcommand{\e}[1]{\,{\rm{e}}^{#1}}$ It was shown by Hertz that an arbitrary electromagnetic field in a source free homogeneous linear isotropic medium can be defined in terms of a single vector potential $\vec{\Pi}$. Assuming $\e{\,\j\omega t}$ time dependency, a wave in the Hertz vector potential field can be written as: $$\vec{\Pi}(x) = \vec{\Pi}(0) \e{-\gamma x}$$
The propagation constant $\gamma$ is a complex quantity: $$\gamma = \alpha + \j\beta$$
where $\alpha$ is the attenuation constant, and $\beta$ is the phase constant.
However, since the attenuation in an air medium is negligible, it is customary to write the wave equation solely as a function of a complex phase constant $\beta$: $$\vec{\Pi}(x) = \vec{\Pi}(0) \e{-\j\beta x}$$
where $\beta = \beta' -\j \beta''$, such that $\gamma \equiv \j \beta = \j (\beta' -\j \beta'') = \beta'' + \j \beta' \Rightarrow \beta'' \equiv \alpha$.
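As a concrete numerical illustration, for a transmission line the propagation constant follows from the telegrapher's equations as $\gamma = \sqrt{(R + \j\omega L)(G + \j\omega C)}$. The per-metre line constants below are made-up values, chosen only to show the decomposition into $\alpha$ and $\beta$:

```python
import numpy as np

f = 1e9                                   # 1 GHz (illustrative)
omega = 2*np.pi*f
R, L, G, C = 0.5, 250e-9, 1e-5, 100e-12   # per-metre line constants (made up)

gamma = np.sqrt((R + 1j*omega*L) * (G + 1j*omega*C))
alpha, beta = gamma.real, gamma.imag      # attenuation (Np/m), phase (rad/m)

# lossless limit: alpha -> 0 and beta -> omega*sqrt(L*C), the usual wavenumber
gamma0 = np.sqrt((1j*omega*L) * (1j*omega*C))
print(abs(gamma0.imag - omega*np.sqrt(L*C)) < 1e-9)   # True
```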
The free space angular wave number $k_0$ is defined as: $$k_0 \equiv \frac{\omega}{c_0} = \frac{2\pi}{\lambda_0}$$
The unit is $\frac{\text{rad}}{\text{m}}$
Only for TEM waves: $$\beta = k_0 = \frac{2\pi}{\lambda_0}$$
Whereas for TE and TM waves, separation of variables in a Helmholtz equation results in a transcendental dispersion function that needs to be solved involving the free space wave number $k_0$ and a transverse wave number $\tau$.
In such cases, $$\tau^2 = -\left(\gamma^2 + {k_0}^2\right) = \beta^2 - {k_0}^2$$
$$\Rightarrow \beta = \sqrt{{k_0}^2 + \tau^2}$$
Some worked out examples for EM surface waves can be found here.
The attenuation constant is specifically the imaginary part of the complex wavenumber ($k_i$), while the real part ($k_r$) plays the role of the ordinary wavenumber in dissipative media. Let me know what you think, because I just started studying this.
Duh Haymet
Revision as of 16:43, 26 February 2007
The Duh-Haymet (Ref. 1) (1995) Padé (3/2) approximation for the bridge function for the Lennard-Jones system is (Eq. 13)
:<math>B(\gamma^{*})= - \frac{1}{2} \gamma^{*2} \left[ \frac{1}{ \left[ 1+ \left( \frac{5\gamma^{*} +11}{7\gamma^{*} +9} \right) \gamma^{*} \right]} \right]</math>
where (Eq. 10) <math>\gamma^{*}(r) = \gamma (r) - \beta \Phi_p(r)</math>, where <math>\Phi_p (r)</math> is the perturbative (attractive) part of the pair potential.
==References==
#[http://dx.doi.org/10.1063/1.470724 Der-Ming Duh and A. D. J. Haymet "Integral equation theory for uncharged liquids: The Lennard-Jones fluid and the bridge function", Journal of Chemical Physics '''103''' pp. 2625-2633 (1995)]
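Eq. 13 is trivial to evaluate numerically; the sketch below is not part of the wiki page, just a direct transcription of the Padé form:

```python
def bridge_duh_haymet(g):
    """Duh-Haymet Pade (3/2) bridge function B(gamma*), Eq. 13."""
    return -0.5 * g**2 / (1.0 + (5.0*g + 11.0) / (7.0*g + 9.0) * g)

print(bridge_duh_haymet(0.0))  # -0.0 (the bridge function vanishes at gamma* = 0)
print(bridge_duh_haymet(1.0))  # -0.25
```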
The independent boson model consists of the following Hamiltonian: $$ H_s = E \sigma^z $$ $$ H_b = \sum_k \omega_k b^{\dagger}_kb_k $$ $$H_{sb} = \sigma^z \sum_k (g_k b_k + g_k^{\ast}b^{\dagger}_k).$$ The model describes a single spin-1/2 impurity with Pauli operators $\sigma^{x,y,z}$ linearly coupled to an infinity of bosonic modes $b_k$. Importantly, the interaction $H_{sb}$ commutes with $H_s$.
The model is exactly solvable by introducing a state-dependent displacement:
$$ U = \exp \left[ \sigma^z \sum_k \frac{1}{\omega_k}(g_k^{\ast}b^{\dagger}_k - g_k b_k)\right],$$leading to the transformed Hamiltonian$$U H U^{\dagger} = E^{\prime} \sigma^z + \sum_k \omega_k b^{\dagger}_kb_k + \mathrm{const.}$$where $E^{\prime}$ is the renormalised impurity energy. Similar tricks allow one to compute time evolution etc. The solutions can be found in detail in Mahan's book
Many-Particle Physics.
Note that there exists an equivalence between a spin-1/2 particle and a single fermionic mode, i.e. we can rewrite the above Hamiltonian by replacing $\sigma^z \to c^{\dagger} c$, where $\{c,c^{\dagger}\} = 1$ are fermionic ladder operators. The resulting model is equivalent up to a shift of the equilibrium position of the oscillators.
However, when $c$ is instead taken to be bosonic, the solution fails. The fermionic/spin solution relies on the fact that $(c^{\dagger}c)^2 = c^{\dagger}c$, which ultimately stems from the fact that the fermionic Hilbert space has 2 states. In contrast, the Hilbert space of a bosonic mode is infinite-dimensional.
Is the independent boson model always exactly solvable so long as the Hilbert space of the impurity is finite-dimensional?
I mean precisely the following: imagine replacing $\sigma^z$ with $S^z$, the $z$ projection of a spin with total angular momentum $S > 1/2$. Is the model exactly solvable? Constructive answers which describe the form of the solution would be great, or any references to where this problem has already been solved in the literature.
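As a minimal numerical sanity check of the displacement trick for the spin-1/2 case, one can truncate a single bosonic mode with real coupling $g$ and verify that the transform diagonalizes the Hamiltonian with spectrum $\sigma E + n\omega - g^2/\omega$. All parameter values and the Fock cutoff below are illustrative:

```python
import numpy as np
from scipy.linalg import expm

E, omega, g, N = 0.5, 1.0, 0.2, 25         # illustrative values; N = Fock cutoff

b = np.diag(np.sqrt(np.arange(1, N)), 1)   # truncated annihilation operator
sz = np.diag([1.0, -1.0])
I2, IN = np.eye(2), np.eye(N)

# single-mode model with real coupling: H = E sz + w b'b + g sz (b + b')
H = E*np.kron(sz, IN) + omega*np.kron(I2, b.T @ b) + g*np.kron(sz, b + b.T)

# state-dependent displacement U = exp[ sz (g/w)(b' - b) ]
U = expm(np.kron(sz, (g/omega)*(b.T - b)))
Ht = U @ H @ U.T

# prediction: U H U' = E sz + w b'b - g^2/w, spectrum sE + n w - g^2/w
low = np.sort(np.linalg.eigvalsh(H))[:6]
pred = np.sort([s*E + n*omega - g**2/omega for s in (1, -1) for n in range(6)])[:6]
print(np.allclose(low, pred, atol=1e-8))                                  # True
print(np.allclose(Ht[:6, :6], np.diag(np.diag(Ht[:6, :6])), atol=1e-8))   # True
```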
For example, if there are 50 boys in a school and the total number of students is $200$, then for finding the percentage of boys, why do we compute $\left(\, 50/200\, \right)100\ \% = 25\ \%$? What does the fraction '$50/200$' represent? I know 'percentage' means out of $100$, but $50/200$ is not representing $50$ out of a hundred.
What you are solving is $$\dfrac{50}{200} = \dfrac{x}{100}$$
Which becomes $$x= 100\times \dfrac{50}{200}$$
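In code, that proportion is just a rescaling (the function name is arbitrary):

```python
def as_percentage(part, whole):
    # solve part/whole = x/100 for x
    return 100 * part / whole

print(as_percentage(50, 200))  # 25.0
```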
When you take a portion and divide it by the whole, you create a value that must always be less than or equal to 1.
Let $w$ represent the whole.
Let $p$ represent the portion such that $p \leq w$.
Therefore, $\frac{p}{w} \leq 1$.
When $\frac{p}{w}$ is multiplied by 100, you're just scaling the decimal to what you might consider a more user-friendly, comprehensible value. The ratio is still the same, of course.
The word percent actually comes from the Latin per centum, which means per one hundred. So instead of visualizing data in terms of values greater than $100$, such as in your case, ratios can be scaled to a denominator of $100$, simplifying all data to a more comprehensible manner.
$50$%, $75$%, and $90$% are easier to compare than $\frac{2234}{4468}$, $\frac{2935929}{3914572}$, and $\frac{37901171181738.6}{42112412424154}$. By multiplying these fractions by $100$, they are standardized and thus much easier to compare and contrast with other standardized values.
Why does a black hole attract matter with such a huge amount of force? Does its mass increase on becoming a black hole? Is it due to its volume decreasing? In the formula for gravitational force, $\frac{Gm_1 m_2}{r^2}$, there is no mention of the volume of the bodies, just their masses. The amount of matter present must be same as that present before the collapse of the star, so why does gravity increase so much?
Black Holes exert no more gravitational force than other matter. They are just so massive that light can't escape. When they collapse, they typically lose some matter which means their mass actually decreases during the collapse. You are correct to note that gravity is independent of the volume of the object. Nearby to a black hole, Newtonian physics starts to break down which is where General Relativity comes in, but from afar, $\frac{GMm}{r^2}$ continues to hold true.
And as far as I think, the amount of matter present must be the same as that present before the collapse of the star, so why does its gravity increase manifold?
It doesn't. If you replaced our sun with a black hole of the same mass, it wouldn't change the Earth's orbit. Everything would stay the same - except for the noticeable lack of sunlight, of course.
You're right that the Newtonian formula for the gravitational force $$ F = G\frac{Mm}{R^2}$$ does not say anything about the size of the attractor - the distance $R$ is measured from the center of mass of the attracting body. So one might ask what happens when $R\rightarrow 0$? The force appears to grow without limit. But we've forgotten that day-to-day attractors, like the Earth and the sun, have a nonzero size.
If we fly toward the sun, we will plunge into the surface (where $R=R_0$ is the radius of the sun) long before $R=0$. Once we're inside, the gravitational force becomes roughly
$$ F = \frac{GMm}{R_0^2} \cdot \frac{R}{R_0}$$ which goes smoothly to zero as we reach the center.
On the other hand, a black hole is special because there is no surface to plunge into. Rather than emanating from a volume (which we could get inside, at which point the gravity would start to decrease), the force from a black hole appears to come from a single point. It therefore becomes possible to get closer and closer to the source of gravity, at which point the gravitational force grows without bound.
The weird stuff starts to happen when we get to a radial distance on the order of the Schwarzschild radius $$R_s = \frac{2GM}{c^2}$$
Plugging in the values for the sun, we find that $R_s \approx 2$ miles - but of course, the sun's radius is about 200,000 times that distance.
Approaching in a Newtonian sense, if you equate the escape velocity from a body to the speed of light, you will get yourself a black hole by definition.
$$ v_{escape} = \sqrt { \frac{2GM} {R} } = v_{light} = 299 \ 792 \ 458 \ m / s $$
Note that while the term $ \frac {M}{R} $ is not density itself, it grows with the density at fixed mass; so this is why, when any body is dense enough, it can become a black hole in some sense.
You are talking about black holes in a Newtonian / astrophysical sense. But black holes are a purely general relativistic phenomenon. In particular, if we just consider non-rotating black holes, then the existence of such solutions is by Birkhoff's theorem, namely: every spherically symmetric vacuum spacetime is static. The resulting spacetime is the Schwarzschild solution, which is the original "black hole" solution of Einstein's field equations. The black hole is a singularity that occurs at $r=0$, which is the result of computing the Kretschmann scalar: $K = R^{abcd}R_{abcd} \sim r^{-6}$.
In this context, one cannot talk about such solutions of Einstein's equations with terms like "force", "volume", "mass".
A separate question is that of how a star / astrophysical object can collapse into a black hole, and for that you must use the TOV equation: https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff_equation
Integrating this equation / solving the ODE, one obtains for the pressure of the spherically symmetric object at some distance $r$, :
$p(r) = \mu \left[ \frac{\sqrt{1-r_s / R} - \sqrt{1 - r_s r^2 / R^3}}{ \sqrt{1 - r_s r^2 / R^3} - 3 \sqrt{1 - r_s / R}}\right]$
where $\mu$ is the mass density of the object, and $r_s$ I have denoted as the Schwarzschild radius: $r_s = 2G M$ (in units where $c = 1$).
Now, look at this result: it is only valid for when $r \leq R$. The pressure at $r=0$ which is what you're concerned with is:
$p(0) = \mu \left[ \frac{\sqrt{1 - r_s / R} - 1}{1 - 3 \sqrt{1 - r_s /R}} \right]$.
Since $\mu > 0$ by assumption and the numerator is negative for $R > r_s$, this becomes negative, $p(0) < 0$, when the denominator of this expression becomes positive. So, this becomes negative when:
$1 - 3 \sqrt{1 - r_s / R} > 0$.
This inequality is essentially the collapse condition. So, your object will collapse into a "black hole" as long as this inequality is satisfied.
You essentially use G.R. to see when the pressure of the object you are trying to "collapse" becomes negative. If your object is spherically symmetric, you can use the TOV equation to show that this collapse into a "black hole" occurs for $R < \frac{9}{8} r_s$, where $r_s$ is the Schwarzschild radius. The spherically symmetric case is nice because of symmetry, but for general geometries, it is slightly more difficult. This hopefully answers your question. (On a side note, one can always have black holes without any such notions of things collapsing: just take a Minkowski spacetime and cut out a hole, you'll get the Schwarzschild metric!)
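The sign change of the central pressure around $R = \tfrac{9}{8} r_s$ is easy to check numerically (a sketch; radii are expressed in units of $r_s$):

```python
import math

def p0_over_mu(R_over_rs):
    # central pressure p(0)/mu from the interior Schwarzschild solution
    s = math.sqrt(1.0 - 1.0/R_over_rs)
    return (s - 1.0) / (1.0 - 3.0*s)

print(p0_over_mu(2.0) > 0)   # True: static star, R well above 9/8 r_s
print(p0_over_mu(1.1) > 0)   # False: R below 9/8 r_s, no static solution
```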
According to the kinetic molecular theory obeying Maxwell-Boltzmann distribution of speeds, the rate of effusion through a pinhole of area $A$ is
$$R=\frac{PA}{\sqrt{2\pi M R T}}$$ where $M$ is the molecular weight, $R$ is the gas constant and $T$ the absolute temperature.
To derive this, I consider the collision frequency on any small area ($A$):
using $$v_{avg}=\sqrt{\frac{8RT}{\pi M}}$$ I get the result that the atoms in the volume $v_{avg}A$ (swept per unit time) can hit the area. The number of particles in this volume is $nv_{avg}A$ ($n$ = number density). But the derivation includes a factor of $\frac14$ before this term to find the actual number of atoms in this volume hitting the wall. I want to know how that factor of $\frac14$ came into the picture, making the collision frequency per unit area $\frac14 nv_{avg}=\frac{P}{\sqrt{2\pi M R T}}$.
I know the origin of the factor of $\frac12$ before the pressure term while calculating it, considering the change in momentum of a molecule on collision with the wall: it accounts for the fact that $\langle v_x^2\rangle$ includes $v_x$ both going towards and going away from the wall (positive and negative directions), while the molecules actually colliding are just the half of these going in any one direction.
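The factor of $\frac14$ can be spot-checked by Monte Carlo: for Maxwell-Boltzmann velocities, the one-sided flux per particle $\langle \max(v_z, 0)\rangle$ divided by the mean speed $\langle |v|\rangle$ is exactly $\frac14$. This is only a numerical sketch; the sample size and tolerance are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0                      # sqrt(kT/m); the units drop out of the ratio
v = rng.normal(0.0, sigma, size=(1_000_000, 3))   # Maxwell-Boltzmann velocities

v_avg = np.linalg.norm(v, axis=1).mean()          # mean speed <|v|>
flux_per_n = np.maximum(v[:, 2], 0.0).mean()      # <v_z>_+ = flux / n

ratio = flux_per_n / v_avg
print(ratio)   # ~0.25: the factor 1/4
```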
Probability Seminar Spring 2019
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries as in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M
Title: Stabilization of Diffusion Limited Aggregation in a Wedge.
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title:
Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
April 25, Kavita Ramanan, Brown
Title:
Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
April 26, Colloquium, Kavita Ramanan, Brown
Title:
Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer's theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.
Error analysis of discontinuous Galerkin method for the time fractional KdV equation with weak singularity solution
1.
School of Mathematics and Statistics, Shandong Normal University, Jinan 250014, China
2.
School of Mathematics and Quantitative Economics, Shandong University of Finance and Economics, Jinan 250014, China
3.
Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing 100088, China
In this work, the time fractional KdV equation with Caputo time derivative of order $ \alpha \in (0,1) $ is considered. The solution of this problem has a weak singularity near the initial time $ t = 0 $. A fully discrete discontinuous Galerkin (DG) method combining the well-known L1 discretisation in time and the DG method in space is proposed to approximate the time fractional KdV equation. An unconditional stability result and an $O(N^{-\min \{r\alpha,2-\alpha\}}+h^{k+1})$ convergence result for $ P^k \; (k\geq 2) $ polynomials are obtained. Finally, numerical experiments are presented to illustrate the efficiency and the high-order accuracy of the proposed scheme.
Keywords: Time fractional KdV equation, weak singularity, discontinuous Galerkin method, stability, error estimate.
Mathematics Subject Classification: Primary: 35R11, 65M60; Secondary: 65M12.
Citation: Na An, Chaobao Huang, Xijun Yu. Error analysis of discontinuous Galerkin method for the time fractional KdV equation with weak singularity solution. Discrete & Continuous Dynamical Systems - B, 2020, 25 (1) : 321-334. doi: 10.3934/dcdsb.2019185
[Tables of numerical results: errors and observed temporal convergence orders for N = 32 up to 1024, with observed orders approaching roughly 1.56, 1.39 and 1.19 across three parameter choices, and errors with observed spatial orders approaching 3 and 4 as the number of elements M grows from 5 to 40.]
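The observed orders reported alongside the errors above are computed, in the usual way, from the ratio of errors on successively refined meshes. A minimal sketch (the function name is mine, not from the paper):

```python
import math

def observed_orders(Ns, errors):
    """Observed convergence orders from errors on successively refined meshes.

    For consecutive mesh parameters N1 < N2 with errors e1 > e2, the
    observed order is log(e1/e2) / log(N2/N1).
    """
    orders = []
    for n1, e1, n2, e2 in zip(Ns, errors, Ns[1:], errors[1:]):
        orders.append(math.log(e1 / e2) / math.log(n2 / n1))
    return orders

# A method of exact order 2: the error drops by a factor of 4 when N doubles
print(observed_orders([32, 64, 128], [1.0e-2, 2.5e-3, 6.25e-4]))
```

Applied to the first error column of the tables above, this recovers the reported orders, which approach the theoretical rate $\min\{r\alpha, 2-\alpha\}$ in time.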
A flame propagation model on a network with application to a blocking problem
1.
Dip. di Scienze di Base e Applicate per l'Ingegneria, "Sapienza" Università di Roma, via Scarpa 16, 00161 Roma, Italy
2.
Dipartimento di Matematica, "Sapienza" Università di Roma, p.le A. Moro 5, 00185 Roma, Italy
3.
Dip. di Ingegneria dell'Informazione, Università di Padova, via Gradenigo 6/B, 35131 Padova, Italy
$$\left\{ \begin{array}{ll} \partial_t u + H(x,Du) = 0 & (x,t)\in \Gamma \times (0,T), \\ u(x,0) = u_0(x) & x\in \Gamma, \end{array} \right.$$
where $\Gamma$ is a network and $H$ is a Hamiltonian.
Keywords: Evolutive Hamilton-Jacobi equation, viscosity solution, network, Hopf-Lax formula, approximation.
Mathematics Subject Classification: Primary: 35D40; Secondary: 35R02, 35F21, 65M06, 49L25.
Citation: Fabio Camilli, Elisabetta Carlini, Claudio Marchi. A flame propagation model on a network with application to a blocking problem. Discrete & Continuous Dynamical Systems - S, 2018, 11 (5) : 825-843. doi: 10.3934/dcdss.2018051
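For intuition only (this is an illustrative sketch, not the authors' scheme, which the keywords indicate is built on a Hopf-Lax formula on networks): a first-order monotone finite-difference method for the model Hamiltonian $H(x,p) = |p|$ on a single interval, standing in for one edge of $\Gamma$ with frozen boundary values.

```python
import numpy as np

def evolve_edge(u0, dx, dt, steps):
    """First-order monotone scheme for u_t + |u_x| = 0 on one interval
    (a stand-in for a single edge of the network); endpoint values are
    held fixed.  Uses the Osher-Sethian upwind numerical Hamiltonian,
    which is monotone under a CFL restriction (dt <= dx/sqrt(2) suffices).
    """
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        dm = (u[1:-1] - u[:-2]) / dx        # backward differences
        dp = (u[2:] - u[1:-1]) / dx         # forward differences
        H = np.sqrt(np.maximum(dm, 0.0)**2 + np.minimum(dp, 0.0)**2)
        u[1:-1] -= dt * H
    return u

# A front |x - 1/2| flattens: the exact viscosity solution (by Hopf-Lax,
# u(x,t) = min over |y-x| <= t of u0(y)) is max(|x - 1/2| - t, 0).
x = np.linspace(0.0, 1.0, 101)
u = evolve_edge(np.abs(x - 0.5), dx=0.01, dt=0.005, steps=20)  # t = 0.1
```

At $t = 0.1$ the interior values agree with $\max(|x - 1/2| - 0.1,\, 0)$ up to first-order discretisation error.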
Introduction
In my previous post I presented abstract topological spaces by way of two special characteristics. These properties are enough to endow a given set with vast possibilities for analysis. Fundamental to mathematical analysis of all kinds (real, complex, functional, etc.) is the
sequence.
We have covered the concept of sequences in some of our other posts here at The Math Citadel. As Rachel pointed out in her post on Cauchy sequences, one of the most important aspects of the character of a given sequence is convergence.
In spaces like the real numbers, there is a convenient framework available to quantify closeness and proximity, one which allows naturally for a definition of limit or tendency for sequences. In a general topological space missing this skeletal feature, convergence must be defined.
This post will assume only some familiarity with sequences as mathematical objects and, of course, the concepts mentioned in Part 1. For a thorough treatment of sequences, I recommend
Mathematical Analysis by Tom M. Apostol.
Neighborhoods
Suppose (X,\mathscr{T}) is a given topological space, and nothing more is known. At our disposal so far are only
open sets (elements of \mathscr{T}), and so it is on these that a concept of vicinity relies.

Definition. Given a topological space (X,\mathscr{T}), a neighborhood of a point x\in X is an open set which contains x.
That is, we say an element T\in\mathscr{T} such that x\in T is a neighborhood¹ of x. To illustrate, take the examples from my previous post.
The Trivial Topology
When the topology in question is the trivial one: \{\emptyset,X\}, the only nonempty open set is X itself, hence it is the only neighborhood of any point x\in X.
The Discrete Topology
Take X=\{2,3,5\} and \mathscr{T} to be the collection of
all subsets of X:
\emptyset \{2\} \{3\} \{5\} \{2,3\} \{2,5\} \{3,5\} \{2,3,5\}
Then, for, say, x=5, neighborhoods include \{5\}, \{2,5\}, \{3,5\}, and \{2,3,5\}.
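The discrete topology is small enough here to enumerate by brute force. A quick sketch (the function names are mine) that lists every subset of X = \{2,3,5\} and picks out the neighborhoods of a point:

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s; for a finite set this is exactly the discrete topology."""
    s = sorted(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def neighborhoods(topology, x):
    """The neighborhoods of x: the open sets that contain x."""
    return [T for T in topology if x in T]

discrete = powerset({2, 3, 5})
print(len(discrete))               # 8 subsets, including the empty set
print(neighborhoods(discrete, 5))  # [{5}, {2, 5}, {3, 5}, {2, 3, 5}]
```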
The Standard Topology on \mathbb{R}
The standard topology on \mathbb{R} is defined to be the family of all sets of real numbers containing an open interval around each of its points. In this case, there are infinitely² many neighborhoods of every real number. Taking x=\pi for instance, (3,4), (-2\pi,2\pi), and even \mathbb{R} itself are all neighborhoods of \pi.

Remark. A special type of neighborhood in the standard topology is the symmetric open interval. Given a point x and a radius r>0, the set

(x-r,x+r)=\{y\in\mathbb{R}\mathrel{:}|x-y|<r\}

is a neighborhood of x. These sets form what is referred to as a basis for the standard topology and are important to the definition of convergence in \mathbb{R} as a metric space.

Convergence
“…the topology of a space can be described completely in terms of convergence.” —John L. Kelley, General Topology
At this point in our discussion of topological spaces, the only objects available for use are open sets and neighborhoods, and so it is with these that convergence of a sequence is built³.

Definition. A sequence (\alpha_n) in a topological space (X,\mathscr{T}) converges to a point L\in X if for every neighborhood U of L, there exists an index N\in\mathbb{N} such that \alpha_n\in U whenever n\geq N. The point L is referred to as the limit of the sequence (\alpha_n).
Visually, this definition can be thought of as demanding the points of the sequence cluster around the limit point L. In order for the sequence (\alpha_n) to converge to L, it must be the case that after finitely many terms,
every one that follows is contained in the arbitrarily posed neighborhood U.
As you might expect, the class of neighborhoods available has a dramatic effect on which sequences converge, as well as where they tend. Just how
close to L are the neighborhoods built around it in the topology?
We will use the example topologies brought up so far to exhibit the key characteristics of this definition, and what these parameters permit of sequences.
The Trivial Topology
In case it was to this point hazy just how useful the trivial topology is, sequences illustrate the issue nicely. For the sake of this presentation, take the trivial topology on \mathbb{R}. There is precisely
one neighborhood of any point, namely \mathbb{R} itself. As a result, any sequence of real numbers converges, since every term belongs to \mathbb{R}. Moreover, every real number is a limit of any sequence. So, yes, the sequence (5,5,5,\ldots) of all 5's converges to \sqrt{2} here.
The Discrete Topology
Whereas with the trivial topology a single neighborhood exists, the discrete topology is as packed with neighborhoods as can be. So, as the trivial topology allows every sequence to converge to everything, we can expect the discrete topology to be comparatively restrictive. Taking the set \{2,3,5\} with the discrete topology as mentioned above, we can pinpoint the new limitation:
every set containing exactly one point is a neighborhood of that point. Notice the sets⁴ \{2\}, \{3\}, and \{5\} are all open sets.
What does this mean?
Any sequence that converges to one of these points, say 3, must eventually have all its terms in the neighborhood \{3\}. But that requires all convergent sequences to be eventually constant! This seems to be a minor issue with the finite set \{2,3,5\}, but it presents an undesirable, counter-intuitive problem in other sets.
Take \mathbb{R} with the discrete topology, for example. Under these rules, the sequence

(\alpha_n)=\left(\frac{1}{n}\right)=\left(1,\frac{1}{2},\frac{1}{3},\frac{1}{4},\ldots\right),
though expected to converge to 0, does not converge at all.
So, the discrete topology is
too restrictive, and the trivial topology lets us get away with anything. Fortunately, a happy middle ground exists by being a little more selective with neighborhoods.
The Standard Topology
By requiring an open set to contain an open interval around each of its points, it is impossible that a singleton be an open set. Therefore a singleton cannot be a neighborhood, and we eliminate the trouble posed by the discrete topology. Yet every open interval around a real number L contains a smaller one, and each of these is a neighborhood.
This effectively corrals the points of any convergent sequence, requiring the distance between the terms and the limit to vanish as n increases. Take again the sequence

(\alpha_n)=\left(\frac{1}{n}\right)=\left(1,\frac{1}{2},\frac{1}{3},\frac{1}{4},\ldots\right).
We suspect (\alpha_n) converges to 0, but this requires proof. Therefore, we must consider an arbitrary neighborhood of 0, and expose the index N\in\mathbb{N} such that all terms, from the Nth onward, exist in that neighborhood.
Suppose U is a given neighborhood of 0, so that U contains an open interval surrounding 0. Without loss of generality, we may assume this interval is symmetric; that is, the interval has the form (-r,r) for some radius r>0. Take N to be any integer greater than \tfrac{1}{r}. Then, whenever n\geq N,

\alpha_n = \frac{1}{n} \leq \frac{1}{N} < \frac{1}{1/r} = r.
But this means \alpha_n\in(-r,r)\subset U so long as n\geq N. Since we chose U arbitrarily, it follows (\alpha_n) converges to 0.
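The choice N > 1/r in the proof can be checked mechanically. A small sketch (not part of the original post) that, given a radius r, produces the witnessing index and verifies that a long stretch of the tail lies in (-r, r):

```python
def witness_index(r):
    """Smallest positive integer N with 1/N < r, i.e. the first index from
    which every term of (1/n) lies in the neighborhood (-r, r) of 0."""
    N = 1
    while 1.0 / N >= r:
        N += 1
    return N

for r in (0.5, 0.1, 0.004):
    N = witness_index(r)
    # every term from the N-th onward lies in (-r, r)
    assert all(abs(1.0 / n) < r for n in range(N, N + 10_000))
    print(r, "->", N)
```

Of course a finite check is only an illustration; the inequality in the proof is what covers the entire infinite tail.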
Conclusion
The behavior of a sequence in a given set can change rather drastically depending on the network of neighborhoods the topology allows. However, with careful construction, it is possible to have all the sequential comforts of metric spaces covered under a briefly put definition.
My next post in this series will push the generalization of these concepts much further, by relaxing a single requirement. In order to define convergence in the preceding discussion, the set of indices \mathbb{N} was important not for guaranteeing infinitely many terms, but instead for providing
order. This allows us to speak of all terms subsequent to one in particular. It turns out that if we simply hold on to order, we can loosen the nature of the set on which it is defined. That is the key to Moore-Smith Convergence, to be visited next.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Footnotes
1. An important distinction here is that a neighborhood, by design, is nonempty. At minimum, a neighborhood of x contains x itself.
2. Uncountably many, in fact.
3. The phrase a sequence (\alpha_n) in a topological space (X,\mathscr{T}) means that for every n\in\mathbb{N}, it is true that \alpha_n\in X. That is, the sequence takes its values in X.
4. These are known as singletons.
I'm confused about the concept of equivalent parametric curves. Based on my understanding, two parametric curves, $\phi$ and $\psi$, are equivalent, if there is a strictly monotonically increasing function $g$ such that $\psi(g(t)) = \phi(t)$. Intuitively speaking, the two curves must have the same direction and the same image, and they "travel" through their image for the same number of times.
Three questions:
(1) Why does $g$ have to be monotonically increasing? Why can't it be decreasing?
(2) They are allowed to have different speeds, correct?
(3) Are these two equivalent: $\phi(\theta) = (\cos(\theta),\sin(\theta))$ and $\psi(\theta) = (\cos(2\theta),\sin(2\theta))$, where the domain of $\phi$ is $[0,2\pi]$, and the domain of $\psi$ is $[0,\pi]$?
Taken literally, from what you wrote:$$\frac{dy}{dx}\sin(xy^2) - x^2 = x+5$$you find $\frac{dy}{dx}$ "in terms of $x$ and $y$" simply by moving $x^2$ to the other side and then dividing through by $\sin(xy^2)$:$$\begin{align*}
\frac{dy}{dx}\sin(xy^2) - x^2 &= x+5\\
\frac{dy}{dx}\sin(xy^2) &= x^2 + x + 5\\
\frac{dy}{dx} &= \frac{x^2+x+5}{\sin(xy^2)}.
\end{align*}$$
However, I suspect this is not what your problem is. It is unfortunate that you refer to instructions but don't quote them; when you are confused by the statement of a problem, it is best to quote it and then say what confuses you. I fear your confusion has caused you to misreport what the problem actually says.
I suspect that your problem actually asks you to find $\frac{dy}{dx}$, in terms of $x$ and $y$, if$$\sin(xy^2) - x^2 = x+5.$$
This is called implicit differentiation. This equation defines $y$
implicitly as a function of $x$: given any value of $x$, you plug it in, and you find the values of $y$ that make the equation true. Since $y$ is a function of $x$ (though only implicitly), you can ask what the derivative of $y$ with respect to $x$ is.
You start by taking derivatives on both sides, using the Chain Rule. It is important to remember that $y$ itself is a function of $x$, so when you differentiate things like $y^2$, you have to use the chain rule:$$\frac{d}{dx}y^2 = 2y\frac{dy}{dx}.$$
So, let me do that. First, we use the Chain Rule to differentiate $\sin(xy^2)$; then we will need to find the derivative of $xy^2$, which requires the Product Rule; then we will need the derivative of $y^2$, which requires the Chain Rule (as above). Let's do that:$$\begin{align*}
\sin(xy^2)-x^2 &= x+5\\
\frac{d}{dx}\Bigl(\sin(xy^2)-x^2\Bigr) &= \frac{d}{dx}\Bigl(x+5\Bigr)\\
\frac{d}{dx}\sin(xy^2) - \frac{d}{dx}x^2 &= \frac{d}{dx}x + \frac{d}{dx}5\\
\cos(xy^2)\left(\frac{d}{dx}xy^2\right) - 2x &= 1\\
\cos(xy^2)\Bigl((x)'y^2 + x(y^2)'\Bigr) -2x &= 1\\
\cos(xy^2)\Bigl(y^2 + x(2yy')\Bigr) - 2x &= 1\\
y^2\cos(xy^2) + 2xy\cos(xy^2)\left(y'\right) -2x &= 1.
\end{align*}$$The next step is to "solve for $y'$". Just move every term that includes $y'$ to the left hand side, all terms that do not to the right hand side, and then divide through:$$\begin{align*}
y^2\cos(xy^2) + 2xy\cos(xy^2)\left(y'\right) -2x &= 1\\
2xy\cos(xy^2)y' &= 1 + 2x - y^2\cos(xy^2)\\
y' &= \frac{1 + 2x - y^2\cos(xy^2)}{2xy\cos(xy^2)}.
\end{align*}$$
And that expresses $y'$ in terms of $x$ and $y$, given the original equation.
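As a numerical sanity check (not part of the original answer): implicit differentiation gives $y' = -F_x/F_y$ for $F(x,y) = \sin(xy^2) - x^2 - x - 5$, and since that identity is algebraic, it can be compared against the derived formula at arbitrary points where the denominator is nonzero. The sample points below are chosen purely for illustration.

```python
import math

def yprime_formula(x, y):
    """The formula derived above: (1 + 2x - y^2 cos(xy^2)) / (2xy cos(xy^2))."""
    c = math.cos(x * y**2)
    return (1 + 2*x - y**2 * c) / (2 * x * y * c)

def yprime_central_diff(x, y, h=1e-6):
    """-F_x / F_y for F(x, y) = sin(xy^2) - x^2 - x - 5, with the partial
    derivatives estimated by central differences."""
    F = lambda a, b: math.sin(a * b**2) - a**2 - a - 5
    Fx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    Fy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    return -Fx / Fy

for x, y in [(0.7, 1.2), (1.5, -0.4), (-0.9, 2.0)]:
    assert abs(yprime_formula(x, y) - yprime_central_diff(x, y)) < 1e-5
print("derived formula matches -Fx/Fy at all sample points")
```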
Introduction to kinematics
Kinematics is the study of how to describe the motion of objects using mathematics. Galileo long ago thought about the parabolic motion of cannonballs. This is one of the earliest applications of kinematics that I'm aware of and this will be our starting point. The study of the motion of objects near Earth's surface is a branch of kinematics called projectile motion. We're going to start off just looking at one-dimensional projectile motion and, later on, we'll also delve into two-dimensional and three-dimensional projectile motion. Now, if your cannonball gets too fast (meaning its speed is a considerable fraction of the speed of light), then we'll have to talk about four-dimensional projectile motion—but for now, let's not go there.
We're going to start by learning from examples and save the abstract generalizations for later, since this seems to me the most pedagogical way of presenting the material. It is also worthwhile to give a glimpse of topics to come and present them in a way that captures interest, even if they are new concepts the student doesn't yet know; after all, it is largely the not-yet-knowing that makes a subject interesting in the first place. We will mention Newton's second law and the concept of force throughout these lessons, and hopefully this friendly first encounter will spark some interest. We'll save a more thorough treatment of these ideas for the lessons that follow kinematics.
Position vectors and displacement
We're going to start off with something very general but we'll see that the results are very important for analyzing one-dimensional projectile motion. In physics, we use something called a
position vector (written as \(\vec{R}\)) to specify the location of an object at each instant of time. You could imagine a set of \(x\), \(y\), and \(z\) Cartesian axes sitting stationary on the ground (say, to make things easier to visualize, on Earth's surface) as in Figure 1. These Cartesian coordinate axes should be thought of as imaginary rulers (that are infinitely long) with a bunch of pink tick marks on them that specify how far away the object is from the origin. Indeed, if you were standing at the origin of this coordinate system, then these rulers would measure how far away the object is in the left-right direction, the up-down direction, and the in-out direction.
The position vector of the object is defined as
$$\vec{R}(t)≡x(t)\hat{i}+y(t)\hat{j}+z(t)\hat{k}.$$
Notice that the position vector is a parametric function of time: at each moment in time \(t\), it specifies the location of the object. So, for example, in Figure 1 you can see that at \(t=0\), if you were standing at the origin of the coordinate system, then the ball would start off right next to you since \(\vec{R}(0)=0\). After a time \(t_1\) has gone by, the object will have moved to a position specified by \(\vec{R}(t_1)\). After waiting an additional time of \(Δt=t_2-t_1\), the object will have moved to a position specified by \(\vec{R}(t_2)\). Now, if you were to take a tape measure and run it down the entire arclength of the red line in Figure 1, you would measure some amount; that amount is what we call the distance the object traveled. Distance is a useful concept in everyday life, but it takes, to a large extent, a backseat in most discussions of kinematics. The purpose of our entire discussion of position vectors was to develop a far more useful concept called displacement. Let me explain what that is. Let the position vectors \(\vec{R}(t_0)\) and \(\vec{R}(t)\) represent the positions of the object at any two arbitrary instants of time. The displacement (represented as \(Δ\vec{R}\)) of the object as it moved from \(\vec{R}(t_0)\) to \(\vec{R}(t)\) is given by the difference of these two vectors:
$$Δ\vec{R}≡\vec{R}(t)-\vec{R}(t_0).\tag{1}$$
First off, I'd like to start by saying that there is a big difference between distance and displacement, and the two should never be confused. To see the distinction, let's consider the displacement of the object as it moves from \(\vec{R}(t_1)\) to \(\vec{R}(t_2)\). From Equation (1), we know that this displacement is given by \(Δ\vec{R}=\vec{R}(t_2)-\vec{R}(t_1)\). What does this look like graphically? Notice that in order to get \(Δ\vec{R}\), we had to add the two vectors \(\vec{R}(t_2)\) and \(-\vec{R}(t_1)\). Since adding any two vectors always gives a new vector, it follows that \(Δ\vec{R}\) must also be a vector. But what does the vector \(Δ\vec{R}\) look like on the graph? Well, we already know what the vectors \(\vec{R}(t_1)\) and \(\vec{R}(t_2)\) look like, since they are already drawn for us in Figure 1. If we wanted to find out what \(\vec{R}(t_1)+\vec{R}(t_2)\) looked like, we'd just have to put the 'tail' of \(\vec{R}(t_2)\) next to the 'head' of \(\vec{R}(t_1)\) to get our new vector. The same procedure works, of course, for adding any two vectors. But before we can find out what \(\vec{R}(t_2)-\vec{R}(t_1)\) looks like, we first have to figure out what \(-\vec{R}(t_1)\) is. To get \(-\vec{R}(t_1)\), all you need to do is rotate \(\vec{R}(t_1)\) by 180 degrees. (Why this works is explained very well in the Linear Algebra section of the Khan Academy.) If we then put the tail of \(-\vec{R}(t_1)\) next to the head of \(\vec{R}(t_2)\), we get the vector \(Δ\vec{R}\) shown in Figures 1 and 2.
From Figure 1, you can immediately see that there are two big differences between distance and displacement. First of all, distance is a scalar (meaning it has only a magnitude) whereas displacement is a vector (meaning it has both magnitude and direction). The second big difference is that the magnitude of the displacement is not the same as the distance. As you can see graphically, the length of the vector \(Δ\vec{R}\) is shorter than the distance from \(\vec{R}(t_1)\) to \(\vec{R}(t_2)\) (as a reminder, the distance is the arclength of the red curve from the point at \(\vec{R}(t_1)\) to the point at \(\vec{R}(t_2)\)). An extreme example is often given in textbooks to make this distinction clear. If you watch a runner run 1 mile on a circular racetrack and return to his starting point, the total distance he has run is 1 mile, but his total displacement \(Δ\vec{R}\) is zero. The reason is that the position vector \(\vec{R}(0)\) specifying his location at the beginning of the run points at the same location as the position vector \(\vec{R}(t)\) specifying his location at the end of the run (which is, of course, the same location). Since the two position vectors are the same, their difference \(Δ\vec{R}=\vec{R}(t)-\vec{R}(0)\) must be zero.
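The runner's situation can be reproduced with a short computation. A minimal sketch, with position vectors as \((x, y, z)\) tuples and a made-up trajectory \(\vec{R}(t)=(3t,\ 4t-5t^2,\ 0)\) chosen purely for illustration:

```python
# Position vectors as (x, y, z) tuples; the trajectory R(t) is a
# made-up example, not one from the text.

def R(t):
    return (3*t, 4*t - 5*t**2, 0.0)

def displacement(t0, t):
    # Equation (1): ΔR = R(t) - R(t0), computed component-wise
    a, b = R(t0), R(t)
    return tuple(bi - ai for ai, bi in zip(a, b))

dR = displacement(0.2, 0.6)
# The y-component of ΔR is (nearly) zero even though the object rose
# and fell in between: displacement, unlike distance, only compares
# the two endpoints.
assert abs(dR[0] - 1.2) < 1e-9 and abs(dR[1]) < 1e-9
```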
Instantaneous velocity
Here's where things start to get interesting. The quotient \(\frac{Δ\vec{R}}{Δt}\) gives us the average velocity of the object during the time interval \(Δt\). It isn't exact, though, since the object might be accelerating or decelerating during that interval. It is often best to start with extreme examples to see when \(\frac{Δ\vec{R}}{Δt}\) can be a good estimate and when it can be totally off. If the runner Usain Bolt runs a total distance of 1 mile in a circle and returns to his starting point, his average velocity \(\frac{Δ\vec{R}}{Δt}\) over the time interval of the entire run is zero, since \(Δ\vec{R}=0\). The reason this calculation gives such a poor estimate of how fast he was running is that we did it over a time interval \(Δt\) so big that his running speed varied wildly within it. But over a very small time interval, his speed remains roughly constant and the calculation of \(\frac{Δ\vec{R}}{Δt}\) gives a good estimate.
A long time ago, Newton encountered this same kind of problem. When an apple falls, it accelerates and its speed keeps changing. Newton, of course, realized that if you calculate \(\frac{Δ\vec{R}}{Δt}\) over a very small time interval, the object doesn't accelerate much and its speed is almost constant. For a small value of \(Δt\), the quotient \(\frac{Δ\vec{R}}{Δt}\) gives you a pretty good estimate of how fast the apple is falling. Now, if you keep choosing smaller and smaller values of \(Δt\), the displacement keeps getting smaller as well (both values approach zero). But (assuming the apple is in motion) the ratio of the two approaches some finite number. This number is called the instantaneous velocity. We can define the instantaneous velocity mathematically as
$$\vec{v}(t)≡\lim_{Δt→0}\frac{Δ\vec{R}}{Δt}=\frac{d\vec{R}}{dt}.\tag{2}$$
Equation (2) can be used to tell you how fast the apple is moving right at the time \(t\).
Instantaneous acceleration
Another very important idea is how quickly the velocity is changing. By taking the time derivative of the position \(\vec{R}\), the velocity \(\vec{v}\) captures how quickly the position \(\vec{R}\) is changing. To find out how quickly the velocity \(\vec{v}\) is changing, we take its time derivative as well; this quantity is called acceleration and is defined as
$$\vec{a}(t)≡\lim_{Δt→0}\frac{Δ\vec{v}}{Δt}=\frac{d\vec{v}}{dt}.\tag{3}$$
Displacement, instantaneous velocity, and instantaneous acceleration will be the three fundamental quantities used to describe kinematics: the motion of objects.
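Equations (2) and (3) can be checked numerically with finite differences. The free-fall height function below is an illustrative choice, not one from the text:

```python
g = 9.8  # m/s^2, approximate

def y(t):
    # Height of an object dropped from 100 m (illustrative numbers)
    return 100.0 - 0.5 * g * t**2

def v(t, dt=1e-5):
    # Equation (2): instantaneous velocity as the limit of Δy/Δt
    return (y(t + dt) - y(t - dt)) / (2 * dt)

def a(t, dt=1e-5):
    # Equation (3): instantaneous acceleration as the limit of Δv/Δt
    return (v(t + dt) - v(t - dt)) / (2 * dt)

# Analytically v(t) = -g t and a(t) = -g; the finite differences agree.
assert abs(v(2.0) - (-g * 2.0)) < 1e-5
assert abs(a(2.0) - (-g)) < 1e-3
```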
Mixed integer nonlinear programming (MINLP) refers to optimization problems with continuous and discrete variables and nonlinear functions in the objective function and/or the constraints. MINLPs arise in applications in a wide range of fields, including chemical engineering, finance, and manufacturing. The general form of a MINLP is
\[\begin{array}{lllll}
\mbox{min} & f(x,y) & & & \\ \mbox{s.t.} & c_i(x,y) & = & 0 & \forall i \in E \\ & c_i(x,y) & \leq & 0 & \forall i \in I \\ & x & \in & X & \\ & y & \in & Y & \mbox{integer} \end{array} \] where each \(c_i(x,y) \,\) is a mapping from \(R^n \,\) to \(R \,\), and \(E \,\) and \(I \,\) are index sets for equality and inequality constraints, respectively. Typically, the functions \(f\) and \(c_i\) are assumed to have some smoothness properties, e.g., to be once or twice continuously differentiable.
Software developed for MINLP has generally followed two approaches:
Outer Approximation/Generalized Benders Decomposition: these algorithms alternate between solving a mixed-integer LP master problem and nonlinear programming subproblems.
Branch-and-Bound: branch-and-bound methods for mixed-integer LP can be extended to MINLP, with a number of tricks added to improve their performance.
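To make the problem class concrete, here is a tiny illustrative instance (all numbers made up) solved by brute force: enumerate the integer variable and minimize the continuous part in closed form. Real solvers replace the enumeration with the master-problem/subproblem or branch-and-bound machinery just described.

```python
# A toy MINLP sketch (hypothetical instance, not a real solver):
#   min  (x - 0.5)^2 + (y - 2.3)^2
#   s.t. x + y <= 5,  0 <= x <= 4,  y integer in {0, ..., 4}
# For each fixed integer y, the continuous subproblem in x is a
# one-dimensional convex quadratic, minimized by clipping the
# unconstrained minimizer x* = 0.5 to the feasible interval.

best = None
for yv in range(0, 5):
    x_hi = min(4.0, 5.0 - yv)          # x + y <= 5 combined with x <= 4
    x = min(max(0.5, 0.0), x_hi)       # clip the unconstrained argmin
    obj = (x - 0.5)**2 + (yv - 2.3)**2
    if best is None or obj < best[0]:
        best = (obj, x, yv)

# Best integer choice is y = 2 (closest to 2.3), with x = 0.5.
assert best[1:] == (0.5, 2)
```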
For a recent survey of MINLP applications, models, and solution methods, see Belotti et al. (2013).
MacMINLP, a collection of MINLP test problems in AMPL MINLPLib, a collection of MINLP test problems at MINLP World
Optimization Online Integer Programming area (area covers both linear and nonlinear submissions)
Belotti, P., C. Kirches, S. Leyffer, J. Linderoth, J. Luedtke, and A. Mahajan. 2013. Mixed-Integer Nonlinear Optimization. Acta Numerica 22:1-131. DOI: http://dx.doi.org/10.1017/S0962492913000032 Leyffer, S. and Mahajan, A. 2011. Software For Nonlinearly Constrained Optimization. Wiley Encyclopedia of Operations Research and Management Science, John Wiley & Sons, New York. |
The Cauchy number, abbreviated Ca, is a dimensionless number expressing the ratio of inertial force to compressibility force in a flow. When compressibility is important, the elastic forces must be considered along with the inertial forces.
Cauchy Number FORMULA
\(\large{ Ca = \frac {v^2 \; \rho } { B } }\)
Where:
\(\large{ Ca }\) = Cauchy number
\(\large{ B }\) = bulk modulus of elasticity
\(\large{ \rho }\) (Greek symbol rho) = density
\(\large{ v }\) = velocity of the flow
Solve for:
\(\large{ B = \frac {v^2 \; \rho} { Ca } }\)
\(\large{ \rho = \frac {Ca \; B} { v^2 } }\)
\(\large{ v = \sqrt { \frac {Ca \; B}{\rho} } }\)
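A small sketch of the formula and its rearrangements; the sample values are illustrative (roughly water: \(\rho = 1000\) kg/m³, \(B = 2.2\times 10^9\) Pa, at \(v = 10\) m/s):

```python
# Cauchy number Ca = v^2 * rho / B and its rearrangements.

def cauchy_number(v, rho, B):
    return v**2 * rho / B

def bulk_modulus(v, rho, Ca):
    return v**2 * rho / Ca

def velocity(Ca, B, rho):
    return (Ca * B / rho) ** 0.5

Ca = cauchy_number(10.0, 1000.0, 2.2e9)   # ~4.55e-5: compressibility negligible
assert abs(bulk_modulus(10.0, 1000.0, Ca) - 2.2e9) < 1.0
assert abs(velocity(Ca, 2.2e9, 1000.0) - 10.0) < 1e-9
```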
Worked Example 16 Newton’s Second Law
Question: A block of mass 10 kg is accelerating at 2 m·s⁻². What is the magnitude of the net force acting on the block?
Answer
Step 1:
We are given
the block’s mass
the block’s acceleration, all in the correct units.
Step 2 :
We are asked to find the magnitude of the force applied to the block. Newton’s Second Law tells us the relationship between acceleration and force for one object. Since we are only asked for the magnitude we do not need to worry about the directions of the vectors:
$$\begin{aligned}F_{Net}&=ma\\&=10\ \text{kg}\times 2\ \text{m}\cdot\text{s}^{-2}\\&=20\ \text{N}\end{aligned}$$
Thus, there must be a net force of 20 N acting on the block.
Worked Example 17 Newton’s Second Law 2
Question: A 12 N force is applied in the positive x-direction to a block of mass 100 mg resting on a frictionless flat surface. What is the resulting acceleration of the block?
Answer:
Step 1 :
We are given
the block’s mass
the applied force, but the mass is not in the correct units.
Step 2 :
Let us begin by converting the mass:
$$\begin{aligned}100\ \text{mg} &= 100\times 10^{-3}\ \text{g} = 0.1\ \text{g}\\0.1\ \text{g} &= 0.1\ \text{g}\times\frac{1\ \text{kg}}{1000\ \text{g}} = 0.0001\ \text{kg}\end{aligned}$$
Just as \(2\times 2\) is simply 2 groups of 2 added together to give 4, we can adopt a similar approach to understand how vector multiplication works.
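The remaining arithmetic for Worked Example 17 applies Newton's second law, \(a = F/m\), to the converted mass. A quick sketch with the values from the problem statement:

```python
# Convert 100 mg to kg, then apply a = F / m (Newton's second law).

m_mg = 100.0
m_kg = m_mg * 1e-3 * 1e-3      # mg -> g -> kg, ~0.0001 kg
F = 12.0                        # N, in the positive x-direction

a = F / m_kg                    # m/s^2, also in the positive x-direction
assert abs(m_kg - 0.0001) < 1e-12
assert abs(a - 120000.0) < 1e-4
```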
The Free High School Science Texts: A Textbook for High School Students Studying Physics.
Vectors
Methods of Vector Addition
Now that you are familiar with the mathematical properties of vectors, we return to vector addition in more detail. There are various techniques of vector addition. These techniques fall into two principal categories: graphical and algebraic methods.
Graphical Techniques
Graphical methods involve drawing accurate scale diagrams to represent individual vectors and their resultants. We next discuss the two primary graphical techniques: the tail-to-head method and the parallelogram method.
The Tail-to-head Method
In describing the mathematical properties of vectors, we used displacements and the tail-to-head graphical method of vector addition as an illustration. In the tail-to-head method of vector addition, the following procedure is followed:
Choose a scale and include a reference direction.
Choose any of the vectors to be summed and draw it as an arrow in the correct direction and of the correct length; remember to put an arrowhead on the end to denote its direction.
Take the next vector and draw it as an arrow starting from the arrowhead of the first vector, in the correct direction and of the correct length.
Continue until you have drawn each vector, each time starting from the head of the previous vector. In this way, the vectors to be added are drawn one after the other, tail-to-head.
The resultant is then the vector drawn from the tail of the first vector to the head of the last. Its magnitude can be determined from the length of its arrow using the scale. Its direction can likewise be determined from the scale diagram.
Worked Example 4 Tail-to-Head Graphical Addition I
Question: A ship leaves harbor H and sails 6 km north to port A. From here the ship travels 12 km east to port B, before sailing 5.5 km south-west to port C. Determine the ship's resultant displacement using the tail-to-head technique of vector addition.
Answer:
Now we are faced with a practical problem: in this problem the displacements are too large to draw at their true length! Drawing even a 2 km long arrow would require a big book. Just like cartographers (the people who draw maps), we have to choose a scale. The choice of scale depends on the actual problem; you should choose a scale such that your vector diagram fits on the page. Before choosing a scale, one should always draw a rough sketch of the problem. In a rough sketch one is interested in the approximate shape of the vector diagram.
Step 1 :
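While the graphical answer proceeds by scale drawing, the resultant can also be cross-checked numerically by adding components. A sketch, taking east as \(+x\), north as \(+y\), and "south-west" as exactly midway between south and west:

```python
import math

# Component-wise addition of the ship's three legs (east = +x, north = +y).
legs = [
    (0.0, 6.0),                                   # 6 km north
    (12.0, 0.0),                                  # 12 km east
    (-5.5 / math.sqrt(2), -5.5 / math.sqrt(2)),   # 5.5 km south-west
]
rx = sum(x for x, _ in legs)
ry = sum(y for _, y in legs)
magnitude = math.hypot(rx, ry)

# The resultant is roughly 8.38 km, pointing north of east.
assert abs(magnitude - 8.38) < 0.01
```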
I am trying to prove $(n+1)! = O(2^{(2^n)})$. I am trying to use L'Hospital's rule, but I am stuck with an endless chain of derivatives.
Can anyone tell me how I can prove this?
You can compare ratios of adjacent values: $(n+1)!/n! = n+1$ versus $2^{2^n}/2^{2^{n-1}} = 2^{2^{n-1}}$. Since $n+1\leq 2^{2^{n-1}}$ for $n \geq 1$, you can prove using mathematical induction that $(n+1)! \leq 2^{2^n}$.
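A quick spot-check of both inequalities in Python (the comparisons are exact, since Python integers have arbitrary precision):

```python
import math

# Spot-check of the induction claim (n+1)! <= 2^(2^n) and of the
# ratio inequality n+1 <= 2^(2^(n-1)) for small n.
for n in range(1, 6):
    assert math.factorial(n + 1) <= 2 ** (2 ** n)
    assert n + 1 <= 2 ** (2 ** (n - 1))
```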
Well, since this upper bound is not nearly tight, you can just use basic transformations to get
$$(n+1)!< (n+1)^{n}=2^{\log (n+1)^{n}}= 2^{n\log (n+1)}=O(2^{n\log n})\subset O(2^{(2^n)}). $$
Using de l'Hôpital's rule on discrete functions is meaningless; you would have to use continuous extension. For factorials, that would be the Gamma function. For discrete functions, there is Stolz–Cesàro.
In either case, considering difference or differential quotients does not make sense for (super)exponential functions: since growth is at least of the same order as the function value itself, application of such rules does not simplify the situation.
In order to deal with factorials in asymptotics, Stirling's approximation is often helpful:
$\qquad \displaystyle n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n$
Therefore, $n! \in o(n^n)$. Furthermore, since
$\qquad\displaystyle \frac{n \log n}{2^n \log 2} \xrightarrow{n \to \infty} 0$
and $e^x$ is a non-decreasing and convex function¹, we have
$\qquad \displaystyle \lim_{n \to \infty} \frac{n^n}{2^{2^n}} = \lim_{n \to \infty} \frac{e^{n \log n}}{e^{2^n \log 2}} = \lim_{n \to \infty} \frac{n \log n}{2^n \log 2} = 0$.
Therefore, $n^n \in o(2^{2^n})$. Together (since $o$ is transitive) we get the desired result.
For more general advice for comparing functions, see here. |
You're on the right track, but always have a look at the documentation of the software you're using to see what model is actually fit. Assume a situation with a categorical dependent variable $Y$ with ordered categories $1, \ldots, g, \ldots, k$ and predictors $X_{1}, \ldots, X_{j}, \ldots, X_{p}$.
"In the wild", you can encounter three equivalent choices for writing the theoretical proportional-odds model with different implied parameter meanings:
$\text{logit}(p(Y \leqslant g)) = \ln \frac{p(Y \leqslant g)}{p(Y > g)} = \beta_{0_g} + \beta_{1} X_{1} + \dots + \beta_{p} X_{p} \quad(g = 1, \ldots, k-1)$ $\text{logit}(p(Y \leqslant g)) = \ln \frac{p(Y \leqslant g)}{p(Y > g)} = \beta_{0_g} - (\beta_{1} X_{1} + \dots + \beta_{p} X_{p}) \quad(g = 1, \ldots, k-1)$ $\text{logit}(p(Y \geqslant g)) = \ln \frac{p(Y \geqslant g)}{p(Y < g)} = \beta_{0_g} + \beta_{1} X_{1} + \dots + \beta_{p} X_{p} \quad(g = 2, \ldots, k)$
(Models 1 and 2 have the restriction that in the $k-1$ separate binary logistic regressions, the $\beta_{j}$ do not vary with $g$, and $\beta_{0_1} < \ldots < \beta_{0_g} < \ldots < \beta_{0_{k-1}}$; model 3 has the same restriction on the $\beta_{j}$, and requires that $\beta_{0_2} > \ldots > \beta_{0_g} > \ldots > \beta_{0_k}$.)
In model 1, a positive $\beta_{j}$ means that an increase in predictor $X_{j}$ is associated with increased odds for a lower category in $Y$. Model 1 is somewhat counterintuitive, so model 2 or 3 seems to be preferred in software. Here, a positive $\beta_{j}$ means that an increase in predictor $X_{j}$ is associated with increased odds for a higher category in $Y$. Models 1 and 2 lead to the same estimates for the $\beta_{0_g}$, but their estimates for the $\beta_{j}$ have opposite signs. Models 2 and 3 lead to the same estimates for the $\beta_{j}$, but their estimates for the $\beta_{0_g}$ have opposite signs.
Assuming your software uses model 2 or 3, you can say "with a 1 unit increase in $X_1$, ceteris paribus, the
predicted odds of observing '$Y = \text{Good}$' vs. observing '$Y = \text{Neutral OR Bad}$' change by a factor of $e^{\hat{\beta}_{1}} = 0.607$.", and likewise "with a 1 unit increase in $X_1$, ceteris paribus, the predicted odds of observing '$Y = \text{Good OR Neutral}$' vs. observing '$Y = \text{Bad}$' change by a factor of $e^{\hat{\beta}_{1}} = 0.607$." Note that in the empirical case, we only have the predicted odds, not the actual ones.
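The proportional-odds property behind this interpretation can be verified numerically. A sketch of the model-3 parameterization with hypothetical numbers: $\beta_1 = \ln(0.607)$ matches the odds ratio quoted above, and the intercepts are made up for illustration:

```python
import math

# Model 3: logit p(Y >= g) = beta0_g + beta1 * x, Y in {Bad, Neutral, Good}.
beta1 = math.log(0.607)
beta0 = {"Neutral": 1.0, "Good": -0.5}   # made-up thresholds for g = 2, 3

def odds_at_least(category, x):
    # Odds of p(Y >= g) vs p(Y < g) at predictor value x
    return math.exp(beta0[category] + beta1 * x)

# Increasing x by one unit multiplies the odds of "Y >= g" by
# exp(beta1) = 0.607 for EVERY threshold g: the proportional-odds property.
for cat in beta0:
    ratio = odds_at_least(cat, x=3.0) / odds_at_least(cat, x=2.0)
    assert abs(ratio - 0.607) < 1e-12
```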
Here are some additional illustrations for model 1 with $k = 4$ categories. First, the assumption of a linear model for the cumulative logits with proportional odds. Second, the implied probabilities of observing at most category $g$. The probabilities follow logistic functions with the same shape.
For the category probabilities themselves, the depicted model implies the following ordered functions:
P.S. To my knowledge, model 2 is used in SPSS as well as in R functions
MASS::polr() and
ordinal::clm(). Model 3 is used in R functions
rms::lrm() and
VGAM::vglm(). Unfortunately, I don't know about SAS and Stata. |
From the Lie algebra inclusion $\mathfrak{g}\rightarrow \Gamma(T_G)$ of right invariant vector fields we get a map $U(\mathfrak{g})\rightarrow\Gamma(\mathcal{D}_G).$ This induces a map $U(\mathfrak{g})\otimes\mathcal{O}_G\rightarrow\mathcal{D}_G$ of $\mathcal{O}_G$-modules. We have a structure of $G$-equivariant sheaf on both sides, with respect to the right translation action of $G$ on itself. (The $G$-equivariant structure on the left hand side comes from tensoring up the $G$-equivariant structure on $\mathcal{O}_G,$ i.e., the $G$ action on the $U(\mathfrak{g})$ part is trivial.) Our map is $G$-equivariant, and if we know it is an isomorphism, we will get an isomorphism $\Gamma(U(\mathfrak{g})\otimes\mathcal{O}_G)^G\rightarrow\Gamma(\mathcal{D}_G)^G,$ or $U(\mathfrak{g})\rightarrow\Gamma(\mathcal{D}_G)^G$, as desired.
Now let us show that $U(\mathfrak{g})\otimes\mathcal{O}_G\rightarrow\mathcal{D}_G$ is an isomorphism. We have a canonical filtration $F_i\mathcal{D}_G$ on $\mathcal{D}_G$ by order of differential operators. On the left hand side, the PBW filtration $F_iU(\mathfrak{g})$ induces a filtration $F_iU(\mathfrak{g})\otimes\mathcal{O}_G$.
As our map sends $F_iU(\mathfrak{g})\otimes\mathcal{O}_G$ to $F_i\mathcal{D}_G$ and both filtrations are exhaustive, it suffices to check that the map on associated gradeds is an isomorphism. The associated graded of the left hand side is $\operatorname{Sym}^{\bullet}\mathfrak{g}\otimes\mathcal{O}_G.$
For the right hand side, we know that $F_i\mathcal{D}_G/F_{i-1}\mathcal{D}_G$ can be canonically identified with $\operatorname{Sym}^iT_G.$ The inclusion $\mathfrak{g}\rightarrow \Gamma(T_G)$ gives rise to a map $\mathfrak{g}\otimes\mathcal{O}_G\rightarrow T_G$ which can be seen to be an isomorphism. This identifies the associated graded of the right hand side with $\operatorname{Sym}^{\bullet}\mathfrak{g}\otimes\mathcal{O}_G,$ and the map on associated gradeds becomes the identity. |
A couple of years ago, I came up with the following question, to which I have no answer to this day. I have asked a few people about this, most of my teachers and some friends, but no one had ever heard of the question before, and no one knew the answer.
I hope this is an original question, but seeing how natural it is, I doubt this is the first time someone has asked it.
First, some motivation. Take $P$ any nonzero complex polynomial. It is an easy and classical exercise to show that the roots of its derivative $P'$ lie in the convex hull of its own roots (I know this as the Gauss-Lucas property). To show this, you simply write $P = a \cdot \prod_{i=1}^{r}(X-\alpha_i)^{m_i}$ where the $\alpha_i~(i=1,\dots,r)$ are the different roots of $P$, and $m_i$ the corresponding multiplicities, and evaluate $\frac{P'}{P}=\sum_i \frac{m_i}{X-\alpha_i}$ on a root $\beta$ of $P'$ which is not also a root of $P$. You'll end up with an expression of $\beta$ as a convex combination of $\alpha_1,\dots,\alpha_r$. It is worth mentioning that all the convex coefficients are $>0$, so the new root cannot lie on the edge of the convex hull of $P$'s roots.
Now fix $P$ a certain nonzero complex polynomial, and consider $\Pi$, its primitive (antiderivative) that vanishes at $0:~\Pi(0)=0$ and $\Pi'=P$. For each complex $\omega$, write $\Pi_{\omega}=\Pi-\omega$, so that you get all the primitives of $P$. Also, define for any polynomial $Q$, $\mathrm{Conv}(Q)$, the convex hull of $Q$'s roots.
MAIN QUESTION: describe $\mathrm{Hull}(P)=\bigcap_{\omega\in\mathbb{C}}\mathrm{Conv}(\Pi_{\omega})$.
By the property cited above, $\mathrm{Hull}(P)$ is a convex compact subset of the complex plane that contains $\mathrm{Conv}(P)$, but I strongly suspect that it is in general larger.
Here are some easy observations:
replacing $P$ (resp. $\Pi$) by $\lambda P$ (resp. $\lambda \Pi$) will not change the result, and considering $P(aX+b)$ will change $\mathrm{Hull}(P)$ accordingly. Hence we can suppose both $P$ and $\Pi$ to be monic. The fact that $\Pi$ is no longer a primitive of $P$ is of no consequence.
the intersection defining $\mathrm{Hull}(P)$ can be taken for $\omega$ ranging in a compact subset of $\mathbb{C}$: as $|\omega| \rightarrow \infty$, the roots of $\Pi_{\omega}$ will tend to become close to the $(\deg (P)+1)$-th roots of $\omega$, so for large enough $\omega$, their convex hull will always contain, say, $\mathrm{Conv}(\Pi)$.
$\mathrm{Hull}(P)$ can be explicitly calculated in the following cases: $P=X^n$, $P$ of degree $1$ or $2$. There are only 2 kinds of degree $2$ polynomials: two simple roots or a double root. Using $z\rightarrow az+b$, one only has to consider $P=X^2$ and $P=X(X-1)$. The first one yields {$0$}, which equals $\mathrm{Conv}(X^2)$, the second one gives $[0,1]=\mathrm{Conv}(X(X-1))$.
Also, if $\Pi$ is a real polynomial of odd degree $n+1$ that has all its roots real and simple, say $\lambda_1 < \mu_1 < \lambda_2 < \dots < \mu_n < \lambda_{n+1}$, where I have also placed $P$'s roots $\mu_1, \dots, \mu_n$, and if you further assume that $\Pi(\mu_{2j}) \leq \Pi(\mu_n) \leq\Pi(\mu_1) \leq\Pi(\mu_{2j+1})$ for all suitable $j$ (a condition that is best understood with a picture), then $\mathrm{Hull}(P)=\mathrm{Conv}(P)=[\mu_1,\mu_n]$: just vary $\omega$ between $[\Pi(\mu_n), \Pi(\mu_1)]$; the resulting polynomial $\Pi_{\omega}$ is always split over the real numbers and you get
$$[\mu_1,\mu_n]=\mathrm{Conv}(P)\subset\mathrm{Hull}(P)\subset \mathrm{Conv}(\Pi_{\Pi(\mu_1)})\cap \mathrm{Conv}(\Pi_{\Pi(\mu_n)}) = \\= [\mu_1,\dots]\cap [\dots,\mu_n]=[\mu_1,\mu_n]$$
The equation $\Pi_{\omega}(z)=\Pi(z)-\omega=0$ defines a Riemann surface, but I don't see how that could be of any use.
Computing $\mathrm{Hull}(P)$ for the next most simple polynomial $P=X^3-1$ has proven a challenge, and I can only conjecture what it might be.
Computing $\mathrm{Hull}(X^3-1)$ requires factorizing degree 4 polynomials, so one naturally tries to look for good values of $\omega$, the $\omega$ that allow for easy factorization of $\Pi_{\omega}=X^4-4X-\omega$: for instance, the $\omega$ that produce a double root. All that remains to be done afterwards is to factor a quadratic polynomial. The problem is symmetric, and you can focus on the case where 1 is the double root (i.e., $\omega=-3$). Plugging the result into the intersection, and rotating twice, you obtain the following superset of $\mathrm{Hull}(X^3-1)$: a hexagon that is the intersection of three similar isosceles triangles with their main vertices located at the three cube roots of unity $1,j,j^2$
QUESTION: is this hexagon equal to $\mathrm{Hull}(X^3-1)$?
Here's why I think this might be.
Consider the question of how the convex hulls of the roots of $\Pi_{\omega}$ vary as $\omega$ varies. When $\omega_0$ is such that all roots of $\Pi_{\omega_0}$ are simple, then the inverse function theorem shows that the roots of $\Pi_{\omega}$ with $\omega$ in a small neighborhood of $\omega_0$ vary holomorphically $\sim$ linearly in $\omega-\omega_0$: $z(\omega)-z(\omega_0)\sim \omega-\omega_0$. If however $\omega_0$ is such that $\Pi_{\omega_0}$ has a multiple root $z_0$ of multiplicity $m>1$, then a small variation of $\omega$ about $\omega_0$ will split the multiple root $z_0$ into $m$ distinct roots of $\Pi_{\omega}$ that will spread out roughly as $z_0+c(\omega-\omega_0)^{\frac{1}{m}}$, where $c$ is some nonzero coefficient. This means that for small variations, these roots will move at much higher velocities than the simple roots, and they will constitute the major contribution to the variation of $\mathrm{Conv}(\Pi_{\omega})$; also, they spread out evenly, and (at least if the multiplicity is greater than or equal to $3$) they will tend to increase the convex hull around $z_0$. Thus it seems not too unreasonable to conjecture that the convex hull $\mathrm{Conv}(\Pi_{\omega})$ has what one can only describe as critical points at the $\omega_0$ that produce roots with multiplicities. I'm fairly certain there is a sort of calculus on convex sets that would allow one to make this statement precise, but I don't see what it could be.
Back to $X^3-1$: explicit calculations suggest that up to second order, the double root $1$ of $X^4-4X+3-h$ for $|h|<<1$ splits in half nicely (here $\omega=-3+h$), and the convex hull will continue to contain the aforementioned hexagon.
QUESTION (Conjecture): is it true that $\mathrm{Hull}(P)=\bigcap_{\omega\in\mathrm{MR}}\mathrm{Conv}(\Pi_{\omega})$, where $\mathrm{MR}$ is the set of all $\omega_0$ such that $\Pi_{\omega_0}$ has a multiple root, i.e., the set of all $\Pi(\alpha_i)$ where the $\alpha_i$ are the roots of $P$?
All previous examples of calculations agree with this, and I have tried as best I can to justify this guess heuristically.
Are you aware of a solution? Is this a classical problem? Is anybody brave enough to make a computer program that would compute some intersections of convex hulls obtained from the roots to see if my conjecture is valid? |
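In the spirit of the closing request, here is a small self-contained Python sketch (Durand-Kerner root finding plus a convex-hull test; iteration counts and tolerances are ad hoc). It samples $\mathrm{Conv}(\Pi_\omega)$ for $\Pi_\omega = X^4-4X-\omega$ and checks two containments: the Gauss-Lucas containment of the cube roots of unity, and containment of the origin (which holds for every $\omega$ since the roots of $\Pi_\omega$ sum to zero). It does not settle the hexagon conjecture, but it gives a starting point for the suggested experiments:

```python
import math

def pi_w(z, w):
    # Pi_w(z) = z^4 - 4z - w (the rescaled primitives of P = X^3 - 1)
    return z**4 - 4*z - w

def roots(w, iters=400):
    # Durand-Kerner iteration for the quartic; standard starting points.
    zs = [(0.4 + 0.9j) ** k for k in range(4)]
    for _ in range(iters):
        nxt = []
        for i, zi in enumerate(zs):
            d = 1 + 0j
            for j, zj in enumerate(zs):
                if j != i:
                    d *= zi - zj
            nxt.append(zi - pi_w(zi, w) / d)
        zs = nxt
    return zs

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull(points):
    # Andrew's monotone chain; returns the hull in counter-clockwise order.
    pts = sorted(points)
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def contains(h, q, eps=1e-6):
    # q inside (or within eps of) the CCW convex polygon h
    n = len(h)
    return all(cross(h[i], h[(i+1) % n], q) >= -eps for i in range(n))

cube_roots = [(math.cos(2*math.pi*k/3), math.sin(2*math.pi*k/3))
              for k in range(3)]
for w in [complex(a, b) for a in (-3.0, 0.0, 3.0) for b in (-3.0, 0.0, 3.0)]:
    h = hull([(z.real, z.imag) for z in roots(w)])
    assert contains(h, (0.0, 0.0))   # roots of Pi_w sum to 0, so 0 is inside
    for cr in cube_roots:
        assert contains(h, cr)       # Gauss-Lucas: roots of P lie in the hull
```

Sampling $\omega$ on a finer grid and intersecting the resulting hulls would give a numerical approximation of $\mathrm{Hull}(X^3-1)$ to compare against the conjectured hexagon.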
I'm reviewing old exams in preparation for a statistics final, and I'm stuck on a particular question:
Suppose that you have n independent random variables $Y_i$, with each distributed normal with expected value $\beta x_i$ and known variance $\sigma^2$. Further suppose that the $x_i$ are known constant values, and define the likelihood as $\mathcal{L}(\beta; y) = \frac{f(y ; \beta)}{f(y ; \hat{\beta})}$, where $\hat{\beta}$ is the maximum likelihood estimate for $\beta$.
Show that $\hat{\beta} = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2}$ Find the sampling distribution of the MLE Show that the sampling distribution of $-2\ln(\mathcal{L}(\beta; y))$ is Chi-squared with 1 df
I had no problem with (1), but I'm unsure about (2) and (3).
For (2), I rewrote (1) as $\sum_{i=1}^n k_i y_i$ where the $k_i$ are constants and equal to $\frac{x_i}{\sum_{i=1}^n x_i^2}$. I then used linearity of expectation to say that the expected value of the sampling distribution is $\sum_{i=1}^n k_i \beta x_i$. For the variance, I believe it would be $\mathrm{Var}(\sum_{i=1}^n k_i y_i) = \sum_{i=1}^n \mathrm{Var} (k_i y_i)$ because of independence. Therefore, the variance of the sampling distribution should be $\sigma^2 \sum_{i=1}^n k_i^2$. Since the $y_i$'s are normal, their sum should be normal, so the sampling distribution is $N(\sum_{i=1}^n k_i \beta x_i, \sigma^2 \sum_{i=1}^n k_i^2)$.
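One simplification worth noting: since $\sum_i k_i x_i = \sum_i x_i^2/\sum_i x_i^2 = 1$, the mean reduces to $\beta$, and $\sigma^2 \sum_i k_i^2 = \sigma^2/\sum_i x_i^2$. A quick Monte Carlo sketch (all numbers made up) agrees with this:

```python
import math
import random

# Simulate y_i ~ N(beta * x_i, sigma^2) and check the sampling
# distribution of beta_hat = sum(x_i y_i) / sum(x_i^2):
# mean beta, variance sigma^2 / sum(x_i^2).
random.seed(0)
beta, sigma = 2.0, 1.5
xs = [0.5, 1.0, 1.5, 2.0, 2.5]
sxx = sum(x * x for x in xs)

def beta_hat():
    ys = [beta * x + random.gauss(0.0, sigma) for x in xs]
    return sum(x * y for x, y in zip(xs, ys)) / sxx

draws = [beta_hat() for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / (len(draws) - 1)

assert abs(mean - beta) < 0.02            # E[beta_hat] = beta
assert abs(var - sigma**2 / sxx) < 0.01   # Var[beta_hat] = sigma^2 / sum x_i^2
```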
For (3), I'm not sure where to start. Using the definition of $\mathcal{L}(\beta; y)$ above, I simplified to be $$\mathcal{L}(\beta; y) = \exp \left[ \frac{1}{2\sigma^2} \left(2\beta \sum_{i=1}^n x_i y_i - \beta^2 \sum_{i=1}^nx_i^2 - \frac{(\sum_{i=1}^n x_i y_i)^2}{\sum_{i=1}^n x_i^2} \right) \right]$$ So $-2\ln \mathcal{L}(\beta; y)$ would be $$ \frac{-1}{\sigma^2} \left(2\beta \sum_{i=1}^n x_i y_i - \beta^2 \sum_{i=1}^nx_i^2 - \frac{(\sum_{i=1}^n x_i y_i)^2}{\sum_{i=1}^n x_i^2} \right)$$ but I'm not seeing how this is a chi-squared distribution with 1 degree of freedom.
I realize this is a long post, but I'd appreciate any hints or clarifications on either subproblem. |
1) \((a=_A b)\) a=b
2) \((a=_A b)\to\mathcal{P}\) if \(a=b\) then \(\mathcal{P}\) is a proposition
3) \(\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\) \(\forall x,y\) if \(x=y\) then \(\mathcal{P}\) is a proposition
4) \(C:\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\) C proves proposition 3
5) \(C(a,b)\) if \(a=b\) then \(\mathcal{P}\) is a proposition
6) \(b,a|\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\) same as proposition 5
7) \(p:(a=_Ab)\) p proves \(a=_Ab\)
8) \(p|(a=_Ab)\to\mathcal{P}\) \(\mathcal{P}\) is a proposition
9) \(C(a,b)(p)\) \(\mathcal{P}\) is a proposition
10) \(p,b,a|\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\) \(\mathcal{P}\) is a proposition
Note: 8 and 10 use an unorthodox notation that I'm calling the "tokenization operator". The idea is that for any type A, "|" picks out an element of the type, and we then use prefix notation to indicate application. (We might also split out a tokenization operator, e.g. \(\Lambda\) and define a "pipeline" application operator.) This allows us to dispense with (some) intermediate variables, just like the lambda operator allows us to dispense with function names. In 8, \(|(a=b)\to\mathcal{P}\) picks out an element of the function type, and prefixing \(p\) to it indicates application of that element to p. So 8 is a (nearly) variable-free version of:
\(C: (a=b)\to\mathcal{P}\); \(C(p)\)
Note that 10 could be written more explicitly:
\[a,b:A, \ p:(a=b),\ p|b,a|\prod\limits_{x,y:A} (x =_A y) \to \mathcal{P}\]
which should be read "outward": feed a then b then a proof of a=b to the \(\prod\) function.
The glosses for 8-10 here ignore information in the formulae; the gloss only applies to the "result" of the type formation operation. They can also be glossed intensionally as instances of modus ponens
(\(A\to B\); but \(A\), therefore \(B\)): If a=b is proven, then P; but p proves a=b, therefore P.

Here I think the advantage of having a tokenization operator like "|" is evident; it makes it easier to read the intensional sense of 8 and 10 directly off the formula. Well, not entirely; we still use vars a, b, and p. They could be eliminated using the tokenization operator with a \(\sum\) type, but that's a topic for another post.

What this is intended to show is that Curry-Howard is not limited to propositions. Types may encode terms, propositions, or inferences. Those that express inferences may also be construed as expressing propositions (namely, the concluding proposition of the inference); those that express propositions may be construed as expressing terms. The latter corresponds to the fact that embedded propositions function as terms: in "if P then Q", both P and Q are propositional variables but they act as terms within the conditional. |
Let $f : X \to Y$ and $g : Y \to Z$ be continuous maps (between topological spaces). Assume these hypotheses:
1. $f : X \to Y$ is a split surjection, i.e. has a section.
2. $g \circ f : X \to Z$ is a local homeomorphism, i.e. there is an open cover $\{ U_i : i \in I \}$ of $X$ such that, for each $i \in I$, the composite $U_i \to X \to Y \to Z$ is an open embedding.
Does it follow that $g : Y \to Z$ is a local homeomorphism?
Here are some observations:
- The question with "open map" instead of "local homeomorphism" has a positive answer. In particular, under the above hypotheses, $g : Y \to Z$ must be an open map.
- Moreover, the fibres of $g : Y \to Z$ must be discrete. So we have an open map with discrete fibres – is such a thing necessarily a local homeomorphism?
- If $f : X \to Y$ is an open map, then $g : Y \to Z$ is a local homeomorphism.
- Conversely, if $g : Y \to Z$ is a local homeomorphism, then $f : X \to Y$ is also a local homeomorphism (hence an open map a fortiori). |
I was thinking about single point continuity and came across this function. $$ f(x) = \left\{ \begin{array}{ll} x & \quad x\in \mathbb{Q}\\ 2-x & \quad x\notin \mathbb{Q} \end{array} \right. $$ We know this function is continuous only at $x=1$ . But doesn't that contradict our whole idea of continuity? A function is continuous if we are able to draw the function without lifting our pen or pencil. But here both the pieces of the function exist at specific places, so we have to lift our pen. Shouldn't the function be discontinuous everywhere? Looks like a stupid doubt though.
More precisely, the intuition says that a function is continuous over an interval if we can draw its graph within that interval without lifting our pen or pencil. This function is continuous at only one point.
The idea that "a function is continuous if (and only if) its graph can be drawn without lifting one's pen(cil)" is sometimes adequate for communicating with non-mathematicians, but is technically flawed for multiple reasons.
For convenience, let's call this "condition" pen continuity.

First, as other answers note, a function must be continuous on an interval to have any hope of being pen continuous. Unfortunately for "pen continuity", there are a couple of reasons a function might be continuous (to a mathematician, using the $\varepsilon$-$\delta$ definition), but not continuous on an interval:
A function can be continuous at a single point (such as the function in your post), or at each point of a complicated set that contains no interval of real numbers (such as Thomae's function, which is continuous at $x$ if and only if $x$ is irrational).
A function can be continuous at every point of its domain, but the domain is not an interval (and perhaps contains no interval). Think, for example, of Przemysław Scherwentke's example $f(x) = 1/x$ for $x \neq 0$, which is continuous throughout its domain (the set of non-zero real numbers), or of the zero function defined on an arbitrary set of real numbers (which can be nastier than the human mind can comprehend).
So, let's focus on (real-valued) functions that are continuous at every point of an interval. Depending on your definition of a pen, not every continuous function is pen continuous (!). If a "pen" is a mathematical point, and "draw" has its ordinary meaning ("the pen can be traced along the graph in finite time", say), then most continuous functions are not pen continuous, because their graphs have infinite length over arbitrary subintervals (or "are not locally rectifiable", in technical terms). (The Koch snowflake curve isn't a graph, but may be a familiar non-rectifiable example.)
To emphasize, a "typical" continuous function is nowhere-differentiable: Its graph looks something like an EKG or a seismograph tracing or the curves you draw after drinking 50 cups of espresso. "Zooming in" only reveals details at smaller and smaller scales, peaks and valleys whose total length (over an arbitrarily short subinterval of the domain) may well be infinite. Only functions of bounded variation have graphs of finite length, and that's a "thin" subset of all continuous functions.
The bottom line (literally!) is, a mathematician mustn't conflate "continuity" with "pen continuity".
A function is continuous at a point $a$ if:
$$\lim_{x \to a}f(x)=f(a)$$
For your function, no matter what point you choose (except one), there are irrational numbers alongside rational numbers, so the limit does not equal the function value. The exception is the point at which the two conditions yield the same value, i.e., their intersection. Since the intersection point ($2-x=x$) is $x=1$, both pieces tend to $1$ there and the above limit exists. Therefore the function is continuous only at that point.
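A small numerical illustration of this (with the caveat that floats cannot encode (ir)rationality, so we pass it as a flag): near $x=1$, both branches satisfy $|f(x)-1|=|x-1|$, so the limit is $1$ along rationals and irrationals alike.

```python
import math

def f(x, rational):
    # The function from the question: x on rationals, 2 - x on irrationals.
    # Floats can't tell us whether x is rational, so we pass that explicitly.
    return x if rational else 2 - x

# Approaching 1 through rational and through irrational points, the error
# |f(x) - 1| equals |x - 1| in both branches, so f is continuous at x = 1.
for n in range(1, 6):
    x_rat = 1 + 10.0 ** (-n)                  # a (representable) rational near 1
    x_irr = 1 + math.sqrt(2) * 10.0 ** (-n)   # an irrational point near 1
    assert abs(f(x_rat, True) - 1) == abs(x_rat - 1)
    assert abs(f(x_irr, False) - 1) == abs(x_irr - 1)
```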
No, mathematics hasn't failed. You have.

Drawing the graph of a function without lifting the pencil is only the initial intuition. Consider $f(x)=1/x$, which is continuous (at every point of its domain).
user86418 gave some examples of where pen-continuity differs from continuity. Here are two more important distinctions. With a pen, you can only draw curves that have finite length and bounded derivative, otherwise you need infinite ink and either infinite time or infinite speed. But there are plenty of continuous curves that have infinite length.
There are also 'simpler' examples. $\left( x \mapsto x \sin(\frac{π}{x^2}) \right)$ is continuous but has infinite length on any interval containing $0$, because the arc length between consecutive roots $\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n+1}}$ is at least $\frac{2}{\sqrt{n+1}}$, and $\sum_{n=1}^\infty \frac{2}{\sqrt{n+1}} = \infty$. This example also has unbounded derivative, which means that you have to keep changing direction at higher and higher speed if you ever want to finish drawing it! |
Electronic Journal of Probability, Volume 23 (2018), paper no. 94, 27 pp.

Trees within trees: simple nested coalescents

Abstract
We consider the compact space of pairs of nested partitions of $\mathbb{N}$, where by analogy with models used in molecular evolution, we call "gene partition" the finer partition and "species partition" the coarser one. We introduce the class of nondecreasing processes valued in nested partitions, assumed Markovian and with exchangeable semigroup. These processes are said to be simple when each partition only undergoes one coalescence event at a time (but possibly both at the same time). Simple nested exchangeable coalescent (SNEC) processes can be seen as the extension of $\Lambda$-coalescents to nested partitions. We characterize the law of SNEC processes as follows. In the absence of gene coalescences, species blocks undergo $\Lambda$-coalescent type events, and in the absence of species coalescences, gene blocks lying in the same species block undergo i.i.d. $\Lambda$-coalescents. Simultaneous coalescences of the gene and species partitions are governed by an intensity measure $\nu_s$ on $(0,1]\times{\mathcal M}_1([0,1])$ providing the frequency of species merging and the law from which the frequencies of genes merging in each coalescing species block are drawn (independently). As an application, we also study the conditions under which a SNEC process comes down from infinity.
Article information
Source: Electron. J. Probab., Volume 23 (2018), paper no. 94, 27 pp.
Dates: Received: 6 March 2018; Accepted: 3 September 2018; First available in Project Euclid: 18 September 2018
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1537257886
Digital Object Identifier: doi:10.1214/18-EJP219
Mathematical Reviews number (MathSciNet): MR3858922
Zentralblatt MATH identifier: 06964788
Subjects: Primary: 60G09 (Exchangeability); 60G57 (Random measures); 60J35 (Transition functions, generators and resolvents); 60J75 (Jump processes); 92D10 (Genetics); 92D15 (Problems related to evolution)
Citation
Blancas, Airam; Duchamps, Jean-Jil; Lambert, Amaury; Siri-Jégousse, Arno. Trees within trees: simple nested coalescents. Electron. J. Probab. 23 (2018), paper no. 94, 27 pp. doi:10.1214/18-EJP219. https://projecteuclid.org/euclid.ejp/1537257886 |
Site Index Site is defined by the Society of American Foresters (1971) as “an area considered in terms of its own environment, particularly as this determines the type and quality of the vegetation the area can carry.” Forest and natural resource managers use site measurement to identify the potential productivity of a forest stand and to provide a comparative frame of reference for management options. The productive potential or capacity of a site is often referred to as site quality.
Site quality can be measured directly or indirectly. A stand's productivity can be measured directly by analyzing variables such as soil nutrients, moisture, temperature regimes, available light, slope, and aspect. A productivity-estimation method based on the permanent features of soil and topography can be used on any site and is suitable in areas where forest stands do not presently exist. Soil site index is an example of such an index. However, such indices are location specific and should not be used outside the geographic region in which they were developed. Unfortunately, environmental factor information is not always available, and natural resource managers must use alternative methods.
Historical yield records also provide direct evidence of a site’s productivity by averaging the yields over multiple rotations or cutting cycles. Unfortunately, there are limited long-term data available, and yields may be affected by species composition, stand density, pests, rotation age, and genetics. Consequently, indirect methods of measuring site quality are frequently used, with the most common involving the relationship between tree height and tree age.
Using stand height data is an easy and reliable way to quantify site quality. Theoretically, height growth is sensitive to differences in site quality and height development of larger trees in an even-aged stand is seldom affected by stand density. Additionally, the volume-production potential is strongly correlated with height-growth rate. This measure of site quality is called site index and is the average total height of selected dominant-codominant trees on a site at a particular reference or index age. If you measure a stand that is at an index age, the average height of the dominant and codominant trees is the site index. It is the most widely accepted quantitative measure of site quality in the United States for even-aged stands (Avery and Burkhart 1994).
The objective of the site index method is to select the height development pattern that the stand can be expected to follow during the remainder of its life (not to predict stand height at the index age). Most height-based methods of site quality evaluation use site index curves. Site index curves are a family of height development patterns referenced by either age at breast height or total age. For example, site index curves for plantations are generally based on total age (years since planting), while age at breast height is frequently used for natural stands for the sake of convenience. If total age were to be used in this situation, the number of years required for a tree to grow from a seedling to DBH would have to be added in. Site index curves can be either anamorphic or polymorphic. Anamorphic curves (the most common) are a family of curves with the same shape but different intercepts. Polymorphic curves are a family of curves with different shapes and intercepts.
The index age for this method is typically the age at which mean annual growth culminates. In the western part of the United States, 100 years is commonly used as the reference age, with 50 years in the eastern part of the country. However, site index curves can be based on any index age that is needed. Coile and Schumacher (1964) created a family of anamorphic site index curves for plantation loblolly pine with an index age of 25 years. The following family of anamorphic site index curves for a southern pine is based on a reference age of 50 years.
Figure 1. Site index curves with an index age of 50 years.
Creating a site index curve involves the random selection of dominant and codominant trees, measuring their total height, and statistically fitting the data to a mathematical equation. So, which equation do you use? Plotting height over age for single-species, even-aged stands typically results in a sigmoid-shaped pattern, which is commonly modeled with an equation of the form:
$$H_d = b_0e^{(b_1A^{-1})}$$
where \(H_d\) is the height of dominant and codominant trees, \(A\) is stand age, and \(b_0\) and \(b_1\) are coefficients to be estimated. Variable transformation is needed if linear regression is to be used to fit the model. A common transformation is
$$ln \ H_d = b_0+b_1A^{-1}$$
Coile and Schumacher (1964) fit their data to the following model:
$$ln \ S = ln \ H +5.190(\frac {1}{A} - \frac {1}{25})$$
where \(S\) is site index, \(H\) is total tree height, and \(A\) is average age. The site index curve is created by fitting the model to data from stands of varying site qualities and ages, making sure that all necessary site index classes are equally represented at all ages. It is important not to bias the curve by using an incomplete range of data.
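As a quick sketch, the Coile and Schumacher equation can be solved for \(S\) directly from a height/age measurement (the function and variable names here are our own):

```python
import math

def site_index_loblolly(height_ft, age_yr, b1=5.190, index_age=25):
    """Coile and Schumacher (1964): ln S = ln H + b1 * (1/A - 1/A0),
    with index age A0 = 25 years for plantation loblolly pine."""
    return math.exp(math.log(height_ft) + b1 * (1.0 / age_yr - 1.0 / index_age))

# At the index age, site index equals the measured height by definition.
print(round(site_index_loblolly(40, 25), 6))  # → 40.0
```

A younger stand that has already reached the same height receives a higher site index, as expected.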
Data for the development of site index equations can come from measurement of tree or stand height and age from temporary or permanent inventory plots or from stem analysis. Inventory plot data are typically used for anamorphic curves only and sampling bias can occur when poor sites are over represented in older age classes. Stem analysis can be used for polymorphic curves but requires destructive sampling and it can be expensive to obtain such data.
We are going to examine three different methods for developing site index equations:

1. Guide curve method
2. Difference equation method
3. Parameter prediction method

Guide Curve Method
The guide curve method is commonly used to generate anamorphic site index equations. Let’s begin with a commonly used model form:
$$ ln \ H_d =b_0 +b_1A^{-1} = b_0 + b_1\frac{1}{A}$$
Parameterizing this model results in a “guide curve” (the average line for the sample data) that is used to create the individual height/age development curves that parallel the guide curve. For a particular site index the equation is:
$$ln \ H_d = b_{0i} +b_1A^{-1}$$
where \(b_{0i}\) is the unique y-intercept for that site index. By definition, when \(A = A_0\) (the index age), \(H\) is equal to site index \(S\). Thus:
$$b_{0i} = ln \ S - b_1A_0^{-1}$$
Substituting \(b_{0i}\) into the equation above gives:
$$ ln \ H = ln \ S + b_1(A^{-1} - A_0^{-1})$$
which can be used to generate site index curves for given values of \(S\) and \(A_0\) and a range of ages \(A\). The equation can be algebraically rearranged as:
$$ln \ S = ln \ H -b_1(A^{-1} - A_0^{-1}) = ln (H) - b_1(\frac {1}{A} - \frac {1}{A_0})$$
This is the form to estimate site index (height at index age) when height and age data measurements are given. This process is sound only if the average site quality in the sample data is approximately the same for all age classes. If the average site quality varies systematically with age, the guide curve will be biased.
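A sketch of the guide curve fit on synthetic data (the coefficients and measurements below are invented for illustration; real data would come from inventory plots):

```python
import math
import random

random.seed(0)
# Synthetic height/age data from an assumed guide curve ln H = 4.5 - 20 / A.
ages = [random.uniform(10, 80) for _ in range(200)]
lnH = [4.5 - 20.0 / A + random.gauss(0, 0.05) for A in ages]

# Simple linear regression of ln H on X = 1/A gives the guide curve.
X = [1.0 / A for A in ages]
xbar, ybar = sum(X) / len(X), sum(lnH) / len(lnH)
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(X, lnH))
      / sum((x - xbar) ** 2 for x in X))
b0 = ybar - b1 * xbar

# Estimated site index (height at A0 = 50) for a stand 60 ft tall at age 35:
S = math.exp(math.log(60.0) - b1 * (1.0 / 35.0 - 1.0 / 50.0))
```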
Difference Equation Method
This method requires remeasurement data (from monumented plots or individual trees) or stem analysis data. The model is fit using differences of height measurements at specific ages. This method is appropriate for anamorphic and polymorphic curves, especially for longer and/or multiple measurement periods. Schumacher (after Clutter et al. 1983) used this approach when estimating site index using the reciprocal of age and the natural log of height. He assumed a linear relationship between Point A \((1/A_1, \ln H_1)\) and Point B \((1/A_2, \ln H_2)\) and defined \(\beta_1\) (the slope) as:
$$\beta_1 = \dfrac {ln(H_2) - ln (H_1)}{(1/A_2)-(1/A_1)}$$
where \(H_1\) and \(A_1\) are the initial height and age, and \(H_2\) and \(A_2\) are the height and age at the end of the remeasurement period. His height/age model became:
$$ln (H_2) = ln (H_1) +\beta_1 (\frac {1}{A_2} - \frac {1}{A_1})$$
Using remeasurement data, this equation would be fitted using linear regression procedures with the model
$$Y = \beta_1X$$
where \(Y = \ln(H_2) - \ln(H_1)\) and \(X = (1/A_2) - (1/A_1)\).
After estimating \(\beta_1\), a site index equation is obtained from the height/age equation by letting \(A_2\) equal \(A_0\) (the index age), so that \(H_2\) is, by definition, site index \(S\). The equation can then be written:
$$ln (S) = ln(H_1) + \beta_1(\frac {1}{A_0} - \frac {1}{A_1})$$
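A minimal sketch of this calculation (the remeasurement numbers are hypothetical):

```python
import math

def beta1_slope(H1, A1, H2, A2):
    """Schumacher's slope between the points (1/A1, ln H1) and (1/A2, ln H2)."""
    return (math.log(H2) - math.log(H1)) / (1.0 / A2 - 1.0 / A1)

def site_index_from_height(H1, A1, beta1, A0=50):
    """ln S = ln H1 + beta1 * (1/A0 - 1/A1), i.e. height projected to index age."""
    return math.exp(math.log(H1) + beta1 * (1.0 / A0 - 1.0 / A1))

# Hypothetical remeasurement: a plot grows from 30 ft at age 20 to ~47 ft at age 40,
# constructed here to be consistent with beta1 = -18.
H1, A1, A2 = 30.0, 20.0, 40.0
H2 = math.exp(math.log(H1) - 18.0 * (1.0 / A2 - 1.0 / A1))
b1 = beta1_slope(H1, A1, H2, A2)
S = site_index_from_height(H1, A1, b1)  # projected height at the index age 50
```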
Parameter Prediction Method
This method requires remeasurement or stem analysis data, and involves the following steps:
1. Fitting a linear or nonlinear height/age function to the data on a tree-by-tree (stem analysis data) or plot-by-plot (remeasurement data) basis
2. Using each fitted curve to assign a site index value to each tree or plot (put \(A_0\) in the equation to estimate site index)
3. Relating the parameters of the fitted curves to site index through linear or nonlinear regression procedures
Trousdell et al. (1974) used this approach to estimate site index for loblolly pine and it provides an example using the Chapman-Richards (Richards 1959) function for the height/age relationship. They collected stem analysis data on 44 dominant and codominant trees that had a minimum age of at least 50 years. The Chapman-Richards function was used to define the height/age relationship:
$$H = \theta_1[1-e^{(-\theta_2A)}]^{[(1-\theta_3)^{-1}]}$$
where \(H\) is height in feet at age \(A\), and \(\theta_1\), \(\theta_2\), and \(\theta_3\) are parameters to be estimated. This equation was fitted separately to each tree. The fitted curves were all solved with \(A = 50\) to obtain a site index value \(S\) for each tree.
The parameters θ1, θ2, and θ3 were hypothesized to be functions of site index, where
$$\theta_1 = \beta_1 + \beta_2S$$
$$\theta_2 = \beta_3 + \beta_4S+\beta_5S^2$$
$$\theta_3 = \beta_6 + \beta_7S + \beta_8S^2$$
The Chapman-Richards function was then expressed as:
$$H = (\beta_1+\beta_2S)\{1-e^{[-(\beta_3+\beta_4S+\beta_5S^2)A]}\}^{[(1-\beta_6-\beta_7S-\beta_8S^2)^{-1}]}$$
This function was then refitted to the data to estimate the parameters β1, β2, …β8. The estimating equations obtained for θ1, θ2, and θ3 were
$$\hat {\theta_1} = 63.1415+0.635080S$$
$$\hat {\theta_2} = 0.00643041 + 0.000124189S + 0.00000162545S^2$$
$$\hat {\theta_3} = 0.0172714 - 0.00291877S + 0.0000310915S^2$$
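As a sketch, these fitted equations can be evaluated to generate a curve (note that the refitted curves need not pass exactly through \(H = S\) at age 50):

```python
import math

def chapman_richards_height(age, S):
    """Height (ft) at a given age for site index S (base age 50), using the
    fitted theta equations of Trousdell et al. (1974) quoted above."""
    t1 = 63.1415 + 0.635080 * S
    t2 = 0.00643041 + 0.000124189 * S + 0.00000162545 * S ** 2
    t3 = 0.0172714 - 0.00291877 * S + 0.0000310915 * S ** 2
    # Chapman-Richards form: H = t1 * (1 - exp(-t2 * A)) ** (1 / (1 - t3))
    return t1 * (1.0 - math.exp(-t2 * age)) ** (1.0 / (1.0 - t3))

# A few points on the S = 70 curve; height rises toward the asymptote t1.
for age in (25, 50, 100):
    print(age, round(chapman_richards_height(age, 70), 1))
```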
For any given site index value, these equations can be solved to give a particular Chapman-Richards site index curve. By substituting various values of age into the equation and solving for \(H\), we obtain height/age points that can be plotted for a site index curve. Since each site index curve has different parameter values, the curves are polymorphic.

Periodic Height Growth Data
An alternative to using current stand height as the surrogate for site quality is to use periodic height growth data, which is referred to as a growth intercept method. This method is practical only for species that display distinct annual branch whorls and is primarily used for juvenile stands because site index curves are less dependable for young stands.
This method requires the length measurement of a specified number of successive annual internodes or the length over a 5-year period. While the growth-intercept values can be used directly as measures of site quality, they are more commonly used to estimate site index.
Alban (1972) created a simple linear model to predict site index for red pine using 5-year growth intercept in feet beginning at 8 ft. above ground.
SI = 32.54 + 3.43 X
where \(SI\) is the site index at a base age of 50 years and \(X\) is the 5-year growth intercept in feet.
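Alban's equation in code form (trivial, but it makes the inputs and units explicit; the function name is our own):

```python
def red_pine_site_index(intercept_ft):
    """Alban (1972): site index (base age 50) for red pine from the 5-year
    growth intercept, in feet, measured starting 8 ft above ground."""
    return 32.54 + 3.43 * intercept_ft

# A 10 ft intercept over 5 years implies a site index of about 66.8 ft.
print(round(red_pine_site_index(10), 2))  # → 66.84
```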
Using periodic height growth data has the advantage of not requiring stand age or total tree height measurements, which can be difficult in young, dense stands. However, due to the short-term nature of the data, weather variation may strongly influence the internodal growth thereby rendering the results inaccurate.
Site index equations should be based on biological or mathematical theories, which will help the equation perform better. They should behave logically and not allow unreasonable values for predicted height, especially at very young or very old ages. The equations should also contain an asymptotic parameter to control unbounded height growth at old age. The asymptote should be some function of site index such that the asymptote increases with increases of site index.
When using site index, it is important to know the base age for the curve before use. It is also important to realize that site index based on one base age cannot be converted to another base age. Additionally, similar site indices for different species do not mean similar sites even when the same base age is used for both species. You have to understand how height and age were measured before you can safely interpret a site index curve. Site index is not a true measure of site quality; rather it is a measure of a tree growth component that is affected by site quality (top height is a measure of stand development, NOT site quality). |
Definition \(\PageIndex{1}\)
If \(X\) is a continuous random variable with density function \(f(x)\), then the
expected value (or mean) of \(X\) is given by
$$\mu = \mu_X = E[X] = \int\limits^{\infty}_{-\infty}\! x\cdot f(x)\, dx.\notag$$
The formula for the expected value of a continuous random variable is the continuous analogue of the expected value of a discrete random variable: instead of summing over all possible values, we integrate, weighting each value \(x\) by the density \(f(x)\) (recall Section 3.7). This interpretation of the expected value as a weighted average explains why it is also referred to as the mean of the random variable.

Example \(\PageIndex{1}\)
Consider again the context of Example 17, where we defined the continuous random variable \(X\) to denote the time a person waits for an elevator to arrive. The pdf of \(X\) was given by
$$f(x) = \left\{\begin{array}{l l} x, & \text{for}\ 0\leq x\leq 1 \\ 2-x, & \text{for}\ 1< x\leq 2 \\ 0, & \text{otherwise} \end{array}\right.\notag$$ Applying Definition 4.4.1, we compute the expected value of \(X\): $$E[X] = \int\limits^1_0\! x\cdot x\, dx + \int\limits^2_1\! x\cdot (2-x)\, dx = \int\limits^1_0\! x^2\, dx + \int\limits^2_1\! (2x - x^2)\, dx = \frac{1}{3} + \frac{2}{3} = 1.\notag$$ Thus, we expect a person will wait 1 minute for the elevator on average. Figure 1 demonstrates the graphical representation of the expected value as the center of mass of the pdf.
Figure 1: Graph of \(f\): The red arrow represents the center of mass, or the expected value of \(X\).
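The integral above can also be checked numerically; a simple midpoint-rule sketch:

```python
def f(x):
    """The elevator-wait pdf from the example."""
    if 0 <= x <= 1:
        return x
    if 1 < x <= 2:
        return 2 - x
    return 0.0

# Midpoint-rule approximation of E[X] = integral of x * f(x) over [0, 2].
n = 100_000
h = 2.0 / n
mean = sum(((i + 0.5) * h) * f((i + 0.5) * h) * h for i in range(n))
print(round(mean, 4))  # → 1.0
```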
If continuous random variable \(X\) has a normal distribution with parameters \(\mu\) and \(\sigma\), then \(E[X] = \mu\). The normal case is why the notation \(\mu\) is often used for the expected value. Again, this fact can be derived using Definition 4.4.1; however, the integral calculation requires many tricks.
The expected value may not be exactly equal to a parameter of the probability distribution, but rather it may be a function of the parameters as the next example with the uniform distribution shows.
Example \(\PageIndex{2}\)
Suppose the random variable \(X\) has a uniform distribution on the interval \([a,b]\). Then the pdf of \(X\) is given by
$$f(x) = \frac{1}{b-a}, \quad\text{for}\ a\leq x\leq b.\notag$$ Applying Definition 4.4.1, we compute the expected value of \(X\): $$E[X] = \int\limits^b_a\! x\cdot\frac{1}{b-a}\, dx = \frac{b^2 - a^2}{2}\cdot\frac{1}{b-a} = \frac{(b-a)(b+a)}{2}\cdot\frac{1}{b-a} = \frac{b+ a}{2}.\notag$$ Thus, the expected value of the uniform\([a,b]\) distribution is given by the average of the parameters \(a\) and \(b\), or the midpoint of the interval \([a,b]\). This is readily apparent when looking at a graph of the pdf. Since the pdf is constant over \([a,b]\), the center of mass is simply given by the midpoint.
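The result \((a+b)/2\) is easy to confirm by simulation; a minimal sketch with arbitrary endpoints \(a=2\), \(b=10\):

```python
import random

random.seed(0)
a, b = 2.0, 10.0
n = 200_000
# By the law of large numbers, the sample mean approaches E[X] = (a + b)/2 = 6.
sample_mean = sum(random.uniform(a, b) for _ in range(n)) / n
```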
If \(X\) is a continuous random variable with pdf \(f(x)\) and \(Y = g(X)\) for some function \(g\), then the expected value of \(Y\) is given by
CONTINUOUS CASE
$$E[Y] = \int\limits^{\infty}_{-\infty}\! g(x)\cdot f(x)\, dx.\notag$$
2. If \(X_1, \ldots, X_n\) are continuous random variables with joint density function \(f(x_1, \ldots, x_n)\) and \(Y = g(X_1, \ldots, X_n)\), then the expected value of \(Y\) is given by
CONTINUOUS CASE $$E[Y] = \int\limits^{\infty}_{-\infty}\!\cdots\int\limits^{\infty}_{-\infty}\! g(x_1, \ldots, x_n)\cdot f(x_1, \ldots, x_n)\, dx_1\, \ldots\, dx_n.\notag$$ |
Showing that the language $L = \{1^n0^m \mid n \neq 2^m\}$ is not regular using Myhill-Nerode is easy: let $i, j\in \mathbb{N}$ with $i\neq j$. It follows that $1^{2^i}\nsim 1^{2^j}$, because $1^{2^i}0^{i}\notin L$ but $1^{2^j}0^{i}\in L$. Therefore $L$ has infinitely many Myhill-Nerode equivalence classes and is not regular. But how do I show this using the general version of the pumping lemma for regular languages? https://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages#General_version_of_pumping_lemma_for_regular_languages
Let $p$ be the pumping length, and consider the string $u=1^{2^{p!+p}}$, $w = 0^p$, $v = \epsilon$. Notice $uwv \in L$. According to the pumping lemma, there is a value $q \in \{1,\ldots,p\}$ such that $1^{2^{p!+p}} 0^{p-q+iq} \in L$ for every $i \geq 0$. Choosing $i = 1 + p!/q$, we obtain a contradiction.
This answer uses the standard pumping lemma. Depending on your perspective, this answer may or may not have used the general pumping lemma.
Consider $L' = (1^*0^*) \setminus\{1^n0^m \mid n \neq 2^m\}=\{1^n0^m\mid n=2^m \}$.
Suppose the pumping length of $L'$ is $p$. Consider $w=1^{2^p}0^p\in L'$. Then $w=xyz$, where $|xy|\le p$, $|y|\ge 1$ and $xy^0z=xz\in L'$. Since $y$ contains only 1s, $xz$ has fewer 1s and the same number of 0s as $w$. So $xz\not\in L'$. That contradiction shows that $L'$, and hence $L$, is not regular (regular languages are closed under set difference). |
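A small script makes the pumping argument concrete; `in_L_prime` is our own helper for membership in $L'$:

```python
def in_L_prime(s):
    """Membership test for L' = { 1^n 0^m : n == 2**m }, s over {0, 1}."""
    n = len(s) - len(s.lstrip('1'))   # count of leading 1s
    m = len(s) - n                    # remaining symbols must all be 0s
    return s == '1' * n + '0' * m and n == 2 ** m

p = 3
w = '1' * 2 ** p + '0' * p            # 1^{2^p} 0^p is in L'
assert in_L_prime(w)

# Since |xy| <= p forces y into the block of 1s, pumping down removes only 1s,
# and every such shortened string falls out of L':
for k in range(1, p + 1):
    assert not in_L_prime('1' * (2 ** p - k) + '0' * p)
```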
I've posted this already in stats.stackexchange. I'm not sure what the rules are for cross-posting but mathoverflow seems to be more active.
Suppose we have data $x_i, i=1,2,3,...n$ that are dependent and identically distributed with marginal $f(\cdot|\alpha)$. If we model this with the likelihood
$ L = c(F(x_1|\alpha),F(x_2|\alpha),...F(x_n|\alpha)|\theta)\prod_{i=1}^n f(x_i|\alpha) $
and the dependence parameter $\theta$ is known, can we apply some variant of the Expectation Maximization algorithm to estimate $\alpha$ using an iterative procedure with relatively simple steps?
For instance, I considered a simple problem with exponential marginals and a Gaussian copula (with known correlation), and did something procedural (and hokey). I introduced the unknown independent samples $\tilde{x}_i, i=1,2,...,n$, which you would compute, knowing the correct value of $\alpha$, by mapping the $x_i$ to correlated Gaussians $y_i = \Phi^{-1}(F(x_i|\alpha))$, "undoing" the correlations $z = B^{-1}y$, and mapping the $z$ forward again to produce the independent $\tilde{x_i}=F^{-1}(\Phi(z_i)|\alpha)$. Here $C=BB^T$ is the correlation matrix and $\Phi$ is the standard normal cdf. If you turn this into an iterative procedure, using $x$ as the initial guess for the independent data, it seems to produce a sequence that (at least in my trials) converged. However, the whole thing is doubtful since it depends entirely on what you choose for $B$ (which is only defined up to an orthogonal factor). I think it's the orthogonal invariance of the Gaussian hitting you when you try to basically do an inversion.
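For concreteness, here is a minimal sketch (our own naming) of that single "decorrelation" step in the bivariate case, with exponential marginals and a Gaussian copula whose Cholesky factor $B$ is known:

```python
import math
import random
from statistics import NormalDist

nd = NormalDist()
random.seed(0)

alpha_true, rho = 2.0, 0.6
s = math.sqrt(1.0 - rho ** 2)   # with B = [[1, 0], [rho, s]], C = B B^T

def step(pairs, alpha):
    """One pass of the procedure described above for (x1, x2) pairs:
    map to Gaussians, undo B, map back, then take the iid-exponential MLE."""
    x_tilde = []
    for x1, x2 in pairs:
        y1 = nd.inv_cdf(1.0 - math.exp(-alpha * x1))   # Phi^{-1}(F(x | alpha))
        y2 = nd.inv_cdf(1.0 - math.exp(-alpha * x2))
        z1, z2 = y1, (y2 - rho * y1) / s               # z = B^{-1} y
        x_tilde += [-math.log(1.0 - nd.cdf(z1)) / alpha,
                    -math.log(1.0 - nd.cdf(z2)) / alpha]
    return x_tilde, 1.0 / (sum(x_tilde) / len(x_tilde))

# Synthetic check: generate 400 correlated pairs with the true alpha = 2,
# then verify that one step at the truth recovers alpha closely.
pairs = []
for _ in range(400):
    w1, w2 = random.gauss(0, 1), random.gauss(0, 1)
    y1, y2 = w1, rho * w1 + s * w2                     # y = B w
    pairs.append((-math.log(1.0 - nd.cdf(y1)) / alpha_true,
                  -math.log(1.0 - nd.cdf(y2)) / alpha_true))

x_tilde, alpha_hat = step(pairs, alpha_true)
```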
Is there an obvious way to turn this kind of problem into a sane iterative procedure using simple steps like the EM? I feel like I'm missing a simple trick. |
Joint Distributions
In this chapter we consider two or more random variables defined on the same sample space and discuss how to model the probability distribution of the random variables jointly. This starts with defining the joint cdf.
Definition\(\PageIndex{1}\)
The joint behavior of random variables \(X\) and \(Y\) is determined by the joint cumulative distribution function (cdf): $$F(x,y) = P(X\leq x\ \text{and}\ Y\leq y).\notag$$
In the next two sections, we consider the two cases of random variables: discrete and continuous.
Joint Distributions of Discrete Random Variables
Having already defined the joint cdf, we begin by looking at the joint frequency function for two discrete random variables.
Definition\(\PageIndex{1}\)
If discrete random variables \(X\) and \(Y\) are defined on the same sample space \(\Omega\), then their joint frequency function is given by $$p(x,y) = P(X=x\ \text{and}\ Y=y).\notag$$ In the discrete case, we can obtain the joint cdf of \(X\) and \(Y\) by summing up the joint frequency function: $$F(x,y) = \sum_{x_i \leq x} \sum_{y_j \leq y} p(x_i, y_j),\notag$$ where \(x_i\) denotes possible values of \(X\) and \(y_j\) denotes possible values of \(Y\). From the joint frequency function, we can also obtain the individual probability distributions of \(X\) and \(Y\) separately as shown in the next definition.
Definition\(\PageIndex{2}\)
Suppose that discrete random variables \(X\) and \(Y\) have joint frequency function \(p(x,y)\). Let \(x_1, x_2, \ldots, x_i, \ldots\) denote the possible values of \(X\), and let \(y_1, y_2, \ldots, y_j, \ldots\) denote the possible values of \(Y\). The
marginal frequency functions of \(X\) and \(Y\) are respectively given by the following: \begin{align*} p_X(x) &= \sum_j p(x, y_j) \quad(\text{fix}\ x,\ \text{sum over possible values of}\ Y) \\ p_Y(y) &= \sum_i p(x_i, y) \quad(\text{fix}\ y,\ \text{sum over possible values of}\ X) \end{align*}
Example \(\PageIndex{1}\):
Consider again the probability experiment of Example 3.4.1, where we toss a fair coin three times and record the sequence of heads \((h)\) and tails \((t)\). Again, we let random variable \(X\) denote the number of heads obtained. We also let random variable \(Y\) denote the winnings earned in a single play of a game with the following rules, based on the outcomes of the probability experiment:
player wins $1 if the first \(h\) occurs on the first toss
player wins $2 if the first \(h\) occurs on the second toss
player wins $3 if the first \(h\) occurs on the third toss
player loses $1 if no \(h\) occurs
Note that the possible values of \(X\) are \(x=0,1,2,3\), and the possible values of \(Y\) are \(y=-1,1,2,3\). We represent the joint frequency function using a table:
Table 1 (rows give values of \(Y\), columns give values of \(X\)):

\(p(x,y)\)   \(x=0\)   \(x=1\)   \(x=2\)   \(x=3\)
\(y=-1\)      1/8       0         0         0
\(y=1\)       0         1/8       2/8       1/8
\(y=2\)       0         1/8       1/8       0
\(y=3\)       0         1/8       0         0
The values in Table 1 give the values of \(p(x,y)\). For example, consider \(p(0,-1)\):
$$p(0,-1) = P(X=0\ \text{and}\ Y=-1) = P(ttt) = 1/8.\notag$$ Since the outcomes are equally likely, the values of \(p(x,y)\) are found by counting the number of outcomes in the sample space that result in the specified values of the random variables, and then dividing by \(8\), the total number of outcomes. The sample space is given below, color coded to help explain the values of \(p(x,y)\): $$\Omega = \{{\color{green}hhh}, {\color{green}hht}, {\color{green}hth}, {\color{green}htt}, {\color{blue}thh}, {\color{blue}tht}, {\color{magenta}tth}, {\color{red} ttt}\}\notag$$
Given the joint frequency function, we can now find the marginal frequency functions. Note that the marginal frequency function for \(X\) is found by computing sums of the columns in Table 1, and the marginal frequency function for \(Y\) corresponds to the row sums. (Note that we found the frequency function for \(X\) in Example 3.4.1 as well.)
\(x\):        0     1     2     3
\(p_X(x)\):   1/8   3/8   3/8   1/8

\(y\):        -1    1     2     3
\(p_Y(y)\):   1/8   1/2   1/4   1/8
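As a quick numerical check (a small sketch; the array below simply hard-codes Table 1), the marginals are the column and row sums of the joint table:

```python
import numpy as np

# joint frequency table p(x, y) from Table 1:
# rows are y = -1, 1, 2, 3; columns are x = 0, 1, 2, 3
p = np.array([
    [1/8, 0,   0,   0  ],   # y = -1
    [0,   1/8, 2/8, 1/8],   # y = 1
    [0,   1/8, 1/8, 0  ],   # y = 2
    [0,   1/8, 0,   0  ],   # y = 3
])

p_X = p.sum(axis=0)   # column sums: marginal of X, i.e. 1/8, 3/8, 3/8, 1/8
p_Y = p.sum(axis=1)   # row sums: marginal of Y, i.e. 1/8, 1/2, 1/4, 1/8
```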
Finally, we can find the joint cdf for \(X\) and \(Y\) by summing over values of the joint frequency function. For example, consider \(F(1,1)\):
$$F(1,1) = P(X\leq1\ \text{and}\ Y\leq1) = p(0,-1) + p(0,1) + p(1,-1) + p(1,1) = \frac{1}{4}\notag$$ Again, we can represent the joint cdf using a table:
\(F(x,y)\)   \(x=0\)   \(x=1\)   \(x=2\)   \(x=3\)
\(y=-1\)      1/8       1/8       1/8       1/8
\(y=1\)       1/8       1/4       1/2       5/8
\(y=2\)       1/8       3/8       3/4       7/8
\(y=3\)       1/8       1/2       7/8       1

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Joint Distributions of Continuous Random Variables
Definition\(\PageIndex{1}\)
If continuous random variables \(X\) and \(Y\) are defined on the same sample space \(\Omega\), then their
joint density function is a piecewise continuous function, denoted \(f(x,y)\), that satisfies the following:
1. \(f(x,y)\geq0\), for all \((x,y)\in\mathbb{R}^2\)
2. \(\displaystyle{\iint\limits_{\mathbb{R}^2}\! f(x,y)\, dx\, dy = 1}\)
3. \(\displaystyle{P((X,Y)\in A) = \iint\limits_A\! f(x,y)\, dx\, dy}\), for any \(A\subseteq\mathbb{R}^2\)
As an example of the third condition in the definition above, in the continuous case, the joint cdf for random variables \(X\) and \(Y\) is obtained by integrating the joint density function over a set \(A\) of the form
$$A = \{(x,y)\in\mathbb{R}^2\ |\ X\leq a\ \text{and}\ Y\leq b\},\notag$$ where \(a\) and \(b\) are constants. Specifically, if \(A\) is given as above, then the joint cdf of \(X\) and \(Y\), at the point \((a,b)\), is given by $$F(a,b) = P(X\leq a\ \text{and}\ Y\leq b) = \int\limits^b_{-\infty}\int\limits^a_{-\infty}\! f(x,y)\, dx\, dy.\notag$$ Note that probabilities for continuous jointly distributed random variables are now volumes instead of areas as in the case of a single continuous random variable.
As in the discrete case, we can also obtain the individual probability distributions of \(X\) and \(Y\) from the joint density function.
Definition\(\PageIndex{2}\)
Suppose that continuous random variables \(X\) and \(Y\) have joint density function \(f(x,y)\). The
marginal density functions of \(X\) and \(Y\) are respectively given by the following. \begin{align*} f_X(x) &= \int\limits^{\infty}_{-\infty}\! f(x, y)\,dy \quad(\text{fix}\ x,\ \text{integrate over possible values of}\ Y) \\ f_Y(y) &= \int\limits^{\infty}_{-\infty}\! f(x, y)\,dx \quad(\text{fix}\ y,\ \text{integrate over possible values of}\ X) \end{align*}
Example \(\PageIndex{1}\):
Suppose a radioactive particle is contained in a unit square. We can define random variables \(X\) and \(Y\) to denote the \(x\)- and \(y\)-coordinates of the particle's location in the unit square, with the bottom left corner placed at the origin. Radioactive particles follow completely random behavior, meaning that the particle's location should be uniformly distributed over the unit square. This implies that the joint density function of \(X\) and \(Y\) should be constant over the unit square, which we can write as
$$f(x,y) = \left\{\begin{array}{l l} c, & \text{if}\ 0\leq x\leq 1\ \text{and}\ 0\leq y\leq 1 \\ 0, & \text{otherwise}, \end{array}\right.\notag$$ where \(c\) is some unknown constant. We can find the value of \(c\) by using the second condition in the definition of the joint density function above (the density must integrate to 1) and solving the following: $$\iint\limits_{\mathbb{R}^2}\! f(x,y)\, dx\, dy = 1 \quad\Rightarrow\quad \int\limits^1_0\!\int\limits^1_0\! c\, dx\, dy = 1 \quad\Rightarrow\quad c \int\limits^1_0\!\int\limits^1_0\! 1\, dx\, dy = 1 \quad\Rightarrow\quad c=1\notag$$
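The normalization step can also be checked symbolically; a small sketch with sympy (using nothing beyond the integral above):

```python
import sympy as sp

x, y, c = sp.symbols('x y c', positive=True)

# require the joint density (the constant c on the unit square) to integrate to 1
eq = sp.Eq(sp.integrate(c, (x, 0, 1), (y, 0, 1)), 1)
print(sp.solve(eq, c))  # [1]
```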
We can now use the joint density of \(X\) and \(Y\) to compute probabilities that the particle is in some specific region of the unit square. For example, consider the region
$$A = \{(x,y)\ |\ x-y > 0.5\},\notag$$ which is graphed in Figure 5.
Integrating the joint density function over \(A\) gives the following probability:
$$P(X-Y>0.5) = \iint\limits_A\! f(x,y)\, dx\, dy = \int\limits^{0.5}_0\! \int\limits^{1}_{y+0.5}\! 1\, dx\, dy = \int\limits^{0.5}_0\! (0.5-y)\, dy = 0.125\notag$$ Finally, we apply the definition of the marginal density functions above and find the marginal density functions of \(X\) and \(Y\). \begin{align*} f_X(x) &= \int\limits^1_0\! 1\, dy = 1, \quad\text{for}\ 0\leq x\leq 1 \\ f_Y(y) &= \int\limits^1_0\! 1\, dx = 1, \quad\text{for}\ 0\leq y\leq 1 \end{align*} Note that both \(X\) and \(Y\) are individually uniform random variables, each over the interval \([0,1]\). This should not be too surprising. Given that the particle's location was uniformly distributed over the unit square, we should expect that the coordinates would also be uniformly distributed over the unit intervals.
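The probability of the region \(A\) can also be checked numerically; a sketch using scipy, where the integration limits encode the triangle \(x - y > 0.5\) inside the unit square:

```python
from scipy import integrate

# P(X - Y > 0.5) for (X, Y) uniform on the unit square:
# the region is the triangle with y in [0, 0.5] and x in [y + 0.5, 1]
prob, _ = integrate.dblquad(lambda x, y: 1.0,   # density f(x, y) = 1
                            0, 0.5,             # outer: y from 0 to 0.5
                            lambda y: y + 0.5,  # inner: x from y + 0.5 ...
                            lambda y: 1.0)      # ... to 1
print(round(prob, 6))  # 0.125
```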
Example \(\PageIndex{2}\):
At a gas station, gasoline is stocked in a bulk tank each week. Let random variable \(X\) denote the proportion of the tank's capacity that is
stocked in a given week, and let \(Y\) denote the proportion of the tank's capacity that is sold in the same week. Note that the gas station cannot sell more than what was stocked in a given week, which implies that the value of \(Y\) cannot exceed the value of \(X\). A possible joint density function of \(X\) and \(Y\) is given by $$f(x,y) = \left\{\begin{array}{l l} 3x, & \text{if}\ 0\leq y \leq x\leq 1 \\ 0, & \text{otherwise.} \end{array}\right.\notag$$ We find the joint cdf of \(X\) and \(Y\) at \((1/2, 1/3)\), splitting the region of integration at \(x = 1/3\) since the constraint \(y \leq x\) binds only when \(x \leq 1/3\): $$F(1/2, 1/3) = P(X\leq 1/2\ \text{and}\ Y\leq 1/3) = \int^{1/3}_0\!\int^x_0\! 3x\, dy\, dx + \int^{1/2}_{1/3}\!\int^{1/3}_0\! 3x\, dy\, dx = \frac{1}{27} + \frac{5}{72} = \frac{23}{216}.\notag$$ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Independent Random Variables
In some cases, the probability distribution of one random variable will not be affected by the distribution of another random variable defined on the same sample space. In those cases, the joint distribution functions have a very simple form, and we refer to the random variables as independent.
Definition\(\PageIndex{1}\)
Random variables \(X_1, X_2, \ldots, X_n\) are
independent if the joint cdf factors into a product of the marginal cdf's: $$F(x_1, x_2, \ldots, x_n) = F_{X_1}(x_1)\cdot F_{X_2}(x_2) \cdots F_{X_n}(x_n).\notag$$ It is equivalent to check that this condition holds for the frequency functions in the discrete setting and for the density functions in the continuous setting.
Example \(\PageIndex{1}\):
Consider the discrete random variables defined in the first example of the section on joint distributions of discrete random variables. \(X\) and \(Y\) are independent if
$$p(x,y) = p_X(x)\cdot p_Y(y),\notag$$ for all pairs \((x,y)\). Note that, for \((0,-1)\), we have $$p(0,-1) = \frac{1}{8},\ \ p_X(0) = \frac{1}{8},\ \ p_Y(-1) = \frac{1}{8} \quad\Rightarrow\quad p(0,-1) \neq p_X(0)\cdot p_Y(-1).\notag$$ Thus, \(X\) and \(Y\) are not independent, or in other words, \(X\) and \(Y\) are dependent. This should make sense given the definition of \(X\) and \(Y\). The winnings earned depend on the number of heads obtained. So the probabilities assigned to the values of \(Y\) will be affected by the values of \(X\).
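This check can be run over the whole table at once; a small numpy sketch (the array hard-codes the joint table from the discrete example):

```python
import numpy as np

# joint table: rows y = -1, 1, 2, 3; columns x = 0, 1, 2, 3
p = np.array([
    [1/8, 0,   0,   0  ],
    [0,   1/8, 2/8, 1/8],
    [0,   1/8, 1/8, 0  ],
    [0,   1/8, 0,   0  ],
])
p_X, p_Y = p.sum(axis=0), p.sum(axis=1)

# independence requires p(x, y) = p_X(x) * p_Y(y) in every cell
print(np.allclose(p, np.outer(p_Y, p_X)))  # False, so X and Y are dependent
```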
Example \(\PageIndex{2}\):
Consider the continuous random variables defined in the first example of the section on joint distributions of continuous random variables. \(X\) and \(Y\) are independent if
$$f(x,y) = f_X(x)\cdot f_Y(y),\notag$$ for all \((x,y)\in\mathbb{R}^2\). Note that, for \((x,y)\) in the unit square, we have $$f(x,y) = 1,\ \ f_X(x) = 1,\ \ f_Y(y) = 1 \quad\Rightarrow\quad f(x,y) = f_X(x)\cdot f_Y(y).\notag$$ Outside of the unit square, \(f(x,y) = 0\) and at least one of the marginal density functions, \(f_X\) or \(f_Y\), will equal 0 as well. Thus, \(X\) and \(Y\) are independent. |
I have to show a Gaussian reduction in an assignment, and I was wondering what the most space-efficient and neatest way of expressing this is. I thought about using
\begin{smallmatrix} (it's only a 2x2 matrix) and arrays, but I'm wondering if perhaps there is a better way. Any suggestions?
The gauss package is specifically designed for this purpose and allows for typesetting even large matrices and the associated Gaussian elimination (or reduction).
Here is a fairly elementary example of Gaussian (or Gauss-Jordan) elimination on a 2x2 matrix:
\documentclass{article}
\usepackage{gauss}% http://ctan.org/pkg/gauss
\usepackage{amsmath}% http://ctan.org/pkg/amsmath
\begin{document}
\begin{align*}
  & \begin{gmatrix}[p]
      1 & 2 \\
      3 & 4
      \rowops
      \add[-3]{0}{1}
    \end{gmatrix} \\
  \Rightarrow & \begin{gmatrix}[p]
      1 & 2 \\
      0 & -6
      \rowops
      \mult{1}{\scriptstyle\cdot-\frac{1}{6}}
    \end{gmatrix} \\
  \Rightarrow & \begin{gmatrix}[p]
      1 & 2 \\
      0 & 1
      \rowops
      \add[-2]{1}{0}
    \end{gmatrix} \\
  \Rightarrow & \begin{gmatrix}[p]
      1 & 0 \\
      0 & 1
    \end{gmatrix}
\end{align*}
\end{document}
Matrices using the gauss package are typeset within a gmatrix environment (an optional parameter specifies the delimiters), while elementary row operations are specified using \mult, \add, or \swap. See the gauss documentation for more information and refinements.
amsmath provides the align* environment used above, although it was not strictly necessary; a regular array would also have worked.
Read http://www.dante.de/DTK/Ausgaben/komoedie20023.pdf, pages 34--40, for an introduction to the gauss package. The examples should be self-explanatory.
$\def\L{\mathfrak{L}}\def\Prof{\mathsf{Prof}}\require{AMScd}$ The notation for this question is the same of this post: in particular
Isbell duality $\text{Spec}\dashv {\cal O} : {\cal V}^{A^°} \leftrightarrows \big({\cal V}^A\big)^°$ allows us to define the functor $$ \L : \Prof(A,B) \to \Prof(B,A) $$ ($\cal V$ is a cosmos in which $A,B$ are enriched categories) sending $K : A^° \times B \to \cal V$ into $\L(K) : (b,a)\mapsto {\cal V}^{A^°}(K(-,b), \hom(-,a)) = {\cal O}(K_b)_a$.
If $Q : A' \to A$ is a profunctor, there is a diagram $$ \begin{CD} \Prof(A,B) @>\L>> \Prof(B,A) \\ @VQ^*VV @AAQ_*A\\ \Prof(A',B) @>>\L> \Prof(B, A') \end{CD} $$ Of course, since $Q$ is arbitrary, there is no hope that this commutes; but is there a canonical 2-cell filling it? It is rather easy to find that there is a 2-cell $K\diamond \L(K) \Rightarrow \hom_B$ (let's call it $\epsilon$ since it behaves like an "evaluation").
More in detail, we can consider the natural transformation $\epsilon : K \diamond \L(K) \Rightarrow \hom_B$ for a given $K$, obtained from the mate $$ K(b',a)\otimes \textsf{Nat}(K(-,a), y_b) \to \hom(b',b) $$ of the projection $$ \textsf{Nat}(K(-,a), y_b) \to {\cal V}(K(b',a),\hom(b',b)). $$ Since this is a cowedge, $\epsilon$ is now obtained as the map $K\diamond \L(K)(b',b)=\int^a K(b',a)\otimes \textsf{Nat}(K(-,a), y_b) \to\hom(b',b)$.
Nevertheless, $K$ is rarely a left adjoint for $\L(K)$ (a sufficient condition is that $K$ is representable, so that $K=\hom(k,1)$ and $\L K = \hom(1,k)$). Is this condition also necessary? Does $\epsilon$ satisfy a universal property?
Newton's second law tells us that the acceleration of an object is related to the force acting on it by the equation:
$$ \frac{d^2x}{dt^2} = \frac{F}{m} \tag{1} $$
Note that if the force, $F$, is zero then equation (1) reduces to:
$$ \frac{d^2x}{dt^2} = 0 $$
i.e. the acceleration is zero so the velocity of the object doesn't change. If the force is zero a stationary object will remain stationary.
The corresponding equation in general relativity is the geodesic equation:
$$ {d^2 x^\mu \over d\tau^2} = -\Gamma^\mu_{\alpha\beta} {dx^\alpha \over d\tau} {dx^\beta \over d\tau} \tag{2} $$
This is a lot more complicated than equation (1), but the left side is basically just an acceleration and the right hand side can be thought of as the gravitational force. The quantity $\Gamma^\mu_{\alpha\beta}$ describes the spacetime curvature in a complicated way that only us nerds need to worry about. What's interesting about this equation is that the right hand side contains the terms $dx^\alpha/d\tau$ and this is sort of a velocity. So if all the $dx^\alpha/d\tau$ terms, i.e. all the velocities, were zero equation (2) would simplify to:
$$ {d^2 x^\mu \over d\tau^2} = 0 $$
and just as for our original Newton equation this would tell us that the velocity of the object is constant i.e. there is no gravitational force - a stationary object wouldn't feel a force, just as you say.
The trouble is that in GR we consider motion in spacetime not just space, and while you can stop moving in space you can't stop moving in time. All of us move through time (normally at one second per second) no matter how much we might wish it otherwise. That means there is no such thing as a stationary object, and no way to escape gravity.
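To make this slightly more concrete: for an object momentarily at rest, the only term that survives on the right of equation (2) is the one with two time derivatives. In the Schwarzschild geometry, working in units where $c=1$ (a sketch, not a full derivation), the radial component becomes

$$ \frac{d^2 r}{d\tau^2} = -\Gamma^r_{tt}\left(\frac{dt}{d\tau}\right)^2, \qquad \Gamma^r_{tt} = \frac{GM}{r^2}\left(1 - \frac{2GM}{r}\right), $$

and far from the black hole $dt/d\tau \approx 1$, so $d^2r/d\tau^2 \approx -GM/r^2$: the familiar Newtonian acceleration, generated entirely by the object's motion through time.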
You are right to be cautious about the rubber sheet analogies for spacetime curvature. While these have various shortcomings, a major one is that they don't portray the curvature in the time dimension.
Response to comment:
Using the geodesic equation to show how movement in time causes an object to move in a gravitational field is straightforward, but I doubt the maths would be very illuminating. So let me attempt an intuitive explanation of why it happens. Nota bene that like any intuitive explanation this will be simplified and potentially misleading if you pursue it too far. Still, let's give it a go.
Let's put you at some distance from a black hole and let go. And just to make it more fun we'll put an opaque shell around you so you can't see out. The key point is that you will feel no force (you'll be in free fall just like the astronauts in the International Space Station) so you're going to assume space is flat. You would draw a spacetime diagram to describe your motion that looks like this:
We're approximating you as a green dot, and in your coordinates you stay still in space but move in time, so you just move up the time axis.
Now suppose I'm watching you from well away from the black hole. Because spacetime is curved, my $r$ and $t$ axes won't match yours. I've drawn my $r$ and $t$ axes as curves, but don't take the shape of the curve too literally as I have just sketched any old curve:
Now let's superimpose my axes and your axes:
Because of the curvature your time axis and my time axis won't match. That means if you're moving along your time axis you're not moving along my time axis. From my perspective you've moved off the time axis by a distance shown by the red arrow, and that means you must have moved in space.
So even though, as far as you're concerned you're just moving in time, as far as I'm concerned you've moved in space as well. And that's why a stationary object falls. |