Is there an efficient algorithm for the following problem?

Given: an $m$-vector $b \in \{0,1,2\}^m$ and an $m \times 2m$ matrix $A$, with the promise that for every $b' \in \{0,1,2\}^m$ there exists $x' \in \{0,1\}^{2m}$ such that $Ax' = b' \pmod 3$.

Goal: find $x \in \{0,1\}^{2m}$ such that $Ax = b \pmod 3$.

In more detail: I have a system of equations $Ax \equiv b \pmod 3$. Each entry of $A$ and each entry of $b$ is 0, 1 or 2 ("ternary"). We're looking for a solution $x$ where each entry is 0 or 1 ("binary"). For example:

p q r | b
1 0 1 | 2
1 2 1 | 1

has one solution: p=1, q=1, r=1.

For square matrices $A$, this often has no solution. For example

p q r | b
1 0 0 | 1
0 1 0 | 1
0 0 1 | 0

does have a solution (p=1, q=1, r=0), but

p q r | b
1 0 0 | 1
0 1 0 | 2
0 0 1 | 0

does not have a solution, since q can only be 0 or 1 (as $x$ must be binary).

I found from practice that practical/useful NP-complete problem instances can often be written in this way with an $m \times n$ matrix with $m \ll n \ll 2m$ (often $n$ is about $1.5m$), but in this question I'm only interested in cases where $n = 2m$ and the equation does have a solution. This is at least the case when something stronger than linear independence holds. I think the right condition is that the matrix $A$ is such that $Ax \equiv b \pmod 3$ has a binary solution for every ternary vector $b$. For example, the matrix $A$ with $n = 2m$:

1 1 0 0 0 0 | ?
0 0 1 1 0 0 | ?
0 0 0 0 1 1 | ?

always has a binary solution. Unfortunately, even if the rows are linearly independent, row operations can generally not achieve such a matrix (with 2 pivot columns in each row) except for trivial problems. Take for example

1 1 0 0 0 0 | ?
0 0 1 1 0 0 | ?
2 0 0 0 1 1 | ?

Then there is a binary solution for every $b$, but the question now is: is there a quick algorithm so that we can find it for any given $A$ and $b$ (where $A$ meets the requirement that a solution exists for every $b'$)?
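To make the problem statement concrete, here is a brute-force checker (exponential in $n$, so tiny instances only; the function name is mine and this is an illustration of the problem, not the efficient algorithm being asked for):

```python
from itertools import product

def binary_solutions(A, b):
    """Return all binary vectors x with A x == b (mod 3).

    A is a list of m rows, each a list of entries in {0,1,2};
    b is a list of m entries in {0,1,2}.  Exhaustive search over
    all 2^n binary vectors: for illustration only.
    """
    n = len(A[0])
    sols = []
    for x in product((0, 1), repeat=n):
        if all(sum(a * xi for a, xi in zip(row, x)) % 3 == bi
               for row, bi in zip(A, b)):
            sols.append(x)
    return sols

# The 2x3 example from the question: unique solution p=q=r=1.
A = [[1, 0, 1],
     [1, 2, 1]]
print(binary_solutions(A, [2, 1]))  # [(1, 1, 1)]
```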
Extendible

A cardinal $\kappa$ is $\eta$-extendible for an ordinal $\eta$ if and only if there is an elementary embedding $j:V_{\kappa+\eta}\to V_\theta$, with critical point $\kappa$, for some ordinal $\theta$. The cardinal $\kappa$ is extendible if and only if it is $\eta$-extendible for every ordinal $\eta$. Equivalently, for every ordinal $\alpha$ there is a nontrivial elementary embedding $j:V_{\kappa+\alpha+1}\to V_{j(\kappa)+j(\alpha)+1}$ with critical point $\kappa$.

Alternative definition

Given cardinals $\lambda$ and $\theta$, a cardinal $\kappa\leq\lambda,\theta$ is jointly $\lambda$-supercompact and $\theta$-superstrong if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $\mathrm{crit}(j)=\kappa$, $\lambda<j(\kappa)$, $M^\lambda\subseteq M$ and $V_{j(\theta)}\subseteq M$. That is, a single embedding witnesses both $\lambda$-supercompactness and (a strengthening of) superstrongness of $\kappa$. The least supercompact cardinal is never jointly $\lambda$-supercompact and $\theta$-superstrong for any $\lambda,\theta\geq\kappa$. A cardinal is extendible if and only if it is jointly supercompact and $\kappa$-superstrong, i.e. for every $\lambda\geq\kappa$ it is jointly $\lambda$-supercompact and $\kappa$-superstrong.
[1] One can show that extendibility of $\kappa$ is in fact equivalent to "for all $\lambda,\theta\geq\kappa$, $\kappa$ is jointly $\lambda$-supercompact and $\theta$-superstrong". A similar characterization of $C^{(n)}$-extendible cardinals exists. The ultrahuge cardinals are defined in a way very similar to this, and one can (very informally) say that "ultrahuge cardinals are to superhuges what extendibles are to supercompacts". These cardinals are superhuge (and stationary limits of superhuges) and strictly below almost 2-huges in consistency strength.

To be expanded: Extendibility Laver Functions, $C^{(n)}$-extendibility

Relation to Other Large Cardinals

Extendible cardinals are related to various kinds of measurable cardinals.

Supercompactness

Extendibility is connected in strength with supercompactness. Every extendible cardinal is supercompact, since from the embeddings $j:V_\lambda\to V_\theta$ we may extract the induced supercompactness measures $X\in\mu\iff j''\delta\in j(X)$ for $X\subset \mathcal{P}_\kappa(\delta)$, provided that $j(\kappa)\gt\delta$ and $\mathcal{P}_\kappa(\delta)\subset V_\lambda$, which one can arrange. On the other hand, if $\kappa$ is $\theta$-supercompact, witnessed by $j:V\to M$, then $\kappa$ is $\delta$-extendible inside $M$, provided $\beth_\delta\leq\theta$, since the restricted elementary embedding $j\upharpoonright V_\delta:V_\delta\to j(V_\delta)=M_{j(\delta)}$ has size at most $\theta$ and is therefore in $M$, witnessing $\delta$-extendibility there. Although extendibility itself is stronger and larger than supercompactness, $\eta$-supercompactness is not necessarily too much weaker than $\eta$-extendibility. For example, if a cardinal $\kappa$ is $\beth_{\eta}(\kappa)$-supercompact (in this case, the same as $\beth_{\kappa+\eta}$-supercompact) for some $\eta<\kappa$, then there is a normal measure $U$ over $\kappa$ such that $\{\lambda<\kappa:\lambda\text{ is }\eta\text{-extendible}\}\in U$.
Strong Compactness

Interestingly, extendibility is also related to strong compactness. A cardinal $\kappa$ is strongly compact iff the infinitary language $\mathcal{L}_{\kappa,\kappa}$ has the $\kappa$-compactness property. A cardinal $\kappa$ is extendible iff the infinitary language $\mathcal{L}_{\kappa,\kappa}^n$ (the infinitary language but with $(n+1)$-th order logic) has the $\kappa$-compactness property for every natural number $n$. [2] Given a logic $\mathcal{L}$, the minimum cardinal $\kappa$ such that $\mathcal{L}$ satisfies the $\kappa$-compactness theorem is called the strong compactness cardinal of $\mathcal{L}$. The strong compactness cardinal of $\omega$-th order finitary logic (that is, the union of all $\mathcal{L}_{\omega,\omega}^n$ for natural $n$) is the least extendible cardinal.

Variants

$C^{(n)}$-extendible cardinals

(Information in this subsection from [3])

A cardinal $κ$ is called $C^{(n)}$-extendible if for all $λ > κ$ it is $λ$-$C^{(n)}$-extendible, i.e. if there is an ordinal $µ$ and an elementary embedding $j : V_λ → V_µ$, with $\mathrm{crit}(j) = κ$, $j(κ) > λ$ and $j(κ) ∈ C^{(n)}$. For $λ ∈ C^{(n)}$, a cardinal $κ$ is $λ$-$C^{(n)+}$-extendible iff it is $λ$-$C^{(n)}$-extendible, witnessed by some $j : V_λ → V_µ$ which (besides $j(κ) > λ$ and $j(κ) ∈ C^{(n)}$) satisfies $µ ∈ C^{(n)}$. $κ$ is $C^{(n)+}$-extendible iff it is $λ$-$C^{(n)+}$-extendible for every $λ > κ$ such that $λ ∈ C^{(n)}$.

Properties:
- There exists a $C^{(n)}$-extendible cardinal if and only if there exists a $C^{(n)+}$-extendible cardinal.
- Every extendible cardinal is $C^{(1)}$-extendible and $C^{(1)+}$-extendible.
- If $κ$ is $C^{(n)}$-extendible, then $κ ∈ C^{(n+2)}$.
- For every $n ≥ 1$, if $κ$ is $C^{(n)}$-extendible and $κ+1$-$C^{(n+1)}$-extendible, then the set of $C^{(n)}$-extendible cardinals is unbounded below $κ$. Hence, the first $C^{(n)}$-extendible cardinal $κ$, if it exists, is not $κ+1$-$C^{(n+1)}$-extendible.
- In particular, the first extendible cardinal $κ$ is not $κ+1$-$C^{(2)}$-extendible.
- For every $n$, if there exists a $C^{(n+2)}$-extendible cardinal, then there exists a proper class of $C^{(n)}$-extendible cardinals.
- The existence of a $C^{(n+1)}$-extendible cardinal $κ$ (for $n ≥ 1$) does not imply the existence of a $C^{(n)}$-extendible cardinal greater than $κ$. For if $λ$ is such a cardinal, then $V_λ \models$ "$κ$ is $C^{(n+1)}$-extendible".
- If $κ$ is $κ+1$-$C^{(n)}$-extendible and belongs to $C^{(n)}$, then $κ$ is $C^{(n)}$-superstrong and there is a $κ$-complete normal ultrafilter $U$ over $κ$ such that the set of $C^{(n)}$-superstrong cardinals smaller than $κ$ belongs to $U$.
- For $n ≥ 1$, the following are equivalent (where $VP$ is Vopěnka's principle):
  - $VP(Π_{n+1})$
  - $VP(κ, \mathbf{Σ_{n+2}})$ for some $κ$
  - There exists a $C^{(n)}$-extendible cardinal.
- "For every $n$ there exists a $C^{(n)}$-extendible cardinal." is equivalent to the full Vopěnka's principle.
- Assuming $\mathrm{I3}(κ, δ)$, if $δ$ is a limit cardinal (instead of a successor of a limit cardinal; Kunen's Theorem excludes other cases), it is equal to $\sup\{j^m(κ) : m ∈ ω\}$ where $j$ is the elementary embedding. Then $κ$ and $j^m(κ)$ are $C^{(n)}$-extendible (inter alia) in $V_δ$, for all $n$ and $m$.

$(\Sigma_n,\eta)$-extendible cardinals

There are some variants of extendible cardinals because of the interesting jump in consistency strength from $0$-extendible cardinals to $1$-extendibles. These variants specify the elementarity of the embedding. A cardinal $\kappa$ is $(\Sigma_n,\eta)$-extendible if there is a $\Sigma_n$-elementary embedding $j:V_{\kappa+\eta}\to V_\theta$ with critical point $\kappa$, for some ordinal $\theta$. These cardinals were introduced by Bagaria, Hamkins, Tsaprounis and Usuba [4].

$\Sigma_n$-extendible cardinals

The special case of $\eta=0$ leads to a much weaker notion.
Specifically, a cardinal $\kappa$ is $\Sigma_n$-extendible if it is $(\Sigma_n,0)$-extendible, or more simply, if $V_\kappa\prec_{\Sigma_n} V_\theta$ for some ordinal $\theta$. Note that this does not necessarily imply that $\kappa$ is inaccessible, and indeed the existence of $\Sigma_n$-extendible cardinals is provable in ZFC via the reflection theorem. For example, every $\Sigma_n$-correct cardinal is $\Sigma_n$-extendible, since from $V_\kappa\prec_{\Sigma_n} V$ and $V_\lambda\prec_{\Sigma_n} V$, where $\kappa\lt\lambda$, it follows that $V_\kappa\prec_{\Sigma_n} V_\lambda$. So in fact there is a closed unbounded class of $\Sigma_n$-extendible cardinals. Similarly, every Mahlo cardinal $\kappa$ has a stationary set of inaccessible $\Sigma_n$-extendible cardinals $\gamma<\kappa$. $\Sigma_3$-extendible cardinals cannot be Laver indestructible. Therefore $\Sigma_3$-correct, $\Sigma_3$-reflecting, $0$-extendible, (pseudo-)uplifting, weakly superstrong, strongly uplifting, superstrong, extendible, (almost) huge or rank-into-rank cardinals also cannot. [4]

Virtually extendible cardinals

(Information in this subsection from [5])

A cardinal $κ$ is virtually extendible iff for every $α > κ$, in a set-forcing extension there is an elementary embedding $j : V_α → V_β$ with $\mathrm{crit}(j) = κ$ and $j(κ) > α$. $C^{(n)}$-virtually extendible cardinals additionally require that $j(κ)$ has property $C^{(n)}$ (i.e. $\Sigma_n$-correctness). If $κ$ is virtually Shelah for supercompactness or 2-iterable, then $V_κ$ is a model of proper class many virtually $C^{(n)}$-extendible cardinals for every $n < ω$. If $κ$ is virtually huge*, then $V_κ$ is a model of proper class many virtually extendible cardinals.
(Information in this subsection from [6])

Definition: A cardinal $κ$ is $n$-remarkable, for $n > 0$, iff for every $η > κ$ in $C^{(n)}$, there is $α < κ$ also in $C^{(n)}$ such that in $V^{\mathrm{Coll}(ω, < κ)}$ there is an elementary embedding $j : V_α → V_η$ with $j(\mathrm{crit}(j)) = κ$. A cardinal is completely remarkable iff it is $n$-remarkable for all $n > 0$. $1$-remarkability is equivalent to remarkability. A cardinal is virtually $C^{(n)}$-extendible iff it is $n+1$-remarkable (virtually extendible cardinals are virtually $C^{(1)}$-extendible).

Results:
- Every $n$-remarkable cardinal is in $C^{(n+1)}$.
- Every $n+1$-remarkable cardinal is a limit of $n$-remarkable cardinals.
- Completely remarkable cardinals can exist in $L$, and the consistency of a completely remarkable cardinal follows from a $2$-iterable cardinal.

In relation to Generic Vopěnka's Principle:
- The following are equiconsistent:
  - $gVP(Π_n)$
  - $gVP(κ, \mathbf{Σ_{n+1}})$ for some $κ$
  - There is an $n$-remarkable cardinal.
- The following are equiconsistent:
  - $gVP(\mathbf{Π_n})$
  - $gVP(κ, \mathbf{Σ_{n+1}})$ for a proper class of $κ$
  - There is a proper class of $n$-remarkable cardinals.
- The following are equiconsistent ......

In set-theoretic geology

This article is a stub. Please help us to improve Cantor's Attic by adding information.

References

1. Usuba, Toshimichi. Extendible cardinals and the mantle. Archive for Mathematical Logic 58(1-2):71-75, 2019.
2. Kanamori, Akihiro. The Higher Infinite. Large cardinals in set theory from their beginnings. Second edition, Springer-Verlag, Berlin, 2009.
3. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3-4):213-240, 2012.
4. Bagaria, Joan; Hamkins, Joel David; Tsaprounis, Konstantinos; Usuba, Toshimichi. Superstrong and other large cardinals are never Laver indestructible. Archive for Mathematical Logic 55(1-2):19-35, 2013.
5. Gitman, Victoria; Schindler, Ralf.
Virtual large cardinals.
6. Bagaria, Joan; Gitman, Victoria; Schindler, Ralf. Generic Vopěnka's Principle, remarkable cardinals, and the weak Proper Forcing Axiom. Archive for Mathematical Logic 56(1-2):1-20, 2017.
The amsmath package provides a handful of options for displaying equations. You can choose the layout that best suits your document, even if the equations are really long, or if you have to include several equations in the same line.

The standard LaTeX tools for equations may lack some flexibility, causing overlapping or even trimming part of the equation when it's too long. We can overcome these difficulties with amsmath. Let's check an example:

\begin{equation}
\label{eq1}
\begin{split}
A & = \frac{\pi r^2}{2} \\
  & = \frac{1}{2} \pi r^2
\end{split}
\end{equation}

You have to wrap your equation in the equation environment if you want it to be numbered; use equation* (with an asterisk) otherwise. Inside the equation environment, use the split environment to split the equation into smaller pieces; these smaller pieces will be aligned accordingly. The double backslash works as a newline character. Use the ampersand character & to set the points where the equations are vertically aligned.

This is a simple step; if you use LaTeX frequently you surely already know this. In the preamble of the document include the code:

\usepackage{amsmath}

To display a single equation, as mentioned in the introduction, you have to use the equation* or equation environment, depending on whether you want the equation to be numbered or not. Additionally, you might add a label for future reference within the document.

\begin{equation}
\label{eu_eqn}
e^{\pi i} + 1 = 0
\end{equation}

The beautiful equation \ref{eu_eqn} is known as the Euler equation.

For equations longer than a line, use the multline environment. Insert a double backslash to set a point for the equation to be broken. The first part will be aligned to the left and the second part will be displayed on the next line and aligned to the right. Again, the use of an asterisk * in the environment name determines whether the equation is numbered or not.
\begin{multline*}
p(x) = 3x^6 + 14x^5y + 590x^4y^2 + 19x^3y^3\\
- 12x^2y^4 - 12xy^5 + 2y^6 - a^3b^3
\end{multline*}

split is very similar to multline. Use the split environment to break an equation and to align it in columns, just as if the parts of the equation were in a table. This environment must be used inside an equation environment. For an example, check the introduction of this document.

If there are several equations that you need to align vertically, the align environment will do it. Usually the relation symbols (>, < and =) are the ones aligned for a nice-looking document. As mentioned before, the ampersand character & determines where the equations align. Let's check a more complex example:

\begin{align*}
x&=y & w &=z & a&=b+c\\
2x&=-y & 3w&=\frac{1}{2}z & a&=b\\
-4 + 5x&=2+y & w+2&=-1+w & ab&=cb
\end{align*}

Here we arrange the equations in three columns. LaTeX assumes that each equation consists of two parts separated by an &; also that each equation is separated from the one before by an &. Again, use * to toggle the equation numbering. When numbering is allowed, you can label each row individually.

If you just need to display a set of consecutive equations, centered and with no alignment whatsoever, use the gather environment. The asterisk trick to set/unset the numbering of equations also works here. For more information see
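For instance, the consecutive-equations layout just described can be sketched as follows (my own example, not taken from the amsmath documentation):

\begin{gather*}
2x - 5y = 8 \\
3x^2 + 9y = 3a + c
\end{gather*}

Each equation is centered on its own line, and no & alignment markers are needed inside gather.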
Review Comment: This manuscript was submitted as 'full paper' and should be reviewed along the usual dimensions for research contributions, which include (1) originality, (2) significance of the results, and (3) quality of writing.

In this paper, the authors present a knowledge graph embedding approach based on the idea that each entity (and relation) should be embedded based on what is called the eigenstate and mimesis. The eigenstate represents the intrinsic aspects of the entity, while the mimesis is representative of what the authors call exogenous properties. The paper is well written, despite some minor errors in grammar, which could be corrected by intensive proofreading. I do think there is some novelty in the paper, as the concept seems somewhat different from earlier approaches. However, as I will elaborate below, a stronger argumentation is needed to convince a reader that the approach chosen is a) solving the right problems and b) actually solving them. Further, there is an issue regarding the significance of the results. Many of the results obtained in this work are dominated by approaches which predate this submission. Now, there are some results in which this paper still brings merit, and it might well be that the approach in this paper scales better to larger knowledge graphs than the currently dominating approaches. Hence, I advise a major revision of this paper in which the authors would investigate the behavior of their approach on larger graphs.

I will first go into some details of major issues I observed in the paper. Then I will list smaller issues below.

1. As mentioned, the results in this paper are often dominated by other approaches. Exemplary is TransG [1], which dominates many of the results (others include KG2E, PTransE). One might argue that there would not be a need to compare to that model as it is a generative model and not translation-based. In my view, that argument is, however, void.
To me, it seems that TransG also has some of the same ideas in the background as the proposed approach, namely that having a translation which is identical for all the triples of the same relation is not necessarily a good idea. Therefore, in the revised version, an in-depth comparison would be necessary to work out the relation between these approaches. The same goes for the already included related work. There is a short discussion establishing that TransE and TransR are special cases (parameterizations) of GTrans, but the relation to other TransX models is not elaborated.

2. The main two reasons put forward for why the approach is needed are that 1) current approaches underestimate the complexity of entities, and 2) a statement saying that something is wrong with the function which is typically optimized. I am not sure I completely get what the problem is they want to point out, but I understand it to be that the relation vector (or matrix in some approaches) of a relation is not the same for all head-tail pairs. Now, the first argument is pretty vague and in my opinion not sufficiently supported in the paper. The second one I agree with, and the discussion could be augmented with some examples. All in all, there is some support for a new approach, but it should be argued more clearly. Then, when presenting the approach, it would be good to indicate explicitly how the model solves the indicated issues. Especially, how it does so better than other approaches. For example, while reading, I started thinking that using normal TransE with double the vector length might be as good as having two vectors. Or that having a translation matrix would cover anything one can do with two translation vectors. Some of this becomes clearer in section 3, but a reader has to find this out independently, instead of a comparison being made explicitly.

3. I might be getting the wrong impression, but it seems to me that the model is presented as more involved than its actual strength warrants.
Continuing from equation (6), I can do a rewrite as follows:

h = \alpha h_e + \beta (r_a h_a^T h_e)
h = \alpha h_e + \beta (r_a h_a^T) h_e
h = (\alpha + \beta (r_a h_a^T)) h_e

Hence, it seems like the whole first term, related to the 'abstract' vectors, only results in a linear scaling of the 'eigenstate'. This means that the whole abstract part is not able to change specific parts (dimensions) of the eigenstate, but only scale the whole vector. It seems to me that this diminishes the power of the model greatly.

4. There is a comparison of the complexity of the approaches; however, a timing comparison is missing. I agree that the complexity analysis is essential, mainly because the graphs currently used are rather small. From experiments performed as part of my own work, I have observed that some of the approaches cited in this paper will indeed take a very long time to complete, or not complete within a reasonable time on large graphs. Further, it is known that a (worst case) complexity analysis does not always give a complete view of the actual time needed to run the algorithm on real test cases. Hence, I would suggest also providing a comparison between the approaches regarding the time it takes to build the model on a given (large) dataset.

Besides these major issues, there are several other parts to pay attention to in the revised work.

* Besides the works mentioned above, there are also other works aiming to embed knowledge graphs. One branch of approaches are the ones related to (biased) RDF2Vec, which also scales to much larger KGs.
* When introducing NTN, mention explicitly where the model was first conceived.
* The sensitivity to the alpha and beta parameters could be investigated explicitly.
* In section 3.0, when introducing \delta', provide a forward reference to where it gets defined.
* In the text after eq. (9), it is unclear what the i-th entity e refers to. Is it only heads, tails or both?
* In section 3.1, you write "Moreover, SED has lower time complexity and can be applied in large-scale knowledge representation." It seems to me that you should try both measures and then say whether they are feasible. At the moment, it seems like only one was tried. Dismissing one of them for reasons of theoretical time complexity is not sufficient.
* I see the argumentation for changing 1/\sigma to W_r as unnecessary. Writing it as W_r is just easier notation. The fact that this also helped you to do some (preliminary?) optimization is not important.
* In equation 19, there are some prime signs ($'$) missing in the second summation. Besides, it would be fair to mention the source of inspiration for this cost function explicitly.
* I am confused by the #parameters column of table 1. It seems you mean to indicate some sort of space complexity, but I am not too sure.
* You mention "elegant performance on large-scale KG". I like the lyrical language use, but can only accept it in case you support this with experiments, as mentioned above.
* Using TransE for initializing the vectors for the proposed embedding algorithms seems like a good idea indeed. It would be useful if you could report how much training time this saved overall.
* Your experiments cover the datasets which are often used for performing embeddings. However, for some experiments, some datasets are not used without a clear explanation (e.g. why no WN18 in table 3).
* As already mentioned, the graphs currently used are not that large at all. So, the statement "GTrans is experimentally demonstrated to be applicable to large-scale knowledge graphs." must be toned down as long as no large KGs (e.g. Wikidata and DBpedia) have been embedded using this approach.
* In 4.4 (Distribution of relation space): I see why this technique gives an indication of how even the distribution of the relation space is. What I do not understand, however, is why this would immediately mean that the embedding is better.
Perhaps there are specific interrelations which are only representable by a non-even spreading. Moreover, comparing the error measurements of the different methods directly, as seems to be done in the last paragraph, does not make sense. This is mainly because the functions to be optimized are different, and hence their errors cannot be compared.

* Language issues: There were several issues related to language in the paper. They were minor and not disturbing. Pay attention to the use of singular and plural, in particular subject-verb agreement. Besides, articles are missing in some places.

[1] Xiao, Han, Minlie Huang, and Xiaoyan Zhu. "TransG: A Generative Model for Knowledge Graph Embedding." In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, pp. 2316-2325. 2016.
Hello fellows,

As stated in Wikipedia (see the specified criteria there), L'Hôpital's rule says
$$ \lim_{x\to c}\frac{f(x)}{g(x)}=\lim_{x\to c}\frac{f'(x)}{g'(x)} $$
As
$$ \lim_{x\to c}\frac{f'(x)}{g'(x)}= \lim_{x\to c}\frac{\int f'(x)\ dx}{\int g'(x)\ dx} $$
just out of curiosity, can you integrate instead of taking a derivative? Does
$$ \lim_{x\to c}\frac{f(x)}{g(x)}= \lim_{x\to c}\frac{\int f(x)\ dx}{\int g(x)\ dx} $$
work (given the specifications in Wikipedia, only the other way around: the function must be integrable by some method, etc.)? When? Would it have any practical use? I hope this doesn't sound stupid; it just occurred to me, and I can't find the answer myself.

Edit (in response to the comments and answers): Take two functions $f$ and $g$. When is
$$ \lim_{x\to c}\frac{f(x)}{g(x)}= \lim_{x\to c}\frac{\int_x^c f(a)\ da}{\int_x^c g(a)\ da} $$
true? I'm not saying that it always works; however, it sometimes may help. Sometimes one can apply l'Hôpital's rule even when an indeterminate form isn't reached. Maybe this only works in exceptional cases. Most functions are simplified by taking their derivative, but simplification may happen by integration as well (say $\int \frac1{x^2}\ dx=-\frac1x+C$, which is simpler). In a few of those cases, integrating the functions in both numerator and denominator may simplify the limit. What do those (hypothetical) functions need to have to make it work? And even in those cases, is it ever useful? How? Why/why not?
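One sanity check on the edited identity (my own worked example): since both $\int_x^c f(a)\,da$ and $\int_x^c g(a)\,da$ tend to $0$ as $x\to c$ for continuous $f,g$, applying L'Hôpital's rule to the quotient of integrals (differentiating in $x$) gives back $\frac{-f(x)}{-g(x)}=\frac{f(x)}{g(x)}$. Concretely, with $f(a)=\sin a$, $g(a)=a$ and $c=0$:

$$ \lim_{x\to 0}\frac{\int_x^0 \sin a\ da}{\int_x^0 a\ da} = \lim_{x\to 0}\frac{\cos x - 1}{-x^2/2} = 1 = \lim_{x\to 0}\frac{\sin x}{x}, $$

so here integrating the numerator and the denominator preserves the limit, as expected when the original limit exists and $g$ keeps a fixed sign near $c$.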
The following question was asked at Math StackExchange but, having attracted some attention, didn't get solved. Problem 323 from the Mathematical Excalibur, Vol. 14, No. 2, May-Sep. 09, linked here (see page 3, where a solution is also given), reads:

$\qquad$ Prove that there are infinitely many positive integers $n$ such that $2^n+2$ is divisible by $n$.

OEIS sequence A006517 lists the 27 smallest integers $n$ with $n\mid 2^n+2$:
$$ 1, 2, 6, 66, 946, 8646, 180246, 199606, 265826, 383846, 1234806, 3757426, 9880278, 14304466, 23612226, 27052806, 43091686, 63265474, 66154726, 69410706, 81517766, 106047766, 129773526, 130520566, 149497986, 184416166, 279383126. $$

All these numbers, with the exception of $1$, are even. Indeed, Max Alekseyev has shown (see the COMMENTS section) that this continues to hold for larger terms, too: if $n\mid 2^n+2$ and $n>1$, then $n$ is even. Yet another observation is that all numbers listed above are square-free. Does this hold in general?

$\qquad$ Is it true that if $n\mid 2^n+2$, then $n$ is square-free?

As observed by the Math StackExchange user rtybase, if $p^2\mid n$, then $p$ must be a Wieferich prime. The only two Wieferich primes presently known are $1093$ and $3511$; neither of them can divide $n$, as $n\mid 2^n+2$, together with the evenness of $n$, implies that $-2$ is a quadratic residue modulo any odd prime divisor of $n$. Since there are no other Wieferich primes up to $10^{17}$, any non-squarefree $n$ with $n\mid 2^n+2$ must satisfy $n>2\cdot 10^{34}$.

The argument of rtybase can in fact be pushed a little further. Suppose that $n\mid 2^n+2$ with $n=2mp^2$, where $m$ is a positive integer and $p$ is an odd prime. Then $2^{2mp^2}\equiv -2\pmod{2mp^2}$, whence $2^{2mp}\equiv -2\pmod{p^2}$, showing that
$$ (-2)^{\frac{p-1}2}\equiv 1\pmod{p^2} $$
and, on the other hand,
$$ (-2)^{2mp-1}\equiv 1\pmod{p^2}. $$
Consequently, the order of $-2$ modulo $p^2$ is an odd divisor of $p-1$.
This leads to the following question: Are there any primes of the form $p=2^ck+1$, with $c,k\ge 1$ and $k$ odd, such that $(-2)^{k}\equiv1\pmod{p^2}$? Notice that this requirement is stronger than that in the definition of a Wieferich prime, and thus may be more tractable.
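Both the opening list of terms and this last condition can be probed numerically; here is a brute-force sketch (the helper names are mine). Note that any prime satisfying the displayed condition must in particular be a Wieferich prime, since raising $(-2)^k\equiv1\pmod{p^2}$ to the power $2^c$ gives $2^{p-1}\equiv1\pmod{p^2}$; so an empty result in a small range is expected:

```python
def divides_2n_plus_2(n: int) -> bool:
    """True iff n divides 2^n + 2, via modular exponentiation."""
    return (pow(2, n, n) + 2) % n == 0

# First terms of OEIS A006517: all even (except 1) and square-free.
print([n for n in range(1, 10_000) if divides_2n_plus_2(n)])
# [1, 2, 6, 66, 946, 8646]

def is_prime(n: int) -> bool:
    """Trial division; adequate for this small search range."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def wieferich_like(p: int) -> bool:
    """For an odd prime p = 2^c * k + 1 with k odd,
    test whether (-2)^k == 1 (mod p^2)."""
    k = p - 1
    while k % 2 == 0:  # strip the factor 2^c
        k //= 2
    return pow(-2, k, p * p) == 1

print([p for p in range(3, 1000, 2) if is_prime(p) and wieferich_like(p)])
# [] -- any hit would have to be a Wieferich prime, and the only
#       known ones below 10^17 are 1093 and 3511
```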
The Annals of Probability, Volume 40, Number 5 (2012), 2069-2105.

The functional equation of the smoothing transform

Abstract

Given a sequence $T=(T_{i})_{i\geq1}$ of nonnegative random variables, a function $f$ on the positive halfline can be transformed to $\mathbb{E}\prod_{i\geq1}f(tT_{i})$. We study the fixed points of this transform within the class of decreasing functions. By exploiting the intimate relationship with general branching processes, a full description of the set of solutions is established without the moment conditions that figure in earlier studies. Since the class of functions under consideration contains all Laplace transforms of probability distributions on $[0,\infty)$, the results provide the full description of the set of solutions to the fixed-point equation of the smoothing transform, $X\stackrel{d}{=}\sum_{i\geq1}T_{i}X_{i}$, where $\stackrel{d}{=}$ denotes equality of the corresponding laws, and $X_{1},X_{2},\ldots$ is a sequence of i.i.d. copies of $X$ independent of $T$. Further, since left-continuous survival functions are covered as well, the results also apply to the fixed-point equation $X\stackrel{d}{=}\inf\{X_{i}/T_{i} : i\geq1,T_{i}>0\}$. Moreover, we investigate the phenomenon of endogeny in the context of the smoothing transform and, thereby, solve an open problem posed by Aldous and Bandyopadhyay.
Article information

First available in Project Euclid: 8 October 2012
Permanent link: https://projecteuclid.org/euclid.aop/1349703316
Digital Object Identifier: doi:10.1214/11-AOP670
Mathematical Reviews number (MathSciNet): MR3025711
Zentralblatt MATH identifier: 1266.39022

Subjects
Primary: 39B22: Equations for real functions
Secondary: 60E05: Distributions: general theory; 60J85: Applications of branching processes; 60G42: Martingales with discrete parameter

Keywords: branching process; branching random walk; Choquet–Deny-type functional equation; endogeny; fixed point; general branching process; multiplicative martingales; smoothing transformation; stochastic fixed-point equation; Weibull distribution; weighted branching

Citation: Alsmeyer, Gerold; Biggins, J. D.; Meiners, Matthias. The functional equation of the smoothing transform. Ann. Probab. 40 (2012), no. 5, 2069-2105. doi:10.1214/11-AOP670.
Linear Lagrange Interpolating Polynomials

We will now begin to discuss various techniques of interpolation. Given a set of discrete points, we sometimes want to construct a function out of polynomials that approximates another known (or possibly unknown) function. Interpolations can be useful because the original function may not be readily integrable or nicely differentiable, while polynomials are relatively easy to integrate and differentiate. Sometimes we use interpolations simply because they're easier to work with in general computations.

The first technique of interpolation that we will look at is Linear Lagrange Polynomial Interpolation. Suppose that we have two points $(x_0, y_0)$ and $(x_1, y_1)$ where $x_0 \neq x_1$. We will define the linear Lagrange interpolating polynomial to be the straight line that passes through both of these points. Let's construct this straight line. We first note that the slope of this line will be $\frac{y_1 - y_0}{x_1 - x_0}$, and so in point-slope form we have that:

(1) $\quad P_1(x) = y_0 + \frac{y_1 - y_0}{x_1 - x_0} (x - x_0)$

If we let $L_0(x) = \frac{x - x_1}{x_0 - x_1}$ and $L_1(x) = \frac{x - x_0}{x_1 - x_0}$, then the polynomial above can be rewritten as $P_1(x) = y_0 L_0(x) + y_1 L_1(x)$. We note that indeed this function passes through the points $(x_0, y_0)$ and $(x_1, y_1)$ since $P_1(x_0) = y_0$ and $P_1(x_1) = y_1$. We formally define this polynomial below.

Definition: The Linear Lagrange Interpolating Polynomial that passes through the points $(x_0, y_0)$ and $(x_1, y_1)$ is $P_1(x) = y_0 L_0(x) + y_1 L_1(x)$ where $L_0(x) = \frac{x - x_1}{x_0 - x_1}$ and $L_1(x) = \frac{x - x_0}{x_1 - x_0}$.

Let's now look at some examples of applying linear Lagrange interpolating polynomials.

Example 1

Find the linear Lagrange interpolating polynomial, $P_1(x)$, that passes through the points $(1, 2)$ and $(3, 4)$.
The function $P_1$ can be obtained directly by substituting the points $(1, 2)$ and $(3, 4)$ into the formula above to get:

(2) $P_1(x) = 2 \cdot \frac{x - 3}{1 - 3} + 4 \cdot \frac{x - 1}{3 - 1} = -(x - 3) + 2(x - 1) = x + 1$

Example 2

Estimate the value of $\sqrt{5}$ using the linear Lagrange interpolating polynomial $P_1(x)$ that passes through the points $(1, 1)$ and $(9, 3)$ and evaluate the error of this approximation with the true value of $\sqrt{5} \approx 2.23606...$. Note that $(1, 1)$ and $(9, 3)$ are points on the function $f(x) = \sqrt{x}$.

We first set up the linear Lagrange interpolating polynomial $P_1(x)$ as follows:

(3) $P_1(x) = 1 \cdot \frac{x - 9}{1 - 9} + 3 \cdot \frac{x - 1}{9 - 1} = \frac{9 - x}{8} + \frac{3x - 3}{8} = \frac{2x + 6}{8} = \frac{x + 3}{4}$

Now our approximation of $f(5) = \sqrt{5}$ is given by $P_1(5)$:

(4) $P_1(5) = \frac{5 + 3}{4} = 2$

As we can see, our approximation is an underestimate of the true value of $\sqrt{5}$, with error $\lvert \sqrt{5} - 2 \rvert \approx 0.23607$. We only obtained one significant digit of accuracy.
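Both examples are easy to reproduce in code. The following is just a sketch of the construction above (the function name `lagrange_linear` is ours, not from the text):

```python
def lagrange_linear(x0, y0, x1, y1):
    """Return the linear Lagrange interpolating polynomial P_1 through
    (x0, y0) and (x1, y1) as a callable, built from the basis
    polynomials L_0 and L_1 defined in the text."""
    def P1(x):
        L0 = (x - x1) / (x0 - x1)
        L1 = (x - x0) / (x1 - x0)
        return y0 * L0 + y1 * L1
    return P1

# Example 1: the line through (1, 2) and (3, 4) is P_1(x) = x + 1.
P1 = lagrange_linear(1, 2, 3, 4)
print(P1(0), P1(10))  # 1.0 11.0

# Example 2: approximating sqrt(5) from the points (1, 1) and (9, 3).
P1 = lagrange_linear(1, 1, 9, 3)
print(P1(5))  # 2.0, an underestimate of sqrt(5) ≈ 2.23607
```

Note that `P1` is built purely from the basis polynomials $L_0, L_1$, so it generalizes directly to higher-degree Lagrange interpolation by adding more basis terms.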
We now present several multiplicative number theoretic functions which will play a crucial role in many number theoretic results. We start by discussing the Euler phi-function, which was defined in an earlier chapter. We then define the sum-of-divisors function and the number-of-divisors function along with their properties.

The Euler \(\phi\)-Function

As defined earlier, the Euler \(\phi\)-function counts the number of integers smaller than and relatively prime to a given integer. We first calculate the value of the \(\phi\)-function at primes and prime powers.

If \(p\) is prime, then \(\phi(p)=p-1\). Conversely, if \(p\) is an integer such that \(\phi(p)=p-1\), then \(p\) is prime.

The first part is obvious since every positive integer less than \(p\) is relatively prime to \(p\). Conversely, suppose that \(p\) is not prime. Then \(p=1\) or \(p\) is a composite number. If \(p=1\), then \(\phi(p)\neq p-1\). Now if \(p\) is composite, then \(p\) has a divisor \(d\) with \(1<d<p\), and \(d\) is not relatively prime to \(p\); thus \(\phi(p)\leq p-2\neq p-1\). We have a contradiction and thus \(p\) is prime.

We now find the value of \(\phi\) at prime powers.

Let \(p\) be a prime and \(m\) a positive integer; then \(\phi(p^m)=p^m-p^{m-1}\).

Note that the integers up to \(p^m\) that are not relatively prime to \(p^m\) are exactly the multiples of \(p\), namely \(p,2p,3p,...,p^{m-1}p\). There are \(p^{m-1}\) of these integers. Thus \[\phi(p^m)=p^m-p^{m-1}.\]

\(\phi(7^3)=7^3-7^2=343-49=294\). Also \(\phi(2^{10})=2^{10}-2^9=512.\)

We now prove that \(\phi\) is a multiplicative function.

Let \(m\) and \(n\) be two relatively prime positive integers. Then \(\phi(mn)=\phi(m)\phi(n)\).

Denote \(\phi(m)\) by \(s\) and let \(k_1,k_2,...,k_s\) be a reduced residue system modulo \(m\). Similarly, denote \(\phi(n)\) by \(t\) and let \(k_1',k_2',...,k_t'\) be a reduced residue system modulo \(n\).
Notice that if \(x\) belongs to a reduced residue system modulo \(mn\), then \[(x,m)=(x,n)=1.\] Thus \[x\equiv k_i \pmod{m} \quad \mbox{and} \quad x\equiv k_j' \pmod{n}\] for some \(i,j\). Conversely, if \[x\equiv k_i \pmod{m} \quad \mbox{and} \quad x\equiv k_j' \pmod{n}\] for some \(i,j\), then \((x,mn)=1\) and thus \(x\) belongs to a reduced residue system modulo \(mn\). Thus a reduced residue system modulo \(mn\) can be obtained by determining all \(x\) that are congruent to \(k_i\) and \(k_j'\) modulo \(m\) and \(n\) respectively. By the Chinese remainder theorem, the system of equations \[x\equiv k_i \pmod{m} \quad \mbox{and} \quad x\equiv k_j' \pmod{n}\] has a unique solution modulo \(mn\). Thus different pairs \(i,j\) yield different residues modulo \(mn\), and so \(\phi(mn)=st\).

We now derive a formula for \(\phi(n)\).

Let \(n=p_1^{a_1}p_2^{a_2}...p_s^{a_s}\) be the prime factorization of \(n\). Then \[\phi(n)=n\left(1-\frac{1}{p_1}\right)\left(1-\frac{1}{p_2}\right)...\left(1-\frac{1}{p_s}\right).\]

By Theorem 37, we can see that for all \(1\leq i\leq s\) \[\phi(p_i^{a_i})=p_i^{a_i}-p_i^{a_i-1}=p_i^{a_i}\left(1-\frac{1}{p_i}\right).\] Thus by Theorem 38, \[\begin{aligned} \phi(n)&=&\phi(p_1^{a_1}p_2^{a_2}...p_s^{a_s})\\&=& \phi(p_1^{a_1})\phi(p_2^{a_2})...\phi(p_s^{a_s})\\&=&p_1^{a_1}\left(1-\frac{1}{p_1}\right) p_2^{a_2}\left(1-\frac{1}{p_2}\right)...p_s^{a_s}\left(1-\frac{1}{p_s}\right)\\&=& p_1^{a_1}p_2^{a_2}...p_s^{a_s}\left(1-\frac{1}{p_1}\right)\left(1-\frac{1}{p_2}\right)... \left(1-\frac{1}{p_s}\right)\\&=& n\left(1-\frac{1}{p_1}\right)\left(1-\frac{1}{p_2}\right)...\left(1-\frac{1}{p_s}\right).\end{aligned}\]

Note that \[\phi(200)=\phi(2^35^2)=200\left(1-\frac{1}{2}\right)\left(1-\frac{1}{5}\right)=80.\]

Let \(n\) be a positive integer greater than 2. Then \(\phi(n)\) is even.

Let \(n=p_1^{a_1}p_2^{a_2}...p_k^{a_k}\).
Since \(\phi\) is multiplicative, we have \[\phi(n)=\prod_{j=1}^k\phi(p_j^{a_j}).\] By Theorem 39, \[\phi(p_j^{a_j})=p_j^{a_j-1}(p_j-1).\] If \(p_j\) is an odd prime, then \(p_j-1\) is even and so \(\phi(p_j^{a_j})\) is even. If \(p_j=2\) and \(a_j\geq 2\), then \(\phi(2^{a_j})=2^{a_j-1}\) is even; and if \(n=2m\) with \(m>1\) odd, then the odd prime factors of \(m\) already contribute an even factor. Hence \(\phi(n)\) is even for every \(n>2\).

Let \(n\) be a positive integer. Then \[\sum_{d\mid n}\phi(d)=n.\]

Split the integers from 1 to \(n\) into classes. Put an integer \(m\) in the class \(C_d\) if the greatest common divisor of \(m\) and \(n\) is \(d\). The number of integers in the class \(C_d\) is the number of positive integers not exceeding \(n/d\) that are relatively prime to \(n/d\). Thus we have \(\phi(n/d)\) integers in \(C_d\), and so \[n=\sum_{d\mid n}\phi(n/d).\] As \(d\) runs over all divisors of \(n\), so does \(n/d\). Hence \[n=\sum_{d\mid n}\phi(n/d)=\sum_{d\mid n}\phi(d).\]

The Sum-of-Divisors Function

The sum-of-divisors function, denoted by \(\sigma(n)\), is the sum of all positive divisors of \(n\).

\(\sigma(12)=1+2+3+4+6+12=28.\)

Note that we can express \(\sigma(n)\) as \(\sigma(n)=\sum_{d\mid n}d\). We now prove that \(\sigma(n)\) is a multiplicative function.

The sum-of-divisors function \(\sigma(n)\) is multiplicative.

We proved in Theorem 35 that the summatory function of a multiplicative function \(f\) is multiplicative. Thus let \(f(n)=n\) and notice that \(f(n)\) is multiplicative. As a result, \(\sigma(n)\) is multiplicative.

Knowing that \(\sigma(n)\) is multiplicative, it remains to evaluate \(\sigma(n)\) at powers of primes, from which we can derive a formula for its values at any positive integer.

Let \(p\) be a prime and let \(n=p_1^{a_1}p_2^{a_2}...p_t^{a_t}\) be a positive integer. Then \[\sigma(p^a)=\frac{p^{a+1}-1}{p-1},\] and as a result, \[\sigma(n)=\prod_{j=1}^{t}\frac{p_j^{a_j+1}-1}{p_j-1}.\]

Notice that the divisors of \(p^{a}\) are \(1,p,p^2,...,p^a\).
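As a sanity check on the product formula and on the identity \(\sum_{d\mid n}\phi(d)=n\), here is a small Python sketch comparing a brute-force count with the formula (the function names are ours):

```python
from math import gcd

def phi(n):
    """Euler phi by direct count: integers in [1, n] coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def phi_formula(n):
    """Euler phi via the product formula n * prod(1 - 1/p) over the
    primes p dividing n, done in integer arithmetic as n/p * (p - 1)."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            result = result // p * (p - 1)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                      # leftover prime factor > sqrt(n)
        result = result // m * (m - 1)
    return result

print(phi(200), phi_formula(200))  # 80 80, as computed in the text
print(phi_formula(7**3))           # 294

# The divisor-sum identity for n = 12: phi(1)+phi(2)+...+phi(12 terms) = 12
print(sum(phi(d) for d in range(1, 13) if 12 % d == 0))  # 12
```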
Thus \[\sigma(p^a)=1+p+p^2+...+p^a=\frac{p^{a+1}-1}{p-1},\] where the above sum is the sum of the terms of a geometric progression. Now since \(\sigma(n)\) is multiplicative, we have \[\begin{aligned} \sigma(n)&=&\sigma(p_1^{a_1})\sigma(p_2^{a_2})...\sigma(p_t^{a_t})\\&=& \frac{p_1^{a_1+1}-1}{p_1-1}\cdot\frac{p_2^{a_2+1}-1}{p_2-1}...\frac{p_t^{a_t+1}-1}{p_t-1}\\ &=&\prod_{j=1}^{t}\frac{p_j^{a_j+1}-1}{p_j-1}.\end{aligned}\]

\(\sigma(200)=\sigma(2^35^2)=\frac{2^4-1}{2-1}\cdot\frac{5^3-1}{5-1}=15\cdot 31=465.\)

The Number-of-Divisors Function

The number-of-divisors function, denoted by \(\tau(n)\), is the number of positive divisors of \(n\).

\(\tau(8)=4.\)

We can also express \(\tau(n)\) as \(\tau(n)=\sum_{d\mid n}1\). We can also prove that \(\tau(n)\) is a multiplicative function.

The number-of-divisors function \(\tau(n)\) is multiplicative.

By Theorem 36, with \(f(n)=1\), \(\tau(n)\) is multiplicative.

We also find a formula that evaluates \(\tau(n)\) for any integer \(n\).

Let \(p\) be a prime and let \(n=p_1^{a_1}p_2^{a_2}...p_t^{a_t}\) be a positive integer. Then \[\tau(p^a)=a+1,\] and as a result, \[\tau(n)=\prod_{j=1}^{t}(a_j+1).\]

The divisors of \(p^{a}\), as mentioned before, are \(1,p,p^2,...,p^a\). Thus \[\tau(p^a)=a+1.\] Now since \(\tau(n)\) is multiplicative, we have \[\begin{aligned} \tau(n)&=&\tau(p_1^{a_1})\tau(p_2^{a_2})...\tau(p_t^{a_t})\\&=& (a_1+1)(a_2+1)...(a_t+1)\\ &=&\prod_{j=1}^{t}(a_j+1).\end{aligned}\]

\(\tau(200)=\tau(2^35^2)=(3+1)(2+1)=12\).

Exercises

Find \(\phi(256)\) and \(\phi(2\cdot 3\cdot 5\cdot 7\cdot 11)\).
Show that \(\phi(5186)=\phi(5187)\).
Find all positive integers \(n\) such that \(\phi(n)=6\).
Show that if \(n\) is a positive integer, then \(\phi(2n)=\phi(n)\) if \(n\) is odd.
Show that if \(n\) is a positive integer, then \(\phi(2n)=2\phi(n)\) if \(n\) is even.
Show that if \(n\) is an odd integer, then \(\phi(4n)=2\phi(n)\).
Find the sum of positive integer divisors and the number of positive integer divisors of 35.
Find the sum of positive integer divisors and the number of positive integer divisors of \(2^5\cdot 3^4\cdot 5^3\cdot 7^3\cdot 13\).
Which positive integers have an odd number of positive divisors?
Which positive integers have exactly two positive divisors?
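The worked values above (\(\sigma(200)=465\), \(\tau(200)=12\)) and several of these exercises can be checked against brute-force implementations; a minimal Python sketch (function names ours):

```python
def sigma(n):
    """Sum-of-divisors function: sigma(n) = sum of d over all d | n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def tau(n):
    """Number-of-divisors function: tau(n) = count of d with d | n."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

print(sigma(12))   # 28
print(sigma(200))  # 465, matching (2^4 - 1)/(2 - 1) * (5^3 - 1)/(5 - 1)
print(tau(8))      # 4
print(tau(200))    # 12, matching (3 + 1)(2 + 1)
```

The trial-division loops are quadratic-time overkill, but for checking small exercise values that is exactly what one wants: an implementation too simple to be wrong.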
Now showing items 1-2 of 2:

1. Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector (Elsevier, 2014-11-10). This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...

2. Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector (Elsevier, 2014-11-10). Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
Standard Unit Vectors

Definition: The vectors $\vec{i} = (1, 0, 0)$, $\vec{j} = (0, 1, 0)$, and $\vec{k} = (0, 0, 1)$ are said to be Standard Unit Vectors, that is, vectors that run along the $x$, $y$, or $z$-axis and have a magnitude of 1, so that $\mid \mid \vec{i} \mid \mid = \mid \mid \vec{j} \mid \mid= \mid \mid \vec{k} \mid \mid = 1$.

The following image depicts the three standard unit vectors on an $xyz$-coordinate system.

Standard unit vectors are helpful as we can in fact represent any vector in terms of them. Consider a general vector $\vec{u} = (u_1, u_2, u_3)$. We can represent the vector $\vec{u}$ in the following way:

(1) $\vec{u} = u_1 \vec{i} + u_2 \vec{j} + u_3 \vec{k}$

Expressing Vectors with Standard Unit Vectors

Let some vector $\vec{u}$ exist in 3-space. We know that $\vec{u} = (u_1, u_2, u_3)$, and so we can express $\vec{u}$ in terms of standard unit vectors using the decomposition above. For example:

Example 1

Express the vector $\vec{u} = (2, -4, 5)$ in terms of standard unit vectors.

By simply putting what we know into the formula, we can write $\vec{u}$ rather easily in terms of standard unit vectors, that is $\vec{u} = 2\vec{i} -4\vec{j} + 5\vec{k}$.

Representing The Cross Product in Standard Unit Vectors

One way to represent the cross product of two vectors $\vec{u} \times \vec{v}$ is with determinants and unit vectors. Firstly, create a $3 \times 3$ matrix such that the entries of the first row are the unit vectors $\vec{i}$, $\vec{j}$, and $\vec{k}$. Make the entries of the second row the components of $\vec{u}$, and make the entries of the third row the components of $\vec{v}$ as shown: $\begin{bmatrix}\vec{i} & \vec{j} & \vec{k}\\ u_{1} & u_{2} & u_{3}\\ v_{1} & v_{2} & v_{3} \end{bmatrix}$. The determinant of this matrix is equal to the cross product $\vec{u} \times \vec{v}$. This can easily be seen by expanding across the first row in terms of minors/cofactors:

(3) $\vec{u} \times \vec{v} = (u_2 v_3 - u_3 v_2)\vec{i} - (u_1 v_3 - u_3 v_1)\vec{j} + (u_1 v_2 - u_2 v_1)\vec{k}$
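The cofactor expansion in (3) translates directly into code; a minimal Python sketch (the function name `cross` is ours):

```python
def cross(u, v):
    """Cross product u x v via cofactor expansion along the first row
    of the 3x3 matrix with rows (i, j, k), u, v, as in the text."""
    u1, u2, u3 = u
    v1, v2, v3 = v
    return (u2 * v3 - u3 * v2,      # coefficient of i
            -(u1 * v3 - u3 * v1),   # coefficient of j (note the sign)
            u1 * v2 - u2 * v1)      # coefficient of k

print(cross((1, 0, 0), (0, 1, 0)))   # (0, 0, 1), i.e. i x j = k
print(cross((2, -4, 5), (1, 1, 1)))  # (-9, 3, 6)
```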
November 18th, 2012, 04:06 PM # 1 Senior Member Joined: Aug 2012 From: South Carolina Posts: 866 Thanks: 0 Finding the Particular Solution of diff eqn

dy/dx = 8x^3 - 9x^2 + 4 and y = 5 when x = 0. Wanted to see this worked out in full, ty.

November 18th, 2012, 04:52 PM # 4 Senior Member Joined: Nov 2011 Posts: 595 Thanks: 16 Re: Finding the Particular Solution of diff eqn

New to these? Who are you kidding? It's been months that you've been asking for problems of this sort; actually this one is simpler than many you've posted, and Mark has mostly guided you to the solutions. Are you just copying any question that you have in your book, or are you trying to solve them before you post? Learn to sweat a bit on your paper sheet; MarkFL will not always be by your side! (perhaps we can adopt a MarkFL?) I don't want you to take it badly, I am just giving honest advice, which is to try solving the problem by yourself first (and then post if you are REALLY stuck)... and this one I am sure you could have solved if you had really wanted, no?

For the answer: you know the antiderivative of x^2 is x^3/3 (I am not using latex since you seem not to like it), of x^3 is x^4/4, etc. (try to remember it for now), so y = (8/4)x^4 - (9/3)x^3 + (4/1)x + cte, so y = 2x^4 - 3x^3 + 4x + cte. Since you want y(0) = 5, this means cte = 5. Cheers.

PS: To do latex just write exactly what you usually do but press latex on the little box above. What's complicated about it? What about celebrating your 700 posts by starting to do this?

November 18th, 2012, 05:17 PM # 6 Senior Member Joined: Nov 2011 Posts: 595 Thanks: 16 Re: Finding the Particular Solution of diff eqn

Great! Sorry if I have been a bit harsh; I am glad you understood I am trying to be constructive, not the opposite... it is very good that you are studying all these exercises at the end! Now think: could you solve another problem similar to this one?
For instance, could you do the equivalent problem where we want y to equal 5 at x = 1, meaning y(1) = 5? If you can do this, it means you can do any problem like this with powers of x, and when you see this in your exam it will be a piece of cake!

November 18th, 2012, 06:35 PM # 7 Senior Member Joined: Aug 2012 From: South Carolina Posts: 866 Thanks: 0 Re: Finding the Particular Solution of diff eqn

I think this is wrong. Maybe u can point out my errors: 6x^6/6 + x^5/5 - 2x^3/3 + 4x^2/2 + Csub1 ?

November 18th, 2012, 07:02 PM # 8 Senior Member Joined: Nov 2011 Posts: 595 Thanks: 16 Re: Finding the Particular Solution of diff eqn

Almost good! (but the fact that you still don't want to use latex...) You made a mistake in one of the coefficients. Do you see it? All the rest looks good! What about the value of the constant? y(1) = 5, so what is Csub1? Plug x = 1 and y(1) = 5 and you will obtain an equation you can solve for Csub1 to obtain its value.

For the Latex thing: you write things like y=5x^5+3x^2 for instance; now you write the exact same thing but inside the latex... you press the little box "latex" (I guess you can find it) and write inside, not that complicated, right? For each equation you have to start a new latex box and in between you write as you do normally. You want a sin, write \sin; a cos, write \cos; an integral, write \int; you want to write x/3, write \frac{x}{3}, so like [ latex]\frac{x}{3}[/latex] or [ latex]x^3[/latex].

edit: Actually I did not pay enough attention; all the coefficients are wrong! I don't have time now to explain, but you still have not understood this... try to work more on it and find your mistakes. Even if you have the correct solution to the other problem, you will get such questions wrong on your quiz as long as you don't know how to do them YOURSELF. We can discuss this later; try to work on it... you are not that far from understanding!! Courage!
November 18th, 2012, 07:26 PM # 9 Senior Member Joined: Aug 2012 From: South Carolina Posts: 866 Thanks: 0 Re: Finding the Particular Solution of diff eqn

ok I will review the problem in my book at work again from start to finish. Also, I am going to post a new problem. This one is an approximation problem. I will show as much work as I can. ps... I will post it with LATEX or at least try.

November 18th, 2012, 08:07 PM # 10 Senior Member Joined: Aug 2012 From: South Carolina Posts: 866 Thanks: 0 Re: Finding the Particular Solution of diff eqn

Here we go... from start to finish:

dy/dx = 8x^3 - 9x^2 + 4, find the particular solution when y = 5 when x = 0.

y = integral of (8x^3 - 9x^2 + 4) dx

y = 2x^4 - 3x^3 + 4x + C

now when y = 5 and x = 0, C must equal 5?
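The antiderivative bookkeeping in this thread is easy to check mechanically. A small Python sketch (function names are ours, not from the thread) integrates the polynomial term by term and solves for the constant from the initial condition:

```python
def integrate_poly(coeffs):
    """Integrate a polynomial given as {power: coefficient}; e.g.
    dy/dx = 8x^3 - 9x^2 + 4 is {3: 8, 2: -9, 0: 4}. Returns the
    antiderivative without the constant of integration."""
    return {p + 1: c / (p + 1) for p, c in coeffs.items()}

def evaluate(coeffs, x):
    return sum(c * x ** p for p, c in coeffs.items())

dydx = {3: 8, 2: -9, 0: 4}
Y = integrate_poly(dydx)   # {4: 2.0, 3: -3.0, 1: 4.0}, i.e. 2x^4 - 3x^3 + 4x
C = 5 - evaluate(Y, 0)     # impose y(0) = 5, so C = 5
print(Y, C)

# With the follow-up condition y(1) = 5 instead:
C1 = 5 - evaluate(Y, 1)    # y(1) = 2 - 3 + 4 + C1 = 5, so C1 = 2
print(C1)
```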
Let $(a_n)_{n \in \mathbb{N}}$ be a convergent sequence with limit $a \in \mathbb{R}$. Show that the arithmetic mean given by: $$s_n:= \frac{1}{n}\sum_{i=1}^n a_i \tag{A.M.} $$ also converges to $a$. I have read: arithmetic mean of a sequence converges but unfortunately the answers there don't help me much because I don't understand their substitutions and, most of all, why their substitutions seem to work. What I know from the problem is that since $a_n$ is convergent and $\epsilon >0$ is given, I can say that: $$\exists N_1 \in \mathbb{N}, \forall n \geq N_1: |a_n-a|<\epsilon_1 $$ I also know that since $(a_n)$ is convergent, it is bounded, so $|a_n| < M$ for all $n \in \mathbb{N}$. I need to show that: $$\exists N_2 \in \mathbb{N}, \forall n \geq N_2: |s_n-a|< \epsilon_2$$ I started as follows: $$ \left|s_n-a \right| = \left|\frac{1}{n}\sum_{i=1}^na_i -a\right|= \left|\frac{1}{n}\sum_{i=1}^m(a_i-a)+\frac{1}{n}\sum_{i=m+1}^n(a_i-a) \right| \\ \leq \frac{1}{n} \sum_{i=1}^m|a_i-a|+\frac{1}{n}\sum_{i=m+1}^n|a_i-a|$$ I believe I understand that the left sum after the $\leq$ is finite, bounded, and doesn't depend on the values that $n$ takes on. However, I don't understand where all the substitutions come from and what makes this proof so seemingly easy to complete. Is there a general idea I can follow to complete such proofs? Because I know that the last step is to show that the given sum is smaller than $\epsilon$. I also know that I should bring the condition $|a_n| < M$ into play somewhere, but I don't know where. If I choose an $n \geq N: |a_n-a|< \epsilon'$, what does that tell me about $|a_i-a|$? I know that they are seemingly the same, just with a different index.
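Not a proof, but a quick numerical sanity check of the statement being proved (the example sequence is our own choice):

```python
# a_n = 1 + 1/n converges to 1, and so do its Cesaro (arithmetic)
# means s_n = (a_1 + ... + a_n)/n, just more slowly: the early terms
# linger inside the average.
a = [1 + 1 / n for n in range(1, 100001)]
s = []
total = 0.0
for n, an in enumerate(a, start=1):
    total += an
    s.append(total / n)

print(a[-1])  # very close to 1
print(s[-1])  # also close to 1, but the gap shrinks like (log n)/n
```

The slow decay of $s_n - a$ here is exactly why the proof splits the sum: the first $m$ terms are merely bounded, and only their *average* contribution $\frac{1}{n}\sum_{i=1}^m |a_i - a|$ dies as $n \to \infty$.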
The Seifert-van Kampen Theorem Example 3

On The Seifert-van Kampen Theorem page we stated the very important Seifert-van Kampen theorem. We will now look at some examples of applying the theorem. More examples can be found on the following pages:

The Seifert-van Kampen Theorem Example 3

Example 3

Let $X$ be the Klein bottle. One construction of the Klein bottle is as follows. Take the unit box $[0, 1] \times [0, 1]$. Identify one pair of opposite edges in the same orientation, and the other pair of opposite edges in the opposite orientation to obtain a quotient space (see the illustration below). Use the Seifert-van Kampen theorem to find $\pi_1(X, x)$.

Let $U_1$ and $U_2$ be the following subsets: $U_1$ an open disk in the interior of the square, and $U_2$ the complement in $X$ of a smaller closed disk contained in $U_1$. Then $U_1 \cap U_2$ is an open annulus.

Now observe that $U_1$ is convex and so we have that:

(1) $\pi_1(U_1, x) = \{ 1 \}$

Furthermore, observe that the fundamental group of $U_1 \cap U_2$ is generated by a single loop $\alpha$ and is infinite cyclic, so:

(2) $\pi_1(U_1 \cap U_2, x) = \langle \alpha \rangle \cong \mathbb{Z}$

The only difficulty arises from finding the fundamental group of $U_2$. Observe that the boundary of the square (with the identifications preserved) is a deformation retract of $U_2$. However, the boundary of the square with these identifications is precisely a bouquet of two circles. We computed the fundamental group of this space in the first example of applying the Seifert-van Kampen theorem and found that:

(3) $\pi_1(U_2, x) \cong \langle a, b \rangle$, the free group on the two generators $a$ and $b$.

Let $i_1 : U_1 \cap U_2 \to U_1$ and $i_2 : U_1 \cap U_2 \to U_2$ be the inclusion maps as usual so that $i_{1*} : \pi_1(U_1 \cap U_2, x) \to \pi_1(U_1, x)$ and $i_{2*} : \pi_1(U_1 \cap U_2, x) \to \pi_1(U_2, x)$ are the induced homomorphisms. The generator $\alpha$ is sent by $i_{2*}$ to the boundary word $abab^{-1}$, so by the Seifert-van Kampen theorem we have that:

(4) $\pi_1(X, x) \cong \langle a, b : abab^{-1} = 1 \rangle$
I have the following exercise: Use Heisenberg's uncertainty principle and the relation $\Delta u = \sqrt{\langle u^2 \rangle - \langle u \rangle^2}$ to find the range of energy an electron has in an atom of diameter 1 angstrom. This is the attempt at a solution: From the uncertainty principle: $\Delta p \Delta x \geq \hslash / 2 $. Therefore $\Delta p \geq \hslash / 2\Delta x$. Without considering relativistic corrections (I don't know if this is OK), $E_c = p^2/2m$. From the definition of standard deviation, $\Delta p = \sqrt{\langle p^2 \rangle - \langle p \rangle^2}$. Then $\Delta p^2 = \langle p^2 \rangle - \langle p \rangle^2$, and therefore $\langle p^2 \rangle = \Delta p^2 + \langle p \rangle^2$. The energy of the electron will be the kinetic energy plus the (negative) Coulomb potential energy: $E = p^2/2m - e^2/r$. And so the average energy will be $\langle E \rangle = \langle p^2\rangle/2m - e^2/\langle r \rangle = (\Delta p^2 + \langle p \rangle^2)/2m - e^2/\langle r \rangle$. Here I don't know how to continue. Do I have to assume $\langle p \rangle = 0$? Why? And even when I assume that, replacing $\langle r\rangle$ with $\Delta x$, which I suppose is $1 \times 10^{-10}$ m (why?), I don't get a value of energy similar to the ground level energy of a hydrogen atom (which has roughly the same diameter as this one). What am I doing wrong?
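As a rough numerical sketch of the order of magnitude involved (taking $\Delta x$ equal to the 1 Å diameter, which is exactly the kind of arbitrary choice the question worries about):

```python
hbar = 1.054571817e-34  # J s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # J per eV

dx = 1e-10              # 1 angstrom, taken as the position uncertainty
dp = hbar / (2 * dx)    # minimum momentum spread from Heisenberg
E_kin = dp ** 2 / (2 * m_e)

print(E_kin / eV)  # roughly 1 eV
```

This gives about 1 eV, an order of magnitude below the 13.6 eV hydrogen ground state; the answer is quite sensitive to whether $\Delta x$ is taken as the radius or the diameter, and such uncertainty estimates should only be trusted to order of magnitude anyway.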
Eigenvalue arguments tend to work well when the base field $\Bbb F$, with $A \in M_{n \times n}(\Bbb F)$, is algebraically closed, so that all roots of the characteristic polynomial are guaranteed to exist in $\Bbb F$. Here's an inductive demonstration not based on eigenvalues and eigenvectors which works over any field $\Bbb F$: First, the result is clearly true if $n = 2$, for if $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \tag{1}$ then $A - \lambda I = \begin{bmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\end{bmatrix}, \tag{2}$ so the characteristic polynomial $p_A(\lambda)$ is $p_A(\lambda) = \det(\begin{bmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda\end{bmatrix})$$= \lambda^2 - (a_{11} + a_{22}) \lambda + (a_{11}a_{22} - a_{12}a_{21}) = \lambda^2 - \text{Tr}(A)\lambda + \det(A); \tag{3}$ the coefficient of $\lambda^2$ is $1 = (-1)^2$ and that of $\lambda = \lambda^1$ is $(-1)^1\text{Tr}(A) = (-1)^{2 - 1}\text{Tr}(A)$, and $\deg p_A(\lambda) = 2$ in this, the base case for the induction. Now assume that $\deg p_A(\lambda) = k$, the coefficient of $\lambda^k$ is $(-1)^k$, and the coefficient of $\lambda^{k - 1}$ in $p_A(\lambda)$ is $(-1)^{k - 1}\text{Tr}(A)$ for all $A \in M_{k \times k}(\Bbb F)$, and consider some $A \in M_{(k + 1) \times (k + 1)}(\Bbb F)$; its characteristic polynomial is $p_A(\lambda) = \det(A - \lambda I)$$= \det(\begin{bmatrix} a_{11} - \lambda & a_{12} & \ldots & a_{1 \; k} & a_{1 \; (k + 1)} \\ a_{21} & a_{22} - \lambda & \ldots & a_{2 \; k} & a_{2 \; (k + 1)} \\a_{31} & a_{32} & a_{33} - \lambda & \ldots & a_{3 \; (k + 1)} \\\ldots \\ a_{(k + 1) \; 1} & a_{(k + 1) \; 2} & \ldots & a_{(k + 1) \; k} & a_{(k + 1) \; (k + 1)} - \lambda \end{bmatrix}). 
\tag{4}$ When we evaluate the determinant in (4) by expanding in minors along the first row, we see that $p_A(\lambda) = (a_{11} - \lambda) \det(\begin{bmatrix} a_{22} - \lambda & \ldots & a_{2 \; k} & a_{2 \; (k + 1)} \\a_{32} & a_{33} - \lambda & \ldots & a_{3 \; (k + 1)} \\\ldots \\ a_{(k + 1) \; 2} & \ldots & a_{(k + 1) \; k} & a_{(k + 1) \; (k + 1)} - \lambda \end{bmatrix}) + \Theta(\lambda), \tag{5}$ where $\Theta(\lambda)$ is a polynomial of degree at most $k - 1$ in $\lambda$, since deleting the row and column containing $a_{1 \; j}$, $2 \le j \le k + 1$ leaves a matrix with at most $k - 1$ entries of the form $a_{ii} - \lambda$. Furthermore, our inductive hypothesis implies that $\det(\begin{bmatrix} a_{22} - \lambda & \ldots & a_{2 \; k} & a_{2 \; (k + 1)} \\a_{32} & a_{33} - \lambda & \ldots & a_{3 \; (k + 1)} \\\ldots \\ a_{(k + 1) \; 2} & \ldots & a_{(k + 1) \; k} & a_{(k + 1) \; (k + 1)} - \lambda \end{bmatrix})$$= (-1)^k \lambda^k + (-1)^{k - 1} \text{Tr} (A_{11}) \lambda^{k - 1} +\theta(\lambda), \tag{6}$ where $A_{11}$ is the matrix obtained from $A$ by deleting the first row and column, and $\theta(\lambda)$ is a polynomial of degree at most $k - 2$ in $\lambda$. Thus the $\lambda^{k + 1}$ and $\lambda^k$ terms of $p_A(\lambda)$ are given by the product $(a_{11} - \lambda)((-1)^k \lambda^k + (-1)^{k - 1} \text{Tr} (A_{11}) \lambda^{k - 1})$$= (-1)^{k + 1}\lambda^{k + 1} + a_{11}(-1)^k \lambda^k + (-1)^k \text{Tr} (A_{11})\lambda^k + a_{11}(-1)^{k - 1} \text{Tr} (A_{11}) \lambda^{k - 1}$$= (-1)^{k + 1}\lambda^{k + 1} + (-1)^k \text{Tr} (A)\lambda^k + a_{11}(-1)^{k - 1} \text{Tr} (A_{11}) \lambda^{k - 1} \tag{7};$ we see from (7) that the coefficient of $\lambda^k$ is $(-1)^k \text{Tr} (A)$, that the coefficient of $\lambda^{k + 1}$ is $(-1)^{k + 1}$, and also that the degree of $p_A(\lambda)$ is $k + 1$, thus inductively establishing all three of the bullet points in the question. And all this over an arbitrary field $\Bbb F$! QED. 
Hope this helps. Cheerio, and as always, Fiat Lux!!!!
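The three bullet points can also be spot-checked numerically. The pure-Python sketch below (our own code, not part of the answer) expands $\det(A - \lambda I)$ by cofactors with polynomial arithmetic and compares the top coefficients with $(-1)^n$ and $(-1)^{n-1}\text{Tr}(A)$:

```python
def p_add(p, q):
    """Add two polynomials given as coefficient lists, lowest power first."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def p_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def char_poly(A):
    """Coefficients of det(A - lambda I), lowest power first, by
    cofactor expansion along the first row with polynomial entries."""
    n = len(A)
    # entries of A - lambda I as polynomials in lambda
    M = [[[A[i][j]] if i != j else [A[i][j], -1] for j in range(n)]
         for i in range(n)]

    def det(M):
        if len(M) == 1:
            return M[0][0]
        total = [0]
        for j in range(len(M)):
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            term = p_mul(M[0][j], det(minor))
            if j % 2 == 1:
                term = [-c for c in term]
            total = p_add(total, term)
        return total

    return det(M)

A = [[2, 1, 0], [3, -1, 4], [0, 5, 6]]
p = char_poly(A)            # [-70, 19, 7, -1]
n, tr = 3, 2 + (-1) + 6
print(p[n] == (-1) ** n)                 # leading coefficient is (-1)^n
print(p[n - 1] == (-1) ** (n - 1) * tr)  # next one is (-1)^(n-1) Tr(A)
```

The expansion mirrors the proof exactly: the recursion along the first row is the step from (4) to (5), done with exact integer arithmetic.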
Given a mapping in the Sobolev space $f\in W^{2,n}_{\rm loc}(\mathbb{R}^n,\mathbb{R}^n)$, I would like to know what the Sobolev regularity of the Jacobian $J_f=\operatorname{det} Df$ is. It is well known and easy to prove that if $u,v\in W^{1,p}\cap L^\infty(\mathbb{R}^n)$, then $uv\in W^{1,p}\cap L^\infty$. Indeed, the product of a bounded and an $L^p$ function is in $L^p$, and the same argument applies to the derivatives: $\partial_i(uv)=(\partial_i u)v+u\,\partial_i v\in L^p$. Now if $u\in W^{1,n}$ then $u$ has very high integrability (Trudinger's inequality), so if $u,v\in W^{1,n}$ (no longer bounded), then $uv$ must belong to some Orlicz-Sobolev space slightly larger than $W^{1,n}$. Thus my question is: Let $u_1,\ldots,u_n \in W^{1,n}(B^n(0,1))$. Find an optimal (or close to optimal) Orlicz-Sobolev space $W^{1,P}$ for some Young function $P$ such that $u_1\cdot\ldots\cdot u_n\in W^{1,P}$. In fact I would like to know if one can find $P$ satisfying the so-called divergence condition: $$ \int_0^1 \frac{P(t)}{t^{n+1}}\, dt =\infty. $$ Since the derivatives of $f\in W^{2,n}(\mathbb{R}^n,\mathbb{R}^n)$ belong to $W^{1,n}$, such a result would imply that $J_f=\det Df\in W^{1,P}.$
You need a slightly stronger version of continuity: Lipschitz. Recall that Lipschitz continuity implies uniform continuity.

Let $f:[a,b]\rightarrow\mathbb{R}$ be a differentiable function. Then $f'$ is bounded if and only if $f$ is Lipschitz.

$\implies)$ Suppose $f'$ is bounded, say $|f'(x)|\leq M$ for some $M$ and all $x$. We want to show $f$ is Lipschitz, i.e. $|f(x)-f(y)|\leq M|x-y|$. For $x\neq y$, by the MVT (Mean Value Theorem) there is a $c$ between $x$ and $y$ with $f'(c)={f(x)-f(y)\over x-y}$, and so ${|f(x)-f(y)|\over |x-y|}=|f'(c)|\leq M$, which gives $|f(x)-f(y)|\leq M|x-y|$ as desired.

$\impliedby)$ Suppose $f$ is Lipschitz; we want to show $f'$ is bounded. If $f$ is Lipschitz, then ${|f(x)-f(y)|\over |x-y|}\leq M$ for all $x\neq y$. Fix $y=x+h$, so ${|f(x)-f(x+h)|\over |x-(x+h)|}={|f(x+h)-f(x)|\over |h|}\leq M$ (notice in the numerator we can switch the order because of the absolute value). Now we just take $h\rightarrow 0$: the difference quotients are all bounded by $M$, so $|f'(x)|\leq M$.

While not the standard approach, the forward direction can also be phrased as a contrapositive: if $f'$ is unbounded then $f$ isn't Lipschitz, since no single $M$ can bound all the difference quotients $|f(x)-f(y)|/|x-y|$.

Also, Lipschitz functions are differentiable a.e. (almost everywhere); this is Rademacher's theorem (in one variable it also follows from the fact that Lipschitz functions are absolutely continuous). For $f(x)=|x|$, differentiability fails only at $x=0$; for more general Lipschitz functions other points may fail, but the set of points at which differentiability fails has measure zero.
Connectedness of Box Topological Products

Recall from the Connectedness of Finite Topological Products page that if $\{ X_1, X_2, ..., X_n \}$ is a finite collection of connected topological spaces then the topological product $\displaystyle{\prod_{i=1}^{n} X_i}$ is also connected. We also noted that if instead $\{ X_i \}_{i \in I}$ is an arbitrary collection of connected topological spaces then the topological product $\displaystyle{\prod_{i \in I} X_i}$ is also connected, though we did not prove this.

Recall from the Box Topological Products of Topological Spaces page that another somewhat natural topology to put on the Cartesian product $\displaystyle{\prod_{i \in I} X_i}$ is called the box topology. The box topology on this Cartesian product is the topology $\tau$ with the basis:

(1) $\displaystyle{\mathcal B = \left \{ \prod_{i \in I} U_i : U_i \text{ is open in } X_i \text{ for each } i \in I \right \}}$

We already know that if $\{ X_1, X_2, ..., X_n \}$ is a finite collection of topological spaces then the resulting topological product and box topological product of these spaces are the same. So if this finite collection of topological spaces are all connected then by the theorem referenced above, $\displaystyle{\prod_{i=1}^{n} X_i}$ will also be connected.

Interestingly enough, if instead $\{ X_i \}_{i \in I}$ is an infinite collection of connected topological spaces then the box topological product need not be connected! For example, consider the topological space $\mathbb{R}$ with the usual topology. Let $\{ \mathbb{R}_i \}_{i=1}^{\infty}$ be an infinite collection of copies of this topological space. We will show that even though each $\mathbb{R}_i$ is connected, the box topological product $\displaystyle{\prod_{i=1}^{\infty} \mathbb{R}_i}$ is not connected.

Define the sets $A$ and $B$ as follows:

(2) $A = \{ (x_i)_{i=1}^{\infty} : (x_i)_{i=1}^{\infty} \text{ is a bounded sequence} \}$, $\quad B = \{ (x_i)_{i=1}^{\infty} : (x_i)_{i=1}^{\infty} \text{ is an unbounded sequence} \}$

Then clearly $A, B \neq \emptyset$, $A \cap B = \emptyset$, and $\displaystyle{\prod_{i=1}^{\infty} \mathbb{R}_i = A \cup B}$.
We only need to show that the sets $A$ and $B$ are open to prove that $\{ A, B \}$ is a separation of $\displaystyle{\prod_{i=1}^{\infty} \mathbb{R}_i}$ and so $\displaystyle{\prod_{i=1}^{\infty} \mathbb{R}_i}$ is disconnected.

Let $(x_i)_{i=1}^{\infty} \in A$. Then consider the following open neighbourhood $U$ of $(x_i)_{i=1}^{\infty}$:

(4) $\displaystyle{U = \prod_{i=1}^{\infty} (x_i - 1, x_i + 1)}$

Then $(x_i)_{i=1}^{\infty} \in U$, and $U$ is open in $\displaystyle{\prod_{i=1}^{\infty} \mathbb{R}_i}$ with the box topology. Furthermore, $U$ contains only bounded sequences (each point of $U$ differs from the bounded sequence $(x_i)_{i=1}^{\infty}$ by less than $1$ in every coordinate), so $(x_i)_{i=1}^{\infty} \in U \subset A$, which shows that $A = \mathrm{int}(A)$, so $A$ is open in $\displaystyle{\prod_{i=1}^{\infty} \mathbb{R}_i}$.

Similarly, let $(x_i)_{i=1}^{\infty} \in B$, and consider the following open neighbourhood $V$ of $(x_i)_{i=1}^{\infty}$:

(5) $\displaystyle{V = \prod_{i=1}^{\infty} (x_i - 1, x_i + 1)}$

Then $(x_i)_{i=1}^{\infty} \in V$ and $V$ is open in $\displaystyle{\prod_{i=1}^{\infty} \mathbb{R}_i}$ with the box topology. Furthermore, $V$ contains only unbounded sequences, so $(x_i)_{i=1}^{\infty} \in V \subset B$, which shows that $B = \mathrm{int}(B)$, so $B$ is open in $\displaystyle{\prod_{i=1}^{\infty} \mathbb{R}_i}$.

Thus $\{ A, B \}$ is a separation of $\displaystyle{\prod_{i=1}^{\infty} \mathbb{R}_i}$ with the box topology, so $\displaystyle{\prod_{i=1}^{\infty} \mathbb{R}_i}$ is disconnected!
Let $W$ be a subspace of $\mathbb{R}^n$. For $x \in \mathbb{R}^n$, define $$\rho (x) = \inf_{y \in W} \Vert x - y \Vert _2$$ Let $\{ u_1, \ldots, u_m \}$ be an orthonormal basis of $W$, where $m$ is the dimension of $W$. Extend this to an orthonormal basis $\{ u_1, \ldots, u_m, \ldots, u_n \}$ of all of $\mathbb{R}^n$. How can I show that $$\rho (x) = \left[ \mathop {\sum} \limits_{j=m+1}^n \vert \langle x, u_j \rangle \vert^2 \right]^\frac{1}{2}$$ and that the infimum is uniquely attained at $$y=Px, \qquad P = \mathop {\sum} \limits_{j=1}^m u_j u_j^T?$$ As you know, $P$ is called the orthogonal projection of $x$ onto $W$.
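A numerical illustration of the claim (not a proof; the basis below is our own arbitrary choice of an orthonormal basis of $\mathbb{R}^3$ whose first two vectors span $W$):

```python
import math

# u1, u2 span W; u3 completes the orthonormal basis of R^3.
u1 = (1 / math.sqrt(2), 1 / math.sqrt(2), 0.0)
u2 = (0.0, 0.0, 1.0)
u3 = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

x = (1.0, 2.0, 3.0)

# rho(x) from the claimed formula: only coefficients beyond dim W enter.
rho_formula = abs(dot(x, u3))

# Px = sum over the basis vectors of W of <x, u_j> u_j.
Px = tuple(dot(x, u1) * a + dot(x, u2) * b for a, b in zip(u1, u2))

# The distance from x to Px should equal rho(x).
dist = math.sqrt(sum((xi - pi) ** 2 for xi, pi in zip(x, Px)))
print(rho_formula, dist)  # both about 0.7071
```

The agreement reflects the Pythagorean theorem in the orthonormal coordinates $\langle x, u_j\rangle$, which is exactly the mechanism a proof would formalize.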
Skills to Develop

The following questions are meant to guide our study of the material in this section. After studying this section, we should understand the concepts motivated by these questions and be able to write precise, coherent answers to these questions.

How is the tangent function defined? What is the domain of the tangent function? What are the reciprocal functions and how are they defined? What are the domains of each of the reciprocal functions?

We defined the cosine and sine functions as the coordinates of the terminal points of arcs on the unit circle. As we will see later, the sine and cosine give relations for certain sides and angles of right triangles. It will be useful to be able to relate different sides and angles in right triangles, and we need other circular functions to do that. We obtain these other circular functions — tangent, cotangent, secant, and cosecant — by combining the cosine and sine together in various ways.

Beginning Activity

Using radian measure: For what values of \(t\) is \(\cos(t) = 0\)? For what values of \(t\) is \(\sin(t) = 0\)? In what quadrants is \(\cos(t) > 0\)? In what quadrants is \(\sin(t) > 0\)? In what quadrants is \(\cos(t) < 0\)? In what quadrants is \(\sin(t) < 0\)?

The Tangent Function

Next to the cosine and sine, the most useful circular function is the tangent. The word tangent was introduced by Thomas Fincke (1561-1656) in his Flenspurgensis Geometriae rotundi libri XIIII, where he used the word tangens in Latin. (From Earliest Known Uses of Some of the Words of Mathematics at http://jeff560.tripod.com/mathword.html.)

Definition: tangent function

The tangent function is the quotient of the sine function divided by the cosine function. So the tangent of a real number \(t\) is defined to be \(\dfrac{\sin(t)}{\cos(t)}\) for those values \(t\) for which \(\cos(t) \neq 0\).
The common abbreviation for the tangent of \(t\) is \[\tan(t) = \dfrac{\sin(t)}{\cos(t)}.\] In this definition, we need the restriction that \(\cos(t) \neq 0\) to make sure the quotient is defined. Since \(\cos(t) = 0\) whenever \(t = \dfrac{\pi}{2} + k\pi\) for some integer \(k\), we see that \(\tan(t)\) is defined when \(t \neq \dfrac{\pi}{2} + k\pi\) for all integers \(k\). So the domain of the tangent function is the set of all real numbers \(t\) for which \(t \neq \dfrac{\pi}{2} + k\pi\) for every integer \(k\). Notice that although the domain of the sine and cosine functions is all real numbers, this is not true for the tangent function. When we worked with the unit circle definitions of cosine and sine, we often used the following diagram to indicate signs of \(\cos(t)\) and \(\sin(t)\) when the terminal point of the arc \(t\) is in a given quadrant. Exercise \(\PageIndex{1}\) Considering \(t\) to be an arc on the unit circle, for the terminal point of \(t\): In which quadrants is \(\tan(t)\) positive? In which quadrants is \(\tan(t)\) negative? For what values of \(t\) is \(\tan(t) = 0\)? Complete Table 1.4, which gives the values of cosine, sine, and tangent at the common reference arcs in Quadrant I (the sine values are given).
\(t\) | \(\cos(t)\) | \(\sin(t)\) | \(\tan(t)\)
\(0\) | | \(0\) |
\(\dfrac{\pi}{6}\) | | \(\dfrac{1}{2}\) |
\(\dfrac{\pi}{4}\) | | \(\dfrac{\sqrt{2}}{2}\) |
\(\dfrac{\pi}{3}\) | | \(\dfrac{\sqrt{3}}{2}\) |
\(\dfrac{\pi}{2}\) | | \(1\) |
Answer Since \(\tan(t) = \dfrac{\sin(t)}{\cos(t)}\), \(\tan(t)\) is positive when both \(\sin(t)\) and \(\cos(t)\) have the same sign. So \(\tan(t) > 0\) in the first and third quadrants. We see that \(\tan(t)\) is negative when \(\sin(t)\) and \(\cos(t)\) have opposite signs. So \(\tan(t) < 0\) in the second and fourth quadrants. \(\tan(t)\) will be zero when \(\sin(t) = 0\) and \(\cos(t) \neq 0\). So \(\tan(t) = 0\) when the terminal point of \(t\) is on the \(x\)-axis. That is, \(\tan(t) = 0\) when \(t = k\pi\) for some integer \(k\).
Following is a completed version of Table 1.4.
\(t\) | \(\cos(t)\) | \(\sin(t)\) | \(\tan(t)\)
\(0\) | \(1\) | \(0\) | \(0\)
\(\dfrac{\pi}{6}\) | \(\dfrac{\sqrt{3}}{2}\) | \(\dfrac{1}{2}\) | \(\dfrac{1}{\sqrt{3}}\)
\(\dfrac{\pi}{4}\) | \(\dfrac{\sqrt{2}}{2}\) | \(\dfrac{\sqrt{2}}{2}\) | \(1\)
\(\dfrac{\pi}{3}\) | \(\dfrac{1}{2}\) | \(\dfrac{\sqrt{3}}{2}\) | \(\sqrt{3}\)
\(\dfrac{\pi}{2}\) | \(0\) | \(1\) | undefined
Just as with the cosine and sine, if we know the values of the tangent function at the reference arcs, we can find its values at any arc related to a reference arc. For example, the reference arc for the arc \(t = \dfrac{5\pi}{3}\) is \(\dfrac{\pi}{3}\). \[\tan(\dfrac{5\pi}{3}) = \dfrac{\sin(\dfrac{5\pi}{3})}{\cos(\dfrac{5\pi}{3})} = -\dfrac{\sin(\dfrac{\pi}{3})}{\cos(\dfrac{\pi}{3})} = \dfrac{-\dfrac{\sqrt{3}}{2}}{\dfrac{1}{2}} = -\sqrt{3}\] So we can shorten this process by just using the fact that \(\tan(\dfrac{\pi}{3}) = \sqrt{3}\) and that \(\tan(\dfrac{5\pi}{3}) < 0\) since the terminal point of the arc \(\dfrac{5\pi}{3}\) is in the fourth quadrant. \[\tan(\dfrac{5\pi}{3}) = -\tan(\dfrac{\pi}{3}) = -\sqrt{3}\] Exercise \(\PageIndex{2}\) Determine the exact value of \(\tan(t)\) for \(t = \dfrac{5\pi}{4}\) and for \(t = \dfrac{5\pi}{6}\). Determine the exact values of \(\cos(t)\) and \(\tan(t)\) if it is known that \(\sin(t) = \dfrac{1}{3}\) and \(\tan(t) < 0\). Answer 1. \[\tan(\dfrac{5\pi}{4}) = \tan(\dfrac{\pi}{4}) = 1\] \[\tan(\dfrac{5\pi}{6}) = -\tan(\dfrac{\pi}{6}) = -\dfrac{1}{\sqrt{3}}\] 2. We first use the Pythagorean Identity. \[\cos^{2}(t) + \sin^{2}(t) = 1\] \[\cos^{2}(t) + (\dfrac{1}{3})^{2} = 1\] \[\cos^{2}(t) = \dfrac{8}{9}\] Since \(\sin(t) > 0\) and \(\tan(t) < 0\), we conclude that the terminal point of \(t\) must be in the second quadrant, and hence, \(\cos(t) < 0\).
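The completed table can be checked numerically. The following short Python sketch (not part of the text, just a verification aid) compares the exact values above against the built-in tangent function.

```python
import math

# Numerical check of the completed Table 1.4 tangent values.
table = {
    0.0: 0.0,
    math.pi / 6: 1 / math.sqrt(3),
    math.pi / 4: 1.0,
    math.pi / 3: math.sqrt(3),
}
for t, exact in table.items():
    assert abs(math.tan(t) - exact) < 1e-12

# tan(5*pi/3) = -tan(pi/3) = -sqrt(3): terminal point in Quadrant IV.
assert abs(math.tan(5 * math.pi / 3) + math.sqrt(3)) < 1e-9
print("all table values check out")
```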
Therefore, \[\cos(t) = -\dfrac{\sqrt{8}}{3}\] \[\tan(t) = \dfrac{\dfrac{1}{3}}{-\dfrac{\sqrt{8}}{3}} = -\dfrac{1}{\sqrt{8}}\] The Reciprocal Functions The remaining circular or trigonometric functions are reciprocals of the cosine, sine, and tangent functions. Since these functions are reciprocals, their domains will be all real numbers for which the denominator is not equal to zero. The first we will introduce is the secant function. Definition: secant function The secant function is the reciprocal of the cosine function. So the secant of a real number \(t\) is defined to be \(\dfrac{1}{\cos(t)}\) for those values \(t\) where \(\cos(t) \neq 0\). The common abbreviation for the secant of \(t\) is \[\sec(t) = \dfrac{1}{\cos(t)}.\] Since the tangent function and the secant function use \(\cos(t)\) in a denominator, they have the same domain. So the domain of the secant function is the set of all real numbers \(t\) for which \(t \neq \dfrac{\pi}{2} + k\pi\) for every integer \(k\). The term secant was introduced by Thomas Fincke (1561-1656) in his Thomae Finkii Flenspurgensis Geometriae rotundi libri XIIII, Basileae: Per Sebastianum Henricpetri, 1583. Vieta (1593) did not approve of the term secant, believing it could be confused with the geometry term. He used Transsinuosa instead. From Earliest Known Uses of Some of the Words of Mathematics at http://jeff560.tripod.com/mathword.html. Next up is the cosecant function. Definition: cosecant function The cosecant function is the reciprocal of the sine function. So the cosecant of a real number \(t\) is defined to be \(\dfrac{1}{\sin(t)}\) for those values \(t\) where \(\sin(t) \neq 0\). The common abbreviation for the cosecant of \(t\) is \[\csc(t) = \dfrac{1}{\sin(t)}.\] Since \(\sin(t) = 0\) whenever \(t = k\pi\) for some integer \(k\), we see that the domain of the cosecant function is the set of all real numbers \(t\) for which \(t \neq k\pi\) for every integer \(k\). Finally, we have the cotangent function.
Definition: cotangent function The cotangent function is the reciprocal of the tangent function. So the cotangent of a real number \(t\) is defined to be \(\dfrac{1}{\tan(t)}\) for those values \(t\) where \(\tan(t) \neq 0\). The common abbreviation for the cotangent of \(t\) is \[\cot(t) = \dfrac{1}{\tan(t)}.\] Since \(\tan(t) = 0\) whenever \(t = k\pi\) for some integer \(k\), we see that the domain of the cotangent function is the set of all real numbers \(t\) for which \(t \neq k\pi\) for every integer \(k\). Georg Joachim von Lauchen Rheticus appears to be the first to use the term cosecant (as cosecans in Latin) in his Opus Palatinum de triangulis. From Earliest Known Uses of Some of the Words of Mathematics at http://jeff560.tripod.com/mathword.html. The word cotangent was introduced by Edmund Gunter in Canon Triangulorum (Table of Artificial Sines and Tangents) where he used the term cotangens in Latin. From Earliest Known Uses of Some of the Words of Mathematics at http://jeff560.tripod.com/mathword.html. A Note about Calculators When it is not possible to determine exact values of a trigonometric function, we use a calculator to determine approximate values. However, please keep in mind that many calculators only have keys for the sine, cosine, and tangent functions. With these calculators, we must use the definitions of cosecant, secant, and cotangent to determine approximate values for these functions. Exercise \(\PageIndex{3}\) When possible, find the exact value of each of the following functional values. When this is not possible, use a calculator to find a decimal approximation to four decimal places.
\(\sec(\dfrac{7\pi}{4})\) \(\csc(\dfrac{-\pi}{4})\) \(\tan(\dfrac{7\pi}{8})\) \(\cot(\dfrac{4\pi}{3})\) \(\csc(5)\) Answer \[\sec(\dfrac{7\pi}{4}) = \dfrac{1}{\cos(\dfrac{7\pi}{4})} = \dfrac{1}{\cos(\dfrac{\pi}{4})} = \dfrac{2}{\sqrt{2}} = \sqrt{2}\] \[\csc(-\dfrac{\pi}{4}) = \dfrac{1}{\sin(-\dfrac{\pi}{4})} = -\dfrac{2}{\sqrt{2}} = -\sqrt{2}\] \[\tan(\dfrac{7\pi}{8}) \approx -0.4142\] \[\cot(\dfrac{4\pi}{3}) = \cot(\dfrac{\pi}{3}) = \dfrac{1}{\tan(\dfrac{\pi}{3})} = \dfrac{1}{\sqrt{3}}\] \[\csc(5) = \dfrac{1}{\sin(5)} \approx -1.0428\] Exercise \(\PageIndex{4}\) If \(\cos(t) = \dfrac{1}{3}\) and \(\sin(t) < 0\), determine the exact values of \(\sin(t)\), \(\tan(t)\), \(\csc(t)\), and \(\cot(t)\). If \(\sin(t) = \dfrac{-7}{10}\) and \(\tan(t) > 0\), determine the exact values of \(\cos(t)\) and \(\cot(t)\). What is another way to write \((\tan(t))(\cos(t))\)? Answer 1. If \(\cos(t) = \dfrac{1}{3}\) and \(\sin(t) < 0\), we use the Pythagorean Identity to determine that \(\sin(t) = -\dfrac{\sqrt{8}}{3}\). We can then determine that \[\tan(t) = -\sqrt{8}\] \[\csc(t) = -\dfrac{3}{\sqrt{8}}\] \[\cot(t) = -\dfrac{1}{\sqrt{8}}\] 2. If \(\sin(t) = -0.7\) and \(\tan(t) > 0\), we use the Pythagorean Identity to obtain \[\cos^{2}(t) + (-0.7)^{2} = 1\] \[\cos^{2}(t) = 0.51\] Since we are also given that \(\tan(t) > 0\), we know that the terminal point of \(t\) is in the third quadrant. Therefore, \(\cos(t) < 0\) and \(\cos(t) = -\sqrt{0.51}\). Hence, \[\tan(t) = \dfrac{-0.7}{-\sqrt{0.51}}\] \[\cot(t) = \dfrac{\sqrt{0.51}}{0.7}\] 3. We can use the definition of \(\tan(t)\) to obtain \[(\tan(t))(\cos(t)) = \dfrac{\sin(t)}{\cos(t)}\cdot \cos(t) = \sin(t)\] So \(\tan(t)\cos(t) = \sin(t)\), but it should be noted that this equation is only valid for those values of \(t\) for which \(\tan(t)\) is defined, that is, when \(t \neq \dfrac{\pi}{2} + k\pi\) for every integer \(k\).
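As the calculator note above explains, sec, csc, and cot are rarely built in, so we compute them from the reciprocal definitions. The following Python sketch (an illustrative aid, not part of the text) does exactly that and reproduces the approximate values from the answer.

```python
import math

# Many calculators lack sec/csc/cot keys; compute them from the definitions.
def sec(t): return 1 / math.cos(t)
def csc(t): return 1 / math.sin(t)
def cot(t): return 1 / math.tan(t)

print(round(sec(7 * math.pi / 4), 4))  # 1.4142  (= sqrt(2))
print(round(csc(5), 4))                # -1.0428
print(round(cot(4 * math.pi / 3), 4))  # 0.5774  (= 1/sqrt(3))
```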
Summary In this section, we studied the following important concepts and ideas: The tangent function is the quotient of the sine function divided by the cosine function. That is, \(\tan(t) = \dfrac{\sin(t)}{\cos(t)}\) for those values \(t\) for which \(\cos(t) \neq 0\). The domain of the tangent function is the set of all real numbers \(t\) for which \(t \neq \dfrac{\pi}{2} + k\pi\) for every integer \(k\). The reciprocal functions are the secant, cosecant, and cotangent functions. \(\sec(t) = \dfrac{1}{\cos(t)}\), with domain the set of all real numbers \(t\) for which \(t \neq \dfrac{\pi}{2} + k\pi\) for every integer \(k\). \(\csc(t) = \dfrac{1}{\sin(t)}\), with domain the set of all real numbers \(t\) for which \(t \neq k\pi\) for every integer \(k\). \(\cot(t) = \dfrac{1}{\tan(t)}\), with domain the set of all real numbers \(t\) for which \(t \neq k\pi\) for every integer \(k\).
What I am interested in is to find an expression for $$\frac{\partial^2G(S,\sigma)}{\partial S\partial\sigma}$$ where $G$ is the inverse function in the first argument of the function $C$, so that $S = C(G(S,\sigma), \sigma)$, and we define $C$ as \begin{align*} C(V,\sigma) &=V N \left(d_1 \right) - D \exp \left( -rT\right) N \left(d_2\right) \\ d_1 &= \frac{\log\left(V/D\right) + \left(r + \sigma^2/2\right)T}{\sigma \sqrt{T}} \\ d_2 &= d_1 - \sigma \sqrt{T}. \end{align*} with $S,V,T,D,\sigma\in (0,\infty)$, $r\in\mathbb{R}$, and $N$ is the standard normal cumulative distribution function. $C$ is the European call option pricing formula in the Black–Scholes model. Thus, $C: (0,\infty)^2 \rightarrow (0,\infty)$. One can show that $C$ is strictly monotone in both of its arguments. There is no formula for $G$ but it can easily be computed with a bisection-like method. One can find that \begin{align*} n(d) &= \frac{1}{\sqrt{2\pi}}\exp(-d^2/2)\\ \frac{\partial C(V,\sigma)}{\partial \sigma} &= \sqrt{T}V n(d_1) \\ \frac{\partial^2 C(V,\sigma)}{\partial \sigma\partial V} &= \sqrt{T}n(d_1) + \sqrt{T}Vn'(d_1)\frac{\partial d_1}{\partial V} \\ &= \sqrt{T}n(d_1) + \sqrt{T}V(-d_1n(d_1)) \frac{1}{\sigma \sqrt{T}V} \\ &= \left(\sqrt{T} - d_1 / \sigma\right)n(d_1) \end{align*} but that is not useful as far as I gather since $C$ is strictly monotone in $V$ but $\frac{\partial C(V,\sigma)}{\partial \sigma}$ is not, so one cannot use results like here. Ultimately, I want to know if there are regions of $(S,\sigma)\in(0,\infty)^2$ where $$ \frac{\partial^2G(S,\sigma)}{\partial S\partial\sigma} \approx 0 $$ or where it tends to zero. I am not sure if this is an easier question. Update One can almost get a closed-form solution as follows.
$C$ is strictly monotone in the first argument so $$ \frac{\partial G(S, \sigma)}{\partial S} = \left( \left.\frac{\partial C(V,\sigma)}{\partial V} \right\vert_{V = G(S,\sigma)} \right)^{-1} $$ Further, we know that $$\frac{\partial C(V,\sigma)}{\partial V} = N(d_1(V, \sigma))$$ where we have made the dependence of $d_1$ on $(V,\sigma)$ explicit. Using the above, we find that $$ \frac{\partial G(S, \sigma)}{\partial S} = \frac{1}{N(d_1(G(S,\sigma), \sigma))} $$ Hence, \begin{align*} \frac{\partial^2 G(S, \sigma)}{\partial S \partial \sigma} = - \frac{n(d_1(G(S,\sigma), \sigma))}{N(d_1(G(S,\sigma), \sigma))^2} d_1'(G(S,\sigma), \sigma) \end{align*} where the derivative $d_1'$ is taken w.r.t. $\sigma$. The only thing we are missing here is $d_1'(G(S,\sigma), \sigma)$. We can re-write $d_1$ as $$ d_1(V,\sigma) = \frac{\log V - \log D + rT}{\sigma \sqrt{T}} + \frac{\sigma\sqrt T}{2} $$ So $$ d_1'(G(S,\sigma), \sigma) = - \frac{\log G(S,\sigma) - \log D + rT}{\sigma^2\sqrt T} + \frac{\sqrt T}{2} + \frac{G'(S,\sigma)}{\sigma\sqrt T G(S,\sigma)} $$ where $G'(S,\sigma) = \partial G(S,\sigma)/\partial\sigma$. This though leaves us with a $\partial G(S, \sigma) / \partial \sigma$ factor which does not appear to be nice. Finding an expression for this factor would be neat. R code to confirm the above:

###### assign values and functions
D <- .8
r <- .03
T. <- 3
library(DtD)
G <- function(S, sigma)
  get_underlying(S = S, D = D, T. = T., r = r, vol = sigma)
Gfunc <- function(par)
  G(par["S"], par["sigma"])
d1 <- function(V, sigma)
  (log(V) - log(D) + (r + sigma^2/2) * T.) / (sigma * sqrt(T.))

S <- seq(1, 3, length.out = 20)
sigma <- seq(.1, .5, length.out = 20)
xy <- expand.grid(S = S, sigma = sigma)

###### compute cross partial derivative by numerical differentiation
library(numDeriv)
r1 <- mapply(function(S, sigma)
  hessian(Gfunc, c(S = S, sigma = sigma))[2, 1],
  S = xy$S, sigma = xy$sigma)
dim(r1) <- c(length(S), length(sigma))
persp(S, sigma, r1, theta = 30, phi = 30, ticktype = "detailed")

###### compute cross partial derivative by numerical differentiation given d G / d S
r2 <- mapply(
  function(S, sigma)
    grad(function(sigma) pnorm(d1(G(S, sigma), sigma))^-1, sigma),
  S = xy$S, sigma = xy$sigma)

###### compute cross partial derivative given almost all parts
r3 <- mapply(
  function(S, sigma){
    V <- G(S, sigma)
    d_val <- d1(V, sigma)
    grad_fac <- grad(function(sigma) G(S, sigma), sigma)
    - (dnorm(d_val) / pnorm(d_val)^2) * (
      - (log(V) - log(D) + r * T.) / (sigma^2 * sqrt(T.)) +
        sqrt(T.) / 2 + grad_fac / (sigma * sqrt(T.) * V))
  },
  S = xy$S, sigma = xy$sigma)

###### check results
all.equal(c(r1), r2, tolerance = 1e-6)
#R [1] TRUE
all.equal(c(r1), r3, tolerance = 1e-6)
#R [1] TRUE
Table of Contents The Boundary of a Set in a Topological Space Examples 1 Recall from The Boundary of a Set in a Topological Space that if $(X, \tau)$ is a topological space and $A \subseteq X$ then a point $x \in X$ is said to be a boundary point of $A$ if for every $U \in \tau$ with $x \in U$ we have that $U$ intersects $A$ and $A^c$ nontrivially, i.e., $A \cap U \neq \emptyset$ and $A^c \cap U \neq \emptyset$. We noted that the set of all boundary points of $A$ is denoted $\partial A$. We also proved an important fact in that $\partial A$ is a closed set. We will now look at some examples regarding the boundary of a set. Example 1 Let $X = \{ a, b, c \}$ and let $\tau = \{ \emptyset, \{ a \}, \{ b, c \}, X \}$. Consider the set $A = \{ a \}$. Determine $\partial A$. Since $A = \{ a \}$ we see that $A^c = \{b, c \}$. First consider the point $a \in X$. The open neighbourhoods of this point are $\{ a \}$ and $X$. We have that $A^c \cap \{ a \} = \emptyset$. Therefore $a \not \in \partial A$. Now consider the point $b \in X$. The open neighbourhoods of this point are $\{ b, c \}$ and $X$. We have that $A \cap \{ b, c \} = \emptyset$, so $b \not \in \partial A$. Lastly, consider the point $c \in X$. The open neighbourhoods of this point are $\{b, c \}$ and $X$. We have that $A \cap \{ b, c \} = \emptyset$, so $c \not \in \partial A$. Therefore $\partial A = \emptyset$. Example 2 Let $X = \{ a, b, c, d \}$ and let $\tau = \{ \emptyset, \{ a \}, \{ b \}, \{ a, b \}, \{ b, d \}, \{a, b, d \}, X \}$. Consider the set $A = \{ a, b, c \}$. Determine $\partial A$. Since $A = \{ a, b, c \}$ we have that $A^c = \{ d \}$. First look at the point $a \in X$. The open neighbourhoods of $a$ are $\{ a \}$, $\{ a, b \}$, $\{a, b, d \}$, and $X$. Notice that $A^c \cap \{ a \} = \emptyset$. Therefore $a \not \in \partial A$. Now look at the point $b \in X$. The open neighbourhoods of $b$ are $\{ b \}$, $\{a, b \}$, $\{ b, d \}$, $\{a, b, d \}$, and $X$. Notice that $A^c \cap \{ b \} = \emptyset$.
Therefore $b \not \in \partial A$. Now look at the point $c \in X$. The only open neighbourhood of $c$ is $X$. We have that $A \cap X = \{a, b , c \}$ and $A^c \cap X = \{ d \}$ - both of which sets are nonempty. Therefore $c \in \partial A$. Lastly look at the point $d \in X$. The open neighbourhoods of $d$ are $\{ b, d \}$, $\{a, b, d \}$, and $X$. We have that $A \cap \{ b, d \} = \{ b \}$ and $A^c \cap \{ b, d \} = \{ d \}$; $A \cap \{a, b, d \} = \{a, b \}$ and $A^c \cap \{a, b, d \} = \{ d \}$; and $A \cap X = \{a, b, c \}$ and $A^c \cap X = \{ d \}$. All of these sets are nonempty. Therefore $d \in \partial A$. Hence $\partial A = \{ c, d \}$.
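For finite topologies like the two examples above, the boundary can also be computed by brute force directly from the definition. The following Python sketch (an illustrative aid, not part of the page) checks every open set containing each point and reproduces both answers.

```python
# Brute-force boundary computation for a finite topological space:
# x is a boundary point of A iff every open U containing x meets A and A^c.
def boundary(X, tau, A):
    Ac = X - A
    return {x for x in X
            if all(U & A and U & Ac
                   for U in tau if x in U)}

# Example 1
X1 = {'a', 'b', 'c'}
tau1 = [set(), {'a'}, {'b', 'c'}, X1]
print(sorted(boundary(X1, tau1, {'a'})))             # []

# Example 2
X2 = {'a', 'b', 'c', 'd'}
tau2 = [set(), {'a'}, {'b'}, {'a', 'b'},
        {'b', 'd'}, {'a', 'b', 'd'}, X2]
print(sorted(boundary(X2, tau2, {'a', 'b', 'c'})))   # ['c', 'd']
```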
Can someone explain the reasoning behind the answers given? We reviewed these in class, but I wasn't able to grasp the logic, especially c, d, and e. The domain of possible values for variables X and Y is {Jim, Ann, Sal, Pat, Tom}. The following facts define the values for which the child predicate is true; the child predicate is false for all other cases. Evaluate each expression using the domain values and predicates as defined and indicate if the expression is true or false. child(Ann, Jim) child(Sal, Jim) child(Pat, Ann) child(Tom, Sal) a. $(\forall X)$ child$(X,Jim)$ b. $(\exists X)\neg $child$(X,Jim)$ c. $(\forall X)(\exists Y)($child$(X,Jim) \rightarrow $child$(Y,X))$ d. $(\exists Y)(\forall X)($child$(X,Jim) \rightarrow $child$(Y,X))$ e. $(\exists X)(\forall Y)($child$(X,Jim) \rightarrow $child$(Y,X))$ a. FALSE (Reasoning: Any case that makes it false? Yes, so FALSE) b. TRUE (Reasoning: Any case that makes it true? Yes, so TRUE) c. TRUE d. FALSE e. TRUE
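One way to check the given answers is to evaluate each formula by brute force over the five-element domain. A sketch in Python (the `implies` helper encodes material implication, so an implication with a false antecedent is vacuously true — the key to c, d, and e):

```python
# Brute-force evaluation of the quantified formulas over the finite domain.
domain = ['Jim', 'Ann', 'Sal', 'Pat', 'Tom']
child = {('Ann', 'Jim'), ('Sal', 'Jim'), ('Pat', 'Ann'), ('Tom', 'Sal')}

def implies(p, q):
    return (not p) or q

a = all((x, 'Jim') in child for x in domain)
b = any((x, 'Jim') not in child for x in domain)
c = all(any(implies((x, 'Jim') in child, (y, x) in child) for y in domain)
        for x in domain)
d = any(all(implies((x, 'Jim') in child, (y, x) in child) for x in domain)
        for y in domain)
e = any(all(implies((x, 'Jim') in child, (y, x) in child) for y in domain)
        for x in domain)

print(a, b, c, d, e)  # False True True False True
```

Note that e is true simply because we can pick X = Jim: child(Jim, Jim) is false, so the implication holds vacuously for every Y; d fails because no single Y is a child of both Ann and Sal.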
Table of Contents Systems of First Order Ordinary Differential Equations Recall from the First Order Ordinary Differential Equations page that if $D \subseteq \mathbb{R}^2$ is a domain (a nonempty, open, connected subset of $\mathbb{R}^2$) and $f \in C(D, \mathbb{R})$ then a first order ordinary differential equation has the form $$x' = f(t, x)$$ Furthermore we said that a solution to a first order ordinary differential equation on an interval $J = (a, b)$ is a function $\phi \in C^1(J, \mathbb{R})$ (a continuously differentiable function) such that for all $t \in J$ we have that $(t, \phi(t)) \in D$ and $$\phi'(t) = f(t, \phi(t))$$ A solution to the initial value problem $x' = f(t, x)$ on $J$ with initial condition $x(\tau) = \xi$, where $\tau \in J$, is a solution $\phi$ to $x' = f(t, x)$ as above with the added condition that $\phi(\tau) = \xi$. We are now ready to define a system of first order ordinary differential equations. Definition: Let $D \subseteq \mathbb{R}^{n+1}$ be a domain and let $f_1, f_2, ..., f_n \in C(D, \mathbb{R})$. A System of $n$ First Order Ordinary Differential Equations is of the form $\displaystyle{\left\{\begin{matrix} x_1' = f_1(t, x_1, x_2, ..., x_n)\\ x_2' = f_2(t, x_1, x_2, ..., x_n)\\ \vdots \\ x_n' = f_n(t, x_1, x_2, ..., x_n) \end{matrix}\right.}$, sometimes abbreviated as $\{ x_i' = f_i(t, x_1, x_2, ..., x_n) \}$ where $i \in \{ 1, 2, ..., n \}$. Remember once again that we say that $D$ is a domain of $\mathbb{R}^{n+1}$ if $D$ is a nonempty, open, connected subset of $\mathbb{R}^{n+1}$. The following is an example of a system of 4 first order differential equations:(3) Here we have that $f_1, f_2, f_3, f_4 : \mathbb{R}^{n+1} \to \mathbb{R}$ are given explicitly by:(4) Note that while the functions $f_1, f_2, f_3, f_4$ are functions of $t, x_1, x_2, x_3, x_4$, they need not depend on all of those variables.
Definition: A Solution to a system of $n$ first order ordinary differential equations on an open interval $J = (a, b)$ is a collection of continuously differentiable functions $\phi_1, \phi_2, ..., \phi_n \in C^1(J, \mathbb{R})$ such that for all $t \in J$ we have that $(t, \phi_1(t), \phi_2(t), ..., \phi_n(t)) \in D$ and $\displaystyle{\left\{\begin{matrix} \phi_1'(t) = f_1(t, \phi_1(t), \phi_2(t), ..., \phi_n(t))\\ \phi_2'(t) = f_2(t, \phi_1(t), \phi_2(t), ..., \phi_n(t))\\ \vdots \\ \phi_n'(t) = f_n(t, \phi_1(t), \phi_2(t), ..., \phi_n(t)) \end{matrix}\right.}$ sometimes abbreviated $\{ \phi_i'(t) = f_i(t, \phi_1(t), \phi_2(t), ..., \phi_n(t)) \}$ where $i \in \{ 1, 2, ..., n \}$. Definition: An Initial Value Problem is a system of $n$ first order ordinary differential equations $\{ x_i' = f_i(t, x_1, x_2, ..., x_n) \}$, $i \in \{ 1, 2, ..., n \}$ with initial conditions $\{ x_i(\tau) = \xi_i \}$, $i \in \{ 1, 2, ..., n \}$ where $(\tau, \xi_1, \xi_2, ..., \xi_n) \in D$. A Solution to the initial value problem $\{ x_i' = f_i(t, x_1, x_2, ..., x_n) \}$, $\{ x_i(\tau) = \xi_i \}$, $i \in \{ 1, 2, ..., n \}$ on the open interval $J = (a, b)$ with $\tau \in J$ is a solution $\{ \phi_1, \phi_2, ..., \phi_n \}$ to $\{ x_i' = f_i(t, x_1, x_2, ..., x_n) \}$ with $\{ \phi_i(\tau) = \xi_i \}$ for all $i \in \{ 1, 2, ..., n \}$.
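Such a system can be approximated numerically. Below is a minimal forward-Euler sketch in Python; the particular system (a harmonic oscillator with $n = 2$) and the step size are illustrative choices, not from this page. Its exact solution is $\phi_1(t) = \cos t$, $\phi_2(t) = -\sin t$, which the final check compares against.

```python
import math

# Forward Euler for the system  x1' = x2,  x2' = -x1,
# with initial conditions x1(0) = 1, x2(0) = 0.
def euler(f, x0, t0, t1, h):
    t, x = t0, list(x0)
    while t < t1 - 1e-12:
        fx = f(t, x)
        x = [xi + h * fi for xi, fi in zip(x, fx)]
        t += h
    return x

f = lambda t, x: [x[1], -x[0]]
x1, x2 = euler(f, [1.0, 0.0], 0.0, 1.0, 1e-4)
print(abs(x1 - math.cos(1.0)) < 1e-3)  # True
```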
The Schottky defect (small shot effect) is named after the German physicist Walter H. Schottky, who was awarded the Royal Society's Hughes Medal in 1936. In his model he explains that the defect is formed in ionic crystals when oppositely charged ions leave their lattice sites, which leads to the creation of vacancies. These vacancies are created in oppositely charged pairs so that the crystal maintains a neutral charge. The model further explains that the surrounding atoms also move to occupy these vacancies. When the defect is found in non-ionic crystals, it is typically referred to as a lattice vacancy defect. The Schottky defect differs from the Frenkel defect: in a Schottky defect the atoms permanently leave the crystal, while in a Frenkel defect the atoms stay within the solid crystal, displaced to interstitial sites. We will further study the characteristics below. Definition Schottky defect is a type of point defect or imperfection in solids which is caused by a vacant position that is generated in a crystal lattice due to the atoms or ions moving out from the interior to the surface of the crystal. Characteristics Of Schottky Defect Some of the distinct characteristics of this defect are: There is a very small difference in size between cation and anion. Cation and anion both leave the solid crystal. Atoms also move out of the crystal permanently. Generally two vacancies are formed. The density of the solid decreases considerably. Examples It is a type of defect in crystals that mostly occurs in highly ionic compounds or highly coordinated compounds. The compound lattice has only a small difference in sizes between the anions and cations. Some common examples of salts where the Schottky defect is prominent include Sodium Chloride (NaCl), Potassium Chloride (KCl), Potassium Bromide (KBr), Caesium Chloride (CsCl) and Silver Bromide (AgBr).
Calculation The number of Schottky defects for an MX crystal can be calculated by the formula \(n_{s} \approx N \exp \left ( -\frac{\Delta H_{s}}{2RT} \right )\) where n<sub>s</sub> = number of Schottky defects per unit volume, ΔH<sub>s</sub> = enthalpy of defect formation, R = gas constant, T = absolute temperature (in K). We can calculate N by using the formula \(N = \frac{\text{density of the ionic compound} \times N_{A}}{\text{molar mass of the ionic compound}}\) where N<sub>A</sub> is Avogadro's number.
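Plugging numbers into the formula is straightforward. The Python sketch below uses a hypothetical enthalpy of formation and temperature purely for illustration (they are not data from this page); the point is that the exponential factor makes the defect concentration a tiny fraction of the number of sites.

```python
import math

# n_s ≈ N * exp(-ΔH_s / (2 R T)); all numbers below are illustrative.
R = 8.314         # gas constant, J/(mol K)
delta_H = 2.4e5   # assumed enthalpy of defect formation, J/mol
T = 1000.0        # assumed absolute temperature, K
N = 6.022e23      # taking one mole of lattice sites (Avogadro's number)

fraction = math.exp(-delta_H / (2 * R * T))
n_s = N * fraction
print(f"defect fraction = {fraction:.2e}, n_s = {n_s:.2e}")
```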
In my post Trigonometry Yoga, I discussed how defining sine and cosine as lengths of segments in a unit circle helps develop intuition for these functions. I learned the circle definitions of sine and cosine in my junior year of high school, in the class that would now be called pre-calculus (it was called “Trig Senior Math”). Two years earlier, I’d learned the triangle definitions of sine, cosine, and tangent in geometry class. I don’t remember any of my teachers ever mentioning a circle definition of the tangent function. The geometric definition of the tangent function, which predates the triangle definition, is the length of a segment tangent to the unit circle. The tangent really is a tangent! Just as for sine and cosine, this one-variable definition helps develop intuition. Here is the definition, followed by an applet to help you get a feel for it: Let OA be a radius of the unit circle, let B = (1,0), and let \( \theta =\angle BOA\). Let C be the intersection of \(\overrightarrow{OA}\) and the line x=1, i.e. the tangent to the unit circle at B. Then \(\tan \theta\) is the y-coordinate of C, i.e. the signed length of segment BC. Move the blue point below; the tangent is the length of the red segment. (If a label is getting in the way, right click and toggle “show label” from the menu). The circle definition of the tangent function leads to geometric illustrations of many standard properties and identities. (If this were my class, I would stop here and tell you to explore on your own and with others). Some things to notice: \(\left| \tan \theta \right|\) gets big as \(\theta\) approaches \(\pm 90{}^\circ \). \(\tan (\pm 90{}^\circ)\) is undefined, because at these angles, \(\overline{OA}\) is parallel to x=1, so the two lines don’t intersect, and point C doesn’t exist. As \(\theta\) approaches \(90{}^\circ\) from below, \(\tan \theta\) tends toward \(+\infty\); as \(\theta\) approaches \(-90{}^\circ\) from above, \(\tan \theta\) tends toward \(-\infty\).
\(\tan \theta\) is positive in the first and third quadrants, negative in the second and fourth quadrants. \(\tan \theta=\tan (\theta+180{}^\circ)\) — the angles \(\theta\) and \(\theta +180{}^\circ\) form the same line. Thus the period of the tangent function is \(180 {}^\circ = \pi\) radians. \(\tan \theta\) = \(- \tan (-\theta)\). Moving from \(\theta\) to \(-\theta\) reflects \(OC\) about the x-axis. \(\tan \theta\) is equal to the slope of OA (rise = \(\tan \theta\), run = 1), which is also equal to \(\dfrac{\sin\theta}{\cos\theta}\), as well as Opposite over Adjacent for angle \(\theta\) in right triangle CBO. \(\tan (45{}^\circ)=1\). When \(\theta=45{}^\circ\), triangle CBO is a 45-45-90 triangle, and OB=1. Similarly, \(\tan (-45{}^\circ)=-1\), etc. For small values of \(\theta\), \(\tan \theta\) is close to \(\sin \theta\), which is close to the arc length of AB, i.e. the measure of \(\theta\) in radians. If we define \(\arctan \theta\) as the function whose input is the signed length of BC and whose output is the angle \(\theta\) corresponding to that tangent length, then the domain of that function is the reals, and it makes sense to define the range as \(-90 {}^\circ< \theta <90{}^\circ\) (in radians \(-\pi/2<\theta < \pi/2\) and arctan’s output is an arc length). This range includes all the angles we need and avoids the discontinuity at \(\theta= \pm 90{}^\circ =\pm \pi/2\) radians. For \(\left| \theta \right|\leq 45{}^\circ\), \(\left| \tan \theta \right|\leq 1\). Half of the input values of \(\tan \theta\) give outputs with absolute values less than or equal to 1, and the other half give values on the rest of the number line. This mapping also occurs with fractions and slopes, but there’s something very compelling about seeing the lengths change dynamically. Applets like the one above could also help students develop intuition about slopes. \(\tan (180{}^\circ-\theta) = -\tan \theta\). We reflect BC over the x-axis to form \(BC'\).
Then \(\angle BOC'=\theta\) and \(\angle BOD =(180{}^\circ-\theta)\). \(BC'\) (the blue segment) is the tangent of \((180{}^\circ-\theta)\). \(\tan (\theta \pm 90{}^\circ)\) = \(-1/\tan \theta\). The picture below illustrates the geometry of this identity when \(\theta\) is in the first quadrant. The line formed at \(\theta + 90{}^\circ\) is perpendicular to OC and \(\triangle COB\sim \triangle ODB\). Thus \(\dfrac{BD}{OB}=\dfrac{OB}{BC}\), and with appropriate signs, \(\tan (\theta + 90{}^\circ)\) = \(-1/\tan \theta\). Since \(\tan \theta=\tan (\theta+180{}^\circ)\), \(\tan (\theta +90{}^\circ)=\tan(\theta-90{}^\circ)\). The applet below shows the geometry in all quadrants, and it gives a dynamic sense of the relationship between \(\tan\theta\) and \(\tan(-\theta)\). Again, move the blue point: Special Bonus: The Secant Function The signed length of the segment OC is called the secant function, \(\sec\theta\). Using similar triangles, we see that \(\sec \theta = \dfrac{1}{\cos \theta}\). The Pythagorean Theorem applied to \(\triangle COB\) shows that \(\tan^2\theta+1=\sec^2 \theta\). When the tangent function is big, so is the secant function, and when the tangent function is small, so is the secant function. Also \(\sec \theta\) is close to \(\pm 1\) when \(\theta\) is close to the x-axis and when \(\tan \theta\) is close to 0. The graphs of the two functions look nice together:
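The identities discussed above are easy to spot-check numerically. A short Python sketch (with an arbitrarily chosen first-quadrant angle, purely for illustration):

```python
import math

# Numeric spot-checks of the identities from the discussion.
theta = 0.7  # an arbitrary first-quadrant angle, in radians

sec = lambda t: 1 / math.cos(t)

assert math.isclose(math.tan(theta + math.pi), math.tan(theta))           # period pi
assert math.isclose(math.tan(math.pi - theta), -math.tan(theta))          # reflection
assert math.isclose(math.tan(theta + math.pi / 2), -1 / math.tan(theta))  # rotate 90 deg
assert math.isclose(math.tan(theta) ** 2 + 1, sec(theta) ** 2)            # Pythagorean
print("all identities hold at theta =", theta)
```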
The original renewal process has independent and identically distributed (i.i.d.) inter-renewal times $\{T_1, T_2, T_3, ...\}$. Thus, starting from time 0, renewals occur at times $\{T_1, T_1+T_2, T_1+T_2+T_3, ...\}$. Now fix a probability $p >0$. Independently include each renewal time in the new process $P_1$ with probability $p$. So we get new inter-renewal times $\{Z_1, Z_2, Z_3, ...\}$, where $Z_1 = \sum_{i=1}^GT_i$ and $G$ is an independent geometric random variable with success probability $p$, and $Z_2, Z_3, ...$ are i.i.d. That is because, when you generate them, you generate them independently but using the same probability law (so each is just a random sum of i.i.d. $T_i$ variables). If the rate of $P$ was $\lim_{t\rightarrow\infty} \frac{N(t)}{t}= \frac{1}{E[T_1]}=\lambda$ (with prob 1), the rate of $P_1$ is $\lim_{t\rightarrow\infty} \frac{N_1(t)}{t} = p\lambda$ (with prob 1). Here I am of course defining: \begin{align} &N(t) = \mbox{ Total number of renewals from $P$ during $[0,t]$}\\&N_1(t) = \mbox{ Total number of renewals from $P_1$ during $[0,t]$}\end{align} A simple example is when $T_i=1/\lambda$ for all $i \in \{1, 2, 3, ...\}$, for a given constant $\lambda>0$. So inter-arrival times are constant (and hence trivially i.i.d.). Then $N(t)$ is a deterministic staircase function and $E[N(t)]=N(t)$ for all $t$, and indeed $$\lim_{t\rightarrow\infty} \frac{N(t)}{t} = \lim_{t\rightarrow\infty} \frac{E[N(t)]}{t} = \lambda $$Probabilistically splitting this deterministic process $N(t)$ (of rate $\lambda$ arrivals/time) using a probability $p$ gives a random process $N_1(t)$ that has rate $p\lambda$ arrivals/time (and this new process indeed has i.i.d. inter-arrival times). On visualizing the renewal times: I imagine renewal times as if they are things that arrive to a system, like job arrivals in a queueing system. So the original renewal process $P$ can be drawn over a timeline with spikes arising at the renewal times (the times $\{T_1, T_1+T_2, T_1+T_2+T_3, ...\}$).
Then $N(t)$ counts the spikes up to time $t$. We can "probabilistically thin" this spiky process by independently including spikes with prob $p$, and throwing the others away. The thinned process $N_1(t)$ counts the number of included spikes, and so $N_1(t)\leq N(t)$ for all $t$. As an interesting side note: Consider $\{T_1, T_2, T_3, ...\}$ as any random sequence of inter-arrival times (not necessarily i.i.d.) and let $N(t)$ count the number of arrivals up to time $t$. Now probabilistically thin this process to a new process $N_1(t)$ by independently including each arrival with probability $p$. Then $E[N_1(t)] = pE[N(t)]$ for all $t\geq 0$ since: $$ E[N_1(t)] = E[E[N_1(t)|N(t)]] = E[pN(t)] = pE[N(t)] $$For example, if $E[N(t)]=\lambda t$ for all $t\geq 0$, then $E[N_1(t)]=p\lambda t$ for all $t\geq 0$. An example of such a process $N(t)$ that is not a Poisson process is this: Choose $T_1$ uniformly over $[0,1]$, then define $T_i=1$ for all $i\in\{2,3,4,...\}$.
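The deterministic-staircase example above is easy to simulate. The following Python sketch (parameter values are illustrative) thins a rate-$\lambda$ deterministic arrival stream with probability $p$ and checks that the empirical rate of the thinned process comes out close to $p\lambda$.

```python
import random

# Thin a deterministic renewal process of rate lam: arrivals at k/lam,
# each kept independently with probability p; thinned rate should be ~ p*lam.
random.seed(0)
lam, p = 2.0, 0.5
n = 20000                # number of original arrivals
t_max = n / lam          # arrivals occur at times 1/lam, 2/lam, ..., n/lam
kept = sum(random.random() < p for _ in range(n))
rate = kept / t_max
print(rate)  # close to p * lam = 1.0
```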
I need help understanding Mandelbrot and Van Ness' definition of fractional Brownian motion $ B_H( t , \omega ) - B_H( 0 , \omega ) = \frac{1}{\Gamma(H + \frac{1}{2})} \left( \int_{-\infty}^0 [(t - s)^{H - \frac{1}{2}} - (-s)^{H - \frac{1}{2}}] dB(s, \omega) + \int_0^t (t - s)^{H - \frac{1}{2}} dB(s, \omega) \right) $ (from "Fractional Brownian Motions, Fractional Noises and Applications" by Mandelbrot and Van Ness, 1968). All I've been able to find out is that the above equation is a moving average of past white noise, but I don't understand how that is so, or the motivation for the various terms.
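A discretization may help see the moving-average structure (a hedged sketch; the grid, truncation horizon, and step size are my own choices, not from the paper). Each Brownian increment $dB(s)$ is weighted by the kernel $(t-s)^{H-1/2}$, with the past part corrected by $-(-s)^{H-1/2}$ so the integral converges; for $H = 1/2$ the past weights vanish and ordinary Brownian motion is recovered:

```python
import math

def fbm_increment_weights(H, t, ds, past_horizon):
    """Riemann-sum weights for the Mandelbrot-Van Ness integrals at time t."""
    c = 1.0 / math.gamma(H + 0.5)
    weights = []
    # past part: s in [-past_horizon, 0), kernel (t-s)^(H-1/2) - (-s)^(H-1/2)
    s = -past_horizon
    while s < 0:
        weights.append(c * ((t - s) ** (H - 0.5) - (-s) ** (H - 0.5)))
        s += ds
    # recent part: s in [0, t), kernel (t-s)^(H-1/2)
    s = 0.0
    while s < t:
        weights.append(c * (t - s) ** (H - 0.5))
        s += ds
    return weights

# For H = 1/2 the past weights are exactly 0 and the recent weights exactly 1,
# so B_{1/2}(t) is just the sum of the Brownian increments on [0, t).
w = fbm_increment_weights(0.5, 1.0, 0.01, 5.0)
```

For $H \neq 1/2$ the recent weights decay (or grow) like a power law in the lag $t-s$, which is exactly the "moving average of past white noise" reading of the definition.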
Your mistake is simply the statement that the states $|j_1 m_1\rangle$ and $|j_2 m_2\rangle$ are the same physical state: these are abstract angular momentum labels, they aren't full descriptions of the state. The two states would only be the same when the two objects carrying these quantum numbers are indistinguishable in every way, like two spin-1/2 electrons in an S-wave in the He atom ground state. In this case, only the antisymmetric spin-0 combination survives, where the phase factor $(-1)^{j_1+j_2 + J}$ is $-1$. The two electrons are fermions, so you see the two states need to get a minus sign, and this is only possible when they are a spin-0 combination. There is no spin-1 version of the He4 ground state, because the two electrons can't have their spins aligned, since they are fermions. The way to understand the phase factor is through a few examples, and a better SU(2) representation theory. The examples are the vector combination laws: $$ A \cdot B $$ $$ A \times B $$ $$ (AB)_{ij} = {1\over 2}(A_i B_j + A_j B_i) - {1\over 3} A\cdot B\, \delta_{ij}$$ These are the spin-0, spin-1, and spin-2 parts of the product of two vectors, in SO(3) index notation. You can see that the spin-1 part is antisymmetric under interchange, and the spin-2 and spin-0 parts are symmetric. Likewise, the antisymmetric spin-1/2 spin-1/2 combination is the singlet, and the symmetric one is the triplet. This is easiest to see in SU(2) index notation, where $$ \epsilon_{ij} a^i b^j $$ is the singlet formed from SU(2) vectors $a^i$ and $b^i$, while the triplet is $$ (AB)^{ij} = a^i b^j + a^j b^i $$ which is symmetric. Beware that $(AB)^{12}$ is not normalized properly as compared to the usual textbook $|j,m\rangle$ presentation.
One of the states of the 2-tensor (the 2-index tensor of SU(2), the spin-1 object) is $$ |1,0\rangle $$ When represented as an SU(2) tensor, this has components $$ (AB)^{12} = (AB)^{21} = {1\over \sqrt 2 }$$ These numbers are determined by making sure that the tensor is normalized, and give you square-root-of-integer factors. The other states don't have these annoying factors, but these build up the Clebsch-Gordan coefficients. The representation theory of SU(2), when expressed in tensors, makes the $J=j_1+j_2$ representation by multiplying the two tensors for $j_1$ and $j_2$ without any epsilons. This makes a completely symmetric thing, where you get rid of the antisymmetric parts. As you step down in $J$, you get an $\epsilon$ tensor each time you go down, changing the symmetric/antisymmetric character. This way of doing the representation theory is the simplest way; it allows you to carry the Clebsch-Gordan coefficients in your head. It is described in detail in this answer: Mathematically, what is color charge? This post imported from StackExchange Physics at 2014-04-01 05:48 (UCT), posted by SE-user Ron Maimon
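A minimal numeric check of the exchange phases discussed above (my own sketch, using explicit two-spin product states rather than any CG library): the singlet picks up $(-1)^{j_1+j_2+J} = (-1)^{1/2+1/2+0} = -1$ under particle exchange, while the triplet picks up $(-1)^{1/2+1/2+1} = +1$:

```python
import math

# Two spin-1/2 states in the product basis |m1 m2>, ordered (++, +-, -+, --).
singlet = [0.0, 1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]   # |0,0>
triplet0 = [0.0, 1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]   # |1,0>

def swap(state):
    """Exchange the two particles: |m1 m2> -> |m2 m1>."""
    a, b, c, d = state
    return [a, c, b, d]

# Exchange phase: -1 for the singlet (J=0), +1 for the triplet (J=1).
print(swap(singlet) == [-x for x in singlet])   # True
print(swap(triplet0) == triplet0)               # True
```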
Westhoff, Andreas and Schmeling, Daniel and Bosbach, Johannes and Wagner, Claus (2010) OSCILLATIONS OF HEAT TRANSFER AND LARGE-SCALE CIRCULATION IN TURBULENT MIXED CONVECTION. In: Third Int. Symposium on Bifurcations and Instabilities in Fluid Dynamics, p. 34. The University of Nottingham. Third Int. Symposium on Bifurcations and Instabilities in Fluid Dynamics, 10.-13. Aug. 2009, Nottingham (UK). Abstract Mixed convection (MC) describes the transport of heat in fluids when forced convection (FC) and thermal convection (TC) coexist. It is a very frequently occurring flow condition, e.g. in the oceans, the atmosphere, indoor climatisation, or in many industrial processes and applications. MC can be characterised by the dimensionless parameters Rayleigh number $Ra \equiv \Delta T \beta g H^3 / \kappa \nu$, Reynolds number $Re \equiv U H / \nu$, Prandtl number $Pr \equiv \nu / \kappa$ and Archimedes number $Ar = Ra / (Re^2 \times Pr)$, which is the ratio of buoyancy to inertia forces. In this study we investigated the influence of torsional oscillations of the large-scale circulation (LSC) on the heat transfer at MC in a rectangular cavity. To cover a large range of $600 < Re < 3 \times 10^6$ and $1 \times 10^5 < Ra < 1 \times 10^{11}$, two convection cells with an aspect ratio of 1:1:5 (height:width:length) have been constructed, using air as working fluid. As characteristic length the height $H$ of the cell is chosen, and as characteristic velocity the spatially averaged inflow velocity $U$. The convection cells consist of a rectangular container with an air inlet at the top and an air outlet at the bottom. Inlet and outlet are located at the same side of the cell. They span the whole length of the cell and are constituted by rectangular channels. The bottom is equipped with a heated copper plate and the top with an aluminium heat exchanger with cooling fins.
The small cell with the dimensions $H = 100$ mm, $W = 100$ mm and $L = 500$ mm was designed to be operated under high-pressure conditions up to 100 bar. The large convection cell has been designed to work under ambient pressure with the same aspect ratio. However, its dimensions are scaled by a factor of 5. Item URL in elib: https://elib.dlr.de/66605/ Document Type: Conference or Workshop Item (Speech, Paper) Title: OSCILLATIONS OF HEAT TRANSFER AND LARGE-SCALE CIRCULATION IN TURBULENT MIXED CONVECTION Authors: Date: August 2010 Journal or Publication Title: Third Int. Symposium on Bifurcations and Instabilities in Fluid Dynamics Open Access: Yes Gold Open Access: No In SCOPUS: No In ISI Web of Science: No Page Range: p. 34 Publisher: The University of Nottingham Status: Published Keywords: mixed convection, turbulence, heat transfer, large-scale circulations, flow structure formation Event Title: Third Int. Symposium on Bifurcations and Instabilities in Fluid Dynamics Event Location: Nottingham (UK) Event Type: international Conference Event Dates: 10.-13. Aug. 2009 Organizer: School of Mathematical Sciences, University of Nottingham HGF - Research field: Aeronautics, Space and Transport HGF - Program: Aeronautics HGF - Program Themes: Aircraft Research (old) DLR - Research area: Aeronautics DLR - Program: L AR - Aircraft Research DLR - Research theme (Project): L - Systems & Cabin (old) Location: Göttingen Institutes and Institutions: Institute of Aerodynamics and Flow Technology > Fluid Systems Deposited By: Westhoff, Andreas Deposited On: 26 Nov 2010 09:58 Last Modified: 31 Jul 2019 19:29
I'm using the code ContourPlot3D[ NumericQ[ Integrate[ Piecewise[ {{Exp[-1/(1 - x^4 - y^4)]* Exp[I*ω*10*({Cos[ϕ], Sin[θ]} - {Cos[π/2], Sin[π/2]})], Sqrt[x^2 + y^2] < 1}, {0, Sqrt[x^2 + y^2] >= 1}}, PerformanceGoal -> "Speed"], {x, -2, 2}, {y, -2, 2}]], {ϕ, 0, 2*π}, {θ, 0, 2*π}, {ω, -2, 2}, PerformanceGoal -> "Speed", MaxRecursion -> 0] I had the parameters set higher before, namely {x,-10,10}, etc., and I didn't use MaxRecursion or PerformanceGoal, but after four days of running I still had no result. Even now, I have had it running for most of the day and still no result. Is this normal? Can I speed it up? Edit: Note that the piecewise function I'm trying to plot is: $$\frac{e^{10i\omega}}{40\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\left(-\frac{1}{1-x^{4}-y^{4}}\right)\exp\left(10i\omega\left(\begin{pmatrix} \cos\phi\\\sin\theta \end{pmatrix}-\begin{pmatrix} \cos\frac{\pi}{2}\\\sin\frac{\pi}{2} \end{pmatrix}\right)\begin{pmatrix} x\\y \end{pmatrix}\right)\,dx\,dy, \text{ if }\sqrt{x^2+y^2}<1\text{ and }0\text{ otherwise}.$$
User:Jan A. Sanders/An introduction to Lie algebra cohomology/Lecture 8 The trace and Killing form Let \(R\) be \(\mathbb{C}\) and \(\dim_\mathbb{C}\mathfrak{a}<\infty\ .\) Then define \(K_\mathfrak{a}\in C^2(\mathfrak{g},\mathbb{C})\) by \[ K_\mathfrak{a}(x,y)=\mathrm{tr}(d_1(x) d_1(y))\] In the case \(\mathfrak{a}=\mathfrak{g}\) and \(d_1=\mathrm{ad}\ ,\) this is called the Killing form. In general, one calls \(K_\mathfrak{a}\) the trace form. example - of a trace form Let \(\mathfrak{g}=\mathfrak{sl}_2\) and \(\mathfrak{a}=\mathbb{C}^2\ ,\) with the standard representation (see Lecture 1). Then \[ K_{\mathbb{C}^2}(M,M)=0, \quad K_{\mathbb{C}^2}(M,N)=1,\quad K_{\mathbb{C}^2}(M,H)=0,\] \[ K_{\mathbb{C}^2}(N,M)=1, \quad K_{\mathbb{C}^2}(N,N)=0,\quad K_{\mathbb{C}^2}(N,H)=0,\] \[ K_{\mathbb{C}^2}(H,M)=0, \quad K_{\mathbb{C}^2}(H,N)=0,\quad K_{\mathbb{C}^2}(H,H)=2.\] proposition - trace form symmetric \(K_\mathfrak{a}\) is symmetric. proof This follows from \(\mathrm{tr}(AB)=\mathrm{tr}(BA)\).\(\square\) proposition - trace form invariant \( K_\mathfrak{a} \) is \(\mathfrak{g}\)-invariant, that is, \(K_\mathfrak{a}\in C^2(\mathfrak{g},\mathbb{C})^\mathfrak{g}\ .\) proof Given the trivial action of \(\mathfrak{g}\) on \(\mathbb{C}\ ,\) one has \[ d_1^{2}(x)K_\mathfrak{a}(y,z)=-K_\mathfrak{a}([x,y],z)-K_\mathfrak{a}(y,[x,z])\] \[=-\mathrm{tr}(d_1([x,y]) d_1(z))-\mathrm{tr}(d_1(y) d_1([x,z]))\] \[=-\mathrm{tr}(d_1(x) d_1(y) d_1(z))+\mathrm{tr}(d_1(y) d_1(x) d_1(z)) -\mathrm{tr}(d_1(y) d_1(x)d_1(z))+\mathrm{tr}(d_1(y) d_1(z)d_1(x))\] \[=0\] by the cyclic invariance of the trace. \(\square\) proposition - \(d^2 K_\mathfrak{a} \) antisymmetric \[d^2 K_\mathfrak{a}\in C_{\wedge}^3(\mathfrak{g},\mathbb{C})\] proof From the \(\mathfrak{g}\)-invariance it follows that \[d^2 K_\mathfrak{a}(x,y,z)=K_\mathfrak{a}(x,[y,z])\] Furthermore, \[K_\mathfrak{a}(x,[z,y])=-K_\mathfrak{a}(x,[y,z])\] and \[K_\mathfrak{a}(z,[x,y])=-K_\mathfrak{a}(z,[y,x])=K_\mathfrak{a}([y,z],x)=K_\mathfrak{a}(x,[y,z])\]\(\square\) corollary - nontrivial third cohomology Let \(\mathfrak{g}\) be a Lie algebra.
Then \[[d^2 K_\mathfrak{a}]\in H_{\wedge}^3(\mathfrak{g},\mathbb{C})\] Observe that this class is not trivial, since \(K_\mathfrak{a}\) is symmetric, not antisymmetric. musical maps Let \(\mathfrak{g}^\star=C^1(\mathfrak{g},\mathbb{C})\) and define \( \flat: \mathfrak{g}\rightarrow \mathfrak{g}^\star\) by \[ \flat(x)(y)=K_\mathfrak{a}(x,y)\] proposition \[ \flat\in \mathrm{Hom}_\mathfrak{g}(\mathfrak{g},\mathfrak{g}^\star)\] proof \[ \flat([x,y])(z)=K_\mathfrak{a}([x,y],z)\] \[=-K_\mathfrak{a}(y,[x,z])\] \[=-\flat(y)([x,z])\] \[=d_1^{1}(x)\flat(y)(z)\] or \( \flat([x,y])=d_1^{1}(x)\flat(y)\quad \square\ .\) Define \[ \sharp:\mathfrak{g}^\star\rightarrow \mathfrak{g}\] by \[ K_\mathfrak{a}(\sharp(c_1),y)=c_1(y)\] Then \[ K_\mathfrak{a}(x,y)=\flat(x)(y)=K_\mathfrak{a}(\sharp(\flat(x)),y)\ ,\] or \(x-\sharp(\flat(x))\in \ker K_\mathfrak{a}\ .\) proposition \(\ker K_\mathfrak{a}\) is an ideal. proof Let \(y\in\ker K_\mathfrak{a}\ ,\) that is, \(K_\mathfrak{a}(x,y)=0\) for all \(x\in\mathfrak{g}\ .\) Then it follows from the invariance of \(K_\mathfrak{a}\) that \[ K_\mathfrak{a}([y,x],z)+K_\mathfrak{a}(y,[x,z])=0\] and therefore \(K_\mathfrak{a}([y,x],z)=0\) for all \(z\in\mathfrak{g}\ .\) This shows that \([\ker K_\mathfrak{a},\mathfrak{g}]\subset \ker K_\mathfrak{a}\ .\) The statement that \([\mathfrak{g},\ker K_\mathfrak{a}]\subset \ker K_\mathfrak{a}\) follows by a symmetry argument. \(\square\) definition A Lie algebra \(\mathfrak{g}\) is called simple if \([\mathfrak{g},\mathfrak{g}]\neq 0\) and \(\mathfrak{g}\) contains no ideals besides \(0\) and itself. proposition - simple Lie algebra If \(\mathfrak{g}\) is simple, then \(\mathfrak{g}=[\mathfrak{g},\mathfrak{g}]\ .\) proof \([\mathfrak{g},\mathfrak{g}]\neq 0\) is an ideal, so it must equal \(\mathfrak{g}\ .\) \(\square\) proposition If \( K_\mathfrak{a} \) is nonzero and \(\mathfrak{g}\) is simple, then \(\flat\) is injective.
proof Let \( x\in\ker\flat\ .\) Then \( 0=\flat(x)(y)=K_\mathfrak{a}(x,y)\) for all \(y\in\mathfrak{g}\ ,\) that is, \(x\in \ker K_\mathfrak{a}\ .\) But \( \ker K_\mathfrak{a}\) must be zero, so \(x=0\ .\) \(\square\) proposition Let \(\mathfrak{h}\) be an ideal in \(\mathfrak{g}\ .\) Define \[\mathfrak{h}^\perp=\{x\in\mathfrak{g}\,|\,K_\mathfrak{g}(x,y)=0 \quad \forall y\in \mathfrak{h}\}\] Then \(\mathfrak{h}^\perp\) is an ideal in \(\mathfrak{g}\ .\) proof Let \(g\in\mathfrak{g}\ ,\) \( h\in\mathfrak{h}\) and \(k\in\mathfrak{h}^\perp\ .\) Then \[K_\mathfrak{g}([g,k],h)=-K_\mathfrak{g}(k,[g,h])=0\] This shows that \([\mathfrak{g},\mathfrak{h}^\perp]\subset \mathfrak{h}^\perp\) and similarly \([\mathfrak{h}^\perp,\mathfrak{g}]\subset \mathfrak{h}^\perp\ .\) \(\square\) definition - derived series, solvable One defines a series of ideals of \(\mathfrak{g}\ ,\) the derived series, as follows: \[\mathfrak{g}^{(0)}=\mathfrak{g}\] \[\mathfrak{g}^{(i+1)}=[\mathfrak{g}^{(i)},\mathfrak{g}^{(i)}]\] If, for some \(n\in\mathbb{N}\ ,\) \(\mathfrak{g}^{(n)}=0\ ,\) then \(\mathfrak{g}\) is called solvable. well defined \( \mathfrak{g}^{(0)}\) is an ideal in \(\mathfrak{g}\ .\) Suppose that \(\mathfrak{g}^{(i)}\) is an ideal for \( i=0,\dots,n\ .\) Then \[ [\mathfrak{g},\mathfrak{g}^{(n+1)}]=[\mathfrak{g},[\mathfrak{g}^{(n)},\mathfrak{g}^{(n)}]]\] \[ \subset [[\mathfrak{g},\mathfrak{g}^{(n)}],\mathfrak{g}^{(n)}]+[\mathfrak{g}^{(n)},[\mathfrak{g},\mathfrak{g}^{(n)}]]\] \[\subset [\mathfrak{g}^{(n)},\mathfrak{g}^{(n)}]=\mathfrak{g}^{(n+1)}\] The inclusion \( [\mathfrak{g}^{(n+1)},\mathfrak{g}]\subset \mathfrak{g}^{(n+1)}\) follows in a similar way. By induction it follows that all the \(\mathfrak{g}^{(i)}\)'s are ideals in \(\mathfrak{g}\ .\) corollary For \(i\leq j \ ,\) \(\mathfrak{g}^{(j)}\) is an ideal in \(\mathfrak{g}^{(i)}\ .\) remark If \(\mathfrak{g}\) is nonzero and solvable (that is, \(\mathfrak{g}^{(n)}=0\) for some \(n\)), then it contains a nonzero abelian ideal (namely \(\mathfrak{g}^{(n-1)}\) for the smallest such \(n\)).
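A concrete instance of the derived series (my own sketch, not from the lecture): take the two-dimensional nonabelian Lie algebra with basis \(x, y\) and bracket \([x,y]=y\). Then \(\mathfrak{g}^{(1)}\) is spanned by \(y\) and \(\mathfrak{g}^{(2)}=0\), so this algebra is solvable:

```python
# Derived series of the 2-dim nonabelian Lie algebra: basis (x, y), [x, y] = y.
# Elements are coordinate vectors (a, b) meaning a*x + b*y.

def bracket(u, v):
    """[a1*x + b1*y, a2*x + b2*y] = (a1*b2 - a2*b1) * y."""
    return (0.0, u[0] * v[1] - v[0] * u[1])

def derived_subalgebra(basis):
    """Span of all brackets of basis vectors, reduced to a basis (here: 0 or y)."""
    brackets = [bracket(u, v) for u in basis for v in basis]
    nonzero = [w for w in brackets if any(abs(c) > 1e-12 for c in w)]
    # In this algebra every nonzero bracket is a multiple of y, so one vector spans.
    return nonzero[:1]

g0 = [(1.0, 0.0), (0.0, 1.0)]        # g^(0) = g, spanned by x and y
g1 = derived_subalgebra(g0)          # g^(1) = span{y}
g2 = derived_subalgebra(g1)          # g^(2) = 0
print(len(g0), len(g1), len(g2))     # 2 1 0  -> solvable in two steps
```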
proposition - solvable If \(\mathfrak{g}\) is solvable, then so are all its subalgebras and homomorphic images. proof Let \( \mathfrak{h}\) be a subalgebra. Then \( \mathfrak{h}^{(0)}\subset\mathfrak{g}^{(0)}\ .\) Assume \(\mathfrak{h}^{(i)}\subset\mathfrak{g}^{(i)}\ .\) Then \[ \mathfrak{h}^{(i+1)}=[\mathfrak{h}^{(i)},\mathfrak{h}^{(i)}]\subset [\mathfrak{g}^{(i)},\mathfrak{g}^{(i)}]=\mathfrak{g}^{(i+1)}\] and the statement is proved by induction. Similarly, let \(\phi:\mathfrak{g}\rightarrow \mathfrak{h}\) be surjective, and assume \(\phi:\mathfrak{g}^{(i)}\rightarrow \mathfrak{h}^{(i)}\) to be surjective. Then \[\phi(\mathfrak{g}^{(i+1)})=\phi([\mathfrak{g}^{(i)},\mathfrak{g}^{(i)}])=[\phi(\mathfrak{g}^{(i)}),\phi(\mathfrak{g}^{(i)})]= [\mathfrak{h}^{(i)},\mathfrak{h}^{(i)}]=\mathfrak{h}^{(i+1)}\] \(\square\) proposition - solvable quotient If \(\mathfrak{h}\) is a solvable ideal such that \(\mathfrak{g}/\mathfrak{h}\) is solvable, then \(\mathfrak{g}\) is solvable. proof Say \((\mathfrak{g}/\mathfrak{h})^{(n)}=0\ .\) Let \(\pi:\mathfrak{g}\rightarrow \mathfrak{g}/\mathfrak{h}\) be the canonical projection. Then \(\pi(\mathfrak{g}^{(n)})=(\mathfrak{g}/\mathfrak{h})^{(n)}=0\ ,\) or \(\mathfrak{g}^{(n)}\subset \mathfrak{h}\ .\) Since \(\mathfrak{h}^{(m)}=0\ ,\) \(\mathfrak{g}^{(n+m)}=(\mathfrak{g}^{(n)})^{(m)}\subset \mathfrak{h}^{(m)}=0\ ,\) implying the statement. \(\square\) proposition If \(\mathfrak{h}, \mathfrak{k}\) are solvable ideals of \(\mathfrak{g}\ ,\) then so is \(\mathfrak{h}+\mathfrak{k}\ .\) proof One has \[ (\mathfrak{h}+ \mathfrak{k})/\mathfrak{k}\equiv \mathfrak{h}/(\mathfrak{h}\cap \mathfrak{k}) \] Since \(\mathfrak{h}\) is solvable, so is \(\mathfrak{h}/(\mathfrak{h}\cap \mathfrak{k})\ .\) But then \(\mathfrak{h}+\mathfrak{k}\) is solvable by the previous proposition, since \(\mathfrak{k}\) is solvable.
proposition - radical If \(\dim \mathfrak{g}<\infty\ ,\) there exists a unique maximal solvable ideal in \(\mathfrak{g}\ ,\) the radical of \(\mathfrak{g}\ ,\) denoted by \( \mathrm{Rad\ }\mathfrak{g}\ .\) proof Let \(\mathfrak{s}\) be a maximal solvable ideal in \(\mathfrak{g}\ .\) Suppose \(\mathfrak{h}\) is another solvable ideal. Then \(\mathfrak{s}+\mathfrak{h}\supset \mathfrak{s}\) is solvable, and by maximality, \(\mathfrak{s}+\mathfrak{h}= \mathfrak{s}\ ,\) that is, \(\mathfrak{h}\subset\mathfrak{s}\ .\) \(\square\) definition - semisimple A Lie algebra \(\mathfrak{g}\) is called semisimple if \(\mathrm{Rad\ }\mathfrak{g}=0\ .\) proposition - simple implies semisimple If \(\mathfrak{g}\) is simple, it is semisimple. proof For a simple Lie algebra the derived series is stationary, that is, \(\mathfrak{g}^{(i)}=\mathfrak{g}\) for all \(i\in\mathbb{N}\ ,\) so \(\mathfrak{g}\) itself is not solvable. The only other possible ideal is \(0\ ,\) so this must be \(\mathrm{Rad\ }\mathfrak{g}\ .\) \(\square\) proposition \(\mathfrak{g}/\mathrm{Rad\ }\mathfrak{g}\) is semisimple. proof Let \([\mathfrak{h}]\) be a nonzero solvable ideal in \(\mathfrak{g}/\mathrm{Rad\ }\mathfrak{g}\ .\) Then \(\mathfrak{h}+\mathrm{Rad\ }\mathfrak{g}\) would be a solvable ideal strictly containing \(\mathrm{Rad\ }\mathfrak{g}\ ,\) in contradiction with its maximality. Thus \(\mathfrak{h}\subset \mathrm{Rad\ }\mathfrak{g}\ ,\) that is, \([\mathfrak{h}]\) is the zero ideal in \(\mathfrak{g}/\mathrm{Rad\ }\mathfrak{g}\ .\) \(\square\) proposition If \(\ker K_\mathfrak{g}=0\ ,\) then \(\mathfrak{g}\) is semisimple. proof Let \(\mathfrak{h}\) be an abelian ideal of \(\mathfrak{g}\ .\) Take \(h\in\mathfrak{h}, g\in\mathfrak{g}\ .\) Then \(\mathrm{ad}(h)\mathrm{ad}(g)\) maps \( \mathfrak{g}\) to \(\mathfrak{h}\ .\) Thus \( (\mathrm{ad}(h)\mathrm{ad}(g))^2=0\ .\) This implies that \[ K_\mathfrak{g}(h,g)=\mathrm{tr}(\mathrm{ad}(h)\mathrm{ad}(g))=0\] In other words, \(\mathfrak{h}\subset\ker K_\mathfrak{g}=0\ .\) If there are no nonzero abelian ideals, then there are no solvable ideals besides \(0\ ,\) that is, \(\mathfrak{g}\) is semisimple.
theorem - common eigenvector Let \(\mathfrak{g}\) be a solvable subalgebra of \(\mathfrak{gl}(\mathfrak{a})\ ,\) \(\dim\mathfrak{a}<\infty\ .\) If \(\mathfrak{a}\neq 0\ ,\) then \(\mathfrak{a}\) contains a common eigenvector for all endomorphisms in \(\mathfrak{g}\ .\) proof Induction on \(\dim\mathfrak{g}\ .\) Since \(\mathfrak{g}\) is solvable, it properly contains \(\mathfrak{g}^{(1)}=[\mathfrak{g},\mathfrak{g}]\ ,\) for otherwise \(\mathfrak{g}^{(i)}=\mathfrak{g}\) for all \( i\in\mathbb{N}\ .\) Since \(\mathfrak{g}/[\mathfrak{g},\mathfrak{g}]\) is abelian, its subspaces are ideals. Take a subspace of codimension one. Then the inverse image \(\mathfrak{h}\) in \(\mathfrak{g}\) is an ideal of codimension one which includes \([\mathfrak{g},\mathfrak{g}]\ .\) \( \mathfrak{h}\) is solvable, and by the induction assumption there exists a vector \(a\in\mathfrak{a}\) such that \(a \) is an eigenvector for each \(h\in\mathfrak{h}\ ,\) that is, \[ h a=\lambda(h)a,\quad\lambda\in C^1(\mathfrak{h},\mathbb{C})\] (The exceptional case here is when \(\dim \mathfrak{h}=0\ .\) In that case, \(\mathfrak{g}\) is one-dimensional and abelian, so one takes an eigenvector of a generator of \(\mathfrak{g}\ .\)) Let \[\mathcal{W}=\{a\in\mathfrak{a}\,|\,x a=\lambda(x)a \quad \forall x\in \mathfrak{h}\}\] Now for \(x\in\mathfrak{g}\ ,\) \(y\in\mathfrak{h}\) and \(w\in\mathcal{W}\) one finds \[ y x w=x y w-[x,y] w=\lambda(y) x w-\lambda([x,y])w\ .\] If one can prove that \(\lambda([x,y])=0\ ,\) then \(\mathcal{W}\) is invariant under the action of \(\mathfrak{g}\ .\) Fix \(x\in \mathfrak{g}\ ,\) \(w\in\mathcal{W}\ .\) Let \( n>0 \) be the smallest integer such that \(w, xw, \dots, x^n w\) are linearly dependent.
Let \(\mathcal{W}_0=0\) and let \(\mathcal{W}_i\) be the subspace of \(\mathfrak{a}\) spanned by \(w, xw,\dots, x^{i-1} w\ .\) It follows that \(\dim\mathcal{W}_n=n\) and \(\mathcal{W}_{n+i}=\mathcal{W}_n\) for \(i\geq 0\ .\) Each \(\mathcal{W}_i\) is invariant under \(y\in\mathfrak{h}\ .\) The matrix of \(y\) is upper triangular with eigenvalue \(\lambda(y)\) on the diagonal. This implies \(\mathrm{tr}_{\mathcal{W}_i}(y)=i\lambda(y)\ .\) Since \([x,y]\in\mathfrak{h}\ ,\) one also has \[\mathrm{tr}_{\mathcal{W}_n}([x,y])=n\lambda([x,y])\] Both \(x\) and \(y\) leave \(\mathcal{W}_n\) invariant, so the trace of \([x,y]\) on \(\mathcal{W}_n\) must be zero. Thus \(n\lambda([x,y])=0\ ,\) hence \(\lambda([x,y])=0\ .\) This shows that \(\mathcal{W}\) is invariant under the action of \(\mathfrak{g}\ .\) Write \(\mathfrak{g}=\mathfrak{h}+\mathbb{C} z\ .\) Let \(w_0 \in\mathcal{W}\) be an eigenvector of \(z\) (acting on \(\mathcal{W}\)). Then \(w_0\) is a common eigenvector of \(\mathfrak{g}\ .\) \(\square\) definition - flag Let \(\mathfrak{a}\) be a finite dimensional vector space (\(\dim\mathfrak{a}=n\)). A flag is a chain of subspaces \[0=\mathfrak{a}_0\subset\mathfrak{a}_1\subset\dots\subset\mathfrak{a}_n=\mathfrak{a},\quad \dim\mathfrak{a}_i=i\] If \(x\in\mathrm{End}(\mathfrak{a})\ ,\) one says that \( x\) leaves the flag invariant if \(x \mathfrak{a}_i\subset \mathfrak{a}_i\) for \(i=1,\dots,n\ .\) theorem (Lie) Let \(\mathfrak{g}\) be a solvable subalgebra of \(\mathfrak{gl}(\mathfrak{a}), \dim\mathfrak{a}=n<\infty\ .\) Then \( \mathfrak{g}\) leaves a flag in \(\mathfrak{a}\) invariant. proof It follows from the proof above that there exists a codimension-one \( \mathfrak{g}\)-invariant subspace. Let that be \(\mathfrak{a}_{n-1}\ .\) Repeat the argument starting with \(\mathfrak{a}_{n-1}\) instead of \(\mathfrak{a}_{n}\) and use induction. \(\square\) lemma - flag of ideals Let \( \mathfrak{g}\) be solvable.
Then there exists a flag of ideals \[ 0=\mathfrak{g}_0\subset\mathfrak{g}_1\subset\dots\subset\mathfrak{g}_n=\mathfrak{g},\quad \dim\mathfrak{g}_i=i\] proof Let \(d_1:\mathfrak{g}\rightarrow \mathfrak{gl}(\mathfrak{a})\) be a finite dimensional representation of \(\mathfrak{g}\ .\) Then \(d_1(\mathfrak{g})\) is solvable, and stabilizes a flag in \(\mathfrak{a}\ .\) Take \(\mathfrak{a}=\mathfrak{g}\) and \(d_1=\mathrm{ad}\ ;\) then the \(\mathfrak{g}_i\) are ideals (since they are \(\mathfrak{g}\)-invariant) and they obey the flag condition. \(\square\) lemma Let \(\mathfrak{g}\) be solvable. Then \(x\in\mathfrak{g}^{(1)}\) implies that \( \mathrm{ad}_\mathfrak{g}(x)\) is nilpotent. proof From the flag of ideals construct a basis. Relative to this basis the matrix of \(\mathrm{ad}_\mathfrak{g}(y)\ ,\) \(y\in \mathfrak{g}\ ,\) is upper triangular. Thus the matrix of \(\mathrm{ad}_\mathfrak{g}(x)\ ,\) \(x\in \mathfrak{g}^{(1)}\ ,\) is strictly upper triangular, and therefore nilpotent. \(\square\) remark In the next lecture it is shown that this implies that \(\mathfrak{g}^{(1)}\) is nilpotent (to be defined).
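Returning to the trace-form example at the start of this lecture: the stated values (e.g. \(K(M,N)=1\), \(K(H,H)=2\), all other pairings \(0\)) can be checked directly. A sketch, assuming the standard-representation matrices \(M=E_{12}\), \(N=E_{21}\), \(H=\mathrm{diag}(1,-1)\) (Lecture 1's exact conventions are not shown in this excerpt):

```python
# Trace form K(a, b) = tr(a b) for sl2 in the standard 2-dim representation.
# Assumed basis (not shown in this excerpt): M = E12, N = E21, H = diag(1, -1).
M = [[0, 1], [0, 0]]
N = [[0, 0], [1, 0]]
H = [[1, 0], [0, -1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def K(a, b):
    p = matmul(a, b)
    return p[0][0] + p[1][1]   # trace

table = {(x, y): K(a, b) for (x, a) in [("M", M), ("N", N), ("H", H)]
                         for (y, b) in [("M", M), ("N", N), ("H", H)]}
print(table)   # K(M,N) = K(N,M) = 1, K(H,H) = 2, all other entries 0
```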
I am stuck on a detail in a constrained optimization problem: Question Assume that the objective function is continuous on its domain $D$, but at some points $Z \subseteq D$ it is not differentiable. Further assume that the constraints implicitly force some or all feasible and optimal solutions to take values in $Z$. Doesn't this somehow contradict the Lagrangian approach? How does one deal with this? Example Think of the following optimization problem (with the continuous extension $0 \cdot \log 0 := 0$) on $n$ variables $x_1, \ldots, x_n$: minimize $\sum_i x_i \cdot \log x_i$ subject to $A~x = b$ and all $x_i \geq 0$, for some given matrix $A \in \{0,1\}^{n \times n}$ and some positive vector $b$. It has a unique optimum (if any exists), because the objective is strictly convex and the constraints are convex. The Lagrangian $\Lambda(x, \mathbf{\lambda}) = \sum_i~ x_i \cdot \log x_i - \lambda^T ( A~x - b)$ provides the derivatives $\nabla_{x_i}\Lambda(x,\lambda) = \log x_i + 1 - \sum_k \lambda_k a_{ki}$ and $\nabla_{\lambda_i}\Lambda(x,\lambda) = (\sum_k a_{ik} x_k) - b_i$. Probably one should not simply continue the calculation and just ignore the fact that the derivatives are not defined for $x_i=0$? Indeed, the entries of $A$ may be such that every feasible solution $f$ forces some of the $x_i$'s to $0$; that is, it determines some index set $I(f) \subseteq \{1, \ldots, n\}$ with $x_i = 0$ for all $i \in I(f)$. For any such $x_i$ the derivative is $\nabla_{x_i}\Lambda(x,\lambda) = -\infty$. Hence for each feasible solution separately, "in retrospect" we could set all the $x_i$'s from $I(f)$ to zero already from the beginning and then define $\nabla_{x_i}\Lambda(x,\lambda) := 0$ for them. But this seems to be a painful argument, and it somehow contradicts the clean formulation of the Lagrangian as a function of the free variables $x_i$. So what is the right approach here? Some rough ideas: Can one, for example, argue somehow by the continuity of the derivatives?
Or should one study the optimization problem on every index subset $S \subseteq \{1, \ldots, n\}$ separately, constrained to $x_i > 0$ on $S$, in order to then find the optimum among all of them? Or can one extend the Lagrangian approach to a weaker notion of derivatives? Or are there related techniques that work directly on the KKT conditions without explicitly taking the derivatives?
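One small numerical illustration of the behavior in question (my own sketch, not from the question): on a toy instance with the single constraint $x_1+x_2=1$, the entropy term's slope tends to $-\infty$ at the boundary, which pushes any coordinate not forced to zero strictly into the interior — the minimizer of $t\log t + (1-t)\log(1-t)$ sits at $t=1/2$, not at a boundary point:

```python
import math

def neg_entropy(x):
    """x*log(x) with the continuous extension 0*log(0) = 0."""
    return 0.0 if x == 0.0 else x * math.log(x)

def objective(t):
    # two-variable problem x1 = t, x2 = 1 - t with constraint x1 + x2 = 1
    return neg_entropy(t) + neg_entropy(1.0 - t)

# simple grid search over the feasible interval [0, 1]
ts = [i / 10000 for i in range(10001)]
best = min(ts, key=objective)
print(best)   # 0.5: the minimizer is interior, despite the -inf slope at 0 and 1
```

This is why, in practice, only the coordinates that the constraints *force* to zero ever end up on the boundary; the usual remedy is to restrict to the forced support, as sketched in the question.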
Let's suppose we have two variables $(\alpha, \beta)$ entering two functions $k_1, k_2$ that can be defined in terms of matrix relations: $$ k_1 = \left| \xi^T \mathcal{F}^T \mathcal{F} \xi \right| $$ $$ k_2 = \left| \xi^T \mathcal{G}^T \mathcal{G} \xi \right| $$ where: $$ \mathcal{F}= \left( \begin{matrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{matrix} \right) $$ $$ \mathcal{G}= \left( \begin{matrix} d_{11} & d_{12} \\ d_{21} & d_{22} \end{matrix} \right) $$ $$ \xi = \left( \begin{matrix} \alpha \\ \beta \end{matrix} \right) $$ Suppose that the determinants of $\mathcal{F}$ and $\mathcal{G}$ are non-zero. It is possible to invert the relation such that: $$ \alpha = f(k_1, k_2) $$ $$ \beta = g(k_1, k_2) $$ However, the multiple solutions returned by the CAS (Mathematica) are extremely complicated. Is there a compact way to express the functions $f$ and $g$, supposing we are only interested in positive values of $k_1$ and $k_2$?
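If a closed form is not strictly required, a numeric inversion avoids the complicated CAS output entirely (a hedged sketch with illustrative matrix entries of my own choosing): each $k_i = \xi^T M_i \xi$ with $M_1 = \mathcal{F}^T\mathcal{F}$, $M_2 = \mathcal{G}^T\mathcal{G}$ is a smooth quadratic form, so a two-dimensional Newton iteration inverts $(\alpha,\beta)\mapsto(k_1,k_2)$ quickly, up to the inherent sign ambiguity $\xi \to -\xi$:

```python
# Newton inversion of (alpha, beta) -> (k1, k2) for k_i = xi^T M_i xi,
# with M1 = F^T F and M2 = G^T G. Matrix entries below are illustrative only.
F = [[1.0, 2.0], [0.0, 1.0]]
G = [[2.0, -1.0], [1.0, 1.0]]

def mat_tm(M):
    """M^T M for a 2x2 matrix."""
    return [[M[0][0]*M[0][0] + M[1][0]*M[1][0], M[0][0]*M[0][1] + M[1][0]*M[1][1]],
            [M[0][1]*M[0][0] + M[1][1]*M[1][0], M[0][1]*M[0][1] + M[1][1]*M[1][1]]]

M1, M2 = mat_tm(F), mat_tm(G)

def quad(M, a, b):
    return M[0][0]*a*a + (M[0][1] + M[1][0])*a*b + M[1][1]*b*b

def invert(k1, k2, a=1.0, b=1.0, iters=50):
    """Newton's method on (quad(M1)-k1, quad(M2)-k2) = 0 from a positive start."""
    for _ in range(iters):
        r1, r2 = quad(M1, a, b) - k1, quad(M2, a, b) - k2
        # Jacobian of the two quadratic forms
        j11 = 2*M1[0][0]*a + (M1[0][1] + M1[1][0])*b
        j12 = 2*M1[1][1]*b + (M1[0][1] + M1[1][0])*a
        j21 = 2*M2[0][0]*a + (M2[0][1] + M2[1][0])*b
        j22 = 2*M2[1][1]*b + (M2[0][1] + M2[1][0])*a
        det = j11*j22 - j12*j21
        a -= ( j22*r1 - j12*r2) / det
        b -= (-j21*r1 + j11*r2) / det
    return a, b

# Round-trip: forward map, then recover (alpha, beta) up to overall sign.
alpha, beta = 0.7, 0.3
k1, k2 = quad(M1, alpha, beta), quad(M2, alpha, beta)
a, b = invert(k1, k2)
```

Since the level sets are two conics, there can be up to four intersection points; a positive starting guess selects the branch with $\alpha,\beta>0$ here.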
This is in fact a tricky matter. As you say, one way is to calculate delta by an analytic formula, i.e. calculate the first derivative of the option pricing formula you are using with respect to the underlying's spot price. The second way is to do it numerically, i.e. change the spot price by a small value $dS$, calculate the value of the option and then ...

Except in highly unusual cases, financial PDEs lack analytic solutions. The mathematical tools used are Monte Carlo, plus the usual ones for solving PDEs on grids, almost always one of the following: trees, for very simple cases; explicit finite differencing, for throwaway projects or very specific cases; implicit or Crank-Nicolson finite differencing for ...

Not so fast! I think it is of the utmost importance to first examine whether the data points are real outliers, i.e. noise that is contaminating the data, or perhaps the most important pieces of the time series! For example, when you look at US stock market data of the last 50 years and remove only the ten biggest moves because they are outliers, you get a ...

Fastest method is a pre-generated lookup table with a carefully selected in-memory structure so you don't get too many CPU cache misses (avoiding the memory latency). If you want absolute speed, you can also go for a hardware-specific implementation (GPU, FPGA).

I believe this is a nice paper for you to start with. Check out what references it cited and who cited it. Markov Chain Monte Carlo Analysis of Option Pricing Models: "Use the Markov Chain Monte Carlo (MCMC) method to investigate a large class of continuous-time option pricing models. These include: constant-volatility, stochastic volatility, price jump-...

By definition the fair value of an option is given by an expectation value of the payoff, $\mathbf{E}\left[\textrm{payoff}(\textit{paths})\right]$. The probability distribution of the paths is the risk neutral measure. This is just an integral expression of the form you wrote.
This applies to all option prices. Many options are, of course, special in the ...

The method described in Hallerbach (2004) always worked well for me. We derive an estimator for Black-Scholes-Merton implied volatility that, when compared to the familiar Corrado & Miller [JBaF, 1996] estimator, has substantially higher approximation accuracy and extends over a wider region of moneyness.

You may want to look into these two open source projects: QuantLib, which is aimed at providing a comprehensive software framework for quantitative finance (written in C++), and JQuantLib, the 100% Java implementation based on the first project.

Check this document out: link to pdf file. Also, if you are concerned with the actual performance of your code and want to implement efficient code, then the GSL libraries would be the first place to look: link. It's got everything you need.

For such high-dimensional path problems you will want to use the Morokov technique (you can find the paper online), which takes QR samples for the "important" dimensions and then reverts to pseudorandom for the less important dimensions in an interest rate problem remarkably similar to yours. (Similar principles apply to using QR sequences in factor model ...

As far as PDEs (deterministic) are concerned, we have the notion of a "strong solution" (directly solving the differential operator in the strong formulation of the problem) and the "weak solution" that deals with a weak formulation of the problem. For the strong formulation, finite differences are the way to go, since they are the natural discretization of ...

Let's Be Rational uses exactly two iterations to give full machine accuracy for all inputs.
It can be viewed as a three-stage analytical formula if you like. The code is free to download at www.jaeckel.org. Rgds, Peter

I am not sure if I understood your question correctly, but I will try to answer it anyway. If you have a standard normal random vector $z \sim N(\mathbb{0},I_n)$ (where $z, 0 \in \mathbb{R}^{n\times1}$ and $I_n \in \mathbb{R}^{n\times n}$ is the identity matrix) and you want to transform it into a multivariate normal $x \sim N(\mu,\Sigma)$, you do it the ...

To keep things simple, let's assume you have a perfect random number generator (i.e. I will discuss only the statistics, not the numerics of the problem). I will also focus on the practical matter and gloss over some mathematical details. From a practical perspective, "convergence" means that you will never get an exact answer from Monte Carlo but ...

FDMs represent PDEs over a simple grid shape; the different implementations are just different recurrence relations to approximate the solutions to the PDE between boundary values (e.g., for options pricing, $T=[t_\mathrm{now},t_\mathrm{maturity}]$ and $S=[\mathrm{deep\_itm},\mathrm{deep\_otm}]$). FEM is a general name for a lot of different ...

When you decide if the performance improvement is worth it, you can add these to the downside of using single precision: the result of your basic B-S pricer will eventually need to be multiplied with a notional and maybe a discount factor; for a sufficiently large notional you will see different results than the one calculated using double precision. Is that ...

C is not used for any particular reason in numerical optimizations other than for legacy reasons. However, there are areas where C is preferred over C++, though even C is not the preferred language of choice. To mind comes programming FPGAs. Though VHDL and Verilog are by far the standards, "behavioral synthesis" allows one to utilize C or C relatives such as ...

Here are a few more papers about MCMC and alike methods for derivative pricing and co.
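The truncated multivariate-normal answer above presumably continues with the standard construction $x = \mu + Lz$, where $L$ is the Cholesky factor of $\Sigma$. That completion is my own assumption about the elided text; a minimal 2×2 sketch:

```python
import math

# Standard construction (an assumption about the truncated answer above):
# if z ~ N(0, I) and Sigma = L L^T (Cholesky), then x = mu + L z ~ N(mu, Sigma).
def cholesky2(sigma):
    """Cholesky factor L of a 2x2 SPD covariance matrix."""
    l11 = math.sqrt(sigma[0][0])
    l21 = sigma[1][0] / l11
    l22 = math.sqrt(sigma[1][1] - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

sigma = [[4.0, 1.2], [1.2, 1.0]]
L = cholesky2(sigma)

# Verify L L^T reproduces Sigma.
recon = [[L[0][0]*L[0][0],               L[0][0]*L[1][0]],
         [L[1][0]*L[0][0], L[1][0]*L[1][0] + L[1][1]*L[1][1]]]
print(recon)
```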
Blanchet-Scalliet, Patras - Counterparty risk valuation for CDS; Jasra, Del Moral - Sequential Monte Carlo Methods for Option Pricing; Frey, Schmidt - Filtering and Incomplete Information in Credit Risk; Peters, Briers, Shevchenko, Doucet - Calibration and Filtering for ...

The word cubature is just a replacement for quadrature in the infinite-dimensional setting, such as the Wiener space as in the answer from @TheBridge. The term is used in the context of integrating functionals of stochastic processes $$E[F(X)]$$ where $X$ is a random variable valued in a functional space, such as the solution of an SDE or simply the Brownian ...

As far as I know, differential equations such as the Black-Scholes PDE are solved once analytically and then the result is used directly. If a given derivatives-pricing differential equation could not be solved analytically, it would probably be better to model it numerically using Monte Carlo methods than to derive a complicated PDE which must then be ...

Who told you that? I am used to creating new trade systems in C++ to make the customers' requirements feasible. CERN used C++ to prove the Higgs boson particle. I see people using C to program embedded devices like microwaves or fridges :D But it is just my opinion, I would like to hear others.

Working on trigonometric polynomial decomposition, the first step is to take a big look at the Fourier transformation. It is very powerful, well documented and probably well implemented in your favorite language. It will give you the decomposition of your time series. You can remove the highest frequencies, which correspond to noise, to have a good estimation.

There are some other references: Li and Lee (2009) [download] An adaptive successive over-relaxation method for computing the Black–Scholes implied volatility; Stefanica and Radoicic (2017) An Explicit Implied Volatility Formula. Related discussions on the implied volatility inversion: How can the implied volatility be calculated? What is an efficient ...
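For context on the implied-volatility inversion referenced above: the simplest robust baseline (my own sketch, much slower than the closed-form estimators and the "Let's Be Rational" approach cited in these excerpts) is bisection against the Black-Scholes formula, which works because the call price is monotone increasing in volatility:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-10):
    """Bisection: the call price is monotone increasing in sigma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check: price at sigma = 0.25, then invert back.
price = bs_call(100.0, 105.0, 1.0, 0.02, 0.25)
print(round(implied_vol(price, 100.0, 105.0, 1.0, 0.02), 6))   # ≈ 0.25
```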
The output of an MC simulation depends on the random numbers used, and if the distribution used is not too weird, after 10,000 runs you will get an answer that is distributed $$\mu + \frac{\sigma}{\sqrt{n}} Z,$$ with $Z$ a standard normal. Here $n=10{,}000$, $\mu$ is the quantity you want, and $\sigma$ is the standard deviation. So you won't get precisely the ...

There has been a huge amount of work on this. Generally a Fourier transform approach is used. First, be careful to use the form of the characteristic function that does not wind about zero, in order to avoid having to count the number of windings. Second, using contour shifts can make the integral much better behaved, e.g. integrate along the line with $0.5$...

To test your programming skills, try QuantLib. Can you do interest-rate modelling with QuantLib? Can you debug the 10-level C++ template? Do you know how to use a day count? Do you know how to use a business calendar? Do you know how to link a forward curve with a LIBOR market index? Do you know how to calibrate a model? If you can, you have proven yourself a ...

Ikonen and Toivanen don't say that the LCP is solved exactly; they simply say that the modified back-substitution is a valid algorithm to solve the LCP. A numerical error may arise around the location of optimal exercise, since it does not fall directly on the finite difference grid. I think, however, that the error is of the same order as the discretization ...

To compute the price of an American option or a callable instrument in general, at each potential exercise date one is required to compare its continuation value (the discounted risk-neutral expectation of what the option would pay off if it was not exercised) to the relevant exercise value/early redemption price. By construction, lattice and finite difference ...
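The $\mu + \frac{\sigma}{\sqrt{n}} Z$ error estimate in the first snippet above is easy to see numerically. The sketch below (sample size and target are illustrative) estimates $E[e^Z]$ for $Z\sim N(0,1)$, whose exact value is $e^{1/2}$, and computes the Monte Carlo standard error $\sigma/\sqrt{n}$ from the sample itself.

```python
import math
import random

random.seed(0)

n = 100_000
samples = [math.exp(random.gauss(0.0, 1.0)) for _ in range(n)]

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / (n - 1)
stderr = math.sqrt(var / n)  # sigma / sqrt(n): the scale of the MC error

exact = math.exp(0.5)  # E[e^Z] = e^{1/2} for Z ~ N(0,1)
```

Rerunning with a different seed moves `mean` around `exact` on the scale of `stderr`, which is precisely the $\sigma/\sqrt{n}$ behaviour described in the answer.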
Section 5.5 Exercise

Note: pictures may not be drawn to scale.

In each of the triangles below, find \(\sin \left(A\right),\cos \left(A\right),\tan \left(A\right),\sec \left(A\right),\csc \left(A\right),\cot \left(A\right)\).

1.

2.

In each of the following triangles, solve for the unknown sides and angles.

3.

4.

5.

6.

7.

8.

9. A 33-ft ladder leans against a building so that the angle between the ground and the ladder is 80\(\mathrm{{}^\circ}\). How high does the ladder reach up the side of the building?

10. A 23-ft ladder leans against a building so that the angle between the ground and the ladder is 80\(\mathrm{{}^\circ}\). How high does the ladder reach up the side of the building?

11. The angle of elevation to the top of a building in New York is found to be 9 degrees from the ground at a distance of 1 mile from the base of the building. Using this information, find the height of the building.

12. The angle of elevation to the top of a building in Seattle is found to be 2 degrees from the ground at a distance of 2 miles from the base of the building. Using this information, find the height of the building.

13. A radio tower is located 400 feet from a building. From a window in the building, a person determines that the angle of elevation to the top of the tower is 36\(\mathrm{{}^\circ}\) and that the angle of depression to the bottom of the tower is 23\(\mathrm{{}^\circ}\). How tall is the tower?

14. A radio tower is located 325 feet from a building. From a window in the building, a person determines that the angle of elevation to the top of the tower is 43\(\mathrm{{}^\circ}\) and that the angle of depression to the bottom of the tower is 31\(\mathrm{{}^\circ}\). How tall is the tower?

15. A 200 foot tall monument is located in the distance. From a window in a building, a person determines that the angle of elevation to the top of the monument is 15\(\mathrm{{}^\circ}\) and that the angle of depression to the bottom of the monument is 2\(\mathrm{{}^\circ}\).
How far is the person from the monument?

16. A 400 foot tall monument is located in the distance. From a window in a building, a person determines that the angle of elevation to the top of the monument is 18\(\mathrm{{}^\circ}\) and that the angle of depression to the bottom of the monument is 3\(\mathrm{{}^\circ}\). How far is the person from the monument?

17. There is an antenna on the top of a building. From a location 300 feet from the base of the building, the angle of elevation to the top of the building is measured to be 40\(\mathrm{{}^\circ}\). From the same location, the angle of elevation to the top of the antenna is measured to be 43\(\mathrm{{}^\circ}\). Find the height of the antenna.

18. There is a lightning rod on the top of a building. From a location 500 feet from the base of the building, the angle of elevation to the top of the building is measured to be 36\(\mathrm{{}^\circ}\). From the same location, the angle of elevation to the top of the lightning rod is measured to be 38\(\mathrm{{}^\circ}\). Find the height of the lightning rod.

19. Find the length \(x\).

20. Find the length \(x\).

21. Find the length \(x\).

22. Find the length \(x\).

23. A plane is flying 2000 feet above sea level toward a mountain. The pilot observes the top of the mountain to be 18\({}^{\circ}\) above the horizontal, then immediately flies the plane at an angle of 20\({}^{\circ}\) above horizontal. The airspeed of the plane is 100 mph. After 5 minutes, the plane is directly above the top of the mountain. How high is the plane above the top of the mountain (when it passes over)? What is the height of the mountain? [UW]

24. Three airplanes depart SeaTac Airport. A United flight is heading in a direction 50\(\mathrm{{}^\circ}\) counterclockwise from east, an Alaska flight is heading 115\(\mathrm{{}^\circ}\) counterclockwise from east, and a Delta flight is heading 20\(\mathrm{{}^\circ}\) clockwise from east. [UW]

a.
Find the location of the United flight when it is 20 miles north of SeaTac.

b. Find the location of the Alaska flight when it is 50 miles west of SeaTac.

c. Find the location of the Delta flight when it is 30 miles east of SeaTac.

25. The crew of a helicopter needs to land temporarily in a forest and spots a flat piece of ground (a clearing in the forest) as a potential landing site, but is uncertain whether it is wide enough. They make two measurements from A (see picture), finding \(\alpha\) = 25\(\mathrm{{}^\circ}\) and \(\beta\) = 54\(\mathrm{{}^\circ}\). They rise vertically 100 feet to B and measure \(\gamma\) = 47\(\mathrm{{}^\circ}\). Determine the width of the clearing to the nearest foot. [UW]

26. A Forest Service helicopter needs to determine the width of a deep canyon. While hovering, they measure the angle \(\gamma\) = 48\(\mathrm{{}^\circ}\) at position B (see picture), then descend 400 feet to position A and make two measurements: \(\alpha\) = 13\(\mathrm{{}^\circ}\) (the measure of \(\angle\)EAD), \(\beta\) = 53\(\mathrm{{}^\circ}\) (the measure of \(\angle\)CAD). Determine the width of the canyon to the nearest foot. [UW]

Answer

1. \(\sin(A) = \dfrac{5\sqrt{41}}{41}\), \(\cos(A) = \dfrac{4\sqrt{41}}{41}\), \(\tan(A) = \dfrac{5}{4}\), \(\sec(A) = \dfrac{\sqrt{41}}{4}\), \(\csc(A) = \dfrac{\sqrt{41}}{5}\), \(\cot(A) = \dfrac{4}{5}\)

3. \(c = 14\), \(b = 7\sqrt{3}\), \(B = 60^{\circ}\)

5. \(a = 5.3171\), \(c = 11.3257\), \(A = 28^{\circ}\)

7. \(a = 9.0631\), \(b = 4.2262\), \(B = 25^{\circ}\)

9. 32.4987 ft

11. 836.2698 ft

13. 460.4069 ft

15. 660.35 ft

17. 28.025 ft

19. 143.0427

21. 86.6685
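As a quick sanity check on exercises of this type, problem 9 can be computed directly: the ladder, the ground, and the wall form a right triangle, and the height reached is the side opposite the 80-degree angle, namely \(33\sin(80^\circ)\). The sketch below is just this arithmetic with Python's math module.

```python
import math

# Problem 9: a 33-ft ladder making an 80-degree angle with the ground.
# The height up the wall is the side opposite the 80-degree angle,
# so height = hypotenuse * sin(angle). Convert degrees to radians first.
height = 33 * math.sin(math.radians(80))
print(round(height, 4))  # 32.4987, matching the answer key
```

The same pattern (convert to radians, apply sin/cos/tan) handles the remaining angle-of-elevation problems.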
Focus Questions

After studying this section, we should understand the concepts motivated by these questions and be able to write precise, coherent answers to these questions.

How do we measure angles using degrees?

What do we mean by the radian measure of an angle?

How is the radian measure of an angle related to the length of an arc on the unit circle?

Why is radian measure important?

How do we convert from radians to degrees and from degrees to radians?

How do we use a calculator to approximate values of the cosine and sine functions?

The ancient civilization known as Babylonia was a cultural region based in southern Mesopotamia, which is present-day Iraq. Babylonia emerged as an independent state around 1894 BCE. The Babylonians developed a system of mathematics that was based on a sexagesimal (base 60) number system. This was the origin of the modern-day usage of 60 minutes in an hour, 60 seconds in a minute, and 360 degrees in a circle. Many historians now believe that for the ancient Babylonians, the year consisted of 360 days, which is not a bad approximation given the crudeness of the ancient astronomical tools. As a consequence, they divided the circle into 360 equal-length arcs, which gave them a unit angle that was 1/360 of a circle, or what we now know as a degree. Even though there are 365.24 days in a year, the Babylonian unit angle is still used as the basis for measuring angles in a circle. Figure \(\PageIndex{1}\) shows a circle divided up into 6 angles of 60 degrees each, which also fit nicely with the Babylonian base-60 number system.

Figure \(\PageIndex{1}\): A circle with six 60-degree angles.

We often denote a line that is drawn through 2 points A and B by \(\overleftrightarrow{AB}\). The portion of the line \(\overleftrightarrow{AB}\) that starts at the point A and continues indefinitely in the direction of point B is called ray AB and is denoted by \(\overrightarrow{AB}\).
The point A is the initial point of ray \(\overrightarrow{AB}\). An angle is formed by rotating a ray about its endpoint. The ray in its initial position is called the initial side of the angle, and the position of the ray after it has been rotated is called the terminal side of the angle. The endpoint of the ray is called the vertex of the angle.

Figure \(\PageIndex{2}\): An angle including some notation.

Figure \(\PageIndex{2}\) shows the ray \(\overrightarrow{AB}\) rotated about the point A to form an angle. The terminal side of the angle is the ray \(\overrightarrow{AC}\). We often refer to this as angle BAC, which is abbreviated as \(\angle{BAC}\). We can also refer to this angle as angle \(CAB\) or \(\angle{CAB}\). If we want to use a single letter for this angle, we often use a Greek letter such as \(\alpha\) (alpha). We then just say "the angle \(\alpha\)". Other Greek letters that are often used are \(\beta\) (beta), \(\gamma\) (gamma), \(\theta\) (theta), \(\phi\) (phi), and \(\rho\) (rho).

Arcs and Angles

To define the trigonometric functions in terms of angles, we will make a simple connection between angles and arcs by using the so-called standard position of an angle. When the vertex of an angle is at the origin in the \(xy\)-plane and the initial side lies along the positive \(x\)-axis, we say that the angle is in standard position. The terminal side of the angle is then in one of the four quadrants or lies along one of the axes. When the terminal side is in one of the four quadrants, the terminal side determines the so-called quadrant designation of the angle. See Figure \(\PageIndex{3}\).

Figure \(\PageIndex{3}\): Standard position of an angle in the second quadrant.

Exercise \(\PageIndex{1}\)

Draw an angle in standard position in the first quadrant; the third quadrant; and the fourth quadrant.

Answer

These graphs show positive angles in standard position.
The one on the left has its terminal point in the first quadrant, the one in the middle has its terminal point in the third quadrant, and the one on the right has its terminal point in the fourth quadrant.

If an angle is in standard position, then the point where the terminal side of the angle intersects the unit circle marks the terminal point of an arc, as shown in Figure \(\PageIndex{4}\). Similarly, the terminal point of an arc on the unit circle determines a ray through the origin and that point, which in turn defines an angle in standard position. In this case we say that the angle is subtended by the arc. So there is a natural correspondence between arcs on the unit circle and angles in standard position. Because of this correspondence, we can also define the trigonometric functions in terms of angles as well as arcs. Before we do this, however, we need to discuss two different ways to measure angles.

Figure \(\PageIndex{4}\): An arc and its corresponding angle.

Degrees Versus Radians

There are two ways we will measure angles: in degrees and in radians. When we measure the length of an arc, the measurement has a dimension (the length, be it inches, centimeters, or something else). As mentioned in the introduction, the Babylonians divided the circle into 360 regions. So one complete wrap around a circle is 360 degrees, denoted \(360^\circ\). The unit measure of \(1^\circ\) is an angle that is 1/360 of the central angle of a circle. Figure \(\PageIndex{1}\) shows 6 angles of \(60^\circ\) each. The degree \(^\circ\) is a dimension, just like a length. So to compare an angle measured in degrees to an arc measured with some kind of length, we need to connect the dimensions. We can do that with the radian measure of an angle. Radians will be useful in that a radian is a dimensionless measurement.
We want to connect angle measurements to arc measurements, and to do so we will directly define an angle of 1 radian to be an angle subtended by an arc of length 1 (the length of the radius) on the unit circle, as shown in Figure \(\PageIndex{5}\).

Figure \(\PageIndex{5}\): One radian.

Definition: Radian

An angle of one radian is the angle in standard position on the unit circle that is subtended by an arc of length 1 (in the positive direction).

This directly connects angles measured in radians to arcs in that we associate a real number with both the arc and the angle. So an angle of 2 radians cuts off an arc of length 2 on the unit circle, an angle of 3 radians cuts off an arc of length 3 on the unit circle, and so on. Figure \(\PageIndex{6}\) shows the terminal sides of angles with measures of 0 radians, 1 radian, 2 radians, 3 radians, 4 radians, 5 radians, and 6 radians. Notice that \(2\pi \approx 6.2832\) and so \(6 < 2\pi\), as shown in Figure \(\PageIndex{6}\).

Figure \(\PageIndex{6}\): Angles with radian measure 1, 2, 3, 4, 5, and 6.

We can also have angles whose radian measure is negative, just like we have arcs with a negative length. The idea is simply to measure in the negative (clockwise) direction around the unit circle. So an angle whose measure is \(-1\) radian is the angle in standard position on the unit circle that is subtended by an arc of length 1 in the negative (clockwise) direction. So in general, an angle (in standard position) of \(t\) radians will correspond to an arc of length \(t\) on the unit circle. This allows us to discuss the sine and cosine of an angle measured in radians. That is, when we think of \(\sin(t)\) and \(\cos(t)\), we can consider \(t\) to be:

a real number;

the length of an arc with initial point \((1, 0)\) on the unit circle;

the radian measure of an angle in standard position.
When we draw a picture of an angle in standard position, we often draw a small arc near the vertex from the initial side to the terminal side, as shown in Figure \(\PageIndex{7}\), which shows an angle whose measure is \(\dfrac{3}{4}\pi\) radians.

Figure \(\PageIndex{7}\): An angle with measure \(\dfrac{3}{4}\pi\) in standard position.

Exercise \(\PageIndex{2}\)

1. Draw an angle in standard position with a radian measure of:

\(\dfrac{\pi}{2}\) radians.

\(\pi\) radians.

\(\dfrac{3\pi}{2}\) radians.

\(\dfrac{-3\pi}{2}\) radians.

2. What is the degree measure of each of the angles in part (1)?

Answer

\[90^\circ\] \[180^\circ\] \[270^\circ\] \[-270^\circ\]

Conversion Between Radians and Degrees

Radian measure is the preferred measure of angles in mathematics for many reasons, the main one being that a radian has no dimensions. However, to use radians effectively, we will want to be able to convert angle measurements between radians and degrees. Recall that one wrap of the unit circle corresponds to an arc of length \(2\pi\), and an arc of length \(2\pi\) on the unit circle corresponds to an angle of \(2\pi\) radians. An angle of \(360^\circ\) is also an angle that wraps once around the unit circle, so an angle of \(360^\circ\) is equivalent to an angle of \(2\pi\) radians. Hence each degree is \(\dfrac{\pi}{180}\) radians, and each radian is \(\dfrac{180}{\pi}\) degrees. Notice that 1 radian is then \(\dfrac{180}{\pi} \approx 57.3^\circ\), so a radian is quite large compared to a degree. These relationships allow us to quickly convert between degrees and radians.

Example \(\PageIndex{1}\)

If an angle has a degree measure of 35 degrees, then its radian measure can be calculated as follows: \[35 \space degrees \times \dfrac{\pi \space radians}{180 \space degrees} = \dfrac{35\pi}{180} \space radians\] Rewriting this fraction, we see that an angle with a measure of 35 degrees has a radian measure of \(\dfrac{7\pi}{36}\) radians.
If an angle has a radian measure of \(\dfrac{3\pi}{10}\) radians, then its degree measure can be calculated as follows: \[\dfrac{3\pi}{10} \space radians \space \times \dfrac{180 \space degrees}{\pi \space radians}= \dfrac{540}{10} \space degrees\] So an angle with a radian measure of \(\dfrac{3\pi}{10}\) has a degree measure of \(54^\circ\).

IMPORTANT NOTE

Since a degree is a dimension, we MUST include the degree mark \(^\circ\) whenever we write the degree measure of an angle. A radian has no dimension, so there is no dimension mark to go along with it. Consequently, if we write 2 for the measure of an angle, we understand that the angle is measured in radians. If we really mean an angle of 2 degrees, then we must write \(2^\circ\).

Exercise \(\PageIndex{3}\)

Complete the following table to convert from degrees to radians and vice versa.

Answer

Angle in radians | Angle in degrees
\(0\) | \(0^\circ\)
\(\dfrac{\pi}{6}\) | \(30^\circ\)
\(\dfrac{\pi}{4}\) | \(45^\circ\)
\(\dfrac{\pi}{3}\) | \(60^\circ\)
\(\dfrac{\pi}{2}\) | \(90^\circ\)
\(\dfrac{2\pi}{3}\) | \(120^\circ\)
\(\dfrac{3\pi}{4}\) | \(135^\circ\)
\(\dfrac{5\pi}{6}\) | \(150^\circ\)
\(\pi\) | \(180^\circ\)
\(\dfrac{7\pi}{6}\) | \(210^\circ\)
\(\dfrac{5\pi}{4}\) | \(225^\circ\)
\(\dfrac{4\pi}{3}\) | \(240^\circ\)
\(\dfrac{3\pi}{2}\) | \(270^\circ\)
\(\dfrac{5\pi}{3}\) | \(300^\circ\)
\(\dfrac{7\pi}{4}\) | \(315^\circ\)
\(\dfrac{11\pi}{6}\) | \(330^\circ\)
\(2\pi\) | \(360^\circ\)

Calculators and the Trigonometric Functions

We have now seen that when we think of \(\sin(t)\) or \(\cos(t)\), we can think of \(t\) as a real number, the length of an arc, or the radian measure of an angle. In Section 1.5, we will see how to determine the exact values of the cosine and sine functions for a few special arcs (or angles). For example, we will see that \(\cos(\dfrac{\pi}{6}) = \dfrac{\sqrt{3}}{2}\).
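The degree-radian conversions worked in the example and exercise above can be checked mechanically; Python's math module has radians() and degrees() built in, and the sketch below reproduces both conversions.

```python
import math

# 35 degrees -> radians: 35 * pi / 180 = 7*pi/36 radians
rad = math.radians(35)
assert abs(rad - 7 * math.pi / 36) < 1e-12

# 3*pi/10 radians -> degrees: (3*pi/10) * 180 / pi = 54 degrees
deg = math.degrees(3 * math.pi / 10)
assert abs(deg - 54) < 1e-12
```

The two functions are exact inverses of each other (up to floating-point rounding), mirroring the fact that the conversion factors \(\pi/180\) and \(180/\pi\) are reciprocals.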
However, the definition of cosine and sine as coordinates of points on the unit circle makes it difficult to find exact values for these functions except at very special arcs. While exact values are always best, technology plays an important role in allowing us to approximate the values of the circular (or trigonometric) functions. Most hand-held calculators, calculators in phone or tablet apps, and online calculators have a cosine key and a sine key that you can use to approximate values of these functions, but we must keep in mind that the calculator only provides an approximation of the value, not the exact value (except for a small collection of arcs). In addition, most calculators will approximate the sine and cosine of angles.

Table 1.1: Conversions between radians and degrees (the blank cells are to be filled in, as in Exercise \(\PageIndex{3}\)).

Angle in radians | Angle in degrees
0 | \(0^\circ\)
\(\dfrac{\pi}{6}\) | ___
\(\dfrac{\pi}{4}\) | ___
\(\dfrac{\pi}{3}\) | ___
\(\dfrac{\pi}{2}\) | \(90^\circ\)
___ | \(120^\circ\)
\(\dfrac{3\pi}{4}\) | \(135^\circ\)
___ | \(150^\circ\)
___ | \(180^\circ\)
\(\dfrac{7\pi}{6}\) | ___
\(\dfrac{5\pi}{4}\) | ___
\(\dfrac{4\pi}{3}\) | ___
\(\dfrac{3\pi}{2}\) | \(270^\circ\)
___ | \(300^\circ\)
___ | \(315^\circ\)
___ | \(330^\circ\)
\(2\pi\) | \(360^\circ\)

To do this, the calculator has two modes for angles: Radian and Degree. Because of the correspondence between real numbers, lengths of arcs, and radian measures of angles, for now we will always put our calculators in radian mode. In fact, we have seen that an angle measured in radians subtends an arc of that radian measure along the unit circle. So the cosine or sine of an angle measured in radians is the same thing as the cosine or sine of a real number when that real number is interpreted as the length of an arc along the unit circle. (When we study the trigonometry of triangles in Chapter 3, we will use the degree mode. For an introductory discussion of the trigonometric functions of an angle measured in degrees, see Exercise (4).)
Exercise \(\PageIndex{4}\)

In Exercise 1.6, we used the Geogebra applet called Terminal Points of Arcs on the Unit Circle at http://gvsu.edu/s/JY to approximate the values of the cosine and sine functions at certain values. For example, we found that

\(\cos(1) \approx 0.5403\), \(\sin(1) \approx 0.8415\).

\(\cos(2) \approx -0.4161\), \(\sin(2) \approx 0.9093\).

\(\cos(-4) \approx -0.6536\), \(\sin(-4) \approx 0.7568\).

\(\cos(-15) \approx -0.7597\), \(\sin(-15) \approx -0.6503\).

Use a calculator to determine these values of the cosine and sine functions and compare the values to the ones above. Are they the same? How are they different?

Answer

Using a calculator, we obtain the following results, correct to ten decimal places.

\(\cos(1) \approx 0.5403023059\), \(\sin(1) \approx 0.8414709848\).

\(\cos(2) \approx -0.4161468365\), \(\sin(2) \approx 0.9092974268\).

\(\cos(-4) \approx -0.6536436209\), \(\sin(-4) \approx 0.7568024953\).

\(\cos(-15) \approx -0.7596879129\), \(\sin(-15) \approx -0.6502878402\).

The difference between these values and those obtained in Progress Check 1.6 is that these values are correct to 10 decimal places (and the others are correct to 4 decimal places). If we round off each of the values above to 4 decimal places, we get the same results we obtained in Progress Check 1.6.

Summary of Section 1.3

In this section, we studied the following important concepts and ideas:

An angle is formed by rotating a ray about its endpoint. The ray in its initial position is called the initial side of the angle, and the position of the ray after it has been rotated is called the terminal side of the angle. The endpoint of the ray is called the vertex of the angle.

When the vertex of an angle is at the origin in the \(xy\)-plane and the initial side lies along the positive \(x\)-axis, we say that the angle is in standard position.

There are two ways to measure angles. For degree measure, one complete wrap around a circle is 360 degrees, denoted \(360^\circ\).
The unit measure of \(1^\circ\) is an angle that is 1/360 of the central angle of a circle.

An angle of one radian is the angle in standard position on the unit circle that is subtended by an arc of length 1 (in the positive direction).

We convert the measure of an angle from degrees to radians by using the fact that each degree is \(\dfrac{\pi}{180}\) radians. We convert the measure of an angle from radians to degrees by using the fact that each radian is \(\dfrac{180}{\pi}\) degrees.
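The calculator approximations discussed in this section (with the calculator in radian mode) can be reproduced in any language with a standard math library; Python's math.cos and math.sin take their arguments in radians, matching the values listed in Exercise 4.

```python
import math

# Values from Exercise 4, correct to 4 decimal places (radian mode).
assert round(math.cos(1), 4) == 0.5403
assert round(math.sin(1), 4) == 0.8415
assert round(math.cos(2), 4) == -0.4161
assert round(math.sin(2), 4) == 0.9093
assert round(math.cos(-4), 4) == -0.6536
assert round(math.sin(-4), 4) == 0.7568
assert round(math.cos(-15), 4) == -0.7597
assert round(math.sin(-15), 4) == -0.6503
```

To evaluate the sine or cosine of an angle given in degrees, convert first, e.g. math.cos(math.radians(60)).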
Let $n$ be a given arbitrary positive integer, and let $U_n$ denote the group of all the positive integers less than $n$ and relatively prime to $n$ under multiplication mod $n$. Then for which values of $n$ is $U_n$ a cyclic group? And for any such $n$, which elements of $U_n$ generate it? One key step in this problem is the fact that $U_{ab} \cong U_a \times U_b$ when $a$ and $b$ are coprime. This comes from the Chinese Remainder Theorem. The order of $U_a$ is $\phi(a)$ and the order of $U_b$ is $\phi(b)$. In most cases, both $\phi(a)$ and $\phi(b)$ are even, and so $m=\operatorname{lcm}(\phi(a),\phi(b))<\phi(a)\phi(b)=\phi(ab)$ is an exponent for $U_{ab}$, in the sense that $x^m =1$ for all $x\in U_{ab}$. Since a cyclic group of order $\phi(ab)$ would contain an element of order $\phi(ab)>m$, this proves that $U_{ab}$ is not cyclic. The remaining cases, when $n$ cannot be written as $n=ab$ with $a,b$ coprime and $\phi(a),\phi(b)$ both even, give you the direction to the solution.
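A brute-force check makes the pattern easy to see. The sketch below computes element orders in $U_n$ directly and tests whether some element generates the whole group; the classical answer (which the computation is consistent with) is that $U_n$ is cyclic exactly when $n$ is $1$, $2$, $4$, $p^k$, or $2p^k$ for an odd prime $p$.

```python
from math import gcd

def U(n):
    """Elements of U_n: positive integers up to n coprime to n."""
    return [a for a in range(1, n + 1) if gcd(a, n) == 1]

def order(a, n):
    """Multiplicative order of a modulo n."""
    if n == 1:
        return 1
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

def is_cyclic(n):
    """U_n is cyclic iff some element has order |U_n| = phi(n)."""
    g = U(n)
    return any(order(a, n) == len(g) for a in g)

def generators(n):
    """The generators are exactly the elements of maximal order phi(n)."""
    g = U(n)
    return [a for a in g if order(a, n) == len(g)]
```

For example, `is_cyclic(8)` is False (every element of $U_8=\{1,3,5,7\}$ squares to 1), while `generators(10)` returns the two generators of $U_{10}$.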
I have a hierarchical model that includes a normal distribution and a beta distribution. The normal distribution has two parameters: $\mu$ and $\tau^2$. However, I want to implement the hierarchy such that the $\mu$ and $\tau^2$ parameters are sampled from group-level distributions. What are good distributions to use for each of these two parameters? I am assuming the $\mu$ parameter could be sampled from another normal distribution, and the $\tau^2$ parameter could be sampled from an inverse gamma distribution. Is this along the right lines? The beta distribution has two parameters: $\alpha$ and $\beta$. I want to implement this hierarchically in the same manner, so I have the same question: what is a good distribution to use to sample values for $\alpha$ and $\beta$? I am sorry if these are simple questions, but I have been having a very difficult time finding answers online.
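One common concrete choice, sketched below with illustrative hyperparameters: a normal hyperprior on $\mu$, an inverse-gamma hyperprior on $\tau^2$ (the conjugate choice for a normal variance, and always positive), and gamma hyperpriors on the beta distribution's $\alpha$ and $\beta$ (which keeps them positive). This is only a forward-sampling sketch of the generative structure, not a fitted model, and all the hyperparameter values are assumptions.

```python
import random

random.seed(1)

def inv_gamma(shape, scale):
    # X ~ InvGamma(shape, scale) via X = 1 / Y with Y ~ Gamma(shape, rate=scale);
    # Python's gammavariate takes (shape, scale), so the rate is 1/scale here.
    return 1.0 / random.gammavariate(shape, 1.0 / scale)

def draw_group():
    # Group-level draws for the normal component (hyperparameters are illustrative).
    mu = random.gauss(0.0, 10.0)           # mu ~ Normal(0, 10^2), weakly informative
    tau2 = inv_gamma(2.0, 1.0)             # tau^2 ~ InvGamma(2, 1), always > 0
    # Group-level draws for the beta component.
    alpha = random.gammavariate(2.0, 1.0)  # alpha ~ Gamma(2, 1) > 0
    beta = random.gammavariate(2.0, 1.0)   # beta  ~ Gamma(2, 1) > 0
    return mu, tau2, alpha, beta

mu, tau2, alpha, beta = draw_group()
x = random.gauss(mu, tau2 ** 0.5)          # observation from the normal part
p = random.betavariate(alpha, beta)        # observation from the beta part
```

For the beta component, another popular option is to reparameterize as a mean $m \in (0,1)$ with a beta hyperprior and a concentration $\kappa > 0$ with a gamma hyperprior, then set $\alpha = m\kappa$, $\beta = (1-m)\kappa$; this often mixes better in MCMC than independent priors on $\alpha$ and $\beta$.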
Workshop II, 3-4th June

Proofs, justifications, certificates

***SLIDES***

Friday 3 June, Institut de Recherche en Informatique de Toulouse (IRIT), Salle des thèses

Session 1, Friday 3 June, 9h-10h30, 10h45-12h15, PROVABILITY LOGIC

Lev BEKLEMISHEV (Steklov Mathematical Institute, Russia), Positive provability logic and reflection calculus: an overview

Several interesting applications of provability logic in proof theory made use of the polymodal logic $\GLP$ due to Giorgi Japaridze. This system, although decidable, is not very easy to handle. In this talk we will advocate the use of a weaker system, called Reflection Calculus, which is much simpler than $\GLP$, yet expressive enough to regain its main proof-theoretic applications, and more. From the point of view of modal logic, $\RC$ can be seen as a fragment of polymodal logic consisting of implications of the form $A\to B$, where $A$ and $B$ are formulas built up from $\top$ and the variables using just $\land$ and the diamond modalities. We discuss general problems around weak systems of this kind and describe some applications of $\RC$ to the analysis of provability in formal arithmetical theories.

Joost JOOSTEN (Universitat de Barcelona, Spain), A Calculus of Worms in Coq

In this talk we consider modal provability logics with a series of modalities of length $\omega$ that represent a sequence of consistency predicates of increasing strength. The closed fragment of this logic is already quite expressive and constitutes an alternative ordinal notation system up to $\epsilon_0$. Iterated consistency statements in this closed fragment are also called worms. We present a calculus that manipulates only worms. In particular, the language lacks propositional variables, implications, conjunctions and disjunctions. We compare this Calculus of Worms to the closed fragment of the better known Reflection Calculus.
Moreover, we comment on how these worms and their corresponding calculus can be formalized and implemented in Coq. Joint work with Eduardo Hermo Reyes, Pilar Garcia de la Parra and Alejandro Ramírez Atrio.

Session 2, Friday 3 June, 14h15-15h45, 16h-17h30, REALIZABILITY

Fernando FERREIRA (University of Lisbon, Portugal), Modified realizability and functional interpretations: some logical and mathematical observations

Federico ASCHIERI (Technical University of Vienna, Austria), From Intuitionistic Realizability to Classical Realizability

Realizability was originally conceived as a constructive semantics for intuitionistic Arithmetic. Since classical Arithmetic looks radically non-constructive, extending realizability to classical fragments of it may appear hopeless. Yet, realizability can be extended to full classical mathematical systems. How is this possible? The goal of this talk is to go through the flow of ideas that lead in a natural way to the many known classical realizabilities. Since realizability appears in disguise under other names such as "validity", "reducibility", "computability", "proof-theoretic semantics", we shall have to look for its origins in unexpected places. We shall start from Prawitz and Dummett. In Prawitz's work, we find the idea that the introduction rules of natural deduction determine the constructive meaning of logical constants, which leads to realizability based on introduction rules. In Dummett's work, we find the idea that it is the elimination rules of natural deduction that fix the meaning of logical constants, which leads to realizability based on elimination rules.
We shall see that realizability based on introduction rules gives rise to more constructively flavoured semantics, such as realizability for Intuitionistic Arithmetic with Markov's principle, realizability for Intuitionistic Arithmetic with Excluded Middle over formulas with one quantifier, and learning-based realizability for classical Arithmetic with Skolem choice axioms. On the other hand, realizability based on elimination rules gives rise to Krivine-style realizability for classical second-order Arithmetic, the only known semantics that can be generalized even to full set theory, but that is also useful for other intermediate logics.

Saturday 4 June, Institut de Mathématiques de Toulouse (IMT), Amphi Schwartz

Session 3, Saturday 4 June, 9h-10h30, 10h45-12h15, CERTIFICATES

Dale MILLER (Inria Saclay, France), Defining and checking proof certificates

In order for one theorem prover to export its proofs for other provers to check and trust, proofs-as-documents must be given a clear and precise semantics. After making the case that an infrastructure for sharing proofs should exist, I will describe how recent research in proof theory provides a flexible framework for defining the semantics of proof certificates. I will also briefly describe an implementation that can execute a wide range of such semantic definitions.

Jasmin Christian BLANCHETTE (INRIA Nancy Grand Est, Nancy, France), Semi-intelligible Isabelle Proofs from Machine-Generated Proofs

Sledgehammer is a component of the Isabelle proof assistant that integrates external automatic theorem provers to discharge proof obligations. As a safeguard against bugs, the proofs found by the external provers are reconstructed in Isabelle. Reconstructing complex arguments involves translating them to Isabelle's Isar format, supplying suitable justifications for each step.
Sledgehammer transforms proofs by contradiction into direct proofs; it iteratively tests and compresses the output, resulting in simpler and faster proofs; and it supports a wide range of automatic provers, including E, LEO-II, Satallax, SPASS, Vampire, veriT, Waldmeister, and Z3. Joint work with Sascha Böhme, Mathias Fleury, Steffen Juilf Smolka, and Albert Steckermeier.

Session 4, Saturday 4 June, 14h15-15h45, 16h-17h30, JUSTIFICATION LOGIC

Thomas STUDER (University of Bern, Switzerland), Justification Logic - a short introduction

Traditional modal logics feature formulas of the form K A that stand for 'the agent knows that A'. The classical semantics for these logics is given by possible world models, in which the formula K A is true if A is true in all worlds that the agent considers possible. However, this approach is missing the "justified" part of Plato's classic characterization of knowledge as justified true belief. Justification logics can fill this gap. Instead of formulas K A, the language of justification logics includes formulas of the form t : A that mean 'the agent knows that A for reason t'. The evidence term t in this expression can represent a formal proof of A or an informal reason why A is known. Moreover, justification logics include operations on these terms to reflect the agent's reasoning power. For instance, if A -> B is known for reason s and A is known for reason t, then B is known for reason s x t, where the binary operation x models the agent's ability to apply modus ponens. In our talk, we give a short introduction to justification logic and present some of the main results in this area.

Juan Pablo AGUILERA (Technical University of Vienna, Austria), An arithmetical interpretation for negative introspection

We introduce verification logic. This variant of Artemov's Logic of Proofs includes proof terms of the form ¡A! that satisfy the axiom schema A -> ¡A!:A. The intention is that terms ¡A!
denote a PA-proof of the formula A if it exists. We show that a restriction on the language yields a logic that realizes the axioms of S5 and is sound and complete for its arithmetical interpretation. Scientific organisation: David FERNANDEZ DUQUE, Andreas HERZIG, Ralph MATTHES, Martin STRECKER
In my probability class we learned the Central Limit Theorem in the following form. Theorem: Let $\{X_i\}_{i=1}^\infty$ be a sequence of independent identically distributed random variables and suppose that $E(X_i)=\mu$, $\operatorname{Var}(X_i)=\sigma^2<\infty$. Then, $$\sqrt{n}\frac{\frac{1}{n}\sum_{i=1}^nX_i-\mu}{\sigma}\stackrel{d}{\longrightarrow}N(0,1),\quad\text{as }n\to\infty$$ in distribution, where the notation $N(\mu,\sigma^2)$ means the normal distribution with mean $\mu$ and variance $\sigma^2$. Now, if we remove $\sigma$ and $\mu$, do we get $$\sqrt{n}\frac{1}{n}\sum_{i=1}^nX_i\stackrel{d}{\longrightarrow}N(\mu,\sigma^2)?$$ This seems very natural, but I cannot prove it. Is it true? I think I can prove it by removing only $\sigma$, but I am not sure for $\mu$.
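As a quick sanity check of the theorem as stated (not of the question asked), one can simulate the standardized sample mean and compare its empirical moments with those of $N(0,1)$. The sketch below is purely illustrative; the choice of Uniform(0,1) summands, the sample size, the replication count, and the seed are all arbitrary.

```python
import math
import random

def standardized_means(n, reps, rng):
    """Draw `reps` samples of size `n` from Uniform(0,1) and return the
    CLT statistic sqrt(n) * (xbar - mu) / sigma for each sample."""
    mu, sigma = 0.5, math.sqrt(1.0 / 12.0)   # mean and sd of Uniform(0,1)
    stats = []
    for _ in range(reps):
        xs = [rng.random() for _ in range(n)]
        xbar = sum(xs) / n
        stats.append(math.sqrt(n) * (xbar - mu) / sigma)
    return stats

rng = random.Random(0)
z = standardized_means(n=400, reps=2000, rng=rng)
m = sum(z) / len(z)                              # empirical mean, near 0
v = sum((x - m) ** 2 for x in z) / (len(z) - 1)  # empirical variance, near 1
```

If the statistic is approximately $N(0,1)$, `m` should be close to $0$ and `v` close to $1$.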
Ultrapower The intuitive idea behind ultrapower constructions (and ultraproduct constructions in general) is to take a sequence of already existing models and construct new ones from some combination of the already existing models. Ultrapower constructions are used in many major results involving elementary embeddings. A famous example is Scott's proof that the existence of a measurable cardinal implies $V\neq L$. Ultrapower embeddings are also used to characterize various large cardinal notions such as measurable, supercompact and certain formulations of rank into rank embeddings. Ultrapowers have a more concrete structure than general embeddings and are often easier to work with in proofs. Most of the results in this article can be found in [1]. Contents General construction The general construction of an ultrapower supposes given an index set $X$ for a collection of (non-empty) models $M_i$ with $i\in X$ and an ultrafilter $U$ over $X$. The ultrafilter $U$ is used to define equivalence classes over the structure $\prod_{i\in X} M_i$, the collection of all functions $f$ with domain $X$ such that $f(i)\in M_i$ for each $i\in X$. When the $M_i$ are identical to one another, we form an ultrapower by "modding out" over the equivalence classes defined by $U$. In the general case where $M_i$ differs from $M_j$, we form a structure called the ultraproduct of $\langle M_i : i\in X\rangle$. Two functions $f$ and $g$ are $U$-equivalent, denoted $f=_U g$, when the set of indices in $X$ where $f$ and $g$ agree is an element of the ultrafilter $U$ (intuitively, we think of $f$ and $g$ as disagreeing on a "small" subset of $X$). The $U$-equivalence class of $f$ is usually denoted $[f]$ and is the class of all functions $g\in \prod_{i\in X} M_i$ which are $U$-equivalent to $f$. When each $M_i$ happens to be the entire universe $V$, each $[f]$ is a proper class.
To remedy this, we can use Scott's trick and only consider the members of $[f]$ of minimal rank to ensure that $[f]$ is a set. The ultrapower (ultraproduct) is then denoted by $\text{Ult}_U(M) = \prod_{i\in X} M_i/U =\{[f]: f\in \prod_{i\in X} M_i\}$ with the membership relation defined by setting $[f]\in_U [g]$ when the set of all $i\in X$ such that $f(i)\in g(i)$ is in $U$. Note that $U$ could be a principal ultrafilter over $X$ and in this case the ultraproduct is isomorphic to almost every $M_i$, so in this case nothing new or interesting is gained by considering the ultraproduct. However, even in the case where each $M_i$ is identical and $U$ is non-principal, the ultrapower is elementarily equivalent to each $M_i$, and the construction technique nevertheless yields interesting results. Typical ultrapower constructions concern the case $M_i=V$. Formal definition Given a collection of nonempty models $\langle M_i : i\in X \rangle$, we define the product of the collection $\langle M_i : i\in X \rangle$ as $$\prod_{i\in X}M_i = \{f:\text{dom}(f)=X \land (\forall i\in X)(f(i)\in M_i)\}$$ Given an ultrafilter $U$ on $X$, we then define the following relations on $\prod_{i\in X} M_i$: Let $f,g\in\prod_{i\in X} M_i$, then $$f =_U g \iff \{i\in X : f(i)=g(i)\}\in U$$ $$f \in_U g \iff \{i\in X : f(i)\in g(i)\}\in U$$ For each $f\in\prod_{i\in X} M_i$, we then define the equivalence class of $f$ in $=_U$ as follows: $$[f]=\{g: f=_U g \land \forall h(h=_U f \Rightarrow \text{rank}(g)\leq \text{rank}(h)) \}$$ Every member of the equivalence class of $f$ has the same rank, therefore the equivalence class is always a set, even if $M_i = V$.
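The remark above that a principal ultrafilter trivializes the construction can be checked concretely in a toy finite setting: if $U$ is generated by $\{i_0\}$, then $f =_U g$ holds exactly when $f(i_0)=g(i_0)$, so the equivalence classes are in bijection with $M_{i_0}$. The sketch below uses a hypothetical finite encoding chosen only for illustration.

```python
from itertools import product

# Toy finite "models" indexed by X (a hypothetical encoding for illustration).
X = [0, 1, 2]
M = {0: ['a', 'b'], 1: ['c', 'd', 'e'], 2: ['f']}

i0 = 1  # U is the principal ultrafilter generated by {i0}

def in_U(A):
    """A subset of X belongs to U iff it contains the generator i0."""
    return i0 in A

def eq_U(f, g):
    """f =_U g iff the agreement set {i : f(i) = g(i)} lies in U."""
    return in_U({i for i in X if f[i] == g[i]})

# All choice functions f with f(i) in M[i], i.e. the product of the models:
functions = [dict(zip(X, vals)) for vals in product(*(M[i] for i in X))]

# Partition the product into =_U equivalence classes.
classes = []
for f in functions:
    for cls in classes:
        if eq_U(f, cls[0]):
            cls.append(f)
            break
    else:
        classes.append([f])

# With a principal U, the classes correspond exactly to elements of M[i0].
```

Here the quotient has one class per element of `M[i0]`, mirroring the isomorphism with $M_{i_0}$ noted in the text.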
We now define the ultraproduct of $\langle M_i : i\in X \rangle$ to be the model $\text{Ult}=(\text{Ult}_U\langle M_i : i\in X \rangle, \in_U)$ where: $$\text{Ult}_U\langle M_i : i\in X \rangle = \prod_{i\in X}M_i / U = \{[f]:f\in\prod_{i\in X}M_i\}$$ If there exists a model $M$ such that $M_i=M$ for all $i\in X$, then the ultraproduct is called the ultrapower of $M$, and is denoted $\text{Ult}_U(M)$. Łoś' theorem Łoś' theorem is the following statement: let $U$ be an ultrafilter on $X$ and $\text{Ult}$ be the ultraproduct model of some family of nonempty models $\langle M_i : i\in X \rangle$. Then, for every formula $\varphi(x_1,...,x_n)$ of set theory and $f_1,...,f_n \in \prod_{i\in X}M_i$, $$\text{Ult}\models\varphi([f_1],...,[f_n]) \iff \{i\in X : M_i\models\varphi(f_1(i),...,f_n(i))\}\in U$$ In particular, an ultrapower $\text{Ult}=(\text{Ult}_U(M), \in_U)$ of a model $M$ is elementarily equivalent to $M$. This is a very important result: to see why, let $f_x(i)=x$ for all $x\in M$ and $i\in X$, and now let $j_U(x)=[f_x]$ for every $x\in M$. Then $j_U$ is an elementary embedding by Łoś' theorem, and is called the canonical ultrapower embedding $j_U:M\to\text{Ult}_U(M)$. Properties of ultrapowers of the universe of sets Let $U$ be a nonprincipal $\kappa$-complete ultrafilter on some measurable cardinal $\kappa$ and $j_U:V\to\text{Ult}_U(V)$ be the canonical ultrapower embedding of the universe. Let $\text{Ult}=\text{Ult}_U(V)$ to simplify the notation. Then: $U\not\in\text{Ult}$ $\text{Ult}^\kappa\subseteq\text{Ult}$ $2^\kappa\leq(2^\kappa)^{\text{Ult}}<j_U(\kappa)<(2^\kappa)^+$ If $\lambda>\kappa$ is a strong limit cardinal of cofinality $\neq\kappa$ then $j_U(\lambda)=\lambda$. If $\lambda$ is a limit ordinal of cofinality $\kappa$ then $j_U(\lambda)>\lim_{\alpha\to\lambda}j_U(\alpha)$, but if $\lambda$ has cofinality $\neq\kappa$, then $j_U(\lambda)=\lim_{\alpha\to\lambda}j_U(\alpha)$.
Also, the following statements are equivalent: $U$ is a normal measure For every $X\subseteq\kappa$, $X\in U$ if and only if $\kappa\in j_U(X)$. In $(\text{Ult}_U(V),\in_U)$, $\kappa=[d]$ where $d:\kappa\to\kappa$ is defined by $d(\alpha)=\alpha$ for every $\alpha<\kappa$. Let $j:V\to M$ be a nontrivial elementary embedding of $V$ into some transitive model $M$ with critical point $\kappa$ (which is a measurable cardinal), and let $D=\{X\subseteq\kappa:\kappa\in j(X)\}$ be the canonical normal measure on $\kappa$. Then: There exists an elementary embedding $k:\text{Ult}_D(V)\to M$ such that $k(j_D(x))=j(x)$ for every $x\in V$. Ultrapower axiom Definition of the ultrapower axiom needed here. Assuming $\text{UA}$: The existence of a strongly compact cardinal is equiconsistent with the existence of a supercompact cardinal. The $\text{GCH}$ holds above the least strongly compact cardinal. The least strongly compact cardinal is supercompact. $V$ is a forcing extension of $\text{HOD}$. $\text{UA}$ holds in all known inner models, but none of them contains a strongly compact cardinal, let alone a supercompact. It is currently unknown whether $\text{UA}$ is consistent with the existence of a supercompact or of a strongly compact cardinal.
Iterated ultrapowers Given a nonprincipal $\kappa$-complete ultrafilter $U$ on some measurable cardinal $\kappa$, we define the iterated ultrapowers the following way:$$(\text{Ult}^{(0)},E^{(0)})=(V,\in)$$$$(\text{Ult}^{(\alpha+1)},E^{(\alpha+1)})=\text{Ult}_{U^{(\alpha)}}(\text{Ult}^{(\alpha)},E^{(\alpha)})$$$$(\text{Ult}^{(\lambda)},E^{(\lambda)})=\varinjlim_{\alpha\to\lambda}\{((\text{Ult}^{(\alpha)},E^{(\alpha)}),i_{\alpha,\beta}):\alpha\leq\beta<\lambda\}$$where $\lambda$ is a limit ordinal, $\varinjlim$ denotes the direct limit, and $i_{\alpha,\beta} : \text{Ult}^{(\alpha)}\to \text{Ult}^{(\beta)}$ is an elementary embedding defined as follows:$$i_{\alpha,\alpha}(x)=x$$$$i_{\alpha,\alpha+n+1}(x)=j^{(\alpha+n)}(i_{\alpha,\alpha+n}(x))$$$$i_{\alpha,\lambda}(x)=\lim_{\beta\to\lambda}i_{\alpha,\beta}(x)$$where $j^{(\alpha)}:\text{Ult}^{(\alpha)}\to \text{Ult}^{(\alpha+1)}$ is the canonical ultrapower embedding from $\text{Ult}^{(\alpha)}$ to $\text{Ult}^{(\alpha+1)}$. Also, $U^{(\alpha)}=i_{0,\alpha}(U)$ and $\kappa^{(\alpha)}=i_{0,\alpha}(\kappa)$ where $\kappa=\kappa^{(0)}=\text{crit}(j^{(0)})$. If $M$ is a transitive model of set theory and $U$ is (in $M$) a $\kappa$-complete nonprincipal ultrafilter on $\kappa$, we can construct, within $M$, the iterated ultrapowers. Let us denote by $\text{Ult}^{(\alpha)}_U(M)$ the $\alpha$th iterated ultrapower, constructed in $M$. Properties For every $\alpha$ the $\alpha$th iterated ultrapower $(\text{Ult}^{(\alpha)},E^{(\alpha)})$ is well-founded. This is due to $U$ being nonprincipal and $\kappa$-complete. The Factor Lemma: for every $\beta$, the iterated ultrapower $\text{Ult}^{(\beta)}_{U^{(\alpha)}}(\text{Ult}^{(\alpha)})$ is isomorphic to the iterated ultrapower $\text{Ult}^{(\alpha+\beta)}$. For every limit ordinal $\lambda$, $\text{Ult}^{(\lambda)}\subseteq \text{Ult}^{(\alpha)}$ for every $\alpha<\lambda$. Also, $\kappa^{(\lambda)}=\lim_{\alpha\to\lambda}\kappa^{(\alpha)}$.
For every $\alpha$, $\beta$ such that $\alpha>\beta$, one has $\kappa^{(\alpha)}>\kappa^{(\beta)}$. If $\gamma<\kappa^{(\alpha)}$ then $i_{\alpha,\beta}(\gamma)=\gamma$ for all $\beta\geq\alpha$. If $X\subseteq\kappa^{(\alpha)}$ and $X\in \text{Ult}^{(\alpha)}$ then for all $\beta\geq\alpha$, one has $X=\kappa^{(\alpha)}\cap i_{\alpha,\beta}(X)$. The representation lemma References Jech, Thomas J. Set Theory. Third millennium edition, revised and expanded. Springer-Verlag, Berlin, 2003.
Scalar Triple Products If we have three vectors $\vec{u}$, $\vec{v}$, and $\vec{w}$ in $\mathbb{R}^3$, then the scalar quantity $\vec{u} \cdot (\vec{v} \times \vec{w})$ appears frequently in other areas of mathematics such as Calculus and has a special name which we define below. Definition: For any three vectors $\vec{u}, \vec{v}, \vec{w} \in \mathbb{R}^3$, the Scalar Triple Product is defined as the dot product between $\vec{u}$ and $\vec{v} \times \vec{w}$, that is, $\vec{u} \cdot (\vec{v} \times \vec{w})$. The following theorem will give us a method for computing the scalar triple product between three vectors in $\mathbb{R}^3$. Theorem 1: If $\vec{u}, \vec{v}, \vec{w} \in \mathbb{R}^3$, then the scalar triple product can be computed as $\vec{u} \cdot (\vec{v} \times \vec{w}) = \begin{vmatrix} u_1 & u_2 & u_3\\ v_1 & v_2& v_3\\ w_1 & w_2 & w_3 \end{vmatrix}$. Proof: We note that the cross product of two vectors $\vec{v} \times \vec{w} = \begin{vmatrix} v_2 & v_3\\ w_2 & w_3 \end{vmatrix} \vec{i} - \begin{vmatrix} v_1 & v_3\\ w_1 & w_3 \end{vmatrix} \vec{j} + \begin{vmatrix} v_1 & v_2\\ w_1 & w_2 \end{vmatrix} \vec{k}$. Taking the dot product with $\vec{u}$ we obtain that $\vec{u} \cdot (\vec{v} \times \vec{w}) = u_1 \begin{vmatrix} v_2 & v_3\\ w_2 & w_3 \end{vmatrix} - u_2 \begin{vmatrix} v_1 & v_3\\ w_1 & w_3 \end{vmatrix} + u_3 \begin{vmatrix} v_1 & v_2\\ w_1 & w_2 \end{vmatrix}$, which is precisely the cofactor expansion of the determinant above along its first row. Note: We note that when we compute the scalar triple product between $\vec{u}, \vec{v}, \vec{w} \in \mathbb{R}^3$ that the result will be a scalar and NOT a vector! As another remark, we should note that the order of $\vec{u}, \vec{v}, \vec{w} \in \mathbb{R}^3$ matters when it comes to computing certain scalar triple products. That is, $\vec{u} \cdot (\vec{v} \times \vec{w})$ does not necessarily equal $\vec{u} \cdot (\vec{w} \times \vec{v})$; swapping two of the vectors swaps two rows of the determinant and so changes its sign. However, for any vectors $\vec{u}, \vec{v}, \vec{w} \in \mathbb{R}^3$, the following scalar triple products are in fact equal: (2) $\vec{u} \cdot (\vec{v} \times \vec{w}) = \vec{v} \cdot (\vec{w} \times \vec{u}) = \vec{w} \cdot (\vec{u} \times \vec{v})$. An easy way to remember this equivalence is with the following diagram where clockwise movements of the dot and cross represent equal scalar triples.
Example 1 Find the scalar triple product $\vec{u} \cdot (\vec{v} \times \vec{w})$ where $\vec{u} = (1, 2, 3)$, $\vec{v} = (0, 2, 0)$ and $\vec{w} = (1, 4, 1)$. Applying our formula tells us that the scalar triple product $\vec{u} \cdot (\vec{v} \times \vec{w})$ arises when we evaluate the following determinant $\begin{vmatrix}1 & 2 & 3\\ 0 & 2 & 0\\ 1 & 4 & 1\end{vmatrix}$. This is best done by cofactor expansion along row 2, whose only nonzero entry is the middle $2$: the determinant equals $2\begin{vmatrix}1 & 3\\ 1 & 1\end{vmatrix} = 2(1 - 3) = -4$, so our answer is $-4$.
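The determinant formula from Theorem 1, the cyclic identity, and Example 1 are all easy to check numerically. The helper functions below are an illustrative sketch, not part of the original page.

```python
def cross(v, w):
    """Cross product of two 3-vectors (tuples)."""
    return (v[1]*w[2] - v[2]*w[1],
            v[2]*w[0] - v[0]*w[2],
            v[0]*w[1] - v[1]*w[0])

def dot(u, v):
    """Dot product of two vectors."""
    return sum(a*b for a, b in zip(u, v))

def det3(u, v, w):
    """3x3 determinant with rows u, v, w (cofactor expansion along row 1)."""
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
            - u[1]*(v[0]*w[2] - v[2]*w[0])
            + u[2]*(v[0]*w[1] - v[1]*w[0]))

# The vectors of Example 1:
u, v, w = (1, 2, 3), (0, 2, 0), (1, 4, 1)
triple = dot(u, cross(v, w))   # scalar triple product u . (v x w), equals -4
```

One can also confirm that `triple == det3(u, v, w)` and that the cyclic rearrangements `dot(v, cross(w, u))` and `dot(w, cross(u, v))` give the same value.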
As other people have pointed out in comments, the correct answer to the question "what is the probability of rolling another 6 given that I have rolled a 6 prior to it?" is indeed $\frac{1}{6}$. This is because the die rolls are assumed (very reasonably so) to be independent of each other. This means that past rolls of the die do not affect future die rolls. Expressed mathematically, independence of two variables $X$ and $Y$ implies that $Pr(Y=y | X = x) = Pr(Y = y)$. Letting $X$ be a variable denoting the outcome of the first die roll and $Y$ be a variable for the second die roll, we can use the definition of independence to arrive at the conclusion that $Pr(Y=6 | X = 6) = Pr(Y = 6)=1/6$. The reason that the answer is not 1/36 is due to the fact that we are making a conditional statement. We are saying "given that we already have rolled a six in the first roll". This means that we are not interested in the likelihood of that first roll occurring. We are only interested in what happens next. It might be helpful to enumerate all possible outcomes here. I have done this below in the form {x, y}, where x is the outcome in the first roll and y in the second. {1, 1} {1, 2} {1, 3} {1, 4} {1, 5} {1, 6} {2, 1} {2, 2} {2, 3} {2, 4} {2, 5} {2, 6} {3, 1} {3, 2} {3, 3} {3, 4} {3, 5} {3, 6} {4, 1} {4, 2} {4, 3} {4, 4} {4, 5} {4, 6} {5, 1} {5, 2} {5, 3} {5, 4} {5, 5} {5, 6} {6, 1} {6, 2} {6, 3} {6, 4} {6, 5} {6, 6} Now, the probability you are interested in is that of the event {6, 6}. Given the information that you are in the last row (which corresponds to having rolled a 6 in the first roll), you only have six possible outcomes. Only one of them is a "success", so the probability of that event is 1/6. Edit: After re-reading the OP's question, it appears that I have missed part of the question. The question there seems to be regarding the following scenario: A six-sided die is rolled. If the die rolled a 6, roll a second die. Otherwise, do not roll a second die.
The question there is: What is the probability that this procedure results in two sixes having been rolled? Equivalently: What is the probability that this procedure results in us rolling a six in step 2? The answer to this question is indeed 1/36. Heuristically, the reason for this is that we are no longer conditioning on something that has already happened. We are instead asking for the probability of an event that can occur after we go through a procedure. Let us now prove that the probability is 1/36. Let once again $X$ be the result of the first roll and $Y$ the result of the second roll. We are interested in $Pr(Y=6)$. Note that if $X\neq 6$ then the probability that $Y=6$ is zero since the second die won't be rolled. Thus $Pr(Y=6\mid X\neq6)=0$. We use the law of total probability to note that $Pr(Y=6)=\underset{x=1}{\overset{6}{\sum}}Pr(Y=6 \mid X=x) \cdot Pr(X=x)$. Now since $Pr(Y=6 \mid X=x)=0$ for all $x\neq 6$, we see that $Pr(Y=6) = 0+0+0+0+0+Pr(Y=6\mid X=6)\cdot Pr(X=6)$. This simplifies to $Pr(Y=6) = \frac{1}{6}\cdot\frac{1}{6}=\frac{1}{36}$ which completes the proof.
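Both probabilities discussed above, the joint $Pr(X=6, Y=6)=1/36$ and the conditional $Pr(Y=6 \mid X=6)=1/6$, can be estimated with a short Monte Carlo simulation. This is only an illustrative sketch; the trial count and seed are arbitrary.

```python
import random

rng = random.Random(42)
trials = 200_000

both_six = 0                  # first roll is 6 AND second roll is 6
second_six_given_first = 0    # second roll is 6, among trials where first was 6
first_six = 0                 # first roll is 6

for _ in range(trials):
    x = rng.randint(1, 6)     # first roll
    y = rng.randint(1, 6)     # second roll
    if x == 6:
        first_six += 1
        if y == 6:
            second_six_given_first += 1
            both_six += 1

p_both = both_six / trials                    # estimates Pr(X=6, Y=6) = 1/36
p_cond = second_six_given_first / first_six   # estimates Pr(Y=6 | X=6) = 1/6
```

The conditional estimate divides by the number of trials in which the first roll was a six, mirroring the "last row of the table" argument in the answer.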
Your wife could be right about the rocket. Whenever we say something is small physically, we need to be sure what it is small with respect to. In the case of a thread, the drag force exerted by air beats the centripetal force keeping it taut. Because centripetal forces scale with mass, a denser thread will work. The issue facing a space elevator is different. For a rigid cable, we need the terminus to be above geostationary orbit. The height of the atmosphere is about 100 km from sea level, while geostationary orbit is about 36000 km up, so very little of the cable will be exposed to atmospheric drag. What net drag there is will be mostly from the prevailing winds, I suspect. The effect of this will be small but persistent. If it is never corrected, over time the elevator will drift away from vertical. Whether this effect is large enough to matter over, say, the lifetime of a civilization, I don't know. For the other part of your question - what mass would we need to keep it taut? - I think your intuition about the rope is good. Here our concern isn't wrapping, but whether the rope will collapse. Imagine the rope is made of little beads of mass $\mathrm{d}m$ connected by little massless cables of length $\mathrm{d}h$. In the rotating frame, a bead at height $h$ in the cable experiences a total force $\mathrm{d}F = -\mathrm{d}m\frac{M_eG}{(h+R_e)^2}+T(h+\mathrm{d}h) - T(h) + \mathrm{d}m \, \omega^2(h+R_e) = 0$ for a system in equilibrium, where $T$ is the tension. If we call the mass per unit length of the cable $\lambda$, then we can turn this into a differential equation: $$\frac{\mathrm{d}}{\mathrm{d}h} T(h) = \lambda \left ( \frac{M_eG}{(h+R_e)^2} - \omega ^2 (h+R_e) \right )$$Integrating:$$ T(h) = \lambda \left ( M_eG \left( \frac{1}{R_e} - \frac{1}{R_e+h} \right ) - \frac{1}{2}\omega^2 (h^2 + 2 h R_e) \right) +C $$To find the integration constant $C$, we look at the base of the cable. The tension there must be sufficient to keep the whole cable in place.
The force the whole cable exerts at that point is $$T(0)=C=\int_0^L \lambda \left ( -\frac{M_eG}{(h+R_e)^2} + \omega ^2 (h+R_e) \right ) \mathrm{d}h$$ where $L$ is the total length of the cable. We can see right away that this integral becomes more and more positive as $L$ increases, so even without finishing the integral we know there is some value of $L$ that will make this base tension positive. The condition that the cable stay taut is the condition $T(h) > 0 \;\forall\; h < L$ - that is, there is tension in the cable everywhere but the very end. Plug in the value of $C$ and you will find that if $C$ is positive, this is the case. So all we technically need is a long cable. But it might be more efficient to have a counterweight at the end.
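The last observation can be made quantitative: evaluating the integral for $C$ in closed form and searching for the length $L$ at which it changes sign gives the minimum length of a free-standing cable with no counterweight. The sketch below is illustrative only; the Earth parameters are standard approximate values, and the bracketing interval for the bisection is an assumption.

```python
GM = 3.986004e14     # Earth's gravitational parameter M_e * G, m^3/s^2
R_e = 6.371e6        # Earth's radius, m
omega = 7.2921e-5    # Earth's sidereal rotation rate, rad/s

def base_tension_per_lambda(L):
    """C / lambda: base tension per unit line density for a cable of length L,
    i.e. the integral of ( -GM/(h+R_e)^2 + omega^2 (h+R_e) ) from 0 to L,
    evaluated in closed form."""
    return GM * (1.0 / (R_e + L) - 1.0 / R_e) + omega**2 * (L**2 / 2 + L * R_e)

# C is negative for short cables (gravity wins) and eventually positive,
# so bisect on the sign change to find the minimum self-supporting length.
lo, hi = 1.0e6, 1.0e9   # assumed bracket, in metres
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if base_tension_per_lambda(mid) > 0:
        hi = mid
    else:
        lo = mid
L_min = 0.5 * (lo + hi)   # on the order of 1.5e8 m, i.e. ~150,000 km
```

The crossing length comes out far beyond geostationary altitude, consistent with the qualitative conclusion that "all we technically need is a long cable" but a counterweight is far more practical.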
Consider two options with maturity $T$ that only differ in their exercise styles, one being European (holder can only exercise at $T$), the other American (holder exercises when it's best for him/her). These options need not necessarily be vanilla options. Let us further denote by $I(S_t)$ the intrinsic value of these contingent claims at time $t$, i.e. the value that the holder would get by exercising at $t$. Your question then translates to Assuming that $\forall t \in [0,T]$, the following inequality holds $$ V^E(t,S_t) \geq I(S_t) \tag{A} $$ does that imply the following equality $$ V^A(t,S_t) = V^E(t,S_t) \tag{B} $$ Proof $(B) \Rightarrow (A)$ The proof is straightforward. Indeed, as you've stated in your question, the $t$-value of an American option is always greater than the intrinsic value at time $t$: $V^A(t,S_t) \geq I(S_t)$, while by $(B)$ $V^A(t,S_t)=V^E(t,S_t)$. The former inequality comes from the fact that immediate exercise is merely one of the many stopping strategies that the holder of an American option could resolve to and he/she is expected to pick the one which maximises his/her gains. Proof $(A) \Rightarrow (B)$ Starting from the definition of the American option\begin{align}V^A(t,S_t) &= \text{sup}_{\tau \in [t,T]} \mathbb{E}_t^\mathbb{Q}\left[ e^{-r(\tau-t)} I(S_\tau) \right] \tag{1}\end{align}where $\tau$ ranges over stopping times with values in $[t,T]$. Assume that $(A)$ holds, i.e. $I(S_t) \leq V^E(t,S_t), \forall t \in [0,T]$. In that case from equation $(1)$, we can write that\begin{align}V^A(t,S_t) &\leq \text{sup}_{\tau \in [t,T]} \mathbb{E}_t^\mathbb{Q}\left[ e^{-r(\tau-t)} V^E(\tau,S_\tau) \right] \tag{2} \\\end{align} by monotonicity of the expectation operator.
Now, noting that in the absence of arbitrage $\frac{V^E(t,S_t)}{B_t}$ should emerge as a $\mathbb{Q}$-martingale (remember that $V^E(t,S_t)$ is the price of a tradable asset), with $B_t = e^{rt}$ the $t$-value of the risk-free money market account, the optional sampling theorem gives:\begin{align}\mathbb{E}_t^\mathbb{Q}\left[ e^{-r(\tau-t)} V^E(\tau,S_\tau) \right] &= e^{rt} \mathbb{E}_t^\mathbb{Q} \left[ \frac{V^E(\tau,S_\tau)}{B_\tau} \right] \\&= e^{rt}\frac{V^E(t,S_t)}{B_t} \\&= V^E(t,S_t)\end{align}hence $(2)$ becomes\begin{align}V^A(t,S_t) &\leq \text{sup}_{\tau \in [t,T]} V^E(t,S_t) = V^E(t,S_t) \tag{I1} \\\end{align} On the other hand we have that $$ V^A(t,S_t) \geq V^E(t,S_t) \tag{I2} $$i.e. the $t$-value of an American option is always greater than the $t$-value of its European counterpart. This is because European exercise at expiry is merely one of the many stopping strategies that the holder of an American option could resolve to and he/she is expected to pick the one which maximises his/her gains. Combining inequalities $(I1)$ and $(I2)$ then trivially yields:$$ V^A(t,S_t) = V^E(t,S_t) $$
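A classic special case of this result: for a call on a non-dividend-paying stock, condition $(A)$ holds ($V^E(t,S_t) \geq (S_t-K)^+$), so the American and European prices coincide. The CRR binomial-tree sketch below checks this numerically; the model parameters are arbitrary illustrations, not part of the original answer.

```python
import math

def crr_price(S0, K, r, sigma, T, steps, american):
    """Price a call on a non-dividend-paying stock on a CRR binomial tree."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    # Terminal call payoffs at the `steps` layer of the tree:
    values = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    # Backward induction; for the American style, compare with early exercise.
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            if american:
                values[j] = max(cont, S0 * u**j * d**(i - j) - K)
            else:
                values[j] = cont
        values = values[:i + 1]
    return values[0]

euro = crr_price(100.0, 100.0, 0.05, 0.2, 1.0, 200, american=False)
amer = crr_price(100.0, 100.0, 0.05, 0.2, 1.0, 200, american=True)
```

Early exercise is never optimal at any node (continuation always dominates intrinsic value here), so the two prices agree to machine precision.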
Zero sharp $0^{\#}$ is a $\Sigma_3^1$ real number which cannot be proven to exist in $\text{ZFC}$. Its existence contradicts the Axiom of Constructibility, $V=L$. In fact, its existence is somewhat equivalent to $L$ being completely different from $V$. Definition $0^{\#}$ is defined as the set of all Gödel numberings of first-order formulas $\varphi$ such that $L\models\varphi(\aleph_0,\aleph_1...\aleph_n)$ for some $n$. Because of the stability of $\aleph_\omega$, $0^{\#}$ is equivalent to the set of all Gödel numberings of first-order formulas $\varphi$ such that $L_{\aleph_{\omega}}\models\varphi(\aleph_0,\aleph_1...\aleph_n)$. This definition implies the existence of Silver indiscernibles. Moreover, it implies: Given any set $X\in L$ which is first-order definable in $L$, $X\in L_{\omega_1}$. This of course implies that $\aleph_1$ is not first-order definable in $L$, because $\aleph_1\not\in L_{\omega_1}$. This is already a disproof of $V=L$ (because $\aleph_1$ is first-order definable). For every $\alpha\in\omega_1^L$, every uncountable cardinal is $\alpha$-iterable, $\geq$ an $\alpha$-Erdős, and totally ineffable in $L$. There are $\mathfrak{c}$ many reals which are not constructible (that is, $x\not\in L$). The existence of $0^\#$ is implied by: Chang's Conjecture. The existence of an $\omega_1$-iterable cardinal. The negation of the singular cardinal hypothesis ($\text{SCH}$). The axiom of determinacy ($\text{AD}$). $0^{\#}$ cardinal $0^{\#}$ exists iff there is a nontrivial elementary embedding $j:L\rightarrow L$ (by a theorem of Kunen). The critical point of such an embedding is sometimes called a $0^{\#}$ cardinal, and sometimes called a $j:L\rightarrow L$ cardinal. These cardinals do not in general coincide with measurable cardinals: while the least measurable cardinal is $\Sigma_1^2$-describable, each of these cardinals is totally indescribable.
Furthermore, the least measurable cardinal $\kappa$ such that $V_\kappa$ satisfies the existence of a measurable cardinal is not a $j:L\rightarrow L$ cardinal, and the least measurable cardinal $\kappa$ such that $V_\kappa$ satisfies the existence of such a cardinal is not a $j:L\rightarrow L$ cardinal, and so on. However, the existence of a measurable suffices to prove the existence and consistency of a $j:L\rightarrow L$ cardinal. More information to be added here. References Jech, Thomas J. Set Theory (The 3rd Millennium Ed.). Springer, 2003.
Let \(p\) and \(q\) be odd primes. Suppose we know whether \(q\) is a quadratic residue of \(p\) or not. The question that this section will answer is whether \(p\) will be a quadratic residue of \(q\) or not. Before we state the law of quadratic reciprocity, we will present a Lemma of Eisenstein which will be used in the proof of the law of reciprocity. The following lemma relates the Legendre symbol to counting lattice points in a triangle. Lemma of Eisenstein If \(p\neq 2\) is a prime and \(a\) is an odd integer such that \(p\nmid a\), then \[\left(\frac{a}{p}\right)=(-1)^{\sum_{i=1}^{(p-1)/2}[ia/p]}.\] Consider the least positive residues of the integers \(a, 2a,...,((p-1)/2)a\); let \(m_1,m_2,...,m_s\) be integers of this set such that \(m_i>p/2\) for all \(i\) and let \(n_1,n_2,...,n_t\) be those integers where \(n_i<p/2\). Using the division algorithm, we see that \[ia=p[ia/p]+r\] where \(r\) is one of the \(m_i\) or \(n_i\). By adding the \((p-1)/2\) equations, we obtain \[\label{qr1} \sum_{i=1}^{(p-1)/2}ia=\sum_{i=1}^{(p-1)/2}p[ia/p]+\sum_{i=1}^sm_i+\sum_{i=1}^tn_i.\] As in the proof of Gauss's Lemma, we see that \[p-m_1,p-m_2,...,p-m_s,n_1,n_2,...,n_t\] are precisely the integers \(1,2,...,(p-1)/2\), in some order. Now we obtain \[\label{qr2} \sum_{i=1}^{(p-1)/2}i=\sum_{i=1}^s(p-m_i)+\sum_{i=1}^tn_i=ps-\sum_{i=1}^sm_i+\sum_{i=1}^tn_i.\] We subtract \((\ref{qr2})\) from \((\ref{qr1})\) to get \[\sum_{i=1}^{(p-1)/2}ia-\sum_{i=1}^{(p-1)/2}i=\sum_{i=1}^{(p-1)/2}p[ia/p]-ps+2\sum_{i=1}^sm_i.\] Now since we are taking the following as exponents for \(-1\), it suffices to look at them modulo 2. The left-hand side is \(\sum_{i=1}^{(p-1)/2}i(a-1)\equiv 0\pmod{2}\) since \(a\) is odd, and since \(p\) is also odd, the right-hand side reduces to give \[0\equiv \sum_{i=1}^{(p-1)/2}[ia/p]-s\pmod{2},\] that is, \[\sum_{i=1}^{(p-1)/2}[ia/p]\equiv s\pmod{2}.\] Using Gauss's lemma, we get \[\left(\frac{a}{p}\right)=(-1)^s=(-1)^{\sum_{i=1}^{(p-1)/2}[ia/p]}.\] The Law of Quadratic Reciprocity Let \(p\) and \(q\) be distinct odd primes.
Then \[\left(\frac{p}{q}\right)\left(\frac{q}{p}\right)=(-1)^{\frac{p-1}{2}\cdot\frac{q-1}{2}}\] We consider now the pairs of integers, also known as lattice points, \((x,y)\) with \[1\leq x\leq (p-1)/2 \ \mbox{ and } \ 1\leq y\leq (q-1)/2.\] The number of such pairs is \(\frac{p-1}{2}\cdot\frac{q-1}{2}\). We divide these pairs into two groups depending on the sizes of \(qx\) and \(py\). Note that \(qx\neq py\) for all pairs because \(p\) and \(q\) are distinct primes. We now count the pairs of integers \((x,y)\) with \[1\leq x\leq (p-1)/2, \ \ 1\leq y\leq (q-1)/2 \ \mbox{ and } \ qx>py.\] Note that these pairs are precisely those where \[1\leq x\leq (p-1)/2 \ \mbox{ and } \ 1\leq y\leq qx/p.\] For each fixed value of \(x\) with \(1\leq x\leq (p-1)/2\), there are \([qx/p]\) integers satisfying \(1\leq y\leq qx/p\). Consequently, the total number of pairs satisfying \[1\leq x\leq (p-1)/2, \ \ 1\leq y\leq qx/p \ \mbox{ and } \ qx>py\] is \[\sum_{i=1}^{(p-1)/2}[qi/p].\] Consider now the pairs of integers \((x,y)\) with \[1\leq x\leq (p-1)/2, \ \ 1\leq y\leq (q-1)/2 \ \mbox{ and } \ qx<py.\] Similarly, we find that the total number of such pairs of integers is \[\sum_{i=1}^{(q-1)/2}[pi/q].\] Adding the numbers of pairs in these classes, we see that \[\sum_{i=1}^{(p-1)/2}[qi/p]+ \sum_{i=1}^{(q-1)/2}[pi/q]=\frac{p-1}{2}\cdot\frac{q-1}{2},\] and hence using the Lemma of Eisenstein, we get that \[\left(\frac{p}{q}\right)\left(\frac{q}{p}\right)=(-1)^{\frac{p-1}{2}\cdot\frac{q-1}{2}}\] Exercises Evaluate \(\left(\frac{3}{53}\right)\). Evaluate \(\left(\frac{31}{641}\right)\). Using the law of quadratic reciprocity, show that if \(p\) is an odd prime, then \[\left(\frac{3}{p}\right)=\left\{\begin{array}{lcr} \ 1 &{\mbox{if}\ p\equiv \pm1\pmod{12}} \\ \ -1 &{\mbox{if}\ p\equiv \pm 5\pmod{12}}. \\ \end{array}\right .\] Show that if \(p\) is an odd prime, then \[\left(\frac{-3}{p}\right)=\left\{\begin{array}{lcr} \ 1 &{\mbox{if}\ p\equiv 1\pmod{6}} \\ \ -1 &{\mbox{if}\ p\equiv -1 \pmod{6}}.
\\ \end{array}\right .\] Find a congruence describing all primes for which 5 is a quadratic residue.
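Euler's criterion, \(\left(\frac{a}{p}\right)\equiv a^{(p-1)/2}\pmod{p}\), gives a quick way to compute Legendre symbols and to check both the reciprocity law and Exercise 1 numerically. The sketch below is illustrative and assumes its inputs are odd primes not dividing one another.

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)   # is 0, 1, or p-1
    return r - p if r > 1 else r  # map p-1 to -1

# Exercise 1: (3/53); since 53 = 5 (mod 12), the symbol should be -1.
assert legendre(3, 53) == -1

# Quadratic reciprocity checked for all pairs of distinct small odd primes:
primes = [3, 5, 7, 11, 13, 17, 19, 23]
for p in primes:
    for q in primes:
        if p != q:
            sign = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
            assert legendre(p, q) * legendre(q, p) == sign
```

Python's three-argument `pow` performs modular exponentiation efficiently, so the same function handles large primes such as the 641 in Exercise 2.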
We study the regularity of the higher secant varieties of $\mathbb{P}^{1}\times \mathbb{P}^{n}$, embedded with divisors of type $(d,2)$ and $(d,3)$. We produce, for the highest defective cases, a "determinantal" equation of the secant variety. As a corollary, we prove that the Veronese triple embedding of $\mathbb{P}^{n}$ is not Grassmann defective. We associate with the Farey tessellation of the upper half-plane an $\text{AF}$ algebra $\mathfrak{A}$ encoding the "cutting sequences" that define vertical geodesics. The Effros–Shen $\text{AF}$ algebras arise as quotients of $\mathfrak{A}$. Using the path algebra model for $\text{AF}$ algebras we construct, for each $\tau \in \left(0,\frac{1}{4}\right]$, projections $(E_{n})$ in $\mathfrak{A}$ such that $E_{n}E_{n\pm 1}E_{n}\le \tau E_{n}$. Our main result is that a finitely generated nilpotent group has no isometric action on an infinite-dimensional Hilbert space with dense orbits.
In contrast, we construct such an action with a finitely generated metabelian group. Let $T$ be a sectorial operator. It is known that the existence of a bounded (suitably scaled) $H^{\infty}$ calculus for $T$, on every sector containing the positive half-line, is equivalent to the existence of a bounded functional calculus on the Besov algebra $\Lambda_{\infty,1}^{\alpha}(\mathbb{R}^{+})$. Such an algebra includes functions defined by Mikhlin-type conditions and so the Besov calculus can be seen as a result on multipliers for $T$. In this paper, we use fractional derivation to analyse in detail the relationship between $\Lambda_{\infty,1}^{\alpha}$ and Banach algebras of Mikhlin type. As a result, we obtain a new version of the quoted equivalence. We investigate the problem of deforming $n$-dimensional mod $p$ Galois representations to characteristic zero. The existence of 2-dimensional deformations has been proven under certain conditions by allowing ramification at additional primes in order to annihilate a dual Selmer group. We use the same general methods to prove the existence of $n$-dimensional deformations. We then examine under which conditions we may place restrictions on the shape of our deformations at $p$, with the goal of showing that under the correct conditions, the deformations may have locally geometric shape. We also use the existence of these deformations to prove the existence as Galois groups over $\mathbb{Q}$ of certain infinite subgroups of $p$-adic general linear groups. Hua's fundamental theorem of the geometry of hermitian matrices characterizes bijective maps on the space of all $n\times n$ hermitian matrices preserving adjacency in both directions. The problem of possible improvements has been open for a while. There are three natural problems here. Do we need the bijectivity assumption? Can we replace the assumption of preserving adjacency in both directions by the weaker assumption of preserving adjacency in one direction only?
Can we obtain such a characterization for maps acting between the spaces of hermitian matrices of different sizes? We answer all three questions for the complex hermitian matrices, thus obtaining the optimal structural result for adjacency preserving maps on hermitian matrices over the complex field. Let $F$ be a non-archimedean local field of residue characteristic neither 2 nor 3 equipped with a galois involution with fixed field $F_{0}$, and let $G$ be a symplectic group over $F$ or an unramified unitary group over $F_{0}$. Following the methods of Bushnell–Kutzko for $\text{GL}(N,F)$, we define an analogue of a simple type attached to a certain skew simple stratum, and realize a type in $G$. In particular, we obtain an irreducible supercuspidal representation of $G$, as in the $\text{GL}(N,F)$ case. We give a complete classification of mixed Tsirelson spaces $T\left[(F_{i},\theta_{i})_{i=1}^{r}\right]$ for finitely many pairs of given compact and hereditary families $F_{i}$ of finite sets of integers and $0<\theta_{i}<1$, in terms of the Cantor–Bendixson indices of the families $F_{i}$ and the $\theta_{i}$ $(1\le i\le r)$. We prove that there are a unique countable ordinal $\alpha$ and $0<\theta<1$ such that every block sequence of $T\left[(F_{i},\theta_{i})_{i=1}^{r}\right]$ has a subsequence equivalent to a subsequence of the natural basis of $T(S_{\omega^{\alpha}},\theta)$. Finally, we give a complete criterion of comparison between two of these mixed Tsirelson spaces. We study the geometry, topology and Lebesgue measure of the set of monic conjugate reciprocal polynomials of fixed degree with all roots on the unit circle. The set of such polynomials of degree $N$ is naturally associated to a subset of $\mathbb{R}^{N-1}$. We calculate the volume of this set, prove the set is homeomorphic to the $N-1$ ball and that its isometry group is isomorphic to the dihedral group of order $2N$.
We examine the fine structure of the short-time behavior of solutions to various linear and nonlinear Schrödinger equations $u_t = i\Delta u + q(u)$ on $I\times \mathbb{R}^{n}$, with initial data $u(0,x)=f(x)$. Particular attention is paid to cases where $f$ is piecewise smooth, with a jump across an $(n-1)$-dimensional surface. We give detailed analyses of Gibbs-like phenomena and also focusing effects, including analogues of the Pinsky phenomenon. We give results for general $n$ in the linear case. We also have detailed analyses for a broad class of nonlinear equations when $n=1$ and 2, with emphasis on the analysis of the first-order correction to the solution of the corresponding linear equation. This work complements estimates on the error in this approximation.
Congratulations to Phong Quach of Debney Park Secondary College, Tomas from Malmesbury School, Guildford, Chor Kiang Tan, Vassil from Lawnswood High School, Tom from Madras College, Adam from King James's School, Knaresborough, Alex from King Edward and Queen Mary School, Lytham and Andaleeb from Woodhouse Sixth Form College, London who all proved this result using mathematical induction. Congratulations also to Daniel, and Alex, also to Mark and Eduardo from the British School of Manila and to Michael and Sue of Madras College and Yiwen of the Chinese High School, Singapore for their alternative method using the standard formulae. First the method using the standard formulae: $$\sum_{i=1}^n i = {1\over 2}n(n + 1)\quad {\rm and} \ \sum_{i=1}^n i^2 = {1\over 6}n(n + 1)(2n + 1).$$ We conjecture that $$\sum _{i=1}^n (2i - 1)^2 = {1 \over 6}(2n -1)2n(2n+1).$$ Consider \begin{eqnarray} \sum_{i=1}^n(2i - 1)^2 &=& 4 \sum_{i=1}^n i^2 - 4\sum_{i=1}^n i + \sum_{i=1}^n 1 \\ &=& 4\left(\frac{1}{6}n(n + 1)(2n + 1)\right ) - 4\left({1\over 2}n(n + 1)\right) + n \\ &=& \frac{1}{6}(2n)(4n^2 + 6n + 2 - 6n - 6 + 3) \\ &=& \frac{1}{6}(2n)(4n^2 - 1) \\&=& \frac{1}{6}(2n)(2n - 1)(2n + 1) \end{eqnarray} where in the third line we have factored $\frac{1}{6}(2n)$ out of each term. The second method uses mathematical induction. The formulae given in the question are easily verified, showing that the conjecture is true for $n = 1, 2$ and $3$. Suppose that the conjecture is true for $n = k$. Then $$1^2 + 3^2 + ... + (2k - 1)^2 = {(2k - 1)(2k)(2k + 1)\over 6}\quad (1)$$ Adding one more term we get \begin{eqnarray} 1^2 + 3^2 + ... + (2k - 1 )^2 + (2k + 1 )^2 &=& \frac{(2k - 1)(2k)(2k + 1)}{6} + (2k + 1)^2 \\ &=& \frac{(2k - 1)(2k)(2k + 1) + 6(2k + 1)^2}{6} \\ &=& \frac{(2k + 1)(4k^2 + 10k + 6)}{6} \\&=& \frac{(2k + 1)(2k + 2)(2k + 3)}{6}. \end{eqnarray} This is the same as (1) but with $k$ replaced by $k + 1$. Thus if the conjecture is true for $n = k$, it is also true for $n = k + 1$. Thus by the axiom of induction $$1^2 + 3^2 + ... 
+ (2n - 1)^2 = {(2n - 1)(2n)(2n + 1)\over 6}.$$
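Both solutions can be spot-checked mechanically. Here is a minimal Python sketch comparing the sum of the first $n$ odd squares against the closed form derived above:

```python
# Check sum_{i=1}^n (2i-1)^2 against the closed form (2n-1)(2n)(2n+1)/6.
def sum_odd_squares(n):
    return sum((2 * i - 1) ** 2 for i in range(1, n + 1))

def closed_form(n):
    # (2n-1)(2n)(2n+1) is always divisible by 6, so integer division is exact.
    return (2 * n - 1) * (2 * n) * (2 * n + 1) // 6

for n in range(1, 50):
    assert sum_odd_squares(n) == closed_form(n)
```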
Task to determine the charged particle pseudo density from tracks and tracklets. This class determines \[ \left.\frac{d^2N_{ch}}{d\eta d\phi}\right|_{central} \] from tracks and tracklets. First, global tracks are investigated. The requirements on global tracks are \( n_{TPC clusters} \ge 70\) \( \chi^2_{TPC cluster} \le 4\) No daughter kinks Re-fit of TPC tracks Re-fit of ITS tracks No requirement on SPD clusters Secondly, ITS stand-alone tracks are investigated. The requirements on ITS stand-alone tracks are Re-fit of ITS tracks No requirement on SPD clusters Tracks that do not meet these quality conditions are flagged as rejected. Both kinds of tracks (global and ITS stand-alone) have requirements on the distance of closest approach (DCA) to the interaction vertex \((v_x,v_y,v_z)\). For tracks with SPD clusters: \( DCA_{xy} < 0.0182+0.0350/p_t^{1.01}\) \( DCA_{z} < 0.5\) For tracks without SPD clusters: \( DCA_{xy} < 1.5(0.0182+0.0350/p_t^{1.01})\) \( DCA_{z} < 0.5\) Tracks that do not meet these DCA requirements are flagged as secondaries. Thirdly, the number of SPD tracklets is investigated. If a tracklet is associated with a track, and that track has already been used, then that tracklet is ignored. An \((\eta,\phi)\) per-event histogram is then filled from these tracks and tracklets, and that histogram is stored in the output AOD event. Furthermore, a number of diagnostic \(\eta\) histograms are filled: \(\eta\) of all accepted tracks and tracklets \(\eta\) of accepted global tracks \(\eta\) of accepted ITS tracks \(\eta\) of accepted SPD tracklets At the end of the job, these histograms are normalized to the number of accepted events and the bin width to provide an estimate of \(dN_{ch}/d\eta\). Only minimum bias events with a \(v_z\) within the defined cut are analysed. Deprecated: This class is deprecated Definition at line 84 of file AddTaskCentralTracks.C.
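The \(p_t\)-dependent DCA selection above can be sketched as a small standalone function. This is an illustrative Python sketch, not the actual AliROOT/C++ implementation; the function name and argument layout are assumptions for the example, while the cut values are taken from the description above:

```python
# Illustrative sketch of the DCA requirements quoted above (not AliROOT code).
def passes_dca_cuts(dca_xy, dca_z, pt, has_spd_clusters):
    """Return True if a track passes the DCA cuts; pt in GeV/c, DCA in cm."""
    xy_limit = 0.0182 + 0.0350 / pt ** 1.01
    if not has_spd_clusters:
        xy_limit *= 1.5          # looser xy cut for tracks without SPD clusters
    return abs(dca_xy) < xy_limit and abs(dca_z) < 0.5
```

Tracks failing this predicate would be flagged as secondaries, per the description.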
Can you please help me with this simple question? I want to know if this is linear or not: $f: \Bbb R^2\to\Bbb R^3$, $f(0,0)=(1,0,0)$, $f(0,1)=(0,0,0)$. If you could explain why it is or isn't linear, I'd be really grateful. All the best. Every linear transformation fixes the origin, i.e. $T(0)=0$. To expand on the answer of rowcol, the definition of a linear operator is that it satisfies $f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)$. Note that if you set $\alpha = 0$ and pick any $x \in \mathbb{R}^2$, you get $$ f(0 x) = f(0) = (1, 0, 0) \neq 0 f(x). $$ Therefore, $f$ is not linear.
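The argument in the second answer can be written out mechanically. A tiny Python sketch, encoding $f$ only at the two given points:

```python
# f is known only at two points; that is enough to refute linearity,
# because any linear map must satisfy f(0) = 0.
f = {(0, 0): (1, 0, 0), (0, 1): (0, 0, 0)}

alpha = 0
x = (0, 1)
lhs = f[(alpha * x[0], alpha * x[1])]    # f(0 * x) = f(0, 0)
rhs = tuple(alpha * c for c in f[x])     # 0 * f(x) = (0, 0, 0)
assert lhs != rhs                        # homogeneity fails, so f is not linear
```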
I am trying to understand a modal logic countermodel, illustrating that the QK theorem, $$ \Box (\phi(x) \wedge \forall x \phi (x)) \rightarrow \Box \forall x \phi (x), $$ fails in the alternate semantics for modal logic given by David Lewis's counterpart theory. The countermodel is the following: It comes from Oliver Kutz's Kripke-Typ Semantiken für die modale Prädikatenlogik, p. 31. I don't understand German, but based on what I could glean from context and Google Translate it doesn't seem like the diagram is explained at all. Here's what I do understand: The large circles represent the two worlds, $w_1$ and $w_2$. The smaller dotted circles, which I would prefer to be loops, indicate that $C (u_1, u_1)$ and $C (u_2, u_2)$ (where $C$ is the "counterpart relation"). At the bottom of each circle is an indication that $\phi/\lnot \phi$ hold of $u_1/u_2$ in their respective worlds. What I don't get are the dotted arcs. Based on context I believe the quantifiers below the arcs are quantifiers over worlds. This makes sense because the original presentation of counterpart theory utilized a translation procedure to link statements of quantified modal logic to statements in the two-sorted (worlds and individuals) framework of counterpart theory. So, e.g., the formula $\Box \phi (x)$ gets translated as $$\forall v \forall y [W (v) \wedge I (y, v) \wedge C (y, x) \rightarrow \phi^v (y)],$$ which is read as "for every world $v$, if $y$ inhabits $v$ and $y$ is a counterpart of $x$, then $\phi$ is true of $y$ in $v$". Similarly, a statement like $\Diamond \phi (x)$ gets translated as $$\exists v \exists y (W (v) \wedge I (y, v) \wedge C (y, x) \wedge \phi^v (y)).$$ As a result there are various completions I could imagine to the quantification over worlds. 
It will be true that there is some world where $\exists x \lnot \phi (x)$ is true, namely $w_2$, and so $\Diamond \exists x \lnot \phi (x)$ is true in $w_1$ (and therefore $\lnot \Box \forall x \phi (x)$ is also true in $w_1$, falsifying the consequent of the conditional). Additionally, both $\phi (x)$ and $\forall x \phi (x)$ are true in $w_1$; since the only counterpart of $u_1$ is itself, it also follows that $\Box (\phi (x) \wedge \forall x \phi (x))$ is true in $w_1$. From this and the preceding we have that the conditional QK theorem is false at $w_1$ in this counterpart theory model. So obviously there is some quantification over worlds in the box and diamond statements, but I can't quite figure out what sort of a relationship the dotted arcs are attempting to indicate. Any help would be appreciated.
I was wondering whether integration by substitution is a method only for the Riemann integral, and whether integration by substitution is a special case of the Radon–Nikodym theorem, and why. Thanks and regards! There are measure-theoretic versions of integration by substitution, a couple of which can be found on the page you linked to (search the page for "Lebesgue"). Another version is an exercise in Royden's Real analysis (page 107 of the 2nd edition) which says that if $g$ is a monotone increasing, absolutely continuous function such that $g([a,b])=[c,d]$, and if $f$ is a Lebesgue integrable function on $[c,d]$, then $\displaystyle{\int_c^d f(y)dy=\int_a^bf(g(x))g'(x)dx}$. This is also an exercise in Wheeden and Zygmund's Measure and integral (page 124). One of the versions on the Wikipedia page generalizes this to subsets of $\mathbb{R}^n$ in the case where the change of variables is bi-Lipschitz. I don't see how it would be. The Radon-Nikodym theorem says that if $\nu$ and $\mu$ are measures on $X$ such that $\nu$ is absolutely continuous with respect to $\mu$, then there is a $\mu$-integrable function $g$ such that $\int_X fd\nu=\int_Xfgd\mu$ for all $\nu$-integrable $f$. Both integrals are over the same set, with no change of variables. Maybe I'm not seeing what you have in mind. However, you can at least derive the formula for linear change of variables for Lebesgue measure using the Radon-Nikodym theorem, and maybe there's more to this than I initially thought. If $T:\mathbb{R}^n\to\mathbb{R}^n$ is an invertible linear map and $m$ is Lebesgue measure, then $\int_{\mathbb{R}^n}fdm=|\det(T)|\int_{\mathbb{R}^n}f\circ Tdm$ for all integrable $f$. 
A proof of this (without Radon-Nikodym) is given in a 1998 article by Dierolf and Schmidt, and they mention that in the proof they could also have used the Radon-Nikodym theorem. They don't pursue this, but the idea is that $f\mapsto\int_{\mathbb{R}^n}f\circ Tdm$ corresponds to an absolutely continuous measure on $\mathbb{R}^n$, so there is a $g$ such that $\int_{\mathbb{R}^n}f\circ Tdm=\int_{\mathbb{R}^n}fgdm$. In particular, considering $f=\chi_E$ shows that $m(T^{-1}(E))=\int_Egdm$ for all measurable $E$. From this you can show that $g$ must be constant, and the constant must be the measure of the image of the unit $n$-cube under $T^{-1}$, which is $|\det(T^{-1})|=\frac{1}{|\det(T)|}$.
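The one-dimensional substitution formula quoted from Royden is easy to sanity-check numerically. A sketch using plain trapezoidal quadrature; the grid size and tolerance are arbitrary choices for the example:

```python
import math

# Check that ∫_{g(a)}^{g(b)} f(y) dy = ∫_a^b f(g(x)) g'(x) dx
# for an increasing, smooth g, using trapezoidal quadrature.
def trapezoid(h, lo, hi, n=20000):
    dx = (hi - lo) / n
    s = 0.5 * (h(lo) + h(hi)) + sum(h(lo + k * dx) for k in range(1, n))
    return s * dx

f = lambda y: math.sin(y)
g = lambda x: x ** 2          # increasing on [0, 2]
gp = lambda x: 2 * x          # g'

a, b = 0.0, 2.0
lhs = trapezoid(f, g(a), g(b))                     # ∫_0^4 sin y dy
rhs = trapezoid(lambda x: f(g(x)) * gp(x), a, b)   # ∫_0^2 sin(x^2) 2x dx
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - (1 - math.cos(4.0))) < 1e-6       # exact value of ∫_0^4 sin y dy
```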
Topological insulator is a fermion system with only short-ranged entanglement; what does the entanglement mean here? For example, the Hilbert space $V_s$ of a lattice of $N$ spin-1/2's is $V_s=V_1\otimes V_2\otimes...\otimes V_N$, where $V_i$ is the Hilbert space of the spin on site $i$. And the meaning of an entangled state in $V_s$ is clear: a state which can not be written as a direct tensor product of the $N$ single spin states. Now consider a spinless fermion system living on the same lattice as the spin-1/2 system; in the 2nd quantization framework, the fermion operators $c_i,c_j$ at different lattice sites $i,j$ do not commute with each other, and the Hilbert space $V_f$ of the fermion system can not be written as a direct product of $N$ single fermion Hilbert spaces. Thus, how to understand the entanglement in this fermion system? Mathematically, we can make a natural linear bijective map between $V_f$ and $V_s$, simply say, just let $\mid 0\rangle=\mid \downarrow\rangle,\mid 1\rangle=\mid \uparrow\rangle$. Thus, can we understand the entanglement of a fermion state in $V_f$ through its corresponding spin state in $V_s$? This post imported from StackExchange Physics at 2014-03-09 08:41 (UCT), posted by SE-user K-boy
Theorem Schmeorem. A Galilean invariant Lagrangian for any number of classical particles interacting with a potential: $$ S = \int \sum_k {m_k(\dot{x}_k-u)^2\over 2} + \lambda \dot{u} - U(x_k)\;\;\; dt $$ For any Galilean invariant Lagrangian $L(\dot{x}_k, x_k)$, the Lagrangian $$ L'(\dot{x}_k,x_k, \lambda, u) = L(\dot{x}_k-u,x_k) + \lambda \dot{u} $$ is explicitly Galilean invariant, and has the same dynamics (assuming the original Lagrangian was Galilean invariant). The Galilean properties of the x's are as usual. The dynamical variables extend to include $\lambda,u$, which act as Lagrange multipliers. The transformation laws for $x$, $u$ and $\lambda$ are: $ x \rightarrow x-vt $ $ u \rightarrow u-v $ $ \lambda \rightarrow \lambda $ And it is trivial to verify that the new Lagrangian is completely invariant. The equation of motion for $\lambda$ just makes $u$ constant, equal to $u_0$, while the equation of motion for $u$ integrates to $$ \lambda = - \sum_k m_k x_k + M u_0 t $$ up to an additive constant which I have set to zero. This is almost all the equations of motion, but there is one more equation which comes from extremizing the action with respect to $u_0$, which sets $$ u_0 = {1\over M}\sum_k m_k \dot{x}_k $$ Where the time is unimportant, because this is the center of mass velocity, which is conserved. The Noether prescription in the explicitly Galilean invariant action is trivial--- the conserved quantity associated with Galilean boosts is just $\lambda$, and this is indeed (up to sign and a factor of the total mass) the center of mass position at $t=0$. Why this works If you integrate the kinetic energy in the usual particle action by parts, you get: $$S = \int -\sum_k {m_k\over 2}\ddot{x}_k x_k - U(x_k)\; dt$$ This action is Galilean invariant on mass shell, meaning that the non-Galilean-invariant part is zero when you enforce the equations of motion. This means that adding some additional nondynamical fields should produce a Galilean invariant action off shell, and these are the $\lambda, u$. 
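The claimed invariance can be spot-checked numerically. A sketch (not part of the original argument) evaluating $L'$ before and after a boost at sample values; the potential term is omitted, on the assumption that a Galilean invariant $U$ depends only on coordinate differences and is unaffected by the boost at a fixed instant:

```python
# Check that L' = sum_k m_k (xdot_k - u)^2 / 2 + lam * udot is unchanged
# under the boost xdot -> xdot - v, u -> u - v, lam -> lam (v constant,
# so udot is also unchanged).
def L_prime(ms, xdots, u, udot, lam):
    return sum(m * (xd - u) ** 2 / 2 for m, xd in zip(ms, xdots)) + lam * udot

ms = [1.0, 2.0, 0.5]
xdots = [0.3, -1.2, 2.5]
u, udot, lam = 0.7, 0.1, -3.0

base = L_prime(ms, xdots, u, udot, lam)
for v in (1.0, -2.5, 10.0):
    boosted = L_prime(ms, [xd - v for xd in xdots], u - v, udot, lam)
    assert abs(boosted - base) < 1e-12
```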
Relation to Lorentz transformations When you perform a Lorentz transformation, the arclength particle action is invariant. But if you fix the origin of the Lorentz transformation on the initial time, the final time is transformed, so the path is not going to the same final time anymore after the transformation. When you take the non-relativistic limit, the final time becomes degenerate with the initial time, but the action cost from shifting the final time does not approach zero. This means that you need an extra variable to keep track of the infinitesimal bit of final time, and that this extra variable will need a nontrivial transformation law under Galilean transformations. To find out what this new variable should be, it is always best to consider the analogous thing for rotational invariance. Consider a string at tension with small deviations from horizontal, and let the deviation of the string from horizontal be h(t). The rotationally invariant potential energy is the arclength of the string $$ U = \int \sqrt{1+h'^2} dx $$ and this is the potential energy which gives the rotationally invariant analog of the wave equation. Once you go to small deviations, the expansion for U gives the usual wave-equation potential energy $$ U(h) = \int {1\over 2} h'^2 dx $$ and this is no longer rotationally invariant. But it is skew-invariant, meaning adding a constant slope line to h does not change the energy. Except that it does, by a perfect derivative: $$ U(h + ax) = \int {1\over 2}h'^2 + a h' + {a^2\over 2} dx$$ This is clearly the same exact situation as for the Lorentz invariance turning into Galilean invariance, except using rotational invariance, where everybody's intuition is firm. The additional $a^2\over 2$ energy is due to the quadratic extra length of a rotated string, while the linear perfect derivative $ah'$ integrates to $a (h_f - h_i)$, and this is the amount of reduction/increase in length when you rotate a tilted string. 
So to get a fully tilt-invariant potential energy, you need to add a variable $u$ which is dynamically constrained to equal the total tilt of the string. This variable will distinguish between different rotated versions of the string: rotating the string by itself without rotating the average tilt variable will change the energy--- this is because tilting the horizontal string between 0 and A is not quite the same as the pre-tilted string between 0 and A; the pre-tilted string has a different length. Rotating the total tilt by itself will change the energy, but rotating both does nothing, and this is the encoding of rotational invariance. So you need an average tilt variable to turn explicit rotational invariance into explicit tilt invariance. The total potential energy is then given by the deviations from the average tilt: $$ U = \int {1\over 2} (h'-u)^2 dx $$ and u transforms as $u-a$ under a tilt by a. This makes the potential energy invariant. The kinetic energy is given by the time dependence of h, and there must be a Lagrange multiplier to enforce that the total tilt is equal to the average tilt $$ S = \int {1\over 2} \dot h^2 - {1\over 2} (h'-u)^2 + \beta (u - h') dt dx $$ Where $\beta$ is a global-in-$x$ Lagrange multiplier for u, forcing it to equal h'. But it does no harm to allow u to vary in x, so long as the Lagrange multiplier enforces that it is constant. The way to do this is to change the Lagrange multiplier term to $$ - \int \lambda' (u(x) - h'(x)) dx = \int \lambda (u'(x) - h''(x))\, dx $$ But then the equation of motion kills the second term, so you only need the Lagrange multiplier term to be: $$ \int \lambda u'(x)$$ And the equations of motion automatically constrain u to be the average slope. These manipulations have exact analogs in Lorentz transformations, and explain the relation of the explicitly Galilean invariant action to the Lorentz action. The analog of average slope is the center of mass velocity.
The applications of solid codes to r-R and r-D languages Abstract A language S on a free monoid \(A^*\) is called a solid code if S is an infix code and overlap-free. A congruence \(\rho \) on \(A^*\) is called principal if there exists \(L\subseteq A^*\) such that \(\rho =P_L\), where \(P_L\) is the syntactic congruence determined by L. For any solid code S over A, Reis defined a congruence \(\sigma _S\) on \(A^*\) by means of S and showed it is principal (Semigroup Forum 41:291–306, 1990). A new simple proof of the fact that \(\sigma _S\) is principal is given in this paper. Moreover, two congruences \(\rho _S\) and \(\lambda _S\) on \(A^*\) defined by a solid code S are introduced and proved to be principal. For every class of the classification of \({{\mathbf {D}}}_{\mathbf{r}}\) and \({{\mathbf {R}}}_{\mathbf{r}}\), languages are given by means of the three principal congruences \(\sigma _S\), \(\rho _S\) and \(\lambda _S\). Keywords: Solid code · Principal congruence · Relatively regular language · Relatively disjunctive language Acknowledgements The authors thank the referees for their very careful and in-depth recommendations. This work was supported by the National Natural Science Foundation of China (Grant No. 11861071). Compliance with ethical standards Conflict of interest Author Zuhua Liu declares that he has no conflict of interest. Author Yuqi Guo declares that he has no conflict of interest. Author Jing Leng declares that she has no conflict of interest. Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors. References Ito M (1993) Dense and disjunctive properties of languages. In: Proceedings of Fundamentals of Computation Theory, International Symposium, FCT '93, Szeged, Hungary, August 23–27, 1993, pp 31–49 Jürgensen H, Yu SS (1990) Solid codes. J Inf Process Cybern 26(10):563–574 Shyr HJ, Thierrin G (1977) Disjunctive languages and codes. In: Proceedings of the 1977 International FCT Conference, Poznan, Poland, Lecture Notes in Computer Science, No. 56. Springer, Berlin, pp 171–176
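The defining property in the abstract (infix code plus overlap-free) is easy to test for finite sets of words. A Python sketch, assuming the standard definitions: no codeword is a proper factor of another, and no nonempty proper prefix of a codeword coincides with a proper suffix of a (possibly identical) codeword:

```python
# Test whether a finite set S of words is a solid code.
def is_infix_code(S):
    # No word of S is a proper factor (substring) of another word of S.
    return not any(u != w and u in w for u in S for w in S)

def is_overlap_free(S):
    # No nonempty proper prefix of u equals a proper suffix of w.
    for u in S:
        for w in S:
            for k in range(1, min(len(u), len(w))):
                if u[:k] == w[-k:]:
                    return False
    return True

def is_solid_code(S):
    return is_infix_code(S) and is_overlap_free(S)
```

For example, {ab, cd} is solid, while {aa} fails overlap-freeness (the word overlaps itself) and {a, ab} fails the infix condition.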
Norm Minimization Examples 1 Recall from the Norm Minimization page that if $V$ is an inner product space and $U$ is a finite-dimensional subspace of $V$ where $V = U \oplus U^{\perp}$, and if we let $v \in V$, then for every vector $u \in U$ we have that: $$\| v - P_U(v) \| \leq \| v - u \| \quad (1)$$ Furthermore, equality holds if and only if we take $u = P_U(v)$. We will now identify how to use the theorem above in order to solve various minimization problems. Step 1 We must first identify a vector space $V$ and define an inner product on $V$. Step 2 We will want to also identify the subspace $U$ of $V$ in which our solution lies. We must note that $U$ must be a finite-dimensional vector space. Step 3 We must then find a basis of $U$. Using the Gram-Schmidt procedure, we can then convert this basis to an orthonormal basis, $\{ e_1, e_2, ..., e_n \}$ of $U$ with respect to the inner product defined on $V$. Step 4 In applying Theorem 1 from above, we will have that the normed difference $\| v - P_U(v) \|$ is less than or equal to the normed difference $\| v - u \|$ for every vector $u \in U$. We therefore choose $u = P_U(v)$ to minimize this difference. Step 5 The vector $P_U(v) = \langle v, e_1 \rangle e_1 + \langle v, e_2 \rangle e_2 + ... + \langle v, e_n \rangle e_n$ (which is in $U$ since it is a linear combination of the basis vectors of $U$) is our solution. Let's look at an example. Example 1 Find the polynomial $p(x) \in \wp_3 (\mathbb{R})$ that minimizes $\int_0^1 \mid 2 + 3x - p(x) \mid^2 \: dx$ such that $p(x) \in \wp_3 ( \mathbb{R})$ and $p(0) = 0$ and $p'(0) = 0$. We first identify that the larger vector space we're looking at is $V = \wp_3 ( \mathbb{R} )$. Let $q(x) = 2 + 3x$. We define an inner product on $V$ by: $$\langle p, q \rangle = \int_0^1 p(x) q(x) \: dx \quad (2)$$ We're looking for a polynomial in $\wp_3 (\mathbb{R})$ such that $p(0) = 0$ and $p'(0) = 0$. The set of polynomials of degree less than or equal to $3$ that satisfy these conditions is a subspace $U$ of $V$ defined by: $$U = \{ p \in \wp_3 (\mathbb{R}) : p(0) = 0, \: p'(0) = 0 \} \quad (3)$$ We will now find a basis of $U$. Let $p(x) = a_0 + a_1x + a_2x^2 + a_3x^3$. Note that $p(0) = a_0$, but we must have that $p(0) = 0$. Therefore $a_0 = 0$. Differentiate $p$ to get that $p'(x) = a_1 + 2a_2x + 3a_3x^2$. Note that $p'(0) = a_1$, but we must have that $p'(0) = 0$, which implies that $a_1 = 0$. Therefore: $$p(x) = a_2x^2 + a_3x^3 \quad (4)$$ We can clearly see that $U = \mathrm{span} (x^2, x^3)$, so $\mathrm{dim} (U) = 2$. Furthermore, $\{ x^2, x^3 \}$ is linearly independent in $V$, so take $\{ x^2, x^3 \}$ to be our starting basis. We must now use the Gram-Schmidt procedure in order to convert this basis to an orthonormal basis. Let $v_1 = x^2$ and $v_2 = x^3$. Let $\{ e_1, e_2 \}$ be our orthonormal basis. Then we have that: $$e_1 = \frac{v_1}{\| v_1 \|} = \sqrt{5} x^2 \quad (5)$$ $$e_2 = \frac{v_2 - \langle v_2, e_1 \rangle e_1}{\| v_2 - \langle v_2, e_1 \rangle e_1 \|} = \sqrt{7} (6x^3 - 5x^2) \quad (6)$$ Therefore $\{ \sqrt{5}x^2, \sqrt{7} (6x^3 - 5x^2) \}$ is an orthonormal basis of $U$. We now let $p(x) = P_U(q(x)) = P_U(2 + 3x)$, which is given by the formula: $$p(x) = \langle q, e_1 \rangle e_1 + \langle q, e_2 \rangle e_2 \quad (7)$$
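Example 1 can be verified exactly with rational arithmetic: the projection $P_U(q)$ is characterized by orthogonality of the residual $q - p$ to the basis of $U$. A Python sketch solving the normal equations directly (rather than running Gram-Schmidt):

```python
from fractions import Fraction as F

# Polynomials are coefficient lists [a0, a1, ...]; the inner product is
# <p, q> = ∫_0^1 p(x) q(x) dx, computed exactly from ∫_0^1 x^n dx = 1/(n+1).
def inner(p, q):
    return sum(F(a) * F(b) / (i + j + 1)
               for i, a in enumerate(p) for j, b in enumerate(q))

q = [2, 3]                                  # q(x) = 2 + 3x
x2, x3 = [0, 0, 1], [0, 0, 0, 1]            # basis of U = span(x^2, x^3)

g11, g12, g22 = inner(x2, x2), inner(x2, x3), inner(x3, x3)
b1, b2 = inner(q, x2), inner(q, x3)
det = g11 * g22 - g12 * g12
c2 = (b1 * g22 - b2 * g12) / det            # coefficient of x^2
c3 = (g11 * b2 - g12 * b1) / det            # coefficient of x^3
p = [0, 0, c2, c3]                          # P_U(q) = 24 x^2 - (203/10) x^3

residual = [a - b for a, b in zip(q + [0, 0], p)]
assert inner(residual, x2) == 0             # q - p is orthogonal to U
assert inner(residual, x3) == 0
```

The orthogonality assertions confirm that this $p$ is the minimizer promised by the theorem.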
Answer $$\frac{\cos(x+y)+\cos(x-y)}{\sin(x-y)+\sin(x+y)}=\cot x$$ The equation is an identity. Work Step by Step $$\frac{\cos(x+y)+\cos(x-y)}{\sin(x-y)+\sin(x+y)}=\cot x$$ We work from the left-hand side. $$A=\frac{\cos(x+y)+\cos(x-y)}{\sin(x-y)+\sin(x+y)}$$ Here, the application of the following four identities is essential: $$\cos(A+B)=\cos A\cos B-\sin A\sin B$$ $$\cos(A-B)=\cos A\cos B+\sin A\sin B$$ $$\sin(A+B)=\sin A\cos B+\cos A\sin B$$ $$\sin(A-B)=\sin A\cos B-\cos A\sin B$$ That means $$A=\frac{\cos x\cos y-\sin x\sin y+\cos x\cos y+\sin x\sin y}{\sin x\cos y-\cos x\sin y+\sin x\cos y+\cos x\sin y}$$ $$A=\frac{(\cos x\cos y+\cos x\cos y)+(-\sin x\sin y+\sin x\sin y)}{(\sin x\cos y+\sin x\cos y)+(-\cos x\sin y+\cos x\sin y)}$$ $$A=\frac{2\cos x\cos y}{2\sin x\cos y}$$ $$A=\frac{\cos x}{\sin x}$$ $$A=\cot x\hspace{1cm}\text{(Quotient Identity)}$$ The equation has thus been verified to be an identity.
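As a numerical spot-check of the verified identity, we can evaluate both sides at a few sample points (avoiding zeros of $\sin x \cos y$ in the denominator):

```python
import math

# Left-hand side of the identity (cos(x+y) + cos(x-y)) / (sin(x-y) + sin(x+y)).
def lhs(x, y):
    return (math.cos(x + y) + math.cos(x - y)) / (math.sin(x - y) + math.sin(x + y))

for x, y in [(0.7, 0.2), (1.1, -0.4), (2.0, 0.9)]:
    assert abs(lhs(x, y) - math.cos(x) / math.sin(x)) < 1e-12   # cot x
```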
I was recently asked an interesting elementary question about the number of possible order types of the final segments of an ordinal, and in particular, whether there could be an ordinal realizing infinitely many different such order types as final segments. Since I found it interesting, let me write here how I replied. The person asking me had noted that every nonempty final segment of the first infinite ordinal $\omega$ is isomorphic to $\omega$ again, since if you start counting from $5$ or from a million, you have just as far to go in the natural numbers. Thus, if one includes the empty final segment, there are precisely two order-types that arise as final segments of $\omega$, namely, $0$ and $\omega$ itself. A finite ordinal $n$, in contrast, has precisely $n+1$ many final segments, corresponding to each of the possible cuts between any of the elements or before all of them or after all of them, and these final segments, considered as orders themselves, all have different sizes and hence are not isomorphic. He wanted to know whether an ordinal could have infinitely many different order-types for its tails. Question. Is there an ordinal having infinitely many different isomorphism types for its final segments? The answer is no, and I’d like to explain why. I’ll discuss two different arguments, the first being an easy direct argument aimed only at this answer, and the second being a more careful analysis aimed at understanding exactly how many and which order-types arise as the order type of a final segment of an ordinal $\alpha$. Theorem. Every ordinal has only finitely many order types of its final segments. Proof: Suppose that $\alpha$ is an ordinal, and consider the order types of the final segments $[\eta,\alpha)$, for $\eta\leq\alpha$. Note that as $\eta$ increases, the final segment $[\eta,\alpha)$ becomes smaller as a suborder, and so its order type does not go up. And since these are well-orders, it can go down only finitely many times. 
So only finitely many order types arise, and the theorem is proved. QED But let’s figure out exactly how many and which order types arise. Theorem. The number of order types of final segments of an ordinal $\alpha$ is precisely $n+1$, where $n$ is the number of terms in the Cantor normal form of $\alpha$, and one can describe those order types in terms of the normal form of $\alpha$. Cantor proved that every ordinal $\alpha$ can be uniquely expressed as a finite sum $$\alpha=\omega^{\beta_n}+\cdots+\omega^{\beta_0},$$ where $\beta_n\geq\cdots\geq\beta_0$, and this is called the Cantor normal form of the ordinal. There are alternative forms, where one allows terms like $\omega^\beta\cdot n$ for finite $n$, but in my favored formulation, one simply expands this into $n$ terms with $\omega^\beta+\cdots+\omega^\beta$. In particular, the ordinal $\omega=\omega^1$ has exactly one term in its Cantor normal form, and a finite number $n=\omega^0+\cdots+\omega^0$ has exactly $n$ terms in its Cantor normal form. So the statement of the theorem agrees with the calculations that we had made at the very beginning. Proof: First, let’s observe that every nonempty final segment of an ordinal of the form $\omega^\beta$ is isomorphic to $\omega^\beta$ again. This amounts to the fact that ordinals of the form $\omega^\beta$ are additively indecomposable, or in other words, closed under ordinal addition, since the final segments of an ordinal $\alpha$ are precisely the ordinals $\zeta$ such that $\alpha=\xi+\zeta$ for some $\xi$. If $\alpha$ is additively indecomposable and the final segment is nonempty (so that $\xi<\alpha$), then it cannot be that $\zeta<\alpha$, and so all nonempty final segments are isomorphic to $\alpha$. So let’s prove that $\omega^\beta$ is additively indecomposable. This is clear if $\beta=0$, since the only ordinal less than $\omega^0=1$ is $0$ and $0+0<1$. 
If $\beta$ is a limit ordinal, then the ordinals $\omega^\eta$ for $\eta<\beta$ are unbounded in $\omega^\beta$, and adding them stays below because $\omega^\eta+\omega^\eta=\omega^\eta\cdot 2\leq\omega^\eta\cdot\omega=\omega^{\eta+1}<\omega^\beta$. If $\beta=\delta+1$ is a successor ordinal, then $\omega^\beta=\omega^{\delta+1}=\omega^\delta\cdot\omega=\sup_{n<\omega}\omega^\delta\cdot n$, but again adding them stays below because $\omega^\delta\cdot n+\omega^\delta\cdot m=\omega^\delta\cdot(n+m) < \omega^\delta\cdot\omega=\omega^\beta$. To prove the theorem, consider any ordinal $\alpha$ with Cantor normal form $\alpha=\omega^{\beta_n}+\cdots+\omega^{\beta_0}$, where $\beta_n\geq\cdots\geq\beta_0$. So as an order type, $\alpha$ consists of finitely many pieces, the first of type $\omega^{\beta_n}$, the next of type $\omega^{\beta_{n-1}}$ and so on up to $\omega^{\beta_0}$. Any final segment of $\alpha$ therefore consists of a final segment of one of these segments, together with all the segments after that segment (and omitting any segments prior to it, if any). But since these segments all have the form $\omega^{\beta_i}$, they are additively indecomposable and therefore are isomorphic to all their nonempty final segments. So any final segment of $\alpha$ is order-isomorphic to an ordinal whose Cantor normal form simply omits some (or none) of the terms from the front of the Cantor normal form of $\alpha$. Since we may start with any of the $n$ terms (or none), this gives precisely $n+1$ many order types of the final segments of $\alpha$, as claimed. The argument shows, furthermore, that the possible order types of the final segments of $\alpha$, where $\alpha=\omega^{\beta_n}+\cdots+\omega^{\beta_0}$, are precisely the ordinals of the form $\omega^{\beta_k}+\cdots+\omega^{\beta_0}$, omitting terms only from the front, where $k\leq n$. QED
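The count in the theorem can be illustrated in a few lines of Python. For the sketch, represent an ordinal by the exponent list of its Cantor normal form (using natural-number exponents for concreteness, e.g. $\omega^2+\omega+\omega+1$ is `[2, 1, 1, 0]`); by the theorem, the distinct final-segment types are exactly the suffixes of the term list:

```python
# Since each term ω^β is additively indecomposable, every final segment is
# isomorphic to the ordinal obtained by dropping terms from the front of the
# Cantor normal form, i.e. to a suffix of the term list (including the empty
# suffix, the empty final segment).
def final_segment_types(cnf_terms):
    return {tuple(cnf_terms[i:]) for i in range(len(cnf_terms) + 1)}

assert len(final_segment_types([1])) == 2            # ω: types 0 and ω
assert len(final_segment_types([0, 0, 0])) == 4      # the finite ordinal 3
assert len(final_segment_types([2, 1, 1, 0])) == 5   # n + 1 in general
```

The suffixes all have different lengths, so there are always exactly $n+1$ of them, matching the theorem.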
Consider the elliptic second-order PDE on some bounded domain $D$ (in any dimension) $-\Delta_D u + \alpha u = 0$ subject to the constraint $\gamma^i \nabla_i u + \beta u = 0$, where $\Delta_D$ is the Laplacian on $D$ (with respect to some metric), and $\alpha$, $\beta$, and $\gamma^i$ are real smooth functions on $D$. A zeroth question, to address user254433's comment below, is: what compatibility conditions on $\alpha$, $\beta$, $\gamma^i$, and presumably the metric on $D$ are required to ensure that the system above has solutions? user254433 derived such a condition in one dimension. Now my main question. Assume the system satisfies the above compatibility conditions so that it admits solutions, and suppose that any (real) solution to the above system that is positive on the boundary $\partial D$ must be positive everywhere in $D$. What does this imply about the coefficients $\alpha$, $\beta$, and $\gamma^i$? Note that here I'm asking for necessary conditions on the coefficients. Indeed, I do know of a sufficient condition: if $\alpha \geq 0$ everywhere, this ensures by standard minimum principles that $u$ cannot have any negative local minima (since otherwise at the minimum we'd have $-\Delta_D u + \alpha u < 0$), and so if it's positive on $\partial D$ it must be positive everywhere. However, if necessary conditions on the coefficients are unknown, I'm also interested in learning about sufficient conditions that are weaker than $\alpha \geq 0$ everywhere. Note that these conditions (necessary or sufficient) need not be local; for instance I'm happy to have conditions on integrals of these coefficients over $D$, etc.
Infinite / Finite-Dimensional Vector Space Comparison Theorem Theorem 1: If $U$ is a subspace of a finite-dimensional vector space $V$, then $U$ is finite-dimensional. Proof: First consider the case where $U$ is the zero subspace. In this case $U = \{ 0 \}$ is clearly finite-dimensional, since it is spanned by the finite set $\{ 0 \}$. Now consider the case where $U \neq \{ 0 \}$. Let $u_1 \in U$ where $u_1 \neq 0$. Step 1: If $U = \mathrm{span} (u_1)$ then we are done, and $U$ is finite-dimensional. If not, then there exists another vector $u_2 \in U$ where $u_2 \not \in \mathrm{span} (u_1)$. Step 2: If $U = \mathrm{span} (u_1, u_2)$ then once again we are done, and $U$ is finite-dimensional. If not, then there exists another vector $u_3 \in U$ where $u_3 \not \in \mathrm{span} (u_1, u_2)$. Step j: If $U = \mathrm{span} (u_1, u_2, ..., u_j)$ then $U$ is finite-dimensional, and if not, then there exists another vector $u_{j+1} \in U$ where $u_{j+1} \not \in \mathrm{span} (u_1, u_2, ..., u_j)$. Now since $V$ is finite-dimensional, $V$ is spanned by a finite set of vectors, say $\{ v_1, v_2, ..., v_n \}$. Notice that each $\{ u_1, u_2, ..., u_j \}$ constructed above is a linearly independent set of vectors in $U$ (each $u_i$ lies outside the span of its predecessors), and since $U$ is a subspace of $V$, $\{ u_1, u_2, ..., u_j \}$ is a linearly independent set of vectors in $V$ as well. Recall from the Finite-Dimensional Linearly Independent Set of Vectors theorem that any linearly independent set of vectors in a vector space $V$ has size less than or equal to that of any spanning set of $V$, and so $j \leq n$. Therefore, after at most $n$ steps the process must halt, yielding a finite spanning set for $U$, and so $U$ is finite-dimensional. $\blacksquare$ Corollary 1: If $U$ is a subspace of a vector space $V$, and $U$ is infinite-dimensional, then $V$ is also infinite-dimensional.
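The proof's step-by-step construction is effectively an algorithm: scan through vectors and keep one only if it is not already in the span of those kept so far. Here is a rough numerical sketch of that process for vectors in $\mathbb{R}^m$ (the function names are mine, and a float tolerance stands in for exact arithmetic):

```python
def greedy_basis(vectors, tol=1e-9):
    """Sketch of the proof's Step-j process: keep a vector only if it
    is not in the span of those already kept.  Kept vectors are stored
    in echelon form so the span test is plain elimination."""
    echelon = []   # reduced copies of the kept vectors
    kept = []      # the kept vectors themselves
    for v in vectors:
        r = list(map(float, v))
        for b in echelon:
            # find the pivot of b and eliminate that coordinate from r
            p = next(i for i, x in enumerate(b) if abs(x) > tol)
            r = [ri - (r[p] / b[p]) * bi for ri, bi in zip(r, b)]
        if any(abs(x) > tol for x in r):   # v adds a new direction
            echelon.append(r)
            kept.append(list(v))
    return kept

print(greedy_basis([[1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 2]]))
# [[1, 0, 0], [0, 1, 0], [0, 0, 2]]
```

Exactly as in the proof, the loop can keep at most $m$ vectors from $\mathbb{R}^m$, so it always halts with a linearly independent spanning set for the span of the input.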
Proof: This is just the contrapositive of the theorem proven above; that is, if $U$ is not finite-dimensional, then $V$ is not finite-dimensional. $\blacksquare$ Let's now look at an example of applying the theorem from above. Example 1 Show that the vector space of infinite sequences $\mathbb{F}^{\infty}$ is infinite-dimensional. For each $n = 1, 2, ...$, consider the subspace of $\mathbb{F}^{\infty}$ consisting of sequences that are zero beyond the first $n$ coordinates; it is a copy of $\mathbb{F}^n$, spanned by the linearly independent set of lists $\{ (1, 0, 0, ..., 0), (0, 1, 0, ..., 0), ..., (0, 0, ..., 0, 1) \}$ of length $n$, and so it has dimension $n$. If $\mathbb{F}^{\infty}$ were finite-dimensional, then by the theorem above each of these subspaces would have dimension at most $\dim \mathbb{F}^{\infty}$, which fails as soon as $n$ exceeds that dimension. Hence $\mathbb{F}^{\infty}$ is infinite-dimensional.
On Entire Functions Belonging to a Generalized Class of Convergence Abstract In terms of Taylor coefficients and distribution of zeros, we describe the class of entire functions f defined by the convergence of the integral \(\int\limits_{r_0 }^\infty {\frac{{\gamma (\ln M_{f} (r))}}{{r^{\rho + 1} }}} dr\), where γ is a slowly increasing function. English version (Springer): Ukrainian Mathematical Journal 54 (2002), no. 4, pp 536-547. Citation Example: Gal' Yu. M., Mulyava O. M., Sheremeta M. M. On Entire Functions Belonging to a Generalized Class of Convergence // Ukr. Mat. Zh. - 2002. - 54, № 4. - pp. 439-446.
markus 2005-01-02 19:40:05 UTC Hello LaTeX'ers, I would like left- and right-pointing arrows over a symbol, e.g. over E. Of course this can be done with \newcommand{\leftvect}[1]{\stackrel{\leftarrow}{#1}} \newcommand{\rightvect}[1]{\stackrel{\rightarrow}{#1}} but I find the arrow used by \vec much nicer. Is there a way to have the arrow that \vec uses pointing left? Well, better late (11 years ;-) than never: Thanks a lot for all help, I found that so far the following works quite well for me: \usepackage{graphicx} \newcommand{\lvec}[1]{\,\,\reflectbox{$\vec{\reflectbox{$\!\!#1$}}$}} However, the spacing is rather manual and may not fit in all cases/for all font sizes. Note that changes done with \reflectbox and \rotatebox are not shown in some dvi viewers; create ps/pdf first. Regards, Markus PS: Some keywords for those who -- like me -- searched one or more hours to find a solution online: This code provides a way to draw \vec in LaTeX from right to left, like \leftarrow but as an accent. I did not find a way to solve this with some \DeclareMathAccent command, since the character is missing. \overleftarrow is not convenient for inverting \vec, since it is too large. Flipping \vec may also work with the help of the accents package.
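For anyone who wants to try it directly, the macro drops into a minimal compilable file like this (graphicx is the only package needed; the \,\, and \!\! spacing is the manual tweak mentioned above and may need adjusting per font size):

```latex
\documentclass{article}
\usepackage{graphicx}
% Left-pointing analogue of \vec, built by reflecting \vec twice;
% the \,\, and \!\! spacing is manual and may need per-case tuning.
\newcommand{\lvec}[1]{\,\,\reflectbox{$\vec{\reflectbox{$\!\!#1$}}$}}
\begin{document}
$\lvec{E}$ and $\vec{E}$
\end{document}
```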
Another way to look at this problem is to consider it in position space, and then transform the solution to its momentum space representation. While this may seem like an unnecessary amount of work, it may illuminate to you the delta function solution in a different way. So, in position space we have $$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2}=E\psi\:\:\:\xrightarrow{\kappa^2\:\equiv\:2mE/\hbar^2}\:\:\: \frac{d^2\psi}{dx^2}=-\kappa^2\psi\:\:\:\rightarrow\:\:\:\psi(x)=Ae^{i\kappa x}+Be^{-i\kappa x}$$ Before casting this into its momentum space representation, recall the integral representation of the Dirac delta function (which can be arrived at by considering orthogonality of position or momentum eigenstates): $$\delta(\alpha-\beta)=\frac{1}{2\pi\hbar}\int e^{ix(\alpha-\beta)/\hbar}dx.$$ Using the above, let's Fourier transform our solution to get its momentum representation: $$\psi(p) = \frac{1}{\sqrt{2\pi\hbar}}\int\psi(x)e^{-ipx/\hbar}dx = \frac{A}{\sqrt{2\pi\hbar}}\int e^{ix(\kappa-p/\hbar)}dx + \frac{B}{\sqrt{2\pi\hbar}}\int e^{ix(-\kappa-p/\hbar)}dx\\ = \sqrt{2\pi\hbar}\Big[A\delta(\kappa-p/\hbar)+B\delta(-\kappa-p/\hbar)\Big].$$ Now stick in $\kappa = \sqrt{2mE}/\hbar$, and use the fact that $\delta(-x) = \delta(x)$ and $\:\delta(\alpha x) = \delta(x)/|\alpha|$ to rewrite this as $$\psi(p) = \tilde{A}\delta(p-\sqrt{2mE}) + \tilde{B}\delta(p+\sqrt{2mE}),$$ where I've collected constants and called them $\tilde{A}$ and $\tilde{B}$ for simplicity of the final solution. Obviously this is more work than noticing the solution in momentum space corresponds to delta function behavior, but perhaps you'll find this route illuminating; or, if nothing else, a nice consistency check.
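The delta-function structure can also be seen numerically: on a periodic grid, the discrete Fourier transform of a plane wave puts all of its weight in a single bin, the discrete stand-in for $\delta(p-\hbar\kappa)$ derived above. A small sketch (grid size and mode number are arbitrary choices of mine):

```python
import cmath

# psi(x) = exp(i k x) sampled on a periodic grid of N points; its
# discrete Fourier transform concentrates in the single bin k --
# the discrete analogue of the delta function psi(p) above.
N, k = 32, 5
psi_x = [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]
psi_p = [sum(psi_x[n] * cmath.exp(-2j * cmath.pi * m * n / N)
             for n in range(N)) / N for m in range(N)]
peak = max(range(N), key=lambda m: abs(psi_p[m]))
print(peak)  # 5
```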
Difference between revisions of "Vopenka" Revision as of 00:58, 3 October 2017 Vopěnka's principle is a large cardinal axiom at the upper end of the large cardinal hierarchy that is particularly notable for its applications to category theory. In a set theoretic setting, the most common definition is the following: For any language $\mathcal{L}$ and any proper class $C$ of $\mathcal{L}$-structures, there are distinct structures $M, N\in C$ and an elementary embedding $j:M\to N$. For example, taking $\mathcal{L}$ to be the language with one unary and one binary predicate, we can consider for any ordinal $\eta$ the class of structures $\langle V_{\alpha+\eta},\{\alpha\},\in\rangle$, and conclude from Vopěnka's principle that a cardinal that is at least $\eta$-extendible exists. In fact, if Vopěnka's principle holds, then there is a proper class of extendible cardinals; bounding the strength of the axiom from above, we have that if $\kappa$ is almost huge, then $V_\kappa$ satisfies Vopěnka's principle.
Formalisations As stated above and from the point of view of ZFC, this is actually an axiom schema, as we quantify over proper classes, which from a purely ZFC perspective means definable proper classes. A somewhat stronger alternative is to view Vopěnka's principle as an axiom in a second-order set theory capable of dealing with proper classes, such as von Neumann-Gödel-Bernays set theory. This is a strictly stronger assertion. [1] Finally, one may relativize the principle to a particular cardinal, leading to the concept of a Vopěnka cardinal. Vopěnka cardinals An inaccessible cardinal $\kappa$ is a Vopěnka cardinal if and only if $V_\kappa$ satisfies Vopěnka's principle, where we interpret the proper classes of $V_\kappa$ as the subsets of $V_\kappa$ of cardinality $\kappa$. As we mentioned above, every almost huge cardinal is a Vopěnka cardinal. Equivalent statements The schema form of Vopěnka's principle is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals for every $n$; indeed, there is a level-by-level stratification of Vopěnka's principle, in which Vopěnka's principle for $\Sigma_{n+2}$-definable classes corresponds to the existence of a $C^{(n)}$-extendible cardinal greater than the ranks of the parameters. [2] Other points to note Whilst Vopěnka cardinals are very strong in terms of consistency strength, a Vopěnka cardinal need not even be weakly compact. Indeed, the definition of a Vopěnka cardinal is a $\Pi^1_1$ statement over $V_\kappa$, and $\Pi^1_1$ indescribability is one of the equivalent definitions of weak compactness. Thus, the least weakly compact Vopěnka cardinal must have (many) other Vopěnka cardinals below it. References Bagaria, Joan and Casacuberta, Carles and Mathias, A R D and Rosický, Jiří. Definable orthogonality classes in accessible categories are small. Journal of the European Mathematical Society 17(3):549-589.
I'm having some trouble solving this limit: $$\lim_{x\to\infty} x\left[\left(\cosh x\right)^ \frac1x - \left(1+\frac1x\right)^x\right]$$ It's part of a set of limits which should be solved using Taylor. I tried this road: $$\lim_{x\to\infty} x\left[e^{\frac1x\ln(\cosh x)}-e^{x\ln\left(1+\frac1x\right)}\right]$$ I then tried algebraic manipulation using the definition of hyperbolic cosine $\frac12(e^x+e^{-x})$, and then I also played a bit with l'Hopital, but it turns into something suspiciously complicated... I'm taking real analysis 1 and my toolset is: -algebraic manipulation -Taylor series -l'Hopital (if and only if all else fails) I can't use more advanced techniques since they are not part of the course. I'm sure there's something obvious I'm missing. Any idea on how to proceed?
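Not a substitute for the Taylor work, but a quick numerical sanity check can tell you what answer to aim for. Expanding $(\cosh x)^{1/x}=e\,(1-\frac{\ln 2}{x}+\dots)$ and $(1+\frac1x)^x=e\,(1-\frac{1}{2x}+\dots)$ suggests the limit $e(\tfrac12-\ln 2)\approx-0.525$, and evaluating at a moderately large $x$ agrees:

```python
import math

def f(x):
    # exp(log(cosh x)/x) avoids the overflow that cosh(x)**(1/x) can hit
    return x * (math.exp(math.log(math.cosh(x)) / x) - (1 + 1/x)**x)

target = math.e * (0.5 - math.log(2))   # candidate limit from Taylor
print(f(600.0), target)                 # both around -0.525
```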
I will elaborate on Xi'an's response. The Metropolis-Adjusted Langevin Algorithm, as its name implies, is based on the Langevin diffusion, which is represented by the following stochastic differential equation (SDE): $ d X_t = - \nabla f(X_t) dt + \sqrt{2} d B_t $, where $B_t$ is the standard Brownian motion and the target density is $\pi(x) = \exp(-f(x)) / Z$ for some $Z>0$. One can show that, under arguably mild assumptions, the solution process to this SDE, i.e. the $(X_t)_{t \geq 0}$ which solves the above equation, is a Markov process and admits a unique stationary distribution, which is indeed $\pi$. Under some more assumptions, we can show that $(X_t)_t$ is ergodic with respect to $\pi$. This means that, if we could exactly simulate the process $(X_t)_t$, the distribution of $X_t$ would converge to $\pi$; therefore, we could use the trajectories for approximating expectations with respect to $\pi$, for instance. But the issue is that we cannot simulate this process exactly (in general), since it is a continuous-time process. The idea in MALA is therefore to discretize this process by using a first-order scheme (namely the Euler-Maruyama discretization), which gives the following recursion: $X_{n+1} = X_n - h \nabla f(X_n) + \sqrt{2h} N_{n+1}$, where $N_{n}$ is a standard Gaussian random variable. In other words, we can view $X_{n+1}$ as a random variable that is drawn from the following distribution: $\mathcal{N}(X_n - h \nabla f(X_n), 2h \mathbf{I})$, where $\mathbf{I}$ denotes the identity matrix of appropriate size. However, due to discretization, this process does not target $\pi$ anymore; therefore, a Metropolis-Hastings acceptance-rejection step is appended to this procedure to remove the discretization error (hence the name Metropolis 'adjusted'). This is the reason why a Gaussian proposal is used in MALA. More information can be found in the following paper, which is the standard reference for MALA: Roberts, G. O., & Tweedie, R. L. (1996).
Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4), 341-363. On the other hand, you are not restricted to Gaussian proposals. It has been shown that heavy-tailed proposals can have benefits of their own; see, for example: Jarner, S., & Roberts, G. (2007). Convergence of heavy-tailed Monte Carlo Markov chain algorithms. Scandinavian Journal of Statistics, 34(4), 781-815. Şimsekli, U. (2017). Fractional Langevin Monte Carlo: Exploring Lévy driven stochastic differential equations for Markov chain Monte Carlo. In Proceedings of the 34th International Conference on Machine Learning, Volume 70 (pp. 3200-3209). The latter does not have a Metropolis correction step, but it is still closely related.
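To make the recursion and the correction step concrete, here is a minimal sketch (my own illustration, not taken from either paper) of MALA for a one-dimensional standard Gaussian target; all names and tuning choices are illustrative:

```python
import math, random

random.seed(0)

# Target: pi(x) ∝ exp(-f(x)) with f(x) = x^2/2, so grad f(x) = x.
def f(x):      return 0.5 * x * x
def grad_f(x): return x

def log_q(x_to, x_from, h):
    # log density (up to a constant) of the Langevin proposal
    # N(x_from - h*grad_f(x_from), 2h) evaluated at x_to
    mean = x_from - h * grad_f(x_from)
    return -((x_to - mean) ** 2) / (4 * h)

def mala(n_steps, h=0.5, x0=0.0):
    xs, x = [], x0
    for _ in range(n_steps):
        prop = x - h * grad_f(x) + math.sqrt(2 * h) * random.gauss(0, 1)
        # Metropolis-Hastings correction removes the discretization bias
        log_alpha = (f(x) - f(prop)) + log_q(x, prop, h) - log_q(prop, x, h)
        if math.log(random.random()) < log_alpha:
            x = prop
        xs.append(x)
    return xs

s = mala(20000)
mean = sum(s) / len(s)
var = sum((v - mean) ** 2 for v in s) / len(s)
print(mean, var)  # close to 0 and 1 for the N(0,1) target
```

Dropping the `log_alpha` test (always accepting) recovers the unadjusted Langevin recursion, whose stationary distribution is biased away from $\pi$ for any fixed step size $h$.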
Search Now showing items 1-10 of 20 Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ... Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ... Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ... Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2013-03) The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ... 
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE (Springer, 2013-06) Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ... Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (American Physical Society, 2013-02) The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ... Mid-rapidity anti-baryon to baryon ratios in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV measured by ALICE (Springer, 2013-07) The ratios of yields of anti-baryons to baryons probes the mechanisms of baryon-number transport. Results for anti-proton/proton, anti-$\Lambda/\Lambda$, anti-$\Xi^{+}/\Xi^{-}$ and anti-$\Omega^{+}/\Omega^{-}$ in pp ... Charge separation relative to the reaction plane in Pb-Pb collisions at $\sqrt{s_{NN}}$= 2.76 TeV (American Physical Society, 2013-01) Measurements of charge dependent azimuthal correlations with the ALICE detector at the LHC are reported for Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. Two- and three-particle charge-dependent azimuthal correlations ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Search Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Search Now showing items 1-10 of 26 Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... 
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
The right triangle formulas are among the most important in trigonometry. If theta ($\Theta$) is one of the acute angles of a right triangle, then sin $\Theta$ is the ratio of the opposite side to the hypotenuse. Sin x Formula: It is the formula for the sine function, one of the six trigonometric functions. Sin \(\Theta\) = \(\frac{Opposite Side}{Hypotenuse}\) Sin angle Questions Example 1: If cos x = 4/5, find the value of sin x. Solution: Using the trigonometric identity sin\(^2\)x = 1 − cos\(^2\)x: sin\(^2\)x = 1 − (4/5)\(^2\) = 1 − 16/25 = (25 − 16)/25 = 9/25, so sin x = \(\sqrt{9/25}\) = 3/5. Example 2: If cosec x = 7/4, find sin x. Solution: Since cosec x = 1/sin x, we have sin x = 1/cosec x = 4/7.
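The worked identity can be checked in a couple of lines (the angle is assumed acute, so the positive square root is taken):

```python
import math

cos_x = 4 / 5
sin_x = math.sqrt(1 - cos_x ** 2)   # sin^2 x = 1 - cos^2 x
print(sin_x)                        # 0.6, up to float rounding
```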
In my post Trigonometry Yoga, I discussed how defining sine and cosine as lengths of segments in a unit circle helps develop intuition for these functions. I learned the circle definitions of sine and cosine in my junior year of high school, in the class that would now be called pre-calculus (it was called “Trig Senior Math”). Two years earlier, I’d learned the triangle definitions of sine, cosine, and tangent in geometry class. I don’t remember any of my teachers ever mentioning a circle definition of the tangent function. The geometric definition of the tangent function, which predates the triangle definition, is the length of a segment tangent to the unit circle. The tangent really is a tangent! Just as for sine and cosine, this one-variable definition helps develop intuition. Here is the definition, followed by an applet to help you get a feel for it: Let OA be a radius of the unit circle, let B = (1,0), and let \( \theta =\angle BOA\). Let C be the intersection of \(\overrightarrow{OA}\) and the line x=1, i.e. the tangent to the unit circle at B. Then \(\tan \theta\) is the y-coordinate of C, i.e. the signed length of segment BC. Move the blue point below; the tangent is the length of the red segment. (If a label is getting in the way, right click and toggle “show label” from the menu). The circle definition of the tangent function leads to geometric illustrations of many standard properties and identities. (If this were my class, I would stop here and tell you to explore on your own and with others). Some things to notice: \(\left| \tan \theta \right|\) gets big as \(\theta\) approaches \(\pm 90{}^\circ \). \(\tan (\pm 90{}^\circ)\) is undefined, because at these angles \(\overline{OA}\) is parallel to the line x=1, so the two lines don’t intersect, and point C doesn’t exist. As \(\theta\) approaches \(90{}^\circ\) from below, \(\tan \theta\) tends toward \(+\infty\); as \(\theta\) approaches \(-90{}^\circ\) from above, \(\tan \theta\) tends toward \(-\infty\).
\(\tan \theta\) is positive in the first and third quadrants, negative in the second and fourth quadrants. \(\tan \theta=\tan (\theta+180{}^\circ)\) — the angles \(\theta\) and \(\theta +180{}^\circ\) form the same line. Thus the period of the tangent function is \(180 {}^\circ = \pi\) radians. \(\tan \theta = - \tan (-\theta)\). Moving from \(\theta\) to \(-\theta\) reflects \(OC\) about the x-axis. \(\tan \theta\) is equal to the slope of OA (rise = \(\tan \theta\), run = 1), which is also equal to \(\dfrac{\sin\theta}{\cos\theta}\), as well as Opposite over Adjacent for angle \(\theta\) in right triangle CBO. \(\tan (45{}^\circ)=1\). When \(\theta=45{}^\circ\), triangle CBO is a 45-45-90 triangle, and OB=1. Similarly, \(\tan (-45{}^\circ)=-1\), etc. For small values of \(\theta\), \(\tan \theta\) is close to \(\sin \theta\), which is close to the arc length of AB, i.e. the measure of \(\theta\) in radians. If we define \(\arctan\) as the function whose input is the signed length of BC and whose output is the angle \(\theta\) corresponding to that tangent length, then the domain of that function is the reals, and it makes sense to define the range as \(-90 {}^\circ< \theta <90{}^\circ\) (in radians \(-\pi/2<\theta < \pi/2\), and arctan’s output is an arc length). This range includes all the angles we need and avoids the discontinuity at \(\theta= \pm 90{}^\circ =\pm \pi/2\) radians. For \(\left| \theta \right|\leq 45{}^\circ\), \(\left| \tan \theta \right|\leq 1\). Half of the input values of \(\tan \theta\) give outputs with absolute values less than or equal to 1, and the other half give values on the rest of the number line. This mapping also occurs with fractions and slopes, but there’s something very compelling about seeing the lengths change dynamically. Applets like the one above could also help students develop intuition about slopes. \(\tan (180{}^\circ-\theta) = -\tan \theta\). We reflect BC over the x-axis to form \(B{C}'\).
Then \(\angle BO{C}'=\theta\) and \(\angle BOD =(180{}^\circ-\theta)\). \(B{C}'\) (the blue segment) is the tangent of \((180{}^\circ-\theta)\). \(\tan (\theta \pm 90{}^\circ) = -1/\tan \theta\). The picture below illustrates the geometry of this identity when \(\theta\) is in the first quadrant. The line formed at \(\theta + 90{}^\circ\) is perpendicular to OC, and \(\triangle COB\sim \triangle ODB\). Thus \(\dfrac{BD}{OB}=\dfrac{OB}{BC}\), and with appropriate signs, \(\tan (\theta + 90{}^\circ) = -1/\tan \theta\). Since \(\tan \theta=\tan (\theta+180{}^\circ)\), \(\tan (\theta +90{}^\circ)=\tan(\theta-90{}^\circ)\). The applet below shows the geometry in all quadrants, and it gives a dynamic sense of the relationship between \(\tan\theta\) and \(\tan(-\theta)\). Again, move the blue point: Special Bonus: The Secant Function The signed length of the segment OC is called the secant function, \(\sec\theta\). Using similar triangles, we see that \(\sec \theta = \dfrac{1}{\cos \theta}\). The Pythagorean Theorem applied to \(\triangle COB\) shows that \(\tan^2\theta+1=\sec^2 \theta\). When the tangent function is big, so is the secant function, and when the tangent function is small, so is the secant function. Also, \(\sec \theta\) is close to \(\pm 1\) when \(\theta\) is close to the x-axis and \(\tan \theta\) is close to 0. The graphs of the two functions look nice together:
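The identities collected above are easy to spot-check numerically. In coordinates, C is where the ray through O and A = (cos θ, sin θ) meets the line x = 1, so its y-coordinate is sin θ / cos θ (a quick sketch of mine, not from the post):

```python
import math

# BC as the y-coordinate of C, the intersection of line OA with x = 1
def tan_via_circle(theta):
    return math.sin(theta) / math.cos(theta)

for deg in (10, 35, 70, 200, 305):
    t = math.radians(deg)
    assert math.isclose(tan_via_circle(t), math.tan(t))
    # period 180 degrees, and the perpendicular-angle identity
    assert math.isclose(math.tan(t + math.pi), math.tan(t), rel_tol=1e-9)
    assert math.isclose(math.tan(t + math.pi / 2), -1 / math.tan(t), rel_tol=1e-9)
    # Pythagorean relation for tangent and secant: tan^2 + 1 = sec^2
    assert math.isclose(math.tan(t) ** 2 + 1, (1 / math.cos(t)) ** 2)
print("all identities check out")
```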
December 4th, 2018, 08:19 PM # 1 Senior Member Joined: Sep 2015 From: USA Posts: 2,574 Thanks: 1422 removable discontinuities in rational functions So I was taught that $f(x) = \dfrac{(x+1)(x-2)}{x+1} = x-2$ is continuous. I'm reading online now that $x=-1$ is considered a removable discontinuity, i.e. a "hole". If this is the case, what stops us from creating infinitely many holes in any continuous function by multiplying by 1 in the fashion $\tilde{f}(x) = f(x)\prod \limits_{k=1}^\infty \dfrac{x-a_k}{x-a_k}$, where $a_k$ is any sequence of real numbers? What's the current convention on removable singularities? December 5th, 2018, 12:09 AM # 2 Global Moderator Joined: Dec 2006 Posts: 21,019 Thanks: 2254 Nothing stops you from doing that. It's equivalent to not defining the function at infinitely many values of $x$. December 5th, 2018, 05:08 AM # 3 Senior Member Joined: Sep 2016 From: USA Posts: 666 Thanks: 437 Math Focus: Dynamical systems, analytic function theory, numerics This difference is often hammered on during a Calc 1 course, since students at this level don't understand limits, and they don't understand that functions aren't just formulas. At the level of a first real analysis course or higher, it is very typical to make no distinction between a function with removable singularities and its continuous extension. For example, a statement like "$f(x) = \frac{\sin x}{x}$ is continuous on $\mathbb{R}$" would not be atypical to see and should be considered a true statement. What is meant is that $\lim_{x \to 0} f(x) = 1$, so the function being described is assigned the value $f(0) = 1$, which makes it continuous. This is cumbersome to say over and over, so when the audience understands limits and continuity, a long explanation of this is typically omitted.
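The $\sin x / x$ example from the last reply is easy to see numerically: the sampled values settle at 1 as $x \to 0$, which is why assigning $f(0)=1$ gives the continuous extension.

```python
import math

# sin(x)/x has a removable singularity at 0: sampling ever closer
# to 0 shows the values approaching the limit 1.
for h in (1e-1, 1e-3, 1e-6):
    print(h, math.sin(h) / h)
```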
Journées d'Etude II, June 3-4: Proofs, justifications, certificates [slides] Friday, June 3, Institut de Recherche en Informatique de Toulouse (IRIT), Salle des thèses. Session 1, Friday, June 3, 9:00-10:30 and 10:45-12:15: PROVABILITY LOGIC. Lev BEKLEMISHEV (Steklov Mathematical Institute, Russia), Positive provability logic and reflection calculus: an overview. Several interesting applications of provability logic in proof theory made use of the polymodal logic $\mathsf{GLP}$ due to Giorgi Japaridze. This system, although decidable, is not very easy to handle. In this talk we will advocate the use of a weaker system, called the Reflection Calculus ($\mathsf{RC}$), which is much simpler than $\mathsf{GLP}$, yet expressive enough to regain its main proof-theoretic applications, and more. From the point of view of modal logic, $\mathsf{RC}$ can be seen as a fragment of polymodal logic consisting of implications of the form $A\to B$, where $A$ and $B$ are formulas built up from $\top$ and the variables using just $\land$ and the diamond modalities. We discuss general problems around weak systems of this kind and describe some applications of $\mathsf{RC}$ to the analysis of provability in formal arithmetical theories. Joost JOOSTEN (Universitat de Barcelona, Spain), A Calculus of Worms in Coq. In this talk we consider modal provability logics with a series of modalities of length $\omega$ that represent a sequence of consistency predicates of increasing strength. The closed fragment of this logic is already quite expressive and constitutes an alternative ordinal notation system up to $\varepsilon_0$. Iterated consistency statements in this closed fragment are also called worms. We present a calculus that manipulates only worms. In particular, the language lacks propositional variables, implications, conjunctions and disjunctions. We compare this Calculus of Worms to the closed fragment of the better known Reflection Calculus.
Moreover, we comment on how these worms and their corresponding calculus can be formalized and implemented in Coq. Joint work with Eduardo Hermo Reyes, Pilar Garcia de la Parra and Alejandro Ramírez Atrio. Session 2, Friday, June 3, 14:15-15:45 and 16:00-17:30: REALIZABILITY Fernando FERREIRA (University of Lisbon, Portugal), Modified realizability and functional interpretations: some logical and mathematical observations Federico ASCHIERI (Technical University of Vienna, Austria), From Intuitionistic Realizability to Classical Realizability Realizability was originally conceived as a constructive semantics for intuitionistic Arithmetic. Since classical Arithmetic looks radically non-constructive, extending realizability to classical fragments of it may appear hopeless. Yet, realizability can be extended to full classical mathematical systems. How is this possible? The goal of this talk is to go through the flow of ideas that lead in a natural way to the many known classical realizabilities. Since realizability appears in disguise under other names such as "validity", "reducibility", "computability", "proof-theoretic semantics", we shall have to look for its origins in unexpected places. We shall start from Prawitz and Dummett. In Prawitz's work, we find the idea that the introduction rules of natural deduction determine the constructive meaning of logical constants, which leads to realizability based on introduction rules. In Dummett's work, we find the idea that it is the elimination rules of natural deduction that fix the meaning of logical constants, which leads to realizability based on elimination rules.
We shall see that realizability based on introduction rules gives rise to more constructively flavoured semantics, such as realizability for Intuitionistic Arithmetic with Markov's principle, realizability for Intuitionistic Arithmetic with Excluded Middle over formulas with one quantifier, and learning-based realizability for classical Arithmetic with Skolem choice axioms. On the other hand, realizability based on elimination rules gives rise to Krivine-style realizability for classical second-order Arithmetic, the only known semantics that can be generalized even to full set theory, but that is also useful for other intermediate logics. Saturday, June 4, Institut de Mathématiques de Toulouse (IMT), Amphi Schwartz Session 3, Saturday, June 4, 9:00-10:30 and 10:45-12:15: CERTIFICATES Dale MILLER (Inria Saclay, France), Defining and checking proof certificates In order for one theorem prover to export its proofs for other provers to check and trust, proofs-as-documents must be given a clear and precise semantics. After making the case that an infrastructure for sharing proofs should exist, I will describe how recent research in proof theory provides a flexible framework for defining the semantics of proof certificates. I will also briefly describe an implementation that can execute a wide range of such semantic definitions. Jasmin Christian BLANCHETTE (INRIA Nancy Grand Est, Nancy, France), Semi-intelligible Isabelle Proofs from Machine-Generated Proofs Sledgehammer is a component of the Isabelle proof assistant that integrates external automatic theorem provers to discharge proof obligations. As a safeguard against bugs, the proofs found by the external provers are reconstructed in Isabelle. Reconstructing complex arguments involves translating them to Isabelle's Isar format, supplying suitable justifications for each step.
Sledgehammer transforms proofs by contradiction into direct proofs; it iteratively tests and compresses the output, resulting in simpler and faster proofs; and it supports a wide range of automatic provers, including E, LEO-II, Satallax, SPASS, Vampire, veriT, Waldmeister, and Z3. Joint work with Sascha Böhme, Mathias Fleury, Steffen Juilf Smolka, and Albert Steckermeier. Session 4, Saturday, June 4, 14:15-15:45 and 16:00-17:30: JUSTIFICATION LOGIC Thomas STUDER (University of Bern, Switzerland), Justification Logic - a short introduction Traditional modal logics feature formulas of the form K A that stand for 'the agent knows that A'. The classical semantics for these logics is given by possible world models, in which the formula K A is true if A is true in all worlds that the agent considers possible. However, this approach is missing the "justified" part of Plato's classic characterization of knowledge as justified true belief. Justification logics can fill this gap. Instead of formulas K A, the language of justification logics includes formulas of the form t : A that mean 'the agent knows that A for reason t'. The evidence term t in this expression can represent a formal proof of A or an informal reason why A is known. Moreover, justification logics include operations on these terms to reflect the agent's reasoning power. For instance, if A -> B is known for reason s and A is known for reason t, then B is known for reason s · t, where the binary operation · models the agent's ability to apply modus ponens. In our talk, we give a short introduction to justification logic and present some of the main results in this area. Juan Pablo AGUILERA (Technical University of Vienna, Austria), An arithmetical interpretation for negative introspection We introduce verification logic. This variant of Artemov's Logic of Proofs includes proof terms of the form ¡A! that satisfy the axiom schema A -> ¡A!:A. The intention is that terms ¡A!
denote a PA-proof of the formula A if it exists. We show that a restriction on the language yields a logic that realizes the axioms of S5 and is sound and complete for its arithmetical interpretation.
In the Grothendieck universe approach to category theory, as you say, we replace all small sets with $\mathcal{U}$-small sets. Let's look at the definition of a locally presentable $\mathcal{U}$-category, for example, since that's in the conclusion of the theorem you mentioned: a locally presentable $\mathcal{U}$-category is a category with $\mathcal{U}$-small homsets which has all $\mathcal{U}$-small colimits and for which there exists a $\mathcal{U}$-small regular cardinal $\lambda$ and a $\mathcal{U}$-small set of $\lambda$-compact objects which generate the category under $\lambda$-filtered $\mathcal{U}$-small colimits. The size relationships here are key. For example, the category of all sets is not a locally presentable $\mathcal{U}$-category because its hom-sets are not $\mathcal{U}$-small, while the category of sets of cardinality less than some cardinal $\kappa$ with $0 < \kappa < \mathcal{U}$ is not a locally presentable $\mathcal{U}$-category because it lacks $\mathcal{U}$-small colimits. Now one formulation of Vopěnka's principle is "there is no proper-class-sized discrete full subcategory of $\mathrm{Gra}$" (some category of graphs), or in a more positive form, "any discrete full subcategory of $\mathrm{Gra}$ has only a set of objects". The proof of a theorem like the one you mentioned (in the non-universe setting) is going to go something like this. We use this axiom to conclude that some particular transfinite construction of new objects in a category cannot go on "forever". It has to stop at stage $\gamma$ for some ordinal $\gamma$, and from $\gamma$ we'll somehow produce the cardinal $\lambda$ in the definition of a locally presentable category and the set of generators. If we now adopt universes, you may be thinking that "any discrete full subcategory of $\mathrm{Gra}_{\mathcal{U}}$ has only a set of objects" is just (trivially) true, even without Vopěnka's principle.
But if we tried to repeat the above argument using $\mathcal{U}$-categories, we'd just learn that there is some $\gamma$ and $\lambda$ as before, with no control over their sizes. For example, maybe the $\lambda$ we end up with would be larger than the cardinality of our category, and the set of generators would just be all the objects of our category. In order to show that $\lambda \in \mathcal{U}$ we would need "any discrete full subcategory of $\mathrm{Gra}_{\mathcal{U}}$ has only a $\mathcal{U}$-small set of objects", and I guess this is what it means for $\mathcal{U}$ to be a Vopěnka cardinal, as Mike mentioned in a comment. If you believe in such cardinals then you believe in Con(ZFC + Vopěnka's principle), so you are still relying on set-theoretic assumptions (and much more so than simply believing in universes in the first place).
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
The Fundamental Group of a Topological Space at a Point Definition: Let $X$ be a topological space. A path $\alpha : I \to X$ is a Loop at $x$ if $\alpha(0) = x$ and $\alpha(1) = x$. Recall from earlier material that if $\alpha, \beta, \gamma : I \to X$ are loops at $x$ then the operation $[\alpha][\beta] = [\alpha \beta]$ is well defined, and furthermore: 1) $[\alpha]([\beta][\gamma]) = ([\alpha][\beta])[\gamma]$ (Associative Property). 2) $[\alpha][c_x] = [\alpha]$ and $[c_x][\alpha] = [\alpha]$ (The Existence of an Identity). 3) $[\alpha][\alpha^{-1}] = [c_x]$ and $[\alpha^{-1}][\alpha] = [c_x]$ (The Existence of Inverses). So the set of homotopy classes $[\alpha]$ of loops with $\alpha(0) = x = \alpha(1)$ forms a group. This group is given a special name. Definition: Let $X$ be a topological space and let $x \in X$. The Fundamental Group of $X$ at $x$ is $\pi_1(X, x) = \{ [\alpha] : \alpha(0) = x = \alpha(1) \}$ with the operation of homotopy class multiplication given for all $[\alpha], [\beta] \in \pi_1(X, x)$ by $[\alpha][\beta] = [\alpha \beta]$. The simplest example of determining a fundamental group is the topological space $X = \{ x \}$. This space contains only one element, and so $\pi_1(X, x)$ contains all of the homotopy classes of loops that start at $x$. But the only such loop is the constant loop $c_x$, so $\pi_1(X, x) = \{ [c_x] \}$, the trivial group. Theorem 1 (The Fundamental Group of the Circle): Let $S^1 = \{ (x, y) \in \mathbb{R}^2 : x^2 + y^2 = 1 \}$ denote the unit circle. Then $\pi_1(S^1, x) \cong \mathbb{Z}$, that is, the fundamental group of the circle is infinite cyclic. We do not yet have the tools to prove that the fundamental group of the circle is isomorphic to the group of integers. In fact, proving this result is very long and cumbersome!
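The isomorphism $\pi_1(S^1, x) \cong \mathbb{Z}$ sends a loop to its winding number, which is easy to approximate numerically. As a side illustration (not part of the original page), here is a minimal Python sketch that counts how many times a circle-valued loop winds around the origin by accumulating angle increments:

```python
import math

def winding_number(loop, steps=1000):
    """Approximate the winding number of a circle-valued loop
    t -> (x(t), y(t)), t in [0, 1], by accumulating angle increments."""
    x0, y0 = loop(0.0)
    prev = math.atan2(y0, x0)
    total = 0.0
    for k in range(1, steps + 1):
        x, y = loop(k / steps)
        ang = math.atan2(y, x)
        d = ang - prev
        if d > math.pi:        # unwrap the jump across the branch cut
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = ang
    return round(total / (2 * math.pi))

# A loop that goes around the circle three times: its class in pi_1 is 3.
triple = lambda t: (math.cos(6 * math.pi * t), math.sin(6 * math.pi * t))
print(winding_number(triple))  # 3
```

Concatenating loops adds their winding numbers, mirroring the group operation $[\alpha][\beta] = [\alpha\beta]$.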
Vector Subspace Sums We will now look at an important definition regarding vector subspaces. Definition: Let $U_1, U_2, ..., U_m$ all be vector subspaces of the $\mathbb{F}$-vector space $V$. The Vector Sum of these subspaces, $\sum_{i=1}^{m} U_i = U_1 + U_2 + ... + U_m$, is defined to be the set of all possible sums $u_1 + u_2 + ... + u_m$ where each $u_i \in U_i$, that is $\sum_{i=1}^{m} U_i = U_1 + U_2 + ... + U_m := \{ u_1 + u_2 + ... + u_m : u_1 \in U_1, u_2 \in U_2, ..., u_m \in U_m \}$. For example, consider the subspaces $U_1 = \{ (x, 0) : x \in \mathbb{F} \}$ and $U_2 = \{ (0, y) : y \in \mathbb{F} \}$ of $\mathbb{F}^2$. The sum of these subspaces is $U_1 + U_2 = \{ (x, 0) + (0, y) : x, y \in \mathbb{F} \} = \{ (x, y) : x, y \in \mathbb{F} \}$, which is also a subspace of $\mathbb{F}^2$. In fact, the sum of subspaces of a vector space $V$ will always be a subspace of $V$. Lemma 1: If $V$ is a vector space and $U_1, U_2, ..., U_m$ are subspaces of $V$ then the sum $U_1 + U_2 + ... + U_m$ is also a subspace of $V$. Proof: We must show that the sum $\sum_{i=1}^{m} U_i$ contains the zero vector, is closed under addition, and is closed under scalar multiplication. Since each subspace $U_i$ contains the zero vector, the zero vector is contained in the sum as $0 + 0 + ... + 0$. The sum is closed under addition since $(u_1 + ... + u_m) + (u_1' + ... + u_m') = (u_1 + u_1') + ... + (u_m + u_m')$ and each $u_i + u_i' \in U_i$. Lastly, the sum is closed under scalar multiplication: for any scalar $k$ we have $k(u_1 + u_2 + ... + u_m) = ku_1 + ku_2 + ... + ku_m$, which is in the sum since each $ku_i \in U_i$. $\blacksquare$ Proposition 1: The sum $\sum_{i=1}^{m} U_i$ is the smallest vector subspace containing all of the subspaces $U_1, U_2, ..., U_m$.
Proof: Let $W$ be any subspace of $V$ that contains each of $U_1, U_2, ..., U_m$. Since $W$ is closed under addition, every sum $u_1 + u_2 + ... + u_m$ with $u_i \in U_i$ belongs to $W$, so $\sum_{i=1}^{m} U_i \subseteq W$. Since $\sum_{i=1}^{m} U_i$ is itself a subspace containing each $U_i$, it is the smallest such subspace. $\blacksquare$ We will now look at another type of sum known as a vector subspace direct sum. Vector Subspace Direct Sums Definition: Let $U_1, U_2, ..., U_m$ all be vector subspaces of the $\mathbb{F}$-vector space $V$. We say $V$ is the Vector Direct Sum of these subspaces, written $\bigoplus_{i=1}^{m} U_i = U_1 \oplus U_2 \oplus ... \oplus U_m = V$, if every element of $V$ can be uniquely written as $u_1 + u_2 + ... + u_m$ where $u_i \in U_i$ for $i = 1, 2, ..., m$. One such example of a direct sum comes from the $\mathbb{R}$-vector space $\mathbb{R}^3$. Let $U_1 = \{ (x, 0, 0) : x \in \mathbb{R} \}$, $U_2 = \{ (0, y, 0) : y \in \mathbb{R} \}$ and $U_3 = \{ (0, 0, z) : z \in \mathbb{R} \}$, and suppose that we wanted to show that $U_1 \oplus U_2 \oplus U_3 = V$. Any vectors $u_1 \in U_1$, $u_2 \in U_2$ and $u_3 \in U_3$ have the form $u_1 = (x, 0, 0)$, $u_2 = (0, y, 0)$ and $u_3 = (0, 0, z)$. Then $u_1 + u_2 + u_3 = (x, y, z)$ is uniquely determined, since the first coordinate is determined only by vectors in $U_1$, the second coordinate only by vectors in $U_2$, and the third coordinate only by vectors in $U_3$. Therefore $U_1 \oplus U_2 \oplus U_3 = V$. It is important to note that not all sums are direct sums though. Consider the $\mathbb{F}$-vector space $\wp _5(\mathbb{F})$, which is defined to be the set of all polynomials whose degree is less than or equal to $5$ and whose coefficients are from $\mathbb{F}$, that is $\wp _5(\mathbb{F}) := \{ a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 + a_5x^5 : a_i \in \mathbb{F}, \: i = 0, 1, 2, 3, 4, 5 \}$.
Now consider the vector subspaces $U_1 = \wp _4(\mathbb{F})$ (the subspace of all polynomials whose degree is less than or equal to $4$) and $U_2 = \{ ax^2 + bx^3 + cx^5 : a, b, c \in \mathbb{F} \}$. The polynomial $1 + x + x^2 + x^3 + x^4 + x^5$ is an element of the sum $U_1 + U_2$. This element can be written as $(1 + x + x^4) + (x^2 + x^3 + x^5)$ where $(1 + x + x^4) \in U_1$ and $(x^2 + x^3 + x^5) \in U_2$, but it can also be written as $(1 + x + x^2 + x^3 + x^4) + (x^5)$ where $(1 + x + x^2 + x^3 + x^4) \in U_1$ and $(x^5) \in U_2$. So elements of the sum $U_1 + U_2$ are not necessarily uniquely represented, and so $\wp _5(\mathbb{F})$ is not a direct sum of the subspaces $U_1$ and $U_2$, that is $\wp _5(\mathbb{F}) \neq U_1 \oplus U_2$. We will now look at an important lemma for determining whether a sum of vector subspaces is a direct sum. Lemma 2: Let $U_1, U_2, ..., U_m$ be vector subspaces of the $\mathbb{F}$-vector space $V$. Then these subspaces form a direct sum $\bigoplus_{i=1}^{m} U_i = V$ if and only if $\sum_{i=1}^{m} U_i = V$ and $\sum_{i=1}^{m} u_i = 0$ with $u_i \in U_i$ implies that $u_i = 0$ for every $i = 1, 2, ..., m$. Proof: $\Rightarrow$ Let $V = \bigoplus_{i=1}^{m} U_i$. Then $V = \sum_{i=1}^{m} U_i$ by the definition of the direct sum. Now suppose $\sum_{i=1}^{m} u_i = 0$ where $u_i \in U_i$ for every $i = 1, 2, ..., m$. Since representations in a direct sum are unique and $0 = 0 + 0 + ... + 0$, it follows that $u_1 = u_2 = ... = u_m = 0$. $\Leftarrow$ Now suppose that $V = \sum_{i=1}^{m} U_i$ and that $\sum_{i=1}^{m} u_i = 0$ with $u_i \in U_i$ implies $u_1 = u_2 = ... = u_m = 0$.
Let $v \in V$ and suppose that $v$ has two representations, $v = \sum_{i=1}^{m} u_i = \sum_{i=1}^{m} u_i'$ where $u_i, u_i' \in U_i$ for $i = 1, 2, ..., m$. Then $\sum_{i=1}^{m} u_i - \sum_{i=1}^{m} u_i' = \sum_{i=1}^{m} (u_i - u_i') = 0$, and since each $u_i - u_i' \in U_i$, our assumption gives $u_i - u_i' = 0$, that is $u_i = u_i'$, for every $i = 1, 2, ..., m$. So representations are unique and $V = \bigoplus_{i=1}^{m} U_i$. $\blacksquare$
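Lemma 2 can be checked numerically for concrete subspaces. A small sketch (mine, not from the original page): represent each subspace by spanning vectors, compute dimensions by row reduction, and use the fact that for two subspaces the zero-sum condition of Lemma 2 is equivalent to $\dim(U_1 + U_2) = \dim U_1 + \dim U_2$:

```python
def rank(rows):
    """Row-reduce a small matrix (list of row tuples) and count pivots."""
    m = [list(r) for r in rows]
    n_rows = len(m)
    n_cols = len(m[0]) if m else 0
    rk, col = 0, 0
    while rk < n_rows and col < n_cols:
        pivot = next((r for r in range(rk, n_rows)
                      if abs(m[r][col]) > 1e-12), None)
        if pivot is None:
            col += 1
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        p = m[rk][col]
        for r in range(rk + 1, n_rows):
            f = m[r][col] / p
            for c in range(col, n_cols):
                m[r][c] -= f * m[rk][c]
        rk += 1
        col += 1
    return rk

# Spanning vectors (as rows) for two subspaces of R^3.
U1 = [(1, 0, 0), (0, 1, 0)]   # span{e1, e2}
U2 = [(0, 1, 1)]              # span{e2 + e3}

dim_U1, dim_U2 = rank(U1), rank(U2)
dim_sum = rank(U1 + U2)       # list concatenation = generators of U1 + U2
print(dim_sum, dim_sum == dim_U1 + dim_U2)  # 3 True
```

Here the sum is direct; replacing `U2` with a line inside `U1` (say `[(1, 1, 0)]`) makes `dim_sum` drop to 2 and the check fail, matching the non-uniqueness phenomenon in the polynomial example above.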
Your body has a strange property: you can learn information about the entire organism from a single cell. Pick a cell, dive into the nucleus, and extract the DNA. You can now regrow the entire creature from that tiny sample. There's a math analogy here. Take a function, pick a specific point, and dive in. You can pull out enough data from a single point to rebuild the entire function. Whoa. It's like remaking a movie from a single frame. The Taylor Series discovers the "math DNA" behind a function and lets us rebuild it from a single data point. Let's see how it works. Pulling information from a point Given a function like $f(x) = x^2$, what can we discover at a single location? Normally we'd expect to calculate a single value, like $f(4) = 16$. But there's much more beneath the surface: $f(x)$ = Value of function at point $x$ $f'(x)$ = First derivative, or how fast the function is changing (the velocity) $f''(x)$ = Second derivative, or how fast the changes are changing (the acceleration) $f'''(x)$ = Third derivative, or how fast the changes in the changes are changing (acceleration of the acceleration) And so on Investigating a single point reveals multiple, possibly infinite, bits of information about the behavior. (Some functions have an endless amount of data (derivatives) at a single point.) So, given all this information, what should we do? Regrow the organism from a single cell, of course! (Maniacal cackle here.) Growing a Function from a point Our plan is to grow a function from a single starting point. But how can we describe any function in a generic way? The big aha moment: imagine any function, at its core, is a polynomial (with possibly infinite terms): $f(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots$ To rebuild our function, we start at a fixed point ($c_0$) and add in a bunch of other terms based on the value we feed it (like $c_1x$). The "DNA" is the values $c_0, c_1, c_2, c_3, \ldots$ that describe our function exactly. Ok, we have a generic "function format".
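The "generic function format" is just a coefficient list plus an evaluation rule. A tiny sketch (mine, not from the post) that evaluates a blueprint with Horner's rule:

```python
def blueprint(coeffs, x):
    """Evaluate c0 + c1*x + c2*x^2 + ... from a list of 'DNA'
    coefficients, using Horner's rule (innermost power first)."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# The DNA [3, 0, 1] encodes f(x) = 3 + x^2, so f(2) = 7.
print(blueprint([3, 0, 1], 2.0))  # 7.0
```

The rest of the article is about going the other way: recovering the coefficient list from the function itself.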
But how do we find the coefficients for a specific function like sin(x) (height of angle x on the unit circle)? How do we pull out its DNA? Time for the magic of 0. Let's start by plugging in the function value at $x=0$. Doing this, we get: $f(0) = c_0 + c_1(0) + c_2(0)^2 + c_3(0)^3 + \cdots = c_0$ Every term vanishes except $c_0$, which makes sense: the starting point of our blueprint should be $f(0)$. For $f(x) = \sin(x)$, we can work out $c_0 = \sin(0) = 0$. We have our first bit of DNA! Getting More DNA Now that we know $c_0$, how do we isolate $c_1$ in this equation? Hrm. A few ideas: Can we set $x = 1$? That gives $f(1) = c_0 + c_1(1) + c_2(1^2) + c_3(1^3) + \cdots$. Although we know $c_0$, the other constants are summed together. We can't pull out $c_1$ by itself. What if we divide by $x$? This gives: $\frac{f(x)}{x} = \frac{c_0}{x} + c_1 + c_2 x + c_3 x^2 + \cdots$ Then we can set $x=0$ to make the other terms disappear... right? It's a nice idea, except we're now dividing by zero. Hrm. This approach is really close. How can we almost divide by zero? Using the derivative! If we take the derivative of the blueprint of $f(x)$, we get: $f'(x) = c_1 + 2c_2 x + 3c_3 x^2 + \cdots$ Every power gets reduced by 1 and the $c_0$, a constant value, becomes zero. It's almost too convenient. Now we can isolate $c_1$ using our $x=0$ trick: $f'(0) = c_1$ In our example, $\sin'(x) = \cos(x)$, so $f'(0) = \sin'(0) = \cos(0) = 1 = c_1$. Yay, one more bit of DNA! This is the magic of the Taylor series: by repeatedly applying the derivative and setting $x = 0$, we can pull out the polynomial DNA. Let's try another round: $f''(x) = 2c_2 + 6c_3 x + \cdots$ After taking the second derivative, the powers are reduced again. The first two terms ($c_0$ and $c_1x$) disappear, and we can again isolate $c_2$ by setting $x=0$: $f''(0) = 2c_2$, so $c_2 = \frac{f''(0)}{2}$. For our sine example, $\sin''(x) = -\sin(x)$, so $c_2 = \frac{-\sin(0)}{2} = 0$. As we keep taking derivatives, we're performing more multiplications and growing a factorial in front of each term (1!, 2!, 3!). The Taylor Series for a function around the point $x=0$ is: $f(x) = f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \cdots$ (Technically, the Taylor series around the point $x=0$ is called the Maclaurin series.)
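The differentiate-and-plug-in-zero routine can also be mimicked numerically. A hedged, stdlib-only sketch (mine, not from the post): sample the function on a circle in the complex plane and recover $c_n = f^{(n)}(0)/n!$ via the Cauchy integral formula, which here reduces to a discrete Fourier transform:

```python
import cmath

def taylor_coeffs(f, n_max, samples=64, radius=1.0):
    """Recover Taylor coefficients c_n = f^(n)(0)/n! by sampling f on a
    circle in the complex plane (Cauchy integral formula as a DFT)."""
    vals = [f(radius * cmath.exp(2j * cmath.pi * k / samples))
            for k in range(samples)]
    coeffs = []
    for n in range(n_max + 1):
        c = sum(v * cmath.exp(-2j * cmath.pi * k * n / samples)
                for k, v in enumerate(vals)) / samples
        coeffs.append((c / radius ** n).real)
    return coeffs

# The DNA of sine: [0, 1, 0, -1/3!, 0, 1/5!, ...]
print([round(c, 6) + 0.0 for c in taylor_coeffs(cmath.sin, 5)])
# [0.0, 1.0, 0.0, -0.166667, 0.0, 0.008333]
```

This matches the hand computation above: $c_0 = 0$, $c_1 = 1$, $c_2 = 0$, and the later coefficients pick up the factorial denominators.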
The generalized Taylor series, extracted from any point $a$, is: $f(x) = f(a) + \frac{f'(a)}{1!}(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \frac{f'''(a)}{3!}(x - a)^3 + \cdots$ The idea is the same. Instead of our regular blueprint, we use: $f(x) = c_0 + c_1(x - a) + c_2(x - a)^2 + c_3(x - a)^3 + \cdots$ Since we're growing from $f(a)$, we can see that $f(a) = c_0 + 0 + 0 + \dots = c_0$. The other coefficients can be extracted by taking derivatives and setting $x = a$ (instead of $x = 0$). Example: Taylor Series of sin(x) Plugging in derivatives into the formula above, here's the Taylor series of $\sin(x)$ around $x = 0$: $\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$ A few notes: 1) Sine has infinite terms Sine is an infinite wave, and as you can guess, needs an infinite number of terms to keep it going. Simpler functions (like $f(x) = x^2 + 3$) are already in their "polynomial format" and don't have infinite derivatives to keep the DNA going. 2) Sine is missing every other term If we repeatedly take the derivative of sine at $x = 0$ we get $\sin(x), \cos(x), -\sin(x), -\cos(x), \ldots$ with values $0, 1, 0, -1, \ldots$ Ignoring the division by the factorial, the DNA of sine is something like [0, 1, 0, -1] repeating. 3) Different starting positions have different DNA For fun, here's the Taylor series of $\sin(x)$ starting at $x = \pi$: $\sin(x) = -(x - \pi) + \frac{(x - \pi)^3}{3!} - \frac{(x - \pi)^5}{5!} + O\!\left((x - \pi)^{7}\right)$ A few notes: The DNA is now something like [0, -1, 0, 1]. The cycle is similar, but the starting value has changed since we're starting at $x=\pi$. Written as calculated numbers, the denominators 1, 6, 120, 5040 look strange. But they're just every other factorial: 1! = 1, 3! = 6, 5! = 120, 7! = 5040. In general, the Taylor series can have gnarly denominators. An $O(x^{12})$ term in a computer-generated expansion means there are other components of order (power) $x^{12}$ and higher. Because $\sin(x)$ has infinite derivatives, we have infinite terms and the computer has to cut us off somewhere. (You've had enough Tayloring for today, buddy.) Application: Function Approximations A popular use of Taylor series is getting a quick approximation for a function. If you want a tadpole, do you need the DNA for the entire frog?
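To make the "quick approximation" idea concrete, here is a small sketch (mine, not from the post) that sums the first few terms of the sine series and compares them with the true value near and far from $x = 0$:

```python
import math

def sin_taylor(xv, n_terms):
    """Partial sum of sin(x) = x - x^3/3! + x^5/5! - ... (n_terms terms)."""
    return sum((-1) ** k * xv ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

# Near x = 0 one term ("sin(x) ~ x") is already close; farther out we
# need more DNA to stay accurate.
for n in (1, 2, 4):
    print(n,
          abs(sin_taylor(0.3, n) - math.sin(0.3)),   # shrinks fast
          abs(sin_taylor(3.0, n) - math.sin(3.0)))   # needs more terms
```

With one term the error at $x = 3$ is enormous; by four terms it is already small, illustrating the "more terms as you travel farther" rule of thumb below.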
The Taylor series has a bunch of terms, typically ordered by importance: $c_0 = f(0)$, the constant term, is the exact value at the point; $c_1 x = f'(0)x$, the linear term, tells us what speed to move from our point; $c_2 x^2 = \frac{f''(0)}{2!}x^2$, the quadratic term, tells us how much to accelerate away from our point; and so on. If we only need a prediction for a few instants around our point, the initial position & velocity may be good enough: $f(x) \approx f(0) + f'(0)x$. If we're tracking for longer, then acceleration becomes important: $f(x) \approx f(0) + f'(0)x + \frac{f''(0)}{2!}x^2$. As we get further from our starting point, we need more terms to keep our prediction accurate. For example, the linear model $\sin(x) \approx x$ is a good prediction around $x=0$. As we get further out, we need to account for more terms. Similarly, $e^x \approx 1 + x$ works well for small interest rates: 1% discrete interest is 1.01 after one time period, 1% continuous interest is a tad higher than 1.01. As time goes on, the linear model falls behind because it ignores the compounding effects. Application: Comparing Functions What's a common application of DNA? Paternity tests. If we have a few functions, we can compare their Taylor series to see if they're related. Here's the expansions of $\sin(x)$, $\cos(x)$, and $e^x$: $\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$, $\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$, $e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$. There's a family resemblance in the sequences, right? Clean powers of $x$ divided by a factorial? One problem is the sequence for $e^x$ has positive terms, while sine and cosine alternate signs. How can we link these together? Euler's great insight was realizing an imaginary number could swap the sign from positive to negative: $e^{ix} = 1 + ix - \frac{x^2}{2!} - i\frac{x^3}{3!} + \frac{x^4}{4!} + \cdots = \cos(x) + i\sin(x)$. Whoa. Using an imaginary exponent and separating into odd/even powers reveals that sine and cosine are hiding inside the exponential function. Amazing. Although this proof of Euler's Formula doesn't show why the imaginary number makes sense, it reveals the baby daddy hiding backstage. Appendix: Assorted Aha!
Moments Relationship to Fourier Series The Taylor Series extracts the "polynomial DNA" and the Fourier Series/Transform extracts the "circular DNA" of a function. Both see functions as built from smaller parts (polynomials or exponential paths). Does the Taylor Series always work? This gets into mathematical analysis beyond my depth, but certain functions aren't easily (or ever) approximated with polynomials. Notice that powers like $x^2, x^3$ explode as $x$ grows. In order to have a slow, gradual curve, you need an army of polynomial terms fighting it out, with one winner barely emerging. If you stop the train too early, the approximation explodes again. For example, take the Taylor Series for $\ln(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots$. Plot the partial sums against the true curve and you'll see that adding more terms, even dozens, barely gets us accuracy beyond $x=1.0$. It's just too hard to maintain a gentle slope with terms that want to run hog wild. In this case, we only have a radius of convergence where the approximation stays accurate (such as around $|x| < 1$). Turning geometric to algebraic definitions Sine is often defined geometrically: the height of a line on a circular figure. Turning this into an equation seems really hard. The Taylor Series gives us a process: if we know a single value and how it changes (the derivative), we can reverse-engineer the DNA. Similarly, the description of $e^x$ as "the function with its derivative equal to the current value" yields the DNA [1, 1, 1, 1, ...] and the polynomial $f(x) = 1 + \frac{1}{1!}x + \frac{1}{2!}x^2 + \frac{1}{3!}x^3 + \dots$. We went from a verbal description to an equation. Phew! A few items to ponder. Happy math.
Suppose that we run the simple linear regression $Y = \alpha + \beta X + \epsilon$. I want to test whether the independent variable $X$ is exogenous. If the correlation between the independent variable $X$ and the residual of the linear regression is almost zero, i.e. $\mathrm{cor}(X, \hat{\epsilon}) \approx 0$, can I then conclude that this simple test suggests that the independent variable $X$ is exogenous? The linear regression model is$$\boldsymbol{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$$together with the orthogonality (uncorrelatedness) assumption $\mathbb{E}( \mathbf{X}'\boldsymbol{\varepsilon}) = \boldsymbol{0}$. If estimation of the parameters $\boldsymbol{\beta}$ proceeds by least squares, then the first order conditions (normal equations) are $$ \begin{align} \mathbf{X}'\left(\boldsymbol{Y} - \mathbf{X}\hat{\boldsymbol{\beta}} \right) &= \boldsymbol{0} \\ \mathbf{X}'\hat{\boldsymbol{\varepsilon}} &= \boldsymbol{0} \end{align} $$ where the last line indicates that the correlation between the residuals and the regressors is always zero for a linear regression model estimated by least squares. Thus it cannot form the basis of a test of the uncorrelatedness assumption: the residuals are uncorrelated with the regressors by construction, whether or not the unobserved errors are. In order to test the exogeneity assumption, you will need access to instrumental variables. See the Durbin-Wu-Hausman test.
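The "by construction" point is easy to see in a simulation. A hedged sketch (mine, not part of the original answer): generate data where $X$ is deliberately endogenous, fit OLS, and observe that the residuals are still uncorrelated with $X$ even though the slope estimate is biased:

```python
import random

random.seed(0)
n = 5000

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

# Simulate an *endogenous* regressor: x is correlated with the error u.
u = [random.gauss(0, 1) for _ in range(n)]
x = [0.8 * ui + random.gauss(0, 1) for ui in u]
y = [1.0 + 2.0 * xi + ui for xi, ui in zip(x, u)]

# Simple-regression OLS estimates.
beta_hat = cov(x, y) / cov(x, x)
alpha_hat = mean(y) - beta_hat * mean(x)
resid = [yi - alpha_hat - beta_hat * xi for xi, yi in zip(x, y)]

# The residuals are uncorrelated with x by construction (normal
# equations), even though x is endogenous and beta_hat is biased.
print(cov(x, resid))  # ~0, at machine precision
print(beta_hat)       # noticeably above the true slope of 2
```

The residual check passes no matter how severe the endogeneity, which is exactly why it cannot serve as an exogeneity test.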
Source Extent and Errors The apparent sizes and associated errors of sources reported in version 2 of the Chandra Source Catalog are determined using a Mexican-Hat optimization method, which uses a wavelet transform to define elliptical source regions. This is a refinement of the source extent results produced by wavdetect, the source detection algorithm which identifies source candidates in each observation in catalog processing. The basic idea is as follows: given wavdetect sizes for the source and the PSF at its location, one can derive the intrinsic size of the source by deconvolving its observed size. In order to decide if a source is extended, srcextent evaluates whether the intrinsic size of the source differs from zero at a \(5\sigma\) confidence level. All catalog sources are run through the srcextent algorithm, except those with fewer than 15 counts, for which no extent information is provided. Source Region Stacked Observation Detection Table: ra_aper, dec_aper, mjr_axis_aper, mnr_axis_aper, pos_angle_aper, mjr_axis1_aperbkg, mnr_axis1_aperbkg, mjr_axis2_aperbkg, mnr_axis2_aperbkg, pos_angle_aperbkg The spatial regions defining a source and its corresponding background are determined by scaling and merging the individual source detection regions that result from all of the spatial scales and source detection energy bands in which the source is detected during the source detection process (wavdetect). The result is a single elliptical source region which excludes any overlapping source regions, and a single, co-located, scaled, elliptical annular background region.
The parameter values that define the source region and background region for each source are the ICRS right ascension and signed ICRS declination of the center of the source region and background region; the semi-major and semi-minor axes of the source region ellipse and of the inner and outer ellipses of the background region annulus; and the position angles of the semi-major axes defining the source and background region ellipses. In the first catalog release, the source region is defined on a tangent plane projection. The 0 deg position angle reference is defined on that tangent plane to be parallel to the true North direction at the location of the tangent plane reference (refer to the tangent plane reference right ascension (ra_nom), declination (dec_nom), and roll angle (roll_nom)). Modified Source Region Per-Observation Detections Table: area_aper, area_aperbkg The modified source region and modified background region for each source are defined as the areas of intersection of the source region and background region for that source with the field-of-view, excluding any overlapping source regions. Source Extent Per-Observation Detections Table: mjr_axis_raw, mjr_axis_raw_lolim, mjr_axis_raw_hilim, mnr_axis_raw, mnr_axis_raw_lolim, mnr_axis_raw_hilim, pos_angle_raw, pos_angle_raw_lolim, pos_angle_raw_hilim In order to estimate the intrinsic extent of a source in the sky, one first needs to realize that the measured extent of the source on the detector is the result of a convolution between the source itself and the PSF corresponding to that particular observation. It is therefore necessary to estimate the convolved extent of the source and of the PSF, and then perform a deconvolution. Convolved Source Extent The extent of the convolved source is estimated in a given science energy band with a rotated elliptical Gaussian parametrization of the raw extent of a source, i.e., the extent of a source before deconvolution has been performed.
The corresponding ellipse has the following form: Here, \(\phi\) ( pos_angle_raw) is the clockwise angle between the positive x-axis and the ellipse major axis; \(a_{1}\) and \(a_{2}\) are the \(1\sigma\) radii along the major and minor axes of the source ellipse ( mjr_axis_raw, mnr_axis_raw); \(s_{0}\) is the amplitude of the source elliptical Gaussian distribution. For source extent purposes, the parameters of the ellipse are estimated by performing a spatial transform with a Mexican-Hat wavelet (also known as a Ricker wavelet) directly on the counts in the raw source region, provided that more than 15 counts have been detected (for fewer than 15 counts, the error in the determination of the source size is comparable to the size itself). Note that this region describes the raw size of the source, and it is therefore different from the source region derived by wavdetect. Below we describe how that region is fitted to the observed distribution of counts. The idea is simple: the two-dimensional correlation integral (i.e., the transform) between the wavelet function \(W\) and the ellipse function \(S\) is defined as: where \(\mathbf{\alpha} = (a_{1},a_{2},\phi)\) are the semi-major axis, semi-minor axis, and rotational angle of the Mexican-Hat wavelet. This correlation should be maximized when the scale and position of the wavelet coincide with those of the source. Specifically, the quantity \(\psi(x,y;\mathbf{\alpha}) = C(x,y;\mathbf{\alpha})/\sqrt{a_{1} a_{2}}\) is maximized if the dimensions of the ellipse and the Mexican-Hat wavelet are related as \(a_{i} = \sqrt{3} \sigma_{i} \) and \(\phi = \phi_{0}\). We can therefore estimate the parameters of the source extent ellipse by maximizing \(\psi(x,y;\mathbf{\alpha})\). Note that this assumes that sources can always be described as elliptical Gaussians. In practice, the maximization is evaluated as a discrete version of the equations above on the pixels of the image.
In CSC2, the optimization of the correlation integral is performed using the Sherpa fitting tool. PSF Extent The same approach as for the convolved source extent is used to estimate the elliptical parameters that best represent the instrumental point spread function (PSF) in each science band at the location of the source. The inputs are the PSF counts in the source region. The parameterization of the PSF can be compared with the parameterization of the detected source to determine whether the latter is consistent with a point source (see below). Deconvolved Source Extent Determination of the extended nature of sources Deconvolving the raw source image with the PSF produces the intrinsic shape of the source. In order to determine whether a source is extended, we first need to realize that the intrinsic size of a point source is, by definition, zero. By performing the fitting described above, srcextent estimates the raw sizes and errors of both the source and the PSF. In what follows, we derive the size and error for the intrinsic size of the source, and then we establish a criterion to flag a source as extended. In principle, one can determine the parameters of the intrinsic source ellipse, \(\{a_{1},a_{2},\phi\}\), by solving a non-linear system of equations involving the PSF parameters, \(\{b_{1},b_{2},\psi\}\), and the measured source parameters, \(\{\sigma_{1},\sigma_{2},\delta\}\). However, because these equations are based on a crude approximation and because the input parameters are often uncertain, such an elaborate calculation seems unjustified. A much simpler and more robust approach makes use of the identity \[\sigma_{1}^{2} + \sigma_{2}^{2} = a_{1}^{2} + a_{2}^{2} + b_{1}^{2} + b_{2}^{2} \ ,\] which applies to the convolution of two elliptical Gaussians having arbitrary relative sizes and position angles.
Using this identity, one can define a root-sum-square intrinsic source size \[a_{\mathrm{rss}} \equiv \sqrt{a_{1}^2 + a_{2}^{2}} = \sqrt{ \max\{0,(\sigma_{1}^{2} + \sigma_{2}^{2}) - (b_{1}^{2} + b_{2}^{2}) \}} \ ,\] that depends only on the sizes of the relevant ellipses and is independent of their orientations. This expression is analogous to the well-known result for convolution of 1D Gaussians and for convolution of circular Gaussians in 2D. Using the equation above, one can derive an analytic expression for the uncertainty in \(a_{\mathrm{rss}}\) in terms of the measurement errors associated with \(\sigma_{i}\) and \(b_{i}\). Because \(\sigma_{i}\) and \(b_{i}\) are non-negative, evaluating the right-hand side of the equation using the corresponding mean values should give a reasonable estimate of the mean value of \(a_{\mathrm{rss}}\). A Taylor series expansion of the right-hand side evaluated at the mean parameter values therefore yields the uncertainty \[\Delta a_{\mathrm{rss}} = \frac{1}{a} \sqrt{ \sigma_{1}^{2} \left(\Delta \sigma_{1} \right)^{2} + \sigma_{2}^{2} \left(\Delta \sigma_{2} \right)^{2} + b_{1}^{2} \left(\Delta b_{1} \right)^{2} + b_{2}^{2} \left(\Delta b_{2} \right)^{2} } \ ,\] where \(\left(\Delta X \right)^{2}\) represents the variance in \(X\) and where \[a \equiv \left\{ \begin{array}{ll} a_{\mathrm{rss}} \ , & a_{\mathrm{rss}} > 0 \ , \\ \sqrt{b_{1}^{2} + b_{2}^{2}} \ , & a_{\mathrm{rss}} = 0 \ . \end{array} \right.\] A source is considered extended if its root-sum-square intrinsic size is larger than the root-sum-square error of this size; that is, if its derived intrinsic size, which depends on the raw source and PSF sizes and errors, is larger than the fluctuations due to the uncertainty in its determination (i.e., it is significantly different from zero). If \(a_{\mathrm{rss}} > f \Delta a_{\mathrm{rss}}\), then the source is extended at the \(f\sigma\) level.
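The extent criterion above can be sketched in a few lines of Python (hypothetical helper names; the actual catalog pipeline uses the srcextent tool):

```python
import math

def a_rss(sigma1, sigma2, b1, b2):
    """Root-sum-square intrinsic source size from the raw (convolved)
    source radii (sigma1, sigma2) and the PSF radii (b1, b2)."""
    return math.sqrt(max(0.0, (sigma1**2 + sigma2**2) - (b1**2 + b2**2)))

def delta_a_rss(sigma1, sigma2, b1, b2, ds1, ds2, db1, db2):
    """Propagated 1-sigma uncertainty on a_rss; the denominator falls
    back to the PSF size when a_rss is zero, as in the piecewise
    definition of `a` above."""
    a = a_rss(sigma1, sigma2, b1, b2)
    if a == 0.0:
        a = math.sqrt(b1**2 + b2**2)
    return math.sqrt(sigma1**2 * ds1**2 + sigma2**2 * ds2**2
                     + b1**2 * db1**2 + b2**2 * db2**2) / a

def is_extended(sigma1, sigma2, b1, b2, ds1, ds2, db1, db2, f=5.0):
    """CSC2-style extent flag: extended if a_rss > f * delta(a_rss)."""
    return (a_rss(sigma1, sigma2, b1, b2)
            > f * delta_a_rss(sigma1, sigma2, b1, b2, ds1, ds2, db1, db2))
```

For example, a source whose raw size equals the PSF size is never flagged, while one substantially larger than the PSF is.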
In CSC2, we use \(f = 5\), which implies that sources are flagged as extended if the intrinsic size is determined with a significance of \(5\sigma\). Point Spread Function Extent Per-Observation Detections Table: psf_mjr_axis_raw, psf_mjr_axis_raw_lolim, psf_mjr_axis_raw_hilim, psf_mnr_axis_raw, psf_mnr_axis_raw_lolim, psf_mnr_axis_raw_hilim, psf_pos_angle_raw, psf_pos_angle_raw_lolim, psf_pos_angle_raw_hilim The point spread function extent is a rotated elliptical Gaussian parameterization of the raw extent of the point spread function (PSF) at the location of the source. The parameterization of the PSF is computed from a wavelet transform analysis of the PSF counts in the source region in a given science energy band, and can be compared with the parameterization of the detected source to determine whether the latter is consistent with a point source. The point spread function extent is defined by the values and associated errors of the \(1\sigma\) radii along the major and minor axes, and position angle of the major axis of the point spread function ellipse that the detection process would assign to a monochromatic PSF at the location of the source, and whose energy is the effective energy of the given energy band. 
The point spread function has the following form: Here, \(\psi\) ( psf_pos_angle_raw) is the clockwise angle between the positive x-axis and the ellipse major axis; \(b_{1}\) and \(b_{2}\) are the \(1\sigma\) radii along the major and minor axes of the PSF ellipse ( psf_mjr_axis_raw, psf_mnr_axis_raw); \(p_{0}\) is the amplitude of the PSF elliptical Gaussian distribution, and Deconvolved Source Extent Master Sources Table: major_axis, major_axis_lolim, major_axis_hilim, minor_axis, minor_axis_lolim, minor_axis_hilim, pos_angle, pos_angle_lolim, pos_angle_hilim The deconvolved source extent is a parameterization of the best estimate of the flux distribution defining the PSF-convolved source, which is determined in each science energy band from a variance-weighted mean of the deconvolved extent of each source measured in all contributing observations. The parameterization consists of the best estimate values and associated errors for the \(1\sigma\) radius along the major axis, the \(1\sigma\) radius along the minor axis, and the position angle of the major axis of a rotated elliptical Gaussian source that is convolved with the ray-trace local PSF at the location of the source spatial event distribution. Stacked Observation Detections Table: major_axis, major_axis_lolim, major_axis_hilim, minor_axis, minor_axis_lolim, minor_axis_hilim, pos_angle, pos_angle_lolim, pos_angle_hilim The deconvolved source extent is a parameterization of the deconvolved extent of each source, i.e., a rotated elliptical Gaussian source that is convolved with the ray-trace local PSF at the location of the spatial event distribution of the source. 
Using a telescope with a PSF defined by \(p(x,y)\) to observe a source represented by \(s(x,y)\), one obtains a source image, \(c(x,y)\), which is the convolution of the source and the PSF, If both the source and PSF are elliptical Gaussians, then the PSF-convolved source, \(c(x,y)\), is an elliptical Gaussian centered on the origin with \(1\sigma\) radii along the major and minor axes \(\sigma_{2}\) and \(\sigma_{1}\) ( major_axis, minor_axis), and has the form: Here, \(\delta\) ( pos_angle) is the clockwise angle between the positive x-axis and the ellipse major axis, \(s_{0}\) and \(p_{0}\) are the respective amplitudes of the source and PSF elliptical Gaussian distributions, and In the Chandra Source Catalog, the deconvolved source extent is defined by the \(1\sigma\) radius along the major axis \(\sigma_{2}\), the \(1\sigma\) radius along the minor axis \(\sigma_{1}\), the position angle \(\delta\) of the major axis of the elliptical Gaussian defining the PSF-convolved source, and the associated errors. Changes with Respect to Earlier Versions With respect to earlier versions of the catalog, a number of improvements have been included in Release 2 of the catalog to improve the source extent estimate. The main changes were: We use the results from wavdetect to set the initial parameter guess for the source size. The correlation integral is maximized using the Nelder-Mead Simplex optimization method, but only the scale and orientation of the Mexican-Hat wavelet are free parameters. The centroid position is estimated prior to the fit by maximizing a simplified version of the wavelet that uses the initial guesses for \(a_{i}\) (from wavdetect), and \(\phi = 0\). Therefore, the position of the pixel where the maximum occurs is found first, and then the orientation and size of the ellipse are optimized. Adding the effect of aspect blur to the PSF.
Both the aspect solution and detector effects add an aspect blur to the instrumental PSF that effectively increases its extent. We have added an estimated blur in quadrature to the PSF extent in order to improve our estimate of the deconvolved source extent. Improving the PSF image fitting by adjusting image centering and size, and using sub-pixelated PSFs where appropriate. Caveats When using the srcextent results, users should keep in mind the following two caveats: The algorithm is not designed to separate blended sources and is unlikely to generate optimal source regions in such cases. Users should use caution in interpreting srcextent results in very crowded regions. The algorithm makes no attempt to detect cases in which no significant source is present above background within the ellipse that was initially provided by wavdetect. In such cases, subsequent optimization of \(\psi\) may yield a meaningless result, such as an ellipse of maximum size or an ellipse of random size centered on a noise peak.
A bumble bee charged to $1 \text{ C}$ flies from $(0 \text{ m}, 0\text{ m})$ to $(1 \text{ m}, 1\text{ m})$ through a region with electric field $\vec{E}=(\hat{x}+\hat{y})\,\dfrac{\text{N}}{\text{C}}$. Find the work done by the electric field on the bumble bee.
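Since the field here is uniform, the work is simply $W = q\,\vec{E}\cdot\vec{d} = (1)(1\cdot 1 + 1\cdot 1) = 2\ \mathrm{J}$. A minimal numeric check of the line integral (a sketch, assuming SI units throughout):

```python
import numpy as np

q = 1.0                          # charge in coulombs
E = np.array([1.0, 1.0])         # uniform field in N/C

# Parametrize the straight path from (0 m, 0 m) to (1 m, 1 m).
t = np.linspace(0.0, 1.0, 1001)
path = np.outer(t, [1.0, 1.0])   # r(t) = (t, t)
dr = np.diff(path, axis=0)       # path segments

# W = q * sum(E . dr); E is constant and conservative, so the
# particular path taken does not matter.
W = q * np.sum(dr @ E)
print(W)   # ≈ 2.0 joules
```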
We now discuss the concept of divisibility and its properties. Integer Divisibility If \(a\) and \(b\) are integers such that \(a\neq 0\), then we say "\(a\) divides \(b\)" if there exists an integer \(k\) such that \(b=ka\). If \(a\) divides \(b\), we also say "\(a\) is a factor of \(b\)" or "\(b\) is a multiple of \(a\)" and we write \(a\mid b\). If \(a\) doesn’t divide \(b\), we write \(a\nmid b\). For example \(2\mid 4\) and \(7\mid 63\), while \(5\nmid 26\). a) Note that any even integer has the form \(2k\) for some integer \(k\), while any odd integer has the form \(2k+1\) for some integer \(k\). Thus \(2|n\) if \(n\) is even, while \(2\nmid n\) if \(n\) is odd. b) \(\forall a\in\mathbb{Z}\) one has that \(a\mid 0\). c) If \(b\in\mathbb{Z}\) is such that \(|b|<a\), and \(b\neq 0\), then \(a\nmid b\). If \(a, b\) and \(c\) are integers such that \(a\mid b\) and \(b\mid c\), then \(a\mid c\). Since \(a\mid b\) and \(b\mid c\), then there exist integers \(k_1\) and \(k_2\) such that \(b=k_1a\) and \(c=k_2b\). As a result, we have \(c=k_1k_2a\) and hence \(a\mid c\). Since \(6\mid 18\) and \(18\mid 36\), then \(6\mid 36\). The following theorem states that if an integer divides two other integers then it divides any linear combination of these integers. [thm4] If \(a,b,c,m\) and \(n\) are integers, and if \(c\mid a\) and \(c\mid b\), then \(c\mid (ma+nb)\). Since \(c\mid a\) and \(c\mid b\), then by definition there exists \(k_1\) and \(k_2\) such that \(a=k_1c\) and \(b=k_2c\). Thus \[ma+nb=mk_1c+nk_2c=c(mk_1+nk_2),\] and hence \(c\mid (ma+nb)\). Theorem [thm4] can be generalized to any finite linear combination as follows. If \[a\mid b_1, a\mid b_2,...,a\mid b_n\] then \[a\mid \sum_{j=1}^nk_jb_j\] for any set of integers \(k_1,\cdots,k_n\in\mathbb{Z}\). It would be a nice exercise to prove the generalization by induction. The following theorem states somewhat an elementary but very useful result. 
[thm5] The Division Algorithm If \(a\) and \(b\) are integers such that \(b>0\), then there exist unique integers \(q\) and \(r\) such that \(a=bq+r\) where \(0\leq r< b\). Consider the set \(A=\{a-bk\geq 0 \mid k\in \mathbb{Z}\}\). Note that \(A\) is nonempty since for \(k<a/b\), \(a-bk>0\). By the well ordering principle, \(A\) has a least element \(r=a-bq\) for some \(q\). Notice that \(r\geq 0\) by construction. Now if \(r\geq b\) then (since \(b>0\)) \[r>r-b=a-bq-b=a-b(q+1)\geq 0.\] This leads to a contradiction since \(r\) is assumed to be the least nonnegative integer of the form \(a-bk\). As a result we have \(0\leq r <b\). We will show that \(q\) and \(r\) are unique. Suppose that \(a=bq_1+r_1\) and \(a=bq_2+r_2\) with \(0\leq r_1<b\) and \(0\leq r_2<b\). Then we have \[b(q_1-q_2)+(r_1-r_2)=0.\] As a result we have \[b(q_1-q_2)=r_2-r_1.\] Thus we get that \[b\mid (r_2-r_1).\] And since \(|r_2-r_1|\leq\max(r_1,r_2)<b\), \(r_2-r_1\) must be \(0\), i.e. \(r_2=r_1\). And since \(bq_1+r_1=bq_2+r_2\), we also get that \(q_1=q_2\). This proves uniqueness. If \(a=71\) and \(b=6\), then \(71=6\cdot 11+5\). Here \(q=11\) and \(r=5\). Exercises Show that \(5\mid 25, 19\mid38\) and \(2\mid 98\). Use the division algorithm to find the quotient and the remainder when 76 is divided by 13. Use the division algorithm to find the quotient and the remainder when -100 is divided by 13. Show that if \(a,b,c\) and \(d\) are integers with \(a\) and \(c\) nonzero, such that \(a\mid b\) and \(c\mid d\), then \(ac\mid bd\). Show that if \(a\) and \(b\) are positive integers and \(a\mid b\), then \(a\leq b\). Prove that the sum of two even integers is even, the sum of two odd integers is even and the sum of an even integer and an odd integer is odd. Show that the product of two even integers is even, the product of two odd integers is odd and the product of an even integer and an odd integer is even.
Show that if \(m\) is an integer then \(3\) divides \(m^3-m\). Show that the square of every odd integer is of the form \(8m+1\). Show that the square of any integer is of the form \(3m\) or \(3m+1\) but not of the form \(3m+2\). Show that if \(ac\mid bc\), then \(a\mid b\). Show that if \(a\mid b\) and \(b\mid a\) then \(a=\pm b\).
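The quotient and remainder whose existence and uniqueness the Division Algorithm guarantees are exactly what floor division computes; a minimal Python sketch, including the exercise cases above:

```python
def division_algorithm(a, b):
    """Return (q, r) with a = b*q + r and 0 <= r < b, for b > 0.

    Python's floor division already yields the least nonnegative
    remainder used in the proof, even for negative a.
    """
    assert b > 0
    q = a // b          # floor division gives the unique quotient
    r = a - b * q
    assert 0 <= r < b   # the defining property of the remainder
    return q, r

print(division_algorithm(71, 6))     # (11, 5), as in the example
print(division_algorithm(-100, 13))  # (-8, 4): -100 = 13*(-8) + 4
```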
This was requested on meta.SO for the Audio & Video Production site. As balpha says, audio clips uploaded to SoundCloud and linked to in a post will automatically be converted into an embedded player like this. Of course, a dev has to turn on the feature for this site, and I'm sure they will when they see this post. While I completely agree that upper case tags will make life easier for us here, I'm very certain that this will be status-declined. For example, c#, .net, php etc., have existed on StackOverflow for 3 years now, although people would've preferred they be capitalized. Tags are made lowercase uniformly to standardize them. Else you'll end up with [DTFT], [... Reasons for downvoting: To add to Jason R's answer, a lot of users, myself included, do not explain a downvote if there is another comment that explains it, and they agree with the explanation. In this case, users simply upvote the comment indicating agreement and downvote the post. Here is one such question that comes to mind as a good example of this ... This is a fantastic idea, especially for sites that move into a second week of private beta. If a site generates enough questions and answers in the first week, then all's well and it opens to the public. However, when there is not enough content being generated it makes sense to extend the user base. I was mulling over this trying to come up with ideas ... I haven't seen a big rash of negative votes, but maybe I haven't been looking in the right places. I agree that it is annoying to receive a negative vote after spending time to craft a detailed (and correct) answer. I think that it's a difficult balance: the anonymity provided during the down-vote process might help to encourage some negative votes that ... Can someone from SE please revisit this request now that we're out of beta. Signal processing is sufficiently math heavy that it's sometimes required in meta discussions as well. For example, see this recent question.
We can turn on code syntax highlighting if the community sees a need for it -- do you think many posts would benefit? It's not a heavy dependency, but it is a minor one, so we'd want to make sure that it will benefit many posts, not just a few. There is some thought of possibly allowing this kind of functionality. It is not a definite plan yet, mostly because the majority of private betas only last 7 days - the need for invitations in that short of a period is fairly sparse. It's still under some level of consideration. However, the point is rather moot, as this site is now open as a public beta. ... You could do:

<pre>
code
code2
code3
</pre>

which gives you

    code
    code2
    code3

I think triple-backticks code should work too:

```
code
more code
some final code
```

which renders as:

    code
    more code
    some final code

This should also work:

<code>code
code2
code3</code>

which renders as:

    code
    code2
    code3

The <pre> tag has trouble with HTML ... Any site that is not still in beta can request up to five user-initiated migration targets that will be set up by the Community Team as needed. There's a nice FAQ about migration over on the main meta site. You can request these through a meta request like this and we will consider them. For right now, I don't see a need for this migration path. The last ... Other than contacting the person in chat there is no way of "targeting" users in this way. If your question is interesting and tagged correctly then, assuming that they are interested in those tags, it's highly likely they'll see the question anyway, and if it's a good question they may well answer. Questions are, by definition, public, and open to ... No, it's not a bug. It just means that there are no questions with open bounties. A week or so ago, it was there. That was because I had an open bounty on a question. The bounty is now closed, so there are no featured questions. Generally speaking, we discourage migration unless a question simply makes no sense on the site that it's asked on.
This is especially important on beta sites which are still establishing themselves. If someone has found their way to Digital Signal Processing and asked a question, it's generally best to assume that this is the right spot for them to ask ... I will point out part of what Yoda was saying. The solution is not using acronyms in my mind. Make a synonym for your acronym! The users can use the acronym they are used to and we can all see it typed out without the exhausting work of typing it out. Now the not fitting part, I am not sure what to say there, we need to approach tags somehow for that and ... That's great! The only additional thing I'd ask is that you create a one-sentence summary of the tag. I get this at the top of the question list when I click on that as a tag on a question. Let me know if you don't see that and I'll see if I can add something. Done (after a short hiccup that hopefully didn't inconvenience anyone). We usually do this when we turn MathJax on for the main site, somehow it fell through the cracks - sorry about that :) It's enabled now, enjoy. There are only three custom close reasons allowed, and these are currently set up to be: I asked for the last one to be added, because the beginning of each northern hemisphere academic year always brings students asking for us to do their homework. As a moderator, when I select the off-topic option, I can search for a more appropriate SE site: but then ... I completely agree (answer stolen from @PeterK. comment:). A very good option is to consider that a "perfect answering" comment is a philanthropic action. The commenter could have answered instead for his benefit. Philanthropy should draw attention to the commenter, and that could incite the OP and the valid answer providers (inspired by the comment) to ... It looks like this was added by another moderator: This question does not appear to be about signal processing within the scope defined in the help center. but required the approval of someone else (e.g. me).
I've just approved it and it seems to have gone live. Let's see if it works for an answer:$$\begin{align} X(f) &\triangleq \int\limits_{-\infty}^{+\infty} x(t) \ e^{-j 2 \pi f t} \ dt \\ \\ x(t) &= \int\limits_{-\infty}^{+\infty} X(f) \ e^{+j 2 \pi f t} \ df \\ \end{align} $$that seems to work too. Thank you Tim. In my question, I complained about anonymous down-votes without any reason being given, and asked "Are signal processors just nastier than ..." Well, the question itself has received 3 down-votes without any reason being given, and so rather than making this a comment on my question (comments cannot be down-voted, only up-voted), I am making this an answer. ... You can always set the text-transform property in a personal stylesheet. For example,

```css
.post-tag { text-transform: uppercase; }
```

switches all those tags to nice uppercase. Beware: this switches all tags. It should be possible to limit the change to dsp.sx via @-moz-document.
I have recently acquired a couple of mysterious ultra/super capacitors from my brother. Apparently he doesn't remember any of the specifications or even the brand... To further complicate matters, they have no meaningful identification information stamped or printed on them. (There is a bar code label with an alphanumeric code, but a quick Google search using it found nothing.) Looks like it's time to fire up the Scooby-Doo Mystery Bus, 'cause we're going on an adventure, folks. First, I figured I'd try to measure the capacitance. Since my LCR meter isn't specified for enormous capacitors like these, I had to get creative with my test equipment. Taking basic physics into account, we have that capacitance is proportional to the stored charge per volt across the capacitor: $$ C=\frac{q}{V} $$ where the accumulated charge in the capacitor is the integral of the current through the capacitor: $$ \int i(t)dt=q$$ Using a current source to charge the capacitor we can simplify the calculations, using only delta measurements of the charge and voltage across the capacitor. $$ C=\frac{\Delta q}{\Delta V}=\frac{i\Delta t}{\Delta V} $$ With my Advantest R6144 current source I can then charge the capacitor at a set current and simply measure the voltage across the capacitor using my Tektronix DMM4050 in the trendplot mode. However, this is where I start to see some rather large numbers. It's possible the capacitor really is ~2200 farads, but that seems a bit high. Admittedly, the capacitor is quite large at ~5.5" long by ~1" radius. And now some questions for the fine folks of Electrical Engineering Stack Exchange: Is this method a viable means to measure super capacitors? Or is there a more suitable method that I can apply to measure them? Also, does the capacitance of super/ultra capacitors significantly change vs. voltage of the capacitor? E.g., are these measured results predictive/indicative for higher charge voltages.
I would reckon the capacitance should fluctuate some, but I doubt it's that much. Probably at worst it's a few hundred farads, but I'm no expert on the matter. Also, and somewhat more importantly, how would I find the maximum charge voltage without destroying the capacitor? Would a constant current charge of say 100uA over a few weeks till the voltage reaches some sort of equilibrium with self discharge work? Then back off a couple hundred millivolts and call that the max charge voltage. Or will it just reach a tipping point and self-destruct while spraying electrolyte all over my lab? Finally, how do you determine the polarity orientation of the capacitors? These are not marked in any way, and both terminals are identical. I cast my bet with the residual voltage stored in the capacitor. I assume the dielectric absorption/memory effect from previous charging knows the correct direction... At any rate, it's sort of fun to try and determine the characteristics of these capacitors. But it's still a touch aggravating that there are no useful markings on them, like polarity orientation, manufacturer, etc.
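The delta-measurement formula above is easy to wrap in a helper (a sketch with hypothetical example numbers, and assuming leakage/self-discharge is negligible over the measurement window):

```python
def capacitance_from_ramp(i_charge, t_seconds, v_start, v_end):
    """Estimate C = i * dt / dV for a constant-current charging ramp.

    Assumes the charging current is constant and that leakage is
    negligible over the measurement window.
    """
    dv = v_end - v_start
    if dv <= 0:
        raise ValueError("voltage must rise during a charging ramp")
    return i_charge * t_seconds / dv

# Hypothetical example: charging at 100 mA for 10 minutes while the
# terminal voltage rises by only 27 mV would indicate roughly
# 0.1 * 600 / 0.027 ≈ 2200 F -- consistent with the ~2200 F reading.
print(capacitance_from_ramp(0.1, 600.0, 1.000, 1.027))
```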
Let $\alpha$ and $\beta$ be two isomorphic ordinals. Then $\alpha = \beta$. I want to know whether the following proof is correct. I already know that there are three possible cases: $\alpha \in \beta, \alpha=\beta$ or $\beta\in\alpha$. Without loss of generality, assume that $\alpha \in \beta$. Let $f: \beta \to \alpha$ be an isomorphism. We consider $f(\alpha)$. Because the range of $f$ is $\alpha$, we have $f(\alpha) < \alpha$. Because $f$ is an isomorphism, it is order preserving, so $f^2(\alpha) < f(\alpha)$. Hence $f^{n+1}(\alpha) < f^n(\alpha)$ for all $n$. So $\{f^n(\alpha): n \in \mathbb{N}\}$ is a strictly decreasing sequence in a well-ordered set. This is impossible, so we get the desired contradiction. My doubt stems from the fact that I don't seem to use that $f$ is a bijection. I tried to use this argument for special cases of $\alpha$ and $\beta$, but I can't find order-preserving functions from a larger ordinal ($\beta$) to a smaller one ($\alpha$).
A few days ago, my math teacher (I hold him in high faith) said that $2\pi$ radians is not exactly $360^{\circ}$. His reasoning is the following. $\pi$ is irrational (and transcendental). $360$ is a whole number. Since no multiple of $\pi$ can equal a whole number, $2\pi$ radians is not exactly $360^{\circ}$. His logic was the above, give or take any mathematical formalities I may have omitted due to my own lack of experience. Is the above logic correct? Update: This is a very late update, and I'm making it so I don't misrepresent the level of mathematics teaching in my education system. I talked with my teacher afterwards, and he was oversimplifying things so that people didn't just use $\pi=3.14$ in conversions between degrees and radians and actually went to radian mode on their calculator when applicable. In essence, he meant $2\times3.14 \ne 2\pi^R.$
What is $f'(a)$ of the function $f(x)$? $$f(x) = 2x^2 − 3x + 1$$ I have been trying to do it but I cannot figure out how to do it. Please somebody can help me. When we write $f'(x) = 4x-3$, that means that $x$ is a placeholder. In other words, exactly what symbol we use is of little consequence. Thus, we have, for instance,$$f'(\color{red}x) = 4\color{red}x-3\\f'(\color{red}5) = 4\cdot \color{red}5 - 3\\f'(\color{red}\dagger) = 4\color{red}\dagger - 3\\f'(\color{red}わ) = 4\color{red}わ - 3$$and, of course, $f'(\color{red}a) = 4\color{red}a - 3$. The (almost) only time to be careful is when we insert specific numbers. See, we might have written $f'(5) = 17$, which is technically correct, but that makes it difficult to see exactly how the $5$ we insert affected the result. However, when using other symbols, like $a, x,$ etc., this is usually not a problem. We have $$f'(x) = 4x - 3$$ by simply using that $$(x^n)' = nx^{n-1}$$ for any $n \in \mathbb{N}$. Then you just plug $a$ in for $x$, so $$f'(a) = 4a - 3$$
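A quick numerical cross-check of the answers (a sketch using a central-difference approximation, which is exact for quadratics up to rounding):

```python
def f(x):
    # the function from the question
    return 2 * x**2 - 3 * x + 1

def numeric_derivative(g, a, h=1e-6):
    # central-difference approximation of g'(a)
    return (g(a + h) - g(a - h)) / (2 * h)

# f'(x) = 4x - 3, so for example f'(5) = 17
print(numeric_derivative(f, 5))   # ≈ 17
```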
An urn contains $r$ Red and $b$ Blue marbles. A fair coin is flipped. If the flip is Heads then $h$ Red marbles are added to the urn. If the flip is Tails then $t$ Blue marbles are added to the urn. Now a random marble $M$ is drawn from the urn. (a) What is the probability that $M$ is Red? (b) What is the probability that the flip was Heads given that $M$ is Blue? My attempt: Original marbles: $r$, $b$ After flip: $H: r+h = R$, $b$. $T: b+t = B$, $r$. a) $$P(M_{red}) = \frac{1}{2}\left [ \frac{R}{R+b}+\frac{r}{B+r} \right ]$$ b) $$\frac{1}{2} \cdot \dfrac{\dfrac{b}{R+b}}{\dfrac{b}{R+b}+\dfrac{B}{B+r}}$$
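A sketch of exact computations with Python's fractions (note that in Bayes' rule for part (b) the prior factor of $\frac12$ appears in both the numerator and every term of the denominator, so it cancels):

```python
from fractions import Fraction

def p_red(r, b, h, t):
    """Exact P(M is red) by conditioning on the coin flip."""
    heads = Fraction(r + h, r + h + b)   # red after h reds added
    tails = Fraction(r, r + b + t)       # red after t blues added
    return Fraction(1, 2) * (heads + tails)

def p_heads_given_blue(r, b, h, t):
    """Exact P(Heads | M is blue) via Bayes' rule; the 1/2 priors
    cancel between numerator and denominator."""
    blue_h = Fraction(b, r + h + b)
    blue_t = Fraction(b + t, r + b + t)
    return blue_h / (blue_h + blue_t)

# Sanity check: with h = t = 0 the flip is irrelevant.
print(p_red(2, 3, 0, 0))              # 2/5
print(p_heads_given_blue(1, 1, 0, 0)) # 1/2
```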
Is it true that all zeros of the Riemann Zeta Function are of order 1? Let $h(z) = \frac{\zeta'(z)}{\zeta(z)}\frac{x^z}{z}$, where $x$ is a positive real number ($x > 1$, probably?) , and $\zeta$ is the Riemann zeta function. I'm computing the residues of $h$. The only place I'm having problems computing the residue of $h$ is at the zeros of $\zeta$. I just need to know the order of the zeros of $\zeta$. It seems like for the trivial zeros (the negative even integers) the orders are 1. But I don't know what to do about the non-trivial zeros. I'm probably confused about something. When I look in the derivation of the von Mangoldt formula shown here, it lists the residues of $h$, and from their computation, it seems that the orders of the zeros at the non-trivial zeros are also 1. I was under the impression that the orders of the zeros of $\zeta$ were not all known. What am I missing? EDIT: I found this. On page 2, it says that a zero $\rho$ of $\zeta$ contributes $\frac{x^\rho}{\rho}$ to the residue of $h$, counted with multiplicity. (I'm off by a negative sign because of the way I defined $h$). Is there a special way to compute the residue without knowing the order of $\rho$?
When trying to find the Lorentz transformation in matrix form in the $x^2+x^3$-direction, I tried simply mapping the Lorentz boost in the $x^2$-direction to the $x^2+x^3$-direction by rotating it $45°$ around the $x^1$ axis: $$ R_{x}^{-1}(\theta=45°)\left(\begin{array}{cccc}\gamma & 0 & -\gamma\beta &0\\0&1&0&0\\-\gamma\beta&0&\gamma&0\\0&0&0&1\end{array}\right)R_{x}(\theta=45°) $$ where $\beta=\dfrac vc$ and $\gamma = \dfrac{1}{\sqrt{1-\beta^2}}$. However, I am not sure of the proper form of the rotation matrix in 4-dimensional spacetime, especially since we are using 'mostly minus' conventions. Using 'mostly plus' I would guess: $$ R_x=\left(\begin{array}{cccc}-1&0&0&0\\0&1&0&0\\0&0&\cos\theta&-\sin\theta\\0&0&\sin\theta&\cos\theta\end{array}\right). $$ How would this translate to mostly minus conventions?
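A sanity check on the conjugation idea, as a pure-Python sketch. Note the assumption built into it: a spatial rotation leaves $x^0$ alone in *either* signature convention, so its time-time entry is $+1$, and the Lorentz condition $\Lambda^{\mathsf T}\eta\,\Lambda=\eta$ is the same for $\eta$ and $-\eta$. Which diagonal direction ($x^2+x^3$ versus $x^2-x^3$) the boost ends up along depends on the sign convention chosen for $R_x$.

```python
import math

# Minimal pure-Python 4x4 matrix helpers, coordinate order (x0, x1, x2, x3).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [[A[j][i] for j in range(4)] for i in range(4)]

beta = 0.6
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# Boost along x^2, as in the question.
boost_x2 = [[gamma, 0.0, -gamma * beta, 0.0],
            [0.0,   1.0,  0.0,          0.0],
            [-gamma * beta, 0.0, gamma, 0.0],
            [0.0,   0.0,  0.0,          1.0]]

# Rotation by 45 degrees about the x^1 axis; time-time entry is +1
# regardless of metric signature (a rotation does not touch x^0).
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
R = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, c,  -s],
     [0.0, 0.0, s,   c]]
R_inv = transpose(R)  # rotations are orthogonal

# Conjugate: boost along a diagonal direction in the x^2-x^3 plane.
boost_diag = matmul(R_inv, matmul(boost_x2, R))

# Check it is still a Lorentz transformation: L^T eta L = eta,
# with eta = diag(1, -1, -1, -1) ('mostly minus'; the check is
# identical for 'mostly plus').
eta = [[1.0 if i == j == 0 else (-1.0 if i == j else 0.0) for j in range(4)]
       for i in range(4)]
check = matmul(transpose(boost_diag), matmul(eta, boost_diag))
assert all(abs(check[i][j] - eta[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```

The resulting matrix has time–space entries of equal magnitude $\gamma\beta/\sqrt{2}$ in the $x^2$ and $x^3$ slots, as expected for a boost along the diagonal.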
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' ("The 'path' only comes into being because we observe it.") Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who knows how German is used in talking about quantum mechanics. @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think you can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type roughly $26^7 \approx 8\times 10^9$ characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric.
Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry, if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry, so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union. Since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap). I think I liked Madvillainy because it had nonstandard rhyming styles and Madlib's composition. Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
The best model is not always the most complicated. Sometimes including variables that are not evidently important can actually reduce the accuracy of predictions. In this section we discuss model selection strategies, which will help us eliminate variables from the model that are found to be less important. In practice, the model that includes all available explanatory variables is often referred to as the full model. The full model may not be the best model, and if it isn't, we want to identify a smaller model that is preferable.

Identifying variables in the model that may not be helpful

Adjusted R² is a useful tool for identifying which predictors are adding value to the model, where adding value means they are (likely) improving the accuracy in predicting future outcomes. Let's consider two models, which are shown in Tables 1 and 2. The first table summarizes the full model since it includes all predictors, while the second does not include the duration variable.

Table 1. The fit for the full regression model, including the adjusted R² (df = 136).

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   36.2110    1.5140       23.92     0.0000
cond_new       5.1306    1.0511        4.88     0.0000
stock_photo    1.0803    1.0568        1.02     0.3085
duration      -0.0268    0.1904       -0.14     0.8882
wheels         7.2852    0.5547       13.13     0.0000

R²adj = 0.7108

Table 2. The fit for the regression model with predictors cond_new, stock_photo, and wheels (df = 137).

              Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)   36.0483    0.9745       36.99     0.0000
cond_new       5.1763    0.9961        5.20     0.0000
stock_photo    1.1177    1.0192        1.10     0.2747
wheels         7.2984    0.5448       13.40     0.0000

R²adj = 0.7128

Example: Which of the two models is better?

Solution: We compare the adjusted R² of each model to determine which to choose. Since the first model has a smaller R²adj than the second model, we prefer the second model to the first.

Will the model without duration be better than the model with duration? We cannot know for sure, but based on the adjusted R², this is our best assessment.
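For concreteness, adjusted R² is the degrees-of-freedom-corrected R², and the df values in the tables pin down the sample size. A small sketch of that formula (the value n = 141 is inferred from df = n − k − 1 and the tables' df values; it is not stated explicitly above):

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 for a model with k predictors fit to n observations."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With no predictors (k = 0), adjusted R^2 reduces to plain R^2.
assert adjusted_r2(0.5, 100, 0) == 0.5

# A predictor that does not raise R^2 at all lowers adjusted R^2.
assert adjusted_r2(0.72, 141, 4) < adjusted_r2(0.72, 141, 3)

# df = n - k - 1 matches Tables 1 and 2 (df = 136 with k = 4 predictors,
# df = 137 with k = 3) when n = 141.
n = 141
assert n - 4 - 1 == 136 and n - 3 - 1 == 137
```

This penalty for extra predictors is why the full model can have a *lower* adjusted R² than the smaller model, as in the example above.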
Two model selection strategies

Two common strategies for adding or removing variables in a multiple regression model are called backward elimination and forward selection. These techniques are often referred to as stepwise model selection strategies, because they add or delete one variable at a time as they "step" through the candidate predictors. Backward elimination starts with the model that includes all potential predictor variables. Variables are eliminated one-at-a-time from the model until we cannot improve the adjusted R².

Example: Results corresponding to the full model for the mario kart data are shown in Table 1. How should we proceed under the backward elimination strategy?

Solution: Our baseline adjusted R² from the full model is R²adj = 0.7108, and we need to determine whether dropping a predictor will improve the adjusted R². To check, we fit four models that each drop a different predictor, and we record the adjusted R² from each: [latex]\begin{array}\text{Exclude . . .}\hfill&\text{cond_new}\hfill&\text{stock_photo}\hfill&\text{duration}\hfill&\text{wheels}\\\text{ }\hfill&R^2_{adj}=0.6626\hfill&R^2_{adj}=0.7107\hfill&R^2_{adj}=0.7128\hfill&R^2_{adj}=0.3487\end{array}[/latex] The third model, without duration, has the highest adjusted R² of 0.7128, so we compare it to the adjusted R² for the full model. Because eliminating duration leads to a model with a higher adjusted R², we drop duration from the model. Since we eliminated a predictor from the model in the first step, we see whether we should eliminate any additional predictors. Our baseline adjusted R² is now R²adj = 0.7128. We now fit three new models, which consider eliminating each of the three remaining predictors: [latex]\begin{array}\text{Exclude duration and . .
.}\hfill&\text{cond_new}\hfill&\text{stock_photo}\hfill&\text{wheels}\\\text{ }\hfill&R^2_{adj}=0.6587\hfill&R^2_{adj}=0.7124\hfill&R^2_{adj}=0.3414\end{array}[/latex] None of these models lead to an improvement in adjusted R², so we do not eliminate any of the remaining predictors. That is, after backward elimination, we are left with the model that keeps cond_new, stock_photo, and wheels, which we can summarize using the coefficients from Table 2: [latex]\displaystyle\hat{y}=b_0+b_1x_1+b_2x_2+b_4x_4[/latex] [latex]\displaystyle\widehat{\text{price}}=36.05+5.18\times\text{cond_new}+1.12\times\text{stock_photo}+7.30\times\text{wheels}[/latex]

The forward selection strategy is the reverse of the backward elimination technique. Instead of eliminating variables one-at-a-time, we add variables one-at-a-time until we cannot find any variables that improve the model (as measured by adjusted R²).

Example: Construct a model for the mario kart data set using the forward selection strategy.

Solution: We start with the model that includes no variables. Then we fit each of the possible models with just one variable. That is, we fit the model including just cond_new, then the model including just stock_photo, then a model with just duration, and a model with just wheels. Each of the four models provides an adjusted R² value: [latex]\begin{array}\text{Add . . .}\hfill&\text{cond_new}\hfill&\text{stock_photo}\hfill&\text{duration}\hfill&\text{wheels}\\\text{ }\hfill&R^2_{adj}=0.3459\hfill&R^2_{adj}=0.0332\hfill&R^2_{adj}=0.1338\hfill&R^2_{adj}=0.6390\end{array}[/latex] In this first step, we compare the adjusted R² against a baseline model that has no predictors, which always has R²adj = 0. The one-predictor model with the largest adjusted R² is the model with the wheels predictor, and because this adjusted R² (0.6390) is larger than the adjusted R² from the model with no predictors (R²adj = 0), we will add this variable to our model.
We repeat the process again, this time considering 2-predictor models where one of the predictors is wheels, with a new baseline of R²adj = 0.6390: [latex]\begin{array}\text{Add wheels and . . .}\hfill&\text{cond_new}\hfill&\text{stock_photo}\hfill&\text{duration}\\\text{ }\hfill&R^2_{adj}=0.7124\hfill&R^2_{adj}=0.6587\hfill&R^2_{adj}=0.6528\end{array}[/latex] The best predictor in this stage, cond_new, has a higher adjusted R² (0.7124) than the baseline (0.6390), so we also add cond_new to the model. Since we have again added a variable to the model, we continue and see whether it would be beneficial to add a third variable: [latex]\begin{array}\text{Add wheels, cond_new, and . . .}\hfill&\text{stock_photo}\hfill&\text{duration}\\\text{ }\hfill&R^2_{adj}=0.7128\hfill&R^2_{adj}=0.7107\end{array}[/latex] The model adding stock_photo improved adjusted R² (0.7124 to 0.7128), so we add stock_photo to the model. Because we have again added a predictor, we check whether adding the last variable, duration, would improve adjusted R². We compare the adjusted R² for the model with duration and the other three predictors (0.7108) to the model that considers only wheels, cond_new, and stock_photo (0.7128). Adding duration does not improve the adjusted R², so we do not add it to the model, and we have arrived at the same model that we identified from backward elimination.

Model Selection Strategies

Backward elimination begins with the largest model and eliminates variables one-by-one until we are satisfied that all remaining variables are important to the model. Forward selection starts with no variables included in the model, then it adds in variables according to their importance until no other important variables are found. There is no guarantee that backward elimination and forward selection will arrive at the same final model.
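Both strategies are the same greedy loop run in opposite directions. A sketch in which the adjusted R² of every fitted subset is hard-coded from the worked example above; in practice `score` would refit the regression for each candidate subset rather than look values up:

```python
# Adjusted R^2 for each predictor subset, copied from the worked example.
SCORES = {
    frozenset(): 0.0,
    frozenset({"cond_new"}): 0.3459,
    frozenset({"stock_photo"}): 0.0332,
    frozenset({"duration"}): 0.1338,
    frozenset({"wheels"}): 0.6390,
    frozenset({"wheels", "cond_new"}): 0.7124,
    frozenset({"wheels", "stock_photo"}): 0.6587,
    frozenset({"wheels", "duration"}): 0.6528,
    frozenset({"cond_new", "stock_photo"}): 0.3414,
    frozenset({"wheels", "cond_new", "stock_photo"}): 0.7128,
    frozenset({"wheels", "cond_new", "duration"}): 0.7107,
    frozenset({"wheels", "stock_photo", "duration"}): 0.6626,
    frozenset({"cond_new", "stock_photo", "duration"}): 0.3487,
    frozenset({"wheels", "cond_new", "stock_photo", "duration"}): 0.7108,
}

def score(predictors):
    # Stand-in for refitting the model and reading off adjusted R^2.
    return SCORES[frozenset(predictors)]

def backward_elimination(all_predictors):
    current = set(all_predictors)
    while current:
        best = max((current - {p} for p in current), key=score)
        if score(best) <= score(current):
            return current          # no drop improves adjusted R^2
        current = best
    return current

def forward_selection(all_predictors):
    current = set()
    while True:
        remaining = set(all_predictors) - current
        if not remaining:
            return current
        best = max((current | {p} for p in remaining), key=score)
        if score(best) <= score(current):
            return current          # no addition improves adjusted R^2
        current = best

ALL = ["cond_new", "stock_photo", "duration", "wheels"]
assert backward_elimination(ALL) == {"cond_new", "stock_photo", "wheels"}
assert forward_selection(ALL) == {"cond_new", "stock_photo", "wheels"}
```

On this particular data the two directions agree, matching the text; as noted above, that agreement is not guaranteed in general.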
If both techniques are tried and they arrive at different models, we choose the model with the larger R²adj.

The p-Value Approach, an Alternative to Adjusted R²

The p-value may be used as an alternative to adjusted R² for model selection. In backward elimination, we would identify the predictor corresponding to the largest p-value. If that p-value is above the significance level, usually α = 0.05, then we would drop that variable, refit the model, and repeat the process. If the largest p-value is less than α = 0.05, then we would not eliminate any predictors and the current model would be our best-fitting model. In forward selection with p-values, we reverse the process. We begin with a model that has no predictors, then we fit a model for each possible predictor, identifying the model where the corresponding predictor's p-value is smallest. If that p-value is smaller than α = 0.05, we add it to the model and repeat the process, considering whether to add more variables one-at-a-time. When none of the remaining predictors can be added to the model with a p-value less than 0.05, we stop adding variables and the current model is our best-fitting model.

Try It: Examine Table 2, which considers the model including the cond_new, stock_photo, and wheels predictors. If we were using the p-value approach with backward elimination and we were considering this model, which of these three variables would be up for elimination? Would we drop that variable, or would we keep it in the model?

Solution: The stock_photo predictor is up for elimination since it has the largest p-value. Additionally, since that p-value is larger than 0.05, we would in fact eliminate stock_photo from the model. While the adjusted R² and p-value approaches are similar, they sometimes lead to different models, with the adjusted R² approach tending to include more predictors in the final model: for example, had we used the p-value approach with the auction data, we would not have included the stock_photo predictor in the final model.
When to use the adjusted R² approach and when to use the p-value approach

When the sole goal is to improve prediction accuracy, use adjusted R². This is commonly the case in machine learning applications. When we care about understanding which variables are statistically significant predictors of the response, or if there is interest in producing a simpler model at the potential cost of a little prediction accuracy, then the p-value approach is preferred. Regardless of whether you use adjusted R² or the p-value approach, and whether you use the backward elimination or forward selection strategy, our job is not done after variable selection. We must still verify that the model conditions are reasonable.
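The p-value variant of backward elimination can be sketched the same way. The p-values below are copied from Table 2; a real implementation would refit the model after every drop rather than reading from a fixed dictionary:

```python
# Backward elimination by p-value at significance level alpha = 0.05.
ALPHA = 0.05

def eliminate_by_pvalue(pvalues, alpha=ALPHA):
    """Drop the largest-p predictor until all remaining p-values <= alpha.

    `pvalues` stands in for a fitted model's per-predictor p-values; in
    practice the model would be refit (and p-values recomputed) after
    each elimination.
    """
    kept = dict(pvalues)
    while kept:
        worst = max(kept, key=kept.get)
        if kept[worst] <= alpha:
            break                 # everything left is significant
        del kept[worst]           # drop it; in practice, refit here
    return set(kept)

# The p-values from Table 2 (cond_new, stock_photo, wheels).
table2_pvalues = {"cond_new": 0.0000, "stock_photo": 0.2747, "wheels": 0.0000}
assert eliminate_by_pvalue(table2_pvalues) == {"cond_new", "wheels"}
```

The result matches the Try It solution: stock_photo, with p = 0.2747 > 0.05, is eliminated, while the adjusted R² approach had kept it.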
Learning Objectives Identify a conic in polar form. Graph the polar equations of conics. Define conics in terms of a focus and a directrix. Most of us are familiar with orbital motion, such as the motion of a planet around the sun or an electron around an atomic nucleus. Within the planetary system, orbits of planets, asteroids, and comets around a larger celestial body are often elliptical. Comets, however, may take on a parabolic or hyperbolic orbit instead. And, in reality, the characteristics of the planets’ orbits may vary over time. Each orbit is tied to the location of the celestial body being orbited and the distance and direction of the planet or other object from that body. As a result, we tend to use polar coordinates to represent these orbits. In an elliptical orbit, the periapsis is the point at which the two objects are closest, and the apoapsis is the point at which they are farthest apart. Generally, the velocity of the orbiting body tends to increase as it approaches the periapsis and decrease as it approaches the apoapsis. Some objects reach an escape velocity, which results in an infinite orbit. These bodies exhibit either a parabolic or a hyperbolic orbit about a body; the orbiting body breaks free of the celestial body’s gravitational pull and fires off into space. Each of these orbits can be modeled by a conic section in the polar coordinate system. Identifying a Conic in Polar Form Any conic may be determined by three characteristics: a single focus, a fixed line called the directrix, and the ratio of the distances of each to a point on the graph. Consider the parabola \(x=2+y^2\) shown in Figure \(\PageIndex{2}\). We previously learned how a parabola is defined by the focus (a fixed point) and the directrix (a fixed line). In this section, we will learn how to define any conic in the polar coordinate system in terms of a fixed point, the focus \(P(r,\theta)\) at the pole, and a line, the directrix, which is perpendicular to the polar axis. 
If \(F\) is a fixed point, the focus, and \(D\) is a fixed line, the directrix, then we can let \(e\) be a fixed positive number, called the eccentricity, which we can define as the ratio of the distances from a point on the graph to the focus and the point on the graph to the directrix. Then the set of all points \(P\) such that \(e=\dfrac{PF}{PD}\) is a conic. In other words, we can define a conic as the set of all points \(P\) with the property that the ratio of the distance from \(P\) to \(F\) to the distance from \(P\) to \(D\) is equal to the constant \(e\). For a conic with eccentricity \(e\), if \(0≤e<1\), the conic is an ellipse if \(e=1\), the conic is a parabola if \(e>1\), the conic is a hyperbola. With this definition, we may now define a conic in terms of the directrix, \(x=\pm p\), the eccentricity \(e\), and the angle \(\theta\). Thus, each conic may be written as a polar equation, an equation written in terms of \(r\) and \(\theta\). THE POLAR EQUATION FOR A CONIC For a conic with a focus at the origin, if the directrix is \(x=\pm p\), where \(p\) is a positive real number, and the eccentricity is a positive real number \(e\), the conic has a polar equation \[r=\dfrac{ep}{1\pm e \cos \theta}\] For a conic with a focus at the origin, if the directrix is \(y=\pm p\), where \(p\) is a positive real number, and the eccentricity is a positive real number \(e\), the conic has a polar equation \[r=\dfrac{ep}{1\pm e \sin \theta}\] Example \(\PageIndex{1}\): Identifying a Conic Given the Polar Form For each of the following equations, identify the conic with focus at the origin, the directrix, and the eccentricity. \(r=\dfrac{6}{3+2 \sin \theta}\) \(r=\dfrac{12}{4+5 \cos \theta}\) \(r=\dfrac{7}{2−2 \sin \theta}\) Solution For each of the three conics, we will rewrite the equation in standard form. Standard form has a \(1\) as the constant in the denominator.
Therefore, in all three parts, the first step will be to multiply the numerator and denominator by the reciprocal of the constant of the original equation, \(\dfrac{1}{c}\), where \(c\) is that constant. Multiply the numerator and denominator by \(\dfrac{1}{3}\). \(r=\dfrac{6}{3+2\sin \theta}⋅\dfrac{\left(\dfrac{1}{3}\right)}{\left(\dfrac{1}{3}\right)}=\dfrac{6\left(\dfrac{1}{3}\right)}{3\left(\dfrac{1}{3}\right)+2\left(\dfrac{1}{3}\right)\sin \theta}=\dfrac{2}{1+\dfrac{2}{3} \sin \theta}\) Because \(\sin \theta\) is in the denominator, the directrix is \(y=p\). Comparing to standard form, note that \(e=\dfrac{2}{3}\).Therefore, from the numerator, \[\begin{align*} 2&=ep\\ 2&=\dfrac{2}{3}p\\ \left(\dfrac{3}{2}\right)2&=\left(\dfrac{3}{2}\right)\dfrac{2}{3}p\\ 3&=p \end{align*}\] Since \(e<1\), the conic is an ellipse. The eccentricity is \(e=\dfrac{2}{3}\) and the directrix is \(y=3\). Multiply the numerator and denominator by \(\dfrac{1}{4}\). \[\begin{align*} r&=\dfrac{12}{4+5 \cos \theta}\cdot \dfrac{\left(\dfrac{1}{4}\right)}{\left(\dfrac{1}{4}\right)}\\ r&=\dfrac{12\left(\dfrac{1}{4}\right)}{4\left(\dfrac{1}{4}\right)+5\left(\dfrac{1}{4}\right)\cos \theta}\\ r&=\dfrac{3}{1+\dfrac{5}{4} \cos \theta} \end{align*}\] Because \(\cos \theta\) is in the denominator, the directrix is \(x=p\). Comparing to standard form, \(e=\dfrac{5}{4}\). Therefore, from the numerator, \[\begin{align*} 3&=ep\\ 3&=\dfrac{5}{4}p\\ \left(\dfrac{4}{5}\right)3&=\left(\dfrac{4}{5}\right)\dfrac{5}{4}p\\ \dfrac{12}{5}&=p \end{align*}\] Since \(e>1\), the conic is a hyperbola. The eccentricity is \(e=\dfrac{5}{4}\) and the directrix is \(x=\dfrac{12}{5}=2.4\). Multiply the numerator and denominator by \(\dfrac{1}{2}\). 
\[\begin{align*} r&=\dfrac{7}{2-2 \sin \theta}\cdot \dfrac{\left(\dfrac{1}{2}\right)}{\left(\dfrac{1}{2}\right)}\\ r&=\dfrac{7\left(\dfrac{1}{2}\right)}{2\left(\dfrac{1}{2}\right)-2\left(\dfrac{1}{2}\right) \sin \theta}\\ r&=\dfrac{\dfrac{7}{2}}{1-\sin \theta} \end{align*}\] Because sine is in the denominator, the directrix is \(y=−p\). Comparing to standard form, \(e=1\). Therefore, from the numerator, \[\begin{align*} \dfrac{7}{2}&=ep\\ \dfrac{7}{2}&=(1)p\\ \dfrac{7}{2}&=p \end{align*}\] Because \(e=1\), the conic is a parabola. The eccentricity is \(e=1\) and the directrix is \(y=−\dfrac{7}{2}=−3.5\). Exercise \(\PageIndex{1}\) Identify the conic with focus at the origin, the directrix, and the eccentricity for \(r=\dfrac{2}{3−\cos \theta}\). Answer ellipse; \(e=\dfrac{1}{3}\); \(x=−2\) Graphing the Polar Equations of Conics When graphing in Cartesian coordinates, each conic section has a unique equation. This is not the case when graphing in polar coordinates. We must use the eccentricity of a conic section to determine which type of curve to graph, and then determine its specific characteristics. The first step is to rewrite the conic in standard form as we have done in the previous example. In other words, we need to rewrite the equation so that the denominator begins with \(1\). This enables us to determine \(e\) and, therefore, the shape of the curve. The next step is to substitute values for \(\theta\) and solve for \(r\) to plot a few key points. Setting \(\theta\) equal to \(0\), \(\dfrac{\pi}{2}\), \(\pi\), and \(\dfrac{3\pi}{2}\) provides the vertices so we can create a rough sketch of the graph. Example \(\PageIndex{2A}\): Graphing a Parabola in Polar Form Graph \(r=\dfrac{5}{3+3 \cos \theta}\). Solution First, we rewrite the conic in standard form by multiplying the numerator and denominator by the reciprocal of \(3\), which is \(\dfrac{1}{3}\). 
\[\begin{align*} r &= \dfrac{5}{3+3 \cos \theta}=\dfrac{5\left(\dfrac{1}{3}\right)}{3\left(\dfrac{1}{3}\right)+3\left(\dfrac{1}{3}\right)\cos \theta} \\ r &= \dfrac{\dfrac{5}{3}}{1+\cos \theta} \end{align*}\] Because \(e=1\),we will graph a parabola with a focus at the origin. The function has a \(\cos \theta\), and there is an addition sign in the denominator, so the directrix is \(x=p\). \[\begin{align*} \dfrac{5}{3}&=ep\\ \dfrac{5}{3}&=(1)p\\ \dfrac{5}{3}&=p \end{align*}\] The directrix is \(x=\dfrac{5}{3}\). Plotting a few key points as in Table \(\PageIndex{1}\) will enable us to see the vertices. See Figure \(\PageIndex{3}\). A B C D \(\theta\) \(0\) \(\dfrac{\pi}{2}\) \(\pi\) \(\dfrac{3\pi}{2}\) \(r=\dfrac{5}{3+3 \cos \theta}\) \(\dfrac{5}{6}≈0.83\) \(\dfrac{5}{3}≈1.67\) undefined \(\dfrac{5}{3}≈1.67\) We can check our result with a graphing utility. See Figure \(\PageIndex{4}\). Example \(\PageIndex{2B}\): Graphing a Hyperbola in Polar Form Graph \(r=\dfrac{8}{2−3 \sin \theta}\). Solution First, we rewrite the conic in standard form by multiplying the numerator and denominator by the reciprocal of \(2\), which is \(\dfrac{1}{2}\). \[\begin{align*} r &=\dfrac{8}{2−3\sin \theta}=\dfrac{8\left(\dfrac{1}{2}\right)}{2\left(\dfrac{1}{2}\right)−3\left(\dfrac{1}{2}\right)\sin \theta} \\ r &= \dfrac{4}{1−\dfrac{3}{2} \sin \theta} \end{align*}\] Because \(e=\dfrac{3}{2}\), \(e>1\), so we will graph a hyperbola with a focus at the origin. The function has a \(\sin \theta\) term and there is a subtraction sign in the denominator, so the directrix is \(y=−p\). \[\begin{align*} 4&=ep\\ 4&=\left(\dfrac{3}{2}\right)p\\ 4\left(\dfrac{2}{3}\right)&=p\\ \dfrac{8}{3}&=p \end{align*}\] The directrix is \(y=−\dfrac{8}{3}\). Plotting a few key points as in Table \(\PageIndex{2}\) will enable us to see the vertices. See Figure \(\PageIndex{5}\). 
A B C D \(\theta\) \(0\) \(\dfrac{\pi}{2}\) \(\pi\) \(\dfrac{3\pi}{2}\) \(r=\dfrac{8}{2−3\sin \theta}\) \(4\) \(−8\) \(4\) \(\dfrac{8}{5}=1.6\) Example \(\PageIndex{2C}\): Graphing an Ellipse in Polar Form Graph \(r=\dfrac{10}{5−4 \cos \theta}\). Solution First, we rewrite the conic in standard form by multiplying the numerator and denominator by the reciprocal of 5, which is \(\dfrac{1}{5}\). \[\begin{align*} r &= \dfrac{10}{5−4\cos \theta}=\dfrac{10\left(\dfrac{1}{5}\right)}{5\left(\dfrac{1}{5}\right)−4\left(\dfrac{1}{5}\right)\cos \theta} \\ r &= \dfrac{2}{1−\dfrac{4}{5} \cos \theta} \end{align*}\] Because \(e=\dfrac{4}{5}\), \(e<1\), so we will graph an ellipse with a focus at the origin. The function has a \(\cos \theta\), and there is a subtraction sign in the denominator, so the directrix is \(x=−p\). \[\begin{align*} 2&=ep\\ 2&=\left(\dfrac{4}{5}\right)p\\ 2\left(\dfrac{5}{4}\right)&=p\\ \dfrac{5}{2}&=p \end{align*}\] The directrix is \(x=−\dfrac{5}{2}\). Plotting a few key points as in Table \(\PageIndex{3}\) will enable us to see the vertices. See Figure \(\PageIndex{6}\). A B C D \(\theta\) \(0\) \(\dfrac{\pi}{2}\) \(\pi\) \(\dfrac{3\pi}{2}\) \(r=\dfrac{10}{5−4 \cos \theta}\) \(10\) \(2\) \(\dfrac{10}{9}≈1.1\) \(2\) Analysis We can check our result using a graphing utility. See Figure \(\PageIndex{7}\). Exercise \(\PageIndex{2}\) Graph \(r=\dfrac{2}{4−\cos \theta}\). Answer Defining Conics in Terms of a Focus and a Directrix So far we have been using polar equations of conics to describe and graph the curve. Now we will work in reverse; we will use information about the origin, eccentricity, and directrix to determine the polar equation. How to: Given the focus, eccentricity, and directrix of a conic, determine the polar equation Determine whether the directrix is horizontal or vertical. If the directrix is given in terms of \(y\), we use the general polar form in terms of sine. 
If the directrix is given in terms of \(x\), we use the general polar form in terms of cosine. Determine the sign in the denominator. If \(p<0\), use subtraction. If \(p>0\), use addition. Write the coefficient of the trigonometric function as the given eccentricity. Write the absolute value of \(p\) in the numerator, and simplify the equation. Example \(\PageIndex{3A}\): Finding the Polar Form of a Vertical Conic Given a Focus at the Origin and the Eccentricity and Directrix Find the polar form of the conic given a focus at the origin, \(e=3\) and directrix \(y=−2\). Solution The directrix is \(y=−p\), so we know the trigonometric function in the denominator is sine. Because \(y=−2\), \(–2<0\), so we know there is a subtraction sign in the denominator. We use the standard form of \(r=\dfrac{ep}{1−e \sin \theta}\) and \(e=3\) and \(|−2|=2=p\). Therefore, \[\begin{align*} r&=\dfrac{(3)(2)}{1-3 \sin \theta}\\ r&=\dfrac{6}{1-3 \sin \theta} \end{align*}\] Example \(\PageIndex{3B}\): Finding the Polar Form of a Horizontal Conic Given a Focus at the Origin and the Eccentricity and Directrix Find the polar form of a conic given a focus at the origin, \(e=\dfrac{3}{5}\), and directrix \(x=4\). Solution Because the directrix is \(x=p\), we know the function in the denominator is cosine. Because \(x=4\), \(4>0\), so we know there is an addition sign in the denominator. We use the standard form of \(r=\dfrac{ep}{1+e \cos \theta}\) and \(e=\dfrac{3}{5}\) and \(|4|=4=p\). Therefore, \[\begin{align*} r &= \dfrac{\left(\dfrac{3}{5}\right)(4)}{1+\dfrac{3}{5}\cos\theta} \\ r &= \dfrac{\dfrac{12}{5}}{1+\dfrac{3}{5}\cos\theta} \\ r &=\dfrac{\dfrac{12}{5}}{1\left(\dfrac{5}{5}\right)+\dfrac{3}{5}\cos\theta} \\ r &=\dfrac{\dfrac{12}{5}}{\dfrac{5}{5}+\dfrac{3}{5}\cos\theta} \\ r &= \dfrac{12}{5}⋅\dfrac{5}{5+3\cos\theta} \\ r &=\dfrac{12}{5+3\cos\theta} \end{align*}\] Exercise \(\PageIndex{3}\) Find the polar form of the conic given a focus at the origin, \(e=1\), and directrix \(x=−1\). 
Answer \(r=\dfrac{1}{1−\cos\theta}\) Example \(\PageIndex{4}\): Converting a Conic in Polar Form to Rectangular Form Convert the conic \(r=\dfrac{1}{5−5\sin \theta}\) to rectangular form. Solution: We will rearrange the formula to use the identities \(r=\sqrt{x^2+y^2}\), \(x=r \cos \theta\),and \(y=r \sin \theta\). \[\begin{align*} r&=\dfrac{1}{5-5 \sin \theta} \\ r\cdot (5-5 \sin \theta)&=\dfrac{1}{5-5 \sin \theta}\cdot (5-5 \sin \theta)\qquad \text{Eliminate the fraction.} \\ 5r-5r \sin \theta&=1 \qquad \text{Distribute.} \\ 5r&=1+5r \sin \theta \qquad \text{Isolate }5r. \\ 25r^2&={(1+5r \sin \theta)}^2 \qquad \text{Square both sides. } \\ 25(x^2+y^2)&={(1+5y)}^2 \qquad \text{Substitute } r=\sqrt{x^2+y^2} \text{ and }y=r \sin \theta. \\ 25x^2+25y^2&=1+10y+25y^2 \qquad \text{Distribute and use FOIL. } \\ 25x^2-10y&=1 \qquad \text{Rearrange terms and set equal to 1.} \end{align*}\] Exercise \(\PageIndex{4}\) Convert the conic \(r=\dfrac{2}{1+2 \cos \theta}\) to rectangular form. Answer \(4−8x+3x^2−y^2=0\) Key Concepts Any conic may be determined by a single focus, the corresponding eccentricity, and the directrix. We can also define a conic in terms of a fixed point, the focus \(P(r,\theta)\) at the pole, and a line, the directrix, which is perpendicular to the polar axis. A conic is the set of all points \(e=\dfrac{PF}{PD}\), where eccentricity \(e\) is a positive real number. Each conic may be written in terms of its polar equation. See Example \(\PageIndex{1}\). The polar equations of conics can be graphed. See Example \(\PageIndex{2A}\), Example \(\PageIndex{2B}\), and Example \(\PageIndex{2C}\). Conics can be defined in terms of a focus, a directrix, and eccentricity. See Example \(\PageIndex{3A}\) and Example \(\PageIndex{3B}\). We can use the identities \(r=\sqrt{x^2+y^2}\), \(x=r \cos \theta\),and \(y=r \sin \theta\) to convert the equation for a conic from polar to rectangular form.
See Example \(\PageIndex{4}\).
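The identification procedure of Example \(\PageIndex{1}\) is mechanical enough to script. A sketch (the function name and the use of exact rational arithmetic are my choices, not from the text) for a conic given as \(r=\dfrac{A}{B + C\,\text{trig}(\theta)}\):

```python
from fractions import Fraction

def identify_conic(A, B, C):
    """For r = A / (B + C*trig(theta)), return (e, p, type).

    Dividing numerator and denominator by B gives the standard form
    r = ep / (1 + (C/B)*trig(theta)), so e = |C|/B and ep = A/B.
    """
    e = Fraction(abs(C), B)
    ep = Fraction(A, B)
    p = ep / e
    if e < 1:
        kind = "ellipse"
    elif e == 1:
        kind = "parabola"
    else:
        kind = "hyperbola"
    return e, p, kind

# The three conics from Example 1:
assert identify_conic(6, 3, 2) == (Fraction(2, 3), Fraction(3, 1), "ellipse")
assert identify_conic(12, 4, 5) == (Fraction(5, 4), Fraction(12, 5), "hyperbola")
assert identify_conic(7, 2, -2) == (Fraction(1, 1), Fraction(7, 2), "parabola")
```

The sign of \(C\) and the choice of sine versus cosine do not affect the classification; they only determine which side of the pole the directrix lies on, as described in the "How to" box above.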
Hints will display for most wrong answers; explanations for most right answers. You can attempt a question multiple times; it will only be scored correct if you get it right the first time. I used the official objectives and sample test to construct these questions, but cannot promise that they accurately reflect what's on the real test. Some of the sample questions were more convoluted than I could bear to write. See terms of use. See the MTEL Practice Test main page to view random questions on a variety of topics or to download paper practice tests.

MTEL General Curriculum Mathematics Practice

Question 1

Exactly one of the numbers below is a prime number. Which one is it?

\( \large511 \) Hint: Divisible by 7.
\( \large517\) Hint: Divisible by 11.
\( \large519\) Hint: Divisible by 3.
\( \large521\)

Question 2

The letters A and B represent digits (possibly equal) in the ten-digit number x=1,438,152,A3B. For which values of A and B is x divisible by 12, but not by 9?

\( \large A = 0, B = 4\) Hint: Digits add to 31, so not divisible by 3, so not divisible by 12.
\( \large A = 7, B = 2\) Hint: Digits add to 36, so divisible by 9.
\( \large A = 0, B = 6\) Hint: Digits add to 33, divisible by 3, not 9. Last digits are 36, so divisible by 4, and hence by 12.
\( \large A = 4, B = 8\) Hint: Digits add to 39, divisible by 3, not 9. Last digits are 38, so not divisible by 4, so not divisible by 12.

Question 3

The letters A, B, and C represent digits (possibly equal) in the twelve-digit number x=111,111,111,ABC. For which values of A, B, and C is x divisible by 40?

\( \large A = 3, B = 2, C=0\) Hint: Note that it doesn't matter what the first 9 digits are, since 1000 is divisible by 40, so DEF,GHI,JKL,000 is divisible by 40 - we need to check the last 3.
\( \large A = 0, B = 0, C=4\) Hint: Not divisible by 10, since it doesn't end in 0.
\( \large A = 4, B = 2, C=0\) Hint: Divisible by 10 and by 4, but not by 40, as it's not divisible by 8.
Look at 40 as the product of powers of primes -- 8 x 5, and check each. To check 8, either check whether 420 is divisible by 8, or take ones place + twice tens place + 4 * hundreds place = 20, which is not divisible by 8.
\( \large A =1, B=0, C=0\) Hint: Divisible by 10 and by 4, but not by 40, as it's not divisible by 8. Look at 40 as the product of powers of primes -- 8 x 5, and check each. To check 8, either check whether 100 is divisible by 8, or take ones place + twice tens place + 4 * hundreds place = 4, which is not divisible by 8.

Question 4

The prime factorization of n can be written as n=pqr, where p, q, and r are distinct prime numbers. How many factors does n have, including 1 and itself?

\( \large3\) Hint: 1, p, q, r, and pqr are already 5, so this isn't enough. You might try plugging in p=2, q=3, and r=5 to help with this problem.
\( \large5\) Hint: Don't forget pq, etc. You might try plugging in p=2, q=3, and r=5 to help with this problem.
\( \large6\) Hint: You might try plugging in p=2, q=3, and r=5 to help with this problem.
\( \large8\) Hint: 1, p, q, r, pq, pr, qr, pqr.

Question 5

How many factors does 80 have?

\( \large8\) Hint: Don't forget 1 and 80.
\( \large9\) Hint: Only perfect squares have an odd number of factors -- otherwise factors come in pairs.
\( \large10\) Hint: 1,2,4,5,8,10,16,20,40,80
\( \large12\) Hint: Did you count a number twice? Include a number that isn't a factor?

Question 6

What is the greatest common factor of 540 and 216?

\( \large{{2}^{2}}\cdot {{3}^{3}}\) Hint: One way to solve this is to factor both numbers: \(540=2^2 \cdot 3^3 \cdot 5\) and \(216=2^3 \cdot 3^3\). Then take the smaller power for each prime that is a factor of both numbers.
\( \large2\cdot 3\) Hint: This is a common factor of both numbers, but it's not the greatest common factor.
\( \large{{2}^{3}}\cdot {{3}^{3}}\) Hint: \(2^3 = 8\) is not a factor of 540.
\( \large {{2}^{2}}\cdot {{3}^{2}} \) Hint: This is a common factor of both numbers, but it's not the greatest common factor.

Question 7
What is the least common multiple of 540 and 216?
\( \large {{2}^{5}}\cdot {{3}^{6}}\cdot 5 \) Hint: This is the product of the numbers, not the LCM.
\( \large {{2}^{3}}\cdot {{3}^{3}}\cdot 5 \) Hint: One way to solve this is to factor both numbers: \(540=2^2 \cdot 3^3 \cdot 5\) and \(216=2^3 \cdot 3^3\). Then for each prime that's a factor of either number, use the largest exponent that appears in one of the factorizations. You can also take the product of the two numbers divided by their GCD.
\( \large {{2}^{2}}\cdot {{3}^{3}}\cdot 5 \) Hint: 216 is a multiple of 8.
\( \large {{2}^{2}}\cdot {{3}^{2}}\cdot {{5}^{2}} \) Hint: Not a multiple of 216 and not a multiple of 540.

Question 8
Elena is going to use a calculator to check whether or not 267 is prime. She will pick certain divisors, then find 267 divided by each, and see if she gets a whole number. If she never gets a whole number, then she's found a prime. Which numbers does Elena NEED to check before she can stop checking and be sure she has a prime?
All natural numbers from 2 to 266. Hint: She only needs to check primes -- checking the prime factors of any composite is enough to look for divisors. As a test-taking strategy, the other three choices involve primes, so worth thinking about.
All primes from 2 to 266. Hint: Remember, factors come in pairs (except for square root factors), so she would first find the smaller of the pair and wouldn't need to check the larger.
All primes from 2 to 133. Hint: She doesn't need to check this high. Factors come in pairs, and something over 100 is going to be paired with something less than 3, so she will find that earlier.
All primes from \( \large 2 \) to \( \large \sqrt{267} \). Hint: \(\sqrt{267} \times \sqrt{267}=267\).
Any other pair of factors will have one factor less than \( \sqrt{267} \) and one greater, so she only needs to check up to \( \sqrt{267} \).

Question 9
P is a prime number that divides 240. Which of the following must be true?
P divides 30. Hint: 2, 3, and 5 are the prime factors of 240, and all divide 30.
P divides 48. Hint: P=5 doesn't work.
P divides 75. Hint: P=2 doesn't work.
P divides 80. Hint: P=3 doesn't work.

Question 10
M is a multiple of 26. Which of the following cannot be true?
M is odd. Hint: All multiples of 26 are also multiples of 2, so they must be even.
M is a multiple of 3. Hint: 3 x 26 is a multiple of both 3 and 26.
M is 26. Hint: 1 x 26 is a multiple of 26.
M is 0. Hint: 0 x 26 is a multiple of 26.

Question 11
The chairs in a large room can be arranged in rows of 18, 25, or 60 with no chairs left over. If C is the smallest possible number of chairs in the room, which of the following inequalities does C satisfy?
\( \large C\le 300 \) Hint: Find the LCM.
\( \large 300 < C \le 500 \) Hint: Find the LCM.
\( \large 500 < C \le 700 \) Hint: Find the LCM.
\( \large C>700 \) Hint: The LCM is 900, which is the smallest number of chairs.

Question 12
The least common multiple of 60 and N is 1260. Which of the following could be the prime factorization of N?
\( \large 2\cdot 5\cdot 7 \) Hint: 1260 is divisible by 9 and 60 is not, so N must be divisible by 9 for 1260 to be the LCM.
\( \large {{2}^{3}}\cdot {{3}^{2}}\cdot 5 \cdot 7 \) Hint: 1260 is not divisible by 8, so it isn't a multiple of this N.
\( \large 3 \cdot 5 \cdot 7 \) Hint: 1260 is divisible by 9 and 60 is not, so N must be divisible by 9 for 1260 to be the LCM.
\( \large {{3}^{2}}\cdot 5\cdot 7 \) Hint: \(1260=2^2 \cdot 3^2 \cdot 5 \cdot 7\) and \(60=2^2 \cdot 3 \cdot 5\). In order for 1260 to be the LCM, N has to be a multiple of \(3^2\) and of 7 (because 60 is not a multiple of either of these). N also cannot introduce a factor that would require the LCM to be larger (as in choice b).
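Questions 11 and 12 both come down to computing a least common multiple from prime factorizations; a quick Python check of the two answers (900 chairs, and N = 3²·5·7 = 315) using the standard identity lcm(a, b) = a·b / gcd(a, b):

```python
from math import gcd

def lcm(a, b):
    # lcm(a, b) = a * b / gcd(a, b)
    return a * b // gcd(a, b)

# Question 11: smallest chair count divisible by 18, 25, and 60
chairs = lcm(lcm(18, 25), 60)
print(chairs)  # 900, so C > 700

# Question 12: with N = 3^2 * 5 * 7 = 315, the LCM of 60 and N is 1260
n = 3**2 * 5 * 7
print(lcm(60, n))  # 1260
```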
Question 13
A biology class requires a lab fee, which is a whole number of dollars, and the same amount for all students. On Monday the instructor collected $70 in fees, on Tuesday she collected $126, and on Wednesday she collected $266. What is the largest possible amount the fee could be?
$2 Hint: A possible fee, but not the largest possible fee. Check the other choices to see which are factors of all three numbers.
$7 Hint: A possible fee, but not the largest possible fee. Check the other choices to see which are factors of all three numbers.
$14 Hint: This is the greatest common factor of 70, 126, and 266.
$70 Hint: Not a factor of 126 or 266, so couldn't be correct.

If you found a mistake or have comments on a particular question, please contact me (please copy and paste at least part of the question into the form, as the numbers change depending on how quizzes are displayed). General comments can be left here.
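Several of the questions above rest on greatest common factors and divisibility rules. A short Python sketch verifying two of the answers: the lab-fee GCF from Question 13, and the "ones + 2·tens + 4·hundreds" test for divisibility by 8 used in the Question 3 hints (valid because 10 leaves remainder 2 and 100 leaves remainder 4 when divided by 8):

```python
from functools import reduce
from math import gcd

# Question 13: the largest possible fee is the GCF of the three collections
fee = reduce(gcd, [70, 126, 266])
print(fee)  # 14

# Question 3: since 10 % 8 == 2 and 100 % 8 == 4, a three-digit number with
# digits h, t, o is divisible by 8 exactly when o + 2*t + 4*h is
def div_by_8(n):
    h, t, o = n // 100 % 10, n // 10 % 10, n % 10
    return (o + 2 * t + 4 * h) % 8 == 0

print(div_by_8(320))  # True:  320 is divisible by 40 = 8 * 5
print(div_by_8(420))  # False: 0 + 2*2 + 4*4 = 20, not a multiple of 8
```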
I have a question regarding the physical significance of the canonical energy-momentum tensor $T_\nu ^\mu$ in the context of classical field theory. It is defined as $T_\nu ^\mu = \frac{\partial \mathcal{L}}{\partial ( \partial_\mu \Phi^I)} \partial_\nu \Phi^I - \delta_\nu ^\mu \mathcal{L}$, where $\Phi^I$ is the set of all relevant scalar, vector, or tensor fields in the Lagrangian, and $I$ stands for the corresponding indices.

For example, if we consider the Lagrangian for the free gauge field in electrodynamics, $\mathcal{L}_A = - \frac{1}{4} F_{\mu \nu} F^{\mu \nu}$ with $F_{\mu \nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$, we have $\Phi^I = A_\mu$. Accordingly, the (covariant) canonical energy-momentum tensor is $T_{\mu \nu} = \frac{1}{4}\eta_{\mu \nu} F_{\alpha \beta} F^{\alpha \beta} - F_{\mu \alpha} \partial _ \nu A^ \alpha$. It satisfies $\partial _\mu T_\nu ^\mu = 0$ provided that the system it describes is invariant under translations in spacetime. This is a consequence of Noether's theorem.

My problem: On the one hand, this tensor is used to define the canonical 4-momentum $P_\nu = \int T_{\nu}^0 \, d^3 x$, i.e. it is used to define physical observables of the system. Likewise, the canonical Hamiltonian density is defined by $\mathcal{H}=T_{0}^0$. On the other hand, $T_\nu ^\mu$ is not in general gauge invariant, but physical observables must be gauge invariant. If the gauge field is time independent, i.e. $\partial_0 A^\alpha = 0$ for each $\alpha$, then $T_0 ^0$ (and thus also the energy) can be gauge independent, as is the case for the $T_0 ^0$ of free electrodynamics defined above. However, for a time-dependent gauge field, $T_\nu ^\mu$ is in general not gauge invariant, which implies that the canonical Hamiltonian density is not in general gauge independent and thus not observable.

My question: Is it possible to show that the 4-momentum is always gauge invariant, and thus observable, despite the fact that $T_\nu ^\mu$ is not?
If this is not possible, what even is the significance (and relevance) of $T_\nu ^\mu$ and the canonical 4-momentum?
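To make the gauge dependence concrete, here is my own back-of-the-envelope sketch for the free Maxwell case (not a quoted result, so please correct me if it is off). Under $A_\mu \to A_\mu + \partial_\mu \chi$ the field strength is invariant, so the canonical tensor above shifts by

$$\delta T_\nu ^\mu = -F^{\mu\alpha}\,\partial_\nu \partial_\alpha \chi .$$

For the 4-momentum, using $F^{00}=0$ and integrating by parts in the spatial directions,

$$\delta P_\nu = -\int d^3x\, F^{0i}\,\partial_i \partial_\nu \chi = \int d^3x\, \left(\partial_i F^{0i}\right)\partial_\nu \chi - \oint dS_i\, F^{0i}\,\partial_\nu \chi ,$$

so on shell ($\partial_i F^{0i} = \partial_\mu F^{0\mu} = 0$ for the free field) the bulk term drops, and $\delta P_\nu$ reduces to a surface term that vanishes for fields falling off sufficiently fast at spatial infinity. Whether this kind of argument can be made general is part of what I am asking.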