This question is more of a trick.
Notation: for any formula $\phi$, let $\phi^=$ mean the formula obtained by
merely replacing each occurrence of the membership symbol $\in$ in $\phi$ by the equality symbol $=$, so for example if $\phi$ is the formula $(y \in y)$, then $\phi^=$ is the formula $(y=y)$.
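Since the map $\phi \mapsto \phi^=$ is purely syntactic, it can be sketched (over formulas written as plain strings in this notation) as a one-line substitution; the function name below is only for illustration:

```python
def to_equality_form(phi: str) -> str:
    """phi^= : replace every occurrence of the membership symbol by equality."""
    return phi.replace(r"\in", "=")

print(to_equality_form(r"(y \in y)"))  # -> (y = y)
```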
We assume full Extensionality throughout this exposition.
Notice this naive-looking Comprehension axiom scheme:
If $\phi$ is a formula in the first order language of set theory (i.e., $\sf FOL(=,\in)$), in which the symbol $``x"$ doesn't occur free, then: $$[\exists y (\phi^=) \wedge \exists y ((\neg \phi)^=) \to \exists x \forall y (y \in x \iff \phi)]$$ is an axiom.
Axiom of Multiplicity: $\exists x,y: x \neq y $
This theory has the appearance of an inconsistent theory. However, a proof of this inconsistency keeps eluding me!
Is this theory consistent?
The point is that this theory is very naive. The antecedent of comprehension is an extremely simple syntactical procedure, more of a trick really! In other words, it's a very naive kind of restriction on the original unrestricted naive comprehension.
The general expectation for such maneuvers is inconsistency!?
This theory [if consistent] has a universal set. So if consistent, it might be interpretable in one of the fragments of $\sf NF$? It would be nice to see if that is the case!
The $(M+1)$ peak is often considered in the high-resolution mass spectra of organic molecules, as it reveals the number of carbon atoms in the sample. In general, for the mass spectrum of an organic molecule containing $n$ carbon atoms and no heteroatoms, it is known that the ratio of the size of the $M$ to $(M+1)$ peaks is $98.9 : 1.1 \times n $, since the natural relative abundance of $^{13}$C is $ 1.1$%. It is mentioned on this site that the $(M+2)$ peak is statistically insignificant. However, I believe that only applies to organic molecules with a relatively small number of carbon atoms, and this peak would become significant when considering the mass spectra of larger organics. Using simple mathematics, I derived that the ratio of the $M$ to $(M+2)$ peak is $98.9^{2} : 1.1^{2} \times {}_nC _2 $. I would like to verify if this is correct. If it is not, could someone then correct it by posting an answer?
You are correct on all accounts.
To a very good approximation, molecules can be thought of as being made of elements (with their respective isotope distributions) combining completely independently. You can think of it like rolling multiple dice at once. This means that a simple multinomial distribution will describe this problem mathematically.
Let's start with something easy and consider the hypothetical molecule $\ce{C_5}$. Furthermore, let us consider that the only carbon isotopes with significant natural occurrence are $\ce{^{12}C}$ (98.9%) and $\ce{^{13}C}$ (1.1%). We can find all of the isotopic peaks and their relative abundances by then expanding the binomial $(0.989\times m[^{12}C] + 0.011\times m[^{13}C])^5$, where $m[^{12}C]$ and $m[^{13}C]$ denote the exact masses of the carbon-12 and carbon-13 isotopes, respectively. Expanding the binomial yields:
$\begin{equation} \begin{aligned} (0.989\times m[^{12}C] + 0.011\times m[^{13}C])^5 ={} & \ \ \ \ \ \binom {5} {0}(0.989\times m[^{12}C])^5 \\ & + \binom {5} {1}(0.989\times m[^{12}C])^4 \times (0.011\times m[^{13}C]) \\ & + \binom {5} {2}(0.989\times m[^{12}C])^3 \times (0.011\times m[^{13}C])^2 \\ & + \binom {5} {3}(0.989\times m[^{12}C])^2 \times (0.011\times m[^{13}C])^3 \\ & + \binom {5} {4}(0.989\times m[^{12}C]) \times (0.011\times m[^{13}C])^4 \\ & + \binom {5} {5} (0.011\times m[^{13}C])^5 \\ \end{aligned} \end{equation}$
Calculating the coefficients in each term:
$\begin{equation} \begin{aligned} (0.989\times m[^{12}C] + 0.011\times m[^{13}C])^5 ={} & \ \ \ \ \ 0.946 \times (m[^{12}C])^5 \\ & + 0.0526\times (m[^{12}C])^4 \times m[^{13}C] \\ & + 0.00117 \times (m[^{12}C])^3 \times (m[^{13}C])^2 \\ & + 0.0000130 \times (m[^{12}C])^2 \times (m[^{13}C])^3 \\ & + 7.24\times 10^{-8}\times m[^{12}C] \times ( m[^{13}C])^4 \\ & + 1.61\times 10^{-10} \times (m[^{13}C])^5 \\ \end{aligned} \end{equation}$
From the expansion, we see that 94.6% of all $\ce{C_5}$ molecules contain only carbon-12 (the lowest possible mass for the molecule), and almost all of the rest (5.3% out of the remaining 5.4%) is accounted for by molecules that contain a single carbon-13 atom. Only about 0.1% of $\ce{C_5}$ molecules contain two or more carbon-13 atoms.
But what happens in very large molecules? Intuitively, if there are many atoms, you would expect a higher chance of there being at least one less common isotope in the mix. Let's see the first few terms for the molecule $\ce{C_100}$:
$\begin{equation} \begin{aligned} (0.989\times m[^{12}C] + 0.011\times m[^{13}C])^{100} ={} & \ \ \ \ \ \binom {100} {0}(0.989\times m[^{12}C])^{100} \\ & + \binom {100} {1}(0.989\times m[^{12}C])^{99} \times (0.011\times m[^{13}C]) \\ & + \binom {100} {2}(0.989\times m[^{12}C])^{98} \times (0.011\times m[^{13}C])^2 \\ & + \ ...\\ \end{aligned} \end{equation}$
Calculating the coefficients:
$\begin{equation} \begin{aligned} (0.989\times m[^{12}C] + 0.011\times m[^{13}C])^{100} ={} & \ \ \ \ \ 0.331 \times (m[^{12}C])^{100} \\ & + 0.368 \times (m[^{12}C])^{99} \times (m[^{13}C]) \\ & + 0.203 \times (m[^{12}C])^{98} \times (m[^{13}C])^2 \\ & +\ ...\\ \end{aligned} \end{equation}$
Well that's interesting. Now only 33.1% of the molecules contain only carbon-12 atoms, and in fact more molecules contain exactly one carbon-13 atom, at 36.8% of the total. Even molecules with two carbon-13 atoms are quite abundant, at 20.3%.
Indeed, peaks containing rarer isotopes eventually dominate. For the huge molecule $\ce{C_10000}$, the strongest mass spectrum signal would come from molecules containing 110 carbon-13 atoms, corresponding to 3.8% of the total, while a measly $9.2\times 10^{-47}\%$ of molecules contain only carbon-12. This happens because when $n$ is large, the term $\binom {n} {k}$ grows very quickly as $k$ rises from zero, overwhelming the increase in the exponent of the rarer isotope. You can see this behaviour quite nicely in this sequence of mass spectra of molecules with increasing size.
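The figures quoted for $\ce{C_10000}$ can be checked with a short script; computing the binomial probabilities in log space avoids overflow for such a large $n$ (the helper below is a sketch, not part of any library):

```python
import math

def binom_pmf(n: int, k: int, p: float) -> float:
    """Binomial probability mass function, computed in log space to
    avoid overflow/underflow for very large n."""
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log(1 - p))
    return math.exp(log_pmf)

n, p13 = 10_000, 0.011  # C_10000 and the natural abundance of carbon-13

# Most probable number of carbon-13 atoms (searching the first few hundred k)
mode = max(range(300), key=lambda k: binom_pmf(n, k, p13))
print(mode)                     # -> 110
print(binom_pmf(n, mode, p13))  # ~ 0.038, i.e. 3.8 % of molecules
print(binom_pmf(n, 0, p13))     # ~ 9.2e-49, the all-carbon-12 fraction
```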
To calculate the specific $M/(M+2)$ ratio for a molecule containing only $n$ carbon atoms, all you need is to take the ratio of the first and third terms in the binomial:
$\begin{equation} \begin{aligned} (0.989\times m[^{12}C] + 0.011\times m[^{13}C])^n ={} & \ \ \ \ \ \color{#0000ff}{ \binom {n} {0}(0.989\times m[^{12}C])^n} \\ & + \binom {n} {1}(0.989\times m[^{12}C])^{n-1} \times (0.011\times m[^{13}C]) \\ & + \color{#0000ff}{\binom {n} {2}(0.989\times m[^{12}C])^{n-2} \times (0.011\times m[^{13}C])^2} \\ & +\ ...\\ \end{aligned} \end{equation}$
The ratio is then:
$$\frac{\binom {n} {0}0.989^n}{\binom {n} {2}0.989^{n-2} \times 0.011^2}=\frac{2\times 0.989^2}{n(n-1)\times 0.011^2}$$
Technically this only holds if there are no other elements which contain multiple isotopes, though it will hold approximately if the other elements only have very rare alternate isotopes, such as hydrogen (99.98% hydrogen-1, 0.02% hydrogen-2).
As a last curiosity, all of the above extends to analysing more complicated molecules. For example, glucose ($\ce{C6H12O6}$) will have a mass spectrum exactly described by the expression:
$$(0.989\times m[^{12}C] + 0.011\times m[^{13}C])^6 \times (0.9998\times m[^{1}H] + 0.0002\times m[^{2}H])^{12} \times (0.9976\times m[^{16}O] + 0.0004\times m[^{17}O] + 0.0020\times m[^{18}O])^6$$
Happy expanding!
Let’s assume your compound is $\ce{C_nH_xO_y}$. Thankfully, both hydrogen and oxygen are elements that only have one significant naturally occurring isotope. Therefore, we can treat the entire contribution of $\ce{H_xO_y}$ as a constant $c$. All we need to answer is how large the $M+1$ and $M+2$ peaks are given the number of carbons, $n$. (This treatment is not entirely correct, as highlighted by orthocresol, but it is close enough for my purposes.)
Assuming you do not have any isotope enrichments, each carbon will
independently have a chance of either being $\ce{^12C}$ or $\ce{^13C}$ (again, we will ignore all other isotopes such as $\ce{^14C}$). The independence is the big key word here. We can use general principles of stochastics to calculate the result.
The probability that all carbon atoms of one molecule of your compound are $\ce{^12C}$ is: $$P(\ce{^12C_nH_xO_y}) = 0.989^n$$ The $M+1$ peak corresponds to exactly one atom being $\ce{^13C}$. Again, the principles of stochastics apply: $$P(\ce{^12C_{n-1}^13C_1H_xO_y}) = \binom{n}{1} \times 0.989^{n-1} \times 0.011$$ And finally for the $M+2$ peak: $$P(\ce{^12C_{n-2}^13C_2H_xO_y}) = \binom{n}{2} \times 0.989^{n-2} \times 0.011^2$$
Using this, we can plot what the probability of each peak is – and use that as their relative heights.
We see that from $n=90$ the $M+1$ peak is actually larger than the $M$ peak. From $n=128$, even the $M+2$ peak will be larger than the $M$ peak. And from $n=181$ the $M+2$ peak becomes the largest of these three.
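These crossover values can be verified with a few lines of code; `peak(n, k)` below is an assumed helper name for the relative abundance of the peak with $k$ carbon-13 atoms:

```python
from math import comb

p12, p13 = 0.989, 0.011  # natural abundances of carbon-12 and carbon-13

def peak(n: int, k: int) -> float:
    """Relative abundance of the peak with k carbon-13 atoms in a molecule
    containing n carbons (and no other polyisotopic elements)."""
    return comb(n, k) * p12 ** (n - k) * p13 ** k

# Smallest n at which each crossover occurs
n_m1_beats_m  = next(n for n in range(1, 1000) if peak(n, 1) > peak(n, 0))
n_m2_beats_m  = next(n for n in range(2, 1000) if peak(n, 2) > peak(n, 0))
n_m2_beats_m1 = next(n for n in range(2, 1000) if peak(n, 2) > peak(n, 1))
print(n_m1_beats_m, n_m2_beats_m, n_m2_beats_m1)  # -> 90 128 181
```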
Of course, further peaks will also start appearing, meaning that the mass spectra of very large molecules will become difficult to analyse.
Now, the numbers from which $M$ is no longer the principal peak are quite large. Unless you are synthesising maitotoxin, you will probably not need to resort to any analysis involving more than the $M+1$ peak.
While reading some articles on formal proofs (see also my previous question on cstheory about the length of ZFC proofs versus human written proofs), I came up with this apparent paradox.
Let $M_{const}$ be a program that given Turing machine $M$ checks if there is a short ZFC proof $\Gamma$ of length $|\Gamma| \leq |M|^2$ that $M$ runs in constant time.
Program M_const( M )
    enumerate all strings S of length <= |M|^2
        verify if S is a proof of "M in O(1)"
        if it is a valid proof then halt and accept
    if no valid proof is found halt and reject
Now, we can build $M_{paradox}$ that knows its own code (by the recursion theorem) and on input $x$ first simulate $M_{const}( M_{paradox} )$; if it accepts then loop from $1$ to $|x|$ (so it falls in $O(n)$), otherwise halts (so it falls in $O(1)$).
Program M_paradox( x )
    String M = self_code()       // ok by the recursion theorem
    simulate M_const( M )        // simulate M_const on M_paradox
    if it accepts then
        for i = 1 to |x| do nothing   // -> O(n)
    otherwise halt                    // -> O(1)
    some dummy unused code here       // see below
It is clear (and hopefully provable in ZFC) that:
1. $M_{const}$ always halts and is correct;
2. if $M_{const}( M_{paradox} ) = Yes$ then $M_{paradox} \notin O(1)$ by construction, so we have a contradiction;
3. so $M_{const}(M_{paradox}) = No $;
4. and we can have: (a) $M_{paradox} \notin O(1)$, OR (b) $M_{paradox} \in O(1)$ AND there is not a short proof of it;
but case $(a)$ cannot hold by construction; so ...
5. $M_{paradox} \in O(1)$ AND there is not a short proof of it;
But steps 1–4 can be formalized in ZFC and (unless I'm missing something) the length of that formalization depends only linearly on $|M_{paradox}|$; so we can add some dummy code to $M_{paradox}$ until such a ZFC proof is shorter than $| M_{paradox}|^2$ (we can use the fact that the runtime of a Turing machine doesn't change if we add unused states; it only affects the self-representation). So we get a contradiction with point 5, which says that there is not a short proof of $M_{paradox} \in O(1)$ ???
Q1. What is the output of $M_{const}( M_{paradox} )$? Update: There was another question Q2 here, but I decided to post it as a new "Part II" question to avoid confusion.
Wind is not an acceleration, but the drag due to the wind is a force applied to the body. This results in an acceleration according to Newton's second law. All the forces or accelerations need to be added as vectors to find the magnitude and direction of the total force or acceleration.
Let $\hat{y}$ be the upward direction and $\hat{x}$ be the rightward direction. Then gravity is a downward acceleration $-g\,\hat{y}$ where $g=10\,m/s^2$.
The drag force on a body is $$\vec{F}_d=C_d A \frac{1}{2}\rho v^2\hat{v}$$ where $A$ is the relevant cross-sectional area, $\rho$ is the density of the air, $v$ is the velocity of the air
relative to the body, and $\hat{v}$ is the unit vector in the direction of the relative wind (not necessarily the direction of the wind as seen from the ground). $C_d$ is the drag coefficient, a unitless, empirically determined fudge factor that is different for different shapes. You could just set $C_d=1$ if this is a very rough simulation.
If your wind is blowing to the side with velocity $\vec{v}_{wind}=v_{wind} \hat{x}$, and your body is currently moving with velocity $\vec{v}_{body}=v_x\hat{x}+v_y\hat{y}$, then the relative velocity to use in the drag equation is $\vec{v}_{wind}-\vec{v}_{body}=(v_{wind}-v_x)\hat{x}-v_y\hat{y}$, the magnitude of the relative wind is $v=\sqrt{(v_{wind}-v_x)^2+v_y^2}$, the direction of the relative wind is $\hat{v}=\frac{v_{wind}-v_x}{v}\hat{x}-\frac{v_y}{v}\hat{y}$ and the drag is $\vec{F}_d=C_d A\frac{1}{2}\rho v^2\hat{v}=C_d A\frac{1}{2}\rho \sqrt{(v_{wind}-v_x)^2+v_y^2}((v_{wind}-v_x)\hat{x}-v_y\hat{y})$.
According to Newton's 2nd law, the total force is $\sum \vec{F}=m \vec{a}$. The force due to the gravitational acceleration is $\vec{F}_g=-mg\hat{y}$. So doing a vector summation of the gravitational and drag forces, we get $$\sum \vec{F}=\vec{F_g}+\vec{F_d}\\=C_d A\frac{1}{2}\rho \sqrt{(v_{wind}-v_x)^2+v_y^2}(v_{wind}-v_x)\hat{x}+(-mg-C_d A\frac{1}{2} \rho \sqrt{(v_{wind}-v_x)^2+v_y^2}v_y)\hat{y}$$ Then from Newton's 2nd law the total acceleration of the body is $$\vec{a}=C_d A\frac{1}{2}\frac{\rho}{m} \sqrt{(v_{wind}-v_x)^2+v_y^2}(v_{wind}-v_x)\hat{x}+(-g-C_d A\frac{1}{2}\frac{\rho}{m} \sqrt{(v_{wind}-v_x)^2+v_y^2}v_y)\hat{y}$$
Your simulator will then integrate this total acceleration.
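As a rough sketch of such a simulator, here is a forward-Euler integration of the equations above; all numerical parameters (mass, area, wind speed, time step) are made-up illustrative values:

```python
import math

# Illustrative parameters (assumed values, not from the answer above)
m, g = 1.0, 10.0             # mass (kg), gravitational acceleration (m/s^2)
Cd, A, rho = 1.0, 0.1, 1.2   # drag coefficient, cross-section (m^2), air density (kg/m^3)
v_wind = 10.0                # horizontal wind speed (m/s), blowing in +x

k = 0.5 * Cd * A * rho       # lumped drag constant

x = y = vx = vy = 0.0        # body released from rest at the origin
dt = 0.001
for _ in range(5000):        # simulate 5 seconds
    # relative wind and its magnitude
    rx, ry = v_wind - vx, -vy
    speed = math.hypot(rx, ry)
    # F_d = k |v|^2 v_hat = k |v| (rx, ry); Newton's 2nd law adds gravity
    ax = k * speed * rx / m
    ay = -g + k * speed * ry / m
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

# drag drives vx toward the wind speed; vy approaches terminal velocity
print(round(vx, 2), round(vy, 2))
```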
An advanced way is to think of the cyclotomic polynomial $\Phi_{p-1}(x)$, which has the primitive $(p-1)$th roots of unity as roots.
But you can show that in general:$$\Phi_{4m}(x)=\Phi_{2m}(x^2)$$
This can be seen by using the rule:
$$\Phi_n(x)=\prod_{d\mid n}\left(x^d-1\right)^{\mu(n/d)}$$
But when $n/d$ is divisible by $4$, $\mu(n/d)=0$ so:
$$\Phi_{4m}(x)=\prod_{d\mid 2m}\left(x^{2d}-1\right)^{\mu(2m/d)}=\Phi_{2m}(x^2)$$
This means that the roots of $\Phi_{4m}(x)$ sum to zero: since it is a polynomial in $x^2$, its coefficient of the second-highest power of $x$ vanishes.
This same argument shows that:
$$\Phi_{mq^2}(x)=\Phi_{mq}(x^q)$$
So for a prime $p\equiv 1\pmod{q^2}$ for some $q>1$, the sum of the generators modulo $p$ is zero.
So it is true for $p=19,$ for example, which has generators $2,2^5=13,2^7\equiv14,2^{11}\equiv15,2^{13}\equiv3,2^{17}\equiv10$, and $$2+13+14+15+3+10=57$$ is divisible by $19.$
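A brute-force check of this example is straightforward (the helper below is only an illustration):

```python
def primitive_roots(p: int) -> list[int]:
    """Brute-force the generators of the multiplicative group mod a prime p."""
    def order(a: int) -> int:
        k, x = 1, a
        while x != 1:
            x = x * a % p
            k += 1
        return k
    return [a for a in range(2, p) if order(a) == p - 1]

roots = primitive_roots(19)
print(sorted(roots))    # -> [2, 3, 10, 13, 14, 15]
print(sum(roots) % 19)  # -> 0, since 18 = 2 * 3^2 is divisible by a square
```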
This leads to two conjectures:
The sum of the generators modulo $p$ is $0$ if and only if $\mu(p-1)=0,$ that is, if $p-1$ is divisible by the square of a prime.
and
Define $Q$ to be the set of primes whose generators do not add up to $0$, and let $$q(x)=\left|\{p\in Q\mid p\leq x \}\right|.$$ Then: $$\frac{q(x)}{\pi(x)}\to \prod_p\left(1-\frac{1}{p(p-1)}\right)$$
A few preliminary remarks:
1) Complexifications of Banach lattices are in fact a special case of the more general concept of
complexifications of real Banach spaces.
2) Most books and articles about complex Banach lattices which contain spectral theoretic results focus on
positive operators (or, say, generators of positive semigroups) which are, of course, real.
3) One reason for 2) might be that real operators can actually be defined on every complexification of a real Banach space (no matter whether a lattice structure is present), so the question about "spectral properties of real operators" belongs to the realm of complexifications of real Banach spaces rather than to the realm of complex Banach lattices.
4) While most books on Banach lattices contain a chapter on complex Banach lattices (which is, unfortunately, very brief in most cases), there does not seem to exist much literature on complexifications of real Banach spaces (although the topic often seems to be considered as some kind of "standard folklore" among functional analysts).
The only article I'm aware of which contains a rather extensive treatment of complexifications of real Banach spaces is "G. A. Muñoz, Y. Sarantopoulos, and A. Tonge: Complexifications of real Banach spaces, polynomials and multilinear maps. Studia Math., 134(1):1–33, 1999." Unfortunately, though, spectral theory is not treated there.
References to learn about spectral properties of real operators:
If you primarily want to learn about the spectral properties of real operators, you can find some information in the following references (I apologize for the self-advertisement, but these are really the only references I know for this topic):
[1] Appendix C in my PhD thesis contains a thorough treatment of complexifications. Spectral theoretic aspects are discussed in Section C.3, and a few spectral theoretic properties of real operators are discussed in Proposition C.3.2 (the fact that the operators considered in this proposition are exactly the class of real operators follows from Proposition C.1.6). However, I should add that Appendix C contains almost no proofs, for the following reason (which reflects, of course, only my personal point of view): when it comes to complexifications, it is in most cases more difficult to find the correct statements than to prove them (proving them is very straightforward in most cases).
[2] You can also find similar information in Appendix A of the arXiv version 1 of my paper "Spectral and Asymptotic Properties of Contractive Semigroups on Non-Hilbert Spaces" (but in version 2, which was eventually published, I removed this appendix and replaced it with a very brief discussion of complexifications at the beginning of the paper in response to a request in a referee's report).
References to quote:
The above references [1] and [2] are certainly not very well-suited in case that you need a reference to quote in a paper. Indeed, [2] is only a preprint which was eventually published without the information relevant here, and [1] contains almost no proofs (moreover, one might also question whether it is a particularly good idea to refer to the appendix of a PhD thesis for information about a quite classical topic in functional analysis - even though this topic does not seem to be particularly well represented in the literature).
Unfortunately, I do not know any other references about the spectral theory of real operators (and I have severe doubts whether such a reference exists).
So my suggestion is: If you need spectral properties of real operators in a paper, simply state them either in the introduction or in an appendix of the paper. You may include proofs if you feel they are necessary; but in many cases those properties are so elementary that you probably won't need to include proofs.
Additional remark (biased by my personal opinion):
I think it is reasonable to conclude that a survey article about complexifications of real Banach spaces which complements the aspects discussed in the article of Muñoz, Sarantopoulos and Tonge would be a very useful source of reference. Now, all that is needed is a volunteer to write it and a journal willing to publish it...
Edit in response to the OP's comment:
Here are a few facts with proofs. Throughout, let $E$ be a complexification of a real Banach space $E_{\mathbb{R}}$.
Note that an
everywhere defined linear operator $T$ on $E$ is real if and only if $T(E_{\mathbb{R}}) \subseteq E_{\mathbb{R}}$.
Proposition 1. Let $A: E \supseteq D(A) \to E$ be a linear operator which is real. If $z \in D(A)$, then $\overline{z}$ is also in $D(A)$ and we have $A\overline{z} = \overline{Az}$.
Proof. We may write $z$ as $z = x+iy$, where $x,y \in E_{\mathbb{R}}$. As $A$ is real, we have $x,y \in D(A)$ and $Ax$ as well as $Ay$ is in $E_{\mathbb{R}}$. Hence,\begin{align*} A\overline{z} = A(x-iy) = Ax - iAy = \overline{Ax+ iAy} = \overline{Az}.\end{align*}Note that the third equality above uses the fact that both $Ax$ and $Ay$ are elements of $E_{\mathbb{R}}$.
Proposition 2. Let $A: E \supseteq D(A) \to E$ be a linear operator which is real. If $\lambda$ is a real number in the resolvent set of $A$, then the resolvent $R(\lambda,A) := (\lambda - A)^{-1}$ is real, too.
Proof. Let $x \in E_{\mathbb{R}}$ and set $y := R(\lambda,A)x$. Then $y$ is an element of $D(A)$ and it can be written as $y = a + ib$ for unique $a,b \in E_{\mathbb{R}}$. Since $A$ is real, we conclude that $a,b \in D(A)$. Clearly, $x = (\lambda - A)y = (\lambda-A)a + i (\lambda - A)b$. Since $A$ is a real operator and $\lambda$ is a real number, both vectors $(\lambda - A)a$ and $(\lambda - A)b$ are elements of $E_{\mathbb{R}}$. As the decomposition of each vector in $E$ into its real and imaginary part is unique, we conclude that $x = (\lambda - A)a$. Hence, $R(\lambda,A)x = a \in E_{\mathbb{R}}$. This shows that $R(\lambda,A)$ is indeed real.
Proposition 3. Let $A: E \supseteq D(A) \to E$ be a linear operator which is real.
(a) If $\lambda \in \mathbb{C}$ is an eigenvalue of $A$ with eigenvector $z \in E$, then $\overline{\lambda}$ is an eigenvalue of $A$ with eigenvector $\overline{z}$.
(b) If $\lambda \in \mathbb{C}$ is a spectral value of $A$, then so is $\overline{\lambda}$.
Proof. (a) According to Proposition 1 we have $\overline{z} \in D(A)$ and $A\overline{z} = \overline{Az} = \overline{\lambda z} = \overline{\lambda} \overline{z}$.
(b) Assume that $\overline{\lambda}$ is not a spectral value of $A$. Then $A$ is closed, so we only have to show that $\lambda - A$ is a bijective mapping from $D(A)$ to $E$. Clearly $\lambda - A$ is injective, since otherwise it would follow from (a) that $\overline{\lambda} - A$ was not injective. In order to show that $\lambda - A$ is surjective, let $z \in E$. Since $\overline{\lambda} - A$ is surjective, we can find a vector $c \in E$ such that $(\overline{\lambda} - A)c = \overline{z}$. Hence, we obtain\begin{align*} \overline{(\lambda - A)\overline{c}} = \overline{\lambda} c - Ac = (\overline{\lambda} - A) c = \overline{z}\end{align*}(where we used Proposition 1 for the first equality), so $(\lambda - A)\overline{c} = z$. This proves that $\lambda - A$ is indeed surjective.
Remark 4. Let $A: E \supseteq D(A) \to E$ be a linear operator which is real and let $\lambda$ be a complex number. Then the following assertions hold:
(a) The number $\lambda$ is an eigenvalue of $A$ if and only if $\overline{\lambda}$ is an eigenvalue of $A$.
(b) The number $\lambda$ is a spectral value of $A$ if and only if $\overline{\lambda}$ is a spectral value of $A$.
Proof. The "only if" implications are the content of Proposition 3, and the converse implications also follow from Proposition 3 by applying the proposition to the number $\overline{\lambda}$ instead of $\lambda$.
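In finite dimensions, Remark 4 reduces to the familiar fact that the eigenvalues of a real matrix come in conjugate pairs. This can be illustrated numerically (a sketch, with an arbitrarily chosen matrix):

```python
import numpy as np

# A real matrix is a real operator on the complexification C^3 of R^3,
# so by Remark 4 its spectrum must be invariant under complex conjugation.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 2.0]])

eig = np.linalg.eigvals(A)  # eigenvalues: i, -i (a conjugate pair) and 2

# check conjugation invariance of the spectrum numerically
for lam in eig:
    assert np.min(np.abs(eig - np.conj(lam))) < 1e-10
```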
Remark 5. The arguments in the proof of Proposition 3 actually show that we have\begin{align*} \overline{R(\lambda,A)z} = R(\overline{\lambda},A) \overline{z}\end{align*}for each $\lambda$ in the resolvent set of $A$ and each $z \in E$. This can be considered as a generalisation of Proposition 2.
Theorem 6. Let $A: E \supseteq D(A) \to E$ be a linear operator which is real and let $\sigma$ be a compact subset of the spectrum $\sigma(A)$ such that $\sigma(A) \setminus \sigma$ is closed. Let $P$ denote the spectral projection associated to $\sigma$.
If $\sigma$ is invariant under complex conjugation (i.e. $\overline{\lambda} \in \sigma$ for all $\lambda \in \sigma$), then the operator $P$ is real.
Sketch of the proof. By Proposition 3 (or Remark 4) the entire spectrum $\sigma(A)$ is conjugation invariant. Since $\sigma$ is compact and $\sigma(A) \setminus \sigma$ is closed, we can find a closed (and smooth) cycle $\gamma$ in $\mathbb{C}$ which encircles each element of $\sigma$ exactly once and no element of $\sigma(A) \setminus \sigma$. As $\sigma$ and $\sigma(A) \setminus \sigma$ are conjugation invariant, we can choose $\gamma$ to be conjugation invariant, too, meaning that walking along $\overline{\gamma}$ is the same as walking along $\gamma$ in the opposite direction. By definition of the spectral projection we have\begin{align*} P = \frac{1}{2\pi i} \int_{\gamma} R(\mu,A) \, d\mu.\end{align*}Applying this to a vector $z \in E$ and employing Remark 5 we conclude that\begin{align*} \overline{Pz} = - \frac{1}{2\pi i} \int_\gamma R(\overline{\mu}, A) \overline{z} \, \overline{d\mu} = \frac{1}{2\pi i} \int_\gamma R(\mu,A) \overline{z} \, d\mu = P\overline{z}.\end{align*}If $z \in E_{\mathbb{R}}$ this implies that $\overline{Pz} = P\overline{z} = Pz$, so $Pz \in E_{\mathbb{R}}$. Hence, $P$ is real.
Of course, one can prove many related results, but I hope that the above arguments suffice to give you an idea of how things usually work in the spectral theory of real operators.
The answer to this question suggests that one can solve the measurement problem by decoherence. If I understand it correctly, decoherence appears when the quantum state interacts with the measurement device (and possibly the environment).
However, I was wondering about locality. Bell tests tell us that measurements can change the quantum state globally (although one cannot transfer information, one can still measure correlations).
I was wondering how decoherence can act non-locally. So far, all matter interacts only locally (in QFT one assumes this explicitly). Why can the measurement device then affect the wavefunction outside of the light cone?
EDIT: Thank you all for your answers. Maybe I should rephrase the question (Maybe also the title is not suitable, but I didn't have any better idea). This question is not about the Bell test itself. I completely understood that a measurement changes the wave function globally, but this cannot be used to transfer information (because the reduced density matrix of system B is not affected by a measurement on system A).
This question is more about the measurement procedure itself. Decoherence tries to explain the measurement process in way like: From the microscopic point of view the wave function completely behaves according to the Schrödinger equation, however from the macroscopic point of view it looks like (what we call) "a measurement" happened.
I was wondering about the following: If I can explain the measurement procedure completely using ordinary quantum mechanics, how can a measurement procedure change the wave function outside of the lightcone?
Unfortunately I cannot explain this using the standard Schrödinger equation $i\partial_t |\psi\rangle = (-\frac{1}{2m}\nabla^2 + V(x)) |\psi\rangle$ since it is non relativistic.
So as a toy model let's use the Klein-Gordon equation for our particle wave function instead: $(\partial_\mu\partial^\mu + m^2) |\psi\rangle = 0$. To do a measurement we need to couple it to a measurement system. This will introduce a "source term" on the right side, something like this: $$(\partial_\mu\partial^\mu + m^2) |\psi\rangle = \hat{A}|\text{detector}\rangle $$ where $\hat{A}$ is some coupling operator and $|\text{detector}\rangle$ the detector state. One can solve this equation using the retarded Greens function. This tells me that the interaction with the detector will only affect the wave function $|\psi\rangle$ inside the lightcone. So in this toy model the decoherence process cannot introduce any correlations between spacelike separated regions. But these correlations exist (they are measured in the bell experiments).
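The light-cone claim for this toy model can even be checked numerically: in a leapfrog discretisation of the 1+1-dimensional Klein-Gordon equation with a localised source, information travels at most one grid cell per time step, so the field outside the (numerical) cone stays exactly zero. All discretisation parameters below are illustrative assumptions:

```python
import numpy as np

# 1+1D Klein-Gordon field: u_tt = u_xx - m^2 u + s(t, x), leapfrog scheme.
# With dt < dx the numerical domain of dependence spreads at most one grid
# cell per step, so a source at x0 cannot affect points outside that cone.
nx, steps = 401, 100
dx, dt, m2 = 1.0, 0.5, 1.0
x0 = nx // 2

u_prev = np.zeros(nx)
u = np.zeros(nx)
for t in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    src = np.zeros(nx)
    if t < 20:                  # source switched on only briefly, at x0
        src[x0] = np.sin(0.3 * t)
    u_next = 2 * u - u_prev + dt**2 * (lap - m2 * u + src)
    u_prev, u = u, u_next

# everything farther than `steps` cells from the source is still exactly zero
outside = np.abs(np.arange(nx) - x0) > steps
print(np.abs(u[outside]).max())  # -> 0.0
```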
Of course, this was only a toy model. However, all other matter fields I encountered so far (like the Dirac field or the photon field) also satisfy the Klein-Gordon equation (I have no idea about the strong or weak interaction, but I guess it's the same there). This means that whatever you do with the field at one point in spacetime, it will only affect the wavefunction inside the light cone. Therefore (from a microscopic view) one cannot create any correlations between spacelike separated regions. How is it then possible to create these correlations at the macroscopic level?
Decision Tree Learning is a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
Decision Tree is a simple representation for classifying examples. Assume that all of the input features have finite discrete domains and that there is a single target feature called the classification. Each element of the domain of the classification is called a class. A decision tree (or classification tree) is a tree in which each internal node is labelled with an input feature. The arcs coming from a node labelled with an input feature are labelled with the possible values of that feature, or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labelled with a class or a probability distribution over the classes.

A tree can be learned by splitting the source set into subsets based on an attribute-value test according to some criterion. This process is repeated on each derived subset in a recursive manner, called recursive partitioning. The recursion terminates when the subset satisfies some stopping condition (explained later), for instance when all samples at a node have the same value of the target variable, or when splitting no longer adds value to the predictions. This process of top-down induction of decision trees is an example of a greedy algorithm.
Decision trees are used in two main categories:

Classification Tree: when the predicted outcome is the class to which the data belongs.
Regression Tree: when the predicted outcome can be considered a real number.

Two notable ensemble methods build on decision trees:

Boosted Tree: incrementally building an ensemble by training each new instance to emphasize the training instances previously mismodelled. A typical example is AdaBoost.
Bootstrap Aggregated: an early ensemble method which builds multiple decision trees by repeatedly resampling training data with replacement, and voting the trees for a consensus prediction, e.g. Random Forest.
The term Classification And Regression Tree (CART) is quite popular. Common splitting criteria are: Gini Impurity, Entropy, Information Gain.

Nice material:
http://people.revoledu.com/kardi/tutorial/DecisionTree/how-to-measure-impurity.htm
http://www.bogotobogo.com/python/scikit-learn/scikt_machine_learning_Decision_Tree_Learning_Informatioin_Gain_IG_Impurity_Entropy_Gini_Classification_Error.php
Algorithms for constructing decision trees usually work top-down, choosing at each step the variable that best splits the set of items. Different algorithms use different metrics for measuring "best"; these generally measure the homogeneity of the target variable within the subsets. Some examples are given above.
1. Gini Impurity : wikipedia

Used by the CART algorithm for classification trees, Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labelled if it were randomly labelled according to the distribution of labels in the subset. The Gini impurity can be computed by summing the probability $p_i$ of an item with label $i$ being chosen times the probability $\sum_{k\neq i}p_k = 1 - p_i$ of a mistake in categorising that item. It reaches its minimum (zero) when all cases in the node fall into a single target category.
To compute Gini impurity for a set of items with $J$ classes, suppose $i\in$ {$1,2,...,J$}, and let $p_i$ be the fraction of items labelled with class $i$ in the set.
$$I_G(p) = \sum^J_{i=1}p_i \sum_{k \neq i}p_k = \sum^J_{i=1} p_i(1-p_i) = \sum^J_{i=1}(p_i - p_i^2) = 1 - \sum^J_{i=1}p_i^2$$
2. Entropy : wikipedia
The basic idea of information theory is the more one knows about a topic, the less new information one is apt to get about it. If an event is very probable, it is no surprise when it happens and thus provides little new information. Inversely, if the event was improbable, it is much more informative that the event happened. Since the measure of information entropy associated with each possible data value is the negative logarithm of the probability mass function for the value, when the data source has a lower-probability value, the event carries more "information" than when the source data has a higher-probability value.
H(X) = E[I(X)] = E[-\ln(P(X))]
H(X) = \sum^n_{i=1}P(x_i)I(x_i) = -\sum^n_{i=1}P(x_i)\log P(x_i)
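A minimal sketch of this formula (the function name is my own; log base 2 gives entropy in bits, and $0 \log 0$ is taken as $0$):

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum p_i * log2(p_i) over a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 bit: a fair coin is maximally uncertain
print(entropy([1.0]))       # 0.0: a certain outcome carries no information
```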
Using a decision tree algorithm, we start at the tree root and split the data on the feature that results in the largest information gain (IG). We repeat this splitting procedure at each child node until the leaves are pure, meaning that the samples at each node all belong to the same class. However, this can result in a very deep tree with many nodes, which can easily lead to overfitting. Thus, we typically want to prune the tree by setting a limit for its maximum depth. Basically, using IG, we want to determine which attribute in a given set of training feature vectors is most useful. In other words, IG tells us how important a given attribute of the feature vector is. We will use it to decide the ordering of attributes in the nodes of a decision tree. IG can be defined as follows:
IG(D_p) = I(D_p) - \frac{N_{left}}{N_p}I(D_{left}) - \frac{N_{right}}{N_p}I(D_{right})
where $I$ could be entropy, the Gini index, or classification error. Example (IG) : Reference
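A sketch of the IG formula above, using Gini impurity for $I$ (function names and the toy split are my own, not from the reference):

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def information_gain(parent, left, right, impurity=gini):
    """IG(D_p) = I(D_p) - (N_left/N_p) I(D_left) - (N_right/N_p) I(D_right)."""
    n = len(parent)
    return (impurity(parent)
            - len(left) / n * impurity(left)
            - len(right) / n * impurity(right))

parent = [0, 0, 0, 0, 1, 1, 1, 1]
# A perfect split separates the classes entirely, so the children are pure:
print(information_gain(parent, [0, 0, 0, 0], [1, 1, 1, 1]))  # 0.5
```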
```python
print(__doc__)

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Parameters
n_classes = 3
plot_colors = "ryb"
plot_step = 0.02

# Load data
iris = load_iris()

for pairidx, pair in enumerate([[0, 1], [0, 2], [0, 3],
                                [1, 2], [1, 3], [2, 3]]):
    # We only take the two corresponding features
    X = iris.data[:, pair]
    y = iris.target

    # Train
    clf = DecisionTreeClassifier().fit(X, y)

    # Plot the decision boundary
    plt.subplot(2, 3, pairidx + 1)
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
                         np.arange(y_min, y_max, plot_step))
    plt.tight_layout(h_pad=0.5, w_pad=0.5, pad=2.5)

    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    cs = plt.contourf(xx, yy, Z, cmap=plt.cm.RdYlBu)

    plt.xlabel(iris.feature_names[pair[0]])
    plt.ylabel(iris.feature_names[pair[1]])

    # Plot the training points
    for i, color in zip(range(n_classes), plot_colors):
        idx = np.where(y == i)
        plt.scatter(X[idx, 0], X[idx, 1], c=color,
                    label=iris.target_names[i],
                    cmap=plt.cm.RdYlBu, edgecolor='black', s=15)

plt.suptitle("Decision surface of a decision tree using paired features")
plt.legend(loc='lower right', borderpad=0, handletextpad=0)
plt.axis("tight")
plt.show()
```
```python
print(__doc__)

# Import the necessary modules and libraries
import numpy as np
from sklearn.tree import DecisionTreeRegressor
import matplotlib.pyplot as plt

# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(16))

# Fit regression model
regr_1 = DecisionTreeRegressor(max_depth=2)
regr_2 = DecisionTreeRegressor(max_depth=5)
regr_1.fit(X, y)
regr_2.fit(X, y)

# Predict
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_1 = regr_1.predict(X_test)
y_2 = regr_2.predict(X_test)

# Plot the results
plt.figure()
plt.scatter(X, y, s=20, edgecolor="black", c="darkorange", label="data")
plt.plot(X_test, y_1, color="cornflowerblue", label="max_depth=2", linewidth=2)
plt.plot(X_test, y_2, color="yellowgreen", label="max_depth=5", linewidth=2)
plt.xlabel("data")
plt.ylabel("target")
plt.title("Decision Tree Regression")
plt.legend()
plt.show()
```
Learning Outcomes
Use the order of operations to correctly perform multi-step arithmetic. Apply the order of operations to statistics-related complex questions.
When we are given multiple arithmetic operations within a calculation, there is an established order in which we must perform them, based on how the expression is written. Understanding these rules is especially important when using a calculator, since calculators are programmed to strictly follow the order of operations. This comes up in every topic in statistics, so knowing the order of operations is an essential skill for all successful statistics students to have.
PEMDAS
The order of operations are as follows:
1. Parentheses
2. Exponents
3. Multiplication and Division
4. Addition and Subtraction
When there is a tie, the rule is to go from left to right.
Notice that multiplication and division are listed together as item 3. If you see multiplication and division in the same expression, the rule is to go from left to right. Similarly, if you see addition and subtraction in the same expression, the rule is to go from left to right. The same goes for two of the same arithmetic operators.
Example \(\PageIndex{1}\)
Evaluate: \(20-6\div3+\left(2\times3^2\right)\)
Solution
We start with what is inside the parentheses: \(2\times3^2\). Since exponents come before multiplication, we find \(3^2=9\) first. We now have
\[20-6\div3+\left(2\times9\right) \nonumber\]
We continue inside the parentheses and perform the multiplication: \(2\times9=18\).
This gives
\[20-6\div3+18 \nonumber\]
Since division comes before addition and subtraction, we next calculate \(6\div3=2\) to get
\[20-2+18 \nonumber\]
Since subtraction and addition are tied, we go from left to right. We calculate \(20-2=18\) to get
\[18+18\:=36 \nonumber\]
The key to arriving at the correct answer is to go slow and write down each step in the arithmetic.
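Python follows the same order of operations (exponents, then multiplication and division, then addition and subtraction, ties left to right), so the expression from the example can be checked directly:

```python
# 3**2 = 9, then 2*9 = 18, then 6/3 = 2.0, then 20 - 2.0 + 18 left to right
result = 20 - 6 / 3 + (2 * 3 ** 2)
print(result)  # 36.0
```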
Hidden Parentheses
You may think that since you always have a calculator or computer at hand, that you don't need to worry about order of operations. Unfortunately, the way that expressions are written is not the same as the way that they are entered into a computer or calculator. In particular, exponents need to be treated with care as do fractions bars.
Example \(\PageIndex{3}\)
Evaluate \(2.1^{6-2}\)
Solution
First, note that we use the symbol "^" to tell a computer or calculator to exponentiate. If you were to enter 2.1^6-2 into a computer, it would give you the answer 83.766121, which is not correct, since the computer will first exponentiate and then subtract. Since the subtraction is within the exponent, it must be performed first. To tell a calculator or computer to perform the subtraction first, we use parentheses:
2.1^(6 - 2) = 19.4481
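In Python the exponentiation operator is `**` rather than `^`, but the same precedence issue arises:

```python
wrong = 2.1 ** 6 - 2    # exponentiation happens first: approximately 83.766121
right = 2.1 ** (6 - 2)  # parentheses force the subtraction first: approximately 19.4481
print(wrong, right)
```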
Example \(\PageIndex{4}\): z-scores
The "z-score" is defined by:
\[z=\frac{x-\mu}{\sigma} \nonumber\]
Find the z-score rounded to one decimal place if:
\[x=2.323,\:\mu=1.297,\:\sigma=0.241 \nonumber\]
Solution
Once again, if we put these numbers into the z-score formula and use a computer or calculator by entering \(2.323\:-\:1.297\:\div\:0.241\) we will get approximately \(-3.06\), which is the wrong answer, because the calculator performs the division before the subtraction. Instead, we need to know that the fraction bar separates the numerator and the denominator, so the subtraction must be done first. We compute
\[\frac{2.323-1.297}{0.241}\:=\left(2.323-1.297\right)\div0.241=\:4.25726141 \nonumber\]
Now round to one decimal place to get 4.3. Notice that if you rounded before you did the arithmetic, you would get exactly 5 which is very different. 4.3 is more accurate.
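A quick check of this computation in Python, where the parentheses play the role of the fraction bar:

```python
x, mu, sigma = 2.323, 1.297, 0.241
z = (x - mu) / sigma  # the subtraction in the numerator must happen first
print(round(z, 1))    # 4.3
```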
Exercise
Suppose the equation of the regression line for the number of pairs of socks a person owns, \(y\), based on the number of pairs of shoes, \(x\), the person owns is
\[\hat y=6+2x \nonumber\]
Use this regression line to predict the number of pairs of socks a person owns for a person who owns 4 pairs of shoes. |
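As a sketch of the prediction (the function name is my own), substituting \(x=4\) into the regression line:

```python
def predict_socks(shoes):
    """Regression line from the exercise: y-hat = 6 + 2x."""
    return 6 + 2 * shoes

print(predict_socks(4))  # 14 pairs of socks
```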
M³: a new muon missing momentum experiment to probe (g – 2)μ and dark matter at Fermilab

Abstract
Here, new light, weakly-coupled particles are commonly invoked to address the persistent $$\sim 4\sigma$$ anomaly in $$(g-2)_\mu$$ and serve as mediators between dark and visible matter. If such particles couple predominantly to heavier generations and decay invisibly, much of their best-motivated parameter space is inaccessible with existing experimental techniques. In this paper, we present a new fixed-target, missing-momentum search strategy to probe invisibly decaying particles that couple preferentially to muons. In our setup, a relativistic muon beam impinges on a thick active target. The signal consists of events in which a muon loses a large fraction of its incident momentum inside the target without initiating any detectable electromagnetic or hadronic activity in downstream veto systems. We propose a two-phase experiment, M$^3$ (Muon Missing Momentum), based at Fermilab. Phase 1 with $$\sim 10^{10}$$ muons on target can test the remaining parameter space for which light invisibly-decaying particles can resolve the $$(g-2)_\mu$$ anomaly, while Phase 2 with $$\sim 10^{13}$$ muons on target can test much of the predictive parameter space over which sub-GeV dark matter achieves freeze-out via muon-philic forces, including gauged $$U(1)_{L_\mu - L_\tau}$$.
Author affiliations: Princeton Univ., Princeton, NJ (United States); Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
Research Org.: Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
Sponsoring Org.: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25)
OSTI Identifier: 1439466
Report Number(s): arXiv:1804.03144; FERMILAB-PUB-18-087-A
Grant/Contract Number: AC02-07CH11359
Journal: Journal of High Energy Physics (Online), Volume 2018, Issue 9; ISSN 1029-8479
Publisher: Springer Berlin
Country of Publication: United States
Language: English
Subject: 79 Astronomy and Astrophysics; 46 Instrumentation Related to Nuclear Science and Technology; 72 Physics of Elementary Particles and Fields; fixed target experiments
Kahn, Yonatan, Krnjaic, Gordan, Tran, Nhan, and Whitbeck, Andrew.
M3: a new muon missing momentum experiment to probe (g – 2)μ and dark matter at Fermilab. United States: N. p., 2018. Web. doi:10.1007/JHEP09(2018)153.
Figure: Scalar (left) and vector (right) forces that couple predominantly to muons. In both cases, a relativistic muon beam is incident on a fixed target and scatters coherently off a nucleus to produce the new particle.
First, note that hybridisation is a mathematical concept which can be applied to interpret a bonding situation. It has no physical meaning whatsoever; instead, it helps us better understand the direction of bonds.
Second, note that the second period usually behaves quite differently from the remaining elements in a group. So, in a way, ammonia behaves unnaturally, or anomalously.
If you compare nitrogen with phosphorus, you will note that the former is much smaller than the latter, i.e. van der Waals radii $r(\ce{N})=155~\mathrm{pm};\ r(\ce{P})=180~\mathrm{pm}$ (ref. wikipedia), covalent radii $r(\ce{N})=71~\mathrm{pm};\ r(\ce{P})=107~\mathrm{pm}$ (ref. wikipedia). Therefore the orbitals in nitrogen are also smaller, and the $\ce{s}$ and $\ce{p}$ orbitals will occupy more of the same space than in phosphorus. As a result the $\ce{N-H}$ bond distance will naturally also be shorter.
A lone pair is usually most stable in an orbital that has high $\ce{s}$ character. Bonds will most likely be formed with the higher lying $\ce{p}$ orbitals. The orientation of these towards each other is exactly $90^\circ$.
In ammonia this would lead to very close $\ce{H\cdots{}H}$ contacts, which are repulsive and therefore the hydrogen atoms are pushed away from each other. This is possible since in the second period the $\ce{s-p}$ splitting is still very small and the nitrogen $\ce{s}$ orbital is accessible for the hydrogen atoms. This will ultimately result in mixing $\ce{s}$ and $\ce{p}$ orbitals for nitrogen in the respective molecular orbitals. This phenomenon can be referred to as hybridisation - the linear combination of orbitals from the same atom. This term is therefore somewhat independent from its most common usage.
It is also very important to know, that the molecular wavefunction of a molecule has to reflect its overall symmetry. In this case it is $C_{3v}$, which means there is a threefold rotational axis and three vertical mirror planes (the axis is element of these planes). This gives also rise to degenerate orbitals. A canonical orbital picture has to reflect this property (BP86/cc-pVDZ; valence orbitals are ordered with increasing energy from left to right).
Note that the lowest lying valence molecular orbital is formed only from $\ce{s}$ orbitals (There is one additional $\ce{1s^2-N}$ core orbital.) Now Natural Bond Orbital (NBO) Theory can be used to transform these delocalised molecular orbitals to a more common and familiar bonding picture, making use of atomic hybrid orbitals. This method is called localising orbitals, but it has the expense of losing the energy eigenvalue that may be assigned to canonical orbitals (NBO@BP86/cc-pVDZ; valence NBO cannot be ordered by energy levels). In this theory you will find three equivalent $\ce{N-H}$ bonds, that are composed of $32\%~\ce{1s-H}$ and $68\%~\ce{s^{$0.87$}p^3-N}\approx\ce{sp^3-N}$ orbitals. Note that the lone pair orbital at nitrogen has a slightly higher $\ce{s}$ orbital contribution, i.e. $\ce{s^{1.42}p^3-N}\approx\ce{sp^3-N}$.
So the thermodynamically most favoured angle is found to be $107^\circ$ due to a compromise between optimal orbital overlap and least internuclear repulsion.
The canonical bonding picture in phosphine is very similar to ammonia, only the orbitals are larger. Even in this case it would be wrong to assume, that there is no hybridisation present at all. However, the biggest contribution to the molecular orbitals stems from the $\ce{p}$ orbitals at phosphorus.
Applying the localisation scheme, one ends up with a different bonding picture. There are three equivalent $\ce{P-H}$ bonds that are composed of $48\%~\ce{1s-H}$ and $52\%~\ce{s^{$0.5$}p^3-P}$ orbitals. The lone pair at phosphorus is composed of $57\%\ce{s} + 43\%\ce{p}$ orbitals.
One can also see the difference between the molecules in their inversion barriers: while for ammonia the inversion is readily accessible at room temperature, $\Delta E \approx 6~\mathrm{kcal/mol}$, it is very slow for phosphine, $\Delta E \approx 40~\mathrm{kcal/mol}$.
This is mostly due to the fact that the nitrogen–hydrogen bonds already have a significant $\ce{s}$ orbital contribution, which can easily be increased to form the planar molecule with formally $\ce{sp^2}$ hybrids.
Quasi-linear Schrödinger–Poisson system under an exponential critical nonlinearity: existence and asymptotic behaviour of solutions
Abstract
In this paper we consider the following quasilinear Schrödinger–Poisson system in a bounded domain in \({\mathbb {R}}^{2}\):

$$\begin{aligned} \left\{ \begin{array}[c]{ll} - \Delta u +\phi u = f(u) &{}\ \text{ in } \Omega , \\ -\Delta \phi - \varepsilon ^{4}\Delta _4 \phi = u^{2} &{} \ \text{ in } \Omega ,\\ u=\phi =0 &{} \ \text{ on } \partial \Omega \end{array} \right. \end{aligned}$$

depending on the parameter \(\varepsilon >0\). The nonlinearity \(f\) is assumed to have critical exponential growth. We first prove existence of nontrivial solutions \((u_{\varepsilon }, \phi _{\varepsilon })\) and then we show that as \(\varepsilon \rightarrow 0^{+}\), these solutions converge to a nontrivial solution of the associated Schrödinger–Poisson system, that is, the system obtained by setting \(\varepsilon =0\) above.

Keywords: Variational methods; Nonlocal problems; Schrödinger–Poisson equation; Exponential critical growth. Mathematics Subject Classification: 35Q60, 35J10, 35J50, 35J92, 35J61
© Springer Nature Switzerland AG 2019 |
Learning Objectives
Identify a conic in polar form. Graph the polar equations of conics. Define conics in terms of a focus and a directrix.
Most of us are familiar with orbital motion, such as the motion of a planet around the sun or an electron around an atomic nucleus. Within the planetary system, orbits of planets, asteroids, and comets around a larger celestial body are often elliptical. Comets, however, may take on a parabolic or hyperbolic orbit instead. And, in reality, the characteristics of the planets’ orbits may vary over time. Each orbit is tied to the location of the celestial body being orbited and the distance and direction of the planet or other object from that body. As a result, we tend to use polar coordinates to represent these orbits.
In an elliptical orbit, the
periapsis is the point at which the two objects are closest, and the apoapsis is the point at which they are farthest apart. Generally, the velocity of the orbiting body tends to increase as it approaches the periapsis and decrease as it approaches the apoapsis. Some objects reach an escape velocity, which results in an infinite orbit. These bodies exhibit either a parabolic or a hyperbolic orbit about a body; the orbiting body breaks free of the celestial body’s gravitational pull and fires off into space. Each of these orbits can be modeled by a conic section in the polar coordinate system. Identifying a Conic in Polar Form
Any conic may be determined by three characteristics: a single
focus, a fixed line called the directrix, and the ratio of the distances of each to a point on the graph. Consider the parabola \(x=2+y^2\) shown in Figure \(\PageIndex{2}\).
We previously learned how a parabola is defined by the focus (a fixed point) and the directrix (a fixed line). In this section, we will learn how to define any conic in the polar coordinate system in terms of a fixed point, the focus \(P(r,\theta)\) at the pole, and a line, the directrix, which is perpendicular to the polar axis.
If \(F\) is a fixed point, the focus, and \(D\) is a fixed line, the directrix, then we can let \(e\) be a fixed positive number, called the
eccentricity, which we can define as the ratio of the distances from a point on the graph to the focus and the point on the graph to the directrix. Then the set of all points \(P\) such that \(e=\dfrac{PF}{PD}\) is a conic. In other words, we can define a conic as the set of all points \(P\) with the property that the ratio of the distance from \(P\) to \(F\) to the distance from \(P\) to \(D\) is equal to the constant \(e\).
For a conic with eccentricity \(e\),
if \(0≤e<1\), the conic is an ellipse
if \(e=1\), the conic is a parabola
if \(e>1\), the conic is a hyperbola
With this definition, we may now define a conic in terms of the directrix, \(x=\pm p\), the eccentricity \(e\), and the angle \(\theta\). Thus, each conic may be written as a
polar equation, an equation written in terms of \(r\) and \(\theta\).
THE POLAR EQUATION FOR A CONIC
For a conic with a focus at the origin, if the directrix is \(x=\pm p\), where \(p\) is a positive real number, and the eccentricity is a positive real number \(e\), the conic has a polar equation
\[r=\dfrac{ep}{1\pm e \cos \theta}\]
For a conic with a focus at the origin, if the directrix is \(y=\pm p\), where \(p\) is a positive real number, and the eccentricity is a positive real number \(e\), the conic has a polar equation
\[r=\dfrac{ep}{1\pm e \sin \theta}\]
Example \(\PageIndex{1}\): Identifying a Conic Given the Polar Form
For each of the following equations, identify the conic with focus at the origin, the directrix, and the eccentricity.
\(r=\dfrac{6}{3+2 \sin \theta}\) \(r=\dfrac{12}{4+5 \cos \theta}\) \(r=\dfrac{7}{2−2 \sin \theta}\) Solution
For each of the three conics, we will rewrite the equation in standard form. Standard form has a \(1\) as the constant in the denominator. Therefore, in all three parts, the first step will be to multiply the numerator and denominator by the reciprocal of the constant of the original equation, \(\dfrac{1}{c}\), where \(c\) is that constant.
Multiply the numerator and denominator by \(\dfrac{1}{3}\).
\(r=\dfrac{6}{3+2\sin \theta}⋅\dfrac{\left(\dfrac{1}{3}\right)}{\left(\dfrac{1}{3}\right)}=\dfrac{6\left(\dfrac{1}{3}\right)}{3\left(\dfrac{1}{3}\right)+2\left(\dfrac{1}{3}\right)\sin \theta}=\dfrac{2}{1+\dfrac{2}{3} \sin \theta}\)
Because \(\sin \theta\) is in the denominator, the directrix is \(y=p\). Comparing to standard form, note that \(e=\dfrac{2}{3}\).Therefore, from the numerator,
\[\begin{align*} 2&=ep\\ 2&=\dfrac{2}{3}p\\ \left(\dfrac{3}{2}\right)2&=\left(\dfrac{3}{2}\right)\dfrac{2}{3}p\\ 3&=p \end{align*}\]
Since \(e<1\), the conic is an
ellipse. The eccentricity is \(e=\dfrac{2}{3}\) and the directrix is \(y=3\). Multiply the numerator and denominator by \(\dfrac{1}{4}\).
\[\begin{align*} r&=\dfrac{12}{4+5 \cos \theta}\cdot \dfrac{\left(\dfrac{1}{4}\right)}{\left(\dfrac{1}{4}\right)}\\ r&=\dfrac{12\left(\dfrac{1}{4}\right)}{4\left(\dfrac{1}{4}\right)+5\left(\dfrac{1}{4}\right)\cos \theta}\\ r&=\dfrac{3}{1+\dfrac{5}{4} \cos \theta} \end{align*}\]
Because \(\cos \theta\) is in the denominator, the directrix is \(x=p\). Comparing to standard form, \(e=\dfrac{5}{4}\). Therefore, from the numerator,
\[\begin{align*} 3&=ep\\ 3&=\dfrac{5}{4}p\\ \left(\dfrac{4}{5}\right)3&=\left(\dfrac{4}{5}\right)\dfrac{5}{4}p\\ \dfrac{12}{5}&=p \end{align*}\]
Since \(e>1\), the conic is a
hyperbola. The eccentricity is \(e=\dfrac{5}{4}\) and the directrix is \(x=\dfrac{12}{5}=2.4\). Multiply the numerator and denominator by \(\dfrac{1}{2}\).
\[\begin{align*} r&=\dfrac{7}{2-2 \sin \theta}\cdot \dfrac{\left(\dfrac{1}{2}\right)}{\left(\dfrac{1}{2}\right)}\\ r&=\dfrac{7\left(\dfrac{1}{2}\right)}{2\left(\dfrac{1}{2}\right)-2\left(\dfrac{1}{2}\right) \sin \theta}\\ r&=\dfrac{\dfrac{7}{2}}{1-\sin \theta} \end{align*}\]
Because sine is in the denominator, the directrix is \(y=−p\). Comparing to standard form, \(e=1\). Therefore, from the numerator,
\[\begin{align*} \dfrac{7}{2}&=ep\\ \dfrac{7}{2}&=(1)p\\ \dfrac{7}{2}&=p \end{align*}\]
Because \(e=1\), the conic is a parabola. The eccentricity is \(e=1\) and the directrix is \(y=−\dfrac{7}{2}=−3.5\).
Exercise \(\PageIndex{1}\)
Identify the conic with focus at the origin, the directrix, and the eccentricity for \(r=\dfrac{2}{3−\cos \theta}\).
Answer
ellipse; \(e=\dfrac{1}{3}\); \(x=−2\)
Graphing the Polar Equations of Conics
When graphing in Cartesian coordinates, each conic section has a unique equation. This is not the case when graphing in polar coordinates. We must use the eccentricity of a conic section to determine which type of curve to graph, and then determine its specific characteristics. The first step is to rewrite the conic in standard form as we have done in the previous example. In other words, we need to rewrite the equation so that the denominator begins with \(1\). This enables us to determine \(e\) and, therefore, the shape of the curve. The next step is to substitute values for \(\theta\) and solve for \(r\) to plot a few key points. Setting \(\theta\) equal to \(0\), \(\dfrac{\pi}{2}\), \(\pi\), and \(\dfrac{3\pi}{2}\) provides the vertices so we can create a rough sketch of the graph.
Example \(\PageIndex{2A}\): Graphing a Parabola in Polar Form
Graph \(r=\dfrac{5}{3+3 \cos \theta}\).
Solution
First, we rewrite the conic in standard form by multiplying the numerator and denominator by the reciprocal of \(3\), which is \(\dfrac{1}{3}\).
\[\begin{align*} r &= \dfrac{5}{3+3 \cos \theta}=\dfrac{5\left(\dfrac{1}{3}\right)}{3\left(\dfrac{1}{3}\right)+3\left(\dfrac{1}{3}\right)\cos \theta} \\ r &= \dfrac{\dfrac{5}{3}}{1+\cos \theta} \end{align*}\]
Because \(e=1\),we will graph a
parabola with a focus at the origin. The function has a \(\cos \theta\), and there is an addition sign in the denominator, so the directrix is \(x=p\).
\[\begin{align*} \dfrac{5}{3}&=ep\\ \dfrac{5}{3}&=(1)p\\ \dfrac{5}{3}&=p \end{align*}\]
The directrix is \(x=\dfrac{5}{3}\).
Plotting a few key points as in Table \(\PageIndex{1}\) will enable us to see the vertices. See Figure \(\PageIndex{3}\).
| | A | B | C | D |
|---|---|---|---|---|
| \(\theta\) | \(0\) | \(\dfrac{\pi}{2}\) | \(\pi\) | \(\dfrac{3\pi}{2}\) |
| \(r=\dfrac{5}{3+3 \cos \theta}\) | \(\dfrac{5}{6}≈0.83\) | \(\dfrac{5}{3}≈1.67\) | undefined | \(\dfrac{5}{3}≈1.67\) |
We can check our result with a graphing utility. See Figure \(\PageIndex{4}\).
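In the absence of a graphing utility, the key points can also be recomputed numerically. This short sketch (the helper name is my own) evaluates \(r=\dfrac{5/3}{1+\cos\theta}\) at the angles from the table:

```python
import math

def conic_r(theta, ep=5/3, e=1.0):
    """r = ep / (1 + e*cos(theta)); returns inf where the denominator vanishes."""
    denom = 1 + e * math.cos(theta)
    return ep / denom if abs(denom) > 1e-12 else float("inf")

points = [round(conic_r(t), 2) for t in (0, math.pi / 2, 3 * math.pi / 2)]
print(points)               # [0.83, 1.67, 1.67], matching the table
print(conic_r(math.pi))     # inf: r is undefined at theta = pi
```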
Example \(\PageIndex{2B}\): Graphing a Hyperbola in Polar Form
Graph \(r=\dfrac{8}{2−3 \sin \theta}\).
Solution
First, we rewrite the conic in standard form by multiplying the numerator and denominator by the reciprocal of \(2\), which is \(\dfrac{1}{2}\).
\[\begin{align*} r &=\dfrac{8}{2−3\sin \theta}=\dfrac{8\left(\dfrac{1}{2}\right)}{2\left(\dfrac{1}{2}\right)−3\left(\dfrac{1}{2}\right)\sin \theta} \\ r &= \dfrac{4}{1−\dfrac{3}{2} \sin \theta} \end{align*}\]
Because \(e=\dfrac{3}{2}\), \(e>1\), so we will graph a hyperbola with a focus at the origin. The function has a \(\sin \theta\) term and there is a subtraction sign in the denominator, so the directrix is \(y=−p\).
\[\begin{align*} 4&=ep\\ 4&=\left(\dfrac{3}{2}\right)p\\ 4\left(\dfrac{2}{3}\right)&=p\\ \dfrac{8}{3}&=p \end{align*}\]
The directrix is \(y=−\dfrac{8}{3}\).
Plotting a few key points as in Table \(\PageIndex{2}\) will enable us to see the vertices. See Figure \(\PageIndex{5}\).
| | A | B | C | D |
|---|---|---|---|---|
| \(\theta\) | \(0\) | \(\dfrac{\pi}{2}\) | \(\pi\) | \(\dfrac{3\pi}{2}\) |
| \(r=\dfrac{8}{2−3\sin \theta}\) | \(4\) | \(−8\) | \(4\) | \(\dfrac{8}{5}=1.6\) |
Example \(\PageIndex{2C}\): Graphing an Ellipse in Polar Form
Graph \(r=\dfrac{10}{5−4 \cos \theta}\).
Solution
First, we rewrite the conic in standard form by multiplying the numerator and denominator by the reciprocal of 5, which is \(\dfrac{1}{5}\).
\[\begin{align*} r &= \dfrac{10}{5−4\cos \theta}=\dfrac{10\left(\dfrac{1}{5}\right)}{5\left(\dfrac{1}{5}\right)−4\left(\dfrac{1}{5}\right)\cos \theta} \\ r &= \dfrac{2}{1−\dfrac{4}{5} \cos \theta} \end{align*}\]
Because \(e=\dfrac{4}{5}\), \(e<1\), so we will graph an
ellipse with a focus at the origin. The function has a \(\cos \theta\), and there is a subtraction sign in the denominator, so the directrix is \(x=−p\).
\[\begin{align*} 2&=ep\\ 2&=\left(\dfrac{4}{5}\right)p\\ 2\left(\dfrac{5}{4}\right)&=p\\ \dfrac{5}{2}&=p \end{align*}\]
The directrix is \(x=−\dfrac{5}{2}\).
Plotting a few key points as in Table \(\PageIndex{3}\) will enable us to see the vertices. See Figure \(\PageIndex{6}\).
| | A | B | C | D |
|---|---|---|---|---|
| \(\theta\) | \(0\) | \(\dfrac{\pi}{2}\) | \(\pi\) | \(\dfrac{3\pi}{2}\) |
| \(r=\dfrac{10}{5−4 \cos \theta}\) | \(10\) | \(2\) | \(\dfrac{10}{9}≈1.1\) | \(2\) |
Analysis
We can check our result using a graphing utility. See Figure \(\PageIndex{7}\).
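If a graphing calculator is not available, a short matplotlib sketch (my own, not from the text) can serve as the graphing utility for this ellipse:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line when running interactively
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 400)
r = 10 / (5 - 4 * np.cos(theta))  # the ellipse from this example

# Plot on polar axes; the focus of the conic sits at the pole.
ax = plt.subplot(projection="polar")
ax.plot(theta, r)
plt.savefig("ellipse.png")
```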
Exercise \(\PageIndex{2}\)
Graph \(r=\dfrac{2}{4−\cos \theta}\).
Answer

Defining Conics in Terms of a Focus and a Directrix
So far we have been using polar equations of conics to describe and graph the curve. Now we will work in reverse; we will use information about the origin, eccentricity, and directrix to determine the polar equation.
How to: Given the focus, eccentricity, and directrix of a conic, determine the polar equation
1. Determine whether the directrix is horizontal or vertical. If the directrix is given in terms of \(y\), we use the general polar form in terms of sine. If the directrix is given in terms of \(x\), we use the general polar form in terms of cosine.
2. Determine the sign in the denominator. If \(p<0\), use subtraction. If \(p>0\), use addition.
3. Write the coefficient of the trigonometric function as the given eccentricity.
4. Write the absolute value of \(p\) in the numerator, and simplify the equation.
Example \(\PageIndex{3A}\): Finding the Polar Form of a Vertical Conic Given a Focus at the Origin and the Eccentricity and Directrix
Find the polar form of the conic given a focus at the origin, \(e=3\) and directrix \(y=−2\).
Solution
The directrix is \(y=−p\), so we know the trigonometric function in the denominator is sine.
Because \(y=−2\), \(–2<0\), so we know there is a subtraction sign in the denominator. We use the standard form of
\(r=\dfrac{ep}{1−e \sin \theta}\)
and \(e=3\) and \(|−2|=2=p\).
Therefore,
\[\begin{align*} r&=\dfrac{(3)(2)}{1-3 \sin \theta}\\ r&=\dfrac{6}{1-3 \sin \theta} \end{align*}\]
Example \(\PageIndex{3B}\): Finding the Polar Form of a Horizontal Conic Given a Focus at the Origin and the Eccentricity and Directrix
Find the polar form of a conic given a focus at the origin, \(e=\dfrac{3}{5}\), and directrix \(x=4\).
Solution
Because the directrix is \(x=p\), we know the function in the denominator is cosine. Because \(x=4\), \(4>0\), so we know there is an addition sign in the denominator. We use the standard form of
\(r=\dfrac{ep}{1+e \cos \theta}\)
and \(e=\dfrac{3}{5}\) and \(|4|=4=p\).
Therefore,
\[\begin{align*} r &= \dfrac{\left(\dfrac{3}{5}\right)(4)}{1+\dfrac{3}{5}\cos\theta} \\ r &= \dfrac{\dfrac{12}{5}}{1+\dfrac{3}{5}\cos\theta} \\ r &=\dfrac{\dfrac{12}{5}}{1\left(\dfrac{5}{5}\right)+\dfrac{3}{5}\cos\theta} \\ r &=\dfrac{\dfrac{12}{5}}{\dfrac{5}{5}+\dfrac{3}{5}\cos\theta} \\ r &= \dfrac{12}{5}⋅\dfrac{5}{5+3\cos\theta} \\ r &=\dfrac{12}{5+3\cos\theta} \end{align*}\]
Exercise \(\PageIndex{3}\)
Find the polar form of the conic given a focus at the origin, \(e=1\), and directrix \(x=−1\).
Answer
\(r=\dfrac{1}{1−\cos\theta}\)
Example \(\PageIndex{4}\): Converting a Conic in Polar Form to Rectangular Form
Convert the conic \(r=\dfrac{1}{5−5\sin \theta}\) to rectangular form.
Solution:
We will rearrange the formula to use the identities \(r=\sqrt{x^2+y^2}\), \(x=r \cos \theta\),and \(y=r \sin \theta\).
\[\begin{align*} r&=\dfrac{1}{5-5 \sin \theta} \\ r\cdot (5-5 \sin \theta)&=\dfrac{1}{5-5 \sin \theta}\cdot (5-5 \sin \theta)\qquad \text{Eliminate the fraction.} \\ 5r-5r \sin \theta&=1 \qquad \text{Distribute.} \\ 5r&=1+5r \sin \theta \qquad \text{Isolate }5r. \\ 25r^2&={(1+5r \sin \theta)}^2 \qquad \text{Square both sides. } \\ 25(x^2+y^2)&={(1+5y)}^2 \qquad \text{Substitute } r=\sqrt{x^2+y^2} \text{ and }y=r \sin \theta. \\ 25x^2+25y^2&=1+10y+25y^2 \qquad \text{Distribute and use FOIL. } \\ 25x^2-10y&=1 \qquad \text{Rearrange terms and set equal to 1.} \end{align*}\]
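The conversion can be spot-checked numerically (my own sketch): points generated from the polar form should satisfy the rectangular equation \(25x^2-10y=1\) derived above.

```python
import math

checks = []
for theta in (0.3, 1.0, 2.5):
    r = 1 / (5 - 5 * math.sin(theta))          # polar form of the conic
    x, y = r * math.cos(theta), r * math.sin(theta)
    checks.append(25 * x**2 - 10 * y)          # should equal 1 on the curve

print([round(c, 10) for c in checks])  # [1.0, 1.0, 1.0]
```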
Exercise \(\PageIndex{4}\)
Convert the conic \(r=\dfrac{2}{1+2 \cos \theta}\) to rectangular form.
Answer
\(4−8x+3x^2−y^2=0\)
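Both rectangular forms above (from Example \(\PageIndex{4}\) and Exercise \(\PageIndex{4}\)) can be sanity-checked numerically by sampling points from the polar equations and substituting into the derived rectangular equations; a short illustrative Python sketch (not part of the text):

```python
import math

# For each conic, sample points (r, theta) from the polar equation,
# convert to rectangular coordinates, and plug them into the derived
# rectangular equation; the residual should vanish (up to rounding).
def check(polar_r, rect_residual, thetas):
    for t in thetas:
        r = polar_r(t)
        x, y = r * math.cos(t), r * math.sin(t)
        assert abs(rect_residual(x, y)) < 1e-9

thetas = [0.3, 1.1, 2.0, 4.5, 5.9]

# Example 4: r = 1/(5 - 5 sin(theta))  ->  25x^2 - 10y = 1
check(lambda t: 1.0 / (5 - 5 * math.sin(t)),
      lambda x, y: 25 * x**2 - 10 * y - 1,
      thetas)

# Exercise 4: r = 2/(1 + 2 cos(theta))  ->  4 - 8x + 3x^2 - y^2 = 0
check(lambda t: 2.0 / (1 + 2 * math.cos(t)),
      lambda x, y: 4 - 8 * x + 3 * x**2 - y**2,
      thetas)
```

Because the squared rectangular equation is derived identically from the polar one, the residual is zero for every sampled angle, even where \(r\) is negative.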
Key Concepts
- Any conic may be determined by a single focus, the corresponding eccentricity, and the directrix.
- We can also define a conic in terms of a fixed point, the focus, placed at the pole, and a line, the directrix, perpendicular to the polar axis. A conic is then the set of all points \(P(r,\theta)\) for which the ratio \(e=\dfrac{PF}{PD}\) of the distances from \(P\) to the focus and to the directrix is a fixed positive real number, the eccentricity.
- Each conic may be written in terms of its polar equation. See Example \(\PageIndex{1}\).
- The polar equations of conics can be graphed. See Example \(\PageIndex{2}\), Example \(\PageIndex{3}\), and Example \(\PageIndex{4}\).
- Conics can be defined in terms of a focus, a directrix, and eccentricity. See Example \(\PageIndex{5}\) and Example \(\PageIndex{6}\).
- We can use the identities \(r=\sqrt{x^2+y^2}\), \(x=r \cos \theta\), and \(y=r \sin \theta\) to convert the equation for a conic from polar to rectangular form. See Example \(\PageIndex{7}\).
Answer
$\sqrt{7}$
Work Step by Step
Using the properties of radicals, the given expression, $ \dfrac{\sqrt{56}}{\sqrt{8}} ,$ simplifies to \begin{array}{l}\require{cancel} \sqrt{\dfrac{56}{8}} \\\\= \sqrt{7} .\end{array}
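A quick numeric sanity check of the simplification (illustrative Python, not part of the original answer):

```python
import math

# sqrt(56)/sqrt(8) = sqrt(56/8) = sqrt(7) by the quotient property of radicals
lhs = math.sqrt(56) / math.sqrt(8)
assert math.isclose(lhs, math.sqrt(7))
```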
ISSN:
1078-0947
eISSN:
1553-5231
Discrete & Continuous Dynamical Systems - A
October 2013, Volume 33, Issue 10
Abstract:
This expository article concerns a system of semilinear parabolic partial differential equations that describes the evolution of the gene frequencies at a single locus under the joint action of migration and selection. We shall review mathematical techniques suited for the models under investigation; discuss some of the main mathematical results, including most recent developments; and also propose some open problems.
Abstract:
Self-inducing structure of pentagonal piecewise isometry is applied to show detailed description of periodic and aperiodic orbits, and further dynamical properties. A Pisot number appears as a scaling constant and plays a crucial role in the proof. Further generalization is discussed in the last section.
Abstract:
We construct a non-ergodic maximal entropy measure of a $C^{\infty}$ diffeomorphism with a positive entropy such that neither the entropy nor the large deviation rate of the measure is influenced by that of ergodic measures near it.
Abstract:
We study the existence of quasi--periodic solutions of the equation \[ ε \ddot x + \dot x + ε g(x) = ε f(\omega t)\ , \] where $x: \mathbb{R} \rightarrow \mathbb{R}$ is the unknown and we are given $g:\mathbb{R} \rightarrow \mathbb{R}$, $f: \mathbb{T}^d \rightarrow \mathbb{R}$, $\omega \in \mathbb{R}^d$ (without loss of generality we can assume that $\omega\cdot k\not=0$ for any $k \in \mathbb{Z}^d\backslash\{0\}$). We assume that there is a $c_0\in \mathbb{R}$ such that $g(c_0) = \hat f_0$ (where $\hat f_0$ denotes the average of $f$) and $g'(c_0) \ne 0$. Special cases of this equation, for example when $g(x)=x^2$, are called the ``varactor problem'' in the literature.
We show that if $f$, $g$ are analytic, and $\omega$ satisfies some very mild irrationality conditions, there are families of quasi--periodic solutions with frequency $\omega$. These families depend analytically on $ε$, when $ε$ ranges over a complex domain that includes cones or parabolic domains based at the origin.
The irrationality conditions required in this paper are very weak. They allow that the small denominators $|\omega \cdot k|^{-1}$ grow exponentially with $k$. In the case that $f$ is a trigonometric polynomial, we do not need any condition on $|\omega \cdot k|$. This answers a delicate question raised in [8].
We also consider the periodic case, when $\omega$ is just a number ($d = 1$). We obtain that there are solutions that depend analytically in a domain which is a disk removing countably many disjoint disks. This shows that in this case there is no Stokes phenomenon (different resummations on different sectors) for the asymptotic series.
The approach we use is to reduce the problem to a fixed point theorem. This approach also yields results in the case that $g$ is a finitely differentiable function; it provides also very effective numerical algorithms and we discuss how they can be implemented.
Abstract:
We study arbitrary generic unfoldings of a Hopf-zero singularity of codimension two. They can be written in the following normal form: \begin{eqnarray*} \left\{ \begin{array}{l} x'=-y+\mu x-axz+A(x,y,z,\lambda,\mu) \\ y'=x+\mu y-ayz+B(x,y,z,\lambda,\mu) \\ z'=z^2+\lambda+b(x^2+y^2)+C(x,y,z,\lambda,\mu), \end{array} \right. \end{eqnarray*} with $a>0$, $b>0$ and where $A$, $B$, $C$ are $C^\infty$ or $C^\omega$ functions of order $O(\|(x,y,z,\lambda,\mu)\|^3)$.
Although the existence of Shilnikov homoclinic orbits in unfoldings of Hopf-zero singularities has been discussed previously in the literature, no result valid for arbitrary generic unfoldings is available. In this paper we present new techniques to study global bifurcations from Hopf-zero singularities. They allow us to obtain a general criterion for the existence of Shilnikov homoclinic bifurcations and also provide a detailed description of the bifurcation set. Criteria for the existence of Bykov cycles are also provided. Main tools are a blow-up method, including a related invariant theory, and a novel approach to the splitting functions of the invariant manifolds. Theoretical results are applied to the Michelson system and also to the so-called extended Michelson system. The paper includes thorough numerical explorations of the dynamics of both systems.
Abstract:
We consider the non-selfadjoint operator \[ H = \left[\begin{array}{cc} -\Delta + \mu-V_1 & -V_2\\ V_2 & \Delta - \mu + V_1 \end{array} \right] \] where $\mu>0$ and $V_1,V_2$ are real-valued decaying potentials. Such operators arise when linearizing a focusing NLS equation around a standing wave. Under natural spectral assumptions we obtain $L^1(\mathbb{R}^2)\times L^1(\mathbb{R}^2)\to L^\infty(\mathbb{R}^2)\times L^\infty(\mathbb{R}^2)$ dispersive decay estimates for the evolution $e^{it H}P_{ac}$. We also obtain the following weighted estimate $$ \|w^{-1} e^{it\mathcal H}P_{ac}f\|_{L^\infty(\mathbb R^2)\times L^\infty(\mathbb R^2)} ≲ \frac{1}{|t|\log^2(|t|)} \|w f\|_{L^1(\mathbb R^2)\times L^1(\mathbb R^2)},\,\,\,\,\,\,\,\, |t| >2, $$with $w(x)=\log^2(2+|x|)$.
Abstract:
In this work we study a compressible gas-liquid model highly relevant for wellbore operations like drilling. The model is a drift-flux model and is composed of two continuity equations together with a mixture momentum equation. The model allows unequal gas and liquid velocities, dictated by a so-called slip law, which is important for modeling of flow scenarios involving, for example, counter-current flow. The model is considered in Lagrangian coordinates. The difference in fluid velocities gives rise to new terms in the mixture momentum equation that are challenging to deal with. First, a local (in time) existence result is obtained under suitable assumptions on initial data for a general slip relation. Second, a global in time existence result is proved for small initial data subject to a more specialized slip relation.
Abstract:
In this paper we find necessary and sufficient conditions in order that a planar quasi--homogeneous polynomial differential system has a polynomial or a rational first integral. We also prove that any planar quasi--homogeneous polynomial differential system can be transformed into a differential system of the form $\dot{u} \, = \, u f(v)$, $\dot{v} \, = \, g(v)$ with $f(v)$ and $g(v)$ polynomials, and vice versa.
Abstract:
In this paper, we apply the moving plane method to the following degenerate elliptic equation arising from isometric embedding,\begin{equation*} yu_{yy}+au_y+\Delta_x u+u^\alpha=0\text{ in } \mathbb R^{n+1}_+,n\geq 1. \end{equation*} We get a Liouville theorem for subcritical case and classify the solutions for critical case. As an application, we derive the a priori bounds for positive solutions of some semi-linear degenerate elliptic equations.
Abstract:
We give an equivalent characterization of the summability condition in terms of the backward contracting property defined by Juan Rivera-Letelier, for rational maps of degree at least two which are expanding away from critical points.
Abstract:
Given a finite set $\{S_1,\dots,S_k\}$ of substitution maps acting on a certain finite number (up to translations) of tiles in $\mathbb{R}^d$, we consider the multi-substitution tiling space associated to each sequence $\bar a\in \{1,\ldots,k\}^{\mathbb{N}}$. The action by translations on such spaces gives rise to uniquely ergodic dynamical systems. In this paper we investigate the rate of convergence for ergodic limits of patch frequencies and prove that these limits vary continuously with $\bar a$.
Abstract:
We present some results on singularly perturbed piecewise linear systems, similar to those obtained by the Geometric Singular Perturbation Theory. Unlike the differentiable case, in the piecewise linear case we obtain the global expression of the slow manifold ${\mathcal S}_{\varepsilon}$. As a result, we characterize the existence of canard orbits in such systems. Finally, we apply the above theory to a specific case where we show numerical evidences of the existence of a canard cycle.
Abstract:
In this paper, we study a singular solution to the following elliptic equation: \begin{equation*} \left\{\begin{array}{ll} - \Delta u + |x|^{2}u - \lambda u - |u|^{p-1}u = 0, \quad x \in \mathbb{R}^{d}, & \\ u(x) > 0, \quad x \in \mathbb{R}^{d}, & \\ u(x) \to 0 \quad \text{as}\; |x| \to \infty, & \end{array}\right. \end{equation*} where $d \geq 3, \lambda >0$ and $p > 1$. In the spirit of Merle and Peletier [9], we shall show that in the case $p>(d+2)/(d-2)$, there is a unique value $\lambda = \lambda_{*}$ such that the equation with $\lambda = \lambda_{*}$ has a unique radial singular solution.
Abstract:
We obtain sharp conditions distinguishing extinction from persistence and provide sufficient conditions for global stability of a positive fixed point for a class of discrete time dynamical systems on the positive cone of an ordered Banach space generated by a map which is, roughly speaking, a nonlinear, rank one perturbation of a linear contraction. Such maps were considered by Rebarber, Tenhumberg, and Towney (Theor. Pop. Biol. 81, 2012) as abstractions of a restricted class of density dependent integral population projection models modeling plant population dynamics. Significant improvements of their results are provided.
Abstract:
Developing the pioneering work of Lars Olsen [14], we deal with the question of continuity of the numerical value of Hausdorff measures of some natural families of conformal dynamical systems endowed with an appropriate natural topology. In particular, we prove such continuity for hyperbolic polynomials from the Mandelbrot set, and more generally for the space of hyperbolic rational functions of a fixed degree. We go beyond hyperbolicity by proving continuity for maps including parabolic rational functions, for example that the parameter $1/4$ is such a continuity point for quadratic polynomials $z\mapsto z^2+c$ for $c\in [0,1/4]$. We prove the continuity of the numerical value of Hausdorff measures also for the spaces of conformal expanding repellers and parabolic ones, more generally for parabolic Walters conformal maps. We also prove some partial continuity results for all conformal Walters maps; these are in general of infinite degree. In order to do this, as one of our tools, we provide a detailed local analysis, uniform with respect to the parameter, of the behavior of conformal maps around parabolic fixed points in any dimension. We also establish continuity of numerical values of Hausdorff measures for some families of infinite $1$-dimensional iterated function systems.
Abstract:
This work presents an effective approach to the study of the global asymptotic dynamics of general coupled systems. Under the developed framework, the problem of establishing global synchronization or global convergence reduces to solving a corresponding system of linear equations. We illustrate this approach with a class of neural networks that consist of a pair of sub-networks under various types of nonlinear and delayed couplings. We study both the synchronization and the asymptotic synchronous phases of the dynamics, including global convergence to zero, global convergence to multiple synchronous equilibria, and global synchronization with nontrivial synchronous periodic solutions. Our investigation also provides theoretical support to some numerical findings, and improves or extends some results in the literature.
Abstract:
We estimate expansion growth types (in the sense of Egashira) of certain distal groups of homeomorphisms and manifold diffeomorphisms. The estimate implies zero entropy (in the sense of Ghys, Langevin and the author) and the existence of invariant measures for such groups. We also prove the existence of invariant measures for pseudogroups satisfying some conditions of distality type.
Abstract:
We study the limit of vanishing ratio of the electron mass to the ion mass (zero-electron-mass limit) in the scaled Euler-Poisson equations. As the first step of this justification, we construct the uniform global classical solutions in critical Besov spaces with the aid of ``Shizuta-Kawashima'' skew-symmetry. Then we establish frequency-localization estimates of Strichartz-type for the equation of acoustics according to the semigroup formulation. Finally, it is shown that the uniform classical solutions converge towards that of the incompressible Euler equations (for ill-prepared initial data) in a refined way as the scaled electron mass tends to zero. In comparison with the classical zero-mach-number limit in [7,23], we obtain different dispersive estimates due to the coupled electric field.
Abstract:
In this paper, we study a class of Hamiltonian system with 2 degrees of freedom. We show that at any energy level above a certain critical value of each system, there are ray and heteroclinic solutions between any two periodic neighboring minimal solutions with any prescribed non-trivial homotopy class. Our proof is based on an elementary variational method.
Consider the following complex power series: $$\sum_{n=1}^\infty\frac{z^n}{n}$$ The radius of convergence of this series is $1$, and the series diverges at $z=1$. I want to know for which values of $z$ on the circle of convergence, $C:=\lbrace z\in\mathbb{C}: |z|=1\rbrace$, the given series converges.
HINT: Look at Dirichlet test. In your case, choose $a_n = \dfrac1n$ and $b_n = z^n$.
From the Dirichlet test, you will get that the series converges everywhere on the boundary of the unit disc except at $z=1$.
Hint: This is a classical example for Abel's Test. |
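A quick numerical illustration of these hints (an illustrative Python sketch; it uses the standard fact that $\sum_{n\ge 1} z^n/n = -\log(1-z)$ for $|z|\le 1$, $z\ne 1$):

```python
import cmath

# Partial sums of sum_{n>=1} z^n / n.  For |z| = 1, z != 1, the series
# converges to -log(1 - z); at z = 1 it is the divergent harmonic series.
def partial_sum(z, N):
    s = 0j
    for n in range(1, N + 1):
        s += z**n / n
    return s

z = complex(-1.0, 0.0)                 # a boundary point other than z = 1
S = partial_sum(z, 200000)
assert abs(S - (-cmath.log(1 - z))) < 1e-4   # equals -log(2) at z = -1

# At z = 1 the partial sums grow without bound (harmonic series).
assert partial_sum(1.0, 200000).real > 12
```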
In general, how would one find the distribution of $f(X)$ where $X$ is a random variable? Or consider the inverse problem of finding the distribution of $X$ given the distribution of $f(X)$. For example, what is the distribution of $\max(X_1, X_2, X_3)$ if $X_1, X_2$ and $X_3$ have the same distribution? Likewise, if one is given the distribution of $ Y = \log X$, then the distribution of $X$ is deduced by looking at $\text{exp}(Y)$?
Qiaochu is right. There isn't a magic wand. That said, there is a set of common procedures that can be applied to certain kinds of transformations. One of the most important is the cdf (cumulative distribution function) method that you are already aware of. (It's the one used in your previous question.) Another is to do a change of variables, which is like the method of substitution for evaluating integrals. You can see that procedure and others for handling some of the more common types of transformations at this web site. (Some of the other examples there include finding maxes and mins, sums, convolutions, and linear transformations.)
Let me take the risk of mitigating Qiaochu's healthy skepticism and mention that a wand I find often quite useful to wave is explained on this page. There, I argue that:
The simplest and surest way to compute the distribution density or probability of a random variable is often to compute the means of functions of this random variable.
For example, the fact that $Y=\log X$ is normal $N(2,4)$ is
equivalent to the fact that, for every bounded measurable function $g$,$$\mathrm E(g(Y))=\int g(y) f_Y(y)\mathrm{d}y,$$for a density $f_Y$ everybody knows and whose precise form will not interest us. Likewise, the fact that the distribution of $X$ has density $f_X$ is equivalent to the fact that, for every bounded measurable function $g$,$$\mathrm E(g(X))=\int g(x) f_X(x)\mathrm{d}x.$$Hence our task is simply to pass from one formula to the other. But this is easy since $g(X)=g(\mathrm{e}^Y)$ is also a function of $Y$. As such,$$\mathrm E(g(X))=\int g(\mathrm{e}^y) f_Y(y)\mathrm{d}y,$$and our task is to solve for $f_X$ the equations$$\int g(x) f_X(x)\mathrm{d}x=\int g(\mathrm{e}^y) f_Y(y)\mathrm{d}y.$$We have no choice for our next step but to use the change of variable $x\leftarrow \mathrm{e}^y$. That is, $y\leftarrow \log x$ and $\mathrm{d}y=x^{-1}\mathrm{d}x$, which yields$$\int g(\mathrm{e}^y) f_Y(y)\mathrm{d}y=\int g(x) f_Y(\log x)x^{-1}\mathrm{d}x.$$By identification, $f_X(x)=f_Y(\log x)x^{-1}$.
In a nutshell the idea is that the very notations of integration help us to get the result and that during the proof we have no choice but to use the right path. We leave as an exercise the computation of the density of each random variable $Z=\varphi(Y)$, for some regular enough function $\varphi$.
Note that maxima and minima of independent random variables should be dealt with by a specific, different, method, explained on this page.
If $f$ is a monotone and differentiable function, then the density of $Y = f(X)$ is given by
$$ p_{Y}(y) = \left| \frac{1}{f'(f^{-1}(y))} \right| \cdot p_X(f^{-1}(y)) $$
where $p_X$ is the density of $X$.
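As an illustrative check of this formula (my own sketch, with arbitrary choices of $f$, sample size, and tolerance), take $f(x)=e^x$ and $X\sim N(0,1)$; the probability of an interval computed from the transformed density should match direct simulation of $Y=e^X$:

```python
import math, random

# Monotone change-of-variables check with f(x) = exp(x), X ~ N(0,1),
# so Y = exp(X) is lognormal.
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)   # N(0,1) density
p_Y = lambda y: phi(math.log(y)) / y    # |1/f'(f^{-1}(y))| * p_X(f^{-1}(y))

# P(1 <= Y <= 2) from the transformed density (midpoint rule on [1, 2])...
m = 10000
h = 1.0 / m
prob_density = sum(p_Y(1 + (k + 0.5) * h) for k in range(m)) * h

# ...and from direct simulation of Y = exp(X).
random.seed(0)
n = 200000
prob_mc = sum(1 for _ in range(n)
              if 1.0 <= math.exp(random.gauss(0, 1)) <= 2.0) / n

assert abs(prob_density - prob_mc) < 0.01
```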
You can use the law of conditional probability:
$P(A)=\int_{-\infty}^{\infty}P(A\mid B=b)\,f(b)\,db$
So in your case, for a random variable $X\in[0,1)$:
$P(x>f(X))=\int^{\infty}_{-\infty}[x>f(a)][0<a<1]da=\int^{1}_{0}[x>f(a)]da$
Where the brackets are Iverson brackets.
Example, the distribution for a random variable $X\in[0,1)$ squared:
$P(x>X^2)=\int^{1}_{0}[x>a^2]da=\int^{1}_{0}[\sqrt{x}>a]da=\int^{\sqrt{x}}_{0}1da=\sqrt{x}$
Assuming $x\in[0,1]$. The pdf is then $\frac{d\sqrt{x}}{dx}=\frac{1}{2\sqrt{x}}$. |
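A minimal Monte Carlo check of this worked example (illustrative; the sample size and tolerance are arbitrary):

```python
import math, random

# Check P(X^2 <= x) = sqrt(x) for X uniform on [0, 1), at x = 0.25.
random.seed(1)
n = 200000
x = 0.25
emp = sum(1 for _ in range(n) if random.random() ** 2 <= x) / n
assert abs(emp - math.sqrt(x)) < 0.01   # sqrt(0.25) = 0.5
```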
So the problem is asking me to find the electric field a height $z$ above the center of a square sheet of side $a$.
I approach the problem a different way than the book: I first derive the electric field a height $z$ above the center of a square loop of side $a$ carrying line charge $\lambda$, and I verified it to be $\frac{1}{4\pi\epsilon_0}$ $\frac{\lambda a z}{(z^2+\frac{a^2}{4}) (z^2+\frac{a^2}{2})^\frac{1}{2}}$ $\hat z$
Now the way I do it is to let that loop have a small thickness $da$, where $da$ is a width element,
not an area element (as the side of the square is $a$). The linear charge density $\lambda$ is then the surface charge density multiplied by that small thickness, that is,
$\lambda = \sigma da$
So the Electric field $dE$ due to a line of small thickness $da$ is
$dE$ = $\frac{1}{4\pi\epsilon_0}$ $\frac{\sigma da z a}{(z^2+\frac{a^2}{4}) (z^2+\frac{a^2}{2})^\frac{1}{2}}$ $\hat z$
I integrate this field from $0$ to $a$ then,
$E$ = $\frac{\sigma z}{4\pi\epsilon_0}$ $\int_0^a$ $\frac{ada}{(z^2+\frac{a^2}{4}) (z^2+\frac{a^2}{2})^\frac{1}{2}}$ $\hat z$
This integral yields $\frac{4}{z}$ $\tan^{-1}\left(\sqrt{1+\frac{a^2}{2z^2}}\right)$ $\Big|^{a}_{0}$
= $\frac{4}{z}$ $\left[\tan^{-1}\left(\sqrt{1+\frac{a^2}{2z^2}}\right) -\frac{\pi}{4}\right]$
That is the value of the integral, now multiply it by $\frac{\sigma z}{4\pi\epsilon_0}$
Then $E$=$\frac{\sigma}{\pi\epsilon_0}$ $\left[\tan^{-1}\left(\sqrt{1+\frac{a^2}{2z^2}}\right) -\frac{\pi}{4}\right]$
I'm missing it by a factor of 2; the answer should be $\frac{2\sigma}{\pi\epsilon_0}$ $\left[\tan^{-1}\left(\sqrt{1+\frac{a^2}{2z^2}}\right) -\frac{\pi}{4}\right]$
I'm pretty sure about the mathematical steps, I'm assuming I made a false assumption at the beginning, but its been more than 20 hours and I still haven't figured out what it is, any help would be appreciated.
Here's a picture to show you how I think I can do it
This red line is of width $da$ and I want to integrate $dE$ from $0$ to $a$ |
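For what it's worth, the integral evaluation itself can be checked numerically (an illustrative Python sketch; this verifies only the calculus from $dE$ to $E$, not the $\lambda = \sigma\,da$ assumption, so it is consistent with the suspicion that the issue lies in the setup rather than the integration):

```python
import math

# Numerically check the antiderivative used above:
#   I(A) = integral_0^A  a da / ((z^2 + a^2/4) * sqrt(z^2 + a^2/2))
#        = (4/z) * [ atan(sqrt(1 + A^2/(2 z^2))) - pi/4 ]
def integrand(a, z):
    return a / ((z * z + a * a / 4) * math.sqrt(z * z + a * a / 2))

def simpson(f, lo, hi, n=2000):          # composite Simpson's rule, n even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

z, A = 1.0, 2.0                          # arbitrary test values
numeric = simpson(lambda a: integrand(a, z), 0.0, A)
closed = (4 / z) * (math.atan(math.sqrt(1 + A * A / (2 * z * z))) - math.pi / 4)
assert abs(numeric - closed) < 1e-8      # for z=1, A=2 both equal pi/3
```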
I am trying to understand a single factor Random effects anova from a book.
In this case : -
$y_{ij} = \mu + \tau_i + e_{ij}$, where $i = 1,\ldots,a$ and $j= 1,\ldots,n$.
Assumptions:
The treatment effects $\tau_i$ are $N(0,\sigma_\tau^2)$, the errors $e_{ij}$ are $N(0,\sigma^2)$, and $\tau_i$ and $e_{ij}$ are independent.
Hence we have: $\operatorname{Var}(y_{ij}) = \sigma_\tau^2 + \sigma^2$
The observations in the random effects model are normally distributed because they are linear combinations of the two normally and independently distributed random variables $\tau_i$ and $e_{ij}$. However, unlike the fixed effects case, in which all of the observations $y_{ij}$ are independent, in the random model the observations $y_{ij}$ are only independent if they come from different factor levels.
Note that the observations within a specific factor level all have the same covariance, because before the experiment is conducted, we expect the observations at that factor level to be similar because they all have the same random component. Once the experiment has been conducted, the observations can be assumed to be independent, because the parameter $\tau_i$ has been determined and the observations in that treatment differ only because of random error.
I don't understand the above paragraph. Can some one please explain? |
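A simulation may make the quoted paragraph concrete: within a factor level, two observations share the same draw of $\tau_i$, so their covariance (taken across levels) is $\sigma_\tau^2$ (an illustrative sketch with arbitrary values $\sigma_\tau^2 = 4$, $\sigma^2 = 1$):

```python
import random

# Within a factor level, y_i1 and y_i2 share tau_i, so
# Cov(y_i1, y_i2) = sigma_tau^2 across levels.
random.seed(2)
a = 100000                        # number of factor levels
pairs = []
for _ in range(a):
    tau = random.gauss(0, 2)      # sigma_tau = 2, so sigma_tau^2 = 4
    pairs.append((tau + random.gauss(0, 1),   # y_i1
                  tau + random.gauss(0, 1)))  # y_i2

m1 = sum(p[0] for p in pairs) / a
m2 = sum(p[1] for p in pairs) / a
cov = sum((p[0] - m1) * (p[1] - m2) for p in pairs) / a
assert abs(cov - 4.0) < 0.2       # empirical covariance ~ sigma_tau^2
```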
Answer
$\sqrt{5}$
Work Step by Step
Using the properties of radicals, the given expression, $ \dfrac{\sqrt{55}}{\sqrt{11}} ,$ simplifies to \begin{array}{l}\require{cancel} \sqrt{\dfrac{55}{11}} \\\\= \sqrt{5} .\end{array}
Revision as of 11:33, 23 September 2015
Homework 5, ECE438, Fall 2015, Prof. Boutin
Hard copy due in class, Wednesday September 30, 2015.
The goal of this homework is to get an intuitive understanding of how to process DT signals with different sampling frequencies in an equivalent fashion.
Question 1 Downsampling and upsampling
a) What is the relationship between the DT Fourier transform of x[n] and that of y[n]=x[4n]? (Give the mathematical relation and sketch an example.)
b) What is the relationship between the DT Fourier transform of x[n] and that of
$ z[n]=\left\{ \begin{array}{ll} x[n/4],& \text{ if } n \text{ is a multiple of } 4,\\ 0, & \text{ else}. \end{array}\right. $
(Give the mathematical relation and sketch an example.)
Question 2 Downsampling and upsampling
Let $ x_1[n]=x(Tn) $ be a sampling of a CT signal $ x(t) $. Let D be a positive integer.
a) Under what circumstances is the downsampling $ x_D [n]= x_1 [Dn] $ equivalent to a resampling of the signal with a new period equal to DT (i.e. $ x_D [n]= x(DT n) $)?
b) Under what circumstances is it possible to construct the sampling $ x_3[n]= x(\frac{T}{D} n) $ directly from $ x_1[n] $ (without reconstructing x(t))?
Question 3
Define System 1 as the following LTI system
$ x[n]\rightarrow \left[ \begin{array}{ccc} & & \\ & \text{CT filter with frequency response } H(f) & \\ & & \end{array}\right] \rightarrow y(t) $
where H(f) is a band-pass filter with no gain and cutoff frequencies f1=200Hz and f2=600Hz.
a) Sketch the graph of the frequency response H(f) of System 1.
b) Sketch the graph of the frequency response $ H_d(\omega) $ that would make the following system equivalent to System 1.
$ x(t) \rightarrow x[n]=x\left(\frac{n}{6000}\right) \rightarrow \left[ \begin{array}{ccc} & & \\ & \text{DT filter with frequency response } H_d(\omega) & \\ & & \end{array}\right] \rightarrow \left[ \begin{array}{ccc} & & \\ & \text{D/C Converter} & \\ & & \end{array}\right] \rightarrow y(t) $
c) Sketch the graph of the frequency response $ H_d(\omega) $ that would make the following system equivalent to System 1.
Hand in a hard copy of your solutions. Pay attention to rigor!
Presentation Guidelines
Write only on one side of the paper.
Use a "clean" sheet of paper (e.g., not torn out of a spiral book).
Staple the pages together.
Include a cover page.
Do not let your dog play with your homework.
Discussion
You may discuss the homework below.
I have been asked for clarification so let me try to rephrase. In each part, I am asking you to describe the digital filter that is equivalent to the given analog filter. In other words, how can you process the signal in the discrete-time domain, instead of the continuous-time domain? Does that help? Let me know. -Prof. Mimi
I'll discuss it in intuitive terms.
Both confidence intervals and prediction intervals in regression take account of the fact that the intercept and slope are uncertain - you estimate the values from the data, but the population values may be different (if you took a new sample, you'd get different estimated values).
A regression line will pass through $(\bar x, \bar y)$, and it's best to center the discussion about changes to the fit around that point - that is to think about the line $y= a + b(x-\bar x)$ (in this formulation, $\hat a = \bar y$).
If the line went through that $(\bar x, \bar y)$ point, but the slope were little higher or lower (i.e. if the height of the line at the mean was fixed but the slope was a little different), what would that look like?
You'd see that the new line would move further away from the current line near the ends than near the middle, making a kind of slanted X that crossed at the mean (as each of the purple lines below do with respect to the red line; the purple lines represent the estimated slope $\pm$ two standard errors of the slope).
If you drew a collection of such lines with the slope varying a little from its estimate, you'd see the distribution of predicted values near the ends 'fan out' (imagine the region between the two purple lines shaded in grey, for example, because we sampled again and drew many such slopes near the estimated one; we can get a sense of this by bootstrapping a line through the point ($\bar{x},\bar{y}$)). Here's an example using 2000 resamples with a parametric bootstrap:
If instead you take account of the uncertainty in the constant (making the line pass close to but not quite through $(\bar x, \bar y)$), that moves the line up and down, so intervals for the mean at any $x$ will sit above and below the fitted line.
(Here the purple lines are $\pm$ two standard errors of the constant term either side of the estimated line).
When you do both at once (the line may be up or down a tiny bit, and the slope may be slightly steeper or shallower), then you get some amount of spread at the mean, $\bar x$, because of the uncertainty in the constant, and you get some additional fanning out due to the slope's uncertainty, between them producing the characteristic hyperbolic shape of your plots.
That's the intuition.
Now, if you like, we can consider a little algebra (but it's not essential):
It's actually the square root of the sum of the squares of those two effects - you can see it in the confidence interval's formula. Let's build up the pieces:
The $a$ standard error with $b$ known is $\sigma /\sqrt{n}$ (remember $a$ here is the expected value of $y$ at the
mean of $x$, not the usual intercept; it's just a standard error of a mean). That's the standard error of the line's position at the mean ($\bar x$).
The $b$ standard error with $a$ known is $\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}$. The effect of uncertainty in slope at some value $x^*$ is multiplied by how far you are from the mean ($x^*-\bar x$) (because the change in level is the change in slope times the distance you move), giving $(x^*-\bar x)\cdot\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}$.
Now the overall effect is just the square root of the sum of the squares of those two things. (Why? Because variances of uncorrelated things add, and if you write your line in the $y= a + b(x-\bar x)$ form, the estimates of $a$ and $b$ are uncorrelated.) So the overall standard error is the square root of the overall variance, and the variance is the sum of the variances of the components - that is, we have
$\sqrt{(\sigma /\sqrt{n})^2+ \left[(x^*-\bar x)\cdot\sigma/\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2}\right]^2 }$
A little simple manipulation gives the usual term for the standard error of the estimate of the mean value at $x^*$:
$\sigma\sqrt{\frac{1}{n}+ \frac{(x^*-\bar x)^2}{\sum_{i=1}^n (x_i-\bar{x})^2} }$
If you draw that as a function of $x^*$, you'll see it forms a curve (looks like a smile) with a minimum at $\bar x$, that gets bigger as you move out. That's what gets added to / subtracted from the fitted line (well, a multiple of it is, in order to get a desired confidence level).
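The decomposition can be checked directly in code (a sketch with toy data; $\sigma$ treated as known, and the data values are arbitrary): the displayed standard error equals the square root of the sum of the two variance pieces, and it is minimized at $x^*=\bar x$.

```python
import math, random

# Check that sigma * sqrt(1/n + (x* - xbar)^2 / Sxx) equals the
# square root of the sum of the two independent variance pieces,
# and that it is minimized at x* = xbar.
random.seed(3)
n = 30
xs = [random.uniform(0, 10) for _ in range(n)]
xbar = sum(xs) / n
Sxx = sum((x - xbar) ** 2 for x in xs)
sigma = 2.0

def se_mean(xstar):
    return sigma * math.sqrt(1.0 / n + (xstar - xbar) ** 2 / Sxx)

def se_pieces(xstar):
    var_a = sigma ** 2 / n                          # height at xbar
    var_b = (xstar - xbar) ** 2 * sigma ** 2 / Sxx  # slope effect
    return math.sqrt(var_a + var_b)

for xstar in (-5.0, 0.0, xbar, 10.0, 25.0):
    assert abs(se_mean(xstar) - se_pieces(xstar)) < 1e-12
    assert se_mean(xstar) >= se_mean(xbar)          # minimum at xbar
```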
[With prediction intervals, there's also the variation in position due to the process variability; this adds another term that shifts the limits up and down, making a much wider spread, and because that term usually dominates the sum under the square root, the curvature is much less pronounced.] |
JetEtMissPublicResultsINSITU
Caption: Mean jet multiplicity for jets with $\pt>10\GeV$ as a function of \pt\ of the $Z$ boson in $Z$\,+\,jet events.
Caption: $\Delta \phi$ between the photon and the jet for a) {\cone}algorithm with $R = 0.4$.
Caption: $\Delta \phi$ between the photon and the jet for b) \kt\ algorithm with $D = 1$.
Caption: Left in all rows: the mean value of the fitted \pt\ balance $B_\Sigma$ as a function of $p_{\mathrm{T},\,Z}$ in $Z$\,+\,jet events. Particle level jets (squares) and jets reconstructed from detector signals (circles) are shown. Middle in all rows: $B_\Sigma$ distribution for $p_{\mathrm{T},\,Z} \sim 50\GeV$ for truth jets. Right in all rows: $B_\Sigma$ distribution for $p_{\mathrm{T},\,Z} \sim 50\GeV$ for reconstructed jets. Upper row: all jets with $\pt>1\GeV$ are taken into account. Middle row: only jets with $\pt>10\GeV$ are used and the requirement $|\pi -\Delta\phi| < 0.2$ is imposed. Lower row: in addition, no further jet with $\pt>10\GeV$ is allowed.
Caption: Mean value of the fitted \pt\ balance ($B_1$ + 1) as a function of $p_\mathrm{T,\,\gamma}$ in $\gamma$\,+\,jet {\herwig} events for various jet algorithms. The points correspond to particle level and parton level jets.
Caption: The \pt\ of the parton versus the \pt\ of the photon as produced in the hard interaction in $\gamma$\,+\,jet events.
Caption: The solid line shows the balance when the {\pt} reference for binning istaken as the average {\pt} of the photon and the jet; the triangleswhen it is taken as the photon {\pt}. The circles show the balancewhen the photon {\pt} is used and the photon and the jet are required tobe back-to-back within 0.2.
Caption: The solid line shows the balance when the {\pt} reference for binning is taken as the average {\pt} of the photon and the jet; the triangles when it is taken as the photon {\pt}. The circles show the balance when the photon {\pt} is used and the photon and the jet are required to be back-to-back within 0.2. Right: the {\pt} dependence of the most probable value of the particle level jet balance for these three cases.
Caption: The most probable value of the balance at reconstruction level for {\cone} jets with $R = 0.7$. Black and dots are for default and tight selection, respectively, and the points show the truth level balance. The back-to-back $\Delta \phi$ cut is applied.
Caption: Left: {\pt} balance for the background sample of $140 <\pt < 280\GeV$ for the default and tight photon selection.
Caption: Right: {\pt} balance for the signal and background sample in the interval ${96 <p_{\mathrm{T},\,\gamma} < 224\GeV}$ for tight photon selection.
Caption: Distribution of the dielectron mass for \Zej\ events and the relevant background in a simulated event sample corresponding to an integrated luminosity of $200$\,{\ipb} with {\cone} jets with $R = 0.7$.
Caption: The \pt\ balance for an integrated luminosity of 500\,{\ipb} of {\cone} jets with $R = 0.7$ in events generated with {\alpgen} in 5 bins of $p_{\mathrm{T},\,Z}$. The red dots are for reconstructed jets, solid triangles for truth jets and open triangles for truth in bins of average
Caption: The \pt\ balance for an integrated luminosity of $120\ipb$ and $500\ipb$ in events generated with {\alpgen} (dots and triangles, respectively) and for $120\ipb$ in events generated with {\pythia} (squares) in bins of $p_{\mathrm{T},\,Z}$ for {\cone} jets with $R = 0.7$.
Caption: The energy dependence of the jet response for {\cone} jets with $R = 0.4$. The solid line corresponds to the fit using Eq.~\ref{EE}.
Caption: The ratios $E_\mathrm{T}^\mathrm{MC}/E_\mathrm{T}^\mathrm{calib}$ (triangles) and $E_\mathrm{T}^\mathrm{MC}/E_\mathrm{T}^\mathrm{meas}$ (squares) for jets reconstructed using the {\cone} algorithm with $R=0.4$. See the text for an explanation of the symbols.
Caption: The jet response $\pt(\mbox{reconstructed})/\pt(\mbox{truth})$ at the EM scale versus the jet pseudorapidity {\eta}.
Caption: Left: The jet rate as a function of $\phi$ for jets with the transverse momentum above a certain threshold.
Caption: Right: Integrated luminosity required to collect 1000 events with jets above the given \pt\ thresholds in each of the 64 $\phi$ sectors in the region $|\eta| < 0.1$.
Caption: Left: The asymmetry $A$ as measured with both jets in the central region $|\eta| < 0.7$ as defined in Eq.~\ref{dijetasymmetry}.
Caption: The mean asymmetry obtained from Gaussian fits, plotted as a function of the half scalar sum of \pt\ of both jets at the reconstruction level (closed circles) and at the truth particle level (stars).
Caption: Integrated luminosity required to reach $0.5$\% precision for various \pt\ ranges in the region $0.7 < \eta < 0.8$ with different sets of selection cuts: all {\pythia} dijet events (circles), requiring $\Delta\phi > 3$ between the two leading jets (triangles), requiring in addition less than 4 reconstructed jets in an event (squares), requiring exactly two reconstructed jets (stars).
Caption: Energy scale of high \pt\ jets relative to lower \pt\ remnant jets as a function of jet \pt, obtained by the multijet \pt\ balance method at an integrated luminosity of 1\,fb$^{-1}$. The error bars shown are statistical only.
Caption: Energy scale uncertainty of high \pt\ jets relative to lower \pt\ remnant jets as a function of jet \pt, obtained by the multijet \pt\ balance method at an integrated luminosity of 1\,fb$^{-1}$. The error bars shown are statistical only.
Caption: The ratio of the absolute value of the vector sum of the non-leading jet \pt\ to the leading jet \pt\ for the \pt\ bin 370--$380\GeV$ fitted by a Gaussian.
Caption: The ratio of the absolute value of the vector sum of the non-leading jet \pt\ to the leading jet \pt\ for the \pt\ bin 370--$380\GeV$ fitted by a Gaussian as a function of jet \pt. The mean and the error of the mean of the Gaussian fits are shown. The average of the leading jet \pt\ and of the total \pt\ of the non-leading jets is used for the binning.
Caption: Results using ATLFAST with {\cone} jets with $R = 0.4$. Left: The fitted balance as a function of the average leading and non-leading jets \pt.
Caption: Results using ATLFAST with {\cone} jets with $R = 0.4$. Iterations of the method using the \pt\ range checked by one iteration as the reference region for the next.
Caption: Distributions of the mean of the $\Delta R$ values for the leading two (solid histogram) and five (dashed histogram) tracks in jets with $140<\pt^{\rm truth}<160\GeV$.
Caption: Distributions of the mean of the $\Delta R$ values for the leading two (solid histogram) and five (dashed histogram) tracks in jets with $1120<\pt^{\rm truth}<1280\GeV$ (right) for an integrated luminosity of $1\ifb$.
Caption: Mean value of the $\Delta R$ distributions as a function of the leading jet truth \pt\ for the leading two (solid points) and five (open points) tracks. The curves represent fits with a function of the form $p_0/x+p_1$.
Caption: Most probable value obtained from a Landau fit to the peak (right) of the $\Delta R$ distributions as a function of the leading jet truth \pt\ for the leading two (solid points) and five (open points) tracks. The curves represent fits with a function of the form $p_0/x+p_1$.
Caption: Jet \pt\ scale uncertainty (statistical uncertainty only) as a function of jet truth \pt\ obtained for different choices of $\Delta R$ values and track multiplicities.
Caption: The most probable $\Delta R$ of the leading two and five tracks as a function of the jet truth \pt\ in {\pythia} (open points) and {\herwig} (solid points).
Caption: Default fit (solid curve) to the most probable $\Delta R$ of the leading two tracks as a function of the truth jet \pt\ and the curves corresponding to $\pm 5\%$ JES variations at $\pt^\mathrm{jet}=5\GeV$ (dashed and dotted).
Caption: Total and individual systematic and statistical uncertainties as a function of the truth jet \pt\ expected to be obtained from the track angle method for an integrated luminosity of $1\ifb$.
Caption: Asymmetry distributions of two jets for two representative \pt\ bins. Cone jets with $R = 0.7$ in the pseudorapidity region $|\eta|<1.2$ are used. The distributions were fitted with a single Gaussian function.
Caption: Resolution versus the $p_\mathrm{T,\,3}$ threshold cut for different \pt\ bins. The line corresponds to the linear fit applied while the dashed line shows the extrapolation to $p_\mathrm{T,\,3} = 0$, which corresponds to an ideal dijet sample ($\epsilon = 0$).
Caption: Jet energy resolution for {\cone} jets with $R = 0.7$ in the pseudorapidity range $|\eta|<1.2$. The results are obtained by using dijet balance techniques with and without applying the soft radiation correction.
Caption: Sketch of the \kt\ balance technique. The $\eta$ axis corresponds to the azimuthal angular bisector of the dijet system while the $\psi$ axis is defined as being orthogonal to the $\eta$ axis.

Major updates:
-- JamesProudfoot - 19 Jun 2009
Responsible: JamesProudfoot
Last reviewed by: Never reviewed
Summary. Using your favourite $O(n^d)$ algorithm for finding a perfect matching in bipartite graphs on $O(n)$ vertices, there is a simple algorithm using $O(\max\{n^{d+2},n^4\})$ operations over the reals for decomposing doubly-stochastic matrices. The run-time bound comes from $O(n^2)$ iterations of a procedure in which each iteration involves finding a matching and modifying the coefficients of an $n \times n$ matrix. (There are undoubtedly ways of streamlining these operations, at least in the time required to modify the matrix coefficients.)
Details. The following proof of the Birkhoff–von Neumann theorem can be found e.g. in [1], and leads to an efficient algorithm for decomposing doubly-stochastic matrices.
We consider matrices $A$ whose rows and columns all sum to the same value (which for a doubly-stochastic input is initially 1). Let $N$ be the number of non-zero entries in your $n \times n$ doubly-stochastic matrix: in decomposing the matrix, we will reduce the number of non-zero entries while keeping the invariant of having constant line-sums. Thus at each stage, we have either $N \geqslant n$ or $N = 0$, by the invariant of having equal line-sums.
Set $t \gets 0$.
While $N > 0$:
Set $t \gets t + 1$. For each $i \in [n]$: let $S_i = \bigl\{ j \in [n]: A_{ij} > 0 \bigr\}$. Find a bijection $\sigma: [n] \to [n]$ such that $\sigma(i) \in S_i$ (such a bijection exists, and can be found efficiently, by Hall's Marriage criterion). Let $u^{(t)} = \min\,\bigl\{A_{i,\sigma(i)} : i \in [n]\bigr\}$, and let $P^{(t)}$ be the permutation matrix for which $P^{(t)}_{i,\sigma(i)} = 1$ for each $i \in [n]$. Update $A \gets \bigl(A - u^{(t)} P^{(t)}\bigr)$, and update $N$ to the number of non-zero elements of $A$ (which is smaller by at least $1$).
Return $\bigl(u^{(1)},P^{(1)}\bigr)$, ... $\bigl(u^{(t)},P^{(t)}\bigr)$, representing the decomposition $\sum_{i\in[t]} u^{(i)} P^{(i)}$ of the input.
[1] Extremal Combinatorics: With Applications in Computer Science by Stasys Jukna |
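The iterative procedure above can be sketched in code. This is my own sketch, using exact rational arithmetic (`Fraction`) so the line-sum invariant holds exactly, and Kuhn's augmenting-path algorithm as the "favourite matching algorithm"; all names are mine, not from the answer:

```python
from fractions import Fraction

def birkhoff_decompose(A):
    """Decompose a doubly stochastic matrix A (list of lists) into a convex
    combination of permutation matrices. Returns a list of (weight, sigma)
    pairs, where sigma[i] is the column of the 1 in row i of the permutation."""
    n = len(A)
    # Work on an exact-arithmetic copy so all line sums stay exactly equal.
    A = [[Fraction(x) for x in row] for row in A]
    result = []
    while any(A[i][j] > 0 for i in range(n) for j in range(n)):
        # Find a perfect matching on the bipartite graph of positive entries
        # (it exists by Hall's criterion, as argued above) via augmenting paths.
        match = [-1] * n  # match[j] = row currently matched to column j

        def try_row(i, seen):
            for j in range(n):
                if A[i][j] > 0 and not seen[j]:
                    seen[j] = True
                    if match[j] == -1 or try_row(match[j], seen):
                        match[j] = i
                        return True
            return False

        for i in range(n):
            assert try_row(i, [False] * n)  # perfect matching must exist
        sigma = [-1] * n
        for j in range(n):
            sigma[match[j]] = j
        # Subtract u * P, zeroing at least one entry while keeping line sums equal.
        u = min(A[i][sigma[i]] for i in range(n))
        for i in range(n):
            A[i][sigma[i]] -= u
        result.append((u, sigma))
    return result
```

Since each round zeroes at least one positive entry, the loop runs at most $n^2$ times, matching the iteration count in the summary.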
Let $C$ be the space nº74 ("double origin topology") in Steen & Seebach's
Counterexamples in Topology, chosen because it is the only one listed there that is T2 and path-connected but not T3:
$C$ consists of the set of points of the plane $\mathbb{R}^2$ together with an additional point $0^*$. Neighborhoods of points other than the origin $0$ and the point $0^*$ are the usual open sets of $\mathbb{R}^2\setminus\{0\}$; as a basis of neighborhoods of $0$ and $0^*$, we take $V_n(0) = \{(x,y) : x^2+y^2 < 1/n^2 \land y>0\} \cup \{0\}$ and $V_n(0^*) = \{(x,y) : x^2+y^2 < 1/n^2 \land y<0\} \cup \{0^*\}$.
In other words we have replaced the origin in the plane by an "upper origin" $0$ and a "lower origin" $0^*$, the neighborhoods of the upper origin being sets containing an open half-disk centered at the origin, plus the upper origin itself, and similarly for the lower origin.
The space $C$ is Hausdorff, but not T2½ because $0$ and $0^*$ do not have disjoint closed neighborhoods; in particular, it is not T3 or T3½. So there is no continuous function $C \to \mathbb{R}$ taking different values on $0$ and $0^*$.
Let $c_0 = 0$ and $c_1 = 0^*$. Then there is a path connecting $c_0$ and $c_1$: indeed, there is a path connecting $c_0$ to, say, $(1,0)$, and one connecting $(1,0)$ to $c_1$ (we can even find "arcs", i.e., injective paths, if we want). If we have a continuous function $C \to X$ taking $c_0$ to $x$ and $c_1$ to $y$, then right-composing it with the path just mentioned gives a path connecting $x$ and $y$: so $(C,c_0,c_1)$-connectedness implies path-connectedness. On the other hand, $\mathbb{R}$ is not $(C,c_0,c_1)$-connected because of what was said in the previous paragraph.
I think this answers the question, with the additional constraint that $C$ is Hausdorff and $c_0 \neq c_1$. (I just noticed that Simon Henry had the same idea in the comments.)
However, it doesn't answer the question that I think should have been asked, namely to also require $C$ itself to be $(C,c_0,c_1)$-connected (in the above example, it's pretty clear that it's not). |
Answer
$4$
Work Step by Step
Using the properties of radicals, the given expression, $ \dfrac{\sqrt{96}}{\sqrt{6}} ,$ simplifies to \begin{array}{l}\require{cancel} \sqrt{\dfrac{96}{6}} \\\\= \sqrt{16} \\\\= \sqrt{(4)^2} \\\\= 4 .\end{array}
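A quick numerical sanity check of the simplification (my addition, not part of the original solution):

```python
import math

# sqrt(96)/sqrt(6) = sqrt(96/6) = sqrt(16) = 4
assert math.isclose(math.sqrt(96) / math.sqrt(6), 4.0)
```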
I asked the following question here.
Let $f: [0, 1]^2 \to \mathbb{R}$ be such that for every $x \in [0, 1]$ the function $y \to f(x, y)$ is Lebesgue measurable on $[0, 1]$ and for every $y \in [0, 1]$ the function $x \to f(x, y)$ is continuous on $[0, 1]$.
Is $f$ measurable with respect to the completion of the product $\sigma$-algebra $\mathcal{A} \times \mathcal{A}$ on $[0, 1]^2$?
Here $\mathcal{A}$ is the Lebesgue $\sigma$-algebra on $[0, 1]$.
John Dawkins gave the following answer.
Yes. For $n\in\Bbb N$ define $$ f_n(x,y)=\cases{f(k/n,y),&$(k-1)/n\le x<k/n, k=1,2,\ldots,n$\cr f(1,y),&$x=1$.\cr} $$ The function $f_n$ is $\mathcal A\otimes\mathcal A$-measurable (even $\mathcal B\otimes\mathcal A$-measurable, where $\mathcal B$ denotes the Borel subsets of $[0,1]$). Because $f_n$ converges pointwise to $f$, the function $f$ is $\mathcal B\otimes\mathcal A$-measurable.
$($The notation $\mathcal{A} \times \mathcal{A}$ is ambiguous: it is literally the Cartesian product of $\mathcal{A}$ with itself; namely $\{(A_1, A_2) : A_i \in \mathcal{A}\}$. The $\sigma$-field of interest here is $\sigma\{A_1 \times A_2 : A_i \in \mathcal{A}\}$, for which I prefer the notation $\mathcal{A} \otimes \mathcal{A}$.$)$
Why does it follow from $f_n$ converging pointwise to $f$ that $f$ is $\mathcal{B} \otimes \mathcal{A}$-measurable? |
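For reference, the standard fact behind this step (my summary, not part of the original exchange) is that a pointwise limit of measurable functions is measurable, because the super-level sets of the limit can be built from those of the $f_n$ by countable operations:

```latex
% If f_n -> f pointwise and each f_n is B (x) A-measurable, then for every a,
\{f > a\} \;=\; \bigcup_{k \geq 1} \,\bigcup_{N \geq 1} \,\bigcap_{n \geq N} \bigl\{f_n > a + \tfrac{1}{k}\bigr\},
% and each set on the right lies in B (x) A, which is closed under countable
% unions and intersections; hence {f > a} is in B (x) A for every a.
```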
Easy observations first:
- The image is unbounded, since $0$ goes to $\infty$.
- The points $a_1, \dots, a_n$ are mapped to $0$, so the image of the boundary curve comes back to hit $0$ over and over.
- Each such return to $0$ makes an angle of $\pi \lambda_k$ with vertex $0$.
- The sum of these angles is $2\pi$.
Trying to imagine this leads to a conjecture: the image is the complement of the "star with no interior", i.e., the union of $n$ line segments joined at $0$.
To show that the guess is correct, it suffices to prove that $\arg f(z)$ remains constant on each arc between the points $a_k$. Branches may merit discussion elsewhere, but here all that matters is that we take some continuous branch of $\arg f$ on such an arc, and show it's constant.
As a warm-up, check that $$\arg (1+z) = \frac12 \arg z$$for every $z$ on the unit circle. Indeed, the triangle $-1, 0, z$ is isosceles, which implies its angle at $-1$ is $\frac12(\pi - (\text{angle at $0$}))$, which was the claim.
So, the rate of change of $\arg(1+z)$ is half the rate of change of $\arg z$. But this applies equally well to $\arg(z-a)$ for every unimodular $a$, since rotating the picture changes the arguments by a constant amount. It follows that the sum $$-\arg z + \sum_{k=1}^{n} \lambda_k \arg(z-a_k) $$has zero rate of change, thanks to the condition $\sum \lambda_k = 2$. And this was $\arg f(z)$. |
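As a quick numerical illustration of the constancy of $\arg f$ on each arc (my own check): take $n=2$, $a_{1,2}=\pm 1$, $\lambda_1=\lambda_2=1$, so $f(z) = z^{-1}(z-1)(z+1) = z - 1/z$, and on the unit circle $f(e^{it}) = 2i\sin t$.

```python
import cmath

# n = 2, a = +/-1, lambda_k = 1 each (so the condition sum(lambda) = 2 holds);
# then f(z) = (z - 1)(z + 1)/z and on the unit circle f(e^{it}) = 2i sin t.
def f(z):
    return (z - 1) * (z + 1) / z

# arg f should be constant on the open upper arc between a_2 = -1 and a_1 = 1
args = [cmath.phase(f(cmath.exp(1j * t))) for t in (0.3, 1.0, 2.0, 2.8)]
assert all(abs(a - args[0]) < 1e-9 for a in args)  # constant (here pi/2)
```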
First of all, do note that the momentum conservation equations should read $h_{\rho\nu} \nabla_\mu T^{\mu\nu}=0$, otherwise you would have a single equation.
The strict distinction between energy and momentum conservation is only valid in the Local Rest Frame, the frame of the fluid element with velocity $u^\mu$: for some other observer with velocity $v^\mu \neq u^\mu$ the equation $u_\nu \nabla_\mu T^{\mu\nu}=0$ will not represent only energy conservation.
Now, to answer question 1: the $\mu\nu$ component of the stress-energy tensor is the flow of the $\mu$-th component of four-momentum through a surface of constant $x^\nu$, so if we project using the vector $u_\nu$ we are getting something which, in the LRF, is conservation of four-momentum through surfaces of constant time. This, coupled with the fact that the time component of the four-momentum is the energy is the motivation for the name.
The reasoning for the momentum conservation equations is similar.
To convince yourself that this makes sense, you can look at the nonrelativistic limit of these equations: see the Theoretical Physics Reference. There, they use the fact that in the LRF $u^\mu = (1, 0,0,0)$ to write the energy conservation equation as $\nabla_\mu T^{\mu 0}=0$, and similarly for the others. There they derive Euler's equations for a perfect fluid, but a very similar line of reasoning applies to the general case.
As for question 2, very loosely following section 3 in Taub (1978): to see that $h_{\mu}^{\nu} = \delta_\mu^\nu + u^\nu u_\mu$ is an orthogonal projector onto the subspace orthogonal to $u^\mu$ we need to check that it is idempotent ($h^\mu_\rho h^\rho_\nu = h^\mu _\nu$), that it is orthogonal, and that its $\ker$ is just the span of $u^\mu$:
$$h^\mu_\rho h^\rho_\nu =(\delta^\mu_\rho + u^\mu u_\rho)(\delta^\rho_\nu + u^\rho u_\nu) =\delta^\mu_\nu + 2 u^\mu u_\nu + u^\mu (u_\rho u^\rho) u_\nu = \delta^\mu_\nu + u^\mu u_\nu = h^\mu _\nu$$
since $u^\rho u_\rho = -1$. Orthogonality can be seen immediately from the definition.
Also, $(\delta^\mu_\nu + u^\mu u_\nu) u^\nu = 0$ but for a vector $k^\mu$ with $k^\mu u_\mu = 0$ we have $ (\delta^\mu_\nu + u^\mu u_\nu) k^\nu = k^\mu$.
This proves the statement. To get the (0,2)-tensor it is just necessary to lower an index.
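These three properties can also be checked numerically. The following is my own sketch (not from Taub), using a boosted four-velocity in the $(-,+,+,+)$ signature:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # Minkowski metric, signature (-,+,+,+)
v = 0.6                                   # an arbitrary boost speed
gamma = 1.0 / np.sqrt(1.0 - v**2)
u = np.array([gamma, gamma * v, 0.0, 0.0])  # u^mu, normalized so u^mu u_mu = -1
u_low = eta @ u                             # u_mu
assert np.isclose(u @ u_low, -1.0)

h = np.eye(4) + np.outer(u, u_low)          # h^mu_nu = delta^mu_nu + u^mu u_nu
assert np.allclose(h @ h, h)                # idempotent
assert np.allclose(h @ u, 0.0)              # kernel contains u^mu

k = np.array([gamma * v, gamma, 0.0, 0.0])  # a vector with k^mu u_mu = 0
assert np.isclose(k @ u_low, 0.0)
assert np.allclose(h @ k, k)                # fixes vectors orthogonal to u
```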
To see why one might define $h_{\mu\nu}$ this way you can start by looking at the projection tensor onto the
time-like subspace, which will be proportional to $u_\mu u^\nu$: $\pi_\mu^\nu = \alpha u_\mu u^\nu$, and since we want it to leave $u^\nu$ unchanged it will have to be $\alpha = -1$.
Then, the way to find $h_{\mu\nu}$ is to write the equation
$$\delta_\mu^\nu = h_\mu^\nu + \pi_\mu^\nu$$ |
Another way to obtain a new set from two given sets \(A\) and \(B\) is to form ordered pairs. An ordered pair \((x,y)\) consists of two values \(x\) and \(y\). Their order of appearance is important, so we call them first and second elements respectively. Consequently, \((a,b)\neq (b,a)\) unless \(a=b\). In general, \((a,b)=(c,d)\) if and only if \(a=c\) and \(b=d\).
Definition: Cartesian Product
The Cartesian product of \(A\) and \(B\) is the set
\[A \times B = \{ (a,b) \mid a \in A \wedge b \in B \}\]
Thus, \(A \times B\) (read as “\(A\) cross \(B\)”) contains all the ordered pairs in which the first elements are selected from \(A\), and the second elements are selected from \(B\).
Example \(\PageIndex{1}\label{eg:cartprod-01}\)
Let \(A = \{\mbox{John}, \mbox{Jim}, \mbox{Dave}\}\) and \(B = \{\mbox{Mary}, \mbox{Lucy}\}\). Determine \(A\times B\) and \(B\times A\).
Solution
We find \[\displaylines{ A\times B = \{ (\mbox{John},\mbox{Mary}), (\mbox{John},\mbox{Lucy}), (\mbox{Jim}, \mbox{Mary}), (\mbox{Jim}, \mbox{Lucy}), (\mbox{Dave},\mbox{Mary}), (\mbox{Dave},\mbox{Lucy})\}, \cr B\times A = \{ (\mbox{Mary},\mbox{John}), (\mbox{Mary},\mbox{Jim}), (\mbox{Mary},\mbox{Dave}), (\mbox{Lucy},\mbox{John}), (\mbox{Lucy},\mbox{Jim}), (\mbox{Lucy},\mbox{Dave})\}. \cr}\] In general, \(A\times B \neq B\times A\).
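As a side note, this enumeration can be checked mechanically; Python's `itertools.product` computes exactly this Cartesian product:

```python
from itertools import product

A = ["John", "Jim", "Dave"]
B = ["Mary", "Lucy"]
AxB = set(product(A, B))
BxA = set(product(B, A))
assert ("John", "Mary") in AxB and ("Mary", "John") in BxA
assert AxB != BxA  # A x B != B x A in general
```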
Example \(\PageIndex{2}\label{eg:cartprod-02}\)
Determine \(A \times B\) and \(A \times A\):
\(A=\{1,2\}\) and \(B=\{2,5,6\}\).
\(A=\{5\}\) and \(B=\{0,7\}\).
Solution
(a) We find \[\begin{aligned} A\times B &=& \{(1,2), (1,5), (1,6), (2,2), (2,5), (2,6)\}, \\ A\times A &=& \{(1,1), (1,2), (2,1), (2,2)\}. \end{aligned}\] (b) The answers are \(A\times B = \{(5,0), (5,7)\}\), and \(A\times A = \{(5,5)\}\).
hands-on exercise \(\PageIndex{1}\label{he:cartprod-01}\)
Let \(A=\{a,b,c,d\}\) and \(B=\{r,s,t\}\). Find \(A\times B\), \(B\times A\), and \(B\times B\).
Example \(\PageIndex{3}\label{eg:cartprod-03}\)
Determine \(\wp(\{1,2\}) \times \{3,7\}\). Be sure to use correct notation.
Solution
For a complicated problem, divide it into smaller tasks and solve each one separately. Then assemble them to form the final answer. In this problem, we first evaluate \[\wp(\{1,2\}) = \big\{\emptyset, \{1\}, \{2\}, \{1,2\} \big\}.\] This leads to \[\begin{aligned} \wp(\{1,2\}) \times \{3,7\} &=& \big\{\emptyset, \{1\}, \{2\}, \{1,2\} \big\} \times \{3,7\} \\ &=& \big\{ (\emptyset,3), (\emptyset,7), (\{1\},3), (\{1\},7), (\{2\},3), (\{2\},7), (\{1,2\},3), (\{1,2\},7) \big\}. \end{aligned}\] Check to make sure that we have matching left and right parentheses, and matching left and right curly braces.
hands-on exercise \(\PageIndex{2}\label{he:cartprod-02}\)
Find \(\{a,b,c\}\times\wp(\{d\})\).
Example \(\PageIndex{4}\label{eg:cartprod-04}\)
How could we describe the contents of the Cartesian product \([1,3] \times \{2,4\}\)? Since \([1,3]\) is an infinite set, it is impossible to list all the ordered pairs. We need to use the set-builder notation: \[[1,3] \times \{2,4\} = \{ (x,y) \mid 1\leq x\leq3,\; y\in\{2,4\}\}.\] We can also write \([1,3] \times \{2,4\} = \{ (x,2), (x,4) \mid 1\leq x\leq3\}\).
hands-on exercise \(\PageIndex{3}\label{HE:cartprod-03}\)
Describe, using the set-builder notation, the Cartesian product \([1,3] \times [2,4]\).
Cartesian products can be extended to more than two sets. Instead of ordered pairs, we need ordered \(n\)-tuples. The \(n\)-fold Cartesian product of \(n\) sets \(A_1, A_2, \ldots, A_n\) is the set
\[A_1 \times A_2 \times \cdots \times A_n
= \{(a_1,a_2,\ldots,a_n) \mid a_i \in A_i \mbox{ for each } i, 1 \leq i \leq n \}\]
In particular, when \(A_i=A\) for all \(i\), we abbreviate the Cartesian product as \(A^n\).
Example \(\PageIndex{5}\label{eg:cartprod-05}\)
The \(n\)-dimensional space is denoted \(\R^n\). It is the \(n\)-fold Cartesian product of \(\R\). In special cases, \(\R^2\) is the \(xy\)-plane, and \(\R^3\) is the \(xyz\)-space.
hands-on exercise \(\PageIndex{5}\label{he:cartprod-04}\)
Let \(A=\{1,2\}\), \(B=\{a,b\}\), and \(C=\{r,s,t\}\). Find \(A\times B\times C\).
Example \(\PageIndex{6}\label{eg:cartprod-06}\)
From a technical standpoint, \((A \times B) \times C\) is different from \(A \times B \times C\). Can you explain why? Can you discuss the difference, if any, between \((A \times B) \times C\) and \(A \times (B \times C)\)? For instance, give some specific examples of the elements in \((A \times B)\times C\) and \(A \times (B \times C)\) to illustrate their differences.
Solution
The elements of \((A\times B)\times C\) are ordered pairs in which the first coordinates are themselves ordered pairs. A typical element in \((A\times B)\times C\) takes the form of \[\big((a,b),c\big).\] The elements in \(A\times B\times C\) are ordered triples of the form \[(a,b,c).\] Since their elements look different, it is clear that \((A\times B)\times C \neq A\times B\times C\). Likewise, a typical element in \(A\times (B\times C)\) looks like \[\big(a,(b,c)\big).\] Therefore, \((A\times B)\times C \neq A\times(B\times C)\), and \(A\times (B\times C)\neq A\times B\times C\).
Theorem \(\PageIndex{1}\)
For any sets \(A\), \(B\), and \(C\), we have \[\begin{aligned} A \times (B \cup C) &=& (A \times B) \cup (A \times C), \\ A \times (B \cap C) &=& (A \times B) \cap (A \times C), \\ A \times (B - C) &=& (A \times B) - (A \times C).\end{aligned}\]
Remark
How would we show that the two sets \(S\) and \(T\) are equal? We need to show that \[x\in S \Leftrightarrow x\in T.\] The complication in this problem is that both \(S\) and \(T\) are Cartesian products, so \(x\) takes on a special form, namely, that of an ordered pair. Consider the first identity as an example; we need to show that \[(u,v)\in A \times (B \cup C) \Leftrightarrow (u,v)\in (A \times B) \cup (A \times C).\] We prove this in two steps: first showing \(\Rightarrow\), then \(\Leftarrow\), which is equivalent to first showing \(\subseteq\), then \(\supseteq\). Alternatively, we can use \(\Leftrightarrow\) throughout the argument.
Proof 1
Let \((u,v)\in A\times(B\cup C)\). Then \(u\in A\), and \(v\in B\cup C\). The definition of union implies that \(v\in B\) or \(v\in C\). Thus far, we have found
\(u\in A\) and \(v\in B\), or \(u\in A\) and \(v\in C\).
This is equivalent to
\((u,v)\in A\times B\), or \((u,v)\in A\times C\).
Thus, \((u,v)\in (A\times B)\cup(A\times C)\). This proves that \(A\times(B\cup C) \subseteq (A\times B)\cup(A\times C)\).
Next, let \((u,v)\in (A\times B)\cup(A\times C)\). Then \((u,v)\in A\times B\), or \((u,v)\in A\times C\). This means
\(u\in A\) and \(v\in B\), or \(u\in A\) and \(v\in C\).
Both conditions require \(u\in A\), so we can rewrite them as
\(u\in A\), and \(v\in B\) or \(v\in C\);
which is equivalent to
\(u\in A\), and \(v\in B\cup C\).
Thus, \((u,v)\in A\times(B\cup C)\). We have proved that \((A\times B) \cup(A\times C) \subseteq A\times(B\cup C)\). Together with \(A\times (B\cup C) \subseteq (A\times B)\cup(A\times C)\) that we have proved earlier, we conclude that \(A\times(B\cup C) = (A\times B)\cup (A\times C)\).
Proof 2
We shall only prove the first equality. Since \[\arraygap{1.2} \begin{array}{l@{\;\Leftrightarrow\;}l@{\qquad}l} (u,v)\in A \times(B \cup C) & u\in A \wedge v\in (B\cup C) & \mbox{(defn.~of Cartesian product)} \\ & u\in A \wedge (v\in B \vee v\in C) & \mbox{(defn.~of union)} \\ & (u\in A\wedge v\in B)\vee(u\in A\wedge v\in C) & \mbox{(distributive law)} \\ & (u,v)\in A\times B \vee (u,v)\in A\times C & \mbox{(defn.~of Cartesian product)} \\ & (u,v) \in (A \times B) \cup (A \times C) & \mbox{(defn.~of union)} \end{array}\] we conclude that \(A\times(B\cup C) = (A\times B)\cup(A\times C)\).
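All three identities in the theorem can also be sanity-checked on small finite sets (my own example, not part of the text):

```python
A, B, C = {1, 2}, {2, 3}, {3, 4}

def cross(X, Y):
    """The Cartesian product X x Y as a set of ordered pairs."""
    return {(x, y) for x in X for y in Y}

assert cross(A, B | C) == cross(A, B) | cross(A, C)
assert cross(A, B & C) == cross(A, B) & cross(A, C)
assert cross(A, B - C) == cross(A, B) - cross(A, C)
```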
Theorem \(\PageIndex{2}\label{cartprodcard}\)
If \(A\) and \(B\) are finite sets, with \(|A|=m\) and \(|B|=n\), then \(|A\times B| = mn\).
Proof
The elements of \(A\times B\) are ordered pairs of the form \((a,b)\), where \(a\in A\), and \(b\in B\). There are \(m\) choices of \(a\). For each fixed \(a\), we can form the ordered pair \((a,b)\) in \(n\) ways, because there are \(n\) choices for \(b\). Together, the ordered pairs \((a,b)\) can be formed in \(mn\) ways.
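The counting argument is easy to confirm on an example (an illustrative check of mine, reusing the sets from the exercises):

```python
from itertools import product

A, B = {-2, 2}, {0, 4, 7}
# |A x B| = m * n = 2 * 3 = 6
assert len(set(product(A, B))) == len(A) * len(B)
```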
The argument we used in the proof is called the multiplication principle. We shall study it again in Chapter [ch:combo]. In brief, it says that if a job can be completed in several steps, then the number of ways to finish the job is the product of the number of ways to finish each step.
Corollary \(\PageIndex{3}\)
If \(A_1,A_2,\ldots,A_n\) are finite sets, then \(|A_1\times A_2\times \cdots\times A_n| = |A_1| \cdot |A_2|\,\cdots\, |A_n|\).
Corollary \(\PageIndex{4}\)
If \(A\) is a finite set with \(|A|=n\), then \(|\wp(A)|=2^n\).
Proof
Let the elements of \(A\) be \(a_1,a_2,\ldots,a_n\). The elements of \(\wp(A)\) are subsets of \(A\). Each subset of \(A\) contains some elements from \(A\). Associate to each subset \(S\) of \(A\) an ordered \(n\)-tuple \(\big(b_1,b_2,\ldots,b_n\big)\) from \(\{0,1\}^n\) such that \[b_i = \cases{ 0 & if $a_i\notin S$, \cr 1 & if $a_i\in S$. \cr}\] The value of the \(i\)th element in this ordered \(n\)-tuple indicates whether the subset \(S\) contains the element \(a_i\). It is clear that the subsets of \(A\) are in one-to-one correspondence with the \(n\)-tuples. This means the power set \(\wp(A)\) and the Cartesian product \(\{0,1\}^n\) have the same cardinality. Since there are \(2^n\) ordered \(n\)-tuples, we conclude that there are \(2^n\) subsets as well.
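The one-to-one correspondence used in the proof — subsets of \(A\) versus \(n\)-tuples from \(\{0,1\}^n\) — can be made concrete (a small sketch of mine):

```python
from itertools import product

A = ["a1", "a2", "a3"]
# Each bit-tuple (b_1, ..., b_n) encodes the subset {a_i : b_i = 1}.
subsets = [frozenset(a for a, b in zip(A, bits) if b)
           for bits in product([0, 1], repeat=len(A))]
assert len(subsets) == 2 ** len(A)        # there are 2^n bit-tuples ...
assert len(set(subsets)) == 2 ** len(A)   # ... encoding 2^n distinct subsets
```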
This idea of one-to-one correspondence is a very important concept in mathematics. We shall study it again in Chapter [ch:functions].
Summary and Review

The Cartesian product of two sets \(A\) and \(B\), denoted \(A\times B\), consists of ordered pairs of the form \((a,b)\), where \(a\) comes from \(A\), and \(b\) comes from \(B\). Since ordered pairs are involved, \(A\times B\) usually is not equal to \(B\times A\). The notion of ordered pairs can be extended analogously to ordered \(n\)-tuples, thereby yielding an \(n\)-fold Cartesian product. If \(A\) and \(B\) are finite sets, then \(|A\times B| = |A|\cdot|B|\).
Exercise\(\PageIndex{1}\label{ex:cartprod-01}\)
Let \(X=\{-2,2\}\), \(Y=\{0,4\}\) and \(Z=\{-3,0,3\}\). Evaluate the following Cartesian products.
\(X\times Y\) \(X\times Z\) \(Z\times Y\times Y\)
Exercise\(\PageIndex{2}\label{ex:cartprod-02}\)
Consider the sets \(X\), \(Y\) and \(Z\) defined in Problem [ex:cartprod-01]. Evaluate the following Cartesian products.
\(X\times Y\times Z\) \((X\times Y)\times Z\) \(X\times (Y\times Z)\)
Exercise\(\PageIndex{3}\label{ex:cartprod-03}\)
Without listing all the elements of \(X\times Y\times X\times Z\), where \(X\), \(Y\), and \(Z\) are defined in Problem [ex:cartprod-01], determine \(|X\times Y\times X\times Z|\).
Exercise\(\PageIndex{4}\label{ex:cartprod-04}\)
Determine \(|\wp(\wp(\wp(\{1,2\})))|\).
Exercise\(\PageIndex{5}\label{ex:cartprod-05}\)
Consider the set \(X=\{-2,2\}\). Evaluate the following Cartesian products.
\(X\times\wp(X)\) \(\wp(X)\times\wp(X)\) \(\wp(X\times X)\)
Exercise\(\PageIndex{6}\label{ex:cartprod-06}\)
Let \(A\) and \(B\) be arbitrary nonempty sets.
Under what condition does \(A\times B = B\times A\)? Under what condition is \((A\times B)\cap(B\times A)\) empty?
Exercise\(\PageIndex{7}\label{ex:cartprod-07}\)
Let \(A\), \(B\), and \(C\) be any three sets. Prove that
\(A\times(B\cap C) = (A\times B)\cap (A\times C)\) \(A\times(B - C) = (A\times B) - (A\times C)\)
Exercise\(\PageIndex{8}\label{ex:cartprod-08}\)
Let \(A\), \(B\), and \(C\) be any three sets. Prove that if \(A\subseteq B\), then \(A\times C \subseteq B\times C\). |
I actually don't think that this view of light being in a quantum superposition is anything new: what Discover magazine is describing (I believe) is the
stock standard picture of how one would describe a system of cells, molecules, chloroplasts, fluorophores, whatever interacting with the quantised electromagnetic field.
My simplified account here (answer to Physics SE question "How does the Ocean polarize light?") addresses a very similar question. The quantised electromagnetic field is
always in superposition before the absorption happens and, as light reaches a plant, it becomes a superposition of free photons and excited matter states of many chloroplasts at once.
To learn more about this kind of thing, I would recommend
M. Scully and M. Zubairy, "Quantum Optics"
Read the first chapter and the mathematical technology for what you are trying to describe is to be found in chapters 4, 5 and 6.
The truth is, photons do not bounce from cell to cell like ping pong balls. So that theory happens to be incorrect.
Further questions and Edits:
But this is about the energy FROM the photon... Would whatever you are saying still work for that? Plus, I would like to see some math...
Energy is simply a property of photons (or whatever is carrying it): there has to be a carrier to make any interaction happen. All interactions we see are ultimately described by this. See eq (1) and (2) here, this is for the reverse process (emission) but you are ultimately going to write equations like this. To get a handle on this quickly look into this Wikipedia article (Quantization of the electromagnetic field) and then read Chapter 1 from Scully and Zubairy.
Ultimately, you're going to need to write down a one-photon Fock state, and add to the superposition excited atom states. The neater way to do this is with creation operators acting on the universal, unique quantum ground state $\left|\left.0\right>\right.$: we define $a_L^\dagger(\vec{k},\,\omega),\,a_R^\dagger(\vec{k},\,\omega)$ to be the creation operators for the quantum harmonic oscillators corresponding to left and right handed plane waves with wavenumber $\vec{k}$ and frequency $\omega$. Then a one-photon state in the oscillator corresponding to the classical solution of Maxwell's equation with complex amplitudes $A_L(\vec{k},\,\omega), A_R(\vec{k},\,\omega)$ in the left and right handed classical modes is:
$$\left|\left.\psi\right>\right.=\int d^3k\,d\omega\left(A_L(\vec{k},\,\omega)\,a_L^\dagger(\vec{k},\,\omega)+A_R(\vec{k},\,\omega)\,a_R^\dagger(\vec{k},\,\omega)\right)\,\left|\left.0\right>\right.$$
To define an absorption, Scully and Zubairy show that the probability amplitude for an absorption at time $t$ and position $\vec{r}$ is proportional to:
$$\left<\left.0\right.\right| \hat{E}^+(\vec{r},t)\left|\left.\psi\right>\right.$$
where $\hat{E}$ is the electric field observable and $\hat{E}^+$ its positive frequency part (the part with only annihilation operators and all the creation operators thrown away).
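To see the absorption amplitude in the simplest possible setting, here is a single-mode numerical sketch (plain NumPy in a truncated Fock space; the single mode is an illustrative stand-in for the full multimode field, and $\hat{E}^+$ is proportional to the annihilation operator $a$ up to mode functions): the amplitude $\left<\left.0\right.\right|a\left|\left.1\right>\right.$ for absorbing one photon is $1$.

```python
import numpy as np

# Truncated single-mode Fock space (an illustrative stand-in for one (k, omega) mode)
N = 4
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator: a|n> = sqrt(n)|n-1>
vac = np.zeros(N)
vac[0] = 1.0                                # the vacuum |0>
one_photon = a.T @ vac                      # a^dagger |0> = |1>, a one-photon state

# The absorption amplitude <0| E^+ |psi> is proportional to <0| a |psi>
amp = vac @ (a @ one_photon)
assert np.isclose(amp, 1.0)
```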
Alternatively you can in principle model absorption by writing down the Hamiltonian which is going to look something like:
$$\int d^3k\,d \omega\left(a_L^\dagger(\vec{k},\,\omega)\,a_L(\vec{k},\,\omega)+a_R^\dagger(\vec{k},\,\omega)\,a_R(\vec{k},\,\omega) \right)+\sum\limits_{\text{all chloroplasts }j} \int d^3k\,d\omega\,\sigma^\dagger_j\left(\kappa_{j,L}(\vec{k},\,\omega)\,a_L(\vec{k},\,\omega)+\kappa_{j,R}(\vec{k},\,\omega)\,a_R(\vec{k},\,\omega) \right)+\\\sum\limits_{\text{all chloroplasts }j} \int d^3k\,d\omega\,\left(\kappa_{j,L}^*(\vec{k},\,\omega)\,a^\dagger_L(\vec{k},\,\omega)+\kappa_{j,R}^*(\vec{k},\,\omega)\,a^\dagger_R(\vec{k},\,\omega) \right)\sigma_j$$
where $\sigma_j^\dagger$ is the creation operator for a raised chlorophore at site $j$ and the $\kappa$s measure the strength of coupling.
This is complicated stuff and takes more than a simple tutorial to write down. |
My study group and I were discussing this question today.
We can construct the Lebesgue measure using Caratheodory's extension theorem in the usual way:
Given the function $F(x) = x$, we can construct a premeasure $\mu_F$ associated with $F(x)$ defined on an algebra of "intervals" (Folland uses right-closed h-intervals). From this premeasure, we can induce an outer measure $\mu^*$ on the power set of the reals. By Caratheodory's extension theorem, the collection of $\mu^*$-measurable sets is a complete $\sigma$-algebra, and $\mu^*$ restricted to this $\sigma$-algebra is a complete measure.
This complete $\sigma$-algebra is in some sense a "big" structure; it is certainly larger than the Borel sets, and it must contain all of the Lebesgue measurable sets.
However, Folland provides a related Theorem:
Theorem 1.14. Let $\mathcal{A} \subset \mathcal{P}(X)$ be an algebra, $\mu_0$ a premeasure on $\mathcal{A}$, and $\mathcal{M}$ the $\sigma$-algebra generated by $\mathcal{A}$. There exists a measure $\mu$ on $\mathcal{M}$ whose restriction to $\mathcal{A}$ is $\mu_0$ -- namely, $\mu = \mu^*|_\mathcal{M}$, where $\mu^*$ is given by $$\mu^*(E) = \inf \left\{\sum_1^\infty \mu_0(A_j) : A_j \in \mathcal{A}, E \subset \bigcup_1^\infty A_j\right\}.$$
He then goes on to claim uniqueness. The proof of the theorem invokes Caratheodory, but there is a question that remains. I will try to make my thoughts clear:
Caratheodory gives us a complete measure space, this structure is large. Theorem 1.14 as written tells us that we can use a premeasure on an algebra $\mathcal{A}$ to generate a measure on the $\sigma$-algebra generated by $\mathcal{A}$ -- this $\sigma$-algebra is not necessarily complete. In fact, this $\sigma$-algebra is just the Borel $\sigma$-algebra. We can complete the Borel $\sigma$-algebra to obtain the Lebesgue measurable sets.
However, is the completion of the Borel $\sigma$-algebra the same thing as we get by just applying Caratheodory's extension theorem to the Lebesgue outer measure to directly obtain a complete $\sigma$-algebra? Or is this a "bigger" structure than the completion of the Borel $\sigma$-algebra? |
Someone once incorrectly told me that, given the speed of light is the speed limit of the universe, aliens would have to live for hundreds of years if they are to travel distances of hundreds of light years to reach Earth.
In a "special relativistic" and non-expanding universe however, this is not the case. As velocity approaches the speed of light, say $v = 0.999c$, then we have
$$\gamma = \frac{1}{\sqrt{1-\frac{(0.999c)^2}{c^2}}} = \frac{1}{\sqrt{1-\frac{0.998001c^2}{c^2}}} = 22.37$$
Let us assume that an alien wishes to travel 100 light years from his planet to Earth. If the alien is travelling at $v = 0.999c$, he will observe the distance between his planet and the Earth to contract, and will measure the contracted distance to be:
$$\text{Distance} = \frac{100 \; \mathrm{ly}}{\gamma} = \frac{100 \; \mathrm{ly}}{22.37} = 4.47 \; \text{light years}$$
The alien will be able to travel this distance in a time of:
$$\text{Time} = \text{distance}/\text{speed} = 4.47/0.999 = 4.47 \; \text{years}$$
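These numbers are easy to verify with a few lines of code (working in units where $c=1$, with distances in light years and times in years):

```python
import math

# Work in units where c = 1: distances in light years, times in years
v = 0.999
gamma = 1.0 / math.sqrt(1.0 - v**2)   # Lorentz factor, ~22.37

distance = 100.0 / gamma              # contracted distance in the alien's frame (ly)
time = distance / v                   # travel time in the alien's frame (yr)

assert abs(gamma - 22.37) < 0.01
assert abs(distance - 4.47) < 0.01    # ~4.47 ly, as computed above
```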
It is easy to show that as the alien's speed increases, the time taken to travel the 100 light year distance approaches 0. It can thus be shown that thanks to length contraction and time dilation of special relativity, all parts of a special relativistic universe are accessible to an observer with a finite life time.
We however don't live in a purely special relativistic universe. We live in an expanding universe. Given the universe is expanding, are some parts of the universe no longer theoretically accessible to observers with finite life times? |
Jacob, KT and Jayadevan, KP (1998)
System Bi–Sr–O: Synergistic measurements of thermodynamic properties using oxide and fluoride solid electrolytes. In: Journal of Materials Research, 13 (7). pp. 1905-1918.
Abstract
Phase equilibrium and electrochemical studies of the ternary system Bi–Sr–O indicate the presence of six ternary oxides ($Bi_2SrO_4$, $Bi_2Sr_2O_5$, $Bi_2Sr_3O_6$, $Bi_4Sr_6O_{15}$, $Bi_{14}Sr_{24}O_{52}$, and $Bi_2Sr_6O_{11}$) and three solid solutions ($\delta$, $\beta$, and $\gamma$). An isothermal section of the phase diagram is established at 1050 K by phase analysis of quenched samples. Three compounds, $Bi_4Sr_6O_{15}$, $Bi_{14}Sr_{24}O_{52}$, and $Bi_2Sr_6O_{11}$, contain $Bi^{5+}$ ions. The stability of these phases is a function of oxygen partial pressure. The chemical potentials of SrO in two-phase fields are determined as a function of temperature using solid-state cells based on single-crystal $SrF_2$ as the electrolyte. Measurement of the emf of cells based on $SrF_2$ as a function of oxygen partial pressure in the gas at constant temperature gives information on the oxygen content of the compounds present at the electrodes. The chemical potentials of $Bi_2O_3$ in two-phase fields of the pseudobinary $Bi_2O_3$–SrO are measured using cells incorporating $(Y_2O_3)ZrO_2$ as the solid electrolyte. The standard free energies of formation of the ternary oxides are calculated independently using emfs of different cells. The independent assessments agree closely; the maximum difference in the value of $\Delta G_f^\circ(Bi_{2m}Sr_nO_p)/(m+n)$ is $\pm 350$ J/mol of component binary oxides. The results are discussed in the light of the phase diagram and compared with calorimetric and chemical potential measurements reported in the literature. The combined use of emf data from cells incorporating fluoride and oxide electrolytes enhances the reliability of derived data.
Item Type: Journal Article Additional Information: Copyright for this article belongs to Materials Research Society. Department/Centre: Division of Chemical Sciences > Materials Research Centre
Division of Mechanical Sciences > Materials Engineering (formerly Metallurgy)
Depositing User: M.K Anitha Date Deposited: 03 Feb 2005 Last Modified: 19 Sep 2010 04:15 URI: http://eprints.iisc.ac.in/id/eprint/1468
solidsWW Flash Applet Sample Problem 2: a WeBWorK question (from Flash Applets embedded in WeBWorK questions) with the solidsWW.swf applet embedded
A standard WeBWorK PG file with an embedded applet has six sections:
1. A tagging and description section, that describes the problem for future users and authors,
2. An initialization section, that loads required macros for the problem,
3. A problem set-up section that sets variables specific to the problem,
4. An Applet link section that inserts the applet and configures it (this section is not present in WeBWorK problems without an embedded applet),
5. A text section, that gives the text that is shown to the student, and
6. An answer and solution section, that specifies how the answer(s) to the problem is (are) marked for correctness, and gives a solution that may be shown to the student after the problem set is complete.
The sample file attached to this page shows this; below the file is shown to the left, with a second column on its right that explains the different parts of the problem that are indicated above. A screenshot of the applet embedded in this WeBWorK problem is shown below:
There are other example problems using this applet: solidsWW Flash Applet Sample Problem 1, solidsWW Flash Applet Sample Problem 3. And other problems using applets: Derivative Graph Matching Flash Applet Sample Problem, USub Applet Sample Problem, trigwidget Applet Sample Problem, solidsWW Flash Applet Sample Problem 1, GraphLimit Flash Applet Sample Problem 2. Other useful links: Flash Applets Tutorial, Things to consider in developing WeBWorK problems with embedded Flash applets.
PG problem file Explanation ##DESCRIPTION ## Solids of Revolution ##ENDDESCRIPTION ##KEYWORDS('Solids of Revolution') ## DBsubject('Calculus') ## DBchapter('Applications of Integration') ## DBsection('Solids of Revolution') ## Date('7/31/2011') ## Author('Barbara Margolius') ## Institution('Cleveland State University') ## TitleText1('') ## EditionText1('2011') ## AuthorText1('') ## Section1('') ## Problem1('') ########################################## # This work is supported in part by the # National Science Foundation # under the grant DUE-0941388. ##########################################
This is the tagging and description section of the problem file.
The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code.
All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')).
DOCUMENT(); loadMacros( "PGstandard.pl", "AppletObjects.pl", "MathObjects.pl", );
This is the initialization section of the problem file.
The
TEXT(beginproblem()); $showPartialCorrectAnswers = 1; Context("Numeric"); $a = random(2,10,1); $xy = 'x'; $func1 = "$a*sin(pi*x/8)"; $func2 = '2'; $xmax = Compute("8"); $shapeType = 'circle'; $correctAnswer =Compute("128*$a");
This is the problem set-up section of the problem file.
The solidsWW.swf applet will accept a piecewise defined function either in terms of x or in terms of y. We set
######################################### # How to use the solidWW applet. # Purpose: The purpose of this applet # is to help with visualization of # solids # Use of applet: The applet state # consists of the following fields: # xmax - the maximum x-value. # ymax is 6/5ths of xmax. the minima # are both zero. # captiontxt - the initial text in # the info box in the applet # shapeType - circle, ellipse, # poly, rectangle # piece: consisting of func and cut # this is a function defined piecewise. # func is a string for the function # and cut is the right endpoint # of the interval over which it is # defined # there can be any number of pieces # ######################################### # What does the applet do? # The applet draws three graphs: # a solid in 3d that the student can # rotate with the mouse # the cross-section of the solid # (you'll probably want this to # be a circle # the radius of the solid which # varies with the height #########################################
This is the Applet link section of the problem file.
Those portions of the code that begin the line with # are comments.
################################### # Create link to applet ################################### $appletName = "solidsWW"; $applet = FlashApplet( codebase => findAppletCodebase ("$appletName.swf"), appletName => $appletName, appletId => $appletName, setStateAlias => 'setXML', getStateAlias => 'getXML', setConfigAlias => 'setConfig', maxInitializationAttempts => 10, #answerBoxAlias => 'answerBox', height => '550', width => '595', bgcolor => '#e8e8e8', debugMode => 0, submitActionScript => '' );
You must include the section that follows
################################### # Configure applet ################################### $applet->configuration(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='3' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0xff6699</theColor> <profile> <piece func='$func1' cut='8'/> </profile> </plot></xml>}); $applet->initialState(qq{<xml><plot> <xy>$xy</xy> <captiontxt>'Compute the volume of the figure shown.' </captiontxt> <shape shapeType='$shapeType' sides='3' ratio='1.5'/> <xmax>$xmax</xmax> <theColor>0xff6699</theColor> <profile> <piece func='$func1' cut='8'/> </profile> </plot></xml>}); TEXT( MODES(TeX=>'object code', HTML=>$applet->insertAll( debug=>0, includeAnswerBox=>0, )));
The lines
The configuration of the applet is done in xml. The argument of the function is set to the value held in the variable
The code
Answer submission and checking is done within WeBWorK. The applet is intended to aid with visualization and is not used to evaluate the student submission.
TEXT(MODES(TeX=>"", HTML=><<'END_TEXT')); <script> if (navigator.appVersion.indexOf("MSIE") > 0) { document.write("<div width='3in' align='center' style='background:yellow'> You seem to be using Internet Explorer. <br/>It is recommended that another browser be used to view this page.</div>"); } </script> END_TEXT
The text between the
BEGIN_TEXT $BR $BR Find the volume of the solid of revolution formed by rotating the curve \[y=$a\sin\left(\frac{\pi x}{8}\right)\] for \(x=0\) to \(8\) about the \(y\)-axis. \{ans_rule(35) \} $BR END_TEXT Context()->normalStrings;
This is the text section of the problem.
###################################### # # Answers # ## answer evaluators ANS( $correctAnswer->cmp() ); ENDDOCUMENT();
This is the answer and solution section of the problem.
The
License
The Flash applets are protected under the following license: Creative Commons Attribution-NonCommercial 3.0 Unported License. |
Even at the risk of using slightly too heavy artillery, I would like to explain briefly how I would view the situation in a light that naturally produces and "explains" the observations collected above:
@Martin: the natural extension to tensor products
@TomGoodwillie: the "skew-Leibniz" rule one has to use
@JackHuizenga: the even-odd grading associated to it
@Qiaochu: the mysterious "super-Lie-algebra" and, in general, the "additional" $d^2=0$ condition
First, being an algebraist I "almost never" believe in maps $V\rightarrow V$. I rather always expect an entire
algebra $H$ of operators acting on $V,W,\ldots$ (e.g. universal envelopings of Lie algebras instead of Lie algebras, or single elements themselves). Furthermore, as in this example, it should be clarified how $H$ also acts on tensor products, on the unit object $V=k$, and on dual spaces.
Pretty much (and this is not entirely correct, but in a certain sense and with restrictions one can prove this, e.g. Etingof's theorem) the above amounts to: $H$ is a
Hopf algebra having additional structures:

- A coproduct $\Delta: H\rightarrow H\otimes H$ that tells you how to act on $V\otimes W$: split up as $\Delta$ says, then act on each space $V,W$ as you already knew.
- A counit $\epsilon: H\rightarrow k$ telling you with which scalar we act on the unit object $V=k$.
- An antipode $S:H\rightarrow H$ that has to be used on an element $h\in H$ before letting it act in the argument of a linear form $f\in V^*$ (e.g. to get the combined action again "the right way around").

Such that... well, pretty much everything you expect is true: you get actions at all ($\Delta,\epsilon$ are algebra maps), and the so-defined action respects the re-bracketing isomorphism of triple tensor products ("coassociativity"), the isomorphism $V\otimes k\cong V$ ("counitality"), and the evaluation $V^*\otimes V\rightarrow k$ and dual basis ("antipode condition"). "Respects" means here that the action on the left side (e.g. in the second case via $(id\otimes \epsilon)\Delta$) matches the action on the right... or that the isomorphism intertwines these two actions, or is a module homomorphism, or however you'd like to put it.
Take as examples (and if you are interested in more, read e.g. Susan Montgomery's book!):
- A group ring $k[G]$ with group elements $\Delta(g)=g\otimes g$, $\epsilon(g)=1$ and $S(g)=g^{-1}$, meaning: just copy the element and act on each side of the tensor product as you wish (and by the inverse on "contragredient" representations), as usual for group representations.
- A universal enveloping algebra $U(\ell)$ of a Lie algebra, with Lie elements $\Delta(x)=1\otimes x+x\otimes 1$, $\epsilon(x)=0$ and $S(x)=-x$. As we expect: Lie algebras act on tensor products via Leibniz ;-)
- A quantum group such as the often discussed $U_q(\ell)$, or many more exotic ones discovered by Schneider/Andruskiewitsch ("Classifying pointed Hopf algebras with abelian coradical").
- Finally the Taft algebra for OUR purposes: $H_{Taft}=\langle g,x\rangle$ with $g^2=1$, $x^2=0$ and $gx=-xg$. Then $\Delta(g)=g\otimes g$ but now $\Delta(x)=g\otimes x+x\otimes 1$, a skew-Lie-element/-derivation/-primitive.
It's the Radford biproduct/Majid bosonization of $k[\mathbb{Z}_2]$ with the braided Hopf algebra/Nichols algebra $k[x]/(x^2)$, where the braiding is induced naturally by the sign-grading and turns out to be the fermionic/Koszul braiding $x^a\otimes x^b\rightarrow (-1)^{ab}x^b\otimes x^a$. For related, more general cases see below... The Taft algebra has dimension only $4$ and is in some sense the smallest nontrivial example - the noncommutativity of $g$ and $x$ has to exactly match the skew-Leibniz coproduct and the "premature" truncation $x^2=0$. Generally, Hopf algebras are pretty "picky" about their structures fitting together ;-) (ANSWER TO BELOW: if we take the subalgebra generated by just one such differential operator $x^2=0$, require the grading action to be faithful, and the Nichols algebra to be "indecomposable", we uniquely get the Taft algebra)
What does this have to do with the question?
Well, I find it natural to think of a chain complex $(X_k,d_k)$ as a vector space $V=\oplus_k X_k$ with an action of $H_{Taft}$, where $x=\oplus_k d_k$ acts as the differential ($x^2=0$!) and $g$ as the odd/even operator $g.a=(-1)^{|a|}a$ (which you may want to refine to a $\mathbb{Z}$-grading), the two anticommuting exactly because of the grading shift.
And now the tensor product action is the right one, and "has to be" - satisfied with that?
$$x.(a\otimes b):=(x.a)\otimes (1.b)+(g.a)\otimes (x.b)=(x.a)\otimes b+(-1)^{|a|}a\otimes (x.b)$$
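One can check mechanically that the sign $(-1)^{|a|}$ is exactly what keeps $x^2=0$ true on the tensor product. Below is a small Python sketch (the two-term complex $d(e_1)=e_0$ and the dictionary encoding are my own illustrative choices, nothing canonical); the unsigned rule is included for contrast, and it fails:

```python
# A tiny chain complex X:  ... -> k*e1 -> k*e0 -> 0  with d(e1) = e0, d(e0) = 0.
deg = {'e0': 0, 'e1': 1}
d = {'e0': {}, 'e1': {'e0': 1}}

def d_tensor(elem, signed=True):
    """Differential on X (x) X:  d(a (x) b) = da (x) b + (-1)^|a| a (x) db.
    With signed=False the Koszul sign is dropped (the 'wrong' rule)."""
    out = {}
    for (a, b), c in elem.items():
        for a2, c2 in d[a].items():                      # da (x) b
            out[(a2, b)] = out.get((a2, b), 0) + c * c2
        sign = (-1) ** deg[a] if signed else 1
        for b2, c2 in d[b].items():                      # +/- a (x) db
            out[(a, b2)] = out.get((a, b2), 0) + c * c2 * sign
    return {k: v for k, v in out.items() if v}

w = {('e1', 'e1'): 1}
assert d_tensor(d_tensor(w)) == {}                       # d^2 = 0 with the sign
assert d_tensor(d_tensor(w, signed=False), signed=False) != {}   # fails without it
```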
I would like to close with some remarks on the "super-Lie-algebras" and the general case:
The classification I mentioned roughly uses the fact that you may split a (pointed graded) Hopf algebra into (the "Radford biproduct" of) a group ring (the "coradical") and a "braided" Hopf algebra (i.e. one in a category with a braiding induced by the group's conjugacy action and the group-element skewness). The generic case is that the braided Hopf algebra is like the universal enveloping of a Lie algebra, but in these braided "Nichols algebras" we may have truncations. When and how is pretty tricky and open especially over nonabelian groups, while over abelian groups they're nowadays well understood by the works of Heckenberger. You can imagine them (and the theory of root systems carries over!!) as Lie algebras in a braided sense, and this allows specific but sometimes much more exotic Dynkin diagrams - in the easiest cases of groups $\mathbb{Z}_2,\mathbb{Z}_3$ these are e.g. in physics called
Super- and Color-Lie-algebras, but of course there are much more interesting cases ;-) ;-)
If you want to have more, you might want to check out my article "Nichols algebras" on Wikipedia ;-) and look at "our" Nichols algebra $k[x]/(x^2)$ and the braiding underlying the Taft algebra.
In this answer we formally show that a (quasi)symmetry of an action implies a corresponding symmetry of its EOM$^{\dagger}$. The answer does not discuss form covariance of EOM. For further relations between symmetries of action, EOM, and solutions of EOM, see e.g. this Phys.SE post.
Let us first recall the definition of a
quasi-symmetry of the action
$$\tag{1} S_V[\phi]~:=~\int_V \! \mathbb{L}, \qquad \mathbb{L}~:=~{\cal L}~d^nx.$$
It means that the action (1) changes by a boundary integral
$$\tag{2} S_{V^{\prime}}[\phi^{\prime}]+\int_{\partial V^{\prime}} \!d^{n-1}x~(\ldots)~=~S_V[\phi]+ \int_{\partial V} \!d^{n-1}x~(\ldots) $$
under the transformation. In the following we will assume that the spacetime integration region $V$ is arbitrary.
Theorem.
If a local action functional $S_V[\phi]$ has a quasi-symmetry transformation
$$\tag{3} \phi^{\alpha}(x)~~\longrightarrow~~ \phi^{\prime \alpha}(x^{\prime}), \qquad x^{\mu}~~\longrightarrow~~x^{\prime \mu}, $$
then the EOM
$$\tag{4} e_{\alpha}(\phi(x),\partial\phi(x),\ldots ; x)~:=~\frac{\delta S_V[\phi]}{\delta \phi^{\alpha}(x)}~\approx~0$$
must have a symmetry (wrt. the same transformation)
$$\tag{5} e_{\alpha}(\phi^{\prime}(x^{\prime}),\partial^{\prime}\phi^{\prime}(x^{\prime}),\ldots ; x^{\prime})~\approx~e_{\alpha}(\phi(x),\partial\phi(x),\ldots ; x). $$
I)
Formal finite proof: This works both for a discrete and a continuous quasi-symmetry.
$$ e_{\alpha}(\phi(x),\partial\phi(x),\ldots ; x)~:=~\frac{\delta S_V[\phi]}{\delta \phi^{\alpha}(x)}~\stackrel{(2)}{=}~\frac{\delta S_{V^{\prime}}[\phi^{\prime}]}{\delta \phi^{\alpha}(x)}~\stackrel{{\ddagger}}{\sim}~\int_{V^{\prime}}\!d^nx^{\prime}~\frac{\delta S_{V^{\prime}}[\phi^{\prime}]}{\delta \phi^{\prime\alpha}(x^{\prime})} \frac{\delta \phi^{\prime\alpha}(x^{\prime})}{\delta \phi^{\alpha}(x)}$$$$\tag{6}~=~\int_{V^{\prime}}\!d^nx^{\prime}~e_{\alpha}(\phi^{\prime}(x^{\prime}),\partial^{\prime}\phi^{\prime}(x^{\prime}),\ldots ; x^{\prime}) \frac{\delta \phi^{\prime\alpha}(x^{\prime})}{\delta \phi^{\alpha}(x)}. $$
II)
Formal infinitesimal proof: This only works for a continuous quasi-symmetry. From the infinitesimal transformation (3)
$$\tag{7} \delta \phi^{\alpha}(x)~:=~\phi^{\prime \alpha}(x^{\prime})-\phi^{\alpha}(x), \qquad \delta x^{\mu}~:=~x^{\prime \mu}-x^{\mu},$$
we define a so-called
vertical transformation
$$\tag{8} \delta_0 \phi^{\alpha}(x)~:=~\phi^{\prime \alpha}(x)-\phi^{\alpha}(x)~=~\delta \phi^{\alpha}(x)-\delta x^{\mu} ~d_{\mu}\phi^{\alpha}(x),\qquad d_{\mu}~:=~\frac{d}{dx^{\mu}}, \qquad $$
which transforms the fields $\phi^{\alpha}(x)$ without transforming the spacetime points $x^{\mu}$. The quasi-symmetry implies that the Lagrangian $n$-form $\mathbb{L}$ transforms with a total spacetime derivative
$$\tag{9} \delta \mathbb{L}~=~d_{\mu} f^{\mu}~d^nx, \qquad \delta_0 \mathbb{L}~=~d_{\mu}(f^{\mu}-{\cal L}~\delta x^{\mu})~d^nx. $$
The EOM (4) are typically of second order, so let us assume this for simplicity. (This assumption is not necessary.) Then the infinitesimal transformation of EOM (4) reads
$$ \delta e_{\alpha}(x)~=~\delta_0 e_{\alpha}(x)+\delta x^{\mu} ~\underbrace{d_{\mu} e_{\alpha}(x)}_{\approx 0}~\approx~\delta_0 e_{\alpha}(x) \qquad $$$$~=~\frac{\partial e_{\alpha}(x)}{\partial\phi^{\beta}(x)}\delta_0\phi^{\beta}(x)+\sum_{\mu}\frac{\partial e_{\alpha}(x)}{\partial(\partial_{\mu}\phi^{\beta}(x))}d_{\mu}\delta_0\phi^{\beta}(x)+\sum_{\mu\leq \nu }\frac{\partial e_{\alpha}(x)}{\partial(\partial_{\mu}\partial_{\nu}\phi^{\beta}(x))}d_{\mu}d_{\nu}\delta_0\phi^{\beta}(x) $$$$~\stackrel{{\ddagger}}{\sim}~ \int_V\! d^ny~ \delta_0\phi^{\beta}(y)\frac{\delta e_{\alpha}(x)}{\delta \phi^{\beta}(y)}~=~\int_V\! d^ny~ \delta_0\phi^{\beta}(y)\frac{\delta^2 S_V[\phi]}{\delta \phi^{\beta}(y)\delta \phi^{\alpha}(x)} $$$$~=~ \int_V\! d^ny~ \delta_0\phi^{\beta}(y)\frac{\delta^2 S_V[\phi]}{\delta \phi^{\alpha}(x)\delta\phi^{\beta}(y)} $$$$~=~ \frac{\delta}{\delta \phi^{\alpha}(x)} \int_V\! d^ny~ \delta_0\phi^{\beta}(y)\frac{\delta S_V[\phi]}{\delta \phi^{\beta}(y)} -\int_V\! d^ny~ \frac{\delta(\delta_0\phi^{\beta}(y))}{\delta \phi^{\alpha}(x)} \frac{\delta S[\phi]}{\delta \phi^{\beta}(y)} $$$$~\sim~ \frac{\delta(\delta_0 S_V[\phi]) }{\delta \phi^{\alpha}(x)} -\int_V\! d^ny~ \frac{\delta(\delta_0\phi^{\beta}(y))}{\delta \phi^{\alpha}(x)} e_{\beta}(y) $$$$\tag{10} ~\approx~ \frac{\delta(\delta_0 S_V[\phi]) }{\delta \phi^{\alpha}(x)}~=~0. $$
In the very last step of eq. (10) we used that the infinitesimal variation
$$\tag{11} \delta_0 S_V[\phi]+\int_V\! d^nx~d_{\mu} \left({\cal L}~\delta x^{\mu} \right) ~=~\delta S_V[\phi]~=~\int_{\partial V} \!d^{n-1}x~(\ldots)$$
of the action is a boundary integral by assumption (2), so that its functional derivative (10) must vanish (if it exists).
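As a concrete sanity check of the theorem (not part of the formal proof above), one can verify symbolically that a quasi-symmetry leaves the EOM invariant. A minimal sketch with sympy, using the free particle and the Galilean boost $x\to x+vt$, under which $L$ changes by the total derivative $d_t\left(mvx+\tfrac{1}{2}mv^2t\right)$:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, m, v = sp.symbols('t m v', positive=True)
x = sp.Function('x')

L = sp.Rational(1, 2) * m * sp.diff(x(t), t)**2              # free particle
L_boost = sp.Rational(1, 2) * m * sp.diff(x(t) + v*t, t)**2  # after x -> x + v t

# L_boost - L = m v x' + m v^2/2, a total time derivative, so this is a quasi-symmetry
eom = euler_equations(L, x(t), t)[0]
eom_boost = euler_equations(L_boost, x(t), t)[0]

# The Euler-Lagrange equation m x'' = 0 is unchanged, as the theorem asserts
assert sp.simplify(eom.lhs - eom_boost.lhs) == 0
```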
--
$^{\dagger}$
Terminology and Notation: Equations of motion (EOM) means Euler-Lagrange equations (1). The words on-shell and off-shell refer to whether EOM are satisfied or not. The $\approx$ symbol means equality modulo EOM.
$^{\ddagger}$ Warning: This step is
not always justified. The $\sim$ symbol indicates that we have formally integrated by part and ignored boundary contributions. Also we have assumed that the pertinent functional derivative is well-defined and exists. This caveat is the main shortcoming of the formal proof given here. The point is quite serious, e.g. in the case of a global (=$x$-independent) variation, which typically doesn't vanish on the boundary. So boundary contributions could in principle play a role.
However, instead of using functional derivatives and integrations, it is possible to prove eq. (10) $x$-locally
$$ \delta_0 e_{\alpha}(x)~=~\ldots~=~\underbrace{E_{\alpha(0)} d_{\mu}}_{=0}\left(f^{\mu}(x)-{\cal L}(x)~\delta x^{\mu}\right) - \sum_{k\geq 0} d^k\left( \underbrace{e_{\beta}(x)}_{\approx 0} \cdot P_{\alpha(k)}\delta_0\phi^{\beta}(x) \right)$$ $$\tag{12} ~\approx~ 0 $$
using only higher partial field derivatives
$$\tag{13} P_{\alpha(k)} ~:=~\frac{\partial }{\partial \phi^{\alpha(k)}}, \qquad k~\in~\mathbb{N}_0^n,$$
and higher Euler operators
$$\tag{14} E_{\alpha(k)} ~:=~\sum_{m\geq k} \begin{pmatrix} m \\ k\end{pmatrix}(-d)^m P_{\alpha(m)}, $$
that all refer to the
same spacetime point $x$. This $x$-local approach circumvents the problem of un-accounted boundary contributions.
A Lorentz covariant equation is one that takes the same form even when a Lorentz transformation is applied to each variable. Lorentz covariance is generally made manifest by writing the equation with all Lorentz indices contracted together with the Minkowski metric, so that each scalar quantity in the equation is a Lorentz invariant.
$$\partial_\mu x^\mu = \eta_{\mu \nu} \partial^\mu x^\nu = 0,$$
$$x^\mu = (t,x,y,z), \quad \quad \eta_{\mu \nu} = \mathrm{diag}\{-1,1,1,1\}.$$
Question: What is the corresponding term for "symplectic covariance", where all variables with symplectic indices are contracted together using the symplectic form $\epsilon_{ab} = \{\{0,1\},\{-1,0\}\}$? Here's an example using a dynamical equation for the Wigner function in quantum mechanics.
$$\partial_t W(\alpha) = F^{ab} \partial_a \alpha_b W(\alpha) = \epsilon_{ac} \epsilon_{bd} F_{ab} \partial_c \alpha_d W(\alpha) ,$$
$$\alpha = (\alpha_x,\alpha_p) = (x,p), \quad \quad F^{ab} = \{\{F^{xx},F^{xp}\},\{F^{px},F^{pp}\}\}.$$
The reason the terminology is unclear is that this is really only covariant under
linear symplectic transformations (i.e., linear transformations that preserve the symplectic form). Strictly speaking, "symplectic covariance" suggests a lot more generality, akin to diffeomorphism covariance.
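For what it's worth, in two dimensions a linear map preserves $\epsilon_{ab}$ exactly when it has unit determinant, since $M^T\epsilon M=\det(M)\,\epsilon$ for any $2\times2$ matrix $M$. A quick numerical sketch of this invariance (the random matrix is purely illustrative):

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])  # the symplectic form eps_ab

# Build a random matrix and rescale it to determinant +1, i.e. an element of SL(2,R)
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
M = A / np.sqrt(abs(np.linalg.det(A)))
if np.linalg.det(M) < 0:
    M[:, 0] *= -1                          # flip a column to make det(M) = +1

# For 2x2 matrices M^T eps M = det(M) * eps, so det(M) = 1 preserves eps exactly
assert np.allclose(M.T @ eps @ M, eps)
```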
Consider a ring with radius $R$ and charge density $\lambda=\lambda_0\cos\phi$, where $\phi$ is the angular coordinate in the cylindrical coordinate. If I want to find the dipole moment of this charge distribution, then I put it into $$\vec{p}=\int{\mathrm d^3r~\rho(\vec{r})\vec{r}},$$ where I tried $$\rho(\vec{r})=\lambda_0 \cos\phi \times \delta(s-R) \delta(z)$$ and $$\vec{r}=s\hat{s}+z\hat{z}$$ So, the dipole then becomes: $$\vec{p}=\int_{-\infty}^{\infty}\int_{0}^{2\pi}\int_{0}^{\infty}[\lambda_0 \cos\phi \times \delta(s-R) \delta(z)](s\hat{s}+z\hat{z})s~\mathrm ds~\mathrm d\phi ~\mathrm dz$$ The delta function kills the $z$ component and leave the $s$ component, so: $$\vec{p}=\int_{-\infty}^{\infty}\int_{0}^{2\pi}\int_{0}^{\infty}\lambda_0 \cos\phi \times \delta(s-R) \delta(z)s^2~\mathrm ds~\mathrm d\phi ~\mathrm dz\hat{s}=\lambda_0R^2\int_{0}^{2\pi}\cos\phi~\mathrm d\phi~\hat{s}=\vec{0}$$ The answer is $$\vec{p}=\frac{1}{2}\lambda_0R^2\hat{x}$$ What's wrong with my solution? Actually this is problem 4.1 from Zangwill's Modern Electrodynamics. I read it's solution, but don't get why it works under Cartesian coordinates but not under cylindrical coordinate.
The previous answer is just a repeat of what's in the solution manual, and the $\vec p$ provided in it is incorrect. I am taking a class for grins and was assigned this problem and beat myself up trying to get the stated answer. I finally did it in both spherical and cylindrical coordinates and got the same answer in both. I emailed the prof and he confirmed the answer in the book is incorrect. The actual answer is $\pi \lambda_0R^2\hat{x}$.
note that the $\phi$ integration is $\int_0^{2\pi}\cos^2 \phi~\mathrm d\phi = \pi$
the $\theta$ integral is $\int_0^{\pi} \sin(\theta)\,\delta\left(\theta - \frac{\pi}{2}\right)\mathrm d\theta = \sin\left(\frac{\pi}{2}\right) = 1$
the $r$ integral is $\int_0^{\infty} r^2\,\delta(r-R)\,\mathrm dr = R^2$
so the answer is $\pi\lambda_0R^2\hat{x}$
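A quick numerical cross-check of the $\hat{x}$ component, $p_x=\int_0^{2\pi}\lambda_0\cos\phi\,(R\cos\phi)\,R\,\mathrm d\phi$, with arbitrary illustrative values $\lambda_0=1$, $R=2$:

```python
import math

lam0, R = 1.0, 2.0
n = 10000
dphi = 2.0 * math.pi / n

# p_x = integral of lambda(phi) * x(phi) * R dphi, with lambda = lam0*cos(phi),
# x(phi) = R*cos(phi); a Riemann sum over the full period is essentially exact here
p_x = sum(lam0 * math.cos(k * dphi) * (R * math.cos(k * dphi)) * R * dphi
          for k in range(n))

assert abs(p_x - math.pi * lam0 * R**2) < 1e-9   # agrees with pi * lam0 * R^2
```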
For a linear charge density around a ring, $\lambda=\lambda(\phi)$. The volume charge density would be: $$\rho(r)=\lambda(\phi)\frac{\delta\left(\theta-\frac{\pi}{2}\right)}{\sin(\theta)}\frac{\delta(r-R)}{r}$$ Now the dipole moment would be: $$p=\int_{0}^{2\pi}~\mathrm d\phi\int_{0}^{\pi}~\sin(\theta)~\mathrm d\theta \int_{0}^{\infty}~\mathrm dr~r^{2}\lambda\frac{\delta\left(\theta-\frac{\pi}{2}\right)}{\sin(\theta)}\frac{\delta(r-R)}{r}=2\pi R\lambda=Q$$ Since $\hat{r}=\hat{z}\cos\theta +\hat{y}\sin\theta \sin\phi +\hat{x}\sin\theta \cos\phi$, the dipole moment would be: $$\int ~\mathrm d^{3}r~\rho(r) =\int_{0}^{2\pi}~\mathrm d\phi\int_{0}^{\pi}~\sin(\theta)~\mathrm d\theta \int_{0}^{\infty}~\mathrm dr~r^{2}~[\hat{z}\cos\theta +\hat{y}\sin\theta \sin\phi +\hat{x}\sin\theta \cos\phi]~\lambda_{0}\cos\phi\frac{\delta\left(\theta-\frac{\pi}{2}\right)}{\sin(\theta)}\frac{\delta(r-R)}{r}$$ The $\hat{x}$ integral is non zero and would result in $$\boxed{p=\frac{1}{2}\lambda_{0}R^{2}\hat{x}}$$
In a certain company, the formula for maximizing profits is[#permalink]
Updated on: 26 Jan 2014, 04:21
In a certain company, the formula for maximizing profits is P = -25x^2 + 7500x, where P is profit and x is the number of machines the company operates in its factory. What value for x will maximize P?
P = -25x^2 + 7500x => P = -50 + 7500x; to max P, we need to max x. So I picked E. Why isn't that correct?
The OA states: "To find a maximum or minimum value of an equation with an exponent in it, you take the derivative of the equation, set it to zero, and solve." I don't really get what that means. So whoever solves it, could you please post an explanation of what the above sentence means as well?
Re: Finding Max or Min value of equation[#permalink]
14 Aug 2010, 07:32
appy001 wrote:
Thanks for this post to have a recall of calculus..."What value for x will maximize P" Need more clarification on minimum value....
what if require the value of x when profit is lowest???
My understanding is that it comes from the formula for minimize profit. This will be given in question stem... such as this is present here... "the formula for maximizing profits is P = -25x2 + 7500x"....
Please put the correct understanding here.
Couple of things:
Quadratic expression \(ax^2+bx+c\) reaches its extreme values when \(x=-\frac{b}{2a}\). When \(a>0\) extreme value is minimum value of \(ax^2+bx+c\) (maximum value is not limited) and when \(a<0\) extreme value is maximum value of \(ax^2+bx+c\) (minimum value is not limited).
You can look at this geometrically: \(y=ax^2+bx+c\) when graphed on XY plane gives parabola. When \(a>0\), the parabola opens upward and minimum value of \(ax^2+bx+c\) is y-coordinate of vertex, when \(a<0\), the parabola opens downward and maximum value of \(ax^2+bx+c\) is y-coordinate of vertex.
[Figure: graphs of an upward-opening parabola (a > 0, vertex gives the minimum) and a downward-opening parabola (a < 0, vertex gives the maximum)]
Examples: Expression \(5x^2-10x+20\) reaches its minimum when \(x=-\frac{b}{2a}=-\frac{-10}{2*5}=1\), so minimum value is \(5x^2-10x+20=5*1^2-10*1+20=15\).
Expression \(-5x^2-10x+20\) reaches its maximum when \(x=-\frac{b}{2a}=-\frac{-10}{2*(-5)}=-1\), so maximum value is \(-5x^2-10x+20=-5*(-1)^2-10*(-1)+20=25\).
Back to the original question:
In a certain company, the formula for maximizing profits is P = -25x^2 + 7500x, where P is profit and x is the number of machines the company operates in its factory. What value for x will maximize P?
A) 10 B) 50 C) 150 D) 200 E) 300
\(P=-25x^2+7500x\) reaches its maximum (as \(a=-25<0\)) when \(x=-\frac{b}{2a}=-\frac{7500}{2*(-25)}=150\).
Answer: C.
As for the minimum value of P: it is not bounded below (as x increases to +infinity, P decreases to -infinity).
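The vertex formula above is easy to confirm with a brute-force check (my own sketch, not part of the original post):

```python
# Verify that P = -25x^2 + 7500x peaks at x = -b/(2a) = 150
# by scanning the integer candidates directly.
def profit(x):
    return -25 * x**2 + 7500 * x

a, b = -25, 7500
vertex_x = -b / (2 * a)                  # -7500 / -50 = 150.0
best_x = max(range(0, 301), key=profit)  # brute-force scan
print(vertex_x, best_x, profit(best_x))  # 150.0 150 562500
```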
Re: Finding Max or Min value of equation (09 Feb 2009, 04:10)
wcgmatclub wrote:
This one is from IntegratedLearning. In a certain company, the formula for maximizing profits is P = -25×2 + 7500x, where P is profit and x is the number of machines the company operates in its factory. What value for x will maximize P?
A) 10 B) 50 C) 150 D) 200 E) 300
OC is C
Here's what I did: P=-25x2+7500X => P=-50+7500X, to max. P, we need to max X. So I picked E, why isn't that correct?
The OA states: To find a maximum or minimum value of an equation with an exponent in it, you take the derivative of the equation, set it to zero, and solve. I don't really get what that means. So whoever solves it, could you plz post explanation of what the above sentence mean as well?
Pertaining to your question "What value for x will maximize P?": if you take the derivative of P = -25x^2 + 7500x, the equation becomes
dP/dx = -50x + 7500 = 0 (equating to 0 to get the max value), so x = 150.
Re: Finding Max or Min value of equation (09 Feb 2009, 13:26)
Why does P needs to be set to 0 in order to get max value for P. I'm totally lost here. P = -25×2 + 7500x If x=150, then P=1124950 If x=300, then P=2249950 2249950>1124950, so P is greater when x=300 vs. x=150.
Re: Finding Max or Min value of equation (09 Feb 2009, 14:12)
wcgmatclub wrote:
Why does P needs to be set to 0 in order to get max value for P. I'm totally lost here. P = -25×2 + 7500x If x=150, then P=1124950 If x=300, then P=2249950 2249950>1124950, so P is greater when x=300 vs. x=150.
Why is x=150 the correct answer?
Not P=0; it's dP/dx, the first derivative of P with respect to x: dP/dx = -50x + 7500 = 0, so x = 150. Please note that we treated x2 = x^2 (please use the exponent symbol for that).
P = -25×^2 + 7500x
If x=150, then P=562500 If x=300, then P=0
Do you have knowledge of calculus (derivatives)?
Re: Finding Max or Min value of equation (09 Feb 2009, 16:22)
x2suresh wrote:
wcgmatclub wrote: Why does P needs to be set to 0 in order to get max value for P. I'm totally lost here. P = -25×2 + 7500x If x=150, then P=1124950 If x=300, then P=2249950 2249950>1124950, so P is greater when x=300 vs. x=150.
Why is x=150 the correct answer?
Not P=0; it's dP/dx, the first derivative of P with respect to x: dP/dx = -50x + 7500 = 0, so x = 150. Please note that we treated x2 = x^2 (please use the exponent symbol for that).
P = -25×^2 + 7500x
If x=150, then P=562500 If x=300, then P=0
Do you have knowledge of calculus (derivatives)?
I took calculus a LONG time ago, so I've probably forgotten all of it already. I thought the GMAT doesn't test calculus concepts? Why does this "GMAT"-type problem require calculus to solve?
Re: Finding Max or Min value of equation (13 Aug 2010, 23:12)
Thanks for this post to have a recall of calculus..."What value for x will maximize P" Need more clarification on minimum value....
what if require the value of x when profit is lowest???
My understanding is that it comes from the formula for minimize profit. This will be given in question stem... such as this is present here... "the formula for maximizing profits is P = -25x2 + 7500x"....
Please put the correct understanding here.
Re: In a certain company, the formula for maximizing profits is (11 Sep 2013, 13:18)
Hello
I had two points to share:
1) Please confirm that I understand the equation in the question correctly: P = (-25)(2) + (7500x). If I'm reading it correctly, then how does this equation turn into -50x + 7500x?
2) How testable is this content on the actual GMAT? I looked into the Official Guide's list of testable topics as well as the Manhattan books, and it seems that calculus is not one of them. Where can I find this content to study?
P=-25x2+7500X => P=-50+7500X, to max. P, we need to max X. So I picked E, why isn't that correct?
The OA states: To find a maximum or minimum value of an equation with an exponent in it, you take the derivative of the equation, set it to zero, and solve. I don't really get what that means. So whoever solves it, could you plz post explanation of what the above sentence mean as well?
People, please write the equation properly.
I was wondering how, if P = -50 + 7500x, this question could take 5 minutes to solve: that would simply be an increasing function.
Please edit the question and write P = \(-25 x^2\) + \(7500x\)
We can solve the above problem using calculus (maxima and minima) or by using the perfect-square approach.
Re: In a certain company, the formula for maximizing profits is (09 Feb 2014, 19:08)
If you're not comfortable with calculus, here is how I would do it.
Recognize that 25 is a factor of 7500. If we take this out, we have two parts to the equation:
-X^2 & 300X
One part of the equation brings our value down, whereas the other part brings our value up. At this point, we can test the numbers in the answer choice. Notice that they are very straight forward to square, and multiplication by 300 is very easy.
A) 10: 3000 - 100 = 2900
B) 50: 15000 - 2500 = 12500
C) 150: 45000 - 22500 = 22500
D) 200: 60000 - 40000 = 20000
E) 300: 90000 - 90000 = 0 (recognize immediately that this is zero)
Therefore, the answer is C (x = 150).
If you are able to spot the 25 in 7500, you can go through this process in and around 2 minutes. Also, if you tested C or D first, you'd recognize that A and B would never be able to reach their output, and spotting that E makes the equation equal to 0, there is only 2 numbers you would need to test.
Hope this helps those - like myself - who haven't thought about calculus for over half a decade.
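The back-solving idea above, sketched in code (my own illustration, not part of the original post):

```python
# Factor 25 out of P = -25x^2 + 7500x and compare -x^2 + 300x
# for each answer choice; the largest value wins.
choices = {"A": 10, "B": 50, "C": 150, "D": 200, "E": 300}
scaled = {k: 300 * x - x**2 for k, x in choices.items()}
best = max(scaled, key=scaled.get)
print(scaled)   # {'A': 2900, 'B': 12500, 'C': 22500, 'D': 20000, 'E': 0}
print(best)     # C
```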
Re: In a certain company, the formula for maximizing profits is (30 Jan 2015, 23:48)
In a certain company, the formula for maximizing profits is P = -25x^2 + 7500x, where P is profit and x is the number of machines the company operates in its factory. What value for x will maximize P?
A) 10 B) 50 C) 150 D) 200 E) 300
SOLUTION:
P = -25x^2 + 7500x = 25x(300 - x) ----- (I)
Plug the answer choices into (I):
A) x = 10: 25*10*290
B) x = 50: 25*50*250
C) x = 150: 25*150*150
D) x = 200: 25*200*100
E) x = 300: 25*300*0 = 0
Dividing each by 250: A) 290, B) 1250, C) 2250, D) 2000, E) 0. The largest is C, so x = 150 maximizes P.
Re: In a certain company, the formula for maximizing profits is (07 Mar 2016, 08:21)
wcgc wrote:
In a certain company, the formula for maximizing profits is P = -25x^2 + 7500x, where P is profit and x is the number of machines the company operates in its factory. What value for x will maximize P?
P=-25x2+7500X => P=-50+7500X, to max. P, we need to max X. So I picked E, why isn't that correct?
The OA states: To find a maximum or minimum value of an equation with an exponent in it, you take the derivative of the equation, set it to zero, and solve. I don't really get what that means. So whoever solves it, could you plz post explanation of what the above sentence mean as well?
Firstly, I am not sure whether derivatives are even needed on the GMAT, but here is that solution anyway:
dP/dx = -50x + 7500. The general rule is to set the derivative equal to zero to find any maxima or minima: 50x = 7500, so x = 150.
In a certain company, the formula for maximizing profits is (07 Mar 2016, 08:33)
Chiragjordan wrote:
wcgc wrote:
In a certain company, the formula for maximizing profits is P = -25x^2 + 7500x, where P is profit and x is the number of machines the company operates in its factory. What value for x will maximize P?
P=-25x2+7500X => P=-50+7500X, to max. P, we need to max X. So I picked E, why isn't that correct?
The OA states: To find a maximum or minimum value of an equation with an exponent in it, you take the derivative of the equation, set it to zero, and solve. I don't really get what that means. So whoever solves it, could you plz post explanation of what the above sentence mean as well?
Firstly I am not sure derivative will be useful in GMAT or not ..
here is the solution though
D(p)/D(x) = -50x+7500 Now the general rule is to equate the derivative to zero to get the values of any maxima or minima => 50x=7500 => x=150
Kudos if you like my solution ..It helps..
Differentiation is not required for the GMAT, but if you know it, there is no harm in applying it. For the sake of those who are not familiar with differentiation: for all these max/min questions involving quadratic expressions, the best strategy is to complete the square, producing a term of the form \((a \pm b)^2\) and remembering that \((a \pm b)^2 \geq 0\), as shown below:
P = \(-25x^2 + 7500x\) ---> P = \(-25 (x^2-300x)\) = \(-25 (x^2-2*150*x+150^2 - 150^2)\)= \(-25 (x^2-2*150*x+150^2)+25*150^2\)= \(-25 (x-150)^2 + 25*150^2\) = a negative quantity + positive quantity.
Thus to maximize P, you need to minimize the perfect square \((x-150)^2\) and the minimum value of a perfect square = 0 ---> For P to be maximized , \((x-150)^2 =0\) ---> \(x=150\).
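A tiny numerical check of the completed-square identity above (my own sketch, not from the post):

```python
# Verify -25(x - 150)^2 + 25*150^2 == -25x^2 + 7500x, and that the
# maximum value 25*150^2 = 562500 is attained at x = 150.
for x in range(0, 301, 25):
    assert -25 * (x - 150)**2 + 25 * 150**2 == -25 * x**2 + 7500 * x
print(25 * 150**2)   # 562500
```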
Re: In a certain company, the formula for maximizing profits is (14 Jul 2017, 10:42)
wcgc wrote:
In a certain company, the formula for maximizing profits is P = -25x^2 + 7500x, where P is profit and x is the number of machines the company operates in its factory. What value for x will maximize P?
A) 10 B) 50 C) 150 D) 200 E) 300
First we can see that the graph of P = -25x^2 + 7500x is a down-opening parabola. This means that the vertex of the parabola will be at the maximum value. We find the x-coordinate of the vertex using the following equation:
x = -b/(2a)
x = -7500/(2 * (-25)) = -7500/(-50) = 150
The maximum profit will be realized when the company operates 150 machines.
In a certain company, the formula for maximizing profits is (14 Jul 2017, 11:00)
wcgc wrote:
In a certain company, the formula for maximizing profits is P = -25x^2 + 7500x, where P is profit and x is the number of machines the company operates in its factory. What value for x will maximize P?
P=-25x2+7500X => P=-50+7500X, to max. P, we need to max X. So I picked E, why isn't that correct?
The OA states: To find a maximum or minimum value of an equation with an exponent in it, you take the derivative of the equation, set it to zero, and solve. I don't really get what that means. So whoever solves it, could you plz post explanation of what the above sentence mean as well?
Here's how I did it...
\(P = -25x^2 + 7500x = -25x(x - 300)\)
For \(P\) to be positive, \(x\) must be strictly between 0 and 300, so option E (which gives \(P = 0\)) is out.
Re: In a certain company, the formula for maximizing profits is (24 Mar 2019, 01:37)
What I did was first solve for the values that make the profit zero: setting P = 0 gives x = 0 or x = 300.
These give the lowest (zero) profit, but we want to maximise. Since the parabola is symmetric, a quick look at the answer choices shows that the "middle" value between these two extremes, x = 150, maximises it.
Not very mathematically rigorous, I guess, but it works. No?
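The symmetry argument above can be sketched in a couple of lines (my own illustration):

```python
# Roots-midpoint shortcut: P = -25x(x - 300) vanishes at x = 0 and x = 300;
# a downward-opening parabola is symmetric, so its peak sits at the mean of the roots.
roots = [0, 7500 / 25]
midpoint = sum(roots) / len(roots)
print(midpoint)        # 150.0
```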
Suppose that $A$ is a symmetric non-random matrix and $X\sim N(\mu,\Sigma)$ and $b \in R^n$ is a non-random vector. Then what is the distribution of $$X^tAX+b^tX \quad ?$$
The distribution without the linear term is solved in the answer here(Transformation of multivariate normal sum of chi-squared).
In the case of an invertible $A$ we can write $X^tAX+b^tX=(X-h)^tA(X-h)+k$ where $h=-\frac{1}{2}A^{-1}b$ and $k=-\frac{1}{4}b^tA^{-1}b$. However, the case where $A$ is not invertible is also of interest, as it arises in practice. When $n=1$ this corresponds to $A=0$, and the distribution above is simply normal with mean $\mu_0 = b^t\mu$ and variance $\sigma_0^2 = b^t\Sigma b$. Is there a reducible solution in the more general case of $n>1$ for arbitrary non-invertible $A$? Perhaps an appropriate transformation can disentangle the quadratic form and the linear term, so that in some basis we have an independent sum of a normal and a linear combination of scaled non-central chi-squared variables with 1 degree of freedom.
Attempt: Write $A=P\Lambda P^t$, where $\Lambda$ is a diagonal matrix of the eigenvalues and $P$ has the corresponding unit eigenvectors as its columns, $PP^t = I$. As $A$ is not invertible there are $r>0$ zero eigenvalues. Then we can write $$X^tAX+b^tX = X^tP\Lambda P^tX+b^tPP^tX $$ Set $Y=P^tX$; then $$ X^tAX+b^tX = Y^t \Lambda Y + b^t PY$$ Assume w.l.o.g. that the zero eigenvalues, and the corresponding eigenvectors in $P$, occupy the last $r$ rows/columns of $\Lambda$. Then $$X^tAX+b^tX = (Y^\star)^t \Lambda^\star Y^\star + (b^\star)^tY^\star+(b')^tY' $$ where $Y^\star$ is the first $n-r$ entries of $Y$ and $Y'$ the rest, $b^\star$ is the first $n-r$ entries of $P^tb$ and $b'$ the last $r$, and $\Lambda^\star \in R^{(n-r)\times (n-r)}$. Now the first two terms can be used to complete the square, and the last term involves parts of $Y$ not appearing in the first two; but it is not independent of them, as the covariance matrix of $Y$, $Cov(Y) = P^t\Sigma P$, is not diagonal. The goal is to identify the distribution of $X^tAX+b^tX$.
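As a numerical sanity check on the decomposition step (my own sketch, not from the question; the particular $A$, $b$, $\mu$, $\Sigma$ below are arbitrary illustrative choices), one can verify that the quadratic-plus-linear form is unchanged when rewritten through $Y = P^t X$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Arbitrary symmetric singular A (one zero eigenvalue), plus b, mu, Sigma.
A = np.array([[2., 1., 0.],
              [1., 2., 0.],
              [0., 0., 0.]])
b = np.array([1., -1., 2.])
mu = np.array([0.5, 0., -0.5])
L = np.array([[1., 0., 0.], [0.3, 1., 0.], [0., 0.2, 1.]])
Sigma = L @ L.T

lam, P = np.linalg.eigh(A)          # A = P diag(lam) P^T, columns = eigenvectors
X = rng.multivariate_normal(mu, Sigma, size=10_000)

q1 = np.einsum('ij,jk,ik->i', X, A, X) + X @ b   # X^T A X + b^T X per sample
Y = X @ P                                        # each row is y = P^T x
q2 = (Y**2) @ lam + Y @ (P.T @ b)                # Y^T Lambda Y + (P^T b)^T Y
print(np.allclose(q1, q2))                       # True: same random variable
```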
Suppose that we have an arbitrary set of points $S\subseteq\mathbb{C^2}$ and want to find a polynomial $P(X)$ such that $(x,y)\in S\Rightarrow y=P(x)$ and $P'(x)=0\Rightarrow (x, P(x))\in S$. In other words, find a polynomial that passes through all of the points but has no extrema except possibly at those points.
Because the zeros of the derivative of the polynomial are limited, $P'(X)=A(X-x_1)^{n_1}(X-x_2)^{n_2}\cdots(X-x_{|S|})^{n_{|S|}}$ for some $n_k\in \mathbb N$ and $A\in \mathbb C$, where for each $x_k$ there exists $y_k$ such that $(x_k, y_k)\in S$. This can be integrated after expanding, so $P(X)=A\int\left((X-x_1)^{n_1}(X-x_2)^{n_2}\cdots(X-x_{|S|})^{n_{|S|}}\right)\mathrm dX+C$ is the form of the polynomial.
If any two distinct points have the same x-value, no polynomial can pass through both of them, so any set $S$ containing such a pair has no polynomial $P(X)$ fitting the conditions.
If $\exists (x_i,y), (x_j, y) \in S$ with $x_i < x_j$ such that no $(x_k, y_k)\in S$ has $x_i<x_k<x_j$, the existence of such a polynomial can be ruled out. If a polynomial passes through $(x_i, y)$ and $(x_j, y)$, there must be some point $x_k$ strictly between them with $P'(x_k) = 0$ (this is Rolle's theorem): otherwise the function would be strictly increasing or strictly decreasing on the interval, and since the two endpoints share a y-value, any increase must be countered by a corresponding decrease, contradicting that assumption. Because $P'(x_k)=0$, we would need $(x_k, P(x_k))\in S$; but there are no such points by the condition on the set $S$. Therefore there is no polynomial fitting the conditions for such a set.
Suppose we pick some values for the $n_k$ and integrate $P'(X)$. We can then perform linear regression, with the value of the antiderivative at each x-value in $S$ as the predictor and the actual y-values of the points as the response. This gives us the only $A, C$ that could possibly satisfy the first condition, and they are unique given at least two points.
Given only one point, there is an extra degree of freedom, giving an infinite number of solutions for each possible n-value, each of which corresponds to a degree of polynomial. If we have two points, we have one solution for each combination of 2 n-values. Using binomial coefficients, we can find exactly how many polynomials of each degree satisfy the equation, as well as find each polynomial by simply inputting arbitrary values for $n$, assuming each x- and y- value is unique between the two points.
Once we get to three points, it is possible to construct any number of sets for any polynomial, as long as each set is a superset of the extrema of the polynomial (including complex values). There may not be a solution for all sets of three or more points, however. Is it possible to construct a set of three or more points such that two polynomials satisfy the conditions? Three? An arbitrary number of polynomials? An infinite number of polynomials? Or can there only be one polynomial for sets with cardinality greater than two?
This is motivated by an attempt to improve polynomial interpolation by restricting polynomials between the data points, similarly to splines but with one polynomial instead of many. I doubt that there's a deterministic way to find such polynomials, so I'm just trying to figure out if a solution would be unique for more than two points. |
Quoting from the paper Int. J. Climatol. 21: 1371–1384 (2001):
The Wakeby distribution (WAD), defined by Thomas and introduced by Houghton (1978), is defined by the quantile function: $$ x(F) = \xi + \alpha \frac{1 - \left(1-F\right)^\beta}{\beta} - \gamma \frac{1 - \left(1-F\right)^{-\delta}}{\delta} $$
The article by Houghton, "Birth of a Parent: The Wakeby distribution for modeling flood flows" only says the following about the name:
This paper introduces a new five-parameter distribution, which we have named the Wakeby, ...
Harold A. Thomas, Jr. was Houghton's Ph.D. advisor.
Q: Who or what is the Wakeby distribution named after? |
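For concreteness, the quantile function quoted above is straightforward to code up and use for inverse-transform sampling (my own sketch; the parameter values are arbitrary illustrative choices, not from the paper):

```python
import random

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function x(F) as given in the quoted definition."""
    return (xi
            + alpha * (1 - (1 - F)**beta) / beta
            - gamma * (1 - (1 - F)**(-delta)) / delta)

params = dict(xi=0.0, alpha=5.0, beta=2.0, gamma=1.0, delta=0.3)
print(wakeby_quantile(0.0, **params))     # 0.0: the lower bound is xi
# Inverse-transform sampling: feed uniform variates through x(F).
sample = [wakeby_quantile(random.random(), **params) for _ in range(5)]
```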
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2017)
Performance of algorithms that reconstruct missing transverse momentum in root s=8 TeV proton-proton collisions in the ATLAS detector. The European Physical Journal C, 77 (4). 241. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2883 more authors) (2016)
Addendum to ‘Measurement of the tˉt production cross-section using eμ events with b-tagged jets in pp collisions at √s = 7 and 8 TeV with the ATLAS detector’. European Physical Journal C: Particles and Fields, 76. 642. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2855 more authors) (2016)
Performance of pile-up mitigation techniques for jets in pp collisions at √s=8 TeV using the ATLAS detector. European Physical Journal C, 76 (11). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2851 more authors) (2016)
Search for gluinos in events with an isolated lepton, jets and missing transverse momentum at √s = 13 Te V with the ATLAS detector. European Physical Journal C, 76 (10). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2860 more authors) (2016)
Measurement of the inclusive isolated prompt photon cross section in pp collisions at root s=8 TeV with the ATLAS detector. Journal of High Energy Physics (8). ARTN 005. pp. 1-42.
Aad, G, Abbott, B, Abdallah, J et al. (2866 more authors) (2016)
Measurement of event-shape observables in Z→ℓ+ℓ- events in pp collisions at √s=7 TeV with the ATLAS detector at the LHC. The European Physical Journal C - Particles and Fields, 76 (7). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2867 more authors) (2016)
Identification of high transverse momentum top quarks in pp collisions at √s=8 TeV with the ATLAS detector. Journal of High Energy Physics, 2016. 93. ISSN 1029-8479
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Measurement of the charged-particle multiplicity inside jets from √ s =8 TeV pp collisions with the ATLAS detector. European Physical Journal C: Particles and Fields , 76 (6). 322. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2715 more authors) (2016)
Measurements of and production in collisions at with the ATLAS detector. Physical Review D, 93 (11). ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Reconstruction of hadronic decay products of tau leptons with the ATLAS experiment. The European Physical Journal C, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2867 more authors) (2016)
Measurement of the transverse momentum and $\phi^*_\eta$ distributions of Drell–Yan lepton pairs in proton–proton collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector. The European Physical Journal C - Particles and Fields, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2045 more authors) (2016)
Beam-induced and cosmic-ray backgrounds observed in the ATLAS detector during the LHC 2012 proton-proton running period. Journal of Instrumentation, 11 (05). P05013-P05013.
Aad, G, Abbott, B, Abdallah, J et al. (2862 more authors) (2016)
Measurement of the differential cross-sections of prompt and non-prompt production of J/ψ and ψ(2S) in pp collisions at √s=7 and 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (5). 283. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2866 more authors) (2016)
Search for the standard model Higgs boson produced in association with a vector boson and decaying into a tau pair in pp collisions sqrt s = 8 TeV at with the ATLAS detector. Physical Review D, 93 (9). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2860 more authors) (2016)
Measurements of production cross sections in collisions at with the ATLAS detector and limits on anomalous gauge boson self-couplings. Physical Review D, 93 (9). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2856 more authors) (2016)
Search for supersymmetry at $\sqrt{s}=13$ TeV in final states with jets and two same-sign leptons or three leptons with the ATLAS detector. The European Physical Journal C, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2868 more authors) (2016)
Observation of Long-Range Elliptic Azimuthal Anisotropies in root s=13 and 2.76 TeV pp Collisions with the ATLAS Detector. PHYSICAL REVIEW LETTERS, 116 (17). ARTN 172301. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Probing lepton flavour violation via neutrinoless τ⟶3μ decays with the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (5). 232. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2871 more authors) (2016)
Search for dark matter produced in association with a Higgs boson decaying to two bottom quarks in pp collisions at root s=8 TeV with the ATLAS detector. PHYSICAL REVIEW D, 93 (7). ARTN 072007. ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2863 more authors) (2016)
Search for new phenomena in events with at least three photons collected in pp collisions at $\sqrt{s}=8$ TeV with the ATLAS detector. The European Physical Journal C, 76 (4). ISSN 1434-6044
Aad, G, Abajyan, T, Abbott, B et al. (2840 more authors) (2016)
Measurement of the centrality dependence of the charged-particle pseudorapidity distribution in proton–lead collisions at $\sqrt{s_{NN}}=5.02$ TeV with the ATLAS detector. The European Physical Journal C, 76 (4). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2863 more authors) (2016)
Search for anomalous couplings in the W tb vertex from the measurement of double differential angular decay rates of single top quarks produced in the t-channel with the ATLAS detector. Journal of High Energy Physics, 2016 (4).
Aad, G, Abbott, B, Abdallah, J et al. (2782 more authors) (2016)
Search for magnetic monopoles and stable particles with high electric charges in 8 TeV pp collisions with the ATLAS detector. Physical Review D, 93 (5). 052009. ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2772 more authors) (2016)
Measurement of the ZZ Production Cross Section in pp Collisions at root s=13 TeV with the ATLAS Detector. Physical Review Letters, 116 (10). 101801. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2769 more authors) (2016)
Search for new phenomena in dijet mass and angular distributions from pp collisions at root s=13 TeV with the ATLAS detector. Physics Letters B, 754. pp. 302-322. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (2844 more authors) (2016)
Search for new phenomena with photon plus jet events in proton-proton collisions at TeV with the ATLAS detector. Journal of High Energy Physics (3). 41. ISSN 1029-8479
Aad, G, Abbott, B, Abdallah, J et al. (2835 more authors) (2016)
Search for strong gravity in multijet final states produced in pp collisions at root s=13 TeV using the ATLAS detector at the LHC. Journal of High Energy Physics. 26. ISSN 1029-8479
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Search for the electroweak production of supersymmetric particles in root s=8 TeV pp collisions with the ATLAS detector. Physical Review D, 93 (5). 052002. ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2879 more authors) (2016)
Centrality, rapidity, and transverse momentum dependence of isolated prompt photon production in lead-lead collisions at TeV measured with the ATLAS detector. Physical Review C, 93 (3). ISSN 0556-2813
Aad, G, Abbott, B, Abdallah, J et al. (2856 more authors) (2016)
Search for invisible decays of a Higgs boson using vector-boson fusion in pp collisions at √s=8 TeV with the ATLAS detector. Journal of High Energy Physics, 2016. 172. ISSN 1126-6708
Aad, G, Abbott, B, Abdallah, J et al. (2862 more authors) (2016)
Search for a high-mass Higgs boson decaying to a W boson pair in pp collisions at $\sqrt{s}=8$ TeV with the ATLAS detector. Journal of High Energy Physics, 2016 (1).
Aad, G, Abbott, B, Abdallah, J et al. (2854 more authors) (2016)
Measurements of fiducial cross-sections for $t\bar{t}$ production with one or two additional b-jets in pp collisions at $\sqrt{s}=8$ TeV using the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (1). 11. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2824 more authors) (2016)
Measurements of the Higgs boson production and decay rates and coupling strengths using pp collision data at $\sqrt{s}=7$ and 8 TeV in the ATLAS experiment. European Physical Journal C: Particles and Fields, 76. 6. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2854 more authors) (2015)
ATLAS Run 1 searches for direct pair production of third-generation squarks at the Large Hadron Collider. European Physical Journal C: Particles and Fields, 75 (10). 510. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2835 more authors) (2015)
Search for Higgs boson pair production in the $b\bar{b}b\bar{b}$ final state from pp collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (9). 412. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2825 more authors) (2015)
Search for heavy long-lived multi-charged particles in pp collisions at root s=8 TeV using the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (8). 362. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2819 more authors) (2015)
Constraints on the off-shell Higgs boson signal strength in the high-mass ZZ and WW final states with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (7). 335. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2015)
Search for a new resonance decaying to a W or Z boson and a Higgs boson in the $\ell \ell / \ell \nu / \nu \nu + b \bar{b}$ final states with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (6). 263. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2823 more authors) (2015)
Determination of spin and parity of the Higgs boson in the $WW^*\rightarrow e \nu \mu \nu$ decay channel with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 231. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2815 more authors) (2015)
Observation and measurements of the production of prompt and non-prompt $J/\psi$ mesons in association with a $Z$ boson in $pp$ collisions at $\sqrt{s}= 8$ TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 229. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2821 more authors) (2015)
Search for direct pair production of a chargino and a neutralino decaying to the 125 GeV Higgs boson in $\sqrt{s} = 8$ TeV $pp$ collisions with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 208. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2015)
Search for $W' \rightarrow tb \rightarrow qqbb$ decays in $pp$ collisions at $\sqrt{s}$ = 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (4). 165. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2015)
Search for Higgs and Z Boson Decays to J/ψγ and ϒ(nS)γ with the ATLAS Detector. Physical Review Letters, 114 (12). 121801. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2881 more authors) (2015)
Simultaneous measurements of the tt¯, W+W−, and Z/γ∗→ττ production cross-sections in pp collisions at √s=7 TeV with the ATLAS detector. Physical Review D - Particles, Fields, Gravitation and Cosmology, 91 (5). 052005. ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2015)
Search for dark matter in events with heavy quarks and missing transverse momentum in pp collisions with the ATLAS detector. European Physical Journal C, 75 (2). 92. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2896 more authors) (2015)
Measurements of Higgs boson production and couplings in the four-lepton channel in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D, 91 (1). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2014)
Search for nonpointing and delayed photons in the diphoton and missing transverse momentum final state in 8 TeV pp collisions at the LHC using the ATLAS detector. Physical Review D, 90 (11). ISSN 1550-7998
Aad, G, Abajyan, T, Abbott, B et al. (2793 more authors) (2014)
Measurements of normalized differential cross sections for tt¯ production in pp collisions at √(s)=7 TeV using the ATLAS detector. Physical Review D, 90 (7). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2886 more authors) (2014)
Measurement of the Higgs boson mass from the H→γγ and H→ZZ∗→4ℓ channels in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D, 90 (5). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2878 more authors) (2014)
Search for high-mass dilepton resonances in pp collisions at √s = 8 TeV with the ATLAS detector. Physical Review D, 90. 052005. ISSN 1550-7998
Aad, G, Abajyan, T, Abbott, B et al. (2920 more authors) (2013)
Evidence for the spin-0 nature of the Higgs boson using ATLAS data. Physics Letters B, 726 (1-3). pp. 120-144. ISSN 0370-2693 |
$\bullet$ What are these (length) scales that we often talk about? Do we mean the wavelength of some Fourier modes of something?
Yes, these are the comoving wavelengths of Fourier modes (see next)
$\bullet$ If yes, do we mean the Fourier modes of the density fluctuations $\delta\rho/\rho$ or that of the inflaton field fluctuation $\delta\phi(\textbf{x},t)$ itself?
These are essentially one and the same: fluctuations of the inflaton field give rise to curvature perturbations, which manifest as density perturbations after inflation ends.
$\bullet$ Why should these scales leave or re-enter the horizon?
Comoving length scales grow along with the scale factor of the universe, which, during inflation, grows exponentially in time: $a(t) \sim e^{Ht}$. Meanwhile, the horizon, which is proportional to $H^{-1}$, is nearly constant during inflation. The quantity $aH = \dot{a}$ measures the ratio of a given comoving length scale to the horizon: how this quantity varies in time will tell us whether comoving length scales grow at a greater or lesser rate than the horizon. During inflation, $\frac{\mathrm{d}}{\mathrm{d}t}(aH) = \ddot{a} > 0$, hence, scales eventually grow to lengths larger than the Hubble scale. Conversely, after inflation, $\ddot{a} < 0$ and they will eventually re-enter the horizon.
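As a toy numerical illustration of this exit condition (all numbers and units below are arbitrary assumptions, chosen only to show the crossing):

```python
import numpy as np

# During inflation H is ~constant and a(t) ~ exp(H t), so a comoving
# wavelength lam_com has physical size lam_com * a(t) while the horizon
# stays at 1/H. The mode "exits" once lam_com * a(t) > 1/H.
H = 1.0
lam_com = 0.01                      # illustrative comoving wavelength
t = np.linspace(0.0, 10.0, 1001)
lam_phys = lam_com * np.exp(H * t)
exits = lam_phys > 1.0 / H
assert not exits[0] and exits[-1]   # starts sub-horizon, ends super-horizon
```

The crossing happens at $t = H^{-1}\ln(1/(H\,\lambda_{\rm com}))$, here $\ln 100 \approx 4.6$ in these toy units.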
Here's a picture of how this happens during inflation:
The red circle marks the horizon, which grows more slowly than the background during inflation. An object at rest with respect to the background (comoving) eventually overtakes the horizon.
$\bullet$ What is (are) the observable effect(s) of these scales?
These redshifted fluctuations give rise to the well-known temperature anisotropies observed in the CMB. They eventually grow to form the large-scale structure we see around us in the universe: galaxies, galaxy clusters, and so on.
$\bullet$ How do we distinguish between the roles of those length scales which first left and re-entered the horizon and those which always remained inside the horizon?
It's really just a matter of scale. Perhaps this graphic will help:
Here we have a couple of different comoving length scales evolving during inflation (left of the vertical "Inflation Ends" line) and after. Notice that those length scales that grow the largest by the end of inflation "started" first (this picture has meaning when we associate these length scales with the Fourier wavelengths of the scalar field fluctuations). There will of course be many scales that never grow to super-horizon sizes during inflation, having started too late.
Now, there is an important difference between the 'always subhorizon' and 'once superhorizon' modes in terms of the matter perturbations that they source. Superhorizon modes undergo a 'quantum-to-classical' transition, where they are manifested as curvature perturbations. As they re-enter the horizon, they source acoustic oscillations in the post-inflationary plasma; those that enter earliest oscillate for longer and so damp the most by the time the CMB is generated at recombination. So, the longest-wavelength modes are most evident in the temperature spectrum of the CMB (these are the large, broad peaks in the correlation spectrum).
Modes that never leave the horizon are in some sense "not real" because they never undergo the quantum-to-classical transition. They don't ever become bona fide curvature perturbations. Phenomenologically, this theoretical distinction is irrelevant, though, since any perturbations generated by such small-wavelength modes would be damped out by recombination.
Suppose that $G$ is a finitely generated group and $A$ is a finite set. Then we shall give $A$ the discrete topology and $A^{G}$ the product topology; in particular $A^{G}$ is compact and totally disconnected.
If $x\in A^{G}$ and $g\in G$, then define $gx\in A^{G}$ by letting $gx(h)=x(g^{-1}h)$.
We say that $F:A^{G}\rightarrow A^{G}$ is $G$-equivariant if $F(gx)=g F(x)$ for each $g\in G$ and $x\in A^{G}$.
Then a cellular automaton is a function $F:A^{G}\rightarrow A^{G}$ which is both continuous and $G$-equivariant.
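By the Curtis–Hedlund–Lyndon theorem, the continuous $G$-equivariant maps are exactly those computed by a finite local rule. Here is a minimal sketch for $G=\mathbb{Z}$, using a periodic configuration as a finite stand-in for an element of $A^{\mathbb{Z}}$; the 3-cell majority rule and all names are illustrative choices, not part of the question:

```python
def ca_step(x, rule, radius):
    """Apply a local rule of the given radius to a periodic configuration."""
    n = len(x)
    return tuple(rule(tuple(x[(i + j) % n] for j in range(-radius, radius + 1)))
                 for i in range(n))

maj = lambda w: int(sum(w) >= 2)    # 3-cell majority rule (radius 1, A = {0, 1})
shift = lambda x: x[-1:] + x[:-1]   # action of the generator of Z

x = (1, 1, 0, 0, 1, 0, 0, 0)
# equivariance: F(gx) = g F(x)
assert ca_step(shift(x), maj, 1) == shift(ca_step(x, maj, 1))
```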
We say that a language $L\subseteq A^{*}$ is accepted in polynomial time by a cellular automaton over the group $G$ if there exists some $B$ with $A\subseteq B$ along with some special elements $0,u,v\in B\setminus A$, a cellular automaton $F:B^{G}\rightarrow B^{G}$, an element $g\in G$ of infinite order, and a polynomial $p$ that satisfies the following property:
Suppose that $s\in A^{*}$. Then let $x\in B^{G}$ be the configuration where $x[g^{i}]=s[i]$ whenever $i<|s|$ and $x[h]=0$ whenever $h\not\in\{e,g,\dots,g^{|s|-1}\}$. Then there is some natural number $n<p(|s|)$ such that $F^{n}(x)(e)\in\{u,v\}$. Furthermore, if $n$ is the least natural number such that $F^{n}(x)(e)\in\{u,v\}$, then $s\in L$ if and only if $F^{n}(x)(e)=v$.
Is there a characterization of the groups $G$ such that a language $L$ is computable in polynomial time if and only if it is accepted in polynomial time by a cellular automaton over the group $G$?
My teacher in Math Team gave the following question to us.
Solve $$a!b!=a!+b!+c!$$ where $a$, $b$ and $c$ are nonnegative integers.
I found only one solution by trial and error and it is $(a,b,c)=(3,3,4)$.
Any help is appreciated!
A sketch:
There is no solution with $a\lt2$ (this would imply $c!=-1$) or $a=2$ (this would imply $b!-c!=2$), so by symmetry $a,\,b\ge3$. Without loss of generality $a\ge b$. Since$$a!=\frac{a!}{b!}+1+\frac{c!}{b!}$$is an integer, $c\ge b$.
In the case $a=b$, $b!=2+\frac{c!}{b!}$ is a multiple of $3$, so $c\in\{b,\,b+1,\,b+2\}$. These subcases equate $b!$ to a polynomial in $b$, so only small $b$ can be solutions.
In the case $a>b$, since $b!=1+\frac{b!}{a!}+\frac{c!}{a!}$ is an integer, $a>c\ge b$. The above display-line equation also tells us $\frac{a!}{b!},\,\frac{c!}{b!}$ aren't both multiples of $3$, so $c\in\{b,\,b+1,\,b+2\}$. We can exhaust these using $a!\ge 24,\,b!\ge6$. The case $c=b$ gives $b!=\frac{a!}{a!-2}$, which gives no solutions; $c=b+1$ gives $b!=\frac{a!}{a!-b-2}$, while $c=b+2$ gives $b!=\frac{a!}{a!-b^2-3b-3}$.
Let $x,y,z$ be the highest powers of $2$ that divide $a!, b!$ and $c!$, respectively. Then $x+y$ is the highest power of $2$ that divides $a!b!$.
Also, since you clearly have that $$(a!-1)(b!-1)=c!+1$$ you can obtain that $c>a$ and $c>b$ (apart from some small values of $a,b,c$ which you've probably already ruled out using direct check)
We can now assume WLOG that $a\leq b \leq c$. Then we have:
$$a!+b!+c!=2^xa_1+2^yb_1+2^zc_1=2^x(a_1+2^{y-x}b_1+2^{z-x}c_1)$$ where $a_1,b_1$ and $c_1$ are odd numbers.
Since trivially $x<x+y$ and $2^{x+y}|\text{LHS}$ then we must have that $a_1+2^{y-x}b_1+2^{z-x}c_1$ is even.
If now you assume that $x\neq y$ then $2^{y-x}b_1$ is even, hence we must have $z=x$, i.e. that $c-a=0$ or $c-a=1$ which implies that
$$(a!)^2\leq a!b!\leq a!+(a+1)!+(a+1)!$$ and that can hold for only small values of $a$ again.
Hence, we must have that $x=y$ hence $b-a=0$ or $b-a=1$.
If $b-a=0$ then we have $$(a!)^2=2a!+c!.$$ As @J.G. showed, this is equivalent to
$$a!=2+\frac{c!}{a!}$$ hence $c-a=0, 1 \text{ or } 2$ and it reduces to a check for small values.
Finally, if $b-a=1$ then you have $$(a!)^2(a+1)=a!(a+2)+c!$$ which is equivalent to
$$(a+1)!=a+2+\frac{c!}{a!}$$
Now, (unless $c=a$ which again reduces for to small values of a) we have that $a+1| \text{ LHS }$, and $a+1| \frac{c!}{a!}$, hence $a+1|a+2$ which is a contradiction.
Hopefully there aren't any mistakes, though you will have to fill in the gaps, namely do the small values checks.
I claim that $a=b$ or $a=c$. For the sake of this problem, assume $b \geq a$ (proving for $b \leq a$ is the exact same). Then $$b!=1+\frac{b!}{a!}+\frac{c!}{a!}.$$ Due to closure of the integers under addition/subtraction, we get that $c\geq a$.
Case 1: ($b \leq c $)
If $b \leq c$, then $$b!-1=\frac{b!}{a!}+\frac{c!}{a!}=(\frac{c!}{b!}+1)\frac{b!}{a!}.$$ But $b> a$ implies that $b|(b!-1)$, a contradiction, so $b=a$. Then this equates to solving $(a!)^2=2a!+c!$ or $a!(a!-2)=c!$. For $a\geq 3$, clearly $3 \not|(a!-2)$, so $c=a+1$ or $c=a+2$ (else $c = (a+3)(a+2)(a+1)\dots $ has $3$ as a factor). The equation $a!(a!-2)=(a+1)!$ has one solution at $a=3$ and the equation $a!(a!-2)=(a+2)!$ has none. Then the only solution of this form is $(a,b,c)=(3,3,4)$.
Case 2: ($b>c$)
If $b>c$, then $$b!-1=\frac{b!}{a!}+\frac{c!}{a!}=(\frac{b!}{c!}+1)\frac{c!}{a!}$$
Because $c<b$, $[c(c-1)(c-2)\dots(c-a+1)] |b!$, whence $[c(c-1)(c-2)\dots(c-a+1)]\not| (b!-1)$ for $c > a$. It follows that $c=a$, which then equates to solving $$a!b!=2a!+b!$$ Rearranging both sides gets $a!(b!-2)=b!$, which has no solutions because $b!/(b!-2)$ doesn't take an integer value for $b \geq 3$. Then there are no solutions of the form $(a,b,c) = (a,b,a)$.
Conclusion:
From all of this, we get that $(3,3,4)$ is the only solution to $a!b! = a!+b!+c!$. (Note that I used guessing and checking to prove that $a,b,c > 2$) |
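The arguments above can be cross-checked by brute force; the search bounds below are a pragmatic choice for illustration, consistent with the growth estimates in the answers:

```python
from math import factorial

# Exhaustive check of a! b! = a! + b! + c! over a small range,
# taking a <= b by symmetry.
solutions = [
    (a, b, c)
    for a in range(9) for b in range(a, 9) for c in range(17)
    if factorial(a) * factorial(b) == factorial(a) + factorial(b) + factorial(c)
]
print(solutions)   # [(3, 3, 4)]
```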
Iterative reweighted least squares (IRLS) is used when errors are heteroscedastic. Let us assume that error comes from a distribution where its mean is zero and the variance is a function of the absolute value of the input. From what I have read, IRLS is applicable here and will give better results than OLS.
My question is can I solve this using MLE? Let us say that I define my output to be from a normal distribution, whose probability density is written as follows:
\begin{equation} \prod_{i=1}^n p(y_i | x_i, w_0, w_1, m, n) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\left(-\frac{(y_i - (w_0 + w_1\cdot x_i))^2}{2\sigma_i^2}\right) \end{equation}
where $\sigma_i = m|x_i| + n$, and the absolute value of $x_i$ is denoted by $|x_i|$. For simplicity, $x, y \in \mathbb{R^1}$.
Now we can solve this by MLE to find values of $w_0, w_1, m, n$.
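A numerical sketch of this MLE, with $\sigma = m|x| + n$ applied per observation; the synthetic data, starting values, and choice of optimizer are all assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic heteroscedastic data: y = w0 + w1*x + eps, sd(eps) = m|x| + n
rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, 400)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * np.abs(x) + 0.2)

def neg_log_lik(theta):
    w0, w1, m, n = theta
    sigma = np.abs(m) * np.abs(x) + np.abs(n) + 1e-9   # keep sigma > 0
    resid = y - (w0 + w1 * x)
    return np.sum(np.log(sigma) + 0.5 * (resid / sigma) ** 2)

fit = minimize(neg_log_lik, x0=[0.0, 0.0, 1.0, 1.0],
               method="Nelder-Mead", options={"maxiter": 5000})
w0_hat, w1_hat, m_hat, n_hat = fit.x
```

With 400 points, the fitted `w0_hat` and `w1_hat` land close to the generating values 1 and 2.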
Is this better or worse than IRLS? I haven't seen much discussion of this method, so is there a problem with it? The only disadvantage that comes to my mind is that we are assuming a functional form for the variance, which, if it turns out to be wrong, can greatly affect the regression quality. But then, even IRLS assumes a diagonal weight matrix where each entry is filled as follows (from here):
If a residual plot against a predictor exhibits a megaphone shape, then regress the absolute values of the residuals against that predictor. The resulting fitted values of this regression are estimates of σi. (And remember $w_i=\frac{1}{\sigma^2_i}$)
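A toy implementation of that reweighting loop (the data, iteration count, and variable names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-5.0, 5.0, 400)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.2 + 0.5 * np.abs(x))

coef = np.polyfit(x, y, 1)                   # start from the OLS fit
for _ in range(5):
    resid = y - np.polyval(coef, x)
    # Regress |residuals| on the predictor to estimate sigma_i (up to a
    # constant factor, which cancels in the relative weights).
    c = np.polyfit(np.abs(x), np.abs(resid), 1)
    sigma_hat = np.clip(np.polyval(c, np.abs(x)), 1e-3, None)
    # np.polyfit's w multiplies the residuals, so pass 1/sigma_hat
    coef = np.polyfit(x, y, 1, w=1.0 / sigma_hat)

slope, intercept = coef
```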
Another possible disadvantage is that MLE may be more sensitive than IRLS.
This paper is the only thing that I found comparing MLE and IRLS, but was a little difficult to understand.
Any thoughts, or is anyone aware of any studies? Also, I am still learning about this, so please point out any mistakes in my analysis.
There are many articles like this one, Testing the isotropy of the Universe by using the JLA compilation of type-Ia supernovae (PDF, arxiv.org), trying to search for a dipole effect in cosmology with type Ia supernovae (used as standard candles). The idea of this kind of search is to test the cosmological principle with a phenomenological approach, independently of any given model (e.g. the Tolman-Bondi anisotropic model).
They consider a dipole effect on the distance modulus $ \mu $: $$ \mu \leftarrow \mu \times \left(1+A_D (\hat{\textbf{n}}\cdot \hat{\textbf{p}})\right)$$ where $\hat{\textbf{n}}$ is a unit vector pointing in the direction $(l,b)$ of the dipole with an amplitude of $A_D$, and $\hat{\textbf{p}}$ points in the direction of each type Ia supernova. The distance modulus is related to the luminosity distance $d_l$ by $ \mu =5 \log \left(\frac{d_l}{10\,\text{pc}}\right)$.
There is something that bothers me a bit:
Our peculiar velocity with respect to the universe is usually estimated from the CMB dipole measurements, right? And so the redshifts of SNIa are computed in the CMB frame using this correction.
So, supposing we have that kind of cosmological anisotropy, how is it not mistakenly corrected away along with the dipole anisotropy of the CMB (and could this also be why all searches of this kind give us an amplitude of anisotropy compatible with 0)?
I mean, the CMB dipole implies a velocity of $369.5\pm3.0$ km/s in the direction of $l=264.4^{\circ}\pm0.3^{\circ}$ and $b= 48.4^{\circ}\pm0.5^{\circ}$ according to COBE measurements. But what if the CMB dipole is not just due to our peculiar velocity but also to an unknown "cosmological effect"? This effect would be corrected away, and nothing would be seen when trying to make a dipole fit on SNIa measurements, right?
OBD Reasoner
The OBD reasoner uses definitions of transitive relations, relation hierarchies, and relation compositions to infer implicit information. These inferences are added to the OBD Phenoscape database. This section documents the inherited code in Perl and embedded SQL, that extracts implicit inferences from the downloaded ontologies and annotations of ZFIN and Phenoscape phenotypes.
Latest revision as of 22:13, 25 May 2018
Contents
1 Notation
2 Implemented Relation Properties
3 Phenoscape-specific rules
4 Sweeps
5 Future directions

Notation

Classes, instances, relations
When describing rules below, we use the following notations:
A, B, C: classes (as subjects or objects). Note that relationship concepts can also appear as subject or object in an assertion.
a, b, c: instances, or individuals (as subjects or objects)
R: relationship (predicate)
R(A, B): A R B; for example, is_a(A, B) is equivalent to A is_a B. This is the functional form of assertions.
Reification: assertions about assertions, i.e., A, B, ... may also be assertions. For example, the yellow that inheres_in a particular dorsal fin is_a yellow, which we can write formally as: is_a(inheres_in(yellow, dorsal_fin), yellow).

Conjunction and Implication

The double arrow (<math>\Rightarrow</math>) is also called directional implication. It can be translated into English as "it implies" or "it follows that." The "cap" symbol (<math>\and</math>), an "A" without the crossbar, is the FOL construct to specify conjunction and can be translated to "and" in plain English.

Quantification of instances
In first order logic (FOL), it is common to assert statements about all possible instances of a concept in the real world. Let us start with the assertion, "All puppies are dogs." This can be stated as shown below in (1)
<math>\forall</math> A: instance_of(A, Puppy) <math>\Rightarrow</math> instance_of(A, Dog) -- (1)
The inverted A (<math>\forall</math>) is called the universal quantifier and can be translated to "for every" or "for all" in plain English. Similarly, the colon (:) in the FOL statement above can be read as "such that." Therefore, the sentence above translated into English reads:

"For all A such that A is a Puppy, it follows that A is a Dog"

or, even simpler, "All puppies are dogs." Note this is a simple assertion of the semantics of the is_a predicate that is so common to Phenoscape. The formulation as is_a(Puppy, Dog) is a class-level abstraction from the quantified instances we have used in (1).
The FOL statement below states the transitive property of the is_a relation:

<math>\forall</math> A, B, C: is_a(A, B) <math>\and</math> is_a(B, C) <math>\Rightarrow</math> is_a(A, C) -- (2)
The statement (2) above can be translated to read:
For all A, B, and C, such that A is a B, and B is a C, it follows that A is a C
The existential quantifier (<math>\exists</math>) can be translated to "there exists" or "at least one" in plain English. Now consider the statement, "Some birds are flightless." This can be stated as shown below:

<math>\exists</math> A: instance_of(A, Bird) <math>\and</math> instance_of(A, Flightless thing) -- (3)

This can be translated into plain English as:

"There exists an A such that A is a Bird and A is a Flightless thing"
These are just some of the many constructs from first order logic which find common use in the Phenoscape project. There is a more full-fledged introduction to FOL on Wikipedia.
Implemented Relation Properties

Relation Transitivity

Rule: <math>\forall</math> A, B, C, R with R transitive: R(A, B) <math>\and</math> R(B, C) <math>\Rightarrow</math> R(A, C)
Transitive relationships are the simplest inferences to be extracted and comprise the majority of new assertions added by the reasoner. When the ontologies are loaded into the database, every transitive relation is marked with a specific value of a property called is_transitive prior to loading. Transitive relationships include (ontology in brackets):

is_a (OBO Relations)
has_part (OBO Relations)
part_of (OBO Relations)
integral_part_of (OBO Relations)
has_integral_part (OBO Relations)
proper_part_of (OBO Relations)
has_proper_part (OBO Relations)
improper_part_of (OBO Relations)
has_improper_part (OBO Relations)
location_of (OBO Relations)
located_in (OBO Relations)
derives_from (OBO Relations)
derived_into (OBO Relations)
precedes (OBO Relations)
preceded_by (OBO Relations)
develops_from (Zebrafish Anatomy)
anterior_to (Spatial Ontology)
posterior_to (Spatial Ontology)
proximal_to (Spatial Ontology)
distal_to (Spatial Ontology)
dorsal_to (Spatial Ontology)
ventral_to (Spatial Ontology)
surrounds (Spatial Ontology)
surrounded_by (Spatial Ontology)
superficial_to (Spatial Ontology)
deep_to (Spatial Ontology)
left_of (Spatial Ontology)
right_of (Spatial Ontology)
complete_evidence_for_feature (Sequence Ontology)
evidence_for_feature (Sequence Ontology)
derives_from (Sequence Ontology)
member_of (Sequence Ontology)
exhibits (Phenoscape Ontology)
Transitive relations are extracted from the database by the reasoner, and transitive relationships are computed for each relation. For example, given that is_a is a transitive relation, and if the database holds A is_a B and B is_a C, then the reasoner computes A is_a C and adds this new assertion to the database. Similarly, new inferred assertions are added to the database for every transitive relation.

Note: Relation transitivity is the only relation property whose definition is (indirectly) extracted by the reasoner from the loaded ontologies (using the is_transitive metadata tag) in order to compute inferences. Although definitions of many of the other relation properties (such as relation reflexivity) can be found in the ontologies as well, in the current implementation inference mechanisms associated with these relation properties are hard coded into the reasoner.

Relation (role) compositions

Rule: <math>\forall</math> A, B, C, R: R(A, B) <math>\and</math> is_a(B, C) <math>\Rightarrow</math> R(A, C)
Rule: <math>\forall</math> A, B, C, R: is_a(A, B) <math>\and</math> R(B, C) <math>\Rightarrow</math> R(A, C)
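These rules can be sketched as a naive fixpoint loop over a set of (subject, relation, object) triples. This is an illustration only, not the actual Perl/embedded-SQL implementation, and the fact base and relation names are made-up examples:

```python
TRANSITIVE = {"is_a", "part_of"}   # illustrative subset of the full list above

def saturate(facts):
    """Apply the transitivity and composition rules until no new triples appear."""
    facts = set(facts)
    while True:                     # each pass corresponds to one "sweep"
        new = set()
        for (a, r1, b) in facts:
            for (c, r2, d) in facts:
                if b != c:
                    continue
                if r1 == r2 and r1 in TRANSITIVE:
                    new.add((a, r1, d))     # R(A,B) and R(B,C) => R(A,C)
                if r2 == "is_a":
                    new.add((a, r1, d))     # R(A,B) and is_a(B,C) => R(A,C)
                if r1 == "is_a":
                    new.add((a, r2, d))     # is_a(A,B) and R(B,C) => R(A,C)
        if new <= facts:
            return facts            # deductive closure reached
        facts |= new

inferred = saturate({("A", "is_a", "B"), ("B", "is_a", "C"),
                     ("D", "part_of", "B")})
assert ("A", "is_a", "C") in inferred       # transitivity
assert ("D", "part_of", "C") in inferred    # composition with is_a
```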
Relation (role) compositions are of the form A R1 B, B R2 C => A (R1|R2) C. For example, given A is_a B and B part_of C, then A part_of C. The reasoner computes such inferences and adds them to the database.

is_a Relation Reflexivity

Rule: <math>\forall</math> A, R with R reflexive <math>\Rightarrow</math> A R A
Reflexive relations relate their arguments to themselves. A good example: "A rose is_a rose." The is_a relation is reflexive. In the database, every class, instance, or relation (having a corresponding identifier in the Node table of the database) is inferred by the reasoner to be related to itself through the is_a relation. Given a class called Siluriformes (with identifier TTO:302), the reasoner adds TTO:302 is_a TTO:302 to the database.
The subsumption (is_a) relation is the only reflexive relation that is handled by the reasoner. Other reflexive relations abound in the real world; subset_of is a good mathematical example from the domain of set theory. Every set is a subset of itself. Such relations are NOT dealt with by the reasoner.

Relation Hierarchies

Rule: <math>\forall</math> A, B, R1, R2: R1(A, B) <math>\and</math> is_a(R1, R2) <math>\Rightarrow</math> R2(A, B)
An example: if A father_of B and father_of is_a parent_of, then A parent_of B.

Relation Chains

Rule: <math>\forall</math> A, B, C: inheres_in(A, B) <math>\and</math> part_of(B, C) <math>\Rightarrow</math> inheres_in_part_of(A, C)
Relation chains are a special case of relation composition. Component relations are accumulated into an assembly relation. Specifically, instances of the relation inheres_in_part_of are accumulated from instances of the relations inheres_in and part_of: IF A inheres_in B and B part_of C, THEN A inheres_in_part_of C. This relation chain is specified by a holds_over_chain property in the inheres_in_part_of stanza of the Relation Ontology. However, the actual rule is hard coded into the OBD reasoner and not derived from the ontology.

Decomposing "post-composition" relations

Rule: <math>\forall</math> Q, E: inheres_in(Q, E) <math>\Rightarrow</math> inheres_in( inheres_in(Q, E), E)
Rule: <math>\forall</math> Q, E: inheres_in(Q, E) <math>\Rightarrow</math> is_a( inheres_in(Q, E), Q)
Phenotype annotations are typically "post-composed", where an entity and quality are combined into a Compositional Description. For example, an annotation about the quality decreased size (PATO:0000587) of the entity Dorsal Fin (TAO:0001173) may be post-composed into a Compositional Description that looks like PATO:0000587^OBO_REL:inheres_in(TAO:0001173). Instances of is_a and inheres_in relations are extracted from post-compositions like this. In the above example, the reasoner extracts:

PATO:0000587^OBO_REL:inheres_in(TAO:0001173) OBO_REL:inheres_in TAO:0001173, and
PATO:0000587^OBO_REL:inheres_in(TAO:0001173) OBO_REL:is_a PATO:0000587

Phenoscape-specific rules
This section describes the Phenoscape-specific rules added to the OBD reasoner.
PATO Character State relations
The Phenotypes and Traits Ontology (PATO) contains definitions of qualities, many of which are used in phenotype descriptions. These qualities are partitioned into various subsets (or slims) such as attribute slims, absent slims, and value slims. Attribute and value slims are mutually exclusive subsets. Attribute slims include qualities that correspond to Characters of anatomical entities, Color or Shape for example. Value slims include qualities, which correspond to States that a Character may take, for example Red and Blue for the Color character and Curved and Round for the Shape character. These relationships are not explicitly defined in the PATO ontology but can be inferred using the relations shown below
PATO:0000587 oboInOwl:inSubset value_slim
PATO:0000587 OBO_REL:is_a PATO:0000117
PATO:0000117 oboInOwl:inSubset attribute_slim
From these definitions, the relationship
PATO:0000587 PHENOSCAPE:value_for PATO:0000117
can be inferred by the reasoner. Ideally, the inference rule for this can be represented as
Rule: <math>\forall</math>V, A: in_Subset(V, value_slim) <math>\and</math> is_a(V, A) <math>\and</math> in_subset(A, attribute_slim) <math>\Rightarrow</math> value_for(V, A)
However, not all qualities are partitioned into one of the attribute or value slim subsets. In such cases, the super quality of these qualities is discovered by the reasoner and checked to find out if it is in the attribute or value slim subsets. This process continues until a quality belonging to the attribute slim subset is found. This can be represented as
Rule: <math>\forall</math>V, A: NOT in_subset(V, value_slim) <math>\and</math> is_a(V, A) <math>\and</math> in_subset(A, attribute_slim) <math>\Rightarrow</math> value_for(V, A)
Lastly, there are orphan qualities in PATO which are not related to any other qualities by subsumption and which do not belong to the attribute or value slim subsets. These are grouped under an unknown or undefined attribute.
Rule: <math>\forall</math> V, A: NOT in_subset(V, value_slim) <math>\and</math> NOT is_a(V, A) <math>\Rightarrow</math> value_for(V, unknown attribute)

The Balhoff rule

Rule: <math>\forall</math> A, B, x: is_a(A, B) <math>\and</math> exhibits(A, x) <math>\Rightarrow</math> exhibits(B, x)
This rule was proposed by Jim Balhoff to reason upwards in a (typically taxonomic) hierarchy using the exhibits relation. A relevant example: GIVEN THAT Danio rerio exhibits a round fin AND Danio rerio is_a Danio, THEN Danio exhibits a round fin. Note that exhibits has someOf semantics, so the inference is that some of Danio exhibit the phenotype.
This is the exact opposite of the genus differentia rule, which postulates reasoning only downwards in a hierarchy.

Sweeps
A reasoner functions over several sweeps. In each sweep, new implicit inferences are derived from the explicit annotations (as described in the previous sections) and added to the database. In the following sweep, inferences added in the previous sweep are used to extract further inferences. This process continues until no additional inferences are added in a sweep. This is when the deductive closure of the inference procedure is reached. No further inferences are possible and the reasoner exits.

Future directions
Possible future directions for the extension of the reasoner include adding more relation properties as well as some outstanding technical issues.
Relation Properties to be implemented
The following relation properties may be implemented in the reasoner in the future if necessary.
Relation Symmetry

Rule: <math>\forall</math> A, B, R with R symmetric: R(A, B) <math>\Rightarrow</math> R(B, A)
An example of a symmetric relation is the neighbor relation: IF Jim neighbor_of Ryan THEN Ryan neighbor_of Jim. A more biologically relevant example is the in_contact_with relation: IF middle_nuchal_plate in_contact_with spinelet, THEN spinelet in_contact_with middle_nuchal_plate.
NOTE: There is no direct relationship between relation symmetry and relation reflexivity.

Relation Inversion

Rule: <math>\forall</math> A, B, R1, R2: R1(A, B) <math>\and</math> inverse_of(R1, R2) <math>\Rightarrow</math> R2(B, A)
An example of relation inversion can be found in the posterior_to and anterior_to relations: IF anterior_nuchal_plate anterior_to middle_nuchal_plate AND anterior_to inverse_of posterior_to, THEN middle_nuchal_plate posterior_to anterior_nuchal_plate.
Abstract The masses and decays of the scalar $D_{s0}^*(2317)$ and axial-vector $D_{s1}^*(2460)$ charmed strange mesons are calculated consistently in the hadrogenesis conjecture. These mesons decay…
The $\gamma p \to K^+ \pi^0 \Lambda$ and $\gamma p \to K^+ \pi \Sigma $ reactions are studied in the kinematic region where the $\pi^0 \Lambda$(1116) and $\pi\Sigma$(1192) pairs originate dominantly…
The study of the $\pi^- p \to \rho n$ and $\pi^- p \to \omega n$ amplitudes close to and below the vector meson production threshold ($1.2 < \sqrt{s} < 1.8$ GeV) reveals a rich dynamics arising from the presence of specific baryon…
Abstract The $\gamma p \to \phi \eta p$ reaction is studied in the kinematic region where the $\eta p$ final state originates dominantly from the decay of the $N^*(1535)$ resonance. The threshold laboratory photon energy…
Abstract We discuss the photoproduction of $\omega$- and $\rho^0$-mesons off protons in the particular channel where the target proton is excited to a Roper resonance $N^*(1440)$. We propose a simple…
Abstract We study the spectrum and isospin-violating strong decays of charmed mesons with strangeness. The scalar $D_{s0}^*(2317)^\pm$ and the axial-vector $D_{s1}^*(2460)^\pm$ states are generated by…
The recently discovered scalar ${\rm D}^{\ast}_{s0}(2317)$ and axial vector ${\rm D}^{\ast}_{s1}(2460)$ charmed strange mesons are predicted in the hadrogenesis picture. They are generated by…
The $\pi^- p \to e^+ e^- n$ and $\pi^+ n \to e^+ e^- p$ reaction cross sections are calculated below and in the vicinity of the vector meson ($\rho^0$, $\omega$) production threshold. These processes are largely…
Kitefoil introduction
Foil is a technology that allows a hull (propelled by a motor or, in this case, a sail) to emerge totally from the water, thanks to the hydrodynamic action of the submerged surfaces. The pressure of the water under the wings, combined with the reduced pressure that forms above them, generates a lift force that opposes the weight, allowing a great reduction in resistance to motion and consequently an increase in efficiency.
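The lift balance described here can be put into rough numbers with the standard lift formula $L = \tfrac{1}{2}\rho v^{2} S C_{L}$; all the values below (water density, wing area, lift coefficient, rider mass) are illustrative assumptions, not data from this article:

```python
import math

# Illustrative, assumed values for a kitefoil takeoff estimate
rho = 1025.0   # kg/m^3, seawater density
S = 0.10       # m^2, front wing area
CL = 0.6       # lift coefficient
mass = 85.0    # kg, rider + board + foil
g = 9.81       # m/s^2

# Lift L = 1/2 * rho * v^2 * S * CL; takeoff when L equals the weight m*g
v_takeoff = math.sqrt(2 * mass * g / (rho * S * CL))
print(round(v_takeoff, 1), "m/s")   # ~5.2 m/s, about 10 knots
```

This back-of-the-envelope speed is why the drag curve drops so sharply once the hull lifts clear of the water.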
The curve in the figure qualitatively shows the rapid reduction of resistance once the hull, having reached a certain speed, comes out of the water.
Kitefoil is composed of the following elements:
Fuselage: it extends in length in the direction of motion and transmits the lift force to the hull through the mast, to which it is connected;
Mast or Keel: it transmits the lift force to the hull, connecting it to the fuselage and to the immersed surfaces that create the lift;
Supporting and stabilizing wings: these are the surfaces that create lift. The first provides all the lift required to separate the hull from the surface of the water, while the second balances the moment produced by the first, with a consequent stabilizing effect.

History of the hydrofoil
Hydrofoils have been used in different types of boats for over 100 years.
The first person to design and build a hydrofoil was an Italian named Enrico Forlanini, in 1906. For his hydrofoil Forlanini used a system of four groups of parallel wings (a pair in the bow and a pair in the stern) of decreasing width, unlike the single hydrofoil wings in use today.
Forlanini’s design was resumed and improved by various other inventors over the following decades (in particular Alexander Graham Bell and Casey Baldwin), until around the
1950s, when the world began to invest massively in boats using hydrofoil fins, for both military and commercial use. The boom was reached in the 1960s and 70s, but since then their use in motor boats has gradually decreased, due to various problems: not only construction and maintenance costs, but also safety and environmental issues. Materials for hydrofoils were in fact metallic, the same as those used for the structure of the boat.
The same problems affected hydrofoils in the sailing and hobby disciplines, whose use began in the 1960s but was soon abandoned.
Since the turn of the century,
investments in this technology have resumed. Mainly because new composite materials made it possible to produce extremely light and resistant appendages, hydrofoil research began again, in order to identify the best shape and structure for every hull and wind. A wide interest in hydrofoil sailing technology spread through the media thanks to its use in the 2010 America’s Cup. Some sectors in which the foil has developed, however, are only now becoming popular. Unfortunately, research has already reached a moment of stagnation, because the significant risks involved in the sector do not attract investors’ interest.
Hydrofoil in kitesurfing
The application of hydrofoil to kitesurfing dates back to the 2000s. The design of modern hydrofoil for kitesurfing varies in geometry based on its type of use. The main categories are:
beginner; freestyle; racing boards. Kitefoils for beginners are designed to be stable at low speeds.
Those for
freestyle, instead, are more suitable for performing acrobatics and jumps and therefore have greater maneuverability, in addition to being structurally more resistant, in order to withstand the impacts of landing jumps. Racing kitefoils are designed to reach the highest possible speeds with the greatest stability across all the different wind conditions. To do this, they have a minimal design and are made of carbon fiber, to be as light and resistant as possible.
Hydrofoil for kitesurfing (also called kitefoil) is a combination of various components, each with a very precise function. Although it is easy to design a single fin suited to a certain sea condition, it is far more complicated to create a kitefoil that is best suited to a wide range of wind conditions, and therefore to a larger speed range.
To best explain the operation of the components of a hydrofoil fin, it is important first to understand the main
moments to which the board is subjected: roll, pitch and yaw.
Pitch Yaw and Roll in a Hydrofoil
To understand how a kitefoil works we can consider just the first two, roll and pitch. Yaw can be ignored because the load conditions are approximately symmetric, so the mast twist can be neglected.
Kitefoils must produce enough lift to rise out of the water, giving support to the kitesurfer, and at the same time produce a moment of such magnitude as to allow balancing. The lift created must be sufficient in a wide range of speeds from the starting speed (“take off” speed) to the maximum speed (“top” speed).
The
take-off speed is the speed at which lift begins to be sufficient to allow the kitesurfer and the board to separate from the water. As resistance decreases, due to the fact that the board is now no longer in contact with water but in air (whose density is about 1000 times lower than that of water), there is an increase in speed; this increase in speed corresponds to an increase in lift for the main foil, and a change in the lift capacity of the stabilizer, which may vary depending on the type used, as will be discussed below.
There are two different functioning systems for the
stabilizer, which can have either a positive bearing capacity or a negative bearing capacity. In the case of the positive flow stabilizer, in order to balance the moment, the force Fp (the weight of the kiter minus the force exerted by the kite) must have a smaller arm than in the second case, and therefore the kiter must have a greater ability to stay in equilibrium. The balance of the moment becomes evident in the behavior of every kitesurfer who uses a kitefoil: he centers the back foot over the mast and uses the front foot to apply a force that balances the moment. In simplified terms, the board represents a lever on which the rider applies a force, while he balances the force of the stabilizer with his front foot, counteracting the moment generated by the lift and drag forces.
The stabilizer moment and the rider’s need to counterbalance it, leads to a more stable equilibrium, and the rider’s
ability lies in maintaining the balance in situations of variable winds and during maneuvers such as tacks or jibes.
Contrary to what its name suggests, the
negative flow stabilizer improves stability, because the fin's lift ratio is reduced and therefore its efficiency decreases. The task of the kitefoil designer is to create a geometry that provides both: a sufficient bearing capacity in a wide range of wind conditions; and a stabilizing moment sufficient to allow the achievement of equilibrium.
So, ultimately it is required to
maximize the lift/resistance ratio without unduly compromising stability. The design of a kitefoil is subject to a number of constraints that must be considered in the optimization phase. If one wants to design a kitefoil for racing, one should consider the rules imposed by the IKA (International Kiteboarding Association), which specify that the maximum length of a kitefoil (measured perpendicularly to the board) cannot exceed 5000 mm (in the current state of the art, foils are about 1.2 m long, far from 5 m). Furthermore, there can be at most one appendage, and its purpose must be mainly to create lift. No limitations are imposed regarding materials. Other limitations that must be considered come from structural design, since the kitefoil must have an optimized geometry that is easy to build and at the same time able to withstand the stresses to which it is subjected.
Theoretical bases
To understand the functioning of the hydrofoil, we have to analyze the physics of a simple wing profile. The wing of the hydrofoil creates a lift force, perpendicular to the flow direction, and a drag force, oriented along the flow direction. The angle of attack α is the angle between the flow direction and the chord of the profile.
The lift produced by a profile is directly proportional to the area of the wing surface “A_L” and to the square of the relative velocity of the flow “v”; it also depends on the density of the fluid “ρ” and on the lift coefficient “C_L”:
F_L=lift=\frac{1}{2}\rho A_L C_L v^2
Resistance is a function of:
wing surface “A_R”; relative speed of the water “v”; drag coefficient “C_D”; water density “ρ”:
F_D=drag=\frac{1}{2}\rho A_R C_D v^2
The dimensionless lift and drag coefficients C_L and C_D can be calculated analytically, numerically or experimentally, and are functions of the profile shape:
C_L=\frac{F_L}{\frac{1}{2}\rho A_L v^2}
C_D=\frac{F_D}{\frac{1}{2}\rho A_R v^2}
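As a concrete illustration of the lift and drag formulas, here is a small Python sketch. All the numbers in it (seawater density, wing area, coefficient values, rider weight) are assumptions chosen for the example, not data from this article.

```python
import math

# Illustrative numbers only (assumptions, not data from the article):
# seawater density ~1025 kg/m^3, a front-wing area of 0.06 m^2 and
# plausible coefficients C_L = 0.6, C_D = 0.03.
RHO = 1025.0  # water density [kg/m^3]

def lift(area_m2, c_l, v_ms, rho=RHO):
    """F_L = 1/2 * rho * A * C_L * v^2"""
    return 0.5 * rho * area_m2 * c_l * v_ms ** 2

def drag(area_m2, c_d, v_ms, rho=RHO):
    """F_D = 1/2 * rho * A * C_D * v^2"""
    return 0.5 * rho * area_m2 * c_d * v_ms ** 2

def takeoff_speed(weight_n, area_m2, c_l, rho=RHO):
    """Speed at which lift first balances the supported weight:
    solve 1/2 * rho * A * C_L * v^2 = W for v."""
    return math.sqrt(2.0 * weight_n / (rho * area_m2 * c_l))

# An ~85 kg rider plus gear weighs about 834 N:
v_to = takeoff_speed(834.0, 0.06, 0.6)
```

With these assumed numbers the take-off speed comes out around 6.7 m/s, which is the right order of magnitude for a kitefoil; note how the quadratic dependence on v makes lift grow quickly once the hull is out of the water.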
Unlike lasso, ridge does not have zeroing coefficients as a goal, and you shouldn't expect applying ridge penalty to have this effect. So the answer to your title question is "no."
However, in your question body, you ask whether it is
possible for the ridge penalty to produce a zero coefficient that was nonzero in an unpenalized solution. The answer here is "yes," but only as an incredible coincidence (which explains why the answer to the title question is no).
See the image in this answer (also floating around in plenty of other places). If the (unpenalized) error's contours happen to meet the constraint circle tangentially on one of the axes, that variable's coefficient will become zero. This would be an incredible coincidence, but it is theoretically possible. (Regularization can even switch the sign on the coefficient!)
I've put together a toy example to show this. GitHub/Colab notebook.
(In
sklearn, we're used to thinking about regularized regression in terms of the Lagrangian form; for these kinds of diagrams, it's perhaps better to think in the constrained optimization form. See the connection e.g. here)
Let $X=\begin{pmatrix}1 & 1 \\ \sqrt{5} & -\sqrt{5} \end{pmatrix}$, $y=\begin{pmatrix}3 \\ -\sqrt{5}\end{pmatrix}$. There is an exact solution, $y=X\begin{pmatrix}1\\2\end{pmatrix}$, so the unpenalized loss contours are (not axis-aligned) ellipses centered at $(1,2)$. When the L2 penalty coefficient $\lambda$ is 5, the solution is $(0,0.5)$. When $0<\lambda<5$, the solution has first weight positive, and when $\lambda>5$ the first weight is negative(! Taking this coefficient slightly negative allows us to decrease the second coefficient even smaller, lowering the overall penalty). |
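To make the sign flip concrete, here is a small numpy sketch of this toy example. A caveat on conventions: solving the penalized normal equations $(X^\top X + \lambda I)w = X^\top y$ puts the zero crossing at $\lambda = 10$; a loss that weights the penalty as $\lambda\|w\|^2$ against $\frac{1}{2}\|y-Xw\|^2$ doubles the effective multiplier and puts it at $\lambda = 5$, matching the value quoted above.

```python
import numpy as np

# Toy example from above: exact (unpenalized) fit at w = (1, 2).
X = np.array([[1.0, 1.0],
              [np.sqrt(5.0), -np.sqrt(5.0)]])
y = np.array([3.0, -np.sqrt(5.0)])

def ridge(lam):
    """Solve the penalized normal equations (X'X + lam*I) w = X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

# As lam grows, the first coefficient shrinks from 1, passes through
# exactly zero, then turns negative -- the sign flip described above.
for lam in (0.0, 5.0, 10.0, 20.0):
    print(lam, ridge(lam))
```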
This question already has an answer here:
I'm attempting a topology proof and think my proof is correct but I'm not 100% sure. The problem is as follows: Define $\Delta_X=\{(x,x) : x\in X\}\subseteq X\times X$. Prove that $X$ is Hausdorff if and only if $\Delta_X$ is a closed set. The reverse proof is fine but for my proof of the forward one I attempted as follows.
Assume $\Delta_X$ is closed, then $D=(\Delta_X)^C$ is open. Let $p_1,p_2\in D$ then $p_1 \neq p_2$. Take a union of open sets $U=\cup_{\alpha \in A}U_{\alpha}$ such that $p_1,p_2 \in U$. If $p_i \in U_{\alpha_i}$ with $U_{\alpha_1} \cap U_{\alpha_2} = \varnothing$ then we're done. If not, then $p_1,p_2$ are contained in the same open set. Write this set as a union of open sets such that $p_1$ and $p_2$ are not contained in the same open set. Therefore, $X$ is Hausdorff.
I'm not sure if this is correct but if it isn't any comments on where it is wrong and hints at improving it would be greatly appreciated! |
Q7.1.1
Is the power series \( \sum_{k=0}^\infty e^k x^k\) convergent? If so, what is the radius of convergence?
Q7.1.2
Is the power series \( \sum_{k=0}^\infty k x^k\) convergent? If so, what is the radius of convergence?
Q7.1.3
Is the power series \( \sum_{k=0}^\infty k! x^k\) convergent? If so, what is the radius of convergence?
Q7.1.4
Is the power series \( \sum_{k=0}^\infty \frac{1}{(2k)!} {(x-10)}^k\) convergent? If so, what is the radius of convergence?
Q7.1.5
Determine the Taylor series for \(\sin x\) around the point \(x_0 = \pi\).
Q7.1.6
Determine the Taylor series for \(\ln x\) around the point \(x_0 = 1\), and find the radius of convergence.
Q7.1.7
Determine the Taylor series and its radius of convergence of \(\dfrac{1}{1+x}\) around \(x_0 = 0\).
Q7.1.8
Determine the Taylor series and its radius of convergence of \(\dfrac{x}{4-x^2}\) around \(x_0 = 0\). Hint: You will not be able to use the ratio test.
Q7.1.9
Expand \(x^5+5x+1\) as a power series around \(x_0 = 5\).
Q7.1.10
Suppose that the ratio test applies to a series \( \sum_{k=0}^\infty a_k x^k\). Show, using the ratio test, that the radius of convergence of the differentiated series is the same as that of the original series.
Q7.1.11
Suppose that \(f\) is an analytic function such that \(f^{(n)}(0) = n\). Find \(f(1)\).
Q7.1.101
Is the power series \( \sum_{n=1}^\infty {(0.1)}^n x^n\) convergent? If so, what is the radius of convergence?
Q7.1.102
[challenging] Is the power series \( \sum_{n=1}^\infty \frac{n!}{n^n} x^n\) convergent? If so, what is the radius of convergence?
Q7.1.103
Using the geometric series, expand \(\frac{1}{1-x}\) around \(x_0=2\). For what \(x\) does the series converge?
Q7.1.104
[challenging] Find the Taylor series for \(x^7 e^x\) around \(x_0 = 0\).
Q7.1.105
[challenging] Imagine \(f\) and \(g\) are analytic functions such that \(f^{(k)}(0) = g^{(k)}(0)\) for all large enough \(k\). What can you say about \(f(x)-g(x)\)?
In the following exercises, when asked to solve an equation using power series methods, you should find the first few terms of the series, and if possible find a general formula for the \(k^{\text{th}}\) coefficient.
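The bookkeeping is the same in every case: substitute $y = \sum_k a_k x^k$, match powers of $x$, and read off a recurrence for the coefficients. A minimal sketch for $y'' - xy = 0$ around $x_0 = 0$, using exact rational arithmetic (this illustrates the method in general, not any one exercise, some of which expand around other points):

```python
from fractions import Fraction

def airy_coeffs(a0, a1, n):
    """First n power-series coefficients of y'' - x y = 0 around x0 = 0.
    Substituting y = sum a_k x^k and matching powers of x gives
    a_2 = 0 and the recurrence a_{k+2} = a_{k-1} / ((k+2)(k+1)), k >= 1."""
    a = [Fraction(a0), Fraction(a1), Fraction(0)]
    for k in range(1, n - 2):
        a.append(a[k - 1] / ((k + 2) * (k + 1)))
    return a[:n]

# With y(0) = 1, y'(0) = 0:  y = 1 + x^3/6 + x^6/180 + x^9/12960 + ...
coeffs = airy_coeffs(1, 0, 10)
```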
Q7.2.1
Use power series methods to solve \(y''+y = 0\) at the point \(x_0 = 1\).
Q7.2.2
Use power series methods to solve \(y''+4xy = 0\) at the point \(x_0 = 0\).
Q7.2.3
Use power series methods to solve \(y''-xy = 0\) at the point \(x_0 = 1\).
Q7.2.4
Use power series methods to solve \(y''+x^2y = 0\) at the point \(x_0 = 0\).
Q7.2.5
The methods work for other orders than second order. Try the methods of this section to solve the first order system \(y'-xy = 0\) at the point \(x_0 = 0\).
Q7.2.6
(Chebyshev’s equation of order \(p\)): a) Solve \((1-x^2)y''-xy' + p^2y = 0\) using power series methods at \(x_0=0\). b) For what \(p\) is there a polynomial solution?
Q7.2.7
Find a polynomial solution to \((x^2+1) y''-2xy'+2y = 0\) using power series methods.
Q7.2.8
a) Use power series methods to solve \((1-x)y''+y = 0\) at the point \(x_0 = 0\). b) Use the solution to part a) to find a solution for \(xy''+y=0\) around the point \(x_0=1\).
Q7.2.101
Use power series methods to solve \(y'' + 2 x^3 y = 0\) at the point \(x_0 =0\).
Q7.2.102
[challenging] We can also use power series methods in nonhomogeneous equations. a) Use power series methods to solve \(y'' - x y = \frac{1}{1-x}\) at the point \(x_0 = 0\). Hint: Recall the geometric series. b) Now solve for the initial condition \(y(0)=0\), \(y'(0) = 0\).
Q7.2.103
Attempt to solve \(x^2 y'' - y = 0\) at \(x_0 = 0\) using the power series method of this section (\(x_0\) is a singular point).
Can you find at least one solution? Can you find more than one solution?
Q7.3.3
Find a particular (Frobenius-type) solution of \(x^2 y'' + x y' + (1+x) y = 0\).
Q7.3.4
Find a particular (Frobenius-type) solution of \(x y'' - y = 0\).
Q7.3.5
Find a particular (Frobenius-type) solution of \(y'' +\frac{1}{x}y' - xy = 0\).
Q7.3.6
Find the general solution of \(2 x y'' + y' - x^2 y = 0\).
Q7.3.7
Find the general solution of \(x^2 y'' - x y' -y = 0\).
Q7.3.8
In the following equations classify the point \(x=0\) as ordinary, regular singular, or singular but not regular singular.
\(x^2(1+x^2)y''+xy=0\) \(x^2y''+y'+y=0\) \(xy''+x^3y'+y=0\) \(xy''+xy'-e^xy=0\) \(x^2y''+x^2y'+x^2y=0\) Q7.3.101
In the following equations classify the point \(x=0\) as ordinary, regular singular, or singular but not regular singular.
\(y''+y=0\) \(x^3y''+(1+x)y=0\) \(xy''+x^5y'+y=0\) \(\sin(x)y''-y=0\) \(\cos(x)y''-\sin(x)y=0\) Q7.3.102
Find the general solution of \(x^2 y'' -y = 0\).
Q7.3.103
Find a particular solution of \(x^2 y'' +(x-\frac{3}{4})y = 0\).
Q7.3.104
[tricky] Find the general solution of \(x^2 y'' - x y' +y = 0\). |
In my research I'm dealing with the following question.
Let $E$ be a set, $K:E \times E \to \mathbb R$ a function of positive type, and $\mathcal H := \mathcal H(1+K)$ (in the sense of the Moore theorem). Now let $\xi: \mathcal H \to \mathbb R$ be a continuous linear functional with $\xi(1) = 1$ (where the first $1$ denotes the constant function $1$). Now I know from the Aronszajn paper that $\operatorname{ker}(\xi)$ has a r.k. $\eta$ with $\eta \ll 1 + K$.
Now I'm interested in the following question: Is there always a functional $\xi$ such that I can find a positive $\eta$, i.e. $\eta(x,y) \geq 0$ for all $x,y \in E$?
I could show the following statements already:
You can calculate $\eta$ as $\eta(x,y) = 1 + K(x,y) - \frac{e(x) e(y)}{\left<e, e \right>}$, where $e \in \mathcal H$ is the Riesz representative of $\xi$, i.e. $\xi(f) = \left<f, e \right>$ for all $f \in \mathcal H$. So you can equivalently ask whether one can choose an $e \in \mathcal H$ such that $\eta$ is positive.
If $K$ is positive and $\mathcal H(1) \cap \mathcal H(K) =\{0\}$, then you can choose $e = 1$ and you get $\eta(x,y) = K(x,y) \geq 0$ for all $x,y \in E$.
If $K(x,y) \geq \frac{1}{\Vert 1 \Vert_{\mathcal H(K)}}$ and $\mathcal H(1) \cap \mathcal H(K) \neq \{0\}$, you can choose $e=1$ again and you get $\eta(x,y) = K(x,y) - \frac{1}{\Vert 1 \Vert_{\mathcal H(K)}} \geq 0$ for all $x,y \in E$.
My main problem is that there are very few elements of $\mathcal H$ for which I can calculate $\eta$ directly. And all my results depend on the positivity of $K$.
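For a finite set $E$ the formula for $\eta$ can at least be sanity-checked numerically: functions become vectors, $1+K$ becomes a Gram matrix $G$, and taking $e = 1$ one can verify that $\eta$ is again of positive type and that every section $\eta(\cdot,y)$ is annihilated by $\xi$. (This says nothing about the pointwise positivity $\eta(x,y)\ge 0$, which is the open question; the random $K$ below is just an assumption for the demo.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite E = {x_1, ..., x_n}: functions become vectors, kernels matrices.
n = 6
B = rng.normal(size=(n, n))
K = B @ B.T                 # some positive-type K (an assumption for the demo)
G = 1.0 + K                 # Gram matrix of the kernel 1 + K

# e = the constant function 1.  Writing e = sum_i c_i G(., x_i), the
# coefficient vector is c = G^{-1} 1 and <e, e> = 1' G^{-1} 1.
ones = np.ones(n)
c = np.linalg.solve(G, ones)
ee = ones @ c               # <e, e>

# eta(x, y) = 1 + K(x, y) - e(x) e(y) / <e, e>
eta = G - np.outer(ones, ones) / ee

# eta is again of positive type (it is the r.k. of ker(xi)) ...
print(np.linalg.eigvalsh(eta).min())   # >= 0 up to rounding
# ... and every section eta(., y) satisfies <eta(., y), e> = 0:
print(np.abs(eta @ c).max())
```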
LHCb Collaboration; Aaij, R; Adeva, B; Adinolfi, M; Anderson, J; Bernet, R; Bowen, E; Bursche, A; Chiapolini, N; Chrzaszcz, M; Dey, B; Elsasser, C; Graverini, E; Lionetto, F; Lowdon, P; Mauri, A; Müller, K; Serra, N; Steinkamp, O; Storaci, B; Straumann, U; Tresch, M; Vollhardt, A; Weiden, A; et al. (2015).
Search for a light charged Higgs boson decaying to $c\bar{s}$ in $pp$ collisions at $\sqrt{s}$ = 8 TeV. Journal of High Energy Physics, 2015(11):190.
Abstract
A search for a light charged Higgs boson, originating from the decay of a top quark and subsequently decaying into a charm quark and a strange antiquark, is presented. The data used in the analysis correspond to an integrated luminosity of 19.7 $\mathrm{fb}^{-1}$ recorded in proton-proton collisions at $\sqrt{s}$ = 8 TeV by the CMS experiment at the LHC. The search is performed in the process $t\bar{t} \to W ^{\pm} bH ^{\pm} \bar{b}$, where the W boson decays to a lepton (electron or muon) and a neutrino. The decays lead to a final state comprising an isolated lepton, at least four jets and large missing transverse energy. No significant deviation is observed in the data with respect to the standard model predictions, and model-independent upper limits are set on the branching fraction $B(t \to H^{+}b)$, ranging from 1.2 to 6.5% for a charged Higgs boson with mass between 90 and 160 GeV, under the assumption that $B(H^+ \to cs)$ = 100%.
Stile supports mathematical notation, from simple super and subscript to full-blown matrices and calculus.
Equations in your question text
To insert equations into question text, simply click on the little
square root icon within the text editor:
This brings up the
equation editor. You can use any of the tools available in the equation editor toolbar to create your equation, fraction or other notation.
When you're finished, simply close it and it will be inserted as part of your question text.
Equations in your Multiple Choice Questions
This is where things get a little bit technical. Stile supports a mathematics computer language called
LaTeX in most places where you can type, including in MCQ answers and automated feedback bubbles.
Using LaTeX, you can get fractions and other mathematical notation into MCQ answers.
So for example, this MCQ question in Stile...
...actually looks like this when you click the
Edit button:
From the above example, if you wanted to display the fraction 10/15, you would type:
\(\frac{10}{15}\)
To break down what is going on here, there are three components:
\( \frac{10}{15} and \)
The
\( and the \) tell Stile to make whatever is between those characters mathematical notation, rather than just text.
The
\frac{numerator}{denominator} tells Stile to display a fraction. \frac is a part of the LaTeX language, which allows you to create complex representations. Subscript and superscript
To get an expression
exp to appear as a subscript, you just type _{exp}
To get
exp to appear as a superscript, type ^{exp}
LaTeX handles superscripted superscripts and all of that stuff in the natural way. It even does the right thing when something has both a subscript and a superscript.
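For example, combining the two on a single symbol works as you would expect (these snippets are illustrative; type them anywhere Stile accepts LaTeX):

```latex
\(x_{1}^{2}\)                % subscript 1 and superscript 2 together
\(a_{n}^{2} + a_{n+1}^{2}\)  % works inside larger expressions too
\(2^{2^{2}}\)                % nested superscripts nest naturally
```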
A few other useful ones are:
≈ \approx, ≊ \approxeq, ≥ \geq, ≤ \leq
For more symbols and further reading, this guide is a good introduction.
If you're doing really advanced stuff, this is the complete 300-page list of all symbols.
Too hard?
There is also a nifty little equation editor here that you can use to build your equation, then paste the code it gives you into Stile (this GIF may take a while to load properly):
NOTE: However, you still need to put the above code between
\( and \).
So for Pythagoras' theorem, HostMath spits out
a^{2}+b^{2} = c^{2}
In Stile, this would have to be
\(a^{2}+b^{2} = c^{2}\)
You can then copy and paste this into the other boxes and simply change the variables etc in the LaTeX:
Still too hard? Use Unicode!
Unicode is an international standard for encoding common characters. This Wikipedia article has a lot of handy superscript and subscript characters which you can just copy and paste into any place in Stile - or just grab them here:
Superscript
x⁰ x¹ x² x³ x⁴ x⁵ x⁶ x⁷ x⁸ x⁹ xⁿ
Subscript
x₀ x₁ x₂ x₃ x₄ x₅ x₆ x₇ x₈ x₉ xₒ xₓ
You can even do some common
fractions with Unicode:
¼ ½ ¾ ⅔ ⅙ etc
(see this article for a more complete list)
However, if you want something more complex, you'd have to use LaTeX. |
Let $\alpha$ and $\beta$ be the roots of the equation $x^2 - x + p=0$ and let $\gamma$ and $\delta$ be the roots of the equation $x^2 -4x+q=0$. If $\alpha , \beta , \gamma , \delta$ are in geometric progression, then what are the values of $p$ and $q$?
My approach:
From the two equations, $$\alpha + \beta = 1,$$ $$\alpha \beta = p,$$ $$\gamma + \delta = 4,$$ and $$\gamma \delta = q.$$ Since $\alpha , \beta , \gamma , \delta$ are in G. P., let $\alpha = \frac{a}{r^3}$, $\beta = \frac{a}{r}$, $\gamma = ar$, $\delta = ar^3$. $$\therefore \alpha \beta \gamma \delta = a^4 = pq$$ Now, $$\frac{\alpha + \beta}{\gamma + \delta} = \frac{1}{r^4}$$ $$\frac{1}{4} = \frac{1}{r^4}$$ $$\therefore r = \sqrt{2}$$
From here I don't know how to proceed. Am I unnecessarily complicating the problem?? |
The first thing that should come to mind to you in a question like this is that any usage of my 300m of fence can be described by a single parameter. For instance, if I give you one of the edge lengths of the square, $s$, then you have all the information you need to work out what the edge lengths of the rectangle are and in turn what the total fenced area is.
So now we create a function $A$ whose value is the area fenced. We parameterize this function using the variable $s$-- the side length of the square. Given $s$, we know that we have $300-4s$ meters of fence left to use on the rectangular enclosure. The perimeter of this fence is equal to $8$ times the short edge length which I will call $l$.$$8l=300-4s$$The area of the rectangular enclosure is $2l^2$.$$2l^2=2\cdot\left(\frac{300-4s}{8}\right)^2$$The area of the square enclosure is of course $s^2$. So now we can quantify the total area $A$ with the single parameter being $s$.
$$A\left(s\right)=s^2 + 2\left(\frac{300-4s}{8}\right)^2$$$$A\left(s\right)=s^2 + \frac{1}{2}\left(75-s\right)^2$$
Now you can solve for the maximum and minimum of this function like you would for any function of a single variable. Note that $0 \leq s \leq 75$ because we are not permitting $s$ or $l$ to be negative. Find when the first derivative of this function is zero in this domain and also check the boundaries $\left(s=0 \ \rm{and} \ s=75\right)$. These are your local extrema and the minimum area will be the smallest of these and the maximum will be the greatest of these.
The essence of a question like this is recognizing the minimum number of parameters you need to describe the system and using standard techniques to solve for the extrema. |
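Following the function $A(s)$ derived above, a few lines of Python confirm the extrema (a sketch of the standard check: the interior critical point plus the boundary values):

```python
# Extrema of A(s) = s^2 + (1/2)(75 - s)^2 on [0, 75].
def area(s):
    return s ** 2 + 0.5 * (75.0 - s) ** 2

# A'(s) = 2s - (75 - s) = 3s - 75, so the only interior critical point
# is s = 25.  Compare it with the boundary values:
candidates = [0.0, 25.0, 75.0]
s_min = min(candidates, key=area)   # minimum area at s = 25 (A = 1875)
s_max = max(candidates, key=area)   # maximum area at s = 75 (A = 5625)
```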
I've been scanning across the web, and haven't found a good method to compute the Gauss Legendre abscissas and weights $\{ x_j, w^j \} _{j=1}^N$ for large $N\in\mathbb{N}$. My question is how to do it, and why should it work?
To those who need some background:
The goal is to approximate an integral by a discrete interpolating sum:
$$\int\limits_{(-1,1)} f(x) dx \approx \sum\limits_{j=1}^{N} f(x_j)w^j $$
The question is, how to choose $\{ x_j, w^j \} _j$ appropriately. The Gauss-Legendre quadrature tells you (for good reasons) to choose $x_j$ to be the roots of the $N$-th Legendre polynomial.
Problem: A straightforward computation of the Legendre polynomial for high $N$ is highly unstable, as it involves "big" coefficients of alternating signs. EDIT: I've found a very simple code that computes weights and abscissas using eigenvalues of a symmetric matrix, but doesn't seem to use Golub-Welsch. The matrix is
$$\forall\, 1\leq j\leq N-1:\quad A_{j,j+1} = A_{j+1,j} = \frac{j}{\sqrt{4j^2 -1}},$$ with all other entries zero. The discussion about it was split to another post.
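That tridiagonal matrix is exactly the symmetric Jacobi matrix of the Legendre polynomials, so the eigenvalue code is essentially the Golub-Welsch construction whether it says so or not: the eigenvalues are the nodes, and the weights are $\mu_0$ times the squared first components of the normalized eigenvectors, where $\mu_0 = \int_{-1}^{1} dx = 2$. A short numpy sketch:

```python
import numpy as np

def gauss_legendre(n):
    """Gauss-Legendre nodes/weights from the symmetric Jacobi matrix
    A[j, j+1] = A[j+1, j] = j / sqrt(4 j^2 - 1), all other entries zero.
    Eigenvalues = nodes; weights = mu_0 * (first eigenvector entry)^2,
    with mu_0 = int_{-1}^{1} dx = 2."""
    j = np.arange(1, n)
    beta = j / np.sqrt(4.0 * j ** 2 - 1.0)
    A = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(A)   # symmetric solver: stable for large n
    weights = 2.0 * vecs[0, :] ** 2
    return nodes, weights

x, w = gauss_legendre(200)   # large N, no polynomial coefficients anywhere
```

This sidesteps the instability because the monomial coefficients of $P_N$ never appear: everything goes through the three-term recurrence, and symmetric eigenproblems are backward stable.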
I aim to study the binary forms $ax^2 + bxy + cy^2 = (a,b,c)$ where $a,b,c \in {F_q}[T]$ (characteristic of $F_q$ not 2), in particular those such that the discriminant $D = b^2 - 4ac \in F_q[T]$ has even degree and sign ${D} \in {F_q^*}^2$ – in other words, its leading coefficient is a square.
This case is interesting because $\sqrt{D}$ exists as a Laurent series, and you can even consider its expression as a continued fraction (with polynomial coefficients), which has many interesting properties, such as its periodicity. In our definition of reduced forms, we say $(a,b,c)$ with discriminant $D$ is reduced if $$|\sqrt{D} - b| < |a| < |\sqrt{D}|$$ and the sign of $a$ is either $1$ or a fixed square, although this last condition is just for uniqueness and has no importance. The inequality is the important part.
I would like to follow the approach normally used with the usual binary forms with integer coefficients: For the indefinite case with discriminant $D >0$ not a square, the reduced forms of one equivalence class can be arranged into cycles, these cycles have an even number of forms, and, most importantly, two reduced forms are equivalent iff they belong to the same cycle. The last point is the problem I would like to share with you:
Do you know how I could achieve this result?
I can give more details: in the usual case in $\mathbb{Z}$, what you normally do (for example in Buell's book) is to provide yourself with the following results:
Every cycle has an even number of reduced forms.
Let $x,y \in \mathbb{R}$. If there exist integers $a,b,c,d$ so that $ad - bc = 1$ and $$y = \frac{a x + b }{c x + d},$$ then we can express $y$ as $$y = [u; a_1, \cdots a_{2r}, v, x],$$ where $a_1, \cdots a_{2r}$ are positive integers and $u,v$ are integers.
If a continued fraction contains only a limited number of negative or zero partial quotients, then it is possible, in a finite number of steps, to convert it into a simple continued fraction (scf). During this process, almost all coefficients are shifted an even number of positions.
What you normally do in $\mathbb{Z}$ is exploit the parallelism between the continued fraction expansion of the principal root of one reduced form in the cycle and the movement along the cycle, and then use these propositions. In $F_q[T]$ everything behaves almost identically, except for these three points.
However, I haven't found their "equivalent propositions" for the present case. For example, regarding the parity of the cycle: in the case of $\mathbb{Z}$ you just consider the alternation of sign (positive-negative) when you move to the adjacent form in the cycle, hence if you return to the initial one it means you did an even number of steps. In ${F_q}[T]$ the degree is not enough to prove it, and I can't find any other invariant.
If someone could give me any advice, my nightmares will be over :D I don't find this matter anywhere in the bibliography (I was recommended a book written by Gerstein on Stack Exchange (click here), but I think his idea is different from mine and I got no more answers. I guess this field of binary forms is not the most popular). All suggestions are welcome. Thanks in advance!
A particle is in the ground state of an infinite square well with walls in the range x=[0,a]. At time t=0, the walls are removed suddenly and the particle becomes free. What is the energy of the free particle?
What I know: \begin{equation} \begin{split} V(x) &= 0, \; \; \; 0\le x \le a \\ &= \infty, \; \; \;otherwise \end{split} \end{equation} \begin{equation} \begin{split} \psi(x,0)&=\sqrt{\frac{2}{a}} \sin(\frac{\pi x}{a}) \\ E_1 &= \frac{\pi^2 \hbar^2}{2ma^2} \end{split} \end{equation}
I've found the wave function in momentum space $\phi(k)$ by taking the Fourier transform of the initial wavefunction. \begin{equation} \begin{split} \phi(k) &= \frac{1}{2\pi\hbar} \int_{0}^{a} dx\, e^{ikx} \psi(x,0) \\ &=\frac{1}{a\pi\hbar} \frac{-\pi a (1+e^{ika})}{k^2a^2 - \pi^2} \end{split} \end{equation}
I know $<E>=\frac{<p^2>}{2m}$, so I need to find \begin{equation} <p^2>= \int_{0}^{a} k^2 \mid \phi(k) \mid^2 dk \end{equation} Unfortunately, when I evaluate this integral, it diverges. Is there another way can I find energy of a free particle that yields an appropriate answer?
Note: I've also tried to evaluate the momentum space Schroedinger equation $i\hbar \frac{\partial \phi(p)}{\partial t} = H \phi(p)$. However, $\phi(p)$ is not time dependent from my evaluation, so the answer it yields is $0$. |
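For what it's worth, a numerical sanity check (in units $\hbar = m = a = 1$, with the symmetric $1/\sqrt{2\pi}$ Fourier convention, which differs from the prefactor quoted above) suggests the divergence comes from the integration limits rather than from the physics: $\langle p^2\rangle = \int_{-\infty}^{\infty} \hbar^2 k^2 |\phi(k)|^2\,dk$ converges, since $k^2|\phi(k)|^2$ falls off like $1/k^2$, and it reproduces $\langle E\rangle = \pi^2/2 = E_1$.

```python
import numpy as np

# Units hbar = m = 1, well width a = 1, psi(x,0) = sqrt(2) sin(pi x).
# With the symmetric convention phi(k) = (1/sqrt(2 pi)) int psi e^{-ikx} dx,
# direct integration gives
#   phi(k) = (1/sqrt(2 pi)) * sqrt(2) * pi * (1 + e^{-ik}) / (pi^2 - k^2),
# hence |phi(k)|^2 = 4 pi cos^2(k/2) / (pi^2 - k^2)^2  (the singularity at
# k = +/- pi is removable).  Since k^2 |phi|^2 ~ 1/k^2, <p^2> converges.
k = np.linspace(-400.0, 400.0, 800_001)       # grid avoids k = +/- pi
dk = k[1] - k[0]
phi2 = 4.0 * np.pi * np.cos(k / 2.0) ** 2 / (np.pi ** 2 - k ** 2) ** 2

def trapz(yv, h):
    return h * (yv.sum() - 0.5 * (yv[0] + yv[-1]))

norm = trapz(phi2, dk)          # ~1, as Parseval requires
p2 = trapz(k ** 2 * phi2, dk)   # ~pi^2
energy = 0.5 * p2               # ~pi^2/2 = E_1: <E> survives the release
```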
I'm going through Stein's Complex Analysis, and I'm a bit confused at one of the classical examples of using Cauchy's theorem to evaluate an integral. The example is:
$$\int_0^{\infty}\frac{1-\cos{x}}{x^2}dx = \frac{\pi}{2}$$
The book says (and I'll add my thoughts & questions in
bold as they come up):
Here we consider the function $f(z) = (1 - e^{iz})/z^2$, and we integrate over the indented semicircle in the upper half-plane positioned on the $x$-axis, as shown in the figure below:
Why precisely do we consider the function $f(z) = (1-e^{iz})/z^2$? I get that $e^{ix} = \cos{x} + i\sin{x}$ and $\cos{z} = (e^{iz}+e^{-iz})/2$, but how precisely do we get $f(z) = (1 - e^{iz})/z^2$ from this? Why precisely are we integrating over the indented semicircle in the upper half-plane positioned on the $x$-axis? As a follow-up, are we creating a hole around 0 because the integral cannot be evaluated at x = 0?
[back to book]
If we denote $\gamma_{\epsilon}^+$ and $\gamma_R^+$ the semicircles of radii $\epsilon$ and $R$ with negative and positive orientations respectively, Cauchy's theorem gives:
$$\int_{-R}^{-\epsilon}\frac{1-e^{ix}}{x^2}dx + \int_{\gamma_{\epsilon}^+}\frac{1-e^{iz}}{z^2}dz + \int_{\epsilon}^R\frac{1-e^{ix}}{x^2}dx + \int_{\gamma_R^+}\frac{1-e^{iz}}{z^2}dz = 0$$
First we let $R \rightarrow \infty$ and observe that:
$$\mid\frac{1-e^{iz}}{z^2}\mid \ \leq \ \frac{2}{\mid z\mid^2}$$
so the integral over $\gamma_R^+$ goes to 0. Therefore:
$$\int_{\mid x \mid \geq \epsilon}\frac{1-e^{ix}}{x^2}dx = -\int_{\gamma_{\epsilon}^+}\frac{1-e^{iz}}{z^2}dz$$
Why does letting $R \rightarrow \infty$ lead to the above inequality? What does $R \rightarrow \infty$ mean in the scope of the diagram, and why does the integral over $\gamma_R^+$ go to zero? Not really sure how the last equality is formulated as well...
[back to book]
Next, note that:
$$f(z) = \frac{-iz}{z^2} + E(z)$$
where $E(z)$ is bounded as $z \rightarrow 0$, while on $\gamma_{\epsilon}^+$ we have $z = \epsilon e^{i\theta}$ and $dz = i\epsilon e^{i \theta}d\theta$. Thus,
$$\int_{\gamma_{\epsilon}^+}\frac{1 - e^{iz}}{z^2}dz \rightarrow \int_{\pi}^{0}(-i)(i)\,d\theta = -\pi$$ as $\epsilon \rightarrow 0$. Taking real parts then yields:
$$\int_{-\infty}^{\infty}\frac{1-\cos{x}}{x^2}dx = \pi$$
Since the integrand is even, the desired formula is proved.
I don't quite follow the first part of this, but I think more importantly, how does solving all of this help us solve the original integral? What exactly am I taking the "real parts" of?
Sorry, I know it's a lot, but any sort of walkthrough would be appreciated. |
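Not part of the question, but the stated value $\pi/2$ can be sanity-checked numerically (my own addition; a midpoint rule on $[0, T]$ plus the tail estimate $\int_T^\infty (1-\cos x)/x^2\,dx \approx 1/T$, since the cosine part is $O(1/T^2)$ after integration by parts):

```python
import math

# Midpoint rule on [0, T] for (1 - cos x)/x^2 (no singularity: the
# integrand tends to 1/2 as x -> 0), plus a 1/T estimate for the tail.
T, steps = 400.0, 400_000
h = T / steps
total = 0.0
for i in range(steps):
    x = (i + 0.5) * h
    total += (1.0 - math.cos(x)) / (x * x) * h
approx = total + 1.0 / T

print(approx)   # close to pi/2
```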
Big Image Sample Using Multiple Images
The image banners at the top of the page are referred to as “bigimg” in this theme. They are optional, and one or more can be specified. If more than one is specified, the images rotate every 10 seconds. In the front matter, bigimgs are specified using an array of hashes.
Math Sample Using KaTeX
KaTeX can be used to generate complex math formulas. It supports in-line math using the \\( ... \\) delimiters, like this: \( E = mc^2 \). By default, it does not support the in-line delimiters $...$ because those occur too commonly in typical webpages. It supports displayed math using the $$ or \\[...\\] delimiters, like this:
Formula 1: $$ \phi = \frac{(1+\sqrt{5})}{2} = 1.6180339887\cdots $$
Formula 2: (same formula, different delimiter) \[ \phi = \frac{(1+\sqrt{5})}{2} = 1.6180339887\cdots \]
Code Sample Using Hugo or Pygments
The following are two code samples using syntax highlighting.
The depictions you're seeing are correct, the electric and magnetic fields both reach their amplitudes and zeroes in the same locations. Rafael's answer and certain comments on it are completely correct; energy conservation does
not require that the energy density be the same at every point on the electromagnetic wave. The points where there is no field do not carry any energy. But there is never a time when the fields go to zero everywhere. In fact, the wave always maintains the same shape of peaks and valleys (for an ideal single-frequency wave in a perfect classical vacuum), so the same amount of energy is always there. It just moves.
To add to Rafael's excellent answer, here's an explicit example. Consider a sinusoidal electromagnetic wave propagating in the $z$ direction. It will have an electric field given by
$$\mathbf{E}(\mathbf{r},t) = E_0\hat{\mathbf{x}}\sin(kz - \omega t)$$
Take the curl of this and you get
$$\nabla\times\mathbf{E}(\mathbf{r},t) = \left(\hat{\mathbf{y}}\frac{\partial}{\partial z} - \hat{\mathbf{z}}\frac{\partial}{\partial y}\right)E_0\sin(kz - \omega t) = E_0 k\hat{\mathbf{y}}\cos(kz - \omega t)$$
Using one of Maxwell's equations, $\nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$, you get
$$-\frac{\partial\mathbf{B}(\mathbf{r},t)}{\partial t} = E_0 k\hat{\mathbf{y}}\cos(kz - \omega t)$$
Integrate this with respect to time (dropping the constant of integration, which would represent a static field) to find the magnetic field,
$$\mathbf{B}(\mathbf{r},t) = \frac{E_0 k}{\omega}\hat{\mathbf{y}}\sin(kz - \omega t)$$
Comparing this with the expression for $\mathbf{E}(\mathbf{r},t)$, you find that $\mathbf{B}$ is directly proportional to $\mathbf{E}$. When and where one is zero, the other will also be zero; when and where one reaches its maximum/minimum, so does the other.
For an electromagnetic wave in free space, conservation of energy is expressed by Poynting's theorem,
$$\frac{\partial u}{\partial t} = -\nabla\cdot\mathbf{S}$$
The left side of this gives you the rate of change of energy density in time, where
$$u = \frac{1}{2}\left(\epsilon_0 E^2 + \frac{1}{\mu_0}B^2\right)$$
and the right side tells you the electromagnetic energy flux density, in terms of the Poynting vector,
$$\mathbf{S} = \frac{1}{\mu_0}\mathbf{E}\times\mathbf{B}$$
Poynting's theorem just says that the rate at which the energy density at a point changes is the opposite of the rate at which energy density flows away from that point.
If you plug in the explicit expressions for the wave in my example, after a bit of algebra you find
$$\frac{\partial u}{\partial t} = -\omega E_0^2\left(\epsilon_0 + \frac{k^2}{\mu_0\omega^2}\right)\sin(kz - \omega t)\cos(kz - \omega t) = -\epsilon_0\omega E_0^2 \sin\bigl(2(kz - \omega t)\bigr)$$
(using $c = \omega/k$) and
$$\nabla\cdot\mathbf{S} = \frac{2}{\mu_0}\frac{k^2}{\omega}E_0^2 \sin(kz - \omega t)\cos(kz - \omega t) = \epsilon_0 \omega E_0^2 \sin\bigl(2(kz - \omega t)\bigr)$$
thus confirming that the equality in Poynting's theorem holds, and therefore that EM energy is conserved.
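The identity can also be cross-checked numerically (my own sketch, in units where $\epsilon_0 = \mu_0 = 1$, so $c = 1$ and $\omega = k$; the wavenumber and amplitude are arbitrary choices):

```python
import math

# Units with eps0 = mu0 = 1 (so c = 1 and omega = k); values arbitrary.
eps0 = mu0 = 1.0
k = 2.0
omega = k
E0 = 1.5

def E(z, t): return E0 * math.sin(k * z - omega * t)
def B(z, t): return (E0 * k / omega) * math.sin(k * z - omega * t)
def u(z, t): return 0.5 * (eps0 * E(z, t) ** 2 + B(z, t) ** 2 / mu0)
def S(z, t): return E(z, t) * B(z, t) / mu0  # z-component of the Poynting vector

# Poynting's theorem du/dt = -dS/dz, checked by central differences:
h = 1e-6
for z, t in [(0.3, 0.1), (1.0, 2.0), (-0.7, 0.5)]:
    du_dt = (u(z, t + h) - u(z, t - h)) / (2 * h)
    dS_dz = (S(z + h, t) - S(z - h, t)) / (2 * h)
    assert abs(du_dt + dS_dz) < 1e-5
```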
Notice that the expressions for both sides of the equation include the factor $\sin\bigl(2(kz - \omega t)\bigr)$ - they're not constant. This mathematically shows you the structure of the energy in an EM wave. It's not just a uniform "column of energy;" the amount of energy contained in the wave varies sinusoidally from point to point ($S$ tells you that), and as the wave passes a particular point in space, the amount of energy it has at that point varies sinusoidally in time ($u$ tells you that). But those changes in energy with respect to space and time don't just come out of nowhere. They're precisely synchronized in the manner specified by Poynting's theorem, so that the changes in energy at a point are accounted for by the flux to and from neighboring points. |
Defining parameters
Level: \( N \) = \( 25 = 5^{2} \)
Weight: \( k \) = \( 3 \)
Nonzero newspaces: \( 2 \)
Newforms: \( 2 \)
Sturm bound: \(150\)
Trace bound: \(1\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{3}(\Gamma_1(25))\).
                   Total   New   Old
Modular forms        64     56     8
Cusp forms           36     36     0
Eisenstein series    28     20     8

Decomposition of \(S_{3}^{\mathrm{new}}(\Gamma_1(25))\)
We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
Label     \(\chi\)                   Newforms    Dimension   \(\chi\) degree
25.3.c    \(\chi_{25}(7, \cdot)\)    25.3.c.a    4           2
25.3.f    \(\chi_{25}(2, \cdot)\)    25.3.f.a    32          8
I wanted to know about the substitution reactions of alkanes; I am particularly interested in the reaction of methane with fluorine and chlorine.
Let's have a look at some fundamental data and the chemistry involved.
Chlorine is a non-flammable gas with a boiling point of -35 °C and a nice yellowish-green colour. It is heavier than air.
It is highly irritant and toxic. It is a strong oxidant. With water, it partly decomposes to hydrogen chloride and hypochlorous acid. Your mucous membranes are wet. Inhalation of chlorine results in severe corrosive injuries. Inhale a tad longer to face dyspnoea, coughing of blood and asphyxia. Did I mention the pulmonary oedema and fatality?
Methane is a flammable gas with a boiling point around -160 °C and a flash point around -180 °C. The lower explosion limit of methane in air is around 5%.
Open flames and sparks are not your friend here.
For the mixture of methane and chlorine,
red light districts are safe. The bond dissociation energy of chlorine is $243\, \mathrm{kJ \cdot mol^{-1}}$; you will need blue light ($\lambda$ < 490 nm) to initiate the self-propagating reaction:
$$\ce{Cl2 ->[h\nu] 2 Cl*}$$
Then the chain reaction starts:
$$\ce{Cl* +\ CH4 -> HCl + CH3*}$$ $$\ce{CH3* +\ Cl2 -> CH3Cl + Cl*}$$
All that goes on and on; termination steps are radical recombinations:
$$\ce{2 Cl* -> Cl2}$$
$$\ce{2 CH3* -> C2H6}$$ $$\ce{ CH3* +Cl* -> CH3Cl}$$
Apparently hydrogen chloride gas is produced in the reaction, together with methyl chloride.
Methyl chloride is a flammable gas (boiling point around -24 °C, flash point around -20 °C, lower explosion limit in air around 8%). GHS hazard statements file it with H-351: suspected of causing cancer.
If you perform the reaction with a large excess of chlorine, $\ce{Cl*}$ radicals will attack methyl chloride, generate dichloromethane via the chloromethyl radical $\ce{*CH2Cl}$, etc. Tetrachloromethane ($\ce{CCl4}$) would be the final product then.
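As a quick back-of-the-envelope check of the ~490 nm threshold quoted above (my own addition; constants rounded), the photon energy per mole must match the bond dissociation energy:

```python
h_planck = 6.626e-34   # J s
c_light  = 2.998e8     # m / s
N_A      = 6.022e23    # 1 / mol
E_bond   = 243e3       # J / mol, Cl-Cl bond dissociation energy

# E_photon = h * c / lambda per molecule, so the threshold wavelength is
lam_nm = h_planck * c_light * N_A / E_bond * 1e9
print(round(lam_nm))   # about 492 nm, i.e. blue light or shorter is needed
```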
That was the healthy and boring part!
When fluorine talks to chlorine, it goes like in the musical "
Annie, Get Your Gun": Anything you can do I can do better!
Particularly when it comes to running havoc on your tissue!
As far as the photolysis of fluorine in the presence of methane is concerned:
People have done that!
Typical conditions were:
- Deposit methane in a fluorine matrix at 15 K
- Co-deposit methane/argon and fluorine/argon samples at 12 K
- Trap fluorine in a methane matrix at 15 K
Yes, these guys had no sense for adventure -
but they weren't blown away either! |
Take linear regression as the example, given one specific data set $D_1=\{(x_1,y_1),...(x_n,y_n)\}$, we could train a model with one specific parameter estimate $\hat\theta_1$, if we do the training on a new data set $D_2$, we will have new estimate $\hat\theta_2$, and so on. For input data $x$, I could predict the target value $\hat y$.
My problems are,
I thought the terms bias and variance are only used when talking about the parameter estimate $\hat\theta$ of a model, not to describe the random variable of the predicted data, $\hat y$, right? So we have $bias(\hat\theta),var(\hat\theta)$, but don't have $bias(\hat y)$ or $var(\hat y)$. If I'm wrong, then what are bias and variance here?
To judge the quality of the one specific model with parameter estimate $\hat\theta_1$, I check the mean squared error over all the training data points of $D_1$, $$MSE_1=\frac{1}{n}\sum_i(y_i-\hat y_i)^2$$, but for different training data sets $D_i$, we have different $\hat\theta_i$, then different $MSE_i$, so how do I determine what $\hat\theta$ should be? The one with the smallest $MSE$?
As I read about the bias-variance tradeoff, it's said that the expectation of the $MSE$ is of special interest. Why should we pay special attention to $E[MSE]$? I mean, I could also pay attention to $E[\hat y_i]$ for each data point. Moreover, $MSE$ is calculated from the observed target value $y_i$ and its prediction $\hat y_i$, so I thought it has to do with $bias(\hat y_i)$ and $var(\hat y_i)$, but not with the bias or variance of $\hat\theta$?
I totally got lost when trying to figure out what the terms
bias and variance are aimed at. I hope anyone of you can help me out. |
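The setup in the question (many datasets $D_i$, each giving an estimate $\hat\theta_i$) can be made concrete with a small simulation. This is my own illustration with invented numbers, using a one-parameter no-intercept model $y = \theta x + \epsilon$; it shows that both $\hat\theta$ and a prediction $\hat y(x_0) = \hat\theta x_0$ vary across datasets, so bias and variance are meaningful for either:

```python
import random

random.seed(0)
theta_true = 2.0          # invented "true" slope
n, n_datasets = 30, 2000  # points per dataset, number of datasets D_i
x0 = 0.5                  # a fixed input at which we predict

def fit_slope(xs, ys):
    # Least-squares estimate for the no-intercept model y = theta * x
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

thetas, preds = [], []
for _ in range(n_datasets):
    xs = [random.uniform(0, 1) for _ in range(n)]
    ys = [theta_true * x + random.gauss(0, 1) for x in xs]
    th = fit_slope(xs, ys)
    thetas.append(th)
    preds.append(th * x0)   # the prediction at x0 also varies across datasets

mean_th = sum(thetas) / n_datasets
var_th = sum((t - mean_th) ** 2 for t in thetas) / n_datasets
bias_th = mean_th - theta_true
var_pred = sum((p - mean_th * x0) ** 2 for p in preds) / n_datasets
```

Here `var_pred` equals $x_0^2 \cdot var(\hat\theta)$ exactly, since $\hat y(x_0)$ is a deterministic function of $\hat\theta$; for more complex models the two notions come apart point by point.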
I will work over a field of characteristic $0$ so that reductive algebraic groups are linearly reductive; presumably there is a way to eliminate this hypothesis. In this case, for an integer $n>1$ and for a divisor $m$ such that $n>m>1$, there
does not exist $f$ as above. The point is to consider the critical locus of the determinant. The computation below proves that "cohomology with supports" has cohomological dimension equal to $2(n-1)$ for the pair of the quasi-affine scheme $\textbf{Mat}_{n\times n} \setminus \text{Crit}(\Delta_{n\times n})$ and its closed subset $\text{Zero}(\Delta_{n\times n})\setminus \text{Crit}(\Delta_{n\times n})$. Since pushforward under affine morphisms preserve cohomology of quasi-coherent sheaves, that leads to a contradiction.
Denote the $m\times m$ determinant polynomial by $$\Delta_{m\times m}:\textbf{Mat}_{m\times m} \to \mathbb{A}^1.$$ For integers $m$ and $n$ such that $m$ divides $n$, you ask whether there exists a homogeneous polynomial morphism $$f:\textbf{Mat}_{n\times n}\to \textbf{Mat}_{m\times m}$$ of degree $n/m$ such that $\Delta_{n\times n}$ equals $\Delta_{m\times m}\circ f$. Of course that is true if $m$ equals $1$: just define $f$ to be $\Delta_{n\times n}$. Similarly, this is true if $m$ equals $n$: just define $f$ to be the identity. Thus, assume that $2\leq m < n$; this manifests below through the fact that the critical locus of $\Delta_{m\times m}$ is nonempty. By way of contradiction, assume that there exists $f$ with $\Delta_{n\times n}$ equal to $\Delta_{m\times m}\circ f$.
Lemma 1. The inverse image under $f$ of $\text{Zero}(\Delta_{m\times m})$ equals $\text{Zero}(\Delta_{n\times n})$. In other words, the inverse image under $f$ of the locus of matrices with nullity $\geq 1$ equals the locus of matrices with nullity $\geq 1$.
Proof. This is immediate. QED
Lemma 2. The inverse image under $f$ of the critical locus of $\Delta_{m\times m}$ equals the critical locus of $\Delta_{n\times n}$. In other words, the inverse image under $f$ of the locus of matrices with nullity $\geq 2$ equals the locus of matrices with nullity $\geq 2$.
Proof. By the Chain Rule, $$d_A\Delta_{n\times n} = d_{f(A)}\Delta_{m\times m}\circ d_Af.$$ Thus, the critical locus of $\Delta_{n\times n}$ contains the inverse image under $f$ of the critical locus of $\Delta_{m\times m}$. Since $m\geq 2$, in each case, the critical locus is the nonempty set of those matrices whose kernel has dimension $\geq 2$, this critical locus contains the origin, and this critical locus is irreducible of codimension $4$. Thus, the inverse image under $f$ of the critical locus of $\Delta_{m\times m}$ is nonempty (it contains the origin) and has codimension $\leq 4$ (since $\textbf{Mat}_{m\times m}$ is smooth). Since this is contained in the critical locus of $\Delta_{n\times n}$, and since the critical locus of $\Delta_{n\times n}$ is irreducible of codimension $4$, the inverse image of the critical locus of $\Delta_{m\times m}$ equals the critical locus of $\Delta_{n\times n}$. QED
Denote by $U_n\subset \text{Zero}(\Delta_{n\times n})$, resp. $U_m\subset \text{Zero}(\Delta_{m\times m})$, the open complement of the critical locus, i.e., the locus of matrices whose kernel has dimension precisely equal to $1$. By Lemma 1 and Lemma 2, $f$ restricts to an affine morphism $$f_U:U_n\to U_m.$$
Proposition. The cohomological dimension of sheaf cohomology for quasi-coherent sheaves on the quasi-affine scheme $U_n$ equals $2(n-1)$.
Proof.The quasi-affine scheme $U_n$ admits a morphism, $$\pi_n:U_n \to \mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*,$$ sending every singular $n\times n$ matrix $A$ parameterized by $U_n$ to the ordered pair of the kernel of $A$ and the image of $A$. The morphism $\pi_n$ is Zariski locally projection from a product, where the fiber is the affine group scheme $\textbf{GL}_{n-1}$. In particular, since $\pi_n$ is affine, the cohomological dimension of $U_n$ is no greater than the cohomological dimension of $\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*$. This equals the dimension $2(n-1)$.
More precisely, $U_n$ is simultaneously a principal bundle for both group schemes over $\mathbb{P}^{n-1}\times(\mathbb{P}^{n-1})^*$ that are the pullbacks via the two projections of $\textbf{GL}$ of the tangent bundle. Concretely, for a fixed $1$-dimensional subspace $K$ of the $n$-dimensional vector space $V$ -- the kernel -- and for a fixed codimension $1$ subspace $I$ -- the image -- the set of invertible linear maps from $V/K$ to $I$ is simultaneously a principal bundle under precomposition by $\textbf{GL}(V/K)$ and a principal bundle under postcomposition by $\textbf{GL}(I)$. In particular, the pushforward of the structure sheaf, $$\mathcal{E}_n:=(\pi_n)_*\mathcal{O}_{U_n},$$ is a quasi-coherent sheaf that has an induced action of each of these group schemes.
The invariants for each of these actions is just $$\pi_n^\#:\mathcal{O}_{\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*}\to \mathcal{E}_n.$$ Concretely, the only functions on an algebraic group that are invariant under pullback by every element of the group are the constant functions. The group schemes and the principal bundle are each
Zariski locally trivial. Consider the restriction of $\mathcal{E}_n$ on each open affine subset $U$ where the first group scheme is trivialized and the principal bundle is trivialized. The sections on this open affine give a $\mathcal{O}(U)$-linear representation of $\textbf{GL}_{n-1}$. Because $\textbf{GL}_{n-1}$ is linearly reductive, there is a unique splitting of this representation into its invariants, i.e., $\pi_n^\#\mathcal{O}(U)$, and a complementary representation (having trivial invariants and coinvariants). The uniqueness guarantees that these splittings glue together as we vary the trivializing opens. Thus, there is a splitting of $\pi_n^\#$ as a homomorphism of quasi-coherent sheaves, $$t_n:(\pi_n)_*\mathcal{O}_{U_n} \to \mathcal{O}_{\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*}.$$
For every invertible sheaf $\mathcal{L}$ on $\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*$, this splitting of $\mathcal{O}$-modules gives rise to a splitting, $$t_{n,\mathcal{L}}:(\pi_n)_*(\pi_n^*\mathcal{L})\to \mathcal{L}.$$ In particular, for every integer $q$, this gives rise to a surjective group homomorphism, $$H^q(t_{n,\mathcal{L}}):H^q(U_n,\pi_n^*\mathcal{L}) \to H^q(\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*,\mathcal{L}) .$$
Now let $\mathcal{L}$ be a dualizing invertible sheaf on $\mathbb{P}^{n-1}\times (\mathbb{P}^{n-1})^*.$ This has nonzero cohomology in degree $2(n-1)$. Thus, the cohomological dimension of sheaf cohomology for quasi-coherent $\mathcal{O}_{U_n}$-modules also equals $2(n-1)$.
QED
Since $f_U$ is affine, the cohomological dimension for $U_n$ is no greater than the cohomological dimension for $U_m$. However, by the proposition, the cohomological dimension for $U_m$ equals $2(m-1)$. Since $1<m<n$, this is a contradiction. |
Suppose that a random variable Y has a gamma distribution with parameters $\displaystyle \alpha = 2$ and an unknown $\displaystyle \beta$. We have earlier proved that $\displaystyle 2Y/\beta$ has a chi-square distribution with 4 df. Using $\displaystyle 2Y/\beta$ as a pivotal quantity, derive a 90% confidence interval for $\beta$.
I don't know if I'm on the right way. I create a new random variable $\displaystyle U = 2Y/\beta$ and look for $\displaystyle P(a\leq U\leq b) = 0.90$
=
$\displaystyle P(a\leq 2Y/\beta\leq b) = 0.90$
I then have a similar example in the book that tells me to set it up like this:
$\displaystyle P(U<a) = \int_{0}^{a}{} f(u) du = 0.05$
$\displaystyle P(U>b) = \int_b^\infty f(u) du = 0.05$
Must I derive the pdf of U manually or can I use that $\displaystyle 2Y/\beta$ is chi-square distributed and put in the pdf of a chi-square distribution?
That would be: $\displaystyle f(u) = \frac{u^{(\nu /2)-1}e^{-u/2}}{2^{\nu /2}\Gamma (\nu /2)}$ with $u$ in place of $y$ and $\displaystyle \nu = 4$. I've tried to do that but I don't get the right answer, which is $(2Y/9.49,\ 2Y/0.711)$.
Thanks for your help.
Oct 8th 2008, 06:40 PM
mr fantastic
Quote:
Originally Posted by approx
Suppose that a random variable Y has a gamma distribution w. parameters $\displaystyle \alpha = 2$ and an unknown $\displaystyle \beta$. We have earlier proved that $\displaystyle 2Y/\beta$ has a chi-square distribution w. 4 df. Using $\displaystyle 2Y/\beta$ as a pivotal quantity. derive a 90% confidence interval for \beta.
I don't know if I'm on the right way. I create a new random variable $\displaystyle U = 2Y/\beta$ and look for $\displaystyle P(a\leq U\leq b) = 0.90$
=
$\displaystyle P(a\leq 2Y/\beta\leq b) = 0.90$
I then have a similar example in the book that tells me to set it up like this:
$\displaystyle P(U<a) = \int_{0}^{a}{} f(u) du = 0.05$
$\displaystyle P(U>b) = \int_b^\infty f(u) du = 0.05$
Must I derive the pdf of U manually or can I use that $\displaystyle 2Y/\beta$ is chi-square distributed and put in the pdf of a chi-square distribution?
That would be:$\displaystyle (y)^{(\upsilon /2)-1}e^{-y/2}/2^{\upsilon/2}\Gamma (\upsilon /2)$ but with y=u and $\displaystyle \upsilon$ = 4I've tried to do that but I don't get the right answer, which is $\displaystyle 2Y/9.49, 2Y/.711$.Thanks for your help.
The pdf of U is $\displaystyle f(u) = \frac{u e^{-u/2}}{4}$.
Your trouble is solving:
$\displaystyle \Pr(U < a) = \int_0^a \frac{u e^{-u/2}}{4} \, du$ for a. I get a = 0.710723.
$\displaystyle \Pr(U > b) = \int_b^{+\infty} \frac{u e^{-u/2}}{4} \, du$ for b. I get b = 9.48772.
So your trouble is calculus, not statistics. Now that you know that what you're trying to do is correct, I suggest you go back and re-check your calculations. Then:
Your calculations are correct. Now solve for a (numerically, I'd imagine) and get an answer to the required accuracy. (An exact answer in terms of the Lambert W-function is possible but neither useful nor required).
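The two quantiles can indeed be found numerically. For $U \sim \chi^2_4$ the CDF has a closed form, $F(u) = 1 - e^{-u/2}(1 + u/2)$, obtained by integrating $f(u) = u e^{-u/2}/4$ by parts, so a simple bisection (a sketch, not the only way) recovers a and b:

```python
import math

def chi2_cdf_4(u):
    # CDF of chi-square with 4 df; integrating f(u) = u*exp(-u/2)/4
    # by parts gives F(u) = 1 - exp(-u/2) * (1 + u/2)
    return 1.0 - math.exp(-u / 2.0) * (1.0 + u / 2.0)

def solve(p, lo, hi, tol=1e-9):
    # Bisection for F(u) = p; F is increasing, so this converges
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if chi2_cdf_4(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

a = solve(0.05, 0.0, 5.0)
b = solve(0.95, 5.0, 30.0)
print(a, b)   # approximately 0.7107 and 9.4877
```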
Oct 13th 2008, 01:21 PM
approx
Sorry for going about this forever. I also posted this in the math part, but no one was able to help me out. I go on like this: |
I'm given series $\sum_{n = 1}^{+\infty} \frac{(-1)^{n}}{(n+1)!}\left(1 + 2! + \cdots + n!\right)$ and I have to find whether it is convergent.
Testing for absolute convergence, we have $a_n = \frac{1!}{(n+1)!} + \frac{2!}{(n+1)!} + \cdots + \frac{(n-1)!}{(n+1)!} + \frac{n!}{(n+1)!}$, and since the last term is $\frac{n!}{(n+1)!} = \frac{1}{n+1}$, the series of absolute values diverges by comparison with the harmonic series; hence the original series can at best be conditionally convergent, which I will try to prove from the Leibniz criterion.
Now, I have to show that the sequence $a_n$ is monotonically decreasing and that $\lim a_n = 0$.
Treating $a_n$ as $\frac{a_n}{b_n} = \frac{1! + 2! + \cdots + n!}{(n+1)!}$ I can use Stolz-Cesàro theorem ($\lim \frac{a_n}{b_n} = \lim\frac{a_{n+1} - a_n}{b_{n+1} - b_n}$) since $b_n$ is monotonically increasing and $\lim b_n = +\infty$. Then $$\lim \frac{a_n}{b_n} = \lim\frac{a_{n+1} - a_n}{b_{n+1} - b_n} = \lim\frac{(n+1)!}{(n+2)! - (n+1)!} = \lim \frac{1}{n+2}\frac{1}{1 - \frac{1}{n+2}} = 0.$$
But how to prove monotonicity? I've tried $\frac{a_{n+1}}{a_n}$ but it didn't get me anywhere. What are some ways to show monotonicity of sequences like $a_n$? |
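A quick numerical look at the first terms (my own addition; a check of what we are trying to prove, not a proof) suggests being careful at the start: the sequence is non-increasing but not strictly decreasing initially, since $a_1 = a_2 = 1/2$.

```python
import math

# a_n = (1! + 2! + ... + n!) / (n+1)!
terms = []
s = 0
for n in range(1, 26):
    s += math.factorial(n)
    terms.append(s / math.factorial(n + 1))

# non-increasing (note a_1 = a_2 = 1/2), and heading towards 0
assert all(terms[i + 1] <= terms[i] for i in range(len(terms) - 1))
print(terms[0], terms[-1])
```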
This depends, of course, on the underlying probability space, most notably the filtration. In general, there are plenty of progressive processes which are not predictable, and this should be fairly standard material. If one works with the raw (un-augmented) filtration on the canonical path space of continuous functions, then the notions of predictability and progressive measurability coincide. I don't have the reference accessible at the moment, but I would bet any amount of money that it's in the first volume of Dellacherie-Meyer.
On the other hand, if you augment your filtration to make it complete and right-continuous, as one so often does, there are plenty of easy examples of progressive processes which are not predictable. (Actually, right-continuity is all that matters in the following.) For example, define $(X_t)_{t \ge 0}$ by $X_t=0$ for $t < 1$ and, for $t \ge 1$, $X_t=\pm 1$ with some nontrivial (say, equal) probabilities. Consider the augmented filtration generated by $X$,
$\mathcal{F}_t = \cap_{s > t} \sigma(X_u : u \le s) \vee \mathcal{N}$,
where $\mathcal{N}$ is the set of null sets of the ambient probability space. Then the right-limit process $X_+=(X_{t+})_{t \ge 0}$ is progressive but not predictable, with respect to this filtration. To see this, recall that the predictable processes are (by definition) those generated by the left-continuous adapted processes. Because $X$ is constant strictly before time $1$, every adapted process must be a.s. constant and deterministic on $[0,1)$. A left-continuous process is then also a.s. constant and deterministic on $[0,1]$, and we conclude that the same is true of predictable processes. As $X_{1+}$ is random, it follows that $X_+$ is not predictable. To see that $X_+$ is progressive, just note that it is right-continuous and adapted, because $X_{1+}$ is $\mathcal{F}_1$-measurable due to the right-continuity imposed on the filtration. |
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' (roughly: 'The "path" only comes into being because we observe it.') Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communism, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing is favor "Communism" who distance themselves from, say the USSR and red China, and people who arguing in favor of "Capitalism" who distance themselves from, say the US and the Europe Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? Sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
I am given
$A = \left[\begin{array}[c]{rr} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{array}\right]$
from which I calculated
$λ = \cos\theta \pm i\sin\theta$
the eigenvalues are thus complex, but I want to calculate the eigenvectors
$(A-\lambda I)\left[\begin{array}[c]{r} y \\ z \end{array}\right] = \left[\begin{array}[c]{rr} \mp i\sin\theta & -\sin\theta\\ \sin\theta & \mp i\sin\theta\end{array}\right] \left[\begin{array}[c]{r} y \\ z \end{array}\right] = \left[\begin{array}[c]{r} 0\\ 0\end{array}\right]$ (with the sign $\mp$ opposite to the $\pm$ in $\lambda = \cos\theta \pm i\sin\theta$)
When I try to find the eigenvector(s) I keep getting things like $0 = 0$... which is pretty useless. Does this mean there are no eigenvectors or that the eigenvector is $\left[\begin{array}[c]{r} 0\\ 0\end{array}\right]$ or that I'm doing something wrong? |
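For a concrete check (my own addition, not part of the question): with $\lambda = \cos\theta + i\sin\theta$, the vector $(1, -i)^T$ is an eigenvector, which can be verified numerically:

```python
import math

theta = 0.7   # arbitrary angle
c, s = math.cos(theta), math.sin(theta)
A = [[c, -s], [s, c]]

lam = complex(c, s)   # eigenvalue cos(theta) + i sin(theta)
v = [1, -1j]          # candidate eigenvector

Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
for i in range(2):
    assert abs(Av[i] - lam * v[i]) < 1e-12   # A v = lambda v
```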
As is often the case with NP-reductions, it makes sense to look for similar problems. In particular, it is hard to encode global conditions such has "have seen some nodes" into PCP (with polynomially many tiles) which contraindicates graph problems, packing problems would require us to encode unary numbers in PCP (creating exponentially large instance), and so on. Therefore, a string problem with only local restrictions can be expected to work best.
Consider the decision version of the shortest common supersequence problem:
Given two strings $a,b \in \Sigma^+$ with $|a|=n$ and $|b|=m$ and $k \in \mathbb{N}$, decide whether there is a string $c \in \Sigma^+$ with $|c| \leq k$ such that $a$ and $b$ are subsequences of $c$.
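As an aside (my own addition): for just two strings the optimization version is solvable in polynomial time via the identity $|\mathrm{scs}(a,b)| = n + m - |\mathrm{lcs}(a,b)|$; the hardness used below concerns more general variants of the problem. A sketch of the two-string dynamic program:

```python
def scs_length(a: str, b: str) -> int:
    # |scs(a, b)| = |a| + |b| - |lcs(a, b)| for two strings
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]   # LCS table
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return n + m - dp[n][m]
```

For example, `scs_length("abcbdab", "bdcaba")` gives 9, since the longest common subsequence of these two strings has length 4.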
The idea is to let PCP build supersequences of $a$ and $b$ from left to right, encoding in the tiles' overlaps at which position we are in $a$ and $b$, respectively. It will use one tile per symbol in $c$, so $k$ corresponds to the BPCP's bound: if we can solve this PCP with $\leq k$ tiles, you can read off the common supersequence of equal length, and vice versa.
The construction of the tiles is a bit tedious, but quite clear. Note that we will not create tiles that do not forward $a$ or $b$; such can never be part of a
shortest common supersequence, so they are superfluous. They can easily be added without breaking the properties of the reduction.
The numbers in the overlaps are encoded in binary, but using symbols outside of $\Sigma$ and padding them to a common length $\log \max(m,n)$. Thus we ensure that the tiles are used as the graphics suggest (tetris), that is characters and index-encoding overlaps do not mix (PCP does not prevent this per se). We need:
Starting tiles: $c$ can start with $a_1$, $b_1$ or both if they are equal. Intermediate tiles: $c$ can proceed with the next symbol in $a$, in $b$ or both if they are equal. Terminating tiles: $c$ ends with the last symbol of $a$ (if the last one of $b$ has been seen already), similar for $b$, or with the last symbol of both.
These are the tile schematics. Note that the intermediate tiles have to be instantiated for all pairs $(i,j) \in [n]\times [m]$. As mentioned above, create the tiles without $*$ only if the respective characters in $a$ and $b$ match.
The $*$ are symbolic for "don't care"; in the actual tiles, the other symbol will have to be copied there. Note that the number of tiles is in $\Theta(mn)$ and each tile has length $4\log \max(m,n) + 1$, so the constructed BPCP instance (over alphabet $\Sigma \cup \{0,1\}$ plus separation symbols) has polynomial size. Furthermore, the construction of every tile is clearly possible in polynomial time. Therefore, the proposed reduction is indeed a valid polynomial transformation which reduces the NP-complete shortest common supersequence problem to BPCP. |
Interesting recursive functions -
Let R={i∣∃j:f(j)=i} be the set of distinct values that f takes. Can someone explain clearly what this means?
R={i∣∃j:f(j)=i} be the set of distinct values that f takes
Someone please explain why it is written "takes"? R contains the values that f can "give", right?
@MINIPanda it is a recursive function:
f(x) {
    if (x == 1) return x;        // maps to "x" (base condition)
    else if (x == 5) return y;   // maps to "y" (base condition)
    if (x mod 2 == 0) return f(x/2);
    else return f(x+5);
}
The function assigns a value when it terminates. As you can see, the base value can be either 1 or 5, so R = {1, 5}.
PS: the recursive calls themselves don't provide the mapping; a value is mapped when the function terminates and returns something.
R={i∣∃j:f(j)=i}
Even this is ambiguous, as everything boils down to f(1) and f(5). It should be mentioned that we can consider f(x)=x, because R contains all values x such that there exists at least one value in the domain which maps to x.
@Divy Kala I have written the code above for the same. Please check it and let me know if you still have doubt.
Answer is 2. It's saying the function maps N+ → N+, i.e. both the domain and the codomain are N+.
Can anyone explain the meaning of this, please?
@Deepesh Kataria, where is f(9)?
R={i∣∃j:f(j)=i}
here the set definition means that:
suppose j = 1, then f(1) = 6; this 6 is i
f(2) = 1; this 1 is i
f(3) = 8; this 8 is i
f(4) = 2; this 2 is i
...
this way we get many values of i, which form the set N+ = {1, 2, 3, 4, ...}
Then why are you doing f(1) = f(6) = f(3) = f(8) ...? The set-builder form is not asking for this.
@akriti, see the definition of the function again; it's in recursive form: f(n) = f(n/2) (see the recursion). f(1) = f(6), so in order to calculate f(1) we need to again calculate f(6); this gives f(3); then we need to find f(3), which will be f(8); now f(8) gives f(4), then f(4) gives f(2), and then f(2) → f(1), so in this way it repeats itself. See the function again — the catchy thing is the recursion part.
http://math.stackexchange.com/questions/2118739/finding-recursive-function-range/2118749
We will use strong induction to prove this. Suppose that $f(1) = a$ and $f(5) = b$. It is clear that $$f(5n) = b$$ for all $n$. We'll prove by induction that for all $n$ not divisible by $5$, $f(n) = a$.

Base case: note that $$f(2) = f(\tfrac{2}{2}) = f(1) = a,$$ $$f(4) = f(2) = a,$$ $$f(3) = f(3+5) = f(8) = f(4) = a.$$

Inductive step: suppose $n \gt 5$ is not divisible by $5$, and that $f(m) = a$ for all $m \lt n$ which are not divisible by $5$.

If $n$ is even, $f(n) = f(\frac{n}{2})$. Here $\frac{n}{2} \lt n$, and $\frac{n}{2}$ is not divisible by $5$ (if it were, then $n$ would be too), so by the induction hypothesis $f(\frac{n}{2}) = a$.

If $n$ is odd, then $n+5$ is even, so $f(n) = f(n+5) = f(\frac{n+5}{2})$. Here $\frac{n+5}{2} \lt n$ because $n \gt 5$, and $\frac{n+5}{2}$ is not divisible by $5$ (if it were, then $n+5$, and hence $n$, would be divisible by $5$), so by the induction hypothesis $f(\frac{n+5}{2}) = a$.

In either case $f(n) = a$, which completes the induction.
Best solution, using mathematical induction. Thanks :-)
@Sourav Basu Yes.
$\because f(n)=f(n+5)$ for odd $n$. Putting $n \to n-5$ [where $n>5$ and $n-5$ is odd] yields $f(n-5)=f(n-5+5)=f(n)$.
Let $f(1) = x$. Then $f(2) = f(2/2) = f(1) = x$.

$f(3) = f(3+5) = f(8) = f(8/2) = f(4) = f(4/2) = f(2) = f(2/2) = f(1) = x$, and $f(5) = f(5+5) = f(10) = f(10/2) = f(5) = y$.

All of $N^+$ except multiples of $5$ are mapped to $x$, and multiples of $5$ are mapped to $y$, so $\mathbf{the\ answer\ is\ 2}$.
Thanks @Prince Sindhiya, the pictorial mapping clears everything up.
choose any number let n=17, then
f(17)=f(22)=f(11)=f(16)=f(8)=f(4)=f(2)=f(1)=f(6)=f(3)=f(8)=f(4)=f(2)=f(1)=f(6)=f(3)=f(8)=f(4)=f(2)=f(1)... <this is one part>
now let n=50
f(50)=f(25)=f(30)=f(15)=f(20)=f(10)=f(5)=f(10)=f(15)=f(20)=f(10)=f(5)=f(10)=f(5)=f(10)=f(5) .....<this is other part>
So we can take any number and it will fall into one of these two cycles; these give the two values that the function f( ) can take.
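For anyone who wants to sanity-check the answer of 2 numerically, here is a small Python sketch (not from the original discussion) that follows the recursion f(n) = f(n/2) for even n and f(n) = f(n+5) for odd n until it reaches one of the two cycle anchors, 1 or 5:

```python
def anchor(n):
    """Follow f(n) = f(n/2) for even n, f(n) = f(n+5) for odd n,
    until reaching one of the two self-referencing cycles (through 1 or 5)."""
    while n not in (1, 5):
        n = n // 2 if n % 2 == 0 else n + 5
    return n

# every starting value lands on anchor 1 (non-multiples of 5) or anchor 5
values = {anchor(n) for n in range(1, 1001)}
print(len(values))  # 2 -- one value for multiples of 5, one for everything else
```

This matches the induction proof above: f is constant on the multiples of 5 and constant on everything else.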
For combinatorics , can add balls and bin... |
I just wanted to check the method I have formulated for the derivation of the adjoint Dirac equation using gamma-matrix notation. This is a problem from the very excellent "Modern Particle Physics" by Mark Thomson. Incidentally, I fully recommend purchasing this book if you are studying at Cambridge, Oxford or UCL. Very good book, pedagogically speaking.
So I start from the familiar Dirac equation in momentum form:
$$ (\gamma^{\mu}p_{\mu}-m)u=0 $$
I take the Hermitian conjugate of this, $^{\dagger}$: $$ (\gamma^{\mu}p_{\mu}-m)^{\dagger}u^{\dagger}=0 $$
I multiply this by $\gamma^{0}$: $$ (\gamma^{\mu}p_{\mu}-m)^{\dagger}u^{\dagger}\gamma^{0}=0 $$ Noting that $m^{\dagger}=m$ and $(\gamma^{\mu})^{\dagger}=-(\gamma^{\mu})$ I get the following: $$ (-\gamma^{\mu}p_{\mu}-m)u^{\dagger}\gamma^{0}=0 $$ This is where it gets a little sketchy, apologies for the colloquialism. I go to here: $$ u^{\dagger}\gamma^{0}(\gamma^{\mu}p_{\mu}+m)=0 $$ I feel this is a little bit of an illegal step. But this comes out as, using the definition of the adjoint spinor $\bar{u}=u^{\dagger}\gamma^0$: $$ \bar{u}(\gamma^{\mu}p_{\mu}+m)=0 $$ If anyone could suggest a better method to swap the ordering of the $u^{\dagger}\gamma^{0}$ rather than just fudging it that would be very helpful!
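For comparison (not part of the original question), the standard textbook route avoids the reordering problem: Hermitian conjugation reverses operator ordering automatically, and the hermiticity property is $(\gamma^{0})^{\dagger}=\gamma^{0}$, $(\gamma^{k})^{\dagger}=-\gamma^{k}$, written compactly as $(\gamma^{\mu})^{\dagger}=\gamma^{0}\gamma^{\mu}\gamma^{0}$; the assumption $(\gamma^{\mu})^{\dagger}=-\gamma^{\mu}$ holds only for the spatial matrices, which is where the sign on $m$ flips:

```latex
% Hermitian conjugate of (\gamma^{\mu}p_{\mu}-m)u = 0; conjugation reverses products:
u^{\dagger}\left((\gamma^{\mu})^{\dagger}p_{\mu}-m\right)=0
% Multiply on the right by \gamma^{0} and insert (\gamma^{\mu})^{\dagger}=\gamma^{0}\gamma^{\mu}\gamma^{0}:
u^{\dagger}\gamma^{0}\gamma^{\mu}\gamma^{0}\gamma^{0}p_{\mu}-m\,u^{\dagger}\gamma^{0}=0
% Use (\gamma^{0})^{2}=1 and the definition \bar{u}=u^{\dagger}\gamma^{0}:
\bar{u}\left(\gamma^{\mu}p_{\mu}-m\right)=0
```

This way the adjoint equation comes out with $-m$ and no reordering step is ever needed.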
Many thanks! |
$\newcommand{\real}[1]{\left\lvert #1 \right\rvert}$$\newcommand{\Sing}[1]{\operatorname{Sing}(#1)}$$\newcommand{\counit}{\epsilon}$$\newcommand{\To}{\longrightarrow}$$\newcommand{\proj}{\mathrm{proj}}$$\newcommand{\NN}{\mathbb{N}}$$\newcommand{\RR}{\mathbb{R}}$
Yes, the map $\real{\Sing{X}} \to X$ is a Serre fibration.
[Disclaimer: This answer is very long. A lot of what I will write is contained in Oscar's answer and in the comments. I present it here for completeness and convenience.]
The present answer adapts the constructions given by Oscar Randall-Williams in his answer. The missing point is to prove that the maps generalizing the ones appearing in Oscar's answer are always inclusions of retracts. We will actually prove that they are trivial cofibrations, which will fundamentally require the fact that finite cell complexes are Euclidean neighbourhood retracts. Please upvote Oscar's answer.
Setup
Let $X$ be a topological space, and let $\counit_X:\real{\Sing{X}}\to X$ be the counit map of the adjunction between the singular complex functor and the geometric realization functor. We want to show that $\counit_X$ is a Serre fibration. Let then $h:\real{\Delta^n} \to \real{\Sing{X}}$ and $H:\real{\Delta^n} \times I = \real{\Delta^n \times \Delta^1}\to X$ be continuous maps. We need only prove that it is possible to provide a diagonal lift for the diagram$$ \begin{array}{ccc}\real{\Delta^n} & \overset{h}{\To} & \real{\Sing{X}} \\\Big\downarrow\rlap{\scriptstyle \iota_0} & & \Big\downarrow\rlap{\scriptstyle \counit_X} \\\real{\Delta^n}\times I & \underset{H}{\To} & X\end{array} $$
Constructions (adapted from Oscar Randall-Williams' answer)
The space $C$.Since simplices are compact, the image of $h:\real{\Delta^n} \to \real{\Sing{X}}$ is contained in the realization $\real{K}$ of some finite sub-simplicial set $K$ of $\Sing{X}$. Then $h$ restricts to a map $h:\real{\Delta^n}\to\real{K}$, and we can take the mapping cylinder of $h$:$$ C = M_h = \real{K} \coprod_{\real{\Delta^n}} \bigl( \real{\Delta^n}\times I \bigr) $$This mapping cylinder generalizes the space $C$ described by Oscar Randall-Williams in his answer.
The space $D$.The preceding space $C$ includes naturally into the mapping cylinder$$ D = M_{\proj_{\real{K}}} = \real{K} \coprod_{\real{K}\times\real{\Delta^n}} \bigl( \real{K}\times\real{\Delta^n}\times I \bigr) $$of the projection map $\proj_{\real{K}} : \real{K}\times\real{\Delta^n} \to \real{K} $. Importantly, observe that since geometric realization preserves colimits and finite products, the space $D$ is also the geometric realization$$ D = \real{M_{\proj_K}} $$of the simplicial mapping cylinder $M_{\proj_K} = K \coprod_{K\times\Delta^n} (K\times\Delta^n\times\Delta^1)$ of the projection $\proj_K: K\times\Delta^n \to K$.
The space $D$ plays here the role of the join appearing in Oscar Randall-Williams' answer: note that the join $\real{K}\ast\real{\Delta^n}$ is naturally a quotient of $D$.
The maps.The inclusion map $j:C \to D$ is given by: $j$ restricted to $\real{K}$ is the canonical inclusion $\real{K}\hookrightarrow D$ of the end of the mapping cylinder; $j([x,t]) = [h(x),x,t]$ for $(x,t)\in \real{\Delta^n} \times I$ (where we see $D$ as a quotient of $\real{K}\times\real{\Delta^n}\times I$).
There is a further map $G:C\to X$ determined by:
$G$ restricted to $\real{\Delta^n}\times I$ coincides with $H$.

Main argument: $j$ is a trivial Hurewicz cofibration
Now that we have adapted Oscar's construction, the main part of the argument consists of showing that $j: C\to D$ is a trivial cofibration.
First, the map $j$ is easily seen to be injective. Since $C$ is compact (because both $\real{K}$ and $\real{\Delta^n}\times I$ are compact) and $D = \real{M_{\proj_K}}$ is Hausdorff, it follows that $j: C\to D$ is a closed map, and in particular a homeomorphism onto its image.
Second, the map $j$ is a homotopy equivalence. Simply note that the composition of $\real{K} \hookrightarrow C \overset{j}{\to} D$ is the canonical inclusion $\real{K} \hookrightarrow M_{\proj_{\real{K}}} = D$ into the mapping cylinder, and thus a homotopy equivalence. Similarly, $\real{K}\hookrightarrow C$ is the inclusion into the mapping cylinder, and thus a homotopy equivalence. By the two-out-of-three property for homotopy equivalences, $j$ is itself a homotopy equivalence.
It remains to show that the inclusion of $j(C)$ into $D$ is a Hurewicz cofibration. Observe that both $C$ and $D$ are finite cell complexes, and in particular are Euclidean neighbourhood retracts (ENRs). This can be proved in the same way as corollary A.10 in the appendix to Allen Hatcher's book "Algebraic topology", which states that finite CW-complexes are ENRs. The desired result is now encoded in the following lemma.
Lemma: Assume $Y$ is a closed subspace of the topological space $Z$. Assume also that $Y$ and $Z$ are both ENRs. Then the inclusion of $Y$ into $Z$ is a Hurewicz cofibration.
This result actually holds in general for ANRs, and is stated as proposition A.6.7 in the appendix of the book "Cellular structures in topology" by Fritsch and Piccinini. Nevertheless, for completeness, I will provide a proof of the lemma at the end of this answer which uses only the results in the appendix of Hatcher's book when $Z$ is compact.
I will now conclude the proof that $\counit_X$ is a Serre fibration.
Conclusion
First, observe that $j:C\to D$ is the inclusion of a strong deformation retract because it is a homotopy equivalence and a Hurewicz cofibration. In particular, $j$ admits a right inverse $r:D\to C$. The composite $\overline{G} = G\circ r:D\to X$ is then an extension of $G:C\to X$ along $j$, i.e. $\overline{G}\circ j = G$.
Now we use the description of $D$ as the realization of the simplicial set $M_{\proj_K}$ to give a diagonal lift for the square diagram at the beginning of this answer. Note that to give a map $\overline{G}:D=\real{M_{\proj_K}} \to X$ is by adjunction the same as giving a map $F:M_{\proj_K}\to\Sing{X}$. I claim that the composite$$ \widetilde{H} : \real{\Delta^n}\times I \hookrightarrow C \overset{j}{\To} D \overset{\real{F}}{\To} \real{\Sing{X}} $$ is such a diagonal lift:
$\counit_X \circ\real{F}= \overline{G}$ by construction of $F$ via adjunction. Consequently, $\counit_X \circ \real{F}\circ j = G$.
In particular, $\counit_X \circ \real{(F|_K)} = \counit_X \circ (\real{F})|_{\real{K}} = \counit_X \circ \real{F} \circ j|_{\real{K}} = G|_{\real{K}} = \counit_X$. This implies by adjunction that $F|_K$ is the inclusion of $K$ into $\Sing{X}$. Therefore, the restriction $(\real{F})|_{\real{K}} = \real{(F|_K)}$ is the inclusion of $\real{K}$ into $\real{\Sing{X}}$.
It follows that $\widetilde{H}\circ \iota_0 = \real{F}\circ j|_{\real{K}} \circ h = (\real{F})|_{\real{K}} \circ h = h$, i.e. the map $\widetilde{H}$ makes the upper triangle commute.
Furthermore, it follows from item 1 that $\counit_X \circ\widetilde{H} = \counit_X \circ\real{F}\circ j|_{(\real{\Delta^n}\times I)} = G|_{(\real{\Delta^n}\times I)} = H$. So $\widetilde{H}$ makes the lower triangle commute.
Proof of the lemma
We will use (a simplification of) the characterization of Hurewicz cofibrations given in theorem 2 of Arne Strøm's article "Note on cofibrations" (published in Mathematica Scandinavica 19 (1966), pages 11-14).
The inclusion of a closed subspace $Y$ of a metrizable topological space $Z$ is a cofibration if there exists a neighbourhood $U$ of $Y$ in $Z$ which deforms in $Z$ to $Y$ rel $Y$. More explicitly, there must exist a homotopy $F:U\times I\to Z$ such that $F(x,0)=x$ and $F(x,1)\in Y$ for $x\in U$, and $F(y,t)=y$ for $(y,t)\in Y\times I$.
This characterization of closed cofibrations is very similar to (and follows easily from) the usual characterization in terms of NDR-pairs, except that it does not demand the homotopy to be defined on the whole space.
Assuming that $Y$ and $Z$ are ENRs, we will now prove the existence of the neighbourhood $U$ and the homotopy $F$ as above.
A space $A$ is an ENR exactly when:
$A$ is homeomorphic to a closed subspace of $\RR^N$ for some $N\in\NN$; for any $N\in\NN$, if $B$ is a closed subspace of $\RR^N$ which is homeomorphic to $A$, then some neighbourhood of $B$ in $\RR^N$ retracts onto $B$.
[For reference, the aforementioned appendix of Allen Hatcher's book "Algebraic topology" explains these two points when $A$ is compact.]
So we may assume without loss of generality that $Z$ is a closed subspace of $\RR^N$, and that some neighbourhood $V$ of $Z$ in $\RR^N$ retracts to $Z$ via a retraction $r_Z:V\to Z$. Since $Y\subset Z\subset\RR^N$ is closed in $\RR^N$ and $Y$ is a ENR, there also exists a neighbourhood $W$ of $Y$ in $\RR^N$ which admits a retraction $r_Y:W\to Y$. Using the convexity of $\RR^N$, we can produce a straight line homotopy $SLH:W\times I\to\RR^N$ between the identity of $W$ and $r_Y$. We may assume without loss of generality (by shrinking $W$ if necessary) that the image of the homotopy $SLH$ is contained in the open $V$. Define then the desired neighbourhood by $U=W\cap Z$ and the homotopy by $F = r_Z \circ SLH$. |
The way I calculate it is by summing up the weighted betas of the longs and shorts, but I saw a table where this wasn't the case, so I am wondering whether this is the correct way.
You can actually show by construction that the beta of the portfolio is the weighted sum of all the underlyings betas.
Assume the return of the benchmark and some asset $a$ at time $t$ are respectively denoted $r_{b,t}$ and $r_{a,t}$, then the beta of a given asset is defined by:
$$r_{a,t} = \alpha_a + \beta_a r_{b,t} + \epsilon_{a,t}$$
Let's assume you have a portfolio of $n$ assets $(a_1, ..., a_i, ..., a_n)$ each with a weight $w_i$; then the return of the portfolio at time $t$ is defined as:
$$r_{p,t} = \sum_{i=1}^n w_i r_{a_i,t}$$
Now, by expressing each asset's return in terms of their own beta, you get:
$$ \begin{align} r_{p,t} &= \sum_{i=1}^n w_i r_{a_i,t}\\ &= \sum_{i=1}^n w_i \left( \alpha_{a_i} + \beta_{a_i} r_{b,t} + \epsilon_{a_i,t} \right)\\ &= \underbrace{\sum_{i=1}^n w_i \alpha_{a_i}}_{\alpha_p} + \underbrace{\left(\sum_{i=1}^n w_i \beta_{a_i} \right)}_{\beta_p} r_{b,t} +\sum_{i=1}^n w_i \epsilon_{a_i,t} \end{align} $$
There might be something in the table that you missed (likely the weights as Alex C pointed out) or maybe it was wrong. |
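To illustrate the derivation numerically, here is a quick check (a hypothetical sketch with made-up weights, alphas, and betas): simulate asset returns from the single-factor model above, form the portfolio, and regress its returns on the benchmark; the estimated portfolio beta comes out as the weighted sum of the asset betas.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 5000, 4
w = np.array([0.5, 0.3, -0.4, 0.6])      # hypothetical long/short weights
beta = np.array([1.2, 0.8, 1.5, -0.2])   # hypothetical per-asset betas
alpha = np.array([0.01, 0.0, -0.02, 0.005])

r_b = rng.normal(0.0, 1.0, T)                  # benchmark returns
eps = rng.normal(0.0, 0.5, (T, n))             # idiosyncratic terms
r_a = alpha + beta * r_b[:, None] + eps        # r_{a,t} = alpha_a + beta_a r_{b,t} + eps_{a,t}
r_p = r_a @ w                                  # portfolio returns

# OLS slope of r_p on r_b = sample portfolio beta
beta_hat = np.cov(r_p, r_b)[0, 1] / np.var(r_b, ddof=1)
print(beta_hat, w @ beta)                      # estimate is close to the weighted sum
```

The residual term $\sum_i w_i \epsilon_{a_i,t}$ only adds sampling noise to the slope estimate, which is why the two printed numbers agree up to estimation error.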
Statistical model for the completely randomized design (CRD):
$y_{ij} = \mu + \tau_i + \epsilon_{ij}$
where, $i$ denotes treatment and $j$ denotes observation.
$i=1,2,\ldots,k \quad\text{and}\quad j=1,2,\ldots,n_i$
$y_{ij}$ is a random variable that represents the response obtained on the $j$th observation of the $i$th treatment.
$\mu$ is the overall mean of the response $y_{ij}$
$\tau_i$ is the effect on the response of the $i$th treatment.
$\mu_i = \mu + \tau_i$
here $\mu_i$ denotes the true response of the $i$th treatment.
and $\epsilon_{ij}$ is the random error term representing sources of nuisance variation, that is, variation due to factors other than treatments.
The assumption is $\epsilon_{ij}\overset{iid}{\sim} N(0,\sigma^2)$.
Why do we have to assume $\epsilon_{ij}\overset{iid}{\sim} N(0,\sigma^2)$? That is, what is the importance of this assumption? If we do not assume it, what will be the effect?
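One concrete role of the assumption: least-squares estimates of $\mu$ and the $\tau_i$ are unbiased even without normality (Gauss-Markov), but the exact $F(k-1, N-k)$ null distribution of the ANOVA $F$ statistic for testing $H_0: \tau_1 = \cdots = \tau_k = 0$ relies on the errors being iid normal with common variance. A small numerical sketch of the model and the $F$ statistic, with made-up parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, tau, sigma = 10.0, np.array([0.0, 2.0, -1.0]), 1.0   # made-up values
k, n = 3, 30                                             # balanced design, n_i = n

# y_ij = mu + tau_i + eps_ij with eps_ij ~ iid N(0, sigma^2)
y = mu + tau[:, None] + rng.normal(0.0, sigma, (k, n))

grand = y.mean()
ssb = n * ((y.mean(axis=1) - grand) ** 2).sum()          # between-treatment SS
sse = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum()   # within-treatment (error) SS
F = (ssb / (k - 1)) / (sse / (k * n - k))
print(F)  # large: the treatment effects stand out against the error variance
```

Under $H_0$ and the iid-normal assumption this $F$ follows an $F(k-1, kn-k)$ distribution; with non-normal or heteroscedastic errors, the nominal significance levels of the test are no longer exact.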
Any help including reference will be appreciated. |
I'm trying to solve the variable-density, variable-viscosity Navier-Stokes equations using a lagged-pressure projection method. I'm solving the cavity problem as a test case for now (once I get the projection right, I will be solving a multiphase problem). The problem is that oscillations develop in the solution as I march in time.
I'm using collocated grid for solving this problem.
I'm solving variable density, viscosity N-S equation: $$ \dfrac{u^* - u^n}{\Delta t} + \left(u_i\frac{\partial u_i}{\partial x_j} \right)^n = \left(-\frac{1}{\rho}\frac{\partial p}{\partial x_i} \right)^n + \frac{1}{\rho} \left[\frac{\partial}{\partial x_j}\left( \mu \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) \right) \right]^n $$
and on boundary, $U^* = U^{n+1}$.
I'm approximating the convective terms with first-order Godunov as of now, and the diffusion term and pressure gradient with central differences.
Then I'm solving the projection step,
$$ \frac{\nabla\cdot u^*}{\Delta t} = \nabla\cdot\left(\frac{1}{\rho}\nabla\phi\right) $$
using boundary condition $$ \frac{\partial \phi}{\partial n} = 0 $$
I'm discretizing $\nabla\cdot u^*$ as $\dfrac{u^*_{i+1,j} - u^*_{i-1,j}}{2\Delta x} + \dfrac{v^*_{i,j+1} - v^*_{i,j-1}}{2\Delta y}$ and applying the boundary condition using ghost nodes. For the pressure, I'm discretizing as $\dfrac{\left(\frac{1}{\rho}\frac{\partial p}{\partial x} \right)_{i+1/2,j} - \left(\frac{1}{\rho}\frac{\partial p}{\partial x} \right)_{i-1/2,j}}{h} + \dfrac{\left(\frac{1}{\rho}\frac{\partial p}{\partial y} \right)_{i,j+1/2} - \left(\frac{1}{\rho}\frac{\partial p}{\partial y} \right)_{i,j-1/2}}{h}$
For solving the singular Poisson equation, I'm removing the nullspace from the RHS ($\nabla\cdot u^*$) and from the operator matrix ($\nabla\cdot\frac{1}{\rho}\nabla$). I'm also removing the nullspace from the solution vector.
I'm using PETSc to solve the Poisson equation, with the GMRES KSP solver.
I'm updating the velocity using $U^{n+1} = U^* - \Delta t\,\nabla\phi$ and $P^{n+1} = P^* + \phi$. I'm fine with first-order accurate pressure.
However, I get the oscillations in my solution. And then the solver diverges if kept iterating. I strongly feel there is something wrong with the boundary conditions, especially on $U^*$ as the solution is very sensitive to what I specify $U^*$ on boundaries. I also tried with $U^* = 0$ but no improvement.
Is there anything wrong in the method I'm using here? It'll great help if you can point it out.
Thank you.
See results here:
Found the solution:
I'm aware of the fact that the staggered grid is the most straightforward solution to this checkerboard problem, but I wanted to stick to the collocated grid as it is easier to code and less prone to error; plus, having velocities at cell centers will greatly reduce the level-set formulation effort compared to a staggered grid.
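As a side note on why the collocated grid is delicate here: with the wide central-difference stencil, a pure checkerboard pressure mode is invisible to the discrete gradient, so the projection cannot suppress it. A minimal numpy illustration (using periodic wrap via `roll` purely for the demonstration):

```python
import numpy as np

N = 8
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
p = (-1.0) ** (i + j)          # checkerboard pressure mode

# wide central differences on a collocated grid (periodic wrap via roll)
dpdx = (np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)) / 2.0
dpdy = (np.roll(p, -1, axis=1) - np.roll(p, 1, axis=1)) / 2.0

print(np.abs(dpdx).max(), np.abs(dpdy).max())  # 0.0 0.0 -- the gradient never sees it
```

Since p(i+1, j) and p(i-1, j) are always equal on the checkerboard, the wide stencil returns an identically zero gradient; that is the nullspace mode the checkerboard oscillations live in.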
Now, instead of the lagged-pressure projection method, I've changed to (Kim and Moin's) pressure-free projection method, which is working great as of now. However, I've coded it for the constant-density, constant-viscosity Navier-Stokes equations; I'm hoping it will work for variable density and viscosity as well.
While implementing, there is now no pressure-gradient term in the momentum equation, and I used the following boundary conditions on the intermediate velocity field: $$ U^*\cdot\hat{n} = U^{n+1}\cdot\hat{n} $$ $$ U^*\cdot\hat{t} = (U^{n+1} + \Delta t\,\nabla\phi)\cdot\hat{t} $$ where $\hat{n}$ and $\hat{t}$ are the normal and tangent vectors. The Poisson equation has the Neumann B.C. as before.
As I have to get going with my project, I have not thoroughly investigated the error in the lagged-pressure projection method, but I will do so as soon as I get some time.
Thank you.
Here's the solution (at t = 10sec with Re400) with pressure-free projection method: |
Context:
For a system with $n$ degrees of freedom (DOF), one has to deal with $2n$ independent coordinates ($2n$ dimensional phase space), of position $q$ and $\dot{q}$ in Lagrangian formulation, or independent coordinates of $q$ and generalized momentum $p$ in the Hamiltonian formulation.
We remind the reader that if a system with $n$ DOF exhibits at least $n$ globally defined integrals of motion (first integrals), where all such conserved variables are in (Poisson) involution with one another, then the system is (Liouville) integrable.
Furthermore a system with $n$ DOF can at most have $2n-1$ globally defined integrals of motion. A system will generically have $2n$ locally defined constants of motion. We will only be interested in integrals of motion that are globally defined.
Now coming to the famous case of the 2D double pendulum, with weightless rigid wires attaching the two masses, having the lengths $\ell_1$ and $\ell_2$, the generalized coordinates here are given by the two angles that each mass makes with the vertical, denoted respectively by $\theta_1$ and $\theta_2.$
It is rather straightforward to show then that under constant gravity field, the Lagrangian is given by:
$$L~=~T-V~=~\frac{1}{2}(m_1+m_2)\ell_1^2\dot{\theta_1}^2+\frac{1}{2}m_2 \ell_2^2 \dot{\theta_2}^2+m_2 \ell_1 \ell_2 \dot{\theta_1} \dot{\theta_2} \cos(\theta_1 - \theta_2)+(m_1+m_2)g\ell_1\cos\theta_1 + m_2g\ell_2\cos\theta_2.$$
From here calculating the Euler-Lagrange differential equations, one obtains a coupled 2nd-order ordinary differential equation that can be solved only numerically for $\theta_1(t)$ and $\theta_2(t).$
Question:
We know that one integral of motion here is the total energy $E$, and that the angular momentum component $L_z$ orthogonal to the plane of motion is also an integral of motion independent of $E$. Unfortunately, they do not Poisson commute.
Are there any other integrals of motion to be found here?
Just by looking at the Lagrangian, as given above, how can we show the system is not integrable, at least at a conceptual level? (We just want to predict, by reasoning, what is conserved and what quantities are not first integrals here.)
I'm having trouble with a BJT circuit.
What we are given:
\$U_{CC} = 10\text{V}\$
\$R_C = 972 \Omega\$
\$R_B = 14\text{k}\Omega\$
\$U_{\text{BE}} = 0.7\text{V}\$
\$I_C = 12\text{mA}\$
We need to find the current amplification \$B\$.
My approach was to calculate \$U_C\$, the Voltage which drops at \$R_C\$. I read in a book that \$I_C\$ is the current we need to use Ohm's Law at \$R_C\$. So I solved the equation \$U_R = I_C \cdot R_C \Leftrightarrow U_R = 12\text{mA} \cdot 972 \Omega \Leftrightarrow U_R = 11.664\text{V}\$.
Having this done I was able to use the mesh current law at the upper right part of the circuit, which gave me the equation \$-U_{CC} + U_C - U_B = 0\$, where \$U_B\$ is the voltage which drops at \$R_B\$. Filling the equation with the known values we receive \$U_B = 1.664\text{V}\$. Since we have \$R_B\$ given we can now apply Ohm's Law with the previously calculated voltage, which leads to the following value for
\$I_B = \frac {U_B}{R_B} = \frac {1.664\text{V}}{14000 \Omega} = 1.188571429\times10^{-4}\text{ A}\$
or \$0.1188571429\text{ mA}\$.
Now I found out that the current gain \$B\$ can be expressed by \$B = \frac {I_C}{I_B}\$.
Since we know \$I_C\$ as well as \$I_B\$ I went ahead and filled out the equation which gave me \$B = \frac {12\text{mA}}{0.1188571429\text{mA}} = 100.9615385\$ for \$B\$.
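As a plain recomputation of the arithmetic above (not a verdict on the circuit topology, which depends on the schematic not shown here):

```python
U_cc, R_c, R_b = 10.0, 972.0, 14e3   # volts and ohms, as given
I_c = 12e-3                           # amperes

U_r = I_c * R_c            # drop across R_C: 11.664 V
U_b = U_r - U_cc           # from -U_cc + U_r - U_b = 0: 1.664 V
I_b = U_b / R_b            # base current, ~0.11886 mA
B = I_c / I_b              # current gain, ~100.96
print(U_r, U_b, I_b, B)
```

The numbers reproduce the values in the question. Note that the computed drop across \$R_C\$ (11.664 V) exceeds \$U_{CC}\$ (10 V); whether that is physically consistent depends on the actual schematic, so it is worth double-checking against the circuit diagram.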
Am I on the right track? |
Let (X,d) be a metric space and A,B $\subset$ X be two compact subsets. Show $A\cap B$ is also compact.
I attempted this question by showing the intersection is bounded and closed.
But I stated that Bounded and Closed $\Rightarrow$ Compact (Heine-Borel) but I didn't realise this only holds for $\mathbb R^n$.
Most of the other similar problems on here were dealing with $\mathbb R^n$, so how would you go about showing this for a general space X?
Any help would be greatly appreciated! |
The best workaround I can think of at the moment is to turn to RegionPlot3D.
To plot an ellipsoid with RegionPlot3D, we need the formula for a general ellipsoid, which is mentioned in the Details section of the documentation for Ellipsoid:
$$(x-p).{\Sigma^{-1 }}.(x-p)\leq 1$$
where $\Sigma$ is the weight matrix, which can be generated with the help of TransformedRegion:
mat = TransformedRegion[Ellipsoid[{0, 0, 0}, {4, 2, 3}],
RotationTransform[k Pi/(30/2) + Pi/180, {0, 0, 1}]][[2]]
$$\left(
\begin{array}{ccc}
16 \cos ^2\left(\frac{\pi k}{15}+\frac{\pi }{180}\right)+4 \sin ^2\left(\frac{\pi k}{15}+\frac{\pi }{180}\right) & 12 \cos \left(\frac{\pi k}{15}+\frac{\pi }{180}\right) \sin \left(\frac{\pi k}{15}+\frac{\pi }{180}\right) & 0 \\
12 \cos \left(\frac{\pi k}{15}+\frac{\pi }{180}\right) \sin \left(\frac{\pi k}{15}+\frac{\pi }{180}\right) & 4 \cos ^2\left(\frac{\pi k}{15}+\frac{\pi }{180}\right)+16 \sin ^2\left(\frac{\pi k}{15}+\frac{\pi }{180}\right) & 0 \\
0 & 0 & 9 \\
\end{array}
\right)$$
So the inequality representing the region is
expr = Or @@ Table[{x, y, z}.Inverse@mat.{x, y, z} <= 1 // Evaluate, {k, 30}];
Since expr is such a large symbolic expression, we compile it to speed up plotting:
cf = Compile[{x, y, z}, Evaluate@expr,
RuntimeOptions -> {"EvaluateSymbolically" -> False}];
(gra = RegionPlot3D[cf[x, y, z], {x, -5, 5}, {y, -5, 5}, {z, -5, 5}, Mesh -> None,
PlotPoints -> 100]) // AbsoluteTiming
Finally, discretize gra and export (directly exporting gra is OK, but it takes 89.2224 seconds to finish):
disgra = gra // DiscretizeGraphics; // AbsoluteTiming
(* {3.33076, Null} *)
Export["test.obj", disgra] // AbsoluteTiming
(* {1.02844, "test.obj"} *)
The biggest advantage of this approach is that the quality of the generated mesh seems to be good:
Show[disgra, PlotRange -> {-5, 0}]
FindMeshDefects@disgra
So the generated .obj file should be suitable for 3D printing.
Indeed, it's suitable for 3D printing: |
Annals of Functional Analysis (Ann. Funct. Anal.), Volume 4, Number 1 (2013), 25-39.
Eigenvalue problem for a class of nonlinear fractional differential equations
Abstract
In this paper, we study eigenvalue problem for a class of nonlinear fractional differential equations $$D^\alpha_{0^+}u(t)=\lambda f(u(t)),\quad 0 \lt t \lt 1,$$ $$u(0)=u(1)=u'(0)=u'(1)=0,$$ where $3 \lt \alpha\leq4$ is a real number, $D^\alpha_{0^+}$ is the Riemann-Liouville fractional derivative, $\lambda$ is a positive parameter and $f:(0,+\infty)\to(0,+\infty)$ is continuous. By the properties of the Green function and Guo-Krasnosel'skii fixed point theorem on cones, the eigenvalue intervals of the nonlinear fractional differential equation boundary value problem are considered, some sufficient conditions for the nonexistence and existence of at least one or two positive solutions for the boundary value problem are established. As an application, some examples are presented to illustrate the main results.
Article information
Source: Ann. Funct. Anal., Volume 4, Number 1 (2013), 25-39.
Dates: First available in Project Euclid: 12 May 2014
Permanent link to this document: https://projecteuclid.org/euclid.afa/1399899834
Digital Object Identifier: doi:10.15352/afa/1399899834
Mathematical Reviews number (MathSciNet): MR3004208
Citation
Sun, Shurong; Zhao, Yige; Han, Zhenla; Liu, Jian. Eigenvalue problem for a class of nonlinear fractional differential equations. Ann. Funct. Anal. 4 (2013), no. 1, 25--39. doi:10.15352/afa/1399899834. https://projecteuclid.org/euclid.afa/1399899834 |
The analyticity and exponential decay of a Stokes-wave coupling system with viscoelastic damping in the variational framework
Department of Mathematics and Economics, Virginia State University, Petersburg, VA 23806, USA
In this paper, we study a fluid-structure interaction model of a Stokes-wave equation coupling system with Kelvin-Voigt type damping. We show that this damped coupling system generates an analytic semigroup, and thus the semigroup solution, which also satisfies the variational framework of weak solutions, decays to zero at an exponential rate.
Keywords: fluid-structure interaction, Stokes equation, wave equation, Kelvin-Voigt damping, analyticity, uniform stabilization.
Mathematics Subject Classification: Primary: 35M10, 35B35; Secondary: 35A01.
Citation: Jing Zhang. The analyticity and exponential decay of a Stokes-wave coupling system with viscoelastic damping in the variational framework. Evolution Equations & Control Theory, 2017, 6 (1) : 135-154. doi: 10.3934/eect.2017008
Note: This is (unfortunately) not a simple “fix your matching braces” issue, please read the whole question.
I’m working on a command to fix the spacing of
\vec — the spacing is too small by default, leading to collisions with the arrow and following glyphs in cases like
(\vec p). However, simply redefining the command, as in
\makeatletter\let\oldvec\vec\def\vec#1{\oldvec{#1}\@ifnextchar{^}{\kern 1pt}{}}\makeatother
doesn’t work in cases where the vector has both a superscript
and a subscript. Therefore, a command that recursively deals with oncoming super- and subscripts is required.
Currently, I’m using the following solution (optimized for Stix/Times, so may not look great with Computer Modern). However, a subscript containing a single command that takes an argument will fail unless the atom is encased in braces (e.g.
\vec a_\textup{max} doesn’t work,
\vec a_{\textup{max}} and
\vec a_\pi work).
MWE:
\documentclass{article}
\makeatletter
\let\oldvec\vec
\def\vec#1{\oldvec{#1}\vec@}
\def\vec@{%
  \@ifnextchar{_}{\vec@sub}{%
    \@ifnextchar{^}{\vec@sup}{{\kern 0.5pt}}}%
}%
\def\vec@sub#1#2{_{\kern -0.75pt #2}\vec@}
\def\vec@sup#1#2{^{\kern 2pt #2}\vec@}
\makeatother
\begin{document}
% works
\[ \vec s_{\textrm{max}} \]
% doesn't work
\[ \vec s_\textrm{max} \]
\end{document}
Error code:
! Argument of \textrm has an extra }.
<inserted text>
                \par
l.16 \[ \vec s_\textrm {max} \]
I think it has to do with the
\vec@sub and
\vec@sup commands ending a group before the argument is given (so that the parser sees
{\vec{a}_\textrm}{a} or something and panics) but I’m not entirely sure how to figure that out.
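Tracing the expansion by hand seems to bear that out (my reconstruction; worth confirming with \tracingmacros=1):

```latex
% \vec s_\textrm{max}
%   \vec grabs `s'    ->  \oldvec{s}\vec@ _\textrm{max}
%   \vec@ sees `_'    ->  \vec@sub _\textrm{max}
%   \vec@sub#1#2:  #1 = `_',  #2 = `\textrm'
%     (an undelimited macro argument is ONE token or ONE braced group,
%      so `{max}' is left behind)
%   expansion:  _{\kern -0.75pt \textrm}\vec@ {max}
%   -> inside the subscript group, \textrm's next token is the closing `}',
%      hence ``! Argument of \textrm has an extra }.''
% With \vec s_{\textrm{max}} the whole braced group becomes #2, so it works.
```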
Full disclosure: This code is “adapted” (read: nearly entirely copied) from this answer by @siracusa, and it definitely involves TeX code a little above my normal skill level. I have read what I could find on
\@ifnextchar and so on, but I’m not 100% certain on all the elements playing out here.
Thanks for your help with this oddball problem! |
Here is a problem from the NMTC 2015 Bhaskara Contest:
The number of values of $x$ which satisfy the equation $5^2\cdot\sqrt[x]{8^{x-1}}=500$ is:
Here is the link. (Question 6).
I solved it like this:
$5^2\cdot\sqrt[x]{8^{x-1}}=500$
$\Rightarrow\ 5^2\cdot(8^{x-1})^{\frac{1}{x}}=2^2\times5^3$
$\Rightarrow\ 5^2\cdot 8^{\frac{x-1}{x}}=(2^3)^{\frac{2}{3}}\times5^3$
$\Rightarrow\ 8^{\frac{x-1}{x}}=8^{\frac{2}{3}}\times5$
But here, I am stuck. Any idea?
Note by Aaryan Maheshwari 2 years, 3 months ago
Actually, this is my father's account, and now I use it. I am in 7th standard. So please can you explain what 'log' is or explain without log?😊
Log in to reply
Check out the Brilliant wiki to learn logarithms; it is pretty good.
Sir, you can continue from your approach by dividing both sides by $8^{\frac{2}{3}}$ (in your final equation) and then applying log to the base $10$ on both sides (a one-step log-rules simplification is required). This equation is obviously a linear equation, thus having only one solution.
There is no solution to this equation. If you have one, please give that solution.
It has no integral solution, but it does have a real one.
Yep, you are right, buddy! It has no integral solutions but one real solution. So the answer is 1, as the question does not specify whether the values should be integers or real numbers.
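Picking up from the final equation above (my continuation, assuming $x$ may be real): since $8^{2/3}\cdot 5 = 20$, the equation reduces to $8^{(x-1)/x}=20$; taking logs makes $(x-1)/x$ a fixed constant, so the equation is linear in $x$ and has exactly one solution:

```python
import math

# 5^2 * 8**((x-1)/x) = 500  =>  8**((x-1)/x) = 20
# Taking logs: (x-1)/x = log(20)/log(8), which is linear in x -> one solution.
r = math.log(20) / math.log(8)
x = 1 / (1 - r)                            # from (x-1)/x = r  =>  x(1-r) = 1
print(round(x, 4))                         # -2.2694 (real, not an integer)
print(round(25 * 8 ** ((x - 1) / x), 6))   # 500.0
```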
"Is this really a proof?" is the exact question e-mailed to me today from an undergraduate mathematics student whom I know as a highly competent student. The one sentence question was accompanied with the following demo: I am looking for a down-to-earth, non-authoritative answer who one may gi...
the 2005 AMS article/survey on experimental mathematics[1] by Bailey/Borwein mentions many remarkable successes in the field including new formulas for $\pi$ that were discovered via the PSLQ algorithm as well as many other examples. however, it appears to glaringly leave out any description of t...
I'm reading the paper "the classification of algebras by dominant dimension" by Bruno J.Mueller, the link is here http://cms.math.ca/10.4153/CJM-1968-037-9. In the proof of lemma 3 on page 402, there is a place I can't understand. Who can tell me what $E_R \oplus * \cong \oplus X_R$ and $_AHom_...
I've been really trying to prove Ramanujan Partition theory, and different sources give me different explanations. Can someone please explain how Ramanujan (and Euler) found out the following equation for the number of partitions for a given integer? Any help is appreciated thank you so much! $...
I was wondering what role non-rigorous, heuristic type arguments play in rigorous math. Are there examples of rigorous, formal proofs in which a non-rigorous reasoning still plays a central part? Here is an example of what I am thinking of. You want to prove that some formula $f(n)$ holds, and y...
Perhaps the "proofs" of ABC conjecture or newly released weak version of twin prime conjecture or alike readily come to your mind. These are not the proofs I am looking for. Indeed my question was inspired by some other posts seeking for a hint to understand a certain more or less well-establised...
I do not know exactly how to characterize the class of proofs that interests me, so let me give some examples and say why I would be interested in more. Perhaps what the examples have in common is that a powerful and unexpected technique is introduced that comes to seem very natural once you are ...
Some conjectures are disproved by a single counter-example and garner little or no further interest or study, such as (to my knowledge) Euler's conjecture in number theory that at least $n$ $n^{th}$ powers are required to sum to an $n^{th}$ power, for $n>2$ (disproved by counter-example by L. J. ...
There is a tag called proofs. This tag has empty tag-info. Without any usage guidance it is quite likely to be used incorrectly. The fact that there are many deleted questions having this tag can be considered a supporting evidence of this fact. (According to SEDE there are 26 such questions - ...
I would like to know how you would rigorously introduce the trigonometric functions ($\sin(x)$ and relatives) to first year calculus students. Suppose they have a reasonable definition of $\mathbb{R}$ (as Cauchy closure of the rationals, or as Dedekind cuts, or whatever), but otherwise require as...
After having a solid year long undergraduate course in abstract algebra, I'm interested in learning algebra at a more advanced level, especially in the context of category theory. I've done some research, and from what I've read, it seems that using Lang as a main text and Hungerford as a supple...
Dear MO-community, I am not sure how mature my view on this is and I might say some things that are controversial. I welcome contradicting views. In any case, I find it important to clarify this in my head and hope that this community can help me doing that. So after this longish introduction, h...
Usually, during lectures Turing Machines are firstly introduced from an informal point of view (for example, in this way: http://en.wikipedia.org/wiki/Turing_machine#Informal_description) and then their definition is formalized (for example, in this way: http://en.wikipedia.org/wiki/Turing_machin...
I'm getting a couple of different answers from different sources, so I'd like to verify something.
I misplaced my original notes from my prof, but working from memory I have the following:
\begin{align} \int\csc(x)\ dx&=\int\csc(x)\left(\frac{\csc(x)-\cot(x)}{\csc(x)-\cot(x)}\right)\ dx\\ &=\int\frac{\csc^{2}(x)-\csc(x)\cot(x)}{\csc(x)-\cot(x)}\ dx\\ &=\int\frac{1}{u}\ du\\ &=\ln|u|+C\\ &=\ln|\csc(x)-\cot(x)|+C \end{align}
This looks proper when I trace it, but wolfram alpha is saying that the answer should be
$$-\ln|\csc(x)+\cot(x)|+C$$
Sadly, it doesn't provide a step-by-step. It just says that's the answer.
So which is it? Or are they both equivalent? I've never been great with the laws of logarithms. |
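A quick numeric sanity check (my addition, not part of the original post): since $(\csc x-\cot x)(\csc x+\cot x)=\csc^2 x-\cot^2 x=1$, the two logarithms are negatives of each other, so both antiderivatives are identical:

```python
import math

# (csc x - cot x)(csc x + cot x) = csc^2 x - cot^2 x = 1, so
# ln|csc x - cot x| = -ln|csc x + cot x| at every x in the domain.
for x in (0.3, 1.0, 2.5):
    csc, cot = 1 / math.sin(x), math.cos(x) / math.sin(x)
    assert abs(math.log(abs(csc - cot)) + math.log(abs(csc + cot))) < 1e-12
print("ln|csc-cot| == -ln|csc+cot|")
```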
The sources known to me (Knutson's
Algebraic Spaces and Pascual-Gainza's Ampleness criteria for algebraic spaces) define a line bundle $L$ on an algebraic space $X$ (over a base scheme $S$) to be ample if a tensor power of $L$ is very ample, i.e. the pullback $i^*\mathcal{O}(1)$ for some factorization $X \xrightarrow{i} \mathbb{P}^n_S \to S$. The existence of an ample line bundle then obviously shows that $X$ is a scheme.
Alternatively, one could define a line bundle $L$ on $X$ to be ample as in the scheme case: We call $L$ ample if for every quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$ of finite type, there exists an integer $n_0$ such that $\mathcal{F}\otimes L^{\otimes n}$ is generated by global sections for every $n\geq n_0$. As usual one can show in this case that the non-vanishing loci of global sections of tensor powers of $L$ form a basis of topology for the underlying space $|X|$ of $X$, at least if $X$ is noetherian.
My question is now the following:
When does the existence of an ample line bundle in my sense imply the existence of a very ample line bundle? When can one deduce from the existence of an ample line bundle that the algebraic space is schematic?
At least the latter question seems to have a positive answer in the case that $X$ is noetherian and normal (and quasi-separated and of finite type over $S$). It is known that in this case $X$ is the coarse moduli space of $Y/G$ for a scheme $Y$ and a finite group $G$ (see Is every algebraic space the quotient of a scheme by a finite group? ). The morphism $f: Y \to X$ is indeed finite. Thus, $f^*L$ is ample. Thus, $Y$ is quasi-projective. Hence, $X$ is a scheme. |
Data on the mean multiplicity of strange hadrons produced in minimum bias proton--proton and central nucleus--nucleus collisions at momenta between 2.8 and 400 GeV/c per nucleon have been compiled. The multiplicities for nucleon--nucleon interactions were constructed. The ratios of strange particle multiplicity to participant nucleon as well as to pion multiplicity are larger for central nucleus--nucleus collisions than for nucleon--nucleon interactions at all studied energies. The data at AGS energies suggest that the latter ratio saturates with increasing masses of the colliding nuclei. The strangeness to pion multiplicity ratio observed in nucleon--nucleon interactions increases with collision energy in the whole energy range studied. A qualitatively different behaviour is observed for central nucleus--nucleus collisions: the ratio rapidly increases when going from Dubna to AGS energies and changes little between AGS and SPS energies. This change in the behaviour can be related to the increase in the entropy production observed in central nucleus-nucleus collisions at the same energy range. The results are interpreted within a statistical approach. They are consistent with the hypothesis that the Quark Gluon Plasma is created at SPS energies, the critical collision energy being between AGS and SPS energies.
Elastic and inelastic 19.8 GeV/c proton-proton collisions in nuclear emulsion are examined using an external proton beam of the CERN Proton Synchrotron. Multiple scattering, blob density, range and angle measurements give the momentum spectra and angular distributions of secondary protons and pions. The partial cross-sections corresponding to inelastic interactions having two, four, six, eight, ten and twelve charged secondaries are found to be, respectively, (16.3±8.4) mb, (11.5 ± 6.0) mb, (4.3 ± 2.5) mb, (1.9 ± 1.3) mb, (0.5 ± 0.5) mb and (0.5±0.5)mb. The elastic cross-section is estimated to be (4.3±2.5) mb. The mean charged meson multiplicity for inelastic events is 3.7±0.5 and the average degree of inelasticity is 0.35±0.09. Strong forward and backward peaking is observed in the center-of-mass system for both secondary charged pions and protons. Distributions of energy, momentum and transverse momentum for identified charged secondaries are presented and compared with the results of work at other energies and with the results of a statistical theory of proton-proton collisions.
Double differential K+ cross sections have been measured in p+C collisions at 1.2, 1.5 and 2.5 GeV beam energy and in p+Pb collisions at 1.2 and 1.5 GeV. The K+ spectrum taken at 2.5 GeV can be reproduced quantitatively by a model calculation which takes into account first-chance proton-nucleon collisions and the internal momentum and energy distribution of nucleons according to the spectral function. At 1.2 and 1.5 GeV beam energy the K+ data significantly exceed the model predictions for first-chance collisions. When secondary processes are taken into account, the results of the calculations are in much better agreement with the data.
The differential and total cross sections for kaon pair production in the pp->ppK+K- reaction have been measured at three beam energies of 2.65, 2.70, and 2.83 GeV using the ANKE magnetic spectrometer at the COSY-Juelich accelerator. These near-threshold data are separated into pairs arising from the decay of the phi-meson and the remainder. For the non-phi selection, the ratio of the differential cross sections in terms of the K-p and K+p invariant masses is strongly peaked towards low masses. This effect can be described quantitatively by using a simple ansatz for the K-p final state interaction, where it is seen that the data are sensitive to the magnitude of an effective K-p scattering length. When allowance is made for a small number of phi events where the K- rescatters from the proton, the phi region is equally well described at all three energies. A very similar phenomenon is discovered in the ratio of the cross sections as functions of the K-pp and K+pp invariant masses and the identical final state interaction model is also very successful here. The world data on the energy dependence of the non-phi total cross section is also reproduced, except possibly for the results closest to threshold.
The production of eta mesons has been measured in the proton-proton interaction close to the reaction threshold using the COSY-11 internal facility at the cooler synchrotron COSY. Total cross sections were determined for eight different excess energies in the range from 0.5 MeV to 5.4 MeV. The energy dependence of the total cross section is well described by the available phase-space volume weighted by FSI factors for the proton-proton and proton-eta pairs.
Sigma+ hyperon production was measured at the COSY-11 spectrometer via the p p --> n K+ Sigma+ reaction at excess energies of Q = 13 MeV and Q = 60 MeV. These measurements continue systematic hyperon production studies via the p p --> p K+ Lambda/Sigma0 reactions where a strong decrease of the cross section ratio close-to-threshold was observed. In order to verify models developed for the description of the Lambda and Sigma0 production we have performed the measurement on the Sigma+ hyperon and found unexpectedly that the total cross section is by more than one order of magnitude larger than predicted by all anticipated models. After the reconstruction of the kaon and neutron four momenta, the Sigma+ is identified via the missing mass technique. Details of the method and the measurement will be given and discussed in view of theoretical models.
K+ meson production in pA (A = C, Cu, Au) collisions has been studied using the ANKE spectrometer at an internal target position of the COSY-Juelich accelerator. The complete momentum spectrum of kaons emitted at forward angles, theta < 12 degrees, has been measured for a beam energy of T(p)=1.0 GeV, far below the free NN threshold of 1.58 GeV. The spectrum does not follow a thermal distribution at low kaon momenta and the larger momenta reflect a high degree of collectivity in the target nucleus.
We report a new measurement of the pseudorapidity (eta) and transverse-energy (Et) dependence of the inclusive jet production cross section in p pbar collisions at sqrt(s) = 1.8 TeV using 95 pb**-1 of data collected with the DZero detector at the Fermilab Tevatron. The differential cross section d^2sigma/dEt deta is presented up to |eta| = 3, significantly extending previous measurements. The results are in good overall agreement with next-to-leading order predictions from QCD and indicate a preference for certain parton distribution functions.
We present the first observation of exclusive $e^+e^-$ production in hadron-hadron collisions, using $p\bar{p}$ collision data at \mbox{$\sqrt{s}=1.96$ TeV} taken by the Run II Collider Detector at Fermilab, and corresponding to an integrated luminosity of \mbox{532 pb$^{-1}$}. We require the absence of any particle signatures in the detector except for an electron and a positron candidate, each with transverse energy {$E_T>5$ GeV} and pseudorapidity {$|\eta|<2$}. With these criteria, 16 events are observed compared to a background expectation of {$1.9\pm0.3$} events. These events are consistent in cross section and properties with the QED process \mbox{$p\bar{p} \to p + e^+e^- + \bar{p}$} through two-photon exchange. The measured cross section is \mbox{$1.6^{+0.5}_{-0.3}\mathrm{(stat)}\pm0.3\mathrm{(syst)}$ pb}. This agrees with the theoretical prediction of {$1.71 \pm 0.01$ pb}. |
Introduction
Encoding numerical inputs for neural networks is difficult because the representation space is very large and there is no easy way to embed numbers into a smaller space without losing information. Some of the ways this is currently handled are:

- Scale inputs from minimum and maximum values to [-1, 1]
- One hot for each number
- One hot for different bins (e.g. [0-0], [1-2], [3-7], [8-19], [20, infty])

For small integer ranges, these methods can work well, but they don't scale to wider ranges. With the input-scaling approach, precision is lost, making it difficult to distinguish between two numbers close in value. With the binning methods, information about the mathematical properties of the numbers, such as adjacency and scaling, is lost.
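The three baselines above, sketched in Python (the bin edges follow the example bins listed; the ranges and sizes are my own illustrative choices):

```python
import numpy as np

def scale(x, lo=0, hi=100):
    """Min/max scaling to [-1, 1]."""
    return 2 * (x - lo) / (hi - lo) - 1

def one_hot(x, n=100):
    """One vector slot per number -- only works for a fixed, small range."""
    v = np.zeros(n); v[x] = 1.0
    return v

BINS = [(0, 0), (1, 2), (3, 7), (8, 19), (20, float("inf"))]
def bin_one_hot(x):
    """One slot per bin; adjacency within a bin is lost."""
    v = np.zeros(len(BINS))
    v[next(i for i, (a, b) in enumerate(BINS) if a <= x <= b)] = 1.0
    return v

print(scale(50), int(one_hot(3).argmax()), bin_one_hot(5))
```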
The desiderata for our embeddings of numbers as vectors are as follows:

- able to handle numbers of arbitrary length
- captures mathematical relationships between numbers (addition, multiplication, etc.)
- able to model sequences of numbers

In this blog post, we will explore a novel approach for embedding numbers as vectors that satisfies these desiderata.
Approach
My approach to this problem is inspired by word2vec, but unlike words, which follow the distributional hypothesis, numbers follow the rules of arithmetic. Instead of finding a “corpus” of numbers to train on, we can generate random arithmetic sequences and have our network “learn” the rules of arithmetic from the generated sequences and, as a side effect, be able to encode both numbers and sequences as vectors.
Problem Statement
Given a sequence of n integers \(x_1, x_2 \ldots x_n\), predict the next number in the sequence \(x_{n+1}\).
Architecture
The architecture of the system consists of three parts: the encoder, the decoder and the nested RNN.
The encoder is an RNN that takes a number represented as a sequence of digits and encodes it into a vector that represents an embedded number.
The nested RNN takes the embedded numbers and previous state to output another embedded vector that represents the next number.
The decoder then takes the embedded number and unravels it through the decoder RNN to output the digits of the next predicted number.
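The encoder half of this wiring can be sketched in plain NumPy; the sizes, weight initialization, and the numeric value of the end-of-number sentinel are my own illustrative choices, not the post's actual TensorFlow model:

```python
import numpy as np

EOS = 10                                   # end-of-number sentinel "digit"
H = 16                                     # hidden/embedding size (arbitrary)
rng = np.random.default_rng(0)
We = rng.normal(scale=0.1, size=(H, 11))   # input weights over 11 digit classes
Ue = rng.normal(scale=0.1, size=(H, H))    # recurrent weights

def to_digits(n):
    """123 -> [1, 2, 3, EOS]"""
    return [int(d) for d in str(n)] + [EOS]

def encode(n):
    """Encoder RNN over a number's digits; the final state is the embedding u_i."""
    s = np.zeros(H)
    for d in to_digits(n):
        x = np.zeros(11)
        x[d] = 1.0                         # one-hot digit embedding E[X_{i,j}]
        s = np.tanh(We @ x + Ue @ s)       # next state from digit + previous state
    return s

u = encode(123)                            # embedded number
print(u.shape)                             # (16,)
```

The nested RNN would consume the vectors `u` number-by-number, and the decoder would run the same recurrence in reverse, emitting one digit distribution per step.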
Formally:
Let \(X\) represent a sequence of natural numbers where \(X_{i,j}\) represents the j-th digit of the i-th number of the sequence. We also append an <eos> “digit” to the end of each number to signal the end of the number. For the sequence X = 123, 456, 789, we have \(X_{1,2} = 2, X_{3,3} = 9, X_{3,4} = <eos>\).
Let \(l_i\) be the number of digits in the i-th number of the sequence (including the <eos> digit). Let \(E\) be an embedding matrix for each digit.
Let \(\vec{u}_i\) be an embedding of the i-th number in a sequence. It is computed as the final state of the encoder. Let \(\vec{v}_i\) be an embedding of the predicted (i+1)-th number in a sequence. It is computed from the output of the nested RNN and used as the initial state of the decoder.
Let \(R^e, R^d, R^n\) be the functions that gives the next state for the encoder, decoder and nested RNN respectively. Let \(O^d, O^n\) be the functions that gives the output of the current state for the decoder and nested RNN respectively.
Let \(\vec{s}^e_{i,j}\) be the state vector for \(R^e\) for the j-th timestep of the i-th number of the sequence. Let \(\vec{s}^d_{i,j}\) be the state vector for \(R^d\) for the j-th timestep of the i-th number of the sequence. Let \(\vec{s}^n_i\) represent the state vector of \(R^n\) for the i-th timestep.
Let \(z_{i,j}\) be the output of \(R^d\) at the j-th timestep of the i-th number of the sequence.
Let \(\hat{y}_{i,j}\) represent the distribution of digits for the prediction of the j-th digit of the (i+1)-th number of the sequence.\(\displaystyle{\begin{eqnarray}\vec{s}^e_{i,j} &=& R^e(E[X_{i,j}], \vec{s}^e_{i, j-1})\\\vec{u}_i &=& \vec{s}^e_{i,l_i}\\ \vec{s}^n_i &=& R^n(\vec{u}_i, \vec{s}^n_{i-1})\\\vec{v}_i &=& O^n(\vec{s}^n_i)\\ \vec{z}_{i,j} &=& O^d(\vec{s}^d_{i,j})\\ \vec{s}^d_{i, j} &=& R^d(\vec{z}_{i,j-1}, \vec{s}^d_{i, j-1})\\ \hat{y}_{i,j} &=& \text{softmax}(\text{MLP}(\vec{z}_{i,j}))\\ p(X_{i+1,j} = k \mid X_1, \ldots, X_i, X_{i+1, 1}, \ldots, X_{i+1, j-1}) &=& \hat{y}_{i,j}[k]\end{eqnarray}}\)
We use a cross-entropy loss function where \(y_{i,j}[t]\) represents the correct digit class for \(y_{i,j}\):\(\displaystyle{\begin{eqnarray}L(y, \hat{y}) &=& \sum_i \sum_j -\log \hat{y}_{i,j}[t]\end{eqnarray} }\)
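A sketch of this loss for a single number's digit predictions (toy distributions; the numbers are my own, not trained outputs):

```python
import numpy as np

def xent(y_hat, targets):
    """Cross-entropy: sum over positions of -log(prob of the correct digit).

    y_hat: (T, 11) array of softmax rows; targets: T correct digit indices."""
    return float(-np.sum(np.log(y_hat[np.arange(len(targets)), targets])))

# Toy predictions: probability 0.5 on the right digit, rest spread uniformly.
y_hat = np.full((3, 11), 0.05)
y_hat[np.arange(3), [4, 5, 10]] = 0.5        # digit 10 stands in for <eos>
print(round(xent(y_hat, [4, 5, 10]), 4))     # 2.0794  (= -3*log(0.5))
```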
Since I also find it difficult to intuitively understand what these sets of equations mean, here is a clearer diagram of the nested network:
Training
The whole network is trained end-to-end by generating random mathematical sequences and predicting the next number in the sequence. The sequences generated contains addition, subtraction, multiplication, division and exponents. The sequences generated also includes repeating series of numbers.
After 10,000 epochs of 500 sequences each, the network converges and is reasonably able to predict the next number in a sequence. On my MacBook Pro with an Nvidia GT750M, the network, implemented in TensorFlow, took about an hour to train.
Results
Taking a look at some sample sequences, we can see that the network is reasonably able to predict the next number.
Seq [43424, 43425, 43426, 43427]
Predicted [43423, 43426, 43427, 43428]
Seq [3, 4, 3, 4, 3, 4, 3, 4, 3, 4]
Predicted [9, 5, 4, 3, 4, 3, 4, 3, 4, 3]
Seq [2, 4, 8, 16, 32, 64, 128]
Predicted [4, 8, 16, 32, 64, 128, 256]
Seq [10, 9, 8, 7, 6, 5, 4, 3]
Predicted [20, 10, 10, 60, 4, 4, 3, 2]
With the trained model, we can compute embeddings of individual numbers and visualize the embeddings with the t-sne algorithm.
We can see an interesting pattern when we plot the first 100 numbers (color coded by last digit). Another interesting pattern: within clusters, the numbers also rotate clockwise or counterclockwise.
We can also trace the path of the embeddings sequentially, we can see that there is some structure to the positioning of the numbers.
If we look at the visualizations of the embeddings for numbers 1-1000 we can see that the clusters still exist for the last digit (each color corresponds to numbers with the same last digit)
We can also see the same structural lines for the sequential path for numbers 1 to 1000:
The inner linear pattern is formed from the number 1-99 and the outer linear pattern is formed from the numbers 100-1000.
We can also look at the embeddings of each sequence by taking the vector \(\vec{s}^n_k\) after feeding in k=8 numbers of a sequence into the model. We can visualize the sequence embeddings with t-sne using 300 sequences:
From the visualization, we can see that similar sequences are clustered together. For example, repeating patterns, quadratic sequences, linear sequences and large number sequences are grouped together. We can see that the network is able to extract some high level structure for different types of sequences.
Using this, we can see that if we encounter a sequence we can’t determine a pattern for, we can find the nearest sequence embedding to approximate the pattern type.
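That lookup can be sketched as a cosine-similarity nearest-neighbor search (toy 2-D vectors standing in for real sequence embeddings):

```python
import numpy as np

def nearest(query, bank):
    """Index of the stored embedding closest to `query` by cosine similarity."""
    sims = (bank @ query) / (np.linalg.norm(bank, axis=1) * np.linalg.norm(query))
    return int(np.argmax(sims))

# Toy "sequence embedding" bank: three pattern types.
bank = np.array([[1.0, 0.0],    # e.g. linear sequences
                 [0.0, 1.0],    # e.g. repeating patterns
                 [0.7, 0.7]])   # e.g. quadratic sequences
print(nearest(np.array([0.9, 0.1]), bank))   # 0
```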
Code: Github
The model is written in Python using TensorFlow 1.1. The code is not very well written because I was forced to use an outdated version of TF with underdeveloped RNN support, for OS X GPU compatibility reasons.
The code is a proof of concept and is the result of stitching together outdated tutorials.
Further improvements:
- bidirectional RNN
- stack more layers
- attention mechanism
- beam search
- negative numbers
- teacher forcing
(All monoids are written additively in this question, even the non-commutative ones.)
Given a monoid $X$, write $\mathrm{Sub}(X)$ for the lattice of submonoids of $X$, and write $\mathrm{Cong}(X)$ for the lattice of monoid congruences on $X$. There is a function $\ker_X : \mathrm{Cong}(X) \rightarrow \mathrm{Sub}(X)$ defined as follows; given a congruence relation $R$ on $X$, we define
$$\ker_X(R) = \{x \in X \mid (x,0) \in R\}$$
It is well-known that if $X$ is an Abelian group, then $\ker_X : \mathrm{Cong}(X) \rightarrow \mathrm{Sub}(X)$ is a bijection. An obvious question is whether the converse holds.
The answer is no. For example, consider $X = \{0,1\}$ and define $+$ on $X$ to mean "OR". Explicitly, $x+y = 1$ iff $x=1$ or $y=1$. Then $X$ is not a group, since $1$ has no additive inverse. However, observe that $X$ has two possible quotients, the degenerate quotient $\{\{0,1\}\}$ and the trivial quotient $\{\{0\},\{1\}\}$. And it has two submonoids, namely $\{0,1\}$ and $\{0\}$ respectively. Furthermore, $\ker_X$ is given as follows.
$$\{\{0,1\}\} \mapsto \{0,1\}$$
$$\{\{0\},\{1\}\} \mapsto \{0\}$$
Hence $\ker_X$ is a bijection, despite that $X$ is not a group.
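A brute-force check of this example (my addition; congruences are represented as partitions of $X$ on which $+$ is well-defined):

```python
# X = {0, 1} with x + y = "x OR y", the example above.
X = (0, 1)
add = lambda a, b: a | b

# Submonoids: subsets containing the identity 0 and closed under +.
subs = [S for S in ({0}, {1}, {0, 1})
        if 0 in S and all(add(a, b) in S for a in S for b in S)]

# Congruences: partitions of X on which + descends to the classes.
def cls(P, x):
    return next(B for B in P if x in B)

parts = ([{0, 1}], [{0}, {1}])
congs = [P for P in parts
         if all(cls(P, add(a, b)) == cls(P, add(c, d))
                for a in X for b in X for c in cls(P, a) for d in cls(P, b))]

kernels = sorted(sorted(cls(P, 0)) for P in congs)
print(sorted(sorted(S) for S in subs) == kernels)   # True: ker is a bijection
```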
Question. Is there some kind of a partial converse to the statement of interest? Namely, that for all commutative monoids $X$, if $\ker_X : \mathrm{Cong}(X) \rightarrow \mathrm{Sub}(X)$ is a bijection, then $X$ is a group.
The first observation of top quark production in proton-nucleus collisions is reported using proton-lead data collected by the CMS experiment at the CERN LHC at a nucleon-nucleon center-of-mass energy of $\sqrt{s_\mathrm{NN}} =$ 8.16 TeV. The measurement is performed using events with exactly one isolated electron or muon and at least four jets. The data sample corresponds to an integrated luminosity of 174 nb$^{-1}$. The significance of the $\mathrm{t}\overline{\mathrm{t}}$ signal against the background-only hypothesis is above five standard deviations. The measured cross section is $\sigma_{\mathrm{t}\overline{\mathrm{t}}} =$ 45$\pm$8 nb, consistent with predictions from perturbative quantum chromodynamics.
Measurements of two- and multi-particle angular correlations in pp collisions at $\sqrt{s} =$ 5, 7, and 13 TeV are presented as a function of charged-particle multiplicity. The data, corresponding to integrated luminosities of 1.0 pb$^{-1}$ (5 TeV), 6.2 pb$^{-1}$ (7 TeV), and 0.7 pb$^{-1}$ (13 TeV), were collected using the CMS detector at the LHC. The second-order ($v_2$) and third-order ($v_3$) azimuthal anisotropy harmonics of unidentified charged particles, as well as $v_2$ of $K^0_S$ and $\Lambda/\bar{\Lambda}$ particles, are extracted from long-range two-particle correlations as functions of particle multiplicity and transverse momentum. For high-multiplicity pp events, a mass ordering is observed for the $v_2$ values of charged hadrons (mostly pions), $K^0_S$, and $\Lambda/\bar{\Lambda}$, with lighter particle species exhibiting a stronger azimuthal anisotropy signal below $p_T \approx$ 2 GeV/c. For 13 TeV data, the $v_2$ signals are also extracted from four- and six-particle correlations for the first time in pp collisions, with comparable magnitude to those from two-particle correlations. These observations are similar to those seen in pPb and PbPb collisions, and support the interpretation of a collective origin for the observed long-range correlations in high-multiplicity pp collisions.
Measurements are presented of the associated production of a W boson and a charm-quark jet (W + c) in pp collisions at a center-of-mass energy of 7 TeV. The analysis is conducted with a data sample corresponding to a total integrated luminosity of 5 inverse femtobarns, collected by the CMS detector at the LHC. W boson candidates are identified by their decay into a charged lepton (muon or electron) and a neutrino. The W + c measurements are performed for charm-quark jets in the kinematic region $p_T^{jet} \gt$ 25 GeV, $|\eta^{jet}| \lt$ 2.5, for two different thresholds for the transverse momentum of the lepton from the W-boson decay, and in the pseudorapidity range $|\eta^{\ell}| \lt$ 2.1. Hadronic and inclusive semileptonic decays of charm hadrons are used to measure the following total cross sections: $\sigma(pp \to W + c + X) \times B(W \to \ell \nu)$ = 107.7 +/- 3.3 (stat.) +/- 6.9 (syst.) pb ($p_T^{\ell} \gt$ 25 GeV) and $\sigma(pp \to W + c + X) \times B(W \to \ell \nu)$ = 84.1 +/- 2.0 (stat.) +/- 4.9 (syst.) pb ($p_T^{\ell} \gt$ 35 GeV), and the cross section ratios $\sigma(pp \to W^+ + \bar{c} + X)/\sigma(pp \to W^- + c + X)$ = 0.954 +/- 0.025 (stat.) +/- 0.004 (syst.) ($p_T^{\ell} \gt$ 25 GeV) and $\sigma(pp \to W^+ + \bar{c} + X)/\sigma(pp \to W^- + c + X)$ = 0.938 +/- 0.019 (stat.) +/- 0.006 (syst.) ($p_T^{\ell} \gt$ 35 GeV). Cross sections and cross section ratios are also measured differentially with respect to the absolute value of the pseudorapidity of the lepton from the W-boson decay. These are the first measurements from the LHC directly sensitive to the strange quark and antiquark content of the proton. Results are compared with theoretical predictions and are consistent with the predictions based on global fits of parton distribution functions.
A search for narrow resonances in the dijet mass spectrum is performed using data corresponding to an integrated luminosity of 2.9 inverse pb collected by the CMS experiment at the LHC. Upper limits at the 95% confidence level (CL) are presented on the product of the resonance cross section, branching fraction into dijets, and acceptance, separately for decays into quark-quark, quark-gluon, or gluon-gluon pairs. The data exclude new particles predicted in the following models at the 95% CL: string resonances, with mass less than 2.50 TeV, excited quarks, with mass less than 1.58 TeV, and axigluons, colorons, and E_6 diquarks, in specific mass intervals. This extends previously published limits on these models.
The production of jets associated with bottom quarks is measured for the first time in PbPb collisions at a center-of-mass energy of 2.76 TeV per nucleon pair. Jet spectra are reported in the transverse momentum (pt) range of 80-250 GeV, and within pseudorapidity abs(eta) < 2. The nuclear modification factor (R[AA]) calculated from these spectra shows a strong suppression in the b-jet yield in PbPb collisions relative to the yield observed in pp collisions at the same energy. The suppression persists to the largest values of pt studied, and is centrality dependent. The R[AA] is about 0.4 in the most central events, similar to previous observations for inclusive jets. This implies that jet quenching does not have a strong dependence on parton mass and flavor in the jet pt range studied.
A search for neutral Higgs bosons in the minimal supersymmetric extension of the standard model (MSSM) decaying to tau-lepton pairs in pp collisions is performed, using events recorded by the CMS experiment at the LHC. The dataset corresponds to an integrated luminosity of 24.6 fb$^{−1}$, with 4.9 fb$^{−1}$ at 7 TeV and 19.7 fb$^{−1}$ at 8 TeV. To enhance the sensitivity to neutral MSSM Higgs bosons, the search includes the case where the Higgs boson is produced in association with a b-quark jet. No excess is observed in the tau-lepton-pair invariant mass spectrum. Exclusion limits are presented in the MSSM parameter space for different benchmark scenarios, m$_{h}^{max}$ , m$_{h}^{mod +}$ , m$_{h}^{mod −}$ , light-stop, light-stau, τ-phobic, and low-m$_{H}$. Upper limits on the cross section times branching fraction for gluon fusion and b-quark associated Higgs boson production are also given.
Measurements of the differential production cross sections in transverse momentum and rapidity for B0 mesons produced in pp collisions at sqrt(s) = 7 TeV are presented. The dataset used was collected by the CMS experiment at the LHC and corresponds to an integrated luminosity of 40 inverse picobarns. The production cross section is measured from B0 meson decays reconstructed in the exclusive final state J/Psi K-short, with the subsequent decays J/Psi to mu^+ mu^- and K-short to pi^+ pi^-. The total cross section for pt(B0) > 5 GeV and |y(B0)| < 2.2 is measured to be 33.2 +/- 2.5 +/- 3.5 microbarns, where the first uncertainty is statistical and the second is systematic.
The Upsilon production cross section in proton-proton collisions at sqrt(s) = 7 TeV is measured using a data sample collected with the CMS detector at the LHC, corresponding to an integrated luminosity of 3.1 +/- 0.3 inverse picobarns. Integrated over the rapidity range |y|<2, we find the product of the Upsilon(1S) production cross section and branching fraction to dimuons to be sigma(pp to Upsilon(1S) X) B(Upsilon(1S) to mu+ mu-) = 7.37 +/- 0.13^{+0.61}_{-0.42}\pm 0.81 nb, where the first uncertainty is statistical, the second is systematic, and the third is associated with the estimation of the integrated luminosity of the data sample. This cross section is obtained assuming unpolarized Upsilon(1S) production. If the Upsilon(1S) production polarization is fully transverse or fully longitudinal the cross section changes by about 20%. We also report the measurement of the Upsilon(1S), Upsilon(2S), and Upsilon(3S) differential cross sections as a function of transverse momentum and rapidity.
A search for Z bosons in the mu^+mu^- decay channel has been performed in PbPb collisions at a nucleon-nucleon centre-of-mass energy of 2.76 TeV with the CMS detector at the LHC, in a 7.2 inverse microbarn data sample. The number of opposite-sign muon pairs observed in the 60--120 GeV/c^2 invariant mass range is 39, corresponding to a yield per unit of rapidity (y) and per minimum bias event of (33.8 ± 5.5 (stat) ± 4.4 (syst)) × 10^{-8}, in the |y|<2.0 range. Rapidity, transverse momentum, and centrality dependencies are also measured. The results agree with next-to-leading order QCD calculations, scaled by the number of incoherent nucleon-nucleon collisions.
A measurement of the J/psi and psi(2S) production cross sections in pp collisions at sqrt(s)=7 TeV with the CMS experiment at the LHC is presented. The data sample corresponds to an integrated luminosity of 37 inverse picobarns. Using a fit to the invariant mass and decay length distributions, production cross sections have been measured separately for prompt and non-prompt charmonium states, as a function of the meson transverse momentum in several rapidity ranges. In addition, cross sections restricted to the acceptance of the CMS detector are given, which are not affected by the polarization of the charmonium states. The ratio of the differential production cross sections of the two states, where systematic uncertainties largely cancel, is also determined. The branching fraction of the inclusive B to psi(2S) X decay is extracted from the ratio of the non-prompt cross sections to be: BR(B to psi(2S) X) = (3.08 +/- 0.12 (stat.+syst.) +/- 0.13 (theor.) +/- 0.42 (BR[PDG])) × 10^-3
Isolated photon production is measured in proton-proton and lead-lead collisions at nucleon-nucleon centre-of-mass energies of 2.76 TeV in the pseudorapidity range |eta|<1.44 and transverse energies ET between 20 and 80 GeV with the CMS detector at the LHC. The measured ET spectra are found to be in good agreement with next-to-leading-order perturbative QCD predictions. The ratio of PbPb to pp isolated photon ET-differential yields, scaled by the number of incoherent nucleon-nucleon collisions, is consistent with unity for all PbPb reaction centralities.
The prompt D0 meson azimuthal anisotropy coefficients, v[2] and v[3], are measured at midrapidity (abs(y) < 1.0) in PbPb collisions at a center-of-mass energy sqrt(s[NN]) = 5.02 TeV per nucleon pair with data collected by the CMS experiment. The measurement is performed in the transverse momentum (pT) range of 1 to 40 GeV/c, for central and midcentral collisions. The v[2] coefficient is found to be positive throughout the pT range studied. The first measurement of the prompt D0 meson v[3] coefficient is performed, and values up to 0.07 are observed for pT around 4 GeV/c. Compared to measurements of charged particles, a similar pT dependence, but smaller magnitude for pT < 6 GeV/c, is found for prompt D0 meson v[2] and v[3] coefficients. The results are consistent with the presence of collective motion of charm quarks at low pT and a path length dependence of charm quark energy loss at high pT, thereby providing new constraints on the theoretical description of the interactions between charm quarks and the quark-gluon plasma.
The transverse momentum (pt) spectrum of prompt D0 mesons and their antiparticles has been measured via the hadronic decay channels D0 to K- pi+ and D0-bar to K+ pi- in pp and PbPb collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair with the CMS detector at the LHC. The measurement is performed in the D0 meson pt range of 2-100 GeV and in the rapidity range of abs(y)<1. The pp (PbPb) dataset used for this analysis corresponds to an integrated luminosity of 27.4 inverse picobarns (530 inverse microbarns). The measured D0 meson pt spectrum in pp collisions is well described by perturbative QCD calculations. The nuclear modification factor, comparing D0 meson yields in PbPb and pp collisions, was extracted for both minimum-bias and the 10% most central PbPb interactions. For central events, the D0 meson yield in the PbPb collisions is suppressed by a factor of 5-6 compared to the pp reference in the pt range of 6-10 GeV. For D0 mesons in the high-pt range of 60-100 GeV, a significantly smaller suppression is observed. The results are also compared to theoretical calculations.
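As a small numerical aside (an editorial illustration, not part of the measurement): the nuclear modification factor is the PbPb yield divided by the binary-collision-scaled pp reference, so the suppression "by a factor of 5-6" quoted above translates directly into $R_{AA}$ values.

```python
# R_AA = (PbPb yield) / (N_coll-scaled pp yield); a suppression
# factor f therefore corresponds to R_AA = 1 / f.
for factor in (5.0, 6.0):
    print(f"suppression x{factor:.0f} -> R_AA = {1.0 / factor:.2f}")
# suppression x5 -> R_AA = 0.20
# suppression x6 -> R_AA = 0.17
```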
A search for supersymmetry is presented based on proton-proton collision events containing identified hadronically decaying top quarks, no leptons, and an imbalance pTmiss in transverse momentum. The data were collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV, and correspond to an integrated luminosity of 35.9 fb−1. Search regions are defined in terms of the multiplicity of bottom quark jet and top quark candidates, the pTmiss, the scalar sum of jet transverse momenta, and the mT2 mass variable. No statistically significant excess of events is observed relative to the expectation from the standard model. Lower limits on the masses of supersymmetric particles are determined at 95% confidence level in the context of simplified models with top quark production. For a model with direct top squark pair production followed by the decay of each top squark to a top quark and a neutralino, top squark masses up to 1020 GeV and neutralino masses up to 430 GeV are excluded. For a model with pair production of gluinos followed by the decay of each gluino to a top quark-antiquark pair and a neutralino, gluino masses up to 2040 GeV and neutralino masses up to 1150 GeV are excluded. These limits extend previous results.
A measurement of the exclusive two-photon production of muon pairs in proton-proton collisions at sqrt(s)= 7 TeV, pp to p mu^+ mu^- p, is reported using data corresponding to an integrated luminosity of 40 inverse picobarns. For muon pairs with invariant mass greater than 11.5 GeV, transverse momentum pT(mu) > 4 GeV and pseudorapidity |eta(mu)| < 2.1, a fit to the dimuon pt(mu^+ mu^-) distribution results in a measured cross section of sigma(pp to p mu^+ mu^- p) = 3.38 [+0.58 -0.55] (stat.) +/- 0.16 (syst.) +/- 0.14 (lumi.) pb, consistent with the theoretical prediction evaluated with the event generator Lpair. The ratio to the predicted cross section is 0.83 [+0.14-0.13] (stat.) +/- 0.04 (syst.) +/- 0.03 (lumi.). The characteristic distributions of the muon pairs produced via photon-photon fusion, such as the muon acoplanarity, the muon pair invariant mass and transverse momentum agree with those from the theory.
Measurements of the differential cross sections for the production of exactly four jets in proton-proton collisions are presented as a function of the transverse momentum pt and pseudorapidity eta, together with the correlations in azimuthal angle and the pt balance among the jets. The data sample was collected in 2010 at a center-of-mass energy of 7 TeV with the CMS detector at the LHC, with an integrated luminosity of 36 inverse picobarns. The cross section for a final state with a pair of hard jets with pt > 50 GeV and another pair with pt > 20 GeV within abs(eta) < 4.7 is measured to be sigma = 330 +- 5 (stat.) +- 45 (syst.) nb. It is found that fixed-order matrix element calculations including parton showers describe the measured differential cross sections in some regions of phase space only, and that adding contributions from double parton scattering brings the Monte Carlo predictions closer to the data. |
Given a probability distribution $p$ across an alphabet, we define redundancy as:
Expected length of codewords − entropy of $p$ = $E(S) - h(p)$
Prove that for each $\epsilon$ with $0 \le \epsilon \lt 1$ there exists a $p$ such that the optimal encoding has redundancy $ \epsilon$.
Attempts
I have tried constructing a probability distribution like $p_0 = \epsilon,\ p_1 = 1 - \epsilon$ based on a previous answer, but I can't get it to work.
Any help would be much appreciated.
Edit:
The solution I think I have found is sketched below.
redundancy $= E(S) - h(p) = \sum p_i s_i + \sum p_i \log p_i$. We want to show that for a given $\epsilon$ we can find a $p$ that makes the redundancy equal $\epsilon$. So $\sum p_i s_i + \sum p_i \log p_i = \epsilon \implies \sum p_i s_i = -\sum p_i \log p_i + \epsilon$.
We know the optimal value for $s_i$ will be less than $-\log p_i + 1$, so we can write each $s_i$ as $s_i = -\log p_i + \alpha_i$.
Now, $\sum p_i s_i = -\sum p_i \log p_i + \epsilon \implies \sum p_i(-\log p_i + \alpha_i) = -\sum p_i \log p_i + \epsilon \implies \sum p_i \alpha_i = \epsilon$.
Intuitively I feel that you can always find a $p$ so that the above is true for $\epsilon$, because the $\alpha$ values are governed by how far away from a power of two your $m$ is, but I am not sure how to prove this last step.
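A numerical sketch of where this can go (assuming binary codewords; `p_with_redundancy` and `h2` are ad hoc names): already on a two-symbol alphabet the optimal code lengths are $(1,1)$, so $E(S) = 1$ and the redundancy is exactly $1 - h(\delta, 1-\delta)$, which sweeps through every value in $[0,1)$ as $\delta$ moves from $1/2$ toward $0$. Bisection finds the $\delta$ realizing any given $\epsilon$:

```python
import math

def h2(d):
    """Binary entropy of (d, 1-d) in bits."""
    if d in (0.0, 1.0):
        return 0.0
    return -d * math.log2(d) - (1 - d) * math.log2(1 - d)

def p_with_redundancy(eps, tol=1e-12):
    """Find delta in (0, 1/2] with 1 - h2(delta) = eps by bisection.

    For a two-symbol alphabet the optimal code lengths are (1, 1),
    so E(S) = 1 and the redundancy is exactly 1 - h2(delta).
    h2 is increasing on (0, 1/2], so bisection applies.
    """
    lo, hi = 1e-300, 0.5          # redundancy ~1 at lo, 0 at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 1.0 - h2(mid) > eps:
            lo = mid               # too much redundancy: move toward 1/2
        else:
            hi = mid
    d = 0.5 * (lo + hi)
    return (d, 1.0 - d)

for eps in (0.0, 0.25, 0.9):
    p = p_with_redundancy(eps)
    print(eps, p, 1.0 - h2(p[0]))
```

This only demonstrates existence numerically; the analytic proof amounts to noting that $\delta \mapsto 1 - h2(\delta)$ is continuous on $(0, 1/2]$ with range $[0, 1)$, then invoking the intermediate value theorem.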
I am fairly new to computational engineering, and I have mainly been exposed to using the finite difference method to produce linear systems and solve them with iterative methods.
I am trying to solve a problem from the field of electrochemistry, involving the Laplace equation and boundary conditions that link the unknown potential to the current density resulting from the electrochemical reactions at the boundaries (electrodes), i.e. $$-\sigma \frac{\partial \phi}{\partial y} = \alpha e^{\beta\phi}$$ from which I derive that $$\phi_{i,j-1} = \phi_{i,j+1} - \frac{2\,Δy\,\alpha}{\sigma}e^{\beta\phi_{i,j}}$$ which provides the value of the potential at a fictitious node below the bottom boundary of a unit square computational domain.
The equation that I have arrived at using central differences on a 5-point stencil is the following: $$\frac{1}{Δx^2}\phi_{i-1,j} -2 \bigg(\frac{1}{Δx^2} + \frac{1}{Δy^2}\bigg)\phi_{i,j} - \frac{2\, \alpha}{Δy\, \sigma}e^{\beta \phi_{i,j}} + \frac{2}{Δy^2}\phi_{i,j+1} + \frac{1}{Δx^2}\phi_{i+1,j} = 0$$ but I am clueless about how to handle it, as it appears to lead neither to a linear system nor to a nonlinear system that I can solve using Newton's method.
Any suggestions from someone who has a wide perspective of the available options? |
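In case it helps, here is a minimal sketch (pure Python, illustrative parameter values, and a 1D analogue of the bottom-boundary problem rather than the full 2D domain) of the standard way out: the discretized equations, including the exponential boundary term, do form a nonlinear system $F(\phi) = 0$, and Newton's method applies directly. The only nonlinear contribution to the Jacobian is an extra diagonal term $-2\Delta y\,\alpha\beta\,e^{\beta\phi_0}/\sigma$ in the boundary row.

```python
import math

# 1D analogue:  phi'' = 0 on (0, 1),  phi(1) = 1 (Dirichlet at the top),
# with the nonlinear electrode condition at y = 0:
#   -sigma * dphi/dy = alpha * exp(beta * phi).
# The ghost node is eliminated exactly as in the question, leaving a
# single nonlinear equation in the bottom row. Parameters are illustrative.
sigma, alpha, beta = 1.0, 0.1, 1.0
n = 51
dy = 1.0 / (n - 1)

def residual(phi):
    F = [0.0] * n
    F[0] = 2.0 * (phi[1] - phi[0]) - (2.0 * dy * alpha / sigma) * math.exp(beta * phi[0])
    for j in range(1, n - 1):
        F[j] = phi[j - 1] - 2.0 * phi[j] + phi[j + 1]
    F[n - 1] = phi[n - 1] - 1.0                     # Dirichlet at the top
    return F

def jacobian_bands(phi):
    # tridiagonal Jacobian stored as (sub, main, super) diagonals
    a, b, c = [0.0] * n, [0.0] * n, [0.0] * n
    b[0] = -2.0 - (2.0 * dy * alpha * beta / sigma) * math.exp(beta * phi[0])
    c[0] = 2.0
    for j in range(1, n - 1):
        a[j], b[j], c[j] = 1.0, -2.0, 1.0
    b[n - 1] = 1.0
    return a, b, c

def thomas(a, b, c, d):
    # Thomas algorithm for a tridiagonal linear system
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

phi = [1.0] * n                                     # initial guess
for _ in range(30):
    F = residual(phi)
    if max(abs(f) for f in F) < 1e-12:
        break
    a, b, c = jacobian_bands(phi)
    delta = thomas(a, b, c, [-f for f in F])        # Newton step: J*delta = -F
    phi = [p + d for p, d in zip(phi, delta)]

print(round(phi[0], 4))   # potential at the electrode, approx 0.7815 here
```

The 2D case is the same idea: the Jacobian is the usual 5-point Laplacian matrix plus the diagonal correction on the bottom-boundary rows, and each Newton step is one sparse linear solve (e.g. with your existing iterative solver). A simpler but only linearly convergent alternative is a Picard iteration that lags the exponential term at the previous iterate.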
Chemotaxis-fluid coupled model for swimming bacteria with nonlinear diffusion: Global existence and asymptotic behavior
1. Department of Pure and Applied Mathematics, Via Vetoio, Loc. Coppito, 67100 L'Aquila, Italy
2. Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, United Kingdom
We study the system
$$c_t + u \cdot \nabla c = \Delta c - n f(c)$$
$$n_t + u \cdot \nabla n = \Delta n^m - \nabla \cdot (n \chi(c)\nabla c)$$
$$u_t + u \cdot \nabla u + \nabla P - \eta\Delta u + n \nabla \phi = 0$$
$$\nabla \cdot u = 0,$$
arising in the modelling of the motion of swimming bacteria under the effect of diffusion, oxygen-taxis and transport through an incompressible fluid. The novelty with respect to previous papers in the literature lies in the presence of nonlinear porous-medium-like diffusion in the equation for the density $n$ of the bacteria, motivated by a finite size effect. We prove that, under the constraint $m \in (3/2, 2]$ for the adiabatic exponent, such a system features global-in-time solutions in two space dimensions for large data. Moreover, in the case $m = 2$ we prove that solutions converge to constant states in the large-time limit. The proofs rely on standard energy methods and on a basic entropy estimate which cannot be achieved in the case $m = 1$. The case $m = 2$ is very special, as we can provide a Lyapunov functional. We generalize our results to the three-dimensional case and obtain a smaller range of exponents $m \in (m_*, 2]$ with $m_* > 3/2$, due to the use of classical Sobolev inequalities.

Mathematics Subject Classification: Primary: 35K55, 35Q92, 35Q35, 92C17; Secondary: 35K57, 76N10, 35Q3.

Citation: Marco Di Francesco, Alexander Lorz, Peter A. Markowich. Chemotaxis-fluid coupled model for swimming bacteria with nonlinear diffusion: Global existence and asymptotic behavior. Discrete & Continuous Dynamical Systems - A, 2010, 28 (4) : 1437-1453. doi: 10.3934/dcds.2010.28.1437
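To give a concrete feel for the porous-medium term $\Delta n^m$ that distinguishes this model from the linear case (an illustrative sketch with made-up parameters, not a scheme from the paper): an explicit finite-difference discretization of the 1D toy equation $n_t = (n^m)_{xx}$ with $m = 2$ conserves mass and spreads compactly supported data at finite speed, unlike linear diffusion, which smears data over the whole line instantly.

```python
# Explicit finite differences for the 1D porous medium equation
#   n_t = (n^m)_xx,  m = 2,
# on [-1, 1] with zero boundary values and a compactly supported bump.
m_exp = 2.0
nx, dx, dt = 101, 0.02, 5e-5     # dt satisfies the stability bound dt <= dx^2/(2*m*max(n))
n = [max(0.0, 1.0 - (dx * (i - nx // 2)) ** 2 * 25.0) for i in range(nx)]
mass0 = sum(n) * dx

def step(n):
    w = [v ** m_exp for v in n]          # nonlinear flux potential n^m
    out = n[:]
    for i in range(1, nx - 1):           # boundaries stay fixed at 0
        out[i] = n[i] + dt / dx ** 2 * (w[i - 1] - 2.0 * w[i] + w[i + 1])
    return out

for _ in range(2000):                    # evolve to t = 0.1
    n = step(n)

# mass is conserved (the support never reaches the boundary),
# while the peak decays and the bump spreads at finite speed
print(sum(n) * dx, max(n))
```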