One way to assess the microbial community structure in an environment is to use a ‘fingerprinting’ technique, like T-RFLP or ARISA, to interrogate the ‘species’ living there as determined from their 16S rRNA genes or some functional gene like amoA. Here’s an example of a T-RFLP electropherogram from sea ice: You can see that most of the signal in this sample is contained within a few peaks. Sometimes those peaks saturate (max out, overblow) the detector, which is bad if I am interested in comparing the heights of the peaks (a controversial subject, I should note I am only doing bulk, not individual, comparisons). Of course, I could just add less DNA and run it again, except that then I would be liable to lose some of the smaller peaks (also, it’s not practical for me to re-run these specific samples). So I’ve written a script in the open-source statistical package R to estimate the heights of the saturated peaks by fitting a Gaussian function of the form [tex]f(x) = y_0+\dfrac{b\sqrt{2/\pi}}{d}*e^{-2\left(\dfrac{x-x_0}{d}\right)^2}[/tex] where ‘y_0’ is the y-minimum, ‘x_0’ is the center of the peak, ‘b’ is a scaling factor, and ‘d’ is related to the standard deviation of the distribution. You can download the script here: gaussfit.r The figures below show (A) a fitted regular-sized peak, and (B) a fitted saturated peak. In my case, the maximum of the fitted function differs from the observed maximum by only 1.6 ± 2.5% for regular-sized peaks.
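The original script is in R; the following Python sketch is an added illustration of the same idea (not the actual gaussfit.r): fit the Gaussian above to the unsaturated points of a peak and read off the fitted maximum. The saturation threshold here is a made-up placeholder for the detector ceiling.

import numpy as np
from scipy.optimize import curve_fit

def gauss(x, y0, x0, b, d):
    # f(x) = y0 + b*sqrt(2/pi)/d * exp(-2*((x - x0)/d)^2)
    return y0 + (b * np.sqrt(2 / np.pi) / d) * np.exp(-2 * ((x - x0) / d) ** 2)

def fit_peak(x, y, saturation=8000):
    ok = y < saturation                 # drop the clipped (saturated) points
    p0 = [y[ok].min(),                  # y0: baseline
          x[np.argmax(y)],              # x0: rough peak center
          np.trapz(y, x),               # b:  this form integrates to exactly b
          (x.max() - x.min()) / 4]      # d:  rough width
    popt, _ = curve_fit(gauss, x[ok], y[ok], p0=p0)
    return popt[1], gauss(popt[1], *popt)   # center and estimated true height

A convenient feature of this parameterization is that the total area under the peak equals b, so the approximate integral of the data makes a sensible starting guess for the fit.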
In Witten's paper 'Quantization of Chern-Simons Gauge Theory with Complex Gauge Group,' he makes the statement (p. 35) that the symplectic form on the moduli space of flat $G_\mathbb{C}$ connections on a surface can be derived from the Chern-Simons Lagrangian and depends upon the coupling chosen. Now I am familiar with the bracket on the moduli space of connections obtained via Goldman's construction, but what is this method for getting a symplectic form from a Lagrangian and where could I learn more about it? It sounds like this is a general construction, too, that could apply any time a phase space is derived from a configuration space, not just something that applies to this particular case?

The reference is Deligne-Freed, Classical Field Theory, chapter 2. I will follow their notation. Let $M$ be a spacetime manifold (for simplicity assume oriented) and $\mathcal{F}$ the space of fields. $d$ is the de Rham differential along $M$ and $\delta$ the differential along $\mathcal{F}$. If the action $S$ is local, in the sense that $S=\int_M L$ for $L\in\Omega^{0,n}(\mathcal{F}\times M)$, then the procedure is the following. Find a form (variational one-form) $\gamma\in\Omega^{1,n-1}(\mathcal{F}\times M)$, such that $\alpha=\delta L+d\gamma\in\Omega^{1,n}(\mathcal{F}\times M)$ is linear over functions. What this means is that $\alpha(f\xi)=f\alpha(\xi)$ for every vector field $\xi\in T\mathcal{F}$ and a function $f\in\mathcal{O}(M)$. Then the symplectic form is defined to be $\omega=\int_H\delta\gamma\in\Omega^2(\mathcal{F})$ for $H$ a hypersurface in $M$. It is closed on the space of classical solutions, but may be degenerate. In the case of Chern-Simons, this is literally true if $\mathcal{F}$ is the affine space of connections before modding out by the gauge transformations. I will consider the compact group Chern-Simons; the complex group version is similar. Let $M=\Sigma\times\mathbf{R}$ and $H=\Sigma\times\{0\}$. $$S=\int_M Tr(A\wedge dA+\frac{2}{3}A\wedge A\wedge A).$$ Then $$\delta L=Tr(\delta A\wedge dA-A\wedge \delta dA+2\delta A\wedge A\wedge A).$$ In this formula only the second term is not linear over functions. It can be killed off if one takes $$\gamma=Tr(A\wedge \delta A).$$ Its derivative is $$d\gamma = Tr(dA\wedge \delta A-A\wedge d\delta A),$$ so the nonlinear term in $\delta L+d\gamma$ disappears, since $d\delta +\delta d=0$. Here the choice of $\gamma$ is unique if one assumes it itself is linear over functions. In the end one gets the standard symplectic form on the space of flat connections $$\omega=\int_\Sigma Tr(\delta A\wedge \delta A).$$ One should note that the action is not local in the naive sense on the space of connections mod gauge: the Lagrangian changes by a closed form under gauge transformations. However, the symplectic form does descend to the space of classical solutions mod gauge.

I will give a much simpler answer than the others. Given a Lagrangian, there is a corresponding Hamiltonian obtained via Legendre transformation. The symplectic structure comes along with this. Additional information at the nLab: covariant phase space.

You can read about this in the famous paper of Atiyah and Bott, Yang-Mills Equations over Riemann Surfaces, p. 587.
See section 20.6.7 in [1] V.P. Nair. Quantum Field Theory: A modern perspective. Springer, 2005. You might want to check chapter 3 first.
Sample Test 4 Question 6 Let \(F_k=F_{k-1}+F_{k-2}\) where \(F_0=0, F_1=1\). Define the series \(s(x)= \sum_{k=1}^\infty F_k / x^k\) for \(x>2\). Show that \(s(x)=\frac{x}{x^2-x-1}\). Hints Hint 1: Use the recursive definition of \(F_k\) to split \(s(x)\) into two sums. Carefully treat the base cases. Hint 2: Try rewriting the summation in terms of \(s(x).\) Hint 3: You may relabel the index of each sum in order to get it to match the definition of \(s(x).\) Solution The key is to break down \(F_k\) using its recursive definition, then relabel the indices to get \(s(x)\) again. The first term of the sum \(\frac{F_1}{x}\) cannot be broken down as it is a base case, and must be taken out of the sum first. \[ \begin{equation} \begin{split} s(x)&=\sum_{k=1}^\infty \frac{F_k}{x^k}\\ &=\frac{F_1}{x} + \sum_{k=2}^\infty \frac{F_k}{x^k} \\ &=\frac{F_1}{x} + \sum_{k=2}^\infty \frac{F_{k-1}}{x^k} + \sum_{k=2}^\infty \frac{F_{k-2}}{x^k} \\ &=\frac{F_1}{x} + \sum_{i=1}^\infty \frac{F_{i}}{x^{i+1}} + \bigg(\frac{F_0}{x^2} + \sum_{j=1}^\infty \frac{F_{j}}{x^{j+2}}\bigg) \\ &=\frac{1}{x} + \sum_{i=1}^\infty \frac{F_{i}}{x^{i+1}} + \bigg(\frac{0}{x^2} + \sum_{j=1}^\infty \frac{F_{j}}{x^{j+2}}\bigg) \\ &=\frac{1}{x} + \frac{1}{x}\sum_{i=1}^\infty \frac{F_{i}}{x^i} + \frac{1}{x^2} \sum_{j=1}^\infty \frac{F_{j}}{x^j} \\ &=\frac{1}{x} + \frac{1}{x}s(x) + \frac{1}{x^2}s(x) \\ \end{split} \end{equation} \] Finally, we rearrange to get \(s(x)=\frac{x}{x^2-x-1}\).
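As a quick numerical sanity check (an added Python sketch, separate from the proof above), partial sums of the series can be compared against the closed form:

def check(x, terms=60):
    a, b = 0, 1                    # F_0, F_1
    s = 0.0
    for k in range(1, terms + 1):
        s += b / x**k              # add F_k / x^k
        a, b = b, a + b            # advance the Fibonacci recurrence
    return s, x / (x**2 - x - 1)

print(check(3.0))   # both entries agree: 0.6 = 3/(9 - 3 - 1)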
Let us assume the standard situation in communication complexity with two players $P_1,P_2.$ We have a function $f:[n] \times [n] \to \{0,1\}$ that both players know in advance. They wish to compute $f(x,y)$ given that the first player only knows $y$ and the second player only knows $x.$ The communication complexity of $f$ is the smallest number $k$ such that $P_1,P_2$ can always compute $f(x,y)$ communicating at most $k$ bits of information to each other. If $f$ is the equality function (that is, $f(x,y) = 1$ if and only if $x = y$) then I know that $P_1,P_2$ need to exchange at least $\log_2{n}$ bits in order to be able to always compute $f.$ Consider now the function $h_n:[n]\times[n] \to \{0,1\}$ such that $h_n(x,y) = 1$ if and only if $x+y = n.$ I would like to show that $P_1,P_2$ cannot compute $h_n$ using fewer than $\Omega(\log{n})$ bits, and the idea is to use the fact about the communication complexity of computing $f.$ Hence I am wondering how to make a reduction in this case. Suppose we can compute $h_n(x,y)$ with fewer than $\Omega(\log{n})$ bits. Can we somehow simulate such a protocol in order to be able to compute $f$ using less than $\Omega(\log{n})$ communication bits? Player $P_1$ receives the string $y$ and $P_2$ the string $x.$ Now $x = y$ if and only if $x+y = 2x = 2y.$ Hence they could simulate the algorithm for computing $h_{2x}(x,y).$ The problem that I see here is that in this reduction $h_n$ is not fixed in advance and may not even define the same function for $P_1,P_2$ (if $x,y$ actually differ). Hence I am wondering: How can one show that $h_n(x,y)$ cannot be computed communicating fewer than $\log_2{n}$ bits?
Practice Paper 4 Question 8 Find all functions \(f:\{1,2,3,\ldots\}\to\{1,2,3,\ldots\},\) such that for all \(n=1,2,3,\ldots\) the sum \(f(1)+f(2)+\cdots+f(n)\) is equal to a perfect cube that is less than or equal to \(n^3.\) Hints Hint 1: What values can \(f(1)\) take? Hint 2: What value must \(f(1) + f(2)\) take? Hint 3: Find an expression for \(\sum_{i=1}^{n}{f(i)}\) in terms of \(n.\) Hint 4: Prove your hypothesis using induction. Hint 5: What is the value of \(f(n)\) if \(\sum_{i=1}^{n}{f(i)} = n^3\) and \(\sum_{i=1}^{n-1}{f(i)} = (n-1)^3?\) Solution By trying small values of \(n,\) we find that \(f(1) \leq 1^3,\) which means that necessarily \(f(1)=1.\) The next case gives us \(f(1)+f(2) = 8.\) This can give us an intuition that the function is unique: \(f(n) = n^3 - (n-1)^3.\) To prove this formally we use induction. Let \(\Phi(n)\) be the induction hypothesis "If \(S(i)=\sum_{j=1}^i{f(j)}\) is a perfect cube less than or equal to \(i^3\) for all \(i \leq n\) then \(S(i) = i^3.\)" Base case: \(f(1) = 1\) as it is the only perfect cube less than or equal to \(1^3,\) so \(\Phi(1)\) is true. Inductive case: Assuming \(\Phi(k)\) is true, we can prove \(\Phi(k+1)\) as follows: Suppose \(S(i)\) is a perfect cube less than or equal to \(i^3\) for all \(i \leq k+1.\) Using the induction hypothesis \(\Phi(k)\), \(S(i) = i^3\) for all \(i \leq k.\) Since \(S(k) = k^3\) and \(f(k+1) \geq 1,\) we have \(k^3 < S(k+1) \leq (k+1)^3,\) so \(S(k+1)\) must equal \((k+1)^3\) as there are no other cubes in that range. Therefore \(\Phi(k+1)\) is true. By induction, \(S(n) = n^3\) for all \(n,\) and so \(f(n) = \sum_{j=1}^{n}{f(j)} - \sum_{j=1}^{n-1}{f(j)}\) \(= n^3 - (n-1)^3.\)
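As a small added check (Python, illustration only, not part of the solution), the unique function found above can be verified numerically: with f(n) = n^3 - (n-1)^3 = 3n^2 - 3n + 1, every partial sum telescopes to exactly n^3.

def f(n):
    return 3 * n * n - 3 * n + 1   # n^3 - (n-1)^3 expanded

total = 0
for n in range(1, 11):
    total += f(n)
    assert total == n ** 3         # each partial sum is a perfect cube, namely n^3
print("partial sums equal n^3 for n = 1..10")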
In part V we saw how a statement Alice would like to prove to Bob can be converted into an equivalent form in the “language of polynomials” called a Quadratic Arithmetic Program (QAP). In this part, we show how Alice can send a very short proof to Bob showing she has a satisfying assignment to a QAP. We will use the Pinocchio Protocol of Parno, Howell, Gentry and Raykova. But first let us recall the definition of a QAP we gave last time:

A Quadratic Arithmetic Program :math:`Q` of degree :math:`d` and size :math:`m` consists of polynomials :math:`L_1,\ldots,L_m`, :math:`R_1,\ldots,R_m`, :math:`O_1,\ldots,O_m` and a target polynomial :math:`T` of degree :math:`d`. An assignment :math:`(c_1,\ldots,c_m)` satisfies :math:`Q` if, defining :math:`L:=\sum_{i=1}^m c_i\cdot L_i, R:=\sum_{i=1}^m c_i\cdot R_i, O:=\sum_{i=1}^m c_i\cdot O_i` and :math:`P:=L\cdot R -O`, we have that :math:`T` divides :math:`P`.

As we saw in Part V, Alice will typically want to prove she has a satisfying assignment possessing some additional constraints, e.g. :math:`c_m=7`; but we ignore this here for simplicity, and show how to just prove knowledge of some satisfying assignment.

If Alice has a satisfying assignment it means that, defining :math:`L,R,O,P` as above, there exists a polynomial :math:`H` such that :math:`P=H\cdot T`. In particular, for any :math:`s\in\mathbb{F}_p` we have :math:`P(s)=H(s)\cdot T(s)`.

Suppose now that Alice doesn’t have a satisfying assignment, but she still constructs :math:`L,R,O,P` as above from some unsatisfying assignment :math:`(c_1,\ldots,c_m)`. Then we are guaranteed that :math:`T` does not divide :math:`P`. This means that for any polynomial :math:`H` of degree at most :math:`d-2`, :math:`P` and :math:`H\cdot T` will be different polynomials. Note that :math:`P` here is of degree at most :math:`2(d-1)`, :math:`L,R,O` here are of degree at most :math:`d-1` and :math:`H` here is of degree at most :math:`d-2`.

Now we can use the famous Schwartz-Zippel Lemma that tells us that two different polynomials of degree at most :math:`2d` can agree on at most :math:`2d` points :math:`s\in\mathbb{F}_p`. Thus, if :math:`p` is much larger than :math:`2d` the probability that :math:`P(s)=H(s)\cdot T(s)` for a randomly chosen :math:`s\in\mathbb{F}_p` is very small.

This suggests the following protocol sketch to test whether Alice has a satisfying assignment:

1. Alice chooses polynomials :math:`L,R,O,H` of degree at most :math:`d`.
2. Bob chooses a random point :math:`s\in\mathbb{F}_p`, and computes :math:`E(T(s))`.
3. Alice sends Bob the hidings of all these polynomials evaluated at :math:`s`, i.e. :math:`E(L(s)),E(R(s)),E(O(s)),E(H(s))`.
4. Bob checks if the desired equation holds at :math:`s`. That is, he checks whether :math:`E(L(s)\cdot R(s)-O(s))=E(T(s)\cdot H(s))`.

Again, the point is that if Alice does not have a satisfying assignment, she will end up using polynomials for which the equation does not hold identically, and thus holds at only a small fraction of the choices of :math:`s`. Therefore, Bob will reject with high probability over his choice of :math:`s` in such a case. A toy version of Bob's check, with the hidings stripped away, is sketched below.
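This added Python sketch (my own illustration, not from the original series) performs the check in the clear, without the homomorphic hiding :math:`E`, over a small prime field; the polynomials are arbitrary stand-ins rather than a real QAP:

import random

P = 2**31 - 1                           # a toy prime, much larger than the degrees

def pmul(a, b):                         # polynomial product mod P (ascending coeffs)
    out = [0] * (len(a) + len(b) - 1)
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            out[i + j] = (out[i + j] + u * v) % P
    return out

def ev(coeffs, s):                      # Horner evaluation mod P
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * s + c) % P
    return acc

def passes(L, R, O, T, H, s):           # Bob's equation at the point s
    return (ev(L, s) * ev(R, s) - ev(O, s)) % P == (ev(T, s) * ev(H, s)) % P

T, H = [5, 0, 1], [2, 3]                # target T and quotient H
L, R = [1, 1], [4, 7]
LR = pmul(L, R) + [0]                   # pad L*R to the degree of T*H
O = [(u - v) % P for u, v in zip(LR, pmul(T, H))]   # force L*R - O = T*H

s = random.randrange(P)
print(passes(L, R, O, T, H, s))         # True: the identity holds everywhere
O_bad = O[:]
O_bad[1] = (O_bad[1] + 1) % P           # break the identity by one coefficient
print(passes(L, R, O_bad, T, H, s))     # False unless s hits one of the few roots

The honest choice passes at every point, while the perturbed one fails at all but a handful of points, which is exactly the Schwartz-Zippel effect the protocol relies on.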
The question is whether we have the tools to implement this sketch. The most crucial point is that Alice must choose the polynomials she will use without knowing :math:`s`. But this is exactly the problem we solved in the verifiable blind evaluation protocol that was developed in Parts II-IV. Given that, there are four main points that need to be addressed to turn this sketch into a zk-SNARK. We deal with two of them here, and the other two in the next part.

Making sure Alice chooses her polynomials according to an assignment

Here is an important point: If Alice doesn’t have a satisfying assignment, it doesn’t mean she can’t find any polynomials :math:`L,R,O,H` of degree at most :math:`d` with :math:`L\cdot R-O=T\cdot H`, it just means she can’t find such polynomials where :math:`L,R` and :math:`O` were “produced from an assignment”; namely, that :math:`L:=\sum_{i=1}^m c_i\cdot L_i, R:=\sum_{i=1}^m c_i\cdot R_i, O:=\sum_{i=1}^m c_i\cdot O_i` for the same :math:`(c_1,\ldots,c_m)`. The protocol of Part IV just guarantees she is using some polynomials :math:`L,R,O` of the right degree, but not that they were produced from an assignment. This is a point where the formal proof gets a little subtle; here we sketch the solution imprecisely.

Let’s combine the polynomials :math:`L,R,O` into one polynomial :math:`F` as follows: :math:`F=L+X^{d+1}\cdot R+X^{2(d+1)}\cdot O`. The point of multiplying :math:`R` by :math:`X^{d+1}` and :math:`O` by :math:`X^{2(d+1)}` is that the coefficients of :math:`L,R,O` “do not mix” in :math:`F`: The coefficients of :math:`1,X,\ldots,X^d` in :math:`F` are precisely the coefficients of :math:`L`, the next :math:`d+1` coefficients of :math:`X^{d+1},\ldots,X^{2d+1}` are precisely the coefficients of :math:`R`, and the last :math:`d+1` coefficients are those of :math:`O`.

Let’s combine the polynomials in the QAP definition in a similar way, defining for each :math:`i\in \{1,\ldots,m\}` a polynomial :math:`F_i` whose first :math:`d+1` coefficients are the coefficients of :math:`L_i`, followed by the coefficients of :math:`R_i` and then :math:`O_i`. That is, for each :math:`i\in \{1,\ldots,m\}` we define the polynomial :math:`F_i=L_i+X^{d+1}\cdot R_i+X^{2(d+1)}\cdot O_i`. Note that when we sum two of the :math:`F_i`’s the :math:`L_i`, :math:`R_i`, and :math:`O_i` “sum separately”. For example, :math:`F_1+F_2 = (L_1+L_2)+X^{d+1}\cdot (R_1+R_2)+X^{2(d+1)}\cdot(O_1+O_2)`. More generally, suppose that we had :math:`F=\sum_{i=1}^mc_i\cdot F_i` for some :math:`(c_1,\ldots,c_m)`. Then we’ll also have :math:`L=\sum_{i=1}^m c_i\cdot L_i, R=\sum_{i=1}^m c_i\cdot R_i, O=\sum_{i=1}^m c_i\cdot O_i` for the same coefficients :math:`(c_1,\ldots,c_m)`. In other words, if :math:`F` is a linear combination of the :math:`F_i`’s it means that :math:`L,R,O` were indeed produced from an assignment.

Therefore, Bob will ask Alice to prove to him that :math:`F` is a linear combination of the :math:`F_i`’s. This is done in a similar way to the protocol for verifiable evaluation: Bob chooses a random :math:`\beta\in\mathbb{F}^*_p`, and sends to Alice the hidings :math:`E(\beta\cdot F_1(s)),\ldots,E(\beta\cdot F_m(s))`. He then asks Alice to send him the element :math:`E(\beta\cdot F(s))`. If she succeeds, an extended version of the Knowledge of Coefficient Assumption implies she knows how to write :math:`F` as a linear combination of the :math:`F_i`’s. (A toy sketch of this coefficient packing follows below.)
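This added Python sketch illustrates the “no mixing” point: storing each polynomial as its list of :math:`d+1` coefficients, packing into :math:`F` is just concatenation, so a linear combination of packed polynomials acts on the :math:`L`, :math:`R`, :math:`O` parts separately.

d = 3

def pack(L, R, O):                # coefficient lists, lowest degree first
    assert len(L) == len(R) == len(O) == d + 1
    return L + R + O              # coeffs of X^0..X^d, then X^(d+1).., then X^(2(d+1))..

L1, R1, O1 = [1, 0, 2, 0], [0, 5, 0, 0], [7, 0, 0, 1]
L2, R2, O2 = [0, 3, 0, 0], [1, 0, 0, 2], [0, 0, 4, 0]

combo = lambda u, v: [2 * a + 5 * b for a, b in zip(u, v)]    # 2*F_1 + 5*F_2
assert combo(pack(L1, R1, O1), pack(L2, R2, O2)) == \
       pack(combo(L1, L2), combo(R1, R2), combo(O1, O2))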
Adding the zero-knowledge part – concealing the assignment

In a zk-SNARK Alice wants to conceal all information about her assignment. However the hidings :math:`E(L(s)),E(R(s)),E(O(s)),E(H(s))` do provide some information about the assignment. For example, given some other satisfying assignment :math:`(c’_1,\ldots,c’_m)` Bob could compute the corresponding :math:`L’,R’,O’,H’` and hidings :math:`E(L'(s)),E(R'(s)),E(O'(s)),E(H'(s))`. If these come out different from Alice’s hidings, he could deduce that :math:`(c’_1,\ldots,c’_m)` is not Alice’s assignment.

To avoid such information leakage about her assignment, Alice will conceal her assignment by adding a “random :math:`T`-shift” to each polynomial. That is, she chooses random :math:`\delta_1,\delta_2,\delta_3\in\mathbb{F}^*_p`, and defines :math:`L_z:=L+\delta_1\cdot T,R_z:=R+\delta_2\cdot T,O_z:=O+\delta_3\cdot T`. Assume :math:`L,R,O` were produced from a satisfying assignment; hence, :math:`L\cdot R-O = T\cdot H` for some polynomial :math:`H`. As we’ve just added a multiple of :math:`T` everywhere, :math:`T` also divides :math:`L_z\cdot R_z-O_z`. Let’s do the calculation to see this:

:math:`L_z\cdot R_z-O_z = (L+\delta_1\cdot T)(R+\delta_2\cdot T) - O-\delta_3\cdot T`
:math:`= (L\cdot R-O) + L\cdot \delta_2\cdot T + \delta_1\cdot T\cdot R + \delta_1\delta_2\cdot T^2 - \delta_3\cdot T`
:math:`=T\cdot (H+L\cdot \delta_2 + \delta_1\cdot R + \delta_1 \delta_2\cdot T - \delta_3)`

Thus, defining :math:`H_z=H+L\cdot\delta_2 + \delta_1\cdot R + \delta_1\delta_2\cdot T-\delta_3`, we have that :math:`L_z\cdot R_z-O_z=T\cdot H_z`. Therefore, if Alice uses the polynomials :math:`L_z,R_z,O_z,H_z` instead of :math:`L,R,O,H`, Bob will always accept. On the other hand, these polynomials evaluated at :math:`s\in\mathbb{F}_p` with :math:`T(s)\neq 0` (which is all but :math:`d` :math:`s`’s), reveal no information about the assignment. For example, as :math:`T(s)` is non-zero and :math:`\delta_1` is random, :math:`\delta_1\cdot T(s)` is a random value, and therefore :math:`L_z(s)=L(s)+\delta_1\cdot T(s)` reveals no information about :math:`L(s)` as it is masked by this random value.

What’s left for next time?

We presented a sketch of the Pinocchio Protocol in which Alice can convince Bob she possesses a satisfying assignment for a QAP, without revealing information about that assignment. There are two main issues that still need to be resolved in order to obtain a zk-SNARK:

1. In the sketch, Bob needs an HH (homomorphic hiding) that “supports multiplication”. For example, he needs to compute :math:`E(H(s)\cdot T(s))` from :math:`E(H(s))` and :math:`E(T(s))`. However, we have not seen so far an example of an HH that enables this. We have only seen an HH that supports addition and linear combinations.
2. Throughout this series, we have discussed interactive protocols between Alice and Bob. Our final goal, though, is to enable Alice to send single-message non-interactive proofs, that are publicly verifiable – meaning that anybody seeing this single message proof will be convinced of its validity, not just Bob (who had prior communication with Alice).

Both these issues can be resolved by the use of pairings of elliptic curves, which we will discuss in the next and final part.
Of course, the truth of the Riemann hypothesis is a central question in analytic number theory. Does its truth/falsehood have important consequences in purely algebraic number theory as well? Moreover, are there any known methods of studying or "attacking" the Riemann hypothesis that are more algebraic than analytic?

You can find here http://www.claymath.org/sites/default/files/sarnak_rh_0.pdf a short but very enlightening account of RH and GRH by P. Sarnak, "Problem of the Millennium: The Riemann hypothesis", 2004 (building on E. Bombieri's official presentation of the problem). Your question, as well as many others on this subject, could be placed under the common headline "What is so interesting about the zeroes of the Riemann Zeta function?" (as termed by Karmal, April 24). I can see that a large majority of the answers concentrate on applications to the distribution of primes, which is natural since Riemann himself started the subject, but one has the right to marvel at e.g. how GRH can lead to information on the arithmetic of elliptic curves (Serre's result recalled by Sarnak op. cit.). Even more wonderful is the parallel with the Zeta function of a curve (Weil's theorem recalled by Jake) and more generally, of a smooth projective variety (Weil's conjectures, proved by Deligne and others) over a finite field. This is not just an analogy, but a manifestation of the unity of mathematics: "What was really eye-catching [in the Weil conjectures], from the point of view of other mathematical areas, was the proposed connection with algebraic topology. Given that finite fields are discrete in nature, and topology speaks only about the continuous, the detailed formulation of Weil (based on working out some examples) was striking and novel. It suggested that geometry over finite fields should fit into well-known patterns relating to Betti numbers, the Lefschetz fixed-point theorem and so on. The analogy with topology suggested that a new homological theory [étale cohomology] be set up applying within algebraic geometry. This took two decades (it was a central aim of the work and school of Alexander Grothendieck) building up on initial suggestions from Serre" (Wikipedia).

Modifying slightly your question, let us now concentrate on the "significance of the zeroes of the Zeta function to algebraic number theory". Away from RH, the deep arithmetic properties surrounding the so called trivial zeroes are all the more astonishing in view of the opposition discrete vs. analytic. Looking at the special values $\zeta(n)$, $n \in \mathbf Z$, and using the functional equation, one can easily show that $\zeta(0) = -1/2$, and thus restrict to the integers $n$ of the form $-2m$ or $1-2m$, $m > 0$. Then:

1) The special value $\zeta(1-2m) = -B_{2m}/2m$, where $B_k$ is the $k$-th Bernoulli number, is a plain rational value. But when a number theorist falls upon a rational number, he inevitably asks whether the numerator and the denominator could not be the orders of some finite groups. Astonishingly, this is the case, see below.

2) $\zeta(-2m) = 0$. These trivial zeroes are simple and the special values $\zeta(-2m)^*$ are by definition the first non zero terms in the Taylor expansion at $s = -2m$. The general formula for all the special values then reads: for all $n \ge 1$, $\zeta(-n)^*$ is equal, up to a sign and a power of $2$, to $R_n(\mathbf Q)\cdot\#K_{2n}(\mathbf Z)/\#K_{2n+1}(\mathbf Z)_{\rm tors}$. Here the $K_i(\mathbf Z)$ are the Quillen K-groups associated to the ring $\mathbf Z$ and $R_n(\mathbf Q)$ is the $n$-th Borel regulator. This was the so called Lichtenbaum conjecture (around 1973), now a theorem.
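To make the formula concrete, here is its smallest case, a worked example added for illustration, quoting the known orders $\#K_2(\mathbf Z)=2$ and $\#K_3(\mathbf Z)=48$ (since $K_3(\mathbf Z)$ is finite, the regulator contributes trivially here): $$\zeta(-1)=-\frac{B_2}{2}=-\frac{1}{12}, \qquad \frac{\#K_2(\mathbf Z)}{\#K_3(\mathbf Z)}=\frac{2}{48}=\frac{1}{24},$$ and the two rational numbers indeed agree up to a sign and a power of $2$.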
Comments: Quillen's definition of the K-groups of a ring comes from algebraic topology – a further manifestation of the unity of mathematics. Note that the K-groups of $\mathbf Z$ are in no way easy to compute. For any number field $F$, one has the K-groups of the ring of integers $O_F$; one can define also the Borel regulators of $F$, higher analogues of the Dedekind regulator which appears in the classical expression of $\zeta_F(0)^*$; and one can extend the Lichtenbaum conjecture to $F$ to get a higher analogue of the class number formula. The conjecture has been proved for abelian fields. Its depth can be judged by simply looking at the list of tools which come into play: the values of $\zeta_F(s)$ at positive integers expressed in terms of polylog functions; the Main Conjecture of Iwasawa theory on p-adic L-functions, proved by Mazur and Wiles; the so called Milnor-Bloch-Kato conjecture relating the K-theory of fields to Galois cohomology, proved by Voevodsky; the Quillen-Lichtenbaum conjecture relating the K-theory of rings of integers to étale cohomology, proved by Voevodsky and others. Note that a far reaching generalization of the Lichtenbaum conjecture has been proposed by Bloch-Kato concerning the special values of "motivic" L-functions – another strong indication, if still needed, of the unity of mathematics. A convenient introductory book is "The Bloch-Kato conjecture for the Riemann Zeta function", Coates-Raghuram-Saikia-Sujatha ed., London Math. Soc. LN 418, 2015.

This is only somewhat related but I hope you find it useful. The Extended Riemann Hypothesis is a generalization of the Riemann Hypothesis to number fields (just replace the Riemann Zeta function with the Dedekind Zeta function of a number field). Its truth implies an effective error term for the Chebotarev Density Theorem, which has implications for the distribution of ideals in Ideal Classes, Ray Classes etc. Another algebraic analogue of the Riemann Zeta function is the Zeta function of a Curve over a finite field. Here, the analogue of the Riemann Hypothesis has the following important consequence (among a lot of other interesting information): If $C$ is a smooth curve of genus $g$ over the finite field $\mathbb F_q$, denoting the number of points of $C/\mathbb F_q$ as $N_q$, $$|N_q-q-1|\leq 2g\sqrt q $$ This is actually a theorem!
I am having difficulty formulating a problem, which involves optimizing a contour shape, into a well-posed variational form that would give a reasonable answer. Within a bounded region on the $xy$ plane, say $x\in[-x_{0},x_{0}], y\in[-y_{0},y_{0}]$, we have a continuous scalar field $H=H(x,y)$. Both the field and the geometry of the problem exhibit no variations in the $z$ direction (i.e. $\partial/\partial z=0)$. On the plane within the specified region, there exists a closed loop (contour) $g(x,y)=0$ that encloses and defines a planar area $A$ that is penetrated by the field $H$. It is known from the physics of the problem that varying the shape of the contour $g$ can result in extremising the functional $$ J(y):=\iint_{A} H(x,y)dA $$ and I need to find the optimal shape $g$ of the contour, for a given $H$ function that is nontrivial ($\neq 0)$ and captured area $A$ that is nonzero. In writing $J$ here I assumed that $x$ is the independent variable and $y$ is dependent on it, to draw the contour shape. Anticipating that the closed contour function will most likely be expressible in parametric form $(x(t),y(t))$, and since classical variational formulations I am familiar with usually deal with paths rather than areas, I tried to write the functional in terms of the contour (instead of the area) as follows, using Green's theorem: $$ J=\iint_{A} H(x,y)dA=\iint_{A} \left(\frac{\partial F_{y}}{\partial x} -\frac{\partial F_{x}}{\partial y}\right)dA=\oint_{g}(F_{x}dx+F_{y}dy) =\int_{t=0}^{2\pi}(F_{x}\dot{x}+F_{y}\dot{y})dt $$ where $\boldsymbol{F}=F_{x}\hat{i}+F_{y}\hat{j}$ is some vector field whose curl may be defined to give $H$ (assuming we can find such a field), dotted symbols like $\dot{x}$ denoting derivatives in the parameter $t\in[0,2\pi]$, and $\oint_{g}$ denoting the integral around the closed contour $g$. So, we can think of the Lagrangian of this problem as $L(x,y,\dot{x},\dot{y}):=F_{x}\dot{x}+F_{y}\dot{y}$. The problem now is, if I don't impose any further constraints, the two Euler-Lagrange equations here (in $t$ now as independent variable and both $x$ and $y$ as dependents) give the same result (instead of two independent answers), which says that $H=0$. I plugged in different test fields $H$, and this is always the answer. If I try to improve the formulation by imposing a constraint that $\iint_{A}dA=A_{0}$ to make the area a nonzero constant, thus: $$\frac{1}{2}\int_{0}^{2\pi}(x\dot{y}-y\dot{x})dt=A_{0} \Rightarrow \int_{0}^{2\pi}\left[ \frac{x\dot{y}}{2}-\frac{y\dot{x}}{2}-\frac{A_{0}}{2\pi} \right]dt=0,$$ giving a new (constrained) Lagrangian as $L(x,y,\dot{x},\dot{y},\lambda):=F_{x}\dot{x}+F_{y}\dot{y}+\lambda\left(\frac{x\dot{y}}{2}-\frac{y\dot{x}}{2}-\frac{A_{0}}{2\pi}\right),$ then, again, the two Euler-Lagrange equations in $x$ and $y$ give the same answer, basically that $H=\lambda$, where $\lambda$ is Lagrange's multiplier for this constraint. What is wrong with my formulation, and how do I make it well posed for this problem so I can proceed?
Let $\nu_n$ be a sequence of finite signed Radon measures such that $\nu_n\to \nu$ strongly for a finite signed Radon measure $\nu$. Let $|\nu_n|$ denote the total variation measure of $\nu_n$. We know that $|\nu_n|$ is a positive Radon measure. My question: do I have $|\nu_n|\to |\nu|$ strongly? And do I have $||\nu|-|\mu||\leq |\nu-\mu|$ for two arbitrary finite signed measures $\nu$ and $\mu$? I know the above statement is absolutely false if $\nu_n\to \nu$ only in the weak star sense. But I somehow remembered that for strong convergence it is true, although I cannot find the source. So, if it is true, please confirm it for me and direct me to a reference; if not... maybe a counterexample? Thank you!
The conditional variational principle for maps with the pseudo-orbit tracing property

Zheng Yin and Ercai Chen

1. School of Mathematical Sciences, Anhui University, Hefei 230601, Anhui, China
2. School of Mathematical Sciences and Institute of Mathematics, Nanjing Normal University, Nanjing 210023, Jiangsu, China
3. Center of Nonlinear Science, Nanjing University, Nanjing 210093, Jiangsu, China

Abstract: Let $(X,d,f)$ be a dynamical system, where $(X,d)$ is a compact metric space and $f:X \to X$ is a continuous map. For $n\in\mathbb{N}$ and $x \in X$, the $n$-th empirical measure of $x$ is $\mathscr{E}_n(x) = \frac{1}{n}\sum\limits_{i = 0}^{n-1}\delta_{f^ix}$, where $\delta_y$ denotes the Dirac measure at $y$. Let $V(x)$ denote the set of limit points of the sequence $\mathscr{E}_n(x)$, and for a subset $I$ of the space $\mathscr{M}_{\rm inv}(X,f)$ of invariant measures, define $\Delta_{sub}(I):= \left\{ x \in X : V(x)\subset I \right\}$ and $\Delta_{cap}(I):= \left\{ x \in X : V(x)\cap I \neq \emptyset \right\}$. The paper establishes a conditional variational principle for these sets for maps with the pseudo-orbit tracing property.

Mathematics Subject Classification: Primary: 37B40; Secondary: 37C45.

Citation: Zheng Yin, Ercai Chen. The conditional variational principle for maps with the pseudo-orbit tracing property. Discrete & Continuous Dynamical Systems - A, 2019, 39 (1): 463-481. doi: 10.3934/dcds.2019019
Answer $a\approx6.9$ in. $b\approx9.8$ in. Work Step by Step $\sin35^{\circ}=\frac{a}{12}$ $a=12\times\sin35^{\circ}$ $a\approx6.9$ Using the Pythagorean Theorem to find $b$: $b=\sqrt {12^{2}-a^{2}}$ $b\approx\sqrt{12^{2}-6.9^{2}}$ $b\approx9.8$
Suppose the adversary has a quantum computer. If the adversary has time to run it on the public key before it becomes irrelevant, then every pre-quantum public-key signature scheme is broken. What if the adversary doesn't have time to run it on the public key before it becomes irrelevant? For example, in Bitcoin, a transaction is (very approximately) a signed statement saying “Please send 0.042 BTC to pubkey whose SHA-256 hash is 0x2ef20003e34f7113dfbb26b37e7544a657208aa4f11ffda8781d75e1bda23a09. Sincerely, pubkey 0x40e0cb242a51f749b028a4a3c0895000dc5c3bea58ab7d344e5d0a611529ef04.” As long as you use each public key only once, the adversary has only the time before the network has accepted the transaction to attempt to forge an alternative transaction; after that point, any future forgery of signatures by the sender of the transaction is inconsequential. If the adversary wants to forge signatures by the recipient of the transaction, they must also reverse SHA-256—a considerably harder problem. The adversary can try to parallelize the computation, but if theft is all that is at issue (which it may not be; false attribution or all manner of other things might be valuable to the adversary), then the parallelized computation can't cost more than 0.042 BTC before it's a waste of money. However, this is not a proof of security against quantum computers, of course—it is at best an amusing thought in the attacker economist philosophy. It is also only the story for a single key. What happens if the adversary can break many keys simultaneously? As it happens, Pollard's $\rho$ algorithm breaks many keys simultaneously faster than it breaks the same keys independently. (I'm not sure what the multi-target Shor story is.) Now consider a hypothetical application beyond Bitcoin, where only select verifiers learn the public key at all, the verifiers whom the legitimate parties want to enable to verify signatures. What if the adversary doesn't even see the public key? Can the adversary recover the public key—and thereby break the cryptosystem with a quantum computer—from a signature, or a collection of signatures? (Even if the answer is no, of course, this doesn't rule out a quantum adversary's ability to forge signatures without recovering the public key, but while intuitively it seems unlikely, I won't address that extended question for now.) In ECDSA over a curve $E/\mathbb F_p$, a signature under a public key $A \in E(\mathbb F_p)$ on a message $m \in \{0,1\}^*$ is a pair of $r \in \mathbb F_p$ and $s \in \mathbb Z/\ell\mathbb Z$, where $\ell = \#E(\mathbb F_p)$, satisfying the equation $$r = x([H(m)\,s^{-1}] B + [r s^{-1}] A),$$ where $B \in E(\mathbb F_p)$ is the standard base point. Knowledge of $a \in \mathbb Z/\ell\mathbb Z$ such that $A = [a]B$ makes it easy to solve for $s$ given uniform random $r$. However, we can also solve for public keys—not uniquely, but close enough to forge signatures with nonnegligible probability: by finding the points $R \in x^{-1}(r)$ with $x(R) = r$, so that $\pm R = [H(m)\,s^{-1}] B + [r s^{-1}] A$, we can compute $$A \in [r^{-1} s] (\pm R - [H(m)\,s^{-1}] B),$$ and so from a single signature it is easy to recover one of two public keys verifying the signature. Then an adversary with a quantum computer can solve the elliptic-curve discrete log problem with Shor's algorithm. Thus ECDSA does not provide security against this threat model.
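To make the recovery computation concrete, here is a toy Python sketch (my own illustration, not part of the original argument) on the textbook curve $y^2 = x^3 + 2x + 2$ over $\mathbb F_{17}$ with base point $(5,1)$ of order $19$; every constant is an illustrative stand-in for a real curve such as secp256k1.

P, A_, B_ = 17, 2, 2                   # curve y^2 = x^3 + 2x + 2 over F_17
N = 19                                 # order of the base point
G = (5, 1)

def inv(x, m):
    return pow(x, -1, m)               # modular inverse (Python >= 3.8)

def add(p1, p2):                       # affine point addition; None = infinity
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A_) * inv(2 * y1, P) % P
    else:
        lam = (y2 - y1) * inv(x2 - x1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt):                        # double-and-add scalar multiplication
    acc = None
    while k:
        if k & 1: acc = add(acc, pt)
        pt, k = add(pt, pt), k >> 1
    return acc

def neg(pt):
    return None if pt is None else (pt[0], -pt[1] % P)

a, h, k = 7, 5, 10                     # private key, message hash, nonce
R = mul(k, G)                          # sign: r = x(kG) mod N,
r = R[0] % N                           #       s = k^-1 (h + r*a) mod N
s = inv(k, N) * (h + r * a) % N

for Rc in (R, neg(R)):                 # the two candidate lifts of r
    cand = mul(inv(r, N), add(mul(s, Rc), neg(mul(h, G))))   # r^-1 (s*R - h*G)
    print(cand, cand == mul(a, G))

One of the two printed candidates equals the signer's true public key $[a]B$, which is exactly the input a quantum adversary would feed to Shor's algorithm. What about Ed25519?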
An Ed25519 signature under a public key $A \in E(\mathbb F_p)$ on a message $m \in \{0,1\}^*$ is a pair $R \in E(\mathbb F_p)$ and $s \in \mathbb Z/\ell\mathbb Z$ such that $$[s] B = R + [H(\underline R \mathbin\Vert \underline A \mathbin\Vert m)] A.$$ If we had $\underline A$ (the encoding of the point $A$), then we could recover $$A = [H(\underline R \mathbin\Vert \underline A \mathbin\Vert m)^{-1}] ([s] B - R),$$ but this is begging the question. Does this mean Ed25519 provides security against this threat model? I don't know! This requires further analysis. What about RSA signatures, say RSA-FDH? An RSA-FDH signature under a public key $n$ on a message $m$ is an element $s \in \mathbb Z/n\mathbb Z$ such that $s^3 \equiv H(m) \pmod n$, where $H\colon \{0,1\}^* \to \mathbb Z/n\mathbb Z$ is a uniform random function. If an adversary with a quantum computer learned $n$ they could use Shor's algorithm to factor it. Now, a single signature doesn't reveal $n$. With a corpus of signatures, the adversary can solve the German tank problem to distinguish signatures under $n_0$ from signatures under $n_1$ and deanonymize signers—which doesn't guarantee the adversary can learn $n$ with enough precision to apply Shor's algorithm or hire Don Coppersmith to figure out a brilliant variant of it, but doesn't rule it out either. What if we chose the first $s_i = H(m, i)^d \bmod n$ for $i = 0, 1, 2, \dots$ such that $s_i \bmod n < 2^{\lfloor\lg n\rfloor}$ by rejection sampling? This shouldn't reduce the security of the underlying signature scheme much: if an adversary could compute cube roots with nonnegligible probability given a signature scheme covering all elements of $\mathbb Z/n\mathbb Z$, then they could probably compute cube roots with at worst about half the probability given a signature scheme covering only elements below $2^{\lfloor\lg n\rfloor}$. (Caveat developer: I am just a pseudonymous bone-eating vulture on the internet, and I'm definitely not your cryptographer…and the first draft of this was completely wrong.)
Can someone please tell me how I get vertical lines in the columns of a matrix like it is shown in the picture? I started with

... = \begin{pmatrix} \lambda A' & \mu B' & \tau C' \end{pmatrix} ...

You could use a \vline:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
$ M = \begin{pmatrix}
\vline & \vline & \vline \\
\lambda A' & \mu B' & \tau C' \\
\vline & \vline & \vline
\end{pmatrix} $
\end{document}
Volume 9, Number 5, 2019, Pages 1706-1718 DOI: 10.11948/20180273 Infinitely many solutions for a zero mass Schrödinger-Poisson-Slater problem with critical growth Liu Yang, Zhisu Liu Keywords: Schrödinger-Poisson-Slater problem, zero mass, critical growth, concentration-compactness principle. Abstract: In this paper, we are concerned with the following Schrödinger-Poisson-Slater problem with critical growth: $$-\Delta u+\Big(u^{2}\star \frac{1}{4\pi |x|}\Big)u=\mu k(x)|u|^{p-2}u+|u|^{4}u \quad \mbox{in } \mathbb{R}^{3}.$$ We use a measure representation concentration-compactness principle of Lions to prove that the $(PS)_{c}$ condition holds locally. Via a truncation technique and Krasnoselskii genus theory, we further obtain infinitely many solutions for $\mu\in(0,\mu^{\ast})$ with some $\mu^{\ast}>0$.
In the previous part of this series, we looked at what decibels are. To put it simply, we can measure a sound, find a representative sound pressure for it, put this into a logarithmic formula, and voilà – we have a sound pressure level in decibels. However, humans cannot hear every sound equally well. The basic calculation of sound pressure level does not take this into account. This means that there are sounds we can hear hardly or not at all, that have the same physical sound pressure level as sounds that we hear well. Therefore, a number of techniques, such as A-weighting and C-weighting, have been developed to let us calculate sound pressure levels that fit our hearing better. In this part, we will discuss how sound consists of different frequencies, how we do not hear these frequencies equally well, and how we can take this into account when calculating sound pressure levels.

Sound and frequency

From part 1, we know that sound is fluctuations in the air pressure. For so-called pure tones, these fluctuations are completely periodic. The sound pressure goes up, down, up, down, and so forth, and every fluctuation takes exactly the same time. The time that a sound wave needs to complete an entire cycle, i.e. go from a sound pressure peak to the next sound pressure peak (and from a sound pressure trough to the next sound pressure trough) is the period of the sound wave, and we can measure it in seconds. For audible sounds, a sound wave’s period can range from some tens of microseconds to some tens of milliseconds. Another way to represent a pure tone is by how many cycles it completes per second. This is the frequency of the tone, and we measure it in Hertz (which we usually abbreviate as Hz). Frequency and period are closely connected quantities. In fact, they are each other’s reciprocals, meaning that you can find the period by dividing 1 by the frequency, and you can find the frequency by dividing 1 by the period. Frequency is tied to how we perceive the sound. We typically say that a tone with a low frequency is ‘deep’ or ‘dark’, and that a tone with a high frequency is ‘high’ or ‘bright’. Here are some examples of tones with different frequencies: a 500 Hz tone and a 2000 Hz tone. Here we can see that the tone at 500 Hz changes more slowly in time than the tone at 2000 Hz. (In other words, the former has a longer period than the latter.) The rightmost graphs show frequency spectra, i.e. which frequencies the two sounds consist of. However, since both sounds are pure tones, each only consists of a single frequency. (I put no numbers on the y-axes here, as I don’t know the sound pressures and sound pressure levels at which you are listening to these sounds. I can’t control your speakers’ volume knob from here!) However, it is rare that we hear sounds that only consist of a single frequency. Most sounds that we hear are not pure tones; they are not even periodic! Still, we can view them as a distribution of various frequencies. We will perceive sounds with mostly low frequencies as ‘dark’ or ‘bass-heavy’, and sounds with mostly high frequencies as ‘bright’ or ‘treble-heavy’. Here are some examples of white noise and pink noise, two sounds with different frequency distributions. The hallmark of white noise is that it always has an equal distribution of every frequency. Pink noise, on the other hand, has more low than high frequencies. Thus, we perceive pink noise as ‘darker’ than white noise.

How well do we hear different sounds?
We often say that young people with ‘normal hearing’ can hear frequencies from around 20 Hz to around 20 000 Hz. This is not really wrong, but the truth is more nuanced. First of all, there are large variations between people. My own hearing is not quite like yours. You may be able to hear quiet sounds that I cannot, or vice versa. When we want to say something general about human hearing, we must therefore base it on experiments performed with many different people. Thus, we can determine how a person with average hearing hears. Secondly, we don’t at all hear all frequencies equally well. Our hearing is at its most sensitive, meaning that we can hear quiet sounds best, around 3000–4000 Hz. To be able to hear a sound at a very low frequency, say 50 Hz, as well as we hear a sound at 3000 Hz, the low-frequency sound must be physically stronger, i.e., its sound pressure level must be significantly higher. The same holds for sounds with very high frequency, say 10 000 Hz. How well we hear different sounds has been extensively studied by researchers as far back as the 1930s. One of their results is equal loudness curves, such as the ones you see to the right. Each of these curves shows tones that an average person perceives as equally loud. For example, we can see that the average person perceives a tone at 1000 Hz with a sound pressure level of 40 dB to be as loud as a tone at 100 Hz with a sound pressure level of 62 dB. We see that the threshold of hearing, which is the weakest sound that we can hear, is at 25 dB at 100 Hz and at -7 dB at 3000 Hz. We can also see that our hearing is very frequency dependent at low sound pressure levels, and that this frequency dependence is less pronounced at higher levels.

A-weighting and other frequency weightings

A number of techniques have been established that let us compare the loudness of different sounds in a way that takes our human hearing into account. The most common is A-weighting, which imitates our hearing for sounds at 40 phon, i.e. for sounds that sound as loud as a 1 kHz tone at 40 dB. C-weighting, which is similar for 90 phon, is also sometimes used. (B-weighting and D-weighting also used to exist, but they fell into disuse long ago. When we don’t do any frequency weighting, it’s sometimes called Z-weighting, where the Z stands for ‘zero’.) So, how do we use these frequency weightings in practice? Researchers have designed signal filters that we can use in electronic circuits or on computers. We can run recorded sound signals through these filters to get a modified recording where low and very high frequencies have been suppressed in about the same way as our hearing would. The figure to the right shows how the A-weighting filter changes the level of various sound frequencies, comparing it with the sensitivity of our own hearing.

Calculating A-weighted levels

Thus, when we want to calculate an A-weighted sound pressure level, we do it in the following way. First, we filter a measurement of the sound pressure \(p(t)\), getting an A-weighted sound pressure \(p_\mathrm{A}(t)\). Then, we use that to calculate an A-weighted RMS sound pressure as described in part 1, as \(p_\mathrm{A,rms} = \sqrt{ \frac{1}{T} \int_{t_0}^{t_0+T} p^2_\mathrm{A}(t) \, \mathrm{d}t } \, . \) Finally, we calculate the A-weighted sound pressure level, \(L_\mathrm{A} = 20 \times \log \left( \frac{p_\mathrm{A,rms}}{p_\mathrm{ref}} \right) \, . \) In the same way as unweighted sound pressure levels, we specify A-weighted sound pressure levels in decibels. (It’s not uncommon to write ‘dBA’ or ‘dB(A)’ to show that sound pressure levels have been A-weighted, but this practice is technically incorrect.)
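As a rough added sketch (not from the original post), the A-weighting curve can also be written in closed form in the frequency domain; the constants below are the ones quoted in the IEC 61672 standard, and this shortcut applies per frequency component rather than replacing the time-domain filtering described above:

import math

def a_weight_db(f):
    """A-weighting gain in dB at frequency f (Hz), from the standard formula."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00   # +2.00 dB normalizes to 0 dB at 1 kHz

print(round(a_weight_db(1000), 1))      # ~0.0 dB at 1 kHz
print(round(a_weight_db(100), 1))       # ~ -19.1 dB: low frequencies are suppressed

This matches the behaviour described above: frequencies far below the most sensitive 3000–4000 Hz region are strongly attenuated before the level is computed.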
People also often use other simplified methods to calculate A-weighted sound pressure. We can consider these methods to be good approximations to the procedure described above.

Limits on louder sounds

Noise regulations that specify limits for the sound pressure levels from e.g. roads, railways, and airports almost always use A-weighted limit levels. This is usually a decent approach, as these levels are typically close enough to 40 phon that A-weighting fits our perception of these sounds quite well. However, for very loud situations such as concerts, A-weighting does not represent our perception as well, because it suppresses low-frequency sound too much. Therefore, regulations sometimes also specify C-weighted levels for concerts and such, to ensure that there is also a limit on low-frequency sound.

Next time

We have now looked at how we can calculate sound pressure levels from the sound pressure, either directly or with a frequency weighting that takes into account that we perceive different frequencies differently. Still, we have only looked at sounds that are even enough that we can calculate a representative sound pressure through the RMS procedure. Next time, we will look at how we can describe sound pressure levels for sounds that vary in time. Some examples of this are explosions, passing aircraft, and all the different sounds that you are exposed to within the span of a day. This post is a translation of a Norwegian-language blog post that I originally wrote for acousticsresearchcentre.no. I would like to thank Tron Vedul Tronstad for proofreading and fact-checking the Norwegian original.
This is one of those things that is helpfully studied using an example. A very nice example for this issue is the "wandering block". Informally, the wandering block is the sequence of indicator functions of $[0,1],[0,1/2],[1/2,1],[0,1/4],[1/4,1/2],[1/2,3/4],[3/4,1]$, etc. More explicitly, it is the sequence $g_n(x)$ which comes about from enumerating the "triangular array" $f_{j,k}(x)=\chi_{[j2^{-k},(j+1)2^{-k}]}(x)$, where $k=0,1,\dots$ and $j=0,1,\dots,2^{k}-1$. The sequence $g_n$ converges in measure to the zero function. You can see this as follows. Given $n$, write $g_n=f_{j,k}$, then $m(\{ x : |g_n(x)| \geq \varepsilon \})=2^{-k}$ for any given $\varepsilon \in (0,1)$. Since $k \to \infty$ as $n \to \infty$, this measure goes to $0$ as $n \to \infty$. On the other hand, the sequence $g_n(x)$ does not converge at any individual point, because any given point is in infinitely many of these intervals and also not in infinitely many of these intervals. Thus the sequence $g_n(x)$ contains infinitely many $1$s and infinitely many $0$s, and so it cannot converge. On an infinite measure space, there is an example for the other direction: $f_n(x)=\chi_{[n,n+1]}(x)$ on the line converges pointwise to $0$ but does not converge in measure, since $m(\{ x : |f_n(x)| \geq \varepsilon \})=1$ for $\varepsilon \in (0,1)$. A corollary of Egorov's theorem says that this is impossible on a finite measure space. On a related note, the wandering block example also shows a nice, explicit example of the theorem "if $f_n$ converges in measure then a subsequence converges almost everywhere". Here, for any fixed $j$, the sequence $h_k=f_{j,k}$ (defined for sufficiently large $k$ that this makes sense) converges almost everywhere.
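As a quick numeric companion (an added Python sketch, not part of the original answer), one can enumerate the blocks and watch both effects at once: the measure of the support of $g_n$ shrinks to zero, while any fixed point keeps landing inside blocks at every scale, so its value sequence contains infinitely many 1s and infinitely many 0s.

def blocks(K):                        # supports of the g_n, levels k = 0..K-1
    for k in range(K):
        for j in range(2 ** k):
            yield (j * 2.0 ** -k, (j + 1) * 2.0 ** -k)

x = 0.3
hits = [lo <= x <= hi for (lo, hi) in blocks(12)]
print(sum(hits), len(hits) - sum(hits))          # 12 ones and 4083 zeros so far
print(min(hi - lo for (lo, hi) in blocks(12)))   # 2^-11: support measure -> 0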
Let $V=\mathbb{R}^n$, $\Lambda_r=2\pi r \mathbb{Z}^n \subset V (r>0)$ a lattice; $V^*\cong\mathbb{R}^n$ the dual vector space of $V$, and $\Lambda_r^*=\frac{1}{2\pi r} \mathbb{Z}^n =\text{Hom}(\Lambda_r, \mathbb{Z})$ the dual lattice in $V^*$. $\Lambda_r^*$ can be thought of as the Pontryagin dual of the torus $T^n_r=V/\Lambda_r$; also, $V^*$ can be thought of as the Pontryagin dual of $V$ and can be identified with $V$ via the pairing $\left< x,\xi \right>=e^{2\pi i x\cdot\xi}$. Chapter 4 of Gerald B. Folland's book A course in abstract harmonic analysis is a nice introduction to these materials in the context of locally compact abelian groups; see also this blog post of Terence Tao. It's well known that the Fourier transform gives an isometry of Hilbert spaces $$L^2(V)\cong L^2(V^*).$$ Also, Fourier series give an isometry of Hilbert spaces $$L^2(T^n_r)\cong l^2(\Lambda_r^*).$$ We have the following obvious intuition: as $r>0$ becomes larger and larger, the scale of $T^n_r$ also becomes larger and larger, and finally becomes like $V=\mathbb{R}^n$; on the other hand, the dual lattice $\Lambda_r^*$ becomes more and more 'dense' in $V^*=\mathbb{R}^n$, as the distance between adjacent points is $\frac{1}{2\pi r}$, which goes to 0 as $r$ goes to $\infty$. My question is the following: Can we make it mathematically rigorous, both on the level of functions and on the level of spaces (e.g. $T^n_r \to V$), that the 'limit' of the isomorphisms $$L^2(T^n_r)\cong l^2(\Lambda_r^*)$$ is the isomorphism $$L^2(V)\cong L^2(V^*)$$ as $r$ goes to $\infty$? The bad thing is that $V=\mathbb{R}^n$ is noncompact; however, we have the notion of the Bohr compactification, and I hope this can be helpful. Is there any relation between the tori $T^n_r$ and the Bohr compactification of $\mathbb{R}^n$? Hopefully, if we can do this, then we can do similar things such as interpreting Fourier inversion as a limit. Some aspects (on the level of functions) are discussed in Exercise 40 (Fourier transform on large tori) of Tao's blog.
Answer $c\approx8.8$ $d\approx28.7$ Work Step by Step $\sin17^{\circ}=\frac{c}{30}$ $c=30\times\sin17^{\circ}$ $c\approx8.8$ Using the Pythagorean Theorem to find $d$: $d=\sqrt {30^{2}-c^{2}}$ $d\approx\sqrt{30^{2}-8.8^{2}}$ $d\approx28.7$
The "aromatic" papers by Li, Maxin, Nanopoulos, and Walker have been discussed many times on this blog. They have a new preprint today, A \(125.5\GeV\) Higgs Boson in \({\mathcal F}\)-\(SU(5)\): Imminently Observable Proton Decay, A \(130\GeV\) Gamma-ray Line, and SUSY Multijets & Light Stops at the LHC8Once again, they update their best fits involving their favorite stringy inspired supersymmetric grand unified scenario, the \({\mathcal F}\)-\(SU(5)\) models, and announce that the recently measured data and tiny excesses could be mutually consistent and suggest the looming discovery of several new effects. In that framework, one is led to include new vector-like matter multiplets, the "flippons". In the newest paper, these particles are a bit lighter than before, perhaps closer to \(1\TeV\) or even their best fit \(700\GeV\) than \(3\TeV\) if we mention their typical interval for the flippon masses. That allows them to argue that the Higgs mass should be \(125.5\GeV\), the average of the CMS and ATLAS observed values. The lighter "flippons" contribute greater loop corrections that raise the original value of \(124\GeV\). The same point of the parameter space would also predict the lightest neutralino of mass\[ {\Large m_{\tilde \chi} \approx 145\GeV}. \] Forgive me bigger fonts in the displayed equations. I want all the double subscripts below to look great – and I want to show you how beautiful the \(\rm\LaTeX\)-produced mathematical equations and fonts are. Two such LSPs could pair-annihilate into \[ {\Large \tilde \chi \tilde \chi \to \gamma Z,} \] thus producing the \(130\GeV\) gamma-ray line identified in the Fermi data by Christoph Weniger. Their best fit – and again, I am afraid that what they're doing is the sin of overfitting – implies a gluino and light stop with masses\[ {\Large m_{\tilde g} = 945\GeV,\quad m_{\tilde t_1}=777\GeV } \] which isn't exactly the "very light stop squark" (like one employed by Hooper and Buckley) but it should also gradually become visible in the 2012 data if the stop is really over there. The stop decays mostly to the top quark and the LSP. Their scenario doesn't have to be correct – I am really not promising you it's correct, especially because I carefully observe the status of several other "specific enough models" which make slightly different predictions. But it could be very well correct, too. If that's true, we should expect not only some groundbreaking discovery at the LHC soon; there would be other consequences for the experiments. A clearer evidence for the \(130\GeV\) gamma-ray line could strengthen the picture. But there is one more effect we often neglected in recent years: the proton decay. The simplest grand unified model would predict the proton lifetime something like \(10^{31}\) years which would soon be falsified. Georgi and Glashow only dared to suggest the simplest model and when they were slapped in face by Nature, they immediately left the program. Other people concluded that the idea of grand unification is way too precious to be abandoned this easily; they constructed more nonminimal models that are compatible with the experiments. But they still have to struggle to remain compatible. 
The current lower bound on the lifetime set by the Čerenkov detector of the Super-Kamiokande lab is something like\[ {\Large \tau_{p} \geq 1.4\times 10^{34}\,{\rm years}}. \] A funny thing about their preferred point is that the predicted proton lifetime could be just a little bit higher than that,\[ {\Large \tau_{p} \approx 1.7\times 10^{34}\,{\rm years}}. \] So the proton decay could be seen rather soon. Well, I think it should already be a bit worrying for them that it hasn't been seen yet; a simple way to justify this assertion of mine is to say that a \(1.4\times 10^{34}\)-year lower bound at the 95% confidence level is probably equivalent to a \(1.8\times 10^{34}\)-year lower bound (that's not an exact number, just an estimate to sketch a concept) at the 80% confidence level, so at this reduced level of "certainty", the "best fit" figure predicted by these four authors has been falsified, too. I must mention that in this model, the dominant decay channels for the proton are\[ {\Large p\to e^+ \pi^0,\quad p\to \mu^+ \pi^0} \] of which the first one is a bit more frequent. Needless to say, their claims are huge, bold, and extraordinary. It is a priori very likely that they will be falsified. The discovery of the "dark matter particle" would be a huge event in the history of physics; the discovery of the "stop squark" would be a bombshell; the discovery of the "proton decay" would be stunning; these four physicists are effectively saying that we will witness not just one but all of these events within a few years (if not months). But even though I think that they overestimate the importance of some small excesses, that their fits are overfitting (they are effectively fitting the noise, i.e. trying to find a deep explanation for something that is likely to be just a coincidence), and that they may even exhibit some bias towards "physics right behind the corner" – something I have often criticized and something that's been largely invalidated by the first year and a half of the serious LHC data – such a model would not only be extremely far-reaching in physics, but the probability that it is valid seems sufficiently high to me to justify several articles on this blog. So let's stay tuned. ;-)
The pgfplots package is a powerful tool, based on TikZ, dedicated to creating scientific graphs. pgfplots is a visualization tool that simplifies the inclusion of plots in your documents. The basic idea is that you provide the input data/formula and pgfplots does the rest.
\begin{tikzpicture}
\begin{axis}
\addplot[color=red]{exp(x)};
\end{axis}
\end{tikzpicture}
%Here ends the first plot
\hskip 5pt
%Here begins the 3d plot
\begin{tikzpicture}
\begin{axis}
\addplot3[
    surf,
] {exp(-x^2-y^2)*x};
\end{axis}
\end{tikzpicture}
Since pgfplots is based on TikZ, the plot must be inside a tikzpicture environment. Then the environment declaration \begin{axis}, \end{axis} will set the right scaling for the plot; check the Reference guide for other axis environments. To add an actual plot, the command \addplot[color=red]{exp(x)}; is used. Inside the square brackets some options can be passed; in this case we set the colour of the plot to red. The square brackets are mandatory; if no options are passed, leave a blank space between them. Inside the curly brackets you put the function to plot. It is important to remember that this command must end with a semicolon ;. To put a second plot next to the first one, declare a new tikzpicture environment. Do not insert a new line, but a small blank gap; in this case \hskip 5pt will insert a 5pt-wide blank space. The rest of the syntax is the same, except for the \addplot3[surf,]{exp(-x^2-y^2)*x};. This will add a 3D plot, and the option surf inside the square brackets declares that it's a surface plot. The function to plot must be placed inside the curly brackets. Again, don't forget to put a semicolon ; at the end of the command. Note: It's recommended as a good practice to indent the code - see the second plot in the example above - and to add a comma , at the end of each option passed to \addplot. This way the code is more readable and it is easier to add further options if needed. Including pgfplots in your document is very easy; add the next line to your preamble and that's it:
\usepackage{pgfplots}
Some additional tweaking for this package can be made in the preamble. To change the size of each plot and also guarantee backwards compatibility (recommended), add the next line:
\pgfplotsset{width=10cm,compat=1.9}
This changes the size of each pgfplots figure to 10 centimeters; you may use different units (pt, mm, in). The compat parameter is there for the code to work on package version 1.9 or later. Since LaTeX was not initially conceived with plotting capabilities in mind, when there are several pgfplots figures in your document or they are very complex, it takes a considerable amount of time to render them. To improve the compiling time you can configure the package to export the figures to separate PDF files and then import them into the document; add the code shown below to the preamble:
\usepgfplotslibrary{external}
\tikzexternalize
See this help article for further details on how to set up tikz-externalization in your Overleaf project. pgfplots' 2D plotting functionality is vast, and you can personalize your plots to look exactly the way you want.
Nevertheless, the default options usually give very good results, so all you have to do is feed in the data and LaTeX will do the rest. Plotting mathematical expressions is really easy:
\begin{tikzpicture}
\begin{axis}[
    axis lines = left,
    xlabel = $x$,
    ylabel = {$f(x)$},
]
%Below the red parabola is defined
\addplot [
    domain=-10:10,
    samples=100,
    color=red,
] {x^2 - 2*x - 1};
\addlegendentry{$x^2 - 2x - 1$}
%Here the blue parabola is defined
\addplot [
    domain=-10:10,
    samples=100,
    color=blue,
] {x^2 + 2*x + 1};
\addlegendentry{$x^2 + 2x + 1$}
\end{axis}
\end{tikzpicture}
Let's analyse the new commands line by line:
axis lines = left: the axes are drawn only on the left and bottom sides of the plot.
xlabel = $x$ and ylabel = {$f(x)$}: set the labels for the x and y axes.
\addplot: adds a new plot to the same axis environment.
domain=-10:10: the interval over which the expression is evaluated.
samples=100: the number of points sampled in the domain.
\addlegendentry{$x^2 - 2x - 1$}: adds a legend entry for the most recently added plot.
To add another graph to the plot, just write a new \addplot entry. Scientific research often yields data that has to be analysed. The next example shows how to plot data with pgfplots:
\begin{tikzpicture}
\begin{axis}[
    title={Temperature dependence of CuSO$_4\cdot$5H$_2$O solubility},
    xlabel={Temperature [\textcelsius]},
    ylabel={Solubility [g per 100 g water]},
    xmin=0, xmax=100,
    ymin=0, ymax=120,
    xtick={0,20,40,60,80,100},
    ytick={0,20,40,60,80,100,120},
    legend pos=north west,
    ymajorgrids=true,
    grid style=dashed,
]
\addplot[
    color=blue,
    mark=square,
] coordinates {
    (0,23.1)(10,27.5)(20,32)(30,37.8)(40,44.6)(60,61.8)(80,83.8)(100,114)
};
\legend{CuSO$_4\cdot$5H$_2$O}
\end{axis}
\end{tikzpicture}
There are some new commands and parameters here:
title={Temperature dependence of CuSO$_4\cdot$5H$_2$O solubility}: sets the title of the plot.
xmin=0, xmax=100, ymin=0, ymax=120: set the ranges of the x and y axes.
xtick={0,20,40,60,80,100}, ytick={0,20,40,60,80,100,120}: set the positions of the tick marks on each axis.
legend pos=north west: places the legend box in the upper-left corner.
ymajorgrids=true: enables horizontal grid lines at the major y tick marks; there is a similar xmajorgrids to enable grid lines on the x axis.
grid style=dashed: draws the grid lines dashed.
mark=square: marks each data point with a square.
coordinates {(0,23.1)(10,27.5)(20,32)...}: the list of data points to be plotted.
If the data is in a file, which is the case most of the time, instead of the commands \addplot and coordinates you should use \addplot table {file_with_the_data.dat}; the rest of the options remain valid in this environment. Scatter plots are used to represent information by using some kind of marks; these are common, for example, when computing statistical regressions. Let's start with some data; the sample below shows the structure of the data file we are going to plot (see the end of this section for a link to the LaTeX source and the data file):
GPA ma ve co un
3.45 643 589 3.76 3.52
2.78 558 512 2.87 2.91
2.52 583 503 2.54 2.4
3.67 685 602 3.83 3.47
3.24 592 538 3.29 3.47
2.1 562 486 2.64 2.37
The next example is a scatter plot of the first two columns in this table:
\begin{tikzpicture}
\begin{axis}[
    enlargelimits=false,
]
\addplot+[
    only marks,
    scatter,
    mark=halfcircle*,
    mark size=2.9pt]
table[meta=ma] {scattered_example.dat};
\end{axis}
\end{tikzpicture}
The parameters passed to the axis and addplot environments above can also be used in a data plot. Below is a description of the new ones:
enlargelimits=false: the axis limits are not extended beyond the data range.
only marks: draws only the marks, with no line connecting the data points.
scatter: colours the marks according to the meta data.
meta: selects the data column used as meta (colour) information; see the table option below.
mark=halfcircle*: the mark used for each data point.
mark size=2.9pt: the size of the marks.
table[meta=ma]{scattered_example.dat};: reads the data from the file scattered_example.dat, using the column ma as the meta data.
Bar graphs (also known as bar charts and bar plots) are used to display gathered data, mainly statistical data about a population of some sort.
Bar plots in pgfplots are highly customisable, but here we are going to show an example that 'just works':
\begin{tikzpicture}
\begin{axis}[
    x tick label style={
        /pgf/number format/1000 sep=},
    xlabel=Year,
    enlargelimits=0.05,
    legend style={at={(0.5,-0.1)},
        anchor=north,legend columns=-1},
    ybar interval=0.7,
]
\addplot coordinates {(2012,408184) (2011,408348) (2010,414870) (2009,412156)};
\addplot coordinates {(2012,388950) (2011,393007) (2010,398449) (2009,395972)};
\legend{Men,Women}
\end{axis}
\end{tikzpicture}
The figure starts with the already explained declaration of the tikzpicture and axis environments, but the axis declaration has a number of new parameters:
x tick label style={/pgf/number format/1000 sep=}: removes the thousands separator from the year labels on the x axis.
\addplot: each of the two \addplot commands within this axis adds one series of bars (the ybar interval parameter described below is mandatory for this to work).
enlargelimits=0.05: extends the axis limits by 5% beyond the data range.
legend style={at={(0.5,-0.1)}, anchor=north, legend columns=-1}: places the legend below the plot, centred, with all the entries in a single row.
ybar interval=0.7: draws the data as bars that occupy 70% of each x interval.
The coordinates in this kind of plot determine the base point of the bar and its height. The labels on the y-axis will show up to 4 digits; if the numbers you are working with are greater than 9999, pgfplots will use the same notation as in the example. pgfplots has the 3D plotting capabilities that you may expect in plotting software. There's a simple example about this in the introduction; let's work on something slightly more complex:
\begin{tikzpicture}
\begin{axis}[
    title=Example using the mesh parameter,
    hide axis,
    colormap/cool,
]
\addplot3[
    mesh,
    samples=50,
    domain=-8:8,
] {sin(deg(sqrt(x^2+y^2)))/sqrt(x^2+y^2)};
\addlegendentry{$\frac{\sin(r)}{r}$}
\end{axis}
\end{tikzpicture}
Most of the commands here have already been explained, but there are 3 new things:
hide axis: hides the axes and the bounding box.
colormap/cool: selects the predefined cool colour scheme.
mesh: draws the surface as a wireframe mesh coloured by height.
Note: When working with trigonometric functions, pgfplots uses degrees as the default unit; if the angle is in radians (as in this example), you have to use the deg function to convert to degrees. In pgfplots it is possible to draw contour plots, but the contour data has to be precalculated by an external program. Let's see:
\begin{tikzpicture}
\begin{axis}
[
    title={Contour plot, view from top},
    view={0}{90}
]
\addplot3[
    contour gnuplot={levels={0.8, 0.4, 0.2, -0.2}}
] {sin(deg(sqrt(x^2+y^2)))/sqrt(x^2+y^2)};
\end{axis}
\end{tikzpicture}
This is a plot of some contour lines for the same equation used in the previous section. The value of the title parameter is inside curly brackets because it contains a comma, so we use the grouping brackets to avoid any confusion with the other parameters passed to the \begin{axis} declaration. There are two new commands:
view={0}{90}: sets the viewpoint directly above the plot, so the contours are seen from the top.
contour gnuplot={levels={0.8, 0.4, 0.2, -0.2}}: the contour lines are computed externally by gnuplot; levels is a list of the elevation values at which the contour lines are to be computed.
To plot a set of data as a 3D surface, all we need is the coordinates of each point. These coordinates could be an unordered set or, in this case, a matrix:
\begin{tikzpicture}
\begin{axis}
\addplot3[
    surf,
] coordinates {
(0,0,0) (0,1,0) (0,2,0)

(1,0,0) (1,1,0.6) (1,2,0.7)

(2,0,0) (2,1,0.7) (2,2,1.8)
};
\end{axis}
\end{tikzpicture}
The points passed to the coordinates parameter are treated as contained in a 3 x 3 matrix, with an empty row separating each matrix row. All the options for 3D plots in this article apply to data surfaces. The syntax for parametric plots is slightly different.
Let's see:
\begin{tikzpicture}
\begin{axis}
[
    view={60}{30},
]
\addplot3[
    domain=0:5*pi,
    samples = 60,
    samples y=0,
]
({sin(deg(x))},
{cos(deg(x))},
{x});
\end{axis}
\end{tikzpicture}
There are only two new things in this example: first, the samples y=0 option, which prevents pgfplots from joining the extreme points of the spiral; and second, the way the function to plot is passed to the addplot3 environment. Each parameter function is grouped inside curly brackets and the three parameter functions are delimited by parentheses.
Reference guide
axis: normal plot with linear scaling on both axes.
semilogxaxis: logarithmic scaling for the x axis and normal scaling for the y axis.
semilogyaxis: logarithmic scaling for the y axis and normal scaling for the x axis.
loglogaxis: logarithmic scaling for both the x and y axes.
axis lines: changes the way the axes are drawn; the default is box. Possible values: box, left, middle, center, right, none.
legend pos: position of the legend box. Possible values: south west, south east, north west, north east, outer north east.
mark: type of marks used in data plotting. When a single character is used, the mark's appearance is very similar to the actual character. Possible values: *, x, +, |, o, asterisk, star, 10-pointed star, oplus, oplus*, otimes, otimes*, square, square*, triangle, triangle*, diamond, halfdiamond*, halfsquare*, right*, left*, Mercedes star, Mercedes star flipped, halfcircle, halfcircle*, pentagon, pentagon*, cubes (cubes only works in 3D plots).
colormap: colour scheme to be used in a plot; it can be personalized, but there are some predefined colormaps. Possible values: hot, hot2, jet, blackwhite, bluered, cool, greenyellow, redyellow, violet.
For more information, see the pgfplots package documentation.
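As a recap of the commands covered above, here is a minimal, compilable sketch that combines an expression plot and a coordinate plot in one axis (the coordinate list simply samples the same parabola at integer points, so the square marks land exactly on the red curve):
\documentclass{standalone}
\usepackage{pgfplots}
\pgfplotsset{width=10cm,compat=1.9}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
    axis lines = left,
    xlabel = $x$,
    ylabel = {$f(x)$},
    legend pos = north west,
    ymajorgrids = true,
    grid style = dashed,
]
%The red curve is an expression plot, sampled over an explicit domain
\addplot[
    domain = 0:4,
    samples = 100,
    color = red,
] {x^2 - 2*x - 1};
\addlegendentry{$x^2-2x-1$}
%The blue series is a coordinate (data) plot with square marks
\addplot[
    color = blue,
    mark = square,
] coordinates {
    (0,-1) (1,-2) (2,-1) (3,2) (4,7)
};
\addlegendentry{sampled points}
\end{axis}
\end{tikzpicture}
\end{document}
The standalone class is used here so the example crops to the plot; inside a larger document the same tikzpicture can be dropped into any figure environment.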
In exercises 1 and 2, use the midpoint rule with \(m = 4\) and \(n = 2\) to estimate the volume of the solid bounded by the surface \(z = f(x,y)\), the vertical planes \(x = 1\), \(x = 2\), \(y = 1\), and \(y = 2\), and the horizontal plane \(z = 0\). 1) \(f(x,y) = 4x + 2y + 8xy\) Answer: \(27\) 2) \(f(x,y) = 16x^2 + \frac{y}{2}\) In exercises 3 and 4, estimate the volume of the solid under the surface \(z = f(x,y)\) and above the rectangular region R by using a Riemann sum with \(m = n = 2\) and the sample points to be the lower left corners of the subrectangles of the partition. 3) \(f(x,y) = \sin x - \cos y\), \(R = [0, \pi] \times [0, \pi]\) Answer: \(0\) 4) \(f(x,y) = \cos x + \cos y\), \(R = [0, \pi] \times [0, \frac{\pi}{2}]\) 5) Use the midpoint rule with \(m = n = 2\) to estimate \(\iint_R f(x,y) \,dA\), where the values of the function f on \(R = [8,10] \times [9,11]\) are given in the following table.
x\y |  9  | 9.5 | 10  | 10.5 | 11
8   | 9.8 | 5   | 6.7 | 5    | 5.6
8.5 | 9.4 | 4.5 | 8   | 5.4  | 3.4
9   | 8.7 | 4.6 | 6   | 5.5  | 3.4
9.5 | 6.7 | 6   | 4.5 | 5.4  | 6.7
10  | 6.8 | 6.4 | 5.5 | 5.7  | 6.8
Answer: \(21.3\) 6) The values of the function \(f\) on the rectangle \(R = [0,2] \times [7,9]\) are given in the following table. Estimate the double integral \(\iint_R f(x,y)\,dA\) by using a Riemann sum with \(m = n = 2\). Select the sample points to be the upper right corners of the subsquares of R.
x\y | \(y_0 = 7\) | \(y_1 = 8\) | \(y_2 = 9\)
\(x_0 = 0\) | 10.22 | 10.21 | 9.85
\(x_1 = 1\) | 6.73  | 9.75  | 9.63
\(x_2 = 2\) | 5.62  | 7.83  | 8.21
7) The depth of a children's 4-ft by 4-ft swimming pool, measured at 1-ft intervals, is given in the following table. a. Estimate the volume of water in the swimming pool by using a Riemann sum with \(m = n = 2\). Select the sample points using the midpoint rule on \(R = [0,4] \times [0,4]\). b. Find the average depth of the swimming pool.
x\y | 0 | 1   | 2   | 3   | 4
0   | 1 | 1.5 | 2   | 2.5 | 3
1   | 1 | 1.5 | 2   | 2.5 | 3
2   | 1 | 1.5 | 1.5 | 2.5 | 3
3   | 1 | 1   | 1.5 | 2   | 2.5
4   | 1 | 1   | 1   | 1.5 | 2
Answer: a. 28 \(\text{ft}^3\) b. 1.75 ft 8) The depth of a 3-ft by 3-ft hole in the ground, measured at 1-ft intervals, is given in the following table. a. Estimate the volume of the hole by using a Riemann sum with \(m = n = 3\) and the sample points to be the upper left corners of the subsquares of \(R\). b. Find the average depth of the hole.
x\y | 0   | 1   | 2   | 3
0   | 6   | 6.5 | 6.4 | 6
1   | 6.5 | 7   | 7.5 | 6.5
2   | 6.5 | 6.7 | 6.5 | 6
3   | 6   | 6.5 | 5   | 5.6
9) The level curves \(f(x,y) = k\) of the function \(f\) are given in the following graph, where \(k\) is a constant. a. Apply the midpoint rule with \(m = n = 2\) to estimate the double integral \(\iint_R f(x,y)\,dA\), where \(R = [0.2,1] \times [0,0.8]\). b. Estimate the average value of the function \(f\) on \(R\). Answer: a. 0.112 b. \(f_{ave} \simeq 0.175\); here \(f(0.4,0.2) \simeq 0.1\), \(f(0.4,0.6) \simeq -0.2\), \(f(0.8,0.2) \simeq 0.6\), and \(f(0.8,0.6) \simeq 0.2\) 10) The level curves \(f(x,y) = k\) of the function \(f\) are given in the following graph, where \(k\) is a constant. a. Apply the midpoint rule with \(m = n = 2\) to estimate the double integral \(\iint_R f(x,y)\,dA\), where \(R = [0.1,0.5] \times [0.1,0.5]\). b. Estimate the average value of the function \(f\) on \(R\). 11) The solid lying under the surface \(z = \sqrt{4 - y^2}\) and above the rectangular region \(R = [0,2] \times [0,2]\) is illustrated in the following graph. Evaluate the double integral \(\iint_R f(x,y)\,dA\), where \(f(x,y) = \sqrt{4 - y^2}\), by finding the volume of the corresponding solid.
Answer: \(2\pi\) 12) The solid lying under the plane \(z = y + 4\) and above the rectangular region \(R = [0,2] \times [0,4]\) is illustrated in the following graph. Evaluate the double integral \(\iint_R f(x,y)\,dA\), where \(f(x,y) = y + 4\), by finding the volume of the corresponding solid. In exercises 13 - 20, calculate the integrals by reversing the order of integration. 13) \(\displaystyle \int_{-1}^1\left(\int_{-2}^2 (2x + 3y + 5)\,dx \right) \space dy\) Answer: \(40\) 14) \(\displaystyle \int_0^2\left(\int_0^1 (x + 2e^y + 3)\,dx \right) \space dy\) 15) \(\displaystyle \int_1^{27}\left(\int_1^2 (\sqrt[3]{x} + \sqrt[3]{y})\,dy \right) \space dx\) Answer: \(\frac{81}{2} + 39\sqrt[3]{2}\) 16) \(\displaystyle \int_1^{16}\left(\int_1^8 (\sqrt[4]{x} + 2\sqrt[3]{y})\,dy \right) \space dx\) 17) \(\displaystyle \int_{\ln 2}^{\ln 3}\left(\int_0^1 e^{x+y}\,dy \right) \space dx\) Answer: \(e - 1\) 18) \(\displaystyle \int_0^2\left(\int_0^1 3^{x+y}\,dy \right) \space dx\) 19) \(\displaystyle \int_1^6\left(\int_2^9 \frac{\sqrt{y}}{x^2}\,dy \right) \space dx\) Answer: \(15 - \frac{10\sqrt{2}}{9}\) 20) \(\displaystyle \int_1^9 \left(\int_4^2 \frac{\sqrt{x}}{y^2}\,dy \right)\,dx\) In exercises 21 - 34, evaluate the iterated integrals by choosing the order of integration. 21) \(\displaystyle \int_0^{\pi} \int_0^{\pi/2} \sin(2x)\cos(3y)\,dx \space dy\) Answer: \(0\) 22) \(\displaystyle \int_{\pi/12}^{\pi/8}\int_{\pi/4}^{\pi/3} [\cot x + \tan(2y)]\,dx \space dy\) 23) \(\displaystyle \int_1^e \int_1^e \left[\frac{1}{x}\sin(\ln x) + \frac{1}{y}\cos (\ln y)\right] \,dx \space dy\) Answer: \((e - 1)(1 + \sin 1 - \cos 1)\) 24) \(\displaystyle \int_1^e \int_1^e \frac{\sin(\ln x)\cos (\ln y)}{xy} \,dx \space dy\) 25) \(\displaystyle \int_1^2 \int_1^2 \left(\frac{\ln y}{x} + \frac{x}{2y + 1}\right)\,dy \space dx\) Answer: \(\frac{3}{4}\ln \left(\frac{5}{3}\right) + 2 \ln^2 2 - \ln 2\) 26) \(\displaystyle \int_1^e \int_1^2 x^2 \ln(x)\,dy \space dx\) 27) \(\displaystyle \int_1^{\sqrt{3}} \int_1^2 y \space \arctan \left(\frac{1}{x}\right) \,dy \space dx\) Answer: \(\frac{1}{8}[(2\sqrt{3} - 3) \pi + 6 \space \ln 2]\) 28) \(\displaystyle \int_0^1 \int_0^{1/2} (\arcsin x + \arcsin y)\,dy \space dx\) 29) \(\displaystyle \int_0^1 \int_1^2 xe^{x+4y}\,dy \space dx\) Answer: \(\frac{1}{4}e^4 (e^4 - 1)\) 30) \(\displaystyle \int_1^2 \int_0^1 xe^{x-y}\,dy \space dx\) 31) \(\displaystyle \int_1^e \int_1^e \left(\frac{\ln y}{\sqrt{y}} + \frac{\ln x}{\sqrt{x}}\right)\,dy \space dx\) Answer: \(4(e - 1)(2 - \sqrt{e})\) 32) \(\displaystyle \int_1^e \int_1^e \left(\frac{x \space \ln y}{\sqrt{y}} + \frac{y \space \ln x}{\sqrt{x}}\right)\,dy \space dx\) 33) \(\displaystyle \int_0^1 \int_1^2 \left(\frac{x}{x^2 + y^2} \right)\,dy \space dx\) Answer: \(-\frac{\pi}{4} + \ln \left(\frac{5}{4}\right) - \frac{1}{2} \ln 2 + \arctan 2\) 34) \(\displaystyle \int_0^1 \int_1^2 \frac{y}{x + y^2}\,dy \space dx\) In exercises 35 - 38, find the average value of the function over the given rectangles. 35) \(f(x,y) = -x + 2y\), \(R = [0,1] \times [0,1]\) Answer: \(\frac{1}{2}\) 36) \(f(x,y) = x^4 + 2y^3\), \(R = [1,2] \times [2,3]\) 37) \(f(x,y) = \sinh x + \sinh y\), \(R = [0,1] \times [0,2]\) Answer: \(\frac{1}{2}(2 \space \cosh 1 + \cosh 2 - 3)\) 38) \(f(x,y) = \arctan(xy)\), \(R = [0,1] \times [0,1]\) 39) Let \(f\) and \(g\) be two continuous functions such that \(0 \leq m_1 \leq f(x) \leq M_1\) for any \(x \in [a,b]\) and \(0 \leq m_2 \leq g(y) \leq M_2\) for any \(y \in [c,d]\).
Show that the following inequality is true: \[m_1m_2(b-a)(d-c) \leq \int_a^b \int_c^d f(x) g(y)\,dy\,dx \leq M_1M_2 (b-a)(d-c).\] In exercises 40 - 43, use property v. of double integrals and the answer from the preceding exercise to show that the following inequalities are true. 40) \(\frac{1}{e^2} \leq \iint_R e^{-x^2 - y^2} \space dA \leq 1\), where \(R = [0,1] \times [0,1]\) 41) \(\frac{\pi^2}{144} \leq \iint_R \sin x \cos y \space dA \leq \frac{\pi^2}{48}\), where \(R = \left[ \frac{\pi}{6}, \frac{\pi}{3}\right] \times \left[ \frac{\pi}{6}, \frac{\pi}{3}\right]\) 42) \(0 \leq \iint_R e^{-y}\space \cos x \space dA \leq \frac{\pi}{2}\), where \(R = \left[0, \frac{\pi}{2}\right] \times \left[0, \frac{\pi}{2}\right]\) 43) \(0 \leq \iint_R (\ln x)(\ln y) \,dA \leq (e - 1)^2\), where \(R = [1, e] \times [1, e] \) 44) Let \(f\) and \(g\) be two continuous functions such that \(0 \leq m_1 \leq f(x) \leq M_1\) for any \(x \in [a,b]\) and \(0 \leq m_2 \leq g(y) \leq M_2\) for any \(y \in [c,d]\). Show that the following inequality is true: \[(m_1 + m_2) (b - a)(d - c) \leq \int_a^b \int_c^d |f(x) + g(y)| \space dy \space dx \leq (M_1 + M_2)(b - a)(d - c)\] In exercises 45 - 48, use property v. of double integrals and the answer from the preceding exercise to show that the following inequalities are true. 45) \(\frac{2}{e} \leq \iint_R (e^{-x^2} + e^{-y^2}) \,dA \leq 2\), where \(R = [0,1] \times [0,1]\) 46) \(\frac{\pi^2}{36} \leq \iint_R (\sin x + \cos y)\,dA \leq \frac{\pi^2 \sqrt{3}}{36}\), where \(R = [\frac{\pi}{6}, \frac{\pi}{3}] \times [\frac{\pi}{6}, \frac{\pi}{3}]\) 47) \(\frac{\pi}{2}e^{-\pi/2} \leq \iint_R (\cos x + e^{-y})\,dA \leq \pi\), where \(R = [0, \frac{\pi}{2}] \times [0, \frac{\pi}{2}]\) 48) \(\frac{1}{e} \leq \iint_R (e^{-y} - \ln x) \,dA \leq 2\), where \(R = [0, 1] \times [0, 1]\) In exercises 49 - 50, the function \(f\) is given in terms of double integrals. a. Determine the explicit form of the function \(f\). b. Find the volume of the solid under the surface \(z = f(x,y)\) and above the region \(R\). c. Find the average value of the function \(f\) on \(R\). d. Use a computer algebra system (CAS) to plot \(z = f(x,y)\) and \(z = f_{ave}\) in the same system of coordinates. 49) [T] \(f(x,y) = \int_0^y \int_0^x (xs + yt)\, ds \space dt\), where \((x,y) \in R = [0,1] \times [0,1]\) Answer: a. \(f(x,y) = \frac{1}{2} xy (x^2 + y^2)\); b. \(V = \int_0^1 \int_0^1 f(x,y)\,dx \space dy = \frac{1}{8}\); c. \(f_{ave} = \frac{1}{8}\) 50) [T] \(f(x,y) = \int_0^x \int_0^y [\cos(s) + \cos(t)] \, dt \space ds\), where \((x,y) \in R = [0,3] \times [0,3]\) 51) Show that if \(f\) and \(g\) are continuous on \([a,b]\) and \([c,d]\), respectively, then \(\displaystyle \int_a^b \int_c^d [f(x) + g(y)]\, dy \space dx = (d - c) \int_a^b f(x)\,dx + (b - a) \int_c^d g(y)\,dy\). 52) Show that \(\displaystyle \int_a^b \int_c^d [yf(x) + xg(y)]\,dy \space dx = \frac{1}{2} (d^2 - c^2) \left(\int_a^b f(x)\,dx\right) + \frac{1}{2} (b^2 - a^2) \left(\int_c^d g(y)\,dy\right)\). 53) [T] Consider the function \(f(x,y) = e^{-x^2-y^2}\), where \((x,y) \in R = [-1,1] \times [-1,1]\). a. Use the midpoint rule with \(m = n = 2, 4, \ldots, 10\) to estimate the double integral \(I = \iint_R e^{-x^2 - y^2} dA\). Round your answers to the nearest hundredths. b. For \(m = n = 2\), find the average value of \(f\) over the region \(R\). Round your answer to the nearest hundredths.
c. Use a CAS to graph in the same coordinate system the solid whose volume is given by \(\iint_R e^{-x^2-y^2} dA\) and the plane \(z = f_{ave}\). Answer: a. For \(m = n = 2\), \(I = 4e^{-0.5} \approx 2.43\); b. \(f_{ave} = e^{-0.5} \simeq 0.61\) 54) [T] Consider the function \(f(x,y) = \sin (x^2) \space \cos (y^2)\), where \((x,y) \in R = [-1,1] \times [-1,1]\). a. Use the midpoint rule with \(m = n = 2, 4, \ldots, 10\) to estimate the double integral \(I = \iint_R \sin (x^2) \cos (y^2) \space dA\). Round your answers to the nearest hundredths. b. For \(m = n = 2\), find the average value of \(f\) over the region \(R\). Round your answer to the nearest hundredths. c. Use a CAS to graph in the same coordinate system the solid whose volume is given by \(\iint_R \sin(x^2) \cos(y^2) \space dA\) and the plane \(z = f_{ave}\). In exercises 55 - 56, the functions \(f_n\) are given, where \(n \geq 1\) is a natural number. a. Find the volume of the solids \(S_n\) under the surfaces \(z = f_n(x,y)\) and above the region \(R\). b. Determine the limit of the volumes of the solids \(S_n\) as \(n\) increases without bound. 55) \(f_n(x,y) = x^n + y^n + xy, \space (x,y) \in R = [0,1] \times [0,1]\) Answer: a. \(\frac{2}{n + 1} + \frac{1}{4}\) b. \(\frac{1}{4}\) 56) \(f_n(x,y) = \frac{1}{x^n} + \frac{1}{y^n}, \space (x,y) \in R = [1,2] \times [1,2]\) 57) Show that the average value of a function \(f\) on a rectangular region \(R = [a,b] \times [c,d]\) is approximated by \(f_{ave} \approx \frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n f(x_{ij}^*,y_{ij}^*)\), where \((x_{ij}^*,y_{ij}^*)\) are the sample points of the partition of \(R\), where \(1 \leq i \leq m\) and \(1 \leq j \leq n\). 58) Use the midpoint rule with \(m = n\) to show that the average value of a function \(f\) on a rectangular region \(R = [a,b] \times [c,d]\) is approximated by \[f_{ave} \approx \frac{1}{n^2} \sum_{i,j =1}^n f \left(\frac{1}{2} (x_{i-1} + x_i), \space \frac{1}{2} (y_{j-1} + y_j)\right).\] 59) An isotherm map is a chart connecting points having the same temperature at a given time for a given period of time. Use the preceding exercise and apply the midpoint rule with \(m = n = 2\) to find the average temperature over the region given in the following figure. Answer: \(56.5^{\circ}\) F; here \(f(x_1^*,y_1^*) = 71, \space f(x_2^*, y_1^*) = 72, \space f(x_1^*,y_2^*) = 40, \space f(x_2^*,y_2^*) = 43\), where \(x_i^*\) and \(y_j^*\) are the midpoints of the subintervals of the partitions of \([a,b]\) and \([c,d]\), respectively.
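To see how the midpoint-rule answers above are produced, here is the full computation behind the answer to exercise 1, where \(f(x,y) = 4x + 2y + 8xy\) over \([1,2] \times [1,2]\) with \(m = 4\) and \(n = 2\):
\[
\Delta x = \frac{2-1}{4} = \frac{1}{4},\qquad \Delta y = \frac{2-1}{2} = \frac{1}{2},\qquad \Delta A = \Delta x\,\Delta y = \frac{1}{8},
\]
with midpoints \(x_i^* \in \{1.125,\,1.375,\,1.625,\,1.875\}\) and \(y_j^* \in \{1.25,\,1.75\}\). For fixed \(y^*\), \(f(x,y^*) = (4+8y^*)x + 2y^*\), so
\[
\sum_{i=1}^{4}\sum_{j=1}^{2} f(x_i^*,y_j^*) = \sum_{i=1}^{4}\bigl[(14x_i^* + 2.5) + (18x_i^* + 3.5)\bigr] = 32\sum_{i=1}^{4} x_i^* + 24 = 32(6) + 24 = 216,
\]
and therefore \(V \approx 216\,\Delta A = \frac{216}{8} = 27\), matching the stated answer.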
Having a centerless profinite completion leads to some nice properties. For example, given a short exact sequence $$1\to A\to B\to C\to 1$$ where $A$ is finitely generated and $\hat{A}$ has trivial center, we have an exact sequence $$1\to\hat{A}\to\hat{B}\to\hat{C}\to 1.$$ When does a centerless group $G$ have a centerless profinite completion $\hat{G}$? Does this change if $G$ is finitely generated and/or residually finite? I know that if $G$ is residually finite, then we have an injection $G\to \hat{G}$ with dense image, and so $Z(\hat{G})$ would have to live solely within $(\hat{G}\setminus G)\cup\{e\}$. This seems unlikely, but I don't see the proof.
Bergou, El houcine and Diouane, Youssef and Gratton, Serge. On the use of the energy norm in trust-region and adaptive cubic regularization subproblems. (2017) Computational Optimization and Applications, 68 (3). 533-554. ISSN 0926-6003 Official URL: http://dx.doi.org/10.1007/s10589-017-9929-2 Abstract We consider solving unconstrained optimization problems by means of two popular globalization techniques: trust-region (TR) algorithms and the adaptive regularized framework using cubics (ARC). Both techniques require the solution of a so-called ``subproblem'' in which a trial step is computed by solving an optimization problem involving an approximation of the objective function, called ``the model''. The latter is supposed to be adequate in a neighborhood of the current iterate. In this paper, we address an important practical question related to the choice of the norm for defining the neighborhood. More precisely, assuming here that the Hessian $B$ of the model is symmetric positive definite, we propose the use of the so-called ``energy norm'' -- defined by $\|x\|_B= \sqrt{x^TBx}$ for all $x \in \mathbb{R}^n$ -- in both TR and ARC techniques. We show that the use of this norm induces remarkable relations between the trial steps of both methods that can be used to obtain efficient practical algorithms. We furthermore consider the use of truncated Krylov subspace methods to obtain an approximate trial step for large-scale optimization. Within the energy norm, we obtain line search algorithms along the Newton direction, with a special backtracking strategy and an acceptability condition in the spirit of TR/ARC methods. The new line search algorithm, derived from ARC, enjoys a worst-case iteration complexity of $\mathcal{O}(\epsilon^{-3/2})$. We show the good potential of the energy norm on a set of numerical experiments. The definitive version is available at https://link.springer.com/article/10.1007/s10589-017-9929-2
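As a small illustration of the norm defined in the abstract (the matrix and vector here are made-up example values, not taken from the paper): for
\[
B = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}, \qquad x = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \|x\|_B = \sqrt{x^T B x} = \sqrt{1 + 4} = \sqrt{5},
\]
whereas the Euclidean norm gives \(\|x\|_2 = \sqrt{2}\). The energy norm thus weights each direction by the curvature of the model, which is what makes the TR and ARC subproblems align so closely with the Newton direction.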
Self-Consistent Schrödinger-Poisson Results for a Nanowire Benchmark The Schrödinger-Poisson Equation multiphysics interface simulates systems with quantum-confined charge carriers, such as quantum wells, wires, and dots. Here, we examine a benchmark model of a GaAs nanowire to demonstrate how to use this feature in the Semiconductor Module, an add-on product to the COMSOL Multiphysics® software. The Schrödinger-Poisson Equation Multiphysics Interface The Schrödinger-Poisson Equation multiphysics interface, available as of COMSOL Multiphysics® version 5.4, creates a bidirectional coupling between the Electrostatics interface and the Schrödinger Equation interface to model charge carriers in quantum-confined systems. The electric potential from the electrostatics contributes to the potential energy term in the Schrödinger equation. A statistically weighted sum of the probability densities from the eigenstates of the Schrödinger equation contributes to the space charge density in the electrostatics. All spatial dimensions (1D, 1D axial symmetry, 2D, 2D axial symmetry, and 3D) are supported. Solving the Schrödinger-Poisson System The Schrödinger-Poisson system is special in that a stationary study is necessary for the electrostatics, and an eigenvalue study is necessary for the Schrödinger equation. To solve the two-way coupled system, the Schrödinger equation and Poisson's equation are solved iteratively until a self-consistent solution is obtained. The iterative procedure consists of the following steps: Step 1 To provide a good initial condition for the iterations, we solve Poisson's equation
\nabla \cdot (\epsilon \nabla V) = -\rho \qquad (1)
for the electric potential, V, in which \epsilon is the permittivity and \rho is the space charge density. In this initialization step, \rho is given by the best initial estimate from physical arguments; for example, using the Thomas-Fermi approximation. Step 2 The electric potential, V, from the previous step contributes to the potential energy term, V_e, in the Schrödinger equation:
V_e = q V \qquad (2)
where q is the charge of the carrier particle, which is given by
q = z_q e \qquad (3)
where z_q is the charge number and e is the elementary charge. Step 3 With the updated potential energy term given by Eq. 2, the Schrödinger equation is solved, producing a set of eigenenergies, E_i, and a corresponding set of normalized wave functions, \Psi_i. Step 4 The particle density profile, n_\mathrm{sum}, is computed using a statistically weighted sum of the probability densities
n_\mathrm{sum} = \sum_i N_i |\Psi_i|^2 \qquad (4)
where the weight, N_i, is given by integrating the Fermi-Dirac distribution for the out-of-plane continuum states (thus depending on the spatial dimension of the model):
N_i = g_i \frac{m_d k_B T}{\pi \hbar^2}\, F_0\!\left(\frac{E_f - E_i}{k_B T}\right) \quad \text{(two out-of-plane directions, 1D models)} \qquad (5)
N_i = g_i \sqrt{\frac{2 m_d k_B T}{\pi \hbar^2}}\; F_{-1/2}\!\left(\frac{E_f - E_i}{k_B T}\right) \quad \text{(one out-of-plane direction, 2D models)} \qquad (6)
N_i = \frac{g_i}{1 + \exp\!\left(\frac{E_i - E_f}{k_B T}\right)} \quad \text{(no out-of-plane direction, 3D models)} \qquad (7)
where g_i is the valley degeneracy factor, E_f is the Fermi level, k_B is the Boltzmann constant, T is the absolute temperature, m_d is the density of state effective mass, and F_0 and F_{-1/2} are Fermi-Dirac integrals. For simplicity, the weighted sum in Eq. 4 shows only one index, i, for the summation. There can be, of course, more than one index in the summation. For example, in the nanowire model discussed here, the summation is over both the azimuthal quantum number and the eigenenergy levels (for each azimuthal quantum number). Step 5 Given the particle density profile, n_\mathrm{sum}, we reestimate the space charge density, \rho, and then re-solve Poisson's equation to obtain a new electric potential profile, V. The straightforward formula for the new space charge density
\rho = q\, n_\mathrm{sum} \qquad (8)
almost always leads to divergence of the iterations.
A much better estimate is given by
\rho = q\, n_\mathrm{sum} \exp\!\left(-\frac{q (V - V_\mathrm{old})}{k_B T + \alpha}\right) \qquad (9)
where V_\mathrm{old} is the electric potential from the previous iteration and \alpha is an additional tuning parameter. The formula is motivated by the observation that the particle density, n_\mathrm{sum}, is the result from V_\mathrm{old} and would change once Poisson's equation is re-solved to obtain a new V. In other words, Eq. 8 can be written more explicitly as
\rho = q\, n_\mathrm{sum}(V_\mathrm{old}) \qquad (10)
since n_\mathrm{sum} is the result from V_\mathrm{old}, and \rho is used to re-solve Poisson's equation to get a new V. To achieve a self-consistent solution, a better formula would be
\rho = q\, n_\mathrm{sum,new}(V) \qquad (11)
At this point, n_\mathrm{sum,new} is unknown to us, since it comes from the solution to the Schrödinger equation in the next iteration. However, we can formulate a prediction for it using Boltzmann statistics, which provides a simple exponential relation between the potential energy, V_e = qV, and the particle density, n_\mathrm{sum}:
n_\mathrm{sum,new} \approx n_\mathrm{sum} \exp\!\left(-\frac{q (V - V_\mathrm{old})}{k_B T}\right) \qquad (12)
This leads to Eq. 9 for the case of \alpha = 0. This works well at high temperatures, where Boltzmann statistics is a good approximation. At lower temperatures, setting \alpha to a positive number helps accelerate convergence. Step 6 Once a new electric potential profile, V, is obtained by re-solving Poisson's equation, compare it with the electric potential from the previous iteration, V_\mathrm{old}. If the two profiles agree within the desired tolerance, then self-consistency is achieved; otherwise, go to step 2 to continue the iteration. A dedicated Schrödinger-Poisson study type is available to automatically generate the steps outlined above in the solver sequence. Benchmark Example: The Nanowire Model The GaAs nanowire tutorial model is based on a paper by J.H. Luscombe, A.M. Bouchard, and M. Luban titled "Electron confinement in quantum nanostructures: Self-consistent Poisson-Schrödinger theory". Given the assumption of an infinite length and cylindrical symmetry, we choose the 1D axisymmetric space dimension. We then select the Schrödinger-Poisson Equation multiphysics interface under the Semiconductor branch, which adds the Schrödinger Equation and Electrostatics interfaces together with the Schrödinger-Poisson Coupling multiphysics coupling in the Model Builder. Selecting the Schrödinger-Poisson Equation interface for the nanowire model. Following the description in the paper, the radius of the nanowire is set to 50 nm. The electron effective mass is set to 0.067 times the free electron mass (as suggested by the Fermi-temperature result in the paper), and the dielectric constant is assumed to be 12.9. The Fermi energy level in the model is set to 0 V and the electric potential at the wall to -0.7 V in order to match the Fermi-level-pinning boundary condition described by the researchers. We model the case of 2×10^18 cm^-3 uniform ionized dopants at a temperature of 10 K to compare with Figures 2 and 3 in the paper. The numbers above are entered as global parameters in the model. Global parameters for the nanowire model. Following the approach of the paper, we first solve for the Thomas-Fermi approximate solution, then use it as the initial condition for the fully coupled Schrödinger-Poisson equation. The formulas for the Thomas-Fermi approximation are entered as local variables in the model. Local variables for the nanowire model. With the global parameters and local variables defined, it is straightforward to use them to fill the various input fields in the geometry, material, and physics nodes in the Model Builder.
Here are a few things to note: The azimuthal quantum number m is parameterized to allow sweeping and summing over its values, as mentioned above, and is entered in the Settings window of the Schrödinger Equation physics node. Recall from a previous blog post on computing the band gap for superlattices that the eigenvalue scale λ_scale works as a multiplication factor for the dimensionless eigenvalue λ to produce the eigenenergy, E_i (E_i = λ_scale λ). For instance, if λ_scale equals 1 eV, then an eigenvalue of 1.23 indicates an eigenenergy of 1.23 eV. For the Electrostatics interface, an Electric Potential boundary condition is added to set the value at the wall of the nanowire, as mentioned above. In addition, two Space Charge Density domain conditions are added, one for the ionized dopants and the other for the Thomas-Fermi approximation (the latter should be turned off for the Schrödinger-Poisson study). Setting Up the Schrödinger-Poisson Multiphysics Coupling In the Settings window for the Schrödinger-Poisson Coupling multiphysics node, expand the Equation section to see the equations implemented in this node — they should look familiar if you've read the Solving the Schrödinger-Poisson System section above. The Coupled Interfaces section in the settings allows the selection of the two coupled physics interfaces. The Model Input section sets the temperature of the system, as shown in the screenshot below: Upper part of the Settings window for the Schrödinger-Poisson Coupling node. The Particle Density Computation section (screenshot below) specifies the statistically weighted sum of the probability densities, as described in Eq. 4. If the default option of Fermi-Dirac statistics, parabolic band is selected, then Eqs. 5-7 are used to compute the weights, N_i. A user-defined option is also available for entering different expressions for the weights. To take into account the pairs of degenerate azimuthal quantum numbers (m = ±1, ±2, etc.), we use the formula 1+(m>0) for the Degeneracy factor, g_i, which evaluates to 1 for m = 0 and 2 for m > 0. Lower part of the Settings window for the Schrödinger-Poisson Coupling node. The Charge Density Computation section (screenshot above) takes the input for the Charge number, z_q, for Eq. 3. If the default option of Modified Gummel iteration is selected, then Eq. 9 is used to compute the new space charge density, \rho. Other options are also available, including a user-defined option where you can enter your own mathematical expressions. The default expression for the Global error variable, (schrp1.max(abs(V-schrp1.V_old)))/1[V], computes the maximum difference between the electric potential fields from the two most recent iterations, in the unit of V. Note that the prefix schrp1 should match the Name input field of the Schrödinger-Poisson Coupling node, and the variable name V should match the dependent variable name for the Electrostatics interface. These names may change from the default in a more complicated model, and the expression will turn yellow if the names do not match. In this case, some manual editing is needed. Setting Up the Schrödinger-Poisson Study Step The dedicated Schrödinger-Poisson study step under Study 2 automatically generates the self-consistent iterations in the solver sequence. The iteration scheme is outlined in the Solving the Schrödinger-Poisson System section above.
If we are dealing with a completely new problem, then for the Eigenvalue search method menu under the Study Settings section, it is often necessary to use the default Manual search option to find the range of the eigenenergies. Once the range is found, we can switch to the Region search option with appropriate settings for the range and number of eigenvalues in order to ensure that all significant eigenstates are found by the solver. For this tutorial, the estimated energy range is between -0.15 and 0.05 eV. This corresponds to -0.15 and 0.05 for the unitless eigenvalue, as discussed earlier. The real and imaginary parts of the input fields refer to the real and imaginary parts of the eigenvalue, respectively. To look for the eigenenergies of bound states, we set the input fields for the real parts to the expected energy range and set the input fields for the imaginary parts to a small range around 0 to capture numerical noise or slightly leaky quasibound states, as shown below: Upper part of the Settings window for the Schrödinger-Poisson study step. As we have pointed out earlier, the second Space Charge Density domain condition is only used for the Thomas-Fermi approximation solution in Study 1. It is thus disabled under the Physics and Variables Selection section, as shown in the screenshot above. Under the Iterations section, the default option for the Termination method drop-down menu is Minimization of global variable, which automatically updates a result table that displays the history of the global error variable after each iteration during the solution process. The built-in global error variable schrp1.global_err computes the maximum difference between the electric potential fields from the two most recent iterations, in the unit of V, as already configured in the Schrödinger-Poisson Coupling multiphysics node. (Note that the prefix schrp1 should match the Name input field of the Schrödinger-Poisson Coupling node.) Setting the tolerance to 1E-6 thus means that the iteration ends after the maximum difference is less than 1 μV. See the screenshot below for these settings: Lower part of the Settings window for the Schrödinger-Poisson study step. Under the Values of Dependent Variables section, we select the Thomas-Fermi approximate solution from Study 1 as the initial condition for this study. We then use the Auxiliary sweep functionality to solve for a list of nonnegative azimuthal quantum numbers m. The negative ones are taken into account using the formula 1+(m>0) for the degeneracy factor, g_i, as discussed earlier. The dedicated solver sequence automatically performs the statistically weighted sum of the probability densities for all of the eigenstates. Examining the Self-Consistent Results The solver converges in eight iterations thanks to the good initial condition provided by the Thomas-Fermi approximation and the good forward estimate of the space charge density given by Eq. 9. The plot of the electron density, potential energy, and partial orbital contributions agrees well with the figure published in the reference paper. Comparison of the electron density, potential energy, and partial orbital contributions with the figure published in the reference paper. The plot below shows the Friedel-type spatial oscillations present in both the electron density and the potential energy profiles. Zoomed-in plot of the Friedel-type spatial oscillations in the electron density and potential energy profiles.
Next Step In this blog post, we have demonstrated that the Schrödinger-Poisson Equation interface and the Schrödinger-Poisson study type make it simple to set up and solve a Schrödinger-Poisson system, using the Self-Consistent Schrödinger-Poisson Results for a GaAs Nanowire benchmark model as an example. To try this model yourself, click the button below to go to the Application Gallery, where you can download the documentation and, with a valid software license, the MPH-file for this tutorial. We hope you find these new features useful and we would love to hear how you apply them to your research. Reference J.H. Luscombe, A.M. Bouchard, and M. Luban, "Electron confinement in quantum nanostructures: Self-consistent Poisson-Schrödinger theory," Phys. Rev. B, vol. 46, no. 16, p. 10262, 1992.
Answer An angle of 1 radian subtends an arc length that is equal to the radius: if a radius with a length of $r$ units rotates through an angle of 1 radian, then the arc length is $r$ units. Work Step by Step Radian measure is a unit that is used to measure an angle. If a radius with a length of $r$ units rotates through an angle of 1 radian, then the arc length is $r$ units; that is, an angle of 1 radian subtends an arc length that is equal to the radius. We can use the conversion factor $\frac{180^{\circ}}{\pi}$ to convert from radians to degrees: $(1~radian) \times \frac{180^{\circ}}{\pi} \approx 57.3^{\circ}$ An angle of 1 radian is approximately equal to $57.3^{\circ}$
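A short numerical illustration of this definition (the radius and angle here are made-up example values): the arc length formula $s = r\theta$, with $\theta$ in radians, gives
\[
r = 6~\text{cm},\quad \theta = 2~\text{radians} \quad\Longrightarrow\quad s = r\theta = 12~\text{cm},
\]
and converting the angle to degrees, $2 \times \frac{180^{\circ}}{\pi} \approx 114.6^{\circ}$.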
Definition: $\mathbf{X}$ denotes the random vector $(X_1, X_2, \ldots, X_n)$. The mutual information between $\mathbf{X}$ and $\mathbf{Y}$, $I(\mathbf{X};\mathbf{Y})$, is determined by the joint law $p(\mathbf{X},\mathbf{Y})$. Given two random vectors $\mathbf{X}$ and $\mathbf{Y}$, characterized by a joint probability distribution ${p_{_{\mathbf{XY}}}}$, the probability distribution of $\frac{1}{n}I(\mathbf{X};\mathbf{Y})$ is referred to as the mutual information rate spectrum. In addition, the spectral-inf mutual information rate is defined as [1]: $\operatorname{p-}\liminf_{n \to \infty} \frac{1}{n}I(\mathbf{X};\mathbf{Y}) \triangleq \sup \left\{ \beta : \lim_{n \to \infty} \mathbb{P}\left[\tfrac{1}{n}I(\mathbf{X};\mathbf{Y}) < \beta \right] = 0\right\}$ and, respectively, the spectral-sup mutual information rate is defined as: $\operatorname{p-}\limsup_{n \to \infty} \frac{1}{n}I(\mathbf{X};\mathbf{Y}) \triangleq \inf \left\{ \alpha : \lim_{n \to \infty} \mathbb{P}\left[\tfrac{1}{n}I(\mathbf{X};\mathbf{Y}) > \alpha \right] = 0\right\}$ My Question: What do these definitions say? I am interested in some intuition and in measure-theoretic arguments in terms of convergence, analogous to what we know about $\liminf$ and $\limsup$ of sequences. Any help would be appreciated. [1] T. S. Han, Information-Spectrum Methods in Information Theory, Springer, 2002.
The factorial function is primitive recursive, and therefore definable by a $\Sigma_1$ formula. Is it also definable by a $\Delta_0$ formula (i.e., with bounded quantifiers)? If not, why? (Sorry for the earlier confusion.) Yes, the graph of factorial is $\Delta_0$. First, the problem can be restated in terms of computational complexity. By a characterization going back to Bennett, $\Delta_0$-definable predicates are exactly those computable in the linear-time hierarchy. A convenient sufficient condition is provided by Nepomnjaščiĭ's theorem [4]: a predicate is in $\Delta_0$ whenever it is decidable by an algorithm working simultaneously in polynomial time, and space $n^\epsilon$ for some $\epsilon<1$. In particular, all predicates computable in logarithmic space are $\Delta_0$. Chiu, Davida, and Litow [2], building on earlier work by Beame, Cook, and Hoover [1], proved that logarithmic space (in fact, log-space uniform $\mathrm{TC}^0$, subsequently improved to fully uniform $\mathrm{TC}^0$ by Hesse, Allender, and Barrington [3]) contains the iterated multiplication problem: given a sequence $x_1,\dots,x_n$ of integers in binary, compute their product. By applying their algorithm to the sequence $1,\dots,n$, we can compute in logarithmic space (and in $\mathrm{TC}^0$) the factorial $n!$ when $n$ is given in unary. Consequently, we can decide in logarithmic space the graph $\{(x,y):y=x!\}$, where both $x,y$ are binary: we test whether $x$ is logarithmically smaller than $y$, and if so, compute its factorial and compare it to $y$. (Note that testing the equality of two log-space computable functions can be done in log space: we do not need to write down the intermediate results, we recompute their individual bits on the fly as needed.) References: [1] P. W. Beame, S. A. Cook, H. J. Hoover, Log depth circuits for division and related problems, SIAM Journal on Computing 15 (1986), no. 4, pp. 994–1003. [2] A. Chiu, G. Davida, B. Litow, Division in logspace-uniform $\mathit{NC}^1$, RAIRO – Theoretical Informatics and Applications 35 (2001), no. 3, pp. 259–275. [3] W. Hesse, E. Allender, D. A. M. Barrington, Uniform constant-depth threshold circuits for division and iterated multiplication, Journal of Computer and System Sciences 65 (2002), no. 4, pp. 695–716. [4] V. A. Nepomnjaščiĭ, Rudimentary predicates and Turing calculations, Soviet Mathematics Doklady 11 (1970), pp. 1462–1465. I didn't look at the above references, and I took this as an exercise for myself. Here is a way to express "$x! = y$" as a $\Delta_0$ formula with two free variables $x$ and $y$. The idea is to check that, for each prime $p \le x$, $p$ divides $y$ the right number of times. The number of times the prime $p$ divides $x!$ is $f(x,p) = \sum_{i\ge 1} \lfloor x/p^i \rfloor$. For example, the number of times $7$ divides $1000!$ is $142+20+2=164$. Note that each element $a_{i} = \lfloor x/p^i \rfloor$ in this sum is obtained from the previous one by $a_{i} = \lfloor a_{i-1}/p \rfloor$. Now, let us recall a method by Nelson of encoding sets using $\Delta_0$ formulas. Consider for example the set $S = \{6,13\}$. Since $6=110_{2}$ and $13=1101_2$ in binary, we encode $S$ by the number $2110211012_4$ (or $2110121102_4$). The $2$'s serve as separators between the elements of the set. Nelson shows how to express "$x \in S$" as a $\Delta_0$ formula. Also, recall the pairing formula $(a,b) = (a+b)(a+b+1)/2+a$. Hence, we can express "$c=(a,b)$" by the formula $2c=(a+b)(a+b+1)+2a$.
Given $x$ and a prime $p$, let $S(x,p)$ be the set of pairs $S(x,p) = \{(u_1,v_1), \ldots, (u_j,v_j)\}$ where $u_i = a_{i}$ as defined above, $v_i$ is the partial sum $v_i = \sum_{k\le i} u_k$, and $j$ is the least index for which $u_j<p$. Then, we can express "$a$ encodes $S(x,p)$" using bounded quantifiers in the obvious way: We state that $a$ contains the first pair $(u_1,v_1)$, that $a$ contains a pair $(u,v)$ with $u<p$, and that each pair $(u,v)\in a$ with $u<u_1$ is obtained from another pair $(u',v')\in a$ by $u = \lfloor u'/p\rfloor$, $v = v'+u$. Next, we can express "$z = f(x,p)$" by \begin{multline*} (\exists a\le y)\, (\exists b\le a)\, (\exists c\le b)\, (\exists d\le b)\,\\ (\text{$a$ encodes $S(x,p)$}\, \wedge\, b\in a\, \wedge\, b=(c,d)\, \wedge\, c<p\, \wedge\, d=z) \end{multline*} (Recall that $y$ is supposed to be the huge number $x!$.) Next, recall how to express "$a^b=c$" with a $\Delta_0$ formula using repeated squaring: We state that there exists a set of pairs $S$ such that: (1) We have $(0,1)\in S$; (2) for each $(i,j)$ in $S$ with $i\ge 1$ there exists another pair $(i',j')\in S$ where either $i=2i'$ and $j = (j')^2$, or $i = 2i'+1$ and $j = a(j')^2$; and (3) we have $(b,c) \in S$. For example, if $b=9$ then $S = \{(0,1),(1,a),(2,a^2),(4,a^4),(9,a^9)\}$. The encoding of $S$ can be bounded by something like $c^2\log^2 c$, which is at most $2c^3$. Finally, we state that $y=x!$ by stating that, for each prime $p\le x$, $n$ divides $y$ for $n=p^k$ and $k = f(x,p)$, but $np$ does not divide $y$; and that $y$ is not divisible by any prime $p$ in the range $x+1\le p\le y$.
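To make the construction concrete, here is the set $S(x,p)$ for the small (illustrative) case $x=100$, $p=3$. The iterated quotients are
\[
a_1 = \lfloor 100/3 \rfloor = 33,\quad a_2 = \lfloor 33/3 \rfloor = 11,\quad a_3 = \lfloor 11/3 \rfloor = 3,\quad a_4 = \lfloor 3/3 \rfloor = 1,
\]
so the pairs $(u_i, v_i)$ of values and partial sums are
\[
S(100,3) = \{(33,33),\,(11,44),\,(3,47),\,(1,48)\},
\]
and the iteration stops at $u_4 = 1 < 3$. Hence $f(100,3) = 48$, i.e. $3^{48}$ divides $100!$ but $3^{49}$ does not, in agreement with Legendre's formula $33+11+3+1=48$.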
Let $p \in (1, \infty)$. Then there is a sequence $f_k$ satisfying 1) $f_k \in L^q$ for all $1 \leq q < \infty$ 2) $f_k$ converges to $0$ in $L^q$ for all $q \in [1,p)$ 3) $f_k$ diverges in $L^p$ $\textbf{Attempt}$ Since the integral for $1 \leq q < p$ converges to $0$ and at exactly $p$ it diverges, I think of $\int \frac{1}{x^k}\,dx$, whose antiderivative is either $\log(x)$ (for $k=1$) or $\frac{1}{x^l}$ (for $k \neq 1$), where $\log(x)$ diverges at infinity and $\frac{1}{x^l}$ converges to $0$ as $x \rightarrow \infty$. So I try to choose $f_k$ so that $\int_1^k |f_k|^q\,dx$ behaves like $\frac{1}{k^{l_k}}$ for some $l_k > 0$ when $q < p$, and like $\log(k)$ when $q = p$. Somehow I feel it is impossible to find such a function this way since $q < p$. Any suggestions? Any ideas for better candidate functions?
Electronic Journal of Probability Electron. J. Probab., Volume 22 (2017), paper no. 20, 18 pp. Inversion, duality and Doob $h$-transforms for self-similar Markov processes Abstract We show that any $\mathbb{R}^d\setminus\{0\}$-valued self-similar Markov process $X$, with index $\alpha>0$, can be represented as a path transformation of some Markov additive process (MAP) $(\theta,\xi)$ in $S_{d-1}\times\mathbb{R}$. This result extends the well-known Lamperti transformation. Let us denote by $\widehat{X}$ the self-similar Markov process which is obtained from the MAP $(\theta,-\xi)$ through this extended Lamperti transformation. Then we prove that $\widehat{X}$ is in weak duality with $X$, with respect to the measure $\pi(x/\|x\|)\|x\|^{\alpha-d}dx$, if and only if $(\theta,\xi)$ is reversible with respect to the measure $\pi(ds)\,dx$, where $\pi(ds)$ is some $\sigma$-finite measure on $S_{d-1}$ and $dx$ is the Lebesgue measure on $\mathbb{R}$. Moreover, the dual process $\widehat{X}$ has the same law as the inversion $(X_{\gamma_t}/\|X_{\gamma_t}\|^2, t\ge 0)$ of $X$, where $\gamma_t$ is the inverse of $t\mapsto\int_0^t \|X_s\|^{-2\alpha}\,ds$. These results allow us to obtain excessive functions for some classes of self-similar Markov processes, such as stable Lévy processes. Citation: Alili, Larbi; Chaumont, Loïc; Graczyk, Piotr; Żak, Tomasz. Inversion, duality and Doob $h$-transforms for self-similar Markov processes. Electron. J. Probab. 22 (2017), paper no. 20, 18 pp. doi:10.1214/17-EJP33.
AP Statistics Curriculum 2007 Normal Std - From SOCR General Advance-Placement (AP) Statistics Curriculum - Standard Normal Variables and Experiments Standard Normal Distribution The Standard Normal Distribution is a continuous distribution with density function $f(x)={1\over\sqrt{2\pi}}e^{-x^2 \over 2}$ and cumulative distribution function $\Phi(y)=\int_{-\infty}^{y}{1\over\sqrt{2\pi}}e^{-x^2 \over 2}dx$. Why are these two functions, f(x), Φ(y), well-defined density and distribution functions, i.e., why is $f(x)\ge 0$ and $\int_{-\infty}^{\infty}f(x)dx=1$? See the appendix below. Note that the following exact areas are bound between the Standard Normal Density Function and the x-axis on these symmetric intervals around the origin: The area: -1.0 < x < 1.0 = 0.8413 - 0.1587 = 0.6826 The area: -2.0 < x < 2.0 = 0.9772 - 0.0228 = 0.9544 The area: -3.0 < x < 3.0 = 0.9987 - 0.0013 = 0.9974 Note that the inflection points (f''(x) = 0) of the Standard Normal density function are x = ±1. The Standard Normal distribution is also a special case of the more general normal distribution where the mean is set to zero and the variance is set to one. The Standard Normal distribution is often called the bell curve because the graph of its probability density resembles a bell. Experiments Suppose we decide to test the state of 100 used batteries. To do that, we connect each battery to a volt-meter by randomly attaching the positive (+) and negative (-) battery terminals to the corresponding volt-meter's connections. Electrical current always flows from + to -, i.e., the current goes in the direction of the voltage drop. Depending upon which way the battery is connected to the volt-meter, we can observe positive or negative voltage recordings (voltage is just a difference, which forces current to flow from the higher to the lower voltage.) Denote $X_i$ = {measured voltage for battery $i$}; this is a random variable with mean 0 and unit variance. Assume the distribution of each $X_i$ is Standard Normal, $X_i \sim N(0,1)$. Use the Normal Distribution (with mean=0 and variance=1) in the SOCR Distribution applet to address the following questions. This Distributions help-page may be useful in understanding the SOCR Distribution Applet. How many batteries, from the sample of 100, can we expect to have:
Voltage > 1? P(X > 1) = 0.1586, thus we expect 15-16 batteries to have voltage exceeding 1.
|Voltage| > 1? P(|X| > 1) = 1 − 0.682689 = 0.3173, thus we expect 31-32 batteries to have absolute voltage exceeding 1.
Voltage < -2? P(X < -2) = 0.0227, thus we expect 2-3 batteries to have voltage less than -2.
Voltage <= -2? P(X <= -2) = 0.0227, thus we expect 2-3 batteries to have voltage less than or equal to -2.
-1.7537 < Voltage < 0.8465? P(-1.7537 < X < 0.8465) = 0.761622, thus we expect 76 batteries to have voltage in this range.

Appendix

The derivation below illustrates why the standard normal density function, $f(x)=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$, represents a well-defined density function, i.e., $f(x)\ge 0$ and $\int_{-\infty}^{\infty}f(x)\,dx=1$.

Clearly the exponential function is always non-negative (in fact, it is strictly positive for each real argument). To show that $\int_{-\infty}^{\infty}f(x)\,dx=1$, let
$$A=\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx.$$
Then
$$A^2=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{1}{2\pi}e^{-(x^2+w^2)/2}\,dx\,dw.$$
Change variables from Cartesian to polar coordinates: $x=r\cos(\theta)$, $w=r\sin(\theta)$, $0\le\theta\le 2\pi$. Hence $x^2+w^2=r^2$, $e^{-x^2/2}\times e^{-w^2/2}=e^{-r^2/2}$, and the area element is $dx\,dw=r\,dr\,d\theta$. Therefore
$$A^2=\int_{0}^{2\pi}\int_{0}^{\infty}\frac{e^{-r^2/2}}{2\pi}\,r\,dr\,d\theta=\int_{0}^{\infty}e^{-r^2/2}\,d\!\left(\frac{r^2}{2}\right)=\left[-e^{-r^2/2}\right]_0^{\infty}=1,$$
and since $A>0$, it follows that $A=1$.
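As a quick numerical check of the probabilities in the battery exercise above (my addition, not part of the original page), the standard normal CDF can be evaluated in Python via the error function:

    from math import erf, sqrt

    # Standard normal CDF: Phi(x) = (1 + erf(x / sqrt(2))) / 2
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))

    print(1 - Phi(1))                  # P(X > 1)          ~ 0.1587
    print(2 * (1 - Phi(1)))            # P(|X| > 1)        ~ 0.3173
    print(Phi(-2))                     # P(X <= -2)        ~ 0.0228
    print(Phi(0.8465) - Phi(-1.7537))  # middle interval   ~ 0.7616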
Classification of nonoscillatory solutions of nonlinear neutral differential equations

1. Department of Mathematics, Faculty of Arts and Sciences, Eastern Mediterranean University, Famagusta, TRNC, Mersin 10, Turkey
2. School of Pure and Applied Natural Sciences, University of Kalmar, SE-39182 Kalmar, Sweden

Nonoscillatory solutions of the nonlinear neutral differential equation
$(r(t)(x(t)+p(t)x(t-\tau))')'+f(t,x(\sigma_{1}(t)),x(\sigma_{2}(t)),\ldots,x(\sigma_{n}(t)))=0$
have been classified in accordance with their asymptotic behavior.

Keywords: asymptotic behavior, nonoscillatory solutions, nonlinear neutral differential equations, classification of solutions.
Mathematics Subject Classification: Primary: 34K12, 34K40.
Citation: Mustafa Hasanbulli, Yuri V. Rogovchenko. Classification of nonoscillatory solutions of nonlinear neutral differential equations. Conference Publications, 2009, 2009 (Special): 340-348. doi: 10.3934/proc.2009.2009.340
This should not be hard at all; in fact, no one else seems to be having my issue, so I must be missing a point of information or just experience. My goal is to find, with Mathematica, the x, y, z values of the maximum points of the following function:
$$\frac{-\left(1-x^2\right) \left(y^2-4\right)-x^2-y^2+5}{\left(x^2+y^2+1\right)^2}$$
Currently I am trying with Maximize:

    Maximize[(-(1 - x^2) (y^2 - 4) - x^2 - y^2 + 5)/(x^2 + y^2 + 1)^2, {x, y, z}]

I can't make heads or tails of the answer though, so I am doing something wrong; for reference, I found the points with Wolfram|Alpha, which gives
$$\left\{\pm\frac{1}{\sqrt{3}},0,\frac{9}{8}\right\}$$
How do I use, for example, Maximize[] or some other simple function in Mathematica to find those values? I have set the partial derivatives to 0 and solved for the points; that works, but it seems overly complicated. The documentation has me confused: it does this and gets three values, while I try to do the same and get something strange.

Manual: [screenshot elided]

My answer, which is to my eyes wrong: [screenshot elided]

Even if I try to use //N I get strange answers. I kind of do get the x coordinate if I do this, but why that is the case I don't know:

    Maximize[(-(1 - x^2) (y^2 - 4) - x^2 - y^2 + 5)/(x^2 + y^2 + 1)^2, {y, z}]

I have also tried FindMaximum and FindMaxValue. This baffles me; it should be simple, right? Why don't I get the points? Is it because there are two extreme points, perhaps?
EDIT: Thanks to @benrg for pointing out a bug in the previous algorithm. I have revised the algorithm and moved it to the second part, since the explanation is long.

While the other answer focuses more on coding style, this answer will focus more on performance.

Implementation Improvements

I will show some ways to improve the performance of the code in the original post.

The use of group is unnecessary in the for-loop. Also note that if a house has a missing adjacent neighbour, its next state will be the same as the existing neighbour. So the loop can be improved as follows.

    for i in range(len(in_states)):
        if i == 0:
            out_state = in_states[1]
        elif i == len(in_states) - 1:
            out_state = in_states[i - 1]
        else:
            out_state = in_states[i - 1] == in_states[i + 1]
        new_state.append(out_state)

It is usually more efficient to use list comprehensions rather than explicit for-loops to construct lists in Python. Here, you need to construct a list where: (1) the first element is in_states[1]; (2) the last element is in_states[-2]; (3) all other elements are in_states[i - 1] == in_states[i + 1]. In this case, it is possible to use a list comprehension to construct a list for (3) and then add the first and last elements.

    new_states = [in_states[i-1] == in_states[i+1] for i in range(1, len(in_states) - 1)]
    new_states.insert(0, in_states[1])
    new_states.append(in_states[-2])

However, insertion at the beginning of a list requires updating the entire list. A better way to construct the list is to use extend with a generator expression:

    new_states = [in_states[1]]
    new_states.extend(in_states[i-1] == in_states[i+1] for i in range(1, len(in_states) - 1))
    new_states.append(in_states[-2])

An even better approach is to use the unpacking operator * with a generator expression. This approach is more concise and also has the best performance.

    # state_gen is a generator expression for computing new_states[1:-1]
    state_gen = (in_states[i-1] == in_states[i+1] for i in range(1, len(in_states) - 1))
    new_states = [in_states[1], *state_gen, in_states[-2]]

Note that it is possible to unpack multiple iterators / generator expressions into the same list like this:

    new_states = [*it1, *it2, *it3]

Note that if it1 and it3 are already lists, unpacking will make another copy, so it could be less efficient than extending it1 with it2 and it3 if the size of it1 is large.

Algorithmic Improvement

Here I show how to improve the algorithm for more general inputs (i.e. a varying number of houses). The naive solution updates the house states for each day. In order to improve on it, one needs to find a connection between the input states \$s_0\$ and the states \$s_n\$ after some number of days \$n\$ for a direct computation. Let \$s_k[d]\$ be the state of the house at index \$d\$ on day \$k\$ and \$H\$ be the total number of houses.
We first extend the initial state sequence \$s_0\$ into an auxiliary sequence \$s_0'\$ of length \$H'=2H+2\$ based on the following: $$s_0'[d]=\left\{\begin{array}{ll}s_0[d] & d\in[0, H) \\0 & d=H, 2H + 1\\s_0[2H-d] & d\in(H,2H] \\\end{array}\right.\label{df1}\tag{1}$$ The sequence \$s_k'\$ is updated based on the following recurrence, where \$\oplus\$ and \$\%\$ are the exclusive-or and modulo operations, respectively:$$s_{k+1}'[d] = s_k'[(d-1)\%H']\oplus s_k'[(d+1)\%H']\label{df2}\tag{2}$$ Using two basic properties of \$\oplus\$: \$a\oplus a = 0\$ and \$a\oplus 0 = a\$, the relationship (\ref{df1}) can be proved to hold on any day \$k\$ by induction: $$s_{k+1}'[d] = \left\{\begin{array}{ll}s_k'[1]\oplus s_k'[H'-1] = s_k'[1] = s_k[1] = s_{k+1}[0] & d = 0 \\s_k'[d-1]\oplus s_k'[d+1] = s_k[d-1]\oplus s_k[d+1]=s_{k+1}[d] & d\in(0,H) \\s_k'[H-1]\oplus s_k'[H+1] = s_k[H-1]\oplus s_k[H-1] = 0 & d = H \\s_k'[2H-(d-1)]\oplus s_k'[2H-(d+1)] \\\quad = s_k[2H-(d-1)]\oplus s_k[2H-(d+1)] = s_{k+1}[2H-d] & d\in(H,2H) \\s_k'[2H-1]\oplus s_k'[2H+1] = s_k'[2H-1] = s_k[1] = s_{k+1}[0] & d = 2H \\s_k'[2H]\oplus s_k'[0] = s_k[0]\oplus s_k[0] = 0 & d = 2H+1\end{array}\right.$$ We can then verify the following property of \$s_k'\$:$$\begin{eqnarray}s_{k+1}'[d] & = & s_k'[(d-1)\%H'] \oplus s_k'[(d+1)\%H'] \\s_{k+2}'[d] & = & s_{k+1}'[(d-1)\%H'] \oplus s_{k+1}'[(d+1)\%H'] \\ & = & s_k'[(d-2)\%H'] \oplus s_k'[d] \oplus s_k'[d] \oplus s_k'[(d+2)\%H'] \\ & = & s_k'[(d-2)\%H'] \oplus s_k'[(d+2)\%H'] \\s_{k+4}'[d] & = & s_{k+2}'[(d-2)\%H'] \oplus s_{k+2}'[(d+2)\%H'] \\ & = & s_k'[(d-4)\%H'] \oplus s_k'[d] \oplus s_k'[d] \oplus s_k'[(d+4)\%H'] \\ & = & s_k'[(d-4)\%H'] \oplus s_k'[(d+4)\%H'] \\\ldots & \\s_{k+2^m}'[d] & = & s_k'[(d-2^m)\%H'] \oplus s_k'[(d+2^m)\%H'] \label{f1} \tag{3}\end{eqnarray}$$ Based on the recurrence (\ref{f1}), one can directly compute \$s_{k+2^m}'\$ from \$s_k'\$ and skip all the intermediate computations. We can also substitute \$s_k'\$ with \$s_k\$ in (\ref{f1}), leading to the following computations: $$\begin{eqnarray}d_1' & = & (d-2^m)\%H' & \qquad d_2' & = & (d+2^m)\%H' \\d_1 & = & \min(d_1',2H-d_1') & \qquad d_2 & = & \min(d_2', 2H-d_2') \\a_1 & = & \left\{\begin{array}{ll}s_k[d_1] & d_1 \in [0, H) \\0 & \text{Otherwise} \\\end{array}\right. &\qquad a_2 & = & \left\{\begin{array}{ll}s_k[d_2] & d_2 \in [0, H) \\0 & \text{Otherwise} \\\end{array}\right. \\& & & s_{k+2^m}[d] & = & a_1 \oplus a_2 \label{f2}\tag{4}\end{eqnarray}$$ Note that since the sequence \$\{2^i\%H'\}_{i=0}^{+\infty}\$ has no more than \$H'\$ states, it is guaranteed that \$\{s_{k+2^i}\}_{i=0}^{+\infty}\$ has a cycle. More formally, there exists some \$c>0\$ such that \$s_{k+2^{a+c}}=s_{k+2^a}\$ holds for every \$a\$ that is greater than a certain threshold. Based on (\ref{f1}) and (\ref{f2}), this entails that either \$H'|2^{a+c}-2^a\$ or \$H'|2^{a+c}+2^a\$ holds. If \$H'\$ is factorized into \$2^r\cdot m\$ where \$m\$ is odd, we can see that \$a\geq r\$ must hold for either of the divisibilities. That is to say, if we start from day \$2^r\$ and find the next \$t\$ such that \$H'|2^t-2^r\$ or \$H'|2^t+2^r\$, then \$s_{k+2^t}=s_{k+2^r}\$ holds for every \$k\$. This leads to the following algorithm: Input: \$H\$ houses with initial states \$s_0\$, number of days \$n\$ Output: House states \$s_n\$ after \$n\$ days Step 1: Let \$H'\leftarrow 2H+2\$, find the maximal \$r\$ such that \$2^r\mid H'\$ Step 2: If \$n\leq 2^r\$, go to Step 5. Step 3: Find the minimal \$t, t>r\$ such that either \$H'|2^t-2^r\$ or \$H'|2^t+2^r\$ holds.
Step 4: \$n\leftarrow (n-2^r)\%(2^t-2^r)+2^r\$ Step 5: Divide \$n\$ into a power-of-2 sum \$2^{b_0}+2^{b_1}+\ldots+2^{b_u}\$ and calculate \$s_n\$ based on (\ref{f2}) As an example, if there are \$H=8\$ houses, \$H'=18=2^1\cdot 9\$, so \$r=1\$. We can find that \$t=4\$ is the minimal number such that \$18\mid 2^4+2^1=18\$. Therefore \$s_{k+2}=s_{k+2^4}\$ holds for every \$k\geq 0\$. So we reduce any \$n>2\$ to \$(n-2)\%14 + 2\$, and then apply Step 5 of the algorithm to get \$s_n\$. Based on the above analysis, every \$n\$ can be reduced to a number in \$[0, 2^t)\$ and \$s_n\$ can be computed within \$\min(t, \log n)\$ steps using the recurrence (\ref{f2}). So the ultimate time complexity of the algorithm is \$\Theta(H'\cdot \min(t, \log n))=\Theta(H\cdot\min(m,\log n))=\Theta(\min(H^2,H\log n))\$. This is much better than the naive algorithm, which has a time complexity of \$\Theta(H\cdot n)\$.
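To make the five steps concrete, here is a minimal Python sketch of the whole procedure (my addition; it assumes 0/1 states and the XOR update rule \$s_{k+1}[d]=s_k[d-1]\oplus s_k[d+1]\$ used in the derivation, with the stated boundary convention — the function name and details are mine, not the original poster's code):

    def simulate(s0, n):
        """Compute the house states after n days directly, per Steps 1-5."""
        H = len(s0)
        Hp = 2 * H + 2
        # Step 1: maximal r with 2^r | H'
        r = (Hp & -Hp).bit_length() - 1
        # Steps 2-4: if n is large, reduce it using the cycle of 2^i mod H'
        if n > 2 ** r:
            t = r + 1
            while (2 ** t - 2 ** r) % Hp != 0 and (2 ** t + 2 ** r) % Hp != 0:
                t += 1
            n = (n - 2 ** r) % (2 ** t - 2 ** r) + 2 ** r
        # Extended sequence s0' of length H' = 2H + 2, per equation (1)
        s = list(s0) + [0] + list(reversed(s0)) + [0]
        # Step 5: apply recurrence (3) once per set bit of n
        m = 0
        while n:
            if n & 1:
                step = pow(2, m, Hp)
                s = [s[(d - step) % Hp] ^ s[(d + step) % Hp] for d in range(Hp)]
            n >>= 1
            m += 1
        return s[:H]

For \$H=8\$ this reduces any \$n>2\$ to \$(n-2)\%14+2\$ first, exactly as in the example above, and then applies at most \$t=4\$ vectorized XOR passes.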
I apologize about the title, as I would have included the entire identity I am trying to prove analytically, but we are limited to 150 characters. I'm given the following problem in Widder's Advanced Calculus: 11.) $u=f\left(x,y\right),x=r\cos\left(\theta\right),y=r\sin\left(\theta\right).$ Show that \begin{align} \left(\frac{\partial u}{\partial x}\right)^2+\left(\frac{\partial u}{\partial y}\right)^2&=\left(\frac{\partial f}{\partial r}\right)^2+\frac{1}{r^2}\left(\frac{\partial f}{\partial\theta}\right)^2. \end{align} This is what I have done so far: \begin{align} \left(\frac{\partial f\left(x,y\right)}{\partial x}\right)^2+\left(\frac{\partial f\left(x,y\right)}{\partial y}\right)^2&=\left(\frac{\partial f\left(r\cos\left(\theta\right),r\sin\left(\theta\right)\right)}{\partial\left(r\cos\left(\theta\right)\right)}\right)^2+\left(\frac{\partial f\left(r\cos\left(\theta\right),r\sin\left(\theta\right)\right)}{\partial \left(r\sin\left(\theta\right)\right)}\right)^2\\ \end{align} but is there a way to transform this into the right-hand side? Perhaps via some substitution?
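For reference, one standard route (an addition for clarity, not part of the original question) is to differentiate $f$ along $r$ and $\theta$ with the chain rule and then eliminate the trigonometric factors:
\begin{align}
\frac{\partial f}{\partial r}&=\frac{\partial u}{\partial x}\cos\theta+\frac{\partial u}{\partial y}\sin\theta, &
\frac{\partial f}{\partial \theta}&=-\frac{\partial u}{\partial x}\,r\sin\theta+\frac{\partial u}{\partial y}\,r\cos\theta,
\end{align}
so that
\begin{align}
\left(\frac{\partial f}{\partial r}\right)^2+\frac{1}{r^2}\left(\frac{\partial f}{\partial\theta}\right)^2
&=\left(\frac{\partial u}{\partial x}\right)^2+\left(\frac{\partial u}{\partial y}\right)^2,
\end{align}
since $\cos^2\theta+\sin^2\theta=1$ and the cross terms $\pm 2\,\frac{\partial u}{\partial x}\frac{\partial u}{\partial y}\cos\theta\sin\theta$ cancel.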
In a given question about a trigonometric proof I was trying to solve, I arrived at
$$ \tan(x - \pi/4) = -1. $$
I solved it just like any other trig equation, but after I finished the demonstration, I went back to that equation and tried a different approach, applying the inverse function to both sides:
$$ \arctan(\tan(x - \pi/4)) = \arctan(-1) = -\pi/4. $$
But after analysing both sides, I realized I had reached an absurdity by taking $\arctan(-1) = -\pi/4$, since, to succeed in my proof, I had to take $\arctan(-1) = 3\pi/4$. My question is: how do I solve simple trigonometric equations, like the one posted above, by applying the inverse function without reaching an absurdity? Just to mention, the $x$ in the tangent argument equals $\arctan(2) + \arctan(3)$, hence it satisfies the arctan condition.

$$\tan\left(x-\dfrac\pi4\right)=-1\implies x-\dfrac\pi4=m\pi+\arctan(-1)$$ where $m$ is any integer.

As there are infinitely many values of $x$ having the same value of $\tan x$, we usually take the principal value only when using the $\arctan$ function. That is, $\displaystyle -\frac{\pi}{2}<\arctan k<\frac{\pi}{2}$. So, $\displaystyle \arctan(-1)=-\frac{\pi}{4}$ and $\displaystyle \frac{3\pi}{4}=\arctan(-1)+\pi$.

For $\displaystyle \tan\left(x-\frac{\pi}{4}\right)=-1$, $\displaystyle x-\frac{\pi}{4}=n\pi+\arctan(-1)=n\pi-\frac{\pi}{4}$ and hence $x=n\pi$. The value of $x$ may be further restricted by the conditions given in your original problem.

For the problem $x=\arctan(2)+\arctan(3)$, let $u=\arctan(2)$ and $v=\arctan(3)$. Then $\displaystyle \frac{\pi}{4}<u<v<\frac{\pi}{2}$. Since $x=u+v$, $\displaystyle \frac{\pi}{2}<x<\pi$.
$$\tan x=\tan(u+v)=\frac{\tan u+\tan v}{1-\tan u\tan v}=\frac{2+3}{1-(2)(3)}=-1$$
$$x=n\pi+\arctan(-1)=n\pi-\frac{\pi}{4}$$
So, $\displaystyle x=\pi-\frac{\pi}{4}=\frac{3\pi}{4}$.
One thing you're missing seems to be the Oberth effect. To go from LEO to solar system escape velocity, you have to counter Earth's escape velocity, but after that, you get an additional multiplier by doing the burn at a higher initial velocity (at LEO). Your method here also has a problem:

Hence an additional 11.2 km/s - 9.5 km/s = 1.7 km/s is needed to escape the gravitational field of the Earth.

To get to LEO, the 9.5 or 10 km/s is the delta v you need delivered by the engines. But that doesn't mean you're that much closer to escape from Earth's gravity. That's because air drag and gravity drag are both "wasted" impulse; ultimately, they just go to friction. So if you're in LEO:

It took 9.5 km/s to get there (we'll say)
It will take an additional 11.2 - 7.9 = 3.3 km/s to escape the gravity well

Now, going from LEO to hyperbolic orbits is a bit more difficult. I'll use the energy balance, because I find it easiest to understand. The specific orbital energy is:
$$ \epsilon={v^2\over2}-{GM\over{r}} $$
After it's escaped Earth's sphere of influence, the energy balance will be simply:
$$ \epsilon = \frac{ v_{\infty}^2 }{2 } $$
For either escape or dropping into the sun, we have a final velocity in mind. This is at 1 AU from the sun, after we get out of Earth's sphere of influence. Earth is moving at 29.78 km/s, so we need:

To get to the sun: 0 km/s net velocity, i.e., moving 29.78 km/s relative to Earth
To get out of the solar system: 42.1 km/s in the direction of Earth's motion, i.e., 42.1 - 29.78 = 12.32 km/s

Now we need to use the above energy equations to get those velocities after getting out of Earth's sphere of influence. Let's imagine we're halfway through the burn, and are at LEO altitude with escape velocity. So we've spent exactly 9.5 + 3.3 = 12.8 km/s so far. We need to figure out how much more we need in this same burn to shoot for our destination.
$$ v_{\infty} = 29.78 \text{ km/s or } 12.32 \text{ km/s} = \sqrt{ 2 \left( \frac{v^2}{2} - \frac{GM}{r} \right) } $$
Solve this for both cases in terms of $v$ now. For completeness, I use $r=6354.82 \text{ km}$. Everything else is known. The results are:

for getting to the sun: v = 31.8 km/s
for getting out of the solar system: v = 16.65 km/s

These are the numbers for the total velocity you need at LEO altitude. In the story I'm telling, you're at 11.2 km/s at the end of the previous leg, so subtract that number in order to calculate the final burn. Again, the trip is broken up into 3 segments according to my organization, but the last 2 are really the same burn. Let me focus on the escape from our solar system. The three legs are:

A launch that requires a 9.5 km/s burn
The first part of the burn at LEO to get to escape velocity: 3.3 km/s
Continuing that same burn, an additional 16.65 - 11.2 = 5.45 km/s to reach the solar system's escape velocity after you're out of Earth's sphere of influence

The total of all these comes out to 18.25 km/s. If your propellant exhaust velocity is 4 km/s, then your ultimate mass fraction on the launchpad will be about 96-to-1. So a million pound rocket could get 10,436 pounds out of the solar system with this method (I'm not saying it's a good method for this purpose). I hope this clears up the "from lower Earth orbit" part. It's not as simple as adding things up, because you're trying to get the velocity to escape the sun's gravity well while you're still in Earth's gravity well. To do that, you have to include the Oberth effect due to your location within Earth's potential well.
I hope I've demonstrated that correctly.

EDIT: here is a different set of numbers that starts with the radius of the Earth, instead of the "11.2" and "7.9" numbers, which I only used because they were in the prior discussion.

Base Earth radius: 6378.1 km
LEO altitude: 300 km
LEO radius: 6678.1 km
V at LEO: 7.725529305 km/s
Escape V from LEO: 10.92554832 km/s
Burn from LEO to escape: 3.200019015 km/s

V needed at LEO to get 29.78 km/s: 31.81648047 km/s
Extra burn needed past escape V: 20.89093216 km/s
Oberth ratio: 1.425498861 (unitless)

V needed at LEO to get 12.32 km/s: 16.64999789 km/s
Extra burn needed past escape V: 5.724449572 km/s
Oberth ratio: 2.152171985 (unitless)
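To make the energy bookkeeping reproducible, here is a small Python sketch of the calculation (my addition; the constant GM is my assumption, and, following the answer above, the $v_\infty$ energy balance uses the answer's $r = 6354.82$ km while the LEO speeds use the LEO radius):

    import math

    GM = 398600.4418            # km^3/s^2, Earth's gravitational parameter (assumed)
    r_leo = 6378.1 + 300.0      # km, LEO radius
    r_esc = 6354.82             # km, radius used in the v-infinity equation above

    v_leo = math.sqrt(GM / r_leo)         # ~7.7255 km/s, circular speed at LEO
    v_escape = math.sqrt(2 * GM / r_leo)  # ~10.9255 km/s, escape speed at LEO

    for v_inf in (29.78, 12.32):          # drop into the Sun / leave the Sun
        v = math.sqrt(v_inf**2 + 2 * GM / r_esc)
        print(f"v_inf = {v_inf:5.2f} km/s: v at LEO = {v:7.3f} km/s, "
              f"extra burn past escape = {v - v_escape:6.3f} km/s, "
              f"Oberth ratio = {v_inf / (v - v_escape):5.3f}")

This reproduces the table: roughly 31.816 and 16.650 km/s total, 20.891 and 5.724 km/s of extra burn, and Oberth ratios of about 1.425 and 2.152.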
Where does this problem come from? Why would you hope for something like this? I am not trying to be snarky, I am just wondering if there is a true statement that might better fit your situation. At any rate, counterexamples are easy to produce via varieties with a group action such that the variety deforms, but the group action is rigid. Here is one construction. Let $Y$ be a curve of genus $g\geq 2$ with an action of a finite group, $$\mu:G\times Y \to Y, $$ such that (i) the pair $(Y,\mu)$ is rigid, and (ii) there are no points of $Y$ that are fixed by the entire group $G$. In characteristic $0$, the second condition is automatic if $G$ is noncyclic (in characteristic $p$, it is true if the quotient by the maximal quasi-$p$-group is noncyclic), and the first condition is automatic if the quotient map, $Y\to Y/G$, is a map to a rational curve branched over precisely $3$ points. The Klein quartic with its full group of automorphisms is one example. Now let $X'$ be the product $$X' = \prod_{g\in G} Y = \text{Hom}_{\text{Sets}}(G,Y).$$ In other words, $X'$ is the projective $k$-scheme of ordered tuples $x'=(y_g)_{g\in G}$ of elements $y_g\in Y$. There is a projection $\text{pr}_1:X'\to Y$ sending $x'$ to $y_1$. There is a diagonal embedding $$\Delta : Y \to X', \ \ y \mapsto (y)_{g\in G},$$ and the action $\mu$ also induces a second embedding $$\Gamma_\mu:Y \to X', \ \ y \mapsto (\mu(g,y))_{g\in G}.$$ By hypothesis (ii) above, the images $\Delta(Y)$ and $\Gamma_{\mu}(Y)$ are disjoint. Denote the blowing up of $X'$ along the closed subscheme $\Delta(Y)\sqcup \Gamma_{\mu}(Y)$ by $\nu$, $$\nu:X\to X'.$$ Define $f$ to be $\text{pr}_1\circ \nu$. Every fiber is a product of smooth projective curves blown up at two points, and there are deformations of all of these. The base is just $Y$, and this deforms as well. However, the minimal model / canonical model / Albanese image of $X$ is $X'$. Every deformation of a product of curves is a product of curves, cf. the work of van Opstall. The fundamental locus of the Albanese morphism is a disjoint union of two curves. But for either curve, every projection of the curve to a curve factor is an isomorphism. So you can use one of the two fundamental curves, e.g., the curve $\Delta(Y)$, to identify all curve factors with the fundamental curve. Then the second fundamental curve gives a sequence of automorphisms of that curve. Now use that $(Y,\mu)$ is rigid to conclude that $X$ is rigid.
Consider the problem where we have $l$ parties with messages $m_i$ who wish to generate the list $(m_{\pi(1)}, \dots, m_{\pi(l)})$, where $\pi$ is a permutation over the integer range $[1,\dots,l]$. Suppose further that they wish to do this without revealing $\pi$. This problem is essentially what mixnets were designed to solve. They unlink the sender or provider of a series of messages, received in sequence, by shuffling the messages around before forwarding them along. In doing so, the mixnet hides the mapping between the $i$-th input message and the $i$-th output message. In addition to permuting messages before forwarding them to their recipients, mixnets usually also perform some type of decryption or re-encryption so that the input and output messages are more difficult to correlate.

One ideal property for these mixnets is that they work on opaque ciphertexts. In other words, it would be nice if the permutation and translation (decryption or re-encryption) steps operated over ciphertexts without having to observe the message plaintexts or know about the recipient of each message. In fact, this is not farfetched at all. Adida et al. [1] show that any homomorphic encryption scheme can be used to build inefficient decryption and re-encryption shuffling schemes by obfuscation. They also show how to construct efficient shuffling schemes based on the BGN and Paillier encryption schemes. In this post, I'll outline the fundamental idea behind their shuffle algorithm and then describe the basic Paillier cryptosystem. I will detail the generalized Paillier cryptosystem and how it's used for shuffling in a subsequent post.

Let $m = [m_1,\dots,m_l]$ be a vector of $l$ messages we wish to permute by some permutation $\pi$. An easy way to do this is to generate a permutation matrix $\mathbf{A}$ where the rows are permuted according to $\pi$. The matrix-vector product $m' = \mathbf{A}m$ then returns $m$ permuted under $\pi$. Let's take a closer look at how $m'_i$ is computed. By the matrix-vector multiplication algorithm, it holds that $m'_i = \sum_{j=1}^{l} \mathbf{A}_{i,j} m_j$, where exactly one term of the sum is nonzero since $\mathbf{A}$ is a permutation matrix.

What if the entries of $\mathbf{A}$ and $m$ were encrypted under a multiplicative homomorphic encryption scheme? The product of two encryptions would be equal to the encryption of the product, e.g., $\mathbf{A}_{j,i}m_i$. Multiplying by an encrypted "1" is sort of like re-encrypting the input. Moreover, decryption does not reveal $\pi$ if the encryption scheme is semantically secure. Intuitively, this is because the encryption reveals nothing about the plaintext, i.e., the inputs from $\mathbf{A}$.

The authors of [1] use the additive and multiplicative properties of the BGN encryption scheme to implement a decryption shuffle algorithm, i.e., one which decrypts and then outputs a permutation of its inputs. Personally, I find re-encryption shuffling algorithms to be more intriguing, since they do not require the messages to be decrypted. This lets the operation be done by an untrusted proxy. As such, I'll focus the remainder of this post on illustrating their re-encryption shuffle algorithm with some running code. But first, we need to do some review.

The Paillier encryption system is a tuple of algorithms $(\mathsf{G}, \mathsf{E}, \mathsf{D})$ for key generation, encryption, and decryption defined as follows (adapted from [1]):

$\mathsf{G}$: choose primes $p, q$, let $n = pq$ and $\lambda = \mathrm{lcm}(p-1, q-1)$, and set $g = 1 + n$. Output the public key $pk = (n, g)$ and the secret key $sk = \lambda$.
$\mathsf{E}(pk, m)$: choose a random $r \in \mathbb{Z}_n^*$ and output $c = g^m r^n \bmod n^2$.
$\mathsf{D}(sk, c)$: output $p = L(c^{\lambda} \bmod n^2) / L(g^{\lambda} \bmod n^2) \bmod n$, where $L(x) = (x-1)/n$.

To show correctness, assume we encrypted a message $m$ with the public key $pk$ and retrieved the ciphertext $c$. We would expect the output of the decryption algorithm given $c$ and the secret key $sk$ to yield $p = m$.
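As a plaintext warm-up (my addition, not from the original post), the permutation-matrix shuffle looks like this in Python/NumPy:

    import numpy as np

    # Permuting messages with a permutation matrix: row i of A picks out
    # message pi(i), so A @ m returns m permuted under pi.
    pi = [2, 0, 3, 1]                  # example permutation
    A = np.zeros((4, 4), dtype=int)
    for i, j in enumerate(pi):
        A[i, j] = 1
    m = np.array([10, 20, 30, 40])
    print(A @ m)                       # [30 10 40 20]

The encrypted shuffle replaces each multiplication and addition in this product with the corresponding homomorphic ciphertext operations.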
Let’s walk through the decryption computation.

Compute $c^{\lambda} = (g^mr^n)^{\lambda} = g^{m\lambda}r^{n\lambda} = g^{m\lambda}$. By Carmichael’s Theorem [2], $r^{n\lambda} \equiv 1 \mod n^2$.
Compute $g^{m\lambda} = (1 + n)^{m\lambda} = (1 + mn\lambda) \mod n^2$.
Apply $L(x)$ to compute:
$$\frac{L(c^{\lambda} \bmod n^2)}{L(g^{\lambda} \bmod n^2)} = \frac{((1 + mn\lambda) - 1)/n}{((1 + n\lambda) - 1)/n} = \frac{m\lambda}{\lambda} = m \bmod n.$$

The following code implements this standard Paillier cryptosystem. Hopefully, in a following post, I’ll have time to describe how to generalize this algorithm to work in higher-order groups.
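The post's original code listing did not survive extraction; here is a minimal sketch of the scheme exactly as described above (my reconstruction, with tiny hard-coded primes for illustration only — not secure):

    import math
    import random

    def keygen(p, q):
        """G: n = pq, lambda = lcm(p-1, q-1), g = 1 + n."""
        n = p * q
        lam = math.lcm(p - 1, q - 1)
        return (n, n + 1), lam          # pk = (n, g), sk = lambda

    def encrypt(pk, m):
        """E: c = g^m * r^n mod n^2 for random r in Z_n^*."""
        n, g = pk
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

    def L(x, n):
        return (x - 1) // n

    def decrypt(pk, lam, c):
        """D: m = L(c^lam mod n^2) / L(g^lam mod n^2) mod n."""
        n, g = pk
        mu = pow(L(pow(g, lam, n * n), n), -1, n)   # modular inverse of lambda
        return (L(pow(c, lam, n * n), n) * mu) % n

    pk, sk = keygen(61, 53)
    n = pk[0]
    c1, c2 = encrypt(pk, 12), encrypt(pk, 30)
    # Additive homomorphism: the product of ciphertexts decrypts to the sum
    assert decrypt(pk, sk, (c1 * c2) % (n * n)) == 42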
I need to prove or disprove that these two languages are the same. I assume that the languages are the same, because I think every word from $\{a,b\}^*$ can be written as the concatenation of two words $x$ and $y$ with $|x|_a = |y|_b$; such $x$ and $y$ must belong to $\{a,b\}^*$ too. Then I can write and prove this:

$\{a,b\}^* \subseteq \{xy \in \{a,b\}^* \mid |x|_a = |y|_b \} \wedge \{xy \in \{a,b\}^* \mid |x|_a = |y|_b \} \subseteq \{a,b\}^*$

So I assume that $z \in \{a,b\}^*$. Then $z = z_1z_2...z_i$ where $z_1,z_2,...,z_i \in \{a,b\}$.

Here I am stuck, and I don't know (1) how to continue, or (2) whether the proof approach I chose is correct.
We investigate a class of weak solutions, the so-called very weak solutions, to stationary and nonstationary Navier–Stokes equations in a bounded domain $\Omega \subseteq \mathbb{R}^{3}$. This notion was introduced by Amann [3], [4] for the nonstationary case with nonhomogeneous boundary data, leading to a very large solution class of low regularity. Here we are mainly interested in the investigation of the “largest possible” class of solutions u for the more general problem with arbitrary divergence k = div u, boundary data g = u|∂Ω and an external force f, as weak as possible, but maintaining uniqueness. In principle, we will follow Amann’s approach.

Linearized stability of incompressible viscous fluid flows in a thin spherical shell is studied by using the two-dimensional Navier–Stokes equations on a sphere. The stationary flow on the sphere has two singularities (a sink and a source) at the North and South poles of the sphere. We prove analytically for the linearized Navier–Stokes equations that the stationary flow is asymptotically stable. When the spherical layer is truncated between two symmetrical rings, we study eigenvalues of the linearized equations numerically by using power series solutions and show that the stationary flow remains asymptotically stable for all Reynolds numbers.

This paper is devoted to the study of the initial value problem for density dependent incompressible viscous fluids in a bounded domain of $\mathbb{R}^N$ ($N \geq 2$) with $C^{2+\epsilon}$ boundary. Homogeneous Dirichlet boundary conditions are prescribed on the velocity. Initial data are almost critical in terms of regularity: the initial density is in $W^{1,q}$ for some q > N, and the initial velocity has $\epsilon$ fractional derivatives in $L^r$ for some r > N and $\epsilon$ arbitrarily small. Assuming in addition that the initial density is bounded away from 0, we prove existence and uniqueness on a short time interval. This result is shown to be global in dimension N = 2 regardless of the size of the data, or in dimension N ≥ 3 if the initial velocity is small. Similar qualitative results were obtained earlier in dimension N = 2, 3 by O. Ladyzhenskaya and V. Solonnikov in [18] for initial densities in $W^{1,\infty}$ and initial velocities in $W^{2 - \frac{2}{q},q}$ with q > N.

The existence and uniqueness of a solution to the nonstationary Navier–Stokes system having a prescribed flux in an infinite cylinder is proved. We assume that the initial data and the external forces do not depend on $x_3$ and find the solution $(u, p)$ having the following form
$$\mathbf{u}(x,t) = (u_{1}(x',t),\,u_{2}(x',t),\,u_{3}(x',t)),\quad p(x,t) = \tilde{p}(x',t) - q(t)x_{3} + p_{0}(t),$$
where $x' = (x_1, x_2)$. Such solutions generalize the nonstationary Poiseuille solutions.

Explicit formulae for the fundamental solution of the linearized time dependent Navier–Stokes equations in three spatial dimensions are obtained. The linear equations considered in this paper include those used to model rigid bodies that are translating and rotating at a constant velocity. Estimates extending those obtained by Solonnikov in [23] for the fundamental solution of the time dependent Stokes equations, corresponding to zero translational and angular velocity, are established.
Existence and uniqueness of solutions of these linearized problems is obtained for a class of functions that includes the classical Lebesgue spaces $L^p(\mathbb{R}^3)$, 1 < p < ∞. Finally, the asymptotic behavior and semigroup properties of the fundamental solution are established.

We consider the problem of solving numerically the stationary incompressible Navier–Stokes equations in an exterior domain in two dimensions. For numerical purposes we truncate the domain to a finite sub-domain, which leads to the problem of finding so-called “artificial boundary conditions” to replace the boundary conditions at infinity. To solve this problem we construct – by combining results from dynamical systems theory with matched asymptotic expansion techniques based on the old ideas of Goldstein and Van Dyke – a smooth divergence free vector field depending explicitly on drag and lift and describing the solution to second and dominant third order, asymptotically at large distances from the body. The resulting expression appears to be new, even on a formal level. This improves the method introduced by the authors in a previous paper and generalizes it to non-symmetric flows. The numerical scheme determines the boundary conditions and the forces on the body in a self-consistent way as an integral part of the solution process. When compared with our previous paper where first order asymptotic expressions were used on the boundary, the inclusion of second and third order asymptotic terms further reduces the computational cost for determining lift and drag to a given precision by typically another order of magnitude.

We study the initial-boundary value problem for the Stokes equations with Robin boundary conditions in the half-space $\mathbb{R}_+^n$. It is proved that the associated Stokes operator is sectorial and admits a bounded $H^\infty$-calculus on $L_\sigma^q(\mathbb{R}_+^n)$. As an application we also prove a local existence result for the nonlinear initial value problem of the Navier–Stokes equations with Robin boundary conditions.

We study the stability of spatially periodic solutions to the Kawahara equation, a fifth order, nonlinear partial differential equation. The equation models the propagation of nonlinear water-waves in the long-wavelength regime, for Weber numbers close to 1/3 where the approximate description through the Korteweg–de Vries (KdV) equation breaks down. Beyond threshold, Weber number larger than 1/3, this equation possesses solitary waves just as the KdV approximation. Before threshold, true solitary waves typically do not exist. In this case, the origin is surrounded by a family of periodic solutions, and only generalized solitary waves exist, which are asymptotic to one of these periodic solutions at infinity. We show that these periodic solutions are spectrally stable at small amplitude.

This paper is devoted to the study of a LES model to simulate turbulent 3D periodic flow. We focus our attention on the vorticity equation derived from this LES model for small values of the numerical grid size δ. We obtain entropy inequalities for the sequence of corresponding vorticities and corresponding pressures independent of δ, provided the initial velocity $u_0$ is in $L^2_x$ while the initial vorticity $\omega_0 = \nabla \times u_0$ is in $L^1_x$. When δ tends to zero, we show convergence, in a distributional sense, of the corresponding equations for the vorticities to the classical 3D equation for the vorticity.
We prove a general regularity result for fully nonlinear, possibly nonlocal parabolic Cauchy problems under the assumption of maximal regularity for the linearized problem. We apply this result to show joint spatial and temporal analyticity of the moving boundary in the problem of Stokes flow driven by surface tension.

On the basis of semigroup and interpolation-extrapolation techniques, we derive existence and uniqueness results for the Navier–Stokes equations. In contrast to many other papers devoted to this topic, we do not complement these equations with the classical Dirichlet (no-slip) condition, but instead consider stress-free or slip boundary conditions. We also study various regularity properties of the solutions obtained and provide conditions for global existence.

In many natural or artificial flow systems, a fluid flow network succeeds in irrigating every point of a volume from a source. Examples are the blood vessels, the bronchial tree and many irrigation and draining systems. Such systems have recently raised a lot of interest and some attempts have been made to formalize their description, as a finite tree of tubes, and their scaling laws [25], [26]. In contrast, several mathematical models [5], [22], [10] propose an idealization of these irrigation trees, where a countable set of tubes irrigates any point of a volume with positive Lebesgue measure. There is no geometric obstruction to this infinitesimal model, and general existence and structure theorems have been proved. As we show, there may instead be an energetic obstruction. Under the Poiseuille law $R(s) = s^{-2}$ for the resistance of tubes with section s, the dissipated power of a volume irrigating tree cannot be finite. In other terms, infinite irrigation trees seem to be impossible from the fluid mechanics viewpoint. This also implies that the usual analysis performed for the biological models need not impose a minimal size for the tubes of an irrigating tree; the existence of a minimal size can be proven from only the two obvious conditions for such irrigation trees, namely the Kirchhoff and Poiseuille laws.

We consider the Euler equations of barotropic inviscid compressible fluids in the exterior domain. It is well known that, as the Mach number goes to zero, the compressible flows approximate the solution of the equations of motion of inviscid, incompressible fluids. In dimension 2 such a limit solution exists on any arbitrary time interval, with no restriction on the size of the initial data. It is then natural to expect the same for the compressible solution, if the Mach number is sufficiently small. First we study the life span of smooth irrotational solutions, i.e. the largest time interval $T(\epsilon)$ of existence of classical solutions, when the initial data are a small perturbation of size $\epsilon$ from a constant state. Then, we study the nonlinear interaction between the irrotational part and the incompressible part of a general solution. This analysis yields the existence of smooth compressible flow on any arbitrary time interval and with no restriction on the size of the initial velocity, for any Mach number sufficiently small. Finally, the approach is applied to the study of the incompressible limit. For the proofs we use a combination of energy estimates and a decay estimate for the irrotational part.
In this paper, we study the existence and uniqueness of a degenerate parabolic equation, with nonhomogeneous boundary conditions, coming from the linearization of the Crocco equation [12]. The Crocco equation is a nonlinear degenerate parabolic equation obtained from the Prandtl equations with the so-called Crocco transformation. The linearized Crocco equation plays a major role in stabilization problems of fluid flows described by the Prandtl equations [5]. To study the infinitesimal generator associated with the adjoint linearized Crocco equation – with homogeneous boundary conditions – we first study degenerate parabolic equations in which the x-variable plays the role of a time variable. This equation is doubly degenerate: the coefficient in front of ∂x vanishes on a part of the boundary, and the coefficient of the elliptic operator vanishes in another part of the boundary. This makes the proof of uniqueness of the solution very delicate. To overcome this difficulty, a uniqueness result is first obtained for an equation in which the elliptic operator is symmetric, and it is next extended to the original equation by combining an iterative process and a fixed point argument (see Th. 4.9). This kind of argument is also used to prove estimates which cannot be obtained in a classical way.

This paper concerns the regularity of a capillary graph (the meniscus profile of liquid in a cylindrical tube) over a corner domain of angle α. By giving an explicit construction of minimal surface solutions previously shown to exist (Indiana Univ. Math. J. 50 (2001), no. 1, 411–441), we clarify two outstanding questions. Solutions are constructed in the case α = π/2 for contact angle data (γ1, γ2) = (γ, π − γ) with 0 < γ < π/4; these solutions have a jump discontinuity at the corner. This kind of behavior was suggested by numerical work of Concus and Finn (Microgravity sci. technol. VII/2 (1994), 152–155) and Mittelmann and Zhu (Microgravity sci. technol. IX/1 (1996), 22–27). Our explicit construction, however, allows us to investigate the solutions quantitatively. For example, the trace of these solutions, excluding the jump discontinuity, is $C^{2/3}$.

We study the boundary-value problem associated with the Oseen system in the exterior of m Lipschitz domains of an euclidean point space $\mathcal{E}_n$ ($n = 2,3$). We show, among other things, that there are two positive constants $\epsilon$ and α, depending on the Lipschitz character of Ω, such that: (i) if the boundary datum a belongs to $L^q(\partial\Omega)$, with q ∈ [2,+∞), then there exists a solution (u, p), with $\mathbf{u} \in W^{1/q,q}_{\text{loc}}(\Omega)$, and u ∈ $L^\infty(\Omega)$ if a ∈ $L^\infty(\partial\Omega)$, expressed by a simple layer potential plus a linear combination of regular explicit functions; as a consequence, u tends nontangentially to a almost everywhere on ∂Ω; (ii) if a ∈ $W^{1-1/q,q}(\partial\Omega)$, with $q \in [2, 3+\epsilon)$, then ∇u, p ∈ $L^q(\Omega)$, and if a ∈ $C^{0,\mu}(\partial\Omega)$, with μ ∈ [0, α), then $\mathbf{u} \in C^{0,\mu}(\overline{\Omega})$; also, natural estimates hold.

This paper is concerned with the question of linear stability of motionless, spherically symmetric equilibrium states of viscous, barotropic, self-gravitating fluids. We prove the linear asymptotic stability of such equilibria with respect to perturbations which leave the angular momentum, momentum, mass and the position of the center of gravity unchanged. We also give some decay estimates for such perturbations, which we derive from resolvent estimates by means of analytic semigroup theory.
We consider stationary solutions of the incompressible Navier–Stokes equations in three dimensions. We give a detailed description of the fluid flow in a half-space through the construction of an inertial manifold for the dynamical system that one obtains when using the coordinate along the flow as a time.

We investigate the steady compressible Navier–Stokes system of equations in the isentropic regime in a domain with several conical outlets and with prescribed pressure drops. Existence of weak solutions is proved, and an estimate of these solutions with respect to the pressure drops is derived, under the hypothesis γ > 3 where γ is the adiabatic constant.
It's known that for first order theories, $\mathbf{ZFC} \vdash (T \vdash \varphi \leftrightarrow T \models \varphi)$. Why does this not hold in the higher order case (any simple example)? Furthermore, if I replace models by interpretations, does it hold that $T \vdash \varphi$ iff, for all interpretations (in the sense defined by Tarski) $M$ of the language $L(T)$ (the underlying language of the theory $T$) in a theory $T'$, $T' \vdash \varphi^M$, in the higher order case (I'm interested mainly in the second order case)? Thanks in advance.

EDIT For first order languages: An interpretation $I$ of a language $L$ in a language $L'$ is a correspondence that associates to each predicate symbol $P$ of $L$ a predicate symbol $P_I$ of $L'$, and to each function symbol $f$ in $L$ a function symbol $f_I$ of $L'$. Furthermore, given a (first order) theory $T'$ in $L'$, $I$ is said to be an interpretation of $L$ in $T'$ if:

1) There is a fixed predicate symbol $\mathfrak{U}_I$, called the domain, such that $T' \vdash \exists x\, \mathfrak{U}_I x$.

2) For each function symbol $f$ in $L$, $T'\vdash\mathfrak{U}_I x_1 \rightarrow \ldots \rightarrow \mathfrak{U}_Ix_n \rightarrow \mathfrak{U}_If_I x_1 \ldots x_n$.

Moreover, given a theory $T$ in $L$, $I$ is said to be an interpretation of $T$ in $T'$ if for each axiom $\varphi$ of $T$ it holds that $T'\vdash\mathfrak{U}_I x_1 \rightarrow \ldots \rightarrow \mathfrak{U}_Ix_n \rightarrow \varphi^I$, where $\varphi^I$ is defined inductively as $\exists x (\mathfrak{U}_I x \wedge \psi^I)$ if $\varphi$ is $\exists x\psi$, and the other cases (negation, conjunction, predicate symbol, function symbol, etc.) in the obvious way.

The main differences between models and interpretations are that in a model the predicate $\mathfrak{U}_I$ defines a set, and that $T' \vdash \forall y\, (y \text{ axiom of } T) \rightarrow I \models y$, while in an interpretation it just holds that for every axiom $\varphi$ of $T$, $T' \vdash \varphi^I$.
$\newcommand{\si}{\sigma}\newcommand{\Si}{\Sigma}\renewcommand{\c}{\circ}\newcommand{\tr}{\operatorname{tr}}$ Let $X_1:=X$, $X_2:=Y$, $\mu_1:=\mu_x$, $\mu_2:=\mu_y$, $\Si_1:=\Si_x$, $\Si_2:=\Si_y$, $N:=\mathcal{N}_d$. Let $Y_i:=X_i-\mu_i$. Then $X_i=\mu_i+Y_i$, $Y_i\sim N(0,\Si_i)$, $Y_1$ and $Y_2$ are independent, and \begin{equation*} X_1\c X_2=\mu_1\c\mu_2+V+Y_1\c Y_2,\quad\text{where}\quad V:=\mu_1\c Y_2+\mu_2\c Y_1\end{equation*}and $\c$ denotes the Hadamard product. Next, \begin{equation*} V\sim N(0,\Si),\quad\text{where}\quad \Si:=D(\mu_1)\Si_2 D(\mu_1)+D(\mu_2)\Si_1 D(\mu_2)\end{equation*}and $D(\mu)$ is the diagonal matrix whose diagonal entries are the entries of the column matrix $\mu$; we are assuming that the matrix $\Si$ is nonsingular. Hence, \begin{equation*} \Si^{-1/2}(X_1\c X_2-\mu_1\c\mu_2)=Z+\delta,\quad\text{where}\quad \delta:=\Si^{-1/2}(Y_1\c Y_2)\end{equation*}and $Z\sim N(0,I)$, a standard normal random vector in $\mathbb R^d$. Further, \begin{equation*} E\|\delta\|^2\le\|\Si^{-1/2}\|^2 E\|Y_1\c Y_2\|^2=\|\Si^{-1}\| \tr(\Si_1\c\Si_2), \end{equation*}where $\|v\|$ is the Euclidean norm of a vector $v$, and $\|M\|$ and $\tr M$ are the corresponding operator norm and the trace of a matrix $M$, respectively. Thus, we come to the following conclusion: if $\mu_1$, $\mu_2$, $\Si_1$, $\Si_2$ vary in any such way that the matrix $\Si$ is nonsingular and $$\|\Si^{-1}\| \tr(\Si_1\c\Si_2)\to0,$$ then $$X_1\c X_2\approx N(\mu_1\c\mu_2,\Si)$$ in the sense that $\Si^{-1/2}(X_1\c X_2-\mu_1\c\mu_2)$ converges to a standard normal random vector $Z$ in distribution. In the particular case when $\Si_i=\si_i^2 I$ for some scalars $\si_i>0$, the condition $\|\Si^{-1}\| \tr(\Si_1\c\Si_2)\to0$ becomes \begin{equation*} \min\Big(\frac{\mu_1^2}{\si_1^2}+\frac{\mu_2^2}{\si_2^2}\Big)\to\infty, \end{equation*}where $\min v$ denotes the minimum of the entries of a (say, column) matrix $v$. Further specializing this to the case $d=1$, we have \begin{equation*}X_1 X_2\approx N(\mu_1\mu_2,\mu_1^2\si_2^2+\mu_2^2\si_1^2)\quad\text{if}\quad \frac{\mu_1^2}{\si_1^2}+\frac{\mu_2^2}{\si_2^2}\to\infty. \end{equation*} Here, as in the paper An approach to distribution of the product of two random variables you cited, the asymptotic variance is $\mu_1^2\si_2^2+\mu_2^2\si_1^2$, not $\mu_1^2\si_2^2+\mu_2^2\si_1^2+\si_1^2\si_2^2$ as you wrote, even though the two values are relatively close to each other under the condition $\frac{\mu_1^2}{\si_1^2}+\frac{\mu_2^2}{\si_2^2}\to\infty$. However, there are two differences between our conclusion for the special case $d=1$ and the corresponding result in the cited paper: (i) there, the statement is not rigorous, saying that something tends to a "limit" that is in fact varying with $\mu_1$, $\mu_2$, $\si_1$, $\si_2$ and (ii) the condition there for such a "convergence" appears to be that both $\frac{\mu_1}{\si_1}$ and $\frac{\mu_2}{\si_2}$ "increase", whereas here we show that it is enough to have $\frac{\mu_1^2}{\si_1^2}+\frac{\mu_2^2}{\si_2^2}\to\infty$ -- that is, it is enough that at least one of the ratios $\frac{|\mu_1|}{\si_1}$ or $\frac{|\mu_2|}{\si_2}$ go to $\infty$.
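A quick Monte Carlo sanity check of the $d=1$ conclusion (my addition, not part of the original answer; the parameter values are arbitrary):

    import random
    import statistics

    # X1*X2 should be approximately N(mu1*mu2, mu1^2*s2^2 + mu2^2*s1^2)
    # when mu1^2/s1^2 + mu2^2/s2^2 is large.
    mu1, s1, mu2, s2 = 10.0, 1.0, -4.0, 0.5
    prods = [random.gauss(mu1, s1) * random.gauss(mu2, s2) for _ in range(10**5)]

    print(statistics.fmean(prods), mu1 * mu2)                          # ~ -40
    print(statistics.variance(prods), mu1**2 * s2**2 + mu2**2 * s1**2) # ~ 41

The exact variance of the product is $\mu_1^2\sigma_2^2+\mu_2^2\sigma_1^2+\sigma_1^2\sigma_2^2 = 41.25$ here, so the sample variance lands slightly above the asymptotic value of 41, as expected.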
Let us assume in a first step that the 4 corners (the pyramid base) have the following coordinates: $$A_1(1,0,0),\ A_2(0,1,0),\ A_3(-1,0,0),\ A_4(0,-1,0).$$ Then the equation of the pyramid surface is $$z=a \, \min(1 - x - y,\ 1 - x + y,\ 1 + x - y,\ 1 + x + y)$$ ($a$ is an arbitrary positive parameter giving a more or less peaky shape). If one desires a flat surface (ground level $z=0$) outside of the pyramid, the adequate equation takes the max of the previous expression and $0$: $$\tag{1}z=a \, \max(0,\ \min(1 - x - y,\ 1 - x + y,\ 1 + x - y,\ 1 + x + y))$$ where the different equations $z=1\pm x \pm y$ are the equations of the planes $A_kA_{k+1}V$, with $V(0,0,a)$ the top point of the pyramid. See Fig. 1 below with $a=1$. For a different pyramid, it suffices to make a $\frac{\pi}{4}$ rotation, in other words to use the following change of coordinates: $$\cases{x=\dfrac{1}{\sqrt{2}}(x'-y')\\y=\dfrac{1}{\sqrt{2}}(x'+y')}$$ Remark: we express, as is classical, the old coordinates in terms of the new ones. Plugging these expressions into $(1)$ (and using multiplication by $\sqrt{2}$), we obtain: $$\tag{2}z=a \, \max(0,\ \min(\sqrt{2} - 2x',\ \sqrt{2} - 2y',\ \sqrt{2} + 2y',\ \sqrt{2} + 2x'))$$ (of course, one should drop all the "primes").

Fig. 1

Fig. 2
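Equation $(1)$ is easy to evaluate on a grid; a minimal sketch in Python/NumPy (my addition, not part of the original answer):

    import numpy as np

    # Evaluate z = a * max(0, min(1-x-y, 1-x+y, 1+x-y, 1+x+y)) on a grid.
    a = 1.0
    x, y = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
    z = a * np.maximum(0, np.minimum.reduce([1 - x - y, 1 - x + y,
                                             1 + x - y, 1 + x + y]))
    print(z[2, 2])   # center of the grid, (x, y) = (0, 0): the apex, z = a

The same one-liner with the four arguments of equation $(2)$ gives the rotated pyramid.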
The Harmonic Series. Murat Aygen from Turkey sent this solution to part (a), and Dapeng Wang from Claremont Fan Court School, Guildford sent in essentially the same solution. Let us partition the terms of our series, starting from every $k= 2^{i-1}+ 1$, as follows: $\sum_{k=1}^{2^r} \left({1\over{k}}\right) = 1 + \left({1\over{2}}\right) + \left({1\over{3}} + {1\over{4}}\right) + \left({1\over{5}} + {1\over{6}} + {1\over{7}} + {1\over{8}}\right) + \sum_{i=4}^{r} \left(\frac{1}{2^{i-1}+1} + \dots + {1\over2^{i}}\right)$ How many terms are there in each partition? We see that there are 1, 2, 4, 8, 16... terms in the successive partitions, given by $2^i - 2^{i -1}$, where $2^i - 2^{i -1} = 2^{i -1}(2-1) = 2^{i -1}$. The smallest term in a partition is clearly the rightmost term ${1\over 2^i}$. This smallest term multiplied by the number of terms in the partition is equal to $${1\over 2^i}\times 2^{i -1} = {1\over 2},$$ which is always less than the sum of the terms of the partition. In any case, it is a positive constant! Since the number of terms in the series is as many as one wishes, we can form as many partitions as we wish whose partial sums are not less than 1/2. For reaching a sum of 100, only 200 partitions are needed. Noah and Ariya from The British School of Boston, USA and Aled from King Edward VI Camp Hill School for Boys sent excellent solutions to part (b). Each of the areas of the pink rectangles is representative of one fraction in the sum $$S_n = 1 +{1\over 2} + {1\over 3} + {1\over 4} + ... + {1\over n}. $$ This is verified by knowing that each rectangle x has a base of length 1 and a height of length 1/x. Every rectangle has an area greater than that under the curve $y=1/x$ it overlaps, as illustrated above (note that this will remain so because the function $y= 1/x$ is monotonic for positive numbers). Consider the area under the graph $y = 1/x$ between $x=a$ and $x=b$. This area lies between two rectangles and so we get $${b-a \over b} < \int_a^b{ 1\over x }dx = \ln b - \ln a < { b-a\over a}$$ If we evaluate the expression between $a = {1\over n}$ and $b= {1 \over {n-1}}$ we get: $${1\over n} < \ln {1 \over {n-1}} - \ln {1 \over n} = \ln n - \ln (n-1) < { 1\over n-1}$$ and this gives: $${1 \over 2} < \ln 2- \ln 1< { 1\over 1}$$ $${1 \over 3} < \ln 3- \ln 2< { 1\over 2}$$ $${1 \over 4} < \ln 4- \ln 3< { 1\over 3}$$ ...and so on $${1 \over n} < \ln n- \ln (n-1)< { 1\over n-1}$$ Summing these expressions (noting that $\ln1 = 0$) we get: $${1\over 2} + {1\over 3} + {1\over 4} + ... + {1\over n} < \ln n < 1 + {1\over 2} + {1\over 3} + {1\over 4} + ... + {1\over n-1} .$$ The series on each side of this inequality grow infinitely large and differ by less than 1, so the series grows like $\ln n$.
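A quick numerical check of the final inequality $H_n - 1 < \ln n < H_{n-1}$ (my addition, not part of the submitted solutions):

    from math import log

    # The partial sums of the harmonic series straddle ln(n) as shown above.
    for n in (10, 100, 1000):
        H = sum(1.0 / k for k in range(1, n + 1))
        print(n, H - 1 < log(n) < H - 1.0 / n)   # H - 1/n is H_{n-1}; prints True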
Note: It turns out this is not optimal. My strategy depends on looking at the probability of beating your current score with the next roll. In his answer, @ArthurSkirvin looks at the probability of beating it on any subsequent roll. As I should have expected, taking the longer view provides a better strategy. (I'm assuming an even number of die sides, so there's an $\frac{n}{2}$ side that's the highest side less than the die's $\frac{n+1}{2}$ average.) Proof: If you rolled $\frac{n}{2}$ or less on your $i+1^{st}$ roll and stopped, you have at most $\frac{n}{2}-\frac{i}{n}$, whereas the expected value of a reroll (minus the total cost) is $\frac{n}{2}+\frac{1}{2}-\frac{i+1}{n}$. That gives you an expected gain of $\frac{1}{2}-\frac{1}{n}$, which is $>0$ for anything bigger than a 2-sided die, so you should re-roll. (But amend the strategy if you're flipping coins.) If you rolled $\frac{n}{2}+1$ or higher on your $i+1^{st}$ roll, you have at least $\frac{n}{2}+1-\frac{i}{n}$, whereas the expected value of a reroll (minus the total cost) is still $\frac{n}{2}+\frac{1}{2}-\frac{i+1}{n}$. That gives you an expected loss of $\frac{1}{2}+\frac{1}{n}$, which is $>0$, so you should stop. To compute the expected value of the strategy: half the time you roll higher than average on the first roll and stop. Given that you rolled on the upper half of the die, you'll average $\frac{3n+2}{4}$. The other half of the time you re-roll. Half of those times (now $\frac{1}{4}$ of the total probability), you roll higher than average and stop, this time winning $\frac{3n+2}{4}-\frac{1}{n}$ to pay for the re-roll. The next time, you're at $\frac{1}{8}$ of the total probability, and you stop with $\frac{3n+2}{4}-\frac{2}{n}$, or re-roll, and so on. So the expected outcome looks like $\sum_{i=0}^{\infty} (\frac{3n+2}{4}- \frac{i}{n})(\frac{1}{2})^{i+1}$ For a 6-sided die, that's $4\frac{5}{6}$.
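A short Monte Carlo check of that expected value (my addition, not part of the original answer; it assumes the setup described above: an $n$-sided die, re-rolls costing $\frac{1}{n}$ each, and the "stop above $\frac{n}{2}$" rule):

    import random

    def play(n=6):
        """One game: re-roll while at or below n/2, paying 1/n per re-roll."""
        cost = 0.0
        while True:
            roll = random.randint(1, n)
            if roll > n // 2:
                return roll - cost
            cost += 1.0 / n

    trials = 10**5
    print(sum(play() for _ in range(trials)) / trials)   # ~ 4.8333 = 4 5/6 for n = 6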
How to Model Moisture Flow in COMSOL Multiphysics®

Computing laminar and turbulent moisture flows in air is both flexible and user friendly with the Moisture Flow multiphysics interfaces and coupling in the COMSOL Multiphysics® software. Available as of version 5.3a, this comprehensive set of functionality can be used to model coupled heat and moisture transport in air and building materials. Let's learn how the Moisture Flow interface complements existing functionality, while highlighting its benefits.

Modeling Heat and Moisture Transport

Modeling the transport of heat and moisture through porous materials, or from the surface of a fluid, often involves including the surrounding media in the model in order to get accurate estimates of the conditions at the material surfaces. In investigations of the hygrothermal behavior of building envelopes, food packaging, and other common engineering problems, the surrounding medium is most likely moist air (air with water vapor).

Moist air is the surrounding medium for applications such as building envelopes (illustration, left) and solar food drying (right). Right image by ArianeCCM — own work. Licensed under CC BY-SA 3.0, via Wikimedia Commons.

When considering porous media, the moisture transport process, which includes capillary flow, bulk flow, and binary diffusion of water vapor in air, depends on the nature of the material. In moist air, moisture is transported by diffusion and advection, where the advecting flow field in most cases is turbulent.

Computing heat and moisture transport in moist air requires the resolution of three sets of equations:

The Navier-Stokes equations, to compute the airflow velocity field $\mathbf{u}$ and pressure $p$
The energy equation, to compute the temperature $T$
The moisture transport equation, to compute the relative humidity $\phi$

These equations are coupled together through the pressure, temperature, and relative humidity, which are used to evaluate the properties of air (density $\rho(p,T,\phi)$; viscosity $\mu(T,\phi)$; thermal conductivity $k(T,\phi)$; and heat capacity $C_p(T,\phi)$); the molecular diffusivity $D(T)$; and through the velocity field used for convective transport. With the addition of the Moisture Flow multiphysics interface in version 5.3a, COMSOL Multiphysics defines all three of these equations in a few steps, as shown in the figure below.

Using the Moisture Flow Multiphysics Interface

Whenever studying the flow of moist air, two questions should be asked:

Does the flow depend on the moisture distribution?
Does the nature of the flow require the use of a turbulence model?

If the answer is "yes" to at least one of these questions, then you should consider using the Moisture Flow multiphysics interfaces, found under the Chemical Species Transport branch.

The Moisture Flow group under the Chemical Species Transport branch of the Physics Wizard, with the single-physics interfaces and coupling node added with each version of the Moisture Flow predefined multiphysics interface.

The Laminar Flow version of the multiphysics interface combines the Moisture Transport in Air interface with the Laminar Flow interface and adds the Moisture Flow coupling. Similarly, each version under Turbulent Flow combines the Moisture Transport in Air interface and the corresponding Turbulent Flow interface and adds the Moisture Flow coupling.
Besides providing a user-friendly way to define the coupled set of equations of the moisture flow problem, the multiphysics interfaces for turbulent flow handle the moisture-related turbulence variables required for the fluid flow computation.

Automatic Coupling Between Single-Physics Interfaces

One advantage of using the Moisture Flow multiphysics interface is its usability. When adding the Moisture Flow node through the predefined interface, an automatic coupling between the Navier-Stokes equations for the fluid flow and the moisture transport equation is defined by the software (center screenshot in the image below) by using the following variables:

The density and dynamic viscosity in the Navier-Stokes equations, which depend on the relative humidity variable from the Moisture Transport interface through a mixture formula based on dry air and pure steam properties (left screenshot below)
The velocity field and absolute pressure variables from the Single-Phase Flow interface, which are used in the moisture transport equation (right screenshot below)

Support for Turbulent Fluid Flow

The performance of the Moisture Flow multiphysics interface is especially attractive when dealing with a turbulent moisture flow. For turbulent flows, the turbulent mixing caused by the eddy diffusivity in the moisture convection is automatically accounted for by the COMSOL® software by enhancing the moisture diffusivity with a correction term based on the turbulent Schmidt number. The Kays-Crawford model is the default choice for the evaluation of the turbulent Schmidt number, but a user-defined value or expression can also be entered directly in the graphical user interface.

Selection of the model for the computation of the turbulent Schmidt number in the user interface of the Moisture Flow coupling.

In addition, for coarse meshes that may not be suitable for resolving the thin boundary layer close to walls, wall functions can be selected or automatically applied by the software. The wall functions are such that the computational domain is assumed to be located at a distance from the wall, the so-called lift-off position, corresponding to the distance from the wall where the logarithmic layer meets the viscous sublayer (or would meet it if there were no buffer layer in between). The moisture flux at the lift-off position, $g_{wf}$, which accounts for the flux to and from the wall, is automatically defined by the Moisture Flow interface, based on the relative humidity.

Approximation of the flow field and the moisture flux close to walls when using wall functions in the turbulence model for fluid flow.

Note that the Low-Reynolds and Automatic options for Wall Treatment are also available for some of the RANS models. For more information, read this blog post on choosing a turbulence model.

Mass Conservation Across Boundaries

By using the Moisture Flow interface, appropriate mass conservation is granted in the fluid flow problem by the Screen and Interior Fan boundary conditions. A continuity condition is also applied to the vapor concentration at the boundaries where the Screen feature is applied. For the Interior Fan condition, the mass flow rate is conserved in an averaged way and the vapor concentration is homogenized at the fan outlet, as shown in the figure below.

Average mass flow rate conservation across a boundary with the Interior Fan condition.
Example: Modeling Evaporative Cooling with the Moisture Flow Interface Let's consider evaporative cooling at the water surface of a glass of water placed in a turbulent airflow. The Turbulent Flow, Low Reynolds k-ε interface, the Moisture Transport in Air interface, and the Heat Transfer in Moist Air interface are coupled through the Nonisothermal Flow, Moisture Flow, and Heat and Moisture coupling nodes. These couplings compute the nonisothermal airflow passing over the glass, the evaporation from the water surface with the associated latent heat effect, and the transport of both heat and moisture away from this surface. By using the Automatic option for Wall treatment in the Turbulent Flow, Low Reynolds k-ε interface, wall functions are used if the mesh resolution is not fine enough to fully resolve the velocity boundary layer close to the walls. Convective heat and moisture fluxes at the lift-off position are added by the Nonisothermal Flow and Moisture Flow couplings. The temperature and relative humidity solutions after 20 minutes are shown below, along with the streamlines of the airflow velocity field. Temperature (left) and relative humidity (right) solutions with the streamlines of the velocity field after 20 minutes. The temperature and relative humidity fields strongly resemble each other here, which is natural: the fields are strongly coupled, and both transport processes have similar boundary conditions in this case. In addition, heat transfer occurs by conduction and advection while mass transfer occurs by diffusion and advection, and the two transport processes originate from the same physical phenomena: conduction and diffusion arise from molecular interactions in the gas phase, while advection is given by the bulk motion of the fluid. Also, the contributions of the eddy diffusivity to the turbulent thermal conductivity and to the turbulent moisture diffusivity originate from the same physical phenomenon, which adds further to the similarity of the temperature and moisture fields. Next Steps Learn more about the key features and functionality included with the Heat Transfer Module, an add-on to COMSOL Multiphysics®: Read the following blog posts to learn more about heat and moisture transport modeling: How to Model Heat and Moisture Transport in Porous Media with COMSOL® How to Model Heat and Moisture Transport in Air with COMSOL® Get a demonstration of the Nonisothermal Flow and Heat and Moisture couplings in these tutorial models:
Forward-backward multiplicity correlations in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Fractions and binomial coefficients are common mathematical elements with similar characteristics - one number goes on top of another. This article explains how to typeset them in LaTeX. Using fractions and binomial coefficients in an expression is straightforward. The binomial coefficient is defined by the next expression: \[ \binom{n}{k} = \frac{n!}{k!(n-k)!} \] For these commands to work you must import the package amsmath by adding the next line to the preamble of your file \usepackage{amsmath} The appearance of the fraction may change depending on the context. Fractions can be used alongside the text, for example \( \frac{1}{2} \), and in a mathematical display style like the one below: \[\frac{1}{2}\] As you may have guessed, the command \frac{1}{2} is the one that displays the fraction. The text inside the first pair of braces is the numerator and the text inside the second pair is the denominator. Also, the text size of the fraction changes according to the text around it. You can set this manually if you want. When displaying fractions in-line, for example \(\frac{3x}{2}\), you can set a different display style: \( \displaystyle \frac{3x}{2} \). This is also true the other way around \[ f(x)=\frac{P(x)}{Q(x)} \ \ \textrm{and} \ \ f(x)=\textstyle\frac{P(x)}{Q(x)} \] The command \displaystyle will format the fraction as if it were in mathematical display mode. On the other hand, \textstyle will change the style of the fraction as if it were part of the text. The usage of fractions is quite flexible; they can be nested to obtain more complex expressions. The fractions can be nested \[ \frac{1+\frac{a}{b}}{1+\frac{1}{1+\frac{1}{a}}} \] Now a wild example \[ a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cdots}}} \] The second fraction displayed in the previous example uses the command \cfrac{}{} provided by the package amsmath (see the introduction); this command displays nested fractions without changing the size of the font. It is especially useful for continued fractions. Binomial coefficients are common elements in mathematical expressions; the command to display them in LaTeX is very similar to the one used for fractions. The binomial coefficient is defined by the next expression: \[ \binom{n}{k} = \frac{n!}{k!(n-k)!} \] And of course this command can be included in the normal text flow \(\binom{n}{k}\). As you see, the command \binom{}{} will print the binomial coefficient using the parameters passed inside the braces. A slightly different and more complex example of continued fractions Final example \newcommand*{\contfrac}[2]{% { \rlap{$\dfrac{1}{\phantom{#1}}$}% \genfrac{}{}{0pt}{0}{}{#1+#2}% } } \[ a_0 + \contfrac{a_1}{ \contfrac{a_2}{ \contfrac{a_3}{ \genfrac{}{}{0pt}{0}{}{\ddots} }}} \]
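An aside on \genfrac, which the final example above relies on (this follows the standard amsmath definition): it takes six arguments - left delimiter, right delimiter, fraction-line thickness, math style (an integer from 0 to 3, or empty for automatic), numerator, and denominator. For instance, the following two expressions reproduce \binom{n}{k} and \frac{n}{k}, respectively: \[ \genfrac{(}{)}{0pt}{}{n}{k} \qquad \genfrac{}{}{}{}{n}{k} \] In the continued-fraction definition, \genfrac{}{}{0pt}{0}{}{#1+#2} therefore typesets a "fraction" with no rule and an empty numerator in display style; it is used purely to position the denominator below the \rlap'ed \dfrac{1}{\phantom{#1}}.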
I came across the barotropic vorticity equation (below) in Sakamoto (2002) and I cannot figure it out. The notation is not clear to me. How can it be derived from the Navier-Stokes equation? $$ \beta\psi_x=\frac{\tau_x^y-\tau_y^x}{\rho_0H}-r\zeta+A_H\nabla^2\zeta-\frac{f_0}{H}w_D $$ where $\psi$ is the streamfunction $\nabla$ is the horizontal gradient operator $\zeta=\nabla^2\psi$ is the vertical component of relative vorticity $(\tau_x,\tau_y)$ is the zonal wind stress $f=f_0+\beta y$ is the Coriolis parameter $\rho_0$ is the mean density $H$ is the depth of the model ocean $r$ is the inverse time scale for vorticity decay by bottom friction $A_H$ is a horizontal eddy viscosity $w_D$ is not explained in this paper.
There is a solution $t$, since $(0.4)^t\gt 5t$ at $t=0$, and $(0.4)^t\lt 5t$ at $t=1$. It is easy to see that there cannot be a solution outside the interval $(0,1)$. There is no rational solution. For suppose that $t=\frac{p}{q}$ is a solution, where $p$ and $q$ are relatively prime integers, neither equal to $0$. Since $0\lt t\lt 1$, we may assume $0\lt p\lt q$. Then from$$\left(\frac{2}{5}\right)^{p/q}=5\frac{p}{q}$$we obtain $$2^pq^q=5^{p+q}p^q.$$Since $\gcd(p,q)=1$, the factor $p^q$ must divide $2^p$, so $p$ is a power of $2$. If $p\gt 1$, then $p\ge 2$ and, since $p\lt q$, we get $p^q\ge 2^q\gt 2^p$, contradicting $p^q\mid 2^p$. If $p=1$, the equation reads $2q^q=5^{q+1}$, which is impossible since the left side is even and the right side is odd. By using the Gelfond-Schneider theorem, we can now prove that $t$ cannot be an irrational algebraic number. For if $t$ is irrational algebraic, then $(0.4)^t$ is transcendental, but $5t$ is not. So $t$ is transcendental. We have not ruled out the possibility that $t$ is a simple combination of "simple" transcendentals, such as $\log 2$, $\log 3$, $\sin 1$, and so on.
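Although $t$ is transcendental, it is easy to approximate numerically; a quick sketch in R (my addition, not part of the argument above):

f <- function(t) 0.4^t - 5 * t
uniroot(f, c(0, 1))$root   # approximately 0.171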
Dev gives HBO free math tips to nail Game of Thrones pirate leakers An easy* way to ID a million leakers Developer Bruno Cauet has offered HBO a series of mathematical equations that could have tracked the Game of Thrones season five leaker, or even killed the leak completely. The massively popular series, thought to be HBO's most profitable production, was rocked over the weekend when a leaker, thought to be a translator with an advanced copy in hand, published the first four episodes, which made their way to public Bittorrent trackers. It appears that the multi-million dollar episodes were protected by a mere watermark in the bottom left corner of the video, promptly blurred by the leaking group. The watermark is pretty much useless, Cauet reckons. Instead, credits and scenes could have been minutely extended, with a different time-tweak for anyone getting an advanced copy. Anyone planning a leak would need at least two sources to identify scene adjustments. "The idea is very simple: make each copy unique in a non-visible way," Cauet says in a post. Credits could be shortened by a frame, which would be easy, but also easy to defeat by cutting it out. "Next iteration would be changing the length of each scene by a few milliseconds (a frame)," he says. For a very generous one million unique recipients HBO would need four variations of each of the 18 scenes that make up an episode. In case an eagle-eyed leaker (or their pirate handlers) spots that the total length of the episode is longer or shorter by up to 18 frames, Cauet offers two responses. The owner could either shorten or extend the credits, or implement this "self-compensating" math: Distribute the changes so all files have the expected length. Let's see (for fun) what it gives (suppose \(s(k-1)\) is even, so that \(K\) below is an integer): Let \(C_k(s, K) = \#\{(c_i)_{i \le s} \mid 0 \le c_i \le k - 1, \sum_{i=1}^s c_i = K\}\) be the number of sequences of \(s\) nonnegative offsets below \(k\) which sum to \(K\). How do we pick \(K\)? We want the sum of the centered offsets \(c_i - \frac{k-1}{2}\) to be 0, so $$ \sum_{i=1}^s \left(c_i - \frac{k - 1}{2}\right) = 0 \quad\Longrightarrow\quad \sum_{i=1}^s c_i = \frac{s(k - 1)}{2} = K. $$ We first pick the offset of the first scene: $$ C_k(s, K) = C_k(s - 1, K) + C_k(s - 1, K - 1) + \cdots + C_k(s - 1, K - k + 1) = \sum_{i=0}^{\min(K, k - 1)} C_k(s - 1, K - i). $$ And the terminal cases are $$ C_k(0, 0) = 1, \qquad C_k(0, K) = 0 \quad \text{if} \; K \neq 0. $$ With \(C_k\left(s, \frac{s(k-1)}{2}\right) \ge n\) we can get the required \(k\) from \(s\) and \(n\). Steganography can be, and is, used to hide identification markings within video, along with audio that can resist transcoding, where file formats are converted. "Anti-piracy organisations and film houses have historically refused this author's repeat efforts to discuss even general information about the effectiveness of tracking mechanisms", he says. Thompson security bod Jan Jao, who develops movie protection gear for Technicolor, told NPR back in 2006 that each frame of a movie using his software can be watermarked. But pirates boast about eliminating tricky mass watermarks. The pirates behind the first screener copy to be leaked for the 2012 The Hobbit movie say they spent days blurring 300,000 watermarks which are dotted in 20 spots around the screen.
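To make the counting concrete, here is a short dynamic-programming sketch in R (my own addition, following the recurrence above; the function name and the example values are taken from the article's 18-scene, 4-variation scenario):

# C_k(s, K): number of sequences of s offsets in {0, ..., k-1} summing to K
count_sequences <- function(k, s, K) {
  counts <- c(1, rep(0, K))                 # C_k(0, j) for j = 0..K
  for (step in seq_len(s)) {
    new_counts <- numeric(K + 1)
    for (j in 0:K) {
      i <- 0:min(j, k - 1)                  # possible offsets for this scene
      new_counts[j + 1] <- sum(counts[j - i + 1])
    }
    counts <- new_counts
  }
  counts[K + 1]
}

# 18 scenes, offsets 0..3 (k = 4): self-compensating sum K = s*(k-1)/2 = 27
count_sequences(k = 4, s = 18, K = 27)      # roughly 5.8e9, well over a million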
"Firstly there were watermarks, starting with small numbers counting from 0 to 9 randomly hardcoded in the video, and that during the whole movie, popping up mainly in the centre eye of the video half lucent. Some kind of weird code for identifying, don't know, can't blur them only when they appear, would be around 300,000 of them so made small constant blur spots over the 20 spots where they randomly appear, now they are gone, won't effect the viewing hardly seeable unless you are searching for it on purpose (sic)." Another watermark revealed the "exact details" of the leaker, adding that it was "a big risk going out with this movie, hope you appreciate my work". Cauet has advice for Game of Thrones fans too: "I think that binge-watching the first four episodes is a stupid idea that will make you ache for a month waiting for the fifth episode". ®
In this series, we first looked at what sound pressure levels and decibels are. Then, we looked at how we can calculate sound pressure levels in a way that takes human hearing into account. So far, though, we have only considered steady and unchanging sounds. These come from e.g. ventilation systems or machines that run steadily. But what about sounds that change in time, e.g. sounds from passing cars or aircraft, or from explosions and other bangs? Fortunately, acoustic quantities and techniques exist that allow us to describe and compare these sounds as well. The most important are Slow- and Fast-weighting, sound exposure level, and equivalent level. These are the topics of this part of the series. Weighting, Fast and Slow In the first part, we saw how we can compute a sound pressure level from the RMS pressure, which is a kind of representative sound pressure. To find the RMS pressure, we first measure the sound pressure of a steady sound over a long enough period. Then, we boil it down to a representative number for that sound. But what if we want to characterise a sound that is not steady? In that case, we can determine a sound pressure level that changes with time: How strong is the sound now? How strong was it a second ago? Or a minute ago? Therefore, there exist several ways to weight sound in time. Thus, you can calculate the current sound pressure level for where you are based on the sound of the last few moments. A few different time weightings are in use. The most common have the delightfully simple and descriptive names Slow and Fast, and are constructed in the same way. Exponential time weighting More concretely, we apply this time weighting using the exponential functions in the figure to the right. These go towards zero as we go further back in time. What separates Slow and Fast is how quickly they go to zero. Fast has a “time constant” of 0.125 seconds. As you can see in the figure, it basically ignores anything that happened more than half a second ago. The time constant of Slow is eight times longer, at 1 second. This means that it “looks” eight times further back in time. So, how do we use these time weighting functions? As when we calculate the RMS pressure, we start with the squared sound pressure, \(p^2(t)\). Now, however, we apply a time weighting function instead of calculating a simple average. We can express this mathematically as \( p_T(t) = \sqrt{\frac{1}{T}\int_0^t p^2(\tau) \mathrm{e}^{-(t-\tau)/T} \, \mathrm{d}\tau} \, ,\) where \(t\) is the time at which we are calculating the sound pressure level, and \(T\) is the time constant of Slow or Fast. Depending on which one we choose, we get the Slow-weighted sound pressure \(p_\mathrm{S}(t)\) or the Fast-weighted sound pressure \(p_\mathrm{F}(t)\). In turn, we can use these to calculate the Slow- or Fast-weighted sound pressure levels, \(L_\mathrm{S}(t) = 20 \log \left( \frac{p_\mathrm{S}(t)}{p_\mathrm{ref}} \right) , \qquad L_\mathrm{F}(t) = 20 \log \left( \frac{p_\mathrm{F}(t)}{p_\mathrm{ref}} \right) .\) These formulas are wisely designed such that they give the same sound pressure levels as the simple RMS procedure for steady sounds. Examples To make things more clear, let us look at the examples of Slow and Fast weighting below. In both examples, we use a two-second burst of pink noise, starting at \(t=0\). (We assume complete silence before then.) You can see this noise in the subfigures to the left. 
The middle subfigures show how we weight the squared sound pressure when we calculate the Slow- and Fast-weighted sound pressure for \(t=2\). The subfigures to the right compare the time-varying Slow and Fast sound pressures with the RMS sound pressure \(p_\mathrm{RMS}\). In both cases, we see that the Slow- and Fast-weighted sound pressures correspond to the RMS sound pressure in the end. Because Fast does not look as far back in time and therefore reacts more quickly, it approaches the RMS sound pressure faster. At the same time, we see that Fast wobbles a bit. Since it does not look as far back, it is more susceptible to short-term random changes in the noise. (No noise is perfectly steady!) Note that these examples show Slow- and Fast-weighted sound pressures in pascals, and not sound pressure levels in decibels. To find the sound pressure level from the sound pressure, we must use the above formula. Summary Slow and Fast time weighting both give sound pressure levels based on what has happened in the last few moments. The two have somewhat different uses, though: Slow-weighted sound pressure levels \(L_\mathrm{S}\) look further back in time. This makes the levels change slowly and evenly, suppressing impulsive sounds and random sound variations. Slow is best used for sounds that vary slowly, such as the sound from passing aircraft. Fast-weighted sound pressure levels \(L_\mathrm{F}\) do not look as far back in time. This makes the levels more uneven, but also lets us catch impulsive sounds. Fast is best for shorter sounds, such as explosions and other bangs. Just look at the figure to the right to see how much more Fast reacts to the sound of a starting pistol. For steady sounds where the loudness does not vary in time, Slow and Fast will both correspond to the sound pressure level based on RMS pressure. Maximum level Slow and Fast time weightings show us how the sound pressure level varies in time. However, we cannot easily characterise sound pressure levels with time-varying quantities. We would rather have one number describing how loud the sound is. The simplest way to find such a number is to just find the highest value in the curve for Slow- or Fast-weighted sound pressure or sound pressure level. This gives us an indication of how loud the sound is at its loudest. The figure to the right shows how we find the Fast-weighted maximum sound pressure. We can then calculate the maximum sound pressure level from this as \(L_\mathrm{Fmax} = 20 \log \left( \frac{p_\mathrm{Fmax}}{p_\mathrm{ref}} \right) .\) If we already have curves of the Slow- or Fast-weighted sound pressure levels, we can also find the maximum level directly from these. The result is the same. Noise dose: Sound exposure level One weakness of maximum levels is that they only describe how loud the sound is at its strongest. They say nothing about how long the sound lasts. For example, we might find the same maximum level from a helicopter flying over you and a helicopter hovering above you for an hour, but the latter event is obviously more annoying. The sound exposure level (or SEL) gives us a number in decibels for how much sound we have been exposed to in total over a period of time. It thus represents a kind of noise dose. This time period can be long, such as a concert or a workday, or it can be as short as a single sound event, for example one aircraft pass-by. The sound exposure level is calculated almost in the same way as the RMS pressure, explained in the first part of this series, with one difference.
When we compute the RMS pressure, we integrate the squared sound pressure \(p^2(t)\) over a time interval and divide by the length of the interval to get the averaged squared sound pressure. When we calculate the sound exposure pressure, however, we divide by a reference time of one second: \( p_\mathrm{E} = \sqrt{\frac{1}{\text{1 second}} \int_{T_1}^{T_2} p^2(t) \, \mathrm{d}t} \, .\) Then, we calculate the sound exposure level from this sound exposure pressure: \(L_\mathrm{E} = 20 \log \left( \frac{p_\mathrm{E}}{p_\mathrm{ref}} \right) \, .\) Example Let's look at the sound exposure pressure and the RMS sound pressure for two cases: In the first case, we have a two second long clip of pink noise. In the second case, we have the same clip, followed by six seconds of silence. Let's compare the two cases in the figures below. We see that the RMS pressure is lower in the second case, while the sound exposure pressure is the same in both. The reason why the RMS pressure decreases is that the following silence makes the sound less loud on average. The sound exposure pressure, on the other hand, is the same in both cases. This is because we calculate it from the total amount of sound in the measurement period. What, then, does the sound exposure level tell us? It is, after all, somewhat abstract and difficult to compare directly with the usual RMS sound pressure level. However, we can compare different sound exposure levels with each other. For example, take two workers that wear noise dosimeters throughout their workday. After the day is done, we can compare their sound exposure levels to see who was exposed to the most noise. Equivalent level Like the sound exposure level, the equivalent level resembles the RMS sound pressure level. The only difference between this and the sound exposure level is that we spread the sound exposure over a longer period \(T\) instead of just one second. Thus, we calculate the equivalent pressure and level as: \( p_{\mathrm{eq},T} = \sqrt{ \frac{1}{T} \int_{T_1}^{T_2} p^2(t) \, \mathrm{d}t } \, , \qquad L_{\mathrm{eq},T} = 20 \log \left( \frac{p_{\mathrm{eq},T}}{p_\mathrm{ref}} \right) \) The equivalent level thus takes the sound exposure from a time period and spreads it over the same (or another) time period. Example For example, we can take all the noise that you were exposed to yesterday, and spread it over 24 hours. This gives us the 24-hour equivalent level \(L_{\mathrm{eq,24h}}\), but what does it tell us? Well, this equivalent level represents the sound pressure level of a steady sound that would give the same noise dose as all of the sound that the equivalent level is based on. If the equivalent noise level is based on only one noise event, it becomes a somewhat abstract acoustic quantity similar to the sound exposure level. On the other hand, if we calculate e.g. a one-hour equivalent level \(L_\mathrm{eq,1h}\) for rush hour traffic, we get a nice and representative quantity for the time-varying sound pressure level in that period. For that reason, noise regulations (at least in Norway) are mainly based around variants of equivalent levels. We will take a closer look at these in the next part. In combination with frequency weighting It is not difficult to combine time and frequency weightings. As we saw in the previous part, we perform frequency weighting by filtering the original sound pressure \(p(t)\) to get e.g. an A-weighted sound pressure \(p_\mathrm{A}(t)\).
We can in turn take this frequency weighted sound pressure and time weight it in any one of the ways described above. For example, we can thus find e.g. sound pressure levels \(L_\mathrm{AF}(t)\), which have been A-weighted in frequency and Fast-weighted in time, or we can find A-weighted sound exposure levels \(L_\mathrm{AE}\). In practice, almost all public noise regulations incorporate a combination of A-weighting in frequency and some weighting in time. Next time In these three parts, we have explained the building blocks of the acoustic quantities used in noise regulations. However, these regulations use more advanced quantities that give a better picture of how annoying a noise situation is. (For more on the connection between noise and annoyance, you can read this blog post written by my friend and former colleague Femke Gelderblom.) In the next part, we will go through the more advanced quantities used in noise regulations in Norway and many other countries. This post is a translation of a Norwegian-language blog post that I originally wrote for acousticsresearchcentre.no. I would like to thank Rolf Tore Randeberg for proofreading and fact-checking the Norwegian original.
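As a closing aside (my addition, not from the original post): for a sampled signal, the quantities defined in this series are straightforward to compute. A minimal R sketch, assuming p is a vector of sound pressures in pascals sampled at fs Hz:

p_ref <- 20e-6                      # reference pressure: 20 micropascals

# Equivalent level over the whole signal (RMS-based)
leq <- function(p) 20 * log10(sqrt(mean(p^2)) / p_ref)

# Sound exposure level: total exposure referred to 1 second
sel <- function(p, fs) 20 * log10(sqrt(sum(p^2) / fs) / p_ref)

# Exponential time weighting: T = 0.125 s for Fast, T = 1 s for Slow
time_weighted_level <- function(p, fs, T = 0.125) {
  a <- exp(-1 / (fs * T))           # per-sample decay of the weighting
  p2 <- stats::filter(p^2 / (fs * T), a, method = "recursive")
  20 * log10(sqrt(as.numeric(p2)) / p_ref)
}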
When we looked at Green's Theorem, we saw that there was a relationship between a region and the curve that encloses it. This gave us the relationship between the line integral and the double integral. Now consider the following theorem: Divergence Theorem Let \(Q\) be a solid region bounded by a closed surface oriented with outward pointing unit normal vector \(\text{n}\), and let \(\textbf{F}\) be a differentiable vector field (i.e., components have continuous partial derivatives). Then \[ \iint \limits_{S} \textbf{F} \cdot \text{n} \, dS = \iiint \limits_{Q} \nabla \cdot \textbf{F}\,dv.\] Example \(\PageIndex{1}\) Find \[\iint \limits_{S} \textbf{F} \cdot \textbf{N} \,ds \nonumber \] where \[ \textbf{F} (x,y,z) = y^2 \hat{\textbf{i}} + e^x\left(1-\cos{(x^2 + z^2)}\right)\hat{\textbf{j}} + (x + z) \hat{\textbf{k}} \nonumber\] and \(S\) is the unit sphere centered at the point \((1,4,6)\) with outwardly pointing normal vector. Solution This seemingly difficult problem turns out to be quite easy once we have the divergence theorem. We have \[ \nabla \cdot \textbf{F} = 0 + 0 + 1 = 1 \nonumber \] Now recall that a triple integral of the function \(1\) gives the volume of the solid. Since the solid is a sphere of radius \(1\), we get \(\dfrac{4}{3}\pi\). Partial Proof As usual, we will make some simplifying remarks and then prove part of the divergence theorem. We assume that the solid is bounded below by \( z = g_1(x,y) \) and above by \( z = g_2(x,y) \). Notice that the outward pointing normal vector is upward on the top surface and downward for the bottom region. We also note that the divergence theorem can be written as \[\begin{align*} \iint\limits_{S} \textbf{F} \cdot \textbf{N} dS &= \iint\limits_{S} (M \hat{\textbf{i}} \cdot \textbf{N} + N\hat{\textbf{j}} \cdot \textbf{N} + P \hat{\textbf{k}} \cdot \textbf{N} ) dS \\ &= \iiint\limits_{Q} \nabla \cdot \textbf{F} dv \\ &= \iiint\limits_{Q} (M_x + N_y + P_z) dv. \end{align*}\] We will show that \[\iint\limits_{S} P \hat{\textbf{k}} \cdot \textbf{N} dS = \iiint \limits_{Q} P_z dv. \] We have on the top surface \[ P\hat{\text{k}} \cdot \text{n}\, dS = P\hat{\text{k}} \cdot \left( -(g_2)_x \hat{\text{i}} - (g_2)_y \hat{\text{j}} + \hat{\text{k}} \right) dy\,dx = P(x,y,g_2(x,y))\, dy\,dx.\] On the bottom surface, we get \[P\hat{\text{k}} \cdot \text{n}\, dS = P\hat{\text{k}} \cdot \left( (g_1)_x \hat{\text{i}} + (g_1)_y \hat{\text{j}} - \hat{\text{k}} \right) dy\,dx = -P(x,y,g_1(x,y))\, dy\,dx.\] Putting these together we get \[\iint\limits_S P\hat{\text{k}} \cdot \textbf{N} dS = \iint\limits_R [P(x,y,g_2 (x,y))-P(x,y,g_1 (x,y)) ] dydx . \] For the triple integral, the Fundamental Theorem of Calculus tells us that \[\begin{align*} \iiint\limits_Q P_z dzdydx &= \iint\limits_R [P(x,y,z) ]_{g_1(x,y)}^{g_2(x,y)} dydx \\ &= \iint\limits_R [P(x,y,g_2(x,y))-P(x,y,g_1(x,y))]dy\,dx. \end{align*}\] \(\square\) An Interpretation of Divergence We have seen that the flux is the amount of fluid flow per unit time through a surface. If the surface is closed, then the total flux will equal the flow out of the solid minus the flow in. Often in the solid there is a source (such as a star when the flow is electromagnetic radiation) or a sink (such as the earth collecting solar radiation).
If we have a small solid \(S(P)\) containing a point \(P\), then the divergence of the vector field is approximately constant on it, which leads to the approximation \[ \iiint\limits_Q \nabla \cdot \textbf{F} \, dv \approx \nabla \cdot \textbf{F}(P) \cdot \text{(Volume)}. \] The divergence theorem then expresses the approximation \[ \text{Flux through } S(P) \approx \nabla \cdot \textbf{F}(P) \cdot \text{(Volume)}. \] Dividing by the volume, we get that the divergence of \(\textbf{F}\) at \(P\) is the flux per unit volume. If the divergence is positive, then \(P\) is a source. If the divergence is negative, then \(P\) is a sink. Contributors Larry Green (Lake Tahoe Community College) Integrated by Justin Marshall.
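As a quick numerical sanity check of the example above (my addition, not part of the original text): since \(\nabla \cdot \textbf{F} = 1\) is constant, the flux equals the volume of the unit ball no matter where the sphere is centered, and a Monte Carlo estimate in R should approach \(4\pi/3 \approx 4.19\):

set.seed(1)
n <- 1e6
pts <- matrix(runif(3 * n, -1, 1), ncol = 3)  # points in the cube [-1,1]^3
inside <- rowSums(pts^2) <= 1                 # inside the unit ball
8 * mean(inside)                              # cube volume 8 times hit rate
4 / 3 * pi                                    # exact value: 4.18879...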
Instability of bound states for 2D nonlinear Schrödinger equations 1. Department of Mathematical Sciences, Yokohama City University, Seto 22-2, 236-0027, Japan $iu_t+\Delta u+|u|^{p-1}u=0\quad$ for $x\in \mathbb R^2$ and $t>0$, where $(r,\theta)$ are polar coordinates and $m\in\mathbb N$. Using the Evans function, we prove linear instability of standing wave solutions with nodes in the case where $p>3$. Mathematics Subject Classification: 35B35, 35Q55, 35J60, 35B0. Citation: Tetsu Mizumachi. Instability of bound states for 2D nonlinear Schrödinger equations. Discrete & Continuous Dynamical Systems - A, 2005, 13 (2) : 413-428. doi: 10.3934/dcds.2005.13.413
The Annals of Probability Ann. Probab. Volume 47, Number 1 (2019), 464-518. Rate of convergence to equilibrium of fractional driven stochastic differential equations with rough multiplicative noise Abstract We investigate the problem of the rate of convergence to equilibrium for ergodic stochastic differential equations driven by fractional Brownian motion with Hurst parameter $H\in(1/3,1)$ and multiplicative noise component $\sigma$. When $\sigma$ is constant and for every $H\in(0,1)$, it was proved in [ Ann. Probab. 33 (2005) 703–758] that, under some mean-reverting assumptions, such a process converges to its equilibrium at a rate of order $t^{-\alpha}$ where $\alpha\in(0,1)$ (depending on $H$). In [ Ann. Inst. Henri Poincaré Probab. Stat. 53 (2017) 503–538], this result has been extended to the multiplicative case when $H>1/2$. In this paper, we obtain these types of results in the rough setting $H\in(1/3,1/2)$. Once again, we retrieve the rate orders of the additive setting. Our methods also extend the multiplicative results of [ Ann. Inst. Henri Poincaré Probab. Stat. 53 (2017) 503–538] by deleting the gradient assumption on the noise coefficient $\sigma$. The main theorems include some existence and uniqueness results for the invariant distribution. Article information Source Ann. Probab., Volume 47, Number 1 (2019), 464-518. Dates Received: October 2016 Revised: November 2017 First available in Project Euclid: 13 December 2018 Permanent link to this document https://projecteuclid.org/euclid.aop/1544691626 Digital Object Identifier doi:10.1214/18-AOP1265 Mathematical Reviews number (MathSciNet) MR3909974 Zentralblatt MATH identifier 07036342 Citation Deya, Aurélien; Panloup, Fabien; Tindel, Samy. Rate of convergence to equilibrium of fractional driven stochastic differential equations with rough multiplicative noise. Ann. Probab. 47 (2019), no. 1, 464--518. doi:10.1214/18-AOP1265. https://projecteuclid.org/euclid.aop/1544691626 Supplemental materials Supplement to “Rate of convergence to equilibrium of fractional driven stochastic differential equations with rough multiplicative noise”. Our supplement develops the proofs of several crucial but technical results in our paper. We first prove a Lyapunov-type property for rough differential equations with inward looking drifts. Then we handle rough differential equations involving singular drifts, a type of system which arises when one tries to condition in the highly non-Markovian fractional Brownian motion setting. Next, we show how to lift rough paths involving singularities. Finally, we evaluate some effects of our conditioning procedure on the underlying fractional Brownian motion $X$ in equation (1.3).
This is a tricky problem. Can anyone help me with the procedure and answer? Evaluate $$ \lim_{h\to 0} \left( \frac{f(x+hx)}{f(x)}\right)^{1/h}, \text{for }f(x)=x. $$ Note that you are given $f(x)=x$. With this piece of information, we can replace the functions in the given expression with their algebraic representations. For instance, the denominator would simply be $x$. What would the numerator look like? Now, as $h\rightarrow 0$, consider what the value inside the parentheses tends towards, and similarly for the exponent. Hint Taking the logarithm, you have to calculate$$\lim_{h\to 0} \frac{\ln(f(x+hx))- \ln(f(x))}{h},$$which is the definition of the derivative of the map $h \mapsto \ln(f(x+hx))$ at $h=0$. By the chain rule this equals $x\,\frac{f'(x)}{f(x)}$, which is $1$ for $f(x)=x$, so the limit is $e^1=e$.
Maximum likelihood estimation is a very useful technique to fit a model to data, used a lot in econometrics and other sciences, but it seems, at least to my knowledge, to not be so well known by machine learning practitioners (but I may be wrong about that). Other useful techniques to confront models with data used in econometrics are the minimum distance family of techniques, such as the generalized method of moments, or Bayesian approaches, while machine learning practitioners seem to favor the minimization of a loss function (the mean squared error in the case of linear regression, for instance). When I taught at the university, students often had some trouble understanding the technique. It is true that it is not as easy to understand as ordinary least squares, but I'll try to explain it to the best of my abilities. Given a sample of data, what is the unknown probability distribution that most likely generated it? For instance, if your sample only contains 0's and 1's, and the proportion of 1's is 80%, what do you think is the most likely distribution that generated it? The probability distribution that most likely generated such a dataset is a binomial distribution with probability of success equal to 80%. It might have been a binomial distribution with probability of success equal to, say, 60%, but the most likely one is one with probability of success equal to 80%. To perform maximum likelihood estimation, one thus needs to assume a certain probability distribution, and then look for the parameters that maximize the likelihood that this distribution generated the observed data. So, now the question is, how to maximize this likelihood? And mathematically speaking, what is a likelihood? First of all, let's assume that each observation from your dataset not only was generated from the same distribution, but that each observation is also independent from each other. For instance, if in your sample you have data on people's wages and socio-economic background, it is safe to assume, under certain circumstances, that the observations are independent. Let \(X_i\) be random variables, and \(x_i\) be their realizations (actual observed values). Let's assume that the \(X_i\) are distributed according to a certain probability distribution \(D\) with density \(f(\theta)\) where \(\theta\) is a parameter of said distribution. Because our sample is composed of i.i.d. random variables, the probability that it was generated by our distribution \(D(\theta)\) is: \[\prod_{i=1}^N Pr(X_i = x_i)\] It is customary to take the log of this expression: \[\log(\prod_{i=1}^N Pr(X_i = x_i)) = \sum_{i=1}^N \log(Pr(X_i = x_i))\] The expression above is called the log-likelihood, \(\log L(\theta; x_1, ..., x_N)\). Maximizing this function yields \(\theta^*\), the value of the parameter that makes the sample the most probable. In the case of linear regression, the density function to use is the one from the normal distribution. Hadley Wickham's seminal paper, The Split-Apply-Combine Strategy for Data Analysis, presents the split-apply-combine strategy, which should remind the reader of the map-reduce framework from Google. The idea is to recognize that in some cases big problems are simply an aggregation of smaller problems. This is the case for maximum likelihood estimation of the linear model as well. The picture below illustrates how maximum likelihood works, in the standard case: Let's use R to do exactly this.
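Before the linear model below, a minimal warm-up sketch (my addition) showing the coin-flip example from above in R; the maximizer recovers the sample proportion of 1's:

set.seed(42)
y <- rbinom(1000, size = 1, prob = 0.8)   # a sample of 0's and 1's

minus_loglik <- function(p, y) {
  -sum(dbinom(y, size = 1, prob = p, log = TRUE))
}

optimise(minus_loglik, interval = c(0.001, 0.999), y = y)$minimum
mean(y)                                   # the analytical MLE, for comparison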
Let's first start by simulating some data:

library("tidyverse")

size <- 500000

x1 <- rnorm(size)
x2 <- rnorm(size)
x3 <- rnorm(size)

dep_y <- 1.5 + 2*x1 + 3*x2 + 4*x3 + rnorm(size)

x_data <- cbind(dep_y, 1, x1, x2, x3)

x_df <- as.data.frame(x_data) %>%
  rename(iota = V2)

head(x_df)

##        dep_y iota         x1          x2         x3
## 1   1.637044    1  0.2287198  0.91609653 -0.4006215
## 2  -1.684578    1  1.2780291 -0.02468559 -1.4020914
## 3   1.289595    1  1.0524842  0.30206515 -0.3553641
## 4  -3.769575    1 -2.5763576  0.13864796 -0.3181661
## 5  13.110239    1 -0.9376462  0.77965301  3.0351646
## 6   5.059152    1  0.7488792 -0.10049061  0.1307225

Now that this is done, let's write a function to perform Maximum Likelihood Estimation:

loglik_linmod <- function(parameters, x_data){
  sum_log_likelihood <- x_data %>%
    mutate(log_likelihood =
             dnorm(dep_y,
                   mean = iota*parameters[1] + x1*parameters[2] + x2*parameters[3] + x3*parameters[4],
                   sd = parameters[5],
                   log = TRUE)) %>%
    summarise(sum(log_likelihood))

  -1 * sum_log_likelihood
}

The function returns minus the log likelihood, because optim(), which I will be using to optimize the log-likelihood function, minimizes functions by default (minimizing the opposite of a function is the same as maximizing a function). Let's optimize the function and see if we're able to find the parameters of the data generating process, 1.5, 2, 3, 4 and 1 (the standard deviation of the error term):

optim(c(1,1,1,1,1), loglik_linmod, x_data = x_df)

We successfully find the parameters of our data generating process! Now, what if I'd like to distribute the computation of the contribution to the likelihood of each observation across my 12 cores? The goal is not necessarily to speed up the computations but to be able to handle larger than RAM data. If I have data that is too large to fit in memory, I could split it into chunks, compute the contributions to the likelihood of each chunk, sum everything again, and voila! This is illustrated below: To do this, I use the {disk.frame} package, and only need to change my loglik_linmod() function slightly:

library("disk.frame")

x_diskframe <- as.disk.frame(x_df) # Convert the data frame to a disk.frame

loglik_linmod_df <- function(parameters, x_data){
  sum_log_likelihood <- x_data %>%
    mutate(log_likelihood =
             dnorm(dep_y,
                   mean = iota*parameters[1] + x1*parameters[2] + x2*parameters[3] + x3*parameters[4],
                   sd = parameters[5],
                   log = TRUE)) %>%
    chunk_summarise(sum(log_likelihood))

  out <- sum_log_likelihood %>%
    collect() %>%
    pull() %>%
    sum()

  -out
}

The function is applied to each chunk, and chunk_summarise() computes the sum of the contributions inside each chunk. Thus, I first need to use collect() to transfer the chunk-wise sums in memory and then use pull() to convert it to an atomic vector, and finally sum them all again. Let's now optimize this function:

optim(rep(1, 5), loglik_linmod_df, x_data = x_diskframe)

## $par
## [1] 1.5351722 1.9566144 3.0067978 4.0202956 0.9889412
##
## $value
## [1] 709977.2
##
## $counts
## function gradient
##      502       NA
##
## $convergence
## [1] 1
##
## $message
## NULL

This is how you can use the split-apply-combine approach for maximum likelihood estimation of a linear model! This approach is quite powerful, and the familiar map() and reduce() functions included in {purrr} can also help with this task. However, this only works if you can split your problem into chunks, which is sometimes quite hard to achieve.
However, as usual, there is rarely a need to write your own functions, as {disk.frame} includes the dfglm() function, which can be used to estimate any generalized linear model using disk.frame objects!
I recently gave a talk at the Dartmouth Number Theory Seminar (thank you Edgar for inviting me and to Edgar, Naomi, and John for being such good hosts). In this talk, I described the recent successes we've had on working with variants of the Gauss Circle Problem. The story began when (with Tom Hulse, Chan Ieong Kuan, and Alex Walker — and with helpful input from Mehmet Kiral, Jeff Hoffstein, and others) we introduced and studied the Dirichlet series $$\begin{equation} \sum_{n \geq 1} \frac{S(n)^2}{n^s}, \notag \end{equation}$$ where $S(n)$ is a sum of the first $n$ Fourier coefficients of an automorphic form on $\mathrm{GL}(2)$. We've done this successfully with a variety of automorphic forms, leading to new results for averages, short-interval averages, sign changes, and mean-square estimates of the error for several classical problems. Many of these papers and results have been discussed in other places on this site. Ultimately, the problem becomes acquiring sufficiently detailed understandings of the spectral behavior of various forms (or more correctly, the behavior of the spectral expansion of a Poincare series against various forms). We are continuing to research and study a variety of problems through this general approach. The slides for this talk are available here.
You can view as many worked out examples as you want. First you are shown the question. Click Show Answer to view the answer. Click Show another Example to view another example. The best way to master mathematics is by practice. But practice requires time. If you don't have the time to practice, you can always view many practice problems completely worked out and become good at it too. Question Divide $\;\;\;\frac{2}{5}\div\frac{19}{18}$ Answer $\;\;\;\frac{2}{5}\div\frac{19}{18}$ $=\frac{2}{5}\times\frac{18}{19}$ $=\frac{2\times18}{5\times19}$ $=\frac{36}{95}$
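As an aside (not part of the worked example), such computations are easy to check in R; the fractions() helper from the MASS package prints results as rational numbers:

library(MASS)
fractions((2/5) / (19/18))   # 36/95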
I have a problem that sounds like this: Find the limit $$\lim_{x\rightarrow 0} \frac{14\tan(6x)-84x}{6x^3}$$ using Maclaurin series, and don't forget the importance of big O notation. I have tried to find the Maclaurin series in different ways, but I always end up with the wrong answer. And I don't know how to use the big O notation in a helpful way here. Thank you in advance.
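A sketch of the standard approach (added here; not part of the original question): using $\tan u = u + \frac{u^3}{3} + O(u^5)$ with $u = 6x$, $$14\tan(6x) - 84x = 14\left(6x + \frac{(6x)^3}{3} + O(x^5)\right) - 84x = 1008x^3 + O(x^5),$$ so $$\lim_{x\to 0} \frac{14\tan(6x)-84x}{6x^3} = \lim_{x\to 0}\left(\frac{1008x^3}{6x^3} + O(x^2)\right) = 168.$$ The $O(x^5)$ term is exactly what justifies discarding the higher-order terms: after division by $6x^3$ it contributes $O(x^2)$, which vanishes in the limit.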
Answer $21^{\circ}C $ Work Step by Step We use the equation for H to find: $H = (120 \ W/^{\circ}C)(\Delta T)$ $\Delta T = \frac{H}{120 \ W/^{\circ}C} $ $\Delta T = \frac{2.5 \times 10^3 \ W}{120 \ W/^{\circ}C} = 21^{\circ}C $
Practice Paper 3 Question 11 Let \(I_n=\int_{-\pi/2}^{\pi/2} \cos^n\!x \,dx\) for any non-negative integer \(n.\) \(\;(i)\) Find a recursive expression for \(I_n.\) \((ii)\) Find the simplest non-recursive expression for \(I_{2n}\) that contains \(\binom{2n}{n},\) where \(\binom{n}{k}=\frac{n!}{k!(n-k)!}.\) Related topics Warm-up Questions Evaluate \(\int_{-\pi}^{\pi}x^2\,dx.\) Find \(\int\ln x \,dx.\) A geometric sequence starts with \(\frac{1}{5}, \frac{2}{15}, \frac{4}{45} \ldots.\) Find a recursive formula for the sequence and solve it to get the closed form. Hints Hint 1 Try to split \(\cos^n x\) into some terms, one of which you can integrate easily. Hint 2 Have you tried integrating by parts? Hint 3 Integrate and simplify \(I_n\) as much as you can. Hint 4 Try to rewrite \(I_n\) as a recurrence relation. Hint 5 ... you may find a trigonometric identity helpful. Hint 6 Which initial term can you find to help solve the recurrence for \(I_{2n}?\) Hint 7 Try writing out and unwinding \(I_{2n}.\) Hint 8 ... remembering that your expression should contain \(\frac{(2n)!}{(n!)^2}.\) Solution Writing \(\cos^n x = \cos x \cdot \cos^{n-1} x\) allows us to use integration by parts to get \(I_n = [\sin x \cdot \cos^{n-1} x]_{-\pi/2}^{\pi/2} + \int_{-\pi/2}^{\pi/2} (n-1) \sin^2\!x \cos^{n-2}\!x \,dx.\) The boundary term vanishes for \(n\ge2\) since \(\cos(\pm\pi/2)=0.\) Simplifying using the identity \(\sin^2 x = 1-\cos^2 x\) yields \(I_n = (n-1) \int_{-\pi/2}^{\pi/2} (\cos^{n-2} x-\cos^{n} x) \,dx,\) so \(I_n = (n-1)(I_{n-2}-I_{n}).\) Rearranging gives \(I_{n} = \frac{n-1}{n} I_{n-2}.\) To solve this recurrence relation for \(I_{2n},\) we first find \(I_0=\int_{-\pi/2}^{\pi/2}dx=\pi.\) Unwinding the recursion we get \(I_{2n} = \frac{2n-1}{2n}\cdot\frac{2n-3}{2n-2}\cdot\frac{2n-5}{2n-4}\cdots\frac{1}{2}\cdot\pi,\) which has the product of all odd terms up to \(2n-1\) in the numerator and the product of all even terms up to \(2n\) in the denominator. To get \((2n)!\) in the numerator, we need to fill in the even terms, so we multiply both the numerator and the denominator by that same even product, which yields \(I_{2n} = \pi \cdot \frac{(2n)!}{2^2 4^2 6^2 \cdots (2n)^2}.\) Factorising \(2^2\) from each term in the denominator gives \(I_{2n}=\pi \cdot \frac{(2n)!}{2^{2n} (n!)^2}=\binom{2n}{n}\frac{\pi}{4^n}.\)
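A quick numerical check of the closed form in R (my addition, not part of the official solution):

n <- 5
integrate(function(x) cos(x)^(2 * n), -pi / 2, pi / 2)$value  # 0.7731263...
choose(2 * n, n) * pi / 4^n                                   # same: 252*pi/1024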
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ... Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV are reported in the pseudorapidity range $|\eta| < 0.8$ ...
Practice Paper 3 Question 12 A bank has a loan scheme with an annual interest rate of \(0<r<1\) (\(100r\) is the annual percentage rate). The bank charges interest every month. You take a loan of \(L\) pounds and wish to pay it back in exactly \(m\) months by making monthly payments of \(p\) pounds each. Find \(p\). Related topics Warm-up Questions Evaluate the sum \(a+ar+ar^2+\cdots+ar^n,\) with \(r \neq 0\) and \(n>0.\) Given \(a_0= 5\) and \(a_n= 3a_{n-1}+4,\) find \(a_4.\) Hints Hint 1 Find an expression for the amount remaining to be paid after month \(i.\) Hint 2 ... in terms of the amount remaining to be paid in month \(i-1.\) Hint 3 You might first want to find the cumulative monthly interest rate. Hint: it's not \(r/12.\) Hint 4 ... knowing that the cumulative monthly interest rate is the interest rate that gives \(r\) when applied successively \(12\) times. Hint 5 If \(x_i\) denotes the amount remaining to be paid after month \(i,\) you should now have a recursive formula for \(x_i.\) Try to find the explicit formula. Hint 6 You should get an expression that contains \(x_0.\) What is the value of \(x_0?\) Solution Let \(x_i\) be the amount remaining to be paid after month \(i\). Thus, \(x_0=L.\) We have \(x_i=x_{i-1}(1+\mu)-p,\) where \(\mu\) is the cumulative monthly interest rate (we'll determine its value later). To simplify our working, let \(c=1+\mu.\) Unwinding the recursion, we get: \[ \begin{align} x_i &= x_{i-1}c-p \\ &= (x_{i-2}c-p)c - p \\ &= \left(\left((x_{i-3}c-p)c - p\right) \right)c-p \\ &= \ldots \\ &= x_0c^i-p\sum_{k=0}^{i-1}c^k \\ &= Lc^i-p\frac{c^i-1}{c-1}. \end{align} \] We want \(x_m=0,\) which gives \(p=L\;c^m\;\frac{c-1}{c^m-1}.\) Now we need to express \(\mu\) in terms of \(r.\) The cumulative monthly interest rate \(\mu\) is the interest rate that gives \(r\) when applied successively (i.e. recursively) \(12\) times. This is not equal to \(r/12\;^{(*)}.\) An initial amount \(z_0\) becomes \(z_0(1+\mu)\) after a month, and \(z_0(1+\mu)^{12}\) after \(12\) months, which must also be equal to \(z_0(1+r).\) This is in fact another recursion: \(z_i=z_{i-1}(1+\mu),\) where we have \(z_{12}=z_0(1+\mu)^{12}=z_0(1+r).\) Hence, \(\mu=(1+r)^{1/12}-1.\) The monthly payment sought is then \(p=L\;(1+r)^{m/12}\;\frac{(1+r)^{1/12}-1}{(1+r)^{m/12}-1}.\) (*) \(\mu\) can in fact be approximated by \(r/12\) for \(r\ll1\) through a Taylor series. This is a safe assumption for savings accounts, but not for loans (hmm, what does that tell you about banks?).
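A direct translation of the result into R (my own sketch; inputs are the loan L in pounds, the annual rate r, and the term m in months):

monthly_payment <- function(L, r, m) {
  g <- (1 + r)^(1 / 12)          # g = 1 + mu, the monthly growth factor
  L * g^m * (g - 1) / (g^m - 1)
}

monthly_payment(L = 10000, r = 0.05, m = 24)   # roughly 438 pounds per month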
What are general theorems that give sufficient criteria for a $C^{\infty}$ function to be analytic? The more general/simple the test, the better. I'm trying to understand in a more thorough way what prevents $C^{\infty}$ functions from being analytic. The canonical example of a $C^{\infty}$ function $f$ which is not analytic at a point is $f(x) = e^{-\frac{1}{x}}$ for $x \in (0, \infty)$ and $f(x) = 0$ elsewhere. This indicates to me that nonzero analytic functions cannot "flatten out" too quickly at a point where they are defined. I know this is vague, but can you give me some theorems that make it more clear just how much more general $C^{\infty}$ functions are than analytic ones, and how many more degrees of freedom they have? I would also appreciate examples of $C^{\infty}$ functions that are pathological in some sense, in a way that an analytic function cannot be. For example, is there a $C^{\infty}$ function that fails to be analytic on a dense set? I am afraid that graphs do not always give us clear visual cues about analyticity, because the differences between smoothness and analyticity are somewhat subtle and technical. The following is the only difference known to me: Proposition. Let $f: I \to \mathbb{R}$ be an analytic function defined on an open interval $ I \subseteq \mathbb{R} $. If there exists $ x_0 \in I $ such that for every $ n \in \mathbb{Z}_{\ge 0}$ we have that $ f^{(n)}(x_0) = 0 $, then $ f(x) = 0 $ for every $ x \in I $. Since the class of analytic functions is well behaved in the sense that it is closed under the operations of sum, product, and composition, another thing that might help to identify non-analytic functions, when dealing with combinations of elementary analytic functions, is to look at points outside the maximal domain at which the function cannot be assigned a value without losing analyticity, which typically forces a piecewise definition. That is exactly the case with the function $$ f(x) = \begin{cases} e^{-\frac{1}{x}} & x >0 \\ 0 &x \le 0 \end{cases} $$ which fails to be analytic exactly at $ x = 0 $, which in turn is the only point outside the maximal domain $ \mathbb{R} \backslash \{0\} $ of the function $ e^{-\frac{1}{x}} $ at which it is not possible to assign a value that preserves analyticity. Now we will state two propositions that give sufficient criteria for a smooth function to be analytic. Let $ f: I \to \mathbb{R} $ be a smooth function defined on an open interval $ I \subseteq \mathbb{R} $. Proposition. The function $f$ is analytic if and only if for every $ x_0 \in I $ the Taylor series of $f$ around $x_0$ converges to $f(x)$ for all $x$ in a neighbourhood of $x_0$. Proposition. The function $f$ is analytic if and only if for every compact subset $ K \subset I $ there exists a constant $ C \in \mathbb{R}_{\ge 0} $ such that for every $ n \in \mathbb{Z}_{\ge 0} $ the following inequality holds for all $ x \in K $ $$ |f^{(n)}(x)| \le C^{n+1} \, n! $$ Lastly, you can find more about non-analytic smooth functions in the following Wikipedia articles:
The stock investment universe is huge. However, for an automated trading strategy, it is even bigger since every day will offer this entire universe to choose from. The buy $ \& $ holder will make one choice on a number of stocks and stick to it. Whereas for the automated trader, each day requires a trading decision for each stock in the portfolio, be it to hold, buy, or sell from this immense opportunity set. There is a need to reduce it to a more manageable size.

Say you buy all the stocks in $ \mathbf{SPY} $ equally (bet size $ \$ $20k per trade on a $ \$ $10M portfolio). That strategy is $ \sum_1^{500}(h_{0j} \cdot \Delta P) $ where $ h_{0j} $ is the initial quantity bought in each stock $ j=1, \dots, 500 $. Evidently, the outcome should tend towards the same as having bought $ \mathbf{SPY} $ alone: $ \sum_1^{500}( h_{0j} \cdot \Delta P) \to \sum (H_{spy} \cdot \Delta P) $. At least, theoretically, that should be the long-term expectation, and this holds looking backward or forward, for that matter.

$\quad \quad \mathsf{E} [ \sum_1^{500}(h_{0j} \cdot \Delta P) ] \to \sum (H_{spy} \cdot \Delta P) $

This says that the expected long-term profit should tend to be the same as if you had bought $ \$ $10M of $ \mathbf{SPY} $ and held on. Nothing more than a simple Buy & Hold scenario. However, you decide to trade your way over this decade-long interval with the expectation that your kind of expertise will give you an edge and help you generate more than having held on to $\mathbf{SPY}$. But the conundrum you have to tackle is this long-term notion which says that the longer you play, the more your outcome will tend to market averages.

Your Game Expectations

To obtain more, you change your "own" expectation for the game you intend to play to: $ \mathsf{E} [ \sum (H_k \cdot \Delta P) ] > \sum (H_{spy} \cdot \Delta P) $ by designing your own trading strategy $ H_k $. I would anticipate you to want more than just outperforming the averages, much more, as in:

$\quad \quad \displaystyle{ \mathsf{E} [ \sum (H_k \cdot \Delta P) ] \gg \sum (H_{spy} \cdot \Delta P)} $

It is not the market that is changing in this quest of yours; it is you, by considering your opportunity set of available methods of play. You opt to reconfigure the game to something you think you can do. You have analyzed the data and determined that if you had done this or that in a simulation, it would have been more profitable than a Buy $ \& $ Hold scenario. Fortunately, you also realized soon enough that past data is just that, past data, and future data has no obligation to follow "your" expectations. Nevertheless, you can study the past, observe what worked and what did not, and from there design better systems by building on the shoulders of those you appreciate the most.

Can past data have some hidden gems, anomalies? Yes. You can find a lot of academic papers on that very subject. But those past "gems" might not be available in future data, or might be such rare occurrences that you could not anticipate "when" they will or might happen again. Another way of saying that your trading methods might be fragile, to put it mildly.

A Unique Selection

The content of $ \mathbf{SPY} $ is a unique set of stocks, in itself a sub-sample of a much larger stock universe. Taking a sub-sample of stocks from $ \mathbf{SPY} $ (say 100 of its stocks) will generate another unique selection. There are $ C_{100}^{500} = \frac{500!}{100! \cdot 400!} = 2.04 \times 10^{107} $ combinations in taking 100 stocks out of 500.
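These counts are easy to reproduce with Python's standard library (a quick check):

```python
from math import comb

c100 = comb(500, 100)   # 100-stock selections out of the 500 in SPY
c200 = comb(500, 200)   # 200-stock selections

print(f"C(500,100) = {c100:.3e}")       # ~2.04e+107
print(f"C(500,200) = {c200:.3e}")       # ~5.05e+144
print(f"ratio      = {c200/c100:.3e}")  # ~10^37 times more choices
```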
And yet, the majority of sets will tend to some average performance $ \sum (H_n \cdot \Delta P) \to \sum (H_{spy} \cdot \Delta P) $ where $ n $ could be that 1 set from the $ 2.04 \times 10^{107} $ available. Such a set from the $ \mathbf{SPY} $ would have passed other basic selection criteria such as: high market caps, liquidity, trading volume, and more.

No one is going to try testing all set samples based on whatever criteria or whatever method. It would take millions of lifetimes and a lot more than all the computing power on the planet. $ 10^{107} $ is a really, really huge number. The only choice becomes taking a sub-sub-sub-sample of what is available. So small, in fact, that whatever method is used in the stock selection process, you could not even express the notion of strict representativeness. To be representative of the whole would require that we have some statistical measure of some kind on the $ 10^{107} $ possible choices. We cannot express a mean $ \mu $ or some standard deviation $ \sigma $ without having surveyed a statistically significant fraction of the data.

The problem gets worse if you considered 200 stocks out of the 500 in the $ \mathbf{SPY} $. There, the number of combinations would be: $ C_{200}^{500} = \frac{500!}{200! \cdot 300!} = 5.05 \times 10^{144} $! This is not a number that is 35$ \% $ larger than the first one (even though its exponent is); it is $ 10^{37} $ times larger. We are, therefore, forced to accept a very low number of stock selection sets in our simulations. Every time we make a 100-stock or 200-stock selection we should realize that that selection is just 1 in $ 2.04 \times 10^{107} $ or 1 in $ 5.05 \times 10^{144} $ respectively. But that is not the whole story.

Portfolio Rebalancing

If you are rebalancing your 100 stocks every day, you have a new set of choices which will again result in 1 set out of $ 2.04 \times 10^{107} $. This is to say that your stock selection can change from day to day for some reason or other and that that selection is also very, very rare. So rare, in fact, that it should not even be considered as a sample, not even a sub-sub-sub-sample. The number of combinations is simply too large for any one selection to be made representative of the whole, even if, in all probability, it might behave like one, since the majority of those selections will tend to the average outcome anyway.

As a consequence, people do simplify the problem. For instance, they sort by market capitalization and take the top 100 stocks. This makes it a unique selection too, not a sample, but a 1 in $ 2.04 \times 10^{107} $. Not only that, but it will always be the same for anyone else using the same selection criterion. As such, this "sample" could not be considered as representative of the whole either, but just as a single instance, a one of a kind. It is the selection criteria used that totally determined this unique selection. It is evidently upward biased by design and will also be unique going forward. Making such a stock selection ignores $ 2.04 \times 10^{107} - 1 $ other possible choices! Moreover, if many participants adopt the same market capitalization sort, they too are ignoring the majority of other possible selection methods, and they end up dealing with the very same set of stocks over and over again, whatever modifications they make to their trading procedures. The notion of market diversity might not really be part of that equation. It is the trading procedures and the number of stocks used that will differentiate those strategies.
But, ultimately, it leads to some curve-fitting of the data in order to outperform! And that is not the best way to go.

Reducing Volatility

If you want to reduce volatility, one of the easiest ways is simply to increase the number of stocks in the portfolio. Instead of dealing with only 100 stocks, you go for 200! Then any stock might start by representing only 0.5$ \% $ of the total, thereby minimizing the impact of any one stock going bad. The converse also applies: those performing better will have their impact reduced too. Diversifying more by increasing the number of stocks will increase the number of possible choices to $ 5.05 \times 10^{144} $. Yet, by going the sorted market capitalization route, you are again left with one and only one set of stocks for each trading day.

If there are $ 2.04 \times 10^{107} $ possible 100-stock portfolios to choose from, then whatever selection method is used might be less than representative. We are not making a selection based on the knowledge of the $ 2.04 \times 10^{107} - 1 $ other choices; we are just making one that has some economic rationale behind it. The largest capitalization stocks have some advantage over others for the simple reason that they have been around for some time and were, in fact, able to get there, meaning reaching their high capitalization status. Over the past 10 years, should you have taken the highest capitalization stocks by ranking, you would have found that most of the time the same stocks were jockeying for position near the top. Again, selecting by market capitalization led to the same choice for anyone using that stock selection method. Since ranking by market cap is widespread amongst portfolio managers, we should expect to see variations based on the same general theme.

Selection Consistency

Here is a notion I have not seen often, or that I consider neglected, in automated trading strategies. Automation is forcing us to consider everything as numbers: how many of this or that, what level is this or that, what are the averages of this or that, always numbers and numbers. If you want to express some kind of sentiment or opinion, it has to be translated into some numbers. Your program does not answer with: I think, ..., you should take that course of action. It simply computes the data it is given and takes action accordingly, based on what it was programmed to do. Nothing more, but also nothing less. It is a machine, a program. You are the strategy designer, and your program will do what you tell it to do.

All this leads to the notion that your stock selection process should be consistent with your trading methods. For instance, if you design a trend-following system, then you should select trending stocks and not mean-reverting ones, which would tend to be counterproductive to your set objectives. Trend-following goes for the continuation of the price move whereas mean-reversion goes in the opposite direction. Therefore, your stock selection method should aim to capture stocks having demonstrated this trend-following ability over their past. Otherwise, you do have a serious stock selection problem. And if you cannot distinguish trending stocks from mean-reverting ones, then you are in even more trouble. You would be playing a game where you are making bets without following the very nature of your strategy design. All because your stock selection process was not consistent with your trading procedures. If your trading strategy cannot identify mean-reverting stocks, then why play a mean-reversion gig?
Practice Paper 3 Question 13

\(n\) passengers board a plane with \(n\) seats. Each passenger has an assigned seat. The first passenger forgets and takes a seat at random. Every subsequent passenger sits in their assigned seat unless it is already taken, at which point they take a random seat. What is the probability (in terms of \(n\)) that: Every passenger sits in the correct seat? At least one passenger sits in the correct seat? Exactly one passenger sits in the correct seat?

Warm-up Questions

What is the probability of not getting a six in ten rolls of a die? Evaluate \(\sum_{k=10}^{50}k.\) Evaluate \(\prod_{k=1}^{50}\frac{k}{k+1}.\)

Hints

Hint 1 (Part ii) Would it be easier to calculate the probability that at least one passenger sits in the correct seat indirectly?
Hint 2 (Part ii) Try using the probability that no passenger sits in the correct seat.
Hint 3 (Part iii) If the fourth passenger is the only one in the correct seat, where must the first and second passengers sit?
Hint 4 (Part iii) Where must the third passenger sit if the fourth passenger is to be the only one in the correct seat? What about any subsequent passengers?
Hint 5 (Part iii) Try to find the probability that only one passenger sits in their assigned seat by finding the probability that only the \(i^{th}\) passenger is sitting in their assigned seat.

Solution

If the first passenger sits in the correct seat, then every other passenger will be able to take their assigned seat; if not, the passenger whose seat was taken cannot sit correctly. Hence the probability that all the passengers take their assigned seat is \(\frac{1}{n}.\)

For every passenger to sit in the wrong seat, each passenger must sit in the seat of the next one to board. The probability of this is \(\frac{1}{n} \cdot \frac{1}{n-1} \cdot \ldots \cdot \frac{1}{1}\) \(=\prod_{i=1}^{n}\frac{1}{i}\) \(= \frac{1}{n!}.\) Therefore we deduce that the probability of at least one passenger sitting in the correct seat is \(1-\frac{1}{n!}.\)

We will first calculate the probability that only the \(k^{th}\) passenger is sitting in the correct seat. For this to happen, every passenger except the \(k^{th}\) and \((k-1)^{th}\) one must sit in the seat of the next one to board. The \((k-1)^{th}\) passenger must sit in the seat of the \((k+1)^{th}\) one unless there are only \(k\) passengers; in this case they must sit in the seat of the first passenger to board. Hence every passenger but the \(k^{th}\) has a \(1/i\) chance of choosing the required seat, where \(i\) is the number of seats left available (the \(k^{th}\) passenger finds their own seat free and takes it with certainty). Let \(j\) be the number of seats left when the \(k^{th}\) passenger boards. Therefore, the probability that the \(k^{th}\) passenger is the only one in the correct seat is \(\frac{1}{n} \cdot \frac{1}{n-1} \cdot \ldots \cdot \frac{j}{j} \cdot \ldots \cdot \frac{1}{2} \cdot \frac{1}{1} = \frac{j}{n!}.\)

We sum over all possible values of \(j\) up to \(n-1.\) Note that we exclude \(j=n\) because that implies the first passenger is sitting in the correct seat, in which case everyone is sitting correctly. \[ \sum_{j=1}^{n-1}\frac{j}{n!} = \frac{n(n-1)}{2n!} = \frac{1}{2(n-2)!}. \] (This requires \(n\ge3\): for \(n=2\) the \((k-1)^{th}\) and first passengers coincide, and exactly one correct seat is in fact impossible.)
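A quick Monte Carlo check of all three answers (my own sketch, here with \(n=6\)):

```python
import random
from math import factorial

def board(n):
    """One boarding: passenger 0 sits at random; later passengers take
    their own seat if free, otherwise a uniformly random free seat.
    Returns the number of passengers in their correct seats."""
    free = set(range(n))
    correct = 0
    for p in range(n):
        s = p if (p != 0 and p in free) else random.choice(sorted(free))
        free.remove(s)
        correct += (s == p)
    return correct

n, trials = 6, 200_000
c = [board(n) for _ in range(trials)]
print(sum(x == n for x in c) / trials, "vs", 1 / n)                       # part i
print(sum(x >= 1 for x in c) / trials, "vs", 1 - 1 / factorial(n))        # part ii
print(sum(x == 1 for x in c) / trials, "vs", 1 / (2 * factorial(n - 2)))  # part iii
```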
Given a chess board of dimensions $7\times 7$, what is the minimal number of knights needed to be placed on the board so that every cell is attacked by at least one knight? I have no idea on this one. I assume the best strategy would be to place them so that each knight attacks the most number of cells, which is $8$.

Using the natural ordering of squares of an $n \times n$ board, $(i,j)\rightarrow ni+j$, here are optimal solutions for $n=4,5,6$. Clearly there are no solutions for $n < 4$. $$ \begin{align} K(4) \; &: \; (0, 1, 5, 6, 10, 11) \\ K(5) \; &: \; (0, 1, 3, 6, 11, 12, 13) \\ K(6) \; &: \; (7, 8, 9, 10, 19, 20, 21, 22) \\ \end{align} $$ For $n=7$ I ran a brute force and exhausted all sets of fewer than 10 knights. So, your solution of 10 knights is optimal. I found a greedy algorithm that works pretty well -- I found 6 different optimal solutions for $n=7$. $$ (3, 15, 16, 18, 19, 29, 30, 31, 32, 33) \\ (9, 11, 16, 18, 21, 25, 30, 32, 37, 39) \\ (9, 11, 16, 18, 23, 25, 30, 32, 37, 39) \\ (9, 11, 16, 18, 23, 27, 30, 32, 37, 39) \\ (15, 16, 17, 18, 19, 29, 30, 31, 32, 33) \\ (15, 16, 17, 18, 19, 29, 30, 32, 33, 45) \\ $$

First construct the directed graph where each vertex is a square on the board and each edge $(u,v)$ represents a possible knight move from $u$ to $v$. Let's take a look at the adjacency matrix. We want to find the smallest subset of rows (places to put the knights) such that each column is covered (every square is attacked). This led me to do the following. Repeating until all squares are attacked: greedily choose the safest spot on the board, that is, the non-attacked vertex $v^*$ with the minimum number of incoming edges. Let $u^*$ be the most aggressive of $v^*$'s possible attackers. That is, let $u^*$ be the element of $\{ u \; | \; \exists \; (u,v^*) \} $ with the maximum number of outgoing edges. Let $U$ be the set of vertices that $u^*$ can attack, $U = \{v \; | \; \exists \; (u^*,v)\}$. Place a knight on position $u^*$, mark the vertices in $U$ as attacked, and remove all of their incoming edges. Here is a visual of the first step. Note $v^*$ is the column with the least number of nonzero elements.
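Here is a compact Python sketch of that greedy procedure (my reconstruction from the description above; ties are broken arbitrarily, so it is a heuristic and may occasionally place more than the optimal 10 knights):

```python
def knight_cover(n=7):
    moves = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

    def attacks(v):
        """Squares a knight on v attacks; knight moves are symmetric,
        so these are also the squares that can attack v."""
        i, j = divmod(v, n)
        return {(i + di) * n + (j + dj)
                for di, dj in moves
                if 0 <= i + di < n and 0 <= j + dj < n}

    uncovered = set(range(n * n))
    knights = []
    while uncovered:
        # v*: the non-attacked square with the fewest incoming edges
        v = min(uncovered, key=lambda s: len(attacks(s)))
        # u*: the attacker of v* with the most outgoing edges into
        # squares that are still uncovered
        u = max(attacks(v), key=lambda s: len(attacks(s) & uncovered))
        knights.append(u)
        uncovered -= attacks(u)      # mark U as attacked
    return sorted(knights)

print(knight_cover(7))   # a 10-knight cover when the ties fall well
```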
SolidsWW Flash Applet Sample Problem 3

Flash Applets embedded in WeBWorK questions: solidsWW Example, Sample Problem 3 with solidsWW.swf embedded.

A standard WeBWorK PG file with an embedded applet has six sections: A tagging and description section, that describes the problem for future users and authors, An initialization section, that loads required macros for the problem, A problem set-up section that sets variables specific to the problem, An Applet link section that inserts the applet and configures it (this section is not present in WeBWorK problems without an embedded applet), A text section, that gives the text that is shown to the student, and An answer and solution section, that specifies how the answer(s) to the problem is (are) marked for correctness, and gives a solution that may be shown to the student after the problem set is complete.

The sample file attached to this page shows this; below, the file is shown on the left, with a second column on its right that explains the different parts of the problem that are indicated above. A screenshot of the applet embedded in this WeBWorK problem is shown below:

There are other example problems using this applet: solidsWW Flash Applet Sample Problem 1, solidsWW Flash Applet Sample Problem 2. And other problems using applets: Derivative Graph Matching Flash Applet Sample Problem, USub Applet Sample Problem, trigwidget Applet Sample Problem, solidsWW Flash Applet Sample Problem 1, GraphLimit Flash Applet Sample Problem 2. Other useful links: Flash Applets Tutorial, Things to consider in developing WeBWorK problems with embedded Flash applets.

PG problem file / Explanation

##DESCRIPTION
## Solids of Revolution
##ENDDESCRIPTION
##KEYWORDS('Solids of Revolution')
## DBsubject('Calculus')
## DBchapter('Applications of Integration')
## DBsection('Solids of Revolution')
## Date('7/31/2011')
## Author('Barbara Margolius')
## Institution('Cleveland State University')
## TitleText1('')
## EditionText1('2011')
## AuthorText1('')
## Section1('')
## Problem1('')
##########################################
# This work is supported in part by the
# National Science Foundation
# under the grant DUE-0941388.
##########################################

This is the tagging and description section of the problem. The description is provided to give a quick summary of the problem so that someone reading it later knows what it does without having to read through all of the problem code. All of the tagging information exists to allow the problem to be easily indexed. Because this is a sample problem there isn't a textbook per se, and we've used some default tagging values. There is an on-line list of current chapter and section names and a similar list of keywords. The list of keywords should be comma separated and quoted (e.g., KEYWORDS('calculus','derivatives')).

DOCUMENT();
loadMacros(
  "PGstandard.pl",
  "AppletObjects.pl",
  "MathObjects.pl",
);

This is the initialization section of the problem; it loads the standard PG macros, the applet-embedding routines, and MathObjects.

TEXT(beginproblem());
$showPartialCorrectAnswers = 1;
Context("Numeric");
$a = 2*random(2,6,1);
$b = 2*$a;
$xy = 'x';
$func1 = "x";
$func2 = "2*$a-x";
$xmax = Compute("2*$a");
$shapeType = 'poly';
$sides = random(3,8,1);
$correctAnswer = Compute("2*$a^3*$sides*tan(pi/$sides)");

This is the problem set-up section. The solidsWW.swf applet will accept a piecewise defined function either in terms of x or in terms of y; we set $xy to 'x' here.

#########################################
# How to use the solidsWW applet.
# Purpose: The purpose of this applet
# is to help with visualization of
# solids
# Use of applet: The applet state
# consists of the following fields:
# xmax - the maximum x-value.
# ymax is 6/5ths of xmax. The minima
# are both zero.
# captiontxt - the initial text in
# the info box in the applet
# shapeType - circle, ellipse,
# poly, rectangle
# piece: consisting of func and cut
# this is a function defined piecewise.
# func is a string for the function
# and cut is the right endpoint
# of the interval over which it is
# defined
# there can be any number of pieces
#
#########################################
# What does the applet do?
# The applet draws three graphs:
# a solid in 3d that the student can
# rotate with the mouse
# the cross-section of the solid
# (you'll probably want this to
# be a circle)
# the radius of the solid which
# varies with the height
#########################################

Those portions of the code that begin the line with # are comments.

###################################
# Create link to applet
###################################
$appletName = "solidsWW";
$applet = FlashApplet(
  codebase => findAppletCodebase("$appletName.swf"),
  appletName => $appletName,
  appletId => $appletName,
  setStateAlias => 'setXML',
  getStateAlias => 'getXML',
  setConfigAlias => 'setConfig',
  maxInitializationAttempts => 10,
  height => '550',
  width => '595',
  bgcolor => '#e8e8e8',
  debugMode => 0,
  submitActionScript => ''
);

This is the Applet link section. You must include the section that follows.

###################################
# Configure applet
###################################
$applet->configuration(qq{<xml><plot>
<xy>$xy</xy>
<captiontxt>'Compute the volume of the figure shown.'
</captiontxt>
<shape shapeType='$shapeType' sides='$sides' ratio='1.5'/>
<xmax>$xmax</xmax>
<theColor>0x0066cc</theColor>
<profile>
<piece func='$func1' cut='$a'/>
<piece func='$func2' cut='$xmax'/>
</profile>
</plot></xml>});
$applet->initialState(qq{<xml><plot>
<xy>$xy</xy>
<captiontxt>'Compute the volume of the figure shown.'
</captiontxt>
<shape shapeType='$shapeType' sides='$sides' ratio='1.5'/>
<xmax>$xmax</xmax>
<theColor>0x0066cc</theColor>
<profile>
<piece func='$func1' cut='$a'/>
<piece func='$func2' cut='$xmax'/>
</profile>
</plot></xml>});
TEXT( MODES(TeX=>'object code', HTML=>$applet->insertAll(
  debug=>0,
  includeAnswerBox=>0,
)));

The configuration of the applet is done in XML. Answer submission and checking is done within WeBWorK; the applet is intended to aid with visualization and is not used to evaluate the student submission.

BEGIN_TEXT
$BR $BR
Find the volume of the figure shown. The cross-section of the figure is a regular $sides-sided polygon. The area of the polygon can be computed as a function of the length of a line segment from the center of the $sides-sided polygon to the midpoint of one of its sides and is given by \($sides x^2\tan\left(\frac{\pi}{$sides}\right)\) where \(x\) is the length of the bisector of one of the sides (shown in black on the cross-section graph). A formula similar to the cylindrical shells formula will then provide the volume of the figure. Simply replace \(\pi\) in the formula \[V=2\pi\int x f(x) dx\] with \($sides \tan\left(\frac{\pi}{$sides}\right)\) to find the volume of the solid shown where for this solid \[f(x)=\begin{cases}x&x\le $a\\ $b-x&$a<x\le $b\end{cases}\] for \(x=0\) to \($b\). \{ans_rule(35) \}
$BR
END_TEXT
Context()->normalStrings;

This is the text section of the problem, the part that is shown to the student.

####################################
#
# Answers
#
## answer evaluators
ANS( $correctAnswer->cmp() );
ENDDOCUMENT();

This is the answer and solution section of the problem.
I would like to create a special frame around my equations. This frame should obey the golden ratio rule, meaning that its width divided by its height should give 1.61803398875. If possible, the code should adapt to choose the long side: for example, for a long column vector, the height would be 1.61803398875 times the width. I've found the following code in another question in this forum (here). It uses TikZ, but I can't figure out how to modify it to obtain a golden-ratio frame around my equations...

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{tikz}
\usetikzlibrary{calc}
\usetikzlibrary{shapes}
\makeatletter
\newdimen\@myBoxHeight%
\newdimen\@myBoxDepth%
\newdimen\@myBoxWidth%
\newdimen\@myBoxSize%
\newcommand{\SquareBox}[2][]{%
  \settoheight{\@myBoxHeight}{#2}% Record height of box
  \settodepth{\@myBoxDepth}{#2}% Record depth of box
  \settowidth{\@myBoxWidth}{#2}% Record width of box
  \pgfmathsetlength{\@myBoxSize}{max(\@myBoxWidth,(\@myBoxHeight+\@myBoxDepth))}%
  \tikz \node [shape=rectangle, shape aspect=1,draw=red,inner sep=2\pgflinewidth, minimum size=\@myBoxSize,#1] {#2};%
}%
\makeatother
\begin{document}
\SquareBox{I}
\SquareBox{y}
\SquareBox[thick, dashed]{long text}
\SquareBox[draw=blue]{longer text}
\SquareBox[draw=blue, thick, fill=yellow]{$e = mc^2$}
\SquareBox[draw=black, thick, fill=yellow!10, rounded corners=2pt]{$\displaystyle \int_{-\dfrac{\pi}{2}}^{\dfrac{\pi}{2}} dx $}
\end{document}
```

Thanks for your help in advance.
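One possible adaptation (an untested sketch; the \GoldenBox name and the branching logic are my additions): measure the content's width and total height, decide which side is the long one, enlarge the long side if needed so the ratio is exactly 1.61803398875, and pass the two lengths as minimum width/minimum height. I use inner sep=0pt so the padding does not spoil the ratio; adjust to taste.

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{tikz}
\makeatletter
\newdimen\@myBoxHeight
\newdimen\@myBoxDepth
\newdimen\@myBoxW
\newdimen\@myBoxH
\newcommand{\GoldenBox}[2][]{%
  \settoheight{\@myBoxHeight}{#2}%
  \settodepth{\@myBoxDepth}{#2}%
  \settowidth{\@myBoxW}{#2}%
  \pgfmathsetlength{\@myBoxH}{\@myBoxHeight+\@myBoxDepth}%
  \ifdim\@myBoxW>\@myBoxH
    % wide content: width is the long side, height = width/phi
    \pgfmathsetlength{\@myBoxW}{max(\@myBoxW,1.61803398875*\@myBoxH)}%
    \pgfmathsetlength{\@myBoxH}{\@myBoxW/1.61803398875}%
  \else
    % tall content (e.g. a column vector): height is the long side
    \pgfmathsetlength{\@myBoxH}{max(\@myBoxH,1.61803398875*\@myBoxW)}%
    \pgfmathsetlength{\@myBoxW}{\@myBoxH/1.61803398875}%
  \fi
  \tikz \node [rectangle, draw=red, inner sep=0pt,
               minimum width=\@myBoxW, minimum height=\@myBoxH, #1] {#2};%
}
\makeatother
\begin{document}
\GoldenBox[thick]{$e = mc^2$}\quad
\GoldenBox[draw=blue]{$\begin{matrix}1\\2\\3\\4\end{matrix}$}
\end{document}
```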
To show that $\Bbb R^d$ with the norm $\Vert x \Vert_1 = \displaystyle \sum_1^d \vert x_i \vert, \tag 1$ where $x = (x_1, x_2, \ldots, x_d) \tag 2$ is Banach, we need to prove it is Cauchy-complete with respect to this norm; that is, if $y_i \in \Bbb R^d \tag 3$ is a $\Vert \cdot \Vert_1$-Cauchy sequence, then there exists $y \in \Bbb R^d \tag 4$ with $y_i \to y \tag 5$ in the $\Vert \cdot \Vert_1$ norm.

Now if $y_i$ is $\Vert \cdot \Vert_1$-Cauchy, for every real $\epsilon > 0$ there exists $N \in \Bbb N$ such that, for $m, n > N$, $\Vert y_m - y_n \Vert_1 < \epsilon; \tag 6$ if we re-write this in terms of the definition (1) we obtain $\displaystyle \sum_{k = 1}^d \vert y_{mk} - y_{nk} \vert < \epsilon, \tag 7$ and we observe that, for every $l$, $1 \le l \le d$, this yields $\vert y_{ml} - y_{nl} \vert \le \displaystyle \sum_{k = 1}^d \vert y_{mk} - y_{nk} \vert < \epsilon; \tag 8$ thus the sequence $y_{ml}$ for fixed $l$ is Cauchy in $\Bbb R$ with respect to the usual norm $\vert \cdot \vert$; and since $\Bbb R$ is Cauchy-complete with respect to $\vert \cdot \vert$ we infer that for each $l$ there is a $y^\ast_l$ with $y_{ml} \to y_l^\ast \tag 9$ in the $\vert \cdot \vert$ norm on $\Bbb R$; thus, taking $N$ larger if necessary, we have $\vert y_{ml} - y_l^\ast \vert < \dfrac{\epsilon}{d}, \; 1 \le l \le d, \; m > N; \tag{10}$ setting $y^\ast = (y_1^\ast, y_2^\ast, \ldots, y_d^\ast) \in \Bbb R^d, \tag{11}$ we further have $\Vert y_m - y^\ast \Vert_1 = \displaystyle \sum_{l = 1}^d \vert y_{ml} - y_l^\ast \vert < d \, \dfrac{\epsilon}{d} = \epsilon, \tag{12}$ that is, $y_m \to y^\ast \tag{13}$ in the $\Vert \cdot \Vert_1$ norm on $\Bbb R^d$; thus $\Bbb R^d$ is $\Vert \cdot \Vert_1$-Cauchy complete; hence $\Vert \cdot \Vert_1$-Banach.
Comet Ison passing Mars (Wolfram Alpha)

I just got back from the Black Forest Star Party where Dr. Carey Lisse, head of NASA's ISON Observing Campaign, gave a speech on comet ISON. Comet ISON (see [1], [2]) is passing by Mars soon (Oct 1) and it will be "grazing the sun" before the end of the year, so I wondered if there was some relationship between the orbital period of a planet and the time it takes a passing comet to go from the planet to the Sun. Turns out there is a relationship. Here's the approximate rule:

Time to sun $\approx$ Orbital Period / $ 3 \pi \sqrt{2} \approx $ Orbital Period / 13.3!

In the case of ISON and Mars, Time to sun $\approx$ 687 days / 13.3 $\approx$ 52 days. But Oct 1 + 52 days is Nov 22, and perihelion is on Nov 28. Why so far off? Well, it turns out that Mars is farther from the sun than usual. If we correct for that, then the formula estimates perihelion to within 1 day—much better.

For those that like derivations, here is the derivation for the 13.3 rule. The orbital period of a planet is $T_p = 2 \pi \sqrt{ r^3 \over {G m_S} }$ where $m_S$ is the mass of the Sun, $r$ is the radius of the planet's orbit (or, more precisely, the semi-major axis of its orbit), and $G = 6.67\times10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}$ is the gravitational constant. The speed of a comet from the Oort cloud is derived from its energy: since it falls from so far away, its total energy is approximately zero, so Kinetic Energy = $-$Potential Energy, $ \frac12 m v^2 = G m m_S / r$, $v = \sqrt{ {2 G m_S}\over{r}} $ where $r$ is the distance from the comet to the sun. So the time for a Sun grazer to go from distance $r_0$ to the sun is about $$T_S = \int_0^{r_0} {1\over v} dr = \int_0^{r_0} \sqrt{ r\over{2 G m_S}} dr = \frac13 \sqrt{ 2 r_0^3 \over{G m_S} }.$$ Finally, $$T_p/T_S = {{2 \pi \sqrt{ r^3 \over {G m_S} }}\over{ \frac13 \sqrt{ 2 r^3 \over{G m_S} }}} = 3 \pi \sqrt{2} \approx 13.3.$$
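In code, the rule is a one-liner (a quick check using the numbers quoted above):

```python
from math import pi, sqrt

RULE = 3 * pi * sqrt(2)       # ~13.33

mars_period = 687             # Mars's orbital period in days
print(mars_period / RULE)     # ~51.5 days, the "52 days" quoted above
```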
Suppose we have a function defined as follows: $\alpha(f,x_o)=\lim_{n\to\infty}\sup\{|f(a)-f(b)|:a,b\in (x_o-\frac{1}{n},x_o+\frac{1}{n})\}$ with $f:\mathbb{R}\rightarrow \mathbb{R}$ and $x_o \in \mathbb{R}$ (the supremum is nonincreasing in $n$, so the limit exists). I need to prove that $\alpha=0$ iff $f$ is continuous at $x_o$. It seems trivial to show the $\alpha=0 \Rightarrow$ direction: if the limit is $0$, then for any $\epsilon>0$ the oscillation $|f(a)-f(b)|$ is below $\epsilon$ on a small enough interval, and taking $b=x_o$ gives continuity at $x_o$. However, I'm really drawing a blank on the other direction. How can I prove that continuity of $f$ implies that $\alpha=0$? It seems like I should use the $\epsilon$-$\delta$ definition of continuity, since I'm working with $|f(a)-f(b)|$, but I think the $\limsup$ stuff is throwing me off the scent. Any help would be much appreciated!
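Not an answer, but a numerical illustration of the two directions (my own sketch; the sup is estimated on a finite grid, which is good enough for a picture of the behaviour):

```python
import numpy as np

def oscillation(f, x0, n, samples=10_001):
    """Estimate sup |f(a)-f(b)| for a, b in (x0 - 1/n, x0 + 1/n):
    for real-valued f this is just max(f) - min(f) on the interval."""
    xs = np.linspace(x0 - 1/n, x0 + 1/n, samples)
    return f(xs).max() - f(xs).min()

step = lambda x: (x >= 0).astype(float)     # discontinuous at 0
for n in (10, 100, 1000):
    print(n, oscillation(np.sin, 0.5, n), oscillation(step, 0.0, n))
# The first column shrinks towards 0 (sin is continuous at 0.5);
# the second stays at 1.0, so alpha(step, 0) = 1, not 0.
```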
All of them (and maybe lots or even all of the LIGO folks) are missing that the two LIGO detectors aren't predicted to see signals that are exactly proportional to each other. Someone I can't identify uncritically promotes a combative Danish paper in his or her article; it is all about a fresh paper. ATdotDE and Telescoper claim to be agnostic. Has everyone lost his or her mind? Please give me a break.

When the first LIGO discovery was released in February 2016, I downloaded the raw data from both LIGO detectors sampled at 4096 Hz, wrote and ran all the required codes in Mathematica (which did especially the filtering of frequencies and the whitening), and discovered my own LIGO gravitational wave. I am absolutely certain that the probability that such clear signal-like events occur by chance is negligible. Also, gravitational waves are the only conceivable signal that can make the delay between the two detectors this small. Any seismic or similar process would almost certainly lead to a much longer delay, basically because the vibrations would spread through the Earth at the speed of sound etc.

Andrew Jackson et al. are saying that "something is wrong" because "the residual noise from the LIGO-LA and LIGO-WA detectors is correlated and has the same delay" but this correlation between these two "noises" shouldn't exist. Except that this statement is wrong. The residuals aren't just noise. They're the difference between the best fit and the actual observation. But the best fit isn't the same thing as the actual gravitational wave.

In particular, as I have emphasized from the very beginning, the two detectors – LIGO-LA and LIGO-WA – are directed in somewhat different directions, so they basically observe different polarizations of the gravitational wave. Because that gravitational wave is rather general – it is neither circularly polarized, nor linearly polarized – the two detectors that basically measure the linearly polarized components in two different directions must yield somewhat different signals. If the incoming gravitational wave were linearly polarized, the two signals could be proportional to each other with no extra phase delay, as on the animations above. If the incoming gravitational wave were circularly polarized, the two detectors' signals would look delayed by a phase and in principle, one should say that the two functions are about as independent as the sine and cosine (times a modulating function). In reality, the actual incoming gravitational wave is something in between. It is rather well approximated by a linearly polarized wave, but it is not quite linearly polarized, so the two signals aren't proportional to each other.

Just an example: Imagine that you have some simple wave modulated by a Gaussian that is "circularly polarized" and described by the complex exponential\[ \exp(i\omega t - ct^2). \] Well, if you measure this wave in the two detectors, assuming that they're rotated by 45 degrees for the sake of simplicity, they will produce the real and imaginary parts of the function above, respectively. Those two functions – i.e. the Washington and Louisiana signals – will be\[ e^{-ct^2} \cos\omega t, \quad e^{-ct^2} \sin\omega t \] and these two functions aren't proportional to each other, not even if you shift them by an arbitrary time delay. It's easy to see why: the first function is even around its "center of mass" while the second one is odd around its "center of mass". Any nontrivial linear combination of these two functions will be signal-like!
That's true even if you shift them by a time delay to "make them look proportional to each other". Some time shift may approximately and temporarily "mimic the phase delay", which is \(\pi/2\) in my example, but the fixed time delay can't replace the phase delay exactly because the frequency of the gravitational wave is changing with time or is modulated (i.e. because the wave isn't monochromatic and periodic). The replacement is only approximately good if the modulating frequency of the signal and the rate of the frequency change are much slower than the peak frequencies of the signal. To put it mathematically:\[ \forall (A,B)\in \RR^2 \setminus \{(0,0)\}, \,\,\forall \Delta t\in \RR:\\ A e^{-ct^2} \cos\omega t + B e^{-c(t-\Delta t)^2} \sin [\omega (t-\Delta t)] \not\equiv 0 \] as a function of \(t\). I have absolutely no doubt that the signal is real and comes from a black hole merger. But I am not so sure whether LIGO members actually understand that the two LIGO detectors' signals aren't supposed to be proportional to each other because the incoming gravitational wave isn't linearly polarized – and for a circularly polarized wave, the non-parallel detectors detect the phase-shifted parts of the incoming signals, like the real and imaginary parts of a complex wave, and the actual polarization is "somewhat elliptic and general", so everything that can be nonzero is nonzero.

I have always been worried that LIGO – and maybe every single person who has ever written a paper predicting the waves in two non-parallel detectors – makes this trivial mistake, because they talked about things like the "inverted signal", which seems to assume that the LIGO-LA signal should be equal to the LIGO-WA signal times a positive number, or a negative number. But none of these choices is true in the real-world, generic case. Instead, they detect two parts of an elliptically, generally polarized gravitational wave. So it's completely obvious that the signals aren't supposed to be proportional to each other, and if you just compute the optimal linear combination of the two signals that should be noise-like, it simply won't be noise-like but will contain a part of the signal in the same period of time – basically the linear polarization of the signal with respect to a different axis than the one that maximizes the signal's strength.

Dear LIGO folks, be sure that you have discovered the gravitational waves from a black hole merger, but if I have pointed out something that you really did in a sloppy way by neglecting the general, non-linear polarization of the incoming wave, I urge you to redo the theoretical predictions and comparisons with the experiments, do it before your gurus are awarded the physics Nobel prize in Fall 2017, and thank me in the Nobel prize lecture. Thank you very much, Ladies and Gentlemen.
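Here is a small numerical illustration of that claim (my own toy example, not LIGO data): slide the Gaussian-modulated sine against the Gaussian-modulated cosine and record the best normalized overlap; it never reaches 1, and the leftover fraction of power is exactly the part that would stay signal-like in the "residual":

```python
import numpy as np

t = np.linspace(-10, 10, 4001)
c, w = 0.5, 3.0
f = np.exp(-c * t**2) * np.cos(w * t)   # toy "Washington" signal
g = np.exp(-c * t**2) * np.sin(w * t)   # toy "Louisiana" signal

best = 0.0
for shift in range(-400, 401):          # time delays up to +-2 units
    gs = np.roll(g, shift)              # wrap-around is negligible here
    r = abs(np.dot(f, gs)) / (np.linalg.norm(f) * np.linalg.norm(gs))
    best = max(best, r)
print(best, 1 - best**2)  # best < 1: no delay makes them proportional
```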
I am trying to build a public hash function (thus collision-resistant and preimage-resistant, and more generally behaving like a random oracle), with input a message $M$ of fixed size $|M|=m\cdot b$ bits, and output the hash $H(M)$ of fixed size $|H(M)|=h\cdot b$ bits, using as the single primitive a $b$-bit block cipher with key of $k\cdot b$ bits, operated in CBC-encryption mode.

One of the motivations of the construction is to maintain the confidentiality of the message under DPA attacks, assuming that the implementation of the cipher in CBC-encryption mode is DPA-protected. I thus require that all message-dependent data is manipulated using the $P$ and $K$ inputs of that trusted primitive: $$F(K,IV,s,P)\to C\text{ with }|K|=k\cdot b,|IV|=b,s>0,|P|=|C|=s\cdot b,$$ $$C[0]=\text{ENC}_K(P[0]\oplus IV), \quad C[j]=\text{ENC}_K(P[j]\oplus C[j-1])\text{ if }1\le j<s.$$

I would like security beyond that of the underlying block cipher by a security parameter $q$, with an argument that any attack requires $2^{b\cdot q}$ queries to an encryption or decryption oracle implementing $\text{ENC}$ and the corresponding decryption, for some suitable definition of security/attack which, uh, is part of the question (in particular it seems necessary to restrict the amount of memory usable by the attacker, including but not limited to use as a cache for the oracle). Performance is secondary (one envisioned application is a slow KDF), a security argument (or better, proof) is a must, simplicity matters. Assume there is enough memory for the whole message, and then some. Assume at least $\min(b,k\cdot\lceil b/2\rceil)$ bits of the block cipher's key are effective (which rules out single DES). A typical setup could be 3DES, thus $b=64$, $k=3$, and $m=64$ ($4096$-bit input), $h=8$ ($512$-bit output), $q=4$ (security of $2^{256}$ block cipher queries or memory accesses). Assume $m\ge h>1$, and if that helps assume $m\gg h$ or/and $h\gg k$ or/and $k=1$ (e.g. AES-128, ignoring attacks marginally reducing its effective key size).

What appropriate definition(s) of security are applicable? I apologize for the late realization that this is an issue, and the corresponding introduction of the security parameter $q$. I now think I want a collision-resistant Universal One-Way Hash Function family per this terminology. Is there some standard construction that fits? My review of the existing literature found a few hashes made from block ciphers, but suitable for $h\le2$ only, not neatly constructible from my primitive when $h>1$, and aiming at efficiency rather than a strong security argument.

If nothing standard exists, my simplest candidate is the following CBC-HASH: assume an arbitrary public parameter $P$ of $p\ge k$ blocks whose right $k$ blocks are suitable as an initial key for the cipher ($P$ parameterizes the UOWHF family; in the simplest/original setup, $p=k$); append $P$ to message $M$ forming $X_0$ of $m+p$ blocks $X_0[j]$, that is $M[j]\to X_0[j]\text{ if }0\le j<m$, $P[j]\to X_0[m+j]\text{ if }0\le j<p$; repeat for $n$ rounds, numbered $r$ from $0$ to $n-1$: set $K_r$ as the right $k$ blocks of $X_r$, that is $X_r[m+p-k+j]\to K_r[j]\text{ if }0\le j<k$; CBC-encipher $X_r$ using key $K_r$ and the $b$-bit big-endian representation of $r$ as $IV$, giving $X_{r+1}$, that is $F(K_r,(r)_b,m+p,X_r)\to X_{r+1}$; set the output $H=H(M)$ as the right $h$ blocks of the left $m$ blocks of $X_n$, that is $X_n[m-h+j]\to H[j]\text{ if }0\le j<h$.
Now I am wondering how the number of rounds $n$ and the size $p$ of the initial padding shall be chosen from $q$, $h$, $m$, $k$ (perhaps $b$), and the limits on parameters for a security argument. Can you justify, prove, improve, break, or fix CBC-HASH? The scheme now incorporates parameter $p$ controlling the size of the UOWHF family. I retract an earlier guess on $n$ made without even a clear security definition. I hereby vow to stop modifying this question except for fixing obvious typos; any addition will be an answer.
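For concreteness, here is a minimal Python sketch of the CBC-HASH candidate (my own rendering of the steps above, instantiated with AES-128 so that \(k=1\) and \(b=128\); it uses pycryptodome and only mirrors the data flow, with no claim to being hardened or DPA-protected):

```python
from Crypto.Cipher import AES

B = 16  # block size b in bytes

def cbc_hash(M, P, h, n):
    """M: message of m blocks, P: public parameter of p >= 1 blocks,
    h: output size in blocks, n: number of rounds."""
    assert len(M) % B == 0 and len(P) % B == 0
    X = M + P                                  # X_0 = M || P
    for r in range(n):
        K = X[-B:]                             # right k=1 blocks as round key
        iv = r.to_bytes(B, 'big')              # (r)_b as the IV
        X = AES.new(K, AES.MODE_CBC, iv).encrypt(X)
    m = len(M) // B
    return X[(m - h) * B : m * B]              # right h of the left m blocks

digest = cbc_hash(b'\x00' * (64 * B), b'\x42' * B, h=4, n=16)
print(digest.hex())
```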
Practice Paper 3 Question 19

Let \(A=\{0, 1, \dots, 2^n-1, 2^n\}\) and \(B = \{0, \dots, n\}\). How many functions \(g\) can be defined from \(A\) to \(B\) such that both of the following conditions hold: for all \(x \in B\) we have \(g(2^x) = x,\) for all \(y, z \in A\) with \(y \leq z\) we have \(g(y) \leq g(z).\)

Warm-up Questions

How many functions \(f\) can be defined from \(\{0,1, \ldots, n\}\) to \(\{1,2,3\}?\)

Hints

Hint 1 Try expressing constraint (2) in simple words.
Hint 2 What can one say about an interval from \(2^x\) to \(2^{x+1}\) for an arbitrary \(x \in B?\)
Hint 3 Since \(g(2^x)=x,\) \(g(2^{x+1})=x+1\) and \(g\) is a non-decreasing function, what values could inputs \(2^x, \ldots, 2^{x+1}\) take?
Hint 4 Notice that input values in the interval \(2^x\) to \(2^{x+1}\) have output values \(x\) or \(x+1\) and \(g\) is a non-decreasing function. There must be a value at which the output is incremented. How many such values are there?

Solution

To calculate the number of functions, we should consider how many ways there are to choose an output for each input. First, notice that the second condition means that \(g\) needs to be a non-decreasing function. Now, let's focus on the interval from \(2^x\) to \(2^{x+1}.\) Since \(g(2^x) = x\) and \(g(2^{x+1}) = x+1,\) there is exactly one value at which the output is incremented. There are \(2^x\) input values in this interval at which this could happen. To obtain the final answer we need to multiply the number of possibilities over all the intervals, which is: \[\prod_{i=0}^{n-1} 2^i = 2^{\sum_{i=0}^{n-1} i} = 2^{\frac{n(n-1)}{2}}.\]
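For small \(n\) the count can be verified by brute force (a quick check):

```python
from itertools import product

def count_valid(n):
    size = 2**n + 1                      # |A|, inputs 0..2^n
    total = 0
    for g in product(range(n + 1), repeat=size):   # g[a] = output on input a
        if all(g[i] <= g[i + 1] for i in range(size - 1)) \
           and all(g[2**x] == x for x in range(n + 1)):
            total += 1
    return total

for n in (1, 2, 3):
    print(n, count_valid(n), 2**(n * (n - 1) // 2))   # the counts agree
```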
Two of our best solvers, Alex from Stoke on Trent Sixth Form College and Patrick from Woodbridge School, thought about this implicit equation. $$r^2X^2-rX-r+1=0$$

Patrick looked at the first parts using the quadratic equation formula as follows:

Part 1: If $r=1$ then $X^2-X = 0$, so $X(X-1) = 0$ and $X = 0$ or $1$. If $r = 100$ then $100^2X^2 - 100X - 100 + 1 = 0$ so $100^2X^2-100X-99 = 0$ so $$ X = \frac{100\pm \sqrt{100^2+4\times 100^2\times 99}}{ 2\times 100^2} $$ This is real as we neither divide by $0$ nor take a negative square root.

Part 2: If $r = 0$ then we find the equation gives $1=0$, so $r$ cannot equal $0$. Substituting $r^2$ for $a$, $-r$ for $b$ and $-(r-1)$ for $c$ in the usual quadratic equation formula we get $$ X = \frac{r\pm\sqrt{r^2+4r^2(r-1)}}{2r^2}=\frac{1\pm\sqrt{1+4r-4}}{2r} = \frac{1\pm\sqrt{4r-3}}{2r} $$ So this takes real values if and only if $4r - 3 \geq 0$.

Alex realised that the key feature of this problem was the discriminant of the quadratic equations in $X$, and looked at that object directly: The discriminant determines whether a quadratic has real roots, therefore $X(r)$ will have real values if and only if the discriminant $D$ of the quadratic is zero or positive. $$ \begin{eqnarray} D&\geq 0&\cr \Rightarrow (-r)^2 - 4(r^2)(1-r) &\geq& 0 \cr \Rightarrow r^2 - 4r^2 + 4r^3 &\geq& 0\cr \Rightarrow 4r^3 &\geq& 3r^2\cr \Rightarrow r &\geq& 3/4 \end{eqnarray} $$ (dividing by $r^2>0$, since $r\neq0$). So, $X(r)$ has real values for $r\geq 3/4$. This means that $X(1)$ and $X(100)$ have real values, but $X(-1)$ does not. $X(0)$ is undefined because the corresponding equation $(1=0)$ has no solutions; it is a contradiction.

Alex also correctly deduced the maximum and minimum values of $X(r)$: The maximum value of $X(r)$ is $X(1) = 1$. The minimum value of $X(r)$ is $X(3) = -1/3$. The graph has an asymptote $r = 0$. Alex submitted a plot of the associated curve, which shows two distinct branches.

Steve solved this problem in the following way:

Part 1: $$X = \frac{-b \pm \sqrt{b^2 - 4ac}} {2a}$$ $$X = \frac {1 \pm \sqrt{4r - 3}}{2r}$$ $X$ is therefore real for all $r$ greater than or equal to $\frac{3}{4}$. Hence $r = 0$ and $-1$ give complex roots of $X$ but $1$ and $100$ give real roots.

Part 2: Asymptote at $r = 0$. As $r \to \infty$, $X \to 0$.

Part 3: $$\frac{\mathrm{d}X}{\mathrm{d}r} = \frac{\mathrm{d}}{\mathrm{d}r} \left[ \frac{1}{2r} \left(1 \pm \sqrt{4r - 3} \right) \right] = \frac{\mp(4r-6) - 2\sqrt{4r-3}}{4r^2 \sqrt{4r-3}}$$ Setting $\displaystyle \frac{\mathrm{d}X}{\mathrm{d}r} = 0$ we find $r = 1,\ 3$
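A quick sympy check of the stationary points (my own addition):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
Xp = (1 + sp.sqrt(4*r - 3)) / (2*r)      # upper branch
Xm = (1 - sp.sqrt(4*r - 3)) / (2*r)      # lower branch

# The upper branch peaks at r = 1 with value 1; the lower branch
# bottoms out at r = 3 with value -1/3, as found above.
print(sp.solve(sp.Eq(sp.diff(Xp, r), 0), r), Xp.subs(r, 1))   # [1], 1
print(sp.solve(sp.Eq(sp.diff(Xm, r), 0), r), Xm.subs(r, 3))   # [3], -1/3
```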
A brief post today: I was talking about an algebraic topology problem from Hatcher's book (available freely on his website) with two of my colleagues. In short, we were finding the fundamental group of some terrible space, and we thought that there might be a really slick, almost entirely algebraic way to do the problem. We had a group $G$ and the exact sequence $0 \to \mathbb{Z} \to G \to \mathbb{Z} \to 0$, in short, and we wondered what we could say about $G$.

Before I go on, I mention that we had been working on things all day, and we were a bit worn. So the calibre of our techniques had gone down. In particular, we could initially think of only two examples of such a $G$, and we could show one of them didn't work. Of the five of us there, two of us thought that there might be a whole family of nonabelian groups that we were missing, but we couldn't think of any. And if none of us could think of any, could there be any? At the time, we decided no, more or less. So $G \cong \mathbb{Z} \times \mathbb{Z}$, which is what we wanted in the sense that we sort of knew this was the correct answer. As is often the case, it is very easy to rationalize poor work if the answer that results is the correct one. We later made our work much better (in fact, we can now show that our group in question is abelian, or calculate it in a more geometric way). But this question remained – what counterexamples are there? There are nonabelian groups satisfying that exact sequence! But I'll leave this question for a bit – Find a nonabelian group (or family of groups) that satisfies $0 \to \mathbb{Z} \to G \to \mathbb{Z} \to 0$

The second quick problem of this post. It's found in Ahlfors – Find a closed form for $\displaystyle \sum_{n \in \mathbb{Z}} \frac{1}{z^3 - n^3}$. When I first had to do this, I failed miserably. I had all these divergent series about, and it didn't go so well. I tried to factor $z^3 - n^3 = (z - n)(z - \omega n)(z - \omega^2 n)$, use partial fractions, and go. And... it's not so fruitful. You get three terms, each of which diverges (if taken independently from the others) for a given $z$. And you can do really possibly-witty things, like find functions that have the same poles and try to match the poles, and such. But the divergence makes things hard to deal with. But if you do $-(n^3 - z^3) = -[(n-z)(n-\omega z)(n - \omega^2 z)]$, everything works out very nicely. That's the thing with complex numbers – the 'natural factorization' may not always be unique.
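For completeness, here is where that second factorization leads (my own computation, worth double-checking): with $\omega = e^{2\pi i/3}$, partial fractions in $n$ give residues $\frac{1}{3z^2}$, $\frac{\omega}{3z^2}$, $\frac{\omega^2}{3z^2}$ at $n = z, \omega z, \omega^2 z$, and summing each term with the standard expansion $\sum_{n\in\mathbb{Z}} \frac{1}{w-n} = \pi\cot \pi w$ (as a principal value) yields

$$\frac{1}{z^3-n^3} = \frac{1/(3z^2)}{z-n}+\frac{\omega/(3z^2)}{\omega z-n}+\frac{\omega^2/(3z^2)}{\omega^2 z-n}, \qquad \sum_{n\in\mathbb{Z}}\frac{1}{z^3-n^3}=\frac{\pi}{3z^2}\left[\cot\pi z+\omega\cot(\pi\omega z)+\omega^2\cot(\pi\omega^2 z)\right].$$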
Karle, Isabella L and Flippen-Anderson, Judith L and Agarwalla, Sanjay and Balaram, Padmanabhan (1991) Crystal structure of [Leu1]zervamicin, a membrane ion-channel peptide: Implications for gating mechanisms. In: Proceedings of the National Academy of Sciences of the United States of America, 88 (12). pp. 5307-5311.

Abstract

Structures in four different crystal forms of [Leu1]zervamicin (zervamicin Z-L, Ac-Leu-Ile-Gln-Iva-Ile5-Thr-Aib-Leu-Aib-Hyp10-Gln-Aib-Hyp-Aib-Pro15-Phol, where Iva is isovaline, Aib is α-aminoisobutyric acid, Hyp is 4-hydroxyproline, and Phol is phenylalaninol), a membrane channel-forming polypeptide from Emericellopsis salmosynnemata, have been determined by x-ray diffraction. The helical structure is amphiphilic with all the polar moieties on the convex side of the bent helix. Helices are bent at Hyp10 from ≈30° to ≈45° in the different crystal forms. In all crystal forms, the peptide helices aggregate in a similar fashion to form water channels that are interrupted by hydrogen bonds between N–H (Gln11) and Oδ (Hyp10) of adjacent helices. The Gln11 side chain is folded in an unusual fashion in order to close the channel. Space is available for an extended conformation of Gln11, in which case the channel would be open, suggesting a gating mechanism for cation transport. Structural details are presented for one crystal form derived from methanol/water solution: C85H140N18O22·10H2O, space group P21, a = 23.068(6) Å, b = 9.162(3) Å, c = 26.727(9) Å, β = 108.69(2)° (the standard deviation of the last digit is given in parentheses); overall agreement factor R = 10.1% for 5322 observed reflections [|Fo| > 3σ(F)]; resolution, 0.93 Å.

Item Type: Journal Article
Additional Information: Copyright for this article belongs to the National Academy of Sciences.
Keywords: X-ray diffraction; channel mouth; helix bending; polymorphs; alpha-alkyl amino acid
Department/Centre: Division of Biological Sciences > Molecular Biophysics Unit
URI: http://eprints.iisc.ac.in/id/eprint/2511
Answer

$\theta=30^\circ$

Work Step by Step

Use a scientific calculator's inverse cosine function in degree mode to obtain: $\theta = \arccos{(\frac{\sqrt3}{2})} \\\theta=30^\circ$
It is well known that a parametric form of the parabola $y^2=4ax$ is $(at^2, 2at)$. What are possible parametric forms of the general parabola $$(Ax+Cy)^2+Dx+Ey+F=0$$ ?

Bézier curves are a convenient way to produce parameterizations of parabolas: a quadratic Bézier is a (part of a) parabola. If $P_0$ and $P_2$ are points on the parabola and $P_1$ the intersection of the tangents at those points, the quadratic Bézier curve they define is given by $$\phi:t\mapsto(1-t)^2P_0+2t(1-t)P_1+t^2P_2.\tag{1}$$ (The parameter $t$ is usually taken to range from $0$ to $1$ for a Bézier curve.) We can reproduce your parametrization by taking the vertex $P_0(0,0)$ and an end of the latus rectum $P_2(p,2p)$ as the points on the parabola. (Here I use the conventional name $p$ for this parameter instead of the $a$ in the question.) The tangent at the end of the latus rectum meets the parabola's axis at a 45° angle, so our third control point will be $P_1(0,p)$. Plugging these into (1) we get $$(1-t)^2(0,0)+2t(1-t)(0,p)+t^2(p,2p)=(pt^2,2pt),$$ as required. As described here, parametrization of a parabola by a pair of quadratic polynomials has a nice symmetry about the vertex. Choosing the vertex as our first control point makes this symmetry quite simple.

To obtain the corresponding parameterization for a general parabola, you can either rotate and translate these three points to match the position and orientation of the given parabola, or compute them from other information that you have about the parabola. For example, if we have a parabola with vertex $P_0(x_0,y_0)$, focal length $p$ and axis direction $\theta$, we will have $P_1=P_0+(-p\sin\theta,p\cos\theta)$ and $P_2=P_0+(p\cos\theta-2p\sin\theta,2p\cos\theta+p\sin\theta)$, which gives the parameterization $$\begin{align}x&=x_0-2pt\sin\theta+pt^2\cos\theta \\ y&= y_0+2pt\cos\theta+pt^2\sin\theta.\end{align}$$ I'll leave working out this parameterization for the general-form equation to you. As a hint, remember that for the parabola $y=ax^2+bx+c$, $p={1\over4a}$, and that a parabola's vertex is halfway between its focus and directrix.

This solution to my other question on the axis of symmetry of a general parabola gives the following:

Axis of symmetry: $$Ax+Cy+t^*=0$$ Tangent at vertex: $$(D-2At^*)x+(E-2Ct^*)y+F-{t^*}^2=0$$ where $t^* \left(=\frac {AD+CE}{2(A^2+C^2)}\right)$ is chosen for the two lines to be perpendicular. Solving for the intersection of the two lines gives the coordinates of the vertex as $$\left(-\frac{C{t^*}^2-Et^*+CF}{CD-AE}, \frac{A{t^*}^2-Dt^*+AF}{CD-AE}\right)$$ Replacing $t^*$ with the general parameter $t$ gives a parametric form for the general parabola $(Ax+Cy)^2+Dx+Ey+F=0$ as $$\color{red}{\left(-\frac{Ct^2-Et+CF}{CD-AE}, \frac{At^2-Dt+AF}{CD-AE}\right)}$$ which is the same as $$\color{red}{\left(\frac{Ct^2-Et+CF}{AE-CD}, -\frac{At^2-Dt+AF}{AE-CD}\right)}$$

For the special case where $A=C$: $$t^*=\frac {D+E}{4A}$$ Axis of Symmetry: $$Ax+Ay+\frac {D+E}{4A}=0$$ or $$x+y+\frac {D+E}{4A^2}=0$$ Vertex: $$\left(\frac{{t^*}^2-\frac EA t^*+F}{E-D}, -\frac{{t^* }^2-\frac DA t^*+F}{E-D}\right)$$
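The red parametrization can be verified symbolically in a couple of lines (assuming $CD-AE\neq0$):

```python
import sympy as sp

A, C, D, E, F, t = sp.symbols('A C D E F t')

x = -(C*t**2 - E*t + C*F) / (C*D - A*E)
y =  (A*t**2 - D*t + A*F) / (C*D - A*E)

# (Ax+Cy)^2 + Dx + Ey + F vanishes identically in t:
print(sp.simplify((A*x + C*y)**2 + D*x + E*y + F))   # 0
```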
Help please! Prove $\sum \sqrt{a_n b_n}$ converges if $\sum a_n$ and $\sum b_n$ converge. I can prove that $\sum a_n b_n$ converges but couldn't for $\sum \sqrt{a_n b_n}$. Thank you. EDIT: $a_n, b_n \ge 0 \;\forall n$

Hint: $$\sqrt {AB}\le \frac{A+B}{2}\le A+B$$ because $(\sqrt A - \sqrt B)^2 \ge 0$ gives $2\sqrt{AB}\le A+B$. Now compare $\sum\sqrt{a_nb_n}$ with the convergent series $\sum(a_n+b_n)$.

Note: Something similar can be shown to hold for $A_1, ..., A_n$ and for $A_1, A_2, ...$
I discussed this topic in April 2012 in an earlier post. I claim that a theoretical physicist prefers units in which a maximum possible subset of the universal fundamental constants of Nature, such as \(\hbar,c,\epsilon_0,e,k_B,N_A\) and perhaps even \(G\), have well-known values. In theory papers, we really want to set them equal to one, but that requires one to use natural (e.g. Planck) units that are numerically different from what people are used to. If these constants are set equal to a precisely specified numerical value, it's equally good – up to some simple multiplication.

For example, the speed of light \(c\) was set to a fixed constant, \(299,792,458\,{\rm m/s}\), in the early 1980s, when the relative error in the measurement of \(c\) using the older definition improved to one part per billion, \(10^{-9}\), or so. Since that time, the accuracy with which times and distances are being measured has increased, and due to the fixed value of \(c\), this progress is automatically reflected in the accuracy of times and distances, too. In 2012, I proposed to do the same thing for \(\hbar\) and redefine one kilogram.

A theorist focusing on gravity (or quantum gravity) would ideally want to fix the value of Newton's constant \(G\), too. However, this would be bad for another, experimental reason. In the current SI units, \(G\) is only measured with the relative uncertainty \(1.2\times 10^{-4}\), which is a huge error. You surely don't want all masses that people talk about to carry this error of 0.01% caused by our inability to accurately measure the strength of gravity. So it's not time to set \(G\) to a well-defined constant (in the real world; in theory papers, we're doing it all the time). By the way, if you primarily care about the real-world accuracy – about the need to avoid spurious extra unnecessary uncertainties introduced by impractical definitions of units – you should be interested in products of powers of the fundamental constants that are known more accurately (when it comes to the relative error) than the constants themselves.

On the other hand, some other constants have been measured much more accurately, so there's no reason not to fix the appropriate constants to exact numerical values. Nature just published an article about some plans (that may be realized in 2018, so they are not imminent) to redefine one ampere.

There's nothing really wrong about fixing \(e\) to a well-defined value instead of the vacuum permeability \(\mu_0\), which is currently fixed at \(4\pi\times 10^{-7} \,{\rm (V\cdot s) / (A\cdot m)}\), a value chosen to agree with some 19th-century semi-natural CGS (Gauss') units. After all, theory papers like to set \(e=1\) or something like that – which is not incompatible with setting \(\epsilon_0=\hbar=c=1\) as long as we insert the fine-structure constant into the appropriate formulae and laws of physics. It's not bad to fix \(e\) because its value is known with a standard uncertainty of \(2\times 10^{-9}\) or so, which is pretty accurate. But I don't have anything against setting \(\mu_0=1/\epsilon_0 c^2\) to a fixed value, either. That's what the SI system is doing these days. So I don't really see any progress here. The fine-structure constant will have some uncertainty anyway – currently it is about \(3\times 10^{-11}\), which means that it is known really accurately, but it will never be known absolutely accurately (unless we derive its value from the right string vacuum).
As you can see, I am not too enthusiastic about switching away from setting the vacuum permeability \(\mu_0\) to a well-defined value (the vacuum permittivity \(\epsilon_0\) is then also fixed because \(c\) is fixed today). It seems much more important to me to get rid of the silly "international prototype kilogram", probably by fixing the value of \(\hbar\). The prepared reform of the definitions of the units is expected to redefine four units at the same moment – the kilogram, mole, ampere, and kelvin. The apparent high influence of the experimental December 2013 papers suggests that good theorists aren't really well represented in the metrological institutions.

To summarize my viewpoint, I would define the units so that \(c,\hbar,k_B,N_A,\mu_0\) are set to fixed numerical values and one second is linked to atomic clocks (the currently used cesium standard is good enough). I am convinced that gadgets measuring, and therefore "operationally defining", the quantities with an accuracy no worse than the current one could be designed and described as well. I feel that the fundamental units shouldn't be linked to some arbitrary systems used in measuring apparatuses unless it is really a superior choice – which is the case for the measurement of time via atomic clocks (and of distances as well, thanks to the fixed value of \(c\)).
Summary: I am wondering at the striking similarity between the expressions for creation/annihilation operators in terms of position and momentum operators and the expressions for sine and cosine in terms of the exponential. Hello everyone, I have noticed a striking similarity between expressions for creation/annihilation operators in terms of position and momentum operators and trigonometric expressions in terms of exponentials. In the treatment by T. Lancaster and S. Blundell, "Quantum Field Theory for the Gifted Amateur", Chapter 2, eqns. 2.9-2.13, the creation/annihilation operators for energy levels of the simple harmonic oscillator are given as ## \hat{a} = \sqrt{\dfrac{m\omega}{2\hbar}} \left(\hat{x} + \dfrac{i}{m\omega}\hat{p}\right) ## ## \hat{a}^\dagger = \sqrt{\dfrac{m\omega}{2\hbar}} \left(\hat{x} - \dfrac{i}{m\omega}\hat{p}\right) ## and the inverse formulae are ## \hat{x} =\sqrt{\dfrac{\hbar}{2m\omega}} (\hat{a} +\hat{a}^\dagger ) =\dfrac{1}{2}\sqrt{\dfrac{2\hbar}{m\omega}} (\hat{a} +\hat{a}^\dagger ) ## ## \hat{p} =-i\sqrt{\dfrac{m\omega\hbar}{2}} (\hat{a} -\hat{a}^\dagger ) =-\dfrac{im\omega}{2}\sqrt{\dfrac{2\hbar}{m\omega}} (\hat{a} -\hat{a}^\dagger ) ## Now, my observation is that the first pair of expressions has the same structure as Euler's formula ## e^{iz} = \cos(z)+i\sin(z), \;e^{-iz} = \cos(z)-i\sin(z)##, upon the substitution ##e^{iz} \rightarrow \sqrt{\dfrac{2\hbar}{m\omega}}\hat{a}, \;e^{-iz} \rightarrow \sqrt{\dfrac{2\hbar}{m\omega}}\hat{a}^{\dagger},## ## \cos(z) \rightarrow \hat{x},\; \sin(z) \rightarrow \dfrac{1}{m\omega}\hat{p}##, and the second pair of equations is recovered with the same substitution from the inverse formulae ## \cos(z) = \dfrac{1}{2}(e^{iz}+e^{-iz}), ## ## \sin(z)=\dfrac{-i}{2}(e^{iz}-e^{-iz}).## Now, I realize that the structural similarity stems from the definition ## \hat{a} \propto \hat{x}+\dfrac{i}{m\omega}\hat{p}##, but there seems to be a geometrical meaning to this. Can we indeed interpret the interplay between the position and momentum as the connection between trigonometric functions? What is the meaning of the commutation relations then? Are you familiar with any textbook treating this aspect of quantization? Thank you in advance!
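One way to see what the commutator adds to the trigonometric analogy is to check it symbolically. Below is a minimal SymPy sketch (my addition, not from the thread), with \(\hbar = m = \omega = 1\) so that \(\hat{x} = (\hat{a}+\hat{a}^\dagger)/\sqrt{2}\) and \(\hat{p} = -i(\hat{a}-\hat{a}^\dagger)/\sqrt{2}\); imposing \([\hat{a},\hat{a}^\dagger]=1\) reproduces \([\hat{x},\hat{p}]=i\), which is exactly the ingredient ordinary sines and cosines lack:

```python
# Sketch: verify [x, p] = i from x, p written in terms of a, a^dagger.
import sympy as sp

a, ad = sp.symbols('a ad', commutative=False)   # annihilation / creation

x = (a + ad) / sp.sqrt(2)
p = -sp.I * (a - ad) / sp.sqrt(2)

comm = sp.expand(x*p - p*x)                     # equals I*(a*ad - ad*a)
comm = comm.subs(a*ad, ad*a + 1)                # impose [a, ad] = 1
print(sp.expand(comm))                          # prints I, i.e. [x, p] = i
```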
1) The region \(D\) bounded by \(y = x^3, \space y = x^3 + 1, \space x = 0,\) and \(x = 1\) as given in the following figure. a. Classify this region as vertically simple (Type I) or horizontally simple (Type II). Type: Type I but not Type II b. Find the area of the region \(D\). c. Find the average value of the function \(f(x,y) = 3xy\) on the region graphed in the previous exercise. Answer: \(\frac{27}{20}\) 2) The region \(D\) bounded by \(y = \sin x, \space y = 1 + \sin x, \space x = 0\), and \(x = \frac{\pi}{2}\) as given in the following figure. a. Classify this region as vertically simple (Type I) or horizontally simple (Type II). Type: Type I but not Type II b. Find the area of the region \(D\). Answer: \(\frac{\pi}{2}\, \text{units}^2\) c. Find the average value of the function \(f(x,y) = \cos x\) on the region \(D\). 3) The region \(D\) bounded by \(x = y^2 - 1\) and \(x = \sqrt{1 - y^2}\) as given in the following figure. a. Classify this region as vertically simple (Type I) or horizontally simple (Type II). Type: Type II but not Type I b. Find the volume of the solid under the graph of the function \(f(x,y) = xy + 1\) and above the region \(D\). Answer: \(\frac{1}{6}(8 + 3\pi)\, \text{units}^3\) 4) The region \(D\) bounded by \(y = 0, \space x = -10 + y,\) and \(x = 10 - y\) as given in the following figure. a. Classify this region as vertically simple (Type I) or horizontally simple (Type II). Type: Type II but not Type I b. Find the volume of the solid under the graph of the function \(f(x,y) = x + y\) and above the region in the figure from the previous exercise. Answer: \(\frac{1000}{3}\, \text{units}^3\) 5) The region \(D\) bounded by \(y = 0, \space x = y - 1, \space x = \frac{\pi}{2}\) as given in the following figure. Classify this region as vertically simple (Type I) or horizontally simple (Type II). Type: Type I and Type II 6) The region \(D\) bounded by \(y = 0\) and \(y = x^2 - 1\) as given in the following figure. Classify this region as vertically simple (Type I) or horizontally simple (Type II). Type: Type I and Type II 7) Let \(D\) be the region bounded by the curves of equations \(y = \cos x\) and \(y = 4 - x^2\) and the \(x\)-axis. Explain why \(D\) is neither of Type I nor II. Answer: The region \(D\) is not of Type I: it does not lie between two vertical lines and the graphs of two continuous functions \(g_1(x)\) and \(g_2(x)\). The region is not of Type II: it does not lie between two horizontal lines and the graphs of two continuous functions \(h_1(y)\) and \(h_2(y)\). 8) Let \(D\) be the region bounded by the curves of equations \(y = x, \space y = -x\) and \(y = 2 - x^2\). Explain why \(D\) is neither of Type I nor II. In exercises 9 - 14, evaluate the double integral \(\displaystyle \iint_D f(x,y) \,dA\) over the region \(D\).
9) \(f(x,y) = 1\) and \(D = \big\{(x,y)| \, 0 \leq x \leq \frac{\pi}{2}, \space \sin x \leq y \leq 1 + \sin x \big\}\) Answer: \(\frac{\pi}{2}\) 10) \(f(x,y) = 2\) and \(D = \big\{(x,y)| \, 0 \leq y \leq 1, \space y - 1 \leq x \leq \arccos y \big\}\) 11) \(f(x,y) = xy\) and \(D = \big\{(x,y)| \, -1 \leq y \leq 1, \space y^2 - 1 \leq x \leq \sqrt{1 - y^2} \big\}\) Answer: \(0\) 12) \(f(x,y) = \sin y\) and \(D\) is the triangular region with vertices \((0,0), \space (0,3)\), and \((3,0)\) 13) \(f(x,y) = -x + 1\) and \(D\) is the triangular region with vertices \((0,0), \space (0,2)\), and \((2,2)\) Answer: \(\frac{2}{3}\) 14) \(f(x,y) = 2x + 4y\) and \(D = \big\{(x,y)|\, 0 \leq x \leq 1, \space x^3 \leq y \leq x^3 + 1 \big\}\) In exercises 15 - 20, evaluate the iterated integrals. 15) \(\displaystyle \int_0^1 \int_{2\sqrt{x}}^{2\sqrt{x}+1} (xy + 1) \,dy \space dx\) Answer: \(\frac{41}{20}\) 16) \(\displaystyle \int_0^3 \int_{2x}^{3x} (x + y^2) \,dy \space dx\) 17) \(\displaystyle \int_1^2 \int_{-u^2-1}^{-u} (8 uv) \,dv \space du\) Answer: \(-63\) 18) \(\displaystyle \int_e^{e^2} \int_{\ln u}^2 (v + \ln u) \,dv \space du\) 19) \(\displaystyle \int_0^1 \int_{-\sqrt{1-4y^2}}^{\sqrt{1-4y^2}} 4 \,dx \space dy\) Answer: \(\pi\) 20) \(\displaystyle \int_0^1 \int_{-\sqrt{1-y^2}}^{\sqrt{1-y^2}} (2x + 4y^3) \,dx \space dy\) 21) Let \(D\) be the region bounded by \(y = 1 - x^2, \space y = 4 - x^2\), and the \(x\)- and \(y\)-axes. a. Show that \[\iint_D x\,dA = \int_0^1 \int_{1-x^2}^{4-x^2} x \space dy \space dx + \int_1^2 \int_0^{4-x^2} x \space dy \space dx\] by dividing the region \(D\) into two regions of Type I. b. Evaluate the integral \[\iint_D x \space dA.\] 22) Let \(D\) be the region bounded by \(y = x, \space y = -x\), and \(y = 2 - x^2\). a. Show that \[\iint_D y^2 dA = \int_{-1}^0 \int_{-x}^{2-x^2} y^2 dy \space dx + \int_0^1 \int_x^{2-x^2} y^2 dy \space dx\] by dividing the region \(D\) into two regions of Type I, where \(D = \big\{(x,y)\,|\,y \geq x, y \geq -x, \space y \leq 2-x^2\big\}\). b. Evaluate the integral \[\iint_D y^2 dA.\] 23) Let \(D\) be the region bounded by \(y = x^2\), \(y = x + 2\), and \(y = -x\). a. Show that \[\iint_D x \space dA = \int_0^1 \int_{-y}^{\sqrt{y}} x \space dx \space dy + \int_1^2 \int_{y-2}^{\sqrt{y}} x \space dx \space dy\] by dividing the region \(D\) into two regions of Type II, where \(D = \big\{(x,y)\,|\,y \geq x^2, \space y \geq -x, \space y \leq x + 2\big\}\). b. Evaluate the integral \[\iint_D x \space dA.\] Answer: a. Answers may vary; b. \(\frac{2}{3}\) 24) The region \(D\) bounded by \(x = 0, y = x^5 + 1\), and \(y = 3 - x^2\) is shown in the following figure. Find the area \(A(D)\) of the region \(D\). 25) The region \(D\) bounded by \(y = \cos x, \space y = 4 \cos x\), and \(x = \pm \frac{\pi}{3}\) is shown in the following figure. Find the area \(A(D)\) of the region \(D\). Answer: \(\frac{8\pi}{3}\) 26) Find the area \(A(D)\) of the region \(D = \big\{(x,y)| \, y \geq 1 - x^2, y \leq 4 - x^2, \space y \geq 0, \space x \geq 0 \big\}\). 27) Let \(D\) be the region bounded by \( y = 1, \space y = x, \space y = \ln x\), and the \(x\)-axis. Find the area \(A(D)\) of the region \(D\). Answer: \(\left(e - \frac{3}{2}\right)\, \text{units}^2\) 28) Find the average value of the function \(f(x,y) = \sin y\) on the triangular region with vertices \((0,0), \space (0,3)\), and \((3,0)\).
29) Find the average value of the function \(f(x,y) = -x + 1\) on the triangular region with vertices \((0,0), \space (0,2)\), and \((2,2)\). Answer: \(\frac{2}{3}\) In exercises 30 - 33, change the order of integration and evaluate the integral. 30) \[\int_{-1}^{\pi/2} \int_0^{x+1} \sin x \space dy \space dx\] 31) \[\int_0^1 \int_{x-1}^{1-x} x \space dy \space dx\] Answer: \[\int_0^1 \int_{x-1}^{1-x} x \space dy \space dx = \int_{-1}^0 \int_0^{y+1} x \space dx \space dy + \int_0^1 \int_0^{1-y} x \space dx \space dy = \frac{1}{3}\] 32) \[\int_{-1}^0 \int_{-\sqrt{y+1}}^{\sqrt{y+1}} y^2 dx \space dy\] 33) \[\int_{-1/2}^{1/2} \int_{-\sqrt{y^2+1}}^{\sqrt{y^2+1}} y \space dx \space dy\] Answer: \[\int_{-1/2}^{1/2} \int_{-\sqrt{y^2+1}}^{\sqrt{y^2+1}} y \space dx \space dy = \int_1^2 \int_{-\sqrt{x^2-1}}^{\sqrt{x^2-1}} y \space dy \space dx = 0\] 34) The region \(D\) is shown in the following figure. Evaluate the double integral \(\displaystyle \iint_D (x^2 + y) \,dA\) by using the easier order of integration. 35) The region \(D\) is shown in the following figure. Evaluate the double integral \(\displaystyle \iint_D (x^2 - y^2) \,dA\) by using the easier order of integration. Answer: \[\iint_D (x^2 - y^2) dA = \int_{-1}^1 \int_{y^4-1}^{1-y^4} (x^2 - y^2)dx \space dy = \frac{464}{4095}\] 36) Find the volume of the solid under the surface \(z = 2x + y^2\) and above the region bounded by \(y = x^5\) and \(y = x\). 37) Find the volume of the solid under the plane \(z = 3x + y\) and above the region determined by \(y = x^7\) and \(y = x\). Answer: \(\frac{4}{5}\, \text{units}^3\) 38) Find the volume of the solid under the plane \(z = 3x + y\) and above the region bounded by \(x = \tan y, \space x = -\tan y\), and \(x = 1\). 39) Find the volume of the solid under the surface \(z = x^3\) and above the plane region bounded by \(x = \sin y, \space x = -\sin y\), and \(x = 1\). Answer: \(\frac{5\pi}{32}\, \text{units}^3\) 40) Let \(g\) be a positive, increasing, and differentiable function on the interval \([a,b]\). Show that the volume of the solid under the surface \(z = g'(x)\) and above the region bounded by \(y = 0, \space y = g(x), \space x = a\), and \(x = b\) is given by \(\frac{1}{2}(g^2 (b) - g^2 (a))\). 41) Let \(g\) be a positive, increasing, and differentiable function on the interval \([a,b]\) and let \(k\) be a positive real number. Show that the volume of the solid under the surface \(z = g'(x)\) and above the region bounded by \(y = g(x), \space y = g(x) + k, \space x = a\), and \(x = b\) is given by \(k(g(b) - g(a)).\) 42) Find the volume of the solid situated in the first octant and determined by the planes \(z = 2\), \(z = 0, \space x + y = 1, \space x = 0\), and \(y = 0\). 43) Find the volume of the solid situated in the first octant and bounded by the planes \(x + 2y = 1\), \(x = 0, \space z = 4\), and \(z = 0\). Answer: \(1\, \text{units}^3\) 44) Find the volume of the solid bounded by the planes \(x + y = 1, \space x - y = 1, \space x = 0, \space z = 0\), and \(z = 10\). 45) Find the volume of the solid bounded by the planes \(x + y = 1, \space x - y = 1, \space x - y = -1, \space z = 1\), and \(z = 0\) Answer: \(2\, \text{units}^3\) 46) Let \(S_1\) and \(S_2\) be the solids situated in the first octant under the planes \(x + y + z = 1\) and \(x + y + 2z = 1\) respectively, and let \(S\) be the solid situated between \(S_1, \space S_2, \space x = 0\), and \(y = 0\). Find the volume of the solid \(S_1\). Find the volume of the solid \(S_2\).
Find the volume of the solid \(S\) by subtracting the volumes of the solids \(S_1\) and \(S_2\). 47) Let \(S_1\) and \(S_2\) be the solids situated in the first octant under the planes \(2x + 2y + z = 2\) and \(x + y + z = 1\) respectively, and let \(S\) be the solid situated between \(S_1, \space S_2, \space x = 0\), and \(y = 0\). Find the volume of the solid \(S_1\). Find the volume of the solid \(S_2\). Find the volume of the solid \(S\) by subtracting the volumes of the solids \(S_1\) and \(S_2\). Answer: a. \(\frac{1}{3}\, \text{units}^3\) b. \(\frac{1}{6}\, \text{units}^3\) c. \(\frac{1}{6}\, \text{units}^3\) 48) Let \(S_1\) and \(S_2\) be the solids situated in the first octant under the plane \(x + y + z = 2\) and under the sphere \(x^2 + y^2 + z^2 = 4\), respectively. If the volume of the solid \(S_2\) is \(\frac{4\pi}{3}\) determine the volume of the solid \(S\) situated between \(S_1\) and \(S_2\) by subtracting the volumes of these solids. 49) Let \(S_1\) and \(S_2\) be the solids situated in the first octant under the plane \(x + y + z = 2\) and inside the cylinder \(x^2 + y^2 = 4\) with \(0 \leq z \leq 2\), respectively. Find the volume of the solid \(S_1\). Find the volume of the solid \(S_2\). Find the volume of the solid \(S\) situated between \(S_1\) and \(S_2\) by subtracting the volumes of the solids \(S_1\) and \(S_2\). Answer: a. \(\frac{4}{3}\, \text{units}^3\) b. \(2\pi\, \text{units}^3\) c. \(\frac{6\pi - 4}{3}\, \text{units}^3\) 50) [T] The following figure shows the region \(D\) bounded by the curves \(y = \sin x, \space x = 0\), and \(y = x^4\). Use a graphing calculator or CAS to find the \(x\)-coordinates of the intersection points of the curves and to determine the area of the region \(D\). Round your answers to six decimal places. 51) [T] The region \(D\) bounded by the curves \(y = \cos x, \space x = 0\), and \(y = x^3\) is shown in the following figure. Use a graphing calculator or CAS to find the x-coordinates of the intersection points of the curves and to determine the area of the region \(D\). Round your answers to six decimal places. Answer: 0 and 0.865474; \(A(D) = 0.621135\, \text{units}^2\) 52) Suppose that \((X,Y)\) is the outcome of an experiment that must occur in a particular region \(S\) in the \(xy\)-plane. In this context, the region \(S\) is called the sample space of the experiment and \(X\) and \(Y\) are random variables. If \(D\) is a region included in \(S\), then the probability of \((X,Y)\) being in \(D\) is defined as \(P[(X,Y) \in D] = \iint_D p(x,y)dx \space dy\), where \(p(x,y)\) is the joint probability density of the experiment. Here, \(p(x,y)\) is a nonnegative function for which \(\iint_S p(x,y) dx \space dy = 1\). Assume that a point \((X,Y)\) is chosen arbitrarily in the square \([0,3] \times [0,3]\) with the probability density \[p(x,y) = \frac{1}{9}, \quad (x,y) \in [0,3] \times [0,3],\] \[p(x,y) = 0 \space \text{otherwise}\] Find the probability that the point \((X,Y)\) is inside the unit square and interpret the result. 53) Consider \(X\) and \(Y\) two random variables of probability densities \(p_1(x)\) and \(p_2(y)\), respectively. The random variables \(X\) and \(Y\) are said to be independent if their joint density function is given by \(p(x,y) = p_1(x)p_2(y)\). At a drive-thru restaurant, customers spend, on average, 3 minutes placing their orders and an additional 5 minutes paying for and picking up their meals. Assume that placing the order and paying for/picking up the meal are two independent events \(X\) and \(Y\).
If the waiting times are modeled by the exponential probability densities \[p_1(x) = \frac{1}{3}e^{-x/3} \space x\geq 0,\] \[p_1(x) = 0 \space \text{otherwise}\] \[p_2(y) = \frac{1}{5} e^{-y/5} \space y \geq 0\] \[p_2(y) = 0 \space \text{otherwise}\] respectively, the probability that a customer will spend less than 6 minutes in the drive-thru line is given by \(P[X + Y \leq 6] = \iint_D p(x,y) dx \space dy\), where \(D = \{(x,y)\,|\,x \geq 0, \space y \geq 0, \space x + y \leq 6\}\). Find \(P[X + Y \leq 6]\) and interpret the result. Answer: \(P[X + Y \leq 6] = 1 + \frac{3}{2e^2} - \frac{5}{2e^{6/5}} \approx 0.45\); there is a \(45\%\) chance that a customer will spend less than \(6\) minutes in the drive-thru line. 54) [T] The Reuleaux triangle consists of an equilateral triangle and three regions, each of them bounded by a side of the triangle and an arc of a circle of radius \(s\) centered at the opposite vertex of the triangle. Show that the area of the Reuleaux triangle in the following figure of side length \(s\) is \(\frac{s^2}{2}(\pi - \sqrt{3})\). 55) [T] Show that the area of the lunes of Alhazen, the two blue lunes in the following figure, is the same as the area of the right triangle ABC. The outer boundaries of the lunes are semicircles of diameters \(AB\) and \(AC\) respectively, and the inner boundaries are formed by the circumcircle of the triangle \(ABC\).
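The closed-form answer to exercise 53 above can be sanity-checked by simulation; here is a short Monte Carlo sketch (my addition, using NumPy):

```python
# Estimate P[X + Y <= 6] for independent X ~ Exp(mean 3), Y ~ Exp(mean 5)
# and compare with the closed form 1 + 3/(2e^2) - 5/(2e^(6/5)).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(scale=3.0, size=n)   # ordering time, mean 3 minutes
y = rng.exponential(scale=5.0, size=n)   # pay/pickup time, mean 5 minutes

mc_estimate = np.mean(x + y <= 6)
closed_form = 1 + 3/(2*np.e**2) - 5/(2*np.e**1.2)
print(mc_estimate, closed_form)          # both close to 0.45
```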
Exercise \(\PageIndex{1}\) Find the first \(5\) terms of the sequence as well as the \(30^{th}\) term. \(a_{n}=5 n-3\) \(a_{n}=-4 n+3\) \(a_{n}=-10 n\) \(a_{n}=3 n\) \(a_{n}=(-1)^{n}(n-2)^{2}\) \(a_{n}=\frac{(-1)^{n}}{2 n-1}\) \(a_{n}=\frac{2 n+1}{n}\) \(a_{n}=(-1)^{n+1}(n-1)\) Answer 1. \(2,7,12,17,22 ; a_{30}=147\) 3. \(-10,-20,-30,-40,-50 ; a_{30}=-300\) 5. \(-1,0,-1,4,-9 ; a_{30}=784\) 7. \(3, \frac{5}{2}, \frac{7}{3}, \frac{9}{4}, \frac{11}{5} ; a_{30}=\frac{61}{30}\) Exercise \(\PageIndex{2}\) Find the first \(5\) terms of the sequence. \(a_{n}=\frac{n x^{n}}{2 n+1}\) \(a_{n}=\frac{(-1)^{n-1} x^{n+2}}{n}\) \(a_{n}=2^{n} x^{2 n}\) \(a_{n}=(-3 x)^{n-1}\) \(a_{n}=a_{n-1}+5\) where \(a_{1}=0\) \(a_{n}=4 a_{n-1}+1\) where \(a_{1}=-2\) \(a_{n}=a_{n-2}-3 a_{n-1}\) where \(a_{1}=0\) and \(a_{2}=-3\) \(a_{n}=5 a_{n-2}-a_{n-1}\) where \(a_{1}=-1\) and \(a_{2}=0\) Answer 1. \(\frac{x}{3}, \frac{2 x^{2}}{5}, \frac{3 x^{3}}{7}, \frac{4 x^{4}}{9}, \frac{5 x^{5}}{11}\) 3. \(2 x^{2}, 4 x^{4}, 8 x^{6}, 16 x^{8}, 32 x^{10}\) 5. \(0, 5, 10, 15, 20\) 7. \(0, −3, 9, −30, 99\) Exercise \(\PageIndex{3}\) Find the indicated partial sum. \(1,4,7,10,13, \dots ; S_{5}\) \(3,1,-1,-3,-5, \dots ; S_{5}\) \(-1,3,-5,7,-9, \ldots ; S_{5}\) \(a_{n}=(-1)^{n} n^{2} ; S_{4}\) \(a_{n}=-3(n-2)^{2} ; S_{4}\) \(a_{n}=\left(-\frac{1}{5}\right)^{n-2} ; S_{4}\) Answer 1. \(35\) 3. \(-5\) 5. \(-18\) Exercise \(\PageIndex{4}\) Evaluate. \(\sum_{k=1}^{6}(1-2 k)\) \(\sum_{k=1}^{4}(-1)^{k} 3 k^{2}\) \(\sum_{n=1}^{3} \frac{n+1}{n}\) \(\sum_{n=1}^{7} 5(-1)^{n-1}\) \(\sum_{k=4}^{8}(1-k)^{2}\) \(\sum_{k=-2}^{2}\left(\frac{2}{3}\right)^{k}\) Answer 1. \(-36\) 3. \(\frac{29}{6}\) 5. \(135\) Exercise \(\PageIndex{5}\) Write the first \(5\) terms of the arithmetic sequence given its first term and common difference. Find a formula for its general term. \(a_{1}=6 ; d=5\) \(a_{1}=5 ; d=7\) \(a_{1}=5 ; d=-3\) \(a_{1}=-\frac{3}{2} ; d=-\frac{1}{2}\) \(a_{1}=-\frac{3}{4} ; d=-\frac{3}{4}\) \(a_{1}=-3.6 ; d=1.2\) \(a_{1}=7 ; d=0\) \(a_{1}=1 ; d=1\) Answer 1. \(6,11,16,21,26 ; a_{n}=5 n+1\) 3. \(5,2,-1,-4,-7 ; a_{n}=8-3 n\) 5. \(-\frac{3}{4},-\frac{3}{2},-\frac{9}{4},-3,-\frac{15}{4} ; a_{n}=-\frac{3}{4} n\) 7. \(7,7,7,7,7 ; a_{n}=7\) Exercise \(\PageIndex{6}\) Given the terms of an arithmetic sequence, find a formula for the general term. \(10, 20, 30, 40, 50,…\) \(−7, −5, −3, −1, 1,…\) \(−2, −5, −8, −11, −14,…\) \(-\frac{1}{3}, 0, \frac{1}{3}, \frac{2}{3}, 1, \ldots\) \(a_{4}=11\) and \(a_{9}=26\) \(a_{5}=-5\) and \(a_{10}=-15\) \(a_{6}=6\) and \(a_{24}=15\) \(a_{3}=-1.4\) and \(a_{7}=1\) Answer 1. \(a_{n}=10 n\) 3. \(a_{n}=1-3 n\) 5. \(a_{n}=3 n-1\) 7. \(a_{n}=\frac{1}{2} n+3\) Exercise \(\PageIndex{7}\) Calculate the indicated sum given the formula for the general term of an arithmetic sequence. \(a_{n}=4 n-3 ; S_{60}\) \(a_{n}=-2 n+9 ; S_{35}\) \(a_{n}=\frac{1}{5} n-\frac{1}{2}; S_{15}\) \(a_{n}=-n+\frac{1}{4} ; S_{20}\) \(a_{n}=1.8 n-4.2 ; S_{45}\) \(a_{n}=-6.5 n+3 ; S_{35}\) Answer 1. \(7,140\) 3. \(\frac{33}{2}\) 5. \(1,674\) Exercise \(\PageIndex{8}\) Evaluate. \(\sum_{n=1}^{22}(7 n-5)\) \(\sum_{n=1}^{100}(1-4 n)\) \(\sum_{n=1}^{35}\left(\frac{2}{3} n\right)\) \(\sum_{n=1}^{30}\left(-\frac{1}{4} n+1\right)\) \(\sum_{n=1}^{40}(2.3 n-1.1)\) \(\sum_{n=1}^{300} n\) Find the sum of the first \(175\) positive odd integers. Find the sum of the first \(175\) positive even integers. Find all arithmetic means between \(a_{1} = \frac{2}{3}\) and \(a_{5} = −\frac{2}{3}\) Find all arithmetic means between \(a_{3} = −7\) and \(a_{7} = 13\).
A \(5\)-year salary contract offers $\(58,200\) for the first year with a $\(4,200\) increase each additional year. Determine the total salary obligation over the \(5\)-year period. The first row of seating in a theater consists of \(10\) seats. Each successive row consists of four more seats than the previous row. If there are \(14\) rows, how many total seats are there in the theater? Answer 1. \(1,661\) 3. \(420\) 5. \(1,842\) 7. \(30,625\) 9. \(\frac{1}{3}, 0, −\frac{1}{3}\) 11. $\(333,000\) Exercise \(\PageIndex{9}\) Write the first \(5\) terms of the geometric sequence given its first term and common ratio. Find a formula for its general term. \(a_{1}=5 ; r=2\) \(a_{1}=3 ; r=-2\) \(a_{1}=1 ; r=-\frac{3}{2}\) \(a_{1}=-4 ; r=\frac{1}{3}\) \(a_{1}=1.2 ; r=0.2\) \(a_{1}=-5.4 ; r=-0.1\) Answer 1. \(5,10,20,40,80 ; a_{n}=5(2)^{n-1}\) 3. \(1,-\frac{3}{2}, \frac{9}{4},-\frac{27}{8}, \frac{81}{16} ; a_{n}=\left(-\frac{3}{2}\right)^{n-1}\) 5. \(1.2,0.24,0.048,0.0096,0.00192 ; a_{n}=1.2(0.2)^{n-1}\) Exercise \(\PageIndex{10}\) Given the terms of a geometric sequence, find a formula for the general term. \(4, 40, 400,…\) \(−6, −30, −150,…\) \(6, \frac{9}{2}, \frac{27}{8}, \dots\) \(1, \frac{3}{5}, \frac{9}{25}, \dots\) \(a_{4}=-4\) and \(a_{9}=128\) \(a_{2}=-1\) and \(a_{5}=-64\) \(a_{2}=-\frac{5}{2}\) and \(a_{5}=-\frac{625}{16}\) \(a_{3}=50\) and \(a_{6}=-6,250\) Find all geometric means between \(a_{1} = −1\) and \(a_{4} = 64\). Find all geometric means between \(a_{3} = 6\) and \(a_{6} = 162\). Answer 1. \(a_{n}=4(10)^{n-1}\) 3. \(a_{n}=6\left(\frac{3}{4}\right)^{n-1}\) 5. \(a_{n}=\frac{1}{2}(-2)^{n-1}\) 7. \(a_{n}=-\left(\frac{5}{2}\right)^{n-1}\) 9. \(4, -16\) Exercise \(\PageIndex{11}\) Calculate the indicated sum given the formula for the general term of a geometric sequence. \(a_{n}=3(4)^{n-1} ; S_{6}\) \(a_{n}=-5(3)^{n-1} ; S_{10}\) \(a_{n}=\frac{3}{2}(-2)^{n} ; S_{14}\) \(a_{n}=\frac{1}{5}(-3)^{n+1} ; S_{12}\) \(a_{n}=8\left(\frac{1}{2}\right)^{n+2} ; S_{8}\) \(a_{n}=\frac{1}{8}(-2)^{n+2} ; S_{10}\) Answer 1. \(4,095\) 3. \(16,383\) 5. \(\frac{255}{128}\) Exercise \(\PageIndex{12}\) Evaluate. \(\sum_{n=1}^{10} 3(-4)^{n}\) \(\sum_{n=1}^{9}-\frac{3}{5}(-2)^{n-1}\) \(\sum_{n=1}^{\infty}-3\left(\frac{2}{3}\right)^{n}\) \(\sum_{n=1}^{\infty} \frac{1}{2}\left(\frac{4}{5}\right)^{n+1}\) \(\sum_{n=1}^{\infty} \frac{1}{2}\left(-\frac{3}{2}\right)^{n}\) \(\sum_{n=1}^{\infty} \frac{3}{2}\left(-\frac{1}{2}\right)^{n}\) After the first year of operation, the value of a company van was reported to be $\(40,000\). Because of depreciation, after the second year of operation the van was reported to have a value of $\(32,000\) and then $\(25,600\) after the third year of operation. Write a formula that gives the value of the van after the \(n\)th year of operation. Use it to determine the value of the van after \(10\) years of operation. The number of cells in a culture of bacteria doubles every \(6\) hours. If \(250\) cells are initially present, write a sequence that shows the number of cells present after every \(6\)-hour period for one day. Write a formula that gives the number of cells after the \(n\)th \(6\)-hour period. A ball bounces back to one-half of the height that it fell from. If dropped from \(32\) feet, approximate the total distance the ball travels. A structured settlement yields an amount in dollars each year \(n\) according to the formula \(p_{n}=12,500(0.75)^{n-1}\). What is the total value of a \(10\)-year settlement? Answer 1. \(2,516,580\) 3. \(−6\) 5. No sum 7.
\(v_{n}=40,000(0.8)^{n-1} ; v_{10}=\$ 5,368.71\) 9. \(96\) feet Exercise \(\PageIndex{13}\) Classify the sequence as arithmetic, geometric, or neither. \(4, 9, 14,…\) \(6, 18, 54,…\) \(-1,-\frac{1}{2}, 0, \dots\) \(10,30,60, \dots\) \(0,1,8, \dots\) \(-1, \frac{2}{3},-\frac{4}{9}, \ldots\) Answer 1. Arithmetic; \(d=5\) 3. Arithmetic; \(d=\frac{1}{2}\) 5. Neither Exercise \(\PageIndex{14}\) Evaluate. \(\sum_{n=1}^{4} n^{2}\) \(\sum_{n=1}^{4} n^{3}\) \(\sum_{n=1}^{32}(-4 n+5)\) \(\sum_{n=1}^{\infty}-2\left(\frac{1}{5}\right)^{n-1}\) \(\sum_{n=1}^{8} \frac{1}{3}(-3)^{n}\) \(\sum_{n=1}^{46}\left(\frac{1}{4} n-\frac{1}{2}\right)\) \(\sum_{n=1}^{22}(3-n)\) \(\sum_{n=1}^{31} 2 n\) \(\sum_{n=1}^{28} 3\) \(\sum_{n=1}^{30} 3(-1)^{n-1}\) \(\sum_{n=1}^{31} 3(-1)^{n-1}\) Answer 1. \(30\) 3. \(−1,952\) 5. \(1,640\) 7. \(−187\) 9. \(84\) 11. \(3\) Exercise \(\PageIndex{15}\) Evaluate. \(8!\) \(11!\) \(\frac{10 !}{2 ! 6 !}\) \(\frac{9 ! 3 !}{8 !}\) \(\frac{(n+3) !}{n !}\) \(\frac{(n-2) !}{(n+1) !}\) Answer 2. \(39,916,800\) 4. \(54\) 6. \(\frac{1}{n(n+1)(n-1)}\) Exercise \(\PageIndex{16}\) Calculate the indicated binomial coefficient. \(\left( \begin{array}{l}{7} \\ {4}\end{array}\right)\) \(\left( \begin{array}{l}{8} \\ {3}\end{array}\right)\) \(\left( \begin{array}{c}{10} \\ {5}\end{array}\right)\) \(\left( \begin{array}{l}{11} \\ {10}\end{array}\right)\) \(\left( \begin{array}{c}{12} \\ {0}\end{array}\right)\) \(\left( \begin{array}{l}{n+1} \\ {n-1}\end{array}\right)\) \(\left( \begin{array}{c}{n} \\ {n-2}\end{array}\right)\) Answer 2. \(56\) 4. \(11\) 6. \(\frac{n(n+1)}{2}\) Exercise \(\PageIndex{17}\) Expand using the binomial theorem. \((x+7)^{3}\) \((x-9)^{3}\) \((2 y-3)^{4}\) \((y+4)^{4}\) \((x+2 y)^{5}\) \((3 x-y)^{5}\) \((u-v)^{6}\) \((u+v)^{6}\) \(\left(5 x^{2}+2 y^{2}\right)^{4}\) \(\left(x^{3}-2 y^{2}\right)^{4}\) Answer 1. \(x^{3}+21 x^{2}+147 x+343\) 3. \(16 y^{4}-96 y^{3}+216 y^{2}-216 y+81\) 5. \(x^{5}+10 x^{4} y+40 x^{3} y^{2}+80 x^{2} y^{3}+80 x y^{4}+32 y^{5}\) 7. \(\begin{array}{l}{u^{6}-6 u^{5} v+15 u^{4} v^{2}-20 u^{3} v^{3}} {+15 u^{2} v^{4}-6 u v^{5}+v^{6}}\end{array}\) 9. \(625 x^{8}+1,000 x^{6} y^{2}+600 x^{4} y^{4}+160 x^{2} y^{6}+16 y^{8}\) Sample Exam Exercise \(\PageIndex{18}\) Find the first \(5\) terms of the sequence. \(a_{n}=6 n-15\) \(a_{n}=5(-4)^{n-2}\) \(a_{n}=\frac{n-1}{2 n-1}\) \(a_{n}=(-1)^{n-1} x^{2 n}\) Answer 1. \(-9,-3,3,9,15\) 3. \(0, \frac{1}{3}, \frac{2}{5}, \frac{3}{7}, \frac{4}{9}\) Exercise \(\PageIndex{19}\) Find the indicated partial sum \(a_{n}=(n-1) n^{2} ; S_{4}\) \(\sum_{k=1}^{5}(-1)^{k} 2^{k-2}\) Answer 1. \(70\) Exercise \(\PageIndex{20}\) Classify the sequence as arithmetic, geometric, or neither. \(-1,-\frac{3}{2},-2, \ldots\) \(1,-6,36, \dots\) \(\frac{3}{8},-\frac{3}{4}, \frac{3}{2}, \ldots\) \(\frac{1}{2}, \frac{1}{4}, \frac{2}{9}, \ldots\) Answer 1. Arithmetic 3. Geometric Exercise \(\PageIndex{21}\) Given the terms of an arithmetic sequence, find a formula for the general term. \(10,5,0,-5,-10, \dots\) \(a_{4}=-\frac{1}{2}\) and \(a_{9}=2\) Answer 1. \(a_{n}=15-5 n\) Exercise \(\PageIndex{22}\) Given the terms of a geometric sequence, find a formula for the general term. \(-\frac{1}{8},-\frac{1}{2},-2,-8,-32, \ldots\) \(a_{3}=1\) and \(a_{8}=-32\) Answer 1. \(a_{n}=-\frac{1}{8}(4)^{n-1}\) Exercise \(\PageIndex{23}\) Calculate the indicated sum. \(a_{n}=5-n ; S_{44}\) \(a_{n}=(-2)^{n+2} ; S_{12}\) \(\sum_{n=1}^{\infty} 4\left(-\frac{1}{2}\right)^{n-1}\) \(\sum_{n=1}^{100}\left(2 n-\frac{3}{2}\right)\) Answer 1. \(-770\) 3. 
\(\frac{8}{3}\) Exercise \(\PageIndex{24}\) Evaluate. \(\frac{14 !}{10 ! 6 !}\) \(\left( \begin{array}{l}{9} \\ {7}\end{array}\right)\) Determine the sum of the first \(48\) positive odd integers. The first row of seating in a theater consists of \(14\) seats. Each successive row consists of two more seats than the previous row. If there are \(22\) rows, how many total seats are there in the theater? A ball bounces back to one-third of the height that it fell from. If dropped from \(27\) feet, approximate the total distance the ball travels. Answer 1. \(\frac{1,001}{30}\) 3. \(2,304\) 5. \(54\) feet Exercise \(\PageIndex{25}\) Expand using the binomial theorem. \((x-5 y)^{4}\) \(\left(3 a+b^{2}\right)^{5}\) Answer 2. \(\begin{array}{l}{243 a^{5}+405 a^{4} b^{2}+270 a^{3} b^{4}} {+90 a^{2} b^{6}+15 a b^{8}+b^{10}}\end{array}\)
The Math.Stackexchange (MSE) is an extraordinary source of great quality responses on almost any non-research level math question. There was a recent question by the user belgi, called A list of basic integrals, that got me thinking a bit. It is not in the general habit of MSE to allow such big-list or soft questions. But it is an unfortunate habit that many very good tidbits get lost in the sea of questions (over 55000 questions now). So I decided to begin a post containing some of the gems on integration techniques that I come across. I don’t mean this to be a catchall reference (For a generic integration reference, I again recommend Paul’s Online Math Notes and his Calculus Cheat Sheet). And I hope not to cross anyone, nor do I claim that mixedmath is to be the blog of MSE. But there are some really clever things done to which I, for one, would like a quick reference. Please note that this is one of those posts-in-progress. If you know of another really slick bit that I missed, please let me know. And as I come across more, I’ll update this page accordingly. In the question A self-contained proof that $latex \displaystyle \sum_{n = 1}^\infty \frac{1}{n^p}$ converges for $latex p > 1$, by the user admchrch, there are two really nice responses. The first observes that $latex \displaystyle S_{2k+1} = \sum_{n=1}^{2k+1} \frac{1}{n^p}$ $latex \displaystyle= 1+\sum_{i=1}^k\left(\frac{1}{(2i)^p}+\frac{1}{(2i+1)^p}\right)$ $latex \displaystyle < 1+\sum_{i=1}^k\frac{2}{(2i)^p}$ $latex =1+2^{1-p}S_k$ $latex \leq 1+2^{1-p}S_{2k+1}$ Solving for $latex S_{2k+1}$ shows that $latex S_{2k+1} < \frac{1}{1 - 2^{1-p}}$, which is independent of $k$, so the partial sums are bounded and the series converges. Second, Pacciu wrote up an answer that presented a convergence test similar to Cauchy’s, but a bit different. The criterion he presented states that if $latex (a_n)$ is a sequence of positive numbers, then the series $latex \sum a_n$ diverges if $latex \displaystyle \lim \dfrac{\ln \frac{1}{a_n}}{\ln n} = l< 1$ and converges if the limit $latex l > 1$. In a question of the user James about showing that $latex \int_{-a}^a \frac{f(x)}{1 + e^x} dx = \int_0^a f(x) dx$ when $latex f$ is even, there was one particularly nice answer: an exceedingly clever one from a user who, I believe, wants to remain anonymous, and whom I will call user9413. His answer is short and sweet: $latex I =\int\limits_{-a}^{a}\frac{f(x)}{1+e^{x}} \ dx \quad \cdots(1)$ $latex I = \int\limits_{-a}^{a} \frac{f(x)}{1+e^{-x}} \ dx \qquad\qquad \Bigl[ \small\because \int\limits_{a}^{b}f(x) = \int\limits_{a}^{b}f(a+b-x) \ \Bigr] \quad \cdots (2)$ $latex \Longrightarrow 2I = \int\limits_{-a}^{a} \biggl[ \frac{f(x)}{1+e^{x}} + \frac{e^{x}\cdot f(x)}{1+e^{x}} \biggr] \ dx \quad\qquad \cdots (1) + (2)$ $latex =\int\limits_{-a}^{a} f(x) \ dx = 2 \int\limits_{0}^{a} f(x) \ dx \qquad \Bigl[ \small \text{since}\ f \ \text{is even so} \ \int\limits_{-a}^{a} f(x) = 2\int\limits_{0}^{a} f(x) \Bigr]$ In a question of the user grestudying12345 on Integration Techniques, which was unfortunately largely a bust, there were two answers of note: I hadn’t ever really thought to knock down repeated integration by parts all at once before, but that’s exactly what Hans Lundmark does in his answer. I don’t include the text of the answer here because it’s easy to reproduce, but it’s the idea that struck me. Why hadn’t I heard or considered such a thing?
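(My addition, not part of the original post:) user9413's identity is easy to spot-check numerically for any particular even $latex f$, e.g. $latex f(x) = x^2$ with $latex a = 3$:

```python
# Numeric check of: integral_{-a}^{a} f(x)/(1+e^x) dx == integral_0^a f(x) dx
# for an even function f.
from scipy.integrate import quad
import numpy as np

f = lambda x: x**2                       # any even function works here
a = 3.0

lhs, _ = quad(lambda x: f(x) / (1 + np.exp(x)), -a, a)
rhs, _ = quad(f, 0, a)
print(lhs, rhs)                          # both approximately 9.0
```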
Linda Collins Sandgren notes that when attempting to integrate rational expressions of sine and cosine, the substitution $latex u = \tan \frac{x}{2}$ always leads to a rational function in $latex u$, and in particular $latex \sin{x}=2\cos{\frac{x}{2}}\sin{\frac{x}{2}}=\frac{2\cos{\frac{x}{2}}\sin{\frac{x}{2}}}{\cos^2{\frac{x}{2}}+\sin^2{\frac{x}{2}}}=\frac{2u}{1+u^2}$ $latex \cos{x}=\cos^2{\frac{x}{2}}-\sin^2{\frac{x}{2}}=\frac{\cos^2{\frac{x}{2}}-\sin^2{\frac{x}{2}}}{\cos^2{\frac{x}{2}}+\sin^2{\frac{x}{2}}}=\frac{1-u^2}{1+u^2}$ Clever. For more, see the wikipedia page on the Weierstrass substitution. The user Aryabhata asked a question about the convergence of $latex \sqrt n x_n$, where $latex x_{n+1} = \sin (x_n )$: While Martin Sleziak wrote up an answer relying ultimately on an application of L’Hopital’s rule and the fact that $latex \lim (a_{n+1} – a_n) = a \implies \lim \dfrac{a_n}{n} = a$, I’d like to focus on David Speyer’s answer (found here), that goes a bit like this: For small $latex x$, we know $latex \sin x = x – x^3/6 + O(x^5)$. Set $latex y_n = 1/x_n^2$. Then we have $latex 1/x_{n+1}^2 = x_n^{-2}(1 – x_n^2/6 + O(x_n^4))^{-2} = 1/x_n^2 + 1/3 + O(x_n^2)$ So $latex \displaystyle y_{n+1} = y_n + 1/3 + O(y_n^{-1})$ and $latex \displaystyle y_n = n/3 + O\left( \sum_{k=1}^n y_k^{-1} \right)$ and $latex \displaystyle \frac{1}{n}y_n = \frac{1}{3} + \frac{1}{n} O\left( \sum_{k=1}^n y_k^{-1}\right)$ We know $latex x_n \to 0$, so $latex y_n^{-1} \to 0$, and thus the average goes to zero. Thus $latex \lim y_n / n = 1/3$, and using the continuity of $latex \frac{1}{\sqrt t}$ and transforming back to $latex \sqrt n x_n$, we get the answer: the limit is $latex \sqrt 3$. David Speyer summarized my general reaction nicely in a comment: PS This is a good example of why I find the O() notation insanely more useful than limits. As an aside, there’s another good use of big-Oh notation by Qiaochu (whose blog Annoying Precision has a link on my sidebar) in his answer to kuch nahi’s question on expanding $latex \cos^{-1} (\cos^2 x)$ Aryabhata is the writer of a slick answer to a classic integral asked by user9413 (generic username), on why $latex \displaystyle\int_0^\infty \frac{\sin x}{x} dx = \frac{\pi}{2}$: This is often covered in a first class on complex analysis, but it can be done classically. At the risk of plugging myself, I wrote a very late answer using infinite series but no complicated argument. The answer by Aryabhata, however, is exceptional, especially in that I’d never seen it before. It all boils down to one idea: Notice that $latex \displaystyle \int_0^\infty e^{-xy} \sin x \, dy = \frac{\sin x }{x}$. Then one needs to justify switching the order of integration, $latex \int_{0}^{\infty} \Bigg(\int_{0}^{\infty} e^{-xy} \sin x \,dy \Bigg)\, dx = \int_{0}^{\infty} \Bigg(\int_{0}^{\infty} e^{-xy} \sin x \,dx \Bigg)\,dy$, and the right side can be solved easily by integration by parts. Another challenging integral, $latex \displaystyle \int_0^\infty \ln(1 – e^{-x})dx$, was asked by Jack Rousseau. The solution itself is interesting, but the key method behind it is the common plan to expand in series: Unfortunately, the magic of Anon’s posted answer is reduced once we know to expand by series, just as the satisfaction from solving an assigned homework problem is lost when the method of solution is prescripted (and when done in primary and secondary schools, perhaps contributing to the widespread belief that math is nothing more than arithmetic, with no creativity).
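The limit $latex \sqrt{n}\, x_n \to \sqrt{3}$ is easy to watch numerically; a tiny script (my addition):

```python
# Iterate x_{n+1} = sin(x_n) and compare sqrt(n) * x_n against sqrt(3).
import math

x = 1.0
n = 1_000_000
for _ in range(n):
    x = math.sin(x)
print(math.sqrt(n) * x, math.sqrt(3))   # both approximately 1.732
```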
But he did the following $latex \displaystyle -\int_0^\infty \ln(1-e^{-x})dx=\int_0^\infty\left(e^{-x}+\frac{e^{-2x}}{2}+\frac{e^{-3x}}{3}+\cdots\right)dx$ $latex \displaystyle =\int_0^\infty e^{-x}dx+\frac{1}{2}\int_0^\infty e^{-2x}dx+\frac{1}{3}\int_0^\infty e^{-3x}dx+\cdots$ $latex \displaystyle =1+\frac{1}{2}\cdot\frac{1}{2}+\frac{1}{3}\cdot\frac{1}{3}+\frac{1}{4}\cdot\frac{1}{4}\cdots = \dfrac{\pi^2}{6}$ But it’s still a bit cool because it has the Riemann zeta, a pretty big bonus. (probably still growing)
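(My addition:) the value also admits a one-line numerical check; the integrand's logarithmic blow-up at $latex 0$ is integrable, and SciPy's quad copes with it:

```python
# Check that -integral_0^inf ln(1 - e^{-x}) dx equals pi^2 / 6.
from scipy.integrate import quad
import numpy as np

val, err = quad(lambda x: -np.log(1 - np.exp(-x)), 0, np.inf)
print(val, np.pi**2 / 6)   # both approximately 1.6449
```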
Reactivity of thioesters with respect to nucleophilic attack Why is the thioester bond weaker than a regular ester bond? Let's consider a reaction mechanism like this: A steady-state analysis will show that the rate of formation of product is $$r = \frac{k_1k_2[\text{ester}][\text{Nu}]}{k_{-1} + k_2}$$ $k_1$ for the thioester is larger, because the sulfur 3p lone pair overlaps poorly with the C=O π* orbital (which is formed from 2p orbitals). Recall that esters are less electrophilic than ketones because there is an ester oxygen which has a lone pair capable of overlapping with the C=O π*. In this case, since this overlap is diminished, it turns out that thioesters are typically roughly as reactive towards nucleophilic attack as ketones. $k_2$ for the thioester is also larger because thiolates are better leaving groups than alkoxides. Depending on the relative rates of $k_2$ and $k_{-1}$, though, the second point may or may not be relevant. For example if $k_2 \gg k_{-1}$, then the rate equation simplifies to $r \approx k_1[\text{ester}][\text{Nu}]$, in which case the comparison of $k_2$ is no longer useful.
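For readers who want to reproduce the steady-state algebra behind the quoted rate law, here is a small SymPy sketch (my addition; the symbol names are mine, with C standing for the tetrahedral intermediate):

```python
# Sketch: steady-state derivation for
#   ester + Nu <-> C (k1 forward, k_-1 back),  C -> product (k2).
import sympy as sp

k1, km1, k2, E, Nu, C = sp.symbols('k1 k_{-1} k2 ester Nu C', positive=True)

# Steady state: formation of the intermediate C balances its consumption.
C_ss = sp.solve(sp.Eq(k1*E*Nu, (km1 + k2)*C), C)[0]

r = k2 * C_ss          # rate of product formation
print(sp.simplify(r))  # prints k1*k2*Nu*ester/(k2 + k_{-1}), the quoted rate law
```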
Conveners: Session 8, Theoretical developments 1: Jun GAO (Shanghai Jiao Tong University, Center for High Energy Physics, Peking University); Session 8, Theoretical developments 2: Antoni Szczurek (Institute of Nuclear Physics). We discuss the $\gamma^* \gamma^* \to \eta_c(1S)\, , \,\eta_c(2S)$ transition form factors for both virtual photons. The general formula is given. We use different models for the $c \bar c$ wave function obtained from the solution of the Schr\"odinger equation for different $c \bar c$ potentials: harmonic oscillator, Cornell, logarithmic, power-law, Coulomb and Buchm\"uller-Tye. We compare our... Quantifying the differences between nuclear and hadronic collisions, phenomenologically known as medium modification due to multiple scatterings between the hard probe and the medium, can provide a solid baseline for unambiguous identification of the fundamental medium properties. In this talk, we consider parton propagation in cold nuclear matter within the framework of the high-twist expansion, which... Parton distribution functions (PDFs) are mandatory inputs in high energy scattering and also play an important role in searching for new physics at high energy. The recently proposed large momentum effective theory allows one to access the PDFs from first-principles lattice QCD. In this talk, I will discuss the recent progress on quasi-PDFs, in particular on the gluon quasi-PDFs. Abstract to be provided by Richard Brower. The sign problem is an obstacle in lattice studies of supersymmetric gauge theories, including a numerical verification of AdS/CFT. Tensor networks are an attractive approach to overcoming this problem. We present a numerical result of the tensor network approach in two-dimensional complex $\phi^4$ theory with finite chemical potential. Abstract: For a long time, gravity has been used to learn about dynamical aspects of QCD, because holography connects gravity and QFT. In this talk, I will review the opposite direction: learning about gravity from QFT. In particular, I will focus on the quantum nature of black holes. Techniques developed among QCD practitioners turned out to be useful for quantum gravity. At the same time, quantum...
It’s impossible to find explicit formulas for solutions of some differential equations. Even if there are such formulas, they may be so complicated that they’re useless. In this case we may resort to graphical or numerical methods to get some idea of how the solutions of the given equation behave. In Section 2.3 we’ll take up the question of existence of solutions of a first order equation \[\label{eq:1.3.1} y'=f(x,y).\] In this section we’ll simply assume that Equation \ref{eq:1.3.1} has solutions and discuss a graphical method for approximating them. In Chapter 3 we discuss numerical methods for obtaining approximate solutions of Equation \ref{eq:1.3.1}. Recall that a solution of Equation \ref{eq:1.3.1} is a function \(y=y(x)\) such that \[y'(x)=f(x,y(x))\] for all values of \(x\) in some interval, and an integral curve is either the graph of a solution or is made up of segments that are graphs of solutions. Therefore, not being able to solve Equation \ref{eq:1.3.1} is equivalent to not knowing the equations of integral curves of Equation \ref{eq:1.3.1}. However, it is easy to calculate the slopes of these curves. To be specific, the slope of an integral curve of Equation \ref{eq:1.3.1} through a given point \((x_0,y_0)\) is given by the number \(f(x_0,y_0)\). This is the basis of the method of direction fields. If \(f\) is defined on a set \(R\), we can construct a direction field for Equation \ref{eq:1.3.1} in \(R\) by drawing a short line segment through each point \((x,y)\) in \(R\) with slope \(f(x,y)\). Of course, as a practical matter, we can’t actually draw line segments through every point in \(R\); rather, we must select a finite set of points in \(R\). For example, suppose \(f\) is defined on the closed rectangular region \[R:\{a\le x\le b, c\le y\le d\}.\] Let \[a= x_0< x_1< \cdots< x_m=b\] be equally spaced points in \([a,b]\) and \[c=y_0<y_1<\cdots<y_n=d\] be equally spaced points in \([c,d]\). The points \((x_i,y_j)\), with \(0\le i\le m\) and \(0\le j\le n\), form a rectangular grid (Figure \(\PageIndex{1}\)). Through each point in the grid we draw a short line segment with slope \(f(x_i,y_j)\). The result is an approximation to a direction field for Equation \ref{eq:1.3.1} in \(R\). If the grid points are sufficiently numerous and close together, we can draw approximate integral curves of Equation \ref{eq:1.3.1} by drawing curves through points in the grid tangent to the line segments associated with the points in the grid. Unfortunately, approximating a direction field and graphing integral curves in this way is too tedious to be done effectively by hand. However, there is software for doing this. As you’ll see, the combination of direction fields and integral curves gives useful insights into the behavior of the solutions of the differential equation even if we can’t obtain exact solutions. We’ll study numerical methods for solving a single first order equation Equation \ref{eq:1.3.1} in Chapter 3. These methods can be used to plot solution curves of Equation \ref{eq:1.3.1} in a rectangular region \(R\) if \(f\) is continuous on \(R\). Figures \(\PageIndex{2}\), \(\PageIndex{3}\), and \(\PageIndex{4}\) show direction fields and solution curves for the differential equations \(y'=\frac{x^2-y^2}{1+x^2+y^2}\), \(y'=1+xy^2\), and \(y'=\frac{x-y}{1+x^2}\), which are all of the form Equation \ref{eq:1.3.1} with \(f\) continuous for all \((x,y)\). The methods of Chapter 3 will not work for the equation \[\label{eq:1.3.2} y'=-x/y\] if \(R\) contains part of the \(x\)-axis, since \(f(x,y)=-x/y\) is undefined when \(y=0\).
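Since the text notes that software exists for this, here is a minimal sketch (my addition, using NumPy/Matplotlib) of exactly the grid construction just described, applied to the first of the three equations above:

```python
# Sketch: draw short segments of slope f(x, y) on a rectangular grid,
# i.e. a direction field for y' = (x^2 - y^2)/(1 + x^2 + y^2).
import numpy as np
import matplotlib.pyplot as plt

def f(x, y):
    return (x**2 - y**2) / (1 + x**2 + y**2)

x, y = np.meshgrid(np.linspace(-3, 3, 25), np.linspace(-3, 3, 25))
s = f(x, y)

# Components of a short unit segment with slope s at each grid point:
dx = 1.0 / np.sqrt(1.0 + s**2)
dy = s * dx
plt.quiver(x, y, dx, dy, angles='xy', pivot='middle',
           headwidth=1, headlength=0, headaxislength=0)  # headless arrows = segments
plt.xlabel('x'); plt.ylabel('y')
plt.show()
```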
Similarly, they will not work for the equation \[\label{eq:1.3.3} y'={x^2\over1-x^2-y^2}\] if \(R\) contains any part of the unit circle \(x^2+y^2=1\), because the right side of Equation \ref{eq:1.3.3} is undefined if \(x^2+y^2=1\). However, Equation \ref{eq:1.3.2} and Equation \ref{eq:1.3.3} can be written as \[\label{eq:1.3.4} y'={A(x,y)\over B(x,y)}\] where \(A\) and \(B\) are continuous on any rectangle \(R\). Because of this, some differential equation software is based on numerically solving pairs of equations of the form \[\label{eq:1.3.5} {dx\over dt}=B(x,y),\quad {dy\over dt}=A(x,y)\] where \(x\) and \(y\) are regarded as functions of a parameter \(t\). If \(x=x(t)\) and \(y=y(t)\) satisfy these equations, then \[y'={dy\over dx}={dy\over dt}\left/{dx\over dt}\right.={A(x,y)\over B(x,y)},\] so \(y=y(x)\) satisfies Equation \ref{eq:1.3.4}. Equations \ref{eq:1.3.2} and \ref{eq:1.3.3} can be reformulated as in Equation \ref{eq:1.3.5} with \[{dx\over dt}=-y,\quad {dy\over dt}=x\] and \[{dx\over dt}=1-x^2-y^2,\quad {dy\over dt}=x^2,\] respectively. Even if \(f\) is continuous and otherwise “nice” throughout \(R\), your software may require you to reformulate the equation \(y'=f(x,y)\) as \[{dx\over dt}=1,\quad {dy\over dt}=f(x,y),\] which is of the form Equation \ref{eq:1.3.5} with \(A(x,y)=f(x,y)\) and \(B(x,y)=1\). Figure \(\PageIndex{5}\) shows a direction field and some integral curves for Equation \ref{eq:1.3.2}. As we saw in Example 1.2.1 and will verify again in Section 2.2, the integral curves of Equation \ref{eq:1.3.2} are circles centered at the origin. Figure \(\PageIndex{6}\) shows a direction field and some integral curves for Equation \ref{eq:1.3.3}. The integral curves near the top and bottom are solution curves. However, the integral curves near the middle are more complicated. For example, Figure \(\PageIndex{7}\) shows the integral curve through the origin. The vertices of the dashed rectangle are on the circle \(x^2+y^2=1\) (\(a\approx 0.846\), \(b\approx 0.533\)), where all integral curves of Equation \ref{eq:1.3.3} have infinite slope. There are three solution curves of Equation \ref{eq:1.3.3} on the integral curve in the figure: the segment above the level \(y=b\) is the graph of a solution on \((-\infty,a)\), the segment below the level \(y=-b\) is the graph of a solution on \((-a,\infty)\), and the segment between these two levels is the graph of a solution on \((-a,a)\). Using Technology As you study from this book, you’ll often be asked to use computer software and graphics. Exercises with this intent are marked as (computer or calculator required), (computer and/or graphics required), or (laboratory work requiring software and/or graphics). Often you may not completely understand how the software does what it does. This is similar to the situation most people are in when they drive automobiles or watch television, and it doesn’t decrease the value of using modern technology as an aid to learning. Just be careful that you use the technology as a supplement to thought rather than a substitute for it.
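Here is a minimal sketch (my addition, using SciPy's solve_ivp) of this parametric approach for \(y'=-x/y\): the system \(dx/dt=-y\), \(dy/dt=x\) traces the circular integral curves right through \(y=0\), where the original equation is undefined:

```python
# Trace an integral curve of y' = -x/y via the parametric system
# dx/dt = B(x, y) = -y,  dy/dt = A(x, y) = x.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    x, y = z
    return [-y, x]

sol = solve_ivp(rhs, (0, 2*np.pi), [1.0, 0.0], max_step=0.01)
# The curve (x(t), y(t)) should stay on the circle x^2 + y^2 = 1:
print(np.max(np.abs(sol.y[0]**2 + sol.y[1]**2 - 1)))   # small numerical error
```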
In Kunen's book, Set Theory, chapter I.7, he said: $1+\omega=\omega \neq \omega+1$. I want to know why $\omega \neq \omega+1$. There is an easy way to see this. You need to apply the definition of ordinal addition: $$\omega + 1 = \omega \times \{0\} \cup 1 \times \{1\} = \{0, 1, 2, \dots, 0^\prime\}$$ So $\omega + 1$ has a greatest element (one which, moreover, is not the successor of anything), while $\omega$ has no greatest element. On the other hand, $$1 + \omega = 1 \times \{0\} \cup \omega \times \{1\} = \{0^\prime, 0, 1, 2, \dots\} \cong \omega$$ so you see that addition doesn't commute. There is some more information about this here on Wikipedia. Hope this helps. I find pictures helpful. The idea here is that $\omega$ is a limit ordinal, and tacking on the ordinal $1$ after it is fundamentally different: The picture for $\omega$ has a curved edge which indicates that it is a limit ordinal as opposed to being a successor ordinal. When we tack on $1$ to the right of $\omega$ we get this ordinal $\omega+1$ that contains a limit ordinal, which is not something that occurs in $\omega$. This means that $\omega$ and $\omega+1$ can't be isomorphic. Can you see why $1+\omega$ and $\omega+1$ aren't equal? Do you see why $1+\omega = \omega$? $\omega + 1$ has a limit point (i.e. $\omega$ — using the von Neumann definition $\omega + 1 = \omega \cup \lbrace\omega\rbrace$) in the order topology while $\omega$ is discrete in the order topology. Because the elements of $\omega$ are all finite, whereas $\omega + 1$ has one infinite element.
Recall that we were able to analyze all geometric series "simultaneously'' to discover that \[\sum_{n=0}^\infty kx^n = {k\over 1-x},\] if \(|x| < 1\), and that the series diverges when \(|x|\ge 1\). At the time, we thought of \(x\) as an unspecified constant, but we could just as well think of it as a variable, in which case the series \[\sum_{n=0}^\infty kx^n\] is a function, namely, the function \(k/(1-x)\), as long as \(|x| < 1\). While \(k/(1-x)\) is a reasonably easy function to deal with, the more complicated \(\sum kx^n\) does have its attractions: it appears to be an infinite version of one of the simplest function types---a polynomial. This leads naturally to the questions: Do other functions have representations as series? Is there an advantage to viewing them in this way? The geometric series has a special feature that makes it unlike a typical polynomial---the coefficients of the powers of \(x\) are the same, namely \(k \). We will need to allow more general coefficients if we are to get anything other than the geometric series. Definition 11.8.1 A power series has the form \(\sum_{n=0}^\infty a_nx^n,\) with the understanding that \( a_n\) may depend on \(n\) but not on \(x\). Example 11.8.2 \(\displaystyle \sum_{n=1}^{\infty} {x^n\over n}\) is a power series. We can investigate convergence using the ratio test: \[ \lim_{n\to\infty} {|x|^{n+1}\over n+1}{n\over |x|^n} =\lim_{n\to\infty} |x|{n\over n+1} =|x|. \] Thus when \(|x| < 1\) the series converges and when \(|x|>1\) it diverges, leaving only two values in doubt. When \(x=1\) the series is the harmonic series and diverges; when \(x=-1\) it is the alternating harmonic series (actually the negative of the usual alternating harmonic series) and converges. Thus, we may think of \[\sum_{n=1}^\infty {x^n\over n}\] as a function from the interval \([-1,1)\) to the real numbers. A bit of thought reveals that the ratio test applied to a power series will always have the same nice form. In general, we will compute \[ \lim_{n\to\infty} {|a_{n+1}||x|^{n+1}\over |a_n||x|^n} =\lim_{n\to\infty} |x|{|a_{n+1}|\over |a_n|} = |x|\lim_{n\to\infty} {|a_{n+1}|\over |a_n|} =L|x|, \] assuming that \( \lim |a_{n+1}|/|a_n|\) exists. Then the series converges if \(L|x| < 1\), that is, if \(|x| < 1/L\), and diverges if \(|x|>1/L\). Only the two values \(x=\pm1/L\) require further investigation. Thus the series will definitely define a function on the interval \((-1/L,1/L)\), and perhaps will extend to one or both endpoints as well. Two special cases deserve mention: if \(L=0\) the limit is \(0\) no matter what value \(x\) takes, so the series converges for all \(x\) and the function is defined for all real numbers. If \(L=\infty\), then no matter what value \(x\) takes the limit is infinite and the series converges only when \(x=0\). The value \(1/L\) is called the radius of convergence of the series, and the interval on which the series converges is the interval of convergence. Consider again the geometric series, \(\sum_{n=0}^\infty x^n={1\over 1-x}.\) Whatever benefits there might be in using the series form of this function are only available to us when \(x\) is between \(-1\) and \(1\). Frequently we can address this shortcoming by modifying the power series slightly. Consider this series: \[ \sum_{n=0}^\infty {(x+2)^n\over 3^n}= \sum_{n=0}^\infty \left({x+2\over 3}\right)^n={1\over 1-{x+2\over 3}}= {3\over 1-x}, \] because this is just a geometric series with \(x\) replaced by \((x+2)/3\).
Multiplying both sides by \(1/3\) gives \[\sum_{n=0}^\infty {(x+2)^n\over 3^{n+1}}={1\over 1-x},\] the same function as before. For what values of \(x\) does this series converge? Since it is a geometric series, we know that it converges when \[\eqalign{ |x+2|/3& < 1\cr |x+2|& < 3\cr -3 < x+2 & < 3\cr -5 < x& < 1.\cr }\] So we have a series representation for \(1/(1-x)\) that works on a larger interval than before, at the expense of a somewhat more complicated series. The endpoints of the interval of convergence now are \(-5\) and \(1\), but note that they can be more compactly described as \(-2\pm3\). We say that \(3\) is the radius of convergence, and we now say that the series is centered at \(-2\). Definition 11.8.3 A power series centered at \(a\) has the form \(\sum_{n=0}^\infty a_n(x-a)^n,\) with the understanding that \(a_n\) may depend on \(n\) but not on \(x\).
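As a quick numerical illustration (my addition): at a point such as \(x=-4\), which lies in the new interval of convergence but far outside \((-1,1)\), the partial sums of the recentered series really do converge to \(1/(1-x)\):

```python
# Partial sums of sum_{n>=0} (x+2)^n / 3^(n+1) versus the target 1/(1-x).
x = -4.0                                   # inside (-5, 1)
target = 1.0 / (1.0 - x)                   # = 0.2
partial = sum((x + 2.0)**n / 3.0**(n + 1) for n in range(60))
print(partial, target)                     # both approximately 0.2
```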
In this short post, we introduce three conundrums dealing with infinity. This is inspired by my calculus class, as we explore various confusing and confounding aspects of infinity and find that it’s very confusing, sometimes mindbending. Order Matters Consider the alternating unit series $$ \sum_{n \geq 0} (-1)^n. $$ We want to try to understand its convergence. If we write out the first several terms, it looks like $$ 1 – 1 + 1 – 1 + 1 – 1 + \cdots $$ What if we grouped the terms while we were summing them? Perhaps we should group them like so, $$ (1 – 1) + (1 – 1) + (1 – 1) + \cdots = 0 + 0 + 0 + \cdots $$ so that the sum is very clearly $latex {0}$. Adding infinitely many zeroes certainly gives zero, right? On the other hand, what if we group the terms like so, $$ 1 + (-1 + 1) + (-1 + 1) + \cdots = 1 + 0 + 0 + \cdots $$ which is very clearly $latex {1}$. After all, adding $latex {1}$ to infinitely many zeroes certainly gives one, right? A related, perhaps deeper paradox is one we mentioned in class. For conditionally convergent series like the alternating harmonic series $$ \sum_{n = 1}^\infty \frac{(-1)^n}{n}, $$ if we are allowed to rearrange the terms then we can have the series sum to any number that we want. This is called the Riemann Series Theorem. The Thief and the King A very wealthy king keeps gold coins in his vault, but a sneaky thief knows how to get in. Suppose that each day, the king puts two more gold coins into the vault. And each day, the thief takes one gold coin out (so that the king won’t notice that the vault is empty). After infinitely many days, how much gold is left in the vault? Suppose that the king numbers each coin. So on day 1, the king puts in coins labelled 1 and 2, and on day 2 he puts in coins labelled 3 and 4, and so on. What if the thief steals the odd numbered coin each day? Then at the end of time, the king has all the even coins. But what if instead the thief steals from the bottom? So he first steals coin number 1, then number 2, and so on. At the end of time, no coin is left in the vault, since for any number $latex {n}$, the $latex {n}$th coin has been taken by the thief. Prevalence of Rarity When I drove to Providence this morning, the car in front of me had the license plate 637RB2. Think about it – out of the approximately $latex {10\cdot10\cdot10\cdot26\cdot 26 \cdot 10 = 6760000}$ possibilities, I happened across this one. Isn’t that amazing! How could something so rare happen to me? Amazingly, something just as rare happened last time I drove to Providence too!
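To see the Riemann series theorem in action, here is a tiny greedy rearrangement script (my addition, not from the original post): whenever the running total is at or below the target, append the next unused positive term of the alternating harmonic series; otherwise append the next unused negative term. The partial sums then home in on whatever target you pick.

```python
# Greedy rearrangement of the alternating harmonic series toward a chosen target.
target = 1.5
pos, neg = 1, 2            # next positive term is 1/pos, next negative is -1/neg
total = 0.0
for _ in range(100_000):
    if total <= target:
        total += 1.0 / pos; pos += 2    # use terms 1, 1/3, 1/5, ...
    else:
        total -= 1.0 / neg; neg += 2    # use terms -1/2, -1/4, -1/6, ...
print(total)               # approximately 1.5
```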
In the following exercises, write the appropriate \(ε−δ\) definition for each of the given statements. 176) \(\displaystyle \lim_{x→a}\,f(x)=N\) 177) \(\displaystyle \lim_{t→b}\,g(t)=M\) Answer: For every \(ε>0\), there exists a \(δ>0\), so that if \(0<|t−b|<δ\), then \(|g(t)−M|<ε\) 178) \(\displaystyle \lim_{x→c}\,h(x)=L\) 179) \(\displaystyle \lim_{x→a}\,φ(x)=A\) Answer: For every \(ε>0\), there exists a \(δ>0\), so that if \(0<|x−a|<δ\), then \(|φ(x)−A|<ε\) The following graph of the function f satisfies \(\displaystyle \lim_{x→2}f(x)=2\). In the following exercises, determine a value of \(δ>0\) that satisfies each statement. 180) If \(0<|x−2|<δ\), then \(|f(x)−2|<1\). 181) If \(0<|x−2|<δ\), then \(|f(x)−2|<0.5\). Answer: \(δ≤0.25\) The following graph of the function f satisfies \(\displaystyle \lim_{x→3}\,f(x)=−1\). In the following exercises, determine a value of \(δ>0\) that satisfies each statement. 182) If \(0<|x−3|<δ\), then \(|f(x)+1|<1\). 183) If \(0<|x−3|<δ\), then \(|f(x)+1|<2\). Answer: \(δ≤2\) In the following exercises, use the precise definition of limit to prove the given limits. J3.7.1) \(\displaystyle \lim_{x→5}\,(2x - 1)=9\) Answer: Let \(ε>0\); choose \(δ=\frac{ε}{2}\); assume \(0<|x−5|<δ\). In other words: \(0<|x - 5|<\frac{ε}{2}\), so \(-\frac{ε}{2}<x - 5<\frac{ε}{2}\), then \(-ε<2x - 10<ε\), then \(|2x - 10|<ε\), then \(|(2x - 1)−9|<ε\). Thus, if \(0<|x−5|<δ\), then \(|(2x - 1)−9|<ε\). Therefore, by the definition of limit, \(\displaystyle \lim_{x→5}\,(2x - 1)=9\). J3.7.2) \(\displaystyle \lim_{x→-3}\,(5x+2)=-13\) J3.7.3) \(\displaystyle \lim_{x→-7^-}\,\frac{1}{x+7}= −∞ \) Answer: Let \(M>0\); since this is a limit from the left, we need \(-δ<x+7<0\) to force \(\frac{1}{x+7}<-M\) (since the limit is \(−∞ \)). Note: since \(x<-7\), we have \(x+7<0\). Proof: Let \(M>0\). Choose \(δ=\frac{1}{M}\). If \(-δ<x+7<0\), in other words \(-\frac{1}{M}<x+7<0\), then \(\frac{1}{M}>-(x+7)>0\) (both \(\frac{1}{M}\) and \(-(x+7)\) are positive), then \(M<-\frac{1}{x+7}\), then \(-M>\frac{1}{x+7}\), so \(\frac{1}{x+7}<-M\). Thus, if \(-\frac{1}{M}<x+7<0\), then \(\frac{1}{x+7}<-M\). Therefore, by the definition of (infinite, from the left) limit, \(\displaystyle \lim_{x→-7^-}\,\frac{1}{x+7}= −∞ \). J3.7.4) \(\displaystyle \lim_{x→2^+}\,\frac{1}{x-2}= ∞ \) 188) \(\displaystyle \lim_{x→2}\,(5x+8)=18\) 189) \(\displaystyle \lim_{x→3}\,\frac{x^2−9}{x−3}=6\) Answer: \(\frac{x^2−9}{x−3}\) is equivalent to \(x + 3\) by factoring, as long as \(x\) is not \(3\). Since we are looking at the limit as \(x→3\), we do not consider \(x=3\). Let \(ε>0\), choose \(δ=ε\). If \(0<|x−3|<ε\), then \(|x+3−6|=|x−3|<ε\). Thus, by the definition of limit, \(\displaystyle \lim_{x→3}\,\frac{x^2−9}{x−3}=6\). 190) \(\displaystyle \lim_{x→2}\,\frac{2x^2−3x−2}{x−2}=5\) 191) \(\displaystyle \lim_{x→0}\,x^4=0\) Answer: Let \(ε>0\), choose \(δ=\sqrt[4]{ε}\). If \(0<|x|<\sqrt[4]{ε}\), then \(∣x^4∣=x^4<ε\). Thus, by the definition of limit, \(\displaystyle\lim_{x→0}\,x^4=0\). 192) \(\displaystyle \lim_{x→2}\,(x^2+2x)=8\) Chapter Review Exercises True or False. In the following exercises, justify your answer with a proof or a counterexample. 208) A function has to be continuous at \(x=a\) if the \(\displaystyle \lim_{x→a}\,f(x)\) exists. 209) Evaluate \(\displaystyle \lim_{x→0}\,\frac{\sin x}{x}\) = ? Answer: 1 210) If there is a vertical asymptote at \(x=a\) for the function \(f(x)\), then f is undefined at the point \(x=a\). 211) If \(\displaystyle \lim_{x→a}\,f(x)\) does not exist, then f is undefined at the point \(x=a\). Answer: False.
A removable discontinuity is possible.

212) Using the graph, find each limit or explain why the limit does not exist.

a. \(\displaystyle \lim_{x→−1}\,f(x)\) b. \(\displaystyle \lim_{x→1}\,f(x)\) c. \(\displaystyle \lim_{x→0^+}\,f(x)\) d. \(\displaystyle \lim_{x→2}\,f(x)\)

In the following exercises, evaluate the limit algebraically or explain why the limit does not exist.

213) \(\displaystyle \lim_{x→2}\,\frac{2x^2−3x−2}{x−2}\)

Answer: 5

214) \(\displaystyle \lim_{x→0}\,(3x^2−2x+4)\)

215) \(\displaystyle \lim_{x→3}\,\frac{x^3−2x^2−1}{3x−2}\)

Answer: 8/7

216) \(\displaystyle \lim_{x→π/2}\,\frac{\cot x}{\cos x}\)

217) \(\displaystyle \lim_{x→−5}\,\frac{x^2+25}{x+5}\)

Answer: DNE

218) \(\displaystyle \lim_{x→2}\,\frac{3x^2−2x−8}{x^2−4}\)

219) \(\displaystyle \lim_{x→1}\,\frac{x^2−1}{x^3−1}\)

Answer: 2/3

220) \(\displaystyle \lim_{x→1}\,\frac{x^2−1}{\sqrt{x}−1}\)

221) \(\displaystyle \lim_{x→4}\,\frac{4−x}{\sqrt{x}−2}\)

Answer: −4

222) \(\displaystyle \lim_{x→4}\,\frac{1}{\sqrt{x}−2}\)

In the following exercises, use the squeeze theorem to prove the limit.

223) \(\displaystyle \lim_{x→0}\,x^2\cos(2πx)=0\)

Answer: Since \(−1≤\cos(2πx)≤1\), we have \(−x^2≤x^2\cos(2πx)≤x^2\). Since \(\displaystyle \lim_{x→0}\,x^2=0=\lim_{x→0}\,(−x^2)\), it follows that \(\displaystyle \lim_{x→0}\,x^2\cos(2πx)=0\).

224) \(\displaystyle \lim_{x→0}\,x^3\sin\left(\frac{π}{x}\right)=0\)

225) Determine the domain such that the function \(f(x)=\sqrt{x−2}+xe^x\) is continuous over its domain.

Answer: \([2,∞)\)

In the following exercises, determine the value of \(c\) such that the function remains continuous. Draw your resulting function to ensure it is continuous.

226) \(f(x)=\begin{cases}x^2+1 & x>c\\2^x & x≤c\end{cases}\)

227) \(f(x)=\begin{cases}\sqrt{x+1} & x>−1\\x^2+c & x≤−1\end{cases}\)

Answer: \(c=-1\)

In the following exercises, use the precise definition of limit to prove the limit.

228) \(\displaystyle \lim_{x→1}\,(8x+16)=24\)

229) \(\displaystyle \lim_{x→0}\,x^3=0\)

Answer: \(δ=\sqrt[3]{ε}\) [This is just the key piece for constructing the proof.]

230) A ball is thrown into the air and the vertical position is given by \(x(t)=−4.9t^2+25t+5\). Use the Intermediate Value Theorem to show that the ball must land on the ground sometime between 5 sec and 6 sec after the throw.

231) A particle moving along a line has a displacement according to the function \(x(t)=t^2−2t+4\), where \(x\) is measured in meters and \(t\) is measured in seconds. Find the average velocity over the time period \(t=[0,2]\).

Answer: \(0\) m/sec

232) From the previous exercise, estimate the instantaneous velocity at \(t=2\) by computing the average velocity over a time interval of \(0.01\) sec.
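Problem 230 is a classic Intermediate Value Theorem setup: \(x(5)=7.5>0\) and \(x(6)=−21.4<0\), so continuity forces a root in \((5,6)\). As a supplement (my own sketch, not part of the text), bisection turns the same argument into a numerical procedure:

```python
# My own sketch of the IVT argument in problem 230 (not from the text):
# x(t) changes sign on [5, 6], so the ball hits the ground (x = 0) at
# some time in between; bisection then locates that time.
def x(t):
    return -4.9 * t**2 + 25 * t + 5

a, b = 5.0, 6.0
assert x(a) > 0 > x(b)            # x(5) = 7.5 > 0 and x(6) = -21.4 < 0
for _ in range(60):               # bisection works because x is continuous
    m = (a + b) / 2
    a, b = (m, b) if x(m) > 0 else (a, m)
print(a)                          # ~5.295 sec
```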
If we have a Sturm-Liouville differential equation of the form $$\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right]+q(x)y=-\lambda w(x)y$$ and define the linear operator $L$ as $$L(u) = \frac{d}{dx}\left[p(x)\frac{du}{dx}\right]+q(x)u,$$ then we get the equation $L(y)=-\lambda w(x)y$, which defines what is called the eigenvalue problem of the Sturm-Liouville differential equation. My question: why is it called that, despite the fact that there is still a function $w(x)$ in the equation? I thought an eigenvalue problem would have the form $L(y)=-\lambda y$. What's happening here?

If we have a Sturm-Liouville differential equation of the form $$\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right]+q(x)y=-\lambda w(x)y$$ and define the linear operator $L$ as $$L(u) = \frac{d}{dx}\left[p(x)\frac{du}{dx}\right]+q(x)u,$$ then we get the equation $L(y)=-\lambda w(x)y$, which defines what is called the eigenvalue problem of the Sturm-Liouville differential equation.

The operator can be written as $$ Ly = \frac{1}{w}\left[-\frac{d}{dx}\left(p\frac{dy}{dx}\right)+qy\right]. $$ This operator is defined on the weighted $L^2$ space $L^2_w[a,b]$. With the correct endpoint conditions, $L$ becomes selfadjoint, and the eigenvalue problem is $Ly=\lambda y$. Absorbing the negative sign into the second derivative term is standard because that has the best chance of making $L$ a positive operator. For example, in the simplest case of $Ly=-y''$ on $[a,b]$ with endpoint conditions at $x=a,b$, $$ \langle Ly,y\rangle = \int_a^b (-y'')y\,dx = -y'y\Big|_a^b + \int_a^b(y')^2\,dx = \int_a^b (y')^2\,dx \ge 0. $$

The function $w(x)$ is called the weight function. You can find out why this weight function appears in the eigenvalue problem of the Sturm-Liouville differential equation in "Differential Equations with Applications and Historical Notes" by George F. Simmons. I think this reference will help you.
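To see concretely how the weight enters, it may help to discretize. In the sketch below (my own illustration, not from the answers above; the coefficient choices are arbitrary), a finite-difference approximation of $-(py')'+qy=\lambda wy$ turns the weighted problem into a generalized matrix eigenvalue problem $Ay=\lambda Wy$, which `scipy.linalg.eigh` solves directly; this is the discrete counterpart of working in $L^2_w[a,b]$.

```python
import numpy as np
from scipy.linalg import eigh

# My own sketch: discretize -(p y')' + q y = lambda w y on [0, pi] with
# y(0) = y(pi) = 0, using central differences on a uniform grid.
n, a, b = 200, 0.0, np.pi
x = np.linspace(a, b, n + 2)[1:-1]           # interior grid points
h = (b - a) / (n + 1)

p = np.ones(n + 1)                            # p(x) = 1 (values at midpoints)
q = np.zeros(n)                               # q(x) = 0
w = 1.0 + 0.5 * x / np.pi                     # an arbitrary weight w(x) > 0

# Symmetric tridiagonal matrix for -(p y')' + q y.
main = (p[:-1] + p[1:]) / h**2 + q
off = -p[1:-1] / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# The weight becomes a (diagonal) mass-like matrix: A y = lambda W y is
# the discrete analogue of the eigenvalue problem in L^2_w.
W = np.diag(w)

vals = eigh(A, W, eigvals_only=True)
print(vals[:5])   # smallest eigenvalues; with w = 1 these would be ~1, 4, 9, ...
```

Dividing through by $w$, as in the answer, corresponds exactly to solving $W^{-1}A\,y=\lambda y$, and $W^{-1}A$ is selfadjoint in the $W$-weighted inner product even though it is not symmetric as a plain matrix.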
Suppose a surface given by $f(x,y)$ has a local maximum at $(x_0,y_0,z_0)$; geometrically, this point on the surface looks like the top of a hill. If we look at the cross-section in the plane $y=y_0$, we will see a local maximum on the curve at $(x_0,z_0)$, and we know from single-variable calculus that ${\partial z\over\partial x}=0$ at this point. Likewise, in the plane $x=x_0$, ${\partial z\over\partial y}=0$. So if there is a local maximum at $(x_0,y_0,z_0)$, both partial derivatives at the point must be zero, and likewise for a local minimum. Thus, to find local maximum and minimum points, we need only consider those points at which both partial derivatives are 0. As in the single-variable case, it is possible for the derivatives to be 0 at a point that is neither a maximum nor a minimum, so we need to test these points further.

You will recall that in the single variable case, we examined three methods to identify maximum and minimum points; the most useful is the second derivative test, though it does not always work. For functions of two variables there is also a second derivative test; again it is by far the most useful test, though it doesn't always work.

Theorem 16.7.1 Suppose that the second partial derivatives of $f(x,y)$ are continuous near $(x_0,y_0)$, and $f_x(x_0,y_0)=f_y(x_0,y_0)=0$. We denote by $D$ the discriminant: $$D(x_0,y_0)=f_{xx}(x_0,y_0)f_{yy}(x_0,y_0)-f_{xy}(x_0,y_0)^2.$$ If $D>0$: if $f_{xx}(x_0,y_0)< 0$, there is a local maximum at $(x_0,y_0)$; if $f_{xx}(x_0,y_0)>0$, there is a local minimum at $(x_0,y_0)$. If $D< 0$: there is neither a maximum nor a minimum at $(x_0,y_0)$. If $D=0$: the test fails.

Example 16.7.2 Verify that $f(x,y)=x^2+y^2$ has a minimum at $(0,0)$. First, we compute all the needed derivatives: $$f_x=2x \qquad f_y=2y \qquad f_{xx}=2 \qquad f_{yy}=2 \qquad f_{xy}=0.$$ The derivatives $f_x$ and $f_y$ are zero only at $(0,0)$. Applying the second derivative test there: $$D(0,0)=f_{xx}(0,0)f_{yy}(0,0)-f_{xy}(0,0)^2= 2\cdot2-0=4>0$$ and $$f_{xx}(0,0)=2>0,$$ so there is a local minimum at $(0,0)$, and there are no other possibilities.

Example 16.7.3 Find all local maxima and minima for $f(x,y)=x^2-y^2$. The derivatives: $$f_x=2x \qquad f_y=-2y \qquad f_{xx}=2 \qquad f_{yy}=-2 \qquad f_{xy}=0.$$ Again there is a single critical point, at $(0,0)$, and $$D(0,0)=f_{xx}(0,0)f_{yy}(0,0)-f_{xy}(0,0)^2= 2\cdot(-2)-0=-4< 0,$$ so there is neither a maximum nor a minimum there, and so there are no local maxima or minima. The surface is shown in figure 16.7.1.

Example 16.7.4 Find all local maxima and minima for $f(x,y)=x^4+y^4$. The derivatives: $$f_x=4x^3 \qquad f_y=4y^3 \qquad f_{xx}=12x^2 \qquad f_{yy}=12y^2 \qquad f_{xy}=0.$$ Again there is a single critical point, at $(0,0)$, and $$D(0,0)=f_{xx}(0,0)f_{yy}(0,0)-f_{xy}(0,0)^2= 0\cdot0-0=0,$$ so we get no information. However, in this case it is easy to see that there is a minimum at $(0,0)$, because $f(0,0)=0$ and at all other points $f(x,y)>0$.

Example 16.7.5 Find all local maxima and minima for $f(x,y)=x^3+y^3$. The derivatives: $$f_x=3x^2 \qquad f_y=3y^2 \qquad f_{xx}=6x \qquad f_{yy}=6y \qquad f_{xy}=0.$$ Again there is a single critical point, at $(0,0)$, and $$D(0,0)=f_{xx}(0,0)f_{yy}(0,0)-f_{xy}(0,0)^2= 0\cdot0-0=0,$$ so we get no information. In this case, a little thought shows there is neither a maximum nor a minimum at $(0,0)$: when $x$ and $y$ are both positive, $f(x,y)>0$, and when $x$ and $y$ are both negative, $f(x,y)< 0$, and there are points of both kinds arbitrarily close to $(0,0)$.
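The test in Theorem 16.7.1 is mechanical enough to automate. The following sketch (my own, not from the text) uses `sympy` to find the critical points of $f(x,y)=x^2-y^2$ from Example 16.7.3 and classify each one by the sign of $D$:

```python
import sympy as sp

# My own sketch: apply the second derivative test symbolically to
# f(x, y) = x**2 - y**2 from Example 16.7.3.
x, y = sp.symbols('x y')
f = x**2 - y**2

fx, fy = sp.diff(f, x), sp.diff(f, y)
crit = sp.solve([fx, fy], [x, y], dict=True)    # critical points

fxx, fyy, fxy = sp.diff(f, x, 2), sp.diff(f, y, 2), sp.diff(sp.diff(f, x), y)
D = fxx * fyy - fxy**2                           # the discriminant

for pt in crit:
    d = D.subs(pt)
    if d > 0:
        kind = 'local min' if fxx.subs(pt) > 0 else 'local max'
    elif d < 0:
        kind = 'saddle (neither max nor min)'
    else:
        kind = 'test fails'
    print(pt, kind)   # {x: 0, y: 0} saddle (neither max nor min)
```

Swapping in $x^4+y^4$ or $x^3+y^3$ from Examples 16.7.4 and 16.7.5 prints "test fails", matching the discussion above.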
Alternately, if we look at the cross-section when $y=0$, we get $f(x,0)=x^3$, which does not have either a maximum or minimum at $x=0$.

Example 16.7.6 Suppose a box with no top is to hold a certain volume $V$. Find the dimensions for the box that result in the minimum surface area. The area of the box is $A=2hw+2hl+lw$, and the volume is $V=lwh$, so we can write the area as a function of two variables, $$A(l,w)={2V\over l}+{2V\over w}+lw.$$ Then $$A_l=-{2V\over l^2}+w \quad\hbox{and}\quad A_w=-{2V\over w^2}+l.$$ If we set these equal to zero and solve, we find $\ds w=(2V)^{1/3}$ and $\ds l=(2V)^{1/3}$, and the corresponding height is $h=V/(2V)^{2/3}$. The second derivatives are $$A_{ll}={4V\over l^3}\qquad A_{ww}={4V\over w^3}\qquad A_{lw}=1,$$ so at the critical point the discriminant is $$D={4V\over l^3}{4V\over w^3}-1=2\cdot2-1=3>0.$$ Since $A_{ll}=2>0$ there, there is a local minimum at the critical point. Is this a global minimum? It is, but it is difficult to see this analytically; physically and graphically it is clear that there is a minimum, in which case it must be at the single critical point. Here is a portion of the graph with the minimum point shown. Note that we must choose a value for $V$ in order to graph it.

Recall that when we did single variable global maximum and minimum problems, the easiest cases were those for which the variable could be limited to a finite closed interval, for then we simply had to check all critical values and the endpoints. The previous example is difficult because there is no finite boundary to the domain of the problem—both $w$ and $l$ can be in $(0,\infty)$. As in the single variable case, the problem is often simpler when there is a finite boundary.

Theorem 16.7.7 If $f(x,y)$ is continuous on a closed and bounded subset of $\R^2$, then it has both a maximum and minimum value.

As in the case of single variable functions, this means that the maximum and minimum values must occur at a critical point or on the boundary; in the two variable case, however, the boundary is a curve, not merely two endpoints.

Example 16.7.8 The length of the diagonal of a rectangular box is 1; find the maximum possible volume. If the box is placed with one corner at the origin, and sides along the axes, the length of the diagonal is $\ds\sqrt{x^2+y^2+z^2}$, and the volume is $$V=xyz=xy\sqrt{1-x^2-y^2}.$$ Clearly, $x^2+y^2\le 1$, so the domain we are interested in is the quarter of the unit disk in the first quadrant. Computing derivatives: $$\eqalign{ V_x&={y-2yx^2-y^3\over\sqrt{1-x^2-y^2}}\cr V_y&={x-2xy^2-x^3\over\sqrt{1-x^2-y^2}}\cr }$$ If these are both 0, then $x=0$ or $y=0$, or $x=y=1/\sqrt3$. The boundary of the domain is composed of three curves: $x=0$ for $y\in[0,1]$; $y=0$ for $x\in[0,1]$; and $x^2+y^2=1$, where $x\ge0$ and $y\ge0$. In all three cases, the volume $xy\sqrt{1-x^2-y^2}$ is 0, so the maximum occurs at the only interior critical point, $(x,y,z)=(1/\sqrt3,1/\sqrt3,1/\sqrt3)$. See figure 16.7.2.
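A numerical check of Example 16.7.6 is straightforward with `scipy` (my own sketch, not part of the text; the choice $V=1$ is arbitrary): minimizing $A(l,w)$ should recover $l=w=(2V)^{1/3}$.

```python
import numpy as np
from scipy.optimize import minimize

# My own numerical check of Example 16.7.6: minimize the open-top box area
# A(l, w) = 2V/l + 2V/w + l*w for V = 1 and compare with the closed form
# l = w = (2V)**(1/3).
V = 1.0

def area(dims):
    l, w = dims
    return 2 * V / l + 2 * V / w + l * w

res = minimize(area, x0=[1.0, 1.0], bounds=[(1e-6, None)] * 2)
print(res.x, (2 * V) ** (1 / 3))   # both coordinates ~1.2599
```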
Exercises 16.7

Ex 16.7.1 Find all local maximum and minimum points of $f=x^2+4y^2-2x+8y-1$.

Ex 16.7.2 Find all local maximum and minimum points of $f=x^2-y^2+6x-10y+2$.

Ex 16.7.3 Find all local maximum and minimum points of $f=xy$.

Ex 16.7.4 Find all local maximum and minimum points of $f=9+4x-y-2x^2-3y^2$.

Ex 16.7.5 Find all local maximum and minimum points of $f=x^2+4xy+y^2-6y+1$.

Ex 16.7.6 Find all local maximum and minimum points of $f=x^2-xy+2y^2-5x+6y-9$.

Ex 16.7.7 Find the absolute maximum and minimum points of $f=x^2+3y-3xy$ over the region bounded by $y=x$, $y=0$, and $x=2$.

Ex 16.7.8 A six-sided rectangular box is to hold $1/2$ cubic meter; what shape should the box be to minimize surface area?

Ex 16.7.9 The post office will accept packages whose combined length and girth is at most 130 inches. (Girth is the maximum distance around the package perpendicular to the length; for a rectangular box, the length is the largest of the three dimensions.) What is the largest volume that can be sent in a rectangular box?

Ex 16.7.10 The bottom of a rectangular box costs twice as much per unit area as the sides and top. Find the shape for a given volume that will minimize cost.

Ex 16.7.11 Using the methods of this section, find the shortest distance from the origin to the plane $x+y+z=10$.

Ex 16.7.12 Using the methods of this section, find the shortest distance from the point $(x_0,y_0,z_0)$ to the plane $ax+by+cz=d$. You may assume that $c\not=0$; use of Sage or similar software is recommended.

Ex 16.7.13 A trough is to be formed by bending up two sides of a long metal rectangle so that the cross-section of the trough is an isosceles trapezoid, as in figure 6.2.6. If the width of the metal sheet is 2 meters, how should it be bent to maximize the volume of the trough?

Ex 16.7.14 Given the three points $(1,4)$, $(5,2)$, and $(3,-2)$, $\ds(x-1)^2+(y-4)^2+(x-5)^2+(y-2)^2+(x-3)^2+(y+2)^2$ is the sum of the squares of the distances from point $(x,y)$ to the three points. Find $x$ and $y$ so that this quantity is minimized.

Ex 16.7.15 Suppose that $f(x,y)=x^2+y^2+kxy$. Find and classify the critical points, and discuss how they change when $k$ takes on different values.

Ex 16.7.16 Find the shortest distance from the point $(0,b)$ to the parabola $y=x^2$.

Ex 16.7.17 Find the shortest distance from the point $(0,0,b)$ to the paraboloid $z=x^2+y^2$.

Ex 16.7.18 Consider the function $f(x,y)=x^3-3x^2y+y^3$. a. Show that $(0,0)$ is the only critical point of $f$. b. Show that the discriminant test is inconclusive for $f$. c. Determine the cross-sections of $f$ obtained by setting $y=kx$ for various values of $k$. d. What kind of critical point is $(0,0)$?

Ex 16.7.19 Find the volume of the largest rectangular box with edges parallel to the axes that can be inscribed in the ellipsoid $2x^2+72y^2+18z^2=288$.
I am trying to show that a sequence $(x_n)_n \subseteq \mathcal{H}$ converges strongly to $x$ if it converges weakly to $x \in \mathcal{H}$ and $\|x_n\| \to \|x\|$ as $n \to \infty$. Here $\mathcal{H}$ is an infinite-dimensional Hilbert space. Now my proof is clearly bogus, because I do not even use the second condition to complete the proof. Can someone point out where the problem is?

Proof: Since $x_n \to x$ weakly, and since $\mathcal{H}$ is a Hilbert space, we have by the Riesz representation theorem that $\langle y,x_n \rangle \to \langle y,x \rangle$ for all $y \in \mathcal{H}$, i.e. $\langle y,x_n-x \rangle\to 0$ for all $y\in \mathcal{H}$. In particular, if we choose $y=x_n-x$, we get $\|x_n-x\|^2 \to 0$.

This can't be right, since otherwise there would be no difference between weak convergence and strong convergence on a Hilbert space. What is the mistake? Thanks
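As a sanity check that weak and strong convergence really do differ, consider the standard example of an orthonormal basis $(e_n)$: it converges weakly to $0$ but $\|e_n\|=1$ for all $n$. The toy computation below (my own; it truncates $\ell^2$ to $\mathbb{R}^N$, so it is only suggestive, not a proof) shows $\langle y,e_n\rangle\to 0$ for a fixed $y$ while the norms stay at $1$:

```python
import numpy as np

# My own toy illustration (not a proof): in l^2 the orthonormal basis e_n
# converges weakly to 0, since <y, e_n> = y_n -> 0 for every FIXED y, but
# not strongly, since ||e_n - 0|| = 1. Here l^2 is truncated to R^N.
N = 10_000
y = 1.0 / np.arange(1, N + 1)         # a fixed vector in (truncated) l^2

for n in [1, 10, 100, 1000]:
    e_n = np.zeros(N)
    e_n[n - 1] = 1.0                   # the n-th basis vector
    print(n, np.dot(y, e_n), np.linalg.norm(e_n))
    # inner products shrink like 1/n, yet the norm stays 1
```

Note that the vector $y$ stays fixed as $n$ grows, which is exactly the quantifier order that weak convergence provides.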
In computational geometry and other fields, it is of interest to have degeneracy measures for the shapes of simplices, which quantitatively separate the regular simplex from degenerate simplices. In two dimensions, two such shape measures are the minimum angle of a triangle and its aspect ratio, i.e. the quotient of the radii of its insphere and circumsphere. While many of these shape measures naturally generalize to higher dimensions, and are documented in the literature for arbitrary dimension, I haven't found any source which relates the minimum solid angle of a simplex to any such shape measure in arbitrary dimensions. It is "obvious" that simplices with small solid angles at the corner vertices are degenerate, but I haven't found any source on this in the literature.

Question or reference request: Can you relate the minimum solid angle of a $d$-dimensional simplex with its aspect ratio for arbitrary $d$?

A possible answer would generalize Theorem 6.1 of "A. Liu and B. Joe. Relationship between tetrahedron shape measures, BIT, 34 (1994)", which states: For any tetrahedron $T$ we have $(\sqrt{3}/24)\rho^2 \leq \sigma_{\min} \leq (2/3^{1/4})\sqrt{\rho}$, where $\sigma_{\min}$ is the minimum solid angle of $T$ and $\rho$ denotes the aspect ratio of $T$.
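For experimenting with the tetrahedral case $d=3$, here is a sketch (my own, not from the question; it uses the Van Oosterom–Strackee formula for the solid angle at a vertex, the identity $r=3V/A_{\text{total}}$ for the inradius, and a linear system for the circumcenter) that computes $\sigma_{\min}$ and $\rho$, e.g. to test the Liu–Joe bounds empirically on random tetrahedra:

```python
import numpy as np

# My own sketch for d = 3 only: minimum vertex solid angle and aspect
# ratio rho = r/R (inradius over circumradius) of a tetrahedron.

def solid_angle(apex, p1, p2, p3):
    # Van Oosterom-Strackee: tan(Omega/2) = |a.(b x c)| /
    #   (|a||b||c| + (a.b)|c| + (a.c)|b| + (b.c)|a|)
    a, b, c = p1 - apex, p2 - apex, p3 - apex
    num = abs(np.dot(a, np.cross(b, c)))
    na, nb, nc = map(np.linalg.norm, (a, b, c))
    den = na*nb*nc + np.dot(a, b)*nc + np.dot(a, c)*nb + np.dot(b, c)*na
    return 2.0 * np.arctan2(num, den)

def aspect_ratio(P):                      # P: (4, 3) array of vertices
    V = abs(np.linalg.det(P[1:] - P[0])) / 6.0
    faces = [np.delete(P, i, axis=0) for i in range(4)]
    areas = [np.linalg.norm(np.cross(F[1]-F[0], F[2]-F[0])) / 2 for F in faces]
    r = 3.0 * V / sum(areas)              # inradius
    # circumcenter x solves |x - p_i|^2 = |x - p_0|^2 for i = 1, 2, 3
    A = 2.0 * (P[1:] - P[0])
    rhs = (P[1:]**2).sum(axis=1) - (P[0]**2).sum()
    x = np.linalg.solve(A, rhs)
    R = np.linalg.norm(x - P[0])          # circumradius
    return r / R

P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
angles = [solid_angle(P[i], *np.delete(P, i, axis=0)) for i in range(4)]
print(min(angles), aspect_ratio(P))       # sigma_min and rho
```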