It is important to understand the sequence of events, i.e. how information is revealed.
For example, the Baker Hughes Rig Count is published at 13:00 Eastern Time, usually on a Friday. (A few releases fall on other days, so I recommend using the actual release dates.)
The Bloomberg data for the NYMEX Crude futures close is the price of Crude at 14:30 Eastern Time.
The simplest, or "zero information", forecast for the Friday closing price of Crude is just the Thursday closing price. The error of this forecast is equal to $\sigma(P_{Fri}-P_{Th})$ which can be estimated empirically.
The next step is to build a slightly more sophisticated forecasting model which takes the intervening rig count release into account. This model could be of the form $P_{Fri}=P_{Th}+\alpha+\beta \cdot RIG\_INFO_{Fri}$ where $\alpha$ and $\beta$ are estimated by linear regression. Theoretically RIG_INFO should be the difference between the Rig Count data and the market's expectation of the rig count just prior to the announcement. If you don't have the expected value you could perhaps use the difference between the announced rig count and the rig count the previous week as a proxy for the "surprise".
In any case, once you have estimated this model you can check how much better it is than the naive model. (In my experience a single explanatory variable, like rig count, will only reduce the error by a relatively small amount. This reduction, theoretically, measures the contribution of the announcement to the crude price).
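To make the comparison concrete, here is a minimal sketch of both forecasts on synthetic data (all series and coefficients below are invented for illustration; with real data you would substitute the actual Thursday/Friday closes and the rig-count surprise):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic weekly series standing in for the real Bloomberg / Baker Hughes data.
n = 200
p_thu = 60 + np.cumsum(rng.normal(0.0, 1.0, n))     # Thursday closes (random walk)
rig_surprise = rng.normal(0.0, 5.0, n)              # announced count minus prior week (proxy)
p_fri = p_thu + 0.05 - 0.02 * rig_surprise + rng.normal(0.0, 0.8, n)

# "Zero information" forecast: Friday close = Thursday close
naive_err = np.std(p_fri - p_thu)

# P_Fri = P_Thu + alpha + beta * RIG_INFO, estimated by least squares
X = np.column_stack([np.ones(n), rig_surprise])
coef, *_ = np.linalg.lstsq(X, p_fri - p_thu, rcond=None)
alpha, beta = coef
model_err = np.std(p_fri - p_thu - X @ coef)
```

Comparing `model_err` to `naive_err` then quantifies how much of the Thursday-to-Friday move the announcement explains.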
The Topology of Open Intervals (-n, n) on the Set of Real Numbers
Recall from the Topological Spaces page that a set $X$ and a collection $\tau$ of subsets of $X$ together denoted $(X, \tau)$ is called a topological space if:
$\emptyset \in \tau$ and $X \in \tau$, i.e., the empty set and the whole set are contained in $\tau$.
If $U_i \in \tau$ for all $i \in I$ where $I$ is some index set then $\displaystyle{\bigcup_{i \in I} U_i \in \tau}$, i.e., for any arbitrary collection of subsets from $\tau$, their union is contained in $\tau$.
If $U_1, U_2, ..., U_n \in \tau$ then $\displaystyle{\bigcap_{i=1}^{n} U_i \in \tau}$, i.e., for any finite collection of subsets from $\tau$, their intersection is contained in $\tau$.
We will now look at the topology on the set of real numbers consisting of the open intervals of the form $(-n, n)$ together with $\emptyset$ and $\mathbb{R}$.
Consider the collection of subsets of $\mathbb{R}$ consisting of open intervals of the form:

$$\tau = \{ \emptyset, \mathbb{R} \} \cup \{ (-n, n) : n \in \mathbb{Z}, n \geq 1 \} \quad (1)$$
Let's verify that $\tau$ is a topology on $\mathbb{R}$.
For the first condition we clearly see that $\emptyset, \mathbb{R} \in \tau$.
For the second condition, notice that:

$$\emptyset \subseteq (-1, 1) \subseteq (-2, 2) \subseteq (-3, 3) \subseteq \cdots \subseteq \mathbb{R} \quad (2)$$
Therefore any arbitrary union $\displaystyle{\bigcup_{i \in I} U_i}$ for $U_i \in \tau$ for all $i \in I$ of these open intervals in $\tau$ will either be the "largest" subset appearing in the nesting above (if one exists) or all of $\mathbb{R}$ (if the indices are unbounded); in either case the union is contained in $\tau$.
For the third condition, from above, we see that any finite intersection $\displaystyle{\bigcap_{i=1}^{n} U_i}$ for $U_i \in \tau$ for all $i \in \{ 1, 2, ..., n \}$ of these open intervals in $\tau$ will be the "smallest" subset in the intersection in the nesting above and hence is contained in $\tau$.
Therefore $(\mathbb{R}, \tau)$ is a topological space.
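Because the members of $\tau$ are totally ordered by inclusion, unions and finite intersections reduce to taking a maximum or a minimum. A small Python sketch of this observation (the encoding of intervals by their endpoint $n$, with $0$ for $\emptyset$ and infinity for $\mathbb{R}$, is invented here for illustration):

```python
import math

# Encode the open interval (-n, n) by n, the empty set by 0, and all of R by math.inf.
def union(params):
    """Union of a family of nested intervals (-n, n): the largest n wins."""
    return max(params)

def intersection(params):
    """Finite intersection of nested intervals: the smallest n wins."""
    return min(params)
```

For example, the union of all $(-n, n)$ over $n \geq 1$ corresponds to `math.inf`, i.e. $\mathbb{R}$, which is indeed in $\tau$.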
Comment 1) P = V x I = (9V) x (0.9A) = 8.1 W, which is less power than the 10W that's required by the LEDs.
Comment 2) There are different types of "LED Driver" circuits. For the discussion in Comment 3 I'm assuming you have a regulated "constant voltage" power source and not (for example) a constant current power source.
Comment 3) I have no idea what voltage the LEDs you're using require, since you don't mention it. You say you bought a 9V power supply, so I'll just guess the LED array requires 9 Volts. Given this presumption, the amount of current that's flowing through the LED array when the array is ON is,
$$I_{LED} \approx \frac{10\,W}{9\,V} \approx 1.11\,A$$
This is also the current that must flow through transistor T2's emitter-collector path. You want to operate T2 in saturation (SWITCH ON) mode, not forward active (small signal amplifier) mode. So choose a value of 10 for T2's beta value. (You can also look in your transistor's data sheet to see what value of beta the manufacturer uses for saturation testing purposes, and use that beta value instead of just choosing a value of 10.) When T2's saturation beta is 10, the current that must flow out of T2's base lead (to ensure T2 is operating in saturation mode) should be
$$I_{B,T2} = \frac{I_{C,T2}}{\beta_{sat,T2}} = \frac{1.11\,A}{10} = 111\,mA$$
Next, eliminate resistor R2 (get rid of it completely) and redesign transistor T1 and the "current limiting" resistor R3 so that R3 limits the current flowing into T1's collector at 111 mA when T2 and T1 are both saturated (ON).
$$R3 \approx \frac{V_{CC}-V_{BE(sat),T2}-V_{CE(sat),T1}}{111\,mA}$$
Hint: Look in the data sheets for transistors T1 and T2 to determine values for \$V_{BE(sat),T2}\$ when \$I_{C,T2}=1.11\,A\$ and for \$V_{CE(sat),T1}\$ when \$I_{C(sat),T1}=111\,mA\$.
As before, choose T1's saturation beta to be 10. Therefore, T1's base current must be
$$I_{B,T1} = \frac{I_{C,T1}}{\beta_{sat,T1}} = \frac{111\,mA}{10} = 11.1\,mA$$
Verify that your microcontroller's DIO pin can source a current of 11.1 mA when the DIO pin is in its logic HIGH output state.
Change the value of resistor R1 so that when the microcontroller's DIO pin outputs a voltage of VOH (the minimum voltage for a logic HIGH output), resistor R1 limits the current flowing out of the DIO pin and into T1's base to 11.1 mA.
$$R1 \approx \frac{V_{OH}-V_{BE(sat),T1}}{11.1\,mA}$$
Some final hints. As a starting point, select/purchase components that can handle at least double the voltage, current, and power that will nominally be present in the circuit. For example, assuming the presumptions I made about your LEDs are correct, then transistor T2 will nominally have about 1.11 Amps flowing through its emitter-collector path when T2 is saturated (ON). So choose a transistor part for T2 that can continuously handle at least double this amount of current. The power that is nominally dissipated by resistor R3 will be \$P_{R3}=R3 \cdot I_{R3}^{2}\$, so choose a component for resistor R3 whose power rating is at least double this value. You might also need to do some heat dissipation calculations to determine whether a heat sink is required for T1, or T2, or both. And so on.
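For what it's worth, the arithmetic above can be collected into a small script (the data-sheet voltages and the VOH value below are placeholder assumptions, as noted in the hints; substitute your actual part values):

```python
# Placeholder values standing in for the assumptions and data-sheet numbers above.
VCC = 9.0            # supply voltage (V), per the 9 V guess
P_LED = 10.0         # LED array power (W)
beta_sat = 10.0      # forced beta for saturated switching

I_led = P_LED / VCC              # ~1.11 A through the LED array and T2
I_b_t2 = I_led / beta_sat        # ~111 mA of T2 base current
I_b_t1 = I_b_t2 / beta_sat       # ~11.1 mA of T1 base current

V_BE_sat_T2 = 0.9    # assumed data-sheet value (V) -- check your transistor
V_CE_sat_T1 = 0.2    # assumed data-sheet value (V) -- check your transistor
R3 = (VCC - V_BE_sat_T2 - V_CE_sat_T1) / I_b_t2    # ~71 ohms

V_OH = 3.0           # assumed microcontroller logic-HIGH output voltage (V)
V_BE_sat_T1 = 0.8    # assumed data-sheet value (V)
R1 = (V_OH - V_BE_sat_T1) / I_b_t1                 # ~198 ohms
```

Rounding each result down to the nearest standard resistor value keeps the base drive at or slightly above the target.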
Background Information:
I am not sure this is relevant:
Terminal value pricing:
If the derivative $X$ equals $f(S_T)$ for some $f$, then the value of the derivative at time $t$ is equal to $V(S_t,t)$, where $V(s,t)$ is given by the formula
$$V(s,t) = \exp{(-r(T-t))}\,E_{\mathbb{Q}}(f(S_T)\mid S_t = s)$$
And then the trading strategy is given by $\phi_t = \frac{\partial V}{\partial s}(S_t,t)$.
or perhaps I need to apply this formula to the question below:
$$V_t(X) = B_t\, E_{\mathbb{Q}}[B_T^{-1} X\mid \mathcal{F}_t]$$
I am not sure...
Question:
Consider a Black-Scholes model $S_t = \exp{(\sigma W_t + \mu t)}$, $B_t = \exp{(rt)}$, where $W_t$ is Brownian motion with respect to a given measure $\mathbb{P}$.
Suppose you hold a forward contract obligating you to purchase $1$ share of stock for $2$ dollars at time $t = 5$.
What is the value $X$ of this contract at maturity $t = 5$? Express your answer in terms of $S_5$.
I am not sure how to solve this. Any suggestions are greatly appreciated.
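One observation that may help: the value at maturity does not require either pricing formula, since at $t = 5$ the exchange happens immediately. This is the standard payoff of a long forward with delivery price $2$:

```latex
% Payoff of the forward contract at maturity t = 5 (delivery price 2 dollars):
X = S_5 - 2
```

You pay $2$ dollars and receive a share worth $S_5$, so the contract is worth the difference (which may be negative).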
Note: see comments for generalization from scalar fields to higher spins.
I shall do the $2$-point case and leave the $3$-point one to you as an exercise. The canonical reference which I'm using is Di Francesco, Mathieu and Senechal. This is pretty much obligatory reading if you're interested in conformal field theory.
Let $\phi_1$ and $\phi_2$ be two primary fields. Then by definition their conformal transformations are
$$\phi_1(x_1)=\left|\frac{\partial x'}{\partial x}\right|_{x=x_1}^{\Delta_1/d}\phi_1(x_1')$$
where $d$ is the spacetime dimension and $\Delta_1$ the scaling dimension of the field. There's a similar formula for $\phi_2$. Now since the measure and action in the functional integral are conformally invariant we can promote the transformation above to one of the correlation function, viz.
$$\langle\phi_1(x_1)\phi_2(x_2)\rangle=\left|\frac{\partial x'}{\partial x}\right|_{x=x_1}^{\Delta_1/d}\left|\frac{\partial x'}{\partial x}\right|_{x=x_2}^{\Delta_2/d}\langle\phi_1(x_1')\phi_2(x_2')\rangle$$
Now we start specialising to specific symmetries. Already from rotational and translational symmetry we know
$$\langle\phi_1(x_1)\phi_2(x_2)\rangle=f(|x_1-x_2|)$$
Now using a scale transformation $x\to \lambda x$ in our formula above produces
$$f(|x_1-x_2|)=\lambda^{\Delta_1+\Delta_2}f(\lambda|x_1-x_2|)$$
but this fixes the correlation function up to a constant, that is
$$\langle\phi_1(x_1)\phi_2(x_2)\rangle=\frac{C}{|x_1-x_2|^{\Delta_1+\Delta_2}}$$
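As a quick numerical sanity check (not part of the original argument), one can verify that this power-law form satisfies the scaling relation above; the constants below are arbitrary illustrative values:

```python
import math

# Arbitrary illustrative values for C and the scaling dimensions.
C, d1, d2 = 1.3, 0.7, 1.1

def f(r):
    """Candidate two-point function f(r) = C / r**(d1 + d2)."""
    return C / r ** (d1 + d2)

# Check f(r) = lam**(d1 + d2) * f(lam * r) at a few sample points.
for lam in (0.5, 2.0, 7.3):
    for r in (0.2, 1.0, 4.0):
        assert math.isclose(f(r), lam ** (d1 + d2) * f(lam * r))
```

The check passes identically for any exponent, which is exactly the statement that scale invariance alone fixes only the functional form, not the constant $C$.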
The final ingredient is invariance under special conformal transformations. These have
$$\left|\frac{\partial x'}{\partial x}\right|=\frac{1}{(1-2b\cdot x+b^2x^2)^d}$$
Substituting this into our formulae above fixes $\Delta_1=\Delta_2$. Therefore we have exactly proved your first result above
$$\langle\phi_1(x_1)\phi_2(x_2)\rangle=\frac{C}{|x_1-x_2|^{2\Delta_1}}$$
if your operators have the same scaling dimension $\Delta_1$.
Now try the $3$-point case using the same symmetries as above! Also, have a think about why this method fails at $4$-point. Hint: you can make conformally invariant cross-ratios when you have $4$ positions.
Endnote on Scaling Dimensions
Remember that the scaling dimension is not necessarily the naive engineering dimension of your field, even in a CFT. This is because loop corrections can introduce divergences even though there's global conformal symmetry. Various theorems guarantee that these don't renormalize masses or couplings but they do cause field strength renormalization $\phi \to \sqrt{Z}\phi$. Typically $Z$ carries some dimensionality, hence we get anomalous dimensions. This argument only falls down if the conformal symmetry is local (as in string theory) or if the operator is protected by supersymmetry.
What does a truth table for a QRCA look like?
You don't want to know. It will be a gigantic complicated table that provides no insight whatsoever. At the very least you need to use boolean algebra instead of a table, but even that will be cumbersome and will require many intermediate values that ultimately are just a less-visual way of describing an addition circuit.
If it helps, here is the set of equations for a simpler operation, an increment operation. The equations define how each output bit can be computed from the input bits:
$o_0 = i_0 \oplus 1$
$o_1 = i_1 \oplus i_0$
$o_2 = i_2 \oplus (i_0 \land i_1)$
$\vdots$
$o_n = i_n \oplus \bigwedge_{k=0}^{n-1} i_k$
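These equations are just a carry rippling through an AND chain; a small Python sketch (the least-significant-bit-first list convention is my choice):

```python
def increment_bits(bits):
    """Apply o_k = i_k XOR (i_0 AND ... AND i_{k-1}); bits[0] is the LSB."""
    carry = 1              # the empty AND is 1, giving o_0 = i_0 XOR 1
    out = []
    for b in bits:
        out.append(b ^ carry)
        carry &= b         # carry = i_0 AND ... AND i_k
    return out
```

For example, `increment_bits([1, 1, 0])` increments 3 to 4, wrapping around at `2**n`.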
What would a unitary matrix for a QRCA be?
It's a permutation matrix.
As a starting point, here is the permutation matrix corresponding to a 2-bit increment:
$$\text{Inc}_2 = \begin{bmatrix}&&&1\\1&&&\\&1&&\\&&1&\\\end{bmatrix}$$
and a 3-bit increment:
$$\text{Inc}_3 = \begin{bmatrix}&&&&&&&1\\1&&&&&&&\\&1&&&&&&\\&&1&&&&&\\&&&1&&&&\\&&&&1&&&\\&&&&&1&&\\&&&&&&1&\\\end{bmatrix}$$
I suspect you see the pattern. Just start with an identity matrix and shift it down by 1 (with the bottom row wrapping around to the top). To add 2, instead of adding 1 (i.e. incrementing) you would just shift down by 2 instead of by 1.
In an addition circuit, the amount of shifting depends on the other input. So you end up with a series of sub-matrices with increasingly-shifted diagonals:
$$\text{Add}_2 = \begin{bmatrix}\begin{bmatrix}1&&&\\&1&&\\&&1&\\&&&1\\\end{bmatrix}\\&\begin{bmatrix}&&&1\\1&&&\\&1&&\\&&1&\\\end{bmatrix}\\&&\begin{bmatrix}&&1&\\&&&1\\1&&&\\&1&&\\\end{bmatrix}\\&&&\begin{bmatrix}&1&&\\&&1&\\&&&1\\1&&&\\\end{bmatrix}\end{bmatrix}$$
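Both patterns are easy to generate programmatically; a sketch with NumPy (the function names here are mine):

```python
import numpy as np

def inc_matrix(nbits, shift=1):
    """Identity with rows rolled down by `shift` (wrapping): adds shift mod 2**nbits."""
    return np.roll(np.eye(2 ** nbits, dtype=int), shift, axis=0)

def add_matrix(nbits):
    """Block-diagonal permutation matrix for |a>|b> -> |a>|b + a mod 2**nbits>."""
    dim = 2 ** nbits
    m = np.zeros((dim * dim, dim * dim), dtype=int)
    for a in range(dim):
        m[a * dim:(a + 1) * dim, a * dim:(a + 1) * dim] = inc_matrix(nbits, shift=a)
    return m
```

`inc_matrix(2)` reproduces $\text{Inc}_2$ above and `add_matrix(2)` reproduces $\text{Add}_2$; both satisfy $MM^T = I$, confirming they are unitary permutation matrices.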
What would a circuit diagram for a QRCA look like?
There are many possible constructions. Here is one that works entirely inline:
You can play with this construction in Quirk.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt s$ =7 and 13 TeV with ALICE
(Elsevier, 2017-11)
Two-particle correlations in relative azimuthal angle (Δ ϕ ) and pseudorapidity (Δ η ) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...
The new Inner Tracking System of the ALICE experiment
(Elsevier, 2017-11)
The ALICE experiment will undergo a major upgrade during the next LHC Long Shutdown scheduled in 2019–20 that will enable a detailed study of the properties of the QGP, exploiting the increased Pb-Pb luminosity ...
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE
(Elsevier, 2017-11)
Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...
Charmonium production in Pb–Pb and p–Pb collisions at forward rapidity measured with ALICE
(Elsevier, 2017-11)
The ALICE collaboration has measured the inclusive charmonium production at forward rapidity in Pb–Pb and p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV and $\sqrt{s_{NN}}$ = 8.16 TeV, respectively. In Pb–Pb collisions, the J/$\psi$ and $\psi$(2S) nuclear ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Jet-hadron correlations relative to the event plane at the LHC with ALICE
(Elsevier, 2017-11)
In ultra relativistic heavy-ion collisions at the Large Hadron Collider (LHC), conditions are met to produce a hot, dense and strongly interacting medium known as the Quark Gluon Plasma (QGP). Quarks and gluons from incoming ...
Measurements of the nuclear modification factor and elliptic flow of leptons from heavy-flavour hadron decays in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 and 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the ALICE results on the nuclear modification factor and elliptic flow of electrons and muons from open heavy-flavour hadron decays at mid-rapidity and forward rapidity in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Is there any connection between the size of the largest independent set in a graph, and the minimum number of colors required to color the graph? I know that we can potentially color all the vertices in the largest independent set in the same color, but we know nothing about the rest of the vertices (besides being a vertex cover). Am I wrong?
Let $n$ be the number of vertices of a graph $G$, $\chi(G)$ be its chromatic number, and $\alpha(G)$ be the size of its maximum independent set. Then $$ \alpha(G) \chi(G) \geq n. $$ The reason is that every color class in a legal coloring is an independent set.
The bound can be tight – simple examples are complete graphs ($\alpha(G) = 1$ and $\chi(G) = n$) and empty graphs ($\alpha(G) = n$ and $\chi(G) = 1$). More generally, if $G$ is a union of $m$ many $n/m$-cliques, then $\alpha(G) = m$ while $\chi(G) = n/m$; and if $G$ is a complete $m$-partite graph in which all parts have size $n/m$, then $\alpha(G) = n/m$ and $\chi(G) = m$.
The bound can also be far from tight. A simple example is the union of an independent set of size $n/2$ and a clique of size $n/2$: $\alpha(G) = n/2+1$ and $\chi(G) = n/2$.
Each color class is an independent set, and so has size at most $\alpha(G)$, the maximum size of an independent set. Hence, the number of color classes needed to cover the vertex set $V(G)$ is at least $|V(G)|/\alpha(G)$. Thus, the chromatic number is at least $|V(G)|/\alpha(G)$.
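A brute-force check of the bound on a small example, the 5-cycle, where $\alpha(G) = 2$, $\chi(G) = 3$, and $2 \cdot 3 \geq 5$ (this sketch is only practical for tiny graphs):

```python
from itertools import combinations, product

def alpha(nodes, edges):
    """Size of a maximum independent set, by brute force."""
    for k in range(len(nodes), 0, -1):
        for cand in combinations(nodes, k):
            s = set(cand)
            if all(not (u in s and v in s) for u, v in edges):
                return k
    return 0

def chi(nodes, edges):
    """Chromatic number, by brute force over all colorings."""
    for k in range(1, len(nodes) + 1):
        for colors in product(range(k), repeat=len(nodes)):
            c = dict(zip(nodes, colors))
            if all(c[u] != c[v] for u, v in edges):
                return k

# 5-cycle
cycle5 = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```

Evaluating `alpha(*cycle5) * chi(*cycle5)` gives $6 \geq n = 5$, as the inequality requires.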
I have completed a proof of this that I am inclined to believe is correct, or at least on the right track. I would like to ask if it is indeed correct, or if I need a nudge in the right direction. Perhaps there is a better way to go about this?
My Proof:
(1) I will show that the existence of a generator for an infinite cyclic group implies the existence of a second which is the inverse of the former.
If $G$ is an infinite cyclic group, then every element of $G$ can be written as $a^n$ for $n\in\mathbb{Z}$ and $a$ some element of $G$. It is clear, then, that $a^{-1}$ and $a$ are both generators for $G$ since, given $n\in\mathbb{Z}$, we can choose $m\in\mathbb{Z}$ such that $(a^{-1})^n=(a)^m$ by taking $m=-n$. Moreover, we are guaranteed that $a^{-1}\ne a$, since $a=a^{-1}$ would imply $a^2=e_G$, making $\langle a\rangle$ finite and contradicting that $G$ is infinite. So given one generator, there must be another, namely, the inverse of the given generator.
(2) Now I will show that $G$ cannot have more than two generators.
Now suppose for the sake of contradiction that $n>2$ and $G=\langle{a_1}\rangle=\ldots=\langle{a_n}\rangle$ (which is to say that any one of these $n$ elements generates $G$), where $a_i=a_j\iff i=j$. Then, by invoking the pigeonhole principle, at least one of the members of $\{a_1,\ldots,a_n\}$ is not an inverse of one of the others. Let us call this element $a_i\ne a_1,a_{1}^{-1}$. Thus, if $a_1$ generates $G$, then there exists $b\in\mathbb{Z}$ for which $a_{1}^{b}=a_i$. Hence, if these both generate $G$, then $\forall{c\in\mathbb{Z}},\exists{j\in\mathbb{Z}}$ such that $a_{1}^{c}=a_{i}^{j}$, but $a_{i}=a_{1}^{b}$, so $a_{1}^{c}=a_{1}^{bj}$. But if $b\ne{\pm1}$, this cannot be true, as this would state (for $c=b-1$) that $b-1=bj\implies j=1-\frac{1}{b}$, and $j$ cannot be an integer for any value other than $b=\pm{1}$, which contradicts the assumption that $a_{i}$ is neither the inverse of $a_{1}$ nor equal to $a_1$.
Combining statements (1) and (2), it follows that if $G$ has one generator and it is an infinite cyclic group, then this generator is not the identity element and $G$ must have two generators. Additionally, if $G$ is an infinite cyclic group, it cannot have $n>2$ generators. These two statements mean that $G$ must have precisely two generators.
For simplicity, we will only consider the case $r = 1$.
I have changed the association of parameters $a,b$ from directions $-y$ and $+x$ to directions $+x$ and $+y$. The end formula is the same as it is symmetric with respect to $a$ and $b$.
We will use the following version of the Gauss-Bonnet theorem to compute the desired area.
For any "polygonal" region $\Omega \subset S^2$ whose boundary $\partial \Omega$ consists of vertices $v_1, v_2, \ldots, v_n$ joined by smooth curves $e_1, e_2, \ldots, e_n$ as edges ($e_i$ connects $v_i$ to $v_{i+1}$, and $v_{n+1}$ is an alias of $v_1$), the area is given by the formula:
$$\verb/Area/(\Omega) = 2\pi - \sum_{i=1}^n \Delta(v_i) - \sum_{i=1}^n \int_{e_i} k_g ds$$
where $\Delta(v_i)$ is the angle between the tangent vectors of $e_{i-1}$ and $e_{i}$ at vertex $v_i$ ($e_0$ is an alias of $e_n$), and $k_g$ is the geodesic curvature along edge $e_i$.
For any $a,b \in \mathbb{R}$, let $a' = \frac{a}{\sqrt{1-a^2}}$, $b' = \frac{b}{\sqrt{1-b^2}}$ and $c = \sqrt{1-a^2-b^2}$.
Let $X_a$ and $Y_b$ be the half spaces $x \ge a$ and $y \ge b$ respectively.
When $a^2+b^2 < 1$, $a', b', c$ are real numbers and the intersection of the above two half spaces with the unit sphere, $\Omega = S^2 \cap X_a \cap Y_b$, is non-empty. The boundary $\partial \Omega$ consists of two vertices $v_1 = (a,b,c)$ and $v_2 = (a,b,-c)$, joined by two circular arcs $e_1$, $e_2$ as edges.
The circular arc $e_1$ connects $v_1$ to $v_2$. It lives on the small circle $S^2 \cap \partial Y_b$ on the plane $y = b$. It subtends an angle $\pi - 2\tan^{-1}\frac{a}{c}$ with respect to its center $(0,b,0)$. Since $\int k_g ds$ over the whole small circle $S^2 \cap \partial Y_b$ is $2\pi b$, we find $$\int_{e_1} k_g ds = b\left(\pi - 2\tan^{-1}\frac{a}{c}\right)$$
The circular arc $e_2$ connects $v_2$ to $v_1$. It lives on the small circle $S^2 \cap \partial X_a$ on the plane $x = a$. It subtends an angle $\pi - 2\tan^{-1}\frac{b}{c}$ with respect to its center $(a,0,0)$. Since $\int k_g ds$ over the whole small circle $S^2 \cap \partial X_a$ is $2\pi a$, we find $$\int_{e_2} k_g ds = a\left(\pi - 2\tan^{-1}\frac{b}{c}\right)$$
$e_1$ lies on plane $y = b$, its tangent vector at $v_1$ points toward $(0,1,0) \times (a,b,c) = (c,0,-a)$.
$e_2$ lies on plane $x = a$, its tangent vector at $v_1$ points toward $(1,0,0) \times (a,b,c) = (0,-c,b)$. This means that at $v_1$, the tangent vector has turned by an angle $$\Delta(v_1) = \cos^{-1}\left[\frac{(c,0,-a)\cdot(0,-c,b)}{\sqrt{(c^2+a^2)(c^2+b^2)}}\right] = \cos^{-1}\left[-\frac{ab}{\sqrt{(1-a^2)(1-b^2)}}\right] =\pi - \cos^{-1}(a'b')$$
By symmetry, we have $\Delta(v_2) = \Delta(v_1)$.
Combining all these, we find
$$\begin{align}\verb/Area/(\Omega) &= 2\pi - 2(\pi - \cos^{-1}(a'b')) - a\left(\pi - 2\tan^{-1}\frac{b}{c}\right)- b\left(\pi - 2\tan^{-1}\frac{a}{c}\right)\\&= 2\left(\cos^{-1}(a'b') + a\tan^{-1}\frac{b}{c} + b\tan^{-1}\frac{a}{c}\right) - \pi(a+b)\end{align}$$Notice if $\theta = \cos^{-1}(a'b')$, then$$\cos^2\theta = (a'b')^2 = \frac{(ab)^2}{(1-a^2)(1-b^2)} = \frac{(ab)^2}{c^2+(ab)^2}\implies \cot^2\theta = \left(\frac{ab}{c}\right)^2$$With a little bit of algebra, one can verify $\cos^{-1}(a'b') = \frac{\pi}{2} - \tan^{-1}\frac{ab}{c}$.We can simplify above expression of area to
$$\bbox[padding: 1em;border:1px solid blue;]{\verb/Area/(\Omega) =\pi(1-a-b) + 2\left(a\tan^{-1}\frac{b}{c} + b\tan^{-1}\frac{a}{c} - \tan^{-1}\frac{ab}{c}\right)}$$
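A Monte Carlo check of the boxed formula (a sketch; the sample values $a=0.3$, $b=0.4$ are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.3, 0.4
c = np.sqrt(1 - a * a - b * b)

# Closed-form area from the boxed formula
area = (np.pi * (1 - a - b)
        + 2 * (a * np.arctan(b / c) + b * np.arctan(a / c) - np.arctan(a * b / c)))

# Monte Carlo estimate: uniform points on the unit sphere via normalized Gaussians
n = 1_000_000
p = rng.normal(size=(n, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
mc_area = 4 * np.pi * np.mean((p[:, 0] >= a) & (p[:, 1] >= b))
```

The two estimates agree to within Monte Carlo error, which supports both the formula and the $c = \sqrt{1-a^2-b^2}$ convention.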
J/ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
This module demonstrates how to convert the lower triangular covariance table from the Excel Analysis ToolPak to a full covariance matrix for use with the MMULT function.
First, a brief review of square matrices.
A matrix is a rectangular array of numbers of the form $$A=\begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}$$ and is called an m × n matrix because it has m rows and n columns. Matrix \(A\) can also be written as \(A=(a_{ij} )\) where \(a_{ij}\) is the element of \(A\) in the i-th row and j-th column, where \(1 \leq i \leq m\) and \(1 \leq j \leq n\). The diagonal elements are \((a_{ij} ) \; \forall \; i=j\).

Square matrices
$$B=\begin{bmatrix} 5 & -2 & 6 \\ -2 & 11 & 9.1 \\ 6 & 9.1 & 10 \end{bmatrix}$$
The matrix \(B\) is a 3 × 3 square matrix because it has equal numbers of rows and columns, that is, \(m = n = 3\). The diagonal elements are the values \(5, 11, 10\).

1. Upper triangular matrix
The \(n \times n\) matrix \(C\) is upper triangular if all elements below the main diagonal are 0. $$C_1=\begin{bmatrix} 5 & 2 & 6 & 7 \\ 0 & 11 & 9 & 4 \\ 0 & 0 & 10 & 1 \\ 0 & 0 & 0 & 8\end{bmatrix}, \qquad C_2=\begin{bmatrix} 5 & 2 & 6 & 7 \\ 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 8\end{bmatrix}$$

2. Lower triangular matrix
The \(n \times n\) matrix \(C\) is lower triangular if all elements above the main diagonal are 0. $$C_3=\begin{bmatrix} 5 & 0 & 0 & 0 \\ 3 & 9 & 0 & 0 \\ 7 & 2 & 1 & 0 \\ 4 & 5 & 8 & 6\end{bmatrix}, \qquad C_4=\begin{bmatrix} 5 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 \\ 7 & 4 & 4 & 0 \\ 6 & 9 & 1 & 8\end{bmatrix}$$

3. Diagonal matrix
The \(n \times n\) matrix \(C\) is diagonal if all elements off the main diagonal are 0. $$C_5=\begin{bmatrix} 5 & 0 & 0 & 0 \\ 0 & 9 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 6\end{bmatrix}, \qquad C_6=\begin{bmatrix} 5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 8\end{bmatrix}$$ \(C_5\) has diagonal elements \(5,9,1,6\). \(C_6\) has diagonal elements \(5,0,4,8\).

4. Symmetrical matrix
The \(n \times n\) matrix \(C\) is symmetrical if the matrix is equal to its transpose, that is, \(C=C^T\). $$C_7=\begin{bmatrix} 5 & 3 & 7 & 4 \\ 3 & 9 & 2 & 5 \\ 7 & 2 & 1 & 8 \\ 4 & 5 & 8 & 6\end{bmatrix}$$

Lower triangular covariance table
The Analysis ToolPak includes tools to estimate Covariance and Correlation. Both procedures produce output that is lower triangular. The omission of the upper triangle was originally based on the need to save several bytes of (expensive) computer memory. The same reasoning led to the ToolPak being an Add-In, activated only when required. An example of the Covariance table for four stock returns is shown in figure 1.
It is clear from figure 1, however, that the output is not a lower triangular matrix, as described in point 2 above, because the upper triangle is blank rather than containing zeros. The output is better described as a lower triangular table.
To convert the lower triangular table to a symmetrical matrix for use in an MMULT equation, do the following:

1. Select the 4 x 4 lower triangular variance-covariance array (with the red frame in figure 1), and Copy the selection to the Clipboard.
2. Select the top left cell of a temporary work area (cell H8).
3. Paste > Special > Transpose. You can also select Paste > Values to avoid format issues.
4. Select the transposed array from step 3 (with the green frame in figure 1), and copy it to the Clipboard.
5. Select the top left cell, C3, of the variance-covariance array from step 1, then Paste > Special > Skip Blanks.

This example was developed in Excel 2013 Pro 64 bit. Last modified: 26 Oct 2018, 7:18 am [Australian Eastern Standard Time (AEST)]
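Outside Excel, the same mirroring is a one-liner; a NumPy sketch (the numbers reuse matrix \(B\) above, standing in for the ToolPak output with the blank upper triangle entered as zeros):

```python
import numpy as np

# Lower triangular covariance output, blanks treated as zeros.
L = np.array([
    [ 5.0,  0.0,  0.0],
    [-2.0, 11.0,  0.0],
    [ 6.0,  9.1, 10.0],
])

# Mirror the strict lower triangle across the diagonal: the equivalent of the
# copy / transpose / skip-blanks steps above.
full = L + L.T - np.diag(np.diag(L))
```

The result is the full symmetric covariance matrix, ready for matrix multiplication.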
A ring $R$ is a Boolean ring provided that $a^2=a$ for every $a \in R$. How can we show that every Boolean ring is commutative?
Every Boolean ring is of characteristic 2, since $a+a=(a+a)^2=a^2+a^2+a^2+a^2=a+a+a+a\implies a+a=0$.
Now, for any $x,y$ in the ring $x+y=(x+y)^2=x^2+xy+yx+y^2=x+y+xy+yx$, so $xy+yx=0$ and hence $xy+(xy+yx)=xy$. But since the ring has characteristic 2, $yx=xy$.
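As a sanity check, the identities used above can be verified on a concrete Boolean ring: the power-set ring of a small set, with symmetric difference as addition and intersection as multiplication (this example ring is my choice, not part of the original question):

```python
from itertools import combinations

# All subsets of {0, 1, 2} form a Boolean ring under ^ (symmetric difference)
# and & (intersection).
universe = (0, 1, 2)
elements = [frozenset(c) for r in range(4) for c in combinations(universe, r)]

add = lambda a, b: a ^ b   # symmetric difference
mul = lambda a, b: a & b   # intersection
zero = frozenset()

# The identities used in the proof:
assert all(mul(a, a) == a for a in elements)             # a^2 = a
assert all(add(a, a) == zero for a in elements)          # characteristic 2
assert all(add(mul(x, y), mul(y, x)) == zero             # xy + yx = 0
           for x in elements for y in elements)
```

Of course intersection is commutative by inspection; the point is only that the intermediate identities hold in an actual Boolean ring.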
I always like to know where these problems come from, their history. This was first proved in a paper by Stone in 1936. Here's a link to that paper for anyone who is interested:
His proof is in the first full paragraph on p. 40.
Of course, this is an old chestnut: if you are interested in typical generalizations of this commutativity theorem in a wider, more structural context (to associative, unitary rings) I suggest reading T.Y. Lam's beautiful Springer GTM 131 "A First Course in Noncommutative Rings", Chapter 4, §12, in particular the Jacobson-Herstein Theorem (12.9), p. 209: A (unitary, associative) ring $R$ is commutative iff for any $a,b\in R$ one always has $(ab-ba)^{n+1}=ab-ba$ for some $n\in\mathbb N$ ($n$ generally depending on $a,b$). (Further, using Artin's theorem concerning diassociativity of alternative algebras, associativity of $R$ may be weakened to alternativity.) Cp. also the exercises given, in particular Ex. 9. Note that the Boolean case is special, as the ring considered needn't be unitary a-priori. Kind regards - Stephan F. Kroneck.
We want to show that $xy = yx$ for all $x,y \in R$. We know that $(x+y)^2 = x+y$. Expanding, $(x+y)^2 = (x+y)(x+y) = xx+xy+yx+yy = x+xy+yx+y$. Comparing with $x+y$ gives $xy + yx = 0$, i.e. $xy = -yx$. Since every element satisfies $a+a=0$ (the ring has characteristic $2$), $-yx = yx$, so that $xy = yx$.
Plug $a = x + y$.
HINT $\rm\quad\ \ A = X+Y\ \ \Rightarrow\ \ X\ Y = - Y\ X\:.\ $ But $\rm -1 = 1\ $ via $\rm\ A = -1$
If $a,b\in R$, \begin{align} 2ba &=4ba-2ba\\ &=4(ba)^2-2ba\\ &=(2ba)^2-2ba\\ &=2ba-2ba\\ &=0, \end{align} so \begin{align} ab &=ab+0\\ &=ab+2ba\\ &=[ab+ba]+ba\\ &=[(a+b)^2-a^2-b^2]+ba\\ &=[(a+b)-a-b]+ba\\ &=0+ba\\ &=ba. \end{align}
As Yuval points out $(x+y)^{2} = x+y$ which implies $x^{2} + y^{2} + x \cdot y + y \cdot x = x+y$. Now from this you have $x \cdot y + y \cdot x =0$.
When you get to the part where $ab = -ba$ (1):

Pre-multiply (1) by $a$: $a(ab) = a(-ba)$, so $a^2b = -aba$, i.e. $ab = -aba$ (2).

Post-multiply (1) by $a$: $(ab)a = (-ba)a$, so $aba = -ba^2 = -ba$ (3).

From (2) & (3), you can deduce that $ab = -aba = ba$.
I solved this question myself and then looked at the solutions on the Internet; since I didn't see any solution like mine (most probably because my solution doesn't use the fact that the ring is of characteristic $2$), I will put it here.
For any $x,y$ in the ring let $a=xy-yx$, then $$xax=x^2yx-xyx^2=xyx-xyx=0$$ and $$xay=x^2y^2-xyxy=xy-(xy)^2=0$$ hence $$0=(xax)y-(xay)x=xa(xy-yx)=xa^2=xa.$$ Seeking for symmetry we find that $y(-a)=0$, and so $$ya=ya+y(-a)=y(a+(-a))=y0=0$$ hence $$0=x(ya)-y(xa)=(xy-yx)a=a^2=a.$$ This completes the proof.
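As a concrete sanity check (not a proof), the power set of a finite set is a Boolean ring under symmetric difference as $+$ and intersection as $\cdot$, so we can verify $a^2=a$, characteristic $2$, and commutativity by brute force. A minimal Python sketch:

```python
from itertools import combinations

# Model a Boolean ring concretely: subsets of a finite universe, with
# symmetric difference as addition and intersection as multiplication.
universe = {0, 1, 2}
elements = [frozenset(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

add = lambda a, b: a ^ b   # symmetric difference
mul = lambda a, b: a & b   # intersection

for a in elements:
    assert mul(a, a) == a                 # a^2 = a  (Boolean)
    assert add(a, a) == frozenset()       # a + a = 0 (characteristic 2)
    for b in elements:
        assert mul(a, b) == mul(b, a)     # commutativity, as proved above
```

By Stone's representation theorem, every Boolean ring is isomorphic to a ring of sets of this kind, so the check is less arbitrary than it may look.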
The Annals of Mathematical Statistics, Volume 26, Number 2 (1955), 189-211.

On Tests of Normality and Other Tests of Goodness of Fit Based on Distance Methods

Abstract
The authors study the problem of testing whether the distribution function (d.f.) of the observed independent chance variables $x_1, \cdots, x_n$ is a member of a given class. A classical problem is concerned with the case where this class is the class of all normal d.f.'s. For any two d.f.'s $F(y)$ and $G(y)$, let $\delta(F, G) = \sup_y | F(y) - G(y) |$. Let $N(y \mid \mu, \sigma^2)$ be the normal d.f. with mean $\mu$ and variance $\sigma^2$. Let $G^\ast_n(y)$ be the empiric d.f. of $x_1, \cdots, x_n$. The authors consider, inter alia, tests of normality based on $\nu_n = \delta(G^\ast_n(y), N(y \mid \bar{x}, s^2))$ and on $w_n = \int (G^\ast_n(y) - N(y \mid \bar{x}, s^2))^2 d_yN (y \mid \bar{x}, s^2)$. It is shown that the asymptotic power of these tests is considerably greater than that of the optimum $\chi^2$ test. The covariance function of a certain Gaussian process $Z(t), 0 \leqq t \leqq 1$, is found. It is shown that the sample functions of $Z(t)$ are continuous with probability one, and that $\underset{n\rightarrow\infty}\lim P\{nw_n < a\} = P\{W < a\}, \text{where} W = \int^1_0 \lbrack Z(t)\rbrack^2 dt$. Tables of the distribution of $W$ and of the limiting distribution of $\sqrt{n}\nu_n$ are given. The role of various metrics is discussed.
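The statistic $\nu_n$ from the abstract is straightforward to compute. The sketch below assumes NumPy/SciPy are available and uses the maximum-likelihood estimates $\bar{x}$ and $s^2$; the supremum over $y$ is attained at the sample points:

```python
import numpy as np
from scipy.stats import norm

def nu_n(x):
    """delta(G*_n, N(. | xbar, s^2)): sup-distance between the empiric
    d.f. and the normal d.f. with estimated mean and variance."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    f = norm.cdf(x, loc=x.mean(), scale=x.std(ddof=0))
    i = np.arange(1, n + 1)
    # Compare F against both i/n and (i-1)/n at each order statistic.
    return max(np.max(i / n - f), np.max(f - (i - 1) / n))
```

For a genuinely normal sample, $\sqrt{n}\,\nu_n$ stays bounded in distribution, so $\nu_n$ itself is small for large $n$.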
Article information
- Source: Ann. Math. Statist., Volume 26, Number 2 (1955), 189-211.
- Dates: First available in Project Euclid: 28 April 2007
- Permanent link to this document: https://projecteuclid.org/euclid.aoms/1177728538
- Digital Object Identifier: doi:10.1214/aoms/1177728538
- Mathematical Reviews number (MathSciNet): MR70919
- Zentralblatt MATH identifier: 0066.12301
- JSTOR: links.jstor.org

Citation
Kac, M.; Kiefer, J.; Wolfowitz, J. On Tests of Normality and Other Tests of Goodness of Fit Based on Distance Methods. Ann. Math. Statist. 26 (1955), no. 2, 189--211. doi:10.1214/aoms/1177728538. https://projecteuclid.org/euclid.aoms/1177728538 |
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the
way in which we match up these two objects, to see that they look the same.
For example, any two of these squares look the same after you rotate and/or reflect them:
An isomorphism between two of these squares is a
process of rotating and/or reflecting the first so it looks just like the second.
As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse:
Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that

$$ g \circ f = 1_x $$

and

$$ f \circ g = 1_y . $$
I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\).
Puzzle 140. Prove that any morphism has at most one inverse.

Puzzle 141. Give an example of a morphism in some category that has more than one left inverse.

Puzzle 142. Give an example of a morphism in some category that has more than one right inverse.
Now we're ready for isomorphisms!
Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\).
Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like!
What's an isomorphism in the category \(\mathbf{3}\)? Remember, this is a free category on a graph:
The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2:
$$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1:
$$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms:
$$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism!
In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism.
We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a
preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\).

Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic.

Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\).
Puzzle 144 says that in a poset, the only isomorphisms are identities.
Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions.
Puzzle 145. Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\).
So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them.
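Puzzles 145-146 can be explored concretely: a bijection between finite sets has an inverse function, and the two composites are identities. A quick Python check with arbitrary example sets:

```python
# A bijection in Set and its inverse, checked against the two equations
# g . f = 1_x and f . g = 1_y from the definition of inverse.
f = {1: 'a', 2: 'b', 3: 'c'}          # a function {1,2,3} -> {'a','b','c'}
g = {v: k for k, v in f.items()}      # well-defined only because f is one-to-one

assert all(g[f[x]] == x for x in f)   # g after f is the identity on {1,2,3}
assert all(f[g[y]] == y for y in g)   # f after g is the identity on {'a','b','c'}
```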
One more example:
Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism.
This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the
isomorphisms deserve to be called 'natural isomorphisms'.
But what are they like?
Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism
$$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes:
Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism
$$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that
$$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means
$$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\).
In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\).
But the converse is true, too! It takes a
little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147. Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism.
Doing this will help you understand natural isomorphisms. But you also need examples!
Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal!
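Here is one possible shape of an answer to Puzzle 148, sketched in Python with hypothetical data: the schema is the free category on a single arrow worksIn : Emp → Dept, the databases \(F, G\) are given by their sets and functions, and \(\alpha\) is a componentwise bijection making the naturality square commute:

```python
# Schema: the free category on one arrow  worksIn : Emp -> Dept.
# Two databases (functors into Set); all the names below are made up.
F_emp, F_dept = {'alice', 'bob'}, {'hr', 'it'}
F_worksIn = {'alice': 'hr', 'bob': 'it'}

G_emp, G_dept = {'a01', 'b02'}, {'HR', 'IT'}
G_worksIn = {'a01': 'HR', 'b02': 'IT'}

# A natural transformation alpha: F => G is one function per object...
alpha_emp = {'alice': 'a01', 'bob': 'b02'}
alpha_dept = {'hr': 'HR', 'it': 'IT'}

# ...making the naturality square for worksIn commute:
for x in F_emp:
    assert alpha_dept[F_worksIn[x]] == G_worksIn[alpha_emp[x]]

# It is a natural *isomorphism* because each component is a bijection.
assert len(set(alpha_emp.values())) == len(F_emp) == len(G_emp)
assert len(set(alpha_dept.values())) == len(F_dept) == len(G_dept)
```

The two databases store "the same" employees and departments under different naming conventions; the natural isomorphism is exactly the consistent renaming.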
We should talk about this. |
$\displaystyle\color{darkblue}{3\int x\sin\left(\dfrac x4\right)\,\mathrm dx}$
$$\begin{align} \dfrac{-4x}{x}\cos\dfrac x4 \,\,\boldsymbol\Rightarrow\,\, & -4\cos\left(\dfrac x4\right)-\int \dfrac{-4}{x}\cos\left(\dfrac x4 \right)\,\mathrm dx\\\,\\ & 3\left(-4\cos\left(\dfrac x4\right)+\int\dfrac4x\cos\left(\dfrac x4\right)\,\mathrm dx\right)\\\,\\ &\int\dfrac{4\cos\left(\frac x4\right)}{x}\mathrm dx \end{align}$$ $\displaystyle \color{darkblue}{uv-\int v\dfrac{\mathrm du}{\mathrm dx}\,\mathrm dx }$
$\displaystyle\boxed{\displaystyle\,\,-4\cos\left(\dfrac x4\right)+4\int\dfrac{\cos x/4}{x}\,\mathrm dx\,\,}$
$\displaystyle 3\left[-4\cos\left( \dfrac x4\right) +4\left(\cos\left( \dfrac x4\right)\ln(x)\right)\right]$
$\displaystyle -12\cos (x/4) + 12\cos (x/4) \ln(x) \rightarrow \text{ wrong.}$
$\displaystyle\color{darkblue}{\int 3x\sin\left(\dfrac x4\right)\,\mathrm dx}$ $\quad\quad\quad\quad\displaystyle\int\dfrac{\cos(x/4)}{x}\rightarrow\dfrac1x\int\cos(x/4)\mathrm dx$
$\displaystyle3\left[-4\cos(x/4)+\dfrac4x\sin(x/4)\right]$
$\displaystyle -12\cos(x/4)+\dfrac{48}{x}\sin(x/4)$
In the text above is my work done to solve the following question:
Find the indefinite integral of: $\left[3x\sin\left(\dfrac x4\right)\right]$
The bordered area is the furthest I got (there should be a 3 at the front multiplying the whole expression, but I usually add that at the end). The part where I wrote "wrong" is what I thought the answer was. What I'm having trouble with is integrating $\frac{\cos(x/4)}{x}$. Would I need to integrate it by parts to get that integral? Thanks in advance, I hope I made some sense in what I'm trying to achieve. I guess what I'm looking for is a way to integrate $$ \frac{\cos(x/4)}{x}$$
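One way around the problematic integral, as a sanity check (using sympy, if it is available): taking $u = x$ and $\mathrm dv = \sin(x/4)\,\mathrm dx$ in the parts formula leaves only $\int\cos(x/4)\,\mathrm dx$ to do, so no $\cos(x/4)/x$ term ever appears. The resulting antiderivative can be verified by differentiation:

```python
import sympy as sp

x = sp.symbols('x')
integrand = 3 * x * sp.sin(x / 4)

# Parts with u = x, dv = sin(x/4) dx: du = dx, v = -4*cos(x/4), so
# int 3x sin(x/4) dx = -12x cos(x/4) + 12 * int cos(x/4) dx.
result = -12 * x * sp.cos(x / 4) + 48 * sp.sin(x / 4)
assert sp.simplify(sp.diff(result, x) - integrand) == 0
```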
Since you want to learn methods for computing expectations, and you wish to know some simple ways, you will enjoy using the moment generating function (mgf)
$$\phi(t) = E[e^{tX}].$$
The method works especially well when the distribution function or its density are given as exponentials themselves. In this case, you don't actually have to do any integration after you observe
$$t^2/2 -\left(x - t\right)^2/2 = t^2/2 + (-x^2/2 + tx - t^2/2) = -x^2/2 + tx,$$
because, writing the standard normal density function at $x$ as $C e^{-x^2/2}$ (for a constant $C$ whose value you will not need to know), this permits you to rewrite its mgf as
$$\phi(t) = C\int_\mathbb{R} e^{tx} e^{-x^2/2} dx = C\int_\mathbb{R} e^{-x^2/2 + tx} dx = e^{t^2/2}C\int_\mathbb{R} e^{-(x-t)^2/2} dx .$$
On the right hand side, following the $e^{t^2/2}$ term, you will recognize the integral of the total probability of a Normal distribution with mean $t$ and unit variance, which therefore is $1$. Consequently
$$\phi(t) = e^{t^2/2}.$$
Because the Normal density gets small at large values so rapidly, there are no convergence issues regardless of the value of $t$. $\phi$ is recognizably analytic at $0$, meaning it equals its MacLaurin series
$$\phi(t) = e^{t^2/2} = 1 + (t^2/2) + \frac{1}{2} \left(t^2/2\right)^2 + \cdots + \frac{1}{k!}\left(t^2/2\right)^k + \cdots.$$
However, since $e^{tX}$ converges absolutely for all values of $tX$, we also may write
$$E[e^{tX}] = E\left[1 + tX + \frac{1}{2}(tX)^2 + \cdots + \frac{1}{n!}(tX)^n + \cdots\right] \\= 1 + E[X]t + \frac{1}{2}E[X^2]t^2 + \cdots + \frac{1}{n!}E[X^n]t^n + \cdots.$$
Two convergent power series can be equal only if they are equal term by term, whence (comparing the terms involving $t^{2k} = t^n$)
$$\frac{1}{(2k)!}E[X^{2k}]t^{2k} = \frac{1}{k!}(t^2/2)^k = \frac{1}{2^kk!} t^{2k},$$
implying
$$E[X^{2k}] = \frac{(2k)!}{2^kk!},\ k = 0, 1, 2, \ldots$$
(and all expectations of odd powers of $X$ are zero). For practically no effort you have obtained the expectations of all positive integral powers of $X$ at once.
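The closed form for the even moments can be spot-checked by direct integration against the density (a sketch assuming sympy):

```python
from math import factorial
import sympy as sp

x = sp.symbols('x')
density = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)   # standard normal density

# Check E[X^{2k}] = (2k)!/(2^k k!) for the first few k by direct integration.
for k in range(4):
    moment = sp.integrate(x**(2 * k) * density, (x, -sp.oo, sp.oo))
    assert sp.simplify(moment - sp.Rational(factorial(2 * k), 2**k * factorial(k))) == 0
```

For instance $k=2$ gives the familiar kurtosis value $E[X^4] = 4!/(2^2\,2!) = 3$.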
Variations of this technique can work just as nicely in some cases, such as $E[1/(1-tX)] = E[1 + tX + (tX)^2 + \cdots + (tX)^n + \cdots]$, provided the range of $X$ is suitably limited. The mgf (and its close relative the
characteristic function $E[e^{itX}]$) are so generally useful, though, that you will find them given in tables of distributional properties, such as in the Wikipedia entry on the Normal distribution. |
Gnuplot Development
From ConTeXt wiki
Contents
1 Known bugs
2 TODO
3 What has to be added to m-gnuplot.tex to support ConTeXt terminal
4 What has to be added to context.trm
5 Notes

Known bugs

mathematics:
\startGNUPLOTscript{exp}
plot [0:3] 2/sqrt(pi)*exp(-x**2) t '$\frac{2}{\sqrt{\pi}}e^{-x^2}$'
\stopGNUPLOTscript

comments:
\startGNUPLOTscript{sin}
# this comment should be ignored
plot sin(x)
\stopGNUPLOTscript

doesn't work with XeTeX yet

Rotated PostScript images: -dAutoRotatePages=/PageByPage

TODO

ConTeXt + terminal
- contour plot
- set style fill pattern
- images
- handling fonts / set label "font"
- colour palettes
- smooth shading

Efficiency
- optimize the number of gnuplot runs (if possible, gnuplot should be run only once)
- optimize the number of times for loading/converting an already used graphic

Gnuplot core
- CLIP lines & points
- MITTERED/BUTT
- encoding: bigger list, it's not updated in help (encoding), add utf-8
- pm3d - smooth shading

Obscure ideas
- set decimalsign ',' based on language setting or locales
What has to be added to m-gnuplot.tex to support ConTeXt terminal

What has to be added to context.trm

High priority
- fix the syntax for set terminal context
- write documentation [HTH]
- add \switchtobodyfont[whateverisprovided]\strut to every \sometxt: try using (implementing first) \GPtext instead of \sometxt

Middle priority
- support different font sizes
- check if everything is documented well enough in help

Low priority
- implement with_image: once it's clear how to do it
- study the code for palettes, implement them and document them (needs some interference with the core code)
- improve the code for draw_color_smooth_box(MODE_SPLOT) in graph3d.c -> color.c

Legend: [HTH] = Need Hans's or Taco's Help
Different remarks about gnuplot development
Notes Hans's note
Once we have lua in place it should not be that difficult to make a mechanism that works as follows:
- context calls gnuplot, opens a socket, and starts waiting for gnuplot to finish
- when gnuplot needs dimensions it talks back via the socket
- tex executes the command sent over the socket
- etc.
it needs some thinking but i think that it is possible
Closed curves in parametric plots: it's probably not that easy in general, but if you want a circle closed with --cycle, set trange [0:2*pi].
Transform the differential equation $$yu_x(x, y) - xu_y(x, y) = xyu(x, y)$$ by introducing new variables $s = x^2+y^2$ and $t = e^{-x^2/2}$.
Then determine the general solution to the equation.
I think I managed to transform the differential equation? First we find these partial derivatives of $u$
\begin{align} u_x &= \frac{\partial u}{\partial s}2x - \frac{\partial u}{\partial t}tx \\ u_y &= \frac{\partial u}{\partial s}2y \end{align} then we plug them into our differential equation $$ \frac{\partial u}{\partial s}2xy - \frac{\partial u}{\partial t}txy - \frac{\partial u}{\partial s}2xy = xyu(x, y) \implies -\frac{\partial u}{\partial t}txy = xyu(x, y). $$
So our transformed differential equation is $$-\frac{\partial u}{\partial t}txy = xyu(x, y).$$
I have no idea how I'm supposed to find the
general solution. The answer for the general solution is $u(x, y) = F(x^2+y^2)\cdot e^{\frac{x^2}{2}}$. How did they arrive at this general solution, and did I do the transformation right? |
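The stated general solution can be checked mechanically (a sketch assuming sympy): substituting $u = F(x^2+y^2)\,e^{x^2/2}$ into the original equation leaves a zero residual for an arbitrary differentiable $F$:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Function('F')  # an arbitrary differentiable function of one variable

u = F(x**2 + y**2) * sp.exp(x**2 / 2)        # the claimed general solution
residual = y * sp.diff(u, x) - x * sp.diff(u, y) - x * y * u
assert sp.simplify(residual) == 0
```

This is consistent with the transformed equation $-t\,u_t = u$: separating variables gives $u = F(s)/t = F(x^2+y^2)\,e^{x^2/2}$.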
For a regression model through the origin, with $\operatorname{Var}(e_i \mid x_i) = x_i^2\sigma^2$. The corresponding regression model is $Y_i = \beta X_i + e_i$. How do I create a least squares model? I know I need to take the derivative of $\sum w_i(y_i-\hat{y_i})^2$. Is the weight $\frac{1}{X_i^2}$?
This is a linear regression model with heteroskedastic (and I presume
non-autocorrelated) error terms, with the functional form of heteroskedasticity known. In such a case, things are pretty easy, because the structure of the covariance matrix of the error term is known, and so we can implement Generalized Least Squares (not "Feasible" such).
What should the weights be? The purpose of the weights is to transform
all the variables involved in the equation in such a way so as the transformed error term has constant variance. Denote this weight $w_i$ (to be determined). Then we are looking at
$$w_iy_i = \beta w_ix_i+w_ie_i \Rightarrow \tilde y_i= \beta \tilde x_i+\tilde e_i$$
We want
$$\text{Var}(\tilde e_i \mid \tilde x_i) = \sigma^2 \Rightarrow E[\tilde e_i^2 \mid \tilde x_i]=\sigma^2$$
$$\Rightarrow E[(w_ie_i)^2 \mid w_ix_i]=\sigma^2 \Rightarrow w_i^2E[e_i^2\mid w_ix_i] = \sigma^2 \Rightarrow w_i^2\cdot (x_i^2\sigma^2) = \sigma^2$$
The only way for this to hold is if we set $$w_i^2 = \frac 1{x_i^2} \Rightarrow w_i = \frac 1{|x_i|}$$
As provided in the post linked to by a comment, for the initial equation $y_i = \beta x_i+e_i$ we have
$$\hat{\beta}_{OLS}=\frac{\sum_{i=1}^N x_iy_i}{\sum_{i=1}^N x_i^2}$$
Then for our transformed model we have
$$\hat{\beta}_{GLS}=\frac{\sum_{i=1}^N \tilde x_i\tilde y_i}{\sum_{i=1}^N \tilde x_i^2} = \frac{\sum_{i=1}^N \frac{x_i}{|x_i|}\frac{y_i}{|x_i|}}{\sum_{i=1}^N \frac {x_i^2}{|x_i|^2}} = \frac 1N\sum_{i=1}^N \left(\frac{y_i}{x_i}\right)$$
Note that implicit in all the above is that the regressor does not take zero values (otherwise one could apply a correction, but we will then be faced with a possibly very large variance for the observation involved).
Using $y_i = \beta x_i+e_i$ we can arrive at $$\hat{\beta}_{GLS} = \beta + \frac 1N \sum_{i=1}^N \left(\frac{e_i}{x_i}\right)$$
which gives
$$\text{Var}(\hat \beta_{GLS} \mid \mathbf x) = \frac 1{N^2}\sum_{i=1}^N \left(\frac{\text{Var}(e_i \mid \mathbf x)}{x_i^2}\right) = \sigma^2/N$$
This should be anticipated, since
$$\tilde x_i = \frac {x_i}{|x_i|} \Rightarrow \tilde x_i^2 = 1$$
and, as a general result for a simple regression without a constant,
$$\text{Var}(\hat \beta_{GLS} \mid \mathbf x) = \frac{\sigma^2}{\sum_{i=1}^N \tilde x_i^2}$$
Moreover, given this estimator, we know that the expression
$$\frac 1{N-1}\sum_{i=1}^N \hat {\tilde e_i}^2,\;\; \hat {\tilde e_i} = \tilde y_i - \hat{\beta}_{GLS}\tilde x_i$$ is a meaningful estimator of the unknown constant $\sigma^2$.
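A small simulation (assuming NumPy; all parameter values below are made up for illustration) shows the estimator $\frac 1N\sum_i y_i/x_i$ at work under exactly this form of heteroskedasticity:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, sigma, N = 2.0, 0.5, 10_000
x = rng.uniform(0.5, 3.0, size=N)        # regressor bounded away from zero
e = rng.normal(0.0, sigma * np.abs(x))   # Var(e_i | x_i) = x_i^2 * sigma^2
y = beta * x + e

beta_ols = (x @ y) / (x @ x)             # ordinary least squares
beta_gls = np.mean(y / x)                # the weighted (GLS) estimator above
```

Both estimators are consistent here; the GLS one attains the variance $\sigma^2/N$ derived above.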
For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence has several additional interpretations. First, it is often referred to as the
average sensitivity of $f$ because of the following proposition:

Proposition 27. For $f : \{-1,1\}^n \to \{-1,1\}$, \[ \mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})], \] where $\mathrm{sens}_f(x)$ is the sensitivity of $f$ at $x$, defined to be the number of pivotal coordinates for $f$ on input $x$.

Proof: \begin{multline*} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf Pr}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})] \\ = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}\right] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})]. \quad \Box \end{multline*}
The total influence of $f : \{-1,1\}^n \to \{-1,1\}$ is also closely related to the size of its
edge boundary; from Fact 14 we deduce:

Examples 29 (Recall Examples 15.) For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence ranges between $0$ and $n$. It is minimized by the constant functions $\pm 1$, which have total influence $0$. It is maximized by the parity function $\chi_{[n]}$ and its negation, which have total influence $n$; every coordinate is pivotal on every input for these functions. The dictator functions (and their negations) have total influence $1$. The total influence of $\mathrm{OR}_n$ and $\mathrm{AND}_n$ is very small: $n2^{1-n}$. On the other hand, the total influence of $\mathrm{Maj}_n$ is fairly large: roughly $\sqrt{2/\pi}\sqrt{n}$ for large $n$.
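The values in these examples are easy to confirm by brute force for small $n$; a Python sketch using only the standard library:

```python
from itertools import product
from math import prod, comb

def total_influence(f, n):
    """I[f] = E_x[sens_f(x)]: average number of pivotal coordinates."""
    total = 0
    for x in product([-1, 1], repeat=n):
        total += sum(f(x) != f(x[:i] + (-x[i],) + x[i+1:]) for i in range(n))
    return total / 2 ** n

n = 5
parity = lambda x: prod(x)                   # chi_[n]
maj    = lambda x: 1 if sum(x) > 0 else -1   # n odd, so no ties
or_n   = lambda x: -1 if -1 in x else 1      # -1 encodes "true"

assert total_influence(parity, n) == n                  # every coord always pivotal
assert total_influence(or_n, n) == n * 2 ** (1 - n)     # n * 2^{1-n}
assert total_influence(maj, n) == n * comb(n - 1, (n - 1) // 2) / 2 ** (n - 1)
```

The last assertion uses the exact value $\mathbf{Inf}_i[\mathrm{Maj}_n] = \binom{n-1}{(n-1)/2}2^{-(n-1)}$, which is $\sim\sqrt{2/\pi}/\sqrt{n}$ by Stirling's formula.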
By virtue of Proposition 20 we have another interpretation for the total influence of
monotone functions:
This sum of the degree-$1$ Fourier coefficients has a natural interpretation in social choice:
Proposition 31. Let $f : \{-1,1\}^n \to \{-1,1\}$ be a voting rule for a $2$-candidate election. Given votes ${\boldsymbol{x}} = ({\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_n)$, let $\boldsymbol{w}$ be the number of votes which agree with the outcome of the election, $f({\boldsymbol{x}})$. Then \[ \mathop{\bf E}[\boldsymbol{w}] = \frac{n}{2} + \frac12 \sum_{i=1}^n \widehat{f}(i). \]

Proof: By the formula for Fourier coefficients, \begin{equation} \label{eqn:deg-1-sum} \sum_{i=1}^n \widehat{f}(i) = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) {\boldsymbol{x}}_i] = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)]. \end{equation} Now ${\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n$ equals the difference between the number of votes for candidate $1$ and the number of votes for candidate $-1$. Hence $f({\boldsymbol{x}})({\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n)$ equals the difference between the number of votes for the winner and the number of votes for the loser; i.e., $\boldsymbol{w} - (n-\boldsymbol{w}) = 2\boldsymbol{w} - n$. The result follows. $\Box$
Rousseau
[Rou62] suggested that the ideal voting rule is one which maximizes the number of votes which agree with the outcome. Here we show that the majority rule has this property (at least when $n$ is odd):

Theorem 32. The unique maximizers of $\sum_{i=1}^n \widehat{f}(i)$ among all $f : \{-1,1\}^n \to \{-1,1\}$ are the majority functions. In particular, $\mathbf{I}[f] \leq \mathbf{I}[\mathrm{Maj}_n] = \sqrt{2/\pi}\sqrt{n} + O(n^{-1/2})$ for all monotone $f$.

Proof: From \eqref{eqn:deg-1-sum}, \[ \sum_{i=1}^n \widehat{f}(i) = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)] \leq \mathop{\bf E}_{{\boldsymbol{x}}}[|{\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n|], \] since $f({\boldsymbol{x}}) \in \{-1,1\}$ always. Equality holds if and only if $f(x) = \mathrm{sgn}(x_1 + \cdots + x_n)$ whenever $x_1 + \cdots + x_n \neq 0$. The second statement of the theorem follows from Proposition 30 and Exercise 18 in this chapter. $\Box$
Let’s now take a look at more analytic expressions for the total influence. By definition, if $f : \{-1,1\}^n \to {\mathbb R}$ then \begin{equation} \label{eqn:tinf-gradient} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{D}_i f({\boldsymbol{x}})^2] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \mathrm{D}_i f({\boldsymbol{x}})^2\right]. \end{equation} This motivates the following definition:
Definition 33. The (discrete) gradient operator $\nabla$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\nabla f : \{-1,1\}^n \to {\mathbb R}^n$ defined by \[ \nabla f(x) = (\mathrm{D}_1 f(x), \mathrm{D}_2 f(x), \dots, \mathrm{D}_n f(x)). \]
Note that for $f : \{-1,1\}^n \to \{-1,1\}$ we have $\|\nabla f(x)\|_2^2 = \mathrm{sens}_f(x)$, where $\| \cdot \|_2$ is the usual Euclidean norm in ${\mathbb R}^n$. In general, from \eqref{eqn:tinf-gradient} we deduce:
An alternative analytic definition involves introducing the
Laplacian:

Definition 35. The Laplacian operator $\mathrm{L}$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by $\mathrm{L} = \sum_{i=1}^n \mathrm{L}_i$.
In the exercises you are asked to verify the following:
$\displaystyle \mathrm{L} f (x) = (n/2)\bigl(f(x) - \mathop{\mathrm{avg}}_{i \in [n]} \{f(x^{\oplus i})\}\bigr)$, $\displaystyle \mathrm{L} f (x) = f(x) \cdot \mathrm{sens}_f(x) \quad$ if $f : \{-1,1\}^n \to \{-1,1\}$, $\displaystyle \mathrm{L} f = \sum_{S \subseteq [n]} |S|\,\widehat{f}(S)\,\chi_S$, $\displaystyle \langle f, \mathrm{L} f \rangle = \mathbf{I}[f]$.
We can obtain a Fourier formula for the total influence of a function using Theorem 19; when we sum that theorem over all $i \in [n]$ the Fourier weight $\widehat{f}(S)^2$ is counted exactly $|S|$ times. Hence:
Theorem 37. For $f : \{-1,1\}^n \to {\mathbb R}$, \begin{equation} \label{eqn:total-influence-formula} \mathbf{I}[f] = \sum_{S \subseteq [n]} |S| \widehat{f}(S)^2 = \sum_{k=0}^n k \cdot \mathbf{W}^{k}[f]. \end{equation} For $f : \{-1,1\}^n \to \{-1,1\}$ we can express this using the spectral sample: \[ \mathbf{I}[f] = \mathop{\bf E}_{\boldsymbol{S} \sim \mathscr{S}_{f}}[|\boldsymbol{S}|]. \]
Thus the total influence of $f : \{-1,1\}^n \to \{-1,1\}$ also measures the average “height” or degree of its Fourier weights.
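Theorem 37 can be spot-checked numerically: compute the Fourier coefficients of a small function by direct summation and compare $\sum_S |S|\widehat{f}(S)^2$ with the combinatorial total influence (the test function here is an arbitrary choice):

```python
from itertools import product, combinations
from math import prod

def total_influence(f, n):
    """I[f] via the combinatorial definition (average sensitivity)."""
    return sum(f(x) != f(x[:i] + (-x[i],) + x[i+1:])
               for x in product([-1, 1], repeat=n) for i in range(n)) / 2 ** n

def fourier(f, n, S):
    """f^(S) = E_x[f(x) * chi_S(x)], computed by direct summation."""
    return sum(f(x) * prod(x[i] for i in S)
               for x in product([-1, 1], repeat=n)) / 2 ** n

n = 4
f = lambda x: 1 if sum(x) >= 0 else -1   # a (biased) threshold function

spectral = sum(len(S) * fourier(f, n, S) ** 2
               for k in range(n + 1) for S in combinations(range(n), k))
assert abs(spectral - total_influence(f, n)) < 1e-9
```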
Finally, from Proposition 1.13 we have $\mathop{\bf Var}[f] = \sum_{k > 0} \mathbf{W}^{k}[f]$; comparing this with \eqref{eqn:total-influence-formula} we immediately deduce a simple but important fact called the
Poincaré inequality.

Poincaré Inequality. For any $f : \{-1,1\}^n \to {\mathbb R}$, $\mathop{\bf Var}[f] \leq \mathbf{I}[f]$.
Equality holds in the Poincaré inequality if and only if all of $f$’s Fourier weight is at degrees $0$ and $1$; i.e., $\mathbf{W}^{\leq 1}[f] = \mathop{\bf E}[f^2]$. For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, Exercise 1.19 tells us this can only occur if $f = \pm 1$ or $f = \pm \chi_i$ for some $i$.
For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, the Poincaré inequality can be viewed as an (edge-)isoperimetric inequality, or
(edge-)expansion bound, for the Hamming cube. If we think of $f$ as the indicator function for a set $A \subseteq \{-1,1\}^n$ of “measure” $\alpha = |A|/2^n$, then $\mathop{\bf Var}[f] = 4\alpha(1-\alpha)$ (Fact 1.14) whereas $\mathbf{I}[f]$ is $n$ times the (fractional) size of $A$’s edge boundary. In particular, the Poincaré inequality says that subsets $A \subseteq \{-1,1\}^n$ of measure $\alpha = 1/2$ must have edge boundary at least as large as those of the dictator sets.
For $\alpha \notin \{0, 1/2, 1\}$ the Poincaré inequality is not sharp as an edge-isoperimetric inequality for the Hamming cube; for small $\alpha$ even the asymptotic dependence is not optimal. Precisely optimal edge-isoperimetric results (and also vertex-isoperimetric results) are known for the Hamming cube. The following simplified theorem is optimal for $\alpha$ of the form $2^{-i}$:
This result illustrates an important recurring concept in the analysis of boolean functions: the Hamming cube is a “small-set expander”. Roughly speaking, this is the idea that “small” subsets $A \subseteq \{-1,1\}^n$ have unusually large “boundary size”. |
Can anyone provide an extended (and well explained) proof of correctness of the RSA Algorithm?
And why is it needed?
After a couple of hours of detailed study I've come to an answer.
As we all know the RSA algorithm works as follows:
While these statements and equations can stand true for some fixed values of $p$, $q$, $m$, $e$, $d$ in order to define the RSA as a general cryptographic algorithm we must prove their generality for any message $m$ we wish to encrypt.
This is therefore the reason why the proof of the correctness of the RSA algorithm is needed.
Getting to the proof we can formalise it as follows:
Hypothesis: Thesis:
NOTE: The important part is the "for all" part.
Proof:
Being $m \in Z_n$ there are only two possible cases to analyse:
1) $\gcd(m, N) = 1$
In this case Euler's Theorem stands true, assessing that $$ m^{\phi(N)} = 1 \bmod{N}\text{.}$$
As for the Thesis to prove, because of Hypothesis number 3, we can write:
$$(m^e)^d = m^{ed} = m^{1 + k\phi(N)}\text{,}$$
furthermore
$$ m^{1 + k\phi(N)} = m\cdot m^{k\phi(N)} = m \cdot (m^{\phi(N)})^k,$$
and for Euler's Theorem
$$m \cdot (m^{\phi(N)})^k = m \bmod{N}$$
Proving that the thesis stands in this case.
2) $\gcd(m, N) \neq 1$
In this case Euler's Theorem does not stand true any more. Instead we can use the Chinese Remainder Theorem, in the form

$$ x = y \pmod{p} \land x = y \pmod{q} \Rightarrow x = y \pmod{pq}$$
So by proving the following two statements we would have finished:

$$(m^e)^d = m \pmod{p} \quad\text{and}\quad (m^e)^d = m \pmod{q}.$$
Because $\gcd(m, N) \neq 1$, one of $\gcd(m, N) = p$ and $\gcd(m, N) = q$ must hold. I will demonstrate that both of the above statements stand true in the case $\gcd(m, N) = p$; the case $\gcd(m, N) = q$ is absolutely identical (by switching letters).
So let it be $\gcd(m, N) = p$; this implies that $m = kp$ for some $k > 0$, which means that $m \bmod{p} = 0$. Concerning the first statement, we also have
$$ (m^e)^d = ((kp)^e)^d$$
which is a multiple of $p$, and so is equal to zero modulo $p$.

So the first statement reduces to $0 = 0 \bmod{p}$ and is proven to be satisfied.
Concerning the second statement we have that Euler's Theorem results to be proved in $Z_q$, since $\gcd(m,q) = 1$, so:
$$ m^{\phi(q)} = 1 \bmod{q}\text{.}$$
This implies that we can write:
$$\begin{align} (m^{e})^d &= m^{ed} \\ &= m^{ed - 1}m\\ &= m^{h(p-1)(q-1)}m\\ &= (m^{q-1})^{h(p-1)}m\\ &= 1^{h(p-1)}m = m \bmod{q}. \end{align} $$
which definitively proves the second statement, and with it the theorem.
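The "for all $m$" claim, including the $\gcd(m, N) \neq 1$ case just handled, can be checked exhaustively for toy parameters (the values below are hypothetical and far too small for real use):

```python
from math import gcd

p, q = 11, 13
N = p * q
phi = (p - 1) * (q - 1)
e = 7                    # any e with gcd(e, phi) = 1 works
d = pow(e, -1, phi)      # modular inverse of e mod phi (Python 3.8+)

# Decryption recovers every m in Z_N, including messages sharing a
# factor with N (the case 2 above).
for m in range(N):
    assert pow(pow(m, e, N), d, N) == m
assert any(gcd(m, N) != 1 for m in range(1, N))   # case 2 really occurs
```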
(Being new to cryptography, I hope the following makes sense).
I will try to follow the Wikipedia article. What I write is pretty much just taken from there. If some of the steps are unclear you might want to look up for example the Chinese Reminder theorem or simply check out a book on basic modular arithmetic.
The question that I think you are asking is why $$ (m^e)^d \equiv m\;(\text{mod }n).$$ (Q: Why do we need to check this? A: Because you want to make sure that when you first encrypt and then decrypt, you recover the original plaintext.)
(1) First note that (by the Chinese Remainder Theorem) it is enough to check that
$$ (m^e)^d \equiv m\;(\text{mod }p)$$.
So, we have that $de \equiv 1 \pmod{\phi(n)}$. This means that $\phi(n) = (p-1)(q-1)$ divides $ed - 1$. Hence we can write $ed = h(p-1)(q-1) + 1$ for some non-negative integer $h$.
(2) Now, if $p$ divides $m$, then both sides are congruent to $0 \pmod p$ and we are done, so assume that $m$ is not divisible by $p$.
(3) So we compute ($\equiv_p$ is taking mod $p$)
$$\begin{align} (m^{e})^d &= m^{ed} \\ &= m^{ed - 1}m\\ &= m^{h(p-1)(q-1)}m\\ &= (m^{p-1})^{h(q-1)}m. \end{align} $$ Now we use the fact from Fermat's Little Theorem that $m^{p-1}\equiv_p 1$. So we get
$$\begin{align} (m^{e})^d &= ... \\ &\equiv_p 1^{h(q-1)}m \\ &= m. \end{align} $$ Then you do the same for $q$ instead of $p$, and you are done.
For a sketch of a proof using Fermat's little theorem and a simplified JavaScript implementation, see this blog post: tominology.
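The congruence proved above is easy to sanity-check numerically. Below is a minimal Python sketch (toy primes chosen purely for illustration; not from either answer) verifying that $(m^e)^d \equiv m \pmod N$ for every residue $m$, including the multiples of $p$ and $q$ handled by the second case:

```python
# Toy check of RSA correctness: (m^e)^d ≡ m (mod N) for every m,
# including m with gcd(m, N) != 1. Parameters are illustrative only.
p, q = 11, 13                 # small toy primes
N = p * q                     # N = 143
phi = (p - 1) * (q - 1)       # phi(N) = 120
e = 7                         # public exponent with gcd(e, phi) = 1
d = pow(e, -1, phi)           # private exponent: d*e ≡ 1 (mod phi)

for m in range(N):            # every residue, even multiples of p or q
    assert pow(pow(m, e, N), d, N) == m
```

The three-argument `pow` keeps intermediate values reduced mod `N`, which is also how real implementations avoid huge integers.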
Modus Tollendo Tollens/Proof Rule

Proof Rule

As a proof rule, Modus Tollendo Tollens is expressed in the form: if we can conclude $\phi \implies \psi$, and we can also conclude $\neg \psi$, then we may infer $\neg \phi$. It can be written:
$\displaystyle {\phi \implies \psi \quad \neg \psi \over \neg \phi} \text{MTT}$
The Modus Tollendo Tollens is invoked for $\phi \implies \psi$ and $\neg \psi$ as follows:
Pool: the pooled assumptions of $\phi \implies \psi$ and the pooled assumptions of $\neg \psi$.
Formula: $\neg \phi$.
Description: Modus Tollendo Tollens.
Depends on: the line containing the instance of $\phi \implies \psi$ and the line containing the instance of $\neg \psi$.
Abbreviation: $\text{MTT}$.

If the truth of one statement implies the truth of a second, and the second is shown not to be true, then neither can the first be true. Modus Tollendo Tollens is also known as: Modus Tollens, or denying the consequent. Modus Tollendo Tollens is Latin for "mode that by denying, denies".
The shorter form Modus Tollens means "mode that denies".
{{ModusTollens|line|pool|statement|first|second}}
or:
{{ModusTollens|line|pool|statement|first|second|comment}}
where:
line is the number of the line on the tableau proof where Modus Tollendo Tollens is to be invoked
pool is the pool of assumptions (comma-separated list)
statement is the statement of logic that is to be displayed in the Formula column, without the $ ... $ delimiters
first is the first of the two lines of the tableau proof upon which this line directly depends, the one in the form $\phi \implies \psi$
second is the second of the two lines of the tableau proof upon which this line directly depends, the one in the form $\neg \psi$
comment is the (optional) comment that is to be displayed in the Notes column.

Sources
1964: Donald Kalish and Richard Montague: Logic: Techniques of Formal Reasoning: $\text{I}$: 'NOT' and 'IF': $\S 3$
1965: E.J. Lemmon: Beginning Logic: $\S 1.2$: Conditionals and Negation
1973: Irving M. Copi: Symbolic Logic (4th ed.): $3.1$: Formal Proof of Validity
1980: D.J. O'Connor and Betty Powell: Elementary Logic: $\S \text{II}$: The Logic of Statements $(2): \ 1$: Decision procedures and proofs: $2$
2000: Michael R.A. Huth and Mark D. Ryan: Logic in Computer Science: Modelling and reasoning about systems: $\S 1.2.1$: Rules for natural deduction
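Since Modus Tollendo Tollens is a propositional rule, its validity can be confirmed by brute force over all truth assignments. A small Python sketch (illustrative, not part of the page):

```python
from itertools import product

# Enumerate all truth assignments to verify that Modus Tollendo Tollens
# is a valid inference: in every row where (phi -> psi) and (not psi)
# are both true, the conclusion (not phi) is true as well.
implies = lambda a, b: (not a) or b

for phi, psi in product([False, True], repeat=2):
    if implies(phi, psi) and (not psi):
        assert not phi        # the conclusion holds in every such row
```

Only the row $\phi = F$, $\psi = F$ satisfies both premises, and there the conclusion $\neg \phi$ is indeed true.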
Some mathematical elements change their style depending on the context, whether they are in line with the text or in an equation-type environment. This article explains how to manually adjust the display style.
Let's see an example
Depending on the value of $x$ the equation \( f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \) may diverge or converge. \[ f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \]
Superscripts, subscripts and fractions are formatted differently.
The maths styles can be set explicitly. For instance, if you want an in-line mathematical element to display as an equation-like element, put \displaystyle before that element. There are some more maths style-related commands that change the size of the text.
In-line maths elements can be set with a different style: \(f(x) = \displaystyle \frac{1}{1+x}\). The same is true the other way around: \begin{eqnarray*} f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \\ \textstyle f(x) = \textstyle \sum_{i=0}^{n} \frac{a_i}{1+x} \\ \scriptstyle f(x) = \scriptstyle \sum_{i=0}^{n} \frac{a_i}{1+x} \\ \scriptscriptstyle f(x) = \scriptscriptstyle \sum_{i=0}^{n} \frac{a_i}{1+x} \end{eqnarray*}
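A minimal compilable example (an illustrative sketch, assuming a standard article class) showing both directions of the style switch:

```latex
\documentclass{article}
\begin{document}
% Inline maths promoted to display style:
The sum \(\displaystyle \sum_{i=0}^{n} \frac{a_i}{1+x}\) keeps its large limits.

% Display maths demoted to text style:
\[ \textstyle f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \]
\end{document}
```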
For more information see |
I recently attended a talk by Dr. Ravi Gomatam on 'quantum reality', where the speaker suggested, that conservation of energy is not a fundamental law, and is conditional, but the conservation of information is fundamental. What exactly is the meaning of information? Can it be quantified? How is it related to energy?
If one measures lack of information by the entropy (as usual in information theory), and equates it with the entropy in thermodynamics then the laws of thermodynamics say just the opposite: In a mechanically, chemically and thermally isolated system described in the hydrodynamic approximation, energy is conserved whereas entropy can be created, i.e., information can get lost. (In an exact microscopic Hamiltonian description, the concept of information makes no sense, as it assumes a coarse-grained description in which not everything is known.)
The mainstream view in physics (aside from speculations about future physics not yet checkable by experiment) is that on the most fundamental level energy is conserved (being a consequence of the translation symmetry of the universe), while entropy is a concept of statistical mechanics that is applicable only to macroscopic bodies and constitutes an approximation, though at the human scale a very good one.
In contrast to @ArnoldNeumaier, I'd argue that the information content of the World
could be constant: it almost certainly can't get smaller, and whether and how it gets bigger depends on the resolution of questions about the correct interpretation of what exactly happens when one makes a quantum measurement. I'll leave the latter (resolution of quantum interpretation) aside, and instead discuss situations wherein information is indeed constant. See here for a definition of "information": the information in a thing is essentially the size in bits of the smallest document one can write and still uniquely define that thing. For the special case of a statistically independent string of symbols, the Shannon information is the mean of the negative logarithms of the probabilities $p_j$ with which the symbols appear in an infinite string:
$H=-\sum_j p_j \log p_j$
If the base of the logarithm is 2, $H$ is in bits. How this relates to the smallest defining document for the string is defined in Shannon's noiseless coding theorem.
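The formula above is easy to evaluate. A short Python sketch (mine, not the answerer's) computing $H$ in bits for a couple of toy distributions:

```python
from math import log2

# Shannon entropy H = -sum p_j log2 p_j, in bits, for a discrete
# distribution. A fair coin carries 1 bit; a biased one carries less.
def entropy_bits(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))      # fair coin: 1.0
print(entropy_bits([0.9, 0.1]))      # biased coin: ≈ 0.469 bits
```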
In the MaxEnt interpretation of the second law of thermodynamics, pioneered by E. T. Jaynes (also famous for the Jaynes–Cummings model of a two-level atom interacting with one electromagnetic field mode), the wonted "observable" or "experimental" entropy $S_{exp}$ of a system (this is what the Boltzmann H formula yields) comprises what I would call the true Shannon information, or Kolmogorov complexity, $S_{Sha}$, plus the mutual information $M$ between the unknown states of distinguishable subsystems: $S_{exp} = S_{Sha} + M$. In a gas, $M$ measures the predictability of states of particles conditioned on knowledge about the states of other particles,
i.e. it is a logarithmic measure of statistical correlation between particles.
$S_{Sha}$ is the minimum information in bits needed to describe a system, and is constant because the basic laws of physics are reversible: the World, therefore, has to "remember" how to undo any evolution of its state. $S_{Sha}$ cannot in general be measured and indeed, even given a full description of a system state, $S_{Sha}$ is not computable (i.e. one cannot compute the maximum reversible compression of that description). The Gibbs entropy formula calculates $S_{Sha}$ where the joint probability density function for the system state is known.
The experimental (Boltzmann) entropy stays constant in a reversible process, and rises in a non-reversible one. Jaynes's "proof" of the second law assumes that a system begins with all its subsystems uncorrelated, and therefore $S_{Sha} = S_{exp}$. In this assumed state, the subsystems are all perfectly statistically independent. After an irreversible change (e.g. when a gas is allowed to expand into a bigger container by opening a tap), the particles are now subtly correlated, so that their mutual information $M > 0$. Therefore one can see that the observable entropy $S_{exp}$ must rise. This ends Jaynes's proof.
Energy is almost unrelated to information; however, there is a lower limit on the work one must do to "forget" information in a non-reversible algorithm: this is the Landauer limit. It arises to uphold the second law of thermodynamics simply because any information must be encoded in a physical system's state: there is no other "ink" to write with in the material world. Therefore, if we wipe computer memory, the Kolmogorov complexity of the former memory state must get pushed into the state of the surrounding World.
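For scale, the Landauer limit is tiny at room temperature. A quick Python estimate (the temperature value is my own illustrative assumption):

```python
from math import log

# Landauer's bound: erasing one bit dissipates at least k_B * T * ln(2)
# of energy. k_B is the (exact, post-2019 SI) Boltzmann constant.
K_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed room temperature, kelvin

e_min = K_B * T * log(2)    # minimum energy to erase one bit, joules
print(e_min)                # ≈ 2.87e-21 J per bit
```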
Afterword: I should declare bias by saying I subscribe to many of the ideas of the MaxEnt interpretation, but disagree with Jaynes's "proof" of the second law. There is no problem with Jaynes's logic, but (in my opinion) after an irreversible change the system's substates are correlated, and one has to describe how they become uncorrelated again before one can apply Jaynes's argument again. So, sadly, I don't think we have a proof of the second law here.
Energy is the relationship between information regimes. That is, energy is manifested, at any level, between structures, processes and systems of information in all of its forms, and all entities in this universe are composed of information. To understand information and energy, consider a hypothetical universe consisting only of nothingness. In this universe imagine the presence of the smallest, most fundamental possible instance of deformation which constitutes a particle in this otherwise pristine firmament of nothingness. Imagine there is only one instance of this most fundamental particle and let us dub this a Planck-Particle PP. What caused this PP to exist is not known, but the existence of the PP constitutes the existence of one Planck-Bit (PB) of information. Resist the temptation to declare that energy is what caused our lone PP to exist. In this analogy, as in our reality, the ‘big’ bang that produced our single PP is not unlike the big bang that caused our known universe, in that neither can be described in terms of any energy relationship or regime known to the laws of physics in this universe.
This PB represents the most fundamental manifestation of information possible in this universe. Hence, the only energy that exists in this conceptual universe will be described by the relationship (there’s that word again) between the lone PP and the rest of the firmament of nothingness that describes its universe. Call this energy a Planck-quantum (PQ). Note that this PQ of energy in this universe only exists by virtue of the existence of the PP alone in relation to the surrounding nothingness. With only one PP there are few descriptions of energy that can be described. There is no kinetic energy, no potential energy, no gravity, no atomic or nuclear energy, no entropy, no thermodynamics, etc. However, there will be some very fundamental relationships pertaining to the degrees of freedom defined by our PP compared to its bleak environment that may be describable as energy.
Should we now introduce a second PP into our sparse universe, you may now define further relationships and energy regimes within our conceptual growing universe, and formulate Nobel-worthy theories and equations which describe these relationships. Kinetic energy suddenly manifests as the relationship of distance between our lonely PPs comes into existence. Likewise, energy as we know it describes the relationships manifested between information regimes which are describable by the language of mathematics.
Electron and Information.
Information is transferred through EM waves. There is no EM wave without the electron. (H. Lorentz)
Information is the new atom or electron, the fundamental building block of the universe ... We now see the world as entirely made of information: it's bits all the way down. (Bryan Appleyard)
It is important to realize that in physics today, we have no knowledge of what energy is. We do not have a picture that energy comes in little blobs of a definite amount. It is not that way. (Richard Feynman about an electron)
Electron is a quantum of information. Electron is a keeper of information. Why? An electron has six (6) formulas: $$E=h*f\qquad \text{and}\qquad e^2=ah*c ,$$ $$E=Mc^2\qquad \text{and}\qquad -E=Mc^2 ,$$ $$E=-me^4/2h^2= -13.6eV\qquad \text{and}\qquad E= \infty$$ and obeys five (5) laws:
a) The Law of conservation and transformation of energy/mass
b) The Heisenberg Uncertainty Principle/Law
c) The Pauli Exclusion Principle/Law
d) Fermi-Dirac statistics
e) Maxwell/Lorentz EM law
It means that in different interactions an electron must know six different formulas and must observe five laws. To behave in such different conditions a single electron itself must be a keeper of information.
The laws of physics dictate that information, like energy, cannot be destroyed, which means it must go somewhere. (Michael Brooks, Book ‘ The big questions’. Page 195-196.)
It means an electron (as a little blob of a definite amount of energy) even in different situations never loses its information.
SPPU Mechanical Engineering (Semester 3)
Engineering Mathematics 3 May 2014
Engineering Mathematics 3
May 2014
Total marks: --
Total time: --
INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
Solve any Two of the following:
1 (a) (i) \( (D^2+D+1)\,y = x \sin x \)
4 M
Answer any one question from Q1 and Q2
1 (a) (ii) \( (D^2-4D+4)\,y = e^{x}\cos 2x \)
4 M
1 (a) (iii) \[ (3x +2)^2 \dfrac {d^2y}{dx^2} + 3 (3x+2) \dfrac {dy}{dx}-36 y = 3x^2 + 4x +1 \]
4 M
1 (b) Find the Fourier transform of: \[ \begin{align*} f(x)& =1 , &|x|\le a \\ &=0, &|x|>a \end{align*} \]
4 M
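A numerical cross-check of Q1(b), added here for illustration: with the convention $F(s) = \int f(x)\,e^{-isx}\,dx$ (textbook conventions may differ by a factor of $\sqrt{2\pi}$), the transform of this rectangular pulse is $2\sin(as)/s$. The helper below is hypothetical, not part of the paper:

```python
from math import sin, cos

# Numerical check that the Fourier transform of f(x) = 1 for |x| <= a
# equals 2*sin(a*s)/s: integrate cos(s*x) over [-a, a] by the trapezoid
# rule (the sine part vanishes because f is even and real).
def ft_rect_numeric(a, s, n=100_000):
    h = 2 * a / n
    xs = [-a + k * h for k in range(n + 1)]
    ys = [cos(s * x) for x in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

a, s = 1.0, 2.5
print(ft_rect_numeric(a, s), 2 * sin(a * s) / s)   # both ≈ 0.4788
```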
2 (a) A body weighing W=20 N is hung from a spring. A pull of 40 N will stretch the spring to 10 cm. The body is pulled down to 20 cm below the static equilibrium position and then released. Find the displacement of the body from its equilibrium position in time 't' seconds. Also find the maximum velocity and period of oscillation.
4 M
2 (b) Solve any one of the following:
i) Find the Laplace Transform of: \( f(t) = t\, e^{3t} \sin 2t \)
ii) Find the inverse Laplace Transform of: \[ F(s) = \dfrac {1} {(s-2)^4 (s+3)} \]
4 M
2 (c) Solve the following Differential equation by Laplace Transform method. \[ \dfrac {d^2y}{dt^2} + 2 \dfrac {dy}{dt}+ 5y =e^{-t}\sin t \] Given that: y(0)=0, y'(0)=1.
4 M
Answer any one question from Q3 and Q4
3 (a) Following are the values of import of raw material and export of finished product in suitable units.

Export: 10, 11, 14, 14, 20, 22, 16, 12, 15, 13
Import: 12, 14, 15, 16, 21, 26, 21, 15, 16, 14

Calculate the coefficient of correlation between the import and export values.
4 M
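For reference, the coefficient asked for in Q3(a) can be computed directly (an illustrative Python sketch, not part of the paper):

```python
from math import sqrt

# Pearson correlation coefficient for the import/export data in Q3(a).
exports = [10, 11, 14, 14, 20, 22, 16, 12, 15, 13]
imports_ = [12, 14, 15, 16, 21, 26, 21, 15, 16, 14]

n = len(exports)
mx, my = sum(exports) / n, sum(imports_) / n
cov = sum((x - mx) * (y - my) for x, y in zip(exports, imports_))
sx = sqrt(sum((x - mx) ** 2 for x in exports))
sy = sqrt(sum((y - my) ** 2 for y in imports_))
r = cov / (sx * sy)
print(r)          # ≈ 0.946: imports and exports are strongly correlated
```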
3 (b) Find curl F at the point (1,1,2) where \[ \overline F = x^2 y\overline i + xyz \overline j + z^2 y \overline k \]
4 M
3 (c) Prove the following (any one): \[ i) \ \ \nabla \cdot \left ( \dfrac {\overline a \times \overline r}{r} \right )=0 \\ ii) \ \overline a \cdot \nabla \left [ \overline b \cdot \nabla \left ( \dfrac {1}{r} \right ) \right ] = \dfrac {3 (\overline a \cdot \overline r)(\overline b \cdot \overline r)}{r^5}- \dfrac {(\overline a \cdot \overline b)}{r^3} \]
4 M
4 (a) In a distribution, exactly normal, 7% of the items are under 35 and 89% are under 63. Find the mean and standard deviation of the distribution.
[$A_1 = 0.43$, $z_1 = 1.48$, $A_2 = 0.39$, $z_2 = 1.23$]
4 M
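The two table readings reduce Q4(a) to a pair of linear equations, sketched below (illustrative only):

```python
# Q4(a): P(X < 35) = 0.07 and P(X < 63) = 0.89. The quoted normal-table
# values give z1 = 1.48 below the mean and z2 = 1.23 above it, so
#   mu - 1.48*sigma = 35   and   mu + 1.23*sigma = 63.
sigma = (63 - 35) / (1.48 + 1.23)   # subtract the equations
mu = 35 + 1.48 * sigma
print(mu, sigma)                    # ≈ 50.29 and 10.33
```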
4 (b) Number of road accidents on a highway during a month follows a Poisson distribution with mean 5. Find the probability that in a certain month, number of accidents on the highway will be:
i) Less than 3
ii) Between 3 and 5
4 M
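Q4(b) can be checked with the Poisson mass function directly (illustrative Python, not part of the paper; "between 3 and 5" is taken as inclusive here):

```python
from math import exp, factorial

# Q4(b): accidents per month ~ Poisson(5), so P(X = k) = e^-5 * 5^k / k!.
def pois(k, lam=5):
    return exp(-lam) * lam ** k / factorial(k)

p_less_3 = sum(pois(k) for k in range(3))       # k = 0, 1, 2
p_3_to_5 = sum(pois(k) for k in range(3, 6))    # k = 3, 4, 5 inclusive
print(p_less_3, p_3_to_5)                       # ≈ 0.1247 and 0.4913
```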
4 (c) Find the constants a and b, so that the surface \( ax^2 - byz = (a+2)x \) will be orthogonal to the surface \( 4x^2 y + z^3 = 4 \) at the point (1, -1, 2).
4 M
Answer any one question from Q5 and Q6
5 (a) Find the work done in moving a particle along the path \( x = 2t^2, \ y = t, \ z = t^3 \), from t=0 to t=1 in a force field \[ \overline F = (2y+3) \overline i + xz \overline j + (yz-x)\overline k \]
4 M
5 (b) Evaluate: \[ \iint_s ( x\overline i + y \overline j + z^2 \overline k) \cdot d\overline s, \] where S is the curved surface of the cylinder \( x^2 + y^2 = 4 \) bounded by the planes z=0 and z=2.
5 M
5 (c) Apply Stokes' theorem to calculate \( \oint_c (y\, dx + z\, dy + x\, dz) \), C being the intersection of \( x^2 + y^2 + z^2 = a^2 \) and \( x + z = a \).
4 M
6 (a) Using Green's lemma evaluate \[ \oint_c x^2 dx + x \ y \ dy, \] where C is the boundary of the region R which is enclosed by \( y = x^2 \) and \( y = x \).
4 M
6 (b) Evaluate: \[ \iint_s (\nabla \times \overline F) \cdot \widehat n \ dS, \] where S is the curved surface of the paraboloid \( x^2 + y^2 = 2z \) bounded by the plane z=2 and \[ \overline F =3 (x-y) \overline i + 2xz \overline j + xy \overline k \]
5 M
6 (c) Prove that: \[ \iint_s (\phi \nabla \psi - \psi \nabla \phi ) \cdot d\overline S = \iiint_v (\phi \nabla^2 \psi - \psi \nabla^2 \phi) dv, \] where S is closed surface enclosing volume V.
4 M
Answer any one question from Q7 and Q8
7 (a) A string of length $l$ is stretched and fastened to two ends. Motion is started by displacing the string in the form \[ u(x)= a \sin \left ( \dfrac {\pi x}{l} \right ) \] from which it is released at t=0. Find the displacement u at any time 't', if it satisfies the equation \[ \dfrac {\partial^2 y}{\partial t^2} = C^2 \dfrac {\partial ^2 y}{\partial x^2} \]
7 M
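The standard separated solution of Q7(a) is $u(x,t) = a \sin(\pi x/l)\cos(\pi C t/l)$; the finite-difference spot check below (illustrative only, with arbitrary parameter values) confirms that it satisfies the wave equation:

```python
from math import sin, cos, pi

# Spot check that u(x,t) = a*sin(pi*x/l)*cos(pi*c*t/l) satisfies
# u_tt = c^2 u_xx, using central second differences at one point.
a, l, c = 1.0, 1.0, 2.0        # arbitrary illustrative values

def u(x, t):
    return a * sin(pi * x / l) * cos(pi * c * t / l)

h = 1e-4
x, t = 0.3, 0.7
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
assert abs(u_tt - c**2 * u_xx) < 1e-3   # wave equation holds numerically
```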
7 (b) Solve \( \dfrac {\partial u}{\partial t} = a^2 \dfrac {\partial ^2 u}{\partial x^2} \) if:
i) u(x,∞) is finite
ii) u(0,t)=0
iii) u(l,t)=0
iv) u(x,0)=x, 0 < x < l
6 M
8 (a) An infinitely long plane uniform plate is bounded by two parallel edges in the y direction and an end at right angles to them. The breadth of the plate is π. The end is maintained at temperature $u_0$ at all points and the other edges at zero temperature. Find the steady state temperature u(x,y), if it satisfies \[ \dfrac {\partial ^2 u}{\partial x^2} + \dfrac {\partial ^2 u}{\partial y^2} = 0. \]
7 M
8 (b) Use Fourier Transform to solve \[ \dfrac {\partial u}{\partial t}= \dfrac {\partial ^2 u}{\partial x^2}; \ 0< x < \infty, \ t>0 , \] where u(x,t) satisfies the conditions:
\[ i) \ u(x,t) \text{ is bounded}, \quad ii) \ \dfrac{\partial u}{\partial x}(0,t) = 0, \quad iii) \ u(x,0) = \begin{cases} x, & 0 < x < 1 \\ 0, & x \geq 1 \end{cases} \]
6 M
More question papers from Engineering Mathematics 3 |
M$^3$: a new muon missing momentum experiment to probe $(g-2)_\mu$ and dark matter at Fermilab
Abstract
Here, new light, weakly-coupled particles are commonly invoked to address the persistent $$\sim 4\sigma$$ anomaly in $$(g-2)_\mu$$ and serve as mediators between dark and visible matter. If such particles couple predominantly to heavier generations and decay invisibly, much of their best-motivated parameter space is inaccessible with existing experimental techniques. In this paper, we present a new fixed-target, missing-momentum search strategy to probe invisibly decaying particles that couple preferentially to muons. In our setup, a relativistic muon beam impinges on a thick active target. The signal consists of events in which a muon loses a large fraction of its incident momentum inside the target without initiating any detectable electromagnetic or hadronic activity in downstream veto systems. We propose a two-phase experiment, M$^3$ (Muon Missing Momentum), based at Fermilab. Phase 1 with $$\sim 10^{10}$$ muons on target can test the remaining parameter space for which light invisibly-decaying particles can resolve the $$(g-2)_\mu$$ anomaly, while Phase 2 with $$\sim 10^{13}$$ muons on target can test much of the predictive parameter space over which sub-GeV dark matter achieves freeze-out via muon-philic forces, including gauged $$U(1)_{L_\mu - L_\tau}$$.
Authors: Princeton Univ., Princeton, NJ (United States) Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States) Publication Date: Research Org.: Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States) Sponsoring Org.: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25) OSTI Identifier: 1439466 Report Number(s): arXiv:1804.03144; FERMILAB-PUB-18-087-A Journal ID: ISSN 1029-8479; 1667037; TRN: US1900618 Grant/Contract Number: AC02-07CH11359 Resource Type: Journal Article: Accepted Manuscript Journal Name: Journal of High Energy Physics (Online) Additional Journal Information: Journal Volume: 2018; Journal Issue: 9; Journal ID: ISSN 1029-8479 Publisher: Springer Berlin Country of Publication: United States Language: English Subject: 79 ASTRONOMY AND ASTROPHYSICS; 46 INSTRUMENTATION RELATED TO NUCLEAR SCIENCE AND TECHNOLOGY; 72 PHYSICS OF ELEMENTARY PARTICLES AND FIELDS; Fixed target experiments Citation Formats
Kahn, Yonatan, Krnjaic, Gordan, Tran, Nhan, and Whitbeck, Andrew.
M3: a new muon missing momentum experiment to probe (g – 2)μ and dark matter at Fermilab. United States: N. p., 2018. Web. doi:10.1007/JHEP09(2018)153.
Figures/Tables: production of scalar (left) and vector (right) forces that couple predominantly to muons. In both cases, a relativistic muon beam is incident on a fixed target and scatters coherently off a nucleus to produce the new particle.
Inner Products and Inner Product Spaces Review
We will now review some of the recent material regarding inner products and inner product spaces.
On the Inner Products and Inner Product Spaces page we said that an Inner Product on a linear space $X$ is a function $\langle \cdot, \cdot \rangle : X \times X \to \mathbb{C}$ that satisfies the following properties for all $x, y, z \in X$ and for all $\lambda \in \mathbb{C}$:
\begin{align} \quad \langle x, x \rangle & \geq 0 \quad \mathrm{and} \quad \langle x, x \rangle = 0 \: \mathrm{if \:and \: only \: if} \: x = 0 \\ \quad \langle x, y \rangle & = \overline{\langle y, x \rangle} \\ \quad \langle x + y, z \rangle & = \langle x, z \rangle + \langle y, z \rangle \\ \quad \langle \lambda x, y \rangle & = \lambda \langle x, y \rangle \end{align}
We then proved another important set of properties for inner products. We proved that for all $x, y, z \in X$ and for all $\lambda \in \mathbb{C}$ that:
\begin{align} \quad \langle x, y + z \rangle &= \langle x, y \rangle + \langle x, z \rangle \\ \quad \langle x, \lambda y \rangle &= \overline{\lambda} \langle x, y \rangle \end{align}
We then said that an Inner Product Space is a linear space equipped with an inner product. On The Cauchy-Schwarz Inequality for Inner Product Spaces page we proved the very important Cauchy-Schwarz inequality for inner product spaces, which says that if $H$ is an inner product space then for all $x, y \in H$ we have that:
\begin{align} \quad | \langle x, y \rangle | \leq \langle x, x \rangle^{1/2} \langle y, y \rangle^{1/2} \end{align}
On The Normed Space Induced by an Inner Product page we proved that if $H$ is an inner product space then the function $\| \cdot \| : H \to [0, \infty)$ defined below is a norm on $H$ and is called the Norm Induced by the Inner Product on $H$:
\begin{align} \quad \| x \| = \langle x, x \rangle^{1/2} \end{align}
On The Parallelogram Identity for the Norm Induced by an Inner Product page we proved the parallelogram identity for the norm induced by an inner product, which says that for all $x, y \in H$ we have that:
\begin{align} \quad \| x + y \|^2 + \| x - y \|^2 = 2 \| x \|^2 + 2 \| y \|^2 \end{align}
On the Orthogonal Sets in an Inner Product Space page we said that if $H$ is an inner product space then two elements $x, y \in H$ are said to be Orthogonal, written $x \perp y$, if:
\begin{align} \quad \langle x, y \rangle = 0 \end{align}
Furthermore, $x$ is said to be Orthogonal to the set $S \subseteq H$, written $x \perp S$, if for all $y \in S$ we have that:
\begin{align} \quad \langle x, y \rangle = 0 \end{align}
We then said that if $S \subseteq H$ then the Orthogonal Set of $S$, denoted by $S^{\perp}$, is the set of all elements in $H$ that are orthogonal to $S$, that is:
\begin{align} \quad S^{\perp} = \{ x \in H : x \perp S \} \end{align} |
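The identities reviewed above can be illustrated numerically in $\mathbb{C}^2$ with the standard inner product $\langle x, y \rangle = \sum_i x_i \overline{y_i}$ (a Python sketch, not part of the page):

```python
# Numerical illustration of the reviewed identities in C^2 with the
# standard inner product <x, y> = sum_i x_i * conj(y_i).
def ip(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

norm = lambda v: abs(ip(v, v)) ** 0.5
add = lambda u, v: [a + b for a, b in zip(u, v)]
sub = lambda u, v: [a - b for a, b in zip(u, v)]

x = [1 + 2j, 3 - 1j]
y = [0 + 1j, 2 + 2j]

# Cauchy-Schwarz inequality: |<x,y>| <= ||x|| ||y||
assert abs(ip(x, y)) <= norm(x) * norm(y) + 1e-12
# Parallelogram identity: ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
lhs = norm(add(x, y))**2 + norm(sub(x, y))**2
rhs = 2 * norm(x)**2 + 2 * norm(y)**2
assert abs(lhs - rhs) < 1e-9
# Conjugate symmetry: <x,y> = conj(<y,x>)
assert abs(ip(x, y) - ip(y, x).conjugate()) < 1e-12
```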
Topological Spaces
The branch of mathematics that is topology deals with the study of topological spaces which we define below.
Definition: A Topological Space is a set $X$ and a collection of subsets of $X$, $\tau$ called the Topology defined on $X$ which we together denote by $(X, \tau)$ such that: 1. The empty set and the whole set $X$ are contained in $\tau$, i.e., $\emptyset, X \in \tau$. 2. If $U_i \in \tau$ for all $i \in I$ for some index set $I$, then $\displaystyle{\bigcup_{i \in I} U_i \in \tau}$, that is, any arbitrary union of subsets in $\tau$ is contained in $\tau$. 3. If $U_1, U_2, ..., U_n \in \tau$ then $\displaystyle{\bigcap_{i=1}^{n} U_i \in \tau}$, that is, any finite intersection of $n$ subsets of $X$ in $\tau$ is contained in $\tau$.
For example, consider the set $X = \{ x, y, z \}$ and collection of subsets of $X$, $\tau = \{ \emptyset, \{ x \}, \{x, y \}, X \}$. Let's verify that $(X, \tau)$ is a topological space by showing that $\tau$ satisfies the three conditions required in the definition above.
Clearly $\emptyset \in \tau$ and $X \in \tau$ so the first condition is met. We first notice that the elements of $\tau$ are nested:
$$(*) \quad \emptyset \subseteq \{ x \} \subseteq \{ x, y \} \subseteq X$$
For the second condition, we claim from $(*)$ above that the union of any arbitrary collection of subsets from $\tau$ will be the set in the collection containing the most elements. From $(*)$ we have that any arbitrary collection of subsets $\{ U_i \}_{i \in I}$ from $\tau$ must contain a subset $U' \in \tau$ such that $U_i \subseteq U'$ for all $i \in I$. Therefore $\bigcup_{i \in I} U_i = U' \in \tau$. So, if $U_i \in \tau$ for all $i \in I$ with index set $I$, then $\bigcup_{i\in I} U_i \in \tau$.
For the third condition, we claim from $(*)$ above that any finite intersection of subsets of $X$ in $\tau$ is equal to the subset in the collection with the fewest elements. Any finite collection of subsets from $\tau$ must contain a subset $U^* \in \tau$ such that $U^* = U_i$ for some $i \in \{ 1, 2, ..., n \}$ and such that $U^* \subseteq U_i$ for all $i \in \{1, 2, ..., n \}$. So $\bigcap_{i=1}^{n} U_i = U^* \in \tau$. So if $U_1, U_2, ..., U_n \in \tau$ then $\bigcap_{i=1}^{n} U_i \in \tau$.
Therefore $(X, \tau)$ is indeed a topological space. |
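The verification above can also be done by brute force (an illustrative Python sketch, not part of the page), checking closure of $\tau$ under all unions and intersections of subcollections:

```python
from itertools import chain, combinations

# Brute-force check that tau = { {}, {x}, {x,y}, X } is a topology on
# X = {x, y, z}: contains {} and X, and is closed under unions and
# intersections of every non-empty subcollection.
X = frozenset({'x', 'y', 'z'})
tau = {frozenset(), frozenset({'x'}), frozenset({'x', 'y'}), X}

def subcollections(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))

assert frozenset() in tau and X in tau
for sub in subcollections(tau):
    union = frozenset().union(*sub)
    inter = frozenset(X)
    for u in sub:
        inter &= u
    assert union in tau and inter in tau    # closure holds for every subcollection
```

Because the sets in $\tau$ form a chain under inclusion, every union and intersection collapses to the largest or smallest member, exactly as argued above.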
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) Recent years have seen a resurgence of interest in searches for exotic states, motivated by precision spectroscopy studies of beauty and charm hadrons that have provided observations of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that online data are immediately available offline for physics analysis (Turbo analysis). The computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first-level trigger, asynchronous second-level trigger, and Monte Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1\,\text{MeV}\ n_{\text{eq}}\,\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
I did a quick read over the paper you linked. Based on the ideas given in that paper, here's a simple data structure that obtains an $O(\frac{\log n}{\log\log n})$ time bound on each operation.
You mentioned in your question that you can use balanced, augmented trees to speed this up. In particular, if you have a binary tree and augment each node with the parity of its left subtree, then you can do updates and lookups in time $O(\log n)$ each. That's fast, but not fast enough.
Now, consider the following generalization of your idea. Suppose that instead of using a binary tree, we use a multiway tree with branching factor $k$. We augment each key in each node with the parity of all the subtrees preceding it (this generalizes the idea of storing the parity of the left subtree). Now, let's think about how we'd do a lookup or update in this tree. To do a lookup, we use a slightly modified version of the binary tree lookup algorithm from before: walk from the top of the tree down to the bottom, at each step accumulating the parity of the subtree purely to the left of each node. The height of the tree in this case will be $O(\log_k n)$ and we do $O(1)$ work per node, so the cost of doing a lookup will be $O(\log_k n)$.
However, with this setup, the cost of doing an update increases. In particular, if we change the parity of an element, we need to walk up from the bottom of the tree to the top, changing the stored parity of every key in every node on the path upward. There are $k$ keys per node and $O(\log_k n)$ nodes on the path upward from the leaves, so the cost of performing an operation like this will be $O(k \log_k n) = O(\frac{k}{\log k} \log n)$, which is too slow. If we could somehow eliminate this extra $k$ term, then we'd be in business.
The insight the paper has is the following. If you think about our initial problem, we had an array of size $n$ and wanted to be able to compute prefix parities. We now have a $k$-ary tree where, at each node, we need to be able to solve the prefix parity problem on arrays of size $k$ each, since each node caches information about the layers below it. In the above data structure, we solved the prefix parity problem at each node by just storing an array of the prefix parities, which means that if we need to perform an update, the cost is $O(k)$. The paper's insight is that by using a more clever data structure at each node, you can perform these updates significantly more efficiently.
In particular, the paper makes the following insight. Let's suppose that $k$ is "small," for some definition of small that we'll pick later. If you want to solve the prefix parity problem on an array of size $k$, then there are only $2^k$ different possible bit arrays of length $k$. Additionally, there are only $k$ possible lookup queries you could make on a bit array of size $k$. As a result, the number of possible combinations of an array and a query is $k 2^k$. If we pick $k$ to be small enough, we can make this quantity so small that it becomes feasible to precompute the result of every possible array and every possible query. If we do that, then we can update our data structure as follows. In each node of the $k$-way tree, rather than having each key store the parity of its left subtree, we instead store an array of $k$ bits, one for each key in the node. When we want to find the parity of all the nodes to the left of the $i$th child, we just do a lookup in a table indexed by those $k$ bits (treated as an integer) and the index $i$. Provided we can compute this table fast enough, this means that doing a prefix parity query will still take time $O(\log_k n)$, but now updates take time $O(\log_k n)$ as well because the cost of a prefix parity query on a given node will be $O(1)$.
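The node-local trick described above can be sketched concretely. The fragment below is my own illustration, not code from the paper: it precomputes the answer to every possible prefix-parity query on every possible $k$-bit block, after which both queries and updates within a block take $O(1)$ time.

```python
# Illustrative sketch of the Method-of-Four-Russians table: precompute the
# prefix parity of every k-bit mask at every index, so per-node queries and
# updates inside a block become O(1) table operations.
k = 4  # block size; the real construction picks k = (lg n) / 2

# prefix[mask][i] = parity (XOR) of bits 0..i-1 of `mask`
prefix = [[0] * (k + 1) for _ in range(1 << k)]
for mask in range(1 << k):
    for i in range(k):
        prefix[mask][i + 1] = prefix[mask][i] ^ ((mask >> i) & 1)

def block_query(mask, i):
    """Parity of the bits strictly before index i, in O(1)."""
    return prefix[mask][i]

def block_flip(mask, i):
    """Flip bit i; the 'update' just recomputes the block's integer key."""
    return mask ^ (1 << i)

m = 0b1011                       # bits: b0=1, b1=1, b2=0, b3=1
assert block_query(m, 2) == 0    # 1 ^ 1
m = block_flip(m, 1)             # now 0b1001
assert block_query(m, 2) == 1    # 1 ^ 0
```

In the full structure, each node of the $k$-ary tree would store its $k$ augmentation bits packed into one integer `mask` and consult this shared table during lookups and updates.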
The authors of the paper noticed that if you pick $k = \frac{\lg n}{2}$, then the number of possible queries that can be made is $\frac{\lg n}{2} 2^{\frac{\lg n}{2}} = \frac{\lg n}{2} \sqrt{n} = o(n)$. Additionally, the cost of performing any operation on the resulting tree will be $O(\log_k n) = O(\frac{\log n}{\log \frac{\lg n}{2}}) = O(\frac{\log n}{\log \log n})$. The catch is that you now need to do $o(n)$ precomputation at the start of setting up the data structure. The authors give a way to amortize this cost away by using a different data structure for the initial queries until enough work has been done to justify performing the work necessary to set up the table, though you could argue that you need to spend $O(n)$ time building up the tree in the first place and that this won't affect the overall runtime.
So, in summary, the idea is the following:
Instead of using an augmented binary tree, use an augmented $k$-ary tree. Notice that with small $k$, all possible $k$-bit lists and queries on those lists can be precomputed. Use this precomputed data structure at each node in the tree. Choose $k = \frac{\lg n}{2}$ to make the tree height, and, therefore, the cost per operations, $O(\frac{\log n}{\log \log n})$. Avoid the upfront precomputation cost by using a temporary replacement data structure in each node until the precomputation becomes worthwhile.
All in all, it's a clever data structure. Thanks for asking this question and linking it - I learned a lot in the process!
As an addendum, many of the techniques that went into this data structure are common strategies for speeding up seemingly optimal solutions. The idea of precomputing all possible queries on objects of a small size is often called the Method of Four Russians and can be seen in other data structures like the Fischer-Heun data structure for range minimum queries or the decremental algorithm for tree connectivity. Similarly, the technique of using augmented balanced multiway trees with a logarithmic branching factor comes up in other contexts, like the original deterministic data structure for dynamic graph connectivity, where such an approach is used to speed up connectivity queries from $O(\log n)$ to $O(\frac{\log n}{\log \log n})$. |
I think that there is no formula. The best one can do is to estimate. Here is a simpler problem of the same sort: suppose you have a parametrization of the boundary of a simply connected region, and suppose that 0 is inside. Consider the Riemann mapping f of this region sending 0 to 0. The problem is to find |f'(0)|. There is no formula in any reasonable case.
Of course this is not a theorem, because one cannot define what a "formula" is. Both quantities, the modulus of a ring, and |f'(0)| in the simplified problem, are solutions of certain extremal problems. So one can write a "formula" involving sup over some class of functions.
Added on 9.19: I don't know why the question about a "formula" is important. There are reasonably good converging algorithms for finding moduli of rings, of course. The closest thing to a "formula" for a conformal map of a simply connected region that I know is described in the papers of Wiegmann and Zabrodin, for example, MR1785428. Perhaps this can be modified to make a formula for the modulus of a ring.
Added on the same day: Here is a "formula". Let $\mu$ and $\nu$ be two probability measures, one sitting on each boundary component. Let $\rho=\mu-\nu$. Then $$\log r=-2\pi\sup\int\!\!\int\log|z-w|\,d\rho(z)\,d\rho(w),$$ where the $\sup$ is taken over all such measures. If your boundaries are smooth, the measures are also smooth, and can be described by smooth densities.
Explanation. Think of the boundaries as bases of metal cylinders, and put unit charges on them, one positive and one negative. Then allow the charges to flow according to the Coulomb law. They will occupy the equilibrium position (minimizing the energy). This minimal energy is $\log r/(2\pi)$ and it is conformally invariant. It is the so-called capacity of a condenser.
This was given as an example of what I meant by a formula containing a sup over a set of functions.
Basic Theorems Regarding Sets of the First Category in a Topological Space
Recall from the Sets of the First and Second Categories in a Topological Space page that if $(X, \tau)$ is a topological space then a set $A \subseteq X$ is said to be of the first category (or meager) if $A$ can be expressed as a countable union of nowhere dense subsets.
Furthermore, we said that $A$ is of the second category (or nonmeager) if $A$ cannot be expressed in such a manner.
We will now look at some basic theorems regarding sets of the first category in a topological space.
Theorem 1: Let $(X, \tau)$ be a topological space and let $A \subseteq X$ be of the first category. Then for any $B \subseteq A$ we have that $B$ is of the first category. Proof: Let $A \subseteq X$ be of the first category. Then $\displaystyle{A = \bigcup_{i \in I} A_i}$ where each $A_i$ is nowhere dense in $X$ and $I$ is a countable indexing set. If $B \subseteq A$, then $B \cap A_i$ is nowhere dense for each $i \in I$ since $B \cap A_i \subseteq A_i$. Moreover, we see that: $$B = B \cap A = B \cap \bigcup_{i \in I} A_i = \bigcup_{i \in I} (B \cap A_i)$$ This shows that $B$ is equal to a countable union of nowhere dense sets. So, $B$ is of the first category. $\blacksquare$
Theorem 2: Let $(X, \tau)$ be a topological space. If $\{ A_i \}_{i \in I}$ is a countable collection of sets of the first category ($I$ is a countable indexing set) then $\displaystyle{A = \bigcup_{i \in I} A_i}$ is of the first category. Proof: Let $\{ A_i \}_{i \in I}$ be a countable collection of sets of the first category. Then for each $i \in I$, $A_i$ is equal to a countable union of nowhere dense sets. Say $\displaystyle{A_i = \bigcup_{j \in J_i} A_{i,j}}$ where $J_i$ is a countable indexing set for all $i \in I$, and each $A_{i, j}$ is nowhere dense. Then: $$A = \bigcup_{i \in I} A_i = \bigcup_{i \in I} \bigcup_{j \in J_i} A_{i, j}$$ So $A$ is a countable union of countable unions of nowhere dense sets, and since a countable union of countable sets is countable, $A$ is itself a countable union of nowhere dense sets. This shows that $A$ is of the first category. $\blacksquare$
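As a standard illustration of these theorems (my own example, not part of the original page): the set of rationals is of the first category in $\mathbb{R}$ with the usual topology, since $$\mathbb{Q} = \bigcup_{q \in \mathbb{Q}} \{ q \}$$ is a countable union of singletons, and each singleton $\{ q \}$ is nowhere dense in $\mathbb{R}$ (its closure is itself, which has empty interior). By Theorem 1, every subset of $\mathbb{Q}$ is then also of the first category, and by Theorem 2 a union of countably many such sets remains of the first category.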
Preprints (rote Reihe) des Fachbereich Mathematik (1998)
306
In this paper we study the space-time asymptotic behavior of the solutions to the incompressible Navier-Stokes equations and their derivatives. Using moment estimates we obtain that strong solutions to the Navier-Stokes equations which decay in \(L^2\) at the rate of \(||u(t)||_2 \leq C(t+1)^{-\mu}\) will have the following pointwise space-time decay \[|D^{\alpha}u(x,t)| \leq C_{k,m} \frac{1}{(t+1)^{\rho_0}(1+|x|^2)^{k/2}} \] where \( \rho_0 = (1-2k/n)( m/2 + \mu) + 3/4(1-2k/n)\) and \(|\alpha| = m\). The dimension \(n\) satisfies \(2 \leq n \leq 5\), with \(0\leq k\leq n\) and \(\mu \geq n/4\).
319
The Kallianpur-Robbins law describes the long term asymptotic behaviour of the distribution of the occupation measure of a Brownian motion in the plane. In this paper we show that this behaviour can be seen at every typical Brownian path by choosing either a random time or a random scale according to the logarithmic laws of order three. We also prove a ratio ergodic theorem for small scales outside an exceptional set of vanishing logarithmic density of order three.
299
We propose a new discretization scheme for solving ill-posed integral equations of the third kind. Combining this scheme with Morozov's discrepancy principle for Landweber iteration, we show that for some classes of equations this method requires a number of arithmetic operations of smaller order than the collocation method to solve an equation approximately with the same accuracy.
300 |
For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence has several additional interpretations. First, it is often referred to as the
average sensitivity of $f$ because of the following proposition: Proposition 27. For $f : \{-1,1\}^n \to \{-1,1\}$ \[ \mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})], \] where $\mathrm{sens}_f(x)$ is the sensitivity of $f$ at $x$, defined to be the number of pivotal coordinates for $f$ on input $x$. Proof: \begin{multline*} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf Pr}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})] \\ = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}\right] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})]. \quad \Box \end{multline*}
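Proposition 27 is easy to check by brute force on a small example. The sketch below is my own illustration (the function `maj3` and all names are mine): it computes the total influence and the average sensitivity of the 3-bit majority function directly from the definitions and confirms they agree.

```python
# Brute-force check of Proposition 27 on Maj_3.
from itertools import product

def maj3(x):
    """3-bit majority with the +/-1 convention."""
    return 1 if sum(x) > 0 else -1

n = 3
points = list(product([-1, 1], repeat=n))

def flip(x, i):
    y = list(x)
    y[i] = -y[i]
    return tuple(y)

# Inf_i[f] = Pr[f(x) != f(x with coordinate i flipped)]
inf = [sum(maj3(x) != maj3(flip(x, i)) for x in points) / len(points)
       for i in range(n)]
total_influence = sum(inf)

# E[sens_f(x)] = expected number of pivotal coordinates
avg_sens = sum(sum(maj3(x) != maj3(flip(x, i)) for i in range(n))
               for x in points) / len(points)

assert total_influence == avg_sens == 1.5  # each coordinate has influence 1/2
```

For $\mathrm{Maj}_3$ each coordinate is pivotal exactly when the other two votes disagree, which happens with probability $1/2$, giving total influence $3/2$.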
The total influence of $f : \{-1,1\}^n \to \{-1,1\}$ is also closely related to the size of its
edge boundary; from Fact 14 we deduce: Examples 29 (Recall Examples 15.) For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence ranges between $0$ and $n$. It is minimized by the constant functions $\pm 1$ which have total influence $0$. It is maximized by the parity function $\chi_{[n]}$ and its negation which have total influence $n$; every coordinate is pivotal on every input for these functions. The dictator functions (and their negations) have total influence $1$. The total influence of $\mathrm{OR}_n$ and $\mathrm{AND}_n$ is very small: $n2^{1-n}$. On the other hand, the total influence of $\mathrm{Maj}_n$ is fairly large: roughly $\sqrt{2/\pi}\sqrt{n}$ for large $n$.
By virtue of Proposition 20 we have another interpretation for the total influence of
monotone functions:
This sum of the degree-$1$ Fourier coefficients has a natural interpretation in social choice:
Proposition 31. Let $f : \{-1,1\}^n \to \{-1,1\}$ be a voting rule for a $2$-candidate election. Given votes ${\boldsymbol{x}} = ({\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_n)$, let $\boldsymbol{w}$ be the number of votes which agree with the outcome of the election, $f({\boldsymbol{x}})$. Then \[ \mathop{\bf E}[\boldsymbol{w}] = \frac{n}{2} + \frac12 \sum_{i=1}^n \widehat{f}(i). \] Proof: By the formula for Fourier coefficients, \begin{equation} \label{eqn:deg-1-sum} \sum_{i=1}^n \widehat{f}(i) = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) {\boldsymbol{x}}_i] = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)]. \end{equation} Now ${\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n$ equals the difference between the number of votes for candidate $1$ and the number of votes for candidate $-1$. Hence $f({\boldsymbol{x}})({\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n)$ equals the difference between the number of votes for the winner and the number of votes for the loser; i.e., $\boldsymbol{w} - (n-\boldsymbol{w}) = 2\boldsymbol{w} - n$. The result follows. $\Box$
Rousseau [Rou62] suggested that the ideal voting rule is one which maximizes the number of votes which agree with the outcome. Here we show that the majority rule has this property (at least when $n$ is odd): Theorem 32. The unique maximizers of $\sum_{i=1}^n \widehat{f}(i)$ among all $f : \{-1,1\}^n \to \{-1,1\}$ are the majority functions. In particular, $\mathbf{I}[f] \leq \mathbf{I}[\mathrm{Maj}_n] = \sqrt{2/\pi}\sqrt{n} + O(n^{-1/2})$ for all monotone $f$. Proof: From \eqref{eqn:deg-1-sum}, \[ \sum_{i=1}^n \widehat{f}(i) = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)] \leq \mathop{\bf E}_{{\boldsymbol{x}}}[|{\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n|], \] since $f({\boldsymbol{x}}) \in \{-1,1\}$ always. Equality holds if and only if $f(x) = \mathrm{sgn}(x_1 + \cdots + x_n)$ whenever $x_1 + \cdots + x_n \neq 0$. The second statement of the theorem follows from Proposition 30 and Exercise 18 in this chapter. $\Box$
Let’s now take a look at more analytic expressions for the total influence. By definition, if $f : \{-1,1\}^n \to {\mathbb R}$ then \begin{equation} \label{eqn:tinf-gradient} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{D}_i f({\boldsymbol{x}})^2] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \mathrm{D}_i f({\boldsymbol{x}})^2\right]. \end{equation} This motivates the following definition:
Definition 33. The (discrete) gradient operator $\nabla$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\nabla f : \{-1,1\}^n \to {\mathbb R}^n$ defined by \[ \nabla f(x) = (\mathrm{D}_1 f(x), \mathrm{D}_2 f(x), \dots, \mathrm{D}_n f(x)). \]
Note that for $f : \{-1,1\}^n \to \{-1,1\}$ we have $\|\nabla f(x)\|_2^2 = \mathrm{sens}_f(x)$, where $\| \cdot \|_2$ is the usual Euclidean norm in ${\mathbb R}^n$. In general, from \eqref{eqn:tinf-gradient} we deduce:
An alternative analytic definition involves introducing the
Laplacian: Definition 35. The Laplacian operator $\mathrm{L}$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by $\mathrm{L} = \sum_{i=1}^n \mathrm{L}_i$.
In the exercises you are asked to verify the following:
$\displaystyle \mathrm{L} f (x) = (n/2)\bigl(f(x) - \mathop{\mathrm{avg}}_{i \in [n]} \{f(x^{\oplus i})\}\bigr)$, $\displaystyle \mathrm{L} f (x) = f(x) \cdot \mathrm{sens}_f(x) \quad$ if $f : \{-1,1\}^n \to \{-1,1\}$, $\displaystyle \mathrm{L} f = \sum_{S \subseteq [n]} |S|\,\widehat{f}(S)\,\chi_S$, $\displaystyle \langle f, \mathrm{L} f \rangle = \mathbf{I}[f]$.
We can obtain a Fourier formula for the total influence of a function using Theorem 19; when we sum that theorem over all $i \in [n]$ the Fourier weight $\widehat{f}(S)^2$ is counted exactly $|S|$ times. Hence:
Theorem 37. For $f : \{-1,1\}^n \to {\mathbb R}$, \begin{equation} \label{eqn:total-influence-formula} \mathbf{I}[f] = \sum_{S \subseteq [n]} |S| \widehat{f}(S)^2 = \sum_{k=0}^n k \cdot \mathbf{W}^{k}[f]. \end{equation} For $f : \{-1,1\}^n \to \{-1,1\}$ we can express this using the spectral sample: \[ \mathbf{I}[f] = \mathop{\bf E}_{\boldsymbol{S} \sim \mathscr{S}_{f}}[|\boldsymbol{S}|]. \]
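Theorem 37 can likewise be verified numerically. The sketch below is my own illustration (names are mine), assuming the standard definition $\widehat{f}(S) = \mathop{\bf E}[f({\boldsymbol{x}})\chi_S({\boldsymbol{x}})]$: it computes all Fourier coefficients of $\mathrm{Maj}_3$ by brute force and checks that $\sum_S |S|\,\widehat{f}(S)^2$ equals the total influence $3/2$.

```python
# Brute-force check of the Fourier formula for total influence on Maj_3.
from itertools import product, combinations

def maj3(x):
    return 1 if sum(x) > 0 else -1

n = 3
points = list(product([-1, 1], repeat=n))

def chi(S, x):
    """Parity character chi_S(x) = product of x_i over i in S."""
    p = 1
    for i in S:
        p *= x[i]
    return p

# f_hat(S) = E[f(x) * chi_S(x)], averaged over all 2^n points
subsets = [S for r in range(n + 1) for S in combinations(range(n), r)]
fhat = {S: sum(maj3(x) * chi(S, x) for x in points) / len(points)
        for S in subsets}

# I[f] = sum over S of |S| * f_hat(S)^2
total_influence = sum(len(S) * fhat[S] ** 2 for S in subsets)

assert abs(total_influence - 1.5) < 1e-12  # matches I[Maj_3] = 3/2
assert abs(fhat[(0,)] - 0.5) < 1e-12       # each degree-1 coefficient is 1/2
```

Here the weight sits at degrees 1 and 3 ($\mathbf{W}^1 = 3/4$, $\mathbf{W}^3 = 1/4$), so $\mathbf{I}[f] = 1 \cdot 3/4 + 3 \cdot 1/4 = 3/2$, agreeing with the average-sensitivity computation.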
Thus the total influence of $f : \{-1,1\}^n \to \{-1,1\}$ also measures the average “height” or degree of its Fourier weights.
Finally, from Proposition 1.13 we have $\mathop{\bf Var}[f] = \sum_{k > 0} \mathbf{W}^{k}[f]$; comparing this with \eqref{eqn:total-influence-formula} we immediately deduce a simple but important fact called the
Poincaré inequality. Poincaré InequalityFor any $f : \{-1,1\}^n \to {\mathbb R}$, $\mathop{\bf Var}[f] \leq \mathbf{I}[f]$.
Equality holds in the Poincaré inequality if and only if all of $f$’s Fourier weight is at degrees $0$ and $1$; i.e., $\mathbf{W}^{\leq 1}[f] = \mathop{\bf E}[f^2]$. For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, Exercise 1.19 tells us this can only occur if $f = \pm 1$ or $f = \pm \chi_i$ for some $i$.
For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, the Poincaré inequality can be viewed as an (edge-)isoperimetric inequality, or
(edge-)expansion bound, for the Hamming cube. If we think of $f$ as the indicator function for a set $A \subseteq \{-1,1\}^n$ of “measure” $\alpha = |A|/2^n$, then $\mathop{\bf Var}[f] = 4\alpha(1-\alpha)$ (Fact 1.14) whereas $\mathbf{I}[f]$ is $n$ times the (fractional) size of $A$’s edge boundary. In particular, the Poincaré inequality says that subsets $A \subseteq \{-1,1\}^n$ of measure $\alpha = 1/2$ must have edge boundary at least as large as those of the dictator sets.
For $\alpha \notin \{0, 1/2, 1\}$ the Poincaré inequality is not sharp as an edge-isoperimetric inequality for the Hamming cube; for small $\alpha$ even the asymptotic dependence is not optimal. Precisely optimal edge-isoperimetric results (and also vertex-isoperimetric results) are known for the Hamming cube. The following simplified theorem is optimal for $\alpha$ of the form $2^{-i}$:
This result illustrates an important recurring concept in the analysis of boolean functions: the Hamming cube is a “small-set expander”. Roughly speaking, this is the idea that “small” subsets $A \subseteq \{-1,1\}^n$ have unusually large “boundary size”. |
What you call "regular" measurement is probably projective measurement onto the computational basis, $|0\rangle$ and $|1\rangle$. Most introductory quantum computing resources define a qbit as follows:
$|\psi\rangle = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha|0\rangle + \beta|1\rangle$
Where $|\alpha|^2$ gives you the probability of collapse to $|0\rangle$, and $|\beta|^2$ gives you the probability of collapse to $|1\rangle$. If this is what you know as "regular" measurement, then yes it's just projective measurement.
In addition to measuring in the computational basis, you can do a projective measurement onto any pair of orthogonal unit vectors. You do this by rewriting your quantum state in that basis. Here is a good video tutorial on how to do change-of-basis. For example, we can measure in the $X$ basis:
$|+\rangle = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}$, $|-\rangle = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{-1}{\sqrt{2}} \end{bmatrix}$
Consider the following qbit value:
$|\phi\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix} = 1|0\rangle + 0|1\rangle$
This qbit value has a 100% chance of collapsing to $|0\rangle$ when measured in the computational basis. What if we were to measure it in the $X$ basis? First we have to do a change-of-basis to the $X$ basis, where it is written as:
$|\phi\rangle = \frac{1}{\sqrt{2}}|+\rangle + \frac{1}{\sqrt{2}}|-\rangle$
Note we did not change the value of the qbit at all, we are just writing it a different way (in a different basis). Now we see that if we measure $|\phi\rangle$ in the $X$ basis, it has a 50% chance of collapsing to $|+\rangle$ and a 50% chance of collapsing to $|-\rangle$.
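The computation above is easy to reproduce numerically. The following sketch (illustrative; variable names are mine) projects $|\phi\rangle = |0\rangle$ onto the $X$ basis and recovers the 50/50 probabilities.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                 # |0> in the computational basis
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # |+>
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)  # |->

phi = ket0
# Born rule: probability of each outcome = |<basis vector | phi>|^2
# (np.vdot conjugates its first argument, giving the inner product <a|b>)
p_plus = abs(np.vdot(plus, phi)) ** 2
p_minus = abs(np.vdot(minus, phi)) ** 2

assert abs(p_plus - 0.5) < 1e-12 and abs(p_minus - 0.5) < 1e-12
```

The same two lines work for any measurement basis: replace `plus` and `minus` with any pair of orthonormal vectors.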
It's called projective measurement because geometrically you're "projecting" your quantum state vector onto the measurement basis vectors, and the length of the projection on a basis vector gives you the probability of collapsing to that basis vector. |
Look at group signatures (but use one of the more modern schemes; they are proven secure). The signature can be applied to a running counter, or a random challenge. Group signatures also give you a lot of "management" options which can be useful depending on the application. If you don't need them, then you can use ring signatures (but the verifier has to ...
Concerning your very broad question: Wikipedia can tell you about homomorphic signatures. However, its application is quite specific, and I have no idea if it fits your scenario/requirements. The homomorphic property has direct consequences for the signatures: it cannot achieve existential unforgeability, because this contradicts the homomorphic property (...
There are some possible advantages to threshold signing. First, it enables a more flexible setting where the key can be divided into $n$ parts and any subset of $t$ can be used to sign. Second, you can go from holding a single key in one place to distributing it and back without making any changes. Third, you can achieve a type of proactive security by ...
There are two major families of signature schemes that bear some resemblance what you described.Code-based signatures: Courtois–Finiasz–Sendrier, or CFSThe CFS family of signature schemes is based on code-based cryptography first introduced by Robert McEliece in 1978 and dualized by Niederreiter. We work with binary linear codes, of length $n$ and rank $...
First start with some notation. Say we have a plaintext space $P$ which forms a group, and an encryption function which goes from the plaintext space to the ciphertext space, say $E : P\to C$. $E$ is homomorphic if $E$ forms a group homomorphism, i.e. given $E(x)$ and $E(y)$ for $x,y\in P$ we can efficiently construct $E(x\cdot y)$ without the private key, ...
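To make the homomorphic property concrete, here is a toy example of my own (purely illustrative and NOT secure: it is deterministic and uses a tiny group): plain modular exponentiation $E(x) = g^x \bmod p$ is a group homomorphism from the integers under addition to the multiplicative group mod $p$, so "ciphertexts" combine without any secret material.

```python
# Toy homomorphic map (illustration only, NOT an encryption scheme):
# E(x) = g^x mod p satisfies E(x) * E(y) = E(x + y),
# i.e. multiplying images corresponds to adding preimages.
p, g = 467, 2  # small prime and base, chosen only for the demo

def E(x):
    return pow(g, x, p)  # modular exponentiation via the 3-argument pow

x, y = 15, 27
assert E(x) * E(y) % p == E(x + y)  # the homomorphic property
```

Real homomorphic encryption schemes (e.g. ElGamal or Paillier) add randomness and hardness assumptions on top of exactly this kind of algebraic structure.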
Can Bob, without interacting with Alice, generate a new aggregate signature for the entire message set, i.e. (M,s′), that validates with Alice's public key? That would be troubling if Bob could sign anything verifiable by Alice's key (on behalf of Alice). If there is a simpler and more efficient method available if there is only one signer. I am mainly ...
If I understand you correctly, you want that given $\mathit{pk}, \mathit{pk}'$, one can publicly decide whether or not $\mathit{sk} < \mathit{sk}'$ (lexicographically) for the corresponding secret keys. Such a signature scheme would inherently be insecure. This is because given your public key $\mathit{pk}^*$, anyone would be able to run a binary ...
Let me try to rephrase your question in a way for which I might have an answer: Is there a way for Alice to give Bob a "limited signing key" (LSK) such that: Bob can freely generate up to $n$ keys (or other messages) and sign them using the LSK issued by Alice; anyone can verify that these keys/messages were signed by an LSK that was issued by ...
Anybody can run any of the usual generators for asymmetric primitives as many times as they want, and they can do this without compromising the security of the private key. Generally the trust of a key pair by a user is however not in the private key, but in the public key or certificate around the public key. Now you could embed a signed token in the ...
Can please somebody help me in Q.24.
I have tried the energy method,but somehow I am not getting the required result.
Note by A Former Brilliant Member 2 years, 9 months ago
sorry for late reply , i was busy :) hope you find it easy too :)
Log in to reply
sorry for the cutting , i did it in just 30 secs :P
feel free to ask anything , i love to resolve doubts :)
Thanks for your help, sorry I was actually inactive for a few days.
@A Former Brilliant Member – can you tell me how you came back after deactivating your account ? i wanna delete it too and come back after JEE ;)
@A Former Brilliant Member – You can deactivate your account in Account settings - Account - Deactivate my account. Doing so in this way, will allow you to reactivate your account later by logging in with the same email, which will send you to a reactivation page.
Can someone please help @shubham dhull @Aniket Sanghi
What have you tried? Where are you stuck?
Sorry for an irrelevant question, but which book is it?
Study it, or you'll regret it like me.
LOL.
@Swapnil Das – u laughing on me ?
@A Former Brilliant Member – No your classic dialogue :P
@A Former Brilliant Member – no.....I am saying you are awesome in physics
what are you repenting about?....you are so good in physics......I am not even a fraction as good as you
@A Former Brilliant Member – Exactly I was thinking the same. But you too are good at physics. Weakness is in me.
hey what resources do you use for physics?
@A Former Brilliant Member – simple school level books like ncert + hc verma + irodov :) sorry for the late reply , i'm ill and really having hard time :( but again do whatever high level you can do and ask me if you have any doubt :)
@A Former Brilliant Member – Is krotov a good book?
@A Former Brilliant Member – ya, it sure is. btw how old r u ?
@A Former Brilliant Member – Get well soon :) How do you solve hard hard constrained motion problems?
@Swapnil Das – well contraints are easy .....learn about virtual work method for solving them....that would be sufficient to solve any type of constraint.
@A Former Brilliant Member – If you check my profile, there's a constrained motion prob by Akshat Sharda. That's hard :P
@Swapnil Das – i asked ur age
@A Former Brilliant Member – 16 it's in my profile, no?
@Swapnil Das – it is wrong in many cases like of @A E btw we both r of same age so can't call u my younger bro :|
@A Former Brilliant Member – Ok but bhai is a better term :P
@Swapnil Das – yep sure:)
@A Former Brilliant Member – ya iam 16
@Swapnil Das – that question is easy: let the acceleration of the wedge be 'A' and that of the block w.r.t. the wedge be 'a'... draw a proper FBD and proceed, it will be easy
@A Former Brilliant Member – Hmm. I'll need practice.
@Swapnil Das – if can have access to coaching material if any reputed institute like FIITJEE,you could find numerous questions like this.
@A Former Brilliant Member – I see. I'll have them in a month ;)
@A Former Brilliant Member – any brilliant physics ques you know, can u give link ?
@Swapnil Das – oh, thnx bro , how old r u ? and give me an example i will show you how to do the constraint problem
@A Former Brilliant Member – You know me right :P.
Could you solve this one? https://brilliant.org/problems/annoying-wedge/?ref_id=1327522
@Swapnil Das – did it earlier :)
@A Former Brilliant Member – You've solved almost every physics problem on Brilliant :P
@A Former Brilliant Member – not really , cough-cough :D
@A Former Brilliant Member – ya I see him in every problems solver list....you are awesome...
I actually don't know,I got it from my school library.
Ohk, no problem.
@Swapnil Das – Hey, you are in which class?
@A Former Brilliant Member – Will be promoted to 11th in April.
@Swapnil Das – Did you give NSEP in class 10th?
@A Former Brilliant Member – Yes. Very Bad score :P
@Swapnil Das – No problem..... I did not even give it a try this year. Will try in class 12
Limb Disruption (708)
Limb Disruption targets a limb of the victim, attempting to make it explode. This spell is modified by training in Sorcerous Lore, Necromancy and Spell Aiming. With the appropriate training, it is one of the most powerful disabling spells. Removing a hand or arm will cause a target to drop whatever is being carried in that hand/arm, and removing/breaking a leg will cause a target to fall.
This spell is commonly found in pale thanot wands.
The damage done to the limb depends on the severity of the warding endroll.
Endroll    Result
141+       Limb explodes
121-140    Limb breaks, one or two round stun
101-120    Level one wound, no subsequent stun

Usage
PREP 708 | CAST {target} or INCANT 708 to cast this spell. Casters may specify a limb to target using the AIM verb (only limbs are considered).
PREP 708|CAST {target} [right|left] [hand|arm|leg]will override any AIM setting.
If a limb is not specified, the system will examine the caster's AIM setting. If the AIM is not set to a valid limb, then the system will randomly select from one of the remaining limbs of the victim. This includes hands, arms, and legs, or their equivalents depending on the creature.

Aimed Limb Disruption
Training 2x in Spell Aiming is required to cast the targeted version of this spell against a like-level target without a penalty to the caster's CS. Training less than 2x in Spell Aiming will result in steep CS penalties when trying to aim at a specific limb.
Designating an arm, leg, or hand with AIM (verb) will result in that limb automatically being targeted when casting Limb Disruption. Using AIM to disrupt a limb that is already severed (or doesn't exist) will only waste mana.
Targeted CS Penalty Formula
Formula:
CS penalty = 100 - floor(100 × (SA ranks) / (target level × 2))

Note: The penalty caps at 100 CS with 0 SA ranks.

Example 1: SA ranks: 5, Target level: 23 → 100 - trunc[100 × (5/46)] = 90 CS penalty
Example 2: SA ranks: 140, Target level: 75 → 100 - trunc[100 × (140/150)] = 7 CS penalty

Lore Benefit
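As an illustration (a reader's sketch, not official game code), the penalty formula is easy to put into Python. The clamp at zero for casters trained beyond 2x is an assumption, since 2x training is stated to remove the penalty entirely:

```python
import math

def cs_penalty(sa_ranks: int, target_level: int) -> int:
    """Aimed Limb Disruption CS penalty:
    100 - floor(100 * SA ranks / (2 * target level)),
    clamped at 0 (assumed: 2x Spell Aiming or more means no penalty)."""
    penalty = 100 - math.floor(100 * sa_ranks / (target_level * 2))
    return max(penalty, 0)

print(cs_penalty(5, 23))    # example 1 from the page: 90
print(cs_penalty(140, 75))  # example 2 from the page: 7
```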
Training in Sorcerous Lore, Necromancy will increase the chance for removed creature limbs to be animated. Only legs and arms (or their equivalents) will animate. Hands will never animate. Likewise, player limbs will never animate. The animation effect is not controlled by the sorcerer.
When cast, there is a random 50% chance to determine if a limb will have the potential to animate. If that check is successful, it takes a [d100 + ((50 * Necromancy ranks) / target's level)] and the result must be greater than 100 to animate. The ability of the limb to drag a target down is [((caster's level - target's level) * 6) + d150]. If that is greater than 100, it is successful. (Info provided by GM Estild.)
Animated arms will attempt to knock over a standing target, or they will attempt to drag a grounded target, causing hard roundtime. Animated legs will attempt to knock over standing targets only. The odds of a limb successfully attacking a target are based on the size of the limb in relation to the size of the target and the level of the target in relation to the level of the creature from which the limb originated. The limbs will last for a few rounds before decaying away, and they will randomly choose a new target each round.
Messaging

Exploded limb:
You gesture at a triton radical. The dull golden nimbus surrounding a triton radical suddenly begins to glow brightly.
CS: +555 - TD: +405 + CvA: +25 + d100: +8 == +183
Warding failed!
The triton radical's right leg explodes!
A triton radical falls to the ground grasping his mangled right leg!
Cast Roundtime 3 Seconds.

Animated leg:
A severed radical leg snaps suddenly, sending droplets of congealing blood in all directions. Its muscles convulse violently, as if with purpose, and it flings itself at a triton executioner, knocking him to the ground.
Author Archives: John Luke Salter
In the Forbes article “The Truth About Guns” the author speaks about the idea that the coal-electricity industry is not to be held fully accountable for the methylmercury in the atmosphere. By speaking about the other, more prominent sources of … Continue reading
When looking at the future of energy production in the United States, the fossil fuel sources loom high above the rest in percentages. With the increasing efficiency of several different renewable energy sources, such as photovoltaic cells and tidal turbines, … Continue reading
The Ogallala Aquifer is an underground water source used by many farmers and ranchers in the Great Plains region to utilize land that would be unable to be farmed for grain without its use. Center pivot irrigation systems are the … Continue reading
1. \(x^3 - 3x^3 - 10x = 0\)
2. \((1 + r)^n\)
3. \((5.7 \times 10^{-8}) \times (1.6 \times 10^{12}) = 9.12 \times 10^4\)
4. \(\pi L(1-\alpha)R^2 = 4\pi\sigma T^4R^2\)
5. \(12 \text{ km} \times \frac{0.6 \text{ mile}}{1 \text{ km}} \approx 7.2 \text{ miles}\) … Continue reading
Topic: Consistent and increasing decline of the water levels and depth of water table in the Great Plains Aquifers, specifically the Ogallala Aquifer. The idea is to use the water more efficiently and sparingly so it can last much longer. … Continue reading |
Journal of Symbolic Logic J. Symbolic Logic Volume 64, Issue 2 (1999), 769-774. Ordinal Inequalities, Transfinite Induction, and Reverse Mathematics Abstract
If $\alpha$ and $\beta$ are ordinals, $\alpha \leq \beta$, and $\beta \nleq \alpha$, then $\alpha + 1 \leq \beta$. The first result of this paper shows that the restriction of this statement to countable well orderings is provably equivalent to ACA$_0$, a subsystem of second order arithmetic introduced by Friedman. The proof of the equivalence is reminiscent of Dekker's construction of a hypersimple set. An application of the theorem yields the equivalence of the set comprehension scheme ACA$_0$ and an arithmetical transfinite induction scheme.
Article information
Source: J. Symbolic Logic, Volume 64, Issue 2 (1999), 769-774.
Dates: First available in Project Euclid: 6 July 2007
Permanent link to this document: https://projecteuclid.org/euclid.jsl/1183745808
Mathematical Reviews number (MathSciNet): MR1777785
Zentralblatt MATH identifier: 0930.03085
JSTOR: links.jstor.org
Citation
Hirst, Jeffry L. Ordinal Inequalities, Transfinite Induction, and Reverse Mathematics. J. Symbolic Logic 64 (1999), no. 2, 769--774. https://projecteuclid.org/euclid.jsl/1183745808 |
Every Weakly Convergent Sequence In X Is Norm Bounded
Table of Contents
Every Weakly Convergent Sequence in X is Norm Bounded
Theorem 1: Let $X$ be a normed linear space. If $(x_n)$ weakly converges to $x \in X$ then there exists an $M > 0$ such that $\| x_n \| \leq M$ for every $n \in \mathbb{N}$. Proof: Let $(x_n)$ weakly converge to $x$. Then for every $f \in X^*$:
\begin{align} \quad \lim_{n \to \infty} f(x_n) = f(x) \end{align}
In particular, for each $n \in \mathbb{N}$, we have that for each $f \in X^*$:
\begin{align} \quad \lim_{n \to \infty} \hat{x}_n (f) = \hat{x}(f) \end{align}
That is, $(\hat{x}_n)$ converges pointwise to $\hat{x}$ on $X^*$. Since $X^* = B(X, \mathbb{R})$ is a Banach space and since for every $f \in X^*$ we have that:
\begin{align} \quad \sup_{n \geq 1} | \hat{x}_n(f) | = \sup_{n \geq 1} |f(x_n)| < \infty \end{align}
(since $f(x_n)$ converges to $f(x)$), we have by the Uniform Boundedness Principle that:
\begin{align} \quad \sup_{n \geq 1} \| \hat{x}_n \| < \infty \end{align}
On The Natural Embedding J page we saw that $\| \hat{x}_n \| = \| x_n \|$ for every $n \in \mathbb{N}$, and so $\sup_{n \geq 1} \| x_n \| < \infty$. So there exists an $M > 0$ such that $\| x_n \| \leq M$ for all $n \in \mathbb{N}$. $\blacksquare$
This question already has an answer here:
I've been trying to learn relativity from Weinberg's
Gravitation and Cosmology without very good knowledge of the mathematical background. I am developing it alongside, but there is one particular point where I am stuck and could use help.
While introducing Lorentz invariance and Lorentz transforms, he starts with defining a coordinate transform which we shall call Lorentz transforms as $$x^{'\alpha} = \Lambda^{\alpha}_{\ \beta} x^{\beta} + a^{\alpha}$$
where $\Lambda^{\alpha}_{\ \beta}$ is subject to the following condition: $$\Lambda^{\alpha}_{\ \gamma} \Lambda^{\beta}_{\ \delta} \eta_{\alpha \beta} = \eta_{\gamma\delta}$$
He assumes it to be true for a while, proves the invariance of proper time. He now assumes arbitrary co-ordinate transforms and invariance of proper time to get the equation,
$$0 = \frac{\partial ^2x^{'\alpha}}{\partial x^{\gamma} \partial x^{\epsilon}}$$
He says the general solution of this equation is the first equation I wrote, and that putting that here would yield the second equation.
I don't see how. What exact mathematical topic do I need for this? Or a hint towards the actual solution would help too! I am not very much interested in solving this, but even if I can obtain the second equation by putting the first one into the third equation, I will be content for now.
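For what it's worth, the defining condition can at least be checked numerically. Below is a sketch (mine, not from Weinberg) verifying that a concrete boost matrix satisfies $\Lambda^{\alpha}_{\ \gamma} \Lambda^{\beta}_{\ \delta} \eta_{\alpha \beta} = \eta_{\gamma\delta}$, which in matrix form reads $\Lambda^T \eta \Lambda = \eta$; note also that the vanishing second derivative $\partial^2 x'^{\alpha}/\partial x^{\gamma}\partial x^{\epsilon} = 0$ is exactly the statement that $x'(x)$ is affine, i.e. of the form $\Lambda x + a$:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric, Weinberg's signature

def boost_x(phi: float) -> np.ndarray:
    """Lorentz boost along x with rapidity phi (an affine map has zero
    second derivatives, so it automatically solves the third equation)."""
    L = np.eye(4)
    L[0, 0] = L[1, 1] = np.cosh(phi)
    L[0, 1] = L[1, 0] = np.sinh(phi)
    return L

L = boost_x(0.7)
# Lambda^T eta Lambda = eta  <=>  Lambda^a_c Lambda^b_d eta_ab = eta_cd
print(np.allclose(L.T @ eta @ L, eta))  # True
```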
I have a question, more related to a mathematical aspect of physics, which seems I am not understanding very well.
So, by applying a Galilean transformation between two reference frames, which move with velocity $\vec{\epsilon}$ relative to each other, the Lagrangians of a free particle seen from these two systems differ by
$$\Delta L=\frac{\partial L}{\partial (v^2)}2\vec{v}\cdot\vec{\epsilon}, \qquad \vec{v}=\frac{d \vec{x}}{d t},\qquad v:=|\vec{v}|. $$
On the other hand, it has to be
$$\Delta L=\frac{d F}{d t}.$$
Now in many texts, I see the argument that this is true only if $\frac{\partial L}{\partial (v^2)}$ is independent of $v$. It might be due to my lack of math skills, but this is not obvious to me.
Example. On the contrary, let's assume that $$\frac{\partial L}{\partial (v^2)}=a v$$ and 1D motion; then we have the condition
$$\frac{d F}{d t}=2\epsilon a\frac{d x}{d t}\left|\frac{d x}{d t}\right|$$
while
$$\frac{d F}{d t}=\frac{\partial F}{\partial t}+\sum_i\frac{\partial F}{\partial u_i}\frac{d u_i}{d t}$$
Where $u_i$ are all possible time dependent functions, on which $F$ is dependent.
Could anyone help me, and explain why we can't express any of the partial derivatives in $\frac{d F}{d t}$ via $\frac{d x}{d t}\left|\frac{d x}{d t}\right|$?
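To see what does work in the standard case: for $L = \frac{1}{2}mv^2$, where $\frac{\partial L}{\partial (v^2)} = \frac{m}{2}$ is constant, the first-order difference $\Delta L = m\vec{v}\cdot\vec{\epsilon}$ really is a total time derivative, namely of $F = m\,\vec{\epsilon}\cdot\vec{x}$. A quick SymPy check of this (my own sketch, in 1D):

```python
import sympy as sp

t, m, eps = sp.symbols('t m epsilon')
x = sp.Function('x')(t)
v = sp.diff(x, t)

# Galilean shift v -> v + eps of L = m v^2 / 2; keep the term linear in eps
L = m * v**2 / 2
dL = sp.expand(L.subs(v, v + eps) - L).coeff(eps, 1) * eps  # = m*v*eps

# This equals the total time derivative of F = m*eps*x(t)
F = m * eps * x
print(sp.simplify(dL - sp.diff(F, t)))  # 0
```

When $\frac{\partial L}{\partial (v^2)}$ depends on $v$, the analogous $\Delta L$ depends on $v$ in a way that cannot be written as $\frac{dF}{dt}$ for any $F(x, t)$, which is the step the texts gloss over.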
References:
Landau & Lifshitz, Mechanics,$\S$3. |
Solutions to Systems of Equations by Brouwer's Fixed Point Theorem
Recall from the Brouwer's Fixed Point Theorem page the very important Brouwer's fixed point theorem which states that if $D^2$ denotes the closed unit disk then if $f : D^2 \to D^2$ is continuous then $f$ has a fixed point, that is, there exists a point $(x, y) \in D^2$ such that $f(x, y) = (x, y)$.
The general fixed point theorem states that if $f : D^n \to D^n$ is continuous then $f$ has a fixed point. And even more generally, if $A$ is homeomorphic to $D^n$ and $f : A \to A$ is continuous then $f$ has a fixed point.
Let $A \subset \mathbb{R}^n$ be homeomorphic to $D^n$. Consider a system of $n$ equations in $n$ variables:(1)
Suppose further that $f_1, f_2, ..., f_n : \mathbb{R}^n \to \mathbb{R}$ are all continuous so that the function $f : \mathbb{R}^n \to \mathbb{R}^n$ defined by:(2)
is continuous. Suppose further that:(3)
Then we can apply the Brouwer's fixed point theorem to the function $f |_A : A \to A$ to obtain a fixed point $(y_1, y_2, ..., y_n) \in A$ such that:(4)
That is, for each $1 \leq i \leq n$:(5)
Therefore $(y_1, y_2, ..., y_n)$ is a solution to the system $(*)$. Observe that this method allows us to determine the existence of solutions to particular systems of equations, but it doesn't actually guarantee that the solution is easy to obtain. Let's look at an example.
Example 1 Show that the system $\left\{\begin{matrix}\sin(xy + 1) - x = 0\\ \cos(x + 2y^2 + 1) - y= 0 \end{matrix}\right.$ has a solution.
Consider the closed disk of radius $2$, $2D^2 = \{ (x, y) \in \mathbb{R}^2 : x^2 + y^2 \leq 4 \}$. Define a function $f : \mathbb{R}^2 \to \mathbb{R}^2$ by:(6)
Then for all $(x, y) \in 2D^2$ we have that:(7)
Therefore $f (2D^2) \subseteq 2D^2$. Furthermore, $f$ is continuous. Hence by Brouwer's fixed point theorem there exists a point $(a, b) \in 2D^2$ such that $f(a, b) = (a,b)$, that is:(8)
Therefore:(9)
So $(a, b)$ is a solution to the original system. |
I want to compute the average kinetic energy of a particle at a certain temperature T given by the Hamiltonian:
$$ H = \sum_{i=1}^{N}\frac{\mathbf{p}_i^{2}}{2m}+V(\mathbf{r}_{1},...\mathbf{r}_{N}), $$ and show that the average kinetic energy is:
$$ \langle K_{i}\rangle=\frac {3}{2}k_{b}T $$
To start with I would compute the single particle partition function for a certain configuration of all the other particles.
$$ Q_{1}=\int \exp\left ( -\frac{\mathbf{p}_{i}^{2}/2m+V(\mathbf{r}_{1},\dots,\mathbf{r}_{N})}{k_{b}T} \right )d^3p\,d^{3N}q=\int \exp\left ( -\frac{\mathbf{p}_{i}^{2}}{2mk_{b}T} \right )d^3p \cdot \int \exp \left( -\frac{V(\mathbf{r}_{1},\dots,\mathbf{r}_{N})}{k_{b}T} \right )d^{3N}q $$
The integral over the potential part takes some value $P$ depending on the considered configuration. This will factor out in the end anyway. The kinetic part is just a Gaussian integral and hence the single particle partition function is $$ Q_{1}=P \cdot (2m\pi k_{b}T)^{3/2} $$
Now from the definition of the canonical partition function I can write the average kinetic energy as a derivative
$$ \langle K_{i}\rangle = -\frac{k_{b}T}{2m}\frac{\partial}{\partial\frac{1}{2m}}log \left ( P\int exp\left ( -\frac{\mathbf{p}_{i}^{2}}{2mk_{b}T} \right )d^3p \right )=\frac {P\int\mathbf{p}_{i}^{2}exp\left ( -\frac{\mathbf{p}_{i}^{2}}{2mk_{b}T} \right )d^3p}{Q_{1}} $$.
This factors out the potential part of the partition function, and hence I would just have to apply the derivative to the term
$$ Q^{'}_{1}=(2m\pi k_{b}T)^{3/2} $$
and hence: $$ \langle K_{i}\rangle = -\frac{k_{b}T}{2m}\frac{\partial}{\partial\frac{1}{2m}}log(Q^{'}_{1}) $$
My problem now is that I don't really know how to compute this derivative, since $2m$ occurs as a reciprocal in the derivative but not as a reciprocal in the term to differentiate. I am also not 100% sure whether this approach works. Does anybody know how to compute this derivative? Or, if I am completely wrong, what would be a nice way to do this computation?
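Independently of how the derivative is arranged, the target result can be checked numerically. In units with $m = k_B T = 1$, the ratio of Gaussian integrals gives exactly $3/2$. A sketch of my own, using spherical momentum coordinates so that $d^3p \to 4\pi p^2\,dp$ (the $4\pi$ cancels in the ratio):

```python
import numpy as np
from scipy.integrate import quad

m = kBT = 1.0  # work in units where m = k_B * T = 1

# Radial Boltzmann weight p^2 exp(-p^2 / 2mkT), and the same weighted by p^2/2m
boltzmann = lambda p: p**2 * np.exp(-p**2 / (2 * m * kBT))
weighted  = lambda p: (p**2 / (2 * m)) * boltzmann(p)

Z, _ = quad(boltzmann, 0, np.inf)  # normalization
K, _ = quad(weighted, 0, np.inf)   # unnormalized <p^2/2m>
print(K / Z)  # ~1.5, i.e. (3/2) k_B T
```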
Calculate the volume of an object given by a set of points.
Object: Volume: calculated actual
The volume is calculated by using the Divergence theorem.
$\displaystyle\iiint_V\left(\mathbf{\nabla}\cdot\mathbf{F}\right)\,dV= \iint_S (\mathbf{F}\cdot\mathbf{n})\,dS$
where the left hand integral is over the internal volume and the right hand integral is over the surface, $\mathbf{n}$ is the outward unit length surface normal, and $\nabla\cdot$ is the divergence: $\operatorname{div}\,\mathbf{F} = \nabla\cdot \mathbf{F} = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y},\frac{\partial}{\partial z}\right) \cdot \mathbf{F}$
If $F$ is the field giving the position vector $\mathbf{F}(x,y,z) = x\mathbf{i} + y\mathbf{j} + z\mathbf{k}$ then $\operatorname{div}\,\mathbf{F} = 3$, so the left hand integral is just $\iiint_V 3\,dV$, three times the volume.
Hence to find the volume of a surface we just need to calculate $\iint_S (\mathbf{F}\cdot\mathbf{n})\,dS$. If the surface is a polyhedron with flat polygonal faces $P_1, \ldots, P_n$ the integral can be reduced to a sum over all the faces. The dot product $\mathbf{F} \cdot \mathbf{n}$ is constant for all points on a face, so the integral over any face $P$ is just $area(P)\ \mathbf{v} \cdot \mathbf{n}$ where $\mathbf{v}$ is any vertex of the face. Hence the volume is given by
$\displaystyle\frac13 \sum_{i=1}^n area(P_i)\ \mathbf{v}_i \cdot \mathbf{n}_i$
where $\mathbf{v}_i$ is a vertex of the i-th face and $\mathbf{n}_1, \ldots, \mathbf{n}_n$ are the outward pointing unit normals.
If $\mathbf{a}_i$, $\mathbf{b}_i$, $\mathbf{c}_i$ are the three vertices of a triangular face, the (non unit length) normal can be calculated as $\mathbf{n}_i = (\mathbf{b}_i-\mathbf{a}_i)\times(\mathbf{c}_i-\mathbf{a}_i)$. The unit length normal is $\hat{\mathbf{n}}_i = \mathbf{n}_i/ |\mathbf{n}_i|$ and the area of the triangle is $\frac12|\mathbf{n}_i|$. The dot product becomes $\mathbf{a}_i \cdot \hat{\mathbf{n}}_i$. The whole volume becomes
$\displaystyle\frac16 \sum_{i=1}^n \pm \mathbf{a}_i \cdot \mathbf{n}_i$,
where the signs are chosen so the normals are outwards.
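The triangle formula translates directly into code. A minimal sketch (my own, using NumPy), tested on a right tetrahedron whose volume is known to be $1/6$:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume of a closed triangulated surface via (1/6) * sum_i a_i . n_i,
    where n_i = (b_i - a_i) x (c_i - a_i). Faces must be oriented with
    outward-pointing normals (counter-clockwise seen from outside)."""
    V = np.asarray(vertices, dtype=float)
    total = 0.0
    for ia, ib, ic in faces:
        a, b, c = V[ia], V[ib], V[ic]
        n = np.cross(b - a, c - a)  # non-unit outward normal, |n| = 2 * area
        total += np.dot(a, n)
    return total / 6.0

# Right tetrahedron (0,0,0), (1,0,0), (0,1,0), (0,0,1): volume is 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]  # all outward-oriented
print(mesh_volume(verts, faces))  # 0.1666... = 1/6
```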
David Seed Fri Jun 17 2016
please give a simple worked example, with a diagram. I presume you mean any vertex in the i-th face, and that v·n represents the vertical distance from the face to the origin. But perhaps the outward-facing sense requires the origin to be outside the solid? I don't
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the
way in which we match up these two objects, to see that they look the same.
For example, any two of these squares look the same after you rotate and/or reflect them:
An isomorphism between two of these squares is a
process of rotating and/or reflecting the first so it looks just like the second.
As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse:
Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that
and
I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\).
Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse.
Now we're ready for isomorphisms!
Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\).
Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like!
What's an isomorphism in the category \(\mathbf{3}\)? Remember, this is a free category on a graph:
The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2:
$$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1:
$$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms:
$$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism!
In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism.
We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a
preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then both \(f\) and \(g\) are identity morphisms, so \(x = y\).
Puzzle 144 says that in a poset, the only isomorphisms are identities.
Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions.
Puzzle 145. Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\).
So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them.
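A concrete way to play with Puzzles 145-146: represent a function between finite sets as a Python dict and try to build its inverse. A reader's sketch; note it treats the codomain as the dict's set of values, so it only needs to check injectivity:

```python
def inverse(f: dict):
    """Given a function between finite sets as a dict, return its inverse
    if f is a bijection (an isomorphism in Set), else None."""
    g = {v: k for k, v in f.items()}
    if len(g) != len(f):   # a repeated value means f is not one-to-one
        return None
    return g

f = {'a': 1, 'b': 2, 'c': 3}           # a bijection {a,b,c} -> {1,2,3}
g = inverse(f)
print(all(g[f[k]] == k for k in f))    # g . f = identity: True
print(all(f[g[v]] == v for v in g))    # f . g = identity: True
print(inverse({'a': 1, 'b': 1}))       # not injective: None
```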
One more example:
Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism.
This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the
isomorphisms deserve to be called 'natural isomorphisms'.
But what are they like?
Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism
$$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes:
Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism
$$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that
$$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means
$$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\).
In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\).
But the converse is true, too! It takes a
little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147. Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism.
Doing this will help you understand natural isomorphisms. But you also need examples!
Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal!
We should talk about this. |
I'm trying to do propagation of error using the linearized variance method (assuming independent variables, thus no need for the covariance terms):
$$\sigma^2_f = \sum^n_{k=0} \left(\frac{\partial f}{\partial x_k}\right)^2 \sigma^2_{x_k}$$
However, I have a nasty function that doesn't give me a clear-cut explicit definition of my variable. For simplicity, I will just give a simple example of what I'm trying to accomplish. Take the equation
$$x - y = e^f + e^{2f} + e^{2x}$$
This is algebraically impossible to solve explicitly for $f = f(x,y)$. So, if I wanted to find the variance of $f$, I had two ideas (one of them backfired...). First, make a new function equal to zero,
$$g(x,y,f) = e^f + e^{2f} + e^{2x} + y - x = 0$$
That way I could easily find the partial derivatives, and the variance of this new function would be zero, since its value always equals zero.
$$\sigma^2_g = 0$$
Unfortunately, this backfired on me (after I did the 26 partial derivatives, ouch) as you can see with
$$\sigma^2_g = \left(\frac{\partial g}{\partial x}\right)^2 \sigma^2_x + \left(\frac{\partial g}{\partial y}\right)^2 \sigma^2_y + \left(\frac{\partial g}{\partial f}\right)^2 \sigma^2_f$$
If you set the variance of g to zero, then you could solve for the variance of f, and be a happy camper! Wrong. Because all the terms are squared, there is NO WAY they can add up to be zero unless they are all identically zero. That really messed me up.
The other idea I had was to take the original equation and perform the partial differentiation with respect to $x$ and $y$ on each side of the equation, then solve for the partial derivative quantities. That would require me to do a complete overhaul of all my work.
My question then: is there any way to use the first method I thought of, just modifying my steps? Or maybe a third way? If not, then I will be surprised, since mathematics usually has a way to solve such twisted scenarios. Please advise soon, as I need to finish this up quickly.
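For what it's worth, the second idea is just implicit differentiation, $\frac{\partial f}{\partial x} = -\frac{\partial g/\partial x}{\partial g/\partial f}$, and it can be automated, so no complete overhaul is needed. (The first method fails because $f$ is functionally dependent on $x$ and $y$, so treating all three as independent and dropping the covariance terms is not valid.) A sketch of mine, using SymPy on the toy equation above:

```python
import sympy as sp

x, y, f = sp.symbols('x y f')
sx, sy = sp.symbols('sigma_x sigma_y', positive=True)

# The implicit constraint g(x, y, f) = 0 from the question
g = sp.exp(f) + sp.exp(2*f) + sp.exp(2*x) + y - x

# Implicit differentiation: df/dx = -(dg/dx)/(dg/df), df/dy = -(dg/dy)/(dg/df)
df_dx = -sp.diff(g, x) / sp.diff(g, f)
df_dy = -sp.diff(g, y) / sp.diff(g, f)

# Linearized (first-order) variance of f, assuming independent x and y
var_f = df_dx**2 * sx**2 + df_dy**2 * sy**2
print(sp.simplify(df_dy))
```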
Is there a way to calculate the coefficient of kinetic friction given the coefficient of static friction? Is there any direct relationship between the two or is it completely different between materials?
The kinetic friction coefficient is, in general, smaller than the static one. As far as I know, there is no way to calculate one from the other, but they usually do not differ by an order of magnitude. These are purely empirical quantities.
I do have a simple setup which you can use to find the coefficient of kinetic friction.
Take a block having mass $M $ on a rough horizontal surface. Apply force on the block in horizontal direction till the block just starts to move. This force would be $$F=\mu_sMg $$
Once the block starts moving kinetic friction will act on the block. Since coefficient of kinetic friction is numerically less than that of static there will be unbalancing of forces thus causing an acceleration, say $a $. If by any means you can calculate the acceleration then by force equation we will have-
$$F-\mu_kMg =Ma $$ $$\Rightarrow \mu_k = \mu_s - \frac {a}{g} $$
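The final relation is a one-liner in code; a reader's illustration of the setup above:

```python
def mu_kinetic(mu_static: float, a: float, g: float = 9.8) -> float:
    """Kinetic friction coefficient from the sliding-block experiment:
    F = mu_s * M * g just starts the block moving; once moving,
    F - mu_k * M * g = M * a, hence mu_k = mu_s - a / g."""
    return mu_static - a / g

print(mu_kinetic(0.50, a=0.98))  # ~0.4
```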
I believe this might just exist on paper; practically it seems a bit difficult to carry out. Hence @freecharly's answer suits.
I was reading the old, but still interesting paper "The volatility smile and its implied tree" by Derman and Kani. I have a two questions about the derivation of the $2n+1$ equations, both of them regarding Arrow-Debreu price. On page 6, is shown in figure 4 how the model is set up. On the next page the authors writes down:
$$C(K,t_{n+1})=\exp{(-r\Delta t)}\sum_{j=1}^n\{\lambda_jp_j+\lambda_{j+1}(1-p_{j+1})\}\max{\{S_{j+1}-K,0\}}$$
where $C(K,t_{n+1})$ denotes the price of a call option with strike $K$ and expiry $t_{n+1}$. The first factor in each summand involves the probabilities. However, since $p_j$ is already the risk-neutral probability, why do the authors multiply it by $\lambda_j$? $\lambda_j$ is the known Arrow-Debreu price at node $(n,i)$. I've never heard of an Arrow-Debreu price. After checking the web it is still unclear to me what the reason is for this equation.
Moreover using the above equation, there should be a $\lambda_{n+1}$, which is not the case! So is it just set to $0$? |
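An Arrow-Debreu price $\lambda_j$ is the price today of a security paying 1 if node $(n,j)$ is reached and 0 otherwise; on a binomial tree this equals the discounted risk-neutral probability of reaching that node. The $p_j$ alone are one-step transition probabilities from time $t_n$, so multiplying by $\lambda_j$ is what carries the valuation all the way back to today. Arrow-Debreu prices are built by forward induction; a sketch of mine on a plain recombining tree with constant $p$ (the Derman-Kani tree uses node-dependent $p_j$, but the recursion has the same shape):

```python
import math

def arrow_debreu(n_steps: int, p: float, r: float, dt: float):
    """Forward induction for Arrow-Debreu prices on a recombining binomial
    tree with constant up-probability p. lam[j] = price today of receiving
    1 at node (n_steps, j), where j counts up-moves."""
    disc = math.exp(-r * dt)
    lam = [1.0]  # root: receiving 1 today is worth 1
    for _ in range(n_steps):
        new = [0.0] * (len(lam) + 1)
        for j, l in enumerate(lam):
            new[j]     += disc * l * (1 - p)  # down move
            new[j + 1] += disc * l * p        # up move
        lam = new
    return lam

lam = arrow_debreu(n_steps=4, p=0.55, r=0.05, dt=0.25)
# A claim paying 1 in every terminal state is a zero-coupon bond:
print(abs(sum(lam) - math.exp(-0.05 * 4 * 0.25)) < 1e-12)  # True
```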
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly:
Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints.
Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints.
Today we'll conclude our discussion of Chapter 1 with two more bombshells:
Joins
are left adjoints, and meets are right adjoints.
Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down.
This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world!
Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders.
In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets.
Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint! But if you examine the proof, you'll see we don't really need \( A \) to have
all joins: it's enough that all the joins in this formula exist:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have
all meets: it's enough that all the meets in this formula exist:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes.
Suppose \(A\) is a poset with all binary joins. Then we get a function
$$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows:
$$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that
$$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the
diagonal
$$ \Delta : A \to A \times A $$sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called
duplication, since it duplicates any element of \(A\).
Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact:
$$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression on the left here, and applying \( \Delta \) to \( b \) in the expression on the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \).
Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \). Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \).
A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function
$$ \wedge : A \times A \to A $$that's the
right adjoint of \( \Delta \). This is just a clever way of saying
$$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check.
Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number.
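Since these adjunction facts are finitely checkable on any finite lattice, a brute-force check is easy to write. Here is a small Python sketch (my own illustration, not from the lecture) using the divisors of 12 ordered by divisibility, where binary join is lcm and binary meet is gcd:

```python
from math import gcd
from itertools import product

# Divisors of 12, ordered by divisibility: a <= b means a divides b.
A = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0
lcm = lambda a, b: a * b // gcd(a, b)   # binary join in this poset
# (gcd is the binary meet)

# v is left adjoint to the diagonal:  a v a' <= b  iff  a <= b and a' <= b
assert all(
    leq(lcm(a, a2), b) == (leq(a, b) and leq(a2, b))
    for a, a2, b in product(A, repeat=3)
)

# ^ is right adjoint to the diagonal:  a <= b and a <= b'  iff  a <= b ^ b'
assert all(
    (leq(a, b) and leq(a, b2)) == leq(a, gcd(b, b2))
    for a, b, b2 in product(A, repeat=3)
)
print("both adjunctions verified on the divisor poset of 12")
```

This checks every triple, so it really is the full adjunction condition on this poset, not just a few instances.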
All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on.
Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by
$$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short.
I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason.
Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset.

Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again.

Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \).

Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \).

Puzzle 51. Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}} \).
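For Puzzle 49 in particular, a brute-force illustration is easy to write. The sketch below (my own addition, with hypothetical helper names `join` and `meet`) computes least upper bounds and greatest lower bounds directly from an order relation, and confirms that joins with respect to the order on \(A\) coincide with meets with respect to the opposite order, on the divisor poset of 12:

```python
from itertools import combinations

D = [1, 2, 3, 4, 6, 12]                    # divisors of 12, ordered by divisibility
leq    = lambda a, b: b % a == 0           # the order on A
leq_op = lambda a, b: leq(b, a)            # the opposite order, i.e. the order on A^op

def join(S, order):
    """Least upper bound of S for the given order (None if it doesn't exist)."""
    ubs = [u for u in D if all(order(s, u) for s in S)]
    least = [u for u in ubs if all(order(u, v) for v in ubs)]
    return least[0] if least else None

def meet(S, order):
    """Greatest lower bound of S for the given order (None if it doesn't exist)."""
    lbs = [l for l in D if all(order(l, s) for s in S)]
    greatest = [l for l in lbs if all(order(m, l) for m in lbs)]
    return greatest[0] if greatest else None

# Puzzle 49 in action: joins in A are exactly meets in A^op.
for r in range(1, 4):
    for S in combinations(D, r):
        assert join(S, leq) == meet(S, leq_op)
print("joins in A agree with meets in A^op")
```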
So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called
duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs.
Once you start looking you can find duality everywhere, from ancient Chinese philosophy:
to modern computers:
But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality!
This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises. |
Quadratic programming (QP) involves minimizing or maximizing an objective function subject to bounds, linear equality, and inequality constraints. Example problems include portfolio optimization in finance, power generation optimization for electrical utilities, and design optimization in engineering.
Quadratic programming is the mathematical problem of finding a vector \(x\) that minimizes a quadratic function:
\[\min_{x} \left\{\frac{1}{2}x^{\mathsf{T}}Hx + f^{\mathsf{T}}x\right\}\]
Subject to the constraints:
\[\begin{eqnarray}Ax \leq b & \quad & \text{(inequality constraint)} \\ A_{eq}x = b_{eq} & \quad & \text{(equality constraint)} \\ lb \leq x \leq ub & \quad & \text{(bound constraint)}\end{eqnarray}\]
The following algorithms are commonly used to solve quadratic programming problems:
Interior-point-convex: solves convex problems with any combination of constraints

Trust-region-reflective: solves bound-constrained or linear equality constrained problems
For more information about quadratic programming, see Optimization Toolbox™. |
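As a concrete illustration (a minimal sketch of my own, not using the toolbox above), here is a small QP solved with SciPy's general-purpose SLSQP solver. The matrices are an arbitrary illustrative instance; note that SciPy's `"ineq"` constraints use the convention `fun(x) >= 0`, so $Ax \leq b$ is expressed as $b - Ax \geq 0$:

```python
import numpy as np
from scipy.optimize import minimize

# A small QP:  min 1/2 x'Hx + f'x  subject to  Ax <= b,  x >= 0.
H = np.array([[1.0, -1.0], [-1.0, 2.0]])     # positive definite
f = np.array([-2.0, -6.0])
A = np.array([[1.0, 1.0], [-1.0, 2.0], [2.0, 1.0]])
b = np.array([2.0, 2.0, 3.0])

obj = lambda x: 0.5 * x @ H @ x + f @ x
res = minimize(
    obj, x0=np.zeros(2), method="SLSQP",
    jac=lambda x: H @ x + f,
    bounds=[(0, None), (0, None)],
    constraints=[{"type": "ineq",
                  "fun": lambda x: b - A @ x,    # b - Ax >= 0, i.e. Ax <= b
                  "jac": lambda x: -A}],
)
print(res.x)   # ≈ [0.6667, 1.3333]
```

Since $H$ is positive definite the problem is convex, so the local solution SLSQP returns is the global minimizer.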
For an Alexandrov space $M$ with curvature bounded from below, the isoperimetric profile $v \to I_M(v)$, defined for every $v\in (0,V(M))$ (the volume of $M$ might be infinite), is given by $$ I_M(v)=\inf\{A(\partial D): V(D)=v, \ D \subset \subset M\}, $$ where $D$ varies over relatively compact open subsets of $M$.
Then given any $v\in (0,V(M))$, is there a subset D with $A(\partial D)=I_M(v)$?
And is there any description or regularity theorem for $D$ and $\partial D$?
If there are no results for general $n$, then what about the two-dimensional case?
I know that the two functions which describe the state of a particle in an infinite square well (on the interval $-d/2<x<d/2$) are:
\begin{align} \psi_{even}&= \sqrt{\frac{2}{d}}\sin\left(\frac{N\pi x}{d}\right)\quad N=2,4,6\dots\\ \psi_{odd} &= \sqrt{\frac{2}{d}}\cos\left(\frac{N\pi x}{d}\right)\quad N=1,3,5\dots \end{align}
Q1: I am a bit unsure, but correct me if I am wrong. I assume that if I want to calculate the ground state $N=1$, I have to take the odd-$N$ solution and set $N=1$, like this:
$$ \psi = \sqrt{\frac{2}{d}}\cos\left(\frac{1\pi x}{d}\right) $$
but if I want to calculate the state $N=2$, I must take the even-$N$ function and set $N=2$, like this:
$$ \psi = \sqrt{\frac{2}{d}}\sin\left(\frac{2\pi x}{d}\right) $$
Is this correct? Is there any need to superpose the odd and even functions or anything like that?
Q2: Now let's say I have to calculate $\langle x^2\rangle$ for the ground state. Do I do it like this?
$$\int\limits_{-d/2}^{d/2}\sqrt{\frac{2}{d}}\cos\left(\frac{1\pi x}{d}\right) x^2\sqrt{\frac{2}{d}}\cos\left(\frac{1\pi x}{d}\right) dx$$
Or for the 1st excited state:
$$\int\limits_{-d/2}^{d/2}\sqrt{\frac{2}{d}}\sin\left(\frac{2\pi x}{d}\right) x^2\sqrt{\frac{2}{d}}\sin\left(\frac{2\pi x}{d}\right) dx$$
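A quick numerical evaluation of the two integrals above (my own sketch, taking $d=1$; the results scale as $d^2$):

```python
import numpy as np
from scipy.integrate import quad

d = 1.0  # well width (arbitrary choice; <x^2> scales as d^2)

def psi(x, N):
    # cos for odd N, sin for even N, on the interval (-d/2, d/2)
    trig = np.cos if N % 2 == 1 else np.sin
    return np.sqrt(2 / d) * trig(N * np.pi * x / d)

def x2_expect(N):
    # <x^2> = integral of psi * x^2 * psi over the well
    val, _ = quad(lambda x: psi(x, N) * x**2 * psi(x, N), -d / 2, d / 2)
    return val

print(x2_expect(1))  # = d^2 (1/12 - 1/(2 pi^2)) ≈ 0.03267 d^2
print(x2_expect(2))  # = d^2 (1/12 - 1/(8 pi^2)) ≈ 0.07067 d^2
```

The printed values match the closed forms $d^2\left(\frac{1}{12} - \frac{1}{2N^2\pi^2}\right)$ obtained by doing the integrals analytically.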
Q3: Is $\langle x^2\rangle$ in general the same for the intervals $-d/2<x<d/2$ and $0<x<d$?
Thank you! |
The query oracle: $O_{x}|i\rangle|b\rangle = |i\rangle|b \oplus x_{i}\rangle$ used in algorithms like Deutsch Jozsa is unitary. How do I prove it is unitary?
Notice that $\mathcal O_x$ is a permutation matrix.
The matrix elements are $$\langle j, c\rvert\mathcal O_x\lvert i,b\rangle =\delta_{ij}\langle c\rvert b\oplus x_i\rangle =\delta_{ij}\delta_{c,b\oplus x_i}.$$ In other words, $\mathcal O_x$ is diagonal with respect to the first register, and, for each block corresponding to a given $i$, connects all and only the indices $b,c$ such that $b\oplus c=x_i$ (remember that here $b,c,x_i\in\mathbb Z_2^{\otimes n}$ are length-$n$ bit strings). Also, notice that for a given $b$ there is exactly one such $c$, namely $c=b\oplus x_i$ (which equals $b$ itself if $x_i=0$, and differs from $b$ otherwise).
It follows that $\mathcal O_x$ is a (real) permutation matrix, and such matrices are always unitarily diagonalizable with unit eigenvalues (and therefore unitary). In this case, we have that the eigenvalues of $\mathcal O_x$ are $\pm1$, and the eigenvectors are, in the case $x_i\neq 0$, $$\lvert i\rangle\otimes(\lvert b\rangle\pm\lvert b\oplus x_i\rangle)$$ for all $i$ and $b$. If $x_i=0$ then $\mathcal O_x$ is the identity, and therefore its spectrum is trivial${}^\dagger$.
This shows explicitly that $\mathcal O_x$ is unitarily diagonalizable with unit eigenvalues, and therefore is unitary.
${}^\dagger$ I'm actually being a bit sloppy for the sake of simplicity here. This analysis holds for each different block of $\mathcal O_x$ corresponding to a given $i$. More precisely, I should say that $\mathcal O_x$ is block-diagonal as it doesn't connect spaces with $i\neq j$ on the first register, and each block is either the identity in the subspace in which it acts if $x_i=0$, or a permutation matrix that connects different pairs of basis states if $x_i\neq 0$.
Apply it twice: $$ O_xO_x|i\rangle|b\rangle=O_x|i\rangle|b\oplus x_i\rangle=|i\rangle|b\oplus x_i\oplus x_i\rangle=|i\rangle|b\rangle $$ Hence, $O_x$ is its own inverse, and therefore reversible.
To prove unitarity, it makes more sense to prove that $O_x$ has eigenvectors $$ |i\rangle(|0\rangle+|1\rangle)\quad\text{and}\quad|i\rangle(|0\rangle-|1\rangle) $$ for all $i$ with eigenvalues $1$ and $(-1)^{x_i}$ respectively. These are all orthonormal, and span the full Hilbert space. Consequently, the eigenvalues of $O_x$ are all $\pm 1$, and therefore $O_x^\star=O_x$ (where $^\star$ represents the Hermitian conjugate). Thus, $$ O_xO_x^\star=\mathbb{I}, $$ as required for a unitary. |
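Both arguments are easy to verify numerically. The sketch below (my own illustration) builds $O_x$ explicitly as a permutation matrix for a single-bit second register, $O_x|i\rangle|b\rangle = |i\rangle|b \oplus x_i\rangle$, and checks that it is symmetric, self-inverse, and hence unitary:

```python
import numpy as np
from itertools import product

n = 3
x = [0, 1, 1, 0, 1, 0, 0, 1]          # an arbitrary bit string x_0 .. x_{2^n - 1}

# Build O_x on basis states |i>|b>, encoding |i>|b> as index 2*i + b:
# O_x |i>|b> = |i>|b XOR x_i>
N = 2 ** n
O = np.zeros((2 * N, 2 * N))
for i, b in product(range(N), range(2)):
    O[2 * i + (b ^ x[i]), 2 * i + b] = 1.0

assert all(row.sum() == 1 for row in O)          # a permutation matrix
assert np.array_equal(O, O.T)                    # real and symmetric, hence Hermitian
assert np.array_equal(O @ O, np.eye(2 * N))      # self-inverse
assert np.allclose(O @ O.conj().T, np.eye(2 * N))  # therefore unitary
print("O_x is a symmetric, self-inverse permutation matrix, hence unitary")
```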
How do I calculate the VaR when using an ARMA-GARCH approach? I am not good at time series, so I am more or less confused by the different possible notations of an ARMA-GARCH process. I hope the notation I use is correct.
Suppose I use the following AR(1)-GARCH(1,1):
\begin{align*} r_t=\delta + \phi_1 r_{t-1} + z_t\\ z_t = \sigma_t \epsilon_t\\ \epsilon_t \text{ is } N(0,1)\\ \sigma_t^2=\alpha_0+\alpha_1 z_{t-1}^2+\beta \sigma_{t-1}^2 \end{align*}
I mean, I would have to calculate a quantile at time point $t$. Normally, in the case of a simpler approach $r_t=\mu + \sigma_t \epsilon_t$ with $\epsilon_t \sim N(0,1)$, this would give $VaR=\mu + \hat{\sigma}_t z_\alpha$, where $z_\alpha$ is the quantile of the standard normal. How do I do this in the ARMA-GARCH case? I no longer have a constant $\mu$, but an AR process instead.
I read some papers; they all give more or less the ARMA-GARCH formula and say that they generate the forecast, but they give no formula for how to do this.
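For what it's worth, here is a minimal sketch (my own, with entirely made-up parameter values) of how a one-step-ahead VaR is typically computed from an already fitted AR(1)-GARCH(1,1): the conditional mean comes from the AR equation and the conditional variance from the GARCH recursion, and then the usual quantile formula is applied to both:

```python
import numpy as np
from scipy.stats import norm

# Illustrative (made-up) fitted parameters and current state:
delta, phi = 2e-4, 0.10            # AR(1) mean equation
a0, a1, beta = 1e-6, 0.05, 0.90    # GARCH(1,1) variance equation
r_t, z_t, sig2_t = 0.01, 0.005, 1e-4

# One-step-ahead conditional mean and variance:
mu_next   = delta + phi * r_t                    # E[r_{t+1} | F_t]
sig2_next = a0 + a1 * z_t**2 + beta * sig2_t     # Var[r_{t+1} | F_t]

# 95% VaR as the 5% quantile of the conditional return distribution:
var_95 = mu_next + np.sqrt(sig2_next) * norm.ppf(0.05)
print(var_95)   # ≈ -0.0146
```

In other words, the role of the constant $\mu$ is simply taken over by the conditional mean $\hat{\mu}_{t+1} = \delta + \phi_1 r_t$.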
Volume 47 Issue 3 / Pages 645-651 / 2010 / 1015-8634 (pISSN) / 2234-3016 (eISSN)

Abstract
In this note we study positive solutions of the $m$th order rational difference equation
Keywords
difference equations; equilibrium point; convergence of sequences; upper and lower limits
References
E. Camouzis, Global analysis of solutions of ${x_{n+1}}=\frac{{\beta}x_n+{\delta}x_{n-2}}{A+Bx_{n}+Cx_{n-1}}$, J. Math. Anal. Appl. 316 (2006), no. 2, 616-627. https://doi.org/10.1016/j.jmaa.2005.05.008

E. Camouzis and G. Ladas, Dynamics of Third-Order Rational Difference Equations with Open Problems and Conjectures, Advances in Discrete Mathematics and Applications, 5, Chapman & Hall/CRC, Boca Raton, FL, 2008.

M. R. S. Kulenovic and G. Ladas, Dynamics of Second Order Rational Difference Equations, With open problems and conjectures, Chapman & Hall/CRC, Boca Raton, FL, 2002.
J. Park, A global behavior of the positive solutions of ${x_{n+1}}=\frac{{\beta}x_n+x_{n-2}}{A+Bx_{n}+x_{n-2}}$, Commun. Korean Math. Soc. 23 (2008), no. 1, 61-65. https://doi.org/10.4134/CKMS.2008.23.1.061

Cited by

On the difference equation $x_{n+1}=ax_{n-l}+bx_{n-k}+\frac{cx_{n-s}}{dx_{n-s}-e}$, vol. 40, no. 3, 2017. https://doi.org/10.1002/mma.3980
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the
way in which we match up these two objects, to see that they look the same.
For example, any two of these squares look the same after you rotate and/or reflect them:
An isomorphism between two of these squares is a
process of rotating and/or reflecting the first so it looks just like the second.
As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse:
Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that
and
I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\).
Puzzle 140. Prove that any morphism has at most one inverse.

Puzzle 141. Give an example of a morphism in some category that has more than one left inverse.

Puzzle 142. Give an example of a morphism in some category that has more than one right inverse.
Now we're ready for isomorphisms!
Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\).
Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like!
What's an isomorphism in the category \(\mathbf{3}\)? Remember, this is a free category on a graph:
The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2:
$$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1:
$$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms:
$$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism!
In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism.
We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a
preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\).

Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic.

Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\).
Puzzle 144 says that in a poset, the only isomorphisms are identities.
Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions.
Puzzle 145. Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\).
So, in \(\mathbf{Set}\) the isomorphisms are exactly the bijections, and there are lots of them.
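To make Puzzles 145 and 146 concrete, here is a tiny Python illustration (my own) of a bijection between finite sets and its inverse, with both composites checked to be identities:

```python
# A function between finite sets, represented as a dict:
f = {1: "a", 2: "b", 3: "c"}            # a bijection {1,2,3} -> {a,b,c}

# Its inverse, obtained by flipping pairs -- possible only because f is one-to-one:
f_inv = {v: k for k, v in f.items()}

# Check that both composites are identity morphisms:
assert all(f_inv[f[x]] == x for x in f)         # f_inv after f = identity on {1,2,3}
assert all(f[f_inv[y]] == y for y in f_inv)     # f after f_inv = identity on {a,b,c}
print("f is an isomorphism in Set with inverse", f_inv)
```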
One more example:
Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism.
This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the
isomorphisms deserve to be called 'natural isomorphisms'.
But what are they like?
Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism
$$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes:
Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism
$$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that
$$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means
$$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\).
In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\).
But the converse is true, too! It takes a
little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147. Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism.
Doing this will help you understand natural isomorphisms. But you also need examples!
Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal!
We should talk about this. |
I've been pondering this question for a very long time. If a complex number can be prime, then which parts of the complex number need to be prime for the whole complex number to be prime?
The notion of being "prime" is only meaningful relative to a base ring.
For instance, in the integers $\mathbb{Z}$ the number 5 is prime, whereas in the Gaussian integers $\mathbb{Z}[i]$ we have
$$5 = (2 + i)(2 - i) = 2^2 - i^2 = 4 - (-1) = 5$$
and in the ring $\mathbb{Z}[\sqrt{5}]$ we have
$$5 = (\sqrt{5})^2$$
so over these rings 5 is
not a prime number.
The definition of
prime you're probably familiar with -- a number is prime if it is divisible only by itself and one -- doesn't even really work over the integers: for instance, 5 is divisible not only by 1 and 5, but also by -1 and -5. So we need to formulate the definition of a prime differently, while still preserving the basic idea, to make sense of it in an arbitrary ring.
Notice the following difference between primes and composites: since 5 is prime, if we have two numbers $a$ and $b$ such that $ab$ is a multiple of 5, then obviously one of $a$ or $b$ has to be a multiple of 5 just by unique factorization. On the other hand, if $ab$ is a multiple of 15, it may be the case that neither $a$ nor $b$ is a multiple of 15, because we might instead have $a$ a multiple of 3 but not 5 and $b$ a multiple of 5 and not 3. [*]
This gives us our definition of "prime" for a general ring: a ring element is
prime if it is neither zero nor a unit, and moreover has the property that whenever it divides a product it must divide at least one of the factors.
There are a great many rings contained in the complex numbers, and in many of these rings there are non-real complex numbers that are primes in that ring. However, given any number that's prime in a given ring, there's a larger ring in which it's not prime, just as we saw above that 5 is prime over the integers, but not over the Gaussian integers $\mathbb{Z}[i]$ or over $\mathbb{Z}[\sqrt{5}]$.
In particular, since $\mathbb{C}$ is a field, every nonzero element is a unit, so
nothing is prime over the complex numbers. (Similarly, nothing is prime over the real numbers, or the rational numbers.) However, I reiterate that many complex numbers are prime over smaller rings: for instance, it turns out that $2 + i$ is prime over $\mathbb{Z}[i]$.
[*] A slightly more straightforward generalization of the definition you're used to would be to look at nonzero, nonunit elements $r$ for which we only have $r = s t$ when either $s$ or $t$ is a unit. This actually gives a weaker notion called "irreducibility." In Unique Factorization Domains the two notions are the same (which explains why they're the same for the integers), but in rings like $\mathbb{Z}[\sqrt{-5}]$ where we do not have unique factorization you can have irreducible elements that are not prime. For instance, 3 is irreducible in $\mathbb{Z}[\sqrt{-5}]$, but we have $$3 \cdot 3 = 9 = (2 + \sqrt{-5})(2 - \sqrt{-5})$$ and 3 is not a divisor of $2 \pm \sqrt{-5}$, so it doesn't satisfy the definition of a prime.
Note: several answers (including this one) have brought up the Gaussian integers specifically. They're indeed an example of a subring of the complex numbers containing non-real complex numbers, but just to be clear they're in no way "the" natural example here -- they're on the same footing as all the others.
Yes, a complex number can be prime (in the traditional sense of the word). Recall that $\mathbb R \subseteq \mathbb C.$ Therefore, all numbers that you would traditionally think of as being prime are themselves complex numbers (albeit real ones). So in this case, we require of $a+bi$ that $a$ be prime (in the traditional sense) and $b=0.$
However, there is the notion of being Gaussian prime. A Gaussian prime is a Gaussian integer (a complex number $a+bi$ such that $a,b\in\mathbb Z$) satisfying one of the following:
If $a,b\neq 0,$ then $a+bi$ is Gaussian prime iff $a^2+b^2$ is (traditionally) prime;
If $a=0,$ then $bi$ is Gaussian prime iff $|b|$ is (traditionally) prime and $|b|\equiv 3 \pmod 4;$
If $b=0,$ then $a$ is Gaussian prime iff $|a|$ is (traditionally) prime and $|a|\equiv 3 \pmod 4.$
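The three cases above are easy to turn into code. Here is a small self-contained Python sketch (my own illustration, using a naive trial-division primality test) that classifies Gaussian integers accordingly:

```python
def isprime(n):
    """Naive trial-division primality test for ordinary integers."""
    if n < 2:
        return False
    k = 2
    while k * k <= n:
        if n % k == 0:
            return False
        k += 1
    return True

def is_gaussian_prime(a, b):
    """Test whether a + bi is a Gaussian prime, using the three cases above."""
    if a != 0 and b != 0:
        return isprime(a * a + b * b)       # norm must be an ordinary prime
    if a == 0:
        return isprime(abs(b)) and abs(b) % 4 == 3
    return isprime(abs(a)) and abs(a) % 4 == 3

assert is_gaussian_prime(1, 1)        # 1 + i:  norm 2 is prime
assert is_gaussian_prime(2, 1)        # 2 + i:  norm 5 is prime
assert is_gaussian_prime(3, 0)        # 3 is prime and 3 mod 4
assert not is_gaussian_prime(5, 0)    # 5 = (2+i)(2-i) is not a Gaussian prime
assert not is_gaussian_prime(2, 0)    # 2 = -i(1+i)^2 is not a Gaussian prime
assert is_gaussian_prime(4, 15)       # norm 241 is prime
print("all Gaussian prime checks passed")
```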
The concept of primality is extended to other systems, including the Gaussian integers, as irreducibility. An irreducible element is one which cannot be written as the product of non-unit elements; compare this with the definition of a prime.
This is similar in principle, but can have (initially) unexpected consequences. For example, the decomposition of natural numbers into primes is unique, but the decomposition of some of those same numbers into irreducible complex numbers is not!
As with real numbers, you can factor any complex number $z$ as $2\times\dfrac z 2$. For example, $31$ can be factored as $2\times\dfrac{31}2$. However, in defining prime numbers one is not working within the set of all
real numbers, but within the set $\{1,2,3,\ldots\}$. (And when Euclid wrote about prime numbers in the third century BC, he did not consider $1$ to be a number.)
Mathematicians sometimes work within the set $\mathbb Z= \{\ldots,-3,-2,-1,0,1,2,3,\ldots\}$, and then $(-3)\times(-2)=6$ is not considered to be a different factorization from $3\times2=6$, because you can change $3$ to $-3$ by multiplying it by a "unit", in this case by $-1$. There are two "units" in $\mathbb Z$, namely $\pm1$, and the thing that qualifies them as "units" is that they are divisors of $1$.
Mathematicians also work with "Gaussian integers", which are complex numbers of the form $a+bi$ where $a,b\in\mathbb Z$, i.e. $a,b$ are ordinary integers. Within the Gaussian integers, numbers that cannot be factored are "Gaussian primes". The number $5$ is not a Gaussian prime because $5=(2+i)(2-i)$. It is also $(1+2i)(1-2i)$, but that is not a different factorization because $1+2i$ can be reached from $2-i$ by multiplying it by a unit, namely $-i$, since $(1+2i)(-i) = 2-i$.
If you are thinking of Gaussian primes, $a+bi$ can be prime with neither $a$ nor $b$ prime. The simplest example is $1+i$. More complicated is $4+15i$. Here I used the fact that if $a^2+b^2$ is an ordinary prime, then $a+bi$ is a Gaussian prime.
The analogous concept would be to consider the Gaussian integers $g = m + ni$, $m,n \in \Bbb{Z}$, and say that $g$ is prime if there is no pair of Gaussian integers $h, k$ with $hk = g$, $|h| \neq 1$, and $|k|\neq 1$.
So for example, in the ring of Gaussian integers, $5 = (2+i)(2-i)$ is not prime.
Two caveats here:
(1) There is no simple way to tell if $g$ is prime based on whether its real and imaginary parts are prime.
(2) The Gaussian integers happen to form a unique factorization domain, but many similar rings (such as $\mathbb{Z}[\sqrt{-5}]$) do not; that is, a number there can have two non-trivially distinct factorizations. The latter property is very likely the place where Fermat's "too big for the margin" proof was flawed, because there is indeed a very clever (maybe remarkable) proof of his last theorem that uses numbers of this form and assumes unique factorization.
A prime is an integer $p>1$ that cannot be written as $p=ab$ where $a,b>1$ are integers. Clearly, any prime in this sense is
also a complex number (since $\mathbb{N}\subset\mathbb{C}$).
More abstractly, an element $p$ of a commutative ring is prime if it is nonzero, noninvertible (i.e. not a unit), and satisfies the condition $$ p\mid ab\implies p\mid a\text{ or }p\mid b. $$ You can check that this is consistent with our original definition in $\mathbb{N}$.
David points out an example in $\mathbb{Z}[i]$.
I'm a bit lost here; the question goes as follows:
Suppose that $X_k$ are i.i.d. and follow an exponential distribution with parameter $\lambda$.
Define $F_n(x) := \frac{1}{n}\sum_{k=1}^n\mathbb{1}(X_{k}\leq x)$ for $x \geq 0$.
Question: Show that $F_n(x)$ converges in probability to $1-e^{-\lambda x}$. Does it also converge in $L^{1}$ norm?
I think to show that $F_n(x)$ converges in probability we need to show that:
$\lim_{n\to\infty}P\left(\left|F_n(x)-F(x)\right|>\varepsilon\right)=0$
Therefore we can state that:
$F_n(x)\overset{p}{\to} 1-e^{-\lambda x}$ if
$$\lim_{n\to\infty}P \left(\left| \frac{1}{n}\sum_{k=1}^n\mathbb{1}(X_{k}\leq x)- \left(1-e^{-\lambda x}\right)\right|>\varepsilon\right)=0.$$
But here is where I get stuck in terms of figuring out the correct way. Could I use the CLT?
I was under the impression that the CLT gives convergence in distribution, whereas convergence in probability is stronger (except for the case where it converges to a constant, which is not the case here). Alternatively, I thought about using the weak law of large numbers and Chebyshev, but then I still need to prove the $L^1$ convergence. I already know that convergence in probability does not imply convergence in $L^p$ in general, but again, how would I show it here?
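A quick Monte Carlo illustration of the convergence in question (my own sketch, with a fixed seed), showing the error of $F_n(x)$ shrinking as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, x = 2.0, 1.0
F_true = 1 - np.exp(-lam * x)                 # the limit 1 - e^{-lambda x}

for n in [100, 10_000, 1_000_000]:
    X = rng.exponential(scale=1 / lam, size=n)   # i.i.d. Exp(lambda) samples
    F_n = np.mean(X <= x)                        # empirical CDF at x
    print(n, abs(F_n - F_true))                  # error shrinks like 1/sqrt(n)
```

Each $\mathbb{1}(X_k \le x)$ is a Bernoulli variable with mean $1-e^{-\lambda x}$, which is exactly the setting where the weak law of large numbers applies.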
Kind regards |
Numerical evidence suggests that all complex zeros residing in the critical strip $0 < \Re(s) < 1$ of:
$$\frac{\Gamma(s)}{z}Li_s(z) \, \pm \, \frac{\Gamma(1-s)}{z}Li_{1-s}(z)$$
are on the critical line $\Re(s)=\frac12$ for all $z \lt 1$ (with both individual terms $\ne 0$, except when $z=-1$).
I derived two integral expressions for this function, both of which only converge in the strip for $z \lt 1$:
$$\displaystyle \int_0^\infty \frac{t^ {s-1} \, \pm \, t^{-s}}{\mathrm{e}^{t}-z}\mathrm{d}t$$
and
$$\displaystyle \int_0^1 \frac{\ln^{s-1}\left(\frac{1}{t}\right) \, \pm \, \ln^{-s}\left(\frac{1}{t}\right)} {1-z\,t}\mathrm{d}t$$
however neither reveals any further information about their zeros.
Notes:
When $z \rightarrow 0$, the function reduces to $\Gamma(s) \pm \Gamma(1-s)$, for which it has been proven here that all complex zeros in the strip are on the critical line (and only a finite number lie outside the strip).

Additional observation 1: I found that by gradually increasing $z$ from $0$ to $1$ and at each step calculating the 'adjacent' zeros (which I still find to all reside on the line $\Re(s)=\frac12$), their imaginary parts seem to follow a path connecting a zero of $\Gamma(s) - \Gamma(1-s)$ when $z \rightarrow 0$ uniquely to a zero of $\Gamma(s)\zeta(s) - \Gamma(1-s)\zeta(1-s)$ when $z \rightarrow 1$. The following graph shows these paths, with $z$ on the horizontal axis increased in steps of $0.02$ and, on the vertical axis, the imaginary parts $t$ of the values $\frac12 + t\,i$ where the function vanishes. The paths that lead to the non-trivial zeros are marked in red. I also tried a similar plot for $z$ decreasing from $0$ to $-1$, and although the paths seem to continue nicely for all non-trivial zeros, I did find that some of the other zeros on the line $z=0$ do not map to zeros on the line $z=-1$, which equals $\Gamma(s)\eta(s) - \Gamma(1-s)\eta(1-s)$.
Additional observation 2: Here is a more complete picture of where the function vanishes at imaginary values $t$ when $s=\frac12 + t\,i$:
The one-to-one connection between the zeros at $z \rightarrow 0$ and $z \rightarrow 1$ appears to be pretty robust; however, the connection between $z \rightarrow 0$ and $z \rightarrow -1$ is much more chaotic (although the lines coming from the far left all firmly pass through). Clearly not all zeros at $z=-1$ match with zeros at $z=0$; however, it does appear that all lines starting at non-trivial zeros on the $\eta$-line ($z=-1$) do have a path to the $\Gamma$-line ($z=0$). The lines on the right marked with a little cross actually do run a bit further towards $z=-1$, but never make it to the 'other side'. I also found that there do exist zeros lying off the critical line and inside the strip, around the spot where the path of these lines at $s=\frac12+t\,i$ gets stranded (e.g. $0.1809532919+9.053131045\,i$ at $z=-0.82$, or $0.5580702805+36.69087427\,i$ at $z=-0.8$).
unlike for $\Gamma(s)$ and $\zeta(s)$, there doesn't seem to exist a reflection formula relating $Li_s(z)$ and $Li_{1-s}(z)$.

Questions:
1) Are there known counterexamples of complex zeros lying off the critical line, but still in the strip (I know there do exist a few complex zeros outside the strip, and also that there are real zeros)?
EDIT: zeros lying off the line and in the strip have been found in the area between $z=-1$ and $z=0$ (see above). I would therefore like to restrict the domain of my conjecture to: $0 \le z \lt 1$ and $z \le -1$.
2) Is this phenomenon a consequence of the RH, or would a proof of it imply the RH (note that $z=-1$ turns $Li_s(z)$ into the Dirichlet $\eta(s)$-function, which can be directly related to $\zeta(s)$)?
Added:
3) Could there be a logical reason why the zeros of $\Gamma(s)-\Gamma(1-s)$ $(z \rightarrow 0)$, that have been proven to all reside on the line $\Re(s)=\frac12$, should have a one-to-one path connection with the zeros of $\zeta(s)\Gamma(s) - \zeta(1-s)\Gamma(1-s)$ $(z\rightarrow 1)$?
Thanks. |
Lifts of Paths
We begin by defining a lift of a continuous function $f : Y \to X$ from a topological space $Y$ to a topological space $X$.
Definition: Let $X$ and $Y$ be topological spaces and let $f : Y \to X$ be a continuous map. Let $(\tilde{X}, p)$ be a covering space of $X$. A Lift of $f$ is a continuous map $\tilde{f} : Y \to \tilde{X}$ such that the following diagram is commutative:
That is, $f = p \circ \tilde{f}$.
We begin with a very general theorem which tells us exactly when we can obtain a lift of a continuous function $f : Y \to X$. We need quite a few conditions. First we must select a point $y_0 \in Y$. We then denote $x_0 = f(y_0)$, and choose one element $\tilde{x_0} \in p^{-1}(x_0)$. The theorem below tells us that a unique lift of $f$ exists if and only if the image of the fundamental group $\pi_1(Y, y_0)$ under $f_*$ is a subset of the image of the fundamental group $\pi_1(\tilde{X}, \tilde{x_0})$ under $p_*$.
Theorem 1 (The Lifting of Continuous Functions Theorem): Let $X$ be a topological space and let $(\tilde{X}, p)$ be a covering space of $X$. Let $Y$ be a topological space that is path connected and locally path connected and let $f : Y \to X$ be a continuous map. Let $y_0 \in Y$ and let $x_0 = f(y_0)$. Let $\tilde{x_0} \in p^{-1}(x_0)$. Then there exists a unique lift $\tilde{f} : Y \to \tilde{X}$ of $f$ with $\tilde{f}(y_0) = \tilde{x_0}$ if and only if $f_*(\pi_1(Y, y_0))$ is a subset of $p_*(\pi_1(\tilde{X}, \tilde{x_0}))$.
Let $Y = [0, 1]$. Then a continuous function $\alpha : [0, 1] \to X$ is a path in $X$ and we can talk about lifts of paths in $X$. The following theorem is crucially important. It tells us that if $X$ is a topological space and $(\tilde{X}, p)$ is a covering space of $X$ then a path $\alpha$ in $X$ can be uniquely lifted provided we specify a starting point for the lift.
Theorem 2 (The Lifting of Paths Theorem): Let $X$ be a topological space and let $(\tilde{X}, p)$ be a covering space of $X$. Let $\alpha$ be a path in $X$ that starts at $x \in X$. Then for every $y \in p^{-1}(x)$ there exists a unique lift of $\alpha$ that starts at $y$.
Theorem 3: Let $X$ be a topological space and let $(\tilde{X}, p)$ be a covering space of $X$. Let $\alpha$ and $\beta$ be paths in $X$ that both start at $x_1$ and end at $x_2$. Let $y \in p^{-1}(x_1)$ and let $\tilde{\alpha}$ and $\tilde{\beta}$ be the unique lifts of $\alpha$ and $\beta$ starting at $y$. If $\alpha \simeq_{\{0, 1 \}} \beta$ then $\tilde{\alpha} \simeq_{\{ 0, 1 \}} \tilde{\beta}$, and $\tilde{\alpha}$ and $\tilde{\beta}$ end at the same point.
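The standard example behind Theorem 2 (classical, though not discussed on this page) is the covering $p : \mathbb{R} \to S^1$, $p(t) = e^{2\pi i t}$: once a starting point in the fiber is fixed, a path in $S^1$ lifts uniquely, and for a loop the endpoint of the lift records the winding number. A sketch that lifts a finely sampled path by always choosing the nearby preimage:

```python
import cmath

def lift(path, start):
    """Lift a finely sampled path in S^1 (unit complex numbers) through the
    covering p(t) = exp(2*pi*i*t), starting at `start` in the fiber over
    path[0].  At each step we pick the unique nearby preimage, which is
    exactly what the local homeomorphisms of the covering provide."""
    assert abs(cmath.exp(2j * cmath.pi * start) - path[0]) < 1e-9
    lifted = [start]
    for w in path[1:]:
        # signed angle from the current point, as a fraction of a full turn
        step = cmath.phase(w * cmath.exp(-2j * cmath.pi * lifted[-1]))
        lifted.append(lifted[-1] + step / (2 * cmath.pi))
    return lifted

# a loop winding twice around the circle; its lift is NOT a loop: it climbs
# up from the chosen fiber point by the winding number
N = 1000
loop = [cmath.exp(2j * cmath.pi * 2 * k / N) for k in range(N + 1)]
print(lift(loop, 0)[-1], lift(loop, 1.0)[-1])   # ~2.0 and ~3.0
```

Note that the two lifts (from the fiber points $0$ and $1$ over $1 \in S^1$) differ by the constant deck transformation $t \mapsto t+1$, illustrating the uniqueness statement of Theorem 2.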
Of all of the statistics that are cited to support the notion of “global warming,” the one that bothers me the most is the statistic claiming that \(n\) of the last \(m\) years have been the hottest years on record. For example, it’s not uncommon to hear that “10 of the last 12 years” are the warmest years in a temperature record that goes all the way back to 1880. This is often used as “irrefutable evidence” that mankind is driving up the Earth’s temperature and destroying the planet.
It is understandable that an activist would try to exploit this statistic. Most obviously, it emphasizes that recent global temperatures have been relatively high (where “high” corresponds to an increase of less than one degree Celsius over a 100-year period). The real purpose of repeating this factoid, however, is that it confuses and charms the numerically unsophisticated, leading them to assume that such a concentration of unprecedented, elevated temperatures in recent times is highly unlikely—unless some underlying cause is responsible.
This is quite misleading, however. In fact, it is not difficult to demonstrate that a relatively simple statistical model can account for this result, without requiring any bias toward warming.
The basic mistake that most people make when hearing about the recent number of warmest years is that they assume that, unless there is some underlying trend causing the warming, all years in the record are the same, and each is equally as likely to be one of the warmest. To have 10 out of 12 at the top of a list of 134 temperatures intuitively seems to be extremely unlikely, and indeed it is. Under these (incorrect) assumptions, the probability of getting 10 of the hottest years in the last 12 years is given by the hypergeometric distribution:\[ P(X=10)=\frac{{12\choose 10}{122\choose 2}}{{134\choose 12}} \]where the parentheses denote the binomial coefficient\[ {n\choose k}\equiv\frac{n!}{k!(n-k)!} \]The probability of 10 or more hottest years is 1 in 86 billion—very unlikely indeed!
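The naive calculation can be checked in a couple of lines (a sketch; `math.comb` supplies the binomial coefficients):

```python
from math import comb

def p_at_least(k, last=12, hottest=12, years=134):
    """P(at least k of the `hottest` years fall in the last `last` positions)
    when the hottest positions form a uniformly random subset of the record."""
    return sum(comb(last, j) * comb(years - last, hottest - j)
               for j in range(k, min(last, hottest) + 1)) / comb(years, hottest)

p = p_at_least(10)
print(f"P = {p:.3e}  (about 1 in {1 / p:,.0f})")   # about 1 in 86 billion
```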
The problem, however, is that each year’s temperature is not an independent random variable. This is clear from looking at the temperature record. For example, consider the annual global land and ocean surface temperature anomalies, which have been obtained from NOAA’s National Climatic Data Center. This temperature record is shown below. The points give the yearly global average temperature. The smooth curve is the five-year running average.
Although the temperatures in the record bounce up and down from year to year, each year's temperature clearly depends on the temperature of the years immediately before it. That is, this time series is autocorrelated. In fact, the series clearly resembles a random walk.
This is even more clear when considering the change in temperature anomaly from year to year, which is shown below.
Unlike the temperature record itself, the differences show no obvious trend. They appear to be distributed as (mutually independent) white noise. A Q-Q plot, shown below, appears to indicate that these differences are normally distributed, which is confirmed by a Shapiro-Wilk test for normality (\(p=0.43\)).
Thus, the differences constitute a normally distributed set with a sample mean of \(0.0058\) and sample variance of \(0.0096\). The slightly positive mean corresponds to the upward trend that is observed in the temperature record. Over the entire record, it results in a \(0.0058\) °C/year average increase in temperature. Nevertheless, it should be kept in mind that this does not imply that the mean of the underlying distribution is this value or even that it is positive. The estimate for the population mean, including the standard error, is \(\bar x\pm s/\sqrt{n}\), where \(\bar x\) is the sample mean, \(s\) is the sample standard deviation, and \(n\) is the size of the sample. In this case, the estimate for the mean of the underlying distribution is \(0.0058\pm 0.0085\), a range that includes zero at the 1-\(\sigma\) level. Thus, we cannot reasonably conclude from these data that there is a positive bias in the random walk process describing this temperature record.
So let’s consider the possibility that the temperature record is a result of a random walk process with no bias. The statistical model that describes this process is\[ T_n = T_{n-1} + \epsilon^{(n)} \]where \(T_n\) is the temperature anomaly of the \(n\)-th year in the series, and \(\epsilon^{(1)},\ldots,\epsilon^{(n)}\) is a series of normally distributed random variables with \(\epsilon\sim\mathcal N(0,\sigma^2)\). Although this model can be studied analytically, it lends itself very well to Monte Carlo simulation. For the results discussed here, I use a model in which the variance of the random term is \(\sigma^2 = 0.0096\), the sample variance of the NOAA temperature anomaly data.
When a model of red noise—the random walk described above—is used to generate a simulated 134-year temperature record, 10 or more of the last 12 years in the record end up being the “hottest” years on record about 7% of the time. Thus, while it is still unlikely with this model to have such a high concentration of warm years in recent times, it is far more likely (by nine orders of magnitude) than what the intuitive, but naive, assumption of completely independent yearly temperatures indicates.
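The random-walk experiment is easy to reproduce (a sketch in Python rather than the author's R; the exact fraction will wobble with the seed and the number of trials, but it lands in the vicinity of the few-percent figure quoted above):

```python
import random

def hottest_in_last(n_years=134, last=12, hottest=12, sigma2=0.0096):
    """One simulated record: a random walk with N(0, sigma2) annual steps.
    Returns how many of the `hottest` warmest years fall in the last `last`."""
    sd = sigma2 ** 0.5
    t, record = 0.0, []
    for _ in range(n_years):
        t += random.gauss(0.0, sd)
        record.append(t)
    top = sorted(range(n_years), key=record.__getitem__)[-hottest:]
    return sum(1 for i in top if i >= n_years - last)

random.seed(1)
trials = 10_000
frac = sum(hottest_in_last() >= 10 for _ in range(trials)) / trials
print(f"fraction of simulated records with >=10 of last 12 hottest: {frac:.3f}")
```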
It is common, when applying the techniques of statistical inference, to judge a set of empirical data by calculating the probability that such a set of data could appear purely by chance under the assumption that no real relationship exists. The conventional point at which something is considered, not necessarily true, but merely “significant” and worthy of further investigation, is if the probability of chance producing the observed result is less than 5% (i.e., a one-in-twenty chance). By these standards, the claim that 10 of last 12 years are the hottest on record doesn’t qualify as statistically interesting.
A more reasonable stochastic model, however, is one that combines red noise and white noise. In other words, it is a random walk with some additional “error” added to the final result. The statistical model can be described as\[ T_n = a r_n + b \epsilon_{\text w}^{(n)} \]where \(\epsilon_{\text w}\sim\mathcal N(0,\sigma^2)\) is the white-noise random variable, \(r_n\) is the red-noise term,\[ r_n = r_{n-1} + \epsilon_{\text r}^{(n)} \]with \(\epsilon_{\text r}\sim\mathcal N(0,\sigma^2)\) as the red-noise random variable, and \(a\) and \(b\) are the coefficients that determine the relative importance of each term. With this model, the annual change in temperature is\[ \Delta T_n = T_n - T_{n-1} = a \epsilon_{\text r}^{(n)} + b\bigl[\epsilon_{\text w}^{(n)} - \epsilon_{\text w}^{(n-1)}\bigr] \] Since this is a sum of three independent normally distributed random variables, \(\Delta T_n\sim\mathcal N\bigl(0,(a^2+2b^2)\sigma^2\bigr)\), and so, for the variance of these temperature differences to be \(\sigma^2\), the coefficients \(a\) and \(b\) must satisfy the following relation:\[ a^2 + 2b^2 = 1 \]
Numerical experiments indicate that the combination of red and white noise that is most likely to produce a shape that is similar to NOAA’s temperature anomaly record has a value of \(a\) that is between \(0.2\) and \(0.4\). For example, not very many tries were required to produce the following series (shown in red), which was generated with \(a=0.4\):
This series of randomly generated points is rather similar to the temperature series, which is shown in black.
Conclusion
Above, I have demonstrated that claims such as “10 out of the last 12 years are the warmest on record” are not very impressive. Such a situation has a reasonable probability of resulting from a simple, unbiased random walk. Therefore, although such claims serve as a reminder that recent years have been warmer than the slightly less recent past, they indicate almost nothing about the recent trends in global temperatures or what is to be expected in the future. An unbiased random walk is equally as likely to trend down as it is to trend up, regardless of what it has done over the past 134 steps.
Naturally, it is quite possible that some steady upward trend does exist in the temperature record that biases the random walk to higher temperatures. This possibility would be consistent with the observed temperature record, and it would make such a result more likely in the statistical models. Nor would it be surprising, since it is well known that temperatures in the modern era have been steadily trending upward from a minimum that occurred sometime in the seventeenth century, an era commonly referred to as the “little ice age.” (Whether this phenomenon was regional or global is a matter of debate.) In fact, there could be several naturally occurring cyclic trends affecting the record, but there is no way to tell definitively from the series itself.
One thing is certain, however. A dozen or so warm years in recent memory is very weak evidence of a deterministic warming trend. Without additional supporting evidence to say otherwise, such results could simply be the luck of the draw.
(Note: The R code used to generate the results discussed above is available for the reader’s amusement and edification.) |
In mathematics, a telescoping series is a series whose partial sums eventually have only a fixed number of terms after cancellation. I learnt this technique when I was doing maths olympiad, but I only picked up the buzzword ‘telescoping’ last year, since I received my education in China, where we call it ‘the method of differences’.
Anyway, yesterday I was helping Po-Shen Loh proctor the Annual Virginia Regional Mathematics Contest at CMU. Unfortunately, I had not taken my breakfast. All I could digest were the seven problems. And even worse, I got stuck on the last problem, which is the following.
Find $$\sum_{n=1}^\infty\frac{n}{(2^n+2^{-n})^2}+\frac{(-1)^nn}{(2^n-2^{-n})^2}.$$
It turns out that the punch line is, as you might have expected, telescoping. But it is just a bit crazy.
There is Pleasure sure, In being Mad, which none but Madmen know!
John Dryden’s “The Spanish Friar”
First consider the first term: $\frac{1}{(2^1+2^{-1})^2}+\frac{-1}{(2^1-2^{-1})^2}=\frac{-2\times 2}{(2^2-2^{-2})^2}$. Then take a look at the second term, $\frac{2}{(2^2+2^{-2})^2}+\frac{2}{(2^2-2^{-2})^2}$. Aha, the sum of the first two terms becomes $\frac{2}{(2^2+2^{-2})^2}+\frac{-2}{(2^2-2^{-2})^2}=\frac{-4\times 2}{(2^4-2^{-4})^2}$, which again will interact nicely with the 4th term (not the 3rd term). After all, the sum of the 1st, 2nd, 4th, 8th, …, $(n=2^m)$th terms is $\frac{-4n}{(2^{2n}-2^{-2n})^2}$. Note that this goes to zero.
But this only handles the terms indexed by powers of 2. Naturally, the next thing to look at is the sum of the 3rd, 6th, 12th, 24th, …, $(n=3\times 2^m)$th terms, which amazingly turns out to have the telescoping phenomenon again and is equal to $\frac{-4n}{(2^{2n}-2^{-2n})^2}$. Again, this goes to zero.
We claim that, starting with an odd index $2l+1$, the partial sum of the $(2l+1)$th, $2(2l+1)$th, $\ldots$, $2^m(2l+1)$th, $\ldots$ terms goes to zero.
For people who are willing to suffer a bit more for the sake of rigour, the argument for the previous claim can easily be formalized by the method of differences. The key fact we have been exploiting above is the following identity: $$\frac{2n}{(2^{2n}+2^{-2n})^2}+\frac{2n}{(2^{2n}-2^{-2n})^2} = \frac{4n}{(2^{2n}-2^{-2n})^2}-\frac{8n}{(2^{4n}-2^{-4n})^2}.$$
The last ingredient is the absolute summability of the original sequence, which implies that changing the order of summation does not affect the result. Hence the sum in total is $0$.
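A quick numerical check of the conclusion (a sketch; plain floating point suffices, since the terms decay geometrically):

```python
def term(n):
    """The n-th summand of the series."""
    a = 2.0 ** n + 2.0 ** -n
    b = 2.0 ** n - 2.0 ** -n
    return n / a ** 2 + (-1) ** n * n / b ** 2

# the worked identity: terms 1 and 2 collapse to -4*2 / (2^4 - 2^-4)^2
assert abs(term(1) + term(2) + 8 / (2 ** 4 - 2 ** -4) ** 2) < 1e-12

partial = sum(term(n) for n in range(1, 61))
print(partial)   # tiny (rounding-level): the whole series telescopes to 0
```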
I am trying to follow this work, in which Eq. (11), a 2nd-order nonlinear differential equation, depends on a pair of parameters $ (\kappa, h) $. But now I only care about the case of vanishing $ h $ and several values of $ \kappa $:
$$ \begin{align} 0&=\frac{\mathrm d^2\theta}{\mathrm d\varrho^2}+\frac{1}{\varrho}\frac{\mathrm d\theta}{\mathrm d\varrho}-\left(1+\frac{1}{\varrho^2}\right)\sin\theta\cos\theta+\frac{4\kappa}{\pi}\frac{\sin^2\theta}{\varrho}-h\sin\theta\\ \pi&=\theta(0)\\ 0&=\theta(\infty) \end{align} $$
So I try
Clear[sol]
sol = Block[{eq, θ0, θ1, ϱ0, ϱ1, κlist},
  {θ0, θ1} = {0.99 π, 0.001};
  {ϱ0, ϱ1} = {0.01, 10};
  κlist = {.6, .7, .8, .9, .95};
  eq[κ_, h_] = θ''[ϱ] + 1/ϱ θ'[ϱ] - (1 + 1/ϱ^2) Sin[θ[ϱ]] Cos[θ[ϱ]] +
      (4 κ)/π Sin[θ[ϱ]]^2/ϱ - h Sin[θ[ϱ]] == 0 // Simplify;
  NDSolveValue[{eq[#, 0.], θ[ϱ0] == θ0, θ[ϱ1] == θ1}, θ[ϱ], ϱ,
      AccuracyGoal -> 20] & /@ κlist
]
Actually, the above code works fine and gives five InterpolatingFunctions. But a problem arises when I try to add a new value 0.5 to κlist. Can anyone help me get a reasonable solution for $ \kappa = 0.5 $?
Suppose that a random variable has a lower and an upper bound, say the interval [0,1]. How does one compute the variance of such a variable?
You can prove Popoviciu's inequality as follows. Use the notation $m=\inf X$ and $M=\sup X$. Define a function $g$ by $$ g(t)=\mathbb{E}\left[\left(X-t\right)^2\right] \, . $$ Computing the derivative $g'$, and solving $$ g'(t) = -2\mathbb{E}[X] +2t=0 \, , $$ we find that $g$ achieves its minimum at $t=\mathbb{E}[X]$ (note that $g''>0$).
Now, consider the value of the function $g$ at the special point $t=\frac{M+m}{2}$. It must be the case that $$ \mathbb{Var}[X]=g(\mathbb{E}[X])\leq g\left(\frac{M+m}{2}\right) \, . $$ But $$ g\left(\frac{M+m}{2}\right) = \mathbb{E}\left[\left(X - \frac{M+m}{2}\right)^2 \right] = \frac{1}{4}\mathbb{E}\left[\left((X-m) + (X-M)\right)^2 \right] \, . $$ Since $X-m\geq 0$ and $X-M\leq 0$, we have $$ \left((X-m)+(X-M)\right)^2\leq\left((X-m)-(X-M)\right)^2=\left(M-m\right)^2 \, , $$ implying that $$ \frac{1}{4}\mathbb{E}\left[\left((X-m) + (X-M)\right)^2 \right] \leq \frac{1}{4}\mathbb{E}\left[\left((X-m) - (X-M)\right)^2 \right] = \frac{(M-m)^2}{4} \, . $$ Therefore, we proved Popoviciu's inequality $$ \mathbb{Var}[X]\leq \frac{(M-m)^2}{4} \, . $$
Let $F$ be a distribution on $[0,1]$. We will show that if the variance of $F$ is maximal, then $F$ can have no support in the interior, from which it follows that $F$ is Bernoulli and the rest is trivial.
As a matter of notation, let $\mu_k = \int_0^1 x^k dF(x)$ be the $k$th raw moment of $F$ (and, as usual, we write $\mu = \mu_1$ and $\sigma^2 = \mu_2 - \mu^2$ for the variance).
We know $F$ does not have all its support at one point (the variance is minimal in that case). Among other things, this implies $\mu$ lies strictly between $0$ and $1$. In order to argue by contradiction, suppose there is some measurable subset $I$ in the interior $(0,1)$ for which $F(I)\gt 0$. Without any loss of generality we may assume (by changing $X$ to $1-X$ if need be) that $F(J = I \cap (0, \mu]) \gt 0$: in other words, $J$ is obtained by cutting off any part of $I$ above the mean and $J$ has positive probability. Let us alter $F$ to $F'$ by taking all the probability out of $J$ and placing it at $0$. In so doing, $\mu_k$ changes to
$$\mu'_k = \mu_k - \int_J x^k dF(x).$$
As a matter of notation, let us write $[g(x)] = \int_J g(x) dF(x)$ for such integrals, whence
$$\mu'_2 = \mu_2 - [x^2], \quad \mu' = \mu - [x].$$
Calculate
$$\sigma'^2 = \mu'_2 - \mu'^2 = \mu_2 - [x^2] - (\mu - [x])^2 = \sigma^2 + \left((\mu[x] - [x^2]) + (\mu[x] - [x]^2)\right).$$
The second term on the right, $(\mu[x] - [x]^2)$, is non-negative because $\mu \ge x$ everywhere on $J$. The first term on the right can be rewritten
$$\mu[x] - [x^2] = \mu(1 - [1]) + ([\mu][x] - [x^2]).$$
The first term on the right is strictly positive because (a) $\mu \gt 0$ and (b) $[1] = F(J) \lt 1$ because we assumed $F$ is not concentrated at a point. The second term is non-negative because it can be rewritten as $[(\mu-x)(x)]$ and this integrand is nonnegative from the assumptions $\mu \ge x$ on $J$ and $0 \le x \le 1$. It follows that $\sigma'^2 - \sigma^2 \gt 0$.
We have just shown that under our assumptions, changing $F$ to $F'$ strictly increases its variance. The only way this cannot happen, then, is when all the probability of $F'$ is concentrated at the endpoints $0$ and $1$, with (say) values $1-p$ and $p$, respectively. Its variance is easily calculated to equal $p(1-p)$ which is maximal when $p=1/2$ and equals $1/4$ there.
Now when $F$ is a distribution on $[a,b]$, we recenter and rescale it to a distribution on $[0,1]$. The recentering does not change the variance whereas the rescaling divides it by $(b-a)^2$. Thus an $F$ with maximal variance on $[a,b]$ corresponds to the distribution with maximal variance on $[0,1]$: it therefore is a Bernoulli$(1/2)$ distribution rescaled and translated to $[a,b]$ having variance $(b-a)^2/4$,
QED.
If the random variable is restricted to $[a,b]$ and we know the mean $\mu=E[X]$, the variance is bounded by $(b-\mu)(\mu-a)$.
Let us first consider the case $a=0, b=1$. Note that for all $x\in [0,1]$, $x^2\leq x$, wherefore also $E[X^2]\leq E[X]$. Using this result, \begin{equation} \sigma^2 = E[X^2] - (E[X]^2) = E[X^2] - \mu^2 \leq \mu - \mu^2 = \mu(1-\mu). \end{equation}
To generalize to intervals $[a,b]$ with $b>a$, consider $Y$ restricted to $[a,b]$. Define $X=\frac{Y-a}{b-a}$, which is restricted in $[0,1]$. Equivalently, $Y = (b-a)X + a$, and thus \begin{equation} Var[Y] = (b-a)^2Var[X] \leq (b-a)^2\mu_X (1-\mu_X). \end{equation} where the inequality is based on the first result. Now, by substituting $\mu_X = \frac{\mu_Y - a}{b-a}$, the bound equals \begin{equation} (b-a)^2\, \frac{\mu_Y - a}{b-a}\,\left(1- \frac{\mu_Y - a}{b-a}\right) = (b-a)^2 \frac{\mu_Y -a}{b-a}\,\frac{b - \mu_Y}{b-a} = (\mu_Y - a)(b- \mu_Y), \end{equation} which is the desired result.
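Both bounds (Popoviciu's $(b-a)^2/4$ and the mean-constrained $(b-\mu)(\mu-a)$) are easy to stress-test numerically. A sketch in Python: random discrete distributions on a grid in $[a,b]$ stand in for arbitrary distributions, and the two-point Bernoulli$(1/2)$ distribution attains the first bound:

```python
import random

random.seed(0)
a, b = 2.0, 5.0
grid = [a + (b - a) * k / 20 for k in range(21)]   # support points in [a, b]

def random_dist():
    """Random probability vector over the grid points."""
    w = [random.random() for _ in grid]
    s = sum(w)
    return [x / s for x in w]

def mean_var(p):
    mu = sum(pi * x for pi, x in zip(p, grid))
    return mu, sum(pi * (x - mu) ** 2 for pi, x in zip(p, grid))

worst = 0.0
for _ in range(2000):
    mu, var = mean_var(random_dist())
    assert var <= (b - mu) * (mu - a) + 1e-12   # mean-constrained bound
    assert var <= (b - a) ** 2 / 4 + 1e-12      # Popoviciu's bound
    worst = max(worst, var)

# the Bernoulli(1/2) distribution on {a, b} attains Popoviciu's bound
p_bern = [0.5] + [0.0] * 19 + [0.5]
print(mean_var(p_bern)[1], (b - a) ** 2 / 4)    # 2.25 2.25
```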
At @user603's request....
A useful upper bound on the variance $\sigma^2$ of a random variable that takes on values in $[a,b]$ with probability $1$ is $\sigma^2 \leq \frac{(b-a)^2}{4}$. A proof for the special case $a=0, b=1$ (which is what the OP asked about) can be found here on math.SE, and it is easily adapted to the more general case. As noted in my comment above and also in the answer referenced herein, a discrete random variable that takes on values $a$ and $b$ with equal probability $\frac{1}{2}$ has variance $\frac{(b-a)^2}{4}$, and thus no tighter general bound can be found.
Another point to keep in mind is that a bounded random variable has finite variance, whereas for an unbounded random variable, the variance might not be finite, and in some cases might not even be definable. For example, the mean cannot be defined for Cauchy random variables, and so one cannot define the variance (as the expectation of the squared deviation from the mean).
Are you sure that this is true in general, for continuous as well as discrete distributions? Can you provide a link to the other pages? For a general distribution on $[a,b]$ it is trivial to show that $$ Var(X) = E[(X-E[X])^2] \le E[(b-a)^2] = (b-a)^2. $$ I can imagine that sharper inequalities exist ... Do you need the factor $1/4$ for your result?
On the other hand, one can find it with the factor $1/4$ under the name Popoviciu's inequality on Wikipedia.
This article looks better than the wikipedia article ...
For a uniform distribution it holds that $$ Var(X) = \frac{(b-a)^2}{12}. $$ |
Taiwanese Journal of Mathematics
Taiwanese J. Math., Volume 16, Number 5 (2012), 1815-1827.
SUPERCYCLIC AND CESÀRO HYPERCYCLIC WEIGHTED TRANSLATIONS ON GROUPS
Abstract
Let $G$ be a locally compact group and let $1 \leq p \lt \infty$. We characterize supercyclic weighted translation operators on the Lebesgue space $L^p(G)$ in terms of the weight. Using this result, the characterization for Cesàro hypercyclic weighted translation operators is given. We also determine when scalar multiples of weighted translation operators are hypercyclic and topologically mixing, and show, for any weighted translation operator $T$, $\beta T$ is mixing for all $\beta \in (1,4)$ if $T$ and $4T$ are mixing.
Article information
Source: Taiwanese J. Math., Volume 16, Number 5 (2012), 1815-1827.
First available in Project Euclid: 18 July 2017
Permanent link to this document: https://projecteuclid.org/euclid.twjm/1500406799
Digital Object Identifier: doi:10.11650/twjm/1500406799
Mathematical Reviews number (MathSciNet): MR2970687
Zentralblatt MATH identifier: 1275.47020
Citation
Chen, Chung-Chuan. SUPERCYCLIC AND CESÀRO HYPERCYCLIC WEIGHTED TRANSLATIONS ON GROUPS. Taiwanese J. Math. 16 (2012), no. 5, 1815--1827. doi:10.11650/twjm/1500406799. https://projecteuclid.org/euclid.twjm/1500406799 |
Coupon bonds
Exercise: XLF Co has issued a bond with a face value of $100,000 paying an annual coupon of 9%. The bond matures in 10 years from now, the market yield is 8% pa, and a coupon has just been paid.
Pricing
The price of a bond \(P\) is given by the formula \(P=\sum\limits_{j=1}^T \dfrac{C_j} {(1+i)^{t_j}}\) where \(C_j\) is the \(j^{th}\) periodic cash flow, \(T\) is the number of cash flows, and \(i\) is the current periodic yield.
Alternatively, \(P= { \sum\limits_{t=1}^N {CF_t \cdot (1+i)^{-t}} } \) in the spirit of Saunders and Cornett (2008)
Task 1: In Excel, set up a table with columns showing the timing and magnitude of each coupon, plus the face value of the bond at maturity. Also include columns for the vector of discount factors, and the discounted value of each cash flow. Calculate the present value of the bond. Present the table in suitable business style.
Duration
Duration measures the weighted average time to maturity of the cash flows for a bond, and the measure is often used in financial risk analysis.
A common measure of duration is the method documented by Macaulay where duration \( D = \dfrac{ \sum\limits_{j=1}^T {t_j \cdot C_j \cdot (1+i)^{-t_j}}} { \sum\limits_{j=1}^T {C_j \cdot (1+i)^{-t_j}} } \)
Or, expressed in the nomenclature of Saunders and Cornett (2008, p225) \( D = \dfrac{ \sum\limits_{t=1}^N {CF_t \cdot (1+i)^{-t} \cdot t}} { \sum\limits_{t=1}^N {CF_t \cdot (1+i)^{-t}} } = \dfrac{ \sum\limits_{t=1}^N {PV_t \cdot t}} { \sum\limits_{t=1}^N {PV_t }} \)
Task 2 a: In Excel, set up a table to display the components used in the calculation of the duration measure shown in the equation. This table will be similar to the one developed in Task 1, with the addition of the time-weight component \(t_j\). Also include one linked cell that allows the user to vary the amount of the face value. Calculate the value of the Macaulay duration measure in years. Present the table in appropriate business style.
Task 2 b: Try different amounts for face value > 0 and examine the resultant change in the Macaulay duration number.
The Excel solution file for this module: xlf-fin-exercise-bonds.xlsm 26KB
The Excel solution file for this module: xlf-fin-exercise-bonds.xlsx 22KB [6 May 2019]
This module was developed in Excel 2013 Pro 64 bit.
Saunders A, and M Cornett, (2008), Financial Institutions Management, a Risk Management Approach, McGraw-Hill Irwin
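For readers who want to check their spreadsheet against an independent calculation, here is a sketch in Python of the same two formulas, using the exercise's numbers (face value 100,000, 9% annual coupon, 10 years, 8% yield). This is an illustration, not the module's solution file:

```python
def bond_cashflows(face, coupon_rate, n_years):
    """Annual coupons, plus the face value returned at maturity."""
    c = face * coupon_rate
    return [(t, c + (face if t == n_years else 0.0))
            for t in range(1, n_years + 1)]

def price_and_duration(face, coupon_rate, n_years, yld):
    flows = bond_cashflows(face, coupon_rate, n_years)
    pvs = [(t, cf / (1 + yld) ** t) for t, cf in flows]   # discounted cash flows
    price = sum(pv for _, pv in pvs)
    duration = sum(t * pv for t, pv in pvs) / price       # Macaulay: PV-weighted time
    return price, duration

price, dur = price_and_duration(100_000, 0.09, 10, 0.08)
print(f"price = {price:,.2f}, Macaulay duration = {dur:.2f} years")
# price = 106,710.08, Macaulay duration = 7.10 years
```

Since the bond trades at a yield below its coupon rate, it prices above par, and its duration (about 7.1 years) is well short of the 10-year maturity because the coupons pull cash flows forward.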
Today we have finally arrived at the "master algorithm" in computer vision; that is what François Chollet calls convolutional neural networks (CNNs). First, we are going to build an intuition behind those algorithms. Then, we take a look at basic CNN architecture. After discussing the differences between convolutional layer types, we are going to implement them in Haskell.
Previous posts:
Day 1: Learning Neural Networks The Hard Way
Day 2: What Do Hidden Layers Do?
Day 3: Haskell Guide To Neural Networks
Day 4: The Importance Of Batch Normalization

Convolution operator
Previously, we have learned about fully-connected neural networks. Although, theoretically, those can approximate any reasonable function, they have certain limitations. One of the challenges is translation symmetry. To explain this, let us take a look at the two cat pictures below.
For us humans, it does not matter if a cat is in the lower right corner or it is somewhere in the top part of an image. In both cases we find a cat. So we can say that our human cat detector is translation invariant.
However, if we look at the architecture of a typical fully-connected network, we may realize that there is actually nothing that prevents this network from working correctly only on some part of an image. The question here: is there any way to make a neural network translation invariant?
Let us take a closer look at the cat image. Soon we realize that pixels representing the cat's head are more contextually related to each other than they are to pixels representing the cat's tail. Therefore, we would also make our neural network sparse so that neurons in the next layer are connected only to the relevant neighboring pixels. This way, each neuron in the next layer would be responsible for a small feature in the original image. The area that a neuron "sees" is called a receptive field.
Convolutional neural networks (CNNs), or simply ConvNets, were designed to address those two issues: translation symmetry and image locality. First, let us give an intuitive explanation of the convolution operator 1.
You have very likely encountered convolution filters before. Recall when you first played with a (raster) graphics editor like GIMP or Photoshop. You have probably been delighted by effects such as sharpening, blur, or edge detection. If you haven't, then you probably should :). The secret of all those filters is the convolutional application of an image kernel. The image kernel is typically a $3 \times 3$ matrix such as the one below.
Shown here is a single convolution step. This step is a dot product between the kernel and pixel values. Since all the kernel values except the second one in the first row are zeros, the result is equal to the second value in the first row of the green frame, i.e. $40 \cdot 0 + 42 \cdot 1 + 46 \cdot 0 + \dotsc + 58 \cdot 0 = 42$. The convolution operator takes an image and acts within the green "sliding window" to perform this dot product over every part of that image. The result is a new, filtered image. Mathematically, the (discrete) convolution operator $(*)$ between an image $A \in \mathbb{R}^{D_{F_1} \times D_{F_2}}$ and a kernel $K \in \mathbb{R}^{D_{K_1} \times D_{K_2}}$ can be formalized as $$A * K = \sum_{m=0}^{D_{K_1} - 1} \sum_{n=0}^{D_{K_2} - 1} K_{m, n} \cdot A_{i-m, j-n},$$ where $0 \le i < D_{K_1} + D_{F_1} - 1$ and $0 \le j < D_{K_2} + D_{F_2} - 1$. To better understand how convolution with a kernel changes the original image, you can play with different image kernels.
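The sliding-window computation is simple enough to spell out directly. The post implements its layers in Haskell; as a language-neutral sketch, here is a plain-Python "valid" cross-correlation (what deep-learning layers actually compute under the name convolution; the textbook operator above additionally flips the kernel). The pixel patch reuses the 40/42/46/.../58 values from the worked example, with the figure's unspecified middle entries made up:

```python
def conv2d_valid(image, kernel):
    """'Valid' sliding-window correlation: a dot product of the kernel with
    each kernel-sized patch of the image (no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[m][n] * image[i + m][j + n]
                           for m in range(kh) for n in range(kw)))
        out.append(row)
    return out

# a kernel that is 1 only at the second entry of its first row simply
# copies that pixel, as in the worked example
patch = [[40, 42, 46],
         [46, 50, 55],
         [52, 56, 58]]
pick = [[0, 1, 0],
        [0, 0, 0],
        [0, 0, 0]]
print(conv2d_valid(patch, pick))  # [[42]]
```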
What is the motivation behind this sliding window/convolution operator approach? It has a biological background. In fact, the human eye has a relatively narrow visual field. We perceive objects as a whole by constantly moving our eyes around them. These rapid eye movements are called saccades. Therefore, the convolution operator may be regarded as a simplified model of the image scanning that occurs naturally. The important point is that, thanks to the sliding window, convolutions achieve translation invariance. Moreover, since every dot product result is connected, through the kernel, only to a very limited number of pixels in the initial image, convolution connections are very sparse. Therefore, by using convolutions in neural networks we achieve both translation invariance and connection sparsity.

Convolutional Neural Network Architecture
An interesting property of convolutional layers is that if the input image is shifted, the feature map output will be shifted by the same amount, but it will be left unchanged otherwise. This property is at the basis of the robustness of convolutional networks to shifts and distortions of the input.
Once a feature has been detected, its exact location becomes less important. Only its approximate position relative to other features is relevant.
Lecun
et al.Gradient-based learning applied to document recognition (1998)
The prototype of what we today call convolutional neural networks was first proposed back in the 1980s by Fukushima. Many unsupervised and supervised training methods have been proposed, but today CNNs are trained with backpropagation. Let us take a look at one of the famous ConvNet architectures, known as LeNet-5.
The architecture is very close to modern CNNs. LeNet-5 was designed to perform handwritten digit recognition on $32 \times 32$ black and white images. The two main building blocks, as we call them now, are a feature extractor and a classifier.
With local receptive fields neurons can extract elementary visual features such as oriented edges, endpoints, corners...
Lecun
et al.Gradient-based learning applied to document recognition (1998)
The feature extractor consists of two convolutional layers. The first convolutional layer has six convolutional filters with $5 \times 5$ kernels. Application of these filters with subsequent bias additions and hyperbolic tangent activations 2 produces feature maps, essentially new, slightly smaller ($28 \times 28$) images. By convention, we describe the result as a volume of $28 \times 28 \times 6$. To reduce the spatial resolution, subsampling is then performed 3. That outputs $14 \times 14 \times 6$ feature maps.
All the units in a feature map share the same set of 25 weights and the same bias, so they detect the same feature at all possible locations on the input.
Lecun
et al.Gradient-based learning applied to document recognition (1998)
The next round of convolutions already results in $10 \times 10 \times 16$ feature maps. Note that, unlike in the first convolutional layer, we apply $5 \times 5 \times 6$ kernels. That means each of the sixteen convolutions simultaneously processes all six feature maps obtained from the previous step. After subsampling we obtain a resulting volume of $5 \times 5 \times 16$.
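As an aside, the layer just described makes the parameter counting of footnote 6 concrete. The fully-connected comparison below is my own illustration, not part of the original paper:

```python
# Second convolutional layer of LeNet-5: sixteen 5x5x6 kernels,
# each kernel with a single trainable bias (cf. footnote 6).
params_per_filter = 5 * 5 * 6 + 1        # 151
conv_params = 16 * params_per_filter     # 2,416 for the whole layer

# A hypothetical fully-connected layer mapping the same 14x14x6 input
# to the 10x10x16 output would need one weight per input-output pair plus biases:
dense_params = (14 * 14 * 6) * (10 * 10 * 16) + 10 * 10 * 16  # 1,883,200

print(dense_params // conv_params)  # the dense layer needs ~779x more parameters
```

This gap is exactly the weight-sharing effect: the convolutional layer reuses the same 151 numbers at every spatial position.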
The classifier consists of three densely connected layers with 120, 84, and 10 neurons, respectively. The last layer provides a one-hot-encoded 4 answer. The slight difference from modern architectures is in the final layer, which consists of ten Euclidean radial basis function units, whereas today this would be a normal fully-connected layer followed by a softmax layer.
It is important to understand that a single convolution filter is able to detect only a single feature. For instance, it may be able to detect horizontal edges. Therefore, we use several more filters with different kernels to get features such as vertical edges, simple textures, or corners. As you have seen, the number of filters is typically represented in ConvNet diagrams as a volume. Interestingly, layers deeper in the network will combine the most basic features detected in the first layers into more abstract representations such as eyes, ears, or even complete figures. To better understand this mechanism, let us inspect the receptive fields visualization below.
Receptive field visualization derived from Arun Mallya. Hover the mouse cursor over any neuron in the top layers to see how its receptive field extends into the previous (bottom) layers.
As we can see by checking neurons in the last layers, even a small $3 \times 3$ receptive field grows as one moves towards the first layers. Indeed, we may anticipate that "deeper" neurons will have a better overall view of what happens in the image.
The beauty and the biggest achievement of deep learning is that filter kernels are learned automatically with backpropagation 5. A peculiarity of convolutional layers is that the result is obtained by repetitive application of a small number of weights as defined by a kernel. Thanks to this weight sharing, convolutional layers have a drastically reduced number of trainable parameters 6, compared to fully-connected layers.
Convolution Types
I decided to include this section for curious readers. If this is the first time you encounter CNNs, feel free to skip the section and revisit it later.
There is a lot of hype around convolutions nowadays. However, it is not made clear that low-level convolutions for computer vision are often different from those exploited by ConvNets. Yet, even in ConvNets there is a variety of convolutional layers inspired by the Inception ConvNet architecture and shaped by the Xception and Mobilenet works. I believe that you deserve to know that there exist multiple kinds of convolutions applied in different contexts, and here I shall provide a general roadmap 7.
1. Computer Vision-Style Convolution
Low-level computer vision (CV), for instance graphics editors, typically operates on one-, three-, or four-channel images (e.g. red, green, blue, and transparency). An individual kernel is typically applied to each channel. In this case, there are usually as many resulting channels as there are channels in the input image. A special case of this style of convolution is when the same convolution kernel is applied to each channel (e.g. blurring). Sometimes, the resulting channels are summed, producing a one-channel image (e.g. edge detection).
2. LeNet-like Convolution
A pure CV-style convolution is different from those in ConvNets for two reasons: (1) in CV the kernels are manually defined, whereas the power of neural networks comes from training, and (2) in neural networks we build a deep structure by stacking multiple convolutions on top of each other. Therefore, we need to recombine the information coming from previous layers. That allows us to train higher-level feature detectors 8. Finally, convolutions in neural networks may contain bias terms, i.e. constants added to the result of each convolution.
Recombination of features coming from earlier layers was previously illustrated in LeNet-5 example. As you remember, in the second convolutional layer we would apply 3D kernels of size $5 \times 5 \times 6$ computing dot products simultaneously on all six feature maps from the first layer. There were sixteen different kernels thus producing sixteen new channels.
To summarize, a single LeNet-like convolution operates simultaneously on all input channels and produces a single channel. By having an arbitrary number of kernels, any number of output channels is obtained. It is not uncommon to operate on volumes of 512 channels! The computation cost of such a convolution is $D_K \times D_K \times M \times N \times D_F \times D_F$, where $M$ is the number of input channels, $N$ is the number of output channels, $D_K \times D_K$ is the kernel size, and $D_F \times D_F$ is the feature map size 9.
3. Depthwise Separable Convolution
LeNet-style convolution requires a large number of operations. But do we really need all of them? For instance, can spatial and cross-channel correlations be somehow decoupled? The Xception paper, largely inspired by the Inception architecture, shows that one can indeed build more efficient convolutions by assuming that spatial correlations and cross-channel correlations can be mapped independently. This principle was also applied in the Mobilenet architectures.
The depthwise separable convolution works the following way. First, like in low-level computer vision, individual kernels are applied to each individual channel. Then, after an optional activation 10, there is another convolution, but this time exclusively in-between channels. That is typically achieved by applying a $1 \times 1$ convolution kernel. Finally, there is a (ReLU) activation.
This way, depthwise separable convolution has two distinct steps: a space-only convolution and a channel recombination. This reduces the number of operations to $D_K \times D_K \times M \times D_F \times D_F + M \times N \times D_F \times D_F$ 9.
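Plugging illustrative (Mobilenet-scale, but hypothetical) layer dimensions into the two cost formulas shows the size of the saving:

```python
def standard_cost(dk, m, n, df):
    # LeNet-like convolution: every one of the n kernels sees all m input channels
    return dk * dk * m * n * df * df

def separable_cost(dk, m, n, df):
    # depthwise (space-only) step plus a 1x1 pointwise recombination step
    return dk * dk * m * df * df + m * n * df * df

# Illustrative layer: 3x3 kernels, 512 -> 512 channels, 14x14 feature maps
s = standard_cost(3, 512, 512, 14)
d = separable_cost(3, 512, 512, 14)
ratio = d / s   # equals 1/n + 1/dk**2, here roughly 0.11 - about a 9x saving
```

The ratio $1/N + 1/D_K^2$ means that for $3 \times 3$ kernels with many output channels, the saving approaches a factor of nine regardless of the exact layer size.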
To be continued...
Summary
Convolutional neural networks aka ConvNets achieve translation invariance and connection sparsity. Thanks to weight sharing, ConvNets dramatically reduce the number of trained parameters. The power of ConvNets comes from training convolution filters, in contrast to manual feature engineering.
In the next post we will apply CNNs to MNIST and CIFAR-10 image challenges. Stay tuned!
Further reading
Learned today:
Interactive Image Kernels The Ancient Secrets of Computer Vision (online course) Why I prefer functional programming Efficient Parallel Stencil Convolution in Haskell
Deeper into neural networks:
In fact, cross-correlation, not convolution. Still, for neural networks that would be practically the same operation. ^ The actual "squashing" activation was $f(x) = 1.7159 \tanh(\frac{2}{3} x)$. ^ By subsampling LeNet authors mean local averaging in $2 \times 2$ squares with subsequent scaling by a constant, bias addition, and sigmoid activation. In modern CNNs, a simple max-pooling is performed instead. ^ We have discussed one-hot encoding on Day 1. ^ To refresh your memory about backprop algorithm, check out previous days. ^ Typically, for each convolutional layer the number of parameters is equal to the number of kernel weights plus a trainable bias. For instance, for a $5 \times 5 \times 6$ kernel, this number is equal to 151. ^ Whereas I discuss only 2D convolutions that are useful for image-like objects, those convolutions can be generalized to 1D and 3D. ^ By higher-level features I mean features detected by layers deeper in the network, such as geometric shapes, eyes, ears, or even complete figures. ^ For the details, see Mobilenet paper. ^ In Xception architecture a better result was achieved without an intermediate activation (on ImageNet classification challenge). In addition, activations - when they are present - are preceded by batch normalization. Batch normalization results in no added bias term after convolutions. ^ |
I computed the daily reference evapotranspiration $ET_0$ using the Crop evapotranspiration - Guidelines for computing crop water requirements - FAO Irrigation and drainage paper 56. The output seems realistic regarding the dynamics and the annual total, but the time series contains negative values (about 1% of the series).
The computation involves many equations for the sub-terms but the general formula is the following:
$$ ET_0 = \frac{0.408 \Delta (R_n - G) + \gamma \frac{900}{T + 273}U_2(e_s - e_a)}{\Delta + \gamma(1+0.34 U_2)} $$
with
$ET_0$ reference evapotranspiration [mm day-1], $R_n$ net radiation at the crop surface [MJ m-2 day-1], $G$ soil heat flux density [MJ m-2 day-1] (=0 in this case), $T$ mean daily air temperature at 2 m height [°C], $U_2$ wind speed at 2 m height [m s-1], $e_s$ saturation vapour pressure [kPa], $e_a$ actual vapour pressure [kPa], $e_s - e_a$ saturation vapour pressure deficit [kPa], $\Delta$ slope vapour pressure curve [kPa °C-1], $\gamma$ psychrometric constant [kPa °C-1].
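For reference, once the sub-terms ($\Delta$, $\gamma$, $R_n$, $e_s$, $e_a$, ...) have been computed, the final formula is a one-liner. The sketch below uses illustrative input values (not data from my series) and omits the FAO-56 sub-term equations:

```python
def et0_penman_monteith(delta, gamma, Rn, G, T, u2, es, ea):
    """FAO-56 Penman-Monteith reference evapotranspiration [mm/day]."""
    num = 0.408 * delta * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den

# Plausible summer-day values (illustrative assumptions only):
et0 = et0_penman_monteith(delta=0.15, gamma=0.066, Rn=15.0, G=0.0,
                          T=20.0, u2=2.0, es=2.34, ea=1.34)

# A cold day with negative net radiation can drive the result below zero:
et0_neg = et0_penman_monteith(delta=0.08, gamma=0.066, Rn=-2.0, G=0.0,
                              T=2.0, u2=1.0, es=0.70, ea=0.65)
```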
It appears that the negative values are due to negative values of the net radiation $R_n$. A negative value of $R_n$ makes sense, but what about a negative $ET_0$? How should I deal with these values, given that I intend to use $ET_0$ as input for hydrological modelling?
This page demonstrates the features and capabilities of the ODT plugin and demonstrates which plugins support ODT export. To test it, proceed like this:
Exporting this page is also an easy way to check the influence of ODT or CSS templates on the generated output. You might also like to compare the differences caused by setting the config option
css_usage to off (plugins only) or basic style import. This demo page is a work in progress, so a lot of examples are still missing. At the end, all plugins listed under ODT render support should have at least a little example here on the ODT demo page. Help is always welcome. Do not be afraid to extend this page. I can easily change the page content if something is wrong or if something got broken or deleted accidentally — LarsDW223
First, let's create a table of content using the following code:
{{odt>toc:title=Content;leader_sign=.;indents=0,0.5,1;pagebreak=true;styleL1="font-weight:bold;";styleL2="fontstyle:italic;";styleL3="font-style:normal;";}}
The table of contents needs to be updated after export to ODT to show correct page numbers
DokuWiki supports bold, italic, underlined and monospaced texts. Of course you can combine all these.
DokuWiki supports **bold**, //italic//, __underlined__ and ''monospaced'' texts. Of course you can **__//''combine''//__** all these.
You can use
subscript and superscript, too. You can use <sub>subscript</sub> and <sup>superscript</sup>, too.
You can mark something as
deleted 1) as well. You can mark something as <del>deleted</del> as well.
An external link looks like this after export to ODT: This Link points to google. The ODT plugin treats an internal link like an external one: This Link points to the start page. This only changes if the link points “inside” the exported ODT document: This Link points to the start of this document. This example only works if the page name of
this page is odt-demo-page.
DokuWiki supports Interwiki links. These are quick links to other Wikis. For example this is a link to Wikipedia's page about Wikis: Wiki. Interwiki links look like external links after export to ODT.
The ODT plugin renders Windows share links as plain text: Example link.
Some heading examples:
If you would like headings to be numbered after export to ODT, then set the config option
outline_list_style to Numbers. Hint: Do not worry if the heading numbers are missing in the table of contents, they will appear after updating it.
Images are included in the exported ODT document. Let's see some examples:
An image link:
A left-aligned image:
…and a centered one…
…and also one right aligned:
Of course, lists are exported to ODT as well. See the example from the DokuWiki.org Syntax page:
Images generated from Text to Image Conversions will also be included in the exported ODT Document. Again, see the example from the DokuWiki.org Syntax page:
Quoting is implemented by using tables, see the example from DokuWiki.org:
I think we should do it
No we shouldn't Well, I say we should Really? Yes! Then lets do it!
And yet another example taken from DokuWiki.org:
Heading 1 Heading 2 Heading 3 Row 1 Col 1 Row 1 Col 2 Row 1 Col 3 Row 2 Col 1 some colspan (note the double pipe) Row 3 Col 1 Row 3 Col 2 Row 3 Col 3
Heading 1 Heading 2 Heading 3 Row 1 Col 2 Row 1 Col 3 Heading 4 no colspan this time Heading 5 Row 2 Col 2 Row 2 Col 3
Tables look quite boring after export to ODT. There is no difference between a table header cell and a normal cell. This can be changed by using a ODT or CSS template or by setting the config option
css_usage to basic style import.
As DokuWiki itself, the ODT plugin uses Geshi for syntax highlighting:
/** * The HelloWorldApp class implements an application that * simply displays "Hello World!" to the standard output. */ class HelloWorldApp { public static void main(String[] args) { System.out.println("Hello World!"); //Display the string. } }
Please note that links generated by Geshi will also be inserted as links into the exported ODT document.
Downloadable code blocks tagged with
file just look like normal
code blocks after the export:
<?php echo "hello world!"; ?>
Because the ODT plugin runs on the DokuWiki server, it is not possible to implement ODT support in JavaScript based Syntax highlighter plugins.
Feeds can also be exported to ODT. Let's have a look at the Github feed for the ODT plugin's master branch:
Footnotes can also be exported to ODT, some examples first:
Please notice that footnotes in the ODT document are slightly different than in the browser:
This example requires the changemarks plugin to be installed
The changemarks plugin is used to mark portions of text as <ins This has been inserted.>inserted</ins>, <del This has been deleted.>deleted</del> or !!This has been highlighted.>highlighted!!.
This example requires the Chem plugin to be installed
The Chem plugin is used to format a molecular formula. See some examples from the plugin page:
This example requires the Color plugin to be installed
The Color plugin is used to write text in different colors:
This example requires the Columns plugin to be installed
The Columns plugin is used to place text in multiple columns: <columns 100% 50% - →
First column text (50% width).
<newcolumn>
Second column text.
<newcolumn>
Third column text.
</columns>
This example requires the Definition List plugin to be installed
See an example of a simple definition list:
; termA : definitionA ; termB : definitionB ; termC : definitionC
This example requires the emphasis plugin to be installed
Some examples for changing the font color with the emphasis plugin: ::This should be green:: :::and yellow::: ::::and red:::: (if the default color set is used). And now the same for the background colors: ;;This should be green;; ;;;and yellow;;; ;;;;and red;;;;.
This example requires the exttab3 plugin to be installed
The exttab3 plugin makes it possible to change the look of your tables. Have a look at the example form the plugin page:
{| title="Extended Table Example"
! style="width: 12em;"| A1 Header ! style="width: 10em;"| B1 Header
{| title="nested table"
Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor inviduntut labore et dolore magna aliquyam erat, sed diam voluptua.
List in Cell
monospace
quoting ATTENTION: Needs an extra empty line after lists and quoting syntax!
This example requires the filelist plugin to be installed
The output generated by the filelist plugin is exported to ODT.
This example requires the geotag plugin to be installed
The geotag plugin allows a user to annotate pages with a geotag: lat: 51.565696, lon: 5.324596.
This example requires the keyboard plugin to be installed
Key sequences generated with the keyboard plugin are exported to ODT, see the example from the plugin page:
This displays the keys <key>ALT</key> + <key>H</key>.
This example requires the mathpublish plugin to be installed
The mathpublish plugin can be used to insert mathematical formulae. See some examples from the plugin page:
<m>S(f)(t)=a_{0}+sum{n=1}{+infty}{a_{n} cos(n omega t)+b_{n} sin(n omega t)}</m>
<m 8>delim{lbrace}{matrix{3}{1}3x-5y_z_0_sqrt_2_x-7y_8z_0_x-8y_9z_0}{ }</m>
<m 32>delim{|}1_n_sum_n_1_n_gamma_u_n_-_1_2_pi_int_0_2_pi_gamma_t_dt{|} ⇐ epsilon/3</m>
This example requires the mimetex plugin to be installed
The mimetex plugin can be used to insert formulas which are also exported to ODT. See some examples from the plugin page:
<latex>
\frac{3}{4 \pi} \sqrt{4 \cdot x^2 12}\\ \lim_{n \to \infty} \sum_{k=1}^n \frac{1}{k^2} = \frac{\pi^2}{6}\\ \it{f}(x) = \frac{1}{\sqrt{x} x^2}\\ e^{i \pi} + 1 = 0\;
</latex>
This example requires the note plugin to be installed
The note plugin can be used to write little notes of different kinds. See this examples from the plugin page: <note> This is my note ! Remember it!! </note>
<note important> Warning ! You're about to lose your mind </note>
<note tip> The clues are in the images. </note>
<note warning> Beware of the cat when you open the door !! </note>
In ODT the notes are rendered as tables. This is why they lose their round corners after export to ODT.
This example requires the pagebreak plugin to be installed
A manual pagebreak can be inserted with the following code:
<pagebreak>
Let's test it…<pagebreak>…this text should be on a new page. Whether the pagebreak is included or not makes no difference in the browser view.
This example requires the switchpanel plugin to be installed
SVG images can be easily included in the exported ODT document as the example from the switchpanel plugin page shows: <switchpanel> ==line:number=12 1,PC1a 2,PC2 3,PC3 6,?? ==line 1,AB 10:case=none 11,int.:color=red 12,sas:case=close </switchpanel> |
I'm struggling with the following problem:
In order to calculate $\int_{a}^{b}f(x)dx$ we use the Gaussian quadrature formula $\int_{a}^{b}f(x)dx\approx\sum_{i=0}^{n}A_if(x_i)$, where $A_i$ are the weights and $x_i$ are the roots of the polynomial $P_{n+1}(x)$ of degree $n+1$, with $P_0(x) = 1$ and $P_k(1) = 1\ \forall k\geq 0$, which satisfies $\int_a^b x^k P_{n+1}(x)\,dx = 0\ \forall k\in\{0,1,\ldots, n\}$.
Show that the Legendre polynomials $P_n$ satisfy Bonnet's recursion formula $(n+1)P_{n+1}(x) = (2n+1)xP_n(x)-nP_{n-1}(x)\forall n\geq 1$.
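Before attempting a proof, the identity can be sanity-checked numerically for the classical case $a = -1$, $b = 1$ (a quick sketch using numpy's Legendre module; obviously not a substitute for the derivation itself):

```python
import numpy as np
from numpy.polynomial import legendre

def P(n, x):
    """Evaluate the n-th Legendre polynomial at x."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return legendre.legval(x, coeffs)

x = np.linspace(-1.0, 1.0, 9)
for n in range(1, 6):
    lhs = (n + 1) * P(n + 1, x)
    rhs = (2 * n + 1) * x * P(n, x) - n * P(n - 1, x)
    assert np.allclose(lhs, rhs)   # Bonnet's recursion holds
```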
I've tried writing the polynomials in their canonical forms, but I have zero idea of how to relate the polynomials to each other.
It's also worth noting that I'm only allowed to use the conditions given in the problem, anything else has to be proven separately.
Any ideas on how I can continue (or start)?
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate?
I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol.
It just seems like this argument is all about the sets of n-simplices. Which is the trivial part.
lol no i mean, i'm following it by context actually
so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side
@user1732 haha thanks! we had no idea if that'd actually find its way to the internet...
@JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels
@JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC
@IlaRossi i would imagine that this is in goerss--jardine? ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes
@JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81
@HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary)
@JonathanBeardsley what?! i really liked that picture! i wonder why they removed it
@HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world
@HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)?
i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the appendix here: arxiv.org/pdf/1510.03525.pdf
as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit. combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$
@JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat)
I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism
Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open
not put all my eggs in one basket, as it were
I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats
Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality
@JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak).
There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k...
@JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad
It's enough to show everything works for generating cofaces and codegeneracies
the codegeneracies are free, the 0 and nth cofaces are free
all of those can be done treating frak{C} as a black box
the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit for codimension 1 face inclusions
the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex
In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation).
> Thus, using appropriate tags one can increase ones chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc) or worse just newly created tags, one might miss a chance to give visibility to ones question.
I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers.
You are asking posts far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself. (Other than possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.)
I just wanted to mention this, in case it helps you when asking question here. (Although it seems that you're doing fine.)
@MartinSleziak even I was not sure what other tags are appropriate to add.. I will see other questions similar to this, see what tags they have added and will add if I get to see any relevant tags.. thanks for your suggestion.. it is very reasonable,.
You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags
I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them. |
The proofs I will present are based on techniques relevant to the fact that the CES production function has the form of a
generalized weighted mean. This was used in the original paper where the CES function was introduced: Arrow, K. J., Chenery, H. B., Minhas, B. S., & Solow, R. M. (1961). Capital-labor substitution and economic efficiency. The Review of Economics and Statistics, 225-250. The authors there referred their readers to the book Hardy, G. H., Littlewood, J. E., & Pólya, G. (1952). Inequalities, chapter 2.
We consider the general case$$Q_k=\gamma[a K^{-\rho} +(1-a) L^{-\rho} ]^{-\frac{k}{\rho}},\;\; k>0$$
$$\Rightarrow \gamma^{-1}Q_k = \frac 1{[a (1/K^{\rho}) +(1-a) (1/L^{\rho}) ]^{\frac{k}{\rho}}}$$
1) Limit when $\rho \rightarrow \infty$
Since we are interested in the limit when $\rho\rightarrow \infty$, we can ignore the interval for which $\rho \leq 0$ and treat $\rho$ as strictly positive.
Without loss of generality, assume $K\geq L \Rightarrow (1/K^{\rho})\leq (1/L^{\rho})$. We also have $K, L >0$. Then we verify that the following inequality holds:
$$(1-a)^{k/\rho}(1/L^{k})\leq \gamma Q_k^{-1} \leq (1/L^{k}) $$
$$\implies (1-a)^{k/\rho}(1/L^{k})\leq [a (1/K^{\rho}) +(1-a) (1/L^{\rho}) ]^{\frac{k}{\rho}} \leq (1/L^{k}) \tag{1}$$
by raising throughout to the $\rho/k$ power to get
$$(1-a)(1/L^{\rho}) \leq a (1/K^{\rho}) +(1-a) (1/L^{\rho}) \leq (1/L^{\rho}) \tag {2}$$which indeed holds, obviously, given the assumptions. Then go back to the first element of $(1)$ and
$$\lim_{\rho\rightarrow \infty} (1-a)^{k/\rho}(1/L^{k}) =(1/L^{k})$$
which sandwiches the middle term in $(1)$ to $(1/L^{k})$ , so
$$\lim_{\rho\rightarrow \infty}Q_k = \frac {\gamma }{1/L^k} = \gamma L^k = {\gamma }\big[\min\{K,L\}\big]^{k} \tag{3}$$
So for $k=1$
we obtain the basic Leontief production function.
2) Limit when $\rho \rightarrow 0$
Write the function using an exponential as
$$\gamma^{-1}Q_k=\exp\left\{-\frac k{\rho}\cdot \ln\big[a (K^{\rho})^{-1} +(1-a) (L^{\rho})^{-1}\big]\right\} \tag {4}$$
Consider the first-order Maclaurin expansion (Taylor expansion centered at zero) of the term inside the logarithm, with respect to $\rho$:
$$a (K^{\rho})^{-1} +(1-a) (L^{\rho})^{-1} \\= a (K^{0})^{-1} +(1-a) (L^{0})^{-1} -a (K^{0})^{-2}K^{0}\rho\ln K- (1-a) (L^{0})^{-2}L^{0}\rho\ln L + O(\rho^2) \\$$
$$=1 - \rho a\ln K - \rho(1-a)\ln L+ O(\rho^2) = 1 +\rho \big[\ln K^{-a}L^{-(1-a)}\big]+ O(\rho^2)$$
Insert this back into $(4)$ and get rid of the outer exponential,
$$\gamma^{-1}Q_k = \left(1 +\rho \big[\ln K^{-a}L^{-(1-a)}\big]+ O(\rho^{2})\right)^{-k/\rho}$$
In case it is opaque, define $r\equiv 1/\rho$ and re-write
$$\gamma^{-1}Q_k = \left(1 +\frac{\big[\ln K^{-a}L^{-(1-a)}\big]}{r}+ O(r^{-2})\right)^{-kr}$$
Now it does look like an expression whose limit at infinity will give us something exponential:
$$\lim_{\rho\rightarrow 0}\gamma^{-1}Q_k = \lim_{r\rightarrow \infty}\gamma^{-1}Q_k = \left(\exp\left\{ \ln K^{-a}L^{-(1-a)}\right\} \right)^{-k}$$
$$\Rightarrow \lim_{\rho\rightarrow 0}Q_k =\gamma\left(K^{a}L^{1-a}\right)^k$$
The degree of homogeneity $k$ of the function is preserved, and if $k=1$
we obtain the Cobb-Douglas function.
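Both limiting results are easy to verify numerically; a quick sketch with arbitrary illustrative parameter values:

```python
def ces(K, L, rho, a=0.3, gamma=1.5, k=1.0):
    # Q_k = gamma * [a*K^(-rho) + (1-a)*L^(-rho)]^(-k/rho)
    return gamma * (a * K ** (-rho) + (1 - a) * L ** (-rho)) ** (-k / rho)

K, L, a, gamma = 4.0, 2.0, 0.3, 1.5
cobb_douglas = gamma * K ** a * L ** (1 - a)   # the rho -> 0 limit
leontief = gamma * min(K, L)                   # the rho -> infinity limit (k = 1)

print(abs(ces(K, L, 1e-7) - cobb_douglas))   # tiny: close to the Cobb-Douglas limit
print(abs(ces(K, L, 200.0) - leontief))      # small: close to the Leontief limit
```

Pushing $\rho$ much beyond a few hundred will eventually underflow in floating point, but the convergence is already visible at moderate values.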
It was this last result that made Arrow and Co to call $a$ the "distribution" parameter of the CES function. |
For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence has several additional interpretations. First, it is often referred to as the
average sensitivity of $f$ because of the following proposition: Proposition 27. For $f : \{-1,1\}^n \to \{-1,1\}$, \[ \mathbf{I}[f] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})], \] where $\mathrm{sens}_f(x)$ is the sensitivity of $f$ at $x$, defined to be the number of pivotal coordinates for $f$ on input $x$. Proof: \begin{multline*} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf Pr}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})] \\ = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \boldsymbol{1}_{f({\boldsymbol{x}}) \neq f({\boldsymbol{x}}^{\oplus i})}\right] = \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{sens}_f({\boldsymbol{x}})]. \quad \Box \end{multline*}
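Proposition 27 is easy to verify by brute force on a small example. The sketch below (using $\mathrm{Maj}_3$, an arbitrary choice of test function) computes the total influence both as $\sum_i \mathbf{Inf}_i[f]$ and as the average sensitivity:

```python
from itertools import product

def maj3(x):
    # majority of three +-1 bits
    return 1 if sum(x) > 0 else -1

def flip(x, i):
    y = list(x)
    y[i] = -y[i]
    return tuple(y)

cube = list(product([-1, 1], repeat=3))  # the 8 points of {-1,1}^3

# Total influence as the sum of coordinate influences Inf_i[f] = Pr[f(x) != f(x^i)]
inf_sum = sum(
    sum(maj3(x) != maj3(flip(x, i)) for x in cube) / len(cube)
    for i in range(3)
)

# Total influence as the average sensitivity E[sens_f(x)]
avg_sens = sum(
    sum(maj3(x) != maj3(flip(x, i)) for i in range(3)) for x in cube
) / len(cube)

print(inf_sum, avg_sens)  # both equal 1.5 for Maj_3
```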
The total influence of $f : \{-1,1\}^n \to \{-1,1\}$ is also closely related to the size of its
edge boundary; from Fact 14 we deduce: Examples 29 (Recall Examples 15.) For boolean-valued functions $f : \{-1,1\}^n \to \{-1,1\}$ the total influence ranges between $0$ and $n$. It is minimized by the constant functions $\pm 1$ which have total influence $0$. It is maximized by the parity function $\chi_{[n]}$ and its negation which have total influence $n$; every coordinate is pivotal on every input for these functions. The dictator functions (and their negations) have total influence $1$. The total influence of $\mathrm{OR}_n$ and $\mathrm{AND}_n$ is very small: $n2^{1-n}$. On the other hand, the total influence of $\mathrm{Maj}_n$ is fairly large: roughly $\sqrt{2/\pi}\sqrt{n}$ for large $n$.
By virtue of Proposition 20 we have another interpretation for the total influence of
monotone functions:
This sum of the degree-$1$ Fourier coefficients has a natural interpretation in social choice:
Proposition 31. Let $f : \{-1,1\}^n \to \{-1,1\}$ be a voting rule for a $2$-candidate election. Given votes ${\boldsymbol{x}} = ({\boldsymbol{x}}_1, \dots, {\boldsymbol{x}}_n)$, let $\boldsymbol{w}$ be the number of votes which agree with the outcome of the election, $f({\boldsymbol{x}})$. Then \[ \mathop{\bf E}[\boldsymbol{w}] = \frac{n}{2} + \frac12 \sum_{i=1}^n \widehat{f}(i). \] Proof: By the formula for Fourier coefficients, \begin{equation} \label{eqn:deg-1-sum} \sum_{i=1}^n \widehat{f}(i) = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}}) {\boldsymbol{x}}_i] = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)]. \end{equation} Now ${\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n$ equals the difference between the number of votes for candidate $1$ and the number of votes for candidate $-1$. Hence $f({\boldsymbol{x}})({\boldsymbol{x}}_1 + \cdots + {\boldsymbol{x}}_n)$ equals the difference between the number of votes for the winner and the number of votes for the loser; i.e., $\boldsymbol{w} - (n-\boldsymbol{w}) = 2\boldsymbol{w} - n$. The result follows. $\Box$
Rousseau
[Rou62] suggested that the ideal voting rule is one which maximizes the number of votes which agree with the outcome. Here we show that the majority rule has this property (at least when $n$ is odd): Theorem 32. The unique maximizers of $\sum_{i=1}^n \widehat{f}(i)$ among all $f : \{-1,1\}^n \to \{-1,1\}$ are the majority functions. In particular, $\mathbf{I}[f] \leq \mathbf{I}[\mathrm{Maj}_n] = \sqrt{2/\pi}\sqrt{n} + O(n^{-1/2})$ for all monotone $f$. Proof: From \eqref{eqn:deg-1-sum}, \[ \sum_{i=1}^n \widehat{f}(i) = \mathop{\bf E}_{{\boldsymbol{x}}}[f({\boldsymbol{x}})({\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n)] \leq \mathop{\bf E}_{{\boldsymbol{x}}}[|{\boldsymbol{x}}_1 + {\boldsymbol{x}}_2 + \cdots + {\boldsymbol{x}}_n|], \] since $f({\boldsymbol{x}}) \in \{-1,1\}$ always. Equality holds if and only if $f(x) = \mathrm{sgn}(x_1 + \cdots + x_n)$ whenever $x_1 + \cdots + x_n \neq 0$. The second statement of the theorem follows from Proposition 30 and Exercise 18 in this chapter. $\Box$
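Theorem 32 can be confirmed by exhaustive search for small $n$. The sketch below (for $n = 3$) enumerates all $2^{2^3} = 256$ boolean functions, computes $\sum_i \widehat{f}(i) = \mathop{\bf E}[f({\boldsymbol{x}})({\boldsymbol{x}}_1+{\boldsymbol{x}}_2+{\boldsymbol{x}}_3)]$ for each, and checks that $\mathrm{Maj}_3$ is the unique maximizer:

```python
from itertools import product

cube = list(product([-1, 1], repeat=3))  # the 8 points of {-1,1}^3

def deg1_sum(fvals):
    # sum_i fhat(i) = E[f(x) * (x_1 + x_2 + x_3)], per the displayed equation
    return sum(fvals[j] * sum(x) for j, x in enumerate(cube)) / len(cube)

all_fs = list(product([-1, 1], repeat=len(cube)))  # all 2^8 = 256 truth tables
best = max(deg1_sum(f) for f in all_fs)
maximizers = [f for f in all_fs if deg1_sum(f) == best]

maj = tuple(1 if sum(x) > 0 else -1 for x in cube)  # Maj_3 as a truth table
print(best, len(maximizers), maximizers[0] == maj)  # 1.5 1 True
```

The maximizer is unique here because $n$ is odd, so $x_1 + x_2 + x_3$ is never zero and the sign is forced at every input.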
Let’s now take a look at more analytic expressions for the total influence. By definition, if $f : \{-1,1\}^n \to {\mathbb R}$ then \begin{equation} \label{eqn:tinf-gradient} \mathbf{I}[f] = \sum_{i=1}^n \mathbf{Inf}_i[f] = \sum_{i=1}^n \mathop{\bf E}_{{\boldsymbol{x}}}[\mathrm{D}_i f({\boldsymbol{x}})^2] = \mathop{\bf E}_{{\boldsymbol{x}}}\left[\sum_{i=1}^n \mathrm{D}_i f({\boldsymbol{x}})^2\right]. \end{equation} This motivates the following definition:
Definition 33The (discrete) gradient operator$\nabla$ maps the function $f : \{-1,1\}^n \to {\mathbb R}$ to the function $\nabla f : \{-1,1\}^n \to {\mathbb R}^n$ defined by \[ \nabla f(x) = (\mathrm{D}_1 f(x), \mathrm{D}_2 f(x), \dots, \mathrm{D}_n f(x)). \]
Note that for $f : \{-1,1\}^n \to \{-1,1\}$ we have $\|\nabla f(x)\|_2^2 = \mathrm{sens}_f(x)$, where $\| \cdot \|_2$ is the usual Euclidean norm in ${\mathbb R}^n$. In general, from \eqref{eqn:tinf-gradient} we deduce:
An alternative analytic definition involves introducing the
Laplacian: Definition 35. The Laplacian operator $\mathrm{L}$ is the linear operator on functions $f : \{-1,1\}^n \to {\mathbb R}$ defined by $\mathrm{L} = \sum_{i=1}^n \mathrm{L}_i$.
In the exercises you are asked to verify the following:
$\displaystyle \mathrm{L} f (x) = (n/2)\bigl(f(x) - \mathop{\mathrm{avg}}_{i \in [n]} \{f(x^{\oplus i})\}\bigr)$; $\displaystyle \mathrm{L} f (x) = f(x) \cdot \mathrm{sens}_f(x)$ if $f : \{-1,1\}^n \to \{-1,1\}$; $\displaystyle \mathrm{L} f = \sum_{S \subseteq [n]} |S|\,\widehat{f}(S)\,\chi_S$; $\displaystyle \langle f, \mathrm{L} f \rangle = \mathbf{I}[f]$.
We can obtain a Fourier formula for the total influence of a function using Theorem 19; when we sum that theorem over all $i \in [n]$ the Fourier weight $\widehat{f}(S)^2$ is counted exactly $|S|$ times. Hence:
Theorem 37. For $f : \{-1,1\}^n \to {\mathbb R}$, \begin{equation} \label{eqn:total-influence-formula} \mathbf{I}[f] = \sum_{S \subseteq [n]} |S| \widehat{f}(S)^2 = \sum_{k=0}^n k \cdot \mathbf{W}^{k}[f]. \end{equation} For $f : \{-1,1\}^n \to \{-1,1\}$ we can express this using the spectral sample: \[ \mathbf{I}[f] = \mathop{\bf E}_{\boldsymbol{S} \sim \mathscr{S}_{f}}[|\boldsymbol{S}|]. \]
Thus the total influence of $f : \{-1,1\}^n \to \{-1,1\}$ also measures the average “height” or degree of its Fourier weights.
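Theorem 37 and Proposition 27 can be cross-checked numerically. The sketch below (again using $\mathrm{Maj}_3$ as an arbitrary test function) computes the Fourier coefficients directly, compares $\sum_S |S|\widehat{f}(S)^2$ with the average sensitivity, and also computes $\mathop{\bf Var}[f]$ for comparison:

```python
from itertools import product

n = 3
cube = list(product([-1, 1], repeat=n))
f = {x: (1 if sum(x) > 0 else -1) for x in cube}  # Maj_3

def chi(S, x):
    # the character chi_S(x) = prod_{i in S} x_i
    out = 1
    for i in S:
        out *= x[i]
    return out

subsets = [tuple(i for i in range(n) if (m >> i) & 1) for m in range(2 ** n)]
fhat = {S: sum(f[x] * chi(S, x) for x in cube) / len(cube) for S in subsets}

# Theorem 37: I[f] = sum_S |S| * fhat(S)^2
total_influence = sum(len(S) * fhat[S] ** 2 for S in subsets)

# Average sensitivity (Proposition 27), for comparison
def flip(x, i):
    y = list(x)
    y[i] = -y[i]
    return tuple(y)

avg_sens = sum(sum(f[x] != f[flip(x, i)] for i in range(n)) for x in cube) / len(cube)

# Var[f] = sum over nonempty S of fhat(S)^2; note it is at most the total influence
variance = sum(fhat[S] ** 2 for S in subsets if S)

print(total_influence, avg_sens, variance)  # 1.5 1.5 1.0
```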
Finally, from Proposition 1.13 we have $\mathop{\bf Var}[f] = \sum_{k > 0} \mathbf{W}^{k}[f]$; comparing this with \eqref{eqn:total-influence-formula} we immediately deduce a simple but important fact called the
Poincaré inequality. Poincaré Inequality. For any $f : \{-1,1\}^n \to {\mathbb R}$, $\mathop{\bf Var}[f] \leq \mathbf{I}[f]$.
Equality holds in the Poincaré inequality if and only if all of $f$’s Fourier weight is at degrees $0$ and $1$; i.e., $\mathbf{W}^{\leq 1}[f] = \mathop{\bf E}[f^2]$. For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, Exercise 1.19 tells us this can only occur if $f = \pm 1$ or $f = \pm \chi_i$ for some $i$.
For boolean-valued $f : \{-1,1\}^n \to \{-1,1\}$, the Poincaré inequality can be viewed as an (edge-)isoperimetric inequality, or
(edge-)expansion bound, for the Hamming cube. If we think of $f$ as the indicator function for a set $A \subseteq \{-1,1\}^n$ of “measure” $\alpha = |A|/2^n$, then $\mathop{\bf Var}[f] = 4\alpha(1-\alpha)$ (Fact 1.14) whereas $\mathbf{I}[f]$ is $n$ times the (fractional) size of $A$’s edge boundary. In particular, the Poincaré inequality says that subsets $A \subseteq \{-1,1\}^n$ of measure $\alpha = 1/2$ must have edge boundary at least as large as those of the dictator sets.
For $\alpha \notin \{0, 1/2, 1\}$ the Poincaré inequality is not sharp as an edge-isoperimetric inequality for the Hamming cube; for small $\alpha$ even the asymptotic dependence is not optimal. Precisely optimal edge-isoperimetric results (and also vertex-isoperimetric results) are known for the Hamming cube. The following simplified theorem is optimal for $\alpha$ of the form $2^{-i}$:
This result illustrates an important recurring concept in the analysis of boolean functions: the Hamming cube is a “small-set expander”. Roughly speaking, this is the idea that “small” subsets $A \subseteq \{-1,1\}^n$ have unusually large “boundary size”. |
Amalgamated Artichokes
Fatima has done some previous analyses and has found that the stock price over any period of time can be modelled reasonably accurately with the following equation:\[ \operatorname {price}(k) = p \cdot (\sin (a \cdot k+b) + \cos (c \cdot k+d) + 2) \]
where $p$, $a$, $b$, $c$ and $d$ are constants. Fatima would like you to write a program to determine the largest price decline over a given sequence of prices. Figure 1 illustrates the price function for Sample Input 1. You have to consider the prices only for integer values of $k$.
Input
The input consists of a single line containing $6$ integers $p$ ($1 \le p \le 1\, 000$), $a$, $b$, $c$, $d$ ($0 \le a, b, c, d \le 1\, 000$) and $n$ ($1 \le n \le 10^6$). The first $5$ integers are described above. The sequence of stock prices to consider is $\operatorname{price}(1), \operatorname{price}(2), \ldots, \operatorname{price}(n)$.
Output
Display the maximum decline in the stock prices. If there is no decline, display the number $0$. Your output should have an absolute or relative error of at most $10^{-6}$.
Sample Input 1: 42 1 23 4 8 10
Sample Output 1: 104.855110477

Sample Input 2: 100 7 615 998 801 3
Sample Output 2: 0.00

Sample Input 3: 100 432 406 867 60 1000
Sample Output 3: 399.303813
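The intended approach is a single left-to-right scan: keep the running maximum of the prices seen so far and record the largest drop from that maximum to the current price. A sketch of this $O(n)$ solution, which can be checked against the sample data above:

```python
import math

def max_decline(p, a, b, c, d, n):
    # price(k) = p * (sin(a*k + b) + cos(c*k + d) + 2), scanned once for k = 1..n;
    # the answer is the largest drop from a running maximum to a later price.
    best = 0.0
    running_max = -math.inf
    for k in range(1, n + 1):
        price = p * (math.sin(a * k + b) + math.cos(c * k + d) + 2)
        running_max = max(running_max, price)
        best = max(best, running_max - price)
    return best

print(max_decline(42, 1, 23, 4, 8, 10))       # Sample 1: 104.855110477...
print(max_decline(100, 7, 615, 998, 801, 3))  # Sample 2: 0.0 (prices only rise)
```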
As requested in the comments, here is a worked example. The main body deals with minimizing $f(x)$ for a specific problem. At the bottom follows a brief discussion of constraints, then a brief discussion about the general case. Let's solve the Weighted Maximum Cut problem since this: is a relatively straight-forward example; is hard classically; is a ...
So for hybrid quantum-classical algorithms, I suggest looking at: the Quantum Approximate Optimization Algorithm; variational hybrid quantum-classical algorithms, including the famous Variational Quantum Eigensolver applied to Max-Cut problems; PennyLane, which helps you in developing hybrid computation for optimization problems and Machine Learning. ...
Short Answer: It is potentially hard (as bRost03 indicates in the comments). To be precise, coNP-hard. Longer Answer: In adiabatic quantum computation, the ground-space of the final Hamiltonian is typically determined by the optimum solution to some constraint satisfaction problem (CSP). If the CSP is perfectly solvable, the ground-space is spanned by (...
If you are looking for a more complete implementation of a quantum variational algorithm in the context of Cirq, I would recommend looking at the second example in the OpenFermion-Cirq notebook found here. It uses a custom ansatz for hydrogen in a minimal basis, but makes a bit more explicit all the required pieces. Another good example, perhaps without ...
Here is the best circuit I've found. It uses 14 CNOTs. Note that this circuit is not using a linear layout! It is placed on the grid like this:

0-A-1
|
3
|
2

Where 'A' is an ancilla initialized in the |0> state and '0', '1', '2', '3' are the qubits making up the register (with '0' being the least significant bit). I verified this circuit in Quirk ...
Here is the best construction I've found. It uses 8 CNOTs. I verified this circuit in Quirk using the channel-state duality and a known-good inverse. The target is the middle qubit. None of the CNOTs go directly from top to bottom or bottom to top. You can switch which qubit is the target by simply switching which line the Hadamards are on.
I believe I've got it down to 9 controlled-not gates: What I did was use a set of three CNOTs in place of a swap to move the two controls next to each other, to achieve the last part of the standard Toffoli circuit (see here). This used 12 CNOTs. However, the final $T$ and $H$ gates on the target qubit I propagated through one of these swaps. This let ...
The Quantum Approximate Optimization Algorithm is a good place to start for analyzing the relative performance of quantum algorithms on approximation problems. One result so far is that at p=1 QAOA can theoretically achieve an approximation ratio of 0.624 for MaxCut on 3-regular graphs. This result was obtained using brute force enumeration of the different ...
One of the advantages, as stated in the paper you linked, is that with QAOA you can increase the precision arbitrarily, whereas QA will only find the solution with probability 1 as $T \to \infty$ which is impractical. In addition if $T$ is too long you're likely to not find the solution as the probability is not monotonic. I believe an example of this can be ...
In the article you mentioned it is said that classical algorithms can beat some cases of (quantum) QAOAs, as is proved in this article. So finding cases where quantum QAOA can still beat classical algorithms and can run on NISQ devices with low-depth circuits is still exciting and promising. The article uses plausible conjectures from complexity theory to ...
I suggest looking at how a genetic algorithm works in the context of discrete variables to understand it. They provide a methodology but you can apply other mutation/crossover techniques. Briefly, in a simple optimization problem where the variables are discrete, we can solve heuristically with genetic algorithms (which belong to the class of evolutionary ...
So in your example, you try to find the quantum circuit representing the Toffoli operation. I would then change my objective/fitness function and compare the unitary matrix representing the operation. You can use a minimization objective like: $$ \mathcal{F} = 1-\frac{1}{2^n} |\operatorname{Tr}(U_aU_t^{\dagger})| $$ with $U_a$ the unitary of the ...
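This objective is easy to prototype numerically. The sketch below (NumPy; using the Toffoli matrix as the target $U_t$ is my own illustration, not part of the answer) shows that $\mathcal{F}$ vanishes exactly when $U_a$ matches the target up to a global phase, which is why the absolute value of the trace is taken:

```python
import numpy as np

def infidelity(U_a, U_t):
    """The objective F = 1 - |Tr(U_a U_t^dagger)| / dim, which is zero iff
    U_a equals U_t up to a global phase."""
    dim = U_t.shape[0]  # dim = 2^n for an n-qubit unitary
    return 1.0 - abs(np.trace(U_a @ U_t.conj().T)) / dim

# Target: the 8x8 Toffoli unitary (CCNOT), which swaps |110> and |111>
toffoli = np.eye(8)
toffoli[[6, 7]] = toffoli[[7, 6]]

print(infidelity(toffoli, toffoli))                     # 0.0 for a perfect match
print(infidelity(np.exp(1j * 0.7) * toffoli, toffoli))  # ~0: global phase ignored
print(infidelity(np.eye(8), toffoli))                   # 0.25 for the identity
```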
The GLOA is just an optimization algorithm (another genetic algorithm, actually). So as long as your problem translates into an objective function you seek to minimize/maximize, this would be possible (even by another genetic algorithm). I suggest first thinking about how you encode your problem for the optimization. For example, a sequence of discrete and/or real ...
Let's answer my own question: it is not possible. After some research I ended up computing the "truth table" for the two possible cases:

$b = 0$: $\vert 00 \rangle\rightarrow\vert 00 \rangle$, $\vert 01 \rangle\rightarrow\vert 10 \rangle$, $\vert 10 \rangle\rightarrow\vert 10 \rangle$, $\vert 11 \rangle\rightarrow\vert 01 \rangle$

$b = 1$: $\vert 00 \...
Pedro! I assume you are familiar with Grover's algorithm. Therefore, I suggest reading carefully these two papers: 1) Tight bounds on quantum searching (BBHT): it's a very broad Grover's algorithm analysis; 2) A quantum algorithm for finding the minimum (DH): this is the first Grover's algorithm application to optimization problems and we call DH (...
Here is tested code (also provided in one of the qiskit tutorials):

lapse = 0
interval = 60
while not job.done:
    print('Status @ {} seconds'.format(interval * lapse))
    print(job.status)
    time.sleep(interval)
    lapse += 1
print(job.status)

where interval is given in seconds (if your job requires longer waiting and execution, I would suggest to ...
Depending on what is your definition of "long time" the answer might be different: if it is of the order of minutes, then you can't do anything and you just have to wait for your turn in the queue; if it is several days, then there might be a problem (or a very, very long queue). Anyway, you can track the status of your job, even if this status does not ...
There is a system of N non-interacting particles (Ideal Gas). The Hamiltonian of a system of free particles is given by:
$$H = \sum_{i=1}^{N}\frac{p_{i}^2}{2m} + \sum_{i=1}^{N} \psi(q_i)$$
where to the kinetic term we have added a confining potential:
$$\psi(q_i) = \begin{cases} 0 & q_i \in V\\ \infty & otherwise\end{cases}$$
which keeps the particles inside the volume $V$.
First of all, some definitions:
$$\Omega (E, V, N) = \int \frac{d\Gamma}{N!\,\hbar^{3N}}\, \Theta(E - H(\Gamma))$$
where $d\Gamma = dp_1 \cdots dp_N\, dq_1 \cdots dq_N$ ($p$ is momentum and $q$ position) and $\Theta$ is the step function.
I want to compute the microcanonical phase space volume for an ideal gas. To do so I have to solve the following integral:
$$\Omega (E, V, N) = \int \frac{d\Gamma}{N!\,\hbar^{3N}} \Theta(E - H(\Gamma)) = \int dq_1 \int dq_2 \cdots \int dq_N \int \frac{dp_1 \cdots dp_N}{N!\,\hbar^{3N}}\,\Theta\Big(E - \sum_i \frac{p_{i}^2}{2m}\Big)$$
I know the solution follows as:
$$\int dq_1 \int dq_2 \cdots \int dq_N \int \frac{dp_1 \cdots dp_N}{N!\,\hbar^{3N}}\,\Theta\Big(E - \sum_i \frac{p_{i}^2}{2m}\Big) = \frac{V^N}{N!\,\hbar^{3N}}\int_{\sum_i p_{i}^2/2m \,\leq\, E} dp_1 \cdots dp_N = \frac{V^N}{N!\,\hbar^{3N}}\, V'_{3N}\,\big(\sqrt{2mE}\big)^{3N}$$

where $V'_{3N}$ denotes the volume of the unit ball in $3N$ dimensions.
But I do not really understand how can we go from having the step function to just the integral over $dp_1...dp_N$. |
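The step function simply restricts the domain: $\Theta(E - H)$ equals $1$ exactly on the region $\sum_i p_i^2/2m \leq E$ (a ball of radius $\sqrt{2mE}$ in momentum space) and $0$ elsewhere, so multiplying by it and integrating over all momenta is the same as integrating $1$ over that ball. A quick Monte Carlo sketch (with the small illustrative values $3N = 3$, $m = 1$, $E = 2$, my own choices) confirms that integrating the step function reproduces the ball-volume formula $\pi^{d/2}R^d/\Gamma(d/2+1)$:

```python
import math
import random

def ball_volume_mc(d, R, samples=200000, seed=1):
    """Monte Carlo estimate of the volume of a d-ball of radius R:
    the integral of Theta(R^2 - sum p_i^2) over the cube [-R, R]^d."""
    random.seed(seed)
    hits = sum(
        1 for _ in range(samples)
        if sum(random.uniform(-R, R) ** 2 for _ in range(d)) <= R * R
    )
    return (2 * R) ** d * hits / samples

d, m, E = 3, 1.0, 2.0          # one particle (3N = 3) keeps the check cheap
R = math.sqrt(2 * m * E)       # radius fixed by Theta(E - sum p_i^2 / 2m)
exact = math.pi ** (d / 2) / math.gamma(d / 2 + 1) * R ** d
print(ball_volume_mc(d, R), exact)
```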
This is related to another question I just asked where I learned that the equation of motion of a harmonic oscillator is expressed as:
$$\ddot{x}+kx=0$$
What little physics I grasp centers on geodesics as derived from the principle of stationary action and the Euler-Lagrange equations. I have therefore become accustomed to understanding the equation of motion as the geodesic:
$$\ddot{x}^m+{\Gamma^{\:\:m}_{jk} \dot{x}^j \dot{x}^k}=0$$
which can also be thought of as the covariant derivative of the tangent vector of a particle's path. I guess this second eq. is mostly used for analysis of particle motion in GR, but I also understand it is applicable to any other situations with position-dependent coefficients (like motion of light through opaque substances). (We can get rid of all the indices by the way since the harmonic oscillator is one dimensional)
My question: Is it possible to reduce the second equation to the first? The acceleration term is the same, and (I think) Hooke's constant $k$ is basically like the Christoffel symbol in the second eq., but I don't see the similarity between $x$ and $\dot{x}^2$. I sense I am missing something big. Appreciate your help.
EDIT: --I include here a response to JerrySchirmer in the comments section below-- In the Newtonian limit (flat and slow) the $00$ component (or $tt$) of the Christoffel symbol is the only one that doesn't vanish. I wanted to see if this component could somehow be expressed as $-kx$. But (insofar as I understand) this one non-vanishing component is usually of first order (a field gradient), not "0 order" like $-kx$. Is there a way to think of $kx$ as a field gradient, like $$kx=\frac{\partial \phi}{\partial x}?$$
The Earth's moon is in synchronous rotation around Earth. The question: why are there craters on both the lit side of the moon and the dark side, and why are they not paralleled by crater patterns on Earth?
There are craters on the Earth, Arizona's Meteor Crater being perhaps the best-known example. The reason that there aren't a lot more (obvious) ones is that the Earth has lots of dynamic processes, ranging from weather to plate tectonics, that gradually erase them. The Moon doesn't have these things, so craters last for billions of years.
I'd like to expand on jamesqf's answer. But first,
lit side of the moon and the dark
There are no "lit" and "dark" sides of the moon. It rotates, and whatever is dark now will be lit in two weeks' time. There are definitely
near and far sides of the moon though.
The problem with the Earth is that it's a dynamic planet with plate tectonics, continental collision, oceanic plate subduction, rivers, rain, wind, vegetation, sedimentation and many more processes that work to obscure any impact craters on Earth. The famous Chicxulub crater was discovered only in the 70s. Even when we know the geological feature, it may not be recognised as an impact crater. The Sudbury impact crater has long been known, but again only in the 70s was accepted as an impact crater.
Another reason that we see many impacts on the moon and not on the Earth is their age. A very large portion of the craters on the moon formed during the Late Heavy Bombardment, an event that happened around 3.9 billion years ago in which many inner solar system bodies were struck by a large number of impacts. Once an impact hits the moon, the crater is there. But there are hardly any rocks on Earth that remain from that period, giving the impression that Earth was spared from it. It wasn't - we just don't have any remaining (geomorphological) evidence that it happened.
I think the question arises in part from the misconception that on the near side of the moon (the "lit" side as it was referred to) the Earth fills much of the sky and blocks most incoming impactors. (Do correct me if I am wrong.)
That is incorrect. The Earth is ~12 Mm in (polar) diameter and ~360 Mm from the moon. Using the small angle approximation ($\tan \theta \approx \theta$ in radians) we get an angular diameter of $\theta \approx \tan \theta = 12/360 = 0.0333 \text{ rad} \approx 2^{\circ}$. In other words, the Earth fills only a small portion of the sky. (Which is still about $(2^\circ \div 0.5^\circ)^2 = 16$ times the angular area of the moon as seen from Earth). You can see this for yourself by looking at the Apollo photographs. (Note: the 360 Mm distance is at perigee. Apogee is a little further and I also didn't account for the fact that equatorial diameter is larger. The numbers hardly change. There is a question on Astronomy Stack Exchange if you want a more accurate answer.)
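For concreteness, the arithmetic above can be reproduced in a couple of lines (same rounded figures as in the text):

```python
import math

earth_diameter = 12e6   # m: the ~12 Mm polar diameter used above
moon_distance = 3.6e8   # m: the ~360 Mm Earth-Moon distance used above

theta = earth_diameter / moon_distance   # small-angle approximation, in radians
theta_deg = math.degrees(theta)          # roughly 2 degrees
ratio = (theta_deg / 0.5) ** 2           # angular area vs the ~0.5 degree Moon
print(theta_deg, ratio)                  # about 1.9 degrees and roughly 15x
```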
Above I was simply correcting a (perceived) geometric misunderstanding. We must also account for gravity. Remember that impactors need not strike either Earth or moon from the exact same direction that they approached the Earth-Moon system. For example, a comet which passes close to the Earth can have its trajectory bent by the Earth's gravity, so that it impacts the near side of the moon (coming from the direction of the planet) even though its path before did not go near the moon. The path of a body passing near the Earth will look approximately like a hyperbola with the planet at the focus.
Combine the above with the fact that impactors can come from any direction, and you see that on both Moon and Earth you could find craters in any location. |
The Fundamental Groups of a Path Connected Topological Space are Isomorphic
Theorem 1: Let $X$ be a topological space. If $X$ is path connected then $\pi_1(X, x_1)$ is isomorphic to $\pi_1(X, x_2)$ for all $x_1, x_2 \in X$. Proof: If $x_1 = x_2$ the theorem holds trivially. Assume instead that $x_1 \neq x_2$. Since $X$ is path connected there exists a path $\gamma : I \to X$ such that $\gamma(0) = x_2$ and $\gamma (1) = x_1$. Define a function $\phi : \pi_1(X, x_1) \to \pi_1(X, x_2)$ for all homotopy classes $[\alpha] \in \pi_1(X, x_1)$ by:
\begin{align} \quad \phi ([\alpha]) = [\gamma \alpha \gamma^{-1}] \end{align}
We need to show that $\phi$ is well-defined. Suppose that $\alpha \simeq \alpha'$, so that $[\alpha] = [\alpha']$. Then the homotopy from $\alpha$ to $\alpha'$ yields a homotopy from $\gamma \alpha \gamma^{-1}$ to $\gamma \alpha' \gamma^{-1}$, and hence $[\gamma \alpha \gamma^{-1}] = [\gamma \alpha' \gamma^{-1}]$. We now show that $\phi$ is a homomorphism. Let $[\alpha], [\beta] \in \pi_1(X, x_1)$. Then:
\begin{align} \quad \phi([\alpha][\beta]) &= \phi ([\alpha\beta]) \\ &= [\gamma\alpha\beta\gamma^{-1}] \\ &= [\gamma\alpha c_{x_1} \beta \gamma^{-1}] \\ &= [\gamma \alpha \gamma^{-1} \gamma \beta \gamma^{-1}] \\ &= [\gamma \alpha \gamma^{-1}][\gamma \beta \gamma^{-1}] \\ &= \phi([\alpha]) \phi ([\beta]) \end{align}

(Here $c_{x_1}$ denotes the constant loop at $x_1$, and we use that $\gamma^{-1} \gamma \simeq c_{x_1}$.)
So indeed, $\phi$ is a homomorphism. Lastly, to show that $\phi$ is a bijection, we define an inverse function $\psi : \pi_1(X, x_2) \to \pi_1(X, x_1)$ for all $[\beta] \in \pi_1(X, x_2)$ by:
\begin{align} \quad \psi ([\beta]) = [\gamma^{-1} \beta \gamma] \end{align}
Then $\psi(\phi([\alpha])) = [\gamma^{-1} \gamma \alpha \gamma^{-1} \gamma] = [\alpha]$ and likewise $\phi(\psi([\beta])) = [\beta]$, so $\psi \circ \phi = \mathrm{id}_{\pi_1(X, x_1)}$ and $\phi \circ \psi = \mathrm{id}_{\pi_1(X, x_2)}$. So $\phi$ is an isomorphism and:
\begin{align} \quad \pi_1(X, x_1) \cong \pi_1(X, x_2) \quad \blacksquare \end{align} |
Mathematics is the study of topics such as quantity (numbers), structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics. Mathematicians seek out patterns and use them to formulate new conjectures. Mathematicians resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, then mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, measurement, and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity for as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry.
Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Since the pioneering work of Giuseppe Peano (1858–1932), David Hilbert (1862–1943), and others on axiomatic systems in the late 19th century, it has become customary to view mathematical research as establishing truth by rigorous deduction from appropriately chosen axioms and definitions. Mathematics developed at a relatively slow pace until the Renaissance, when mathematical innovations interacting with new scientific discoveries led to a rapid increase in the rate of mathematical discovery that has continued to the present day.
Galileo Galilei (1564–1642) said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss (1777–1855) referred to mathematics as "the Queen of the Sciences". Benjamin Peirce (1809–1880) called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein (1879–1955) stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."
Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences. Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, and practical applications for what began as pure mathematics are often discovered.
Quantity
The study of quantity starts with numbers, first the familiar natural numbers and integers ("whole numbers") and arithmetical operations on them, which are described in arithmetic. The deeper properties of integers are studied in number theory, from which come such popular results as Fermat's Last Theorem. The twin prime conjecture and Goldbach's conjecture are two unsolved problems in number theory.
As the number system is further developed, the integers are recognized as a subset of the rational numbers ("fractions"). These, in turn, are contained within the real numbers, which are used to represent continuous quantities. Real numbers are generalized to complex numbers. These are the first steps of a hierarchy of numbers that goes on to include quaternions and octonions. According to the fundamental theorem of algebra, all solutions of equations in one unknown with complex coefficients are complex numbers, regardless of degree. Consideration of the natural numbers also leads to the transfinite numbers, which formalize the concept of "infinity". Another area of study is the size of sets, which is described with the cardinal numbers. These include the aleph numbers, which allow meaningful comparison of the size of infinitely large sets.
Natural numbers: $1, 2, 3, \ldots$
Integers: $\ldots, -2, -1, 0, 1, 2, \ldots$
Rational numbers: $-2, \frac{2}{3}, 1.21$
Real numbers: $-e, \sqrt{2}, 3, \pi$
Complex numbers: $2, i, -2+3i, 2e^{i\frac{4\pi}{3}}$

Structure
Many mathematical objects, such as sets of numbers and functions, exhibit internal structure as a consequence of operations or relations that are defined on the set. Mathematics then studies properties of those sets that can be expressed in terms of that structure; for instance number theory studies properties of the set of integers that can be expressed in terms of arithmetic operations. Moreover, it frequently happens that different such structured sets (or structures) exhibit similar properties, which makes it possible, by a further step of abstraction, to state axioms for a class of structures, and then study at once the whole class of structures satisfying these axioms. Thus one can study groups, rings, fields and other abstract systems; together such studies (for structures defined by algebraic operations) constitute the domain of abstract algebra.
By its great generality, abstract algebra can often be applied to seemingly unrelated problems; for instance a number of ancient problems concerning compass and straightedge constructions were finally solved using Galois theory, which involves field theory and group theory. Another example of an algebraic theory is linear algebra, which is the general study of vector spaces, whose elements called vectors have both quantity and direction, and can be used to model (relations between) points in space. This is one example of the phenomenon that the originally unrelated areas of geometry and algebra have very strong connections in modern mathematics. Combinatorics studies ways of enumerating the number of objects that fit a given structure.
Geometry
The study of space begins with geometry, in particular Euclidean geometry, which combines space and numbers and encompasses the well-known Pythagorean theorem. Trigonometry is the branch of mathematics that deals with relationships between the sides and the angles of triangles and with the trigonometric functions. The modern study of space generalizes these ideas to include higher-dimensional geometry, non-Euclidean geometries (which play a central role in general relativity) and topology. Quantity and space both play a role in analytic geometry, differential geometry, and algebraic geometry. Convex and discrete geometry were developed to solve problems in number theory and functional analysis but are now pursued with an eye on applications in optimization and computer science. Within differential geometry are the concepts of fiber bundles and calculus on manifolds, in particular, vector and tensor calculus. Within algebraic geometry is the description of geometric objects as solution sets of polynomial equations, combining the concepts of quantity and space, and also the study of topological groups, which combine structure and space. Lie groups are used to study space, structure, and change. Topology in all its many ramifications may have been the greatest growth area in twentieth-century mathematics; it includes point-set topology, set-theoretic topology, algebraic topology and differential topology. In particular, instances of modern-day topology are metrizability theory, axiomatic set theory, homotopy theory, and Morse theory. Topology also includes the now solved Poincaré conjecture, and the still unsolved areas of the Hodge conjecture. Other results in geometry and topology, including the four color theorem and the Kepler conjecture, have been proven only with the help of computers.
Calculus
Understanding and describing change is a common theme in the natural sciences, and calculus was developed as a powerful tool to investigate it. Functions arise here as a central concept describing a changing quantity. The rigorous study of real numbers and functions of a real variable is known as real analysis, with complex analysis the equivalent field for the complex numbers. Functional analysis focuses attention on (typically infinite-dimensional) spaces of functions. One of many applications of functional analysis is quantum mechanics. Many problems lead naturally to relationships between a quantity and its rate of change, and these are studied as differential equations. Many phenomena in nature can be described by dynamical systems; chaos theory makes precise the ways in which many of these systems exhibit unpredictable yet still deterministic behavior.
Applied mathematics
Applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry. Thus, "applied mathematics" is a mathematical science with specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems; as a profession focused on practical problems, applied mathematics focuses on the "formulation, study, and use of mathematical models" in science, engineering, and other areas of mathematical practice.
In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics, where mathematics is developed primarily for its own sake. Thus, the activity of applied mathematics is vitally connected with research in pure mathematics.
Statistics and other decision sciences
Applied mathematics has significant overlap with the discipline of statistics, whose theory is formulated mathematically, especially with probability theory. Statisticians (working as part of a research project) "create data that makes sense" with random sampling and with randomized experiments; the design of a statistical sample or experiment specifies the analysis of the data (before the data become available). When reconsidering data from experiments and samples or when analyzing data from observational studies, statisticians "make sense of the data" using the art of modelling and the theory of inference, with model selection and estimation; the estimated models and consequential predictions should be tested on new data.
Statistical theory studies decision problems such as minimizing the risk (expected loss) of a statistical action, such as using a procedure in, for example, parameter estimation, hypothesis testing, and selecting the best. In these traditional areas of mathematical statistics, a statistical-decision problem is formulated by minimizing an objective function, like expected loss or cost, under specific constraints: for instance, designing a survey often involves minimizing the cost of estimating a population mean with a given level of confidence. Because of its use of optimization, the mathematical theory of statistics shares concerns with other decision sciences, such as operations research, control theory, and mathematical economics.
Computational mathematics
Computational mathematics proposes and studies methods for solving mathematical problems that are typically too large for human numerical capacity. Numerical analysis studies methods for problems in analysis using functional analysis and approximation theory; numerical analysis broadly includes the study of approximation and discretization, with special concern for rounding errors. Numerical analysis and, more broadly, scientific computing also study non-analytic topics of mathematical science, especially algorithmic matrix and graph theory. Other areas of computational mathematics include computer algebra and symbolic computation.
Crack Growth > Stability
Consider a cracked body with $G \geqslant G_C$: the crack propagates. The question is then whether the crack growth will arrest or continue.
If $G_C$ is constant, as for perfectly brittle materials, then there are two possible behaviors: either $G$ decreases as the crack grows, in which case the crack can stop growing (unless the loading increases); or $G$ increases or remains constant as the crack grows, in which case the growth is unstable and leads to complete failure. If $G_C$ is not constant, i.e. if $G_C$ increases with the crack size, the question to answer is which of $G$ or $G_C$ grows at the higher rate.
As the stability depends on the evolutions of the energy release rate and of the fracture energy, the stability of crack growth depends on the geometry, the loading and the material behavior.
Example: DCB specimen
Remember the case of the composite laminate delamination studied previously. In linear elasticity, the compliance and its derivative with respect to the crack surface $A=at$ read
\begin{equation} \begin{cases}C=\frac{u}{Q} = \frac{8 a^3}{Eth^3}\\ \partial_A C=\frac{1}{t}\partial_a \frac{8 a^3}{Eth^3} =\frac{24 a^2}{Et^2h^3}\end{cases}.\label{eq:CDCB}\end{equation}
Prescribed loading
\begin{equation} G = \frac{Q^2}{2}\partial_A C = \frac{12Q^2a^2}{Et^2h^3}\label{eq:CDCBQ} \end{equation}
As the crack grows, $G$ increases. For a perfectly brittle material, $G_C$ is constant, so the crack keeps propagating: the crack growth is unstable.
Prescribed displacement
\begin{equation} G = \frac{u^2}{2C^2}\partial_A C = \frac{3u^2Eh^3}{16a^4}\label{eq:CDCBu}. \end{equation}
As the crack grows, $G$ decreases. For a perfectly brittle material, $G_C$ is constant, so at some point the crack stops propagating: the crack growth is stable.
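The two DCB behaviors above can be checked numerically. A minimal sketch, using the two energy release rate expressions of the text; all geometry and material values below are hypothetical:

```python
# Sketch: DCB energy release rate vs crack length under the two loading
# conditions discussed in the text. E, t, h, Q, u are hypothetical values.

E = 100e9   # Young's modulus [Pa]
t = 0.02    # specimen width [m]
h = 0.003   # arm half-thickness [m]

def G_prescribed_load(Q, a):
    """G = 12 Q^2 a^2 / (E t^2 h^3): grows with a, so growth is unstable."""
    return 12.0 * Q**2 * a**2 / (E * t**2 * h**3)

def G_prescribed_displacement(u, a):
    """G = 3 u^2 E h^3 / (16 a^4): decays with a, so growth is stable."""
    return 3.0 * u**2 * E * h**3 / (16.0 * a**4)

a1, a2 = 0.05, 0.06  # two crack lengths, a2 > a1

# Under a prescribed load, G increases as the crack grows...
assert G_prescribed_load(100.0, a2) > G_prescribed_load(100.0, a1)
# ...while under a prescribed displacement it decreases.
assert G_prescribed_displacement(1e-3, a2) < G_prescribed_displacement(1e-3, a1)
```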
We have seen that for the DCB problem, a prescribed loading condition can lead to an unstable crack growth, while a prescribed displacement condition leads to a stable crack. But is this a general rule? What happens for hyperstatic structures?
General loading conditions
Practically, structures are hyperstatic. This means that if a crack appears in a component, the compliance of this component increases (its stiffness decreases) and the loading redistributes within the structure. The cracked component is therefore subject neither to a prescribed loading condition nor to a prescribed displacement condition, but to a mixed one. This can be modeled with the compliant machine, see Picture V.7, made of: a structural component with a crack of size $a$, and with a compliance $C\left(a\right)$; a spring of constant compliance $C_M$ modeling the remainder of the structure; a generalized loading $Q(a)$; a generalized displacement at the cracked component $u(a)$; a prescribed displacement $u_T$ at one of the structure's extremities.
The displacements are defined as follows:
\begin{align} u_T & = Q(a) \left( C(a) + C_M \right) \\ u& = C(a) Q(a) = \frac{C(a)}{C(a) + C_M} u_T ,\end{align}
and the related internal energy becomes:
\begin{eqnarray} E_\text{int}&=&\frac{Qu}{2}+\frac{Q(u_T-u)}{2} \nonumber\\&=& \frac{C}{2(C+C_M)^2}u^2_T+\frac{C_M}{2(C+C_M)^2}u^2_T\nonumber\\&=&\frac{u_T^2}{2(C+C_M)}. \end{eqnarray}
From the previous relations, we can deduce the energy release rate:
\begin{equation} G = -\partial_A \left.E_\text{int}\right|_{u_T}= \frac{u_T^2}{2\left(C+C_M\right)^2} \partial_A C .\label{eq:complianceG}\end{equation}
The variation of this energy release rate with respect to the crack surface reads:
\begin{align} & \partial_A G = -\frac{u_T^2}{\left(C+C_M\right)^3} \left(\partial_A C\right)^2 + \frac{u_T^2}{2\left(C+C_M\right)^2} \partial^2_{AA} C \\ \Leftrightarrow & \partial_A G = - \frac{Q^2}{C+C_M} \left(\partial_A C\right)^2 + \frac{Q^2}{2} \partial^2_{AA} C . \label{eq:dAG}\end{align}
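The sign of this derivative governs stability. A minimal sketch evaluating $\partial_A G$ for a DCB-like compliance $C(A)=c_0 A^3$ (so $\partial_A C = 3c_0A^2$ and $\partial^2_{AA} C = 6c_0A$); the coefficient $c_0$ and the numbers below are hypothetical:

```python
# Sketch: sign of dG/dA for the compliant-machine model of the text,
#   dG/dA = -Q^2/(C+C_M) * (dC/dA)^2 + (Q^2/2) * d2C/dA2,
# for a DCB-like compliance C(A) = c0*A^3. All values hypothetical.

c0 = 2.0e3  # compliance coefficient (hypothetical units)

def dG_dA(Q, A, C_M):
    C, dC, d2C = c0 * A**3, 3 * c0 * A**2, 6 * c0 * A
    return -Q**2 / (C + C_M) * dC**2 + Q**2 / 2 * d2C

A, Q = 0.01, 5.0
# Dead load (C_M -> infinity): the negative first term vanishes,
# dG/dA is maximal (positive here, i.e. unstable growth).
assert dG_dA(Q, A, C_M=1e12) > 0
# Fixed grip (C_M -> 0): dG/dA is minimal; for this C(A) it is negative,
# i.e. stable growth for a perfectly brittle material.
assert dG_dA(Q, A, C_M=0.0) < 0
# dG/dA increases monotonically with C_M between the two limits.
assert dG_dA(Q, A, 0.0) < dG_dA(Q, A, 1.0) < dG_dA(Q, A, 1e12)
```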
Dead load case
The dead load case (a constant load over time, such as the structural weight itself) can be modeled with a spring of infinite compliance ($C_M \rightarrow \infty$). As the first term of (\ref{eq:dAG}) is negative, this value $C_M \rightarrow \infty$ is the one for which $\partial_A G$ is maximum (as it cancels the negative first term), with
\begin{equation} \left.\partial_A G\right|_{\text{Dead load}}= \frac{Q^2}{2} \partial^2_{AA} C.\label{eq:dAGDeadLoad} \end{equation}
Mixed case
As the spring stiffens toward a fixed grip, $C_M$ decreases and the negative first term of (\ref{eq:dAG}) gains in importance, so $\partial_A G$ decreases. If $\partial_A G$ becomes negative, the crack becomes stable for a perfectly brittle material ($G_C$ is a constant).
Fixed grip case
The fixed grip case (a constant displacement $u$ over time) can be modeled with a spring of zero compliance ($C_M \rightarrow 0$). In that case $\partial_A G$ given by (\ref{eq:dAG}) is minimum and reads
\begin{equation} \left.\partial_A G\right|_{\text{Fixed grip}}= - \frac{Q^2}{C} \left(\partial_A C\right)^2 + \frac{Q^2}{2} \partial^2_{AA} C = - \frac{u^2}{C^3} \left(\partial_A C\right)^2 + \frac{u^2}{2C^2} \partial^2_{AA} C .\label{eq:dAGFixedGrip}\end{equation}
Note that a fixed grip is always more stable than a dead load as $ \partial_A G$ is smaller.
Resistance curve
Remember that we have already introduced the resistance curve. For a perfectly brittle material, the fracture energy $G_C$ is constant and equal to $2\gamma_s$. For other materials there will be plastic flow at the crack tip, since the stress cannot physically become infinite as predicted by the LEFM approach. This plastic flow arises in the active plastic zone, as illustrated in Picture V.8. Evidently this active plastic zone moves with the crack tip as the crack grows, meaning that the crack is opening in a zone affected by permanent plastic deformations: the plastic wake. This makes the crack more difficult to open, which corresponds to an increase of the apparent fracture energy $G_C$ with the crack propagation $\Delta a$. This is called the resistance, and $G_C$ is replaced by $R_c(a)$. Some materials exhibit a steady state in the resistance curve, as in Picture V.8, others do not. Note that the curve depends not only on the material, but also on the geometry, such as the thickness.
So for non-perfectly brittle materials, a crack can be stable even if $\partial_A G > 0$, as $R_c$ also depends on the crack surface. In 2D, $R_c (\Delta A) = R_c (t \Delta a)$.
Example: delamination of a composite with initial crack $a_0$
On the one hand, for composites, as the crack propagates, fibers in the wake tend to close the crack tip so more energy is required for the crack to grow, and we have a curve $R_c (t \Delta a)$. On the other hand, we have previously evaluated the energy release rate of the DCB problem from the compliance variation (\ref{eq:CDCB}). We can now study the stability of the delamination.
Prescribed loading
We have previously computed the shape of the energy release rate (\ref{eq:CDCBQ}) as being $G = \frac{Q^2}{2}\partial_A C = \frac{12Q^2a^2}{Et^2h^3}$. As the crack is growing, $G$ also increases, see Picture V.9. Let us consider different loading values:
Dead load $Q_1$: as $G=G_C$ for the crack size $a_0$, the crack propagates. For a perfectly brittle material, $G$ remains larger than $G_C$ and the crack is unstable. For composites with a resistance curve, $R_c$ becomes larger than $G$ and the crack is stable; to increase the crack size we need to increase $Q$. However, if $a$ becomes larger than $a^{\star \star}$, the growth turns unstable even for $Q_1$. Dead load $Q_2 > Q_1$: this is the limit of stability for composites (the growth is always unstable for a perfectly brittle material). Dead load $Q_3 > Q_2$: the crack is always unstable.
Prescribed displacement
We have previously computed the shape of the energy release rate (\ref{eq:CDCBu}) as being $G = \frac{u^2}{2C^2}\partial_A C = \frac{3u^2Eh^3}{16a^4}$. As the crack grows, $G$ decreases, see Picture V.10, contrarily to the dead load case. In this case, whatever the value $u$ of the grip and whatever the material, $G$ always becomes smaller than $G_C$ (or $R_c$) for a given crack size, and the crack stops. This is a stable configuration.
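The arrest length under prescribed displacement can be obtained in closed form by solving $G(a) = G_C$. A minimal sketch for a perfectly brittle material, with hypothetical values of $E$, $h$, $u$ and $G_C$:

```python
# Sketch: crack-arrest length for the DCB under prescribed displacement,
# solving G(a) = 3 u^2 E h^3 / (16 a^4) = G_C for a. Values hypothetical.

E, h = 100e9, 0.003   # Young's modulus [Pa], arm half-thickness [m]
G_C = 200.0           # fracture energy [J/m^2] (perfectly brittle)
u = 2e-3              # prescribed opening [m]

# Invert G(a) = G_C analytically: a_arrest = (3 u^2 E h^3 / (16 G_C))^(1/4)
a_arrest = (3 * u**2 * E * h**3 / (16 * G_C)) ** 0.25

def G(a):
    return 3 * u**2 * E * h**3 / (16 * a**4)

# Just below the arrest length the crack still grows; just above, it stops.
assert G(0.99 * a_arrest) > G_C > G(1.01 * a_arrest)
```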
Summary
The crack growth criterion is $G \geq G_C$. The crack growth is stable if, in 2D, $\partial_a G \leq \partial_a R_c$, and unstable otherwise.
In this part we have considered mode I failure. The case of mixed mode is considered in the next page. |
For over 15 years, some people, particularly philosophers, have continued to be confused by the so-called “Sleeping Beauty Problem.” This is a rather straightforward exercise in conditional probability that should be accessible to a student in an undergraduate course on probability and statistics. Nevertheless, there are people who have managed to arrive at the wrong answer to this problem.
The Problem
The Sleeping Beauty Problem is usually described as follows:
Beauty is going to be the subject of an experiment that will take place over three days. On Sunday, Beauty will be told the plan for the experiment, and then given a drug and put to sleep. The drug will cause her to sleep until Wednesday, but the experimenters plan to wake her up some number of times. To decide how many times to wake her up, they toss a coin. If the coin lands heads, they will wake her on Monday briefly, and then return her to sleep. If the coin lands tails, they will instead wake her briefly on both Monday and Tuesday. After they put her back to sleep on Monday, they will erase her memory, so that upon waking on Tuesday, her last memories will be having gone to sleep on Sunday. The circumstances of her wakings are identical, so she cannot tell from her environment what day it is or which way the coin landed. What should Beauty think about the probability of heads and tails upon awakening?
Alternatively, the question that is often asked is, “What is the probability that the coin came up heads?”
The Wrong Answer
A popular, incorrect answer to this problem is that Beauty should believe that, because she woke up, the fair coin is twice as likely to have come up tails as to come up heads. Since this results in a probability of heads of 1/3, the people who argue this point of view are often called “thirders.”
In the most naive argument of this type that is put forward, the proponent points out that Beauty can wake in one of three states:
(1) The coin toss was heads and it’s Monday
(2) The coin toss was tails and it’s Monday
(3) The coin toss was tails and it’s Tuesday
The proponent then (implicitly or explicitly) assumes that all three states are equally likely to occur—i.e., each occurs with the same probability as the others. Since the coin came up heads in only one of the three states, he or she concludes that the probability of heads is 1/3.
More complicated, but equally wrong, arguments are addressed below.
The Right Answer
Here we will not only provide the correct answer to the problem (which is trivial) but also rigorously explore the estimates of probabilities that Beauty should assign to other parts of the experiment based on what information she has available.
In the interest of conserving space, let’s introduce the following notation for conditions of the experiment:\[
\begin{aligned} H &{}= \text{The coin came up heads}\\ T &{}= \text{The coin came up tails}\\ D_1 &{}= \text{It is the first day (Monday)}\\ D_2 &{}= \text{It is the second day (Tuesday)} \end{aligned} \]The notation used above for Monday and Tuesday readily generalizes to the scenario proposed by Nick Bostrom, whereby Beauty wakes for many, many (e.g., a million) days if the coin comes up tails (more below).
Expression of the Problem as Probabilities
The assumption that the coin is fair implies\[
\begin{aligned} P(H) &{}= 1/2\\ P(T) &{}= 1/2 \end{aligned} \]If the coin comes up heads, Beauty will be woken on Monday, but not on Tuesday. This means\[ \begin{aligned} P(D_1|H) &{}= 1\\ P(D_2|H) &{}= 0 \end{aligned} \]If the coin comes up tails, Beauty will be woken on both Monday and Tuesday. She does not know which day it is when she wakes, but she knows that if tails shows up, she will be woken on both days, so in this situation the frequency of Monday and Tuesday awakenings will be the same. Therefore, they have an equal probability:\[ \begin{aligned} P(D_1|T) &{}= 1/2\\ P(D_2|T) &{}= 1/2 \end{aligned} \]
From that, the grid of all four possibilities falls out:\[
\begin{alignedat}{2} P(H \cap D_1) &{}= P(H) \cdot P(D_1|H) & & {}= 1/2\\ P(H \cap D_2) &{}= P(H) \cdot P(D_2|H) & & {}= 0\\ P(T \cap D_1) &{}= P(T) \cdot P(D_1|T) & & {}= 1/4\\ P(T \cap D_2) &{}= P(T) \cdot P(D_2|T) & & {}= 1/4 \end{alignedat}\]
On Waking Up
Let’s get one thing out of the way: Beauty is always going to wake up:\[ P(\text{wake}) = P(\text{wake}|H) = P(\text{wake}|T) = P(\text{wake}|D_1) = P(\text{wake}|D_2) = 1 \]So the fact that she wakes and is “in the moment” doesn’t give her or us any more information than she started with. Since Beauty has not been provided with any new information, the probability of the coin coming up heads remains \(P(H) = 1/2\), which is the right answer to the problem.
Additional Probabilities
With that out of the way, we can use the information above to determine the probability of Beauty waking on a Monday,\[ P(D_1) = P(H) \cdot P(D_1|H) + P(T) \cdot P(D_1|T) = 3/4 \]and the probability of it being a Tuesday,\[ P(D_2) = P(H) \cdot P(D_2|H) + P(T) \cdot P(D_2|T) = 1/4 \]If Beauty is informed that she has risen on a Monday, then she can reevaluate the probability of the coin landing on heads or tails by applying Bayes’s Theorem:\[ \begin{alignedat}{2} P(H|D_1) &{}= \frac{P(D_1|H) \cdot P(H)}{P(D_1)} & &{}= 2/3\\ P(T|D_1) &{}= \frac{P(D_1|T) \cdot P(T)}{P(D_1)} & &{}= 1/3 \end{alignedat} \]Similarly, if she is told that it is a Tuesday, then she knows for certain that the coin toss was tails:\[ \begin{alignedat}{2} P(H|D_2) &{}= \frac{P(D_2|H) \cdot P(H)}{P(D_2)} & &{}= 0\\ P(T|D_2) &{}= \frac{P(D_2|T) \cdot P(T)}{P(D_2)} & &{}= 1 \end{alignedat} \]
Common Mistakes
Although the flaws in logic employed by the “thirders” encompass a wide range of fallacies, a couple of mistakes appear quite frequently. Here we’ll examine why they are wrong.
What Happens on Monday
The mistake that most “thirders,” including the author of the first published article on the problem, make is to reason that, since she must wake up on Monday regardless of the coin flip, we must consider her situation at that time. The usual explanation given is that the experimenters might not have flipped the coin until after Beauty had been woken on Monday, because the coin doesn’t affect that part of the experiment. The coin toss determines only whether Beauty has her memory erased and is put back to bed. Nothing in the experiment requires that the coin be flipped before Beauty is woken and asked about the result of the coin toss.
They then argue that it is ridiculous, in the situation where Beauty is asked on Monday about the probability of heads on a coin toss that has not yet happened, for her to say anything but 1/2. In one sense, they have a valid point. If Beauty is woken and told that it is Monday and that the coin toss hasn’t yet occurred, then she should reasonably conclude that the probability of the (future) coin toss coming up heads is 1/2. However, this is not the question that is being asked in the problem.
By focusing on the result of a coin toss on Monday, regardless of whether the coin has actually been tossed or not, the “thirders” have eliminated the possibility that the coin toss could have resulted in Beauty waking on Tuesday. In their line of reasoning, it’s simply not possible. Therefore, the coin toss that they are considering is not the coin toss that Beauty is asked about in the problem.
Following up on this mistake by applying it to probability calculations, they then (erroneously) reason that\[
\begin{aligned} P(H|D_1) &{}= 1/2 \\ P(T|D_1) &{}= 1/2 \end{aligned}\rlap{\qquad\text{(wrong)}} \]and work backward through Bayes’s Theorem to conclude the wrong answer:\[ P(H) = 1/3 \rlap{\qquad\text{(wrong)}} \]
The source of their error is that they have changed the definition of the problem to a different problem in which the day is always Monday:\[
\begin{aligned} P(D_1) &{}= 1 \\ P(D_2) &{}= 0 \end{aligned} \]Naturally, the coin toss can have no effect on this:\[ \begin{alignedat}{2} P(D_1|H) &{}= P(D_1|T) & &{}= 1 \\ P(D_2|H) &{}= P(D_2|T) & &{}= 0 \end{alignedat} \]It is important to note that the claim made above about the probability of heads and tails on Monday, which is wrong in the original problem, is correct in this new, different problem,\[ \begin{alignedat}{2} P(H|D_1) &{}= \frac{P(D_1|H) \cdot P(H)}{P(D_1)} & &{}= 1/2\\ P(T|D_1) &{}= \frac{P(D_1|T) \cdot P(T)}{P(D_1)} & &{}= 1/2 \end{alignedat} \]but only if \(P(H) = P(T) = 1/2\), so even their modified problem doesn’t demonstrate their claim that the probability of heads is 1/3. When the assumptions are explicitly stated and the math is done correctly, it merely reinforces the proof that the probability is 1/2.
You Wanna Bet?
The other mistake that is commonly encountered in “thirder” reasoning is to treat the Sleeping Beauty problem as a betting proposition. There are several variants as to how the wager is presented (whether only one bet or multiple bets are considered, whether odds are given on the bet, the amount of the bet, etc.), but they all suffer from the same critical error. There is even confusion over what constitutes a “bet.” While it is generally agreed that Beauty bets on the result of the toss of the coin every time she wakes up, some people count the number of bets as the number of times the coin is tossed, \(n_{\text{t}}\), which can be controlled, since it’s the number of times Beauty goes to sleep on Sunday. Meanwhile, other people consider every waking to be a separate bet, even though this number is randomly determined by the coin tosses. The best that can be said is that the number of times Beauty wakes, \(N_{\text{w}}\), is expected to be\[ E(N_{\text{w}}) = n_{\text{t}} P(H) + 2 n_{\text{t}} P(T) = 3 n_{\text{t}} / 2 \]That is, Beauty can be expected to wake about 50% more times than the number of times that she was put to sleep on Sunday, but the exact number in each series of experiments depends on the outcomes of the coin tosses.
To avoid this ambiguity, let’s define “toss” to be every time the coin is tossed (and Beauty is put to sleep on Sunday) and “bet” as every time she has a chance to win or lose money (which is every time that Beauty wakes and is interviewed). Beauty has two options. She either can decide what she will bet when she wakes up each morning, or she can decide what to bet when she goes to sleep on Sunday and consistently place the same bet each time she wakes. Either way, her expected winnings will be the same. She can decide always to bet heads, always to bet tails, or to bet heads randomly at a specified fraction of the time \(f_{\text{H}}\).
Her expected winnings are\[ E(W) = f_{\text{H}}\cdot\bigl(P(H) - 2P(T)\bigr) + (1 - f_{\text{H}})\cdot\bigl(2P(T) - P(H)\bigr) \]which becomes\[ E(W) = (2f_{\text{H}} - 1)\cdot\bigl(P(H) - 2P(T)\bigr) \]So if Beauty always bets heads (\(f_{\text{H}} = 1\)),\[ E(W) = -1/2 \] and if Beauty always bets tails (\(f_{\text{H}} = 0\)),\[ E(W) = 1/2 \]If she bets heads only part of the time (\(0 < f_{\text{H}} < 1\)), then the expected winnings vary linearly with \(f_{\text{H}}\) between these two extremes.
So obviously, Beauty’s best strategy to maximize her winnings is always to bet tails. For every two dollars she offers to wager, she is expected to win one, which leads the “thirders” to mistakenly conclude that the odds of the coin coming up heads are 1 to 2, or that heads has a probability of \(P(H) = 1/3\).
However, this is not the case. If she seals in her bet (or bets, in the case of tails) on Sunday by always betting either heads or tails, she is making a wager with equally likely results on the toss, but an uneven payoff on the result of the toss. The payoff table looks like the following:
              Bet Heads   Bet Tails
Result Heads  Win 1       Lose 1
Result Tails  Lose 2      Win 2
The “thirders” have confused betting odds with statistical odds. They are not the same thing. Therefore, a wise Beauty can make easy money by always betting tails and taking the two-to-one payoff, but she should not delude herself that the probabilities of the coin toss are anything but even.
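The distinction between betting odds and statistical odds is easy to check by simulation. A minimal Monte Carlo sketch, staking one dollar at even payoff at every waking (one bet on heads, two on tails), matching the payoff table above:

```python
# Monte Carlo sketch of the betting analysis: one dollar staked at even
# payoff per waking. Always betting tails wins on average even though
# the coin itself is fair.
import random

random.seed(0)
n_tosses = 100_000
heads_count = 0
winnings_tails = 0  # strategy: always bet tails
winnings_heads = 0  # strategy: always bet heads

for _ in range(n_tosses):
    if random.random() < 0.5:   # heads: one waking, one bet
        heads_count += 1
        winnings_tails -= 1
        winnings_heads += 1
    else:                       # tails: two wakings, two bets
        winnings_tails += 2
        winnings_heads -= 2

# The coin is fair: heads comes up about half the time...
assert abs(heads_count / n_tosses - 0.5) < 0.01
# ...but the expected winnings per toss are +1/2 for always-tails and
# -1/2 for always-heads, because tails is paid out twice.
assert abs(winnings_tails / n_tosses - 0.5) < 0.02
assert abs(winnings_heads / n_tosses + 0.5) < 0.02
```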
The Groundhog Day Fallacy
Philosopher Nick Bostrom has proposed a scenario, possibly inspired by the movie Groundhog Day, in which Beauty, instead of being woken only on Monday and Tuesday when the coin comes up tails, is woken many, many times, perhaps as many as a million times. (Never mind that a person who lives to be 100 years old will have fewer than 37,000 days on which to wake up. This is a philosopher talking.)
The idea behind this argument is that the set of possible days on which Beauty can wake up is overwhelmed by all of these additional days, making it “more absurd” for someone to argue that the one day with heads out of all of these possible days should be so frequent. This is nonsense, of course.
The coin will come up heads 1/2 of the time, and the experiment will end on Monday every time this happens. The other half of the time, Beauty will be woken on one of the days, each of which is no more likely than the rest without additional information. Thus, if there are \(n_{\text{T}}\) such days, then\[
P(D_i|T) = 1/n_{\text{T}}, \quad i = 1, 2, \ldots, n_{\text{T}} \] So the problem generalizes as follows\[ \begin{alignedat}{2} P(D_1|H) &{}= 1\\ P(D_i|H) &{}= 0, \quad & i &{}= 2, 3, \dots, n_{\text{T}} \\ P(D_i|T) &{}= 1/n_{\text{T}}, \quad & i &{}= 1, 2, \dots, n_{\text{T}} \\ \end{alignedat} \]and\[ \begin{alignedat}{3} P(H \cap D_1) &{}= P(H) \cdot P(D_1|H) & & {}= 1/2 & & \\ P(H \cap D_i) &{}= P(H) \cdot P(D_i|H) & & {}= 0, & \quad i &{}= 2, 3, \dots, n_{\text{T}} \\ P(T \cap D_i) &{}= P(T) \cdot P(D_i|T) & & {}= \frac{1}{2n_{\text{T}}}, & \quad i &{}= 1, 2, \dots, n_{\text{T}} \end{alignedat}\]
There is nothing in this more general problem to imply anything other than \(P(H) = 1/2\).
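The generalized bookkeeping can be verified mechanically in exact rational arithmetic; a small sketch (the values of \(n_{\text{T}}\) tested are arbitrary):

```python
# Sketch: the generalized (many-wakings) version. Summing the joint
# probabilities over the disjoint (coin, day) states gives P(H) = 1/2
# for any number of tails-wakings n_T.
from fractions import Fraction as F

def marginal_heads(n_T):
    P_H = P_T = F(1, 2)
    # P(D_i|H) = 1 for i=1, 0 otherwise; P(D_i|T) = 1/n_T for all i.
    joint_heads = [P_H * (F(1) if i == 1 else F(0)) for i in range(1, n_T + 1)]
    joint_tails = [P_T * F(1, n_T) for _ in range(1, n_T + 1)]
    # The states are exhaustive: all joint probabilities sum to 1.
    assert sum(joint_heads) + sum(joint_tails) == 1
    return sum(joint_heads)

assert all(marginal_heads(n) == F(1, 2) for n in (2, 10, 10_000))
```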
The Punters
Finally, there are a few people who claim that the framing of the problem is too ambiguous to evaluate the probability of the coin landing on heads. I don’t quite understand how they have confused themselves enough to reach this conclusion, but they are clearly wrong, as has been demonstrated above.
Conclusion
The correct answer to the Sleeping Beauty Problem is that Beauty has received no additional information upon waking. Therefore the probability that the coin has or will come up heads is 1/2, which is exactly the same as it was when she went to sleep on Sunday. A fair coin is, after all, a fair coin. |
Crack Growth > Fatigue Failure
While subjected to a cyclic loading, see Picture V.28, a structure can be weakened by the nucleation and propagation of a crack, which can initiate and grow for loadings such that $\sigma < \sigma_p^0$ and $K_I < K_{IC}$. Let us consider a specimen under cyclic loading as described in Picture I.71. The value of $P$ ranges from $P_\text{min}$ to $P_\text{max}$, so that $\Delta P = P_\text{max} - P_\text{min}$. Due to the cyclic loading, a crack nucleates at the stress concentration and propagates until failure of the specimen, although at the beginning the SIF remains lower than $K_{IC}$, see Picture V.28.
Phenomenological explanations
Microscopic observations of structures which failed by fatigue show that: cracks are initiated at stress concentration points; microscopic crack growth occurs along the crystal slip planes (Stage I); macroscopic crack growth occurs with surface striations, see Picture V.29, dark zone (Stage II); failure of the structure occurs when the crack reaches a critical size, see Picture V.29, bright zone (Stage III).
Prediction of the structural life
As discussed in the overview, with the example of the Comet, see Picture V.30, a safe life approach is unable to predict the life of a structure under cyclic loading when some defects initially exist. A new theory was developed for the purpose of predicting the crack growth and the end of life of a given structure subjected to a cyclic loading. This theory, known as the damage tolerant design, assumes that defects exist in the structure at the end of their manufacturing (or from the start of their service), or because of accidents.
The question is now: how can we predict the evolution of the crack size, and the failure point, in terms of the number of cycles? In other words, how can we predict the curves of Picture V.28?
As the crack loading remains small during fatigue problems, the SSY assumption usually holds during some intervals of the crack growth, or even until failure, depending on the case. As, at the macro-scale, the life of a structure with an initial crack size $a$ is observed to depend only on the loading range $\Delta P$ and on the ratio $\frac{P_\text{min}}{P_\text{max}}$, the SSY assumption allows us to say that the conditioning parameters are $\Delta K$ and the loading ratio $R = \frac{K_\text{min}}{K_\text{max}}$. Indeed, the SIF expressions show that $\Delta K$ is linear in $\Delta P$. Therefore the evolution of the crack size obeys
\begin{equation} \frac{d a}{d N_f} = f\left(\Delta K, R\right). \label{eq:dadN}\end{equation}
With the knowledge of this curve, it is possible to determine the number of cycles a structure can sustain before being replaced and to schedule the inspection intervals. Note that the uncertainty on this curve can reach several (tens of) percent; regular inspections are thus required to validate or correct the prediction.
What remains to be defined now is the form of $f\left(\Delta K, R\right)$ in (\ref{eq:dadN}). Experimentally it has been found that this function follows the curve shown in Picture V.31.
Crack propagation in Stage I
Experimentally, it has been observed that if $\Delta K < \Delta K_\text{th}$, the crack growth is on average smaller than one atomic spacing per cycle. Such a crack is considered dormant. The value $\Delta K_\text{th}$ is called the fatigue threshold and depends on the material, but also on the loading ratio $R$.
If $\Delta K > \Delta K_\text{th}$, the crack will propagate until reaching Stage II.
For steel, $\Delta K_\text{th}$ is between 2 and 5 $\text{MPa}\cdot\sqrt{\text{m}}$, but for steel in sea water, $\Delta K_\text{th}$ is between 1 and 1.5 $\text{MPa}\cdot\sqrt{\text{m}}$. This shows that the environment in which the material operates is very important.
Crack propagation in Stage II
In Stage II, the crack growth rate per cycle is experimentally found to follow a power law of $\Delta K$, the Paris law:
\begin{equation} \frac{d a}{d N_f} = C \Delta K ^m.\label{eq:Paris} \end{equation}
This law is characterized by two parameters $C$ and $m$, which depend on the material and on the loading ratio $R$. For steel, $C \approx 0.1\times 10^{-11}$ $\text{m}\cdot\left(\text{MPa}\cdot\sqrt{\text{m}}\right)^{-m}$ and $m \approx 4$. For steel in sea water, these values change considerably: $C \approx 1.6 \times 10^{-11}$ $\text{m}\cdot\left(\text{MPa}\cdot\sqrt{\text{m}}\right)^{-m}$ and $m \approx 3.3$. This again shows that the environment in which the material operates is very important. Note that the units of $C$ are cumbersome, as they depend on the exponent $m$.
When applying the Paris law, it is important to remember that $K$ depends on the crack size. Thus $\Delta K$ should be replaced by its complete expression when integrating to get $a(N_f)$. For instance, for Mode I in an infinite plate, we have:
\begin{equation} K_I = \sigma_\infty \sqrt{\pi a}, \end{equation}
and $\Delta K$ is given by:
\begin{equation} \Delta K = \left(\sigma_{\infty,\text{max}}-\sigma_{\infty,\text{min}}\right) \sqrt{\pi a}. \end{equation}
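Substituting this $\Delta K$ into the Paris law and integrating gives the number of cycles to grow the crack from $a_0$ to $a_f$ in closed form (for $m \neq 2$). A minimal sketch, using the steel constants quoted above; the stress range and crack sizes are hypothetical:

```python
# Sketch: integrating the Paris law da/dN = C * (d_sigma*sqrt(pi*a))^m
# from a0 to af for a crack in an infinite plate. For m != 2:
#   N = (af^(1-m/2) - a0^(1-m/2)) / (C * (d_sigma*sqrt(pi))^m * (1-m/2))
# C and m are the steel values of the text; the rest is hypothetical.
import math

C = 0.1e-11      # Paris coefficient [m*(MPa*sqrt(m))^-m], steel
m = 4.0          # Paris exponent, steel
d_sigma = 100.0  # stress range [MPa] (hypothetical)
a0, af = 1e-3, 1e-2  # initial and critical crack sizes [m] (hypothetical)

def cycles_to_failure(a0, af, d_sigma, C, m):
    k = C * (d_sigma * math.sqrt(math.pi)) ** m
    p = 1.0 - m / 2.0
    return (af**p - a0**p) / (k * p)

N = cycles_to_failure(a0, af, d_sigma, C, m)
assert N > 0
# Most of the fatigue life is spent while the crack is small: halving a0
# adds far more cycles than doubling af.
assert (cycles_to_failure(a0 / 2, af, d_sigma, C, m) - N
        > cycles_to_failure(a0, 2 * af, d_sigma, C, m) - N)
```

Note the strong sensitivity to the initial flaw size, which is why inspection intervals are scheduled from the smallest detectable crack rather than from the critical one.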
Crack propagation in Stage III
In this zone the crack grows rapidly until failure of the structure. The failure is reached as soon as the crack size reaches $a_f$ such that $K(P_\text{max},\,a_f)$ is equal to $K_c$.
Typical material parameters
Material | $\Delta K_{\text{th}}$ [MPa$\cdot\sqrt{\text{m}}$] | $m$ [-] | $C$ [m$\cdot$(MPa$\cdot\sqrt{\text{m}}$)$^{-m}$]
Mild steel | 3.2-6.6 | 3.3 | $0.24 \cdot 10^{-11}$
Structural steel | 2.0-5.0 | 3.85-4.2 | $0.07$-$0.11 \cdot 10^{-11}$
Structural steel in sea water | 1.0-1.5 | 3.3 | $1.6 \cdot 10^{-11}$
Aluminum | 1.0-2.0 | 2.9 | $4.56 \cdot 10^{-11}$
Aluminum alloy | 1.0-2.0 | 2.6-2.9 | $3$-$19 \cdot 10^{-11}$
Copper | 1.8-2.8 | 3.9 | $0.34 \cdot 10^{-11}$
Titanium alloy (6Al-4V, R=0.1) | 2.0-3.0 | 3.22 | $1 \cdot 10^{-11}$

Effects of $R$: Crack closure
As explained, the loading ratio $R$ has an effect on the fatigue crack propagation. During the loading phases at maximum stress:

- There is an active plastic zone (phase transformation can happen);
- The crack opening allows fluid or products to enter.
Because of these effects, if $R$ is low (< 0.7) or negative, the crack lips can come into contact at low stress values due to:

- Plasticity: the residual plastic strain resulting from the plastic wake closes the crack, see Picture V.32;
- Roughness: the roughness of the crack lips prevents sliding in Mode II, see Picture V.33;
- Corrosion: the corrosion products fill the crack and prevent it from closing, see Picture V.34;
- Viscous fluid: lubricant fluids fill the crack and prevent it from closing, see Picture V.35;
- Phase transformation: the change of volume at the crack tip puts the material into compression, see Picture V.36.

Effect of the crack closure on fatigue
Because of the crack closure effects described above, when the loading decreases, local compressive effects appear and parts of the crack are kept open. The SIFs at the minima of the cycles, defined in Picture V.37, are higher than expected, as shown in Picture V.38. Hence the effective $\Delta K$ is actually reduced, and the crack closure effect is therefore beneficial to the structure's life.
Effect of $R$ on the threshold in zone I
Due to Plasticity Induced Crack Closure (PICC), $\Delta K_\text{th}$ decreases when $R$ increases. Low values of $R$ are thus beneficial for crack initiation. The ratio between thresholds can reach values of 5.
Effect of $R$ on the crack growth rate
Due to the crack closure, the life of the structure is improved for low $R$, as depicted in Picture V.31. An example for the 2024-T3 aluminum alloy can be found in J.C. Newman Jr, E.P. Phillips, M.H. Swain, "Fatigue-life prediction methodology using small-crack theory", International Journal of Fatigue 21, 1999.
In order to unify the Paris law for different loading ratios $R$, models exist to evaluate the effective SIF $\Delta K_\text{eff}$. For example, the model of Elber & Schijve for Al 2024-T3 suggests that $\Delta K_\text{eff} = (0.55 + 0.33 R + 0.12 R^2)\Delta K$ for $-1 < R < 0.54$. The use of such models is delicate: they can be inaccurate outside the circumstances for which they were fitted (loading, environment, …), and they are material dependent!
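The Elber & Schijve correction above is straightforward to evaluate; a small sketch (plain Python, using the coefficients quoted above for Al 2024-T3; the $\Delta K = 10$ MPa$\cdot\sqrt{\text{m}}$ value is an arbitrary illustration):

```python
def delta_k_eff(delta_k, R):
    """Elber & Schijve effective SIF range for Al 2024-T3, fitted for -1 < R < 0.54."""
    if not -1.0 < R < 0.54:
        raise ValueError("correlation only fitted for -1 < R < 0.54")
    U = 0.55 + 0.33 * R + 0.12 * R**2   # closure factor U(R)
    return U * delta_k

# Closure reduces the effective range more at low/negative R:
for R in (-0.5, 0.0, 0.5):
    print(R, delta_k_eff(10.0, R))
```

The guard clause reflects the warning above: outside the fitted range of $R$ the correlation should not be trusted.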
Overload effect
During a structure's operation, the loading is never as regular as depicted in Picture V.37. What happens if there are a few (or a moderate number of) overloads?
Let us consider the cyclic loading with an overload as illustrated in Picture V.39:
- Before the overload, the crack propagates with its plastic wake;
- During the overload, the active plastic zone is larger than for the other cycles;
- The plastic wake is temporarily increased (Phase 1) for the coming cycles, until the active plastic zone at the crack tip passes the plastic zone created by the overload (Phase 2);
- During Phase 1, $\Delta K_\text{eff}$ is reduced due to the PICC and the crack propagates more slowly: there is a retardation effect in the crack propagation rate, as illustrated in Picture V.40;
- Once the crack tip has passed the modified wake in Phase 2, $\Delta K_\text{eff}$ is as expected and the initial propagation rate is recovered, as illustrated in Picture V.40.

However, too frequent overloads are damaging, as they actually correspond to increasing $K_\text{max}$.
Note that in 1952, the De Havilland Comet I fuselage was tested against fatigue following this scheme:
- Static loading at 1.12 atm;
- 10 000 cycles at 0.7 atm (larger than the cabin pressurization of 0.58 atm);

but the production fuselages, which did not undergo the initial static loading, failed after a few thousand cycles at 0.58 atm. The reason is that the production fuselages did not benefit from the PICC induced in the tested fuselage by the static loading.
The last reason for a crack to propagate is corrosion.
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site (to which, I hope, my university has a subscription) that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag-wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences). |
Quadratic Lagrange Interpolating Polynomials Examples 1
Recall from the Quadratic Lagrange Interpolating Polynomials page that given three points $(x_0, y_0)$, $(x_1, y_1)$, and $(x_2, y_2)$ where $x_0$, $x_1$, and $x_2$ are distinct numbers, then we can construct a quadratic interpolating polynomial $P_2$ of degree less than or equal to $2$ that interpolates these points where:(1)

\begin{align} P_2(x) = y_0 L_0(x) + y_1 L_1(x) + y_2 L_2(x) \end{align}

The functions $L_0$, $L_1$, and $L_2$ are given by the following formulas:(2)

\begin{align} L_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}, \quad L_1(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}, \quad L_2(x) = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} \end{align}
We will now look at some examples of constructing quadratic Lagrange interpolating polynomials.
Example 1 Find the quadratic Lagrange interpolating polynomial $P_2$ that interpolates the function $y = \tan x$ at the points $(0, 0)$, $\left ( \frac{\pi}{4}, 1 \right )$, and $(1, \tan(1))$.
Applying the formulas above, we have that:(5)

\begin{align} P_2(x) = 0 \cdot L_0(x) + 1 \cdot \frac{x(x - 1)}{\frac{\pi}{4}\left(\frac{\pi}{4} - 1\right)} + \tan(1) \cdot \frac{x\left(x - \frac{\pi}{4}\right)}{1 - \frac{\pi}{4}} \end{align}
Example 2 Find the quadratic Lagrange interpolating polynomial $P_2$ that interpolates the function $y = e^x$ at the points $(0, 1)$, $(1, e)$, and $(2, e^2)$.
Applying the formulas above, we have that:(6)

\begin{align} P_2(x) = 1 \cdot \frac{(x - 1)(x - 2)}{(0 - 1)(0 - 2)} + e \cdot \frac{x(x - 2)}{(1)(1 - 2)} + e^2 \cdot \frac{x(x - 1)}{(2)(2 - 1)} = \frac{(x-1)(x-2)}{2} - e \, x(x - 2) + \frac{e^2}{2} x(x-1) \end{align}
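The construction is easy to check numerically. A small sketch (plain Python; the helper name `lagrange_quadratic` and the evaluation point are our own choices), reproducing Example 2:

```python
import math

def lagrange_quadratic(pts, x):
    """Evaluate the quadratic Lagrange interpolating polynomial through
    three points pts = [(x0, y0), (x1, y1), (x2, y2)] at the point x."""
    (x0, y0), (x1, y1), (x2, y2) = pts
    L0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    L1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    L2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * L0 + y1 * L1 + y2 * L2

# Example 2: interpolate y = e^x at x = 0, 1, 2, then evaluate off the nodes
pts = [(0.0, 1.0), (1.0, math.e), (2.0, math.e**2)]
print(lagrange_quadratic(pts, 1.5))   # approximates e^1.5
```

At the three nodes the polynomial reproduces the data exactly; between them it only approximates $e^x$.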
First of all, there is a projection $p \in M$ of finite trace such that $N \simeq pMp$, and its equivalence class depends only on $tr(p)$. Because $G$ is countable and $M$ a ${\rm II}_{\infty}$ factor, the measure $\mu$ must be infinite, so we can choose a subspace $Y \subset X$ with $\mu(Y)=tr(p)$, and assume that $$N = 1_Y (A \rtimes_{\alpha} G) 1_Y,$$ with $1_Y$ the indicator function of $Y$.
Let's assume$^1$ that $A$ is a Cartan subalgebra of $M$.
By the work of Feldman and Moore (Ergodic equivalence relations, cohomology, and von Neumann algebras), a von Neumann algebra has a Cartan subalgebra$^2$ iff it is of the form $M(R,s)$, i.e. generated by an equivalence relation $R$ (with all equivalence classes countable) on a standard Borel space $X$ (up to a cocycle twist $s$). The Cartan subalgebra is then the "diagonal", i.e. $L^{\infty}(X)$.
Now let $Y$ be a subspace of $X$; then $1_Y M(R,s) 1_Y = M(R_{|Y},s)$, so it has a Cartan subalgebra equal to the diagonal $L^{\infty}(Y)$.
Warning: The compression $pMp$ is not necessarily a group measure space construction, because even if the equivalence relation $R_{|Y}$ is always of the form $R_H$, the action of $H$ on $Y$ is not necessarily free, whereas freeness is used to show that $M(R_H,1)$ is a group measure space construction. Moreover, the restriction $R_{|Y}$ of an ergodic$^3$ equivalence relation $R$ is not necessarily ergodic. Now, $pMp$ is a factor, so if $R_{|Y} = R_H$ with $H$ acting freely then $R_{|Y}$ must be ergodic.
Acknowledgment: Thanks to Jesse Peterson for his help.
$^1$it is true in the ${\rm II}_{1}$ case, but I did not check if it is always true in the ${\rm II}_{\infty}$ case.
$^2$the general case requires the existence of a faithful normal conditional expectation $E: M \to A$.
$^3$every $R$-saturated Borel set has measure $0$ or $1$.
Advances in Differential Equations, Volume 9, Number 11-12 (2004), 1339-1368.

Existence results for some quasilinear elliptic equations involving critical Sobolev exponents

Abstract
In this paper we study the existence of solutions to zero-Dirichlet-boundary-value problems for the quasilinear elliptic equation ${\rm (QE)_c}$ $- \Delta_p u - p \nabla \theta(x) \cdot \nabla u |\nabla u|^{p-2} = \lambda a(x) |u|^{p-2}u + K(x)|u|^{p^*-2}u$ in an unbounded domain $\Omega \subset {\bf R}^N$ with smooth boundary $\partial \Omega$. By using Brézis-Nirenberg's results, we prove that ${\rm (QE)_c}$ admits at least one nontrivial weak solution for positive $\lambda$ in a suitable interval.
Article information
Source: Adv. Differential Equations, Volume 9, Number 11-12 (2004), 1339-1368.
Dates: First available in Project Euclid: 18 December 2012
Permanent link to this document: https://projecteuclid.org/euclid.ade/1355867905
Mathematical Reviews number (MathSciNet): MR2099559
Zentralblatt MATH identifier: 05054510
Subjects: Primary: 35J60 (Nonlinear elliptic equations); Secondary: 35B33 (Critical exponents), 35D05, 35J20 (Variational methods for second-order elliptic equations), 47J30 (Variational methods)
Citation
Ohya, Hirokazu. Existence results for some quasilinear elliptic equations involving critical Sobolev exponents. Adv. Differential Equations 9 (2004), no. 11-12, 1339--1368. https://projecteuclid.org/euclid.ade/1355867905 |
There are call and put options on the same underlying asset, with the same expiry, $T$, and with strikes $K_c=(k_c^1, k_c^2, \ldots, k_c^m)$ and $K_p=(k_p^1, k_p^2, \ldots, k_p^m)$; $S_t$ is the price of the underlying asset at calendar time $t$, $0\leq t \leq T$, and $\hat{S}_T$ is the forecast price of the underlying asset at time $T$.
Suppose you would like to construct a negative cost portfolio at the start time $t=0$ by combining long and short positions in put and call European-style contracts based on the same underlying asset with different strikes.
Let $X_c=(x_1^c, x_2^c, \ldots, x_m^c)$, $X_p=(x_1^p, x_2^p, \ldots, x_m^p)$ be the number of unit of call and put options, with $x_i^c, x_i^p>0$ for buying, $x_i^c, x_i^p<0$ for selling. Denote by $C=(c_1, c_2, \ldots, c_m)$ and $P=(p_1, p_2, \ldots, p_m)$ the market prices for buying and selling of call and put options respectively.
Then the cost of the portfolio, $M$, at the time of purchase, $t=0$, is: $$M=\sum_{i=1}^{m} \left( x_i^c \cdot c_i+ x_i^p \cdot p_i \right).$$
For the negative cost portfolio, $M$ should be negative, i.e. $M<0$.
Denote by $V$ the payoff function $$V(\hat{S}_T, K_c, K_p, T, X)=\sum_{i=1}^{m} \left[ x_i^c (\hat{S}_T-k_c^i)^+ + x_i^p (k_p^i - \hat{S}_T)^+ \right] - M.$$
How can one show that the payoff $V(T)$ at time $T$ is non-negative ($V \geq 0$)?
The example of the portfolio and payoff at $t=0$ and $t=T$ in R is below:

X_c <- c(-3, -7, 2, 0, -2, 10)
X_p <- c( 1, 1, 7, 4, -5, -8)
K_c <- c(8050, 8150, 8250, 8350, 8400, 8500)
K_p <- c(7850, 7950, 8050, 8150, 8250, 8350)
S_hat_T <- 8400
C <- c(48.0, 10.0, 0.9, 0.5, 0.3, 0.2)
P <- c( 2.2, 10.5, 35.0, 100.0, 183.0, 343.0)
M <- sum(X_c*C) + sum(X_p*P); M
# [1] -3212.1
V <- sum(X_c*max(S_hat_T-K_c,0)) + sum(X_p*max(K_p - S_hat_T,0))
V
# [1] 0
V-M
# [1] 3212.1
$M=-3212.1<0$, therefore, the portfolio having negative cost at time $t=0$, and $V=0$, at the time $T$.
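One detail worth checking in the R snippet above: R's max() returns a single scalar maximum of the whole vector, whereas a strike-by-strike payoff would need elementwise maxima (R's pmax()). A quick cross-check of both readings, sketched in plain Python for illustration:

```python
# Same inputs as the R snippet
X_c = [-3, -7, 2, 0, -2, 10]
X_p = [1, 1, 7, 4, -5, -8]
K_c = [8050, 8150, 8250, 8350, 8400, 8500]
K_p = [7850, 7950, 8050, 8150, 8250, 8350]
S_T = 8400

# Scalar reading (mirrors what max(...) computes in the R snippet):
scalar_call = max(max(S_T - k for k in K_c), 0)
scalar_put = max(max(k - S_T for k in K_p), 0)
V_scalar = sum(X_c) * scalar_call + sum(X_p) * scalar_put

# Elementwise reading (what pmax(...) would give in R):
V_elementwise = sum(x * max(S_T - k, 0) for x, k in zip(X_c, K_c)) \
              + sum(x * max(k - S_T, 0) for x, k in zip(X_p, K_p))

print(V_scalar, V_elementwise)
```

The two readings give different payoffs for this position.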
My question is what is wrong with what I am doing?
Update. After noob2's comment, I tried to find the rule for the portfolio cost calculation, and found the book by Eliezer Z. Prisman (2000), Pricing Derivative Securities: An Interactive, Dynamic Environment with Maple V and Matlab.
We study the dark energy equation of state as a function of redshift in a nonparametric way, without imposing any a priori $w(z)$ (ratio of pressure over energy density) functional form. As a check of the method, we test our scheme through the use of synthetic data sets produced from different input cosmological models that have the same relative errors and redshift distribution as the real data. Using the luminosity-time $L_{X}$-$T_{a}$ correlation for gamma-ray burst (GRB) X-ray afterglows (the Dainotti et al. correlation), we are able to utilize GRB samples from the Swift satellite as probes of the expansion history of the universe out to $z \approx 10$. Within the assumption of a flat Friedmann-Lemaître-Robertson-Walker universe and combining supernovae type Ia (SNeIa) data with baryonic acoustic oscillation constraints, the resulting maximum likelihood solutions are close to a constant $w = -1$. If one imposes the restriction of a constant $w$, we obtain $w = -0.99 \pm 0.06$ (consistent with a cosmological constant) with the present-day Hubble constant as $H_{0} = 70.0 \pm 0.6 \text{ km s}^{-1} \text{ Mpc}^{-1}$ and density parameter as $\Omega_{\Lambda 0} = 0.723 \pm 0.025$, while nonparametric $w(z)$ solutions give us a probability map that is centered at $H_{0} = 70.04 \pm 1 \text{ km s}^{-1} \text{ Mpc}^{-1}$ and $\Omega_{\Lambda 0} = 0.724 \pm 0.03$. Our chosen GRB data sample with a full correlation matrix allows us to estimate the amount, as well as quality (errors), of data needed to constrain $w(z)$ in the redshift range extending an order of magnitude beyond the farthest SNeIa measured.
Keywords: cosmological parameters, cosmology: miscellaneous, cosmology: observations, cosmology: theory, dark energy, dark matter

Affiliation: Faculty of Physics, Astronomy and Applied Computer Science, Astronomical Observatory Institute
There is another way to think about this problem. Since $R:= \mathbb R[x,y]/(x^2 +y^2 -1 )$ is a smooth affine curve, it is a normal ring (i.e. integrally closed in its fraction field), and so it is factorial if and only if it has trivial class group.
Here and below I will use ideas discussed in Hartshorne, Ch.II.6, in the subsection on Weil divisors.
We may consider $U :=$ Spec $R$ as an affine open curve, and then consider its projective closure $X$. The curve $X$ is a plane conic, and so its class group (equivalently, its Picard group) is isomorphic to $\mathbb Z$, generated by the class of any rational point (e.g. the class of the point $(1,0)$).
Now $Z := X \setminus U$ is irreducible (it is a single point of $X$, which geometrically becomes two points, namely the two points at infinity $[1:\pm i: 0]$ --- note that neither of these points is individually defined over $\mathbb R$, but their union is, and so it corresponds to a single point on $X$ with residue field equal to $\mathbb C$); this is where we use that our curve is defined over $\mathbb R$ rather than $\mathbb C$. (In the latter case $Z$ is
not irreducible, but is the union of the preceding two points, which are now both defined over $\mathbb C$.)
We now use the exact sequence of Hartshorne II.6, Prop. 6.5, namely
$$\mathbb Z \to \mathrm{Cl}(X) \to \mathrm{Cl}(U) \to 0,$$
where the first arrow is defined by $n \mapsto $ the class of $nZ$.
Recalling that Cl$(X) = \mathbb Z$, and that $Z$ corresponds to a pair of points over $\mathbb C$, this exact sequence can be written more explicitly as $$\mathbb Z \to \mathbb Z \to \mathrm{Cl}(U) \to 0,$$ where the first map is multiplication by $2$.
Thus Cl$(R) = $ Cl$(U) = \mathbb Z/2$, and we see that $R$ is not a UFD.
Explicitly, we see that a maximal ideal in $R$ will be principal precisely if its residue field is equal to $\mathbb C$ (rather than $\mathbb R$). Thus e.g. the maximal ideal $(x,y-1)$, which cuts out the point $(0,1)$ and has residue field $\mathbb R$, is not principal.
One can think about this more geometrically:
If the maximal ideal cutting out a point $P$ over $\mathbb R$ is principal,then it is generated by some real polynomial $f(x,y)$. But then the ideal $(f)$ in $R$ is a product of maximal ideals corresponding to the intersection of the curve $f = 0$ with the curve $U$. By assumption this is just the single point $P$, with multiplicity one, and so (now passing from the affine picture to the projective picture) all the other intersections must be with the two points in $Z$. By Bezout, the total number of intersections of $f = 0$ with $X$ is even, and we are assuming the intersection of $f = 0$ with $U$ consists of the single point $P$, so in fact the number of intersections with $Z$ must be odd. But this set of intersections (counted with multiplicity) is symmetric under complex conjugation (since $f$ has real coefficients) and so it must be even (because the two points of $Z$ are interchanged by complex conjugation). This contradiction shows that the maximal ideal of $P$ is
not principal. (This is more or less a rewriting of the proof of Hartshorne's Prop. 6.5 in this particular case.)
It is also easy to see what happens when we extend scalars from $\mathbb R$ to $\mathbb C$, i.e. pass from $R$ to $S$. The set $Z$ now becomes the union of two points, and so for any point $P$ of $U$ (now over $\mathbb C$) we can find a generator of the maximal ideal by choosing $f$ to be the equation of a line passing through $P$ and one of the two points in $Z$. E.g. for $P = (0,1)$, we can take a generator of the ideal $(x,y-1)$ to be $(y - 1 \pm ix)$. (Either choice of sign will do; their ratio is a unit in $S$.)
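As a quick sanity check of the explicit generator, one can verify numerically that $(y - 1 + ix)(y - 1 - ix) = x^2 + (y-1)^2 = 2(1-y)$ on the circle $x^2 + y^2 = 1$, so each factor indeed divides $y - 1$ up to a unit in $S$. A small sketch (plain Python, sampling points on the circle):

```python
import cmath, math

# On x^2 + y^2 = 1, verify (y - 1 + ix)(y - 1 - ix) = 2(1 - y),
# i.e. the product of the two conjugate generators is a unit multiple of y - 1.
for k in range(12):
    t = 2 * math.pi * k / 12
    x, y = math.cos(t), math.sin(t)
    lhs = (y - 1 + 1j * x) * (y - 1 - 1j * x)
    rhs = 2 * (1 - y)
    assert abs(lhs - rhs) < 1e-12
print("identity holds at all sampled points")
```

Of course this only spot-checks the polynomial identity; the algebraic statement is that $x^2 + (y-1)^2 \equiv 2(1-y) \pmod{x^2 + y^2 - 1}$, which holds identically.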
In terms of the exact sequence of class groups, $Z$ is no longer irreducible, but the union of two points each of degree one, and so the exact sequence becomes $$\mathbb Z \oplus \mathbb Z \to \mathrm{Cl}(X_{/\mathbb C}) \to \mathrm{Cl}(U_{/\mathbb C}) \to 0,$$ which more explicitly is $$\mathbb Z \oplus \mathbb Z \to \mathbb Z \to \mathrm{Cl}(U_{/\mathbb C}) \to 0,$$ with the first map being given just by $(m,n) \mapsto m+n$. Evidently this map is surjective, and so Cl$(S) =$ Cl$(U_{/\mathbb C}) = 0$.
As far as the ElGamal scheme, I have given it some thought. With all due respect to the paper that I provided that appears to select different private-public key pairs for each participant/entity, I believe another way to set up an encryption system for multiple participants using the same private key (if that's what you desire), is to use different generators of the primary field/group, $G_{q}$, that you select.
As a simple example, take the cyclic (multiplicative) group $\mathbb{Z}^{*}_{13} = \mathbb{Z}_{13} \setminus \{0\}$. The generators of this group are $g=2$, $g=6$, $g=7$ and $g=11$; that is, order$(2)=12$, order$(6)=12$, etc. To generate a public key, select a random $x$ from $\{1,...,11\}$. Then, for each generator, $g_{i}$, compute $h_{i} = g_{i}^x$. That is, $h_{1} = 2^{x}$, $h_{2} = 6^{x}$, etc. Provided $x$ is coprime to $12$ (so that $g \mapsto g^{x}$ is injective), each participant will have a different public key denoted by $(\mathbb{Z}^{*}_{13}, 13, g_{i}, h_{i})$.
Encryption works by participant $i$ choosing a random $y$ for every message from $\{0,...,11\}$, then calculating $c_{1} = g_{i}^y$. Participant $i$ also calculates the shared secret (sometimes called the ephemeral key) $s_{i} = h_{i}^{y}$. Finally, participant $i$ converts the message $m$ into $m' \in \mathbb{Z}_{13}^*$, and calculates $c_{2} = m' \cdot s_{i}$. Participant $i$ then sends the cipher text $(c_{1},c_{2}) = (g_{i}^{y}, m' \cdot h_{i}^{y}) = (g_{i}^{y}, m' \cdot (g_{i}^{x})^{y})$.
Decryption of the cipher text $(c_{1},c_{2})$ works by calculating the shared secret $s_{i} = c_{1}^{x}$ using the same private key $x$, and then computing $m' = c_{2} \cdot s_{i}^{-1}$. The message $m'$ is then converted back to the plain text message, $m$.
The decryption step produces the intended message since
$c_{2} \cdot s_{i}^{-1} = m' \cdot h_{i}^{y} \cdot (g_{i}^{xy})^{-1} = m' \cdot g_{i}^{xy} \cdot g_{i}^{-xy} = m'.$
For the scheme to be secure, the user will have to select $q > 2^{2000}$ (with current technology). Note, however, that if one participant's security is compromised, then all participants' securities are compromised. This is probably why it would pay to have a different pair of public-private keys for each participant. |
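A minimal round-trip sketch of the scheme just described (plain Python; the tiny modulus 13 is for illustration only and hopelessly insecure, and the helper name `elgamal_demo` is our own):

```python
import random

def elgamal_demo(p, g, x, m, y=None):
    """Toy ElGamal over Z_p^*: encrypt m with generator g and shared private
    key x, then decrypt, returning the recovered message."""
    h = pow(g, x, p)                     # public key component h = g^x
    if y is None:
        y = random.randrange(1, p - 1)   # fresh ephemeral exponent per message
    c1, c2 = pow(g, y, p), (m * pow(h, y, p)) % p   # ciphertext (c1, c2)
    s = pow(c1, x, p)                    # receiver recomputes shared secret
    return (c2 * pow(s, -1, p)) % p      # m' = c2 * s^-1  (needs Python >= 3.8)

# Same private key x, a different generator of Z_13^* per participant:
x, p = 5, 13
for g in (2, 6, 7, 11):
    assert elgamal_demo(p, g, x, m=9) == 9
print("round trip OK for all four generators")
```

The decryption step always recovers $m'$ because $c_2 \cdot s^{-1} = m' g^{xy} g^{-xy} = m'$, regardless of which generator the participant uses.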
In the related post over at math.se, the answerer takes as given that the definition of asymptotic unbiasedness is $\lim_{n\to \infty} E(\hat \theta_n-\theta) = 0$.
Intuitively, I disagree: "unbiasedness" is a term we first learn in relation to a distribution (finite sample). It appears then more natural to consider "asymptotic unbiasedness" in relation to an asymptotic distribution. And in fact, this is what Lehmann & Casella do in Theory of Point Estimation (1998, 2nd ed.), p. 438, Definition 2.1 (simplified notation):
$$\text{If} \;\;\;k_n(\hat \theta_n - \theta )\to_d H$$
for some sequence $k_n$ and for some random variable $H$, the estimator $\hat \theta_n$ is asymptotically unbiased if the expected value of $H$ is zero.
Given this definition, we can argue that consistency implies asymptotic unbiasedness since
$$\hat \theta_n \to_{p}\theta \implies \hat \theta_n - \theta \to_{p}0 \implies \hat \theta_n - \theta \to_{d}0$$
...and the degenerate distribution that is equal to zero has expected value equal to zero (here the $k_n$ sequence is a sequence of ones).
But I suspect that this is not really useful; it is just a by-product of a definition of asymptotic unbiasedness that allows for degenerate random variables. Essentially we would like to know whether, if we had an expression involving the estimator that converges to a non-degenerate rv, consistency would still imply asymptotic unbiasedness.
Earlier in the book (p. 431, Definition 1.2), the authors call the property $\lim_{n\to \infty} E(\hat \theta_n-\theta) = 0$ "unbiasedness in the limit", and it does not coincide with asymptotic unbiasedness.
Unbiasedness in the limit is sufficient (but not necessary) for consistency under the additional condition that the sequence of estimator variances goes to zero (implying that the variance exists in the first place).
For the intricacies related to consistency with non-zero variance (a bit mind-boggling), visit this post.
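To make the distinction concrete, a standard example is $\hat\theta_n = \max_i X_i$ for $X_i \sim U(0,\theta)$: it is biased for every finite $n$ (since $E\hat\theta_n = \theta\, n/(n+1)$), yet consistent, and its bias $-\theta/(n+1)$ vanishes, so it is unbiased in the limit. A small simulation sketch (plain Python; the sample sizes, replication count, and seed are arbitrary choices):

```python
import random

def bias_of_max(theta, n, reps=20000, seed=0):
    """Monte Carlo estimate of E[max(X_1..X_n)] - theta for X_i ~ U(0, theta)."""
    rng = random.Random(seed)
    total = sum(max(rng.uniform(0, theta) for _ in range(n)) for _ in range(reps))
    return total / reps - theta

theta = 1.0
for n in (5, 50, 500):
    exact = -theta / (n + 1)   # exact bias, since E[max] = theta * n/(n+1)
    print(n, round(bias_of_max(theta, n), 4), round(exact, 4))
```

The simulated bias shrinks toward zero as $n$ grows, matching the exact $-\theta/(n+1)$, while the finite-sample estimator is never unbiased.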
By Robert G. Brown, Duke University (elevated from a WUWT comment)
I spent what little of last night that I semi-slept in a learning-dream state chewing over Caballero's book and radiative transfer, and came to two insights. First, the baseline black-body model (that leads to T_b = 255K) is physically terrible, as a baseline. It treats the planet in question as a nonrotating superconductor of heat with no heat capacity. The reason it is terrible is that it is absolutely incorrect to ascribe 33K as even an estimate for the "greenhouse warming" relative to this baseline, as it is a completely nonphysical baseline; the 33K relative to it is both meaningless and mixes both heating and cooling effects that have absolutely nothing to do with the greenhouse effect. More on that later.
I also understand the greenhouse effect itself much better. I may write this up in my own words, since I don’t like some of Caballero’s notation and think that the presentation can be simplified and made more illustrative. I’m also thinking of using it to make a “build-a-model” kit, sort of like the “build-a-bear” stores in the malls.
Start with a nonrotating superconducting sphere, zero albedo, unit emissivity, perfect blackbody radiation from each point on the sphere. What’s the mean temperature?
Now make the non-rotating sphere perfectly non-conducting, so that every part of the surface has to be in radiative balance. What's the average temperature now? This is a better model for the moon than the former, surely, although still not good enough. Let's improve it.
Now make the surface have some thermalized heat capacity — make it heat superconducting, but only in the vertical direction, and presume a mass shell of some thickness that has some reasonable specific heat. This changes nothing from the previous result, until we make the sphere rotate. Oooo, yet another average (surface) temperature, this time the spherical average of a distribution that depends on latitude, with the highest temperatures dayside near the equator sometime after "noon" (lagged because now it takes time to raise the temperature of each block as the insolation exceeds blackbody loss, and time for it to cool as the blackbody loss exceeds insolation), and the surface is never at a constant temperature anywhere but at the poles (no axial tilt, of course). This is probably a very decent model for the moon, once one adds back in an albedo (effectively scaling down the fraction of the incoming power that has to be thermally balanced).
One can for each of these changes actually compute the exact parametric temperature distribution as a function of spherical angle and radius, and (by integrating) compute the change in e.g. the average temperature from the superconducting perfect black body assumption. Going from superconducting planet to local detailed balance but otherwise perfectly insulating planet (nonrotating) simply drops the nightside temperature for exactly 1/2 the sphere to your choice of 3K or (easier to idealize) 0K after a very long time. This is bounded from below, independent of solar irradiance or albedo (or for that matter, emissivity). The dayside temperature, on the other hand, has a polar distribution with a pole facing the sun, and varies nonlinearly with irradiance, albedo, and (if you choose to vary it) emissivity.
That pesky T^4 makes everything complicated! I hesitate to even try to assign the sign of the change in average temperature going from the first model to the second! Every time I think that I have a good heuristic argument for saying that it should be lower, a little voice tells me — T^4 — better do the damn integral, because the temperature at the separator has to go smoothly to zero from the dayside and there's a lot of low-irradiance (and hence low-temperature) area out there where the sun is at five o'clock, even for zero albedo and unit emissivity! The only easy part is that to obtain the spherical average we can just take the dayside average and divide by two…
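For the nonrotating, nonconducting, zero-albedo case the integral can in fact be done: local balance gives T(\theta) = T_ss * cos^{1/4}(\theta) on the dayside (with T_ss = (S/\sigma)^{1/4} the subsolar temperature), the hemispheric average of cos^{1/4} is 4/5, and the nightside contributes zero, so the spherical mean is (2/5) T_ss. A quick numerical check (plain Python; S = 1361 W/m² and zero albedo are illustrative assumptions):

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0               # assumed solar constant, W/m^2, zero albedo

T_ss = (S / SIGMA) ** 0.25                        # subsolar equilibrium temperature
T_superconducting = (S / (4 * SIGMA)) ** 0.25     # uniform-temperature sphere

# Dayside mean of cos(theta)^(1/4): integral of mu^(1/4) dmu over [0,1] = 4/5,
# computed here by the midpoint rule.
n = 100000
dayside_mean = sum(((k + 0.5) / n) ** 0.25 for k in range(n)) / n

T_insulating = 0.5 * T_ss * dayside_mean          # nightside contributes zero

print(round(T_superconducting, 1), round(T_insulating, 1))
```

So under these assumptions the average temperature is substantially lower for the insulating sphere than for the superconducting one, consistent with the T^4 intuition above: redistributing a fixed radiative budget always lowers the mean temperature.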
I’m not even happy with the sign for the rotating sphere, as this depends on the
interplay between the time required to heat the thermal ballast given the difference between insolation and outgoing radiation and the rate of rotation. Rotate at infinite speed and you are back at the superconducting sphere. Rotate at zero speed and you’re at the static nonconducting sphere. Rotate in between and — damn — now by varying only the magnitude of the thermal ballast (which determines the thermalization time) you can arrange for even a rapidly rotating sphere to behave like the static nonconducting sphere and a slowly rotating sphere to behave like a superconducting sphere (zero heat capacity and very large heat capacity, respectively). Worse, you’ve changed the geometry of the axial poles (presumed to lie untilted w.r.t. the ecliptic still). Where before the entire day-night terminator was smoothly approaching T = 0 from the day side, now this is true only at the poles! The integral of the polar area (for a given polar angle d\theta) is much smaller than the integral of the equatorial angle, and on top of that one now has a smeared out set of steady state temperatures that are all functions of azimuthal angle \phi and polar angle \theta, one that changes nonlinearly as you crank any of: Insolation, albedo, emissivity, \omega (angular velocity of rotation) and heat capacity of the surface.
And we haven’t even got an atmosphere yet. Or water. But at least up to this point, one can solve for the temperature distribution T(\theta,\phi,\alpha,S,\epsilon,c)
exactly, I think.
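For the static nonconducting blackbody sphere, the dayside integral mentioned above can in fact be done in closed form. Here is a minimal numeric sketch; the solar constant, zero albedo, and unit emissivity are illustrative assumptions, not values from the post:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S, albedo, eps = 1361.0, 0.0, 1.0  # assumed: solar constant, zero albedo, unit emissivity

# Local radiative balance on the dayside: eps*SIGMA*T^4 = S*(1-albedo)*mu,
# where mu = cos(theta) and theta is measured from the subsolar point.
T_ss = (S * (1 - albedo) / (eps * SIGMA)) ** 0.25  # subsolar-point temperature

# Solid angle is uniform in mu, so the dayside average of T = T_ss * mu^(1/4)
# is T_ss * integral_0^1 mu^(1/4) dmu = (4/5) * T_ss; the nightside sits at
# T = 0, so the spherical average is the dayside average divided by two.
T_avg_static = (4.0 / 5.0) * T_ss / 2.0

# Superconducting (isothermal) sphere: eps*SIGMA*T^4 = S*(1-albedo)/4.
T_iso = (S * (1 - albedo) / (4 * eps * SIGMA)) ** 0.25

print(f"T_subsolar   = {T_ss:.1f} K")
print(f"<T> static   = {T_avg_static:.1f} K")
print(f"T isothermal = {T_iso:.1f} K")
```

At least for this idealized pair, the static nonconducting sphere's average temperature comes out well below the superconducting one's, even though both radiate away the same total power; the T^4 weighting concentrates the emission near the subsolar point.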
Furthermore, one can actually
model something like water pretty well in this way. In fact, if we imagine covering the planet not with air but with a layer of water with a blackbody on the bottom and a thin layer of perfectly transparent saran wrap on top to prevent pesky old evaporation, the water becomes a contribution to the thermal ballast. It takes a lot longer to raise or lower the temperature of a layer of water a meter deep (given an imbalance between incoming and outgoing radiation) than it does to raise or lower the temperature of maybe the top centimeter or two of rock or dirt or sand. A lot longer.
Once one has a good feel for this, one could decorate the model with oceans and land bodies (but still prohibit lateral energy transfer and assume immediate vertical equilibration). One could let the water have the right albedo and freeze when it hits the right temperature. Then things get tough.
You have to add an atmosphere. Damn. You also have to let the ocean itself convect, and have density, and variable depth. And all of this on a
rotating sphere where things (air masses) moving up deflect antispinward (relative to the surface), things moving down deflect spinward, things moving north deflect spinward (they’re going too fast) in the northern hemisphere, things moving south deflect antispinward, as a function of angle and speed and rotational velocity. Friggin’ coriolis force, deflects naval artillery and so on. And now we’re going to differentially heat the damn thing so that turbulence occurs everywhere on all available length scales, where we don’t even have some simple symmetry to the differential heating any more because we might as well have let a five year old throw paint at the sphere to mark out where the land masses are versus the oceans, or better yet given him some Tonka trucks and let him play in the spherical sandbox until he had a nice irregular surface and then filled the surface with water until it was 70% submerged or something.
Ow, my aching head. And
note well — we still haven’t turned on a Greenhouse Effect! And I now have nothing like a heuristic for radiant emission cooling even in the ideal case, because it is quite literally distilled, fractionated by temperature and height even without CO_2 per se present at all. Clouds. Air with a nontrivial short wavelength scattering cross-section. Energy transfer galore.
And then, before we mess with CO_2, we have to take
quantum mechanics and the incident spectrum into account, and start to look at the hitherto ignored details of the ground, air, and water. The air needs a lapse rate, which will vary with humidity and albedo and ground temperature and… The molecules in the air recoil when they scatter incoming photons, and if a collision with another air molecule occurs in the right time interval they will mutually absorb some or all of the energy instead of elastically scattering it, heating the air. A molecule can also absorb one wavelength and emit a cascade of photons at a different wavelength (depending on its spectrum).
Finally, one has to add in the GHGs, notably CO_2 (water is already there). They intercept the outgoing radiance from the (higher temperature) surface in some bands, transferring some of it to CO_2 where it is trapped until it diffuses to the top of the CO_2 column, where it is
emitted at a cooler temperature. The total power going out is thus split up, with that pesky blackbody spectrum modulated so that different frequencies have different effective temperatures, in a way that is locally modulated by — nearly everything. The lapse rate. Moisture content. Clouds. Bulk transport of heat up or down via convection. Bulk transport of heat up or down via caged radiation in parts of the spectrum. And don’t forget sideways! Everything is now circulating, wind and surface evaporation are coupled, the equilibration time for the ocean has stretched from “commensurate with the rotational period” for shallow seas to a thousand years or more so that the ocean is never at equilibrium, it is always tugging surface temperatures one way or the other with substantial thermal ballast, heat deposited not today but over the last week, month, year, decade, century, millennium.
Yessir, a damn hard problem. Anybody who calls this
settled science is out of their ever-loving mind. Note well that I still haven’t included solar magnetism or any serious modulation of solar irradiance, or even the axial tilt of the earth, which once again completely changes everything, because now the timescales at the poles become annual, and the north pole and south pole are not at all alike! Consider the enormous difference in their thermal ballast and oceanic heat transport and atmospheric heat transport!
A
hard problem. But perhaps I’ll try to tackle it, if I have time, at least through the first few steps outlined above. At the very least I’d like to have a better idea of the direction of some of the first few build-a-bear steps on the average temperature (while the term “average temperature” has some meaning, that is before making the system chaotic).
rgb |
In a five-team tournament, each team plays one game with every other team. Each team has a $50\%$ chance of winning any game it plays. (There are no ties.) Let $\dfrac{m}{n}$ be the probability that the tournament will produce neither an undefeated team nor a winless team, where $m$ and $n$ are relatively prime integers. Find $m+n$.
The probability that some team wins all of its games is $5\cdot (\frac{1}{2})^4=\frac{5}{16}$.
Similarly, the probability that some team loses all of its games is $\frac{5}{16}$.
I did this much, but after that what should I do to reach the final answer ? I'm confused. Can someone explain? |
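A brute-force enumeration over all $2^{10}$ equally likely game outcomes is a quick way to check whatever inclusion-exclusion expression you end up with. A sketch (the team and game labels are arbitrary):

```python
from fractions import Fraction
from itertools import combinations, product

teams = range(5)
games = list(combinations(teams, 2))  # the 10 pairings, each played once

count = 0
for outcome in product((0, 1), repeat=len(games)):
    wins = [0] * 5
    for (a, b), w in zip(games, outcome):
        wins[a if w == 0 else b] += 1
    # neither an undefeated (4-0) team nor a winless (0-4) team
    if 4 not in wins and 0 not in wins:
        count += 1

p = Fraction(count, 2 ** len(games))
print(p)  # 17/32, so m + n = 17 + 32 = 49
```

This agrees with inclusion-exclusion: $P(\text{neither}) = 1 - \left(\frac{5}{16} + \frac{5}{16} - \frac{5\cdot 4\cdot 2^3}{2^{10}}\right) = 1 - \frac{15}{32} = \frac{17}{32}$, where the subtracted overlap counts outcomes with both an undefeated and a winless team.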
I see your problem - Hull unfortunately does not explain the reasoning behind the approach.
The hint the book gives is correct. Using the Taylor series, $e^x$ can be written as $e^x = 1 + x + \frac{x^2}{2}+...$. Hull also incorporates a dividend rate $q$ but we can disregard it here.
$p$ is given by $p=\frac{e^{r\Delta t}-d}{u-d}$. We also have $u=\frac{1}{d}$. So to complete our setup we primarily just need to find a proper $u$ that satisfies equation $(*)$$$e^{r\Delta t}(u+d)-ud-e^{2r\Delta t} = \sigma^2\Delta t $$One can assume that $u$ will be some function of $\Delta t$ and thus write $u(\Delta t)$. Furthermore, we do not need $u(\Delta t)$ to satisfy $(*)$ exactly. If we assume that the function has some Taylor approximation, we can just work with the truncated Taylor sum for it, which will approximate $u(\Delta t)$ well enough for small $\Delta t$.
So we set $u(\Delta t) = e^{\sigma\sqrt{\Delta t}}$ and $d(\Delta t) = e^{-\sigma\sqrt{\Delta t}}$.
Now obviously this choice does not satisfy equation $(*)$ exactly. Still, we will use it if its second-order Taylor approximation does the job (i.e. fulfills the equation quite well for small $\Delta t$) - remember we can use a Taylor expansion to approximate the function - this is the somewhat simplified statement of Taylor's Theorem.
The second-order Taylor sum for $e^t$ is given by $e^t \approx 1+t+0.5t^2$. Inserting $\sigma \sqrt{\Delta t}$ for $t$ gives $e^{\sigma \sqrt{\Delta t}}\approx 1+\sigma \sqrt{\Delta t}+0.5\sigma^2 \Delta t$. And thus $u(\Delta t) \approx 1+\sigma \sqrt{\Delta t}+0.5\sigma^2 \Delta t$ and $d(\Delta t) \approx 1-\sigma \sqrt{\Delta t}+0.5\sigma^2 \Delta t$ for small enough $\Delta t$.
Using the same technique we approximate the $e^{r\Delta t},e^{2 r\Delta t}$ terms by their first-order Taylor sums and get $e^{r\Delta t}\approx 1+r \Delta t$ and $e^{2 r\Delta t}\approx 1+2r \Delta t$.
If you substitute the terms in equation $(*)$ by the approximations derived here and kill/ignore all the terms containing $(\Delta t)^2$ or higher powers, you will get the desired result.
Thus
$$(1+r\Delta t)(1+\sigma \sqrt{\Delta t}+0.5\sigma^2 \Delta t+1-\sigma \sqrt{\Delta t}+0.5\sigma^2 \Delta t)-(1+\sigma \sqrt{\Delta t}+0.5\sigma^2 \Delta t)(1-\sigma \sqrt{\Delta t}+0.5\sigma^2 \Delta t)-(1+2r\Delta t) = \sigma^2\Delta t $$
Simplify (i.e. just carry out the multiplication); whenever you encounter a term containing $(\Delta t)^2$ (e.g. $\sigma^4(\Delta t)^2$), just set it to zero.
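To see numerically that this truncation is harmless, here is a small sketch (with illustrative parameter values) checking that the CRR choice $u=e^{\sigma\sqrt{\Delta t}}$, $d=1/u$ satisfies $(*)$ up to an error of order $(\Delta t)^2$:

```python
import math

def lhs_of_star(r, sigma, dt):
    """Left-hand side of (*): e^{r dt}(u+d) - u*d - e^{2 r dt} for CRR u, d."""
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    return math.exp(r * dt) * (u + d) - u * d - math.exp(2.0 * r * dt)

r, sigma = 0.05, 0.2  # illustrative values
for dt in (0.1, 0.01, 0.001):
    err = lhs_of_star(r, sigma, dt) - sigma**2 * dt
    # err / dt^2 stays roughly constant, confirming the error is O((dt)^2)
    print(f"dt={dt}: error={err:.3e}, error/dt^2={err / dt**2:.6f}")
```

Shrinking $\Delta t$ by a factor of ten shrinks the mismatch by roughly a factor of a hundred, which is exactly what the "drop all $(\Delta t)^2$ terms" step asserts.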
Edit on the background for the choice of $u$
By using the relation $d=1/u$ one can simplify equation $(*)$ to$$u^2-\frac{1+e^{2r\Delta t}+\sigma^2\Delta t}{e^{r\Delta t}}u+1=0$$
Setting $A=0.5\frac{1+e^{2r\Delta t}+\sigma^2\Delta t}{e^{r\Delta t}}=0.5(e^{-r\Delta t}+e^{r\Delta t}+\sigma^2\Delta t\, e^{-r\Delta t})$ one arrives at the quadratic equation $u^2-2Au+1=0$.
Solving this equation gives $u=A+\sqrt{A^2-1}$, $d=A-\sqrt{A^2-1}$. Now if one inserts the actual formula for $A$ into these expressions, substitutes $e^{r \Delta t},e^{-r \Delta t}$ with $1+r \Delta t,1-r \Delta t$, simplifies, and then neglects all the terms containing $(\Delta t)^2$ or higher, one arrives at$$u=1+\sigma \sqrt{\Delta t}+0.5\sigma^2 \Delta t$$This is the second-order Taylor approximation of $e^{ \sigma \sqrt{\Delta t}}$.
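As a sanity check on this last step, one can compare the exact root $u=A+\sqrt{A^2-1}$ with the truncated expansion numerically (a sketch with illustrative parameter values):

```python
import math

r, sigma = 0.05, 0.2  # illustrative values
for dt in (0.1, 0.01, 0.001):
    # A = 0.5 * (1 + e^{2 r dt} + sigma^2 dt) / e^{r dt}
    A = 0.5 * (1.0 + math.exp(2.0 * r * dt) + sigma**2 * dt) / math.exp(r * dt)
    u_exact = A + math.sqrt(A * A - 1.0)  # exact root of u^2 - 2Au + 1 = 0
    u_approx = 1.0 + sigma * math.sqrt(dt) + 0.5 * sigma**2 * dt  # truncation
    print(f"dt={dt}: u_exact={u_exact:.8f}, u_approx={u_approx:.8f}")
```

The two values converge rapidly as $\Delta t \to 0$, which is why the CRR shortcut $u=e^{\sigma\sqrt{\Delta t}}$ is good enough in practice.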