I have the following dynamical system:
$\dot{x_1}= -x_2 + (x_1(1-(x_1^2+x_2^2)^2))$ , $ \dot{x_2}= x_1 + (x_2(1-(x_1^2+x_2^2)^2))$, $\dot{x_3}= \epsilon x_3$ . I am required to work out the flow for this system. I have switched it to cylindrical coordinates obtaining $\dot{r}=r(1-r^4)$ , $\dot{\theta}=1$, $\dot{z}=\epsilon{z}$.
I assume in order to work out $r$ I must use partial fractions, but I'm not really sure how to proceed with this, as surely it gets a bit awkward. Have I made a mistake somewhere? Is this the right approach?
Thanks
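For reference, a quick symbolic sanity check of the radial equation (a sketch using sympy; the closed form in the comment is what the partial-fraction integration should yield, not something given in the problem):

```python
# Symbolic check of the radial equation dr/dt = r(1 - r^4) with sympy.
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')
ode = sp.Eq(r(t).diff(t), r(t) * (1 - r(t)**4))
print(sp.dsolve(ode))   # expect solutions equivalent to r(t) = (1 + C*exp(-4*t))**(-1/4)
```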
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
De Bruijn-Newman constant
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
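For experimentation, a crude numerical sketch of [math]H_t[/math] for real arguments (my own, not from the wiki; it truncates the series for [math]\Phi[/math] and the integral, which is harmless since [math]\Phi[/math] decays super-exponentially):

```python
# Evaluate H_t(x) for real x by truncating the series for Phi and the defining integral.
import numpy as np
from scipy.integrate import quad

def Phi(u, N=30):
    n = np.arange(1, N + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9*u) - 3 * np.pi * n**2 * np.exp(5*u))
                  * np.exp(-np.pi * n**2 * np.exp(4*u)))

def H(t, x, cutoff=6.0):
    # The integrand is negligible well before the cutoff because Phi(u) ~ exp(-pi*e^{4u}).
    return quad(lambda u: np.exp(t*u*u) * Phi(u) * np.cos(x*u), 0.0, cutoff)[0]

# H_0(z) vanishes when 1/2 + iz/2 is a nontrivial zeta zero, i.e. near z = 2*14.134725...
print(H(0.0, 2 * 14.134725), H(0.0, 20.0))  # the first value should be much closer to 0
```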
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the
de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
[math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle |N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm.
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis,
all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-decreasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] (formally, at least) obey the system of ODEs
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum may have to be interpreted in a principal value sense. (See for instance [CSV1994, Lemma 2.4]. This lemma assumes that [math]t \gt \Lambda[/math], but it is likely that one can extend to other [math]t \geq 0[/math] as well.)
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
Threads
Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
Other blog posts and online discussion
Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.
Code and data
Wikipedia and other references
Bibliography
[B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke Math. J. 17 (1950), 197–226.
[CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
[G2004] X. Gourdon, The [math]10^{13}[/math] first zeros of the Riemann zeta function, and zeros computation at very large height, 2004.
[KKL2009] H. Ki, Y. O. Kim, J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics 222 (2009), 281–306.
[N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
[RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint, arXiv:1801.05914.
|
Edit: I'm leaving the old post below, but first I want to write the proof as suggested by Bruce from his book, which uses the ideas in a more efficient way.
Assume that $\|p-q\|<1$, with $p,q\in A$, a unital C$^*$-algebra. Let $x=pq+(1-p)(1-q)$. Then, as $2p-1$ is a unitary, $$\|1-x\|=\|(2p-1)(p-q)\|=\|p-q\|<1.$$So $x$ is invertible. Now let $x=uz$ be the polar decomposition, $z=(x^*x)^{1/2}\in A$. Then $u=xz^{-1}\in A$. Also, $px=pq=xq$, and $qx^*x=qpq$, so $qx^*x=x^*xq$, and then $qz=zq$. Then$$pu=pxz^{-1}=xqz^{-1}=uzqz^{-1}=uqzz^{-1}=uq.$$So $q=u^*pu$.
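A quick numerical sanity check of this argument with 2×2 matrices (a sketch of mine; scipy's polar decomposition plays the role of $x=uz$):

```python
# Build two close projections, form x = pq + (1-p)(1-q), take the polar
# decomposition x = u z, and verify q = u* p u.
import numpy as np
from scipy.linalg import polar

def proj(v):
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

p = proj(np.array([1.0, 0.0]))
q = proj(np.array([1.0, 0.3]))               # close to p, so ||p - q|| < 1
print(np.linalg.norm(p - q, 2) < 1)          # True
x = p @ q + (np.eye(2) - p) @ (np.eye(2) - q)
u, z = polar(x)                              # polar decomposition, z = (x*x)^(1/2)
print(np.allclose(u.conj().T @ p @ u, q))    # True: q = u* p u
```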
=============================================
(the old post starts here)
(A good friend pointed me to the ideas in this answer, so I'm sharing them here)
The result holds in any unital C$^*$-algebra. So assume that $\|p-q\|<1$, with $p,q$ in a unital C$^*$-algebra $A\subset B(H)$.
Claim 1: There is a continuous path of projections joining $p$ and $q$.
Proof. Let $\delta\in(0,1)$ with $\|p-q\|<\delta$. For each $t\in[0,1]$, let $x_t=tp+(1-t)q$. Then$$\|x_t-p\|=\|(1-t)(p-q)\|<\delta(1-t),$$$$\|x_t-q\|=\|t(p-q)\|<\delta t.$$This, together with the fact that $x_t$ is selfadjoint, implies that $\sigma(x_t)\subset K=[-\delta/2,\delta/2]\cup[1-\delta/2,1+\delta/2]$ (since $\min\{t,1-t\}\leq1/2$). Now let $f$ be the continuous function on $K$ defined as $0$ on $[-\delta/2,\delta/2]$ and $1$ on $[1-\delta/2,1+\delta/2]$. Then, for all $t\in[0,1]$, $f(x_t)\in A$ is a projection. And$$t\to x_t\to f(x_t)$$is continuous, completing the proof of the claim. Edit: years later, I posted this answer to a question on MSE that proves the continuity.
Claim 2: We may assume without loss of generality that $\|p-q\|<1/2$.
This is simply a compactness argument, using that each projection in the path $f(x_t)$ is very near another projection in the path. Compactness allows us to make the number of steps finite, and so if we find projections $p=p_0,p_1,\ldots,p_n=q$ and unitaries with $u_kp_ku_k^*=p_{k+1}$, we can multiply the unitaries to get the unitary that achieves $q=upu^*$.
Claim 3: If $\|p-q\|<1/2$, there exists a unitary $u\in A$ with $q=upu^*$.
Let $x=pq+(1-p)(1-q)$. Then $$\|x-1\|=\|2pq-p-q\|=\|p(q-p)+(p-q)q\|\leq2\|p-q\|<1,$$so $x$ is invertible. Let $x=uz$ be the polar decomposition. Then $u$ is a unitary. Note that$$qx^*x=q(qpq+(1-q)(1-p)(1-q))=qpq,$$so $q$ commutes with $x^*x$ and then with $z=(x^*x)^{1/2}$. Note also that $px=xq$, so $puz=uzq=uqz$. As $z$ is invertible, $pu=uq$, i.e.$$q=u^*pu.$$Note that $u=xz^{-1}\in A$.
|
Here is a problem from Guillemin-Pollack:
The graph of a map $f: X\rightarrow Y$ is the subset of $X\times Y$ defined by $$graph(f)=\{(x,f(x)):x\in X\}.$$ Define $F: X\rightarrow graph(f)$ by $F(x)=(x,f(x))$. Show that if $f$ is smooth, $F$ is a diffeomorphism; thus $graph(f)$ is a manifold if $X$ is.
(Note that the notion of a manifold in this book is defined as follows: a subset of $R^k$ is a manifold of dimension $n$ if every point admits a neighborhood diffeomorphic to an open subset of $R^n$. For the definition of a smooth map see this question.)
Assume $X \subset R^k$ (as remarked above, only subsets of $R^k$ are considered in the book). My idea is to regard $F$ as the composition of $X\rightarrow X\times X \rightarrow graph(f), x\mapsto (x,x)\mapsto (x,f(x))$. The first map $x\mapsto (x,x)$ is smooth because it extends by the same formula to a smooth map $R^k\rightarrow R^k\times R^k$ and agrees with the original map on $X$. The second map $(x,x)\mapsto (x,f(x))$ is smooth by The smoothness of a product map.
Further, the first map is clearly bijective and its inverse, $(x,x)\mapsto x$, is smooth since it extends by the same formula to a smooth map $R^k\times R^k\rightarrow R^k$. The second map is also bijective with inverse $X\times f(X)\rightarrow X\times X,\ (x,f(x))\mapsto (x,x)$. But why is the inverse smooth? Now we cannot extend it to a smooth map $R^k\times R^k\rightarrow R^k\times R^k$ by the same formula anymore...
|
As asked in the title, is Hamiltonian containing enough information to judge the existence of spontaneously symmetry breaking?
Any examples?
The Hamiltonian of a theory describes its dynamics. Symmetry is "spontaneously broken" when a certain state of the quantum theory doesn't have the same symmetry as the Hamiltonian (dynamics). The standard example is a theory with a quartic potential. In field theory, say we have a potential $V (\phi) \sim \lambda{(\phi^2 - a^2)}^2$ in the lagrangian/hamiltonian, for some real valued scalar field $\phi$. This potential has minima at $\phi_{\pm}=\pm a$.
The theory (Hamiltonian) has a $\phi \rightarrow - \phi$ symmetry, but note that those vacua don't have the same symmetry, i.e. the vacuum states are not invariant under the symmetry. The solutions $\phi_{\pm}$ transform into each other under the transformation. So this is an example where the symmetry of the theory is broken by the vacuum state, "spontaneously" (roughly, by itself, as the theory settles into the vacuum state).
So, to find whether a symmetry will be spontaneously broken (given a Hamiltonian), you have to check the symmetry of the states (typically vacua) of the theory and compare them to the symmetry of the Hamiltonian.
I'm not sure what OP has in mind, but consider e.g. the inverted harmonic oscillator $$H~=~\frac{p^2}{2m}-\frac{1}{2}kq^2, $$ which has spontaneously broken $\mathbb{Z}_2$ symmetry $(q,p) \to (-q,-p) $. The stable positions are $q=\pm \infty.$
In general, you should solve the potential for the configuration that gives a minimum. Does this minimum of the potential respect the symmetry? If not, you are very likely to find spontaneous symmetry breaking. The classic example of this is the minimum of the Higgs potential not respecting the gauge invariance of the SU(2) x U(1) fields coupled to it, but there are simpler examples, like the one @Qmechanic points out.
|
When we calculate the dynamic resistance \$r=(\frac{dv}{dI})\$, for any n-p junction, how is it different from the normal resistance \$R=\frac VI\$? Does the equation for the voltage drop (The fermi potential drop, and not the absolute Galvani potential) work if we use the dynamic resistance with the instantaneous current (\$V=Ir\$)? Does the power dissipation relation, \$P=I^2r\$ hold in case of dynamic resistances? If it does, is power dissipated as heat even in case of the n-p junction? I think it is unlikely, as the hole-electron recombinations are the dominant phenomenon here, and I am unsure whether those can produce heat.
For the ideal resistor, the voltage across it is proportional to the current through it and thus their ratio is the constant \$R\$:
$$\frac{v_R}{i_R} = R $$
For the ideal (semiconductor) diode, we have
$$i_D = I_S(e^{\frac{v_D}{nV_T}}-1)$$
Inverting yields
$$v_D = nV_T\ln (1 + \frac{i_D}{I_S}) $$
thus, the diode voltage is not proportional to the diode current, i.e., the ratio of the voltage and current is not a constant.
$$\frac{v_D}{i_D} = \frac{nV_T}{i_D}\ln (1 + \frac{i_D}{I_S}) \ne R$$
Now, the small-signal or dynamic resistance is just
$$\frac{dv_D}{di_D} = \frac{nV_T}{I_S + i_D} \approx \frac{nV_T}{i_D} $$
how is it different from the normal resistance
As shown above, the diode static resistance (the ratio of the diode voltage and current) differs from, and is in fact larger than, the diode dynamic resistance by the factor of \$\ln (1 + \frac{i_D}{I_S})\$
$$\frac{v_D}{i_D} = \frac{dv_D}{di_D} \ln (1 + \frac{i_D}{I_S})$$
which is to say that, in typical operating ranges, the diode dynamic resistance is much smaller than the diode static resistance.
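To make the gap concrete, here is a small numeric sketch (my own; the ideality factor, thermal voltage and saturation current are assumed typical values, not taken from the question):

```python
# Static vs. dynamic resistance of an ideal diode at an assumed operating point.
import numpy as np

n, VT, IS = 1.0, 0.025, 1e-12   # assumed ideality factor, thermal voltage (V), saturation current (A)
iD = 1e-3                        # assumed operating current: 1 mA

vD = n * VT * np.log(1 + iD / IS)     # invert the ideal diode equation
r_static = vD / iD                    # v_D / i_D
r_dynamic = n * VT / (IS + iD)        # dv_D / di_D

print(r_static, r_dynamic)  # ~518 ohm vs ~25 ohm: static exceeds dynamic by ln(1 + iD/IS) ~ 20.7
```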
Does the power dissipation relation, \$P=I^2r\$ hold in case of dynamic resistances?
The instantaneous power associated with the diode is
$$p_D = v_D i_D = nV_Ti_D\ln (1 + \frac{i_D}{I_S}) \ne i_D^2\frac{nV_T}{i_D} = nV_Ti_D $$
Since the power associated with a circuit element is always the product of the voltage across and current through, one would not use the dynamic resistance but rather the static resistance.
For any two-terminal device, or for any two terminals of any device, we can graph current vs. voltage. For a purely resistive device, this is a straight line passing through the origin, and the slope is the inverse of resistance. For a non-linear device, like a diode, it's not a straight line (that's what not linear means). Example:
At any point, the slope of this line is the dynamic conductance; the inverse of that is the dynamic resistance. For example, in the reverse region, the line has a very low slope, a very low conductance, or a very high resistance. In the forward region, high slope, high conductance, low resistance.
\$P=I^2r\$ does not hold if \$r\$ is the dynamic resistance. \$P=IE\$ does, however.
The reason \$P=I^2R\$ works is because a resistive device obeys Ohm's law, \$E=IR\$. From this, we can calculate the power from any one of voltage or current, because although we need both for power, we can calculate one from the other:
$$ \begin{align} P&=IE \\ E&=IR \\ P&=I(IR) \\ P&=I^2 R \end{align} $$
Because non-linear devices don't obey Ohm's law, \$P=I^2 R\$ does not apply for them. \$P=IE\$ does, however, and if you can come up with some other equation which relates current to voltage for that device and substitute it into \$P=IE\$, you could come up with an equation which calculates power from just voltage, or just current, for that device.
Phil Frost's answer is great. I'd just like to add that as a rough approximation, it's often possible to model a diode (or other junction drop, like a BJT or IGBT) as a voltage source in series with a resistor.
So for large-signal purposes, you can estimate the losses through the diode as: $$ P=I_{mean}V_{D} +I_{RMS}^2R_D $$
Whether this estimate gets you close enough or not depends entirely on your domain, but I've had good success with it for narrowing down my component selection in switching power supplies.
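As a rough worked instance of that estimate (all numbers here are assumptions for illustration, not from any datasheet):

```python
# Large-signal diode loss estimate P = I_mean*V_D + I_rms^2 * R_D with assumed values.
I_mean = 2.0   # A, average diode current
I_rms = 2.5    # A, RMS diode current
V_D = 0.7      # V, assumed fixed-drop model parameter
R_D = 0.05     # ohm, assumed series-resistance model parameter

P = I_mean * V_D + I_rms**2 * R_D
print(P)  # 1.4 + 0.3125 ~ 1.71 W
```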
how is it different from the normal resistance R = V/I?
It is different because it depends on the current we are letting through the diode, which gives different apparent values of resistance. The normal resistance R = V/I will always appear the same when you measure it on a resistor (apart from minor deviations).
Does the equation for the voltage drop work if we use the dynamic resistance with the instantaneous current (V=Ir)?
It does, because 'r' is defined such that it shows the apparent resistance of diode, with regard to given current and voltage. However, for the real picture, looking at the V/I characteristic of a diode would be a better choice.
Does the power dissipation relation, P=I^2*r hold in case of dynamic resistances? If it does, is power dissipated as heat even in case of the n-p junction?
It holds, but as I changes, P will change swiftly too, because P is in quadratic proportion with I, and r itself also depends on I through the derivative relation.
Think about Ohm's law: if you have a circuit of two series resistors (1k and 2k, U=3V), you could find the equivalent resistance (\$ R_e=R_1+R_2=3k \$), calculate the current (\$ I=\frac{U}{R_e}=1mA \$) and then work out the resistors' voltage drop (\$ U_d=IR \$; \$ U_1=1*1=1V \$; \$ U_2=1*2=2V \$). Nice. The maths do not break either: \$R_2\$'s current is \$I_{R2}=\frac{U-U_1}{R_2}=\frac{3-1}{2k}=1mA\$. Unlike resistors, which have constant resistance, diodes are defined by constant voltage drop! (yep, resistors also have capacitance and inductance, but in most cases these are not important, so let's consider that to be true). So all you need to do here is to change your variables accordingly: for resistors you know resistance, for diodes - voltage drop. Back to power dissipation: \$ P=I^2R \$ only works because we can do variable substitution, but in the end the power dissipation is given by \$ P=UI \$. The question is why do we use the first form for traditional resistors? Because we know circuit current and resistance and are too lazy to work the voltage drop out. And for diodes we already know that. So you can't use \$ P=I^2R \$ for diodes because it is a derived form of \$ P=UI \$ when \$R(U)=const\$. $$ P=UI \ \ \ \ \ <= \text{works even on Mars} $$ And if you are really unsure if diodes can produce heat just disassemble any high power LED torch - you will find that powerful LED happily sitting on top of an aluminium radiator. It is there for a reason, right?
In addition to my comment above, let me give a short explanation of why the diode behaves non-linearly. At the moment the polarity of the applied voltage is inverted (from reverse to forward), the initial resistance of the diode is high due to the lack of carriers in the semiconductor structure. As the forward current increases, the number of carriers builds up and the resistance goes down. This initially high resistance produces an overshoot of the forward voltage above its steady-state value, after which the characteristic “knee” is established. This phenomenon lasts some time and is highly dependent on temperature, which is why the dynamic resistance of the diode is calculated at both ambient and high virtual temperatures. This forward voltage value is called the “Forward Recovery Voltage” and in some diodes it can be high (30 V or more). In power engineering, this transient “conductivity modulation” is often neglected and the equation is linear (like a MOV). But in small-signal applications the transition capacitance of the junction, as well as the stored-charge equivalent capacitance, should be included in the model analysis.
Regarding the last question:
[...] is power dissipated as heat even in case of the n-p junction? I think it is unlikely, as the hole-electron recombinations are the dominant phenomenon here, and I am unsure whether those can produce heat.
When an electron and a hole recombine, the extra energy is emitted in the form of a photon. When a photon collides with other particles of matter, it causes increased particle movement, which in turn results in an increase in temperature. So there is heat involved, isn't there?
|
I have multiple regression with, say 3 independent variables: $Y=B_0+B_1x_1+B_2x_2+B_3x_3$ I would like to test if $B_2+3B_3$ is significantly different from zero, i.e. $$H_0: B_2+3B_3=0$$ $$H_1: B_2+3B_3\neq 0$$ Can you please help to find appropriate way to test for significance of linear functions of two coefficients as in above example. Many thanks in advance.
If your errors are normal and regressors are non-random, the OLS estimates of the coefficients are normal:
$$\hat\beta-\beta\sim N(0,\sigma^2(X'X)^{-1})$$
Hence any linear combination is normal too:
$$R(\hat\beta-\beta)\sim N(0, \sigma^2 R(X'X)^{-1}R')$$
You want to test that $R\beta=r$, with $R$ being $[0,0,1,3]$ and $r=0$. The Wald statistic for testing the null hypothesis $R\beta=r$ is
$$(R\hat\beta-r)'(\sigma^2 R(X'X)^{-1}R')^{-1}(R\hat\beta-r)\sim \chi^2_q,$$
where $q$ is the rank of $R$, which in your case is simply 1. You have unknown $\sigma^2$, simply plug in the consistent estimate and you are good to go.
This statistic is implemented in practically all the statistical packages which estimate linear regression. In R you need to use the function linearHypothesis from the package car.
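For completeness, a sketch of the same test in Python with statsmodels (the simulated data frame and coefficient values here are made up for illustration):

```python
# t test of the single linear restriction B2 + 3*B3 = 0 in an OLS fit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=["x1", "x2", "x3"])
df["y"] = 1 + 2*df.x1 + 0.5*df.x2 + 0.1*df.x3 + rng.normal(size=200)

res = smf.ols("y ~ x1 + x2 + x3", data=df).fit()
print(res.t_test("x2 + 3*x3 = 0"))  # estimate, standard error, t value and p value
```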
Your intuition is correct that you cannot just simply add together the estimates of the two parameters. Luckily as @caburke suggests in his comment this is a very standard application of regression and there is a way to do this. The key words to search for are linear combination of estimates from linear regression or (mysteriously) "contrasts".
Given your assumptions, your linear combination of estimates will itself have a t distribution, with standard error equal to
$ s\sqrt{b^t(X^tX)^{-1}b}$
Where b is the vector indicating your linear combination of coefficients you are interested in (in your case, [0,0,1,3]); X is your original matrix of explanatory data (including a column of 1s for the intercept) and $s^2$ is the estimated residual variance.
Most stats software will have a way of doing all of this linear algebra for you.
There are doubtless packages in R (eg the 'contrast' package) that have this conveniently wrapped up if you don't want to do it by hand. A nice little basic function that does it in R is available here: https://notendur.hi.is/thor/TLH2010/Fyrirlestrar/Kafli4/lincomRv8.R. Sorry, I can't identify the author of it, but for the record (in case the link goes down) here is the code:
# A function to estimate a linear combination of parameters from a linear model along
# with the standard error of such a combination.
# lm.result (or model.result) is the result from lm or glm.
# contrast.est is the estimate.
# contrast.se is the standard error.
lincom <- function(model.result, contrast.vector, alpha = 0.05) {
  beta.coef <- coef(model.result)[1:length(contrast.vector)]
  dispersion.param <- summary.lm(model.result)$sigma
  beta.cov <- dispersion.param^2 *
    summary(model.result)$cov.unscaled[1:length(contrast.vector), 1:length(contrast.vector)]
  df.error <- summary(model.result)$df[2]
  contrast.est <- c(t(contrast.vector) %*% beta.coef)
  contrast.se <- sqrt(c(t(contrast.vector) %*% beta.cov %*% contrast.vector))
  tvalue <- contrast.est / contrast.se
  lowerb <- contrast.est - qt(1 - alpha/2, df.error) * contrast.se
  upperb <- contrast.est + qt(1 - alpha/2, df.error) * contrast.se
  pvalue <- 2 * (1 - pt(abs(tvalue), df.error))
  return(list(contrast.est = contrast.est, contrast.se = contrast.se,
              lower95CI = lowerb, upper95CI = upperb,
              tvalue = tvalue, pvalue = pvalue))
}
|
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talks about gravitational waves emitted from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as I imagine the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken account of
The idea is that if the possible relaxations between energy levels is restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy high energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations to give the same high energy, thus effectively create an entropy trap to minimise heat loss to surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I've thought you were mentioning at my questions directly to the close voter, not the question in meta. When you mention about my original post, you think that it's a hopeless mess of confusion? Why? Except being off-topic, it seems clear to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be a multiple power of 3?Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
|
Previously, we used visualizations to see the closest customers to our salesperson, Bob. Now using a simple graph works fine when Molly only has one salesperson. However, as the business grows to have more salespeople, or when it becomes more difficult to just
see who is closest, we need a more formalized approach. So let's attempt to calculate the distance between Bob and a customer.
Once again, here are the locations of Bob and our customers:
| Name | Avenue # | Block # |
|---|---|---|
| Bob | 4 | 8 |
| Suzie | 1 | 11 |
| Fred | 5 | 8 |
| Edgar | 6 | 13 |
| Steven | 3 | 6 |
| Natalie | 5 | 4 |
And here are these locations in the form of a scatter plot.
Now, there are no labels in this particular plot, but we should be able to make sense of it anyway. Our table says that Bob is located at avenue 4 and block 8, and when we look at where the x-axis reaches 4 and the y-axis reaches 8, we see a marker right there - that's Bob.
Now our next task is to calculate the distance between Bob, at $(4, 8)$ and a specific customer, the customer at $(3, 6)$. In the general form, we can layout our problem by saying we want to calculate the distance between $(x_1, y_1)$ and $(x_2, y_2)$. And for this example, we can assume that when calculating our distance that avenues and blocks are the same length, so we can just calculate our distance in terms of blocks.
Those numbers at the bottom of $(x_1, y_1)$ and $(x_2, y_2)$ are used to say that we are referring to our first $x$ and $y$ values with one point, $(x_1, y_1)$ and our second $x$ and $y$ values of $(x_2, y_2)$.
Here's our first attempt at calculating distance. To go from our first point of 8th street and 4th avenue to our second point of 6th street and 3rd avenue, we simply go two blocks down and one block to the left for a distance of three blocks. Now that's a good start, but it's not how mathematicians would calculate distance between points. Instead they would define distance to be the length of the shortest path between two points, and to achieve this, draw a diagonal straight line between the two points, and calculate the length of that line. This is called Euclidean distance, and is what we will explain below. This will be our approach:
Ok let's do it.
The definition of distance is the length of the shortest path between two points. So imagine, if it helps, that Bob made his deliveries with the help of a drone. So then what is the shortest path his drone could take from (4, 8) to (3, 6)? We won't prove it, but a single straight line between any two points is the shortest path between them. Here, the shortest path between (4, 8) and (3, 6) is a straight diagonal line between them.
That blue diagonal line between the two points is the shortest path between the two points, and therefore the distance between the two points. Now we need to calculate the length of that line.
Ok, to calculate that line, we imagine it forms the longest side of a
right triangle. With one side of the right triangle spanning horizontally from the lowest x-value to the highest x-value, 3 to 4, and another side spanning vertically from the lowest y-value to the highest y-value, 6 to 8. Doing this above, you can see we have a nice triangle.
Formally, a right triangle is any triangle where one of the angles is 90 degrees -- also called the
right angle. But you can also just know that, if one of the sides is perfectly vertical and the other is perfectly horizontal, then we have a right triangle. What's great about right triangles is that we have a good formula for calculating the length of each of the sides of a right triangle. And once we calculate the longest side of the right triangle, we have our distance between the two points.
Ok, this is the formula for calculating the lengths of the sides of a right triangle, called the Pythagorean Theorem:
$$ a^2 + b^2 = c^2 $$
Let's break this formula down. In the formula above, $a$ and $b$ are the shorter sides of right triangle -- called the
legs -- and $c$ is the longest side of the right triangle -- the *hypotenuse*. In the graph above, $a$ and $b$ would be the gray dotted lines. And $c$ is the diagonal line, the hypotenuse. Looking at the Pythagorean Theorem above, it says that the length of the first leg squared plus the length of the second leg squared equals the length of the hypotenuse squared.
This is great because we already have what we need to calculate the length of our two legs, and with that we will have filled in the information needed to find the hypotenuse. Take a look at the graph below. We calculate the length of the horizontal side by subtracting our first x-value from our second x-value, then taking the absolute value. We calculate the length of the vertical side by subtracting the first y-value from the second y-value then taking the absolute value.
To take the absolute value of a number means to not consider whether a number is negative. So the absolute value of -100 is 100, and the absolute value of -20 is 20. We indicate that we are taking the absolute value of a number with the use of pipes. So we can say $ \vert -20 \vert = 20$.
So in our formula of:
$$ a^2 + b^2 = c^2 $$ we have that: $$a = \vert x_1 - x_2 \vert = \vert 4 - 3 \vert = 1 $$ $$b = \vert y_1 - y_2 \vert = \vert 8 - 6 \vert = 2 $$
$$ c^2 = a^2 + b^2 = 1^2 + 2^2 = 5 $$
So simply by plugging in data, we can see that five equals the length of the hypotenuse, squared. And solving for our hypotenuse we have
$c = \sqrt{5}$
Writing $\sqrt{5}$ as a distance is perfectly fine. However, if you would rather see this in decimal form, simply plug the $\sqrt{5}$ into a calculator (or type "sqrt 5" into Google's search bar) and you will get about 2.24. So that is the distance between our two points. We have solved it!
Squares, Square Roots, and Inverses
Squares - Squaring something simply means multiplying something by itself. So for example, 5 squared equals 5 $\times$ 5. Four squared equals 4 $\times$ 4. We denote four squared with a raised number 2, as in $4^2$. The two is the number of times we are multiplying four by itself.
Square Roots - Now we can go from a number's square back to the original number with the square root. For example, the square of 4 is 4 $\times$ 4, which equals 16. And the square root of 16 should undo the operation of squaring, so the square root of 16 equals four, and we denote the square root of 16 as $\sqrt{16}$.
Inverses - In mathematics, the inverse is anything that undoes the operation. So the inverse of squaring is taking the square root. The inverse of multiplying by ten is dividing by ten. Here is a question:
What is the inverse of putting shoes on?
Well, just think of the definition - inverse means undoing the operation, and the undoing putting shoes on is taking shoes off.
Now that we have seen how to solve for distance between two points in our example above, let's make sure we know how to do this in general. We start with the formula:
$$ a^2 + b^2 = c^2 $$ and putting this formula in terms of our two points, $x$ and $y$, we have : $$ (x_1 - x_2)^2 + (y_1 - y_2)^2 = c^2 $$
and solving for $c$ we have
$c = \sqrt{(x_{1} - x_{2})^2 + (y_{1} - y_{2})^2}$
where $c$ is the length of the hypotenuse, or the shortest distance between our two points.
So, that is the famous Pythagorean Formula, which says that in a right triangle, $a^2 + b^2 = c^2$, where $c$ is the length of the hypotenuse and $a$ and $b$ are the other two sides. So given two points, $(x_1, y_1)$ and $(x_2, y_2)$, we can plug them into our formula above to solve for the distance between them, $c$.
Ok, that was a lot of math, but it was worth it. We now know that to calculate the distance between any two points we can simply square the differences between the points' $x$ values and $y$ values, sum the squares, and then take the square root of the sum.
We can do this because a straight line is the shortest path between two points, also known as the distance between two points. Because the Pythagorean Theorem gives us the formula $a^2$ + $b^2$ = $c^2$, for calculating the length of a hypotenuse of any right triangle, we simply extend a horizontal line and a vertical line from the two points to form the remaining two sides of a right triangle. Then we calculate the distance by knowing that the square of one side, $a^2$ is $(x_{1} - x_{2})^2$ or the x coordinate of one point minus the x coordinate of the other point squared, and the square of the length of the other side, that is $b^2$, is $(y_{1} - y_{2})^2$, the y coordinate of one point minus the y coordinate of the other point squared. So this gives us $(x_{1} - x_{2})^2$ + $(y_{1} - y_{2})^2$ = $c^2$ or, solving for c, $c = \sqrt{(x_{1} - x_{2})^2 + (y_{1} - y_{2})^2}$.
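Putting the whole derivation into code, a minimal sketch (the function name is my own):

```python
# A direct implementation of the distance formula c = sqrt((x1-x2)^2 + (y1-y2)^2).
import math

def distance(point_a, point_b):
    x1, y1 = point_a
    x2, y2 = point_b
    return math.sqrt((x1 - x2)**2 + (y1 - y2)**2)

print(distance((4, 8), (3, 6)))  # 2.236..., i.e. sqrt(5): Bob to the customer at (3, 6)
```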
|
A quadratic function is a second degree polynomial function. The graph of a quadratic function is a parabola; it opens upwards when the leading coefficient is positive and downwards when it is negative. The point at which the function attains its maximum or minimum value is the vertex of the quadratic function. When we say second degree, the variable is raised to the second power, as in $x^{2}$.
A quadratic function is a polynomial function of the form $f(x)=ax^{2}+bx+c$. The roots of the corresponding quadratic equation $ax^{2}+bx+c=0$ are given by the quadratic formula:
\[\large x=\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}\]
Where,
a, b and c are the coefficients given in the equation.
This formula is used to solve any quadratic equation and get the values of the variable or the roots.
Solved example Question: Solve $x^{2}-6x+8=0$ Solution
Given,
a = 1, b = -6, c = 8
Using the formula: $x=\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}$
$x=\frac{6\pm \sqrt{(-6)^{2}-4\times 1\times 8}}{2\times 1}=\frac{6\pm \sqrt{4}}{2}=\frac{6\pm 2}{2}$
The roots are: $x_{1}=4$ and $x_{2}=2$
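As a quick check of the worked example, a short sketch (assuming real roots, i.e. a non-negative discriminant):

```python
# Roots of a x^2 + b x + c = 0 via the quadratic formula.
import math

def quadratic_roots(a, b, c):
    disc = b*b - 4*a*c          # discriminant
    root = math.sqrt(disc)      # assumes disc >= 0 (real roots)
    return ((-b + root) / (2*a), (-b - root) / (2*a))

print(quadratic_roots(1, -6, 8))  # (4.0, 2.0)
```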
|
What does $|H(j\omega)|^2$ in $20 \log_{10}|H(j\omega)|$ mean?
Is it some ratio of energy or power? And why? How to derive it?
I'm sorry for the curtness of my question.
I suppose you're talking about the frequency response of a continuous-time linear time-invariant system. It is defined as the Fourier transform of the system's impulse response $h(t)$:
$$H(j\omega)=\int_{-\infty}^{\infty}h(t)e^{-j\omega t}dt$$
Its practical relevance is that it shows the frequency dependence of the system's input-output relation. If $X(j\omega)$ is the Fourier transform of the input signal, and $Y(j\omega)$ is the Fourier transform of the output signal, then $$H(j\omega)=\frac{Y(j\omega)}{X(j\omega)}$$
For a sinusoidal input signal $x(t)=\sin (\omega_0 t)$ the output signal $y(t)$ is given by
$$y(t)=|H(j\omega_0)|\sin (\omega_0 t + \text{arg}\{H(j\omega_0)\})$$
where $\text{arg}\{H(j\omega_0)\}$ is the phase of the complex function $H(j\omega)$ at frequency $\omega_0$. So you can see that $|H(j\omega)|$ represents the amplification of the system at frequency $\omega$. From the above relations it is also obvious that the output signal cannot contain any frequencies that were not present in the input signal. $H(j\omega)$ can only reduce or amplify frequency components of the input signal. This is how (linear time-invariant) filtering works. As for the power question: since power is proportional to squared amplitude, $|H(j\omega)|^2$ is the system's power gain, and $20 \log_{10}|H(j\omega)| = 10 \log_{10}|H(j\omega)|^2$ is just that power ratio expressed in decibels.
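For illustration, a small numeric sketch (the first-order low-pass and its corner frequency are assumptions of mine, not from the question):

```python
# Magnitude response in dB, 20*log10|H(jw)|, for H(s) = 1 / (1 + s/w0).
import numpy as np
from scipy.signal import freqs

w0 = 2 * np.pi * 1000.0            # assumed corner frequency, rad/s
b, a = [1.0], [1.0 / w0, 1.0]      # numerator and denominator of H(s)
w, h = freqs(b, a, worN=np.logspace(2, 6, 500))

gain_db = 20 * np.log10(np.abs(h))
print(gain_db[np.argmin(np.abs(w - w0))])  # about -3 dB at the corner frequency
```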
|
$$\frac{\tan^2(3x)}{1+\tan^2(3x)}+\frac{\tan^2(2x)}{1+\tan^2(2x)}+\frac{\tan^2(x)}{1+\tan^2(x)}=2$$ if we simplify ,we have $$\sin^2(3x)+\sin^2(2x)+\sin^2(x)=2\\\frac{1-\cos(6x)}{2}+\frac{1-\cos(4x)}{2}+\frac{1-\cos(2x)}{2}=2\\\cos(6x)+\cos(4x)+\cos(2x)=-1\\\cos(6x)+\cos(2x)=-(1+\cos(4x))\\2\cos(4x)\cos(2x)=-2\cos^2(2x)\\\begin{cases}\cos(2x)= 0 \to 2x=\pm\frac{\pi}{2}+k\pi\\\cos(2x)=-\cos(4x)\to \begin{cases}\cos(2x)=-1\\\cos(2x)=\frac12\end{cases}\end{cases},$$ but all of the roots are not acceptable because ,the denominator of $\tan(3x) $ or $ \tan(2x) $ or$ \tan(x) $ going to be zero .
Is my work true? If we draw $f(x)=\frac{\tan^2(3x)}{1+\tan^2(3x)}+\frac{\tan^2(2x)}{1+\tan^2(2x)}+\frac{\tan^2(x)}{1+\tan^2(x)}$ and $g(x)=2$ with desmos, we will see some roots: $\frac{\pi}{6}, \frac{\pi}{2}$ are recognizable, but what about the others? https://www.desmos.com/calculator/x9ikgkbyhp
I think you are right and our equation has no solutions.
"Roots" $a$ which you see they are because $$\lim_{x\rightarrow a}\left(\frac{\tan^23x}{1+\tan^23x}+\frac{\tan^22x}{1+\tan^22x}+\frac{\tan^2x}{1+\tan^2x}\right)=2.$$ It follows from your solution, but the domain says that they are not roots.
The problem here is that whoever produced this problem has constructed a form for $\sin^2(x)$ which is artificial, for the purpose of the puzzle.
It therefore seems like there is a problem.
But there is not, because everywhere it is defined the expression involving $\tan(x)$ reduces to $\sin^2(x)$.
The artificiality of using $\tan(x)$ is a red herring.
It would be like defining $x$ as $\frac {x^2}{x}$ and then claiming there was a singular point at $x=0$ (which there is not).
Suppose otherwise: there exists $x\in{\bf R}$ such that $$ \frac{\tan^2(3x)}{1+\tan^2(3x)}+\frac{\tan^2(2x)}{1+\tan^2(2x)}+\frac{\tan^2(x)}{1+\tan^2(x)}=2. $$ If we simplify (using the formula $\sin^2a+\cos^2a=1$ for each term), we have $$\sin^2(3x)+\sin^2(2x)+\sin^2(x)=2\\ \Rightarrow \frac{1-\cos(6x)}{2}+\frac{1-\cos(4x)}{2}+\frac{1-\cos(2x)}{2}=2\\ \Rightarrow \cos(6x)+\cos(4x)+\cos(2x)=-1\\ \Rightarrow\cos(6x)+\cos(2x)=-(1+\cos(4x))$$
which implies that (this step is too quick for me, I don't see why (*) is obviously true.)$$ 2\cos(4x)\cos(2x)=-2\cos^2(2x)\tag{*} $$ $$\tag{**} \begin{cases} \cos(2x)= 0 \to 2x=\pm\frac{\pi}{2}+k\pi\\ \cos(2x)=-\cos(4x)\to \begin{cases}\cos(2x)=-1\\ \cos(2x)=\frac12 \end{cases} \end{cases}, $$
(Too quick. Why does $\cos(2x)=-\cos(4x)$ imply the values you claim in (**)?) but all of the roots are not acceptable because the denominator of $\tan(3x)$ or $\tan(2x)$ or $\tan(x)$ is going to be zero.
($\cos(2x)=0$ certainly gives a contradiction. How do $-1$ or $1/2$ do so?)
Is my work true?
(Unclear. You don't have obvious logic mistakes but proof for several important places is missing.)
|
We touched on quadratic residues when discussing Pythagorean triples. We relied upon the quadratic residues and non-residues in modulo \(4\) for our proof that \(x\) and \(y\) took opposite parities.
These residues have important applications in encryption, integer factorisation and sound diffusion.
Definition of quadratic residues :
Let :
\(p\) be an odd prime and \(a \not \equiv 0 \pmod p\).
If the congruence \(x^2 \equiv a \pmod p\) has a solution, then \(a\) is said to be
a quadratic residue of \(p\).
Otherwise, \(a\) is
a quadratic non-residue of \(p\).
We are more interested in finding when a quadratic congruence has a solution than in solving the quadratic congruence at this point. It leads to Euler’s Criterion, Gauss’ Lemma and the Law of Quadratic Reciprocity.
The quadratic residues of
p :
For any odd prime
p, there are \(\frac {p - 1}{2}\) quadratic residues and \(\frac {p - 1}{2}\) quadratic non-residues.
The quadratic residues are congruent modulo
p to the integers \(1^2, 2^2, 3^2, \ldots , \left(\frac {p - 1}{2}\right)^2\).
They are symmetrical.
In modulo \(5\) the quadratic residues are :
\(1^2 \equiv 1 \pmod 5 \\ 2^2 \equiv 4 \pmod 5 \\ 3^2 \equiv 4 \pmod 5 \\ 4^2 \equiv 1 \pmod 5 \)
Modulo \(7\) gives the set \(\{1, 4, 2, 2, 4, 1\}\).
Modulo \(11\) gives the set \(\{1, 4, 9, 5, 3, 3, 5, 9, 4, 1\}\).
Modulo \(13\) gives the set \(\{1, 4, 9, 3, 12, 10, 10, 12, 3, 9, 4, 1\}\).
Euler’s Criterion :
Let :
\(p\) be an odd prime and \(a \not \equiv 0 \pmod p\).
\(a\) is a quadratic residue of \(p\) if, and only if, \(a^{(p-1)/2} \equiv 1 \pmod p\).
\(a\) is a quadratic non-residue of \(p\) if, and only if, \(a^{(p-1)/2} \equiv -1 \pmod p\).
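A small computational sketch (mine, not part of the original notes) that lists the residues and verifies Euler's Criterion:

```python
# Quadratic residues mod an odd prime p, checked against Euler's Criterion.
p = 13
residues = sorted({(x * x) % p for x in range(1, p)})
print(residues)  # [1, 3, 4, 9, 10, 12]: exactly (p-1)/2 = 6 residues

for a in range(1, p):
    euler = pow(a, (p - 1) // 2, p)          # a^((p-1)/2) mod p
    assert (euler == 1) == (a in residues)   # 1 for residues, p-1 (i.e. -1) otherwise
```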
© OldTrout \(2018\)
|
Skip to 0 minutes and 11 seconds Welcome again to the "It's Your Turn" step. You're dealing today with trigonometric inequalities, which is quite a difficult topic. In exercise one, we've got a product of terms, and we want it to be strictly positive. So we will study separately the signs of the two terms. Let us begin with the study of the sign of sine of x. Well, sine of x vanishes whenever x is a multiple of pi.
Skip to 0 minutes and 43 seconds And we can indicate the zeros of sine x on the trigonometric circle, corresponding to 0 and pi. We shall use the inner circle for the sign of sine of x. So we write here 0, and we write here 0. And these will be used for the sign of sine of x. So the sine of x is positive here, and negative elsewhere. And concerning 2 cosine of x minus 1, this is equal to 0 whenever cosine of x equals 1/2. And this means, well, x equals pi thirds, or minus pi divided by 3, plus a multiple of 2 pi. Again, we can indicate these values on the outer trigonometric circle.
Skip to 1 minute and 56 seconds And here we have got pi thirds, and here we've got minus pi thirds.
Skip to 2 minutes and 9 seconds The cosine of x is strictly greater than 1/2 whenever we are on the right-hand side of this vertical line.
Skip to 2 minutes and 30 seconds And so the cosine will be positive in this region. So here, it will be greater than 1/2 in this region. And we write the plus sign for the expression 2 cosine of x minus 1. And here we write the negative sign.
Skip to 2 minutes and 55 seconds So here we have a negative sign, negative sign, negative sign, and a negative sign. Now we use the product rule of signs, and we write in the outer region the sign of sine of x times 2 cosine of x minus 1. Here, in this region, we get a plus sign.
Skip to 3 minutes and 24 seconds Here we get 0. Here also we get 0, because it is a zero of 2 cosine x minus 1. Here also we get 0-- let me write it here-- because it is a zero of 2 cosine of x minus 1. Here we get minus times plus, so a minus sign. Here we get the plus sign.
Skip to 3 minutes and 59 seconds Here we get 0, because it is a zero of sine of x. And here we get again a minus sign.
Skip to 4 minutes and 11 seconds So the regions where the product is positive are this region and this other region here. So the solution: sine of x times 2 cosine of x minus 1 will be strictly greater than 0 if and only if x belongs-- well, here we are in minus pi up to minus pi thirds, or from 0 to pi thirds, plus any multiple of 2 pi.
Skip to 4 minutes and 56 seconds And this ends exercise one.
It's your turn on trigonometric inequalities, 1
Do your best in trying to solve the following problems. In any case, some of them are solved in the video and all of them are solved in the pdf file below. Exercise 1.
Solve the inequality \(\sin x\left(2\cos x-1\right)>0.\)
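As a numerical cross-check of the sign analysis in the video (a sketch of mine, not part of the course materials):

```python
# Scan one period and locate sign changes of sin(x) * (2*cos(x) - 1).
import numpy as np

x = np.linspace(-np.pi, np.pi, 200001)
f = np.sin(x) * (2 * np.cos(x) - 1)
changes = x[np.where(np.diff(np.sign(f)) != 0)[0]]
print(changes)  # approximately -pi/3, 0, pi/3 (plus the period endpoints);
                # f > 0 on (-pi, -pi/3) and (0, pi/3), matching the video
```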
© Università degli Studi di Padova
|
To understand this you will also need to understand the first FTC, which I will show you. (As you said, there may be more than one proof; I will base mine on one similar to that given in Stewart's calculus books.)
You may already notice during the proof of the first the connection to the derivative, which I assume you are familiar with.
$\mathbf{FTC 1:}$ If $f$ is continuous on $[a,b]$ then the function g defined by
$g(x)= \int_{a}^{x} f(t)dt$ for $a \le x \le b$ is continuous on $[a,b]$ and differentiable on $(a,b)$, and $g'(x)=f(x)$.
$\mathbf{Proof:}$
Suppose we have $x$ and $x+h$ in $(a,b)$, then we have
$$g(x+h)-g(x)= \int_{a}^{x+h}f(t)dt-\int_{a}^{x}f(t)dt$$ $$=\int_{a}^{x}f(t)dt+\int_{x}^{x+h}f(t)dt-\int_{a}^{x}f(t)dt$$
$$=\int_{x}^{x+h}f(t)dt$$
Thus, we have for $h \neq 0$
$$\frac{g(x+h)-g(x)}{h}=\frac{1}{h}\int_{x}^{x+h}f(t)dt$$
If we assume $h \gt 0$, since $f$ is continuous on $[x,x+h]$ the Extreme Value Theorem says that there exist numbers $u$ and $v$ in the closed interval $[x,x+h]$ such that $f(u)=m$ and $f(v)=M$, where $m$ and $M$ are the minimum and maximum values of $f$ on the interval, respectively.
Thus, by further properties of Riemann integral
$$mh \le \int_{x}^{x+h}f(t)dt \le Mh$$
$\rightarrow$
$$f(u)h \le \int_{x}^{x+h}f(t)dt \le f(v)h$$
$\rightarrow$
$$f(u) \le \frac{1}{h} \int_{x}^{x+h}f(t)dt \le f(v)$$
ie,
$$f(u) \le \frac{g(x+h)-g(x)}{h} \le f(v)$$
( If $h \lt 0$ the same argument can be repeated with small changes)
so now we invoke limits,
Let $h \to 0$; then you can see that $u \to x$ and $v \to x$ (it is clear from the interval they are in).
So the limits of both outer sides are the same, and thus by the Squeeze Theorem the middle limit exists and we have
$$g'(x)=\lim_{h \to 0} \frac{g(x+h)-g(x)}{h}=f(x)$$
thus , we finally have $$\frac{d}{dx}\int_{a}^{x}f(t)dt=f(x)$$
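As a quick numerical illustration of FTC 1 (a sketch; the sample $f$ and step size are arbitrary choices of mine):

```python
# Build g(x) = integral_0^x cos(t) dt numerically and check that g'(x) recovers cos(x).
import numpy as np
from scipy.integrate import cumulative_trapezoid

x = np.linspace(0.0, 2.0, 2001)
f = np.cos(x)                                 # a sample continuous f
g = cumulative_trapezoid(f, x, initial=0.0)   # g(x) = int_0^x f(t) dt
g_prime = np.gradient(g, x)                   # numerical derivative of g

print(np.max(np.abs(g_prime - f)))            # small: g' = f, as FTC 1 asserts
```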
$\mathbf{FTC2:}$ If f is continuous on $[a,b]$, then $\int_{a}^{b}f(x)dx=F(b)-F(a)$, where F is any antiderivative of f, that is $F'=f$.
$\mathbf{Proof:}$
Let $g(x)= \int_{a}^{x}f(t)dt$, then from FTC1 we have that $g'(x)=f(x)$
That is, $g$ is an antiderivative of $f$. Now, if there is another antiderivative of $f$, say $F$, on the same closed interval $[a,b]$, then we know the only difference they can have is a constant,
$F(x)=g(x)+C$.
Consider $g(a)=\int_{a}^{a}f(t)dt=0$
Now consider,
$$F(b)-F(a)=[g(b)+C]-[g(a)+C]$$
$$=g(b)-g(a)=g(b)$$ $$= \int_{a}^{b} f(t)dt$$
|
Let $C(r)$ be the origin-centered circle of radius $r$, and let $\beta(r)$ be the exterior buffer around $C(r)$: the distance from $C(r)$ to the closest lattice point exterior to $C(r)$. For example, $\beta(2) = \sqrt{5}-2 \approx 0.24$ because there are no lattice points strictly between $C(2)$ and $C(\sqrt{5})$, and this is the largest buffer around $C(2)$.
I am interested in the behavior of $\beta(r)$ for large $r \in \mathbb{R}$, as I believe understanding that behavior will answer my question concerning ratchet spirals, Lattice radial-step (ratchet) spirals.
I'll pose a specific question before formulating the general question.
Q1. Is there an $R$ such that, for all $r > R$, $\beta(r) < \frac{1}{2}$ ?
If so, then, for example, the spiral $S(3,\frac{1}{2})$ depicted in that question is unbounded.
Q2. Is there an $R(\epsilon)$ such that, for all $r > R(\epsilon)$, $\beta(r) < \epsilon$, where $0 < \epsilon < 1$ ?
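For experimenting with Q1 and Q2, a brute-force sketch (my own; the search box size is a crude but sufficient bound):

```python
# Compute beta(r) by scanning all lattice points just outside the circle of radius r.
import math

def beta(r):
    best = float('inf')
    R = int(math.ceil(r)) + 2        # lattice points in this box suffice
    for x in range(-R, R + 1):
        for y in range(-R, R + 1):
            d = math.hypot(x, y)
            if d > r:
                best = min(best, d - r)
    return best

print(beta(2))  # 0.23606..., i.e. sqrt(5) - 2, matching the example above
```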
|
Let $$A$$ and $$B$$ be two sets. The set difference of $$A$$ and $$B$$, denoted as $$A - B$$, is the set of all the elements of $$A$$ that are not members of $$B$$.
Let $$A$$ and $$B$$ be two sets. The set difference $$A - B$$ is:
$$$A-B=\{x \ | \ x\in A \ \text{ and } \ x\notin B\}$$$
Elements belonging to the set difference $$A - B$$ are those elements that belong to $$A$$ and do not belong to $$B$$.
If $$A = \{a, b, c, d\}$$ and $$B = \{b, d\}$$, then $$A − B = \{a,c\}$$. If $$A = \{ a, b, c, d \}$$ and $$B = \{ c, d, e, f \}$$, then $$A - B = \{ a, b \}$$. If $$W = \{x \ | \ x \ \text{ odd and } x < 13\}$$ and $$Z = \{ 7, 8, 9, 10, 11, 12, 13 \}$$, then $$W − Z = \{1,3,5\}$$ and $$Z − W = \{8,10,12,13\}$$.
Note that the set difference operation is not a commutative operation and if $$A$$, $$B$$ are two disjoint sets, then $$A - B = A$$ and $$B - A = B$$.
The symmetric difference of any two sets $$A, B$$ is defined as:
$$$A\vartriangle B=(A-B)\cup(B-A)=(A\cup B)-(B\cap A)$$$
Some properties of the set difference:
$$A-A=\emptyset$$ $$A-\emptyset=A$$ $$\emptyset-A=\emptyset$$ $$A-B=A\cap B^c$$ $$A\subset B \Leftrightarrow A-B=\emptyset$$ $$A-(A-B)=A\cap B$$ $$A\cap(B-C)=(A\cap B)-(A\cap C)$$
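These definitions map directly onto Python's built-in set operations (a small sketch, not part of the original text):

```python
# Set difference and symmetric difference with Python's built-in sets.
A = {"a", "b", "c", "d"}
B = {"c", "d", "e", "f"}

print(A - B)            # {'a', 'b'}
print(B - A)            # {'e', 'f'}  -> the operation is not commutative
print(A ^ B)            # symmetric difference: (A - B) | (B - A)
print(A - A == set())   # True: A - A is the empty set
```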
|
Suppose I have the term $t=\sqrt{-a^2-b^2}$, where $a,b\in\mathbb R$. Of course, we know that it holds$$t = \mathbb i \sqrt{a^2+b^2}$$which is (in my opinion) more convenient. How can I get
Mathematica to factor this out? Especially for things like$$\cosh(\sqrt{-a^2-b^2}) = \cos(\sqrt{a^2+b^2})$$it would be very helpful, but
FullSimplify does not do it:
In[29]:= FullSimplify[Sqrt[-a^2 - b^2], {a ∈ Reals, b ∈ Reals}]
Out[29]= Sqrt[-a^2 - b^2]
In[30]:= FullSimplify[Cosh[Sqrt[-a^2 - b^2]], {a ∈ Reals, b ∈ Reals}]
Out[30]= Cosh[Sqrt[-a^2 - b^2]]
|
In some sense the empty set ($\emptyset$) and the global set of all sets ($G$) are the ends of the universe of mathematical objects. The world which $ZFC$ describes has an end at the bottom and is endless at the top. In a straightforward way one can find a theory equiconsistent with $ZFC$ whose world is endless at the bottom and bounded at the top by the set of all sets. It suffices to consider the theory $ZFC^{-1}$ ($ZFC$ inverse), which is obtained from $ZFC$ by replacing each phrase $x\in y$ in the axioms of $ZFC$ by the phrase $\neg (x\in y)$. This operation, for example, transforms the axiom of the empty set of $ZFC$ into a statement which asserts "the set of all sets exists".
$[\exists x \forall y~~\neg(y\in x)]\mapsto [\exists x \forall y~~\neg \neg(y\in x)] $
Even the axiom of extensionality remains unchanged because we have:
$[\forall x\forall y~~(x=y\longleftrightarrow \forall z~~(z\in x\longleftrightarrow z\in y))]\mapsto [\forall x\forall y~~(x=y\longleftrightarrow \forall z~~(\neg (z\in x)\longleftrightarrow \neg (z\in y)))]$
So the "set of all sets" is unique in this theory. Even the equiconsistency simply follows from the fact that for all set (or proper class) $M$ and for all binary relation $E$ on it we have:
$\langle~M~,~E~\rangle \models ZFC \Longleftrightarrow \langle~M~,~M\times M\setminus E~\rangle \models ZFC^{-1}$
So it is trivial that $ZFC^{-1}\models \neg (\exists x \forall y~~\neg(y\in x))$, in the same way that one can prove $ZFC\models \neg (\exists x \forall y~~y\in x)$ by Russell's paradox. But the situation seems rather strange when one wants to find a theory equiconsistent with $ZFC$ which has end points in both the up and down directions, because the existence of two contradictory objects like $\emptyset$ and $G$ seems ontologically incompatible in a particular "$ZFC$-like" world. So the question is:
Question (1): Is there an $\mathcal{L}=\lbrace \in\rbrace$-theory $T$ such that the following conditions hold:
$(1)~Con(ZFC)\Longleftrightarrow Con(T)$
$(2)~T\models \exists !x~\forall y~~(y\in x)$
$(3)~T\models \exists !x~\forall y~~\neg (y\in x)$
Remark (1): Quine's new foundation axiomatic system ($NF$) is not an answer because its equiconsistency with $ZFC$ is still unknown.
One can even define two duals of the empty and global sets: the set which does not belong to any other set ($\emptyset^{\star}$) and the set which belongs to every set ($G^{\star}$). Now one can restate question (1) as follows:
Question (2): Is there an $\mathcal{L}=\lbrace \in\rbrace$-theory $T$ such that the following conditions hold:
$(1)~Con(ZFC)\Longleftrightarrow Con(T)$
$(2)~T\models \exists !x~\forall y~~(x\in y)$
$(3)~T\models \exists !x~\forall y~~\neg (x\in y)$
It is also interesting to ask for an equiconsistent theory which has no end points in either the up or the down direction. So:
Question (3): Is there an $\mathcal{L}=\lbrace \in\rbrace$-theory $T$ such that the following conditions hold:
$(1)~Con(ZFC)\Longleftrightarrow Con(T)$
$(2)~T\models \neg (\exists x~\forall y~~(y\in x))$
$(3)~T\models \neg (\exists x~\forall y~~\neg (y\in x))$
|
Given a finite subset $S$ not containing the identity element in a residually finite group $G$, does there always exist a normal subgroup of $G$ which has finite index (in $G$) and which avoids $S$? (If $S$ is a singleton, this is of course the definition of a residually finite group.)
Yes, take the intersection of the normal subgroups $N_1, N_2, ..., N_k$ of finite index avoiding elements $x_1,x_2,...,x_k$ of your set. It is normal and of finite index (at most the product of indices of $N_i$).
The answer is yes. Since $G$ is residually finite, given $g\in S$ there is a finite group $F_g$ and a homomorphism $\phi_g: G\to F_g$ whose kernel does not contain $g$. Now $S$ is disjoint from the kernel of the product of these homomorphisms $G\to \prod_{g\in S}F_g$, and the target is a finite group.
More generally if $C$ is a class of groups closed under finite products, and $G$ is residually $C$, then $G$ is fully residually $C$.
|
De Bruijn-Newman constant
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
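For readers who want to experiment numerically, here is a minimal sketch (my own, not code from any of the cited papers or from the project) that evaluates [math]H_t[/math] with mpmath by truncating the series for [math]\Phi[/math] and the integral; the cutoffs N=30 and U=6 are ad hoc assumptions justified only by the super-exponential decay of [math]\Phi[/math].

import mpmath as mp

def Phi(u, N=30):
    # truncated series: sum over n of (2 pi^2 n^4 e^{9u} - 3 pi n^2 e^{5u}) exp(-pi n^2 e^{4u})
    return mp.nsum(lambda n: (2*mp.pi**2*n**4*mp.exp(9*u) - 3*mp.pi*n**2*mp.exp(5*u))
                   * mp.exp(-mp.pi*n**2*mp.exp(4*u)), [1, N])

def H(t, z, U=6):
    # H_t(z) = int_0^infty exp(t u^2) Phi(u) cos(z u) du, truncated at u = U
    return mp.quad(lambda u: mp.exp(t*u**2) * Phi(u) * mp.cos(z*u), [0, U])

# sanity check at t = 0 against H_0(z) = xi(1/2 + iz/2)/8 (see the t=0 section below)
xi = lambda s: s*(s-1)/2 * mp.pi**(-s/2) * mp.gamma(s/2) * mp.zeta(s)
z = mp.mpf(10)
print(H(0, z), xi(mp.mpf('0.5') + 1j*z/2)/8)  # both should be approximately the same real number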
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the
de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:

(1) Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].

(2) Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-zero whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].

(3) Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].

[math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis,
all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
It is known that [math]\xi[/math] is an entire function of order one ([T1986, Theorem 2.12]). Hence by the fundamental solution for the heat equation, the [math]H_t[/math] are also entire functions of order one for any [math]t[/math].
Because [math]\Phi[/math] is positive, [math]H_t(iy)[/math] is positive for any [math]y[/math], and hence there are no zeroes on the imaginary axis.
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as
[math]\displaystyle \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2}[/math]

[math]\displaystyle \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2}[/math]
where the dependence on [math]t[/math] has been omitted for brevity.
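As an illustration of these dynamics, here is a toy forward-Euler simulation of my own (not Polymath15 code; truncating to finitely many zeroes is an uncontrolled approximation, so this is qualitative only).

import numpy as np

def euler_step(z, dt):
    # one Euler step of dz_j/dt = -sum_{k != j} 2/(z_k - z_j)
    dz = np.empty_like(z)
    for j in range(len(z)):
        others = np.delete(z, j)
        dz[j] = -np.sum(2.0 / (others - z[j]))
    return z + dt * dz

# a conjugate-symmetric toy configuration: one off-axis pair, two real zeroes
z = np.array([1 + 0.5j, 1 - 0.5j, 3 + 0j, 5 + 0j], dtype=complex)
for _ in range(100):
    z = euler_step(z, 1e-3)
print(z)  # the imaginary parts of the off-axis pair shrink toward the real axis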
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
Threads
- Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
- Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
- Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.
- Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018.
- Polymath 15, fourth thread: closing in on the test problem, Terence Tao, Feb 24, 2018.
- Polymath15, fifth thread: finishing off the test problem?, Terence Tao, Mar 2, 2018.
- Polymath15, sixth thread: the test problem and beyond, Terence Tao, Mar 18, 2018.
- Polymath15, seventh thread: going below 0.48, Terence Tao, Mar 28, 2018.
- Polymath15, eighth thread: going below 0.28, Terence Tao, Apr 17, 2018.
- Polymath15, ninth thread: going below 0.22?, Terence Tao, May 4, 2018.
- Polymath15, tenth thread: numerics update, Rudolph Dwars and Kalpesh Muchhal, Sep 6, 2018.
- Polymath15, eleventh thread: Writing up the results, and exploring negative t, Terence Tao, Dec 28, 2018.
- Effective approximation of heat flow evolution of the Riemann xi function, and a new upper bound for the de Bruijn-Newman constant, Terence Tao, Apr 30, 2019.

Other blog posts and online discussion
- Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
- The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
- Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
- A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.

Code and data

Writeup
Here are the Polymath15 grant acknowledgments.
Test problem

Zero-free regions
See Zero-free regions.
Bibliography
[A2011] J. Arias de Reyna, High-precision computation of Riemann's zeta function by the Riemann-Siegel asymptotic formula, I, Mathematics of Computation 80 (2011), no. 274, 995–1009.
[B1994] W. G. C. Boyd, Gamma function asymptotics by an extension of the method of steepest descents, Proceedings: Mathematical and Physical Sciences 447 (1994), no. 1931, 609–630.
[B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226.
[CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
[G2004] X. Gourdon, The [math]10^{13}[/math] first zeros of the Riemann zeta function, and zeros computation at very large height, 2004.
[KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics 22 (2009), 281–306.
[N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
[P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449–2467.
[P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. thesis, U. British Columbia, 1992.
[RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint, arXiv:1801.05914.
[T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function, second edition, edited and with a preface by D. R. Heath-Brown, The Clarendon Press, Oxford University Press, New York, 1986.
|
LaTeX/Algorithms
LaTeX has several packages for typesetting algorithms in form of "pseudocode". They provide stylistic enhancements over a uniform style (i.e., all in typewriter font) so that constructs such as loops or conditionals are visually separated from other text. The pseudocode is usually put in an algorithm environment. For typesetting real code, written in a real programming language, consider the listings package described in Source Code Listings.

Typesetting
There are four notable packages: algorithmic, algorithm2e, algorithmicx, and program.

Typesetting using the algorithmic package

The algorithmic package uses a different set of commands than the algorithmicx package. It is not compatible with revtex4-1. Basic commands are:
\STATE <text>
\IF{<condition>} \STATE{<text>} \ELSE \STATE{<text>} \ENDIF
\IF{<condition>} \STATE{<text>} \ELSIF{<condition>} \STATE{<text>} \ENDIF
\FOR{<condition>} \STATE{<text>} \ENDFOR
\FOR{<condition> \TO <condition>} \STATE{<text>} \ENDFOR
\FORALL{<condition>} \STATE{<text>} \ENDFOR
\WHILE{<condition>} \STATE{<text>} \ENDWHILE
\REPEAT \STATE{<text>} \UNTIL{<condition>}
\LOOP \STATE{<text>} \ENDLOOP
\REQUIRE <text>
\ENSURE <text>
\RETURN <text>
\PRINT <text>
\COMMENT{<text>}
\AND, \OR, \XOR, \NOT, \TO, \TRUE, \FALSE
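For orientation, here is a minimal complete example of my own (not from this wikibook) using these commands; it assumes \usepackage{algorithm} and \usepackage{algorithmic} in the preamble:

\begin{algorithm}
\caption{Euclid's algorithm}
\begin{algorithmic}[1]
\REQUIRE $a, b$ nonnegative integers, not both zero
\ENSURE $\gcd(a, b)$
\WHILE{$b \neq 0$}
    \STATE $t \Leftarrow b$
    \STATE $b \Leftarrow a \bmod b$
    \STATE $a \Leftarrow t$
\ENDWHILE
\RETURN $a$
\end{algorithmic}
\end{algorithm}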
Complete documentation is listed at [2]. Most commands are similar to the algorithmicx equivalents, but with different capitalization. The algorithms bundle at the CTAN repository, dated 2009-08-24, describes both the algorithmic environment (for typesetting algorithms) and the algorithm floating wrapper (see below), which is designed to wrap around the algorithmic environment.
How to rename require/ensure to input/output:
\floatname{algorithm}{Procedure}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
Typesetting using the algorithm2e package

The algorithm2e package (first released 1995, last updated July 2017 according to the v5.0 manual) allows typesetting algorithms with a lot of customization. Like algorithmic, this package is not compatible with revtex4-1.
Unlike algorithmic, algorithm2e provides a relatively large number of customization options, suiting the needs of various users. The CTAN manual provides a comprehensive list of examples and a full set of controls.
Typically, the usage between \begin{algorithm} and \end{algorithm} consists of:

1. Declaring a set of keywords (to typeset as functions/operators), layout controls, the caption, the title, and header text (which appears before the algorithm's main steps, e.g. Input, Output).
2. Writing the main steps of the algorithm, with each step ending with a \;.

This may be seen as analogous to writing a LaTeX preamble before starting the actual document.
The package is loaded like
\usepackage[]{algorithm2e}
and a simple example, taken from the v4.01 manual, is
\begin{algorithm}[H]
 \KwData{this text}
 \KwResult{how to write algorithm with \LaTeX2e }
 initialization\;
 \While{not at end of this document}{
  read current\;
  \eIf{understand}{
   go to next section\;
   current section becomes this one\;
  }{
   go back to the beginning of current section\;
  }
 }
 \caption{How to write algorithms}
\end{algorithm}
which produces the typeset algorithm (the rendered output is not reproduced here).
More details are in the manual hosted on the ctan website.
Typesetting using the algorithmicx package

The algorithmicx package provides a number of popular constructs for algorithm designs. Put \usepackage{algpseudocode} in the preamble to use the algorithmic environment to write algorithm pseudocode (\begin{algorithmic}...\end{algorithmic}). You might want to use the algorithm environment (\usepackage{algorithm}) to wrap your algorithmic code in an algorithm environment (\begin{algorithm}...\end{algorithm}) to produce a floating environment with numbered algorithms.
The command \begin{algorithmic} can be given an optional argument of a positive integer, which if given will cause line numbering to occur at multiples of that integer. E.g. \begin{algorithmic}[5] will enter the algorithmic environment and number every fifth line.
Below is an example of typesetting a basic algorithm using the algorithmicx package (remember to add the \usepackage{algpseudocode} statement to your document preamble):
\begin{algorithmic}
\If {$i\geq maxval$}
    \State $i\gets 0$
\Else
    \If {$i+k\leq maxval$}
        \State $i\gets i+k$
    \EndIf
\EndIf
\end{algorithmic}
The LaTeX source can be written to a format familiar to programmers so that it is easy to read. This will not, however, affect the final layout in the document.
Basic commands have the following syntax:
Statement (\State causes a new line, can also be used in front of other commands)
\State $x\gets <value>$
Three forms of if-statements:
\If{<condition>} <text> \EndIf
\If{<condition>} <text> \Else <text> \EndIf
\If{<condition>} <text> \ElsIf{<condition>} <text> \Else <text> \EndIf
The third form accepts as many \ElsIf{} clauses as required. Note that it is \ElsIf and not \ElseIf.
Loops:
\For{<condition>} <text> \EndFor
\ForAll{<condition>} <text> \EndFor
\While{<condition>} <text> \EndWhile
\Repeat <text> \Until{<condition>}
\Loop <text> \EndLoop
Pre- and postcondition:
\Require <text>
\Ensure <text>
Functions
\Function{<name>}{<params>} <body> \EndFunction
\Return <text>
\Call{<name>}{<params>}
This command will usually be used in conjunction with a \State command as follows:
\Function{Increment}{$a$}
    \State $a \gets a+1$
    \State \Return $a$
\EndFunction
Comments:
\Comment{<text>}
Note to users who switched from the old algorithmic package: comments may be placed anywhere in the source; there are no limitations as in the old algorithmic package.
The algorithmicx package allows you to define your own environments.
To define blocks beginning with a starting command and ending with an ending command, use
\algblock[<block>]{<start>}{<end>}
This defines two commands \<start> and \<end> which have no parameters. The text displayed by them is \textbf{<start>} and \textbf{<end>}.
With \algblockdefx you can give the text to be output by the starting and ending command and the number of parameters for these commands. In the text the n-th parameter is referenced by #n.

\algblockdefx[<block>]{<start>}{<end>}
   [<startparamcount>][<default value>]{<start text>}
   [<endparamcount>][<default value>]{<end text>}
Example:
\algblock[Name]{Start}{End}
\algblockdefx[NAME]{START}{END}%
   [2][Unknown]{Start #1(#2)}%
   {Ending}
\algblockdefx[NAME]{}{OTHEREND}%
   [1]{Until (#1)}
\begin{algorithmic}
\Start
  \Start
    \START[One]{x}
    \END
    \START{0}
    \OTHEREND{\texttt{True}}
  \End
  \Start
  \End
\End
\end{algorithmic}
More advanced customization and other constructions are described in the algorithmicx manual: http://mirror.ctan.org/macros/latex/contrib/algorithmicx/algorithmicx.pdf
Typesetting using the program package

The program package provides macros for typesetting algorithms. Each line is set in math mode, so all the indentation and spacing is done automatically. The notation |variable_name| can be used within normal text, maths expressions or programs to indicate a variable name. Use \origbar to get a normal | symbol in a program. The commands \A, \B, \P, \Q, \R, \S, \T and \Z typeset the corresponding bold letter with the next object as a subscript (e.g. \S1 typesets {\bf S$_1$} etc). Primes work normally, e.g. \S''.
Below is an example of typesetting a basic algorithm using the program package (remember to add the \usepackage{program} statement to your document preamble):
\begin{program}
\mbox{A fast exponentiation procedure:}
\BEGIN \\ %
  \FOR i:=1 \TO 10 \STEP 1 \DO
     |expt|(2,i); \\ |newline|() \OD %
\rcomment{This text will be set flush to the right margin}
\WHERE
\PROC |expt|(x,n) \BODY
          z:=1;
          \DO \IF n=0 \THEN \EXIT \FI;
             \DO \IF |odd|(n) \THEN \EXIT \FI;
\COMMENT{This is a comment statement};
                n:=n/2; x:=x*x \OD;
             \{ n>0 \};
             n:=n-1; z:=z*x \OD;
          |print|(z) \ENDPROC
\END
\end{program}
The commands \( and \) are redefined to typeset an algorithm in a minipage, so an algorithm can appear as a single box in a formula. For example, to state that a particular action system is equivalent to a WHILE loop you can write:

\[
\( \ACTIONS A:
     A \EQ \IF \B{} \THEN \S{}; \CALL A
              \ELSE \CALL Z \FI \QE
   \ENDACTIONS \)
\EQT
\( \WHILE \B{} \DO \S{} \OD \)
\]
Dijkstra conditionals and loops:
\begin{program}
\IF x = 1 \AR y:=y+1
\BAR x = 2 \AR y:=y^2
\utdots
\BAR x = n \AR y:=\displaystyle\sum_{i=1}^n y_i \FI
\DO 2 \origbar x \AND x>0 \AR x:= x/2
\BAR \NOT 2 \origbar x \AR x:= \modbar{x+3} \OD
\end{program}
Loops with multiple exits:
\begin{program}
\DO \DO \IF \B1 \THEN \EXIT \FI;
        \S1;
        \IF \B2 \THEN \EXIT(2) \FI \OD;
    \IF \B1 \THEN \EXIT \FI \OD
\end{program}
A Reverse Engineering Example.
Here's the original program:
\begin{program}
\VAR \seq{m := 0, p := 0, |last| := `` ''};
\ACTIONS |prog|:
|prog| \ACTIONEQ %
  \seq{|line| := `` '', m := 0, i := 1};
  \CALL |inhere| \ENDACTION
l \ACTIONEQ %
  i := i+1;
  \IF (i=(n+1)) \THEN \CALL |alldone| \FI ;
  m := 1;
  \IF |item|[i] \neq |last| \THEN
     |write|(|line|); |line| := `` ''; m := 0;
     \CALL |inhere| \FI ;
  \CALL |more| \ENDACTION
|inhere| \ACTIONEQ %
  p := |number|[i]; |line| := |item|[i];
  |line| := |line| \concat `` '' \concat p;
  \CALL |more| \ENDACTION
|more| \ACTIONEQ %
  \IF (m=1) \THEN p := |number|[i];
     |line| := |line| \concat ``, '' \concat p \FI ;
  |last| := |item|[i]; \CALL l \ENDACTION
|alldone| \ACTIONEQ |write|(|line|); \CALL Z \ENDACTION
\ENDACTIONS \END
\end{program}
And here's the transformed and corrected version:
\begin{program}
\seq{|line| := `` '', i := 1};
\WHILE i \neq n+1 \DO
  |line| := |item|[i] \concat `` '' \concat |number|[i];
  i := i+1;
  \WHILE i \neq n+1 \AND |item|[i] = |item|[i-1] \DO
    |line| := |line| \concat ``, '' \concat |number|[i];
    i := i+1 \OD ;
  |write|(|line|) \OD
\end{program}
The package also provides a macro for typesetting a set like this: \set{x \in N | x > 0}.
Lines can be numbered by setting \NumberProgramstrue and numbering turned off with \NumberProgramsfalse.
The algorithm environment

It is often useful for the algorithm produced by algorithmic to be "floated" to the optimal point in the document to avoid it being split across pages. The algorithm environment provides this and a few other useful features. Include it by adding \usepackage{algorithm} to your document's preamble. It is entered into by

\begin{algorithm}
\caption{<your caption for this algorithm>}
\label{<your label for references later in your document>}
\begin{algorithmic}
<algorithmic environment>
\end{algorithmic}
\end{algorithm}
Algorithm numbering

The default numbering system for the algorithm package is to number algorithms sequentially. This is often not desirable, particularly in large documents where numbering according to chapter is more appropriate. The numbering of algorithms can be influenced by providing the name of the document component within which numbering should be recommenced. The legal values for this option are: part, chapter, section, subsection, subsubsection or nothing (default). For example:

\usepackage[chapter]{algorithm}
List of algorithms

When you use figures or tables, you can add a list of them close to the table of contents; the algorithm package provides a similar command. Just put

\listofalgorithms

anywhere in the document, and LaTeX will print a list of the "algorithm" environments in the document with the corresponding page and the caption.
An example from the manual

This is an example taken from the official manual (p. 14):

\begin{algorithm}                      % enter the algorithm environment
\caption{Calculate $y = x^n$}          % give the algorithm a caption
\label{alg1}                           % and a label for \ref{} commands later in the document
\begin{algorithmic}                    % enter the algorithmic environment
  \REQUIRE $n \geq 0 \vee x \neq 0$
  \ENSURE $y = x^n$
  \STATE $y \Leftarrow 1$
  \IF{$n < 0$}
    \STATE $X \Leftarrow 1 / x$
    \STATE $N \Leftarrow -n$
  \ELSE
    \STATE $X \Leftarrow x$
    \STATE $N \Leftarrow n$
  \ENDIF
  \WHILE{$N \neq 0$}
    \IF{$N$ is even}
      \STATE $X \Leftarrow X \times X$
      \STATE $N \Leftarrow N / 2$
    \ELSE[$N$ is odd]
      \STATE $y \Leftarrow y \times X$
      \STATE $N \Leftarrow N - 1$
    \ENDIF
  \ENDWHILE
\end{algorithmic}
\end{algorithm}
The official manual is located at http://mirrors.ctan.org/macros/latex/contrib/algorithms/algorithms.pdf

References

The official manual for the algorithms package, Rogério Brito (2009), http://mirrors.ctan.org/macros/latex/contrib/algorithms/algorithms.pdf
|
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session with 100% certainty if it crashes).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equal to $1$, with $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what/s y? "dy/dx is by definition not continuous" it's not a function how can you ask whether or not it's continous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates, it's not 0. Does anyone know?
|
My question is about the realization of the symmetric group $S_n$ as the Galois group of a real normal field extension $K/\mathbb Q$. As I read, such a field $K$ can be obtained as the splitting field of a polynomial
$$f(x)=\alpha_0 + \alpha_1 x + ... + \alpha_{n-1}x^{n-1}+ x^n \in \mathbb Q[x] $$
in which all the coefficients $\alpha_i$ are integral at the primes 2 and 3.
Furthermore, the following properties have to be satisfied:
(1) $\ f \text{ mod }3$ is irreducible
(2) $\ f \equiv (x^2+x+1)\cdot f_1 \cdots f_k \text{ mod }2$
where $f_i \in \mathbb Z / 2 \mathbb Z[x]$ are different and irreducible polynomials with odd degree.
(3) $f$ has to be close to $(x-1)\cdot (x-2) \cdots (x-n) = \beta_0 + \beta_1 x + ... + \beta_{n-1}x^{n-1}+ x^n$; that is, $|\alpha_i - \beta_i|< \varepsilon$ for all $i$, where $\varepsilon$ is chosen so small that $f$ has $n$ real roots.
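(As a side note, conditions (1) and (2) are mechanical to test with a computer algebra system. Here is a small SymPy sketch of mine; $x^5-x-1$ is a hypothetical test polynomial, not one produced by the construction above.)

from sympy import symbols, factor_list

x = symbols('x')
f = x**5 - x - 1
print(factor_list(f, modulus=3))  # condition (1): want f irreducible mod 3
print(factor_list(f, modulus=2))  # condition (2): mod 2 this f factors as (x^2+x+1)*(x^3+x^2+1)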
Now it is claimed that, because of (1), the Galois group of $f$ over $\mathbb Q$ is a transitive subgroup of $S_n$. I think this is true because $f$ is then also irreducible in $\mathbb Q[x]$, and therefore the Galois group acts transitively on the roots of $f$.
OK, now comes an assertion that I don't understand: because of (2), the Galois group of $f$ over $\mathbb Q$ contains a transposition. Could you please explain that to me? I think (2) shows that there is a permutation of the form $(ab)(\ldots)\cdots(\ldots)$. But how does the argument continue?
If the Galois group contains a transposition, then it is $S_n$ (this is clear and well-known).
And the second assertion I don't understand well: "Because of (3), the splitting field $K$ of $f$ is real." How can this be explained in detail?
Thanks in advance!
|
I'm using the Expectation-Maximization algorithm to determine the parameters of Gaussian distributions in a mixture. To get a better understanding of the algorithm, I executed it manually, step by step, on a small example, following the usual E step / M step procedure. The problem is that in the second iteration the log-likelihood decreased.
These are my data: $$X = \begin{bmatrix}1 \\ 2\\ 3\end{bmatrix}$$ The initial means, variances and mixing coefficients are set to:
$$\mu = \begin{bmatrix}1 & 3\end{bmatrix}\quad \pi = \begin{bmatrix}0.5 & 0.5\end{bmatrix}\quad \Sigma=\begin{bmatrix}1 & 1\end{bmatrix}$$
So, I first calculated the initial log-likelihood:
$$L_0 = -4.3893$$
Then I did the E step and calculated the responsibilities $\gamma(z_{nk})$, which I'll shorten as $\gamma_{n,k}$, via $\gamma_{n,k}=\dfrac{\pi_k N(x_n|\mu_k,\Sigma_k)}{\sum_j \pi_j N(x_n|\mu_j,\Sigma_j)}$, and got:
$$\Gamma = \begin{bmatrix}0.881 & 0.119 \\ 0.5 & 0.5 \\ 0.119 & 0.881\end{bmatrix}$$
Then, I performed the M step: $$N_1 = 1.5\quad N_2= 1.5$$ $$\pi = \begin{bmatrix}0.5 & 0.5\end{bmatrix}\quad \mu^{new}=\begin{bmatrix}1.492 & 2.508\end{bmatrix}\quad \Sigma=\begin{bmatrix}0.409 & 0.409\end{bmatrix}$$ The new log-likelihood was $L_1=-3.6766$, which was greater than $L_0$.
Then, I did another iteration. In the E step I got: $$\Gamma=\begin{bmatrix}0.9979 & 0.0021 \\ 0.5 & 0.5 \\ 0.0021 & 0.9979 \end{bmatrix}$$ In the M step I got: $$N_1=1.5\quad N_2=1.5$$ $$\pi = \begin{bmatrix}0.5 & 0.5\end{bmatrix}\quad \mu^{new}=\begin{bmatrix}1.3361 & 2.6639\end{bmatrix}\quad \Sigma=\begin{bmatrix}0.2259 & 0.2259\end{bmatrix}$$ for which the log-likelihood was $L_2=-6.2123$.
However, $L_2 < L_1$! Even though I programmed the algorithm in Python and got the same results, I know that $L_i$ ($i=0,1,2,\ldots$) should be monotonically increasing, so there has to be some mistake, but I can't catch it. Can anyone help?
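For reference, here is a minimal NumPy/SciPy sketch of the same updates (my own cross-check, not my original implementation); it produces a monotonically increasing log-likelihood on these data. One slip such a check exposes quickly is passing a variance where SciPy expects a standard deviation; note the explicit np.sqrt(var).

import numpy as np
from scipy.stats import norm

X = np.array([1.0, 2.0, 3.0])
mu = np.array([1.0, 3.0]); var = np.array([1.0, 1.0]); pi = np.array([0.5, 0.5])

def loglik(X, mu, var, pi):
    dens = pi * norm.pdf(X[:, None], mu, np.sqrt(var))  # scale = std dev, not variance
    return np.log(dens.sum(axis=1)).sum()

print(loglik(X, mu, var, pi))          # initial log-likelihood
for it in range(4):
    dens = pi * norm.pdf(X[:, None], mu, np.sqrt(var))
    gamma = dens / dens.sum(axis=1, keepdims=True)      # E step: responsibilities
    N = gamma.sum(axis=0)                               # M step
    mu = (gamma * X[:, None]).sum(axis=0) / N
    var = (gamma * (X[:, None] - mu) ** 2).sum(axis=0) / N
    pi = N / len(X)
    print(loglik(X, mu, var, pi))      # should never decrease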
|
"I am not given wealth $w$ although I suppose I could assume any firm who
is purchasing has some budget."
No. This is exactly where the fundamental microeconomic theory of the firm differs from microeconomic Consumer Theory: the firm is not constrained by a budget. The reason is that this fundamental theory deals first and foremost with the "long-term" view, or even better, with the "planning view". So we assume that the amount necessary to cover expenses will come from sales, since the firm won't enter production at a loss (remember also, this is a deterministic set-up; there is no uncertainty). Working-capital considerations (the fact that usually you first have to actually pay expenses and only then actually collect revenues) do not enter the long-term view, justifiably, since they are a short-term phenomenon. Also, in the long-term or planning approach, there are no fixed costs; all factors are variable.
Now, the "cost-minimization" approach to solve the firm's optimization problem, is
an alternative behavioral assumption to the profit-maximizing setup, and it is very relevant in many real-world cases: public utilities that exist mainly to satisfy demand, and their motive is not to maximize profits -rather they want to minimize cost for the given level output, as determined by demand, in the context of efficient use of the always scarce resources.
But also, the case of a price-taking firm that is too small relative to its market is closer to cost-minimizing than to profit-maximizing behavior, since the firm has no real control over its level of production (except downward, by direct decision).
In both of the above cases, an exogenous variable appears: the level of output itself. So we solve the problem by treating the level of output as a "constant" or better, we solve it for any given level of output, and the solution we obtain has the level of output as one of its components.
So
$$\min_{K,L} C\equiv rK + wL \\s.t. F(K,L) = \bar Q$$
with the Lagrangean
$$\Lambda = rK + wL +\lambda[\bar Q - F(K,L)]$$
The first order conditions are
$$r = \lambda F_K,\;\;\; w=\lambda F_L \tag{1}$$
which gives, at the optimum,
$$rK + wL = C = \lambda\big(F_KK + F_L L) \tag{2}$$
Now assume that the production function is homogeneous of some degree $h$ (not necessarily homogeneous of degree one, i.e. exhibiting "constant returns to scale", but homogeneous; and yours is, of degree $h=1/2$). From Euler's theorem for homogeneous functions of degree $h$ we have that
$$F_KK + F_L L = hF(K,L) = h\bar Q \tag{3}$$
the last equality holding given the constraint of the initial problem. Inserting $(3)$ into $(2)$ we obtain
$$C = \lambda h \bar Q$$
The multiplier $\lambda$ is the optimal marginal cost; denote it $C'(\bar Q)$, so we arrive at
$$C = C'(\bar Q)\cdot (h\bar Q) \implies C'(\bar Q) - \frac{1}{h\bar Q}\cdot C =0$$
This is a simple homogeneous differential equation with solution
$$C = A\cdot \exp\left\{-\int(-1/h\bar Q) {\rm d}\bar Q \right\} = A\cdot \exp\left\{(1/h)\ln \bar Q\right\}$$
$$\implies C^* = A\cdot (\bar Q)^{1/h} \tag{4}$$
for some constant $A >0$. To complete the solution, we need to express the object of interest, $C^*$, in terms of the exogenous entities $r,w,\bar Q$. To do that, derive the optimal marginal cost (which is equal to the multiplier):
$$(4) \implies \lambda^* = (1/h)A(\bar Q)^{1/h-1} \tag{5}$$
Inserting $(5)$ into the first-order conditions we have
$$r = (1/h)A(\bar Q)^{1/h-1} F_K,\;\;\; w=(1/h)A(\bar Q)^{1/h-1} F_L \tag{6}$$
It is time to use the specific functional form of the production function
$$F(K,L) = K^{1/2} + L^{1/2} \implies F_K = \frac 12 K^{-1/2},\;\; F_L = \frac 12 L^{-1/2} \tag{7}$$
Inserting $(7)$ into $(6)$ together with $h=1/2$ we obtain, after manipulation,
$$rK = \frac {A^2}{r}(\bar Q)^2,\;\; wL = \frac {A^2}{w}(\bar Q)^2 \tag{8}$$
Sum the two to obtain an alternative expression for the Cost function
$$rK+wL = C^* = A(\bar Q)^2\cdot \left[\frac Ar + \frac Aw\right] \tag{9}$$
But inserting $h=1/2$ into $(4)$, we also have that
$$C^* = A(\bar Q)^2 \tag{10}$$

So
$$ (9),(10) \implies A(\bar Q)^2\cdot \left[\frac Ar + \frac Aw\right] = A(\bar Q)^2$$
$$\implies \frac Ar + \frac Aw = 1 \implies A = \frac {wr}{w+r} \tag {11}$$
Inserting $(11)$ into $(4)$ we conclude obtaining
$$C^* = \frac {wr}{w+r}\cdot (\bar Q)^2 \tag{12}$$
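As a cross-check of $(12)$, here is a small SymPy sketch (my own addition, not part of the derivation above) that solves the constrained minimization directly:

import sympy as sp

K, L, lam, r, w, Q = sp.symbols('K L lam r w Q', positive=True)
Lagr = r*K + w*L + lam*(Q - sp.sqrt(K) - sp.sqrt(L))
foc = [sp.diff(Lagr, v) for v in (K, L, lam)]   # first-order conditions
sol = sp.solve(foc, [K, L, lam], dict=True)[0]
print(sp.simplify(r*sol[K] + w*sol[L]))         # expected: Q**2*r*w/(r + w)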
Three things: A) Verify that the second-order-conditions hold for all this to indeed lead to the optimal cost-function.
B) Solve the unconstrained profit-maximization problem with the same production function, normalizing the price of output to $p=1$ (i.e. treating the exogenous prices, $w,r$ as expressed in real terms), to verify that it will lead to a cost level that it is consistent with $(12)$.
C) If you are interested in the theory of the firm under a budget constraint, a related paper is Lee, H., & Chambers, R. G. (1986). Expenditure constraints and profit maximization in US agriculture. American Journal of Agricultural Economics, 68(4), 857-865.
|
Let $a$, $b$ and $c$ be real numbers such that $abc=1$. Prove that: $$\frac{7-6a}{2+a^2}+\frac{7-6b}{2+b^2}+\frac{7-6c}{2+c^2}\geq1$$
The equality also occurs for $a=b=2$ and $c=\frac{1}{4}$ (in addition to $a=b=c=1$).
This inequality is similar to many contest inequalities, but nothing has helped; at least, I don't see how we can prove it. An example of my attempts:
We need to prove that $$\sum_{cyc}\frac{7-6a}{2+a^2}\geq1$$ or $$\sum_{cyc}\left(\frac{7-6a}{2+a^2}+1\right)\geq4$$ or $$\sum_{cyc}\frac{(a-3)^2}{2+a^2}\geq4.$$
By C-S $$\sum_{cyc}\frac{(a-3)^2}{2+a^2}=\sum_{cyc}\frac{(a-3)^2(a+k)^2}{(2+a^2)(a+k)}\geq\frac{\left(\sum\limits_{cyc}(a-3)(a+k)\right)^2}{\sum\limits_{cyc}(2+a^2)(a+k)^2}$$
Now we'll find a value of $k$ for which the equality in the last inequality occurs for $a=b=2$ and $c=\frac{1}{4}$.
Since in the equality case we have $$\frac{a-3}{(2+a^2)(a+k)}=\frac{b-3}{(2+b^2)(b+k)}=\frac{c-3}{(2+c^2)(c+k)},$$ we obtain: $$\frac{2-3}{(2+2^2)(2+k)}=\frac{\frac{1}{4}-3}{(2+\left(\frac{1}{4}\right)^2)(\frac{1}{4}+k)},$$ which gives $k=-\frac{9}{4}$.
Thus, it remains to prove that $$\left(\sum\limits_{cyc}(a-3)(4a-9)\right)^2\geq4\sum_{cyc}(2+a^2)(4a-9)^2,$$ which is wrong for $a=4$ and $b=c=\frac{1}{2}$.
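On the other hand, a crude numerical sampling (a sketch I'm adding here; positive values only, so it is evidence rather than proof) supports the original claim:

import random

random.seed(1)

def lhs(a, b, c):
    return sum((7 - 6*t) / (2 + t*t) for t in (a, b, c))

# sample positive (a, b) and force the constraint via c = 1/(a*b)
worst = min(lhs(a, b, 1/(a*b))
            for a, b in ((random.uniform(0.05, 20), random.uniform(0.05, 20))
                         for _ in range(200000)))
print(worst)  # stays >= 1 on these samples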
Any hint?
Thank you!
|
Decay of solutions to a water wave model with a nonlocal viscous dispersive term
1. Department of Mathematics, Purdue University, West Lafayette, IN 47907
2. LAMFA CNRS UMR 6140, Université de Picardie Jules Verne, 33 rue Saint-Leu, 80039 Amiens cedex
3. LAMFA CNRS UMR 6140, Université de Picardie Jules Verne, 33 rue Saint-Leu, 80039 Amiens Cedex 1
4. Université de Picardie Jules Verne, LAMFA UMR 7352, 33 rue Saint-Leu, 80039 Amiens cedex
$$u_t + u_x + \beta u_{xxx} + \frac{\sqrt{\nu}}{\sqrt{\pi}}\int_0^t \frac{u_t(s)}{\sqrt{t-s}}\,ds + u u_x = \nu u_{xx}.$$
The wellposedness of the equation and the decay rate of solutions are investigated theoretically and numerically.
Mathematics Subject Classification: Primary: 35Q35, 35Q53, 76B15; Secondary: 65M7. Citation: Min Chen, S. Dumont, Louis Dupaigne, Olivier Goubet. Decay of solutions to a water wave model with a nonlocal viscous dispersive term. Discrete & Continuous Dynamical Systems - A, 2010, 27 (4): 1473-1492. doi: 10.3934/dcds.2010.27.1473
|
What is Galois Theory Anyway? The Basic Idea
Perhaps you've heard of Évariste Galois? (Pronounced "GAL-wah.") You know, the French mathematician who died tragically in 1832 in a duel at the tender age of 20? (Supposedly over a girl! C'est romantique, n'est-ce pas?) Well, today we're taking a bird's-eye view of his most well-known contribution to mathematics: the appropriately named Galois theory. The goal of this post is twofold:

One: if you are a student about to study Galois theory, I hope the info below will serve as a small appetizer to your main course. In the "From English to Math" section below, we'll take a brief survey of the ideas that appear in a standard graduate course so that when you start doing exercises, you at least have a bird's-eye view of what's going on.

Two: even if you're not about to study Galois theory and are just curious, this post is also for you! The info here should be accessible to anyone with at least an undergrad background in abstract algebra. I'm going to leave out a lot of technical details (but you don't mind that, do you?) as my goal here is just to convey some main ideas. I hope this may whet your appetite to study further.
In a word, Galois Theory uncovers a relationship between the structure of groups and the structure of fields. It then uses this relationship to describe how the roots of a polynomial relate to one another.
More specifically, we start with a polynomial $f(x)$. Its roots live in a field (called the
splitting field of $f(x)$). These roots display a symmetry which is seen by letting a certain group (called the Galois group of $f(x)$) act on them. And we can gather information about the group's structure from the field's structure and vice versa via the Fundamental Theorem of Galois:
Now why would anyone care about permuting the roots of a polynomial? What good does that do? Well, the quadratic formula was well known by the time Galois came along. (It actually dates back to the Babylonians.) So naturally, mathematicians wondered, "Does an analogous formula exist for polynomials of higher degree?" In other words, can the roots of an $n$th degree polynomial be written down as some algebraic combination $(+,-,\times,\div,\sqrt{\;})$ of the polynomial's coefficients? It turns out the answer is "Yes!" when $n\leq 4$, but "No" for any $n\geq 5$.
It was precisely Galois' study of permutation groups of the roots of polynomials that led to his discovery of a necessary and sufficient condition for finding such a formula.* The condition (which eluded mathematicians for over 300 years!) becomes elegantly clear when the problem is translated from the language of field theory to that of group theory. Galois theory is the dictionary which makes this possible.
From English to Math
The Field Story

Suppose $F$ is a field. Then the polynomial ring $F[x]$ is a (Euclidean domain and hence a) unique factorization domain. This means any polynomial $f(x)\in F[x]$ can be factored uniquely as a product of irreducible polynomials. Now we know that any root of $f$ must be a root of one of those irreducible factors, but we may not know how and where to find those roots. It turns out that passing to a field larger than $F$ helps produce those roots.
Example: Consider the irreducible polynomial $f(x)=x^2+1$ with coefficients in $\mathbb{Q}$. One of its roots is $i=\sqrt{-1}$. Even though $i\not\in\mathbb{Q}$, we can find a bigger field which contains it, namely $\mathbb{Q}(i)=\{a+bi:a,b\in\mathbb{Q}\}$. ($\mathbb{Q}(i)$ is "bigger" since $\mathbb{Q}\subset\mathbb{Q}(i)$.)
In the example above, we say $\mathbb{Q}(i)$ is an extension field of $\mathbb{Q}$. And in general, any field $K$ which contains a smaller field $F$ is called an extension of $F$, and $F$ is referred to as the base (or ground) field. But so far we've only talked about a single root of our polynomial $f(x)$. What about all of its roots? Can we find an extension of $F$ which contains all of the roots of a general $f(x)$? Again the answer is yes, and that field is called the splitting field of $f(x)$. (Technically, the splitting field is the smallest extension of $F$ which contains all the roots of $f(x)$.)
(Aside: We could also turn this question on its head. Suppose we have a field $F$ and an extension $K$, and we pick a random element** $\alpha\in K$. Can we find a polynomial in $F[x]$ which has $\alpha$ as a root? This time, the answer is not always yes! But in the cases when it is, we say $\alpha$ is an algebraic element. Moreover, if the answer is yes for every element of $K$, we say $K$ is an algebraic extension of $F$. As we've discussed before, algebraic elements are sort of like limit points in topology/analysis.)
Now the crux of The Field Story is the construction of such splitting fields. This construction is analogous to building a tower from the ground up, one floor at a time. We begin with the ground field $F$, and one by one adjoin to $F$ the roots of $f(x)$ until we obtain the field $K$ (the splitting field) which contains all the roots of $f$. I'm doing a lot of hand-waving here, but we eventually obtain a tower of fields which looks something like this: $$F=F_0\subset F_1\subset F_2\subset\cdots\subset F_m=K$$
where $F_{i+1}$ is bigger than $F_i$ because it contains (at least) one more root of $f(x)$. It's the structure of
this tower of fields which is mirrored in the structure of the Galois group associated with $f(x)$. And that group is our next topic of discussion.
The Group Story
Do you remember those verb commercials from the early to mid 2000s? You know, the ones full of hyper-happy, active kids encouraging you to do some physical activity? (No? Just Google it, you'll see.) For some reason I do. The tag-line was
VERB: It's what you do.
And in the context of mathematics, groups are very much like verbs! They
do stuff. In particular, a group acts on a set by shuffling elements around. We can gather a lot of information about the group and about the set it acts on via this action. So if you ever want to create a TV commercial about mathematical groups, GROUPS: It's what they do.
would be a telling tag-line. But how does this relate to Galois theory? Well the
Galois group $G$ associated to an $n$th degree polynomial $f(x)\in F[x]$ has its own action. In particular, the elements of $G$ are automorphisms $\sigma:K\to K$ where $K$ is the splitting field of $f$. (Recall a field automorphism is just an isomorphism from a field to itself.) This group is isomorphic to (i.e. has the same structure as) a subgroup, called a permutation group, of the symmetric group $S_n$.
Since the raison d'être of a permutation group is to permute things, it's not too hard to believe that $G$ acts on the splitting field $K$ by permuting the roots of $f(x)$. So suppose for a moment that $f(x)\in F[x]$ can be written as a product of irreducible factors $f(x)=f_1(x)f_2(x)\cdots f_k(x)$. Then $G$ will permute the roots of $f_1$ among themselves, and the roots of $f_2$ among themselves, and so on. In other words, $G$ is said to act transitively on the irreducible factors of $f$. For instance, in the drawing below, $G$ is some subgroup of $S_n$ (for the sake of my illustration, suppose*** $n\geq 9$) which contains a product of two 2-cycles, $(13)(24)$, one 3-cycle, $(567)$, and a transposition $(n \; n-1)$.
In general, once we know the Galois group of $f$, we can analyze its subgroup structure. Galois' insight was to notice that if the structure of $G$ is such that it has a chain of subgroups $$\{e\}=H_0\triangleleft H_1\triangleleft\cdots\triangleleft H_m=G$$ (where $e$ is the identity) such that each $H_i\triangleleft H_{i+1}$ is normal and each quotient $H_{i+1}/H_{i}$ is abelian, then and only then can we write down an explicit algebraic expression (like the quadratic formula) for the roots of $f(x)$. In this case, we say the group $G$ is solvable. By the Fundamental Theorem of Galois, the ability to write down this chain of subgroups corresponds to the ability to write down a particular tower of subfields of the splitting field $K$ of $f$. So the solvability of our polynomial $f$ amounts to knowing something about the structure of its splitting field, or equivalently the structure of its Galois group. The fact that there is no algebraic formula for the roots of polynomials of degree 5 and higher is due to the fact that the symmetric group $S_n$ for $n\geq 5$ is not solvable!
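As a quick empirical footnote (my own addition), SymPy's permutation groups confirm exactly where solvability fails:

from sympy.combinatorics.named_groups import SymmetricGroup

for n in range(2, 8):
    print(n, SymmetricGroup(n).is_solvable)
# True for n = 2, 3, 4 and False for every n >= 5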
And with that, we end our bird's-eye view of a course in Galois Theory. Whew! (Kudos if you made it through the whole thing!) I hope you've gotten a little taste of what Galois Theory is about. But this is really just the tip of the iceberg. For further reading, I highly recommend
Basic Abstract Algebra (2 ed., ch. 15-17) by Bhattacharya, Jain, and Nagpaul. This book is very approachable at the undergraduate level. And of course, Abstract Algebra (3 ed., ch. 13-14) by Dummit and Foote is a classic for both undergraduate and graduate students.
Footnotes
* In fact, Galois himself invented the concept of a group for this very purpose!
**
Edited May 21, 2016: This sentence originally read, "...a random element $\alpha\in F$" and did not mention the extension field $K$ (a rather boring case)! Many thanks to a reader for pointing this out.
*** In a typical course, you'll usually play with polynomials of degree two, three, four, and five since the group structures of $S_2$, $S_3$, $S_4$, and $S_5$ are more manageable than that of $S_n$ for $n\geq 6$.
|
Short Answer:
Your work is perfectly fine if your lower and upper integration limits satisfy $0 < a \leq b$. In that case your answer $2u - \ln(u)$ even has a nice closed form purely in terms of $x$:
$$
\int_a^b \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}} = \Big[\big(1 + \sqrt{4x + 1}\big) - \ln\Big(\frac{1}{2}\big(1 + \sqrt{4x + 1}\big)\Big)\Big]_a^b
$$
This formula also continues to work for a lower limit of $a = 0$, provided you interpret the integral and/or the nested radical $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ in the denominator carefully enough.
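Before the analysis, here is a small numerical sanity check of my own (it assumes SciPy is available for the quadrature) of both the limit formula for $u(x)$ derived below and the closed form above, on $[a,b]=[1,4]$:

import math
from scipy.integrate import quad

def u(x, n=60):
    # iterate u_{x, n+1} = sqrt(x + u_{x, n}) starting from u_{x, 1} = sqrt(x)
    val = math.sqrt(x)
    for _ in range(n):
        val = math.sqrt(x + val)
    return val

x = 2.0
print(u(x), (1 + math.sqrt(4*x + 1)) / 2)   # the two agree

def F(x):
    # antiderivative 2u - ln(u), written purely in terms of x
    s = 1 + math.sqrt(4*x + 1)
    return s - math.log(s / 2)

val, _ = quad(lambda x: 1.0 / u(x), 1, 4)
print(val, F(4) - F(1))                      # the two agree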
Long Answer (Analysis):
Since you used $u$-substitution, your method should work as long as the conditions for an integration by substitution are met. Say you are integrating over some interval $[a, b]$. You have to verify:
Does the function $u(x) = \sqrt{x + u(x)}$ that you defined implicitly actually make sense over $[a, b]$? In other words, is there really a function $u : [a, b] \to \Bbb R$ that satisfies that recursion?
Is the function $u(x)$ actually differentiable over $[a, b]$?
$\underline{\textit{There is some good news for these questions:}}$
As long as $x > 0$, there is a well-defined expression for $u$ in terms of $x$ when $u = \sqrt{x + u}$. To see this, we need to translate the intuitive expression $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ into the precise language of calculus. Only then can we bring the full power of calculus to bear on this problem. So, formally, what is going on with a nested radical like $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ is this:
Let $u_{x,1} = \sqrt{x}$ and define recursively the sequence $u_{x,n + 1} = \sqrt{x + u_{x,n}}$ ($n \in \Bbb Z_+$). If $$u_x = \lim\limits_{n \to \infty}u_{x,n}$$ exists, then we may define our sought-after function $u$ at $x$ to be $u(x) = u_x$. In essence, the limit $\lim\limits_{n \to \infty}u_{x,n}$ is mathematically what we
define the expression $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ to be. And we can easily check that $u_x = \sqrt{x + u_x}$ by taking the limit as $n \to \infty$ at both sides of the equation $u_{x,n + 1} = \sqrt{x + u_{x,n}}$.
Now the good news is that, as long as $x > 0$, you can show that the sequence $u_{x,n}$ is bounded and monotonically increasing so that it does converge to a definite limit, namely
$$u_x = \frac{1}{2}\big(1 + \sqrt{4x + 1}\big)$$
Hence, our function $u(x) = u_x$ is well-defined for $x > 0$. Also, note that the formula above should not surprise you. You can easily see where it originated:
Informally, if you take your substitution equation $u^2 - u = x$ and write it as a quadratic equation $u^2 - u - x = 0$, you can solve it by thinking of $x$ as a constant. And indeed, one of the solutions that pops out is precisely $u_+ = \frac{1}{2}\big(1 + \sqrt{4x + 1}\big)$. You can eliminate the other solution $u_- = \frac{1}{2}\big(1 - \sqrt{4x + 1}\big)$ since it is negative if $x > 0$ and by convention square roots are positive.
So
as long as $x > 0$, you can safely take $$u(x) = \frac{1}{2}\big(1 + \sqrt{4x + 1}\big)$$ as the $u$-substitution function which satisfies $u = \sqrt{x + u}$. In fact, as is apparent from the formula, $u(x)$ is even differentiable in this case.
$\underline{\textit{But there are caveats:}}$
$1.\ \textbf{Note that for $x < 0$, the limit does not make sense:}$ as the very first sequence element $u_{x,1} = \sqrt{x}$ is not real. So, from this very analysis, you can immediately conclude that you should not be integrating over negative values in your integral.
$2.\ \textbf{Next, at $x = 0$, things almost work out but break down anyway:}$ Note, we only managed to eliminate $\frac{1}{2}\big(1 - \sqrt{4x + 1}\big)$ as a candidate for the limit above because it was negative if $x > 0$. Well, if $x = 0$, then $u_- = \frac{1}{2}\big(1 - \sqrt{4x + 1}\big) = 0$ and you can no longer eliminate it that easily. So, we must go back to our definition of $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ in terms of sequences to arbitrate between $u_+$ and $u_-$. Applying that definition when $x = 0$, we see that $u_- = 0$ is the candidate that is chosen this time not $u_+ = 1$. This is because in this case, all the sequence elements $u_{x,n}$ are zero: $$u_{x=0,1} = \sqrt{x} = \sqrt{0} = 0,\quad u_{x=0,2} = \sqrt{x + u_{x=0,1}} = \sqrt{0 + 0} = 0,\quad \ldots \text{ etc}$$
Hence, $\lim\limits_{n \to \infty}u_{x=0,n} = 0$ and $u(0) = 0$. However, approaching $0$ from the right, we see that$$\lim\limits_{x \to 0+}u(x) = \frac{1}{2}\big(1 + \sqrt{4\cdot0 + 1}\big) = 1$$ And therefore even though $u(x)$ is defined at $x = 0$, it is sadly not continuous there,
let alone differentiable. So the $u$-substitution Theorem no longer applies.
In any case, there is an even worse problem when $x = 0$. Note that the function you are trying to integrate $f(x) = \sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ is undefined at $x = 0$ because as we saw, our definition of $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ in terms of sequences gives you a $0$ when $x = 0$. So there would be a $0$ in your denominator for $f(x)$ if that was allowed.
$\underline{\textit{Okay, so we have concluded so far that:}}$
As long as your integration interval $[a, b]$ satisfies $0 < a \leq b$, your work should go through and you can use $u(x) = \frac{1}{2}\big(1 + \sqrt{4x + 1}\big)$ as the explicit formula for $u$ to express your final integral answer:
$$
\int_a^b \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}} = \Big[\big(1 + \sqrt{4x + 1}\big) - \ln\Big(\frac{1}{2}\big(1 + \sqrt{4x + 1}\big)\Big)\Big]_a^b
$$
$\underline{\textit{Fixing the breakdown at $x = 0$:}}$
If you really want $x = 0$ as one of the limits e.g.$$\int_0^b \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}}$$ for $b > 0$, you can do so in two ways, both of which lead to the same result:
You can modify the definition of $\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}$ thus: it defaults to the usual definition via sequences if $x > 0$ and to $\frac{1}{2}(1 + \sqrt{4 \cdot 0 + 1}) = 1$ if $x = 0$. Then you can safely use $u(x) = \frac{1}{2}\big(1 + \sqrt{4x + 1}\big)$ for all $x \geq 0$. And the answer you will get for your integral is exactly what you would expect by plugging in $a = 0$ in the closed form I gave above:$$\big(1 + \sqrt{4b + 1}\big) - \ln\Big(\frac{1}{2}\big(1 + \sqrt{4b + 1}\big)\Big) - 2$$
On the other hand, you can instead take a limiting integral in the same spirit that we define $\int_0^b \frac{1}{x^2}dx$ to get around the singularity of $\frac{1}{x^2}$ at $0$. That is, you can define:\begin{align*}\int_0^b \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}} &:= \lim_{a \to 0+}\int_a^b \frac{dx}{\sqrt{x+\sqrt{x+\sqrt{x+\ldots}}}} \\&= \lim_{a \to 0+}\big[2u(x) - \ln(u(x))\big]_a^b\end{align*} This leads to the same answer because ultimately $\lim\limits_{a \to 0^+}u(x) = 1$.
|
The Borel-Cantelli Lemma
Today we're chatting about the
Borel-Cantelli Lemma: Let $(X,\Sigma,\mu)$ be a measure space with $\mu(X)< \infty$ and suppose $\{E_n\}_{n=1}^\infty \subset\Sigma$ is a collection of measurable sets such that $\displaystyle{\sum_{n=1}^\infty \mu(E_n)< \infty}$. Then$$\mu\left(\bigcap_{n=1}^\infty \bigcup_{k=n}^\infty E_k \right)=0.$$
When I first came across this lemma, I struggled to understand what it meant "in English."
What does $\mu(\cap_n\cup_k E_k)=0$ really signify?? There's a pretty simple explanation if $(X,\Sigma,\mu)$ is a probability space, but how are we to understand the result in the context of general measure spaces?
The first step towards answering this question is to recognize that $\mu(\cap_n\cup_k E_k)=0$ is the same as saying if $A=\{x\in X:\text{there exists infinitely many $n$ such that $x\in E_n$}\}$ then $\mu(A)=0$. And
this is equivalent to the statement almost every $x\in X$ lives in at most finitely many $E_n$.
So in other words, for almost every $x\in X$, there is some finite indexing set $A_x=\{n_1,\ldots,n_m\}\subset\mathbb{N}$ such that $$x\in\bigcap_{i\in A_x} E_i,$$ which is to say that almost every $x\in X$ lies in the finitely many sets $E_{n_1},\ldots,E_{n_m}$ and in no others.
To make things a little more concrete, let's look at an example to see the Borel-Cantelli Lemma in action.
Example
Suppose $(X,\Sigma,\mu)$ is a measure space with $\mu(X)< \infty$ and suppose $\{f_n:X\to\mathbb{C}\}$ is a sequence of measurable functions. Show there exists a sequence $\{c_n\}$ of positive numbers such that $\displaystyle\lim_{n\to\infty} c_nf_n(x)=0$ for almost every $x\in X$.
Train of Thought: Our goal for the proof is to make $|c_nf_n(x)|\leq\frac{1}{n}$ for $n$ large enough and for a.e. $x\in X$, for then we can bound it above by $\epsilon$ for any $\epsilon>0$. So we ask ourselves, "Is there a set (of positive measure) of $x\in X$ where $|f_n(x)|\leq k_n$ for some positive constant $k_n$?" If so, then we can write $$|c_nf_n(x)|\leq c_nk_n \overset{\text{want}}{=}1/n.$$ The last equality is what we want, and this behooves us to choose $c_n=1/nk_n$. But does such a $k_n$ even exist? The answer is yes (!), and this is where Borel-Cantelli comes in.
Proof
We start by claiming that for each $n$, there is $k_n\in\mathbb{R}$ such that $$\mu(\{x\in X:|f_n(x)|>k_n\})< \frac{1}{2^n}.$$ Indeed notice that for a fixed $n$, the sets $E^{(n)}_k=\{x\in X:|f_n(x)|>k\}$ for $k=1,2,3,\ldots$ satisfy $E^{(n)}_1\supset E^{(n)}_2\supset E^{(n)}_3\supset\cdots$. Further we have $$\bigcap_{k=1}^\infty E^{(n)}_k=\varnothing $$ since the $f_n$ map into $\mathbb{C}$ (i.e. there is no $x\in X$ nor $n\in\mathbb{N}$ for which $f_n(x)=\infty$). Thus, since $\mu(E^{(n)}_1)\leq\mu(X)<\infty$, we may use continuity from above to conclude $$0=\mu(\varnothing)=\mu\left(\bigcap_{k=1}^\infty E^{(n)}_k\right)=\lim_{k\to\infty}\mu(E^{(n)}_k).$$ Hence there is a $k=k_n$ large enough so that $$E_n=\{x\in X:|f_n(x)|>k_n\} \qquad \text{satisfies} \qquad \mu(E_n)< \frac{1}{2^n}.$$ Now consider the collection $\{E_n\}_{n=1}^\infty$ and notice that $\displaystyle{\sum_{n=1}^\infty \mu(E_n)<1<\infty}$. Thus by the Borel-Cantelli Lemma, almost every $x\in X$ lives in at most finitely many $E_n$. In other words, fix $x\in X$ (off of a set of measure zero). Then there is a finite indexing set $A_x=\{n_1,\ldots,n_m\}\subset\mathbb{N}$ (with, say, $n_i< n_{i+1}$) such that $x\in E_n$ only when $n\in A_x$. In particular $x\not\in E_n$ for any $n>n_m$, so $|f_n(x)|\leq k_n$ for all $n>n_m$. This prompts us to choose $$c_n=\frac{1}{nk_n}.$$ Indeed fix $\epsilon >0$. Then for all $n>\max\{n_m,\frac{1}{\epsilon}\}$, we have $$|c_nf_n(x)|\leq \frac{1}{nk_n}\cdot k_n=\frac{1}{n}< \epsilon,$$ which implies $c_nf_n\to 0$ almost everywhere on $X$.
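To make the "at most finitely many" phenomenon tangible, here is a small illustration of mine (not part of the original post): on $[0,1]$ with Lebesgue measure, take $E_n=(0,2^{-n})$, so $\sum_n\mu(E_n)=1<\infty$; any sampled $x>0$ lands in only finitely many $E_n$.

```python
import random

# E_n = (0, 2^{-n}) in [0,1]; the measures sum to 1, so Borel-Cantelli applies.
# A point x lies in E_n exactly when x < 2^{-n}, i.e. for only finitely many n.
for _ in range(5):
    x = random.random()
    hits = [n for n in range(1, 60) if x < 2.0 ** -n]
    print(f"x = {x:.6f} lies in {len(hits)} of the sets E_1, ..., E_59")
```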
|
The Yoneda Embedding
Last week we began a discussion about the Yoneda lemma. Though rather than stating the lemma (sans motivation), we took a leisurely stroll through an implication of its corollaries - the Yoneda perspective, as we called it: An object is completely determined by its relationships to other objects, i.e. by what the object "looks like" from the vantage point of each object in the category.
But this left us wondering,
What is the mathematics behind this idea? And what are the actual corollaries? In this post, we'll work to discover the answers. To begin, let's put concrete math behind these three abstract expressions: "...a relationship between two objects...", "...the vantage point of each object in a category...", "...an object is completely characterized by..."
We'll unwind the expressions one by one. In what follows, let $\mathsf{Set}$ denote the category of sets and let $\mathsf{C}$ be any category.
A "relationship between two objects" is a morphism.
Let's say that two objects $X$ and $Y$ in $\mathsf{C}$ share a
relationship if there is a morphism between them. For example, if $X$ is a topological space with the discrete topology, there are lots of relationships - lots of continuous functions - from $X$ to $Y$ for any space $Y$. In fact, all maps out of a discrete space are continuous.
On the other hand, there are very few relationships between objects in the category of fields - there are no field homomorphisms between fields of different characteristics!
"...the vantage point of each object..."is encoded by a functor
To analyze an object $X$ "from the vantage point of all objects in $\mathsf{C}$," we need a way to keep track of the network of relationships that $X$ shares with all objects in $\mathsf{C}$. This 'network' is precisely the set of all morphisms both
to and from $X$, i.e. the sets $$\text{hom}(Z,X)\qquad\text{and}\qquad\text{hom}(X,Z)\qquad\text{for all $Z$ in $\mathsf{C}$.}$$
Notice that we want a different set for
each $Z$ in $\mathsf{C}$. An efficient way to handle this is via the contravariant functor $\text{hom}(-,X):\mathsf{C}^{op}\to\mathsf{Set}$ that sends $Z$ to the set $\text{hom}(Z,X)$ and a morphism $f:Z\to W$ to its pullback $f^*$ (defined by precomposing with $f$). Likewise, the sets $\text{hom}(X,Z)$ for all $Z$ in $\mathsf{C}$ lie in the image of the (covariant) functor $\text{hom}(X,-):\mathsf{C}\to\mathsf{Set}$.
"...an object is 'completely determined by'..."means you know it up to isomorphism.
To say "an object $X$ is
completely determined by..." means that $X$ is - up to isomorphism - the only object characterized by whatever comes after the ellipsis. In the first paragraph of this post, it was "...their relationships to other objects." (Though typically, a universal property follows the ellipsis. This is no surprise. In light of the Yoneda perspective the two addendums go hand in hand!) The upshot is that if $Y$ relates to all other objects in $\mathsf{C}$ in the same way that $X$ does - that is, if $Y$ looks just like $X$ from the vantage point of the full category - then $X$ and $Y$ must be isomorphic, and conversely.
For example, suppose $X$ and $Y$ are topological spaces and let $\bullet$ denote the one-point space and $I$ and $S^1$ the unit interval and the circle. Then,
$X$ and $Y$ have the same cardinality if and only if $\text{hom}(\bullet,X)\cong\text{hom}(\bullet,Y)$.
$X$ and $Y$ have the same path space if and only if $\text{hom}(I,X)\cong\text{hom}(I,Y)$.
$X$ and $Y$ have the same (free) loop space if and only if $\text{hom}(S^1,X)\cong\text{hom}(S^1,Y)$.
The last two bullets hold by definition of path and loop space. (They are, respectively, the space of all continuous functions from $I$ and $S^1$ to $X$.) The first bullet holds simply because a map $\bullet\to X$ is the same as a choice of point in $X$. We might even say that $\bullet\to X$ is a "$\bullet$-shaped element" of $X$. Similarly, a path $I\to X$ can be thought of as an "$I$-shaped element" of $X$, while a loop $S^1\to X$ is an "$S^1$-shaped element." Essentially we're using $\bullet$, $I$ and $S^1$ to probe $X$ and $Y$. And to obtain a complete picture, we must probe them with - i.e. view them from the vantage point of -
all spaces.
With the mathematics in place, the slogan "an object $X$ is completely determined by its relationships to other objects" now crystallizes into two points:
point #1. Everything we need to know about X is encoded in hom(-, X). In effect, the object X represents the functor hom(-, X).
point #2. X and Y are isomorphic if and only if their represented functors hom(-,X) and hom(-,Y) are isomorphic.
Let's think about point #1 now and revisit point #2 next time. So, look at point #1 again. Can we really identify an
object with a functor? There's clearly an assignment $$ X\mapsto \text{hom}(-,X) $$ since any object $X$ in the category $\mathsf{C}$ gives rise to a functor $\text{hom}(-,X)$ in... well... in what? Where does $\text{hom}(-,X)$ live? It, too, lives in a category! As we mentioned long ago, there is a category $\mathsf{Set}^{\mathsf{C}^{op}}$ whose objects are functors $\mathsf{C}^{op}\to\mathsf{Set}$ and whose morphisms are natural transformations. Therefore (and you should verify this) there is a functor $\mathscr{Y}:\mathsf{C}\to \mathsf{Set}^{\mathsf{C}^{op}}$ that sends an object $X$ to $\text{hom}(-,X)$ and a morphism $f:X\to Y$ to the natural transformation $f_*: \text{hom}(-,X)\to \text{hom}(-,Y)$. (Each component of this natural transformation is given by postcomposing with $f$.) Functors in the category $\mathsf{Set}^{\mathsf{C}^{op}}$ are called presheaves, and the presheaves we're interested in (i.e. those of the form $\text{hom}(-,X)$) are called representable functors. But we need to justify this nomenclature. Does $X$ truly, faithfully, and to the fullest extent represent the functor $\text{hom}(-,X)$?
The answer is "yes" under one condition: as $\mathscr{Y}$ sends $X$ to the presheaf category, it should
preserve relationships that $X$ shares with objects in $\mathsf{C}$. In other words, for each relationship (morphism) between $X$ and $Y$, there should exist exactly one relationship (natural transformation) between $\text{hom}(-,X)$ and $\text{hom}(-,Y)$. More formally, for every pair $X,Y$ in $\mathsf{C}$, the function $$\text{hom}(X,Y)\to\mathsf{Nat}(\text{hom}(-,X),\text{hom}(-,Y))$$ defined by $f\mapsto f_*$ should be a bijection. (The notation $\mathsf{Nat}(-,-)$ means the set of natural transformations from [blank] to [blank].) If $\mathscr{Y}$ satisfies this condition, then it is called fully faithful* and is said to embed the category $\mathsf{C}$ into $\mathsf{Set}^{\mathsf{C}^{op}}$.
But the question remains - is it true?
Is the function $$\text{hom}(X,Y)\to\mathsf{Nat}(\text{hom}(-,X),\text{hom}(-,Y))$$ that sends $f$ to $f_*$ a bijection? Injectivity is clear: if $f,g:X\to Y$ are distinct morphisms, then their pushforwards $f_*$ and $g_*$ are distinct. But what about surjectivity? Given any natural transformation $\eta:\text{hom}(-,X)\to\text{hom}(-,Y)$, is there a morphism $f:X\to Y$ so that $\eta=f_*$? That is, does every natural transformation between representable functors arise from a morphism between their representing objects?
There could be
tons of natural transformations between $\text{hom}(-,X)$ and $\text{hom}(-,Y)$! And there's no good reason to expect any of them should come from a morphism $X\to Y$. Except there is.
Because the answer is yes!
YES! For every natural transformation $\eta:\text{hom}(-,X)\to \text{hom}(-,Y)$ there is exactly one morphism $f:X\to Y$ such that $\eta=f_*$.
And THIS is an immediate consequence of the Yoneda lemma. In fact, some folks might call
this the Yoneda lemma.
The result is that $\mathscr{Y}$ fully and faithfully embeds $\mathsf{C}$ into $\mathsf{Set}^{\mathsf{C}^{op}}$. (This is the formal way of phrasing "point #1" above.) And for this reason, $\mathscr{Y}$ is called the
Yoneda embedding.
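As a toy illustration of point #1 (the code is mine, not from the original post): a one-object category is just a monoid $M$, and a brute-force check confirms that the natural transformations $\text{hom}(-,\ast)\to\text{hom}(-,\ast)$ are exactly the left multiplications by elements of $M$, so there are $|M|$ of them. A minimal sketch, taking $M=\mathbb{Z}/3$:

```python
from itertools import product

# One-object category = monoid M; hom(*, *) = M with composition op.
M = [0, 1, 2]
op = lambda a, b: (a + b) % 3   # composition in M (here: Z/3 under addition)

# A natural transformation hom(-, *) -> hom(-, *) is a function f: M -> M
# satisfying the naturality square f(x . a) = f(x) . a for all x, a.
nats = [f for f in product(M, repeat=len(M))
        if all(f[op(x, a)] == op(f[x], a) for x in M for a in M)]

# Yoneda predicts a bijection with hom(*, *) = M itself.
print(len(nats), "natural transformations;", len(M), "morphisms")  # 3 and 3
```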
But this - the fact that morphisms $X\to Y$ are in bijection with natural transformations $\text{hom}(-,X)\to\text{hom}(-,Y)$ - is merely a
consequence of the Yoneda lemma. As we'll see next week, it says something much stronger! It tells us something about natural transformations $\text{hom}(-,X)\to F$ for any functor $F:\mathsf{C}^{op}\to\mathsf{Set}$.
*More generally, given any functor $F:\mathsf{C}\to\mathsf{D}$, there is a function $\text{hom}_{\mathsf{C}}(X,Y)\to\text{hom}_{\mathsf{D}}(F(X),F(Y))$ given by $f\mapsto F(f)$. If this map is an injection, $F$ is called
faithful; if it is a surjection, $F$ is called full; and if it is a bijection, $F$ is called fully faithful. And here's a handy chart for naming other types of functors.
In this series:
|
In this section we will explain how to solve the following type of problem:
Given several restrictions (inequations), determine the region of the plane that satisfies all of them, by giving its vertices.
In inequation exercises we usually find more than one restriction acting simultaneously on the variables. For example, if we have to find how many chairs (of $$10$$ kg each) and tables (of $$20$$ kg each) can be carried by a truck which cannot carry more than $$1000$$ kg, we must also take into account that the numbers of chairs and tables have to be positive. Therefore, we do not only have to consider the weight restriction for the truck:
(i) $$10\cdot x+20\cdot y\leqslant 1000$$
but also restrictions of being positive both the number of chairs ($$x$$) and the number of tables ($$y$$):
(ii) $$x\geqslant0$$
(iii) $$y\geqslant0$$
Each of these restrictions has an associated straight line in the XY plane, which separates the plane into two regions: the validity region (the region where the restriction is satisfied) and the region where it is not satisfied. We now give the straight lines and validity regions for the three restrictions:
(i) The restriction is: $$$10\cdot x+20\cdot y\leqslant 1000$$$
and therefore the associated straight line is: $$$ f(x)=-\dfrac{1}{2}\cdot x+50$$$
If we also try the point $$(x=0,y=0)$$ in the inequation:
$$$10\cdot 0+20\cdot 0\leqslant 1000$$$
therefore the validity region will be the one that contains the point $$(0,0)$$:
(ii) The restriction is: $$$x\geqslant0$$$
This type of restriction represents a vertical straight line (parallel to the $$y$$ axis) that separates the values of $$x$$ greater than and less than $$0$$ respectively. The validity region consists of the values of $$x$$ greater than (or equal to) zero:
(iii) The restriction is: $$$y\geqslant0$$$
The straight line associated with this restriction is: $$$g(x)=0$$$
and the validity area is, obviously, the region over $$g(x)$$, $$\ y\geqslant0$$:
Now we need the region of the XY plane where all the restrictions are satisfied simultaneously. This region is the intersection of the individual validity regions. For the case of the chairs and the tables it is the triangle bounded by the straight line $$f(x)$$ and the axes $$x$$ and $$y$$:
We can see that in this case, taking all the restrictions into account simultaneously, the validity region (from now on we will refer to the region common to all the validity areas of the different restrictions simply as the validity region) is a bounded area of the plane. In the previous examples the validity regions stretched off to infinity on some side, which is why those areas were not bounded.
To describe the validity region well, the coordinates of its vertices have to be known. In this case it is very simple. We already know the coordinates of one of the points: $$(0,0)$$. The other two vertices are the points where the straight line $$f(x)$$ crosses the axes.
The intersection points with the axes can be found easily:
For the intersection point with the $$y$$ axis, we only have to know that the whole $$y$$ axis has coordinate $$x=0$$, and the value of $$y$$ at the intersection point will be the value that the function $$f(x)=-\dfrac{1}{2}\cdot x+50$$ takes at $$x=0$$ (on the axis). So the intersection point will be: $$(x=0,y=f(0)=50)$$.
And the intersection point with the $$x$$ axis is where $$y = 0$$, that is to say, at the value of $$x$$ where the function takes the value $$0$$: $$$ f(x)=0 \Rightarrow -\dfrac{1}{2}\cdot x+50=0 \Rightarrow x=100$$$ and therefore the point where the straight line $$f(x)$$ crosses the $$x$$ axis is $$(x=100,y=0)$$.
So the vertices of the validity region have coordinates: $$$ (0,0) \quad (0,50) \quad (100,0)$$$
In this case it has been very easy to find the vertices.
The following example will illustrate the most general way to find the vertices of the validity region.
We have the following restrictions: $$$\begin{array}{rcl} x+y &\geqslant& 4 \\ y &\leqslant& 4 \\ y &\geqslant& x \end{array}$$$
whose associated straight lines are: $$$\begin{array}{l} r(x): \ y=-x+4 \\ s(x):\ y=4 \\ t(x):\ y=x \end{array} $$$
We can visualize these straight lines and determine the semiplanes where every inequation is satisfied separately.
For the straight line $$r$$:
Since the inequation is not satisfied at point $$(0,0)$$, $$\ 0+0\ngeqslant 4$$, we see that the area of validity of the inequation is the semiplane over the straight line.
For the straight line $$s$$:
Since the inequation is satisfied at point $$(0,0)$$, $$\ 0\leqslant 4$$, we see that the area of validity of the inequation is the semiplane below the straight line.
For the straight line $$t$$:
Since the inequation is satisfied at point $$(0,1)$$, $$\ 1\geqslant 0$$, we see that the area of validity of the inequation is the semiplane above the straight line.
As a whole we have:
The area where all the semiplanes overlap is the feasible region. We see that in this case it is again a bounded area.
Now we have to calculate the vertices of this area. To do so, we need the points where the straight lines cross. We will have to find three intersection points: that of the straight line $$r$$ with $$s$$, that of $$r$$ with $$t$$, and that of $$s$$ with $$t$$.
How to find the point of intersection of two straight lines:
To know a point means to know the coordinates $$x$$ and $$y$$ of that point. If two straight lines $$f(x)$$ and $$g(x)$$ cross, both functions take the same value at the point where they cross. Graphically it is:
Therefore we know that the intersection point will be $$(x_0,y_0)$$ and that the value of $$y_0$$ equals the common value of the two functions at $$x_0$$, since the two functions have to take the same value there in order to cross: $$$ f(x_0)=g(x_0) $$$
Back to the example.
Intersection point between $$r$$ and $$s$$:
We have to find the coordinates $$x$$ and $$y$$ of the intersection point. We will call the coordinates of this point $$x_0$$ and $$y_0$$.
To find the coordinate $$x$$ where the straight lines cross, $$x_0$$, we set the two functions $$r(x)=-x+4$$ and $$s(x)=4$$ equal at the crossing point ($$x_0$$):
$$$ r(x_0)=s(x_0) \Rightarrow -x_0+4=4 \Rightarrow x_0=0$$$
Therefore the two straight lines cross at $$x_0=0$$.
Determining the coordinate $$y$$ of the intersection point $$y_0$$ is simple, for it is the value that both functions $$r(x)$$ and $$s(x)$$ take in $$x=x_0$$.
$$$y_0=r(x_0)=s(x_0) \Rightarrow y_0=r(0)=s(0)=-0+4=4$$$
Therefore the intersection point between the straight lines $$r$$ and $$s$$ is: $$(x_0=0,y_0=4)$$.
Intersection point between $$r$$ and $$t$$:
We proceed just as in the previous case. We set the functions $$r(x)=-x+4$$ and $$t(x)=x$$ equal at the point of intersection, which this time has coordinates $$(x_1,y_1)$$.
$$$ r(x_1)=t(x_1) \Rightarrow -x_1+4=x_1 \Rightarrow x_1=2$$$
As in the previous case, the coordinate $$y$$ of the intersection point, $$y_1$$, is equal to the value that the functions $$r(x)$$ and $$t(x)$$ take at the intersection point:
$$$ y_1=r(x_1)=t(x_1)=-2+4=2$$$
And so, the coordinates of the point of intersection are: $$(x_1=2,y_1=2)$$.
Intersection point between $$s$$ and $$t$$:
This intersection point will have coordinates $$(x_2,y_2)$$. First we determine the value of $$x_2$$ as in the previous cases, that is to say, by setting $$s(x)=4$$ and $$t(x)=x$$ equal at the intersection point $$x_2$$: $$$ s(x_2)=t(x_2) \Rightarrow 4=x_2 $$$
We determine the value of the coordinate $$y$$ of the point of intersection, $$y_2$$, as on the previous occasions: $$$ y_2=s(x_2)=t(x_2)=4 $$$
Therefore the intersection point between the straight lines $$s(x)$$ and $$t(x)$$ has coordinates: $$(x_2=4,y_2=4)$$.
In short, the coordinates of the vertices of the feasible region are: $$$ (x_0,y_0)=(0,4) \quad (x_1,y_1)=(2,2) \quad (x_2,y_2)=(4,4) $$$
Other examples:
Considering the following inequations system: $$$ \begin{array}{rcl} x &\geqslant& 0 \\ y &\leqslant& 4 \\ y &\geqslant& \dfrac{x}{2} \end{array}$$$
We look first for the straight lines associated with every inequation and the areas of validity of each one:
The first of them gives us a straight line parallel to the $$y$$ axis at $$x=0$$, and its validity region is the one with $$x$$ greater than $$0$$ (to the right of the $$y$$ axis). The second one is a straight line parallel to the $$x$$ axis through $$y=4$$, whose validity region is the semiplane below it (where $$y$$ is less than $$4$$). The third straight line is $$y=\dfrac{x}{2}$$ and its validity region is the one above the straight line (this is easily verified by checking that the point $$(0,1)$$ satisfies the inequation: $$1\geqslant 0$$).
We will determine the vertices of the validity area as the intersection points between the different straight lines.
The straight line $$x=0$$ crosses $$y=4$$ at the point $$(x_0=0,y_0=4)$$. The straight line $$x=0$$ crosses $$y=\dfrac{x}{2}$$ at the point $$(x_1=0,y_1=0)$$. The straight line $$y=4$$ crosses $$y=\dfrac{x}{2}$$ at the point $$(x_2=8,y_2=4)$$.
Therefore, the vertices of the validity region are: $$$ (x_0,y_0)=(0,4) \quad (x_1,y_1)=(0,0) \quad (x_2,y_2)=(8,4) $$$
Given the set of restrictions:
$$$ \begin{array}{rcl} x+3 &\geqslant& y \\ 8 &\geqslant& x+y \\ y &\geqslant& x-3 \\ x &\geqslant& 0 \\ y &\geqslant& 0 \end{array}$$$
We look first for the straight lines associated with every restriction and the validity areas of every inequation (checking each with a point). The associated straight lines are:
$$$ \begin{array}{l} f: \ y=3+x \\ g:\ y=-x+8 \\ h:\ y=x-3 \\ i:\ x=0 \\ j:\ y=0 \end{array}$$$
Drawing the straight lines and the validity areas we can visualize the validity region.
If we cannot draw the picture, we have an alternative. With this many straight lines, we will normally have more intersection points between the lines than vertices of the validity area. For this reason, not all the intersection points between the different straight lines will be vertices of the validity region. To identify which ones are the vertices of the validity region, we proceed as follows:
All the intersection points are calculated between the different straight lines.
Those intersection points where all the restrictions are fulfilled simultaneously will be the vertices of the validity area (which can also help us visualize it, if we have not done so before).
Thus, we are going to calculate all the intersection points between the different straight lines.
$$f$$ with $$g$$ will cross at point $$(x_0,y_0)$$. We calculate the coordinates of the point as has already been done before: $$$f(x_0)=g(x_0) \Rightarrow 3+x_0=-x_0+8 \Rightarrow x_0=\dfrac{5}{2} $$$ And the coordinate $$y$$: $$$y_0=f(x_0)=g(x_0)=\dfrac{11}{2}$$$ Therefore the coordinates of the point of intersection are: $$(x_0,y_0)=(\dfrac{5}{2}, \dfrac{11}{2})$$.
$$f$$ with $$h$$ would cross at point $$(x_1,y_1)$$. We calculate the coordinates of the point as has already been done before: $$$f(x_1)=h(x_1) \Rightarrow 3+x_1=x_1-3 \Rightarrow 3=-3 $$$ When we try to find the coordinate $$x$$ of the intersection point, an equation appears that is never satisfied. This means that the two straight lines are in fact parallel (therefore they never cross).
$$f$$ with $$i$$ will cross at point $$(x_2,y_2)$$. The straight line $$f(x)=3+x$$ crosses the straight line $$x=0$$ (the straight line that coincides with the $$y$$ axis) at the point $$(x=0,y=f(0))$$. That is to say: $$(x_2,y_2)=(0,3)$$.
$$f$$ with $$j$$ will cross at point $$(x_3,y_3)$$. We calculate the coordinates of the point as has already been done before: $$$f(x_3)=j(x_3) \Rightarrow 3+x_3=0 \Rightarrow x_3=-3 $$$ And the coordinate $$y$$, $$$y_3=f(x_3)=j(x_3)=0$$$ Therefore the coordinates of the point of intersection are: $$(x_3,y_3)=(-3,0)$$.
$$g$$ with $$h$$ will cross at point $$(x_4,y_4)$$. We calculate the coordinates of the point as has already been done before: $$$g(x_4)=h(x_4) \Rightarrow -x_4+8=x_4-3 \Rightarrow x_4=\dfrac{11}{2} $$$ And the coordinate $$y$$: $$$y_4=g(x_4)=h(x_4)=\dfrac{5}{2}$$$ Therefore the coordinates of the point of intersection are: $$(x_4,y_4)=(\dfrac{11}{2},\dfrac{5}{2})$$.
$$g$$ with $$i$$ will cross at point $$(x_5,y_5)$$. The straight line $$i$$ tells us $$x=0$$, therefore the intersection point will be: $$(0,g(0))=(0,8)$$. Therefore the coordinates of the point of intersection are: $$(x_5,y_5)=(0,8)$$.
$$g$$ with $$j$$ will cross at point $$(x_6,y_6)$$. We calculate the coordinates of the point as has already been done before: $$$ g(x_6)=j(x_6) \Rightarrow -x_6+8=0 \Rightarrow x_6=8 $$$ And the coordinate $$y$$: $$$y_6=g(x_6)=j(x_6)=0$$$ Therefore the coordinates of the point of intersection are: $$(x_6,y_6)=(8,0)$$.
$$h$$ with $$i$$ will cross at point $$(x_7,y_7)$$. The straight line $$i$$ tells us that $$x=0$$, therefore the intersection point will be: $$(0,h(0))=(0,-3)$$. Therefore the coordinates of the point of intersection are: $$(x_7,y_7)=(0,-3)$$.
$$h$$ with $$j$$ will cross at point $$(x_8,y_8)$$. We calculate the coordinates of the point as has already been done before: $$$ h(x_8)=j(x_8) \Rightarrow x_8-3=0 \Rightarrow x_8=3 $$$ And the coordinate $$y$$: $$$y_8=h(x_8)=j(x_8)=0$$$ Therefore the coordinates of the point of intersection are: $$(x_8,y_8)=(3,0)$$.
$$i$$ with $$j$$ will cross at point $$(x_9,y_9)$$. The straight line $$i$$ tells us that $$x=0$$, and $$j$$ that $$y=0$$, therefore the coordinates of the point of intersection are: $$(x_9,y_9)=(0,0)$$.
Determination of the vertices of the validity area:
We have nine intersection points between the straight lines. As we have said before, we must check at which points all the inequations are satisfied; these will be the vertices of the validity region.
$$$ \begin{array}{l} (x_0,y_0)=(\dfrac{5}{2}, \dfrac{11}{2})\ \text{ all the inequations are satisfied.} \\ (x_2,y_2)=(0,3)\ \text{ all the inequations are satisfied.}\\ (x_3,y_3)=(-3,0)\ \text{ the inequation } x\geqslant0 \text{ is not satisfied.} \\ (x_4,y_4)=(\dfrac{11}{2},\dfrac{5}{2})\ \text{ all the inequations are satisfied.} \\ (x_5,y_5)=(0,8)\ \text{ the inequation } x+3\geqslant y \text{ is not satisfied.} \\ (x_6,y_6)=(8,0)\ \text{ the inequation } y\geqslant x-3 \text{ is not satisfied.} \\ (x_7,y_7)=(0,-3)\ \text{ the inequation } y\geqslant 0 \text{ is not satisfied.} \\ (x_8,y_8)=(3,0)\ \text{ all the inequations are satisfied.}\\ (x_9,y_9)=(0,0)\ \text{ all the inequations are satisfied.} \end{array}$$$
Therefore the vertices of the validity region are: $$$ (x_0,y_0)=(\dfrac{5}{2}, \dfrac{11}{2}) \quad (x_2,y_2)=(0,3)\quad (x_4,y_4)=(\dfrac{11}{2},\dfrac{5}{2})$$$ $$$(x_8,y_8)=(3,0)\quad (x_9,y_9)=(0,0) $$$
To sum up, if we have several inequations simultaneously, each one determines a semiplane where it is satisfied. The intersection of these semiplanes (the region common to all of them) is called the feasible region: the region where all the inequations are satisfied simultaneously. This area can be bounded or unbounded.
We determine the vertices of the validity region by finding the points of intersection of the straight lines two by two. If we have two straight lines: $$$f(x)=ax+b \qquad g(x)=cx+d$$$
The intersection point will be $$(x_0,f(x_0))$$ or equivalently $$(x_0,g(x_0))$$. To determine the intersection point we do: $$$ f(x_0)=g(x_0) \Rightarrow ax_0+b=cx_0+d$$$
The solution to this equation is: $$$x_0=\dfrac{d-b}{a-c}$$$
And the functions $$f(x)$$ and $$g(x)$$ take at this point the value: $$$f(x_0)=g(x_0)=\dfrac{ad-bc}{a-c}$$$
So the intersection point of the two straight lines is: $$$\Big( x_0=\dfrac{d-b}{a-c}, y_0=\dfrac{ad-bc}{a-c} \Big) $$$
In this way we calculate the vertices of the validity region, or feasible region.
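The two-by-two intersection procedure just described is mechanical enough to automate. Here is a minimal sketch of mine (for illustration only) that recomputes the vertices of the example with lines $$r$$, $$s$$, $$t$$ above; the tolerance `1e-9` simply guards against floating-point noise.

```python
from itertools import combinations

# Lines y = a*x + b given as (a, b); restrictions as predicates on a point.
lines = [(-1.0, 4.0),   # r: y = -x + 4
         (0.0, 4.0),    # s: y = 4
         (1.0, 0.0)]    # t: y = x

restrictions = [lambda x, y: x + y >= 4 - 1e-9,
                lambda x, y: y <= 4 + 1e-9,
                lambda x, y: y >= x - 1e-9]

vertices = []
for (a, b), (c, d) in combinations(lines, 2):
    if a == c:                     # parallel lines never cross
        continue
    x0 = (d - b) / (a - c)         # solve f(x0) = g(x0)
    y0 = a * x0 + b
    if all(ok(x0, y0) for ok in restrictions):
        vertices.append((x0, y0))

print(vertices)                    # expect (0,4), (2,2), (4,4)
```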
|
A system of inequations with one variable is a set of inequations in one variable that must hold simultaneously; that is, the solution points must satisfy all the inequations of the system.
An example of a system of inequations is:
$$$ \left\{ \begin{array}{l} x-1 > 0 \\ x+2(1-x) < 4+x \\ 2x < 8 \end{array}\right. $$$
In this example we can see that the respective solutions of every inequation are: $$$ \left\{ \begin{array}{l} x > 1 \\ x > -1 \\ x < 4 \end{array}\right. $$$
and, as the solution of the system must satisfy every inequation, it is clear that it will be satisfied only by the points between $$1$$ and $$4$$ (the solution is $$1 < x < 4$$).
As we have already seen in the example, solving a system of inequations with one variable consists of solving each inequation separately and, at the end, taking the intersection of their solution sets; in other words, keeping the most restrictive inequations.
Systems of inequations with one variable can be formed by first-degree inequations and quadratic inequations (in fact, systems of inequations of any degree, and even non-polynomial ones, can appear, but solving those is much more complicated). In these cases (first and second degree), we solve every inequation and then take the intersection of all the solutions.
Sometimes, when we look for the intersection of the solution sets, we may find that it does not exist because the inequations are incompatible (in such a case we say that the system has no solution), or that the set reduces to a single point.
For example:
$$x < 1$$ and $$x > 3$$ This set of solutions of two inequations is incompatible, therefore we say that the system has no solution.
$$x \leqslant 5$$ and $$x \geqslant 5$$ This set of solutions of two inequations is compatible, and the intersection of these is the number $$x = 5$$.
Finally, let's take a look at an example of a system of inequations with one variable.
Let's consider the system: $$$ \left\{ \begin{array}{l} 2x-4 > 2 \\ 2x-1 < 7 \end{array}\right. $$$
We are going to solve it by isolating $$x$$ in both inequations: $$$ \left\{ \begin{array}{l} 2x-4 > 2 \\ 2x-1 < 7 \end{array}\right. \Rightarrow \left\{ \begin{array}{l} x > \dfrac{2+4}{2}=3 \\ x < \dfrac{7+1}{2}=4 \end{array}\right.$$$
therefore the solution is $$3 < x < 4$$.
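For what it's worth, computer algebra systems can carry out exactly this intersection of solution sets; here is a minimal sketch (mine) using SymPy's `reduce_inequalities`, assuming SymPy is installed:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
system = [2*x - 4 > 2, 2*x - 1 < 7]        # the example system above
print(sp.reduce_inequalities(system, x))   # (3 < x) & (x < 4)
```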
|
expr_t_onesample
expr_t_parametric
expr_anova_parametric
expr_contingency_tab
expr_corr_test
This vignette provides a go-to summary for which test is carried out for each function included in the package and what effect size it returns. Additionally, there are also recommendations on how to interpret those effect sizes.
Note that the following recommendations on how to interpret the effect sizes are just suggestions; there is nothing universal about them. The interpretation of any effect size measure is always relative to the discipline, the specific data, and the aims of the analyst. The guidelines here are given for small, medium, and large effects, and the references should shed more light on the discipline for which each guideline was originally recommended. This is important because what might be considered a small effect in psychology might be large for some other field like public health.
expr_t_onesample
Test: One-sample t-test
Effect size: Cohen’s d, Hedges’ g

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| Cohen’s d | 0 – < 0.20 | 0.20 – < 0.50 | ≥ 0.80 | [0,1] |
| Hedges’ g | 0 – < 0.20 | 0.20 – < 0.50 | ≥ 0.80 | [0,1] |

Test: One-sample Wilcoxon signed-rank test
Effect size: r ( = \(Z/\sqrt{N_{obs}}\))

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| r | 0.10 – < 0.30 | 0.30 – < 0.50 | ≥ 0.50 | [0,1] |

Test: One-sample percentile bootstrap test
Effect size: robust location measure
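To make the formulas above concrete, here is a small illustration of mine (not part of the vignette) of computing the two effect sizes named in this section; the function names and sample numbers are hypothetical.

```python
import math

def cohens_d_one_sample(sample, mu0):
    # (sample mean - hypothesized mean) / sample standard deviation
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in sample) / (n - 1))
    return (mean - mu0) / sd

def r_from_z(z, n_obs):
    # r = Z / sqrt(N_obs), as for the one-sample Wilcoxon signed-rank test
    return z / math.sqrt(n_obs)

print(cohens_d_one_sample([4.8, 5.2, 5.5, 4.9, 5.1], 5.0))
print(r_from_z(2.1, 40))
```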
expr_t_parametric
Test: Student’s dependent samples t-test
Effect size: Cohen’s d, Hedges’ g

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| Cohen’s d | 0.20 | 0.50 | 0.80 | [0,1] |
| Hedges’ g | 0.20 | 0.50 | 0.80 | [0,1] |

Test: Wilcoxon signed-rank test
Effect size: r ( = \(Z/\sqrt{N_{pairs}}\))

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| r | 0.10 – < 0.30 | 0.30 – < 0.50 | ≥ 0.50 | [0,1] |

Test: Yuen’s dependent sample trimmed means t-test
Effect size: Explanatory measure of effect size (\(\xi\))

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| \(\xi\) | 0.10 – < 0.30 | 0.30 – < 0.50 | ≥ 0.50 | [0,1] |

Test: Student’s and Welch’s independent samples t-test
Effect size: Cohen’s d, Hedges’ g

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| Cohen’s d | 0.20 | 0.50 | 0.80 | [0,1] |
| Hedges’ g | 0.20 | 0.50 | 0.80 | [0,1] |

Test: Two-sample Mann–Whitney U test
Effect size: r ( = \(Z/\sqrt{N_{obs}}\))

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| r | 0.10 – < 0.30 | 0.30 – < 0.50 | ≥ 0.50 | [0,1] |

Reference: https://rcompanion.org/handbook/F_04.html

Test: Yuen’s independent sample trimmed means t-test
Effect size: Explanatory measure of effect size (\(\xi\))

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| \(\xi\) | 0.10 – < 0.30 | 0.30 – < 0.50 | ≥ 0.50 | [0,1] |
expr_anova_parametric
Test: Fisher’s repeated measures one-way ANOVA
Effect size: \(\eta^2_p\), \(\omega^2\)

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| \(\omega^2\) | 0.01 – < 0.06 | 0.06 – < 0.14 | ≥ 0.14 | [0,1] |
| \(\eta^2_p\) | 0.01 – < 0.06 | 0.06 – < 0.14 | ≥ 0.14 | [0,1] |

Test: Friedman’s rank sum test
Effect size: Kendall’s W

In the following table, k is the number of treatments, groups, or things being rated.

| k | Small | Medium | Large | Range |
|---|---|---|---|---|
| k = 3 | < 0.10 | 0.10 – < 0.30 | ≥ 0.30 | [0,1] |
| k = 5 | < 0.10 | 0.10 – < 0.25 | ≥ 0.25 | [0,1] |
| k = 7 | < 0.10 | 0.10 – < 0.20 | ≥ 0.20 | [0,1] |
| k = 9 | < 0.10 | 0.10 – < 0.20 | ≥ 0.20 | [0,1] |

Test: Heteroscedastic one-way repeated measures ANOVA for trimmed means
Effect size: Not available

Test: Fisher’s or Welch’s one-way ANOVA
Effect size: \(\eta^2\), \(\eta^2_p\), \(\omega^2\), \(\omega^2_p\)

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| \(\eta^2\) | 0.01 – < 0.06 | 0.06 – < 0.14 | ≥ 0.14 | [0,1] |
| \(\omega^2\) | 0.01 – < 0.06 | 0.06 – < 0.14 | ≥ 0.14 | [0,1] |
| \(\eta^2_p\) | 0.01 – < 0.06 | 0.06 – < 0.14 | ≥ 0.14 | [0,1] |
| \(\omega^2_p\) | 0.01 – < 0.06 | 0.06 – < 0.14 | ≥ 0.14 | [0,1] |

Test: Kruskal–Wallis test
Effect size: \(\epsilon^2\)

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| \(\epsilon^2\) | 0.01 – < 0.08 | 0.08 – < 0.26 | ≥ 0.26 | [0,1] |

Reference: https://rcompanion.org/handbook/F_08.html

Test: Heteroscedastic one-way ANOVA for trimmed means
Effect size: Explanatory measure of effect size (\(\xi\))

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| \(\xi\) | 0.10 – < 0.30 | 0.30 – < 0.50 | ≥ 0.50 | [0,1] |
expr_contingency_tab
Test: Pearson’s \(\chi^2\) test
Effect size: Cramér’s V

In the following table, k is the minimum number of categories in either rows or columns.

| k | Small | Medium | Large | Range |
|---|---|---|---|---|
| k = 2 | 0.10 – < 0.30 | 0.30 – < 0.50 | ≥ 0.50 | [0,1] |
| k = 3 | 0.07 – < 0.20 | 0.20 – < 0.35 | ≥ 0.35 | [0,1] |
| k = 4 | 0.06 – < 0.17 | 0.17 – < 0.29 | ≥ 0.29 | [0,1] |

Reference: https://rcompanion.org/handbook/H_10.html

Test: McNemar’s test
Effect size: Cohen’s g

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| Cohen’s g | 0.05 – < 0.15 | 0.15 – < 0.25 | ≥ 0.25 | [0,1] |

Reference: https://rcompanion.org/handbook/H_05.html

Test: Pearson’s \(\chi^2\) goodness-of-fit test
Effect size: Cramér’s V

In the following table, k is the number of categories.

| k | Small | Medium | Large | Range |
|---|---|---|---|---|
| k = 2 | 0.100 – < 0.300 | 0.300 – < 0.500 | ≥ 0.500 | [0,1] |
| k = 3 | 0.071 – < 0.212 | 0.212 – < 0.354 | ≥ 0.354 | [0,1] |
| k = 4 | 0.058 – < 0.173 | 0.173 – < 0.289 | ≥ 0.289 | [0,1] |
| k = 5 | 0.050 – < 0.150 | 0.150 – < 0.250 | ≥ 0.250 | [0,1] |
| k = 6 | 0.045 – < 0.134 | 0.134 – < 0.224 | ≥ 0.224 | [0,1] |
| k = 7 | 0.043 – < 0.130 | 0.130 – < 0.217 | ≥ 0.217 | [0,1] |
| k = 8 | 0.042 – < 0.127 | 0.127 – < 0.212 | ≥ 0.212 | [0,1] |
| k = 9 | 0.042 – < 0.125 | 0.125 – < 0.209 | ≥ 0.209 | [0,1] |
| k = 10 | 0.041 – < 0.124 | 0.124 – < 0.207 | ≥ 0.207 | [0,1] |

Reference: https://rcompanion.org/handbook/H_03.html
expr_corr_test
Test: Pearson product-moment correlation coefficient
Effect size: Pearson’s correlation coefficient (r)

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| Pearson’s r | 0.10 – < 0.30 | 0.30 – < 0.50 | ≥ 0.50 | [-1,1] |

Test: Spearman’s rank correlation coefficient
Effect size: Spearman’s rank correlation coefficient (\(\rho\))

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| Spearman’s \(\rho\) | 0.10 – < 0.30 | 0.30 – < 0.50 | ≥ 0.50 | [-1,1] |

Test: Percentage bend correlation coefficient
Effect size: Percentage bend correlation coefficient (\(\rho_{pb}\))

| Effect size | Small | Medium | Large | Range |
|---|---|---|---|---|
| \(\rho_{pb}\) | 0.10 – < 0.30 | 0.30 – < 0.50 | ≥ 0.50 | [-1,1] |
If you find any bugs or have any suggestions/remarks, please file an issue on GitHub: https://github.com/IndrajeetPatil/ggstatsplot/issues
|
In this reference the author states what he calls "
the theory of local solutions" for separable ordinary differential equations of the form $\frac{dy}{dx} = \frac{f(x)}{g(y)}$. He asserts that it suffices for $f$ and $g$ to be continuous and not to vanish simultaneously in a rectangular area $R$ of the plane in order for a unique solution to exist given an initial condition $(x_0,y_0)\in R$, but I have difficulty in interpreting his claim.
He does not specify what the domain of the solution will be, but since he talks about "local" solutions I believe that he is claiming that for each $(x_0,y_0)\in R$ there exists an open set $I \subset \mathbb{R}$, contained in the projection of $R$ on the $x$-axis (and containing $x_0$), and a differentiable function $\phi : I \rightarrow \mathbb{R}$ such that
1. $\phi(x_0) = y_0$;
2. $\phi'(x) = f(x)/g(\phi(x))$ for each $x \in I$;
3. $\phi : I \rightarrow \mathbb{R}$ is unique.
I do not understand his requirement that $f(x)$ and $g(y)$ should not
vanish simultaneously in $R$. I think that if $g(y_0)=0$, even if $f(x_0) \neq 0$, there should be no solution passing through $(x_0,y_0)$, because the derivative of any would-be solution would be undefined there. Should the text be emended to exclude the possibility of $g(y_0)=0$?
Edit: I should add that I also don't understand well what is meant by uniqueness, given that we are talking about a local solution and the domain of the solution is somewhat arbitrary.
|
Is there a "simple" mathematical proof that is fully understandable by a 1st year university student that impressed you because it is beautiful?
Here's a cute and lovely theorem.
There exist two irrational numbers $x,y$ such that $x^y$ is rational.
Proof. If $x=y=\sqrt2$ is an example, then we are done; otherwise $\sqrt2^{\sqrt2}$ is irrational, in which case taking $x=\sqrt2^{\sqrt2}$ and $y=\sqrt2$ gives us: $$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\sqrt2\sqrt2}=\sqrt2^2=2.\qquad\square$$
(Nowadays, using the Gelfond–Schneider theorem we know that $\sqrt2^{\sqrt2}$ is irrational, and in fact transcendental. But the above proof, of course, doesn't care for that.)
How about the proof that
$$1^3+2^3+\cdots+n^3=\left(1+2+\cdots+n\right)^2$$
I remember being impressed by this identity and the proof can be given in a picture:
Edit: Substituted $\frac{n(n+1)}{2}=1+2+\cdots+n$ in response to comments.
Cantor's diagonalization argument - the proof that there are infinite sets that cannot be put in one-to-one correspondence with the set of natural numbers - is frequently cited as a beautifully simple but powerful proof. Essentially, given any list of infinite sequences, the sequence formed by changing each entry along the diagonal cannot appear anywhere in the list.
I would personally argue that the proof that $\sqrt 2$ is irrational is simple enough for a university student (probably simple enough for a high school student) and very pretty in its use of proof by contradiction!
Prove that if $n$ and $m$ can each be written as a sum of two perfect squares, so can their product $nm$.
Proof: Let $n = a^2+b^2$ and $m=c^2+d^2$ ($a, b, c, d \in\mathbb Z$). Then, there exists some $x,y\in\mathbb Z$ such that
$$x+iy = (a+ib)(c+id)$$
Taking the magnitudes of both sides and squaring gives
$$x^2+y^2 = (a^2+b^2)(c^2+d^2) = nm$$
I would go for the proof by contradiction of an infinite number of primes, which is fairly simple:
Assume that there is a finite number of primes. Let $G$ be the set of all primes $P_1,P_2,...,P_n$. Compute $K = P_1 \times P_2 \times ... \times P_n + 1$. If $K$ is prime, then it is obviously not in $G$. Otherwise, none of its prime factors are in $G$. Conclusion: $G$ is not the set of all primes.
I think I learned that both in high-school and at 1st year, so it might be a little too simple...
By the concavity of the $\sin$ function on the interval $\left[0,\frac{\pi}2\right]$ we deduce these inequalities: $$\frac{2}\pi x\le \sin x\le x,\quad \forall x\in\left[0,\frac\pi2\right].$$
The first player in Hex has a winning strategy.
There are no draws in hex, so one player must have a winning strategy. If player two has a winning strategy, player one can steal that strategy by placing the first stone in the center (additional pieces on the board never hurt your position) then using player two's strategy.
You cannot have two dice (with numbers $1$ to $6$) biased so that when you throw both, the sum is uniformly distributed in $\{2,3,\dots,12\}$.
For easier notation, we use the equivalent formulation "You cannot have two dice (with numbers $0$ to $5$) biased such that when you throw both, the sum is uniformly distributed in $\{0,1,\dots,10\}$."
Proof: Assume that such dice exist. Let $p_i$ be the probability that the first die gives an $i$ and $q_i$ be the probability that the second die gives an $i$. Let $p(x)=\sum_{i=0}^5 p_i x^i$ and $q(x)=\sum_{i=0}^5 q_i x^i$.
Let $r(x)=p(x)q(x) = \sum_{i=0}^{10} r_i x^i$. We find that $r_i = \sum_{j+k=i}p_jq_k$. But hey, this is also the probability that the sum of the two dice is $i$. Therefore, $$ r(x)=\frac{1}{11}(1+x+\dots+x^{10}). $$ Now $r(1)=1\neq0$, and for $x\neq1$, $$ r(x)=\frac{(x^{11}-1)}{11(x-1)}, $$ which clearly is nonzero when $x\neq 1$. Therefore $r$ does not have any real zeros.
But $p_5 q_5 = r_{10} = \frac{1}{11} \neq 0$, so $p$ and $q$ really are polynomials of degree $5$; having odd degree, each must have a real zero. Therefore, $r(x)=p(x)q(x)$ has a real zero. A contradiction.
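One can corroborate the key step numerically: the nonconstant factor of $r$ has no real roots (its roots are the nontrivial 11th roots of unity). A quick check, mine and for illustration only:

```python
import numpy as np

# Coefficients of 1 + x + ... + x^10 (the 1/11 factor does not affect roots).
roots = np.roots(np.ones(11))
print(np.all(np.abs(roots.imag) > 1e-9))   # True: r has no real zeros
```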
Given a square consisting of $2n \times 2n$ tiles, it is possible to cover this square with pieces that each cover $2$ adjacent tiles (like domino bricks). Now imagine you remove two tiles from two opposite corners of the original square. Prove that it is now no longer possible to cover the remaining area with domino bricks.
Proof:
Imagine that the square is a checkerboard. Each domino brick will cover two tiles of different colors. When you remove tiles from two opposite corners, you remove two tiles of the same color, leaving more tiles of one color than the other. Thus it is no longer possible to cover the remaining area.
(Well, it may be
too "simple." But you did not state that it had to be a university student of mathematics. This one might even work for liberal arts majors...)
One little-known gem at the intersection of geometry and number theory is Aubry's reflective generation of primitive Pythagorean triples, i.e. coprime naturals $(x,y,z)$ with $x^2 + y^2 = z^2$. Dividing by $z^2$ yields $(x/z)^2+(y/z)^2 = 1$, so each triple corresponds to a rational point $(x/z,\,y/z)$ on the unit circle. Aubry showed that we can generate all such triples by a very simple geometrical process. Start with the trivial point $(0,-1)$. Draw a line to the point $P = (1,1)$. It intersects the circle in the
rational point $\,A = (4/5,3/5)\,$ yielding the triple $\,(3,4,5).\,$ Next reflect the point $\,A\,$ into the other quadrants by taking all possible signs of each component, i.e. $\,(\pm4/5,\pm3/5),\,$ yielding the inscribed rectangle below. As before, the line through $\,A_B = (-4/5,-3/5)\,$ and $P$ intersects the circle in $\,B = (12/13, 5/13),\,$ yielding the triple $\,(12,5,13).\,$ Similarly the points $\,A_C,\, A_D\,$ yield the triples $\,(20,21,29)\,$ and $\,(8,15,17),\,$ We can iterate this process with the new points $\,B,C,D\,$ doing the same we did for $\,A,\,$ obtaining further triples. Iterating this process generates the primitive triples as a ternary tree
Descent in the tree is given by the formula
$$\begin{eqnarray} (x,y,z)\,\mapsto &&(x,y,z)-2(x\!+\!y\!-\!z)\,(1,1,1)\\ = &&(-x-2y+2z,\,-2x-y+2z,\,-2x-2y+3z)\end{eqnarray}$$
e.g. $\ (12,5,13)\mapsto (12,5,13)-8(1,1,1) = (4,-3,5),\ $ yielding $\,(4/5,3/5)\,$ when reflected into the first quadrant.
Ascent in the tree by inverting this map, combined with trivial sign-changing reflections:
$\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$
$\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$
$\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$
See my MathOverflow post for further discussion, including generalizations and references.
I like the proof that there are infinitely many Pythagorean triples.
Theorem: There are infinitely many integers $x, y, z$ such that $$ x^2+y^2=z^2. $$ Proof: $$ (2ab)^2 + (a^2-b^2)^2 = (a^2+b^2)^2. $$
One cannot cover a disk of diameter 100 with 99 strips of length 100 and width 1.
Proof: project the disk and the strips onto a hemisphere sitting on top of the disk. By Archimedes' hat-box theorem, the projection of each strip has area at most $\pi R w = \pi \cdot 50 \cdot 1$, which is $1/100$ of the hemisphere's area $2\pi \cdot 50^2$. So $99$ strips project onto at most $99/100$ of the hemisphere and cannot cover it.
If you have any set of 51 integers between $1$ and $100$, the set must contain some pair of integers where one number in the pair is a multiple of the other.
Proof: Suppose you have a set of $51$ integers between $1$ and $100$. If an integer is between $1$ and $100$, its largest odd divisor is one of the odd numbers between $1$ and $99$. There are only $50$ odd numbers between $1$ and $99$, so your $51$ integers can’t all have different largest odd divisors — there are only $50$ possibilities. So two of your integers (possibly more) have the same largest odd divisor. Call that odd number $d$. You can factor those two integers into prime factors, and each will factor as (some $2$’s)$\cdot d$. This is because if $d$ is the largest divisor of a number, the rest of its factorization can’t include any more odd numbers. Of your two numbers with largest odd factor $d$, the one with more $2$’s in its factorization is a multiple of the other one. (In fact, the multiple is a power of $2$.)
In general, let $S$ be the set of integers from $1$ up to some even number $2n$. If a subset of $S$ contains more than half the elements in $S$, the set must contain a pair of numbers where one is a multiple of the other. The proof is the same, but it’s easier to follow if you see it for a specific $n$ first.
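The proof is constructive enough to run; here is a small demonstration of mine that finds such a pair in a random 51-element subset via the largest-odd-divisor classes:

```python
import random
from collections import defaultdict

def largest_odd_divisor(n):
    while n % 2 == 0:
        n //= 2
    return n

S = random.sample(range(1, 101), 51)           # any 51 integers from 1..100
classes = defaultdict(list)
for n in S:
    classes[largest_odd_divisor(n)].append(n)

# Pigeonhole: some class has two elements; the smaller divides the larger.
pair = sorted(next(v for v in classes.values() if len(v) >= 2))[:2]
print(pair, pair[1] % pair[0] == 0)            # e.g. [11, 44] True
```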
The proof that an isosceles triangle ABC (with AC and AB having equal length) has equal angles ABC and BCA is quite nice:
Triangles ABC and ACB are (mirrored) congruent (since AB = AC, BC = CB, and CA = BA), so the corresponding angles ABC and (mirrored) ACB are equal.
This congruency argument is nicer than that of cutting the triangle up into two right-angled triangles.
Parity of sine and cosine functions using Euler's formula:

$e^{-i\theta} = \cos(-\theta) + i\,\sin(-\theta)$

$e^{-i\theta} = \dfrac{1}{e^{i\theta}} = \dfrac{1}{\cos\theta + i\,\sin\theta} = \dfrac{\cos\theta - i\,\sin\theta}{\cos^2\theta + \sin^2\theta} = \cos\theta - i\,\sin\theta$

Comparing the two expressions, $\cos(-\theta) + i\,\sin(-\theta) = \cos\theta + i\,(-\sin\theta)$.

Thus

$\cos(-\theta) = \cos\theta$

$\sin(-\theta) = -\sin\theta$
$\blacksquare$
The proof is actually just the first two lines.
I believe Gauss was tasked with finding the sum of all the integers from $1$ to $100$ in his very early schooling years. He tackled it quicker than his peers or his teacher could, $$\sum_{n=1}^{100}n=1+2+3+4 +\dots+100$$ $$=100+99+98+\dots+1$$ $$\therefore 2 \sum_{n=1}^{100}n=(100+1)+(99+2)+\dots+(1+100)$$ $$=\underbrace{101+101+101+\dots+101}_{100 \space times}$$ $$=101\cdot 100$$ $$\therefore \sum_{n=1}^{100}n=\frac{101\cdot 100}{2}$$ $$=5050.$$ Hence he showed that $$\sum_{k=1}^{n} k=\frac{n(n+1)}{2}.$$
If $H$ is a subgroup of $(\mathbb{R},+)$ such that $H\cap [-1,1]$ is finite and contains a positive element, then $H$ is cyclic.
Fermat's little theorem follows from noting that, modulo a prime $p$, for $a\neq 0$ the factors $a, 2a, \ldots, (p-1)a$ are just a permutation of $1, 2, \ldots, p-1$, so
$$1\times2\times3\times\cdots\times (p-1) = (1\times a)\times(2\times a)\times(3\times a)\times\cdots\times \left((p-1)\times a\right) = (p-1)!\,a^{p-1}.$$
Cancelling $(p-1)!$ (which is invertible mod $p$) gives $a^{p-1}\equiv 1 \pmod p$.
Proposition (No universal set): There does not exist a set which contains all sets (even itself).
Proof: Suppose to the contrary that such a set exists. Let $X$ be the universal set; then one can construct, by the axiom schema of specification, the set
$$C=\{A\in X: A \notin A\}$$
of all sets in the universe which do not contain themselves. As $X$ is universal, clearly $C\in X$. But then $C\in C \iff C\notin C$, a contradiction.
Edit: Assuming that one is working in ZF (as almost everywhere :P)
(In particular this proof really impressed me too much the first time and also is very simple)
Most proofs concerning the Cantor set are simple but amazing.
The total length of the intervals in the set is zero.
It is uncountable.
Every number in the set can be represented in ternary using just 0 and 2; no number whose ternary representation requires a 1 appears in the set.
The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval.
The Menger sponge, which is a 3-d extension of the Cantor set, simultaneously exhibits an infinite surface area and encloses zero volume.
The derivation of the derivative from first principles is amazing, easy, useful, and simply outstanding in all respects. I put it here:
Suppose we have a quantity $y$ whose value depends upon a single variable $x$, and is expressed by an equation defining $y$ as some specific function of $x$. This is represented as:
$y=f(x)$
This relationship can be visualized by drawing a graph of function $y = f (x)$ regarding $y$ and $x$ as Cartesian coordinates, as shown in Figure(a).
Consider the point $P$ on the curve $y = f(x)$ whose coordinates are $(x, y)$ and another point $Q$ whose coordinates are $(x + \Delta x, y + \Delta y)$.
The slope of the line joining $P$ and $Q$ is given by:
$\tan\theta = \dfrac{\Delta y}{\Delta x} = \dfrac{(y + \Delta y) - y}{\Delta x}$
Suppose now that the point $Q$ moves along the curve towards $P$.
In this process, $\Delta y$ and $\Delta x$ decrease and approach zero; though their ratio $\frac{\Delta y}{\Delta x}$ will not necessarily vanish.
What happens to the line $PQ$ as $\Delta y\to0$, $\Delta x\to0$? You can see that this line becomes a tangent to the curve at point $P$ as shown in Figure(b). This means that $\tan\theta$ approaches the slope of the tangent at $P$, denoted by $m$:
$m=\lim\limits_{\Delta x\to0} \dfrac{\Delta y}{\Delta x} = \lim\limits_{\Delta x\to0} \dfrac{(y+\Delta y)-y}{\Delta x}$
The limit of the ratio $\Delta y/\Delta x$ as $\Delta x$ approaches zero is called the derivative of $y$ with respect to $x$ and is written as $dy/dx$.
It represents the slope of the tangent line to the curve $y=f(x)$ at the point $(x, y)$.
Since $y = f(x)$ and $y + \Delta y = f(x + \Delta x)$, we can write the definition of the derivative as:
$\dfrac{dy}{dx}=\dfrac{df(x)}{dx} = \lim\limits_{\Delta x\to0} \left[\dfrac{f(x+\Delta x)-f(x)}{\Delta x}\right],$
which is the required formula.
This proof that $n^{1/n} \to 1$ as integral $n \to \infty$:
By Bernoulli's inequality (which is $(1+x)^n \ge 1+nx$), $(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2} $. Raising both sides to the $2/n$ power, $n^{1/n} <(1+n^{-1/2})^2 = 1+2n^{-1/2}+n^{-1} < 1+3n^{-1/2} $.
Can a Chess Knight starting at any corner then move to touch every space on the board exactly once, ending in the opposite corner?
The solution turns out to be childishly simple. Every time the Knight moves (up two, over one), it will hop from a black space to a white space, or vice versa. Assuming the Knight starts on a black corner of the board, it will need to touch 63 other squares, 32 white and 31 black. To touch all of the squares, it would need to end on a white square, but the opposite corner is also black, making it impossible.
The Eigenvalues of a skew-Hermitian matrix are purely imaginary.
The Eigenvalue equation is $A\vec x = \lambda\vec x$; taking the inner product with $\vec x$ gives $$\lambda \|\vec x\|^2 = \lambda\left<\vec x, \vec x\right> = \left<\lambda \vec x,\vec x\right> = \left<A\vec x,\vec x\right> = \left<\vec x, A^{T*}\vec x\right> = \left<\vec x, -A\vec x\right> = -\lambda^* \|\vec x\|^2,$$ and since $\|\vec x\| > 0$, we can divide it from both sides, leaving $\lambda = -\lambda^*$, i.e. $\lambda$ is purely imaginary. The second to last step uses the definition of skew-Hermitian. Using the definition for Hermitian or unitary matrices instead yields corresponding statements about the Eigenvalues of those matrices.
I like the proof that not every real number can be written in the form $a e + b \pi$ for some integers $a$ and $b$ (the set of numbers of this form is countable, while $\mathbb{R}$ is not). I know it's almost trivial in one way; but in another way it is kind of deep.
|
Let $f: [a,b] \longrightarrow \mathbb{R}$ be an increasing function. If $x_1, \cdots, x_n \in [a,b]$ are distinct, show that $\sum_{i=1}^{i=n} o(f,x_i) < f(b) - f(a).$
I would like to know if my proof is correct and, if it is, an opinion on whether my write-up is fine and advice on how to write it better. Thanks in advance!
My attempt:
Let $P := \{ [t_{i-1},t_i] : i = 1, \cdots, n, \ t_0 = a, \ t_n = b \ \text{and} \ x_i \in (t_{i-1},t_i) \ \text{for each} \ i \}$ be a partition of $[a,b]$. The definition of oscillation of $f$ at $x$ given by Spivak is $o(f,x) := \lim_{\delta \rightarrow 0} [\sup f(B(x,\delta)) - \inf f(B(x,\delta))]$, then $[\lim \inf f((x_i - \delta, x_i + \delta)), \ \lim \sup f((x_i - \delta, x_i + \delta))] \subset [t_{i-1},t_i]$ when $\delta \rightarrow 0$ and $o(f,x_i) = \lim \sup f((x_i - \delta, x_i + \delta)) - \lim \inf f((x_i - \delta, x_i + \delta)) < f(t_i) - f(t_{i-1})$ for $i = 1, \cdots, n$. It's clear now that $\sum_{i=1}^{i=n} o(f,x_i) < \sum_{i=1}^{i=n} [f(t_i) - f(t_{i-1})] = f(b) - f(a). \square$
|
I'm currently trying to solve a linear system $Ax = B$, where the matrix $A$ is ill conditioned (i.e. nearly singular), with a condition number of $\sim 10^7$. The aforementioned linear system arises from a finite difference discretization.
The mathematical model for my problem is a PDE with derivatives of $x$ and $t$. Therefore, I'm solving a linear system of a discretized mesh of points with interval $\Delta x$ for each time step $\Delta t$. I'm already using centered differences for time, but because the problem has mixed derivatives (i.e. dependent of both $x$ and $t$), the problem still falls into a linear system.
My question is: how can I find a solution to this linear system? The most common solution I saw is doing preconditioning on matrix $A$, so it gets better conditioned. But, due to my engineering background, I hardly understand what must be done to precondition matrix $A$. I saw many different methods (like Jacobi and ILU factorization), but I don't know how to apply them.
On a sidenote, I'm doing this on MATLAB, so if anyone know any in-built function that can help me, it would be very appreciated. I tried gmres but it didn't work (the solution still "exploded" after a few steps).
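[Editorial illustration, not the poster's code.] The general pattern for ILU preconditioning is short enough to sketch; here it is in Python/SciPy (MATLAB's built-in `ilu` plus `gmres` follow the same idea). The tridiagonal matrix below is only a placeholder, not the poster's system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000                                             # placeholder problem size
A = sp.diags([-1.0, 2.0001, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-5, fill_factor=20)   # incomplete LU of A
M = spla.LinearOperator(A.shape, ilu.solve)          # M approximates A^{-1}
x, info = spla.gmres(A, b, M=M, restart=50)          # preconditioned GMRES
print(info)                                          # 0 means "converged"
```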
EDIT: Here is the equation as asked
$$ EIr\left[\frac{\partial^4\theta}{\partial x^4}-6\left(\frac{\partial\theta}{\partial x}\right)^2\frac{\partial^2\theta}{\partial x^2}\right]-EAr\left[\frac{\partial u_x}{\partial x}\frac{\partial^2\theta}{\partial x^2}+\frac{\partial^2 u_x}{\partial x^2}\frac{\partial\theta}{\partial x}+1.5r^2\frac{\partial^2\theta}{\partial x^2}\left(\frac{\partial\theta}{\partial x}\right)^2\right]-I_pr\omega\left[2\frac{\partial^2\theta}{\partial x\, \partial t}\frac{\partial\theta}{\partial x}+\frac{\partial\theta}{\partial t}\frac{\partial^2\theta}{\partial x^2}\right]-m_pr\frac{\partial u_x}{\partial x}\frac{\partial^2\theta}{\partial t^2}+m_pr\frac{\partial^2 u_x}{\partial t^2}\frac{\partial\theta}{\partial x}-(x+u_x)m_pr\frac{\partial^3\theta}{\partial x\, \partial t^2}+(x+u_x)m_pr\left(\frac{\partial\theta}{\partial t}\right)^2\frac{\partial\theta}{\partial x}+m_pg\sin\theta = 0 $$
|
So I need to cave in and learn some software package ... Until now I have been a paper and pen-mathematician and nearly managed to avoid mathematical software beyond free Wolfram Alpha. To my surprise, I don't find pieces of code snips or instructions that are close enough to my application, so I want to make sure I choose an appropriate tool before spending weeks to learn it. In both the below problem types, everything will have parameters, but not too many in total, so a graphical representation "with a slider" would certainly be helpful. I can use SymPy, Julia seems to have some development momentum, I do have access to some Maple and Matlab versions (but no Mathematica, although I would go for it if that is
the way).
What I need this for is hardly to be considered too advanced mathematics, though there are systems of nonlinear (no worse than quadratic) PDEs. It is dynamic optimization (discrete and/or continuous time, feedback form), where all optimization is quadratic with linear constraints and could be solved out symbolically. To my surprise, it seems hard to find the right words to google for code for dynamic LQ programs; feedback form in the state is a must for me, open-loop is insufficient. So I need something that can do the following two++ things:
For quadratic PDE systems
(arising from coupled dynamic programming with quadratic optimization). Type: $\vec u:\mathbf R^n_+\mapsto \mathbf R^n_+$ or $\mapsto \mathbf R^{n+1}_+$ such that each $u_i =$ second-order polynomial in the elements of the Jacobi matrix (requires cubic tensors to write out in (multi)linear algebra language), and with a fairly nice boundary condition $u_i=0$ when $x_i=0$.
It seems that if I introduce $z_i=\sqrt{x_i}$ I can get an expansion: formally write down a MacLaurin series for each function, and match coefficients. Ugly job by hand, I would certainly want something that can do so and check for convergence. Also, I would like concave or convex directions for the solution functions.
For recursive quadratic optimization
I am considering a discrete-time analogue, which for each $t$ will reduce to something like minimizing $\mathbf h\mapsto \mathbf h'\mathbf A_t\mathbf h-2\mathbf b'_t\mathbf h+c_t$ subject to linear (coordinate-wise) inequality constraints $\mathbf M_t\mathbf h\leq \mathbf d_t$. The $\mathbf A$ are no worse than positive semidefinite. I thought it would be fairly easy, because in my problems it is not hard to show that I get continuous value functions that are piecewise convex quadratics, but they depend on the parameters and constraints in a not so human-readable manner. Consider even the univariate case: minimizing $h\mapsto a_th^2-2b_th$ subject to, say, $h\leq d_t$ will yield a lot of nested $\max\{$parameter this, parameter that$\}$ expressions, where the coefficients are determined recursively - I was hoping for symbolic expressions for those, hopefully closed-form when the remaining horizon (which I can assume finite!) is a fixed number.
I want symbolic expressions for the coefficients (if not in closed form, then for each remaining horizon - I can assume a finite horizon in the discrete-time problem), and I need conditions for which $d_t$ are $\leq1$. Also, conditions for when the value function is $C^1$ and for convexity (not only piecewise so).
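To make the univariate case concrete, here is the kind of symbolic, piecewise output I am after for a single period, sketched in Python/SymPy (the symbol names are my own): minimize $a h^2 - 2bh$ subject to $h\le d$, assuming $a>0$:

import sympy as sp

a, b, d, h = sp.symbols('a b d h', real=True)

# One-period objective: a*h**2 - 2*b*h, minimized over h <= d (assume a > 0).
obj = a*h**2 - 2*b*h

# Unconstrained minimizer: h* = b/a.
h_star = sp.solve(sp.diff(obj, h), h)[0]

# If h* violates h <= d, the optimum moves to the boundary h = d.
h_opt = sp.Piecewise((h_star, h_star <= d), (d, True))

# Value function: substitute the optimizer back in; piecewise quadratic in (a, b, d).
V = sp.piecewise_fold(obj.subs(h, h_opt))
print(sp.simplify(V))

In a finite-horizon recursion one would feed this value function back as the $c_t$ term of the previous period, which is where the nested max expressions accumulate.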
|
Define
$$\hat{X}(Y) = [X,Y] $$
I have known matrices $S_i$ and $V$. I am trying to use Mathematica to define a function which calculates
$$ \sum_{\substack{n_1, \ldots, n_k>1\\ n_1+\cdots+n_k = m}} \hat{S}_{n_1}(\hat{S}_{n_2}(\ldots (\hat{S}_{n_k} (V))\ldots )) $$
for arbitrary integers $k$ and $m$ (with $k,m < 10$ or so). The matrices $S_i$ and $V$ are maybe up to $8 \times 8$ so I'm not too worried about speed or anything. Also note that I am able to access the $S_i$ by S[i].
I have a few questions.
1) Here the summation is over a somewhat complicated set of indices. There is the constraint that there must be exactly $k$ indices and that those indices must all add up to $m$. I know that I can use
Select[Flatten[Permutations /@ IntegerPartitions[m], 1], Length[#] == k &]
to get a list of sets of indices which satisfy this constraint but I don't know how to sum over these indices other than using a loop and even then I'm not entirely sure how to do it.
2) Using the hat notation it is very easy to string together multiple commutators in writing. I'm not so sure how to string together a variable number of commutators with variable arguments. Again I feel there is a way I could do this with a loop but I'm not exactly sure.
At present I'm trying to construct loops to implement this summation but I'm not sure if it will work and even if it does it does not seem very elegant.
Could anyone provide me with a nice way to calculate this expression?
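(Not Mathematica, but to pin down the computation itself, here is a sketch in Python/NumPy; a recursion generates the index tuples and an inner loop folds up the nested commutators. All names are my own.)

import numpy as np

def comm(X, Y):
    """Commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

def compositions(m, k, lower=2):
    """Ordered tuples (n_1, ..., n_k), each n_i >= lower, summing to m."""
    if k == 1:
        if m >= lower:
            yield (m,)
        return
    for first in range(lower, m - lower * (k - 1) + 1):
        for rest in compositions(m - first, k - 1, lower):
            yield (first,) + rest

def nested_commutator_sum(S, V, k, m):
    """Sum over valid (n_1,...,n_k) of [S_{n_1}, [S_{n_2}, ... [S_{n_k}, V]...]]."""
    total = np.zeros_like(V)
    for idx in compositions(m, k):
        acc = V
        for n in reversed(idx):   # apply the innermost commutator first
            acc = comm(S[n], acc)
        total += acc
    return total

# Example with random 4x4 matrices; S maps index -> matrix.
rng = np.random.default_rng(0)
S = {n: rng.standard_normal((4, 4)) for n in range(2, 9)}
V = rng.standard_normal((4, 4))
print(nested_commutator_sum(S, V, k=2, m=6))  # index tuples (2,4), (3,3), (4,2)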
|
Suppose I had the following problem:
$U_{tt}=U_{xx}+U_{yy}$ in $\Omega=[0,1]\times[0,1]$
$U(x,y,0)=f(x,y)$ $U_{t}(x,y,0)=g(x,y)$ $U=0$ on $\partial \Omega$
I know that there is an explicit finite difference scheme to solve this problem of the form:
$\frac{U(i,j,k+1) - 2U(i,j,k)+U(i,j,k-1)}{\Delta t^2} = \frac{U(i+1,j,k) - 2U(i,j,k)+U(i-1,j,k)}{\Delta x^2} + \frac{U(i,j+1,k) - 2U(i,j,k)+U(i,j-1,k)}{\Delta y^2}$
using centered finite differences in time. My first thought is that it's possible to simply change the $k$'s to $k+1$'s to obtain an implicit scheme, but I haven't worked out the resulting truncation error or stability analysis. My guess is that it's likely to be wrong... How do I derive an implicit scheme for this PDE?
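For reference, a standard family of implicit schemes here (a sketch of the usual recipe, not necessarily the intended answer) averages the spatial operator over time levels instead of moving it wholesale to $k+1$:
$$\frac{U^{k+1}-2U^{k}+U^{k-1}}{\Delta t^2} = \theta\,\Delta_h U^{k+1} + (1-2\theta)\,\Delta_h U^{k} + \theta\,\Delta_h U^{k-1},$$
where $\Delta_h$ denotes the discrete Laplacian on the right-hand side above. Taking $\theta=0$ recovers the explicit scheme; for $\theta \geq 1/4$ the scheme is (by the usual von Neumann analysis) unconditionally stable while remaining second order in time, at the price of solving a linear system for $U^{k+1}$ at every step.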
|
What is a Natural Transformation? Definition and Examples, Part 2
Continuing our list of examples of natural transformations, here is...
Example #2: double dual space
This is really the archetypical example of a natural transformation. You'll recall (or let's observe) that every finite dimensional vector space $V$ over a field $\mathbb{k}$ is isomorphic to both its dual space $V^*$ and to its double dual $V^{**}$.
In the first case, if $\{v_1,\ldots,v_n\}$ is a basis for $V$, then $\{v_1^*,\ldots,v_n^*\}$ is a basis for $V^*$ where for each $i$, the map $v_i^*:V\to\mathbb{k}$ is given by $$v_i^*(v_j)=\begin{cases} 1, &\text{if $i=j$};\\ 0 &\text{if $i\neq j$}. \end{cases}$$ Unfortunately, this isomorphism $V\overset{\cong}{\longrightarrow} V^*$ is not canonical. That is, a different choice of basis yields a different isomorphism. What's more, the isomorphism can't even materialize until we pick a basis.*

On the other hand, there is an isomorphism $V\overset{\cong}{\longrightarrow}V^{**}$ that requires no choice of basis: for each $v\in V$, let $\text{eval}_v:V^*\to\mathbb{k}$ be the evaluation map. That is, whenever $f:V\to \mathbb{k}$ is an element in $V^*$, define $\text{eval}_v(f):=f(v)$. Folks often refer to this isomorphism as natural. It's natural in the sense that it's there for the taking---it's patiently waiting to be acknowledged, irrespective of how we choose to "view" $V$ (i.e. irrespective of our choice of basis). This is evidenced in the fact that $\text{eval}$ does the same job on each vector space throughout the entire category. One map to rule them all.**

For this reason, the totality of all the evaluation maps assembles into a natural transformation (a natural isomorphism, in fact) between two functors! To see this, let $(-)^{**}:\mathsf{Vect}_{\mathbb{k}}\to\mathsf{Vect}_{\mathbb{k}}$ be the double dual functor that sends a vector space $V$ to $V^{**}$ and that sends a linear map $V\overset{\phi}{\longrightarrow}W$ to $V^{**}\overset{\phi^{**}}{\longrightarrow} W^{**}$, where $\phi^{**}$ is precomposition with $\phi^{*}$ (which we've defined before). And let $\text{id}:\mathsf{Vect}_{\mathbb{k}}\to\mathsf{Vect}_{\mathbb{k}}$ be the identity functor. Now let's check that $\text{eval}:\text{id}\Longrightarrow (-)^{**}$ is indeed a natural transformation.
By picking a $v\in V$ and chasing it around the diagram below, notice that the square commutes if and only if $\text{eval}_v\circ \phi^*=\text{eval}_{\phi(v)}$. (Here I'm using the fact that $\phi^{**}(\text{eval}_v)=\text{eval}_v\circ \phi^*$.)
Does this equality hold? Let's check! Suppose $f:W\to\mathbb{k}$ is an element of $W^{*}$. Then $$ \begin{align*} \text{eval}_v(\phi^*(f))&=\text{eval}_v(f\circ \phi)\\ &=(f\circ\phi)(v)\\ &= f(\phi(v))\\ &=\text{eval}_{\phi(v)}(f). \end{align*} $$ Voila! And because each $V\overset{\text{eval}}{\longrightarrow} V^{**}$ is an isomorphism, we've got ourselves a natural isomorphism $\text{id}\Longrightarrow (-)^{**}$.
As per our discussion last time, this suggests that $\text{id}$ and $(-)^{**}$ are really the same functor up to a change in perspective. Indeed, this interpretation pairs nicely with the observation that any vector $v\in V$ can either be viewed as, well,
a vector, or it can be viewed as an assignment that sends a linear function $f$ to the value $f(v)$. In short, $V$ is genuinely and authentically just like its double dual.
They are - quite naturally - isomorphic.
Example #3: representability and Yoneda
In our earlier discussion on functors we noted that a functor $F:\mathsf{C}\to\mathsf{Set}$ is representable if, loosely speaking, there is an object $c\in\mathsf{C}$ so that for all objects $x$ in $\mathsf{C}$, the elements of $F(x)$ are "really" just maps $c\to x$ (or maps $x\to c$, if $F$ is contravariant). As an illustration, we noted that the functor $\mathscr{O}:\mathsf{Top}^{op}\to\mathsf{Set}$ that sends a topological space $X$ to its set $\mathscr{O}(X)$ of open subsets is represented by the Sierpinski space $S$, since $$\mathscr{O}(X)\cong \text{hom}_{\mathsf{Top}}(X,S)$$ where I'm using $\cong$ to denote a set bijection/isomorphism. So in other words, an open subset of $X$ is essentially the same thing as a continuous function $X\to S$. (We discussed this at length here.)

Now it turns out that this $\cong$ is not just a typical, plain-vanilla isomorphism. It's natural! That is, the ensemble of isomorphisms $\mathscr{O}(X)\overset{\cong}{\longrightarrow}\text{hom}_{\mathsf{Top}}(X,S)$ (one for each $X$) assembles to form a natural isomorphism between the two functors $\mathscr{O}$ and $\text{hom}_{\mathsf{Top}}(-,S)$.***
In general, then, we say a functor $F:\mathsf{C}\to\mathsf{Set}$ is representable if there is an object $c\in\mathsf{C}$ so that $F$ is naturally isomorphic to the hom functor $\text{hom}_{\mathsf{C}}(c,-)$, i.e. if $$F(x)\cong\text{hom}_{\mathsf{C}}(c,x) \qquad \text{naturally, for all $x\in \mathsf{C}$}$$ (or, if $F$ is contravariant, $F(x)\cong\text{hom}_{\mathsf{C}}(x,c)$).
Here's a very simple example. Suppose $A$ is any set and let $*$ denote the set with one element. Notice that a function from $*$ to $A$ has exactly one element in its image, i.e. the range of $*\to A$ is $\{a\}$ for some $a\in A$. This suggests that a map $*\to A$ is really just a choice of element in $A$! Intuitively then, the elements of $A$ are in bijection with functions $*\to A$, $$A\cong\text{hom}_{\mathsf{Set}}(*,A).$$ But more is true! The isomorphism $A\to \text{hom}_{\mathsf{Set}}(*,A)$ which sends $a\in A$ to the function, say, $\bar{a}:*\to A$, where $\bar{a}(*)=a$, is natural. That is, for any $A\overset{f}{\longrightarrow}B$, the following square commutes
Commutativity just says that given an element $a\in A$, we can think of the element $f(a)$ as a map $*\to B$ in one of two equivalent ways: either send $a$ to $f(a)$ via $f$ and then think of $f(a)$ as a map $*\to B$, OR first think of $a$ as a map $*\to A$, and then postcompose it with $f$.
In short, the identity functor $\text{id}:\mathsf{Set}\to\mathsf{Set}$ is represented by the one-point set $*$ since every function $*\to A$ is really just a choice of an element $a\in A$.
Representability is really the launching point for the Yoneda Lemma which is "arguably the most important result in category theory." We'll certainly chat about Yoneda in a future post.
To whet your appetite, I'll quickly say that one consequence of the Lemma is that we are prompted to think of an object $x$ -- no longer as an object, but now -- as a (representable) functor $\text{hom}(x,-)$, similar to how we may think of a point $a\in A$ as a map $*\to A$.
This perspective -- coupled with the idea that morphisms out of $x$ (i.e. the elements of $\text{hom}(x,-)$) are simply "the relationships of $x$ with other objects" -- motivates the categorical mantra that an object is completely determined by its relationships to other objects. As a wise person once said, "You tell me who your friends are, and I'll tell you who YOU are." The upshot is that this proverb holds in life as well as in category theory.
And that is The Most Obvious Secret of Mathematics!
*One can show that there is no "natural" isomorphism of a vector space with its dual. For instance, see p. 234 of Eilenberg and Mac Lane's 1945 paper, "The General Theory of Natural Equivalences."
** This sort of reminds me of the difference between pointwise and uniform convergence. A sequence of functions $\{f_n:X\to\mathbb{R}\}$ converges to a function $f$ pointwise if, from the vantage point of some $x\in X$, the $f_n$ are eventually within some $\epsilon$ of $f$. But that value of $\epsilon$ might be different at a different vantage point, i.e. at a different $x'\in X$. On the other hand, the sequence converges uniformly if there's an $\epsilon$ that does the job no matter where you stand, i.e. for all $x\in X$.
***Here, $\text{hom}_{\mathsf{Top}}(-,S):\mathsf{Top}^{op}\to\mathsf{Set}$ is the contravariant functor that sends a topological space $X$ to the set $\text{hom}_{\mathsf{Top}}(X,S)$ of continuous functions $X\to S$ and that sends a continuous function $X\overset{f}{\longrightarrow} Y$ to its pullback $\text{hom}_{\mathsf{Top}}(Y,S)\overset{f^*}{\longrightarrow}\text{hom}_{\mathsf{Top}}(X,S).$
|
I think the usual proof for the asymptotic number of zeros of the Riemann zeta function $$N(T) = \#\left\{\rho : \ \zeta(\rho)=0, \begin{array}{l}\scriptstyle Im(\rho)\ \in\ [0,T]\\ \scriptstyle Re(\rho) \ \in\ (-1,2)\end{array}\right\} = \frac{ T\ln T}{2\pi}-\frac{1+\ln 2\pi}{2\pi}T+\mathcal{O}(\ln T)$$ works for any Dirichlet series in the extended Selberg class $S^{\#}$ (for the Selberg class it is a well-known result), that is, Dirichlet series having a functional equation with gamma factors, and being analytic except possibly for a pole at $s=1$. More precisely:
$F(s) = \sum_{n=1}^\infty a_n n^{-s}$ converges absolutely on $Re(s)>1$, and $F(s) (s-1)^m$ is entire of finite order,
with $\gamma(s) = Q^s\prod_{j=1}^{k} \Gamma(\omega_j s+\mu_j), \omega_j > 0$ and $\Phi(s) = \gamma(s)F(s) : \quad \Phi(s) = \xi\, \overline{\Phi(1-\overline{s})}$
Then $$N_F(T) = \frac{d_F}{2\pi} T \ln T+\frac{c_F}{2\pi} T+\mathcal{O}(\ln T), \qquad d_F=\sum_{j=1}^k \omega_j, \quad c_F = \ln |Q|^2-1-\ln 2 d_F$$ where $N_F(T) = \#\left\{\rho : \ F(\rho)=0, \begin{array}{l}\scriptstyle Im(\rho)\ \in\ [0,T]\\ \scriptstyle Re(\rho) \ \in\ (-\delta_F,1+\delta_F)\end{array}\right\}$ is the number of zeros, and $\delta_F$ is chosen such that $\sum_{n=2}^\infty |a_n| n^{-1-\delta_F} < |a_1|$, i.e. $\arg F(1+\delta_F+it) = \mathcal{O}(1)$.
Questions :
Do you have a reference confirming this ? And if I missed something, what additional hypothesis on $F(s)$ are needed ?
Given that the asymptotic number of zeros depends on the functional equation and not on the Euler product, what does this tell us about $\zeta(s)$ and the L-functions?
The asymptotics for the zeros allow us to write $\frac{F'}{F}(s) = \sum_{|Im(\rho)-t| < A} \frac{1}{s-\rho}+\mathcal{O}(\ln t)$ in the critical strip, and then to look at how different constraints (Euler product, growth rate estimates for $F,F',1/F$) interact with this density of zeros. Assuming the GRH, we are probably also allowed to make some general statements about the number of zero crossings of $\Phi(1/2+it)$, and to link it to some properties of modular forms on $Re(\tau) =0$.
What happens if I add to $S^{\#}$ the constraint that there is some $l$ such that $\frac{1}{\zeta(\sigma)^l} < |F(s)| \le \zeta(\sigma)^l$ for every $Re(s)=\sigma > 1$ ?
(it could mean that $F(s)$ has an Euler product of the form $\prod_{j=m}^l \prod_p (1-\alpha_{j}(p)p^{-s})^{-1}$ where $|\alpha_j(p)|\le 1$)
|
You can achieve the same as in egreg's answer, without amsmath nor the AtBeginDocument trick:
\documentclass{article}
\newcommand\plim{\mathop{p\mkern2mu\mathrm{\mathchar"702D lim}}}
\begin{document}
$\plim_{a\to\infty}$
$\displaystyle\plim_{a\to\infty}$
\end{document}
Sometimes the math alphabet command \mathrm does not use the same font as would be used in operator names. So the following slightly more complicated solution does it:
\makeatletter
\newcommand\plim{\mathop{p\mkern2mu{\operator@font\mathchar"702D lim}}}
\makeatother
And regarding the hardcoding of the text hyphen slot, ... well, one can complicate the code if one wishes to get around this problem. By the way, I am not sure I correctly understood the OP, as here I tried my best to use the hyphen slot (ascii code 45) from a text font (usually shorter than a minus sign), and not a \textendash which looks more like a minus sign.
(Sorry, I got slightly confused in my explanations, as the math symbol font operators (used - if it has not been modified - by \operator@font) is not necessarily 'the' document 'text' font, nor is it necessarily the font used by \mathrm (although it is in the default set-up), but it is at any rate 'a' font, hence my striking out edit in the paragraph above.)
|
Finitely Generated Modules Over a PID: The Basic Idea
We know what it means to have a module $M$ over a (commutative, say) ring $R$. We also know that if our ring $R$ is actually a field, our module becomes a vector space. But what happens if $R$ is "merely" a PID? Answer: A lot.
Recall from group theory that all finitely generated abelian groups can be classified up to isomorphism. So if you run into an abelian group of order 24, for instance, you know it's either going to have the same structure as $\mathbb{Z}/24$ or $\mathbb{Z}/3\oplus\mathbb{Z}/8$ or $\mathbb{Z}/3\oplus\mathbb{Z}/2\oplus \mathbb{Z}/4$ or $\mathbb{Z}/3\oplus\mathbb{Z}/2\oplus\mathbb{Z}/2\oplus\mathbb{Z}/2$. There are no other possibilities. In much the same way, we can classify all finitely generated modules over a PID. In fact, the result for abelian groups is a special case of this! (Notice $\mathbb{Z}$ is a PID and every abelian group is a $\mathbb{Z}$-module.)
Today we'll look at a proposition which, thanks to the language of exact sequences, is quite simple and from which the Fundamental Theorem of Finitely Generated Modules over a PID follows almost immediately. The information below is loosely based on section 12.1 of Dummit and Foote's Abstract Algebra as well as this article by K. Conrad.
From English to Math
In short, the proposition says that every finitely generated module over a PID consists of a part which is free (i.e. has a basis) and a part which is not free:
Before the proof, let's look at two corollaries:
If $n=0$, then $M$ is a torsion* module, i.e. $M=\text{Tor}(M)$. This is another way of saying $M$ is not free: a basis for $M$ simply does not exist. Why? No finite subset $\mathcal{B}=\{m_1,\ldots, m_n\}$ of $M$ can be linearly independent! To see this, note that $M=\text{Tor}(M)$ means for every $m_i\in \mathcal{B}\subset M$, there is a nonzero $r_i\in R$ such that $r_im_i=0$. So in particular $r_1m_1+\cdots+r_nm_n=0$ is a nontrivial relation.

If $\text{Tor}(M)=0$ (and $n>0$), then $M$ is a free module. This is clear since $\text{Tor}(M)=0$ implies $M\cong R^n$ for some $n>0$. And indeed $R^n$ is a free module with basis $e_1,\ldots,e_n$ where $e_i=(0,\ldots,1,\ldots,0)$ (a 1 in the $i$th spot and zeros elsewhere).
To prove the proposition, we simply need to show the following is a split short exact sequence:
where $\text{Tor}(M)\hookrightarrow M$ is inclusion and $\pi : M \to M/\text{Tor}(M)$ is the natural projection map. The exactness of the sequence is immediate: $\text{Tor}(M)\hookrightarrow M$ is, of course, injective, $\pi$ is always surjective, and $\ker \pi=\text{Tor}(M)$ is indeed the image of the inclusion map. To see that the sequence splits, it's enough to prove $M/\text{Tor}(M)$ is free! Why? Because all free modules are projective, and any time you have a projective module $P$, the exact sequence $0\to L\to M \to P\to 0$, for any modules $L$ and $M$, always splits! (See proof here.) So let's show $M/\text{Tor}(M)$ is free. The following three facts will do it for us.

Fact #1: $M/\text{Tor}(M)$ is finitely generated. This follows from the fact that $M$ is finitely generated and is actually true in general: quotients of finitely generated modules are finitely generated. (Here's the proof.)

Fact #2: $M/\text{Tor}(M)$ is torsion free. Intuitively this makes sense: we've gotten rid of all the torsion elements by modding them out, so of course what's left over should be torsion free! (Here's the proof.)

Fact #3: If $R$ is a PID and $M$ is any finitely generated torsion free $R$-module, then $M$ is free. This is precisely the second corollary following our main proposition above! (So at least you can believe it's true.) Of course, to avoid circular reasoning, we can prove it independently.
Now make the simple observation that Facts #1 and #2 allow us to apply Fact #3 to the $R$-module $M/\text{Tor}(M)$. So $M/\text{Tor}(M)$ is indeed free and, by our comments above, the sequence splits. Further, since $M/\text{Tor}(M)$ is free, it is isomorphic to $R^n$ for some $n\geq 0$. Thus $$M\cong R^n\oplus\text{Tor}(M),$$ which completes the proof.
Before we close, let's quickly recall the Fundamental Theorem:
(Moreover, it can be shown that this expression is unique.) From our proposition above, it's clear that the Theorem is just one step away! One need only prove that the torsion part decomposes as $\text{Tor}(M)\cong R/(a_1)\oplus R/(a_2)\oplus\cdots\oplus R/(a_m)$ for nonzero nonunit $a_i$ satisfying $a_1\mid a_2\mid\cdots\mid a_m$. I won't include the proof here (it's about two pages long!), but you can find it in section 12.1, Theorem 4 of Dummit and Foote.
So what's the takeaway here?
Using the machinery of homological algebra (e.g. short exact sequences) we were able to see that every finitely generated module over a PID is a direct sum of a free part and a not-free part. (And that reflects the fact that every element in the module either has torsion or it doesn't!) From there one obtains the Fundamental Theorem, which says you can identify a finitely generated module over a PID simply by its rank (that's what the integer $n$ in the theorem is called) and the invariant factors - the sequence of the $a_i$'s with the divisibility property. Lastly, if we take our PID to be $\mathbb{Z}$ and our module $M$ to be an abelian group (i.e. a $\mathbb{Z}$-module), we obtain the familiar Fundamental Theorem of Finitely Generated Abelian Groups as a special case.
Footnotes:
* Quick reminder: the torsion submodule $\text{Tor}(M)$ of an $R$-module $M$ is defined to be the set of all elements $m\in M$ for which there exists a nonzero $r\in R$ so that $rm=0$.
|
In the theory of finite undirected graphs, a basic notion that is broadly studied is the notion of a matching cut. A matching cut in a graph $G$ is defined to be a matching subgraph of $G$ that is also the edge set of a cut. A cut of a graph $G = (V, E)$ is a partition of the vertex set $V$ into two sets $(A, B)$, so that $A$ and $B$ induce two disjoint induced subgraphs joined by the crossing edges, which form the corresponding so-called edge cut set. Specifically, the edge set of the cut $(A, B)$ is the subset of $E$ containing all the edges with one endpoint in $A$ and the other endpoint in $B$, i.e. those edges $uv$ with $u\in A$ and $v\in B$. A matching cut is formally defined to be the edge set of such a cut, whenever its edge cut set is indeed a matching. It is worth noting that a matching subgraph whose removal increases the number of connected components of a graph $G$ is not necessarily a matching cut, since it may contain edges inside the two separated induced subgraphs; it is a matching cut only if it contains exactly the edges of the cut.

In order to have more hard computational problems in our still very limited arsenal of proven hard problems, we want to restrict this notion further. We define a matching erosion as a matching cut $(A, B)$ such that $A$ is entirely saturated by the matching. That means each vertex $v\in A$ has exactly one edge out of $A$, and for any two different vertices $u, v\in A$ their crossing edges end in different vertices in $B$. Intuitively, such a matching erosion $(A,B)$ visually "peels" $A$ off $G$.
Formally, we define a matching erosion of an undirected graph as follows:
DEFINITION 1
Given an undirected graph $G=(V, E)$, a matching erosion $(A, B)$ is a partition of $V$ into two disjoint sets $V = A\cup B$ such that:
1. $G[A]$ and $G[B]$ are two disjoint induced subgraphs;
2. the edge set of the cut is a matching: $M = \{uv\in E\vert u\in A \land v\in B\}$ is a matching;
3. every $u\in A$ is incident to an edge in $M$.
Then, our problem naturally asks whether a given undirected graph $G$ has a matching erosion.
DEFINITION 2
Matching Erosion of a graph:
Input: An undirected graph $G(V, E)$
Output: Yes if $G$ has a matching erosion $(A, B)$, otherwise No
In the next section, we will show that Matching Erosion is computationally hard.
We will reduce the Exact 3-Set Cover problem to our problem. Exact 3-Set Cover is another decision problem, defined as follows. In Exact 3-Set Cover, we are given a universe set of elements $U=\{e_1, e_2, \dots, e_{3n}\}$ and a collection $F=\{s_1, s_2, \dots, s_m\}$ of subsets of $U$. Each subset $s_j$ in $F$ contains exactly $3$ elements $e_{s_j1}, e_{s_j2}, e_{s_j3}\in U$. The decision problem asks whether there exists a subcollection $F'\subseteq F$ such that each element $e_i$ of $U$ is contained in exactly one subset $s_j$ in $F'$. Obviously, $F'$ will contain exactly $n$ subsets of the collection $F$. Such a collection $F'$ is called an exact cover of $U$.
DEFINITION 3
Exact 3-Set Cover problem:
Input: a universe $U = \{e_1, e_2, \dots, e_{3n}\}$ and a collection $F=\{s_1, s_2, \dots, s_m\}$ of subsets of $U$, where each $s_j$ contains $3$ elements $e_{s_j1}, e_{s_j2}, e_{s_j3}\in U$
Output: Yes if there exists an exact cover $F'\subseteq F$, otherwise No
Exact 3-Set Cover was shown to be NP-hard by Garey and Johnson. After describing and proving the correctness of the reduction in the next section, we will establish the following claim.
CLAIM 4
We have that Exact 3-Set Cover $\leq_p$ Matching Erosion
Reducing EXACT 3-SET COVER to MATCHING EROSION
In this section, we prove Claim 4.
Proof:
Describing the construction: Given an instance $(U, F)$ of Exact 3-Set Cover, we will construct an undirected graph $G=(V, E)$ as the produced instance of our problem Matching Erosion. For each element $e_i$ in the universe $U$, we create a new vertex $e_i\in V$. Similarly, for each subset $s_j\in F$, we create a new $K_3$ consisting of 3 new vertices $s_{j,1}, s_{j,2}, s_{j,3}\in V$. For each such subset $s_j = \{e_{s_j1}, e_{s_j2}, e_{s_j3}\}$, we add the $3$ edges $s_{j,1}e_{s_j1},\:s_{j,2}e_{s_j2},\:s_{j,3}e_{s_j3}$ to $E$. Finally, we add all edges between pairs of distinct $e_i$ vertices, turning them into a $K_{3n}$ clique.
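(A small sketch of this construction in Python/networkx, for concreteness; the vertex labels are my own encoding, not part of the proof.)

import networkx as nx
from itertools import combinations

def reduction(universe, family):
    """Build the Matching Erosion instance from an Exact 3-Set Cover instance.

    universe: iterable of 3n element labels; family: list of 3-tuples over universe.
    """
    G = nx.Graph()
    # Turn the universe into a K_{3n} clique.
    G.add_edges_from(combinations(universe, 2))
    for j, (a, b, c) in enumerate(family):
        s = [("s", j, k) for k in range(3)]
        # Each subset s_j becomes a new K_3 ...
        G.add_edges_from(combinations(s, 2))
        # ... attached to its three elements by the edges s_{j,k} e_{s_j k}.
        G.add_edges_from(zip(s, (a, b, c)))
    return G

# Toy instance: U = {1,...,6}, F = {{1,2,3}, {4,5,6}, {2,3,4}}
G = reduction(range(1, 7), [(1, 2, 3), (4, 5, 6), (2, 3, 4)])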
Correctness of the construction: Suppose that $(U, F)$ has an exact cover $F'\subseteq F$ consisting of $n$ subsets of $F$. Then, based on that solution to Exact 3-Set Cover, we can easily construct a solution to the produced instance $G(V, E)$ of our problem Matching Erosion. Namely, our matching erosion for $G$ takes $A$ to be the union of the $K_3$'s of the subsets included in the exact cover. Clearly, each vertex $s_{j,k}$ (where $s_j$ is included in the exact cover and $1\leq k\leq 3$) in this set $A$ is connected to exactly one vertex outside of $A$, namely its corresponding element $e_{s_jk}$ of the universe $U$. Obviously, this is a matching cut that happens to also be a matching erosion.
Conversely, if $G(V, E)$ has a matching erosion $M = (A, B)$, we shall show that none of the elements $e_i$ of the universe is included in $A$. Indeed, if some $e_i$ is included in $A$, then at most one $e_l$ is not in $A$: if there existed $e_i\in A$ and $e_{l_1}, e_{l_2}\not\in A$, then $A$ could not form one side of a matching erosion, since $e_i\in A$ would be incident to the two crossing edges $e_ie_{l_1}$ and $e_ie_{l_2}$ (recall that the universe $U$ is turned into a $K_{3n}$ in $G$). So, if one element of $U$ is included in $A$, at most one element $e_l$ can be in $B$. But this is also clearly impossible: if it were the case, then all the other elements $e_i$ (which are included in $A$) would be connected to $e_l\in B$, violating the definition of a matching erosion. So we have shown that if some element $e_i$ is included in $A$, then the whole $K_{3n}$ corresponding to $U$ is included in $A$. But this also cannot be the case for a matching erosion. We need the following observation.
Observation: For each $K_3$ corresponding to a subset $s_j$, a vertex $s_{j,k}$ of this $K_3$ is included in $A$ if and only if all three vertices of this $K_3$ are included in $A$.
Proof (of the observation): Obvious, by the definition of a matching erosion.
Using this observation, we are able to show that a matching erosion cannot include the whole $K_{3n}$ in $A$. Indeed, if it did, the matching erosion would then partition the $K_3$'s into two disjoint sets, those in $A$ and those in $B$. The $K_3$'s in $B$ would then intuitively form an exact cover for $U$. But, unfortunately, in this case, the vertices of the $K_3$'s in $A$ cannot have any crossing edge in $M$, violating the definition of a matching erosion.
We have therefore shown that none of the elements $e_i$ of the universe is included in $A$. This implies that $B$ contains the whole $K_{3n}$. So, the matching erosion again partitions the set of $K_3$'s into two disjoint sets, those in $A$ and those in $B$, as before. But fortunately, the situation is now reversed: the $K_3$'s in $B$ do not need any incident crossing edge (this is only required for vertices in $A$), and the $K_3$'s in $A$ form an exact cover for $U$.
|
Let an asset follow a Brownian motion $$dS = \mu dt + \sigma dW$$ with $\mu$ and $\sigma$ constant. The constant interest rate is $r$. What process does $S$ follow in the risk-neutral measure? Develop a formula for the price of a call option and for the price of a digital call option.
In chapter 6, Mark Joshi states that $\mu = r$ if and only if the stock grows at the risk-neutral rate. Then in the solution, Mark Joshi states that since $S_t$ grows at the same rate as a riskless bond, its drift must be $rS_t$.
I do not see how the drift must be $rS_t$.
Then the solution goes on with $F_t = e^{r(T-t)}S_t$, so that
$$dF_t = e^{r(T-t)}\sigma dW_t$$
and then states that
$$F_T\sim F_0 + \overline{\sigma}\sqrt{T}N(0,1)$$
I do not understand where this comes from. I am having a hard time following his solution. Any suggestions are greatly appreciated. I can provide the full solution if needed.
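For reference, here is one way to fill in the two gaps (my own reconstruction, not Joshi's text). Under the risk-neutral measure, the discounted price $e^{-rt}S_t$ must be a martingale; since $d(e^{-rt}S_t) = e^{-rt}(dS_t - rS_t\,dt)$, the drift of $S_t$ must be $rS_t$, i.e. $dS_t = rS_t\,dt + \sigma\,dW_t$. Then Itô's product rule gives
$$dF_t = d\big(e^{r(T-t)}S_t\big) = e^{r(T-t)}\big(dS_t - rS_t\,dt\big) = e^{r(T-t)}\sigma\,dW_t,$$
so $F_T = F_0 + \int_0^T e^{r(T-s)}\sigma\,dW_s$ is Gaussian with mean $F_0$ and, by the Itô isometry, variance
$$\sigma^2\int_0^T e^{2r(T-s)}\,ds = \sigma^2\,\frac{e^{2rT}-1}{2r}.$$
Setting $\overline{\sigma}^2 T := \sigma^2(e^{2rT}-1)/(2r)$ then yields exactly $F_T \sim F_0 + \overline{\sigma}\sqrt{T}\,N(0,1)$.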
|
I finally solved problem G of GP of Dolgoprudny (also problem 83 in New Year Contest 2018). Though there are discussions on Codeforces, they all look very abstract to me. Hence, I decided to write down a full editorial.
First, we need some definitions. For a given string $$$s$$$, we write $$$\mathrm{occ}_s(p) = \{i : s[i - |p| + 1: i] = p\}$$$. We write $$$x \equiv y$$$ if $$$\mathrm{occ}_s(x) = \mathrm{occ}_s(y)$$$. The "$$$\equiv$$$" relation groups all non-empty substrings of $$$s$$$ into equivalence classes. We write the equivalence class containing $$$x$$$ as $$$[x]^R$$$. We also write $$$\overleftarrow{x}$$$ for the unique longest string in $$$[x]^R$$$.
The number of states in the automaton of $$$s$$$ equals the number of equivalence classes plus $$$1$$$. We will therefore compute the number of equivalence classes. By the linearity of expectation, we can compute, for each string $$$p$$$, the number of strings $$$s$$$ which contain an equivalence class $$$[x]^R$$$ with $$$\overleftarrow{x} = p$$$.
For $$$\overleftarrow{x} = p$$$ to hold, two conditions should be satisfied:
1. $$$p$$$ occurs as a substring in $$$s$$$;
2. $$$[ap]^R \neq [p]^R$$$ for any character $$$a$$$.
We first compute the number of strings $$$s$$$ satisfying condition 1 solely. Then we subtract the number of strings $$$s$$$ where $$$[ap]^R = [p]^R$$$ for all $$$a$$$.
Condition 1
$$$p$$$ may end in string $$$s$$$ at positions $$$P = \{|p|, |p|+1, \dots, n\}$$$. Using the principle of Inclusion and Exclusion (i.e., PIE), we can alternatively compute the sum $$$\sum_{\emptyset \neq A \subseteq P} (-1)^{|A|+1}\, \mathrm{count}(p, A)$$$,
where $$$\mathrm{count}(p, A)$$$ is the number of strings $$$s$$$ where $$$p$$$ ends at (at least) the positions in $$$A$$$.
If we know the string $$$p$$$ in advance, this can be done by an $$$O(n^2)$$$ DP. But what if the number of possible strings $$$p$$$ is large?
Let's introduce another notation $$$\mathrm{per}(s) = \{k : s[1 : |s| - k + 1] = s[k + 1 : |s|]\}$$$ for a string $$$s$$$. We notice that the above sum is determined only by the set $$$\mathrm{per}(p)$$$, not the string $$$p$$$ itself. The number of different $$$\mathrm{per}(p)$$$ is around 1 thousand when $$$|p|=40$$$. Thus, we can instead enumerate the different $$$\mathrm{per}(p)$$$ and compute the desired sum.
Condition 2
For a given string $$$s$$$ and pattern $$$ap$$$, we subtract from the result the strings with $$$\mathrm{occ}_s(ap) \neq \emptyset$$$ and $$$\mathrm{occ}_s(ap) = \mathrm{occ}_s(p)$$$. By PIE, the second condition boils down to a signed sum over pairs of position sets $$$X$$$ (for $$$p$$$) and $$$Y$$$ (for $$$ap$$$).
Noticing that $$$\mathrm{occ}_s(ap) \subseteq \mathrm{occ}_s(p)$$$, we can sum over $$$U = X \cup Y$$$, and the resulting sum splits into two parts.
Both parts can be computed using similar DP mentioned in the previous section. The only thing to notice is that we need not the set $$$\mathrm{per}(p)$$$, but the pair $$$(\mathrm{per}(p), \mathrm{per}(ap))$$$.
|
This notebook is the first in a series of tutorials highlighting various aspects of seismic inversion based on Devito operators. In this first example we aim to highlight the core ideas behind seismic modelling, where we create a numerical model that captures the processes involved in a seismic survey. This forward model will then form the basis for further tutorials on the implementation of inversion processes using Devito operators.
The core process we are aiming to model is a seismic survey, which consists of two main components: a source that injects a seismic wave into the medium, and a set of receivers that record the wavefield.
In order to create a numerical model of a seismic survey, we need to solve the wave equation and implement source and receiver interpolation to inject the source and record the seismic wave at sparse point locations in the grid.
The acoustic wave equation for the square slowness $m$, defined as $m=\frac{1}{c^2}$, where $c$ is the speed of sound in the given physical media, and a source $q$ is given by:\begin{cases} &m \frac{d^2 u(x,t)}{dt^2} - \nabla^2 u(x,t) = q \ \text{in } \Omega \\ &u(.,t=0) = 0 \\ &\frac{d u(x,t)}{dt}|_{t=0} = 0 \end{cases}
with the zero initial conditions to guarantee uniqueness of the solution. The boundary conditions are Dirichlet conditions: \begin{equation} u(x,t)|_\delta\Omega = 0 \end{equation}
where $\delta\Omega$ is the surface of the boundary of the model $\Omega$.
The last piece of the puzzle is the computational limitation. In the field, the seismic wave propagates in every direction to an "infinite" distance. However, solving the wave equation in a mathematically/discretely infinite domain is not feasible. To compensate, Absorbing Boundary Conditions (ABC) or Perfectly Matched Layers (PML) are required to mimic an infinite domain. These two methods allow one to approximate an infinite medium by damping and absorbing the waves at the limits of the domain to avoid reflections.
The simplest of these methods is the absorbing damping mask. The core idea is to extend the physical domain and to add a Sponge mask in this extension that will absorb the incident waves. The acoustic wave equation with this damping mask can be rewritten as:\begin{cases} &m \frac{d^2 u(x,t)}{dt^2} - \nabla^2 u(x,t) + \eta \frac{d u(x,t)}{dt}=q \ \text{in } \Omega \\ &u(.,0) = 0 \\ &\frac{d u(x,t)}{dt}|_{t=0} = 0 \end{cases}
where $\eta$ is the damping mask, equal to $0$ inside the physical domain and increasing inside the sponge layer. Multiple profiles can be chosen for $\eta$, from linear to exponential.
We describe here a step by step setup of seismic modelling with Devito in a simple 2D case. We will create a physical model of our domain and define a single source and an according set of receivers to model for the forward model. But first, we initialize some basic utilities.
import numpy as np
%matplotlib inline
The first step is to define the physical model:
We will create a simple velocity model here by hand for demonstration purposes. This model essentially consists of two layers, each with a different velocity: $1.5km/s$ in the top layer and $2.5km/s$ in the bottom layer. We will use this simple model a lot in the following tutorials, so we will rely on a utility function to create it again later.
#NBVAL_IGNORE_OUTPUT
from examples.seismic import Model, plot_velocity

# Define a physical size
shape = (101, 101)    # Number of grid points (nx, nz)
spacing = (10., 10.)  # Grid spacing in m. The domain size is now 1km by 1km
origin = (0., 0.)     # Location of the top left corner. This is necessary to define
                      # the absolute location of the source and receivers

# Define a velocity profile. The velocity is in km/s
v = np.empty(shape, dtype=np.float32)
v[:, :51] = 1.5
v[:, 51:] = 2.5

# With the velocity and model size defined, we can create the seismic model that
# encapsulates these properties. We also define the size of the absorbing layer as 10 grid points
model = Model(vp=v, origin=origin, shape=shape, spacing=spacing, space_order=2, nbpml=10)
plot_velocity(model)
To fully define our problem setup we also need to define the source that injects the wave to model and the set of receiver locations at which to sample the wavefield. The source time signature will be modelled using a Ricker wavelet defined as\begin{equation} q(t) = \left(1-2\pi^2 f_0^2 \left(t - \frac{1}{f_0}\right)^2 \right)e^{- \pi^2 f_0^2 \left(t - \frac{1}{f_0}\right)^2} \end{equation}
To fully define the source signature we first need to define the time duration for our model and the timestep size, which is dictated by the CFL condition and our grid spacing. Luckily, our Model utility provides us with the critical timestep size, so we can fully discretize our model time axis as an array:
from examples.seismic import TimeAxis

t0 = 0.     # Simulation starts at t=0
tn = 1000.  # Simulation lasts 1 second (1000 ms)
dt = model.critical_dt  # Time step from model grid spacing
time_range = TimeAxis(start=t0, stop=tn, step=dt)
The source is positioned at a $20m$ depth and at the middle of the $x$ axis ($x_{src}=500m$), with a peak wavelet frequency of $10Hz$.
#NBVAL_IGNORE_OUTPUT
from examples.seismic import RickerSource

f0 = 0.010  # Source peak frequency is 10Hz (0.010 kHz)
src = RickerSource(name='src', grid=model.grid, f0=f0, npoint=1, time_range=time_range)

# First, position source centrally in all dimensions, then set depth
src.coordinates.data[0, :] = np.array(model.domain_size) * .5
src.coordinates.data[0, -1] = 20.  # Depth is 20m

# We can plot the time signature to see the wavelet
src.show()
Similarly to our source object, we can now define our receiver geometry as a symbol of type Receiver. It is worth noting here that both utility classes, RickerSource and Receiver, are thin wrappers around Devito's SparseTimeFunction type, which encapsulates sparse point data and allows us to inject and interpolate values into and out of the computational grid. As we have already seen, both types provide a .coordinates property to define the position within the domain of all points encapsulated by that symbol.
In this example we will position receivers at the same depth as the source, every $10m$ along the x axis. The rec.data property will be initialized, but left empty, as we will compute the receiver readings during the simulation.
#NBVAL_IGNORE_OUTPUT
from examples.seismic import Receiver

# Create symbol for 101 receivers
rec = Receiver(name='rec', grid=model.grid, npoint=101, time_range=time_range)

# Prescribe even spacing for receivers along the x-axis
rec.coordinates.data[:, 0] = np.linspace(0, model.domain_size[0], num=101)
rec.coordinates.data[:, 1] = 20.  # Depth is 20m

# We can now show the source and receivers within our domain:
# Red dot: Source location
# Green dots: Receiver locations (every 4th point)
plot_velocity(model, source=src.coordinates.data, receiver=rec.coordinates.data[::4, :])
Devito is a finite-difference DSL that solves the discretized wave-equation on a Cartesian grid. The finite-difference approximation is derived from Taylor expansions of the continuous field after removing the error term.
We only consider the second order time discretization for now. From the Taylor expansion, the second order discrete approximation of the second order time derivative is: \begin{equation} \begin{aligned} \frac{d^2 u(x,t)}{dt^2} = \frac{\mathbf{u}(\mathbf{x},\mathbf{t+\Delta t}) - 2 \mathbf{u}(\mathbf{x},\mathbf{t}) + \mathbf{u}(\mathbf{x},\mathbf{t-\Delta t})}{\mathbf{\Delta t}^2} + O(\mathbf{\Delta t}^2). \end{aligned} \end{equation}
where $\mathbf{u}$ is the discrete wavefield, $\mathbf{\Delta t}$ is the discrete time-step (distance between two consecutive discrete time points) and $O(\mathbf{\Delta t}^2)$ is the discretization error term. The discretized approximation of the second order time derivative is then given by dropping the error term. This derivative is represented in Devito by u.dt2, where u is a TimeFunction object.
We define the discrete Laplacian as the sum of the second order spatial derivatives in the three dimensions: \begin{equation} \begin{aligned} \Delta \mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t})= \sum_{j=1}^{j=\frac{k}{2}} \Bigg[\alpha_j \Bigg(& \mathbf{u}(\mathbf{x+jdx},\mathbf{y},\mathbf{z},\mathbf{t})+\mathbf{u}(\mathbf{x-jdx},\mathbf{y},\mathbf{z},\mathbf{t}) + \\ &\mathbf{u}(\mathbf{x},\mathbf{y+jdy},\mathbf{z},\mathbf{t})+\mathbf{u}(\mathbf{x},\mathbf{y-jdy},\mathbf{z}\mathbf{t}) + \\ &\mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z+jdz},\mathbf{t})+\mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z-jdz},\mathbf{t})\Bigg) \Bigg] + \\ &3\alpha_0 \mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}). \end{aligned} \end{equation}
This derivative is represented in Devito by u.laplace, where u is a TimeFunction object.
With the space and time discretization defined, we can fully discretize the wave-equation with the combination of time and space discretizations and obtain the following second order in time and $k^{th}$ order in space discrete stencil to update one grid point at position $\mathbf{x}, \mathbf{y},\mathbf{z}$ at time $\mathbf{t}$, i.e. \begin{equation} \begin{aligned} \mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t+\Delta t}) = &2\mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) - \mathbf{u}(\mathbf{x},\mathbf{y}, \mathbf{z},\mathbf{t-\Delta t}) +\\ & \frac{\mathbf{\Delta t}^2}{\mathbf{m(\mathbf{x},\mathbf{y},\mathbf{z})}} \Big(\Delta \mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) + \mathbf{q}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) \Big). \end{aligned} \end{equation}
# In order to represent the wavefield u and the square slowness we need symbolic objects
# corresponding to a time-space-varying field (u, TimeFunction) and
# a space-varying field (m, Function)
from devito import TimeFunction

# Define the wavefield with the size of the model and the time dimension
u = TimeFunction(name="u", grid=model.grid, time_order=2, space_order=2)

# We can now write the PDE
pde = model.m * u.dt2 - u.laplace + model.damp * u.dt

# The PDE representation is as on paper
pde
# This discrete PDE can be solved in a time-marching way, updating u(t+dt) from the previous time step.
# Devito has a shortcut for u(t+dt), which is u.forward. We can then rewrite the PDE as
# a time-marching update equation known as a stencil, using customized SymPy functions
from devito import Eq, solve

stencil = Eq(u.forward, solve(pde, u.forward))
With a numerical scheme to solve the homogeneous wave equation, we need to add the source to introduce seismic waves and to implement the measurement (interpolation) operator. This operation is linked to the discrete scheme and needs to be done at the proper time step. The wave equation, semi-discretized in time, with a source reads:\begin{equation} \begin{aligned} \mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t+\Delta t}) = &2\mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) - \mathbf{u}(\mathbf{x},\mathbf{y}, \mathbf{z},\mathbf{t-\Delta t}) +\\ & \frac{\mathbf{\Delta t}^2}{\mathbf{m(\mathbf{x},\mathbf{y},\mathbf{z})}} \Big(\Delta \mathbf{u}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) + \mathbf{q}(\mathbf{x},\mathbf{y},\mathbf{z},\mathbf{t}) \Big). \end{aligned} \end{equation}
It shows that in order to update $\mathbf{u}$ at time $\mathbf{t+\Delta t}$ we have to inject the value of the source term $\mathbf{q}$ at time $\mathbf{t}$. In Devito, this corresponds to updating $u$ at index $t+1$ (with $t$ = time implicitly) with the source at time $t$. On the receiver side, the problem is easier, as it only requires recording the data at the given time step $t$ for the receiver at time $time=t$.
# Finally we define the source injection and receiver read function to generate the corresponding code
src_term = src.inject(field=u.forward, expr=src * dt**2 / model.m)

# Create interpolation expression for receivers
rec_term = rec.interpolate(expr=u.forward)
After constructing all the necessary expressions for updating the wavefield, injecting the source term and interpolating onto the receiver points, we can now create the Devito operator that will generate the C code at runtime. When creating the operator, Devito's two optimization engines will log which performance optimizations have been performed.
Note: The argument subs=model.spacing_map causes the operator to substitute values for our current grid spacing into the expressions before code generation. This reduces the number of floating point operations executed by the kernel by pre-evaluating certain coefficients.
#NBVAL_IGNORE_OUTPUT
from devito import Operator

op = Operator([stencil] + src_term + rec_term, subs=model.spacing_map)
Now we can execute the created operator for a number of timesteps. We specify the number of timesteps to compute with the keyword time and the timestep size with dt.
#NBVAL_IGNORE_OUTPUT
op(time=time_range.num-1, dt=model.critical_dt)
Operator `Kernel` run in 0.01 s
After running our operator kernel, the data associated with the receiver symbol rec.data has now been populated due to the interpolation expression we inserted into the operator. This allows us to visualize the shot record:
#NBVAL_IGNORE_OUTPUT
from examples.seismic import plot_shotrecord

plot_shotrecord(rec.data, model, t0, tn)
assert np.isclose(np.linalg.norm(rec.data), 450, rtol=10)
|
In the documentclass I am writing, the user will input an integer such as \lessonnumber, possibly with a number of leading zeroes, e.g. 4 as 04. At some places in the document I will need the full form (04) and at others I only want the common form (4). How can I accomplish this?
I introduce \stripzero, which will work with the argument either in or not in braces. The argument can be an expandable macro.
The macro works recursively. The definition of \stripzero first expands its argument, which is needed to allow operation on macros such as \lessonnumber. Once the argument is expanded, it calls the recursion routine, which compares the next token to a 0. If it is not a zero, it prints the token and stops. If the token is a 0, it ignores/discards it and calls itself again to look at the next token.
The trick to avoid the macro looking at the \else as its next argument is to \expandafter the macro, so as to absorb the \else. If I have the recursion call upon \stripzero (rather than \stripzerohelp), the \expandafter is not necessary. However, a two-macro recursion is also not the most efficient, so I left the recursion routine calling directly upon itself, while retaining the \expandafter.
\documentclass{article}
\def\stripzero#1{\expandafter\stripzerohelp#1}
\def\stripzerohelp#1{\ifx 0#1\expandafter\stripzerohelp\else#1\fi}
\begin{document}
\stripzero{4}
\stripzero 04
\stripzero 004
\stripzero{004}
\def\lessonnumber{00056}
\stripzero\lessonnumber
\stripzero{\lessonnumber}
\end{document}
Marc raises the interesting question of how to handle an argument of pure zeroes. The above MWE strips them all away. However, that may not be desired: it may be preferable to print out a single 0 if the argument is a string of zeros.
Therefore, I provide the following alternate implementation that checks the first non-zero token. If it is a digit, it behaves as before. But if the following token is not a digit, it concludes the number was a string of pure zeros and outputs a single 0.
\documentclass{article}
\def\stripzero#1{\expandafter\stripzerohelp#1}
\def\stripzerohelp#1{\ifx 0#1\expandafter\stripzerohelp\else\checkrange{#1}#1\fi}
\def\checkrange#1{\ifx1#1\else\ifx2#1\else\ifx3#1\else\ifx4#1\else\ifx5#1\else%
  \ifx6#1\else\ifx7#1\else\ifx8#1\else\ifx9#1\else0\fi\fi\fi\fi\fi\fi\fi\fi\fi}
\begin{document}
\stripzero{4}
\stripzero 04
\stripzero 004
\stripzero{004}
\def\lessonnumber{00056}
\stripzero\lessonnumber
\stripzero{\lessonnumber}
\def\lessonnumber{000}
\stripzero{\lessonnumber}xyz
\stripzero\lessonnumber xyz
$\stripzero\lessonnumber\vec{x}$
\end{document}
Numbers smaller than $2^{31}-1$
If the number fits in TeX's number range, then the number can be normalized with vanilla TeX's \number:
\number\lessonnumber\relax
The \relax at the end (or an empty curly brace pair) prevents TeX from looking for more digits after \lessonnumber. An expandable version must use a different trick:
\makeatletter
\newcommand*{\normalizedlessonnumber}{%
  \expandafter\@firstofone\expandafter{\number\lessonnumber}%
}
\makeatother
Then the closing curly brace serves as the stopper for TeX's digit scanner, and the curly braces will be gone as argument braces of \@firstofone.
The expandable e-TeX version with additional support for some arithmetic operations is:
\the\numexpr(\lessonnumber)\relax
Unlimited integers
An expandable normalization for unlimited integer numbers can be done by \bigintcalcNum of package bigintcalc:
\bigintcalcNum{\lessonnumber}
Adding leading zeros
\makeatletter
\newcommand{\padnum}[2]{%
  \ifnum#1>1 \ifnum#2<10 0\fi
  \ifnum#1>2 \ifnum#2<100 0\fi
  \ifnum#1>3 \ifnum#2<1000 0\fi
  \ifnum#1>4 \ifnum#2<10000 0\fi
  \ifnum#1>5 \ifnum#2<100000 0\fi
  \ifnum#1>6 \ifnum#2<1000000 0\fi
  \ifnum#1>7 \ifnum#2<10000000 0\fi
  \ifnum#1>8 \ifnum#2<100000000 0\fi
  \ifnum#1>9 \ifnum#2<1000000000 0\fi
  \fi\fi\fi\fi\fi\fi\fi\fi\fi
  \expandafter\@firstofone\expandafter{\number#2}%
}
\makeatother
\padnum is also expandable; that means it can be safely used as part of file names, label names, ...
Usage. The first argument specifies the number of digits which should be filled with leading zeros, when necessary. The second argument is the number to be formatted. Then \padnum{3}{\lessonnumber} with \lessonnumber as 04 will expand to 004.
I would use the \num command from the siunitx package. As explained in the manual, the output of \num is highly configurable. To remove all of the zeros you could just type
\num[minimum-integer-digits=1]{\lessonnumber}
(If \lessonnumber is 0 or 00 etc this will print 0.) To print the "full form" of the number just use \lessonnumber or \num{\lessonnumber}. Conversely, if you wanted to pad \lessonnumber to a three digit number with leading zeros you can type
\num[minimum-integer-digits=3]{\lessonnumber}
For some more examples, the following code yields the corresponding output:
\documentclass{article}
\usepackage{siunitx}
\begin{document}
  \num[minimum-integer-digits=1]{04}
  \num[minimum-integer-digits=3]{4}
  \num[minimum-integer-digits=1]{004}
  \num[minimum-integer-digits=4]{4}
  \num[minimum-integer-digits=1]{00004444}
  \num{00004}
\end{document}
Here's a LuaLaTeX-based solution.
% !TEX TS-program = lualatex
\documentclass{article}
% Define "\strippednumber"
\newcommand\strippednumber[1]{\directlua{tex.sprint(#1)}}
\begin{document}
\newcommand\lessonnumber{0004} % we assume that the students won't enter arbitrary junk...

The value of \verb+\lessonnumber+, as entered by the student, is \lessonnumber.

The value of \verb+\lessonnumber+ without the leading zeros is \strippednumber{\lessonnumber}.
\end{document}
Aside: Under the method proposed here, numbers with more than fourteen [14!] significant digits will be rendered automatically in "scientific" notation. I trust this condition isn't particularly restrictive for the use case you've laid out. :-)
Just use the fact that TeX strips off leading zeros when requested for a <number>:
\documentclass{article}
\usepackage{xparse}
\ExplSyntaxOn
\DeclareExpandableDocumentCommand{\printnumber}{sm}
 {
  \IfBooleanTF{#1}
   { \int_to_arabic:n { #2 } }
   { #2 }
 }
\ExplSyntaxOff
\newcommand{\lessonnumber}{04}
\begin{document}
Full number: \printnumber\lessonnumber

Strip zero: \printnumber*\lessonnumber
\end{document}
Actually, without *, the macro \printnumber does nothing; but we can use the * to selectively strip off the zero when we wish to.
Limitation: only numbers up to $2^{31}-1$ are possible.
This can be simply achieved with \number.
\documentclass{article}
\begin{document}
\makeatletter
\def\lessonnumber{\@ifstar{\@lessonnumber}{\@lessonnumber@stripzero}}
\def\@lessonnumber#1{#1}
\def\@lessonnumber@stripzero#1{\number#1}
\makeatother
\lessonnumber{4}
\lessonnumber{04}
\lessonnumber*{004}
\end{document}
The output will be:
A simple but flexible solution is a counter and the fmtcount package:
\documentclass{article}
\usepackage{fmtcount}
\padzeroes[2]
\newcounter{lesson}
\def\Lesson{{\addtocounter{lesson}{1}\padzeroes[2]%
\vskip1em\large\bfseries Lesson~\decimal{lesson}}\par\vskip.5em}
\begin{document}
\Lesson Some text of the lesson \arabic{lesson}
\Lesson Some text of the lesson \padzeroes[7]\decimal{lesson} \decimalnum{24}
\Lesson Some text of the lesson \padzeroes[3]\decimal{lesson}, not the \decimalnum{4}
\Lesson Some text of the lesson \padzeroes[1]\decimal{lesson}\par
$\vdots$
\addtocounter{lesson}{4}
\Lesson Some text of the lesson \decimal{lesson}
\Lesson Some text of the lesson \decimal{lesson}
\Lesson Some text of the lesson \ordinalstring{lesson}
\Lesson Some text of the lesson \ordinal{lesson}
\Lesson Some text of the lesson \Roman{lesson}
\end{document}
|
Answer
In July, the average monthly temperature is $64^\circ F$.
Work Step by Step
$$f(x)=14\sin[\frac{\pi}{6}(x-4)]+50$$ The question asks when the average temperature is $64^\circ F$. In other words, find $x$ so that $f(x)=64$. Therefore, we can substitute $f(x)=64$ into the given equation and find $x$: $$14\sin[\frac{\pi}{6}(x-4)]+50=64$$ $$14\sin[\frac{\pi}{6}(x-4)]=14$$ $$\sin[\frac{\pi}{6}(x-4)]=1$$ $$\frac{\pi}{6}(x-4)=\sin^{-1}1=\frac{\pi}{2}$$ $$x-4=\frac{\pi}{2}\times\frac{6}{\pi}=3$$ $$x=7$$ $x=7$ refers to the month July. Thus, in July, the average monthly temperature is $64^\circ F$.
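(As a quick numerical sanity check of this solution, a one-line sketch in Python:)

import math
f = lambda x: 14 * math.sin(math.pi / 6 * (x - 4)) + 50
print(f(7))  # prints 64.0, confirming that x = 7 (July) gives 64 degrees F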
|
Dear OR experts,
I am confronted with the following condition in my problem, but I have no idea how to linearize it.
Variables \(x\) and \(z_i\) are binary, \(f(y)\) is a linear function of \(y = \sum\limits_{i}{z_i}\), and \(c\) is an integer constant. \(z_i=1\) means facility \(i\) is open, so \(y\) is the number of open facilities.
If and only if \(f(y)=c\), then \(x=1\); in other words, if \(f(y)\neq c\), then \(x=0\).
Do you know how to represent this relationship in a linear expression? Do I need to introduce an additional binary variable or big \(M\)? Thank you very much. Any suggestions will be greatly appreciated.
Given that \(y \in \{L, \dots, U\}\) (it does not matter how it is obtained), \(x\) is a binary variable and \(c \in \{L, \dots, U\} \) is an integer constant, the condition \(x = 1 \iff f(y) = c\) can be formulated as follows. Introduce additional binary variables \(s\) and \(t\), along with the following constraints: \[s + t + x = 1\] \[f(y) \ge c + (L-c)s + t\] \[f(y) \le c - s + (U-c)t.\]
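A quick brute-force sanity check of this formulation, sketched in Python (treating \(f(y)\) directly as a free integer in \(\{L,\dots,U\}\); the toy values are my own):

# Verify: with s + t + x = 1, the bounds force x = 1 exactly when f == c.
L, U, c = 0, 10, 4

for f in range(L, U + 1):
    feasible_x = set()
    for s in (0, 1):
        for t in (0, 1):
            for x in (0, 1):
                if s + t + x != 1:
                    continue
                if f >= c + (L - c) * s + t and f <= c - s + (U - c) * t:
                    feasible_x.add(x)
    # For each value of f, the feasible x should be 1 iff f == c, else 0.
    assert feasible_x == ({1} if f == c else {0}), (f, feasible_x)
print("formulation verified on the toy range")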
[This was posted on behalf of Slavko.] I have tried several times to post my answer but Akismet wouldn't allow me to do it. My proposition was: if \( c > 0 \) and \( f(y) > 0 \), then the constraint can be formulated as follows. \[\left\{ \begin{align} f(y) \ge x c \\ f(y) \le x c \end{align} \right\}.\]
|
A quadratic inequation is an inequation in which we find numbers, a variable (which we call $$x$$) that can be multiplied by itself, and an inequality symbol.
An example of a quadratic inequation might be: $$$ 2x^2-x < 2x-1$$$
where we can observe that the term $$2x^2$$ is the quadratic term, characteristic of quadratic inequations: if there were no quadratic term, we would have a first degree inequation.
To solve a quadratic inequation we will use a method consisting of a series of steps to be followed.
To apply this method we will need the quadratic formula to solve quadratic equations, so let's remember this:
Given a quadratic equation: $$ax^2+bx+c=0$$, the solutions are given by the formula: $$$ x_{+}=\dfrac{-b+\sqrt{b^2-4ac}}{2a} \qquad x_{-}=\dfrac{-b-\sqrt{b^2-4ac}}{2a}$$$
Two, one or no solutions may be found, depending on the value of $$b^2-4ac$$ (for more information take a look at the unit Quadratic Equation).
Step by step method to find the solution:
Given the inequation, operate until obtaining a zero on one of the sides of the inequation, obtaining an expression of the type: $$ax^2+bx+c < 0$$ or $$ax^2+bx+c > 0$$ where the values $$b$$ and $$c$$ are real numbers that can be positive, negative or even zero, and $$a$$ is a positive value. In case of finding a negative value of $$a$$, we will multiply the entire inequation by $$(-1)$$, changing in this way the sign of $$a$$ (and, consequently, the signs of the other terms and the direction of the inequality).
We will look for the solutions of the equation $$ax^2+bx+c= 0$$, induced by the inequation $$ax^2+bx+c < 0$$ or $$ax^2+bx+c > 0$$.
At this point, we have three options:
If we do not have solutions of the equation, we must consider two cases:
If $$ax^2+bx+c > 0$$: The solution is any real value: all the numbers satisfy the inequation. If $$ax^2+bx+c < 0$$: No value of $$x$$ satisfies the inequation, therefore, the inequation has no solution.
If we observe the graph of $$y=ax^2+bx+c$$ we will see that it does not cut the X axis, since the equation has no solutions. The value of $$a$$ being positive, the entire graph lies above the X axis, with positive values of $$y$$. Therefore, if the inequation has sign greater than (or greater than or equal to), then every point is a solution of the inequation, and if it has sign less than (or less than or equal to), then no point is a solution.
If we have exactly one solution, we proceed as follows:
If we had the inequation $$ax^2+bx+c > 0$$, and we proceed: $$$ ax^2+bx+c > 0 \Rightarrow (x-x_1)^2 > 0 \Rightarrow (x-x_1)(x-x_1) > 0$$$ $$$ \Rightarrow \left\{ \begin{array}{l} (x-x_1) < 0 \Rightarrow x < x_1 \\ (x-x_1) > 0 \Rightarrow x > x_1 \end{array} \right. $$$
We have to consider the last two valid cases since a product of two numbers is positive if these two numbers are both positive or both negative.
So the solution of the inequation will be the set of $$x$$ satisfying $$x < x_1$$ or $$x > x_1$$, that is, every $$x$$ except $$x_1$$, where $$x_1$$ is the solution to the equation $$ax^2+bx+c=0$$.
In case we have an inequality of the type $$ax^2+bx+c \geqslant 0$$, apart from the solutions considered above we would also add $$x_1$$ itself, and the solution region would be the entire real line.
If we have the inequation $$ax^2+bx+c < 0$$, we will do: $$$ ax^2+bx+c < 0 \Rightarrow (x-x_1)^2 < 0 \Rightarrow \text{ There is no solution} $$$ since a squared number will always be positive, and we are demanding that it should be negative.
In the event that we have an inequality of the type $$ax^2+bx+c \leqslant 0$$, we would have exactly one solution: the solution $$x_1$$ of the equation.
If we have two solutions, $$x_1$$ and $$x_2$$, with $$x_1 < x_2$$, we will follow this procedure:
(let's remember that the value of $$a$$ is always positive)
If $$ax^2+bx+c > 0$$: $$$ ax^2+bx+c > 0 \Rightarrow (x-x_1)(x-x_2) > 0 \Rightarrow$$$ $$$ \Rightarrow \left\{ \begin{array}{l} \text{a) } \ (x-x_1) > 0 \ \text{ and } \ (x-x_2) > 0 \\ \text{b) } \ (x-x_1) < 0 \ \text{ and } \ (x-x_2) < 0 \end{array} \right. $$$ $$$ \Rightarrow \left\{ \begin{array}{l} \text{a) } \ x > x_1 \ \text{ and } \ x > x_2 \\ \text{b) } \ x < x_1 \ \text{ and } \ x < x_2 \end{array} \right. $$$
and as we have supposed that $$x_1 < x_2$$, case a) reduces to $$x > x_2$$ and case b) to $$x < x_1$$; the solution is therefore $$x < x_1$$ or $$x > x_2$$.
If $$ax^2+bx+c < 0$$: $$$ ax^2+bx+c < 0 \Rightarrow (x-x_1)(x-x_2) < 0 \Rightarrow$$$ $$$ \Rightarrow \left\{ \begin{array}{l} \text{a) } \ (x-x_1) > 0 \ \text{ and } \ (x-x_2) < 0 \\ \text{b) } \ (x-x_1) < 0 \ \text{ and } \ (x-x_2) > 0 \end{array} \right. $$$ $$$ \Rightarrow \left\{ \begin{array}{l} \text{a) } \ x > x_1 \ \text{ and } \ x < x_2 \\ \text{b) } \ x < x_1 \ \text{ and } \ x > x_2 \end{array} \right. $$$ and as we have supposed that $$x_1 < x_2$$, case b) is impossible, so we remain with case a), that is, $$x_1 < x < x_2$$. Once we have found the region where the inequation is satisfied, we are done.
Remember that in the resolution algorithm we have just used strict inequalities (less than, greater than), but the same reasoning can be applied to inequalities of the type 'less than or equal to' or 'greater than or equal to'.
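As a computational aside, here is a minimal sketch in Python (the function name is ours) that mechanizes the case analysis above for the strict inequalities; the three printed lines correspond to the worked examples that follow.

# Classify the solution set of a*x^2 + b*x + c (sense) 0 via the discriminant.
import math

def solve_quadratic_inequality(a, b, c, sense=">"):
    """Solution set of a*x^2 + b*x + c > 0 or < 0, for a != 0 (strict only)."""
    if a < 0:  # normalize so that a > 0, flipping the inequality
        a, b, c = -a, -b, -c
        sense = "<" if sense == ">" else ">"
    disc = b * b - 4 * a * c
    if disc < 0:  # no real roots: the parabola stays above the x-axis
        return "all real x" if sense == ">" else "no solution"
    if disc == 0:  # one (double) root
        x1 = -b / (2 * a)
        return f"x != {x1}" if sense == ">" else "no solution"
    x1 = (-b - math.sqrt(disc)) / (2 * a)
    x2 = (-b + math.sqrt(disc)) / (2 * a)
    if sense == ">":
        return f"x < {x1} or x > {x2}"
    return f"{x1} < x < {x2}"

print(solve_quadratic_inequality(1, 2, 1, ">"))   # x != -1.0
print(solve_quadratic_inequality(1, 2, 1, "<"))   # no solution
print(solve_quadratic_inequality(1, 0, -1, ">"))  # x < -1.0 or x > 1.0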
Next we will see an example of each type:
$$$ x^2+x+2 > -1-x $$$
Resolution: $$$ x^2+x+2 > -1-x \Rightarrow x^2+2x +1 > 0 $$$
We find the solutions of the equation $$x^2+2x+1=0$$: $$$ x=\dfrac{-2\pm \sqrt{4-4}}{2}=-1$$$ There is only one solution.
Using the given outline, the solution is $$x < -1$$ or $$x > -1$$, that is, every point except $$-1$$.
$$$ x^2+2 < -1-2x $$$
Resolution: $$$ x^2+2 < -1-2x \Rightarrow x^2+2x +1 < 0 $$$
We find the solutions of the equation $$x^2+2x+1=0$$: $$$ x=\dfrac{-2\pm \sqrt{4-4}}{2}=-1$$$ There is only one solution.
Using the given outline, we see that there are no possible solutions.
$$$ -x(x-1)-x < -1 $$$
Resolution: $$$ -x(x-1)-x < -1 \Rightarrow -x^2+x-x +1 < 0 \Rightarrow -x^2 +1 < 0 \Rightarrow x^2 -1 > 0 $$$
We find the solutions of the equation $$x^2-1=0$$: $$x=\pm 1$$
As we have two solutions, the solution of the problem is (using the outline) $$x < -1$$ or $$x > 1$$.
|
Search
Now showing items 1-10 of 30
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |$\eta$| < 0.8 ...
Dielectron production in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2018-09-12)
The first measurement of e$^+$e$^-$ pair production at mid-rapidity (|$\eta_{\rm e}$| < 0.8) in pp collisions at $\sqrt{s}$ = 7 TeV with ALICE at the LHC is presented. The dielectron production is studied as a function of the invariant mass ($m_{\rm ee}$ ...
|
The equation $$2^x\left(x\ln(2)-1\right)=n$$ has no solution in terms of elementary functions and, in principle, only numerical methods (such as Newton's) would need to be used.
However, for your curiosity, there is a solution in terms of the Lambert function $W$, defined by $z=W(z)\, e^{W(z)}$. In the case of the considered equation, the solution would be $$x=\frac{W\left(\frac{n}{e}\right)+1}{\log (2)}$$ The Wikipedia page gives approximate expressions which allow quite accurate evaluation of the function.
In practice, any equation which can be written as $$A+Bx+C\log(D+Ex)=0$$ has solutions which can be expressed in terms of the Lambert function.
For illustration purposes, let us consider a large value such as $n=123456789$ and let us use $$W(z)=L_1-L_2+\frac{L_2}{L_1}+\frac{L_2(L_2-2)}{2L_1^2}+\cdots$$ where $L_1=\log(z)$ and $L_2=\log(L_1)$. We shall then have $z \approx 4.54172\times 10^7$, $L_1\approx 17.6314$, $L_2\approx 2.86968$ and then $W\left(\frac{n}{e}\right)\approx 14.9285$ and $x\approx 22.9800$ which is the solution for six significant figures.
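A minimal numerical cross-check of this closed form, assuming SciPy is available:

# Cross-checking x = (W(n/e) + 1) / ln 2 against SciPy's Lambert W.
import numpy as np
from scipy.special import lambertw

n = 123456789
x = (lambertw(n / np.e).real + 1) / np.log(2)
print(x)                                 # ~22.98
print(2**x * (x * np.log(2) - 1) - n)    # residual ~ 0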
Edit
If you cannot use the Lambert function, you need to apply some numerical method for finding the root of an equation $F(x)=0$. There are many of them but, at least to me, the simplest is Newton's. Starting from a "reasonable" guess $x_0$, the method updates it according to $$x_{n+1}=x_n-\frac{F(x_n)}{F'(x_n)}$$ In the case of the equation you posted, you certainly noticed that it is extremely stiff and this is not very convenient. However, if you plot its logarithm, it looks much better (for $x>5$, it really looks like a straight line). So, let us consider $$F(x)=\log\left(2^x\left(x\ln(2)-1\right)\right)-\log(n)$$ If $n$ is large, you easily perceive that the equation is $\approx \log(2^x)-\log(n)=x\log(2)-\log(n)$; so, a "reasonable" guess could be $x_0=\frac{\log(n)}{\log(2)}$. Let us apply it with $n=123456789$; the successive iterates are $$x_0=26.8794$$ $$x_1=22.9616$$ $$x_2=22.9795$$ which is the solution to six significant figures.
If we had worked without the logarithmic transform, the iterates would have been $$x_0=26.8794$$ $$x_1=25.5916$$ $$x_2=24.4288$$ $$x_3=24.4288$$ $$x_4=23.5371$$ $$x_5=23.0797$$ $$x_6=22.9831$$ $$x_7=22.9795$$ which is much slower than the previous case.
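A minimal sketch of this Newton iteration on the log-transformed equation (function and variable names are mine):

# Newton's method on F(x) = x*ln2 + log(x*ln2 - 1) - log(n).
import math

def newton_log(n, x0=None, tol=1e-12, itmax=50):
    ln2 = math.log(2)
    F = lambda x: x * ln2 + math.log(x * ln2 - 1) - math.log(n)
    dF = lambda x: ln2 + ln2 / (x * ln2 - 1)
    x = x0 if x0 is not None else math.log(n) / ln2  # the "reasonable" guess
    for _ in range(itmax):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(newton_log(123456789))  # ~22.9795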
For sure, plotting the function would give a closer starting point.
If I may suggest, have a look at the Wikipedia page about the Lambert function; it is a really interesting function with a lot of practical applications. If you search on this site, you will find a lot of questions whose answers turn out to be ... Lambert!
I hope and wish this helps you. If this is not the case, just post.
|
I need to draw this (likely very simple) diagram shown in the image but I've never used Tikz before.
My problem is basically how to draw these vertical dotted lines and, mainly, how to arrange the math in blocks like this with these arrows pointing from one to the other. Of course I have no problem writing the math; the needed structure is giving me problems. Any help would be very appreciated. Thanks in advance.
So far I was able to do this. I feel I'm almost there, but having problems with alignments. Any help?
\begin{tikzpicture}[square/.style={regular polygon,regular polygon sides=4}]
  % Draw dashed lines
  \draw [dashed] (2.5,0) -- (2.5,2);
  \draw [dashed] (5,0) -- (5,2);
  \draw [dashed] (7.5,0) -- (7.5,2);
  % Draw nodes of equations
  \node at (1.25,1) [square,inner sep=-1.3em, draw] {
    $\begin{aligned} \vert \Psi \rangle &= \sum_{i}{c_i \vert a_i \rangle},\\ &\sum_{i}{\vert a_i \rangle \langle a_i \vert} = \mathds{1} \end{aligned}$};
  \node at (3.75,1) [square,inner sep=-1.3em, draw] {};
  \node at (6.25,1) [square,inner sep=-1.3em, draw] {};
  % Draw title nodes
  \node at (1.25,2) [square,inner sep=-1.3em, draw] {$\left( \mathcal{H}, \langle \cdot \vert \cdot \rangle \right)$};
  \node at (3.75,2) [square,inner sep=-1.3em, draw] {$\left( \mathcal{H}_{Phys}, \langle \cdot \vert \cdot \rangle _{\eta _{+}} \right)$};
  \node at (6.25,2) [square,inner sep=-1.3em, draw] {$\left( \mathcal{H}, \langle \cdot \vert \cdot \rangle \right)$};
\end{tikzpicture}
Also I don't know if this is a good approach to do what I want. Thanks.
|
The Wiener algebra $\mathcal W$ is defined as $\text{Fourier}(L^1(\mathbb R))$, i.e. the image by the Fourier transform of $L^1(\mathbb R)$. Riemann-Lebesgue's lemma ensures that $$ \mathcal W\subset C^0_{(0)}(\mathbb R)=\{\phi\text{ continuous on }\mathbb R, \lim_{\vert \xi\vert\rightarrow+\infty}\phi(\xi)=0\} . $$
I believe that the injection $\mathcal W\subset C^0_{(0)}(\mathbb R)$ is not onto. Is it due to Hardy? Gaier? Both at different times?
Is there an "explicit" function $\phi\in C^0_{(0)}(\mathbb R)$ whose inverse Fourier transform (say in the distribution sense) does not belong to $L^1(\mathbb R)$?
Is there a functional analytic reason for why the Banach spaces $L^1(\mathbb R)$ and $C^0_{(0)}(\mathbb R)$ cannot be isomorphic?
|
As described in the Rich Output tutorial, the IPython display system can display rich representations of objects in a number of formats, including HTML, JSON, PNG, JPEG, SVG, and LaTeX.
This Notebook shows how you can add custom display logic to your own classes, so that they can be displayed using these rich representations. There are two ways of accomplishing this:
1. Implementing special display methods, such as _repr_html_, when you define your class.
2. Registering a display function for a particular existing type.
This Notebook describes and illustrates both approaches.
Import the IPython display functions.
from IPython.display import ( display, display_html, display_png, display_svg)
Parts of this notebook need the matplotlib inline backend:
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
The main idea of the first approach is that you have to implement special display methods when you define your class, one for each representation you want to use. Here is a list of the names of the special methods and the values they must return:
_repr_html_: return raw HTML as a string
_repr_json_: return a JSONable dict
_repr_jpeg_: return raw JPEG data
_repr_png_: return raw PNG data
_repr_svg_: return raw SVG data as a string
_repr_latex_: return LaTeX commands in a string surrounded by "$".
_repr_mimebundle_: return a full mimebundle containing the mapping from all mimetypes to data
As an illustration, we build a class that holds data generated by sampling a Gaussian distribution with given mean and standard deviation. Here is the definition of the Gaussian class, which has a custom PNG and LaTeX representation.
from IPython.core.pylabtools import print_figure
from IPython.display import Image, SVG, Math

class Gaussian(object):
    """A simple object holding data sampled from a Gaussian distribution.
    """
    def __init__(self, mean=0.0, std=1, size=1000):
        self.data = np.random.normal(mean, std, size)
        self.mean = mean
        self.std = std
        self.size = size
        # For caching plots that may be expensive to compute
        self._png_data = None

    def _figure_data(self, format):
        fig, ax = plt.subplots()
        ax.hist(self.data, bins=50)
        ax.set_title(self._repr_latex_())
        ax.set_xlim(-10.0, 10.0)
        data = print_figure(fig, format)
        # We MUST close the figure, otherwise IPython's display machinery
        # will pick it up and send it as output, resulting in a double display
        plt.close(fig)
        return data

    def _repr_png_(self):
        if self._png_data is None:
            self._png_data = self._figure_data('png')
        return self._png_data

    def _repr_latex_(self):
        return r'$\mathcal{N}(\mu=%.2g, \sigma=%.2g),\ N=%d$' % (self.mean,
                                                                 self.std, self.size)
Create an instance of the Gaussian distribution and return it to display the default representation:
x = Gaussian(2.0, 1.0)
x
You can also pass the object to the display function to display the default representation:
display(x)
Use display_png to view the PNG representation:
display_png(x)
Note the difference between display and display_png: the former computes all available representations of the object and lets the notebook frontend decide which one to show, while the latter requests only the PNG representation.
Create a new Gaussian with different parameters:
x2 = Gaussian(0, 2, 2000)
x2
You can then compare the two Gaussians by displaying their histograms:
display_png(x)
display_png(x2)
Note that, like display, you can call the display functions multiple times in a cell.
When you are directly writing your own classes, you can adapt them for display in IPython by following the above approach. But in practice, you often need to work with existing classes that you can't easily modify. We now illustrate how to add rich output capabilities to existing objects. We will use the NumPy polynomials and change their default representation to be a formatted LaTeX expression.
First, consider how a NumPy polynomial object renders by default:
p = np.polynomial.Polynomial([1,2,3], [-10, 10])
p
Polynomial([ 1., 2., 3.], [-10., 10.], [-1, 1])
Next, define a function that pretty-prints a polynomial as a LaTeX string:
def poly_to_latex(p):
    terms = ['%.2g' % p.coef[0]]
    if len(p) > 1:
        term = 'x'
        c = p.coef[1]
        if c != 1:
            term = ('%.2g ' % c) + term
        terms.append(term)
    if len(p) > 2:
        for i in range(2, len(p)):
            term = 'x^%d' % i
            c = p.coef[i]
            if c != 1:
                term = ('%.2g ' % c) + term
            terms.append(term)
    px = '$P(x)=%s$' % '+'.join(terms)
    dom = r', $x \in [%.2g,\ %.2g]$' % tuple(p.domain)
    return px + dom
This produces, on our polynomial p, the following:
poly_to_latex(p)
'$P(x)=1+2 x+3 x^2$, $x \\in [-10,\\ 10]$'
You can render this string using the Latex class:
from IPython.display import LatexLatex(poly_to_latex(p))
However, you can configure IPython to do this automatically by registering the Polynomial class and the poly_to_latex function with an IPython display formatter. Let's look at the default formatters provided by IPython:
ip = get_ipython()
for mime, formatter in ip.display_formatter.formatters.items():
    print('%24s : %s' % (mime, formatter.__class__.__name__))
text/plain : PlainTextFormatter text/html : HTMLFormatter text/markdown : MarkdownFormatter image/svg+xml : SVGFormatter image/png : PNGFormatter application/pdf : PDFFormatter image/jpeg : JPEGFormatter text/latex : LatexFormatter application/json : JSONFormatter application/javascript : JavascriptFormatter
The formatters attribute is a dictionary keyed by MIME types. To define a custom LaTeX display function, you want a handle on the text/latex formatter:
ip = get_ipython()
latex_f = ip.display_formatter.formatters['text/latex']
The formatter object has a couple of methods for registering custom display functions for existing types.
help(latex_f.for_type)
Help on method for_type in module IPython.core.formatters: for_type(typ, func=None) method of IPython.core.formatters.LatexFormatter instance Add a format function for a given type. Parameters ----------- typ : type or '__module__.__name__' string for a type The class of the object that will be formatted using `func`. func : callable A callable for computing the format data. `func` will be called with the object to be formatted, and will return the raw data in this formatter's format. Subclasses may use a different call signature for the `func` argument. If `func` is None or not specified, there will be no change, only returning the current value. Returns ------- oldfunc : callable The currently registered callable. If you are registering a new formatter, this will be the previous value (to enable restoring later).
help(latex_f.for_type_by_name)
Help on method for_type_by_name in module IPython.core.formatters: for_type_by_name(type_module, type_name, func=None) method of IPython.core.formatters.LatexFormatter instance Add a format function for a type specified by the full dotted module and name of the type, rather than the type of the object. Parameters ---------- type_module : str The full dotted name of the module the type is defined in, like ``numpy``. type_name : str The name of the type (the class name), like ``dtype`` func : callable A callable for computing the format data. `func` will be called with the object to be formatted, and will return the raw data in this formatter's format. Subclasses may use a different call signature for the `func` argument. If `func` is None or unspecified, there will be no change, only returning the current value. Returns ------- oldfunc : callable The currently registered callable. If you are registering a new formatter, this will be the previous value (to enable restoring later).
In this case, we will use for_type_by_name to register poly_to_latex as the display function for the Polynomial type:
latex_f.for_type_by_name('numpy.polynomial.polynomial', 'Polynomial', poly_to_latex)
Once the custom display function has been registered, all NumPy Polynomial instances will be represented by their LaTeX form instead:
p
p2 = np.polynomial.Polynomial([-20, 71, -15, 1])
p2
_repr_mimebundle_
Available on IPython 5.4+ and 6.1+.
Objects needing full control over the repr protocol may decide to implement the _repr_mimebundle_(include, exclude) method. Unlike the other _repr_*_ methods, it must return multiple representations of the object in a mapping whose keys are mimetypes and whose values are the associated data. The _repr_mimebundle_() method may also return a second mapping from mimetypes to metadata.
Example:
class Gaussian(object):
    """A simple object holding data sampled from a Gaussian distribution.
    """
    def __init__(self, mean=0.0, std=1, size=1000):
        self.data = np.random.normal(mean, std, size)
        self.mean = mean
        self.std = std
        self.size = size
        # For caching plots that may be expensive to compute
        self._png_data = None

    def _figure_data(self, format):
        fig, ax = plt.subplots()
        ax.hist(self.data, bins=50)
        ax.set_xlim(-10.0, 10.0)
        data = print_figure(fig, format)
        # We MUST close the figure, otherwise IPython's display machinery
        # will pick it up and send it as output, resulting in a double display
        plt.close(fig)
        return data

    def _compute_mathml(self):
        return """
        <math xmlns="http://www.w3.org/1998/Math/MathML">
          <mrow class="MJX-TeXAtom-ORD">
            <mi class="MJX-tex-caligraphic" mathvariant="script">N</mi>
          </mrow>
          <mo stretchy="false">(</mo>
          <mi>μ<!-- μ --></mi>
          <mo>=</mo>
          <mn>{mu}</mn>
          <mo>,</mo>
          <mi>σ<!-- σ --></mi>
          <mo>=</mo>
          <mn>{sigma}</mn>
          <mo stretchy="false">)</mo>
          <mo>,</mo>
          <mtext> </mtext>
          <mi>N</mi>
          <mo>=</mo>
          <mn>{N}</mn>
        </math>
        """.format(N=self.size, mu=self.mean, sigma=self.std)

    def _repr_mimebundle_(self, include, exclude, **kwargs):
        """repr_mimebundle should accept include, exclude and **kwargs"""
        if self._png_data is None:
            self._png_data = self._figure_data('png')
        math = r'$\mathcal{N}(\mu=%.2g, \sigma=%.2g),\ N=%d$' % (self.mean,
                                                                 self.std, self.size)
        data = {'image/png': self._png_data,
                'text/latex': math,
                'application/mathml+xml': self._compute_mathml()}
        if include:
            data = {k: v for (k, v) in data.items() if k in include}
        if exclude:
            data = {k: v for (k, v) in data.items() if k not in exclude}
        return data
# that is definitively wrong as it should show the PNG.
display(Gaussian())
In the above example, the three mimetypes are embedded in the notebook document, thus allowing custom extensions and converters to display the representation(s) of their choice.
For example, a tool converting this notebook to epub may decide to use the MathML representation, as most ebook readers cannot run MathJax (unlike browsers).
The _repr_mimebundle_ method is also given two keyword parameters: include and exclude. Each can be a container (e.g. list, set, ...) of mimetypes to return, or None. This allows an implementation to avoid computing potentially unnecessary and expensive representations.
When include is non-empty, _repr_mimebundle_ may decide to return only the mimetypes in include. When exclude is non-empty, _repr_mimebundle_ may decide not to return any mimetype in exclude. If include and exclude overlap, mimetypes present in exclude may not be returned.
If an implementation decides to ignore the include and exclude logic and always returns a full mimebundle, the IPython kernel will take care of removing the non-desired representations.
The _repr_mimebundle_ method should accept arbitrary keyword arguments for future compatibility.
display(Gaussian(), include={'text/latex'}) # only show latex
display(Gaussian(), exclude={'image/png'}) # exclude png
display(Gaussian(), include={'text/plain', 'image/png'}, exclude={'image/png'}) # keep only plain/text
<__main__.Gaussian at 0x11a8a0b38>
Rich output special methods and functions can only display one object or MIME type at a time. Sometimes this is not enough if you want to display multiple objects or MIME types at once. An example of this would be to use an HTML representation to put some HTML elements in the DOM and then use a JavaScript representation to add events to those elements.
IPython 2.0 recognizes another display method, _ipython_display_, which allows your objects to take complete control of displaying themselves. If this method is defined, IPython will call it and make no effort to display the object using the above described _repr_*_ methods or custom display functions. It's a way for you to say "Back off, IPython, I can display this myself." Most importantly, your _ipython_display_ method can make multiple calls to the top-level display functions to accomplish its goals.
Here is an object that uses display_html and display_javascript to make a plot using the Flot JavaScript plotting library:
import json
import uuid
from IPython.display import display_javascript, display_html, display

class FlotPlot(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.uuid = str(uuid.uuid4())

    def _ipython_display_(self):
        json_data = json.dumps(list(zip(self.x, self.y)))
        display_html('<div id="{}" style="height: 300px; width:80%;"></div>'.format(self.uuid),
                     raw=True)
        display_javascript("""
        require(["//cdnjs.cloudflare.com/ajax/libs/flot/0.8.2/jquery.flot.min.js"], function() {
          var line = JSON.parse("%s");
          console.log(line);
          $.plot("#%s", [line]);
        });
        """ % (json_data, self.uuid), raw=True)
import numpy as np
x = np.linspace(0, 10)
y = np.sin(x)
FlotPlot(x, np.sin(x))
|
This seems an excellent classroom experiment to reinforce the practical meaning of the confidence level of an interval estimate, and I wish more instructors would do this kind of thing.
First, let's just check to make sure we're on the same page about the experiment. Each of the 16 groups rolls a fair die $n = 25$ times. The sum of the 25 values is divided by 25 to get $\bar X$.
For a single roll of a die, we get the value $X_i$, which has $\mu = E(X_i) = 3.5$ and $\sigma^2 = V(X_i) = 35/12$. Thus $V(\bar X) = \sigma^2/n = 35/300 = 7/60.$ Then you assume $\bar X$ is close enough to normal to use the 90% z-interval $\bar X \pm 1.645\sigma/\sqrt{n}$, or $\bar X \pm 0.5619.$
You have 16 of these 90% CIs, and consider each of them to be a Success if it includes 3.5 and a Failure if it does not. You were surprised to get no Failures because you think the chances of that are about 18% or 19%. (Just to make sure I'm not having a 'senior moment' here, I ran a million-run simulation in R of what you did, and around 18.5% of such 25-die experiments with 16 groups got no Failures.)
If that is the scenario, then you were moderately unlucky. But you might also have considered yourself unlucky if you had gotten 13 or fewer Successes (an event even a little more likely). I suppose you would have been very pleased with 14 or 15 Successes (the two most likely results, to be sure), but the probability of that is only about 60%. And the THIRD most likely result is 16.
Maybe you can show your class a bar chart of the distribution BINOM(16, .9)and have a 'teachable moment' about variability.
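If it helps, here is a quick sketch (in Python with SciPy, though R's dbinom works equally well) of the BINOM(16, .9) probabilities just mentioned:

# Distribution of the number of 90% CIs (out of 16) that cover mu = 3.5.
from scipy.stats import binom

n_groups, p_cover = 16, 0.90
dist = binom(n_groups, p_cover)
for k in range(10, n_groups + 1):
    print(f"P({k} successes) = {dist.pmf(k):.4f}")
print(f"P(all 16 cover 3.5) = {dist.pmf(16):.4f}")  # ~0.185
print(f"P(at most 13)       = {dist.cdf(13):.4f}")  # ~0.21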
|
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting
zedler Joined: 03 Mar 2006 Posts: 15
Posted: Mon Mar 27, 2006 11:53 am Post subject: spacing of mathrm Hello,
\documentclass{book}
\usepackage{times,mtpro2}
\begin{document}
$(\mathrm j$
\end{document}
gives touching glyphs.
Michael

Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Tue Mar 28, 2006 12:29 pm Post subject: Re: spacing of mathrm
zedler wrote: Hello,
\documentclass{book}
\usepackage{times,mtpro2}
\begin{document}
$(\mathrm j$
\end{document}
gives touching glyphs.
Michael
There's not much I can do about that---if you are using Times as the
text font, then in text (j also touches! [though $(\mathrm j$ is worse, with more overlap].
I'm wondering how this arose. Assuming that you didn't really want
\mathrm{(j ... I would guess that you are using roman letters as a set
of variables, either in addition to, or in place of, the MTPro2 italic letters.
In that case, you really would want a special font for this purpose, in the
same way that MTPro's \mathbf font has different spacing than the
Times-bold, so that subscripts and superscripts will work better.

zedler Joined: 03 Mar 2006 Posts: 15
Posted: Wed Mar 29, 2006 5:23 am Post subject: Re: spacing of mathrm
Quote: There's not much I can do about that---if you are using Times as the
text font, then in text (j also touches! [though $(\mathrm j$ is worse, with more overlap].
I'm wondering how this arose. Assuming that you didn't really want
\mathrm{(j ... I would guess that you are using roman letters as a set
of variables, either in addition to, or in place of, the MTPro2 italic letters.
In that case, you really would want a special font for this purpose, in the
same way that MTPro's \mathbf font has different spacing than the
Times-bold, so that subscripts and superscripts will work better.
Yes, I really want to typeset $\exp(\mathrm j\omega\tau=$ ;-)
I suppose this can only be corrected by increasing the bracket side bearings, but your approach was to have very tight bracket side bearings and adjust/increase the spacing using kerns. This of course fails for \mathrm...
The tight bracket side bearings were also an issue in my previous example, $[\,]_{xy}$. CM, Fourier, Lucida and MnSymbol don't have this problem...
Michael

Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Wed Mar 29, 2006 6:39 am Post subject: Re: spacing of mathrm
zedler wrote:
Quote: There's not much I can do about that---if you are using Times as the
text font, then in text (j also touches! [though $(\mathrm j$ is worse, with more overlap].
I'm wondering how this arose. Assuming that you didn't really want
\mathrm{(j ... I would guess that you are using roman letters as a set
of variables, either in addition to, or in place of, the MTPro2 italic letters.
In that case, you really would want a special font for this purpose, in the
same way that MTPro's \mathbf font has different spacing than the
Times-bold, so that subscripts and superscripts will work better.
Yes, I really want to typeset $\exp(\mathrm j\omega\tau=$ ;-)
I suppose this can only be corrected by increasing the bracket side bearings, but your approach was to have very tight bracket side bearings and adjust/increase the spacing using kerns. This of course fails for \mathrm...
The tight bracket side bearings were also an issue in my previous example, $\[\]_{xy}$. CM, Fourier, Lucida and MnSymbol don't have this problem...
Michael
Actually, I didn't, and one can't, adjust the spacing after a left parenthesis or before a right parenthesis
using kerns [I mentioned this on some posting somewhere once before];
even if you put kerns into the tfm file, they are ignored because the left parenthesis is an "opening", which determines its own spacing, and similarly the right parenthesis is a "closing". I chose side bearings for
the parenthesis that worked well with the italic letters on the math italic font.
Even if that were not the case, the real problem is that
in the expression \exp(\mathrm j the ( comes from the math italic font,
while the j is coming from an entirely different font, the Times-Roman font, and TeX has no way of kerning characters in different fonts. If you
were to use some other roman font as your text font, then the problem might very well be less or much more---it would depend entirely on the left side bearing of j in that particular font.
I suspect that j is being used here as some special character (perhaps in electrical engineering, although I thought they preferred bold j); in that case, I would just define something like \myj to give a small kern followed by j---in fact, it's easier to type \myj than to type \mathrm j.
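For what it's worth, a minimal sketch of such a macro might look like this; the kern amount is a guess and would need tuning by eye for the fonts in use:

% A minimal sketch; the 2mu kern is a guess to be tuned by eye.
\newcommand{\myj}{\mkern2mu\mathrm{j}}
% usage: $\exp(\myj\omega\tau)$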
Sorry that [] doesn't work out for you, but I've never seen something like that in any mathematics paper, and since I like the way brackets work with the math italic characters in general, I wouldn't want to change the side bearings just for this special case (once again, changes couldn't be
overridden with kerns).

zedler Joined: 03 Mar 2006 Posts: 15
Posted: Wed Mar 29, 2006 10:00 am Post subject: Re: spacing of mathrm
Quote: Sorry that [] doesn't work out for you, but I've never seen something like that in any mathematics paper, and since I like the way brackets work with the math italic characters in general, I wouldn't want to change the side bearings just for this special case (once again, changes couldn't be
overridden with kerns).
I can apply manual spacings, the "\mathrm j" is stored in a macro anyway and the empty brackets I need only once. Perhaps you're interested, I've put together a collection showing how different math font setups behave in the above mentioned cases: http://www.hft.ei.tum.de/mz/mtpro2_sidebearings.pdf
Michael

Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Wed Mar 29, 2006 1:24 pm Post subject: Re: spacing of mathrm
Quote: ="zedler
I can apply manual spacings, the "\mathrm j" is stored in a macro anyway and the empty brackets I need only once. Perhaps you're interested, I've put together a collection showing how different math font setups behave in the above mentioned cases: http://www.hft.ei.tum.de/mz/mtpro2_sidebearings.pdf
Michael
Interesting.
I'd say that CM looks the worst (especially the \omega and
\tau, as well as being so thin).
Lucida is somewhat "klunky", though definitely easy to read!
(Is this Lucida or Lucida-Bright?) Someone mentioned that section headings are sometimes printed in sans-serif, so that a sans-serif math might be nice to have; I suspect that the Lucida Greek letters would work well for that.
If \mathrm j is in a macro, then probably there should also be some space on the right; certainly needed for CM, not really needed for Lucida
or Minion, useful for Fourier and MTPro2.
By the way, what is []_{\langle6\times6\rangle} ?

zedler Joined: 03 Mar 2006 Posts: 15
Posted: Wed Mar 29, 2006 3:25 pm Post subject: Re: spacing of mathrm
Quote: (Is this Lucida or Lucida-Bright?)
Pctex's Lucida fonts.
Quote: By the way, what is []_{\langle6\times6\rangle} ?
Excerpt from a paper (deadline tomorrow, Mar 30, Hawaii time ;-))
Code:
\begin{document}
\let\mathbf\mbf
...
Next, the impedance matrix of the outer 12-port is obtained by
inverting $\mathbf{Y}^{\langle 16\times16\rangle}$ and taking the
upper left $\langle 12\times12\rangle$ submatrix
\begin{equation}
\mathbf{Z}^{\langle 12\times12\rangle}=\left[{\mathbf{Y}^{\langle 16\times16\rangle}}^{-1}\right]_{\langle 12\times12\rangle}
\end{equation}
where the operator $[\,]_{\langle 12\times12\rangle}$ denotes taking
the submatrix. The $\mathbf Z^{\langle 6\times 6\rangle}=\mathbf Z$
matrix of the outer six-port is obtained by
Perhaps not the best notation; do you have a better idea? BTW, quite funny that both you and my boss are aficionados of differential forms ;-)
Wish you wedge and hodge,
Michael

Michael Spivak Joined: 10 Oct 2005 Posts: 52
Posted: Wed Mar 29, 2006 3:39 pm Post subject: Re: spacing of mathrm
zedler wrote:
Excerpt from a paper (Deadline tomorrow Mar 30, Hawai time ;-))
Code:
\begin{document}\let\mathbf\mbf
...
Next, the impedance matrix of the outer 12-port is obtained by
inverting $\mathbf{Y}^{\langle 16\times16\rangle}$ and taking the
upper left $\langle 12\times12\rangle$ submatrix
\begin{equation}
\mathbf{Z}^{\langle 12\times12\rangle}=\left[{\mathbf{Y}^{\langle 16\times16\rangle}}^{-1}\right]_{\langle 12\times12\rangle}
\end{equation}
where the operator $[\,]_{\langle 12\times12\rangle}$ denotes taking
the submatrix. The $\mathbf Z^{\langle 6\times 6\rangle}=\mathbf Z$
matrix of the outer six-port is obtained by
Perhaps not the best notation, do you have a better idea? BTW, quite funny that both you and my boss are aficionados of differential forms ;-)
Wish you wedge and hodge,
Michael
Not really, but I would probably have used something like UL_{\langle 12\times12\rangle}(...) with U and L roman (or perhaps bold). And I probably would actually have used something like UL_{[12]}, with the idea that for square matrices [12] would mean \langle12\times12\rangle.
|
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say: This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.' ('The "path" comes into being only because we observe it.') Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others. They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate: COVFEFE is 7 characters, and the probability of a 7-character string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$, so I guess you would have to type on the order of $26^7 \approx 8\times 10^9$ characters before COVFEFE has a good chance of appearing.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing is favor "Communism" who distance themselves from, say the USSR and red China, and people who arguing in favor of "Capitalism" who distance themselves from, say the US and the Europe Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? Sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
|
The principles of simple linear regression lay the foundation for more sophisticated regression methods used in a wide range of challenging settings. In this section, we explore multiple regression, which introduces the possibility of more than one predictor, and logistic regression, a technique for predicting categorical outcomes with two possible categories.
Multiple regression extends simple two-variable regression to the case that still has one response but many predictors (denoted
[latex]x_1, x_2, x_3, \dots[/latex]). The method is motivated by scenarios where many variables may be simultaneously connected to an output.
We will consider eBay auctions of a video game called Mario Kart for the Nintendo Wii. The outcome variable of interest is the total price of an auction, which is the highest bid plus the shipping cost. We will try to determine how total price is related to each characteristic in an auction while simultaneously controlling for other variables. For instance, all other characteristics held constant, are longer auctions associated with higher or lower prices? And, on average, how much more do buyers tend to pay for additional Wii wheels (plastic steering wheels that attach to the Wii controller) in auctions? Multiple regression will help us answer these and other questions.
The data set mario_kart includes results from 141 auctions.[1] Four observations from this data set are shown in Table 1, and descriptions for each variable are shown in Table 2. Notice that the condition and stock photo variables are indicator variables. For instance, the cond_new variable takes value 1 if the game up for auction is new and 0 if it is used. Using indicator variables in place of category names allows for these variables to be directly used in regression. Multiple regression also allows for categorical variables with many levels, though we do not have any such variables in this analysis, and we save these details for a second or third course.
Table 1. Four observations from the mario_kart data set.

      price  cond_new  stock_photo  duration  wheels
1     51.55         1            1         3       1
2     37.04         0            1         7       1
...     ...       ...          ...       ...     ...
140   38.76         0            0         7       0
141   54.51         1            1         1       2
Table 2. Variables and their descriptions for the mario_kart data set.

price: final auction price plus shipping costs, in US dollars
cond_new: a coded two-level categorical variable, which takes value 1 when the game is new and 0 if the game is used
stock_photo: a coded two-level categorical variable, which takes value 1 if the primary photo used in the auction was a stock photo and 0 if the photo was unique to that auction
duration: the length of the auction, in days, taking values from 1 to 10
wheels: the number of Wii wheels included with the auction (a Wii wheel is a plastic racing wheel that holds the Wii controller and is an optional but helpful accessory for playing Mario Kart)

A single-variable model for the Mario Kart data
Let’s fit a linear regression model with the game’s condition as a predictor of auction price. The model may be written as
[latex]\widehat{\text{price}}=42.87 + 10.90\times\text{cond_new}[/latex]
Results of this model are shown in Table 3 and a scatterplot for price versus game condition.
Table 3. Summary of a linear model for predicting auction price based on game condition.

             Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)   42.8711      0.8140    52.67    0.0000
cond_new      10.8996      1.2583     8.66    0.0000

df = 139

Try It
Examine the scatterplot for the mario-kart data set. Does the linear model seem reasonable?
Solution:
Yes. Constant variability, nearly normal residuals, and linearity all appear reasonable.
Example
Interpret the coefficient for the game’s condition in the model. Is this coefficient significantly different from 0?
Solution:
Note that cond_new is a two-level categorical variable that takes value 1 when the game is new and value 0 when the game is used. So 10.90 means that the model predicts an extra $10.90 for those games that are new versus those that are used. Examining the regression output in Table 3, we can see that the p-value for cond_new is very close to zero, indicating there is strong evidence that the coefficient is different from zero when using this simple one-variable model.
Including and assessing many variables in a model
Sometimes there are underlying structures or relationships between predictor variables. For instance, new games sold on eBay tend to come with more Wii wheels, which may have led to higher prices for those auctions. We would like to fit a model that includes all potentially important variables simultaneously. This would help us evaluate the relationship between a predictor variable and the outcome while controlling for the potential influence of other variables. This is the strategy used in multiple regression. While we remain cautious about making any causal interpretations using multiple regression, such models are a common first step in providing evidence of a causal connection.
We want to construct a model that accounts for not only the game condition, as in the mario_kart example, but simultaneously accounts for three other variables: stock photo, duration, and wheels.
[latex]\begin{aligned}\widehat{\text{price}} &= \beta_0 + \beta_1\times\text{cond_new} + \beta_2\times\text{stock_photo} + \beta_3\times\text{duration} + \beta_4\times\text{wheels}\\ \hat{y} &= \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4\end{aligned}[/latex]
In this equation,
[latex]y[/latex] represents the total price, [latex]x_1[/latex] indicates whether the game is new, [latex]x_2[/latex] indicates whether a stock photo was used, [latex]x_3[/latex] is the duration of the auction, and [latex]x_4[/latex] is the number of Wii wheels included with the game. Just as with the single predictor case, a multiple regression model may be missing important components or it might not precisely represent the relationship between the outcome and the available explanatory variables. While no model is perfect, we wish to explore the possibility that this one may fit the data reasonably well.
We estimate the parameters [latex]{\beta}_{0},{\beta}_{1},\dots,{\beta}_{4}[/latex] in the same way as we did in the case of a single predictor. We select [latex]{b}_{0},{b}_{1},\dots,{b}_{4}[/latex] that minimize the sum of the squared residuals:
SSE = [latex]\displaystyle{{e}_{1}}^{2}+{{e}_{2}}^{2}+\dots+{{e}_{141}}^{2}={\sum}_{i = 1}^{141}{\left({{e}_{i}}\right)}^{2} = {\sum}_{i = 1}^{141}{\left({{y}_{i} - {\hat{y}}_{i}}\right)}^{2}[/latex]
Here there are 141 residuals, one for each observation. We typically use a computer to minimize the SSE and compute point estimates, as shown in the sample output in the table below. Using this output, we identify the point estimates [latex]b_i[/latex] of each [latex]\beta_i[/latex], just as we did in the one-predictor case.
Table 4. Output for the regression model where price is the outcome and cond_new, stock_photo, duration, and wheels are the predictors.

             Estimate  Std. Error  t value  Pr(>|t|)
(Intercept)   36.2110      1.5140    23.92    0.0000
cond_new       5.1306      1.0511     4.88    0.0000
stock_photo    1.0803      1.0568     1.02    0.3085
duration      -0.0268      0.1904    -0.14    0.8882
wheels         7.2852      0.5547    13.13    0.0000

df = 136

Multiple regression model
A multiple regression model is a linear model with many predictors. In general, we write the model as
[latex]\hat{y} ={\beta}_{0} +{\beta}_{1}{x}_{1}+{\beta}_{2}{x}_{2}+\dots+{\beta}_{k}{x}_{k}[/latex]
when there are [latex]k[/latex] predictors. We often estimate the [latex]\beta_i[/latex] parameters using a computer.

Try It
Write out the model
[latex]\begin{aligned}\widehat{\text{price}} &= \beta_0 + \beta_1\times\text{cond_new} + \beta_2\times\text{stock_photo} + \beta_3\times\text{duration} + \beta_4\times\text{wheels}\\ \hat{y} &= \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4\end{aligned}[/latex]
using the point estimates from the “Output for the regression model where price is the outcome and cond new, stock photo, duration, and wheels are the predictors” table.
How many predictors are there in this model?
Solution:
[latex]\hat{y}=36.21+5.13{x}_{1}+1.08{x}_{2}-0.03{x}_{3}+7.29{x}_{4}[/latex]; there are [latex]k = 4[/latex] predictor variables.

Try It
What does [latex]{\beta}_{4}[/latex], the coefficient of variable [latex]{x}_{4}[/latex] (Wii wheels), represent? What is the point estimate of [latex]{\beta}_{4}[/latex]?
Solution:
It is the average difference in auction price for each additional Wii wheel included, holding the other variables constant. The point estimate is [latex]b_4 = 7.29[/latex].

Try It
Compute the residual of the first observation from the “Four observations from the mario kart data set” table using the equation you identified in Try It 1.
Solution:
[latex]{e}_{i}= {y}_{i}-{\hat{y}_{i}}=51.55 - 49.62 = 1.93[/latex]
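For readers working along in software, here is a minimal Python sketch that reproduces this fitted value and residual from the Table 4 point estimates (small differences are due to rounding):

# Fitted value and residual for observation 1 of Table 1, using Table 4 estimates.
b = [36.2110, 5.1306, 1.0803, -0.0268, 7.2852]  # intercept, cond_new, stock_photo, duration, wheels
obs1 = dict(cond_new=1, stock_photo=1, duration=3, wheels=1, price=51.55)

y_hat = (b[0] + b[1] * obs1["cond_new"] + b[2] * obs1["stock_photo"]
         + b[3] * obs1["duration"] + b[4] * obs1["wheels"])
print(round(y_hat, 2))                  # ~49.6, matching the text up to rounding
print(round(obs1["price"] - y_hat, 2))  # ~1.9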
Example
A coefficient for cond_new of [latex]b_1 = 10.90[/latex] was calculated using simple linear regression with one variable, with a standard error of SE = 1.26. Why might there be a difference between that estimate and the one in the multiple regression setting?
Solution:
If we examined the data carefully, we would see that some predictors are correlated. For instance, when we estimated the connection of the outcome price and predictor cond new using simple linear regression, we were unable to control for other variables like the number of Wii wheels included in the auction. That model was biased by the confounding variable wheels. When we use both variables, this particular underlying and unintentional bias is reduced or eliminated (though bias from other confounding variables may still remain).
Example 2 describes a common issue in multiple regression: correlation among predictor variables. We say the two predictor variables are collinear (pronounced as co-linear) when they are correlated, and this collinearity complicates model estimation. While it is impossible to prevent collinearity from arising in observational data, experiments are usually designed to prevent predictors from being collinear.
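A small simulated sketch can make the collinearity point concrete. The data below are synthetic (not the Mario Kart auctions); omitting a predictor that is correlated with cond_new inflates the cond_new coefficient:

# Synthetic illustration: omitting a correlated predictor biases the other.
import numpy as np

rng = np.random.default_rng(0)
n = 500
cond_new = rng.integers(0, 2, n)
wheels = cond_new + rng.integers(0, 2, n)            # correlated with cond_new
price = 40 + 5 * cond_new + 7 * wheels + rng.normal(0, 2, n)

X_full = np.column_stack([np.ones(n), cond_new, wheels])
X_one = np.column_stack([np.ones(n), cond_new])
print(np.linalg.lstsq(X_full, price, rcond=None)[0])  # ~[40, 5, 7]
print(np.linalg.lstsq(X_one, price, rcond=None)[0])   # cond_new estimate inflated (~12)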
Try It
The estimated value of the intercept is 36.21, and one might be tempted to make some interpretation of this coefficient, such as, it is the model’s predicted price when each of the variables take value zero: the game is used, the primary image is not a stock photo, the auction duration is zero days, and there are no wheels included. Is there any value gained by making this interpretation?
Solution:
Three of the variables (cond_new, stock_photo, and wheels) do take value 0, but the auction duration is always one or more days. If the auction is not up for any days, then no one can bid on it! That means the total auction price would always be zero for such an auction; the interpretation of the intercept in this setting is not insightful.
Adjusted [latex]R^2[/latex] as a better estimate of explained variance
We first used [latex]R^2[/latex] to determine the amount of variability in the response that was explained by the model:
[latex]\displaystyle{R}^2=1-\frac{\text{variability in residuals}}{\text{variability in the outcome}}=1-\frac{\text{Var}(e_i)}{\text{Var}(y_i)}[/latex]
where [latex]e_i[/latex] are the residuals and [latex]y_i[/latex] the outcomes. This equation remains valid in the multiple regression framework, but a small enhancement can often be even more informative.
This strategy for estimating [latex]R^2[/latex] is acceptable when there is just a single variable. However, it becomes less helpful when there are many variables: the regular [latex]R^2[/latex] is then a biased estimate of the amount of variability explained by the model. To get a better estimate, we use the adjusted [latex]R^2[/latex].

Adjusted [latex]R^2[/latex] as a tool for model assessment
The adjusted [latex]R^2[/latex] is computed as
[latex]\displaystyle{R}^2_{adj}=1-\frac{\frac{\text{Var}(e_i)}{n-k-1}}{\frac{\text{Var}(y_i)}{(n-1)}}=1-\frac{\text{Var}(e_i)}{\text{Var}(y_i)}\times\frac{n-1}{n-k-1}[/latex]
where [latex]n[/latex] is the number of cases used to fit the model and [latex]k[/latex] is the number of predictor variables in the model.
Because [latex]k[/latex] is never negative, the adjusted [latex]R^2[/latex] will be smaller (often just a little smaller) than the unadjusted [latex]R^2[/latex]. The reasoning behind the adjusted [latex]R^2[/latex] lies in the degrees of freedom associated with each variance. [2]
Try It
There were [latex]n = 141[/latex] auctions in the mario_kart data set and [latex]k = 4[/latex] predictor variables in the model. Use [latex]n[/latex], [latex]k[/latex], and the variances from Try It 6 to calculate [latex]R^2_{adj}[/latex] for the Mario Kart model.
Solution:
[latex]\displaystyle{R}^2_{adj}=1-\frac{23.34}{83.06}\times\frac{141-1}{141-4-1}=0.711[/latex]
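A minimal Python sketch of this computation, assuming the variances quoted above, Var(e_i) = 23.34 and Var(y_i) = 83.06:

# Adjusted R^2 from residual and outcome variances.
def adjusted_r2(var_e, var_y, n, k):
    return 1 - (var_e / var_y) * (n - 1) / (n - k - 1)

print(round(adjusted_r2(23.34, 83.06, n=141, k=4), 3))  # 0.711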
Try It
Suppose you added another predictor to the model, but the variance of the errors Var([latex]e_i[/latex]) didn't go down. What would happen to [latex]R^2[/latex]? What would happen to the adjusted [latex]R^2[/latex]?
Solution:
The unadjusted [latex]R^2[/latex] would stay the same and the adjusted [latex]R^2[/latex] would go down.
Adjusted [latex]R^2[/latex] could have been used earlier. However, when there is only [latex]k = 1[/latex] predictor, adjusted [latex]R^2[/latex] is very close to regular [latex]R^2[/latex], so this nuance isn't typically important when considering only one predictor.

[1] Diez DM, Barr CD, Cetinkaya-Rundel M. 2015. openintro: OpenIntro data sets and supplement functions. github.com/OpenIntroOrg/openintro-r-package.
[2] In multiple regression, the degrees of freedom associated with the variance of the estimate of the residuals is [latex]n-k-1[/latex], not [latex]n-1[/latex]. For instance, if we were to make predictions for new data using our current model, we would find that the unadjusted [latex]R^2[/latex] is an overly optimistic estimate of the reduction in variance in the response, and using the degrees of freedom in the adjusted [latex]R^2[/latex] formula helps correct this bias.
|
Let $K$ be a convex body and let $\| \cdot \|_{K}$ be the corresponding Minkowski functional $$\| x \|_{K} = \inf\{\lambda > 0 : x \in \lambda K \}$$
Let us consider the following map $f: K \rightarrow \partial \mathring K$ such that $f(x) = \nabla \| x \|_{K}$ Here $\partial \mathring K$ stands for the polar set, i.e. $$\partial \mathring K = \{ y \in \partial K : \sup_{x \in \partial K}{\langle x, y \rangle} \leq 1 \}$$
It is pointed out that $f$ pretends to be a Gauss map $v_{K}: \partial K \rightarrow S^{n-1}$ that maps the outer unit normal to the boundary to the unit sphere. Are there any easy ways to recover it geometrically?
It looks as if there is a direct relationship between the fact that the subgradients of convex functions are precisely the outer normal vectors of supporting hyperplanes of sublevel sets and the statement above, but I can't see any fast way to figure it out.
|
There are several options for integrating your R workspace with LaTeX. One of these is the R package tikzDevice, which allows you to export images created in R as TikZ code in a .tex file, for immediate use in a LaTeX document via the line \include{diagrams}.
A simpler way, the one we all start out with, is to export an image from R as a .pdf, then include it using the line \includegraphics{diagrams.pdf}. This is a pretty easy and straightforward workflow – so, why would I want to use tikzDevice?
There are several advantages to converting your images into TikZ code directly from R:

- TikZ diagrams consist of vectors coded directly into your LaTeX document: there's no loss of image resolution.
- The labels on TikZ diagrams match the font of your LaTeX document.
- Wonderful LaTeX equations can be effortlessly used as labels in your diagrams.
- You can harness the power of the loop in R to create a single .tex file containing many images.
- You can harness the power of the loop in R to add \caption{} and \label{} lines to all your images for immediate reference within LaTeX.
You can include all these features and output via one line in LaTeX:
\include{diagrams}.
A Simple Example
That being said, let's export a TikZ scatterplot using the tikzDevice package. We will use data posted on Dr. Walter Enders' web site.
# gdata helps read .xls files
require(gdata)
df = read.xls("http://cba.ua.edu/assets/docs/wenders/arch.xls", sheet = 1)

# tikzDevice will export the plots as a .tex file
require(tikzDevice)

# choose a name and path for the .tex file
# folder should be the same as where your latex document resides
tikz( '/Users/kevingoulding/latex_documents/thesis/plot_with_line.tex' )

plot(df, xlab = "$\\alpha_t + \\hat{\\beta}X_t$",
         ylab = "$Y_t$",
         main = "$Y_t = \\alpha_t + \\hat{\\beta}X_t$")
abline(h = mean(df[,2]), col = "red", lwd = 2)

dev.off()    # must turn device off to complete .tex file
To include this diagram in your LaTeX document, simply add the line \include{plot_with_line} and compile. Don't forget to include \usepackage{tikz} in the preamble. If you zoom in, you can see that we've labeled the plot and axes using LaTeX math language (amsmath).
A few things to be careful with as you try to code LaTeX equations from within R:

- All backslashes need to be doubled: \ becomes \\.
- All equations still need to be bordered by $ on each side.
To be continued…
|
A global bifurcation theorem for a positone multiparameter problem and its application
1. Fundamental General Education Center, National Chin-Yi University of Technology, Taichung 411, Taiwan
2. Center for General Education, National Formosa University, Yunlin 632, Taiwan
3. Department of Mathematics, National Tsing Hua University, Hsinchu 300, Taiwan
We study the global bifurcation of positive solutions of the positone multiparameter problem
$\left\{ \begin{align} &{{u}^{\prime \prime }}(x)+\lambda {{f}_{\varepsilon }}(u)=0\text{,}\ \ -1<x<1\text{,} \\ &u(-1)=u(1)=0\text{,} \\ \end{align} \right.$
where $\varepsilon >0$ is a bifurcation parameter and $f_{\varepsilon }(u)$ is the nonlinearity. We prove the existence of a critical value $\tilde{\varepsilon}>0$ such that the shape of the bifurcation curve of positive solutions on the $(λ ,||u||_{∞ })$-plane changes qualitatively between the two regimes $0<\varepsilon <\tilde{\varepsilon}$ and $\varepsilon ≥ \tilde{\varepsilon}$. As an application we study $f_{\varepsilon}(u)=-\varepsilon u^{p}+bu^{2}+cu+d$ with $p≥ 3$ and coefficients $\varepsilon ,b,d>0$, $c≥ 0$. Our results generalize those in Hung and Wang (Trans. Amer. Math. Soc. 365 (2013) 1933-1956).
Keywords: Global bifurcation, multiparameter problem, S-shaped bifurcation curve, exact multiplicity, positive solution.
Mathematics Subject Classification: Primary: 34B18; Secondary: 74G35.
Citation: Kuo-Chih Hung, Shao-Yuan Huang, Shin-Hwa Wang. A global bifurcation theorem for a positone multiparameter problem and its application. Discrete & Continuous Dynamical Systems - A, 2017, 37 (10) : 5127-5149. doi: 10.3934/dcds.2017222
|
Difference between revisions of "Remarkable"
Revision as of 09:39, 10 October 2019
Remarkable cardinals were introduced by Schindler in [1] to provide the precise consistency strength of the statement that the theory of $L(\mathbb R)$ cannot be changed by proper forcing.
Definitions
A cardinal $\kappa$ is remarkable if for each regular $\lambda>\kappa$, there exists a countable transitive $M$ and an elementary embedding $e:M\rightarrow H_\lambda$ with $\kappa\in \text{ran}(e)$ and also a countable transitive $N$ and an elementary embedding $\theta:M\to N$ such that:
the critical point of $\theta$ is $e^{-1}(\kappa)$, $\text{Ord}^M$ is a regular cardinal in $N$, $M=H^N_{\text{Ord}^M}$, $\theta(e^{-1}(\kappa))>\text{Ord}^M$.
Remarkable cardinals could be called virtually supercompact, because the following alternative definition is an exact analogue of the definition of supercompact cardinals by Magidor [Mag71]:
A cardinal $κ$ is remarkable iff for every $η > κ$, there is $α < κ$ such that in a set-forcing extension there is an elementary embedding $j : V_α → V_η$ with $j(\mathrm{crit}(j)) = κ$.[2]
Equivalently (theorem 2.4[3])
1. For every $η > κ$ and every $a ∈ V_η$, there is $α < κ$ such that in $V^{Coll(ω,<κ)}$ there is an elementary embedding $j : V_α → V_η$ with $j(crit(j)) = κ$ and $a ∈ range(j)$.
2. For every $η > κ$ in $C^{(1)}$ and every $a ∈ V_η$, there is $α < κ$ also in $C^{(1)}$ such that in $V^{Coll(ω,<κ)}$ there is an elementary embedding $j : V_α → V_η$ with $j(crit(j)) = κ$ and $a ∈ range(j)$.
3. There is a proper class of $η > κ$ such that for every $η$ in the class, there is $α < κ$ such that in $V^{Coll(ω,<κ)}$ there is an elementary embedding $j : V_α → V_η$ with $j(crit(j)) = κ$.
Note: the existence of any such elementary embedding in $V^{Coll(ω,<κ)}$ is equivalent to the existence of such elementary embedding in any forcing extension (see Elementary_embedding#Absoluteness).[3].
Results
Remarkable cardinals and the constructible universe:
* Remarkable cardinals are downward absolute to $L$. [1]
* If $0^\sharp$ exists, then every Silver indiscernible is remarkable in $L$. [1]
Relations with other large cardinals:
* Strong cardinals are remarkable. [1]
* A $2$-iterable cardinal implies the consistency of a remarkable cardinal: Every $2$-iterable cardinal is a limit of remarkable cardinals. [4]
* Remarkable cardinals imply the consistency of $1$-iterable cardinals: If there is a remarkable cardinal, then there is a countable transitive model of ZFC with a proper class of $1$-iterable cardinals. [4]
* Remarkable cardinals are totally indescribable. [1]
* Remarkable cardinals are totally ineffable. [1]
* Virtually extendible cardinals are remarkable limits of remarkable cardinals. [2]
* If $κ$ is virtually measurable, then either $κ$ is remarkable in $L$ or $L_κ \models \text{“there is a proper class of virtually measurables”}$. [5]
* Remarkable cardinals are strategic $ω$-Ramsey limits of $ω$-Ramsey cardinals. [5]
* Remarkable cardinals are $Σ_2$-reflecting. [6]
Relation to various set-theoretic principles:
Equiconsistency with the weak Proper Forcing Axiom: [3]
* If there is a remarkable cardinal, then $\text{wPFA}$ holds in a forcing extension by a proper poset.
* If $\text{wPFA}$ holds, then $ω_2^V$ is remarkable in $L$.
It is relatively consistent that ZFC holds together with the generic Vopěnka scheme, yet $Ord$ is not definably Mahlo and not even $∆_2$-Mahlo. In such a model, there can be no $Σ_2$-reflecting cardinals and therefore also no remarkable cardinals. [7]

Weakly remarkable cardinals
(this section from [6])
A cardinal $κ$ is weakly remarkable iff for every $η > κ$, there is $α$ such that in a set-forcing extension there is an elementary embedding $j : V_α → V_η$ with $j(\mathrm{crit}(j)) = κ$. (the condition $α < κ$ is dropped)
A cardinal is remarkable iff it is weakly remarkable and $Σ_2$-reflecting.
The existence of non-remarkable weakly remarkable cardinals is equiconsistent to the existence of an $ω$-Erdős cardinal (equivalent assuming $V=L$; Baumgartner definition of $ω$-Erdős cardinals):
* Every $ω$-Erdős cardinal is a limit of non-remarkable weakly remarkable cardinals.
* If $κ$ is a non-remarkable weakly remarkable cardinal, then some ordinal greater than $κ$ is an $ω$-Erdős cardinal in $L$.

$n$-remarkable cardinals
$1$-remarkability is equivalent to remarkability. A cardinal is virtually $C^{(n)}$-extendible iff it is $n+1$-remarkable (virtually extendible cardinals are virtually $C^{(1)}$-extendible). A cardinal is called completely remarkable iff it is $n$-remarkable for all $n > 0$. Other definitions and properties are in Extendible#Virtually extendible cardinals. [3]
References
1. Schindler, Ralf-Dieter. Proper forcing and remarkable cardinals. Bull. Symbolic Logic 6(2):176-184, 2000.
2. Gitman, Victoria and Schindler, Ralf. Virtual large cardinals.
3. Bagaria, Joan, Gitman, Victoria and Schindler, Ralf. Generic Vopěnka's Principle, remarkable cardinals, and the weak Proper Forcing Axiom. Arch. Math. Logic 56(1-2):1-20, 2017.
4. Gitman, Victoria and Welch, Philip. Ramsey-like cardinals II. J. Symbolic Logic 76(2):541-560, 2011.
5. Nielsen, Dan Saattrup and Welch, Philip. Games and Ramsey-like cardinals. 2018.
6. Wilson, Trevor M. Weakly remarkable cardinals, Erdős cardinals, and the generic Vopěnka principle. 2018.
7. Gitman, Victoria and Hamkins, Joel David. A model of the generic Vopěnka principle in which the ordinals are not Mahlo. 2018.
|
Assuming we have a convergent nozzle, I've read that the maximum thrust is achieved just at the moment the nozzle exit (the minimum area) is choked, i.e., the nozzle is adapted in the sense that the pressure at the exit area equals the ambient pressure. How can I demonstrate this with formulas?
Let's look at two cases, choked exhaust and complete expansion:
1. Choked exhaust
With a convergent exhaust pipe, the jet engine thrust reaches a maximum at the speed of sound of the exhaust gas stream.
The velocity of the gas flow increases if it was subsonic at the entrance of the pipe. In a convergent nozzle the maximum gas exhaust velocity is M = 1, the speed of sound at the temperature of the hot exhaust gas. With a choked exhaust, at M = 1 in the exhaust outlet, the static pressure is higher than ambient pressure.
The exhaust area needs to be reduced until the gas exit velocity is M = 1, which at, for instance, 800 °C is 657 m/s. The pressure $p_e$ at the exhaust outlet will then be:
$$ p_e =\frac {\dot{m}\cdot R \cdot T_e}{V_e \cdot A_e }$$
which is greater than the ambient pressure $p_0$.
The net thrust $F$ of a pure jet engine is $$ F = \dot{m} \cdot (V_e - V_0) + A_e \cdot (p_e - p_0) $$
$R$ is the gas constant. Parameters you need to know:
outlet mass flow from the turbine $\dot{m}$ in kg/s
gas outlet temperature $T_e$ in K
speed of sound at $T_e$ in m/s, which for a choked exhaust is equal to $V_e$
outlet area $A_e$ in $m^2$
airspeed $V_0$ in $m/s$ and ambient pressure $p_0$ in $N/m^2$
If we use the following example of an aircraft flying M 0.85 at 30,000 ft, choked converging exhaust, exit area 0.1 $m^2$, mass flow 70 kg/s, exhaust temperature 1073 K, we get:
F = 70 * (657 - 258) + 0.1 * (328,106 - 30,100)
= 27,962 N from kinetic energy + 29,801 N from pressure difference, about the same amount.
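As a quick numerical cross-check of the choked case, here is a small Python sketch. The gas constant R = 287 J/(kg·K) and ratio of specific heats γ = 1.4 are assumptions inferred from the numbers quoted above, as is the ambient speed of sound of roughly 303 m/s at 30,000 ft:

import math

# Assumed gas properties (inferred from the numbers quoted above)
gamma, R = 1.4, 287.0        # ratio of specific heats; gas constant, J/(kg K)

# Flight and engine conditions at 30,000 ft
m_dot = 70.0                 # turbine outlet mass flow, kg/s
T_e = 1073.0                 # exhaust gas temperature, K (800 deg C)
A_e = 0.1                    # exhaust exit area, m^2
p_0 = 30100.0                # ambient pressure, N/m^2
V_0 = 0.85 * 303.0           # flight speed, m/s (M 0.85)

# Choked exit: exhaust velocity equals the speed of sound at T_e
V_e = math.sqrt(gamma * R * T_e)       # ~657 m/s
p_e = m_dot * R * T_e / (V_e * A_e)    # ~328,000 N/m^2, greater than p_0

F_momentum = m_dot * (V_e - V_0)       # ~28 kN from kinetic energy
F_pressure = A_e * (p_e - p_0)         # ~30 kN from pressure difference
print(round(F_momentum), round(F_pressure), round(F_momentum + F_pressure))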
2. Complete expansion
$p_e$ is now equal to $p_0$. This condition occurs if the total pressure at the turbine exit $p_{Tt} \leq \epsilon_{kr} \cdot p_0$, with $\epsilon_{kr}$ for a hot exhaust gas being around 1.95.
Analogous to the choked exhaust case: $$ A_e =\frac {\dot{m}\cdot R \cdot T_e}{V_e \cdot p_e }$$
For the same conditions at 30,000 ft follows: $A_e$ = 1.09 $m^2$
and F = 70 * (657 - 258) = 27,962 N
The net thrust in this case is a lot lower because the turbine exit pressure is lower than in the choked case, and therefore the propulsive power of the jet engine is lower. Usually, in the case of a turbojet the turbine exhaust pressure $p_{Tt}$ is a lot higher than $\epsilon_{kr} \cdot p_0$, which will lead to the choked exhaust case above.
Turbofans with a high bypass ratio have a low enough $p_{Tt}$ to allow complete expansion, most of the gas generator power being used for the bypass air compression.
|
For every $ N \in \mathbb Z$ there exists an integer $n$ such that $ \sqrt N \in \mathbb Q(\zeta_n)$.
I am struggling with where to start on this question; please suggest a few hints.
This follows from a very general result, called the Kronecker-Weber Theorem, which says that
every finite abelian extension of $\mathbb Q$ is contained in a cyclotomic extension. The proof is rather involved, either using class field theory or deriving it from the corresponding theorem for local fields. The special case of quadratic extensions, however, can be proved directly.
Recall that $\mathbb Q(\zeta_n,\zeta_m) = \mathbb Q(\zeta_{\operatorname{lcm}(m,n)})$. So if $N = ab$ and we know that $\sqrt{a}$ and $\sqrt{b}$ are contained in a cyclotomic extension, then the same is true for $N$. Therefore, we can assume that $N$ is a prime $p$ or (since $\sqrt{-1} \in \mathbb Q(\zeta_4)$) the negative of a prime, $N=-p$. Thus, it suffices to show:
$\sqrt{2} \in \mathbb Q(\zeta_8)$ (and then also $\sqrt{-2}$, since $i \in \mathbb Q(\zeta_4)$); $\sqrt{p} \in \mathbb Q(\zeta_p)$ for primes $p \equiv 1 \pmod 4$; and $\sqrt{-p} \in \mathbb Q(\zeta_p)$ for primes $p \equiv 3 \pmod 4$. The first part follows from $\zeta_8 + \zeta_8^{-1} = \sqrt{2}$. The second and third part can be done by looking at the Gauss sum $\sum_{a =1}^{p-1} \left(\frac{a}{p}\right) \zeta_p^a$.
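As a pointer for that computation: the classical evaluation of the square of this Gauss sum, $g = \sum_{a=1}^{p-1} \left(\frac{a}{p}\right) \zeta_p^a$, is
$$g^2 = \left(\frac{-1}{p}\right) p = (-1)^{(p-1)/2}\, p,$$
so $g$ itself is a square root of $p$ or of $-p$ inside $\mathbb Q(\zeta_p)$, according to whether $p \equiv 1$ or $3 \pmod 4$.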
|
First consider the single torus, represented as the square with opposite edges identified; call the edges $a$ and $b$. $U$ is the torus with a single point removed; this deformation retracts to the boundary of the square, which is a wedge of two circles $S^1 \vee S^1$, so it has fundamental group $<a, b>$, the free group on two generators. Now consider a circle around the removed point; this represents a generator of the fundamental group of $U\cap V$, since $U\cap V$ is in this representation just the area between two circles around the removed point. This circle represents the element $aba^{-1}b^{-1}\in\pi_1(U)$. Similarly for $V$, the generator gets mapped to $cdc^{-1}d^{-1}$. Hence the fundamental group of the double torus is $<a, b, c, d \mid aba^{-1}b^{-1}(cdc^{-1}d^{-1})^{-1} >$.
Edit: Filling in some details. First, if you haven't seen it already, here's the picture that I am using for the argument. If you haven't seen this before, I strongly suggest that you look up this construction of the torus before moving on. Note that the paths denoted $A$ and $B$ in the picture are in fact loops.
Cut a point (or a small circle) out in the middle of that picture. Consider again a circle around the removed point in the torus as above. This circle generates the fundamental group of $U\cap V$. We wish to figure out where the generator gets mapped under the inclusion map $U\cap V \to U$ or more precisely under the induced map $\pi_1(U\cap V) \to \pi_1(U)$.
To do this, we take a representative, i.e. the circle, and compose it with the inclusion map. Since the inclusion maps every point to itself, we still have the same circle. Now recall that the torus minus a point retracts to the border of the square. Under this retraction map, the circle gets mapped to the loop around the border of the square. This loop runs along all four edges, but remember that the edges are identified. In fact they are identified in such a way that when we move along an edge for the second time, we pass it in the opposite direction. So our loop can be written as the composition of loops $ABA^{-1}B^{-1}$. This means that there is a similar equation for elements of the fundamental group. Technically, at this point you would have to go back from the square to the torus missing a point, where a similar equation holds, because the retraction induces an isomorphism of fundamental groups.
|
Source transformation lets you change a voltage source into a current source, or the other way around. It is a way to simplify a circuit. The method is based on Thévenin’s theorem and Norton’s theorem.
Two simple circuits have special names,
The
Thévenin form is a voltage source in series with a resistor. The Norton form is a current source in parallel with a resistor.
It is possible to convert between Thévenin and Norton forms.
Written by Willy McAllister.
Where we’re headed
To convert Thévenin to Norton: set $\text I_\text N = \text V_\text T / \text R_\text T$.
To convert Norton to Thévenin: set $\text V_\text T = \text I_\text N \, \text R_\text N$.
The Thévenin and Norton resistors have the same value, $\text R_\text T = \text R_\text N$.
Thévenin and Norton forms are
equivalent because they have the same $i$-$v$ behavior from the viewpoint of the output port.
We’ll draw a lot of $i$-$v$ graphs to visualize what’s going on. The idea sinks in better if you do most of the work. Please follow along with pencil and paper.
We use source transformation in the proof of Thévenin’s and Norton’s theorem.
$i$-$v$ graphs for V, I, and R
Let’s first review the $i$-$v$ graphs of sources and resistors by themselves. Prepare an $i$-$v$ graph and carefully sketch each of these as separate lines,
voltage source, $v = 5\,\text V$ current source, $i = 6\,\text{mA}$ resistor, $\text R = 2\,\text k\Omega$
The resistor line has a tilt of $1/\text R = 1/2\,\text k\Omega$. It passes through the origin.
The voltage source line is a vertical line passing through $v = 5\,\text V$. The current source line is a horizontal line passing through $i = 6\,\text{mA}$.
A resistor appears as a slanted line through the origin, $i = \dfrac{v}{\text R}$.
The slope (rise over run) of a resistor line is $\dfrac{1}{\text R}$.
A voltage source plots as a vertical line. It’s the same voltage for any current.
A current source plots as a horizontal line. It’s the same current for any voltage. Voltage and current source lines do not pass through the origin.
Thévenin form
Now make it more interesting: Find the $i$-$v$ behavior of this resistor and voltage source circuit.
Derive the $i$-$v$ graph for this circuit.
Hint: Start by deriving a symbolic expression for $i$ in terms of $v,\text R_\text T,\text V_\text T$.
Try to find something with the form $i(v) = f(v,\text R_\text T,\text V_\text T)$. Use what you know about the two components and Ohm’s Law.
$i = f(v,\text R_\text T,\text V_\text T) = $ _____________
Plot your function for $\text V_\text T = 5\,\text V$ and $\text R_\text T = 2\,\text k\Omega$.
Now that you’ve had a try, it’s my turn. I will derive an $i$-$v$ expression by traditional circuit analysis, and then again with some EE cleverness.
Traditional circuit analysis
Write KVL around the loop; start in the lower left,
$\text V_\text T - i\,\text R_\text T - v = 0$
Solve for $i$ in terms of $v,\text R_\text T$, and $\text V_\text T$,
$i = \dfrac{\text V_\text T - v}{\text R_\text T}$
Rearrange a little,
$i = -\dfrac{v}{\text R_\text T} + \dfrac{\text V_\text T}{\text R_\text T}$
Do you recognize this as the equation of a line? It’s what we should expect since it is the combination of lines from a voltage source and resistor, $\text V_{\text T}$ and $\text R_{\text T}$.
With the given values,
$i = -\dfrac{v}{2\,\text k\Omega} + \dfrac{5\,\text V}{2\,\text k\Omega} = -\dfrac{v}{2\,\text k\Omega} + 2.5\,\text{mA}$
And it plots like this,
The line is tilted like a resistor’s $i$-$v$ line, with a negative slope because of the way we defined the direction of the $i$ arrow, but it does not pass through the origin like a resistor would.
The voltage source makes the line shift to the right. It crosses the voltage axis at $v = \text V_\text T$.
EE cleverness
It is reasonable to expect the $i$-$v$ curve to be a straight line, since it’s made from the sum of two lines. If you know two points you can create the equation of a line. Can we get the circuit to tell us two points?
Two easy points are where the line crosses the voltage axis and where it crosses the current axis. For this we need some equipment: a voltmeter, an ammeter, and a short length of wire.
Where does the $i$-$v$ line cross the voltage axis?
The line crosses the voltage axis when $i = 0$. How might we force $i$ to be $0$?
We could connect nothing across the port to create an open circuit,
With $i = 0$, measure the voltage with a voltmeter (or do it in your head).
$v_{oc} = \text V_{\text T} = 5\,\text V$
$v_{oc}$ stands for “open circuit voltage”.
The open circuit voltage is the same as the voltage source, $\text V_\text T$.
The line crosses the voltage axis at $v_{oc} = \text V_{\text T}$.
Where does the $i$-$v$ line cross the current axis?
The line crosses the current axis when $v = 0$. How might we force $v$ to be $0$?
We could connect a wire across the port to short it out,
With $v = 0$, measure the current in the shorting wire. Insert an ammeter into the wire (or do it in your head).
$i_{sc} = \dfrac{\text V_{\text T}}{\text R_{\text T}} = \dfrac{5\,\text V}{2000\,\Omega} = 2.5\,\text{mA}$
$i_{sc}$ stands for “short circuit current.”
The line crosses the current axis at $i_{sc}$.
Caution: DO NOT put a short across a real circuit unless you already know it can survive the abuse.
Create an $i$-$v$ equation based on the two points.
You can see the plot right away. Just mark the two points and draw a line. The points are $(v_{oc},0)$ and $(0,i_{sc})$.
Now find the equation of the line. Given two points the equation of the line is,
$(y - y_1) = m\,(x - x_1)\qquad m = \dfrac{(y_1 - y_2)}{(x_1 - x_2)}$
The two points we know are,
$x_1,y_1 = (v_{oc},0)\quad$ and $\quad x_2,y_2 = (0,i_{sc})$
$y - 0 = \dfrac{(0 - i_{sc})}{(v_{oc} - 0)}(x - v_{oc})$
$y = \dfrac{0 - (2.5\,\text{mA})}{5 - 0}(x - 5)$
$y = -\dfrac{2.5\,\text{mA}}{5}(x - 5)$
$y = -\dfrac{2.5\,\text{mA}}{5}\,x + \dfrac{2.5\,\text{mA}}{5}\cdot 5$
$y = -\dfrac{x}{2\,\text k\Omega} + 2.5\,\text{mA}\qquad$ (same as the traditional method)
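If you would rather let the computer draw it, here is a short matplotlib sketch (my own illustration, not from the article) that plots this Thévenin $i$-$v$ line and marks the two axis crossings:

import numpy as np
import matplotlib.pyplot as plt

V_T, R_T = 5.0, 2000.0              # Thevenin values used above

v = np.linspace(0, 6, 100)          # port voltage, V
i = -v / R_T + V_T / R_T            # port current, A

plt.plot(v, i * 1000)                            # show current in mA
plt.scatter([V_T, 0], [0, V_T / R_T * 1000])     # (v_oc, 0) and (0, i_sc)
plt.xlabel('v (V)')
plt.ylabel('i (mA)')
plt.grid(True)
plt.show()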
What have we learned about the Thévenin form? The Thévenin form plots as a line in $i$-$v$ space. The tilt of the line is controlled by $\text R_\text T$. The line crosses the voltage axis at $\text V_\text T$. You can position the line anywhere you want in $i$-$v$ space by your choice of component values.
The open-circuit/short-circuit technique is pretty handy. We will use it again.
Norton form
Now let’s look at the Norton form,
Work out a symbolic expression for port current $i$ in terms of $v,\text R_{\text N},\text I_{\text N}$.
$i = f(v,\text R_{\text N},\text I_{\text N}) = $ _____________
Then plot your function with these values, $\text I_{\text N} = 2\,\text{mA}$ and $\text R_{\text N} = 500\,\Omega$.
Now that you’ve had a chance to plot the function, I will derive the Norton $i$-$v$ expression by traditional circuit analysis, and then with EE cleverness like we did above.
Traditional circuit analysis
Write Kirchhoff’s Current Law for the top node; add up currents flowing
into the node,
$+\text I_{\text N} - \dfrac{v}{\text R_{\text N}} - i = 0$
Solve for $i$ in terms of $v,\text R_{\text N}$, and $\text I_{\text N}$,
$i = -\dfrac{v}{\text R_{\text N}} + \text I_{\text N}$
This is the equation of a line, which shouldn’t be a surprise. The $i$-$v$ graphs for a current source and a resistor are lines, so it makes sense to get a line when we add lines. The y-intercept is $\text I_{\text N}$ and the slope is $-1/\text R_{\text N}$.
Notice the resemblance of this equation to the one we derived for the Thévenin form.
Hmm, pretty similar. Hold that thought.
Here’s the plot with the given values,
$i = -\dfrac{v}{500\,\Omega} + 2\,\text{mA}$
More EE cleverness
Let’s identify two points with the same open-circuit/short-circuit thing we did above. Please give this a try before you peek,
Find two points where the $i$-$v$ line of this circuit crosses the $v$ axis and $i$ axis.
Put open and short circuits across the port and measure with your mental multimeter.
$(v_{oc}, 0) = ($ ________, $0)$
$(0, i_{sc}) = (0$, ________ $)$
The line crosses the $v$ axis when $i = 0$. To force $i = 0$ leave the port unconnected (open circuit) so no current can flow in or out of the port,
With the port open, all of $\text I_{\text N}$ has to flow through $\text R_{\text N}$. The open-circuit voltage is,
$v_{oc} = \text I_{\text N}\,\text R_{\text N}$
$v_{oc} = 2\,\text{mA} \cdot 500 \,\Omega = 1\,\text V$
The $i$-$v$ line crosses the $v$ axis at $1\,\text V$. That’s our first point.
The line crosses the $i$ axis when $v = 0$. To force $v = 0$ connect a wire to short across the port.
With the wire shorting out the port there is $0$ voltage across the resistor. All of $\text I_{\text N}$ flows through the short and none through the resistor. What is the short-circuit current? By simple inspection,
$i_{sc} = \text I_{\text N} = 2\,\text{mA}$
The $i$-$v$ line crosses the $i$ axis at $2\,\text{mA}$. This is our second point.
Use these two points to construct an equation of the line. It should match what we did with the KCL analysis just above. Here is the plot,
What have we learned about the Norton form? The Norton form behaves just like the Thévenin form. It plots as a line in $i$-$v$ space. You can position the line anywhere you want by your choice of component values.
Source transformation challenge 1
Thévenin and Norton forms both generate tilted lines on the $i$-$v$ plot.
1. Make the two circuits produce the same line.
Here are equations for the Thévenin and Norton forms,
$i = -\dfrac{v}{\text R_\text T} + \dfrac{\text V_\text T}{\text R_\text T}$
$i = -\dfrac{v}{\text R_{\text N}} + \text I_{\text N}$
Find relationships between the key parameters, $\text R_\text T$, $\text R_\text N$, $\text V_\text T$, and $\text I_\text N$ to make the two equations the same.
$\text R_\text N = $ ________
$\text I_\text N \,= $ ________
$\text V_\text T = $ ________
Two lines are the same if they have the same slope and the same y-intercept. Look at the two equations and match the things that need to match.
The slopes match if $\text R_\text N = \text R_\text T$. The resistors are the same.
The y-intercepts match if $\text I_\text N = \text V_\text T / \text R_\text T$.
This is the same as saying $\text V_\text T = \text I_\text N\,\text R_\text N$.
If you are given one form you instantly change it into the other with these relationships. Both circuits will have identical $i$-$v$ characteristics.
2. Use the component values from the Thévenin example above to create an equivalent Norton circuit.
$\text V_{\text T} = 5\,\text V$, $\text R_{\text T} = 2\,\text k\Omega$
$\text I_{\text N} =$ ________
$\text R_{\text N} =$ ________
$\text R_\text N = \text R_\text T = 2000\,\Omega$
$\text I_{\text N} = \dfrac{\text V_{\text T}}{\text R_{\text T}} = \dfrac{5}{2000} = 2.5\,\text{mA}$
3. Use the component values from the Norton example above to create an equivalent Thévenin circuit.
$\text I_{\text N} = 2\,\text{mA}$, $\text R_{\text N} = 500\,\Omega$
$\text V_{\text T} =$ ________
$\text R_{\text T} =$ ________
$\text R_\text T = \text R_\text N = 500\,\Omega$
$\text V_{\text T} = \text I_{\text N} \, \text R_{\text N} = 0.002 \cdot 500 = 1\,\text V$
Notice how the conversion process resembles Ohm’s Law.
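The two conversions are small enough to capture in code. Here is a minimal Python sketch (the function names are mine, not from the article):

def thevenin_to_norton(V_T, R_T):
    """Return (I_N, R_N) giving the same i-v line as the Thevenin form."""
    return V_T / R_T, R_T

def norton_to_thevenin(I_N, R_N):
    """Return (V_T, R_T) giving the same i-v line as the Norton form."""
    return I_N * R_N, R_N

print(thevenin_to_norton(5.0, 2000.0))    # (0.0025, 2000.0) -> 2.5 mA, 2 kOhm
print(norton_to_thevenin(0.002, 500.0))   # (1.0, 500.0)     -> 1 V, 500 Ohm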
If you “look into” these circuits from the port you can’t tell them apart. (You “look” with a voltmeter or ammeter.) They have identical behavior for any $v$ or any $i$. This means they are equivalent and therefore interchangeable. We will take advantage of this in the next article.
Source transformation challenge 2
Use this simulation model to help you with this design challenge (open the link in another tab). Double-click on a component to change its value. At the appropriate step, click
DC in the top menu bar to find the voltage and current.
1. Design your own Thévenin form. Pick any values for $\text V_\text T$ and $\text R_\text T$.
2. Design the Norton equivalent to your Thévenin circuit,
3. Determine the open-circuit voltage and short-circuit current of both forms,
$v_{oc} =$ _________ $\quad i_{sc} =$ _________
4. Write the $i$-$v$ equation and plot it on an $i$-$v$ graph. It’s the same equation for both forms,
$i = $ __________________
(Adjust the scales so your particular graph looks nice.)
5. Connect a load resistor $\text R_\text L$ to both forms. Use the same resistance for both load resistors.
6. Compute the voltage and current for both load resistors,
$v_\text{RLTh} = $ _________ $\quad i_\text{RLTh} = $ _________
$v_\text{RLN} = $ _________ $\quad i_\text{RLN} = $ _________
7. Plot this point on your $i$-$v$ graph. It’s not the same point as $v_{oc}$ or $i_{sc}$, but it should fall somewhere on the $i$-$v$ line.
If you used the simulation model notice the simulator didn’t tell you how to do the problem. You had to figure out the Norton equivalent on your own. The simulator did provide some help because you could confirm if your answer was right. Simulators are great tools, but they don’t think for you.
Summary
Thévenin’s circuit is a voltage source in series with a resistor.
Norton’s circuit is a current source in parallel with a resistor.
Thévenin and Norton forms have identical $i$-$v$ behavior if you set,
$\text R_{\text{Thévenin}} = \text R_{\text{Norton}}$
$\text V_{\text{Thévenin}} = \text I_{\text{Norton}}\, \text R$
When circuits produce the same $i$-$v$ curve from the viewpoint of a selected port, we say they are
equivalent (from the perspective of the port).
To derive the equation of the $i$-$v$ line for a complicated circuit we found two points on the line. We left the port open and measured the open circuit voltage, $v_{oc}$. Then we placed a short across the port and measured the short circuit current, $i_{sc}$. From the equation we derive the Thévenin and Norton component values.
Caution: Shorting out real electronic equipment to find $i_{sc}$ is a recipe for smoke. Be super careful before you do this.
In the next article we apply source transformation to a problem.
What about other circuits with one source and one resistor?
After so much talk about Thévenin and Norton forms, it’s an obvious question to ask about the other two possibilities. Consider,
Voltage source in parallel with a resistor Current source in series with a resistor
Give this a little thought for a second or two. What do the resistors do?
The resistors don’t do anything in either circuit.
If you put a resistor in parallel with a voltage source it has no effect on the voltage, and it doesn’t influence $i$. All you do is pull some extra current out of the ideal voltage source, a current we can’t observe from the port. Who cares if there’s a resistor in parallel with the ideal voltage source?
The same goes for the resistor in series with a current source. The source pushes its current through the resistor no matter what the resistor value is. The resistor just forces the ideal current source to create some extra voltage to drive the required current. We can’t observe the voltage across the current source from the port.
These two variations don’t make sense.
|
Hierarchical Binomial Model: Rat Tumor Example
In [1]:
%matplotlib inline
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import pymc3.distributions.transforms as tr
import theano.tensor as tt
from scipy.special import gammaln

plt.style.use('seaborn-darkgrid')
print('Running on PyMC3 v{}'.format(pm.__version__))
Running on PyMC3 v3.6
This short tutorial demonstrates how to use pymc3 to do inference for the rat tumour example found in chapter 5 of Bayesian Data Analysis 3rd Edition. Readers should already be familiar with the pymc3 api.
Suppose we are interested in the probability that a lab rat develops endometrial stromal polyps. We have data from 71 previously performed trials and would like to use this data to perform inference.
The authors of BDA3 choose to model this problem hierarchically. Let \(y_i\) be the number of lab rats which develop endometrial stromal polyps out of a possible \(n_i\). We model the number of rodents which develop endometrial stromal polyps as binomial, \(y_i \sim \text{Bin}(\theta_i, n_i)\),
allowing the probability of developing an endometrial stromal polyp (i.e. \(\theta_i\)) to be drawn from some population distribution. For analytical tractability, we assume that \(\theta_i\) has a Beta distribution, \(\theta_i \sim \text{Beta}(\alpha, \beta)\).
We are free to specify a prior distribution for \(\alpha, \beta\). We choose a weakly informative prior distribution to reflect our ignorance about the true values of \(\alpha, \beta\). The authors of BDA3 choose the joint hyperprior for \(\alpha, \beta\) to be \(p(\alpha, \beta) \propto (\alpha + \beta)^{-5/2}\).
For more information, please see Bayesian Data Analysis 3rd Edition, pg. 110.
A Directly Computed Solution
Our joint posterior distribution is \(p(\alpha, \beta, \theta \mid y) \propto p(\alpha, \beta)\, p(\theta \mid \alpha, \beta)\, p(y \mid \theta)\),
which can be rewritten in such a way so as to obtain the marginal posterior distribution for \(\alpha\) and \(\beta\), namely \(p(\alpha, \beta \mid y) \propto p(\alpha, \beta) \prod_{i=1}^{N} \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} \frac{\Gamma(\alpha + y_i)\,\Gamma(\beta + n_i - y_i)}{\Gamma(\alpha + \beta + n_i)}\).
See BDA3 pg. 110 for more information on deriving the marginal posterior distribution. With a little determination, we can plot the marginal posterior and estimate the means of \(\alpha\) and \(\beta\) without having to resort to MCMC. We will see, however, that this requires considerable effort.
The authors of BDA3 choose to plot the surface under the parameterization \((\log(\alpha/\beta), \log(\alpha+\beta))\). We do so as well. Through the remainder of the example let \(x = \log(\alpha/\beta)\) and \(z = \log(\alpha+\beta)\).
In [2]:
# rat data (BDA3, p. 102)
y = np.array([
    0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  1,  1,  1,  1,
    1,  1,  1,  1,  2,  2,  2,  2,  2,  2,  2,  2,  2,  1,  5,  2,  5,  3,
    2,  7,  7,  3,  3,  2,  9, 10,  4,  4,  4,  4,  4,  4,  4, 10,  4,  4,
    4,  5, 11, 12,  5,  5,  6,  5,  6,  6,  6,  6, 16, 15, 15,  9,  4])
n = np.array([
    20, 20, 20, 20, 20, 20, 20, 19, 19, 19, 19, 18, 18, 17, 20, 20, 20, 20,
    19, 19, 18, 18, 25, 24, 23, 20, 20, 20, 20, 20, 20, 10, 49, 19, 46, 27,
    17, 49, 47, 20, 20, 13, 48, 50, 20, 20, 20, 20, 20, 20, 20, 48, 19, 19,
    19, 22, 46, 49, 20, 20, 23, 19, 22, 20, 20, 20, 52, 46, 47, 24, 14])
N = len(n)
In [3]:
# Compute on log scale because products turn to sums
def log_likelihood(alpha, beta, y, n):
    LL = 0
    # Summing over data
    for Y, N in zip(y, n):
        LL += gammaln(alpha+beta) - gammaln(alpha) - gammaln(beta) + \
              gammaln(alpha+Y) + gammaln(beta+N-Y) - gammaln(alpha+beta+N)
    return LL

def log_prior(A, B):
    return -5/2*np.log(A+B)

def trans_to_beta(x, y):
    return np.exp(y)/(np.exp(x)+1)

def trans_to_alpha(x, y):
    return np.exp(x)*trans_to_beta(x, y)

# Create space for the parameterization in which we wish to plot
X, Z = np.meshgrid(np.arange(-2.3, -1.3, 0.01), np.arange(1, 5, 0.01))
param_space = np.c_[X.ravel(), Z.ravel()]
df = pd.DataFrame(param_space, columns=['X', 'Z'])

# Transform the space back to alpha beta to compute the log-posterior
df['alpha'] = trans_to_alpha(df.X, df.Z)
df['beta'] = trans_to_beta(df.X, df.Z)
df['log_posterior'] = log_prior(df.alpha, df.beta) + log_likelihood(df.alpha, df.beta, y, n)
df['log_jacobian'] = np.log(df.alpha) + np.log(df.beta)
df['transformed'] = df.log_posterior + df.log_jacobian
df['exp_trans'] = np.exp(df.transformed - df.transformed.max())

# This will ensure the density is normalized
df['normed_exp_trans'] = df.exp_trans/df.exp_trans.sum()
surface = df.set_index(['X', 'Z']).exp_trans.unstack().values.T
In [4]:
fig, ax = plt.subplots(figsize=(8, 8))
ax.contourf(X, Z, surface)
ax.set_xlabel(r'$\log(\alpha/\beta)$', fontsize=16)
ax.set_ylabel(r'$\log(\alpha+\beta)$', fontsize=16)
ix_z, ix_x = np.unravel_index(np.argmax(surface, axis=None), surface.shape)
ax.scatter([X[0, ix_x]], [Z[ix_z, 0]], color='red')
text = r"$({a},{b})$".format(a=np.round(X[0, ix_x], 2), b=np.round(Z[ix_z, 0], 2))
ax.annotate(text, xy=(X[0, ix_x], Z[ix_z, 0]), xytext=(-1.6, 3.5),
            ha='center', fontsize=16, color='black',
            arrowprops={'facecolor': 'white'});
The plot shows that the posterior is roughly symmetric about the mode (-1.79, 2.74). This corresponds to \(\alpha = 2.21\) and \(\beta = 13.27\). We can compute the marginal means as the authors of BDA3 do, using
In [5]:
# Estimated mean of alpha
(df.alpha*df.normed_exp_trans).sum().round(3)
Out[5]:
2.403
In [6]:
# Estimated mean of beta
(df.beta*df.normed_exp_trans).sum().round(3)
Out[6]:
14.319
Computing the Posterior using PyMC3
Computing the marginal posterior directly is a lot of work, and is not always possible for sufficiently complex models.
On the other hand, creating hierarchical models in pymc3 is simple. We can use the samples obtained from the posterior to estimate the means of \(\alpha\) and \(\beta\).
In [7]:
def logp_ab(value):
    '''prior density'''
    return tt.log(tt.pow(tt.sum(value), -5/2))

with pm.Model() as model:
    # Uninformative prior for alpha and beta
    ab = pm.HalfFlat('ab', shape=2, testval=np.asarray([1., 1.]))
    pm.Potential('p(a, b)', logp_ab(ab))
    X = pm.Deterministic('X', tt.log(ab[0]/ab[1]))
    Z = pm.Deterministic('Z', tt.log(tt.sum(ab)))
    theta = pm.Beta('theta', alpha=ab[0], beta=ab[1], shape=N)
    p = pm.Binomial('y', p=theta, observed=y, n=n)
    trace = pm.sample(1000, tune=2000, target_accept=0.95)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [theta, ab]
Sampling 2 chains: 100%|██████████| 6000/6000 [00:22<00:00, 267.81draws/s]
The number of effective samples is smaller than 25% for some parameters.
In [8]:
# Check the trace. Looks good!
pm.traceplot(trace, var_names=['ab', 'X', 'Z']);
We can plot a kernel density estimate for \(x\) and \(z\). It looks rather similar to our contour plot made from the analytic marginal posterior density. That’s a good sign, and required far less effort.
In [9]:
sns.kdeplot(trace['X'], trace['Z'], shade=True, cmap='viridis');
From here, we could use the trace to compute the mean of the distribution.
In [12]:
pm.plot_posterior(trace, var_names=['ab']);
In [13]:
# estimate the means from the samples
trace['ab'].mean(axis=0)
Out[13]:
array([ 2.43457925, 14.49150632])
Conclusion
Analytically calculating statistics for posterior distributions is difficult if not impossible for some models. Pymc3 provides an easy way of drawing samples from your model’s posterior with only a few lines of code. Here, we used pymc3 to obtain estimates of the posterior mean for the rat tumor example in chapter 5 of BDA3. The estimates obtained from pymc3 are encouragingly close to the estimates obtained from the analytical posterior density.
References
Gelman, Andrew, et al. Bayesian Data Analysis. CRC Press, 2013.
Authors: Demetri Pananos, Junpeng Lao
|
101 people flip a fair coin. Everyone who tosses heads is on one team and everyone who tosses tails is on the other team. The team with more people on it wins. What are the odds that, given you are one of the 101 players, you will win? (101 players and coins eliminates ties, but I am also interested in the case where there are 100 players, where you can win/lose/tie.)
If the 100 other players divide into two teams of size 50, your chances of winning are 100%. Otherwise, the winning team does not depend on your flip, hence your chances of winning are 50%. Thus your overall probability of winning is 50% + 50%·P(50-50 divide), that is, $$\frac12\left(1+\frac1{2^{100}}{100\choose50}\right)\approx53.98\%.$$
The expected number of winners is:
$$2\sum\limits_{n=51}^{101}\frac{n\cdot\binom{101}{n}}{2^{101}}\approx54.5193$$
So the probability of being a winner is:
$$\frac{54.5193}{101}\approx0.5397$$
You and 100 other people flip coins. You always win if exactly 50 of the others got heads. In that case, you are the tie-breaker and always end up on the winning side. If the split was not even, then you win if your flip matched the larger half ($50\%$).
Thus, your odds of winning are: $${1\over 2} \times \left(1 + {{100 \choose 50}\over 2^{100}}\right) \approx 54\% $$
For a total of $100$ people, you win exactly $50\%$ of the time. The other side never splits evenly, and you win if your coin flip matches the larger half.
It's a tie if the split (including yours) is even: $${{100 \choose 50}\over 2^{100}} \approx 8\%$$
And you lose about $42\%$ of the time when there are $100$ people.
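These figures are easy to confirm numerically; here is a minimal Python sketch using exact binomial coefficients:

from scipy.special import comb

# 101 players: you tie-break when the other 100 split 50-50;
# otherwise you win iff your flip lands you on the larger side.
p_5050 = comb(100, 50, exact=True) / 2**100
print(0.5 * (1 + p_5050))                               # ~0.5398

# Cross-check via the expected number of winners among 101 players
ew = 2 * sum(k * comb(101, k, exact=True) for k in range(51, 102)) / 2**101
print(ew / 101)                                         # ~0.5398 again

# 100 players: win, tie, and lose probabilities
p_tie = comb(100, 50, exact=True) / 2**100              # ~0.08
print(0.5, p_tie, 0.5 - p_tie)                          # lose ~0.42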
|
LaTeX is a great tool for printable professional-looking documents, but it can also be used to generate PDF files with excellent navigation tools. This article describes how to create hyperlinks in your document, and how to set up LaTeX documents to be viewed with a PDF reader.
Contents
Let's start with a minimal working example: by simply importing the hyperref package, all cross-referenced elements become hyperlinked.
\documentclass{book}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{hyperref}

\begin{document}
\frontmatter
\tableofcontents
...
\end{document}
The lines in the table of contents become links to the corresponding pages in the document by simply adding the line \usepackage{hyperref} to the preamble of the document. One must be careful when importing hyperref: usually, it has to be the last package to be imported, but there might be some exceptions to this rule.
The default formatting for links can be changed so the information in your documents is more clearly presented. Below you can see an example:
\documentclass{book}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{hyperref}
\hypersetup{
    colorlinks=true,
    linkcolor=blue,
    filecolor=magenta,
    urlcolor=cyan,
}
\urlstyle{same}

\begin{document}

\tableofcontents

\chapter{First Chapter}
This will be an empty chapter and I will put some text here

\begin{equation} \label{eq:1}
\sum_{i=0}^{\infty} a_i x^i
\end{equation}

The equation \ref{eq:1} shows a sum that is divergent. This formula will later be used in the page \pageref{second}.

For further references see \href{http://www.sharelatex.com}{Something Linky} or go to the next url: \url{http://www.sharelatex.com} or open the next file \href{run:./file.txt}{File.txt}

It's also possible to link directly any word or \hyperlink{thesentence}{any sentence} in your document.

\end{document}
This is a complete example; it will be fully explained in the rest of the article. Below is a description of the commands related to the colour and styling of the links:
\hypersetup{...}: sets configuration options for the links in the document.
colorlinks=true: colours the text of the links instead of drawing boxes around them.
linkcolor=blue: internal links (sections, equations, pages) are shown in blue.
filecolor=magenta: links that open local files are shown in magenta.
urlcolor=cyan: links to URLs are shown in cyan.
\urlstyle{same}: URLs are typeset in the same font as the surrounding text.
Links to a web address or email can be added to a LaTeX file using the \url command to display the actual link, or \href to use a hidden link and show a word/sentence instead.
For further references see \href{http://www.sharelatex.com}{Something Linky} or go to the next url: \url{http://www.sharelatex.com}
There are two commands in the example that generate a link in the final document:
\href{http://www.sharelatex.com}{Something Linky}: prints the text "Something Linky" linked to the given URL.
\url{http://www.sharelatex.com}: prints the URL itself as a clickable link.
The commands \href and \url presented in the previous section can be used to open local files:
For further references see \href{http://www.sharelatex.com}{Something Linky} or go to the next url: \url{http://www.sharelatex.com} or open the next file \href{run:./file.txt}{File.txt}
The command \href{run:./file.txt}{File.txt} prints the text "File.txt", which links to a local file called "file.txt" located in the current working directory. Notice the text "run:" before the path to the file.
The file path follows the conventions of UNIX systems, using . to refer the current directory and .. for the previous directory.
The command \url{} can also be used, with the same syntax described for the path, but it's reported to have some problems.
It was mentioned before that all cross-referenced elements become links once hyperref is imported; thus we can use \label anywhere in the document and refer to those labels later to create links. This is not the only way: it's also possible to insert hyperlinks manually.
It's also possible to link directly any word or \hyperlink{thesentence}{any sentence} in your document. If you read this text, you will get no information. Really? Is there no information? For instance \hypertarget{thesentence}{this sentence}.
There are two commands to create user-defined links:
\hypertarget{thesentence}{this sentence}: marks "this sentence" as a link target named "thesentence".
\hyperlink{thesentence}{any sentence}: turns "any sentence" into a link that jumps to that target.
Links in a document are created having in mind a document that will be read in PDF format. The PDF file can be further personalized to add additional information and change the way the PDF viewer displays it. Below an example:
\hypersetup{ colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan, pdftitle={Sharelatex Example}, bookmarks=true, pdfpagemode=FullScreen, }
The command \hypersetup, described in the section about styles and colours, also accepts extra parameters to set up the final PDF file:
pdftitle={Sharelatex Example}: sets the document title shown by the PDF viewer.
bookmarks=true: adds a bookmarks navigation panel, similar to the table of contents.
pdfpagemode=FullScreen: the document is opened in full-screen mode.
See the reference guide for a full list of options that can be passed to
\hypersetup.
Linking style options:
hyperindex (default: true). Makes the page numbers of index entries into hyperlinks.
linktocpage (default: false). Makes the page numbers instead of the text be the links in the table of contents.
breaklinks (default: false). Allows links to be broken into multiple lines.
colorlinks (default: false). Colours the text for links and anchors; these colours will appear in the printed version.
linkcolor (default: red). Colour for normal internal links.
anchorcolor (default: black). Colour for anchor (target) text.
citecolor (default: green). Colour for bibliographical citations.
filecolor (default: cyan). Colour for links that open local files.
urlcolor (default: magenta). Colour for linked URLs.
frenchlinks (default: false). Use small caps instead of colours for links.
PDF-specific options:
bookmarks (default: true). Acrobat bookmarks are written, similar to the table of contents.
bookmarksopen (default: false). Bookmarks are shown with all sub-trees expanded.
citebordercolor (default: 0 1 0). Colour of the box around citations, in RGB format.
filebordercolor (default: 0 .5 .5). Colour of the box around links to files, in RGB format.
linkbordercolor (default: 1 0 0). Colour of the box around normal links, in RGB format.
menubordercolor (default: 1 0 0). Colour of the box around menu links, in RGB format.
urlbordercolor (default: 0 1 1). Colour of the box around links to URLs, in RGB format.
pdfpagemode (default: empty). Determines how the file is opened. Possibilities are UseThumbs (Thumbnails), UseOutlines (Bookmarks) and FullScreen.
pdftitle. Sets the document title.
pdfauthor. Sets the document author.
pdfstartpage (default: 1). Determines on which page the PDF file is opened.
For more information see
|
Consider a column of fluid of length $L$, with initial density $\rho_0$ and initial velocity ($u_0 =0$) everywhere. Now at time $t=0$ gravity is switched on. No-slip boundary conditions are assumed at both end of the fluid column.
We know that after a while the column will attain a steady state, with the fluid everywhere at rest and the density an exponential function of the distance from either end.
Continuity equation is \begin{eqnarray} \frac{\partial\rho}{\partial t} + \frac{\partial(\rho u)}{\partial x} = 0 \end{eqnarray}
The Navier-Stokes equation for the fluid in one dimension is
\begin{eqnarray} \rho\left[\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} \right] &=& -\frac{\partial P}{\partial x} + f_{external} \nonumber \\ &=& -\frac{\partial\rho}{\partial x}c^2_s - \rho g \end{eqnarray} Here I assume shear forces are zero since the system is one dimensional.
In the steady state $u=0$ and $\frac{d\rho}{dt} = 0$ so we get, \begin{eqnarray} \frac{d\rho}{dx}c^2_s &=& - \rho g \\ \frac{d\rho}{\rho} &=& - dx \frac{g}{c^2_s} \\ \rho &=& \rho'\exp\left(-\frac{g}{c^2_s}x \right) \end{eqnarray}
where $\rho'$ is evaluated by mass conservation equation. \begin{eqnarray} \rho_{0} L = \int^{L}_0\rho'\exp\left(-\frac{g}{c^2_s}x \right)dx \end{eqnarray}
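For what it is worth, this normalization integral evaluates in closed form:
$$\rho_{0} L = \rho'\,\frac{c^2_s}{g}\left(1 - e^{-gL/c^2_s}\right) \qquad\Rightarrow\qquad \rho' = \frac{\rho_{0}\, g L}{c^2_s\left(1 - e^{-gL/c^2_s}\right)}$$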
Here I assume the hydrostatic pressure $P$ is proportional to the density $\rho$, i.e. $P = c^2_s\rho$. Is it possible to solve these equations (assuming they are correct) as a function of time? To start with, I tried to get the velocity ($u$) profile for times very close to the initial time. When the time is really small, $t \ll 1$, for the Navier-Stokes equation we assume spatial variation in density ($\rho$) and velocity ($u$) is yet to develop, so that we get \begin{eqnarray} \frac{du}{dt} &=& -g \\ u &=& -gt \hspace{0.5cm} t \ll 1 \end{eqnarray}
I am not sure if it is allowed to assume the initial spatial variations are small compared to the time variations in the system. Even if allowed, I am not able to go any further.
Also, I feel the solutions for density and velocity should depend upon the viscosity of the fluid, but viscosity appears nowhere in the formulation. Do I need to include shear forces?
|
What is $O_{Proj S_*}(n) = O_S(n)$? Whatever it is, you can describe it and thus work with it by knowing the following facts:
$O_S(n)$ is isomorphic to the structure sheaf on each open set of the cover $D(s)$, where $s \in S_n$, via some $\phi_s: O(n)|_{D(s)} \to O|_{D(s)}$. In fact the structure sheaf over $D(s)$ is just $S[s^{-1}]_0$, and $O_S(n)$ is just $S[s^{-1}]_n$. So $\phi_s$ can be taken to be multiplication by $s$, which is an isomorphism on this patch.
The transition function $\phi_s \circ \phi_t^{-1}$ is precisely multiplication by $s/t$. (This follows immediately from 1.)
Main point of this: To understand a line bundle, we just need to know an open cover on which it trivializes, and the transition functions on the intersections.
We know how to pull back a line bundle, also.
Explicitly: If $O_V \to L$ is a trivialization over $V$, then the pullback by $\phi^*$ of this map gives a trivialization of $O_{\phi^{-1}(V)} \cong \phi^* O_V \to \phi^* L$.
The map $\psi: O_{\phi^{-1}(V)} \cong \phi^* O_V$ is the restriction of a canonical globally defined sheaf isomorphism between $\phi^* O_S$ and $O_T$, which is locally just "multiplication" $A \otimes_A B \to B$ - you can probably replace it with an equality without loss, but I personally am confused by that level of sloppiness - at least at this point in my mathematical life. Anyway it is not too difficult to keep track of it.
So let's just apply our understanding of the pullback to $O_S(n)$ and see what happens.
1) $\phi^{-1}(D(s)) = D(\phi(s))$.
So far so good - your condition that $V(\phi(S_+)) = \emptyset$ implies that this gives an open cover of $Proj T$ on which we know that $\phi^* O(n)$ trivializes.
(It may not be the case that $S_n$ surjects onto $S_{dn}$. That is okay, we still get a cover from the $D(\phi(s))$, as $s \in S_n$. Assuming your condition, which is that $V(\phi(S_+)) = \emptyset$, for any homogeneous prime $P$ in $Proj(T)$ there is some $k$ and $s$ so that $\phi(s)^k \not\in P$. So this implies that $\phi(s) \not\in P$.)
2) Also, the trivialization $T_s: O_S|_{D(s)} \to O_S(n) |_{D(s)}$ becomes some $\phi^*(T_s) : \phi^* O_S \to \phi^*(O_S(n))$ over $D(\phi(s))$. Combining this with the isomorphism $\psi : O_T \to \phi^* O_S$, our trivialization over this chart is $\phi^*(T_s) \circ \psi|_{D(\phi(s))}$.
3) Claim: The transition functions become $\phi(s/t) = \phi(s) / \phi(t)$. Proof:
First we compute: $(\phi^*(T_t) \circ \psi)^{-1} \circ \phi^*(T_s) \circ \psi = \psi^{-1} \phi^* (T_s \circ T_t^{-1}) \psi = \psi^{-1} (\_ \times \phi(s/t)) \psi = \psi^{-1} (\_ \times \phi(s)/\phi(t)) \psi$. Then note: the isomorphism $\psi$ commutes with multiplication by elements of the structure sheaf (it is an isomorphism of $O_X$ algebras, in particular $O_X$ modules), so in the end we get that our transition function is just multiplication by $\phi(s) / \phi(t)$.
If you look back at the first paragraph, this is some cover and transition maps that describe $O_T(dn)$. So apparently the pullback is isomorphic as a line bundle to $O_T(dn)$.
I think your claim is correct - but maybe I am overlooking something subtle! This stuff is confusing!
|
We begin with Adam Tooze laying out the issues:
Adam Tooze: Italy: How Does the E.U. Think This Is Going to End?: "Over the past 10 years, Italy’s gross domestic product per capita has fallen... unique among large advanced economies...
...More than 32 percent of Italy’s young people are unemployed. The gloom, disappointment and frustration are undeniable. For the commission to declare that this is a time for austerity flies in the face of a reality that for many Italians is closer to a personal and national emergency....
The two parties that make up the current Italian government, the League and the Five Star Movement, were elected in March to address this crisis. The League is xenophobic; Five Star is erratic and zany. But the economic programs on which they campaigned are hardly outlandish.... The Italian government’s budget forecasts are optimistic. But others, including the Bank of Italy and the Peterson Institute of International Economics, warn that Italy is caught in a trap: Anxieties about debt sustainability mean that any stimulus has the perverse effect of driving up interest rates, squeezing bank lending and reducing growth...
What would have to be the case for a stimulus to have this perverse effect—to actually manage to not boost the economy but rather squeeze bank lending and reduce growth?
Back in the late 1990s Paul Krugman concluded that the workings of the macroeconomy had changed: that we had started to see The Return of Depression Economics https://books.google.com/books?isbn=039304839X. He was right. This meant that the economic analytical tools that had been forged in order to understand the Great Depression of the 1930s had become the right place to start any analysis of what was going on in the business cycle. And so it has proven to be for the past twenty years.
Therefore we start with John Hicks's 1937 IS equation, from his article "Mr. Keynes and the 'Classics': A Suggested Interpretation" https://tinyurl.com/20181208a-delong. The variable we place on the left-hand side is aggregate demand AD. The variable we place on the right-hand side is the long-term risky real interest rate r. In between are a large host of parameters drawn from the macroeconomy's behavioral relationships and from salient features of the macroeconomic environment and macroeconomic policy. We identify aggregate demand AD with national income and product Y, arguing that the inventory-adjustment mechanism will make the two equal at the macroeconomy's short-run sticky-price Keynesian equilibrium within a few quarters of a year.
Then we have not so much a model of the macroeconomy as a filing system for factors that we can and need to model, thus:
$ Y = AD = \mu(c_o + I_o + G) + \mu(x_fY^f + x_{\epsilon}{\epsilon}_o + x_{\epsilon}{\epsilon}_rr^f) - \mu(I_r + x_{\epsilon}{\epsilon}_r)r $
To simplify notation, we will typically use "$\Delta$" to stand for changes in economic quantities generated by shifts in the economic policy and in the economic environment, and we will drop terms that are zero.
For the problem of understanding Italy today, the pieces of this equation that matter are:
$ {\Delta}Y = {\Delta}AD = \mu{\Delta}G + {\mu}x_{\epsilon}{\Delta}{\epsilon}_o + {\mu}x_{\epsilon}{\epsilon}_r{\Delta}r^f - \mu(I_r + x_{\epsilon}{\epsilon}_r){\Delta}r $
The change in national income and product ${\Delta}Y$ equals the change in aggregate demand ${\Delta}AD$, which equals the sum of the four right-hand-side terms: the fiscal-policy term $\mu{\Delta}G$, the exchange-speculator confidence term ${\mu}x_{\epsilon}{\Delta}{\epsilon}_o$, the foreign interest-rate term ${\mu}x_{\epsilon}{\epsilon}_r{\Delta}r^f$, and the domestic interest-rate term $-\mu(I_r + x_{\epsilon}{\epsilon}_r){\Delta}r$.
Now we need an extra equation. For a country with a freely-floating exchange rate $\epsilon$:
$ {\Delta}{\epsilon} = {\Delta}{\epsilon}_o + {\epsilon}_r({\Delta}r^f - {\Delta}r) $
the change in the exchange rate ${\Delta}{\epsilon}$ is equal to the change in exchange speculator optimism or pessimism about the long-run soundness of the currency ${\Delta}{\epsilon}_o$, plus the sensitivity of the exchange rate to interest rates ${\epsilon}_r$ times the difference between the change in the foreign interest rate ${\Delta}r^f$ and the change in the domestic interest rate ${\Delta}r$. But Italy does not have a freely-floating exchange rate: Italy is in the eurozone. So
$ {\Delta}{\epsilon} = 0 $
therefore:
$ {\Delta}r = \frac{{\Delta}{\epsilon}_o}{{\epsilon}_r} + {\Delta}r^f $
And the rewritten relevant parts of the IS equation are:
$ {\Delta}Y = {\Delta}AD = \mu{\Delta}G + {\mu}x_{\epsilon}{\Delta}{\epsilon}_o + {\mu}x_{\epsilon}{\epsilon}_r{\Delta}r^f - \frac{\mu(I_r + x_{\epsilon}{\epsilon}_r){\Delta}{\epsilon}_o}{{\epsilon}_r} - \mu(I_r + x_{\epsilon}{\epsilon}_r){\Delta}r^f $
$ {\Delta}Y = \mu{\Delta}G - \frac{{\mu}I_r}{\epsilon_r}\Delta\epsilon_o -{\mu}I_r{\Delta}r^f $
Thus the change ${\Delta}Y$ in national income and product that follows a fiscal expansion with higher government purchases ${\Delta}G$ will be positive as long as:
$ {\Delta}G > \frac{I_r}{\epsilon_r}\Delta\epsilon_o + I_r{\Delta}r^f $
The shift to more expansionary fiscal policy will indeed boost demand, production, and employment unless this equation fails to hold.
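To make the condition concrete, here is a minimal numerical sketch (all parameter values are invented for illustration, not DeLong's numbers):

def delta_Y(mu, I_r, eps_r, dG, d_eps_o, d_rf):
    # dY = mu*dG - (mu*I_r/eps_r)*d_eps_o - mu*I_r*d_rf  (the rewritten IS pieces)
    return mu * dG - (mu * I_r / eps_r) * d_eps_o - mu * I_r * d_rf

# A stimulus of 1 with a modest confidence shock: 2*1 - (2*50/10)*0.5 - 0 = -3,
# i.e. the perceived-unsustainability reaction swamps the fiscal boost.
print(delta_Y(mu=2.0, I_r=50.0, eps_r=10.0, dG=1.0, d_eps_o=0.5, d_rf=0.0))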
What conclusions can we draw from this equation?
First, we conclude that taking on unsustainable debt—or rather debt perceived as unsustainable—could indeed fail to boost demand, production, and employment if the reaction ${\Delta}{\epsilon}_o$ to ${\Delta}G$ is too large. The natural thing, therefore, would be for the IMF and the European Union to step in with short-term support and guarantees to keep the market reaction ${\Delta}{\epsilon}_o$ small, coupled with a longer-term structural adjustment program to guarantee that debt repayment will in fact take place.
Second, that the European Union—which controls ${\Delta}r^f$—could assist by switching to an easier money-tighter fiscal policy mix itself and so creating a negative value for ${\Delta}r^f$.
Why would the European Union want to assist in these ways? Well, it wants a prosperous Italy, doesn't it? And it wants an Italy that stays in the eurozone, doesn't it? Why would the IMF want to assist in these ways? Well, that is its job, isn't it?
If the past ten years ought to have taught the Great and Good of Europe anything, it is that ensuring prosperity, growth, and high employment is job #1. Figuring out how to dot the financial i's and cross the financial t's is distinctly secondary. But, as Adam Tooze writes, instead the Great and Good of Europe seem to wish to "hold the line on debt and deficits" without offering anything "positive in exchange, such as a common European investment and growth strategy or a more cooperative approach to the refugee question". He calls this, with great understatement, "a high-risk and negative strategy".
I cannot see how anyone can disagree.
#highlighted #globalization #eurozone #monetarypolicy
Github: https://github.com/braddelong/NOTEBOOK-Macro-and-Macro-Policy/blob/master/What%20Are%20Italy's%20Options%3F%20(2018-12-08).ipynb
nbviewer: http://nbviewer.jupyter.org/github/braddelong/NOTEBOOK-Macro-and-Macro-Policy/blob/master/What%20Are%20Italy%27s%20Options%3F%20%282018-12-08%29.ipynb This File in html: https://www.bradford-delong.com/2018/12/why-doesnt-italy-have-better-options.html
|
We are doing an assignment for our Advanced Econometrics course for which we are trying to illustrate Gordin's Central Limit Theorem by simulation. We used an AR(1) process to show that if the conditions of the theorem are satisfied, its implications hold too. Now we are trying to find counter-examples, and we were wondering if you know of any stationary and ergodic process that does not satisfy Gordin's Conditions? Here is a statement of the theorem:
If a stationary (1) and ergodic (2) process, $y_t$, satisfies Gordin's condition(3-5), then, (a) $y_t$ has zero mean, (b) absolutely summable autocovariances $\gamma_j$ and (c) $\sqrt{n}\bar{y} \rightarrow_d N\left(0, \sum_{j = - \infty}^{\infty}\gamma_{j}\right)$.
More explicitly, if a random process $y_t$ satisfies the following conditions:
(1) The joint distribution of $y_t, y_{t+1}, ... y_{t+k}$ does not depend on t (stationarity)
(2) For any positive integer $k$ and any bounded functions $f$ and $g$ from $R^{k+1} \rightarrow R$, $\lim_{n \rightarrow \infty} E[f(y_{t}, \dots, y_{t+k})g(y_{t+n}, \dots, y_{t+k+n})] = E[f(y_{t}, \dots, y_{t+k})]\, E[g(y_{t}, \dots, y_{t+k})]$ (ergodicity)
(3) $E(y_t^2)$ is finite.
(4) $E(y_t|y_{t-j}, y_{t-j-1}, \dots) \rightarrow_{m.s.} 0 $ as $j \rightarrow \infty$
(5) $\sum_{j = 0}^{\infty} \left[E\left(r_{tj}^2\right)\right]^{1/2}$ is finite, where $r_{tj} = E\left(y_t|y_{t-j}, y_{t-j-1}, \dots\right) - E\left(y_t|y_{t-j-1}, y_{t-j-2}, \dots\right)$.
Then the following properties are true:
(a) $E(y_t) = 0$
(b) $\sum_{j = - \infty}^{\infty} | \gamma_j |$ is finite
(c) $\sqrt{n}\bar{y} \rightarrow_d N\left(0, \sum_{j = - \infty}^{\infty}\gamma_{j}\right)$
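(Not an answer to the counter-example question, but for reference, a minimal simulation sketch, in Python rather than the course's software, of the kind of AR(1) illustration described above. For a stationary AR(1) with unit-variance innovations the long-run variance $\sum_{j=-\infty}^{\infty}\gamma_j$ equals $1/(1-\phi)^2$, which the sample variance of $\sqrt{n}\,\bar{y}$ should approximate.)

import numpy as np

rng = np.random.default_rng(0)

def scaled_means(phi=0.5, n=2000, reps=3000):
    # sqrt(n) * sample mean of a stationary AR(1): y_t = phi*y_{t-1} + e_t, e_t ~ N(0,1)
    out = np.empty(reps)
    for r in range(reps):
        e = rng.standard_normal(n)
        y = np.empty(n)
        y[0] = e[0] / np.sqrt(1.0 - phi**2)   # draw y_0 from the stationary distribution
        for t in range(1, n):
            y[t] = phi * y[t - 1] + e[t]
        out[r] = np.sqrt(n) * y.mean()
    return out

vals = scaled_means()
print(vals.var(), 1.0 / (1.0 - 0.5)**2)       # both should be close to 4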
|
It is common to measure the peak absorbance of plasmonic nanoparticles in solution and then derive the following: molarity (moles/liter), number of particles per ml, and weight concentration (ug/ml). I wrote a small python function to extract these parameters. The following inputs are needed for the function:
d_nm : diameter of the particle in nanometers
od : peak OD measured from the absorbance plot
path_length_cm : cuvette length in centimeters
density_g_per_cm3 : for gold it is 19.28, for silver it is 10.49
molar_Extinction_perM_percm : molar extinction coefficient extracted from here. This can be extracted from measuring absorbance at different concentrations and then …
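A minimal sketch of the kind of function described (my own reconstruction, assuming the Beer-Lambert relation OD = epsilon * c * l and spherical particles; the function name, the example extinction coefficient, and the return values are my choices, not necessarily the author's):

import math

def nanoparticle_concentration(d_nm, od, path_length_cm, density_g_per_cm3, molar_extinction_perM_percm):
    # Beer-Lambert: OD = epsilon * c * l, so c is in mol/L
    molarity = od / (molar_extinction_perM_percm * path_length_cm)
    particles_per_ml = molarity * 6.022e23 / 1000.0           # Avogadro's number, L -> mL
    r_cm = (d_nm / 2.0) * 1e-7                                # nm -> cm
    mass_per_particle_g = density_g_per_cm3 * (4.0 / 3.0) * math.pi * r_cm**3
    ug_per_ml = particles_per_ml * mass_per_particle_g * 1e6  # g -> ug
    return molarity, particles_per_ml, ug_per_ml

# Example: 20 nm gold particles, OD 1.0, 1 cm cuvette, epsilon ~ 9.2e8 /M/cm (assumed value)
print(nanoparticle_concentration(20.0, 1.0, 1.0, 19.28, 9.2e8))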
Energy band diagrams are used to visualize electron and hole transport in solar cells and LEDs. I want to quickly draw them and came up with a python module. You can download the module at my github repository. Here's an example of how to use this code.
from Band_diagram import metal, semiconductor, plot
# Define the metals and semiconductors. Here wf is the work function of metals, cb is the conduction band minimum and vb is the valence band maximum with respect to the vacuum level
ITO = metal(wf = -5.2, name = 'ITO')
p_nio = semiconductor(cb = -1.85, vb = -5.49, name …
I want to tab autocomplete commands in my python interpreter within a bash terminal. This is very useful when I import a module and just want to browse through the methods and autocomplete. Found two ways to do it, thanks to stack overflow posts. Method 1 (as mentioned here): This is easy. Install pyreadline (pip install pyreadline) and use ipython. This is a snapshot of it working. Method 2 (as mentioned here): This is for the standard python interpreter in a bash terminal (such as in Ubuntu). This method requires the readline and rlcompleter modules. Install these modules if you do not have …
My wife and I leave the house after a good meal and after like 20 minutes in the car, I ask my wife… did we turn off the stove? My wife answers… you were the last one to cook. What to do now? Go back home and check it. Now, that sucks. To solve this common problem, I came up with a solution: use the internet of things (IOT) to monitor the stove status. So here is the final product, a URL I type in my browser and I can see the stove status. How to do IOT cheaply? Use an ESP8266 module. …
Coupled dipole approximation (CDA) is a numerical method to calculate the optical properties (scattering and absorption) of interacting dipoles. This method is used in the discrete dipole approximation method (as in the DDSCAT software), where a big particle (also known as the target) is broken into a lot of interacting dipoles arranged in a cubic lattice. CDA can also be used to calculate the optical properties (scattering and absorption) of random particle distributions (as in L. Zhao et al., J. Phys. Chem. B, 107, 30, 7343, 2003), assuming each particle to be small enough that it behaves like a dipole. I have implemented …
The real and imaginary parts of the dielectric permittivity of metals are important to simulate the optical properties of metal films and nanoparticles. Permittivity data is obtained experimentally by ellipsometry and is fitted with analytical models. The most common model for fitting experimental data is the Drude-Lorentz model shown below. $$\epsilon(\omega)=1-\frac{f_1\omega_p^2}{(\omega^2+i\Gamma_1\omega)}+\sum_{j=2}^{n}\frac{f_j\omega_p^2}{(\omega_{o,j}^2-\omega^2-i\Gamma_j\omega)}$$ The first term is the Drude part. It represents the response of electrons in the Fermi sea/conduction band when they see an external oscillating electric field (these transitions are called intraband transitions). The Drude term has the plasma frequency ($\omega_p$), oscillator strength ($f_1$) and damping term ($\Gamma_1$). The rest …
Sometimes you have to show positive, zero and negative numbers on a log scale. However, you cannot take the log of negative numbers or zero. But one can approximate it with a log modulus transform as stated here. In Python with numpy:
from numpy import sign, abs, log10
import matplotlib.pyplot as plt
# Data varies over several orders of magnitude and has positive, zero and negative numbers
x = [-10000, -1000, -100, -10, 0, 10, 100, 1000, 10000]
# log modulus transform
x_log_modulus_transform = sign(x)*(log10(abs(x)+1))
f, ax = plt.subplots(2, sharex=True)
ax[0].plot(x, 'o')
ax[0].margins(x=0.12, y=0.2)  # for better visualization of datapoints at the end of axis
ax[1].plot(x_log_modulus_transform, 'o')
ax[1].margins(x=0.12, y=0.2)  # for better visualization of …
I keep needing a python code to generate the dielectric functions of plasmonic materials such as Au, Ag, Pd, and Pt. I wanted the dielectric functions callable by other python codes such as TMM. So I wrote a python version of LD.m. LD.m is a matlab file written by Bora Ung that produces dielectric functions of metals for either the Lorentz or the Lorentz-Drude model. The dielectric functions are given as follows: … The first part of the function is the Drude part and the second part is the Lorentz part. The parameters for these models are taken from Rakic et al. …
Amazon provides high performance computing capabilities through their EC2 service. You can find more information here. They provide a 750 hr free instance with their free-tier program. If you want more resources, you can pay for it. See the pricing; the pricing seems very reasonable. I wanted to see how easy it was to install ddscat and run some example files. Amazon allows you to create an instance through their very easy-to-use web interface. I chose to install an ubuntu amazon machine image on the instance. While you are creating an instance you are allowed to create a key pair file and download …
Here is how I was implementing plasmonic materials in Meep 1.1 scheme code. Unlike Meep 1.1, Meep >= 1.2 changed the way materials are defined. Here I will describe how to change the material definition code from Meep 1.1 to Meep 1.2. Please note that one can still use the material definitions written for Meep < 1.2 with Meep >= 1.2, but not vice versa. Installation of Meep 1.2 on ubuntu: You can follow the instructions given in my previous post to compile Meep 1.2 from the source code, but the procedure is outdated and one can use the recently pre-compiled meep …
|
In this very first post of the Connect the Dots series, I set up the supervised learning problem from a function fitting perspective and discuss the objective of function fitting.
In a supervised learning problem, we have a bunch of input (also referred to as feature, independent variable, predictor) and the corresponding output (also referred to as label, target, response, dependent variable) as our observed and known training data. The goal is to fit a function (also referred to as “train a model”) and use it to predict output for unknown input.
We denote the input by \(X = (x_1, x_2, …, x_N)\), the output by \(y = (y_1, y_2, …, y_N) \), and the fitting function by \(f \). Each input \(x_i \in R^p \) has \(p\)-dimensions, and each output \(y_i\) can be either continuous values \(R\) in a regression problem, or discrete values such as \(\{0,1\}\) in binary classification. The training data \(D \) consists of pairs of input and output \(D = \{D_1, D_2, …, D_N\}, D_i = (x_i, y_i)\).
\(f \) takes \(X \) as input and calculates \(\hat y\) (pronounced as y hat) as the predicted output.
\(\hat y = f(X) \tag {1} \)
The difference between each predicted output and real output point is captured by the loss function \(l \):
\(l(\hat y_i, y_i) = l(f(x_i), y_i) \tag {2} \)
The average loss over all data points is usually called the loss function \(L \). Sometimes, people also use the term cost function to refer to it.
\(L(f(X), y) = \frac {\sum_{i=1}^N l(f(x_i), y_i)}{N} \tag {3} \)
The goal is to find a \(f \) which minimizes \(L\)
\(argmin_f L(f(X) , y) \tag {4} \)
Now the supervised learning problem is formulated as a function fitting problem. There are an infinite number of functions in the problem space, as many as your imagination goes, and even beyond that!
For example, we have 5 data points in our training data plotted as black dots. \(X\) is 1-D input and \(y\) is 1-D continuous output.
We have the freedom to draw whatever functions to fit the data. Let’s consider the following 6 choices of functions (red curves). As we can see, \(f_1 \) is a simple straight line, but we are not able to go through all points. \(f_2 \) concatenates several straight lines, and tries to go close to all points. \(f_3 \) goes through all the points. \(f_4 \) is a really wild one: it goes up and down with certain frequency and changing amplitude and also goes through all the points. \(f_5\) and \(f_6\) look similar, although \(f_6\) is smoother than \(f_5\).
The actual function I used to generate the data is \(f(x) = 60|e^{\frac {1}{x}} \times sin(x) | \), as shown by \(f_4 \).
Since our goal is to find a function \(f \) to minimize \(L\), if the function passes through all the data, the loss would be minimal (here it is 0). Can we then say \(f_3\) is better than \(f_1\)? How do we compare \(f_3 \) to \(f_4 \) given that they both have ideal minimal loss? Do we choose a simple function \(f_1 \) over other more complex functions? Do we choose a smooth function \(f_6\) over non-smooth function \(f_5\)?
More importantly, we would like to use the fitting function to predict unseen data. Say now we have new data \(X_{test} \) generated from the same population; the predicted \(\hat y_{test} \) is most likely not on the fitting function line except in \(f_4 \), because that is the function I used to generate both \(X_{train} \) and \(X_{test} \). Other fitting functions may work perfectly on the training data (\(f_3, f_4, f_5, f_6\)), but not so well on the testing data.
In the end, the one and only gold standard for a good function is that it is able to predict unseen data as accurately as possible, in other words, to minimize the error for unseen data.
Of course, what is “unseen” cannot be seen. So in theory, we cannot measure the performance of the function on “unseen” data. In practice, we split the whole seen data into training data and testing data, use the training data for function fitting, and use the testing data as the “unseen” data for function evaluation.
Here, we assume the real unseen data have the same characteristics as the seen data. This is not always the case: if our seen data is a biased sample from the whole population, or if the unseen data starts to deviate from the seen data as time goes by, the seen data may not well represent the real unseen data.
For simplicity, we assume the seen data is representative, and have the same distribution as the real unseen data. We redefine the goal of the problem as minimizing \(L\) for the “unseen” testing data (which in fact, is seen).
\(argmin_f L(f(X_{test}) , y_{test}| X_{train}, y_{train}) \tag {5} \)
The bad news is we cannot directly solve
Equation (5) because \(f\) is fitted using the training data, not the testing data. The good news is training and testing data are from the same distribution, since we split the original seen data into these two. The difference between training and testing data thus originates from the variance of the data itself.
Let’s define the Error as follows:
\(Error = L(f(X_{test}), y_{test}) - L_{best} \tag {6.1} \)
\(Error = (L(f(X_{train}), y_{train}) - L_{best}) + (L(f(X_{test}), y_{test}) - L(f(X_{train}), y_{train})) \tag {6.2}\)
Here, \(L_{best} \) denotes the loss in the best possible prediction function out there. It could be based on human performance. In a noise-free world, as in the first example, \(L_{best} = 0\). In practice, data is often noisy and \(L_{best} > 0\). We can treat \(L_{best} \) as a constant that is not affected by the data or the function. For simplicity, I omit \(L_{best}\).
The first element, called bias, represents how far the performance on the training data is from the best possible one. The function fitting process can minimize this directly.
\(L(f(X_{train}), y_{train}) \tag {7}\)
The second element, called variance, represents how far the performance on the testing data is from that on the training data.
\(L(f(X_{test}), y_{test}) - L(f(X_{train}), y_{train}) \tag {8} \)
Variance corresponds to the “generalization” of the fitting function. Our function may have very low bias, i.e. perform well on the training data, however, it may generalize badly on the testing data, thus high variance.
Variance is usually not directly solvable. In order to evaluate the performance of a function on “unseen” data, we usually use cross-validation to assess a fitting function after the training is done. In some cases, we may directly adjust and reduce variance by penalizing complex models. More complex models tend to better fit the training data and have lower bias; at the same time, the more the fitting function depends on the specific pattern in the training data of a complex model, the less well it generalizes. Therefore, we can add a penalty term to account for model complexity. Note that although the penalty term is not exactly the same as variance, it captures the intrinsic difference of function performance on training and testing data. Cross-validation represents out-of-sample evaluation of a function, and a penalty term (such as in the Akaike information criterion) can be viewed as in-sample evaluation of a function. I will discuss more about function evaluation in later posts.
As shown in the example below, as we increase the degree of the polynomial fitting, the training mean squared error decreases while the testing mean squared error increases. The actual function I used to generate the data is \(y = 5 \times x – 3 + 10 \times N(5,3) \) with a Gaussian error.
degree | training mean squared error | testing mean squared error
1      | 786.83                      | 867.83
2      | 785.29                      | 873.93
3      | 785.28                      | 874.16
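For what it's worth, here is a minimal sketch of this kind of comparison (my own data draw, so the numbers will not reproduce the table exactly):

import numpy as np

rng = np.random.default_rng(42)

def draw(n=50):
    # data in the spirit of y = 5*x - 3 + 10*N(5, 3)
    x = rng.uniform(0, 10, n)
    return x, 5*x - 3 + 10*rng.normal(5, 3, n)

x_train, y_train = draw()
x_test, y_test = draw()

for degree in (1, 2, 3):
    coefs = np.polyfit(x_train, y_train, degree)                   # least-squares polynomial fit
    mse_train = np.mean((np.polyval(coefs, x_train) - y_train)**2)
    mse_test = np.mean((np.polyval(coefs, x_test) - y_test)**2)
    print(degree, round(mse_train, 2), round(mse_test, 2))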
In practice, during function fitting, we could add a penalty function \(J\) to \(L\), giving the objective function shown in Equation (9). Note that the penalty function depends only on the function of choice, not on the data directly (although, as we will see later, the function of choice is still decided based on the data).
\(Obj(f, X_{train}, y_{train})= L(f(X_{train}), y_{train}) + J(f) \tag {9}\)
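As one concrete instance of Equation (9) (an assumed example, not necessarily the post's choice): a linear model with squared loss and a ridge-style penalty \(J(f) = \lambda \|\beta\|^2\):

import numpy as np

def objective(beta, X, y, lam):
    # Obj = average squared loss + penalty that grows with model complexity
    residuals = X @ beta - y
    return np.mean(residuals**2) + lam * np.sum(beta**2)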
The bias-variance trade off is ubiquitous in machine learning as whatever pattern we discover from the seen data may not be exactly the same in unseen data.
Take home message
Eventually, we want a generalizable fitting function that allows us to predict real unseen data accurately. We can do this by choosing the function that minimizes loss on “unseen” testing data.
\(argmin_f L(f(X_{test}) , y_{test}|X_{train}, y_{train}) \)
We use the training data to find the best possible \(f\) by optimizing the objective function which takes into consideration both loss and function complexity.
\(Obj(f, X_{train}, y_{train})= L(f(X_{train}), y_{train}) + J(f)\) Demo code can be found on my Github.
|
$$\lim_{(x,y)\to(0,0)}\frac{(2x+y)\sin(x+y)}{(x+y)\sin(2x+y)}$$
Take $x+y=z$ and $2x+y=w$; then $z$ and $w$ tend to zero as $x$ and $y$ tend to zero, and use the standard limit $\lim_{t\to 0}(\sin t) / t = 1$.
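Spelled out (a small elaboration of the answer above, for points near the origin with $z, w \neq 0$):
$$\frac{(2x+y)\sin(x+y)}{(x+y)\sin(2x+y)} = \frac{w\sin z}{z\sin w} = \frac{\sin z}{z}\cdot\frac{w}{\sin w} \longrightarrow 1\cdot 1 = 1.$$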
|
The Pseudo-Hyperbolic Metric and Lindelöf's Inequality (cont.)
Last time, we proved that the pseudo-hyperbolic metric on the unit disc $\Delta\subset \mathbb{C}$ given by $$d(z,a)=|\varphi_a(z)|=\bigg|\frac{z-a}{1-\bar a z}\bigg|$$ is indeed a metric. As an application, our next goal is to verify the following inequality due to Lindelöf.
Theorem (Lindelöf's Inequality). Every holomorphic function $f:\Delta\to\Delta$ satisfies $$\frac{|f(0)|-|z|}{1-|f(0)||z|}\leq |f(z)|\leq \frac{|f(0)|+|z|}{1+|f(0)||z|}\quad\text{for all $z\in\Delta$}.$$
Notice that if $f(0)=0$, then the above reduces to the same bound on $|f(z)|$ that we get from the Schwarz Lemma. So in effect, Lindelöf's inequality says, "If you don't know that $f(0)=0$, here's what you
do know." (Fun fact: In 1938, Lars Ahlfors proved* that the Schwarz Lemma is actually a statement about curvature of Riemannian metrics! For more on this, see, for instance, ch. 2 of Krantz's Geometric Function Theory.)
Now to prove the theorem, we first need to derive a useful inequality concerning the metric $d$.
Claim. For all $z,a\in \Delta$, the metric $d$ satisfies $$d(|z|,|a|)\leq d(z,a)\leq d(|z|,-|a|).$$
Proof. We need only prove the first inequality since the second was verified during the proof of the proposition in our previous post. To this end, note that $$d(|z|,|a|)=\bigg| \frac{|z|-|a|}{1-|z||a|} \bigg| = \frac{| |z|-|a| |}{1-|z||a|} $$
and so \begin{align}\label{there} 1-d(|z|,|a|)^2 &= \frac{(1-|z|^2)(1-|a|^2)}{(1-|z||a|)^2}. \end{align} But the above is less than or equal to $1-d(z,a)^2$. Indeed, consider the following: $$1-d(z,a)^2=\frac{(1-|z|^2)(1-|a|^2)}{|1-\bar a z|^2} \leq \frac{(1-|z|^2)(1-|a|^2)}{(1-|z||a|)^2}=1-d(|z|,|a|)^2 .$$ The leftmost equality was derived in the proof of the proposition, and the inequality follows since $|1-\bar a z|\geq ||1| - |a\bar z||= |1- |a| |z||$. Of course the rightmost equality is (\ref{there}). From this we conclude $d(|z|,|a|)\leq d(z,a)$ as desired.
We are now ready to derive Lindelöf's inequality and do so by verifying the rightmost inequality first. Again let $z\in\Delta$. By our claim above we know that $$d(|f(z)|,|f(0)|)\; \leq \;d(f(z),f(0))\; \leq \; d(|z|,0)$$ which by definition of $d$ implies \begin{align}\label{blue} \bigg| \frac{|f(z)|-|f(0)|}{1-|f(z)||f(0)|} \bigg| \leq |z| \end{align} and so $||f(z)|-|f(0)||\leq |z|(1-|f(z)||f(0)|)$. Thus \begin{align*} |f(z)|&\leq ||f(z)|-|f(0)|| + |f(0)| \\ &\leq |z| (1-|f(z)||f(0)|) + |f(0)|. \end{align*} Rearranging terms we find that $|f(z)|( 1 + |z||f(0)|)\leq |f(0)| + |z|$ and so $$|f(z)|\leq \frac{|f(0)|+|z|}{1+|f(0)||z|}$$ as desired. For the lower bound on $|f(z)|$ observe that (\ref{blue}) implies \begin{align*} |f(0)|&\leq ||f(0)|-|f(z)||+|f(z)|\\ &\leq |z|(1-|f(z)||f(0)|)+|f(z)| \end{align*} and hence, again with some rearranging, $|f(0)|+|z||f(z)||f(0)|\leq|f(z)|+|z|$ and thus $$\frac{|f(0)|-|z|}{1-|f(0)||z|}\leq |f(z)|.$$
QED!
*L. Ahlfors, An extension of Schwarz’s lemma,
Trans. Amer. Math. Soc. 43 (1938), 359–364.
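(As a quick numerical sanity check, my own sketch rather than part of the original post: one can test the inequality on a disc automorphism, which is in particular a holomorphic self-map of $\Delta$.)

import numpy as np

a = 0.3
f = lambda z: (z + a) / (1 + a*z)         # disc automorphism, f(0) = a

rng = np.random.default_rng(1)
z = rng.uniform(-1, 1, 2000) + 1j*rng.uniform(-1, 1, 2000)
z = z[np.abs(z) < 1]                       # keep points inside the unit disc

f0, r = abs(f(0)), np.abs(z)
lower = (f0 - r) / (1 - f0*r)
upper = (f0 + r) / (1 + f0*r)
vals = np.abs(f(z))
print(np.all(lower <= vals) and np.all(vals <= upper))   # expect True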
|
Hokkaido Mathematical Journal
Hokkaido Math. J., Volume 46, Number 3 (2017), 487-512.
Growth of meromorphic solutions of some linear differential equations
Abstract
In this paper, we investigate the order and the hyper-order of meromorphic solutions of the linear differential equation \begin{equation*} f^{(k)}+\sum^{k-1}_{j=1}(D_{j}+B_{j}e^{P_{j}(z) })f^{(j)}+( D_{0}+A_{1}e^{Q_{1}( z)}+A_{2}e^{Q_{2}( z) }) f=0, \end{equation*} where $k\geq 2$ is an integer, $Q_{1}(z),Q_{2}(z)$, $P_{j}(z) $ $(j=1, \dots ,k-1)$ are nonconstant polynomials and $A_{s}(z)$ $(\not\equiv 0)$ $(s=1,2)$, $B_{j}( z)$ $(\not\equiv 0)$ $(j=1, \dots ,k-1)$, $D_{m}(z)$ $(m=0,1, \dots ,k-1)$ are meromorphic functions. Under some conditions, we prove that every meromorphic solution $f$ $(\not\equiv 0)$ of the above equation is of infinite order and we give an estimate of its hyper-order. Furthermore, we obtain a result about the exponent of convergence and the hyper-exponent of convergence of a sequence of zeros and distinct zeros of $f-\varphi$, where $\varphi$ $(\not\equiv 0)$ is a meromorphic function and $f$ $(\not\equiv 0)$ is a meromorphic solution of the above equation.
Article information
Source: Hokkaido Math. J., Volume 46, Number 3 (2017), 487-512.
Dates: First available in Project Euclid: 7 November 2017
Permanent link to this document: https://projecteuclid.org/euclid.hokmj/1510045308
Digital Object Identifier: doi:10.14492/hokmj/1510045308
Mathematical Reviews number (MathSciNet): MR3720339
Zentralblatt MATH identifier: 1384.34091
Citation
BEDDANI, Hamid; HAMANI, Karima. Growth of meromorphic solutions of some linear differential equations. Hokkaido Math. J. 46 (2017), no. 3, 487--512. doi:10.14492/hokmj/1510045308. https://projecteuclid.org/euclid.hokmj/1510045308
|
I'm working with some data from a cell analyzer, which takes millions of measurements and returns mean, median and 5% / 95% quantiles for the distribution. Preliminary model comparisons have indicated good fit to the data for a gamma distribution.
The experimental design is a straightforward ANOVA-type hierarchical model:
$$ y_{ij} \sim \mathrm{Gamma}(Sh,\; Sh/\mu_{ij}) $$ $$ \log(\mu_{ij}) = \beta_{0} + \beta^{i}_{1} + \beta^{j}_{2} + \beta^{i}_{1}\beta^{j}_{2}, \qquad i \in \{ 1, \dots, 3 \},\; j \in \{ 1, \dots, 3 \} $$
In the Bayesian context the $\beta$ and Sh variables would have some sort of uninformative hyperparameters.
I want to investigate the contrasts between groups. I would prefer to do this in a Bayesian manner as I prefer my answer in the form of a probability mass.
What I would like to do is
1. Fit each group's mean and quantiles to a gamma distribution using, for example, get.gamma.par
2. Draw samples from the fitted distribution for each group
3. Evaluate contrasts by subtracting samples of contrast groups as described in, for example, the Kruschke textbook
(a rough sketch of these steps follows)
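A rough Python stand-in for these steps (all summary values are hypothetical; get.gamma.par would play the role of fit_gamma):

import numpy as np
from scipy.optimize import least_squares
from scipy.stats import gamma

def fit_gamma(mean, q05, q95):
    # Least-squares match of a Gamma(shape, scale) to a mean and 5%/95% quantiles
    def resid(log_params):
        sh, sc = np.exp(log_params)   # log-parametrization enforces positivity
        return [sh * sc - mean,
                gamma.ppf(0.05, sh, scale=sc) - q05,
                gamma.ppf(0.95, sh, scale=sc) - q95]
    return np.exp(least_squares(resid, x0=[0.0, 0.0]).x)

sh_a, sc_a = fit_gamma(10.0, 4.0, 18.0)   # made-up summaries for group A
sh_b, sc_b = fit_gamma(12.0, 5.0, 21.0)   # made-up summaries for group B
diff = gamma.rvs(sh_a, scale=sc_a, size=100000) - gamma.rvs(sh_b, scale=sc_b, size=100000)
print((diff > 0).mean())                  # Monte Carlo P(A > B) under the fitted distributions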
I am not sure if this procedure is legit because I am not generating posterior distributions using MCMC, but rather have essentially been given the posterior parametrization by this cell analyzer.
Can I still run a Bayesian contrast analysis given data in this form?
|
In atmospheric physics we use often a variety of vertical coordinates, $z$ the height above some reference surface, or $r$ as distance from the planetary center.However sometimes it simplifies the ...
What is the difference between the Hydraulic diffusion equation and the Richards equation in groundwater dynamics? I want also to understand the difference between the different storage coefficients (...
In this paper, the authors detail the methods to calculate the stream function $\psi$ from velocity data in the case of the global ocean (i.e., multiply connected domains).The general idea is to use ...
The most famous equation describing the permeability $k$ of a soil is the Kozeny-Carman equation:$$k = \frac{\phi}{8}r^2 \approx \frac{\phi}{c}\left(\frac{V}{S}\right)^2$$$\phi$: porosityV: pore ...
In oceanography, is there any particular reason why choosing large eddy simulations in stead of RANS (regardless of the type of flow)? In both cases, 2d simulations would be used (shallow water model)....
|
After my series of post on classification algorithms, it’s time to get back to R codes, this time for quantile regression. Yes, I still want to get a better understanding of optimization routines, in R. Before looking at the quantile regression, let us compute the median, or the quantile, from a sample.
Median
Consider a sample $\{y_1,\cdots,y_n\}$. To compute the median, solve $$\min_\mu \left\lbrace\sum_{i=1}^n|y_i-\mu|\right\rbrace$$ which can be solved using linear programming techniques. More precisely, this problem is equivalent to $$\min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n a_i+b_i\right\rbrace$$ with $a_i,b_i\geq 0$ and $y_i-\mu=a_i-b_i$, $\forall i=1,\cdots,n$.
To illustrate, consider a sample from a lognormal distribution,
n = 101
set.seed(1)
y = rlnorm(n)
median(y)
[1] 1.077415
For the optimization problem, use the matrix form, with 3n constraints, and 2n+1 parameters,
library(lpSolve)
A1 = cbind(diag(2*n), 0)
A2 = cbind(diag(n), -diag(n), 1)
r = lp("min", c(rep(1,2*n), 0), rbind(A1, A2), c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y))
tail(r$solution, 1)
[1] 1.077415
It looks like it’s working well…
Quantile
Of course, we can adapt our previous code for quantiles
tau = .3
quantile(y, tau)
      30%
0.6741586
The linear program is now $$\min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbrace$$ with $a_i,b_i\geq 0$ and $y_i-\mu=a_i-b_i$, $\forall i=1,\cdots,n$. The R code is now
A1 = cbind(diag(2*n), 0)
A2 = cbind(diag(n), -diag(n), 1)
r = lp("min", c(rep(tau,n), rep(1-tau,n), 0), rbind(A1, A2), c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y))
tail(r$solution, 1)
[1] 0.6741586
So far so good…
Quantile Regression (simple)
Consider the following dataset, with rents of flats in a major German city, as a function of the surface area, the year of construction, etc.
base=read.table("http://freakonometrics.free.fr/rent98_00.txt",header=TRUE)
The linear program for the quantile regression is now $$\min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbrace$$ with $a_i,b_i\geq 0$ and $y_i-[\beta_0^\tau+\beta_1^\tau x_i]=a_i-b_i$, $\forall i=1,\cdots,n$. So use here
require(lpSolve)
tau = .3
n = nrow(base)
X = cbind(1, base$area)
y = base$rent_euro
A1 = cbind(diag(2*n), 0, 0)
A2 = cbind(diag(n), -diag(n), X)
r = lp("min", c(rep(tau,n), rep(1-tau,n), 0, 0), rbind(A1, A2), c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y))
tail(r$solution, 2)
[1] 148.946864   3.289674
Of course, we can use R function to fit that model
library(quantreg)
rq(rent_euro~area, tau=tau, data=base)
Coefficients:
(Intercept)        area
 148.946864    3.289674
Here again, it seems to work quite well. We can use a different probability level, of course, and get a plot
plot(base$area, base$rent_euro, xlab=expression(paste("surface (",m^2,")")), ylab="rent (euros/month)", col=rgb(0,0,1,.4), cex=.5)
sf = 0:250
yr = r$solution[2*n+1] + r$solution[2*n+2]*sf
lines(sf, yr, lwd=2, col="blue")
tau = .9
r = lp("min", c(rep(tau,n), rep(1-tau,n), 0, 0), rbind(A1, A2), c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y))
tail(r$solution, 2)
[1] 121.815505   7.865536
yr = r$solution[2*n+1] + r$solution[2*n+2]*sf
lines(sf, yr, lwd=2, col="blue")
Quantile Regression (multiple)
Now that we understand how to run the optimization program with one covariate, why not try with two ? For instance, let us see if we can explain the rent of a flat as a (linear) function of the surface and the age of the building.
require(lpSolve)
tau = .3
n = nrow(base)
X = cbind(1, base$area, base$yearc)
y = base$rent_euro
A1 = cbind(diag(2*n), 0, 0, 0)
A2 = cbind(diag(n), -diag(n), X)
r = lp("min", c(rep(tau,n), rep(1-tau,n), 0, 0, 0), rbind(A1, A2), c(rep(">=", 2*n), rep("=", n)), c(rep(0,2*n), y))
tail(r$solution, 3)
[1] 0.000000 3.257562 0.077501
Unfortunately, this time, it is not working well…
library(quantreg)
rq(rent_euro~area+yearc, tau=tau, data=base)
Coefficients:
 (Intercept)         area        yearc
-5542.503252     3.978135     2.887234
Results are quite different. And actually, another technique can confirm the latter (IRLS – Iteratively Reweighted Least Squares)
eps = residuals(lm(rent_euro~area+yearc, data=base))
for(s in 1:500){
  reg = lm(rent_euro~area+yearc, data=base, weights=(tau*(eps>0)+(1-tau)*(eps<0))/abs(eps))
  eps = residuals(reg)
}
reg$coefficients
 (Intercept)         area        yearc
-5484.443043     3.955134     2.857943
I could not figure out what went wrong with the linear program. Not only are the coefficients very different, but so are the predictions…
yr = r$solution[2*n+1] + r$solution[2*n+2]*base$area + r$solution[2*n+3]*base$yearc
plot(predict(reg), yr)
abline(a=0, b=1, lty=2, col="red")
It’s now time to investigate….
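One plausible culprit (an assumption worth checking rather than a certainty): lpSolve's lp() treats every decision variable as nonnegative by default. In the simple regression both coefficients happened to be positive, but here the 30% regression wants an intercept near -5542, which the LP cannot reach, and an intercept forced to 0.000000 is exactly what tail(r$solution, 3) shows. The analogous fix in the R code would be to split each beta into a positive and a negative part (or pass explicit bounds), so that negative coefficients become feasible. A quick cross-check in Python, where the beta coefficients are explicitly declared free:

import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    # Variables: [a_1..a_n, b_1..b_n, beta_1..beta_p] with y_i - x_i'beta = a_i - b_i
    n, p = X.shape
    c = np.concatenate([tau * np.ones(n), (1 - tau) * np.ones(n), np.zeros(p)])
    A_eq = np.hstack([np.eye(n), -np.eye(n), X])
    bounds = [(0, None)] * (2 * n) + [(None, None)] * p   # beta free: may be negative
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[-p:]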
|
Suppose that we have a sequence $\{a_n\}$ with $a_n\subseteq\mathbb{R}$ for all $n$. What I wonder is that is it possible to define a 'limit' for this kind of sequences?
Of course this 'limit' should make some sense. For example intuitively if $a_n=(\frac{-1}{n},\frac{1}{n})$, then $\lim a_n$ should be $\{0\}$, or if $a_n=(-n,n)$ then $\lim a_n$ should be $\mathbb{R}$, etc. I know that some kind of a limit process of curves is used to define functions that are continuous everywhere but differentiable nowhere or space filling curves. But I couldn't find any formal information about this limit.
My goal is to generalise this limit process to $\mathbb{R}^2$ to define limits for shapes. For example, a well-known proof that $\pi$ is the ratio of the area of any circle to the square of its radius includes drawing $n$-polygons inside the circle and taking the limit of the areas of these polygons. But maybe it is also possible to define a limit for these compact subsets such that the limit of the $n$-polygons inside the circle, as $n$ increases, is the circle.
So on, I have no idea on how to define this limit. Any help is appreciated.
|
If I have two functions in a convolution like
$$X*Y=1$$
$$X*Z=1$$
then it means (trivially) $Y=Z$.
Is this correct or are there subtleties in the convolution theorem where $Y=Z$ isn't always true?
Here is a counter example. The functions $X,Y,Z$ below will satisfy that $X * Y = X * Z = 1$ but $ Y \not= Z$:
$$ X(t) := 1, \forall \,t \in \mathbb{R},$$
$$ Y(t) := \begin{cases} \frac{1}{C_Y} e^{-\frac{1}{1-t^2}} \quad \text{if $-1 \leq t\leq 1$}\\ 0 \quad \text{otherwise,} \end{cases} $$ where $C_Y$ is a (normalization) constant such that the integral of $Y$ over $[-1,1]$ equals 1, and $$ Z(t) := \begin{cases} \frac{1}{C_Z} e^{-\frac{1}{2^2-t^2}} \quad \text{if $-2 \leq t\leq 2$}\\ 0 \quad \text{otherwise,} \end{cases} $$ where, similar to $C_Y$, $C_Z$ is a (normalization) constant such that the integral of $Z$ over $[-2,2]$ equals 1.
Those are concrete functions as an explicit example. Any normalized (integral equals 1) function $f$ that has compact support will satisfy that $X * f = 1$. From there you can choose any $Y$ and $Z$ you want as long as they are different.
Good question: here we can use the concept of the Laplace transform.
$$X(s) Y(s) = X(s) Z(s)$$
$$Y(s) = Z(s)$$
We know that "Laplace Transform of two signals can be same but their ROC will be definitely different"
Here is an example:
$$e^{-at} u(t) \rightleftharpoons \frac{1}{s+a}$$
$$-e^{-at} u(-t) \rightleftharpoons \frac{1}{s+a}$$
Hence it is proved that it is not necessarily true that
$$Y = Z$$
|
\begin{align}Z_1 & = X+Y \\[4pt]Z_2 & = \frac X {X+Y} \\[12pt]X & = Z_1 Z_2 \\[4pt]Y & = Z_1(1-Z_2)\end{align}
Suppose you've figured out that the support of $Z_1$ is $[0,\infty)$ and the support of $Z_2$ is $[0,1]$. Then certainly the support of $(Z_1,Z_2)$ is a
subset of $[0,\infty)\times[0,1]$. The question is whether it includes all of $[0,\infty)\times[0,1]$. So suppose $(z_1,z_2)$ is some point in $[0,\infty)\times[0,1]$. The question of whether that point is in the support is the question of whether it is the case that every open neighborhood of that point has positive probability. And that comes down to whether every open neighborhood of $(x,y)=(z_1z_2,z_1(1-z_2))$ is assigned positive probability by the distribution of $(X,Y)$. And the answer to that isn't hard to find.
Of course, another way to do this is to actually find the density of $(Z_1,Z_2)$ and then observe that -- lo and behold -- $Z_1$ and $Z_2$ are independent.
|
The Annals of Probability
Ann. Probab., Volume 24, Number 1 (1996), 237-267.
Transience, recurrence and local extinction properties of the support for supercritical finite measure-valued diffusions
Abstract
We consider the supercritical finite measure-valued diffusion, $X(t)$, whose log-Laplace equation is associated with the semilinear equation $u_t = Lu + \beta u - \alpha u^2$, where $\alpha, \beta > 0$ and $L = \frac{1}{2} \sum_{i,j=1}^d a_{i,j} \frac{\partial^2}{\partial x_i \partial x_j} + \sum_{i=1}^d b_i \frac{\partial}{\partial x_i}$. A path $X(\cdot)$ is said to survive if $X(t) \not\equiv 0$ for all $t\geq 0$. Since $\beta > 0$, $P_\mu(X(\cdot) \text{ survives}) > 0$ for all $0\not\equiv \mu \in M(R^d)$, where $M(R^d)$ denotes the space of finite measures on $R^d$. We define transience, recurrence and local extinction for the support of the supercritical measure-valued diffusion starting from a finite measure as follows. The support is recurrent if $P_\mu(X(t,B)>0 \text{ for some } t \geq 0 \mid X(\cdot) \text{ survives}) = 1$, for every $0 \not\equiv \mu \in M(R^d)$ and every open set $B \subset R^d$. For $d\geq 2$, the support is transient if $P_\mu(X(t,B)>0 \text{ for some } t \geq 0 \mid X(\cdot) \text{ survives}) < 1$, for every $\mu \in M(R^d)$ and bounded $B\subset R^d$ which satisfy $\mathrm{supp}(\mu)\cap \bar{B} = \emptyset$. A similar definition taking into account the topology of $R^1$ is given for $d=1$. The support exhibits local extinction if for each $\mu \in M(R^d)$ and each bounded $B\subset R^d$, there exists a $P_\mu$-almost surely finite random time $\zeta_B$ such that $X(t,B) = 0$ for all $t\geq \zeta_B$. Criteria for transience, recurrence and local extinction are developed in this paper. Also studied is the asymptotic behavior as $t \to \infty$ of $E_\mu \int_0^t \langle \psi, X(s) \rangle ds$ and of $E_\mu \langle g, X(t) \rangle$, for $0\leq g, \psi \in C_c(R^d)$, where $\langle f, X(t) \rangle := \int_{R^d} f(x)\, X(t,dx)$. A number of examples are given to illustrate the general theory.
Article information
Source: Ann. Probab., Volume 24, Number 1 (1996), 237-267.
Dates: First available in Project Euclid: 15 January 2003
Permanent link to this document: https://projecteuclid.org/euclid.aop/1042644715
Digital Object Identifier: doi:10.1214/aop/1042644715
Mathematical Reviews number (MathSciNet): MR1387634
Zentralblatt MATH identifier: 0854.60087
Citation
Pinsky, Ross G. Transience, recurrence and local extinction properties of the support for supercritical finite measure-valued diffusions. Ann. Probab. 24 (1996), no. 1, 237--267. doi:10.1214/aop/1042644715. https://projecteuclid.org/euclid.aop/1042644715
|
I am trying to find a good proof of the invertibility of strictly diagonally dominant matrices (defined by $|m_{ii}|>\sum_{j\ne i}|m_{ij}|$). There is a proof of this in this paper, but I'm wondering whether there is a better proof, such as one using determinants, to show that the matrix is nonsingular.
The proof in the PDF (Theorem 1.1) is very elementary. The crux of the argument is that if $M$ is strictly diagonally dominant and singular, then there exists a vector $u \neq 0$ with $$Mu = 0.$$
Let $u_i$ be an entry of $u$ of largest magnitude, so $u_i \neq 0$. Then
\begin{align*} \sum_j m_{ij} u_j &= 0\\ m_{ii} u_i &= -\sum_{j\neq i} m_{ij}u_j\\ m_{ii} &= -\sum_{j\neq i} \frac{u_j}{u_i}m_{ij}\\ |m_{ii}| &\leq \sum_{j\neq i} \left|\frac{u_j}{u_i}m_{ij}\right|\\ |m_{ii}| &\leq \sum_{j\neq i} |m_{ij}|, \end{align*} a contradiction.
I'm skeptical you will find a significantly more elementary proof. Incidentally, though, the Gershgorin circle theorem (also described in your PDF) is very beautiful and gives geometric intuition for why no eigenvalue can be zero.
I would prove it a bit tangentially. And not because it will be simpler, but because it gives an excuse to show an application. I would take an iterative method, like Jacobi's, and show that it converges in this case, and that it converges to a unique solution. This incidentally implies the matrix is non-singular.
How does it work exactly?
For the system $Ax=b$, Jacobi's method consists in writing $A=D+R$, where $D$ is diagonal and $R$ has zeros in the diagonal. Then you define the recurrence
$$x_{n+1}=D^{-1}(b-Rx_{n}).$$
Now we can show that it converges.
We have
\begin{align}||x_m-x_n||&=\Big|\Big|\sum_{k=n}^{m-1}(-D^{-1}R)^k D^{-1}b+\big((-D^{-1}R)^{m}-(-D^{-1}R)^{n}\big)x_0\Big|\Big|\\ &\leq\sum_{k=n}^{m-1}||D^{-1}R||^k||D^{-1}b||+\left(||D^{-1}R||^m+||D^{-1}R||^n\right)||x_0|| \end{align}
For the norm $||\cdot||:=||\cdot||_{\infty}$, the induced matrix norm is the maximum over rows of the sum of the absolute values of the entries in that row. For $D^{-1}R$ the $i$th row sum is $\sum_{j\neq i}|m_{ij}|/|m_{ii}|$, which is less than $1$ by strict diagonal dominance. Therefore $||D^{-1}R||$ is less than some number less than $1$. For this reason the sum above can be as small as you want for $n,m$ large. This shows the convergence of the sequence.
It is clear, too, that it has to converge to any solution of the system $Ax=b$. To see this, use the same argument as above but with a solution $x$ in place of $x_m$, using that $Ax=b$, i.e. $x=D^{-1}(b-Rx)$. So $x_n$ converges to every solution of the system; since a convergent sequence has only one limit, there is only one solution to the system.
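To make the recurrence concrete, a short sketch (my own, with a made-up strictly diagonally dominant matrix):

import numpy as np

def jacobi(A, b, iters=100):
    # x_{n+1} = D^{-1} (b - R x_n), where A = D + R
    D = np.diag(A)                  # diagonal entries of A
    R = A - np.diagflat(D)          # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])     # strictly diagonally dominant
b = np.array([6.0, 8.0, 9.0])
print(jacobi(A, b), np.linalg.solve(A, b))   # the two should agree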
|
I had a system of three PDEs $$\frac{\partial \theta_h}{\partial x}+\beta_h (\theta_h-\theta_w) = 0$$
$$\frac{\partial \theta_c}{\partial y} + \beta_c (\theta_c-\theta_w) = 0$$
$$ \lambda_h \frac{\partial^2 \theta_w}{\partial x^2} + \lambda_c V\frac{\partial^2 \theta_w}{\partial y^2}-\frac{\partial \theta_h}{\partial x} - V\frac{\partial \theta_c}{\partial y} = 0 $$ On eliminating $\theta_h$ and $\theta_c$ from the third equation I reach $$ \lambda_h \frac{\partial^2 \theta_w}{\partial x^2} + \lambda_c V \frac{\partial^2 \theta_w}{\partial y^2} +( -\beta_h - V \beta_c )\theta_w +\beta_h^2 e^{-\beta_h x} \int e^{\beta_h x} \theta_w(x,y) \mathrm{d}x + \beta_c^2 e^{-\beta_c y}\int e^{\beta_c y} \theta_w(x,y)\mathrm{d}y = 0 $$ The boundary conditions for the system are:
$$\frac{\partial \theta_w(0,y)}{\partial x}=\frac{\partial \theta_w(1,y)}{\partial x}=0 $$
$$\frac{\partial \theta_w(x,0)}{\partial y}=\frac{\partial \theta_w(x,1)}{\partial y}=0 $$
$$\theta_h(0,y)=1 $$$$\theta_c(x,0)=0$$
I must note here that I know an ansatz $\theta_w(x,y) = e^{-\beta_h x} f(x) e^{-\beta_c y} g(y)$ which can provide variable separation for the last PDE. But the eigenvalue problems that would then come out of this are third order.
Is there any module in Mathematica that can handle partial integro-differential equations?
|
I'll be teaching an introductory course in algebraic number theory this fall (stopping before class field theory). I'm looking for a good list of "special topics" I can include to illustrate the general theory. In other words, attractive theorems (preferably off the beaten path) that can be proved using the basic results and that illustrate their power. For example, one standard one is the proof of quadratic reciprocity using cyclotomic fields. Can people suggest other ones? Especially ones that connect to other branches of mathematics (e.g. combinatorics, geometry/topology, group theory, etc)?
Here are a few suggestions, though within number theory.
Chebotarev's original proof of the density theorem (in the logarithmic form). Or at least the infinitude of totally split primes as well as the infinitude of non-split primes in every extension $L/K$ of number fields.
The Erdos support theorem on the multiplicative group: If $p \mid a^n - 1 \Rightarrow p \mid b^n-1$, then $b = a^j$. Local-global statements such as "$a$ is a cube mod all $p \, \Longrightarrow \,$ $a$ is, indeed, a cube."
Shafarevich's proof of the local Kronecker-Weber theorem, as given in Narkiewicz's book. The global theorem then follows immediately, by Minkowski.
The use of Frobenius elements and ramification (or of Kronecker-Weber) to prove that $\mathbb{Q}^{\mathrm{ab}}$ has the Bogomolov property: the absolute logarithmic height is bounded below by a positive constant, apart from the roots of unity. More interesting than the result itself is its corollary: a beautifully transparent proof of Smyth's theorem, $m(P) = \int_{S^1} \log{|P(\theta)|} \, d\theta \geq \mathrm{constant} > 0$ for non-reciprocal $P \in \mathbb{Z}[X]$ other than $0, \pm 1$ or $X-1$. This is due to Amoroso and Dvornicich (
A lower bound on the height in abelian extensions), and it is in the fourth chapter of Bombieri and Gubler's book. Amoroso's work gives a few further consequences related to class groups.
Galois groups of irreducible trinomials. Frequently those are the full symmetric group. $X^n - X - 1$ as an example of an explicit polynomial with group $S_n$. Given the irreducibility of that polynomial (Selmer), this is a good application of Minkowski's theorem: Since $\mathbb{Q}$ has no unramified extensions, the group of every Galois extension $K/\mathbb{Q}$ is generated by its inertia subgroups. But $\alpha^n - \alpha - 1 = n\alpha^{n-1} - 1 = 0$ implies $\alpha = n/(1-n)$, hence our polynomial has at most one double root modulo each prime: all the non-trivial inertia groups are generated by a transposition in $S_n$. This and the transitivity (irreducibility) easily show the group is $S_n$. A reference is Serre's course in Galois theory, whose section 4.4 contains other good examples of Galois groups.
Here's something within algebraic number theory, but it is sort of a "special topic" in the sense that it's not treated in most algebraic number theory courses, but could easily be. It also has some combinatorial aspects. Everyone knows that the class group of a number field $K$ measures the failure of unique factorization into irreducibles in its ring of integers.
Question: what does this really mean? That is, in what way does it quantitatively measure this?
A sample result is Carlitz's theorem, which says all irreducible factorizations of a given element have the same length if and only if the class number is at most 2. You can actually count the number of factorizations of $x$ based on the prime ideal factorization of $(x)$ in $\mathcal O_K$. I discussed this in my course a few years ago, based on this note. The note makes the connections with combinatorics clearer. Narkiewicz's book also discusses this problem from a different perspective.
|
Good day,
I have an old exam without solutions and there is the following exercise:
Let $(Y_n)_{n \in \mathbb{N}}$ be a sequence of independent identically distributed (i.i.d.) random variables which are uniformly distributed on $[0,1]$. Further define $X_n :=\min\{Y_1,...,Y_n\}$. Show:
(i) $F_n(t)=[1-(1-t)^n] 1_{[0,1]}(t)+1_{(1,\infty)}(t)$ is the distribution function of $X_n$ - got this one
(ii) $X_n$ is integrable and compute $E(X_n)$ - got this one
(iii)
Prove the convergence of $\sum_{n=1}^\infty P(n^a X_n \geq \epsilon)$ for all $a \in [0,1)$, $\epsilon>0$
Hint: You can make use of the fact that $\log(1-x) \leq -x$ holds for all $x<1$
(iv) $n^a X_n \to 0$ almost surely - got this one if I use (iii)
Normally I'd show exercises like (iii) with Borel-Cantelli-Lemma but here it seems not useful (further I use Borel-Cantelli in (iv) to show almost sure convergence).
Therefore I want to compute $P(n^a X_n \geq \epsilon)$ directly and hope I get something like $\frac{1}{n^2}$ to get convergence of the sum.
Okay, let's try:
$$P(X_n \geq n^{-a} \epsilon)=1-F_n(n^{-a} \epsilon)=\left(1-\frac{\epsilon}{n^a} \right)^n $$
It seems a bit like $(1+\frac{x}{n})^n \to e^x$. I don't know if this is correct up to now since I didn't use the hint.
I'd have to check: $\sum \left(1-\frac{\epsilon}{n^a} \right)^n < \infty$
Maybe with the root test: $\sqrt[n]{ |a_n|}=1-\frac{\epsilon}{n^a}<1=C$, and this implies convergence. But this doesn't feel right.
Can somebody please tell me how to solve this (maybe as the hint implies with the logarithm.)
Thanks a lot, Marvin
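(For what it's worth, the hint does seem to close the gap: once $n^a > \epsilon$, so that the logarithm is defined,
$$\left(1-\frac{\epsilon}{n^a}\right)^n=\exp\left(n\log\left(1-\frac{\epsilon}{n^a}\right)\right)\leq\exp\left(-\epsilon\, n^{1-a}\right),$$
and since $1-a>0$ the terms $e^{-\epsilon n^{1-a}}$ are eventually smaller than $n^{-2}$, so the series converges by comparison.)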
|
I have been fiddling around trying to understand coordinate transformations in dynamical systems. I know the standard way to convert the system from cartesian coordinates to polar, but am having trouble reversing it.
I know the standard transformation equations: $$ r\cos(\theta)=x,\quad r\sin(\theta)=y,\quad r^2 = x^2+y^2,\quad \theta = \arctan\left(\frac{y}{x}\right), $$
$$ \frac{\partial x}{\partial r} =\cos(\theta),\quad \frac{\partial x}{\partial \theta} =-r\sin(\theta),\quad \frac{\partial y}{\partial r} =\sin(\theta),\quad \frac{\partial y}{\partial \theta} =r\cos(\theta), $$
$$ \frac{\partial r}{\partial x} =\frac{x}{r}=\cos(\theta),\quad \frac{\partial \theta}{\partial x} =\frac{-y}{x^2+y^2}=\frac{-\sin(\theta)}{r}, $$
$$ \frac{\partial r}{\partial y} =\sin(\theta),\quad \frac{\partial \theta}{\partial y} =\frac{x}{x^2+y^2}=\frac{\cos(\theta)}{r} $$
If we have an example system of equations: \begin{aligned} \dot{r} &= m r - r^3, \\ \dot{\theta} &= w + v r^2 \end{aligned} where $w, v, m$ are arbitrary constants.
What are the steps I follow to make the appropriate transformation to $\dot{x}, \dot{y}$?
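A sketch of the standard route (not a complete solution): differentiate $x = r\cos\theta$, $y = r\sin\theta$ in $t$,
$$\dot{x} = \dot{r}\cos\theta - r\dot{\theta}\sin\theta, \qquad \dot{y} = \dot{r}\sin\theta + r\dot{\theta}\cos\theta,$$
then substitute $\dot{r} = mr - r^3$, $\dot{\theta} = w + vr^2$, and express everything through $r\cos\theta = x$, $r\sin\theta = y$, $r^2 = x^2+y^2$, giving $\dot{x} = \big(m-(x^2+y^2)\big)x - \big(w+v(x^2+y^2)\big)y$ and $\dot{y} = \big(m-(x^2+y^2)\big)y + \big(w+v(x^2+y^2)\big)x$.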
|
Why is $e^{i2\pi Nf_snT}=1$ in snippet from book?
Because $Tf_s = 1$ and $N$ and $n$ are integers. So the exponent becomes $2\pi$ times an integer, say $K$. But $e^{i\theta} = \cos(\theta)+i\sin(\theta)$. Therefore $e^{i2K\pi} = 1$.
A try without maths: $x(t)$ gives you your location in the complex plane at time $t$.
The answer is: because when you walk in circles ($2\pi$) an integer number of times, whatever your rotational speed, or the length of your steps, you finally up end at the same place as in the beginning, like in the latin palindromic sentence:
In girum imus nocte ecce et consumimur igni
which means: "At night we wander in circles and are consumed by fire."
|