Third post of our series on classification from scratch, following the previous post introducing smoothing techniques with (b)-splines. Here we consider kernel-based techniques. Note that we do not use the “logistic” model here… the approach is purely non-parametric.
Kernel-based estimates, from scratch
I like kernels because they are somehow very intuitive. With GLMs, the goal is to estimate m(\mathbf{x})=\mathbb{E}(Y|\mathbf{X}=\mathbf{x}). Heuristically, we want to compute the (conditional) expected value on the neighborhood of \mathbf{x}. If we consider some spatial model, where \mathbf{x} is the location, we want the expected value of some variable Y, “on the neighborhood” of \mathbf{x}. A natural approach is to use some administrative region (county, département, region, etc.). This means that we have a partition of \mathcal{X} (the space where the variable(s) lie). This will yield the regressogram, introduced in Tukey (1961). For convenience, assume some interval / rectangle / box type of partition. In the univariate case, consider \hat{m}_{\mathbf{a}}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[a_j,a_{j+1}))} or the moving regressogram \hat{m}(x)=\frac{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])y_i}{\sum_{i=1}^n \mathbf{1}(x_i\in[x\pm h])} In that case, the neighborhood is defined as the interval (x\pm h). That’s nice, but clearly very simplistic. If \mathbf{x}_i=\mathbf{x} and \mathbf{x}_j=\mathbf{x}-h+\varepsilon (with \varepsilon>0), both observations are used to compute the conditional expected value. But if \mathbf{x}_{j'}=\mathbf{x}-h-\varepsilon, only \mathbf{x}_i is considered, even if the distance between \mathbf{x}_{j} and \mathbf{x}_{j'} is extremely small. Thus, a natural idea is to use weights that are a function of the distance between the \mathbf{x}_{i}‘s and \mathbf{x}. Use \tilde{m}(x)=\frac{\sum_{i=1}^ny_i\cdot k_h\left({x-x_i}\right)}{\sum_{i=1}^nk_h\left({x-x_i}\right)} where (classically) k_h(x)=k\left(\frac{x}{h}\right) for some kernel k (a non-negative function that integrates to one) and some bandwidth h. Usually, kernels are denoted with a capital letter K, but I prefer to use k, because it can be interpreted as the density of some random noise we add to all observations (independently).
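Before deriving that estimator, here is a minimal sketch of it (my own toy implementation, with argument names of my choosing); k = dnorm gives the Gaussian kernel, and replacing it by a uniform density recovers the moving regressogram:

m_tilde <- function(x, h, xs, ys, k = dnorm) {
  # weights decay with the distance between x and each observation xs[i]
  w <- k((x - xs) / h)
  sum(w * ys) / sum(w)
}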
Actually, one can derive that estimate by using kernel-based estimators of densities. Recall that \tilde{f}(\mathbf{y})=\frac{1}{n|\mathbf{H}|^{1/2}}\sum_{i=1}^n k\left(\mathbf{H}^{-1/2}(\mathbf{y}-\mathbf{y}_i)\right)
Now, use the fact that the expected value can be defined as m(x)=\int yf(y|x)dy=\frac{\int y f(y,x)dy}{\int f(y,x)dy} Consider now a bivariate (product) kernel to estimate the joint density. The numerator is estimated by \frac{1}{nh}\sum_{i=1}^n\int y_i k\left(t,\frac{x-x_i}{h}\right)dt=\frac{1}{nh}\sum_{i=1}^ny_i \kappa\left(\frac{x-x_i}{h}\right) while the denominator is estimated by \frac{1}{nh^2}\sum_{i=1}^n \int k\left(\frac{y-y_i}{h},\frac{x-x_i}{h}\right)dy=\frac{1}{nh}\sum_{i=1}^n\kappa\left(\frac{x-x_i}{h}\right) In a general setting, we still use product kernels between Y and \mathbf{X} and write \widehat{m}_{\mathbf{H}}(\mathbf{x})=\displaystyle{\frac{\sum_{i=1}^ny_i\cdot k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}{\sum_{i=1}^n k_{\mathbf{H}}(\mathbf{x}_i-\mathbf{x})}} for some symmetric positive definite bandwidth matrix \mathbf{H}, and k_{\mathbf{H}}(\mathbf{x})=\det[\mathbf{H}]^{-1}k(\mathbf{H}^{-1}\mathbf{x})
Now that we know what kernel estimates are, let us use them. For instance, assume that k is the density of the \mathcal{N}(0,1) distribution. At point x, with a bandwidth h we get the following code
mean_x = function(x, bw){
  w = dnorm((myocarde$INSYS - x)/bw, mean = 0, sd = 1)
  weighted.mean(myocarde$PRONO, w)}
u = seq(5, 55, length = 201)
v = Vectorize(function(x) mean_x(x, 3))(u)
plot(u, v, ylim = 0:1, type = "l", col = "red")
points(myocarde$INSYS, myocarde$PRONO, pch = 19)

and of course, we can change the bandwidth.
v = Vectorize(function(x) mean_x(x, 2))(u)
plot(u, v, ylim = 0:1, type = "l", col = "red")
points(myocarde$INSYS, myocarde$PRONO, pch = 19)

We observe what we can read in any textbook: with a smaller bandwidth, we get more variance and less bias. “More variance” means here more variability (since the neighborhood is smaller, there are fewer points to compute the average, and the estimate is more volatile), and “less bias” in the sense that the expected value is supposed to be computed at point x, so the smaller the neighborhood, the better.

Using the ksmooth R function
Actually, there is a function in R to compute this kernel regression.
reg = ksmooth(myocarde$INSYS, myocarde$PRONO, "normal", bandwidth = 2*exp(1))
plot(reg$x, reg$y, ylim = 0:1, type = "l", col = "red", lwd = 2, xlab = "INSYS", ylab = "")
points(myocarde$INSYS, myocarde$PRONO, pch = 19)
We can replicate our previous estimate. Nevertheless, the output is not a function, but two vectors. That’s nice to get a graph, but that’s all we get. Furthermore, as we can see, the bandwidth is not exactly the same as the one we used before. I did not find any information online, so I tried to replicate the function we wrote before:
g = function(bk = 3){
  reg = ksmooth(myocarde$INSYS, myocarde$PRONO, "normal", bandwidth = bk)
  f = function(bm){
    v = Vectorize(function(x) mean_x(x, bm))(reg$x)
    z = reg$y - v
    sum((z[!is.na(z)])^2)}
  optim(bk, f)$par}
x = seq(1, 10, by = .1)
y = Vectorize(g)(x)
plot(x, y)
abline(0, exp(-1), col = "red")
abline(0, .37, col = "blue")

There is a slope of 0.37, which is actually e^{-1}. Coincidence? I don’t know, to be honest…

Application in higher dimension
Consider now our bivariate dataset, and some product of univariate (Gaussian) kernels
u = seq(0, 1, length = 101)
p = function(x, y){
  bw1 = .2; bw2 = .2
  w = dnorm((df$x1 - x)/bw1, mean = 0, sd = 1)*
      dnorm((df$x2 - y)/bw2, mean = 0, sd = 1)
  weighted.mean(df$y == "1", w)}
v = outer(u, u, Vectorize(p))
image(u, u, v, col = clr10, breaks = (0:10)/10)
points(df$x1, df$x2, pch = 19, cex = 1.5, col = "white")
points(df$x1, df$x2, pch = c(1, 19)[1 + (df$y == "1")], cex = 1.5)
contour(u, u, v, levels = .5, add = TRUE)
We get the following prediction
Here, the different colors are probabilities.
k-nearest neighbors
An alternative is to consider a neighborhood defined not by a distance to point \mathbf{x} but by its k nearest neighbors among the n observations we got.\tilde{m}_k(\mathbf{x})=\frac{1}{n}\sum_{i=1}^n\omega_{i,k}(\mathbf{x})y_i
where \omega_{i,k}(\mathbf{x})=n/k if i\in\mathcal{I}_{\mathbf{x}}^k (and 0 otherwise), with \mathcal{I}_{\mathbf{x}}^k=\{i:\mathbf{x}_i\text{ one of the }k\text{ nearest observations to }\mathbf{x}\} The difficult part here is that we need a valid distance. If units are very different on each component, using the Euclidean distance will be meaningless. So, quite naturally, let us consider here the Mahalanobis distance
Sigma = var(myocarde[,1:7])
Sigma_Inv = solve(Sigma)
d2_mahalanobis = function(x, y, Sinv){as.numeric(x - y) %*% Sinv %*% t(x - y)}
k_closest = function(i, k){
  vect_dist = function(j) d2_mahalanobis(myocarde[i,1:7], myocarde[j,1:7], Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(myocarde))
  which(rank(vect) <= k)}
Here we have a function to find the k closest neighbors of some observation. Then two things can be done to get a prediction. The goal is to predict a class, so we can think of using a majority rule: the prediction for y_i is the same as that of the majority of its neighbors.
k_majority = function(k){
  Y = rep(NA, nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = sort(myocarde$PRONO[k_closest(i, k)])[(k+1)/2]
  return(Y)}
But we can also compute the proportion of black points among the closest neighbors. It can actually be interpreted as the probability of being black (that’s actually what was said at the beginning of this post, with kernels),
k_mean = function(k){
  Y = rep(NA, nrow(myocarde))
  for(i in 1:length(Y)) Y[i] = mean(myocarde$PRONO[k_closest(i, k)])
  return(Y)}
We can see on our dataset the observation, the prediction based on the majority rule, and the proportion of dead individuals among the 7 closest neighbors
cbind(OBSERVED = myocarde$PRONO,
      MAJORITY = k_majority(7), PROPORTION = k_mean(7))
      OBSERVED MAJORITY PROPORTION
 [1,]        1        1  0.7142857
 [2,]        0        1  0.5714286
 [3,]        0        0  0.1428571
 [4,]        1        1  0.5714286
 [5,]        0        1  0.7142857
 [6,]        0        0  0.2857143
 [7,]        1        1  0.7142857
 [8,]        1        0  0.4285714
 [9,]        1        1  0.7142857
[10,]        1        1  0.8571429
[11,]        1        1  1.0000000
[12,]        1        1  1.0000000
Here, we got a prediction for an observed point, located at \boldsymbol{x}_i, but actually, it is possible to seek the k closest neighbors of any point \boldsymbol{x}. Back to our univariate example (to get a graph), we have
mean_x = function(x, k = 9){
  w = rank(abs(myocarde$INSYS - x), ties.method = "random")
  mean(myocarde$PRONO[which(w <= k)])}
u = seq(5, 55, length = 201)
v = Vectorize(function(x) mean_x(x, 9))(u)
plot(u, v, ylim = 0:1, type = "l", col = "red", lwd = 2, xlab = "INSYS", ylab = "")
points(myocarde$INSYS, myocarde$PRONO, pch = 19)

That’s not very smooth, but we do not have a lot of points either.
If we use that technique on our two-dimensional dataset, we obtain the following
Sigma_Inv = solve(var(df[,c("x1","x2")]))
u = seq(0, 1, length = 51)
p = function(x, y){
  k = 6
  vect_dist = function(j) d2_mahalanobis(c(x, y), df[j, c("x1","x2")], Sigma_Inv)
  vect = Vectorize(vect_dist)(1:nrow(df))
  idx = which(rank(vect) <= k)
  return(mean((df$y == 1)[idx]))}
v = outer(u, u, Vectorize(p))
image(u, u, v, xlab = "Variable 1", ylab = "Variable 2", col = clr10, breaks = (0:10)/10)
points(df$x1, df$x2, pch = 19, cex = 1.5, col = "white")
points(df$x1, df$x2, pch = c(1, 19)[1 + (df$y == "1")], cex = 1.5)
contour(u, u, v, levels = .5, add = TRUE)
This is the idea of local inference, using either a kernel on a neighborhood of \mathbf{x} or simply the k nearest neighbors. Next time, we will investigate penalized logistic regressions. To be continued…
|
I wanted to check the following possibility.
I have $$ A=\left [\begin{matrix} a & b \\ c & d\\ \end{matrix} \right] $$ Now I want to find a polynomial $f$ such that $$ f(A)=\left [\begin{matrix} 0 & -1 \\ 1 & 0\\ \end{matrix} \right] $$. Is this always possible? I have done some calculations but did not get anywhere. Any help will be appreciated.
From the fact that $f(A)$ and $A$ commute, we conclude that $A$ must commute with $\begin{bmatrix} 0 & -1 \\ 1 & 0\end{bmatrix}.$ This in turn results in the requirements $a=d$ and $b=-c.$ If $b=c=0,$ then $f(A)$ would only be a multiple of the identity matrix. Therefore, we can also conclude $b=-c\neq 0.$ Now it is easy to find our polynomial: $$ f(x) = \frac xc - \frac ac $$
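As a quick sanity check (my own verification, not part of the original argument): with $d = a$ and $b = -c$, $$ f(A) = \frac{1}{c}A - \frac{a}{c}I = \frac{1}{c}\begin{bmatrix} 0 & b \\ c & 0\end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 1 & 0\end{bmatrix}, $$ where the last step uses $b = -c$.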
|
I am working on 2D Liouville field theory and trying to follow mostly Harold Erbin's note on 2d quantum gravity and Liouville theory.
I have a really simple question:
One considers the Euclidean Liouville action, which is given by
$S_L = \frac{1}{4\pi}\int d^2\sigma\sqrt{h}\left(h^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi+QR\phi+4\pi\mu e^{2b\phi}\right)$
Now, in order to get the e.o.m as well as the stress-energy tensor, we have to vary the action, which yields
\begin{equation}\delta_hS_L = \frac{1}{4\pi}\int d^2\sigma\sqrt{h}\delta h^{\mu\nu}\left[-\frac{1}{2}h_{\mu\nu}\left(h^{\rho\sigma}\partial_{\rho}\phi\partial_{\sigma}\phi+QR\phi+4\pi\mu e^{2b\phi}\right)\\+\left(\partial_{\mu}\phi\partial_{\nu}\phi+QR_{\mu\nu}\phi+Q(h_{\mu\nu}\Delta\phi-\nabla_{\mu}\nabla_{\nu}\phi)\right)\right] ,\end{equation} while the variation w.r.t. $\phi$ gives \begin{equation}\delta_{\phi}S_L = \frac{1}{4\pi}\int d^2\sigma\sqrt{h}\delta \phi\left(-2\Delta\phi+QR+8\pi\mu be^{2b\phi}\right)\end{equation} The equations of motion for $\phi$ are obtained in the usual way, and one gets \begin{equation}QR[h]-2\Delta\phi=-8\pi\mu b e^{2b\phi}\end{equation} If one considers the flat metric, then this reduces to \begin{equation}\partial_{\mu}\partial^{\mu}\phi=4\pi\mu b e^{2b\phi}\end{equation}
The stress energy tensor is computed in the usual way using \begin{equation}T_{\mu\nu} = -\frac{4\pi}{\sqrt{h}}\frac{\delta S}{\delta h^{\mu\nu}}\end{equation} In the notes, it is claimed that this gives \begin{equation}T_{\mu\nu}= -\left(\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}h_{\mu\nu}h^{\rho\sigma}\partial_{\rho}\phi\partial_{\sigma}\phi\right)+Q(-h_{\mu\nu}\Delta\phi+\nabla_{\mu}\nabla_{\nu}\phi)+2\pi\mu be^{2b\phi}h_{\mu\nu}\end{equation}
$\textbf{Question 1:}$ Why does the contribution proportional to $R$ vanish? I have the feeling that this is the stress-energy tensor for flat space but not for curved space, or do these terms vanish in every case?
Then, we go to the complex plane (section 6.5.2). The metric is given by $ds^2 = dzd\bar{z}$. This means that
\begin{equation}g_{zz}=g_{\bar{z}\bar{z}}=0, \qquad g_{z\bar{z}}=g_{\bar{z}z}=\frac{1}{2}\end{equation} Also, the complex coordinate derivatives are easily found to be $\partial_{z}= \frac{1}{2}(\partial_0-i\partial_1)$.
$\textbf{Question 2:}$ I don't fully understand how to transform the e.o.m as well as the stress-energy tensor in those new coordinates. The result should be:
\begin{equation}\partial\bar{\partial}\phi = 4\pi \mu be^{2b\phi}\end{equation} \begin{equation}T(z) = T_{zz} = -(\partial\phi)^2+Q\partial^2\phi+2\pi\mu e^{2b\phi}\end{equation}
Can someone give me a hint please?
|
Suppose $a,b \in GL(n,\mathbb{C})$, and $\langle a,b\rangle$ is a free group of rank $2$.
Is there a way to choose a $c$ to guarantee that $\langle a,b,c\rangle$ is a free group of rank $3$?
Yes. Note first that it suffices to answer the question in $\textit{SL}(2,\mathbb{C})$. In particular, if $A,B\in\textit{GL}(2,\mathbb{C})$, let $\hat{A},\hat{B}$ be scalar multiples of $A,B$ that lie in $\textit{SL}(2,\mathbb{C})$, and suppose that there exists a $C\in\textit{SL}(2,\mathbb{C})$ so that $\langle\hat{A},\hat{B},C\rangle$ is a free group of rank $3$. Then for any word $w(x_1,x_2,x_3)$ in the free group $\langle x_1,x_2,x_3\rangle$, if $w(A,B,C) = I$, then $w(\hat{A},\hat{B},C)$ would have to be a scalar matrix, and hence would lie in the center of $\langle\hat{A},\hat{B},C\rangle$, which is impossible. We conclude that $w(A,B,C) \ne I$ for all $w$, and hence $\langle A,B,C\rangle$ is free as well.
So suppose $A,B\in\textit{SL}(2,\mathbb{C})$ and $\langle A,B\rangle$ is free of rank $2$. Consider the group $\textit{SL}(2,R)$, where $R$ is the ring $\mathbb{C}[t_1,t_2,t_3,t_4]/(t_1t_4-t_2t_3-1)$, and let $$ T \;=\; \begin{bmatrix}t_1 & t_2 \\ t_3 & t_4\end{bmatrix}. $$ Note that $T$ is invertible over $R$, with $$ T^{-1} \;=\; \begin{bmatrix}t_4 & -t_2 \\ -t_3 & t_1\end{bmatrix}, $$ and hence $T \in \textit{SL}(2,R)$. Then $\langle A,B,T\rangle$ must be free, since any element of $\langle A,B\rangle$ can be substituted for $T$, and the free group $\langle A,B\rangle$ has no non-trivial laws.
Now, if $w = w(x_1,x_2,x_3)$ is any non-trivial word in the free group $\langle x_1,x_2,x_3\rangle$, the equation $$ w(A,B,T) \;=\; I $$ is equivalent to a nontrivial system of polynomial equations in $t_1,t_2,t_3,t_4$, which define a proper subvariety $V_w$ of $\textit{SL}(2,\mathbb{C})$. But $\textit{SL}(2,\mathbb{C})$ cannot be expressed as a countable union of proper subvarieties, and hence there exists a matrix $C\in\textit{SL}(2,\mathbb{C})$ that does not lie in any $V_w$. Then $\langle A,B,C\rangle$ is free.
Practically speaking, the way to choose $C$ is to choose a generic matrix, i.e. a matrix whose entries do not satisfy any polynomial equations over the field generated by the entries of $A$ and $B$.
|
Let $B \subset \mathbb R^2$ be the unit ball and $T>0.$ Let $u \in W^{2,1}_p(B \times [0,T]),$ that is, $u \in L^p(B \times [0,T])$ and we also have $$ \partial_t u, \nabla u, \nabla^2 u \in L^p(B \times [0,T]). $$ Here $\nabla$ denotes differentiation in the spatial directions only.
I am looking for a proof of the following result:
If $p>4,$ then $\nabla u \in C^{\alpha,\alpha/2}(\overline B \times [0,T])$ and there is $C_p > 0$ such that $$ \sup_{\overline B \times [0,T]} |\nabla u| + \sup_{(x,t) \neq (y,s) \in \overline B \times [0,T]} \frac{|\nabla u(x,t) - \nabla u(y,s)|}{|x-y|^{\alpha} + |t-s|^{\alpha/2}} \leq C_p \left( \lVert u \rVert_{L^p(B \times [0,T])} + \lVert \nabla u \rVert_{L^p(B \times [0,T])} + \lVert \nabla^2 u \rVert_{L^p(B \times [0,T])} + \lVert \partial_t u \rVert_{L^p(B \times [0,T])} \right), $$ or in more compact (but possibly non-standard) notation $$\lVert \nabla u \rVert_{C^{\alpha,\alpha/2}(\overline B \times [0,T])} \leq C_p \lVert u \rVert_{W^{2,1}_p(B \times [0,T])},$$ where $\alpha = \left(1-\frac4p\right).$
This result was stated without a proof or reference as lemma 3.1 in the paper "The existence of heat flow of $H$-systems" by Chen and Levine. I presume it's well-known, but I've been unable to find a reference for it.
Some thoughts: My initial idea was to try to adapt one of the proofs of the usual Morrey-Sobolev embedding $W^{1,p} \hookrightarrow C^{1-n/p}$ separately in $x$ and $t,$ perhaps by breaking it up as,$$ |\nabla u(x,t) - \nabla u(y,s)| \leq |\nabla u(x,t) - \nabla u(x,s)| + |\nabla u(x,s) - \nabla u(y,s)|. $$The exponent $\alpha$ suggests we apply Sobolev embedding in the $x$ variable with exponent $p/2,$ but it's not clear how to do this uniformly in $t.$ Moreover this naive approach obviously fails because we only have information about $\partial_t u$ and not its gradient. So some interpolation argument would be needed, which is where I'm stuck.
|
I want to calculate the activity coefficients of mixed solvent salt solutions. I am seeing very strange behavior when I try calculating the activity coefficient of salts in non-polar solvents using Debye-Huckel theory, though, and it's messing up my downstream calculations.
In a simple example, let's compare the activity of NaCl in hexane and water:
Inputs: (hexane / water)
Density: 654 / 1000 kg/m^3
Dielectric constant: 1.89 / 80.4
Temperature: 293 K
Molarity: 0.1 mol/L
Calculations:
$A = \text{consts} \cdot \frac {\sqrt {\rho}} {(\epsilon T)^{1.5}} = 230 / 1$
$I = \frac12 \sum_{ions} cz^2 = 0.1 / 0.1$
$\ln(\gamma) = -Az^2\sqrt I = -73 / -0.32$
$\gamma = 10^{-32} / 0.72$
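For what it's worth, the ~230x blow-up can be sanity-checked without knowing the prefactor, since it cancels in the ratio between solvents (a quick R sketch; variable names are mine):

rho <- c(hexane = 654, water = 1000)      # kg/m^3
eps <- c(hexane = 1.89, water = 80.4)     # dielectric constants
Tk  <- 293                                # K
A_rel <- sqrt(rho) / (eps * Tk)^1.5       # proportional to A; shared constants cancel
unname(A_rel["hexane"] / A_rel["water"])  # ~ 224, matching the ~230x factor above
-224 * 1^2 * sqrt(0.1)                    # ln(gamma) in hexane, taking A_water ~ 1: ~ -71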
In equilibrium the salt activity should be the same in each phase, $\gamma_{aq}x_{aq} = \gamma_{org} x_{org}$, which would imply that the salt should all be flying over to the organic phase. That clearly makes no sense.
Why does this give such totally wrong behavior in a limiting case?
|
Personally I think the problem is interesting, so let me extend my comments to an answer. First of all, DSolve can solve OP's problem straightforwardly (in Mathematica 10.3 or higher, if I remember correctly):
With[{u = u[t, x]},
eq = D[u, t] == k D[u, x, x];
ic = u == Piecewise[{{1, 0 < x < 2}}] /. t -> 0;
bc = D[u, x] == 0 /. x -> 0;]
asol = DSolveValue[{eq, ic, bc}, u, {t, x}, Assumptions -> {x > 0, k > 0}];
asol[t, x]
(* 1/2 (-Erf[(-2 + x)/(2 Sqrt[k] Sqrt[t])] + Erf[(2 + x)/(2 Sqrt[k] Sqrt[t])]) *)
Remark

There seems to be a bug in DSolve in v11.2.0:

DSolve[{eq, ic, bc}, u[t, x], {t, x}]

will return unevaluated.
As one can see, DSolve expresses the solution with Erf, so it's not immediately clear whether OP's solution is correct or not, and Mathematica's functions for simplifying also don't work well in this case. So let's obtain the analytic solution with another approach, that is, making use of the Fourier cosine transform to eliminate the derivative in $x$:
fct = FourierCosTransform[#, x, s] &;
tset = Map[fct, {eq, ic}, {2}] /. Rule @@ bc /.
HoldPattern@FourierCosTransform[a_, __] :> a
tsol = u[t, x] /. DSolve[tset, u[t, x], t][[1]]
(* (E^(-k s^2 t) Sqrt[2/π] Sin[2 s])/s *)
Remark

I've made the transform on the PDE in a quick way; for a more general approach, check this post.
InverseFourierCosTransform has difficulty in transforming tsol, but it doesn't matter, because the integral form is just what we want. By checking the formula of the inverse Fourier cosine transform, we find the solution should be
$$u(t,x)=\sqrt{\frac{2}{\pi }} \int_0^{\infty } \frac{e^{-k s^2 t} \sqrt{\frac{2}{\pi }} \cos (s x) \sin (2 s)}{s} \, ds$$
It's apparently different from the one in your question, and numeric calculation shows this solution is the same as the one given by DSolve, so the one in your question is wrong.
Finally, an illustration of the solution:
Plot3D[asol[t, x] /. k -> 1 // Evaluate, {x, 0, 4}, {t, 0, 10}]
Update

Inspired by Ars3nous' comment below, I noticed InverseFourierCosTransform can actually transform tsol. We just need a proper assumption:
InverseFourierCosTransform[tsol, s, x, Assumptions -> k > 0]
(* 1/2 (-Erf[(-2 + x)/(2 Sqrt[k t])] + Erf[(2 + x)/(2 Sqrt[k t])]) *)
Apparently it's the same as asol.
|
Arkiv för Matematik, Ark. Mat., Volume 40, Number 1 (2002), 89-104.

The harmonic Bergman kernel and the Friedrichs operator

Abstract

The harmonic Bergman kernel $Q_\Omega$ for a simply connected planar domain $\Omega$ can be expanded in terms of powers of the Friedrichs operator $F_\Omega$, provided that $\|F_\Omega\|<1$ in operator norm. Suppose that $\Omega$ is the image of a univalent analytic function $\phi$ in the unit disk $\mathbb{D}$ with $\phi'(z)=1+\psi(z)$ where $\psi(0)=0$. We show that if the function $\psi$ belongs to a space $D_s(\mathbb{D})$, $s>0$, of Dirichlet type, then provided that $\|\psi\|_\infty<1$ the series for $Q_\Omega$ also converges pointwise in $\bar \Omega \times \bar \Omega \backslash \Delta (\partial \Omega )$, and the rate of convergence can be estimated. The proof uses the eigenfunctions of the Friedrichs operator as well as a formula due to Lenard on projections in Hilbert spaces. As an application, we show that for every $s>0$ there exists a constant $C>0$ such that if $\|\psi\|_{D_s(\mathbb{D})}\leq C$, then the biharmonic Green function for $\Omega=\phi(\mathbb{D})$ is positive.

Article information

Source: Ark. Mat., Volume 40, Number 1 (2002), 89-104.
Dates: Received: 18 September 2000; First available in Project Euclid: 31 January 2017
Permanent link to this document: https://projecteuclid.org/euclid.afm/1485898755
Digital Object Identifier: doi:10.1007/BF02384504
Mathematical Reviews number (MathSciNet): MR1948888
Zentralblatt MATH identifier: 1075.47505
Rights: 2002 © Institut Mittag-Leffler

Citation
Jakobsson, Stefan. The harmonic Bergman kernel and the Friedrichs operator. Ark. Mat. 40 (2002), no. 1, 89--104. doi:10.1007/BF02384504. https://projecteuclid.org/euclid.afm/1485898755
|
In my last post I solved a problem from chapter 2 of M.G. Bulmer’s Principles of Statistics. In this post I work through a problem in chapter 11 that is basically a continuation of the chapter 2 problem. If you take a look at the previous post, you’ll notice we were asked to find probability in terms of theta. I did it and that’s nice and all, but we can go further. We actually have data, so we can estimate theta. And that’s what the problem in chapter 11 asks us to do. If you’re wondering why it took 9 chapters to get from finding theta to estimating theta, that’s because the first problem involved basic probability and this one requires maximum likelihood. It’s a bit of a jump where statistical background is concerned.
The results of the last post were as follows:
                 purple-flowered                 red-flowered
long pollen      \( \frac{1}{4}(\theta + 2)\)    \( \frac{1}{4}(1 - \theta) \)
round pollen     \( \frac{1}{4}(1 - \theta) \)   \( \frac{1}{4}\theta \)
The table provides probabilities of the four possible phenotypes when hybrid sweet peas are allowed to self-fertilize. For example, the probability of a self-fertilizing sweet pea producing a purple flower with long pollen is \( \frac{1}{4}(\theta + 2)\). In this post we’ll estimate theta from our data. Recall that \( \theta = (1 - \pi)^{2} \), where \( \pi \) is the probability of the dominant and recessive genes of a characteristic switching chromosomes.
Here’s the data:
                 Purple-flowered    Red-flowered
Long pollen      1528               117
Round pollen     106                381
We see from the table there are 4 exclusive possibilities when the sweet pea self-fertilizes. If we think of each possibility having its own probability of occurrence, then we can think of this data as a sample from a multinomial distribution. Since chapter 11 covers maximum likelihood estimation, the problem therefore asks us to use the multinomial likelihood function to estimate theta.
Now the maximum likelihood estimator for each probability is \( \hat{p_{i}} = \frac{x_{i}}{n} \). But we can’t use that. That’s estimating four parameters. We need to estimate one parameter, theta. So we need to go back to the multinomial maximum likelihood function and define \( p_{i}\) in terms of theta. And of course we’ll work with the log likelihood since it’s easier to work with sums than products.
\( \log L(\theta) = y_{1} \log p_{1} + y_{2} \log p_{2} + y_{3} \log p_{3} + y_{4} \log p_{4} \)
If you’re not sure how I got that, google “log likelihood multinomial distribution” for more PDF lecture notes than you can ever hope to read.
Now let’s define the probabilities in terms of one parameter, theta:
\( \log L(\theta) = y_{1} \log f_{1}(\theta) + y_{2} \log f_{2}(\theta) + y_{3} \log f_{3}(\theta) + y_{4} \log f_{4}(\theta) \)
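As a quick numerical sanity check before doing the calculus by hand (a sketch using base R’s optimize; the counts come from the table above, paired with their probabilities):

y <- c(1528, 117, 106, 381)  # purple/long, red/long, purple/round, red/round
loglik <- function(th) sum(y * log(c((2 + th)/4, (1 - th)/4, (1 - th)/4, th/4)))
optimize(loglik, interval = c(0, 1), maximum = TRUE)$maximum  # ~ 0.7844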
Now take the derivative. Once we have that, we can set it equal to 0 and solve for theta. The solution will be the point at which the log-likelihood obtains its maximum value:
\( \frac{d \log L(\theta)}{d\theta} = \frac{y_{1}}{f_{1}(\theta)} f'_{1}(\theta) + \frac{y_{2}}{f_{2}(\theta)} f'_{2}(\theta) + \frac{y_{3}}{f_{3}(\theta)} f'_{3}(\theta) + \frac{y_{4}}{f_{4}(\theta)} f'_{4}(\theta)\)
Time to go from the abstract to the applied with our values. The y’s are the data from our table and the functions of theta are the results from the previous problem.
\( \frac{d \log L(\theta)}{d\theta} = \frac{1528}{1/4(2 + \theta)} \frac{1}{4} - \frac{117}{1/4(1 - \theta)}\frac{1}{4} - \frac{106}{1/4(1 - \theta)}\frac{1}{4} + \frac{381}{1/4(\theta)} \frac{1}{4} \)

\( \frac{d \log L(\theta)}{d\theta} = \frac{1528}{2 + \theta} - \frac{117}{1 - \theta} - \frac{106}{1 - \theta} + \frac{381}{\theta} \) \( \frac{d \log L(\theta)}{d\theta} = \frac{1528}{2 + \theta} - \frac{223}{1 - \theta} + \frac{381}{\theta} \)
Set equal to 0 and solve for theta. Beware, it gets messy.
\( \frac{[1528(1 - \theta)\theta] - [223(2 + \theta)\theta] + [381(2 + \theta)(1 - \theta)]}{(2 + \theta)(1- \theta)\theta} = 0\)
Yeesh. Fortunately the denominator cancels out when we start multiplying terms and solving for theta. So we’re left with this:
\( 1528\theta - 1528\theta^{2} - 446\theta - 223\theta^{2} + 762 - 381\theta - 381\theta^{2} = 0\)
And that reduces to the following quadratic equation:
\( -2132\theta^{2} + 701\theta + 762 = 0\)
I propose we use an online calculator to solve this equation. Here’s a good one. Just plug in the coefficients and hit solve to find the roots. Our coefficients are a = -2132, b = 701, and c = 762. Since it’s a quadratic equation we get two answers:
\( x_{1} = -0.4556 \)
\( x_{2} = 0.7844 \)
The answer is a probability, which is between 0 and 1, so we toss the negative root and behold our maximum likelihood estimate for theta: 0.7844.
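If you’d rather stay in R (which the post uses later anyway), base R’s polyroot does the same job; it takes the coefficients in increasing order of power:

polyroot(c(762, 701, -2132))  # roots ~ -0.4556 and 0.7844 (returned as complex numbers)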
Remember that \( \theta = (1 - \pi)^{2}\). If we solve for pi, we get \( \pi = 1 - \theta^{1/2}\), which works out to be 0.1143. That is, we estimate the probability of characteristic genes switching chromosomes to be about 11%. Therefore we can think of theta as the probability of having two parents that experienced no gene switching.
Now point estimates are just estimates. We would like to know how good the estimate is. That’s where confidence intervals come in to play. Let’s calculate one for theta.
It turns out that we can estimate the variability of our maximum likelihood estimator as follows:
\( V(\theta) = \frac{1}{I(\theta)}\), where \( I(\theta) = -E(\frac{d^{2}\log L}{d \theta^{2}}) \)
We need to determine the second derivative of our log likelihood equation, take the expected value by plugging in our maximum likelihood estimator, multiply that by -1, and then take the reciprocal. The second derivative works out to be:
\( \frac{d^{2}\log L}{d \theta^{2}} = -\frac{1528}{(2 + \theta)^{2}} -\frac{223}{(1 - \theta)^{2}} -\frac{381}{\theta^{2}} \)
The negative expected value of the second derivative is obtained by plugging in our estimate of 0.7844 and multiplying by -1. Let’s head to the R console to calculate this:
th <- 0.7844 # our ML estimate
(I <- -1 * (-1528/((2+th)^2) - 223/((1-th)^2) - 381/(th^2))) # information
[1] 5613.731
Now take the reciprocal and we have our variance:
(v.th <- 1/I)
[1] 0.0001781347
We can take the square root of the variance to get the standard error and multiply by 1.96 to get the margin of error for our estimate. Then add/subtract the margin of error to our estimate for a confidence interval:
# confidence limits on theta
0.784+(1.96*sqrt(v.th)) # upper bound
[1] 0.8101596
0.784-(1.96*sqrt(v.th)) # lower bound
[1] 0.7578404
Finally we convert the confidence interval for theta to a confidence interval for pi:
# probability of switching over
th.ub <- 0.784+(1.96*sqrt(v.th))
th.lb <- 0.784-(1.96*sqrt(v.th))
1-sqrt(th.ub) # lower bound
[1] 0.09991136
1-sqrt(th.lb) # upper bound
[1] 0.1294597
The probability of genes switching chromosomes is most probably in the range of 10% to 13%.
|
I think something crucial here is the pre-existence of an inner product on our space. How can you speak about choosing an 'orthonormal basis' if there is no notion of an inner product?
In this answer, we will therefore consider $\langle \cdot ,\cdot \rangle$ to be an inner product on some $N$-dimensional space $V$ and $\mathcal{B}: V\times V\longrightarrow \mathbb{R}$ to be some non-degenerate bilinear form on $V$. Similarly, we will let $\mathcal{A}:V\longrightarrow V$ be a linear map on $V$.
Choose some basis $\beta=\{\mu_1,\dots,\mu_N\}$ of $V$ and let $B \in \mathbb{R}^{N\times N}$ be given by $B_{ij}=\mathcal{B}(\mu_i,\mu_j)$. Then for any $u,v\in V$ it holds that $\mathcal{B}(u,v)=\mathbf{u}^TB\,\mathbf{v}$, where $\mathbf{u},\mathbf{v}$ are the representations of $u,v$ in the basis $\beta$. We will heretofore refer to them as simply $u,v$.
We can similarly obtain a matrix representation $A$ of $\mathcal{A}$ in the basis $\beta$. Assume for the moment that $\mathcal{A}^*$ is uniquely defined with respect to $\mathcal{B}$ $($rather than $\langle\cdot,\cdot\rangle)$. We hence get that
$$u^T A^T B v = (Au)^TBv = \mathcal{B}(\mathcal{A}u,v) = \mathcal{B}(u, \mathcal{A^*}v) = u^T BA^* v, \tag{1}$$
where $A^*$ is the matrix representation of $\mathcal{A}^*$ in the basis $\beta$. This implies that $u^T\left(A^TB-BA^*\right)v=0$ for all $u$ and all $v$. In other words, it yields
$$A^TB=BA^*.\tag{2}$$
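A quick numerical sanity check of $(2)$ in R (my own addition; a generic random $B$ is almost surely invertible, hence non-degenerate):

set.seed(1)
N <- 4
A <- matrix(rnorm(N^2), N); B <- matrix(rnorm(N^2), N)
Astar <- solve(B) %*% t(A) %*% B                       # A* = B^{-1} A^T B, solving (2)
u <- rnorm(N); v <- rnorm(N)
c(t(A %*% u) %*% B %*% v, t(u) %*% B %*% Astar %*% v)  # B(Au, v) and B(u, A*v) agree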
We can try to concretely find $A^*$ as follows. The $i$-th column of $A^*$ is simply $A^*\mu_i$. If $\beta$ is orthonormal, then we also have that, in the basis $\beta$,
$$A^*\mu_i=(\langle A^*\mu_i,\mu_1\rangle, \langle A^*\mu_i,\mu_2\rangle,\dots, \langle A^*\mu_i,\mu_N\rangle)$$
By $(1)$, it follows that
$$A^*\mu_i=({\mu_1}^TA^TB\mu_i,\,{\mu_2}^TA^TB\mu_i,\,\dots,\,{\mu_N}^TA^TB\mu_i)$$
Now, $B\mu_i$ is simply the $i$-th column of $B$. In words, the previous line says that:
The $i$-th column of $A^*$ is the vector obtained from multiplying $A^T$ by the $i$-th column of $B$.
It follows that $A^*=A^T$ if and only if $A^TB\mu_i=A^T\mu_i$ for all $i$, that is, if and only if
$$A^T(B-I)\mu_i=0$$
for all $i$. In other words, if and only if $\text{Im}(B-I)\subset \ker A^T=\text{Im}(A)^\perp$. Notice that the $\perp$ here refers to our pre-existing inner product.
A trivial consequence of our last observations is that when $B=I$ -- that is, when $\mathcal{B}$ is our pre-existing inner product -- then $A^*=A^T$. Observe that in this case, $(2)$ does hold, as it must.
|
On p. 76 of the 1996 edition of Serre's A Course in Arithmetic, one reads the following (inline) remark:
One can prove that, if $A$ has natural density $k$, the analytic density of $A$ exists and is equal to $k$.
Here, $A$ is a subset of $\bf P$ (the set of all positive rational primes), and the natural density of $A$ is actually the natural density of $A$ relative to $\bf P$, viz. the limit $$\lim_{n \to \infty} \frac{|A \cap [1,n]|}{|\mathbf P \cap [1,n]|}$$ (if it exists), while the analytic density of $A$ is actually the analytic density relative to $\bf P$, viz. the limit $$\lim_{s \to 1^+} \frac{\sum_{p \in A} p^{-s}}{\sum_{p \in \mathbf P} p^{-s}}$$ (again, if it exists). Here are then my questions:
Q1. Was Serre the first to make this observation explicit?

Q2. Do you know of a paper or book where a proof is provided? Serre doesn't even give a hint about it.

Notes (added later).
On Q1: In the light of Lucia's comment below, let me make it clear that I myself find it very unreasonable that the result hadn't been known before Serre's remark in the 1970 French edition of his book (p. 126). I'd just like to find out whether Serre was the first to make it explicit.
On Q2: I have my own proof, but would appreciate a reference. The reason is that something sensibly stronger is true, and I'm hoping to understand, by inspecting the proof he may have had in mind, whether this is intentional (e.g., it is evident from that proof that something sensibly stronger is true, but he just didn't care) or not.

Edit (Feb 09, 2016). For future reference, I think it can be useful to make order and summarize, here in the OP, what has emerged from the answers and comments of those who have so far contributed to this discussion:
1) As expected, it wasn't Serre who first made explicit the relation between the analytic and natural densities relative to the primes. The result is already stated on p. 118 of:

E. Landau, Handbuch der Lehre von der Verteilung der Primzahlen, Erster Band, Teubner: Leipzig, 1909,

where a detailed proof is also presented. This answers both Q1 and Q2.
2) Franz Lemmermeyer, in a comment to the OP, had suggested since the outset that the result should have appeared almost surely in some of Landau's books. This was confirmed by so-called friend Don in his answer (here), where it's also reported that the result was mentioned on p. 225 of the 1st edition of:

H. Hasse, Vorlesungen über Zahlentheorie, Die Grundlehren der mathematischen Wissenschaften 59, Springer-Verlag: Berlin, 1950.

Interestingly enough, Hasse made a mistake here, by stating not only that the existence of the natural density (relative to the primes) implies that the analytic density (always relative to the primes) also exists, and the two are then equal: He went on asserting that the converse is also true! As still noted by so-called friend Don, the mistake was fixed in the 2nd (1964) edition of the book (p. 236), and it was mentioned in a comment to his answer that we know by now that Hasse was really wrong, for an example attributed by Serre to a private communication from E. Bombieri (p. 126 in the 1970 French edition of A Course in Arithmetic, or p. 76 in the 1996 English edition) proves the existence of a set of primes that has analytic (relative) density, but not natural (relative) density.
3) Comparison results in the same spirit as those considered in this question, but involving densities on $\mathbf N^+$, are not so rare in the literature. Most notably, it is known (and easy to prove by Abel's summation formula) that the upper analytic density (on $\mathbf N^+$) is not greater than the upper logarithmic density, which is in turn not greater than the upper asymptotic density; see, e.g., Theorem 2 in Section III.1.3 of:

G. Tenenbaum, Introduction to Analytic and Probabilistic Number Theory, Cambridge Stud. Adv. Math. 46, Cambridge Univ. Press: Cambridge, 1995.
It follows at once that the existence of the natural density (on $\mathbf N^+$) implies the existence of the logarithmic density, and the existence of the logarithmic density implies the existence of the analytic density.
4) On the other hand, it is known that upper and lower asymptotic and natural densities are pretty much independent from each other, in a sense that was first made precise by L. Mišík in:

L. Mišík, Sets of positive integers with prescribed values of densities, Math. Slovaca 52 (2002), No. 3, pp. 289-296.

See here for further reading on the subject. You may want to read the comments to Question 103111: Prescribed values for the uniform density for a more accurate account of Mišík's results and generalizations thereof.
5) Furthermore, it is known that the existence of the analytic density (on $\mathbf N^+$) implies that the logarithmic density also exists, and the two are then equal. This is a non-trivial result, which goes back at least to H. Davenport and P. Erdős, who make an implicit reference to it in the proof of Theorem 1 from:

H. Davenport and P. Erdős, On sequences of positive integers, Acta Arith. 2 (1936), No. 1, 147-151.

The proof is based on the Hardy-Littlewood tauberian theorem. All of this was pointed out by so-called friend Don in a comment to GH from MO's answer (here). An alternative proof, which rather uses Karamata's tauberian theorem, is given by Tenenbaum in his book (Theorem 3 in Section III.1.3). The same Tenenbaum mentioned in a private communication that the special case of Karamata's theorem needed here goes back to:

O. Szász, Münchner Sitzungsberichte (1929), 325-340.
6) Last but not least, Christian Elsholtz added some further elements to the story (here).
|
I (accidentally) stumbled upon the following statement in Atkins' "Elements of Physical Chemistry" (p. 378):

We represent dipole moments by an arrow with a length proportional to $\pmb{\mu}$ and pointing from the negative charge to the positive charge (1). (Be careful with this convention: for historical reasons the opposite convention is still widely used.)
Unfortunately he does not go into more detail. And I know this does not really answer your question.
The definition from the IUPAC is the same as the one used by Atkins:
electric dipole moment, $\mathbf{p}$
Vector quantity, the vector product of which with the electric field strength, $\mathbf{E}$, of a homogeneous field is equal to the torque. $\mathbf{T} = \mathbf{p} \times \mathbf{E}$. The direction of the dipole moment is from the negative to the positive charge.
The source quoted there is from 1993, so you can probably understand my surprise, when I did a little more searching and found in C. Párkányi's "Theoretical Organic Chemistry" (1997, p239):
[...] in organic chemistry the positive direction of the dipole moment is normally defined as the direction from the center of the positive charge towards the center of the negative charge. This convention prevails in physical organic chemistry and in inorganic chemistry. However, while the dipole still points from the positive charge to the negative charge, in physical chemistry and in chemical physics the positive direction of the dipole moment is defined in the opposite way, i.e., from the negative charge to the positive charge.
There are also a few sources given, but I currently have not enough time to look them up. I find this statement (definition) highly confusing and it also does not give another reason why this is the direction it uses. Please forgive me for not going into more detail here, I really don't want to add any more to the confusion.
Conclusion

Don't add to the confusion. Use $$\huge\ominus \overset{\mathbf{p}}{\longrightarrow}\oplus$$ as your definition from now on. Popular use does not make it right. Help use it consistently in the correct way and flush out the historic remnants that are still being taught. However, keep Atkins' warning in mind when you read books, papers, etc.: in the literature you will find both versions.
If anyone wants to argue with you about this, give the following derivation. The dipole moment operator $\mathbf{P}$ is a vector operator that is the sum of the position vectors $\mathbf{r}$ of all $N$ charged particles weighted with their charge $q$.
[goldbook] $$\mathbf{P} = \sum_i^N q_i \mathbf{r}_i$$ For a molecule (neutral by definition) we find $$\begin{align} 0& =\sum_i^N q_i &\Leftrightarrow&& 0&=\sum_{i_+}^{N_+}q_{i_+} + \sum_{i_-}^{N_-}q_{i_-} &\Leftrightarrow&& \sum_{i_+}^{N_+}q_{i_+} &= - \sum_{i_-}^{N_-}q_{i_-} &.\end{align}$$ This can be transformed into $$\sum_{i_+}^{N_+}|q_{i_+}|\mathrm{e} = - \sum_{i_-}^{N_-}|q_{i_-}|(\mathrm{-e}).$$ Therefore we can write $$\begin{align} \mathbf{P} &= \sum_{i_+}^{N_+} q_{i_+} \mathbf{r}_{i_+} + \sum_{i_-}^{N_-} q_{i_-} \mathbf{r}_{i_-} &\Leftrightarrow&& \mathbf{P} &= \mathrm{e}\left( \sum_{i_+}^{N_+} |q_{i_+}| \mathbf{r}_{i_+} - \sum_{i_-}^{N_-} |q_{i_-}| \mathbf{r}_{i_-} \right)&.\end{align}$$ In the parenthesis the first term is a linear combination of the vectors of all positive charges, with resulting vector $\mathbf{r}_+$, and the second term is a linear combination for all negative charges, with resulting vector $\mathbf{r}_-$. The dipole operator is therefore equivalent to $$\begin{align} \mathbf{P} &= \mathrm{e}\left(\mathbf{r}_{+} - \mathbf{r}_{-}\right),\end{align}$$ which is a vector that points from negative to positive.
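A two-point-charge toy example (my own sketch, in R) makes the direction explicit:

q <- c(+1, -1)                 # charges in units of e
r <- rbind(c(1, 0), c(-1, 0))  # positive charge at x = +1, negative at x = -1
colSums(q * r)                 # P = (2, 0): points from the negative toward the positive charge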
|
We went over a lemma in class leading up to a larger theorem. The Lemma states:
Let $\succeq$ be a rational preference relation on $\mathscr{L}$ and let $\succeq$ admit utility representation under Von-Neumann-Morgenstern expectations. Then
1. $U(\sum_{k=1}^{K} \alpha_k L_k) = \sum_{k=1}^{K} \alpha_k \ U(L_k)$
2. $\succeq$ satisfies independence
3. Every linear representation $V$ of $\succeq$ is a positive affine transformation of $U$. So $V = \beta U + \gamma$ where $\beta > 0$
So in proving this, here is some work so far:
1:
Let $L_k = (\Pi^k_1, \Pi^k_2, ..., \Pi_s^k)$ $$U(\sum_{k=1}^{K} \alpha_k L_k) = \sum^s_{i=1}\sum^K_{k=1}\Pi^k_i \alpha_k u_i$$ by Von-Neumann-Morgenstern, where $U(L) = \sum^s_{i=1}\Pi_i u_i $
$$\sum^s_{i=1}\sum^K_{k=1}\Pi^k_i \alpha_k u_i = \sum^K_{k=1} \alpha_k \sum^s_{i=1}\Pi^k_i u_i = \sum_{k=1}^{K} \alpha_k \ U(L_k)$$
2:
Consider $L_1, L_2, L_3 \in \mathscr{L}$ and say $L_1 \succeq L_2$.
$L_1 \succeq L_2 \iff U(L_1) \geq U(L_2)$
Take $\alpha \in (0,1)$ and define
$$L_4 = \alpha L_1 + (1-\alpha)L_3$$ $$L_5 = \alpha L_2 + (1-\alpha)L_3$$
Say $L_5 \succ L_4$
$\implies U(L_5) > U(L_4)$ and from 1.)
$$U(L_5) = \alpha U(L_2) + (1-\alpha) U(L_3)$$ $$U(L_4) = \alpha U(L_1) + (1-\alpha) U(L_3)$$
$$\implies \alpha U(L_2) + (1-\alpha) U(L_3) > \alpha U(L_1) + (1-\alpha) U(L_3) \implies U(L_2) > U(L_1)$$
which is a contradiction, so independence must hold.
I assume 3. is some sort of "monotonicity" condition. How would I approach the proof for this condition? Any hints would be appreciated. I also am wondering what this Lemma is leading up to, so I can study for that in advance. Does anyone have any idea?
|
I am solving a non-linear second order system of PDEs in two variables. The equations are too complicated to write out here, but an essential feature is that there is a propagating wave which then bounces on a boundary.
The problem I have is that the numerics break down at the boundary when the wave reaches this point. I have tried by "trial and error" to just change the way I compute derivatives at this point with different finite difference stencils, but with no luck (pseudospectral methods do not work either, btw), and I have no idea how to systematically try to improve the stability; I have no intuition about what can work and what will not work.
Does anyone have any tips on what to try when one encounters such problems? How can I systematically move forward to try to make my numerical scheme stable at this boundary point? Basically, if someone just has a list of ideas to blindly try, that would also be great.
Edit: I should add that the "boundary" can also be interpreted as an origin in polar coordinates, so close to the boundary we have a cylindrical wave scattering at the origin.
Edit2: By request I have added a (reduced version) of the equations I am trying to solve
$\dot{c}=K'$
$\dot{K}=\frac{K^2f}{5r^2}+25(1-f)+\frac{5r}{4}(fc/r)'$
$\dot{f}=-\frac{f^2K'}{5r}$
The "polar coordinate origin" is at r=0, and the wave that propagates is in $c$ and $K$. I have removed many terms to obtain the above equations, and only kept the ones that I think are important. It seems that the instability is due to the $1-f$ term in $\dot{K}$ (if I remove it I get stable evolutions). The instability manifests itself by having $f$ and $K$ diverging with opposite signs at the points close to the origin. Around $r\sim 0$ we expect $K\sim r^4$ and $c\sim r^3$ and $f-1\sim r^2$.
The boundary conditions I put are $K=e^{-t^2}$ at some $r=r_0$, which induces a wave that should bounce at $r=0$. The initial conditions are $f=1$, $c=K=0$.
Some more details about my numerics: I have $K$ and $c$ on different grids (such that a point in $c$ lies between two points in $K$). This removed other instability issues I had. I use second/third order finite difference methods for computing derivatives and for interpolating between the grids. $c$ has a point at the origin, $K$ does not, and $f$ is on the same grid as $K$. At the origin I use the fact that the functions should be even when computing derivatives (by extending the radial coordinate to negative values), but this does not change much.
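For concreteness, here is a minimal sketch (my own reconstruction in R, not the actual code) of the even-extension trick on a staggered grid, which may help pin down whether the origin treatment itself is at fault:

nr <- 100; dr <- 0.01
r  <- (seq_len(nr) - 0.5) * dr   # staggered grid: no point exactly at r = 0
f  <- 1 - r^2                    # even test function, with f - 1 ~ r^2 as expected
fe <- c(f[1], f)                 # ghost value at r = -dr/2 by even reflection
dfdr <- (fe[3:(nr + 1)] - fe[1:(nr - 1)]) / (2 * dr)  # centered derivative at r[1..nr-1]
max(abs(dfdr - (-2 * r[1:(nr - 1)])))                 # ~ 0: the stencil is exact for quadratics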
|
A gaseous sample of a compound has a density of $\pu{0.977 g/L}$ at $\pu{710.0 torr}$ and $\pu{100.00 ^\circ C}$. What is the molar mass of this compound?
When I worked the problem I got $\pu{29.9 g/mol}$ is that right?
No! The answer $\pu{29.9 g/mol}$ is wrong.
The best way to solve these types of problems is the ideal gas equation. You need to do a little modification, as provided in the answer by Rauru Ferro, i.e. $$P V = n R T$$
Since $ n = m / M $, we can also write above as $$P V = \left( \frac{m}{M} \right) R T.$$
Since density $d = m/V$, $$P = \left( \frac{d}{M} \right) R T.$$
Now you can easily apply this modified equation in your problem.
Before applying this, all you need to do is to convert the pressure from torr to atm:$$\pu{1 atm} = \pu{760 torr}.$$
You will also need to check the units of the gas constant and the other quantities: these types of problems seem easy, but not focusing on units can lead you to the wrong answer.
The answer for your problem will be approximately $\pu{32.0 g/mol}$.
The ideal gas law states $$pV = nRT\tag{1}\label{id-gas}$$ where $p$ is defined as the pressure, any units, $V$ is the volume in liters, $n$ is the amount of substance of gas, $R$ is the universal gas constant, and $T$ is temperature in kelvin.
The information given to us is the density of $\pu{0.977 g/L}$, a pressure of $\pu{710.0 torr}$, and temperature of $\pu{100 ^\circ C}$.
Knowing that the amount of substance is $$n = \frac{m}{M},$$ where $M$ is the molar mass in $\pu{g/mol}$ and $m$ is the mass of gas in grams, we can see the unit gram cancels out and we are left with the unit mole.
However, we are given a density, $$d = \frac{m}{V} = \pu{0.977 g/L}.$$
Plugging into $\eqref{id-gas}$ the molecular mass and mass for $n$ we get:
$$ pV = \frac{m}{M}RT .$$
Isolating $m$ in density, $m = d\cdot V$, and isolating $m$ in $n = m/M\Leftrightarrow m = n\cdot M$, we can combine those two equations for $m$:
$$n\cdot M = d\cdot V$$
Isolate $n$, and plug back into our ideal gas equation $\eqref{id-gas}$: \begin{align} n &= \frac{dV}{M}& \implies&& pV &= \frac{dV}{M}RT \end{align}
Volume cancels out, and we isolate for $M$: $$ M = d \cdot \frac{RT}{p}. $$
Plug in our known values, knowing that $\pu{1 torr} = \pu{1 mm Hg}$ and $\pu{760 mm Hg} = \pu{1 atm}$:
$$ M = \pu{0.977 g//L} \cdot \frac{ \pu{0.082057 L * atm // mol * K}\cdot\pu{373.15K}}{ \pu{710 torr}\cdot \pu{mmHg//torr}\cdot\pu{{1atm}//760mmHg}} = \pu{32.0 g//mol}$$
And our answer has three significant figures.
Density is $\pu{0.977 g/l}$ at specified conditions.
What happens to the density if you increase the pressure from $710$ to $760$? This tells you to multiply by $\frac{760}{710}$.
What happens to the density if you cool it from $373$ to $273$? This tells you to multiply by $\frac{373}{273}$.
If $\pu{1 l}$ has this mass, what is the mass of $\pu{22.414 l}$?
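Carrying out these three steps numerically (my own arithmetic, using R as a calculator):

0.977 * (760/710) * (373/273) * 22.414  # density corrected to STP, times molar volume: ~ 32 g/mol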
From gas equation: $$ P V = n R T$$
As $n = m / M$ and density $d = m / V$, $$P = d R T / M,$$ so $$M = 0.977 \cdot 0.082 \cdot 373 / (710/760) = \pu{32.0 g/mol}.$$
|
In a series of posts, I wanted to get into the details of the history and foundations of econometric and machine learning models. It will be some sort of online version of our joint paper with Emmanuel Flachaire and Antoine Ly, Econometrics and Machine Learning (initially written in French), that will actually appear soon in the journal Economics and Statistics. This is the first one…
The importance of probabilistic models in economics is rooted in Working’s (1927) questions and the attempts to answer them in Tinbergen’s two volumes (1939). The latter have subsequently generated a great deal of work, as recalled by Duo (1993) in his book on the foundations of econometrics, and more particularly in the first chapter, “The Probability Foundations of Econometrics”. It should be recalled that Trygve Haavelmo was awarded the Nobel Prize in Economics in 1989 for his “clarification of the foundations of the probabilistic theory of econometrics”. Because, as Haavelmo (1944) showed (initiating a profound change in econometric theory in the 1930s, as recalled in Morgan’s (1990) Chapter 8), econometrics is fundamentally based on a probabilistic model, for two main reasons. First, the use of statistical quantities (or “measures”) such as means, standard errors and correlation coefficients for inferential purposes can only be justified if the process generating the data can be expressed in terms of a probabilistic model. Second, the probability approach is relatively general, and is particularly well suited to the analysis of “dependent” and “non-homogeneous” observations, as they are often found in economic data. We will then assume that there is a probability space (\Omega,\mathcal{F},\mathbb{P}) such that observations (y_i,\mathbf{x}_i) are seen as realizations of random variables (Y_i, \mathbf{X}_i). In practice, however, we are not very interested in the joint law of the couple (Y, \mathbf{X}): the law of \mathbf{X} is unknown, and it is the law of Y conditional on \mathbf{X} that we will be interested in. In the following, we will denote by x a single observation, \mathbf{x} a vector of observations, X a random variable, and \mathbf{X} a random vector. Abusively, \mathbf{X} may also designate the matrix of individual observations (denoted \mathbf{x}_i), depending on the context.
Foundations of mathematical statistics
As recalled in Vapnik’s (1998) introduction, inference in parametric statistics is based on the following belief: the statistician knows the problem to be analyzed well, in particular, he knows the physical law that generates the stochastic properties of the data, and the function to be found is written via a finite number of parameters [1]. To find these parameters, the maximum likelihood method is used. The purpose of the theory is to justify this approach (by discovering and describing its favorable properties). We will see that in learning, the philosophy is very different, since we do not have a priori reliable information on the statistical law underlying the problem, nor even on the function we would like to approach (we will then propose methods to construct an approximation from the data at our disposal, as in Vapnik (1998)). A “golden age” of parametric inference, from 1930 to 1960, laid the foundations of mathematical statistics, which can be found in all statistical textbooks, including today’s. As Vapnik (1998) states, the classical parametric paradigm is based on the following three beliefs:

1. To find a functional relationship from the data, the statistician is able to define a set of functions, linear in their parameters, that contain a good approximation of the desired function. The number of parameters describing this set is small.
2. The statistical law underlying the stochastic component of most real-life problems is the normal law. This belief has been supported by reference to the central limit theorem, which stipulates that under general conditions the sum of a large number of random variables is approximated by the normal law.
3. The maximum likelihood method is a good tool for estimating parameters.
In this section we will come back to the construction of the econometric paradigm, directly inspired by that of classical inferential statistics.
Conditional laws and likelihood
Linear econometrics has been constructed under the assumption of individual data, which amounts to assuming independent variables (Y_i, \mathbf{X}_i) (one could also imagine temporal observations – we would then have a process (Y_t, \mathbf{X}_t) – but we will not discuss time series here). More precisely, we will assume that, conditionally on the explanatory variables \mathbf{X}_i, the variables Y_i are independent. We will also assume that these conditional laws remain in the same parametric family, but that the parameter is a function of \mathbf{x}. In the Gaussian linear model it is assumed that: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2)~~~~(1) where \mu(\mathbf{x})=\beta_0+\mathbf{x}^T\mathbf{\beta} and \mathbf{\beta}\in\mathbb{R}^{p}.
It is usually called a ‘linear’ model since \mathbb{E}[Y\vert \mathbf{X}=\mathbf{x}]=\beta_0+\mathbf{x}^T\mathbf{\beta} is a linear combination of covariates [2]. It is said to be a homoscedastic model if Var[Y|\mathbf{X}=\mathbf{x}]=\sigma^2, where \sigma^2 is a positive constant. To estimate the parameters, the traditional approach is to use the Maximum Likelihood estimator, as initially suggested by Ronald Fisher. In the case of the Gaussian linear model, the log-likelihood is written: \log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x}) = -\frac{n}{2}\log[2\pi\sigma^2] - \frac{1}{2\sigma^2}\sum_{i=1}^n (y_i-\beta_0-\mathbf{x}_i^T\mathbf{\beta})^2 Note that the term on the right, measuring a distance between the data and the model, will be interpreted as the deviance in generalized linear models. Then we will set: (\widehat{\beta}_0,\widehat{\mathbf{\beta}},\widehat{\sigma}^2)=\text{argmax}\left\lbrace\log\mathcal{L}(\beta_0, \mathbf{\beta},\sigma^2\vert \mathbf{y},\mathbf{x})\right\rbrace The maximum likelihood estimator is obtained by minimizing the sum of squared errors (the so-called “least squares” estimator) that we will find again in the “machine learning” approach.
The first order conditions allow us to find the normal equations, whose matrix form is \mathbf{X}^T[\mathbf{y}-\mathbf{X}\mathbf{\beta}]=\mathbf{0}, which can also be written (\mathbf{X}^T \mathbf{X})\mathbf{\beta}=\mathbf{X}^T \mathbf{y}. If \mathbf{X} is a full (column) rank matrix, then we find the classical estimator:\widehat{\mathbf{\beta}}=(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}=\mathbf{\beta}+(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{\varepsilon}~~~(2) using the residual-based writing (as often in econometrics), y=\mathbf{x}^T\mathbf{\beta}+\varepsilon. The Gauss-Markov theorem ensures that this estimator is the unbiased linear estimator with minimum variance. It can then be shown that \widehat{\mathbf{\beta}}\sim\mathcal{N}(\mathbf{\beta},\sigma^2(\mathbf{X}^T\mathbf{X})^{-1}), and in particular, if we simply need the first two moments: \mathbb{E}[\widehat{\mathbf{\beta}}]=\mathbf{\beta}~~~Var[\widehat{\mathbf{\beta}}]=\sigma^2 [\mathbf{X}^T\mathbf{X}]^{-1} In fact, the normality hypothesis makes it possible to make a link with mathematical statistics, but it is possible to construct the estimator given by equation (2) without that Gaussian assumption. Hence, if we assume that Y|\mathbf{X} has the same distribution as \mathbf{x}^T\mathbf{\beta}+\varepsilon, where \mathbb{E}[\varepsilon]=0, Var[\varepsilon]=\sigma^2 and Cov[X_j,\varepsilon]=0 for all j, then \widehat{\mathbf{\beta}} is an unbiased estimator of \mathbf{\beta} with the smallest variance [3] among unbiased linear estimators. Furthermore, if we cannot get normality at finite distance, asymptotically this estimator is Gaussian, with \sqrt{n}(\widehat{\mathbf{\beta}}-\mathbf{\beta})\overset{\mathcal{L}}{\rightarrow}\mathcal{N}(\mathbf{0},\mathbf{\Sigma}) as n\rightarrow\infty, for some matrix \mathbf{\Sigma}. The condition of having a full rank \mathbf{X} matrix can be (numerically) strong in large dimensions. If it is not satisfied, (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T does not exist. If \mathbb{I} denotes the identity matrix, however, it should be noted that (\mathbf{X}^T \mathbf{X}+\lambda\mathbb{I})^{-1}\mathbf{X}^T still exists, whatever \lambda>0. This estimator is called the ridge estimator of level \lambda (introduced in the 1960s by Hoerl (1962), and associated with a regularization studied by Tikhonov (1963)). This estimator naturally appears in a Bayesian econometric context.
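A minimal R illustration of the least squares estimator from equation (2) and its ridge variant, on simulated data (my own sketch; names and values are mine):

set.seed(42)
n <- 200
X <- cbind(1, matrix(rnorm(2 * n), n))              # intercept + two covariates
y <- X %*% c(1, 2, -1) + rnorm(n)
solve(t(X) %*% X) %*% t(X) %*% y                    # OLS estimator, equation (2)
lambda <- 1
solve(t(X) %*% X + lambda * diag(3)) %*% t(X) %*% y # ridge estimator of level lambda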
Residuals
It is not uncommon to introduce the linear model from the distribution of the residuals, as we mentioned earlier. Equation (1) is then often written as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\varepsilon_i~~~~(3) where the \varepsilon_i’s are realizations of independent and identically distributed (i.i.d.) random variables from some \mathcal{N}(0,\sigma^2) distribution. With vector notation, we write \mathbf{\varepsilon}\overset{\mathcal{L}}{\sim}\mathcal{N}(\mathbf{0},\sigma^2\mathbb{I}). The estimated residuals are defined as: \widehat{\varepsilon}_i =y_i-[\widehat{\beta}_0+\mathbf{x}_i^T\widehat{\mathbf{\beta}}] Those (estimated) residuals are basic tools for diagnosing the relevance of the model.
An extension of the model described by equation (1) has been proposed to take into account a possible heteroscedastic character: (Y\vert \mathbf{X}=\mathbf{x})\overset{\mathcal{L}}{\sim}\mathcal{N}(\mu(\mathbf{x}),\sigma^2(\mathbf{x})) where \sigma^2(\mathbf{x}) is a positive function of the explanatory variables. This model can be rewritten as: y_i=\beta_0+\mathbf{x}_i^T\mathbf{\beta}+\sigma(\mathbf{x}_i)\cdot\varepsilon_i where the residuals are still i.i.d., with unit variance, \varepsilon_i=\frac{y_i-[\beta_0+\mathbf{x}_i^T\mathbf{\beta}]}{\sigma(\mathbf{x}_i)} While residual-based equations are popular in linear econometrics (when the dependent variable is continuous), they are no longer popular in counting models or in logistic regression.
However, writing using an error term (as in equation (3)) raises many questions about the representation of an economic relationship between two quantities. For example, one can assume that there is a (linear, to begin with) relationship between the quantity of a traded good, q, and its price p. This allows us to imagine a supply equation q_i=\beta_0+\beta_1 p_i+u_i (u_i being an error term) where the quantity sold depends on the price, but, in an equally legitimate way, one can imagine that the price depends on the quantity produced (what one could call a demand equation), p_i=\alpha_0+\alpha_1 q_i+v_i (v_i denoting another error term). Historically, the error term in equation (3) could be interpreted as an idiosyncratic error on the variable y, the so-called explanatory variables being assumed to be fixed, but this interpretation often makes the link between an economic relationship and a complicated economic model difficult, since economic theory speaks abstractly of a relationship between two magnitudes while the econometric model imposes a specific form (which magnitude is y and which is x), as shown in more detail in Morgan (1990) Chapter 7.
[1] This approach can be compared to structural econometrics, as presented for example in Keane (2010). [2] Here, we will try to distinguish \beta_0, the intercept, from the other parameters \mathbf{\beta}, since they are treated differently in many extensions (e.g. regularization). Nevertheless, in many expressions \mathbf{\beta} will denote the joint vector (\beta_0, \mathbf{\beta}) in general formulas, to avoid overly heavy notations. [3] In the sense that the difference between the variance matrices is a positive (semi-definite) matrix.
|
I'll reword your question a bit. Let me know if I have not framed it properly.
Say we have a collection of $r$ unique elements each of which occurs in multiple copies. Namely, our collection has $n_1$ identical copies of $X_1$, $n_2$
identical copies of $X_2$, $\ldots$, and $n_r$ identical copies of
$X_r$.
How many ways can we form combinations of length $k$, where repeated copies of unique elements are allowed?
This question has a standard answer which can be derived informally and also formally (see below). In either case the answer is
$$\sum_{\ell=0}^{r} (-1)^{\ell} \sum_{\textbf{j} \in \{n_k\}_{\ell} } \binom{k+r-1 - \sum_{i=1}^{\ell}(j_i +1)}{r-1}$$
where $\{n_k\}_{\ell}$ is the set of all $\ell$-length combinations of the elements $(n_1, \ldots, n_r)$, $\textbf{j} = (j_1, j_2, \ldots, j_{\ell})$ is a vector representing a particular element of the set of $\{n_k\}_{\ell}$, and the second summation is over all such combinations. (The $\ell=0$ term simply reduces to "$k+r-1$ choose $r-1$".)
We can apply the above formula to your example. In your case you have $r=3$ types of elements and you're trying to form combinations of length $k=4$. We identify the blue, orange, and yellow balls with $X_1$, $X_2$, and $X_3$ respectively. We then have $n_1 = 3$, $n_2 = 1$, and $n_3 =4$. The possible values of $\textbf{j}$ for the various $\ell$ are
\begin{eqnarray}\ell = 1: & \quad j_1 & \in \{3, 1, 4\} \\\ell = 2: & \quad (j_1, j_2) & \in \{(3, 1), (1,4), (4, 3)\} \\\ell = 3: & \quad (j_1, j_2, j_3) & \in \{(3, 1, 4) \}\end{eqnarray}
We thus find \begin{eqnarray}& & \binom{4+3-1}{3-1} - \Big[ \binom{4+3-1 -(3+1)}{3-1} +\binom{4+3-1 -(1+1)}{3-1}+\binom{4+3-1 -(4+1)}{3-1}\Big] \\& & + \Big[ \binom{4+3-1 -(4+2)}{3-1} +\binom{4+3-1 -(5+2)}{3-1}+\binom{4+3-1 -(7+2)}{3-1}\Big]\\& &- \Big[ \binom{4+3-1 -(8+3)}{3-1}\Big]\\& = & \binom{6}{2} - \Big[\binom{2}{2} + \binom{4}{2}+ 0\Big] +\Big[0\Big] - 0 = 8, \end{eqnarray} where we used $\binom{n}{k} = 0$ if $n<k$. This result can be checked by explicitly listing all such combinations.
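As a quick sanity check (not part of the original argument), the count can be brute-forced in a few lines of Python:

from itertools import combinations

# Multiset with 3 copies of X1, 1 copy of X2, 4 copies of X3; combinations of length 4.
pool = ['X1'] * 3 + ['X2'] * 1 + ['X3'] * 4
distinct = {tuple(sorted(c)) for c in combinations(pool, 4)}
print(len(distinct))  # 8, matching the inclusion-exclusion computation above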
Formal Derivation
To derive the general answer to the highlighted question, we note that the desired number is equal to the number of ways $\alpha_1$, $\alpha_2, \ldots, \alpha_r$ can all sum to $k$ where each $\alpha_m$ (i.e., the number of copies of $X_m$) runs over $0, 1, \ldots, n_m$. Namely, we want to compute $$(\#)_{n_1,\ldots, n_r; k} = \sum_{\alpha_1=0}^{n_1}\cdots \sum_{\alpha_r=0}^{n_r} \delta_{k, \alpha_1+\cdots +\alpha_r},$$where $\delta_{\ell, m}$ is the Kronecker delta. To compute the above quantity, we need to use two identities, both of which are established by Cauchy's integral formula:
$$ \delta_{m, \ell} = \frac{1}{2\pi i} \oint_C \frac{dz}{z} z^{m-\ell}, \qquad \binom{k}{\ell} = \frac{1}{2\pi i} \oint_C dz\, \frac{z^{k}}{(z-1)^{\ell +1}}.$$Using the contour integral representation of the Kronecker delta, we have
\begin{align}(\#)_{n_1,\ldots, n_r; k} & = \sum_{\alpha_1=0}^{n_1} \cdots \sum_{\alpha_{r}=0}^{n_r} \oint_C \frac{dz}{z} z^{k- \alpha_1- \cdots - \alpha_r}\\& = \oint_C \frac{dz}{z} z^{k} \prod_{m=1}^{r} \sum_{\alpha_m=0}^{n_m} z^{-\alpha_m} \quad \text{[Commute summation and integral]} \\& = \oint_C \frac{dz}{z} z^{k} \prod_{m=1}^{r} \frac{1-z^{-n_m-1}}{1-z^{-1}} \quad \text{[Geometric Series Identity]} \\& = \oint_C dz \frac{z^{k+r-1}}{(z-1)^r} \prod_{m=1}^{r} \left(1- z^{-1} z^{-n_m}\right).\end{align}
Now, using the identity \begin{equation}\prod_{i=1}^N(1+\lambda x_i) = \sum_{\ell=0}^{N} \lambda^{\ell} \Pi_{\ell}(x_1, \ldots, x_N), \end{equation}where $\Pi_{\ell}(x_1, \ldots, x_N)$ is the $\ell$th elementary symmetric polynomial in the variables $(x_1, \ldots, x_N)$, we find \begin{align}(\#)_{n_1,\ldots, n_r; k} & = \sum_{\ell=0}^{r}(-1)^{\ell}\,\oint_C dz \frac{z^{k+r-1- \ell}}{(z-1)^r} \, \Pi_{\ell}(z^{-n_1}, \ldots, z^{-n_r}).\label{eq:omegpre}\end{align}For an arbitrary elementary symmetric polynomial we have the definition \begin{equation}\Pi_{\ell}(x_1, x_2, \ldots, x_r) = \sum_{ \textbf{j} \in \{1, 2, \ldots, r\}_{\ell} } x_{j_1} x_{j_2}\cdots x_{j_{\ell}},\end{equation}where $\textbf{j} = (j_1, \ldots, j_{\ell})$ is a particular combination of length $\ell$ of the elements in $\{1, 2, \ldots, r\}$ and the summation is over all combinations of length $\ell$. For our case, we similarly have \begin{equation}\Pi_{\ell}(z^{-n_1}, \ldots, z^{-n_r}) = \sum_{ \textbf{j} \in \{n_k\}_{\ell} } z^{- j_1 - \ldots - j_{\ell}}.\end{equation}$(\#)_{n_1,\ldots, n_r; k} $ therefore becomes \begin{align}(\#)_{n_1,\ldots, n_r; k}& = \sum_{\ell=0}^{r}(-1)^{\ell}\, \sum_{ \textbf{j} \in \{n_m\}_{\ell} }\oint_C dz \, \frac{z^{k+r-1- \ell}}{(z-1)^r} \,z^{- j_1 - \ldots - j_{\ell}} \\& = \sum_{\ell=0}^{r}(-1)^{\ell}\, \sum_{ \textbf{j} \in \{n_m\}_{\ell} }\oint_C dz\, \frac{z^{k+r-1- \sum_{i=1}^{\ell}(j_i +1)}}{(z-1)^r}.\end{align}Using the contour integral representation of the binomial, we ultimately find \begin{equation}(\#)_{n_1,\ldots, n_r; k} = \sum_{\ell=0}^{r}(-1)^{\ell}\, \sum_{ \textbf{j} \in \{n_m\}_{\ell} } \binom{k+r-1 - \sum_{i=1}^{\ell}(j_i +1)}{r-1},\label{eq:omeggen}\end{equation}where $j_1, \ldots, j_{\ell}$ is a particular combination of $\ell$ elements in $\{n_1, \ldots, n_r\}$ and the second summation runs over all such combinations.
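The final expression also translates directly into code; here is a minimal sketch (my own illustrative implementation, with the inner sum taken over index combinations of $(n_1, \ldots, n_r)$):

from itertools import combinations
from math import comb

def count_bounded(ns, k):
    # Sum over l, and over l-length combinations j of (n_1, ..., n_r),
    # of (-1)^l * C(k + r - 1 - sum_i (j_i + 1), r - 1).
    r = len(ns)
    total = 0
    for l in range(r + 1):
        for j in combinations(ns, l):
            top = k + r - 1 - sum(ji + 1 for ji in j)
            if top >= 0:
                total += (-1) ** l * comb(top, r - 1)
    return total

print(count_bounded([3, 1, 4], 4))  # 8, as in the worked example above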
Some Special Cases
For $n_m=1$ for all $m$, $\{n_1, \ldots, n_r\}$ becomes the $r$ element set $\{1, \ldots, 1\}$. Thus summing over all $\ell$-length combinations of $\{n_1, \ldots, n_r\}$ reduces to setting $j_i= 1$ for all $i$ and multiplying the relevant term by the number of ways to choose $\ell$ elements from a set of $r$ elements, i.e., $\binom{r}{\ell}$. We thus find \begin{equation}(\#)_{n_1,\ldots, n_r; k} = \sum_{\ell=0}^{r}(-1)^{\ell}\, \binom{r}{\ell} \binom{k+r-1 - 2\ell}{r-1}.\end{equation}However, if all $n_m$ are equal to 1, then the number of ways we can choose combinations of length $k$ from the elements $X_1, X_2, \ldots, X_r$ is simply the number of ways to choose $k$ elements from a set of $r$ elements. Thus we can conclude \begin{equation}\sum_{\ell=0}^{r}(-1)^{\ell}\, \binom{r}{\ell} \binom{k+r-1 - 2\ell}{r-1} = \binom{r}{k}. \end{equation}
If $n_m = \infty$ for all $m$, then $j_i = \infty$ for all $i$ and all terms in \eqref{eq:omeggen} are zero except for $\ell=0$. (The binomial $\binom{n}{k}$ is zero whenever $n< k$.) We thus have \begin{equation}(\#)_{n_1,\ldots, n_r; k} = \binom{k+r-1}{r-1},\end{equation}which matches the standard result (see "Combination with repetitions." for example).
For $n_m = n$ for all $m$, then $j_i = n$ for all $i$. Similar to the first special case, summing over all $\ell$-length combinations of $\{n_1, \ldots, n_r\}$ reduces to setting $j_i= n$ for all $i$ and multiplying the relevant term by the number of ways to choose $\ell$ elements from a set of $r$ elements. We therefore have \begin{equation}(\#)_{n_1,\ldots, n_r; k} = \sum_{\ell=0}^{r}(-1)^{\ell}\, \binom{r}{\ell} \binom{k+r-1 - \ell(n+1)}{r-1},\end{equation}which matches the expression in the intuitive argument given by Thoma in "Basic inclusion exclusion problem - picking 24 balls out of four sets of 10 balls".
|
hot!", referring of course to negative absolute temperatures.
But why are they hot? Well, a common explanation is that it's not really the temperature $T$ that is the fundamental quantity, but rather the statistical beta, or "coldness" $\beta=1/T$. So negative temperatures have
negative coldness, which is hotter than any positive temperature, since even the hottest positive temperature is only going to give you a small, but positive coldness. So the fact that negative temperatures are hot is a result of the fact that $1/x$ is not really decreasing everywhere, due to its discontinuity at zero.
But why? Why is $\beta$ the fundamental quantity? Why should we arbitrarily consider this to be our metric of hotness and coldness, and not $T$?
This is a really interesting example to teach people to think in a positivist way in physics, and to operationalise things. What does it mean for something to be hot?
Well, you touch it and you say "Ouch!"
Seriously, that's all there is -- if you touch something hot, you say "Ouch!", if you touch something cold, you say "Whee!", or something. That's the fundamental, positivist definition of hotness -- "Does it feel hot?"
Well, why would something feel hot?
Because it transfers heat to you. And this is our operational, positivistic definition of hotness -- if one body transfers heat to another body, it is said to be hotter than the other body.
So we need to find out a criterion to decide the direction of heat flow between two bodies. In the past, you've probably taken for granted that heat is transferred from a body with higher temperature to that with lower temperature, but that's just a crappy high school definition. What really causes heat diffusion? Well, when there are a lot of fast-moving particles in one place and slow-moving particles in another, it turns out that a state where the particles are more uniformly spread-out is more likely to happen in future. This is just the requirement that entropy must increase -- it's the second law of thermodynamics.
So if we have body 1 with temperature $T_1$ and body 2 with temperature $T_2$, with heat flow of $\Delta Q$ from body 1 to body 2, then the second law of thermodynamics is stated as:
$$\Delta S_1+\Delta S_2>0$$
$$-\Delta Q/T_1+\Delta Q/T_2>0$$
$$\Delta Q\left(\frac1{T_2}-\frac1{T_1}\right)>0$$
In other words -- if $\Delta Q>0$, i.e. if the heat flow is really from body 1 to body 2, then we require $1/T_2>1/T_1$, and if the heat flow is from body 2 to body 1 ($\Delta Q<0$), we require $1/T_1>1/T_2$.
And there you have it! Heat does
not flow from the body with higher temperature to the body with lower temperature -- it flows from the body with lower $1/T$ to the body with higher $1/T$. For positive temperatures, these are the same thing -- but negative temperatures have the lowest $1/T$, and are thus hotter.
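In code, the second-law criterion is a one-liner; a minimal sketch with assumed toy values:

def flows_from_1_to_2(T1, T2):
    # Heat flows from body 1 to body 2 iff 1/T2 > 1/T1 (lower coldness means hotter)
    return 1.0 / T2 > 1.0 / T1

print(flows_from_1_to_2(400.0, 300.0))   # True: the hotter positive temperature gives up heat
print(flows_from_1_to_2(-300.0, 300.0))  # True: a negative temperature is hotter than any positive one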
So those of you want the U.S. to switch to Celsius, or those who report temperatures in Kelvin for no good reason except intellectual signalling... perhaps start reporting
statistical betas in 1/Kelvins instead.
...
"Hey, Alexa, is it chilly outside?"
"The coldness in your area is 0.00375 anti-Kelvin."
"...I think I'll just risk freezing to death."
|
I have an electrochemical reactor, assumed isothermal and isobaric, with 4 reactions, and I am trying to calculate the "ideal chemical work" exerted by each reaction using the extents of reaction (which I have already calculated).
I am calculating the 'chemical work' $w$ by multiplying the extent of reaction by the Gibbs energy of the respective reaction $k$:
$$w_k = \mathrm{d}G_k = \Delta_\mathrm{r} G_k \cdot \mathrm{d}\xi _k $$
I'm calculating the reaction Gibbs energy $\Delta_\mathrm{r}G$ as basically the stoichiometric sum of the chemical potentials
$$\Delta_\mathrm{r} G = \sum_i(\nu_i \cdot \mu_i)$$
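For concreteness, here is a minimal sketch of these two formulas in Python; the stoichiometric coefficients, chemical potentials and extent increment below are placeholders of my own, not values from the actual reactor:

import numpy as np

nu = np.array([-1.0, -0.5, 1.0])         # stoichiometric coefficients nu_i (reactants negative)
mu = np.array([-237e3, -155e3, -306e3])  # chemical potentials mu_i in J/mol (placeholders)
dxi = 0.01                               # extent-of-reaction increment in mol

delta_rG = nu @ mu                       # Delta_r G = sum_i nu_i * mu_i
w = delta_rG * dxi                       # w_k = Delta_r G_k * d(xi_k)
print(delta_rG, w)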
However, I am not sure which conditions to use for the calculation of the reaction Gibbs energy, as the reactions are reversible, and for 2 of the reactions I have taken the forward reaction as the opposite of the expected direction (i.e. the forward reaction is the fuel cell reaction but I am operating in electrolyser mode, so the extent of reaction for these reactions is negative). Do I use the conditions at the reactor outlet? An average of the inlet and outlet? Or am I going about this completely wrong?
|
...of two propositions, that produces a value of ''true'' if and only if both of its operands are true. The [[truth table]] of <math>p ~\operatorname{AND}~ q,</math> also written <math>p \land q\!</math>
A '''logical graph''' is a graph-theoretic structure in one of the systems of graphical syntax that Charles Sanders Peirce developed for logic....ve graphs'', and ''existential graphs'', Peirce developed several versions of a graphical formalism, or a graph-theoretic formal language, designed to be
Here is Peirce's own statement and proof of the law:...le and other propositions connected with it. One of the simplest formulae of this kind is:</p>
...inguished from, though closely related to, its study from the perspectives of abstract algebra on the one hand and formal logic on the other. Two definitions of the relation concept are common in the literature. Although it is usually
...ion (mathematics)|polyadic or finitary relation]], one in which the number of places in the relation is three. In other language that is often used, a t...Therefore it will be useful to consider a few concrete examples from each of these two realms.
A '''sign relation''' is the basic construct in the theory of signs, also known as [[semeiotic]] or [[semiotics]], as developed by Charle...th the same reproductive power, the sunflower would become a Representamen of the sun. (C.S. Peirce, “Syllabus” (''c''. 1902), ''Collec
...})\!</math> is a logical connective that says “just one false” of its logical arguments. ...orm <math>\texttt{Mno}(),\!</math> then it cannot be true that exactly one of the arguments is false, so <math>\texttt{Mno}() = \texttt{False}.\!</math>
...ath> in the '''parameter set''' <math>\Alpha\!</math> is an indexed family of operators <math>(\Omega_\alpha)_\Alpha = \{ \Omega_\alpha : \alpha \in \Alp* [[Universe of discourse]]
...ter that is, up to isomorphism, constituted by the structural relationships of mathematical objects called ''propositions''....s a set of transformation rules that define a binary relation on the space of expressions.
...expressive capacity to describe change and diversity in logical universes of discourse....differential calculus of Leibniz and Newton augments the analytic geometry of Descartes.
...ks, for example, “lover of __”, or “giver of __ to __”.* [[Universe of discourse]]
...cs)|relational predicate]] that arises as the limit of an iterated process of [[hypostatic abstraction]].Here is one of Peirce's definitive discussions of the concept:
...cation'', ''reification'', and ''subjectal abstraction''. The object of discussion or thought thus introduced is termed a ''[[hypostatic object]]''...into an extra subject, upping the ''arity'', also called the ''adicity'', of the main predicate in the process.
...s affords a distinctive perspective on the subject, even though all angles of approach must ultimately converge on the same formal subject matter....es, Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic]]”.
...s <math>\{ \operatorname{false}, \operatorname{true} \}.</math> The names of the logical values, or ''truth values'', are commonly abbreviated in accord...all representation of truth functions as boolean functions. The remainder of this article assumes the usual representation, taking the equations <math>\
...of the various types of inquiry and a treatment of the ways that each type of inquiry achieves its aim....' must necessarily be predicated of all ''C''. … I call this kind of figure the First. (Aristotle, ''Prior Analytics'', 1.4).</p>
...f logical criticism of its inferences, must be aware of this determination of its ideas by previous ideas. (Peirce, "On Time and Thought", CE 3, 68...approach, it is possible to see a question of articulation and a question of explanation:
...adic]] [[sign relations]], along with ''semiotic'' and the plural variants of both terms. The form ''semeiotic'' is often used to distinguish Peirce's t==Types of signs==
...notation and connotation, or, in roughly equivalent terms, by the concepts of extension and comprehension....rvard University (1865) and the Lowell Institute (1866). Here is one of the starting points:
...se, advising the addressee on an optimal way of “attaining clearness of apprehension”.==Seven ways of looking at a pragmatic maxim==
...ell-bounded universes of discourse or its horizon may extend to the limits of the human imagination.Notions of truth are notoriously difficult to disentangle from many of our most basic concepts — meaning, reality, and values in general, to
...ccumulated body of provisional knowledge, that seeks to discover good ways of achieving recognized aims, ends, goals, objectives, or purposes. The three '''normative sciences''', according to traditional conceptions in philosophy, are ''aesthetics'', ''ethics'', and ''logic''.
...l knowledge, that seeks to discover what is true about a recognized domain of phenomena.* [http://intersci.ss.uci.edu/wiki/index.php/Descriptive_science Descriptive Science @ InterSciWiki]
The concept of '''logical implication''' encompasses a specific logical [[function (mathem...concept of logical implication are expressed in ordinary language by means of linguistic forms like the following:
...o propositions, that produces a value of ''true'' just in case exactly one of its operands is true. The [[truth table]] of <math>p ~\operatorname{XOR}~ q,</math> also written <math>p + q\!</math> or
...' with ''parameter'' <math>k\!</math> in the set <math>\mathbb{N}\!</math> of non-negative integers....math> left tacit, as the appropriate application is implicit in the number of operands listed. Thus <math>\Omega (x_1, \ldots, x_k)\!</math> may be take
...al values, typically the values of two propositions, that produces a value of ''true'' if and only if both operands are false or both operands are true. The [[truth table]] of <math>p ~\operatorname{EQ}~ q,</math> also written <math>p = q,\!</math> <m
...f two propositions, that produces a value of ''false'' if and only if both of its operands are false. The [[truth table]] of <math>p ~\operatorname{OR}~ q,</math> also written <math>p \lor q,\!</math>
...n other words, it produces a value of ''true'' if and only if at least one of its operands is false. The [[truth table]] of <math>p ~\operatorname{NAND}~ q,</math> also written <math>p \stackrel{\cir
...other words, it produces a value of ''false'' if and only if at least one of its operands is true. The [[truth table]] of <math>p ~\operatorname{NNOR}~ q,</math> also written <math>p \curlywedge q,
...n, that produces a value of ''true'' when its operand is false and a value of ''false'' when its operand is true. The [[truth table]] of <math>\operatorname{NOT}~ p,</math> also written <math>\lnot p,\!</math> ap
|
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and no reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good.
It is in fact difficult; I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors).
Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $$n!+1=m^2,$$ where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. == Brown numbers == Pairs of the numbers $(n, m)$ that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...
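A minimal brute-force sketch (my own illustration) recovers these pairs quickly:

from math import factorial, isqrt

# Search for Brown numbers: n! + 1 = m^2
for n in range(1, 200):
    s = factorial(n) + 1
    m = isqrt(s)
    if m * m == s:
        print(n, m)  # prints (4, 5), (5, 11), (7, 71)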
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ trailing zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits under the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac18-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x+\frac18-\frac18\ln(8\pi x)\}>0$ in the domain.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$ It is of anticipation that there will be much fewer solutions for incr...
|
The AUC score is a popular summary statistic that is often used to communicate the performance of a classifier. However, we illustrate here that this score depends not only on the quality of the model in question, but also on the difficulty of the test set considered: If samples are added to a test set that are easily classified, the AUC will go up — even if the model studied has not improved. In general, this behavior implies that isolated, single AUC scores cannot be used to meaningfully qualify a model’s performance. Instead, the AUC should be considered a score that is primarily useful for comparing and ranking multiple models — each at a common test set difficulty.
Introduction
An important challenge associated with building good classification algorithms centers around their optimization: If an adjustment is made to an algorithm, we need a score that will enable us to decide whether or not the change made was an improvement. Many scores are available for this purpose. A sort-of all-purpose score that is quite popular for characterizing binary classifiers is the model AUC score (defined below).
The purpose of this post is to illustrate a subtlety associated with the AUC that is not always appreciated: The score depends strongly on the difficulty of the test set used to measure model performance. In particular, if any soft-balls are added to a test set that are easily classified (i.e., are far from any decision boundary), the AUC will increase. This increase does not imply a model improvement. Two key take-aways follow:
1. The AUC is an inappropriate score for comparing models validated on test sets having differing sampling distributions. Therefore, comparing the AUCs of models trained on samples having differing distributions requires care: the training sets can have different distributions, but the test sets must not. Moreover, a single AUC measure cannot typically be used to meaningfully communicate the quality of a single model (though single model AUC scores are often reported!).
2. The primary utility of the AUC is that it allows one to compare multiple models at fixed test set difficulty: if a model change results in an increase in the AUC at fixed test set distribution, it can often be considered an improvement.
We review the definition of the AUC below and then demonstrate the issues alluded to above.
The AUC score, reviewed
Here, we quickly review the definition of the AUC. This is a score that can be used to quantify the accuracy of a binary classification algorithm on a given test set $\mathcal{S}$. The test set consists of a set of feature vector-label pairs of the form
\begin{eqnarray}\tag{1} \mathcal{S} = \{(\textbf{x}_i, y_i) \}. \end{eqnarray} Here, $\textbf{x}_i$ is the set of features, or predictor variables, for example $i$ and $y_i \in \{0,1 \}$ is the label for example $i$. A classifier function $\hat{p}_1(\textbf{x})$ is one that attempts to guess the value of $y_i$ given only the feature vector $\textbf{x}_i$. In particular, the output of the function $\hat{p}_1(\textbf{x}_i)$ is an estimate for the probability that the label $y_i$ is equal to $1$. If the algorithm is confident that the class is $1$ ($0$), the probability returned will be large (small).
To characterize model performance, we can set a threshold value of $p^*$ and mark all examples in the test set with $\hat{p}(\textbf{x}_i) > p^*$ as being candidates for class one. The fraction of the truly positive examples in $\mathcal{S}$ marked in this way is referred to as the true-positive rate (TPR) at threshold $p^*$. Similarly, the fraction of negative examples in $\mathcal{S}$ marked is referred to as the false-positive rate (FPR) at threshold $p^*$. Plotting the TPR against the FPR across all thresholds gives the model’s so-called receiver operating characteristic (ROC) curve. A hypothetical example is shown at right in blue. The dashed line is just the $y=x$ line, which corresponds to the ROC curve of a random classifier (one returning a uniform random $p$ value each time).
Notice that if the threshold is set to $p^* = 1$, no positive or negative examples will typically be marked as candidates, as this would require one-hundred percent confidence of class $1$. This means that we can expect an ROC curve to always go through the point $(0,0)$. Similarly, with $p^*$ set to $0$, all examples should be marked as candidates for class $1$ — and so an ROC curve should also always go through the point $(1,1)$. In between, we hope to see a curve that increases in the TPR direction more quickly than in the FPR direction — since this would imply that the examples the model is most confident about tend to actually be class $1$ examples. In general, the larger the Area Under the (ROC) Curve — again, blue at right — the better. We call this area the “AUC score for the model” — the topic of this post.
AUC sensitivity to test set difficulty
To illustrate the sensitivity of the AUC score to test set difficulty, we now consider a toy classification problem: In particular, we consider a set of unit-variance normal distributions, each having a different mean $\mu_i$. From each distribution, we will take a single sample $x_i$. From this, we will attempt to estimate whether or not the corresponding mean satisfies $\mu_i > 0$. That is, our test set will take the form $\mathcal{S} = \{(x_i, \mu_i)\}$, where $x_i \sim N(\mu_i, 1)$. For different $\mathcal{S}$, we will study the AUC of the classifier function,
\begin{eqnarray} \label{classifier} \tag{2}
\hat{p}(x) = \frac{1}{2} (1 + \text{tanh}(x)) \end{eqnarray} A plot of this function is shown below. You can see that if any test sample $x_i$ is far to the right (left) of $x=0$, the model will classify the sample as positive (negative) with high certainty. At intermediate values near the boundary, the estimated probability of being in the positive class lifts in a reasonable way.
Notice that if a test example has a mean very close to zero, it will be difficult to classify that example as positive or negative. This is because both positive and negative $x$ samples are equally likely in this case. This means that the model cannot do much better than a random guess for such $\mu$. On the other hand, if an example $\mu$ is selected that is very far from the origin, a single sample $x$ from $N(\mu, 1)$ will be sufficient to make a very good guess as to whether $\mu > 0$. Such examples are hard to get wrong: soft-balls.
The impact of adding soft-balls to the test set on the AUC for model (\ref{classifier}) can be studied by changing the sampling distribution of $\mathcal{S}$. The following python snippet takes samples $\mu_i$ from three distributions — one tight about $0$ (resulting in a very difficult test set), one that is very wide containing many soft-balls that are easily classified, and one that is intermediate. The ROC curves that result from these three cases are shown following the code. The three curves are very different, with the AUC of the soft-ball set very large and that of the tight set close to that of the random classifier. Yet, in each case the model considered was the same — (\ref{classifier}). How could the AUC have improved?!
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics

def classifier(x):
    # model (2): p-hat(x) = (1 + tanh(x)) / 2
    return 0.5 * (1 + np.tanh(x))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 3.5))
SAMPLES = 1000
for means_std in [3, 0.5, 0.001]:
    means = means_std * np.random.randn(SAMPLES)
    x_set = np.random.randn(SAMPLES) + means
    predictions = [classifier(item) for item in x_set]
    fpr, tpr, thresholds = metrics.roc_curve(1 * (means > 0), predictions)
    ax1.plot(fpr, tpr, label=means_std)
    ax2.plot(means, 0 * means, '*', label=means_std)
ax1.plot(fpr, fpr, 'k--')
ax1.legend(loc='lower right', shadow=True)
ax2.legend(loc='lower right', shadow=True)
ax1.set_title('TPR versus FPR -- The ROC curve')
ax2.set_title('Means sampled for each case')
The explanation for the differing AUC values above is clear: Consider, for example, the effect of adding soft-ball negatives to $\mathcal{S}$. In this case, the model (\ref{classifier}) will be able to correctly identify almost all true positive examples at a much higher threshold than that where it begins to mis-classify the introduced negative softballs. This means that the ROC curve will now hit a TPR value of $1$ well-before the FPR does (which requires all negatives — including the soft-balls to be mis-classified). Similarly, if many soft-ball positives are added in, these will be easily identified as such well-before any negative examples are mis-classified. This again results in a raising of the ROC curve, and an increase in AUC — all without any improvement in the actual model quality, which we have held fixed.
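To put numbers on this, here is a minimal follow-up sketch (my own, consistent with the snippet above) that computes the AUC of the same fixed model under the three sampling widths:

import numpy as np
from sklearn.metrics import roc_auc_score

def classifier(x):
    return 0.5 * (1 + np.tanh(x))  # model (2), held fixed throughout

np.random.seed(0)
for means_std in [3, 0.5, 0.001]:
    means = means_std * np.random.randn(100000)
    x_set = np.random.randn(means.size) + means
    print(means_std, round(roc_auc_score(1 * (means > 0), classifier(x_set)), 3))
# Wider mean distributions (more soft-balls) give a larger AUC for the very same model.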
Discussion
The toy example considered above illustrates the general point that the AUC of a model is really a function of both the model and the test set it is being applied to. Keeping this in mind will help to prevent incorrect interpretations of the AUC. A special case to watch out for in practice is the situation where the AUC changes upon adjustment of the training and testing protocol applied (which can result, for example, from changes to how training examples are collected for the model). If you see such a change occur in your work, be careful to consider whether or not it is possible that the difficulty of the test set has changed in the process. If so, the change in the AUC may not indicate a change in model quality.
Because the AUC score of a model can depend highly on the difficulty of the test set, reporting this score alone will generally not provide much insight into the accuracy of the model — which really depends only on performance near the true decision boundary and not on soft-ball performance. Because of this, it may be a good practice to always report AUC scores for optimized models next to those of some fixed baseline model. Comparing the differences of the two AUC scores provides an approximate method for removing the effect of test set difficulty. If you come across an isolated, high AUC score in the wild, remember that this does not imply a good model!
A special situation exists where reporting an isolated AUC score for a single model can provide value: The case where the test set employed shares the same distribution as that of the application set (the space where the model will be employed). In this case, performance within the test set directly relates to expected performance during application. However, applying the AUC to situations such as this is not always useful. For example, if the positive class sits within only a small subset of feature space, samples taken from much of the rest of the space will be “soft-balls” — examples easily classified as not being in the positive class. Measuring the AUC on test sets over the full feature space in this context will always result in AUC values near one — leaving it difficult to register improvements in the model near the decision boundary through measurement of the AUC.
|
For my system I can write down the Hamiltonian in this form:
$$ H = \begin{pmatrix} \epsilon_{1\downarrow}-\mu_{B}B & 0 & 0 & 0 \\ 0 & \epsilon_{2\uparrow}+\mu_{B}B & 0 & 0 \\ 0 & 0 & \epsilon_{1\uparrow}+\mu_{B}B & 0 \\ 0 & 0 & 0 & \epsilon_{2\downarrow}-\mu_{B}B \end{pmatrix} $$
where $\mu_{B}B$ is the Zeeman splitting.
Now I want to bring this system in contact with a superconductor and write down the Bogoliubov - de Gennes Hamiltonian. In my opinion the Bogoliubov - de Gennes Hamiltonian has this form, since the Zeeman energy does not depend on whether I have quasiparticles or holes:
$$ \mathcal{H} = \begin{pmatrix} \epsilon_{1\downarrow}& 0 & 0 & 0 \\ 0 & \epsilon_{2\uparrow} & 0 & 0 \\ 0 & 0 & \epsilon_{1\uparrow} & 0 \\ 0 & 0 & 0 & \epsilon_{2\downarrow} \end{pmatrix}\tau_{z} + \begin{pmatrix} -\mu_{B}B & 0 & 0 & 0 \\ 0 & +\mu_{B}B & 0 & 0 \\ 0 & 0 & +\mu_{B}B & 0 \\ 0 & 0 & 0 & -\mu_{B}B \end{pmatrix}1_{\tau} + H_{\Delta}\tau_{x} $$
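For what it's worth, the block structure above can be checked numerically; in the following minimal sketch all energies and the gap are placeholder values of my own:

import numpy as np

tz = np.diag([1.0, -1.0])                       # tau_z in particle-hole space
tx = np.array([[0.0, 1.0], [1.0, 0.0]])         # tau_x
t0 = np.eye(2)                                  # 1_tau

eps = np.diag([0.3, 0.5, 0.4, 0.6])             # eps_{1dn}, eps_{2up}, eps_{1up}, eps_{2dn} (placeholders)
zeeman = 0.1 * np.diag([-1.0, 1.0, 1.0, -1.0])  # the -/+ pattern of mu_B B from the matrix above
delta = 0.2                                     # placeholder gap

H = np.kron(tz, eps) + np.kron(t0, zeeman) + delta * np.kron(tx, np.eye(4))
print(np.linalg.eigvalsh(H))                    # spectrum of the 8x8 BdG matrix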
Without the Zeeman term the problem is a standard textbook problem, since I directly have my Nambu spinor $\psi^{\dagger} = \left(c^{\dagger}_{1\downarrow},c^{\dagger}_{2\uparrow},c^{\dagger}_{1\uparrow},c^{\dagger}_{2\downarrow},c_{1\downarrow},c_{2\uparrow},c_{1\uparrow},c_{2\downarrow}\right)$ and can write down the Hamiltonian in the correct second-quantization notation.
But if I add the Zeeman energy to my second-quantized Hamiltonian, the Zeeman term vanishes with this Nambu spinor. If I instead write in my Bogoliubov - de Gennes Hamiltonian
$$ \begin{pmatrix} -\mu_{B}B & 0 & 0 & 0 \\ 0 & +\mu_{B}B & 0 & 0 \\ 0 & 0 & +\mu_{B}B & 0 \\ 0 & 0 & 0 & -\mu_{B}B \end{pmatrix}\tau_{z} $$
I get the correct results, but this means that I have a sign flip in my Bogoliubov - de Gennes Hamiltonian between quasiparticles and holes, which is in my opinion wrong!
My question now is: how must I change my Nambu spinor so that the Zeeman term in my Hamiltonian does not vanish, or can I have a sign flip between quasiparticles and holes in the Zeeman energy?
|
Turritopsis dohrnii"the immortal jellyfish" are biologically immortal. This means that they do not die due to biological reasons -- however, they obviously may die due to other physical reasons, like getting smashed with a hammer. If I asked you to calculate what the probability of a biologically immortal species being truly immortal -- i.e. of it never dying(ever) -- would be, what'd you answer?
Well, obviously the probability is zero. Provided there is any chance at all of the jellyfish getting squashed by a hammer this year, with a sufficient amount of time you can be as certain as you want -- the probability can be as close to 1 as you want -- that the jellyfish will get squashed by a hammer.
But what if the probability of getting smashed by a hammer in that year was
decreasing with time? Perhaps this is not the case with jellyfish, but it certainly would be true for, e.g., a transhuman society where technological innovation continually decreases the probability of dying (to be precise, the probability density of dying in the next interval of time $\Delta t$, given that you haven't already died).
Let $p(t)\Delta t$ be the probability of our transhuman dying between $t$ and $t+\Delta t$. Then the probability of the transhuman
never dying at any time from 0 to infinity is:
\[\begin{gathered}
P = \left( {1 - p(0)\Delta t} \right)\left( {1 - p(\Delta t)\Delta t} \right)\left( {1 - p(2\Delta t)\Delta t} \right)\cdots \\
= \coprod\limits_{t = 0}^\infty {\left( {1 - p(t)\,dt} \right)} \\
\end{gathered} \]
Of course, we need to take the limit as $\Delta t \to 0$.
The reason this problem is so interesting is because it introduces the idea of
multiplicative calculus. If the product had been a sum, the solution would've been utterly, ridiculously straightforward. But since it's not, it's only really ridiculously straightforward. The natural way (no pun intended) to convert a product (we use the symbol \(\coprod {} \) to refer to the multiplicative integral) into a sum (or rather an integral) is to take the logarithm:
\[\begin{gathered}
\ln P = \ln \left( {1 - p(0)\Delta t} \right) + \ln \left( {1 - p(\Delta t)\Delta t} \right) + \ln \left( {1 - p(2\Delta t)\Delta t} \right) + \cdots \\
= \int_0^\infty {\ln \left( {1 - p(t)\,dt} \right)} \\
\end{gathered} \]
This may look awkward to you -- and indeed, the standard form of the multiplicative integral typically has the $dt$ differential as the exponent of the integrand so as to obtain after taking the logarithm the additive integral in its standard form.
But you might remember that
\[\ln (1 - x) = - x - \frac{{{x^2}}}{2} - \frac{{{x^3}}}{3} - ...\]
Or, to first order in $x$ (since the "$x$" here, $p(t)\,dt$, approaches 0), $\ln (1 - x) \approx - x$. Thus:
\[\ln P = - \int_0^\infty {p(t)dt} \]
Or:
\[P = {e^{ - \int_0^\infty {p(t)dt} }}\]
Which is pretty neat! Interestingly, this means that if the integral of $p(t)$ diverges (e.g. if $p(t)\sim1/t$), you are
guaranteed to eventually die. So this gives mankind a manual for how fast technological progress on this issue needs to be in the transhuman age to guarantee immortality. Internalise it in your demand, fellow robot!
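Numerically, the criterion is easy to play with; here is a minimal sketch with two assumed hazard rates, one whose integral converges and one whose integral diverges:

import numpy as np

# Survival to horizon T is exp(-int_0^T p(t) dt); immortality is the T -> infinity limit.
t = np.linspace(0, 1e6, 2_000_000)
for label, p in [("p(t) = 1/(1+t)^2", 1.0 / (1.0 + t) ** 2),
                 ("p(t) = 1/(1+t)  ", 1.0 / (1.0 + t))]:
    survival = np.exp(-np.trapz(p, t))
    print(label, "-> survival to T = 1e6:", survival)
# Roughly exp(-1) ~ 0.37 for the convergent hazard; essentially 0 for the divergent one.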
Here, we've calculated the probability of
immortality. The probability of eventual mortality is of course 1 minus this, but could also be calculated from the get go -- try this out. You'll get $P'=\int_0^\infty p(t) e^{-\int_0^t p(\tau) d\tau}dt$, which you can then simplify with a variable substitution. Perhaps this gives you some insight into variable substitutions in integrals of this sort.
|
Why does the pH increase so rapidly near the equivalence point during a titration -- that is, why does the slope of the pH curve sharply increase around this area?
I'll try to answer your question first qualitatively, then quantitatively (using a mathematical equation):
Let's consider the case of the titration of a strong acid (volume $V_a$, concentration $C_a$ unknown) with a strong base (volume $V_b$, concentration $C_b$), and we'll denote by $V_b(eq)$ the volume of base at the equivalence point:
The pH increases slowly at first because the pH scale is logarithmic, which means that a pH of 1 will have 10 times the hydronium ion concentration of a pH of 2. Thus, as the hydronium ion is initially removed, it takes a lot of base to change its concentration by a factor of 10, but as more and more hydronium ion is removed, less base is required to change its concentration by a factor of 10. Near the equivalence point, a change of a factor of 10 occurs very quickly, which is why the graph is extremely steep at this point. As the hydronium ion concentration becomes very low, it will again take a lot of base to increase the hydroxide ion concentration 10-fold to change the pH significantly. Let's find the equation of the titration curve in the region $0<V_b<V_b(eq)$:
The number of moles of $\ce{H3O+}$ before the titration is $C_aV_a$
The number of moles of $\ce{H3O+}$ after adding the volume $V_b$ of the base is $C_aV_a-C_bV_b$
Then, the concentration of $\ce{H3O+}$ is $$[\ce{H3O+}]=\frac{C_aV_a-C_bV_b}{V_a+V_b}$$ So, the titration curve in the region $0<V_b<V_b(eq)$ is $$p\mathrm{H}=-\log\frac{C_aV_a-C_bV_b}{V_a+V_b}$$ If you trace this mathematical function $p\mathrm{H}=f(V_b)$, you will have a smooth slope at the beginning, and as $V_b \rightarrow V_b(eq)$ the slope of the graph sharply increases, since the function is undefined at the equivalence point, where $C_aV_a=C_bV_b(eq)$ and the argument of the logarithm goes to zero.
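Tracing a few values of this function makes the steepening obvious; a minimal sketch with assumed concentrations and volumes:

import numpy as np

Ca, Va = 0.1, 25.0    # acid: mol/L, mL (placeholders)
Cb = 0.1              # base: mol/L
# equivalence at Vb(eq) = Ca*Va/Cb = 25 mL

for Vb in [0.0, 12.5, 20.0, 24.0, 24.9, 24.99]:
    h = (Ca * Va - Cb * Vb) / (Va + Vb)   # [H3O+] before equivalence
    print(f"Vb = {Vb:6.2f} mL   pH = {-np.log10(h):.2f}")
# The pH change per added mL grows rapidly as Vb approaches 25 mL.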
|
Motivation
Here I've asked how to derive coefficients of a numerical approximation of the linear transport equation $$ u_t + u_x = 0, \tag{1} $$ on a fixed 3-point stencil automatically, and here I've asked about automatic derivation on a general m-point stencil as well.
In the present question I would like to ask about an automatic way of deriving the following differential consequences (DC) of (1): \begin{align} u_t &= -u_x, \\ u_{tx} &= -u_{xx}, \\ u_{tt} &= -u_{xt} = u_{xx}, \\ u_{txx} &= - u_{xxx}, \\ u_{ttt} &= -u_{xtt} = u_{xxt} = -u_{xxx}, \\ &\ldots \tag{2} \end{align}
I need DC (2) because I need to consider the following approximation $$ u_j^{n+1} = u_j^n - \tau\,u_x. \tag{3} $$
DC (2) are applied in the expansion of $u_j^{n+1}$ around the $(x_j,t^n)$ node as follows: \begin{align} & u_j^{n+1} = u_j^n + \tau\bigl(u_t\bigr)_j^n + \frac{\tau^2}{2!}\bigl(u_{tt}\bigr)_j^n + \frac{\tau^3}{3!}\bigl(u_{ttt}\bigr)_j^n + O(\tau^4) = \\ =\,& u_j^n - \tau\bigl(u_x\bigr)_j^n + \frac{\tau^2}{2!}\bigl(u_{xx}\bigr)_j^n - \frac{\tau^3}{3!}\bigl(u_{xxx}\bigr)_j^n + O(\tau^4). \tag{4} \end{align}
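Before automating the derivation, the consequences (2) can at least be verified on the general solution u(x,t) = f(x - t); a minimal cross-check in Python/SymPy (my own, outside Mathematica):

import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')
u = f(x - t)  # general solution of u_t + u_x = 0

checks = [
    sp.diff(u, t) + sp.diff(u, x),           # the PDE itself
    sp.diff(u, t, x) + sp.diff(u, x, 2),     # u_tx = -u_xx
    sp.diff(u, t, 2) - sp.diff(u, x, 2),     # u_tt = u_xx
    sp.diff(u, t, 3) + sp.diff(u, x, 3),     # u_ttt = -u_xxx
]
print([sp.simplify(c) for c in checks])  # all zero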
WM code
I've made the following semi-analytic solution.
{
 eq = D[u[x, t], t] + D[u[x, t], x] == 0,
 eqdc01 = D[eq, t],
 eqdc02 = D[eq, t, t],
 eqdc03 = D[eq, x],
 sol00 = Solve[eq, D[u[x, t], t]] // First,
 sol01 = Solve[eqdc01, D[u[x, t], {x, 0}, {t, 2}]] // First,
 sol02 = Solve[eqdc03, D[u[x, t], {x, 1}, {t, 1}]] // First,
 sol03 = Solve[eqdc02, D[u[x, t], {x, 1}, {t, 2}]] // First,
 sol04 = Solve[eqdc02, D[u[x, t], {x, 0}, {t, 3}]] // First,
 sol05 = D[sol02, {t, 1}],
 sol06 = D[sol00, {x, 2}],
 sol07 = sol05 /. sol06,
 sol08 = sol04 /. sol07
} // Column
{
 se01 = Series[u[x, t + \[Tau]], {\[Tau], 0, 3}] // Normal,
 lhs = se01 /. sol00 /. sol01 /. sol02 /. sol08 // Normal,
 rhs = Sum[Subscript[a, i] (Series[u[x + i h, t], {h, 0, 3}] // Normal), {i, -1, 2, 1}]
} // Column
Question
The question is how to derive DC (2) automatically.
|
Reflecting
Reflection is a fundamental motivating concern in set theory. The theory of ZFC can be equivalently axiomatized over the very weak Kripke-Platek set theory by the addition of the reflection theorem scheme, below, since instances of the replacement axiom will follow from an instance of $\Delta_0$-separation after reflection down to a $V_\alpha$ containing the range of the defined function. Several philosophers have advanced philosophical justifications of large cardinals based on ideas arising from reflection.
Reflection theorem
The Reflection theorem is one of the most important theorems in Set Theory, being the basis for several large cardinals. The Reflection theorem is in fact a "meta-theorem," a theorem about proving theorems. The Reflection theorem intuitively encapsulates the idea that we can find sets resembling the class $V$ of all sets.
Theorem (Reflection): For every set $M$ and formula $\phi(x_0...x_n,p)$ ($p$ is a parameter) there exists some limit ordinal $\alpha$ with $V_\alpha\supseteq M$ such that $\phi^{V_\alpha}(x_0...x_n,p)\leftrightarrow \phi(x_0...x_n,p)$ (we say $V_\alpha$ reflects $\phi$). Assuming the Axiom of Choice, we can find some countable $M_0\supseteq M$ that reflects $\phi(x_0...x_n,p)$.
Note that by conjunction, for any finite family of formulas $\phi_0...\phi_n$, $V_\alpha$ reflects $\phi_0...\phi_n$ if and only if $V_\alpha$ reflects $\phi_0\land...\land\phi_n$. Another important fact is that the truth predicate for $\Sigma_n$ formulas is $\Sigma_{n+1}$, and so we can find a (club class of) ordinals $\alpha$ such that $(V_\alpha,\in)\prec_{{T_{\Sigma_n}}\restriction{V_\alpha}} (V,\in)$, where $T_{\Sigma_n}$ is the truth predicate for $\Sigma_n$, and so $ZFC\vdash Con(ZFC(\Sigma_n))$ for every $n$, where $ZFC(\Sigma_n)$ is $ZFC$ with Replacement and Separation restricted to $\Sigma_n$ formulas.
Lemma: If $W_\alpha$ is a cumulative hierarchy, there are arbitrarily large limit ordinals $\alpha$ such that $\phi^{W_\alpha}(x_0...x_n,p)\leftrightarrow \phi^W(x_0...x_n,p)$.
Reflection and correctness
For any class $\Gamma$ of formulas, an inaccessible cardinal $\kappa$ is $\Gamma$-reflecting if and only if $H_\kappa\prec_\Gamma V$, meaning that for any $\varphi\in\Gamma$ and $a\in H_\kappa$ we have $V\models\varphi[a]\iff H_\kappa\models\varphi[a]$. For example, an inaccessible cardinal is $\Sigma_n$-reflecting if and only if $H_\kappa\prec_{\Sigma_n} V$. In the case that $\kappa$ is not necessarily inaccessible, we say that $\kappa$ is $\Gamma$-correct if and only if $H_\kappa\prec_\Gamma V$.
* A simple Löwenheim-Skolem argument shows that every uncountable cardinal $\kappa$ is $\Sigma_1$-correct.
* For each natural number $n$, the $\Sigma_n$-correct cardinals form a closed unbounded proper class of cardinals, as a consequence of the reflection theorem. This class is sometimes denoted by $C^{(n)}$ and the $\Sigma_n$-correct cardinals are also sometimes referred to as the $C^{(n)}$-cardinals.
* Every $\Sigma_2$-correct cardinal is a $\beth$-fixed point and a limit of such $\beth$-fixed points, as well as an $\aleph$-fixed point and a limit of such. Consequently, we may equivalently define for $n\geq 2$ that $\kappa$ is $\Sigma_n$-correct if and only if $V_\kappa\prec_{\Sigma_n} V$.
A cardinal $\kappa$ is correct, written $V_\kappa\prec V$, if it is $\Sigma_n$-correct for each $n$. This is not expressible by a single assertion in the language of set theory (since if it were, the least such $\kappa$ would have to have a smaller one inside $V_\kappa$ by elementarity). Nevertheless, $V_\kappa\prec V$ is expressible as a scheme in the language of set theory with a parameter (or constant symbol) for $\kappa$.
Although it may be surprising, the existence of a correct cardinal is equiconsistent with ZFC. This can be seen by a simple compactness argument, using the fact that the theory ZFC + "$\kappa$ is correct" is finitely consistent if ZFC is consistent, precisely by the observation about $\Sigma_n$-correct cardinals above.
$C^{(n)}$ are the classes of $\Sigma_n$-correct ordinals. These classes are clubs (closed and unbounded). $C^{(0)}$ is the class of all ordinals. $C^{(1)}$ is precisely the class of all uncountable cardinals $\alpha$ such that $V_\alpha=H(\alpha)$, i.e. precisely the $\beth$-fixed points. References to the $C^{(n)}$ classes (beyond the mere requirement that the cardinal belongs to $C^{(n)}$) can sometimes make large cardinal properties stronger (for example $C^{(n)}$-superstrong, $C^{(n)}$-extendible, $C^{(n)}$-huge, $C^{(n)}$-I3 and $C^{(n)}$-I1 cardinals). On the other hand, every measurable cardinal is $C^{(n)}$-measurable for all $n$, and every ($\lambda$-)strong cardinal is ($\lambda$-)$C^{(n)}$-strong for all $n$.[1]
A cardinal $\kappa$ is reflecting if it is inaccessible and correct. Just as with the notion of correctness, this is not first-order expressible as a single assertion in the language of set theory, but it is expressible as a scheme (the Lévy scheme). The existence of such a cardinal is equiconsistent with the assertion that Ord is Mahlo.
If there is a pseudo uplifting cardinal, or indeed, merely a pseudo $0$-uplifting cardinal $\kappa$, then there is a transitive set model of ZFC with a reflecting cardinal and consequently also a transitive model of ZFC plus Ord is Mahlo. You can get this by taking some $\lambda\gt\kappa$ such that $V_\kappa\prec V_\lambda$.
$\Sigma_2$-correct cardinals
The $\Sigma_2$-correct cardinals are a particularly useful and robust class of cardinals, because of the following characterization: $\kappa$ is $\Sigma_2$-correct if and only if for any $x\in V_\kappa$ and any formula $\varphi$ of any complexity, whenever there is an ordinal $\alpha$ such that $V_\alpha\models\varphi[x]$, then there is $\alpha\lt\kappa$ with $V_\alpha\models\varphi[x]$. The reason this is equivalent to $\Sigma_2$-correctness is that assertions of the form $\exists \alpha\ V_\alpha\models\varphi(x)$ have complexity $\Sigma_2(x)$, and conversely all $\Sigma_2(x)$ assertions can be made in that form.
It follows, for example, that if $\kappa$ is $\Sigma_2$-correct, then any feature of $\kappa$ or any larger cardinal than $\kappa$ that can be verified in a large $V_\alpha$ will reflect below $\kappa$. So if $\kappa$ is $\Sigma_2$-reflecting, for example, then there must be unboundedly many inaccessible cardinals below $\kappa$. Similarly, if $\kappa$ is $\Sigma_2$-reflecting and measurable, then there must be unboundedly many measurable cardinals below $\kappa$.
Other facts:
Remarkable cardinals are $Σ_2$-reflecting.[2] It is relatively consistent that ZFC holds together with the generic Vopěnka scheme, yet $Ord$ is not definably Mahlo and not even $∆_2$-Mahlo. In such a model, there can be no $Σ_2$-reflecting cardinals.[3]
The Feferman theory
This is the theory, expressed in the language of set theory augmented with a new unary class predicate symbol $C$, asserting that $C$ is a closed unbounded class of cardinals and that every $\gamma\in C$ has $V_\gamma\prec V$. In other words, the theory consists of the following scheme of assertions: $$\forall\gamma\in C\ \forall x\in V_\gamma\ \bigl[\varphi(x)\iff\varphi^{V_\gamma}(x)\bigr]$$as $\varphi$ ranges over all formulas. Thus, the Feferman theory asserts that the universe $V$ is the union of a chain of elementary substructures $$V_{\gamma_0}\prec V_{\gamma_1}\prec\cdots\prec V_{\gamma_\alpha}\prec\cdots \prec V$$Although this may appear at first to be a rather strong theory, since it seems to imply at the very least that each $V_\gamma$ for $\gamma\in C$ is a model of ZFC, this conclusion would be incorrect. In fact, the theory does not imply that any $V_\gamma$ is a model of ZFC, and does not prove $\text{Con}(\text{ZFC})$; rather, the theory implies, for each axiom of ZFC separately, that each $V_\gamma$ for $\gamma\in C$ satisfies it. Since the theory is a scheme, there is no way to prove from it that any particular $\gamma\in C$ has $V_\gamma$ satisfying more than finitely many axioms of ZFC. In particular, a simple compactness argument shows that the Feferman theory is consistent provided only that ZFC itself is consistent, since any finite subtheory of the Feferman theory is true, by the reflection theorem, in any model of ZFC. It follows that the Feferman theory is actually conservative over ZFC: it proves no new facts about sets that are not already provable in ZFC alone.
The Feferman theory was proposed as a natural theory in which to undertake the category-theoretic uses of Grothendieck universes, but without the large cardinal penalty of a proper class of inaccessible cardinals. Indeed, the Feferman theory offers the advantage that the universes are each elementary substructures of one another, which is a feature not generally true under the universe axiom.
Maximality Principle
The existence of an inaccessible reflecting cardinal is equiconsistent with the boldface maximality principle $\text{MP}(\mathbb{R})$, which asserts of any statement $\varphi(r)$ with parameter $r\in\mathbb{R}$ that if $\varphi(r)$ is forceable in such a way that it remains true in all subsequent forcing extensions, then it is already true; in short, $\text{MP}(\mathbb{R})$ asserts that every possibly necessary statement with real parameters is already true. Hamkins showed that if $\kappa$ is an inaccessible reflecting cardinal, then there is a forcing extension with $\text{MP}(\mathbb{R})$, and conversely, whenever $\text{MP}(\mathbb{R})$ holds, then there is an inner model with an inaccessible reflecting cardinal.
References
1. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3-4):213-240, 2012.
2. Wilson, Trevor M. Weakly remarkable cardinals, Erdős cardinals, and the generic Vopěnka principle. 2018. arXiv.
3. Gitman, Victoria and Hamkins, Joel David. A model of the generic Vopěnka principle in which the ordinals are not Mahlo. 2018. arXiv.
4. Bagaria, Joan; Hamkins, Joel David; Tsaprounis, Konstantinos; Usuba, Toshimichi. Superstrong and other large cardinals are never Laver indestructible. Archive for Mathematical Logic 55(1-2):19-35, 2013.
|
In the above example, how is it that $f^{-1}((1,3)) = (2,3]$? Here is my understanding; kindly correct the misconceptions. The inverse for $(2,4]$ is not defined. The inverse is as below. $$f^{-1}(y)=\begin{cases} y+1 & \text { if } y \le 2\\ 2y-5 & \text{ if } y \gt 4\\ \end{cases}$$ So, if I have to find out, for example, $f^{-1}(2\frac{1}{2})$, how do I do it? When does $f^{-1}(y)$ give me $3$ (to justify the $3$ in $(2,3]$)?
The inverse image $f^{-1}(S)$ refers to the set $$\{x \in \Bbb{R} : f(x) \in S\}$$ This would mean that $$f^{-1}(\{2\tfrac{1}{2}\}) = \emptyset$$ We also have $$f^{-1}((1, 3)) = \{x \in \Bbb{R} : 1 < f(x) < 3\},$$ which holds precisely for $2 < x \le 3$. There's no requirement that there be some $x$ with $f(x) = 2.5$; $f(x)$ just has to lie strictly between $1$ and $3$.
If $x>3$ we have that $f(x) = \frac{1}{2}(x+5) > \frac{1}{2} \cdot 8 = 4$ and so $f(x) \notin (1,3)$.
If $2< x \le 3$ we have that $f(x)=x-1 \in (1,2] \subseteq (1,3)$ and
If $x \le 2$, $f(x)=x-1 \le 1$, so $f(x) \notin (1,3)$
Hence $f^{-1}((1,3)) = \{x: f(x) \in (1,3) \} = (2,3]$, as we covered all options for $x$.
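For completeness, here is a quick numerical sanity check (a sketch in Python; the piecewise $f$ is reconstructed from the case analysis above, so treat it as an assumption):

```python
import numpy as np

def f(x):
    # the piecewise function from this thread, reconstructed from the cases
    # above: f(x) = x - 1 for x <= 3 and f(x) = (x + 5)/2 for x > 3
    return np.where(x <= 3, x - 1, (x + 5) / 2)

xs = np.linspace(0, 6, 600001)
hits = xs[(f(xs) > 1) & (f(xs) < 3)]
print(hits.min(), hits.max())  # roughly 2 (exclusive) and 3 (inclusive)
```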
Inverse image is not "image under a (non-existent) inverse function". For example: if $f: \mathbb{R} \to \mathbb{R}$ is the function that is constant with value $2$, then $f^{-1}(\{2\}) = \mathbb{R}$ and $f^{-1}(\{1\}) = \emptyset$. We are talking about inverse images of sets, not of points.
|
Suppose $A\subseteq X$. Prove that the boundary $\partial A$ of $A$ is closed in $X$.
My knowledge:
$A^{\circ}$ is the interior $A^{\circ}\subseteq A \subseteq \overline{A}\subseteq X$
My proof was as follows:
To show that $\partial A = \overline{A} \setminus A^{\circ}$ is closed, we have to show that the complement $(\partial A)^C = X\setminus\partial A = X \setminus (\overline{A} \setminus A^{\circ})$ is open in $X$. This is the set $A^{\circ}\cup (X \setminus\overline{A})$.
Then I claim that $A^{\circ}$ is open by definition ($a\in A^{\circ} \implies \exists \epsilon>0: B_\epsilon(a)\subseteq A$). As this is true for all $a\in A^{\circ}$, the set $A^{\circ}$ is open by the definition of open sets.
My next claim is that $X \setminus \overline{A}$ is open. This is true because its complement $\overline{A}$ is closed in $X$, hence $X \setminus \overline{A}$ is open in $X$.
My concluding claims are: we have a union of two open sets in $X$, which by a proposition in my textbook is itself open in $X$. Therefore the complement of that set, namely $\partial A$, is closed, which is what we had to show.
What about this?
|
I want to know if there is a way to find, for example, $\ln(2)$ without using a calculator.
Thanks
And let's not forget this method (read off of the Ln scale).
$$\log 2 = 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\ldots$$ In the general case $$\log \frac{1+x}{1-x} = 2(x+\frac{x^3}{3}+\frac{x^5}{5}+\frac{x^7}{7}+\ldots)$$
How precise do you need the calculation to be?
As a quick and dirty approximation, we know that $2^3 = 8$ and $e^2 \approx 2.7^2 = 7.29$, and so $\ln(2)$ should be just over $\frac{2}{3} \approx 0.67$. Continuing to match powers, we find $2^{10} = 1024$, and $e^7 \approx (2.7)^7 = (3 - 0.3)^7 = 3^7 -7(3)^6(.3) + 21(3)^5(.3)^2 - 35(3)^4(.3)^3 \dots$ $= 3^7 (1 - .7 + .21 - .035 \dots)$ $\approx 2187(.475) = 1038.825$. Therefore, $e^7 \approx 2^{10}$ and so $\ln(2)$ should be just under $0.7$.
The operations that are relatively easy to compute by hand are addition and multiplication, together with their inverses, subtraction and division. With these operations we can compute all rational functions, e.g. $\frac{2x^2-1}{x^3+x-1}$.
We know that $$\ln(x)=\sum_{k=1}^{\infty}(-1)^{k+1}\frac{(x-1)^k}{k}$$
for values of $x$ close to $1$. So, if we take partial sums of this series, we get approximations to the logarithm that only require multiplications, additions and subtractions.
Notice that we only need to be able to compute values of the logarithm for numbers close to $1$, since the identity $\ln(e^kx)=k+\ln(x)$ allows us to reduce to this case.
$$\log2=\frac{2}{3}\left(1+\frac{1}{27}+\frac{1}{405}+\frac{1}{5103}+\frac{1}{59049}+\frac{1}{649539}+...\right)$$
The denominator is $(2k+1)9^k$.
Gourdon and Sebah discuss the efficiency of this formula in http://plouffe.fr/simon/articles/log2.pdf (page 11)
A "little more effort" is required to compute $\log(2)$ using this formula than to compute $\pi$ using Machin's relation.
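As a quick illustration of how fast this converges, here is a minimal sketch in Python using exact rational arithmetic for the partial sums:

```python
from fractions import Fraction
import math

# log 2 = (2/3) * sum_{k >= 0} 1 / ((2k + 1) * 9**k)
s = Fraction(0)
for k in range(10):
    s += Fraction(1, (2 * k + 1) * 9**k)

print(float(Fraction(2, 3) * s))  # 0.6931471805...
print(math.log(2))                # about ten digits agree after ten terms
```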
We have the CORDIC method, which can be quite effective for by-hand computation as it requires only additions/subtractions (and one multiplication by a small integer).
There are two limitations though:
it is better performed in base $2$, so a preliminary change of base is needed for the input argument (you can do it in base $10$ as well but it takes about $3$ times more operations);
you need a small table of constants.
It is based on the identity $\log(ab)=\log(a)+\log(b)$.
You first normalize the binary number as $x=z\cdot2^e$, with $1\le z<10_b$. You have $\log(x)=\log(z)+e\cdot\log(2)$.
Then $$\log(z)=\log(0.11_bz)-\log(0.11_b)\\ \log(z)=\log(0.111_bz)-\log(0.111_b)\\ \log(z)=\log(0.1111_bz)-\log(0.1111_b)\\ \cdots$$
You will use these equalities as follows. Initialize an accumulator $l\leftarrow0$ and
if $0.11_bz>1$ (i.e. $z>1.01010101_b\cdots$) let $z\leftarrow 0.11_bz$, $l\leftarrow l-\log(0.11_b)$;
if $0.111_bz>1$ (i.e. $z>1.00100100_b\cdots$) let $z\leftarrow 0.111_bz$, $l\leftarrow l-\log(0.111_b)$;
if $0.1111_bz>1$ (i.e. $z>1.00010001_b\cdots$) let $z\leftarrow 0.1111_bz$, $l\leftarrow l-\log(0.1111_b)$;
$\cdots$
The multiplies are actually performed as shifts and subtractions (f.i. $0.111_bz=z-0.001_bz$).
This way, we progressively reduce $z$ to bring it closer and closer to $1$, while $l$ gets closer and closer to the logarithm of the initial $z$. On every step we gain one bit of accuracy.
The table of constants ($\log(10_b)=-\log(0.1_b),-\log(0.11_b),-\log(0.111_b),\cdots$ up to the desired number of significant bits) is computed in the decimal base, so that the answer is readily available as such.
$$\begin{align}z&\to-\log(z)\\ 0.1000000000000000000000000000000_b&\to 0.6931471806_d\\ 0.1100000000000000000000000000000_b&\to 0.2876820725_d\\ 0.1110000000000000000000000000000_b&\to 0.1335313926_d\\ 0.1111000000000000000000000000000_b&\to 0.0645385211_d\\ 0.1111100000000000000000000000000_b&\to 0.0317486983_d\\ 0.1111110000000000000000000000000_b&\to 0.0157483570_d\\ 0.1111111000000000000000000000000_b&\to 0.0078431775_d\\ 0.1111111100000000000000000000000_b&\to 0.0039138993_d\\ 0.1111111110000000000000000000000_b&\to 0.0019550348_d\\ 0.1111111111000000000000000000000_b&\to 0.0009770396_d\\ 0.1111111111100000000000000000000_b&\to 0.0004884005_d\\ 0.1111111111110000000000000000000_b&\to 0.0002441704_d\\ 0.1111111111111000000000000000000_b&\to 0.0001220778_d\\ 0.1111111111111100000000000000000_b&\to 0.0000610370_d\\ 0.1111111111111110000000000000000_b&\to 0.0000305180_d\\ 0.1111111111111111000000000000000_b&\to 0.0000152589_d\\ 0.1111111111111111100000000000000_b&\to 0.0000076294_d\\ 0.1111111111111111110000000000000_b&\to 0.0000038147_d\\ 0.1111111111111111111000000000000_b&\to 0.0000019074_d\\ 0.1111111111111111111100000000000_b&\to 0.0000009537_d\\ 0.1111111111111111111110000000000_b&\to 0.0000004768_d\\ 0.1111111111111111111111000000000_b&\to 0.0000002384_d\\ 0.1111111111111111111111100000000_b&\to 0.0000001192_d\\ 0.1111111111111111111111110000000_b&\to 0.0000000596_d\\ 0.1111111111111111111111111000000_b&\to 0.0000000298_d\\ 0.1111111111111111111111111100000_b&\to 0.0000000149_d\\ 0.1111111111111111111111111110000_b&\to 0.0000000075_d\\ 0.1111111111111111111111111111000_b&\to 0.0000000037_d\\ 0.1111111111111111111111111111100_b&\to 0.0000000019_d\\ 0.1111111111111111111111111111110_b&\to 0.0000000009_d\\ 0.1111111111111111111111111111111_b&\to 0.0000000005_d\\ \end{align}$$
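A Python transcription of the scheme might look as follows (a sketch, not the canonical algorithm: I allow a factor to be applied more than once via a while loop, and the table is just $-\log(1-2^{-k})$, whose first entry is $\log 2$ as in the table above):

```python
import math

N = 40  # number of table entries / roughly bits of accuracy
table = [-math.log(1 - 2.0**-k) for k in range(1, N + 1)]  # -log(0.1_b), -log(0.11_b), ...

def log_shift_add(z):
    """log(z) for 1 <= z < 2, using only shifts, subtractions and table lookups."""
    acc = 0.0
    for k in range(1, N + 1):
        while True:
            candidate = z - z * 2.0**-k  # z * (1 - 2**-k): one shift and one subtraction
            if candidate < 1.0:
                break
            z = candidate
            acc += table[k - 1]
    return acc

# log(x) = log(z) + e*log(2) after normalizing x = z * 2**e
print(log_shift_add(1.5), math.log(1.5))
```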
One can use the fact that$$\log x=\lim_{n\to\infty}n\left(1-\frac{1}{\sqrt[n]{x}}\right)$$For $\log2$ a good approximation is$$1048576\left(1-\frac{1}{\sqrt[1048576]{2}}\right)$$where $\sqrt[1048576]{x}$ can be computed by pressing the SQRT key of a pocket calculator twenty times, since $1048576=2^{20}$ (or by computing it by hand, with much patience and time to spend).
What I get doing those computations is $0.6931469565952$, while a real computer gives $0.69314718055994530941$, so we have five exact decimal digits. Of course bigger numbers won't do, since the $2^{20}$-th root would be too near $1$ and the necessary digits would already have been lost.
(Note: $\log$ is the natural logarithm; I refuse to denote it in any other way. ;-))
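For what it's worth, the whole procedure fits in a few lines of Python (a sketch mirroring the pocket-calculator recipe above):

```python
import math

x = 2.0
for _ in range(20):        # take the square root twenty times: x ** (1 / 2**20)
    x = math.sqrt(x)

print(1048576 * (1 - 1 / x))  # about 0.6931469..., five exact decimals
print(math.log(2))
```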
$$\log (x)=\sum _{n=1}^{\infty } \frac{\left(\frac{x-1}{x}\right)^n}{n}$$ when $x>1$
What you can use is the Taylor expansion of $\ln (1+x)$:
$$\ln (1+x) = \sum (-1)^{j+1}{x^j\over j}$$
which converges for $-1<x\le1$. It would be tempting to insert $x=1$, but that would be a poor choice since the convergence at $x=1$ is painfully slow. Instead, use the fact that $\ln 2 = -\ln \frac{1}{2}$ and insert $x=-1/2$:
$$\ln (1-{1\over 2}) = \sum (-1)^{j+1}{1\over j2^j} = -\sum {1\over j2^j}$$
So
$$\ln 2 = \sum {1\over j2^j}$$
This is similar to how the calculator does it, but there are probably a few more tricks used. First, it probably uses the base-two logarithm and a stored value of $\log_2 e$ to be able to produce the natural logarithm. The reason for this is to be able to handle logarithms of values outside the convergence region (and generally we want to use the series on as narrow a region as possible). We can generally write any number in the form $x2^p$ (in fact, floating-point numbers are already represented in that form) with $x$ near $1$, and then $\log_2(x2^p) = p + \log_2(x)$ (a similar trick is used for all these kinds of functions).
The second trick is to approximate $\ln(1+x)$ on the interval $[1/\sqrt2, \sqrt2]$ even better than the Taylor expansion does: the trick is to find a polynomial that approximates it as uniformly well as possible. The Maclaurin expansion yields a good approximation fast for values near zero, at the expense of values further away. For the generic case one uses a polynomial that yields a good enough approximation equally fast over the whole interval.
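For reference, a minimal sketch of the series $\ln 2 = \sum 1/(j2^j)$ derived above:

```python
import math

s = 0.0
for j in range(1, 60):
    s += 1.0 / (j * 2.0**j)  # ln 2 = sum_{j >= 1} 1 / (j * 2**j)

print(s, math.log(2))        # agrees to about machine precision
```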
We can represent the logarithm of positive rational numbers as follows.
First, consider the following null conditionally convergent series (cancelled harmonic series):
$$0=(1-1)+\left(\frac{1}{2}-\frac{1}{2}\right)+\left(\frac{1}{3}-\frac{1}{3}\right)+\left(\frac{1}{4}-\frac{1}{4}\right)+\left(\frac{1}{5}-\frac{1}{5}\right)+...$$
Note that we are computing $0=log(1)=log\left(\frac{1}{1}\right)$ by adding consecutive terms with 1 positive fraction and 1 negative fraction each, taken from the inverses of non-zero integers. This observation may sound trivial now, but it is interesting for what comes next.
We can rearrange the terms of this series to compute $log(2)$ by taking two positive fractions and one negative for each term.
$$log\left(2\right)=\left(1+\frac{1}{2}-1\right)+\left(\frac{1}{3}+\frac{1}{4}-\frac{1}{2}\right)+\left(\frac{1}{5}+\frac{1}{6}-\frac{1}{3}\right)+\left(\frac{1}{7}+\frac{1}{8}-\frac{1}{4}\right)+...$$
This can be easily seen to be the Mercator series in disguise, so we have discovered nothing new yet.
But there is more. Similarly, we have
$$log\left(3\right)=\left(1+\frac{1}{2}+\frac{1}{3}-1\right)+\left(\frac{1}{4}+\frac{1}{5}+\frac{1}{6}-\frac{1}{2}\right)+\left(\frac{1}{7}+\frac{1}{8}+\frac{1}{9}-\frac{1}{3}\right)+\left(\frac{1}{10}+\frac{1}{11}+\frac{1}{12}-\frac{1}{4}\right)+...$$
This pattern holds for all positive integers, so the next step is to apply the property $log(p/q)=log(p)-log(q)$ to these representations.
This leads to $log(p/q)$ by adding $p$ positive fractions and $q$ negative fractions at each step. For example, we have
$$log\left(\frac{3}{2}\right)=\left(1+\frac{1}{2}+\frac{1}{3}-1-\frac{1}{2}\right)+\left(\frac{1}{4}+\frac{1}{5}+\frac{1}{6}-\frac{1}{3}-\frac{1}{4}\right)+\left(\frac{1}{7}+\frac{1}{8}+\frac{1}{9}-\frac{1}{5}-\frac{1}{6}\right)+...$$
as illustrated in http://oeis.org/A166871.
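A small sketch of this block summation (my own illustration; the helper name log_pq is made up), here for $log(3/2)$ with blocks of 3 positive and 2 negative fractions:

```python
import math

def log_pq(p, q, blocks=100000):
    """Approximate log(p/q) by blocks of p positive and q negative fractions."""
    total, pos, neg = 0.0, 0, 0
    for _ in range(blocks):
        for _ in range(p):
            pos += 1
            total += 1.0 / pos
        for _ in range(q):
            neg += 1
            total -= 1.0 / neg
    return total

print(log_pq(3, 2), math.log(3 / 2))  # converges, though only at rate O(1/blocks)
```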
|
Calculate (via two methods) the flux integral $$\int_S (\nabla \times \vec{F}) \cdot \vec{n} dS$$ where $\vec{F} = (y,z,x^2y^2)$ and $S$ is the surface given by $z = x^2 +y^2$ and $0 \leq z \leq 4$, oriented so that $\vec{n}$ points downwards.
I have applied Stokes' Theorem, which results in $$\oint_{\partial S} \vec{F} \cdot d\vec{r}.$$ Further, I calculated the normal vector $$(r\sin(\theta), r\cos(\theta), -r)$$ such that it is pointing downwards. I am stuck now because I cannot interpret the meaning of $\vec{r}$.
My second method would be to directly compute the curl, $$\text{curl}\, \vec{F} = (2x^2y - 1,\,-2xy^2,\,-1).$$ Compute a (downward) normal via the gradient, $$\nabla (x^2+y^2-z) = (2x,2y,-1),$$ and substitute everything to get $$\int_S \dots \,dS.$$
I'm stuck now and having difficulty finding the boundary in terms of $x$ and $y$. Can anyone please lend me a hand? Thanks.
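Not a full answer, but a quick numerical cross-check of both sides can help (a Python sketch; it assumes the downward normal pairs, via the right-hand rule, with the boundary circle at $z=4$ traversed clockwise when seen from above):

```python
import numpy as np

# curl F = (2x^2 y - 1, -2x y^2, -1) for F = (y, z, x^2 y^2)

# --- surface side: z = x^2 + y^2 over x^2 + y^2 <= 4, downward normal ---
n_r, n_t = 500, 500
dr, dt = 2 / n_r, 2 * np.pi / n_t
r = (np.arange(n_r) + 0.5) * dr          # midpoint rule in r
t = (np.arange(n_t) + 0.5) * dt
R, T = np.meshgrid(r, t)
X, Y = R * np.cos(T), R * np.sin(T)
# downward (non-normalized) normal of the graph parameterization: (2x, 2y, -1)
integrand = ((2 * X**2 * Y - 1) * (2 * X) + (-2 * X * Y**2) * (2 * Y) + 1.0) * R
flux = integrand.sum() * dr * dt

# --- boundary side: circle x^2 + y^2 = 4 at z = 4, clockwise seen from above ---
n_s = 200000
ds = 2 * np.pi / n_s
s = (np.arange(n_s) + 0.5) * ds
x, y, z = 2 * np.cos(s), -2 * np.sin(s), 4.0
dxds, dyds = -2 * np.sin(s), -2 * np.cos(s)
line = (y * dxds + z * dyds).sum() * ds   # F . dr with dz = 0 on the circle

print(flux, line, 4 * np.pi)              # all approximately 12.566
```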
|
I don't have much experience with solving equations using Mathematica. I have the following equation:
$$A=u\cdot f^{-1}(u)-\int_a^{f^{-1}(u)}f(x)dx$$ For some given constant $A>0$ and $a\in[0,1]$, and given function $f$, with the following properties:
- $f(x)\geq 0$;
- $f(x)=0$ on $x\in[0,a]$;
- $f^{-1}(u)\in [a,1]$;
- I know that $u>0$ will hold.

How do I tell Mathematica to solve for $u$? I don't know where to start. I would be satisfied with a numerical solution.
Perhaps there is some good tutorial that would teach me these things?
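As an illustration of the numerical approach (a sketch in Python with a made-up $f$, since the actual $f$ is not given; I believe the Mathematica analogue would combine FindRoot with NIntegrate):

```python
from scipy.integrate import quad
from scipy.optimize import brentq

a, A = 0.2, 0.15                     # hypothetical values of the constants

def f(x):
    # hypothetical f with the stated properties: 0 on [0, a], increasing on [a, 1]
    return 0.0 if x <= a else (x - a) / (1 - a)

def f_inv(u):
    return a + u * (1 - a)           # inverse of this f restricted to [a, 1]

def g(u):
    # residual of A = u * f^{-1}(u) - integral_a^{f^{-1}(u)} f(x) dx
    integral, _ = quad(f, a, f_inv(u))
    return u * f_inv(u) - integral - A

u_star = brentq(g, 1e-9, 1.0)        # g(0+) = -A < 0 and g(1) > 0 for these values
print(u_star, g(u_star))
```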
|
Lebesgue Measurable But Not Borel
The Basic Idea
Our goal for today is to construct a Lebesgue measurable set which is not a Borel set. Such a set exists because the Lebesgue measure is the completion of the Borel measure. (The collection $\mathscr{B}$ of Borel sets is generated by the open sets, whereas the set of Lebesgue measurable sets $\mathscr{L}$ is generated by both the open sets and zero sets.) In short, $\mathscr{B}\subset \mathscr{L}$, where the containment is a proper one.
To produce a set in $\mathscr{L}\smallsetminus \mathscr{B}$, we'll assume two facts:
1. Every set in $\mathscr{L}$ with positive measure contains a non-(Lebesgue-)measurable subset.
2. 97.3% of all counterexamples in real analysis involve the Cantor set.
Okay okay, the last one isn't really a fact, but it may not surprise you that the Cantor set is central to today's discussion. In summary, we will define a homeomorphism (a continuous function with a continuous inverse) from $[0,1]$ to $[0,2]$ which will map a (sub)set (of the Cantor set) of measure 0 to a set of measure 1. By fact #1, this set of measure 1 contains a non-measurable subset, say $N$. And the preimage of $N$ will be Lebesgue measurable but will not be a Borel set. We'll fill in the details below, and while we do, keep in mind that we must work with a homeomorphism: a merely continuous function just won't do. And by playing with the Cantor set, we'll see that homeomorphisms (much less continuous functions!) don't always preserve measure. It's because of this that we can produce a Lebesgue measurable set which is not Borel.
From English to Math
Begin by defining a function $f:[0,1]\to[0,2]$ by $$f(x)=c(x)+x$$ where $c:[0,1]\to[0,1]$ is the Cantor function. The graph of $f$ looks much like that of $c$, except the horizontal lines are now all tilted with a slope of 1. I've drawn the graph for the first two iterations. This function has the following properties:
- $f$ is strictly increasing since $f'=1$ almost everywhere (recall $c'=0$ almost everywhere)
- $f$ is continuous since both $c$ and $x$ are continuous
- $f^{-1}$ exists: $f$ is 1-1 since it's strictly increasing; it's onto by the Intermediate Value Theorem: since $f(0)=0$, $f(1)=2$ and $f$ is continuous, it assumes all values in between $0$ and $2$!
- $f^{-1}$ is continuous (hence $f$ is a homeomorphism) (see footnote *)
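As a quick numerical aside (a sketch of my own, not from the original post), the Cantor function, and hence $f$, is easy to evaluate via ternary digits, and one can check the length-preservation claim made below:

```python
def cantor(x, depth=40):
    """Approximate the Cantor function c(x) via the ternary expansion of x."""
    if x >= 1:
        return 1.0
    result, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:          # x fell into a removed middle third: c is constant there
            return result + scale
        result += scale * (digit // 2)
        scale /= 2
    return result

def f(x):
    return cantor(x) + x        # the homeomorphism from the post

# the removed interval (1/3, 2/3) maps to an interval of the same length
a, b = 1 / 3, 2 / 3
print(f(b) - f(a), b - a)       # both 1/3 (up to floating point)
```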
We should also observe that $f$ maps the intervals of $[0,1]$ which are removed during the construction of the Cantor set $\mathscr{C}$ to intervals of $[0,2]$ of the same length**. This implies $$\mu(f([0,1]\smallsetminus \mathscr{C}))=\mu([0,1]\smallsetminus \mathscr{C})=1.$$But since $[0,2]= f(\mathscr{C})\sqcup f([0,1]\smallsetminus \mathscr{C})$, we see that $$2=\mu([0,2])=\mu(f(\mathscr{C}))+\mu(f([0,1]\smallsetminus \mathscr{C}))=\mu(f(\mathscr{C}))+1,$$whence$$\mu(f(\mathscr{C}))=1.$$ From this we deduce that $f(\mathscr{C})\subset[0,2]$ contains a non-measurable subset, say $N$ (see fact #1 in the introduction). And here is where we make our Claim: $f^{-1}(N)$ is Lebesgue measurable but not Borel.
This is easy to prove, but its substance lies in the following
Lemma: A strictly increasing function defined on an interval maps Borel sets to Borel sets.
Proof of Lemma
We follow exercises #45-47 of ch. 2 in Royden's Real Analysis (4ed). Let $f$ be any strictly increasing function defined on some interval. By our analysis above, we know that such a function is a homeomorphism. This fact enables us to show that $f$ maps Borel sets to Borel sets. To do so, it suffices to show that for any continuous function $g$ the set $$\mathscr{A}=\{E:g^{-1}(E) \text{ is Borel} \}$$is a $\sigma$-algebra containing the open sets. Once we show this, we can conclude that $\mathscr{A}$ contains all the Borel sets and therefore, taking $g$ to be $f^{-1}$ (which we know is continuous!), we'll have $(f^{-1})^{-1}(E)=f(E)$ is Borel for any Borel set $E$, which is what we want.
Showing $\mathscr{A}$ is a $\sigma$-algebra (the first two bullets) which contains the open sets (the third bullet) is simple enough (recall that $\mathscr{B}$ denotes the Borel sets):
- If $\{E_i\}\subset\mathscr{A}$ then $g^{-1}(\cup E_i)=\cup g^{-1}(E_i) \in \mathscr{B}$ since $\mathscr{B}$ is a $\sigma$-algebra, hence $\cup E_i\in \mathscr{A}$.
- If $E\in \mathscr{A}$ then $g^{-1}(E^c)=(g^{-1}(E))^c\in\mathscr{B}$ since $\mathscr{B}$ is a $\sigma$-algebra, hence $E^c\in\mathscr{A}$.
- If $U$ is open, then $g^{-1}(U)$ is open and thus an element of $\mathscr{B}$. Hence $U\in\mathscr{A}$.
We are now ready for the proof of the claim.
Proof of Claim
Since $N\subset f(\mathscr{C})$, we know that $f^{-1}(N)\subset \mathscr{C}$ is measurable (and has measure zero) since it is a subset of a zero set and the Lebesgue measure is complete. Moreover, $f^{-1}(N)$ is not Borel! If it were, then since $f$ maps Borel sets to Borel sets by our Lemma, we'd have that $f(f^{-1}(N))=N$ is Borel. But that's impossible since $N$ isn't even measurable! This proves the claim.
Footnotes
* Proof: Let $h=f^{-1}:[0,2]\to[0,1]$ and suppose $U\subset[0,1]$ is open. Then $[0,1]\smallsetminus U$ is closed and bounded, hence compact. Since $f$ is continuous, $f([0,1]\smallsetminus U)$ is compact and therefore closed. But since $f$ is 1-1, we can rewrite this set as \begin{align*} f([0,1]\smallsetminus U)&=f([0,1])\smallsetminus f(U)\\ &=[0,2]\smallsetminus f(U)\\ &=[0,2]\smallsetminus h^{-1}(U) \end{align*}which allows us to conclude $h^{-1}(U)$ is open.
** Proof: This follows simply because $c$ is constant on any interval in $[0,1]\smallsetminus \mathscr{C}$. Indeed, for any interval $(a,b)\subset[0,1]\smallsetminus\mathscr{C}$, we have $c(a)=c(b)$ and so \begin{align*} \mu((f(a),f(b)))&=f(b)-f(a)\\ &=c(b)+b-c(a)-a\\ &=b-a.\end{align*}
References
- Much of today's discussion is taken from here.
- see also Real Analysis (4ed) by Royden, section 2.7, Propositions 21 and 22
|
I would like to know what the selection criteria are for a problem to get reshared by the Best-of feeds. Is it the number of people who reshare it or solve it, or must it have a solution, or what?
Note by Milun Moghe 5 years, 7 months ago
If a problem is interesting, it will get reshared. Problems that are convoluted, unmotivating or unclear, will often not make the cut.
To improve your problem writing, think about what problems you (or others) like, and why. Sometimes, phrasing a question in a short concise manner is good, while at other times a lengthy description provides more motivation. Choosing an appropriate title or image can also be extremely helpful.
There is no hard and fast rule regarding the number of people that have viewed / solved / liked / reshared. If a problem has been liked and reshared by different members, that tends to be an indication that many people find it interesting, and hence it will get reshared. I'm aware that harder problems will naturally have a much lower view / solve rate. A good solution which explains the problem clearly would be of great value to our members, hence I prefer to reshare problems with solutions. I do email individual people about adding a solution to their problem, if I feel that many people would benefit from it. I would encourage you to add solutions to your problems, which also helps you double check that you have the correct answer.
It has a better chance of getting selected if it is reshared by a staff member... Also, the number of people who solved the problem may be taken into account.
So that means the problem shouldn't be too difficult. But level 5 problems are supposed to be difficult
Yeah, but might I suggest you write your problems using better LaTeX? I don't know... it seems that your problems are too long to read. Make them shorter.
@Anish Puthuraya – Are you talking about the content of the problem? Do you mean the information is too long, the font size is too big, or that I elaborate too much and make it too complicated? As for me, the mistakes I find in my problems are usually bits of missing information from typing them up.
@Milun Moghe – Information is too long...
@Anish Puthuraya – OK, thanks for your suggestion. From now on I'll try to make my problems shorter and more precise.
Until now, the problems I have posted could not be solved by many people; is that the issue?
To clarify, neither of your conditions has a huge impact on the selection of a problem.
ok thanks then
|
This morning I saw a tweet mentioning an article with the exciting title:
Counterintuitive problem: Everyone in a room keeps giving dollars to random others. You’ll never guess what happens next
So naturally, I was quite excited and wanted to see what happens next! The problem under consideration is:
Imagine a room full of 100 people with 100 dollars each. With every tick of the clock, every person with money gives a dollar to one randomly chosen other person. After some time progresses, how will the money be distributed?
Dan Goldstein, the writer of the article, argues that many people believe that the money gets more or less evenly distributed. Sometimes you gain some, sometimes you lose some; that's the idea.
Turns out, this is not really the case (or actually really not). To show this, Dan Goldstein made a simulation in R and a movie to show the results. What you see is the amount of money each person has in each round. The bottom figure is the interesting one: individuals (on the x-axis) are sorted by the amount of money they possess, so you can actually see the distribution of the money as time passes.
The simulation made by Dan is an exact simulation. In each round, each player with money picks at random the player to give their dollar to. Then all the money is counted and the next round can begin. I was wondering if I could come up with a simulation based on probability and statistics.
Here is my train of thought:
There are $m$ players. In each round, each player (with money) gives 1 dollar to some other player. For a single player $A$, there are $m-1$ other players, so in each round player $A$ will receive either 0, 1, 2, …, or $m-1$ dollars. If none of the other players picks player $A$, player $A$ receives 0 dollars that round. If exactly 1 of the others picks player $A$, this player receives 1 dollar, etc. Seen from the perspective of an individual player: there are $n = m-1$ other players that each give a dollar to me with probability $p = \frac{1}{m-1}$ (in a single round). That sounds a lot like a binomial distribution, the distribution of the number of successes (does another player give their dollar to me?) in a sequence of $n$ independent experiments, all with the same probability $p$. The binomial distribution with $n$ experiments, $k$ successes and probability $p$ of a success is given by:
\begin{align} \Pr(X = k) = {n \choose k}p^k(1-p)^{n-k} \end{align}
Now we can draw samples from this distribution, one for each player of the game in a single round. Such a sample denotes the amount of money each single player receives in this round. This amount should naturally equal the amount that is spent each round. However, because the amount that is received is now stochastic (sampled from a binomial distribution), there is no guarantee that this is the case.
Nonetheless, we know the mean of the binomial distribution equals $np = (m-1)\cdot\frac{1}{m-1} = 1$. So each player is expected to gain a single dollar each round. Because this is also the amount each person gives away, the expected amount of money in the game will remain constant. For now, I am quite okay with this assumption. And in fact, this makes sure that the expected total amount of money in the game is stable.
With the binomial distribution we can generate the following (incomplete) pay-off table.
| Money lost | Money gained | Net money | Probability |
| --- | --- | --- | --- |
| 1 | 0 | -1 | 36.6% |
| 1 | 1 | 0 | 37.0% |
| 1 | 2 | 1 | 18.5% |
| 1 | 3 | 2 | 6.1% |
| 1 | 4 | 3 | 1.5% |
| 1 | 5 | 4 | 0.3% |
| .. | .. | .. | .. |
| 1 | 99 | 98 | 0.0% |
The following video shows a single run of 50 people distributing money for around 5000 rounds; they start with 50 dollars each.
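As a sketch (my own Python reconstruction, not Dan's R code), the binomial version of the game takes only a few lines; note that it assumes every player can still pay, the very assumption revisited in the edit below:

```python
import numpy as np

rng = np.random.default_rng(42)   # seed chosen arbitrarily
m, start, rounds = 100, 100, 5000

money = np.full(m, start)
for _ in range(rounds):
    # each of the other m-1 players gives me a dollar with probability 1/(m-1)
    gained = rng.binomial(m - 1, 1 / (m - 1), size=m)
    money += gained - 1           # everyone also pays out one dollar

print(sorted(money)[:5], sorted(money)[-5:])   # a few big losers and winners
```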
Now the thing is that (in hindsight, admittedly) I don't think the result is actually so counter-intuitive. And for that we will make a de-tour through coin-flipping. Consider a fair coin, so that the chance of getting heads on a single flip is $p = \frac{1}{2}$. Now, if I flip the coin 10 times, I expect to see 5 ($= 10 \cdot \frac{1}{2}$) heads. Let's do this a couple of times and note the number of heads we see after 10 flips:
4
6
5
4
4
5
5
9 (!!)
And there it is: although we very often see values close to the expected value (5), if we repeat this experiment long enough we run into an example where we see 9 heads! That's a lot! When this happens in real life, people often shout at each other: "What a coincidence!!!" For example when they run into somebody they know while on vacation in a distant country.
Although if you think about it for a minute, it is not such a big coincidence at all. It's just the result of repeating the same experiment over and over. Most of the time you will end up with the thing you expect, but sometimes you get a totally different result. Now, to return to the topic of this post: most of the people will remain around the amount of money they initially started with. In some rounds they will gain some money, in other rounds they will lose some. But there are also some lucky people, who will be on a streak for 10 or even 20 rounds and gather a fair amount of money over this time. For the same reasons there will also be players with a losing streak. This will not happen to many people, but remember that we are playing the game with 100 persons. And the chance that this happens to at least one of the 100 players is actually quite large.
Stated in other words: the chance of a winning streak happening to one specific player is very small. But when we have 100 players, each of which has a small chance of going on a winning (or losing) streak, we definitely expect it to happen to at least one of the players. Even though the individual chances are very small, simply because we repeat the experiment very often we expect some freak results.
Edit based on a question from Dan Goldstein
This model works fine as long as all players have at least 1 dollar. If this is not the case, then those people who don't have money cannot participate in giving money (naturally, they can still receive money). So far, the model does not take this into account. Initially, I thought the model could be easily extended to cover this case. When we denote the number of people who don't have money in round $i$ by $l_{(i)}$, then we set the parameters for the binomial distribution in round $i$ to:
\begin{align} n_{(i)} &= m - 1 - l_{(i)} \\ p_{(i)} &= \frac{1}{m - 1} \end{align}
The probability that a player receives a payment from another one thus remains the same, but the number of people that distribute money is reduced by the people without money. Seems easy, seems logical. However, if we compute it, the total amount to be spent in each round ($m - l$) no longer equals the expected amount to be received ($m \cdot n \cdot p$), as can be seen from:
\begin{align} m \cdot n \cdot p &\stackrel{?}{=} m - l \\ m \cdot (m - 1 - l) \cdot \frac{1}{m-1} &\stackrel{?}{=} m - l \\ m^2 - m - ml &\stackrel{?}{=} (m - l)(m - 1) \\ 0 &\stackrel{?}{=} l \end{align}
Which does not hold whenever $l > 0$. Thinking a little more about the problem, one can see that we actually have two different regimes, depending on whether people have money or not. For those people with money, the rules above for $n_{(i)}$ and $p_{(i)}$ are correct. But for people without money, the number of people who can give them money becomes $n_{(i)} = m - l_{(i)}$, which translates to all the other people minus the people who don't have any money, except me. Let's do the math to see if this checks out:
\begin{align} m \cdot n \cdot p &\stackrel{?}{=} m - l \\ (m-l) \cdot \frac{m-1-l}{m-1} + l \cdot \frac{m-l}{m-1} &\stackrel{?}{=} m - l \\ \frac{m^2 - lm - m + l}{m-1} &\stackrel{?}{=} m - l \\ \frac{(m-1)(m-l)}{m-1} &\stackrel{?}{=} m - l \\ m - l &= m - l \end{align}
And it does! This makes the updating of the bank in each round a little more complicated, but not a lot. And now the simulation can also deal with people going broke. I like it!
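A sketch of the corrected update (my own Python transcription of the two regimes above):

```python
import numpy as np

def round_update(money, rng):
    m = money.size
    broke = money == 0
    l = broke.sum()
    p = 1 / (m - 1)
    gained = np.zeros(m, dtype=int)
    # players with money: n = m - 1 - l potential givers (all others minus the broke)
    gained[~broke] = rng.binomial(m - 1 - l, p, size=m - l)
    # players without money: n = m - l potential givers (every player with money)
    gained[broke] = rng.binomial(m - l, p, size=l)
    money = money + gained
    money[~broke] -= 1            # only players who had money pay a dollar
    return money

rng = np.random.default_rng(0)
money = np.full(100, 100)
for _ in range(5000):
    money = round_update(money, rng)
```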
|
Is it possible to provide an explanation of the observations of the Stern-Gerlach experiment using classical theories?
Some Considerations:
We consider the standard set-up for the Stern-Gerlach experiment. The predominant component of $\vec{B}$ is $B_{z}$; moreover, $B_{z}$ varies most strongly with changes in $z$:
$$\vec{F}=\nabla(\vec{\mu}.\vec{B})\approx\vec{e}_{z}\mu_{z}\frac{\partial B_{z}}{\partial z}=\vec{e}_{z}F_{z}$$
The force acting on the electrons is supposed to cause the deflection. This causes an acceleration in the z direction and hence an increase in the KE in the z direction. The total KE of the electron (taking the three directions together) cannot change, since a magnetic field can only curve the path of an electron; it cannot change the magnitude of its speed. An increase of speed in the z direction may be compensated by a decrease of speed in the x or in the y direction. Changes in the value of $B_z$ due to the accelerated motion of the electrons are accompanied by the creation of an electric field: $$\operatorname{curl} \vec{E}=-\frac{\partial \vec{ B}}{\partial t}$$ The decrease of magnetic energy equals the increase in electrical energy, if the total KE remains unchanged for each particle. When the particles pass out of the region of interaction with the magnetic field, the electrical energy restores the energy of the magnetic field.
Prior to this, while the interaction is going on, we may write the curl $\vec{E}$ equation in integral form as $$\oint\vec{E}\cdot\vec{dl}=-\frac{d}{dt}\iint\vec{B}\cdot\vec{ds}$$ The integral on the LHS is a closed line integral whose plane lies in the x-y direction. The electrons seem to get accelerated in the x-y direction due to the emf in action, and this should tend to restore the acceleration in the z-direction. The electrical effect is just a temporary one.
Now, the greater the deflecting force (due to a higher value of $\mu_{z}$), the greater the decrease in magnetic energy and the greater the acceleration in the x-y plane. The restoring effect becomes stronger for larger values of the magnetic moment in the z-direction. Incidentally, for each value/magnitude of $\mu_{z}$ we have to consider the two directions, the +z and the -z direction.
|
Eigenvalues are probably one of the most important metrics which can be extracted from matrices. Together with their corresponding eigenvectors, they form the fundamental basis for many applications. Calculating the eigenvalues of a given matrix is straightforward and implementations exist in many libraries. But sometimes the concrete matrix is not known in advance, e.g. when the matrix values are based on some bounded input data. In this case, it may be good to give at least some estimation of the range in which the eigenvalues can lie. As the name of this article suggests, there is a theorem intended for this use case, and it will be discussed here.
For a square \( n \times n\) matrix \(A\) the Gershgorin circle theorem returns a range in which the eigenvalues must lie by simply using the information from the rows of \(A\). Before looking into the theorem though, let me remind the reader that eigenvalues may be complex valued (even for a matrix which contains only real numbers). Therefore, the estimation lives in the complex plane, meaning we can visualize the estimation in a 2D coordinate system with the real part as \(x\)- and the imaginary part as the \(y\)-axis. Note also that \(A\) has a maximum of \(n\) distinct eigenvalues.
For the theorem, the concept of a Gershgorin disc is relevant. Such a disc exists for each row of \(A\), is centred around the diagonal element \(a_{ii}\) (which may be complex as well), and the absolute sum \(r_i\) of the other elements in the row constrains the radius. The disc is therefore defined as

\begin{equation} \label{eq:GershgorinCircle_Disc} C_i = \left\{ x \in \mathbb{C} : \left| x - a_{ii} \right| \leq r_i \right\} \end{equation}

with the corresponding row sum\begin{equation} \label{eq:GershgorinCircle_Disc_RowSum} r_i = \sum_{\substack{j=1 \\ j\neq i}}^n \left|a_{ij}\right| \end{equation}
(absolute sum of all row values except the diagonal element itself). As an example, let's take the following definition for \(A\):\begin{equation*} A = \begin{pmatrix} 4 & 3 & 15 \\ 1 & 1+i & 5 \\ -8 & -2 & 22 \end{pmatrix}. \end{equation*}
There are three Gershgorin discs in this matrix:
- \(C_1\) with the centre point \(a_{11} = 4\) and radius \(r_1 = \left|3\right| + \left|15\right| = 18\)
- \(C_2\) with the centre point \(a_{22} = 1+i\) and radius \(r_2 = \left|1\right| + \left|5\right| = 6\)
- \(C_3\) with the centre point \(a_{33} = 22\) and radius \(r_3 = \left|-8\right| + \left|-2\right| = 10\)
We now have all the ingredients for the statement of the theorem:
Every eigenvalue \(\lambda \in \mathbb{C}\) of a square matrix \(A \in \mathbb{R}^{n \times n}\) lies in at least one of the Gershgorin discs \(C_i\) (\eqref{eq:GershgorinCircle_Disc}). The possible range of the eigenvalues is defined by the outer borders of the union of all discs\begin{equation*} C = \bigcup_{i=1}^{n} C_i. \end{equation*}
The union, in the case of the example, is \(C = C_1 \cup C_2 \cup C_3\) and based on the previous information of the discs we can now visualize the situation in the complex plane. In the following figure, the discs are shown together with their disc centres and the actual eigenvalues (which are all complex in this case)\begin{equation*} \lambda_1 = 13.4811 - 7.48329 i, \quad \lambda_2 = 13.3749 + 7.60805 i \quad \text{and} \quad \lambda_3 = 0.14402 + 0.875241 i. \end{equation*}
Indeed, all eigenvalues lie in the blue area defined by the discs. But you also see from this example that not all discs have to contain an eigenvalue (the theorem does not state that each disc has one eigenvalue). E.g. \(C_3\) on the right side does not contain any eigenvalue. This is why the theorem makes only a statement about the complete union and not about each disc independently. Additionally, you can also see that one disc can be completely contained inside another disc, as is the case with \(C_2\), which lies inside \(C_1\). In this case, \(C_2\) does not give any useful information at all, since it does not expand the union \(C\) (if \(C_2\) were missing, nothing would change regarding the complete union of all discs, i.e. \(C=C_1 \cup C_2 \cup C_3 = C_1 \cup C_3\)).
If we want to estimate the range in which the eigenvalues of \(A\) will lie, we can use the extrema values from the union, e.g.\begin{equation*} \left[4-18; 22+10\right]_{\operatorname{Re}} = \left[-14; 32\right]_{\operatorname{Re}} \quad \text{and} \quad \left[0 - 18; 0 + 18\right]_{\operatorname{Im}} = \left[-18; 18 \right]_{\operatorname{Im}} \end{equation*}
for the real and the complex range, respectively. This defines nothing else than a rectangle containing all discs. Of course, the rectangle is an even more inaccurate estimation than the discs already are, but the ranges are easier to handle (e.g. to decide whether a given point lies inside the valid range or not). Furthermore, if we have more information about the matrix, e.g. that it is symmetric and real-valued and therefore has only real eigenvalues, we can discard the complex range completely.
In summary, with the help of the Gershgorin circle theorem, it is very easy to give an estimation of the eigenvalues of some matrix. We only need to look at the diagonal elements and corresponding sum of the rest of the row and get a first estimate of the possible range. In the next part, I want to discuss why this estimation is indeed correct.
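Before moving on, here is a small numerical sketch (my own illustration using numpy) that computes the discs for the example matrix \(A\) and verifies the theorem:

```python
import numpy as np

A = np.array([[4, 3, 15],
              [1, 1 + 1j, 5],
              [-8, -2, 22]], dtype=complex)

centres = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centres)  # absolute row sums without the diagonal

print(centres)   # [ 4.+0.j  1.+1.j 22.+0.j]
print(radii)     # [18.  6. 10.]

# every eigenvalue must lie in at least one Gershgorin disc
for lam in np.linalg.eigvals(A):
    assert any(abs(lam - c) <= r for c, r in zip(centres, radii))
```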
Let's start again with a 3-by-3 matrix called \(B\) but now I want to use arbitrary coefficients\begin{equation*} B = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix}. \end{equation*}
Any eigenvalue \(\lambda\) with corresponding eigenvector \(\fvec{u} = (u_1,u_2,u_3)^T\) for this matrix is defined as\begin{align*} B\fvec{u} &= \lambda \fvec{u} \\ \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix} \begin{pmatrix} u_{1} \\ u_{2} \\ u_{3} \end{pmatrix} &= \lambda \begin{pmatrix} u_{1} \\ u_{2} \\ u_{3} \end{pmatrix} \end{align*}
Next, let's see how the equation for each component of \(\fvec{u}\) looks. I select \(u_1\) and also assume that this is the largest absolute component[1] of \(\fvec{u}\), i.e. \(\max_i{\left|u_i\right|} = \left|u_1\right|\). This is a valid assumption since one component must be the maximum and there is no restriction on the component number to choose for the next discussion. For \(u_1\) this results in the following equation, which will directly be transformed a bit:

\begin{align*} b_{11} u_1 + b_{12} u_2 + b_{13} u_3 &= \lambda u_1 \\ \left| \lambda - b_{11} \right| \left| u_1 \right| &= \left| b_{12}u_2 + b_{13}u_3 \right| \end{align*}

All \(u_1\) parts are placed on one side together with the diagonal element and I am only interested in the absolute value. For the right side, there is an estimation possible\begin{equation*} \left| b_{12}u_2 + b_{13}u_3 \right| \leq \left| b_{12}u_2 \right| + \left| b_{13}u_3 \right| \leq \left| b_{12}u_1 \right| + \left| b_{13}u_1 \right| = \left| b_{12} \right| \left| u_1 \right| + \left| b_{13} \right| \left| u_1 \right| \end{equation*}
First, two approximations: with the help of the triangle inequality for the \(L_1\) norm[2] and with the assumption that \(u_1\) is the largest component. Last but not least, the product is split up. In short, this results in

\begin{equation*} \left| \lambda - b_{11} \right| \leq \left| b_{12} \right| + \left| b_{13} \right| = r_1 \end{equation*}

where \(\left| u_1 \right|\) is cancelled on both sides. This states that the eigenvalue \(\lambda\) lies within the radius \(r_1\) (cf. \eqref{eq:GershgorinCircle_Disc_RowSum}) around \(b_{11}\) (the diagonal element!). For complex values, this defines the previously discussed discs.
Two notes on this insight:
- The result is only valid for the maximum component of the eigenvector. Note also that we usually don't know which component of the eigenvector is the maximum (if we knew, we would probably not need to estimate the eigenvalues in the first place because we would already have them).
- In the explanation above only one eigenvector was considered. But usually there are more (e.g. three in the case of matrix \(B\)). The result is therefore true for each maximum component of each eigenvector.
This also implies that not every eigenvector gives new information. It may be possible that for multiple eigenvectors the first component is the maximum. In this case, one eigenvector would have been sufficient. As an example, let's look at the eigenvectors of \(A\). Their absolute value is defined as (maximum component highlighted)\begin{equation*} \left| \fvec{u}_1 \right| = \begin{pmatrix} {\color{Aquamarine}1.31711} \\ 0.40112 \\ 1 \end{pmatrix}, \quad \left| \fvec{u}_2 \right| = \begin{pmatrix} {\color{Aquamarine}1.33013} \\ 0.431734 \\ 1 \end{pmatrix} \quad \text{and} \quad \left| \fvec{u}_3 \right| = \begin{pmatrix} 5.83598 \\ {\color{Aquamarine}12.4986} \\ 1 \end{pmatrix}. \end{equation*}
As you can see, the third component is never the maximum. But this is consistent with the example from above: the third disc \(C_3\) did not contain any eigenvalue.
What the theorem now does is some kind of worst-case estimate. We now know that if one component of some eigenvector is the maximum, the row corresponding to this component defines a range in which the eigenvalue must lie. But since we don't know which component will be the maximum, the best thing we can do is to assume that every component is the maximum in some eigenvector. In this case, we need to consider all diagonal elements and the corresponding absolute sums of the rest of each row. This is exactly what was done in the example above. There is another nice feature which can be derived from the theorem when we have disjoint discs. This will be discussed in the next section.
Additional statements can be extracted from the theorem when we deal with disjoint disc areas[3]. Consider another example with the following matrix \(D\). Using Gershgorin discs, this results in a situation like the one shown in the following figure.
This time we have one disc (centred at \(d_{33}=9+10i\)) which does not share a common area with the other discs. In other words: we have two disjoint areas. The question is: does this give us additional information? Indeed, it is possible to state that there is exactly one eigenvalue in the third disc.
Let \(A \in \mathbb{R}^{n \times n}\) be a square matrix with \(n\) Gershgorin discs (\eqref{eq:GershgorinCircle_Disc}). Then each joint area defined by the discs contains as many eigenvalues as discs contributed to the area. If the set \(\tilde{C}\) contains \(k\) discs which are disjoint from the other \(n-k\) discs, then \(k\) eigenvalues lie in the range defined by the union\begin{equation*} \bigcup_{C \in \tilde{C}} C \end{equation*}of the discs in \(\tilde{C}\).
In the example, we have exactly one eigenvalue in the third disc and exactly two eigenvalues somewhere in the union of disc one and two[4]. Why is it possible to restrict the estimation when we deal with disjoint discs? To see this, let me first remind you that the eigenvalues of any diagonal matrix are exactly the diagonal elements themselves. Next, I want to define a new function which separates the diagonal elements from the off-diagonals:

\begin{equation*} \tilde{D}(\alpha) = D_1 + \alpha D_2 \end{equation*}

With \(\alpha \in [0;1]\) this smoothly adds the off-diagonal elements in \(D_2\) to the matrix \(D_1\) containing only the diagonal elements, starting from \(\tilde{D}(0) = \operatorname{diag}(D) = D_1\) and ending at \(\tilde{D}(1) = D_1 + D_2 = D\). Before we see why this step is important, let us first apply the same technique to a general 2-by-2 matrix\begin{align*} F &= \begin{pmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{pmatrix} \\ \Rightarrow \tilde{F}(\alpha) &= F_1 + \alpha F_2 \end{align*}
If we now want to calculate the eigenvalues for \(\tilde{F}\), we need to find the roots of the corresponding characteristic polynomial, meaning\begin{align*} \left| \tilde{F} - \lambda I \right| &= 0 \\ \left| F_1 + \alpha F_2 - \lambda I \right| &= 0 \\ \left| \begin{pmatrix} f_{11} & 0 \\ 0 & f_{22} \end{pmatrix} + \alpha \begin{pmatrix} 0 & f_{12} \\ f_{21} & 0 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right| &= 0. \end{align*}
The solution for the first root of this polynomial and therefore the first eigenvalue is defined as\begin{equation*} \lambda(\alpha) = \frac{1}{2} \left(-\sqrt{{\color{Aquamarine}4 \alpha ^2 f_{12} f_{21}} +f_{11}^2+f_{22}^2-2 f_{11} f_{22}}+f_{11}+f_{22}\right). \end{equation*}
The thing I am driving at is the fact that the eigenvalue \(\lambda(\alpha)\) changes only continuously with increasing value of \(\alpha\) (highlighted position): the closer \(\alpha\) gets to 1, the more of the off-diagonals is added. In particular, \(\lambda(\alpha)\) does not suddenly jump somewhere completely different. I chose a 2-by-2 matrix because this point is easier to see here. Finding the roots of higher-dimensional matrices can become much more complicated. But the statement of continuously moving eigenvalues stays true, even for matrices with higher dimensions.
Now back to the matrix \(\tilde{D}(\alpha)\) from the example. We will now increase \(\alpha\) and see how this affects our discs. The principle is simple: just add both matrices together and apply the circle theorem to the resulting matrix. The following animation lets you perform the increase of \(\alpha\).
As you can see, the eigenvalues start at the disc centres because there only the diagonal elements remain, i.e. \(\tilde{D}(0) = D_1\). With increasing value of \(\alpha\), more and more of the off-diagonal elements are added, letting the eigenvalues move away from the centres. But note again that this transition is smooth: no eigenvalue suddenly jumps to a completely different position. Note also that at some point the discs for the first and second eigenvalue merge together.
Now the extended theorem becomes clear: if the eigenvalues start at the disc centres, don't jump around, and the discs don't merge at \(\alpha=1\), then each union must contain as many eigenvalues as discs contributed to this union. In the example, this gives us the proof that \(\lambda_3\) must indeed lie in the disc around \(d_{33}\).
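The homotopy argument is easy to replay numerically (a sketch; since the example matrix \(D\) is only partially shown above, I reuse the matrix \(A\) from the first example):

```python
import numpy as np

A = np.array([[4, 3, 15],
              [1, 1 + 1j, 5],
              [-8, -2, 22]], dtype=complex)
D1 = np.diag(np.diag(A))   # diagonal part: the eigenvalues start here at alpha = 0
D2 = A - D1                # off-diagonal part, blended in smoothly

for alpha in np.linspace(0, 1, 5):
    lams = np.linalg.eigvals(D1 + alpha * D2)
    print(f"alpha = {alpha:.2f}: {np.round(lams, 3)}")
```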
|
There is a good Planet Money episode on ticket scalping; I recommend it. The reason for banning ticket scalping has nothing to do with economic harm, and everything to do with making the arts (or sports, whatever) accessible to people of more modest means. Consider the fact that artists could, if they wanted, just auction off all the seats to their shows, ...
Following up on the excellent MWG diagram in Amstell's answer, the fundamental observation needed is that holding $p$ fixed, $e$ and $v$ are inverses of each other. $e$ tells us the amount we need to spend to get a certain amount of utility $u$, while $v$ tells us the maximum amount of utility we can get from a certain expenditure $w$. Whenever we want to ...
Not sure how much this will help, but the diagram in Mas-Colell p.75 is something I always have in mind when deriving these functions. I'm not sure what books you're using, but Microeconomics by Mas-Colell et al. is the go to graduate resource. But I prefer Microeconomic Analysis by Varian. Much easier to read and still has the important content needed ...
Yes, under some conditions. This is the classic integrability problem: for a detailed discussion, see some excellent notes by Kim Border. Several other technical conditions are required, but the most economically substantive condition is that the Slutsky matrix must always be symmetric and negative semidefinite. To be concrete, if we define the $ij$th element ...
Intuitively, a higher price for pears means that I have to give up more apples to be able to afford an extra pear (or, conversely, if I give up one pear then the number of extra apples that I can afford increases). This is going to make me want to reduce my pear consumption and increase my apple consumption (in other words, to substitute away from pears ...
Consider the Slutsky equation, $$\frac{\partial x}{\partial p} = \frac{\partial x^c}{\partial p} -\frac{\partial x}{\partial I} x.$$ A Giffen good is the case where the income effect $\frac{\partial x}{\partial I} x$ is negative and large enough (in magnitude) so that $\frac{\partial x}{\partial p} > 0$. From Wikipedia: There are three necessary ...
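As an illustration (my own numerical check, not part of the quoted answer), the identity can be verified by finite differences for a Cobb-Douglas example $u=\sqrt{xy}$, where the Marshallian demand is $x = I/2p_x$ and the Hicksian demand is $x^c = u\sqrt{p_y/p_x}$:

```python
import numpy as np

# Cobb-Douglas u = sqrt(x*y): Marshallian and Hicksian demands for good x
def x_marshall(px, py, I):
    return I / (2 * px)

def x_hicks(px, py, u):
    return u * np.sqrt(py / px)

px, py, I = 2.0, 3.0, 10.0
u = I / (2 * np.sqrt(px * py))     # utility level reached at (px, py, I)
h = 1e-6

dx_dp  = (x_marshall(px + h, py, I) - x_marshall(px - h, py, I)) / (2 * h)
dxc_dp = (x_hicks(px + h, py, u) - x_hicks(px - h, py, u)) / (2 * h)
dx_dI  = (x_marshall(px, py, I + h) - x_marshall(px, py, I - h)) / (2 * h)

# Slutsky: dx/dp = dx^c/dp - (dx/dI) * x
print(dx_dp, dxc_dp - dx_dI * x_marshall(px, py, I))   # both approximately -1.25
```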
Here's a figure to explain: Starting from the old price line, where the optimal consumption bundle is point $A$, we increase the price of $y$ to get the new price line. The Slutsky compensation says that we have to give the consumer enough extra income so that he can afford the old bundle ($A$) at the new prices. Thus, we shift the new budget constraint out ...
Quasilinear utility functions are useful in much of the demand estimation literature, particularly in discrete choice. For instance, check out Berry 1994, Berry Levinsohn Pakes 1995 and the many applications in Nevo's papers on demand estimation (here's a "practitioner's guide"). Ken Train's book on it is available for free here! To summarize, they can lead ...
There is a rather low probability for the demand of a good to exhibit the Giffen property at the market level, where averaging over heterogeneous preferences, different income levels and the consequent differentiated behavior will usually offset Giffen phenomena. Looking at @jmbejara's answer, goods that are likely to satisfy all three necessary conditions are drugs ...
A good is normal if its demand is increasing in income. So let $p_x$ and $p_y$ be the prices of the goods with quantities $x$ and $y$, and let $m$ be income. Suppose $ax>by$. Then $\min\{ax,by\}=by$. By slightly reducing $x$ and spending the saved money on $y$, one gets a better bundle. For an optimal bundle, this cannot be. Similarly, it cannot be ...
The primary literature concerned with this type of question (at least where classical results break down) is behavioral economics. There's a great general compilation of papers put together by the Russell Sage Foundation called the "Behavioral Economics Reading List" that includes, among other things, a General Introduction section with overview papers by ...
It's called a Principal-Agent Conflict. The RIAA/MPAA act as agents on behalf of the people who actually produce content (and consequently end-consumer value). To maintain relevance to their principals, the RIAA/MPAA must signal value to them (i.e. claim loudly and repeatedly that they do something good for them [regardless of the validity of that claim])...
Consider a preference relation in $\mathbb{R}^2$ such that $x=(x_1,x_2)\succsim (y_1,y_2)=y$ $\iff$ $x_1\geq y_1$ and $x_2\geq y_2$.1) You might like to argue whether this preference relation is strictly monotonic and continuous.2) Is the relation defined above complete?Then, as a side dish, you might also reconsider your claim that continuity is the ...
Here's a "no maths" explanation (including the inferior goods case, because I think it helps to understand what's going on):Suppose we have a normal good, $x$, and we increase its price. Marshallian demand decreases thanks to two effects (i) consumers substitute away from $x$ towards cheaper alternatives; (ii) because prices are higher, consumers can ...
Yes it is. "If" direction:$$x \succ y \Rightarrow x \not \precsim y \Rightarrow u(x) > u(y).$$"Only if" direction: For all $x, y \in X$,$$x \succsim y \iff u(x) \geq u(y)$$implies$$x \sim y \iff u(x) = u(y).$$Also$$u(x) > u(y) \Rightarrow u(x) \geq u(y) \Rightarrow x \succsim y ,$$$$u(x) > u(y) \Rightarrow u(x) \not = u(y) \...
Here a short answer: Homothetic, identical preferences have the modelling advantage that the distribution of income across individuals does not matter for aggregate demand. That is, if you want to study, let's say, monetary policy where you do not expect changes in the distribution of income to affect your policy recommendations, then this is a reasonable ...
Looking more closely at your question, I think things should not be overly complicated. From Mas-Colell et al., Definition 3.C.1: The preference relation $\succsim$ on X is continuous if it is preserved under limits. That is, for any sequence of pairs $\{(x^n, y^n)\}^\infty_{n=1}$ with $x^n \succsim y^n$ for all $n$, $x = \lim_{n \rightarrow \infty} x^n$, ...
The problem is that there are no indifference "curves" but indifference "areas". Consider the following graph:For a reference bundle $A$ (equivalent to $\{2,3\}$), the gray regions indicate the areas of indifference, based on your definition of preferences (the black lines are part of the indifference areas).Thus, by selecting any bundle, you can find ...
Not really, you're right in that (loosely speaking) the MRS is the amount of one good someone is willing to give up in order to get an additional unit of another good. However, the slope of the budget line measures the amount of one good someone has to give up in order to get an additional unit of another good.In the first case, only preferences matter, ...
The concept of "marginal utility" (and therefore of decreasing such) has meaning only in the context of cardinal utility. Assume we have an ordinal utility index $u()$, on a single good, and three quantities of this good, $q_1<q_2<q_3$, with $q_2-q_1 = q_3-q_2$. Preferences are well behaved and satisfy the benchmark regularity conditions, so $$u(q_1)<...
Yes.We know that a monotonic transformation of a utility function still represents the same preferences and as the old utility function represented homothetic preferences the new one does, too.As an easy example you could look at Cobb-Douglas utility functions of the form $u(x,y) = a\left(x y\right)^\alpha$. For $\alpha = \frac12$ the utility function is ...
The usual textbook example of a Giffen good (i.e. a good whose demand curve slopes upwards) is the Irish potato famine. The idea is that as potatoes (a staple food) became more expensive, people could no longer afford expensive foods such as meat and so ended up buying more potatoes! However, this example has come in for criticism, not least of all because a ...
This post shows clearly why in the world of "standard" ordinal utility, concavity of a utility function cannot obtain an economically meaningful interpretation, although it may be useful as a mathematical property. But "standard" ordinal utility is not compatible with Econometrics, because Econometrics deals inherently with situations where there exists ...
It all depends on whether you treat the budget constraint as an equality or inequality constraint. These are two different problems, with two different solutions in this case.One version of the problem (rewriting the objective in the form suggested by denesp, and dropping the constant, for clarity) is\begin{align}\max~&-4(x-4.5)^2 -2(y-1.5)^2\\\text{...
The simple answer is that they don't think they would make as much money.In many countries illegally downloading music or movies is getting harder and harder. The recording industry has achieved this by persuading governments to instruct the ISPs to block torrent sites, torrent proxy sites and sites that list proxy sites completely so no one can access ...
I am assuming that the following facts do not require proofs for the purposes of this question. Fact 1: Let $h_n$ be a sequence in $\mathbb{R}^K$ such that $\lim_{n\rightarrow \infty} h_n =h\in \mathbb{R}^K$. Then, for each $i\in \{1,2,\ldots,K\}$, we have $h^i_n\rightarrow h^i$. Fact 2: Let $z_n$ and $q_n$ be sequences in $\mathbb{R}$ such that $\lim_{n\...
Let me answer the question by following @HRSE's explanation and recommending a good reading. Eaton and Kortum (Ecta, 2002) use homothetic preferences, a convenient assumption to get a tractable general equilibrium Ricardian model of trade. However, there is exhaustive evidence that the income elasticity of demand varies across goods and that this variation ...
To begin with, I think the question is wrongly stated. For if the definition of a thin indifference curve is such that continuity of a consumer's preferences implies thin indifference curves, then, surely, continuity implies thin indifference curves... This answers your question. However, if we are to make a suitable definition of a thin indifference ...
I don't think continuity alone is enough to guarantee thin indifference curves.Consider preferences such that, for any $x$ and $y$ in the choice set, the consumer is indifferent between $x$ and $y$. This seems like it must fit any definition of a thick indifference curve because the whole choice set lies on a single indifference curve!But these ...
|
An inequation is an algebraic expression formed by numbers, a variable that we will call $$x$$ and a symbol of inequality.
Examples of inequations would be:
$$x < 2$$
$$4x+2\geqslant -1$$
$$-x > -3+2x$$
In these cases, we would say that inequation 1 is already solved, because whenever $$x$$ takes values less than $$2$$ the inequality is satisfied, while inequations 2 and 3 remain to be solved, that is, we still have to find what values of $$x$$ satisfy the respective inequations.
Solution of an inequation
Given an inequation, we will think that we have solved it when we find an expression like $$x < a$$, $$x > a$$, $$x\leqslant a$$ or $$x\geqslant a$$, $$a$$ being a number. Once we have this expression we can already say that for the inequation to be true, $$x$$ should satisfy the condition found and we will claim that the inequation is solved.
Examples of solutions would be: $$$x < 2, \ x > 3, \ x\leqslant -1, \ x\geqslant 6$$$
The following would also be examples of solutions: $$$2 < x, \ -1\geqslant x$$$
since, using the symmetry property of inequalities, they are equivalent to $$$x > 2, \ x\leqslant -1$$$
Resolution of inequations
We have already learned how to solve first degree equations, so now we are going to learn how to solve first degree inequations.
The method to solve these inequations is the same as that for solving equations, even though there are small changes.
To start, let's see the analogy that exists between solving a first degree equation and a first degree inequation:
We will solve the equation $$2(x-5)=2$$ and the inequation $$2(x-5)\geqslant 2$$.
Let's solve the equation: $$$ 2(x-5)=2 \Rightarrow 2x-10=2 \Rightarrow 2x=2+10 \Rightarrow 2x=12 \Rightarrow x=\dfrac{12}{2} \Rightarrow x=6 $$$
and we say that the solution is $$x=6$$.
On the other hand, let's solve the inequation: $$$ 2(x-5)\geqslant2 \Rightarrow 2x-10\geqslant2 \Rightarrow 2x\geqslant2+10 \Rightarrow 2x\geqslant12 \Rightarrow x\geqslant\dfrac{12}{2} \Rightarrow x\geqslant6 $$$
and we say that the solution is $$x\geqslant6 $$, that is, $$x$$ can take any value greater than or equal to six.
Notice that the resolution method has been the same for both exercises; therefore, what is the difference between the process of solving an inequation and solving an equation?
To answer this question, let's see how the inequalities change when we operate with additions, subtractions, multiplications and divisions:
Addition and Subtraction
Let $$A$$, $$B$$ and $$C$$ be any three numbers, then:
if $$A < B \Rightarrow \left\{ \begin{array}{l} A+C < B+C \\ A-C < B-C \end{array} \right. $$
if $$A > B \Rightarrow \left\{ \begin{array}{l} A+C > B+C \\ A-C > B-C \end{array} \right. $$
As we can see, we can add or subtract the same value on each side of the inequality without having problems with the symbol of the inequality.
This property has already been studied in the topic on equations since we could add or subtract the same value on each side of the equality.
This property allows us to add and to subtract the same value on each side of an inequation in order to be able to isolate the variable $$x$$ on one side of the inequation.
Let’s solve the inequation $$x+3 < 4$$. $$$ x+3 < 4 \Rightarrow x+3-3 < 4-3 \Rightarrow x < 4-3 \Rightarrow x < 1 $$$
Multiplication and division
When multiplying or dividing both sides of an inequation by a value, it is possible that we must change the symbol of the inequality: from less than to greater than or vice versa (the same goes for less than or equal to, and the other way round).
Let $$A$$, $$B$$ and $$C$$ be any three numbers, then:
If $$C$$ is positive and $$A < B$$ then $$A\cdot C < B\cdot C \ $$ and $$ \ \dfrac{A}{C} < \dfrac{B}{C}$$ (the inequality does not change).
If $$C$$ is positive and $$A > B$$ then $$A\cdot C > B\cdot C \ $$ and $$ \ \dfrac{A}{C} > \dfrac{B}{C}$$ (the inequality does not change).
If $$C$$ is negative and $$A < B$$ then $$A\cdot C > B\cdot C \ $$ and $$ \ \dfrac{A}{C} > \dfrac{B}{C}$$ (the inequality is reversed).
If $$C$$ is negative and $$A > B$$ then $$A\cdot C < B\cdot C \ $$ and $$ \ \dfrac{A}{C} < \dfrac{B}{C}$$ (the inequality is reversed).
The reason for the change in the order of the inequality if we multiply or divide by a negative number will be clearly seen in an example:
If $$A = 2$$ and $$B = 3$$ (we have that $$A < B$$ because $$2 < 3$$), then, we multiply by $$(-1)$$ and we obtain: $$$\left. \begin{array}{l} 2\cdot (-1)=-2 \\ 3\cdot (-1) =-3 \end{array} \right\} \Rightarrow -2<-3 \ \text{ FALSE, } \ -2 > -3 \ \text{ TRUE}$$$
We can see that we have had to change the order of the inequality in order to keep the expression true.
This property allows us to multiply and to divide by the same value on both sides of an inequation (in a similar way as we were doing with equations), and in this way we will be able to isolate our variable $$x$$ without problems on one of the sides of the inequation.
Given the inequation $$3x < 6$$, let's solve it: $$$ 3x < 6 \Rightarrow \dfrac{3x}{3} < \dfrac{6}{3} \Rightarrow x < \dfrac{6}{3} \Rightarrow x < 2$$$
Given the inequation $$-2x < 4$$, let's solve it: $$$ -2x < 4 \Rightarrow \dfrac{-2x}{-2} > \dfrac{4}{-2} \Rightarrow x > \dfrac{4}{-2} \Rightarrow x > -2$$$
Now that we already know how to add and to subtract, as well as to multiply and to divide on both sides of an inequation by a concrete value, we are already able to solve any first degree inequation.
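To finish, let us solve an inequation that uses both properties at once (the numbers here are chosen just for illustration): $$$ 3-4x \geqslant 11 \Rightarrow 3-4x-3 \geqslant 11-3 \Rightarrow -4x \geqslant 8 \Rightarrow \dfrac{-4x}{-4} \leqslant \dfrac{8}{-4} \Rightarrow x \leqslant -2 $$$ Note how the symbol was reversed in the last step, because we divided both sides by $$-4$$, a negative number.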
|
A simple control problem on a system usually involves a variable $x(t)$ that denotes the state of the system over time, and a variable $u(t)$ that denotes the input into the system over time. Linear constraints are used to capture the evolution of the system over time:$$x(t) = Ax(t - 1) + Bu(t), \ \mbox{for} \ t = 1,\ldots, T,$$
where the numerical matrices $A$ and $B$ are called the dynamics and input matrices, respectively.
The goal of the control problem is to find a sequence of inputs $u(t)$ that will allow the state $x(t)$ to achieve specified values at certain times. For example, we can specify initial and final states of the system:$$ \begin{align*} x(0) &= x_i \\ x(T) &= x_f \end{align*} $$
Additional states between the initial and final states can also be specified. These are known as waypoint constraints. Often, the input and state of the system will have physical meaning, so we often want to find a sequence of inputs that also minimizes a least squares objective like the following:$$ \sum_{t = 0}^T \|Fx(t)\|^2_2 + \sum_{t = 1}^T\|Gu(t)\|^2_2, $$
where $F$ and $G$ are numerical matrices.
We'll now apply the basic format of the control problem to an example of controlling the motion of an object in a fluid over $T$ intervals, each of $h$ seconds. The state of the system at time interval $t$ will be given by the position and the velocity of the object, denoted $p(t)$ and $v(t)$, while the input will be forces applied to the object, denoted by $f(t)$. By the basic laws of physics, the relationship between force, velocity, and position must satisfy:$$ \begin{align*} p(t+1) &= p(t) + h v(t) \\ v(t+1) &= v(t) + h a(t) \end{align*}. $$
Here, $a(t)$ denotes the acceleration at time $t$, for which we use $a(t) = f(t) / m + g - d v(t)$, where $m$, $d$, $g$ are constants for the mass of the object, the drag coefficient of the fluid, and the acceleration from gravity, respectively.
Additionally, we have our initial/final position/velocity conditions:$$ \begin{align*} p(1) &= p_i\\ v(1) &= v_i\\ p(T+1) &= p_f\\ v(T+1) &= 0 \end{align*} $$
One reasonable objective to minimize would be$$ \mbox{objective} = \mu \sum_{t = 1}^{T+1} (v(t))^2 + \sum_{t = 1}^T (f(t))^2 $$
We would like to keep both the forces small to perhaps save fuel, and keep the velocities small for safety concerns. Here $\mu$ serves as a parameter to control which part of the objective we deem more important, keeping the velocity small or keeping the force small.
The following code builds and solves our control example:
using Convex, SCS, Gadfly

# Some constraints on our motion
# The object should start from the origin, and end at rest
initial_velocity = [-20; 100]
final_position = [100; 100]
T = 100 # The number of timesteps
h = 0.1 # The time between time intervals
mass = 1 # Mass of object
drag = 0.1 # Drag on object
g = [0, -9.8] # Gravity on object

# Declare the variables we need
position = Variable(2, T)
velocity = Variable(2, T)
force = Variable(2, T - 1)

# Create a problem instance
mu = 1
constraints = []

# Add constraints on our variables
for i in 1 : T - 1
    constraints += position[:, i + 1] == position[:, i] + h * velocity[:, i]
end
for i in 1 : T - 1
    acceleration = force[:, i]/mass + g - drag * velocity[:, i]
    constraints += velocity[:, i + 1] == velocity[:, i] + h * acceleration
end

# Add position constraints
constraints += position[:, 1] == 0
constraints += position[:, T] == final_position

# Add velocity constraints
constraints += velocity[:, 1] == initial_velocity
constraints += velocity[:, T] == 0

# Solve the problem
problem = minimize(sum_squares(force), constraints)
solve!(problem, SCSSolver(verbose=0))
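Note that the objective actually coded above keeps only the force term; the mu-weighted velocity term from the objective stated earlier is dropped (mu is defined but unused). For reference, here is a minimal sketch of the same discretized problem in Python with cvxpy, including that term. This is my own translation under the same constants, not part of the original example:

import cvxpy as cp
import numpy as np

T, h, mass, drag, mu = 100, 0.1, 1.0, 0.1, 1.0
g = np.array([0.0, -9.8])
initial_velocity = np.array([-20.0, 100.0])
final_position = np.array([100.0, 100.0])

p = cp.Variable((2, T))      # position
v = cp.Variable((2, T))      # velocity
f = cp.Variable((2, T - 1))  # force

constraints = [p[:, 0] == 0, v[:, 0] == initial_velocity,
               p[:, T - 1] == final_position, v[:, T - 1] == 0]
for t in range(T - 1):
    a = f[:, t] / mass + g - drag * v[:, t]   # acceleration at step t
    constraints += [p[:, t + 1] == p[:, t] + h * v[:, t],
                    v[:, t + 1] == v[:, t] + h * a]

objective = cp.Minimize(mu * cp.sum_squares(v) + cp.sum_squares(f))
cp.Problem(objective, constraints).solve()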
We can plot the trajectory taken by the object. The blue point denotes the initial position, and the green point denotes the final position.
pos = evaluate(position)
p = plot(
    layer(x=[pos[1, 1]], y=[pos[2, 1]], Geom.point, Theme(default_color=color("blue"))),
    layer(x=[pos[1, T]], y=[pos[2, T]], Geom.point, Theme(default_color=color("green"))),
    layer(x=pos[1, :], y=pos[2, :], Geom.line(preserve_order=true)),
    Theme(panel_fill=color("white")))
We can also see how the magnitude of the force changes over time.
p = plot(x=1:T, y=sum(evaluate(force).^2, 1), Geom.line, Theme(panel_fill=color("white")))
|
apex.normalization.fused_layer_norm
class apex.normalization.FusedLayerNorm(normalized_shape, eps=1e-05, elementwise_affine=True)
Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization .
Currently only runs on cuda() tensors.
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
The mean and standard-deviation are calculated separately over the last certain number of dimensions, which have to be of the shape specified by normalized_shape. \(\gamma\) and \(\beta\) are learnable affine transform parameters of normalized_shape if elementwise_affine is True.
Note
Unlike Batch Normalization and Instance Normalization, which apply scalar scale and bias for each entire channel/plane with the affine option, Layer Normalization applies per-element scale and bias with elementwise_affine.
This layer uses statistics computed from input data in both training and evaluation modes.
Parameters
normalized_shape – input shape from an expected input of size\[[* \times \text{normalized}\_\text{shape}[0] \times \text{normalized}\_\text{shape}[1] \times \ldots \times \text{normalized}\_\text{shape}[-1]]\] If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension, which is expected to be of that specific size.
eps – a value added to the denominator for numerical stability. Default: 1e-5
elementwise_affine – a boolean value that, when set to True, gives this module learnable per-element affine parameters initialized to ones (for weights) and zeros (for biases). Default: True.
Shape:
Input: \((N, *)\)
Output: \((N, *)\) (same shape as input)
Examples:
>>> input = torch.randn(20, 5, 10, 10)
>>> # With Learnable Parameters
>>> m = apex.normalization.FusedLayerNorm(input.size()[1:])
>>> # Without Learnable Parameters
>>> m = apex.normalization.FusedLayerNorm(input.size()[1:], elementwise_affine=False)
>>> # Normalize over last two dimensions
>>> m = apex.normalization.FusedLayerNorm([10, 10])
>>> # Normalize over last dimension of size 10
>>> m = apex.normalization.FusedLayerNorm(10)
>>> # Activating the module
>>> output = m(input)
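As a rough cross-check of the formula above (my own sketch, not part of the apex documentation), the same normalization can be computed by hand in plain PyTorch and compared against torch.nn.functional.layer_norm:

>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.randn(20, 5, 10, 10)
>>> # normalize over the last two dimensions, with gamma = 1 and beta = 0
>>> mean = x.mean(dim=(-2, -1), keepdim=True)
>>> var = x.var(dim=(-2, -1), unbiased=False, keepdim=True)
>>> y = (x - mean) / torch.sqrt(var + 1e-5)
>>> torch.allclose(y, F.layer_norm(x, [10, 10]), atol=1e-5)
True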
extra_repr()
Set the extra representation of the module.
To print customized extra information, you should reimplement this method in your own modules. Both single-line and multi-line strings are acceptable.
forward(input)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
|
We defined in the class branched covering as follows.
Let $\Sigma_1, \Sigma_2$ be two surfaces. A map $f: \Sigma_1 \longrightarrow \Sigma_2$ is a branched covering if $\forall y \in \Sigma_2$ there exists $V\subset \Sigma_2$ containing $y$ so that $f^{-1}(V)= U_1 \cup U_2 \cup ... \cup U_n$, where $f: U_j \longrightarrow V$ is given by $z \longmapsto z^k$ for some $k\geq1$. The points for which $k>1$ are called ramification points.
Now, I did not understand the following. When I searched on the web, the standard definition of a branched covering is that it is a covering except on some small set. So:
1) How are these two definitions related?
2) As far as I understand, in the definition I gave, we view the surface as a complex manifold (with a complex structure on it). But there might be lots of complex structures which are not biholomorphic to each other, so is the definition independent of the complex structure we put on the surface?
3) I would appreciate it if you could suggest some source explaining monodromy and branched coverings.
To understand the first definition a little bit, I took the standard sphere in $\mathbb{R}^3= \mathbb{C} \times \mathbb{R}$ and the map taking $(z,t) \longmapsto (z^2,t)$. This is a branched covering, the north and south poles being branch points with respect to the second definition. Now I will try to see that it is a branched covering with these branch points under the first definition. I need to have some complex coordinates on the sphere and must express the map so that hopefully it will be some power of $z$, so that I can decide what kind of point it is. Am I on the right track?
|
The possibility of recursive self-improvement is often brought up as a reason to expect that an intelligence explosion is likely to result in a singleton - a single dominant agent controlling everything. Once a sufficiently general artificial intelligence can make improvements to itself, it begins to acquire a compounding advantage over rivals, because as it increases its own intelligence, it increases its ability to increase its own intelligence. If returns to intelligence are not substantially diminishing, this process could be quite rapid. It could also be difficult to detect in its early stages because it might not require a lot of exogenous inputs.
However, this argument only holds if self-improvement is not only a rapid route to AI progress, but the fastest route. If an AI participating in the broader economy could make advantageous trades to improve itself faster than a recursively self-improving AI could manage, then AI progress would be coupled to progress in the broader economy.
If algorithmic progress (and anything else that might seem more naturally a trade secret than a commodity component) is shared or openly licensed for a fee, then a cutting-edge AI can immediately be assembled whenever profitable, making a single winner unlikely. However, if leading projects keep their algorithmic progress secret, then the foremost project could at some time have a substantial intelligence advantage over its nearest rival. If an AI project attempting to maximize intelligence growth would devote most of its efforts towards such private improvements, then the underlying dynamic begins to resemble the recursive self-improvement scenario.
This post reviews a prior mathematization of the recursive self-improvement model of AI takeoff, and then generalizes it to the case where AIs can allocate their effort between direct self-improvement and trade.
A recalcitrance model of AI takeoff
In Superintelligence, Nick Bostrom describes a simple model of how fast an intelligent system can become more intelligent over time by working on itself. This exposition loosely follows the one in the book.
We can model the intelligence of the system as a scalar quantity $I$, and the work, or optimization power, applied to the system in order to make it more intelligent, as another quantity $W$. Finally, at any given point in the process, it takes some amount of work to augment the system's intelligence by one unit. Call the marginal cost of intelligence in terms of work recalcitrance, $R$, which may take different values at different points in the progress. So, at the beginning of the process, the rate at which the system's intelligence increases is determined by the equation $\frac{dI}{dt} = \frac{W}{R}$.
We then add two refinements to this model. First, assume that intelligence is nothing but a type of optimization power, so $I$ and $W$ can be expressed in the same units. Second, if the intelligence of the system keeps increasing without limit, eventually the amount of work it will be able to put into things will far exceed that of the team working on it, so that $W \approx I$. $R$ is now the marginal cost of intelligence in terms of applied intelligence, so we can write $\frac{dI}{dt} = \frac{I}{R}$.
Constant recalcitrance
The simplest model assumes that recalcitrance is constant, $R = k$. Then $\frac{dI}{dt} = \frac{I}{k}$, or $I(t) = I_0 e^{t/k}$. This implies exponential growth.
Declining recalcitrance
Superintelligence also considers a case where work put into the system yields increasing returns. Prior to takeoff, where $W > I$, this would look like a fixed team of researchers with a constant budget working on a system that always takes the same interval of time to double in capacity. In this case we can model recalcitrance as $R = 1/I$, so that $\frac{dI}{dt} = I^2$, so that $I(t) = \frac{1}{c - t}$ for some constant $c$, which implies that the rate of progress approaches infinity as $t$ approaches $c$; a singularity.
How plausible is this scenario? In a footnote, Bostrom brings up Moore's Law as an example of increasing returns to input, although (as he mentions) in practice it seems like increasing resources are being put into microchip development and manufacturing technology, so the case for increasing returns is far from clear-cut. Moore's law is predicted by the experience curve effect, or Wright's Law, where marginal costs decline as cumulative production increases; the experience curve effect produces exponentially declining costs under conditions of exponentially accelerating production.[1]
This suggests that in fact accelerating progress is due to an increased amount of effort put into making improvements. Nagy et al. 2013 show that for a variety of industries with exponentially declining costs, it takes less time for production to double than for costs to halve.
Since declining costs also reflect broader technological progress outside the computing hardware industry, the case for declining recalcitrance as a function of input is ambiguous.
Increasing recalcitrance
In many cases where work is done to optimize a system, returns diminish as cumulative effort increases. We might imagine that high intelligence requires high complexity, and more intelligent systems require more intelligence to understand well enough to improve at all. If we model diminishing returns to intelligence as $R = I$, then $\frac{dI}{dt} = \frac{I}{R} = 1$. In other words, progress is a linear function of time and there is no acceleration at all.
Generalized expression
The recalcitrance model can be restated as a more generalized self-improvement process with the functional form $\frac{dI}{dt} = I^k$ (i.e. recalcitrance $R = I^{1-k}$):
$k = 0$: increasing recalcitrance, constant progress.
$0 < k < 1$: increasing recalcitrance, polynomial progress.
$k = 1$: constant recalcitrance, exponential progress.
$k > 1$: declining recalcitrance, singularity.
Deciding between trade and self-improvement
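For concreteness, here is a quick numerical illustration of these four regimes (my own sketch, not part of the original post, using the $I^k$ notation above):

def integrate(k, dt=1e-4, t_max=5.0):
    # Forward-Euler integration of dI/dt = I**k, starting from I(0) = 1.
    I = 1.0
    for _ in range(int(t_max / dt)):
        I += dt * I**k
        if I > 1e12:
            return float("inf")  # numerically diverged: finite-time blow-up
    return I

for k in (0.0, 0.5, 1.0, 1.5):
    print(f"k = {k}: I(5) = {integrate(k):.4g}")
# k = 0.0 -> 6 (constant progress), k = 0.5 -> ~12.25 (polynomial growth),
# k = 1.0 -> ~148 (exponential growth), k = 1.5 -> inf (singularity)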
Some inputs to an AI might be more efficiently obtained if the AI project participates in the broader economy, for the same reason that humans often trade instead of making everything for themselves. This section lays out a simple two-factor model of takeoff dynamics, where an AI project chooses how much to engage in trade.
Suppose that there are only two inputs into each AI: computational hardware available for purchase, and algorithmic software that the AI can best design for itself. Each AI project is working on a single AI running on a single hardware base. The intelligence of this AI depends both on hardware progress and software progress, and holding either constant, the other has diminishing returns. (This is broadly consistent with trends described by Grace 2013.) We can model this as $I = S^{1/2}H^{1/2}$, where $H$ is the hardware level and $S$ the software level.
At each moment in time, the AI can choose whether to allocate all its optimization power to making money in order to buy hardware, improving its own algorithms, or some linear combination of these. Let the share of optimization power devoted to algorithmic improvement be $\sigma$.
Assume further that hardware earned and improvements to software are both linear functions of the optimization power invested, so $\frac{dH}{dt} = (1-\sigma)I$, and $\frac{dS}{dt} = \sigma I$.
What is the intelligence-maximizing allocation of resources $\sigma^*$?
This problem can be generalized to finding the $\sigma$ that maximizes $g(S^{\alpha}H^{\beta})$ for any monotonic function $g$. This is maximized whenever $S^{\alpha}H^{\beta}$ is maximized. (Note that this is no longer limited to the case of diminishing returns.)
This generalization is identical to the Cobb-Douglas production function in economics. If $\alpha + \beta = 1$ then this model predicts exponential growth, if $\alpha + \beta > 1$ it predicts a singularity, and if $\alpha + \beta < 1$ then it predicts polynomial growth. The intelligence-maximizing value of $\sigma$ is $\sigma^* = \frac{\alpha}{\alpha+\beta}$.[2]
In our initial toy model $I = S^{1/2}H^{1/2}$, where $\alpha = \beta = \frac{1}{2}$, that implies that no matter what the price of hardware, as long as it remains fixed and the indifference curves are shaped the same, the AI will always spend exactly half its optimizing power working for money to buy hardware, and half improving its own algorithms.
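As a sanity check (again my own sketch, with unit constants assumed), a grid search over constant allocations $\sigma$ confirms that $\sigma = 1/2$ maximizes final intelligence for the toy model:

import numpy as np

alpha = beta = 0.5

def final_intelligence(sigma, steps=20000, dt=0.001):
    # Simulate dS/dt = sigma * I and dH/dt = (1 - sigma) * I with I = S^a H^b.
    S = H = 1.0
    for _ in range(steps):
        I = S**alpha * H**beta
        S += dt * sigma * I
        H += dt * (1 - sigma) * I
    return S**alpha * H**beta

sigmas = np.linspace(0.05, 0.95, 19)
print(max(sigmas, key=final_intelligence))  # prints 0.5 = alpha / (alpha + beta)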
Changing economic conditions
The above model makes two simplifying assumptions: that the application of a given amount of intelligence always yields the same amount in wages, and that the price of hardware stays constant. This section relaxes these assumptions.
Increasing productivity of intelligence
We might expect the productivity of a given AI to increase as the economy expands (e.g. if it discovers a new drug, that drug is more valuable in a world with more or richer people to pay for it). We can add a term exponentially increasing over time to the amount of hardware the application of intelligence can buy: $\frac{dH}{dt} = e^{gt}(1-\sigma)I$.
This does not change the intelligence-maximizing allocation of intelligence between trading for hardware and self-improving.[3]
Declining hardware costs
We might also expect the long-run trend in the cost of computing hardware to continue. This can again be modeled as an exponential process over time, with hardware prices falling as $e^{-ct}$. The new expression for the growth of hardware is $\frac{dH}{dt} = e^{(g+c)t}(1-\sigma)I$, identical in functional form to the expression representing wage growth, so again we can conclude that $\sigma^* = \frac{\alpha}{\alpha+\beta}$.
Maximizing profits rather than intelligence
AI projects might not reinvest all available resources in increasing the intelligence of the AI. They might want to return some of their revenue to investors if operated on a for-profit basis. (Or, if autonomous, they might invest in non-AI assets where the rate of return on those exceeded the rate of return on additional investments in intelligence.) On the other hand, they might borrow if additional money could be profitably invested in hardware for their AI.
If the profit-maximizing strategy involves less than 100% reinvestment, then whatever fraction of the AI's optimization power is reinvested should still follow the intelligence-maximizing allocation rule $\sigma^* = \frac{\alpha}{\alpha+\beta}$, where $\sigma$ is now the share of reinvested optimization power devoted to algorithmic improvements.
If the profit-maximizing strategy involves a reinvestment rate of slightly greater than 100%, then at each moment the AI project will borrow some amount (net of interest expense on existing debts) $M$, so that the total optimization power available is $I + M$. Again, whatever fraction of the AI's optimization power is reinvested should still follow the intelligence-maximizing allocation rule $\sigma^* = \frac{\alpha}{\alpha+\beta}$, where $\sigma$ is now the share of economically augmented optimization power devoted to algorithmic improvements.
This strategy is no longer feasible, however, once $\frac{I}{I+M} < \frac{\alpha}{\alpha+\beta}$. Since by assumption hardware can be bought but algorithmic improvements cannot, at this point additional monetary investments will shift the balance of investment towards hardware, while 100% of the AI's own work is dedicated to self-improvement.
References
1. ↑ Moore's Law describes costs diminishing exponentially as a function of time $t$: $c(t) = c_0 e^{-t/\tau}$. Wright's law describes log costs diminishing as a linear function of log cumulative production $x$: $\log c(x) = \log c_0 - w \log x$. If cumulative production increases exponentially over time, then Wright's law simplifies to Moore's law.
2. ↑ $I = S^{\alpha}H^{\beta}$ is maximized by maximizing $\log I = \alpha \log S + \beta \log H$.
We can ignore the current-endowment term here, since at any moment the value of $\sigma$ materially affects only the rates of change $\frac{dS}{dt}$ and $\frac{dH}{dt}$, not the constants or current endowments of hardware, software, or intelligence. Therefore we merely need to find the value of $\sigma$ that maximizes $\frac{d}{dt}\log I = \alpha\frac{\sigma I}{S} + \beta\frac{(1-\sigma)I}{H}$:
$\frac{d}{d\sigma}\left(\alpha\frac{\sigma I}{S} + \beta\frac{(1-\sigma)I}{H}\right) = 0$, or $\frac{\alpha}{S} = \frac{\beta}{H}$.
In the initial case we considered where $\alpha = \beta = \frac{1}{2}$, this constraint just means that the marginal product of intelligence applied to hardware purchases or software improvements must be the same, $\frac{1}{S} = \frac{1}{H}$. There's no constraint on $\sigma$ yet. In the generalized case, we also have scaling factors to account for differing marginal benefit curves for hardware and software.
To find the intelligence-maximizing value of $\sigma$ we can consider a linear approximation to our initial set of equations:
$S_{t+\epsilon} \approx S_t + \epsilon\,\sigma I$, $\qquad H_{t+\epsilon} \approx H_t + \epsilon\,(1-\sigma) I$.
Since initial endowments $S_t$ and $H_t$ are assumed to have been produced through the intelligence-maximizing process, we can substitute in the identity $\alpha H = \beta S$: maximizing $\alpha\log(S + \epsilon\sigma I) + \beta\log(H + \epsilon(1-\sigma)I)$ over $\sigma$ then gives $\alpha(1-\sigma) = \beta\sigma$, i.e. $\sigma^* = \frac{\alpha}{\alpha+\beta}$.
3. ↑ We again need to find $\sigma$ such that $\frac{d}{d\sigma}\frac{d}{dt}\log I = 0$. Using the new equation for $\frac{dH}{dt}$, we get:
$\frac{\alpha}{S} = \frac{\beta e^{gt}}{H}$, or $\frac{H}{S} = \frac{\beta}{\alpha}e^{gt}$.
Thus, we should expect that the proportion of each AI's hardware endowment to its software endowment grows proportionally to the wage returns of intelligence.
To find the intelligence-maximizing value of $\sigma$ we can again use a linear approximation, this time including the exponential growth of wages in our approximation of hardware growth:
$S_{t+\epsilon} \approx S_t + \epsilon\,\sigma I$, $\qquad H_{t+\epsilon} \approx H_t + \epsilon\, e^{gt}(1-\sigma) I$.
Since initial endowments $S_t$ and $H_t$ are assumed to have been produced through the intelligence-maximizing process, we can apply the relation $H = \frac{\beta}{\alpha}e^{gt}S$: maximizing over $\sigma$ again gives $\alpha(1-\sigma) = \beta\sigma$, so $\sigma^* = \frac{\alpha}{\alpha+\beta}$.
|
Let $M$ be a Riemannian manifold and denote by $\exp_p(v)$ the exponential map at $p \in M$ applied to $v \in T_p M$. Let $q \in M$ be fixed and let $U \subset M$ be a neighborhood of $q$ such that, for each $p \in U$, the map $\exp_p$ is a diffeomorphism in a neighborhood of $\exp_p^{-1}(q)$.
Now, the map $V: p \mapsto \exp_p^{-1}(q)$ is a well-defined vector field on $U$ ($q$ is fixed!). Is there a "nice" expression for the covariant derivative $\nabla_X V$ along a given vector field $X$?
What I have tried so far
I understand that $V$ is a vector field such that $V(p)$ points in the (geodesic) direction of $q$ and such that $|V(p)| = d(p,q)$. On flat $\mathbb{R}^n$, we have $V(p) = q-p$ and the covariant derivative is simply $\nabla_X V = -X$.
In the Riemannian case, we have at least $\langle \nabla_X V, V \rangle = \langle -X, V \rangle$ for any vector field $X$. Proof: Consider $c(s,t) = \exp_q(s \exp_q^{-1}(\gamma(t))) = \exp_{\gamma(t)}((1-s) \exp_{\gamma(t)}^{-1}(q))$. Then $\frac{\partial c}{\partial s}(1,t) = -V(\gamma(t))$ and $|\frac{\partial c}{\partial s}(s,t)|$ does not depend on $s$. Therefore, $$ |V(\gamma(t))|^2 = \int_0^1 |\frac{\partial c}{\partial s}(s,t)|^2\,ds $$ and $$ \begin{aligned} \langle \nabla_{\dot\gamma(t)} V, V \rangle &= \frac{1}{2}\frac{d}{dt}|V(\gamma(t))|^2 = \frac{1}{2}\frac{d}{dt}\int_0^1 |\frac{\partial c}{\partial s}(s,t)|^2\,ds \\ &= \langle \frac{\partial c}{\partial t}(1,t), \frac{\partial c}{\partial s}(1,t) \rangle = -\langle \dot\gamma(t), V \rangle, \end{aligned} $$ where we used that $\frac{\partial c}{\partial t}(0,t) = 0$, $\nabla_{\partial_s}\frac{\partial c}{\partial s}(s,t) = 0$ and $\nabla_{\partial_s}\frac{\partial c}{\partial t}(s,t) = \nabla_{\partial_t}\frac{\partial c}{\partial s}(s,t)$.
My hypothesis
From explicit calculations on the sphere, I found that $V(p) = -F(p) \nabla F(p)$ where $F(p) = d(p,q)$. Then $$ \nabla_X V = -\langle \nabla F, X\rangle \nabla F - F \nabla_X(\nabla F). $$ From $|\nabla F(p)|^2 = 1$ ($F$ is 1-Lipschitz by triangle inequality), it's clear that $\nabla_X(\nabla F)$ is orthogonal to $\nabla F$ - just as expected. Actually, this formula could probably be more explicit and I don't have a proof for the relation between $F$ and $V$ in the general case.
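A remark on the hypothesis (a standard fact, stated here from memory rather than from the question; see e.g. Sakai's "Riemannian Geometry"): wherever $F = d(\cdot, q)$ is smooth, i.e. away from $q$ and its cut locus, the Gauss lemma gives
$$\nabla_p \left(\tfrac{1}{2} d(p,q)^2\right) = -\exp_p^{-1}(q),$$
so the relation $V = -\nabla(F^2/2) = -F\,\nabla F$ should hold in general, not just on the sphere.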
|
A simple illustration of the trapezoid rule for definite integration:$$ \int_{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum_{k=1}^{N} \left( x_{k} - x_{k-1} \right) \left( f(x_{k}) + f(x_{k-1}) \right). $$
First, we define a simple function and sample it between 0 and 10 at 200 points
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def f(x):
    return (x-3)*(x-5)*(x-7) + 85

x = np.linspace(0, 10, 200)
y = f(x)
Choose a region to integrate over and take only a few points in that region
a, b = 1, 8  # the left and right boundaries
N = 5  # the number of points
xint = np.linspace(a, b, N)
yint = f(xint)
Plot both the function and the area below it in the trapezoid approximation
plt.plot(x, y, lw=2)
plt.axis([0, 9, 0, 140])
plt.fill_between(xint, 0, yint, facecolor='gray', alpha=0.4)
plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)dx$", horizontalalignment='center', fontsize=20);
Compute the integral both at high accuracy and with the trapezoid approximation
from __future__ import print_function
from scipy.integrate import quad

integral, error = quad(f, a, b)
integral_trapezoid = sum( (xint[1:] - xint[:-1]) * (yint[1:] + yint[:-1]) ) / 2
print("The integral is:", integral, "+/-", error)
print("The trapezoid approximation with", len(xint), "points is:", integral_trapezoid)
The integral is: 565.2499999999999 +/- 6.275535646693696e-12 The trapezoid approximation with 5 points is: 559.890625
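As a cross-check (my addition, not part of the original notebook), NumPy ships the same composite trapezoid rule as np.trapz, which reproduces the hand-rolled sum above up to floating-point rounding:

print(np.trapz(yint, xint))  # same composite trapezoid rule as the manual sum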
|
$$S = 1 + 2 + 4 + 8 + ...$$
And then apply standard manipulations on it to obtain a bizarrely finite result:
$$\begin{gathered}
\Rightarrow S = 1 + 2(1 + 2 + 4 + ...) \hfill \\
\Rightarrow S = 1 + 2S \hfill \\
\Rightarrow S = - 1 \hfill \\
\end{gathered} $$
How exactly is this result to be interpreted? Surely the definition of an infinite sum is as a limit of a finite sum as the upper limit increases without bound -- by this definition it would seem that $S$ evidently doesn't approach $-1$, it diverges to infinity. Is there, then, something wrong with the form of our argument? And if so, why does it seem to work for so many other sums, like convergent geometric progressions?
We'll get to all that in a moment, but first, let's talk about how to fold a tie into thirds. We know how to fold a tie -- or a strip of paper or a rope or whatever -- into halves, into quarters, into any power of two. But how would one fold it into thirds? Sure, we can approximate it by trial and error, but is there a more efficient algorithm to approximate it?
Here's one way: start with some approximation to 1/3 of the tie -- any approximation, however good or bad. Now consider the rest of the tie (~2/3) and fold it in half. Take one of these halves --
this is demonstrably a better approximation to 1/3 than your original. In fact, the error in this approximation is exactly half the error in the original approximation. You can keep repeating this process, and approach an arbitrarily close value to 1/3.
Why does it work? Well, it's obvious why it works. More interestingly, how could one have come up with this technique from scratch?
The key insight here is that if you had started from exactly 1/3 and performed this algorithm, defined as $x_{n+1}=\frac12(1-x_n)$, the sequence would be constant -- it would be 1/3s all the way down.
However, this is not a sufficient argument. For instance, here's another sequence of which 1/3 is a fixed point: the algorithm $x_{n+1}=1-2x_n$. However here, if you were to start with any other number but 1/3, the sequence would not approach 1/3, but rather diverge away. While 1/3 is still a fixed point, it is an unstable fixed point, while in the previous case it was a stable fixed point.
But what exactly is wrong with extending the same argument to $x_{n+1}=1-2x_n$? Well, perhaps we should state the argument precisely in the case of $x_{n+1}=\frac12(1-x_n)$. The reason we know this converges to 1/3 regardless of the initial value is that 1/3 is the only value which stays the same under the algorithm (i.e. is a steady-state solution). Convergence of the sequence requires that the fluctuations get smaller, i.e. the sequence approaches a value that it doesn't fluctuate around; it approaches a steady state.
But this reveals our central assumption -- we assumed that the sequence is convergent at all! If it is convergent, then 1/3 is the only value it could converge to, because convergence means approaching a steady state, and 1/3 is the only steady state.
The same principle applies to our original problem -- an infinite series is also a sequence, a sequence of partial sums. Our mistake is really in this step:
$$\begin{gathered}
... \hfill \\
1 + 2(1 + 2 + 4 + 8 + ...) = 1 + 2S \hfill \\
\end{gathered} $$
By declaring that this is the same $S$, we have assumed that this sum really has a value. To be even clearer, consider this (taking $n\to\infty$):
$$\begin{gathered}
S = 1 + 2 + 4 + ... + {2^n} \hfill \\
S = 1 + 2(1 + 2 + 4 + ... + {2^{n - 1}}) = 1 + 2S? \hfill \\
\end{gathered} $$
In other words, we assumed that $S$ reaches a steady state, that removing the last term $2^n$ wouldn't change the value of the summation. This would've been true if we were dealing with $(1/2)^n$ instead, because then the partial sum does reach a steady state, since its "derivative", $(1/2)^n$, approaches 0.
With that said, the sum $1+2+4+8+...=-1$ (and other such surprising results) can in fact be correct. What we've proven here is that if the sum converges, it converges to -1. Otherwise, it's $2^\infty -1$. If you can construct an axiomatic system in which the sum does converge, where 0 behaves like $2^\infty$ in some specific sense, then the identity would be true. Such a system does in fact exist: it's called the 2-adic system.
You know, there is a sense in which you can understand the 2-adic system. When you take partial sums of $1+2+4+8+...$, you always get sums that are "1 less than a power of 2". $1+2+4+8+16=2^5-1$, for example -- what's the significance of $2^5$? Well, it's a number which 2 divides into 5 times. What's a number that 2 divides into an infinite number of times? Well, it's zero, and $0-1=-1$. This might sound like a ridiculous argument, and indeed it is false in our conventional algebra system, but it is the foundation of the 2-adic system.
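A quick numerical illustration of this (my own, in Python): the partial sums are $2^n - 1$, which agree with $-1$ modulo $2^n$ for every $n$; this is exactly the 2-adic sense in which they approach $-1$.

for n in range(1, 8):
    s = sum(2**k for k in range(n))        # partial sum 1 + 2 + ... + 2**(n-1)
    print(n, s, s % 2**n == (-1) % 2**n)   # always True: s = 2**n - 1 = -1 (mod 2**n)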
Explain similarly why $1+3+9+27+...=-1/2$ in the 3-adic system.
The understanding of convergence we gained here -- from the tie example -- was pretty fantastic. It applies to all sorts of infinite sequences -- ordinary recurrences, (such in the form of) infinite series, continued fractions, etc. The idea of stable and unstable fixed points is a general one, and a very important one. Recommended watching:
|
Is there a possibility to draw large integral signs?
I have found the package bigints, but I have the feeling it is not very professional...
Any better idea?
I'm aware of three packages that will let you create larger integral signs: bigints, mtpro2, and relsize.
bigints provides the following commands to scale up the symbol produced by \int: \bigintssss, \bigintsss, \bigintss, \bigints, and \bigint. Using the default math font family (Computer Modern) and the default text font size of 10pt, these commands (including the "ordinary" \int) produce the following symbols, with a dummy integrand thrown in for scale:
The mtpro2 package, which uses Times New Roman-style fonts, provides the commands \xl, \XL, and \XXL (as well as the ginormous, 10cm-tall \XXXL, not shown below) as prefixes to \int. This is how these integrals look when typeset with the mtpro2 package:
By the way, the full mtpro2 package is not free. However, its "lite" subset (which is all that's needed to use the prefix commands \xl, etc.) is free. The package may be downloaded from this site.
The command \mathlarger of the relsize package can also produce larger integral symbols. (For multi-step enlargements, the exscale package must be loaded as well.) For a one-step increase in size, you'd type \mathop{\mathlarger{\int}}; for a two-step increase, you'd type \mathop{\mathlarger{\mathlarger{\int}}}, etc.
To my taste, all three sets of results look quite professional. :-)
Three further comments, and a caveat:
None of these packages seems to do a great job placing the lower and upper limits of integration. A reasonable positioning of the lower limit of integration, in particular, will require inserting either several "negative thinspace" (\!) directives -- the larger the integral symbol, the more \! instructions will likely be required -- or something like \mkern-18mu. (Use \mkern rather than \kern when in math mode.)
The bigints package can produce five large variants for \oint as well, but (again AFAICT) not for double, triple, surface, slashed, etc. integrals. The mtpro2 package, while providing "only" three large variants of \int (I'm disregarding the \XXXL-prefix variant!), can produce large variants of \iint, \iiint, \oiint, \oiiint, \barint, \slashint, and clockwise- and counterclockwise-oriented line integrals. Similarly, the \mathlarger command of the relsize package can be applied to any operator symbol -- including \iint, \iiint, etc.
The mtpro2 package can be used in conjunction with both the bigints and the relsize packages. If the mtpro2 package is loaded, the instructions \bigintssss, \bigintsss, ..., \mathop{\mathlarger{\int}}, ... will produce integral symbols that are a bit "thicker", in keeping with the style of the \int symbols produced directly by the mtpro2 package.
I have recently (May 2014) discovered that the bigints package doesn't seem to be compatible with the lmodern package, in the sense that the macros of the bigints package do not generate "large" integral symbols if the lmodern package is loaded as well. For a work-around, please see this answer by @egreg.
Finally, here's the code that produced the three screenshots shown above.
With the bigints package:
\documentclass{article}
\usepackage{bigints}
\newcommand\dummy{\frac{a}{c}\,\mathrm{d}P}
\begin{document}
\[\int\dummy\quad\bigintssss\dummy\quad\bigintsss\dummy\quad\bigintss\dummy\quad\bigints\dummy\quad\bigint\dummy\]
\end{document}
With the mtpro2 package:
\documentclass{article}
\usepackage[lite]{mtpro2}
\newcommand\dummy{\frac{a+b}{c+d}\,\mathrm{d}P\quad}
\begin{document}
\[\int\dummy\quad\xl\int\dummy\quad\XL\int\dummy\quad\XXL\int\dummy\]
\end{document}
With the relsize and exscale packages:
\documentclass{article}
\usepackage{relsize,exscale}
\newcommand\dummy{\frac{a}{c}\,\mathrm{d}P\quad}
\begin{document}
\[\int\dummy\quad\mathop{\mathlarger{\int}}\dummy\quad\mathop{\mathlarger{\mathlarger{\int}}}\dummy\quad\mathop{\mathlarger{\mathlarger{\mathlarger{\int}}}}\dummy\quad\mathop{\mathlarger{\mathlarger{\mathlarger{\mathlarger{\int}}}}}\dummy\]
\end{document}
The scalerel package gives you the added capability to constrain the scale. In general, it can either vertically stretch, while keeping a lower limit on aspect ratio, or it can vertically scale, keeping an upper limit on overall width. I demonstrate both cases below, following a normal invocation of \int. Furthermore, the scalability is continuous, rather than just having 4 or 5 discrete sizes.
In reference to barbara beeton's comment on the accepted answer, the limits with this approach will not scale with the integral size. However, some added gyrations are nonetheless required to include limits. First, because \stretchint and \scaleint take a size argument, they have to be enclosed in braces for the subscript and superscript to understand what they are actually referring to. In addition, negative space has to be added to the subscript to account for the slant of the integral operator. EDITED to set in \displaystyle, since that would be the general mode of using large integral signs, as pointed out by barbara beeton. EDITED further, based on Mico's comment. And thanks to egreg for instruction in the use of \vcenter.
EDITED to reflect a recent scalerel bug fix regarding the \stretch... macros, in which the limiting aspect ratio of the optional argument had been miscalculated by a factor of 2. Thus, in this revision, the limiting aspect ratio for \stretchto is shown properly as 4.4 (i.e., [440]) rather than 2.2.
\documentclass{article}
\usepackage{scalerel}[2016-12-29]
\def\stretchint#1{\vcenter{\hbox{\stretchto[440]{\displaystyle\int}{#1}}}}
\def\scaleint#1{\vcenter{\hbox{\scaleto[3ex]{\displaystyle\int}{#1}}}}
\begin{document}
\def\x{\frac{a}{c}dP}
\verb|\stretchto| with aspect ratio limit of 4.4
\def\bs{\mkern-12mu} % set amount of backspacing for lower limit of integration
\[\int_a^b\x ~~ \stretchint{7ex}_{\bs a}^b\x ~~ \stretchint{9ex}_{\bs a}^b\x\]
\par
\verb|\scaleto| with width limit of 3ex
\def\bs{\mkern-15mu} % reset amount of backspacing for lower limit of integration
\[\int_a^b\x ~~ \scaleint{7ex}_{\bs a}^b\x ~~ \scaleint{9ex}_{\bs a}^b\x \]
\end{document}
|
What is Triangle?
A triangle is a polygon with three sides, where the sum of any two sides is always greater than the third side. This is a unique property of a triangle. By another definition, it is any closed figure with three sides whose angles sum to 180°.
Being a closed figure, a triangle can have different shapes, and each shape is described by the angle made by any two adjacent sides.
Types of Triangles:
Acute angle triangle: when all the angles between adjacent sides are less than 90°, it is called an acute angle triangle.
Right angle triangle: when the angle between any two sides is equal to 90°, it is called a right angle triangle.
Obtuse angle triangle: when the angle between any two sides is greater than 90°, it is called an obtuse angle triangle.
Right Angled Triangle
A right-angled triangle is one of the most important shapes in geometry and is the basis of trigonometry. A right-angled triangle has 3 sides, the "base", the "hypotenuse" and the "height", with the angle between base and height being 90°. But what are these? Well, these are the three sides of a right-angled triangle, and they give rise to the most important theorem here, the Pythagorean theorem.
The area of the biggest square is equal to the sum of the areas of the two other, smaller squares. We can state the Pythagorean theorem as: the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the base and height.
As now we have a general idea about the shape and basic property of a right-angled triangle, let us discuss the area of a triangle.
Right Angle Triangle Properties
Let us discuss, the properties carried by a right angle triangle.
One angle is always 90°, a right angle.
The side opposite the 90° angle is the hypotenuse.
The hypotenuse is always the longest side.
The sum of the other two interior angles is equal to 90°.
The other two sides adjacent to the right angle are called base and perpendicular.
The area of a right angle triangle is equal to half of the product of the sides adjacent to the right angle, i.e., Area of Right Angle Triangle = ½ (Base × Perpendicular).
If we drop a perpendicular from the right angle to the hypotenuse, we will get three similar triangles.
If we draw a circumcircle which passes through all three vertices, then the radius of this circle is equal to half of the length of the hypotenuse.
If one of the angles is 90° and the other two angles are equal to 45° each, then the triangle is called an Isosceles Right Angled Triangle, where the sides adjacent to the 90° angle are equal in length to each other.
Above were the general properties of Right angle triangle. The construction of the right angle triangle is also very easy. Keep learning with BYJU’S to get more such study materials related to different topics of Geometry and other subjective topics.
Area of Right Angled Triangle
Area is 2-dimensional and is measured in square units. It can be defined as the amount of space taken up by a 2-dimensional object.
The area of a triangle can be calculated by 2 formulas:
area \(= \frac{a \times b }{2}\), where \(a\) and \(b\) are the sides adjacent to the right angle,
and
Heron's formula, i.e. area \(= \sqrt{s(s-a)(s-b)(s-c)}\),
where \(s\) is the semi-perimeter, calculated as \(s =\frac{a+b+c}{2}\), and \(a, b, c\) are the sides of the triangle.
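A quick numerical check (our own illustration, using the familiar 3-4-5 right triangle) that Heron's formula agrees with the base-times-height formula:

import math
a, b, c = 3, 4, 5                        # legs 3 and 4, hypotenuse 5
s = (a + b + c) / 2                      # semi-perimeter
heron = math.sqrt(s*(s-a)*(s-b)*(s-c))   # Heron's formula
print(heron, a*b/2)                      # both print 6.0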
Let us calculate the area of a triangle using the figure given below.
Fig 1: Let us drop a perpendicular to the base b in the given right angle triangle. Now let us duplicate the triangle to make 2 triangles.
Fig 2: It forms the shape of a parallelogram, as shown in the figure.
Fig 3: Let us move the yellow shaded region to the beige colored region as shown in the figure.
Fig 4: It takes up the shape of a rectangle now.
Now by the property of area, it is calculated as the multiplication of any two sides
Hence area =b×h (for a rectangle)
Therefore, the area of a right angle triangle will be half of that, i.e.
= \(\frac{b \times h}{2}\)
For a right angled triangle, the base is always perpendicular to the height. When the sides of the triangle are not given and only angles are given, the area of a right-angled triangle can be calculated by the given formula:
= \(\frac{bc \times ba}{2}\)
where \(a, b, c\) are the vertices of the right angle triangle, \(bc\) and \(ba\) are the two sides meeting at \(b\), and ∠b is always 90°.
To learn more interesting facts about triangle stay tuned with BYJU’S.
|
Learning Objectives
Define the terms wavelength and frequency with respect to wave-form energy. State the relationship between wavelength and frequency with respect to electromagnetic radiation.
During the summer, almost everyone enjoys going to the beach. They can swim, have picnics, and work on their tans. But if you get too much sun, you can burn. A particular set of solar wavelengths are especially harmful to the skin. This portion of the solar spectrum is known as UV B, with wavelengths of \(280\)-\(320 \: \text{nm}\). Sunscreens are effective in protecting skin against both the immediate skin damage and the long-term possibility of skin cancer.
Waves
Waves are characterized by their repetitive motion. Imagine a toy boat riding the waves in a wave pool. As the water wave passes under the boat, it moves up and down in a regular and repeated fashion. While the wave travels horizontally, the boat only travels vertically up and down. The figure below shows two examples of waves.
A wave cycle consists of one complete wave - starting at the zero point, going up to a wave crest, going back down to a wave trough, and back to the zero point again. The wavelength of a wave is the distance between any two corresponding points on adjacent waves. It is easiest to visualize the wavelength of a wave as the distance from one wave crest to the next. In an equation, wavelength is represented by the Greek letter lambda \(\left( \lambda \right)\). Depending on the type of wave, wavelength can be measured in meters, centimeters, or nanometers \(\left( 1 \: \text{m} = 10^9 \: \text{nm} \right)\). The frequency, represented by the Greek letter nu \(\left( \nu \right)\), is the number of waves that pass a certain point in a specified amount of time. Typically, frequency is measured in units of cycles per second or waves per second. One wave per second is also called a Hertz \(\left( \text{Hz} \right)\) and in SI units is a reciprocal second \(\left( \text{s}^{-1} \right)\).
Figure B above shows an important relationship between the wavelength and frequency of a wave. The top wave clearly has a shorter wavelength than the second wave. However, if you picture yourself at a stationary point watching these waves pass by, more waves of the first kind would pass by in a given amount of time. Thus the frequency of the first wave is greater than that of the second wave. Wavelength and frequency are therefore inversely related. As the wavelength of a wave increases, its frequency decreases. The equation that relates the two is:
\[c = \lambda \nu\]
The variable \(c\) is the speed of light. For the relationship to hold mathematically, if the speed of light is used in \(\text{m/s}\), the wavelength must be in meters and the frequency in Hertz.
Example \(\PageIndex{1}\): Orange Light
The color orange within the visible light spectrum has a wavelength of about \(620 \: \text{nm}\). What is the frequency of orange light?
SOLUTION
Steps for Problem Solving
Identify the "given" information and what the problem is asking you to "find."
Given: wavelength \(\lambda = 620 \: \text{nm}\)
Find: frequency \(\nu\) (Hz)
List other known quantities: \(1 \: \text{m} = 10^9 \: \text{nm}\); speed of light \(c = 3.0 \times 10^8 \: \text{m/s}\)
Identify steps to get the final answer:
1. Convert the wavelength to \(\text{m}\).
2. Apply the equation \(c = \lambda \nu\) and solve for frequency. Dividing both sides of the equation by \(\lambda\) yields: \(\nu = \frac{c}{\lambda}\)
Cancel units and calculate.
\(620 \: \text{nm} \times \left( \frac{1 \: \text{m}}{10^9 \: \text{nm}} \right) = 6.20 \times 10^{-7} \: \text{m}\)
\(\nu = \frac{c}{\lambda} = \frac{3.0 \times 10^8 \: \text{m/s}}{6.20 \times 10^{-7} \: \text{m}} = 4.8 \times 10^{14} \: \text{Hz}\)
Think about your result. The value for the frequency falls within the range for visible light.
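The same calculation can also be scripted (our own illustration, using the constants from the example above):

c = 3.0e8              # speed of light in m/s
wavelength = 620e-9    # 620 nm converted to meters
nu = c / wavelength
print(nu)              # about 4.8e14 Hz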
Exercise \(\PageIndex{1}\)
What is the wavelength of light if its frequency is \(1.55 \times 10^{10} \: \text{s}^{-1}\)?
Answer: 0.0194 m, or 19.4 mm
Summary
All waves can be defined in terms of their frequency and intensity. \(c = \lambda \nu\) expresses the relationship between wavelength and frequency.
|
Given real-valued continuous functions $f, g$, is the following (and why?) inequality true?
$$\max \{f + g \} \leq \max f + \max g$$
Can someone give me a proof? I suspect that for min the inequality is reversed.
For all $x$, $$f(x)\le \max f$$ and $$g(x)\le\max g.$$ Adding the two gives $f(x)+g(x)\le \max f + \max g$ for all $x$; taking the maximum over $x$ on the left yields the claim.
More generally we have:$$\sup_{x\in A} ( f(x)+g(x))\le \sup_{x\in A} \left( f(x)+\sup_{y\in A} g(y)\right)=\sup_{x\in A} f(x)+\sup_{y\in A} g(y)$$
Let $z_f^*$ and $z_g^*$ be points where $f$ and $g$ attain their maxima, i.e. $\max f = f(z_f^*)$ and $\max g = g(z_g^*)$. Then $$f(z)\le f(z_f^*)$$ $$g(z)\le g(z_g^*)$$ so $$\forall z : (f+g)(z)=f(z)+g(z)\le f(z_f^*)+g(z_g^*)$$ and hence $$\max(f+g)\le f(z_f^*)+g(z_g^*)=\max f+\max g.$$
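A quick numerical illustration (our own example, with $f=\sin$ and $g=\cos$ on $[0,2\pi]$) shows the inequality can be strict:

import numpy as np

x = np.linspace(0, 2 * np.pi, 10001)
f, g = np.sin(x), np.cos(x)
# max(f+g) = sqrt(2) while max f + max g = 2, so the bound is not tight here.
print((f + g).max(), f.max() + g.max())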
|
I wish to understand the statement in this paper more precisely:
(1). Any 3d Topological quantum field theories(TQFT) associates an inner-product vector space $H_{\Sigma}$ to a Riemann surface $\Sigma$.
-
(2) In the case of abelian Chern-Simons theory $H_{\Sigma}$ is obtained by geometric quantization of the moduli space of flat $T_{\Lambda}$-connections on ${\Sigma}$. The
latter space is a torus with a symplectic form
$$ ω =\frac{1}{4π} \int_{\Sigma} K_{IJ} \delta A_I \wedge d \delta A_J.$$
(3) Its quantization is the space of holomorphic sections of a line bundle $L$ whose curvature is $\omega$. For a genus g Riemann surface $\Sigma_g$, it has dimension $|\det(K)|^g$.
-
(4) The
mapping class group of $\Sigma$ (i.e. the quotient of the group of diffeomorphisms of $\Sigma$ by its identity component) acts projectively on $H_{\Sigma}$. The action of the mapping class group of $\Sigma_g$ on $H_\Sigma$ factors through the group $Sp(2g, \mathbb{Z})$.
We are talking about this abelian Chern-Simons theory:$$S_{CS}=\frac{1}{4π} \int_{\Sigma} K_{IJ} A_I \wedge d A_J.$$
Can some experts walk through this (1) (2) (3) (4) step-by-step for focusing on this abelian Chern-Simons theory?
partial answer of (1)~(4) is fine.
I can understand the statements, but I cannot feel comfortable deriving them myself.
This post imported from StackExchange Physics at 2014-06-25 21:02 (UCT), posted by SE-user mysteriousness
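(Not an answer, but as a small numerical illustration of the dimension count in statement (3), with an example $K$-matrix of our own choosing, not taken from the paper:)

import numpy as np

K = np.array([[2, 0],
              [0, 2]])   # an example symmetric integer K-matrix
g = 3                    # genus of the Riemann surface

dim_H = abs(round(np.linalg.det(K))) ** g
print(dim_H)             # |det K|^g = 4^3 = 64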
|
Yes, it is the case that $\mu_f' = \mu_v$.
We will use the notation introduced in section "Attempt #2" of the original post. Additionally, we make the following two definitions that will be used throughout the proof:$$\begin{align}Z &:= (a, b] \\\mathcal{J} &:= \left\{(c, d] \mid c, d\in [a, b] \right\}\end{align}$$
The proof breaks down into four steps:
1) We show that it suffices to show that $\mu_f'(E) = \mu_v(E)$ for all $E \in \mathcal{B}(Z)$ in order to conclude that $\mu_f' = \mu_v$.
2) We show that $\mu_f^\pm(J) \geq \mu_v^\pm(J)$, respectively, for all $J \in \mathcal{J}$.
3) We deduce that $\mu_f^\pm(E) = \mu_v^\pm(E)$, respectively, for all $E \in \mathcal{B}(Z)$.
4) We conclude that $\mu_f'(E) = \mu_v(E)$ for all $E \in \mathcal{B}(Z)$.
The proof hinges on two characterizations of the Jordan decomposition of signed measures given in proposition 6.21(i & ii) in Dshalalow's "Foundations of Abstract Algebra", 2nd edition and in the version of the Jordan decomposition theorem given in Doob's "Measure Theory".
===
We show that it suffices to show that $\mu_f'(E) = \mu_v(E)$ for all $E \in \mathcal{B}(Z)$ in order to conclude that $\mu_f' = \mu_v$, as follows. (1.1) We show that $\mu_v(Z^c) = 0$. (1.2) We show that $\mu_f'(Z^c) = 0$. (1.3) We conclude that it suffices to show that $\mu_f'(E) = \mu_v(E)$ for all $E \in \mathcal{B}(Z)$ in order to conclude that $\mu_f' = \mu_v$.
1.1 We will show that $\mu_v(Z^c) = 0$. Since $Z^c = (-\infty, a] \cup (b, \infty)$, it suffices to show that $\mu_v\left((-\infty, a]\right), \mu_v\left((b, \infty)\right) = 0$.
$$ \begin{align} \mu_v\left((-\infty, a]\right) & = \mu_v\left(\bigcup_{n = 0}^\infty \left(a - (n + 1), a - n\right]\right) \\ & = \sum_{n = 0}^\infty \mu_v\left(\left(a - (n + 1), a - n\right]\right) \\ & = \sum_{n = 0}^\infty \left(v_f(a - n) - v_f(a - (n + 1))\right) \\ & = \sum_{n = 0}^\infty (0 - 0) \\ & = 0 \\ \mu_v\left((b, \infty)\right) & = \mu_v\left(\bigcup_{n = 0}^\infty \left(b + n, b + (n+1)\right]\right) \\ & = \sum_{n = 0}^\infty \mu_v\left(\left(b + n, b + (n+1)\right]\right) \\ & = \sum_{n = 0}^\infty \left(v_f(b + (n + 1)) - v_f(b + n)\right) \\ & \overset{(*)}{=} \sum_{n = 0}^\infty V_{b + n}^{b + (n + 1)}(f) \\ & \overset{(**)}{=} 0 \end{align} $$where equality $(*)$ is a fundamental result from the theory of functions of bounded variation (see e.g. lemma 12.15(b) in Yeh's "Real Analysis", 2nd edition), and equality $(**)$ is due to the fact that, for $s, t \in [b, \infty)$ with $s \leq t$, we have$$ \begin{align} V_s^t(f) &= \sup \left\{\sum_{k = 1}^n \left|f(t_k) - f(t_{k - 1}) \right| \mid s = t_0 \leq t_1 \leq \cdots \leq t_n = t, n \in \{1, 2, \dots\}\right\} \\ & = \sup \left\{\sum_{k = 1}^n \left|f(b) - f(b) \right| \mid s = t_0 \leq t_1 \leq \cdots \leq t_n = t, n \in \{1, 2, \dots\}\right\} \\ & = 0 \end{align} $$
1.2 We show that $\mu_f'(Z^c) = 0$, as follows. (1.2.1) We construct finite measures $\mu^+$ and $\mu^-$ on $(\mathbb{R}, \mathcal{B})$ such that $\mu_f = \mu^+ - \mu^-$ and for which $\mu^\pm(Z^c) = 0$. (1.2.2) We apply a certain minimality condition on the Jordan decomposition of signed measures to show that $\mu_f^\pm = \mu^\pm$, respectively. (1.2.3) We conclude that $\mu_f'(Z^c) = 0$.
1.2.1 We will construct finite measures $\mu^+$ and $\mu^-$ on $(\mathbb{R}, \mathcal{B})$ such that $\mu_f = \mu^+ - \mu^-$ and for which $\mu^\pm(Z^c) = 0$. Define the set functions $\mu^\pm : \mathcal{B} \rightarrow [0, \infty]$ as follows:$$\mu^\pm(E) := \mu_f^\pm(E \cap Z)$$respectively.
The fact that $\mu^\pm$ are measures follows from the fact that $\mu_f^\pm$ are measures. The fact that $\mu^\pm$ are finite follows from the observation that $\mu^\pm \leq \mu_f^\pm$, respectively, and from the observation that $\mu_f^\pm$ are finite, since $\mu_f = \mu_g - \mu_h$ is. As for $\mu^\pm(Z^c)$, let's calculate: $\mu^\pm(Z^c) = \mu_f^\pm(Z^c \cap Z) = \mu_f^\pm(\emptyset) = 0$.
Finally, to show that $\mu_f = \mu^+ - \mu^-$ it suffices to show that $\mu_f(Z^c) = 0$, for suppose we have shown that $\mu_f(Z^c) = 0$, then for $E \in \mathcal{B}$,$$\begin{align}\mu_f(E) &= \mu_f(E \cap Z) + \mu_f(E \cap Z^c) \\&= \mu_f(E \cap Z) \\&= \mu_f^+(E \cap Z) - \mu_f^-(E \cap Z) \\&= \mu^+(E) - \mu^-(E)\end{align}$$
We will now show that $\mu_f(Z^c) = 0$. Since $Z^c = (-\infty, a] \cup (b, \infty)$, it suffices to show that $\mu_f\left((-\infty, a]\right), \mu_f\left((b, \infty)\right) = 0$.
$$\begin{align}\mu_f\left((-\infty, a]\right) & = \mu_f\left(\bigcup_{n = 0}^\infty \left(a - (n + 1), a - n\right]\right) \\& = \sum_{n = 0}^\infty \mu_f\left(\left(a - (n + 1), a - n\right]\right) \\& = \sum_{n = 0}^\infty \left(\mu_g\left(\left(a - (n + 1), a - n\right]\right) - \mu_h\left(\left(a - (n + 1), a - n\right]\right) \right) \\& = \sum_{n = 0}^\infty \left(\left(g(a - n) - g(a - (n + 1))\right) - \left(h(a - n) - h(a - (n + 1))\right)\right) \\& = \sum_{n = 0}^\infty \left(\left(g(a) - g(a)\right) - \left(h(a) - h(a)\right)\right) \\& = 0 \\\mu_f\left((b, \infty)\right) & = \mu_f\left(\bigcup_{n = 0}^\infty \left(b + n, b + (n+1)\right]\right) \\& = \sum_{n = 0}^\infty \mu_f\left(\left(b + n, b + (n+1)\right]\right) \\& = \sum_{n = 0}^\infty \left(\mu_g\left(\left(b + n, b + (n+1)\right]\right) - \mu_h\left(\left(b + n, b + (n+1)\right]\right) \right) \\& = \sum_{n = 0}^\infty \left(\left(g(b + (n + 1)) - g(b + n)\right) - \left(h(b + (n + 1)) - h(b + n)\right)\right) \\& = \sum_{n = 0}^\infty \left(\left(g(b) - g(b)\right) - \left(h(b) - h(b)\right)\right) \\& = 0\end{align}$$
1.2.2 We will show that $\mu_f^\pm = \mu^\pm$, respectively. By definition $\mu^\pm \leq \mu_f^\pm$, respectively, so we are left to show that $\mu^\pm \geq \mu_f^\pm$, respectively. According to the version of the Jordan decomposition theorem given in Doob's "Measure Theory", if $(\Omega, \mathcal{A}, \lambda)$ is a signed measure space, if $\lambda = \lambda^+ - \lambda^-$ is the unique Jordan decomposition of $\lambda$, and if $\lambda = \lambda_1 - \lambda_2$ is any representation of $\lambda$ as the difference between two measures, then $\lambda^+ \leq \lambda_1$ and $\lambda^- \leq \lambda_2$. Therefore, since by (1.2.1) $\mu_f = \mu^+ - \mu^-$, we obtain $\mu^\pm \geq \mu_f^\pm$, respectively, as desired.
1.2.3 We will show that $\mu_f'(Z^c) = 0$. Indeed,$$\mu_f'(Z^c) = \mu_f^+(Z^c) +\mu_f^-(Z^c) = \mu^+(Z^c) + \mu^-(Z^c) = 0$$
1.3 We will now show that it suffices to show that $\mu_f'(E) = \mu_v(E)$ for all $E \in \mathcal{B}(Z)$ in order to conclude that $\mu_f' = \mu_v$. Indeed, suppose we have shown that, for every $E \in \mathcal{B}(Z)$, $\mu_f'(E) = \mu_v(E)$. Then, using (1.1) and (1.2) we see that for $E \in \mathcal{B} = \mathcal{B}(\mathbb{R})$ we have$$\begin{align}\mu_f'(E) & = \mu_f'\left((E \cap Z) \cup (E \cap Z^c)\right) \\& = \mu_f'(E \cap Z) + \mu_f'(E \cap Z^c) \\& = \mu_f'(E \cap Z) \\& = \mu_v(E \cap Z) \\& = \mu_v(E \cap Z) + \mu_v(E \cap Z^c) \\& = \mu_v\left((E \cap Z) \cup (E \cap Z^c)\right) \\& = \mu_v(E)\end{align}$$
We show that $\mu_f^\pm(J) \geq \mu_v^\pm(J)$, respectively, for all $J \in \mathcal{J}$, as follows. (2.1) We first show that for every $J \in \mathcal{J}$, there is some collection of subsets $S_J \subseteq \mathcal{B}(J)$, such that $\mu_v^+(J) = \sup_{E \in S_J} \mu_f(E)$. (2.2) We deduce, by citing a suitable property of the Jordan decomposition of signed measures, that for every $J \in \mathcal{J}$, $\mu_f^+(J) = \sup_{E \in \mathcal{B}(J)} \mu_f(E) \geq \mu_v^+(J)$. (2.3) Using a similar argument we show that for every $J \in \mathcal{J}$, $\mu_f^-(J) \geq \mu_v^-(J)$.
2.1 We will show that, for every $J \in \mathcal{J}$, there is some collection of subsets $S_J \subseteq \mathcal{B}(J)$, such that $\mu_v^+(J) = \sup_{E \in S_J} \mu_f(E)$. Let $J \in \mathcal{J}$. If $J = \emptyset$, we can take $S_J$ to be $\{\emptyset\}$. Otherwise $J = (c, d]$ for some $c, d \in [a, b]$ such that $c < d$. Define $\mathcal{P}_J$ to be the set of strict partitions of $[c, d]$, that is to say $\mathcal{P}_J$ is the collection of all finite sets of points $\{c = t_0 < t_1 < \cdots < t_n = d\}$.
For every $Q = \{c = t_0 < t_1 < \cdots < t_n = d\} \in \mathcal{P}_J$ define$$\begin{align}\alpha(Q) & := \bigcup_{k \in \{j \in \{1, 2, \dots, n\} \mid f(t_j) > f(t_{j - 1})\}} (t_{k - 1}, t_k]\\\beta(Q) & := \sum_{k = 1}^n \max\left(f(t_k) - f(t_{k - 1}), 0\right)\end{align}$$and note that $\mu_f\left(\alpha(Q)\right) = \beta(Q)$, since for any $s, t \in \mathbb{R}$ such that $s < t$,$$\begin{align}\mu_f\left((s, t]\right) &= \mu_g\left((s, t]\right) - \mu_h\left((s, t]\right) \\&= \left(g(t) - g(s)\right) - \left(h(t) - h(s)\right) \\&= \left(g(t) - h(t)\right) - \left(g(s) - h(s)\right) \\&= f(t) - f(s)\end{align}$$
Now define$$S_J := \{\alpha(Q) \mid Q \in \mathcal{P}_J\}$$
Then $S_J \subseteq \mathcal{B}(J)$ and furthermore$$\mu_v^+(J) = v_f^+(d) - v_f^+(c) \overset{(***)}{=} {V^+}_c^d(f) = \sup_{Q \in \mathcal{P}_J} \beta(Q) = \sup_{E \in S_J} \mu_f(E)$$Equality $(***)$ can be proved similarly to equality $(*)$ above (in section 1.1).
2.2 We will show that for every $J \in \mathcal{J}$, $\mu_f^+(J) \geq \mu_v^+(J)$. According to proposition 6.21(i) in Dshalalow's "Foundations of Abstract Algebra", 2nd edition, for every $D \in \mathcal{B} = \mathcal{B}(\mathbb{R})$, $\mu_f^+(D) = \sup_{E \in \mathcal{B}(D)} \mu_f(E)$. Therefore, using the result of the previous step,$$\mu_f^+(J) = \sup_{E \in \mathcal{B}(J)} \mu_f(E) \geq \sup_{E \in S_J} \mu_f(E) = \mu_v^+(J)$$
2.3 We can show that $\mu_f^-(J) \geq \mu_v^-(J)$ for all $J \in \mathcal{J}$ using arguments analogous to those presented in (2.1) and (2.2) (this time invoking Dshalalow's proposition 6.21(ii)).
We show that $\mu_f^\pm(E) = \mu_v^\pm(E)$, respectively, for all $E \in \mathcal{B}(Z)$, as follows. (3.1) Firstly we show that $\mu_f^\pm(J) = \mu_v^\pm(J)$, respectively, for all $J \in \mathcal{J}$. We do so by applying a certain minimality property of the Jordan decomposition of signed measures to the results obtained in step (2) above. (3.2) We generalize (3.1) using Dynkin's $\pi$-$\lambda$ theorem to show that $\mu_f^\pm(E) = \mu_v^\pm(E)$, respectively, for all $E \in \mathcal{B}(Z)$.
3.1 We will show that $\mu_f^\pm(J) = \mu_v^\pm(J)$, respectively, for all $J \in \mathcal{J}$. Recall that in step (2) above we showed that $\mu_f^\pm(J) \geq \mu_v^\pm(J)$, respectively, for all $J \in \mathcal{J}$. Therefore, it is left to show that, for all $J \in \mathcal{J}$, $\mu_f^\pm(J) \leq \mu_v^\pm(J)$. Indeed, according to the version of the Jordan decomposition theorem given in Doob's "Measure Theory", if $(\Omega, \mathcal{A}, \lambda)$ is a signed measure space, if $\lambda = \lambda^+ - \lambda^-$ is the unique Jordan decomposition of $\lambda$, and if $\lambda = \lambda_1 - \lambda_2$ is any representation of $\lambda$ as the difference between two measures, then $\lambda^+ \leq \lambda_1$ and $\lambda^- \leq \lambda_2$. Therefore, recalling from section "Attempt #2" of the original post that $\mu_f = \mu_v^+ - \mu_v^-$, we obtain that $\mu_f^\pm \leq \mu_v^\pm$, respectively.
3.2 We will show that $\mu_f^\pm(E) = \mu_v^\pm(E)$, respectively, for all $E \in \mathcal{B}(Z)$. $\mu_f^+$ and $\mu_v^+$ are positive measures that coincide on $\mathcal{J}$. Since $\mathcal{J}$ is a $\pi$-system that generates $Z$ and that includes $Z$, we conclude (for instance, by using Dynkin's $\pi$-$\lambda$ theorem) that $\mu_f^+(E) = \mu_v^+(E)$ for all $E \in \mathcal{B}(Z)$. By the same reasoning, $\mu_f^-(E) = \mu_v^-(E)$ for all $E \in \mathcal{B}(Z)$.
We will show that for all $E \in \mathcal{B}(Z)$, $\mu_f'(E) = \mu_v(E)$. Recall from section "Attempt #2" of the original post that $\mu_v = \mu_v^+ + \mu_v^-$. Therefore, using (3.2), for all $E \in \mathcal{B}(Z)$,$$\mu_f'(E) = \mu_f^+(E) + \mu_f^-(E) = \mu_v^+(E) + \mu_v^-(E) = \mu_v(E)$$Q.E.D.
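As a sanity check on the identities $\mu_f = \mu_f^+ - \mu_f^-$ and $\mu_v((a,b]) = \mu_f^+((a,b]) + \mu_f^-((a,b])$, here is a small numerical sketch of our own for a step function sampled on a grid (for such a function the grid partition attains the total variation):

import numpy as np

rng = np.random.default_rng(1)
f = np.cumsum(rng.standard_normal(500))   # values of a step function on a grid over (a, b]

inc = np.diff(f)
pos_var = inc.clip(min=0).sum()     # discrete analogue of mu_f^+((a, b])
neg_var = (-inc).clip(min=0).sum()  # discrete analogue of mu_f^-((a, b])

print(np.isclose(pos_var - neg_var, f[-1] - f[0]))  # mu_f((a, b]) = f(b) - f(a)
print(pos_var + neg_var)                            # total variation = mu_v((a, b])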
Acknowledgements
The proof idea was suggested to me by the last sentence of section X.6, "Functions of bounded variation vs. signed measures", on p. 164 of Doob's "Measure Theory", Springer 1993.
|
According to page 53 of Modern Computer Arithmetic (pdf), all of the steps in the Schönhage–Strassen Algorithm cost $O(N \cdot lg(N))$ except for the recursion step which ends up costing $O(N\cdot lg(N) \cdot lg(lg(N)))$.
I don't understand why the same inductive argument used to show the cost of the recursive step doesn't work for $O(N\cdot lg(N))$.
Assume that, for all $X < N$, the time $F(X)$ is less than $c \cdot X \cdot lg(X)$ for some $c$. So the recursive step costs $d \cdot \sqrt{N} F(\sqrt{N})$, and we know this is less than $d \cdot \sqrt{N} c \cdot \sqrt{N} lg(\sqrt{N}) = \frac{c \cdot d}{2} \cdot N \cdot lg(N)$ by the inductive hypothesis. If we can show that $d < 2$, then we're done because we've satisfied the inductive step. I'm pretty sure recursion overhead is negligible, so $d \approx 1$ and we have $\frac{c}{2} N \cdot lg(N)$ left to do the rest. Easy: everything else is $O(N \cdot lg(N))$ so we can pick a $c$ big enough for it to fit in our remaining time.
Basically, without digging into the details that will contradict this somehow, it
looks like things would work out if we assumed the algorithm costs $O(N \cdot log(N))$. The same thing seems to happen if I expand the recursive invocations then sum it all up... so where is the penalty coming from?
My best guess is that it has to do with the $lg(lg(N))$ levels of recursion, since that's how many times you must apply a square root to get to a constant size. But how do we know each recursive pass is not getting cheaper, like in quickselect?
For example, if we group our $N$ initial items into words of size $O(lg(N))$, meaning we have $O(N/lg(N))$ items of size $O(lg(N))$ to multiply when recursing, shouldn't that only take $O(N/lg(N) \cdot lg(N) \cdot lg(lg(N)) \cdot lg(lg(lg(N)))) = O(N \cdot lg(lg(N)) \cdot lg(lg(lg(N))))$ time to do. Not only is that well within the $N \cdot lg(N)$ limit, it worked even though I used the larger $N\cdot lg(N)\cdot lg(lg(N))$ cost for the recursive steps (for "I probably made a mistake" values of "worked").
My second guess is that there's some blowup at each level that I'm not accounting for. There are constraints on the sizes of things that might work together to slow down how quickly things get smaller, or to multiply how many multiplications have to be done.
Here's the recursive expansion.
Assume we get $N$ bits and split them into $\sqrt{N}$ groups of size $\sqrt{N}$. Everything except the recursion costs $O(N lg N)$. The recursive multiplications can be done with $3 \cdot \sqrt{N}$ bits. So we get the relationship:
$M(N) = N \cdot lg(N) + \sqrt{N} \cdot M(3 \cdot \sqrt{N})$
Expanding once:
$M(N) = N \cdot lg(N) + \sqrt{N} \cdot \left(3 \cdot \sqrt{N} \cdot lg(3 \cdot \sqrt{N}) + \sqrt{3 \cdot \sqrt{N}} \cdot M(3 \cdot \sqrt{3 \cdot \sqrt{N}})\right)$
Simplifying:
$M(N) = N \cdot lg(N) + 3 \cdot N \cdot lg(3 \cdot \sqrt{N}) + \sqrt{3} \cdot N^{\frac{3}{4}} \cdot M(3^{2-\frac{1}{2}} \cdot \sqrt{\sqrt{N}})$
See the pattern? Each term will end up in the form $3^{2-2^{-i}} \cdot N \cdot lg(N^{2^{-i}} 3^{2-2^{-i}})$. So the overall sum is:
$\sum_{i=0}^{lg(lg(N))} 3^{2-2^{-i}} \cdot N \cdot lg(N^{2^{-i}} 3^{2-2^{-i}})$
We can upper bound this by increasing the powers of 3 to just $3^2$, since that can only increase the value in both cases:
$\sum_{i=0}^{lg(lg(N))} 9 \cdot N \cdot lg(N^{2^{-i}} 9)$
Which is asymptotically the same as:
$\sum_{i=0}^{lg(lg(N))} N \cdot lg(N^{2^{-i}})$
Moving the power out of the logarithm:
$\sum_{i=0}^{lg(lg(N))} N \cdot lg(N) \cdot 2^{-i}$
Moving variables not dependent on $i$ out:
$N \cdot lg(N) \sum_{i=0}^{lg(lg(N))} 2^{-i}$
The series is upper bounded by 2, so we're upper bounded by:
$N \cdot lg(N)$
Not sure where the $lg(lg(N))$ went. All the twiddly factors and offsets (because many recurrence relations "solutions" are broken by those) I throw in seem to get killed off by the $lg$ creating that exponentially decreasing term, or they end up not multiplied by $N$ and are asymptotically insignificant.
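For what it's worth, the recurrence as written above can also be probed numerically. The sketch below (with an arbitrary small base case, which is our own assumption) seems consistent with the expansion: the ratio to $N \cdot lg(N)$ stays bounded, which suggests the $lg(lg(N))$ factor must come from costs this recurrence is not modeling.

import math

def M(n: float) -> float:
    # The recurrence from the post: M(N) = N lg N + sqrt(N) * M(3 sqrt(N)),
    # with an arbitrary small base case (our assumption).
    if n <= 16:
        return n
    return n * math.log2(n) + math.sqrt(n) * M(3 * math.sqrt(n))

for e in (10, 20, 40, 80, 160):
    N = 2.0 ** e
    print(e, M(N) / (N * math.log2(N)))   # ratio appears to stay bounded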
|
I saw in a SO thread a suggestion to use
filtfilt which performs backwards/forwards filtering instead of
lfilter.
What is the motivation for using one against the other technique?
filtfilt is zero-phase filtering, which doesn't shift the signal as it filters. Since the phase is zero at all frequencies, it is also linear-phase. Filtering backwards in time requires you to predict the future, so it can't be used in "online" real-life applications, only for offline processing of recordings of signals.
lfilter is causal forward-in-time filtering only, similar to a real-life electronic filter. It can't be zero-phase. It can be linear-phase (symmetrical FIR), but usually isn't. Usually it adds different amounts of delay at different frequencies.
An example and image should make it obvious. Although the magnitude of the frequency response of the filters is identical (top left and top right), the zero-phase lowpass lines up with the original signal, just without high frequency content, while the minimum phase filtering delays the signal in a causal way:
from __future__ import division, print_function
import numpy as np
from numpy.random import randn
from numpy.fft import rfft
from scipy import signal
import matplotlib.pyplot as plt

b, a = signal.butter(4, 0.03, analog=False)

# Show that frequency response is the same
impulse = np.zeros(1000)
impulse[500] = 1

# Applies filter forward and backward in time
imp_ff = signal.filtfilt(b, a, impulse)

# Applies filter forward in time twice (for same frequency response)
imp_lf = signal.lfilter(b, a, signal.lfilter(b, a, impulse))

plt.subplot(2, 2, 1)
plt.semilogx(20*np.log10(np.abs(rfft(imp_lf))))
plt.ylim(-100, 20)
plt.grid(True, which='both')
plt.title('lfilter')

plt.subplot(2, 2, 2)
plt.semilogx(20*np.log10(np.abs(rfft(imp_ff))))
plt.ylim(-100, 20)
plt.grid(True, which='both')
plt.title('filtfilt')

sig = np.cumsum(randn(800))  # Brownian noise
sig_ff = signal.filtfilt(b, a, sig)
sig_lf = signal.lfilter(b, a, signal.lfilter(b, a, sig))

plt.subplot(2, 1, 2)
plt.plot(sig, color='silver', label='Original')
plt.plot(sig_ff, color='#3465a4', label='filtfilt')
plt.plot(sig_lf, color='#cc0000', label='lfilter')
plt.grid(True, which='both')
plt.legend(loc="best")
Answer by @endolith is complete and correct! Please read his post first, and then this one in addition to it. Due to my low reputation I was unable to respond to the comments where @Thomas Arildsen and @endolith argue about the effective order of the filter obtained by filtfilt:
lfilter applies the given filter once; in the frequency domain this is like applying the filter's transfer function ONCE.
filtfilt applies the same filter twice, so the effect is like applying the filter's transfer function SQUARED. In the case of a Butterworth filter (scipy.signal.butter) with the (magnitude of the) transfer function
$$G(n)=\frac{1}{\sqrt{1+\omega^{2n}}}\quad\text{where } n \text{ is the order of the filter,}$$
the effective gain will be
$$G(n)_{filtfilt}=G(n)^2=\frac{1}{1+\omega^{2n}}$$
and this cannot be interpreted as a Butterworth filter of order $2n$ or $2n-1$:
$$G(n)_{filtfilt}\neq G(2n).$$
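A short numeric check of this point (our own sketch): squaring a 4th-order Butterworth magnitude response gives -6 dB at the cutoff, whereas a genuine 8th-order Butterworth is -3 dB there.

import numpy as np
from scipy import signal

b, a = signal.butter(4, 0.2)
w, h = signal.freqz(b, a, worN=1024)
mag_ff = np.abs(h) ** 2                  # effective magnitude response of filtfilt

b8, a8 = signal.butter(8, 0.2)
_, h8 = signal.freqz(b8, a8, worN=1024)

k = np.argmin(np.abs(w - 0.2 * np.pi))   # index of the cutoff frequency
print(20 * np.log10(mag_ff[k]))          # ~ -6 dB
print(20 * np.log10(np.abs(h8[k])))      # ~ -3 dB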
|
You shouldn't do a measurement with only one OFDM symbol at first. Instead, create some random data, perform QAM modulation, divide the QAM symbol array into $N$ OFDM data blocks, and make OFDM symbols. Then add the CP and paste everything together to form a frame. Now you can calculate its power,
$$P_{signal} = \frac{1}{N \cdot N_{FFT}}\sum |s|^2$$
estimate the PAPR, and add noise to model the frame passing through an AWGN channel. Choose an SNR of interest; since you know your signal's power, you can calculate the noise power needed to satisfy the SNR value you've chosen.
$P_{noise} = 10^{-SNR/10} \cdot P_{signal}$
Create complex-valued noise with the $randn$ function and scale it with
$\sigma = \sqrt{P_{noise}}/{\sqrt{2}}$
and then add noise to your signal.
After that, if you perform OFDM demodulation and then QAM demodulation, you'll achieve the BER you expect for the SNR you've chosen. If you want a more precise measurement, repeat this routine several times for each SNR value and average the statistics. If you want to plot really good curves, you need $1e+5...1e+6$ bits to measure the BER for one SNR value. You can construct a frame from about 5000...20000 bits (a common length for the LDPC decoder used e.g. in the latest DVB-T, as I remember) and do the measurements in a $for$ loop. I advise you to generate new random data at every iteration.
So I don't see any problem with FFT normalization or with anything else. You construct a frame, estimate its power, and insert noise according to the average signal power and the SNR you want to achieve.
Oh, I've forgotten. If you use only part of the subcarriers during modulation, you should scale the noise power as
$$P_{noise} = P_{noise} \cdot \frac{N_{used}}{N_{FFT}}$$
so that the noise power is matched to the band where the signal actually exists.
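Putting the noise-scaling steps above into code, a minimal sketch (the function name and interface are our own):

import numpy as np

def add_awgn(s: np.ndarray, snr_db: float, n_used: int, n_fft: int) -> np.ndarray:
    """Add complex AWGN to frame s to hit a target SNR (dB) in the occupied band."""
    p_signal = np.mean(np.abs(s) ** 2)                 # average signal power
    p_noise = 10 ** (-snr_db / 10) * p_signal          # noise power for target SNR
    p_noise *= n_used / n_fft                          # only occupied subcarriers carry signal
    sigma = np.sqrt(p_noise / 2)                       # per real/imag component
    noise = sigma * (np.random.randn(*s.shape) + 1j * np.random.randn(*s.shape))
    return s + noise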
|
Let $G$ be a finite group and $\Sigma_g$ a closed Riemann surface of genus $g$. Then Mednykh's formula states
$\frac{\left|\mathrm{Hom}(\pi_1(\Sigma_g),G)\right|}{\left|G\right|} = \frac{1}{\left|G\right|^{\chi(\Sigma_g)}}\sum_{V}\left(\dim V\right)^{\chi(\Sigma_g)}$
where the sum on the right is over irreducible complex representations of $G$.
Is there an analog in modular representation theory? For example, we can fix the problem in the cases $g=0,1$: the number of irreducible representations equals the number of $p$-regular conjugacy classes, and the vector $x=(\dim V_i)^t$ satisfies $x^tCx=|G|$ where $C=D^tD$ is the Cartan matrix.
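(For what it's worth, the $g=1$, $\chi=0$ case of the formula is easy to check by brute force in characteristic zero; here is a sketch for $G=S_3$, where $|\mathrm{Hom}(\mathbb{Z}^2,G)|/|G|$ should equal the number of irreducible representations, i.e. 3:)

from itertools import product
from sympy.combinatorics import SymmetricGroup

G = list(SymmetricGroup(3).elements)
# Hom(Z^2, G) = commuting pairs; Mednykh with chi = 0 predicts #pairs / |G| = #irreps.
commuting = sum(1 for a, b in product(G, G) if a * b == b * a)
print(commuting / len(G))   # 3.0, the number of irreducible representations of S3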
|
There is a very simple, direct, and precise mathematical answer to the original question.
The first PC is a linear combination of the original variables $Y_1$, $Y_2$, $\dots$, $Y_p$ that maximizes the total of the $R_i^2$ statistics when predicting the original variables as a regression function of the linear combination.
Precisely, the coefficients $a_1$, $a_2$, $\dots$, $a_p$ in the first PC, $PC_1 = a_1Y_1 + a_2Y_2 + \cdots + a_pY_p$, give you the maximum value of $\sum_{i=1}^p R_i^2(Y_i | PC_1)$, where the maximum is taken over all possible linear combinations.
In this sense, you can interpret the first PC as a maximizer of "variance explained," or more precisely, a maximizer of "total variance explained."
It is "a" maximizer rather than "the" maximizer, because any proportional coefficients $b_i = c\times a_i$, for $c \neq 0$, will give the same maximum. A nice by-product of this result is that the unit length constraint is unnecessary, other than as a device to come up with "a" maximizer.
For references to original literature and extensions, see
Westfall,P.H., Arias, A.L., and Fulton, L.V. (2017). Teaching Principal Components Using Correlations, Multivariate Behavioral Research, 52, 648-660.
|
Extendible
A cardinal $\kappa$ is
$\eta$-extendible for an ordinal $\eta$ if and only if there is an elementary embedding $j:V_{\kappa+\eta}\to V_\theta$, with critical point $\kappa$, for some ordinal $\theta$. The cardinal $\kappa$ is extendible if and only if it is $\eta$-extendible for every ordinal $\eta$. Equivalently, for every ordinal $\alpha$ there is a nontrivial elementary embedding $j:V_{\kappa+\alpha+1}\to V_{j(\kappa)+j(\alpha)+1}$ with critical point $\kappa$.

Alternative definition
Given cardinals $\lambda$ and $\theta$, a cardinal $\kappa\leq\lambda,\theta$ is
jointly $\lambda$-supercompact and $\theta$-superstrong if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $\mathrm{crit}(j)=\kappa$, $\lambda<j(\kappa)$, $M^\lambda\subseteq M$ and $V_{j(\theta)}\subseteq M$. That is, a single embedding witnesses both $\lambda$-supercompactness and (a strengthening of) superstrongness of $\kappa$. The least supercompact is never jointly $\lambda$-supercompact and $\theta$-superstrong for any $\lambda$,$\theta\geq\kappa$.
A cardinal is extendible if and only if it is jointly supercompact and $\kappa$-superstrong, i.e. for every $\lambda\geq\kappa$ it is jointly $\lambda$-supercompact and $\kappa$-superstrong. [1] One can show that extendibility of $\kappa$ is in fact equivalent to "for all $\lambda$,$\theta\geq\kappa$, $\kappa$ is jointly $\lambda$-supercompact and $\theta$-superstrong". A similar characterization of $C^{(n)}$-extendible cardinals exists.
The ultrahuge cardinals are defined in a way very similar to this, and one can (very informally) say that "ultrahuge cardinals are to superhuges what extendibles are to supercompacts". These cardinals are superhuge (and stationary limits of superhuges) and strictly below almost 2-huges in consistency strength.
Relation to Other Large Cardinals
Extendible cardinals are related to various kinds of measurable cardinals.
Supercompactness
Extendibility is connected in strength with supercompactness. Every extendible cardinal is supercompact, since from the embeddings $j:V_\lambda\to V_\theta$ we may extract the induced supercompactness measures $X\in\mu\iff j''\delta\in j(X)$ for $X\subset \mathcal{P}_\kappa(\delta)$, provided that $j(\kappa)\gt\delta$ and $\mathcal{P}_\kappa(\delta)\subset V_\lambda$, which one can arrange. On the other hand, if $\kappa$ is $\theta$-supercompact, witnessed by $j:V\to M$, then $\kappa$ is $\delta$-extendible inside $M$, provided $\beth_\delta\leq\theta$, since the restricted elementary embedding $j\upharpoonright V_\delta:V_\delta\to j(V_\delta)=M_{j(\delta)}$ has size at most $\theta$ and is therefore in $M$, witnessing $\delta$-extendibility there.
Although extendibility itself is stronger and larger than supercompactness, $\eta$-supercompacteness is not necessarily too much weaker than $\eta$-extendibility. For example, if a cardinal $\kappa$ is $\beth_{\eta}(\kappa)$-supercompact (in this case, the same as $\beth_{\kappa+\eta}$-supercompact) for some $\eta<\kappa$, then there is a normal measure $U$ over $\kappa$ such that $\{\lambda<\kappa:\lambda\text{ is }\eta\text{-extendible}\}\in U$.
Strong Compactness
Interestingly, extendibility is also related to strong compactness. A cardinal $\kappa$ is strongly compact iff the infinitary language $\mathcal{L}_{\kappa,\kappa}$ has the $\kappa$-compactness property. A cardinal $\kappa$ is extendible iff the infinitary language $\mathcal{L}_{\kappa,\kappa}^n$ (the infinitary language but with $(n+1)$-th order logic) has the $\kappa$-compactness property for every natural number $n$. [2]
Given a logic $\mathcal{L}$, the minimum cardinal $\kappa$ such that $\mathcal{L}$ satisfies the $\kappa$-compactness theorem is called the
strong compactness cardinal of $\mathcal{L}$. The strong compactness cardinal of $\omega$-th order finitary logic (that is, the union of all $\mathcal{L}_{\omega,\omega}^n$ for natural $n$) is the least extendible cardinal.

Variants

$C^{(n)}$-extendible cardinals
(Information in this subsection from [3] unless noted otherwise)
A cardinal $κ$ is called
$C^{(n)}$-extendible if for all $λ > κ$ it is $λ$-$C^{(n)}$-extendible, i.e. if there is an ordinal $µ$ and an elementary embedding $j : V_λ → V_µ$, with $\mathrm{crit(j)} = κ$, $j(κ) > λ$ and $j(κ) ∈ C^{(n)}$.
For $λ ∈ C^{(n)}$, a cardinal $κ$ is $λ$-$C^{(n)+}$-extendible iff it is $λ$-$C^{(n)}$-extendible, witnessed by some $j : V_λ → V_µ$ which (besides $j(κ) > λ$ and $j(κ) ∈ C^{(n)}$) satisfies that $µ ∈ C^{(n)}$.
$κ$ is $C^{(n)+}$-extendible iff it is $λ$-$C^{(n)+}$-extendible for every $λ > κ$ such that $λ ∈ C^{(n)}$.
Properties:
* The notions of $C^{(n)}$-extendible cardinals and $C^{(n)+}$-extendible cardinals are equivalent.[4]
* Every extendible cardinal is $C^{(1)}$-extendible.
* If $κ$ is $C^{(n)}$-extendible, then $κ ∈ C^{(n+2)}$.
* For every $n ≥ 1$, if $κ$ is $C^{(n)}$-extendible and $κ+1$-$C^{(n+1)}$-extendible, then the set of $C^{(n)}$-extendible cardinals is unbounded below $κ$. Hence, the first $C^{(n)}$-extendible cardinal $κ$, if it exists, is not $κ+1$-$C^{(n+1)}$-extendible. In particular, the first extendible cardinal $κ$ is not $κ+1$-$C^{(2)}$-extendible.
* For every $n$, if there exists a $C^{(n+2)}$-extendible cardinal, then there exists a proper class of $C^{(n)}$-extendible cardinals.
* The existence of a $C^{(n+1)}$-extendible cardinal $κ$ (for $n ≥ 1$) does not imply the existence of a $C^{(n)}$-extendible cardinal greater than $κ$. For if $λ$ is such a cardinal, then $V_λ \models$ “$κ$ is $C^{(n+1)}$-extendible”.
* If $κ$ is $κ+1$-$C^{(n)}$-extendible and belongs to $C^{(n)}$, then $κ$ is $C^{(n)}$-superstrong and there is a $κ$-complete normal ultrafilter $U$ over $κ$ such that the set of $C^{(n)}$-superstrong cardinals smaller than $κ$ belongs to $U$.
* For $n ≥ 1$, the following are equivalent ($VP$ stands for Vopěnka's principle):
** $VP(Π_{n+1})$
** $VP(κ, \mathbf{Σ_{n+2}})$ for some $κ$
** There exists a $C^{(n)}$-extendible cardinal.
* “For every $n$ there exists a $C^{(n)}$-extendible cardinal.” is equivalent to the full Vopěnka principle.
* Assuming $\mathrm{I3}(κ, δ)$, if $δ$ is a limit cardinal (instead of a successor of a limit cardinal; Kunen's theorem excludes other cases), it is equal to $\sup\{j^m(κ) : m ∈ ω\}$ where $j$ is the elementary embedding. Then $κ$ and $j^m(κ)$ are $C^{(n)}$-extendible (inter alia) in $V_δ$, for all $n$ and $m$.

$(\Sigma_n,\eta)$-extendible cardinals
There are some variants of extendible cardinals because of the interesting jump in consistency strength from $0$-extendible cardinals to $1$-extendibles. These variants specify the elementarity of the embedding.
A cardinal $\kappa$ is $(\Sigma_n,\eta)$-extendible, if there is a $\Sigma_n$-elementary embedding $j:V_{\kappa+\eta}\to V_\theta$ with critical point $\kappa$, for some ordinal $\theta$. These cardinals were introduced by Bagaria, Hamkins, Tsaprounis and Usuba [5].
$\Sigma_n$-extendible cardinals
The special case of $\eta=0$ leads to a much weaker notion. Specifically, a cardinal $\kappa$ is
$\Sigma_n$-extendible if it is $(\Sigma_n,0)$-extendible, or more simply, if $V_\kappa\prec_{\Sigma_n} V_\theta$ for some ordinal $\theta$. Note that this does not necessarily imply that $\kappa$ is inaccessible, and indeed the existence of $\Sigma_n$-extendible cardinals is provable in ZFC via the reflection theorem. For example, every $\Sigma_n$ correct cardinal is $\Sigma_n$-extendible, since from $V_\kappa\prec_{\Sigma_n} V$ and $V_\lambda\prec_{\Sigma_n} V$, where $\kappa\lt\lambda$, it follows that $V_\kappa\prec_{\Sigma_n} V_\lambda$. So in fact there is a closed unbounded class of $\Sigma_n$-extendible cardinals.
Similarly, every Mahlo cardinal $\kappa$ has a stationary set of inaccessible $\Sigma_n$-extendible cardinals $\gamma<\kappa$.
$\Sigma_3$-extendible cardinals cannot be Laver indestructible. Therefore $\Sigma_3$-correct, $\Sigma_3$-reflecting, $0$-extendible, (pseudo-)uplifting, weakly superstrong, strongly uplifting, superstrong, extendible, (almost) huge or rank-into-rank cardinals also cannot.[5]
$A$-extendible cardinals
(this subsection from [6])
Definitions:
* A cardinal $κ$ is $A$-extendible, for a class $A$, iff for every ordinal $λ > κ$ there is an ordinal $θ$ such that there is an elementary embedding $j : \langle V_λ , ∈, A ∩ V_λ \rangle → \langle V_θ , ∈, A ∩ V_θ \rangle$ with critical point $κ$ (such that $λ < j(κ)$; removing this requirement does not change which cardinals are extendible). $λ$ is called the degree of $A$-extendibility of an embedding.
* A cardinal $κ$ is $(Σ_n)$-extendible iff it is $A$-extendible, where $A$ is the $Σ_n$-truth predicate. (This is a different notion than $\Sigma_n$-extendible cardinals.)[4]
Results:
* The Vopěnka principle is equivalent over GBC to both of the following statements:
** For every class $A$, there is an $A$-extendible cardinal.
** For every class $A$, there is a stationary proper class of $A$-extendible cardinals.

Virtually extendible cardinals
Definitions:
* A cardinal $κ$ is virtually extendible iff for every $α > κ$, in a set-forcing extension there is an elementary embedding $j : V_α → V_β$ with $\mathrm{crit}(j) = κ$ and $j(κ) > α$.
* A cardinal $κ$ is (weakly) virtually $A$-extendible, for a class $A$, iff for every ordinal $λ > κ$ there is an ordinal $θ$ such that in a set-forcing extension, there is an elementary embedding $j : \langle V_λ , ∈, A ∩ V_λ \rangle → \langle V_θ , ∈, A ∩ V_θ \rangle$ with critical point $κ$. For (strongly) virtually $A$-extendible $κ$, we additionally require $λ < j(κ)$.[4]
* A cardinal $κ$ is $n$-remarkable, for $n > 0$, iff for every $η > κ$ in $C^{(n)}$, there is $α<κ$ also in $C^{(n)}$ such that in $V^{Coll(ω, < κ)}$, there is an elementary embedding $j : V_α → V_η$ with $j(\mathrm{crit}(j)) = κ$. A cardinal is completely remarkable iff it is $n$-remarkable for all $n > 0$.[8]
* A cardinal $κ$ is weakly or strongly virtually $(Σ_n)$-extendible iff it is respectively weakly or strongly virtually $A$-extendible, where $A$ is the $Σ_n$-truth predicate.[4]
Equivalence and hierarchy:
* $1$-remarkability is equivalent to remarkability.
* A cardinal is virtually $C^{(n)}$-extendible iff it is $n+1$-remarkable (virtually extendible cardinals are virtually $C^{(1)}$-extendible).[8]
* Weakly and strongly virtually $A$-extendible cardinals are non-equivalent, although in the non-virtual context the weak and strong forms of $A$-extendibility coincide.[4]
* It is relatively consistent with GBC that every class $A$ admits a (weakly) virtually $A$-extendible cardinal (and so the generic Vopěnka principle holds), but no class $A$ admits a (strongly) virtually $A$-extendible cardinal.[4]
* Every $n$-remarkable cardinal is in $C^{(n+1)}$.[8]
* Every $n+1$-remarkable cardinal is a limit of $n$-remarkable cardinals.[8]
Upper limits for strength:
* If $κ$ is virtually Shelah for supercompactness or 2-iterable, then $V_κ$ is a model of proper class many virtually $C^{(n)}$-extendible cardinals for every $n < ω$.[7]
* If $κ$ is virtually huge*, then $V_κ$ is a model of proper class many virtually extendible cardinals.[7]
* Completely remarkable cardinals can exist in $L$.[8]
* For a $2$-iterable cardinal $κ$, $V_κ$ is a model of proper class many completely remarkable cardinals.[8]
* If $0^\#$ exists, then every Silver indiscernible is in $L$ completely remarkable and virtually $A$-extendible for every definable class $A$.[4, 8]
Lower limit for strength:
* Virtually extendible cardinals are remarkable limits of remarkable cardinals and 1-iterable limits of 1-iterable cardinals.[7]
* The following are equiconsistent:
** $gVP(Π_n)$
** $gVP(κ, \mathbf{Σ_{n+1}})$ for some $κ$
** There is an $n$-remarkable cardinal.
* The following are equiconsistent:
** $gVP(\mathbf{Π_n})$
** $gVP(κ, \mathbf{Σ_{n+1}})$ for a proper class of $κ$
** There is a proper class of $n$-remarkable cardinals.
* Unless there is a transitive model of ZFC with a proper class of $n$-remarkable cardinals:
** if for some cardinal $κ$, $gVP(κ, \mathbf{Σ_{n+1}})$ holds, then there is an $n$-remarkable cardinal;
** if $gVP(Π_n)$ holds, then there is an $n$-remarkable cardinal;
** if $gVP(\mathbf{Π_n})$ holds, then there is a proper class of $n$-remarkable cardinals.
* $κ$ is the least cardinal for which $gVP^∗(κ, \mathbf{Σ_{n+1}})$ holds $\iff$ $κ$ is the least $n$-remarkable cardinal.
* If $gVP^∗(Π_n)$ holds, then there is an $n$-remarkable cardinal.
* If $gVP^∗(\mathbf{Π_n})$ holds, then there is a proper class of $n$-remarkable cardinals.
* If there is a proper class of $n$-remarkable cardinals, then $gVP(Σ_{n+1})$ holds.[4]
* If $gVP(Σ_{n+1})$ holds, then either there is a proper class of $n$-remarkable cardinals or there is a proper class of virtually rank-into-rank cardinals.[4]
* The generic Vopěnka scheme is equivalent over ZFC to the scheme asserting of every definable class $A$ that there is a proper class of weakly virtually $A$-extendible cardinals.[4]

Open problems:
* Must there be an $n$-remarkable cardinal if $gVP(κ, \mathbf{Σ_{n+1}})$ holds for some $κ$? If $gVP(Π_n)$ holds?
In set-theoretic geology

This article is a stub.
References
1. Usuba, Toshimichi. Extendible cardinals and the mantle. Archive for Mathematical Logic 58(1-2):71-75, 2019.
2. Kanamori, Akihiro. The Higher Infinite. Second edition, Springer-Verlag, Berlin, 2009.
3. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3-4):213-240, 2012.
4. Gitman, Victoria and Hamkins, Joel David. A model of the generic Vopěnka principle in which the ordinals are not Mahlo. 2018.
5. Bagaria, Joan; Hamkins, Joel David; Tsaprounis, Konstantinos; Usuba, Toshimichi. Superstrong and other large cardinals are never Laver indestructible. Archive for Mathematical Logic 55(1-2):19-35, 2013.
6. Hamkins, Joel David. The Vopěnka principle is inequivalent to but conservative over the Vopěnka scheme. 2016.
7. Gitman, Victoria and Schindler, Ralf. Virtual large cardinals.
8. Bagaria, Joan; Gitman, Victoria; Schindler, Ralf. Generic Vopěnka's Principle, remarkable cardinals, and the weak Proper Forcing Axiom. Archive for Mathematical Logic 56(1-2):1-20, 2017.
|
I am modeling a stock price that follows Geometric Brownian Motion and have the following:
$E(X)$ = .16 (16%)
$\sigma$ = .24 (24%)
$X_0$ = 95
$T$ = 1 (12 months)
I am trying to find the probability that the price of this stock will be below 93 at the end of this time period. I am calculating this analytically, using the Log Normal Distribution given as the following:
$$P(X,t) = \frac{1}{X\,\sigma\sqrt{2\pi t}}\,\exp\left(-\frac{\left(\ln X - \ln X_0 - (\mu - \sigma^2/2)\,t\right)^2}{2\sigma^2 t}\right)$$
I can plug in the values as the following:
$$P(X,t) = \frac{1}{X\,(0.24)\sqrt{2\pi (1)}}\,\exp\left(-\frac{\left(\ln X - \ln(95) - \left(0.16 - (0.24)^2/2\right)(1)\right)^2}{2\,(0.24)^2\,(1)}\right)$$
But then I am still left with the $X$. My question: is this just the value 93 that should be plugged in? Would this represent the probability of the price being below 93 after this time period? And if we wanted the probability that the price closes above 93, is it just 1 minus this probability?
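(Not part of the question, but for concreteness: since $\ln X_T$ is normally distributed, the probability of ending below 93 comes from integrating the density, i.e. from the normal CDF, rather than from evaluating the density at a single point. A sketch:)

import numpy as np
from scipy.stats import norm

mu, sigma, x0, T = 0.16, 0.24, 95.0, 1.0
# Under GBM: ln X_T ~ Normal(ln x0 + (mu - sigma^2/2) T, sigma^2 T)
z = (np.log(93.0) - np.log(x0) - (mu - sigma**2 / 2) * T) / (sigma * np.sqrt(T))
print(norm.cdf(z))       # P(X_T < 93)
print(1 - norm.cdf(z))   # P(X_T > 93)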
|
What is an Adjunction? Part 1 (Motivation)
Some time ago, I started a "What is...?" series introducing the basics of category theory:
"What is a category?" "What is a functor?" Part 1 and Part 2 "What is a natural transformation?" Part 1 and Part 2
Today, we'll add adjunctions to the list. An
adjunction is a pair of functors that interact in a particularly nice way, which we'll make precise below.

Indeed, I will make the admittedly provocative claim that adjointness is a concept of fundamental logical and mathematical importance that is not captured elsewhere in mathematics. - Steve Awodey (in Category Theory, Oxford Logic Guides)

So, what is an adjunction?
Mathematics is often concerned with pinning down an appropriate notion of "sameness" and asking the question, "When are two things the same?" (I am reminded of Jim Propp's excellent essay "Who Knows 2?") Category theory, in particular, shines brightly in this arena. It provides us with better words with which to both ask and answer this question and leads us to the notion of
isomorphism: Two objects $X$ and $Y$ in a category are isomorphic if there is a morphism from one to the other that has both a left and a right inverse. An isomorphism, then, is like a process $X\to Y$ that can be completely reversed. When such a process exists, the objects are isomorphic.
Sometimes, however, isomorphism is not the notion you want to work with. What if a process isn't exactly reversible
on the nose, yet—given your goals—you'd prefer not to distinguish between the original and final states?
This arises when we wish to compare two categories:
When are categories $\mathsf{C}$ and $\mathsf{D}$ isomorphic? Given a functor $F\colon \mathsf{C}\to\mathsf{D}$, can I find a functor $G\colon \mathsf{D}\to\mathsf{C}$ so that $$FG=\text{id}_{\mathsf{D}} \qquad\text{and} \qquad GF=\text{id}_\mathsf{C} \quad ?$$ (Here, $\text{id}_\mathsf{C}\colon \mathsf{C}\to\mathsf{C}$ is the identity functor on $\mathsf{C}$, and similarly for $\text{id}_\mathsf{D}$.) Oftentimes, the answer will be "no." That is, equality of functors is usually too much to ask for. Think of how this plays out for sets: the products $X\times Y$ and $Y\times X$ are not literally equal as sets.
But if you simply want "the set of pairs of elements in $X$ and $Y$" then you'll be satisfied knowing that although $X\times Y$ and $Y\times X$ are not equal, they are isomorphic. That is, replacing equalities with isomorphisms provides us with desired flexibility. Isomorphisms rather than equalities are thus the tool of choice in category theory.
With that in mind, let's revisit the equations above: $FG=\text{id}_{\mathsf{D}}$ and $GF=\text{id}_\mathsf{C}.$ When we replace these equalities with natural isomorphisms $$FG\cong \text{id}_{\mathsf{D}} \qquad\text{and} \qquad GF\cong \text{id}_\mathsf{C}$$ then $\mathsf{C}$ and $\mathsf{D}$ are called
equivalent categories. Equivalence, then, is a better notion of "sameness" when comparing categories. Let's take this one step further.
We've just exchanged equalities $=$ for isomorphisms $\cong$, so what if we take this a step further and exchange the isomorphisms for regular morphisms?
If we ask only for natural transformations $$FG \to \text{id}_{\mathsf{D}} \qquad\text{and} \qquad \text{id}_\mathsf{C}\to GF$$ then the categories may no longer be equivalent. But this setup is still of great interest.
It is called an
adjunction, and $F$ and $G$ are called adjoint functors. That's it!
Well, almost. We also ask that the two natural transformations relate to each other in a nice way. But we'll get to that next time. Amazingly, there is
much that follows from this simple adjustment. That is, by simply replacing equality $=$ with a (not necessarily invertible arrow) $\to$ we've opened up a vast world of mathematical possibilities. By way of analogy...
This happens elsewhere in mathematics, too. By "this" I mean the act of finding something interesting after loosening up a strict notion of sameness.
In topology, for example, one is interested in distinguishing topological spaces. This amounts to asking if there is a
homeomorphism—an isomorphism in the category of topological spaces—between them. Homeomorphisms are very nice, but there is a relaxed version known as a homotopy equivalence. Homotopy equivalence is a weaker notion than homeomorphism, so you might think it's no good. Au contraire. These weak equivalences pave the way for the deeply rich field of homotopy theory. So I like to have this analogy in mind:
I'm reminded of this idea in linear algebra, as well, though in a tangential sort of way. Suppose $U$ is an orthogonal $n\times n$ matrix. It represents an invertible linear map $\mathbb{R}^n\to\mathbb{R}^n$ whose inverse is precisely the adjoint $U^*$ of $U$. That is, $UU^*=I=U^*U$. Now imagine omitting some of the columns of $U$ so that it's an $n\times k$ rectangular matrix with $k<n$. This is not an outrageous request. Perhaps $U$ is the matrix obtained from a singular value decomposition, say $M=UDV^*$, whose smaller singular values you wish to disregard for some data compression task. This truncated $U$ is then a map from a smaller space $\mathbb{R}^k$ into a larger space $\mathbb{R}^n$. The remaining $k$ columns are still orthonormal, so $U^*U=I_k$, where $I_k$ is the identity on $\mathbb{R}^k$. Intuitively, this says that if you inject $\mathbb{R}^k$ as a subspace into $\mathbb{R}^n$, then project back onto it, you've not done anything at all. On the other hand, $UU^*$ is no longer the identity on $\mathbb{R}^n$. Intuitively, you can't squish all of $\mathbb{R}^n$ onto $\mathbb{R}^k$ and hope to undo the distortion damage. So $U^*$ is no longer the inverse of $U$, yet both matrices still encode valuable information about the data you're interested in.
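Here is a tiny numerical illustration of that asymmetry (our own sketch):

import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # a 5x5 orthogonal matrix
U = Q[:, :2]                                      # keep k = 2 columns

print(np.allclose(U.T @ U, np.eye(2)))  # True: U*U = I_k (inject, then project)
print(np.allclose(U @ U.T, np.eye(5)))  # False: UU* != I_n (information is lost)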
Speaking of linear maps and their adjoints, you might recall a special equality that relates them. If $f\colon V\to W$ is a linear map of Hilbert spaces, then its adjoint $f^*\colon W\to V$ satisfies the inner product equation
$$\langle f\mathbf{v},\mathbf{w}\rangle = \langle \mathbf{v},f^*\mathbf{w}\rangle \qquad \text{for all $\mathbf{v}\in V$ and $\mathbf{w}\in W$}$$
As we'll see next time, an adjunction consists of a pair of functors that satisfy a nearly identical equation. For this reason, the functors participating in an adjunction are called
adjoint functors.
In summary, relaxing a notion of "sameness" gives us extra currency with which to explore new phenomena. So don't think of adjunctions as mere equivalence-wannabes. Instead, think of them as top notch, high class citizens in the categorical landscape. As Awodey shares in the same text quoted above,
[The notion of adjoint functor] captures an important mathematical phenomenon that is invisible without the lens of category theory.
Next time, we'll unwind the definition a bit more and, with the lens of category theory, be able to spot several examples in mathematics.
|
Schelling’s segregation model is a landmark model in sociology. It shows the counter-intuitive phenomenon that residential segregation between individuals of different groups can emerge even when all involved individuals are tolerant. Although the model is widely studied, no pure game-theoretic version where rational agents strategically choose their location exists. We close this gap by introducing and analyzing generalized game-theoretic models of Schelling segregation, where the agents can also have individual location preferences. For our models we investigate the convergence behavior and the efficiency of their equilibria. In particular, we prove guaranteed convergence to an equilibrium in the version which is closest to Schelling’s original model. Moreover, we provide tight bounds on the Price of Anarchy.
It has been experimentally observed that real-world networks follow certain topological properties, such as small-world, power-law etc. To study these networks, many random graph models, such as Preferential Attachment, have been proposed. In this paper, we consider the deterministic properties which capture power-law degree distribution and degeneracy. Networks with these properties are known as scale-free networks in the literature. Many interesting problems remain NP-hard on scale-free networks. We study the relationship between scale-free properties and the approximation ratio of some commonly used evolutionary algorithms. For the Vertex Cover, we observe experimentally that the \((1+1)\) EA always gives a better result than a greedy local search, even when it runs for only \(O(n \log(n))\) steps. We give the construction of a scale-free network in which a multi-objective algorithm and a greedy algorithm obtain optimal solutions, while the \((1+1)\) EA obtains the worst possible solution with constant probability. We prove that for the Dominating Set, Vertex Cover, Connected Dominating Set and Independent Set, the \((1+1)\) EA obtains constant-factor approximations in expected run time \(O(n \log(n))\) and \(O(n^4)\), respectively. GSEMO gives an even better approximation than the \((1+1)\) EA in expected run time \(O(n^3)\) for Dominating Set, Vertex Cover and Connected Dominating Set on such networks.
Chauhan, Ankit; Lenzner, Pascal; Melnichenko, Anna; Molitor, LouiseSelfish Network Creation with Non-Uniform Edge Cost. Symposium on Algorithmic Game Theory (SAGT) 2017: 160-172
Network creation games investigate complex networks from a game-theoretic point of view. Based on the original model by Fabrikant et al. [PODC'03] many variants have been introduced. However, almost all versions have the drawback that edges are treated uniformly, i.e. every edge has the same cost, and this common parameter heavily influences the outcomes and the analysis of these games. We propose and analyze simple and natural parameter-free network creation games with non-uniform edge cost. Our models are inspired by social networks where the cost of forming a link is proportional to the popularity of the targeted node. Besides results on the complexity of computing a best response and on various properties of the sequential versions, we show that the most general version of our model has constant Price of Anarchy. To the best of our knowledge, this is the first proof of a constant Price of Anarchy for any network creation game.
Large real-world networks typically follow a power-law degree distribution. To study such networks, numerous random graph models have been proposed. However, real-world networks are not drawn at random. Therefore, Brach, Cygan, Lacki, and Sankowski [SODA 2016] introduced two natural deterministic conditions: (1) a power-law upper bound on the degree distribution (PLB-U) and (2) power-law neighborhoods, that is, the degree distribution of neighbors of each vertex is also upper bounded by a power law (PLB-N). They showed that many real-world networks satisfy both deterministic properties and exploit them to design faster algorithms for a number of classical graph problems. We complement the work of Brach et al. by showing that some well-studied random graph models exhibit both the mentioned PLB properties and additionally also a power-law lower bound on the degree distribution (PLB-L). All three properties hold with high probability for Chung-Lu Random Graphs and Geometric Inhomogeneous Random Graphs and almost surely for Hyperbolic Random Graphs. As a consequence, all results of Brach et al. also hold with high probability or almost surely for those random graph classes. In the second part of this work we study three classical NP-hard combinatorial optimization problems on PLB networks. It is known that on general graphs with maximum degree \(\Delta\), a greedy algorithm, which chooses nodes in the order of their degree, only achieves a \(\Omega(\ln \Delta)\)-approximation for Minimum Vertex Cover and Minimum Dominating Set, and a \(\Omega(\Delta)\)-approximation for Maximum Independent Set. We prove that the PLB-U property suffices for the greedy approach to achieve a constant-factor approximation for all three problems. We also show that all three combinatorial optimization problems are APX-complete even if all PLB properties hold; hence, a PTAS cannot be expected unless P = NP.
Chauhan, Ankit; Lenzner, Pascal; Melnichenko, Anna; Münn, MartinOn Selfish Creation of Robust Networks. Symposium on Algorithmic Game Theory (SAGT) 2016: 141-152
Robustness is one of the key properties of nowadays networks. However, robustness cannot be simply enforced by design or regulation since many important networks, most prominently the Internet, are not created and controlled by a central authority. Instead, Internet-like networks emerge from strategic decisions of many selfish agents. Interestingly, although lacking a coordinating authority, such naturally grown networks are surprisingly robust while at the same time having desirable properties like a small diameter. To investigate this phenomenon we present the first simple model for selfish network creation which explicitly incorporates agents striving for a central position in the network while at the same time protecting themselves against random edge-failure. We show that networks in our model are diverse and we prove the versatility of our model by adapting various properties and techniques from the non-robust versions which we then use for establishing bounds on the Price of Anarchy. Moreover, we analyze the computational hardness of finding best possible strategies and investigate the game dynamics of our model.
We study structural aspects of randomized parameterized computation. We introduce a new class W[P]-PFPT as a natural parameterized analogue of PP. Our definition uses the machine based characterization of the parameterized complexity class W[P] obtained by Chen et al. [TCS 2005]. We translate most of the structural properties and characterizations of the class PP to the new class W[P]-PFPT. We study a parameterization of the polynomial identity testing problem based on the degree of the polynomial computed by the arithmetic circuit. We obtain a parameterized analogue of the well known Schwartz-Zippel lemma [Schwartz, JACM 80 and Zippel, EUROSAM 79]. Additionally, we introduce a parameterized variant of permanent, and prove its #W[1] completeness.
Algorithm Engineering
Our research focus is on theoretical computer science and algorithm engineering. We are equally interested in the mathematical foundations of algorithms and developing efficient algorithms in practice. A special focus is on random structures and methods.
|
In using the Zorn's lemma to show that every connected graph contains a spanning tree, we let $\{T_{\lambda}: \lambda \in \Lambda \}$ be a family of trees contained in $X$ which is totally ordered by inclusion. But how to show that $\bigcup\limits_{\lambda \in \Lambda} T_\lambda$ is a also a tree in $X$?
Actually, if the family is merely some family of trees rather than a chain (totally ordered by inclusion), $\bigcup\limits_{\lambda \in \Lambda} T_{\lambda}$ may not be a tree.
You have to show that every chain in your family of trees has an upper bound. So, given a chain $\{T_{\alpha} \mid \alpha \in A\}$, take the union $T=\bigcup\limits_{\alpha \in A} T_{\alpha}$. It is straightforward to show that this is a tree: you just have to show that, given any two vertices $v_1,v_2 \in T$, there is exactly one path between them.
(Proof by contradiction: assume there are two paths. Both paths are finite, so both are contained in some $T_{\alpha_0}$ of the chain; but $T_{\alpha_0}$ is a tree, so you get a contradiction.)
Now, since you have a poset where every chain has an upper bound, Zorn's lemma can be applied.
The argument here is one of the common arguments when using Zorn's lemma.
If $G$ is a graph which is not a tree, then it has a finite subgraph which witnesses that (a cycle is finite; connectivity is not an issue here, since any two vertices of the union already lie in a common $T_\lambda$).
If the chain was finite, then it has a maximal element. Therefore the union of the chain is that maximal element, and we are done. In fact we don't even care that the chain was finite, just that it had a maximal element.
If the chain doesn't have a maximal element, and the union is not a tree, then there is a finite subset which witnesses that. But that finite subset must have been added somewhere along the way. This counterexample then appears in some $T_\lambda$ in our chain. But we assumed that all the $T_\lambda$ are trees, so that is impossible.
Note that this is the same argument as the one showing that the increasing union of linearly independent sets is linearly independent, that the increasing union of filters is a filter, and that the increasing union of ideals is an ideal, etc. The key is that if the union weren't such an object, it would mean that somewhere along the union we added a counterexample -- which contradicts our assumptions.
|
A. OMAR
Articles written in Journal of Astrophysics and Astronomy
Volume 26 Issue 1 March 2005 pp 1-70
The GMRT
The galaxies are clustered into different sub-groups. The overall population mix of the group is 30% (E + S0) and 70% (Sp + Irr). The observations of 57 Eridanus galaxies were carried out with the GMRT for ≈ 200 h. HI emission was detected from 31 galaxies. The channel rms of ≈ 1 mJy beam⁻¹ was achieved for most of the image-cubes made with 4 h of data. The corresponding HI column density sensitivity (3σ) is ≈ 1 × 10²⁰ cm⁻² for a velocity-width of ≈ 13.4 km s⁻¹. The 3σ detection limit of HI mass is ≈ 1.2 × 10⁷ M⊙ for a line-width of 50 km s⁻¹. Total HI images, HI velocity fields, global HI line profiles, HI mass surface densities, HI disk parameters and HI rotation curves are presented. The velocity fields are analysed separately for the approaching and the receding sides of the galaxies. These data will be used to study the HI and the radio continuum properties, the Tully-Fisher relations, the dark matter halos, and the kinematical and HI lopsidedness in galaxies.
Volume 26 Issue 1 March 2005 pp 71-87
The HI content of galaxies in the Eridanus group is studied using the GMRT observations and the HIPASS data. A significant HI deficiency up to a factor of 2–3 is observed in galaxies in the high galaxy density regions. The HI deficiency in galaxies is observed to be directly correlated to the local projected galaxy density, and inversely correlated to the line-of-sight radial velocity. Furthermore, galaxies with larger optical diameters are predominantly in the lower galaxy density regions. It is suggested that the HI deficiency in Eridanus is due to tidal interactions. In some galaxies, evidences of tidal interactions are seen. An important implication is that significant evolution of galaxies can take place in the group environment. In the hierarchical way of formation of clusters via mergers of groups, a fraction of the observed HI deficiency in clusters could have originated in groups. The co-existence of S0s and severely HI deficient galaxies in the Eridanus group suggests that tidal interaction is likely to be an effective mechanism for transforming spirals to S0s.
Volume 26 Issue 1 March 2005 pp 89-102
The Eridanus galaxies follow the well-known radio-FIR correlation. The majority (70%) of these galaxies have star formation rates below that of the Milky Way. The galaxies that have a significant excess of radio emission are identified as low-luminosity AGNs based on their radio morphologies obtained from the GMRT observations. There are no powerful AGNs (L(20 cm) > 10²³ W Hz⁻¹) in the group. The two most far-infrared and radio luminous galaxies in the group have optical and HI morphologies suggestive of recent tidal interactions. The Eridanus group also has two far-infrared luminous but radio-deficient galaxies. It is believed that these galaxies are observed within a few Myr of the onset of an intense star formation episode after being quiescent for at least 100 Myr. The upper end of the radio luminosity distribution of the Eridanus galaxies (L(20 cm) ∼ 10²² W Hz⁻¹) is consistent with that of the field galaxies, other groups, and late-type galaxies in nearby clusters.
Volume 27 Issue 1 March 2006 pp 7-23
The Tully-Fisher (TF) or the luminosity-linewidth relations of the galaxies in the Eridanus group are constructed using the HI rotation curves and the luminosities in the optical and in the near-infrared bands. The slopes of the TF relations (absolute magnitude vs. the flat rotation velocity V_flat) are −8.6 ± 1.1, −10.0 ± 1.5, −10.7 ± 2.1, and −9.7 ± 1.3 in the R, J, H, and K bands respectively for galaxies having flat HI rotation curves. These values of the slopes are consistent with those obtained from studies of other groups and clusters. The scatter in the TF relations is in the range 0.5-1.1 mag in different bands. This scatter is considerably larger compared to those observed in other groups and clusters. It is suggested that the larger scatter in the TF relations for the Eridanus group is related to the loose structure of the group. If the TF relations are constructed using the baryonic mass (stellar + gas) versus V_flat, the slope is in the range 3.5-4.1.
Volume 34 Issue 3 September 2013 pp 247-257
The Devasthal Fast Optical Telescope (DFOT) is a 1.3 meter aperture optical telescope, recently installed at Devasthal, Nainital. We present here the first results using an Hα filter with this telescope on a Wolf-Rayet dwarf galaxy Mrk 996. The instrumental response and the Hα sensitivity obtained with the telescope are (3.3 ± 0.3) × 10⁻¹⁵ erg s⁻¹ cm⁻² per count s⁻¹ and 7.5 × 10⁻¹⁷ erg s⁻¹ cm⁻² arcsec⁻² respectively. The Hα flux and the equivalent width for Mrk 996 are estimated as (132 ± 37) × 10⁻¹⁴ erg s⁻¹ cm⁻² and ∼96 Å respectively. The star formation rate is estimated as 0.4 ± 0.1 M⊙ yr⁻¹. Mrk 996 deviates from the radio-FIR correlation known for normal star-forming galaxies, with a deficiency in its radio continuum. The ionized gas as traced by Hα emission is found in a disk shape which is misaligned with respect to the old stellar disk. This misalignment is indicative of a recent tidal interaction in the galaxy. We believe that galaxy-galaxy tidal interaction is the main cause of the WR phase in Mrk 996.
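For orientation, here is a back-of-envelope sketch of the flux-to-SFR conversion quoted above (my own illustration, not from the paper; it uses the standard Kennicutt (1998) Hα calibration, and the distance is an assumption since the abstract does not state one):

```python
from math import pi

F_Ha = 132e-14            # erg s^-1 cm^-2, the quoted Halpha flux
d_Mpc = 20.0              # assumed distance to Mrk 996 (not given in the abstract)
d_cm = d_Mpc * 3.086e24   # Mpc -> cm

L_Ha = 4 * pi * d_cm**2 * F_Ha   # luminosity, erg s^-1
SFR = 7.9e-42 * L_Ha             # Kennicutt (1998) calibration, Msun/yr
print(f"{SFR:.2f} Msun/yr")      # ~0.5, the same order as the quoted 0.4 +/- 0.1
```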
Volume 40 Issue 2 April 2019 Article ID 0009
We report optical observations of TGSS J1054 $+$ 5832, a candidate high-redshift ($z = 4.8 \pm 2$) steep-spectrum radio galaxy, in $r$ and $i$ bands, using the faint object spectrograph and camera mounted on the 3.6-m Devasthal Optical Telescope (DOT). The source, previously detected at 150 MHz from the Giant Metrewave Radio Telescope (GMRT) and at 1420 MHz from the Very Large Array, has a known counterpart in near-infrared bands with a $K$-band magnitude of AB 22. The source is detected in the $i$-band at AB 24.3 $\pm$ 0.2 magnitude in the DOT images presented here. The source remains undetected in the $r$-band image at a 2.5$\sigma$ depth of AB 24.4 mag over a $1.2^{\prime\prime}\times 1.2^{\prime\prime}$ aperture. An upper limit to the $i-K$ color is estimated to be $\sim$2.3, suggesting youthfulness of the galaxy with active star formation. These observations highlight the importance and potential of the 3.6-m DOT for detections of faint galaxies.
|
What is an Adjunction? Part 2 (Definition)
Last time I shared a light introduction to adjunctions in category theory. As we saw then, an adjunction consists of a pair of opposing functors $F$ and $G$ together with natural transformations $\text{id}\to GF$ and $FG\to\text{id}$. We compared this to two stricter scenarios: one where the composite functors
equal the identities, and one where they are naturally isomorphic to the identities. The first scenario defines an isomorphism of categories. The second defines an equivalence of categories. An adjunction is third on the list.
In the case of an adjunction, we also ask that the natural transformations—called the
unit and counit—somewhat behave as inverses of each other. This explains why the ${\color{red}\text{arrows}}$ point in opposite directions. (It also explains the "co.") Except, they can't literally be inverses since they're not composable: one involves morphisms in $\mathsf{C}$ and the other involves morphisms in $\mathsf{D}$. That is, their (co)domains don't match. But we can fix this by applying $F$ and $G$ so that (a modified version of) the unit and counit can indeed be composed. This motivates the formal definition of an adjunction. The Definition
Here it is:
Definition: An adjunction between categories $\mathsf{C}$ and $\mathsf{D}$ is a pair of functors $F\colon\mathsf{C}\to\mathsf{D}$ and $G\colon \mathsf{D}\to\mathsf{C}$ together with natural transformations $\eta\colon \text{id}_\mathsf{C}\to GF$, called the unit, and $\epsilon\colon FG\to\text{id}_\mathsf{D}$, called the counit, so that for all objects $X$ in $\mathsf{C}$ and $Y$ in $\mathsf{D}$ the two triangles below commute. When $F$ and $G$ are part of an adjunction, we'll write $F\dashv G$ and say that $F$ and $G$ are adjoint functors, with $F$ being the left adjoint of $G$ and $G$ being the right adjoint of $F$.
There are a couple things to unwind here. First, how should we understand the unit $\eta$ and counit $\epsilon$? Second, why the names "left/right adjoint"?
Let's address the first question first.
Bringing the Definition to Life
I'll start by sharing a little schema that's used over and over again in various guises throughout mathematics:
We often start with a set $B$. Example: take $B$ to be a set with three elements. The elements in $B$ are then used as building blocks to construct a bigger mathematical object $FB$, which contains the original $B$ and more. Example: take $FB$ to be the three-dimensional real vector space with basis $B$. As a consequence, we observe that whenever another object also "contains" $B$, it automatically contains $FB$, too. Example: if $V$ is any vector space and there is a mapping $f$ from $B$ to $V$, then you automatically have a linear transformation $FB\to V$. It's obtained by extending $f$ linearly.
This is a typical kind of extension problem. You have a little set $B$. You build a bigger object $FB$ with it. You say, "Yikes, $FB$ is large and messy. How can I ever define a map
out of it?" Then you breathe a sigh of relief. You need only define your map on the smaller, more manageable $B$. The rest takes care of itself.
I've illustrated the schema above with a standard situation from linear algebra. When you want to define a linear map between vector spaces, it's always enough to define the map on the basis vectors of the domain space, rather than on every single vector. Since the map must be linear, you know what it
must be on an arbitrary vector once you know what it is on the basis vectors.
Be careful, though.
In Step 3 above, I wrote "...a mapping $f$ from $B$ to $V$..." This is vague. What kind of a morphism is $f$? Is it a function? Is it a linear transformation? Since $B$ is a set and $V$ is a set-with-extra-data (remember, a vector space is a
set together with other things), $f$ should probably just be a function between sets.
The category theory confirms this. Indeed, lurking behind our three-step schema is an adjunction. En route to unveiling it, now's a good time to know that the unit natural transformation $\eta\colon \text{id}_{\mathsf{C}}\to GF$ of any adjunction $F\dashv G$ always satisfies the following property: for every object $X$ in $\mathsf{C}$, every object $Y$ in $\mathsf{D}$, and every morphism $f\colon X\to GY$, there is exactly one morphism $\hat{f}\colon FX\to Y$ such that $f = G\hat{f}\circ \eta_X$.
What does this mean? Let's relate it back to our linear algebra example. Suppose $\mathsf{C}=\mathsf{Vect}_{\mathbb{R}}$ is the category of real vector spaces, and $\mathsf{D}=\mathsf{Set}$ is the category of sets. Define $F\colon\mathsf{Set}\to\mathsf{Vect}_{\mathbb{R}}$ to be the functor that assigns to a set $B$ the real vector space $FB$ whose basis is $B$. (If you want to be fancy, you can refer to $FB$ as the "free $\mathbb{R}$-module on $B$.") Define $U\colon \mathsf{Vect}_{\mathbb{R}}\to\mathsf{Set}$ to be the functor sending any vector space $V$ to the underlying set $UV$ of vectors in $V$. (The letter "$U$" is for
underlying.) That is, $U$ totally forgets the vector space data of $V$ and views it as a set. I'll let you think about what these functors should do to morphisms. Then $F$ and $U$ are part of an adjunction called a free-forgetful adjunction.
The unit $\eta\colon \text{id}_{\mathsf{Set}}\to UF$ of this adjunction is a natural transformation consisting of a function $\eta_B\colon B \to UFB$ for each set $B$. This function simply includes the set $B$ into the underlying set of the vector space $FB$. For example, if $B$ is the three-element set $\{x,y,z\}$ then $\eta_B$ injects it into the set whose elements are all linear combinations of $x,y$ and $z$, which we can simply think of as $\mathbb{R}^3$.
Moreover, each function $\eta_B$ satisfies the property introduced above:
This is exactly the three-step schema described above, written in math-speak. We start with a set $B$. We use it to build a vector space $FB$. This vector space naturally contains the original set $B$ by way of the inclusion $\eta_B\colon B\to UFB$. And any time another vector space "contains" $B$ via some $f\colon B\to UV$, it must "contain" $FB$, too, by way of $U\hat{f}\colon UFB \to UV$.
Notice the clarity of the language: In the schema, I made vague reference to "...a mapping $f$ from $B$ to $V$." The category theory explicitly places the discussion in the category of sets: $f$ is a
function from the set $B$ to the set $UV$. The property above tells us that to every such function, there is exactly one linear map $\hat{f}\colon FB\to V$ so that $f=U\hat{f}\circ \eta_B$, which is to say that $\hat{f}$, as a linear map, agrees with $f$ on basis elements. In other words, $\hat{f}$ is the unique map that extends $f$ linearly from the basis set $B$ to the entire vector space $FB$. That this property is satisfied by the unit of the adjunction is precisely why we only need to define linear maps on basis elements.
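To make the extension property concrete, here is a small numpy sketch (my own illustration, with a made-up basis and target space): the unique linear extension $\hat{f}$ is just the matrix whose columns are the $f$-images of the basis vectors.

```python
import numpy as np

# Hypothetical example: B = {x, y, z}, a basis for FB = R^3.
# A set-function f: B -> UV assigns each basis element a vector in V = R^2.
f = {"x": np.array([1.0, 0.0]),
     "y": np.array([2.0, 1.0]),
     "z": np.array([0.0, 3.0])}

basis = ["x", "y", "z"]

# The unique linear extension f_hat: FB -> V is the matrix whose
# columns are the images of the basis vectors.
f_hat = np.column_stack([f[b] for b in basis])

# f_hat agrees with f on basis elements (i.e. composed with the unit eta_B):
e_x = np.array([1.0, 0.0, 0.0])        # eta_B(x), viewed in FB
assert np.allclose(f_hat @ e_x, f["x"])

# ...and is determined on every other vector of FB by linearity:
w = np.array([2.0, -1.0, 4.0])         # the vector 2x - y + 4z
print(f_hat @ w)                       # equals 2 f(x) - f(y) + 4 f(z)
```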
There are other free-forgetful adjunctions outside of linear algebra. Free groups, free monoids, free modules over a ring, free rings on an abelian group, etc. all fit into the same story. More generally, whenever some functor "forgets" some data or structure and has a left adjoint, that left adjoint will have a "free" flavor to it.
What about the counit?
So far the discussion has been about the unit of an adjunction. I'll say a few brief words about the counit of our example, as counits of general adjunctions share a similar story.
The counit $\epsilon\colon FU\to \text{id}_{\mathsf{Vect}_\mathbb{R}}$ of our free-forgetful adjunction $F\dashv U$ is a natural transformation consisting of linear transformations $\epsilon_V\colon FUV\to V$, where $V$ is a vector space. Note that $UV$ is the
set of all vectors in $V$, and $FUV$ is the vector space with one basis vector for each element in $UV$. So $FUV$ is massive! The linear map $\epsilon_V$ takes linear combinations of elements in the set $UV$—which are just vectors in $V$—and simply views them as vectors in $V$. As an example, if $V$ has basis $\mathbf{x},\mathbf{y},\mathbf{z}$ then:
So the counit here is saying, "A linear combination of linear combinations of vectors in $V$ is itself a vector in $V$, so just view, or
evaluate, that sum as that vector." What's more, each linear map $\epsilon_V$ satisfies the following property: for every set $B$ and every linear map $g\colon FB\to V$, there is exactly one function $\hat{g}\colon B\to UV$ such that $g = \epsilon_V\circ F\hat{g}$.
This property says that, since linear combinations of linear combinations of vectors in $V$ are themselves vectors in $V$, maps into $V$ are what you think they are.
In general, the letter $\epsilon$ should remind you of "e" for "evaluation," as counits of adjunctions often have a "just evaluate the obvious thing" kind of flavor.
Repackaging the Definition
Thus far we've taken a closer look at the unit and counit of an adjunction. But what about the
name? Why are the functors "adjoints"? And why do they come in "left" and "right" versions? The answer lies in a repackaging of the definition. Here is a different, but equivalent, way to understand adjunctions: Definition: An adjunction between categories $\mathsf{C}$ and $\mathsf{D}$ is a pair of functors $F\colon\mathsf{C}\to\mathsf{D}$ and $G\colon \mathsf{D}\to\mathsf{C}$ together with a bijection $\text{hom}_{\mathsf{D}}(FX,Y)\cong \text{hom}_{\mathsf{C}}(X,GY)$ for all objects $X$ in $\mathsf{C}$ and $Y$ in $\mathsf{D}$, which is natural in both $X$ and $Y$. Call the image $\hat{f}$ of a map $f$ under this bijection the adjunct or transpose of $f$.
"Natural in both $X$ and $Y$" means we require $\text{hom}_{\mathsf{D}}(FX,-)\cong \text{hom}_{\mathsf{C}}(X,G-)$ to be a natural isomorphism for each $X$ and $\text{hom}_{\mathsf{D}}(F-,Y)\cong \text{hom}_{\mathsf{C}}(-,GY)$ to be a natural isomorphism for each $Y$.
The upshot is that $F\dashv G$ if maps $FX\to Y$ are the same as maps $X\to GY$. In our free-forgetful adjunction, the bijection $\text{hom}_{\mathsf{Vect}_\mathbb{R}}(FB,V)\cong \text{hom}_{\mathsf{Set}}(B,UV)$ says that there is a one-to-one correspondence between functions $B\to UV$ and linear transformations $FB\to V$. If you have one, then you can get the other.
Hopefully this sheds light on the notation $F\dashv G$. In the bijection above, $F$ appears on the left and is called the
left adjoint of $G$, while $G$ appears on the right and is called the right adjoint of $F$. Moreover, the isomorphism $\text{hom}_{\mathsf{D}}(FX,Y)\cong \text{hom}_{\mathsf{C}}(X,GY)$ looks almost identical to the property that a linear map between Hilbert spaces $f\colon V\to W$ shares with its adjoint $\hat{f}\colon W\to V$: $$\langle f\mathbf{v},\mathbf{w}\rangle = \langle \mathbf{v},\hat{f}\mathbf{w}\rangle \qquad\text{for all $\mathbf{v}\in V$ and $\mathbf{w}\in W$}$$ which explains the terminology.
(Unfortunately, I'm not aware of a way to view Hilbert spaces as categories so that linear maps and their adjoints are literal categorical adjunctions. But see John Baez's "Higher-Dimensional Algebra II: 2-Hilbert Spaces" for a categorification of the situation.)
I'll leave you to verify that the two definitions of adjunctions are equivalent, as claimed. Here are some hints:
The transpose of the identity of $FX$ under the isomorphism $\text{hom}_{\mathsf{D}}(FX,FX)\overset{\cong}{\longrightarrow} \text{hom}_{\mathsf{C}}(X,GFX)$ is the unit $\eta$ of the adjunction. The transpose of the identity of $GY$ under the isomorphism $\text{hom}_{\mathsf{D}}(FGY,Y) \overset{\cong}{\longleftarrow}\text{hom}_{\mathsf{C}}(GY,GY)$ is the counit $\epsilon$ of the adjunction. Pay careful attention to the naturality conditions appearing in each definition!
With these hints, you can also verify that the unit and counit do indeed satisfy the universal properties that we explored above.
This wraps up our investigation into the formal definition of categorical adjunctions. In the next post, I'll share some examples that appear in both pure and applied settings.
|
What is an Adjunction? Part 3 (Examples)
Welcome to the last installment in our mini-series on adjunctions in category theory. We motivated the discussion in Part 1 and walked through formal definitions in Part 2. Today I'll share some examples. In Mac Lane's well-known words, "adjoint functors arise everywhere," so this post contains only a tiny subset of examples. Even so, I hope they'll help give you an eye for adjunctions and enhance your vision to spot them elsewhere.
An adjunction, you'll recall, consists of a pair of functors $F\dashv G$ between categories $\mathsf{C}$ and $\mathsf{D}$ together with a bijection of sets, as below, for all objects $X$ in $\mathsf{C}$ and $Y$ in $\mathsf{D}$.
In Part 2, we illustrated this bijection using a free-forgetful adjunction in linear algebra as our guide. So let's put "free-forgetful adjunctions" first on today's list of examples.
Free-Forgetful Adjunctions
Whenever a functor $U\colon \mathsf{D}\to\mathsf{C}$ ignores some data or structure in $\mathsf{D}$ and has a left adjoint $F\colon \mathsf{C}\to\mathsf{D}$, the left adjoint will have a "free" flavor. Since the right adjoint is "forgetful" (this does not have an official definition), such an adjunction $F\dashv U$ is called a
free-forgetful adjunction.
Last week we saw this with sets and real vector spaces. Another illustration lies in the connection between directed graphs and categories. Both involve vertices/objects and edges/morphisms. So how exactly are they related? Every directed graph
gives rise to a category, and every category is a directed graph (with extra data). More formally, there is an adjunction involving the category $\mathsf{DirGraph}$ of directed graphs and the category $\mathsf{Cat}$ of categories.
In the picture above, the functor $F$ turns a graph $G$ into a category $FG$ by viewing vertices as objects and edges as morphisms. It also inserts identity arrows at each vertex, and declares the set of morphisms between two vertices to be the set of all finite paths between them. Composition is then concatenation of paths. On the other hand, the functor $U$ assigns to a category $\mathsf{C}$ its underlying graph $U\mathsf{C}$. It just forgets the identity and composition axioms, which aren't needed to specify a graph. The bijection enjoyed by this adjunction is $\text{hom}_{\mathsf{Cat}}(FG,\mathsf{C})\cong\text{hom}_{\mathsf{DirGraph}}(G,U\mathsf{C})$,
which says something along the lines of, "If you'd like to view a graph $G$ as a diagram in some category $\mathsf{C}$, then you're in luck, because there's exactly one way to turn that graph into a category $FG$ so that the diagram $G$ in $\mathsf{C}$ becomes a genuine functor $FG\to\mathsf{C}$."
Product-Hom Adjunction
The next example gives a nice categorical relationship between multiplication and exponentiation. Early in life, one learns that $x^{y\times z}=(x^y)^z$ holds whenever $x,y,z$ are numbers. Later in life, one learns that this holds for
sets, too:
This is called the
product-hom adjunction. To unravel it, let's use the notation $X^Y$ to mean the set of functions from $Y$ to $X$, that is $X^Y:=\text{hom}(Y,X)$. This is nice, since if $X$ has 2 elements and $Y$ has 3 elements then there are exactly 8 functions from $Y$ to $X$, i.e. $|\text{hom}(Y,X)|=2^3=|X|^{|Y|}$.
Now, how is the above bijection an adjunction? For any set $Y$ there is a functor $Y\times -\colon\mathsf{Set}\to\mathsf{Set}$ that assigns to a set $Z$ the Cartesian product $Y\times Z$. There is another functor $\hom(Y,-)\colon \mathsf{Set}\to\mathsf{Set}$ that assigns to a set $Z$ the set of all functions $\text{hom}(Y,Z)$. Then $Y\times -$ is left adjoint to $\text{hom}(Y,-)$.
In other words, the bijection below holds for all sets $X$ and $Z$.
Indeed, every function $f\colon Y\times Z\to X$ gives rise to a function $\hat{f}_z\colon Y\to X$ by fixing a variable $z\in Z$, namely $\hat{f}_z(y):=f(y,z)\in X$. Likewise, any function $g\colon Z\to X^Y$ gives rise to a function $\hat{g}\colon Y\times Z\to X$ by $\hat{g}(y,z):=g(z)(y)$. In computer science, you'll recognize this as
currying.
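Here is the same bijection written as executable Python (a small sketch of my own; the convention of fixing $z$ first matches the formulas above):

```python
def curry(f):
    """hom(Y x Z, X) -> hom(Z, hom(Y, X)): fix z, get a function of y."""
    return lambda z: lambda y: f(y, z)

def uncurry(g):
    """hom(Z, hom(Y, X)) -> hom(Y x Z, X): the inverse direction."""
    return lambda y, z: g(z)(y)

f = lambda y, z: y * z        # some function Y x Z -> X
g = curry(f)
assert f(3, 4) == g(4)(3) == uncurry(g)(3, 4)   # the bijection round-trips
```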
Other areas of math have their own version of the product-hom adjunction. For instance, if $X,Y,Z$ are topological spaces with chosen basepoints, then there is a "based" version of the Cartesian product of spaces called the
smash product, denoted by a wedge $\wedge$. For example, "multiplying" two circles with the Cartesian product results in a torus, $S^1\times S^1$. But if you further smash the two circles together, then you'll get a sphere. So the smash product of circles $S^1\wedge S^1$ is a sphere. Here's a nice gif from Wikipedia:
So an analogue of the product $\times$ is the smash product $\wedge$, and the analogous adjunction $(Y\wedge-)\dashv \text{hom}(Y,-)$ is called the
smash-hom adjunction. In the special case when $Y=S^1$ is the circle, the two functors $S^1\times -$ and $\text{hom}(S^1,-)$ are called the suspension and loop functors and the resulting adjunction is the suspension-loop adjunction. It appears in a nice one-line proof that the fundamental group of the circle is $\mathbb{Z}$. Galois Connections
The next adjunction we'll consider is called a Galois connection. This is my favorite example because it subsumes so many phenomena in mathematics. A Galois connection is, simply put, an adjunction between functors on posets.
I'll explain. First, know that
every poset (partially ordered set) is a category. A poset is a set $P$ in which a partial order $\leq$ has been defined. As a category, the objects are the elements in $P$ and there is exactly one morphism $p\to p'$ whenever $p\leq p'$. In particular, there is at most one arrow between any two elements, that is, $\text{hom}(p,p')$ is always a set with either 0 or 1 elements. Using the definition of a partial order, you can verify that the axioms of a category are indeed satisfied.
A function $f\colon P\to Q$ between posets that preserves the order—meaning it satisfies $fp\leq fp'$ whenever $p\leq p'$—is called a
monotone function. Crucially, a monotone function is precisely a functor when we view the posets as categories. (Below we'll be interested in a function $f$ that's order-reversing, so that $fp\geq fp'$ whenever $p\leq p'$. It's still a functor—it's just a contravariant one.)
In this general setting, an adjunction consists of opposing monotone functions $f\colon P\to Q$ and $g\colon Q\to P$ that satisfy $$fp\leq q \quad\text{if and only if}\quad p\leq gq$$ for all $p\in P$ and $q\in Q$.
Lots of things you might care about are posets, so there are numerous Galois connections throughout mathematics. Here's one example I especially enjoy:
Formal Concept Analysis
Given a set $X$ consider the power set $2^X$, i.e. the set of all subsets of $X$. It's a poset by inclusion: $A\leq B$ if and only if $A\subseteq B\subseteq X$. So in particular, it's a category. Now here's a nice fact I like:
Any relation $R$ on $X\times Y$ defines a Galois connection.
A
relation is another name for a subset $R\subseteq X\times Y$. If, for example, $X$ is a set of animals and $Y$ is a set of features, then $R$ could be the set of all pairs $(x,y)$ such that animal $x$ possesses feature $y$. Naturally, we might be interested in subsets of animals possessing certain features, and vice versa. This motivates the following two functions, $f$ and $g$: $$f(A) = \{y\in Y : (x,y)\in R \text{ for all } x\in A\}, \qquad g(B) = \{x\in X : (x,y)\in R \text{ for all } y\in B\}.$$
These functions are order-reversing (as you can check) and they satisfy the following: $$B\subseteq f(A) \quad\text{if and only if}\quad A\subseteq g(B).$$
Right away, you'll notice this isn't
quite the adjunction condition specified above: $f$ and $g$ both appear on the right-hand side of the subset containments! No worries. This is another flavor of adjunction: $f$ and $g$ are called mutually right adjoints, and this is an example of what's sometimes called an antitone Galois connection.
As an aside, pairs of subsets $(A,B)\in 2^X\times 2^Y$ for which equality above holds—i.e. $fA=B$ and $A=gB$—have a special name: they're called
formal concepts. They are the focal point of interest in formal concept analysis, a nice part of order theory dealing with hierarchy of concepts in data. For more on formal concepts and category theory, you might be interested in this blog series by Simon Willerton on the $n$-Category Café.
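As a small executable illustration (the toy data and names are mine, not from the post), here are the two derivation operators and a check of the formal-concept condition:

```python
# Hypothetical toy context: animals x features.
X = {"eagle", "penguin", "goldfish"}
Y = {"flies", "swims", "has_feathers"}
R = {("eagle", "flies"), ("eagle", "has_feathers"),
     ("penguin", "swims"), ("penguin", "has_feathers"),
     ("goldfish", "swims")}

def f(A):
    """Features shared by every animal in A."""
    return {y for y in Y if all((x, y) in R for x in A)}

def g(B):
    """Animals possessing every feature in B."""
    return {x for x in X if all((x, y) in R for y in B)}

A = {"eagle", "penguin"}
print(f(A))            # {'has_feathers'}
print(g(f(A)))         # {'eagle', 'penguin'}
# (A, f(A)) is a formal concept exactly when g(f(A)) == A:
print(g(f(A)) == A)    # True for this A
```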
Galois connections arising from relations are just
one example. There are many more, including: the connection between fields and groups in Galois theory (from which this adjunction derives its name); the connection between the floor function $\lfloor -\rfloor$ and the ceiling function $\lceil -\rceil$; the connection between covering spaces and fundamental groups in topology; the connection between polynomials and their roots in algebraic geometry; and the connection between syntax and semantics à la William Lawvere.
The list goes on, and the Wikipedia entry showcases most of these examples and more.
Data Migration
Our last example of adjunctions comes from applied category theory, namely
data migration. Today's post is already quite long, so I'll try to keep this brief. A database—tables of information—can be represented by a directed graph, $G$. The columns are vertices and an edge is a relationship between columns. In the airline example below, the column of "Economy Seats" corresponds to one vertex, which is connected to "Price" since every seat has a cost associated to it.
The graph keeps track of the database's "syntax." To reinstate the actual data, we need to attach meaning. We need, for example, a principled way of "imagining the leftmost vertex represents the set of economy seats." To do this, we take the
free category $\mathsf{G}:=FG$; a database is then encoded by a functor $\mathsf{G}\to \mathsf{Set}$.
Now suppose we have another database, whose graph $H$ gives rise to a category $\mathsf{H}:=FH$, and suppose the two databases are related so that there is a functor $J\colon \mathsf{H}\to\mathsf{G}$. (Perhaps one database is a more detailed version of another.) Asking for a
migration of data from $G$ to $H$ amounts to asking for a functor $\mathsf{H}\to\mathsf{Set}$ given a functor $\mathsf{G}\to\mathsf{Set}$. Is it possible? Sure! Just precompose with $J$. This defines a nice way to get from the category $\mathsf{Set}^\mathsf{G}$ of databases (functors) on $G$ to the category $\mathsf{Set}^\mathsf{H}$ of databases (functors) on $H$:
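In code, this precomposition is just a relabeling. Here is a toy Python sketch of my own, with made-up tables (in the applied-category-theory literature this pullback functor is often written $\Delta_J$):

```python
# A functor G -> Set on objects, encoded as: object of G |-> set of rows.
data_on_G = {
    "EconomySeat": {"12A", "12B", "14C"},
    "Price":       {99, 149},
}

# A functor J: H -> G, encoded on objects as a dict (hypothetical schema map).
J = {"Seat": "EconomySeat", "Cost": "Price"}

# Migration by precomposition: a functor H -> Set.
data_on_H = {h: data_on_G[J[h]] for h in J}
print(data_on_H)   # {'Seat': {...}, 'Cost': {99, 149}}
```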
But can we migrate data in the other direction? Given a functor $\mathsf{H}\to \mathsf{Set}$, can we use $J$ to
extend it to a functor out of $\mathsf{G}$? For this, we turn to Kan extensions—the name given to solutions to this kind of extension problem. I use the plural here because there are both left and right Kan extensions. (This is a consequence of the fact that morphisms have direction: left or right.) There's a lot of rich theory behind this (you'll need to know about limits and colimits—I've written an introduction here!), but the upshot is that the two Kan extensions provide two ways to migrate data from $H$ to $G$. Moreover, they define two adjunctions:
I've gone through all this rather quickly, but I think it gives a nice glimpse of applied category theory. A thorough explanation can be found in David Spivak's Category Theory for the Sciences and in the newer An Invitation to Applied Category Theory with Brendan Fong, which is also free on the arXiv as
Seven Sketches in Compositionality. (See Section 3.4, "Adjunctions and Data Migration," from which I borrowed the example above.)
|
From the universal law of gravitation, the gravitational force exerted on a body of mass m by another body of mass M is $$ F = \frac{GMm}{x^2} $$ where x is the distance between the centres of the two objects.
So, the work done by the gravitational force in bringing the object of mass m from infinity to a distance r from the centre of the body of mass M is $$ W = \int_\infty^r \vec{F}(x)\cdot\vec{dx}$$ $$=\int_\infty^r \frac{GMm}{x^2}\hat x\cdot\vec{dx}$$ (where $\hat x$ is the unit vector in the direction in which the body of mass M attracts the body of mass m, i.e. the direction of $\vec{dx}$, which makes the angle between the two vectors $0$) $$ =\int_\infty^r \frac{GMm}{x^2}\,dx\,\cos 0$$
$$ = - GMm\left(\frac{1}{r}-\frac{1}{\infty}\right) $$ $$= -\frac{GMm}{r}$$
Now, we know that $$W=-(∆U)$$ $$-\frac{GMm}{r} = -(U_r - U_\infty)$$ $$-\frac{GMm}{r} = (U_\infty - U_r)$$ Since, Zero of potential energy is at infinity by convention, so $U_\infty$ = 0 $$-\frac{GMm}{r} = -U_r$$ $$\frac{GMm}{r} = U_r$$
I get potential energy at a distance r as positive, then why is it that gravitational potential energy is $$-\frac{GMm}{r}$$
What is wrong in my derivation?
|
I'm currently in the process of familiarising myself with some basic concepts of general relativity and have stumbled upon a problem that is probably quite simple. I'm referring to the book by Hobson, p.541 and 548, where the energy-momentum tensor for a simple matter field action $S$, $$T_{\mu\nu} = \frac{2}{\sqrt{-\det g}}\frac{\delta S}{\delta g^{\mu\nu}},$$ is calculated as an example:
$$ S = \int d^4x\sqrt{-\det g}(\frac{1}{2}g^{\mu\nu}(\nabla_\mu \Phi)(\nabla_\nu \Phi)-V(\Phi)) $$ $$ \delta(\det g) = \det g^{\mu\nu}\delta g_{\mu\nu} = -\det g g_{\mu\nu}\delta g^{\mu\nu} \Rightarrow \delta\sqrt{-\det g} = -\frac{1}{2}\sqrt{-\det g}g_{\mu\nu}\delta g^{\mu\nu} $$ \begin{eqnarray} \delta S &=& \int d^4x[\sqrt{-\det g}\frac{1}{2}\delta g^{\mu\nu}(\nabla_\mu \Phi)(\nabla_\nu \Phi) + \delta(\sqrt{-\det g})(\frac{1}{2}g^{\alpha\beta}(\nabla_\alpha \Phi)(\nabla_\beta \Phi)-V(\Phi))] \\ &=& \int d^4x\sqrt{-\det g}\frac{1}{2}[(\nabla_\mu \Phi)(\nabla_\nu \Phi) -g_{\mu\nu}(\frac{1}{2}g^{\alpha\beta}(\nabla_\alpha \Phi)(\nabla_\beta \Phi)-V(\Phi))]\delta g^{\mu\nu} \end{eqnarray}
and one has $T_{\mu\nu} = [...]_{\mu\nu}$
My problem now is: if I calculate $$T^{\mu\nu} = \frac{2}{\sqrt{-\det g}}\frac{\delta S}{\delta g_{\mu\nu}}$$ in the same way, using the first term for $\delta\sqrt{-\det g}$ instead of the second, I find the same term for $T$ as before (with the indices up, of course), but with a plus instead of a minus between the derivatives-term and the Lagrangian-term in $T$.
But this is wrong, isn't it? So what am I doing wrong?
|
The motivation for studying relativistic dynamics comes from thinking about conservation of the standard forms of energy and momentum with our new relativistic dynamics. It is easy to demonstrate that $mv$ cannot be conserved in all inertial frames of reference in special relativity. Consider two balls of equal mass colliding inelastically with equal speed $v$ in opposite directions, $+v$ and $-v$. They smash into each other and remain stationary.
Now boost into one of the balls' frames, say $v$. Now the velocity of the other ball is $2v/(1+v^2)$, so the total initial momentum is $-2mv/(1+v^2)$. But after the collision, we see the thing moving at a velocity of $-v$ (we know this because it was 0 in the original frame), which means the final total momentum is $-2mv$, so momentum is not conserved.
But we don't like this! If this expression isn't conserved, we can't use it so nicely in calculations and stuff. We want to define momentum in a way that it is conserved. Similar arguments can be used to show that $mv^2/2$ is not conserved, either.
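A quick numerical check of this failure, in units where $c=1$ (my own snippet, not part of the original notes):

```python
def add_velocities(u, v):
    """Relativistic velocity addition, with c = 1."""
    return (u + v) / (1 + u * v)

m, v = 1.0, 0.6
# Boost into the frame of the ball moving at +v:
u_other = add_velocities(-v, -v)      # the other ball moves at -2v/(1+v^2)
p_before = m * u_other                # Newtonian momentum before the collision
p_after  = 2 * m * (-v)               # merged lump moves at -v in this frame
print(p_before, p_after)              # -0.882... vs -1.2: mv is not conserved
```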
You may try to derive a conserved expression via arguments similar to the symmetry-based arguments we use in non-relativistic mechanics, swapping Galilean symmetry for Lorentz symmetry where appropriate. The resulting functional equations would be ludicrously complicated, though, and we'd much rather use a different kind of argument.
We've made several arguments so far based on known properties of light, and it would make sense to assume other, quantum mechanical properties of light as well. Two such properties are:
$$\begin{array}{l}p = hf/c\\E = hf\end{array}$$
This means that we know the behavior of $p$ and $E$ at low velocities, as well as at velocities close to the speed of light. Surely, we're smart enough to fill in the stuff in between?
Consider the following set-up: a stationary mass $m$ lets out two equal flashes of light in opposite directions, each with energy = momentum (since $c=1$) $E/2$. We then analyse the same set-up from a boosted reference frame with velocity $v$. This involves a doppler shift in the frequency of each light beam.
We'll consider this set-up in the following three examples:
(a) $v$ is small, momentum conservation. We first consider the case where $v$ is small enough to allow the usage of non-relativistic mechanics. Formally, this means taking the limit as $v\to0$.
Then the doppler shift factor $\sqrt{\frac{1+v}{1-v}}$ approaches $1+v$ and $\sqrt{\frac{1-v}{1+v}}$ approaches $1-v$. Both energy and momentum are scaled by the same factor since they're proportional to frequency. Now you know why we choose momentum conservation instead of energy conservation -- the total energy is clearly conserved anyway.
The reason we consider low velocities is that we know the formula for momentum must reduce to the Newtonian $p=mv$, i.e. the initial momentum of the system was $-mv$. The total momentum of the two flashes of light is $((1+v)E/2-(1-v)E/2)=vE$. Since momentum must be conserved, this means the momentum of the mass itself is no longer $-mv$. But its velocity is constant, and still low, so this means some of the mass must have been converted into the energy of the photons. Specifically,
$$-m_fv-(-m_iv)=vE$$
Giving us the celebrated equation
$$E=m$$
Where $m$ is the amount of mass that was converted into energy. You could, of course, write this in inelegant ways such as $E=c^2m$ or even $E=mc^2$.
Note that this change in mass is not linked to the whole "relativistic mass" thing we'll be doing later. This decrease in mass is absolute: mass is not conserved, the decrease is also seen in the rest frame, and it is required to produce that bit of energy. It's only the derivation that requires boosting into another reference frame, to ensure conservation in all reference frames.
On a related note, note that conserved and invariant are by no means the same thing, or even related. A quantity is conserved if it doesn't change with time when taken over the whole system. It is invariant if it is the same in all reference frames. The difference isn't even subtle -- proper mass is an invariant in special relativity, but energy and momentum are conserved.
Something to think about: why doesn't our argument work in a non-relativistic frame? I mean, we even assumed that $v$ is small. Try to perform the same arguments without relativity -- you will see that since there is no relativistic doppler shift, the result will have a unit of mass being worth an infinite amount of energy -- something you get in the limit $c\to\infty$ -- useless anyway.
(b) $v$ is not small, energy conservation. We said the decrease in mass exists in all reference frames. If we found what exactly the decrease in mass $\Delta m$ is in each reference frame, then we'd be able to see how mass transforms under a Lorentz transformation.
In the rest frame, energy $E$ is released, therefore by energy conservation the energy (or equivalently, the mass) of the object decreases by $E$.
In the moving frame, one of the beams transforms as $\sqrt {\frac{{1 + v}}{{1 - v}}} \frac{E}{2}$ while the other transforms as $\sqrt {\frac{{1 - v}}{{1 + v}}} \frac{E}{2}$. So the total energy released (i.e. the energy loss of the object) is:
$$\left( {\sqrt {\frac{{1 - v}}{{1 + v}}} + \sqrt {\frac{{1 + v}}{{1 - v}}} } \right)\frac{E}{2} = \gamma E$$
So the mass has transformed as $\gamma m$ under a Lorentz boost of significant velocity.
We call this mass the "relativistic mass" $M$, and distinguish it from the rest mass $m$.
Then the following are immediately true:
$E = m$ is only true when an object is at rest. In general, $E = \gamma m$. We may call $E_0=m$ the rest energy. $E=M$. $M=\gamma m$. The increase in mass is essentially the kinetic energy. One may Taylor (or binomially) expand $m/\sqrt{1-v^2}$ to see that the terms start as $m+\frac12mv^2+\frac38mv^4+...$, and the higher-order terms vanish at low speeds. Therefore the relativistic kinetic energy is $M-m=(\gamma-1)m$ (in conventional units, $(\gamma-1)mc^2$).
In general, we will denote the relativistic mass as $E$ and the rest mass as $m$ unless otherwise stated.
It is a fad among modern relativity textbooks to claim the phrase "relativistic mass" is a misnomer or even a mnemonic to help kids understand relativity and simply call it the energy, reserving the word "mass" to mean the rest mass. However, this obscures some of the best analogies between spacetime and momentum-energy, as we will soon see -- for instance, the relativistic mass is actually analogous to the co-ordinate time and the rest mass to the proper time/spacetime interval.
Therefore, we will use the word "mass" to refer to the relativistic mass $E$ and "proper mass" and "momentum-energy interval" to refer to the rest mass $m$. This is a convention in our course only.
(c) $v$ is not small, momentum conservation. We may do a similar analysis as above with momentum to arrive at the expression for relativistic momentum.
The total/net momentum of the light beams in the boosted frame is
$$\left( {\sqrt {\frac{{1 + v}}{{1 - v}}} - \sqrt {\frac{{1 - v}}{{1 + v}}} } \right)\frac{E}{2} = \gamma vE$$
(Note that $E$ represents the total rest energy of the light beams here, as was defined in the question.)
Therefore $p=\gamma mv$, or $p=vE$.
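Both doppler-sum identities used above are easy to verify numerically (my own snippet):

```python
from math import sqrt

v, E = 0.6, 2.0
gamma = 1 / sqrt(1 - v**2)
blue = sqrt((1 + v) / (1 - v)) * E / 2   # doppler-shifted beam energies
red  = sqrt((1 - v) / (1 + v)) * E / 2

print(blue + red, gamma * E)        # total energy  = gamma * E      (2.5 == 2.5)
print(blue - red, gamma * v * E)    # net momentum  = gamma * v * E  (1.5 == 1.5)
```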
You may use this to derive the relativistic expression for $F=dp/dt$, but it's simply computation from this point, so I'll just direct you to Wikipedia. Exercise: come up with an expression for a general directional inertia (simple).
Some people are surprised by the relation $p=vE$, or even remember it wrongly as $E=vp$ because of the seeming resemblance to $E=pc$ at the speed of light (this confusion comes from not getting the hang of $c=1$ natural units). But it's really nothing new. $E$ is simply the mass. We know momentum equals mass times velocity. This is not new.
Continued in Minkowski everything -- spacetime vectors, rapidity.
|
$K$, the standard thermodynamic equilibrium constant, is computed from $\Delta G^\circ$, using
$$\Delta G^\circ = -RT\log K \tag1$$
Generally speaking, $K$ in equation (1) is unitless. Its value depends on the specified reference standard states and $T$ (and obviously on the equilibrium activities of reactants and products). Both $K$ and $\Delta G^\circ$ change (and are certainly allowed to) if you change the choice of standard states for any species.
You can also compute an equilibrium constant from a value of $\Delta G^\circ$. To do this reverse operation you need to know the standard state ($p^\circ=\pu{1 bar}$, $c^\circ=\pu{1 M}$, or $m^\circ=\pu{1 molal}$, $\chi^\circ=\pu{1}$) associated with each species in the equilibrium equation. At equilibrium each product/reactant is at the same temperature and in the same phase as in its associated reference standard state (if not at the same partial pressure, concentration, or mole fraction).
When you use either $K$ or $\Delta G^\circ$ in practice, you need to know the standard state of each species (this is often apparent from additional information that is provided with these parameters). If you are working with a gas-phase reaction then you can refer to $K$ as $K_p$. If you are using standard states then all reference states are $\pu{1 bar}$ $^\dagger$, and partial pressures computed
directly from $K_p$ are in $\pu{bar}$ units (these can obviously be converted later).
To compute $K_c$ from say $K_p$ you have to convert units with an appropriate conversion factor. This is described in the linked post. Same applies to problems where $K_\chi$ is provided, where concentrations are described in mole fraction units. Note that if you change units, for instance to compute $K_c$, then you have (perhaps
implicitly) changed the reference state (perhaps to a nonstandard state), and in that case $\Delta G^\circ$ also changes. This is also explained in an answer to the linked post.
$\dagger$ Components whose activity is constant can usually be ignored
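As a sketch of the conversion just described (using the standard ideal-gas relation $K_p = K_c\,(c^\circ R T/p^\circ)^{\Delta n}$; the example reaction and $K_p$ value below are made up for illustration):

```python
R = 0.083145   # gas constant in L bar / (mol K)

def Kc_from_Kp(Kp, T, delta_n, c0=1.0, p0=1.0):
    """Convert K_p (reference p0 = 1 bar) to K_c (reference c0 = 1 mol/L)
    for an ideal-gas reaction, where delta_n = (moles of gaseous products)
    - (moles of gaseous reactants).  Uses K_p = K_c * (c0*R*T/p0)**delta_n."""
    return Kp / (c0 * R * T / p0) ** delta_n

# Hypothetical example: N2 + 3 H2 <=> 2 NH3, so delta_n = -2, at T = 500 K.
print(Kc_from_Kp(Kp=3.5e-2, T=500.0, delta_n=-2))
```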
|
@ArunDebray I think that this is a useful subtlety to point out, though. I think the right statement for (1) is a stable trivialization, though I think (1) is true as stated for any bundle of rank 3 (using obstruction theory to see every oriented bundle over a 2-complex is isomorphic to a stabilization of an oriented plane bundle, and that rank 3 bundles over a 2-complex are classified by $w_2$). I guess you knew this, but I thought it might be worth stating anyway.
@MikeMiller: thanks a lot for your explanations. There is definitely a lot to learn for me. I never really got into the nitty and gritty of spectral sequences, and this is a good reason to do so.
I actually want to do something extremely easy: To understand the equivariant cohomology of X and X\times S^1, when the group is acting freely on the second factor. This means that the group acts freely on X\times S^1, so the equivariant homology would be that of the quotient. But I'm having some troubles visualizing this. Is it just the homology of X?
@ThomasRot Given various statements about the equivalence between Borel and Bredon cohomology, I might point you towards the only reference I know of for the Künneth theorem with respect to Bredon cohomology. Mandell and Lewis have a paper on this. I would imagine that we could chat about it a bit and determine if there is some collapse result in the case you are interested in.
@DenisNardin: thanks, that is what I thought in the end
@SeanTilson: Thanks, I'll have a look. I was chatting with Christian on some equivariant analogues of the stuff we did last week. The map I came up with is maybe the ordinary product, because of the iso of Denis.
Let $\mathbf{CSp}$ be the category of closure spaces, i.e., of spaces of the form $(X,\text{Cl}_X)$ where $\text{Cl}_X$ is the closure operator on $X$. The morphisms of $\mathbf{CSp}$ are the maps $f: X\to Y$ such that $(f\circ \text{Cl}_X)(A)\subseteq (\text{Cl}_Y\circ f)(A)$ for all $A\subseteq X$.
Then there seems to exist an embedding from $R$-$\mathbf{Mod}$ (the category of left $R$-modules) to $\mathbf{CSp}$.
For if $(M,\circ)$ is an $R$-module then we can define a closure operator $\text{cl}_{M}:\mathcal{P}(M)\to \mathcal{P}(M)$ by letting $\text{cl}_M(S)=\langle S\rangle$ for all $S\subseteq M$ (where $\langle S\rangle$ denotes the submodule generated by $S$).
Furthermore, if $f$ is an $R$-module homomorphism between $(M,\circ_M)$ and $(N,\circ_N)$, then the same $f$ can be viewed as a morphism between the closure spaces $(M,\text{Cl}_M)$ and $(N,\text{Cl}_N)$, where $\text{Cl}_M$ and $\text{Cl}_N$ are defined as above. This is so because $$f(\langle S\rangle)\subseteq \langle f(S)\rangle$$ for all $S\subseteq M$.
So if we define a functor $\mathscr{F}:R\text{-}\textbf{Mod}\to\textbf{CSp}$ which associates to each $R$-module $(M,\circ)$ the corresponding closure space $(M,\text{Cl}_M)$ and to each $R$-module morphism $f$ the same $f$, we get an embedding. Don't we?
But that would mean $R\text{-}\textbf{Mod}$ is isomorphic to a subcategory of $\textbf{CSp}$. My question is: what is the relation of this category with $\textbf{Top}$?
@user170039 My understanding from the Kuratowski description of topological spaces is that $\mathrm{Top}$ is the subcategory of $\mathrm{CSp}$ spanned by those closure spaces such that $\mathrm{cl}\varnothing=\varnothing$ and $\mathrm{cl}(A\cup B)=\mathrm{cl}A\cup \mathrm{cl}B$ (i.e. $\mathrm{cl}$ preserves finite unions). It's quite clear that the span operator does not satisfies those axioms
In mathematics, a closure operator on a set $S$ is a function $\operatorname{cl}:\mathcal{P}(S)\to\mathcal{P}(S)$ from the power set of $S$ to itself which satisfies certain conditions for all sets $X,Y\subseteq S$. Closure operators are...
@ThomasRot So $G$ is cyclic or $S^1$. In the latter case indeed the quotient is just $X$ via the mantra "anything x G is anything else x G"; you can write down the equivariant homeomorphism by hand, and in particular, this is the same as $X_{triv} \times G$.
If $G$ is some finite cyclic group this is more interesting: if $\varphi$ generates the $\Bbb Z/n$ action on $X$, equip $X \times \Bbb R$ with the $\Bbb Z$-action $(x, t) \mapsto (\varphi x, t+1)$. Quotienting by the action of $n\Bbb Z$ gives rise to the space $X \times S^1$, with a leftover $\Bbb Z/n$ action that is precisely yours. Quotienting by the full $\Bbb Z$ is by definition the mapping torus of $\varphi$. So you are computing the homology of the mapping torus of $\varphi$.
The action of $H^*(B\Bbb Z/n)$ is induced by the map $\text{MT}(\varphi) \to B\Bbb Z/n$ classifying the covering space.
|
Tree diagrams are especially useful for solving problems with compound experiments, that is to say, those where we perform more than one random experiment. Some examples of compound experiments are: tossing two coins and checking whether both land heads; asking whether two out of three siblings are women; drawing two balls from an urn and checking whether one is red and one is blue.
Let's consider the following problem:
We throw a coin three times into the air. We want to know the probability of the event $$A=$$"to get at least two heads".
Let's suppose now that the coin has been tampered with, and $$P(C)=\dfrac{6}{10}, P(+)=\dfrac{4}{10}$$ ($$C=$$"heads", $$+=$$"tails"). What is $$P (A)$$ now?
To solve the first problem, we can apply Laplace's rule, since heads and tails are equally likely in every toss of the coin, each with probability $$1/2$$.
Our sample space is $$\Omega=\{CCC,CC+,C+C,C++,+CC,+C+,++C,+++\}$$.
Cases favorable to $$A$$: $$CCC, CC+, C+C, +CC$$. Therefore, $$P(A)=\dfrac{4}{8}=\dfrac{1}{2}$$.
Let's represent our results in a tree. Starting from the left, in every throw we divide the tree as if it has come out heads $$(C)$$ or tails $$(+)$$, putting on every branch the probability that this happens. In this case, we get quite a simple tree.
Every branch of the tree, from beginning to end, is an outcome of the sample space: "first $$C$$ comes out, then $$+$$, and then $$C$$" corresponds to the elementary event "$$C+C$$".
To compute the probability of every branch, we must multiply the probabilities of all the branches that we have followed till the end of the tree (since this is the probability of the intersection of three independent events). For example, the probability of $$C+C$$ is $$\dfrac{1}{2}\dfrac{1}{2}\dfrac{1}{2}=\dfrac{1}{8}$$.
To solve the problem, we must add the probabilities of all the favorable cases. In our case, every branch, that is to say, every elementary event, has probability $$$\dfrac{1}{2}\dfrac{1}{2}\dfrac{1}{2}=\dfrac{1}{8}$$$ and there are four favorable branches: $$$CCC, CC+, C+C, +CC$$$
Therefore, again we find that $$$P(A)=P(CCC)+P(CC+)+P(C+C)+P(+CC)=$$$ $$$=\dfrac{1}{8}+\dfrac{1}{8}+\dfrac{1}{8}+\dfrac{1}{8}=\dfrac{4}{8}=\dfrac{1}{2}$$$
a probability of $$50\%$$, the same result that we found earlier. In this case, the tree diagram does not tell us anything we did not already know from Laplace's rule, but let's see what happens in question 2.
Now the coin has been tampered with, so we can no longer apply Laplace's rule directly. In this case we will see that the use of a tree diagram is especially helpful.
Let's see our experiment drawn in this case:
The cases favorable to $$A$$ are, as before, $$CCC, CC+, C+C, +CC$$.
$$$P(CCC)=P(C)\cdot P(C) \cdot P(C) = \dfrac{6}{10}\dfrac{6}{10}\dfrac{6}{10}=\dfrac{216}{1000}$$$
$$$P(CC+)=P(C)\cdot P(C) \cdot P(+) = \dfrac{6}{10}\dfrac{6}{10}\dfrac{4}{10}=\dfrac{144}{1000}$$$
$$$P(C+C)=P(C)\cdot P(+) \cdot P(C) = \dfrac{6}{10}\dfrac{4}{10}\dfrac{6}{10}=\dfrac{144}{1000}$$$
$$$P(+CC)=P(+)\cdot P(C) \cdot P(C) = \dfrac{4}{10}\dfrac{6}{10}\dfrac{6}{10}=\dfrac{144}{1000}$$$
Finally, $$$P(A) = P(CCC) + P(CC+) + P(C+C) + P(+CC)=$$$ $$$=\dfrac{216}{1000}+\dfrac{144}{1000}+\dfrac{144}{1000}+\dfrac{144}{1000}=\dfrac{648}{1000}=0,648$$$
that is to say, $$A$$ has a probability of $$64.8\%.$$
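The same calculation can be automated by enumerating the branches of the tree and multiplying the probabilities along each one (a short sketch of my own):

```python
from itertools import product

P = {"C": 0.6, "+": 0.4}   # the tampered coin

total = 0.0
for branch in product("C+", repeat=3):    # the 8 branches of the tree
    if branch.count("C") >= 2:            # branches favorable to A
        prob = 1.0
        for outcome in branch:
            prob *= P[outcome]            # multiply along the branch
        total += prob

print(total)   # 0.648
```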
|
This is the first piece from a series on the conjugate gradient algorithm. It is still very much a work in progress, so please bear with me while it’s under construction. I have split up the content into the following pages:
All of the pages have been compiled into a single pdf document to facilitate offline reading. At the moment the PDF document is nearly identical to the web version, so some links are not working. I will eventually turn the PDF into a more self contained document with proper references to external sources.
The following are some supplementary pages which are not directly related to conjugate gradient, but somewhat related:
The intention of this website is not to provide a rigorous explanation of the topic, but rather, to provide some (hopefully useful) intuition about where this method comes from, how it works in theory and in practice, and what people are currently interested in learning about it. I do assume some linear algebra background (roughly at the level of a first undergrad course in linear algebra), but I try to add some refreshers along the way. My hope is that it can be a useful resource for undergraduates, engineers, tech workers, etc. who want to learn about some of the most recent developments in the study of conjugate gradient (e.g., work on communication-avoiding methods).
If you are a bit rusty on your linear algebra I put together a refresher on some of the important concepts for understanding this site. For a more rigorous and much broader treatment of iterative methods, I suggest Anne Greenbaum’s book on the topic. A popular introduction to conjugate gradient in exact arithmetic written by Jonathan Shewchuk can be found here. Finally, for a much more detailed overview of modern analysis of the Lanczos and conjugate gradient methods in exact arithmetic and finite precision, I suggest Gerard Meurant and Zdenek Strakos’s report.
Solving a linear system of equations \(Ax=b\) is one of the most important tasks in modern science. A huge number of techniques and algorithms for dealing with more complex equations end up, in one way or another, requiring repeatedly solving linear systems. As a result, applications such as weather forecasting, medical imaging, and training neural nets all rely on methods for efficiently solving linear systems to achieve the real world impact that we often take for granted. When \(A\) is symmetric and positive definite (if you don’t remember what that means, don’t worry, I have a refresher below), the conjugate gradient algorithm is a very popular choice for methods of solving \(Ax=b\).
This popularity of the conjugate gradient algorithm (CG) is due to a couple factors. First, like most Krylov subspace methods, CG is
matrix free. This means that \(A\) never has to be explicitly represented as a matrix, as long as there is some way of computing the product \(v\mapsto Av\) for a given input vector \(v\). For very large problems, this means a big reduction in storage, and if \(A\) has some structure (e.g., \(A\) comes from a DFT, a difference/integral operator, is very sparse, etc.), it allows the algorithm to take advantage of fast matrix-vector products. Second, CG only requires \(\mathcal{O}(n)\) storage to run, as compared to the \(\mathcal{O}(n^2)\) that many other algorithms require (we use \(n\) to denote the size of \(A\), i.e. \(A\) has shape \(n\times n\)).
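For instance, here is a sketch of the matrix-free idea (assuming SciPy; the stencil and problem size are illustrative): a 1-D discrete Laplacian applied in \(\mathcal{O}(n)\) time without ever forming the \(n\times n\) matrix.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 1000

def laplacian_matvec(v):
    """Apply the 1-D discrete Laplacian (Dirichlet boundaries) in O(n)
    time and O(n) memory -- the matrix itself is never formed."""
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:]  -= v[:-1]
    return out

A = LinearOperator((n, n), matvec=laplacian_matvec, dtype=float)
b = np.ones(n)
x, info = cg(A, b, maxiter=5000)               # A is SPD, so CG applies
print(info, np.linalg.norm(b - A.matvec(x)))   # 0 means converged
```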
While the conjugate gradient algorithm has many nice theoretical properties, its behavior in finite precision can be
extremely different from the behavior predicted by assuming exact arithmetic. Understanding what leads to these vastly different behaviors has been an active area of research since the introduction of the algorithm in the 50s. The intent of this document is to provide an overview of the conjugate gradient algorithm in exact precision, then introduce some of what is known about it in finite precision, and finally, present some modern research interests into the algorithm.
One of the first questions we should ask about any numerical method is,
does it solve the intended problem? In the case of solving linear systems, this means asking does the output approximate the true solution? If not, then there isn’t much point using the method.
Let’s quickly introduce the idea of the
error and the residual. These quantities are both useful (in different ways) for measuring how close the approximate solution \(\tilde{x}\) is to the true solution \(x^* = A^{-1}b\).
The
error is simply the difference between \(x^*\) and \(\tilde{x}\). Taking the norm of this quantity gives us a scalar value which measures the distance between \(x^*\) and \(\tilde{x}\). In some sense, this is perhaps the most natural way of measuring how close our approximate solution is to the true solution. In fact, when we say that a sequence \(x_0,x_1,x_2,\ldots\) of vectors converges to \(x^*\), we mean that the sequence of scalars, \(\|x^*-x_0\|,\|x^*-x_1\|,\|x^*-x_2\|,\ldots\) converges to zero. Thus, finding \(x\) which solves \(Ax=b\) could be written as finding the value of \(x\) which minimizes \(\|x - x^*\| = \|x-A^{-1}b\|\).
Of course, since we are trying to compute \(x^*\), it doesn’t make sense for an algorithm to explicitly depend on \(x^*\). The
residual of \(\tilde{x}\) is defined as \(b-A\tilde{x}\). Again, \(\|b-Ax^*\| = 0\), and since \(x^*\) is the only point where this is true, finding \(x\) to minimize \(\|b-Ax\|\) gives the true solution. The advantage is that we can easily compute the residual \(b-A\tilde{x}\) once we have our numerical solution \(\tilde{x}\), while there is not necessarily a good way to compute the error \(x^*-\tilde{x}\). This means that the residual gives us a way of inspecting convergence of a method.
From the previous section, we know that minimizing \(\|b-Ax\|\) will give the solution \(x^*\). Unfortunately, this problem is “just as hard” as solving \(Ax=b\).
We would like to find a related “easier” problem. One way to do this is to restrict the choice of values which \(x\) can take. For instance, if we enforce that \(x\) must come from a smaller set of values, then the problem of minimizing \(\|b-Ax\|\) is simpler (since there are fewer possibilities for \(x\)). As an extreme example, if we say that \(x = cy\) for some fixed vector \(y\), then this is a scalar minimization problem. Of course, by restricting what values we choose for \(x\), it is quite likely that we will no longer be able to exactly solve \(Ax=b\).
One thing we could try is to balance the difficulty of the problem we solve at each step against the accuracy of the solution it gives: obtain a rough solution by solving an easy problem, and then improve that solution by solving successively more difficult problems. If we do this in the right way, it seems plausible that “increasing the difficulty” of the problem we are solving won’t lead to extra work at each step, provided we are able to take advantage of having an approximate solution from a previous step.
We can formalize this idea a little bit. Suppose we have a sequence of subspaces \(V_0\subset V_1\subset V_2\subset \cdots\). Then we can construct a sequence of iterates, \(x_0\in V_0, x_1\in V_1,x_2\in V_2, \ldots\). If, at each step, we ensure that \(x_k\) minimizes \(\|b-Ax\|\) over \(V_k\), then the norm of the residuals will decrease (because \(V_k \subset V_{k+1}\)).
Ideally, this sequence of subspaces would: (i) be easy to construct, and (ii) quickly contain good approximations to the true solution.
We now formally introduce Krylov subspaces, and hint at the fact that they can satisfy these properties.
The \(k\)-th Krylov subspace generated by a square matrix \(A\) and a vector \(v\) is defined to be, \[ \mathcal{K}_k(A,v) = \operatorname{span}\{v,Av,\ldots,A^{k-1}v \} \]
First, these subspaces are relatively easy to construct because by definition we can get a spanning set by repeatedly applying \(A\) to \(v\). In fact, we can fairly easily construct an orthonormal basis for these spaces with the Arnoldi/Lanczos algorithms.
Therefore, if we can find a quantity which can be optimized over each direction of an orthonormal basis independently, then optimizing over these expanding subspaces will be easy because we only need to optimize in a single new direction at each step.
We now show that \(\mathcal{K}_k(A,b)\) will eventually contain our solution by the time \(k=n\). While this result comes about naturally from our derivation of CG, I think it is useful to relate polynomials with Krylov subspace methods early on, as the two are intimately related.
Suppose \(A\) has characteristic polynomial, \[ p_A(t) = \det(tI-A) = c_0 + c_1t + \cdots + c_{n-1}t^{n-1} + t^n \] It turns out that \(c_0 = (-1)^n\det(A)\) so that \(c_0\) is nonzero if \(A\) is invertible.
The Cayley-Hamilton Theorem states that a matrix satisfies its own characteristic polynomial. This means, \[ 0 = p_A(A) = c_0 I + c_1 A + \cdots + c_{n-1} A^{n-1} + A^n \]
Isolating the identity term, dividing by \(-c_0\) (which won’t be zero since \(A\) is invertible), and multiplying through by \(A^{-1}\), we can write, \[ A^{-1} = -(c_1/c_0) I - (c_2/c_0) A - \cdots - (1/c_0) A^{n-1} \]
This tells us that \(A^{-1}\) can be written as a polynomial in \(A\)! (I think this is one of the coolest facts from linear algebra.) In particular,
\[ x^* = A^{-1}b = -(c_1/c_0) b - (c_2/c_0) Ab - \cdots - (1/c_0) A^{n-1}b \]
That is, the solution \(x^*\) to the system \(Ax = b\) is a linear combination of \(b, Ab, A^2b, \ldots, A^{n-1}b\) (i.e. \(x^*\in\mathcal{K}_n(A,b)\)). This observation is the motivation behind Krylov subspace methods.
In fact, one way of viewing many Krylov subspace methods is as building low-degree polynomial approximations to \(A^{-1}b\) using powers of \(A\) times \(b\) (Krylov subspace methods can likewise be used to approximate \(f(A)b\) for more general functions \(f\)).
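As a small numerical illustration (my own sketch, using a generic invertible test matrix), one can stack \(b, Ab, \ldots, A^{n-1}b\) as columns and verify that \(x^*=A^{-1}b\) is a linear combination of them:

set.seed(1)
n <- 6
A <- crossprod(matrix(rnorm(n * n), n, n)) + diag(n)  # invertible test matrix
b <- rnorm(n)

K <- matrix(0, n, n)              # columns: b, Ab, ..., A^(n-1) b
K[, 1] <- b
for (j in 2:n) K[, j] <- A %*% K[, j - 1]

xstar <- solve(A, b)
coef <- solve(K, xstar)           # coordinates of x* in the Krylov basis
max(abs(K %*% coef - xstar))      # ~ 0, i.e. x* lies in K_n(A, b)

In practice the columns of this raw power basis become nearly linearly dependent very quickly, which is exactly why practical methods orthogonalize with Arnoldi/Lanczos instead.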
|
Oxides are chemical compounds with one or more oxygen atoms combined with another element (e.g. \(Li_2O\)). Oxides are binary compounds of oxygen with another element, e.g., \(CO_2\), \(SO_2\), \(CaO\), \(CO\), \(ZnO\), \(BaO_2\), \(H_2O\), etc. These are termed oxides because here oxygen is in combination with only one element. Based on their acid-base characteristics, oxides are classified as acidic, basic, amphoteric or neutral: an oxide that combines with water to give an acid is termed an acidic oxide; an oxide that gives a base in water is known as a basic oxide; an amphoteric oxide can chemically react as either an acid or a base; and an oxide that is neither acidic nor basic is a neutral oxide.
There are different properties which help distinguish between these types of oxides. The term
anhydride ("without water") refers to compounds that assimilate \(H_2O\) to form either an acid or a base upon the addition of water.
Acidic Oxides
Acidic oxides are the oxides of non-metals (Groups 14-17) and these
acid anhydrides form acids with water: Sulfurous Acid
\[\ce{SO_2 + H_2O \rightarrow H_2SO_3} \label{1}\]
Sulfuric Acid
\[\ce{ SO_3 + H_2O \rightarrow H_2SO_4} \label{2}\]
Carbonic Acid
\[ \ce{CO_2 + H_2O \rightarrow H_2CO_3} \label{3}\]
Acidic oxides are known as acid anhydrides (e.g., sulfur dioxide is sulfurous anhydride and sulfur trioxide is sulfuric anhydride) and when combined with bases, they produce salts, e.g.,
\[\ce{ SO_2 + 2NaOH \rightarrow Na_2SO_3 + H_2O} \label{4}\]
Basic Oxides
Basic oxides are the oxides of metals. If soluble in water, they react with water to produce hydroxides (alkalies), e.g.,
\[ \ce{K_2O \; (s) + H_2O \; (l) \rightarrow 2KOH \; (aq) } \label{5}\]
\[ \ce{ CaO + H_2O \rightarrow Ca(OH)_2} \label{6}\]
\[ \ce{ MgO + H_2O \rightarrow Mg(OH)_2} \label{7}\]
\[ \ce{ Na_2O + H_2O \rightarrow 2NaOH } \label{8}\]
These metallic oxides are known as basic anhydrides. They react with acids to produce salts, e.g.,
\[ \ce{ MgO + 2HCl \rightarrow MgCl_2 + H_2O } \label{9}\]
\[ \ce{ Na_2O + H_2SO_4 \rightarrow Na_2SO_4 + H_2O} \label{10}\]
Amphoteric Oxides
An amphoteric substance can chemically react as either an acid or a base. For example, when \(HSO_4^-\) reacts with water it can make both hydroxide and hydronium ions:
\[ HSO_4^- + H_2O \rightarrow SO_4^{2-} + H_3O^+ \label{11}\]
\[ HSO_4^- + H_2O \rightarrow H_2SO_4 + OH^- \label{12}\]
Amphoteric oxides exhibit both basic and acidic properties. When they react with an acid, they produce salt and water, showing basic properties; when reacting with alkalies, they form salt and water, showing acidic properties.
For example \(ZnO\) exhibits basic behavior with \(HCl\)
\[ZnO + 2HCl \rightarrow \underset{\large{zinc\:chloride}}{ZnCl_2}+H_2O\,(basic\: nature) \label{13}\]
and acidic behavior with \(NaOH\)
\[ZnO + 2NaOH \rightarrow \underset{\large{sodium\:zincate}}{Na_2ZnO_2}+H_2O\,(acidic\: nature) \label{14}\]
Similarly, \(Al_2O_3\) exhibits basic behavior with \(H_2SO_4\)
\[Al_2O_3 + 3H_2SO_4 \rightarrow Al_2(SO_4)_3+3H_2O\,(basic\: nature) \label{15}\]
and acidic behavior with \(NaOH\)
\[Al_2O_3 + 2NaOH \rightarrow 2NaAlO_2+H_2O\,(acidic\: nature) \label{16}\]
Neutral Oxides
Neutral oxides show neither basic nor acidic properties and hence do not form salts when reacted with acids or bases; e.g., carbon monoxide (CO), nitrous oxide (\(N_2O\)), and nitric oxide (NO) are neutral oxides.
Peroxides and Dioxides
\[ 4 Li + O_2 \rightarrow 2Li_2O \label{19} \]
\[ H_2 + O_2 \rightarrow H_2O_2 \label{20}\]
Superoxides: often potassium, rubidium, and cesium react with excess oxygen to produce the superoxide, \( MO_2 \), with the oxidation number of the oxygen equal to -1/2.
\[ Cs + O_2 \rightarrow CsO_2 \label{21}\]
A peroxide is a metallic oxide which gives hydrogen peroxide by the action of dilute acids. They contain more oxygen than the corresponding basic oxide, e.g., sodium, calcium and barium peroxides.
\[BaO_2 + H_2SO_4 \rightarrow BaSO_4 + H_2O_2 \label{22}\]
\[Na_2O_2 + H_2SO_4 \rightarrow Na_2SO_4 + H_2O_2 \label{23}\]
Dioxides like \(PbO_2\) and \(MnO_2\) also contain a higher percentage of oxygen, like peroxides, and have similar molecular formulae. These oxides, however, do not give hydrogen peroxide by action with dilute acids. Dioxides on reaction with concentrated \(HCl\) yield \(Cl_2\), and on reacting with concentrated \(H_2SO_4\) they yield \(O_2\).
\[PbO_2 + 4HCl \rightarrow PbCl_2 + Cl_2 + 2H_2O \label{24}\]
\[2PbO_2 + 2H_2SO_4 \rightarrow 2PbSO_4 + 2H_2O + O_2 \label{25}\]
Compound Oxides
Compound oxides are metallic oxides that behave as if they are made up of two oxides, one with a lower oxidation state and one with a higher oxidation state of the same metal, e.g.,
\[\textrm{Red lead: } Pb_3O_4 = PbO_2 + 2PbO \label{26}\]
\[\textrm{Ferro-ferric oxide: } Fe_3O_4 = Fe_2O_3 + FeO \label{27}\]
On treatment with an acid, compound oxides give a mixture of salts.
\[\underset{\text{Ferro-ferric oxide}}{Fe_3O_4} + 8HCl \rightarrow \underset{\text{ferric chloride}}{2FeCl_3} + \underset{\text{ferrous chloride}}{FeCl_2} + 4H_2O \label{28}\]
Preparation of Oxides
Oxides can be generated via multiple reactions. Below are a few.
By direct heating of an element with oxygen: Many metals and non-metals burn rapidly when heated in oxygen or air, producing their oxides, e.g.,
\[2Mg + O_2 \xrightarrow{Heat} 2MgO\]
\[2Ca + O_2 \xrightarrow{Heat} 2CaO\]
\[S + O_2 \xrightarrow{Heat} SO_2\]
\[P_4 + 5O_2 \xrightarrow{Heat} 2P_2O_5\]
By reaction of oxygen with compounds at higher temperatures: At higher temperatures, oxygen also reacts with many compounds forming oxides, e.g.,
sulfides are usually oxidized when heated with oxygen.
\[2PbS + 3O_2 \xrightarrow{\Delta} 2PbO + 2SO_2\]
\[2ZnS + 3O_2 \xrightarrow{\Delta} 2ZnO + 2SO_2\]
When heated with oxygen, compounds containing carbon and hydrogen are oxidized.
\[C_2H_5OH + 3O_2 \rightarrow 2CO_2 + 3H_2O\]
By thermal decomposition of certain compounds like hydroxides, carbonates, and nitrates
\[CaCO_3 \xrightarrow{\Delta} CaO + CO_2\]
\[2Cu(NO_3)_2 \xrightarrow{\Delta} 2CuO + 4NO_2 + O_2\]
\[Cu(OH)_2 \xrightarrow{\Delta} CuO + H_2O\]
By oxidation of some metals with nitric acid
\[2Cu + 8HNO_3 \xrightarrow{Heat} 2CuO + 8NO_2 + 4H_2O + O_2\]
\[Sn + 4HNO_3 \xrightarrow{Heat} SnO_2 + 4NO_2 + 2H_2O\]
By oxidation of some non-metals with nitric acid
\[C + 4HNO_3 \rightarrow CO_2 + 4NO_2 + 2H_2O\]
Trends in Acid-Base Behavior
The oxides of elements in a period become progressively more acidic as one goes from left to right across the period. For example, in the third period, the behavior of oxides changes as follows:
\(\underset{\large{Basic}}{\underbrace{Na_2O,\: MgO}}\hspace{20px}
\underset{\large{Amphoteric}}{\underbrace{Al_2O_3}}\hspace{20px} \underset{\large{Acidic}}{\underbrace{SiO_2,\: P_4O_{10},\: SO_3,\:Cl_2O_7}}\hspace{20px}\)
If we take a closer look at a specific period, we may better understand the acid-base properties of oxides. It may also help to examine the physical properties of oxides, but it is not necessary. Metal oxides on the left side of the periodic table produce basic solutions in water (e.g. \(Na_2O\) and \(MgO\)). Non-metal oxides on the right side of the periodic table produce acidic solutions (e.g. \(Cl_2O\), \(SO_2\), \(P_4O_{10}\)). There is a trend within acid-base behavior: basic oxides are found on the left side of the period and acidic oxides on the right side.
Aluminum oxide shows both acidic and basic properties of an oxide; it is amphoteric. Thus \(Al_2O_3\) marks the point at which the changeover from basic to acidic oxides occurs. It is important to remember that the trend only applies for oxides in their highest oxidation states: the individual element must be in its highest possible oxidation state, because the trend does not hold if all oxidation states are included. Notice how the amphoteric oxides (shown in blue) of each period signify the change from basic to acidic oxides.
Group:  1    2    13   14   15   16   17
        Li   Be   B    C    N    O    F
        Na   Mg   Al   Si   P    S    Cl
        K    Ca   Ga   Ge   As   Se   Br
        Rb   Sr   In   Sn   Sb   Te   I
        Cs   Ba   Tl   Pb   Bi   Po   At
The table above shows oxides of the s- and p-block elements (in the original figure, purple marks basic oxides, blue amphoteric oxides, and pink acidic oxides).
Problems
1. Can an oxide be neither acidic nor basic?
2. \(Rb + O_2\: (excess) \rightarrow \:?\)
3. \(Na + O_2 \rightarrow \:?\)
4. Is \(BaO_2\) a hydroxide, a peroxide, or a superoxide?
5. What is an amphoteric substance?
6. Why is it difficult to obtain oxygen directly from water?
Solutions
1. Yes, an example is carbon monoxide (CO): CO does not produce a salt when reacted with an acid or a base.
2. \( Rb + O_2 \; (excess) \rightarrow RbO_2 \). With the presence of excess oxygen, rubidium forms the superoxide.
3. \( 2 Na + O_2 \rightarrow Na_2O \). Note: the problem does not specify that the oxygen is in excess, so the product is the normal oxide rather than a peroxide.
4. \(BaO_2\) is a peroxide. Barium has an oxidation state of +2, so the oxygen atoms have an oxidation state of -1. As a result, the compound is a peroxide, more specifically barium peroxide.
5. An amphoteric substance can chemically react as either an acid or a base. See the section on amphoteric oxides above for more detail.
Water as such is a neutral, stable molecule. It is difficult to break the covalent O-H bonds easily. Hence, electrical energy, through the electrolysis process, is applied to separate dioxygen from water. When a small amount of acid is added to water, ionization is initiated, which helps the electrochemical reactions proceed as follows.
\[ [H_2O\:(acidulated) \rightleftharpoons H^+\,(aq) + OH^-\,(aq)] \times 4 \]
At cathode:
\[[H^+\,(aq)+e^-\rightarrow\dfrac{1}{2}H_2(g)]\times4\]
At anode:
\[4OH^-\,(aq)\rightarrow O_2+2H_2O + 4e^-\]
Net reaction:
\[2H_2O \xrightarrow{\large{electrolysis}} 2H_2\,(g) + O_2\,(g)\]
Oxygen can thus be obtained from acidified water by its electrolysis.
Contributors
Binod Shrestha (University of Lorraine)
|
I'd like this question to definitively guide a practitioner to using both $\mathbb{P}$ and $\mathbb{Q}$ probabilities in trading and research.
Let's take only one fact as given: if I have a risk-neutral probability distribution I can price and hedge any option.
Is the distinction more philosophical or practical? Does it have a real impact on a trading desk's P/L? For example, is it a construct to remind us we're not in the "real world" when modeling? This question says it is the difference in using $\mu$ vs $r$ when solving the S.D.E., which seems to say that if I definitively knew $\mu$ and $r$, I would be able to transition between the two with absolutely no loss of information. What edge would this provide me in the market? This good paper and this good answer seem to divide them by the approach of their research: $\mathbb{P}$-quants vs $\mathbb{Q}$-quants. In this sense, $\mathbb{P}$-quants are concerned with modeling the future using historical data sets (projection), while $\mathbb{Q}$-quants are concerned with relative valuation and making sure that their pricing schemes are consistent with exchange-traded products observed in the market (extrapolation). I see that these job functions are different, but I do not see why one could not apply $\mathbb{P}$ methods to the $\mathbb{Q}$ world (their effectiveness seems less important to me; it doesn't seem like a scientific violation). Girsanov's theorem shows it is possible to switch between the two. Now I know I CAN draw conclusions from each, but the method is not clear. Is there a way on paper to move from $\mathbb{P}$ to $\mathbb{Q}$ and vice versa if I have a closed-form solution or a parameterized model of either $\mathbb{P}$ or $\mathbb{Q}$? If my returns under $\mathbb{Q}$ are $X \sim \mathcal{N}(r,\,\sigma^{2})$, how can I get to $\mathbb{P}$?
I'd prefer to stay out of a model-framework completely and let all results be in general. From what I've found I believe the connection is in putting a price on the market risk premium, but I have not found empirical estimations of this or attempts to use its estimation for moving between $\mathbb{P}$ and $\mathbb{Q}$. Any papers on $\lambda$ estimation or extraction would be appreciated.
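For what it's worth, here is a toy sketch (entirely my own, with made-up parameters) of the mechanics in the simplest one-period Gaussian setting: assuming $\mu$, $r$ and $\sigma$ are known, the market price of risk $\lambda=(\mu-r)/\sigma$ pins down the density ratio $d\mathbb{P}/d\mathbb{Q}$, and reweighting $\mathbb{Q}$-draws by that ratio recovers $\mathbb{P}$-expectations.

mu <- 0.08; r <- 0.02; sigma <- 0.2
lambda <- (mu - r) / sigma           # market price of risk

set.seed(42)
X <- rnorm(1e6, mean = r, sd = sigma)   # returns simulated under Q

# Radon-Nikodym weights dP/dQ for N(mu, sigma^2) against N(r, sigma^2)
w <- dnorm(X, mu, sigma) / dnorm(X, r, sigma)

mean(X)              # ~ 0.02 : expectation under Q
weighted.mean(X, w)  # ~ 0.08 : expectation under P, recovered by reweighting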
I wanted to add this quote from Gary Hatfield:
Recall that the whole point of risk neutral pricing is to recover the price of traded options in a way that avoids arbitrage. As such, the probabilities of various paths are implied from the prices of various traded securities whose payoffs depend on those paths. Since investors are in aggregate risk averse, these prices imply higher probabilities to bad scenarios than they do to good scenarios. Hence, while everyone (almost!) agrees that stocks have a higher expected return than risk free bonds, the prices of stock and stock options imply the only difference between stocks and risk free bonds is that stocks are more volatile. Put another way, a risk neutral scenario set has many more really bad scenarios than a real world scenario set precisely because investors fear these scenarios. They therefore overweigh their probability when deciding how much a security is worth.
This provides intuitive context for the difference, but makes it seem impossible to ever replicate the $\mathbb{P}$ world.
|
Word2Vec is a neural network algorithm (a shallow, two-layer network) that draws context from phrases. Every textual document is represented in the form of a
vector, and that is done through Vector Space Modelling (VSM). We can convert our text using One-Hot Encoding: for example, with three words we can make a vector in a three-dimensional space.
The problem with One-Hot Encoding is that it does not help us find similarities. In fact, from the graph above, every distance is the same as every other, and we cannot find similarities using, for example, Euclidean distance. That is why
the Word2Vec data generation scheme, also known as Skip-gram, is used. Word2Vec is a word embedding where similarities come from neighboring words.
From the example above, we converted the words into One-Hot Encodings, and we also codified the neighbors as One-Hot Encodings. The architecture of Word2Vec is as described below.
The example described by the figure above tries to train the word
king as input and brave as its neighbor, using gradient descent as the optimizer. During backpropagation we have an update of the weights in the hidden layer for each combination of words, and the inputs are multiplied with the updated weights. The weights continue to be updated for each combination of words based on the context of each phrase. The softmax function creates the probability distribution, and gradient descent is used as the optimizer. There is an interesting simulation here where we can train an ANN and see how it develops.
The crucial point is to be able to predict the
Context Word from the Focus Word, namely the current word in the sentence.
\[ p(c \mid w ; \theta)=\frac{\exp (v_{c} \cdot v_{w})}{\sum_{c^{\prime} \in C} \exp (v_{c^{\prime}} \cdot v_{w})} \]
From the function above, we take the
probability of the context word given the focus word as proportional to the exponential of the product between the context word vector \(v_c\) and the focus word vector \(v_w\). The formula reminds us of the sigmoid function of logistic regression. One important detail of Word2Vec is related to the distribution of the probability of the context word. In fact, the probability of words is typically raised to the power of 3/4, giving what is called the negative sampling distribution.
As we can see from the figure above,
raising to the power of 3/4 brings down frequent terms and brings up infrequent terms. As a result, we are not focusing only on super-frequent words: we also consider words that are in the middle range of our distribution, and we can explore more of the long tail of the distribution. The negative sampling distribution with a power of 3/4 makes the distribution a little bit fatter and longer-tailed.
|
Since you're writing what look to be Riemann integrals, I presume we're keeping to the case where $X$ is a continuous random variable.
If $m$ is a value for which $\int_{-\infty}^m f_X(x)\, dx =\frac12$ then $m$ is a
median of $X$ (if the density is $>0$ in a neighborhood of $m$, then $m$ will be unique; the median).
Since here the upper limit of the integral is given as $m=E(X)$, this amounts - as Nick Cox pointed out in comments - to saying "is there a distribution for which the mean differs from the median", and the obvious thing to do when searching for a counterexample is to try some distributions that are skew*.
Here's an obvious example: the exponential distribution, which has its median at $\ln 2$ times the mean (i.e. about 70% of the mean).
You might like to also consider the density
$$f(x) = \cases{ \begin{array}{lr} 0, & \text{for } x\leq 0\\ 2x, & \text{for } 0< x\leq 1\\ 0, & \text{for } x> 1 \end{array}} $$
as a simple one to try for yourself (since the integrations are particularly simple).
*(not every asymmetric distribution has mean different from median, however)
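A quick numerical check of both examples (a sketch in R):

# exponential with rate 1: mean is 1, median is log(2) ~ 0.693
qexp(0.5, rate = 1)

# f(x) = 2x on (0,1): mean = 2/3, median solves F(m) = m^2 = 1/2
integrate(function(x) x * 2 * x, 0, 1)$value  # mean: 2/3
sqrt(0.5)                                     # median: ~0.707, so mean != median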
|
In my previous two posts I showed worked solutions to problems 2.5 and 11.7 in Bulmer’s
Principles of Statistics, both of which involve the characteristics of self-fertilizing hybrid sweet peas. It turns out that problem 11.8 also involves this same topic, so why not work it as well for completeness. The problem asks us to assume that we were unable to find an explicit solution for the maximum likelihood equation in problem 11.7 and to solve it by using the following iterative method:
\( \theta_{1} = \theta_{0} + \frac{S(\theta_{0})}{I(\theta_{0})} \)
where \( S(\theta_{0}) \) is the value of \( \frac{d \log L}{d\theta}\) evaluated at \( \theta_{0}\) and \( I(\theta_{0})\) is the value of \( -E(\frac{d^{2}\log L}{d\theta^{2}})\) evaluated at \( \theta_{0}\).
So we begin with \( \theta_{0}\) and the iterative method returns \( \theta_{1}\). Now we run the iterative method again starting with \( \theta_{1}\) and get \( \theta_{2}\):
\( \theta_{2} = \theta_{1} + \frac{S(\theta_{1})}{I(\theta_{1})} \)
We repeat this process until we converge upon a value. This is called the Newton-Raphson method. Naturally this is something we would like to have the computer do for us.
First, recall our formulas from problem 11.7:
\( \frac{d \log L}{d\theta} = \frac{1528}{2 + \theta} – \frac{223}{1 – \theta} + \frac{381}{\theta} \)
\( \frac{d^{2}\log L}{d \theta^{2}} = -\frac{1528}{(2 + \theta)^{2}} -\frac{223}{(1 – \theta)^{2}} -\frac{381}{\theta^{2}} \)
Let’s write functions for those in R:
# maximum likelihood score mls <- function(x) { 1528/(2 + x) - 223/(1 - x) + 381/x } # the information inf <- function(x) { -1528/((2 + x)^2) - 223/((1 - x)^2) - 381/(x^2) }
Now we can use those functions in another function that will run the iterative method starting at a trial value:
# newton-raphson using expected information matrix nr <- function(th) { prev <- th repeat { new <- prev + mls(prev)/-inf(prev) if(abs(prev - new)/abs(new) <0.0001) break prev <- new } new }
This function first takes its argument and names it "prev". Then it starts a repeating loop. The first thing the loop does is calculate the new value using the iterative formula. It then checks to see if the difference between the new and previous value, divided by the new value, is less than 0.0001. If it is, the loop breaks and the "new" value is returned (and printed at the console). If not, the loop repeats. Notice that each iteration is hopefully converging on a value. As it converges, the difference between the "prev" and "new" values will get smaller and smaller; so small that dividing the difference by the "new" value (or the "prev" value, for that matter) will begin to approach 0.
To run this function, we simply call it from the console. Let's start with a value of \( \theta_{0} = \frac{1}{4}\), as the problem suggests:
nr(1/4) [1] 0.7844304
There you go! We could make the function tell us a little more by outputting the iterative values and number of iterations. Here's a super quick and dirty way to do that:
# newton-raphson using expected information matrix nr <- function(th) { k <- 1 # number of iterations v <- c() # iterative values prev <- th repeat { new <- prev + mls(prev)/-inf(prev) v[k] <- new if(abs(prev - new)/abs(new) <0.0001) break prev <- new k <- k + 1 } print(new) # the value we converged on print(v) # the iterative values print(k) # number of iterations }
Now when we run the function we get this:
nr(1/4) [1] 0.7844304 [1] 0.5304977 0.8557780 0.8062570 0.7863259 0.7844441 0.7844304 [1] 6
We see it took 6 iterations to converge. And with that I think I've had my fill of heredity problems for a while.
|
So I've been puzzled by this problem for some time now: suppose we have a chocolate bar with dimensions $m\times n$ and it is made up of a finite number of $1\times k$ chocolates. Prove that for any natural numbers $m,n,k$, in order for the chocolate to be assembled this way, $k$ must divide at least one of $m,n$. The statement seems so natural, yet I can't even start thinking about a way of obtaining a proof. I would really appreciate your help. Thanks.
Let $D=\{(x,y)\in\mathbb R^2|0\leq x \leq m,0 \leq y \leq n\}$. Then $D$ is the union of subsets of the type $\{i\leq x \leq i+1,j\leq y \leq j+k \}$ or $\{i\leq x \leq i+k,j\leq y \leq j+1\}$. Now $$\int_D\sin (\frac{2\pi}{k}x)\sin (\frac{2\pi}{k}y)dxdy=0$$ because on each of the aforementioned subsets this integral vanishes. On the other hand $$\int_D\sin(\frac{2\pi}{k}x)\sin(\frac{2\pi}{k}y)dxdy=\int_0^m\sin(\frac{2\pi}{k}x)dx\int_0^n\sin(\frac{2\pi}{k}y)dy$$ so one of these two integrals must vanish. This yields that either $k|m$ or $k|n$.
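A quick numerical sanity check (my own addition) of the key fact behind this argument: $\int_0^m\sin(\frac{2\pi}{k}x)\,dx$ vanishes exactly when $k\mid m$.

I <- function(m, k) integrate(function(x) sin(2 * pi * x / k), 0, m)$value
I(6, 3)   # k divides m : essentially 0
I(7, 3)   # k does not divide m : clearly nonzero (~0.716)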
Here is another proof using coloring ideas. Write the bar as an $m\times n$ matrix. Number the rows from $\{0,1,2,...,m-1\}$ and, similarly, number the columns as $\{0,1,2,...,n-1\}$. Now each element of the matrix is defined to be $r+c \pmod k$, where $r$ is the number of the row and $c$ is the number of the column. Now wlog say $k\nmid m$; we try to show that it then has to divide $n$. We observe that in this coloring the numbers of $0$'s, $1$'s, $2$'s, ... should all be the same, or else we cannot cover the bar with $1\times k$ tiles, since wherever we put a $1\times k$ block it will contain all the numbers from $0$ to $k-1$. So now we have to exhibit a number which appears in the matrix a number of times different from $mn/k$.
To find that number, let $r_1 \equiv m \pmod k$ and $r_2 \equiv n \pmod k$. Then it is clear that $r_1$ repeats more than $mn/k$ times if $r_2 \neq 0$: the number $r_1$ appears $\lfloor m/k\rfloor+1$ times in the first column, this trend continues for the next $r_1$ columns, and from then until the $k$-th column is reached it appears only $\lfloor m/k\rfloor$ times. So over any $k$ consecutive columns each number appears exactly $m$ times, and since $r_2\neq 0$ there are leftover columns. Hence $r_1$ is a number which repeats more than $mn/k$ times.
|
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).
I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x _1 ,x _2 ,...,x _n ) = x _1 x _2 ···x _n$ has $2^ {n+1} −2$ non-constant polynomials in R dividing it.
But, for $n=2$, I can't find any non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$.
I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set-up some notation.Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...
Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)
On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case
@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question.
Moreover, the title is vague and doesn't clearly ask a question.
And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.
but if a title inherently states what the OP is looking for, I hardly see the fact that it has been explicitly restated as a reason for it to be closed; no, it was because I originally had a lot of errors in the expressions when I typed them out in latex, but I fixed them almost straight away
lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre
I have another problem: Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60 mph. How long until Train B overtakes Train A?
@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out
By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point
So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying
But 240 miles seems waaay too short to cross two time zones
So my inclination is to say the answer key is nonsense
You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there,
I'm currently going through a proof of why all general solutions to second ODE look the way they look. I have a question mark regarding the linked answer.
Where does the term e^{(r_1-r_2)x} come from?
It seems like it is taken out of the blue, but it yields the desired result.
|
Can you please help me find this integral?
$$\int \sin(\ln(x)) dx$$
Give me a clue or show step by step solutions please.
Thank you very much.
Make a substitution: $u = \ln x$. Then $du = \frac{1}{x} dx$, so $dx = x du$. Then you can use the fact that $e^u = x$.
Hope this helps!
Putting $\ln x=y, x=e^y, dx=e^y dy$
So, $\int \sin(\ln x)dx=\int \sin y\cdot e^y dy$
Use Integration by parts, with $e^y$ as the first term
Alternatively, using Euler's formula, $e^{iy}=\cos y+i\sin y$
$\int \sin y\cdot e^y dy$ is the imaginary part of $\int e^y\cdot e^{iy}dy$
$$\int e^y\cdot e^{iy}dy=\int e^{y(1+i)}dy=\frac{e^y(e^{iy})}{(1+i)}=\frac{(1-i)e^y(\cos y+i\sin y)}2=\frac{e^y\{(\cos y+\sin y)+i(\sin y-\cos y)\}}2$$
$$\implies \int \sin y\cdot e^y dy=\frac{e^y(\sin y-\cos y)}2$$
$$\implies \int \sin(\ln x)dx=\frac{x(\sin(\ln x)-\cos(\ln x))}2$$
Let our integral be $I$. Use integration by parts. Let $u=\sin(\ln x)$ and $dv=dx$. Then $du=\frac{1}{x}\cos(\ln x)\,dx$, and we can take $v=x$. Thus $$I=x\sin(\ln x)-\int \cos(\ln x)\,dx.$$ Let $J=\int \cos(\ln x)\,dx$. The same sort of calculation as the one above yields $$J=x\cos(\ln x)+\int \sin(\ln x)\,dx.$$ Thus $$I=x\sin(\ln x)-J\qquad\text{and}\qquad J=x\cos(\ln x)+I.$$ Solve for $I$, and don't forget the $+C$. We get $$I=\frac{x\sin(\ln x)-x\cos(\ln x)}{2} +C.$$
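One can sanity-check the closed form numerically, e.g. with a quick R sketch:

prim <- function(x) x * (sin(log(x)) - cos(log(x))) / 2  # antiderivative above
integrate(function(t) sin(log(t)), 1, 5)$value           # numeric: ~3.094
prim(5) - prim(1)                                        # same value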
|
Given $a\in\mathbb{R}^{mn\times n}$, find a $C\in\mathbb{R}^{n}$, $x\in\mathbb{R}^{m\times n}$ such that $$ 0 = f_{k}(\boldsymbol{C}, \boldsymbol{x}):=\sum_{i=1}^{m} C_{i} \left(\prod_{j=1}^{n} a_{kj}^{x_{i,j}} - 1\right) $$ for all $k\in\{1,\dots,mn\}$.
(The rows of $a_{kj}$ are vectors of solutions to a system of differential equations, and the goal of finding the roots of the above system is to find invariant quantities of those diff. eq.)
To show that the system has a singular Jacobian, I have a test setup where $a_{k1} = (1-2t_{k})^{-0.5}$, $a_{k2} = (1-2t_{k})^{-1}$ and the rest zero (only $i=1$, but it is still singular if you include the sum). It is in this case easy to check that the Jacobian is singular regardless of the values of $\boldsymbol{C}$ and $\boldsymbol{x}$. The solution is any $x$ with $2x_{11} - x_{12}=0$ and $C$ anything, and I'm guessing that the Jacobian is singular because this does imply infinitely many solutions.
I just need to be able to find a couple of them (that are non-trivial such as $C_{i} =0$, and I can go from there. Most root-finding methods I found that can deal with singular Jacobians only mention singularities at the solution, not everywhere. Are there any methods that can tackle this problem? I should also mention that the system should eventually get quite large (order of 1000 equations).
|
The electromagnetic spectrum
Electromagnetic radiation, as you may recall from a previous chemistry or physics class, is composed of electrical and magnetic waves which oscillate on perpendicular planes. Visible light is electromagnetic radiation. So are the gamma rays that are emitted by spent nuclear fuel, the x-rays that a doctor uses to visualize your bones, the ultraviolet light that causes a painful sunburn when you forget to apply sun block, the infrared light that the army uses in night-vision goggles, the microwaves that you use to heat up your frozen burritos, and the radio-frequency waves that bring music to anybody who is old-fashioned enough to still listen to FM or AM radio.
Just like ocean waves, electromagnetic waves travel in a defined direction. While the speed of ocean waves can vary, however, the speed of electromagnetic waves – commonly referred to as the speed of light – is essentially a constant, approximately 300 million meters per second. This is true whether we are talking about gamma radiation or visible light. Obviously, there is a big difference between these two types of waves – we are surrounded by the latter for more than half of our time on earth, whereas we hopefully never become exposed to the former to any significant degree. The different properties of the various types of electromagnetic radiation are due to differences in their wavelengths, and the corresponding differences in their energies:
shorter wavelengths correspond to higher energy.
High-energy radiation (such as gamma- and x-rays) is composed of very short waves, as short as \(10^{-16}\) meter from crest to crest. Longer waves are far less energetic, and thus are less dangerous to living things. Visible light waves are in the range of 400–700 nm (nanometers, or \(10^{-9}\) m), while radio waves can be several hundred meters in length.
The notion that electromagnetic radiation contains a quantifiable amount of energy can perhaps be better understood if we talk about light as a stream of
particles, called photons, rather than as a wave. (Recall the concept known as ‘wave-particle duality’: at the quantum level, wave behavior and particle behavior become indistinguishable, and very small particles have an observable ‘wavelength’). If we describe light as a stream of photons, the energy of a particular wavelength can be expressed as:
\[E = \dfrac{hc}{\lambda} \tag{4.1.1}\]
where E is energy in kJ/mol,
λ (the Greek letter lambda) is wavelength in meters, c is \(3.00 \times 10^8\) m/s (the speed of light), and h is \(3.99 \times 10^{-13}\) kJ·s·mol\(^{-1}\), a number known as Planck’s constant.
Because electromagnetic radiation travels at a constant speed, each wavelength corresponds to a given frequency, which is the number of times per second that a crest passes a given point. Longer waves have lower frequencies, and shorter waves have higher frequencies. Frequency is commonly reported in hertz (Hz), meaning ‘cycles per second’, or ‘waves per second’. The standard unit for frequency is s\(^{-1}\).
When talking about electromagnetic waves, we can refer either to wavelength or to frequency - the two values are interconverted using the simple expression:
\[\lambda \nu = c \tag{4.1.2}\]
where
ν (the Greek letter ‘nu’) is frequency in s\(^{-1}\). Visible red light with a wavelength of 700 nm, for example, has a frequency of \(4.29 \times 10^{14}\) Hz, and an energy of 40.9 kcal per mole of photons. The full range of electromagnetic radiation wavelengths is referred to as the electromagnetic spectrum.
(Image from Wikipedia commons)
Notice that visible light takes up just a narrow band of the full spectrum. White light from the sun or a light bulb is a mixture of all of the visible wavelengths. You see the visible region of the electromagnetic spectrum divided into its different wavelengths every time you see a rainbow: violet light has the shortest wavelength, and red light has the longest.
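As a quick sanity check of the 700 nm example above (a sketch using the values of h and c as defined in the text; the variable names are mine):

h <- 3.99e-13          # Planck's constant, kJ.s.mol^-1, as defined above
c_light <- 3.00e8      # speed of light, m/s (named to avoid masking R's c())
lambda <- 700e-9       # 700 nm in meters

c_light / lambda                 # frequency: ~4.29e14 Hz
h * c_light / lambda             # energy: ~171 kJ/mol
(h * c_light / lambda) / 4.184   # ~40.9 kcal/mol, matching the text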
Exercise 4.4: Visible light has a wavelength range of about 400-700 nm. What is the corresponding frequency range? What is the corresponding energy range, in kJ/mol of photons?
Overview of a molecular spectroscopy experiment
In a spectroscopy experiment, electromagnetic radiation of a specified range of wavelengths is allowed to pass through a sample containing a compound of interest. The sample molecules absorb energy from some of the wavelengths, and as a result jump from a low energy ‘ground state’ to some higher energy ‘excited state’. Other wavelengths are
not absorbed by the sample molecule, so they pass on through. A detector on the other side of the sample records which wavelengths were absorbed, and to what extent they were absorbed.
Here is the key to molecular spectroscopy:
a given molecule will specifically absorb only those wavelengths which have energies that correspond to the energy difference of the transition that is occurring. Thus, if the transition involves the molecule jumping from ground state A to excited state B, with an energy difference of ΔE, the molecule will specifically absorb radiation with wavelength that corresponds to ΔE, while allowing other wavelengths to pass through unabsorbed.
By observing which wavelengths a molecule absorbs, and to what extent it absorbs them, we can gain information about the nature of the energetic transitions that a molecule is able to undergo, and thus information about its structure.
These generalized ideas may all sound quite confusing at this point, but things will become much clearer as we begin to discuss specific examples.
|
The Hydrodynamical Behavior of the Coupled Branching Process
The Annals of Probability (Ann. Probab.), Volume 12, Number 3 (1984), 760-767.
Abstract
The coupled branching process $(\eta^\mu_t)$ is a Markov process on $(\mathbb{N})^S (S = \mathbb{Z}^d)$ with initial distribution $\mu$ and the following time evolution: At rate $b\eta(x)$ a particle is born at site $x$, which moves instantaneously to a site $y$ chosen with probability $q(x, y)$. All particles at a site die at rate $pd$, individual particles die independent from each other at rate $(1 - p)d$. Furthermore, all particles perform independent continuous time random walks with kernel $p(x, y)$. We consider here the case $b = d$ and the symmetrized kernels $\hat p, \hat q$ are transient. We show that the measures $\mathscr{L}(\eta^\mu_t(\cdot + \lbrack\alpha \sqrt{tx}\rbrack)), (\alpha \in \mathbb{R}^+, x \in \mathbb{R}^d)$ converge weakly for $t \rightarrow \infty$ to $\nu_{\tau(a,x)}$. Here $\nu_\rho$ is the invariant measure of the process with: $E^{\nu_\rho}(\eta(x)) = \rho$ and which is also extremal in the set of all translationinvariant invariant measures of the process. The density profile $\tau(\alpha, x)$ is calculated explicitly; it is governed by the diffusion equation.
Article information
Source: Ann. Probab., Volume 12, Number 3 (1984), 760-767.
Dates: first available in Project Euclid 19 April 2007.
Permanent link: https://projecteuclid.org/euclid.aop/1176993226
Digital Object Identifier: doi:10.1214/aop/1176993226
Mathematical Reviews number (MathSciNet): MR744232
Zentralblatt MATH identifier: 0596.60095
Subjects: Primary: 60K35 (interacting random processes; statistical mechanics type models; percolation theory); Secondary: 82A05
Citation
Greven, Andreas. The Hydrodynamical Behavior of the Coupled Branching Process. Ann. Probab. 12 (1984), no. 3, 760--767. doi:10.1214/aop/1176993226. https://projecteuclid.org/euclid.aop/1176993226
|
I have an elliptic curve $ y^2=x^3+109x^2+224x$ and a point $P=(-100,260)$ on it, and I need to find the point $2P$. I took the formulas $$x_2=\left(\frac{ax_1-b}{y_1}\right)^2 -a+x_1$$ and $$y_2=-y_1+\frac{ax_1-b}{y_1}(x_1-x_2)$$ put $a=109$, $b=224$, $x_1=-100$, $y_1=260$ into them, and got $2P=(\frac{6850936}{4225}, \frac{37736919137}{514164})$, but this point does not lie on the curve. In the article the result is $2P=(\frac{8836}{25}, -\frac{950716}{125})$, but I have no idea how to obtain it.
For any elliptic curve $E: y^2 = f(x)$ where $f(x) = x^3 + ax^2 + bx +c$.
If $P = (x_1,y_1)$ is a point on it, then the tangent line of $E$ through $P$ has the form $$y = y_1 + s(x - x_1)$$
where $$2y_1 s = f'(x_1) \iff s = \frac{f'(x_1)}{2y_1} = \frac{3x_1^2 + 2a x_1 + b}{2y_1}$$
If $(x_3,y_3)$ is the other intersection of this tangent line with $E$, then $x_3$ will be a root of the cubic equation
$$\big(y_1 + s(x- x_1)\big)^2 = f(x) = x^3 +ax^2 + bx + c$$
Notice $x_1$ is a double root for same cubic equation. Apply Vieta's formula to the coefficient of $x^2$, we obtain
$$2 x_1 + x_3 = s^2 - a \implies x_3 = s^2 - 2x_1 - a$$
The point $2P = (x_2,y_2)$ is the image of $(x_3,y_3)$ under reflection across the $x$-axis. This means $$\begin{cases} x_2 &= x_3 &= s^2 - 2x_1 - a\\ y_2 &= -y_3 &= -y_1 + s(x_1 - x_2) \end{cases} \quad\text{ where }\quad s = \frac{3x_1^2 + 2ax_1 + b}{2y_1}$$
For $(a,b,c) = (109,224,0)$ and $(x_1,y_1) = (-100,260)$, we get $s = \frac{81}{5}$ and
$$ \begin{cases} x_2 &= \left(\frac{81}{5}\right)^2 - 2(-100) - 109 = \frac{8836}{25}\\ y_2 &= -260 + \frac{81}{5}\left(-100 - \frac{8836}{25}\right) = -\frac{950716}{125} \end{cases}$$
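The arithmetic is easy to check numerically; here is a short R sketch of the doubling step (floating point rather than exact rationals, but enough to see that $2P$ lands back on the curve):

a <- 109; b <- 224                 # curve y^2 = x^3 + a x^2 + b x
x1 <- -100; y1 <- 260              # the point P

s  <- (3 * x1^2 + 2 * a * x1 + b) / (2 * y1)   # tangent slope: 81/5
x2 <- s^2 - 2 * x1 - a                         # 8836/25   = 353.44
y2 <- -y1 + s * (x1 - x2)                      # -950716/125 = -7605.728

c(x2, y2)
y2^2 - (x2^3 + a * x2^2 + b * x2)  # ~0 up to floating-point error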
|
This is a delayed answer, but it seems good to clarify what the OP is asking. This kind of statements: "since $f$ is concentrated in such a ball then its Fourier transform is essentially concentrated in such a ball" is quite common in certain fields, but it doesn't mean that the function and its Fourier transform are compactly supported, because this is wrong, as noticed already. People usually omit the details of what is meant, however there are several ways of formalizing this heuristic.
Fix a smooth function $\zeta$ supported in the unit cube $Q$ centered at the origin in $\mathbb{R}^n$. Then, by standard methods, we see that $|\hat{\zeta}(\xi)|\le C_N\frac{1}{(1+|\xi|^2)^{N/2}}$, where $C_N$ is a constant depending on $N$ and $\zeta$; but as $\zeta$ is fixed, we forget about it. Whatever the reason is, you want to use a cut-off function for some parallelepiped $P$, so you take the affine transformation $A$ transforming $Q$ into $P$; hence $\zeta_A(x)=\zeta(A^{-1}x)$ works as a cut-off, its Fourier transform is $\widehat{\zeta_A}(\xi)=|\det A|\hat{\zeta}(A^t\xi)$, and $|\widehat{\zeta_A}(\xi)|\le C_N|\det A|\frac{1}{(1+|A^t\xi|^2)^{N/2}}$, hence we see that $|\widehat{\zeta_A}(\xi)|$ decays strongly outside the "dual parallelepiped" $A^{-t}Q$, or is "concentrated" in $A^{-t}Q$; what matters in general is not the position of $A^{-t}Q$ but its dimensions and orientation in space. This had been basically noted by Willie Wong.
People usually replace $|\widehat{\zeta_A}|$ by $\chi_{A^{-t}Q}$ when they try to get upper bounds, because $|\widehat{\zeta_A}(\xi)|\le C_N\sum_{\nu\in A^{-t}\mathbb{Z}^n}\frac{1}{(1+|\nu|^2)^{N/2}}\chi_{A^{-t}Q}(\xi-\nu)$. As in every field, there is a toolkit you acquire after some time, so it's hard to provide a single reference of the many ways this heuristic is applied.
By the way, to say that if $f$ is supported in a cube then its Fourier transform is concentrated in the dual cube is not quite precise and depends on the context. What is true in general, is that if $f$ is supported in a cube, then $|\hat{f}|$ is "essentially" constant in translations of the dual cube.
|
What is the process used to distinguish the change in Gross Domestic Product (GDP) due to an increased output of goods and services from the change due to increased prices? Why do we make this distinction?
Real GDP (RGDP) is a measure of the value of goods and services produced in an economy over a period of time for a
fixed set of prices. Nominal GDP (NGDP) is a measure of the value of goods and services produced in an economy over a period of time using the prices of that period. The way changes in RGDP are calculated is to first measure changes in NGDP, separately measure price changes, and then subtract the latter. For NGDP in year $t$ and reference price year for RGDP of $k$ it works like this:
$$NGDP_t = RGDP_{t,k} \cdot \frac{P_t}{P_k}$$ Rearrange: $$\Rightarrow RGDP_{t,k} = NGDP_t \cdot \frac{P_k}{P_t}$$ Take logs: $$\Rightarrow \ln(RGDP_{t,k}) = \ln(NGDP_t) +\ln(P_k) - \ln(P_t)$$ Now we can calculate a change ($\Delta X_t = X_t - X_{t-1}$): $$ \Delta \ln(RGDP_{t,k}) = \Delta \ln(NGDP_t) + \Delta \ln(P_k) - \Delta \ln(P_t)$$ But by definition $\Delta \ln(P_k)=0$ because the reference year prices don't change so: $$ \Delta \ln(RGDP_{t,k}) = \Delta \ln(NGDP_t) - \Delta \ln(P_t)$$ So as claimed above, the percent change in real GDP ($\ln(RGDP_{t,k})$) equals the percent change in nominal GDP ($\Delta \ln(NGDP_t)$) less the percent change in prices ($\Delta \ln(P_t)$).
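A toy numerical sketch of that last identity (all numbers made up):

ngdp <- c(1000, 1071)       # nominal GDP in two consecutive years
p    <- c(100, 102)         # price level (index)

d_ngdp <- diff(log(ngdp))   # nominal growth: ~6.9%
d_p    <- diff(log(p))      # inflation: ~2.0%
d_ngdp - d_p                # real growth: ~4.9%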
|
Defining parameters
Level: \( N \) = \( 48 = 2^{4} \cdot 3 \)
Weight: \( k \) = \( 2 \)
Nonzero newspaces: \( 4 \)
Newforms: \( 4 \)
Sturm bound: \(256\)
Trace bound: \(1\)
Dimensions
The following table gives the dimensions of various subspaces of \(M_{2}(\Gamma_1(48))\).
                   Total   New   Old
Modular forms        92     31    61
Cusp forms           37     23    14
Eisenstein series    55      8    47
Decomposition of \(S_{2}^{\mathrm{new}}(\Gamma_1(48))\)
We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
|
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
> for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
|
Tomorrow, for the final lecture of the
Mathematical Statistics course, I will try to illustrate, using Monte Carlo simulations, the difference between classical statistics and the Bayesian approach.
The (simple) way I see it is the following,
for frequentists, a probability is a measure of the frequency of repeated events, so the interpretation is that parameters are fixed (but unknown) and data are random; for Bayesians, a probability is a measure of the degree of certainty about values, so the interpretation is that parameters are random and data are fixed
Or to quote Frequentism and Bayesianism: A Python-driven Primer, a Bayesian statistician would say “given our observed data, there is a 95% probability that the true value of \theta falls within the credible region” while a Frequentist statistician would say “there is a 95% probability that when I compute a confidence interval from data of this sort, the true value of \theta will fall within it”.
To get more intuition about those quotes, consider a simple problem, with Bernoulli trials, with insurance claims. We want to derive some confidence interval for the probability to claim a loss. There were n = 1047 policies, and 159 claims.
Consider the standard (frequentist) confidence interval. What does it mean that \overline{x}\pm\sqrt{\frac{\overline{x}(1-\overline{x})}{n}} is the (asymptotic) 95% confidence interval? The way I see it is very simple. Let us generate some samples, of size n, with the same probability as the empirical one, i.e. \widehat{\theta} (which is the meaning of “from data of this sort”). For each sample, compute the confidence interval with the relationship above. It is a 95% confidence interval because in 95% of the scenarios, the empirical value lies in the confidence interval. From a computational point of view, it is the following idea,
> xbar <- 159
> n <- 1047
> ns <- 100
> M=matrix(rbinom(n*ns,size=1,prob=xbar/n),nrow=n)
I generate 100 samples of size n. For each sample, I compute the mean and the confidence interval, from the previous relationship
> fIC=function(x) mean(x)+c(-1,1)*1.96*sqrt(mean(x)*(1-mean(x)))/sqrt(n)
> IC=t(apply(M,2,fIC))
> MN=apply(M,2,mean)
Then we plot all those confidence intervals. In red when they do not contain the empirical mean
> k=(xbar/n<IC[,1])|(xbar/n>IC[,2])
> plot(MN,1:ns,xlim=range(IC),axes=FALSE,
+ xlab="",ylab="",pch=19,cex=.7,
+ col=c("blue","red")[1+k])
> axis(1)
> segments(IC[,1],1:ns,IC[,2],1:
+ ns,col=c("blue","red")[1+k])
> abline(v=xbar/n)
Now, what about the Bayesian credible interval? Assume that the prior distribution for the probability to claim a loss is a Beta distribution. We’ve seen in the course that, since the Beta distribution is the conjugate of the Bernoulli one, the posterior distribution will also be Beta. More precisely, with a Beta(1,1) prior, the posterior is the Beta(1+\sum x_i, 1+n-\sum x_i) distribution.
Based on that property, the credible interval is based on quantiles of that (posterior) distribution
> u=seq(.1,.2,length=501)
> v=dbeta(u,1+xbar,1+n-xbar)
> plot(u,v,axes=FALSE,type="l")
> I=u<qbeta(.025,1+xbar,1+n-xbar)
> polygon(c(u[I],rev(u[I])),c(v[I],
+ rep(0,sum(I))),col="red",density=30,border=NA)
> I=u>qbeta(.975,1+xbar,1+n-xbar)
> polygon(c(u[I],rev(u[I])),c(v[I],
+ rep(0,sum(I))),col="red",density=30,border=NA)
> axis(1)
What does it mean, here, to have a 95% credible interval? Well, this time, we do not draw samples using the empirical mean, but using some possible probability drawn from the posterior distribution (given the observations)
> pk <- rbeta(ns,1+xbar,1+n-xbar)
In green, below, we can visualize the histogram of those values
> hist(pk,prob=TRUE,col="light green",
+ border="white",axes=FALSE,
+ main="",xlab="",ylab="",lwd=3,xlim=c(.12,.18))
And here again, let us generate samples, and compute the empirical probabilities,
> M=matrix(rbinom(n*ns,size=1,prob=rep(pk,
+ each=n)),nrow=n)
> MN=apply(M,2,mean)
Here, there is 95% chance that those empirical means lie in the credible interval, defined using quantiles of the posterior distribution. We can actually visualize all those means : in black the mean used to generate the sample, and then, in blue or red, the averages obtained on those simulated samples,
> abline(v=qbeta(c(.025,.975),1+xbar,1+
+ n-xbar),col="red",lty=2)
> points(pk,seq(1,40,length=ns),pch=19,cex=.7)
> k=(MN<qbeta(.025,1+xbar,1+n-xbar))|
+ (MN>qbeta(.975,1+xbar,1+n-xbar))
> points(MN,seq(1,40,length=ns),
+ pch=19,cex=.7,col=c("blue","red")[1+k])
> segments(MN,seq(1,40,length=ns),
+ pk,seq(1,40,length=ns),col="grey")
More details and examples on Bayesian statistics, seen through the eyes of a (probably) non-Bayesian statistician, can be found in my slides from my talk in London last summer,
|
I have a problem with what seems a very simple functional maximization. Let's define:
$$ J[z]=\int \left( u(z)-\frac{\dot z^2}{2} \right) dt $$
Where $u(z)=-z^2+5$. The problem is to find
$$ \arg\max_z J[z]$$
Said in a colloquial way, to maximize the function $u(z)$ without varying too much $z$ with time. The second variation of the functional for an arbitrary variation $h(t)$ is:
$$ \frac{\delta^2}{\delta z^2}J[z]=\int \left(h^2 u''(z)-\dot h^2 \right) dt = -\int \left(2 h^2+ \dot{h}^2 \right) dt \le 0 \quad \forall h $$
So the functional is concave, and any stationary point satisfying the Euler-Lagrange equations would be a global maximizer of $J$. The Euler-Lagrange equations for this functional reduce to the following differential equation:
$$ \ddot z=-u'(z)=2z $$
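For clarity, the Euler-Lagrange step spelled out, with Lagrangian $L(z,\dot z)=u(z)-\frac{\dot z^2}{2}$:
$$ \frac{\partial L}{\partial z}-\frac{d}{dt}\frac{\partial L}{\partial \dot z}=u'(z)+\ddot z=0 \quad\Rightarrow\quad \ddot z=-u'(z)=2z $$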
But this doesn't make any sense, since the maximum of $u(z)$ is at $z=0$, and all the trajectories starting in $z\ne0$ diverge from that point in an exponential way. Where did I go wrong?
Thank you
|
Let $A:=\mathbb{N}\cup\{\infty\}$.
What is a metric on $A$ s.t.
a sequence $(x_n)$ is convergent in a metric space $X\iff$ there exists a continuous map $\phi:A\rightarrow X$ with $\phi(n)=x_n$ for all $n=0,1,2,...$?
What I know: Let $d$ be the metric on $A$ that we are searching for, and $d_X$ the one on $X$. Being convergent to a point $x$ means that for all $\epsilon$ there is an $N$ s.t. $d_X(x,x_n)<\epsilon$ for all $n\geq N$. Being continuous means that for all $a\in A$ and all $\epsilon$ there is a $\delta$ s.t. $d(x,a)<\delta$ implies $d_X(\phi(x),\phi(a))<\epsilon$.
How can I use these to find our metric?
|
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt $J/\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV
(Springer-Verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at s√=7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
|
We can start by analyzing equation #15 from the Derivation of Ideal Gas Law below:
\(P=\frac{N}{V}\frac{m\overline{v^2}}{3}=\frac{N}{V}\frac{2E_{kinetic}}{3}=\frac{N}{V}kT=\frac{NkT}{V}\)
Because we want to derive the equation for the speed of gaseous molecules, the most important factor that comes to mind is the speed v. Therefore, the equation will be
1. \(\frac{N}{V}\frac{m\overline{v^2}}{3}=\frac{N}{V}kT\)
Canceling out \(\frac{N}{V}\) from both sides, we get:
2. \(\frac{m\overline{v^2}}{3}=kT\Rightarrow{\overline{v^2}=\frac{3kT}{m}}\)
Since \(R=N_{a}k\), we can substitute \(k=R/N_{a}\):
3. \(\overline{v^2}=\frac{3(R/N_a)(T)}{m}=\frac{3RT}{N_{a}m}\)
Finally, to calculate the average speed, we take the square root to find \(v_{rms}\) (the root mean square speed).
4. \(v_{rms}=\sqrt{\overline{v^2}}=\sqrt{\frac{3RT}{N_{a}m}}=\sqrt{\frac{3RT}{M}}\)
\(M=N_{a}m\) in the equation above is the mass of one mole of molecules (the molar mass).
* The gas constant R must be expressed in the correct units for the situation in which it is being used. In the ideal gas equation \(PV=nRT\), it is logical to use L·atm/(mol·K).
* In regard to speed, however, energy units must be taken into account. Therefore, it is more appropriate to convert R to J/(mol·K).
A. \(R=8.314\frac{J}{molK}\)
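As a quick numerical check (mine, not part of the derivation above): for N\(_2\) at T = 298 K, with molar mass M ≈ 0.028 kg/mol, one line of R gives
v_rms <- sqrt(3 * 8.314 * 298 / 0.028)   # about 515 m/s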
|
In a paper I am reading, they are studying states of a harmonic oscillator.
They say that because the coherent states $|\alpha\rangle$ are not orthogonal, we can expand an arbitrary density matrix as a linear combination of coherent states:
$$\rho = \int d\alpha ~ w(\alpha) |\alpha\rangle \langle\alpha|$$
I don't understand the non-orthogonality argument? Does it really matter?
Remark: I am not at all used to density matrices; I saw them some time ago and I never really used them in practice.
[Answer to the comments]:
I have read the definition part of the page https://en.wikipedia.org/wiki/Glauber%E2%80%93Sudarshan_P_representation
And I don't get some things.
First, they say that :
$$\rho=\int d\alpha P(\alpha)|\alpha\rangle \langle \alpha |$$
is diagonal in the coherent-state basis. But I don't totally get it. It would mean that:
$$ \rho | \beta \rangle = \lambda | \beta \rangle$$
But, as the coherent states are not orthogonal, it is not obvious to me. Indeed we have:
$$ \rho | \beta \rangle = \int d\alpha P(\alpha) \langle \alpha | \beta \rangle | \alpha \rangle$$
Why would the quantity $P(\alpha) \langle \alpha | \beta \rangle | \alpha \rangle$ be proportional to a Dirac delta?
What's more, you said (in the comments) that they prove this formula, but it seems to me that they start from this point; they don't prove we can write it this way.
[edit]: Extra question: how do we know if we have a good basis in which to write the density matrix? Is the condition just that we have to have vectors that span all our Hilbert space when we write $\sum_i |\psi_i\rangle \langle \psi_i|$, or are there extra (or fewer) conditions?
The context: we have a state with a harmonic oscillator Hamiltonian (so we are in the Fock space of the harmonic oscillator).
Link to the article : https://arxiv.org/abs/1206.3405
|
$\bf 2.17\ $
Theorem$\ $ Suppose $X$ is a locally compact, $\sigma$-compact Hausdorff space. If $\frak M$ and $\mu$ are as described in the statement of Theorem $\it 2.14$, then $\frak M$ and $\mu$ have the following properties:
$(a)\ \ $ If $E\in\frak M$ and $\epsilon>0$, there is a closed set $F$ and an open set $V$ such that $F\subset E\subset V$ and $\mu(V-F)<\epsilon$.
$(b)\ \ $ $\mu$ is a regular Borel measure on $X$. $(c)\ \ $ If $E\in\frak M$, there are sets $A$ and $B$ such that $A$ is an $F_\sigma$, $B$ is a $G_\delta$, $A\subset E\subset B$, and $\mu(B-A)=0.$
Proof
$\quad$ Every closed set $F\subset X$ is $\sigma$-compact, because $F=\bigcup(F\cap K_n)$. Hence $(a)$ implies that every set $E\in\frak M$ is inner regular. This proves $(b)$.
I understood the proofs of points $(a)$ and $(c)$. But I can't understand the proof of $(b)$. It's obvious that every closed set is $\sigma$-compact. But how does Rudin apply $(a)$ here?
We have to show that if $\alpha<\mu(E)$ then there exists a compact set $K\subset E$ such that $\mu(K)>\alpha$.
Can anyone explain it to me please?
|
This is essentially an addition to the list of @4tnemele.
I'd like to add some earlier work to this list, namely Discrete Gauge Theory.
Discrete gauge theory in 2+1 dimensions arises by breaking a gauge symmetry with gauge group $G$ down to some discrete subgroup $H$, via a Higgs mechanism. The force carriers ('photons') become massive, which makes the gauge force ultra-short ranged. However, as the gauge group is not completely broken, we still have the Aharonov-Bohm effect. If $H$ is Abelian, this AB effect is essentially a 'topological force'. It gives rise to a phase change when one particle loops around another particle. This is the idea of fractional statistics of Abelian anyons.
The particle types that we can construct in such a theory (i.e. the ones that are "color neutral") are completely determined by the residual, discrete gauge group $H$. To be more precise: a particle is said to be charged if it carries a representation of the group $H$. The number of different particle types that carry a charge is then equal to the number of irreducible representations of the group $H$. This is similar to ordinary Yang-Mills theory, where charged particles (quarks) carry the fundamental representation of the gauge group (SU(3)). In a discrete gauge theory we can label all possible charged particle types using the representation theory of the discrete gauge group $H$.
But there are also other types of particles that can exist, namely those that carry flux. These flux-carrying particles are also known as magnetic monopoles. In a discrete gauge theory, the flux-carrying particles are labeled by the conjugacy classes of the group $H$. Why conjugacy classes? Well, we can label flux-carrying particles by elements of the group $H$. A gauge transformation is performed through conjugation, where $ |g_i\rangle \rightarrow |hg_ih^{-1}\rangle $ for all particle states $|g_i\rangle$ (suppressing the coordinate label). Since states related by gauge transformations are physically indistinguishable, the only unique flux-carrying particles we have are labeled by conjugacy classes.
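As a toy illustration of that labeling (my addition, not from the original post), here is a short R sketch enumerating the conjugacy classes of the smallest non-Abelian group, $H=S_3$; each class would label one type of flux-carrying particle:
# S3 as permutations of (1,2,3): identity, three transpositions, two 3-cycles
elems <- list(c(1,2,3), c(2,1,3), c(1,3,2), c(3,2,1), c(2,3,1), c(3,1,2))
comp  <- function(p, q) p[q]          # composition: (p o q)(i) = p(q(i))
inv   <- function(p) order(p)         # inverse of a permutation
key   <- function(p) paste(p, collapse = "")
# conjugacy class of g: the set { h g h^{-1} : h in S3 }, stored as sorted keys
conj_class <- function(g)
  sort(unique(sapply(elems, function(h) key(comp(h, comp(g, inv(h)))))))
classes <- unique(lapply(elems, conj_class))
length(classes)  # 3 classes, hence 3 pure-flux particle types for H = S3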
Is that all then? Nope. We can also have particles which carry both charge and flux -- these are known as dyons. They are labeled by both an irrep and a conjugacy class of the group $H$. But, for reasons which I won't go into, we cannot take all possible combinations of possible charges and fluxes.
(It has to do with the distinguishability of the particle types. Essentially, a dyon is labeled by $|\alpha, \Pi\rangle$, where $\alpha$ is a conjugacy class and $\Pi$ is a representation of the associated normalizer $N(\alpha)$ of the conjugacy class $\alpha$.)
The downside of this approach is the rather unequal setting of flux carrying particles (which are labeled by conjugacy classes), charged particles (labeled by representations) and dyons (flux+compatible charge). A unifying picture is provided by making use of the (quasitriangular) Hopf algebra $D(H)$ also known as a quantum double of the group $H$.
In this language, all particles are (irreducible) representations of the Hopf algebra $D(H)$. A Hopf algebra is endowed with certain structures which have very physical counterparts. For instance, the existence of a tensor product allows for the existence of multiple-particle configurations (each particle labeled by its own representation of the Hopf algebra). The co-multiplication then defines how the algebra acts on this tensored space. The existence of an antipode (which is a certain mapping from the algebra to itself) ensures the existence of an anti-particle. The existence of a unit labels the vacuum (= trivial particle).
We can also go beyond the structure of a Hopf algebra and include the notion of an R-matrix. In fact, the quasitriangular Hopf algebra (i.e. the quantum double) does precisely this: it adds the R-matrix mapping. This R-matrix describes what happens when one particle loops around another particle (braiding). For non-Abelian groups $H$ this leads to non-Abelian statistics. These quasitriangular Hopf algebras are also known as quantum groups.
Nowadays the language of discrete gauge theory has been replaced by more general structures, referred to as topological field theories, anyon models or even modular tensor categories. The subject is huge, very rich, very physical and a lot of fun (if you're a bit nerdy ;)).
Sources:
http://arxiv.org/abs/hep-th/9511201 (discrete gauge theory)
http://www.theory.caltech.edu/people/preskill/ph229/ (lecture notes: check out chapter 9. Quite accessible!)
http://arxiv.org/abs/quant-ph/9707021 (a simple lattice model with anyons. There are definitely more accessible review articles of this model out there though.)
http://arxiv.org/abs/0707.1889 (review article, which includes potential physical realizations)
This post imported from StackExchange Physics at 2015-11-01 19:23 (UTC), posted by SE-user Olaf
|
There's an accepted answer; I don't quite like it, so I'll take a whack.
Suppose that at the start, you don't quite know the empty mass of the vehicle, the quantity of fuel in the fuel tanks, the specific impulse, or the mass flow rate. (The empty mass of the vehicle had better be a very good estimate. Otherwise we're toast.) The spacecraft has a radar altimeter that measures range and range rate. These measurements are, of course, lies. They are noisy, and they might be biased.
This means we need a Kalman filter that can, over time, improve our knowledge of vehicle mass, fuel mass, mass flow rate, specific impulse, height, and velocity. This Kalman filter is the core of the spacecraft's navigation system. It only updates vehicle mass, fuel mass, mass flow rate, and specific impulse when the vehicle is firing its thrusters. It updates height and velocity all of the time.
We also need a guidance system. The optimal control in this case is a bang-bang control. The vehicle falls ballistically for a bit (engines off), and then ignites the engines. The engines then fire continuously until the vehicle touches down at zero relative velocity. Off, on, off. Bang-bang.
How to tell where that magical point in time is where the thrusters change from off to on? Simple: We need a propagator. Propagate forward in time assuming the thrusters are turned on now, and remain on until the vehicle hits the ground or comes to a stop with respect to the ground. The differential equations are easy to write:
$$\begin{aligned}
\frac{dm}{dt} &= \dot m \\
F &= -\dot m u \\
\frac{dv}{dt} &= \frac{F}{m} - \frac{\mu}{(R+h)^2} \\
\frac{dh}{dt} &= v(t)
\end{aligned}$$
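A minimal propagator sketch in R (my illustration; the lunar constants are standard values, every other number is an assumption made up for the example), using the deSolve package:
# State: m = total mass (kg), v = vertical velocity (m/s, up positive),
# h = height above the surface (m). Engines assumed on for the whole run.
library(deSolve)
descent <- function(t, y, p) {
  with(as.list(c(y, p)), {
    burning <- m > dry_mass                    # stop burning when fuel is gone
    thrust  <- if (burning) mdot * u else 0    # F = -(dm/dt) u, with dm/dt = -mdot
    dm <- if (burning) -mdot else 0            # fuel consumption
    dv <- thrust / m - mu / (R + h)^2          # thrust acceleration minus gravity
    dh <- v
    list(c(dm, dv, dh))
  })
}
p  <- list(mdot = 15, u = 3000,                # burn rate (kg/s), exhaust velocity (m/s): assumed
           mu = 4.905e12, R = 1.737e6,         # lunar GM (m^3/s^2) and radius (m)
           dry_mass = 2000)                    # assumed
y0  <- c(m = 10000, v = -100, h = 5000)        # assumed state at ignition
out <- ode(y = y0, times = seq(0, 120, by = 1), func = descent, parms = p)
tail(out)  # terminal h and v tell us whether igniting "now" stops us in time
A real propagator would terminate the integration at h = 0 or v = 0 (deSolve supports root-finding events); this sketch just inspects the trajectory.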
No realistic control system uses a pure optimal control. Optimal control leaves no room for errors, and you always need room for errors. The sensors lie, the Kalman filter lies, everything lies. We need a deadband. In this case, the deadband is given by the velocity that the vehicle and its contents (including the humans) can tolerate on contact with the ground. If the vehicle is firing its thrusters all the way to the ground, but the velocity at ground contact is less than this limit, that's okay. If the vehicle reaches zero velocity at some point above the ground and then falls ballistically and hits with a velocity less than this limit, that's also okay.
There are two cases where this is not okay. One case occurs when the vehicle has to fire its thrusters all the way down to the ground and the velocity at ground contact is too high. That's crash and burn mode. Not good. The other case occurs when the zero relative velocity point is too far above the ground. Addressing that is simple: Don't start firing yet. Just stay in ballistic mode.
We need to temper the deadband a bit based on the uncertainties in the estimates of dry vehicle mass, fuel mass, mass flow rate, specific impulse, height, and velocity. When we start, we want an overly high estimate of fuel mass and overly low estimates of specific impulse and mass flow rate. This will force the guidance and control system to start firing a bit earlier than it optimally should. (Starting the burn late is not a good option. That results in crash and burn mode.) The filter will shortly arrive at better estimates of the masses, flow rate, and specific impulse. The guidance and control will make the vehicle (temporarily) stop firing. Rinse and repeat, but always be a bit on the conservative side. It's better to fire a bit early and waste some fuel than it is to fire late and enter crash and burn mode.
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Let $K$ be a field of characteristic $p > 0$. Then it is due to Artin and Schreier that the assignment
$$c \in K \mapsto \text{Splitting field } L_c \text{ of } X^p-X+c$$
induces a bijection between the non-trivial elements in $K/\{a^p-a \mid a \in K\}$ and the $K$-isomorphism classes of Galois extensions of degree $p$ over $K$.
In particular, this should imply that if $c, c' \in K$ are such that $L_c$ and $L_{c'}$ are $K$-isomorphic, then there is $k \in K$ such that $k^p-k = c-c'$.
However, what about the following example (which was suggested by user8268 in this question): Let $p > 2$ and $c \in K \setminus \{a^p-a \mid a \in K\}$ and let $\alpha \in L_c$ be a root of $x^p-x+c$. Then the roots of $x^p-x+2c$ are given by $2\alpha + u$, where $u$ ranges through $\mathbb{F}_p \subseteq K$, hence $L_c = L_{2c}$. But $2c - c = c \not\in \{a^p-a \mid a \in K\}$.
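Spelling that computation out: by Frobenius, $(x+y)^p=x^p+y^p$ in characteristic $p$, and $2^p=2$, $u^p=u$ since $2$ and $u$ lie in the prime field, so
$$(2\alpha+u)^p-(2\alpha+u)+2c = 2\alpha^p+u^p-2\alpha-u+2c = 2(\alpha^p-\alpha+c)+(u^p-u) = 0.$$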
How is this compatible with the Artin-Schreier correspondence? I am grateful for any help!
EDIT 1: Note that the Artin-Schreier correspondence is usually proved by constructing an inverse map, as is done, e.g., in this answer.
|