I am reading the paper Slightly Superexponential Parameterized Problems at the moment and have two questions about it.

First question: The paper gives a proof of the following statement.

Theorem 2.1: Assuming ETH, there is no $2^{o(k \log k)}$ time algorithm for $k \times k$-Clique.

They prove this statement by a sophisticated reduction from $3$-coloring. They then state that this construction runs in time polynomial in $k$:

The graph $G$ has $k^2$ vertices and the time required to construct $G$ is polynomial in $k$. […] Therefore, the total running time is $2^{o(k \log k)} \cdot k^{O(1)}$.

Why is this not $2^{o(k \log k)} + k^{O(1)}$ instead? As far as I can see, we only have to construct the graph once, and then we can run the presumed $2^{o(k \log k)}$ algorithm.

Second question: From this theorem, it follows that $k \times k$-Clique cannot have a $k^{o(k)}$ algorithm under the Exponential Time Hypothesis. This follows from the abstract, which states that $k^{O(k)} = 2^{O(k \log k)}$. What is a good way to prove this statement?
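(Regarding the second question, here is a minimal sketch of the identity, my own derivation and not from the paper, using only the rewriting $k = 2^{\log_2 k}$:
$$k^{ck} = \left(2^{\log_2 k}\right)^{ck} = 2^{ck \log_2 k},$$
so a running time of the form $k^{O(k)}$ is exactly one of the form $2^{O(k \log k)}$; the same rewriting with little-$o$ in the exponent gives $k^{o(k)} = 2^{o(k \log k)}$, so a $k^{o(k)}$ algorithm would contradict Theorem 2.1.)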
There are a number of methods of calculation, among which are functions, differentiation and integration. The application of integrals arises in various fields such as mathematics, science and engineering. For the calculation of areas, we mainly use integral formulas. So let us give here a brief introduction to integrals, covering how to find areas under simple curves, areas bounded by a curve and a line, and the area between two curves, along with a solved problem.

Integral Definition

An integral is a function of which a given function is the derivative. Integration is basically used to find the areas of two-dimensional regions and to compute the volumes of three-dimensional objects. Therefore, finding the integral of a function with respect to x means finding the area between the curve and the x-axis. The integral is also called the anti-derivative, as it is the reverse process of differentiation.

Types of Integrals

There are basically two types of integrals, definite and indefinite. A definite integral is an integral which has definite limits, i.e., an upper limit and a lower limit; it is also called a Riemann integral. An indefinite integral is an integral whose upper and lower limits are not defined; it is represented as \(\int f(x)\,dx=F(x)+C\), where C is the constant of integration.

Application of Integrals

There are many applications of integrals, some of which are mentioned below.

In Maths, integrals are used:
- To find the centre of mass (centroid) of an area having curved sides
- To find the area between two curves
- To find the area under a curve
- To find the average value of a curve

In Physics, integrals are used to calculate:
- The centre of gravity
- The mass and moment of inertia of vehicles
- The mass and momentum of satellites
- The mass and momentum of a tower
- The centre of mass
- The velocity and trajectory of a satellite at the time of placing it in orbit
- Thrust

Definite Integral Problem

Let us discuss here how integrals can be used to find the area of a two-dimensional figure.

Example: Find the area enclosed by the circle \(x^2+y^2=r^2\), where r is the radius of the circle.

Solution: Let us draw a circle in the XY plane with radius r. From the graph, OA = OB = r, where A has coordinates (r, 0) on the x-axis and B has coordinates (0, r) on the y-axis.

Area of circle = 4 × Area of region OBAO, so

Area of circle \(= 4\int_{0}^{r} y\,dx\)

Now, from the equation of the circle, \(x^2+y^2=r^2\), we have \(y^2=r^2-x^2\), so \(y=\pm \sqrt{r^2-x^2}\). The region OBAO lies in the first quadrant of the x-y plane, so we take the positive root, \(y=\sqrt{r^2-x^2}\).

Now we can write

Area of circle \(=4\int_{0}^{r}\sqrt{r^2-x^2}\,dx\)

From the standard integration formula, \(\int\sqrt{r^2-x^2}\,dx=\frac{x}{2}\sqrt{r^2-x^2}+\frac{r^2}{2}\sin^{-1}\frac{x}{r}+C\).

Therefore,

Area of circle \(=4\left[\frac{x}{2}\sqrt{r^2-x^2}+\frac{r^2}{2}\sin^{-1}\frac{x}{r}\right]_{0}^{r} = 4\left[\left(\frac{r}{2}\sqrt{r^2-r^2}+\frac{r^2}{2}\sin^{-1}\frac{r}{r}\right)-\left(0+\frac{0^2}{2}\sin^{-1}0\right)\right] = 4\left[0+\frac{r^2}{2}\sin^{-1}(1)-0-0\right] = 4\cdot\frac{r^2}{2}\cdot\frac{\pi}{2} = \pi r^2\)

Hence, \(\pi r^2\) is the answer.

With the above example problem, we hope the concept of integrals is understood. In the same way, we can apply integrals to find the area enclosed by an ellipse, the area of a region bounded by a curve, or any enclosed area bounded by the x-axis and y-axis. The application of integration in real life depends upon the industry in which this calculus is used.
In the field of engineering, for example, engineers use integrals to determine the shape of building constructions or the length of power cable required to connect two substations. In science, integrals are used in many derivations in physics, such as finding the centre of gravity. They are also used in graphical representation, where three-dimensional models are constructed. The application of integrals in the class 12 syllabus covers finding the area enclosed by a circle and similar question patterns. For more related topics on integrals, download BYJU'S – The Learning App.
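As a quick numerical sanity check of the worked circle example above, here is a small Python sketch (my own illustration, assuming numpy and scipy are available; the radius value is arbitrary):

# Numerically evaluate 4 * integral_0^r sqrt(r^2 - x^2) dx and compare with pi r^2.
import numpy as np
from scipy.integrate import quad

r = 3.0  # any radius
area, _ = quad(lambda x: np.sqrt(r**2 - x**2), 0, r)
print(4 * area, np.pi * r**2)  # both print ~28.2743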
Summary: The expansion is around a branch point, hence the result is not unique! The result for $c>0$ and $x$ approaching $0$ along the real axis is at the end. Note: Version 11.3 is used!

Details: The problem stems from the fact that you are expanding about the branch point of the square root for $c>0$. It is easy to see this from $$\sqrt{c+x^2-\sqrt{c^2+x^2}}\simeq\sqrt{c+x^2-\lvert c\rvert-\frac{x^2}{2\lvert c\rvert}+O(x^4)}=\left\{\begin{aligned}\sqrt{0+O(x^2)}\quad\text{for }c>0\\ \sqrt{2c+O(x^2)}\quad\text{for }c<0\end{aligned}\right.$$ where we know that $0$ is the branch point of the square root function. It is safer to expand around $a$ and then take the limit $a\to 0$ to see the behavior. We indeed see that

In[44]:= Series[Series[Sqrt[c+x^2-Sqrt[c^2+x^2]],{x,a,2}],{a,0,1}]

$$\left(\sqrt{c-\sqrt{c^2}}+O\left(a^2\right)\right)+(x-a) \left(\left(\frac{1}{\sqrt{c-\sqrt{c^2}}}-\frac{1}{2 \sqrt{c^2} \sqrt{c-\sqrt{c^2}}}\right) a+O\left(a^2\right)\right)+(x-a)^2 \left(\frac{\left(1-2 \sqrt{c^2}\right) \sqrt{c-\sqrt{c^2}}}{4 \sqrt{c^2} \left(\sqrt{c^2}-c\right)}+O\left(a^2\right)\right)+O\left((x-a)^3\right)$$

One can see that the coefficient of the linear term is indeterminate for $c>0$ if we take $a\to 0$. Therefore, the true result depends on how you approach the branch point. Let us consider the linear term. We will take $a\to 0$ in the complex plane such that $$a\to c^2 e^{i \theta } \left(c-\sqrt{c^2+\kappa}\right)^{3/2}$$ with $$\kappa\to 0$$ for arbitrary phase $\theta$ and real parameter $\kappa$. If we insert this, we see that there remains a $\kappa$-independent term in the series expansion, indicating that we have different linear terms for the different paths we take to approach the branch point:

Refine[Series[a (1/Sqrt[c-Sqrt[c^2]]-1/(2 Sqrt[c^2] Sqrt[c-Sqrt[c^2]])) (-a+x)/.a-> E^(I \[Theta]) c^2 (c-Sqrt[c^2+ \[Kappa]])^(3/2)//FullSimplify,{\[Kappa],0,0}],c<0]

$$-(-2 c-1) c^2 e^{i \theta } \left(x-2 i \sqrt{2} \sqrt{-c} c^3 e^{i \theta }\right)+O\left(\kappa ^1\right)$$

On the other hand, for $c>0$, the linear term dies as $\kappa \to 0$:

Refine[Series[a (1/Sqrt[c-Sqrt[c^2]]-1/(2 Sqrt[c^2] Sqrt[c-Sqrt[c^2]])) (-a+x)/.a-> E^(I \[Theta]) c^2 (c-Sqrt[c^2+ \[Kappa]])^(3/2)//FullSimplify,{\[Kappa],0,0}],c>0]

$$O\left(\kappa ^1\right)$$

The series expansion is still even around 0

The appearance of odd terms may naively prompt one to think that the series expansion is not even. However, the situation is not so. As we saw above, expanding exactly at zero is problematic, so let us expand around $\epsilon$ and consider the $\epsilon>0$ and $\epsilon<0$ cases separately.

Series[Series[Sqrt[c+x^2-Sqrt[c^2+x^2]],{x,\[Epsilon],3},Assumptions->c>0],{\[Epsilon],0,0},Assumptions->{c>0,\[Epsilon]>0}]

$$O\left(\epsilon ^1\right)+(x-\epsilon ) \left(\frac{\sqrt{2 c-1}}{\sqrt{2} \sqrt{c}}+O\left(\epsilon ^1\right)\right)+O\left(\epsilon ^1\right) (x-\epsilon )^2+(x-\epsilon )^3 \left(\frac{1}{8 \sqrt{2} c^{5/2} \sqrt{2 c-1}}+O\left(\epsilon ^1\right)\right)+O\left((x-\epsilon )^4\right)$$

As $\epsilon\to 0$ from the right, we only get odd terms with those coefficients.
If we approach zero from the left:

Series[Series[Sqrt[c+x^2-Sqrt[c^2+x^2]],{x,\[Epsilon],3},Assumptions->c>0],{\[Epsilon],0,0},Assumptions->{c>0,\[Epsilon]<0}]

$$O\left(\epsilon ^1\right)+(x-\epsilon ) \left(-\frac{\sqrt{2 c-1}}{\sqrt{2} \sqrt{c}}+O\left(\epsilon ^1\right)\right)+O\left(\epsilon ^1\right) (x-\epsilon )^2+(x-\epsilon )^3 \left(-\frac{1}{8 \left(\sqrt{2} c^{5/2} \sqrt{2 c-1}\right)}+O\left(\epsilon ^1\right)\right)+O\left((x-\epsilon )^4\right)$$

We see that approaching zero from the left and from the right produces the same series expansion up to an overall negative sign. Since only the odd powers of $x$ are present in the expansion, we get another relative sign between the $x<0$ and $x>0$ cases, so we can conclude $$ \sqrt{c+x^2-\sqrt{c^2+x^2}}= \frac{\sqrt{2 c-1} \left| x\right| }{\sqrt{2} \sqrt{c}}+\frac{\left| x\right| ^3}{8 \sqrt{2} c^{5/2} \sqrt{2 c-1}}+O\left(\left| x\right| ^5\right)$$ That we are working around a branch point is reflected in the fact that our expansion has a kink (a discontinuous derivative) at $x=0$.
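As an independent numerical check of the final expansion (my own Python sketch, not part of the original answer), one can compare $f(x)$ with the two-term series for a sample value $c=2$ and real $x\to 0^+$:

# Compare f(x) = sqrt(c + x^2 - sqrt(c^2 + x^2)) with the claimed series
# sqrt(2c-1)|x|/(sqrt(2) sqrt(c)) + |x|^3/(8 sqrt(2) c^(5/2) sqrt(2c-1)).
import numpy as np

c = 2.0
x = np.array([0.1, 0.01, 0.001])
f = np.sqrt(c + x**2 - np.sqrt(c**2 + x**2))
series = (np.sqrt(2*c - 1) * np.abs(x) / (np.sqrt(2) * np.sqrt(c))
          + np.abs(x)**3 / (8 * np.sqrt(2) * c**2.5 * np.sqrt(2*c - 1)))
print(f - series)  # the differences shrink like |x|^5, as claimed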
$15.0~\mathrm{mL}$ of $1.4~\mathrm{M}\ \ce{HCl}$ was mixed with $1.00~\mathrm{g}$ of limestone (impure $\ce{CaCO3}$) until all the solid had dissolved. The solution was then transferred to a conical flask and made up to $200~\mathrm{mL}$ with water. A $20.0~\mathrm{mL}$ portion was then neutralised by $8.50~\mathrm{mL}$ of a $0.1~\mathrm{M}\ \ce{NaOH}$ solution. Calculate:

1. the amount of substance of excess $\ce{HCl}$ in the $20.0~\mathrm{mL}$ portion;
2. the amount of substance of excess $\ce{HCl}$ in the $200~\mathrm{mL}$ portion;
3. the amount of substance of $\ce{HCl}$ which reacted with $\ce{CaCO3}$.

My attempt: When the limestone is dissolved, $$\ce{2 HCl (aq) + CaCO3 (s) -> CaCl2 (aq) + CO2 (g) + H2O (l)}$$ $$n(\ce{HCl}) = c \times V = (15 \times 10^{-3}~\mathrm{L}) \times (1.4~\mathrm{M}) = 0.021~\mathrm{mol}$$ $$n(\ce{CaCO3}) = x~\mathrm{mol}$$ When the solution is diluted to $200~\mathrm{mL}$, the molarity of $\ce{HCl}$ can be found by \begin{align} n_i &= n_f\\ c_i V_i &= c_f V_f\\ c_f &= \frac{c_i V_i}{V_f}\\ \therefore c_f (\ce{HCl}) &= \frac{(15\times 10^{-3}~\mathrm{L})(1.4~\mathrm{M})}{(200\times 10^{-3}~\mathrm{L})} = 0.105~\mathrm{M} \end{align} I'm having doubts about my working up to this stage. Am I correct so far?
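For what it's worth, a tiny Python check of the arithmetic in the attempt above (my own sketch; it only reproduces the numbers already computed and does not address whether the dilution step is conceptually right):

# Reproduce the arithmetic from the attempt above.
V_HCl, c_HCl = 15e-3, 1.4   # L, mol/L
n_HCl = c_HCl * V_HCl       # total HCl initially added: 0.021 mol
c_f = n_HCl / 200e-3        # 0.105 mol/L, *if* no HCl had reacted
print(n_HCl, c_f)           # 0.021 0.105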
Overview

We give a short introduction to neural networks and the backpropagation algorithm for training neural networks. Our overview is brief because we assume familiarity with partial derivatives, the chain rule, and matrix multiplication. We also hope this post will be a quick reference for those already familiar with the notation used by Andrew Ng in his course on “Neural Networks and Deep Learning”, the first in the deeplearning.ai series on Coursera. That course provides but doesn’t derive the vectorized form of the backpropagation equations, so we hope to fill in that small gap while using the same notation.

Introduction: neural networks

A single neuron acting on a single training example computes a nonlinear activation \begin{eqnarray} \nonumber a^{[l]} = g(z^{[l]}) \end{eqnarray} of a linear function acting on a (multidimensional) input $a^{[l-1]}$: \begin{eqnarray} \nonumber z^{[l]} = w^{[l]T} a^{[l-1]} + b^{[l]} \end{eqnarray} These building blocks, i.e. the “nodes” or “neurons” of the neural network, are arranged in layers, with the layer denoted by a superscript in square brackets, e.g. $[l]$ for the $l$th layer. $n_l$ denotes the number of neurons in layer $l$.

Forward propagation

Forward propagation is the computation of the multiple linear and nonlinear transformations of the neural network on the input data. We can rewrite the above equations in vectorized form, to handle multiple training examples and multiple neurons per layer, as \begin{eqnarray} \tag{1} \label{1} A^{[l]} = g(Z^{[l]}) \end{eqnarray} with a linear function acting on a (multidimensional) input, \begin{eqnarray} \tag{2} \label{2} Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]} \end{eqnarray} The outputs or activations, $A^{[l-1]}$, of the previous layer serve as inputs to the linear functions $Z^{[l]}$. If $n_l$ denotes the number of neurons in layer $l$, and $m$ denotes the number of training examples in one (mini)batch pass through the neural network, then the dimensions of these matrices are:

Variable      Dimensions
$A^{[l]}$     $(n_l, m)$
$Z^{[l]}$     $(n_l, m)$
$W^{[l]}$     $(n_l, n_{l-1})$
$b^{[l]}$     $(n_l, 1)$

For example, consider a neural network consisting of a single hidden layer with 3 neurons in layer 1 and two inputs. The matrix $W^{[1]}$ has dimensions (3, 2) because there are 3 neurons in layer 1 and 2 inputs from the previous layer (in this example, the inputs are the raw data, $\vec{x} = (x_1, x_2)$). Each row of $W^{[1]}$ corresponds to a vector of weights for a neuron in layer 1.

The final output of the neural network is a prediction in the last layer $L$, and the closeness of the prediction $A^{[L](i)}$ to the true label $y^{(i)}$ for training example $i$ is quantified by a loss function $\mathcal{L}(y^{(i)}, A^{[L](i)})$, where superscript $(i)$ denotes the $i$th training example. For classification, the typical choice for $\mathcal{L}$ is the cross-entropy loss (log loss). The cost $J$ is the average loss over all $m$ training examples in the dataset. \begin{eqnarray} \tag{3} \label{3} J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(y^{(i)}, A^{[L](i)}) \end{eqnarray}

Minimizing the cost with gradient descent

The task of training a neural network is to find the set of parameters $W$ and $b$ (with different $W$ and $b$ for different nodes in the network) that will give us the best predictions, i.e. minimize the cost (\ref{3}). Gradient descent is the workhorse that we employ for this optimization problem.
We randomly initialize the parameters $W$ and $b$ for each node, then iteratively update the parameters by moving them in the direction that is opposite to the gradient of the cost. \begin{eqnarray} \nonumber W_\text{new} &=& W_\text{previous} - \alpha \frac{\partial J}{\partial W} \\ b_\text{new} &=& b_\text{previous} - \alpha \frac{\partial J}{\partial b} \end{eqnarray} $\alpha$ is the learning rate, a hyperparameter that needs to be tuned during the training process. The gradient of the cost is calculated by the backpropagation algorithm.

Backpropagation equations

These are the vectorized backpropagation (BP) equations which we wish to derive: \begin{eqnarray} \nonumber dW^{[l]} &\equiv& \frac{\partial J}{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]}A^{[l-1]T} \tag{BP1} \label{BP1} \\ db^{[l]} &\equiv& \frac{\partial J}{\partial b^{[l]}} = \frac{1}{m} \sum_{i=1}^m dZ^{[l](i)} \tag{BP2} \label{BP2} \\ dA^{[l-1]} &\equiv& \frac{\partial \mathcal{L}}{\partial A^{[l-1]}} = W^{[l]T}dZ^{[l]} \tag{BP3} \label{BP3} \\ dZ^{[l]} &\equiv& \frac{\partial \mathcal{L}}{\partial Z^{[l]}} = dA^{[l]} * g'(Z^{[l]}) \tag{BP4} \label{BP4} \end{eqnarray} The $*$ in the last line denotes element-wise multiplication. $W$ and $b$ are the parameters we want to learn (update), but the BP equations include two additional expressions for the partial derivative of the loss with respect to the linear and nonlinear activations per training example, since these are intermediate terms that appear in the calculation of $dW$ and $db$.

Chain rule

We’ll need the chain rule for total derivatives, which describes how the change in a function $f$ with respect to a variable $x$ can be calculated as a sum over the contributions from intermediate functions $u_i$ that depend on $x$: \begin{eqnarray} \nonumber \frac{\partial f(u_1, u_2, \ldots, u_k)}{\partial x} = \sum_{i}^k \frac{\partial f}{\partial u_i} \frac{\partial u_i}{\partial x} \end{eqnarray} where the $u_i$ are functions of $x$. This expression reduces to the single-variable chain rule when only one $u_i$ is a function of $x$.

The gradients for every node can be calculated in a single backward pass through the network, starting with the last layer and working backwards, towards the input layer. As we work backwards, we cache the values of $dZ$ and $dA$ from previous calculations, which are then used to compute the derivative for variables that are further upstream in the computation graph. The dependency of the derivatives of upstream variables on downstream variables, i.e. cached derivatives, is manifested in the $\frac{\partial f}{\partial u_i}$ term in the chain rule. (Backpropagation is a dynamic programming algorithm!)

The chain rule applied to backpropagation

In this section, we apply the chain rule to derive the vectorized form of equations BP(1-4). Without loss of generality, we’ll index an element of the matrix or vector on the left-hand side of BP(1-4); the notation for applying the chain rule is therefore straightforward because the derivatives are just with respect to scalars.
BP1

The partial derivative of the cost with respect to the $s$th component (corresponding to the $s$th input) of $\vec{w}$ in the $r$th node in layer $l$ is: \begin{eqnarray} dW^{[l]}_{rs} &\equiv& \frac{\partial J}{\partial W^{[l]}_{rs}} \\ &=& \frac{1}{m} \sum_{i}^m \frac{\partial \mathcal{L}}{\partial W^{[l]}_{rs}} \\ &=& \frac{1}{m} \sum_{i}^m \frac{\partial \mathcal{L}}{\partial Z^{[l]}_{ri}} \frac{\partial Z^{[l]}_{ri}}{\partial W^{[l]}_{rs}} \tag{4} \label{4} \end{eqnarray} The last line is due to the chain rule. The first term in (\ref{4}) is $dZ^{[l]}_{ri}$ by definition (\ref{BP4}). We can simplify the second term of (\ref{4}) using the definition of the linear function (\ref{2}), which we rewrite below explicitly for the $i$th training example in the $r$th node in the $l$th layer, in order to more easily keep track of indices when we take derivatives of the linear function: \begin{eqnarray} \tag{5} \label{5} Z^{[l]}_{ri} = \sum_j^{n_{l-1}} W^{[l]}_{rj} A^{[l-1]}_{ji} + b^{[l]}_r \end{eqnarray} where $n_{l-1}$ denotes the number of nodes in layer $l-1$. Therefore, \begin{eqnarray} dW^{[l]}_{rs} &=& \frac{1}{m} \sum_{i}^m dZ^{[l]}_{ri} A^{[l-1]}_{si} \\ &=& \frac{1}{m} \sum_{i}^m dZ^{[l]}_{ri} A^{[l-1]T}_{is} \\ &=& \frac{1}{m} \left( dZ^{[l]} A^{[l-1]T} \right)_{rs} \end{eqnarray}

BP2

The partial derivative of the cost with respect to $b$ in the $r$th node in layer $l$ is: \begin{eqnarray} db^{[l]}_r &\equiv& \frac{\partial J}{\partial b^{[l]}_r} \\ &=& \frac{1}{m} \sum_{i}^m \frac{\partial \mathcal{L}}{\partial b^{[l]}_r} \\ &=& \frac{1}{m} \sum_{i}^m \frac{\partial \mathcal{L}}{\partial Z^{[l]}_{ri}} \frac{\partial Z^{[l]}_{ri}}{\partial b^{[l]}_r} \tag{6} \label{6} \\ &=& \frac{1}{m} \sum_{i}^m dZ^{[l]}_{ri} \end{eqnarray} (\ref{6}) is due to the chain rule. The first term in (\ref{6}) is $dZ^{[l]}_{ri}$ by definition (\ref{BP4}). The second term of (\ref{6}) simplifies to $\partial Z^{[l]}_{ri} / \partial b^{[l]}_r = 1$ from (\ref{5}).

BP3

The partial derivative of the loss for the $i$th example with respect to the nonlinear activation in the $r$th node in layer $l-1$ is: \begin{eqnarray} dA^{[l-1]}_{ri} &\equiv& \frac{\partial \mathcal{L}}{\partial A^{[l-1]}_{ri}} \\ &=& \sum_{k=1}^{n_l} \frac{\partial \mathcal{L}}{\partial Z^{[l]}_{ki}} \frac{\partial Z^{[l]}_{ki}}{\partial A^{[l-1]}_{ri}} \tag{7} \label{7} \\ &=& \sum_{k=1}^{n_l} dZ^{[l]}_{ki} W^{[l]}_{kr} \tag{8} \label{8} \\ &=& \sum_{k=1}^{n_l} W^{[l]T}_{rk} dZ^{[l]}_{ki} \\ &=& \left( W^{[l]T} dZ^{[l]} \right)_{ri} \end{eqnarray} The application of the chain rule (\ref{7}) includes a sum over the nodes in layer $l$ whose linear functions take $A^{[l-1]}_{ri}$ as an input, assuming the nodes between layers $l-1$ and $l$ are fully connected. The first term in (\ref{8}) is by definition $dZ$ (\ref{BP4}); from (\ref{5}), the second term in (\ref{8}) evaluates to $\partial Z^{[l]}_{ki} / \partial A^{[l-1]}_{ri} = W^{[l]}_{kr}$.

BP4

The partial derivative of the loss for the $i$th example with respect to the linear activation in the $r$th node in layer $l$ is: \begin{eqnarray} dZ^{[l]}_{ri} &\equiv& \frac{\partial \mathcal{L}}{\partial Z^{[l]}_{ri}} \\ &=& \frac{\partial \mathcal{L}}{\partial A^{[l]}_{ri}} \frac{\partial A^{[l]}_{ri}}{\partial Z^{[l]}_{ri}} \\ &=& dA^{[l]}_{ri} * g'(Z^{[l]}_{ri}) \end{eqnarray} The second line is by the application of the chain rule (single variable, since only a single nonlinear activation depends directly on $Z^{[l]}_{ri}$).
$g'(Z)$ is the derivative of the nonlinear activation function with respect to its input, which depends on the nonlinear activation function that is assigned to that particular node, e.g. sigmoid vs. tanh vs. ReLU.

Conclusion

Backpropagation efficiently executes gradient descent for updating the parameters of a neural network by ordering and caching the calculations of the gradient of the cost with respect to the parameters in the nodes. This post is a little heavy on notation since the focus is on deriving the vectorized formulas for backpropagation, but we hope it complements the lectures in Week 3 of Andrew Ng’s “Neural Networks and Deep Learning” course as well as the excellent, but even more notation-heavy, resources on matrix calculus for backpropagation that are linked below.

More resources on vectorized backpropagation

- The matrix calculus you need for deep learning – from explained.ai
- How the backpropagation algorithm works – Chapter 2 of the Neural Networks and Deep Learning free online text
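To complement the resources above, here is a compact, self-contained numpy sketch of one training step, implementing (1)-(2) forward and (BP1)-(BP4) backward for a small network. The sizes, the tanh/sigmoid choices, and the use of $dZ^{[L]} = A^{[L]} - Y$ (which holds for a sigmoid output with cross-entropy loss) are illustrative assumptions, not code from the course:

# One forward/backward pass for a 2-layer network, following (1)-(2) and (BP1)-(BP4).
import numpy as np

rng = np.random.default_rng(0)
m, n0, n1, n2 = 5, 2, 3, 1                 # batch size, input, hidden, output dims
X = rng.normal(size=(n0, m))               # A^[0], shape (n_0, m)
Y = rng.integers(0, 2, size=(n2, m)).astype(float)
W1, b1 = rng.normal(size=(n1, n0)), np.zeros((n1, 1))
W2, b2 = rng.normal(size=(n2, n1)), np.zeros((n2, 1))
alpha = 0.1                                # learning rate

# Forward pass, equations (1)-(2).
Z1 = W1 @ X + b1
A1 = np.tanh(Z1)
Z2 = W2 @ A1 + b2
A2 = 1.0 / (1.0 + np.exp(-Z2))             # sigmoid output

# Backward pass.
dZ2 = A2 - Y                               # BP4 for sigmoid + cross-entropy
dW2 = dZ2 @ A1.T / m                       # BP1
db2 = dZ2.sum(axis=1, keepdims=True) / m   # BP2
dA1 = W2.T @ dZ2                           # BP3
dZ1 = dA1 * (1.0 - A1**2)                  # BP4: g'(Z) = 1 - tanh(Z)^2 = 1 - A1^2
dW1 = dZ1 @ X.T / m
db1 = dZ1.sum(axis=1, keepdims=True) / m

# Gradient descent update.
W1, b1 = W1 - alpha * dW1, b1 - alpha * db1
W2, b2 = W2 - alpha * dW2, b2 - alpha * db2

Each line maps one-to-one onto the corresponding BP equation, which is exactly the point of the vectorized notation.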
Introduction

In this post, we derive the batch MLE procedure for the GARCH model in a more principled way than in the last GARCH post. The derivation presented here is simple and concise. The general GARCH model is defined as follows: \begin{align} V_1 &\sim \mu \\ X_n &\sim \mathcal{N}(0,V_n) \label{GARCHobs} \\ V_n &= c + \sum_{j=1}^q \alpha_j X^2_{n-j} + \sum_{i = 1}^p \beta_i V_{n-i} \label{GARCHtr} \end{align} where $(X_n)_{n\geq 1}$ is the stochastic process defined by this recursion. Note that $V_n = \sigma_n^2$ in the classical statistics notation, and $\mu$ is the initial distribution. It is customary to use this model with model order $p=1,q=1$, which is called GARCH(1,1). Then the model reduces to \begin{align*} V_1 &\sim \mu \\ X_n &\sim \mathcal{N}(0,V_n) \\ V_n &= c + \alpha X^2_{n-1} + \beta V_{n-1} \end{align*} In this model, we think of $X_n$ as observations. We set $\theta = (c,\alpha,\beta)$. The goal is to estimate the parameter vector $\theta \in \Theta$ where $\Theta \subset \mathbb{R}^3$. We assume $\theta$ is fixed; it is called a `static' parameter.

Notation and setup. We assume all measures are absolutely continuous with respect to some dominating measure (e.g. the Lebesgue measure), denoted generically by $\mbox{d}x$. Then, we note the following notation, \begin{align*} p(x | v) = \mathbb{P}(X \in \mbox{d}x | V = v) \end{align*} for any $X$ and $V$. By this notation, we denote the conditional density. If a probability density $p(x)$ has parameters, we denote it as $p_\theta(x)$ where $\theta$ is a (possibly) vector of parameters. To denote the joint density of a sequence of random variables $(Y_1,\ldots,Y_n)$, we write $p(y_{1:n})$. Furthermore, we fix a filtered probability space $(\Omega,\mathcal{F},\mathbb{P})$ with filtration $(\mathcal{F}^X_n)_{n\geq 1}$ where $\mathcal{F}^X_n = \sigma(X_1,\ldots,X_n)$.

In the sequel, we will fix $V_1 = v_1$, i.e., we assume the initial condition is known. This is certainly not the case in real applications, but we assume it to keep the derivations simple. It is also known that for large sample sizes, the effect of conditioning on a fixed value washes out. This setting frees us from specifying $\mu$ explicitly, which is an important (and problematic) part of the MLE procedure. Otherwise, one can devise an algorithm which assumes a parametrised distribution $\mu$ on $V_1$ with parameter $\nu$, extends the parameter vector as $\theta = (\alpha,\beta,c,\nu)$, and employs MLE.

Problem formulation

The typical MLE problem is to solve the following optimisation problem, \begin{align}\label{MLEformulation} \theta^* = \arg\max_{\theta \in \Theta} \log p_\theta(x_{1:n}) \end{align} where $\theta$ is the parameter, and $p_\theta(x_{1:n})$ is the likelihood of the observations $(X_1,\ldots,X_n)$ for a fixed $\theta$. So, we first have to write $p_\theta(x_{1:n})$ under the assumption that $(X_1,\ldots,X_n)$ is simulated from a GARCH(1,1) model. In a hidden variable model, one typically makes use of the model structure to derive the likelihood. In our case, according to the model, we first write \begin{align}\label{intgNote} p_\theta(x_{1:n}) = \int p_\theta(x_{1:n},v_{1:n}) \mbox{d}v_{1:n} \end{align} and accordingly we decompose the joint density as follows, \begin{align}\label{decompJointdensity} p_\theta(x_{1:n},v_{1:n}) = p(v_1) p(x_1 | v_1) \prod_{k=2}^n p(x_k | v_k) p_\theta(v_k | v_{k-1}, x_{k-1}) \end{align} for a fixed $\theta \in \Theta$ where $\Theta \subset \mathbb{R}^{d_\theta}$.

Batch MLE

Here we derive the algorithm for fixed $n \in \mathbb{N}$.
We can write the marginal likelihood by using \eqref{decompJointdensity} as \begin{align*} p_\theta(x_{1:n}) &= \int p_\theta(x_{1:n},v_{1:n}) \mbox{d}v_{1:n} \\ &= p(x_1 | v_1) \int \prod_{k=2}^n p(x_k | v_k) p_\theta(v_k | v_{k-1},x_{k-1}) \mbox{d}v_{2:n} \\ &= p(x_1 | v_1) \prod_{k=2}^n \int p(x_k | v_k) p_\theta(v_k | v_{k-1},x_{k-1}) \mbox{d}v_{k} \end{align*} Observe that for a fixed $\theta$, \eqref{GARCHtr} implies a degenerate distribution for $V_k$. This means, given $V_{k-1} = v_{k-1}$ and $X_{k-1} = x_{k-1}$, the distribution of $V_k$ is degenerate, i.e., \begin{align*} p_{\theta}(v_{k} | v_{k-1},x_{k-1}) = \delta(v_k - (c + \alpha x_{k-1}^2 + \beta v_{k-1})) \end{align*} Note the following property of the Dirac density, \begin{align*} f(x) = \int \delta(y - x) f(y) \mbox{d}y \end{align*} for a bounded measurable function $f$. For a fixed $\theta$, the marginal likelihood then becomes \begin{align*} p_\theta(x_{1:n}) = p(x_1 | v_1) \prod_{k=2}^n p(x_k | v_k^\theta) \end{align*} Here the sequence $\{v_k^\theta \}_{k=1}^n$ is a particular realisation of the process $V_n$ for a fixed $\theta$ and given $\mathcal{F}^X_{n-1}$. Then, the marginal log-likelihood is \begin{align}\label{LLlastform} \log p_\theta(x_{1:n}) = \log p(x_1 | v_1) + \sum_{k=2}^n \log p(x_k | v_k^\theta) \end{align} To solve \eqref{MLEformulation}, we have to maximise \eqref{LLlastform} with respect to $\theta$. Notice that we do not have to deal with the first term, as we specified $v_1$ beforehand. For optimisation, we have a variety of choices; here we use good old gradient descent. This corresponds to performing the update given in \eqref{BatchUpdate} in Algorithm 1. For details of the derivatives and required computations, see here.
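For concreteness, here is a minimal Python sketch of this batch MLE (my own illustration, assuming numpy and scipy are available). The post uses gradient descent; for brevity, this sketch hands the negative log-likelihood to a derivative-free optimiser instead, which does not change the objective being optimised:

# Batch MLE for GARCH(1,1): v_1 is fixed, and the log-likelihood (up to the
# constant first term) is sum_k log N(x_k; 0, v_k^theta) with
# v_k = c + alpha * x_{k-1}^2 + beta * v_{k-1}.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, x, v1):
    c, alpha, beta = theta
    if c <= 0 or alpha < 0 or beta < 0:
        return np.inf                    # crude way to stay inside Theta
    v, nll = v1, 0.0
    for k in range(1, len(x)):
        v = c + alpha * x[k-1]**2 + beta * v   # the deterministic v_k^theta
        nll += 0.5 * (np.log(2 * np.pi * v) + x[k]**2 / v)
    return nll

# Simulate data from the model, then recover theta by minimising the NLL.
rng = np.random.default_rng(1)
c0, a0, b0, v1, n = 0.2, 0.3, 0.5, 1.0, 2000
x, v = np.empty(n), v1
x[0] = rng.normal(scale=np.sqrt(v))
for k in range(1, n):
    v = c0 + a0 * x[k-1]**2 + b0 * v
    x[k] = rng.normal(scale=np.sqrt(v))

res = minimize(neg_log_lik, x0=[0.1, 0.1, 0.1], args=(x, v1), method="Nelder-Mead")
print(res.x)  # should land near (0.2, 0.3, 0.5), up to sampling error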
While playing around with complete and incomplete zeta functions I found a nice formula which, to my knowledge, has not been discussed on MSE. Here it is: $$\sum_{k=2}^\infty (\zeta(k)-1) = 1\tag{1}$$ and its generalization $$\sum _{k=2}^{\infty }\left( \zeta (k)-\left(1+\frac{1}{2^k}+\frac{1}{3^k}+\ldots+\frac{1}{m^k}\right)\right)=\sum_{k=2}^{\infty} (\zeta(k)-H_m^{(k)}) = \frac{1}{m}\tag{2}$$ Here Riemann's zeta function is defined as $$\zeta(s) = \sum_{n=1}^{\infty} n^{-s}$$ and the incomplete zeta function, which is traditionally called the generalized harmonic number, is defined as $$H_m^{(k)} = \sum_{n=1}^{m} n^{-k}$$ A corollary of (2) is the interesting relation $$H_n=\sum _{k=2}^{\infty } \left(n \;\zeta (k)-(n+1) H_n^{(k)}+H_n^{(k-1)}\right)\tag{3}$$ The question asks for a proof of (1), (2), and (3).
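Not an answer, but a quick numerical check of (1) and (2) (a sketch using mpmath, with m = 4 for (2)):

# Verify (1) and (2) numerically with mpmath.
from mpmath import mp, zeta, nsum, inf

mp.dps = 30
print(nsum(lambda k: zeta(k) - 1, [2, inf]))      # -> 1.0, as in (1)
m = 4
H = lambda k: sum(mp.mpf(n)**(-k) for n in range(1, m + 1))
print(nsum(lambda k: zeta(k) - H(k), [2, inf]))   # -> 0.25 = 1/m, as in (2)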
I came across this question in a textbook, and after a full day of going over it and consulting friends, none of us can figure out why a particular approach to this question doesn't yield the correct answer.

The question: A light elastic string of natural length $0.2\ \mathrm{m}$ has its ends attached to two fixed points $A$ and $B$ which are on the same horizontal level, with $AB = 0.2\ \mathrm{m}$. A particle of mass $5\ \mathrm{kg}$ is attached to the string at the point $P$ where $AP = 0.15\ \mathrm{m}$. The system is released and $P$ hangs in equilibrium below $AB$ with $\angle{APB} = 90^{\circ}$. If $\angle{BAP} = \theta$, show that the ratio of the extensions of $AP$ and $BP$ is $$\frac{4\cos{\theta}-3}{4\sin{\theta}-1}$$

The correct method: Let the extension of $AP$ be $x_1$ and the extension of $BP$ be $x_2$. Since $\angle APB = 90^{\circ}$, the stretched lengths are $AP = 0.2\cos\theta$ and $BP = 0.2\sin\theta$, so
$x_1 = 0.2\cos{\theta} - 0.15$
$x_2 = 0.2\sin{\theta} - 0.05$
$\frac{x_1}{x_2} = \frac{0.2\cos{\theta} - 0.15}{0.2\sin{\theta} - 0.05} = \frac{4\cos{\theta}-3}{4\sin{\theta}-1}$

Our method (seemingly incorrect): With the same symbols for $AP$ and $BP$, and the tensions in $AP$ and $BP$ being $T_1$ and $T_2$ respectively: since the system is in equilibrium, resolving horizontally gives
$T_1\cos{\theta} = T_2\sin{\theta}$
$\therefore \frac{T_1}{T_2} = \tan{\theta}$
Since for an elastic string $T = \frac{\lambda x}{l}$,
$T_1 = \frac{\lambda x_1}{0.15}$
$T_2 = \frac{\lambda x_2}{0.05}$
$\therefore \frac{x_1}{x_2} = 3\frac{T_1}{T_2} = 3\tan{\theta}$

My issue is not that I do not understand the first solution, but that I cannot see why the second method does not yield the correct result; the steps followed seem logical and correct to me.
Learning Objectives To convert a value reported in one unit to a corresponding value in a different unit. The ability to convert from one unit to another is an important skill. For example, a nurse with 50 mg aspirin tablets who must administer 0.2 g of aspirin to a patient needs to know that 0.2 g equals 200 mg, so 4 tablets are needed. Fortunately, there is a simple way to convert from one unit to another. Conversion Factors If you learned the SI units and prefixes described, then you know that 1 cm is 1/100th of a meter. \[ 1\; \rm{cm} = \dfrac{1}{100} \; \rm{m}\] or \[100\; \rm{cm} = 1\; \rm{m}\] Suppose we divide both sides of the equation by 1 m (both the number and the unit): \[\mathrm{\dfrac{100\:cm}{1\:m}=\dfrac{1\:m}{1\:m}}\] As long as we perform the same operation on both sides of the equals sign, the expression remains an equality. Look at the right side of the equation; it now has the same quantity in the numerator (the top) as it has in the denominator (the bottom). Any fraction that has the same quantity in the numerator and the denominator has a value of 1: We know that 100 cm is 1 m, so we have the same quantity on the top and the bottom of our fraction, although it is expressed in different units. A fraction that has equivalent quantities in the numerator and the denominator but expressed in different units is called a conversion factor. Here is a simple example. How many centimeters are there in 3.55 m? Perhaps you can determine the answer in your head. If there are 100 cm in every meter, then 3.55 m equals 355 cm. To solve the problem more formally with a conversion factor, we first write the quantity we are given, 3.55 m. Then we multiply this quantity by a conversion factor, which is the same as multiplying it by 1. We can write 1 as \(\mathrm{\frac{100\:cm}{1\:m}}\) and multiply: \[ 3.55 \; \rm{m} \times \dfrac{100 \; \rm{cm}}{1\; \rm{m}}\] The 3.55 m can be thought of as a fraction with a 1 in the denominator. Because m, the abbreviation for meters, occurs in both the numerator and the denominator of our expression, they cancel out: \[\dfrac{3.55 \; \cancel{\rm{m}}}{ 1} \times \dfrac{100 \; \rm{cm}}{1 \; \cancel{\rm{m}}}\] The final step is to perform the calculation that remains once the units have been canceled: \[ \dfrac{3.55}{1} \times \dfrac{100 \; \rm{cm}}{1} = 355 \; \rm{cm} \label{Ex1}\] In the final answer, we omit the 1 in the denominator. Thus, by a more formal procedure, we find that 3.55 m equals 355 cm. A generalized description of this process is as follows: \[\text{quantity (in old units)} \times \text{conversion factor} = \text{quantity (in new units)} \nonumber\] You may be wondering why we use a seemingly complicated procedure for a straightforward conversion. In later studies, the conversion problems you will encounter will not always be so simple. If you can master the technique of applying conversion factors, you will be able to solve a large variety of problems. In the previous example (Equation \ref{Ex1}), we used the fraction \(\frac{100 \; \rm{cm}}{1 \; \rm{m}}\) as a conversion factor. Does the conversion factor \(\frac{1 \; \rm m}{100 \; \rm{cm}}\) also equal 1? Yes, it does; it has the same quantity in the numerator as in the denominator (except that they are expressed in different units). Why did we not use that conversion factor? If we had used the second conversion factor, the original unit would not have canceled, and the result would have been meaningless. 
Here is what we would have gotten: \[ 3.55 \; \rm{m} \times \dfrac{1\; \rm{m}}{100 \; \rm{cm}} = 0.0355 \dfrac{\rm{m}^2}{\rm{cm}}\] For the answer to be meaningful, we have to construct the conversion factor in a form that causes the original unit to cancel out. Figure \(\PageIndex{1}\) shows a concept map for constructing a proper conversion.

Significant Figures in Conversions

How do conversion factors affect the determination of significant figures? Numbers in conversion factors based on prefix changes, such as kilograms to grams, are not considered in the determination of significant figures in a calculation because the numbers in such conversion factors are exact. Exact numbers are defined or counted numbers, not measured numbers, and can be considered as having an infinite number of significant figures. (In other words, 1 kg is exactly 1,000 g, by the definition of kilo-.) Counted numbers are also exact. If there are 16 students in a classroom, the number 16 is exact. In contrast, conversion factors that come from measurements (such as density, as we will see shortly) or are approximations have a limited number of significant figures and should be considered in determining the significant figures of the final answer.

Example \(\PageIndex{1}\)

1. The average volume of blood in an adult male is 4.7 L. What is this volume in milliliters?
2. A hummingbird can flap its wings once in 18 ms. How many seconds are in 18 ms?

SOLUTION

1. We start with what we are given, 4.7 L. We want to change the unit from liters to milliliters. There are 1,000 mL in 1 L. From this relationship, we can construct two conversion factors: \[ \dfrac{1\; \rm{L}}{1,000\; \rm{mL}} \; \text{ or } \; \dfrac{1,000 \; \rm{mL}}{1\; \rm{L}} \nonumber\] We use the conversion factor that will cancel out the original unit, liters, and introduce the unit we are converting to, which is milliliters. The conversion factor that does this is the one on the right. \[ 4.7 \cancel{\rm{L}} \times \dfrac{1,000 \; \rm{mL}}{1\; \cancel{\rm{L}}} = 4,700\; \rm{mL} \nonumber\] Because the numbers in the conversion factor are exact, we do not consider them when determining the number of significant figures in the final answer. Thus, we report two significant figures in the final answer.

2. We can construct two conversion factors from the relationships between milliseconds and seconds: \[ \dfrac{1,000 \; \rm{ms}}{1\; \rm{s}} \; \text{ or } \; \dfrac{1\; \rm{s}}{1,000 \; \rm{ms}} \nonumber\] To convert 18 ms to seconds, we choose the conversion factor that will cancel out milliseconds and introduce seconds. The conversion factor on the right is the appropriate one. We set up the conversion as follows: \[ 18 \; \cancel{\rm{ms}} \times \dfrac{1\; \rm{s}}{1,000 \; \cancel{\rm{ms}}} = 0.018\; \rm{s} \nonumber\] The conversion factor’s numerical values do not affect our determination of the number of significant figures in the final answer.

Exercise \(\PageIndex{1}\)

Perform each conversion.

1. 101,000 ns to seconds
2. 32.08 kg to grams

Conversion Factors From Different Units

Conversion factors can also be constructed for converting between different kinds of units. For example, density can be used to convert between the mass and the volume of a substance. Consider mercury, which is a liquid at room temperature and has a density of 13.6 g/mL. The density tells us that 13.6 g of mercury have a volume of 1 mL.
We can write that relationship as follows: 13.6 g mercury = 1 mL mercury. This relationship can be used to construct two conversion factors: \[\mathrm{\dfrac{13.6\:g}{1\:mL}\:and\:\dfrac{1\:mL}{13.6\:g}}\] Which one do we use? It depends, as usual, on the units we need to cancel and introduce. For example, suppose we want to know the mass of 16 mL of mercury. We would use the conversion factor that has milliliters on the bottom (so that the milliliter unit cancels) and grams on top so that our final answer has a unit of mass: \[ \begin{align*} \mathrm{16\:\cancel{mL}\times\dfrac{13.6\:g}{1\:\cancel{mL}}} &= \mathrm{217.6\:g} \\[4pt] &\approx \mathrm{220\:g} \end{align*} \] In the last step, we limit our final answer to two significant figures because the volume quantity has only two significant figures; the 1 in the volume unit is considered an exact number, so it does not affect the number of significant figures. The other conversion factor would be useful if we were given a mass and asked to find volume, as the following example illustrates. Density can be used as a conversion factor between mass and volume.

Example \(\PageIndex{2}\): Mercury Thermometer

A mercury thermometer for measuring a patient’s temperature contains 0.750 g of mercury. What is the volume of this mass of mercury?

SOLUTION

Because we are starting with grams, we want to use the conversion factor that has grams in the denominator. The gram unit will cancel algebraically, and milliliters will be introduced in the numerator. \[ \begin{align*} 0.750 \; \cancel{\rm{g}} \times \dfrac{1\; \rm{mL}}{13.6 \; \cancel{\rm{g}}} &= 0.055147 \ldots \; \rm{mL} \\[4pt] &\approx 0.0551\; \rm{mL} \end{align*} \] We have limited the final answer to three significant figures.

Exercise \(\PageIndex{2}\)

What is the volume of 100.0 g of air if its density is 1.3 g/L?

Looking Closer: Density and the Body

The densities of many components and products of the body have a bearing on our health.

Bones. Bone density is important because bone tissue of lower-than-normal density is mechanically weaker and susceptible to breaking. The density of bone is, in part, related to the amount of calcium in one’s diet; people who have a diet deficient in calcium, which is an important component of bones, tend to have weaker bones. Dietary supplements or adding dairy products to the diet seems to help strengthen bones. As a group, women experience a decrease in bone density as they age. It has been estimated that fully half of women over age 50 suffer from excessive bone loss, a condition known as osteoporosis. Exact bone densities vary within the body, but for a healthy 30-year-old female, it is about 0.95–1.05 g/cm\(^3\). Osteoporosis is diagnosed if the bone density is below 0.6–0.7 g/cm\(^3\).

Urine. The density of urine can be affected by a variety of medical conditions. Sufferers of diabetes insipidus produce an abnormally large volume of urine with a relatively low density. In another form of diabetes, called diabetes mellitus, there is excess glucose dissolved in the urine, so that the density of urine is abnormally high. The density of urine may also be abnormally high because of excess protein in the urine, which can be caused by congestive heart failure or certain renal (kidney) problems. Thus, a urine density test can provide clues to various kinds of health problems.
The density of urine is commonly expressed as a specific gravity, which is a unitless quantity defined as \[ \dfrac{\text{density of some material}}{\text{density of water}} \nonumber\] Normal values for the specific gravity of urine range from 1.002 to 1.028.

Body Fat. The overall density of the body is one indicator of a person’s total body fat. Fat is less dense than muscle and other tissues, so as it accumulates, the overall density of the body decreases. Measurements of a person’s weight and volume provide the overall body density, which can then be correlated to the percentage of body fat. (The body’s volume can be measured by immersion in a large tank of water. The amount of water displaced is equal to the volume of the body.)

Problem Solving With Multiple Conversions

Sometimes you will have to perform more than one conversion to obtain the desired unit. For example, suppose you want to convert 54.7 km into millimeters. You can either memorize the relationship between kilometers and millimeters, or you can do the conversion in two steps. Most people prefer to convert in steps. To do a stepwise conversion, we first convert the given amount to the base unit. In this example, the base unit is meters. We know that there are 1,000 m in 1 km: \[ 54.7\; \cancel{\rm{km}} \times \dfrac{1,000 \; \rm{m}}{1\; \cancel{\rm{km}}} = 54,700\; \rm{m} \nonumber\] Then we take the result (54,700 m) and convert it to millimeters, remembering that there are \(1,000\; \rm{mm}\) for every \(1\; \rm{m}\): \[ \begin{align*} 54,700 \; \cancel{\rm{m}} \times \dfrac{1,000 \; \rm{mm}}{1\; \cancel{\rm{m}}} &= 54,700,000 \; \rm{mm} \\[4pt] &= 5.47 \times 10^7\; \rm{mm} \end{align*} \] We have expressed the final answer in scientific notation. As a shortcut, both steps in the conversion can be combined into a single, multistep expression: \[ \begin{align*} 54.7\; \cancel{\rm{km}} \times \dfrac{1,000 \; \cancel{\rm{m}}}{1\; \cancel{\rm{km}}} \times \dfrac{1,000 \; \rm{mm}}{1\; \cancel{\rm{m}}} &= 54,700,000 \; \rm{mm} \\[4pt] &= 5.47 \times 10^7\; \rm{mm} \end{align*} \] Either method—one step at a time or all the steps together—is acceptable. If you do all the steps together, the restriction for the proper number of significant figures should be applied after the last step. As long as the math is performed correctly, you should get the same answer no matter which method you use.

Example \(\PageIndex{3}\)

Convert 58.2 ms to megaseconds in one multistep calculation.

SOLUTION

First, convert the given unit to the base unit—in this case, seconds—and then convert seconds to the final unit, megaseconds: \[ \begin{align*} 58.2 \; \cancel{\rm{ms}} \times \dfrac{1\; \cancel{\rm{s}}}{1,000\; \cancel{\rm{ms}}} \times \dfrac{1\; \rm{Ms}}{1,000,000\; \cancel{ \rm{s}}} &=0.0000000582\; \rm{Ms} \\[4pt] &= 5.82 \times 10^{-8}\; \rm{Ms} \end{align*} \] Neither conversion factor affects the number of significant figures in the final answer.

Exercise \(\PageIndex{3}\)

Convert 43.007 ng to kilograms in one multistep calculation.

Career Focus: Pharmacist

A pharmacist dispenses drugs that have been prescribed by a doctor. Although that may sound straightforward, pharmacists in the United States must hold a doctorate in pharmacy and be licensed by the state in which they work. Most pharmacy programs require four years of education in a specialty pharmacy school. Pharmacists must know a lot of chemistry and biology so they can understand the effects that drugs (which are chemicals, after all) have on the body.
Pharmacists can advise physicians on the selection, dosage, interactions, and side effects of drugs. They can also advise patients on the proper use of their medications, including when and how to take specific drugs properly. Pharmacists can be found in drugstores, hospitals, and other medical facilities. Curiously, an outdated name for pharmacist is chemist, which was used when pharmacists formerly did a lot of drug preparation, or compounding. In modern times, pharmacists rarely compound their own drugs, but their knowledge of the sciences, including chemistry, helps them provide valuable services in support of everyone’s health.

Key Takeaway

A unit can be converted to another unit of the same type with a conversion factor.

Concept Review Exercises

1. How do you determine which quantity in a conversion factor goes in the denominator of the fraction?
2. State the guidelines for determining significant figures when using a conversion factor.
3. Write a concept map (a plan) for how you would convert \(1.0 \times 10^{12}\) nanoliters (nL) to kiloliters (kL).

Answers

1. The unit you want to cancel from the numerator goes in the denominator of the conversion factor.
2. Exact numbers that appear in many conversion factors do not affect the number of significant figures; otherwise, the normal rules of multiplication and division for significant figures apply.
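As a closing illustration of the factor-label method described above, here is a small Python sketch (my own illustration, not from the text) reproducing the multistep 54.7 km to millimeters conversion by chaining conversion factors:

# Each conversion factor is written as (new units per old unit); a multistep
# conversion simply chains the factors, just like cancelling units on paper.
m_per_km = 1000.0    # exact: 1 km = 1,000 m
mm_per_m = 1000.0    # exact: 1 m = 1,000 mm

distance_km = 54.7
distance_mm = distance_km * m_per_km * mm_per_m
print(f"{distance_mm:.3e} mm")  # 5.470e+07 mm, matching the worked example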
Determine the behavior of the following sequences and calculate, where possible, an upper and a lower bound.

a) $$a_n=\dfrac{8n}{1-2n}$$
b) $$b_n=\dfrac{2n}{1+n^2}$$
c) $$c_n=\dfrac{n^2+2}{-n-1}$$

Development:

a) The sequence is not constant since $$a_1=-8$$ and $$a_2=-\dfrac{16}{3}$$. To verify whether the sequence is increasing or decreasing, it is sufficient to check whether $$a_n\leq a_{n+1}$$ or $$a_n\geq a_{n+1}$$, respectively. Let's see if it is increasing. We want to verify if $$\dfrac{8n}{1-2n}\leq\dfrac{8(n+1)}{1-2(n+1)}$$ By simplifying the factor $$8$$ and multiplying by the denominators, while observing that the denominators are always negative (so their product is positive and the inequality is preserved), we obtain $$$n(1-2(n+1))\leq (n+1)(1-2n)$$$ By expanding the products we have $$$-n-2n^2\leq 1-2n^2-n$$$ And by cancelling the term $$-n-2n^2$$ we obtain $$0\leq1$$, which is always true independently of $$n$$. Therefore the sequence is increasing. To see if the sequence is strictly increasing, we must verify whether $$a_n < a_{n+1}$$. We can verify this using the previous calculations, since they remain true if we replace the inequality $$\leq$$ by the strict inequality $$ < $$. We now check whether the sequence admits any bounds. As the sequence is increasing, it is lower bounded by $$a_1=-8$$. To see if the sequence is upper bounded, we can note that $$\dfrac{8n}{1-2n} < 0$$, since the numerator is positive and the denominator is negative. Therefore the sequence is upper bounded by $$0$$.

b) The sequence is not constant since $$b_1=1$$ and $$b_2=\dfrac{4}{5}$$. Taking a look at the first two terms of the sequence, it cannot be an increasing one. Let's see if the sequence is strictly decreasing. We verify if $$$\dfrac{2n}{1+n^2} > \dfrac{2(n+1)}{1+(n+1)^2}$$$ Multiplying by the denominators: $$$2n(1+(n+1)^2) > (1+n^2)2(n+1)$$$ Expanding, we obtain $$$4n+4n^2+2n^3 > 2+2n+2n^2+2n^3$$$ Simplifying, we obtain $$$n^2+n-1 > 0$$$ Computing the two roots of the previous polynomial, we see that they are both smaller than $$1$$. Therefore, for $$n$$ a positive integer, it is satisfied that $$n^2+n-1 > 0$$ and the sequence is strictly decreasing. As we have already seen, in this case the sequence is upper bounded by $$b_1=1$$. Also, since all the terms of the sequence are positive, we establish that the sequence is lower bounded by $$0$$.

c) The sequence is not constant since $$c_1=-\dfrac{3}{2}$$ and $$c_2=-2$$. Taking a look at the first two terms, it may be that the sequence is strictly decreasing. We verify if $$$\dfrac{n^2+2}{-n-1} > \dfrac{(n+1)^2+2}{-(n+1)-1}$$$ By multiplying by the denominators (whose product is positive) we obtain $$$(-n-2)(n^2+2) > (n^2+2n+3)(-n-1)$$$ By multiplying by $$-1$$, and therefore inverting the inequality, and expanding, we obtain $$$n^3+2n^2+2n+4 < n^3+3n^2+5n+3$$$ By subtracting we obtain the inequality $$$n^2+3n-1 > 0$$$ By calculating the roots we see that both of them are smaller than $$1$$ and, therefore, the inequality is true for every positive integer $$n$$. Therefore, the sequence is strictly decreasing. Consequently, the sequence is upper bounded by $$c_1=-\dfrac{3}{2}$$. The sequence does not have a lower bound, since the general term of the sequence becomes as large, with negative sign, as desired.

Solution:

a) The sequence is strictly increasing. It is upper bounded by $$0$$ and lower bounded by $$-8$$.
b) The sequence is strictly decreasing. It is upper bounded by $$1$$ and lower bounded by $$0$$.
c) The sequence is strictly decreasing.
It is upper bounded by $$-\dfrac{3}{2}$$ and does not admit any lower bound.
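A quick numerical check of the three conclusions (my own Python sketch, not part of the exercise):

# Check monotonicity and bounds of the three sequences over many terms.
import numpy as np

n = np.arange(1, 10001, dtype=float)
a = 8*n / (1 - 2*n)
b = 2*n / (1 + n**2)
c = (n**2 + 2) / (-n - 1)

print(np.all(np.diff(a) > 0), a[0], a.max() < 0)  # strictly increasing, a_1 = -8, below 0
print(np.all(np.diff(b) < 0), b[0], b.min() > 0)  # strictly decreasing, b_1 = 1, above 0
print(np.all(np.diff(c) < 0), c[0])               # strictly decreasing, c_1 = -1.5, unbounded below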
I'm trying to figure out some details about involutions of division algebras; I thought maybe someone here might have better insight. Let $k$ be a $p$-adic or number field, and let $K=k[\sqrt{\delta}]$ be a non-trivial extension of degree $2$. For $x\in K$, let $\bar{x}$ denote the conjugate of $x$ under the non-trivial $K/k$ automorphism. Let $D$ be a division algebra of degree $\ell$ with $Z(D)=K$. For simplicity, we shall assume that $\ell$ is prime (or even $\ell=3$ is enough for the moment).

An involution of the second type on $D$ is a $k$-linear anti-automorphism $\tau:D\to D$ which coincides with $\bar{\ }$ on $K$ and is of order two. That is to say, for any $t,s\in D$ and $\alpha,\beta\in K$, $$(1)\:\tau(\alpha t+\beta s)=\bar{\alpha}\tau(t)+\bar{\beta}\tau(s),\quad(2)\: \tau(st)=\tau(t)\tau(s),\quad\text{and}\quad(3)\:\tau^2(t)=t.$$ Note that if $\tau,\eta$ are two involutions of type 2 on $D$, then $\tau\circ\eta$ is a $K$-automorphism of $D$. It follows easily (by the Skolem-Noether theorem) that there exists some $\gamma\in D$ such that $\tau(t)=\gamma^{-1}\eta(t)\gamma$ for all $t\in D$.

In the case where $D$ is a quaternion algebra over $K$ (i.e. a division algebra of degree $2$), one can construct a non-trivial involution of the second type on $D$ in the following way. Since the order of a quaternion algebra in the Brauer group of a field is two, it follows that for any field $L$, a quaternion algebra $Q$ over $L$ has a non-trivial $L$-involution (i.e. an $L$-linear anti-automorphism of the algebra). This holds since the fact that $Q$ has order two in the Brauer group is equivalent to $Q$ being isomorphic to $Q^{op}$, the opposite algebra, and hence to the existence of a non-trivial map $Q\to Q^{op}$, which is the same thing as an anti-automorphism. Let $\mathbf{d}$ be a quaternion algebra over $k$, and let $\tau':\mathbf{d}\to\mathbf{d}$ be the non-trivial $k$-involution. One shows that $D\cong \mathbf{d}\otimes_k K$ as $K$-algebras, and that the map defined on generators by $\tau(t\otimes \alpha)=\tau'(t)\otimes \bar{\alpha}$ is an involution of the second type.

The question is: what happens for higher degrees? In the book "The Book of Involutions", Knus presents an argument for the existence of a non-trivial involution of the second type on $D$. Namely, such an involution exists if and only if the norm $N_{K/k}(D)$ is a split $k$-algebra (see $\S 3$ of the book for the definition of the norm algebra; I will add it here if someone requests it). My problem with Knus's proof is that it is not constructive, in the sense that it presents the reader with a bijection between the set of second-type involutions of $D$ and some specific set of left ideals in $N_{K/k}(D)$, and shows that such ideals exist if the splitting condition holds. But it is terribly unclear to me how to go back and construct such an involution once you've shown that it exists.

So, after all this long introduction, here is my question.

Question: Does anybody here know of an example of a division algebra of degree 3 (or higher) over a $p$-adic or global field, which has a non-trivial and explicitly presented involution of the second type?

I would be very thankful for any reference or example that anyone can offer. Thank you.
I was testing the .fft package of numpy 1.16.1 in Python 3.7.2. In particular I was trying to verify that the transform resembles the analytical one for: $$f(x) = \mathrm{exp}\left[-\left(\frac{x-5}{2}\right)^{2}\right]$$ I get from Wolfram Alpha that $\hat{f} = \mathcal{F}[f]$ looks like this: [Wolfram plot omitted] Then I tried to replicate this plot with numpy and matplotlib, with the following code:

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 10, 1/1000)
y = np.exp(-((x-5)**2)/4)
y_hat = np.fft.fftshift(np.fft.fft(y))
re_y_hat = np.real(y_hat)
im_y_hat = np.imag(y_hat)

fig, ax = plt.subplots()
ax.plot(x, re_y_hat, "b-", x, im_y_hat, "r-")
plt.show()
plt.close()

But the image I obtain differs greatly from the one Wolfram gives: [plot omitted] In the last image the zero frequency was shifted to the center by using np.fft.fftshift(), so the spike corresponds to frequency zero. I already figured out that the problem is that nowhere in np.fft.fft() is the $\Delta x$ being specified, so what numpy is interpreting is that I have data that varies very slowly, almost constant$^{1}$, and thus the transform is close to that of a constant function. I looked at the numpy documentation and other SE posts to see how this can be fixed but found nothing. Does anyone know how to fix this?

$^{1}$ We can calculate the average slope of the function numpy sees by $\frac{\mathrm{max}\{f\}-\mathrm{min}\{f\}}{x_{f_{\mathrm{max}}}-x_{f_{\mathrm{min}}}} = \frac{f(5)-f(0)}{n\Delta x} \approx \frac{1}{n\Delta x}$, where $n$ is the number of nodes separating the maximum from the minimum. In this case, since numpy takes $\Delta x = 1$ by default, the slope is about $1/5000=0.0002$.
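Not a definitive answer, but here is one way to get a plot comparable to the continuous transform (a sketch based on the same numpy setup): build the frequency axis with np.fft.fftfreq using the sample spacing, and multiply the DFT by $\Delta x$ so it approximates $\int f(x)e^{-2\pi i \nu x}\,dx$. Depending on Wolfram's transform convention (it may use angular frequency and a $1/\sqrt{2\pi}$ factor), a constant rescaling of the axes may still be needed:

import numpy as np
import matplotlib.pyplot as plt

dx = 1/1000
x = np.arange(0, 10, dx)
y = np.exp(-((x - 5)**2) / 4)

freqs = np.fft.fftshift(np.fft.fftfreq(len(x), d=dx))  # frequency axis in cycles per unit x
y_hat = np.fft.fftshift(np.fft.fft(y)) * dx            # Riemann-sum scaling of the integral

fig, ax = plt.subplots()
ax.plot(freqs, y_hat.real, "b-", freqs, y_hat.imag, "r-")
ax.set_xlim(-1, 1)  # the Gaussian envelope lives at low frequencies
plt.show()

The oscillation of the real and imaginary parts comes from the $e^{-2\pi i \cdot 5\nu}$ phase factor due to the Gaussian being centered at $x=5$; the Wolfram plot shows the same behavior.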
Is it ever valid to include a two-way interaction in a model without including the main effects? What if your hypothesis is only about the interaction, do you still need to include the main effects?

In my experience, not only is it necessary to have all lower-order effects in the model when they are connected to higher-order effects, but it is also important to properly model (e.g., allowing them to be nonlinear) main effects that are seemingly unrelated to the factors in the interactions of interest. That's because interactions between $x_1$ and $x_2$ can be stand-ins for main effects of $x_3$ and $x_4$. Interactions sometimes seem to be needed because they are collinear with omitted variables or omitted nonlinear (e.g., spline) terms.

You ask whether it's ever valid. Let me provide a common example, whose elucidation may suggest additional analytical approaches for you. The simplest example of an interaction is a model with one dependent variable $Z$ and two independent variables $X$, $Y$ in the form $$Z = \alpha + \beta' X + \gamma' Y + \delta' X Y + \varepsilon,$$ with $\varepsilon$ a random variable having zero expectation, and using parameters $\alpha, \beta', \gamma',$ and $\delta'$. It's often worthwhile checking whether $\delta'$ approximates $\beta' \gamma' / \alpha$, because an algebraically equivalent expression of the same model is $$Z = \alpha \left(1 + \beta X + \gamma Y + \delta X Y \right) + \varepsilon$$ $$= \alpha \left(1 + \beta X \right) \left(1 + \gamma Y \right) + \alpha \left( \delta - \beta \gamma \right) X Y + \varepsilon$$ (where $\beta' = \alpha \beta$, etc). Whence, if there's a reason to suppose $\left( \delta - \beta \gamma \right) \sim 0$, we can absorb it in the error term $\varepsilon$. Not only does this give a "pure interaction", it does so without a constant term. This in turn strongly suggests taking logarithms. Some heteroscedasticity in the residuals--that is, a tendency for residuals associated with larger values of $Z$ to be larger in absolute value than average--would also point in this direction. We would then want to explore an alternative formulation $$\log(Z) = \log(\alpha) + \log(1 + \beta X) + \log(1 + \gamma Y) + \tau$$ with iid random error $\tau$. Furthermore, if we expect $\beta X$ and $\gamma Y$ to be large compared to $1$, we would instead just propose the model $$\log(Z) = \left(\log(\alpha) + \log(\beta) + \log(\gamma)\right) + \log(X) + \log(Y) + \tau$$ $$= \eta + \log(X) + \log(Y) + \tau.$$ This new model has just a single parameter $\eta$ instead of four parameters ($\alpha$, $\beta'$, etc.) subject to a quadratic relation ($\delta' = \beta' \gamma'$), a considerable simplification. I am not saying that this is a necessary or even the only step to take, but I am suggesting that this kind of algebraic rearrangement of the model is usually worth considering whenever interactions alone appear to be significant. Some excellent ways to explore models with interaction, especially with just two and three independent variables, appear in chapters 10-13 of Tukey's EDA.

While it is often stated in textbooks that one should never include an interaction in a model without the corresponding main effects, there are certainly examples where this would make perfect sense. I'll give you the simplest example I can imagine. Suppose subjects randomly assigned to two groups are measured twice, once at baseline (i.e., right after the randomization) and once after group T received some kind of treatment, while group C did not.
Then a repeated-measures model for these data would include a main effect for measurement occasion (a dummy variable that is 0 for baseline and 1 for the follow-up) and an interaction term between the group dummy (0 for C, 1 for T) and the time dummy. The model intercept then estimates the average score of the subjects at baseline (regardless of the group they are in). The coefficient for the measurement occasion dummy indicates the change in the control group between baseline and the follow-up. And the coefficient for the interaction term indicates how much bigger/smaller the change was in the treatment group compared to the control group. Here, it is not necessary to include the main effect for group, because at baseline, the groups are equivalent by definition due to the randomization. One could of course argue that the main effect for group should still be included, so that, in case the randomization failed, this will be revealed by the analysis. However, that is equivalent to testing the baseline means of the two groups against each other. And there are plenty of people who frown upon testing for baseline differences in randomized studies (of course, there are also plenty who find it useful, but this is another issue).

The reason to keep the main effects in the model is identifiability. Hence, if the purpose is statistical inference about each of the effects, you should keep the main effects in the model. However, if your modeling purpose is solely to predict new values, then it is perfectly legitimate to include only the interaction if that improves predictive accuracy.

This is implicit in many of the answers others have given, but the simple point is that models w/ a product term but w/ & w/o the moderator & predictor are just different models. Figure out what each means given the process you are modeling and whether a model w/o the moderator & predictor makes more sense given your theory or hypothesis. The observation that the product term is significant, but only when moderator & predictor are not included, doesn't tell you anything (except maybe that you are fishing around for "significance") w/o a cogent explanation of why it makes sense to leave them out.

Arguably, it depends on what you're using your model for. But I've never seen a reason not to run and describe models with main effects, even in cases where the hypothesis is only about the interaction.

I will borrow a paragraph from the book An Introduction to Survival Analysis Using Stata by M. Cleves, R. Gutierrez, W. Gould, and Y. Marchenko, published by Stata Press, to answer your question: "It is common to read that interaction effects should be included in the model only when the corresponding main effects are also included, but there is nothing wrong with including interaction effects by themselves. [...] The goal of a researcher is to parametrize what is reasonably likely to be true for the data considering the problem at hand and not merely following a prescription."

Both x and y will be correlated with xy (unless you have taken a specific measure to prevent this by using centering). Thus if you obtain a substantial interaction effect with your approach, it will likely amount to one or more main effects masquerading as an interaction. This is not going to produce clear, interpretable results. What is desirable is instead to see how much the interaction can explain over and above what the main effects do, by including x, y, and (preferably in a subsequent step) xy. As to terminology: yes, $\beta_0$ is called the "constant."
On the other hand, "partial" has specific meanings in regression and so I wouldn't use that term to describe your strategy here. Some interesting examples that will arise once in a blue moon are described at this thread. I would suggest it is simply a special case of model uncertainty. From a Bayesian perspective, you simply treat this in exactly the same way you would treat any other kind of uncertainty, by either:
Calculating its probability, if it is the object of interest
Integrating or averaging it out, if it is not of interest, but may still affect your conclusions
This is exactly what people do when testing for "significant effects" by using t-quantiles instead of normal quantiles. Because you have uncertainty about the "true noise level", you take this into account by using a more spread-out distribution in testing. So from your perspective the "main effect" is actually a "nuisance parameter" in relation to the question that you are asking. So you simply average out the two cases (or more generally, over the models you are considering). So I would have the (vague) hypothesis: $$H_{\text{int}}:\text{The interaction between A and B is significant}$$ I would say that although not precisely defined, this is the question you want to answer here. And note that it is not only verbal statements such as the above which "define" the hypothesis, but the mathematical equations as well. We have some data $D$ and prior information $I$; then we simply calculate: $$P(H_{\text{int}}|DI)=P(H_{\text{int}}|I)\frac{P(D|H_{\text{int}}I)}{P(D|I)}$$ (small note: no matter how many times I write out this equation, it always helps me understand the problem better. Weird.) The main quantity to calculate is the likelihood $P(D|H_{\text{int}}I)$; this makes no reference to the model, so the model must have been removed using the law of total probability: $$P(D|H_{\text{int}}I)=\sum_{m=1}^{N_{M}}P(DM_{m}|H_{\text{int}}I)=\sum_{m=1}^{N_{M}}P(M_{m}|H_{\text{int}}I)P(D|M_{m}H_{\text{int}}I)$$ where $M_{m}$ indexes the $m$th model, and $N_{M}$ is the number of models being considered. The first term is the "model weight", which says how much the data and prior information support the $m$th model. The second term indicates how much the $m$th model supports the hypothesis. Plugging this equation back into the original Bayes theorem gives: $$P(H_{\text{int}}|DI)=\frac{P(H_{\text{int}}|I)}{P(D|I)}\sum_{m=1}^{N_{M}}P(M_{m}|H_{\text{int}}I)P(D|M_{m}H_{\text{int}}I)$$ $$=\frac{1}{P(D|I)}\sum_{m=1}^{N_{M}}P(DM_{m}|I)\frac{P(M_{m}H_{\text{int}}D|I)}{P(DM_{m}|I)}=\sum_{m=1}^{N_{M}}P(M_{m}|DI)P(H_{\text{int}}|DM_{m}I)$$ And you can see from this that $P(H_{\text{int}}|DM_{m}I)$ is the "conditional conclusion" of the hypothesis under the $m$th model (this is usually all that is considered, for a chosen "best" model). Note that this standard analysis is justified whenever $P(M_{m}|DI)\approx 1$ - an "obviously best" model - or whenever $P(H_{\text{int}}|DM_{j}I)\approx P(H_{\text{int}}|DM_{k}I)$ - all models give the same/similar conclusions. However, if neither is met, then Bayes' Theorem says the best procedure is to average out the results, placing higher weights on the models which are most supported by the data and prior information. It is very rarely a good idea to include an interaction term without the main effects involved in it. David Rindskopf of CCNY has written some papers about those rare instances. There are various processes in nature that involve only an interaction effect, and laws that describe them. For instance, Ohm's law.
In psychology you have, for instance, the performance model of Vroom (1964): Performance = Ability x Motivation. Now, you might expect to find a significant interaction effect when this law is true. Regrettably, this is not the case. You might easily end up finding two main effects and an insignificant interaction effect (for a demonstration and further explanation see Landsheer, van den Wittenboer and Maassen (2006), Social Science Research 35, 274-294; a toy simulation illustrating this appears further below). The linear model is not very well suited for detecting interaction effects; Ohm might never have found his law if he had used linear models. As a result, interpreting interaction effects in linear models is difficult. If you have a theory that predicts an interaction effect, you should include it even when insignificant. You may want to ignore main effects if your theory excludes those, but you will find that difficult, as significant main effects are often found in the case of a true data generating mechanism that has only a multiplicative effect. My answer is: Yes, it can be valid to include a two-way interaction in a model without including the main effects. Linear models are excellent tools to approximate the outcomes of a large variety of data generating mechanisms, but their formulas cannot easily be interpreted as a valid description of the data generating mechanism. This one is tricky and happened to me in my last project. I would explain it this way: let's say you had variables A and B which came out significant independently, and by business sense you thought that an interaction of A and B seemed reasonable. You included the interaction, which came out to be significant, but B lost its significance. You would explain your model initially by showing two results. The results would show that initially B was significant, but when seen in light of A it lost its sheen. So B is a good variable, but only when seen in light of various levels of A (if A is a categorical variable). It's like saying Obama is a good leader when seen in the light of his SEAL army. So Obama*SEAL will be a significant variable. But Obama, when seen alone, might not be as important. (No offense to Obama, just an example.) F = m*a: force equals mass times acceleration. It is not represented as F = m + a + ma, or some other linear combination of those parameters. Indeed, only the interaction between mass and acceleration makes sense physically. Is it ever valid to include a two-way interaction without main effects? Yes, it can be valid and even necessary. If, for example, in case 2 you were to include a factor for the main effect (the average difference of the blue vs. red condition), this would make the model worse. What if your hypothesis is only about the interaction, do you still need to include the main effects? Your hypothesis might be true independent of there being a main effect. But the model might need it to best describe the underlying process. So yes, you should try with and without. Note: You need to center the coding of the "continuous" independent variable (measurement in the example). Otherwise the interaction coefficients in the model will not be symmetrically distributed (no coefficient for the first measurement in the example).
Interacting continuous variables with other continuous variables or with categorical variables is a whole different story. See this FAQ from UCLA's Institute for Digital Research and Education. Yes, this can be valid, although it is rare. But in this case you still need to model the main effects, which you will afterward regress out. Indeed, in some models, only the interaction is interesting, such as drug testing/clinical models. This is, for example, the basis of the Generalized PsychoPhysiological Interactions (gPPI) model: y = ax + bxh + ch, where x/y are voxels/regions of interest and h the block/event designs. In this model, both a and c will be regressed out; only b will be kept for inference (the beta coefficients). Indeed, both a and c represent spurious activity in our case, and only b represents what cannot be explained by spurious activity: the interaction with the task. The short answer: If you include the interaction in the fixed effects, then the main effects are automatically included whether or not you specifically include them in your code. The only difference is your parametrization, i.e., what the parameters in your model mean (e.g., are they group means or are they differences from reference levels). Assumptions: I assume we are working in the general linear model and are asking when we can use the fixed effects specification $AB$ instead of $A + B + AB$, where $A$ and $B$ are (categorical) factors. Mathematical clarification: We assume that the response vector $Y \sim \mathcal N(\xi , \sigma^2 I_n )$. If $X_A$, $X_B$ and $X_{AB}$ are the design matrices for the three factors, then a model with "main effects and interaction" corresponds to the restriction $\xi \in$ span$\{X_A, X_B, X_{AB}\}$. A model with "only interaction" corresponds to the restriction $\xi \in$ span$\{X_{AB}\}$. However, span$\{X_{AB}\} =$ span$\{X_A, X_B, X_{AB}\}$. So, it's two different parametrizations of the same model (or the same family of distributions if you are more comfortable with that terminology). I just saw that David Beede provided a very similar answer (apologies), but I thought I would leave this up for those who respond well to a linear algebra perspective.
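A small numerical check of that span claim, for a hypothetical 2x3 design with one observation per cell (a minimal sketch; any balanced design with cell indicators behaves the same way):

import numpy as np

# Two categorical factors: A with 2 levels, B with 3 levels
A = np.repeat([0, 1], 3)
B = np.tile([0, 1, 2], 2)

XA = np.eye(2)[A]            # dummy columns for A
XB = np.eye(3)[B]            # dummy columns for B
XAB = np.eye(6)[A * 3 + B]   # one indicator column per A-by-B cell

both = np.hstack([XA, XB, XAB])
print(np.linalg.matrix_rank(XAB), np.linalg.matrix_rank(both))  # 6 6: same column space

Each column of XA or XB is a sum of cell indicators, so adding them to XAB enlarges nothing: the two specifications really are reparametrizations of one model.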
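Returning to the Vroom (1964) example mentioned earlier, here is the promised toy simulation of a purely multiplicative data generating mechanism (all numbers hypothetical). Regressing Performance on Ability and Motivation alone yields two strongly "significant" main effects even though the true process contains no additive terms at all:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
ability = rng.uniform(1, 10, n)
motivation = rng.uniform(1, 10, n)
performance = ability * motivation + rng.normal(0, 2, n)  # purely multiplicative truth

# Main-effects-only fit: both predictors come out highly "significant"
mains = sm.OLS(performance, sm.add_constant(np.column_stack([ability, motivation]))).fit()
print(mains.pvalues)

# Fit with the product term: the interaction coefficient recovers ~1
full = sm.OLS(performance, sm.add_constant(np.column_stack([ability, motivation, ability * motivation]))).fit()
print(full.params)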
My last post dealt with interactions between atoms that were separated by a small number of chemical bonds. This post deals with the interactions between the atoms that are not connected by chemical bonds -- the nonbonded interactions. Standard fixed-charge force fields typically have a nonbonded potential that accounts for interactions between partial atomic charges and van der Waals interactions that model dispersion interactions. Because it is computationally efficient, the Lennard-Jones equation is often used to model van der Waals interactions. The equations for these interactions are shown below: $$ U_{elec} = \frac 1 2 \sum _ {i=1} ^ N \sum _ {j \neq i} ^ N \frac 1 {4 \pi \epsilon_0} \frac {q_i q_j} {r_{i,j}} $$ $$ U_{vdW} = \frac 1 2 \sum_{i=1}^N \sum_{j \neq i}^N 4 \epsilon_{i,j} \left[ \left( \frac {\sigma_{i,j}} {r_{i,j}} \right) ^ {12} - \left( \frac {\sigma_{i,j}} {r_{i,j}} \right) ^ 6 \right] $$ This part of the calculation is by far the most expensive, as interactions between all pairs of particles must be computed (a toy implementation illustrating the cost appears at the end of this post). When trying to model systems in the condensed phase, we need to include the effect of the bulk solvent environment on our system of interest, meaning that we need to somehow account for $\approx 10^{23}$ water molecules! Since modern hardware is limited to simulating $\approx 10^6$ atoms, we need to have some way of simplifying our model. The two broad families of methods that researchers typically employ to simulate aqueous environments are: a) modeling the solvent implicitly as a continuum dielectric using, for example, the Generalized Born (GB) model, and b) constructing a periodic unit cell with a small amount of solvent surrounding the solute and tessellating that unit cell in all directions to approximate a bulk solution. What sets OpenMM apart from other software packages here is, once again, its extreme customizability and flexibility. Using its Custom GB forces, for example, you can rapidly prototype new implicit solvent models to improve the treatment of solvent effects on biomolecules. All you need is an equation and a set of atomic parameters, and OpenMM will do the rest, letting you test your new models efficiently using the power of GPUs in minutes.
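As promised above, here is a minimal direct-sum sketch of these two terms (toy parameters, no cutoffs or periodic boundary conditions; the Lorentz-Berthelot combining rules and the Coulomb constant in kJ/(mol nm e^2) are my assumptions for illustration, not something prescribed by OpenMM):

import numpy as np

KE = 138.935  # assumed Coulomb constant in kJ mol^-1 nm e^-2

def nonbonded_energy(pos, q, sigma, eps):
    """Direct O(N^2) Coulomb + Lennard-Jones sum over unique pairs."""
    e_elec = e_vdw = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):  # each pair counted once, so no 1/2 prefactor
            r = np.linalg.norm(pos[i] - pos[j])
            e_elec += KE * q[i] * q[j] / r
            # Lorentz-Berthelot combining rules for sigma_ij and epsilon_ij
            sij = 0.5 * (sigma[i] + sigma[j])
            eij = np.sqrt(eps[i] * eps[j])
            e_vdw += 4 * eij * ((sij / r) ** 12 - (sij / r) ** 6)
    return e_elec, e_vdw

The double loop makes the O(N^2) scaling, and hence the appeal of implicit solvent or periodic-cell approximations, easy to see.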
So far we have seen how lines and formulas can estimate outputs given an input. We can describe any straight line with two different variables: the slope $m$ and the y-intercept $b$. So far we have been rather fast and loose with choosing a line to estimate our output - we simply drew a line between the first and last points of our data set. Well today, we go further. Here, we take our first step toward training our model to match our data. The first step in training is to calculate our regression line's accuracy -- that is, how well our regression line matches our actual data. Calculating a regression line's accuracy is the topic of this lesson. In future lessons, we will improve upon our regression line's accuracy, so that it better predicts an output. The first step towards calculating a regression line to predict an output is to calculate how well any regression line matches our data. We need to calculate how accurate our regression line is. Let's find out what this means. Below we have data that represents the budget and revenue of four shows, with x being the budget and y being the revenue.

first_show = {'x': 0, 'y': 100}
second_show = {'x': 100, 'y': 150}
third_show = {'x': 200, 'y': 600}
fourth_show = {'x': 400, 'y': 700}
shows = [first_show, second_show, third_show, fourth_show]
shows

Run the code above with shift + enter. As we did in the last lab, let's draw a not-so-great regression line simply by drawing a line between our first and last points. We can use our build_regression_line function to do so. You can view the code directly here (a hypothetical sketch of such a function also appears below). Eventually, we'll improve this regression line. But first we need to see how good or bad a regression line is.

from linear_equations import build_regression_line
x_values = list(map(lambda show: show['x'], shows))
y_values = list(map(lambda show: show['y'], shows))
regression_line = build_regression_line(x_values, y_values)
regression_line

We can plot our regression line as the following using the plotting functions that we wrote previously:

from graph import m_b_trace, plot, trace_values
from plotly.offline import iplot, init_notebook_mode
init_notebook_mode(connected=True)
data_trace = trace_values(x_values, y_values)
regression_trace = m_b_trace(regression_line['m'], regression_line['b'], x_values)
plot([regression_trace, data_trace])

So that is what our regression line looks like. And this is the line translated into a function.

def sample_regression_formula(x):
    return 1.5 * x + 100

Ok, so now that we see what our regression line looks like, let's highlight how well our regression line matches our data. Let's interpret the chart above. That first red line shows that our regression formula does not perfectly predict that first show. Our actual data -- the first blue dot -- shows that when $x = 100$, $y = 150$. However, our regression line predicts that at $x = 100$, $y = 250$. So our regression line is off by 100, indicated by the length of the red line. Each point where our regression line's estimate differs from the actual data is called an error. And our red lines display the size of this error. The length of the red line equals the size of the error. Now let's put this formula into practice. The error is the actual value minus the expected value. So at point $x = 100$, the actual $y$ is 150. And at point $x = 100$, the expected value of $y$ is $250$. If we did not have a graph to display this, we could calculate this error by using our formula for the regression line: actual - expected $= 150 - 250 = -100$.
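For reference, here is the promised hypothetical sketch of what the imported build_regression_line might look like under the hood (the lesson's actual implementation lives in the linked linear_equations module; this version just applies the first-and-last-points rule described above):

def build_regression_line(x_values, y_values):
    # slope and intercept determined by the first and last data points
    m = (y_values[-1] - y_values[0]) / (x_values[-1] - x_values[0])
    b = y_values[0] - m * x_values[0]
    return {'m': m, 'b': b}

build_regression_line([0, 100, 200, 400], [100, 150, 600, 700])
# {'m': 1.5, 'b': 100.0}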
Now that we have explained how to calculate an error given a regression line and data, let's learn some mathematical notation that lets us better express these concepts. So far we have defined our regression function as $y = mx + b$, where for a given value of $x$ we can calculate the value of $y$. However, this is not totally accurate - our regression line is not calculating the actual value of $y$ but the expected value of $y$. So let's indicate this by changing our regression line formula to look like the following: $$\hat{y} = \hat{m}x + \hat{b}$$ Those little marks over the $y$, $m$ and $b$ are called hats. So our function reads as y-hat equals m-hat multiplied by $x$ plus b-hat. These hats indicate that this formula does not give us the actual value of $y$, but simply our estimated value of $y$. The hats also say that this estimated value of $y$ is based on our estimated values of $m$ and $b$. Note that $x$ is not a predicted value. This is because we are providing a value of $x$, not predicting it. For example, we are providing a show's budget as an input, not predicting it. So we are providing a value of $x$ and asking it to predict a value of $y$. Now remember that we were given some real data as well. This means that we do have actual points for $x$ and $y$, which look like the following.

first_show = {'x': 0, 'y': 100}
second_show = {'x': 100, 'y': 150}
third_show = {'x': 200, 'y': 600}
fourth_show = {'x': 400, 'y': 700}
shows = [first_show, second_show, third_show, fourth_show]
shows

So how do we represent our actual values of $y$? Here's how: $y$. No extra ink is needed. Ok, so now we know that $\hat{y}$ is our estimated value of $y$ and $y$ is the actual value. Finally, we use the Greek letter $\varepsilon$, epsilon, to indicate error. So we say that $$\varepsilon = y - \hat{y}.$$ We can be a little more precise by saying we are talking about error at any specific point, where $y$ and $\hat{y}$ are at that $x$ value. This is written as: $\varepsilon_{i} = y_{i} - \hat{y}_{i}$ Those little $i$s represent an index value, as in our first, second or third show. Now, applying this to a specific point of say when $x = 100$, we can say: $$\varepsilon_{x=100} = y_{x=100} - \hat{y}_{x=100} = 150 - 250 = -100.$$ We now know how to calculate the error at a given value of $x$, $x_i$, by using the formula $\varepsilon_i$ = $y_i - \hat{y_i}$. Again, this is helpful for describing how well our regression line predicts the value of $y$ at a specific point. However, we want to see how well our regression describes our dataset in general - not just at a single given point. Let's move beyond calculating the error at a given point to describing the total error of the regression line across all of our data. As an initial approach, we simply calculate the total error by summing the errors, $y - \hat{y}$, for every point in our dataset. Total Error = $\sum_{i=1}^{n} y_i - \hat{y_i}$ This isn't bad, but we'll need to modify this approach slightly. To understand why, let's take another look at our data. The errors at $x = 100$ and $x = 200$ begin to cancel each other out. We don't want the errors to cancel each other out! To resolve this issue, we square the errors to ensure that we are always summing positive numbers. ${\varepsilon_i^2}$ = $({y_i - \hat{y_i}})^2$ So given a list of points with coordinates (x, y), we can calculate the squared error of each of the points, and sum them up. This is called our **residual sum of squares** (RSS). Using our sigma notation, our formula for RSS looks like: $ RSS = \sum_{i = 1}^n ({y_i - \hat{y_i}})^2 = \sum_{i = 1}^n \varepsilon_i^2 $ Residual Sum of Squares is just what it sounds like.
A residual is simply the error -- the difference between the actual data and what our model expects. We square each residual and add them together to get RSS. Let's calculate the RSS for our regression line and associated data. In our example, we have actual $x$ and $y$ values at the following points: $(0, 100), (100, 150), (200, 600), (400, 700)$. And we can calculate the values of $\hat{y}$ as $\hat{y} = 1.5x + 100$ for each of those four points. So this gives us: $RSS = (100 - 100)^2 + (150 - 250)^2 + (600 - 400)^2 + (700 - 700)^2$ which reduces to $RSS = 0^2 + (-100)^2 + 200^2 + 0^2 = 50,000$ Now we have one number, the RSS, that represents how well our regression line fits the data. We got there by calculating the errors at each of our provided points, and then squaring the errors so that our errors are always positive. Root Mean Squared Error (RMSE) is just a variation on RSS. Essentially, it tries to answer the question of what the "typical" error of our model is versus each data point. To do this, it scales down the size of that large RSS number by taking the square root of the RSS divided by the number of data points: $ RSS = \sum_{i = 1}^n ({y_i - \hat{y_i}})^2$ $RMSE = \sqrt{\frac{RSS}{n}} $ where n equals the number of elements in the data set. Now let's walk through the reasoning for each step. The first thing that makes our RSS large is the fact that we square each error. Remember that we squared each error because we didn't want positive errors and negative errors to cancel out. Remember, we said that each place where we had a negative error, as in $actual - expected = -100$, we would square the error, such that $(-100)^2 = 10,000$. Remember that we square each of our errors and add them together, which led to: $RSS = 50,000$. We then take the mean to get the average squared error (also called "mean squared error" or "MSE" for short): $MSE = \frac{50,000}{4} = 12,500$. We do this because with each additional data point in our data set, our error will tend to increase. So with increasing dataset size, RSS also increases. To counteract the effect of RSS increasing with the dataset size rather than just with inaccuracy, we divide by the size of the dataset. The last step in calculating the RMSE is to take the square root of the MSE: $RMSE = \sqrt{12,500} = 111.8$ In general, the RMSE is calculated as: $ RMSE = \sqrt{\frac{\sum_{i = 1}^n ({y_i - \hat{y_i}})^2}{n}} $ So the RMSE gives a typical estimate of how far each measurement is from the expectation - a "typical error" as opposed to an overall error. Before this lesson, we simply assumed that our regression line made good predictions of $y$ for given values of $x$. In this lesson, we learned a metric that tells us how well our regression line fits our actual data. To do this, we started looking at the error at a given point, and defined error as the actual value of $y$ minus the expected value of $y$ from our regression line. Then we were able to determine how well our regression line describes the entire dataset by squaring the errors at each point (to eliminate negative errors), and adding these squared errors. This is called the Residual Sum of Squares (RSS). This is our metric for describing how well our regression line fits our data. Lastly, we learned how the RMSE tells us the "typical error" by taking the square root of the RSS divided by the number of elements in our dataset.
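Putting the whole calculation into code, using the four shows and the regression line from this lesson:

from math import sqrt

shows = [{'x': 0, 'y': 100}, {'x': 100, 'y': 150},
         {'x': 200, 'y': 600}, {'x': 400, 'y': 700}]

def regression_formula(x):
    return 1.5 * x + 100

errors = [show['y'] - regression_formula(show['x']) for show in shows]
rss = sum(error ** 2 for error in errors)
rmse = sqrt(rss / len(shows))
print(errors, rss, rmse)   # [0, -100, 200, 0] 50000 111.8...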
If we use the composite trapezoidal rule, then what is the least number of divisions $N$ for which the error of the integral $\int^1_0{e^{-x}}dx$ doesn't exceed $\frac{1}{12}\times10^{-2}$? My guess is 11 or 5. Kindly tell me which of the answers is correct. I obtained 11 by applying the formula $$\Big|\frac{(b-a)^3}{12N^2}\times{(e^{-x})^{\prime\prime}_{x=\varepsilon}}\Big| \text{ = } {\frac{10^{-2}}{12}}$$ where $\varepsilon \in [0,1]$ is chosen so that it maximizes the value of $e^{-\varepsilon}$ (which I believe occurs at $\varepsilon = 0$). Solving this equation, I get $N = 10$ (i.e. I must have at least 11 equidistant division points if I have to keep the error less than $\frac{10^{-2}}{12}$). As far as 5 is concerned, I just used 5 equidistant intervals, i.e. $0,\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5},1$, and applied them in the trapezoidal rule. Now here's the problem: when I calculated the answer at $N=5$, I got a value greater than the actual value of the integral. Is that possible? If yes, why? Which of my answers is correct? I obtained 11 by a well-established formula, and 5 was just an option I hit upon and am not really sure of its correctness. Thanks. Note: I am posting this question because everywhere else nobody is giving any reply at all. I don't know if this question belongs here. If it does, then kindly reply. If it doesn't, then feel free to erase or delete or whatever :)
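For reference, a quick numerical check of the candidate values (a sketch; np.trapz applies the composite trapezoidal rule, and the actual errors can be compared against the target bound $10^{-2}/12 \approx 8.3\times10^{-4}$):

import numpy as np

f = lambda x: np.exp(-x)
exact = 1 - np.exp(-1.0)

for N in (5, 10, 11):
    x = np.linspace(0.0, 1.0, N + 1)   # N subintervals -> N + 1 points
    approx = np.trapz(f(x), x)
    print(N, approx, abs(approx - exact))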
Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kenneth Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is an $\omega_2$-saturated $\sigma$-ideal on $\omega_1$". It is now known that only a Woodin cardinal is needed for this result. However, the consistency of the existence of an $\omega_2$-complete $\omega_3$-saturated $\sigma$-ideal on $\omega_2$, as far as the set theory world is concerned, still requires an almost huge cardinal. [1] Definitions Their formulation is similar to that of superstrong cardinals; more precisely, a huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal. The definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties n-$P_0$ and n-$P_1$, n-$P_0$ has less consistency strength than n-$P_1$, which has less consistency strength than (n+1)-$P_0$, and so on. This phenomenon is seen only around the n-fold variants as of modern set theoretic concerns. [2] Although they are very large, there is a first-order definition which is equivalent to n-hugeness, so the $\theta$-th n-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability. Elementary embedding definitions In what follows, $j:V\to M$ is a nontrivial elementary embedding into a transitive class $M$ with critical point $\kappa$. $\kappa$ is almost n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$). $\kappa$ is n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$). $\kappa$ is almost n-huge iff it is almost n-huge with target $\lambda$ for some $\lambda$. $\kappa$ is n-huge iff it is n-huge with target $\lambda$ for some $\lambda$.
$\kappa$ is super almost n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost n-huge with target $\lambda$ (that is, the target can be made arbitrarily large). $\kappa$ is super n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is n-huge with target $\lambda$. $\kappa$ is almost huge, huge, super almost huge, and superhuge iff it is almost 1-huge, 1-huge, etc., respectively. Ultrahuge cardinals A cardinal $\kappa$ is $\lambda$-ultrahuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $j(\kappa)>\lambda$, $M^{j(\kappa)}\subseteq M$ and $V_{j(\lambda)}\subseteq M$. A cardinal is ultrahuge if it is $\lambda$-ultrahuge for all $\lambda\geq\kappa$. [1] Notice how similar this definition is to the alternative characterization of extendible cardinals. Furthermore, this definition can be extended in the obvious way to define $\lambda$-ultra n-hugeness and ultra n-hugeness, as well as the "almost" variants. Hyperhuge cardinals A cardinal $\kappa$ is $\lambda$-hyperhuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some inner model $M$ such that $\mathrm{crit}(j) = \kappa$, $j(\kappa)>\lambda$ and $^{j(\lambda)}M\subseteq M$. A cardinal is hyperhuge if it is $\lambda$-hyperhuge for all $\lambda>\kappa$. [3, 4] Huge* cardinals A cardinal $κ$ is $n$-huge* if for some $α > κ$, $\kappa$ is the critical point of an elementary embedding $j : V_α → V_β$ such that $j^n (κ) < α$. [5] The hugeness* variant is formulated in a way that allows for a virtual variant consistent with $V=L$: a cardinal $κ$ is virtually $n$-huge* if for some $α > κ$, in a set-forcing extension, $\kappa$ is the critical point of an elementary embedding $j : V_α → V_β$ such that $j^n(κ) < α$. [5] Ultrafilter definition The first-order definition of n-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$. A cardinal $\kappa$ is n-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\cdots<\lambda_{n-1}<\lambda_n=\lambda$ such that: $$\forall i<n(\{x\subseteq\lambda:\text{order-type}(x\cap\lambda_{i+1})=\lambda_i\}\in U)$$ where $\text{order-type}(X)$ is the order-type of the poset $(X,\in)$. [1] $\kappa$ is then super n-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is n-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses n-hugeness) then there is an ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$, i.e. it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$; all members of the $\lambda_k$ sequence are. As an example, $\kappa$ is 1-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$ such that $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}\in U$. The reason why this would be so surprising is that the collection of all $x\subseteq\lambda$ of order-type $\kappa$ would be in the ultrafilter; that is, every set containing $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}$ as a subset is considered a "large set."
As for hyperhugeness, the following are equivalent: [4]
$κ$ is $λ$-hyperhuge;
there is $μ > λ$ such that a normal, fine, $κ$-complete ultrafilter exists on $[μ]^λ_{∗κ} := \{s ⊂ μ : |s| = λ, |s ∩ κ| ∈ κ, \mathrm{otp}(s ∩ λ) < κ\}$;
$\mathbb{L}_{κ,κ}$ is $[μ]^λ_{∗κ}$-$κ$-compact for type omission.
Coherent sequence characterization of almost hugeness $C^{(n)}$-$m$-huge cardinals (this section from [6]) $κ$ is $C^{(n)}$-$m$-huge iff it is $m$-huge and $j(κ) ∈ C^{(n)}$ ($C^{(n)}$-huge if it is huge and $j(κ) ∈ C^{(n)}$). Equivalent definition in terms of normal measures: $κ$ is $C^{(n)}$-$m$-huge iff it is uncountable and there is a $κ$-complete normal ultrafilter $U$ over some $P(λ)$ and cardinals $κ = λ_0 < λ_1 < \ldots < λ_m = λ$, with $λ_1 ∈ C^{(n)}$ and such that for each $i < m$, $\{x ∈ P(λ) : \mathrm{ot}(x ∩ λ_{i+1}) = λ_i \} ∈ U$. It follows that "$κ$ is $C^{(n)}$-$m$-huge" is $Σ_{n+1}$ expressible. Every huge cardinal is $C^{(1)}$-huge. The first $C^{(n)}$-$m$-huge cardinal is not $C^{(n+1)}$-$m$-huge, for all $m$ and $n$ greater than or equal to $1$. For suppose $κ$ is the least $C^{(n)}$-$m$-huge cardinal and $j : V → M$ witnesses that $κ$ is $C^{(n+1)}$-$m$-huge. Then since "x is $C^{(n)}$-$m$-huge" is $Σ_{n+1}$ expressible, we have $V_{j(κ)} \models$ "$κ$ is $C^{(n)}$-$m$-huge". Hence, since $(V_{j(κ)})^M = V_{j(κ)}$, $M \models$ "$∃_{δ < j(κ)}(V_{j(κ)} \models$ "$δ$ is $C^{(n)}$-$m$-huge"$)$". By elementarity, there is a $C^{(n)}$-$m$-huge cardinal less than $κ$ in $V$ – contradiction. Assuming $\mathrm{I3}(κ, δ)$, if $δ$ is a limit cardinal (instead of a successor of a limit cardinal – Kunen's Theorem excludes other cases), it is equal to $\sup\{j^m(κ) : m ∈ ω\}$ where $j$ is the elementary embedding. Then $κ$ and $j^m(κ)$ are $C^{(n)}$-$m$-huge (inter alia) in $V_δ$, for all $n$ and $m$. If $κ$ is $C^{(n)}$-$\mathrm{I3}$, then it is $C^{(n)}$-$m$-huge, for all $m$, and there is a normal ultrafilter $\mathcal{U}$ over $κ$ such that $\{α < κ : α$ is $C^{(n)}$-$m$-huge for every $m\} ∈ \mathcal{U}$. Consistency strength and size Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the n-fold variants) known as the double helix. This phenomenon is when for one n-fold variant, letting a cardinal be called n-$P_0$ iff it has the property, and another variant, n-$P_1$, n-$P_0$ is weaker than n-$P_1$, which is weaker than (n+1)-$P_0$.
[2] In the consistency strength hierarchy, here is where these lie (top being weakest):
measurable = 0-superstrong = 0-huge
n-superstrong
n-fold supercompact
(n+1)-fold strong, n-fold extendible
(n+1)-fold Woodin, n-fold Vopěnka
(n+1)-fold Shelah
almost n-huge
super almost n-huge
n-huge
super n-huge
ultra n-huge
(n+1)-superstrong
All huge variants lie at the top of the double helix restricted to some natural number n, although each is bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 is preceded by a stationary set of n-huge cardinals, for all n. [1] Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every (n+1)-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super n-huge" [1]; in fact it contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is ultra n-huge". Every n-huge cardinal is m-huge for every m<n. Similarly with almost n-hugeness, super n-hugeness, and super almost n-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost huge cardinal implies the consistency of Vopěnka's principle). [1] Every ultra n-huge is super n-huge and a stationary limit of super n-huge cardinals. Every super almost (n+1)-huge is ultra n-huge and a stationary limit of ultra n-huge cardinals. In terms of size, however, the least n-huge cardinal is smaller than the least supercompact cardinal (assuming both exist). [1] This is because n-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties. Thus for any $\kappa$ which is supercompact and has an n-huge cardinal above it, $\kappa$ "reflects downward" that n-huge cardinal: there are $\kappa$-many n-huge cardinals below $\kappa$. On the other hand, the least super n-huge cardinals have both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least super almost huge. While not every $n$-huge cardinal is strong, if $\kappa$ is almost $n$-huge with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$, then $\kappa$ is $\lambda_n$-strong as witnessed by the generated $j:V\prec M$. This is because $j^n(\kappa)=\lambda_n$ is measurable and therefore $\beth_{\lambda_n}=\lambda_n$, so $V_{\lambda_n}=H_{\lambda_n}$; and because $M^{<\lambda_n}\subset M$, $H_\theta\subset M$ for each $\theta<\lambda_n$, and so $\cup\{H_\theta:\theta<\lambda_n\} = \cup\{V_\theta:\theta<\lambda_n\} = V_{\lambda_n}\subset M$. Every almost $n$-huge cardinal with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$ is also $\theta$-supercompact for each $\theta<\lambda_n$, and every $n$-huge cardinal with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$ is also $\lambda_n$-supercompact. An $n$-huge* cardinal is an $n$-huge limit of $n$-huge cardinals. Every $(n+1)$-huge cardinal is $n$-huge*. [5] As for virtually $n$-huge*: [5] If $κ$ is virtually huge*, then $V_κ$ is a model of proper class many virtually extendible cardinals. A virtually $(n+1)$-huge* cardinal is a limit of virtually $n$-huge* cardinals.
A virtually $n$-huge* cardinal is an $(n+1)$-iterable limit of $(n+1)$-iterable cardinals. If $κ$ is $(n+2)$-iterable, then $V_κ$ is a model of proper class many virtually $n$-huge* cardinals. Every virtually rank-into-rank cardinal is a virtually $n$-huge* limit of virtually $n$-huge* cardinals for every $n < ω$. The $\omega$-huge cardinals A cardinal $\kappa$ is almost $\omega$-huge iff there is some transitive model $M$ and an elementary embedding $j:V\prec M$ with critical point $\kappa$ such that $M^{<\lambda}\subset M$ where $\lambda$ is the smallest cardinal above $\kappa$ such that $j(\lambda)=\lambda$. Similarly, $\kappa$ is $\omega$-huge iff the model $M$ can be required to have $M^\lambda\subset M$. Sadly, $\omega$-huge cardinals are inconsistent with ZFC by a version of Kunen's inconsistency theorem. Now, $\omega$-hugeness is used to describe critical points of I1 embeddings. Relative consistency results Hugeness of $\omega_1$ In [2] it is shown that if $\text{ZFC +}$ "there is a huge cardinal" is consistent then so is $\text{ZF +}$ "$\omega_1$ is a huge cardinal" (with the ultrafilter characterization of hugeness). Generalizations of Chang's conjecture Cardinal arithmetic in $\text{ZF}$ If there is an almost huge cardinal then there is a model of $\text{ZF+}\neg\text{AC}$ in which every successor cardinal is a Ramsey cardinal. It follows that (1) for all inner models $W$ of $\text{ZFC}$ and every singular cardinal $\kappa$, one has $\kappa^{+W} < \kappa^+$, and that (2) for every ordinal $\alpha$ there is no injection $\aleph_{\alpha+1}\to 2^{\aleph_\alpha}$. This in turn implies the failure of the square principle at every infinite cardinal (and consequently $\text{AD}^{L(\mathbb{R})}$, see determinacy). [3] In set theoretic geology If $\kappa$ is hyperhuge, then $V$ has $<\kappa$ many grounds (so the mantle is a ground itself). [3] This result has been strengthened to extendible cardinals. [7] On the other hand, it is consistent that there is a supercompact cardinal and class many grounds of $V$ (because of the indestructibility properties of supercompactness). [3] References
1. Kanamori, Akihiro. The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings. Second edition, Springer-Verlag, Berlin, 2009.
2. Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007.
3. Usuba, Toshimichi. The downward directed grounds hypothesis and very large cardinals. Journal of Mathematical Logic 17(02):1750009, 2017.
4. Boney, Will. Model theoretic characterizations of large cardinals. arXiv.
5. Gitman, Victoria and Shindler, Ralf. Virtual large cardinals.
6. Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3-4):213-240, 2012.
Filtering time-series data can be a tricky business. Many times what seems so obvious to our eyes is difficult to mathematically put into practice. A recent technique I've been reading about is called "total variation minimization" filtering; a close relative goes by the name "$ \ell_1 $ trend filtering." Another member of this family is the Hodrick-Prescott (HP) trend filter. In all three of these types of filters, we're presented with a set of noisy data, $ y = x + z $ (where $ x $ is the real signal and $ z $ is some noise), and we're trying to fit a more simple model or set of lines to this data. For the TV filter, the type of lines we're fitting to the data are lines that don't have many large "jumps"; mathematically, $ \vert x_{i-1} - x_{i} \vert $, the difference between two of the fitted values, is small. In HP and $ \ell_1 $ trend filtering, the goal is to make sure that the 2nd derivative is small. Mathematically this looks like minimizing the term $$ (x_{i-1} - x_{i}) - (x_{i} - x_{i+1}) . $$ This makes sense, because the only time that the term above goes to zero is if the data is in a straight line - no "kinks" in the trendline. In HP filtering this term is squared and in $ \ell_1 $ filtering this term is just the absolute value (the $ \ell_1 $ norm). HP filtering fits a kind of spline to the data, while the $ \ell_1 $ filtering fits a piecewise linear function (straight lines that join at the knots). The actual cost function looks a bit like this: $$ c(x) = \frac{1}{2}\sum_{i=1}^{n} (y_i - x_i)^2 + \lambda P(x) $$ where $ P(x) $ is one of the penalty terms we talked about above and $ \lambda $ controls the strength of the denoising. Again, remember that $ y $ is our observed signal and $ x $ is our projected or estimated signal - the trend. Setting $ \lambda $ to infinity will cause the fitted line to be a straight line through the data, and setting $ \lambda $ to zero will fit a line that looks identical to the original data. In HP filtering the whole function we're minimizing looks like this $$ c(x) = \frac{1}{2}\sum_{i=1}^{n} (y_i - x_i)^2 + \lambda\sum_{i = 2}^{n-1} ((x_{i-1} - x_{i}) - (x_{i} - x_{i+1}))^2 $$ and so $ \hat{x} = \arg \min_x c(x) $. In order to find the minimum value of $ \hat{x} $ we can take the derivative of $ c(x) $ with respect to $ x $ and solve for zero. This isn't too difficult, and having both the terms in the HP filter squared makes this a bit easier - $ \ell_1 $ filtering is trickier. Before we compute the derivative, let's rewrite the ending penalty term as a matrix multiplication. In order to compute $ ((x_{i-1} - x_{i}) - (x_{i} - x_{i+1}))^2 $ for each value of $ i $, we can multiply our array of $ x $ values, $ x = (x_1, x_2, ..., x_n)^T \in \mathbb R^{n \times 1} $, by a matrix that takes the difference. It will look like this $$ D = \begin{bmatrix} \\ 1 & -2 & 1 & & & & \\ & 1 & -2 & 1 & & & \\ & & \ddots & \ddots & \ddots & & \\ & & & 1 & -2 & 1 & \\ & & & & 1 & -2 & 1 \\ \end{bmatrix} $$ So now $ D \in \mathbb R^{(n-2) \times n} $, and $ Dx $ will be an $ \mathbb R^{(n-2) \times 1} $ vector describing the difference penalty. The cost function can now be rewritten as $$ c(x) = \frac{1}{2}\Vert y - x \Vert^2 + \lambda\Vert Dx\Vert^2 $$ Notice that both $ \Vert y - x \Vert^2 $ and $ \Vert Dx \Vert^2 $ are single values, since both of the inner quantities are vectors ($ n \times 1 $ and $ (n-2) \times 1 $), and so taking the $ \ell_2 $ norm sums the squares of the elements, yielding one value.
If we take the derivative of $ c(x) $ w.r.t. $ x $, we have $$ \frac{\partial c(x)}{\partial x} = -y + x + 2\lambda D^TDx = 0 $$ The first part is the derivative of $ \frac{1}{2}(y - x)^T(y - x) $ (which is a different way of writing the squared $ \ell_2 $ norm), and the latter part comes out of the tedious computation of the derivative of the squared $ \ell_2 $ norm of $ Dx $. Rearranging this equation yields $$ (I + 2\lambda D^TD)x = y $$ Solving for $ x $ gives $$ \hat x = (I + 2\lambda D^TD)^{-1}y $$ This is super handy because it gives us an analytical way of solving for $ \hat x $ by just multiplying our $ y $ vector by a precomputed transformation matrix - $ (I + 2\lambda D^TD)^{-1} $ is a fixed matrix of size $ n \times n $. An Example Let's take the stock price of Apple over time. We can import this via the matplotlib (!) module:

import datetime
from matplotlib import finance
%pylab inline
pylab.rcParams['figure.figsize'] = 12, 8  # default image size for this interactive session
import statsmodels.api as sm

d1 = datetime.datetime(2011, 1, 1)
d2 = datetime.datetime(2012, 12, 1)
sp = finance.quotes_historical_yahoo('AAPL', d1, d2, asobject=None)

plot(sp[:, 2])
title('Stock price of AAPL from Jan. 2011 to Dec. 2012')

Now that we have the historic data we want to fit a trend line to it. The statsmodels package has this filter already built in; it comes installed with an Anaconda installation. You can give the function a series of data and the $ \lambda $ parameter and it will return the fitted line.

xhat = sm.tsa.filters.hpfilter(sp[:, 2], lamb=100000)[1]
plot(sp[:, 2])
hold(True)
plot(xhat, linewidth=4.)

The green line nicely flows through our data. What happens when we adjust the regularization? Since we don't really know what value to pick for $ \lambda $, we'll try a range of values and plot them.

lambdas = numpy.logspace(3, 6, 5)  # a logarithmically spaced set of lambdas from 1,000 to 1 million
xhat = []
for i in range(lambdas.size):
    xhat.append(sm.tsa.filters.hpfilter(sp[:, 2], lambdas[i])[1])  # keep the trend (second return value)
plot(sp[:, 2])
hold(True)
plot(transpose(xhat), linewidth=2.)

You can see we move through a continuum of tightly fitted to loosely fitted data. Pretty nifty, huh? You'll also notice that the trend line doesn't have any sharp transitions. This is due to the squared penalty term. In $ \ell_1 $ trend filtering you'll end up with sharp transitions. Maybe this is good, maybe not. For filtering data where there's no guarantee that the data will be semi-continuous (maybe some sensor reading), perhaps the TV or $ \ell_1 $ filter is better. The difficulty is that adding the $ \ell_1 $ penalty makes the function non-differentiable, which makes the solution a little more difficult - generally an iterative process.
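For completeness, here is a minimal sketch of the analytic solve derived above, built directly from the second-difference matrix $ D $ (sparse matrices keep this cheap even for long series; the factor of 2 in front of $ \lambda $ follows the 1/2 convention in our cost function, so it differs from statsmodels' convention by a constant rescaling of $ \lambda $):

import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import spsolve

def hp_trend(y, lam):
    n = len(y)
    # Second-difference operator D, shape (n - 2, n)
    D = sparse.diags([1., -2., 1.], [0, 1, 2], shape=(n - 2, n))
    # Solve (I + 2 lambda D^T D) x = y for the trend x
    A = sparse.eye(n) + 2 * lam * (D.T @ D)
    return spsolve(A.tocsc(), y)

# e.g. plot(hp_trend(sp[:, 2], 1e5)) should closely track the statsmodels trend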
I am interested in solving numerically the following mathematical problem. Consider an ODE of the form $$ \dot q(t) = f(q(t),t_1,\ldots, t_N),\qquad t\in [0,T], $$ where $q\in \mathbb{R}^n$ is the state and $t_i,\, i=1,\ldots,N$ are real parameters such that $0\le t_1 \le \ldots \le t_N\le T$. Moreover, $f$ is defined from the Heaviside function $H(t-t_i)$ (so I don't want to use automatic differentiation). I would like to solve: $$ \min_{t_1,\ldots,t_N}\; \lVert q(T)\rVert $$ Here is what I have done so far. I am using the scipy.optimize.minimize() function together with the 'SLSQP' method, and I am integrating the ODE using the 'dopri5' method from the scipy ode integrator. Here is some "pseudo code":

from numpy.linalg import norm
from scipy.integrate import ode
from scipy.optimize import minimize

n = 2   # dimension of q
N = 10  # number of t_i
qf = [] # list of states, used to compute the objective function
tis0 = ...  # initialization of t1, t2, ...

# function to compute the dynamics
def dyn(t, z, n, N):
    # compute dq/dt
    dqdt = [0. for i in range(0, n + N)]
    dqdt[...] = ...    # Heaviside function applied to (t - t_i) appears here
    dqdt[n:-2] = 0.    # dynamics for the (constant) t_i
    return dqdt

# init integrator
solver = ode(dyn).set_integrator('dopri5')

# record the state q
def solout(t, q):
    qf.append(q)

# function to compute the objective
def objfun(tis):
    solver.set_solout(solout)
    y0[n:n+N] = tis
    solver.set_initial_value(y0, 0.).set_f_params(n, N)
    solver.integrate(T)
    return norm(qf[-1])

# arguments for the optimization routine: bounds, constraints...
bnds = tuple([(0., T) for i in range(0, N)])
cons = ({'type': 'ineq', 'fun': lambda x: x[0] < x[1]}, ...)

# call to the optimization routine
res = minimize(objfun, tis0, method='SLSQP', bounds=bnds, constraints=cons)

I am not convinced that this is a suitable approach for this problem. In fact, I am observing that the algorithm converges, but the optimal solution is very close to the initialization vector tis0 ... so the $t_i$ variables don't move. Maybe I shouldn't define the $t_i$ as constants and not put them in the dynamics? Also, I wonder whether SLSQP is the right method to tackle this problem. Any thoughts, even ones not related to this solution, are quite welcome. EDIT: the dynamics is given by: \begin{array}{l} &\dot x(t) = y(t)\, \gamma(z(t)) -x(t)\, \beta(z(t)) \\ &\dot y(t) = x(t) - y(t) + 1 \end{array} where $\gamma(z(t))=\frac{z(t)}{z(t)+1}$, $\beta(z(t))=\frac{1}{1+\gamma(z(t))}$ and $$ z(t) = \sum_{i=1}^N R_i(t_i)\,(t-t_i)\, \exp(-(t-t_i))\, H(t-t_i) $$ where $R_i(t_i) = 1+\exp(-(t_i-t_{i-1}))$ and with the convention $t_0=-\infty$.
Suppose that you write something like this for your students. This is just the beginning of something I would like to write to help them with the formal definition of a limit, but I am puzzled by the first output.

formalLimit[expr_, a_, L_, ϵ_] :=
 Module[{f},
  f[x_] := expr;
  f[2]
 ]

So, my first question is, when I enter

formalLimit[x^2, 2, 4, .1]

why do I get the output $x^2$? Once I get past this problem, my next question is, suppose a student enters an expression $\sqrt{4-t^2}$, not using my choice of $x$ as the independent variable. How can I identify their choice of independent variable in the expr_, then change it to $x$ when using f[x_]:=expr? Update: I'd like to thank all of my colleagues for their kind responses. Here is what I was able to do thanks to your help. Students will get a question (yes, it can be answered much more quickly using Mathematica techniques such as Reduce, etc., but I want a visual introduction to the formal definition of a limit) such as "Use a graph to find a number $\delta$ such that $$|f(x)-L|<\epsilon\qquad\text{whenever}\qquad0<|x-a|<\delta.$$ As an example, find a $\delta$ such that $$|\sqrt{x}-1|<0.5\qquad\text{whenever}\qquad0<|x-1|<\delta$$ I've written this function:

formalLimit[expr_, var_, a_, L_, ϵ_] :=
 DynamicModule[{f, δ},
  f = (expr /. var -> #) &;
  Manipulate[
   δ = Min[Abs[p[[1]] - a], Abs[q[[1]] - a]];
   Show[
    Plot[f[x], {x, a - zoom, a + zoom},
     PlotRange -> {{a - zoom, a + zoom}, {L - 2 ϵ, L + 2 ϵ}},
     Epilog -> {Arrowheads[0.02],
       Arrow[{{a, L - 2 ϵ}, {a, f[a]}, {a - 1.5 δ, L}}],
       {Red, Dashed,
        InfiniteLine[{{p[[1]], L - ϵ}, {p[[1]], L + ϵ}}],
        InfiniteLine[{{q[[1]], L - ϵ}, {q[[1]], L + ϵ}}]}},
     AxesLabel -> {ToString[var], "y"},
     PlotLabel -> "δ = " <> ToString[δ],
     AxesOrigin -> {a - 1.5 δ, L - 2 ϵ},
     Ticks -> {{p[[1]], a, q[[1]]}, {L - ϵ, L, L + ϵ}}],
    Plot[{L - ϵ, L + ϵ}, {x, a - zoom, a + zoom},
     PlotStyle -> Directive[Dashed, GrayLevel[0.8]],
     Filling -> {1 -> {2}}]],
   {{zoom, 3}, 0.0001, 3, Appearance -> "Labeled"},
   {{p, {a - 1, L - ϵ}}, Locator, Appearance -> None},
   {{q, {a + 1, L + ϵ}}, Locator, Appearance -> None}]]

Then the students can enter:

formalLimit[Sqrt[x], x, 1, 1, .5]

And they get this image, where they can use their mouse to drag the dashed vertical lines to help determine $\delta$. There is also a slider for some zooming to adjust the horizontal size of the window. This function has only been slightly tested, so I wouldn't mind hearing warnings and suggestions.
These are only going to be soft questions. And I thought this question is also a case for MO, so I have posted a duplicate there (does that comply with the etiquette here? In case not, I am sorry.) When looking at the Liouville function, defined as $$ \lambda(n) = (-1)^{\Omega_n},$$ where $\Omega_n$ is the total count of prime factors of $n$ (including multiplicity), it occurred to me that this in a sense parallels an irreducible representation for $\Omega_n$ and hence also for the multiplicative semigroup of integers. The map $n\rightarrow \Omega_n$ is a multiplicative homomorphism. So one could generalise the Liouville function to $$ \lambda_m(n) = (e^{i\frac{2\pi}{m}})^{\Omega_n \mathrm{mod}\,m} ,$$ which recovers for $m=2$ the normal Liouville function $$ \lambda_2(n) = (e^{i\frac{2\pi}{2}})^{\Omega_n \mathrm{mod}\,2} = (-1)^{\Omega_{n} {\mathrm{mod}}\,2} = \lambda(n).$$ In this way one would get other such functions "mimicking" irreducible representations. The questions are (i) if this analogy has been exploited already, (ii) if these functions are used already and (iii) if one could show the orthogonality $$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=1}^{N}\overline{\lambda_i(n)}\lambda_j(n)\overset{?}=\delta_{ij}.$$ For $\lambda_j(n)=1\;\forall n$ (the "totally symmetric representation") and $\lambda_i(n)=\lambda(n)$, that reduces to $$\lim_{N\rightarrow\infty}\frac{L_N}{N},$$ with the Liouville sum function $L_N$, which is known to be $o(N)$ (and conjectured, equivalently to the Riemann hypothesis, to be $O(N^{1/2+\epsilon})$).
"One-Line" Proof: Fundamental Group of the Circle Once upon a time I wrote a six-part blog series on why the fundamental group of the circle is isomorphic to the integers. (You can read it here, though you may want to grab a cup of coffee first.) Last week, I shared a proof* of the same result. In one line . On Twitter. I also included a fewer-than-140-characters explanation. But the ideas are so cool that I'd like to elaborate a little more. As you might guess, the tools are more sophisticated than those in the original proof, but they make frequent appearances in both topology and category theory, so I think it's worth a blog post. (Or six. Heh.) To keep the discussion at a reasonable length, I'll have to assume the reader has some familiarity with So without further ado, I present Theorem : The fundamental group of the circle is isomorphic to ℤ. Proof: Let's take a closer look at each of the three isomorphisms. The Loop-Suspension Adjunction There are two important functors in topology called based loop $\Omega$ and reduced suspension $\Sigma$: The loop functor $\Omega$ assigns to each pointed space $X$ (that is, a space with a designated basepoint) the space $\Omega X$ of based loops in $X$, i.e. loops that start and end at the basepoint of $X$. On the other hand, $\Sigma$ assigns to each $X$ the (reduced) suspension $\Sigma X$ of $X$. This space is the smash product of $X$ with $S^1$. In general it might not be easy to draw a picture of $\Sigma X$, but when an $n$-dimensional sphere, it turns out that $\Sigma S^n$ is homeomorphic to $S^{n+1}$ for $n\geq 0$. So for $n=1$ the picture is The loop-suspension adjunction is a handy, categorical result which says that $\Omega$ and $\Sigma$ interact very nicely with each other: up to homotopy, maps out of suspension spaces are the same as maps in to loop spaces. More precisely, for all pointed topological spaces $X$ and $Y$ there is a natural isomorphism Here I'm using the notation $[A,B]$ to indicate the set of homotopy classes of basepoint-preserving maps from $A\to B$. (Two based maps are homotopic if there is a basepoint-preserving homotopy between them. This is an equivalence relation, and the equivalence classes are given the name homotopy classes.) This, together with the observation that the $n$th homotopy group $\pi_n(X)$ is by definition $[S^n,X]$, yields the following: And that's the first isomorphism above!** Remark: The $\Omega-\Sigma$ adjunction is just one example of a general categorial construction. Two functors are said to form an if they are - adjunction very loosely speaking - dual to each other. I had planned to blog about adjunctions after our series on natural transformations but ran out of time! In the mean time, I recommend looking at chapter 4 of Emily Riehl's Category Theory in Context for a nice discussion. adjunction The Homotopy Equivalence Next, let's say a word about why $\Omega S^1$ and $\mathbb{Z}$ are homotopy equivalent. This equivalence will immediately imply the second isomorphism above since $\pi_0$ (and in fact each $\pi_n$) is a functor, and functors preserve isomorphisms. (That is, $\pi_0$ sends homotopy equivalent spaces to isomorphic sets.***) Now, why are $\Omega S^1$ and $\mathbb{Z}$ homotopy equivalent? It's a consequence of the Claim: A homotopy equivalence between fibrations induces a homotopy equivalence between fibers. Eh, that was a mouthful, I know. Let's unwind it. 
Roughly speaking, a map $p:E\to B$ of topological spaces is called a fibration over $B$ if you can always lift a homotopy in $B$ to a homotopy in $E$, provided the initial "slice" of the homotopy in $B$ has a lift. And the preimage $p^{-1}(b)\subset E$ of a point $b\in B$ is called the fiber over $b$. So the claim is that if $p:E\to B$ and $p':E'\to B$ are two fibrations over $B$, and if there is a map between them that is a homotopy equivalence (we'd need to properly define what this entails, but it can be done), then there is a homotopy equivalence between the fibers $p^{-1}(b)$ and $p'^{-1}(b)$. Example #1 The familiar map $\mathbb{R}\to S^1$ that winds $\mathbb{R}$ around the circle by $x\mapsto e^{2\pi ix}$ is a fibration, and the fiber above the basepoint $1\in S^1$ is $\mathbb{Z}$. Incidentally, we proved this in the original six-part series that I mentioned earlier! Example #2 The based path space $\mathscr{P}S^1$ of the circle gives another example. This is the space of all paths in $S^1$ that start at the basepoint $1\in S^1$. The map $\mathscr{P}S^1\to S^1$ which sends a path to its end point is a fibration. What's the fiber above $1$? By definition, a path is in the fiber if and only if it starts and ends at 1. But that's precisely a loop in $S^1$! So the fiber above 1 is $\Omega S^1$. These examples give us two fibrations over the circle: $\mathbb{R}\to S^1$ and $\mathscr{P} S^1\to S^1$. And it gets even better. Both $\mathbb{R}$ and $\mathscr{P}S^1$ are contractible and therefore homotopy equivalent! By the claim above, $\Omega S^1$ and $\mathbb{Z}$ must be homotopy equivalent, too. This gives us the second isomorphism above. Pretty neat, right? If you're interested in the details of the claim and ideas used here, take a look at J. P. May's A Concise Course in Algebraic Topology, chapter 7.5. By the way, there is a dual notion to fibrations called cofibrations. (Roughly: a map is a cofibration if you can extend - rather than lift - homotopies.) And both of these topological maps have abstract, categorical counterparts -- also called (co)fibrations -- which play a central role in model categories. The Integers are Discrete The third isomorphism is relatively simple: we just have to think about what $\pi_0(X)$ really is. Recall that $\pi_0(X)=[S^0,X]$ consists of homotopy classes of basepoint-preserving maps $S^0\to X$. But $S^0$ is just two points, say $-1$ and $+1$, and one of them, say $-1$, must map to the basepoint of $X$. So a basepoint-preserving map $S^0\to X$ is really just a choice of a point in $X$. And any two such maps are homotopic when there's a path between the corresponding points! So $\pi_0(X)$ is the set of path components of $X$. It follows that $\pi_0(\mathbb{Z})\cong\mathbb{Z}$ since there are $\mathbb{Z}$-many path components in $\mathbb{Z}$. And that's precisely the third isomorphism above! And with that, we conclude. QED Okay, okay, I suppose with all the background and justification, this isn't an honest-to-goodness one-line proof. But I still think it's pretty cool! Especially since it calls on some nice constructions in topology and category theory. Well, as promised in my previous post, I'm (supposed to be) taking a small break from blogging to prepare for my oral exam. But I had to come out of hiding to share this with you - I thought it was too good not to! Until next time! **You might worry that $\pi_0(\Omega S^1)$ is just a set with no extra structure. But it's actually a group!
To see this, note that there is a multiplication on $\Omega S^1$ given by loop concatenation. It's not associative, but it is up to homotopy. (So loop spaces are not groups. They are, however, $A_\infty$ spaces.) So in general, sets of the form $[X,\Omega Y]$ are groups. For more, see section 8.2 of May's book. ***Yes, sets. Not groups. In general, $\pi_0(X)$ is merely a set (unlike $\pi_n(X)$ for $n\geq 1$, which is always a group). But we're guaranteed that $\pi_0(\mathbb{Z})$ is a group since it's isomorphic to $\pi_0(\Omega S^1)$ (see the second footnote).
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: $(f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau$, whe...

That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the $Y_{t+\tau}$? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time?

Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered?

@tpg2114 For reducing the data points needed to calculate a time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points.

@DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)

The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is $\frac{dT}{dt}$ and doesn't have an axis -- I don't care what the values are. The solid blue line is abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time.

Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
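For the three numbered questions above, a minimal NumPy/SciPy sketch may help (synthetic signals stand in for the strain and $\frac{dT}{dt}$ series, since the real data isn't shown): with mode="full" the output has $2N-1 \approx 400$ samples for $N=200$; subtracting the means makes the sign of the peak meaningful; and scipy.signal.correlation_lags (SciPy >= 1.6) maps the argmax back to a lead/lag in samples.

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    n = 200
    x = np.sin(2 * np.pi * np.arange(n) / 40) + 0.1 * rng.standard_normal(n)
    y = np.roll(x, 30)                 # y trails x by 30 samples (circular shift, demo only)

    x0 = x - x.mean()                  # remove means so the peak's sign is meaningful
    y0 = y - y.mean()
    corr = signal.correlate(y0, x0, mode="full")            # length 2n - 1 = 399 (~"400")
    lags = signal.correlation_lags(len(y0), len(x0), mode="full")
    lag = lags[np.argmax(corr)]        # positive lag: y trails x by that many samples
    print(lag, lag * 1e-9, "s")        # 30 samples -> a 30 ns delay

Multiplying the recovered lag by the 1e-9 s sample spacing converts it to a physical delay.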
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month, (2) do we reword the homework policy, and how, (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question. It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy.

For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar} ~p$$ $$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$ and then define the following: $$K_1=\frac 14 (p^2-q^2)$$ $$K_2=\frac 14 (pq+qp)$$ $$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$ The first part is to show that $$Q \...

Okay. I guess we'll have to see what people say, but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics. Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay.

@jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual". I.e., is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's approximation is? Some have argued that is on topic even though there's nothing really physical about it, just because it's 'graduate level'. Others would argue it's not on topic because it's not conceptual.

How can one prove that $$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}}, $$ for a sufficiently well-behaved operator $\cal{A}$? How (mathematically) rigorous is the expression? I'm looking at the $d=2$ Euclidean case, as discuss...

I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.)

@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote" -- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce the visibility of HW, then the tag becomes less of a bone of contention.

@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as homework.

@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main-page closeable homework clutter.
@Dilaton also, have a look at the top-voted answers on both.

Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (My initial feeling is no, because it's really a math question, but I figured I'd ask anyway.)

@DavidZ Ya, I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.

hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X".

user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube.

@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper).
I have a regression model $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u$. It is known that the sample means of both $x_1$ and $x_2$ are zero, the error term is homoskedastic, and the standard error of the regression and the standard errors of the OLS estimates of the slope coefficients are given. Can we say something about the total sum of squares (i.e. $\sum_{i=1}^n (y_i - \bar{y})^2$), namely, can we somehow estimate its smallest possible value? What I have tried: $\sum_{i=1}^n (y_i - \bar{y})^2 = \sum_{i=1}^n(\hat{\beta_0} + \hat{\beta_1}x_{1i} + \hat{\beta_2}x_{2i} + \hat{u_i} - \bar{y})^2 = \sum_{i=1}^n (\hat{\beta_1}x_{1i} + \hat{\beta_2}x_{2i})^2 + \sum_{i=1}^n \hat{u_i}^2$. We know the second term, since we know the standard error. So it remains to estimate the first one. The standard errors of the OLS estimates of the slope coefficients are $$se(\hat{\beta_1}) = \sqrt{\dfrac{\sum_{i=1}^nx_{2i}^2}{\sum_{i=1}^nx_{1i}^2\sum_{i=1}^nx_{2i}^2- (\sum_{i=1}^nx_{1i}x_{2i})^2}} \hat{\sigma}$$ $$se(\hat{\beta_2}) = \sqrt{\dfrac{\sum_{i=1}^nx_{1i}^2}{\sum_{i=1}^nx_{1i}^2\sum_{i=1}^nx_{2i}^2- (\sum_{i=1}^nx_{1i}x_{2i})^2}} \hat{\sigma}$$ Since we know $se(\hat{\beta_1}), se(\hat{\beta_2}), \hat{\sigma}$, we can compute the ratio $\dfrac{\sum_{i=1}^nx_{1i}^2}{\sum_{i=1}^nx_{2i}^2}$. Moreover, we can express $\sum_{i=1}^nx_{1i}^2$ and $\sum_{i=1}^nx_{2i}^2$ through $(\sum_{i=1}^nx_{1i}x_{2i})^2$. Thus, $$\sum_{i=1}^n (\hat{\beta_1}x_{1i} + \hat{\beta_{2}}x_{2i})^2 = \hat{\beta_1}^2\sum_{i=1}^nx_{1i}^2 + \hat{\beta_2}^2\sum_{i=1}^nx_{2i}^2 + 2 \hat{\beta_1}\hat{\beta_2}\sum_{i=1}^nx_{1i}x_{2i}$$ The idea was to plug the expressions for $\sum_{i=1}^nx_{1i}^2$ and $\sum_{i=1}^nx_{2i}^2$ into the aforementioned expression and try to minimize it over $\sum_{i=1}^nx_{1i}x_{2i}$, treating $\hat{\beta_1}, \hat{\beta_2}$ like constants. But I do not think that this is the right way, since once we change $\sum_{i=1}^nx_{1i}x_{2i}$ we also change $\hat{\beta_1}, \hat{\beta_2}$, because the OLS formulas include $\sum_{i=1}^nx_{1i}x_{2i}$. Plugging in the formulas for the OLS estimates also does not seem to be a good idea, since these formulas include the sample covariances between $y$ and $x_1$, $x_2$, which we do not know. Here I got stuck. Could you please give me any hints on how to proceed? Thanks a lot in advance for any help!
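As a sanity check on the first step of the attempt above (not a solution to the minimization itself), here is a small NumPy sketch with simulated data and illustrative coefficient values, verifying that with zero-mean regressors the TSS splits exactly into the fitted part $\sum_{i}(\hat{\beta_1}x_{1i}+\hat{\beta_2}x_{2i})^2$ plus $\sum_i \hat{u_i}^2$:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x1 = rng.standard_normal(n); x1 -= x1.mean()   # enforce zero sample means
    x2 = rng.standard_normal(n); x2 -= x2.mean()
    y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.standard_normal(n)

    X = np.column_stack([np.ones(n), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit
    resid = y - X @ beta

    tss = np.sum((y - y.mean()) ** 2)
    fit_part = np.sum((beta[1] * x1 + beta[2] * x2) ** 2)
    ssr = np.sum(resid ** 2)
    print(np.isclose(tss, fit_part + ssr))         # True: the decomposition is exact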
Assume we have two independent random samples $(X_i)_{i=1}^{n}$ and $(Y_j)_{j=1}^{m}$ where $X_i\sim\mathrm{Uniform}(0,\theta_1)$ and $Y_j\sim\mathrm{Uniform}(0,\theta_2)$. To test, at the significance level $\alpha$, $H_0: \theta_1 = \theta_2$ against $H_1:\theta_1\ne\theta_2$ using the Likelihood Ratio Test, denoting by $Z$ the ratio of the $n$-th and $m$-th order statistics of the two samples, i.e. $Z = \frac{X_{(n)}}{Y_{(m)}}$, we may find: $$ \Lambda = \frac{ \left( \max{X_i} \right) ^ {n} \left( \max{Y_j} \right) ^ m }{ \left( \max \left(X_i, Y_j \right) \right) ^ {n + m} } = \begin{cases} Z ^ {-m} & X_{(n)} \ge Y_{(m)} \\ Z^{n} & X_{(n)} < Y_{(m)} \end{cases} $$ The form of the rejection region $C$ is $C = \{Z\le c_1 <1\}\cup\{Z\ge c_2 > 1\}$, by considering the cases where $X_{(n)} < Y_{(m)}$ or otherwise, respectively. To further determine the constants $0 < c_1 \le 1$ and $c_2 \ge 1$, I am not sure whether to set: $$ \mathrm{Pr} \left( Z \ge c_2 \mid Z \ge 1\right) = \mathrm{Pr} \left (Z \le c_1 \mid Z < 1 \right) \overset{\mathrm{set}}{=} \alpha, $$ or: $$ \mathrm{Pr}\left( Z \ge c_2 \right) = \mathrm{Pr}\left( Z \le c_1 \right) \overset{\mathrm{set}}{=} \frac{\alpha}{2}. $$ Note that for two independent samples of sizes $n$ and $m$ following the same uniform distribution $\mathrm{Uniform}(0,\beta)$, with the maxima of the observations being $V$ and $W$ respectively, we have $\mathrm{Pr}(V\le kW) = \frac{mk^n}{n+m}$ for any constant $0<k<1$. If I follow the first idea, I can get: $$ C = \left\{ Z \le \alpha ^ {\frac{1}{n}} \right\} \cup \left\{ Z \ge \alpha ^ {-\frac{1}{m}} \right\}, $$ since $$ \begin{align} \mathrm{Pr} \left (Z \ge c_2 \mid Z \ge 1 \right) & = \frac{ \mathrm{Pr} \left( Z \ge c_2, Z \ge 1 \right) }{ \mathrm{Pr} \left ( Z \ge 1 \right)} \\ & = \frac{ \mathrm{Pr} \left( Y_{(m)} \le \frac{1}{c_2} X_{(n)} \right) } { \mathrm{Pr} \left( Y_{(m)} \le X_{(n)} \right) } \\ & = \frac{ \frac{n}{n + m} \left( \frac{1}{c_2} \right) ^ {m} } { \frac{n}{n + m} } \\ & = c_{2}^{-m} \\ & \overset{ \mathrm{set} }{=} \alpha \end{align} $$ Hence $c_2 = \alpha ^ {-\frac{1}{m}}$, and similarly $c_1 = \alpha ^ {\frac{1}{n}}$. But if I follow the second idea, since $\frac{m}{m+n}$ and $\frac{n}{m+n}$ are both less than $1$, I get: $$ C = \left\{ Z \le \left( \frac{n+m}{m} \frac{\alpha}{2} \right) ^ {\frac{1}{n}} \right\} \cup \left\{ Z \ge \left( \frac{n+m}{n} \frac{\alpha}{2} \right)^{-\frac{1}{m}} \right\}. $$ Apparently I have made a mistake, since the two regions do not agree unless the samples are of equal size. Nevertheless, I have trouble locating where my argument went wrong. Thank you very much in advance. EDIT: I found a good discussion in this thread on finding the joint p.d.f. of $X_{(n)}$ and $Y_{(m)}$ (and more) for this test. Nevertheless, it remains a question for me how to determine the rejection region. I think this is similar to the cases where we perform a two-sided LRT where the likelihood ratio is a step function. For example, in a two-sided $F$ test for the variances of two normal samples ($H_0: \sigma_{0}^2=\sigma_{1}^2$ against $H_1: \sigma_{0}^2\ne\sigma_{1}^2$), the standard procedure follows the second idea above. Also of interest, in DeGroot (p. 603, 4th Ed.), it is noted while deriving the two-sided $F$ test from the LRT that: Unfortunately, it is usually tedious to compute the necessary values $c_1$ and $c_2$.
For this reason, people often abandon the strict likelihood ratio criterion in this case and simply let $c_1$ and $c_2$ be the $\frac{\alpha_0}{2}$ and $1 - \frac{\alpha_0}{2}$ quantiles of the appropriate $F$ distribution. Might it also be theoretically justifiable to derive the rejection region using the first idea? This question is also posted on Quora.
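One way to probe the discrepancy numerically: a Monte Carlo sketch (sample sizes and seed arbitrary) estimating the size of both candidate regions under $H_0$. Both come out close to $\alpha$, which suggests the issue is not an algebra slip but that a size-$\alpha$ requirement alone does not determine $c_1$ and $c_2$ uniquely; the two ideas simply split the rejection probability between the tails differently.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, alpha, reps = 5, 12, 0.05, 200_000
    x = rng.uniform(size=(reps, n)).max(axis=1)   # X_(n) under H0 (theta = 1 wlog)
    y = rng.uniform(size=(reps, m)).max(axis=1)   # Y_(m)
    z = x / y

    # First idea: equal conditional tail probabilities
    r1 = np.mean((z <= alpha ** (1 / n)) | (z >= alpha ** (-1 / m)))
    # Second idea: equal unconditional tails of alpha/2 each
    c1 = ((n + m) / m * alpha / 2) ** (1 / n)
    c2 = ((n + m) / n * alpha / 2) ** (-1 / m)
    r2 = np.mean((z <= c1) | (z >= c2))
    print(r1, r2)   # both close to 0.05: two different size-alpha regions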
A simple construction of inertial manifolds under time discretization

1. Research Center for Applied Mathematics, Xi'an Jiaotong University, Xi'an, 710049, China

$$T_h^0\Phi(p)=\sum_{k=1}^{\infty}R(h)^kQF(p^{-k}+\Phi(p^{-k}))$$ where $p^{-k}=(S^h_\Phi)^{-k}(p)$; see [1] for the detailed definition. Here we obtain existence by solving the following equation for $\Phi$: $$\Phi(S_\Phi^h(p))=R(h)[\Phi(p)+hQF(p+\Phi(p))] \mbox{ for all } p\in PH,$$ which describes just the invariant property of inertial manifolds; see section 1 for further explanation. Finally we prove the $C^1$ smoothness of inertial manifolds.

Mathematics Subject Classification: 35Q10, 65N3.

Citation: Changbing Hu, Kaitai Li. A simple construction of inertial manifolds under time discretization. Discrete & Continuous Dynamical Systems - A, 1997, 3 (4): 531-540. doi: 10.3934/dcds.1997.3.531

[1] Olivier Goubet, Ezzeddine Zahrouni. On a time discretization of a weakly damped forced nonlinear Schrödinger equation.
[2] T. Colin, Géraldine Ebrard, Gérard Gallice. Semi-discretization in time for nonlinear Zakharov waves equations.
[3] Pierluigi Colli, Shunsuke Kurima. Time discretization of a nonlinear phase field system in general domains.
[4]
[5] Maurizio Grasselli, Nicolas Lecoq, Morgan Pierre. A long-time stable fully discrete approximation of the Cahn-Hilliard equation with inertial term.
[6]
[7] Matthieu Hillairet, Alexei Lozinski, Marcela Szopos. On discretization in time in simulations of particulate flows.
[8] Yalçin Sarol, Frederi Viens. Time regularity of the evolution solution to fractional stochastic heat equation.
[9] Philippe Michel, Bhargav Kumar Kakumani. GRE methods for nonlinear model of evolution equation and limited ressource environment.
[10] Norbert Koksch, Stefan Siegmund. Feedback control via inertial manifolds for nonautonomous evolution equations.
[11] Yinhua Xia, Yan Xu, Chi-Wang Shu. Efficient time discretization for local discontinuous Galerkin methods.
[12] Akio Ito, Noriaki Yamazaki, Nobuyuki Kenmochi. Attractors of nonlinear evolution systems generated by time-dependent subdifferentials in Hilbert spaces.
[13] Jun-Ren Luo, Ti-Jun Xiao. Decay rates for second order evolution equations in Hilbert spaces with nonlinear time-dependent damping.
[14] Peter E. Kloeden, Björn Schmalfuss. Lyapunov functions and attractors under variable time-step discretization.
[15] D. Hilhorst, L. A. Peletier, A. I. Rotariu, G. Sivashinsky. Global attractor and inertial sets for a nonlocal Kuramoto-Sivashinsky equation.
[16] L. Dieci, M. S Jolly, Ricardo Rosa, E. S. Van Vleck. Error in approximation of Lyapunov exponents on inertial manifolds: The Kuramoto-Sivashinsky equation.
[17] Alfonso C. Casal, Jesús Ildefonso Díaz, José M. Vegas. Finite extinction time property for a delayed linear problem on a manifold without boundary.
[18]
[19] Nakao Hayashi, Elena I. Kaikina, Pavel I. Naumkin. Large time behavior of solutions to the generalized derivative nonlinear Schrödinger equation.
[20] Nakao Hayashi, Pavel I. Naumkin. Asymptotic behavior in time of solutions to the derivative nonlinear Schrödinger equation revisited.
2018-09-02 17:21 Measurement of $P_T$-weighted Sivers asymmetries in leptoproduction of hadrons / COMPASS Collaboration. The transverse spin asymmetries measured in semi-inclusive leptoproduction of hadrons, when weighted with the hadron transverse momentum $P_T$, allow for the extraction of important transverse-momentum-dependent distribution functions. In particular, the weighted Sivers asymmetries provide direct information on the Sivers function, which is a leading-twist distribution that arises from a correlation between the transverse momentum of an unpolarised quark in a transversely polarised nucleon and the spin of the nucleon. [...] arXiv:1809.02936; CERN-EP-2018-242. Geneva : CERN, 2019-03. 20 p. Published in : Nucl. Phys. B 940 (2019) 34-53.

2018-02-14 11:43 Light isovector resonances in $\pi^- p \to \pi^-\pi^-\pi^+ p$ at 190 GeV/${\it c}$ / COMPASS Collaboration. We have performed the most comprehensive resonance-model fit of $\pi^-\pi^-\pi^+$ states using the results of our previously published partial-wave analysis (PWA) of a large data set of diffractive-dissociation events from the reaction $\pi^- + p \to \pi^-\pi^-\pi^+ + p_\text{recoil}$ with a 190 GeV/$c$ pion beam. The PWA results, which were obtained in 100 bins of three-pion mass, $0.5 < m_{3\pi} < 2.5$ GeV/$c^2$, and simultaneously in 11 bins of the reduced four-momentum transfer squared, $0.1 < t' < 1.0$ $($GeV$/c)^2$, are subjected to a resonance-model fit using Breit-Wigner amplitudes to simultaneously describe a subset of 14 selected waves using 11 isovector light-meson states with $J^{PC} = 0^{-+}$, $1^{++}$, $2^{++}$, $2^{-+}$, $4^{++}$, and spin-exotic $1^{-+}$ quantum numbers. [...] arXiv:1802.05913; CERN-EP-2018-021. Geneva : CERN, 2018-11-02. 72 p. Published in : Phys. Rev. D 98 (2018) 092003.

2018-02-07 15:23 Transverse Extension of Partons in the Proton probed by Deeply Virtual Compton Scattering / Akhunzyanov, R. (Dubna, JINR); Alexeev, M.G. (Turin U.); Alexeev, G.D. (Dubna, JINR); Amoroso, A. (Turin U.; INFN, Turin); Andrieux, V. (Illinois U., Urbana; IRFU, Saclay); Anfimov, N.V. (Dubna, JINR); Anosov, V. (Dubna, JINR); Antoshkin, A. (Dubna, JINR); Augsten, K. (Dubna, JINR; CTU, Prague); Augustyniak, W. (NCBJ, Swierk) et al. We report on the first measurement of exclusive single-photon muoproduction on the proton by COMPASS using 160 GeV/$c$ polarized $\mu^+$ and $\mu^-$ beams of the CERN SPS impinging on a liquid hydrogen target. [...] CERN-EP-2018-016; arXiv:1802.02739. 2018. 13 p.

2017-09-19 08:11 Transverse-momentum-dependent Multiplicities of Charged Hadrons in Muon-Deuteron Deep Inelastic Scattering / COMPASS Collaboration. A semi-inclusive measurement of charged hadron multiplicities in deep inelastic muon scattering off an isoscalar target was performed using data collected by the COMPASS Collaboration at CERN.
The following kinematic domain is covered by the data: photon virtuality $Q^{2}>1$ (GeV/$c$)$^2$, invariant mass of the hadronic system $W > 5$ GeV/$c^2$, Bjorken scaling variable in the range $0.003 < x < 0.4$, fraction of the virtual photon energy carried by the hadron in the range $0.2 < z < 0.8$, square of the hadron transverse momentum with respect to the virtual photon direction in the range 0.02 (GeV/$c)^2 < P_{\rm{hT}}^{2} < 3$ (GeV/$c$)$^2$. [...] CERN-EP-2017-253; arXiv:1709.07374. Geneva : CERN, 2018-02-08. 23 p. Published in : Phys. Rev. D 97 (2018) 032006.

2017-07-08 20:47 New analysis of $\eta\pi$ tensor resonances measured at the COMPASS experiment / JPAC Collaboration. We present a new amplitude analysis of the $\eta\pi$ $D$-wave in $\pi^- p\to \eta\pi^- p$ measured by COMPASS. Employing an analytical model based on the principles of the relativistic $S$-matrix, we find two resonances that can be identified with the $a_2(1320)$ and the excited $a_2^\prime(1700)$, and perform a comprehensive analysis of their pole positions. [...] CERN-EP-2017-169; JLAB-THY-17-2468; arXiv:1707.02848. Geneva : CERN, 2018-04-10. 9 p. Published in : Phys. Lett. B 779 (2018) 464-472.

2017-01-05 16:00 First measurement of the Sivers asymmetry for gluons from SIDIS data / COMPASS Collaboration. The Sivers function describes the correlation between the transverse spin of a nucleon and the transverse motion of its partons. It was extracted from measurements of the azimuthal asymmetry of hadrons produced in semi-inclusive deep inelastic scattering of leptons off transversely polarised nucleon targets, and it turned out to be non-zero for quarks. [...] CERN-EP-2017-003; arXiv:1701.02453. Geneva : CERN, 2017-09-10. 11 p. Published in : Phys. Lett. B 772 (2017) 854-864.
You can resolve this by logical calculus only. Using standard notation, $\land$ is the "logical and", $\lor$ is the "logical or", and $\lnot$ is negation. Let's use the variables $t,s,m,b,c$ to represent the different people being the culprit. The first line can be written: $(\lnot c \land \lnot s )\lor( c \land s)$ (one of the two statements is true and the other is false). So if we apply this to all these people: $\hphantom{\lor}((\lnot c \land \lnot s) \lor( c \land s))\\ \land((\lnot m \land c) \lor (m \land \lnot c))\\ \land((c\land t )\lor( \lnot c \land \lnot t))\\ \land((m \land \lnot s )\lor( \lnot m \land s))\\ \land((b \land t )\lor( \lnot b \land \lnot t))$ Since there's only one culprit, things like $c \land s$ are always false, so we can remove them, and we obtain: $(\lnot c \land \lnot s \land \lnot b \land \lnot t) \land (\lnot m \land c \lor m \land \lnot c)\land((m \land \lnot s )\lor( \lnot m \land s))$ If you distribute $\land$ over $\lor$ you get: $\hphantom{\lor}(\lnot c \land \lnot s \land \lnot b \land \lnot t \land \lnot m \land c \land m \land\lnot s)\\ \lor( \lnot c \land \lnot s \land \lnot b \land \lnot t \land\lnot m \land c \land \lnot m \land s)\\ \lor( \lnot c \land \lnot s \land \lnot b \land \lnot t \land m \land\lnot c \land m \land \lnot s)\\ \lor( \lnot c \land \lnot s \land \lnot b \land \lnot t \land m \land \lnot c \land \lnot m \land s)$ Things like $c \land \lnot c$ are false (nothing is both itself and its contrary), so we obtain: $\lnot c \land \lnot s \land \lnot b \land \lnot t \land m$, hence: $m$. So the culprit is: Matt. I couldn't find any better notation than this; sorry if it's not very readable. Also, sorry if my English is not very good. It's easier to do it yourself on paper. I added this answer because there was no calculus-only answer here, and I find the method powerful: you don't have to read the puzzle five times to find the answer, just three lines of "maths".
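The same elimination can be checked mechanically. A small brute-force sketch in Python (variable names matching the formulas above) enumerates all assignments with exactly one culprit and tests the five clauses:

    from itertools import product

    names = ("t", "s", "m", "b", "c")
    clauses = (
        lambda t, s, m, b, c: (not c and not s) or (c and s),
        lambda t, s, m, b, c: (not m and c) or (m and not c),
        lambda t, s, m, b, c: (c and t) or (not c and not t),
        lambda t, s, m, b, c: (m and not s) or (not m and s),
        lambda t, s, m, b, c: (b and t) or (not b and not t),
    )
    for vals in product([False, True], repeat=5):
        if sum(vals) == 1 and all(f(*vals) for f in clauses):
            print(dict(zip(names, vals)))   # only the assignment with m True survives

Only $m$ (Matt) satisfies every clause, matching the hand derivation.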
I have done this proof in my blog. Since I already have the code for the equations, I'm reproducing it here. We have to prove that $$F_{a-1, N-a} = \frac{MST}{MSE} = \frac{\frac{SST}{a-1}}{\frac{SSE}{N-a}} \tag{1}$$ reduces to $$t_{k}^2 = \frac{(\bar{y}_{1.} - \bar{y}_{2.})^2}{S_{p}^2(\frac{1}{n_{1}} + \frac{1}{n_{2}})} \tag{2}$$ $\color{red} {\text{when a = 2}}$ (this is key).

Notation
$SSE$: Sum of Squares due to Error
$SST$: Sum of Squares of Treatment
$MSE$: Mean Sum of Squares due to Error
$MST$: Mean Sum of Squares of Treatment
$a$: Number of treatments
$n_{1}$: Number of observations in treatment 1
$n_{2}$: Number of observations in treatment 2
$N$: Total number of observations
$\bar{y}_{i.}$: Mean of treatment $i$
$\bar{y}_{..}$: Global mean
$k = N - a$: Degrees of freedom of the denominator of $F$

Now that we have the formulas, we will work through the following:
1. Denominator of equation (1)
2. Numerator of equation (1)
2.a. Part a
2.b. Part b
2.c. Part c
3. Put it all together

1. Denominator of equation (1). When $a = 2$ the denominator of expression $(1)$ is: $$MSE = \frac{SSE}{N-2} = \frac{\sum_{j=1}^{n_1}{(y_{1j} - \bar{y}_{1.})^2} + \sum_{j=1}^{n_2}{(y_{2j} - \bar{y}_{2.})^2}}{N-2} \tag{3}$$ Recalling that the formula for the sample variance estimator is $$S_{i}^2 = \frac{\sum_{j=1}^{n_i}(y_{ij} - \bar{y}_{i.})^2}{n_{i} - 1},$$ we can multiply and divide the terms in the numerator in $(3)$ by $(n_{i} - 1)$ and get $(4)$. Don't forget that in this case $N = n_{1} + n_{2}$. $$\frac{SSE}{N-2} = \frac{(n_{1} - 1) S_{1}^2 + (n_{2} - 1) S_{2}^2}{n_{1} + n_{2} - 2} = S_{p}^2 \tag{4}$$ $S_{p}^2$ is called the pooled variance estimator.

2. Numerator of equation (1). When $a = 2$ the numerator of expression $(1)$ is: $$\frac{SST}{2-1} = SST$$ and the general expression for SST reduces to $SST = \sum_{1}^2 n_{i} (\bar{y}_{i.} - \bar{y}_{..})^2$. The next step is to expand the sum as follows: $$SST = \sum_{1}^2 n_{i} (\bar{y}_{i.} - \bar{y}_{..})^2 = n_{1} (\bar{y}_{1.} - \bar{y}_{..})^2 + n_{2} (\bar{y}_{2.} - \bar{y}_{..})^2 \tag{5}$$ $\bar{y}_{..}$ is called the global mean and we are going to write it in a different way: $$\bar{y}_{..} = \frac{n_{1} \bar{y}_{1.} + n_{2} \bar{y}_{2.}}{N} \tag{6}$$ Next, replace $(6)$ in formula $(5)$ and re-write SST as: $$SST = \underbrace{n_1 \big[ \bar{y}_{1.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2}_{\text{Part a}} + \underbrace{n_2 \big[ \bar{y}_{2.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2}_{\text{Part b}} \tag{7}$$ The next step is to find alternative expressions for Part a and Part b.

2.a. Part a. $$\text{Part a} = n_1 \big[ \bar{y}_{1.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$$ Multiply and divide the term with $\bar{y}_{1.}$ by $N$: $$n_1 \big[ \frac{N \bar{y}_{1.}}{N} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$$ $N$ is the common denominator: $$n_1 \big[\frac{N \bar{y}_{1.} - n_1 \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$$ $\bar{y}_{1.}$ is a common factor of $N$ and $n_1$: $$n_1 \big[\frac{(N - n_1) \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$$ Replace $(N - n_{1}) = n_{2}$: $$n_1 \big[\frac{n_2 \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$$ Now $n_{2}$ is a common factor of $\bar{y}_{1.}$ and $\bar{y}_{2.}$: $$n_1 \big[\frac{n_2 (\bar{y}_{1.} - \bar{y}_{2.})}{N} \big]^2$$ Take $n_{2}$ and $N$ out of the square: $$\text{Part a} = \frac{n_{1} n_{2}^2}{N^2} (\bar{y}_{1.} - \bar{y}_{2.})^2$$ 2.b.
Part b. $$\text{Part b} = n_2 \big[ \bar{y}_{2.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$$ Multiply and divide the term with $\bar{y}_{2.}$ by $N$: $$n_2 \big[ \frac{N \bar{y}_{2.}}{N} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$$ $N$ is the common denominator: $$n_2 \big[\frac{N \bar{y}_{2.} - n_1 \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$$ $\bar{y}_{2.}$ is a common factor of $N$ and $n_2$: $$n_2 \big[\frac{(N - n_2) \bar{y}_{2.} - n_1 \bar{y}_{1.}}{N} \big]^2$$ Replace $(N - n_{2}) = n_{1}$: $$n_2 \big[\frac{n_1 \bar{y}_{2.} - n_1 \bar{y}_{1.}}{N} \big]^2$$ Now $n_{1}$ is a common factor of $\bar{y}_{1.}$ and $\bar{y}_{2.}$: $$n_2 \big[\frac{n_1 (\bar{y}_{2.} - \bar{y}_{1.})}{N} \big]^2$$ Take $n_{1}$ and $N$ out of the square: $$\text{Part b} = \frac{n_{2} n_{1}^2}{N^2} (\bar{y}_{2.} - \bar{y}_{1.})^2$$ Now that we have Part a and Part b we are going to go back to equation $(7)$ and replace them: $$SST = \frac{n_{1} n_{2}^2}{N^2} (\bar{y}_{1.} - \bar{y}_{2.})^2 + \frac{n_{2} n_{1}^2}{N^2} (\bar{y}_{2.} - \bar{y}_{1.})^2 \tag{8}$$ Taking into account that $(\bar{y}_{1.} - \bar{y}_{2.})^2 = (\bar{y}_{2.} - \bar{y}_{1.})^2$, we can re-write equation $(8)$ as $(9)$: $$SST = \underbrace{\big[ \frac{n_{1} n_{2}^2}{N^2} + \frac{n_{2} n_{1}^2}{N^2} \big]}_{\text{Part c}} (\bar{y}_{1.} - \bar{y}_{2.})^2 \tag{9}$$ This leaves us with Part c, which we work out next.

2.c. Part c. $$\text{Part c} = \frac{n_{1} n_{2}^2}{N^2} + \frac{n_{2} n_{1}^2}{N^2}$$ $N^2$ is the common denominator and each of the summands has an $n_{1} n_{2}$ factor that we can factor out. Then we have: $$\frac{n_{1} n_{2} (n_{1} + n_{2})}{N^2}$$ Replace $N = n_{1} + n_{2}$: $$\frac{n_{1} n_{2} N}{N^2}$$ Simplify $N$: $$\frac{n_{1} n_{2}}{N}$$ Re-write the fraction: $$\frac{1}{\frac{N}{n_{1} n_{2}}}$$ Replace $N = n_{1} + n_{2}$: $$\frac{1}{\frac{n_{1} + n_{2}}{n_{1} n_{2}}} = \frac{1}{\frac{1}{n_{1}} + \frac{1}{n_{2}}}$$ And we have $$\text{Part c} = \frac{1}{\frac{1}{n_{1}} + \frac{1}{n_{2}}}$$ Finally, we have to replace this expression for Part c in $(9)$ and re-write SST as: $$SST = \frac{1}{\frac{1}{n_{1}} + \frac{1}{n_{2}}} (\bar{y}_{1.} - \bar{y}_{2.})^2$$

3. Put it all together. With the previous steps we have shown that, $\color{red} {\text{when a = 2}}$, we have: $$\frac{SST}{2-1} = \frac{(\bar{y}_{1.} - \bar{y}_{2.})^2}{\frac{1}{n_{1}} + \frac{1}{n_{2}}}$$ and $$\frac{SSE}{N-2} = S_{p}^2$$ The ratio of these two expressions, namely the F-statistic, is then: $$F_{1, k} = \frac{\frac{SST}{2-1}}{\frac{SSE}{N-2}} = \frac{(\bar{y}_{1.} - \bar{y}_{2.})^2}{S_{p}^2 \big( \frac{1}{n_{1}} + \frac{1}{n_{2}} \big)} = t_{k}^2$$ And this concludes the proof.
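A quick numerical check of the identity (a sketch with simulated groups; sizes and seed are arbitrary): scipy.stats.f_oneway gives the one-way ANOVA $F$, scipy.stats.ttest_ind with its default pooled-variance assumption gives $t_k$, and $F = t_k^2$ holds to machine precision:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    g1 = rng.normal(10.0, 2.0, size=14)   # treatment 1, n1 = 14
    g2 = rng.normal(11.0, 2.0, size=9)    # treatment 2, n2 = 9

    f_stat, p_f = stats.f_oneway(g1, g2)  # ANOVA with a = 2 treatments
    t_stat, p_t = stats.ttest_ind(g1, g2) # pooled-variance two-sample t test
    print(np.isclose(f_stat, t_stat ** 2))  # True
    print(np.isclose(p_f, p_t))             # the p-values agree as well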
Anisotropic flow of inclusive and identified particles in Pb-Pb collisions at $\sqrt{s_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11). Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2013-03). The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb-Pb collisions at $\sqrt{s_{NN}}$ = ...

Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01). The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...

The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02). The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...

Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-01-10). The production of the strange and double-strange baryon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) has been measured at mid-rapidity (|y| < 0.5) in proton-proton collisions at $\sqrt{s}$ = 7 TeV with the ALICE detector at the LHC. Transverse ...

Forward-backward multiplicity correlations in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20). The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09). The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...

Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06). We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of charged jet suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2014-03). A measurement of the transverse momentum spectra of jets in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-$k_{\rm T}$ jet algorithm with jet resolution parameters R ...

Measurement of pion, kaon and proton production in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-05-27). The measurement of primary $\pi^{\pm}$, $K^{\pm}$, p and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton-proton collisions at $\sqrt{s}$ = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...

Two- and three-pion quantum statistics correlations in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26). Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions.
In addition, such correlations can be used to search for the ...
Find the value of the triple summation $$\large \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \dfrac{1}{3^i \, 3^j \, 3^k}, \quad i \neq j \neq k.$$ If your answer comes in the form $\dfrac{a}{b}$ where $a$ and $b$ are positive coprime integers, then enter $a+b$ as your answer. Note: $i$, $j$, and $k$ are distinct. Inspiration.
The standard algorithm for finding a maximum still works. Start with $a_1$ and go over the elements; if you see a larger value, update the maximum to be that value. The reason this works is that every element you skipped is smaller than at least one element, and can thus not be the maximum. To be clear, by the "standard algorithm" I mean the following: max ...

As Ariel notes, the standard maximum-finding algorithm given below:

    def find_maximum(a):
        m = a[0]
        for x in a:
            if x > m:
                m = x
        return m

will in fact work without modification as long as: any pair of elements can be compared, and the input is guaranteed to contain a maximal element, i.e. an element that is pairwise greater than any ...

You asked: Can we run a sorting algorithm, feeding it a non-transitive comparator? The answer: Of course. You can run any algorithm with any input. However, you know the rule: Garbage In, Garbage Out. If you run a sorting algorithm with a non-transitive comparator, you might get nonsense output. In particular, there is no guarantee that the output will ...

You want to know the orbits of the action of the automorphism group of a graph on its vertices. This is equivalent to graph isomorphism, for which no really simple algorithms are known. Practical graph isomorphism algorithms, which work fast in practice, are known -- check nauty for example. Apparently nauty can compute the automorphism group directly, from ...

"Greatest means the element must be greater than every other element" is a huge hint on how to do this in $O(n)$. If you go through your list comparing elements, any element that "loses" a comparison can be immediately discarded since, in order to be the greatest, it must be greater than ALL other elements, so a single loss disqualifies it. Once you think of ...

There's a very simple algorithm for this:

    for each edge (u,v) in the graph:
        for each edge (v,w) in the graph:
            if (u,w) is not in the graph, return "Not transitive"
    return "Transitive"

Basically, if the graph is not transitive, then you can always find some path of length two $u\to v \to w$ such that the edge $u \to w$ is not present in the ...

Given a set of elements and a binary ordering relation, transitivity is required to totally order the elements. In fact, transitivity is even required to define a partial order on the elements. http://en.m.wikipedia.org/wiki/Total_order You would need a much broader definition of what "sorted" means in order to sort elements without transitivity. It is ...

I assume that the relation is antisymmetric for at least a single element (which guarantees the existence of the greatest element), otherwise the task is impossible. If all elements in the finite set are comparable then the usual finding-maximum procedure works. If some elements are not comparable then the following procedure will work:

    max = nil
    For i = 1 to n
    ...

It sounds as though what you want is to arrange items such that all discernible rankings are correct, but items which are close might be considered "indistinguishable". It is possible to design sort algorithms which will work with such comparisons, but unless there are limits to how many comparisons may report that things are indistinguishable, there is no ...

As an addition to Ariel's answer about the concerns raised by Emil Jeřábek: If we allow $A<B$ and $B<A$ then there is no $O(n)$ algorithm: Assume you have elements $A_1 ... A_n$. Your algorithm will in each step query $A_i<A_j$ for some pair $i$ and $j$. No matter in which order you query them, there is always a relation where you have to query all ...
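The candidate-then-verify procedure that the last truncated answer above begins to describe can be made concrete. A runnable sketch (the names `greatest` and `greater` are mine), assuming, as that answer does, an antisymmetric relation, so one scanning pass plus one verification pass, about $2n$ comparisons in total, either returns the greatest element or reports that none exists:

    def greatest(a, greater):
        # One pass: any element that loses a comparison cannot be greatest,
        # so only the surviving candidate can possibly qualify.
        candidate = a[0]
        for x in a[1:]:
            if greater(x, candidate):
                candidate = x
        # Verification pass: confirm the candidate beats every other element
        # (necessary because a greatest element need not exist at all).
        if all(greater(candidate, x) for x in a if x is not candidate):
            return candidate
        return None

    # Non-transitive example: rock/paper/scissors plus a "dynamite" beating all three.
    beats = {("dynamite", "rock"), ("dynamite", "paper"), ("dynamite", "scissors"),
             ("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
    print(greatest(["rock", "paper", "dynamite", "scissors"],
                   lambda x, y: (x, y) in beats))   # dynamite
    print(greatest(["rock", "paper", "scissors"],
                   lambda x, y: (x, y) in beats))   # None: no greatest element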
There's a $O(n^\omega)$-time algorithm that outputs a count, for each vertex, of the number of triangles that include that vertex. Here $O(n^\omega)$ is the running time for matrix multiplication. See, e.g., Number of triangles in an undirected graph, Is it a valid graph canonical form?, https://cstheory.stackexchange.com/q/9972/5038. This may or may not ... I'm going to cheat and call the number of A > B entries $n$ and you need to look at every entry at least once.That way you can loop over just the entries and remove the element that is less than the other from the set of possible solutions. With a hashset or similar this is possible in $O(n)$ (Note: I'm not 100% sure about the Alloy syntax since I never used it -- what follows is accurate on the general principles but the syntax could be slightly different)Basically, S.*bar means starting from S and using .bar any number $n\geq 0$ of times.Instead, S.^bar means starting from S and using .bar any positive number $n > 0$ of times.It is ...
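The matrix-multiplication idea in the first answer above is easy to try with dense NumPy (this illustrates the counting identity rather than an $O(n^\omega)$ implementation): the $i$-th diagonal entry of $A^3$ counts closed 3-walks at vertex $i$, which is twice the number of triangles through $i$:

    import numpy as np

    # Adjacency matrix of an undirected graph: a 4-cycle 0-1-2-3 plus the chord 0-2,
    # giving two triangles, (0,1,2) and (0,2,3).
    A = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [1, 0, 1, 0]])
    per_vertex = np.diag(np.linalg.matrix_power(A, 3)) // 2
    print(per_vertex)             # triangles through each vertex: [2 1 2 1]
    print(per_vertex.sum() // 3)  # total triangles (each counted at 3 vertices): 2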
I just learned that almost 99 percent of an atom's mass comes from the energy of the strong force field. But this energy is counted as my rest mass nevertheless. So in $F=ma$, should I take $m$ as the rest mass (as in the standard description)? Or should I take $m$ as the total energy that an object has? To make it more clear: if an object's kinetic energy increases, would it be harder to accelerate?

$F = ma$ is the approximation for when an object's rest mass is much greater than its kinetic energy. This approximation is good for anything traveling less than about 50 million mph. For objects moving at 10% of the speed of light or more, you have to worry a little more about the 'total energy' of the particle. Then it's better to use the definition of force: $F = \frac{dp}{dt}$, where $p$ is the momentum, $p = \gamma mv$, and $m$ is the rest mass of the particle, so it has no time dependence. Note that $\gamma$ and $v$ both have time dependence, so you have to use the product rule. For slower objects (relative to the speed of light), $\gamma \approx 1$ and this reduces to $F = m\frac{dv}{dt} = ma$, the familiar result.

In Newton's 2nd law, written in terms of force, mass and acceleration, $m$ stands for the mass in Newton's original work, i.e. the quantity of the substance which makes up a material body. This quantity is absolute, i.e. it will not change whether a body is at rest or moving (which is fine, because a body could be at rest in one Galilean inertial frame while moving with constant velocity in another). In the theory of special relativity, $\vec{F} = m\vec{a}$ is derived as the low-velocity approximation of $f^{a} = m \dot{u}^{a}$, where "$a$" is a generic index (in the sense of Wald's book), $f$ is the 4-force, $u$ is the 4-velocity, the dot stands for the derivative with respect to proper time, and $m$ is the invariant mass (or simply mass).
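To see how fast the $F=ma$ approximation degrades, here is a small numeric sketch (illustrative values) for one-dimensional motion: differentiating $p=\gamma m v$ via the product rule gives $F = \frac{dp}{dt} = \gamma^3 m a$ when the force is parallel to the velocity, and the code compares a numerical $dp/dv$ against $\gamma^3 m$:

    import numpy as np

    c = 299_792_458.0        # speed of light, m/s
    m = 1.0                  # rest mass, kg (illustrative)

    def p(v):                # relativistic momentum, p = gamma * m * v
        return m * v / np.sqrt(1.0 - (v / c) ** 2)

    for frac in (0.01, 0.1, 0.5, 0.9):
        v = frac * c
        dpdv = (p(v + 1.0) - p(v - 1.0)) / 2.0     # numeric dp/dv, step of 1 m/s
        gamma3 = (1.0 - frac ** 2) ** -1.5         # gamma cubed
        print(f"v = {frac:4.2f}c   dp/dv = {dpdv:.5f}   gamma^3 m = {gamma3 * m:.5f}")

At $v=0.1c$ the correction is only about 1.5%, consistent with the rule of thumb in the answer above; by $0.9c$ the effective inertia against further acceleration has grown roughly twelvefold.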
Training and Loss

Training a model by choosing parameters that minimize the average loss over the training examples is called empirical risk minimization. Mean square error (MSE) is the average squared loss per example over the whole dataset. To calculate MSE, sum up all the squared losses for individual examples and then divide by the number of examples: $MSE = \frac{1}{N} \sum_{(x,y)\in D} (y - prediction(x))^2$ where: $(x, y)$ is an example in which $x$ is the set of features (for example, chirps/minute, age, gender) that the model uses to make predictions and $y$ is the example's label (for example, temperature); $prediction(x)$ is a function of the weights and bias in combination with the set of features; $D$ is a data set containing many labeled examples, which are $(x, y)$ pairs; $N$ is the number of examples in $D$. Although MSE is commonly used in machine learning, it is neither the only practical loss function nor the best loss function for all circumstances.

Reducing Loss: An Iterative Approach

Figure 1. An iterative approach to training a model.

Gradient Descent

Gradients. The gradient of a function, denoted $\nabla f$, is the vector of partial derivatives with respect to all of the independent variables. For instance, if $f(x,y) = e^{2y}\sin(x)$ then $\nabla f(x,y) = \left(\frac{\partial f}{\partial x}(x,y), \frac{\partial f}{\partial y}(x,y)\right) = (e^{2y}\cos(x), 2e^{2y}\sin(x))$. Note the following: $\nabla f$ points in the direction of greatest increase of the function, and $-\nabla f$ points in the direction of greatest decrease of the function. The number of dimensions in the vector is equal to the number of variables in the formula for $f$; in other words, the vector falls within the domain space of the function. For instance, the graph of the function $f(x,y) = 4 + (x - 2)^2 + 2y^2$, when viewed in three dimensions with $z = f(x,y)$, looks like a valley with a minimum at $(2, 0, 4)$.

To determine the next point along the loss function curve, the gradient descent algorithm steps from the starting point in the direction of the negative gradient, by some fraction of the gradient's magnitude, as shown in the following figures: Figure 5. A gradient step moves us to the next point on the loss curve. Figure 6. Learning rate is too small. Figure 7. Learning rate is too large. Figure 8. Learning rate is just right. The ideal learning rate in one dimension is $\frac{1}{f''(x)}$ (the inverse of the second derivative of $f(x)$ at $x$). The ideal learning rate for 2 or more dimensions is the inverse of the Hessian (the matrix of second partial derivatives).

Mini-batch stochastic gradient descent (mini-batch SGD) is a compromise between full-batch iteration and SGD. A mini-batch is typically between 10 and 1,000 examples, chosen at random. Mini-batch SGD reduces the amount of noise in SGD but is still more efficient than full-batch.

Introducing TensorFlow

The following table shows the current hierarchy of TensorFlow toolkits:

Toolkit(s)                      | Description
Estimator (tf.estimator)        | High-level, OOP API.
tf.layers/tf.losses/tf.metrics  | Libraries for common model components.
TensorFlow                      | Lower-level APIs
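Tying the pieces together, here is a minimal sketch (synthetic data, illustrative learning rate) of batch gradient descent on MSE for a one-feature linear model $prediction(x) = wx + b$:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=100)
    y = 3.0 * x + 2.0 + 0.1 * rng.standard_normal(100)   # true w = 3, b = 2

    w, b, lr = 0.0, 0.0, 0.1
    for step in range(500):
        pred = w * x + b
        err = pred - y
        dw = 2.0 * np.mean(err * x)   # dMSE/dw
        db = 2.0 * np.mean(err)       # dMSE/db
        w -= lr * dw                  # step against the gradient
        b -= lr * db
    print(w, b)                       # approaches 3.0 and 2.0

Shrinking lr makes convergence slow (Figure 6), while making it much larger makes the steps overshoot (Figure 7).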
The Berlekamp-Massey algorithm and the extended Euclidean algorithm (both further extended with the Chien search and the Forney calculation of error values) for decoding BCH and RS codes (hereinafter referred to as the decoder) have the following characteristic that is not well understood by many people: If there is a codeword $\hat{\mathbf C}$ that differs from the received word $\mathbf R$ in $t$ or fewer locations, then the decoder finds $\hat{\mathbf C}$. Note the complete absence of any mention of $\hat{\mathbf C}$ being the correct (i.e. transmitted) codeword $\mathbf C$. Now, if $\mathbf R$ does happen to differ from $\mathbf C$ in $t$ or fewer locations, then the decoder does indeed find $\mathbf C$. Note that since the minimum distance is $2t+1$, there is no competing codeword $\mathbf C^\prime$ that is also within distance $t$ of $\mathbf R$ to lead the decoder astray: there is at most one codeword within distance $t$ from any given $\mathbf R$, and when such a codeword exists, the decoder finds it. But just because the decoder finds a codeword, there is no guarantee that it is the correct codeword $\mathbf C$; it might be some $\hat{\mathbf C} \neq \mathbf C$ that happens to be at distance $\leq t$ from $\mathbf R$. (The decoder is following the sailor's philosophy of "When I'm not near the girl I love, I love the girl I'm near" with the addendum that on the high seas, no girls are near). What it is correct to say is that (with the usual assumption that fewer errors are more likely to have occurred than more errors) with high probability the $\hat{\mathbf C}$ that the decoder finds is indeed the transmitted codeword $\mathbf C$. But the question is what happens when there have been more than $t$ errors. Can such an occurrence be unambiguously detected? The answer is NO for the reasons described above: the decoder may well find a $\hat{\mathbf C}$ that is different from the transmitted codeword $\mathbf C$ exactly as the decoder is designed to do, because $\hat{\mathbf C}$ is at distance $\leq t$ from $\mathbf R$, and the poor schmuck of a system designer will have no idea that s/he has just bought a pig in a poke. Once again, probability comes to the rescue. In the (fairly unlikely) event that more than $t$ errors have occurred, it is very much more often the case that the decoder fails to decode in several possible ways (which may or may not be detected by the system designer depending on how many safeguards have been built into the system) than that the decoder produces an incorrect codeword. The Berlekamp-Massey algorithm or the extended Euclidean algorithm find an error-locator polynomial $\Lambda(z)$ of degree $e$ and an error-evaluator polynomial $\Omega(z)$, where the inverse roots of $\Lambda(z)$ (as found by the Chien search) are the $e$ error locations and the Forney error-evaluation formula gives the error values at these $e$ error locations. When more than $t$ errors have occurred, it is usually the case that $\Lambda(z)$ has fewer than $e$ roots and/or that the Forney error-evaluation formula returns a zero error value at a purported error location that the Chien search has found. Both of these conditions indicate that something is awry, but testing for these conditions is expensive in hardware implementations and time-consuming in software implementations. The simplest test for detecting a decoder failure is to let the Chien search and Forney formula do their jobs, and then recompute the syndrome of whatever the decoder spits out.
If the recomputed syndrome is zero, then the decoder output is a valid codeword - whether it is $\mathbf C$ or an impostor $\hat{\mathbf C}$ is unknown - but if the recomputed syndrome is nonzero, then the system designer knows that the decoder output is not a valid codeword and that the decoder has actually failed to decode. But again, such a test is expensive to implement and very often skipped by system designers who know about it, and, of course, not even thought about by those who don't know of the possibility of decoder failure that can be detected. For a well-designed RS decoder system, it is the case that $$P_{\text{decoder output correct}} \gg P_{\text{decoder failure}} \gg P_{\text{decoder output incorrect}}.$$ The decoding spheres (the set of vectors or possible received words $\mathbf R$ that are at distance at most $t$ from some codeword and thus lead to a successful decoding, whether correct or incorrect) have very small total volume, and so it is far more likely than not that when more than $t$ errors have occurred, $\mathbf R$ is not in any of the decoding spheres, and the decoder will fail to decode and spit out garbage. Whether it is worthwhile detecting such failures is up to the system designer. It is sometimes the case that $1-P_{\text{decoder output correct}}$ is a perfectly acceptable word error rate, and so ignoring decoder failure is an option, but in high-performance systems where very low error rates are required, detecting decoder failure and asking for a re-transmission of a codeword that was badly munged up in transmission allows the achievement of the desired low error rates at the expense of increased delay.
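The syndrome-recomputation test is easy to demonstrate in miniature. Here is a toy sketch over the prime field GF(7) rather than the GF($2^8$) typically used in practice (so ordinary modular arithmetic suffices); $\alpha=3$ is a primitive element, $t=1$, and a word is a codeword exactly when its polynomial vanishes at $\alpha$ and $\alpha^2$:

    P, ALPHA, TWO_T = 7, 3, 2

    def syndromes(r):
        # S_j = r(alpha^j) for j = 1..2t; all zero iff r is a valid codeword
        return [sum(coef * pow(ALPHA, i * j, P) for i, coef in enumerate(r)) % P
                for j in range(1, TWO_T + 1)]

    # g(x) = (x - 3)(x - 2) = x^2 + 2x + 6 over GF(7); take the codeword c(x) = g(x)
    c = [6, 2, 1, 0, 0, 0]
    print(syndromes(c))           # [0, 0] -> a valid codeword

    r = list(c)
    r[4] = (r[4] + 5) % P         # corrupt one symbol
    print(syndromes(r))           # nonzero -> not a codeword: failure detected

A real system would run this same check on the decoder's own output: a zero syndrome vector means some codeword was produced (correct or impostor), while a nonzero one unambiguously flags a failure to decode.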
Today we learned about filters and ultrafilters in the General Topology course. I am trying to play around with these definitions. I wish to ask a question that I am unsure about. Let us say we have an ultrafilter $\mathcal{F}$ on the closed interval $[0, 1]$. Construct $\overline{\mathcal{F}} = \{ \overline{A} : A \in \mathcal{F}\}$, where the overline denotes the closure of a set. My question is, is $\overline{\mathcal{F}}$ an ultrafilter? Is it a filter at all? The intersection property is clear, but it is the superset property of filters that is confusing me. Thanks for any help!

As Brian said, it depends. $\overline{\mathcal{F}}$ consists of only closed sets, so the superset property will fail in general: $[0,\frac{1}{2}]$ may lie in $\overline{\mathcal{F}}$ and $[0,\frac{1}{2}] \subset [0,\frac{3}{4})$, but the latter set is not closed, so not of the form $\overline{A}$ for some $A \in \mathcal{F}$. So it's not a filter over all subsets of $[0,1]$. It is a filter base: if $\overline{A_1}, \overline{A_2} \in \overline{\mathcal{F}}$, with $A_1,A_2 \in \mathcal{F}$, then $\emptyset \neq \overline{A_1 \cap A_2} \subseteq (\overline{A_1} \cap \overline{A_2})$, so $\overline{\mathcal{F}}$ has the finite intersection property (which can be used in proofs involving compactness: the property that every family of closed sets with the finite intersection property has non-empty intersection is equivalent to compactness), and so it generates a filter by also including all supersets of its elements. The notion of (ultra)filter can be defined in a lattice (a partially ordered set (a.k.a. poset) where every two members $p,q$ have an infimum, denoted $p \land q$). A filter $F$ is then a subfamily of the poset that is closed under $\land$ and such that if $p \in F$ and $p \le q$ then $q \in F$, so it is closed under larger elements. The powerset of a set $X$ is a poset under inclusion and $A \cap B$ is the infimum of $A$ and $B$, but there is also a poset $\mathscr{C}(X)$ of closed subsets of a topological space $X$, with the same operations, but restricted to closed sets only. If $\mathcal{F} \subseteq \mathscr{P}(X)$ is a filter, then $\overline{\mathcal{F}}$ is a filter in $\mathscr{C}(X)$: it's in fact the intersection of $\mathscr{C}(X)$ with $\mathcal{F}$: trivially, if $A$ is closed and in $\mathcal{F}$, then $A = \overline{A} \in \overline{\mathcal{F}}$, and if $\overline{A} \in \overline{\mathcal{F}}$, then $A \in \mathcal{F}$ and $A \subseteq \overline{A}$, so $\overline{A} \in \mathscr{C}(X) \cap \mathcal{F}$. From this the property follows easily. So your set is a filter, but in another poset. But it is a filter base of closed sets, and this is often useful in proofs involving compactness.
Complicated resistor networks can be simplified by finding series and parallel resistor combinations within the context of the larger circuit.

Here is the example circuit we will work through together. We see a voltage source connected to a resistor network. The two little circles near the left end represent the port of the resistor network. Figure out how much current $(i)$ the voltage source is required to supply to the resistor network. The answer is not immediately obvious. But we have some tools at our disposal: we know how to compute the equivalent resistance of series resistors and parallel resistors. With these tools we can simplify the resistor network until the problem becomes easy to answer. At this point, have a go at answering the problem yourself. After giving it a try, continue reading to see the solution unfold.

Strategy

Here's a general strategy to simplify any resistor network: Begin as far away as possible from the circuit location in question. Identify and replace series or parallel resistor combinations with their equivalent resistor. Continue, moving along until a single equivalent resistor represents the entire resistor network.

Step-by-step solution to the example

The original question asked about the current from the voltage source, so the "location in question" is near the voltage source on the far left end of the circuit. Therefore, we start the simplification process way over on the far right and work our way back towards the source. Simplifying a circuit is a process of many small steps: consider a small chunk of circuit, simplify it, and then move to the next chunk. Tip: redraw the schematic after every step so you don't miss an opportunity to simplify.

Step 1. The shaded resistors, $2\,\Omega$ and $8\,\Omega$, are in series. Looking into the shaded area from the perspective of the arrows, the two series resistors are equivalent to a single resistor with resistance of ______ $\Omega$.

Hint: These two resistors are in series, so we add their two resistances to get the equivalent series resistance.

Answer: $2\,\Omega + 8\,\Omega = 10\,\Omega$. The two resistors can be replaced by their equivalent resistance. Key insight: from outside the shaded box, the two original series resistors and the equivalent resistor are indistinguishable from each other. The exact same current and voltage exist in both versions.

Step 2. We now find two $10\,\Omega$ resistors in parallel at the new far right of the circuit. These two resistors can be replaced by their parallel combination. The resulting equivalent resistor is ______ $\Omega$.

Answer: $10\,\Omega \parallel 10\,\Omega = \dfrac{10\cdot 10}{10 + 10}=5\,\Omega$. The $\parallel$ notation means "in parallel with". Again, looking into the shaded box from the left, the current and voltage with the equivalent $5\,\Omega$ resistor are still indistinguishable from the original.

Step 3. Can you see a pattern emerging? We are working through the schematic from right to left, simplifying and redrawing as we go. Next up we find two series resistors, $1\,\Omega$ and $5\,\Omega$. These series resistors can be replaced by an equivalent resistance of ______ $\Omega$.

Answer: $1\,\Omega + 5\,\Omega = 6\,\Omega$.

Step 4. This step is a bit more challenging. We have three resistors in parallel. These three resistors can be replaced by their parallel combination. The resulting equivalent resistor is ______ $\Omega$.
Hint: Because we have three resistors in parallel, use the full parallel resistor equation,

$\dfrac{1}{\text R_{\text{parallel}}} = \dfrac{1}{\text{R1}} +\dfrac{1}{\text{R2}} + \dfrac{1}{\text{R3}}$

Answer: $\dfrac{1}{\text R_{\text{parallel}}} = \left (\dfrac{1}{12\,\Omega} +\dfrac{1}{4\,\Omega} + \dfrac{1}{6\,\Omega}\right ) = \left (\dfrac{1}{12} +\dfrac{3}{12} + \dfrac{2}{12} \right )= \dfrac{1}{2}$

So the equivalent resistance is the reciprocal of $\dfrac{1}{2}$, or $2\,\Omega$.

Step 5. We are down to the last two series resistors. You can do this one in your head: we are left with a single $3\,\Omega$ resistor. As far as the voltage source can tell, this one resistor is equivalent to the entire resistor network.

To answer the question: the voltage source is required to supply a current of

$i = \dfrac{\text V}{3\,\Omega}$

You started with $7$ resistors and simplified down to $1$, a significant reduction in complexity. Well done. Not all simplifications make it down to a single resistor at the end. (The circuit may not be made entirely of resistors.) But always take the opportunity to simplify if the chance is presented.

Exceptions

Certain resistor configurations cannot be simplified using the strategy described above. We treat those as a special case. Examples are described in the next article on the Delta-Wye transformation.

Summary

Key idea: The strategy for simplification is to start at a point in the circuit farthest away from the component of interest. In this example, we were asked about the current load on the voltage source on the far left, so we started at the far right end of the circuit and worked our way "backwards" to the left. Working in this "backwards" direction may initially feel awkward, given that many of us have an ingrained habit of reading from left to right. It is common in electronics to begin analysis starting at the output end of a circuit (which usually appears on the right side of the page) and work back to the input. Our left-to-right reading bias can get in the way. You can tell you are "turning into an engineer" when you get comfortable working in this backwards direction.
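As a cross-check of the walkthrough, the same reduction can be scripted. This is a minimal sketch in R; the helper names `ser` and `prl` are made up for illustration, and the final $1\,\Omega$ series resistor is an assumption chosen so the total matches the $3\,\Omega$ result (the article does not state its value in the text):

ser <- function(...) sum(...)              # series: resistances add
prl <- function(...) 1 / sum(1 / c(...))   # parallel: reciprocals of resistances add
step1 <- ser(2, 8)          # 10 ohms
step2 <- prl(10, step1)     # 5 ohms
step3 <- ser(1, step2)      # 6 ohms
step4 <- prl(12, 4, step3)  # 2 ohms
step5 <- ser(1, step4)      # 3 ohms, matching the walkthrough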
Mahlo

A cardinal $\kappa$ is Mahlo if and only if it is inaccessible and the regular cardinals below $\kappa$ form a stationary subset of $\kappa$. Equivalently, $\kappa$ is Mahlo if it is regular and the inaccessible cardinals below $\kappa$ are stationary.

Every Mahlo cardinal $\kappa$ is inaccessible, and indeed hyper-inaccessible and hyper-hyper-inaccessible, up to degree $\kappa$, and a limit of such cardinals.
If $\kappa$ is Mahlo, then it is Mahlo in any inner model, since the concept of stationarity is similarly downward absolute.

Mahlo cardinals belong to the oldest large cardinals, together with inaccessible and measurable cardinals. Please add more history.

Weakly Mahlo

A cardinal $\kappa$ is weakly Mahlo if it is regular and the set of regular cardinals below $\kappa$ is stationary in $\kappa$. If $\kappa$ is a strong limit and hence also inaccessible, this is equivalent to $\kappa$ being Mahlo, since the strong limit cardinals form a closed unbounded subset of any inaccessible cardinal. In particular, under the GCH, a cardinal is weakly Mahlo if and only if it is Mahlo. But in general, the concepts can differ, since adding an enormous number of Cohen reals will preserve all weakly Mahlo cardinals, but can easily destroy strong limit cardinals. Thus, every Mahlo cardinal can be made weakly Mahlo but not Mahlo in a forcing extension in which the continuum is very large. Nevertheless, every weakly Mahlo cardinal is Mahlo in any inner model of the GCH.

Hyper-Mahlo etc.

A cardinal $\kappa$ is $1$-Mahlo if the set of Mahlo cardinals is stationary in $\kappa$. This is a strictly stronger notion than merely asserting that $\kappa$ is a Mahlo limit of Mahlo cardinals, since in fact every $1$-Mahlo cardinal is a limit of such Mahlo-limits-of-Mahlo cardinals. (So there is an entire hierarchy of limits-of-limits-of-Mahloness between the Mahlo cardinals and the $1$-Mahlo cardinals.) More generally, $\kappa$ is $\alpha$-Mahlo if it is Mahlo and for each $\beta\lt\alpha$ the class of $\beta$-Mahlo cardinals is stationary in $\kappa$. The cardinal $\kappa$ is hyper-Mahlo if it is $\kappa$-Mahlo. One may proceed to define the concepts of $\alpha$-hyper${}^\beta$-Mahlo by iterating this stationary-limit construction. All such levels are swamped by the weakly compact cardinals, which exhibit all the desired degrees of hyper-Mahloness and more.

Meta-ordinal terms are terms like $\Omega^\alpha \cdot \beta + \Omega^\gamma \cdot \delta + \cdots + \Omega^\epsilon \cdot \zeta + \theta$ where $\alpha, \beta, \ldots$ are ordinals. They are ordered as if $\Omega$ were an ordinal greater than all the others.
$(\Omega \cdot \alpha + \beta)$-Mahlo denotes $\beta$-hyper${}^\alpha$-Mahlo, $\Omega^2$-Mahlo denotes hyper${}^\kappa$-Mahlo $\kappa$, etc. Every weakly compact cardinal $\kappa$ is $\Omega^\alpha$-Mahlo for all $\alpha<\kappa$, and probably more. A similar hierarchy exists for inaccessible cardinals below Mahlo. All such properties can be killed softly by forcing, making them into any weaker property from this family.[2]

$\Sigma_n$-Mahlo etc.

A regular cardinal $\kappa$ is $\Sigma_n$-Mahlo (resp. $\Pi_n$-Mahlo) if every club in $\kappa$ that is $\Sigma_n$-definable (resp. $\Pi_n$-definable) in $H(\kappa)$ contains an inaccessible cardinal. A regular cardinal $\kappa$ is $\Sigma_\omega$-Mahlo if every club subset of $\kappa$ that is definable (with parameters) in $H(\kappa)$ contains an inaccessible cardinal. Every $\Pi_1$-Mahlo cardinal is an inaccessible limit of inaccessible cardinals. For Mahlo $\kappa$, the set of $\Sigma_\omega$-Mahlo cardinals is stationary in $\kappa$.

In [3] it is shown that every $\Sigma_\omega$-weakly compact cardinal is $\Sigma_\omega$-Mahlo and the set of $\Sigma_\omega$-Mahlo cardinals below a $\Sigma_\omega$-weakly compact cardinal is $\Sigma_\omega$-stationary, but if $\kappa$ is $\Pi_{n+1}$-Mahlo, then the set of $\Sigma_n$-weakly compact cardinals below $\kappa$ is $\Pi_{n+1}$-stationary.

These properties are connected with some forms of absoluteness. For example, the existence of a $\Sigma_\omega$-Mahlo cardinal is equiconsistent with the generic absoluteness axiom $\mathcal{A}(L(\mathbb{R}), \Sigma_\omega, \Gamma \cap \text{absolutely-ccc})$, where $\Gamma$ is the class of projective posets.

References

Hamkins, Joel David and Johnstone, Thomas A. Resurrection axioms and uplifting cardinals. 2014. www arχiv bibtex
Carmody, Erin Kathryn. Force to change large cardinal strength. 2015. www arχiv bibtex
Bosch, Roger. Small Definably-large Cardinals. Set Theory: Trends in Mathematics, pp. 55-82, 2006. DOI bibtex
Bagaria, Joan and Bosch, Roger. Proper forcing extensions and Solovay models. Archive for Mathematical Logic, 2004. www DOI bibtex
Bagaria, Joan. Axioms of generic absoluteness. Logic Colloquium 2002, 2006. www DOI bibtex
Could anyone give an example of a bijective map from $\mathbb{R}^3\rightarrow \mathbb{R}$? Thank you.

First, note that it is enough to find a bijection $f:\Bbb R^2\to \Bbb R$, since then $g(x,y,z) = f(f(x,y),z)$ is automatically a bijection from $\Bbb R^3$ to $\Bbb R$. Next, note that since there is a bijection from $[0,1]\to\Bbb R$ (see appendix), it is enough to find a bijection from the unit square $[0,1]^2$ to the unit interval $[0,1]$. By constructions in the appendix, it does not really matter whether we consider $[0,1]$, $(0,1]$, or $(0,1)$, since there are easy bijections between all of these.

There are a number of ways to proceed in finding a bijection from the unit square to the unit interval. One approach is to fix up the "interleaving" technique I mentioned in the comments, writing $\langle 0.a_1a_2a_3\ldots, 0.b_1b_2b_3\ldots\rangle$ to $0.a_1b_1a_2b_2a_3b_3\ldots$. This doesn't quite work, as I noted in the comments, because there is a question of whether to represent $\frac12$ as $0.5000\ldots$ or as $0.4999\ldots$. We can't use both, since then $\left\langle\frac12,0\right\rangle$ goes to both $\frac12 = 0.5000\ldots$ and to $\frac9{22} = 0.40909\ldots$, and we don't even have a function, much less a bijection. But if we arbitrarily choose the second representation, then there is no element of $[0,1]^2$ that is mapped to $\frac12$, and if we choose the first there is no element that is mapped to $\frac9{22}$, so either way we fail to have a bijection.

This problem can be fixed. (In answering this question, I tried many web searches to try to remember the fix, and I was amazed at how many sources I found that ignored the problem, either entirely, or by handwaving. I never did find it; I had to remember it. Sadly, I cannot remember where I saw it first.) First, we will deal with $(0,1]$ rather than with $[0,1]$; bijections between these two sets are well-known, or see the appendix. For real numbers with two decimal expansions, such as $\frac12$, we will agree to choose the one that ends with nines rather than with zeroes. So for example we represent $\frac12$ as $0.4999\ldots$.

Now instead of interleaving single digits, we will break each input number into chunks, where each chunk consists of some number of zeroes (possibly none) followed by a single non-zero digit. For example, $\frac1{200} = 0.00499\ldots$ is broken up as $004\ 9\ 9\ 9\ldots$, and $0.01003430901111\ldots$ is broken up as $01\ 003\ 4\ 3\ 09\ 01\ 1\ 1\ldots$. This is well-defined since we are ignoring representations that contain infinite sequences of zeroes. Now instead of interleaving digits, we interleave chunks. To interleave $0.004999\ldots$ and $0.01003430901111\ldots$, we get $0.004\ 01\ 9\ 003\ 9\ 4\ 9\ldots$. This is obviously reversible. It can never produce a result that ends with an infinite sequence of zeroes, and similarly the reverse mapping can never produce a number with an infinite sequence of trailing zeroes, so we win.
A problem example similar to the one from a few paragraphs ago is resolved as follows: $\frac12 = 0.4999\ldots$ is the unique image of $\langle 0.4999\ldots, 0.999\ldots\rangle$ and $\frac9{22} = 0.40909\ldots$ is the unique image of $\langle 0.40909\ldots, 0.0909\ldots\rangle$.

This is enough to answer the question posted, but I will give some alternative approaches. According to the paper "Was Cantor Surprised?" by Fernando Q. Gouvêa, Cantor originally tried interleaving the digits himself, but Dedekind pointed out the problem of nonunique decimal representations. Cantor then switched to an argument like the one Robert Israel gave in his answer, based on continued fraction representations of irrational numbers. He first constructed a bijection from $(0,1)$ to its irrational subset (see this question for the mapping Cantor used and other mappings that work), and then from pairs of irrational numbers to a single irrational number by interleaving the terms of the infinite continued fractions. Since Cantor dealt with numbers in $(0,1)$, he could guarantee that every irrational number had an infinite continued fraction representation of the form $$x = x_0 + \dfrac{1}{x_1 + \dfrac{1}{x_2 + \ldots}}$$ where $x_0$ was zero, avoiding the special-case handling for $x_0$ in Robert Israel's solution.

The Cantor-Schröder-Bernstein theorem takes an injection $f:A\to B$ and an injection $g:B\to A$, and constructs a bijection between $A$ and $B$. So if we can find an injection $f:[0,1)^2\to[0,1)$ and an injection $g:[0,1)\to[0,1)^2$, we can invoke the CSB theorem and we will be done. $g$ is quite trivial; $x\mapsto \langle x, 0\rangle$ is one of many obvious injections. For $f$ we can use the interleaving-digits trick again, and we don't have to be so careful because we need only an injection, not a bijection. We can choose the representation of the input numbers arbitrarily; say we will take the $0.5000\ldots$ representation rather than the $0.4999\ldots$ representation. Then we interleave the digits of the two input numbers. There is no way for the result to end with an infinite sequence of nines, so we are guaranteed an injection. Then we apply CSB to $f$ and $g$ and we are done.

Appendix: There is a bijection from $(-\infty, \infty)$ to $(0, \infty)$. The map $x\mapsto e^x$ is an example. There is a bijection from $(0, \infty)$ to $(0, 1)$. The map $x\mapsto \frac2\pi\tan^{-1} x$ is an example, as is $x\mapsto{x\over x+1}$. There is a bijection from $[0,1]$ to $(0,1]$. Have $0\mapsto \frac12, \frac12\mapsto\frac23,\frac23\mapsto\frac34,$ and so on. That takes care of $\left\{0, \frac12, \frac23, \frac34,\ldots\right\}$. For any other $x$, just map $x\mapsto x$. Similarly, there is a bijection from $(0,1]$ to $(0,1)$.

First, note that the exponential function is a bijective map of $\mathbb R$ to $(0,\infty)$. Now let $G$ be the irrationals in $(0,\infty)$. I'd like a bijective map of $(0,\infty)$ to $G$. It can be done as follows: if $x = r \pi^n$ for some nonnegative integer $n$ and rational $r$, let $f(x) = \pi x$; otherwise $f(x) = x$. Finally, it suffices to find a bijective map of $G^3$ to $G$. This can be obtained using continued fractions. Each $x \in G$ can be expressed in a unique way as an infinite continued fraction $x = x_0 + \dfrac{1}{x_1 + \dfrac{1}{x_2 + \ldots}}$ where $x_0$ is a nonnegative integer and $x_1, x_2, \ldots$ are positive integers. Denote this as $[x_0; x_1, x_2, \ldots]$. We then map $(x,y,z) \in G^3$ to $[x_0; y_0+1, z_0+1, x_1, y_1, z_1, \ldots]$.
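To make the chunking step concrete, here is a small R sketch (the function names are my own, and since real numbers would need infinitely many digits, this necessarily works on truncated expansions and is illustrative only):

# Split a digit string into chunks of "zero or more 0s followed by a non-zero digit"
chunks <- function(s) regmatches(s, gregexpr("0*[1-9]", s))[[1]]

# Interleave the chunks of two (truncated) decimal expansions
interleave <- function(a, b) {
  ca <- chunks(a); cb <- chunks(b)
  n <- min(length(ca), length(cb))   # truncation artifact, not part of the real proof
  paste0(as.vector(rbind(ca[seq_len(n)], cb[seq_len(n)])), collapse = "")
}

interleave("004999", "010034309")
# "0040190039493" -- the chunks 004 01 9 003 9 4 9 ... match the worked example above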
I'm studying Conway's Functional Analysis by myself. On page 132 of his book, to show that every closed subspace $M$ of a reflexive Banach space $X$ is reflexive, he says $\sigma(X,X^*)_{|_{M}}=\sigma(M,M^*)$. But I cannot understand why this holds. Please help. Thanks in advance.

You can use the criterion that reflexivity is equivalent to the unit ball being compact in the weak topology. Then the intersection of the unit ball with the closed subspace (which is still closed in the weak topology) is also compact; hence the closed subspace $E\subseteq X$ is reflexive.

Edit: If you want to do it directly, you can use the Hahn-Banach theorem, since a closed subspace is characterized by the linear functionals which vanish on it. Then if $E\subseteq X$ is a closed subspace, we see $$E^*\cong \operatorname{Hom}_{\Bbb F}(X,\Bbb F)/\{f\in X^* : f(E)=0\}$$ where $\Bbb F$ is your field, either $\Bbb R$ or $\Bbb C$. Then dualizing this gives rise to the isomorphism: since the map $$\begin{cases}X\to X^{**}=\operatorname{Hom}_{\Bbb F}(\operatorname{Hom}_{\Bbb F}(X,\Bbb F), \Bbb F) \\ x\mapsto E_x\end{cases}$$ is surjective, you conclude that the same holds after passing to the quotient.
Siril processing tutorial

Convert your images in the FITS format Siril uses (image import)
Work on a sequence of converted images
Pre-processing images
Registration (PSF image alignment)
→ Stacking

Stacking

The final step to do with Siril is to stack the images. Go to the "stacking" tab and indicate whether you want to stack all images, only selected images, or the best images according to the previously computed FWHM values. Siril proposes several algorithms for the stacking computation.

Sum Stacking

This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to [math]\sqrt{N}[/math], where [math]N[/math] is the number of images.

Average Stacking With Rejection

Percentile Clipping: this is a one-step rejection algorithm, ideal for small sets of data (up to 6 images).
Sigma Clipping: this is an iterative algorithm which rejects pixels whose distance from the median is farther than two given values in sigma units ([math]\sigma_{low}[/math], [math]\sigma_{high}[/math]).
Median Sigma Clipping: this is the same algorithm, except that the rejected pixels are replaced by the median value of the stack.
Winsorized Sigma Clipping: this is very similar to the Sigma Clipping method, but it uses an algorithm based on Huber's work [1] [2].
Linear Fit Clipping: this is an algorithm developed by Juan Conejero, main developer of PixInsight [2]. It fits the best straight line ([math]y=ax+b[/math]) of the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations.

These algorithms are very efficient at removing satellite/plane tracks.

Median Stacking

This method is mostly used for dark/flat/offset stacking. The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account shifts computed during registration. The increase in SNR is proportional to [math]0.8\sqrt{N}[/math].

Pixel Maximum Stacking

This algorithm is mainly used to construct long-exposure star-trail images. Pixels of the image are replaced by pixels at the same coordinates if their intensity is greater.

In the case of the M8-M20 sequence, we first used the "Winsorized Sigma Clipping" algorithm in the "Average stacking with rejection" section, in order to remove satellite tracks ([math]\sigma_{low}=4[/math] and [math]\sigma_{high}=2[/math]). The output console thus gives the following result:

21:58:19: Pixel rejection in channel #0: 2.694% - 4.295%
21:58:19: Pixel rejection in channel #1: 1.987% - 3.620%
21:58:19: Pixel rejection in channel #2: 0.484% - 4.297%
21:58:19: Rejection stacking complete. 119 have been stacked.
21:58:28: Noise estimation (channel: #0): 4.913e-05
21:58:28: Noise estimation (channel: #1): 3.339e-05
21:58:28: Noise estimation (channel: #2): 3.096e-05

Noise estimation is a good indicator of the quality of your stacking process. In our example, the red channel has almost 1.5 times more noise than the green or blue channels. That probably means the DSLR is unmodified: most red photons are stopped by the original filter, leading to a noisier channel. Then, in this example we note that the high rejection seems to be a bit strong. Setting the high rejection to [math]\sigma_{high}=4[/math] could produce a better image. And this is what you have in the image below.
After that, the result is saved in the file named below the buttons, and is displayed in the grey and colour windows. You can adjust the levels if you want to see it better, or use a different display mode. In our example the file is the stacking result of all files, i.e., 119 files. The images above picture the result in Siril using the Histogram Equalization rendering mode. Note the improvement of the signal-to-noise ratio compared to the result given for one frame in the previous step (take a look at the sigma value). The measured increase in SNR is [math]38/3.9 = 9.7[/math], close to the theoretical [math]\sqrt{119} = 10.9[/math], and you should try to improve this result by adjusting [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math]. Here is a comparison between the same crop of a calibrated single frame and the stacked result.

Now the processing of the image should start, with cropping, background extraction (to remove the gradient), and some other processes to enhance your image. To see the processes available in Siril please visit this page. Here is an example of what you can get with Siril:

References:
1. Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley
2. Juan Conejero, ImageIntegration, PixInsight Tutorial
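A quick check of the SNR arithmetic quoted above (a trivial R sketch; the 3.9 and 38 sigma values are the ones given in the text):

sqrt(119)                # theoretical SNR gain for a 119-frame stack: ~10.9
38 / 3.9                 # measured gain from the quoted sigma values:  ~9.74
(38 / 3.9) / sqrt(119)   # ~0.89, i.e. about 89% of the ideal improvement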
Once we know what energy a given transition would have, we can ask, "Which transitions between energy levels or states are possible?" In answering this question, we also will learn why the longer cyanine dye molecules have stronger absorptions, or larger absorption coefficients. Clearly the transitions cannot violate the Pauli Exclusion Principle; that is, they cannot produce an electron configuration with three electrons in the same orbital. Besides the Pauli Exclusion Principle, there are additional restrictions that result from the nature of the interaction between electromagnetic radiation and matter. These restrictions are summarized by spectroscopic selection rules. These rules tell whether or not a transition from one state to another state is possible.

To obtain these selection rules, we consider light as consisting of perpendicular oscillating electric and magnetic fields. The magnetic field interacts with magnetic moments and causes the transitions seen in electron spin resonance and nuclear magnetic resonance spectroscopies. The oscillating electric field interacts with electrical charges, i.e. the positive nuclei and negative electrons that comprise an atom or molecule, and causes the transitions seen in UV-Visible, atomic absorption, and fluorescence spectroscopies.

The energy of interaction, \(E\), between a system of charged particles and an electric field \(\epsilon\) is given by the scalar product of the electric field and the dipole moment, \(\mu\), for the system. Both of these quantities are vectors.

\[E = - \mu \cdot \epsilon \label {4-20}\]

The dipole moment is defined as the summation of the product of the charge \(q_j\) times the position vector \(r_j\) for all charged particles \(j\).

\[ \mu = \sum _j q_j r_j \label {4-21}\]

Example \(\PageIndex{1}\)

Calculate the dipole moment of HCl from the following information. The position vectors below use Cartesian coordinates (x, y, z), and the units are pm. What fraction of an electronic charge has been transferred from the chlorine atom to the hydrogen atom in this molecule? \(r_H = (124.0, 0, 0)\), \(r_{Cl} = (-3.5, 0, 0)\), \(q_H = 2.70 \times 10^{-20}\,C\), \(q_{Cl} = -2.70 \times 10^{-20}\,C\).

Example \(\PageIndex{2}\)

Sketch a diagram for Example \(\PageIndex{1}\) showing the coordinate system, the HCl molecule and the dipole moment.

To calculate an expectation value for this interaction energy, we need to evaluate the expectation value integral.

\[ \left \langle E \right \rangle = \int \Psi ^*_n ( - \hat {\mu} \cdot \hat {\epsilon} ) \Psi _n d \tau \label {4-22}\]

The \(\int d \tau \) symbol simply means integrate over all coordinates. The operators \(\hat {\mu}\) and \(\hat {\epsilon}\) are vectors and are the same as the classical quantities, \(\mu\) and \(\epsilon\). Usually the wavelength of light used in electronic spectroscopy is very long compared to the length of a molecule. For example, the wavelength of green light is 550 nm, which is much larger than a typical molecule, which is closer to 1 nm in size. The magnitude of the electric field then is essentially constant over the length of the molecule, and \(\epsilon\) can be removed from the integration since it is constant wherever \(\psi\) is not zero. In other words, \(\psi\) is finite only over the volume of the molecule, and the electric field is constant over the volume of the molecule.
What remains for the integral is the expectation value for the permanent dipole moment of the molecule in state n, namely,

\[ \left \langle \mu \right \rangle= \int \psi ^*_n \hat {\mu} \psi _n d \tau \label {4-23}\]

so

\[ E = - \left \langle \mu \right \rangle \cdot \epsilon \label {4-24}\]

Example \(\PageIndex{3}\)

Verify that the vectors in the scalar product in Equation \(\ref{4-24}\) commute by expanding \(\mu \cdot \epsilon\) and \(\epsilon \cdot \mu\). Use particle-in-a-box wave functions with HCl charges and coordinates.

Equation \(\ref{4-24}\) shows that the strength or energy of the interaction between a charge distribution and an electric field depends on the dipole moment of the charge distribution. To obtain the strength of the interaction that causes transitions between states, the transition dipole moment is used rather than the dipole moment. The transition dipole moment integral is very similar to the dipole moment integral in Equation \(\ref{4-23}\) except the two wavefunctions are different, one for each of the states involved in the transition. Two different states are involved in the integral because the transition dipole moment integral has to do with the magnitude of the interaction with the electric field that causes a transition between the two states. For a transition where the state changes from \(\psi_i\) to \(\psi_f\), the transition dipole moment integral is

\[ \left \langle \mu \right \rangle _T = \int \psi ^*_f \hat {\mu} \psi _i d \tau = \mu _T \label {4-25}\]

Just like the probability density is given by the absolute square of the wavefunction, the probability for a transition as measured by the absorption coefficient is proportional to the absolute square \(\mu ^*_T \mu _T\) of the transition dipole moment, which is calculated using Equation \(\ref{4-25}\). Since taking the absolute square always produces a positive quantity, it does not matter whether the transition moment itself is positive, negative, or imaginary. The transition dipole moment integral and its relationship to the absorption coefficient and transition probability can be derived from the time-dependent Schrödinger equation. Here we only want to introduce the concept of the transition dipole moment and use it to obtain selection rules and relative transition probabilities for the particle-in-a-box. Later it will be applied to other systems that we will be considering.

If \(μ_T = 0\) then the interaction energy is zero and no transition occurs or is possible between states characterized by \(ψ_i\) and \(ψ_f\). Such a transition is said to be forbidden, or more precisely, electric-dipole forbidden. In fact, the electric-dipole electric-field interaction is only the leading term in a multipole expansion of the interaction energy, but the higher order terms in this expansion usually are not significant. If \(μ_T\) is large, then the probability for a transition and the absorption coefficient are large.

It is very useful to be able to tell whether a transition is possible, \(μ_T ≠ 0\), or not possible, \(μ_T = 0\), without having to evaluate integrals. Properties of the wavefunctions such as symmetry or angular momentum can be used to determine the conditions that must exist for the transition dipole moment to be finite, i.e. not zero. Statements called spectroscopic selection rules summarize these conditions. Selection rules do not tell us how probable or intense a transition is. They only tell us whether a transition is possible or not possible.
For the particle-in-a-box model, as applied to dye molecules and other appropriate molecular systems, we need to consider the transition moment integral for one electron. According to Equation \(\ref{4-21}\), the dipole moment operator for an electron in one dimension is \(-ex\) since the charge is \(-e\) and the electron is located at \(x\).

\[\mu _T = - e \int \limits _0^L \psi ^*_f (x) x \psi _i (x) dx \label {4-26}\]

This is the integral that must be evaluated for various particle-in-a-box wavefunctions to see which transitions are allowed (i.e. \(μ_T ≠ 0\)) and forbidden (\(μ_T = 0\)) and to determine the relative strengths of the allowed transitions.

Contributors

Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
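As a quick numerical check of Equation \(\ref{4-26}\), here is a minimal R sketch (my own simplifications: the box length is set to \(L = 1\) and the constant factor \(-e\) is dropped), using the standard particle-in-a-box wavefunctions:

# psi_n(x) = sqrt(2/L) * sin(n*pi*x/L), the particle-in-a-box wavefunctions
L   <- 1
psi <- function(n, x) sqrt(2 / L) * sin(n * pi * x / L)

# Transition moment integral of Eq. (4-26), omitting the constant factor -e
mu_T <- function(ni, nf)
  integrate(function(x) psi(nf, x) * x * psi(ni, x), 0, L)$value

mu_T(1, 2)  # about -0.18 (i.e. -16L/(9*pi^2)): nonzero, so the transition is allowed
mu_T(1, 3)  # essentially 0: the transition is forbidden

The pattern that emerges (nonzero for a change in \(n\) that is odd, zero when it is even) is exactly the kind of selection rule the text describes.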
You cannot compare the values of coefficients in this way. Suppose that your response $Y$ is measured in meters, and you have two features $X_1$ and $X_2$ which are measured in seconds and hours respectively. Then your coefficients: $\beta_1$ has units meters/second and $\beta_2$ has units meters/hour - these are not comparable directly. Even worse is if $X_1$ is measured in seconds but $X_2$ is something totally unrelated, say ohms, coulombs, newtons or lumens.

Now, when doing lasso regression, it is standard practice to standardize the columns in the design matrix, which essentially makes all the predictors dimensionless (though when the coefficients are reported back to the user, they are usually stated on the original scale). You still cannot compare the magnitudes in any reasonable way. A simple way to see this is to consider the following situation:

$$\begin{align*} Y &= X_1 + X_2 + \epsilon \\ \operatorname{corr}(X_1, X_2) &= 1\end{align*}$$

Any of the following regression models is correct:

$$\begin{align*} E(Y \mid X_1, X_2) &= X_1 + X_2 \\ E(Y \mid X_1, X_2) &= 2 X_1 \\ E(Y \mid X_1, X_2) &= 2 X_2 \\ E(Y \mid X_1, X_2) &= .5 X_1 + 1.5 X_2\end{align*}$$

and so on. Of course, situations found "in nature" are never this clear cut, but this illustrates the essential difficulties in your proposal.
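A small R illustration of the perfectly correlated case above (my own simulated data; with exact collinearity the least-squares coefficients are not even identifiable):

set.seed(1)
x1 <- rnorm(100)
x2 <- x1                       # corr(x1, x2) = 1 by construction
y  <- x1 + x2 + rnorm(100, sd = 0.1)
coef(lm(y ~ x1 + x2))          # x2 comes back NA: lm drops the aliased column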
The following is an exercise from Carothers' Real Analysis: Show that $$\int_{1}^{\infty}\frac{1}{x}\,dx=\infty$$ (as a Lebesgue integral).

Attempt: Let $E=[1,\infty)$. Then $$\int_E f=\int f\cdot \chi_E=\sup\left\{\int\varphi : \varphi \text{ simple}, \ 0\leq \varphi\leq f\cdot \chi_E\right\}.$$ I'm not sure how to work out what this supremum is. (Maybe I can say something about $\sum_{n=1}^{\infty}\frac{1}{n}\cdot \chi_\mathbb{R}$, which diverges since it's the harmonic series?) I note that I can express $E=\bigcup_{n=1}^{\infty}[1,n)$, which is measurable since it is a union of measurable sets. I'm pretty new to the Lebesgue integral, so a hint would be preferred over a full solution if possible. Thanks.
We want to be fully transparent regarding the underlying calculations for the standard model types that Evidencio supports. This page lists our standard model types and the associated calculations.

Linear Regression Models

Linear regression models are used to predict continuous outcomes. The prediction is estimated by: predicted result `P = \beta_0 + X\beta`

Logistic Regression Models

Logistic regression models are used to predict probabilities. The prediction is estimated by: predicted probability `P = e^(X\beta)/(1+e^(X\beta))`

Survival Regression Models

Survival regression models are used to predict the probability that the patient will be free from an event (e.g. biochemical recurrence) at some time point (e.g. 5 years after surgery). They are also used to predict the probability `P(t)` that the patient will be free from an event at some time point, given that the patient has made it t* years without the event (denoted by P(t > 5 years | t > t*)). The prediction is estimated by:

`P(t> 5 years | t>t^\ast) = (1 + [(e^(-X\beta))t^\ast]^(1/\gamma))/(1+[(e^(-X\beta))5]^(1/\gamma))`

Note: For a baseline prediction of an event at 5 years with t* = 0, the numerator simplifies to 1.

Cox Proportional-Hazards Regression

Cox proportional-hazards models are used to predict the probability `S(t)` that the patient will be free from an event (e.g. biochemical recurrence) at some time point (e.g. 5 years after surgery). For survival analysis the prediction is estimated by: `S(t) = S_0(t)^exp(X\beta)`

Custom Models

Users can define custom prediction models based on custom mathematical formulas expressed in terms of model variables. Evidencio supports both single and conditional custom formulas. The custom formula(s) used by an Evidencio model can always be viewed in the details section of the respective model, so that model users can identify the exact underlying model logic.

R-Script Models

Evidencio supports the use of R-Script code to define the calculation logic for prediction models and calculators. R-Script models may contain complex and nested functions as long as they produce a single-value outcome.

Parameter Transformations

Evidencio lets you define variable/parameter transformations for each continuous model variable, for instance to accommodate non-linear behaviour. We support both simple transformations and conditional (range-based) transformations (e.g. cubic splines) to tackle a broad spectrum of non-linear behaviour.

Confidence Interval Data

You can attribute confidence interval data to your models on Evidencio. Your confidence interval data can be specified in terms of a bootstrap data-set or as a covariance matrix. You can simply copy-paste your bootstrap or matrix data from e.g. Excel or another data-sheet.

Notation

`\beta_0` Intercept term, denoted by "Intercept"
`\beta_(varN)` Coefficient estimate, associated with a covariate/variable
`\gamma` Scaling parameter in survival regression models, denoted by "Scaling Parameter"
`X\beta` Model linear predictor, estimated by summation of the intercept term and all `\beta` values multiplied by the corresponding variables, e.g.: `\beta_0 + \beta_(var1) * var1 + \beta_(var2) * var2 + \beta_(varN) * varN`
`S_0(t)` The baseline survival function, i.e. the survival proportion when all covariates are equal to zero (`X\beta = 0`)
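As an illustration of the logistic formula above, here is a minimal R sketch. The coefficient values and variable names are made up for the example and are not taken from any Evidencio model:

# Hypothetical logistic model: intercept plus two covariates
beta <- c(intercept = -2.3, age = 0.04, psa = 0.11)
x    <- c(1, 65, 8.5)               # 1 for the intercept; age = 65, psa = 8.5
xb   <- sum(beta * x)               # linear predictor X*beta
p    <- exp(xb) / (1 + exp(xb))     # predicted probability P
p                                   # ~0.77 for these made-up numbers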
Huge cardinal

Huge cardinals (and their variants) were introduced by Kenneth Kunen in 1972 as a very large cardinal axiom. Kunen first used them to prove that the consistency of the existence of a huge cardinal implies the consistency of $\text{ZFC}$+"there is an $\omega_2$-saturated $\sigma$-ideal on $\omega_1$". It is now known that only a Woodin cardinal is needed for this result. However, the consistency of the existence of an $\omega_2$-complete $\omega_3$-saturated $\sigma$-ideal on $\omega_2$, as far as the set theory world is concerned, still requires an almost huge cardinal. [1]

Contents

1 Definitions
2 Consistency strength and size
3 Relative consistency results
4 In set theoretic geology
5 References

Definitions

Their formulation is similar to that of superstrong cardinals; more precisely, a huge cardinal is to a supercompact cardinal as a superstrong cardinal is to a strong cardinal. The definition is part of a generalized phenomenon known as the "double helix", in which for some large cardinal properties n-$P_0$ and n-$P_1$, n-$P_0$ has less consistency strength than n-$P_1$, which has less consistency strength than (n+1)-$P_0$, and so on. This phenomenon is seen only around the n-fold variants as of modern set-theoretic concerns. [2]

Although they are very large, there is a first-order definition which is equivalent to n-hugeness, so the $\theta$-th n-huge cardinal is first-order definable whenever $\theta$ is first-order definable. This definition can be seen as a (very strong) strengthening of the first-order definition of measurability.

Elementary embedding definitions

In the following, $j:V\to M$ denotes a nontrivial elementary embedding of $V$ into a transitive class $M$ with critical point $\kappa$.

$\kappa$ is almost n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length less than $\lambda$ (that is, $M^{<\lambda}\subseteq M$).
$\kappa$ is n-huge with target $\lambda$ iff $\lambda=j^n(\kappa)$ and $M$ is closed under all of its sequences of length $\lambda$ ($M^\lambda\subseteq M$).
$\kappa$ is almost n-huge iff it is almost n-huge with target $\lambda$ for some $\lambda$.
$\kappa$ is n-huge iff it is n-huge with target $\lambda$ for some $\lambda$.
$\kappa$ is super almost n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is almost n-huge with target $\lambda$ (that is, the target can be made arbitrarily large).
$\kappa$ is super n-huge iff for every $\gamma$, there is some $\lambda>\gamma$ for which $\kappa$ is n-huge with target $\lambda$.
$\kappa$ is almost huge, huge, super almost huge, and superhuge iff it is almost 1-huge, 1-huge, etc. respectively.

Ultrahuge cardinals

A cardinal $\kappa$ is $\lambda$-ultrahuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some transitive class $M$ such that $j(\kappa)>\lambda$, $M^{j(\kappa)}\subseteq M$ and $V_{j(\lambda)}\subseteq M$. A cardinal is ultrahuge if it is $\lambda$-ultrahuge for all $\lambda\geq\kappa$. [1] Notice how similar this definition is to the alternative characterization of extendible cardinals. Furthermore, this definition can be extended in the obvious way to define $\lambda$-ultra n-hugeness and ultra n-hugeness, as well as the "almost" variants.

Hyperhuge cardinals

A cardinal $\kappa$ is $\lambda$-hyperhuge for $\lambda>\kappa$ if there exists a nontrivial elementary embedding $j:V\to M$ for some inner model $M$ such that $\mathrm{crit}(j) = \kappa$, $j(\kappa)>\lambda$ and ${}^{j(\lambda)}M\subseteq M$.
A cardinal is hyperhuge if it is $\lambda$-hyperhuge for all $\lambda>\kappa$.[3, 4]

Huge* cardinals

A cardinal $\kappa$ is $n$-huge* if for some $\alpha > \kappa$, $\kappa$ is the critical point of an elementary embedding $j : V_\alpha \to V_\beta$ such that $j^n(\kappa) < \alpha$.[5] The hugeness* variant is formulated in a way that allows for a virtual variant consistent with $V=L$: a cardinal $\kappa$ is virtually $n$-huge* if for some $\alpha > \kappa$, in a set-forcing extension, $\kappa$ is the critical point of an elementary embedding $j : V_\alpha \to V_\beta$ such that $j^n(\kappa) < \alpha$.[5]

Ultrafilter definition

The first-order definition of n-huge is somewhat similar to measurability. Specifically, $\kappa$ is measurable iff there is a nonprincipal $\kappa$-complete ultrafilter, $U$, over $\kappa$. A cardinal $\kappa$ is n-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$, and cardinals $\kappa=\lambda_0<\lambda_1<\dots<\lambda_{n-1}<\lambda_n=\lambda$ such that: $$\forall i<n(\{x\subseteq\lambda:\text{order-type}(x\cap\lambda_{i+1})=\lambda_i\}\in U)$$ where $\text{order-type}(X)$ is the order-type of the poset $(X,\in)$. [1] $\kappa$ is then super n-huge if for all ordinals $\theta$ there is a $\lambda>\theta$ such that $\kappa$ is n-huge with target $\lambda$, i.e. $\lambda_n$ can be made arbitrarily large. If $j:V\to M$ is such that $M^{j^n(\kappa)}\subseteq M$ (i.e. $j$ witnesses n-hugeness) then there is an ultrafilter $U$ as above such that, for all $k\leq n$, $\lambda_k = j^k(\kappa)$; that is, it is not only $\lambda=\lambda_n$ that is an iterate of $\kappa$ by $j$: all members of the $\lambda_k$ sequence are.

As an example, $\kappa$ is 1-huge with target $\lambda$ iff there is a normal $\kappa$-complete ultrafilter, $U$, over $\mathcal{P}(\lambda)$ such that $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}\in U$. The reason why this would be so surprising is that every set containing $\{x\subseteq\lambda:\text{order-type}(x)=\kappa\}$ as a subset is considered a "large set."

As for hyperhugeness, the following are equivalent:[4]

$\kappa$ is $\lambda$-hyperhuge;
there is $\mu > \lambda$ such that a normal, fine, $\kappa$-complete ultrafilter exists on $[\mu]^\lambda_{*\kappa} := \{s \subset \mu : |s| = \lambda, |s \cap \kappa| \in \kappa, \mathrm{otp}(s \cap \lambda) < \kappa\}$;
$\mathbb{L}_{\kappa,\kappa}$ is $[\mu]^\lambda_{*\kappa}$-$\kappa$-compact for type omission.

Coherent sequence characterization of almost hugeness

$C^{(n)}$-$m$-huge cardinals

(This section is from [6].) $\kappa$ is $C^{(n)}$-$m$-huge iff it is $m$-huge and $j(\kappa) \in C^{(n)}$ ($C^{(n)}$-huge if it is huge and $j(\kappa) \in C^{(n)}$). An equivalent definition in terms of normal measures: $\kappa$ is $C^{(n)}$-$m$-huge iff it is uncountable and there is a $\kappa$-complete normal ultrafilter $U$ over some $P(\lambda)$ and cardinals $\kappa = \lambda_0 < \lambda_1 < \ldots < \lambda_m = \lambda$, with $\lambda_1 \in C^{(n)}$ and such that for each $i < m$, $\{x \in P(\lambda) : \mathrm{ot}(x \cap \lambda_{i+1}) = \lambda_i\} \in U$. It follows that "$\kappa$ is $C^{(n)}$-$m$-huge" is $\Sigma_{n+1}$-expressible.

Every huge cardinal is $C^{(1)}$-huge. The first $C^{(n)}$-$m$-huge cardinal is not $C^{(n+1)}$-$m$-huge, for all $m$ and $n$ greater than or equal to $1$. For suppose $\kappa$ is the least $C^{(n)}$-$m$-huge cardinal and $j : V \to M$ witnesses that $\kappa$ is $C^{(n+1)}$-$m$-huge. Then since "$x$ is $C^{(n)}$-$m$-huge" is $\Sigma_{n+1}$-expressible, we have $V_{j(\kappa)} \models$ "$\kappa$ is $C^{(n)}$-$m$-huge". Hence, since $(V_{j(\kappa)})^M = V_{j(\kappa)}$, $M \models$ "$\exists_{\delta < j(\kappa)}(V_{j(\kappa)} \models$ "$\delta$ is $C^{(n)}$-$m$-huge"$)$".
By elementarity, there is a $C^{(n)}$-$m$-huge cardinal less than $\kappa$ in $V$ - contradiction.

Assuming $\mathrm{I3}(\kappa, \delta)$, if $\delta$ is a limit cardinal (instead of a successor of a limit cardinal - Kunen's Theorem excludes other cases), it is equal to $\sup\{j^m(\kappa) : m \in \omega\}$ where $j$ is the elementary embedding. Then $\kappa$ and $j^m(\kappa)$ are $C^{(n)}$-$m$-huge (inter alia) in $V_\delta$, for all $n$ and $m$. If $\kappa$ is $C^{(n)}$-$\mathrm{I3}$, then it is $C^{(n)}$-$m$-huge for all $m$, and there is a normal ultrafilter $\mathcal{U}$ over $\kappa$ such that $\{\alpha < \kappa : \alpha$ is $C^{(n)}$-$m$-huge for every $m\} \in \mathcal{U}$.

Consistency strength and size

Hugeness exhibits a phenomenon associated with similarly defined large cardinals (the n-fold variants) known as the double helix. This phenomenon is when, for one n-fold variant, letting a cardinal be called n-$P_0$ iff it has the property, and another variant, n-$P_1$, n-$P_0$ is weaker than n-$P_1$, which is weaker than (n+1)-$P_0$. [2] In the consistency strength hierarchy, here is where these lie (top being weakest):

measurable = 0-superstrong = 0-huge
n-superstrong
n-fold supercompact
(n+1)-fold strong, n-fold extendible
(n+1)-fold Woodin, n-fold Vopěnka
(n+1)-fold Shelah
almost n-huge
super almost n-huge
n-huge
super n-huge
ultra n-huge
(n+1)-superstrong

All huge variants lie at the top of the double helix restricted to some natural number n, although each is bested by I3 cardinals (the critical points of the I3 elementary embeddings). In fact, every I3 is preceded by a stationary set of n-huge cardinals, for all n. [1]

Similarly, every huge cardinal $\kappa$ is almost huge, and there is a normal measure over $\kappa$ which contains every almost huge cardinal $\lambda<\kappa$. Every superhuge cardinal $\kappa$ is extendible and there is a normal measure over $\kappa$ which contains every extendible cardinal $\lambda<\kappa$. Every (n+1)-huge cardinal $\kappa$ has a normal measure which contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is super n-huge" [1]; in fact it contains every cardinal $\lambda$ such that $V_\kappa\models$"$\lambda$ is ultra n-huge".

Every n-huge cardinal is m-huge for every m<n. Similarly with almost n-hugeness, super n-hugeness, and super almost n-hugeness. Every almost huge cardinal is Vopěnka (therefore the consistency of the existence of an almost huge cardinal implies the consistency of Vopěnka's principle). [1] Every ultra n-huge is super n-huge and a stationary limit of super n-huge cardinals. Every super almost (n+1)-huge is ultra n-huge and a stationary limit of ultra n-huge cardinals.

In terms of size, however, the least n-huge cardinal is smaller than the least supercompact cardinal (assuming both exist).
[1] This is because n-huge cardinals have upward reflection properties, while supercompacts have downward reflection properties. Thus for any $\kappa$ which is supercompact and has an n-huge cardinal above it, $\kappa$ "reflects downward" that n-huge cardinal: there are $\kappa$-many n-huge cardinals below $\kappa$. On the other hand, the least super n-huge cardinals have both upward and downward reflection properties, and are all much larger than the least supercompact cardinal. It is notable that, while almost 2-huge cardinals have higher consistency strength than superhuge cardinals, the least almost 2-huge is much smaller than the least super almost huge.

While not every $n$-huge cardinal is strong, if $\kappa$ is almost $n$-huge with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$, then $\kappa$ is $\lambda_n$-strong as witnessed by the generated $j:V\prec M$. This is because $j^n(\kappa)=\lambda_n$ is measurable and therefore $\beth_{\lambda_n}=\lambda_n$ and so $V_{\lambda_n}=H_{\lambda_n}$, and because $M^{<\lambda_n}\subset M$, $H_\theta\subset M$ for each $\theta<\lambda_n$ and so $\cup\{H_\theta:\theta<\lambda_n\} = \cup\{V_\theta:\theta<\lambda_n\} = V_{\lambda_n}\subset M$. Every almost $n$-huge cardinal with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$ is also $\theta$-supercompact for each $\theta<\lambda_n$, and every $n$-huge cardinal with targets $\lambda_1,\lambda_2,\ldots,\lambda_n$ is also $\lambda_n$-supercompact.

An $n$-huge* cardinal is an $n$-huge limit of $n$-huge cardinals. Every $n+1$-huge cardinal is $n$-huge*.[5] As for virtually $n$-huge*:[5]

If $\kappa$ is virtually huge*, then $V_\kappa$ is a model of proper class many virtually extendible cardinals.
A virtually $n+1$-huge* cardinal is a limit of virtually $n$-huge* cardinals.
A virtually $n$-huge* cardinal is an $n+1$-iterable limit of $n+1$-iterable cardinals.
If $\kappa$ is $n+2$-iterable, then $V_\kappa$ is a model of proper class many virtually $n$-huge* cardinals.
Every virtually rank-into-rank cardinal is a virtually $n$-huge* limit of virtually $n$-huge* cardinals for every $n < \omega$.

The $\omega$-huge cardinals

A cardinal $\kappa$ is almost $\omega$-huge iff there is some transitive model $M$ and an elementary embedding $j:V\prec M$ with critical point $\kappa$ such that $M^{<\lambda}\subset M$ where $\lambda$ is the smallest cardinal above $\kappa$ such that $j(\lambda)=\lambda$. Similarly, $\kappa$ is $\omega$-huge iff the model $M$ can be required to have $M^\lambda\subset M$. Sadly, $\omega$-huge cardinals are inconsistent with ZFC by a version of Kunen's inconsistency theorem. Now, $\omega$-hugeness is used to describe critical points of I1 embeddings.

Relative consistency results

Hugeness of $\omega_1$

In [2] it is shown that if $\text{ZFC +}$ "there is a huge cardinal" is consistent, then so is $\text{ZF +}$ "$\omega_1$ is a huge cardinal" (with the ultrafilter characterization of hugeness).

Generalizations of Chang's conjecture

Cardinal arithmetic in $\text{ZF}$

If there is an almost huge cardinal then there is a model of $\text{ZF+}\neg\text{AC}$ in which every successor cardinal is a Ramsey cardinal. It follows that (1) for all inner models $W$ of $\text{ZFC}$ and every singular cardinal $\kappa$, one has $\kappa^{+W} < \kappa^+$, and that (2) for all ordinals $\alpha$ there is no injection $\aleph_{\alpha+1}\to 2^{\aleph_\alpha}$. This in turn implies the failure of the square principle at every infinite cardinal (and consequently $\text{AD}^{L(\mathbb{R})}$, see determinacy). [3]
In set theoretic geology

If $\kappa$ is hyperhuge, then $V$ has $<\kappa$ many grounds (so the mantle is a ground itself).[3] This result has been strengthened to extendible cardinals.[7] On the other hand, it is consistent that there is a supercompact cardinal and class many grounds of $V$ (because of the indestructibility properties of supercompactness).[3]

References

Kanamori, Akihiro. The Higher Infinite. Second edition, Springer-Verlag, Berlin, 2009. (Large cardinals in set theory from their beginnings; paperback reprint of the 2003 edition) www bibtex
Sato, Kentaro. Double helix in large large cardinals and iteration of elementary embeddings. 2007. www bibtex
Usuba, Toshimichi. The downward directed grounds hypothesis and very large cardinals. Journal of Mathematical Logic 17(02):1750009, 2017. arχiv DOI bibtex
Boney, Will. Model Theoretic Characterizations of Large Cardinals. arχiv bibtex
Gitman, Victoria and Schindler, Ralf. Virtual large cardinals. www bibtex
Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3--4):213--240, 2012. www DOI bibtex
Let $V$ be an inner product space and $(e_n)_{n=1}^{\infty}$ an orthonormal system. We call it complete if $\left \langle v,e_n \right \rangle=0$ for all $n$ implies $v=0$, and closed if $v=\sum_{n=1}^{\infty}\left \langle v,e_n \right \rangle e_n$ for every $v\in V$. We've proved in class that a closed system is complete (that's one direction), and that in a Hilbert space a complete system is closed (the converse). My question is: can you describe an example of a complete, yet not closed, system (in a non-complete inner product space)? I've tried playing with some sequence spaces and function spaces but nothing seemed to work.
What's the state-of-the-art in the approximation of highly oscillatory integrals, in both one dimension and higher dimensions, to arbitrary precision?

I'm not entirely familiar with what's now done for cubatures (multidimensional integration), so I'll restrict myself to quadrature formulae. There are a number of effective methods for the quadrature of oscillatory integrals. There are methods suited for finite oscillatory integrals, and there are methods for infinite oscillatory integrals.

For infinite oscillatory integrals, two of the more effective methods used are Longman's method and the modified double exponential quadrature due to Ooura and Mori. (But see also these two papers by Arieh Iserles.) Longman's method relies on converting the oscillatory integral into an alternating series by splitting the integration interval, and then summing the alternating series with a sequence transformation method. For instance, when integrating an oscillatory integral of the form $$\int_0^\infty f(t)\sin\,t\,\mathrm dt$$ one converts this into the alternating sum $$\sum_{k=0}^\infty \int_{k\pi}^{(k+1)\pi} f(t)\sin\,t\,\mathrm dt$$ The terms of this alternating sum are computed with some quadrature method like Romberg's scheme or Gaussian quadrature. Longman's original method used the Euler transformation, but modern implementations replace Euler with more powerful convergence acceleration methods like the Shanks transformation or the Levin transformation. The double exponential quadrature method, on the other hand, makes a clever change of variables, and then uses the trapezoidal rule to numerically evaluate the transformed integral.

For finite oscillatory integrals, Piessens (one of the contributors to QUADPACK) and Branders, in two papers, detail a modification of Clenshaw-Curtis quadrature (that is, constructing a Chebyshev polynomial expansion of the nonoscillatory part of the integrand). Levin's method, on the other hand, uses a collocation method for the quadrature. (I am told there is now a more practical version of the old standby, Filon's method, but I've no experience with it.)

These are the methods I remember offhand; I'm sure I've forgotten other good methods for oscillatory integrals. I will edit this answer later if I remember them.

Besides "multidimensional vs. single-dimensional" and "finite range vs. infinite range", an important categorization for methods is "one specific type of oscillator (usually Fourier-type: $\sin(t)$, $\exp(it)$, etc., or Bessel-type: $J_0(t)$, etc.) vs. more general oscillator ($\exp(i g(t))$ or even more general oscillators $w(t)$)". At first, oscillatory integration methods focused on specific oscillators. As J. M. said, prominent ones include Filon's method and the Clenshaw-Curtis method (these two are closely related) for finite range integrals, and series-extrapolation-based methods and the double-exponential method of Ooura and Mori for infinite range integrals.

More recently, some general methods have been found. Two examples:

Levin's collocation-based method for any $\exp(i g(t))$ (Levin 1982), or later for any oscillator $w(t)$ defined by a linear ODE (Levin 1996 as linked by J. M.). Mathematica uses Levin's method for integrals not covered by the more specialized rules.
Huybrechs and Vandewalle's method based on analytic continuation along a complex path where the integrand is non-oscillatory (Huybrechs and Vandewalle 2006).
No distinction is necessary between methods for finite and infinite range integrals for the more general methods, since a compactifying transformation can be applied to an infinite range integral, leading to a finite range oscillatory integral that can still be addressed with the general method, albeit with a different oscillator.

Levin's method can be extended to multiple dimensions by iterating over the dimensions and in other ways, but as far as I know all the methods described in the literature so far have sample points that are an outer product of the one-dimensional sample points, or something else that grows exponentially with dimension, so it rapidly gets out of hand. I'm not aware of more efficient methods for high dimensions; if any could be found that sample on a sparse grid in high dimensions, it would be useful in applications.

Creating automatic routines for the more general methods may be difficult in most programming languages (C, Python, Fortran, etc.) in which you would normally expect to program your integrand as a function/routine and pass it to the integrator routine, because the more general methods need to know the structure of the integrand (which parts look oscillatory, what type of oscillator, etc.) and can't treat it as a "black box".
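As a concrete illustration of Longman's approach from the first answer, here is a minimal R sketch. The integrand $\sin t/(1+t)$ and the helper names are my own choices for the demonstration: split the integral at the zeros of $\sin t$, then accelerate the alternating partial sums with a Shanks transformation.

f <- function(t) sin(t) / (1 + t)

# Terms of the alternating series: integrate between consecutive zeros of sin
terms <- sapply(0:20, function(k) integrate(f, k * pi, (k + 1) * pi)$value)
s <- cumsum(terms)   # slowly converging partial sums

# One step of the Shanks transformation:
# S'_n = (S_{n+1} S_{n-1} - S_n^2) / (S_{n+1} + S_{n-1} - 2 S_n)
shanks <- function(s) {
  n <- length(s)
  (s[3:n] * s[1:(n - 2)] - s[2:(n - 1)]^2) /
    (s[3:n] + s[1:(n - 2)] - 2 * s[2:(n - 1)])
}

tail(s, 1)                            # raw partial sum, still oscillating
tail(shanks(shanks(shanks(s))), 1)    # iterated Shanks: many more correct digits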
When you fit a generalized linear model (GLM) in R and call confint on the model object, you get confidence intervals for the model coefficients. But you also get an interesting message: Waiting for profiling to be done...

What's that all about? What exactly is being profiled? Put simply, it's telling you that it's calculating a profile likelihood ratio confidence interval.

The typical way to calculate a 95% confidence interval is to multiply the standard error of an estimate by some normal quantile such as 1.96 and add/subtract that product to/from the estimate to get an interval. In the context of GLMs, we sometimes call that a Wald confidence interval. Another way to determine an upper and lower bound of plausible values for a model coefficient is to find the minimum and maximum value of the set of all coefficients that satisfy the following:

\[-2\log\left(\frac{L(\beta_{0}, \beta_{1}|y_{1},\ldots,y_{n})}{L(\hat{\beta_{0}}, \hat{\beta_{1}}|y_{1},\ldots,y_{n})}\right) < \chi_{1,1-\alpha}^{2}\]

Inside the parentheses is a ratio of likelihoods. In the denominator is the likelihood of the model we fit. In the numerator is the likelihood of the same model but with different coefficients. (More on that in a moment.) We take the log of the ratio and multiply by -2. This gives us a likelihood ratio test (LRT) statistic.

This statistic is typically used to test whether a coefficient is equal to some value, such as 0, with the null likelihood in the numerator (model without the coefficient, that is, coefficient equal to 0) and the alternative or estimated likelihood in the denominator (model with the coefficient). If the LRT statistic is less than \(\chi_{1,0.95}^{2} \approx 3.84\), we fail to reject the null. The coefficient is statistically not much different from 0. That means the likelihood ratio is close to 1: the likelihood of the model without the coefficient is almost as high as the likelihood of the model with it. On the other hand, if the ratio is small, the likelihood of the model without the coefficient is much smaller than the likelihood of the model with the coefficient. Because of the -2 log transformation, this leads to a larger LRT statistic, larger than 3.84, and thus rejection of the null.

Now in the formula above, we are seeking all such coefficients in the numerator that would make it a true statement. You might say we're "profiling" many different null values and their respective LRT test statistics. Do they fit the profile of a plausible coefficient value in our model? The smallest value we can get without violating the condition becomes our lower bound, and likewise with the largest value. When we're done we'll have a range of plausible values for our model coefficient that gives us some indication of the uncertainty of our estimate.

Let's load some data and fit a binomial GLM to illustrate these concepts. The following R code comes from the help page for confint.glm. This is an example from the classic Modern Applied Statistics with S. ldose is a dosing level and sex is self-explanatory. SF is number of successes and failures, where success is number of dead worms. We're interested in learning about the effects of dosing level and sex on number of worms killed. Presumably this worm is a pest of some sort.

# example from Venables and Ripley (2002, pp. 190-2.)
ldose <- rep(0:5, 2)
numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16)
sex <- factor(rep(c("M", "F"), c(6, 6)))
SF <- cbind(numdead, numalive = 20-numdead)
budworm.lg <- glm(SF ~ sex + ldose, family = binomial)
summary(budworm.lg)
##
## Call:
## glm(formula = SF ~ sex + ldose, family = binomial)
##
## Deviance Residuals:
##      Min       1Q   Median       3Q      Max
## -1.10540 -0.65343 -0.02225  0.48471  1.42944
##
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  -3.4732     0.4685  -7.413 1.23e-13 ***
## sexM          1.1007     0.3558   3.093  0.00198 **
## ldose         1.0642     0.1311   8.119 4.70e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 124.8756 on 11 degrees of freedom
## Residual deviance:   6.7571 on  9 degrees of freedom
## AIC: 42.867
##
## Number of Fisher Scoring iterations: 4
The coefficient for ldose looks significant. Let's determine a confidence interval for the coefficient using the confint function. We call confint on our model object, budworm.lg, and use the parm argument to specify that we only want to do it for ldose:
confint(budworm.lg, parm = "ldose")
## Waiting for profiling to be done...
##     2.5 %    97.5 %
## 0.8228708 1.3390581
We get our “waiting” message though there really was no wait. If we fit a larger model and request multiple confidence intervals, then there might actually be a waiting period of a few seconds. The lower bound is about 0.8 and the upper bound about 1.34. For comparison, the Wald interval built from the summary output above is \(1.0642 \pm 1.96 \times 0.1311 \approx (0.807, 1.321)\) — close to the profile interval, but not identical. We might say every increase in dosing level increases the log odds of killing worms by at least 0.8. We could also exponentiate to get a CI for an odds ratio estimate:
exp(confint(budworm.lg, parm = "ldose"))
## Waiting for profiling to be done...
##    2.5 %   97.5 %
## 2.277027 3.815448
The odds of “success” (killing worms) is at least 2.3 times higher at one dosing level versus the next lower dosing level. To better understand the profile likelihood ratio confidence interval, let's do it “manually”. Recall the denominator in the formula above was the likelihood of our fitted model. We can extract that with the logLik function:
den <- logLik(budworm.lg)
den
## 'log Lik.' -18.43373 (df=3)
The numerator was the likelihood of a model with a different coefficient. Here's the likelihood of a model with a coefficient of 1.05:
num <- logLik(glm(SF ~ sex + offset(1.05*ldose), family = binomial))
num
## 'log Lik.' -18.43965 (df=2)
Notice we used the offset function. That allows us to fix the coefficient to 1.05 and not have it estimated. Since we already extracted the log likelihoods, we need to subtract them. Remember this rule from algebra? \[\log\frac{M}{N} = \log M – \log N\] So we subtract the denominator from the numerator, multiply by -2, and check if it's less than 3.84, which we calculate with qchisq(p = 0.95, df = 1):
-2*(num - den)
## 'log Lik.' 0.01184421 (df=2)
-2*(num - den) < qchisq(p = 0.95, df = 1)
## [1] TRUE
It is. 1.05 seems like a plausible value for the ldose coefficient. That makes sense since the estimated value was 1.0642. Let's try it with a larger value, like 1.5:
num <- logLik(glm(SF ~ sex + offset(1.5*ldose), family = binomial))
-2*(num - den) < qchisq(p = 0.95, df = 1)
## [1] FALSE
FALSE. 1.5 seems too big to be a plausible value for the ldose coefficient. Now that we have the general idea, we can program a while loop to check different values until we exceed our threshold of 3.84.
cf <- budworm.lg$coefficients[3]  # fitted coefficient 1.0642
cut <- qchisq(p = 0.95, df = 1)   # about 3.84
e <- 0.001                        # increment to add to coefficient
LR <- 0                           # to kick start our while loop
while(LR < cut){
  cf <- cf + e
  num <- logLik(glm(SF ~ sex + offset(cf*ldose), family = binomial))
  LR <- -2*(num - den)
}
(upper <- cf)
##    ldose
## 1.339214
To begin we save the original coefficient to cf, store the cutoff value to cut, define our increment of 0.001 as e, and set LR to an initial value of 0. In the loop we increment our coefficient estimate, which is used in the offset function in the estimation step. There we extract the log likelihood and then calculate LR. If LR is less than cut (3.84), the loop starts again with a new coefficient that is 0.001 higher. We see that our upper bound of 1.339214 is very close to what we got above using confint (1.3390581). If we set e to smaller values we'll get closer. We can find the LR profile lower bound in a similar way. Instead of adding the increment we subtract it:
cf <- budworm.lg$coefficients[3]  # reset cf
LR <- 0                           # reset LR
while(LR < cut){
  cf <- cf - e
  num <- logLik(glm(SF ~ sex + offset(cf*ldose), family = binomial))
  LR <- -2*(num - den)
}
(lower <- cf)
##    ldose
## 0.822214
The result, 0.822214, is very close to the lower bound we got from confint (0.8228708). This is a very basic implementation of calculating a likelihood ratio confidence interval. It is only meant to give a general sense of what's happening when you see that message Waiting for profiling to be done.... I hope you found it helpful. To see how R does it, enter getAnywhere(profile.glm) in the console and inspect the code. It's not for the faint of heart. I have to mention the book Analysis of Categorical Data with R, from which I gained a better understanding of the material in this post. The authors have kindly shared their R code at the following web site if you want to have a look: http://www.chrisbilder.com/categorical/ To see how they “manually” calculate likelihood ratio confidence intervals, go to the following R script and see the section “Examples of how to find profile likelihood ratio intervals without confint()”: http://www.chrisbilder.com/categorical/Chapter2/Placekick.R
Consider the function $f$ on $[0,1]\times [0,1]$ given by $$f(x,y) = \frac{x^2-y^2}{(x^2+y^2)^2}, \,(x,y)\neq (0,0)$$ and $f(0,0) = 0.$ Let $M$ denote the $\sigma$-algebra of Lebesgue measurable sets and $m$ the Lebesgue measure. In my previous question, it was shown that $f(x,y)$ is $M\times M$ measurable. I am trying to show now that $f$ is not $m\times m$ summable. Is my approach correct? Note that when $0\leq y<x$ we have $x^2+y^2\leq 2x^2$ and $x^2-y^2\geq 0$, so $(x^2+y^2)^2\leq 4x^4$. Then, $$\int_{0}^{1} |f_x| \, dm(y) \geq \int_{0}^{x} \frac{x^2-y^2}{(x^2+y^2)^2} \, dm(y)\geq \int_{0}^{x}\frac{x^2-y^2}{4x^4} \, dm(y)$$ Now, $$\int_{0}^{x} \frac{x^2-y^2}{4x^4} \, dm(y) = \frac{y}{4x^2}-\frac{y^3}{12x^4} \bigg|_{y = 0}^{y=x} = \frac{1}{4x}-\frac{1}{12x} = \frac{1}{6x}.$$ Integrating this lower bound over $x$ gives $$\int_0^1\!\!\int_0^1 |f| \, dm(y)\, dm(x) \geq \int_0^1 \frac{1}{6x}\, dm(x) = \infty,$$ so by Tonelli's theorem $\int |f| \, d(m\times m) = \infty$. Hence our function is not $m\times m$ summable.
Answer $$\cot\theta\sin\theta=\cos\theta$$ Work Step by Step $$A=\cot\theta\sin\theta$$ - Quotient Identity: $$\cot\theta=\frac{\cos\theta}{\sin\theta}$$ Therefore, $A$ would be $$A=\frac{\cos\theta}{\sin\theta}\times\sin\theta$$ $$A=\cos\theta$$ Since no quotients appear here and all functions are of $\theta$ only, we can stop here.
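As a quick spot check of the identity at a particular angle, take $\theta=\frac{\pi}{3}$: $$\cot\frac{\pi}{3}\sin\frac{\pi}{3}=\frac{1}{\sqrt{3}}\cdot\frac{\sqrt{3}}{2}=\frac{1}{2}=\cos\frac{\pi}{3}$$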
I came up with the following approximation $$\sqrt[4]{\pi}+\frac{2}{1000}\gtrsim\frac{4}{3}$$ I don't know much about proving an inequality like this algebraically. I was hoping for an extremely rigorous proof (I would definitely appreciate names of theorems). I am just starting to self-study computational number theory, and I didn't know how to prove this at all. My thought was to use a large but finite number of terms of a Taylor series, but I really had no clue how to set that up. Thanks for any help. A similar type of question is the following: Prove $\left(\frac{2}{5}\right)^{\frac{2}{5}}<\ln{2}$. My wording is a bit odd in this question, so please note that both questions are very similar. (Solving mine algebraically is really the point, though.)
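One route to a rigorous proof: since $\frac{4}{3}-\frac{2}{1000}=\frac{1997}{1500}$, the claim is equivalent to $\left(\frac{1997}{1500}\right)^4 < \pi$, and the left side is rational, roughly $3.141573$. So it suffices to prove the lower bound $\pi > 3.14158$, which a few terms of a fast-converging series (e.g., Machin's formula with explicit error bounds) deliver. Before investing in a proof, here is a quick high-precision numerical check — a sanity check rather than a proof — assuming the mpmath library is available:

from mpmath import mp, mpf

mp.dps = 30                          # work with 30 significant digits
lhs = mp.pi ** mpf('0.25') + mpf(2) / 1000
rhs = mpf(4) / 3
print(lhs > rhs)                     # True
print(lhs - rhs)                     # roughly 2.0e-6 -- the margin is tiny

The tiny margin is exactly why naive double-precision evaluation alone is unconvincing here, and why reducing to an exact rational comparison is worth doing.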
$\newcommand{\HH}{\mathbb H}$$\newcommand{\RR}{\mathbb R}$$\newcommand{\ZZ}{\mathbb Z}$$\newcommand{\SL}{\mathop {\rm SL}}$ In your question $X = G \backslash \HH$ is given as a quotient. Also, $G$ is a finitely generated free group and a subgroup of $\SL(2,\RR)$. As $G$ is convex co-compact it follows that $G$ is discrete. In this case $\HH$ is the universal cover of $X$, the group $G$ is the so-called "deck group", and there is a non-canonical isomorphism of $G$ with $\pi_1(X)$. (Choosing basepoints doesn't make the isomorphism canonical.) The last paragraph of your question doesn't make much sense. The usual action of $\SL(2,\ZZ)$ on $\HH$ is not convex co-compact. So it is not a good example to think about. Here is a better example which fits your situation very tightly: Suppose that $G$ is the free group of rank one -- that is, $G \cong \ZZ$. Suppose that $G$ is generated by a hyperbolic isometry $\gamma$. Let $A_\gamma \subset \HH$ be the axis of $\gamma$ acting on $\HH$. So $A_\gamma$ is a copy of $\RR$, topologically, and $G$ acts on $A_\gamma$ as $\ZZ$ acts on $\RR$, by translation. So the quotient $g = G \backslash A_\gamma$ is a circle. Also $X = G \backslash \HH$ is a hyperbolic annulus with two "flaring ends". To make this precise, note that $g \subset X$ is an essential loop in the annulus $X$. Note that $X - g$ has two components $L$ and $R$ (coming from the left and right sides of $A_\gamma$ in $\HH$). $L$ is again an annulus with one boundary on $g$. Also, $L$ has an exponentially flaring metric (as you move away from $g$) so the other boundary of $L$ is "at infinity". Returning to $\HH$, note that the limit set of $G$, $\Lambda_G$, is exactly two points: the endpoints of $A_\gamma$. Let $\Omega_G = \partial_\infty \HH - \Lambda_G$ be the domain of discontinuity of $G$. Then $\Omega_G$ is two open sub-arcs of $\partial_\infty \HH$. As before $G$ acts on each of these by translation. The quotients are the circles at infinity for $L$ and $R$ respectively. These two circles at infinity are the Gromov boundary of the hyperbolic annulus $X$. Finally, in this example the convex core of $X = G \backslash \HH$ is exactly the circle $g = G \backslash A_\gamma$ that we started with. This discussion generalizes. If $G \subset \SL(2,\RR)$ is any (torsion free) group acting convex co-compactly on $\HH$ then the fundamental group of $X = G \backslash \HH$ is isomorphic to $G$. Also, the Gromov boundary of $X$, $\partial_\infty X$, is homeomorphic to $G \backslash \Omega(G)$. Each component of $\partial_\infty X$ is a circle - there is one for each flaring annular end of $X$. All ends are of this type - convex co-compactness rules out cusps.
I'm nearly at the end of this derivation but totally stuck so I'd appreciate a nudge in the right direction. Consider a set of N identical but distinguishable particles in a system of energy E. These particles are to be placed in energy levels $E_i$ for $i = 1, 2, \ldots, r$. Assume that we have $n_i$ particles in each energy level. The two constraints we impose are that $\sum_{i}^{r}n_i = N$ and $\sum_{i}^{r}E_i n_i = E$. The number of microstates in a given macrostate is given by \begin{equation} \Omega = \frac{N!}{\prod_{i}^r n_{i}!} \end{equation} We want to maximize this and, for ease of notation, we work with $\ln\Omega$ and use Stirling's approximation ($\ln x! \approx x\ln x - x$) to obtain \begin{equation} \ln\Omega = N\ln N - N - \sum_{i}^{r}\left(n_i\ln n_i - n_i\right) \end{equation} Maximizing this function subject to the constraints $\sum_{i}^{r}n_i = N$ and $\sum_{i}^{r}E_i n_i = E$ is a classic Lagrange multiplier problem. We take the undetermined multipliers to be $\alpha$ and $\beta$ for the two constraints and obtain \begin{align} \frac{\partial\ln\Omega}{\partial n_i} &= \alpha\frac{\partial n_i}{\partial n_i} + \beta\frac{\partial E_i n_i}{\partial n_i} \\ \nonumber \ln n_i &= \alpha + \beta E_i \\ \nonumber \therefore n_i &= e^{\alpha}e^{\beta E_i} \end{align} Now, we use the first constraint equation to determine $\alpha$. We get \begin{align} \sum_i^r n_i &= N \\ \nonumber \sum_i^r e^{\alpha}e^{\beta E_i} &= N \\ \nonumber e^\alpha &= \frac{N}{\sum_i^re^{\beta E_i}} \\ \nonumber e^\alpha &= \frac{N}{Z} \end{align} We have introduced the partition function, $Z=\sum_i^re^{\beta E_i}$, in the last line. Next, we have the second constraint equation that determines $\beta$ \begin{align} \sum_i^r E_i n_i &= E \\ \nonumber \frac{\sum_{i}^{r} E_i e^{\beta E_i}}{\sum_i^r e^{\beta E_i}} &= \frac{E}{N} \\ \nonumber \end{align} I'm assuming I should somehow connect $E$ with $T$ so let's say $E=Nk_B T$. Then, using $\sum_i^r E_i e^{\beta E_i} = \frac{\partial Z}{\partial\beta}$, we have \begin{align} \frac{N}{Z}\frac{\partial Z}{\partial\beta} &= E \\ \frac{\partial\ln Z}{\partial\beta} &= k_B T \end{align} How do I get to $\beta = -\frac{1}{k_B T}$ here? Notice that this derivation requires an extra minus sign compared to the usual definition of $\beta$ and this should come out naturally too, shouldn't it?
Topological Magic: Infinitely Many Primes The Basic Idea A while ago, I wrote about the importance of open sets in topology and how the properties of a topological space $X$ are highly dependent on these special sets. In that post, we discovered that the real line $\mathbb{R}$ can either be compact or non-compact, depending on which topological glasses we choose to view $\mathbb{R}$ with. Today, I'd like to show you another such example - one which has a surprising consequence! It turns out that if you view the integers $\mathbb{Z}$ through the 'correct' topological lens - that is, if you define your open sets just right - then you can prove that there are infinitely many prime numbers! This is particularly striking since the infinitude of the primes is a number theoretic fact, and instinctively number theory and topology are two separate realms. But by simply (though carefully!) choosing the right definition of "open," we become the master of our topological destiny and discover a secret back door connecting those two worlds. Pretty magical, right?* Now before moving on "from English to math," I should probably clarify one thing: a set $X$ (such as $\mathbb{R}$ or $\mathbb{Z}$) by itself is just that - a set. Just a plain, old set. So I'm not claiming that $X$ inherently possesses some topological properties that we can see if only we tilt our head and stare at $X$ at just the right angle. In other words, we can't even talk about juicy things like compactness or connectedness or continuous functions on $X$ unless we topologize $X$ first. In fact, $X$ is sort of like flour. Flour by itself isn't very tasty. But the addition of sugar, eggs, cocoa powder and a little oil will enrich the flour and turn it into a sweet concoction. Alternatively, we could add yeast, salt, some water and maybe a little rosemary to create a savory bread instead. So you see? On its own, flour is rather bland and not very useful. But by adding carefully-chosen additional 'information' to it (e.g. ingredients and a recipe), you can obtain different outcomes which fulfill different purposes. (So for instance, if your goal is to make a meal for dinner then you'd want to take a savory route. Unless you're like me and occasionally eat cookies for dinner. But I digress....) Analogously, in what follows below, we will start with the set $\mathbb{Z}$ and add extra information to it -- the topology, i.e. a carefully-chosen definition of "open set" -- which will help us reach our goal of proving the infinitude of the primes. From English to Math Usually, we declare a subset $U$ of $\mathbb{Z}=\{\ldots,-2,-1,0,1,2,\ldots\}$ to be open if for every integer $m\in U$ there is an open interval $(a,b)$ so that $m\in (a,b)\cap\mathbb{Z}\subset U$. But today we'll declare a subset $U$ of $\mathbb{Z}$ to be open if for every $m\in U$, there is a set of the form $$N_{a,b}=\{a+nb:n\in\mathbb{Z}\},\quad{b>0}$$ such that** $m\in N_{a,b}\subset U$. This isn't as scary as it looks! As a concrete example, consider the case when $a=7$ and $b=3$. To get a grasp on $N_{7,3}$, simply start at 7 and shift to the right 3 units to get 10. Shift right 3 more units to get to 13, then shift 3 more units to get 16, and so on. But we also need to pick up the integers to the left of $7$, so starting at 7 we shift 3 units in the negative direction to obtain $4, 1, -2,$ etc. Visually we can plot it like this: Notice that the pink dots continue indefinitely both to the left and right of 7.
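Before looking at more examples, here is a small Python sketch of the sets $N_{a,b}$ - the function name N and the window bounds are just illustrative choices of mine - which lists the members of $N_{a,b}$ in a finite window and checks, over that window, the complement identity that appears in Observation #1 below (the complement of $N_{7,3}$ is $N_{8,3}\cup N_{9,3}$):

def N(a, b, lo=-20, hi=20):
    """Members of N_{a,b} = {a + n*b : n in Z} (with b > 0) lying in [lo, hi]."""
    return {a + n * b for n in range((lo - a) // b, (hi - a) // b + 1)}

print(sorted(N(7, 3)))  # -20, -17, ..., 4, 7, 10, 13, 16, 19

# Check the complement claim on a finite window of integers:
window = set(range(-20, 21))
assert window - N(7, 3) == (N(8, 3) | N(9, 3)) & window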
Other examples of such open sets look like this: Now that we've defined our open sets, we need to make sure they satisfy the axioms of topology: (axiom #1) both the empty set and $\mathbb{Z}$ are open (axiom #2) a union of open sets is also open (axiom #3) a finite intersection of open sets is open It's not too hard to check that these really are satisfied, but I want to get to the good stuff. So like all helpful math texts, I'll leave it as an exercise. Next, let's make a few observations. This is where the fun begins: Observation #1: For any integers $a$ and $b$ with $b>0$, the set $N_{a,b}$ is both open and closed. (Side note: It is perfectly valid for a set to be both open and closed at the same time! We're not doing anything illegal. Some folks call these sets clopen.) By definition, $N_{a,b}$ is automatically open. After all, $N_{a,b}$ is contained in itself! And we say a set is closed if its complement - all of the stuff not in the set - is open. In our case, the complement of $N_{a,b}$ consists of a union of sets of the same form. It's easy to see this with our example: the complement of $N_{7,3}$ is $N_{8,3}\cup N_{9,3}$: Since a union of open sets is again open - this is axiom #2 - the complement of $N_{a,b}$ is indeed open. Hence $N_{a,b}$ is closed. Observation #2: For any integer $k$ - besides*** $1$ or $-1$ - we can find a prime number $p$ so that $k\in N_{0,p}$. This is just a fancy way of saying $k$ is divisible by $p$, and it follows simply because every integer $k$ - except for $1$ or $-1$ - is divisible by at least one prime $p$. Mathematically, we can express our Observation #2 like this: $$\mathbb{Z}\smallsetminus \{-1,1\}=\bigcup_{p,\;\text{prime}}N_{0,p}.$$ Observation #3: No (non-empty) finite subset of $\mathbb{Z}$ can be open. To see this, suppose $A\subset\mathbb{Z}$ is a finite set. Then we can list its elements like so: $A=\{n_1,\ldots,n_k\}.$ And suppose to the contrary that $A$ is open. Then, by definition of open, $A$ must contain a set of the form $N_{a,b}$. But $N_{a,b}$ contains infinitely many integers! Therefore $A$ must contain infinitely many integers as well. Of course, this is a contradiction since we assumed $A$ is finite. So our observation holds. With these observations at hand, we're almost at the end of the road! Can you see it? Consider the set of all integers except $1$ and $-1$. As above, we can express this set as $$\mathbb{Z}\smallsetminus \{-1,1\}=\bigcup_{p,\;\text{prime}}N_{0,p}.$$ Now suppose to the contrary that there are only a finite number of primes, say $\{p_1,\ldots,p_m\}$. Then our union really looks like this: $$\mathbb{Z}\smallsetminus \{-1,1\}=N_{0,p_1}\cup N_{0,p_2}\cup \cdots \cup N_{0,p_m}.$$ By Observation #1, we know that each $N_{0,p_i}$ is closed. Hence $\mathbb{Z}\smallsetminus\{-1,1\}$ is also closed since it is a finite union of closed sets. (Okay, okay, I'm pulling a fast one here. By De Morgan's Laws, axiom #3 is the same as saying a finite union of closed sets is closed.) But wait! Observation #3 implies $\mathbb{Z}\smallsetminus \{-1,1\}$ is not closed since its complement $\{-1,1\}$ is a finite set and no finite set can be open. This is indeed a contradiction. Conclusion? There must be infinitely many prime numbers! QED. As an aside, the topology on $\mathbb{Z}$ we've been playing with today is sometimes called the arithmetic progression topology, and the proof above was originally discovered by American-Israeli mathematician Hillel Fürstenberg in 1955 while he was still an undergraduate student! Bravo, Dr.
Fürstenberg! Footnotes: * At this point, the sophisticated reader who is already familiar with this proof may be thinking, "Bah humbug! This is just Euclid's proof cast in a different light." I prefer to leave such complaints to the experts. **The open sets $N_{a,b}$ form what's called a basis for our topology on $\mathbb{Z}$. Intuitively, the $N_{a,b}$ are the basic building blocks from which we can obtain all other open sets. *** EDIT 4/11/17: This originally read, "...any integer $k$ - besides 0, 1, or -1..." (and throughout) but, of course, there is no reason to exclude 0. (It's divisible by everything!) Many thanks to a reader for catching this.
We show that the asynchronous push-pull protocol spreads rumors in preferential attachment graphs (as defined by Barabasi and Albert) in time \(O(\sqrt{\log n})\) to all but a lower order fraction of the nodes with high probability. This is significantly faster than what synchronized protocols can achieve; an obvious lower bound for these is the average distance, which is known to be \(\Theta(\log n / \log\log n)\). 2006 Ajwani, Deepak; Friedrich, Tobias; Meyer, Ulrich. An \(O(n^{2.75})\) Algorithm for Online Topological Ordering. Scandinavian Workshop on Algorithm Theory (SWAT) 2006: 53-64. We present a simple algorithm which maintains the topological order of a directed acyclic graph with n nodes under an online edge insertion sequence in \(O(n^{2.75})\) time, independent of the number of edges m inserted. For dense DAGs, this is an improvement over the previous best result of \(O(\min(m^{3/2}\log n,\, m^{3/2} + n^2 \log n))\) by Katriel and Bodlaender. We also provide an empirical comparison of our algorithm with other algorithms for online topological sorting. Doerr, Benjamin; Friedrich, Tobias; Klein, Christian; Osbild, Ralf. Unbiased Matrix Rounding. Scandinavian Symposium and Workshops on Algorithm Theory (SWAT) 2006: 102-112. We show several ways to round a real matrix to an integer one such that the rounding errors in all rows and columns as well as the whole matrix are less than one. This is a classical problem with applications in many fields, in particular, statistics. We improve earlier solutions of different authors in two ways. First, for rounding matrices of size \(m \times n\), we reduce the runtime from \(O((mn)^2)\) to \(O(mn \log(mn))\). Second, our roundings also have a rounding error of less than one in all initial intervals of rows and columns. Consequently, arbitrary intervals have an error of at most two. This is particularly useful in the statistics application of controlled rounding. The same result can be obtained via (dependent) randomized rounding. This has the additional advantage that the rounding is unbiased, that is, for all entries \(y_{ij}\) of our rounding, we have \(E(y_{ij}) = x_{ij}\), where \(x_{ij}\) is the corresponding entry of the input matrix. Algorithm Engineering Our research focus is on theoretical computer science and algorithm engineering. We are equally interested in the mathematical foundations of algorithms and developing efficient algorithms in practice. A special focus is on random structures and methods.
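As a side note to the matrix-rounding abstract above: the unbiasedness property on its own is easy to see with independent randomized rounding, where each entry is rounded up with probability equal to its fractional part. A minimal Python sketch (my own illustration; it ignores the paper's harder guarantee of controlling row, column, and interval errors, which requires dependent rounding):

import numpy as np

def unbiased_round(x, rng=None):
    """Round each entry up or down at random so that E[y_ij] = x_ij."""
    rng = np.random.default_rng() if rng is None else rng
    lo = np.floor(x)
    frac = x - lo                          # fractional parts in [0, 1)
    return lo + (rng.random(x.shape) < frac)

x = np.random.rand(3, 4) * 5
print(unbiased_round(x))

Each entry $y_{ij}$ equals $\lfloor x_{ij}\rfloor + 1$ with probability equal to the fractional part of $x_{ij}$, so $E(y_{ij}) = \lfloor x_{ij}\rfloor + \{x_{ij}\} = x_{ij}$.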
Anyone who’s tried pairs trading will tell you that real financial series don’t exhibit truly stable, cointegrating relationships. If they did, pairs trading would be the easiest game in town. But the reality is that relationships are constantly evolving and changing. At some point, we’re forced to make uncertain decisions about how best to capture those changes. One way to incorporate both uncertainty and dynamism in our decisions is to use the Kalman filter for parameter estimation. The Kalman filter is a state space model for estimating an unknown (‘hidden’) variable using observations of related variables and models of those relationships. The Kalman filter is underpinned by Bayesian probability theory and enables an estimate of the hidden variable in the presence of noise. There are plenty of tutorials online that describe the mathematics of the Kalman filter, so I won’t repeat those here (this article is a wonderful read). Instead, I’ll show you how to implement the Kalman filter framework to provide a dynamic estimate of the hedge ratio in a pairs trading strategy. I’ll provide just enough math as is necessary to follow the implementation. To implement our Kalman filter, we need four ingredients: an observed variable, a hidden variable, an observation model relating the two, and a state transition model describing how the hidden variable evolves. For our hedge ratio/pairs trading application, the observed variable is one of our price series \(p_1\) and the hidden variable is our hedge ratio, \(\beta\). The observed and hidden variables are related by the familiar spread equation: \[p_1 = \beta * p_2 + \epsilon\] where \(\epsilon\) is noise (in our pairs trading framework, we are essentially making bets on the mean reversion of \(\epsilon\)). In the Kalman framework, the other price series, \(p_2\), provides our observation model. We also need to define a state transition model that describes the evolution of \(\beta\) from one time period to the next. If we assume that \(\beta\) follows a random walk, then our state transition model is simply \[\beta_t = \beta_{t-1} + \omega\] At every time step, the well-known iterative Kalman filter algorithm predicts the hidden state and its covariance, predicts the measurement and its variance, and then updates the state estimate from the observed prediction error — the recursions are sketched just below. To start the iteration, we need initial values for the covariances of the measurement and state equations. Methods exist to estimate these from data, but for our purposes we will start with some values that result in a relatively slowly changing hedge ratio. To make the hedge ratio change faster, increase the values of delta and Ve in the R code below. The initial estimates of these values are as close to ‘parameters’ as we have in our Kalman filter framework. Here’s some R code for implementing the Kalman filter. The two price series used are daily adjusted closing prices for the “Hello world” of pairs trading: GLD and GDX (you can download the data at the end of this post). First, read in and take a look at the data:
library(xts)
path <- "C:/Path/To/Your/Data/"
assets <- c("GLD", "GDX")
df1 <- xts(read.zoo(paste0(path, assets[1], ".csv"), tz="EST", format="%Y-%m-%d", sep=",", header=TRUE))
df2 <- xts(read.zoo(paste0(path, assets[2], ".csv"), tz="EST", format="%Y-%m-%d", sep=",", header=TRUE))
xy <- merge(df1$Close, df2$Close, join="inner")
colnames(xy) <- assets
plot(xy, legend.loc=1)
Here’s what the data look like: Looks OK at first glance.
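For reference, here are the standard scalar-observation Kalman filter recursions, written in the notation of the code that follows (\(x_t\) is the row vector of the first price and an intercept, \(y_t\) the second price, \(V_w\) and \(V_e\) the state and measurement noise covariances); this is the textbook form of the filter laid out to match the implementation:

\[\begin{aligned} \beta_{t|t-1} &= \beta_{t-1}, & R_t &= P_{t-1} + V_w && \text{(state prediction)}\\ \hat{y}_t &= x_t\,\beta_{t|t-1}, & Q_t &= x_t R_t x_t^{\top} + V_e && \text{(measurement prediction)}\\ e_t &= y_t - \hat{y}_t, & K_t &= R_t x_t^{\top}/Q_t && \text{(error and Kalman gain)}\\ \beta_t &= \beta_{t|t-1} + K_t e_t, & P_t &= R_t - K_t x_t R_t && \text{(state update)} \end{aligned}\]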
Here’s the code for the iterative Kalman filter estimate of the hedge ratio:
x <- xy[, assets[1]]
y <- xy[, assets[2]]
x$int <- rep(1, nrow(x))
delta <- 0.0001
Vw <- delta/(1-delta)*diag(2)
Ve <- 0.001
R <- matrix(rep(0, 4), nrow=2)
P <- matrix(rep(0, 4), nrow=2)
beta <- matrix(rep(0, nrow(y)*2), ncol=2)
y_est <- rep(0, nrow(y))
e <- rep(0, nrow(y))
Q <- rep(0, nrow(y))
for(i in 1:nrow(y)) {
  if(i > 1) {
    beta[i, ] <- beta[i-1, ]  # state transition
    R <- P + Vw               # state cov prediction
  }
  y_est[i] <- x[i, ] %*% beta[i, ]                # measurement prediction
  Q[i] <- x[i, ] %*% R %*% t(x[i, ]) + Ve         # measurement variance prediction
  # error between observation of y and prediction
  e[i] <- y[i] - y_est[i]
  K <- R %*% t(x[i, ]) / Q[i]                     # Kalman gain
  # state update
  beta[i, ] <- beta[i, ] + K * e[i]
  P = R - K %*% x[i, ] %*% R
}
beta <- xts(beta, order.by=index(xy))
plot(beta[2:nrow(beta), 1], type='l', main = 'Kalman updated hedge ratio')
plot(beta[2:nrow(beta), 2], type='l', main = 'Kalman updated intercept')
And here is the resulting plot of the dynamic hedge ratio: The value of the Kalman filter is immediately apparent – you can see how drastically the hedge ratio changed over the years. We could use that hedge ratio to construct our signals for a trading strategy, but we can actually use the other by-products of the Kalman filter framework to generate them directly (hat tip to Ernie Chan for this one): The prediction error (e in the code above) is equivalent to the deviation of the spread from its predicted value. Some simple trade logic could be to buy and sell our spread when this deviation is very negative and positive respectively. We can relate the actual entry levels to the standard deviation of the prediction error. The Kalman routine also computes the standard deviation of the error term for us: it is simply the square root of Q in the code above. Here’s a plot of the trading signals at one standard deviation of the prediction error (we need to drop a few leading values as the Kalman filter takes a few steps to warm up):
# plot trade signals
e <- xts(e, order.by=index(xy))
sqrtQ <- xts(sqrt(Q), order.by=index(xy))
signals <- merge(e, sqrtQ, -sqrtQ)
colnames(signals) <- c("e", "sqrtQ", "negsqrtQ")
plot(signals[3:length(index(signals))], ylab='e', main = 'Trade signals at one-standard deviation', col=c('blue', 'black', 'black'), lwd=c(1,2,2))
Cool! Looks OK, except the number of signals greatly diminishes in the latter half of the simulation period. Later, we might come back and investigate a more aggressive signal, but let’s press on for now. At this point, we’ve got a time series of trade signals corresponding to the error term being greater than one standard deviation from its (estimated) mean. We could run a vectorised backtest by calculating positions corresponding to these signals, then determine the returns of holding those positions.
In fact, let’s do that next:
# vectorised backtest
sig <- ifelse((signals[1:length(index(signals))]$e > signals[1:length(index(signals))]$sqrtQ) & (lag.xts(signals$e, 1) < lag.xts(signals$sqrtQ, 1)), -1,
       ifelse((signals[1:length(index(signals))]$e < signals[1:length(index(signals))]$negsqrtQ) & (lag.xts(signals$e, 1) > lag.xts(signals$negsqrtQ, 1)), 1, 0))
colnames(sig) <- "sig"
## trick for getting only the first signals
sig[sig == 0] <- NA
sig <- na.locf(sig)
sig <- diff(sig)/2
plot(sig)
## simulate positions and pnl
sim <- merge(lag.xts(sig,1), beta[, 1], x[, 1], y)
colnames(sim) <- c("sig", "hedge", assets[1], assets[2])
sim$posX <- sim$sig * -1000 * sim$hedge
sim$posY <- sim$sig * 1000
sim$posX[sim$posX == 0] <- NA
sim$posX <- na.locf(sim$posX)
sim$posY[sim$posY == 0] <- NA
sim$posY <- na.locf(sim$posY)
pnlX <- sim$posX * diff(sim[, assets[1]])
pnlY <- sim$posY * diff(sim[, assets[2]])
pnl <- pnlX + pnlY
plot(cumsum(na.omit(pnl)), main="Cumulative PnL, $")
Just a quick explanation of my hacky backtest… The ugly nested ifelse statement at the top creates a time series of trade signals where sells are represented as -1, buys as 1 and no signal as 0. The buy signal is the prediction error crossing under its -1 standard deviation from above; the sell signal is the prediction error crossing over its 1 standard deviation from below. The problem with this signal vector is that we can get consecutive sell signals and consecutive buy signals. We don’t want to muddy the waters by holding more than one position at a time, so we use a little trick in the three lines following the "trick" comment: firstly we replace any zeroes with NA, then use the na.locf function to fill forward the NA values with the last real value. We then recover the original (non-consecutive) signals by taking the diff and dividing by 2. If that seems odd, just write down on a piece of paper a few signals of -1, 1 and 0 in a column and perform on them the operations described. You’ll quickly see how this works. Then, we calculate our positions in each asset according to our spread and signals, taking care to lag our signals so that we don’t introduce look-ahead bias. We’re trading 1,000 units of our spread per trade. Our estimated profit and loss is just the sum of the price differences multiplied by the positions in each asset. Here’s the result: Looks interesting! But recall that our trading signals were few and far between in the latter half of the simulation? If we plot the signals, we see that we were actually holding the spread for well over a year at a time: I doubt we’d want to trade the spread this way, so let’s make our signals more aggressive:
# more aggressive trade signals
signals <- merge(e, .5*sqrtQ, -.5*sqrtQ)
colnames(signals) <- c("e", "sqrtQ", "negsqrtQ")
plot(signals[3:length(index(signals))], ylab='e', main = 'Trade signals at one-standard deviation', col=c('blue', 'black', 'black'), lwd=c(1,2,2))
Better! A smarter way to do this would probably be to adapt the trade level (or levels) to the recent volatility of the spread – I’ll leave that as an exercise for you. These trade signals lead to this impressive and highly dubious equity curve: Why is it dubious? Well, you probably noticed that there are some pretty out-there assumptions in this backtest. To name the most obvious: transaction costs and slippage are ignored entirely, and fills are assumed at the same closing prices used to generate the signals. My gut feeling is that this would need a fair bit of work to cover costs of trading – but that gets tricky to assess without a more accurate simulation tool.
You can see that it’s a bit of a pain to backtest – particularly if you want to incorporate costs. To be fair, there are native R backtesting solutions that are more comprehensive than my quick-n-dirty vectorised version. But in my experience none of them lets you move quite as fast as the Zorro platform, which also allows you to go from backtest to live trading with almost the click of a button. You can see that R makes it quite easy to incorporate an advanced algorithm (well, at least I think it’s advanced; our clever readers probably disagree). But tinkering with the strategy itself – for instance, incorporating costs, trading at multiple standard deviation levels, using a timed exit, or incorporating other trade filters – is a recipe for a headache, not to mention a whole world of unit testing and bug fixing. On the other hand, Zorro makes tinkering with the trading aspects of the strategy easy. Want to get a good read on costs? That’s literally a line of code. Want to filter some trades based on volatility? Yeah, you might need two lines for that. What about trading the spread at say half a dozen levels and entering and exiting both on the way up and on the way down? OK, you might need four lines for that. The downside with Zorro is that it would be pretty nightmarish implementing a Kalman filter in its native Lite-C code. But thanks to Zorro’s R bridge, I can use the R code for the Kalman filter that I’ve already written, with literally only a couple of minor tweaks. We can have the best of both worlds. Which leads to my next post… Next time I post, it will be to show you a basic pairs trading script in Zorro, using a more vanilla method of calculating the hedge ratio. After that, I’ll show you how to configure Zorro to talk to R and thus make use of the Kalman filter algorithm. I’d love to know if this series is interesting for you, and what else you’d like to read about on Robot Wealth. Let us know in the comments. Get the exact data and code we used in this blog post! Implement your own Kalman Filter — download all the code you need for free
Answer The solution set is $$\left\{0,\frac{\pi}{3},\pi,\frac{5\pi}{3}\right\}$$ Work Step by Step $$\sin x=\sin2x$$ over interval $[0,2\pi)$ 1) In this case, only the interval for $x$, which is $[0,2\pi)$, is necessary, as you will see in step 2 that $\sin2x$ would be changed to a function of $x$ only. $$x\in[0,2\pi)$$ 2) Now consider back the equation $$\sin x=\sin2x$$ Here we see that $\sin x$ is a trigonometric function of $x$, but $\sin2x$ is that of $2x$. Thus it is essential to change $\sin2x$ to a trigonometric function of $x$ by using the identity $\sin2x=2\sin x\cos x$ $$\sin x=2\sin x\cos x$$ $$2\sin x\cos x-\sin x=0$$ $$\sin x(2\cos x-1)=0$$ $$\sin x=0\hspace{1cm}\text{or}\hspace{1cm}\cos x=\frac{1}{2}$$ For $\sin x=0$, over the interval $[0,2\pi)$, there are 2 values of $x$ for which $\sin x$ equals $0$: $\{0,\pi\}$. (Note that $2\pi$ is excluded, since the interval is half-open.) For $\cos x=\frac{1}{2}$, over the interval $[0,2\pi)$, there are 2 values of $x$ for which $\cos x$ equals $\frac{1}{2}$: $\{\frac{\pi}{3},\frac{5\pi}{3}\}$. Combining the solutions of the 2 cases where $\sin x=0$ or $\cos x=\frac{1}{2}$, we end up with the solution set: $$x=\left\{0,\frac{\pi}{3},\pi,\frac{5\pi}{3}\right\}$$
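As a quick check, substitute the values back into the original equation: $$\sin\pi=0=\sin2\pi,\qquad \sin\frac{5\pi}{3}=-\frac{\sqrt{3}}{2}=\sin\frac{10\pi}{3}$$ (for the second, $\sin\frac{10\pi}{3}=\sin\left(\frac{10\pi}{3}-2\pi\right)=\sin\frac{4\pi}{3}=-\frac{\sqrt{3}}{2}$), while $x=2\pi$ also solves the equation but lies outside the half-open interval $[0,2\pi)$.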
Vopěnka's principle Vopěnka's principle is a large cardinal axiom at the upper end of the large cardinal hierarchy that is particularly notable for its applications to category theory. In a set theoretic setting, the most common definition is the following: For any language $\mathcal{L}$ and any proper class $C$ of $\mathcal{L}$-structures, there are distinct structures $M, N\in C$ and an elementary embedding $j:M\to N$. For example, taking $\mathcal{L}$ to be the language with one unary and one binary predicate, we can consider for any ordinal $\eta$ the class of structures $\langle V_{\alpha+\eta},\{\alpha\},\in\rangle$, and conclude from Vopěnka's principle that a cardinal that is at least $\eta$-extendible exists. In fact if Vopěnka's principle holds then there is a proper class of extendible cardinals; bounding the strength of the axiom from above, we have that if $\kappa$ is almost huge, then $V_\kappa$ satisfies Vopěnka's principle. Formalisations As stated above and from the point of view of ZFC, this is actually an axiom schema, as we quantify over proper classes, which from a purely ZFC perspective means definable proper classes. One alternative is to view Vopěnka's principle as an axiom in a class theory, such as von Neumann-Gödel-Bernays. Another is to consider a _Vopěnka cardinal_, that is, a cardinal $\kappa$ that is inaccessible and such that $V_\kappa$ satisfies Vopěnka's principle when "proper class" is taken to mean "subset of $V_\kappa$ of cardinality $\kappa$". These three alternatives are, in the order listed, strictly increasing in strength (see http://mathoverflow.net/questions/45602/can-vopenkas-principle-be-violated-definably). Equivalent statements The schema form of Vopěnka's principle is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals for every $n$; indeed there is a level-by-level stratification of Vopěnka's principle, with Vopěnka's principle for a $\Sigma_{n+2}$-definable class corresponding to the existence of a $C^{(n)}$-extendible cardinal greater than the ranks of the parameters. [1] References Bagaria, Joan and Casacuberta, Carles and Mathias, A R D and Rosický, Jiří. Definable orthogonality classes in accessible categories are small. Journal of the European Mathematical Society 17(3):549--589.
This note addresses the typical applied problem of estimating from data how a target “conversion rate” function varies with some available scalar score function — e.g., estimating conversion rates from some marketing campaign as a function of a targeting model score. The idea centers around estimating the integral of the rate function; differentiating this gives the rate function. The method is a variation on a standard technique for estimating pdfs via fits to empirical cdfs. Problem definition and naive binning solution Here, we are interested in estimating a rate function, $p \equiv p(x)$, representing the probability of some “conversion” event as a function of $x$, some scalar model score. To do this, we assume we have access to a finite set of score-outcome data of the form $\{(x_i, n_i), i= 1, \ldots ,k\}$. Here, $x_i$ is the score for example $i$ and $n_i \in \{0,1\}$ is its conversion indicator. There are a number of standard methods for estimating rate functions. For example, if the score $x$ is a prior estimate for the conversion rate, a trivial mapping $p(x) = x$ may work. This won’t work if the score function in question is not an estimate for $p$. A more general approach is to bin together example data points that have similar scores: The observed conversion rate within each bin can then be used as an estimate for the true conversion rate in the bin’s score range. An example output of this approach is shown in Fig. 1. Another option is to create a moving average, analogous to the binned solution. The simple binning approach introduces two inefficiencies: (1) Binning coarsens a data set, resulting in a loss of information. (2) The data in one bin does not affect the data in the other bins, precluding exploitation of any global smoothness constraints that could be placed on $p$ as a function of $x$. The running average approach is also subject to these issues. The method we discuss below alleviates both inefficiencies. Fig. 1. Binned probability estimate approach: All data with scores in a given range are grouped together, and the outcomes from those data points are used to estimate the conversion rate in each bin. Here, the x-axis represents score range, data was grouped into six bins, and mean and standard deviation of the outcome probabilities were estimated from the observed outcomes within each bin. Efficient estimates by integration It can be difficult to directly fit a rate function p(x) using score-outcome data because data of this type does not lie on a continuous curve (the y-values alternate between 0 and 1, depending on the outcome for each example). However, if we consider the empirical integral of the available data, we obtain a smooth, increasing function that is much easier to fit. To evaluate the empirical integral, we assume the samples are first sorted by $x$ and define $$ \tag{1} \label{1} \delta x_i \equiv x_i – x_{i-1}. $$ Next, the empirical integral is taken as $$ \tag{2} \label{2} \hat{J}(x_j) \equiv \sum_{i=1}^{j} n_i \delta x_i, $$ which approximates the integral $$\tag{3} \label{3} J(x_j) \equiv \int_{x_0}^{x_j} p(x) dx. $$ We can think of (\ref{3}) as the number of expected conversions given density-$1$ sampling over the $x$ range noted. Taking a fit to the $\{(x_i, \hat{J}(x_i))\}$ values gives a smooth estimate for (\ref{3}). Differentiating with respect to $x$ then gives an estimate for $p(x)$. Fig. 2 illustrates the approach.
Here, I fit the available data to a quadratic, capturing the growth in $p$ with $x$. The example in Fig. 2 has no error bar shown. One way to obtain error bars would be to work with a particular fit form. The uncertainty in the fit coefficients could then be used to estimate uncertainties in the values at each point. Fig. 2. (Left) A plot of the empirical integral of the data used to generate Fig. 1 is in blue. A quadratic fit is shown in red. (Right) The derivative of the red fit function at left is shown, an estimate for the rate function in question, $p\equiv p(x)$. Example python code The code snippet below carries out the procedure described above on a simple example. One example output is shown in Fig. 3 at the bottom of the section. Running the code multiple times gives one a sense of the error that is present in the predictions. In practical applications this can't be done, so the error analysis procedure suggested above should be carried out instead to get a sense of the error involved.
%pylab inline
import numpy as np
from scipy.optimize import curve_fit

def p_given_x(x):
    return x ** 2

def outcome_given_p(p):
    return np.random.binomial(1, p)

# Generate some random data
x = np.sort(np.random.rand(200))
p = p_given_x(x)
y = outcome_given_p(p)

# Calculate delta x, get weighted outcomes
delta_x = x[1:] - x[:-1]
weighted_y = y[:-1] * delta_x

# Integrate and fit; since p(x) here is quadratic, a cubic captures J(x) well
j = np.cumsum(weighted_y)

def fit_func(x, a, b):
    return a * x ** 3 + b * x ** 2

popt, pcov = curve_fit(fit_func, x[:-1], j)
j_fit = fit_func(x[:-1], *popt)

# Finally, differentiate and compare to actual p
p_fit = (j_fit[1:] - j_fit[:-1]) / delta_x[:-1]

# Plots
plt.figure(figsize=(10,3))
plt.subplot(1, 2, 1)
plot(x[:-1], j, '*', label='empirical integral')
plot(x[:-1], j_fit, 'r', label='fit to integral')
plt.legend()
plt.subplot(1, 2, 2)
plot(x[:-2], p_fit, 'g', label='fit to p versus x')
plot(x, p, 'k--', label='actual p versus x')
plt.legend()
Fig. 3. The result of one run of the algorithm on a data set where $p(x) \equiv x^2$, given 200 random samples of $x \in (0, 1)$.
I am new to ML. I am currently reading the classic book Machine Learning in Action by Peter Harrington. In its implementation of ridge regression in P165 Listing 8.3, the book standardizes the feature matrix $X$ by subtracting the means of the attributes and then dividing by the variances of the attributes, not by the standard deviations of the attributes! As follows:
def ridgeRegres(xMat,yMat,lam=0.2):
    xTx = xMat.T*xMat
    denom = xTx + eye(shape(xMat)[1])*lam
    if linalg.det(denom) == 0.0:
        print "This matrix is singular, cannot do inverse"
        return
    ws = denom.I * (xMat.T*yMat)
    return ws

def ridgeTest(xArr,yArr):
    xMat = mat(xArr); yMat=mat(yArr).T
    yMean = mean(yMat,0)
    yMat = yMat - yMean     #to eliminate X0 take mean off of Y
    #regularize X's
    xMeans = mean(xMat,0)   #calc mean then subtract it off
    xVar = var(xMat,0)      #calc variance of Xi then divide by it
    xMat = (xMat - xMeans)/xVar
    numTestPts = 30
    wMat = zeros((numTestPts,shape(xMat)[1]))
    for i in range(numTestPts):
        ws = ridgeRegres(xMat,yMat,exp(i-10))
        wMat[i,:]=ws.T
    return wMat
It then calls ridgeTest by
>>> abX,abY=regression.loadDataSet('abalone.txt')
>>> ridgeWeights=regression.ridgeTest(abX,abY)
I do not think this makes any sense. According to standardization, to create a unit variance, we should divide $X-\mu$ by the standard deviation $\sigma$, because: $$ D(\frac{X-\mu}{\sigma})=\frac{1}{\sigma^2}D(X)=\frac{1}{\sigma^2}\sigma^2=1, $$ where $D(X)=\sigma^2$ is the variance of $X$. If, according to the book, we divide $X-\mu$ by the variance $\sigma^2$, we only get: $$ D(\frac{X-\mu}{\sigma^2})=\frac{1}{\sigma^4}D(X)=\frac{1}{\sigma^4}\sigma^2=\frac{1}{\sigma^2}, $$ which is not a unit variance. The reason why I am not so confident that this is a mistake is that, first of all, the book handles $Y$ carefully by subtracting its mean and not including an all-ones column in the $X$ passed, both of which I find to be proper (see L2-normalization does not punish intercept). Hence I do not believe the book would make an obvious mistake like this. Secondly, neither the errata (see errata) nor the book forum has discussed it before. It is not a new book. Although I posted "divide by std not variance" in the book forum, no reply has been received yet. So I turn to stack exchange. Does the feature standardization of the book mistakenly divide $X-\mu$ by the variance, or is it a meaningful action? Thanks a lot in advance! Any clue will help me.
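For what it's worth, the asker's algebra is easy to confirm numerically. A minimal NumPy check (the variable names are mine) shows that dividing by the variance leaves a variance of $1/\sigma^2$ rather than 1:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=100_000)   # sigma = 3

z_std = (x - x.mean()) / x.std()   # divide by standard deviation
z_var = (x - x.mean()) / x.var()   # divide by variance, as in the book

print(z_std.var())   # ~1.0
print(z_var.var())   # ~0.111 = 1/9 = 1/sigma^2, not unit variance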
The following is not particularly fast and could be more accurate but does make progress toward the goals set in the Question. To begin, consider the analytical properties of f, as defined in the Question. By observation, it has branch points at {I Sqrt[6/5] ξ, I Sqrt[2/5] ξ} and their conjugates. Poles are obtained by p /. Simplify[Solve[Denominator[f[p, ξ]] == 0, p], ξ > 0] // Flatten (* {0, -2 I Sqrt[1/15 (3 - Sqrt[3])] ξ, 2 I Sqrt[1/15 (3 - Sqrt[3])] ξ} *) with corresponding residues, Simplify[Residue[f[p, ξ], {p, #}] & /@ %, ξ > 0] (* {-(3/(8 ξ)), Sqrt[1 + Sqrt[3]]/(8 (Sqrt[33 - 19 Sqrt[3]] - 2 Sqrt[3 - Sqrt[3]] + Sqrt[9 + 5 Sqrt[3]]) ξ), ...} *) Remark If you're in v9.0.1, the Residues of $\pm 2 i \sqrt{\frac{1}{15} \left(3-\sqrt{3}\right)} \xi$ won't be calculated correctly due to a bug, please use Simplify[(SeriesCoefficient[Exp[p t] f[p, ξ], {p, #, -1}] &) /@ %, ξ > 0] instead to calculate the residues. (The third Residue is the same as the second.) The poles and branch cuts can be seen in the plots, here for ξ == 1, Row[Plot3D[#, {pr, -1, 1}, {pi, -1, 1}, ImageSize -> Medium, PlotRange -> {-2, 2}, AxesLabel -> {pr, pi, f}] & /@ ReIm[f[pr + I pi, 1]]] all lying along the imaginary axis, and the branch cuts running from the corresponding branch points to I Infinity and its conjugate. From these results it is evident that the Bromwich contour of the first expression in the Question must lie to the right of the imaginary axis and should be somewhat to the right to minimize integration issues. The integral over p as a function of ξ for t == 2 is Plot[(ξ NIntegrate[f[1 + I pi, ξ] Exp[2 (1 + I pi)], {pi, -Infinity, Infinity}] // Chop) /(2 Pi), {ξ, 0, 50}, PlotRange -> All] This computation, which takes a few minutes on my PC, seems quite accurate, as can be seen by varying the real part of p for the contour integration. The integration over ξ is neither fast nor accurate, unfortunately. pinv[ξ_?NumericQ] := NIntegrate[f[1 + I pi, ξ] Exp[2 (1 + I pi)]/(2 Pi), {pi, -Infinity, Infinity}] // Chop NIntegrate[ξ BesselJ[0, ξ] pinv[ξ], {ξ, 0, Infinity}, MaxRecursion -> 40, Method -> "ExtrapolatingOscillatory"] (* SequenceLimit::seqlim: The general form of the sequence could not be determined, and the result may be incorrect. *) (* -0.423105 *) rather than the desired -3/8. However, further progress can be made be rewriting f as f1[p_,ξ_] = -5 p Sqrt[Sqrt[5/6] p + I ξ] Sqrt[Sqrt[5/6] p - I ξ]/ (4 (-4 ξ^2 Sqrt[Sqrt[5/6] p + I ξ] Sqrt[Sqrt[5/6] p - I ξ] Sqrt[Sqrt[5/2] p + I ξ] Sqrt[Sqrt[5/2] p - I ξ] + ((5 p^2)/2 + 2 ξ^2)^2)) which causes the branch cuts to extend along lines of constant Im[p] from - Infinity to the branch points. The Bromwich contour then can be distorted to yield a solution consisting of the three Residues plus integrals of the discontinuities along the four branch cuts. The Residues alone give a qualitatively reasonable rendition of the plot in the Question. Plot[Integrate[BesselJ[0, ξ] Cos[ξ t 2 Sqrt[1/15 (3 - Sqrt[3])]], {ξ, 0, Infinity}] 2 Sqrt[1 + Sqrt[3]]/(8 (Sqrt[33 - 19 Sqrt[3]] - 2 Sqrt[3 - Sqrt[3]] + Sqrt[9 + 5 Sqrt[3]]) ) - 3/8, {t, 0, 2}, PlotRange -> {-.5, 1.5}, AxesLabel -> {t, F}] Although the value of the integral is -3/8, as desired, for large t, the curve is not accurate at small t and, of course, does not include the integrals along the branch cuts. 
These are given for t == 2 by inv65[ξ_?NumericQ] := ξ Im[NIntegrate[(f1[ξ Sqrt[6/5] I + pr - .00001 I, ξ] - f1[ξ Sqrt[6/5] I + pr + .00001 I, ξ]) Exp[2 pr], {pr, -Infinity, 0}] Exp[2 ξ Sqrt[6/5] I]]/Pi inv25[ξ_?NumericQ] := ξ Im[NIntegrate[(f1[ξ Sqrt[2/5] I + pr - .00001 I, ξ] - f1[ξ Sqrt[2/5] I + pr + .00001 I, ξ]) Exp[2 pr], {pr, -Infinity, 0}] Exp[2 ξ Sqrt[2/5] I]]/Pi Plot[{inv65[ξ], inv25[ξ]}, {ξ, 0, 50}, PlotRange -> {-.04, .04}] These branch cut integrals offer two advantages over the Bromwich integration discussed above. First, the integrals themselves converge more rapidly due to the factors Exp[2 pr]. Second, the results decrease rapidly with ξ. Hence, the contribution to the t == 2 total result, NIntegrate[BesselJ[0, ξ] (inv65[ξ] + inv25[ξ]), {ξ, 0, Infinity}, MaxRecursion -> 40, Method -> "ExtrapolatingOscillatory"] (* 0.000191503 *) is acceptably close to the desired value of 0 despite error messages. Presumably, a smaller value could be obtained using larger values of WorkingPrecision, although the already slow computation would be far slower still. The curve in the Question can be reproduced reasonably well by inv[ξ_?NumericQ, t_?NumericQ] := ξ Im[NIntegrate[(f1[ξ Sqrt[6/5] I + pr - .00001 I, ξ] - f1[ξ Sqrt[6/5] I + pr + .00001 I, ξ]) Exp[t pr], {pr, -Infinity, 0}] Exp[t ξ Sqrt[6/5] I]]/Pi + ξ Im[NIntegrate[(f1[ξ Sqrt[2/5] I + pr - .00001 I, ξ] - f1[ξ Sqrt[2/5] I + pr + .00001 I, ξ]) Exp[t pr], {pr, -Infinity, 0}] Exp[t ξ Sqrt[2/5] I]]/Pi tinv = Table[NIntegrate[BesselJ[0, ξ] inv[ξ, t], {ξ, 0, Infinity}, MaxRecursion -> 40, Method -> "ExtrapolatingOscillatory"] + Integrate[BesselJ[0, ξ] Cos[ξ t 2 Sqrt[1/15 (3 - Sqrt[3])]], {ξ, 0, Infinity}] 2 Sqrt[1 + Sqrt[3]]/(8 (Sqrt[33 - 19 Sqrt[3]] - 2 Sqrt[3 - Sqrt[3]] + Sqrt[9 + 5 Sqrt[3]]) ) - 3/8, {t, .01, 2, .01}]; ListLinePlot[tinv, DataRange -> {0.01 Sqrt[2/5], 2 Sqrt[2/5]}, PlotRange -> {{0, 1.2}, {-0.8, 1.6}}, AxesLabel -> {Style["τ", 20], None}, AxesOrigin -> {0, -.1}] This curve required a few hours to produce, although significant savings could have been obtained with ParallelTable and some simplification of the integrands. That the computations in the reference given in the Question are so much faster reinforces the observation that superior algorithms typically trump superior coding. Addendum In response to a comment below, this additional material elaborates on the inverse Laplace transform of f1 and compares it with the inverse Laplace transform of f. As already discussed, f1 exhibits four branch cuts and three poles, shown in red in the plot immediately below for ξ == 1. The Bromwich integration contour described in the Question is a vertical line in the p plane, lying to the right of all branch cuts and poles, shown in the plot as a dashed line. Although the integration indicated by the first expression in the Question can be performed along this contour, the discussion in the latter part of this Answer indicated that it can be useful to transform the contour by shifting it far to the left, where it becomes a set of contours around the branch cuts and poles, as illustrated in the plot. (To minimize clutter, the contours are shown for only one each of the cuts and poles.) The portions of the shifted contour that are at p == - Infinity are ignored, because the integral vanishes there due to the integrand factor Exp[p t]. Integrals along the branch cuts in effect integrate the discontinuity of f1 across those cuts, and the integrals around the poles are, of course, their Residues. 
Just as the contour segments shown in the plot can be separated into components arising from the poles and from the branch cuts, the same can be done for the result of the original Bromwich contour integration in the early part of this Answer. The next plot compares (1) the sum of the branch cut integrals from the fourth plot above, and (2) the result in the second plot above after subtracting the pole contributions.

Plot[{ξ Im[NIntegrate[(f1[ξ Sqrt[6/5] I + pr - .00001 I, ξ] - f1[ξ Sqrt[6/5] I + pr + .00001 I, ξ]) Exp[2 pr], {pr, -Infinity, 0}] Exp[2 ξ Sqrt[6/5] I]]/Pi + ξ Im[NIntegrate[(f1[ξ Sqrt[2/5] I + pr - .00001 I, ξ] - f1[ξ Sqrt[2/5] I + pr + .00001 I, ξ]) Exp[2 pr], {pr, -Infinity, 0}] Exp[2 ξ Sqrt[2/5] I]]/Pi, (ξ NIntegrate[f[1 + I pi, ξ] Exp[2 (1 + I pi)], {pi, -Infinity, Infinity}] // Chop)/(2 Pi) - Cos[ξ 4 Sqrt[1/15 (3 - Sqrt[3])]] 2 Sqrt[1 + Sqrt[3]]/(8 (Sqrt[33 - 19 Sqrt[3]] - 2 Sqrt[3 - Sqrt[3]] + Sqrt[9 + 5 Sqrt[3]])) + 3/8}, {ξ, 0, 50}, PlotRange -> {-.08, .08}]

Not surprisingly, the two curves are virtually identical. As explained earlier, this decomposition is desirable, because the ξ integration can be performed symbolically for the pole contributions, resulting in the third plot. Indeed, the seemingly simple appearance of the branch cut contributions in the fourth plot suggests that they too could be represented by symbolic expressions, which might lead to further symbolic ξ integrations.
Permanent link: https://www.ias.ac.in/article/fulltext/pram/087/04/0056

We have synthesized, characterized and studied the third-order nonlinear optical properties of two different nanostructures of polydiacetylene (PDA), PDA nanocrystals and PDA nanovesicles, along with silver nanoparticle-decorated PDA nanovesicles. The second molecular hyperpolarizability $\gamma(-\omega; \omega, -\omega, \omega)$ of the samples has been investigated by the antiresonant ring interferometric nonlinear spectroscopic (ARINS) technique using a femtosecond mode-locked Ti:sapphire laser in the spectral range of 720–820 nm. The observed spectral dispersion of $\gamma$ has been explained in the framework of the three-essential-states model, and a correlation between the electronic structure and optical nonlinearity of the samples has been established. The energy of the two-photon state, the transition dipole moments and the linewidth of the transitions have been estimated. We have observed that the nonlinear optical properties of PDA nanocrystals and nanovesicles are different because of the influence of chain coupling effects facilitated by the chain packing geometry of the monomers. On the other hand, our investigation reveals that the spectral dispersion characteristic of $\gamma$ for silver nanoparticle-coated PDA nanovesicles is qualitatively similar to that observed for the uncoated PDA nanovesicles but bears no resemblance to that observed in silver nanoparticles. The presence of silver nanoparticles increases the $\gamma$ values of the coated nanovesicles slightly as compared to those of the uncoated nanovesicles, suggesting a definite but weak coupling between the free electrons of the metal nanoparticles and the $\pi$ electrons of the polymer in the composite system. Our comparative studies show that the arrangement of polymer chains in polydiacetylene nanocrystals is more favourable for higher nonlinearity.
4 Ways to Show a Group is Not Simple

You know the Sylow game. You're given a group of a certain order and are asked to show it's not simple. But where do you start? Here are four options that may be helpful when trying to produce a nontrivial normal subgroup.

Option 1: Show there is a unique Sylow p-subgroup.

This is usually the first thing you want to try, especially if $|G|$ is pretty large. Here your goal is to show that $n_p$, the number of Sylow $p$-subgroups of $G$, is equal to 1. This automatically implies that the unique Sylow $p$-subgroup is normal, and so $G$ is not simple. (Tip: it's often helpful to compute $n_p$ for the largest prime $p$ first.)

Example. Let $G$ be a group of order $520=2^3\cdot 5\cdot 13$. Letting $n_p$ denote the number of Sylow $p$-subgroups of $G$, we see $$n_2\in\{1,5,13,65\}, \qquad n_5\in\{1,26\}, \qquad n_{13}\in\{1,40\}.$$ Suppose none of $n_2$, $n_5$, $n_{13}$ equals 1 (else we're done). Then $n_{13}=40$ and there are $40(13-1)=480$ elements of order 13. Similarly, $n_5=26$ and there are $26(5-1)=104$ elements of order 5. This means $G$ contains at least $480+104=584$ elements, which is impossible. Hence at least one of $n_2$, $n_5$, $n_{13}$ equals 1, and so $G$ is not simple.

Option 2: Show ker φ is non-trivial for a homomorphism φ on G.

Suppose $H\leq G$ is a subgroup of index $n$ with $|G|=p^kn$, where $p\nmid n$ and $k\geq 1$. This trick works whenever $n!<|G|$: let $G$ act on its collection of cosets $G/H$ by multiplication (so for any $g\in G$ and $xH\in G/H$ let $g\cdot xH:=gxH$). Then define a homomorphism $\varphi$ from $G$ to $S_{G/H}\cong S_n$ by $\varphi(g)=\sigma_g$, where $\sigma_g$ is the permutation given by $\sigma_g(xH)=g\cdot xH=gxH$. Then $\ker\varphi=\cap_{x\in G}xHx^{-1}$ is a normal subgroup* of $G$ and is nontrivial, since:

- $\ker\varphi\neq \{e\}$: otherwise $\varphi$ is injective, which would imply $|G|\leq |S_n|=n!$, contradicting $n!<|G|$;
- $\ker\varphi\neq G$: otherwise $|G|=|\cap_{x\in G}xHx^{-1}|\leq |H|$, which isn't possible.

Example. Suppose $G$ is a group of order $36=2^2 \cdot 3^2$. Let $H$ be a Sylow 3-subgroup of $G$, so that the index of $H$ in $G$ is 4. Let $\varphi:G\to S_4$ be the homomorphism defined above. Then $\ker\varphi= \cap_{x\in G}xHx^{-1}$ is a normal subgroup of $G$. If $\ker\varphi=\{e\}$, then $\varphi$ is injective, which implies $36=|G|\leq |S_4|=24$, which is not possible. So $\ker\varphi\neq\{e\}$. Also, $\ker\varphi\neq G$, otherwise $36=|G|=|\cap_{x\in G}xHx^{-1}|\leq |H|=9$, which is also impossible. Thus $\ker\varphi$ is nontrivial and so $G$ is not simple.

Option 3: For Sylow p-subgroups H and K, show H∩K is normal in G.

One way to do this is to show the normalizer of $H\cap K$ is the entire group $G$; in other words, $g(H\cap K)g^{-1}=H\cap K$ for all $g\in G$.

Example. Let $G$ be a group of order $108=2^2\cdot 3^3$. The number of Sylow 3-subgroups is either 1 or 4. Assume it's 4 (else we're done). So let $H$ and $K$ be distinct Sylow 3-subgroups, each having order 27. Let $N=N_G(H\cap K)$ be the normalizer of $H\cap K$. Our goal is to show $N=G$, so that $H\cap K\triangleleft G$ is a nontrivial normal subgroup, proving that $G$ is not simple. First notice that $$|H\cap K|= \frac{|H||K|}{|HK|}\geq \frac{|H||K|}{|G|}=\frac{27\cdot 27}{108}=6.75.$$ We also need $|H\cap K|$ to be a divisor of $27$, since $H\cap K\subset H$.
This forces $|H\cap K|=9.$ Observe that $H\cap K\triangleleft H$ and $H\cap K\triangleleft K$ since the index of $H\cap K$ in each of $H$ and $K$ is $27/9=3$ and 3 is the smallest prime divisor of $27=|H|,|K|.$ Thus $H,K\subset N$. This implies $HK\subset N$ too.** Finally, since $N\leq G$ is a subgroup we know $|N|$ must divide $108=|G|$. But $HK\subset N$ implies $81=|HK|\leq |N|$ as well, so we must have $|N|=108$. This allows us to conclude $N=G$. Option 4: Find a subgroup of G with index p, where p is the smallest prime divisor of |G|. Anytime $H\leq G$ is a subgroup with $[G:H]=p$ where $p$ is the smallest prime divisor of $|G|$, it follows that $H$ is normal in $G$. (So, for instance, this is how you know all subgroups of index 2 are normal.) Now you may or may not be able to immediately find such a subgroup given the order of $G$, but it's still something to keep in mind while looking to prove your group is not simple. In the previous example, we used this result to conclude that the intersection $H\cap K$ was not only a subgroup of $H$ and $K$, but was in fact normal. This was a key step in showing $H\cap K$ was normal in the whole group $G$. Footnotes: * Let's check that $\ker\varphi$ really is of that form: $\ker\varphi=\{g\in G:\varphi(g)= \text{id}\}$ where $\text{id}$ is the identity permutation in $S_n$. This set is equal to $\{g\in G:gxH=xH \text{ for all $x\in G$ }\}= \{g\in G:x^{-1}gxH=H \text{ for all $x\in G$ }\}$ which is the same as $\{g\in G:g\in xHx^{-1} \text{ for all $x\in G$ }\}$. The latter is precisely $\cap_{x\in G}xHx^{-1}.$ That $\ker\varphi$ is normal in $G$ is clear since $\varphi$ is a homomorphism. ** Proof: Since $H\subset N$ and $K\subset N$, we know $h(H\cap K)h^{-1}=H\cap K$ for all $h\in H$. Likewise $k(H\cap K)k^{-1}=H\cap K$ for all $k\in K$. So for any $hk\in HK$, we have $$hk(H\cap K)(hk)^{-1}=h(k(H\cap K)k^{-1})h^{-1}=h(H\cap K)h^{-1}=H\cap K.$$
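The counting in Option 1 is easy to sanity-check by brute force. Below is a minimal Python sketch (mine, not part of the original post); the helper names are made up, and the candidate list encodes the two Sylow constraints $n_p \equiv 1 \pmod p$ and $n_p \mid m$, where $|G| = p^k m$ with $p \nmid m$.

def prime_divisors(n):
    """Distinct prime divisors of n, by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def sylow_candidates(order, p):
    """Values of n_p allowed by Sylow: n_p | m and n_p == 1 (mod p)."""
    m = order
    while m % p == 0:
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

order = 520  # = 2^3 * 5 * 13
for p in prime_divisors(order):
    print(p, sylow_candidates(order, p))
# 2 [1, 5, 13, 65]
# 5 [1, 26]
# 13 [1, 40]

# If n_5 = 26 and n_13 = 40: distinct Sylow 5- (resp. 13-) subgroups have
# prime order, so they intersect trivially and their non-identity elements
# are disjoint, giving at least
print(26 * (5 - 1) + 40 * (13 - 1))  # 584 > 520, a contradiction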
Vopěnka's principle is a large cardinal axiom at the upper end of the large cardinal hierarchy that is particularly notable for its applications to category theory. In a set theoretic setting, the most common definition is the following: for any language $\mathcal{L}$ and any proper class $C$ of $\mathcal{L}$-structures, there are distinct structures $M, N\in C$ and an elementary embedding $j:M\to N$.

For example, taking $\mathcal{L}$ to be the language with one unary and one binary predicate, we can consider for any ordinal $\eta$ the class of structures $\langle V_{\alpha+\eta},\{\alpha\},\in\rangle$, and conclude from Vopěnka's principle that a cardinal that is at least $\eta$-extendible exists. In fact, if Vopěnka's principle holds then there is a proper class of extendible cardinals; bounding the strength of the axiom from above, we have that if $\kappa$ is almost huge, then $V_\kappa$ satisfies Vopěnka's principle.

Formalisations

As stated above and from the point of view of ZFC, this is actually an axiom schema, as we quantify over proper classes, which from a purely ZFC perspective means definable proper classes. One alternative is to view Vopěnka's principle as an axiom in a class theory, such as von Neumann-Gödel-Bernays. Another is to consider a Vopěnka cardinal, that is, a cardinal $\kappa$ that is inaccessible and such that $V_\kappa$ satisfies Vopěnka's principle when "proper class" is taken to mean "subset of $V_\kappa$ of cardinality $\kappa$". These three alternatives are, in the order listed, strictly increasing in strength [1] (see http://mathoverflow.net/questions/45602/can-vopenkas-principle-be-violated-definably).
Equivalent statements

The schema form of Vopěnka's principle is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals for every $n$; indeed, there is a level-by-level stratification of Vopěnka's principle, with Vopěnka's principle for a $\Sigma_{n+2}$-definable class corresponding to the existence of a $C^{(n)}$-extendible cardinal greater than the ranks of the parameters. [1]

Other points to note

Whilst Vopěnka cardinals are very strong in terms of consistency strength, a Vopěnka cardinal need not even be weakly compact. Indeed, the definition of a Vopěnka cardinal is a $\Pi^1_1$ statement over $V_\kappa$, and $\Pi^1_1$ indescribability is one of the equivalent definitions of weak compactness. Thus, the least weakly compact Vopěnka cardinal must have (many) other Vopěnka cardinals less than it.

References

1. Bagaria, Joan; Casacuberta, Carles; Mathias, A. R. D.; Rosický, Jiří. Definable orthogonality classes in accessible categories are small. Journal of the European Mathematical Society 17(3):549–589. arXiv
Condensed Matter > Disordered Systems and Neural Networks

Title: Chaotic wave packet spreading in two-dimensional disordered nonlinear lattices (Submitted on 20 Aug 2019)

Abstract: We reveal the generic characteristics of wave packet delocalization in two-dimensional nonlinear disordered lattices by performing extensive numerical simulations in two basic disordered models: the Klein-Gordon system and the discrete nonlinear Schrödinger equation. We find that in both models (a) the wave packet's second moment asymptotically evolves as $t^{a_m}$ with $a_m \approx 1/5$ ($1/3$) for the weak (strong) chaos dynamical regime, in agreement with previous theoretical predictions [F10], (b) chaos persists, but its strength decreases in time $t$ since the finite time maximum Lyapunov exponent $\Lambda$ decays as $\Lambda \propto t^{\alpha_{\Lambda}}$, with $\alpha_{\Lambda} \approx -0.37$ ($-0.46$) for the weak (strong) chaos case, and (c) the deviation vector distributions show the wandering of localized chaotic seeds in the lattice's excited part, which induces the wave packet's thermalization. We also propose a dimension-independent scaling between the wave packet's spreading and chaoticity, which allows the prediction of the obtained $\alpha_{\Lambda}$ values.

Submission history: From: Charalampos Skokos [view email] [v1] Tue, 20 Aug 2019 20:20:12 GMT (1127kb,D)
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02). The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...

Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02). Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...

Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08). The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...

Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03). The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...

Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2016-03). Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...

Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07). The multi-strange baryon yields in Pb–Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...

$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03). The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...

Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-09). The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...

Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12). We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...

Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05). Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
The integrability of negative powers of the solution of the Saint Venant problem

2014 (English). In: Annali della Scuola Normale Superiore di Pisa (Classe di Scienze), Serie V, ISSN 0391-173X, E-ISSN 2036-2145, Vol. XIII, no. 2, p. 465–531. Article in journal (Refereed), Published.

Abstract: We initiate the study of the finiteness condition $$\int_\Omega u(x)^{-\beta}\,dx \leq C(\Omega,\beta) < +\infty,$$ where $\Omega\subseteq\mathbb{R}^n$ is an open set and $u$ is the solution of the Saint Venant problem $\Delta u=-1$ in $\Omega$, $u=0$ on $\partial\Omega$. The central issue which we address is that of determining the range of values of the parameter $\beta>0$ for which the aforementioned condition holds, under various hypotheses on the smoothness of $\Omega$ and demands on the nature of the constant $C(\Omega,\beta)$. Classes of domains for which our analysis applies include bounded piecewise $C^1$ domains in $\mathbb{R}^n$, $n\geq 2$, with conical singularities (in particular polygonal domains in the plane), polyhedra in $\mathbb{R}^3$, and bounded domains which are locally of class $C^2$ and which have (finitely many) outwardly pointing cusps. For example, we show that if $u_N$ is the solution of the Saint Venant problem in the regular polygon $\Omega_N$ with $N$ sides circumscribed by the unit disc in the plane, then for each $\beta\in(0,1)$ the following asymptotic formula holds: $$\int_{\Omega_N}u_N(x)^{-\beta}\,dx=\frac{4^\beta\pi}{1-\beta} +\mathcal{O}(N^{\beta-1})\quad\text{as } N\to\infty.$$ One of the original motivations for addressing the aforementioned issues was the study of sublevel set estimates for functions $v$ satisfying $v(0)=0$, $\nabla v(0)=0$ and $\Delta v\geq c>0$.
The equation I am trying to solve is: $$\lim\limits_{k \rightarrow 3} \left( \sum\limits_{n=1}^{n=k} \frac{1}{n^s}+ \frac{1}{k^{s - 1} \cdot (s - 1)}\right)=0 \tag{1}$$ The simplest possible analytic continuation of the Riemann zeta function is: $$\zeta(s)=\lim\limits_{k \rightarrow \infty} \left( \sum\limits_{n=1}^{n=k} \frac{1}{n^s}+ \frac{1}{k^{s - 1} \cdot (s - 1)}\right) \tag{2}$$ $$\mbox{ which appears to be true for }\Re(s)>0$$ So therefore I substituted all $s$ with $x$ except one of them like this: $$\lim\limits_{k \rightarrow 3} \left( \sum\limits_{n=1}^{n=k} \frac{1}{n^x}+ \frac{1}{k^{x - 1} \cdot (s - 1)}\right)=0 \tag{3}$$ Very crude rational approximations of logarithms are: $\log(1) = 0$ $\log(2) \approx 7/10$ $\log(3) \approx 11/10$ (That is the level of precision I could afford computationally in this case.) Notice that:$$\frac{1}{n^x}=\frac{1}{e^{x\log(n)}} \tag{4}$$ and substitute $\log(n)$ with the rational approximations for logarithms above. Solving $(3)$ in Mathematica we can then write: Clear[x, s];Reduce[1/(E^Round[N[Log[1]], 10^-1])^x + 1/(E^Round[N[Log[2]], 10^-1])^x + 1/(E^Round[N[Log[3]], 10^-1])^x + 1/(E^Round[N[Log[3]], 10^-1])^(x - 1)/(s - 1) == 0, x] This gives 11 Root objects subject to conditions. Picking the first Root object that Mathematica gives, we have: x == 10 (2 I \[Pi] C[1] + Log[Root[-1 + E^(11/10) + s + (-1 + s) #1^4 + (-1 + s) #1^11 &, 1]]) Latexifying it does not help much, but the changes I would do are to replace $x$ with $s$ and skip the term $2 i \pi c_1$ since I have understood that $c_1$ is an integer that can be zero. So the equation that needs to be solved is: s == 10 (Log[ Root[-1 + E^(11/10) + s + (-1 + s) #1^4 + (-1 + s) #1^11 &, 1]]) Dividing by 10: s/10 == Log[ Root[-1 + E^(11/10) + s + (-1 + s) #1^4 + (-1 + s) #1^11 &, 1]] Applying the exponential function we would have: Exp[s/10] == Root[-1 + E^(11/10) + s + (-1 + s) #1^4 + (-1 + s) #1^11 &, 1] Now this is not solvable in Mathematica so we need a truncated series expansion of $\exp(s)$ $$\exp(s/10) \approx 1+s/10+\frac{(s/10)^2}{2}+\frac{(s/10)^3}{6}$$ So instead: $$1+\frac{s}{10}+\frac{1}{2} \left(\frac{s}{10}\right)^2+\frac{1}{6} \left(\frac{s}{10}\right)^3=\text{Root}\left[\text{$\#$1}^{11} (s-1)+\text{$\#$1}^4 (s-1)+s+e^{11/10}-1\&,1\right] \tag{5}$$ and this Mathematica can solve: Reduce[(1 + s/10 + (s/10)^2/2 + (s/10)^3/6) == Root[-1 + E^(11/10) + s + (-1 + s) #1^4 + (-1 + s) #1^11 &, 1], s] giving the first Root object starting as: (-1 + Root[-1088391168000000000000000000000000000000000 + 362797056000000000000000000000000000000000 E^(11/10) + 544195584000000000000000000000000000000000 #1 + 295679600640000000000000000000000000000000 #1^2 + ... Is this at all true or is it just overly complicated? I mean we could use the same minimal analytic continuation of the zeta function and the crude rational approximations of the logarithms and only do the series expansion: Clear[x, s];Series[1/(E^Round[N[Log[1]], 10^-1])^s + 1/(E^Round[N[Log[2]], 10^-1])^s + 1/(E^Round[N[Log[3]], 10^-1])^s + 1/(E^Round[N[Log[3]], 10^-1])^(s - 1)/(s - 1), {s, 0, 5}] This then would give: $$\left(3-e^{11/10}\right)+\frac{1}{10} \left(-18+e^{11/10}\right) s+\left(\frac{17}{20}-\frac{101 e^{11/10}}{200}\right) s^2+\left(-\frac{279}{1000}-\frac{1699 e^{11/10}}{6000}\right) s^3+\left(\frac{8521}{120000}-\frac{82601 e^{11/10}}{240000}\right) s^4+\left(-\frac{29643}{2000000}-\frac{3968999 e^{11/10}}{12000000}\right) s^5+O[s]^6$$ and so on...
$x=y$
$x^2=xy$
$x^2-y^2=xy-y^2$
$(x+y)(x-y)=y(x-y)$
$x+y=y$
$1+1=1$

What is the fallacy?

Note by Sharky Kesa, 5 years, 9 months ago.

Comments:

$(x-y)=0$. You took $0/0=1$, which is actually indeterminate.

In the 4th line, you cannot cancel the zero ($x-y=0$) against the zero on the other side.

If $x=y$ then $x-y=0$, so in the 4th step you are factoring as $(x+y)(0)=y(0)$, and in the 5th step you are dividing both sides by $0$ and taking the result to be $1$, which is not allowed.

That's not possible: $x=y$ implies $x-y=0$. In step 4 we have divided both sides by 0, so it's $0/0$, which is undefined.

It's wrong: when $x+y=y$, $x=y-y$, so $x=0$.

Watch the fourth step: you ignored $x-y$. You are handling it like a nonzero real number, but if you move both $(x-y)$ factors to one side you will see that you have made the undefined form $0/0$.

$x=y$ means $x-y=0$; in step 4 you are dividing the expression by zero, which is not possible. This is the fallacy.

$x-y$ is 0; you cannot cancel a 0 term.

If $x=y$, then at the last step, why should $x+y=y$ hold?

You have just factored 0; there are many solutions, just like $9\times 0=0$.

As per your second-to-last equation, $x$ would be a multiple of $2y$, so $x$ could never equal $y$.

Since $x=y$ as stated in the problem, dividing both sides by $(x-y)$ results in an undefined equation, as $(x-y)=0$: $(x+y)/0$ is undefined, so the equality does not hold. It is equivalent to saying that $1/0=2/0$, clearly undefined.

This should have been framed in the form of a question.

We can cancel only a nonzero common term on both sides of an equation.

You cancelled $x-y$ against $x-y$; what if its value is 0? Cancelling $0/0$ does not give 1. By the way, the result is $x=y=0$.

Dividing by zero.

From $(x+y)(x-y)=y(x-y)$ to $x+y=y$ you divided each side by $(x-y)$; since you cannot divide by 0, this only applies if $x-y\neq 0$, and since you set $x=y$, $x-y$ does equal 0, which makes the step invalid.

@Sharky Kesa Hi, this note is similar to my question 'Is it True?'; check out https://brilliant.org/community-problem/is-it-true/?group=enKxDb2d6cg4&ref_id=201435 -- Reply: Well, technically, the problem is similar to my note. I posted this over 5 months ago; Eddie reshared it about 3 weeks ago and it became a hit. Before that, it wasn't.

If $x=y$ then $(x-y)=0$, and hence it cannot be divided out on either side; cancellation of $(x-y)$ on both sides is a mistake.

Since $x-y=0$, dividing by it is undefined (and 0 by 0 is indeterminate).

The equation $(x+y)(x-y)=y(x-y)$ has two solutions: the first is $(x+y)=y$, and the second is $(x-y)=0$, which gives $x=y$, the right solution.

Division by zero: since $x=y$, $x-y=0$, and we cannot use the cancellation law of multiplication for a factor of zero.

$x+y=y$ is the fallacy.

Take note, from $x=y$: the false statement is the division of both sides by the difference of $x$ and $y$, which is indeterminate with denominator $x-y=0$.

$x-y=0$, and a nonzero number divided by zero is infinite, so we can't cancel $x-y$ from both sides.

When you cancel $x-y$ on the LHS against the RHS, it directly means $x\neq y$, but then you put $x=y$ in the next step!

$x=y$, so $x-y=0$; thus $(x+y)(x-y)/(x-y)$ is $(x+y)(x-y)/0$, which does not exist.

$x-y=0$. Substitute numbers from the beginning and you find the fault. :P

In the 4th line, do $x^2+y^2-2xy=xy-y^2$ instead; then $x^2+2y^2-3xy=0$, i.e. $x^2-3xy+2y^2=0$, so $x^2-xy-2xy+2y^2=0$, $x(x-y)-2y(x-y)=0$, $(x-2y)(x-y)=0$, giving $x=2y$ or $x=y$. Hence $x=y$, as required.

In the 4th step, $(x-y)$ is on both sides; dividing $(x-y)$ by $(x-y)$ gives $0/0$, the indeterminate form. That is the fallacy.

Zero by zero is an indeterminate form; you cannot cancel zero against zero. $(x-y)=0$.

$x-y=0$, and you cancelled 0 on both sides ($0/0$ is not 1, but undefined).

$x=y$, so $x-y=0$. In the fourth step you divided by $(x-y)$, i.e. by 0; the step should read $0=0$, not $1+1=1$. :)

$x^2=y^2$ does not imply $x=y$.

$(x+y)(x-y)=y(x-y)$ only proves that $0=0$; $x-y$ cannot be cancelled as we usually do, since $(x-y)/(x-y)$ is $0/0$ here. The math error is in the 5th step: $x+y\neq y$.

The element zero does not participate in the multiplication operation.

Interesting! I can't find it.

In the 3rd step there is $x^2-y^2$ on the LHS, and in the next step it is written as $(x+y)(x-y)$, but this factor cannot be cancelled for equal numbers, which is the condition given in the 1st step.

You just cancelled two zeroes, respectively from the LHS and the RHS, in step 4. This can't be done: $x-y=0$, so it's $0/0$.

At the fifth step we arrive at the result $x+y=y$. Let us think: at what values of $x$ and $y$ can this be true? Only when $x=y=0$; this is the only solution to $x+y=y$ with $x=y$. If this solution holds, we can never get the result $1+1=1$. This is the fallacy.

$(x+y)(x-y)=y(x-y)$ can yield $x+y=y$ only when $x$ is not equal to $y$.

You are a masterpiece, as you can divide 0 by 0.

In step 4, $(x+y)(x-y)=y(x-y)$, you cancelled $x-y$ from both sides. But according to the assumption, $x=y$, so $x-y=0$. Also, $0/0$ is indeterminate, not 1, so that step is wrong.

In step 4, you divided both sides by $(x-y)$ to arrive at $x+y=y$; meaning, there is a division by zero.

Hi Kesa, I'm also confused by this riddle; there is an alternative proof as well.

You are assuming $(x-y)=0$, which is not the case!! Here one can't cancel $x-y$ from both sides. -- Reply: No, we are not assuming; if $x=y$, then obviously $x-y=0$. -- Reply: Not sure whether $x-y=0$ or not!

There is more than one problem in your derivation. First, when you divide both sides by $(x-y)$, the rule requires $x-y\neq 0$, but $x=y$, so $x-y=0$ and you can't divide. Second, in the line before the last, $x+y=y$ gives $x=y-y$, i.e. $x=0$.

$x-y$ can be cancelled only when $x-y$ is not equal to zero, i.e. when $x$ is not equal to $y$.

Cancelling out the term $(x-y)$ means cancelling zero; if you remove $(x-y)$ on both sides, the condition $x\neq y$ must hold, so it is wrong.

Actually, we have a fallacy in the third step itself: there we subtract $y^2$ from both sides, getting $x^2-y^2$ on the left-hand side. But $x^2-y^2=0$, since $x=y$ gives $x^2=y^2$.

It's false: if $x=y$, then $x-y=0$, therefore you can't use $x-y$ to divide your algebraic expressions.

We have seen, and it is easily observed, that the note is wrong (I mean to say $1+1$ is not equal to $1$). The point is that when we solve an equation we usually don't know the value of the variable or variables. But in this case we have already set $x=y$, so we know that $x-y=0$, and we can't just cancel $x-y$ on both sides. If we didn't know the relation between $x$ and $y$, we could cancel $x-y$, since its value might be anything nonzero; but we can't assume that here. Handling it correctly, the answer comes out to $x=0$, and if we substitute that, the equation holds. So $1+1$ is not equal to $1$.

What if I say $1=1$: $25-24=41-40$; $(3\times3)+(4\times4)-(2\times3\times4)=(5\times5)+(4\times4)-(2\times5\times4)$; $(3-4)^2=(5-4)^2$; $3-4=5-4$ (taking roots of both sides); $-1=1$. Sir, you can't just discard what is supposed to be one of the solutions and then say the algebra is wrong.

The error is in line 4: $x-y=0$, and it is incorrect to divide by it.

0 by 0 is not 1, my friend.

Similarly one could write $(2-1)^2=2^2-2\times2\times1+1^2=1^2-2\times1\times2+2^2=(1-2)^2$, i.e. $(2-1)^2=(1-2)^2$, and "conclude" $2-1=1-2$, hence $2=1$.

Mr. Sharky, you can divide an equation only by a nonzero number; here, when you assume $x=y$, $x-y$ becomes zero, and this is the reason you get $2=1$.

You can't cancel 0 from both sides in step 4! $x-y=0$!

This one's good: $-1=-1$; $\frac{-1}{1}=\frac{1}{-1}$; $\sqrt{\frac{-1}{1}}=\sqrt{\frac{1}{-1}}$; $\frac{\sqrt{-1}}{\sqrt{1}}=\frac{\sqrt{1}}{\sqrt{-1}}$; $\frac{i}{1}=\frac{1}{i}$; $i^2=1$; $-1=1$. $\square$ -- Reply: The quotient rule for radicals, $\sqrt{\frac{x}{y}}=\frac{\sqrt{x}}{\sqrt{y}}$, only applies when $x$ and $y$ are non-negative and $y\neq 0$, so the third-to-fourth line of the solution is faulty. -- Reply: I completely agree with you. -- Reply: You found it. :D -- Reply: You have to allow both + and - when removing the root sign!!

Yes, same concept: if $x^2=y^2$, then $x$ is not necessarily equal to $y$; it can be $-y$ also.

$x=y$, i.e. $x-y=0$. But we cancelled $x-y$ from the LHS and RHS in the fourth step, which means you cancelled 0!!! I nearly had a heart attack!!!

Why can't we divide zero by zero? Because $0/0$ could assume ANY value (0, $\pi$, 13, 23523523523, $e$, you get the point, where y, o, u, g, e, t, t, h, e, p, o, i, n and t are real numbers, etc.).

I have an objection! You said that $x=y$, hence $x-y=0$; then why are you cancelling $x-y$ on both sides?

Here $(x-y)=(1-1)=0$, so we can't divide by it on either side; the 4th step is wrong because of division by zero.

If $x=y$, then $x-y=0$. Hence we cannot cancel $(x-y)$ in the 5th step, because a number divided by 0 is not defined in mathematics. So even if you "prove" $1+1=1$ (which is not likely to happen), it won't hold in real life.

How about this? $-20=-20$; $4^2-9\times4=5^2-9\times5$; $4^2-9\times4+\frac{81}{4}=5^2-9\times5+\frac{81}{4}$; $(4-\frac{9}{2})^2=(5-\frac{9}{2})^2$; $4-\frac{9}{2}=5-\frac{9}{2}$; $4=5$; $2+2=5$. -- Reply: I think the square-root step has the defect: since $\sqrt{x^2}=|x|$, you can't pass from $(4-\frac{9}{2})^2=(5-\frac{9}{2})^2$ to $4-\frac{9}{2}=5-\frac{9}{2}$; it must be written as $(\frac{9}{2}-4)^2=(5-\frac{9}{2})^2$, i.e. $\frac{9}{2}-4=5-\frac{9}{2}$, which gives $9=9$. -- Reply: Really, $2+2=5$; I checked it using MS Excel: pic.twitter.com/iS9BNigN3p ;) -- Reply: $4-\frac{9}{2}$ is obviously a negative number (it equals $-0.5$); the square root of a negative number does not exist (it is imaginary only), so you cannot take the square root of both sides :)

If $x^2=y^2$, it doesn't necessarily mean that $x=y$; it means $x=\pm y$, as $y$ can be negative also. But in the first step it is given that $x=y$, which implies both variables have the same sign, unless both are zero. -- @Adarsh Kumar: 16 is just a random example to show that the square root of any real number can be both positive and negative, unless specified otherwise that $x>0$ or $x<0$. -- @Adarsh Kumar: No, you are wrong!!! What do you think $\sqrt{16}$ is? Of course it is 4, but why can't it be $-4$? -- @Kushagra Sahni: I get that $x^2=y^2$ can have two outcomes, but what I am trying to tell you is that $x$ and $y$ have the same signs, and this can be seen in the 1st step of the fallacy.

When you take the square root of both sides, you have to add a plus-or-minus sign and remove extraneous solutions.

If we take a square root there are two answers, one positive and one negative, of the same magnitude, so we have to consider both cases.

In step 4 you cancelled $x-y$ from both sides, and $x-y$ is 0, so you actually divided by 0, which is not defined; you can't proceed to step 5. Good if you know this: it means you are good at maths, because you know its fundamentals. These kinds of questions strengthen your concepts in mathematics. I would be obliged if you posted more questions of this kind. I love maths and want to strengthen each of my concepts :)

Zero cannot be divided; rather, zero cannot be divided by.

It's good to see this; here is what your proof says in terms of $1$: $1=1$; $1^2=1\times1$; $1^2-1^2=1\times1-1^2$; $(1+1)(1-1)=1(1-1)$; and you cancelled $(1-1)$, i.e. 0, from both sides, which means $0/0$, which doesn't exist. Please correct me if I am wrong.

You got $x+y=y$. Doesn't this imply $x=0$, and consequently $y=0$? -- Reply: $y$ can be any value in this case, since only $x$ has to be 0 (according to $x+y=y$). -- Reply: Yes, but I meant for all values of $x$ and $y$. Hmm...

A very good one, this, but you know the problem with it. -- Reply: Yes I do: $x=y$ means $x-y=0$, and in step 4 you cancelled 0 from both sides!! The world just exploded!!! -- Reply: What if it has? What if this is the afterlife?

This talk is wrong, because one cannot divide zero by zero; this is clowning. -- Reply: Many remarkable things are discovered as a result of clowning :) -- Reply: Is clowning the same as trolling, just more mathy? -- @Alexander Sludds: I'm curious too, what's clowning?
Background: Let's say you own a convenience store. Each day $N$ customers will visit you, where $N$ is a random variable taking values between 1 and 7 billion (the human population). $X_i$ is the amount that the $i$-th customer will spend. All the $X_i$'s are iid and also independent of $N$.

Let's say you want to know the expected value of the total revenue made in a day. I know the following is true (Wald's equation): $$E \bigg[ \sum_{i=1}^{N} X_i \bigg] = E[N]E[X].$$

My question: Can you explain why it is illegal to do $$E \bigg[ \sum_{i=1}^{N} X_i \bigg] = \sum_{i=1}^{N} E[X_i] = NE[X]?$$ I know this is illegal because $NE[X] \neq E[N]E[X]$. I know linearity of expectation doesn't always hold for infinite sums, but in this situation $N$ is capped by the human population. If you could explain in lay terms I would greatly appreciate it (versus going into sigma algebras and IUT). Thanks.
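Not an answer to the "why", but the distinction is easy to see numerically. Here is a minimal Python simulation (my own sketch; the distributions for $N$ and the $X_i$ are made up) comparing the simulated daily revenue with $E[N]E[X]$. Note that $N E[X]$ is not even a fixed number, since $N$ is random; that is exactly what the step $\sum_{i=1}^{N} E[X_i] = N E[X]$ sweeps under the rug, and conditioning on $N$ (the tower rule) is what legitimately removes the random upper limit.

import random

random.seed(0)

def day_revenue():
    # Toy model (my choice, not from the question):
    # N uniform on {1, ..., 100} customers, X_i exponential with E[X] = 10.
    n = random.randint(1, 100)
    return sum(random.expovariate(1 / 10) for _ in range(n))

trials = 50_000
avg = sum(day_revenue() for _ in range(trials)) / trials

EN, EX = (1 + 100) / 2, 10.0
print(f"simulated E[sum X_i] = {avg:.1f}")      # close to 505
print(f"E[N] * E[X]          = {EN * EX:.1f}")  # 505.0
# N * E[X] is not a number at all: N is random. Conditioning on N removes
# the random upper limit:  E[sum] = E[ E[sum | N] ] = E[N * E[X]] = E[N] E[X].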
Claim: if $a, b, m_1, m_2 \in \mathbb{Z}$ with $a\equiv b\pmod {m_1}$ and $a\equiv b\pmod {m_2}$, then there exists $k \in \mathbb{Z}$ with $a=b+\frac{m_1m_2}{\gcd(m_1,m_2)}k$, i.e. $a\equiv b\pmod L$, where $L$ is the l.c.m. of $m_1$ and $m_2$.

I am trying to derive this, starting with the hypotheses, with no success:

$\exists l \in \mathbb{Z}: a-b = l\cdot m_1 \implies a = b + l\cdot m_1$
$\exists m \in \mathbb{Z}: a-b = m\cdot m_2 \implies a = b + m\cdot m_2$

Equating both: $b + l\cdot m_1 = b + m\cdot m_2$. I am unable to get the proof started from here.

Based on a comment by @saulspatz, here is a new attempt: there are $r_1, r_2 \in \mathbb{Z}$ such that (i) $m_1 = r_1\gcd(m_1, m_2)$ and (ii) $m_2 = r_2\gcd(m_1, m_2)$. Multiplying (i) by (ii) gives $m_1m_2 = r_1r_2\gcd(m_1, m_2)^2$. Taking $r_1r_2={1\over k}$ for a suitable integer $k$ is not possible, as $r_1$ and $r_2$ are themselves integers. So, trying something anyway, let $r_1r_2=k'$; then $m_1m_2 = k'\gcd(m_1, m_2)^2 \implies \gcd(m_1, m_2) = {m_1m_2 \over k'\cdot \gcd(m_1, m_2)}$. But $k'$ is an integer in the denominator, whereas the desired final form has the integer $k$ in the numerator.
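For what it's worth, the statement can be checked exhaustively for small moduli before attempting the algebra; here is a small Python sketch (mine, not part of the question). The standard route to the proof is the observation that $a-b$ is a common multiple of $m_1$ and $m_2$, and every common multiple of two integers is divisible by their least common multiple.

from math import gcd

def lcm(m1, m2):
    return m1 * m2 // gcd(m1, m2)

for m1 in range(1, 13):
    for m2 in range(1, 13):
        L = lcm(m1, m2)
        for d in range(-200, 201):   # d plays the role of a - b
            if d % m1 == 0 and d % m2 == 0:
                assert d % L == 0    # i.e. a == b (mod lcm(m1, m2))
print("checked all m1, m2 <= 12 and |a - b| <= 200")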
Hi: I've read this post: Uniform Circular Motion w/ Tension and Friction, and it helped me a lot. I have a similar problem:

At one end of a rope ($R = 1$ m) is tied a mass ($m = 3$ kg), and the other end is attached to a spherical joint located at the center of an entirely horizontal turntable. The mass rests on the turntable and both are initially at rest ($V = 0$ m/s, $\omega = 0$ rad/s). When the turntable begins to rotate, the mass at first has no velocity relative to the turntable, rotating with it at an unknown speed, until the acceleration increases and the mass begins to have an angular velocity different from that of the turntable. The coefficient of kinetic friction between the mass and the turntable is 0.1, and the maximum tension the rope can withstand is $100$ N. Calculate the time required for the mass to reach enough speed to break the rope.

I analyzed the problem, and this is what I think: the turntable and the mass rotate together (there is static friction) at an unknown speed. Then the mass moves relative to the turntable, due to the acceleration, with kinetic friction acting between the turntable and the mass. Finally the mass reaches a speed that makes the rope break, and the mass leaves the turntable.

I think that when the tension is at its maximum, I can describe it with: $$T = m\frac{V^2}{R}$$ Based on this formula, I can calculate the speed of the mass at that instant from $$V = \sqrt{\frac{TR}{m}}$$ and find the time at which the mass exits the turntable with $$t = \frac{2\pi R}{V}$$ (I'm not sure about this last one.)

I know the kinetic friction acts in the direction opposite to the tangential acceleration, perpendicular to the tension and the centripetal acceleration. The tangential acceleration and the angular acceleration ($\alpha$) are unknown, and either of them could help solve the problem: how long does the mass rotate until the rope breaks?
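Not a solution, but here is a quick numeric evaluation of the formulas in the question (a Python sketch of my own). Note that $t = 2\pi R/V$ gives the period of one revolution at the breaking speed, not the spin-up time, which is why that step looks doubtful. The last line is a rough alternative estimate under an extra assumption of mine (not the asker's): that kinetic friction alone supplies the tangential acceleration $a_t = \mu_k g$.

import math

m, R = 3.0, 1.0        # mass (kg), rope length (m)
T_max = 100.0          # maximum rope tension (N)
mu_k, g = 0.1, 9.81    # kinetic friction coefficient, gravity (m/s^2)

# Speed at which the centripetal-force demand reaches the rope's limit:
V = math.sqrt(T_max * R / m)
print(f"breaking speed V = {V:.3f} m/s")                 # ~ 5.774 m/s

# What t = 2*pi*R / V actually computes: one revolution period at speed V.
print(f"period at V      = {2 * math.pi * R / V:.3f} s") # ~ 1.088 s

# If kinetic friction alone accelerated the mass tangentially (assumption),
# the spin-up time to reach V would be:
print(f"V / (mu_k * g)   = {V / (mu_k * g):.2f} s")      # ~ 5.89 s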
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula

[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]

where [math]\Phi[/math] is the super-exponentially decaying function

[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]

It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as

[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]

or

[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]

In the notation of [KKL2009], one has

[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]

De Bruijn [B1950] and Newman [N1976] showed that there exists a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).

The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:

- Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].
- Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-zero whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].
- Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].
[math]t=0[/math]

When [math]t=0[/math], one has

[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]

where

[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math]

is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and the Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives

[math]\displaystyle \left|N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{7}{8})\right| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]

for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T. The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.

[math]t\gt0[/math]

For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis, all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].

It is known that [math]\xi[/math] is an entire function of order one ([T1986, Theorem 2.12]). Hence by the fundamental solution for the heat equation, the [math]H_t[/math] are also entire functions of order one for any [math]t[/math]. Because [math]\Phi[/math] is positive, [math]H_t(iy)[/math] is positive for any [math]y[/math], and hence there are no zeroes on the imaginary axis.

Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-increasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have

[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]

for any [math]t[/math].

The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE

[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]

where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as

[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]
[math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]

where the dependence on [math]t[/math] has been omitted for brevity.

In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic

[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]

as [math]T \to \infty[/math] (caution: the error term here is not uniform in t).
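Returning to the ODE for the zeroes: for an isolated conjugate pair [math]x \pm iy[/math] the system reduces to [math]\dot y = -1/y[/math], so [math]y(t)^2 = y(0)^2 - 2t[/math] and the pair collides onto the real axis at time [math]y(0)^2/2[/math], in which case the quantity [math]\frac{1}{2}\sigma_{max}(t)^2 + t[/math] is exactly constant. The following toy Python integrator (mine, not part of this wiki; forward Euler, stopped just before the collision, where the step size breaks down) illustrates this for six zeroes placed symmetrically about both axes.

import numpy as np

# Toy zero configuration, symmetric about both axes like zeroes of H_t:
z = np.array([-2, 2, 1 + 0.5j, 1 - 0.5j, -1 + 0.5j, -1 - 0.5j], dtype=complex)

def velocity(z):
    """dz_j/dt = -sum_{k != j} 2 / (z_k - z_j), as in the text above."""
    v = np.zeros_like(z)
    for j in range(len(z)):
        for k in range(len(z)):
            if k != j:
                v[j] -= 2.0 / (z[k] - z[j])
    return v

t, dt = 0.0, 1e-4
while np.max(np.abs(z.imag)) > 0.02 and t < 1.0:  # stop just before merging
    z += dt * velocity(z)
    t += dt
print(f"pairs reach the real axis near t = {t:.3f}")  # roughly y0^2/2 = 0.125
print(np.round(z, 3))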
Also, the zeroes behave like an arithmetic progression in the sense that

[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math]

as [math]k \to +\infty[/math].

Threads

- Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
- Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
- Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.
- Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018.
- Polymath 15, fourth thread: closing in on the test problem, Terence Tao, Feb 24, 2018.
- Polymath15, fifth thread: finishing off the test problem?, Terence Tao, Mar 2, 2018.
- Polymath15, sixth thread: the test problem and beyond, Terence Tao, Mar 18, 2018.
- Polymath15, seventh thread: going below 0.48, Terence Tao, Mar 28, 2018.

Other blog posts and online discussion

- Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
- The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
- Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
- A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.

Code and data

Writeup

Test problem

Zero-free regions

See Zero-free regions.

Wikipedia and other references

Bibliography

[A2011] J. Arias de Reyna, High-precision computation of Riemann's zeta function by the Riemann-Siegel asymptotic formula, I, Mathematics of Computation, Volume 80, Number 274, April 2011, Pages 995–1009.
[B1994] W. G. C. Boyd, Gamma function asymptotics by an extension of the method of steepest descents, Proceedings: Mathematical and Physical Sciences, Vol. 447, No. 1931 (Dec. 8, 1994), pp. 609–630.
[B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226.
[CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
[G2004] X. Gourdon, The 10^13 first zeros of the Riemann zeta function, and zeros computation at very large height, 2004.
[KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics 222 (2009), 281–306.
[N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
[P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449–2467.
[P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992.
[RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914
[T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986.
Input inductor peak current

Q1) Can I use this formula for calculating the input ripple current?

Q2) What will be the case if the input inductor is replaced with a ferrite bead? I mean, how can I calculate the input ripple current then?

If there were no components at the input, the current drawn by the buck converter would have been a triangular or trapezoidal one (depending on its configuration), and the formula for it would have been the one you gave (ideally). But the input is now a lowpass LC filter, of characteristic frequency \$\frac{1}{2\pi\sqrt{LC}}\$. Whether it's an ideal filter or not, the transfer function is known (simple LC lowpass, with or without R). Since the input current can also be deduced (you know the topology and the values in it), you can also deduce the harmonic content and apply it to the transfer function.

For example, here's a simple buck example (the IC in your example works in the MHz range; for the sake of time, I chose a smaller switching frequency): L = 22 µH, C = 22 µF, R_L = 3.3 Ω, so I_OUT = 1 A.

$$\Delta_{I_L}=\frac{(5-3.3)\times 0.666}{22\times 10^{-6}\times 100\times 10^{3}}=0.515$$
$$I_{MAX}=I_{OUT}+\frac{\Delta_{I_L}}{2}=1.257$$
$$I_{MIN}=I_{OUT}-\frac{\Delta_{I_L}}{2}=0.743$$

Since the minimum current is greater than zero, the input current will be trapezoidal, with the calculated values. And, since the input filter is an ideal one, its transfer function is:

$$H(s)=\frac{\frac{1}{LC}}{s^2+\frac{1}{LC}}$$
$$f_p=\frac{1}{2\pi\sqrt{10^{-6}\times 10\times 10^{-6}}}=50.33\text{ kHz}$$

The switching frequency is 100 kHz and the filter has a -40 dB/dec slope, so the harmonic content of the input current can be found; the filtered input current's FFT, and its time response, is the sum of all the filtered harmonics. If there were no input filter, there would not be any filtering, so the current would be unchanged. Also, this type of current can be analytically expressed as a Fourier series, and, given the known input filter's transfer function, the attenuation of each harmonic can be determined and then mathematically deduced, but it's time-consuming, and that is one of the reasons FFTs and simulators exist.

This is in addition to other answers. This can be modelled as an LCLC filter with the 2nd L switched with a square pulse and a resistive load. But in reality, each reactive part will have an ESR value that affects power dissipation and ripple voltage and current. The primary ripple current in L will be approximately the same as the secondary L current, except that the rise time of the primary current will depend on that part's L/R = T ratio for conducted noise back to the source, which may or may not be relevant. The power out may be viewed from an energy conversion and loss perspective by looking at the DCR of each coil and the ESR of each cap, with an output of 3.3 V @ 1 A, or 3.3 W into 3.3 Ω. The impedance of L2, 2.2 µH at 2.25 MHz, is around 33 Ω, or 10 times the minimum load R.
The impedance of L1, 1 µH, is about 15 Ω, so a ferrite bead rated at 10 Ω at 1 MHz may be considered similar but not exactly the same, depending on rated current, SRF, saturation level, etc., as it will draw high currents charging up C1. C1 must also have ultra-low ESR due to the high edge currents, to minimize power dissipation in a tiny part, and its ESR must be lower than that of the big FET switch, which also has low RdsOn.
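To make the ripple arithmetic in the first answer easy to re-run, here is a short Python sketch (mine; the component values are the ones quoted above, and the 0.666 duty-cycle figure is the one the answer uses, roughly Vout/Vin for an ideal buck in continuous conduction).

import math

# Buck example from the answer: Vin = 5 V, Vout = 3.3 V, fsw = 100 kHz
Vin, Vout, fsw = 5.0, 3.3, 100e3
L_out, R_load = 22e-6, 3.3          # output inductor (H), load (ohm)
D = 0.666                           # duty cycle used in the answer

I_out = Vout / R_load               # 1 A
dI = (Vin - Vout) * D / (L_out * fsw)
print(f"ripple dI = {dI:.3f} A")            # ~ 0.515 A
print(f"I_max = {I_out + dI/2:.3f} A")      # ~ 1.257 A
print(f"I_min = {I_out - dI/2:.3f} A")      # ~ 0.743 A

# Input LC filter corner (L = 1 uH, C = 10 uF, as in the answer):
L_in, C_in = 1e-6, 10e-6
f_p = 1 / (2 * math.pi * math.sqrt(L_in * C_in))
print(f"filter corner f_p = {f_p/1e3:.2f} kHz")   # ~ 50.33 kHz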
Beside the wonderful examples above, there should also be counterexamples, where visually intuitive demonstrations are actually wrong (e.g. the missing square puzzle). Do you know other examples?

The never-ending chocolate bar! If only I knew of this as a child. The trick here is that the left piece that is three bars wide grows at the bottom when it slides up. In reality, what would happen is that there would be a gap at the right between the three-bar piece and the cut. This gap is three bars wide and one-third of a bar tall, explaining how we ended up with an "extra" piece. In a side-by-side comparison, notice how the base of the three-wide bar grows. Here's what it would look like in reality$^1$.

1: Picture source https://www.youtube.com/watch?v=Zx7vUP6f3GM

A bit surprised this hasn't been posted yet. Taken from this page: Visualization can be misleading when working with alternating series. A classical example is \begin{align*} \ln 2=&\frac11-\frac12+\frac13-\frac14+\;\frac15-\;\frac16\;+\ldots,\\ \frac{\ln 2}{2}=&\frac12-\frac14+\frac16-\frac18+\frac1{10}-\frac1{12}+\ldots \end{align*} Adding the two series, one finds \begin{align*}\frac32\ln 2=&\left(\frac11+\frac13+\frac15+\ldots\right)-2\left(\frac14+\frac18+\frac1{12}+\ldots\right)=\\ =&\frac11-\frac12+\frac13-\frac14+\;\frac15-\;\frac16\;+\ldots=\\ =&\ln2. \end{align*}

Here's how to trick students new to calculus (applicable only if they don't have graphing calculators at hand):

0. Ask them to find the inverse of $x+\sin(x)$, which they will be unable to do. Then,
1. Ask them to draw the graph of $x+\sin(x)$.
2. Ask them to draw the graph of $x-\sin(x)$.
3. Ask them to draw $y=x$ on both graphs.
4. Ask them, "What do you conclude?". They will say that the two functions are inverses of each other, and then get very confused.

Construct a rectangle $ABCD$. Now identify a point $E$ such that $CD = CE$ and the angle $\angle DCE$ is a non-zero angle. Take the perpendicular bisector of $AD$, crossing at $F$, and the perpendicular bisector of $AE$, crossing at $G$. Label the point where the two perpendicular bisectors intersect as $H$ and join this point to $A$, $B$, $C$, $D$, and $E$. Now, $AH=DH$ because $FH$ is a perpendicular bisector; similarly $BH = CH$. $AH=EH$ because $GH$ is a perpendicular bisector, so $DH = EH$. And by construction $BA = CD = CE$. So the triangles $ABH$, $DCH$ and $ECH$ are congruent, and so the angles $\angle ABH$, $\angle DCH$ and $\angle ECH$ are equal. But if the angles $\angle DCH$ and $\angle ECH$ are equal, then the angle $\angle DCE$ must be zero, which is a contradiction.

Proof: Let $O$ be the intersection of the perpendicular bisector of $[BC]$ and the bisector of $\widehat{BAC}$. Then $OB=OC$ and $\widehat{BAO}=\widehat{CAO}$. So the triangles $BOA$ and $COA$ are the same, and $BA=CA$.

Another example, from "Pastiches, paradoxes, sophismes, etc." (solution on page 23): http://www.scribd.com/JJacquelin/documents. A copy of the solution is added below.
The translation of the comment is: Explanation: the points A, B and P are not on a straight line (the area of the triangle ABP is 0.5). The graphical highlight is magnified only on the left side of the figure. I think this could be the goats puzzle (the Monty Hall problem), which is nicely visually represented with simple doors. Three doors, behind 2 are goats, behind 1 is a prize. You choose a door to open to try and get the prize, but before you open it, one of the other doors is opened to reveal a goat. You then have the option of changing your mind. Should you change your decision? From looking at the diagram above, you know for a fact that you have a 1/3 chance of guessing correctly. Next, a door with a goat behind it is opened: A cursory glance suggests that your odds have improved from 1/3 to a 50/50 chance of getting it right. But the truth is different... By calculating all possibilities we see that if you change, you have a higher chance of winning. The easiest way for me to think about it is: if you choose the car first, switching is guaranteed to get a goat; if you choose a goat first, switching is guaranteed to get the car. You're more likely to choose a goat first because there are more goats, so you should always switch. A favorite of mine was always the following: \begin{align*} \require{cancel}\frac{64}{16} = \frac{\cancel{6}4}{1\cancel{6}} = 4 \end{align*} I particularly like this one because of how simple it is and how it gets the right answer, though for the wrong reasons of course. A recent example I found, credited to Martin Gardner, is similar to some of the others posted here but perhaps with a slightly different reason for being wrong, as the diagonal cut really is straight. I found the image at a blog belonging to Greg Ross. Spoilers: the triangles being cut out are not isosceles as you might think, but really have base $1$ and height $1.1$ (as they are clearly similar to the larger triangles). This means that the resulting rectangle is really $11\times 9.9$ and not the reported $11\times 10$. Squaring the circle with Kochanski's approximation. One of my favorites: \begin{align} x&=y\\ x^2&=xy\\ x^2-y^2&=xy-y^2\\ \frac{(x^2-y^2)}{(x-y)}&=\frac{(xy-y^2)}{(x-y)}\\ x+y&=y\\ \end{align} Therefore, $1+1=1$. The error here is in dividing by $x-y$, which is zero. That $\sum_{n=1}^\infty n = -\frac{1}{12}$. http://www.numberphile.com/videos/analytical_continuation1.html The way it is presented in the clip is completely incorrect, and could spark a great discussion as to why. Some students may notice the hand-waving "let's intuitively accept $1 -1 +1 -1 + \cdots = 0.5$". If we accept this assumption (and the operations on divergent sums that are usually not allowed), we can get to the result. A discussion that the seemingly nonsensical result follows directly from a nonsense assumption is useful, and can reinforce why it is important to distinguish between convergent and divergent series; this can be done within the framework of convergent series. A deeper discussion can consider the implications of allowing such a definition for divergent series (i.e. Ramanujan summation) and can lead to a discussion on whether such a definition is useful given that it leads to seemingly nonsensical results. I find this interesting for opening up the idea that mathematics is not set in stone, and it can link to the history of irrational and imaginary numbers (which historically have been considered less-than-rigorous or interesting-but-not-useful).
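To make that last point concrete, here is a small numerical illustration of my own: the partial sums of Grandi's series $1-1+1-1+\cdots$ never converge, and the value $0.5$ only appears under a different notion of summation, such as Cesàro averaging:

```python
# Partial sums of 1 - 1 + 1 - ... oscillate between 1 and 0 forever,
# while their running (Cesaro) average converges to 0.5.
partial, csum = 0, 0.0
for n in range(1, 10001):
    partial += (-1) ** (n + 1)
    csum += partial
print(partial, csum / 10000)  # 0 (or 1 for odd n), ~0.5
```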
\begin{equation} \log6=\log(1+2+3)=\log 1+\log 2+\log 3 \end{equation} Here is one I saw on a whiteboard as a kid... \begin{align*} 1=\sqrt{1}=\sqrt{-1\times-1}=\sqrt{-1}\times\sqrt{-1}=\sqrt{-1}^2=-1 \end{align*} I might be a bit late to the party, but here is one which my maths teacher showed me, and which I find to be a very nice example of why one shouldn't solve an equation by looking at hand-drawn plots, or even computer-generated ones. Consider the following equation: $$\left(\frac{1}{16}\right)^x=\log_{\frac{1}{16}}x$$ At least where I live, it is taught in school how the exponential and logarithmic plots look when the base is between $0$ and $1$, so a student should be able to draw a plot which would look like this: Easy, right? Clearly there is just one solution, lying at the intersection of the graphs with the $x=y$ line (the dashed one; note the plots are each other's reflections in that line). Well, this is clear at least until you try some simple values of $x$. Namely, plugging in $x=\frac{1}{2}$ or $\frac{1}{4}$ gives you two more solutions! So what's going on? In fact, I intentionally put in incorrect plots (you get the picture above if you replace $16$ by $3$). The real plot looks like this: You might disagree, but to me it still seems like a plot with just one intersection point. But, in fact, the part where the two plots meet contains all three points of intersection. Zooming in on the interval with all the solutions lets one barely see what's going on: The oscillations are truly minuscule there. Here is the plot of the difference of the two functions on this interval: Note the scale of the $y$ axis: the differences are on the order of $10^{-3}$. Good luck drawing that by hand! To get a better idea of what's going on with the plots, here they are with $16$ replaced by $50$: Here is a measure-theoretic one. By 'picture', if we take a cover of $A:=[0,1]\cap\mathbb{Q}$ by open intervals, we have an interval around every rational and so we also cover $[0,1]$; the Lebesgue measure of $[0,1]$ is 1, so the measure of $A$ is 1. As a sanity check, the complement of this cover in $[0,1]$ can't contain any intervals, so its measure is surely negligible. This is of course wrong, as the set of all rationals has Lebesgue measure $0$, and sets with no intervals need not have measure $0$: see the fat Cantor set. In addition, if you fix the 'diagonal enumeration' of the rationals and take $\varepsilon$ small enough, the complement of the cover in $[0,1]$ contains $2^{\aleph_0}$ irrationals. I recently learned this from this MSE post. There are two examples on Wikipedia's Missing square puzzle page: Sam Loyd's paradoxical dissection, and Mitsunobu Matsuyama's "Paradox". But I cannot think of something that is not a dissection. This is my favorite.
\begin{align}-20 &= -20\\ 16 - 16 - 20 &= 25 - 25 - 20\\ 16 - 36 &= 25 - 45\\ 16 - 36 + \frac{81}{4} &= 25 - 45 + \frac{81}{4}\\ \left(4 - \frac{9}{2}\right)^2 &= \left(5 - \frac{9}{2}\right)^2\\ 4 - \frac{9}{2} &= 5 - \frac{9}{2}\\ 4 &= 5 \end{align} You can generalize it to get any $a=b$ that you'd like this way: \begin{align}-ab&=-ab\\ a^2 - a^2 - ab &= b^2 - b^2 - ab\\ a^2 - a(a + b) &= b^2 -b(a+b)\\ a^2 - a(a + b) + \left(\frac{a + b}{2}\right)^2 &= b^2 -b(a+b) + \left(\frac{a + b}{2}\right)^2\\ \left(a - \frac{a+b}{2}\right)^2 &= \left(b - \frac{a+b}{2}\right)^2\\ a - \frac{a+b}{2} &= b - \frac{a+b}{2}\\ a &= b\\ \end{align} It's beautiful because visually the "error" is obvious in the line $\left(4 - \frac{9}{2}\right)^2 = \left(5 - \frac{9}{2}\right)^2$, leading the observer to investigate the reverse-FOIL step before it, even though this line is valid. I think part of the problem also stems from the fact that grade school / high school math education for the average person teaches that there is only one "right" way to work problems and that you always simplify, so most people are already confused by the un-simplifying process leading up to this point. I've found that the number of people who can find the error unaided is something less than 1 in 4. Disappointingly, I've had several people tell me the problem stems from the fact that I started with negative numbers. :-( Solution: When working with variables, people often remember that $c^2 = d^2 \implies c = \pm d$, but forget it when working with concrete values, because the tendency to simplify everything leads them to turn squares of negatives into squares of positives before applying the square root. The number of people I've shown this to is a small sample size, but I've found some people can carefully evaluate each line and find the error, and then can't explain it even after they've correctly evaluated $\left(-\frac{1}{2}\right)^2=\left(\frac{1}{2}\right)^2$. To give a contrarian interpretation of the question, I will chime in with Goldbach's comet, which counts the number of ways an integer can be expressed as the sum of two primes: It is mathematically "wrong" because there is no proof that this function doesn't equal zero infinitely often, and it is visually deceptive because it appears to be unbounded with its lower bound increasing at a linear rate. This is essentially the same as the chocolate puzzle. It's easier to see, however, that the total square shrinks. This is a fake visual proof that a sphere has Euclidean geometry. Strangely enough, in a 3-dimensional hyperbolic space, the amount of curve a sphere will have approaches a nonzero amount, and if you have an infinitely large object with exactly the amount of curve a sphere approaches as its size approaches infinity, it will have Euclidean geometry and appear sort of the way that image appears. I don't know about you, but to me it looks like the hexagons are stretched horizontally. If you also see it that way and you trust your eyes, then you could take that as a visual proof that $\arctan\frac{7}{4} < 60^\circ$. If that's how you saw it, then it's an optical illusion because the hexagons are really stretched vertically. Unlike some optical illusions of images that appear different than they are but are still mathematically possible, this is an optical illusion of a mathematically impossible image. The math shows that $\tan 60^\circ = \sqrt{3}$ and $\sqrt{3} < \frac{7}{4}$ because $7^2 = 49$ but $3 \times 4^2 = 48$.
It's just like how it's mathematically impossible for something to not be moving when it is moving, but it's theoretically possible for your eyes to stop sending movement signals to your brain, so that you do not see movement in something that is moving; that would look creepy to those who have not experienced it, because your brain can still tell, by a more complex method than the signals from the eyes, that it actually is moving. To draw a hexagonal grid over a square grid more accurately, only the math, and not your eye signals, can be trusted to help you do it accurately. The math shows that the continued fraction of $\sqrt{3}$ is $[1; 1, 2, 1, 2, 1, 2, 1, \ldots]$, which is less than $\frac{7}{4}$, not more. I do not think this really qualifies as "visually intuitive", but it is definitely funny. They do such a great job at dramatizing these kinds of situations. Who cannot remember an instance in which he has been either a "Billy" or a "Pa' and Ma'"? Maybe more "Pa' and Ma'" instances on my part... ;)
What was the improvement that Maxwell made to the electromagnetic field equations, and why? I understand that he combined the main equations so that you could get a wave equation for the vectors of $\bf E$ and $\bf B$, but is there anything else? $\newcommand{\pdv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\div}[1]{\nabla \cdot #1} \newcommand{\curl}[1]{\nabla \times #1} $ Until Maxwell added some terms, the set of equations consisted of Gauss's law, Ampère's law, Faraday's law, and the fact that there are no magnetic monopoles. The equations then read: $$\div E = \rho / \varepsilon_0$$ $$\div B = 0$$ $$ \curl E = - \pdv{B}{t} $$ $$\curl B = \mu_0 j $$ What he did was to change Ampère's law by adding the so-called displacement current, and the last equation became $$\curl B = \mu_0 \left( j + \varepsilon_0 \pdv{E}{t} \right)$$ The reason he introduced this term was that there was an inconsistency in the equations. I find it easier to use the integral form of Maxwell's equations to show the inconsistency he found. The integral form of Ampère's law reads: $$ \oint_{\partial S} \vec B \cdot \mathrm d\vec{s} = \mu_0 I_{penetrate} \tag{1} $$ where $S$ is an open surface, $\partial S$ is the boundary of the surface, and $I_{penetrate}$ is the current that penetrates this surface. Notice that the surface you attach to your loop is of no importance; that is, you are free to choose the surface with which $I_{penetrate}$ is defined. However, without Maxwell's correction this leads to some complications, as mentioned above. For example, say you want to calculate the magnetic field around a wire which connects a battery to a capacitor. Let $\partial S$ be the boundary over which you integrate. As I mentioned, you are free to choose your open surface. Assume that you choose $S_1$ as indicated in the figure I borrowed from Wikipedia. There is a current, and the right-hand side of equation $(1)$ is clearly not zero. Since you can choose whichever surface you like, assume that you have now chosen the surface $S_2$. Now there is a problem, because no current flows between the capacitor plates; therefore the right-hand side of equation $(1)$ is zero, which is a clear contradiction. He noticed that a changing electric field creates a magnetic field, and he corrected Ampère's equation accordingly. Incidentally, he named the term he added, $\varepsilon_0 \pdv{E}{t}$, the displacement current. As a side note, Maxwell's intention was not to fiddle around with the equations so that they would form a wave equation; as he was working on the equations he had no idea that light was an electromagnetic wave. It is, however, true that he then showed theoretically that light must be an electromagnetic wave, though he was not able to show it experimentally.
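A compact way to see the same inconsistency in differential form (my summary, not part of the answer above): taking the divergence of the uncorrected law and using $\nabla \cdot (\nabla \times \vec B) = 0$ gives

$$0 = \nabla \cdot (\nabla \times \vec B) = \mu_0 \, \nabla \cdot \vec j,$$

which contradicts charge conservation, $\nabla \cdot \vec j = -\partial \rho / \partial t \neq 0$, precisely in situations like the charging capacitor. With the displacement term, the divergence becomes $\mu_0\left(\nabla \cdot \vec j + \varepsilon_0 \, \partial (\nabla \cdot \vec E)/\partial t\right) = \mu_0\left(\nabla \cdot \vec j + \partial \rho/\partial t\right) = 0$, as required.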
Radioactive decay refers to the process in which α-particles and β-particles are emitted by an unstable nucleus (one that contains too many neutrons or protons) of an element in order to form a more stable nucleus of another element. The decaying nucleus is called the 'parent' nucleus; the resulting nucleus is called the 'daughter' nucleus. Decay processes are written in the form of an equation. In the following equations, a parent nuclide 'X' (unstable) changes into a daughter nuclide 'Y' (more stable) with the emission of α-particles, β-particles or γ-rays. Alpha Decay An alpha particle can be represented as $^{4}_{2} He$ or $^{4}_{2} \alpha$. When a nucleus decays by alpha emission, its proton number or atomic number 'Z' decreases by 2 and its mass number or nucleon number 'A' decreases by 4. $^{A}_{Z}X \rightarrow ^{A-4}_{Z-2}Y + ^{4}_{2}He + energy$ Example: $^{226}_{88}Ra \rightarrow ^{222}_{86}Rn + ^{4}_{2}He + energy$ Beta Decay In a nuclear equation, a β-particle is written as $^{0}_{-1} \beta$ or $^{0}_{-1} e$. In beta decay, the nucleon number 'A' of the nucleus remains unchanged but the atomic number 'Z' increases by one. During this process, a neutron changes into a proton, an electron and an antineutrino. The proton number therefore increases, and the new electron is expelled as the β-particle. $^{A}_{Z}X \rightarrow ^{A}_{Z+1}Y + ^{0}_{-1}e + energy$ Example: $^{24}_{11}Na \rightarrow ^{24}_{12}Mg + ^{0}_{-1}\beta + energy$ Gamma Emission The emission of gamma rays has no effect on the nucleon number or proton number of the nucleus. γ-rays are usually emitted at the same time as α-particles and β-particles. With some nuclides, the emission of α-particles and β-particles from a nucleus leaves the protons and neutrons in an excited arrangement with more energy than normal. These protons and neutrons rearrange themselves to become more stable and release the excess energy as a photon of gamma radiation. Background Radiation Naturally occurring radioactivity is called background radiation. Existence: The G-M tube usually detects 20 to 60 counts per minute without a radioactive source. This is due to background radiation. Origin: Background radiation is produced by tiny fragments of radioactive elements present in all rocks and soil, the atmosphere and even in living material itself. Also, the earth is continuously bombarded by high-speed particles from outer space and from the sun (cosmic rays). Significance: Natural radioactive elements produce a radioactive gas, radon, which may accumulate in buildings, so increasing the local background count. Whenever taking readings with a G-M tube, the background count should be established at the start and deducted from subsequent readings to avoid systematic error.
In James and Stein (1961) the authors consider the following loss function for an estimator $\hat{\Sigma}$ of the covariance matrix $\Sigma$ of a multivariate normal distribution: $$L(\hat{\Sigma}) = tr[\hat{\Sigma}\Sigma^{-1}] - \log|\hat{\Sigma}\Sigma^{-1}| - p.$$ This loss function has since been referred to as 'Stein's loss' in several papers on regularized covariance estimation. Is there any intuitive justification for this loss function? I noticed that the loss function resembles the KL divergence between two multivariate normal distributions with the same means and with covariances $\hat{\Sigma}$ and $\Sigma$: $$2KL(N(0,\hat{\Sigma}) \,||\, N(0,\Sigma)) = \log|\hat{\Sigma}^{-1}\Sigma| - C\int_x [x^\top (\hat{\Sigma}^{-1} - \Sigma^{-1})x ]\exp\{-\tfrac{1}{2}x^\top \hat{\Sigma}^{-1} x\}\, dx$$ where $$C = (2\pi)^{-p/2}|\hat{\Sigma}^{-1}|^{1/2},$$ however I am not sure how to simplify the integral.
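One way to finish that computation (my own sketch, not from James and Stein): for $x \sim N(0, \hat\Sigma)$ the Gaussian identity $E[x^\top A x] = \operatorname{tr}(A\hat\Sigma)$ turns the integral into a trace,

$$C\int_x x^\top (\hat\Sigma^{-1} - \Sigma^{-1}) x \, e^{-\frac12 x^\top \hat\Sigma^{-1} x}\, dx = \operatorname{tr}\big((\hat\Sigma^{-1} - \Sigma^{-1})\hat\Sigma\big) = p - \operatorname{tr}(\hat\Sigma\Sigma^{-1}),$$

so that $2KL = \log|\hat\Sigma^{-1}\Sigma| - p + \operatorname{tr}(\hat\Sigma\Sigma^{-1}) = \operatorname{tr}(\hat\Sigma\Sigma^{-1}) - \log|\hat\Sigma\Sigma^{-1}| - p$, which is exactly Stein's loss: the loss is twice the KL divergence between the two zero-mean Gaussians.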
1. Direct-address tables Assume all elements' keys are distinct and drawn from \(U=\{0, 1, \ldots, m-1\}\). Then the direct-address table is \(T[0..m-1]\); that is to say, the key is the index.

DIRECT-ADDRESS-SEARCH(T,k)
  return T[k]

DIRECT-ADDRESS-INSERT(T,x)
  T[x.key] = x

DIRECT-ADDRESS-DELETE(T,x)
  T[x.key] = NIL

2. Hash tables The set \(K\) of keys actually stored is usually small relative to the universe \(U\) of possible keys, so a hash function should be used: $$h: U\rightarrow \left \{0,1,\ldots,m-1 \right \}$$ \(h(k)\) is the hash value of \(k\). Collision: two keys may hash to the same slot. Collision resolution by chaining:

CHAINED-HASH-INSERT(T,x)
  insert x at the head of list T[h(x.key)]

CHAINED-HASH-SEARCH(T,k)
  search for an element with key k in list T[h(k)]

CHAINED-HASH-DELETE(T,x)
  delete x from the list T[h(x.key)]

Load factor: \(\alpha=n/m\). Insert: \(O(1)\). Search: \(O(n)\) in the worst case. Delete: \(O(1)\) (given a pointer to the element, with doubly linked lists). Simple uniform hashing: for \(j=0,1,\ldots,m-1\), denote the length of \(T[j]\) by \(n_j\), so that \(n=n_0+n_1+\ldots+n_{m-1}\). Then \(E[n_j]=\alpha=n/m\). If we assume simple uniform hashing, the average-case time of a search is \(\Theta(1+\alpha)\). 3. Hash Functions A good hash function approximates simple uniform hashing. Assume the keys are natural numbers; if they are not, we interpret them as natural numbers. 3.1 The division method $$h(k)=k\ \mathrm{mod}\ m$$ m: a prime not too close to an exact power of 2 is often a good choice. 3.2 The multiplication method $$h(k)=\left \lfloor m(kA\ \mathrm{mod}\ 1)\right \rfloor$$ m: typically a power of 2. The optimal choice of A is \(A\approx(\sqrt{5}-1)/2\approx 0.618\). 3.3 Universal hashing Select a hash function at random. Universal hash function collection: for each pair of distinct \(k,l \in U\), the number of hash functions \(h \in H\) with \(h(k)=h(l)\) is not larger than \(\left|H\right|/m\). For \(h \in H\), \(E[n_{h(k)}]\) is at most \(1+\alpha\). A universal class of hash functions: $$H_{pm}=\left \{h_{ab}: a \in Z^*_p\ \text{and}\ b \in Z_p\right \}$$ $$h_{ab}(k)=((ak+b)\ \mathrm{mod}\ p)\ \mathrm{mod}\ m$$ 4. Open addressing Open addressing avoids pointers: all the elements are stored inside the hash table itself, so each key is associated with a whole probe sequence of slots rather than a single slot. The hash function becomes a probe sequence: $$\langle h(k,0), h(k,1),\ldots,h(k,m-1)\rangle$$

HASH-INSERT(T,k)
  i = 0
  repeat
    j = h(k,i)
    if T[j] == NIL
      T[j] = k
      return j
    else i = i + 1
  until i == m
  error "hash table overflow"

HASH-SEARCH(T,k)
  i = 0
  repeat
    j = h(k,i)
    if T[j] == k
      return j
    i = i + 1
  until T[j] == NIL or i == m
  return NIL

But deletion is not easy. Uniform hashing: the probe sequence of each key is equally likely to be any of the \(m!\) permutations of \(\langle 0,1,\ldots,m-1\rangle\). Linear probing: $$h(k,i)=(h'(k)+i)\ \mathrm{mod}\ m,\qquad i=0,1,\ldots,m-1$$ It has a problem known as primary clustering, and the average search time increases. Quadratic probing: $$h(k,i)=(h'(k)+c_1i+c_2i^2)\ \mathrm{mod}\ m$$ Secondary clustering: a milder form of clustering. Double hashing: $$h(k,i)=(h_1(k)+ih_2(k))\ \mathrm{mod}\ m$$ With \(\alpha=n/m<1\), the expected number of probes in an unsuccessful search or an insertion is at most \(1/(1-\alpha)\). 5. Perfect Hashing If the keys are static, the worst-case time of searching can be made \(O(1)\), which is known as perfect hashing. Perfect hashing has two levels: the first level is a hash table as in chaining, but each slot points to a secondary hash table instead of a linked list. Reference [1] Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to Algorithms. MIT Press.
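As a concrete illustration of the chaining scheme in section 2, here is a minimal sketch in Python (my own, not from [1]), using the division-method hash:

```python
class ChainedHashTable:
    """Hash table with collision resolution by chaining."""
    def __init__(self, m):
        self.m = m
        self.table = [[] for _ in range(m)]      # T[0..m-1], each slot a chain

    def _h(self, k):
        return k % self.m                        # division method: h(k) = k mod m

    def insert(self, key, value):                # O(1): insert at head of chain
        self.table[self._h(key)].insert(0, (key, value))

    def search(self, key):                       # expected O(1 + alpha)
        for k, v in self.table[self._h(key)]:
            if k == key:
                return v
        return None

    def delete(self, key):                       # O(chain length) with Python lists
        chain = self.table[self._h(key)]
        self.table[self._h(key)] = [(k, v) for k, v in chain if k != key]

t = ChainedHashTable(7)
t.insert(10, 'a'); t.insert(3, 'b')              # 10 mod 7 == 3 mod 7: a collision
print(t.search(10), t.search(3))                 # 'a' 'b'
```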
On infinite-rank singular perturbations of the Schrödinger operator Abstract Schrödinger operators with infinite-rank singular potentials $\sum^\infty_{i,j=1}b_{i,j}(\psi_j,\cdot)\psi_i$ are studied under the condition that the singular elements $\psi_j$ are $\xi_j(t)$-invariant with respect to scaling transformations in ${\mathbb R}^3$. English version (Springer): Ukrainian Mathematical Journal 60 (2008), no. 4, pp. 563-573. Citation Example: Kuzhel' S. A., Vavrykovych L. On infinite-rank singular perturbations of the Schrödinger operator // Ukr. Mat. Zh. - 2008. - 60, № 4. - pp. 487–496.
Let $F$ be a field, and let $A$ be a finite-dimensional associative $F$-algebra with multiplicative identity 1. If $M$ is a finite-dimensional module over $A$, define the character $\chi_M:A\rightarrow F$ by $a\mapsto\mathrm{Trace}(a|_M)$. If $\chi_i$ for $1\leq i\leq n$ are characters of pairwise non-isomorphic finite-dimensional simple $A$-modules $M_i$ for $1\leq i\leq n$, are the $\chi_i$ linearly independent? It is true if $A$ is the group algebra of a finite group and the characteristic of $F$ does not divide the order of the group, or if $A$ is a finite-dimensional semisimple algebra with 1. Is it true for an arbitrary finite-dimensional algebra?
I have a question regarding these strikingly similar problems with seemingly contradicting solutions. This is somewhat long, so prepare yourself. Problem 1 Consider a bag of ten coins, nine of which are fair, while one is weighted with both sides heads. You randomly select a coin and toss it five times. Let $2s$ denote the event of selecting the weighted coin (that is, the 2-sided coin), let $N$ be the event that you select a regular coin, and let $5H$ be the event of getting five heads in a row. What is a) $P(5H | 2s)$ b) $P(5H | N)$ c) $P(5H)$ d) $P(2s | 5H)$ Solution 1 a) Simply 1 b) $\frac{1}{2^5}$ c) $\frac{1}{2^5}\cdot\frac{9}{10}+ \frac{1}{10} = \frac{41}{320}$ d) $P(2s|5H) = \dfrac{P(5H|2s)P(2s)}{P(5H)} = \frac{32}{41}$ From Solution 1, it seems that $P(2s|5H) \neq P(2s)$, that is, the event of getting $5H$ affects the probability of having selected the weighted coin. Here is part of my question: isn't there also some tiny probability of getting $5H$ from picking a normal coin as well? It doesn't make sense to me why the events of picking the coin and getting $5H$ are dependent. Read on to the next question. Problem 2 A diagnostic test for an eye disease is 88% accurate and 2.4% of the population actually has the disease. Let $ED$ be the event of having the eye disease and $p$ be the event of testing positive. Find the probability that a) the patient tests positive b) the patient has the disease and tests positive Solution 2 Here is a tree diagram a) $0.024 \times 0.88 + 0.976 \times 0.12 = 0.02112 + 0.11712 = 0.13824$ b) $P(ED | p) = \dfrac{P(ED \text{ and } p)}{P(p)} = \frac{0.02112}{0.13824} = 0.1528$ From Solution 2, it looks like $P(ED \text{ and } p) = P(ED)P(p)$, which would mean that having the eye disease and testing positive are independent events? After trying out the same formula from Problem 1, it also seems that $$P(ED | p) = \dfrac{P(ED \text{ and } p)}{P(p)} = \dfrac{P(p | ED)P(ED)}{P(p)} = 0.1528$$ Also, when the question asks for "the patient has the disease and tests positive", how do I know that it is $P(ED | p)$ and not $P(p | ED)$? I am very confused in general with this. Could anyone clarify for me? Thanks
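A quick simulation (my own check, not part of the question) confirms Solution 1's answer $P(2s \mid 5H) = 32/41 \approx 0.78$: observing the rare event $5H$ shifts the posterior heavily toward the weighted coin, even though a fair coin can produce $5H$ too, which is exactly why the two events are dependent.

```python
import random

trials = 10**6
five_heads = weighted_and_5h = 0
for _ in range(trials):
    weighted = random.randrange(10) == 0          # 1 weighted coin out of 10
    p_heads = 1.0 if weighted else 0.5
    if all(random.random() < p_heads for _ in range(5)):
        five_heads += 1
        weighted_and_5h += weighted
print(weighted_and_5h / five_heads, 32 / 41)      # both ~0.7805
```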
I just found the following Taylor series expansions around $z=0$ for the following functions: $\displaystyle \frac{1}{z^{2}-5z+6} = \frac{1}{(z-2)(z-3)} = \frac{-1}{(z-2)} + \frac{1}{(z-3)} = \sum_{n=0}^{\infty}\frac{z^{n}}{2^{n+1}} - \sum_{n=0}^{\infty}\frac{z^{n}}{3^{n+1}} = \sum_{n=0}^{\infty}\left( \frac{1}{2^{n+1}}-\frac{1}{3^{n+1}}\right)z^{n}$ $\displaystyle \frac{1}{1-z-z^{2}} = \frac{1}{\left[z-\left(\frac{1-\sqrt{5}}{-2} \right) \right]\left[z-\left(\frac{1+\sqrt{5}}{-2} \right)\right]} = \frac{2\sqrt{5}}{5(1-\sqrt{5})}\frac{1}{1-\left(\frac{-2}{1-\sqrt{5}} \right)z} - \frac{2\sqrt{5}}{5(1+\sqrt{5})}\frac{1}{1-\left(\frac{-2}{1+\sqrt{5}} \right)z} = \frac{\sqrt{5}}{5} \sum_{n=0}^{\infty}\left(\frac{(-1)^{n}2^{n+1}}{(1-\sqrt{5})^{n+1}} - \frac{(-1)^{n}2^{n+1}}{(1+\sqrt{5})^{n+1}} \right)z^{n}$ I wanted to confirm that the radius of convergence for the first series is $|z|<2$ and that the radius of convergence for the second series is $|z|<\frac{-1+\sqrt{5}}{2}$. I know that for a series $\sum_{n=1}^{\infty}a_{n}z^{n}$ with radius of convergence $R_{1}$ and $\sum_{n=1}^{\infty}b_{n}z^{n}$ with radius of convergence $R_{2}$, the radius of convergence of the series $\sum_{n=1}^{\infty}(a_{n}+b_{n})z^{n}$ is some $R$ where $R \geq \min(R_{1}, R_{2})$. And it can even be the case that if both of the series have finite $R_{1}$ and $R_{2}$, that $R$ can be infinite! So, I wanted to make sure this was not the case here, and that I have the correct radii of convergence for both. If not, how do I go about finding them (preferably in the least icky way possible)? Thank you.
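For what it's worth, your radii look right: the singularities nearest the origin are $z=2$ for the first function and $z=\frac{-1+\sqrt{5}}{2}\approx 0.618$ for the second, and no cancellation enlarges the radius here. A quick numerical cross-check via the ratio test (my own sketch; the second function's coefficients are the Fibonacci numbers):

```python
import numpy as np

# Ratio test: R = lim |a_n / a_{n+1}| when the limit exists.
a = [1/2**(n+1) - 1/3**(n+1) for n in range(60)]
print(a[-2] / a[-1])                        # -> 2.0

b = [1, 1]                                  # coefficients of 1/(1 - z - z^2)
for _ in range(60):
    b.append(b[-1] + b[-2])                 # Fibonacci recurrence
print(b[-2] / b[-1], (np.sqrt(5) - 1) / 2)  # -> 0.61803..., 0.61803...
```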
In classification problems, the usual negative log-likelihood loss function $L(\theta)=\sum_{i=1}^N -\log(p(y_i|x_i,\theta))$ is always non-negative, since the $y_i$'s are discrete random variables and, as such, $p(y_i|x_i,\theta) \leq 1$ for all $\theta$. Therefore, the existence of a minimizer $\theta^*$ for $L(\theta)$ is guaranteed. However, if the $y_i$'s are continuous random variables (as is the case for Gaussian mixtures), $p(y|x,\theta)$ is a pdf, so it may assume values greater than 1 (as long as it integrates to 1 w.r.t. $y$). Therefore, if we make no assumptions on the family of pdfs $p(y|x,\theta)$, it is not obvious to me that $L(\theta)$ is necessarily lower bounded. Is it? Moreover, let us suppose that $p(y|x,\theta)$ is a Gaussian mixture whose parameters are provided by a Mixture Density Network, that is: $p(y|x,\theta)=\sum_{i=1}^K \alpha_i(x)\mathcal{N}(y|\mu_i(x),\sigma^2_i(x))$, where $K$ is the number of components in the mixture and $\alpha_i(x)$, $\mu_i(x)$ and $\sigma^2_i(x)$ are the outputs of a neural network that takes $x$ as input. In this case, $\theta$ represents all the parameters of the neural network to be optimized. (For more details on Mixture Density Networks please check Bishop, 1994, available at https://publications.aston.ac.uk/373/1/NCRG_94_004.pdf) Is $L(\theta)$ lower bounded for this particular family $p(y|x,\theta)$? How can one prove (or at least build a strong intuition for) this? These questions came to my mind while I was trying to reproduce the toy example in section 5 of the paper I mention above. Actually, during training, my loss is always decreasing, assuming negative values after about 1000 iterations. Since I do not have a lower bound for the loss, I don't have any idea whether the final loss is reasonable or not. Moreover, I am not able to reproduce the results in the paper (for instance, for $x=0.5$, I get a unimodal distribution instead of a trimodal one). Thank you in advance.
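For the first question, a standard degeneracy argument (mine, not from the paper) shows the answer is no in general: with a Gaussian mixture, letting one component's variance shrink onto a training point drives the NLL to $-\infty$, so negative loss values by themselves are not alarming. A tiny sketch:

```python
import numpy as np

y = np.array([0.0, 1.0, 2.0])                    # toy targets at a fixed x
for sigma in (1.0, 1e-1, 1e-3, 1e-6):
    # one component collapsing onto y=0, one broad N(1, 4) "background"
    p = 0.5 * np.exp(-y**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma) \
      + 0.5 * np.exp(-(y - 1.0)**2 / 8) / np.sqrt(8 * np.pi)
    print(sigma, -np.log(p).sum())               # NLL decreases without bound
```

In practice the network's $\sigma_i(x)$ rarely collapses this way, but it does mean a decreasing, negative loss is compatible with correct code.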
Today I learned about the Elliot Activation (or Sigmoid) Function. In fact, The MathWorks just included it in their most recent update to the Neural Network toolbox. The function was first introduced in 1993 by D.L. Elliot under the title A Better Activation Function for Artificial Neural Networks. The function closely approximates the Sigmoid or Hyperbolic Tangent functions for small values; however, it takes longer to converge for large values (i.e. it doesn't go to 1 or 0 as fast), though this isn't particularly a problem if you're using it for classification. In my testing in MATLAB "the function computes over 2x faster than the exponential sigmoid function", which, for certain types of ML problems, can lead to a significant speed improvement. So what is this function? Behold: \begin{aligned} \sigma_e(x) = \frac{x}{1 + |x|} \end{aligned} It's also differentiable, with the derivative \begin{aligned} \frac{\partial\sigma_e(x)}{\partial x} = \frac{1}{(1 + |x|)^2} \end{aligned} For an activation function in the range $ [0,1] $ it can be written as \begin{aligned} \sigma_e(x) = \frac{0.5\,x}{1 + |x|} + 0.5 \end{aligned} In MATLAB this can be simply written as

function p = elliotsig(x)
    % From http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.46.7204
    % "A Better Activation Function for Artificial Neural Networks"
    p = 0.5*x./(1 + abs(x)) + 0.5;
end

Here's an image I ganked from Wikipedia that shows the Elliot, Exponential and Hyperbolic Tangent Sigmoid functions:
Yes, there is a certain sense in which your statements are true. As Mike Shulman and Qiaochu Yuan said, the strict fiber of a map cannot be defined in HoTT and doesn't make sense, but you can work from the other direction: instead of trying to associate a family of types to a map start with an arbitrary type family $P: X \to Type$. This includes any classical Serre fibrations: you associate to a point a fiber over it and to higher dimensional simplices some choice of their lifting, which is possible to do coherently by the path lifting property of fibrations. In the simplest case of a flat bundle you can make an explicit choice of lifts, which defines an equivalence between flat bundles and representations of the fundamental group of $X$. Homotopy-theoretically "every bundle is flat" in a sense that any two maps between fibers induced by a pair of equivalent paths are themselves (noncanonically) equivalent. The magic of HoTT and its models allows us to assume that all fibrations are given as representations of the path groupoid: path induction allows us to prove that any map of types is functorial in the sense that paths map to paths, composition to composition etc for all higher relations. This shows that a type family $P: X \to Type$ associates to any point $x: X$ a type $P(x): Type$, to any path $s: x=y$ a homotopy equivalence $ P(x) = P(y)$ etc (a map from paths in $Type$ to equivalences is automatic by path induction). In certain cases we can turn a coherent system of equivalences between fibers into a type family over $X$. The simplest case is when $X$ is defined as a higher inductive type, since higher inductive types are defined as representing objects of certain systems of equivalences between types. See chapter 6, in particular [6.12 The flattening lemma] in the HoTT book. Given a type family $P: X \to Type$ we can construct its total space $Y := \sum_{x:X} P(x)$ with a canonical projection map $Y \xrightarrow{p} X$ defined by induction on $\Sigma$. Then we can consider the type family of homotopy fibers of $p$: $$x:X \vdash \mathrm{fib}_p(x) := \left( \sum_{y:Y} x = p(y) \right) : Type$$ The resulting type family is equal to $P$ in the sense $\prod_{x:X} P(x) = \mathrm{fib}_p(x)$. By univalence we need to provide a homotopy equivalence between fibers. The map from left to right is the obvious inclusion of a strict fiber into a homotopy one. The map from right to left is constructed via induction on $\Sigma$ and paths. In the other direction, to any map $Y \xrightarrow{f} X$ we can associate its fiber type family $\mathrm{fib}_f$ and its total space $\sum_{x:X} \mathrm{fib}_f(x)$. It will be equivalent to $Y$ as objects over $X$. The relevant reference here is [4.8 The object classifier] in the HoTT book. Essentially the big statement at work here is that from the PoV of higher category theory, the core groupoid for the overcategory $\mathbb{S}_{/X}$ is "representable" by a certain object $Type: \mathbb{S}$ (more formally, we consider a filtration of the overcategory by subcategories of bounded increasing cardinalities and require that each of these functors should be representable, thus getting an increasing sequence of objects $Type_{\kappa}$, see HTT [6.1.6. $\infty$-topoi and classifying objects]). In classical 1-topos theory and set theory this statement is limited to $(-1)$-truncated morphisms and type families and corresponds to the comprehension principle: any subset can be defined by a proposition which is true exactly for the elements of this subset, and vice versa. 
Type theory extends this principle to arbitrary constructions: any statement $P:X \to Type$ defines a type of "structures" on $X$ (the sigma type), and equivalently any map of types can be defined by some statement in a sense of a type family. Regarding your "dual" construction: your type is defined as $\prod_{y^\prime} \prod_{y = y^\prime} X_{y^\prime}$. By path induction this type is equivalent to $X_y$, so your type family is equivalent to $X$. Regarding the moduli spaces: I don't think this construction has really anything to do with them. Not that it's entirely irrelevant, but moduli spaces carry much more structure than homotopy types and you can't get around that unless you're willing to sacrifice most information. One possibility is to consider moduli spaces as stacks of groupoids on the Zariski (étale, flat, ...) site. Sheaves of groupoids naturally form a higher topos and should be a model of HoTT, however the information you get this way is mostly trivial: if your classified objects don't have any automorphisms, then the stack will be just a sheaf of sets, and if you have automorphisms, then you'll get the tautological description of locally isomorphic objects. Another possibility would be to enlarge your notion of equivalence, for example you could state that all fibers of the universal family are equivalent, e.g. by saying that an equivalence of $E$ and $E^\prime$ can be given as any fibration $X$ over $\mathbb{A}^1$ such that $X(0) \simeq E$ and $X(1) \simeq E^\prime$. This feels a lot like $\mathbb{A}^1$-homotopy theory, but I can't state anything more meaningful. In any case it feels like most (all?) of the algebraic information would be lost, even if one is very careful with definitions (I already see some problems in the one above). Abstractly, the story of the moduli spaces is the story about classifying ringed topoi, while HoTT speak only about objects of a single higher topos (or topoi that arise as their overcategories). Sort of orthogonal beasts. Regarding your "associated bundle --- principal bundle" comment. Indeed, one can view this construction for connected spaces in a similar way. Assume given a point $x:X$ and $X$ connected, then fibrations $P:X \to Type$ over $X$ are equivalent to homotopy representations of the ($A_\infty$-)group of loops $\Omega_x X$ in the automorphisms $Aut(P(x))$ of the fiber $P(x)$. This is certainly true in classical homotopy theory of spaces, but in HoTT there are complications. First there is a problem with the definition of $A_\infty$-groups and their representations, which we will ignore. The construction itself in one direction is obvious, we already know that each loop induces an automorphism of the fiber. In the other direction $y=x$ is a right torsor over $\Omega_x X$ so we can take the product over the loop group $(y = x) \times_{\Omega_x X} P(x)$. Classically this gives us back $P$, but in HoTT this requires clarification. We define a $(\infty, 1)$-category $C$ as a type of objects $Ob_C: Type$ and a family of morphism types $a,b:C \vdash C(a,b): Type$, together with units and composition which we suppress in the following discussion. As usual, a groupoid is a category in which all morphisms are invertible, and for any category $C$ we have its core groupoid $C^\sim$ which is the largest subgroupoid in $C$. For any $X: Type$ its path groupoid $\Pi X: Cat$ has $Ob_{\Pi X} = X$ and $a, b:X \vdash \Pi X(a, b) := (a = b) : Type$. A category is called univalent if the natural functor $\Pi Ob_C \to C^\sim$ is an equivalence. 
Our definition of a category follows the Segal space approach, and univalent categories are precisely complete Segal spaces. The category of types $Type$ has $Ob_{Type} = Type$ and $X,Y : Type \vdash Type(X, Y) := (X \to Y)$. The classical axiom of univalence is precisely the statement that $Type$ is an univalent category. We also define the geometric realization functor $|C|$ as the left adjoint to the path groupoid functor, i.e. $Type(|C|, X) = Cat(C, \Pi X)$. For any ($A_\infty$-)group or monoid $G$ we define the delooping category $\mathbb B G$ as $Ob_{\mathbb B G} = 1$, $\mathbb B G(\ast, \ast) = G$. Presheaves of types on $\mathbb B G$ are the same as representations of $G$. Now, the correspondence between type families and group representations stated above in these terms says that for any connected $X: Type$ with $x: X$ the categories $\mathbb B \Omega_x X$ and $\Pi X$ are Morita equivalent, i.e. their categories of type presheaves are equivalent. We have an obvious inclusion $\mathbb B \Omega_x X \to \Pi X$, however this is not an equivalence of categories, since an equivalence on object types would restrict to the statement $\forall y: X, x = y$ which means that $X$ is contractible. This is in stark contrast with the 1-categorical situation where any groupoid is equivalent to its full subgroupoid on one object. The classical statement relies on a choice principle to choose an equivalence for each nonempty $C(x, y)$, but constructively we can't have such strong choice. One can argue that this failure is due to non-univalence of $\mathbb B \Omega_x X$ and it's true in a sense, but I have two objections. The first is that the truth of this equivalence for univalent subcategories is itself a nontrivial theorem, essentially it is May's delooping theorem together with its version for path groupoids. Using such strong statements for a basic notion of equivalence doesn't seem like a good idea. The second is that the delooping category is an extremely natural object, and being unable to naturally talk about it would be a problem. I also find it troubling that the simple construction of delooping category for a monoid would suddenly blow up into a much more complex path category if the monoid is a group. Similarly, it is natural to select subcategories on a set of objects, or consider action groupoids for groups, and none of them would be categories under the univalence requirement (e.g. the action groupoid on a torsor would have "too few" morphisms to be univalent and to be equivalent to its geometric realization, which is a point). Even the basic 1-categorical definition of a category wouldn't be a category if we require all categories to be univalent. Sure, we could do the Rezk completion, but that extra step would be unnecessary in half of situations and very nontrivial in the other half. Non-univalent categories will break the homotopy hypothesis, but I view it as a lesser evil in this case. Groupoids should be at least as complicated as homotopy types, but there is no reason why they couldn't contain more information. That said, univalence of $Type$ is a very natural requirement. In particular, I don't know how to prove the Morita equivalence above without the univalence axiom, or even if it's true. 
With univalence it is easy to prove that for any groupoid its category of presheaves is equivalent to the category of type families over its geometric realization: $$\begin{aligned}(C \to Type) &\simeq (C \to Type^\sim)\\ &\simeq (C \to \Pi\, Type)\\ &\simeq (|C| \to Type)\\ &\simeq (\Pi |C| \to Type)\end{aligned}$$ Together with the delooping theorem $|\mathbb B \Omega_x X| = X$, $|\Pi X| = X$ this proves the Morita equivalence above. The part about path groupoids is easy to prove, but I'm not sure that the part about $\mathbb B$ can be proved without univalence. In summary, assuming univalence, type families over connected inhabited types can be uniquely reconstructed from their fiber over some point together with an action of the loop group on it. At the moment there is no type-theoretic reference for the constructions that I performed above, but something like that should be in books on higher categories and topology.
Let $E=\mathbb C/\Lambda$ be an elliptic curve, and let $D\subset E$ be a very small disc. ($D$ is round for the usual flat metric on $E$.) By the main result of [1], there exists a holomorphic immersion $f:E\setminus D \to \mathbb C$. The image $f(\partial D)$ is a closed curve in $\mathbb C$ that self-intersects a bunch of times. What is the shape of $f(\partial D)$? I understand that such a curve is not unique. What I want is a qualitative description of an example of such a curve. A drawing would be great. Example: When $D\subset E$ is a "rather big" disk, the curve ⌘ is the image of $\partial D$ under an immersion $E\setminus D \to \mathbb C\mathbb P^1$ (points in the central square have two preimages; points in the four lobes have zero preimages). But that only works when $D$ is rather big compared to the size of $E$, and I don't know how to modify this curve as the size of $D$ tends to zero. [1]: Gunning, R. C., Narasimhan, R., Immersion of open Riemann surfaces. Math. Ann. 174, 103–108 (1967).
title: Dependent Density Regression with PyMC3 My last post showed how to use Dirichlet processes and pymc3 to perform Bayesian nonparametric density estimation. This post expands on the previous one, illustrating dependent density regression with pymc3. Just as Dirichlet process mixtures can be thought of as infinite mixture models that select the number of active components as part of inference, dependent density regression can be thought of as infinite mixtures of experts that select the active experts as part of inference. Their flexibility and modularity make them powerful tools for performing nonparametric Bayesian data analysis.

%matplotlib inline
from IPython.display import HTML

from matplotlib import animation as ani, pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import seaborn as sns
from theano import shared, tensor as tt

plt.rc('animation', writer='avconv')
blue, *_ = sns.color_palette()

SEED = 972915  # from random.org; for reproducibility
np.random.seed(SEED)

Throughout this post, we will use the LIDAR data set from Larry Wasserman's excellent book, All of Nonparametric Statistics. We standardize the data set to improve the rate of convergence of our samples.

DATA_URI = 'http://www.stat.cmu.edu/~larry/all-of-nonpar/=data/lidar.dat'

def standardize(x):
    return (x - x.mean()) / x.std()

df = (pd.read_csv(DATA_URI, sep=' *', engine='python')
        .assign(std_range=lambda df: standardize(df.range),
                std_logratio=lambda df: standardize(df.logratio)))

df.head()

   range  logratio  std_logratio  std_range
0    390 -0.050356      0.852467  -1.717725
1    391 -0.060097      0.817981  -1.707299
2    393 -0.041901      0.882398  -1.686447
3    394 -0.050985      0.850240  -1.676020
4    396 -0.059913      0.818631  -1.655168

We plot the LIDAR data below.

fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(df.std_range, df.std_logratio, c=blue);
ax.set_xticklabels([]);
ax.set_xlabel("Standardized range");
ax.set_yticklabels([]);
ax.set_ylabel("Standardized log ratio");

This data set has two interesting properties that make it useful for illustrating dependent density regression. The intuitive idea behind dependent density regression is to reduce the problem to many (related) density estimates, conditioned on fixed values of the predictors. The following animation illustrates this intuition.
fig, (scatter_ax, hist_ax) = plt.subplots(ncols=2, figsize=(16, 6))

scatter_ax.scatter(df.std_range, df.std_logratio, c=blue, zorder=2);
scatter_ax.set_xticklabels([]);
scatter_ax.set_xlabel("Standardized range");
scatter_ax.set_yticklabels([]);
scatter_ax.set_ylabel("Standardized log ratio");

bins = np.linspace(df.std_range.min(), df.std_range.max(), 25)
hist_ax.hist(df.std_logratio, bins=bins, color='k', lw=0, alpha=0.25,
             label="All data");
hist_ax.set_xticklabels([]);
hist_ax.set_xlabel("Standardized log ratio");
hist_ax.set_yticklabels([]);
hist_ax.set_ylabel("Frequency");
hist_ax.legend(loc=2);

endpoints = np.linspace(1.05 * df.std_range.min(), 1.05 * df.std_range.max(), 15)
frame_artists = []
for low, high in zip(endpoints[:-1], endpoints[2:]):
    interval = scatter_ax.axvspan(low, high, color='k', alpha=0.5, lw=0, zorder=1);
    *_, bars = hist_ax.hist(df[df.std_range.between(low, high)].std_logratio,
                            bins=bins, color='k', lw=0, alpha=0.5);
    frame_artists.append((interval,) + tuple(bars))

animation = ani.ArtistAnimation(fig, frame_artists, interval=500,
                                repeat_delay=3000, blit=True)
plt.close();  # prevent the intermediate figure from showing
HTML(animation.to_html5_video())

As we slice the data with a window sliding along the x-axis in the left plot, the empirical distribution of the y-values of the points in the window varies in the right plot. An important aspect of this approach is that the density estimates that correspond to close values of the predictor are similar. In the previous post, we saw that a Dirichlet process estimates a probability density as a mixture model with infinitely many components. In the case of normal component distributions, $$ y \sim \sum_{i = 1}^{\infty} w_i \cdot N(\mu_i, \tau_i^{-1}), $$ where the mixture weights, $w_1, w_2, \ldots$, are generated by a stick-breaking process. Dependent density regression generalizes this representation of the Dirichlet process mixture model by allowing the mixture weights and component means to vary conditioned on the value of the predictor, $x$. That is, $$ y\ |\ x \sim \sum_{i = 1}^{\infty} (w_i\ |\ x) \cdot N(\mu_i\ |\ x, \tau_i^{-1}). $$ In this post, we will follow Chapter 23 of Bayesian Data Analysis and use a probit stick-breaking process to determine the conditional mixture weights, $w_i\ |\ x$. The probit stick-breaking process starts by defining $$ v_i\ |\ x = \Phi(\alpha_i + \beta_i x), $$ where $\Phi$ is the cumulative distribution function of the standard normal distribution. We then obtain $w_i\ |\ x$ by applying the stick-breaking process to $v_i\ |\ x$. That is, $$ w_i\ |\ x = (v_i\ |\ x) \cdot \prod_{j = 1}^{i - 1} (1 - v_j\ |\ x). $$ For the LIDAR data set, we use independent normal priors $\alpha_i \sim N(0, 5^2)$ and $\beta_i \sim N(0, 5^2)$. We now express this model for the conditional mixture weights using pymc3.

def norm_cdf(z):
    return 0.5 * (1 + tt.erf(z / np.sqrt(2)))

def stick_breaking(v):
    return v * tt.concatenate([tt.ones_like(v[:, :1]),
                               tt.extra_ops.cumprod(1 - v, axis=1)[:, :-1]],
                              axis=1)

N, _ = df.shape
K = 20

std_range = df.std_range.values[:, np.newaxis]
std_logratio = df.std_logratio.values[:, np.newaxis]

x_lidar = shared(std_range, broadcastable=(False, True))

with pm.Model() as model:
    alpha = pm.Normal('alpha', 0., 5., shape=K)
    beta = pm.Normal('beta', 0., 5., shape=K)
    v = norm_cdf(alpha + beta * x_lidar)
    w = pm.Deterministic('w', stick_breaking(v))

We have defined x_lidar as a theano shared variable in order to use pymc3's posterior prediction capabilities later.
While the dependent density regression model theoretically has infinitely many components, we must truncate the model to finitely many components (in this case, twenty) in order to express it using pymc3. After sampling from the model, we will verify that truncation did not unduly influence our results. Since the LIDAR data seems to have several linear components, we use the linear models $$ \begin{align*} \mu_i\ |\ x & \sim \gamma_i + \delta_i x \\ \gamma_i & \sim N(0, 10^2) \\ \delta_i & \sim N(0, 10^2) \end{align*} $$ for the conditional component means.

with model:
    gamma = pm.Normal('gamma', 0., 10., shape=K)
    delta = pm.Normal('delta', 0., 10., shape=K)
    mu = pm.Deterministic('mu', gamma + delta * x_lidar)

Finally, we place the prior $\tau_i \sim \textrm{Gamma}(1, 1)$ on the component precisions.

with model:
    tau = pm.Gamma('tau', 1., 1., shape=K)
    obs = pm.NormalMixture('obs', w, mu, tau=tau, observed=std_logratio)

We now draw samples from the dependent density regression model.

SAMPLES = 20000
BURN = 10000
THIN = 10

with model:
    step = pm.Metropolis()
    trace_ = pm.sample(SAMPLES, step, random_seed=SEED)

trace = trace_[BURN::THIN]

To verify that truncation did not unduly influence our results, we plot the largest posterior expected mixture weight for each component. (In this model, each point has a mixture weight for each component, so we plot the maximum mixture weight for each component across all data points in order to judge if the component exerts any influence on the posterior.)

fig, ax = plt.subplots(figsize=(8, 6))
ax.bar(np.arange(K) + 1 - 0.4, trace['w'].mean(axis=0).max(axis=0));
ax.set_xlim(1 - 0.5, K + 0.5);
ax.set_xlabel('Mixture component');
ax.set_ylabel('Largest posterior expected\nmixture weight');

Since only three mixture components have appreciable posterior expected weight for any data point, we can be fairly certain that truncation did not unduly influence our results. (If most components had appreciable posterior expected weight, truncation may have influenced the results, and we would have increased the number of components and sampled again.) Visually, it is reasonable that the LIDAR data has three linear components, so these posterior expected weights seem to have identified the structure of the data well. We now sample from the posterior predictive distribution to get a better understanding of the model's performance.

PP_SAMPLES = 5000

lidar_pp_x = np.linspace(std_range.min() - 0.05, std_range.max() + 0.05, 100)
x_lidar.set_value(lidar_pp_x[:, np.newaxis])

with model:
    pp_trace = pm.sample_ppc(trace, PP_SAMPLES, random_seed=SEED)

Below we plot the posterior expected value and the 95% posterior credible interval.

fig, ax = plt.subplots()
ax.scatter(df.std_range, df.std_logratio, c=blue, zorder=10, label=None);
low, high = np.percentile(pp_trace['obs'], [2.5, 97.5], axis=0)
ax.fill_between(lidar_pp_x, low, high, color='k', alpha=0.35, zorder=5,
                label='95% posterior credible interval');
ax.plot(lidar_pp_x, pp_trace['obs'].mean(axis=0), c='k', zorder=6,
        label='Posterior expected value');
ax.set_xticklabels([]);
ax.set_xlabel('Standardized range');
ax.set_yticklabels([]);
ax.set_ylabel('Standardized log ratio');
ax.legend(loc=1);
ax.set_title('LIDAR Data');

The model has fit the linear components of the data well, and also accommodated its heteroskedasticity.
This flexibility, along with the ability to modularly specify the conditional mixture weights and conditional component densities, makes dependent density regression an extremely useful nonparametric Bayesian model. This post is available as a Jupyter notebook here.
I am reading a paper, and I came across the following definition of sinc interpolation. Warning. I don't have a strong background in signal processing. Also, I have no clue what that bar on $\bar{F}$ means; I don't know why that would be the conjugate. Would it actually be the conjugate? Context. In this part, for a given kernel $f_k$ of size $k$, we want to expand $F_k = DFT\{f_k\}$ to $F_N = DFT\{f_N\}$ of size $N$, without going to the spatial domain, padding, and coming back to the spectral domain. Let's discuss a 1D scenario for notational simplicity. $$ G(u) = F(u) \ast \left [ e^{\frac{-j 2 \pi u}{N}(\frac{k-1}{2})} \frac{\sin(\frac{\pi u k}{N})}{\sin(\frac{\pi u}{N})} \right ] $$ Q1. I am assuming that $G(u)$ and $F(u)$ are discrete, having the respective sizes of $N$ and $k$. Is this assumption right? Q2. Would that second term actually be the $sinc$ kernel, i.e. would the expression below be true? If not, what is the relation between that term and the sinc function? $$ sinc(u) = \frac{\sin(u)}{u} \overset{?}{=} \left [ e^{\frac{-j 2 \pi u}{N}(\frac{k-1}{2})} \frac{\sin(\frac{\pi u k}{N})}{\sin(\frac{\pi u}{N})} \right ] $$ Another way I know to avoid using the DFT to compute $F_N$ is by representing $f_N$ in terms of $f_k$ as $$f_n = \sum_{i=0}^{k-1} \delta(n-i) f_i$$ Thus, using $$DFT\{\delta[n]\} = 1$$ and $$ DFT\{x[n-n_o]\} = e^{-j \frac{2 \pi}{N} k n_o} DFT\{x[n]\}$$ we get $$F_N(k) = \sum_{n_o=0}^{k-1} e^{-j \frac{2 \pi}{N} k n_o } f_{n_o}$$ I am not sure if the convolution in the frequency domain, $G(u)$, and the impulse-based composition, $F_N(k)$, are similar approaches to doing the same thing. I did notice the similarity between $$ e^{\frac{-j 2 \pi u}{N}(\frac{k-1}{2})} \sim e^{\frac{-j 2 \pi k}{N}n_o} $$ But, if this is the case, then $$ \frac{\sin(\frac{\pi u k}{N})}{\sin(\frac{\pi u}{N})} \sim F\{\delta(k-n_o)\} $$ Q3. Are these the same approaches? Is there any relation between these expressions? $$\frac{k-1}{2} \sim n_o$$
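Regarding Q3, here is a quick numerical check of my own that the impulse-based composition is nothing more than a zero-padded DFT, so both routes compute the same $F_N$ (note that your index $k$ in $F_N(k)$ plays the role of $u$ here):

```python
import numpy as np

k, N = 5, 32
f = np.random.randn(k)
F_N = np.fft.fft(f, n=N)                  # fft with n=N zero-pads f to length N
u = np.arange(N)[:, None]
n = np.arange(k)[None, :]
F_manual = (f * np.exp(-2j * np.pi * u * n / N)).sum(axis=1)
print(np.allclose(F_N, F_manual))         # True
```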
By the result of Baker, Harman, and Pintz (http://www.cs.umd.edu/~gasarch/BLOGPAPERS/BakerHarmanPintz.pdf), for any sufficiently large $x$ the interval $[x-x^{21/40},x]$ contains a prime number. This result implies the asymptotic $p(x)=x+O(x^{21/40})$, where the function $p(x)$ assigns to each real number $x$ the smallest prime number $p\ge x$. Question. For which smallest possible constant $\theta$ is it known that $[x-x^\theta,x]$ contains a power of a prime number? Can this $\theta$ be smaller than or equal to $\frac12$? This problem also admits the following reformulation. For any real number $x$ let $q(x)$ be the smallest prime power greater than or equal to $x$. Problem. Is the asymptotic growth of the function $q(x)$ better than that of $p(x)$? For example, is $q(x)=x+O(\sqrt{x})$? Is this equality true under the Riemann Hypothesis?
There is a known mathematical question here: you are given a unit vector lying on the $(n-1)$-sphere in $\mathbb{R}^n$, $v\in S^{n-1}$, and you would like to associate with each such vector a frame in the tangent bundle of $S^{n-1}$ at $v$. This is a map $S^{n-1}\to \mathrm{F}S^{n-1}$ from the sphere to its frame bundle, also known as a global section of the frame bundle of $S^{n-1}$. Here, the frame bundle (https://en.wikipedia.org/wiki/Frame_bundle) is the set of all pairs $(v, (u_1, \ldots, u_{n-1}))$ of a unit vector $v$ and $n-1$ orthonormal vectors all orthogonal to $v$, and a global section means a map $v \mapsto (u_1, \ldots, u_{n-1})$ that is defined for all $v$ (https://en.wikipedia.org/wiki/Section_(fiber_bundle)#Extending_to_global_sections). Manifolds that admit such a map are known as parallelizable (https://en.wikipedia.org/wiki/Parallelizable_manifold), and as Wikipedia says, among the unit spheres only $S^0$, $S^1$, $S^3$ and $S^7$ are parallelizable. So if you are looking for a way to get the orthogonal complement $v^\perp$ of $v$ that is valid and smooth in $v$ for all $v$, you can only do that for $n=1,2,4,8$. For $n=3$, in fact, the proof of impossibility is simple: if the two orthogonal vectors such an algorithm would compute are $f(v)$ and $g(v)$, and both depend continuously on $v$, then $f(v)$ defines a tangent vector to the unit sphere at the point $v$, and such a tangent vector field would contradict the hairy ball theorem (https://en.wikipedia.org/wiki/Hairy_ball_theorem). For $n=2$, you have the map $(a,b) \mapsto (-b, a)$. For $n=4$, you have the map $$ (a, b, c, d) \mapsto\begin{pmatrix}-b & -c & -d\\a & d & -c\\-d & a & b\\c & -b & a\end{pmatrix}.$$ This corresponds to writing down the quaternion $w = a+\mathrm{i}b+\mathrm{j}c+\mathrm{k}d$ (where the dot product is computed as $v_1^\top v_2 = \Re(\bar w_1 w_2)$), and the three orthogonal vectors are $\mathrm{i}w$, $\mathrm{j}w$, $\mathrm{k}w$. For $n=8$, you can read the answer off of the Cayley table for octonions (https://en.wikipedia.org/wiki/Octonion). For $n=3$ and other $n$, I think the best you can do is to pick a "pole", such as the first basis vector $e_1$, choose the rotation matrix that rotates in the $(e_1, v)$-plane only and maps $e_1$ to $v$ ($Re_1 = v$), and use the $(n-1)$-frame $(Re_2, \ldots, Re_n)$. The matrix $R$ can be computed as a sequence of Givens rotations, which means its directional derivatives should be straightforward to compute. This map has a singularity at $v=-e_1$. This is equivalent to parallel-transporting a tangent frame at $e_1$ to $v$ along the shortest path (a great circle on $S^{n-1}$), so there is a geometric interpretation to doing this. More explicitly, write $v = \cos\theta\, e_1 + \sin\theta\, e_v$, where $e_v$ is a unit vector with $e_1^\top e_v = 0$. Then the rotation matrix in the $(e_1,e_v)$-basis is $$R_0 = \begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix},$$ and the change of basis from $(e_1,e_2,\ldots)$ to $(e_1,e_v,\ldots)$ is given by any orthonormal matrix $X$ such that $Xe_1 = e_1$, $Xe_2 = e_v$. This can be computed by QR decomposition of the $n\times(n+1)$ matrix with columns $(e_1,v,e_2,\ldots,e_n)$, and the answer is the matrix $R_0$ with the change-of-basis formula applied to it, $$ R = X \begin{pmatrix} R_0&0\\0&I \end{pmatrix} X^{-1}. $$
$$ This is independent of which completion $X$ the QR decomposition chooses, because $R$ acts as the identity on however the basis is completed; the construction fails only in the two special cases $v=\pm e_1$.
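Here is a minimal sketch of this pole construction in NumPy (my illustration; it applies the rotation in closed form rather than via Givens rotations or an explicit QR step, but it computes the same $R$ restricted to the plane $\mathrm{span}(e_1, e_v)$):

```python
import numpy as np

def transported_frame(v, eps=1e-12):
    """Orthonormal frame of v-perp obtained by rotating (e_2, ..., e_n)
    in the (e_1, v)-plane so that e_1 maps to v; singular at v = -e_1."""
    n = len(v)
    e1 = np.zeros(n); e1[0] = 1.0
    c = v[0]                       # cos(theta) = e1 . v
    w = v - c * e1                 # component of v orthogonal to e1
    s = np.linalg.norm(w)          # sin(theta)
    if s < eps:                    # v = +-e1: rotation plane is undefined
        return np.sign(c) * np.eye(n)[:, 1:]
    ev = w / s
    # R = I + (c-1)(e1 e1' + ev ev') + s(ev e1' - e1 ev'):
    # rotation by theta in span(e1, ev), identity on the complement
    R = (np.eye(n)
         + (c - 1.0) * (np.outer(e1, e1) + np.outer(ev, ev))
         + s * (np.outer(ev, e1) - np.outer(e1, ev)))
    return R[:, 1:]                # columns R e_2, ..., R e_n

v = np.random.randn(5); v /= np.linalg.norm(v)
F = transported_frame(v)
print(np.allclose(F.T @ F, np.eye(4)))  # columns are orthonormal
print(np.allclose(F.T @ v, 0))          # and orthogonal to v
```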
Regularization using methods such as Ridge, Lasso, and ElasticNet is quite common for linear regression. I wanted to know the following: Are these methods applicable for logistic regression? If so, are there any differences in the way they need to be used for logistic regression? If these methods are not applicable, how does one regularize a logistic regression? Yes, regularization can be used in all linear methods, including both regression and classification. I would like to show you that there is not much difference between regression and classification: the only difference is the loss function. Specifically, there are three major components of a linear method: the loss function, the regularization, and the algorithm, where the loss function plus the regularization is the objective function of the problem in optimization form and the algorithm is the way to solve it (the objective function is convex; we will not discuss algorithms in this post). In the loss function setting, we can have different losses in both the regression and classification cases. For example, least squares and least absolute deviation loss can be used for regression. Their mathematical representations are $L(\hat y,y)=(\hat y -y)^2$ and $L(\hat y,y)=|\hat y -y|$. (The function $L( \cdot ) $ is defined on two scalars; $y$ is the ground truth value and $\hat y$ is the predicted value.) On the other hand, logistic loss and hinge loss can be used for classification. Their mathematical representations are $L(\hat y, y)=\log (1+ \exp(-\hat y y))$ and $L(\hat y, y)= (1- \hat y y)_+$. (Here, $y$ is the ground truth label in $\{-1,1\}$ and $\hat y$ is the predicted "score". The definition of $\hat y$ is a little bit unusual; please see the comment section.) In the regularization setting, you mentioned the L1 and L2 regularization; there are also other forms, which will not be discussed in this post. Therefore, at a high level a linear method is $$\underset{w}{\text{minimize}}~~~ \sum_{x,y} L(w^{\top} x,y)+\lambda h(w)$$ If you replace the loss function of the regression setting with the logistic loss, you get logistic regression with regularization. For example, in ridge regression, the optimization problem is $$\underset{w}{\text{minimize}}~~~ \sum_{x,y} (w^{\top} x-y)^2+\lambda w^\top w$$ If you replace the loss function with the logistic loss, the problem becomes $$\underset{w}{\text{minimize}}~~~ \sum_{x,y} \log(1+\exp{(-w^{\top}x \cdot y)})+\lambda w^\top w$$ Here you have logistic regression with L2 regularization. This is what it looks like on a toy synthesized binary data set. The left figure shows the data with the linear model (decision boundary). The right figure shows the objective function contour (the x and y axes represent the values of the 2 parameters). The data set was generated from two Gaussians, and we fit the logistic regression model without intercept, so there are only two parameters we can visualize in the right sub-figure. The blue lines are the logistic regression without regularization and the black lines are logistic regression with L2 regularization. The blue and black points in the right figure are the optimal parameters of the objective function. In this experiment, we set a large $\lambda$, so you can see the two coefficients are close to $0$. In addition, from the contour, we can observe that the regularization term dominates and the whole function is like a quadratic bowl. Here is another example with L1 regularization. Note that the purpose of this experiment is to show how regularization works in logistic regression, not to argue that a regularized model is better.
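Below is a minimal, self-contained sketch (my own illustration, not the answer's original code, which is linked further down) of minimizing the L2-regularized logistic objective above by plain gradient descent; the data generation mimics the two-Gaussian toy setup described in the answer:

```python
import numpy as np

def fit_logreg_l2(X, y, lam=1.0, lr=0.5, iters=2000):
    """Minimize sum_i log(1 + exp(-y_i w.x_i)) + lam * w.w by gradient descent.
    Labels y must be in {-1, +1}; no intercept, matching the answer's setup."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        margins = y * (X @ w)
        s = 1.0 / (1.0 + np.exp(margins))       # sigma(-margin) per sample
        # gradient of the loss term is sum_i -y_i x_i sigma(-margin_i)
        grad = -(X * (y * s)[:, None]).sum(axis=0) + 2.0 * lam * w
        w -= lr * grad / len(y)                 # step scaled by sample size
    return w

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
print(fit_logreg_l2(X, y, lam=0.0))   # unregularized coefficients
print(fit_logreg_l2(X, y, lam=10.0))  # a large lambda shrinks w toward 0
```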
Here are some animations about L1 and L2 regularization and how they affect the logistic loss objective. In each frame, the title gives the regularization type and $\lambda$, and the plot is the objective function (logistic loss + regularization) contour. We increase the regularization parameter $\lambda$ in each frame, and the optimal solution shrinks to $0$ frame by frame. Some notation comments: $w$ and $x$ are column vectors and $y$ is a scalar, so the linear model is $\hat y = f(x)=w^\top x$. If we want to include the intercept term, we can append $1$ as a column to the data. In the regression setting, $y$ is a real number, and in the classification setting $y \in \{-1,1\}$. Note that it is a little bit strange to define $\hat y=w^{\top} x$ in the classification setting, since most people use $\hat y$ to represent a predicted value of $y$; in our case, $\hat y = w^{\top} x$ is a real number, not a value in $\{-1,1\}$. We use this definition of $\hat y$ because it simplifies the notation of the logistic loss and hinge loss. Also note that in some other notation systems, where $y \in \{0,1\}$, the form of the logistic loss function would be different. The code can be found in my other answer here. A shrinkage/regularization method that was originally proposed for logistic regression, based on considerations of higher-order asymptotics, is Firth logistic regression... some time before all of this talk about the lasso and whatnot started, although after ridge regression had risen and subsided in popularity through the 1970s. It amounts to adding a penalty term to the likelihood, $$l^*(\beta) = l(\beta) + \frac12 \ln |i(\beta)|$$ where $i(\beta) = \frac1n \sum_i p_i (1-p_i) x_i x_i'$ is the information matrix normalized per observation. Firth demonstrated that this correction has a Bayesian interpretation in that it corresponds to a Jeffreys prior shrinking towards zero. The excitement it generated was due to it helping to fix the problem of perfect separation: say, a dataset $\{(y_i,x_i)\} = \{(1,1),(0,0)\}$ would nominally produce infinite ML estimates, and glm in R is still susceptible to the problem, I believe. Yes, it is applicable to logistic regression. In R, using glmnet, you simply specify the appropriate family, which is "binomial" for logistic regression. There are a couple of others (poisson, multinomial, etc.) that you can specify depending on your data and the problem you are addressing.
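glmnet is an R package; as a Python analogue (my suggestion, not mentioned in the answers), scikit-learn's LogisticRegression fits the same penalized binomial objective, with the caveat that its parameter C is the inverse of the regularization strength $\lambda$:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)   # scikit-learn expects {0, 1} labels

# C = 1/lambda, so a small C means strong regularization
l2 = LogisticRegression(penalty="l2", C=0.1).fit(X, y)
l1 = LogisticRegression(penalty="l1", C=0.1, solver="liblinear").fit(X, y)
print(l2.coef_)   # shrunken coefficients
print(l1.coef_)   # L1 can drive some coefficients exactly to 0
```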
Shrewd (All information from [1])

Shrewd cardinals are a generalisation of indescribable cardinals. They are called shrewd because they are bigger in size than many large cardinals with much greater consistency strength (for all notions of large cardinal which do not make reference to the totality of all ordinals).

Definitions

Let $κ$ be a cardinal, $η>0$ an ordinal, and $\mathcal{A}$ a class. $κ$ is $η$-shrewd iff for all $X ⊆ V_κ$ and for every formula $\phi(x_1, x_2)$, if $V_{κ+η} \models \phi(X, κ)$, then $\exists_{0 < κ_0, η_0 < κ}\, V_{κ_0+η_0} \models \phi(X ∩ V_{κ_0}, κ_0)$. $κ$ is shrewd iff $κ$ is $η$-shrewd for every $η > 0$. $κ$ is $\mathcal{A}$-$η$-shrewd iff for all $X ⊆ V_κ$ and for every formula $\phi(x_1, x_2)$, if $\langle V_{κ+η}, \mathcal{A} ∩ V_{κ+η} \rangle \models \phi(X, κ)$, then $\exists_{0 < κ_0, η_0 < κ}\, \langle V_{κ_0+η_0}, \mathcal{A} ∩ V_{κ_0+η_0} \rangle \models \phi(X ∩ V_{κ_0}, κ_0)$. $κ$ is $\mathcal{A}$-shrewd iff $κ$ is $\mathcal{A}$-$η$-shrewd for every $η > 0$. One can also use a collection of formulae $\mathcal{F}$ and require $\phi$ to be an $\mathcal{F}$-formula to define $η$-$\mathcal{F}$-shrewd and $\mathcal{A}$-$η$-$\mathcal{F}$-shrewd cardinals.

Properties

If $κ$ is $\mathcal{A}$-$δ$-shrewd and $0 < η < δ$, then $κ$ is $\mathcal{A}$-$η$-shrewd. This is a difference between the properties of shrewdness and indescribability. For subtle $\pi$ and every class $\mathcal{A}$: in every club $B ⊆ π$ there is $κ$ such that $\langle V_\pi, \mathcal{A} ∩ V_\pi \rangle \models \text{“$κ$ is $\mathcal{A}$-shrewd”}$ (the set of cardinals $κ$ below $\pi$ that are $\mathcal{A}$-shrewd in $V_\pi$ is stationary), and there is an $\eta$-shrewd cardinal below $\pi$ for all $\eta < \pi$.

References
Learning Outcomes Recognize central limit theorem problems. The central limit theorem for sample means says that if you keep drawing larger and larger samples (such as rolling one, two, five, and finally, ten dice) and calculating their means, the sample means form their own normal distribution (the sampling distribution). This normal distribution has the same mean as the original distribution and a variance that equals the original variance divided by the sample size. The variable n is the number of values that are averaged together, not the number of times the experiment is done. Central Limit Theorem Suppose X is a random variable with a distribution that may be known or unknown (it can be any distribution). Using a subscript that matches the random variable, suppose: μX = the mean of X, σX = the standard deviation of X. If you draw random samples of size n, then as n increases, the random variable [latex]\displaystyle\overline{{X}}[/latex], which consists of sample means, tends to be normally distributed. To put it more formally: if you draw random samples of size n, the distribution of the random variable [latex]\displaystyle\overline{{X}}[/latex], which consists of sample means, is called the sampling distribution of the mean. The sampling distribution of the mean approaches a normal distribution as n, the sample size, increases. The random variable [latex]\displaystyle\overline{{X}}[/latex] is the mean of the n values drawn in one sample. The standardized variable [latex]\displaystyle\frac{{\overline{X}-{\mu}_{X}}}{{{\sigma}_{X}/\sqrt{n}}}[/latex] has approximately a standard normal distribution, where [latex]\displaystyle{\mu}_{X}[/latex] is the mean of both X and [latex]\displaystyle\overline{X}[/latex], and [latex]\displaystyle{\sigma}_{\overline{X}} = \frac{{\sigma}_{X}}{\sqrt{n}}[/latex] is the standard deviation of [latex]\displaystyle\overline{{X}}[/latex], called the standard error of the mean. To find probabilities for means on the calculator, follow these steps: 2nd DISTR, 2:normalcdf, normalcdf(lower value of the area, upper value of the area, mean, [latex]\displaystyle\frac{{\text{standard deviation}}}{{\sqrt{\text{sample size}}}}[/latex]), where: mean is the mean of the original distribution, standard deviation is the standard deviation of the original distribution, sample size = n. Example An unknown distribution has a mean of 90 and a standard deviation of 15. Samples of size n = 25 are drawn randomly from the population. Find the probability that the sample mean is between 85 and 92. Find the value that is two standard deviations above the expected value, 90, of the sample mean. Solution: normalcdf(lower value, upper value, mean, standard error of the mean). The parameter list is abbreviated (lower value, upper value, μ, [latex]\displaystyle\frac{{\sigma}}{{\sqrt{n}}}[/latex]): normalcdf(85, 92, 90, [latex]\displaystyle\frac{{15}}{{\sqrt{25}}}[/latex]) = 0.6997. To find the value that is two standard deviations above the expected value 90, use the formula: value = [latex]\displaystyle{\mu}_{x}[/latex] + (# of STDEVs)[latex]\displaystyle\left(\frac{{{\sigma}_{x}}}{{\sqrt{n}}}\right)[/latex]; value = 90 + 2([latex]\displaystyle\frac{{15}}{{\sqrt{25}}}[/latex]) = 96. The value that is two standard deviations above the expected value is 96. The standard error of the mean is [latex]\displaystyle\frac{{\sigma}}{{\sqrt{n}}}[/latex] = [latex]\displaystyle\frac{{15}}{{\sqrt{25}}}[/latex] = 3. Recall that the standard error of the mean is a description of how far (on average) the sample mean will be from the population mean in repeated simple random samples of size n.
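For readers without a TI calculator, here is a small sketch (my addition, assuming SciPy is available) reproducing the normalcdf computation for the example above:

```python
# normalcdf(85, 92, 90, 15/sqrt(25)) via scipy.stats.norm
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 90, 15, 25
se = sigma / sqrt(n)                        # standard error of the mean = 3
p = norm.cdf(92, mu, se) - norm.cdf(85, mu, se)
print(round(p, 4))                          # 0.6997, matching the calculator
print(mu + 2 * se)                          # 96, two standard errors above the mean
```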
Try it An unknown distribution has a mean of 45 and a standard deviation of eight. Samples of size n = 30 are drawn randomly from the population. Find the probability that the sample mean is between 42 and 50. P(42 < [latex]\displaystyle\overline{x}[/latex] < 50) = normalcdf(42, 50, 45, [latex]\displaystyle\frac{{8}}{{\sqrt{30}}}[/latex]) = 0.9797. Example The length of time, in hours, it takes an “over 40” group of people to play one soccer match is normally distributed with a mean of two hours and a standard deviation of 0.5 hours. A sample of size n = 50 is drawn randomly from the population. Find the probability that the sample mean is between 1.8 hours and 2.3 hours. Let [latex]\displaystyle\overline{X}[/latex] = the mean time, in hours, it takes to play one soccer match. If [latex]{\mu}_{x}[/latex] = _________, [latex]{\sigma}_{x}[/latex] = __________, and n = ___________, then [latex]\displaystyle\overline{X}[/latex] ~ N(______, ______) by the central limit theorem for means. Here [latex]{\mu}_{x}[/latex] = 2, [latex]{\sigma}_{x}[/latex] = 0.5, n = 50, and [latex]\displaystyle\overline{X}[/latex] ~ N(2, [latex]\frac{{0.5}}{{\sqrt{50}}}[/latex]). Find P(1.8 < [latex]\displaystyle\overline{x}[/latex] < 2.3). Solution: P(1.8 < [latex]\displaystyle\overline{x}[/latex] < 2.3) = normalcdf(1.8, 2.3, 2, [latex]\displaystyle\frac{{0.5}}{{\sqrt{50}}}[/latex]) = 0.9977. The probability that the mean time is between 1.8 hours and 2.3 hours is 0.9977. Try it The length of time taken on the SAT for a group of students is normally distributed with a mean of 2.5 hours and a standard deviation of 0.25 hours. A sample size of n = 60 is drawn randomly from the population. Find the probability that the sample mean is between two hours and three hours. P(2 < [latex]\overline{x}[/latex] < 3) = normalcdf(2, 3, 2.5, [latex]\frac{0.25}{\sqrt{60}}[/latex]) ≈ 1. To find percentiles for means on the calculator, follow these steps: 2nd DISTR, 3:invNorm, k = invNorm(area to the left of k, mean, [latex]\displaystyle\frac{{\text{standard deviation}}}{{\sqrt{\text{sample size}}}}[/latex]), where: k = the kth percentile, mean is the mean of the original distribution, standard deviation is the standard deviation of the original distribution, sample size = n. Example In a recent study reported Oct. 29, 2012 on the Flurry Blog, the mean age of tablet users is 34 years. Suppose the standard deviation is 15 years. Take a sample of size n = 100. What are the mean and standard deviation for the sample mean ages of tablet users? What does the distribution look like? Find the probability that the sample mean age is more than 30 years (the reported mean age of tablet users in this particular study). Find the 95th percentile for the sample mean age (to one decimal place). Solution: Since the sample mean tends to target the population mean, we have [latex]{\mu}_{\overline{x}}[/latex] = μ = 34. The standard deviation of the sample mean is given by [latex]\displaystyle\frac{{\sigma}}{{\sqrt{n}}}=\frac{{15}}{{\sqrt{100}}}=\frac{{15}}{{10}}={1.5}[/latex]. The central limit theorem states that for large sample sizes (n), the sampling distribution will be approximately normal.
The probability that the sample mean age is more than 30 is given by P([latex]\overline{x}[/latex] > 30) = normalcdf(30, E99, 34, 1.5) = 0.9962. Let k = the 95th percentile: k = invNorm(0.95, 34, [latex]\displaystyle\frac{{15}}{{\sqrt{100}}}[/latex]) = 36.5. Try it In an article on the Flurry Blog, a gaming marketing gap for men between the ages of 30 and 40 is identified. You are researching a startup game targeted at the 35-year-old demographic. Your idea is to develop a strategy game that can be played by men from their late 20s through their late 30s. Based on the article’s data, industry research shows that the average strategy player is 28 years old with a standard deviation of 4.8 years. You take a sample of 100 randomly selected gamers. If your target market is 29- to 35-year-olds, should you continue with your development strategy? You need to determine the probability for men whose mean age is between 29 and 35 years of age wanting to play a strategy game. P(29 < [latex]\displaystyle\overline{x}[/latex] < 35) = normalcdf(29, 35, 28, [latex]\displaystyle\frac{{4.8}}{{\sqrt{100}}}[/latex]) = 0.0186. You can conclude there is approximately a 2% chance that your game will be played by men whose mean age is between 29 and 35. Example The mean number of minutes for app engagement by a tablet user is 8.2 minutes. Suppose the standard deviation is one minute. Take a sample of 60. What are the mean and standard deviation for the sample mean number of minutes of app engagement by a tablet user? What is the standard error of the mean? Find the 90th percentile for the sample mean time for app engagement for a tablet user. Interpret this value in a complete sentence. Find the probability that the sample mean is between eight minutes and 8.5 minutes. Solution: [latex]\displaystyle{\mu}_{\overline{x}}={\mu}=8.2[/latex] and [latex]\displaystyle{\sigma}_{\overline{x}}=\frac{{\sigma}}{{\sqrt{n}}}=\frac{{1}}{{\sqrt{60}}} = 0.13[/latex]; this allows us to calculate the probability of sample means a particular distance from the mean, in repeated samples of size 60. Let k = the 90th percentile: k = invNorm(0.9, 8.2, [latex]\displaystyle\frac{{1}}{{\sqrt{60}}}[/latex]) = 8.37. This value indicates that 90 percent of the average app engagement times for tablet users are less than 8.37 minutes. P(8 < [latex]\displaystyle\overline{x}[/latex] < 8.5) = normalcdf(8, 8.5, 8.2, [latex]\displaystyle\frac{{1}}{{\sqrt{60}}}[/latex]) = 0.9293. Try it Cans of a cola beverage claim to contain 16 ounces. The amounts in a sample are measured and the statistics are n = 34, [latex]\displaystyle\overline{x}[/latex] = 16.01 ounces. If the cans are filled so that μ = 16.00 ounces (as labeled) and σ = 0.143 ounces, find the probability that a sample of 34 cans will have an average amount greater than 16.01 ounces. Do the results suggest that cans are filled with an amount greater than 16 ounces? We have P([latex]\displaystyle\overline{x}[/latex] > 16.01) = normalcdf(16.01, E99, 16, [latex]\displaystyle\frac{{0.143}}{{\sqrt{34}}}[/latex]) = 0.3417. Since there is a 34.17% probability that the average sample weight is greater than 16.01 ounces, we should be skeptical of the company’s claimed volume. If I am a consumer, I should be glad that I am probably receiving free cola. If I am the manufacturer, I need to determine if my bottling processes are outside of acceptable limits. References Baran, Daya. “20 Percent of Americans Have Never Used Email.” WebGuild, 2010.
Available online at http://www.webguild.org/20080519/20-percent-of-americans-have-never-used-email (accessed May 17, 2013). Data from The Flurry Blog, 2013. Available online at http://blog.flurry.com (accessed May 17, 2013). Data from the United States Department of Agriculture. Concept Review In a population whose distribution may be known or unknown, if the size (n) of samples is sufficiently large, the distribution of the sample means will be approximately normal. The mean of the sample means will equal the population mean. The standard deviation of the distribution of the sample means, called the standard error of the mean, is equal to the population standard deviation divided by the square root of the sample size (n). Formula Review The Central Limit Theorem for Sample Means: [latex]\displaystyle\overline{X}\sim N\left({\mu}_{x},\frac{{\sigma}_{x}}{\sqrt{n}}\right)[/latex]
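As an illustration of the Concept and Formula Review (my addition, not part of the original module), the following sketch simulates sample means from a deliberately non-normal population and compares them with the N(μ, σ/√n) prediction, reusing the app-engagement numbers from the example above:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma, n = 8.2, 1.0, 60           # app-engagement example values
pop = rng.exponential(1.0, 100_000)   # skewed, non-normal population...
pop = mu + sigma * (pop - pop.mean()) / pop.std()  # ...rescaled to mu, sigma

# draw 10,000 samples of size n and take their means
means = rng.choice(pop, size=(10_000, n)).mean(axis=1)
print(means.mean(), means.std())              # ~8.2 and ~1/sqrt(60) = 0.129
print(np.quantile(means, 0.9))                # empirical 90th percentile
print(norm.ppf(0.9, mu, sigma / np.sqrt(n)))  # theory: invNorm gives ~8.37
```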
When class-conditional distributions are Gaussian with equal covariance matrices, the optimal decision boundary is a hyperplane. This is the core concept behind Linear Discriminant Analysis (LDA). For any data point $x$ in $d$ dimensions, the class-conditional density for class $\omega_1$ is: $P(x|\omega_1) \sim N(\mu_1,\Sigma) = (2\pi)^{-d/2}|\Sigma|^{-1/2}\exp\left\{ -\tfrac12 (x-\mu_1)'\Sigma^{-1}(x-\mu_1)\right\}$ Similarly, $P(x|\omega_2) \sim N(\mu_2,\Sigma)$. Let's denote the prior probabilities of class 1 and 2 as $P(\omega_1)$ and $P(\omega_2)$, respectively. Equal Misclassification Cost: If the cost of misclassification is equal, we want to assign new data points such that the probability of misclassification is minimized. This decision rule assigns points to class 1 when $x$ satisfies: $P(x|\omega_1)P(\omega_1) > P(x|\omega_2)P(\omega_2)$ We can establish a similar rule for assigning points to class 2. To find the decision boundary, we need to find the values of $x$ that satisfy $P(x|\omega_1)P(\omega_1) = P(x|\omega_2)P(\omega_2)$ The values of $x$ that satisfy this equality lie on a line (or hyperplane in higher dimensions). Unequal Misclassification Cost: For the second question, you will need an additional term. Let's call $C(\omega_1)$ the cost of making an error when the data point was actually from class 1 and $C(\omega_2)$ the cost of making an error when the data point was from class 2. To minimize the cost of an error, your new decision rule would be to assign $x$ to class 1 when $x$ satisfies: $P(x|\omega_1)P(\omega_1)C(\omega_1) > P(x|\omega_2)P(\omega_2)C(\omega_2)$ As before, to find the decision boundary, you need to solve for $x$ when: $P(x|\omega_1)P(\omega_1)C(\omega_1) = P(x|\omega_2)P(\omega_2)C(\omega_2)$ This decision boundary will still be a line/hyperplane; however, it will have a different offset from the solution in the case of equal misclassification cost. (With a shared $\Sigma$, taking logs shows the boundary's normal vector is $\Sigma^{-1}(\mu_1-\mu_2)$ in both cases, so the costs only shift the boundary, not its orientation.)
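A minimal numerical sketch of this (my illustration; the helper name lda_boundary is hypothetical, not from any library) confirms that with a shared covariance the cost ratio only shifts the boundary's offset:

```python
import numpy as np

# Taking logs of P(x|w_k) P(w_k) C(w_k) gives a linear discriminant
#   g(x) = w.x + b, assign to class 1 when g(x) > 0, where
#   w = Sigma^{-1}(mu1 - mu2)
#   b = -0.5 (mu1 + mu2)' Sigma^{-1} (mu1 - mu2) + log(P(w1)C(w1) / (P(w2)C(w2)))
mu1, mu2 = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
prior1, prior2 = 0.5, 0.5

def lda_boundary(c1=1.0, c2=1.0):
    Sinv = np.linalg.inv(Sigma)
    w = Sinv @ (mu1 - mu2)
    b = (-0.5 * (mu1 + mu2) @ Sinv @ (mu1 - mu2)
         + np.log((prior1 * c1) / (prior2 * c2)))
    return w, b

w_eq, b_eq = lda_boundary()                   # equal costs
w_uneq, b_uneq = lda_boundary(c1=5.0, c2=1.0) # class-1 errors cost 5x more
print(w_eq, b_eq)      # same normal vector w ...
print(w_uneq, b_uneq)  # ... only the offset b changes
```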
Help me evaluate the infinite nested radical $1 + \sqrt {2 + \sqrt[3] {3+ \sqrt[4] {4 + \sqrt[5] {5 +\cdots}}}}$. Note by Bryan Lee Shi Yang, 4 years, 6 months ago.

Please enter the answer correct to 3 decimal places.

And also give full step solutions (as much as possible), because I haven't figured it out yet!

Wait... you changed the problem -_- I was working on the previous one :V Haha XD

I don't have any solution, but out of curiosity, just to find out, I approximated it and got 2.911. I made the equation $((((x-1)^2-2)^3-3)^4-4)^5\cdots=x$, which gives 2.911. I think that the answer is about 2.911. Ramanujan maybe?

The only thing I know is that it is less than 3, because of an identity by Ramanujan. Since it is above 2.9, we can find a good approximation. I don't think a closed form exists.

Please solve and post a nice solution to this. Awesome question!

>>> import math
>>> ans = 0.0
>>> for x in range(10, 0, -1):
...     ans = math.pow(x + ans, 1.0 / x)
...
>>> print("%.50f" % ans)
2.91163921624582400227154721505939960479736328125000

Starting the iteration at 12 or above gives the same printed value. Same method, but I did it manually.

Then does your number have a symbol, I mean a Greek symbol or anything?

Thanks for all your suggestions. I will post it as a question.
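Double precision only carries about 16 significant digits, so if one wants more reliable digits, the same backward iteration can be run in arbitrary precision; a sketch assuming the mpmath library is available (my addition, not from the thread):

```python
# High-precision version of the backward iteration for the nested radical
from mpmath import mp, mpf, root

mp.dps = 60                      # 60 decimal digits of working precision
ans = mpf(0)
for x in range(60, 0, -1):       # start deep enough that the tail is negligible
    ans = root(x + ans, x)       # x-th root of (x + ans)
print(ans)                       # 2.9116392162458..., stable well past float precision
```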