Definition:Tychonoff Separation Axioms

Definition

The Tychonoff separation axioms are a classification system for topological spaces. In general, each condition is stronger than the previous one, though there are some subtleties.

$T_0$ (Kolmogorov) Space

$\left({S, \tau}\right)$ is a Kolmogorov space or $T_0$ space if and only if:

$\forall x, y \in S$ such that $x \ne y$, either:
$\exists U \in \tau: x \in U, y \notin U$
or:
$\exists U \in \tau: y \in U, x \notin U$

$T_1$ (Fréchet) Space

$\left({S, \tau}\right)$ is a Fréchet space or $T_1$ space if and only if:

$\forall x, y \in S$ such that $x \ne y$, both:
$\exists U \in \tau: x \in U, y \notin U$
and:
$\exists V \in \tau: y \in V, x \notin V$

$T_2$ (Hausdorff) Space

$\left({S, \tau}\right)$ is a Hausdorff space or $T_2$ space if and only if:

$\forall x, y \in S, x \ne y: \exists U, V \in \tau: x \in U, y \in V: U \cap V = \varnothing$

That is: for any two distinct elements $x, y \in S$ there exist disjoint open sets $U, V \in \tau$ containing $x$ and $y$ respectively.

Semiregular Space

$\left({S, \tau}\right)$ is a semiregular space if and only if:

$\left({S, \tau}\right)$ is a Hausdorff ($T_2$) space
the regular open sets of $\tau$ form a basis for $\tau$.

$T_{2 \frac 1 2}$ (Completely Hausdorff) Space

$\struct {S, \tau}$ is a completely Hausdorff space or $T_{2 \frac 1 2}$ space if and only if:

$\forall x, y \in S, x \ne y: \exists U, V \in \tau: x \in U, y \in V: U^- \cap V^- = \O$

That is: $\struct {S, \tau}$ is a $T_{2 \frac 1 2}$ space if and only if every two points in $S$ are separated by closed neighborhoods.
$T_3$ Space

$T = \left({S, \tau}\right)$ is a $T_3$ space if and only if:

$\forall F \subseteq S: \complement_S \left({F}\right) \in \tau, y \in \complement_S \left({F}\right): \exists U, V \in \tau: F \subseteq U, y \in V: U \cap V = \varnothing$

Regular Space

$\struct {S, \tau}$ is a regular space if and only if:

$\struct {S, \tau}$ is a $T_3$ space
$\struct {S, \tau}$ is a $T_0$ space.

Urysohn Space

$\left({S, \tau}\right)$ is an Urysohn space if and only if:

For any distinct elements $x, y \in S$ (i.e. $x \ne y$), there exists an Urysohn function for $\left\{{x}\right\}$ and $\left\{{y}\right\}$.

$T_{3 \frac 1 2}$ Space

$\left({S, \tau}\right)$ is a $T_{3 \frac 1 2}$ space if and only if:

For any closed set $F \subseteq S$ and any point $y \in S$ such that $y \notin F$, there exists an Urysohn function for $F$ and $\left\{{y}\right\}$.

Tychonoff (Completely Regular) Space

$\struct {S, \tau}$ is a Tychonoff space or completely regular space if and only if:

$\struct {S, \tau}$ is a $T_{3 \frac 1 2}$ space
$\struct {S, \tau}$ is a $T_0$ space.

$T_4$ Space

$T = \left({S, \tau}\right)$ is a $T_4$ space if and only if:

$\forall A, B \in \complement \left({\tau}\right), A \cap B = \varnothing: \exists U, V \in \tau: A \subseteq U, B \subseteq V, U \cap V = \varnothing$

That is: for any two disjoint closed sets $A, B \subseteq S$ there exist disjoint open sets $U, V \in \tau$ containing $A$ and $B$ respectively.

Normal Space

$\left({S, \tau}\right)$ is a normal space if and only if:

$\left({S, \tau}\right)$ is a $T_4$ space
$\left({S, \tau}\right)$ is a $T_1$ space.

$T_5$ Space

$\left({S, \tau}\right)$ is a $T_5$ space if and only if:

$\forall A, B \subseteq S, A^- \cap B = A \cap B^- = \varnothing: \exists U, V \in \tau: A \subseteq U, B \subseteq V, U \cap V = \varnothing$

That is: $\left({S, \tau}\right)$ is a $T_5$ space when for any two separated sets $A, B \subseteq S$ there exist disjoint open sets $U, V \in \tau$ containing $A$ and $B$ respectively.

Completely Normal Space

$\struct {S, \tau}$ is a completely normal space if and only if:

$\struct {S, \tau}$ is a $T_5$ space
$\struct {S, \tau}$ is a $T_1$ space.

Perfectly $T_4$ Space

$T$ is a perfectly $T_4$ space if and only if:

$T$ is a $T_4$ space and every closed set in $T$ is a $G_\delta$ set.

Perfectly Normal Space

$\left({S, \tau}\right)$ is a perfectly normal space if and only if:

$\left({S, \tau}\right)$ is a perfectly $T_4$ space
$\left({S, \tau}\right)$ is a $T_1$ (Fréchet) space.

Naming Conventions

There are different ways of naming the separation axioms. The technique for this site is to follow the convention used in 1970: Lynn Arthur Steen and J. Arthur Seebach, Jr.: Counterexamples in Topology.
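As an aside, for finite spaces the lower separation axioms can be checked mechanically. A minimal Python sketch (the helper names are ours, not part of this entry), using the Sierpiński space as an example of a space that is $T_0$ but not $T_1$:

```python
from itertools import combinations

# Illustrative sketch: check T0 and T1 for a finite topological space,
# where tau is given as a list of open sets over the points S.
def is_T0(S, tau):
    # Some open set contains exactly one point of each distinct pair
    return all(any((x in U) != (y in U) for U in tau)
               for x, y in combinations(S, 2))

def is_T1(S, tau):
    # Each point of a pair lies in an open set missing the other point
    return all(any(x in U and y not in U for U in tau) and
               any(y in V and x not in V for V in tau)
               for x, y in combinations(S, 2))

# Sierpinski space: S = {0, 1}, open sets {}, {0}, {0, 1}
S = [0, 1]
tau = [set(), {0}, {0, 1}]
```

Here `is_T0(S, tau)` holds but `is_T1(S, tau)` fails: no open set contains the point 1 without also containing 0.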
Beware: this differs from the Separation axiom page at Wikipedia. The various naming schemes are inconsistent with each other and confusing, and no completely satisfactory convention has been defined. It is suggested that the system used here is more modern than others, but there is little evidence one way or another. An attempt has been made on the appropriate pages to mention the alternative names of these spaces, but this is inconsistent and possibly inaccurate. The important things to note are the conditions themselves and the relations between them. This is an area of mathematics in which research is ongoing, and the whole landscape may shift again completely in the near future.

Also known as

The Tychonoff separation axioms are also known as the Tychonoff conditions. Some sources refer to them as just the separation axioms. Some sources call them the $T_i$ axioms or just $T$-axioms.

Also see

Results about the separation axioms can be found here.

Source of Name

This entry was named for Andrey Nikolayevich Tychonoff.

Linguistic Note

The letter $T$ used to denote the Tychonoff separation axioms comes from the German Trennungsaxiom, which means separation axiom.

Sources

1970: Lynn Arthur Steen and J. Arthur Seebach, Jr.: Counterexamples in Topology ... (previous) ... (next): $\text{I}: \ \S 2$
1975: W.A. Sutherland: Introduction to Metric and Topological Spaces ... (previous) ... (next): $4.2$: Separation axioms
1989: Ephraim J. Borowski and Jonathan M. Borwein: Dictionary of Mathematics ... (previous) ... (next): Entry: T-axioms or Tychonoff conditions
Last time I explained monoidal categories, which are a framework for studying processes that we can compose and tensor. We can do a lot with monoidal categories! For example, if we have a monoidal category with morphisms $$ \Phi \colon a \to c \otimes d $$ $$ \Psi \colon d \otimes b \to e \otimes f $$ $$ \Theta \colon c \otimes e \to g $$ then by a combination of composing and tensoring we can cook up a morphism like this: which goes from \(a \otimes b\) to \(g \otimes f\). This sort of picture is called a string diagram, and we've seen plenty of them already. We don't need to use string diagrams to work with monoidal categories: Puzzle 281. Describe the morphism in the above string diagram using a more traditional formula involving composition \(\circ\), tensoring \(\otimes\), the associator \(\alpha\), and the left and right unitors \(\lambda\) and \(\rho\). However, they make it a lot easier and more intuitive! An interesting feature of string diagrams is that they hide the associator and the left and right unitors. You can't easily see them in these diagrams! However, when you turn a string diagram into a more traditional formula as in Puzzle 281, you'll see that you need to include associators and unitors to get a formula that makes sense. This may seem strange: if we need the associators and unitors in our formulas, why don't we need them in our diagrams? The ultimate answer is 'Mac Lane's strictification theorem'. This says that every monoidal category is equivalent to one where the associator and unitors are identity morphisms.
So, we can take any monoidal category and replace it by an equivalent one where the tensor product is 'strictly' associative, not just up to isomorphism: $$ (x \otimes y) \otimes z = x \otimes (y \otimes z) $$ and similarly, the left and right unit laws hold strictly: $$ I \otimes x = x = x \otimes I $$ This lets us stop worrying about associators and unitors. String diagrams are secretly doing this for us! Often people use Mac Lane's strictification theorem in a loose way, simply using it as an excuse to act like monoidal categories are all strict. That's actually not so bad, if you're not too obsessed with precision. To state Mac Lane's strictification theorem precisely, we first need to say exactly what it means for two monoidal categories to be 'equivalent'. For this we need to define a 'monoidal equivalence' between monoidal categories. Then, we define a strict monoidal category to be one where the associator and unitors are identity morphisms. Mac Lane's theorem then says that every monoidal category is monoidally equivalent to a strict one. If you're curious about the details, try my notes: All the necessary terms are defined, leading up to a precise statement of Mac Lane's strictification theorem at the very end. But this theorem takes quite a lot of work to prove, and I don't do that! You can see a sketch of the proof here: But there's more! If all we have is a monoidal category, the strings in our diagrams aren't allowed to cross. But last time I mentioned symmetric monoidal categories, where we have a natural isomorphism called the symmetry $$ \sigma_{x,y} \colon x \otimes y \to y \otimes x $$ that allows us to switch objects, obeying various rules. This lets us make sense of string diagrams where wires cross, like this: Puzzle 282. Describe the morphism in the above string diagram with a formula involving composition \(\circ\), tensoring \(\otimes\), the associator \(\alpha\), the left and right unitors \(\lambda,\rho\), and the symmetry \(\sigma\).
There is a version of Mac Lane's strictification theorem for symmetric monoidal categories, too! You can find it stated in my notes. This lets us replace any symmetric monoidal category by a strict one, where the associator and unitors, but not the symmetry, are identity morphisms. We really need the symmetry: it cannot in general be swept under the rug. That should be sort of obvious: for example, switching two numbers in an ordered pair really does something; we can't just say it's the identity. Again, please ask questions! I'm sketching some ideas that would take considerably longer to explain in full detail.
When reading about homotopy algebras (e.g. $L_\infty$-algebras, $A_\infty$-algebras), an $\infty$-morphism $f$ is called an $\infty$-quasi-isomorphism if $f_1$ is a quasi-isomorphism. Recall/Example ($A_\infty$-algebras): An $A_\infty$-morphism between two $A_\infty$-algebras $(A,\mathfrak{m})$ and $(A', \mathfrak{m}')$ (here $\mathfrak{m}$ and $\mathfrak{m}'$ are the structure maps) is a collection $\{f_k\}_{k\geq1}:(A,\mathfrak{m}) \rightarrow (A',\mathfrak{m}')$ of degree zero (degree preserving) multilinear maps\begin{equation*}f_k: A^{\otimes k}\rightarrow A', \hspace{1cm}k\geq 1\end{equation*}that satisfy the following relation for $n\geq1$ (with signs depending on the chosen conventions):\begin{equation*}\sum_{k+l=n+1}\sum^{k-1}_{i=0} (-1)^{|a_1|+\dots+|a_i|}f_k(a_1, \dots, a_i, m_l(a_{i+1}, \dots, a_{i+l}), a_{i+l+1}, \dots, a_n)=\sum_{\substack{j \geq 1,\ k_1, \dots, k_j \geq 1 \\ k_1+\cdots+k_j= n}} m'_j(f_{k_1}(a_1, \dots, a_{k_1}), f_{k_2}(a_{k_1+1}, \dots, a_{k_1+k_2}), \dots, f_{k_j}(a_{k_{j-1}+1}, \dots, a_n))\end{equation*} Furthermore, we call such a morphism an $A_\infty$-quasi-isomorphism if $f_1$ induces an isomorphism in cohomology. Q1: Why do we normally omit the higher arity maps when talking about quasi-isomorphisms? Q2: Would it be possible to have a weak equivalence that only appears in higher arity maps? Q3: In case we only care about $f_1$, wouldn't that imply an equivalence at the level of homotopy categories between, for example, Ho(DGLA) and Ho(L$_\infty$), as all the higher arity maps between $A$ and $B$ in L$_\infty$ with the same $f_1$ give an isomorphism in Ho(L$_\infty$)?
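Writing out the lowest relations makes the special role of $f_1$ explicit (shown schematically, with signs suppressed since they depend on conventions):

```latex
\begin{align*}
n=1:&\quad f_1(m_1(a_1)) = m'_1(f_1(a_1)),\\
n=2:&\quad f_1(m_2(a_1,a_2)) \pm f_2(m_1(a_1), a_2) \pm f_2(a_1, m_1(a_2))\\
    &\qquad = m'_1(f_2(a_1,a_2)) + m'_2(f_1(a_1), f_1(a_2)).
\end{align*}
```

So $f_1$ is a chain map with respect to the differentials $m_1$, $m'_1$ (hence it makes sense to ask that it be a quasi-isomorphism), and $f_2$ is a chain homotopy measuring the failure of $f_1$ to respect the binary products strictly.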
Occasionally, we receive requests for a technical paper about regression modeling beyond our regular NumXL support, in order to delve more deeply into the mathematical formulation of MLR. We are always happy to address user requests, so we decided to share our internal technical notes with you. In this paper, we'll go over a simple, yet fundamental and often-asked question about forecast error in a regression model.

Background

Let's assume the true underlying model or process is defined as follows:$$ y = \alpha + \beta_1 x_1+\beta_2 x_2 + \cdots + \beta_k x_k + \epsilon $$ Where: $y$ is the dependent (response) variable. $\{x_1,x_2,\cdots,x_k \}$ are the independent (explanatory) variables. $\alpha$ is the real intercept (constant). $\beta_j$ is the coefficient (loading) of the j-th independent variable. $\{\epsilon\}$ is a set of independent, identically and normally distributed errors (residuals): $$ \epsilon \sim \textrm{i.i.d.} \sim N(0,\sigma^2) $$ In practice, the true underlying model is unknown. However, with finite sample data and an OLS or other procedure, we can estimate the values of the coefficients (aka loadings) for the different input (explanatory) variables. Let's assume we have a sample dataset with N observations of $\{x_1,x_2,\cdots,x_k \}$. Using an OLS method, we arrive at the following regression model:$$ y = \hat{\alpha} + \hat{\beta_1} x_1 + \hat{\beta_2}x_2 + \cdots + \hat{\beta_k}x_k + u $$ Where: $\hat{\beta_j}$ is the OLS estimate for the j-th coefficient (loading). $\hat{\alpha}$ is the OLS estimate of the intercept. $\{u\}$ are the regression residuals. The residuals are homoscedastic (i.e. stable variance) and uncorrelated with any of the input variables: $$ E[u]=0$$ $$ E[u^2] = s^2 $$ $$ E[u\times \underset{1\leq i \leq k}{x_i}] = 0$$

Forecast

In practice, the true regression model is hidden or unknown. We will revert to the estimated regression model to perform a forecast.
Mathematically, the conditional forecast can be expressed as follows:$$ \hat{y} = E[ Y | x_1,x_2,\cdots, x_k ] = \hat{\alpha} + \hat{\beta_1}x_1 + \hat{\beta_2}x_2 +\cdots + \hat{\beta_k}x_k $$ As a result, the errors in the forecast originate from two distinct sources: Residuals ($\{\epsilon\}$ or $\{u\}$). Errors in the estimated coefficients' values (i.e. using $\hat{\beta_j}$ instead of $\beta_j$). Using an OLS procedure, each estimated coefficient $\hat{\beta_j}$ is normally distributed. Nevertheless, the errors in the values of the whole set of parameters $\hat{\beta_1},\cdots,\hat{\beta_k}$ are correlated. So, while we can ignore the covariance terms when we examine the statistical significance of a single coefficient, we need to factor in their overall/aggregate effect for the forecast error. As a result, the forecast variance (aka error squared) can be expressed as follows:$$Var[y-\hat{y}| x_{1,m},x_{2,m},\cdots x_{k,m}]=\sigma^2 \left(1+\frac{1}{N}+\frac{\sum_{j=1}^k (x_{j,m}-\bar{x}_j)^2}{\sum_{i=1}^N\sum_{j=1}^k(x_{j,i}-\bar{x}_j)^2} \right) $$ However, the variance of the residuals ($\sigma^2$) in the true model is unknown, so we use the variance of the error terms ($\hat{\sigma}^2$) of the estimated regression model: $$\hat{\sigma}^2 = E[u^2]=E[(y-\hat{\alpha} - \hat{\beta_1}x_1-\hat{\beta_2}x_2-\cdots - \hat{\beta_k}x_k)^2]=\frac{SSE}{N-k-1}=\frac{\sum_{i=1}^N u_i^2}{N-k-1}$$ Overall, the MLR forecast error squared is expressed as follows:$$Var[y-\hat{y}| x_{1,m},x_{2,m},\cdots x_{k,m}]=\frac{SSE}{N-k-1} \left(1+\frac{1}{N}+\frac{\sum_{j=1}^k (x_{j,m}-\bar{x}_j)^2}{\sum_{i=1}^N\sum_{j=1}^k(x_{j,i}-\bar{x}_j)^2} \right)$$ Now, let's take a close look at the formula above and try to explain the different terms: $\hat{\sigma}^2$ is the estimated variance of the true regression model residuals. This value is constant and independent of the X-value(s) of the target data point. $\frac{\hat{\sigma}^2}{N}$ is the error in the estimated intercept (aka constant).
This value is constant and independent of the X-values of the target data point. The last term is proportional to the squared (Euclidean) distance of the target data point from the center of the sample data set; it is zero at the sample center point $(\bar{x}_1,\bar{x}_2,\cdots,\bar{x}_k)$. In effect, the forecast variance is higher for data points $(x_{1,m},x_{2,m},\cdots,x_{k,m})$ that are farther from the center of the input sample data set, and the forecast error is smallest at the sample center point $(\bar{x}_1,\bar{x}_2,\cdots,\bar{x}_k)$.
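The forecast-variance formula above can be evaluated directly on data. A minimal sketch with simulated data (variable names and the simulated model are ours, purely for illustration):

```python
import numpy as np

# Hypothetical small dataset: N observations, k explanatory variables
rng = np.random.default_rng(0)
N, k = 30, 2
X = rng.random((N, k))
y = 1.0 + X @ np.array([2.0, -1.0]) + 0.1 * rng.standard_normal(N)

# OLS fit with an intercept column
A = np.column_stack([np.ones(N), X])
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta_hat
sse = resid @ resid
sigma2_hat = sse / (N - k - 1)   # estimated residual variance SSE/(N-k-1)

# Forecast variance at a target point x_m, following the formula in the text
x_m = np.array([0.9, 0.1])
xbar = X.mean(axis=0)
num = ((x_m - xbar) ** 2).sum()          # squared distance from sample center
den = ((X - xbar) ** 2).sum()            # total sample dispersion
forecast_var = sigma2_hat * (1 + 1.0 / N + num / den)
```

Note that `forecast_var` reduces to $\hat{\sigma}^2 (1 + 1/N)$ when `x_m` equals the sample center, and grows as the target point moves away from it, exactly as described above.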
I'm using Mathematica to calculate the inverse Laplace transform (ILT) of various functions -- functions that I do not know ahead of time. I was stepping through the process and decided to write my own code to simplify the expressions and calculate the inverse transform, but I stumbled upon the general equation that it appears Mathematica uses to calculate the transforms. If we have a function $$Y(s)=\frac{P(s)}{Q(s)}$$ where $P(s)$ and $Q(s)$ are both polynomials and $P(s)$ is of lower order than $Q(s)$, I have been able to calculate the ILT by evaluating $$y(t)=\sum_{i=1}^n\frac{P(c_i)e^{c_i t}}{Q^{'}(c_i)}$$ where $c_i$ is a root of $Q(s)$ and $n$ is the total number of roots. According to Wikipedia, the formula $$\mathrm{Res}=\frac{P(c)}{Q^{'}(c)}$$ is an acceptable way to calculate the residue at these poles as long as (according to Wikipedia) $P(s)$ and $Q(s)$ are holomorphic, $Q(c)=0$, and $Q^{'}(c)\neq0$. Is there anything else I should be aware of? My irreducible quadratics should always be holomorphic, correct? I have very little knowledge of complex calculus. Second, I did find a site (Theorem 12.21) that uses this residue formula for irreducible polynomials; however, they go back and plug the result of the residue equation into a formula that is still in the s-domain, do partial fraction decomposition, then take the ILT. Why is that necessary if the above equation for $y(t)$ works? If someone could help me understand I would greatly appreciate it. I can provide an example of the types of functions I'm looking at if that would be useful (they are basic).
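For what it's worth, the residue formula is easy to sanity-check numerically against a known transform pair. A sketch (our own example, assuming $Q$ has only simple roots; for repeated roots the formula needs higher-order residues):

```python
import numpy as np

# Invert Y(s) = P(s)/Q(s) via y(t) = sum over roots c of P(c) e^{c t} / Q'(c).
# Example: Y(s) = 1 / (s^2 + 3s + 2) = 1 / ((s+1)(s+2)),
# whose known inverse transform is e^{-t} - e^{-2t}.
P = np.poly1d([1.0])             # P(s) = 1
Q = np.poly1d([1.0, 3.0, 2.0])   # Q(s) = s^2 + 3s + 2
dQ = Q.deriv()                   # Q'(s) = 2s + 3
roots = Q.roots                  # simple roots: -1, -2

def y(t):
    # Sum of residues; imaginary parts cancel when P, Q have real coefficients
    return sum(P(c) * np.exp(c * t) / dQ(c) for c in roots).real
```

Comparing `y(t)` against `np.exp(-t) - np.exp(-2*t)` over a grid of `t` values shows agreement to machine precision for this example.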
Before the question, I need to mention some necessary definitions. The rapidity is defined as: $$y=\frac{1}{2}\ln\frac{E+p_z}{E-p_z}=\frac{1}{2}\ln\frac{1+v_z}{1-v_z}=\tanh^{-1}(v_z)$$ where $v_z=p_z/E$ is the velocity along the $z$ direction, so that $v_z=\tanh y$. We define the transverse mass $m_T$ and the longitudinal boost factor $\gamma_z$: $$m^2_T=m^2+p^2_T$$ $$\gamma_z=\frac{1}{\sqrt{1-v^2_z}}=\frac{E}{\sqrt{E^2-p^2_z}}=\frac{E}{\sqrt{m^2+p^2_T}}=\frac{E}{m_T}=\cosh y$$ It is easy to show that: $$E=m_T\gamma_z=m_T\cosh y$$ $$p_z=m_T\gamma_zv_z=m_T\sinh y$$ We note that under a longitudinal boost, both $p_T$ and $m_T$ remain constant. In high energy physics, one usually uses the Lorentz-invariant particle spectrum $EdN/d^3p$.$$p_z = m_T\sinh y \Rightarrow dp_z=m_T\cosh y\,dy=E\,dy\Rightarrow\frac{dp_z}{E}=dy$$Therefore,$$\frac{d^3p}{E}=\frac{dp_z\,d^2p_T}{E}=dy\,d^2p_T=dy\,p_T\,dp_T\,d\phi_p$$The above Lorentz-invariant spectrum is often written as$$E\frac{dN}{d^3p}=\frac{dN}{dy\,d^2p_T}=\frac{dN}{dy\,p_T\,dp_T\,d\phi_p}=\frac{dN}{dy\,m_T\,dm_T\,d\phi_p}$$One can see that under a longitudinal Lorentz boost, $d^2p_T$ and $dy$ remain invariant; therefore, $d^3p/E = d^2p_T\,dy$ is a Lorentz-invariant quantity. My question is: is there a different method to show that $E\frac{dN}{d^3p}$ is Lorentz-invariant?
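The kinematic identities above are easy to verify numerically for an arbitrary four-momentum; a quick sketch (the test momentum values are arbitrary, chosen by us):

```python
import numpy as np

# Arbitrary test four-momentum (units where c = 1)
m, px, py, pz = 0.938, 0.3, -0.4, 2.0

pT = np.hypot(px, py)                         # transverse momentum
E = np.sqrt(m**2 + px**2 + py**2 + pz**2)     # on-shell energy
mT = np.sqrt(m**2 + pT**2)                    # transverse mass
y = 0.5 * np.log((E + pz) / (E - pz))         # rapidity

# Checks: E = mT cosh y, pz = mT sinh y, vz = tanh y
```

All three identities hold to floating-point precision, for any choice of `m`, `px`, `py`, `pz` with $|p_z| < E$.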
Last time we reached the technical climax of this chapter: constructing the category of enriched profunctors. If you found that difficult, you'll be relieved to hear it's downhill from here on, at least in Chapter 4. We will now apply all our hard work. One application is to collaborative design. Fong and Spivak discuss it in their book, based on this paper by a student of Spivak: Censi's work is based on \(\textbf{Bool}\)-enriched profunctors, also known as feasibility relations. We introduced these in Lecture 56. Remember, a feasibility relation \( \Phi : X\nrightarrow Y \) is a monotone function $$ \Phi : X^{\text{op}} \times Y \to \mathbf{Bool} .$$If \( \Phi(x,y) = \text{true}\), we say \(x\) can be obtained given \(y\). The idea is that we use elements of \( X\) to describe 'requirements' - things you want - and elements of \(Y\) to describe 'resources' - things you have. The idea of Andrea Censi's theory is that we can compute the design requirements of a complex system from those of its parts using 'co-design diagrams'. These are really pictures of big complicated feasibility relations, like this: This big complicated feasibility relation is built from simpler ones in various ways. Each wire in this diagram is labeled with the name of a preorder, and each little box is itself a feasibility relation between preorders. We described how to compose feasibility relations in Lecture 58, and that corresponds to feeding the outputs of one little box into another. But there are other things going on in this picture, like boxes sitting side by side, and wires that bend around backwards! This is what I need to explain - it may take a couple of lectures to do this. Instead of diving into the mathematical details today, let me quote the book's general explanation of the diagram above: As an example, consider the design problem of creating a robot to carry some load at some velocity.
The top-level planner breaks the problem into three design teams: team chassis, team motor, and team battery. Each of these teams could break up into multiple parts and the process repeated, but let's remain at the top level and consider the resources produced and the resources required by each of our three teams. The chassis in some sense provides all the functionality—it carries the load at the velocity—but it requires some things in order to do so. It requires money, of course, but more to the point it requires a source of torque and speed. These are supplied by the motor, which in turn needs voltage and current from the battery. Both the motor and the battery cost money, but more importantly they need to be carried by the chassis: they become part of the load. A feedback loop is created: the chassis must carry all the weight, even that of the parts that power the chassis. A heavier battery might provide more energy to power the chassis, but is the extra power worth the heavier load? In the picture, each part—chassis, motor, battery, and robot—is shown as a box with ports on the left and right. The functionalities, or resources produced by the part, are on the left of the box, and the resources required by the part are on the right. The boxes marked \(\Sigma\) correspond to summing inputs. These boxes are not to be designed, but we will see later that they fit easily into the same conceptual framework. Note also the \(\leq\)'s on each wire; they indicate that if box \(A\) requires a resource that box \(B\) produces, then \(A\)'s requirement must be less than or equal to \(B\)'s production. Next time we'll get into more detail. But to wrap up for today, here are a few puzzles! We've been talking about enriched functors and also enriched profunctors. How are they related? The first cool fact is that any enriched functor gives two enriched profunctors: one going each way. Puzzle 201.
Show that any \(\mathcal{V}\)-enriched functor \(F: \mathcal{X} \to \mathcal{Y}\) gives a \(\mathcal{V}\)-enriched profunctor $$ \hat{F} \colon \mathcal{X} \nrightarrow \mathcal{Y} $$ defined by $$ \hat{F} (x,y) = \mathcal{Y}(F(x), y ) .$$ Puzzle 202. Show that any \(\mathcal{V}\)-enriched functor \(F: \mathcal{X} \to \mathcal{Y}\) gives a \(\mathcal{V}\)-enriched profunctor $$ \check{F} \colon \mathcal{Y} \nrightarrow \mathcal{X} $$ defined by $$ \check{F} (y,x) = \mathcal{Y}(y,F(x)) .$$These two constructions have funny names. \(\hat{F} \colon \mathcal{X} \nrightarrow \mathcal{Y}\) is called the companion of \(F\) and \( \check{F} \colon \mathcal{Y} \nrightarrow \mathcal{X} \), going back, is called the conjoint of \(F\). If you have trouble remembering these, remember that a 'companion' is like a fellow traveler, going the same way as our original functor. The word 'conjoint' should remind you of 'adjoint', which means something going back the other way. In fact there's a relationship between adjoints and conjoints! Puzzle 203. We say a \(\mathcal{V}\)-enriched functor \(F: \mathcal{X} \to \mathcal{Y}\) is a left adjoint of a \(\mathcal{V}\)-enriched functor \(G: \mathcal{Y} \to \mathcal{X}\) if $$ \mathcal{Y}(F(x), y) = \mathcal{X}(x,G(y)) $$for all objects \(x\) of \(\mathcal{X}\) and \(y\) of \(\mathcal{Y}\). In this situation we also say \(G\) is the right adjoint of \(F\). Show that \(F\) is the left adjoint of \(G\) if and only if $$ \hat{F} = \check{G} . $$ Pretty!
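For concreteness, feasibility relations between finite preorders, and their composition, can be sketched in a few lines of code (the encoding and names here are our own, not from the lectures):

```python
# A feasibility relation Phi: X -|-> Y between finite preorders is a
# Bool-valued function Phi(x, y) meaning "x can be obtained given y".
# Composition is "matrix multiplication" in Bool: OR of ANDs over the
# middle preorder.

def compose(phi, psi, Y):
    """(psi after phi)(x, z) = OR over y in Y of (phi(x, y) AND psi(y, z))."""
    return lambda x, z: any(phi(x, y) and psi(y, z) for y in Y)

# Example: X = Y = Z = {0, 1, 2} with the usual order; "x obtainable
# given y" iff x <= y. Both relations are monotone X^op x Y -> Bool.
X = Y = Z = [0, 1, 2]
phi = lambda x, y: x <= y
psi = lambda y, z: y <= z
theta = compose(phi, psi, Y)   # again "x obtainable given z" iff x <= z
```

Here `theta(0, 2)` is true (a small requirement is feasible given a large resource) while `theta(2, 0)` is false, matching the intuition about requirements and resources.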
On the DNA Computer Binary Code

In any finite set we can define a partial order in different ways. But here, a partial order is defined on the set of four DNA bases in such a manner that a Boolean lattice structure is obtained. A Boolean lattice is an algebraic structure that captures essential properties of both set operations and logic operations. This partial order is defined based on the physico-chemical properties of the DNA bases: hydrogen bond number and chemical type, purine {A, G} or pyrimidine {U, C}. This physico-mathematical description permits the study of the genetic information carried by the DNA molecules as a computer binary code of zeros (0) and ones (1).

1. Boolean lattice of the four DNA bases

In any four-element Boolean lattice every element is comparable to every other, except two of them that are, nevertheless, complementary. Consequently, to build a four-base Boolean lattice it is necessary for the bases with the same number of hydrogen bonds in the DNA molecule and of different chemical types to be complementary elements in the lattice. In other words, the complementary bases in the DNA molecule (G≡C and A=T, or A=U during the translation of mRNA) should be complementary elements in the Boolean lattice. Thus, there are four possible lattices, each one with a different base as the maximum element.

2. Boolean (logic) operations in the set of DNA bases

The Boolean algebra on the set of elements X will be denoted by $(B(X), \vee, \wedge)$. Here the operators $\vee$ and $\wedge$ represent the classical "OR" and "AND" operations, term by term. From the Boolean algebra definition it follows that this structure is (among other things) a lattice in which any two elements $\alpha$ and $\beta$ have upper and lower bounds. In particular, the least upper bound of the elements $\alpha$ and $\beta$ is the element $\alpha\vee\beta$ and the greatest lower bound is the element $\alpha\wedge\beta$.
The resulting partially ordered set is a Boolean lattice. In every Boolean algebra $(B(X), \vee, \wedge)$, for any two elements $\alpha,\beta \in X$ we have $\alpha \le \beta$ if and only if $\neg\alpha\vee\beta=1$, where the symbol "$\neg$" stands for logic negation. If the last equality holds, then it is said that $\beta$ is deduced from $\alpha$. Furthermore, if $\alpha \le \beta$ or $\alpha \ge \beta$, the elements $\alpha$ and $\beta$ are said to be comparable; otherwise, they are said to be not comparable. In the set of four DNA bases, we can build twenty-four isomorphic Boolean lattices [1]. Herein, we focus our attention on the one described in reference [2], where the DNA bases G and C are taken as the minimum and maximum elements, respectively, in the Boolean lattice. The logic operations in this DNA computer code are given in the following tables:

$\vee$ | G A U C
G | G A U C
A | A A C C
U | U C U C
C | C C C C

$\wedge$ | G A U C
G | G G G G
A | G A G A
U | G G U U
C | G A U C

It is well known that all Boolean algebras with the same number of elements are isomorphic. Therefore, our algebra $(B(X), \vee, \wedge)$ is isomorphic to the Boolean algebra $(\mathbb{Z}_2^2, \vee, \wedge)$, where $\mathbb{Z}_2 = \{0,1\}$. Then, we can represent this DNA Boolean algebra by means of the correspondence: $G \leftrightarrow 00$; $A \leftrightarrow 01$; $U \leftrightarrow 10$; $C \leftrightarrow 11$. So, in accordance with the operation tables:

$A \vee U = C \leftrightarrow 01 \vee 10 = 11$
$U \wedge G = G \leftrightarrow 10 \wedge 00 = 00$
$G \vee C = C \leftrightarrow 00 \vee 11 = 11$

A Boolean lattice has in correspondence a directed graph called a Hasse diagram, where two nodes (elements) $\alpha$ and $\beta$ are connected with a directed edge from $\alpha$ to $\beta$ (or from $\beta$ to $\alpha$) if and only if $\alpha \le \beta$ (respectively $\alpha \ge \beta$) and there is no other element between $\alpha$ and $\beta$. 3.
The Genetic Code Boolean Algebras

Boolean algebras of codons are derived, explicitly, as the direct product $C(X) = B(X) \times B(X) \times B(X)$. These algebras are isomorphic to the dual Boolean algebras $(\mathbb{Z}_2^6, \vee, \wedge)$ and $(\mathbb{Z}_2^6, \wedge, \vee)$ induced by the isomorphism $B(X) \cong \mathbb{Z}_2^2$, where $X$ runs over the twenty-four possible ordered sets of four DNA bases [1]. For example:

CAG $\vee$ AUC = CCC $\leftrightarrow$ 110100 $\vee$ 011011 = 111111
ACG $\wedge$ UGA = GGG $\leftrightarrow$ 011100 $\wedge$ 100001 = 000000
$\neg$(CAU) = GUA $\leftrightarrow$ $\neg$(110110) = 001001

The Hasse diagram for the corresponding Boolean algebra derived from the direct product of the Boolean algebra of four DNA bases given in the above operation tables is: In the Hasse diagram, chains and anti-chains can be identified. A Boolean lattice subset is called a chain if any two of its elements are comparable; if, on the contrary, no two of its elements are comparable, the subset is called an anti-chain. In the Hasse diagram of codons shown in the figure, all chains of maximal length have the same minimum element GGG and the same maximum element CCC. Two codons are in the same chain of maximal length if and only if they are comparable, for example the chain: GGG $\leftrightarrow$ GAG $\leftrightarrow$ AAG $\leftrightarrow$ AAA $\leftrightarrow$ AAC $\leftrightarrow$ CAC $\leftrightarrow$ CCC The Hasse diagram symmetry reflects the role of hydrophobicity in the distribution of codons assigned to each amino acid. In general, codons that code for amino acids with extreme hydrophobic differences are in different chains of maximal length. In particular, codons with U as the second base will appear in chains of maximal length whereas codons with A as the second base will not.
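The codon algebra above can be reproduced mechanically from the 2-bit encoding; a small Python sketch (our own helper names, purely illustrative):

```python
# DNA Boolean algebra under the encoding G=00, A=01, U=10, C=11:
# OR and AND act bitwise per base, and negation flips both bits,
# which complements the base (G <-> C, A <-> U).
ENC = {'G': 0b00, 'A': 0b01, 'U': 0b10, 'C': 0b11}
DEC = {v: k for k, v in ENC.items()}

def codon_or(c1, c2):
    return ''.join(DEC[ENC[a] | ENC[b]] for a, b in zip(c1, c2))

def codon_and(c1, c2):
    return ''.join(DEC[ENC[a] & ENC[b]] for a, b in zip(c1, c2))

def codon_not(c):
    return ''.join(DEC[ENC[b] ^ 0b11] for b in c)
```

Running the three worked examples from the text (CAG ∨ AUC, ACG ∧ UGA, ¬CAU) reproduces CCC, GGG, and GUA respectively.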
For that reason, it will be impossible to obtain hydrophobic amino acids with codons having U in the second position through deductions from hydrophilic amino acids with codons having A in the second position. There are twenty-four Hasse diagrams of codons, corresponding to the twenty-four genetic-code Boolean algebras. These algebras form a group isomorphic to the symmetric group of degree four, $S_4$ [1]. In summary, the DNA binary code is not arbitrary, but subject to logic operations with an underlying biophysical meaning.

References

[1] Sánchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527-60.
[2] Sánchez R, Morgado E, Grau R. A genetic code Boolean structure. I. The meaning of Boolean deductions. Bull Math Biol, 2005, 67:1-14.
I have a mixture model for which I want to find the maximum likelihood estimator, given a set of data $x$ and a set of partially observed data $z$. I have implemented both the E-step (calculating the expectation of $z$ given $x$ and current parameters $\theta^k$) and the M-step (minimizing the negative log-likelihood given the expected $z$). As I have understood it, the likelihood is increasing at every iteration, which means that the negative log-likelihood must be decreasing at every iteration. However, as I iterate, the algorithm does not in fact produce decreasing values of the negative log-likelihood. Instead, it may both decrease and increase. For instance, these were the values of the negative log-likelihood until convergence: Is there something here that I've misunderstood? Also, for simulated data, when I perform the maximum likelihood estimation with the true latent (unobserved) variables, I get a close to perfect fit, indicating there are no programming errors. The EM algorithm, however, often converges to clearly suboptimal solutions, particularly for a specific subset of the parameters (i.e. the proportions of the classifying variables). It is well known that the algorithm may converge to local minima or stationary points; is there a conventional search heuristic or similar to increase the chance of finding the global minimum (or maximum)? For this particular problem I believe there are many misclassifications because, of the bivariate mixture, one of the two distributions takes infinite values with probability one (it is a mixture of lifetimes where the true lifetime is found by $T=z T_0 + (1-z)\infty$, where $z$ indicates the belonging to either distribution). The indicator $z$ is of course censored in the data set. I added a second figure for when I start with the theoretical solution (which should be close to the optimal). However, as can be seen, the likelihood and parameters diverge from this solution into one that is clearly inferior.
edit: The full data are of the form $\mathbf{x_i}=(t_i,\delta_i,L_i,\tau_i,z_i)$ where $t_i$ is an observed time for subject $i$, $\delta_i$ indicates whether the time is associated with an actual event or with right censoring (1 denotes an event and 0 denotes right censoring), $L_i$ is the truncation time of the observation (possibly 0) with truncation indicator $\tau_i$, and finally $z_i$ is the indicator of the population the observation belongs to (since it is bivariate we only need to consider 0's and 1's). For $z=1$ we have density function $f_z(t)=f(t|z=1)$, with the associated tail distribution function $S_z(t)=S(t|z=1)$. For $z=0$ the event of interest will not occur. Although there is no $t$ associated with this distribution, we define it to be $\infty$, thus $f(t|z=0)=0$ and $S(t|z=0)=1$. This also yields the following full mixture distribution: $f(t) = \sum_{i=0}^{1}p_i f(t|z=i) = pf(t|z=1)$ and $S(t) = 1 - p + pS_z(t)$ We proceed to define the general form of the likelihood: $ L(\theta;\mathbf{x_i}) = \prod_i \frac{f(t_i;\theta)^{\delta_i}S(t_i;\theta)^{1-\delta_i}}{S(L_i)^{\tau_i}}$ Now, $z$ is only partially observed: it is known when $\delta=1$, otherwise it is unknown. The full likelihood becomes $ L(\theta,p;\mathbf{x_i}) = \prod_i \frac{\big((p f_z(t_i;\theta))^{z_i}\big)^{\delta_i}\big((1-p)^{(1-z_i)}(p S_z(t_i;\theta))^{z_i}\big)^{1-\delta_i}}{\big((1-p)^{(1-z_i)}(p S_z(L_i;\theta))^{z_i}\big)^{\tau_i}}$ where $p$ is the weight of the corresponding distribution (possibly associated with some covariates and their respective coefficients through some link function). In most literature this is simplified to the following log-likelihood $\sum_i \Big( z_i \ln p + (1-z_i) \ln(1-p) - \tau_i\big(z_i \ln p + (1-z_i)\ln(1-p)\big) + \delta_i z_i \ln f_z(t_i;\theta) + (1-\delta_i) z_i \ln S_z(t_i;\theta) - \tau_i z_i \ln S_z(L_i;\theta)\Big)$ For the M-step, this function is maximized, although not in its entirety in one maximization step.
Instead we note that it can be separated into parts $l(\theta,p; \cdot) = l_1(\theta,\cdot) + l_2(p,\cdot)$. For the $(k+1)$:th E-step, we must find the expected value of the (partially) unobserved latent variables $z_i$. We use the fact that for $\delta=1$, we have $z=1$. $E(z_i|\mathbf{x_i},\theta^{(k)},p^{(k)}) = \delta_i + (1-\delta_i) P(z_i=1|\mathbf{x_i};\theta^{(k)},p^{(k)})$ Here, by Bayes' rule, $P(z_i=1|\mathbf{x_i};\theta^{(k)},p^{(k)}) =\frac{P(\mathbf{x_i}|z_i=1;\theta^{(k)},p^{(k)})P(z_i=1;\theta^{(k)},p^{(k)})}{P(\mathbf{x_i};\theta^{(k)},p^{(k)})}$ which gives us $P(z_i=1|\mathbf{x_i};\theta^{(k)},p^{(k)})=\frac{pS_z(t_i;\theta^{(k)})}{1 - p + pS_z(t_i;\theta^{(k)})}$ (Note that here $\delta_i=0$, so there is no observed event, and thus the probability of the data $\mathbf{x_i}$ is given by the tail distribution function.)
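To make the E/M loop concrete, here is a minimal Python sketch, assuming (purely for illustration, none of this is fixed by the question) an exponential susceptible lifetime $S_z(t)=e^{-\lambda t}$, right censoring only, and no truncation:

```python
import numpy as np

def em_cure_model(t, delta, n_iter=500, tol=1e-8):
    """EM for a two-component 'cure' mixture: with probability p a subject is
    susceptible (z=1) with exponential lifetime S_z(t)=exp(-lam*t); with
    probability 1-p the event never occurs (S(t|z=0)=1).  Right censoring
    only (delta=1 marks an observed event); truncation is ignored here."""
    t, delta = np.asarray(t, float), np.asarray(delta, float)
    p, lam = 0.5, 1.0 / t.mean()                  # crude starting values
    for _ in range(n_iter):
        Sz = np.exp(-lam * t)
        # E-step: E[z|x] = delta + (1-delta) * p*S_z / (1 - p + p*S_z)
        w = delta + (1.0 - delta) * p * Sz / (1.0 - p + p * Sz)
        # M-step: closed form for this exponential component
        p_new = w.mean()
        lam_new = (w * delta).sum() / (w * t).sum()
        done = abs(p_new - p) + abs(lam_new - lam) < tol
        p, lam = p_new, lam_new
        if done:
            break
    return p, lam
```

The M-step here is closed-form only because of the exponential assumption; with a richer $f_z$ the $l_1(\theta,\cdot)$ part would be maximized numerically, as in the question.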
Answer $$x=\frac{5\pi }{4}+2\pi n,\:x=\frac{7\pi }{4}+2\pi n$$ Work Step by Step We solve the equation using the properties of trigonometric functions. Note that there is a general solution, since the sine function is periodic and passes through a given value of $y$ many times. Solving, we find: $$\sin \left(x\right)+\sqrt{2}=-\sin \left(x\right) \\ \sin \left(x\right)=-\frac{\sqrt{2}}{2} \\ x=\frac{5\pi }{4}+2\pi n,\:x=\frac{7\pi }{4}+2\pi n$$
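The worked solution can be sanity-checked with sympy (a quick sketch, not part of the original solution):

```python
import sympy as sp

x = sp.symbols('x')
# The equation from the worked solution: sin(x) + sqrt(2) = -sin(x)
sols = sp.solveset(sp.Eq(sp.sin(x) + sp.sqrt(2), -sp.sin(x)), x, sp.S.Reals)
# sympy returns the two solution families; both satisfy sin(x) = -sqrt(2)/2
```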
Abbreviation: CdLat A complemented lattice is a bounded lattice $\mathbf{L}=\langle L,\vee ,0,\wedge ,1\rangle $ such that every element has a complement: $\forall x\,\exists y(x\vee y=1\mbox{ and }x\wedge y=0)$ Let $\mathbf{L}$ and $\mathbf{M}$ be complemented lattices. A morphism from $\mathbf{L}$ to $\mathbf{M}$ is a function $h:L\rightarrow M$ that is a bounded lattice homomorphism: $h(x\vee y)=h(x)\vee h(y)$, $h(x\wedge y)=h(x)\wedge h(y)$, $h(0)=0 $, $h(1)=1$ Example 1: $\langle P(S), \cup, \emptyset, \cap, S\rangle $, the collection of subsets of a set $S$, with union, empty set, intersection, and the whole set $S$. Classtype first-order Equational theory decidable Quasiequational theory First-order theory undecidable Locally finite no Residual size unbounded Congruence distributive yes Congruence modular yes Congruence n-permutable yes Congruence regular no Congruence uniform no Congruence extension property no Definable principal congruences no Equationally def. pr. cong. no Amalgamation property Strong amalgamation property Epimorphisms are surjective $\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &0\\ f(4)= &1\\ f(5)= &2\\ f(6)= &\\ f(7)= &\\ f(8)= &\\ \end{array}$
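Example 1 can be verified mechanically for a small set; the sketch below (illustrative, not part of the page) checks that every element of $\langle P(S), \cup, \emptyset, \cap, S\rangle $ has a complement:

```python
from itertools import combinations

def is_complemented_powerset(S):
    """Check that every subset x of S has a complement y with
    x | y = S (join is 1) and x & y empty (meet is 0)."""
    S = frozenset(S)
    subsets = [frozenset(c) for r in range(len(S) + 1)
               for c in combinations(S, r)]
    return all(any((x | y) == S and not (x & y) for y in subsets)
               for x in subsets)
```

Of course the complement of $x$ here is just the set complement $S \setminus x$, which is what makes the powerset lattice complemented.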
From the pole-zero plot, you can compute the system frequency response by assuming a locus of test points along the \$j\omega\$ axis. Figure from: http://web.mit.edu/2.14/www/Handouts/PoleZero.pdf \begin{align} |H(j\omega)| &= K \frac{r_1\ldots r_m}{q_1\ldots q_n}\\ \angle H(j\omega) &= (\phi_1 + \ldots + \phi_m) - (\theta_1 + \ldots + \theta_n) \end{align} This means that if I stimulate \$H(s)\$ with a steady-state sinusoidal input, $$A\sin\omega_1t$$ at the output I'll get $$A|H(j\omega_1)|\sin(\omega_1t + \angle H(j\omega_1))$$ Question Evaluating \$H(j\omega)\$ means I'll get its magnitude and phase response when it is stimulated with a steady-state sinusoidal input $$A\sin \omega t$$ If I evaluate \$H(\sigma + j\omega)\$, what kind of input does that imply? Does that mean I would stimulate the system with a decaying sinusoid? $$Ae^{-\sigma t} \sin (\omega t + \phi)$$
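For concreteness, the geometric magnitude/phase formulas can be checked numerically against direct evaluation of \$H(s)\$ (a sketch with made-up pole/zero locations: one zero at \$-1\$, poles at \$-2\$ and \$-3\$, \$K=1\$):

```python
import numpy as np

def h_at(s, zeros, poles, K=1.0):
    """Evaluate H(s) = K * prod(s - z_i) / prod(s - p_i) directly."""
    num = K * np.prod([s - z for z in zeros]) if zeros else K
    den = np.prod([s - p for p in poles])
    return num / den

w = 2.0                                    # test frequency on the jw axis
H = h_at(1j * w, zeros=[-1.0], poles=[-2.0, -3.0])

# Geometric form: |H(jw)| = K * r1 / (q1 * q2), angles add/subtract likewise
r1 = abs(1j * w - (-1.0))                  # distance from the zero to jw
q1 = abs(1j * w - (-2.0))                  # distances from the poles to jw
q2 = abs(1j * w - (-3.0))
mag_geo = r1 / (q1 * q2)
phase_geo = np.angle(1j*w + 1.0) - np.angle(1j*w + 2.0) - np.angle(1j*w + 3.0)
```

The two routes agree exactly, since the distances and angles are just the moduli and arguments of the factors of \$H(j\omega)\$.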
Let $M: P_{1}\rightarrow P_{1}$ be defined by $M(f)=f^{'}+f$, i.e. $M(a_{0}+a_{1}x)=a_{1}+a_{0}+a_{1}x$. Find the adjoint $M^{*}$ of $M$, i.e. find $M^{*}(a_{0}+a_{1}x)$, assuming that the $L^{2}(0,1)$ inner product is imposed on $P_{1}$: $\langle f(x),g(x)\rangle =\int^{1}_{0}f(x)g(x)dx$. Let's find an orthogonal basis for the given inner product. Luckily, $\|1\|=\sqrt{\langle 1,1\rangle}=\sqrt{\int_0^11}=1$. But $x$ is not orthogonal to $1$, so we have to modify it: $$\langle x,1\rangle =\int_0^11\cdot x\,dx=\frac12\,,$$ so that $(x-\frac12)\perp 1$. So is its double, $p:=\,2x-1$. Its norm can be obtained: $\|p\|^2=\|(2x-1)\|^2=\int_0^1(2x-1)^2\,dx=\frac43-2+1=\frac13$. Then, use the general fact that a vector $v$ can be decomposed in the orthogonal basis $e_1,e_2,...$ as $$v=\frac{\langle v,e_1\rangle}{\|e_1\|^2} e_1+\frac{\langle v,e_2\rangle}{\|e_2\|^2} e_2+\dots$$ Now we can apply this to arrive at $M^*$: $$M^*(1)= \langle M^*(1),\,1\rangle\, +3\,\langle M^*(1),\,2x-1\rangle\cdot (2x-1)= \\ = \langle 1,M(1)\rangle+3\,\langle1,M(2x-1)\rangle\cdot(2x-1)\,.$$ Similarly one can obtain $M^*(x)$.
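The same computation can be carried out with sympy; this sketch uses the equivalent matrix route $M^* = G^{-1} M^{T} G$ in the (non-orthogonal) basis $\{1, x\}$, with $G$ the Gram matrix of that basis, rather than the orthogonal-basis argument above:

```python
import sympy as sp

x = sp.symbols('x')

def ip(f, g):
    """The L^2(0,1) inner product on P_1."""
    return sp.integrate(f * g, (x, 0, 1))

basis = [sp.Integer(1), x]
# Gram matrix of {1, x} and the matrix of M(f) = f' + f in that basis
G = sp.Matrix(2, 2, lambda i, j: ip(basis[i], basis[j]))
A = sp.Matrix([[1, 1], [0, 1]])          # columns: M(1) = 1, M(x) = 1 + x
A_star = G.inv() * A.T * G               # adjoint w.r.t. this inner product

a0, a1 = sp.symbols('a0 a1')
coords = A_star * sp.Matrix([a0, a1])
M_star = sp.expand(coords[0] + coords[1] * x)   # M*(a0 + a1*x)
```

This evaluates to $M^*(a_0+a_1x)=(-5a_0-3a_1)+(12a_0+7a_1)x$, and the defining property $\langle Mf,g\rangle=\langle f,M^*g\rangle$ can be verified by integration.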
We can go straight from your definition of a zero divisor. In $A \times B$, a zero divisor is a non-zero element whose product with some other non-zero element is zero. Note that in this new ring, "zero" is the element $(0, 0)$. So the elements $(a, b)$ and $(c, d)$ form a zero-divisor pair if both are non-zero and $(a, b) \cdot (c, d) = (0, 0)$. If $A$ or $B$ originally had zero divisors, then an easy way to generate zero divisors for $A \times B$ is as follows. Say $a, c \in A$ is a zero-divisor pair and $b, d \in B$ is also a zero-divisor pair. Then the following are zero-divisor pairs in $A \times B$: $$(a, 0)(c, 0) = (0, 0)$$$$(a, b)(c, d) = (0, 0)$$$$(0, b)(0, d) = (0, 0)$$ Furthermore, given the way multiplication has been defined in $A \times B$, any elements of the form $(x, 0)$ and $(0, y)$ with $x, y$ non-zero are also zero divisors, since $(x, 0)(0, y) = (0, 0)$.
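A brute-force sketch over $\mathbb{Z}_n \times \mathbb{Z}_m$ (my own choice of $A$ and $B$, purely for illustration) makes the claim concrete:

```python
from itertools import product

def zero_divisors(n, m):
    """Nonzero elements (a, b) of Z_n x Z_m (componentwise operations) whose
    product with some other nonzero element is (0, 0)."""
    ring = list(product(range(n), range(m)))
    zero = (0, 0)
    zd = set()
    for elt in ring:
        if elt == zero:
            continue
        for other in ring:
            if other != zero and \
               ((elt[0] * other[0]) % n, (elt[1] * other[1]) % m) == zero:
                zd.add(elt)
                break
    return zd
```

For $\mathbb{Z}_2 \times \mathbb{Z}_3$, where both factors are fields with no zero divisors of their own, the zero divisors are exactly the non-zero elements with a zero coordinate, matching the last observation above.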
Does this make sense to you, i.e. are the 2 fields actually different, or do the 2 names refer to the same thing? Spectrum analysis is more general: it involves looking at the entire spectrum of a given signal. Spectral line analysis assumes that the spectrum contains several peaks (lines) of interest at specific frequencies. The aim then is to find the precise frequency, magnitude, and phase of those peaks (lines). Just to enrich the previous answer, the line spectrum is defined in Stoica, P., & Moses, R. (2005). Spectral analysis of signals. Prentice Hall. p. 144 as associated to signals of the form:$$y(t)=x(t)+e(t)$$where $x(t)$ is a sinusoidal noise-free signal defined as $x(t)=\sum_{k=1}^{n}\alpha_k e^{i(\omega _k t+\phi _k )} $ and $e(t)$ is circular white noise with power $\sigma^2$. So the spectrum equals $\sigma^2$ everywhere except at some specific frequencies, hence the name line spectrum. See e.g. the following picture from Michael Richmond: UPDATE: an extra reference explaining the different types of spectra is p. 142 in Percival, D. B., & Walden, A. T. (1998). Spectral analysis for physical applications. In this reference, line spectra are called discrete or purely discrete.
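A quick sketch of the line-spectrum picture (my own toy example, not from either book): two sinusoids at exact DFT bin frequencies in white noise, whose periodogram is flat except for two lines:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024
f1, f2 = 0.125, 0.25                     # line frequencies (cycles/sample)
t = np.arange(N)
x = (2.0 * np.cos(2 * np.pi * f1 * t)
     + 1.5 * np.cos(2 * np.pi * f2 * t)
     + rng.normal(0, 0.5, N))            # sinusoids + white noise e(t)

P = np.abs(np.fft.rfft(x)) ** 2 / N      # periodogram
freqs = np.fft.rfftfreq(N)
lines = np.sort(freqs[np.argsort(P)[-2:]])   # the two strongest "lines"
```

Spectral line analysis would now estimate the frequency, magnitude, and phase of those two peaks; general spectrum analysis would look at the whole of `P`.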
When does an interaction drop the system into an eigenstate? (i.e. when is an interaction a measurement?) This is an ill-posed question because, first of all, the system $S$ doesn't drop into any state; rather, each observer $O$ has a state about it, as a state $\rho$ is nothing but the coding of past measurements (so it should be named with reference to its dependence on both $O$ and $S$, not only the latter; that is why 'the state lives on the observer-system link' and is not intrinsic to the system alone, which is instead characterized by its algebra of observables). Interactions create correlations during the isolated evolution $\hat{U}(t_1,t_0)$ of the closed system $O\otimes S$, and only those which are stable and robust in time can be regarded as measurements from the point of view of $O$, where the process of decoherence has a major role to play. In this sense, your question is of a phenomenological nature regarding when and how different interactions between observers, systems, apparata... create stable, robust and reliable correlations between the observables of $O$ and $S$ as seen by another observer $O'$, something which depends completely on the nature of the hamiltonian $\hat{H}_{O-S}$, the strength of the couplings and the time scales involved. I believe the answer by @ACuriousMind is quite to the point. This topic easily turns into philosophical discussions even among experts (that is why he says it is "unsettled"), but I believe some conservative yet objective conclusions are unavoidable. You ask for a mathematical description, but I believe the difficulty lies precisely in epistemological confusions and not in formalism. For a more mathematically detailed exposition please refer to the articles at the end of this answer: INTRODUCTION AND JUSTIFICATION The world is made of interacting physical systems $S_i$ characterized by their degrees of freedom, i.e.
observable properties $\mathcal{A}^{(S_i)}_j$ subject to algebraic relations, in general noncommutative, which characterize their spectra of measurable values and complementarity (in particular, commutator relations are equivalent to specifying Heisenberg's uncertainty relations for pairs of observables, and they are also usually enough to deduce their spectral representations). General operator algebras with physically motivated analytic properties are represented by operators $\hat{A}^{(S_i)}_j$ acting on Hilbert spaces. Any physical description must be done from within the world by some system studying other systems; thus fixing a system of reference $\mathcal{O}:=S_0$ specifies an observer which/who will describe the rest (or the subsystem of interest $S$, disregarding the rest as environment) by the correlations it gets by measuring their observables. Observables are then what characterize the connections, the links, between systems, since they are 'channels of information' through which systems may affect/know each other via interaction. Not yet knowing how this works, $\mathcal{O}$ interacts with $S$ obtaining values of the spectra of $\hat{A}^{(S)}_j$, but if any two of them do not commute, $[\hat{A},\hat{B}]\neq 0$, the measured eigenvalues are incompatible with being well-defined at the same time. Example: measuring spin $\hat{S}_z$ in a Stern-Gerlach apparatus gives a definite eigenvalue $\pm 1/2$ with $50\%$ chance each, say $+$; this is preserved if another $\hat{S}_z$ is measured in succession by the same or another observer ($100\%$ is again $+$), but if $\hat{S}_x$ is measured afterwards, then a new measurement of $\hat{S}_z$ reveals that the original information $+$ was lost and again $50\%$ is obtained; therefore incompatible observables do not have common eigenstates and represent properties not well-defined at the same time.
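The sequential Stern-Gerlach example can be simulated directly (an illustrative sketch, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
up_z, down_z = np.array([1, 0], complex), np.array([0, 1], complex)
up_x, down_x = (up_z + down_z) / np.sqrt(2), (up_z - down_z) / np.sqrt(2)

def measure(state, basis):
    """Projective measurement: Born-rule probabilities, collapse onto the
    eigenstate that was obtained."""
    probs = np.array([abs(np.vdot(b, state)) ** 2 for b in basis])
    k = rng.choice(len(basis), p=probs / probs.sum())
    return k, basis[k]

plus_again = 0
trials = 4000
for _ in range(trials):
    _, s = measure(up_z, [up_z, down_z])   # S_z on |+z>: always '+'
    _, s = measure(s, [up_x, down_x])      # intervening S_x: 50/50
    k, _ = measure(s, [up_z, down_z])      # S_z again: the '+' info is gone
    plus_again += (k == 0)
frac_plus = plus_again / trials            # ~0.5, not 1.0
```

Without the intervening `S_x` step the final `S_z` measurement would return `+` every time; with it, the fraction drops to about one half, which is the loss of information described above.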
Therefore $O$ measures from $S$ at any one time at most a maximal set of compatible observables, because these ideally have definite sharp eigenvalues at the same time; thus any system $S$ is specified by 'complete windows of interaction' given by all the possible maximally compatible subsets $\{\hat{A}, \hat{B},...\},\{\hat{X},\hat{Y},...\},...$ of its observable algebra. Now, at every interaction time between $O$ and $S$, $O$ obtains through its senses/apparata at most a collection $|a,b,...\rangle$ of simultaneously well-defined eigenvalues from a complete set of observables (which set is selected depends on the senses/apparata intervening at each interaction). So $O$ 'observes' system $S$ at $t_0$ in the state $|\Psi(t_0)\rangle =|a,b,..\rangle$, because the maximally compatible properties defining $S$ have those values at that time. At a later time $O$ interacts again with $S$, possibly through another compatible set of observables, giving a state of measured values $|\Psi(t_1)\rangle =|x,y,..\rangle$. The collection of eigenvalues of the compatible operators forms an eigenstate in the Hilbert space representation. Non-relativistic quantum mechanics is about studying how to predict the probability that, having measured a system in state $|\psi\rangle$, it will be observed in state $|\chi \rangle$ at a later time. That is given by a probability distribution on the space of possible future states for each current state, which in the general noncommutative case is the same as giving a normalized positive linear functional on the observable algebra, and by Gleason's theorem every such functional can be represented by a density operator $\rho$. Indeed, complete eigenstates $|\Psi\rangle$ of some maximal set of compatible observables are in bijective correspondence with density operators which are rank-1 projectors, i.e.
$\rho_\psi=|\psi\rangle\langle\psi|$, called pure states (general mixed states are convex combinations of pure states and represent statistical mixtures encoding our uncertainty about which pure state was actually measured). In this way, a state $\rho$ of $S$ is not only the record of the values $O$ observed but a probability disposition for future observations, since Gleason's theorem guarantees that the probability of measuring a future eigenvalue $m$ of observable $\hat{M}$ is given by the expectation value of its projector $\langle \hat{P}_m\rangle =tr(\rho\cdot\hat{P}_m)$, and by spectral decomposition the probabilities and expectations of any observables are obtained, including transition probabilities between eigenstates $\mathcal{P}(\psi\mapsto\chi)=|\langle\chi |\psi\rangle|^2 =tr(\rho_\chi\cdot\rho_\psi)$. This is the kinematics of quantum mechanics, because no dynamical evolution has yet been taken into account between the observer-system interactions at $t_0$ and $t_1$. When $O$ is not interacting with $S$, the latter is considered isolated or a "closed system" and is left alone to evolve during $\Delta t=t_1-t_0$. Now if $S$ is to be considered the same system at different times, everything which characterizes it must remain invariant, i.e. the observable algebra must be the same at different times, which means the operators representing it must be related by an algebra automorphism $\mathcal{U}(t_1,t_0)$ tending to the identity when $t_1\rightarrow t_0$; by the Stone-von Neumann theorem this is given by a unitary operator $\hat{U}(t_1,t_0)$ transforming the operator representation of the algebra as $$\hat{A}(t_1)=\hat{U}^\dagger (t_1,t_0)\cdot\hat{A}(t_0)\cdot\hat{U}(t_1,t_0)$$ which is nothing else than Schrödinger's equation in Heisenberg's picture.
Therefore, suppose $O$ measured a state $\rho_0$ initially. If at a time $t_1>t_0$ observer $O$ interacts with $S$ to measure observable $\hat{B}(t_1)$, it obtains eigenvalue $b$ with probability $tr(\rho_0\cdot\hat{P}_b(t_1))$, where the projector evolves by the same unitary transformation by the same reasoning. As soon as $O$ perceives/detects/measures a new value $m$ of any observable $\hat{M}$ of $S$, it must update the information it had stored in $\rho_0$ to the new state $$\rho_1 =\frac{\hat{P}_m(t_1)\cdot\rho_0\cdot\hat{P}_m(t_1)}{tr(\rho_0\cdot\hat{P}_m(t_1))}.$$ This is Lüders' rule, the noncommutative generalization of Bayes' rule for updating probability distributions by conditioning upon new information, as justified in Duvenhage's article linked above or in the Busch-Lahti articles and their books. Now, the nature of the evolution operator IS WHAT ENCLOSES THE INTERACTIONS of the subsystems within $S$ as seen by $O$. Why so? Because a priori the automorphism preserving the identity of the system may depend on other systems, and that is what is meant by 'interaction'. Indeed, by Stone's theorem the unitary evolution operator can be recast in terms of its infinitesimal generator, the hamiltonian operator: a hermitian operator $\hat{H}_S$ with spectrum bounded from below, characteristic of the isolated evolution of the whole system $S$, satisfying $U(t_1,t_0)=\exp (-i\hat{H}_S\Delta t/\hbar)$. The operator $\hat{H}_S$ must depend on the observables of $S$ and on those of the other systems interacting with it, for them to intervene in the time evolution. Reasons of symmetry (e.g. homogeneity and isotropy) motivate the terms of the hamiltonian responsible for free evolution (kinetic terms like $\hat{p}^2/2m$), which involve only the observables of the system, so interactions must appear in the form of couplings between the observables of different systems (like $\hat{\mathbf S}_1\cdot\hat{\mathbf S}_2$ for spin interactions).
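Lüders' rule above is easy to exhibit numerically for a qubit (an illustrative sketch; the particular state and projector are my own choices):

```python
import numpy as np

# Projector onto the S_z = +1/2 eigenstate |+z> (basis {|+z>, |-z>})
P = np.array([[1, 0],
              [0, 0]], dtype=complex)

# Prior information: the pure state |+x><+x|, maximally uncertain about S_z
rho0 = np.array([[0.5, 0.5],
                 [0.5, 0.5]], dtype=complex)

prob = np.trace(rho0 @ P).real                  # Born probability of outcome '+'
rho1 = P @ rho0 @ P / np.trace(rho0 @ P)        # Lueders update after seeing '+'
```

After conditioning on the outcome, `rho1` is the projector onto $|+z\rangle$: the observer's information has been updated, exactly as a classical Bayesian update replaces a prior with a posterior.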
Given all this, any observer $O$ just needs to know the Hamiltonian of $S$ (or the free Hamiltonians of its subsystems and the interaction couplings) to be able to evolve the observables $\mathcal{A}^{(S)}_j$ in an operator representation via Heisenberg's equation using $\hat{U}(t_1,t_0)$, which, along with the information it already has in a previously measured state $\rho_0$, allows it to establish all the probability distributions of future measurements via the laws established above. But time evolution can be mathematically mapped into state space instead, which just amounts to evolving $\rho(t)$ and fixing $\mathcal{A}^S$, giving the usual Schrödinger equation. The categorical error in ontology is bestowing reality on intermediate states $\rho(t)$ while $S$ is isolated between measuring times $t_0<t<t_1$: $\rho(t)$ evolves $\rho_0$ into a superposition of eigenstates which generically does not correspond to a pure state of measurements/information of any observer, i.e. Schrödinger-picture intermediate states are not justified as representing physical states from an operational empiricist interpretation of mechanics. We should suspend judgment about the realist asserting a superposition of all histories and such, because these are not seen by any physical observer and come from a mathematical convenience outside the original physically motivated meaning of the observables and states we started with. It is clear that the description any observer can make of any system is intrinsically incomplete, because it does not include the $O-S$ interaction. This is unavoidable, as observables are meaningful only as "communication channels" between systems. However, quantum mechanics is complete because another observer $O'$ can model by the same theory the measurements made by $O$ by considering the coupled system $O\otimes S$, since now $O'$ has access to both the observables of $O$ and those of $S$ and can study their couplings.
Hence, a measurement of $S$ by observer $O$ is nothing but an interaction in the time evolution of the $O\otimes S$ system as seen by another observer $O'$. What $O'$ sees is degrees of freedom of $O$ getting correlated with degrees of freedom of $S$ (e.g. the apparatus of $O$ is always measured by $O'$ to be in the same position when $S$ is measured by $O'$ to be in eigenvalue $a$), and those correlations of observables are measurements. The dynamics used by $O'$ through $\hat{U}$ uses $\hat{H}_{O-S}$, something which $O$ could not use; that is why $O$ was not able to describe its own interaction and measurement process. Besides, quantum mechanics is consistent because dynamics guarantees that $O$ and $O'$ get the same values when comparing the eigenvalues they measure for the same observable of $S$, if the latter was left to evolve isolated. Read Smerlak, Rovelli or Englert for discussions of how to apply the quantum formalism consistently, and check that different observers may have different information on the system but nevertheless agree. (In short, they agree because either they are decohered and measure compatible observables, so they get the same eigenvalues, or they are observers out of contact who are regarded as part of the closed system from the point of view of each other; however, if they are to compare measurements they must interact, thus decohering, and each observer will see <an eigenvalue of the system> and <the other observer having seen that eigenvalue> both with the same probability, granting consistency; cf. Rovelli's articles.)
van Kampen’s moral: If you endow the mathematical symbols with more meaning than they [operationally] have, you yourself are responsible for the consequences, and you must not blame quantum mechanics when you get into dire straits… SUMMARY: the answer to what makes the system project into an eigenstate is precisely the tricky part: the system doesn't drop into anything, only the observer's information about the system "drops" into an eigenstate, because eigenstates are the results of measuring observables of the system by the observer. Your "dropping" is the "collapse of the wavefunction" which is just the noncommutative analogue of the classical Bayesian update of probability distributions after new knowledge is acquired. There is nothing "collapsing" or "dropping" at each measurement because the state is an informational device, a book-keeping of the observer-system past interaction history. The whole point of my answer is that physical interpretation is justified only for Heisenberg's picture: states do not evolve and then collapse, system's observables evolve and you update the info you have at each measurement. Quantum Mechanics just gives the conditional probabilities of questions "having measured a,b.. what is the probability of measuring x,y.. later?". Thus the "state is on the observer-system link" because different observers may have different information of the system due to different past interaction history, so the state is not something intrinsic of the system but relative/relational. However as soon as observers get into contact, dynamics guarantees they will see the same measurements as long as they are of compatible observables and the system was isolated. This is because in the quantum case measuring an observable destroys previous information on a complementary observable. 
In classical mechanics a common state of the system among observers is possible because it is the commutative limit, and all observables have well-defined values at the same time for a pure state. Therefore a measurement is any interaction of the system $S$ with the observer $O$ which establishes a robust, stable correlation between some of their degrees of freedom; but this is a process that can only be described by another observer $O'$, who will see that it depends on the couplings between the observables of the compound system $S\otimes O$. So only those interactions in the coupling of the hamiltonian $\hat{H}_{O-S}$ which create correlations robust enough to appear at the experimental sensitivity of $O$ will qualify, for $O'$, the interaction of $O$ and $S$ as a measurement. Einstein taught us that time and length are relative; Heisenberg taught us that quantum entities do not have absolute definite eigenstates in-between measurements (because, if one insists on Schrödinger's picture, the general intermediate superposed state given by unitary evolution is not justified as empirically physical, in the sense that there is no observer for which it is an eigenstate, as generically there are no physical observables which diagonalize it).
Real Analysis Exchange, Volume 33, Number 2 (2007), 417-430. A Study of a Stieltjes Integral Defined on Arbitrary Number Sets Abstract Our purpose is to study a generalized Stieltjes integral defined on a class of subsets of a closed number interval. We extend the results of previous work by the first author. Among other results, we prove the following. If $M \subseteq [a,b]$ and $f$ and $g$ are functions with domain $M$ such that $f$ is $g$-integrable over $M$, and there exist left (right) extensions $f^*$ and $g^*$ of $f$ and $g$ to $[a,b]$, respectively, then $f^*$ is $g^*$-integrable on $[a,b]$ and $$ \int_a^b f^*dg^*= \int_M fdg $$ Suppose that (a) $F$ is $G$-integrable on $[a,b]$, (b) $\overline{M} \subseteq [a,b]$ and $a,b \in M$, and (c) if $z$ belongs to $[a,b] - M$ and $\epsilon$ is a positive number, then there is an open interval $s$ containing $z$ such that $|F(x) - F(z)||G(v) - G(u)| <\epsilon$ whenever each of $u$, $v$, and $x$ is in $s \cap [a,b]$, $u < z < v$, and $u \le x \le v$. Then $F$ is $G$-integrable on $M$, and $\int_a^b FdG = \int_{M}FdG$. Article information Source Real Anal. Exchange, Volume 33, Number 2 (2007), 417-430. Dates First available in Project Euclid: 18 December 2008 Permanent link to this document https://projecteuclid.org/euclid.rae/1229619419 Mathematical Reviews number (MathSciNet) MR2458258 Zentralblatt MATH identifier 1165.26002 Keywords Stieltjes integral Citation Coppin, Charles; Muth, Philip. A Study of a Stieltjes Integral Defined on Arbitrary Number Sets. Real Anal. Exchange 33 (2007), no. 2, 417--430. https://projecteuclid.org/euclid.rae/1229619419
No, this is not meant as an oxymoron but rather as a question raised by one of our users. And we thought it might be interesting to others. So the story goes like this: When you have a sample time series, most of the time you would like to forecast just the future points (past the end of the sample data). But what about the ones that fall before the start of the sample? Can we predict those as well? And if so, what can we say about the forecasting process? Why should we care? There are different cases where one would want to predict the past. For example, our user had a temperature time series that was missing past observations, and he wanted a best guess for them using the dynamics detected in the same data. For financial time series there is no money to be made by forecasting the past (unless we have a time machine). But can't we use the past data points and their forecasts to help us diagnose the stability of the underlying process? In this issue, we will show how to make a backward forecast using only NumXL functions in Excel. We will also discuss the relationship between a regular time series model and an implied backward/reversed time series model. For the data sample, we'll use the monthly MIN‐MAX temperatures recorded in a given city from January 1988 to December 2009.
Background In time series analysis, we usually express the value of a data point as a function of prior values.$$X_t=f(X_{t-1},X_{t-2},\cdots,X_1, a_{t-1},a_{t-2},\cdots,a_1)+a_t$$ Where $\{X_{t-1},X_{t-2},\cdots,X_1\}$ is the set of past observations and $\{a_{t-1},a_{t-2},\cdots,a_1\}$ is the set of past shocks/innovations. In order to reverse the problem, we need to express past observation values in terms of future ones.$$X_t=g(X_{t+1},X_{t+2},\cdots, X_T, a_{t+1}, a_{t+2},\cdots , a_T) + a_t$$ Where $\{X_{t+1},X_{t+2},\cdots, X_T\}$ are the values of future observations up to the end of the sample and $\{a_{t+1}, a_{t+2},\cdots , a_T\}$ is the set of future shocks or innovations up to the end of the sample. Examining the two forms, you can see that the backward model is basically a time series model for the chronologically reversed series $\{Y_t\}$, where:$$Y_t = X_{T-t}$$ $$X_t = Y_{T-t}$$ So, let's reformulate the earlier relation with the substitution $t=T-\tau$:$$X_{T-\tau}=g(X_{T-(\tau-1)},X_{T-(\tau-2)},\cdots, X_{T-0},a_{T-(\tau-1)},a_{T-(\tau-2)},\cdots, a_{T-0})+a_{T-\tau}$$ $$Y_\tau=g(Y_{\tau-1},Y_{\tau-2},\cdots, Y_0,\omega_{\tau-1},\omega_{\tau-2},\cdots, \omega_0)+\omega_\tau$$ where $\{\omega_{\tau} \}=\{a_{T-\tau}\}$. By reversing the chronological order of the time series we can fit a regular time series model and forecast new values as we usually do, but we interpret them as past values in the original time series domain. Application For our sample data, we'll use the monthly MIN‐MAX temperatures (Celsius) recorded in a given city. Note that the time series exhibits 12‐month seasonality and some upward drift (trend). The objective here is to forecast the monthly MIN‐MAX temperatures from 1984 to 1988. Note that the minimum and maximum temperature time series are correlated.
But for our purposes here, we'll ignore the interdependency and forecast each time series separately. Furthermore, we will use the Holt‐Winters triple exponential smoothing function to drive the forecast. Forecast As mentioned earlier, we reversed the chronological order of the input time series (MIN and MAX) so that the first observation becomes the last one and vice versa, using the NumXL "REVERSE" function. Next, using the Holt‐Winters triple exponential smoothing function (TESMTH in NumXL), we assumed a default value of 0.1 for $\{\alpha,\beta,\gamma\}$ and computed an in‐sample MAX forecast for each data point. Using the root mean squared error (RMSE) function, we calculated the discrepancy between the model's values and the sample data values. Using the Solver, we then optimized the values of $\{\alpha,\beta,\gamma\}$ to minimize the RMSE. For more details about how we calibrated the coefficients, please refer to our issue on smoothing functions, where we tackle the optimization of the parameters' values. Afterward, we used the optimal values in the TESMTH function to forecast the out‐of‐sample data points. Then we repeated the same procedure for the MIN time series. In the graph above, all observations before January 1988 were forecast by the TESMTH function. The forecast preserved the seasonality, and the trend is very minimal. Conclusion We've demonstrated how to do a backward forecast, which is a prediction for an observation that falls before the start of the sample data. The key step was reversing the chronological order of the input time series before the start of the analysis. Furthermore, the time series process of the original data is different from the process of the reversed time series. For example, for an AR(1) process$$x_t=\alpha +\phi x_{t-1}+ a_t, \quad \left |\phi \right | < 1 $$ reversing the relationship gives$$x_{t-1}=\frac{x_t-\alpha -a_t}{\phi}$$ But if the process is stationary in one direction, it is also stationary for the reversed time series process.
Finally, in our application we must note that forecasting each series independently is not optimal, as we are not taking into consideration the interdependency of the two time series.
The signature of a parametric curve is a sequence of tensors whose entries are iterated integrals. This construction is central to the theory of rough paths in stochastic analysis. It is examined here through the lens of algebraic geometry. We introduce varieties of signature tensors for both deterministic paths and random paths. For the former, we focus on piecewise linear paths, on polynomial paths, and on varieties derived from free nilpotent Lie groups. For the latter, we focus on Brownian motion and its mixtures. Let $Y$ be a complex Enriques surface whose universal cover $X$ is birational to a general quartic Hessian surface. Using the result on the automorphism group of $X$ due to Dolgachev and Keum, we obtain a finite presentation of the automorphism group of $Y$. The list of elliptic fibrations on $Y$ and the list of combinations of rational double points that can appear on a surface birational to $Y$ are presented. As an application, a set of generators of the automorphism group of the generic Enriques surface is calculated explicitly.
We give a bound on the primes dividing the denominators of invariants of Picard curves of genus 3 with complex multiplication. Unlike earlier bounds in genus 2 and 3, our bound is based, not on bad reduction of curves, but on a very explicit type of good reduction. This approach simultaneously yields a simplification of the proof and much sharper bounds. In fact, unlike all previous bounds for genus 3, our bound is sharp enough for use in explicit constructions of Picard curves. The jaggedness of an order ideal $I$ in a poset $P$ is the number of maximal elements in $I$ plus the number of minimal elements of $P$ not in $I$. A probability distribution on the set of order ideals of $P$ is toggle-symmetric if for every $p\in P$, the probability that $p$ is maximal in $I$ equals the probability that $p$ is minimal not in $I$. In this paper, we prove a formula for the expected jaggedness of an order ideal of $P$ under any toggle-symmetric probability distribution when $P$ is the poset of boxes in a skew Young diagram. Our result extends the main combinatorial theorem of Chan–López–Pflueger–Teixidor [Trans. Amer. Math. Soc., forthcoming. 2015, arXiv:1506.00516], who used an expected jaggedness computation as a key ingredient to prove an algebro-geometric formula; and it has applications to homomesies, in the sense of Propp–Roby, of the antichain cardinality statistic for order ideals in partially ordered sets. Inspired by methods of N. P. Smart, we describe an algorithm to determine all Picard curves over $\mathbb{Q}$ with good reduction away from 3, up to $\mathbb{Q}$-isomorphism. A correspondence between the isomorphism classes of such curves and certain quintic binary forms possessing a rational linear factor is established. An exhaustive list of integral models is determined and an application to a question of Ihara is discussed. We prove that the canonical ring of a canonical variety in the sense of de Fernex and Hacon is finitely generated.
We prove that canonical varieties are Kawamata log terminal (klt) if and only if is finitely generated. We introduce a notion of nefness for non-ℚ-Gorenstein varieties and study some of its properties. We then focus on these properties for non-ℚ-Gorenstein toric varieties. We describe the construction of a database of genus-$2$ curves of small discriminant that includes geometric and arithmetic invariants of each curve, its Jacobian, and the associated $L$-function. This data has been incorporated into the $L$-Functions and Modular Forms Database (LMFDB). Given a sextic CM field $K$, we give an explicit method for finding all genus-$3$ hyperelliptic curves defined over $\mathbb{C}$ whose Jacobians are simple and have complex multiplication by the maximal order of this field, via an approximation of their Rosenhain invariants. Building on the work of Weng [J. Ramanujan Math. Soc. 16 (2001) no. 4, 339–372], we give an algorithm which works in complete generality, for any CM sextic field $K$, and computes minimal polynomials of the Rosenhain invariants for any period matrix of the Jacobian. This algorithm can be used to generate genus-3 hyperelliptic curves over a finite field $\mathbb{F}_{p}$ with a given zeta function by finding roots of the Rosenhain minimal polynomials modulo $p$. We study cup products in the integral cohomology of the Hilbert scheme of $n$ points on a K3 surface and present a computer program for this purpose. In particular, we deal with the question of which classes can be represented by products of lower degrees. If $S$ is a quintic surface in $\mathbb{P}^{3}$ with singular set 15 3-divisible ordinary cusps, then there is a Galois triple cover ${\it\phi}:X\rightarrow S$ branched only at the cusps such that $p_{g}(X)=4$, $q(X)=0$, $K_{X}^{2}=15$ and ${\it\phi}$ is the canonical map of $X$.
We use computer algebra to search for such quintics having a free action of $\mathbb{Z}_{5}$, so that $X/\mathbb{Z}_{5}$ is a smooth minimal surface of general type with $p_{g}=0$ and $K^{2}=3$. We find two different quintics, one of which is the van der Geer–Zagier quintic; the other is new. We also construct a quintic threefold passing through the 15 singular lines of the Igusa quartic, with 15 cuspidal lines there. By taking tangent hyperplane sections, we compute quintic surfaces with singular sets $17\mathsf{A}_{2}$, $16\mathsf{A}_{2}$, $15\mathsf{A}_{2}+\mathsf{A}_{3}$ and $15\mathsf{A}_{2}+\mathsf{D}_{4}$. In this paper, we investigate examples of good and optimal Drinfeld modular towers of function fields. Surprisingly, the optimality of these towers has not been investigated in full detail in the literature. We also give an algorithmic approach for obtaining explicit defining equations for some of these towers and, in particular, give a new explicit example of an optimal tower over a quadratic finite field. Mori dream spaces form a large example class of algebraic varieties, comprising the well-known toric varieties. We provide a first software package for the explicit treatment of Mori dream spaces and demonstrate its use by presenting basic sample computations. The software package is accompanied by a Cox ring database which delivers defining data for Cox rings and Mori dream spaces in a suitable format. As an application of the package, we determine the common Cox ring for the symplectic resolutions of a certain quotient singularity investigated by Bellamy–Schedler and Donten-Bury–Wiśniewski. We show how to efficiently evaluate functions on Jacobian varieties and their quotients. We deduce an algorithm to compute $(l,l)$ isogenies between Jacobians of genus two curves in quasi-linear time in the degree $l^{2}$.
We compute the global log canonical thresholds of quasi-smooth well-formed complete intersection log del Pezzo surfaces of amplitude 1 in weighted projective spaces. As a corollary we show the existence of orbifold Kähler–Einstein metrics on many of them. We consider higher secant varieties to Veronese varieties. Most points on the rth secant variety are represented by a finite scheme of length r contained in the Veronese variety – in fact, for a general point, the scheme is just a union of r distinct points. A modern way to phrase it is: the smoothable rank is equal to the border rank for most polynomials. This property is very useful for studying secant varieties, especially, whenever the smoothable rank is equal to the border rank for all points of the secant variety in question. In this note, we investigate those special points for which the smoothable rank is not equal to the border rank. In particular, we show an explicit example of a cubic in five variables with border rank 5 and smoothable rank 6. We also prove that all cubics in at most four variables have the smoothable rank equal to the border rank. We exhibit a numerical method to compute three-point branched covers of the complex projective line. We develop algorithms for working explicitly with Fuchsian triangle groups and their finite-index subgroups, and we use these algorithms to compute power series expansions of modular forms on these groups. We study new families of curves that are suitable for efficiently parametrizing their moduli spaces. We explicitly construct such families for smooth plane quartics in order to determine unique representatives for the isomorphism classes of smooth plane quartics over finite fields. In this way, we can visualize the distributions of their traces of Frobenius. This leads to new observations on fluctuations with respect to the limiting symmetry imposed by the theory of Katz and Sarnak.
We give an equivalent definition of the local volume of an isolated singularity ${\rm Vol}_{\text{BdFF}}(X,0)$ given in [S. Boucksom, T. de Fernex, C. Favre, The volume of an isolated singularity. Duke Math. J. 161 (2012), 1455–1520] in the $\mathbb{Q}$-Gorenstein case and we generalize it to the non-$\mathbb{Q}$-Gorenstein case. We prove that there is a positive lower bound depending only on the dimension for the non-zero local volume of an isolated singularity if $X$ is Gorenstein. We also give a non-$\mathbb{Q}$-Gorenstein example with ${\rm Vol}_{\text{BdFF}}(X,0)=0$, which does not allow a boundary $\Delta$ such that the pair $(X,\Delta)$ is log canonical. We present an improved algorithm for the computation of Zariski chambers on algebraic surfaces. The new algorithm significantly outperforms the currently available method and therefore allows us to treat surfaces of high Picard number, where huge numbers of chambers occur. As an application, we efficiently compute the number of chambers supported by the lines on the Segre–Schur quartic.
Differential Equation

Differential Equation is an important and useful branch of Mathematical Analysis. The inception of Differential Equations, together with that of Differential and Integral Calculus, dates back to the seventeenth century. It was Sir Isaac Newton who first found the solution of a differential equation with the help of an infinite series. The celebrated German mathematician Gottfried Wilhelm Leibniz published an article on the subject, and in subsequent years John Bernoulli made many valuable contributions to its development. The eighteenth century saw rapid and remarkable progress in this branch of knowledge through the works of Euler, Clairaut, Lagrange, Taylor and D'Alembert, but the theory in its present form was put forward much later by Cayley and M.J.M. Hill.

Now take a look at the equation y = f(x); here there are two variables, x and y. Here x is the independent variable and y is the dependent variable, as the values of y depend on x.

Definition

An equation which involves a dependent variable, one or more independent variables, and their differential coefficients or differentials is called a differential equation.

Ordinary Differential Equation

Differential equations that involve only one independent variable are called Ordinary Differential Equations (O.D.E.). Example: \[(i)x\frac{dy}{dx}+ky=0\] \[(ii){{x}^{2}}\left( \frac{{{d}^{2}}y}{d{{x}^{2}}} \right)+2x{{\left( \frac{dy}{dx} \right)}^{3}}-6y=\log x\] \[(iv){{\left( \frac{dy}{dx} \right)}^{2}}+\sqrt{x\frac{dy}{dx}-6y}=0\]

Partial Differential Equation

Differential equations which involve two or more independent variables and partial differential coefficients with respect to these variables are called Partial Differential Equations.
Example: \[x\frac{\partial u}{\partial x}+y\frac{\partial u}{\partial y}=0\] \[\frac{{{\partial }^{2}}u}{\partial {{x}^{2}}}+\frac{{{\partial }^{2}}u}{\partial {{y}^{2}}}+\frac{{{\partial }^{2}}u}{\partial {{z}^{2}}}=x+y+z\]

Order

The order of a DE is the order of the highest-order derivative occurring in the DE. Thus equations (i), (iii) and (iv) are of first order, while equation (ii) is of second order.

Degree

The degree of a DE is the power of the highest-order derivative involved in the equation, once the equation has been made free from radicals and fractional powers as far as the derivatives are concerned. Here, equations (i), (ii) and (iii) are of first degree, while equation (iv) is of degree 4, since \[{{\left( \frac{dy}{dx} \right)}^{2}}+\sqrt{x\frac{dy}{dx}-6y}=0\] \[\Rightarrow {{\left( \frac{dy}{dx} \right)}^{2}}=-\sqrt{x\frac{dy}{dx}-6y}\] \[\Rightarrow {{\left( \frac{dy}{dx} \right)}^{4}}=x\frac{dy}{dx}-6y\]

Solution of the DE

Any relation between the dependent and independent variables which, when substituted in the DE, reduces it to an identity is called a solution of the differential equation. For example, \[y=A{{e}^{2x}}+B{{e}^{-2x}}\] is a solution of the DE \[\frac{{{d}^{2}}y}{d{{x}^{2}}}-4y=0\]

Complete or General Solution

A solution in which the number of independent arbitrary constants is equal to the order of the differential equation is called the general solution. Thus \[y=A{{e}^{2x}}+B{{e}^{-2x}}\] is the general solution of the DE \[\frac{{{d}^{2}}y}{d{{x}^{2}}}-4y=0\]

Particular Solution

Any solution obtained from the general solution by giving particular values to the arbitrary constants is called a particular solution of the DE. So while \[y=A{{e}^{2x}}+B{{e}^{-2x}}\] is the general solution of \[\frac{{{d}^{2}}y}{d{{x}^{2}}}-4y=0\] the functions $y={{e}^{2x}}+{{e}^{-2x}}$ and $y={{e}^{2x}}$ are particular solutions of it.
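The claim that $y=A{{e}^{2x}}+B{{e}^{-2x}}$ is the general solution of $\frac{{{d}^{2}}y}{d{{x}^{2}}}-4y=0$ can be verified mechanically with a computer algebra system (a quick sketch using sympy; the symbol names are arbitrary):

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
y = A*sp.exp(2*x) + B*sp.exp(-2*x)

# Substituting the proposed general solution into y'' - 4y gives 0 identically.
residual = sp.simplify(y.diff(x, 2) - 4*y)
print(residual)  # 0

# dsolve recovers the same two-parameter family from the equation itself.
f = sp.Function('f')
general = sp.dsolve(sp.Eq(f(x).diff(x, 2) - 4*f(x), 0), f(x))
print(general)   # a two-constant family equivalent to A*exp(2x) + B*exp(-2x)
```

Note that the family returned by dsolve carries two arbitrary constants, matching the second order of the equation, exactly as the definition of a general solution above requires.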
Application of Differential Equations

Various systems of curves and families of functions can be conveniently represented using differential equations, and many laws of Physical Science, Chemical Science, Biological Science, Economics and even Astronomical Science are naturally expressed as differential equations.

How Ordinary DEs Are Formed

DEs are formed by eliminating the arbitrary constants from a relation in the variables and constants. Note: the order of a DE cannot exceed the number of arbitrary constants in the relation.

Question 01

Find the DE of the equation \[y={{e}^{x}}(A\cos x+B\sin x)\] Solution: We have, \[y={{e}^{x}}(A\cos x+B\sin x)……….(1)\] \[\therefore \frac{dy}{dx}={{e}^{x}}(A\cos x+B\sin x)+{{e}^{x}}(-A\sin x+B\cos x)\] \[\Rightarrow \frac{dy}{dx}=y+{{e}^{x}}(-A\sin x+B\cos x)………(2)\] \[\therefore \frac{{{d}^{2}}y}{d{{x}^{2}}}=\frac{dy}{dx}+{{e}^{x}}(-A\sin x+B\cos x)+{{e}^{x}}(-A\cos x-B\sin x)\] \[\Rightarrow \frac{{{d}^{2}}y}{d{{x}^{2}}}=\frac{dy}{dx}+\left( \frac{dy}{dx}-y \right)-y,[by(1)\And (2)]\] \[\therefore \frac{{{d}^{2}}y}{d{{x}^{2}}}-2\frac{dy}{dx}+2y=0\] is the required DE.

Question 02

Find the DE of the equation \[y=A\cos (px-B)\] where p is a fixed constant and A, B are parameters. Solution: Given, \[y=A\cos (px-B)\] \[\therefore \frac{dy}{dx}=-Ap\sin (px-B)\] \[\therefore \frac{{{d}^{2}}y}{d{{x}^{2}}}=-A{{p}^{2}}\cos (px-B)=-{{p}^{2}}y\] \[\therefore \frac{{{d}^{2}}y}{d{{x}^{2}}}+{{p}^{2}}y=0\] is the required DE.
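The eliminations in Questions 01 and 02 can be double-checked by substituting each given family back into its derived DE; since the parameters must drop out, the residual should vanish identically in A and B (a sympy sketch):

```python
import sympy as sp

x, A, B, p = sp.symbols('x A B p')

# Question 01: y = e^x (A cos x + B sin x) should satisfy y'' - 2y' + 2y = 0.
y1 = sp.exp(x) * (A*sp.cos(x) + B*sp.sin(x))
check1 = sp.simplify(y1.diff(x, 2) - 2*y1.diff(x) + 2*y1)

# Question 02: y = A cos(px - B) should satisfy y'' + p^2 y = 0.
y2 = A*sp.cos(p*x - B)
check2 = sp.simplify(y2.diff(x, 2) + p**2 * y2)

print(check1, check2)  # 0 0
```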
Question 03

What is the DE of \[{{e}^{y-x}}=\lambda (y+x),\lambda \to parameter\] Solution: \[{{e}^{y-x}}=\lambda (y+x)\] Taking logarithms on both sides we get, \[\log {{e}^{y-x}}=\log \left\{ \lambda (y+x) \right\}=\log \lambda +\log (y+x)\] \[\Rightarrow y-x=\log \lambda +\log (y+x)\] \[\therefore \frac{dy}{dx}-1=\frac{1}{y+x}\left( \frac{dy}{dx}+1 \right)\] \[\Rightarrow \left( y+x \right)\left( \frac{dy}{dx}-1 \right)=\left( \frac{dy}{dx}+1 \right)\] \[\therefore \left( y+x-1 \right)\frac{dy}{dx}=y+x+1\] is the required DE.

Question 04

Find the differential equation of all circles passing through the origin and having centers on the x-axis. Solution: Let (a, 0) be the center of any such circle. Then its radius will be 'a'. So the equation of all circles passing through the origin and having centers on the x-axis is, \[{{\left( x-a \right)}^{2}}+{{y}^{2}}={{a}^{2}}\] \[\Rightarrow {{x}^{2}}+{{y}^{2}}-2ax=0……….(1)\] Here 'a' is a parameter. Differentiating (1) w.r.t. x, \[2x+2y\frac{dy}{dx}-2a=0\] \[\Rightarrow a=x+y\frac{dy}{dx}\] Now from (1) we get, \[{{x}^{2}}+{{y}^{2}}-2\left( x+y\frac{dy}{dx} \right)x=0\] \[\Rightarrow {{y}^{2}}-{{x}^{2}}-2xy\frac{dy}{dx}=0\] is the required DE.

Question 05

Find the differential equation of the system of circles having a constant radius 'a' and having centers on the x-axis. Solution: Let (α, 0) be the center of any member of the system of circles having fixed radius 'a'. The equation of the system of circles is \[{{\left( x-\alpha \right)}^{2}}+{{y}^{2}}={{a}^{2}}……….(1)\] where α is a parameter and 'a' is a constant. Differentiating both sides w.r.t. x we get, \[2\left( x-\alpha \right)+2y\frac{dy}{dx}=0\] \[\Rightarrow \left( x-\alpha \right)=-y\frac{dy}{dx}……….(2)\] Eliminating α from (1) and (2), \[{{\left( -y\frac{dy}{dx} \right)}^{2}}+{{y}^{2}}={{a}^{2}}\] \[\Rightarrow {{y}^{2}}\left\{ 1+{{\left( \frac{dy}{dx} \right)}^{2}} \right\}={{a}^{2}}\] is the required DE.
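The constant-elimination procedure itself can also be carried out mechanically; here is a sympy sketch reproducing Question 04 (the symbol names are arbitrary):

```python
import sympy as sp

x, a = sp.symbols('x a')
y = sp.Function('y')

# Question 04's family: circles through the origin with centers on the x-axis.
F = x**2 + y(x)**2 - 2*a*x

# Differentiate the relation w.r.t. x and solve for the parameter a.
a_val = sp.solve(F.diff(x), a)[0]     # a = x + y*dy/dx

# Substituting back eliminates a and yields the differential equation.
de = sp.expand(F.subs(a, a_val))      # y^2 - x^2 - 2*x*y*dy/dx = 0
print(de)
```

The printed expression is exactly the left-hand side of the DE derived by hand, set equal to zero.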
Question 06 Find the differential equation of all circles touching axis of x at the origin. Solution: Let α be the radius of the circle. Then the center must be at (0, α). So the equation of a circle touching the x-axis at the origin is, \[{{x}^{2}}+{{\left( y-\alpha \right)}^{2}}={{\alpha }^{2}}\] \[\Rightarrow {{x}^{2}}+{{y}^{2}}-2y\alpha =0……….(1)\] Where α is an arbitrary constant. Differentiating (1) w.r.t. x, we get \[2x+2y\frac{dy}{dx}-2\alpha \frac{dy}{dx}=0\Rightarrow \alpha =\frac{x+y\frac{dy}{dx}}{\frac{dy}{dx}}\] Therefore from (1) we have, \[{{x}^{2}}+{{y}^{2}}-2y\frac{x+y\frac{dy}{dx}}{\frac{dy}{dx}}=0\] \[\Rightarrow \left( {{x}^{2}}+{{y}^{2}} \right)\frac{dy}{dx}-2xy-2{{y}^{2}}\frac{dy}{dx}=0\] \[\therefore \left( {{x}^{2}}-{{y}^{2}} \right)\frac{dy}{dx}-2xy=0\] Which is the required differential equation. Question 07 Find the differential equation of the family of circles having fixed radius r. Solution: The equation of the family of circles of fixed radius r is \[{{\left( x-\alpha \right)}^{2}}+{{\left( y-\beta \right)}^{2}}={{r}^{2}}……….(1)\] Where α, β are arbitrary constants or parameters and r, the radius of the circle is a fixed constant. We are to eliminate α, β to form the required differential equation. Differentiating (1) w.r.t. x, we get \[2\left( x-\alpha \right)+2\left( y-\beta \right)\frac{dy}{dx}=0\] \[\Rightarrow \left( x-\alpha \right)+\left( y-\beta \right)\frac{dy}{dx}=0……….(2)\] Differentiating again w.r.t. 
x, we get \[1+{{\left( \frac{dy}{dx} \right)}^{2}}+\left( y-\beta \right)\frac{{{d}^{2}}y}{d{{x}^{2}}}=0\] \[\Rightarrow y-\beta =-\frac{1+{{\left( \frac{dy}{dx} \right)}^{2}}}{\frac{{{d}^{2}}y}{d{{x}^{2}}}}……….(3)\] Substituting (3) in (2) we get \[\Rightarrow x-\alpha =\frac{\frac{dy}{dx}\left\{ 1+{{\left( \frac{dy}{dx} \right)}^{2}} \right\}}{\frac{{{d}^{2}}y}{d{{x}^{2}}}}……….(4)\] Eliminating α and β from (1) with the help of (3) and (4) we have, \[{{\left[ \frac{\frac{dy}{dx}\left\{ 1+{{\left( \frac{dy}{dx} \right)}^{2}} \right\}}{\frac{{{d}^{2}}y}{d{{x}^{2}}}} \right]}^{2}}+{{\left[ \frac{1+{{\left( \frac{dy}{dx} \right)}^{2}}}{\frac{{{d}^{2}}y}{d{{x}^{2}}}} \right]}^{2}}={{r}^{2}}\] \[\therefore {{\left\{ 1+{{\left( \frac{dy}{dx} \right)}^{2}} \right\}}^{\frac{3}{2}}}=r\frac{{{d}^{2}}y}{d{{x}^{2}}}\] Which is the required differential equation. Question 08 Find the differential equation of \[y={{e}^{-\frac{kx}{2}}}\left( A\cos nx+B\sin nx \right)\] Where A and B are parameter and k, n are fixed constant. Solution: \[y={{e}^{-\frac{kx}{2}}}\left( A\cos nx+B\sin nx \right)\] \[\Rightarrow {{e}^{\frac{kx}{2}}}.y=A\cos nx+B\sin nx……….(1)\] Differentiating (1) w.r.t. x, we get \[{{e}^{\frac{kx}{2}}}.\frac{k}{2}.y+{{e}^{\frac{kx}{2}}}.\frac{dy}{dx}=-An\sin nx+Bn\cos nx\] \[\Rightarrow {{e}^{\frac{kx}{2}}}\left( \frac{k}{2}y+\frac{dy}{dx} \right)=-An\sin nx+Bn\cos nx\] Differentiating again w.r.t. x, we get \[{{e}^{\frac{kx}{2}}}.\frac{k}{2}.\left( \frac{k}{2}y+\frac{dy}{dx} \right)+{{e}^{\frac{kx}{2}}}\left( \frac{k}{2}\frac{dy}{dx}+\frac{{{d}^{2}}y}{d{{x}^{2}}} \right)=-A{{n}^{2}}\cos nx-B{{n}^{2}}\sin nx\] \[\Rightarrow \frac{{{k}^{2}}}{4}{{e}^{\frac{kx}{2}}}y+\frac{k}{2}{{e}^{\frac{kx}{2}}}\frac{dy}{dx}+\frac{k}{2}{{e}^{\frac{kx}{2}}}\frac{dy}{dx}+{{e}^{\frac{kx}{2}}}\frac{{{d}^{2}}y}{d{{x}^{2}}}=-{{n}^{2}}\left( A\cos nx+B\sin nx. 
\right)\] \[\Rightarrow {{e}^{\frac{kx}{2}}}\left\{ \frac{{{d}^{2}}y}{d{{x}^{2}}}+k\frac{dy}{dx}+\frac{{{k}^{2}}}{4}y \right\}=-{{n}^{2}}{{e}^{\frac{kx}{2}}}y\] \[\therefore \frac{{{d}^{2}}y}{d{{x}^{2}}}+k\frac{dy}{dx}+\left( \frac{{{k}^{2}}}{4}+{{n}^{2}} \right)y=0\] is the required differential equation.

Question 09

If \[\frac{{{x}^{2}}}{{{a}^{2}}+\lambda }+\frac{{{y}^{2}}}{{{b}^{2}}+\lambda }=1\] where a, b are fixed constants and λ is a parameter, then by eliminating λ prove that \[\left( x+y\frac{dy}{dx} \right)\left( x\frac{dy}{dx}-y \right)=\left( {{a}^{2}}-{{b}^{2}} \right)\frac{dy}{dx}\] Solution: We have, \[\frac{{{x}^{2}}}{{{a}^{2}}+\lambda }+\frac{{{y}^{2}}}{{{b}^{2}}+\lambda }=1……….(1)\] Differentiating (1) w.r.t. x, we get \[\frac{2x}{{{a}^{2}}+\lambda }+\frac{2y}{{{b}^{2}}+\lambda }\frac{dy}{dx}=0\] \[\Rightarrow \frac{x}{{{a}^{2}}+\lambda }=-\frac{y\frac{dy}{dx}}{{{b}^{2}}+\lambda }=\frac{x+y\frac{dy}{dx}}{\left( {{a}^{2}}+\lambda \right)-\left( {{b}^{2}}+\lambda \right)}\] by componendo-dividendo, \[\Rightarrow \frac{x}{{{a}^{2}}+\lambda }=-\frac{y\frac{dy}{dx}}{{{b}^{2}}+\lambda }=\frac{x+y\frac{dy}{dx}}{{{a}^{2}}-{{b}^{2}}}……….(2)\] Now from (1) we get, \[x.\frac{x}{{{a}^{2}}+\lambda }+y.\frac{y}{{{b}^{2}}+\lambda }=1\] \[\Rightarrow x\left( \frac{x+y\frac{dy}{dx}}{{{a}^{2}}-{{b}^{2}}} \right)-y\left( \frac{x+y\frac{dy}{dx}}{\left( {{a}^{2}}-{{b}^{2}} \right)\frac{dy}{dx}} \right)=1\] \[\Rightarrow \left( \frac{x+y\frac{dy}{dx}}{{{a}^{2}}-{{b}^{2}}} \right)\left( x-\frac{y}{\frac{dy}{dx}} \right)=1\] \[\Rightarrow \left( \frac{x+y\frac{dy}{dx}}{{{a}^{2}}-{{b}^{2}}} \right)\left( \frac{x\frac{dy}{dx}-y}{\frac{dy}{dx}} \right)=1\] \[\therefore \left( x+y\frac{dy}{dx} \right)\left( x\frac{dy}{dx}-y \right)=\left( {{a}^{2}}-{{b}^{2}} \right)\frac{dy}{dx}\]
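Question 09's identity can be spot-checked numerically at a point of the conic, using implicit differentiation for dy/dx (a sketch with arbitrarily chosen sample values $a^2=4$, $b^2=1$, $\lambda=0.5$):

```python
import math

# Fixed constants and one sample parameter value (arbitrary choices).
a2, b2, lam = 4.0, 1.0, 0.5

# Pick an x on the curve x^2/(a^2+l) + y^2/(b^2+l) = 1 and solve for y > 0.
x = 1.0
y = math.sqrt((b2 + lam) * (1 - x**2 / (a2 + lam)))

# Implicit differentiation of (1): dy/dx = -(x/(a^2+l)) * ((b^2+l)/y)
yp = -(x / (a2 + lam)) * ((b2 + lam) / y)

lhs = (x + y*yp) * (x*yp - y)
rhs = (a2 - b2) * yp
print(lhs, rhs)                 # the two sides agree
assert abs(lhs - rhs) < 1e-12
```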
As we all know, Brexit negotiations are on their way—but we still do not know whether they will actually finish in time. The negotiations will take place topic-by-topic. To organise the negotiations in the most effective way, the topics will all be discussed and finalised in separate meetings, one meeting at a time. This system exists partly because there are (non-cyclic) dependencies between some topics: for example, one cannot have a meaningful talk about tariffs before deciding upon the customs union. The EU can decide on any order in which to negotiate the topics, as long as the mentioned dependencies are respected and all topics are covered. Each of the topics will be discussed at length using every available piece of data, including key results from past meetings. At the start of each meeting, the delegates will take one extra minute for each of the meetings that has already happened by that point, even unrelated ones, to recap the discussions and understand how their conclusions were reached. See Figure 1 for an example. Nobody likes long meetings. The EU would like you to help order the meetings in a way such that the longest meeting takes as little time as possible. The input consists of: One line containing an integer $n$ ($1 \leq n \leq 4 \cdot 10^5$), the number of topics to be discussed. The topics are numbered from $1$ to $n$. $n$ lines, describing the negotiation topics. The $i$th such line starts with two integers $e_ i$ and $d_ i$ ($1 \leq e_ i \leq 10^6$, $0 \leq d_ i < n$), the number of minutes needed to reach a conclusion on topic $i$ and the number of other specific topics that must be dealt with before topic $i$ can be discussed. The remainder of the line has $d_ i$ distinct integers $b_{i,1}, \ldots , b_{i,d_{i}}$ ($1 \le b_{i,j} \le n$ and $b_{i,j} \ne i$ for each $j$), the list of topics that need to be completed before topic $i$. 
It is guaranteed that there are no cycles in the topic dependencies, and that the sum of $d_i$ over all topics is at most $4 \cdot 10^5$. Output the minimum possible length of the longest of all meetings, if meetings are arranged optimally according to the above rules.

Sample Input 1:
3
10 0
10 0
10 0

Sample Output 1:
12

Sample Input 2:
6
2 2 4 3
4 1 5
1 2 2 4
3 1 5
2 0
4 1 3

Sample Output 2:
8
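One way to attack this (not given in the statement, so treat it as a sketch): if topic $i$ is held as the $(p+1)$-th meeting, its length is $e_i + p$, so we want a dependency-respecting order minimizing $\max_i (e_i + p_i)$. This is min-max cost scheduling under precedence constraints, which Lawler's rule solves greedily: fill positions from last to first, always placing, among the topics all of whose dependents are already placed, one with the smallest $e_i$.

```python
import heapq

def min_longest_meeting(n, topics):
    """topics[i] = (e_i, list of 0-indexed prerequisites of topic i)."""
    dependents = [[] for _ in range(n)]   # j in dependents[i]: i must precede j
    remaining = [0] * n                   # dependents of i not yet placed
    for i, (_, prereqs) in enumerate(topics):
        for b in prereqs:
            dependents[b].append(i)
            remaining[b] += 1

    # Topics with no unplaced dependents may go last; pick the cheapest e_i.
    heap = [(topics[i][0], i) for i in range(n) if remaining[i] == 0]
    heapq.heapify(heap)
    best = 0
    for pos in range(n - 1, -1, -1):      # pos = number of meetings held before
        e, i = heapq.heappop(heap)
        best = max(best, e + pos)
        for b in topics[i][1]:            # i is placed; release its prerequisites
            remaining[b] -= 1
            if remaining[b] == 0:
                heapq.heappush(heap, (topics[b][0], b))
    return best

# The two samples (prerequisites converted to 0-indexed):
print(min_longest_meeting(3, [(10, []), (10, []), (10, [])]))  # 12
print(min_longest_meeting(6, [(2, [3, 2]), (4, [4]), (1, [1, 3]),
                              (3, [4]), (2, []), (4, [2])]))   # 8
```

This runs in O((n + sum of d_i) log n), comfortably within the stated limits; its optimality rests on the standard Lawler argument for minimizing a maximum of nondecreasing per-job cost functions under precedence constraints.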
The motivation behind density matrices [1]: In quantum mechanics, the state of a quantum system is represented by a state vector, denoted $|\psi\rangle$ (and pronounced ket). A quantum system with a state vector $|\psi\rangle$ is called a pure state. However, it is also possible for a system to be in a statistical ensemble of different state vectors. For example, there may be a $50\%$ probability that the state vector is $|\psi_1\rangle$ and a $50\%$ chance that the state vector is $|\psi_2\rangle$. This system would be in a mixed state. The density matrix is especially useful for mixed states, because any state, pure or mixed, can be characterized by a single density matrix. A mixed state is different from a quantum superposition. The probabilities in a mixed state are classical probabilities (as in the probabilities one learns in classical probability theory/statistics), unlike the quantum probabilities in a quantum superposition. In fact, a quantum superposition of pure states is another pure state, for example, $\frac{|0\rangle + |1\rangle}{\sqrt{2}}$ . In this case, the coefficients $\frac{1}{\sqrt {2}}$ are not probabilities, but rather probability amplitudes. Example: light polarization An example of pure and mixed states is light polarization. Photons can have two helicities, corresponding to two orthogonal quantum states, $|R\rangle$ (right circular polarization) and $|L\rangle$ (left circular polarization). A photon can also be in a superposition state, such as $\frac{|R\rangle + |L\rangle}{\sqrt{2}}$ (vertical polarization) or $\frac{|R\rangle - |L\rangle}{\sqrt{2}}$ (horizontal polarization). More generally, it can be in any state $\alpha|R\rangle + \beta |L\rangle$ (with $|\alpha|^2+|\beta|^2=1$) corresponding to linear, circular or elliptical polarization.
If we pass $\frac{|R\rangle + |L\rangle}{\sqrt{2}}$ polarized light through a circular polarizer which allows either only $|R\rangle$ polarized light, or only $|L\rangle$ polarized light, the intensity would be reduced by half in both cases. This may make it seem like half of the photons are in state $|R\rangle$ and the other in state $|L\rangle$. But this is not correct: Both $|R\rangle$ and $|L\rangle$ are partly absorbed by a vertical linear polarizer, but the $\frac{|R\rangle+|L\rangle}{\sqrt 2}$ light will pass through that polarizer with no absorption whatsoever. However, unpolarized light such as the light from an incandescent light bulb is different from any state like $\alpha|R\rangle + \beta|L\rangle$ (linear, circular or elliptical polarization). Unlike linearly or elliptically polarized light, it passes through the polarizer with $50\%$ intensity loss whatever the orientation of the polarizer; and unlike circularly polarized light, it cannot be made linearly polarized with any wave plate because randomly oriented polarization will emerge from a wave plate with random orientation. Indeed, unpolarized light cannot be described as any state of the form $\alpha |R\rangle + \beta |L\rangle$ in a definite sense. However, unpolarized light can be described with ensemble averages, e.g. that each photon is either $|R\rangle$ with $50\%$ probability or $|L\rangle$ with $50\%$ probability. The same behaviour would occur if each photon was either vertically polarized with $50\%$ probability or horizontally polarized with $50\%$ probability. Therefore, unpolarized light cannot be described by any pure state but can be described as a statistical ensemble of pure states in at least two ways (the ensemble of half left and half right circularly polarized, or the ensemble of half vertically and half horizontally linearly polarized). These two ensembles are completely indistinguishable experimentally, and therefore they are considered the same mixed state. 
One of the advantages of the density matrix is that there is just one density matrix for each mixed state, whereas there are many statistical ensembles of pure states for each mixed state. Nevertheless, the density matrix contains all the information necessary to calculate any measurable property of the mixed state. Where do mixed states come from? To answer that, consider how to generate unpolarized light. One way is to use a system in thermal equilibrium, a statistical mixture of enormous numbers of microstates, each with a certain probability (the Boltzmann factor), switching rapidly from one to the next due to thermal fluctuations. Thermal randomness explains why an incandescent light bulb, for example, emits unpolarized light. A second way to generate unpolarized light is to introduce uncertainty in the preparation of the system, for example, passing it through a birefringent crystal with a rough surface, so that slightly different parts of the beam acquire different polarizations. A third way to generate unpolarized light uses an EPR setup: A radioactive decay can emit two photons travelling in opposite directions, in the quantum state $\frac{|R,L\rangle + |L,R\rangle}{\sqrt{2}}$. The two photons together are in a pure state, but if you only look at one of the photons and ignore the other, the photon behaves just like unpolarized light. More generally, mixed states commonly arise from a statistical mixture of the starting state (such as in thermal equilibrium), from uncertainty in the preparation procedure (such as slightly different paths that a photon can travel), or from looking at a subsystem entangled with something else. Obtaining the density matrix [2]: As mentioned before, a system can be in a statistical ensemble of different state vectors. Say there is a probability $p_1$ that the state vector is $|\psi_1\rangle$ and a probability $p_2$ that it is $|\psi_2\rangle$; these are the classical probabilities of each state being prepared.
Say now we want to find the expectation value of an operator $\hat{O}$. It is given as: $$\langle \hat{O} \rangle = p_1\langle \psi_1 \lvert \hat{O} \lvert \psi_1 \rangle + p_2\langle \psi_2 \lvert \hat{O} \lvert \psi_2 \rangle$$ Note that $\langle \psi_1 \lvert \hat{O} \lvert \psi_1 \rangle$ and $\langle \psi_2 \lvert \hat{O} \lvert \psi_2 \rangle$ are scalars, and the trace of a scalar is the scalar itself. Thus, we can write the above expression as: $$\langle \hat{O} \rangle = Tr(p_1\langle \psi_1 \lvert \hat{O} \lvert \psi_1 \rangle) + Tr(p_2\langle \psi_2 \lvert \hat{O} \lvert \psi_2 \rangle)$$ Now, using the cyclic invariance and linearity properties of the trace: $$ \langle \hat{O} \rangle = p_1 Tr(\hat{O} \lvert \psi_1 \rangle \langle \psi_1 \lvert) + p_2 Tr(\hat{O} \lvert \psi_2 \rangle \langle \psi_2 \lvert)$$ $$= Tr(\hat{O} (p_1 \lvert \psi_1 \rangle \langle \psi_1 \lvert + p_2 \lvert \psi_2 \rangle \langle \psi_2 \lvert)) = Tr(\hat{O} \rho)$$ where $\rho$ is what we call the density matrix. The density operator contains all the information needed to calculate an expectation value for the experiment. Thus, basically the density matrix $\rho$ is $$p_1 \lvert \psi_1 \rangle \langle \psi_1 \lvert + p_2 \lvert \psi_2 \rangle \langle \psi_2 \lvert$$ in this case. You can obviously extrapolate this logic for when more than just two state vectors are possible for a system, with different probabilities. Calculating the density matrix: Let's take an example, as follows. In the above image, the incandescent light bulb (1) emits completely randomly polarized photons (2) with a mixed-state density matrix. As mentioned before, unpolarized light can be described with an ensemble average, i.e. say each photon is either $|R\rangle$ or $|L\rangle$ with $50\%$ probability for each. Another possible ensemble average is: each photon is either $\frac{|R\rangle+|L\rangle}{\sqrt 2}$ or $\frac{|R\rangle - |L\rangle}{\sqrt 2}$ with $50\%$ probability for each.
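The trace identity derived above is easy to sanity-check numerically (a numpy sketch; the two states, the weights and the observable below are arbitrary examples, not taken from the text):

```python
import numpy as np

# Two example pure states (arbitrary choices) and their classical weights.
psi1 = np.array([1.0, 0.0])                # |0>
psi2 = np.array([1.0, 1.0]) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
p1, p2 = 0.5, 0.5

O = np.array([[1.0, 0.0], [0.0, -1.0]])    # example observable (Pauli Z)

# Direct ensemble average of the two expectation values.
direct = p1 * psi1 @ O @ psi1 + p2 * psi2 @ O @ psi2

# Density matrix rho = p1|psi1><psi1| + p2|psi2><psi2|, then Tr(O rho).
rho = p1 * np.outer(psi1, psi1) + p2 * np.outer(psi2, psi2)
via_trace = np.trace(O @ rho)

print(direct, via_trace)   # both approximately 0.5
assert np.isclose(direct, via_trace)
```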
There are lots of other possibilities too. Try to come up with some yourself. The point to note is that the density matrix for all these possible ensembles will be exactly the same. And this is exactly the reason why density matrix decomposition into pure states is not unique. Let's check: Case 1: $50\%$ $|R\rangle$ & $50\%$ $|L\rangle$ $$\rho_{\text{mixed}} = 0.5 |R\rangle \langle R| + 0.5 |L\rangle \langle L|$$ Now, in the basis $\{|R\rangle, |L\rangle\}$, $|R\rangle$ can be denoted as $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $|L\rangle$ can be denoted as $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ $$\therefore 0.5 \left(\begin{bmatrix} 1 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} 1 & 0 \end{bmatrix}\right) + 0.5 \left(\begin{bmatrix} 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} 0 & 1 \end{bmatrix}\right)$$ $$= 0.5 \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + 0.5\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$ $$= \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}$$ Case 2: $50\%$ $\frac{|R\rangle + |L\rangle}{\sqrt 2}$ & $50\%$ $\frac{|R\rangle - |L\rangle}{\sqrt 2}$ $$\rho_{\text{mixed}} = 0.5 \left(\frac{|R\rangle + |L\rangle}{\sqrt 2}\right)\otimes \left(\frac{\langle R| + \langle L|}{\sqrt 2}\right) + 0.5 \left(\frac{|R\rangle - |L\rangle}{\sqrt 2}\right)\otimes \left(\frac{\langle R| - \langle L|}{\sqrt 2}\right)$$ In the basis $\{\frac{|R\rangle + |L\rangle}{\sqrt 2}, \frac{|R\rangle - |L\rangle}{\sqrt 2}\}$, $\frac{|R\rangle + |L\rangle}{\sqrt 2}$ can be denoted as $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\frac{|R\rangle - |L\rangle}{\sqrt 2}$ can be denoted as $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ $$\therefore 0.5 \left(\begin{bmatrix} 1 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} 1 & 0 \end{bmatrix}\right) + 0.5 \left(\begin{bmatrix} 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} 0 & 1 \end{bmatrix}\right)$$ $$= 0.5 \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + 0.5\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$ $$= \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 
\end{bmatrix}$$ Thus, we can clearly see that we get the same density matrix in both case 1 and case 2. However, after passing through the vertical plane polarizer (3), the remaining photons are all vertically polarized (4) and have the pure state density matrix: $$\rho_{\text{pure}} = 1 \left(\frac{|R\rangle + |L\rangle}{\sqrt 2}\right)\otimes \left(\frac{\langle R| + \langle L|}{\sqrt 2}\right) + 0 \left(\frac{|R\rangle - |L\rangle}{\sqrt 2}\right)\otimes \left(\frac{\langle R| - \langle L|}{\sqrt 2}\right) $$ In the basis $\{\frac{|R\rangle + |L\rangle}{\sqrt 2}, \frac{|R\rangle - |L\rangle}{\sqrt 2}\}$, $\frac{|R\rangle + |L\rangle}{\sqrt 2}$ can be denoted as $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\frac{|R\rangle - |L\rangle}{\sqrt 2}$ can be denoted as $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ $$\therefore 1 \left(\begin{bmatrix} 1 \\ 0 \end{bmatrix} \otimes \begin{bmatrix} 1 & 0 \end{bmatrix}\right) + 0 \left(\begin{bmatrix} 0 \\ 1 \end{bmatrix} \otimes \begin{bmatrix} 0 & 1 \end{bmatrix}\right)$$ $$= 1 \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + 0\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$$ $$= \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$ The single qubit case: If your system contains just a single qubit and you know that its state is $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ (where $|\alpha|^2+|\beta|^2=1$), then you are already sure that the 1-qubit system has the state $|\psi\rangle$ with probability $1$! In this case, the density matrix will simply be: $$\rho_{\text{pure}} = 1|\psi\rangle \langle \psi|$$ If you're using the orthonormal basis $\{\alpha|0\rangle + \beta|1\rangle,\beta^*|0\rangle - \alpha^*|1\rangle\}$, the density matrix will simply be: $$\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$ This is very similar to 'case 2' above, so I didn't show the calculations. You can ask questions in the comments if this portion seems unclear. However, you could also use the $\{|0\rangle,|1\rangle\}$ basis as @DaftWullie did in their answer.
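As an aside (not part of the original answer), both claims — that the two ensembles give the same mixed density matrix, and that a pure-state density matrix satisfies $\rho^2 = \rho$ — are easy to check numerically with a few lines of numpy:

```python
import numpy as np

R = np.array([1, 0], dtype=complex)           # |R> in the {|R>, |L>} basis
L = np.array([0, 1], dtype=complex)           # |L>
plus = (R + L) / np.sqrt(2)                   # (|R> + |L>)/sqrt(2)
minus = (R - L) / np.sqrt(2)                  # (|R> - |L>)/sqrt(2)

def proj(psi):
    """Outer product |psi><psi|."""
    return np.outer(psi, psi.conj())

rho1 = 0.5 * proj(R) + 0.5 * proj(L)          # case 1 ensemble
rho2 = 0.5 * proj(plus) + 0.5 * proj(minus)   # case 2 ensemble
print(np.allclose(rho1, rho2))                # True: same mixed density matrix

rho_pure = proj(plus)                         # after the polarizer
print(np.allclose(rho_pure @ rho_pure, rho_pure))  # True: pure state is idempotent
print(np.allclose(rho1 @ rho1, rho1))         # False: the mixed state is not
```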
In the general case for a 1-qubit state, the density matrix, in the $\{|0\rangle,|1\rangle\}$ basis would be: $$\rho = 1(\alpha |0\rangle + \beta |1\rangle) \otimes (\alpha^* \langle 0| + \beta^* \langle 1|)$$ $$= \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \otimes \begin{bmatrix} \alpha^* & \beta^* \end{bmatrix}$$ $$= \begin{bmatrix} \alpha\alpha^* & \alpha\beta^* \\ \beta\alpha^* & \beta\beta^* \end{bmatrix}$$ Notice that this matrix $\rho$ is idempotent i.e. $\rho = \rho^2$. This is an important property of the density matrices of a pure state and helps us to distinguish them from density matrices of mixed states. Obligatory exercises: 1. Show that density matrices of pure states can be diagonalized to the form $\text{diag}(1,0,0,...)$. 2. Prove that density matrices of pure states are idempotent. Sources & References: [1]: https://en.wikipedia.org/wiki/Density_matrix [2]: https://physics.stackexchange.com/a/158290 Image Credits: User Kaidoron Wikimedia
I cannot see how Willie Wong's example of the Bernstein-Robinson result supports his conclusion. It seems to me to do the opposite, and I am not alone here. Halmos admits himself in his autobiography: "The Bernstein-Robinson proof uses non-standard models of higher order predicate languages, and when Abby [Robinson] sent me his preprint I really had to sweat to pinpoint and translate its mathematical insight." Halmos did sweat because, as all of his comments and actions regarding NSA indicate, he was against it for philosophical or personal reasons, and so he was eager to downplay this result precisely because it seemed like support for using NSA, which, at least in Robinson's approach, is nonconstructive due to the reliance on the existence of nonprincipal ultrafilters (also, the compactness theorem relies on some equivalent of the axiom of choice). Also, the fact that a formal proof of some formula exists (which is precisely what it means to be a theorem) is only trivially relevant to the question of whether a theory might help you find a proof. Besides, who other than automated theorem-provers actually thinks in terms of formal proofs? In my experience, the concepts and tools of a theory, the objects that it lets you talk about, and the ideas that it lets you express are what make a theory useful for proving things. One thing that the OP might find attractive about NSA is that saying " x is infinitely close to y" is perfectly fine and meaningful -- and it probably means what you already think it means: two numbers are infinitely close iff their difference is infinitely small, i.e., an infinitesimal. You also get things like halos (all numbers infinitely close to some number) and shadows (the standard number infinitely close to some number), which can be fun and intuitive concepts to think with. For example, here is how the limit of a (hyperreal) sequence is defined. First, sequences are no longer indexed by the natural numbers $\mathbb{N}$. 
Rather, sequences are indexed by the hypernaturals $^*\mathbb{N}$, which include numbers larger than any standard natural. Such numbers are called infinite (or unlimited). (Warning: this is not the same concept as "infinity" in "as x goes to infinity"; infinite naturals are smaller than (positive) infinity, when it makes sense to compare them.) Now, a hyperreal $L$ is the limit of a sequence $\langle s_n \rangle$ (indexed by $^*\mathbb{N}$!) iff $L$ is infinitely close to $s_n$ for all infinite $n$. For another example, consider proofs using "sloppy" reasoning where you end up with some infinitesimal term and so just ignore it or drop it from an equation (provoking derisive comments about "ghosts of departed quantities"). In NSA, rather than ignoring the term, you can actually say that it's infinitesimal and end up with a result that is infinitely close to the result of your sloppy alternative. E.g., let (the hyperfunction) $f(x) = x^2$ and consider the (I presume familiar) formula for the derivative, where we will let $h$ be a nonzero infinitesimal: $$\begin{align} \frac{f(x+h) - f(x)}{h} &= \frac{(x+h)^2 - x^2}{h} \\ &= \frac{x^2 + 2xh + h^2 - x^2}{h} \\ &= 2x + h \\ &\simeq 2x \end{align}$$ The symbol $\simeq$ denotes the relation "infinitely close". This derivation works because, when $h$ is an infinitesimal, $a + h$ is infinitely close to $a$ for any hyperreal $a$. Under sensible restrictions on $f$ and $x$, this derivation shows that $2x$ is the standard derivative of $x^2$, as every schoolgirl knows. A cost-benefit analysis for learning NSA should probably include (i) for a benefit, how interesting or valuable you find the nonstandard concepts and (ii) for a cost, how much work you'll have to do to learn it. The latter will depend on what text or approach you choose.
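As a side note of mine: the "compute with $h$, discard it at the end" bookkeeping in the derivative example above is mechanically similar to dual-number arithmetic, where a nilpotent $\varepsilon$ plays the role of $h$. This is only an analogy (dual numbers are not hyperreals), but it makes the cancellation of the $h^2$ term concrete:

```python
class Dual:
    """Numbers a + b*eps with eps**2 == 0 (an analogy for dropping h**2 terms)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps; the eps**2 term vanishes
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def f(x):
    return x * x          # f(x) = x^2

x = 3.0
h = Dual(0.0, 1.0)        # the "infinitesimal" increment
fx = f(Dual(x) + h)       # f(x + h) = x^2 + 2x*h, with the h^2 term already gone
print(fx.a, fx.b)         # 9.0 6.0 -> the eps-coefficient is the derivative 2x at x = 3
```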
If you are willing to take some things for granted and just use the resulting tools, you can get away with bypassing a good chunk of the model-theoretic machinery (compactness, ultrafilters, elementary extensions, transfer, formal languages). If you understand the ultrapower construction, which constructs the hyperreals as equivalence classes of infinite sequences of real numbers (similar to the construction of the reals from the rationals using Cauchy sequences), then the resulting system behaves like you would expect -- relations and operations are defined componentwise. This part is relatively easy. Alternatively, you can get away with not understanding the construction very well if you are willing to treat the definitions of the relations and operations on the hyperreals as axiomatic. If you want to look into NSA, I would recommend either (a) Goldblatt's Lectures on the Hyperreals if you don't have a strong background or interest in mathematical logic or (b) Hurd and Loeb's Introduction to Nonstandard Real Analysis otherwise. The latter is out of print and sadly about $100 if you want to buy it, but check libraries. It's very thoughtful and well-written. Also, if you are excited about the model-theoretic aspects, look them up in Chang and Keisler's Model Theory book as you go along. Hodges' model theory book is also very good but doesn't cover this material as extensively. Cheers, Rachel
The formula for the standardized residuals is: $$\begin{align}\text{Pearson's residuals}\,&=\,\frac{\text{Observed - Expected}}{ \sqrt{\text{Expected}}}\\d_{ij}&=\frac{n_{ij}-m_{ij}}{\sqrt{m_{ij}}}\end{align}$$ where $m_{ij} = E( f_{ij})$ is the expected frequency of the $i$-th row and the $j$-th column. The sum of the squared standardized residuals is the chi-square value. From Extending Mosaic Displays: Marginal, Partial, and Conditional Views of Categorical Data by Michael Friendly: Under the assumption of independence, these values roughly correspond to two-tailed probabilities $p < .05$ and $p < .0001$ that a given value of $| d_{ij} |$ exceeds $2$ or $4$. Notice the following footnote: For exploratory purposes, we do not usually make adjustments (e.g., Bonferroni) for multiple tests because the goal is to display the pattern of residuals in the table as a whole. However, the number and values of these cutoffs can be easily set by the user. We are dealing with a multi-way table, in reference to which the R documentation for mosaicplot states: Extended mosaic displays show the standardized residuals of a loglinear model of the counts by the color and outline of the mosaic's tiles. (Standardized residuals are often referred to a standard normal distribution.) Negative residuals are drawn in shades of red and with broken outlines; positive ones are drawn in blue with solid outlines. The fact that this is a three-way contingency table complicates the interpretation, which is very nicely explained in @rolando2's answer.
Here is a simulation with a made-up table that resembles the OP to clarify the calculations:

tab_df = data.frame(expand.grid(
    age = c("15-24", "25-39", ">40"),
    attitude = c("no", "moderate"),
    memory = c("yes", "no")),
    count = c(1, 4, 3, 1, 8, 39, 32, 36, 25, 35, 32, 38))

(tab = xtabs(count ~ ., data = tab_df))

, , memory = yes

       attitude
age     no moderate
  15-24  1        1
  25-39  4        8
  >40    3       39

, , memory = no

       attitude
age     no moderate
  15-24 32       35
  25-39 36       32
  >40   25       38

summary(tab)

Call: xtabs(formula = count ~ ., data = tab)
Number of cases in table: 254
Number of factors: 3
Test for independence of all factors:
  Chisq = 78.33, df = 7, p-value = 3.011e-14

require(vcd)
mosaic(~ memory + age + attitude, data = tab, shade = TRUE)
expected = mosaic(~ memory + age + attitude, data = tab, type = "expected")
expected

# Finding, as an example, the expected count in >40 with memory and moderate attitude:
over_forty = sum(3, 39, 25, 38)
mem_yes    = sum(1, 4, 3, 1, 8, 39)
att_mod    = sum(1, 8, 39, 35, 32, 38)
exp_older_mem_mod = over_forty * mem_yes * att_mod / sum(tab)^2

# Corresponding standardized Pearson's residual:
(39 - exp_older_mem_mod) / sqrt(exp_older_mem_mod)
# [1] 6.709703

It is interesting to compare the graphical representation to the results of the Poisson regression, which illustrates perfectly the English interpretation in @rolando2's answer:

fit <- glm(count ~ age + attitude + memory, data = tab_df, family = poisson())
summary(fit)

Call:
glm(formula = count ~ age + attitude + memory, family = poisson(), data = tab_df)

Coefficients:
                 Estimate Std. Error z value Pr(>|z|)
(Intercept)        1.7999     0.1854   9.708  < 2e-16 ***
age25-39           0.1479     0.1643   0.900  0.36794
age>40             0.4199     0.1550   2.709  0.00674 **
attitudemoderate   0.4153     0.1282   3.239  0.00120 **
memoryno           1.2629     0.1514   8.344  < 2e-16 ***
Homework Statement
A particle moves along a defined curve so that the tangential component of its acceleration is ##a_t=-ks##, where k is a constant and s denotes the arc length measured from a point Q. a) Find an expression for the velocity as a function of s. b) Supposing that at Q its velocity equals 3.6 m/s and at A (s=5.4 m) it equals 1.8 m/s, find k and the curvature radius at A, knowing that the acceleration there has magnitude 3.0 m/s^2. c) Find at which distance the particle inverts its movement.

Homework Equations
##\vec v= v e_t##
##\vec a= a_t e_t + a_n e_n = \frac{dv}{dt} e_t + \frac {v^2}{\rho} e_n##

So I know that ##a_t = \frac{dv}{dt}=-ks## and ##\frac{dv}{dt}=v\frac{dv}{ds}##, then: $$v\, dv=-ks\, ds \rightarrow (v(s))^2=-ks^2+c$$ and using my initial conditions it follows that: $$(3.6)^2=c=12.96$$ and $$(1.8)^2=12.96-(5.4)^2 k \rightarrow k=\frac{12.96-3.24}{29.16}\approx 0.33 \rightarrow (v(s))^2=12.96-0.33\,s^2$$ What bothers me is finding at which point it turns; I am having trouble even getting started, any help would be appreciated.
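Integrating ##v\,dv = -ks\,ds## gives ##v^2 = v_0^2 - ks^2## (note the ##s^2##), so the given data fix k numerically, and the acceleration magnitude at A then fixes the curvature radius. A quick numeric check (my own sketch, not part of the original post):

```python
import math

v0, vA, sA = 3.6, 1.8, 5.4          # m/s, m/s, m
# v(s)^2 = v0^2 - k*s^2, from integrating v dv = -k s ds
k = (v0**2 - vA**2) / sA**2
print(k)                             # 0.333... (units 1/s^2)

# Consistency check for part (b): total acceleration at A has magnitude 3.0 m/s^2
a_t = -k * sA                        # tangential component: -1.8 m/s^2
a_n = math.sqrt(3.0**2 - a_t**2)     # normal component: 2.4 m/s^2
rho = vA**2 / a_n                    # curvature radius at A: 1.35 m
print(a_t, a_n, rho)
```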
Definition:Measurable Set Contents Definition Let $\left({X, \Sigma}\right)$ be a measurable space. Measurable Sets of an Arbitrary Outer Measure Let $\mu^*$ be an outer measure on $X$. Then $S \subseteq X$ is $\mu^*$-measurable if and only if: $\mu^* \left({A}\right) = \mu^* \left({A \cap S}\right) + \mu^* \left({A \setminus S}\right)$ for every $A \subseteq X$. By Set Difference as Intersection with Complement, this is equivalent to: $\mu^* \left({A}\right) = \mu^* \left({A \cap S}\right) + \mu^* \left({A \cap \complement \left({S}\right)}\right)$ where $\complement \left({S}\right)$ denotes the relative complement of $S$ in $X$. The collection of $\mu^*$-measurable sets is denoted $\mathfrak M \left({\mu^*}\right)$ and is a $\sigma$-algebra over $X$. Measurable Subsets of the Reals $S \subseteq \R$ is (Lebesgue) measurable if and only if, for every $A \subseteq \R$: $\lambda^* \left({A}\right) = \lambda^* \left({A \cap S}\right) + \lambda^* \left({A \setminus S}\right)$ where $\lambda^*$ is the Lebesgue outer measure. The set of all measurable sets of $\R$ is frequently denoted $\mathfrak M_\R$ or just $\mathfrak M$. Measurable Subsets of $\R^n$ $S \subseteq \R^n$ is measurable if and only if, for every $A \subseteq \R^n$: $m^*A = m^* \left({A \cap S}\right) + m^* \left({A \cap \complement \left({S}\right)}\right)$ where: $\complement \left({S}\right)$ is the complement of $S$ in $\R^n$ $m^*$ is defined as: $\displaystyle m^* \left({S}\right) = \inf_{\left\{ {I_k}\right\}: S \mathop \subseteq \cup I_k} \sum v \left({I_k}\right)$ where: $\left\{{I_k}\right\}$ is a sequence of closed rectangles of the form: $I_k = \left[{a_1 \,.\,.\, b_1}\right] \times \dots \times \left[{a_n \,.\,.\, b_n}\right]$ $v \left({I_k}\right)$ is the "volume" $\displaystyle \prod_{i \mathop = 1}^n \left\vert{b_i - a_i}\right\vert$ The set of all measurable sets of $\R^n$ is frequently denoted $\mathfrak M_{\R^n}$. Also see Existence of Non-Measurable Subset of Real Numbers: from the axiom of choice, it is demonstrated that there exist non-measurable subsets of $\R$.
I am currently reviewing an old Calculus textbook and I stumbled upon two questions that, for me, have the wrong answers in the answer key. I would appreciate it if you could check whether my reasoning is correct. They follow: Question 1) If $\lim_{x \to 5} f(x) = 2$ and $\lim_{x \to 5} g(x) = 0$ then $\lim_{x \to 5} \frac{f(x)}{g(x)}$ does not exist. True or false? Textbook's answer: True My answer: False (isn't that what L'Hôpital is all about?) Question 2) If $\lim_{x \to a} f(x) = 4$ and $\lim_{x \to a} g(x)$ does not exist, then $\lim_{x \to a} \left[ f(x) + g(x) \right]$ does not exist. Textbook's answer: True For this second one, I agree but just wanted to make sure that the textbook's answer is correct. EDIT: I see my mistake on the first one: the limit of the numerator $f(x)$ is $2$, not $0$, so L'Hôpital does not apply (I read it too fast!) Thank you.
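A concrete instance for Question 1 (my own example, supporting the textbook's "True"): take $f(x) = 2$ and $g(x) = x - 5$. Then $f \to 2$ and $g \to 0$ as $x \to 5$, but $f(x)/g(x) = 2/(x-5)$ blows up with opposite signs on the two sides of $5$, so the limit does not exist:

```python
# f(x) = 2 and g(x) = x - 5: f -> 2 and g -> 0 as x -> 5
def quotient(x):
    return 2.0 / (x - 5.0)

for eps in (1e-3, 1e-6):
    # large positive from the right, large negative from the left
    print(quotient(5 + eps), quotient(5 - eps))
# The one-sided values diverge to +inf and -inf, so the two-sided limit does not exist.
```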
Abbreviation: CSlat

Definition: A complete semilattice is a directed complete partial order $\mathbf{P}=\langle P,\leq \rangle$ such that every nonempty subset of $P$ has a greatest lower bound: $\forall S\subseteq P\ (S\ne\emptyset\Longrightarrow \exists z\in P(z=\bigwedge S))$.

Morphisms: Let $\mathbf{P}$ and $\mathbf{Q}$ be complete semilattices. A morphism from $\mathbf{P}$ to $\mathbf{Q}$ is a function $f:P\rightarrow Q$ that preserves all nonempty meets and all directed joins: $z=\bigwedge S\Longrightarrow f(z)=\bigwedge f[S]$ for all nonempty $S\subseteq P$, and $z=\bigvee D\Longrightarrow f(z)= \bigvee f[D]$ for all directed $D\subseteq P$.

Classtype: second-order
Amalgamation property
Strong amalgamation property
Epimorphisms are surjective
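As an illustration of my own (not from the page): the powerset of a finite set, ordered by inclusion, is a complete semilattice in which nonempty meets are intersections, and preservation of nonempty meets by a map (the meet-semilattice half of the morphism condition; directed joins are trivial in a finite order) can be checked by brute force:

```python
from itertools import chain, combinations

base = frozenset({1, 2, 3})
# The powerset of {1,2,3} ordered by inclusion; every nonempty family has a meet.
powerset = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(base), r) for r in range(len(base) + 1))]

def meet(family):
    """Greatest lower bound of a nonempty family: the intersection."""
    it = iter(family)
    z = next(it)
    for s in it:
        z = z & s
    return z

# f(S) = S ∩ {1,2} preserves all nonempty meets, so it is a meet morphism:
f = lambda s: s & frozenset({1, 2})
ok = all(f(meet(fam)) == meet([f(s) for s in fam])
         for r in range(1, len(powerset) + 1)
         for fam in combinations(powerset, r))
print(ok)  # True
```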
Yes, a quantum computer could be simulated by a Turing machine, though this shouldn't be taken to imply that real-world quantum computers couldn't enjoy quantum advantage, i.e. a significant implementation advantage over real-world classical computers. As a rule-of-thumb, if a human could manually describe or imagine how something ought to operate, that ... Yes, it can do so in a rather trivial way: Use only reversible classical logic gates to simulate computations using boolean logic (for instance, using TOFFOLI to simulate NAND gates), use only the standard basis states $\lvert 0\rangle$ and $\lvert 1\rangle$ as input, and only perform standard basis state measurements at the output. In this way you can ... Suppose that you have a quantum algorithm with $2^{60}$ possible inputs. Suppose also that it would take 1 nanosecond to run this on a supercomputer (which is unrealistically optimistic!). The total time required to run through all possible inputs would be 36.5 years. Clearly it would be much better to just run the instance that you care about, and get the ... One way of writing quantum programs is with QISKit. This can be used to run the programs on IBM's devices. The QISKit website suggests the following code snippet to get you going, which builds an entangled circuit as you want. It is also the same process as in the answer by datell. I'll comment on it line-by-line. # import and initialize the method used to ... There are plenty of different variants, particularly with regards to the conditions on the Hamiltonian. It's a bit of a game, for example, to try and find the simplest possible class of Hamiltonians for which simulation is still BQP-complete. The statement will roughly be along the lines of: let $|\psi\rangle$ be a (normalised) product state, $H$ be a ... Assuming you are considering a gate-based quantum computer, the easiest way to produce an entangled state is to produce one of the Bell states.
The following circuit shows the Bell state $\left| \Phi^+ \right>$. By examining $\left| \psi_0 \right>$, $\left| \psi_1 \right>$ and $\left| \psi_2 \right>$ we can determine the entangled state after ... That depends on your definitions of "commercial" and of "quantum computer". The company D-Wave Systems has been offering what they call quantum computers commercially since 2011. Many things seem to point towards those being adiabatic quantum computers (though people disagree on this). That doesn't quite fit the kind of quantum computers that are becoming ... I guess that he's right enough for the moment; quantum mechanics is part of our best theory of the universe, which by definition means that we think the universe works like that. It's pretty circular though. When we have some model of the universe, what that literally means is that we think that the universe is operating according to that model. Currently ... $\newcommand{\ket}[1]{\left|#1\right>}$I'm not sure using sparsity is a good approach here: even single-qubit gates could easily turn a sparse state into a dense one. But you can use the stabilizer formalism if you only use Clifford gates. Here is a short recap (notation): The single-qubit Pauli group is $G_1=\langle X, Y, Z\rangle$, i.e. all possible ... I believe what you're after is NIST's Quantum Zoo, a comprehensive catalog of quantum algorithms maintained by Stephen Jordan. Its sections include: Algebraic and Number Theoretic Algorithms (14 items), Oracular Algorithms (34 items), Approximation and Simulation Algorithms (12 items), and for each algorithm it includes its speedup, a description and relevant ... One way to perform Z rotations by arbitrary angles is to approximate them with a sequence of Hadamard and T gates. If you need the approximation to have maximum error $\epsilon$, there are known constructions that do this using roughly $3 \lg \frac{1}{\epsilon}$ T gates.
See "Optimal ancilla-free Clifford+T approximation of z-rotations" by Ross et al. ... Yes, it is possible to obtain this information, but only for troubleshooting purposes, not for using it in the code. Dump functions dump the status of the target machine into a file or to the console output. If the program is executed on the full-state simulator, this status will include the wave function of the whole system (for DumpMachine) or of the ... Apart from the formal result about #P-hardness, there's something worth touching on, about the nature of strong simulation itself. I'll comment first on strong simulation, and then specifically on the quantum case. 1. Strong simulation even of classical randomised computation is hard. Strong simulation is a very powerful concept — not only in the fact ... There are many possible ways to compactly represent a state, the usefulness of which strongly depends on the context. First of all, it is important to notice that it is not possible to have a procedure that can map any state into a more efficient representation of the same state (for the same reason why it is obviously not possible to faithfully compress any ... A conventional Hamiltonian is Hermitian. Hence, if it contains a non-Hermitian term, it must either also contain its Hermitian conjugate as another term, or have 0 weight. In this particular case, since $Z\otimes X\otimes Y$ is Hermitian itself, the coefficient would have to be 0. So, if you're talking about conventional Hamiltonians, you've probably made a ... There are two questions here. The first asks how you might actually implement this in code, and the second asks what's the point if you know which oracle you're passing in. Implementation: Probably the best way is to create a function IsBlackBoxConstant which takes the oracle as input, then runs the Deutsch Oracle program to determine whether it is constant. ... To my mind, this theorem is not very well stated in this form, if taken out of context.
Where it says "phase gates", this may be misleading. It means specifically just $S=\sqrt{Z}$ and not what I think of as a phase gate, which can have an arbitrary phase (but they have very specifically introduced their terminology about 3 pages earlier). This is a key ... Taking your comment to Kiro to its logical conclusion, the answer is 'yes'. The basic idea is to decompose the T gate 'magic' state $\tfrac{1}{\sqrt 2}\bigl(\lvert 0 \rangle + \mathrm{e}^{i \pi / 4} \lvert 1 \rangle \bigr)$ as a linear combination of stabiliser states. (If you do this for several magic states, this produces an exponentially large linear ... It depends on the Hamiltonian. There are three particular questions whose answers might influence your choice of strategy: Does the Hamiltonian have any particular structure or symmetry? How quickly does the Hamiltonian change in time? What do you know about the initial state in relation to the initial Hamiltonian? Obviously, if the Hamiltonian has any ... Quantum simulators don't rely on quantum-mechanical effects in the physical chips; instead they simulate certain aspects of quantum states and operations on them using only classical compute. Universal simulators simulate the full quantum state of the system, performing linear algebra transformations on it. They support a universal set of quantum operations, but the ... There is a distinction between what you use to write a program (the SDK), and what you use to run it (the backend). The SDK can be either a graphical interface, like the IBM Q Experience or the CAS-Alibaba Quantum Computing Laboratory. It could also be a way of writing programs, like Q#, QISKit, Forest, Cirq, ProjectQ, etc. The backend can either be a ... tl;dr - Quantum computers can't really help us to simulate the whole universe, as the universe is likely vastly more complex than even quantum mechanics can capture; plus we can't even begin to guess how big it is or many other basic fundamental features.
In short, simulating the whole universe is beyond sci-fi. We can't really simulate the entire universe, ... The first part of your question seems like a duplicate of an existing QC SE post: Are there emulators for quantum computers?. I'm not completely sure what you mean by building a quantum computer from scratch inside simulations. However, yes, you can make software simulations of a quantum computer using your average laptop/desktop. The exact "limit" will ... A separate note on using simulators for this (as opposed to using an actual quantum computer). Simulators, like the one that ships with Q#, are built to simulate quantum mechanical theories as we understand them now. This means that any experiment you run on a simulator will behave exactly as the theory says (well, unless the simulator has a bug in the code) ... There isn't much of a difference. If you read the labels, the values are roughly the same but for some reason are presented in a different order. Any differences for a given value are due to noise and decoherence. In the computational $\left(Z\right)$ basis, the parity of a (classical) bit string is $0$ if the number of $1$s in the string is even (i.e. 'even parity'), or $1$ if the number of $1$s in the string is odd (i.e. 'odd parity'). The parity can be measured by applying CNOT gates from each qubit that you want to measure (the control qubits) to an ancilla qubit ... Well, I'm working on a simulator of a quantum computer currently. The basic idea of quantum computing, of course, is gates represented by matrices applied to qubits represented by vectors. Using Python's numpy package, this isn't that hard to program in the most basic sense. From there, one might of course expand upon the interface. One might also ... You're getting the same output as Quirk, just with a different bit ordering convention for the kets. Quirk considers the top qubit to be the "least significant" qubit (i.e. if you count 000, 001, 010, ...
then it refers to the rightmost bit). So if you apply a Hadamard gate to the top qubit of a three-qubit circuit in Quirk you get the state |000> + |001>....
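The ordering convention just described can be checked with a direct state-vector computation; here is a small numpy sketch of my own, assuming (like Quirk) that the "top" qubit is the least significant bit of the basis index:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

# Basis index is the bit string q2 q1 q0; q0 (the "top" qubit) is least significant,
# so a gate on q0 sits in the rightmost slot of the Kronecker product.
U = np.kron(np.kron(I2, I2), H)

state = np.zeros(8, dtype=complex)
state[0] = 1.0                                  # |000>
out = U @ state
# Amplitude 1/sqrt(2) on index 0 (|000>) and index 1 (|001>):
# the state |000> + |001> up to normalization, as described above.
print(np.round(out, 3))
```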
The $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z_5}, +, .)$

1. Background
This is a formal introduction to the genetic code $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z_5}, +, .)$. This mathematical model is defined based on the physicochemical properties of DNA bases (see previous post). This introduction can be complemented with a Wolfram Computable Document Format (CDF) file named IntroductionToZ5GeneticCodeVectorSpace.cdf, available in GitHub. This is a graphical user interface with an interactive didactic introduction to the mathematical biology background that is explained here. To interact with a CDF, users will require the Wolfram CDF Player or Mathematica. The Wolfram CDF Player is freely available (easy installation on Windows OS and on Linux OS).

2. Biological mathematical model
If the Watson-Crick base pairings are symbolically expressed by means of the sum "+" operation, in such a way that G + C = C + G = D and U + A = A + U = D hold, then this requirement leads us to define an additive group $(\mathfrak{B}, +)$ on the set of five DNA bases $\mathfrak{B}$. Explicitly, it was required that the bases with the same number of hydrogen bonds in the DNA molecule and different chemical types be algebraically inverse in the additive group defined on the set of DNA bases $\mathfrak{B}$. In fact, eight sum tables (like the one shown below), which satisfy the last constraints, can be defined on eight ordered sets: {D, A, C, G, U}, {D, U, C, G, A}, {D, A, G, C, U}, {D, U, G, C, A}, {G, A, U, C}, {G, U, A, C}, {C, A, U, G} and {C, U, A, G} [1,2]. The sets originated by these base orders are called the strong-weak ordered sets of bases [1,2] since, for each one of them, the algebraic-complementary bases are DNA complementary bases as well, pairing with three hydrogen bonds (strong, G:::C) and two hydrogen bonds (weak, A::U). We shall denote this set SW.
The set of extended base triplets is defined as $\mathfrak{B}^3$ = { XYZ | X, Y, Z $\in\mathfrak{B}$}, where, to keep the usual biological notation for codons, the triplet of letters $XYZ\in\mathfrak{B}^3$ denotes the vector $(X,Y,Z)\in\mathfrak{B}^3$ and $\mathfrak{B} =$ {D, A, C, G, U}. An Abelian group on the extended triplet set can be defined as the direct third power of the group: $(\mathfrak{B}^3,+) = (\mathfrak{B},+)×(\mathfrak{B},+)×(\mathfrak{B},+)$ where X, Y, Z $\in\mathfrak{B}$, and the operation "+" is as shown in the table below [2]. Next, for all elements $\alpha\in\mathbb{Z}_{(+)}$ (the set of positive integers) and for all codons $XYZ\in(\mathfrak{B}^3,+)$, the element: $\alpha \bullet XYZ = \overbrace{XYZ+XYZ+…+XYZ}^{\hbox{$\alpha$ times}}\in(\mathfrak{B}^3,+)$ is well defined. In particular, $0 \bullet XYZ = $ DDD for all $XYZ\in(\mathfrak{B}^3,+)$. As a result, $(\mathfrak{B}^3,+)$ is a three-dimensional (3D) $\mathbb{Z_5}$-vector space over the field $(\mathbb{Z_5}, +, .)$ of the integers modulo 5, which is isomorphic to the Galois field GF(5). Notice that the Abelian groups $(\mathbb{Z}_5, +)$ and $(\mathfrak{B},+)$ are isomorphic. For the sake of brevity, the same notation $\mathfrak{B}^3$ will be used to denote the group $(\mathfrak{B}^3,+)$ and the vector space defined on it.

+ | D A C G U
--+----------
D | D A C G U
A | A C G U D
C | C G U D A
G | G U D A C
U | U D A C G

This operation is only one of the eight sum operations that can be defined on each one of the ordered sets of bases from SW.

3. The canonical basis of the $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$
Next, in the vector space $\mathfrak{B}^3$, the vectors (extended codons) $e_1 =$ ADD, $e_2 =$ DAD and $e_3 =$ DDA are linearly independent, i.e., $\sum\limits_{i=1}^3 c_i e_i =$ DDD implies $c_1=c_2=c_3=0$, for $c_1, c_2, c_3 \in\mathbb{Z_5}$.
Moreover, the representation of every extended triplet $XYZ\in\mathfrak{B}^3$ over the field $\mathbb{Z_5}$ as $XYZ=xe_1+ye_2+ze_3$ is unique, and the generating set $e_1, e_2$, and $e_3$ is a canonical basis for the $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$. The elements $x, y, z \in\mathbb{Z_5}$ are said to be the coordinates of the extended triplet $XYZ\in\mathfrak{B}^3$ in the canonical basis $(e_1, e_2, e_3)$ [3].

References
1. José M V, Morgado ER, Sánchez R, Govezensky T. The 24 Possible Algebraic Representations of the Standard Genetic Code in Six or in Three Dimensions. Adv Stud Biol, 2012, 4:119–52.
2. Sanchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527–60.
3. Sánchez R, Grau R. An algebraic hypothesis about the primeval genetic code architecture. Math Biosci, 2009, 221:60–76.
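Under the assignment D→0, A→1, C→2, G→3, U→4 (the order {D, A, C, G, U} used above), the sum table is just addition modulo 5, which makes the isomorphism with $(\mathbb{Z}_5, +)$ easy to verify in code; a small sketch of my own:

```python
bases = ["D", "A", "C", "G", "U"]          # the strong-weak order used in the table
idx = {b: i for i, b in enumerate(bases)}

def base_sum(x, y):
    """Base sum via the isomorphism with (Z5, +): add indices modulo 5."""
    return bases[(idx[x] + idx[y]) % 5]

# Watson-Crick constraints from the text: complementary bases are additive inverses.
assert base_sum("G", "C") == "D" and base_sum("A", "U") == "D"

# Extended triplets add componentwise, as in the direct power (B,+)^3:
def triplet_sum(t1, t2):
    return "".join(base_sum(a, b) for a, b in zip(t1, t2))

print(triplet_sum("ADD", "DAD"))  # AAD, i.e. e1 + e2, the triplet with coordinates (1, 1, 0)
```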
The Nonparaxial Gaussian Beam Formula for Simulating Wave Optics

In a previous blog post, we discussed the paraxial Gaussian beam formula. Today, we'll talk about a more accurate formulation for Gaussian beams, available as of version 5.3a of the COMSOL® software. This formulation, based on a plane wave expansion, can handle nonparaxial Gaussian beams more accurately than the conventional paraxial formulation.

Paraxiality of Gaussian Beams

The well-known Gaussian beam formula is only valid for paraxial Gaussian beams. Paraxial means that the beam mainly propagates along the optical axis. There are several papers that discuss paraxiality in a quantitative sense (see Ref. 1). Roughly speaking, if the beam waist size is near the wavelength, the beam converges toward the focus at steep angles; the paraxiality assumption then breaks down and the formulation is no longer accurate. To alleviate this problem and to provide you with a more general and accurate formulation for general Gaussian beams, we introduced a nonparaxial Gaussian beam formulation. In the user interface, this is referred to as Plane wave expansion.

Angular Spectrum of Plane Waves

Let's briefly review the paraxial Gaussian beam formula in 2D (for the sake of better visuals and understanding). We start from Maxwell's equations assuming time-harmonic fields, from which we get the following Helmholtz equation for the out-of-plane electric field $E_z$, with the wavelength \lambda, for our choice of polarization:

$$\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + k^2 \right) E_z(x,y) = 0,$$

where k=2 \pi/\lambda. The angular spectrum of plane waves is based on the following simple fact: an arbitrary field that satisfies the above Helmholtz equation can be expressed as the following plane wave expansion:

$$E_z(x,y) = \int A(k_x, k_y)\, e^{i(k_x x + k_y y)} \, dl,$$

where A(k_x,k_y) is an arbitrary function. The integration path is a circle of radius k for real k_x and k_y. (For complex k_x and k_y, the integration domain extends to a complex plane.) The function A(k_x,k_y) is called the angular spectrum function.
One can prove that this $E_z$ satisfies the Helmholtz equation by direct substitution. Now that we know that this formulation always gives exact solutions to the Helmholtz equation, let's try to understand it visually. From the constraint $k_x^2+k_y^2=k^2$, we can set $k_x=k \cos \varphi$ and $k_y=k \sin \varphi$ and rewrite the above equation as:

$$E_z(x, y) = \int_{-\pi/2}^{\pi/2} A(\varphi)\, e^{ik(x \cos \varphi + y \sin \varphi)} \, \mathrm{d}\varphi.$$

The meaning of the above formula is that it constructs a wave as a sum, or integral, of many waves propagating in various directions, all with the same wave number $k$. This is shown in the following figure.

Visualization of the angular spectrum of plane waves.

When actually solving a problem using this formula, all you have to do is find the angular spectrum function $A(\varphi)$ that satisfies the boundary conditions. By assuming that the profile of the transverse field (perpendicular to the propagating direction, i.e., the optical axis) is also a Gaussian shape (see Ref. 4), one can derive that $A(\varphi) = \exp(-\varphi^2 / \varphi_0^2)$, where $\varphi_0$ is the spectrum width. By some more mathematical manipulations, we get a relationship between the spectrum width $\varphi_0$ and the beam waist radius $w_0$. For a slow (weakly focused) Gaussian beam, the angular spectrum is narrow; for a fast (tightly focused) Gaussian beam, it is wide. A plane wave is the extreme case where the angular spectrum function is a delta function.

This was a quick summary of the underlying theory for nonparaxial Gaussian beams. To recap what we have shown so far, let's rewrite the formula once more by using polar coordinates, $x=r \cos \theta, \ y = r \sin \theta$:

$$E_z(r, \theta) = \int_{-\pi/2}^{\pi/2} A(\varphi)\, e^{ikr \cos(\varphi - \theta)} \, \mathrm{d}\varphi.$$

This is the formulation that Born and Wolf (Ref. 2) use in their book. The 3D formula is more complicated and looks different due to polarization, but the basic idea is the same as seen in the references mentioned above. It can also look different depending on whether or not you consider evanescent waves.
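As a sanity check on the plane-wave-expansion idea, here is a short numerical sketch (plain Python, not COMSOL code; the wavelength, spectrum width, and sample point are made-up illustrative values). It sums plane waves weighted by a Gaussian angular spectrum and verifies by finite differences that the resulting field satisfies the Helmholtz equation:

```python
import cmath
import math

k = 2 * math.pi          # wave number for wavelength 1 (illustrative)
phi0 = 0.5               # spectrum width in radians (illustrative)
M = 4001
phis = [-math.pi / 2 + math.pi * i / (M - 1) for i in range(M)]
dphi = phis[1] - phis[0]

def E_z(x, y):
    """Riemann sum of plane waves weighted by A(phi) = exp(-phi^2/phi0^2)."""
    return sum(math.exp(-(p / phi0) ** 2)
               * cmath.exp(1j * k * (x * math.cos(p) + y * math.sin(p)))
               for p in phis) * dphi

# Finite-difference check that (d^2/dx^2 + d^2/dy^2 + k^2) E_z ~ 0.
h = 1e-3
x0, y0 = 0.3, 0.2
lap = (E_z(x0 + h, y0) + E_z(x0 - h, y0) + E_z(x0, y0 + h) + E_z(x0, y0 - h)
       - 4 * E_z(x0, y0)) / h ** 2
residual = abs(lap + k ** 2 * E_z(x0, y0)) / (k ** 2 * abs(E_z(x0, y0)))
print(f"relative Helmholtz residual: {residual:.1e}")
```

The residual comes out tiny (limited only by the finite-difference step and the discretization of the integral), consistent with the claim that any angular-spectrum superposition solves the Helmholtz equation exactly.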
The Plane Wave Expansion method used in the Wave Optics Module and the RF Module, although based on the angular spectrum theory, is adapted for numerical computations. Plane Wave Expansion: Settings and Results Let’s compare the new feature, Plane wave expansion, with the previously available feature, Paraxial approximation. The Settings window covering both methods is shown below. The Plane Wave Expansion feature settings. With the new feature, you have two options if the Automatic setting doesn’t give you a satisfactory approximation: Wave vector count Maximum transverse wave number The first option determines the number of discretization levels, depending on how fine you want to represent the Gaussian beam. The more plane waves, the finer it gets. The second option is related to the integral bound in the previous equation; i.e., -\pi/2 \le \varphi \le \pi/2. This integral bound can be the maximum \pi/2 for the smallest possible spot size and can be more shallow for slower beams, depending on how fast the Gaussian beam is. You need more angled plane waves with a larger transverse wave number to represent faster (more focused) beams. The following results compare the two formulas for the case where the spot radius is \lambda/2, which is considerably nonparaxial. As in the previous blog post, the simulation is done with the Scattered Field formulation and the domain is surrounded by a perfectly matched layer (PML). This way, the scattered field represents the error from the exact Helmholtz solution. The left images below show the new feature, while the images on the right show the paraxial approximation. The top images show the norm of the computed Gaussian beam background field, ewfd.Ebz, while the bottom images show the scattered field norm, ewfd.relEz, which represents the error from the exact Helmholtz solution. Obviously, the error from the Helmholtz solution is greatly reduced in the nonparaxial method. 
Concluding Remarks

We have discussed the theory and results for an approximation method for nonparaxial Gaussian beams using the new plane wave expansion option. Remember that this formulation is extremely accurate, but it is still an approximation under certain assumptions. First, we have made an assumption about the field shape in the focal plane. Second, we assume that the evanescent field is zero. If you are interested in the field coupling to some nanostructure near the focal region in a fast Gaussian beam, you may need to calculate the evanescent field.

Next Step

Learn more about the formulations and features available for modeling optically large problems in the COMSOL® software by clicking the button below:

Note: This functionality can also be found in the RF Module.

References

P. Vaveliuk, "Limits of the paraxial approximation in laser beams", Optics Letters, vol. 32, no. 8, 2007.
M. Born and E. Wolf, Principles of Optics, 7th ed., Cambridge University Press, 1999.
J. W. Goodman, Fourier Optics.
G. P. Agrawal and M. Lax, "Free-space wave propagation beyond the paraxial approximation", Phys. Rev. A, vol. 27, pp. 1693–1695, 1983.
Last time we began studying feedback in co-design diagrams. This led us into a fascinating topic which we'll explore more deeply today: cups and caps. Ultimately it leads to the subject of 'compact closed' categories, which Fong and Spivak introduce in Section 4.5.1. Covering this material adequately will take longer than the three weeks I'd intended to spend on each chapter, but I think it's worth it.

Last time we saw that for each preorder \(X\) there's a feasibility relation called the cup $$ \cup_X \colon X^{\text{op}} \times X \nrightarrow \textbf{1} $$ which we draw as follows:

To define the cup, we remembered that feasibility relations \(X^{\text{op}} \times X \nrightarrow \textbf{1}\) are monotone functions \( (X^{\text{op}} \times X)^\text{op} \times \textbf{1} \to \mathbf{Bool} \), and we defined \(\cup_X\) to be the composite $$ (X^{\text{op}} \times X)^\text{op} \times \textbf{1} \stackrel{\sim}{\to} (X^{\text{op}} \times X)^\text{op} \stackrel{\sim}{\to} (X^{\text{op}})^\text{op} \times X^{\text{op}} \stackrel{\sim}{\to} X \times X^{\text{op}} \stackrel{\sim}{\to} X^{\text{op}} \times X \stackrel{\text{hom}}{\to} \textbf{Bool} $$ where all the arrows with little squiggles over them are isomorphisms - most of them discussed in Puzzles 213-215. In short, the cup is the hom-functor \(\text{hom} \colon X^{\text{op}} \times X \to \mathbf{Bool}\) in disguise!

The cup's partner is called the cap $$ \cap_X \colon \textbf{1} \nrightarrow X \times X^{\text{op}} $$ and we draw it like this:

The cap is also the hom-functor in disguise! To define it, remember that feasibility relations \(\textbf{1} \nrightarrow X \times X^{\text{op}} \) are monotone functions \(\textbf{1}^{\text{op}} \times (X \times X^{\text{op}}) \to \mathbf{Bool}\).
But \(\textbf{1}^{\text{op}} = \textbf{1}\), so we define the cap to be the composite $$ \textbf{1}^{\text{op}} \times (X \times X^{\text{op}}) = \textbf{1}\times (X \times X^{\text{op}}) \stackrel{\sim}{\to} X \times X^{\text{op}} \stackrel{\sim}{\to} X^{\text{op}} \times X \stackrel{\text{hom}}{\to} \textbf{Bool} . $$

One great thing about the cup and cap is that they let us treat the edges in our co-design diagrams as flexible wires. In particular, they obey the snake equations, also known as the zig-zag identities. These say that we can pull taut a zig-zag of wire. The first snake equation says

In other words, $$ (1_X \times \cup_X) (\cap_X \times 1_X) = 1_X .$$ Please study the diagram and the corresponding equation very carefully to make sure you see how each part of one corresponds to a part of the other! And please ask questions if there's anything puzzling. It takes a while to get used to these things.

The second snake equation says

In other words, $$ (\cup_X \times 1_{X^{\text{op}}}) (1_{X^{\text{op}}} \times \cap_X) = 1_{X^{\text{op}}} .$$

A great exercise, to make sure you understand what's going on, is to prove the snake equations. You just need to remember all the definitions, use them to compute the left-hand side of each identity, and show it equals the much simpler right-hand side.

Puzzle 217. Prove the snake equations. In fact some of you have already started doing this!
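In the spirit of Puzzle 217, here is a brute-force check of the first snake equation on a tiny example: a three-element total order, with feasibility relations encoded as Bool-valued functions and profunctor composition as "there exists an intermediate element making both sides true". The sign conventions (cup and cap are both hom after the swap isomorphism) follow the definitions above; this is a sanity check in Python, not a proof.

```python
from itertools import product

# X = {0, 1, 2} with the usual order.
X = [0, 1, 2]
def leq(a, b): return a <= b      # hom : X^op x X -> Bool

def cup(x, y):                    # value of cup_X at (x, y) in X^op x X
    return leq(y, x)              # hom after the swap isomorphism

def cap(x, y):                    # value of cap_X at (x, y) in X x X^op
    return leq(y, x)              # likewise hom in disguise

def snake_lhs(a, b):
    """(1_X x cup_X)(cap_X x 1_X) at (a, b): compose the two feasibility
    relations by an existential over the middle triple (x, y, z)."""
    return any(cap(x, y) and leq(a, z) and leq(x, b) and cup(y, z)
               for x, y, z in product(X, repeat=3))

# The composite should be the identity feasibility relation, i.e. hom itself.
assert all(snake_lhs(a, b) == leq(a, b) for a, b in product(X, repeat=2))
print("first snake equation holds on this three-element example")
```

Tracing the existential by hand mirrors the pen-and-paper proof: in one direction take x = y = z = a, and in the other direction chain a ≤ z ≤ y ≤ x ≤ b.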
Context: The paper On the reality of the quantum state ( Nature Physics 8, 475–478 (2012) or arXiv:1111.3328) shows under suitable assumptions that the quantum state cannot be interpreted as a probability distribution over hidden variables. In the abstract, the authors claim: "This result holds even in the presence of small amounts of experimental noise, and is therefore amenable to experimental test using present or near-future technology." The claim is supported on page 3 with: "In a real experiment, it will be possible to establish with high confidence that the probability for each measurement outcome is within $\epsilon$ of the predicted quantum probability for some small $\epsilon> 0$." Something felt like it was missing so I tried to fill in the details. Here is my attempt: First, without even considering experimental noise, any reasonable measure of error (e.g. standard squared error) on the estimation of the probabilities is going to have worst case bounded by: $$ \epsilon\geq\frac{2^n}{N}, $$ (Tight for the maximum likelihood estimator, in this case) where $n$ is the number of copies of the system required for the proof and $N$ is the number of measurements (we are trying to estimate a multinomial distribution with $2^n$ outcomes). Now, they show that some distance measure on epistemic states (I'm not sure if it matters what it is) satisfies: $$ D \ge 1 - 2\epsilon^{1/n}. $$ The point is, we want $D=1$. So, if we can tolerate an error in this metric of $\delta=1-D$ (What is the operational interpretation of this?), then the number of measurements we must make is: $$ N \ge \left(\frac4\delta\right)^n. $$ This looks bad, but how many copies do we really need? Note that the proof requires two non-orthogonal qubit states with overlap $|\langle \phi_0 |\phi_1\rangle|^2 = \cos^2\theta$. The number of copies required is implicitly given by: $$ 2\arctan(2^{1/n}-1)\leq \theta. 
$$ Some back-of-the-Mathematica calculations seem to show that $n$ scales at least quadratically with the overlap of the states. Is this right? Does it require (sub?)exponentially many measurements in the system size (not surprising, I suppose) and in the error tolerance (bad, right?).
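To put rough numbers on the bounds in the question, here is a small Python sketch: it finds the smallest number of copies $n$ satisfying $2\arctan(2^{1/n}-1)\leq\theta$, and evaluates the lower bound $N \ge (4/\delta)^n$ on the number of measurements. The specific values of $\theta$ and $\delta$ are only illustrative.

```python
import math

def min_copies(theta):
    """Smallest n with 2*atan(2**(1/n) - 1) <= theta.

    theta is the angle between the two nonorthogonal states,
    |<phi_0|phi_1>|^2 = cos(theta)^2, as in the question above.
    """
    n = 1
    while 2 * math.atan(2 ** (1 / n) - 1) > theta:
        n += 1
    return n

def measurements_lower_bound(delta, n):
    """The bound N >= (4/delta)**n for error tolerance delta = 1 - D."""
    return (4 / delta) ** n

# Nearly parallel states (theta = 0.1 rad) need many copies, and the
# measurement count then blows up exponentially in that n.
n = min_copies(0.1)
print(n, measurements_lower_bound(0.01, n))
```

For $\theta = \pi/2$ (orthogonal states) a single copy suffices, while shrinking $\theta$ drives $n$ up roughly like $\ln 2/\theta$, which then enters the measurement bound in the exponent.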
Courtesy of the OpenCV 2.3 GPU code comes a neat snippet of code for using a template parameter for reading RGB or BGR ordered components when dealing with RGB triplets.

The Code

template <int blueIndex> float rgb2grey(const float *src) { return 0.299f*src[blueIndex^2] + 0.587f*src[1] + 0.114f*src[blueIndex]; }

Then to use the function you simply supply the index where the blue value resides to take care of the RGB vs BGR ordering. For RGB ordering (blue at index 2): rgb2grey<2>(src). And for BGR ordering (blue at index 0): rgb2grey<0>(src). This only works for swapping R and B around and won't work for more weird and wonderful orderings. The original OpenCV code can be found in modules/gpu/src/opencv2/gpu/device/detail/color.hpp.

Templates and CUDA

For me the fact that you can use template meta-programming is a real plus point of CUDA. It allows for good code re-use and the template expansion gives good scope for the compiler to optimise. It can also allow you to remove conditionals from kernels in appropriate circumstances – more in a future post!

The KuroBox finally gave up the ghost and wouldn't boot after a recent Debian update. I didn't have the will to go through rebuilding it from scratch again, so invested in something a bit more modern. Something that has a GUI output even when you screw up a kernel upgrade! I'd had my eye on an Atom/ION system for a while and ended up getting a Zotac HD-ID11. It was very simple to get up and running with Ubuntu, XBMC and MythTV.

NB – you could fry both your television and your PC doing any of this. Be careful, check, check and triple check everything before plugging it in. That said I haven't had any problems for several months now, but check all your connections and soldering for shorts before plugging it all together.

Update 17/10/2012: some more details in a follow up post.

HDMI into SCART doesn't go

Only having an old CRT TV means that connection options are pretty limited, with the only useful one being SCART. The TV does accept RGB, which makes life easier.
You may think that the easy option would be to get a new TV, but that's not nearly as much fun 😉 All the Atom/ION boxes generally have HDMI and DVI/VGA outputs. In the case of the HD-ID11, it has HDMI and DVI connections. Continue reading

Displaying equations in webpages has always been a headache. The fallback of using images was always there, but in the age of blogs and other content creation tools, editing, updating and maintaining images for equations is tedious. MathML was an effort to standardise support in browsers, but the reality of it is that it only works out of the box in very few cases. Whilst looking for a way to easily put equations into a WordPress blog, MathJax turned up. Have a look at some of the examples!

MathJax in WordPress

A quick search turns up a couple of WordPress plugins, of which I ended up using Latex for WordPress, which allows me to easily put Latex syntax equations directly into posts. So {{{ $$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$ }}} becomes $$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$ All in all, excellent 😉

Subversion is a bit lacking on the merging and branching front in comparison with some of the newer distributed version control systems, but it does make working with large projects easy. The single biggest reason for this is sparse checkouts. At work we have a source tree that contains artwork, documentation, the source for all the third party libraries we use and compiled versions of these for multiple architectures and platforms (32bit, 64bit, Windows, RHEL4, RHEL5 etc). This makes the full checkout rather large and there are many times when you only need a small fraction of all the files. The problem with sparse checkouts is that it can become very laborious manually setting one up for anything beyond a few directories. For more details on the basics see here. As with many tedious tasks – computers can help! I put together a little script for helping with this – get it here.
I hope it's useful and keep reading for more details on how it works.

In Action

The final version of the script makes doing a sparse checkout as simple as doing a standard checkout: ./checkout.rb svn://server/trunk To checkout using a named subset of files: ./checkout.rb --map documentation svn://server/trunk To checkout using a locally defined subset of files (rather than a subset stored on the server): ./checkout.rb --map local.yaml svn://server/trunk Continue reading

For 2.5 yrs work has made use of a Kolab server I set up, but recently it has been taking up too much of my time to maintain. Google Apps was chosen as the replacement, so the problem then became how to get 39GB of data uploaded to Google over a fairly slow ADSL link, whilst keeping everyone up and running as much as possible. imapsync seemed to be the tool of choice for a lot of people doing migrations of IMAP users, and it has done a very good job here too. I used version 1.311 from source rather than the older package provided with Ubuntu at the time, as it contained various fixes for Google's IMAP servers. Continue reading

At work we are getting a 64-bit version of our software up and running at the moment. Most of the usual culprits reared their head – assuming that a pointer and an integer have the same size etc. One more interesting one, which I've not come across before, is related to using the STL string::find and the special constant string::npos. This is not unique to our code base when you google for it, and actually just boils down to data being truncated before a comparison. The nuances of the problem do lead on to a discussion about signed vs. unsigned integral types in C++ and the handling of comparisons between differently sized data types. I thought it was worth looking at a bit further and it is definitely something to watch out for when doing code reviews.
It could also make for a particularly challenging interview question 😉 Continue reading

Overview

I have a Kurobox HG that has been running Gentoo for a while now. After getting fed up with the waits for compiling packages, I decided to switch over to Debian. Only having SSH access to the box makes switching OS a slightly more tricky prospect than a standard OS switch or upgrade. The basic plan is:

Disable swap
Install a very basic Debian system onto the swap partition
Reboot into the basic Debian system
From the basic Debian system, install a Debian system onto the main partition
Reboot into the basic Debian system on the main partition
Reformat and enable the swap partition
Finish configuring the new system

There were a few tricky steps along the way, so I've documented all the steps below. Continue reading

Well, the Visual Editor mode doesn't appear to get on very well with the MediaWiki plugin – it puts HTML tags all over the wiki text, making updating pages very tedious and making the whole point of using the wiki syntax a bit redundant. There do appear to be a few pages on the web talking about having problems with more advanced page layouts when using the Visual Editor, so it's probably for the best to have it disabled. There are a few plugins to disable the Visual Editor and Easy Disable Visual Editor appears to do the job with the latest WordPress.

Setting up a new WordPress blog has been very easy, but editing in HTML gets pretty tedious when there are alternatives like wiki syntax. All easily solved with the WP-MediaWiki plugin. The plugin doesn't support the complete MediaWiki syntax, but more than enough to be useful.

MythTV

MythTV is an exceptional Linux-based PVR. It has an interesting and very configurable split frontend/backend architecture. One traditional use for this split architecture is to put all the mass storage and recording hardware in a box under the stairs and then have a lightweight (and silent) machine plugged into the TV.
Encoding video in realtime requires either some pretty powerful CPUs or alternatively dedicated hardware. Enter DVB-T and a USB dongle to capture DVB mpeg streams. Continue reading
Returns an array of cells for the fitted values of the conditional mean.

Syntax

ARMA_MEAN(X, Order, mean, sigma, phi, theta)

X is the univariate time series data (a one-dimensional array of cells (e.g. rows or columns)).
Order is the time order in the data series (i.e. the first data point's corresponding date (earliest date=1 (default), latest date=0)).

Order Description
1 ascending (the first data point corresponds to the earliest date) (default)
0 descending (the first data point corresponds to the latest date)

mean is the ARMA model mean (i.e. mu).
sigma is the standard deviation of the model's residuals/innovations.
phi are the parameters of the AR(p) component model (starting with the lowest lag).
theta are the parameters of the MA(q) component model (starting with the lowest lag).

Remarks

The underlying model is described here.
Warning: The ARMA_MEAN() function is deprecated as of version 1.63: use the ARMA_FIT function instead.
The time series is homogeneous or equally spaced.
The time series may include missing values (e.g. #N/A) at either end.
The ARMA model fitted values are defined as: $$\hat x_t = \mu + \sum_{i=1}^p \phi_i x_{t-i} + \sum_{j=1}^q \theta_j a_{t-j} $$ Where:
$\hat x_t$ is the fitted model value (i.e. conditional mean) at time t.
$1\leq t \leq T$
$T$ is the number of non-missing values in the data sample.
The number of parameters in the input argument - phi - determines the order of the AR component.
The number of parameters in the input argument - theta - determines the order of the MA component.

Examples

Example 1:

References

D. S. G.
Pollock; Handbook of Time Series Analysis, Signal Processing, and Dynamics; Academic Press; Har/Cdr edition (Nov 17, 1999), ISBN: 125609906
James Douglas Hamilton; Time Series Analysis; Princeton University Press; 1st edition (Jan 11, 1994), ISBN: 691042896
Tsay, Ruey S.; Analysis of Financial Time Series; John Wiley & Sons; 2nd edition (Aug 30, 2005), ISBN: 0-471-690740
Box, Jenkins and Reinsel; Time Series Analysis: Forecasting and Control; John Wiley & Sons; 4th edition (Jun 30, 2008), ISBN: 470272848
Walter Enders; Applied Econometric Time Series; Wiley; 4th edition (Nov 03, 2014), ISBN: 1118808568
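For concreteness, the fitted-value equation $\hat x_t = \mu + \sum_{i=1}^p \phi_i x_{t-i} + \sum_{j=1}^q \theta_j a_{t-j}$ given in the Remarks can be sketched in a few lines of Python. This is an illustrative implementation, not the NumXL code: pre-sample innovations are set to zero (a common conditional convention), no fitted value is produced until enough lagged values are available, and the function name is made up.

```python
def arma_fitted(x, mu, phi, theta):
    """Fitted conditional means of an ARMA(p, q) model on series x.

    Illustrative sketch: a_t = x_t - xhat_t are the innovations,
    taken as 0 before enough history exists.
    """
    p, q = len(phi), len(theta)
    a = [0.0] * len(x)              # innovations a_t = x_t - xhat_t
    fitted = []
    for t in range(len(x)):
        if t < max(p, q):
            fitted.append(None)     # not enough lagged values yet
            continue
        xhat = (mu
                + sum(phi[i] * x[t - 1 - i] for i in range(p))
                + sum(theta[j] * a[t - 1 - j] for j in range(q)))
        a[t] = x[t] - xhat
        fitted.append(xhat)
    return fitted

series = [1.2, 0.8, 1.1, 0.9, 1.3, 1.0]
print(arma_fitted(series, mu=1.0, phi=[0.5], theta=[0.3]))
```

With an ARMA(1,1) model, for instance, the first fitted value is simply $\mu + \phi_1 x_0$, since the pre-sample innovation is taken as zero.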
ISSN: 2156-8472
eISSN: 2156-8499

Mathematical Control & Related Fields
March 2016, Volume 6, Issue 1

Abstract: We study a damped semi-linear wave equation in a bounded domain of $\mathbb{R}^3$ with smooth boundary. It is proved that any $H^2$-smooth solution can be stabilised locally by a finite-dimensional feedback control supported by a given open subset satisfying a geometric condition. The proof is based on an investigation of the linearised equation, for which we construct a stabilising control satisfying the required properties. We next prove that the same control stabilises locally the non-linear problem.

Abstract: The purpose of this work is to establish stability estimates for the unique continuation property of the nonstationary Stokes problem. These estimates hold without prescribing boundary conditions and are of logarithmic type. They are obtained thanks to Carleman estimates for parabolic and elliptic equations. Then, these estimates are applied to an inverse problem where we want to identify a Robin coefficient defined on some part of the boundary from measurements available on another part of the boundary.

Abstract: In this paper, we derive a version of the Pontryagin maximum principle for general finite-dimensional nonlinear optimal sampled-data control problems. Our framework is actually much more general, and we treat optimal control problems for which the state variable evolves on a given time scale (arbitrary non-empty closed subset of $\mathbb{R}$), and the control variable evolves on a smaller time scale. Sampled-data systems are then a particular case. Our proof is based on the construction of appropriate needle-like variations and on the Ekeland variational principle.

Abstract: In this paper we study a distributed control problem for a phase field system of Caginalp type with logarithmic potential.
The main aim of this work is to force the location of the diffuse interface to be as close as possible to a prescribed set. However, due to the discontinuous character of the cost functional, we have to approximate it by a regular one; in this case, we solve the associated control problem and derive the related first-order necessary optimality conditions.

Abstract: In this paper we consider a state-constrained differential inclusion $\dot x\in \mathbb A x+ F(t,x)$, with $\mathbb A$ the generator of a strongly continuous semigroup in an infinite-dimensional separable Banach space. Under an ``inward pointing condition'' we prove a relaxation result stating that the set of trajectories lying in the interior of the constraint is dense in the set of constrained trajectories of the convexified inclusion $\dot x\in \mathbb A x+ \overline{\textrm{co}}F(t,x)$. Some applications to control problems involving PDEs are given.

Abstract: In this paper, we study controllability for a parabolic system of chemotaxis. With one control only, the local exact controllability to a positive trajectory of the system is obtained by applying Kakutani's fixed point theorem and the null controllability of the associated linearized parabolic system. The positivity of the state is shown to be preserved in the state space. The control function is shown to be in $L^\infty(Q)$, which is estimated by using the methods of maximal regularity and the $L^p$-$L^q$ estimate for parabolic equations.

Abstract: In this paper we give results on the counting function associated with the interior transmission eigenvalues. For a complex refraction index we estimate the counting function by $Ct^{n}$. In the case where the refraction index is positive we give an asymptotic equivalent of the counting function.
Abbreviation: LRng

A lattice-ordered ring (or $\ell$-ring) is a structure $\mathbf{L}=\langle L,\vee,\wedge,+,-,0,\cdot\rangle$ such that

$\langle L,\vee,\wedge\rangle$ is a lattice

$\langle L,+,-,0,\cdot\rangle$ is a ring

$+$ is order-preserving: $x\leq y\Longrightarrow x+z\leq y+z$

${\uparrow}0$ is closed under $\cdot$: $0\leq x,y\Longrightarrow 0\leq x\cdot y$

Remark: Let $\mathbf{L}$ and $\mathbf{M}$ be $\ell$-rings. A morphism from $\mathbf{L}$ to $\mathbf{M}$ is a function $f:L\rightarrow M$ that is a homomorphism: $f(x\vee y)=f(x)\vee f(y)$, $f(x\wedge y)=f(x)\wedge f(y)$, $f(x\cdot y)=f(x)\cdot f(y)$, and $f(x+y)=f(x)+f(y)$.

The lattice reducts of lattice-ordered rings are distributive lattices.

Classtype: variety
Equational theory:
Quasiequational theory:
First-order theory:
Congruence distributive: yes, see lattices
Congruence extension property:
Congruence n-permutable: yes, $n=2$, see groups
Congruence regular: yes, see groups
Congruence uniform: yes, see groups
Definable principal congruences:
Equationally def. pr. cong.:
Amalgamation property:
Strong amalgamation property:
Epimorphisms are surjective:
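As a quick illustration of the axioms, they can be checked by brute force on a finite sample of a standard example, the integers with join $=\max$ and meet $=\min$ and the usual ring operations (a Python sketch; the example and sample range are this editor's choice, not part of the page above):

```python
from itertools import product

# A finite sample of Z; join = max, meet = min, usual + and *.
sample = range(-3, 4)
join, meet = max, min
def leq(a, b): return meet(a, b) == a   # the lattice order

# + is order-preserving: x <= y implies x + z <= y + z
assert all(leq(x + z, y + z)
           for x, y, z in product(sample, repeat=3) if leq(x, y))

# The up-set of 0 is closed under multiplication
assert all(leq(0, x * y)
           for x, y in product(sample, repeat=2) if leq(0, x) and leq(0, y))

# The lattice reduct is distributive
assert all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
           for x, y, z in product(sample, repeat=3))

print("l-ring axioms hold on the sample of Z")
```

Since the order here is total, the lattice reduct is automatically distributive, consistent with the general fact stated above.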
Limits: In everyday life, not everything can be exact. The length of an object need not be a whole number: when we say a rope is 5 metres long, it is rarely exactly 5 metres; it may measure 4.999999 metres or 5.000001 metres. Limits were introduced in calculus to deal with such situations. In this section, we will study the limit formulas and their properties.

Left- and Right-Hand Limits:

Left-Hand Limit: The value that a function approaches through points to the left of a fixed point is the left-hand limit of the function at that point.

Right-Hand Limit: The value that a function approaches through points to the right of the fixed point is the right-hand limit of the function at that point.

Representation of a limit: A limit is normally written as: \({\displaystyle \lim _{x\to c}f(x)=L}\)

Properties and algebra of limits: Let f and g be two functions and a be a value such that \(\displaystyle{\lim_{x \to a}f(x)}\) and \(\displaystyle{\lim_{x \to a}g(x)}\) exist:

The limit of the sum of two functions is the sum of the limits of the functions, i.e., \(\displaystyle{\lim_{x \to a}[f(x) + g(x)] = \lim_{x \to a}f(x) + \lim_{x \to a}g(x)}\)

The limit of the difference of two functions is the difference of the limits of the functions, i.e., \(\displaystyle{\lim_{x \to a}[f(x) - g(x)] = \lim_{x \to a}f(x) - \lim_{x \to a}g(x)}\)

For any real number k, \(\displaystyle{\lim_{x \to a}[k f(x)] = k \lim_{x \to a}f(x)}\)

The limit of the product of two functions is the product of the limits of the functions, i.e., \(\displaystyle{\lim_{x \to a}[f(x)\, g(x)] = \lim_{x \to a}f(x) \times \lim_{x \to a}g(x)}\)

The limit of the quotient of two functions is the quotient of the limits of the functions (whenever the limit of the denominator is non-zero), i.e., \(\displaystyle{\lim_{x \to a}\frac{f(x)}{g(x)}} =\frac{\displaystyle{\lim_{x \to a}f(x)}}{\displaystyle{\lim_{x \to a}g(x)}}\)

Standard Limits:

\(\displaystyle{\lim_{x \to a}\frac{x^{n} - a^{n}}{x - a}} = na^{n-1}\)

\(\displaystyle{\lim_{x \to 0}\frac{\sin x}{x}} = 1\)

Also, \(\displaystyle{\lim_{x \to 0}\frac{1 - \cos x}{x}} = 0\)

Limit examples:

Find the limit \(\displaystyle{\lim_{x \to 1}\:[x^{3} - x^{2} + 1]}\).

Solution: \(\displaystyle{\lim_{x \to 1}}[x^{3} - x^{2} + 1] = 1^{3} - 1^{2} + 1 = 1\)

Find the limit \(\displaystyle{\lim_{x \to 1}\frac{x^{2} + 1}{x + 100}}\).

Solution: \(\displaystyle{\lim_{x \to 1}\frac{x^{2} + 1}{x + 100}} = \frac{1^{2} + 1}{1 + 100} = \frac{2}{101}\)
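The limit laws and standard limits can also be checked numerically. Here is a small Python sketch; the two-sided sampling scheme (averaging the left- and right-hand values) and the step size are arbitrary choices made for this illustration:

```python
import math

def limit(f, a, h=1e-6):
    """Two-sided numeric estimate of lim_{x->a} f(x): average the values
    just to the left and just to the right of a (step h is arbitrary)."""
    return 0.5 * (f(a - h) + f(a + h))

# Standard limit: lim_{x->a} (x^n - a^n)/(x - a) = n*a^(n-1)
n, a = 3, 2.0
est = limit(lambda x: (x ** n - a ** n) / (x - a), a)
print(est)                                  # close to 12.0 = 3 * 2**2

# Standard limit: lim_{x->0} sin(x)/x = 1
print(limit(lambda x: math.sin(x) / x, 0.0))

# Sum rule: the limit of a sum equals the sum of the limits
lhs = limit(lambda x: x ** 2 + math.cos(x), 1.0)
rhs = limit(lambda x: x ** 2, 1.0) + limit(lambda x: math.cos(x), 1.0)
print(abs(lhs - rhs) < 1e-9)
```

Note that averaging the left- and right-hand samples only gives a sensible answer when the two one-sided limits agree, which ties back to the left- and right-hand limits defined above.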
Balls and Needles Joana Vasconcelos is a Portuguese artist who uses everyday objects in her creations, like electric irons or plastic cutlery. She is an inspiration to Ana, who wants to make ceiling hanging sculptures with straight knitting needles and balls of wool. For safety reasons, there will be a ball at each end of each needle. Knitting needles vary in colour, length and thickness (to allow intersections of needles). Sculptures are to be exhibited in room corners, which provide a 3D Cartesian coordinate system, with many lamps on the ceiling. Sculpture designs are made with the coordinates of the centres of the balls of wool in which knitting needles are stuck. That is, each needle $N$ is represented by a set of two different triples: $N=\{ (x,y,z),\, (x',y',z')\} $. Ana dislikes closed chains. A true closed chain is a sequence of $k$ distinct needles, $N_1, N_2, \ldots , N_ k$ (for some $k\geq 3$), such that: $N_1 = \{ (x_1,y_1,z_1), \, (x_2,y_2,z_2)\} , \; N_2 = \{ (x_2,y_2,z_2), \, (x_3,y_3,z_3)\} , \; \ldots , \\ N_ k = \{ (x_ k,y_ k,z_ k), \, (x_{k+1},y_{k+1},z_{k+1})\} , \; \mbox{ and } \; (x_{k+1},y_{k+1},z_{k+1})=(x_1,y_1,z_1)$ But her dislike of closed chains is so extreme that the shadow of the sculpture on the floor has to be free of “floor closed chains”. Given any needle $N=\{ (x,y,z),\, (x',y',z')\} $, let $N^{\downarrow } = \{ (x,y),(x',y')\} $ denote the shadow of needle $N$ on the floor.
For Ana (who is an artist), a floor closed chain is also a sequence of $k$ distinct needles, $N_1, N_2, \ldots , N_ k$ (for some $k\geq 3$), such that: $N^{\downarrow }_ i \neq N^{\downarrow }_ j$, for every $1 \leq i < j \leq k \; $ (the $k$ needle shadows are all distinct); $N^{\downarrow }_1 = \{ (x_1,y_1), \, (x_2,y_2)\} , \; N^{\downarrow }_2 = \{ (x_2,y_2), \, (x_3,y_3)\} , \; \ldots , \\ N^{\downarrow }_ k = \{ (x_ k,y_ k), \, (x_{k+1},y_{k+1})\} , \; \mbox{ and } \; (x_{k+1},y_{k+1})=(x_1,y_1)$ Consider the sculpture depicted in the figure, which has the following four knitting needles:\[ \begin{array}{ll} A = \{ (12,12,8), \, (10,5,11)\} , & B = \{ (12,12,8), \, (4,14,21)\} , \\ C = \{ (12,12,8), \, (12,20,8)\} , & D = \{ (4,14,21), \, (10,5,21)\} . \end{array} \] This structure is not free of closed chains because, although there is no true closed chain, the sequence of needles $A, B, D$ is a floor closed chain. Task Write a program that, given the knitting needles of a sculpture, determines whether there is a true or a floor closed chain in the sculpture. Input The first line of the input has one integer, $K$, which is the number of knitting needles in the sculpture. Each of the following $K$ lines contains six integers, $x_1$, $y_1$, $z_1$, $x_2$, $y_2$, and $z_2$, which indicate that $\{ (x_1,y_1,z_1), \, (x_2,y_2,z_2)\} $ is the set of triples of a needle. Any two distinct needles are represented by different sets of triples. Constraints $1$ $\leq $ $K$ $\leq $ $50\, 000$ Number of knitting needles $1$ $\leq $ $x_ i, y_ i, z_ i$ $<$ $1\, 000$ Coordinates of each triple Output The output has two lines, each one with a string. The string in the first line is: True closed chains, if there is some true closed chain in the sculpture; No true closed chains, otherwise. The string in the second line is: Floor closed chains, if there is some floor closed chain in the sculpture; No floor closed chains, otherwise. 
Sample Input 1
4
12 12 8 10 5 11
12 12 8 4 14 21
12 12 8 12 20 8
4 14 21 10 5 21

Sample Output 1
No true closed chains
Floor closed chains

Sample Input 2
4
1 1 1 2 2 2
2 2 2 1 5 5
9 4 4 9 4 2
9 4 4 9 9 4

Sample Output 2
No true closed chains
No floor closed chains

Sample Input 3
3
50 50 50 100 100 100
100 100 100 50 50 90
50 50 90 50 50 50

Sample Output 3
True closed chains
No floor closed chains

Sample Input 4
3
1 1 5 1 3 7
1 3 7 4 4 5
4 4 5 1 1 5

Sample Output 4
True closed chains
Floor closed chains
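One way to approach the task (a sketch, not a reference solution): since distinct needles are distinct sets of triples, a true closed chain exists exactly when the graph of 3D ball centres joined by needles contains a cycle, which union-find detects; for floor chains, the same check runs on the deduplicated shadow edges, discarding shadows that collapse to a single point (such a shadow cannot take part in a chain of $k \geq 3$ distinct shadow sets). The Python below reproduces the sample verdicts.

```python
def find(parent, u):
    # Path-halving find for the union-find structure.
    while parent[u] != u:
        parent[u] = parent[parent[u]]
        u = parent[u]
    return u

def has_cycle(edges):
    """True iff the given distinct, loop-free edges contain a cycle.
    No two edges join the same pair of vertices, so any detected
    cycle has length >= 3, matching the k >= 3 requirement."""
    parent = {}
    for u, v in edges:
        parent.setdefault(u, u)
        parent.setdefault(v, v)
        ru, rv = find(parent, u), find(parent, v)
        if ru == rv:
            return True
        parent[ru] = rv
    return False

def classify(needles):
    """Return (true_chain, floor_chain) for needles given as 6-tuples."""
    true_edges = {frozenset([(x1, y1, z1), (x2, y2, z2)])
                  for x1, y1, z1, x2, y2, z2 in needles}
    shadows = {frozenset([(x1, y1), (x2, y2)])
               for x1, y1, z1, x2, y2, z2 in needles}
    floor_edges = {s for s in shadows if len(s) == 2}   # drop point shadows
    return (has_cycle([tuple(e) for e in true_edges]),
            has_cycle([tuple(e) for e in floor_edges]))

# The four needles of Sample Input 1:
needles = [(12, 12, 8, 10, 5, 11), (12, 12, 8, 4, 14, 21),
           (12, 12, 8, 12, 20, 8), (4, 14, 21, 10, 5, 21)]
print(classify(needles))   # (False, True): no true chain, but a floor chain
```

Deduplication matters for the floor check: in Sample Input 3, two needles cast the same shadow segment, which must count only once, so no floor cycle remains.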
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
For readability, I've put the definitions of algebraic operation and preservation at the end of the question. An old theme in logic is: Given some algebraic operation, we can give a syntactic characterization of the sentences preserved by this operation. For example, the sentences preserved by Cartesian products are the Horn sentences, and the sentences preserved by taking substructures are the universal sentences. Of course, there are a couple of caveats: First, these characterizations are only up to logical equivalence. E.g. $$\mbox{"$\exists x(x\not=x)\vee \forall x(R(x, x))$"}$$ is not a universal sentence, but is preserved by taking substructures. Second, and more importantly, these characterizations are logic-dependent! For example, there is a sentence $\varphi$ in second-order logic which is true in exactly the finite structures: $$\forall F((\forall x, y(F(x)=F(y)\iff x=y))\implies \forall x\exists y(F(y)=x)).$$ Clearly $\varphi$ is preserved under taking substructures, but it isn't universal (even allowing second-order universal quantifiers). I would like to know what is known beyond the first-order context. Question 1. What kinds of preservation results are known, or can we hope for, for logics other than first-order logic? I'm particularly interested in second-order logic, infinitary logics, and first-order logics with cofinality quantifiers. Now, this is a painfully broad question; so let me ask a more focused sub-question. There are several reasonable candidates, but here's my favorite: Second-order logic is, to put it mildly, terrible: not only does it lack the Compactness, Lowenheim-Skolem, and Interpolation properties (not to mention basically all the other nice properties), its set of validities isn't even set-theoretically absolute. However, I don't know for a fact that it's bad from a preservation perspective! 
That is, I don't know of any reason why we can't give reasonable descriptions (up to equivalence) of the sentences of second-order logic preserved under various algebraic operations. Now of course these characterizations would be of dubious value, since equivalence of second-order sentences is incredibly complicated; but it would still be really neat if they existed! Here's an attempt to precisely define what such a characterization should look like: Suppose $\mathcal{L}$ is a logic. Say that an algebraic operation $m$ is syntactic for $\mathcal{L}$ if there is a computable set $P_m$ of $\mathcal{L}$-sentences preserved under $m$, such that every $\mathcal{L}$-sentence which is preserved under $m$ is equivalent to a (possibly infinite) conjunction of ones in $P_m$. (Note that this only makes sense if we have a canonical way of representing $\mathcal{L}$-sentences by natural numbers - so second-order logic is okay, but infinitary logic isn't.) Call such a $P_m$ a syntactic base for $m$ in $\mathcal{L}$. Then we can ask: Question 2. Are any interesting algebraic operations syntactic for second-order logic? For example, is "substructure of" syntactic for second-order logic? I suspect "substructure of" is not syntactic - indeed, I suspect that in a precise sense, no nontrivial algebraic operation is syntactic for second-order logic (although pinning down what "nontrivial" means here is nontrivial) - but I don't see how to prove it. EDIT THE SECOND: It turns out "substructure of" is syntactic for second-order logic - see my answer below. However, this relies on a trick that doesn't appear to generalize to, say, products. So, I suspect the right algebraic operation to focus on is products, and here again I'm in the dark. Note that really I should ask if any algebraic operations are consistently syntactic for second-order logic - there's no reason to believe that even simple examples can be settled in ZFC alone! EDIT: A quick comment on this question. 
The question of whether a second-order sentence is preserved by a given algebraic operation is, in general, set-theoretically contingent. For instance, let $\Phi$ be any second-order sentence whose validity is undecidable from ZFC (e.g. we may take a $\Phi$ which is valid iff CH holds). Then via standard techniques we can construct a second-order sentence $\hat{\Phi}$ such that for all structures $M$, we have $M\models\hat{\Phi}$ unless $\vert M\vert=2^\kappa$ where $\kappa$ is the smallest cardinality of a model of $\neg\Phi$. Then $\hat{\Phi}$ is preserved under substructures iff $\Phi$ is valid. However, this does not show that whether a property is syntactic is set-theoretically contingent: note that in case $\hat{\Phi}$ is preserved under substructures, $\hat{\Phi}$ is equivalent to $\top$! So in principle we may have a syntactic base for an algebraic operation - provably in ZFC! - even if the preservation of a fixed sentence under that algebraic operation is set-theoretically contingent. EDIT: Arguably second-order logic is a bridge too far. The other natural logic to try is $L_{\omega_1\omega}$, but here the notion of "syntactic" just doesn't work. Here's a stab at the right question: say that $m$ is "syntactic for $L_{\omega_1\omega}$" if there is a Borel set of reals $B$ whose intersection with the set of real codes for $L_{\omega_1\omega}$-sentences, $B_m$, has the following properties: Each $\varphi\in B_m$ is preserved under $m$, and Every $L_{\omega_1\omega}$-sentence preserved under $m$ is equivalent to a (possibly uncountable) conjunction of sentences in $B_m$. Then we can ask, e.g.: Question 3. Is "substructure of" syntactic for $L_{\omega_1\omega}$? Definitions. By an algebraic operation, I mean a method of building new structures from old (here "structure" means "first-order structure"). 
Formally (and eliding set-theoretic subtleties), an algebraic operation is a function $m$ from classes of structures to classes of structures such that for all $\mathcal{C}$, $m(\mathcal{C})$ is closed under isomorphism, $\mathcal{C}\subseteq m(\mathcal{C})$, and $m(m(\mathcal{C}))=m(\mathcal{C})$. Some classic examples of algebraic operations are: Homomorphic images Substructures Finite products Arbitrary products Ultraproducts Ultraroots And so forth. Given an algebraic operation $m$ and a property $\mathfrak{P}$, say that $\mathfrak{P}$ is preserved by $m$ if - for every class of structures $\mathcal{C}$ - whenever every element of $\mathcal{C}$ has $\mathfrak{P}$, so does every element of $m(\mathcal{C})$.
Volume of Solid of Revolution Theorem Let $f$ be a real function which is continuous on the closed interval $\closedint a b$. Let the points be defined: $A = \tuple {a, \map f a}$ $B = \tuple {b, \map f b}$ $C = \tuple {b, 0}$ $D = \tuple {a, 0}$ Let $S$ be the solid of revolution generated by rotating the region $ABCD$, bounded above by the curve $y = \map f x$, about the $x$-axis. Then the volume $V$ of $S$ is given by: $\displaystyle V = \pi \int_a^b \paren {\map f x}^2 \rd x$ Now let the curve be defined parametrically as: $\set {\tuple {\map x t, \map y t}: a \le t \le b}$ Let the points be defined: $A = \tuple {\map x a, \map y a}$ $B = \tuple {\map x b, \map y b}$ $C = \tuple {\map x b, 0}$ $D = \tuple {\map x a, 0}$ Then the volume $V$ of $S$ is given by: $\displaystyle V = \pi \int_a^b \paren {\map y t}^2 \map {x'} t \rd t$ Proof Consider a rectangle bounded by the lines: $y = 0$ $x = \xi$ $x = \xi + \delta x$ $y = \map f x$ When rotated about the $x$-axis, this rectangle generates a disk which is approximately a cylinder, of volume: $V_\xi = \pi \paren {\map f x}^2 \delta x$ The technique of finding the Volume of Solid of Revolution by dividing up the solid of revolution into many thin disks and approximating them to cylinders was devised by Johannes Kepler sometime around or after $1612$, reportedly on the occasion of his wedding in $1613$. His inspiration was in the problem of finding the volume of wine barrels accurately. He published his technique in his $1615$ work Nova Stereometria Doliorum Vinariorum (New Stereometry of Wine Barrels). Sources 2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.) ... (previous) ... (next): Entry: volume of a solid of revolution
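As a quick numerical sanity check of the disc-method formula (my own illustration, not part of the proof): rotating $\map f x = \sqrt{r^2 - x^2}$ on $\closedint {-r} r$ about the $x$-axis generates a sphere, whose volume $\dfrac 4 3 \pi r^3$ the integral $\pi \int_a^b \paren {\map f x}^2 \rd x$ reproduces.

```python
import math

# Disc method, approximated by the midpoint rule: V = pi * integral of f(x)^2.
def volume_of_revolution(f, a, b, n=100_000):
    h = (b - a) / n
    return math.pi * sum(f(a + (i + 0.5) * h) ** 2 for i in range(n)) * h

r = 2.0
V = volume_of_revolution(lambda x: math.sqrt(r * r - x * x), -r, r)
# V is within ~1e-9 of the exact sphere volume (4/3) * pi * r^3
```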
Question: For which of the following matrices $A_i$ is there A complex matrix $B$ such that $B^2 = A_i$; A self-adjoint complex matrix $B$ such that $B^2 = A_i$; A real matrix $B$ such that $B^2 = A_i$? $A_1 = \begin{pmatrix} 2 & 1\\1 & 2\end{pmatrix}$, $A_2 = \begin{pmatrix} 1 & 2\\2 & 1\end{pmatrix}$, $A_3 = \begin{pmatrix} 1 & 4\\1 & 1\end{pmatrix}$. Working: $A_1$ and $A_2$ are both real symmetric matrices, so by the Real Spectral Theorem, there exist orthogonal matrices $P_1$ and $P_2$ such that $P_1^TA_1P_1$ and $P_2^TA_2P_2$ are both diagonal. A self-adjoint matrix must have real eigenvalues. The spectra of $A_1$, $A_2$ and $A_3$ are $\{1,3\}$, $\{-1,3\}$ and $\{-1,3\}$ respectively. If we can find an invertible matrix $M_i$ such that $M_i^{-1}A_iM_i=D_i$, for some diagonal matrix $D_i$, then $B = \pm M_i \sqrt{D_i}M_i^{-1}$.
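The recipe in the last line of the working can be checked numerically (a NumPy sketch of my own, not part of the original working). Since $A_1$ is real symmetric with positive spectrum $\{1, 3\}$, the construction even yields a real self-adjoint square root:

```python
import numpy as np

# A_1 = P diag(w) P^T with P orthogonal (spectral theorem for symmetric matrices);
# then B = P sqrt(diag(w)) P^T satisfies B^2 = A_1.
A1 = np.array([[2.0, 1.0], [1.0, 2.0]])
w, P = np.linalg.eigh(A1)           # eigenvalues w = [1, 3], eigenvectors in P
B = P @ np.diag(np.sqrt(w)) @ P.T   # principal square root

assert np.allclose(B @ B, A1)       # B really squares to A_1
assert np.allclose(B, B.T)          # and B is self-adjoint, as expected
```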
Definition:Cantor Normal Form Definition Let $x$ be an ordinal. The Cantor normal form of $x$ is an ordinal summation: $x = \omega^{a_1} n_1 + \dots + \omega^{a_k} n_k$ where: $k \in \N$ is a natural number $\omega$ is the minimal infinite successor set $\langle a_i \rangle$ is a strictly decreasing finite sequence of ordinals $\langle n_i \rangle$ is a finite sequence of finite ordinals In summation notation: $x = \displaystyle \sum_{i \mathop = 1}^k \omega^{a_i} n_i$ Properties Every ordinal number can be written in Cantor normal form. Moreover, the Cantor normal form is unique: the ordinal cannot be written in any other way that still qualifies as Cantor normal form. This unique representation is a consequence of the Division Theorem for Ordinals. Cantor normal form is useful when performing operations like multiplication and exponentiation. Also see Unique Representation of Ordinal as Sum shows that Cantor normal form exists for every ordinal and is unique. Source of Name This entry was named for Georg Cantor.
I'll preface this by saying that it isn't always clear what one means by "nonparametric" or "semiparametric" etc. In the comments, it seems likely that whuber has some formal definition in mind (maybe something like choosing a model $M_\theta$ from some family $\{M_\theta: \theta \in \Theta\}$ where $\Theta$ is infinite dimensional), but I'm going to be pretty informal. Some might argue that a nonparametric method is one where the effective number of parameters you use increases with the data. I think there is a video on videolectures.net where (I think) Peter Orbanz gives four or five different takes on how we can define "nonparametric." Since I think I know what sorts of things you have in mind, for simplicity I'll assume that you are talking about using Gaussian processes for regression, in a typical way: we have training data $(Y_i, X_i), i = 1, ..., n$ and we are interested in modeling the conditional mean $E(Y|X = x) := f(x)$. We write$$Y_i = f(X_i) + \epsilon_i$$and perhaps we are so bold as to assume that the $\epsilon_i$ are iid and normally distributed, $\epsilon_i \sim N(0, \sigma^2)$. $X_i$ will be one dimensional, but everything carries over to higher dimensions. If our $X_i$ can take values in a continuum then $f(\cdot)$ can be thought of as a parameter of (uncountably) infinite dimension. So, in the sense that we are estimating a parameter of infinite dimension, our problem is a nonparametric one. It is true that the Bayesian approach has some parameters floating about here and there. But really, it is called nonparametric because we are estimating something of infinite dimension. The GP priors we use assign mass to every neighborhood of every continuous function, so they can estimate any continuous function arbitrarily well. 
The things in the covariance function are playing a role similar to the smoothing parameters in the usual frequentist estimators - in order for the problem to not be absolutely hopeless we have to assume that there is some structure that we expect to see $f$ exhibit. Bayesians accomplish this by using a prior on the space of continuous functions in the form of a Gaussian process. From a Bayesian perspective, we are encoding beliefs about $f$ by assuming $f$ is drawn from a GP with such-and-such covariance function. The prior effectively penalizes estimates of $f$ for being too complicated. Edit for computational issues Most (all?) of this stuff is in the Gaussian Process book by Rasmussen and Williams. Computational issues are tricky for GPs. If we proceed naively we will need $O(N^2)$ size memory just to hold the covariance matrix and (it turns out) $O(N^3)$ operations to invert it. There are a few things we can do to make things more feasible. One option is to note that the guy we really need is $v$, the solution to $(K + \sigma^2 I)v = Y$ where $K$ is the covariance matrix. The method of conjugate gradients solves this exactly in $O(N^3)$ computations, but if we satisfy ourselves with an approximate solution we could terminate the conjugate gradient algorithm after $k$ steps and do it in $O(kN^2)$ computations. We also don't necessarily need to store the whole matrix $K$ at once. So we've moved from $O(N^3)$ to $O(kN^2)$, but this still scales quadratically in $N$, so we might not be happy. The next best thing is to work instead with a subset of the data, say of size $m$, where inverting and storing an $m \times m$ matrix isn't so bad. Of course, we don't want to just throw away the remaining data. 
The subset of regressors approach notes that we can derive the posterior mean of our GP as a regression of our data $Y$ on $N$ data-dependent basis functions determined by our covariance function; so we throw all but $m$ of these away and we are down to $O(m^2 N)$ computations. A couple of other potential options exist. We could construct a low-rank approximation to $K$, and set $K = QQ^T$ where $Q$ is $n \times q$ and of rank $q$; it turns out that inverting $K + \sigma^2 I$ in this case can be done by instead inverting $Q^TQ + \sigma^2 I$. Another option is to choose the covariance function to be sparse and use conjugate gradient methods - if the covariance matrix is very sparse then this can speed up computations substantially.
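A minimal sketch of the truncated-conjugate-gradients idea above (toy data and RBF kernel are my own, not code from Rasmussen and Williams): each CG step needs only a matrix-vector product with $K + \sigma^2 I$, so an approximate solve after $k$ steps costs $O(kN^2)$ and never forms an inverse.

```python
import numpy as np

def rbf(X, Z, ell=1.0):
    # Squared-exponential kernel on 1-D inputs (illustrative choice).
    d2 = (X[:, None] - Z[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell ** 2)

def cg_solve(A, b, max_iter=200, tol=1e-10):
    # Plain conjugate gradients for SPD A; stop early once the residual is tiny.
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 40))
y = np.sin(X) + 0.1 * rng.normal(size=40)
A = rbf(X, X) + 0.1 * np.eye(40)   # K + sigma^2 I with sigma^2 = 0.1
v = cg_solve(A, y)                 # v = (K + sigma^2 I)^{-1} Y, approximately
```

Truncating `max_iter` trades accuracy for the $O(kN^2)$ cost discussed above; kernel matrices with clustered spectra (as here) converge in far fewer than $N$ steps.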
I have a process which consists of a number of events and what is known is the timings between the events. What I'm trying to determine is a distribution that allows me to determine a likelihood that a new sample fits the distribution. The issue is mainly that if you have lots of samples you can approximate the result using a standard Gaussian and use the mean and standard deviation. But if you only have a handful of samples, the Gaussian does not accurately represent the situation. From what I've read it is common to model waiting times using the gamma distribution. Looking at how the process evolves it looks like it matches well. The unknown is the scale parameter, since the shape parameter I think should be the number of samples. What I've worked out so far is that given the timings $X_1 ... X_N$ you can say: $$ \sum_{n=1}^N X_n \sim \Gamma(N,\theta) $$ ($N$ is known and fixed) However, $\theta$ is unknown, but the maximum likelihood parameter is the average of the $X_i$ (according to wikipedia anyway). My question is, can I use this to estimate a distribution for $X_i$, that is, since the $X_i$ are independent: $$ N X_i | \sum_n X_n \sim \Gamma(N, \tfrac{1}{N}\sum_n X_n) $$ Something else I've wondered about. Suppose I do have information about $\theta$, say a distribution. How can I incorporate this into the model? Edit: Clarified that N is fixed.
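To make the fitting step concrete, here is one illustrative reading of the setup (my own sketch, with an assumed fixed shape $k$ rather than the questioner's $N$): for a known shape, the maximum-likelihood scale is $\hat\theta = \bar x / k$, and a new waiting time can be scored by its log-density under $\Gamma(k, \hat\theta)$.

```python
import math

def gamma_logpdf(x, k, theta):
    # log of the Gamma(shape k, scale theta) density at x > 0
    return (k - 1) * math.log(x) - x / theta - math.lgamma(k) - k * math.log(theta)

def fit_scale(xs, k):
    # MLE of the scale when the shape k is known: theta_hat = mean(xs) / k
    return sum(xs) / (len(xs) * k)

xs = [1.2, 0.8, 1.5, 1.1, 0.9]   # made-up waiting times
k = 2.0                          # assumed shape
theta = fit_scale(xs, k)         # 0.55 for this data
score = gamma_logpdf(1.0, k, theta)  # log-likelihood of a new sample at x = 1.0
```

Incorporating prior information about $\theta$ would replace `fit_scale` with a posterior (the inverse-gamma family is conjugate for a gamma scale), but that is beyond this sketch.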
I am currently studying the Massive Thirring Model (MTM) with the Lagrangian $$ \mathcal{L} = \imath {\bar{\Psi}} (\gamma^\mu {\partial}_\mu - m_0 )\Psi - \frac{1}{2}g: \left( \bar{\Psi} \gamma_\mu \Psi \right)\left( \bar{\Psi} \gamma^\mu \Psi \right): . $$ and Hamiltonian $$ \int \mathrm{d}x \imath \Psi^\dagger \sigma_z \partial_x \Psi + m_0 \Psi^\dagger \Psi + 2g \Psi^\dagger_1 \Psi^\dagger_2 \Psi_2\Psi_1\\ $$ Due to the infinite set of conservation laws, particle production is said to be absent from this theory. However why isn't it sufficient to show that particle production is absent if the number operator $$N=\int \mathrm{d}x \Psi^\dagger \Psi$$ commutes with the Hamiltonian? Also, by particle production being absent, is that just a statement that all Feynman diagrams with self energy insertions evaluate to 0 but all other Feynman diagrams are possible?
Definition:Contour/Complex Plane Contents Definition Let $C_1, \ldots, C_n$ be directed smooth curves in $\C$. Let $\gamma_i: \left[{a_i \,.\,.\, b_i}\right] \to \C$ be the parameterization of $C_i$ for each $i \in \left\{ {1, \ldots, n}\right\}$. Suppose that, for each $i \in \left\{ {1, \ldots, n - 1}\right\}$: $\gamma_i \left({b_i}\right) = \gamma_{i + 1} \left({a_{i + 1} }\right)$ Then the finite sequence $\left\langle{C_1, \ldots, C_n}\right\rangle$ is a contour. If $C_1, \ldots, C_n$ are defined only by their parameterizations $\gamma_1, \ldots, \gamma_n$, then the contour can be denoted by the same symbol $\gamma$. The parameterization of $C$ is defined as the function $\gamma: \left[{a_1 \,.\,.\, c_n}\right] \to \C$ with: $\gamma \restriction_{\left[{c_i \,.\,.\, c_{i + 1} }\right] } \left({t}\right) = \gamma_i \left({t}\right)$ where $\displaystyle c_i = a_1 + \sum_{j \mathop = 1}^i b_j - \sum_{j \mathop = 1}^i a_j$ for $i \in \left\{ {0, \ldots, n}\right\}$. Here, $\gamma \restriction_{\left[{c_i \,.\,.\, c_{i + 1} }\right] }$ denotes the restriction of $\gamma$ to $\left[{c_i \,.\,.\, c_{i + 1} }\right]$. $C$ is a closed contour if and only if: $\gamma_1 \left({a_1}\right) = \gamma_n \left({b_n}\right)$ $C$ is a simple contour if and only if: $(1): \quad$ For all $i,j \in \left\{ {1, \ldots, n}\right\}, t_1 \in \left[{a_i \,.\,.\, b_i}\right), t_2 \in \left[{a_j \,.\,.\, b_j}\right)$ with $t_1 \ne t_2$, we have $\gamma_i \left({t_1}\right) \ne \gamma_j \left({t_2}\right)$. $(2): \quad$ For all $k \in \left\{ {1, \ldots, n}\right\}, t \in \left[{a_k \,.\,.\, b_k}\right)$ where either $k \ne 1$ or $t \ne a_1$, we have $\gamma_k \left({t}\right) \ne \gamma_n \left({b_n}\right)$. The length of $C$ is defined as: $\displaystyle L \left({C}\right) := \sum_{i \mathop = 1}^n \int_{a_i}^{b_i} \left\vert{\gamma_i' \left({t}\right) }\right\vert \rd t$ The image of $C$ is defined as: $\displaystyle \operatorname{Im} \left({C}\right) := \bigcup_{i \mathop = 1}^n \operatorname{Im} \left({\gamma_i}\right)$ where $\operatorname{Im} \left({\gamma_i}\right)$ denotes the image of $\gamma_i$. If $\operatorname{Im} \left({C}\right) \subseteq D$, where $D$ is a subset of $\C$, we say that $C$ is a contour in $D$. 
The start point of $C$ is $\gamma_1 \left({a_1}\right)$. The end point of $C$ is $\gamma_n \left({b_n}\right)$. Collectively, $\gamma_1 \left({a_1}\right)$ and $\gamma_n \left({b_n}\right)$ are referred to as the endpoints of $C$. Illustration Also known as A contour is called a directed contour, piecewise smooth path, or a piecewise smooth curve in many texts. Some texts only use the name contour for a closed contour. Also denoted as $C_1 \cup C_2 \cup \ldots \cup C_n$ or with some other symbol denoting the concatenation of directed smooth curves. Also see Definition:Directed Smooth Curve (Complex Plane), the special case that $n = 1$.
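The length formula $L \left({C}\right) = \sum_i \int_{a_i}^{b_i} \left\vert{\gamma_i' \left({t}\right)}\right\vert \rd t$ is easy to check numerically on a single smooth curve (an illustration of my own): the unit circle $\gamma \left({t}\right) = e^{i t}$, $t \in \left[{0 \,.\,.\, 2 \pi}\right]$, has $\left\vert{\gamma' \left({t}\right)}\right\vert = 1$ and hence length $2 \pi$.

```python
import cmath

# Midpoint-rule approximation of the contour length integral of |gamma'(t)|.
def curve_length(gamma_prime, a, b, n=10_000):
    h = (b - a) / n
    return sum(abs(gamma_prime(a + (i + 0.5) * h)) for i in range(n)) * h

# Unit circle: gamma(t) = exp(i t), so gamma'(t) = i exp(i t), |gamma'(t)| = 1.
L = curve_length(lambda t: 1j * cmath.exp(1j * t), 0.0, 2 * cmath.pi)
# L is 2*pi up to rounding error
```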
Consider the ring $\mathbb{Z}[q^{\pm 1}]$. For $n \in \mathbb{N}$, define the quantum integers: $$[n]_q := \frac{q^n-q^{-n}}{q-q^{-1}} = q^{n-1} + q^{n-3} + \cdots + q^{-(n-3)} + q^{-(n-1)}$$ What is the general formula for multiplying and dividing quantum integers? This is probably well-known but I don't have a reference. For example, we have $$ [2]_q[2]_q = [3]_q+1, \ \ \ \ [4]_q[3]_q = [6]_q+[4]_q+[2]_q, \ \ \ \ \frac{[6]_q}{[2]_q} = [5]_q-[3]_q+1 $$ Also, what is the relationship between these quantum integers and the ones defined as $$ [n]_q := \frac{1-q^n}{1-q} = 1 + q + \dots + q^{n-1}? $$ Thanks. Edit: fixed first formula
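The identities above are easy to machine-check by representing elements of $\mathbb{Z}[q^{\pm 1}]$ as exponent-to-coefficient dictionaries (a throwaway sketch of my own, no computer algebra system needed):

```python
# Symmetric quantum integer [n]_q = q^{n-1} + q^{n-3} + ... + q^{-(n-1)},
# stored as {exponent: coefficient}.
def qint(n):
    return {n - 1 - 2 * k: 1 for k in range(n)}

def mul(a, b):
    out = {}
    for i, x in a.items():
        for j, y in b.items():
            out[i + j] = out.get(i + j, 0) + x * y
    return {e: c for e, c in out.items() if c}

def add(*polys):
    out = {}
    for p in polys:
        for e, c in p.items():
            out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c}

# Check the examples from the question:
assert mul(qint(2), qint(2)) == add(qint(3), {0: 1})            # [2][2] = [3] + 1
assert mul(qint(4), qint(3)) == add(qint(6), qint(4), qint(2))  # [4][3] = [6]+[4]+[2]
```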
Let $x_0, \ldots, x_{n-1}$ be complex numbers. The DFT is defined by the formula $$f_j = \sum_{k=0}^{n-1} x_k e^{-\frac{2\pi i}{n} jk} \qquad j = 0, \ldots, n-1.$$ Evaluating these sums directly would take $O(n^2)$ arithmetical operations (see Big O notation). An FFT is an algorithm to compute the same result in only $O(n \log n)$ operations. Since the inverse DFT is the same as the DFT, but with the sign of the exponent flipped and a $1/n$ factor, any FFT algorithm can easily be adapted for it as well. The most common FFT is the Cooley-Tukey algorithm. In its most basic form, this method first computes the Fourier transforms of the even-indexed numbers $x_0, x_2, \ldots, x_{n-2}$, and of the odd-indexed numbers $x_1, x_3, \ldots, x_{n-1}$, and then combines the two results. We write $n' = n/2$ and denote the discrete Fourier transform of the even-indexed numbers $x'_0 = x_0, x'_1 = x_2, \ldots, x'_{n'-1} = x_{n-2}$ by $f'_j$, and that of the odd-indexed numbers $x''_0 = x_1, x''_1 = x_3, \ldots, x''_{n'-1} = x_{n-1}$ by $f''_j$. Then: $$\begin{matrix} f_j & = & \sum_{k=0}^{\frac{n}{2}-1} x_{2k} e^{-\frac{2\pi i}{n} j(2k)} + \sum_{k=0}^{\frac{n}{2}-1} x_{2k+1} e^{-\frac{2\pi i}{n} j(2k+1)} \\ \\ & = & \sum_{k=0}^{n'-1} x'_{k} e^{-\frac{2\pi i}{n'} jk} + e^{-\frac{2\pi i}{n}j} \sum_{k=0}^{n'-1} x''_k e^{-\frac{2\pi i}{n'} jk} \\ \\ & = & \left\{ \begin{matrix} f'_j + e^{-\frac{2\pi i}{n}j} f''_j & \mbox{if } j<n' \\ \\ f'_{j-n'} - e^{-\frac{2\pi i}{n}(j-n')} f''_{j-n'} & \mbox{if } j \geq n' \end{matrix} \right. \end{matrix}$$ A form of this trick, including its recursive application, was already known around 1805 to Carl Friedrich Gauss, who used it to interpolate the trajectories of the asteroids Pallas and Juno, but was not widely recognized (being published only posthumously and in Latin); Gauss did not analyze the asymptotic complexity, however. Various limited forms were also rediscovered several times throughout the 19th and early 20th centuries. FFTs became popular after J. W. Cooley of IBM and J. W. Tukey of Princeton published a paper in 1965 reinventing the algorithm and describing how to perform it conveniently on a computer (including how to arrange for the output to be produced in the natural ordering). 
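The radix-2 decimation-in-time recursion described above translates almost line-for-line into code (an illustrative sketch for power-of-two $n$, not an optimized implementation):

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # f'_j : DFT of the even-indexed inputs
    odd = fft(x[1::2])    # f''_j: DFT of the odd-indexed inputs
    out = [0j] * n
    for j in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * j / n) * odd[j]  # twiddle factor * f''_j
        out[j] = even[j] + t            # f_j       for j <  n/2
        out[j + n // 2] = even[j] - t   # f_{j+n/2} for j >= n/2 (sign flip)
    return out
```

The two output lines in the loop are exactly the two cases of the combined formula: the half-length transforms are periodic, and the twiddle factor picks up a minus sign when $j \ge n'$.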
This process is an example of the general technique of divide and conquer algorithms; in many traditional implementations, however, the explicit recursion is avoided, and instead one traverses the computational tree in breadth-first fashion. The above re-expression of a size-$n$ DFT as two size-$n/2$ DFTs is sometimes called the Danielson-Lanczos lemma, since the identity was noted by those two authors in 1942 (influenced by Runge's 1903 work). They applied their lemma in a "backwards" recursive fashion, repeatedly doubling the DFT size until the transform spectrum converged (although they apparently didn't realize the logarithmic asymptotic complexity they had achieved). The Danielson-Lanczos work predated widespread availability of computing machines and required hand calculation; they reported a computation time of 140 minutes for a size-64 DFT operating on real inputs (see below) to 3-5 significant digits. Cooley and Tukey's 1965 paper reported a running time of 0.02 minutes for a size-2048 complex DFT on an IBM 7094 (probably in 36-bit single precision, ~8 digits). Rescaling the time by $n \log n$, this corresponds roughly to a speedup factor of around 800,000. The more modern FFT library FFTW, on a 2GHz Pentium-IV in 64-bit double precision (~16 digits), can compute a size-64 real-input DFT in 1μs and a size-2048 complex DFT in 100μs, speedups of about 8,000,000,000 and 10,000 over Danielson & Lanczos and Cooley & Tukey, respectively, not even including the considerable improvements in accuracy. (140 minutes for size 64 may sound like a long time, but it corresponds to an average of at most 16 seconds per floating-point operation, around 20% of which are multiplications...this is a fairly impressive rate for a human being to sustain for almost two and a half hours, especially when you consider the bookkeeping overhead.) 
More generally, Cooley-Tukey algorithms recursively re-express a DFT of a composite size $n = n_1 n_2$ as $n_1$ DFTs of size $n_2$, followed by multiplication by complex roots of unity called twiddle factors, followed by $n_2$ DFTs of size $n_1$. Typically, either $n_1$ or $n_2$ is a small factor, called the radix. If $n_1$ is the radix, it is called a decimation in time (DIT) algorithm, whereas if $n_2$ is the radix, it is decimation in frequency (DIF, also called the Sande-Tukey algorithm). The version presented above is a radix-2 DIT algorithm; in the final expression, the phase multiplying the odd transform is the twiddle factor, and the +/- combination (butterfly) of the even and odd transforms is a size-2 DFT. (The radix's small DFT is sometimes known as a butterfly, so-called because of the shape of the data-flow diagram for the radix-2 case.) Gauss used a radix-3 DIT (or radix-4 DIF) step in a 12-point DFT example. There are many other variations on the Cooley-Tukey algorithm. Mixed-radix implementations handle composite sizes with a variety of (typically small) factors in addition to two, usually (but not always) employing the $O(n^2)$ algorithm for the prime base cases of the recursion. Split-radix merges radices 2 and 4, exploiting the fact that the first transform of radix 2 requires no twiddle factor, in order to achieve a minimal operation count for power-of-two sizes. (On present-day computers, performance is determined more by cache and CPU pipeline considerations than by strict operation counts; well optimized FFT implementations often employ larger radices and/or hard-coded base-case transforms of significant size.) Another way of looking at the Cooley-Tukey algorithm is that it re-expresses a size-$n$ one-dimensional DFT as an $n_1$ by $n_2$ two-dimensional DFT (plus twiddles), where the output matrix is transposed. The net result of all of these transpositions, for a radix-2 algorithm, corresponds to a bit reversal of the input (DIF) or output (DIT) indices. 
If, instead of using a small radix, one employs a radix of roughly $\sqrt n$ and explicit input/output matrix transpositions, it is called a four-step (also six-step) algorithm, initially proposed for cache/locality optimization (Gentleman and Sande, 1966). The general Cooley-Tukey factorization rewrites the indices $j$ and $k$ as $j = n_2 j_1 + j_2$ and $k = n_1 k_2 + k_1$, respectively, where the indices $j_a$ and $k_a$ run from $0, \ldots, n_a - 1$ (for $a$ of 1 or 2). That is, it re-indexes the input ($k$) and output ($j$) as $n_1$ by $n_2$ two-dimensional arrays in column-major and row-major order, respectively. When this reindexing is substituted into the DFT formula for $jk$, the $n_2 j_1 n_1 k_2$ cross term vanishes (its exponential is unity), and the remaining terms give $$f_{n_2 j_1 + j_2} = \sum_{k_1=0}^{n_1-1} \left[ e^{-\frac{2\pi i}{n} j_2 k_1 } \right] \left( \sum_{k_2=0}^{n_2-1} x_{n_1 k_2 + k_1} e^{-\frac{2\pi i}{n_2} j_2 k_2 } \right) e^{-\frac{2\pi i}{n_1} j_1 k_1 }$$ where the inner sum is a DFT of size $n_2$, the outer sum is a DFT of size $n_1$, and the bracketed term is the twiddle factor. The 1965 Cooley-Tukey paper noted that one can employ an arbitrary radix $r$ (as well as mixed radices), but failed to realize that the radix butterfly is itself a DFT that can use FFT algorithms. Hence, they reckoned the complexity to be $O(r^2 \, \tfrac{n}{r} \log_r n)$, and erroneously concluded that the optimal radix was 3 (the closest integer to $e$). (Gauss also derived the algorithm for arbitrary radices, and gave explicit examples of both radix-3 and radix-6 steps.) There are other FFT algorithms distinct from Cooley-Tukey. For relatively prime $n_1$ and $n_2$, one can use the Prime-Factor (Good-Thomas) algorithm, based on the Chinese Remainder Theorem, to factorize the DFT similarly to Cooley-Tukey but without the twiddle factors. 
The Rader-Brenner algorithm is a Cooley-Tukey-like factorization but with purely imaginary twiddle factors, reducing multiplications at the cost of increased additions and reduced numerical stability. Algorithms that recursively factorize the DFT into smaller operations other than DFTs include the Bruun and QFT algorithms. (The Rader-Brenner and QFT algorithms were proposed for power-of-two sizes, but it is possible that they could be adapted to general composite $n$. Bruun's algorithm was adapted to the mixed-radix case for even $n$ by H. Murakami.) Bruun's algorithm, in particular, is based on interpreting the FFT as a recursive factorization of the polynomial $z^n - 1$, here into real-coefficient polynomials of the form $z^M - 1$ and $z^{2M} + a z^M + 1$. In many applications, the input data for the DFT are purely real, in which case the outputs satisfy the symmetry $f_{n-j} = f_j^*$. It was once believed that real-input DFTs could be more efficiently computed by means of the Discrete Hartley transform (DHT), but this was subsequently disproved: a specialized real-input DFT algorithm (FFT) can typically be found that requires fewer operations than the corresponding DHT algorithm (FHT) for the same number of inputs. There are further FFT specializations for the cases of real data that have even/odd symmetry, in which case one can gain another factor of ~2 in time/space and the DFT becomes the discrete cosine/sine transform(s) (DCT/DST). Instead of directly modifying an FFT algorithm for these cases, DCTs/DSTs can also be computed via FFTs of real data combined with $O(n)$ pre/post processing. All of the FFT algorithms discussed so far compute the DFT exactly (in exact arithmetic, i.e. neglecting floating-point errors). A few "FFT" algorithms have been proposed, however, that compute the DFT approximately, with an error that can be made arbitrarily small at the expense of increased computations. Such algorithms trade the approximation error for increased speed or other properties. 
For example, an approximate FFT algorithm by Edelman et al. (1999) achieves lower communication requirements for parallel computing with the help of a fast-multipole method. A wavelet-based approximate FFT by Guo and Burrus (1996) takes sparse inputs/outputs (time/frequency localization) into account more efficiently than is possible with an exact FFT. Another algorithm for approximate computation of a subset of the DFT outputs is due to Shentov et al. (1995). Only the Edelman algorithm works equally well for sparse and non-sparse data, however, since it is based on the compressibility (rank deficiency) of the Fourier matrix itself rather than the compressibility (sparsity) of the data. Even the "exact" FFT algorithms have errors when finite-precision floating-point arithmetic is used, but these errors are typically quite small; most FFT algorithms, e.g. Cooley-Tukey, have excellent numerical properties. The upper bound on the relative error for the Cooley-Tukey algorithm is $O(\epsilon \log n)$, compared to $O(\epsilon n^{3/2})$ for the naive DFT formula (Gentleman and Sande, 1966), where $\epsilon$ is the machine floating-point relative error. In fact, the average errors are much better than these upper bounds, being only $O(\epsilon \sqrt{\log n})$ for Cooley-Tukey and $O(\epsilon \sqrt n)$ for the naive DFT (Schatzman, 1996). These results, however, are very sensitive to the accuracy of the twiddle factors used in the FFT (i.e. the trigonometric function values), and it is not unusual for incautious FFT implementations to have much worse accuracy, e.g. if they use unstable trigonometric recurrence formulas. Some FFTs other than Cooley-Tukey, such as the Rader-Brenner algorithm, are intrinsically less stable.
IMPORTANT Slide 2 Restate Residue Theorem: f has an isolated singularity at $z_0$ and is analytic on the punctured disc $0 < |z - z_0| < r$, so f has a unique Laurent series representation $f(z) = \sum_{k=-\infty}^{\infty} a_k (z - z_0)^k$ there. The residue of f at $z_0$ is represented by $\operatorname{Res}(f, z_0) = a_{-1}$, the coefficient of $(z - z_0)^{-1}$. Following slides are to calculate residues at removable singularities and poles (not essential singularities?) Slide 3 Residue at removable singularity: $z_0$ is a removable singularity if the Laurent series has no negative powers of $(z - z_0)$, so $a_{-1} = 0$, so $\operatorname{Res}(f, z_0) = 0$. Slide 4 Residue at simple pole (*) How to isolate $a_{-1}$? Multiply through by $(z - z_0)$, then take the limit as z tends to $z_0$: $\operatorname{Res}(f, z_0) = \lim_{z \to z_0} (z - z_0) f(z)$. We need to take the limit because of the division by $(z - z_0)$ in the original Laurent series (*) Slide 5 Residue at simple pole example (From previous slide) Consider a function with simple poles at $z_0 = +i, -i$. Slide 6 Residue at double pole example Laurent series: (*) $z_0$ is a double pole when $a_{-2} \ne 0$ but $a_{-k} = 0$ for k > 2. To isolate $a_{-1}$: (i) multiply by $(z - z_0)^2$, (ii) differentiate, (iii) take the limit: $\operatorname{Res}(f, z_0) = \lim_{z \to z_0} \frac{\mathrm d}{\mathrm d z} \left[ (z - z_0)^2 f(z) \right]$. Slide 7 Residue at double pole example: double pole at z = 1, simple pole at z = 3. Slide 8 Residue at poles of order n Use the method above: (i) multiply by $(z - z_0)^n$, (ii) differentiate n - 1 times, (iii) take the limit, giving $\operatorname{Res}(f, z_0) = \frac{1}{(n-1)!} \lim_{z \to z_0} \frac{\mathrm d^{n-1}}{\mathrm d z^{n-1}} \left[ (z - z_0)^n f(z) \right]$. Slide 9 Residue of "quotient" functions $f(z) = g(z)/h(z)$ with $h(z_0) = 0$ and $h'(z_0) \ne 0$: $\operatorname{Res}(f, z_0) = g(z_0)/h'(z_0)$. Aug 16, 2016 God this stuff goes on and on Slide 2 Recap: f has an isolated singularity at $z_0$, e.g. when $z = z_0$ we have division by zero somewhere. Then if the function is analytic everywhere in a (punctured) disc around $z_0$ (but excluding only the point $z_0$), since a punctured disc is an annulus, the function has a Laurent series representation: $f(z) = \sum_{k=-\infty}^{\infty} a_k (z - z_0)^k$ for $0 < |z - z_0| < r$ (the inequality excludes the centre of the disc). Now take some closed circular path $\gamma$ of radius $\rho$ somewhere in the punctured disc, $0 < \rho < r$. Then integrate the function around the path: $\oint_\gamma f(z) \, \mathrm d z = \sum_{k=-\infty}^{\infty} a_k \oint_\gamma (z - z_0)^k \, \mathrm d z$ (*) (not explained, that because convergence is uniform over the disc then sigma and integral can be interchanged). Note we have taken $a_k$ outside the integral as it is a constant coefficient for each integral. 
And in theory we have to perform an infinite number of integrations as $k$ ranges between $-$ and $+$ infinity. How do we calculate these integrals (which are contained within the sigma)? Slide 3 For all values of $k$ EXCEPT $-1$, the function $(z - z_0)^k$ has an antiderivative $\frac{(z - z_0)^{k+1}}{k+1}$, and since we are integrating an (analytic?) function over a closed curve, $\oint (z - z_0)^k\,dz = 0$. So almost all the terms that are integrated have disappeared. For the remaining integral we can parameterise the curve as $z = z_0 + \rho e^{it}$, or use the Cauchy integral theorem, to give a value of $\oint (z - z_0)^{-1}\,dz = 2\pi i$. And because this integral value is for $k = -1$ (all other integrals being zero), looking back at (*) the final value for the sigma is $\oint f(z)\,dz = 2\pi i\, a_{-1}$. Slide 4 From (*) above, at an isolated singularity $z_0$, the residue is the coefficient $a_{-1}$ of the Laurent expansion of $f(z)$ around the isolated singularity $z_0$, all other coefficients integrating to zero. If $0 < |z - z_0| < r$ is an annulus around the isolated singularity then the residue of $f$ at $z_0$ is represented as $\operatorname{Res}(f, z_0) = a_{-1}$ (of the Laurent expansion of $f$ around $z_0$). Examples: $f(z) = \frac{1}{(z-1)(z-2)}$. This has isolated singularities at $z = 1, 2$. Therefore annuli around the singularities: $0 < |z - 1| < 1$ and $0 < |z - 2| < 1$. Laurent series: $f(z) = \frac{-1}{z-1} + {}$ sum of rest of terms, so $\operatorname{Res}(f, 1) = -1$; or $f(z) = \frac{1}{z-2} + {}$ sum of rest of terms, so $\operatorname{Res}(f, 2) = +1$. Slide 5 Residue examples. Isolated singularity $z = 0$, Laurent series over the annulus $0 < |z - 0| < \infty$, so $\operatorname{Res}(f, 0) = a_{-1}$. $f(z) = \cos(1/z) = 1 - \frac{1}{2!}\frac{1}{z^2} + \frac{1}{4!}\frac{1}{z^4} - \cdots$: this is an essential singularity (see) because infinitely many terms with negative powers, so $z = 0$ is an essential singularity, and since the coefficient of $z^{-1}$ is zero, $\operatorname{Res}(f, 0) = 0$. $f(z) = \sin(1/z) = \frac{1}{z} - \frac{1}{3!}\frac{1}{z^3} + \frac{1}{5!}\frac{1}{z^5} - \cdots$:
this has an essential singularity (see) because infinitely many terms with negative powers, so $z = 0$ is an essential singularity, and since the coefficient of $z^{-1}$ is $1$, $\operatorname{Res}(f, 0) = 1$. $f(z) = \frac{\sin z}{z}$: this has a removable singularity (see) because no terms with negative powers, so $z = 0$ is a removable singularity, and since the coefficient of $z^{-1}$ is $0$, $\operatorname{Res}(f, 0) = 0$. << to follow examples of residues and use in calculating integrals >> Slide 2 If a function is analytic everywhere on a disc apart from the point at its centre $z_0$, the centre is called an isolated singularity. The disc is called a punctured disc and is represented $0 < |z - z_0| < r$. Examples: $f(z) = 1/z$ has an isolated singularity at $z_0 = 0$; $f(z) = 1/\sin z$ has (multiple) isolated singularities at $z_0 = 0, \pm\pi, \pm 2\pi$ etc.; $f(z) = 1/(z-2)$ has an isolated singularity at $z = 2$. Counter examples: $f(z) = \sqrt{z}$ and $\operatorname{Log}(z)$ do not have isolated singularities at $0$ since there is no punctured disc around $0$ where the functions are analytic. The functions are not analytic on the negative real axis. Slide 3 As we have removed the isolated singularity from the disc, we have created an annulus around this point and hence the (analytic) function has a Laurent series. Slide 4 Behaviour of the Laurent series near the isolated singularity: $f(z) = \frac{\cos z - 1}{z^2} = -\frac{1}{2!} + \frac{z^2}{4!} - \cdots$ No negative powers. $f(z) = \frac{\cos z}{z^4} = \frac{1}{z^4} - \frac{1}{2!}\frac{1}{z^2} + \frac{1}{4!} - \cdots$ Finitely many negative powers. $f(z) = \cos(1/z) = 1 - \frac{1}{2!}\frac{1}{z^2} + \frac{1}{4!}\frac{1}{z^4} - \frac{1}{6!}\frac{1}{z^6} + \cdots$ Infinitely many negative powers. Slide 5 Definition according to the number of negative powers of the Laurent series. In the Laurent series surrounding an isolated singularity: if the coefficients of the negative powers are all zero (there are no negative power terms, i.e. $a_k = 0$ for $k < 0$) in the Laurent series, then the singularity is removable. If there are finitely many negative terms in the Laurent series around the singularity then these singularities are also called poles, i.e. there exists an $N > 0$ such that $a_{-N} \neq 0$ but $a_k = 0$ for all $k < -N$.
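SymPy reproduces this classification by expanding the slide's examples around 0 (a quick sketch; `series` handles the cases with finitely many negative powers):

```python
import sympy as sp

z = sp.symbols('z')

# Removable: (cos z - 1)/z^2 has no negative powers; its value extends to -1/2 at 0.
removable = sp.series((sp.cos(z) - 1) / z**2, z, 0, 4).removeO()
print(removable)

# Pole of order 4: cos z / z^4 has finitely many negative powers, leading term z**-4.
pole = sp.series(sp.cos(z) / z**4, z, 0, 2).removeO()
print(pole)

# cos(1/z) has infinitely many negative powers (essential singularity), so there is
# no Laurent polynomial to print; no limit exists at 0 along the real axis.
```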
$N$ is the order of the pole; if $N = 1$ this is a simple pole. If there are infinitely many negative powers in the Laurent series around the singularity then this is an essential singularity, i.e. $a_k \neq 0$ for infinitely many $k < 0$. Slide 6 Summary table of type of singularity based on Laurent expansion and number of negative power indices. Slide 7 Removable singularities: $z_0$ is a removable singularity if its Laurent series centred at $z_0$ satisfies $a_k = 0$ for all $k < 0$ (i.e. negative powers). $f(z) = \frac{\sin z}{z} = 1 - \frac{z^2}{3!} + \frac{z^4}{5!} - \cdots$ for $0 < |z| < \infty$, which looks like a Taylor series, and here if we define $f(0) = 1$, $f$ has become analytic in $\mathbb{C}$ and the singularity has been removed. << Statement of Riemann's theorem >> $z_0$ is an isolated singularity of $f$; $z_0$ is a removable singularity if and only if $f$ is bounded near $z_0$. Slide 8 Poles: $\cos z / z^4$ has order 4. $z_0$ is a pole if and only if the function approaches infinity as $z$ approaches $z_0$. Note if $f(z)$ has a pole at $z_0$ then $1/f(z)$ has a removable singularity at $z_0$. Slide 9 Essential singularities: e.g. $f(z) = e^{1/z}$ has an essential singularity at $z_0 = 0$. Consider $z$ a real number: $e^{1/z}$ approaches infinity as $z$ approaches 0 from the right (the positive side) because $1/z$ is getting large and positive, whereas $e^{1/z}$ approaches 0 as $z$ approaches 0 from the left (the negative side) because $1/z$ is getting large and negative. So $f$ does not have a limit as $z$ approaches the isolated singularity (zero in this case). << Casorati-Weierstrass theorem >> Slide 10 Casorati-Weierstrass example I Slide 11 Casorati-Weierstrass example II Slide 12 Picard's Theorem Aug 14, 2016 Slide 2 Review of Taylor series. Assumes the function is analytic over the whole disc. What if the function is not differentiable at some point? E.g. $f(z) = \frac{1}{z^2 + 4}$ is not differentiable (or even defined) at $\pm 2i$. E.g.
$f(z) = \operatorname{Log}(z)$ is not continuous, so not differentiable, on (along) $(-\infty,0]$. Slide 3 Laurent series expansion: $f$ is analytic on some complex region $U$ (note: $U$ may contain some points or small regions where $f$ is not defined or would not be analytic, so these have been excluded from $U$), and there is some annulus in $U$, $\{r < |z - z_0| < R\}$, around those regions which have already been excluded from $U$ << picture here >>, then $f$ has a Laurent series expansion: $f(z) = \sum_{k=-\infty}^{\infty} a_k (z - z_0)^k$. Notice it is similar to a Taylor series but the limits of the sigma run between minus and plus infinity, therefore the series runs infinitely in "both directions": there are negative powers of $z - z_0$ as well as positive powers. This series converges at each point in the annulus … Slide 4 & 5 Partial series example << forum query posted >> Slide 2 Determining the radius of convergence for a power series: the ratio test. In earlier lectures we were told that for a power series there is a value of $R$ such that if the distance of any $z$ from the "centre" $z_0$ is less than $R$, i.e. $|z - z_0| < R$, then the power series converges, whereas if $|z - z_0| > R$ the series diverges. How do we find $R$? Theorem: if $\left|\frac{a_n}{a_{n+1}}\right|$ reaches a limit $R$ as $n$ tends to infinity, then this limit is the radius of convergence. Slide 3 Examples: $\sum_k z^k$, so $z_0 = 0$, $a_k = 1$, so $\left|\frac{a_k}{a_{k+1}}\right| = 1$, then $R = 1$ (about the point $z = 0$). $\sum_k k z^k$, so $z_0 = 0$, $a_k = k$, $\left|\frac{a_k}{a_{k+1}}\right| = \frac{k}{k+1} \to 1$, so $R = 1$. $\sum_k \frac{z^k}{k!}$, so $z_0 = 0$, $a_k = 1/k!$, $\left|\frac{a_k}{a_{k+1}}\right| = k+1 \to \infty$, therefore $R = \infty$. Slide 4 Root test << to be done >> Slide 5 Root test examples << to be done >> Slide 6 Cauchy-Hadamard Slide 7 Relationship between analytic functions and power series: if $f\colon U \to \mathbb{C}$ is analytic and $\{|z - z_0| < r\} \subseteq U$, i.e. considering the $z$ values in a disk of radius $r$ surrounding the point $z_0$ << diagram >>, then in this disk $f$ HAS a power series representation, that representation is $f(z) = \sum_{k=0}^{\infty} \frac{f^{(k)}(z_0)}{k!}(z - z_0)^k$, and its radius of convergence is at least the radius of the disk surrounding $z_0$, i.e. $R \geq r$. Slide 8 Examples of Taylor series of $\exp(z)$ about different points in the complex plane. Summary of Taylor series expansion: $f(z) = \sum_{k=0}^{\infty} a_k (z - z_0)^k$ with $a_k = \frac{f^{(k)}(z_0)}{k!}$. Note carefully the expression for the coefficient.
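The three ratio-test examples can be spot-checked by evaluating $|a_k/a_{k+1}|$ at a large index (a sketch; the true radius is the $k \to \infty$ limit):

```python
from math import factorial

def ratio(a, k):
    """|a_k / a_(k+1)| at one index k, as a finite-k stand-in for the limit."""
    return abs(a(k) / a(k + 1))

print(ratio(lambda k: 1.0, 1000))              # a_k = 1    -> 1.0, so R = 1
print(ratio(lambda k: float(k), 1000))         # a_k = k    -> k/(k+1), close to 1, so R = 1
print(ratio(lambda k: 1.0 / factorial(k), 50)) # a_k = 1/k! -> k+1, growing: R = infinity
```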
Consider $f(z) = e^z$; then all derivatives $f^{(k)}(z) = e^z$. If in the Taylor expansion we set $z_0 = 0$, then $f^{(k)}(0) = 1$ for every derivative, thus $a_k = 1/k!$ for all $k$, so $e^z = \sum_{k=0}^{\infty} \frac{z^k}{k!}$. Similarly, about $z_0 = a$: $e^z = \sum_{k=0}^{\infty} \frac{e^a}{k!}(z - a)^k$. Slide 9 Series for $\sin z$ about 0. $\sin(z)$ is analytic in $\mathbb{C}$. Then about 0: $f(z) = \sin(z)$, $f(0) = \sin(0) = 0$; $f'(z) = \cos(z)$, $f'(0) = \cos(0) = 1$; $f''(z) = -\sin(z)$, $f''(0) = -\sin(0) = 0$; $f'''(z) = -\cos(z)$, $f'''(0) = -\cos(0) = -1$; $f^{(4)}(z) = \sin(z)$, $f^{(4)}(0) = \sin(0) = 0$; so $\sin z = \sum_{k=0}^{\infty} \frac{(-1)^k z^{2k+1}}{(2k+1)!}$. Slide 10 Series for $\cos z$ about 0: differentiating term by term, $\cos z = \sum_{k=0}^{\infty} \frac{(-1)^k z^{2k}}{(2k)!}$. Slide 11 Analytic function: an analytic function is determined by all its derivatives at the centre of the disc. Aug 9, 2016 Slide 2 Definition of Taylor series: a series of the form $\sum_{k=0}^{\infty} a_k (z - z_0)^k$, centred at $z_0 \in \mathbb{C}$. Examples: $\sum_k z^k$ converges for $|z| < 1$; substituting $w$ for some expression in $z$, the series $\sum_k w^k$ converges when $|w| < 1$ and diverges when $|w| \geq 1$. Slide 3 Radius of convergence theorem *important*: let $\sum_k a_k (z - z_0)^k$ be a power series. There exists a real number $R$, $0 \leq R \leq \infty$, such that the series converges absolutely in $|z - z_0| < R$ and diverges in $|z - z_0| > R$; convergence is uniform on $|z - z_0| \leq r$ for each $r < R$. Slide 4 Examples of radius of convergence: for $\sum_k k!\, z^k$, pick an arbitrary $z \neq 0$. Note: eventually, no matter what the value of $|z|$ is, $k$ will cause the terms of the series to be increasing in size (once $k + 1 > 1/|z|$, the term-to-term ratio $(k+1)|z|$ exceeds 1) and the series does not converge. In contrast: $\sum_k \frac{z^k}{k!}$. Similarly to above pick an arbitrary $z$; then when $k$ reaches the value $2|z|$, beyond this value of $k$ all subsequent terms shrink faster than powers of $1/2$, so the series will converge. Since this is always true, the series has infinite radius of convergence. Slide 5 Analyticity of power series. Theorem: for the power series with radius of convergence $R > 0$, the claim is $f(z) = \sum_{k=0}^{\infty} a_k (z - z_0)^k$ is analytic in $|z - z_0| < R$, and since it is analytic it can be differentiated (infinitely) as follows (note the change of limits): $f'(z) = \sum_{k=1}^{\infty} k\, a_k (z - z_0)^{k-1}$, $f''(z) = \sum_{k=2}^{\infty} k(k-1)\, a_k (z - z_0)^{k-2}$, and eventually $f^{(n)}(z_0) = n!\, a_n$ (since it makes sense to define $0! = 1$).
Then by rearranging we get an expression for every coefficient in the original power series: $a_k = \frac{f^{(k)}(z_0)}{k!}$. Slide 6 Differentiating term by term: because $\sum_{k=0}^{\infty} z^k = \frac{1}{1-z}$ with radius of convergence 1, the sum is analytic when $|z| < 1$, so we can differentiate term by term and end up with $\sum_{k=1}^{\infty} k z^{k-1} = \frac{1}{(1-z)^2}$. Slide 7 Integrating term by term: similarly, with certain conditions (to be added), we can integrate an infinite series term by term. Slide 8 Integration example to get the series for $\operatorname{Log}(z)$. Justification: $\frac{1}{1+w} = \sum_{k=0}^{\infty} (-1)^k w^k$ when $|w| < 1$; integrating term by term and fiddling with the variable ($w = z - 1$) we can get: $\operatorname{Log}(z) = \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k}(z-1)^k$ for $|z - 1| < 1$, a power series for Log. Aug 8, 2016 Slide 2 Definition: $\sum_{k=0}^{\infty} a_k$, which converges if the sequence of partial sums $S_n = \sum_{k=0}^{n} a_k$ converges. Slide 3 Example: $\sum_{k=0}^{n} z^k = \frac{1 - z^{n+1}}{1 - z}$ (formula in the usual way). Since $z^{n+1} \to 0$ as long as $|z| < 1$, then $\sum_{k=0}^{\infty} z^k = \frac{1}{1-z}$ for $|z| < 1$. Slide 4 Convergence and divergence. Theorem: if a series converges, then $a_k \to 0$ (*). The converse (swap the propositions), if $a_k \to 0$ then the sum converges, is not necessarily true: the classic example is the harmonic series. The contrapositive of (*), if $a_k \not\to 0$ then the series diverges, of course is true. For $\sum_k z^k = \frac{1}{1-z}$, this diverges when $|z| \geq 1$. Slide 5 Real and imaginary parts of a series: $z = x + iy$, so the series splits into real and imaginary parts << more, separate real and imaginary parts >>. Slide 6 Another example of convergence, part I. Does $\sum_{k=1}^{\infty} \frac{i^k}{k}$ converge? If we take the modulus of the terms, we get the harmonic series, which does not converge. What about the original series on this slide? Split into real and imaginary parts. Slide 7 Another example of convergence, part II. Preliminaries: $k$ even, $k = 2m$: $i^{2m} = (-1)^m$ (a real number). Similarly, $k$ odd, $k = 2m+1$: $i^{2m+1} = (-1)^m i$ (a purely imaginary number). So $\sum_{k=1}^{\infty} \frac{i^k}{k} = \sum_{m=1}^{\infty} \frac{(-1)^m}{2m} + i \sum_{m=0}^{\infty} \frac{(-1)^m}{2m+1}$ (note the limits). Using the preliminary results and simplifying: the first sum is (up to sign) the alternating harmonic series, which converges (can justify by looking at intervals on the real line), and (probably) because the terms of the second series are getting smaller and the series is alternating, this also converges. Slide 8 Absolute convergence. Definition:
$\sum a_k$ converges absolutely if the series $\sum |a_k|$ converges. << examples >> If $\sum a_k$ converges absolutely then it converges, and $\left|\sum a_k\right| \leq \sum |a_k|$. Slide 9 Example of the absolute convergence inequality << to do >>
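The two convergence examples in the last slides can be checked numerically. A sketch: the closed form $\sum_{k\ge 1} i^k/k = -\operatorname{Log}(1-i)$ used below is an aside, obtained from the Log series with $z - 1 = -i$:

```python
import numpy as np

# Geometric series: partial sums of z^k approach 1/(1-z) when |z| < 1 (slide 3).
z = 0.5 + 0.3j
S = np.sum(z ** np.arange(200))
print(abs(S - 1.0 / (1.0 - z)))        # essentially 0

# Sum of i^k / k (slides 6-7): converges, though not absolutely; its value is
# -Log(1 - i) = -(1/2) ln 2 + i*pi/4. Partial sums approach it slowly (~1/n).
k = np.arange(1, 200001)
T = np.sum(1j ** k / k)
print(T, -np.log(1 - 1j))
```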
Let $B \subset \mathbb R^2$ be the unit ball and $T>0.$ Let $u \in W^{2,1}_p(B \times [0,T]),$ that is $u \in L^p(B \times [0,T])$ and we also have, $$ \partial_t u, \nabla u, \nabla^2 u \in L^p(B \times [0,T]). $$ Here $\nabla$ denotes differentiation in the spatial direction only. I am looking for a proof of the following result: If $p>4,$ then $\nabla u \in C^{\alpha,\alpha/2}(\overline B \times [0,T])$ and there is $C_p > 0$ such that $$ \sup_{\overline B \times [0,T]} |\nabla u| + \sup_{(x,t) \neq (y,s) \in \overline B \times [0,T]} \frac{|\nabla u(x,t) - \nabla u(y,s)|}{|x-y|^{\alpha} + |t-s|^{\alpha/2}} \leq C_p \left( \lVert u \rVert_{L^p(B \times [0,T])} + \lVert \nabla u \rVert_{L^p(B \times [0,T])} + \lVert \nabla^2 u \rVert_{L^p(B \times [0,T])} + \lVert \partial_t u \rVert_{L^p(B \times [0,T])} \right), $$ or in more compact (but possibly non-standard) notation $$\lVert \nabla u \rVert_{C^{\alpha,\alpha/2}(\overline B \times [0,T])} \leq C_p \lVert u \rVert_{W^{2,1}_p(B \times [0,T])},$$ where $\alpha = \left(1-\frac4p\right).$ This result was stated without a proof or reference as Lemma 3.1 in the paper "The existence of heat flow of $H$-systems" by Chen and Levine. I presume it's well known, but I've been unable to find a reference for it. Some thoughts: My initial idea was to try to adapt one of the proofs of the usual Morrey-Sobolev embedding $W^{1,p} \hookrightarrow C^{1-n/p}$ separately in $x$ and $t,$ perhaps by breaking it up as,$$ |\nabla u(x,t) - \nabla u(y,s)| \leq |\nabla u(x,t) - \nabla u(x,s)| + |\nabla u(x,s) - \nabla u(y,s)|. $$The exponent $\alpha$ suggests we apply Sobolev embedding in the $x$ variable with exponent $p/2,$ but it's not clear how to do this uniformly in $t.$ Moreover this naive approach obviously fails because we only have information about $\partial_t u$ and not its gradient. So some interpolation argument would be needed, which is where I'm stuck.
In complex analysis, one usually uses the term Jordan domain for a domain whose boundary is a simple closed curve. The function $f(z) = az+b$, initially defined on $\partial \Omega$, admits a holomorphic extension to $\Omega\setminus \overline{D}$, also given by the formula $az+b$. I think the main question here is whether this is the only extension. In other words, if $g$ is holomorphic in $\Omega\setminus \overline{D}$, continuous on $\overline{\Omega}\setminus \overline{D}$, and satisfies $g(z)=az+b$ for $z\in\partial \Omega$, does it follow that $g(z)=az+b$ in $\Omega\setminus \overline{D}$? The answer is yes. Proof. Let $h(z)=g(z)-az-b$. Our goal is to show that $h$ is identically zero. Let $F$ be a conformal map $F:B\to\Omega$, where $B$ is the unit disk. The composition $\tilde h= h\circ F$ is holomorphic in the doubly-connected domain $B\setminus F^{-1}(\overline{D})$, and $\tilde h(z)\to 0$ as $|z|\to 1$. The domain $B\setminus F^{-1}(\overline{D})$ contains an annulus $\{z:r<|z|<1\}$ for some $r<1$. The function $\tilde h$ is represented by its Laurent series $\sum c_n z^n$ in this annulus. For every $n$ and for every $\rho\in (r,1)$ we have $$c_n=\frac{1}{2\pi i}\int_{|z|=\rho} z^{-n-1}\tilde h(z)\,dz \tag1$$Letting $\rho\to 1$ in (1) yields $c_n=0$. Thus, $\tilde h$ is identically zero in the annulus $\{z:r<|z|<1\}$. By the identity theorem for holomorphic functions, it is identically zero in $B\setminus F^{-1}(\overline{D})$.
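Formula (1) can be sanity-checked numerically on a function whose Laurent coefficients are known, e.g. $h(z) = 1/z + 3z$ (a sketch unrelated to the specific $\tilde h$ above):

```python
import numpy as np

# Recover Laurent coefficients of h(z) = 1/z + 3z from
# c_n = (1/(2*pi*i)) * integral over |z| = rho of z^(-n-1) h(z) dz.
h = lambda w: 1.0 / w + 3.0 * w
rho, m = 0.7, 4096
t = np.arange(m) * (2 * np.pi / m)
z = rho * np.exp(1j * t)
dz = 1j * z                            # dz/dt for the parameterised circle

def c(n):
    # Uniform sampling integrates trigonometric polynomials exactly.
    return np.sum(z ** (-n - 1) * h(z) * dz) * (2 * np.pi / m) / (2j * np.pi)

print(c(-1).real, c(1).real, abs(c(0)))   # 1.0, 3.0, ~0
```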
I've a question regarding the Hodge star operator. I'm completely new to the notion of exterior derivatives and wedge products. I had to teach it to myself over the past couple of days, so I hope my question isn't trivial. I've found the following formulas on the internet, which seem to match the definitions of the two books (Carroll and Baez & Muniain) that I own. For a general $p$-form on a $n$-dimensional manifold: \begin{equation} v=\frac{1}{p!} v_{i_1 \ldots i_p} \mathrm{d}x^{i_1} \wedge \cdots \wedge \mathrm{d}x^{i_p} \end{equation} the Hodge operator is defined to act on the basis of the $p$-form as follows: \begin{equation} *\left( \mathrm{d}x^{i_1} \wedge \cdots \wedge \mathrm{d}x^{i_p} \right) = \frac{1}{q!} \tilde{ \varepsilon}_{j_1,\ldots,j_q}^{i_1,\ldots,i_p} \mathrm{d} x^{j_1} \wedge \cdots \wedge \mathrm{d} x^{j_q} \end{equation} where $q=n-p$ and $\tilde{ \varepsilon}$ is the Levi-Civita tensor. Up until here everything is fine; I managed to do some exercises and get the right answers. However, actually trying to calculate with the curvature causes me some problems. To give a bit of background: I'm working with a curvature in a Yang-Mills theory in spherical coordinates $(r, \theta, \varphi)$. Using gauge transformations, I've gotten rid of time dependence, $r$ dependence and $\theta$ dependence. Therefore, the curvature is given by: \begin{equation} F = \partial_\theta A_{ \varphi} \; \mathrm{d}\theta \wedge \mathrm{d} \varphi \end{equation} Applying the Hodge operator according to the formula above gives: \begin{equation} * \left(\mathrm{d} \theta \wedge \mathrm{d} \varphi\right) = \frac{1}{(3-2)!} \tilde \varepsilon^{\theta \varphi}_r \mathrm{d}r=\mathrm{d}r \end{equation} such that: \begin{equation} *F = (\partial_\theta A_{ \varphi}) \mathrm{d} r \end{equation} However, three different sources give a different formula.
Specifically they give: \begin{equation} *F = (\partial_\theta A_{ \varphi}) \frac{1}{r^2 \sin \theta} \mathrm{d} r \end{equation} It is not clear to me where they get this from. Something is mentioned about the fact that the natural volume form is $\sqrt{g} \; \mathrm{d} r \wedge \mathrm{d} \varphi \wedge \mathrm{d} \theta$ with $\sqrt{g}=r^2 \sin \theta$, which I agree with. However, I do not understand why that term is incorporated in the Hodge operator. Baez and Muniain define the Hodge operator as: \begin{equation} \omega \wedge * \mu = \langle \omega , \mu \rangle \mathrm{vol} \end{equation} But I don't see how that formula is applicable to calculating the Hodge operator on the curvature. Could anybody tell me where I am going wrong, or provide me with a source where they explain this?
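For reference (standard material, not quoted from the question's sources): on a general metric the Hodge star carries a factor $\sqrt{|g|}$, and the indices of the Levi-Civita symbol are raised with the inverse metric; this is exactly where the $1/(r^2 \sin\theta)$ comes from:

```latex
*\left(\mathrm{d}x^{i_1}\wedge\cdots\wedge\mathrm{d}x^{i_p}\right)
  = \frac{\sqrt{|g|}}{q!}\, g^{i_1 k_1}\cdots g^{i_p k_p}\,
    \varepsilon_{k_1\ldots k_p\, j_1\ldots j_q}\,
    \mathrm{d}x^{j_1}\wedge\cdots\wedge\mathrm{d}x^{j_q}
% For the flat metric in spherical coordinates, g = diag(1, r^2, r^2 sin^2(theta)):
*\left(\mathrm{d}\theta\wedge\mathrm{d}\varphi\right)
  = \sqrt{|g|}\; g^{\theta\theta} g^{\varphi\varphi}\,
    \varepsilon_{\theta\varphi r}\,\mathrm{d}r
  = r^2\sin\theta\cdot\frac{1}{r^2}\cdot\frac{1}{r^2\sin^2\theta}\,\mathrm{d}r
  = \frac{1}{r^2\sin\theta}\,\mathrm{d}r
```

The flat-space formula quoted in the question is the special case $g = \mathrm{diag}(1,\ldots,1)$, where the metric factors all equal 1.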
There are several themes in Huygens' unpublished paper De motu corporum ex percussione ("On the motion of bodies out of collisions"), but maybe the most significant is that he frequently investigates a specific or extreme case (where some factor is zero, one, or infinity) first, and then guesses about a general case which might have those at its boundary. Let's call this his "specific-first" mindset and go into his argument. Prelude: elastic collisions between equal balls Huygens starts from a basic Galilean perspective where bodies in motion continue in uniform motion in a straight line unless impeded. His problem is, he doesn't necessarily know the best way to phrase what makes a collision elastic. So he starts specific-first: if two balls are identical, then by symmetry, if they both come in towards a center of collision with speed $s$, then in a rigid collision they should both leave with speed $s$. To generalize this, he adds what we now call the "freedom of choice of reference frame" or the "principle of relativity": it says that the way physics works on a smooth train/boat ride is the same as it works on the ground, no matter how fast the train/boat is going. He goes specific-first: assume one of the identical balls $Y$ is stationary and the other $X$ comes in from the left at velocity $v_X = u$; then I can use relativity (add $-u/2$ to all velocities) to prove that after the collision $v_X = 0$ and $v_Y=u.$ He then generalizes even more: the proper approach is to consider a boat which sails smoothly from the halfway point between $X$ and $Y$ to the collision point between them: such a boat has speed $v_B = (v_X + v_Y)/2$ and the two velocities are $v_{X,Y} = v_B \pm \delta$, and the collision trades $\pm \leftrightarrow \mp$, so he comes to his conclusion that after any collision of identical balls, they swap speeds (and, of course, the velocities reverse direction). What about non-identical balls?
So he has handled a very specific case first, but Huygens now wants to generalize his definition of "rigid collision." It's no good to him if that definition only handles identical bodies. But he actually generalizes in a specific-first way, too! He starts with the case of a big unequal ball hitting a small ball at rest, and he says "I won't say exactly what happens, but surely the smaller ball starts moving forward, and the bigger ball moves forward less." It's important to understand that this is related to what has just been derived. We just saw that if the "bigger" ball were only infinitesimally bigger, it would approximately stop. So by making it larger, it presumably must keep going forward. The limiting case on the other side, that if the smaller ball were infinitesimally small, the bigger ball should continue in uniform motion, also means that the bigger ball never goes faster than it started from this sort of collision. Huygens uses the principle of relativity to derive the opposite limiting case: If one takes this entire picture $X \gg Y, ~(V_X = u,~V_Y=0)\mapsto (u_-, u_+), u_- < u < u_+$ and shifts it by $-u$ then one finds a picture where $V_X$ starts at $0$ and then becomes some negative number $u_- - u$ while $V_Y$ starts at $-u$ and becomes some positive number $u_+ - u$, ricocheting forwards. Huygens doesn't actually talk about the ricochet all that much, but I think that's because the ricochet was already the best previous prediction, due to Descartes: Fourth, if the body C were entirely at rest,…and if C were slightly larger than B; the latter could never have the force to move C, no matter how great the speed at which B might approach C. Rather, B would be driven back by C in the opposite direction: because…a body which is at rest puts up more resistance to high speed than to low speed; and this increases in proportion to the differences in the speeds. Consequently, there would always be more force in C to resist than in B to drive, …. 
(Descartes, as translated in SEP) Huygens goes on to clearly call out that he is refuting Descartes: if a big object always moves slower when it impacts a small stationary object, these Galilean principles demand that the big object is always moved by any collision from a small impacting object. Huygens is saying "no, the big object emphatically does not stand still!" and that is much more important to him and the physics of his day, than saying that the small object must travel backwards, which was already obvious. The broader generalization to unequal balls So Huygens has attacked the problem specific-first and has found these two speeds $u_- < u < u_+$, and he knows at least one limiting case, where the mass ratio is $m_Y/m_X = 1$, where $u_-=0$ while $u_+ = u.$ He presumably already has an idea of the other limiting case, where the "ball" $X$ becomes more of a "wall" as this mass ratio goes to zero: there are many reasons both experimental and theoretical to imagine that against an immovable wall ($v_X$ goes from $0$ to $0$) a rigid head-on collision should cause a ball to ricochet with the same speed that it impacted ($v_Y$ goes from $-u$ to $u$). And he postulates that the general law is therefore as follows: if you can find a frame of reference (let's call it a "central" frame) where one object enters with velocity $+u$ and then leaves with velocity $-u$, the other object must also see its speed unchanged. This is his original formulation of the conservation of energy, that a central frame for $X$ is, in a rigid collision, also a central frame for $Y$. Huygens proceeds to come up with an alternative formulation of that condition: the relative velocity of the two bodies is the same before and after the collision.
In this central frame this is easy: they go from relative velocity $-|v_X| - |v_Y|$ to relative velocity $|v_X| + |v_Y|$ and those are only different by a sign difference; furthermore any transformation to any other frame preserves relative velocities. He also derives the other limiting case where the mass ratio is zero: if a massive $X$ hits a near-massless $Y$ then to preserve the relative velocity when $v_X$ hardly changes, $v_Y$ must take on the limiting value $2v_X.$ Adding masses into the picture Up until now, Huygens has been reluctant to place mass directly into his reasoning and that is quite understandable: he wants his arguments to hold independently of Galileo's observations on gravity and how things fall and how scales work. But he insists on connecting his mathematics to Galileo's work eventually and it comes to this argument: that this center-of-mass frame amounts to taking the weight of the objects and multiplying it times their speeds and setting them to be equal. He phrases it as "if two bodies whose speeds are inversely proportional to their magnitudes collide with each other, then each rebounds with the same speed which it had before the collision." So he knows what a "center of gravity" is and he is stating precisely that this central frame for the collision is one where the center of gravity is stationary at the point of collision. To make the connection, he observes that Galileo showed that falling things fall a distance proportional to time-squared, meaning that their final velocity increases proportional to time. Huygens says that this means that the distance is proportional to velocity squared, and then he says "oh, and I can use this in reverse, too, because everybody knows that the velocity that something gets as it comes down is exactly what it needs to go back up." His argument is geometrical, so it is a bit clumsy to state in his language; I will rephrase it algebraically.
So he says "let's say at time $t=-T$ you release these two masses from rest from different heights, you put them into some sort of curve like Galileo did so that they change directions to horizontal without changing speed. They collide at $t=0$ and bounce back, and they return to different heights. We choose the original velocities according to this balance principle, so let me define the mass ratios $\mu_i = m_i/(m_1 + m_2)$ and we have $v_1 = u/\mu_1$ and $v_2 = -u/\mu_2$ for some number (with dimensions of speed) $u$. Finally, freeze each one when it attains its maximum. Then if they rebound with speeds $s_{1,2}$ they attain heights $s_1^2/(2a)$ and $s_2^2/(2a)$ when we stop them in midair, and their center of gravity thus attains a height proportional to $\mu_1 s_1^2 + \mu_2 s_2^2$ subject to the constraint that $s_1 + s_2 = u\cdot(1/\mu_1 + 1/\mu_2).$" From here he observes, essentially, that this maximum center-of-gravity height is in turn minimized by the choice $s_1 = u / \mu_1, s_2 = u/\mu_2:$ any other choice of $s_{1,2}$ leads to a higher center of gravity. Why would this matter? Because in the argument Huygens gives, he states that it is impossible for this collision apparatus to return the center-of-gravity to a greater height than it started. So if we start from the minimum, and cannot return higher, we must return back to the height that we came, and the velocities must be the same out as in. (We would today say that if there are two configurations with no kinetic energy and there is no external energy input, then gravity is a conservative force and thus the height of the center of mass must be strictly lower as the only possibilities are for frictional loss terms, if that helps.)
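The minimisation step can be verified numerically, with assumed masses $m_1 = 2$, $m_2 = 1$ and $u = 1$ (a sketch of the algebra, not Huygens' geometry):

```python
import numpy as np

# Minimise mu1*s1^2 + mu2*s2^2 subject to s1 + s2 = u*(1/mu1 + 1/mu2);
# the claim is that the minimum sits at s_i = u/mu_i.
m1, m2, u = 2.0, 1.0, 1.0
mu1, mu2 = m1 / (m1 + m2), m2 / (m1 + m2)
C = u * (1.0 / mu1 + 1.0 / mu2)        # constrained total of the rebound speeds

def height(s1):
    s2 = C - s1                        # s2 is fixed by the constraint
    return mu1 * s1**2 + mu2 * s2**2   # proportional to center-of-gravity height

s1_grid = np.linspace(0.0, C, 100001)
best = s1_grid[np.argmin(height(s1_grid))]
print(best, u / mu1)                   # both close to 1.5
```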
So the assertion Huygens is making is that, by his day, it is known that when the only external force that is involved is gravity (presumably combined with the forces to fix an object at its maximum height and to change its direction without changing its speed), there is no way for gravity to increase the height of a center of gravity that starts at rest. Since any other allocation of $s_{1,2}$ would increase it, it follows that if we start with the minimum, only the minimizing allocation $s_{1,2} = u/\mu_{1,2}$ works. Finally deriving conservation of kinetic energy So Huygens has now identified the "same speeds" frame with the center-of-mass frame, and he gives a rather complicated geometric argument which I think I can replace (like, I'm not even going to try to match his argument) as follows: we first look at a collision in the center-of-mass frame where we have determined that the initial velocities are $v_1 = u/\mu_1, v_2 = -u/\mu_2$, and the final velocities are $V_1 = -u/\mu_1, V_2 = +u/\mu_2.$ We can see lots of things that are invariant in this frame, for example $a_1 v_1^2 + a_2 v_2^2 = a_1 V_1^2 + a_2 V_2^2$ for any $a_{1,2}$ because it's true if $a_1 = 0, a_2\ne 0$ and vice versa. The squaring is important for getting the negative to disappear but otherwise has no deeper purpose here. But remember that Huygens is interested in the principle of relativity, so he is interested in adding a constant velocity $c$ to all of these numbers. 
When we do that in the case of these square-sums we'll find $$a_1 (v_1 + c)^2 + a_2 (v_2 + c)^2 = a_1 v_1^2 + a_2 v_2^2 + 2 (a_1 v_1 + a_2 v_2) c + (a_1 + a_2) c^2.$$ Now if we equate this to $a_1 (V_1 + c)^2 + a_2 (V_2 + c)^2$ to find a quantity that's the same before and after, no matter what $c$ is, we find that the leading and trailing terms vanish and the middle term we can divide out by $2c$, leaving just $$a_1 v_1 + a_2 v_2 = a_1 V_1 + a_2 V_2.$$There is a problem here, though: the right-hand side is the negative of the left-hand side. Thus they can only be equal if they both equal zero, and $a_1 (u/\mu_1) - a_2 (u/\mu_2) = 0$ leads naturally to choosing $a_{1,2} \propto m_{1,2}.$ So our conclusion is that the sum of $mv^2$ must be the same before and after.
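A compact numeric check of the conclusion, with hypothetical numbers; the "reverse in the center-of-gravity frame" rule is the one derived above:

```python
# Elastic collision via Huygens' rule: in the frame of the center of gravity,
# each velocity simply reverses sign.
def elastic(m1, v1, m2, v2):
    vcm = (m1 * v1 + m2 * v2) / (m1 + m2)
    return 2 * vcm - v1, 2 * vcm - v2

m1, v1, m2, v2 = 3.0, 2.0, 1.0, -1.0
V1, V2 = elastic(m1, v1, m2, v2)
print(V1, V2)   # 0.5 3.5

# Sum of m*v, sum of m*v^2, and the relative velocity (up to sign) are all preserved.
print(m1 * v1 + m2 * v2, m1 * V1 + m2 * V2)              # 5.0 5.0
print(m1 * v1**2 + m2 * v2**2, m1 * V1**2 + m2 * V2**2)  # 13.0 13.0
```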
This is false for $n\geq 4$. Consider the Grassmannian $\mathrm{Gr}(2,n)$ of all two-dimensional subspaces of $\mathbb{R}^n$, and recall that $\mathrm{Gr}(2,n)$ is a compact manifold of dimension $2n-4$. For each $\varphi\in\mathrm{GL}_n(\mathbb{Q})$, let$$S_\varphi = \{A\in \mathrm{Gr}(2,n) \mid \varphi u=v \text{ for some linearly independent }u,v\in A\}.$$By the Baire category theorem, $\mathrm{Gr}(2,n)$ cannot be expressed as a union of countably many closed, nowhere dense sets. Therefore, it suffices to prove that each $S_\varphi$ is closed and nowhere dense in $\mathrm{Gr}(2,n)$. To that end, we decompose $S_\varphi$ as a disjoint union $T_\varphi \uplus U_\varphi$, where $T_\varphi$ is the set of all $A\in S_\varphi$ for which $\varphi(A) \ne A$, and $U_\varphi$ is the set of all $A\in S_\varphi$ for which $\varphi(A) = A$. It suffices to prove that $T_\varphi$ and $U_\varphi$ are closed and nowhere dense in $\mathrm{Gr}(2,n)$. Claim. $T_\varphi$ is either empty or is a submanifold of $\mathrm{Gr}(2,n)$ of dimension $n-1$. Proof: Suppose $T_\varphi$ is nonempty. If $A\in T_\varphi$, then $A\cap\varphi^{-1}(A)$ is a one-dimensional subspace of $A$, and this contains exactly one pair $\{u,-u\}$ of unit vectors. Such a $u$ has the property that $u,\varphi u\in A$ and $\{u,\varphi u,\varphi^2 u\}$ are linearly independent. Let$$\widetilde{T_\varphi} = \{u\in \mathbb{R}^n : \|u\|=1\text{ and }u,\varphi u,\varphi^2 u\text{ are linearly independent}\}.$$Then $\widetilde{T_\varphi}$ is an open subset of the unit $(n-1)$-sphere in $\mathbb{R}^n$ and the map $p\colon \widetilde{T_\varphi}\to T_\varphi$ defined by $p(u) = \mathrm{Span}\{u,\varphi u\}$ is a degree two covering map, which proves the claim. $\square$ Since $n-1 < 2n-4$ for $n\geq 4$, this gives us the following. Corollary. $T_\varphi$ is closed and nowhere dense in $\mathrm{Gr}(2,n)$ as long as $n\geq 4$. Claim. 
$U_\varphi$ is a union of finitely many submanifolds of $\mathrm{Gr}(2,n)$, all of dimension at most $n-2$. Proof: We separate the possible $A\in U_\varphi$ into three types, based on the eigenvalues of the restriction of $\varphi$ to $A$: The restriction of $\varphi$ to $A$ has two distinct real eigenvalues $\lambda,\mu$. The restriction of $\varphi$ to $A$ has one real eigenvalue $\lambda$ and is not diagonalizable. The restriction of $\varphi$ to $A$ has two complex eigenvalues $\lambda,\overline{\lambda}$. In each case the eigenvalues of the restriction must also be eigenvalues of $\varphi$, of which there are only finitely many. Our strategy is to analyze the set of all $A$ of a given type corresponding to a given eigenvalue or pair of eigenvalues. For type (1), let $\lambda$ and $\mu$ be distinct real eigenvalues of $\varphi$, and let $E_\lambda$ and $E_\mu$ be the corresponding eigenspaces. Then any $A$ corresponding to $\lambda$ and $\mu$ can be written uniquely as the sum of a one-dimensional subspace of $E_\lambda$ and a one-dimensional subspace of $E_\mu$. If $\dim(E_\lambda) = d_\lambda$ and $\dim(E_\mu) = d_\mu$, then the set of all such $A$ is homeomorphic to $\mathrm{Gr}(1,d_\lambda) \times \mathrm{Gr}(1,d_\mu)$, which is a manifold of dimension $d_\lambda+d_\mu - 2$. In particular, since $d_\lambda+d_\mu \leq n$, the set of all such $A$ for a given pair $\lambda,\mu$ is a submanifold of $\mathrm{Gr}(2,n)$ of dimension at most $n-2$. For type (2), let $\lambda$ be a real eigenvalue of $\varphi$ with higher algebraic multiplicity than geometric multiplicity. Let $E_\lambda$ be the eigenspace for $\lambda$ and let $E_\lambda'$ be the nullspace of $(\varphi-\lambda I)^2$. Then any $A$ of type (2) corresponding to $\lambda$ has one-dimensional image in $E_\lambda'/E_\lambda$ and is entirely determined by this image. 
If $\dim(E_\lambda) = d_\lambda$ and $\dim(E_\lambda') = d_\lambda'$, then the set of all such $A$ is homeomorphic to $\mathrm{Gr}(1,d_\lambda'-d_\lambda)$, which is a manifold of dimension $d_\lambda'-d_\lambda - 1$. In particular, since $d_\lambda'-d_\lambda \leq n-1$, the set of all such $A$ for a given $\lambda$ is a submanifold of $\mathrm{Gr}(2,n)$ of dimension at most $n-2$. For type (3), let $\lambda$ be a complex eigenvalue of $\varphi$, and let $E_\lambda$ be the eigenspace for $\lambda$ in $\mathbb{C}^n$. Then any $A$ of type (3) corresponding to $\lambda$ is obtained by taking a subspace of $E_\lambda$ of complex dimension one and taking the real part of each vector. If $\dim_{\mathbb{C}}(E_\lambda) = d_\lambda$, then the set of all such $A$ is homeomorphic to the complex Grassmannian $\mathrm{Gr}_{\mathbb{C}}(1,d_\lambda)$, which is a manifold of real dimension $2d_\lambda-2$. In particular, since $2d_\lambda \leq n$, the set of all such $A$ for a given $\lambda$ is a submanifold of $\mathrm{Gr}(2,n)$ of dimension at most $n-2$. $\square$ Corollary. $U_\varphi$ is closed and nowhere dense in $\mathrm{Gr}(2,n)$ for all $n\geq 3$. Incidentally, what's going on here from an algebraic perspective should be roughly that each $S_\varphi$ is an algebraic subvariety of $\mathrm{Gr}(2,n)$ of dimension $n-1$, with $T_\varphi$ being the set of regular points of $S_\varphi$ and $U_\varphi$ being its set of singular points, but we don't need to know any of that to provide a topological proof that it's nowhere dense in $\mathrm{Gr}(2,n)$.
I’ve been studying the KKR method from the original Kohn and Rostoker paper (https://journals.aps.org/pr/abstract/10.1103/PhysRev.94.1111). In the text, they use variational calculus to deal with an integral equation. Namely, the paper states that the integral equation (2.14): $\psi(\vec{r})=\int G(\vec{r},\vec{r}_0)V(\vec{r}_0)\psi(\vec{r}_0)d^3r_0$ is equivalent to: $\delta\Lambda=0$ where: $\Lambda=\int \psi^*(\vec{r})V(\vec{r})\psi(\vec{r})d^3r-\int \psi^*(\vec{r})V(\vec{r})G(\vec{r},\vec{r}_0)V(\vec{r}_0)\psi(\vec{r}_0)d^3rd^3r_0$ However, my understanding of variational techniques is that you are supposed to minimize (or extremize) an action integral of a Lagrangian, and that the path which extremizes the action is a solution of an Euler-Lagrange equation. But I’m not able to see how this applies to an integral equation like this, or how the expression for $\Lambda$ appears. The fact that the authors proceed to extremize $\Lambda$ confuses me even more, since some textbooks call this function the “Lagrangian”. How does the variational method apply in situations like this?
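For orientation, here is a sketch (mine, not from the paper) of the usual way the equivalence is seen, treating $\psi$ and $\psi^*$ as independent variational functions:

```latex
% Vary \Lambda with respect to \psi^*, holding \psi fixed:
\delta\Lambda
  = \int \delta\psi^*(\vec r)\, V(\vec r)
    \Big[ \psi(\vec r)
        - \int G(\vec r,\vec r_0)\, V(\vec r_0)\, \psi(\vec r_0)\, d^3 r_0
    \Big]\, d^3 r
% Since \delta\psi^* is arbitrary, \delta\Lambda = 0 forces the bracket to
% vanish wherever V(\vec r) \neq 0, which is precisely equation (2.14).
```

So no Euler-Lagrange machinery is needed here: $\Lambda$ is not the integral of a local Lagrangian density, and imposing stationarity of the functional directly reproduces the integral equation wherever $V \neq 0$.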
I have this expression: $3m^4-6m^3+14m^2-6m+11=0$ and I want to factorize it as $(m^2+1)(3m^2-6m+11)$. How can I do it? Thanks for any help! Hint $\ $ Because the leading and constant coefficients are prime, the possible factors are highly constrained, so we can quickly find quadratic factors by undetermined coefficients. First, notice $\rm\:mod\ 3\!:\ -f \equiv x^2+1,\:$ so we check for a factor of the form $\rm\: x^2\! +\, 3a\, x + 1.\:$ Its cofactor must have leading coefficient $3$ and constant coefficient $11$, i.e. $$\rm\begin{eqnarray} (x^2+3a\,x+1)(3\,x^2\!+b\,x+11) &\,=\:&\rm 3\,x^4 + (b\!+\!9a)\, x^3 + (14\!+\!3ab)\, x^2 + (b\!+\!33a)\,x + 11 \\ &=&\rm 3\, x^4 - 6\, x^3 + 14\, x^2 - 6\,x + 11\end{eqnarray}$$ Comparing $\rm\:x^2$ terms, $\rm\:14=14\!+\!3ab\Rightarrow ab=0\:$ so $\rm\,a=0\,$ or $\rm\,b=0.\:$ If $\rm\:b=0\:$ then comparing $\rm\,x^3$ terms, $\rm\:9a=-6,\,$ contra $\rm\:a\in \Bbb Z.\:$ Thus $\rm\:a=0.\:$ Comparing $\rm\,x^3$ terms, $\rm\:b = -6,\:$ which works. $3m^4-6m^3+14m^2-6m+11$ $=3m^4-6m^3+11m^2+3m^2-6m+11$ $=m^2(3m^2-6m+11)+(3m^2-6m+11)$ $=(m^2+1)(3m^2-6m+11)$ Let $f$ be the polynomial. If you know about complex numbers, you can compute $f(i) = 0$, showing you that $f$ is divisible by $(m - i)$. Since the coefficients are real, $-i$ is also a root, and so $f$ is divisible by $(m + i)$, too. Therefore $f$ is divisible by $(m - i)(m + i) = m^2 + 1$. Polynomial long division now gives you the factorization. EDIT: How do you see $f(i) = 0$? If $f = \sum_{i=0}^d a_i X^i$ with real coefficients $a_i$, this is quite easy to check: $f(i) = 0$ if and only if the two alternating sums $$a_0 - a_2 + a_4 - a_6 \pm \ldots\quad\text{and}\quad a_1 - a_3 + a_5 - a_7 \pm \ldots$$ both equal zero. In this example, $11-14 + 3 = 0$ and $(-6)-(-6) = 0$, so $f(i) = 0$. You already factorized $3m^4 −6m^3 +14m^2 −6m+11$ as $(m^2 +1)(3m^2 −6m+11)$; the first factor is $(m^2 +1)$ and the second is $(3m^2 −6m+11)$.
$(m^2 +1)(3m^2 −6m+11) = 3m^4-6m^3+11m^2+3m^2-6m+11 = 3m^4-6m^3+14m^2-6m+11$ Note: If you need to solve $3m^4 −6m^3 +14m^2 −6m+11 = 0$ using the factorization $(m^2 +1)(3m^2 −6m+11)$, that is a slightly different problem. $(m^2 +1)(3m^2 −6m+11) = 0$ is true if 1.: $m^2 +1 = 0$ or 2.: $3m^2 −6m+11=0$. 1. $m^2 +1 = 0$ => $m_{1,2} = \pm\sqrt{-1} = \pm{i}$ 2. $3m^2 −6m+11=0$ => $m_{3,4} = 1\pm{2}i\sqrt{\frac{2}{3}}$ Here are the four solutions: 1. $m = i$ : $3i^4-6i^3+14i^2-6i+11 = 3+6i-14-6i+11 = 0$ 2. $m = -i$ : $3(-i)^4-6(-i)^3+14(-i)^2-6(-i)+11$ $ = 3(-1)^4i^4-6(-1)^3i^3+14(-1)^2i^2+6i+11 = 3 -6i-14+6i+11 = 0$ 3. $m = 1+{2}i\sqrt{\frac{2}{3}}$: substitute into the equation as in 1. or 2. 4. $m = 1-{2}i\sqrt{\frac{2}{3}}$: substitute into the equation as in 1. or 2.
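The expansion can be sanity-checked mechanically; a minimal plain-Python sketch (no CAS assumed), multiplying coefficient lists:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (m^2 + 1) and (3m^2 - 6m + 11), coefficients from the constant term up
f1 = [1, 0, 1]
f2 = [11, -6, 3]

product = poly_mul(f1, f2)
print(product)  # [11, -6, 14, -6, 3], i.e. 3m^4 - 6m^3 + 14m^2 - 6m + 11
assert product == [11, -6, 14, -6, 3]
```

The convolution of the coefficient lists is exactly the hand expansion done above, term by term.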
The formula for the Chi-Square test statistic is the following: $\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}$ where O is the observed data, and E is the expected. I'm curious why it depends on the absolute values. For example, if we change the units we're measuring in, we'll get a different statistic. Suppose we're performing a test on apple weights. One of the samples weighs 165 grams, and we expect it to be 182 grams; then the corresponding part of the formula is: $\frac{(165 - 182)^2}{182} \approx 1.58791$ Now suppose we're living in a country where precision is paramount. We use milligrams for everything, and we get the same results in different units: 165000 milligrams and 182000, respectively. The statistic: $\frac{(165000 - 182000)^2}{182000} \approx 1587.91$ So our conclusion will be different based on the units we used. Why? What am I missing, and why are the values not normalized in the Chi-squared test?
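To make the unit dependence in the question concrete, here is the same arithmetic in a few lines of plain Python; the closing identity shows that rescaling the units by a factor k rescales each term by k as well (a sketch of the question's own example, not a resolution of it):

```python
def chi_square_term(observed, expected):
    """One term of the chi-square statistic, (O - E)^2 / E."""
    return (observed - expected) ** 2 / expected

grams = chi_square_term(165, 182)
milligrams = chi_square_term(165_000, 182_000)

print(round(grams, 5))       # 1.58791
print(round(milligrams, 2))  # 1587.91

# Rescaling both O and E by a factor k scales the term by k:
# (k*O - k*E)^2 / (k*E) = k * (O - E)^2 / E
assert abs(milligrams - 1000 * grams) < 1e-9
```

The scaling identity is exact, not a rounding artifact, which is precisely the behavior the question observes.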
Hi, I was trying to find the area between the following curves, however I am unsure how to continue from the trigonometry which gets presented: Curves: $$y = 2\sin(x)\\y = \cos\left(\frac{x}{2}\right)$$ I have found that the curves meet at $x=\pi$ and $x=2 \arcsin(\frac14)$. I am supposed to find the area to be $\frac94$, however this is what I got, and I don't know how to simplify: $$A = 2\cos \left( 2\arcsin\left(\frac14\right)\right) + 2\sin \left( \arcsin\left(\frac14\right)\right)$$ I tried letting $z = \arcsin(\frac14)$, meaning I get: $$A = 2\cos(2z) + \frac12$$ which then can simplify to: $$A = 2(1-2\sin^2(z))+\frac12$$ but then I have no idea. Thanks and good luck!
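For what it's worth, the target value can be checked numerically with the antiderivative $-2\cos x - 2\sin(x/2)$ of the integrand $2\sin x - \cos(x/2)$ (a stdlib-Python sketch, assuming the integration limits found above):

```python
from math import asin, sin, cos, pi

z = asin(0.25)                          # lower limit is 2z, upper limit is pi
F = lambda x: -2 * cos(x) - 2 * sin(x / 2)   # antiderivative of 2*sin(x) - cos(x/2)
area = F(pi) - F(2 * z)

# cos(2z) = 1 - 2*sin(z)**2 = 7/8 and sin(z) = 1/4,
# so area = 2*(7/8) + 2*(1/4) = 9/4
assert abs(area - 9 / 4) < 1e-12
```

The double-angle identity in the comment is the missing simplification step: with it, the last displayed expression collapses to $2\cdot\frac78 + \frac12 = \frac94$.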
You need two things: the angular velocity vector $\vec{\omega}$ at some instant, and the linear velocity vector $\vec{v}$ at some reference point. The axis of rotation passes through the point, relative to the reference point, $$ \vec{r} = \frac{\vec{\omega} \times \vec{v}}{ \| \vec{\omega} \|^2} \tag{1}$$ You can apply this at every instant. See related post1 post2 and post3. The proof is easy but it requires the use of the vector triple product identity $\vec{a} \times (\vec{b} \times \vec{c}) = \vec{b}\,(\vec{a} \cdot \vec{c}) - \vec{c}\,(\vec{a} \cdot \vec{b})$. Substitute $\vec{v} = \vec{r} \times \vec{\omega}$ above and carry out the simplifications. Simple example The center of mass of an object is moving with $\vec{v} = \pmatrix{5 & -1 &0}$ and the rotational vector is $\vec{\omega} = \pmatrix{0 & 0 & 2}$ at some instant. The point on the axis of rotation closest to the center of mass is located at $$\vec{r} = \frac{\pmatrix{0&0&2} \times \pmatrix{5&-1&0}}{ \| \pmatrix{0&0&2} \|^2 } = \frac{ \pmatrix{2 & 10 & 0} }{2^2} = \pmatrix{0.5 & 2.5 & 0}$$ To verify, find the velocity at the center of mass with $$ \vec{v} = \vec{\omega} \times (-\vec{r} ) = \vec{r} \times \vec{\omega} = \pmatrix{0.5 & 2.5 & 0} \times \pmatrix{0&0&2} = \pmatrix{5&-1&0} \;\checkmark $$ NOTE: The axis of rotation is the whole line through $\vec{r}$ parallel to $\vec{\omega}$; any point on that line may serve.
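Equation (1) and the example above can be reproduced in a few lines of plain Python (a sketch; `axis_point` is just a name for the formula, not a standard routine):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def axis_point(omega, v):
    """r = (omega x v) / |omega|^2, the axis point closest to the reference."""
    w2 = sum(c * c for c in omega)
    return tuple(c / w2 for c in cross(omega, v))

omega = (0.0, 0.0, 2.0)
v = (5.0, -1.0, 0.0)
r = axis_point(omega, v)
print(r)  # (0.5, 2.5, 0.0)

# verify: the velocity of the reference point is v = r x omega
assert cross(r, omega) == v
```

The final assertion is exactly the verification step carried out by hand in the example.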
What is the simplest physical system which can be used to model the quantum measurement of a 2 level system? For example, can the following, spin coupled to a harmonic bath, be used to model a measurement of the 2 level system's z polarisation? $$\hat{H}=\sum_{j=1}^n\frac{\hat{p}^2_j}{2}+\frac{1}{2}\omega_j^2\bigg(\hat q_j+\frac{\hat\sigma_z c_j}{\omega_j^2}\bigg)^2$$ with a spectral density $$J(\omega) = \frac{\pi}{2}\sum_{j=1}^n \frac{c_j^2}{\omega_j}\delta(\omega-\omega_j)=\frac{\eta\gamma\Omega^2\omega}{(\omega^2-\Omega^2)^2+\gamma^2\omega^2} $$ my intuition is yes (provided certain conditions on the choice of parameters in $J(\omega)$ and possibly the manner in which the infinite bath limit is approached), references to papers discussing this would be appreciated. Edit By model a quantum measurement I mean that there exists some initial pure state density operator $\hat\rho^2(0)=\hat\rho(0)=\hat\rho_s\otimes\hat\rho_b$ (where $\hat\rho_s$ and $\hat\rho_b$ are density operators in the system and bath spaces respectively), such that the long time dynamics leads to "collapse" of the spin system $$\lim_{t\to\infty}\mathrm{tr}_b[\hat\rho(t)]=|0\rangle\langle0|\mathrm{tr}[\hat\rho_s|0\rangle\langle0|]+|1\rangle\langle1|\mathrm{tr}[\hat\rho_s|1\rangle\langle1|]$$ where $\mathrm{tr}_b[\dots]$ denotes a trace over the bath degrees of freedom, $|0\rangle$ corresponds to spin up and $|1\rangle$ to spin down. Furthermore, that there exists some pointer variable e.g. in my example possibly $y=\frac{\sum_jc_jq_j}{\sqrt{\sum_jc^2_j}}$ such that at long time the reduced density matrix for the spin and pointer are also in a mixed state with (possibly perfect) correlation between the pointer state and the spin state, i.e. 
I would imagine this corresponding to something like $$\lim_{t\to\infty}\hat\rho_{sp}(t)=\lim_{t\to\infty}\mathrm{tr}_{\tilde b}[\hat\rho(t)]=\hat\rho^{(0)}_p|0\rangle\langle0|\mathrm{tr}[\hat\rho_s|0\rangle\langle0|]+\hat\rho^{(1)}_p|1\rangle\langle1|\mathrm{tr}[\hat\rho_s|1\rangle\langle1|]$$ where $\mathrm{tr}_{\tilde b}[\dots]$ denotes a trace over the bath degrees of freedom excluding the pointer, and $\hat\rho_p^{(i)}$ are operators in the pointer space, corresponding to being a sharply localised state indicating whether the spin is up or down i.e. with something like the following properties $$\mathrm{tr}[|0\rangle\langle0|\hat\rho_p^{(0)}\hat{y}]=-\frac{\sqrt{\eta}}{\Omega}$$ $$\mathrm{tr}[|1\rangle\langle1|\hat\rho_p^{(1)}\hat{y}]=\frac{\sqrt{\eta}}{\Omega}$$ $$\mathrm{tr}[|0\rangle\langle0|\hat\rho_p^{(0)}\hat{y}^2]=\mathrm{tr}[|1\rangle\langle1|\hat\rho_p^{(1)}\hat{y}^2]\approx\frac{\eta}{\Omega^2}$$ $$\hat\rho_p^{(0)}\hat\rho_p^{(1)}\approx0$$ Edit:2For example does the following constitute an idealised physical model of measurement on the z polarisation of a spin: Define the total Hamiltonian as $$\hat{H}=\sum_{j=1}^n\sum_{k=1}^{n_j}\frac{\hat{p}^2_{jk}}{2}+\frac{1}{2}\omega_{jk}^2\bigg(\hat q_{jk}+\frac{\hat\sigma_z c_{jk}}{\omega_{jk}^2}\bigg)^2$$ where $\omega_{jk}=\omega_{j}$ and $\sum_k c_{jk}^2 = c^2_j$, this choice will become clear later, and a spectral density given by $$J(\omega) = \frac{\pi}{2}\sum_{j=1}^n\sum_{k=1}^{n_j} \frac{c_{jk}^2}{\omega_{jk}}\delta(\omega-\omega_{jk})=\frac{\eta\gamma\Omega^2\omega}{(\omega^2-\Omega^2)^2+\gamma^2\omega^2} $$ with parameters chosen such that the collective coordinate $y=\frac{\sum_{j,k}c_{jk}q_{jk}}{\sqrt{\sum_{j,k}c^2_{jk}}}$ is 'classical' $\Omega\ll k_BT$, the two equilibrium positions are well separated $\eta\gg\hbar\Omega$, and for the sake of simplicity is in the moderately over-damped regime $\gamma=4\Omega$. 
Now we can consider the form of the initial density $\hat\rho(0)=\hat\rho_s\otimes\hat\rho_b$, which we define to be $$\hat\rho_b=\prod_{j=1}^{n}\prod_{k=1}^{n_j} \hat{\rho}_{jk}$$ with $\hat\rho_{jk}=|\alpha_{jk}\rangle\langle\alpha_{jk}|$ where $$ \hat{H}_{jk}|\alpha_{jk}\rangle=\bigg(\frac{\hat{p}^2_{jk}}{2}+\frac{1}{2}\omega_{jk}^2\hat q_{jk}^2\bigg)|\alpha_{jk}\rangle=E_{\alpha_{jk}}|\alpha_{jk}\rangle$$ is an eigenstate of $\hat{H}_{jk}$ with eigenvalue $E_{\alpha_{jk}}$. We can define $\rho_{jk}$ by randomly sampling each $E_{\alpha_{jk}}$ from the Boltzmann distribution at temperature $T$. Now taking the limit that each $n_j\to\infty$ followed by $n\to\infty$, it follows that the dynamics of the pointer $y$ and the 2 level system are equivalent to that of the original Hamiltonian given at the very top but with an initial density $$\hat\rho(0)=\frac{\hat\rho_s\otimes e^{-\beta \sum_{j=1}^n\frac{\hat{p}^2_j}{2}+\frac{1}{2}\omega_j^2\hat q_j^2}}{\mathrm{tr}[\hat\rho_s\otimes e^{-\beta \sum_{j=1}^n\frac{\hat{p}^2_j}{2}+\frac{1}{2}\omega_j^2\hat q_j^2}]}$$ I believe this is then a relatively straightforward problem to solve, and should lead to dynamics of the form described in the first edit. Any comments as to errors I have made would be appreciated, also feel free to ignore all but the first line of the post.
The state $\mid \psi \rangle$ is fixed. You can write it as $a |0 \rangle + b |1 \rangle$. If you write it in the other basis, then $(a,b) = (1,0)$ gives $(c,d) = (\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2})$. They are the same state, so it is still true that if you do a measurement in the $|0 \rangle$, $|1 \rangle$ basis, you will surely get $\mid 0 \rangle$ if you start with $(a,b)=(1,0)$. For shorthand, let us call the states $\frac{1}{\sqrt{2}} (|0 \rangle \pm |1 \rangle)$ simply $\mid \pm \rangle$ respectively, so we don't have to keep rewriting them. So we are in state $\frac{\sqrt{2}}{2} \mid + \rangle + \frac{\sqrt{2}}{2} \mid - \rangle$. If we measure in the $\mid + \rangle$, $\mid - \rangle$ basis, we have a $(\frac{\sqrt{2}}{2})^2=\frac{1}{2}$ probability of measuring each of $\mid + \rangle$ and $\mid - \rangle$. Your confusion is about measuring in the $\mid 0 \rangle$, $\mid 1 \rangle$ basis vs measuring in the $\mid \pm \rangle$ basis. You don't do them both at the same time. Either you pick the first, and surely get $\mid 0 \rangle$, or you do the second and get $\mid + \rangle$ or $\mid - \rangle$ with equal probabilities. They are different operations. Your original $(1,0,0,0)$ is in basis 1; then your $(1,0,1,0)$ is in basis 2. The measurement in basis 2 collapses this to $(0,0,1,0)$. Conversely, if you measure in basis 1 you get $(1,0,0,0)$ in basis 1. These are different states, but that is expected because you did different measurement operations on the starting state. You might still be thinking too classically, as if you could do lots of different passive measurements in whatever order. Edit (clarifying the confusion in the comments): The measurement is not simply reading the state as a vector. Rewriting the vector in a different basis doesn't do anything; as in linear algebra, it doesn't matter at all. The measurement in a basis is an operation you do.
It is computed by doing the rewrite first, but that is the perspective of you as an all-knowing being doing the math. You rewrite in the desired basis first because that way the projection operator in that basis is easy to write down. The measurement changes the vector, and different measurements change the vector in different ways. Measurement 0 in basis 1 will apply some projection operator $P_1$; measurement + in basis 2 will apply some other projection operator $P_2$. They are different operators, so of course you can't expect to get the same thing even if the input state was the same. If you write $P_1$ in basis 1 you get a matrix $M_1$, and if you write $P_2$ in basis 2 you get a matrix $M_2$ with those same matrix entries, but that doesn't mean that the operators were the same. If you write $P_2$ in basis 1 you will get something totally different from $M_1$. Doing the rewrite of the state into basis 2 was so you wouldn't have to write down that matrix and could just work with the matrix $M_2$.
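The point that the two measurements are different operations can be made concrete with a small plain-Python sketch (real amplitudes only, Born-rule probabilities; `prob` is an illustrative helper, not a library call):

```python
from math import isclose, sqrt

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def prob(state, basis_vector):
    """Born rule for real amplitudes: p = |<basis|state>|^2."""
    return inner(basis_vector, state) ** 2

ket0, ket1 = (1.0, 0.0), (0.0, 1.0)
plus  = (1 / sqrt(2),  1 / sqrt(2))
minus = (1 / sqrt(2), -1 / sqrt(2))

psi = ket0  # the fixed state (a, b) = (1, 0)

assert isclose(prob(psi, ket0), 1.0)   # basis-1 measurement: surely |0>
assert prob(psi, ket1) == 0.0
assert isclose(prob(psi, plus), 0.5)   # basis-2 measurement: 50/50
assert isclose(prob(psi, minus), 0.5)
```

The same `psi` yields certainty in one basis and a coin flip in the other, which is the content of the answer above.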
According to this very interesting article in Quanta Magazine: "A Long-Sought Proof, Found and Almost Lost", it has been proved that given a vector $\mathbf{x}=(x_1,\dots,x_n)$ having a multivariate Gaussian distribution, and given intervals $I_1,\dots,I_n$ centered around the means of the corresponding components of $\mathbf{x}$, then $$p(x_1\in I_1, \dots, x_n\in I_n)\geq \prod_{i=1}^n p(x_i\in I_i) $$ (Gaussian correlation inequality or GCI; see https://arxiv.org/pdf/1512.08776.pdf for the more general formulation). This seems really nice and simple, and the article says it has consequences for joint confidence intervals. However, it seems quite useless in that respect to me. Suppose we are estimating parameters $\theta_1,\dots,\theta_n$, and we found estimators $\hat{\theta}_1,\dots,\hat{\theta}_n$ which are (maybe asymptotically) jointly normal (for example, the MLE estimator). Then, if I compute 95%-confidence intervals for each parameter, the GCI guarantees that the hypercube $I_1\times\dots\times I_n$ is a joint confidence region with coverage not less than $(0.95)^n$, which is quite low coverage even for moderate $n$. Thus, it doesn't seem a smart way to find joint confidence regions: the usual confidence region for a multivariate Gaussian, i.e., a hyperellipsoid, is not hard to find if the covariance matrix is known, and it's sharper. Maybe it could be useful to find confidence regions when the covariance matrix is unknown? Can you show me an example of the relevance of GCI to the computation of joint confidence regions?
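A quick Monte Carlo sketch of the inequality for a correlated bivariate Gaussian (hypothetical parameters $\rho = 0.8$ and unit intervals, chosen only to show how much slack the hypercube bound can have):

```python
import random

random.seed(0)
rho = 0.8
n = 100_000

inside1 = inside2 = both = 0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = z1
    x2 = rho * z1 + (1 - rho ** 2) ** 0.5 * z2   # correlated pair
    a = abs(x1) <= 1.0
    b = abs(x2) <= 1.0
    inside1 += a
    inside2 += b
    both += a and b

joint = both / n
product = (inside1 / n) * (inside2 / n)
print(joint, product)
assert joint > product  # GCI: the joint probability dominates the product
```

For strongly correlated components the gap between `joint` and `product` is large, which is exactly why the product-of-marginals lower bound on the hypercube's coverage is so loose.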
Answer: 31 Hz. Work Step by Step: We use the equation that relates frequency to velocity and wavelength to find: $ f = \frac{v}{\lambda} = \frac{343 \ \text{m/s}}{11 \ \text{m}} \approx 31 \ \text{Hz}$
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and $\bar{\rm p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with the ALICE detector at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Regarding the quantum Toffoli gate: is it classically universal, and if so, why? Is it quantumly universal, and why? Toffoli is universal for classical computation (as shown by @Victor). However, Toffoli is NOT universal for quantum computation (unless we have something crazy like $P = BQP$). To be universal for quantum computation (under the usual definition), the group generated by your gates has to be dense in the unitaries. In other words, given an arbitrary $\epsilon$ and target unitary $U$, there is some way to apply a finite number of your quantum gates to get a unitary $U'$ such that $||U - U'|| < \epsilon$. Toffoli by itself is clearly not universal under this definition since it always takes basis states to basis states, and thus cannot implement something that takes $|0\rangle \rightarrow \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$, for example. In other words, it cannot create superposition. From the wikipedia article that you cited: The Toffoli gate is universal; this means that for any boolean function f(x1, x2, ..., xm), there is a circuit consisting of Toffoli gates which takes x1, x2, ..., xm and some extra bits set to 0 or 1 and outputs x1, x2, ..., xm, f(x1, x2, ..., xm), and some extra bits (called garbage). Essentially, this means that one can use Toffoli gates to build systems that will perform any desired boolean function computation in a reversible manner. In simple terms, any boolean function may be constructed with Toffoli gates alone. Boolean functions are typically constructed from OR, AND and NOT gates, which may be combined to form any boolean function. It is widely known that the same is possible with only NOR gates or only NAND gates.
The Toffoli gate may be summarized as: $\rm{Toffoli}(a, b, c) = \begin{cases} (a, b, ¬c) & \mbox{when }a=b=1 \\ (a, b, c) & \mbox{otherwise.}\end{cases}$ Since the first and the second outputs are always equal to the first and second inputs, we may disregard them. So we have: $\rm{Toffoli}'(a, b, c) = \begin{cases} ¬c & \mbox{when }a=b=1 \\ c & \mbox{otherwise.}\end{cases}$ With that, it is possible to define the NAND gate as: $\operatorname{NAND}(a, b) = \rm{Toffoli}'(a, b, 1)$ Since the NAND gate is universal and the NAND gate may be defined via a Toffoli gate, the Toffoli gate is universal. There is another way to prove that Toffoli is universal, by directly constructing the AND and NOT gates: $\operatorname{NOT}(x) = \rm{Toffoli}'(1, 1, x)$ $\operatorname{AND}(a, b) = \rm{Toffoli}'(a, b, 0)$ Then, we may construct the OR gate using De Morgan's laws: $\operatorname{OR}(a, b) = \operatorname{NOT}(\operatorname{AND}(\operatorname{NOT}(a), \operatorname{NOT}(b))) = \rm{Toffoli}'(1, 1, \rm{Toffoli}'(\rm{Toffoli}'(1, 1, a), \rm{Toffoli}'(1, 1, b), 0))$ EDIT, since the question was edited and its scope changed: First, I don't understand quantum computing deeply, so if there is something wrong, please add a comment. I did a little research to try to make this answer complete and ended up with this: The Toffoli gate is reversible (but the Toffoli' used above is not). This means that any computation done with it can be undone. That is: $(a, b, c) = \rm{Toffoli}(\rm{Toffoli}(a, b, c))$ which means that for any triple (a, b, c), if Toffoli is applied twice, the original input is recovered as the output. Reversibility is important because quantum gates must be reversible, so the (classical) Toffoli gate may be used as a quantum gate due to this.
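The constructions above can be checked exhaustively in a few lines of Python (writing `toffoli3` for the reduced gate called Toffoli′ in the text; the names are mine):

```python
def toffoli(a, b, c):
    """Full reversible Toffoli gate on classical bits."""
    return (a, b, c ^ (a & b))

def toffoli3(a, b, c):
    """Third output only (the Toffoli' of the text; not reversible)."""
    return c ^ (a & b)

NOT  = lambda x: toffoli3(1, 1, x)
AND  = lambda a, b: toffoli3(a, b, 0)
NAND = lambda a, b: toffoli3(a, b, 1)
OR   = lambda a, b: NOT(AND(NOT(a), NOT(b)))   # De Morgan

for a in (0, 1):
    for b in (0, 1):
        assert NAND(a, b) == 1 - (a & b)
        assert AND(a, b) == (a & b)
        assert OR(a, b) == (a | b)
        for c in (0, 1):
            # reversibility: applying Toffoli twice restores the input
            assert toffoli(*toffoli(a, b, c)) == (a, b, c)
assert NOT(0) == 1 and NOT(1) == 0
```

The exhaustive loop over all input bits verifies both the gate constructions and the self-inverse property used in the EDIT.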
As demonstrated here, the Deutsch gate is defined in a similar way to the Toffoli gate, but it is a quantum gate rather than a classical one: $\operatorname{Deutsch}(\theta): |a,b,c\rangle \mapsto \begin{cases} i \cos(\theta) |a,b,c\rangle + \sin(\theta) |a,b,1-c\rangle & \mbox{for }a=b=1 \\ |a,b,c\rangle & \mbox{otherwise.}\end{cases}$ In this way, the Toffoli gate is the particular case of the Deutsch gate where: $\rm{Toffoli}(a, b, c) = \operatorname{Deutsch}(\frac{\pi}{2})(a, b, c)$ The Toffoli gate does classical computation and lacks a general phase-shift operation: on its own it realizes only the 90 degree ($\frac{\pi}{2}$) case (and, by combining multiple gates, multiples of 90 degrees). This also means that it can't be used to create superpositions of states, because that would require angles that are not multiples of 90 degrees; hence the Toffoli gate is not a universal quantum gate. A universal quantum gate set may be obtained if we combine the Toffoli gate with the Hadamard gate. This is exactly what the Deutsch gate does.
The concept you are looking for is called enumeration complexity, which is the study of the computational complexity of enumerating (listing) all the solutions to a problem (or the members of a language/set). Enumeration algorithms can be modeled as a two-step process: a precomputation step and an enumeration phase with delay. Both of these steps have their own time and space complexities (perhaps entropy, too). In the general spirit of complexity, there are often trade-offs between them to consider. The precomputation step performs some work that is necessary before the first solution is enumerated. This might involve finding the solution itself or initializing some large data structure that will reduce the overall delay between each solution. The delay is the resource cost associated with the computation necessary in between arbitrary enumerated solutions. In other words, the delay is a measure of the space and time needed to produce the $(i+1)$-th solution after the $i$-th one. Problems for which each enumeration step takes $O(1)$ time are said to have constant delay. Problems with $O(\mathrm{poly}(n))$ delay are said to have polynomial delay. For the enumeration problem you specifically mentioned in your question, you should look into the class $ENUM_{NP}$ and its related siblings in section 2.1 of "Enumeration: Algorithms and Complexity" by Johannes Schmidt (linked at the bottom). Why do we care about precomputation time and delay? Delay is very key to understanding the true intricacies of enumeration problems. Enumerating the elements of $\Sigma^*$ (up to size $n$) and of $\{ \vec{x} : \phi(\vec{x}) \}$ where $\phi(\vec{x})$ is a Boolean formula (i.e. SAT) both take exponential time overall. However, enumerating through $\Sigma^*$ only requires constant delay since you can just go through the elements in some order. For all we know, the delay for enumerating solutions to a 3SAT instance could be exponential.
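As an illustration of constant delay, a generator that streams $\Sigma^*$ up to length $n$ does a constant-ish amount of work per emitted string, even though the whole enumeration is exponential (a Python sketch; "constant" glosses over string-building costs):

```python
from itertools import product

def enumerate_sigma_star(alphabet, n):
    """Yield all strings over `alphabet` of length <= n, one at a time.
    No precomputation: each next() just advances a counter-like state."""
    for length in range(n + 1):
        for letters in product(alphabet, repeat=length):
            yield ''.join(letters)

words = list(enumerate_sigma_star('ab', 2))
print(words)  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

A SAT enumerator, by contrast, may have to search an exponential portion of the space between two consecutive solutions, which is exactly the delay distinction drawn above.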
Our job as complexity theorists is to capture why the latter problem is fundamentally harder (more complex) than the former one. Delay does a pretty good job at showcasing this difference. Likewise, we also need to know how much precomputation is done. We can reduce the delay for any enumeration problem to constant time and space by precomputing all solutions and storing them in a list to be enumerated at a later time. The challenge is to find the best balance between the two resources. The order in which you enumerate the elements can also influence the complexity. Requiring results to be returned in a specified sorted order might require us to perform additional computation in both steps, though situations where any order suffices (as long as each enumerated element is unique) are certainly studied, too. As far as I know, these classes do not typically have concise labels (akin to $P$ and $NP$). We cannot feasibly expect this, since enumeration complexity classes juggle 3 or more resources (precomputation/total time, space, delay, and entropy). There are simply too many combinations of resource bounds to hand out special names. This does not make these classes any less interesting, and it does not stop researchers from trying anyway. Resources This survey (really an attempt at formalization) should help you get started. It also proves some basic hierarchy theorems. Enumeration: Algorithms and Complexity (Johannes Schmidt, 2009) https://www.thi.uni-hannover.de/fileadmin/forschung/arbeiten/schmidt-da.pdf For an enumeration of results in enumeration complexity, check out this compilation curated by Kunihiro Wasa. Since it is categorized by problem type, you can easily find a number of papers dedicated to enumerating graph cycles. It should be simple to modify the algorithms involved to only consider cycles through a given node. http://www-ikn.ist.hokudai.ac.jp/~wasa/enumeration_complexity.html
Determinant formulas: In the previous section we discussed matrices. Now you are well aware of matrices, their properties, addition, subtraction and multiplication. Another notion enhances the theory of square matrices: the determinant. It has a wide range of applications in algebraic equations, since we can express a system of algebraic equations in matrix form and use determinant formulas to solve it. Let us suppose the equations \(a_1x + b_1y = c_1\) and \(a_2x + b_2y = c_2\). We can write this as \(\begin{bmatrix} a_{1} & b_{1}\\ a_{2} & b_{2}\\ \end{bmatrix}\) \(\begin{bmatrix} x \\ y\\ \end{bmatrix}\) = \(\begin{bmatrix} c_{1} \\ c_{2}\\ \end{bmatrix}\) The number \(a_1 b_2 - a_2 b_1\), which determines whether or not the solution is unique, is called the determinant. Calculation of determinants and evaluating determinant formulas: Determinant of a matrix of 1st order, i.e. (1 × 1): for \(A = [a_{11}]_{1\times 1}\), \(|A| = |a_{11}| = a_{11}\). Determinant of a 2 × 2 matrix: Δ = \(|A| = \left| \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right|\) = \(a_{11} a_{22} - a_{12} a_{21}\). Determinant of a 3 × 3 matrix: Δ = |A| = \(\left| \begin{matrix} a_{11} & a_{12} & a_{13}\cr a_{21} & a_{22} & a_{23} \cr a_{31} & a_{32} & a_{33} \end{matrix}\right|\) = \(a_{11}\left| \begin{matrix} a_{22} & a_{23}\cr a_{32} & a_{33} \end{matrix}\right|\) – \(a_{12}\left| \begin{matrix} a_{21} & a_{23}\cr a_{31} & a_{33} \end{matrix} \right|\) + \(a_{13}\left| \begin{matrix} a_{21} & a_{22}\cr a_{31} & a_{32} \end{matrix} \right|\) Properties of determinants: If the rows and columns of a determinant are interchanged, the value of the determinant remains unchanged. From this property, we can say that if A is a square matrix, then det(A) = det(A′), where A′ = transpose of A. If any two rows (or columns) of a determinant are interchanged, then the sign of the determinant changes.
Also: If any two rows (or columns) of a determinant are identical (i.e. all corresponding elements are the same), then the value of the determinant is zero. If each element of a row (or a column) of a determinant is multiplied by a constant k, then its value gets multiplied by k. By this property, we can take out any common factor from any one row or any one column of a given determinant. If corresponding elements of any two rows (or columns) of a determinant are proportional (in the same ratio), then its value is zero. If some or all elements of a row or column of a determinant are expressed as a sum of two (or more) terms, then the determinant can be expressed as a sum of two (or more) determinants. If we add to the elements of one row (or column) the corresponding elements of another row (or column), each multiplied by the same constant, the value of the determinant remains the same. We can symbolize this operation by $R_i \to R_i + kR_j$ or $C_i \to C_i + kC_j$. If $\Delta_1$ is the determinant obtained by applying $R_i \to kR_i$ or $C_i \to kC_i$ to the determinant $\Delta$, then $\Delta_1 = k\Delta$. If more than one operation like $R_i \to R_i + kR_j$ is done in one step, take care that one operation is not affected by another performed in the same step. There are many uses of determinant formulas. Area of a triangle using determinant formulas: By the coordinate formula, the area of the triangle whose vertices are $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$ is: \(\frac{1}{2}\) $[x_1 (y_2 - y_3) + x_2 (y_3 - y_1) + x_3 (y_1 - y_2)]$ Now we express it in determinant form as: \(\alpha =\frac 12 \left| \begin{matrix} x_1 & y_1 & 1\cr x_2 & y_2 & 1 \cr x_3 & y_3 & 1 \cr \end{matrix} \right|\) Properties of the area of triangles obtained by determinant formulas: Since area is always a positive quantity, we always take the absolute value of the determinant.
Another feature of this area formula is that if the area is given in advance, we can use both the positive and the negative value of the determinant in the calculation. Also, the area of the triangle formed by three collinear points is zero. Now we will read another section on determinant formulas, i.e. minors and cofactors. Minors and cofactors: Minor: The minor of an element $a_{ij}$ of a determinant is the determinant obtained by deleting the $i$th row and $j$th column in which the element $a_{ij}$ lies. We denote the minor of an element $a_{ij}$ by $M_{ij}$. Always remember that the minor of an element of a determinant of order n (where n ≥ 2) is a determinant of order n – 1. Cofactor: We denote the cofactor of an element $a_{ij}$ by $A_{ij}$ and define it by $A_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor of $a_{ij}$. Determinant formulas can be used in a specific way to solve problems. Adjoint and inverse of a matrix: Adjoint of a matrix: We define the adjoint of a square matrix A as the transpose of the cofactor matrix of A. We denote the adjoint of a matrix by "adj A". If A is any square matrix of order n, then A(adj A) = (adj A) A = |A| I, where I is the identity matrix of order n. Singular and non-singular matrices: Singular matrix: A square matrix A is singular if |A| = 0. Non-singular matrix: A square matrix A is non-singular if |A| ≠ 0. Properties: One thing to remember is that if A and B are non-singular matrices of the same order, then AB and BA are also non-singular matrices of the same order. |AB| = |A||B|, where A and B are square matrices of the same order. Inverse of a matrix: A square matrix A is invertible if and only if A is non-singular. If B is a square matrix and AB = BA = I, then we call B the inverse of A; we write $A^{-1} = B$ or $B^{-1} = A$, and hence $(A^{-1})^{-1} = A$.
The formula for the inverse of a matrix is $A^{-1} = \frac{1}{|A|}$ (adj A). Examples based on determinant formulas: Evaluate \(\left| \begin{matrix} 102 & 18 & 36\cr 1 & 3 & 4 \cr 17 & 3 & 6 \cr \end{matrix} \right|\) Solution: We see that \(\left| \begin{matrix} 102 & 18 & 36\cr 1 & 3 & 4 \cr 17 & 3 & 6 \cr \end{matrix} \right|\) = \(\left| \begin{matrix} 6(17) & 6(3) & 6(6)\cr 1 & 3 & 4 \cr 17 & 3 & 6 \cr \end{matrix} \right|\) = 6 \(\left| \begin{matrix} 17 & 3 & 6\cr 1 & 3 & 4 \cr 17 & 3 & 6 \cr \end{matrix} \right|\) = 0, since the last determinant has two identical rows. Find the area of the triangle whose vertices are (3, 8), (– 4, 2) and (5, 1). Solution: Δ = \(\frac 12 \left| \begin{matrix} 3 & 8 & 1\cr -4 & 2 & 1 \cr 5 & 1 & 1 \cr \end{matrix} \right|\) = \(\frac{1}{2}\) [3(2 – 1) – 8(–4 – 5) + 1(–4 – 10)] = \(\frac{1}{2}\) [3 + 72 – 14] = \(\frac{61}{2}\)
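The worked examples above can be checked mechanically. The following sketch (a minimal illustration; the function names are my own) computes determinants by cofactor expansion along the first row and the triangle area by the determinant formula:

```python
def det(m):
    """Determinant by cofactor expansion along the first row
    (fine for the small matrices used in these examples)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in m[1:]]  # delete row 0, column j
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def triangle_area(p1, p2, p3):
    """Area via the determinant formula; the absolute value is taken
    because area is always non-negative."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(det([[x1, y1, 1], [x2, y2, 1], [x3, y3, 1]])) / 2

# First worked example: one row is 6 times another, so the determinant is 0.
assert det([[102, 18, 36], [1, 3, 4], [17, 3, 6]]) == 0
# Second worked example: area of the triangle (3, 8), (-4, 2), (5, 1).
assert triangle_area((3, 8), (-4, 2), (5, 1)) == 61 / 2
```

The two assertions reproduce the two worked answers above (0 and 61/2).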
Let $p$ be a prime number and consider the sum $S(x)=\sum_{n\le x}\left(\frac{n}{p}\right)\mu(n)$. For how small an $x$ in terms of $p$ is it known that $S(x)=o(x)$? I am especially interested in unconditional results. In general, we can take $x>\exp\{c_\epsilon p^\epsilon\}$ by the Prime Number Theorem for arithmetic progressions. More generally, one can use $L$-function methods to relate $x$ to zero-free regions. In doing so it is hard to avoid `losing logarithms'. The following elementary argument though, essentially due to Granville, does the job in an easier way. Let $f$ be a completely multiplicative function such that $|f(n)|\le1$ for all $n$ (so one can think of $f$ as a Dirichlet character), and assume that we know that $$ \left|\sum_{n\le x} \Lambda(n)f(n) \right| \le Cx\cdot \frac{\log Q}{\log x} \tag{*} $$ for all $x>Q$ (the size of $Q$ will depend on the available zero-free regions). Then we claim that $(*)$ holds for $\mu(n) f(n)$ too (with a different constant). Note that it suffices to show the result for $g(n)=\prod_{p^e\|n}(-f(p))^e$ (then one can use a convolution argument to pass to $f$). In order to show that $(*)$ holds with $g$ in place of $f\Lambda$, possibly with another constant $C'$ in place of $C$, we use induction, with the induction hypothesis being that $$ \left|\sum_{n\le x} g(n) \right| \le C' x\cdot \frac{\log Q}{\log x} \tag{**} $$ for all $x\le 2^m$. If $2^m\le Q$, this holds trivially (choosing $C'$ appropriately). Next, assume that $2^m>Q$ (and that $Q$ is large). Suppose also that $(**)$ holds for $x\le 2^m$, and consider $x\in(2^m,2^{m+1}]$. Then $$ \sum_{n\le x} g(n) \log n = \sum_{n\le x} g(n) \sum_{d|n} \Lambda(d) = \sum_{dm\le x} \Lambda(d) g(d)g(m) . 
$$ We apply Dirichlet's hyperbola method: \begin{align*} \sum_{n\le x} g(n) \log n &= \sum_{m\le x^{1-\epsilon}} g(m) \sum_{d\le x/m} g(d)\Lambda(d) + \sum_{1<d\le x^{\epsilon}} \Lambda(d) g(d) \sum_{x^{1-\epsilon}<m\le x/d} g(m) \\ &\ll \sum_{m\le x^{1-\epsilon}} \frac{Cx}{m}\cdot \frac{\log Q}{\epsilon \log x} + \sum_{d\le x^{\epsilon}} \Lambda(d) \frac{C'x}{d} \cdot \frac{\log Q}{\log x} \\ &\ll \left( \frac{C}{\epsilon} + \epsilon C'\right) x\log Q \end{align*} Then applying partial summation and choosing $\epsilon$ and $C'$ appropriately completes the inductive step and thus the proof of $(**)$. In the special case that $f(n)=(n/p)$, we know that $$ \sum_{n\le x}\Lambda(n) \left(\frac{n}{p}\right) = -\sum_{\substack{\rho=\beta+i\gamma\\L(\rho,(\cdot/p))=0,\,|\gamma|\le p}} \frac{x^{\rho}}{\rho} + O\left(xe^{-c\sqrt{\log x}}\right), $$ (see e.g. eq (13), p. 120 in Davenport's book "Multiplicative Number Theory"). There is a $c>0$ such that the first sum has at most one summand with $\beta\ge1-c/\log p$, for which one then necessarily has that $\gamma=0$ (i.e. $\rho=\beta$ is a Siegel zero). The sum over the zeroes with $\beta\le 1-c/\log p$ can be shown to be $\ll x^{1-c'/\log q}$ for some absolute constant $c'>0$, using zero-density estimates (see e.g. equation (18.9) in p. 428 of the book "Analytic Number Theory" by Iwaniec and Kowalski). We conclude that $$ \sum_{n\le x}\Lambda(n) \left(\frac{n}{p}\right) = -\frac{x^{\beta}}{\beta} + O\left( x^{1-c'/\log p} + xe^{-c\sqrt{\log x}}\right) . $$ So $(*)$ holds with $Q=1/(1-\beta)$ if $\beta$ exists and with $Q=p$ otherwise. Wirsing's Theorem tells us that if $f$ is multiplicative and each $f(p)=-1,0 $ or $ 1$ then $\sum_{n\leq x} f(n) = o(x)$ as $\sum_{p\leq x} (1-f(p))/p \to \infty$ (and one cannot do much better). Moreover one can get an explicit upper bound: $ \sum_{n\leq x} f(n) \ll x \exp( -.32\sum_{p\leq x} (1-f(p))/p)$. 
In your case $f(n)=\mu(n)(n/p)$, so that $\sum_{p\leq x} (1-f(p))/p = 1/p+ 2\sum_{q\leq x, (q/p)=1} 1/q$. Therefore, to get the bound $o(x)$ you need a significant number of the primes $q\leq x$ to satisfy $(q/p)=1$. So your question becomes: for what $x$ can we guarantee this? Or, in other words, is it possible that $(q/p)=-1$ for "most" of the primes $\leq x$ (as may well be the case if one has a Siegel zero)? If we use quadratic reciprocity, then $(q/p)=-1$ is equivalent to demanding that $(p/q)$ equal something fixed, and by Dirichlet's Theorem we can find such $p$ for which this holds for all but one prime $q\ll \log p$. But then, by smooth number estimates, one knows that for almost all such $p$ one has $\sum_{n\leq x} \mu(n)(n/p) \gg \rho(A)x$ for $x=(\log p)^A$ (for each $A$). So we have "proved" that for any fixed $A>0$, the estimate $\sum_{n\leq x} \mu(n)(n/p) = o(x)$ does not hold uniformly for $x=(\log p)^A$. The same ideas give, assuming GRH, that $\sum_{n\leq x} \mu(n)(n/p) = o(x)$ does hold uniformly provided $\log x/\log\log p \to \infty$ as $p\to \infty$. These ideas can be found in my paper "Large Character Sums" with Soundararajan, though there we looked at character sums $\sum_{n\leq x} \chi(n)$; it should not take much to modify those ideas for this situation.
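None of this is needed for the argument, but the sum $S(x)=\sum_{n\le x}\mu(n)\left(\frac{n}{p}\right)$ is easy to experiment with numerically. The sketch below (my own illustration) computes $\mu(n)$ with a linear sieve and the Legendre symbol $(n/p)$ via Euler's criterion:

```python
def mobius_sieve(limit):
    """Mobius function mu(n) for 1 <= n <= limit, via a linear sieve."""
    mu = [0] * (limit + 1)
    mu[1] = 1
    primes, is_comp = [], [False] * (limit + 1)
    for i in range(2, limit + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > limit:
                break
            is_comp[i * p] = True
            if i % p == 0:        # p^2 divides i*p, so mu vanishes
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p, by Euler's criterion."""
    r = pow(n % p, (p - 1) // 2, p)
    return r - p if r == p - 1 else r

def S(x, p, mu):
    return sum(mu[n] * legendre(n, p) for n in range(1, x + 1))

mu = mobius_sieve(1000)
s = S(1000, 101, mu)   # expected to be much smaller than x = 1000 in absolute value
```

This is only a finite experiment, of course; it says nothing about the uniformity in $p$ that the question is really about.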
Abbreviation: PeirceA A Peirce algebra is a 2-sorted structure $\mathbf{A}=\langle \mathbf R,\mathbf B,^c\rangle$ such that $\mathbf R=\langle R,\vee,0,\wedge,1,\neg,\circ,^\smile,e\rangle$ is a relation algebra $\mathbf B=\langle B,\vee,0,\wedge,1,\neg,f_r\ (r\in R)\rangle$ is a Boolean module over $\mathbf R$ $^c:B\to R$ is a cylindrification: $f_{x^c}(1)=x$ and $f_r(1)^c=f_r(1)$ Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page. It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes. Let $\mathbf{A}$ and $\mathbf{B}$ be Peirce algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism, i.e. it preserves all the basic operations. [[...]] subvariety [[...]] expansion [[...]] supervariety [[...]] subreduct
Random graphs with small world topology In graphs with small world topology, nodes are highly clustered yet the path length between them is small. A topology like this can make search problems very difficult, since local decisions quickly propagate globally. In other words, shortcuts can mislead heuristics. Further, it has been shown that many different search problems have a small world topology. Watts and Strogatz [1] propose a model for small world graphs. First, we start with a regular graph. Disorder is introduced into the graph by randomly rewiring each edge with probability $p$. If $p=0$, the graph is completely regular and ordered. If $p=1$, the graph is completely random and disordered. Values of $0 < p < 1$ produce graphs that are neither completely regular nor completely disordered; graphs do not have a small world topology for $p=0$ or $p=1$. Watts and Strogatz start from a ring lattice with $n$ nodes and $k$ nearest neighbours. To rewire an edge, a node is chosen from the lattice uniformly at random and the edge is reconnected to it; if rewiring would create a duplicate edge, the edge is left untouched. For large, sparse graphs they demand $n \gg k \gg \ln(n) \gg 1$, where $k \gg \ln(n)$ ensures the graph remains connected. The model of Watts and Strogatz is somewhat popular, but it does have certain drawbacks. Walsh [2] investigates the effects of randomization and restart strategies in graphs generated using the model. There is also a paper by Virtanen [3], which covers other models motivated by the need for realistic modeling of complex systems. Random simple planar graphs Generating random simple planar graphs on $n$ vertices uniformly at random can be done efficiently. The number of planar graphs with $n$ vertices, $g_n$, can be determined using generating functions. The values of $g_n$ for $1 \leq n \leq 9$ are $1,2,8,64,1023,32071,1823707,163947848$ and $20402420291$, respectively.
Since these numbers behave in a complicated way, one cannot expect to find a closed formula for them. Giménez and Noy [4] give a precise asymptotic estimate for the growth of $g_n$:$$g_n \sim g \cdot n^{-7/2} \gamma^n n!,$$where $g$ and $\gamma$ are constants determined analytically with approximate values $g \approx 0.42609$ and $\gamma \approx 27.22687$. The proof of the result leads to a very efficient algorithm by Fusy [5]. Fusy gives an approximate-size random generator and also an exact-size random generator of planar graphs. The approximate-size algorithm runs in linear time, while the exact-size algorithm runs in quadratic time. The algorithms are based on a decomposition according to successive levels of connectivity: planar graph $\rightarrow$ connected $\rightarrow$ 2-connected $\rightarrow$ 3-connected $\rightarrow$ binary tree. The algorithms then operate by translating this decomposition of a planar graph into a random generator using the framework of Boltzmann samplers by Duchon, Flajolet, Louchard and Schaeffer [6]. Given a combinatorial class, a Boltzmann sampler draws an object of size $n$ with probability proportional to $x^n$, where $x$ is a certain real parameter tuned by the user. The probability distribution is spread over all the objects of the class, and since objects of the same size have the same probability of occurring, the distribution is uniform when restricted to a fixed size. For a lightweight introduction, see a presentation by Fusy. [1] D.J. Watts and S.H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393:440-442, 1998. [2] Toby Walsh. Search in a small world. Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI-99-Vol2), pages 1172-1177, 1999. [3] Satu Virtanen. Properties of nonuniform random graph models. Research Report A77, Helsinki University of Technology, Laboratory for Theoretical Computer Science, 2003. [4] O. Giménez and M. Noy.
Asymptotic enumeration and limit laws of planar graphs, arXiv:math.CO/0501269. An extended abstract has appeared in Discrete Mathematics and Theoretical Computer Science AD (2005), 147-156. [5] E. Fusy. Quadratic and linear time generation of planar graphs, Discrete Mathematics and Theoretical Computer Science AD (2005), 125-138. [6] P. Duchon, P. Flajolet, G. Louchard, and G. Schaeffer. Boltzmann samplers for the random generation of combinatorial structures. Combinatorics, Probability and Computing, 13(4-5):577-625, 2004.
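To give a flavour of the Boltzmann framework, here is a minimal free Boltzmann sampler (my own toy example, for plane binary trees rather than planar graphs), based on the generating-function equation $B(x) = 1 + xB(x)^2$, which requires $x < 1/4$:

```python
import math
import random

def boltzmann_binary_tree(x, rng, max_size=10**5):
    """Free Boltzmann sampler for plane binary trees counted by internal
    nodes, whose generating function satisfies B(x) = 1 + x*B(x)^2.
    Requires x < 1/4; returns the number of internal nodes sampled.
    Each tree with n internal nodes comes out with probability x^n / B(x),
    so trees of equal size are equally likely."""
    B = (1 - math.sqrt(1 - 4 * x)) / (2 * x)   # value of the GF at x
    p_leaf = 1 / B                             # P(leaf); P(internal) = 1 - 1/B = x*B
    size, pending = 0, 1                       # number of tree slots still to fill
    while pending:
        pending -= 1
        if rng.random() >= p_leaf:             # slot becomes an internal node ...
            size += 1
            pending += 2                       # ... with two child slots
            if size > max_size:
                raise RuntimeError("sample too large; reject and retry")
    return size

rng = random.Random(1)
sizes = [boltzmann_binary_tree(0.2, rng) for _ in range(200)]
```

Exact-size sampling is then obtained by rejection: resample until the output has the target size, with $x$ tuned so that the target size is the likeliest outcome. As $x \to 1/4$ the expected size diverges, which is why the parameter must be calibrated to the size one is aiming for.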
Non-maximal paths are prefixes or suffixes of shortest paths So a maximal path is just a path that begins at a vertex $u$ with no in-edge and ends at a vertex $v$ with no out-edge. It's straightforward to see that if a shortest path $P$ from $u$ to $v$ is a subpath of some shortest path $Q$ from $x$ to $y$, then $P$ is also a subpath of a shortest path $R$ from $x$ to $v$, and a subpath of a shortest path $S$ from $u$ to $y$. Thus the following statements are equivalent: $P$ is not maximal; There exists a shortest path $Q \ne P$ that contains $P$ as a subpath; There exists a shortest path $X \ne P$ that contains $P$ as a subpath, is longer than $P$ and either begins at $u$ or ends at $v$. (1) and (2) are equivalent by definition; (2) follows from (3) trivially; (3) follows from (2) by the reasoning above and the fact that, since $Q \ne P$, at least one of $R$ and $S$ must be longer than $P$ and end at one of $P$'s two endpoints; $X$ can be whichever one this is. We will use (3) to efficiently determine whether a given shortest path is maximal. Maximality is determined by endpoints and length If some shortest path from $u$ to $v$ could be extended to a longer shortest path, then all shortest paths from $u$ to $v$ could be so extended -- since the only property we care about is the length, and that doesn't change. So the question of whether a given path $P$ from $u$ to $v$ is maximal is entirely determined by the length of $P$, and its start and end points $u$ and $v$ -- it is not necessary to know what specific internal vertices $P$ uses. Thus, for any pair of vertices $(u, v)$, we can determine in a preprocessing step whether this vertex pair is "maxable"; a path $P = (u, \dots, v)$ is then maximal exactly when the pair $(u, v)$ is maxable and $P$ has length equal to the distance between $u$ and $v$. The idea is to build a complete list of all maxable pairs $(u, v)$ and the number $n_{uv}$ of shortest paths between them. 
Once these are known, paths can be uniformly sampled using a simple recursive procedure. Determining maxable pairs Let $A$ be the set of all vertex pairs $(u, v)$ such that $v$ is reachable from $u$. This can easily be determined in $O(n^2+nm)$ time by starting a DFS from each vertex in turn. We determine the set of maxable pairs by initially declaring that every vertex pair $(u, v)$ such that $v$ is reachable from $u$ is maxable; we then "cross out" all pairs $(u, v)$ for which some shortest path from $u$ to some $x$ has a shortest path from $u$ to $v$ as an initial subpath; finally, we "cross out" all pairs $(u, v)$ for which some shortest path from some $x$ to $v$ has a shortest path from $u$ to $v$ as a final subpath. By (3) above, all pairs $(u, v)$ that have not been crossed out are maxable. I'll now describe the first "crossing out" step. The second is identical, except that we first reverse all edge directions, and when we are about to cross out some pair $(u, v)$, we instead cross out $(v, u)$. The following assumes an unweighted graph; if the edges are weighted, you can replace the BFS with Dijkstra's algorithm. From each vertex $u$ in turn, perform BFS on $G$, using $u$ as the root. For each vertex $v$ visited at level $L$ during a particular BFS, if $v$ is adjacent (in $G$) to any vertex on level $L+1$ of the BFS tree, cross out the pair $(u, v)$. This is because the paths in $G$ whose vertices start at the root $u$ of a BFS tree and thereafter occupy strictly increasing levels in that tree are exactly the shortest paths starting at $u$, and the "downward" (w.r.t. the BFS tree) edge leaving $v$ indicates that there is some $x$ for which a shortest path from $u$ to $x$ uses a shortest path from $u$ to $v$ as a subpath. This can be performed in $O(n+m)$ time per root vertex, for $O(n^2+nm)$ time overall. Let $Z$ be the set of maxable vertex pairs that remain after the above has run. 
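The first crossing-out step described above can be sketched as follows (an unweighted sketch of my own; `adj` maps each vertex to its list of out-neighbours):

```python
from collections import deque

def non_maxable_from_starts(adj):
    """For each root u, BFS and cross out pairs (u, v) where some shortest
    path from u can be extended past v: this happens exactly when v has an
    out-neighbour one BFS level deeper.  Returns the set of crossed-out
    pairs; the second pass would run this on the reversed graph."""
    crossed = set()
    for u in adj:
        level = {u: 0}
        q = deque([u])
        while q:                              # standard BFS from u
            v = q.popleft()
            for w in adj[v]:
                if w not in level:
                    level[w] = level[v] + 1
                    q.append(w)
        for v, L in level.items():            # "downward" edge => not maximal
            if any(level.get(w) == L + 1 for w in adj[v]):
                crossed.add((u, v))
    return crossed

# Path a -> b -> c: shortest paths from a to a and to b extend further.
adj = {'a': ['b'], 'b': ['c'], 'c': []}
assert non_maxable_from_starts(adj) == {('a', 'a'), ('a', 'b'), ('b', 'b')}
```

One BFS per root gives the $O(n+m)$ per-vertex, $O(n^2+nm)$ overall cost stated above.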
Counting shortest paths As a further preprocessing step, we would like to count the number of shortest paths between any pair of vertices $(u, v)$. For any vertex $u$, the number of shortest paths from $u$ to every other vertex $v$ can be calculated in $O(n+m)$ time with BFS. This means that the total time needed to compute the number of shortest paths between every pair $(u, v)$ of vertices is $O(n^2+nm)$. Let $n_{uv}$ be the number of shortest paths from $u$ to $v$. Discard any pairs $(u, v)$ for which $n_{uv} = 0$, i.e., for which $v$ is not reachable from $u$. Let $n_z = \Sigma_{(u, v) \in Z} n_{uv}$. Sampling algorithm The input is a graph $G$. If $G$ has no vertices, return an empty path. (This will arise as a base case later on.) If it has at least one vertex then it has at least one maximal path (even if that consists of a single vertex). The next step is to choose a start and an end vertex for the path. This works differently depending on whether we are at the top level of the call stack or not: If we are at the top level of the call stack, choose an endpoint pair $(u, v)$ from $Z$ at random with probability $n_{uv}/n_z$; otherwise let $n = \Sigma_{(u, v) \in A \cap (V(G)\times V(G))} n_{uv}$ and choose an endpoint pair $(u, v)$ from $A \cap (V(G)\times V(G))$ at random with probability $n_{uv}/n$. These probabilities ensure that paths are chosen uniformly at random; if you want to weight paths in some other way, you are of course welcome to push these probabilities around. $u$ will be the first vertex in your random path, and $v$ will be the last. If $u = v$ then we are done: return the single-vertex path $u$. Otherwise, construct a subproblem to recurse on: this is the induced subgraph $G'$ of $G$ whose vertex set is the intersection of $X$ and $Y$, where $X$ is the set of all vertices in $G$ that are reachable from a child of $u$, and $Y$ is the set of all vertices in $G$ from which $v$ is reachable, except for $v$ itself. Recurse on this subproblem; it will produce a (possibly empty) path $P$.
Return the path $u, P, v$. Note that whenever two vertices $u$ and $v$ both remain in $G'$, the number of shortest paths $n_{uv}$ between them also remains unchanged -- so it's not necessary to recalculate these for each subproblem. $X$ and $Y$ can each be found using DFS in $O(n+m)$ time, intersected in $O(n\log n)$ time (sort each list, if necessary, then use an $O(n)$ list merge), and the edges of $G$ that remain in $G'$ found in $O(m)$ time. Each recursion level shrinks the graph by at least 2 vertices, so there are at most $O(n)$ recursive calls. Thus the overall time complexity for sampling is $O(n(n + m + n\log n)) = O(n^2\log n + mn)$.
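The shortest-path counting step used in the preprocessing is the standard BFS counting recurrence; a sketch of my own, for the unweighted case:

```python
from collections import deque

def count_shortest_paths(adj, u):
    """Number of shortest paths from u to every reachable vertex, by BFS:
    a vertex at level L+1 accumulates the counts of all its level-L
    in-neighbours.  O(n + m) per root vertex."""
    dist, count = {u: 0}, {u: 1}
    q = deque([u])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:                # first time seen: one level deeper
                dist[w] = dist[v] + 1
                count[w] = count[v]
                q.append(w)
            elif dist[w] == dist[v] + 1:     # another shortest way into w
                count[w] += count[v]
    return count

# Diamond: two shortest paths from s to t (s-a-t and s-b-t).
adj = {'s': ['a', 'b'], 'a': ['t'], 'b': ['t'], 't': []}
assert count_shortest_paths(adj, 's')['t'] == 2
```

Running this once per root gives all the $n_{uv}$ values in $O(n^2+nm)$ total, matching the bound above; for weighted graphs the BFS would be replaced by Dijkstra's algorithm, as noted earlier.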
At the request of the OP, I'm turning my comments above into an answer, though different answers are possible and the question sounds a bit soft to me. Be that as it may, here are my two cents. From the point of view of factorization theory, Euclidean domains can be understood as a rather special subclass of the class of domains, either commutative or not, whose monoid of non-zero elements admits a length function, which in turn implies the ACC on principal left and principal right ideals (for short, the ACCP), and hence atomicity. (Given a monoid $H$, a function $\lambda: H \to \mathbf N$ is called a length function if $\lambda(x) < \lambda(y)$ whenever $y=uxv$ for some $u, v \in H$ with $u \notin H^\times$ or $v \notin H^\times$, where $H^\times$ is the group of units of $H$. In particular, it is well known that every Euclidean function $f$ on an integral domain $R$ can be "normalized" to a Euclidean function $f^\ast$ such that $f^\ast(a) \le f^\ast(ab)$ for all non-zero $a, b \in R$, and then it is not difficult to show that $f^\ast$ is a length function.) On the other hand, a monoid supporting a length function need not be factorial (that is, factorization need not be unique, whatever this means, unless you take a quotient of $\mathscr{F}^\ast(\mathcal A(H))$, the free monoid with basis the atoms of $H$, by a very large congruence...), though it must be BF (i.e., the factorizations of a fixed element cannot be arbitrarily long). So, it is clear that the kind of length functions supported by a Euclidean domain must be quite special, insofar as their existence implies factoriality. As for the connection among the ACCP, atomicity, and the existence of a length function, we have to separate the commutative and cancellative case from the non-commutative or non-cancellative case, since they have completely different stories.
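As a small numerical sanity check of the normalization remark (my own illustration, not from the cited papers): on the multiplicative monoid of non-zero integers, the normalized Euclidean function $n \mapsto |n|$ behaves as a length function, since multiplying a non-zero element by any non-unit strictly increases it.

```python
def is_length_violation(a, b):
    """On the multiplicative monoid of non-zero integers, lambda(n) = |n|
    should strictly increase when n is multiplied by a non-unit
    (the units being +1 and -1).  Returns True if the pair (a, b)
    witnesses a failure of that property."""
    non_unit = abs(b) != 1
    return non_unit and not (abs(a) < abs(a * b))

# Exhaustive check on a small range: no violations occur.
violations = [(a, b)
              for a in range(-50, 51) if a != 0
              for b in range(-50, 51) if b != 0
              if is_length_violation(a, b)]
assert violations == []
```

Of course this only illustrates the commutative, cancellative prototype; the point of the answer is precisely that beyond this setting the relationship between length functions, the ACCP, and atomicity becomes subtler.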
To begin, you should keep in mind that factorization theory has been developed so far almost entirely in the commutative and cancellative setting (there are many explanations for that, but they would be beyond the scope of this thread), and in this setting the relevant results have been part of the folklore for a long time (in case you want a reference, see, e.g., [4, Proposition 1.1.4]). It is hard for me to say where they were first proved, but here is a little nugget: Proposition 1.1 in P.M. Cohn's influential paper [1] reads, "An integral domain is atomic if and only if it satisfies the maximum condition on principal ideals". The maximum condition alluded to by Cohn is equivalent to the ACCP, and he doesn't prove his statement, maintaining that it "is easily verified". But it turns out that the claim is false, as shown by A. Grams in [5]. (By the way, Grams establishes in the same paper that the ACCP is not necessary for atomicity.) As for the rest, it is only in 2013 that D. Smertnig proved that the ACCP is still a sufficient condition for atomicity in cancellative (though possibly non-commutative) monoids, see [6, Proposition 3.1], and the same criterion has been recently extended in a different direction (namely, to unit-cancellative, commutative monoids) by Y. Fan, A. Geroldinger, F. Kainrath, and myself, see [2, Lemma 3.1(1)]. The general unit-cancellative case, which subsumes all the previous results, has now been settled in [3, Theorem 2.22], where it is shown that, for a unit-cancellative (but possibly non-commutative) monoid, the ACCP implies atomicity, and the existence of a length function is equivalent to BF-ness. (A monoid $H$ is unit-cancellative if $xy = x$ or $yx = x$ for some $x, y \in H$ implies $y \in H^\times$.) Of course, this is not the end of the story: There are a bunch of interesting monoids, which are atomic, but not unit-cancellative, for which an analogous general criterion is not yet known. Bibliography [1] P.M.
Cohn, Bezout rings and their subrings, Proc. Camb. Phil. Soc. 64 (1968), 251-264. [2] Y. Fan, A. Geroldinger, F. Kainrath, and S.T., Arithmetic of commutative semigroups with a focus on semigroups of ideals and modules, J. Algebra Appl. 16 (2017), No. 11. [3] Y. Fan and S.T., Power monoids: A bridge between Factorization Theory and Arithmetic Combinatorics, preprint. [4] A. Geroldinger and F. Halter-Koch, Non-Unique Factorizations. Algebraic, Combinatorial and Analytic Theory, Pure Appl. Math. 278, Chapman & Hall/CRC, Boca Raton (FL), 2006. [5] A. Grams, Atomic rings and the ascending chain condition for principal ideals, Proc. Camb. Philos. Soc. 75 (1974), 321-329. [6] D. Smertnig, Sets of lengths in maximal orders in central simple algebras, J. Algebra 390 (2013), 1-43.
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak 1 min ago BTW your program looks very interesting, in particular the way to enter mathematics. One thing that seems to be missing is documentation (at least I did not find it). This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for. For example, upon entering $\frac xy$ will it also find $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$? ******* Is it possible to save a link to a particular search query? For example, in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports. When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to. ******* If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. Which means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string: I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead: One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find... 
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples). Additionally, som... @MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback, I really love it and will seriously look into those points and improve approach0. Give me just some minutes, I will answer/reply to your feedback in our chat. — Wei Zhong 1 min ago I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward." BTW those animations with examples of searching look really cool. @MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong 29 secs ago We are an open-source project hosted on GitHub: http://github.com/approach0 Welcome to send any feedback on our GitHub issue page! @MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users. @MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not get $x$ because approach0 considers them not structurally identical, but you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. 
As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it. @MartinSleziak As for the query link, it needs more explanation. Technologically, the way you mentioned that Google is using is an HTTP GET request, but for mathematics a GET request may not be appropriate, since a math query has structure; usually a developer would instead use an HTTP POST request with the query JSON-encoded. This makes development much easier, because JSON is richly structured and makes it easy to separate math keywords. @MartinSleziak Right now there are two workarounds for the "query link" problem you raised. First is to use the browser back/forward buttons to navigate among the query history. @MartinSleziak Second is to use the command line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for users to refer to a query; I will write this point into the project TODO and improve this later. (It just needs some extra effort though.) @MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked top, with different symbols such as "a", "b" ranked after the exact match. @MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them. @MartinSleziak Yes, you can: Greek letters are tokenized to the same thing as normal alphabetic letters. @MartinSleziak As for integral upper bounds, I think it is a problem in a JavaScript plugin approach0 is using; I also observe this issue. The only thing you can do is to use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit. @MartinSleziak Yes, it has a threshold now, but this is easy to adjust from the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on math.stackexchange. 
This is a very small number, but I will index more posts/pages when search engine efficiency and relevance are tuned. @MartinSleziak As I mentioned, the index is too small currently. You probably will get what you want when this project develops to the next stage, which is to enlarge the index and publish. @MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published. So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar2 hours ago @GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid1 hour ago @quid I think that the type of question mentioned in GEdgar's comment is closer to book recommendations, which are valid questions on the main site. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak57 mins ago "What is your favorite calculus textbook?" is opinion-based and/or too broad for the main site. If anything, it is a "poll." On tex.se they have polls on "favorite editor/distro/fonts etc." while actual questions on these are still on-topic on main. Beyond that, it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid7 mins ago @quid I will reply here, since I do not want to digress in the comments too much from the topic of that question. Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main site. Which is why I wrote in my comment: "Although not formulated like that". Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main site (although there should not be). I guess some examples can be found here or here. Perhaps it is better to link the search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed. Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously. I saw such a poll for the first time on TeX.SE. The poll there concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc. Academia.SE has some questions which could be classified as "demographic" (including gender). @quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar. But that is only anecdotal. And if I am to believe Slovak Wikipedia, it should be Christus mansionem benedicat. From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov." My attempt at an English translation: The priest writes C+M+B on the door (Christus mansionem benedicat - May Christ bless this house). However, this is often mistakenly explained as 20-G+M+B-16, after the initial letters of the supposed names of the three kings. As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from the initial letters of the translation. It seems that they also have other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants, and the initials are also believed to stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House"). Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany." BTW in the village where I come from, the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question. In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3] A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance, if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar). In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing. On Slovakia specifically it says there: The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News").
It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
This article was originally published on December 5, 2017. Believe it or not, mathematics plays an important role in the field of sports. Coaches, athletes, and trainers often use mathematics to gain a competitive advantage over their counterparts. With statistics of games, statistics of players, and probabilities of winning or losing games, mathematics is everywhere. Applications of calculus in sports are endless! Calculus in sports: running races How is mathematics involved in running? I know it surprises most of us. To optimize their run, runners must keep themselves at the right speed in order to finish in the shortest time possible. According to Joseph Keller's A Theory of Competitive Running, the physiological running capacity of the human body can be modeled by a set of differential equations. According to this theory, to win a running race under 291 meters, the optimum strategy is to sprint at maximum propulsive force for the entire distance. Races above 291 meters require a different strategy to optimize performance. Finding the optimal velocity for the run while conserving energy Keller's theory, which is based on Newton's second law and the calculus of variations, provides an optimum strategy for running one-lap and half-lap races. Keller wrote the equation of motion as:\(\frac { d\upsilon }{ dt } +\frac { \upsilon }{ \tau } =f(t),\) where \(\upsilon\) is the runner's speed as a function of time t, \(\tau\) is a constant characterizing the resistance to running (a resistive force assumed proportional to running speed), and \(f(t)\le F\) is the propulsive force per unit mass. Empirical knowledge of human exercise physiology is expressed in the assumed relation between propulsive force and energy supply,\(\frac { dE }{ dt } =\sigma -f\upsilon,\) where E represents the runner's energy supply, which has a finite initial value \({ E }_{ 0 }\) and is replenished at a constant rate \(\sigma.\) In spite of this replenishment, the energy supply reaches zero at the end of the race.
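Keller's sprint-phase equation of motion above can be integrated directly. The sketch below (the parameter values are invented for illustration, not Keller's fitted ones) compares a forward-Euler integration of \(dv/dt = F - v/\tau\) against its closed-form solution:

```python
import math

# Keller's equation of motion during an all-out sprint, f(t) = F:
#   dv/dt = F - v / tau
# Analytic solution with v(0) = 0: v(t) = F * tau * (1 - exp(-t / tau))
# Parameter values below are illustrative, not Keller's fitted ones.
F = 12.2    # maximum propulsive force per unit mass, m/s^2 (assumed)
tau = 0.9   # resistance time constant, s (assumed)

def v_analytic(t):
    """Closed-form sprint speed at time t."""
    return F * tau * (1.0 - math.exp(-t / tau))

def v_euler(t_end, dt=1e-4):
    """Forward-Euler integration of dv/dt = F - v/tau from v(0) = 0."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += (F - v / tau) * dt
        t += dt
    return v

# The two agree closely, and v approaches the terminal speed F * tau.
print(v_euler(5.0), v_analytic(5.0), F * tau)
```

The speed saturates at \(F\tau\), which is why the all-out strategy is only optimal for short races: in longer races the energy constraint \(dE/dt = \sigma - fv\) forces \(f(t) < F\) for part of the run.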
\(\tau,\) \(\sigma,\) \({ E }_{ 0 }\) and F are found by comparing the model's optimal race times with actual race records. Use of calculus in sports does not end here; next, let's take a look at how it can be applied in baseball. Calculus in baseball In baseball, calculus can be used to optimize the pitcher's throw to achieve maximum efficiency. Also, calculus can be used to calculate the projectile motion of the baseball's trajectory, and to predict whether runners can make it to the next base on time given their running speed and the speed of a hit ball. Finding the work required to throw the baseball The work W done on a moving ball from a position \({ s }_{ 0 }\) to \({ s }_{ 1 }\) is equal to the change in the ball's kinetic energy. The kinetic energy K of a baseball of mass m and velocity v is given by \(K=\frac { 1 }{ 2 } m{ v }^{ 2 }\).\(W=\int _{ { s }_{ 0 } }^{ { s }_{ 1 } }{ F(s) } ds=\frac { 1 }{ 2 } m{ v }_{ 1 }^{ 2 }-\frac { 1 }{ 2 } m{ v }_{ 0 }^{ 2 },\) where \({ v}_{ 0 }\) and \({ v }_{ 1 }\) are the initial and final velocities. Using this, baseball players can figure out how much force they need to exert on the ball to reach the place where they want the ball to go. Finding the average force on the bat during the collision The collision of ball and bat is quite complex, and models of it are discussed in detail in a book by Robert Adair, The Physics of Baseball (New York: HarperPerennial). The above image shows an overhead view of the position of a baseball bat, shown every fiftieth of a second during a typical swing. We can calculate the average force on the bat during this collision by first calculating the change in the ball's momentum. We know that the momentum p of an object is the product of its mass m and its velocity v, that is, p=mv. Suppose an object, moving along a straight line, is acted on by a force F=F(t) which is a continuous function of time t.
The change in the ball's momentum over a time interval \([{ t }_{ 0 }, { t }_{1 }]\) is equal to the integral of the force F from time \({ t }_{ 0 }\) to \({ t }_{ 1 }\):\(p({ t }_{ 1 })-p({ t }_{ 0 })=\int _{ { t }_{ 0 } }^{ { t }_{ 1 } }{ F(t) } dt\) Using the above formula, one can find the average force on the bat during the collision as F=ma, where \(a=\frac { \Delta v }{ \Delta t }\). Application of calculus in sports does not end with running and baseball; now let's see how it can be applied in basketball too. Calculus in basketball Calculus can be used in basketball to find the exact arc length of a shot from the shooter's hands to the basket. The moment the basketball is released from the shooter's hands, its travel path traces an arc all the way to the net. Using the angle and strength of the release, one can mathematically predict the travel path and the length of the arc. While the ball is in the air, it is affected by only one force, which is gravity (neglecting air resistance)! Finding the arc length of a basketball throw The travel path of a basketball can be divided into two components, the horizontal (x) direction and the vertical (y) direction. These two components can be represented by the following parametric equations: For horizontal, \(x(t)={ x }_{ o }+{ v }_{ o } \cos(\theta )t\) For vertical, \(y(t)={ y }_{ o }+{ v }_{ o }\sin(\theta )t+\frac { 1 }{ 2 } g{ t }^{ 2 }\) where \({ x }_{ o }\) is the initial horizontal position of the basketball, \({ y }_{ o }\) is the initial vertical position of the basketball, \({ v }_{ o }\) is the initial velocity of the basketball, \(\theta\) is the angle the ball is projected with respect to the x-axis, g is the acceleration due to gravity, −9.81 m/s\(^{2}\), and t is the time traveled.
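The parametric equations above are easy to evaluate numerically. A minimal sketch (the release height, speed, and angle below are assumed example values, not from the article):

```python
import math

G = -9.81  # acceleration due to gravity, m/s^2 (negative: y points up)

def position(t, x0, y0, v0, theta):
    """Ball position at time t from the parametric equations.

    x(t) = x0 + v0*cos(theta)*t
    y(t) = y0 + v0*sin(theta)*t + (1/2)*g*t^2   (theta in radians)
    """
    x = x0 + v0 * math.cos(theta) * t
    y = y0 + v0 * math.sin(theta) * t + 0.5 * G * t ** 2
    return x, y

# Illustrative throw: released 2 m above the floor at 7 m/s, 45 degrees.
x, y = position(1.0, 0.0, 2.0, 7.0, math.radians(45))
print(x, y)  # position one second after release
```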
The derivatives of x(t) and y(t) with respect to time t are:\(\frac { dx }{ dt } ={ v }_{ o }\cos(\theta )\) \(\frac { dy }{ dt } ={ v }_{ o }\sin(\theta )-9.81t\) Now, the travel distance of the basketball can be found using the arc length equation:\(L=\int _{ \alpha }^{ \beta }{ \sqrt { { (\frac { dx }{ dt } ) }^{ 2 }+({ \frac { dy }{ dt } ) }^{ 2 } } } dt,\quad \alpha \le t\le \beta\) Now, by inserting the derivatives of x(t) and y(t) in the arc length equation:\(L=\int _{ \alpha }^{ \beta }{ \sqrt { { ({ v }_{ o }\cos(\theta )) }^{ 2 }+({ { v }_{ o }\sin(\theta )-9.81t) }^{ 2 } } }dt\) The integrand can be expanded using \({ (a-b) }^{ 2 }={ a }^{ 2 }-2ab+{ b }^{ 2 }\):\(L=\int _{ \alpha }^{ \beta }{ \sqrt { { { { v }_{ o } }^{ 2 }{ \cos }^{ 2 }(\theta ) }+{ { { v }_{ o } }^{ 2 }{ \sin }^{ 2 }(\theta )-19.62\, t\, { v }_{ o }\sin(\theta )+96.24{ t }^{ 2 } } } } dt\) Using \({ \sin }^{ 2 }(\theta )+{ \cos }^{ 2 }(\theta )=1\), this simplifies to:\(L=\int _{ \alpha }^{ \beta }{ \sqrt { { { { v }_{ o } }^{ 2 }{ -19.62\, t\, { v }_{ o }\sin }(\theta ) }+96.24{ { t }^{ 2 } } } } dt\) An example: if the average velocity of a basketball throw is 2.24 m/s, the angle of release is 45 degrees, and the time t required for the ball to travel is about 2 seconds, then the arc length is:\(L=\int _{ 0 }^{ 2 }{ \sqrt { { { 2.24 } }^{ 2 }{ -19.62\, t\cdot 2.24\sin }(45^{\circ })+96.24{ t }^{ 2 } } dt } = 17.34\,\mathrm{m}\) The above figure shows different angles and entry points of a basketball into a basketball hoop. The diameter of the hoop ring is 18 inches. As the basketball is smaller than the hoop ring, there is always a constant hoop margin. Hoop margin is the amount of space left in the hoop ring after the basketball enters it. Free throws, jump shots and three-pointers enter at an angle that gives an oval entrance to the hoop. This changes the given hoop margin. Apparent hoop size is the apparent opening of the hoop to the ball.
So, the flatter the arc of the throw, the smaller the apparent ellipse of the hoop ring. The apparent hoop margin is the apparent hoop size minus the basketball's diameter. A basketball can be thrown in different ways and at different angles, so the apparent hoop margin varies with each shot. Finding the velocity required for the basketball to enter the basket One can also find the velocity required for a basketball shot given the height of the player's throw and the distance from the hoop. This is the equation for a player to shoot the basketball in order to make it enter the basket perfectly. [1] where \({ h }_{ 0 }\) is the height from which the ball is thrown, \(\alpha\) is the angle at which the ball is thrown, \({ v }_{ 0 }\) is the speed at which the ball is thrown, and \(x\) is the distance the ball travels. The formula for the range of a basketball trajectory is\(Range=\frac { { v }_{ 0 }^{ 2 }\sin(2\alpha ) }{ 32 }\) (with g = 32 ft/s², so distances in feet). Once we know the range and the angle of throw \(\alpha\), we can calculate the velocity required for the throw using the above formulae. [2] Application of calculus in sports does not end with running, baseball and basketball. You can apply calculus to any physical sport to optimize performance. Share your thoughts in the comment section below. Footnotes 1. Gablonsky, J. and Lang, A. (2005). Modeling Basketball Free Throws. Society for Industrial and Applied Mathematics, [online] 47(4), pp. 775–798. 2. Barzykina, I. (2017). The physics of an optimal basketball free throw.
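The 17.34 m arc-length result in the example earlier in this article can be double-checked numerically. This sketch integrates the same integrand with composite Simpson's rule, using only the standard library:

```python
import math

def speed(t, v0=2.24, theta=math.radians(45), g=9.81):
    """Arc-length integrand sqrt((dx/dt)^2 + (dy/dt)^2) for the example throw."""
    dx = v0 * math.cos(theta)          # horizontal velocity component
    dy = v0 * math.sin(theta) - g * t  # vertical velocity component
    return math.sqrt(dx * dx + dy * dy)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

L = simpson(speed, 0.0, 2.0)
print(L)  # ~17.34, matching the article's value in meters
```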
User:Caliburn Jump to navigation Jump to search b. 04/03/2001 from Kent. Undergraduate student at the University of Warwick. (Maths and Statistics) I prefer to be called by my real name, George, as opposed to Caliburn or any variation. To do Probability Distributions Properties Let $\theta \sim \ContinuousUniform {-\frac \pi 2} {\frac \pi 2}$. Let $X$ be the $x$-intercept of the line through $\tuple {x_0, \gamma}$ that makes an angle of $\theta$ with the vertical line $x = x_0$. Then $X \sim \Cauchy {x_0} {\gamma}$. Let $T$ be the time period between consecutive events where these events happen at an average rate of $\lambda$. Then $T \sim \Exponential \beta$ with $\beta = 1/\lambda$. Medians Modes Multivariate stuff Inference Point estimation Interval estimation Hypothesis testing Regression Calculus Laplace Transforms Fourier Transforms Multiple Integrals Analysis Analytic continuation for ${}_2 F_1$
Let $X$ denote the time of death (or time of failure if you prefer a less morbid description). Suppose that $X$ is a continuous random variable whose density function $f(t)$ is nonzero only on $(0,\infty)$. Now, notice that it must be the case that $f(t)$ decays away to $0$ as $t \to \infty$, because if $f(t)$ does not decay away as stated, then $\displaystyle \int_{-\infty}^\infty f(t)\,\mathrm dt = 1$ cannot hold. Thus, your notion that $f(T)$ is the probability of death at time $T$ (actually, it is $f(T)\Delta t$ that is (approximately) the probability of death in the short interval $(T, T+\Delta t]$ of length $\Delta t$) leads to implausible and unbelievable conclusions such as You are more likely to die within the next month when you are thirty years old than when you are ninety-eight years old. whenever $f(t)$ is such that $f(30) > f(98)$. The reason why $f(T)$ (or $f(T)\Delta t$) is the "wrong" probability to look at is that the value of $f(T)$ is of interest only to those who are alive at age $T$ (and still mentally alert enough to read stats.SE on a regular basis!) What ought to be looked at is the probability of a $T$-year old dying within the next month, that is, \begin{align}P\{X \in (T, T+\Delta t] \mid X \geq T\} &= \frac{P\{\left(X \in (T, T+\Delta t]\right) \cap \left(X\geq T\right)\}}{P\{X\geq T\}} & \scriptstyle{\text{definition of conditional probability}}\\ &= \frac{P\{X \in (T, T+\Delta t]\}}{P\{X\geq T\}}\\ &= \frac{f(T)\Delta t}{1-F(T)} & \scriptstyle{\text{because }X\text{ is a continuous rv}} \end{align} Choosing $\Delta t$ to be a fortnight, a week, a day, an hour, a minute, etc.
we come to the conclusion that the (instantaneous) hazard rate for a $T$-year old is $$h(T) = \frac{f(T)}{1-F(T)}$$ in the sense that the approximate probability of death in the next femtosecond $(\Delta t)$ of a $T$-year old is $\displaystyle\frac{f(T)\Delta t}{1-F(T)}.$ Note that in contrast to the density $f(t)$ integrating to $1$, the integral $\displaystyle \int_0^\infty h(t)\, \mathrm dt$ must diverge. This is because the CDF $F(t)$ is related to the hazard rate through $$F(t) = 1 - \exp\left(-\int_0^t h(\tau)\, \mathrm d\tau\right)$$ and since $\lim_{t\to \infty}F(t) = 1$, it must be that $$\lim_{t\to \infty} \int_0^t h(\tau)\, \mathrm d\tau = \infty,$$ or stated more formally, the integral of the hazard rate must diverge: there is no potential divergence as a previous edit claimed. Typical hazard rates are increasing functions of time, but constant hazard rates (exponential lifetimes) are possible. Both of these kinds of hazard rates obviously have divergent integrals. A less common scenario (for those who believe that things improve with age, like fine wine does) is a hazard rate that decreases with time, but slowly enough that the integral diverges.
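As a concrete illustration of these formulas, the exponential lifetime is the textbook case of a constant hazard rate (the rate value below is arbitrary):

```python
import math

lam = 0.03  # rate parameter of an exponential lifetime (illustrative value)

def f(t):
    """Density of the exponential distribution: lam * exp(-lam * t)."""
    return lam * math.exp(-lam * t)

def F(t):
    """CDF: 1 - exp(-lam * t)."""
    return 1.0 - math.exp(-lam * t)

def h(t):
    """Hazard rate h(t) = f(t) / (1 - F(t))."""
    return f(t) / (1.0 - F(t))

# The exponential has constant hazard: h(t) == lam for every t, so the
# integral of h over (0, T) is lam * T, which indeed diverges as T grows.
print(h(0.0), h(10.0), h(100.0))
```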
Faddeeva Package From AbInitio Revision as of 22:54, 29 October 2012 Contents Faddeeva / complex error function Steven G. Johnson has written free/open-source C++ code (with wrappers for other languages) to compute the scaled complex error function w(z) = e^(−z²) erfc(−iz), also called the Faddeeva function (and also the plasma dispersion function), for arbitrary complex arguments z to a given accuracy. Given the Faddeeva function, one can easily compute Voigt functions, the Dawson function, and similar related functions.
Download the source code from: http://ab-initio.mit.edu/Faddeeva_w.cc (updated 29 October 2012) Usage To use the code, add the following declaration to your C++ source (or header file): #include <complex> extern std::complex<double> Faddeeva_w(std::complex<double> z, double relerr=0); The function Faddeeva_w(z, relerr) computes w(z) to a desired relative error relerr. Omitting the relerr argument, or passing relerr=0 (or any relerr less than machine precision ε≈10^-16), corresponds to requesting machine precision, and in practice a relative error < 10^-13 is usually achieved. Specifying a larger value of relerr may improve performance (at the expense of accuracy). You should also compile Faddeeva_w.cc and link it with your program, of course. In terms of w(z), some other important functions are: erfcx(z) = e^(z²) erfc(z) = w(iz) (scaled complementary error function) erfc(z) = e^(−z²) w(iz) (complementary error function) erf(z) = 1 − e^(−z²) w(iz) (error function) erfi(x) = −i erf(ix) = −i[e^(x²) w(x) − 1] (imaginary error function) F(x) = (i√π/2)[e^(−x²) − w(x)] (Dawson function) Note that in the case of erf and erfc, we provide different equations for positive and negative x, in order to avoid numerical problems arising from multiplying exponentially large and small quantities. Wrappers: Matlab, GNU Octave, and Python Wrappers are available for this function in other languages. Matlab (also available here): A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_mex.cc (along with the help file Faddeeva_w.m). Compile it into a MEX file with: mex -output Faddeeva_w -O Faddeeva_w_mex.cc Faddeeva_w.cc GNU Octave: A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_oct.cc. Compile it into an Octave plugin with: mkoctfile -DMPICH_SKIP_MPICXX=1 -DOMPI_SKIP_MPICXX=1 -s -o Faddeeva_w.oct Faddeeva_w_oct.cc Faddeeva_w.cc Python: Our code is used to provide scipy.special.wofz in SciPy starting in version 0.12.0 (see here).
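As a quick sanity check of the function these wrappers expose: on the imaginary axis the Faddeeva function reduces to w(iy) = e^(y²) erfc(y), which is real and computable with the Python standard library alone (for general complex z you would call the C++ routine above, or scipy.special.wofz, which the text says wraps this code):

```python
import math

def w_imag(y):
    """Faddeeva function on the imaginary axis: w(iy) = exp(y^2) * erfc(y).

    Special case of w(z) = exp(-z^2) * erfc(-iz); for z = iy the value is
    real, so math.erfc suffices.  No claim is made about accuracy for huge y,
    where exp(y^2) overflows; that is exactly the regime the scaled erfcx
    routine in the package is designed to handle.
    """
    return math.exp(y * y) * math.erfc(y)

# Known values: w(0) = 1, and w(i) = e * erfc(1) ≈ 0.42758
print(w_imag(0.0), w_imag(1.0))
```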
Algorithm This implementation uses a combination of different algorithms. For sufficiently large |z|, we use a continued-fraction expansion for w(z) similar to those described in Walter Gautschi, "Efficient computation of the complex error function," SIAM J. Numer. Anal. 7(1), pp. 187–198 (1970). G. P. M. Poppe and C. M. J. Wijers, "More efficient computation of the complex error function," ACM Trans. Math. Soft. 16(1), pp. 38–46 (1990); this is TOMS Algorithm 680. Unlike those papers, however, we switch to a completely different algorithm for smaller |z|: Mofreh R. Zaghloul and Ahmed N. Ali, "Algorithm 916: Computing the Faddeyeva and Voigt Functions," ACM Trans. Math. Soft. 38(2), 15 (2011). Preprint available at arXiv:1106.0151. (I initially used this algorithm for all z, but the continued-fraction expansion turned out to be faster for larger |z|. On the other hand, Algorithm 916 is competitive or faster for smaller |z|, and appears to be significantly more accurate than the Poppe & Wijers code in some regions, e.g. in the vicinity of |z|=1 [although comparison with other compilers suggests that this may be a problem specific to gfortran]. Algorithm 916 also has better relative accuracy in Re[z] for some regions near the real-z axis. You can switch back to using Algorithm 916 for all z by changing USE_CONTINUED_FRACTION to 0 in the code.) Note that this is SGJ's independent re-implementation of these algorithms, based on the descriptions in the papers only. In particular, we did not refer to the authors' Fortran or Matlab implementations (respectively), which are under restrictive "semifree" ACM copyright terms and are therefore unusable in free/open-source software. Algorithm 916 requires an external complementary error function erfc(x) for real arguments x to be supplied as a subroutine. More precisely, it requires the scaled function erfcx(x) = e^(x²) erfc(x).
Here, we use an erfcx routine written by SGJ that uses a combination of two algorithms: a continued-fraction expansion for large x and a lookup table of Chebyshev polynomials for small x. (I initially used an erfcx function derived from the DERFC routine in SLATEC, modified by SGJ to compute erfcx instead of erfc, but the new erfcx routine is much faster.) Test program To test the code, a small test program is included at the end of Faddeeva_w.cc which tests w(z) against several known results (from Wolfram Alpha) and prints the relative errors obtained. To compile the test program, #define FADDEEVA_W_TEST in the file (or compile with -DFADDEEVA_W_TEST on Unix) and compile Faddeeva_w.cc. The resulting program prints SUCCESS at the end of its output if the errors were acceptable. License The software is distributed under the "MIT License", a simple permissive free/open-source license: Copyright © 2012 Massachusetts Institute of Technology Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Functiones et Approximatio Commentarii Mathematici Funct. Approx. Comment. Math. Volume 41, Number 1 (2009), 55-70. Congruences between modular forms and related modules Abstract Fix a prime $l$ and let $M$ be an integer such that $l \nmid M$. Let $f\in S_2(\Gamma_1(M l^2))$ be a newform which is supercuspidal at $l$ of a fixed type related to the nebentypus and special at a finite set of primes. Let $\mathbf{T}^\psi$ be the local quaternionic Hecke algebra associated to $f$. The algebra $\mathbf{T}^\psi$ acts on a module $\mathcal M^\psi_f$ coming from the cohomology of a Shimura curve. It follows from the Taylor-Wiles criterion and a recent theorem of Savitt that $\mathbf{T}^\psi$ is the universal deformation ring of a global Galois deformation problem associated to $\bar\rho_f$. Moreover, $\mathcal M^\psi_f$ is free of rank 2 over $\mathbf{T}^\psi$. If $f$ occurs at minimal level, we prove a result about congruences of ideals and we obtain a level-raising result. The extension of these results to the non-minimal case is still an open problem. Article information Source Funct. Approx. Comment. Math., Volume 41, Number 1 (2009), 55-70. Dates First available in Project Euclid: 30 September 2009 Permanent link to this document https://projecteuclid.org/euclid.facm/1254330159 Digital Object Identifier doi:10.7169/facm/1254330159 Mathematical Reviews number (MathSciNet) MR2568796 Zentralblatt MATH identifier 1189.11025 Subjects Primary: 11F80: Galois representations Citation Ciavarella, Miriam. Congruences between modular forms and related modules. Funct. Approx. Comment. Math. 41 (2009), no. 1, 55--70. doi:10.7169/facm/1254330159. https://projecteuclid.org/euclid.facm/1254330159
Definition:Upper Closure/Element Definition Let $\left({S, \preccurlyeq}\right)$ be an ordered set. Let $a \in S$. The upper closure of $a$ (in $S$) is defined as: $a^\succcurlyeq := \left\{{b \in S: a \preccurlyeq b}\right\}$ Also known as The upper closure of an element $a$ is also known as the up-set of $a$. The terms weak upper closure and weak up-set are also encountered, so as explicitly to distinguish this from the strict upper closure of $a$. $a^\preccurlyeq := \left\{{b \in S: b \preccurlyeq a}\right\}$: the lower closure of $a \in S$: everything in $S$ that precedes $a$ $a^\succcurlyeq := \left\{{b \in S: a \preccurlyeq b}\right\}$: the upper closure of $a \in S$: everything in $S$ that succeeds $a$ $a^\prec := \left\{{b \in S: b \preccurlyeq a \land a \ne b}\right\}$: the strict lower closure of $a \in S$: everything in $S$ that strictly precedes $a$ $a^\succ := \left\{{b \in S: a \preccurlyeq b \land a \ne b}\right\}$: the strict upper closure of $a \in S$: everything in $S$ that strictly succeeds $a$. $\displaystyle T^\preccurlyeq := \bigcup \left\{{t^\preccurlyeq: t \in T}\right\}$: the lower closure of $T \subseteq S$: everything in $S$ that precedes some element of $T$ $\displaystyle T^\succcurlyeq := \bigcup \left\{{t^\succcurlyeq: t \in T}\right\}$: the upper closure of $T \subseteq S$: everything in $S$ that succeeds some element of $T$ $\displaystyle T^\prec := \bigcup \left\{{t^\prec: t \in T}\right\}$: the strict lower closure of $T \subseteq S$: everything in $S$ that strictly precedes some element of $T$ $\displaystyle T^\succ := \bigcup \left\{{t^\succ: t \in T}\right\}$: the strict upper closure of $T \subseteq S$: everything in $S$ that strictly succeeds some element of $T$.
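For a finite ordered set these definitions translate directly into code. A sketch in Python, using the divisibility ordering on $S = \{1, \ldots, 12\}$ as an illustrative example:

```python
# Upper closures of an element in a finite ordered set, illustrated with
# the divisibility order on S = {1, ..., 12}.
S = set(range(1, 13))

def preceq(a, b):
    """a ≼ b iff a divides b (the divisibility ordering)."""
    return b % a == 0

def upper_closure(a):
    """a^≽ = {b ∈ S : a ≼ b}: everything in S that succeeds a."""
    return {b for b in S if preceq(a, b)}

def strict_upper_closure(a):
    """a^≻ = {b ∈ S : a ≼ b and a ≠ b}, i.e. a^≽ with a itself removed."""
    return upper_closure(a) - {a}

print(upper_closure(3))         # multiples of 3 in S, including 3 itself
print(strict_upper_closure(3))  # the same set without 3
```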
The astute reader may point out that, for example, $a^\preccurlyeq$ is ambiguous as to whether it means: The lower closure of $a$ with respect to $\preccurlyeq$ The upper closure of $a$ with respect to the dual ordering $\succcurlyeq$ By Lower Closure is Dual to Upper Closure and Strict Lower Closure is Dual to Strict Upper Closure, the two are seen to be equal. Also denoted as Other notations for closure operators include: ${\downarrow} a, {\bar \downarrow} a$ for lower closure of $a \in S$ ${\uparrow} a, {\bar \uparrow} a$ for upper closure of $a \in S$ ${\downarrow} a, {\dot \downarrow} a$ for strict lower closure of $a \in S$ ${\uparrow} a, {\dot \uparrow} a$ for strict upper closure of $a \in S$ However, as there is considerable inconsistency in the literature as to exactly which of these arrow notations is being used at any one time, its use is not endorsed on $\mathsf{Pr} \infty \mathsf{fWiki}$.
I've earned it! https://confirm.udacity.com/TLVUZQTR # Import packages import pandas as pd import scipy.stats as stats %matplotlib inline # Read in the data data = pd.read_csv('Customer Support Time Study.csv') # Set columns to lists to use in ttest function joe = data['Joey'].values.tolist() nat = data['Nathaly'].values.tolist() # Plot the means (optional) data.mean().plot(kind='bar') # Perform ttest stats.ttest_ind(nat, joe, equal_var=True) Further Research Git Internals – Plumbing and Porcelain (advanced – bookmark this and check it out later) Customizing Git – Git Hooks Git Init Recap Use the git init command to create a new, empty repository in the current directory. $ git init Running this command creates a hidden .git directory. This .git directory is the brain/storage center for the repository. It […] In this lesson, you will be extending your knowledge of simple linear regression, where you were predicting a quantitative response variable using a quantitative explanatory variable. That is, you were using an equation that looked like this: \(\hat{y} = b_0 + b_1 x_1\) In this lesson, you will learn about multiple linear regression. In these cases, […] Fitting Logistic Regression import numpy as np import pandas as pd import statsmodels.api as sm df = pd.read_csv('./fraud_dataset.csv') df.head() 1. As you can see, there are two columns that need to be changed to dummy variables. Replace each of the current columns with the dummy version. Use 1 for weekday and True, and 0 otherwise. Use the first […] In this lesson, you will: Identify Regression Applications Learn How Regression Works Apply Regression to Problems Using Python Machine Learning is frequently split into supervised and unsupervised learning. Regression, which you will be learning about in this lesson (and its extensions in later lessons), is an example of supervised machine learning.
In supervised machine learning, you are interested in predicting […] A/B tests are used to test changes on a web page by running an experiment where a control group sees the old version, while the experiment group sees the new version. A metric is then chosen to measure the level of engagement from users in each group. These results are then used to judge whether one version is more effective than […]

Rules for setting up null and alternative hypotheses:

$H_0$ is true before you collect any data.
$H_0$ usually states there is no effect or that two groups are equal.
$H_0$ and $H_1$ are competing, non-overlapping hypotheses.
$H_1$ is what we would like to prove to be true.
$H_0$ contains an equal sign of some kind – either $=$, $\leq$, or $\geq$.
$H_1$ contains the opposition […]

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

np.random.seed(42)
full_data = pd.read_csv('../data/coffee_dataset.csv')
sample_data = full_data.sample(200)

diffs = []
for _ in range(10000):
    bootsamp = sample_data.sample(200, replace=True)
    coff_mean = bootsamp[bootsamp['drinks_coffee'] == True]['height'].mean()
    nocoff_mean = bootsamp[bootsamp['drinks_coffee'] == False]['height'].mean()
    diffs.append(coff_mean - nocoff_mean)

np.percentile(diffs, 0.5), np.percentile(diffs, 99.5)  # statistical evidence […]

Descriptive Statistics

Descriptive statistics is about describing our collected data using the measures discussed throughout this lesson: measures of center, measures of spread, shape of our distribution, and outliers. We can also use plots of our data to gain a better understanding.

Inferential Statistics

Inferential statistics is about using our collected data to draw conclusions about a larger […]
Tony's already provided an approach for you to follow. I'd like to suggest another. I'll return to a short discussion about your question, though, later. You write:

I am trying to switch a SPST-NO relay that is rated to handle up to 227VAC. The coil is powered by 5V, has a 100 Ohm coil resistance, and its contacts are rated for 16A max.

I would have wanted to also consider the use of a mains-powered relay and the use of a MOC30x3 device (a MOC3063 if you want zero-crossing behavior, or a MOC3023 if not). These guarantee operation when provided with at least \$5\:\textrm{mA}\$. This provides opto-isolation, requires a driving current that is routinely available from typical I/O pins on a microcontroller, and powers the relay directly from the mains supply instead of your DC supply rail. And since the relay is AC mains powered and isolated from your DC rail, a simple connection without snubbers works well enough. Just to add still one more useful point, it can be driven directly from your \$3.3\:\textrm{V}\$ I/O pin and there's no particular need for a separate \$5\:\textrm{V}\$ rail. An OMRON G2R provides some mains-powered options and might be such a relay choice.

However, if you must use a separate \$5\:\textrm{V}\$ rail and a compatible relay, then you should operate the switching BJT in saturation (driven well past the active region). An early thing to consider is the size of the BJT. In this case, you need a collector current of \$I_C=\frac{5\:\textrm{V}}{100\:\Omega}=50\:\textrm{mA}\$. A saturated BJT will have \$V_{CE}\approx 200\:\textrm{mV}\$. So that means \$200\:\textrm{mV}\cdot 50\:\textrm{mA}\approx 10\:\textrm{mW}\$. But there's more. The base current isn't accounted for yet. This will be roughly 10% of the collector current (over-driving the BJT is how you get it into saturation), or about \$5\:\textrm{mA}\$. This will probably require about \$V_{BE}\approx 700\:\textrm{mV}\$.
So, another \$700\:\textrm{mV}\cdot 5\:\textrm{mA}\approx 4\:\textrm{mW}\$, for a total of \$14\:\textrm{mW}\$. This is easily within the capability of almost any package, so a small-signal BJT like the one you picked out will work just fine. Note here, by now, that you don't need a base current of more than about \$5\:\textrm{mA}\$. So, your base resistor needs to be only about \$\frac{3.3\:\textrm{V}-0.7\:\textrm{V}}{5\:\textrm{mA}}= 520\:\Omega\$. Because this is based on an over-driven 10% figure, and because you can rely on the fact that small-signal BJTs will saturate well before reaching that figure, it's just fine to relax the base resistor to the next standard value above that figure, or \$560\:\Omega\$. (It would probably work fine with a \$1\:\textrm{k}\Omega\$, but who's counting?) Tony's suggested circuit with the diode is just fine, by the way, and you should include a diode like that in order to give the relay coil a way to de-energize itself when turned off. The time required to de-energize will depend upon the voltage developed across the relay coil, however. And a simple diode presents only a small voltage across the coil, so the time will be longer than it might otherwise be. If time matters to you for reasons you didn't mention, you could consider including a series zener as well, in order to jack up the de-energizing voltage and thereby reduce the required time for that phase of operation. Note that both the AC-powered relay and the DC-powered option require about \$5\:\textrm{mA}\$ from your I/O pin. The AC-powered method is just an alternative approach to consider and it may expand your options (if not this time, then perhaps another time and another place.)
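The sizing arithmetic above is easy to script as a sanity check. A minimal sketch, using the values assumed in the answer (5 V coil supply, 100 Ω coil, 3.3 V logic, Vce(sat) ≈ 200 mV, Vbe ≈ 700 mV); the forced beta of 10 is the over-drive rule of thumb from the answer, not a datasheet figure:

```python
# Back-of-envelope sizing for the saturated-BJT relay driver discussed above.

V_COIL = 5.0        # relay coil supply (V)
R_COIL = 100.0      # coil resistance (ohms)
V_IO = 3.3          # microcontroller I/O voltage (V)
V_CE_SAT = 0.2      # saturated collector-emitter drop (V)
V_BE = 0.7          # base-emitter drop (V)
FORCED_BETA = 10    # overdrive factor for hard saturation (rule of thumb)

i_c = V_COIL / R_COIL                    # collector current: 50 mA
i_b = i_c / FORCED_BETA                  # base current: 5 mA
p_ce = V_CE_SAT * i_c                    # collector dissipation: 10 mW
p_be = V_BE * i_b                        # base dissipation: 3.5 mW
r_base = (V_IO - V_BE) / i_b             # exact base resistor: 520 ohms

print(f"Ic = {i_c*1e3:.0f} mA, Ib = {i_b*1e3:.0f} mA")
print(f"P  = {(p_ce + p_be)*1e3:.1f} mW total")
print(f"Rb = {r_base:.0f} ohms -> relax to the next E12 value, 560 ohms")
```

The total comes out at 13.5 mW, which the answer rounds to about 14 mW — comfortably inside any small-signal package.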
Now let's look at a mathematical approach to resource theories. As I've mentioned, resource theories let us tackle questions like these: Our first approach will only tackle question 1. Given \(y\), we will only ask: is it possible to get \(x\)? This is a yes-or-no question, unlike questions 2-4, which are more complicated. If the answer is yes we will write \(x \le y\). So, for now our resources will form a "preorder", as defined in Lecture 3.

Definition. A preorder is a set \(X\) equipped with a relation \(\le\) obeying:

reflexivity: \(x \le x\) for all \(x \in X\);
transitivity: \(x \le y\) and \(y \le z\) imply \(x \le z\) for all \(x,y,z \in X\).

All this makes sense. Given \(x\) you can get \(x\). And if you can get \(x\) from \(y\) and get \(y\) from \(z\), then you can get \(x\) from \(z\). What's new is that we can also combine resources. In chemistry we denote this with a plus sign: if we have a molecule of \(\text{H}_2\text{O}\) and a molecule of \(\text{CO}_2\) we say we have \(\text{H}_2\text{O} + \text{CO}_2\). We can use almost any symbol we want; Fong and Spivak use \(\otimes\) so I'll often use that. We pronounce this symbol "tensor". Don't worry about why: it's a long story, but you can live a long and happy life without knowing it. It turns out that when you have a way to combine things, you also want a special thing that acts like "nothing". When you combine \(x\) with nothing, you get \(x\). We'll call this special thing \(I\).

Definition. A monoid is a set \(X\) equipped with a binary operation \(\otimes : X \times X \to X\) and an element \(I \in X\) such that these laws hold:

the associative law: \( (x \otimes y) \otimes z = x \otimes (y \otimes z) \) for all \(x,y,z \in X\);
the left and right unit laws: \(I \otimes x = x = x \otimes I\) for all \(x \in X\).

You know lots of monoids. In mathematics, monoids rule the world! I could talk about them endlessly, but today we need to combine monoids and preorders:

Definition.
A monoidal preorder is a set \(X\) with a relation \(\le\) making it into a preorder, together with an operation \(\otimes : X \times X \to X\) and an element \(I \in X\) making it into a monoid, obeying: $$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$ This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg and a slice of bread into a fried egg and a piece of toast! You know lots of monoidal preorders, too! Many of your favorite number systems are monoidal preorders: The set \(\mathbb{R}\) of real numbers with the usual \(\le\), the binary operation \(+: \mathbb{R} \times \mathbb{R} \to \mathbb{R} \) and the element \(0 \in \mathbb{R}\) is a monoidal preorder. Same for the set \(\mathbb{Q}\) of rational numbers. Same for the set \(\mathbb{Z}\) of integers. Same for the set \(\mathbb{N}\) of natural numbers. Money is an important resource: outside of mathematics, money rules the world. We combine money by addition, and we often use these different number systems to keep track of money. In fact it was bankers who invented negative numbers, to keep track of debts! The idea of a "negative resource" was very radical: it took mathematicians over a century to get used to it. But sometimes we combine numbers by multiplication. Can we get monoidal preorders this way? Puzzle 60. Is the set \(\mathbb{N}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\) and the element \(1 \in \mathbb{N}\) a monoidal preorder? Puzzle 61. Is the set \(\mathbb{R}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}\) and the element \(1 \in \mathbb{R}\) a monoidal preorder? Puzzle 62. One of the questions above has the answer "no". What's the least destructive way to "fix" this example and get a monoidal preorder? Puzzle 63. Find more examples of monoidal preorders. Puzzle 64.
Are there monoids that cannot be given a relation \(\le\) making them into monoidal preorders? Puzzle 65. A monoidal poset is a monoidal preorder that is also a poset, meaning $$ x \le y \textrm{ and } y \le x \textrm{ imply } x = y $$ for all \(x ,y \in X\). Are there monoids that cannot be given any relation \(\le\) making them into monoidal posets? Puzzle 66. Are there posets that cannot be given any operation \(\otimes\) and element \(I\) making them into monoidal posets?
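For Puzzles 60 and 61, a brute-force check of the compatibility condition can at least hint at the answer. This is only a sketch on finite samples — the helper `monotone` and the sample sets are my own, not from the lecture:

```python
import itertools

def monotone(sample, op):
    """Check x <= x' and y <= y' implies op(x, y) <= op(x', y') on a sample."""
    for x, xp, y, yp in itertools.product(sample, repeat=4):
        if x <= xp and y <= yp and not op(x, y) <= op(xp, yp):
            return (x, xp, y, yp)  # a counterexample
    return None  # no counterexample found in the sample

mul = lambda a, b: a * b

# Puzzle 60: (N, <=, *, 1) — no counterexample on a sample of naturals.
print(monotone(range(0, 6), mul))

# Puzzle 61: (R, <=, *, 1) — fails once negatives enter, e.g.
# -2 <= 1 and -2 <= 1, but (-2)*(-2) = 4 while 1*1 = 1.
print(monotone([-2, -1, 0, 1, 2], mul))
```

A passing sample check is of course no proof, but the counterexample found in the second call settles Puzzle 61 in the negative.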
In the basic Ramsey model with technological growth, assuming that the economy is in a steady state, how would a sudden decrease in the population growth rate $n$ impact the steady-state values of consumption and capital? I obtain the following dynamics for the model: $\dot{k} = f(k) - c - (\delta + n + g)k$ $\frac{\dot{c}}{c} = \frac{1}{\theta}[f'(k) - \delta - \rho -\theta g] $ where $\delta$ is the depreciation rate, $g$ the rate of technical progress, $\theta$ the risk aversion factor, $\rho$ the discount factor, $k$ the capital per effective worker, $f(k)$ the production per effective worker, $c$ the consumption per effective worker, and $f(k) = k^\alpha$. The steady-state values ($\dot{k}=0, \dot{c}=0$): $\hat{c} = \hat{k}^\alpha - (\delta + n + g)\hat{k}$ $\hat{k} = \left( \frac{\alpha}{\delta + \rho + \theta g} \right)^{\frac{1}{1-\alpha}}$ (from $f'(\hat{k}) = \alpha \hat{k}^{\alpha-1} = \delta + \rho + \theta g$). I believe that there would be a sudden increase in the steady-state consumption that would persist in the long run and no change in the capital. Is it right?
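A quick numerical check of the comparative statics: from the Euler equation, $\hat{k}$ solves $\alpha \hat{k}^{\alpha-1} = \delta + \rho + \theta g$ and so does not depend on $n$, while $\hat{c} = \hat{k}^\alpha - (\delta + n + g)\hat{k}$ rises when $n$ falls. The parameter values below are illustrative, not from the question:

```python
# Steady state of the Ramsey model per effective worker, f(k) = k**alpha.
alpha, delta, rho, theta, g = 0.3, 0.05, 0.03, 2.0, 0.02

def steady_state(n):
    # k_hat from f'(k) = alpha*k**(alpha-1) = delta + rho + theta*g
    k = (alpha / (delta + rho + theta * g)) ** (1 / (1 - alpha))
    # c_hat from k_dot = 0
    c = k ** alpha - (delta + n + g) * k
    return k, c

k_hi, c_hi = steady_state(n=0.02)   # before the shock
k_lo, c_lo = steady_state(n=0.01)   # after the fall in n

print(k_hi == k_lo)   # k_hat is independent of n
print(c_lo > c_hi)    # consumption per effective worker is higher
```

This is consistent with your conjecture: capital per effective worker is unchanged and steady-state consumption per effective worker jumps up by $(\Delta n)\hat{k}$.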
I'm going through the phase estimation algorithm, and wanted to sanity-check my calculations by making sure the state I'd calculated was still normalized. It is, assuming the square of the absolute value of the eigenvalue of the arbitrary unitary operator I'm analyzing equals 1. So, does it? (Assume that the corresponding eigenvector is normalized.)

Good question. The answer turns out to be: yes. You don't even need the vector to be normalized. Watch: Start with the definition of eigenvalues and eigenvectors: $$ \begin{align} U|\psi\rangle &= \lambda |\psi\rangle\\ \end{align} $$ Conjugate and transpose both sides of the equation: $$ \begin{align} \langle\psi|U^\dagger &= \langle \psi| \lambda^*. \end{align} $$ Left-multiply each side of line 1 by the corresponding side of line 2: $$ \begin{align} \langle \psi|U^\dagger\cdot U|\psi \rangle &= \langle \psi | \lambda^* \lambda |\psi\rangle \\ \langle \psi |\psi \rangle &= |\lambda |^2 \langle \psi |\psi \rangle \\ c &= |\lambda|^2 c\\ 1 &= |\lambda|^2 \end{align} $$ If $|\psi\rangle $ is normalized, it just means that $c=1$, which makes no difference in this proof because the $c$ appears on both sides of the equation and can be divided out.

@user1271772's answer is excellent, and absolutely the right answer. I just wanted to add some additional perspective, given recent questions regarding Hamiltonians. Many physicists start from the Hamiltonian as the underlying thing that determines evolution, with unitaries derived as a consequence. They start from the Schrödinger equation, $$ i\frac{d|\psi\rangle}{dt}=H|\psi\rangle. $$ For a time-invariant Hamiltonian, the solution is $$ |\psi(t)\rangle=e^{-iHt}|\psi(0)\rangle, $$ where $e^{-iHt}$ is unitary because $e^{-iHt}e^{iHt}=\mathbb{I}$. Just stating this solution actually skips over the thing I really want to focus on.
We could expand a generic $|\psi\rangle=\sum_ia_i|i\rangle$, so the Schrödinger equation becomes a series of simultaneous differential equations for the $a_i$: $$ i\frac{da_i}{dt}=\sum_jH_{ij}a_j. $$ Now consider what happens with an eigenvector $|\lambda\rangle=\sum_ib_i|i\rangle$ of $H$. Since $H$ is Hermitian, conjugating the eigenvector equation $\sum_j H_{ij}b_j = \lambda b_i$ gives $$ \sum_{i}b_i^\star H_{ij}=\lambda b_j^\star. $$ We can take linear combinations of the $a_i$: $$ i\frac{d\sum_ib_i^\star a_i}{dt}=\sum_{ij}b_i^\star H_{ij}a_j=\lambda\sum_jb_j^\star a_j. $$ Hence, we see that the component $x=\sum_jb_j^\star a_j$ simply satisfies $$ i\frac{dx}{dt}=\lambda x, $$ so $x(t)=e^{-i\lambda t}x(0)$. In other words, a state initially created as an eigenvector $|\lambda\rangle$ stays in that state and just acquires a phase over time: $e^{-i\lambda t}|\lambda\rangle$. Hence, eigenvectors of $H$ are also eigenvectors of the unitary $U$, with eigenvalues $e^{-i\lambda t}$, and these have modulus 1.
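Both claims are easy to check numerically. A small NumPy sketch (my own, not from the answers): build $U = e^{-iH}$ from the eigendecomposition of a random Hermitian $H$, then verify that every eigenvalue of $U$ has modulus 1 and that an eigenvector of $H$ is an eigenvector of $U$ with eigenvalue $e^{-i\lambda}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian H, then U = exp(-iH) via its eigendecomposition
# (this avoids needing scipy.linalg.expm).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals)) @ evecs.conj().T

# Every eigenvalue of the unitary U has modulus 1 ...
lam = np.linalg.eigvals(U)
print(np.allclose(np.abs(lam), 1.0))

# ... and each eigenvector of H is an eigenvector of U
# with eigenvalue e^{-i*lambda}.
v = evecs[:, 0]
print(np.allclose(U @ v, np.exp(-1j * evals[0]) * v))
```

Both checks print True up to floating-point tolerance, matching the algebraic argument above.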
The implication here is that each individual discoverer must start from nothing but a bag of crying cells, and build up knowledge in a linear order before making a discovery in a vacuum. In reality, I find, we have an entire interwoven society trying to make the discoveries, not independent individuals. There is an entire section of society dedicated to distilling the human essence into teaching. There is an entire section devoted to building infrastructure to make it easier to step beyond. There is an entire section devoted to getting discoverers together, so that they don't ALL have to learn ALL of the knowledge; they merely need to have all of the knowledge when they put their minds together. Consider that the trade knowledge needed to run a particle accelerator is just as essential to discovery as the quantum physics models used to point the accelerator in new and exciting directions. The physicists probably don't know how to correctly shim the hundreds of segments of the accelerator into a perfect shape (and don't have the time to learn). The physicists probably haven't spent enough time with high voltage to wire up thousands of electromagnets without a short taking the entire accelerator down. This knowledge, held in the minds of the tradesmen who support the physicists, is equally essential, but the physicists never had to learn it; these skills were learned in parallel by all of humanity. The only thing I have found which can leave us with no time to discover is society itself. If society dulls, and our lives suddenly require an entire lifetime of learning just to survive, that could be the cusp where humanity simply cannot learn any further. However, even then there is a light at the end of the tunnel. The poets have a long list of skills like "how to love" which take a lifetime to learn, and yet we keep working on them day after day. Perhaps one day, discovery will simply take the form of loving the universe and seeing what it wishes to tell us today.
Oh fine! Let's see some math. Let's try to put some mathematical equations down to make sure we're all on the same page. I'll use them to show how a rather boring society resembling the Vulcans could go about never-ending discovery. First off, I am going to assume there is a never-ending supply of things to discover in the universe. If there is a finite number of things to discover, then it is trivial to show that the number of discoveries humankind can make is finite. Let us define the universe of potential discoveries to be $\mathbb{D}$. I am going to assume the only things in our brain that matter in the long run are structures. These are structures you have to learn over time in order to effectively do a task, such as discovering a new direction. I believe there is more to the brain, but I think this is close enough to model your question of learning and technology. Let us define $\mathbb{S}$ to be the set of all helpful structures that the human brain can possibly organize into, and let $\text{Fits}(S), S\in \mathbb{S}$, be a predicate that returns true if the set of structures $S$ would fit into a single human brain, and false otherwise. Because entering the world with new structures would make it trivial to prove we can keep discovering, we can assume the $S$ of a newborn is $\emptyset$. Now we need a notation for learning. I will assume, for simplicity, that people learn at a constant rate through their entire lives. I leave it to the reader to show that handling the case where the learning rate is variable is a trivial transform from this simpler case. Because I am arguing that we will never run out of things to learn, I can assume the worst case of "you can only learn one thing at a time" without loss of generality. Consider the universe of learning activities, $\mathbb{L}$.
For any learning activity $l \in \mathbb{L}$, we can define a function $\text{cost}_{\text{learn}}(l, S)$ which gives the cost (in time) of doing learning activity $l$ given that you already have all of the structures $S$ in your head. Let $\text{results}_{\text{learn}}(l, S)$ be a function which returns the set of structures in your brain after doing a learning activity. Finally, we need a notation for discovery: $\text{cost}_{\text{discover}}(d, S)$ is the cost of discovering a particular element of $\mathbb{D}$. Now we can define the goals. Let us define $\text{cost}_{\text{schooling}}(L)$ and $\text{results}_{\text{schooling}}(L)$, where $L$ is an ordered set of learning activities, to be the cost and results of raising an individual up from $S = \emptyset$ through a sequence of learning activities. Thus $\text{cost}_{\text{schooling}}$ will be the sum of $\text{cost}_{\text{learn}}$, and $\text{results}_{\text{schooling}}$ will be the final result at the end of iterating $\text{results}_{\text{learn}}$. Our goal is to prove that there can always be an $L$ and $d$ with $\text{cost}_{\text{schooling}}(L) + \text{cost}_{\text{discover}}(d, \text{results}_{\text{schooling}}(L)) < \text{lifespan}$. Let us assign this a predicate: $\text{DiscoveryCapable}(L, D_{prev}) \Leftrightarrow \exists_{d\in\mathbb{D},L^\prime}[(\forall_{l\in L^\prime}\, l\in L)\land d\notin D_{prev}]$, which is a mouthful to say "A society is DiscoveryCapable if, for its set of known learning activities and previously made discoveries, there exists a discoverable thing." Let us also add $\text{Discoverable}(L, d) \Leftrightarrow \exists_{L^\prime}\, \text{cost}_{\text{schooling}}(L^\prime) + \text{cost}_{\text{discover}}(d, \text{results}_{\text{schooling}}(L^\prime)) < \text{lifespan}$, or "A discovery is discoverable if, given the known set of learning activities, someone can discover it in a lifetime."
Now here we will note that $\forall_{l\in\mathbb{L}}\, l \in \mathbb{D}$, or in words, every learning activity is something which can be discovered. This leads to a "Lotus Eaters" situation, where we could simply continuously develop new ways to learn without going anywhere, so let's fix that. Let's define $\text{Trivial}(l)$ to be true if $\forall_{S\in populace}\exists_{L_0} (\forall_s s\in \text{results}_{\text{learn}}(l, S) \to \text{results}_{\text{learn}}(l, S_0)) \land \text{cost}_{\text{learn}}(l) \ge \text{cost}_{\text{schooling}}(L_0)$. In other words, a learning activity is trivial if it doesn't teach anything new and costs more than an existing schooling! Now we do a proof by contradiction. We assume $\text{DiscoveryCapable}(L, D_{prev})$ is false for our society. We will show this is contradictory, meaning there is no such society that cannot find a discovery. If $\text{DiscoveryCapable}$ is false, then there are no new non-trivial learning activities which are discoverable. If we find that there must be a non-trivial learning activity to discover, we have a proof by contradiction. This means we must prove $\forall_{L, D_{prev}}\exists_l \lnot \text{Trivial}(l) \land \text{Discoverable}(L, l)$. Consider the Turing machine, which is accepted to be far simpler than even a human. If we can prove that, at this time, a Turing machine can develop a new useful learning activity for us, then we can make a discovery simply by following that program. We are, after all, at least as impressive as computers. Let us devise a Turing machine to help. Select the subset $L_T \subseteq L$ of learning activities which can be analyzed by a Turing machine. We want to find a program which finds an $l \notin L_T$ such that $\lnot \text{Trivial}(l)$. The first step is easy. It is trivial for computers to find an activity $\exists_{l\in 2^{L_T}}\, l \notin L$. Such power-set behaviors occur all the time in NP problems.
Now what if the computer can't do this? The next step is to gather some data about the universe. If we can't find any new data, then we are literally out of things to discover. If we find new data, we can have the computers crunch it harder, to find things that we don't understand but computers can find. If they cannot, then all Turing-capable learning methods are exhausted, and we have covered the universe with our computational prowess. We have, in effect, used computers to extend our life, crunching a subset of our possible learning activities, in hopes of finding a new one. And now we sit back and look at the non-Turing learning activities. It is not easy to tell if there is a faster way to learn such things. In fact, the only limit seems to be creativity. The only limit for our capacity to discover is our own creativity.
There is one input factor $k$. The representative firm maximizes profits with respect to employed capital, i.e. \begin{align} \max_k{\pi(k) = f(k)-(r+\tau)k} \end{align} where $f(\cdot)$ is the production function, $r$ the rent of capital and $\tau$ the tax rate to be paid per unit of capital. The FOC reads \begin{align} \pi'(k)=0\quad\Longleftrightarrow\quad r = f'(k) - \tau \end{align} Now $r$ can be interpreted as a net-of-tax return on capital for the household (who owns the capital stock). Is there a standard proxy for $r$? Can you point to empirical literature where $r$ is shown for different countries? Any suggestion is appreciated. Edit I: I think I was searching for net national accounts separated by factor income (firms and workers). I added a picture from the German national stats. I think the last two columns might be appropriate (I hope it's readable). Edit II: Found the net return on net capital in AMECO. The variable is named APNDK and you can find a description on page 65 of that list. Is this any good? I don't do empirics.
I have just solved an exercise, which asked to show that if a function $f$ is Lipschitz then $f$ is absolutely continuous. However, I'm wondering if the converse is true. I can't seem to think of any counterexamples at the moment. I think I'm brain dead or something, so I could use some help.

Consider $f(x) = \sqrt{x}$ on $[0,c]$. Then $f^\prime (x) = \frac{1}{2 \sqrt{x}}$ is not bounded and hence $f$ is not Lipschitz. But $f(x) = \sqrt{x}$ is absolutely continuous on $[0,c]$. To see this, observe (i) $(\sqrt{x} - \sqrt{y})(\sqrt{x} + \sqrt{y}) = x - y$, and (ii) since $- \sqrt{y} \leq \sqrt{y}$, you have $(\sqrt{x} - \sqrt{y}) \leq (\sqrt{x} + \sqrt{y})$. Now let $\varepsilon > 0$ and choose $\delta := \varepsilon^2$. Then for $0 \le y \le x$ with $x - y < \delta$, $$ (\sqrt{x} - \sqrt{y})^2 \leq (\sqrt{x} - \sqrt{y})(\sqrt{x} + \sqrt{y}) = x-y < \delta = \varepsilon^2,$$ so $|\sqrt{x} - \sqrt{y}| < \varepsilon$.

Take any unbounded integrable function $f(x)$. Then its antiderivative $F(x) = \int_0^x f(t)\,dt$ is absolutely continuous. But as $f$ is unbounded, $F$ is not Lipschitz.

There is also the enjoyable article In Praise of $x^a \sin (1/x)$ [citation below], which shows that $x^{3/2} \sin(1/x)$ is AC but not Lipschitz on $[0,1]$. Kaptanoğlu, H. Turgay. "In Praise of $y= x^\alpha \sin \left(\frac{1}{x}\right)$." The American Mathematical Monthly 108.2 (2001): 144-150.

We know that the indefinite integral of an integrable function is absolutely continuous, and these are the only absolutely continuous functions. So we want to find an $f$ which is integrable but unbounded; then its indefinite integral $F$ (say) will be absolutely continuous, but the derivative of $F$, which is precisely $f$ a.e., is unbounded, and hence $F$ is not a Lipschitz function. We can define $f:(0,1)\to \mathbb R$ as $f(x)=x^{-1/2} = \frac{1}{\sqrt{x}}$; then clearly $f$ is unbounded, but it is integrable, as its integral on $(0,1)$ is $2$, which is finite. So its indefinite integral, which is $2\sqrt{x}$, is our required absolutely continuous function which is not Lipschitz (as its derivative $f$ is unbounded).
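A quick numerical illustration of both halves of the $\sqrt{x}$ example. The grid check of $|\sqrt{x} - \sqrt{y}| \le \sqrt{|x-y|}$ is my addition (it follows from observations (i) and (ii) above), not something from the answers:

```python
import math

# The difference quotient of sqrt at 0 blows up, so no Lipschitz constant works:
quotients = [math.sqrt(x) / x for x in (1e-2, 1e-4, 1e-6)]
print(quotients)  # grows without bound as x -> 0

# Yet sqrt is uniformly (indeed absolutely) continuous on [0, 1]:
# |sqrt(x) - sqrt(y)| <= sqrt(|x - y|), checked on a grid.
grid = [i / 100 for i in range(101)]
ok = all(abs(math.sqrt(x) - math.sqrt(y)) <= math.sqrt(abs(x - y)) + 1e-12
         for x in grid for y in grid)
print(ok)
```

The unbounded quotients show the Lipschitz condition fails at $0$, while the square-root modulus of continuity is exactly what makes the $\varepsilon$-$\delta$ argument above go through with $\delta = \varepsilon^2$.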
Here is the excerpt from the textbook A Course in Mathematical Analysis by Prof D. J. H. Garling. So I have the Theorem 1: Given a set $A\neq\varnothing$, a mapping $\varphi:A\to P(A )\setminus \{\varnothing\}$, and $\bar{a}\in A$. Then there exists a sequence $$(a_{n})_{n\in \mathbb{N}}$$ such that $a_{0}=\bar{a}$ and $a_{n+1}\in \varphi(a_{n})$ for all $n \in \mathbb{N}$. Axiom of Choice: Given a collection $A$ of nonempty sets, there exists a function $$c: A \to \bigcup_{A_{i} \in A}A_{i}$$ such that $c(A_{i})\in A_{i}$ for all $A_{i} \in A$. Axiom of Dependent Choice: Given a nonempty set $A$ and a binary relation $\mathcal{R}$ on $A$ such that for all $a\in A$, there exists $b\in A$ such that $a\mathcal{R}b$. Then there exists a sequence $$(a_{n})_{n\in \mathbb{N}}$$ such that $a_{n}\mathcal{R}a_{n+1}$ for all $n \in \mathbb{N}$. The author states that the axiom of dependent choice states that this [Theorem 1] is always possible. But I can only infer Theorem 1 from the Axiom of Choice, not from the Axiom of Dependent Choice. Below is how I did it. Using the Axiom of Choice for the collection $P(A)\setminus \{\varnothing\}$ of nonempty sets, there exists a choice function $$\varphi':P(A)\setminus \{\varnothing\} \to A$$ such that $\varphi'(X)\in X$ for all $X\in P(A)\setminus \{\varnothing\}$. Let $\bar{\varphi}=\varphi'\circ \varphi:A\to A$, so that $\bar{\varphi}(a)=\varphi'(\varphi(a))\in \varphi(a)$ for all $a\in A$. To sum up, we have $\bar{\varphi}:A\to A$ and $\bar{a}\in A$. Applying the Recursion Theorem, we get a sequence $$(a_{n})_{n\in \mathbb{N}}$$ such that $a_{0}=\bar{a}$ and $a_{n+1}=\bar{\varphi}(a_{n})\in\varphi(a_{n})$ for all $n\in \mathbb{N}$. So this $(a_{n})_{n\in \mathbb{N}}$ is the required sequence. I would like you to check my above proof and check whether it is possible for the Axiom of Dependent Choice to imply Theorem 1. Many thanks for your help!
Consider the following theorem: "Every non-empty set of positive integers has a minimum element". The proof I usually see is one that uses contradiction, and does not seem like the easiest possible proof. I think there is an easier proof, and I wonder why I never see it. Does it contain an invalid assumption? The proof goes as follows: First, prove by induction that the theorem is true for finite sets. Base case: It's true for a set of size 1, trivially. Induction step: Consider a finite set $S$ of size $n+1$. Let $s$ be a member of $S$. By the I.H., $S\setminus\{s\}$ has a minimum $s'$. If $s\lt s'$, then $s$ is the minimum of $S$. Otherwise, it is $s'$. Second, let $T$ be any non-empty set of positive integers ($T$ could be infinite). Let $t$ be an element of $T$. Consider the set $T\cap [0,t]$. This set is clearly finite and non-empty, so it has a minimum element $m$. Next, we show that $m$ is also a minimum element of $T$. Let $x$ be an element of $T$. If $t\lt x$, then $m\leq t\lt x$. Otherwise $x\in T\cap [0,t]$, so $m\leq x$. Is there a problem with this proof? I think when people prove that $\mathbb{N}$ is well-ordered, they do it in set theory books at a point where very little has been proven, so they can't assume much. Am I assuming too much in this proof? If not, why don't people ever use this very simple proof?
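The second step of the proof translates directly into code. The representation of the (possibly infinite) set $T$ by a membership predicate plus one known element is my own framing, not part of the question:

```python
# Sketch of the proof's second step: to find the minimum of a nonempty set T
# of positive integers, pick any known t in T and minimize over the finite
# set T ∩ [0, t].

def minimum(contains, t):
    """contains: membership predicate for T; t: any known element of T."""
    finite_part = [x for x in range(t + 1) if contains(x)]  # T ∩ [0, t]
    m = finite_part[0]
    for x in finite_part[1:]:   # induction-style scan for the minimum
        if x < m:
            m = x
    return m

# An "infinite" set T = {n > 0 : n ≡ 3 (mod 7)}, with known element 45:
print(minimum(lambda n: n > 0 and n % 7 == 3, 45))  # 3
```

Any element of $T$ larger than $t$ is automatically above $m \le t$, which is exactly the case split in the proof.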
I will include the proof here and highlight the parts that are giving me trouble. Theorem$\hspace{5 pt}$ Let $P$ be a nonempty perfect set in $\mathbb{R}^k$. Then $P$ is uncountable. Proof$\hspace{5 pt}$ Since $P$ has limit points, $P$ must be infinite. Suppose $P$ is countable, and denote the points of $P$ by $\mathbf{x_1}, \mathbf{x_2}, \mathbf{x_3}, \ldots$. We shall construct a sequence $\{V_n\}$ of neighborhoods as follows. Let $V_1$ be any neighborhood of $\mathbf{x_1}$. 1) ^ Are we using the Axiom of Choice here? How can we have an arbitrary set in a construction? If $V_1$ consists of all $\mathbf{y} \in \mathbb{R}^k$ such that $|\mathbf{y} - \mathbf{x_1}| < r$, the closure $\overline{V_1}$ of $V_1$ is the set of all $\mathbf{y} \in \mathbb{R}^k$ such that $|\mathbf{y} - \mathbf{x_1}| \leq r$. 2) ^ It makes sense intuitively, but how do we prove this last statement? Suppose $V_n$ has been constructed, so that $V_n \cap P$ is not empty. Since every point of $P$ is a limit point of $P$, there is a neighborhood $V_{n+1}$ such that (i) $\overline{V_{n+1}} \subset V_n$, (ii) $\mathbf{x_n} \notin \overline{V_{n+1}}$, (iii) $V_{n+1} \cap P$ is not empty. By (iii), $V_{n+1}$ satisfies our induction hypothesis, and the construction can proceed. 3) ^ I really don't get this whole paragraph much at all. Could someone explain it in a more step-by-step way? Put $K_n = \overline{V_n} \cap P$. Since $\overline{V_n}$ is closed and bounded, $\overline{V_n}$ is compact. 4) ^ "closed" comes from it being a closure and "bounded" comes from the definition of neighborhood, correct? Since $\mathbf{x_n} \notin K_{n+1}$, no point of $P$ lies in $\bigcap_1^\infty K_n$. Since $K_n \subset P$, this implies that $\bigcap_1^\infty K_n$ is empty. But each $K_n$ is nonempty, by (iii), and $K_n \supset K_{n+1}$, by (i); this contradicts the Corollary to Theorem 2.36.
Can someone help me evaluate $G_g(z)=\int_0^{\infty}x^{z-1}e^{igx}dx$, where $g$ is real and $z$ is complex? By closing the contour in the upper half plane, I've managed to prove that if $0<\operatorname{Re}(z)<1$ and $\operatorname{Im}(z)>0$ (we can then ignore these conditions by analytic continuation) and $g>0$, then $G_g(z)=e^{i\pi z}G_{-g}(z)$. But it doesn't look as though contour integration will be enough to get the whole answer unless $G_g(z)\equiv0$, since the integrand has no poles, only a branch cut. Can we use the gamma function somewhere? The expression $\Gamma(z)=\int_0^{\infty}t^{z-1}e^{-t}dt$ looks very similar to what we've got for $G$, and the first half of this problem was about the gamma function. Maybe we could substitute $t=-igx$ and then deform the contour from the imaginary axis to the real? Many thanks for any help with this!
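For what it's worth, here is a sketch of the standard answer along exactly the lines you suggest, assuming $g>0$ and $0<\operatorname{Re}(z)<1$: rotate the ray of integration from the positive real axis to the positive imaginary axis (the connecting arc vanishes in that strip, since $e^{igx}$ decays there), which is the substitution $x = it/g$:

```latex
G_g(z) \;=\; \int_0^{\infty} x^{z-1} e^{igx}\,\mathrm{d}x
\;\stackrel{x = it/g}{=}\; \left(\frac{i}{g}\right)^{z} \int_0^{\infty} t^{z-1} e^{-t}\,\mathrm{d}t
\;=\; \frac{\Gamma(z)}{g^{z}}\, e^{i\pi z/2},
\qquad g>0,\;\; 0<\operatorname{Re}(z)<1.
```

As a consistency check, replacing $g$ by $-g$ flips the phase to $e^{-i\pi z/2}$, so $G_g(z)=e^{i\pi z}G_{-g}(z)$, matching the relation you already derived; analytic continuation in $z$ then extends the formula beyond the strip.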
We know that given a multiplicative function $f$ for which the series $\sum_{n=1}^\infty f(n)$ converges absolutely, so does the Euler product $\prod_{p}\sum_{k=0}^\infty f(p^k)$ — but does the reverse hold (at least up to conditional convergence)?

Suppose $f$ is multiplicative and $\prod_p \sum_k |f(p^k)|$ converges, i.e. there is $L$ such that for every $\epsilon > 0$ there exist $K,P$ such that $\left|L - \prod_{p \le P'} \sum_{k \le K'} |f(p^k)|\right| < \epsilon$ whenever $K' > K$ and $P' > P$. Now $$\prod_{p \le P_1} \sum_{k \le K_1} |f(p^k)| \le \sum_{n \le N} |f(n)| \le \prod_{p \le P_2} \sum_{k \le K_2} |f(p^k)|$$ where $P_1$ and $K_1$ are such that all positive integers $\prod_{p \le P_1} p^{k(p)}$ with all $k(p) < K_1$ are at most $N$, while $P_2$ and $K_2$ are such that all $n \le N$ are of the form $\prod_{p \le P_2} p^{k(p)}$ with all $k(p) \le K_2$. We conclude that the sum converges absolutely.
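A numeric sanity check of the correspondence with $f(n)=n^{-2}$ (the sieve helper is my own): the partial Euler product $\prod_{p\le P}\left(1-p^{-2}\right)^{-1}$ should approach the absolutely convergent series $\sum_{n\ge 1} n^{-2} = \pi^2/6$.

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

# Partial Euler product prod_p 1/(1 - p^-2), where each factor is the
# geometric series sum_k f(p^k) = sum_k p^(-2k).
prod = 1.0
for p in primes_up_to(10_000):
    prod *= 1 / (1 - p ** -2)

print(prod, math.pi ** 2 / 6)  # the two agree to several decimal places
```

Truncating the product at $P$ drops a factor of roughly $\exp\big(\sum_{p>P} p^{-2}\big)$, which is tiny for $P = 10^4$, so the agreement is close.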
I found this question here but it does not fully answer my question. The answer there was that "composite bosons can occupy the same state when the state is spatially delocalized on a scale larger than the scale of the wavefunction of the fermions inside".

Let's say we make a BEC with bosonic atoms (for example in a harmonic trap). The BEC means that a huge number of atoms will occupy the same energy level. This cannot be exactly true because the atoms are made out of fermions. So I guess that "the" energy level is actually a collection of many different energy levels that originate somehow from the internal structure of the atoms. This effectively creates a degeneracy of "the" energy level. I think this is what he meant by "spatially delocalized on a larger scale than the scale of the wavefunction of the fermions inside". I have a few questions regarding this: Is this correct? Where do these extra energy levels come from? (There must be a huge number of them.) If there is a huge number of internal energy states, it should give a great enhancement of the density of states. Since many thermodynamic quantities depend on the density of states (for instance the particle number), shouldn't this change the thermodynamics of the gas (not only at low temperatures but also at higher ones)?

EDIT: This edit is about Chiral Anomaly's answer. I would like to make this a bit more quantitative. Consider a sodium atom. Its Hamiltonian (like for the H-atom) can be decomposed into a rest-frame part (which will become the spatial wavefunction of the atom later) and an internal part. The internal part has a hydrogen-like spectrum. The quantum numbers of these states are what you called $n$. If the electrons have $k$ accessible states then there are $\binom{k}{11}$ possibilities to arrange the 11 electrons. For 20 million atoms (as in here) you need about 34 internal states (these are all states up to $n \leq 4$). For rubidium you need all states up to $n \leq 5$.
I'm not fully convinced by your argument, for several reasons: This would imply that all of the atoms in a BEC are excited. You need a specific electronic configuration for cooling and (even more importantly) trapping the atoms (i.e. you need one electron in a specific state). So all of those excited configurations where this state is not occupied would simply fall out of the trap. One observes the BEC by shining light at some transition frequency on the atoms. If all internal states are occupied there cannot be a transition.

EDIT 2: Let's assume for a moment an idealized world. The nucleus and the electrons create an atom whose wavefunction splits into an internal part $\psi_i$ (with $k$ discrete states) and an external wavefunction $\psi(x)$. We put those atoms in a harmonic potential. Now assume that the internal structure is not affected by the potential and that there is no residual interaction between the atoms. So we can write the total Hamiltonian as $H = H_{ext} + H_{in}$, where $H_{ext} = p^2/2m + V(x)$ has eigenvalues $\hbar \omega (n+\frac{1}{2})$ and $H_{in}$ is just the (independent) internal Hamiltonian. Let's choose the ground state of the harmonic trap to create a BEC. If the atoms were fundamental bosons, the degeneracy of this energy level would be 1 (which is no problem here). But now we have composite bosons, so for the fermions this state has a degeneracy of $1 \times k$. Hence we can put at most $k$ atoms into this state. (I think we both agree on this.)

Now turn on interactions. Many different things change. The internal structure is affected by the potential (this is fine since it does not change the number of states). The atoms interact with each other. This will lift the $k$-fold degeneracy of the ground state (i.e. different atoms will have a different $e^{-iEt/\hbar}$ time dependence). If the interaction is small the splitting will be small, and therefore the time dependence of the atoms will be nearly equal.
If we run our experiment only for a short time it will look like all the atoms have the same time dependence (BEC). If the interactions are not negligible the level splitting will be of order $\hbar \omega$. Then it will not look like all atoms occupy the ground state, but rather the two lowest states (no BEC). However, now we can put $2k$ atoms into our gas, because we are treating two (unperturbed) states as equal. But I doubt that this will solve the problem because, as I said, there won't be a BEC anymore.

Now comes the complicated part. The internal and external wavefunctions (even of different atoms) can mix. This is hard to analyze. But we know two things: 1. The overall number of states does not change. 2. The resulting gas must be able to form a BEC (i.e. you need enough states which have (nearly) the same time dependence). If high-energy states mix strongly into low-energy states, the nice time dependence gets lost. Also, in this case all the BEC analysis would be completely wrong (since it does not account for such mixing). So I think this mixing must be negligible. All in all, turning on interactions does not create extra states. Therefore if you see a BEC you have at most $k$ atoms in it.
Now showing items 1-10 of 24

Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...

Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...

Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and $\bar{\rm p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...

Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...

Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...

Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...

K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Let $X$ be a Banach space, and denote by $\|\cdot\|$ the standard norm on $B(X)$, the space of bounded linear operators $T:X\to X$.

(a) Suppose that $|||\cdot|||$ is another algebra norm on $B(X)$. Prove that there exists $C > 0$ such that $\|\cdot\| \le C|||\cdot|||$.

(b) Prove that $B(X)$ has a unique (up to equivalence) complete algebra norm.

For (a): Assume not; then $\forall n\in \mathbb{N}, \exists T_n \in \mathcal{B}(X)$ such that $\|T_n\|> n|||T_n|||$, so scale $T_n$ such that $|||T_n||| = 1$ and $\|T_n\|> n$. For $x_0\in X$ and $\lambda\in X^*$, define the rank-one operator $S = x_0 \otimes \lambda$ by $S(x) = \lambda(x)x_0$. Then $\|S(x)\| = |\lambda(x)|\, \|x_0\| \le \|\lambda\|\,\|x_0\|\, \|x\|$, so $\|S\|\le \|\lambda\|\,\|x_0\|$, and $S$ is a bounded operator on $X$. For all $R \in \mathcal{B}(X)$ and $x\in X$, $(SRS)(x) = S(R(\lambda(x)x_0)) = \lambda(x)\lambda(Rx_0)x_0 = (\lambda(Rx_0)S)(x)$, so $SRS = \lambda(Rx_0)S$. And so $$|||SRS||| = |\lambda(Rx_0)|\cdot|||S||| \le |||S|||\cdot|||R|||\cdot|||S|||.$$ When $S\neq 0$, this gives $|\lambda(Rx_0)| \le |||S|||\cdot|||R|||$. Since $|||T_n||| = 1$, we get $|\lambda(T_n x_0)| \le |||S|||$. Now let $T'_n = T_n/\|T_n\|$; since $\|T_n\| > n$, it follows that $|\lambda(T_n'x_0)| \to 0$. So for all $x_0\in X$, $T_n' x_0 \rightharpoonup 0$ weakly. Thus...???

For (b), assume (a). It remains to show that if $|||\cdot|||$ is a complete norm, then $|||\cdot||| \le D\|\cdot\|$ for some $D$. A proof by contradiction starts by finding a sequence $T_n$ such that $|||T_n||| = 1$ and $\|T_n\| \to 0$, but this does not lead to anything contradictory. Johnson (1967), "The uniqueness of the (complete) norm topology", proves a related result, but I don't find it helpful.
$\newcommand{\Y}{\mathcal{Y}}\newcommand{\X}{\mathcal{X}}\newcommand{\rmL}{\mathrm{L}}$As explained for example in Watrous' book (chapter 2, p. 79), given an arbitrary linear map $\Phi\in\rmL(\rmL(\X),\rmL(\Y))$, for every linear operator $X\in\rmL(\X)$ we can write the Kraus representation of $\Phi(X)$ as
$$\Phi(X)=\sum_a A_a X B_a^\dagger,\tag1$$
where $A_a,B_a\in\rmL(\X,\Y)$. As far as I understand it, the essential step in showing this is given in Corollary 2.21 of the above book, where the Choi representation is written as $J(\Phi)=\sum_a u_a v_a^\dagger$.

I've been trying to understand how the Kraus representation can be obtained more directly from the natural representation of $\Phi$, which essentially means thinking of $\Phi$ as a matrix acting on the vectorized versions of the operators, via the equivalence $\rmL(\X)\sim\X\otimes \X$. In this representation, we can write $\Phi(X)_{ij}=\Phi_{ijk\ell}X_{k\ell}$ (summation implied), where $\Phi_{ijk\ell}$ are the components of the natural representation. One can apply an SVD to the four-index object $\Phi_{ijk\ell}$, separating $i,k$ from $j,\ell$, thus obtaining a decomposition of the form
$$\Phi_{ijk\ell}=\sum_\alpha s_\alpha A^\alpha_{ik}\bar B^{\alpha}_{j\ell}.\tag2$$
This looks very close to the typical Kraus representation for a general map as given in (1). Since $s_\alpha\ge0$, one can also write $s_\alpha=\sqrt{s_\alpha}\sqrt{s_\alpha}$ and redefine the operators $A, B$ to absorb $\sqrt{s_\alpha}$, thus getting an even more similar form. What I'm wondering is: is this way of decomposing the channel via SVD equivalent to the Kraus representation?
If so, we would also know additional properties of the $A_{ik}^\alpha, B_{j\ell}^\alpha$ operators, such as their orthogonality: $\sum_{ik}A^\alpha_{ik} \bar A^\beta_{ik}=\delta_{\alpha\beta}$, etc. In this sense, it seems to me that absorbing the singular values $s_\alpha$ into the operators, as is done in (1), actually hides information, because we then lose the orthogonality between the $A_a, B_a$. Does one approach have any advantage over the other (assuming they are both equally correct)?
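For what it's worth, the equivalence can be checked numerically on a small example. Below (a sketch of my own; the dephasing channel and all variable names are illustrative choices, not from Watrous) the four-index natural representation $\Phi_{ijk\ell}$ is reshaped with $(i,k)$ against $(j,\ell)$, SVD'd as in (2), and the resulting operators reproduce $\Phi(X)$, with the $A^\alpha$ inheriting orthonormality from the columns of $U$:

```python
import numpy as np

d, prob = 2, 0.3
# Kraus operators of a qubit dephasing channel (an illustrative choice)
K = [np.sqrt(1 - prob) * np.eye(2), np.sqrt(prob) * np.diag([1.0, -1.0])]

# natural representation as a 4-index tensor:
# Phi_{ijkl} = sum_a (K_a)_{ik} conj(K_a)_{jl}
Phi = sum(np.einsum('ik,jl->ijkl', Ka, Ka.conj()) for Ka in K)

# group (i,k) against (j,l) and take the SVD, as in Eq. (2)
M = Phi.transpose(0, 2, 1, 3).reshape(d * d, d * d)
U, s, Vh = np.linalg.svd(M)
A = [U[:, a].reshape(d, d) for a in range(d * d)]
B = [Vh[a, :].conj().reshape(d, d) for a in range(d * d)]

# check Phi(X) = sum_a s_a A_a X B_a^dagger on a random X
rng = np.random.default_rng(0)
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
lhs = np.einsum('ijkl,kl->ij', Phi, X)
rhs = sum(s[a] * A[a] @ X @ B[a].conj().T for a in range(d * d))
err = np.abs(lhs - rhs).max()

# orthogonality: sum_{ik} A^a_{ik} conj(A^b_{ik}) = delta_{ab}
gram = np.array([[np.sum(A[a] * A[b].conj()) for b in range(d * d)]
                 for a in range(d * d)])
ortho_err = np.abs(gram - np.eye(d * d)).max()
print(err, ortho_err)
```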
Sort an array of size N made of numbers from 0 to K

Problem statement

In this article we will be discussing another (see the previous article) very popular programming interview question. This time we are asked to sort an array of size N whose elements are in the range [0,K). Given an array $A=[a_1,a_2,\ldots,a_n]$, $0 \leq a_i < K$, modify $A$ so that it is sorted.

As usual you should ask your interviewer some clarifying questions. For example, a sensible question to ask could be: what is the maximum value of $K$ and of $N$? Let's assume that $N \leq 10^7$ and $K \leq 10^3$.

Start by noticing that the question seems rather simple at first. After all, all you are asked to do is to sort an array. This is an easy-peasy task that can be accomplished in a one-liner in C++.

Simple sort solution

void sort_range_K(vector<int>& A) {
    sort(begin(A), end(A));
}

But at this point you should also notice that you are not using the information about the range of the elements, and this should make your eyebrows raise and make you think a bit harder. It means that there is a better way of solving this problem. The complexity of the code above is $O(n \log_2(n))$.

Linear time solution

You might not be aware (and if you are not, I highly encourage you to get familiar with the topic) that it is possible to achieve better asymptotic complexity for sorting than $O(n \log_2(n))$. If you know the range of the elements you are dealing with, sorting can be done in linear time using counting sort. The idea behind it is really simple: for each number in the range [0,K) you count how many times it appears in the original array, using an array F of size K. W.r.t.
to the example above, for instance:

A = {1,8,9,66,2,1,45,12}
F = {0,2,1,0,0,0,0,0,1,1,....}

F[0] = 0 because 0 appears 0 times in A
F[1] = 2 because 1 appears 2 times in A
$\ldots$
F[9] = 1 because 9 appears 1 time in A

With that information we can easily produce the output array by putting all the zeros first, then the ones, and so on.

C++ code

The idea above can be coded as follows:

template<class T>
void sort_range_K(vector<T>& A, const int K) {
    vector<int> F(K, 0);
    for (const auto& x : A)
        F[x]++;
    int p = 0;
    for (int i = 0; i < K; i++)
        for (int j = 0; j < F[i]; j++)
            A[p++] = i;
}

The time complexity of the code above is $\Theta(N + K)$, i.e. linear in the length of $A$. The space complexity is $\Theta(K)$. You can try the above code online on Wandbox.

Conclusions

In this article we discussed a common coding interview question asked by many tech companies. The takeaways from this exercise are: If a piece of information is provided in the problem statement, it probably needs to be used. If you know the range of the elements you want to sort, you can sort them in linear time; the space complexity is linear in the biggest possible value an element of the array can take. If you like this article check the previous article of the series (determine if a number is an Armstrong number).

Coding interview question: Determine if a number is an Armstrong number

Problem statement

In this article we will be discussing another (see the previous article, counting triples with a given sum) of the most common coding interview questions. The problem statement is really simple and goes as follows: Given a positive integer $N$, determine whether it is an Armstrong number. A number $x_1x_2\ldots x_n$ of length $n$ (where each $0 \leq x_i \leq 9$ is a digit) is an Armstrong number if the following is true: $x_1x_2\ldots x_n = pow(x_1,n) + pow(x_2,n) + \ldots + pow(x_n,n)$. In other words, if raising all digits to the power $n$ and summing them up yields the original number, then $N$ is an Armstrong number.
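To make the definition of an Armstrong number concrete, here is a minimal sketch (in Python rather than the article's C++, purely for illustration):

```python
def is_armstrong(n: int) -> bool:
    # raise each digit to the power len(digits) and compare the sum with n
    digits = [int(d) for d in str(n)]
    return n == sum(d ** len(digits) for d in digits)

# 153 = 1^3 + 5^3 + 3^3 is an Armstrong number; 154 is not
print(is_armstrong(153), is_armstrong(154))  # → True False
```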
In this article we will be discussing one of the most common coding interview questions asked in many Google interviews, especially during one of the first stages. The problem statement goes as follows: Given an array N of distinct integers and an integer K, print on the standard output the number of triples (N[i], N[j], N[l]) whose sum is equal to K. In other words, how many triples (i,j,l), i < j < l, such that N[i] + N[j] + N[l] = K are in N?

Command-line argument management is tricky to get right, especially when the number of options that we support and the number of their combinations is big. For this kind of application there are already a number of very effective libraries and tools; one such library is Boost::program_options, which I highly encourage you to use, as it is awesome. But there are other cases, when we need to quickly write a prototype or when the number of options and possible configurations is small, where using Boost could be overkill. In such cases what we usually do is write ad-hoc command-line option management code which will never be reused and is tailored to the specific application at hand. This approach costs time and is boring after you have done it once. For this reason, I wrote a super simple and minimalistic C++17 command-line argument manager which allows you to: check if a set of options has been specified; get the argument of each element of a set of options (as a string). The class is extremely easy to use. You only need to construct an instance of the class by passing the argc and argv variables and then you are ready to go. No more need for ugly and error-prone command-line argument parsing. For instance, as you can see from the code snippet below, I check for the existence of the -h or --help options and of all the other required options. If one of -X, -O, -P is not present or its required corresponding argument is not specified, a help message is printed.
Otherwise, the arguments are retrieved and returned as a vector of strings ready to be used. The example code is listed below or can be downloaded from here (gist on GitHub). You can also try it live on Wandbox.

In this short lesson we will discuss how to parallelize a simple and rather inefficient (because it is not an in-place version) implementation of quick-sort using asynchronous tasks and futures. We will perform some benchmarking and performance analysis and we will try to understand how we can further improve our implementation.

Quick sort

In this section I will briefly refresh your memory on quick-sort. I will do so by showing you a simple and self-explicative Haskell version first. We will then write a serial C++ version of the same idea that we will use as a basis for our parallelization. The algorithm is beautifully simple: in order to sort a list with head p and at least one element, it is only necessary to partition the rest of the elements xs into two sublists: lesser, containing all the elements in xs smaller than p, and greater, containing all the elements in xs greater than (or equal to) p. Once both sublists are sorted, we can finally return the whole sorted list by simply gluing lesser, p and greater together, in this order. If you still have trouble understanding the quick-sort algorithm please refer to Wikipedia.

Quick-sort serial version

The following is the serial C++ implementation of the same idea described above. It should be pretty easy to map the following implementation to the Haskell one.
Run it on Wandbox

template <typename T>
void quick_sort_serial(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();
  const T pivot = *start_it;
  // partition the list
  vector<T> lesser;
  copy_if(start_it, end_it, std::back_inserter(lesser),
          [&](const T& el) { return el < pivot; });
  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });
  // solve subproblems
  quick_sort_serial(lesser);
  quick_sort_serial(greater);
  // merge
  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}

Parallelizing quick-sort using std::future

In order to speed things up we are going to use the fact that quick-sort is a divide and conquer algorithm. Each subproblem can be solved independently: creating and sorting lesser and greater are two independent tasks, and we can easily perform both on different threads. The following is the first parallel version of the quick_sort_serial() above.

Run it on Wandbox

template <typename T>
void filter_less_than(const vector<T>& v, vector<T>& lesser, const T pivot) {
  for (const auto& el : v) {
    if (el < pivot) lesser.push_back(el);
  }
}

template <typename T>
void quick_sort_parallel1(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();
  const T pivot = *start_it;
  vector<T> lesser;
  auto fut1 = std::async(std::launch::async, [&]() {
    filter_less_than<T>(std::ref(v), std::ref(lesser), pivot);
    quick_sort_parallel1<T>(std::ref(lesser));
  });
  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });
  quick_sort_parallel1(greater);
  fut1.wait();
  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}

As you can notice, the creation and sorting of lesser and greater are performed in parallel.
Each thread running an instance of quick_sort_parallel1() will create another thread running quick-sort on one of the two sub-problems, while the other subproblem is solved by the current thread. This is exactly what we are doing when we spawn the async task above: we are creating a task that will populate lesser with all the elements from v less than pivot and, once ready, will sort it. Please note that everything we need to have modified by reference needs to be wrapped in a std::ref, as we discussed in the previous lessons. The following picture shows how the execution unfolds for the unsorted list [2,7,1,6,9,5,8,3,4,10]. The code above shows how to spawn an async thread solving the lesser subproblem: while this task is running on the newly created thread, we can solve greater on the current thread. The asynchronous task will recursively spawn other async tasks until a list of size <= 1 is created, which is of course already sorted; there is nothing to do in this case. Once the main thread is done with sorting the greater list, it waits for the asynchronous task to be ready using the std::future::wait() function. Once wait() returns, both lists are sorted, and we can proceed with merging the results and finally, here it is, we have a sorted list.

Performance analysis

Let's quickly analyze our implementation. We will compare execution times for the single-thread and async-parallel versions above. Let's start our analysis by looking at this graph depicting the execution time (average of 10 runs) for both versions: it might be a surprising result to see that the async parallel version is way slower than the single-threaded version, ~55x slower! Why is that? The reason is that the parallel version creates a new thread for every single subproblem, even for the ones that are quite small. Threads are costly for the OS to manage: they use resources and need to be scheduled.
For smaller tasks the overhead caused by the additional thread is larger than the gain in performance that we might get by processing the sublist in parallel. This is exactly what is happening here. In order to solve this issue, we want to modify the async code above so that a new thread is spawned only when the input list v is larger than a certain threshold. The code below implements the aforementioned idea:

template <typename T>
void quick_sort_async_lim(vector<T>& v) {
  if (v.size() <= 1) return;
  auto start_it = v.begin();
  auto end_it = v.end();
  const T pivot = *start_it;
  vector<T> lesser;
  vector<T> greater;
  copy_if(start_it + 1, end_it, std::back_inserter(greater),
          [&](const T& el) { return el >= pivot; });
  if (v.size() >= THRESHOLD) {
    auto fut1 = std::async([&]() {
      filter_less_than<T>(std::ref(v), std::ref(lesser), pivot);
      quick_sort_async_lim<T>(std::ref(lesser));
    });
    quick_sort_async_lim(greater);
    fut1.wait();
  } else {
    // problem is too small: do not create new threads
    copy_if(start_it, end_it, std::back_inserter(lesser),
            [&](const T& el) { return el < pivot; });
    quick_sort_async_lim(lesser);
    quick_sort_async_lim(greater);
  }
  std::copy(lesser.begin(), lesser.end(), v.begin());
  v[lesser.size()] = pivot;
  std::copy(greater.begin(), greater.end(),
            v.begin() + lesser.size() + 1);
}

As you can notice, the only addition in this optimized version is that a new thread is spawned only when the size of the input list is larger than THRESHOLD. If the list is too small, we fall back on the classic single-thread version. The following pictures show the results for the optimized version above with a value of THRESHOLD = 4000. As you can notice, the execution time drops considerably w.r.t. the single-thread version: we have achieved a ~4x speedup with minimal programming effort. We have introduced a new parameter in our code, though, and we need to figure out what the best value of THRESHOLD is. In order to do so, let's analyze the performance of the code above for various values of the threshold.
The following graph depicts the execution time for various values of THRESHOLD. Note that the y-axis is in log scale. The execution time drops quite abruptly as THRESHOLD goes from 0 to 300.

Conclusion

We have used std::future to parallelize the quick-sort algorithm. The async code differs only slightly from the serial single-thread implementation, but runs 4x faster. On the other hand, we have learned that running too many threads is definitely not a good idea, because each thread comes with an overhead: the OS needs to allocate resources and time to manage them.

In this lesson we will talk about a way of returning values from threads; more precisely we will talk about std::future, a mechanism that C++ offers in order to perform asynchronous tasks and query for the result in the future. A future represents an asynchronous task, i.e. an operation that runs in parallel to the current thread and on which the latter can wait (if it needs to) until the result is ready. You can use a future any time you need a thread to wait for a one-off event to happen. The thread can check the status of the asynchronous operation by periodically polling the future while still performing other tasks, or it can just wait for the future to become ready.

In the previous lesson we have seen how data can be protected using mutexes. We now know how to make threads do their work concurrently without messing around with shared resources and data. But sometimes we need to synchronize their actions in time, meaning that we might want a thread t1 to wait until a certain condition is true before allowing it to continue its execution. This lesson discusses the tools that we can use to achieve such behavior efficiently using condition variables.

In this lesson we will cover the topic of sharing data and resources between threads. Imagine a scenario where an integer o needs to be modified by two threads t1 and t2. If we are not careful in handling this scenario, a data race might occur.
But what is a data race exactly?

Data Race

A data race occurs when two or more threads access some shared data and at least one of them is modifying it. Because the threads are scheduled by the OS, and scheduling is not under our control, you do not know upfront which thread is going to access the data first; the final result might depend on the order in which the threads are scheduled. Race conditions typically occur when an operation, in order to be completed, requires multiple steps or sub-operations, or the modification of multiple pieces of data. Since these sub-operations end up being executed by the CPU as separate instructions, other threads can potentially mess with the state of the data while the first thread's operation is still ongoing.
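The lessons' code is C++, but the multi-step read-modify-write mechanism described above can be sketched in a few lines of Python (my own toy example; time.sleep(0) merely encourages a thread switch between the read and the write, so updates from the other thread can be lost):

```python
import threading
import time

counter = 0

def unsafe_increment(times):
    global counter
    for _ in range(times):
        tmp = counter      # step 1: read the shared value
        time.sleep(0)      # yield: the other thread may run here
        counter = tmp + 1  # step 2: write back, possibly overwriting updates

threads = [threading.Thread(target=unsafe_increment, args=(50_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# with 2 x 50_000 increments the correct total is 100_000,
# but lost updates typically leave the counter well below that
print(counter)
```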
I have a question regarding the following problem: Show that the prime number 27644437 splits completely in $L = \mathbb{Q}(\sqrt{55})$.

From what I understand, this deals with ramification:
\begin{eqnarray} \sum_{i=1}^{r}e_if_i = n, & & \text{where } n = [L:K]. \end{eqnarray}
For our prime number $p$ to split completely in $L$, we need $e_i = f_i = 1$ for all $i$. In particular $p$ cannot be ramified, since being ramified would require at least one $e_i > 1$. However, we do not need to worry about ramification in this problem because $p \nmid \operatorname{disc}(L)$. The minimal polynomial for $L$ is $x^2-55$. Since $p$ is unramified and we are dealing with a degree-two extension, this leaves only two possibilities: $p$ is either inert or completely split.

This is where it gets complicated. If I want to show $p$ splits completely, I need to find an integer $x$ such that $x^2 \equiv 55 \pmod p$. I do not need to find a second such integer because $p$ does not ramify in $L$. Looking over so many numbers just to verify this condition is ridiculous; there has to be another approach. I asked my professor and he gave me a hint suggesting I use the Quadratic Reciprocity Law. However, I do not know how to apply it correctly in this case. I would assume I should look at the quadratic residue symbol formed from the discriminant of $L$ and the prime $p$. This will result in an answer of $0$, $1$, or $-1$; however, I am not sure what each value would mean.

Thank you for your time, and thank you in advance for any feedback.
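One way to see which case you are in without searching: by Euler's criterion, $55^{(p-1)/2} \equiv \left(\frac{55}{p}\right) \pmod p$, and the symbol is $+1$ exactly when $55$ is a quadratic residue mod $p$ (split), $-1$ when it is not (inert), and $0$ when $p \mid 55$ (ramified). This is cheap to evaluate with fast modular exponentiation; a quick sanity check of the reciprocity computation your professor has in mind, not a substitute for it:

```python
p = 27644437

# Euler's criterion: 55^((p-1)/2) mod p equals 1, p-1, or 0,
# corresponding to Legendre symbol +1, -1, or 0
e = pow(55, (p - 1) // 2, p)
legendre = 0 if e == 0 else (1 if e == 1 else -1)
print(legendre)  # +1 means x^2 = 55 (mod p) is solvable, i.e. p splits
```

By quadratic reciprocity one gets the same value by hand: $\left(\frac{55}{p}\right)=\left(\frac{5}{p}\right)\left(\frac{11}{p}\right)=\left(\frac{p}{5}\right)\left(\frac{p}{11}\right)$ (using $5 \equiv 1$ and $p \equiv 1 \pmod 4$), and $p \equiv 2 \pmod 5$, $p \equiv 7 \pmod{11}$, neither of which is a residue, so the product is $(-1)(-1)=+1$.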