Why do the following two methods give different answers (or are they the same) for the Fourier transform of $Y = \cos(\omega_0 t + \phi)$, with respect to $t \to \omega$? They are the same. Using the Dirac impulse sifting property $f(x)\,\delta(x-a) = f(a)\,\delta(x-a)$, you can verify that the second method produces the same output as the first.
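For reference, a direct computation via Euler's formula (assuming the convention $\mathcal{F}\{f\}(\omega)=\int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt$, which the post does not state explicitly) gives the pair of impulses both methods must reproduce:

```latex
Y=\cos(\omega_0 t+\phi)
 =\tfrac12\!\left(e^{i\phi}e^{i\omega_0 t}+e^{-i\phi}e^{-i\omega_0 t}\right)
\;\Longrightarrow\;
\mathcal{F}\{Y\}(\omega)
 =\pi\!\left(e^{i\phi}\,\delta(\omega-\omega_0)+e^{-i\phi}\,\delta(\omega+\omega_0)\right).
```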
I am new to the axiom of choice, and currently working my way through some exercises. I am struggling with the following one: Exercise - Prove the Axiom of Choice (every surjective $f: X \to Y$ has a section) in the following two special cases: (1) $Y$ is finite; (2) $X$ is countable. (A section has been previously defined as a function $s: Y \to X$ such that $f(s(y)) = y$ for all $y \in Y$.) My confusion - From what I understand, in (1) you use surjectivity of $f$ to pick an $x \in X$ such that $f(x) = y_0$, proving the case $|Y| = 1$, and then use induction to prove it for general $|Y| = N$. I am a bit confused about (2), though. Would it be legal to take the previous argument and take the limit $N \to \infty$? I tried to Google it, but I got more confused after reading this question and seeing other references to the axiom of countable choice, suggesting that this result cannot be proven. By the way, I am doing 'naive' set theory here; ZF/ZFC axiom systems and that kind of thing have not been discussed in the course.
For clarity, I'm going to generalize your question to be over characteristic $p> 0$ (with base field $\mathbb{F}_q$) instead of the specific case of $p=q=2$. I'll take $p$ and $q$ as fixed constants; I'll leave it to the reader to figure out what the exact dependence on these parameters is, as there are some tradeoffs that can be made. The end result here is that your problem is roughly equivalent to the discrete log problem for finite fields of characteristic $p$.

To be more specific, let the ordinary discrete log problem over extensions of $\mathbb{F}_q$ be: given an extension field $\mathbb{F}$ of $\mathbb{F}_q$ and $a,b \in \mathbb{F}$, find any integer $t$ so that $a = b^t$, or report that none exists. Let the strong discrete log problem over extensions of $\mathbb{F}_q$ be: given $\mathbb{F},a,b$ as before, find integers $z,m$ so that $a = b^t$ for an integer $t$ iff $t = z \pmod{m}$, or report that no $t$ exists. Then the following reductions exist:

There is a deterministic mapping reduction from discrete log over extensions of $\mathbb{F}_q$ to your problem.

There is an efficient, deterministic algorithm which solves your problem when given access to an oracle computing the strong discrete log problem over extensions of $\mathbb{F}_q$.

Accordingly, I'd consider it unlikely that somebody will post a proof of $\mathsf{NP}$-hardness or a proof that your problem is in $\mathsf{P}$ in the near future.

Remark: The strong discrete log problem over extensions of $\mathbb{F}_q$ can be Turing-reduced to the following ostensibly weaker form (though still seemingly stronger than the ordinary discrete log problem): given an extension field $\mathbb{F}$ of $\mathbb{F}_q$ and $a,b \in \mathbb{F}$, find the least non-negative integer $t$ so that $a = b^t$. This follows from the fact that the order of $b$ is one plus the smallest non-negative $t$ so that $b^{-1} = b^t$.
First reduction: The claim is that the ordinary discrete log problem over extensions of $\mathbb{F}_{q}$ mapping-reduces to this problem. This follows from the fact that multiplication in $\mathbb{F}_{q^n}$ is a linear transformation when we view $\mathbb{F}_{q^n}$ as an $n$-dimensional vector space over $\mathbb{F}_q$. Hence a question of the form $a = b^t$ over $\mathbb{F}_{q^n}$ becomes $\vec{a} = B^t\vec{e}$ over $\mathbb{F}_q$, where $\vec{a},\vec{e}$ are $n$-dimensional vectors and $B$ is an $n\times n$ matrix, all over $\mathbb{F}_q$. The vector $\vec{a}$ can be easily computed from $a$, $B$ from $b$, and $\vec{e}$ is just the representation of $1 \in \mathbb{F}_{q^n}$, which can be written down efficiently. This appears to still be a hard case of the general discrete log problem, even with $p=q=2$ (but growing $n$, of course). In particular, people are still competing to see how far out they can compute it.

Second reduction: The claim is that your problem reduces to the strong discrete log problem over extensions of $\mathbb{F}_q$. This reduction has a few pieces to it, so forgive the length. Let the input be the $n$-dimensional vectors $x,y$ and the $n\times n$ matrix $A$, all over $\mathbb{F}_q$; the goal is to find $t$ so that $y = A^tx$. The basic idea is to write $A$ in Jordan canonical form (JCF), from which we can reduce testing $y = A^tx$ to the strong discrete log problem with some straightforward algebra. One reason for using a canonical form under similarity of matrices is that if $A = P^{-1}JP$, then $A^t = P^{-1}J^tP$. Hence we can transform $y = A^tx$ to $(Py) = J^t(Px)$, where now $J$ is in a much nicer format than the arbitrary $A$. The JCF is a particularly simple form, which enables the rest of the algorithm. So from now on, assume that $A$ is already in JCF, but also allow that $x,y$, and $A$ may have entries in an extension field of $\mathbb{F}_q$.
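As a concrete illustration of the first reduction, here is a minimal sketch (not code from the answer: it fixes $q=2$, $n=4$, and the modulus $x^4+x+1$ purely for illustration) showing that the multiplication-by-$b$ matrix $B$ over $\mathbb{F}_2$ satisfies $B^t\vec{e} = $ representation of $b^t$:

```python
# Illustrative sketch with q = 2, n = 4, and the (arbitrary, but irreducible)
# modulus x^4 + x + 1; none of these choices come from the answer itself.
MOD = 0b10011  # x^4 + x + 1 over GF(2)
N = 4

def gf_mul(a, b):
    """Multiply two GF(2^4) elements, stored as bitmask coefficient vectors."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD  # reduce modulo the field polynomial
    return r

def mul_matrix(b):
    """Multiplication by b is F_2-linear; column i is b * x^i, as a bitmask."""
    return [gf_mul(b, 1 << i) for i in range(N)]

def mat_vec(M, v):
    """Apply the column-bitmask matrix M to the vector bitmask v over GF(2)."""
    r = 0
    for i in range(N):
        if v & (1 << i):
            r ^= M[i]
    return r

b = 0b0010  # the element x
t = 5
bt = 1
for _ in range(t):
    bt = gf_mul(bt, b)  # b^t in the field

v = 1                   # the vector e: the representation of 1
B = mul_matrix(b)
for _ in range(t):
    v = mat_vec(B, v)   # B^t e over F_2
assert v == bt          # the discrete log instance transfers to matrices
```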
Remark: There are some subtleties that arise from working with the JCF. Specifically, I'll assume that we can do field operations within any extension of $\mathbb{F}_q$ (no matter how large) in one time step, and that we can compute the JCF efficiently. A priori, this is unrealistic, because working with the JCF may require working in an extension field (the splitting field of the characteristic polynomial) of exponential degree. However, with some care, and using the fact that we're working over a finite field, we can circumvent these issues. In particular, we will associate with each Jordan block a field $\mathbb{F}'$ of degree at most $n$ over $\mathbb{F}_q$ so that all the entries in the Jordan block and the corresponding elements of $x$, $y$ all live within $\mathbb{F}'$. The field $\mathbb{F}'$ may differ from block to block, but using this "mixed representation" allows for an efficient description of the JCF, which moreover can be found efficiently. The algorithm described in the remainder of this section only needs to work with one block at a time, so as long as it does its field operations within the associated field $\mathbb{F}'$, the algorithm will be efficient.
[End remark.] The use of JCF gives us equations of the following form, with each equation corresponding to a Jordan block: $$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_{k-1} \\ y_{k} \\\end{bmatrix}=\begin{bmatrix}\lambda & 1 & & & & \\ & \lambda & 1 & & & \\ & & \lambda & 1 & & \\ & & & \ddots & & \\ & & & & \lambda & 1 \\ & & & & & \lambda \\\end{bmatrix}^t\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_{k-1} \\ x_{k} \\\end{bmatrix}$$

The algorithm will handle each block separately. In the general case, for each block, we'll have a query for our strong discrete log oracle, from which the oracle will tell us a modularity condition, $t = z \pmod{m}$. We'll also get a set $S \subseteq \{0,1,\cdots,p-1\}$ so that $\bigvee_{s\in S}\left[ t = s \pmod{p} \right]$ must hold. After processing all the blocks, we'll need to check that there is a choice of $t$ that satisfies the conjunction of all these conditions. This can be done by making sure there is a common element $s$ in all the sets $S$ so that the equations $t = s \pmod p$ and $t = z_j \pmod{m_j}$ are all simultaneously satisfied, where $j$ ranges over the blocks.

There are also some special cases that arise throughout the procedure. In these cases, we'll get conditions of the form $t > \ell$ for some value of $\ell$, or of the form $t = s$ for some specific integer $s$, from certain blocks, or we might even find that no $t$ can exist. These can be incorporated into the logic for the general case without issue.

We now describe the subprocedure for handling each Jordan block. Fix such a block.
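The final feasibility check, combining the per-block conditions $t = z_j \pmod{m_j}$, is a Chinese-remainder computation. A minimal sketch (the helper name and interface are mine, not from the answer):

```python
from math import gcd

def crt_merge(r1, m1, r2, m2):
    """Combine t = r1 (mod m1) and t = r2 (mod m2) into a single condition
    t = r (mod lcm(m1, m2)), or return None if they are incompatible."""
    g = gcd(m1, m2)
    if (r2 - r1) % g != 0:
        return None
    lcm = m1 // g * m2
    # Solve r1 + m1*k = r2 (mod m2) for k, using a modular inverse.
    k = ((r2 - r1) // g * pow(m1 // g, -1, m2 // g)) % (m2 // g)
    return ((r1 + m1 * k) % lcm, lcm)

# Example: one block demands t = 2 (mod 3), another t = 3 (mod 5):
assert crt_merge(2, 3, 3, 5) == (8, 15)
# Incompatible conditions are detected:
assert crt_merge(1, 2, 0, 2) is None
```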
Begin by focusing on just the last coordinate in the block. The condition $y = A^tx$ requires that $y_k = \lambda^t x_k$. In other words, it's an instance of the discrete log problem in some field extension of $\mathbb{F}_q$. We then use an oracle to solve it, which either results in no solution, or else gives a modularity condition on $t$. If "no solution" is returned, we return indicating such. Otherwise, we get a condition $t = z \pmod{m}$, which is equivalent to $y_k = \lambda^t x_k$.

To handle the other coordinates, we start with the following standard formula for powers of a Jordan block: $$\begin{bmatrix}\lambda & 1 & & & & \\ & \lambda & 1 & & & \\ & & \lambda & 1 & & \\ & & & \ddots & & \\ & & & & \lambda & 1 \\ & & & & & \lambda \\\end{bmatrix}^t=\begin{bmatrix}\lambda^t & \binom{t}{1}\lambda^{t-1} & \binom{t}{2}\lambda^{t-2} & \cdots & \cdots & \binom{t}{k-1}\lambda^{t-k+1} \\ & \lambda^t & \binom{t}{1}\lambda^{t-1} & \cdots & \cdots & \binom{t}{k-2}\lambda^{t-k+2} \\ & & \ddots & \ddots & \vdots & \vdots\\ & & & \ddots & \ddots & \vdots\\ & & & & \lambda^t & \binom{t}{1}\lambda^{t-1}\\ & & & & & \lambda^t\end{bmatrix}$$

First, let's take care of the case in which $x_k = 0$. Since we already have the modularity condition which implies $y_k = \lambda^t x_k$, we can assume that $y_k = 0$ also. But then we can just reduce to focusing on the first $k-1$ entries of $x$ and $y$, and the top-left $(k-1)\times (k-1)$ submatrix of the Jordan block. So from now on, assume that $x_k \ne 0$.

Second, we'll handle the case in which $\lambda = 0$. In this case, the powers of the Jordan block have a special form, and force either $t = z$ for some $z \le k$, or else $t > k$, with no other conditions. I won't belabor the cases, but suffice it to say that each can be checked for efficiently. (Alternatively, we could reduce to the case where $A$ is invertible; see my comment on the question.)
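The binomial entry formula for powers of a Jordan block quoted above can be sanity-checked over the integers (a small illustrative script with an arbitrary $\lambda$, block size, and exponent; not part of the reduction itself):

```python
from math import comb

def mat_mul(A, B):
    """Multiply two square integer matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, t):
    """Raise a square integer matrix to the power t by repeated multiplication."""
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(t):
        R = mat_mul(R, A)
    return R

lam, k, t = 2, 3, 5
J = [[lam if i == j else int(j == i + 1) for j in range(k)] for i in range(k)]
Jt = mat_pow(J, t)
# Entry (i, j) of J^t should be C(t, j-i) * lam^(t-(j-i)) for j >= i:
expected = [[comb(t, j - i) * lam ** (t - (j - i)) if j >= i else 0
             for j in range(k)] for i in range(k)]
assert Jt == expected
```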
Finally, we arrive at the general case. Since we already have the modularity condition which implies that $y_k = \lambda^t x_k$, we can assume that condition holds, and use $y_k x_k^{-1}$ as a stand-in for $\lambda^t$. More generally, we can use $y_kx_k^{-1}\lambda^{-z}$ to represent $\lambda^{t-z}$. Thus we need to check whether the following system holds for some choice of $t$: $$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_{k-1} \\ y_{k} \\\end{bmatrix}=\begin{bmatrix}y_kx_k^{-1} & \binom{t}{1}y_kx_k^{-1}\lambda^{-1} & \binom{t}{2}y_kx_k^{-1}\lambda^{-2} & \cdots & \cdots & \binom{t}{k-1}y_kx_k^{-1}\lambda^{-(k-1)} \\ & y_kx_k^{-1} & \binom{t}{1}y_kx_k^{-1}\lambda^{-1} & \cdots & \cdots & \binom{t}{k-2}y_kx_k^{-1}\lambda^{-(k-2)} \\ & & \ddots & \ddots & \vdots & \vdots\\ & & & \ddots & \ddots & \vdots\\ & & & & y_kx_k^{-1} & \binom{t}{1}y_kx_k^{-1}\lambda^{-1}\\ & & & & & y_kx_k^{-1}\end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_{k-1} \\ x_{k} \\\end{bmatrix}$$ Observe that whether the equation holds depends only on $t \pmod{p}$; this is because the dependence on $t$ is only polynomial, $t$ must be an integer, and the above equations are over a field of characteristic $p$. Hence we can just try each value of $t \in \{0,1,\ldots,p-1\}$ separately. The set $S$ we will return is just the choices of $t$ for which the system is satisfied.
So now, except for some special cases, the per-block subprocedure has found a modularity condition $t = a \pmod{m}$ and a set $S$ so that $t = s \pmod{p}$ must hold for some $s \in S$. These conditions are equivalent to $y = A^tx$ within this specific Jordan block. So we return these from the subprocedure. The special cases either conclude that no $t$ can exist (in which case the subprocedure immediately returns an indication of that), or else we have a modularity condition $t = a \pmod{m}$ and some special condition like $t = s$ for an integer $s$, or $t > \ell$ for some integer $\ell$. In any case, the conditions involved are all equivalent to $y = A^tx$ within this Jordan block. So, as mentioned above, the subprocedure just returns these conditions.

This concludes the specification of the per-block subprocedure, and of the algorithm as a whole. Its correctness and efficiency follow from the preceding discussion.

Subtleties with using JCF in the second reduction: As mentioned in the second reduction, there are some subtleties that arise from working with the JCF. There are a few observations for mitigating these problems:

Extensions of finite fields are normal. This means that if $P$ is an irreducible polynomial over $\mathbb{F}_q$, then any extension of $\mathbb{F}_q$ containing a root of $P$ contains all the roots of $P$. In other words, the splitting field of an irreducible polynomial $P$ of degree $d$ has degree only $d$ over $\mathbb{F}_q$.
There is a generalization of the Jordan canonical form, called the primary rational canonical form (PRCF), which does not require field extensions to be written down. In particular, if $A$ is a matrix with entries in $\mathbb{F}_q$, then we can write $A = P^{-1}QP$ for some matrices $P,Q$ with entries in $\mathbb{F}_q$, where moreover $Q$ is in PRCF. Additionally, if we pretend that the entries of $A$ live in a field $\mathbb{F}'$ extending $\mathbb{F}_q$ which contains all the eigenvalues of $A$, then $Q$ will in fact be in JCF. Thus we can view computing the JCF of $A$ as a special case of computing the PRCF.

Using the form of the PRCF, we can factor computing the JCF of $A$ as:

(1) computing the PRCF of $A$ over $\mathbb{F}_q$;

(2) computing the PRCF of each block $C$ (borrowing the notation from the Wikipedia article) in the PRCF of $A$, over an extension field $\mathbb{F}'$, where $\mathbb{F}'$ is chosen to contain all the eigenvalues of $C$.

The key advantage of this factorization is that the characteristic polynomials of the blocks $C$ will all be irreducible, and hence, by our first observation, we can choose $\mathbb{F}'$ to have degree the size of $C$ (which is at most $n$) over $\mathbb{F}_q$. The downside is that now we have to use different extension fields to represent each block of the JCF, so the representation is atypical and complicated.

Thus, given the ability to compute the PRCF efficiently, we can compute a suitable encoding of the JCF efficiently, and this encoding is such that working with any particular block of the JCF can be done within an extension field of degree at most $n$ over $\mathbb{F}_q$.

As for computing the PRCF efficiently, the paper "A Rational Canonical Form Algorithm" (K. R. Matthews, Math.
Bohemica 117 (1992), 315-324) gives an efficient algorithm to compute the PRCF when the factorization of the characteristic polynomial of $A$ is known. For fixed characteristic (such as we have), factoring univariate polynomials over finite fields can be done in deterministic polynomial time (see, e.g., "On a New Factorization Algorithm for Polynomials over Finite Fields" (H. Niederreiter and R. Göttfert, Math. of Computation 64 (1995), 347-353)), so the PRCF can be computed efficiently.
Problem Statement Let's run an election. $i \in \text{voters}$ $j \in \text{candidates}$ $x_j \in \{ 0, 1 \}$ The candidate is chosen by setting this to 1. This is the election result. $b_{i,j} \in [0,1]$ Ballot of voter i for candidate j. Voter gives bigger numbers if he likes the candidate. This is the input to the election. Now, how do we choose $x_j$ so that the best group of candidates win? Here is one optimization. edit 2: Actually this is a better problem. Maximize $Z$ subject to $\begin{array}{ll} \forall _{j } , & \sum_{i} f_i * b_{ij}^2 * x_j \geq Z * x_j & \text{ Minimum winning score is maximized.} \\ \forall _{i } , & \sum_{j} f_i * b_{ij} * x_j = 1 & \text{The weight of each voter is the same.} \leq \text{works too}\\ & \sum_{j } x_j = N & \text{ The number of seats is N.} \end{array} $ When I try this in Gurobi, it complains, "Q matrix is not positive semi-definite". However, I can set an upper bound on f and then it will work. Also, I can linearize something: $ \forall _{j } , \quad N*(x_j-1)+\sum_{i} f_i * b_{ij}^2 \geq Z \quad \text{ Minimum winning score is maximized.} $ Gurobi does solve this, though it takes a lot of time. I wish I could linearize all the constraints so there is no $f*x$ term. It is also possible to just say $\text{Maximize} \quad {\displaystyle \min_j \sum_i \frac{b_{ij}^2}{\sum_j x_j*b_{ij}}}$ but I'm not sure this helps, though it does get rid of f. 
Here's another related problem $ \begin{array}{lll} \text{Minimize} & { \max_{i } \sum_{j } f_i * b_{ij} * x_j} & \text{The representation of each voter is fair.} \\ \text{Subject to} & {\forall \ j } \ \ \sum_{i } f_i * b_{ij}^2 *x_j \geq x_j \ \ \ & \text{The value of each winning seat is the same.} \end{array} $ old stuff below edit 1: I realize a better problem to solve is this one: Maximize $Z$ subject to $\begin{array}{ll} \forall _{j } , & \sum_{i} f_i * b_{ij} * x_j \geq Z * x_j & \text{ Minimum winning score is maximized.} \\ \forall _{i } , & \sum_{j} f_i * b_{ij} * x_j = 1 & \text{The weight of each voter is the same.} \leq \text{works too}\\ & \sum_{j } x_j = N & \text{ The number of seats is N.} \end{array} $ When I try this in Gurobi, it complains, "Q matrix is not positive semi-definite". The model and explanation below is old but helpful in understanding the problem above. $\begin{array}{ll} \text{maximize } \text{ } \text{ } \sum_{i} & \sum_{j} f_i * b_{ij} * x_j \text{ } \text{ } \text{ } &\text{Total score is maximized.} \\ \text{ subject to } \text{ } \text{ } \forall _{i } , & \sum_{j} f_i * b_{ij} * x_j \leq 1 & \text{The weight of each voter is the same, basically.}\\ \text{ and subject to} & \sum_{j } x_j = N & \text{The number of seats is N.} \end{array} $ What is this $f_i$? It lets you vote for multiple candidates. So if two of your candidates that you like end up winning, half your vote went to one and half to the other. Basically, $f_i$ is a way to divide but using multiplication. $f_i \in (0,1]$ How to Help I'm glad Gurobi will do this problem. I was able to implement it. And it is too slow and I want it to go faster. I want to know what gurobi is doing. I have gurobi's log file but it is hard to interpret. I also have my code. In the example I am running, I have 216 voters, 10 candidates, and 5 winners. It takes 42 seconds. What is this problem related to and are there different forms to implement it? 
It is a kind of load balancing where the loading is factorized to $f_i * x_j$ instead of $x_{i,j}$. It is a binary problem in $x_j$ and it is also continuous in $f_i$. This is a committee selection problem. It's also almost a binary quadratic problem except it has this additional $f_i$, which is continuous. There could be a simplification of $f_i$ because it is either 1 or $\frac{1}{\sum_{j} b_{ij} * x_j}$. Maybe this quadratic constraint can be simplified.
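The simplification of $f_i$ suggested above (either $1$ or $1/\sum_j b_{ij} x_j$) can be sketched directly; function and variable names here are illustrative, not from any solver API:

```python
def voter_weights(ballots, x):
    """The simplification suggested in the text: f_i = 1 / sum_j b_ij * x_j,
    so that sum_j f_i * b_ij * x_j = 1 for every voter; f_i = 1 is the
    fallback when a voter gave no support to any winner."""
    weights = []
    for row in ballots:
        s = sum(b_ij * x_j for b_ij, x_j in zip(row, x))
        weights.append(1.0 / s if s > 0 else 1.0)
    return weights

# Illustrative data: 3 voters, 3 candidates, N = 2 seats (candidates 0 and 1 win).
ballots = [[1.0, 0.5, 0.0],
           [0.0, 1.0, 0.3],
           [1.0, 0.0, 0.9]]
x = [1, 1, 0]
f = voter_weights(ballots, x)
# Each voter's total weighted support of the winners is normalized to 1:
assert all(abs(sum(fi * bij * xj for bij, xj in zip(row, x)) - 1.0) < 1e-12
           for fi, row in zip(f, ballots))
```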
This problem is a modified version of a problem from the 1984 Australian Mathematics Competition. The problem: let $f:\mathbb Z^+ \to \mathbb Z^+$ be a function from positive integers to positive integers which satisfies the following three conditions: $f(2)=2$; $f(mn)=f(m) f(n)$ for $m,n \in \mathbb Z^+$; and if $m>n$ then $f(m)>f(n)$ for $m,n \in \mathbb Z^+$. Find such an $f$ and prove it is the only function satisfying the above three conditions. It can be proved that $f(n)=n$ by induction very easily. An alternative attempt would be: suppose $f(n)=A_1+A_2n+A_3n^2+A_4n^3+ \cdots$. Then, $$f(n \cdot n)=A_1+A_2n^2+A_3n^4+A_4n^6+\cdots=(A_1+A_2n+A_3n^2+A_4n^3+\cdots)^2,$$ and by comparing coefficients of like powers of $n$ it can be shown that $A_i=0$ for $i\in \mathbb Z^+\setminus\{ 2 \}$ and $A_2=1$, which leads to $f(n)=n$. My problem is: is this alternative method valid, since it supposes the function to be a polynomial, but the expected function could be any function (one which can't be expressed as a polynomial)? I have a doubt that this is not a valid method to prove this is the only function.
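For completeness, here is one standard way the "easy" induction alluded to goes (a sketch, reconstructed; details hedged):

```latex
f(1)=1, \text{ since } f(2)=f(2\cdot 1)=f(2)\,f(1)=2\,f(1).\\[2pt]
f(4)=f(2)^2=4, \text{ and } 2=f(2)<f(3)<f(4)=4 \text{ forces } f(3)=3.\\[2pt]
\text{If } f(k)=k \text{ for all } k\le 2n, \text{ then } f(2n+2)=f(2)\,f(n+1)=2n+2,\\[2pt]
\text{and } 2n=f(2n)<f(2n+1)<f(2n+2)=2n+2 \text{ forces } f(2n+1)=2n+1.
```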
I'm trying to align two equations using eqnarray. Sample:

\documentclass{article}
\begin{document}
\subsubsection*{\begin{center}Variance equations\end{center}}
\begin{eqnarray}
\mu = \tfrac{1}{NM}\sum\limits_{x=1}^N\sum\limits_{y=1}^MA(y,x)\nonumber\\
varTot = \tfrac{1}{NM}\sum\limits_{x=1}^N\sum\limits_{y=1}^M(A(y,x)-\mu)^2
\end{eqnarray}
\end{document}

The task description specifically states that I should use eqnarray. Image of what it should look like (I achieved this earlier by using two equation environments):
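One way the pictured alignment is commonly achieved while still using eqnarray (a sketch, assuming the goal is to align the two equations on their equals signs and that varTot is a variable name):

```latex
\begin{eqnarray}
\mu &=& \tfrac{1}{NM}\sum\limits_{x=1}^N\sum\limits_{y=1}^M A(y,x) \nonumber\\
\mathit{varTot} &=& \tfrac{1}{NM}\sum\limits_{x=1}^N\sum\limits_{y=1}^M \left(A(y,x)-\mu\right)^2
\end{eqnarray}
```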
Pacific Northwest Geometry Seminar Start Date: 04/01/2006 End Date: 04/03/2006 Alejandro Adem (University of British Columbia) Jim Bryan (University of British Columbia) Yong-Geun Oh (University of Wisconsin-Madison) Ben Chow (UC San Diego) Simon Brendle (Stanford University) Gang Tian (Princeton University) University of British Columbia Alejandro Adem (University of British Columbia) A Stringy Product for Twisted Orbifold K-theory Given an orbifold X with inertia orbifold LX, we construct a product for the twisted K-theory of LX which extends the orbifold cohomology product of Chen & Ruan. The twisting arises from the "inverse transgression" of elements in $H^4(BX, \mathbb{Z})$. This is joint work with Y. Ruan and B. Zhang. Simon Brendle (Stanford University) Global convergence of the Yamabe flow Let $M$ be a compact manifold of dimension $n \geq 3$. Along the Yamabe flow, a Riemannian metric on $M$ is deformed such that $\frac{\partial g}{\partial t} = -(R_g - r_g) \, g$, where $R_g$ is the scalar curvature associated with the metric $g$ and $r_g$ denotes the mean value of $R_g$. It is known that the Yamabe flow exists for all time. Moreover, if $3 \leq n \leq 5$ or $M$ is locally conformally flat, then the solution approaches a metric of constant scalar curvature as $t \to \infty$. I will describe how this result can be generalized to higher dimensions. The key ingredient in the proof is a new construction of test functions whose Yamabe energy is less than that of the round sphere. Jim Bryan (University of British Columbia) Donaldson-Thomas and Gromov-Witten invariants of orbifolds and their crepant resolutions A well known principle in physics asserts that string theory on an orbifold X is equivalent to string theory on Y, any crepant resolution of X.
Donaldson-Thomas and Gromov-Witten theory are mathematical counterparts of type IIA and type IIB topological string theory, and so it is expected that one can recover the Gromov-Witten or Donaldson-Thomas invariants of Y from those on X. We will mathematically formulate and discuss these correspondences and illustrate them with some examples. Ben Chow (UC San Diego) On the works of D. Glickenstein and F. Luo on semi-discrete curvature flows Yong-Geun Oh (University of Wisconsin-Madison) Lagrangian currents, Calabi invariants and non-simpleness of the area-preserving homeomorphism group of S^2 In this talk, I will introduce the notion of 'Hamiltonian limits' of Hamiltonian flows, and define continuous Hamiltonian flows and their associated Hamiltonian functions, which I call 'topological Hamiltonians'. I will give the proof of the uniqueness of the topological Hamiltonian associated to continuous Hamiltonian flows. The uniqueness proof uses the method of geometric measure theory and some $C^0$ symplectic geometry. I will discuss some implications of this study for a well-known conjecture in dynamical systems on the simpleness of the area-preserving homeomorphism group of $S^2$. Gang Tian (Princeton University) Kahler-Ricci flow and complex Monge-Ampere equation
In the case of a simple pendulum (also called a mathematical pendulum or simple gravity pendulum), one assumes that all of the mass is in the bob and the rest of the pendulum is massless. An example of the simple pendulum is given in the image below. The simple pendulum (see Wikipedia or HyperPhysics) leads to a simple differential equation by using Newton's second law:$$\ddot{\theta}+\frac{g}{l}\sin(\theta)=0.$$ This pendulum gives the easiest way to look at harmonic motion. The above case is what they call the simple pendulum. You could add an external source so it would be a driven simple pendulum, or friction so that it becomes a simple pendulum with friction. If you want to complexify it further, you could drop the assumption that all of the mass is in the bob and add inertia to the picture (so a real-life pendulum). This kind of pendulum is called the physical pendulum (or compound pendulum): the swinging body is no longer considered a point mass, but a mass with finite measurements. An example (compared to the simple pendulum) is given in the figure below. This also leads to equations using Newton's second law applied to angular momentum (just like the ones used to derive the equation for the simple pendulum).
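The simple-pendulum equation above can be integrated numerically; a minimal sketch (semi-implicit Euler with illustrative parameters) checks that for small angles the motion is nearly harmonic with period $T = 2\pi\sqrt{l/g}$:

```python
import math

def simulate_pendulum(theta0, g=9.81, l=1.0, dt=1e-4, steps=20000):
    """Integrate theta'' + (g/l) sin(theta) = 0 with semi-implicit Euler,
    starting from rest at angle theta0 (parameters are illustrative)."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        omega -= (g / l) * math.sin(theta) * dt
        theta += omega * dt
    return theta

# For small angles the motion is nearly harmonic with period T = 2*pi*sqrt(l/g),
# so after one period the pendulum should be back near its starting angle.
T = 2 * math.pi * math.sqrt(1.0 / 9.81)
theta_T = simulate_pendulum(0.01, steps=round(T / 1e-4))
assert abs(theta_T - 0.01) < 1e-3
```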
When considering the system \begin{equation*} \mathbf{x}'=A\mathbf{x} \qquad \text{with } \ A=\begin{pmatrix} a & b \\ c & d\end{pmatrix} \end{equation*} and discovering that it has a repeated root $\mu\ne 0$ but only one eigenvector, we know that $\mu>0$ corresponds to an unstable node and $\mu<0$ corresponds to a stable node. Furthermore, in the pictures below, a straight line is directed along this single eigenvector. However, how does one distinguish between the two upper pictures and the two lower pictures? Observe that $\mu$ is a double root of the equation $\lambda ^2- (a+d)\lambda + ad -bc=0$ with discriminant $D:=(a+d)^2-4(ad -bc)=(a-d)^2+4bc$, and we consider the case $D=0$, which implies $bc <0$ (except when $a=d$). Thus $b$ and $c$ are nonzero and have opposite signs. Then, if $b<0$ (and $c>0$) the "rotation" is counter-clockwise, and if $b>0$ (and $c<0$) the "rotation" is clockwise. I say "rotation" because it is not a real rotation as in the case of complex-conjugate roots, only a half-turn rotation by $\pm \pi$, but it works! Indeed, if we slightly increase $|b|$ or $|c|$, we get $D<0$ and we will have a focal point, and the picture needs to be consistent. If $a=d=\mu$, then either $b=0$ or $c=0$ but not both (otherwise there would be two linearly independent eigenvectors), and we use the same sign criteria, looking at whichever of $b$ or $c$ is not $0$.
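The sign criterion can be checked numerically: along a trajectory of $\mathbf{x}'=A\mathbf{x}$, the angular velocity is $\dot\theta = (x\dot y - y\dot x)/r^2$, so the sign of its numerator at sample points gives the turning direction. A sketch with two illustrative matrices, each having the repeated eigenvalue $1$ and a single eigenvector:

```python
def theta_dot_sign_values(A, pts):
    """Numerator of d(theta)/dt = (x y' - y x')/r^2 along x' = A x."""
    (a, b), (c, d) = A
    vals = []
    for x, y in pts:
        xp, yp = a * x + b * y, c * x + d * y
        vals.append(x * yp - y * xp)
    return vals

pts = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 2.0)]
A_ccw = [[0.0, -1.0], [1.0, 2.0]]   # repeated eigenvalue 1, b = -1 < 0, c = 1 > 0
A_cw = [[2.0, 1.0], [-1.0, 0.0]]    # repeated eigenvalue 1, b = 1 > 0, c = -1 < 0
assert all(v >= 0 for v in theta_dot_sign_values(A_ccw, pts))  # counter-clockwise
assert all(v <= 0 for v in theta_dot_sign_values(A_cw, pts))   # clockwise
```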
fixed in 10.0.2 Update: I have tried the following. I think there is a bug. Plot[1/Sqrt[-1 + 2^2 Sech[x]^2], {x, 0, ArcCosh[2]}, Ticks -> {{ArcCosh[2]}, Automatic}] This is the antiderivative. primitive = Integrate[1/Sqrt[-1 + 2^2*Sech[x]^2], x]; Plot[primitive, {x, 0, ArcCosh[2]}, Ticks -> {{ArcCosh[2]}, {π/4, π/2}}] Limit[primitive, x -> 0] 0 So far, that's right. But look at this situation: the limits of primitive are the same regardless of the direction, and the limit value is negative. Is this right? (version 10) Limit[primitive, x -> ArcCosh[2], Direction -> -1] // FullSimplify -π/2 Limit[primitive, x -> ArcCosh[2], Direction -> 1] // FullSimplify -π/2 But this computation is correct in version 9: (version 9) Limit[primitive, x -> ArcCosh[2], Direction -> 1] // FullSimplify π/2 And, as mentioned in the Origin section, the definite integral gives an erroneous result in version 9 as well. =============================================================== Edit The definite integral can be solved using the substitution method, as Dr. Wolfgang Hintze says, like this: $u$ = $\frac{\cosh ^2(x)-1}{a^2-1}$, $dx$ = $\frac{\left(a^2-1\right)}{2 \sinh (x) \cosh (x)}du$ $\int_0^1 \frac{a^2-1}{2 \sinh (x) \cosh (x) \sqrt{a^2 \text{sech}^2(x)-1}} \, du$ $\frac{1}{2} \int_0^1 \frac{a^2-1}{\sqrt{a^2 \sinh ^2(x)-\sinh ^2(x) \cosh ^2(x)}} \, du$ $\frac{1}{2} \int_0^1 \frac{1}{\sqrt{\frac{\left(\cosh ^2(x)-1\right) \left(a^2-\cosh ^2(x)\right)}{\left(a^2-1\right) \left(a^2-1\right)}}} \, du$ $\frac{1}{2} \int_0^1 \frac{1}{\sqrt{u (1-u)}} \, du=\frac{\pi }{2}$ It is solved in the real-number region, and ArcCosh[2] is also a real number. But I don't know why Mathematica makes $\int_0^{\cosh ^{-1}(2)} \frac{1}{\sqrt{2^2 \text{sech}^2(x)-1}}\, dx$ produce an imaginary term like $\left(\frac{1}{2}-i\right) \pi$.
=============================================================== Origin I have tried two expressions (version 10): (1) Integrate[1/Sqrt[-1 + 2^2*Sech[x]^2], {x, 0, ArcCosh[2]}] $\left(\frac{1}{2}-i\right) \pi$ (2) $Assumptions = {a > 1}; Integrate[1/Sqrt[-1 + a^2*Sech[x]^2], {x, 0, ArcCosh[a]}] $\frac{\pi }{2}$ Why do they give different results? The first computation (1) produces the imaginary term $-i \pi$. I don't know why. (version 9) Integrate[1/Sqrt[-1 + 2^2*Sech[x]^2], {x, 0, ArcCosh[2]}] $\frac{3 \pi }{2}$ If possible, I'd like to know Mathematica's detailed process.
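Independently of Mathematica, the value $\pi/2$ can be confirmed numerically (a midpoint-rule sketch; the integrand has an integrable $1/\sqrt{\cdot}$ singularity at the upper endpoint, and the midpoint rule avoids evaluating exactly there):

```python
import math

def integrand(x):
    """1 / sqrt(a^2 sech^2(x) - 1) with a = 2."""
    return 1.0 / math.sqrt(4.0 / math.cosh(x) ** 2 - 1.0)

def midpoint_integral(f, a, b, n=200000):
    """Composite midpoint rule on [a, b] with n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

val = midpoint_integral(integrand, 0.0, math.acosh(2.0))
# Should match the real-substitution result pi/2:
assert abs(val - math.pi / 2) < 1e-2
```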
I have to determine the electric potential energy of a point charge $q$ at a position $r$ inside a uniform electric field $\vec{E}$. I tried to do it by using the work definite integral and then using $ V = \frac{U}{q} $, but I'm stuck because one of the integration limits is $\infty$, and the definite integral would look like $ qrE - qE\cdot\infty $. The problem is that I don't know whether this electric field extends to infinity (honestly, I think it does not), but I can't see any other way of figuring this out. The correct answer (according to the book) is $q\,\vec r \cdot \vec E$. Thanks. The issue is that you may not take $\infty$ as a reference point, since $\vec E$ is defined everywhere; say, in the $\hat i$ direction, $\vec E(x, y, z) = E_0\hat i$. Thus, when defining a potential $V(x, y, z)$ for this field, you'll have to choose a reference point, say $\vec r_0 = (x_0, y_0, z_0)$, and determine the potential at an arbitrary point $\vec r = (x, y, z)$ as$$V(\vec r) = -\int_{\vec r_0}^{\vec r}d\vec s\cdot\vec E = -\int_{x_0}^xdx\,E_0 = E_0(x_0 - x).$$In this case, the only meaningful quantities we can talk about are potential differences $\Delta V$, or the potential with respect to a fixed point $\vec r_0$. An infinite uniform electric field would be able to release an infinite amount of energy to a charge moving through it. To get a finite result you'd either need to allow the charge to move only a finite distance through the field, or the field would have to be non-uniform.
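As a numerical sanity check of the line-integral definition above (with illustrative values for $E_0$ and the endpoints, not from the problem):

```python
# Check V(r) = E0*(x0 - x) for a uniform field E = E0 x-hat by summing -E.ds
# along the straight path from x0 to x (illustrative values).
E0 = 3.0
x0, x = 0.0, 2.0
n = 100000
dx = (x - x0) / n
V = -sum(E0 * dx for _ in range(n))  # -integral of E dx from x0 to x
assert abs(V - E0 * (x0 - x)) < 1e-6
```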
As far as I understand, the Higgs mechanism is a crucial component of the Standard Model, responsible for the weak gauge bosons acquiring mass, otherwise forbidden by renormalizability constraints. However, is there any justification for the fermion masses arising from a Yukawa coupling to the Higgs field, or is this just assumed as a handy byproduct with no alternative explanations present? What counters the idea of considering the masses as fundamental parameters, instead of associated with the Yukawa couplings? @gj255's answer is impeccable, of course, but I'll illustrate his point in detail for the mass of the electron, since this is really "the second job of the Higgs", and has nothing to do with the Higgs mechanism, only the SSB! That is, it cares not about the eating of the Goldstone bosons by the gauge bosons. However, it makes the gauge theory possible to start with, safeguarding gauge invariance. Weinberg was justifiably proud of this particular technical advance involved in this "second job of the Higgs". There is no other plausible way to generate weakly interacting fermion masses, so, without it, the SM is a non-starter! First note that $e_R$, the right-handed electron, is a gauge singlet, but $ \begin{pmatrix} \nu_L \\ e_L \end{pmatrix}$ is a gauge SU(2) doublet. So a brute-force mass term $m_e (\bar{e_L} e_R + \bar{e_R}e_L) $ would not be an SU(2) singlet, and the gauge invariance would be broken, with nasty, forbidding consequences. The Higgs doublet, however, saves the day: $$ \Phi= \frac{h+v}{\sqrt{2}} e^{2i \vec {\pi}\cdot\vec {\tau} /v}\begin{pmatrix} 0 \\ 1 \end{pmatrix},$$ where $h$ is the neutral Higgs, the three $\vec{\pi}$s are the Goldstone bosons eaten up by the Ws and the Z (absent in the unitary gauge, of no concern to us here), and $v \sim 0.25$ TeV is the cornerstone Higgs v.e.v.
Dotting two weak isodoublets together then does yield an SU(2) singlet, preserving gauge invariance: $$-y \overline{ \begin{pmatrix} \nu_{eL} \\ e_L \end{pmatrix} } \cdot \begin{pmatrix} 0 \\ \frac{h+v}{\sqrt 2} \end{pmatrix} ~e_R +\hbox{h.c.},$$ where $y$ is an undetermined dimensionless Yukawa coupling. The gauge theory is thus saved. We can rewrite this Lagrangian term as $$ -\frac{y v}{\sqrt 2}(\bar{e_L}e_R +\bar{e_R} e_L) -\frac{y h}{\sqrt 2}(\bar{e_L}e_R +\bar{e_R} e_L). $$ We can then identify $m_e=\frac{y v}{\sqrt 2}$, and thus a different Yukawa for every lepton/fermion, really. But then the trailing term, with the Higgs coupling, will have its Yukawa coupling strength be $m_e/v$, and correspondingly for other particles. Weinberg was quite delighted to observe this peculiar feature in his original paper, half a century ago, contrasting the Higgs coupling to the muon versus the electron: fermion masses cannot avoid proportionality to Yukawas. Takeaway: SSB is crucial, the Higgs mechanism irrelevant, and a realistic consistent gauge theory impossible without the Higgs. Fermion masses are not an aside of the SM at all; they are the key to it. It's not possible to give the fermions bare mass terms in a gauge-invariant way. In terms of left-handed and right-handed Weyl spinors, the mass term we desire is $$ m( \bar{\psi}_L \psi_R + \bar{\psi}_R \psi_L) \,,$$ but for all fermions in the Standard Model, $\psi_R$ is a singlet under $\mathrm{SU}(2)$ whilst $\psi_L$ is one component of an $\mathrm{SU}(2)$ doublet.
I feel like something is off with label smoothing. While the implementation is correct and agrees with the paper, my intuition suggests that the additional eps/N should not be added to the term for the correct class. In the notebook for label smoothing we see the following explanation: Another regularization technique that’s often used is label smoothing. It’s designed to make the model a little bit less certain of its decision by changing a little bit its target: instead of wanting to predict 1 for the correct class and 0 for all the others, we ask it to predict $1-\epsilon$ for the correct class and $\frac{\epsilon}{N}$ for all the others, with $\epsilon$ a (small) positive number and $N$ the number of classes. This can be written as: loss = (1-ε) ce(i) + ε \sum ce(j) / N where ce(x) is cross-entropy of x (i.e. -\log(p_{x})), and i is the correct class. However, it turns out that the second sum is over the entire class list. I.e., we never take special care to ignore the correct class. Thus, the coefficient for ce(i) becomes (1-ε + \frac{\epsilon}{N}). This pushes the minimum of the function further to the right. For example, in the binary case with eps=0.1, if we use the formula as implemented, the minimum is found at x=0.95 instead of x=0.9.
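The shift of the minimum is easy to see numerically. A small sketch of the binary case (variable names are mine, not the notebook's):

```python
import numpy as np

eps, N = 0.1, 2  # smoothing amount and number of classes (binary case)

def smoothed_loss(p):
    """Label-smoothed cross-entropy as implemented: the smoothing term sums
    over ALL classes, so the correct class gets weight (1 - eps + eps/N)."""
    return -((1 - eps + eps / N) * np.log(p) + (eps / N) * np.log(1 - p))

p = np.linspace(0.01, 0.99, 9801)      # grid of predicted probabilities
p_min = p[np.argmin(smoothed_loss(p))]
print(f"minimum at p = {p_min:.2f}")   # minimum at p = 0.95
```

If the smoothing sum instead excluded the correct class (weight exactly 1-eps), the minimum would sit at 0.9, matching the intuition in the post.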
This section might be tough – but don’t be put off by it. I promise that, after we have got over this section, things will be easy. And I don’t like all these summations and subscripts any more than you do. Suppose that we have a system of \( N\) particles, and that the force on the \( i\)th particle (\( i=1\) to \( N\)) is \( \bf{F}_{i}\). If the \( i\)th particle undergoes a displacement \( \delta\bf{r}_{i}\), the total work done on the system is \( \sum_{i}\bf{F}_{i}\cdot\delta\bf{r}_{i}\). The position vector \( \bf{r}\) of a particle can be written as a function of its generalized coordinates; and a change in \( \bf{r}\) can be expressed in terms of the changes in the generalized coordinates. Thus the total work done on the system is \[ \sum_{i}\bf{F}_{i}\cdot\sum_{j}\dfrac{\partial\bf{r}_{i}}{\partial q_{j}}\delta q_{j} \label{13.4.1}\] which can be written \[ \sum_{j}\sum_{i}\bf{F}_{i}\cdot\dfrac{\partial\bf{r}_{i}}{\partial q_{j}}\delta q_{j}. \label{13.4.2}\] But by definition of the generalized force, the work done on the system is also \[ \sum_{j}P_{j}\delta q_{j}. \label{13.4.3}\] Thus the generalized force \( P_{j}\) associated with generalized coordinate \( q_{j}\) is given by \[ P_{j}=\sum_{i}\bf{F}_{i}\cdot\dfrac{\partial\bf{r}_{i}}{\partial q_{j}}. \label{13.4.4}\] Now \( \bf{F}_{i}=m_{i}\ddot{\bf{r}}_{i}\), so that \[ P_{j}=\sum_{i} m_{i}\ddot{\bf{r}}_{i}\cdot\dfrac{\partial\bf{r}_{i}}{\partial q_{j}}. \label{13.4.5}\] Also \[ \dfrac{d}{dt} \left(\dot{\bf{r}}_{i}\cdot\dfrac{\partial\bf{r}_{i}}{\partial q_{j}}\right)=\ddot{\bf{r}}_{i}\cdot\dfrac{\partial\bf{r}_{i}}{\partial q_{j}}+\dot{\bf{r}}_{i}\cdot\dfrac{d}{dt}\left(\dfrac{\partial\bf{r}_{i}}{\partial q_{j}}\right). 
\label{13.4.6}\] Substitute for \( \ddot{\bf{r}}_{i}\cdot\dfrac{\partial\bf{r}_{i}}{\partial q_{j}}\) from Equation \( \ref{13.4.6}\) into Equation \( \ref{13.4.5}\) to obtain \[ P_{j}=\sum_{i}m_{i} \left[\dfrac{d}{dt} \left(\dot{\bf{r}}_{i}\cdot\dfrac{\partial \dot{\bf{r}}_{i}}{\partial \dot{q}_{j}} \right)-\dot{\bf{r}}_{i}\cdot\dfrac{\partial \dot{\bf{r}}_{i}}{\partial q_{j}} \right]. \label{13.4.8}\] You may not be immediately comfortable with the assertions \[ \dfrac{\partial \dot{\bf{r}}_{i}}{\partial \dot{q}_{j}}=\dfrac{\partial \bf{r}_{i}}{\partial q_{j}}\] and \[ \dfrac{d}{dt} \left(\dfrac{\partial \bf{r}_{i}}{\partial q_{j}} \right)=\dfrac{\partial \dot{\bf{r}}_{i}}{\partial q_{j}} \] so I’ll interrupt the flow briefly here with an example to try to justify these assertions and to understand what they mean. Example \(\PageIndex{1}\) Consider the relation between the coordinate \( x\) and the spherical coordinates \( r,\theta, \phi\): \[ x=r\sin\theta\cos\phi \label{A1}\tag{A1}\] In this example, \( x\) would correspond to one of the components of \( \bf{r}_{i}\), and \( r, \theta, \phi\) are the \( q_{1},q_{2},q_{3}\).
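Both assertions can also be checked symbolically for the relation (A1) just introduced. A quick SymPy sketch (my own check, not part of the original text):

```python
import sympy as sp

t = sp.symbols('t')
r, th, ph = (sp.Function(s)(t) for s in ('r', 'theta', 'phi'))

x = r * sp.sin(th) * sp.cos(ph)   # x = r sin(theta) cos(phi), equation (A1)
xdot = sp.diff(x, t)              # the velocity component

# First assertion: d(xdot)/d(rdot) equals dx/dr
rdot = sp.Derivative(r, t)
print(sp.simplify(sp.diff(xdot, rdot) - sp.diff(x, r)))   # 0

# Second assertion: d/dt (dx/dr) equals d(xdot)/dr
print(sp.simplify(sp.diff(sp.diff(x, r), t) - sp.diff(xdot, r)))  # 0
```

Both differences simplify to zero, confirming the two identities for this coordinate choice.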
From Equation (\( \ref{A1}\)), we easily derive \[ \dfrac{\partial x}{\partial r}=\sin\theta\cos\phi\quad\dfrac{\partial x}{\partial\theta}=r\cos\theta\cos\phi\quad\dfrac{\partial x}{\partial \phi}=-r\sin\theta\sin\phi \label{A2}\tag{A2}\] and differentiating equation (\( \ref{A1}\)) with respect to time, we obtain \[ \dot{x}=\dot{r}\sin\theta\cos\phi+r\cos{\theta}\dot{\theta}\cos\phi-r\sin\theta\sin\phi\dot{\phi} \label{A3}\tag{A3}\] And from this we see that \[ \dfrac{\partial\dot{x}}{\partial\dot{r}}=\sin\theta\cos\phi \tag{A4.1}\] \[\dfrac{\partial\dot{x}}{\partial\dot{\theta}}=r\cos\theta\cos\phi \tag{A4.2}\] \[\dfrac{\partial\dot{x}}{\partial\dot{\phi}}=-r\sin\theta\sin\phi \label{A4}\tag{A4.3}\] Thus the first assertion is justified in this example, and I think you’ll see that it will always be true no matter what the functional dependence of \( \bf{r}_{i}\) on the \( q_{j}\). For the second assertion, consider \[ \dfrac{\partial x}{\partial r}=\sin\theta\cos\phi \quad\text{and hence}\quad \dfrac{d}{dt}\dfrac{\partial x}{\partial r}=\cos\theta\dot{\theta}\cos\phi-\sin\theta\sin\phi\dot{\phi}. \label{A5}\tag{A5}\] From equation (\( \ref{A3}\)) we find that \[ \dfrac{\partial \dot{x}}{\partial r}=\cos\theta\dot{\theta}\cos\phi-\sin\theta\sin\phi\dot{\phi}, \label{A6}\tag{A6}\] and the second assertion is justified. Again, I think you’ll see that it will always be true no matter what the functional dependence of \( \bf{r}_{i}\) on the \( q_{j}\). The kinetic energy \( T\) is \[ T=\sum_{i}\dfrac{1}{2}m_{i}\dot{r}_{i}^{2}=\sum_{i}\dfrac{1}{2}m_{i}\dot{\bf{r}}_{i}\cdot\dot{\bf{r}}_{i} \label{13.4.9}\tag{13.4.9}\] Therefore \[ \dfrac{\partial T}{\partial q_{j}}=\sum_{i}m_{i}\dot{\bf{r}}_{i}\cdot\dfrac{\partial \dot{\bf{r}}_{i}}{\partial q_{j}} \label{13.4.10}\tag{13.4.10}\] and \[ \dfrac{\partial T}{\partial \dot{q}_{j}}=\sum_{i}m_{i}\dot{\bf{r}}_{i}\cdot\dfrac{\partial \dot{\bf{r}}_{i}}{\partial \dot{q}_{j}}. 
\label{13.4.11}\tag{13.4.11}\] On substituting these in Equation \( \ref{13.4.8}\) we obtain \[ P_{j}=\dfrac{d}{dt}\dfrac{\partial T}{\partial \dot{q}_{j}}-\dfrac{\partial T}{\partial q_{j}}. \label{13.4.12}\tag{13.4.12}\] This is one form of Lagrange’s equation of motion, and it often helps us to answer the question posed in the last sentence of Section 13.2 – namely to determine the generalized force associated with a given generalized coordinate. If the various forces in a particular problem are conservative (gravity, springs and stretched strings, including valence bonds in a molecule) then the generalized force can be obtained by the negative of the gradient of a potential energy function – i.e. \( P_{j}=-\dfrac{\partial V}{\partial q_{j}}\). In that case, Lagrange’s equation takes the form \[ \dfrac{d}{dt}\dfrac{\partial T}{\partial \dot{q}_{j}}-\dfrac{\partial T}{\partial q_{j}}=-\dfrac{\partial V}{\partial q_{j}}. \label{13.4.13}\tag{13.4.13}\] In my experience, this is the most useful and most often encountered version of Lagrange’s equation. The quantity \( L=T-V\) is known as the lagrangian for the system, and Lagrange’s equation can then be written \[ \dfrac{d}{dt}\dfrac{\partial L}{\partial \dot{q}_{j}}-\dfrac{\partial L}{\partial q_{j}}=0. \label{13.4.14}\tag{13.4.14}\] This form of the equation is seen more often in theoretical discussions than in the practical solution of problems. It does enable us to see one important result. If, for one of the generalized coordinates, \( \dfrac{\partial L}{\partial q_{j}}=0\) (this could happen if neither \( T\) nor \( V\) depends on \( q_{j}\) – but of course it could also happen if \( \dfrac{\partial T}{\partial q_{j}}\) and \( \dfrac{\partial V}{\partial q_{j}}\) were nonzero but equal and opposite in sign), then that generalized coordinate is called an ignorable coordinate – presumably because one can ignore it in setting up the lagrangian. 
However, it doesn’t really mean that it should be ignored altogether, because it immediately reveals a constant of the motion. In particular, if \( \dfrac{\partial L}{\partial q_{j}}=0\), then \( \dfrac{\partial L}{\partial \dot{q}_{j}}\) is constant. It will be seen that if \( q_{j}\) has the dimensions of length, \( \dfrac{\partial L}{\partial \dot{q}_{j}}\) has the dimensions of linear momentum. And if \( q_{j}\) is an angle, \( \dfrac{\partial L}{\partial \dot{q}_{j}}\) has the dimensions of angular momentum. The derivative \( \dfrac{\partial L}{\partial \dot{q}_{j}}\) is usually given the symbol \( p_{j}\) and is called the generalized momentum conjugate to the generalized coordinate \( q_{j}\). If \( q_{j}\) is an “ignorable coordinate”, then \( p_{j}\) is a constant of the motion. In each of Equations \( \ref{13.4.12}\), \( \ref{13.4.13}\) and \( \ref{13.4.14}\) one of the \( q\)s has a dot over it. You can see which one it is by thinking about the dimensions of the various terms. A dot has dimension \( \text{T}^{-1}\). So, we have now derived Lagrange’s equation of motion. It was a hard struggle, and in the end we obtained three versions of an equation which at present look quite useless. But from this point, things become easier and we rapidly see how to use the equations and find that they are indeed very useful.
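As a preview of how Equation 13.4.14 is used in practice, here is a small SymPy sketch for a plane pendulum (my own example, with the single generalized coordinate $q=\theta$):

```python
import sympy as sp

t, m, l, g = sp.symbols('t m l g', positive=True)
theta = sp.Function('theta')(t)
thdot = sp.diff(theta, t)

# Plane pendulum of length l: T = (1/2) m l^2 thetadot^2, V = -m g l cos(theta)
T = sp.Rational(1, 2) * m * l**2 * thdot**2
V = -m * g * l * sp.cos(theta)
L = T - V

# Lagrange's equation: d/dt (dL/d(thetadot)) - dL/d(theta) = 0
eq = sp.diff(sp.diff(L, thdot), t) - sp.diff(L, theta)
# eq is equivalent to m l^2 theta'' + m g l sin(theta),
# i.e. the familiar theta'' = -(g/l) sin(theta)
print(sp.simplify(eq))
```

Setting the printed expression to zero recovers the usual pendulum equation of motion, with no force analysis needed.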
I am trying to solve a large-scale inverse problem using the Bayesian formulation. To estimate the Maximum a Posteriori (MAP) solution I have to minimize the following objective function: $F(m) = F_d + F_\text{prior} = \frac{(f(m) - d)^T(f(m) - d)}{\sigma^2} + \frac{(m - m_\text{prior})^T(m - m_\text{prior})}{\alpha^2}$ where $d$ is the observed data and $\sigma$ is the uncertainty in $d$. $m$ is the optimization parameter, and $\alpha$ represents the confidence in the prior. In the current problem setup, the number of data points in $d$ is $O(n)$ whereas the number of parameters in $m$ is $O(n^2)$. Everything else is dimensionless, so the order of magnitude of $f(m)$ is the same as that of $m$. This results in an objective function which is inherently biased towards the prior term ($F_\text{prior}$) unless $\sigma \ll \alpha$, which in turn results in poor convergence of $F_d$ during optimization. In such a case, can I scale $F_d$ and $F_\text{prior}$ by the number of terms they contain? From what I understand, such scaling will change the interpretation of $\sigma$ and $\alpha$. Note that I get a better "match" between $f(m)$ and $d$ without the prior term, but I need to include the prior in order to get bounds on the posterior solution.
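For what it's worth, here is a minimal sketch of the per-term scaling being asked about (all names and shapes are illustrative, not from the asker's code). Dividing each misfit by its number of residuals is equivalent to rescaling $\sigma \to \sigma\sqrt{N_d}$ and $\alpha \to \alpha\sqrt{N_m}$, which is exactly why the interpretation of the two hyperparameters changes:

```python
import numpy as np

def objective(m, f, d, m_prior, sigma, alpha, normalize=False):
    """MAP objective F = F_d + F_prior, optionally normalizing each term
    by its number of residuals (illustrative sketch only)."""
    r_d = f(m) - d                  # data residuals, ~O(n) entries
    r_p = m - m_prior               # prior residuals, ~O(n^2) entries
    F_d = (r_d @ r_d) / sigma**2
    F_p = (r_p @ r_p) / alpha**2
    if normalize:
        F_d /= d.size
        F_p /= m.size
    return F_d + F_p
```

With the normalization switched on, $\sigma$ is reinterpreted as a per-datum uncertainty (a mean squared misfit) rather than an overall one, and likewise for $\alpha$.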
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions. Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not. Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get? (a) $A$ (b) $C^{-1}A^{-1}BC^{-1}AC^2$ (c) $B$ (d) $C^2$ (e) $C^{-1}BC$ (f) $C$ Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less. Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*} (a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$. (b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$. (The Ohio State University, Linear Algebra Midterm) Let $V$ be a vector space and $B$ be a basis for $V$. Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$. After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\] (a) What is the dimension of $V$? 
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$? (The Ohio State University, Linear Algebra Midterm) Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\] (a) Show that $W$ is a subspace of $V$. (b) Find a basis of $W$. (c) Find the dimension of $W$. (The Ohio State University, Linear Algebra Midterm) The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems. Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$. (a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$. (b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$. Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular. Problem 9. Determine whether each of the following sentences is true or false. (a) There is a $3\times 3$ homogeneous system that has exactly three solutions. (b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric. (c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$. (d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent. 
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems. Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\] Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of the vectors $\mathbf{a}_1$ and $\mathbf{a}_2$. Problem 5. Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason. Problem 6. Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*} (a) Find the coefficient matrix $A$ of the system. (b) Find the inverse matrix of the coefficient matrix $A$. (c) Using the inverse matrix of $A$, find the solution of the system. (Linear Algebra Midterm Exam 1, the Ohio State University)
There is a question in my homework for the algebraic topology course asking whether two spaces $X$ and $Y$ are weakly homotopy equivalent if, for every space $Z$, the sets $[Z,X]$ and $[Z,Y]$ are naturally bijective. I'm wondering if the converse statement holds at least for cell complexes. Suppose $f\colon K\to K'$ is a weak homotopy equivalence of cell complexes $K,K'$, namely, $f_*\colon \pi_i(K)\to \pi_i(K')$ is an isomorphism for all $i$. I need to show that $f_*\colon [X,K]\to [X,K']$ is a bijection. First I tried to show that it is injective. Suppose $f_*[\alpha]=f_*[\beta]$ for some maps $\alpha,\beta\colon X\to K$. I want to prove that $\alpha\simeq \beta$ then. Since $f\circ\alpha\simeq f\circ\beta$, for every spheroid $\varphi\colon S^k\to X$ we have $f\circ(\alpha\circ\varphi)\simeq f\circ(\beta \circ\varphi)$, and thus $\alpha\circ\varphi\simeq \beta \circ\varphi$, because $f_*$ is a bijection between $\pi_k(K)=[S^k,K]$ and $\pi_k(K')=[S^k,K']$. Suppose $\varphi\colon S^k\to X$ is the attaching map for the cell $e^k$ in $X$. I have a homotopy $H:S^k\times [0,1]\to K$, $H|_{t=0}=\alpha\circ\varphi$, $H|_{t=1}=\beta\circ\varphi$, which I wish I could extend to the whole cell. Indeed, I can use the HEP in this case for the homotopy $H$ and the map $\alpha|_{D^k}\colon D^k\to K$, and hence I have a homotopy $\tilde H\colon e^k\times [0,1]\to K$ between $\alpha$ and $\beta$. However, there is a problem, because these homotopies do not necessarily agree. How do I deal with that?
Update, Nov 2012: A new pure geometry method was sent in by Sigi. You can see Sigi's method here. Sigi suggests that certain solutions are dependent, in that they use different notations for the same underlying contexts. Sigi suggests the following interesting task: which of the different methods are genuinely mathematically independent of each other? Certainly something to think about. The problem is as follows: for the diagram below, prove that $a+b=c$. This problem has been shown on NRICH before. When first shown on NRICH it was solved in 8 different ways by a pair of students, Alex and Neil. When we showed the problem again, Sigi sent us a lovely geometric proof. Once you have gone as far as you can with this, try to follow some of Alex's and Neil's solutions for the parts of mathematics with which you are most familiar. Sigi's proof is three-by-one.pdf. The distinct proofs from Alex and Neil are: Method 1: Tan Angle Sum Formula Proof From the diagram, $a =\tan^{-1}(1/3), b =\tan^{-1}(1/2)$, and $c = \tan^{-1}(1)$. We have to prove $a+b=c$, which is the same as proving that $\tan^{-1}(1/3)+\tan^{-1}(1/2)=\tan^{-1}(1)$. 
Note that $$ \begin{eqnarray} \tan\left(\tan^{-1}\left(\frac{1}{3}\right)+\tan^{-1}\left(\frac{1}{2}\right)\right) &=&\frac{\tan\left(\tan^{-1}\left(\frac{1}{3}\right)\right)+\tan\left(\tan^{-1}\left(\frac{1}{2}\right)\right)} {1-\tan\left(\tan^{-1}\left(\frac{1}{3}\right)\right)\tan\left(\tan^{-1}\left(\frac{1}{2}\right)\right)}\cr &=&\frac{\frac{1}{3}+\frac{1}{2}}{1-\frac{1}{3}\cdot\frac{1}{2}}\cr &=&1 \end{eqnarray} $$ Method 2: Sin Angle Sum Formula Proof Using Pythagoras we can calculate the lengths of the diagonal lines: $$ \sin a = \frac{1}{\sqrt{10}}\quad \cos a = \frac{3}{\sqrt{10}}\quad \sin c = \frac{1}{\sqrt{2}}\quad \sin b = \frac{1}{\sqrt{5}}\quad \cos b = \frac{2}{\sqrt{5}} $$ Using the identity $\sin(a+b) = \sin a \cos b + \sin b \cos a$ we see that $$ \begin{eqnarray} \sin(a+b) &=& \frac{1}{\sqrt{10}}\cdot\frac{2}{\sqrt{5}}+\frac{1}{\sqrt{5}}\cdot \frac{3}{\sqrt{10}}\cr &=& \frac{2+3}{\sqrt{10\cdot 5}}\cr &=& \frac{5}{\sqrt{50}}\cr &=&\frac{1}{\sqrt{2}}\cr &=& \sin c \end{eqnarray} $$ Hence the result is proved. Method 3: Cosine Rule This method requires us to extend the diagram as follows: From the altered diagram it can be seen that $x=3+2=5$. Next, $d$ can be found using the cosine rule $c^2= a^2+b^2-2ab \cos C$. Substitution of these values gives $$ \cos d = \frac{-1}{\sqrt{2}} $$ Thus $d=135^\circ$. Therefore $a+b = 180^\circ - d = 45^\circ = c$ Hence the result is proved. Method 4: Vector Method 5: Matrices Let $\bf{d} = (x, y)$ be any point in the $x$-$y$ plane. Let $\bf{d}_1$ be the point obtained by rotating $\bf{d}$ by $a^\circ$ about the origin. Let $\bf{d}_2$ be the point obtained by rotating $\bf{d}_1$ by $b^\circ$ around the origin. Finally, let $\bf{d}_3$ be the point obtained by rotating $\bf{d}$ by $c^\circ$ around the origin. Computing these rotations with the rotation matrices (using the sines and cosines found above) shows that $\bf{d}_2=\bf{d}_3$ for every starting point $\bf{d}$, so a rotation by $a+b$ degrees is the same as a rotation by $c$ degrees. Hence, $a+b=c$, as none of the angles is greater than $90^\circ$. 
Method 6: Pure Geometry By looking at the leftmost unit square this diagram can be drawn $a+b=c \Leftrightarrow x+x+y = x+y+z \Leftrightarrow x=z$ Hence it must be proven that $x=z$: Split triangle ADE into two right-angled triangles ADF and EDF. From this it can be seen that $$ EF = DF = \frac{\sqrt{2}}{4} $$ and $$ AF = AE-FE = \sqrt{2} -\frac{\sqrt{2}}{4} = \frac{3\sqrt{2}}{4} $$ Thus $$ AF:AB = \frac{3\sqrt{2}}{4}:1 $$ and $$ DF:BC = \frac{\sqrt{2}}{4}:\frac{1}{3} = \frac{3\sqrt{2}}{4}:1 $$ Hence ADF and ABC are similar triangles and, therefore, $x=z$. Thus, $a+b=c$. Method 7: Coordinate Geometry The coordinate geometry proof is based on the following diagram; the gradients are easy to read off the original image. Let $B$ be the point $( x_B ,y_B )$ on the line $y = -\frac{1}{2} x$ at a distance of $1$ from the origin. Since $\sqrt{x^2_B+y^2_B} = 1$ and $y_B = -\frac{1}{2}x_B$ we have $$ \begin{eqnarray} \sqrt{x^2_B+\frac{1}{4}x^2_B} &=& \sqrt{\frac{5}{4}x^2_B} = 1\cr \Rightarrow x_B = \frac{2}{\sqrt{5}}\cr \Leftrightarrow y_B = \frac{-1}{\sqrt{5}} \end{eqnarray} $$ Thus, $$ B=\left(\frac{2}{\sqrt{5}}, \frac{-1}{\sqrt{5}}\right) $$ We can now determine the equation of the line through $B$ perpendicular to the line $y = -\frac{1}{2} x$ to be $$ y = 2x-\sqrt{5}. $$ This line intersects $y=\frac{1}{3} x$ at $A$ which we can calculate to be $$ A= \left(\frac{3}{\sqrt{5}}, \frac{1}{\sqrt{5}}\right) $$ Now that we know points $A$ and $B$ we can calculate the distance between them as $$ |AB| = \sqrt{\left(\frac{3}{\sqrt{5}}-\frac{2}{\sqrt{5}}\right)^2 +\left(\frac{1}{\sqrt{5}}+\frac{1}{\sqrt{5}}\right)^2}= \sqrt{\frac{1}{5}+\frac{4}{5}}=1 $$ Referring back to the diagram shows that $a+b=45^\circ$ and the result is proved. Method 8: Complex Numbers This method uses aspects of other proofs presented (and therefore could be shortened) but we include it because it gives a nice example of how complex numbers, matrices and all other methods fit nicely together. 
Let $r=\sqrt{x^2+y^2}$. Now, the complex number corresponding to the point with coordinates $(x, y)$ in the Argand diagram can be written as $$ z= x+iy = r\exp\left(i\tan^{-1} \left(\frac{y}{x}\right)\right) $$ Let $z'$ be the complex number obtained by rotating $z$ by $2(a+b)$ degrees. Then we can use the identities $\sin(a+b) = \sin a \cos b +\cos a\sin b$ and $\cos(a+b) = \cos a \cos b-\sin a \sin b$ to see that: $$ \begin{eqnarray} z' &=& r\exp\left(i\tan^{-1}\left(\frac{y}{x}\right)+2i(a+b)\right)\cr &=& r\left[\cos\left(\tan^{-1}\left(\frac{y}{x}\right)+2(a+b)\right)+i\sin\left(\tan^{-1}\left(\frac{y}{x}\right)+2(a+b)\right)\right]\cr &=&r\left[\cos\left(\tan^{-1}\left(\frac{y}{x}\right)\right)\cos\left(2(a+b)\right)-\sin\left(\tan^{-1}\left(\frac{y}{x}\right)\right)\sin\left(2(a+b)\right)\right]\cr && +ir\left[ \sin\left(\tan^{-1}\left(\frac{y}{x}\right)\right)\cos\left(2(a+b)\right)+\cos\left(\tan^{-1}\left(\frac{y}{x}\right)\right)\sin\left(2(a+b)\right) \right] \end{eqnarray} $$ We can simplify the arctangents by considering the following diagram: From this we can see that $\cos\left(\tan^{-1}\left(\frac{y}{x}\right) \right)= \frac{x}{r}$ and $\sin\left(\tan^{-1}\left(\frac{y}{x}\right) \right)= \frac{y}{r}$ This simplifies the expression for $z'$ to $$ z' = x\cos\left(2(a+b)\right)-y\sin\left(2(a+b)\right)+i\left(y\cos\left(2(a+b)\right)+x\sin\left(2(a+b)\right)\right) $$ Alex and Neil then used some double angle formulae and the fact that $\sin(a+b)=\cos(a+b)=\frac{1}{\sqrt{2}}$ to determine that $$ z' = -y+ix $$ Since this is a complex number rotated through $90^\circ$ we can conclude that $a+b = 45^\circ$.
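As a quick numerical cross-check of the above (not one of the eight submitted methods): multiplying complex numbers adds their arguments, and the two diagonal lines of the diagram correspond to $3+i$ and $2+i$:

```python
import cmath
import math

# arg(3+1j) = a = arctan(1/3) and arg(2+1j) = b = arctan(1/2).
# Multiplying complex numbers adds their arguments:
z = (3 + 1j) * (2 + 1j)
print(z)  # (5+5j), whose argument is plainly 45 degrees

a_plus_b = cmath.phase(3 + 1j) + cmath.phase(2 + 1j)
print(math.degrees(a_plus_b))  # ~45.0, up to floating-point rounding
```

The product landing on the line $y=x$ is a one-line complex-number proof that $a+b=45^\circ$.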
Let $T: \R^n \to \R^m$ be a linear transformation. Suppose that the nullity of $T$ is zero. If $\{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k\}$ is a linearly independent subset of $\R^n$, then show that $\{T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k) \}$ is a linearly independent subset of $\R^m$. Let $V$ denote the vector space of all real $2\times 2$ matrices. Suppose that the linear transformation from $V$ to $V$ is given as below.\[T(A)=\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}A-A\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}.\]Prove or disprove that the linear transformation $T:V\to V$ is an isomorphism. Let $G, H, K$ be groups. Let $f:G\to K$ be a group homomorphism and let $\pi:G\to H$ be a surjective group homomorphism such that the kernel of $\pi$ is included in the kernel of $f$: $\ker(\pi) \subset \ker(f)$. Define a map $\bar{f}:H\to K$ as follows. For each $h\in H$, there exists $g\in G$ such that $\pi(g)=h$ since $\pi:G\to H$ is surjective. Define $\bar{f}:H\to K$ by $\bar{f}(h)=f(g)$. (a) Prove that the map $\bar{f}:H\to K$ is well-defined. (b) Prove that $\bar{f}:H\to K$ is a group homomorphism. Let $\calF[0, 2\pi]$ be the vector space of all real valued functions defined on the interval $[0, 2\pi]$. Define the map $f:\R^2 \to \calF[0, 2\pi]$ by\[\left(\, f\left(\, \begin{bmatrix}\alpha \\\beta\end{bmatrix} \,\right) \,\right)(x):=\alpha \cos x + \beta \sin x.\]We put\[V:=\im f=\{\alpha \cos x + \beta \sin x \in \calF[0, 2\pi] \mid \alpha, \beta \in \R\}.\] (a) Prove that the map $f$ is a linear transformation. (b) Prove that the set $\{\cos x, \sin x\}$ is a basis of the vector space $V$. (c) Prove that the kernel is trivial, that is, $\ker f=\{\mathbf{0}\}$. (This yields an isomorphism of $\R^2$ and $V$.) (d) Define a map $g:V \to V$ by\[g(\alpha \cos x + \beta \sin x):=\frac{d}{dx}(\alpha \cos x+ \beta \sin x)=\beta \cos x -\alpha \sin x.\]Prove that the map $g$ is a linear transformation. 
(e) Find the matrix representation of the linear transformation $g$ with respect to the basis $\{\cos x, \sin x\}$. Suppose that the vectors\[\mathbf{v}_1=\begin{bmatrix}-2 \\1 \\0 \\0 \\0\end{bmatrix}, \qquad \mathbf{v}_2=\begin{bmatrix}-4 \\0 \\-3 \\-2 \\1\end{bmatrix}\]form a basis for the null space of a $4\times 5$ matrix $A$. Find a vector $\mathbf{x}$ such that\[\mathbf{x}\neq0, \quad \mathbf{x}\neq \mathbf{v}_1, \quad \mathbf{x}\neq \mathbf{v}_2,\]and\[A\mathbf{x}=\mathbf{0}.\] (Stanford University, Linear Algebra Exam Problem) Let $V$ be the subspace of $\R^4$ defined by the equation\[x_1-x_2+2x_3+6x_4=0.\]Find a linear transformation $T$ from $\R^3$ to $\R^4$ such that the null space $\calN(T)=\{\mathbf{0}\}$ and the range $\calR(T)=V$. Describe $T$ by its matrix $A$. A hyperplane in the $n$-dimensional vector space $\R^n$ is defined to be the set of vectors\[\begin{bmatrix}x_1 \\x_2 \\\vdots \\x_n\end{bmatrix}\in \R^n\]satisfying a linear equation of the form\[a_1x_1+a_2x_2+\cdots+a_nx_n=b,\]where $a_1, a_2, \dots, a_n$ and $b$ are real numbers. Here at least one of $a_1, a_2, \dots, a_n$ is nonzero. Consider the hyperplane $P$ in $\R^n$ described by the linear equation\[a_1x_1+a_2x_2+\cdots+a_nx_n=0,\]where $a_1, a_2, \dots, a_n$ are some fixed real numbers and not all of these are zero. (The constant term $b$ is zero.) Then prove that the hyperplane $P$ is a subspace of $\R^{n}$ of dimension $n-1$. Let $n$ be a positive integer. Let $T:\R^n \to \R$ be a non-zero linear transformation. Prove the following. (a) The nullity of $T$ is $n-1$. That is, the dimension of the nullspace of $T$ is $n-1$. (b) Let $B=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}\}$ be a basis of the nullspace $\calN(T)$ of $T$. Let $\mathbf{w}$ be an $n$-dimensional vector that is not in $\calN(T)$. Then\[B’=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}, \mathbf{w}\}\]is a basis of $\R^n$. 
(c) Each vector $\mathbf{u}\in \R^n$ can be expressed as\[\mathbf{u}=\mathbf{v}+\frac{T(\mathbf{u})}{T(\mathbf{w})}\mathbf{w}\]for some vector $\mathbf{v}\in \calN(T)$. Let $A$ be the matrix for a linear transformation $T:\R^n \to \R^n$ with respect to the standard basis of $\R^n$. We assume that $A$ is idempotent, that is, $A^2=A$. Then prove that\[\R^n=\im(T) \oplus \ker(T).\] (a) Let $A=\begin{bmatrix}1 & 2 & 1 \\3 &6 &4\end{bmatrix}$ and let\[\mathbf{a}=\begin{bmatrix}-3 \\1 \\1\end{bmatrix}, \qquad \mathbf{b}=\begin{bmatrix}-2 \\1 \\0\end{bmatrix}, \qquad \mathbf{c}=\begin{bmatrix}1 \\1\end{bmatrix}.\]For each of the vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$, determine whether the vector is in the null space $\calN(A)$. Do the same for the range $\calR(A)$. (b) Find a basis of the null space of the matrix $B=\begin{bmatrix}1 & 1 & 2 \\-2 &-2 &-4\end{bmatrix}$. Let $A$ be a real $7\times 3$ matrix such that its null space is spanned by the vectors\[\begin{bmatrix}1 \\2 \\0\end{bmatrix}, \begin{bmatrix}2 \\1 \\0\end{bmatrix}, \text{ and } \begin{bmatrix}1 \\-1 \\0\end{bmatrix}.\]Then find the rank of the matrix $A$. (Purdue University, Linear Algebra Final Exam Problem) Let $R$ be a commutative ring with $1$ and let $G$ be a finite group with identity element $e$. Let $RG$ be the group ring. Then the map $\epsilon: RG \to R$ defined by\[\epsilon(\sum_{i=1}^na_i g_i)=\sum_{i=1}^na_i,\]where $a_i\in R$ and $G=\{g_i\}_{i=1}^n$, is a ring homomorphism, called the augmentation map, and the kernel of $\epsilon$ is called the augmentation ideal. (a) Prove that the augmentation ideal in the group ring $RG$ is generated by $\{g-e \mid g\in G\}$. (b) Prove that if $G=\langle g\rangle$ is a finite cyclic group generated by $g$, then the augmentation ideal is generated by $g-e$.
The last three (or ‘spatial’) components of the momentum four-vector give us the regular components of the momentum, times the factor \( \gamma(v) \). What about the zeroth (or ‘temporal’) component? To interpret it, we expand \( \gamma(v) \), and find: \[ \begin{align} E&=\gamma(v) m c^{2} \\[4pt] &=m c^{2}+\frac{1}{2} m v^{2}+\frac{3}{8} \frac{m v^{4}}{c^{2}}+\cdots \label{13.4.1} \end{align}\] The second term in this expansion should be familiar: it’s the classical kinetic energy of the particle. The third and higher terms are corrections to the classical kinetic energy - just like the higher-order terms in the spatial components are corrections to the classical momenta. The first term, however, is new: an extra energy contribution due to the mass of the particle. The whole expression can now be interpreted as the relativistic energy of the particle, with \( K \) the relativistic kinetic energy: \[ E=\gamma(v) m c^{2}=m c^{2}+K \label{13.4.2}\] We can now write the zeroth component of the momentum four-vector as \( p_{0}=E / c \). Based on this interpretation, the four-vector is sometimes referred to as the energy-momentum four-vector. A very useful relation can now easily be derived by calculating the length of the energy-momentum four-vector in two ways. On the one hand, it’s given by (leaving out the square root for convenience) \[ \overline{\boldsymbol{p}} \cdot \overline{\boldsymbol{p}}=m^{2} \overline{\boldsymbol{v}} \cdot \overline{\boldsymbol{v}}=m^{2} c^{2} \label{13.4.3}\] while on the other hand, we could also simply expand in the components of \( \overline{\boldsymbol{p}} \) itself to get: \[ \overline{\boldsymbol{p}} \cdot \overline{\boldsymbol{p}}=\left(\frac{E}{c}\right)^{2}-\boldsymbol{p} \cdot \boldsymbol{p} \label{13.4.4}\] where \( \boldsymbol{p} \) is again the spatial part of \( \overline{ \boldsymbol{p}} \). Combining Equations \ref{13.4.3} and \ref{13.4.4}, we get: \[E^{2}=m^{2} c^{4}+p^{2} c^{2} \label{13.4.5}\] where \(p^{2}=\boldsymbol{p} \cdot \boldsymbol{p}\). 
Equation \ref{13.4.5} is the general form of Einstein's famous formula \( E = m c^{2} \), to which it reduces for stationary particles (i.e. when \(v = p = 0 \)).
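A quick numeric sanity check of Equation 13.4.5 (units with $c=1$; the example values are my own):

```python
import math

def energy(m, p, c=1.0):
    """Total energy from the invariant relation E^2 = m^2 c^4 + p^2 c^2."""
    return math.sqrt((m * c**2)**2 + (p * c)**2)

# Stationary particle: the relation reduces to E = m c^2
print(energy(m=2.0, p=0.0))  # 2.0

# Consistency with E = gamma m c^2 and p = gamma m v (take c = 1, v = 0.6)
m, v = 1.0, 0.6
gamma = 1.0 / math.sqrt(1.0 - v**2)
p = gamma * m * v
print(energy(m, p), gamma * m)  # both ~1.25
```

The two expressions for the energy agree, as they must, since both follow from the same four-vector.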
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equation, then the system has infinitely many solutions. Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$.If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not. Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get?(a) $A$(b) $C^{-1}A^{-1}BC^{-1}AC^2$(c) $B$(d) $C^2$(e) $C^{-1}BC$(f) $C$ Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less.Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*} (a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$. (b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$. (The Ohio State University, Linear Algebra Midterm) Let $V$ be a vector space and $B$ be a basis for $V$.Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$.Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$. After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\] (a) What is the dimension of $V$? 
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$? (The Ohio State University, Linear Algebra Midterm) Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\] (a) Show that $W$ is a subspace of $V$. (b) Find a basis of $W$. (c) Find the dimension of $W$. (The Ohio State University, Linear Algebra Midterm) The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems. Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$. (a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$. (b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$. Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular. Problem 9. Determine whether each of the following sentences is true or false. (a) There is a $3\times 3$ homogeneous system that has exactly three solutions. (b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric. (c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$. (d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
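For item (d) of the first true/false list (a nonsingular idempotent matrix must be the identity), the argument is a one-liner:

```latex
A^2 = A
\quad\Longrightarrow\quad
A^{-1}A^2 = A^{-1}A
\quad\Longrightarrow\quad
A = I
\qquad (\text{multiplying on the left by } A^{-1}).
```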
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems. Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\] Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of the vectors $\mathbf{a}_1$ and $\mathbf{a}_2$. Problem 5. Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason. Problem 6. Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*} (a) Find the coefficient matrix $A$ of the system. (b) Find the inverse matrix of the coefficient matrix $A$. (c) Using the inverse matrix of $A$, find the solution of the system. (Linear Algebra Midterm Exam 1, the Ohio State University)
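A quick numerical sketch of Problem 6 (my own illustration in plain Python; the helper names `inv2` and `matvec` are hypothetical, and the matrix entries come from the problem statement above):

```python
def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the adjugate formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    """Product of a 2x2 matrix with a length-2 vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

# Problem 6: coefficient matrix and right-hand side.
A = [[3, 2], [5, 3]]
b = [1, 2]

A_inv = inv2(A)       # det(A) = 3*3 - 2*5 = -1, so A is invertible
x = matvec(A_inv, b)  # x = A^{-1} b

print(A_inv)  # [[-3.0, 2.0], [5.0, -3.0]]
print(x)      # [1.0, -1.0]
```

Plugging the solution back in: $3(1)+2(-1)=1$ and $5(1)+3(-1)=2$, matching the right-hand side.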
My differential equations professor gave the class an assignment introducing us to delay differential equations, and he put some questions for us to answer on it. I've been completely stuck on this problem for too long. $$ y'(t)=\alpha y(t-\tau) $$ Re-scale the independent variable to show that the equation is equivalent to either one of the following equations: $$ \frac{dy}{ds}(s)= \pm y(s-\bar{\tau}) \quad\text{or} \quad\frac{dy}{ds}(s)= \bar{\alpha}y(s-1) $$ for some new parameters $\bar{\alpha}$ and $\bar{\tau}$ (Find what these parameters are in terms of $\alpha$ and $\tau$). I can't for the life of me figure out how to do this. First of all, embarrassingly enough, I am not 100% sure what to make of the notation $\frac{dy}{ds}(s)$. I am assuming that it means the same thing as just $\frac{dy}{ds}$. I talked with my professor and he said that by re-scaling $t$, he meant to multiply it by a constant and add (or subtract) another constant. The second of those last equations is super easy, but I'm stuck on the first one. $$ t=As+B\\ y'(t)=\frac{dy}{dt}=\frac{dy}{d(As+B)}=\frac1A\cdot\frac{dy}{ds}\\ \frac1A\frac{dy}{ds}=\alpha y(As+B-\tau)\implies\frac{dy}{ds}=\frac\alpha Ay(A(s+C)) \qquad C=\frac{B-\tau}{A} $$ So, in order to satisfy the first of the equations, $A$ would have to be $\pm\alpha$. But it is impossible to make it match the equation, because I am left with $$ \frac{dy}{ds}=\pm y(\pm\alpha(s+C)) $$ There is nothing I can do to get rid of that alpha. So what mistake am I making? I've been at this for hours.
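For reference, here is one standard way to carry out the re-scaling (a sketch; the key point is that one must define a new function of $s$ rather than plugging the re-scaled argument into $y$ directly):

```latex
% Let s = t/\tau and define a new unknown \tilde y(s) := y(\tau s). Then
\frac{d\tilde y}{ds}(s) = \tau\, y'(\tau s) = \tau\alpha\, y(\tau s - \tau)
  = \tau\alpha\, \tilde y(s-1),
\qquad \text{so } \bar\alpha = \alpha\tau.
% Alternatively, let s = |\alpha| t and \tilde y(s) := y(s/|\alpha|):
\frac{d\tilde y}{ds}(s)
  = \frac{1}{|\alpha|}\, y'\!\left(\frac{s}{|\alpha|}\right)
  = \frac{\alpha}{|\alpha|}\, y\!\left(\frac{s - |\alpha|\tau}{|\alpha|}\right)
  = \pm\, \tilde y(s - \bar\tau),
\qquad \bar\tau = |\alpha|\tau.
```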
The $c\bar{c}$ meson decays to a lighter meson in the following reactions: $$\psi(3686)\rightarrow J/\psi(3097)+\eta^0$$ $$\psi(3686)\rightarrow J/\psi(3097)+\pi^0.$$ My aim is to find out which conservation law is causing one of these decay channels to be massively suppressed, but my lack of understanding of this notation is making this difficult. I'm guessing that $\psi$ is a representation of the wave function of a $c\bar{c}$ on the LHS and a lighter meson on the RHS. However, there is added confusion for me with the $J$, which I assume to be the total angular momentum, as a denominator of this fraction. Can someone help me solve this problem, whilst at the same time explaining the notation? I think that would help me and others best understand it.
Electronic Communications in Probability Electron. Commun. Probab. Volume 20 (2015), paper no. 10, 14 pp. Functional limit theorems for divergent perpetuities in the contractive case Abstract Let $\big(M_k, Q_k\big)_{k\in\mathbb{N}}$ be independent copies of an $\mathbb{R}^2$-valued random vector. It is known that if $Y_n:=Q_1+M_1Q_2+\ldots+M_1\cdot\ldots\cdot M_{n-1}Q_n$ converges a.s. to a random variable $Y$, then the law of $Y$ satisfies the stochastic fixed-point equation $Y \overset{d}{=} Q_1+M_1Y$, where $(Q_1, M_1)$ is independent of $Y$. In the present paper we consider the situation when $|Y_n|$ diverges to $\infty$ in probability because $|Q_1|$ takes large values with high probability, whereas the multiplicative random walk with steps $M_k$'s tends to zero a.s. Under a regular variation assumption we show that $\log |Y_n|$, properly scaled and normalized, converges weakly in the Skorokhod space equipped with the $J_1$-topology to an extremal process. A similar result also holds for the corresponding Markov chains. Proofs rely upon a deterministic result which establishes the $J_1$-convergence of certain sums to a maximal function and subsequent use of the Skorokhod representation theorem. Article information Source Electron. Commun. Probab., Volume 20 (2015), paper no. 10, 14 pp. Dates Accepted: 31 January 2015 First available in Project Euclid: 7 June 2016 Permanent link to this document https://projecteuclid.org/euclid.ecp/1465320937 Digital Object Identifier doi:10.1214/ECP.v20-3915 Mathematical Reviews number (MathSciNet) MR3314645 Zentralblatt MATH identifier 1307.60026 Subjects Primary: 60F17: Functional limit theorems; invariance principles Secondary: 60G50: Sums of independent random variables; random walks Rights This work is licensed under a Creative Commons Attribution 3.0 License. Citation Buraczewski, Dariusz; Iksanov, Alexander. Functional limit theorems for divergent perpetuities in the contractive case. Electron. Commun. Probab.
20 (2015), paper no. 10, 14 pp. doi:10.1214/ECP.v20-3915. https://projecteuclid.org/euclid.ecp/1465320937
Straight Lines: Distance of a Point From a Line

The perpendicular distance from a point $(x_1, y_1)$ to the line $ax + by + c = 0$ is $\left|\dfrac{ax_1+by_1+c}{\sqrt{a^2+b^2}}\right|$.

A point $A(x_1, y_1)$ and the origin lie on the same or opposite sides of a line $L \equiv ax + by + c = 0$ according as $c \cdot L_{11} > 0$ or $c \cdot L_{11} < 0$, where $L_{11} = ax_1 + by_1 + c$.

The point $A(x_1, y_1)$ lies above or below the line $L \equiv ax + by + c = 0$ according as $\frac{L_{11}}{b} > 0$ or $\frac{L_{11}}{b} < 0$.

The distance of a point $(x_1, y_1)$ from the line $L \equiv ax + by + c = 0$, measured along a line making an angle $\alpha$ with the x-axis, is $\left|\dfrac{ax_1+by_1+c}{a\cos\alpha + b\sin\alpha}\right|$.

View the Topic in this video From 00:40 To 55:58

Disclaimer: Compete.etutor.co may from time to time provide links to third party Internet sites under their respective fair use policy and it may from time to time provide materials from such third parties on this website. These third party sites and any third party materials are provided for viewers convenience and for non-commercial educational purpose only. Compete does not operate or control in any respect any information, products or services available on these third party sites. Compete.etutor.co makes no representations whatsoever concerning the content of these sites and the fact that compete.etutor.co has provided a link to such sites is NOT an endorsement, authorization, sponsorship, or affiliation by compete.etutor.co with respect to such sites, its services, the products displayed, its owners, or its providers.

1. The perpendicular distance $d$ of a line $Ax + By + C = 0$ from a point $(x_1, y_1)$ is given by $d = \frac{\mid Ax_1+By_1+C\mid}{\sqrt{A^2+B^2}}$.

2. The distance between the parallel lines $Ax + By + C_1 = 0$ and $Ax + By + C_2 = 0$ is given by $d = \frac{\mid C_1-C_2\mid}{\sqrt{A^2+B^2}}$.
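The two numbered distance formulas are easy to check numerically; here is a small sketch (the function names are my own, not from the notes):

```python
import math

def point_line_distance(a, b, c, x1, y1):
    """Perpendicular distance from (x1, y1) to the line a*x + b*y + c = 0."""
    return abs(a * x1 + b * y1 + c) / math.hypot(a, b)

def parallel_lines_distance(a, b, c1, c2):
    """Distance between parallel lines a*x + b*y + c1 = 0 and a*x + b*y + c2 = 0."""
    return abs(c1 - c2) / math.hypot(a, b)

# Line 3x + 4y - 5 = 0 and the origin: |-5| / 5 = 1.
print(point_line_distance(3, 4, -5, 0, 0))   # 1.0
# Lines 3x + 4y - 5 = 0 and 3x + 4y + 5 = 0: |(-5) - 5| / 5 = 2.
print(parallel_lines_distance(3, 4, -5, 5))  # 2.0
```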
Tagged: finite group If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order Problem 575 Let $G$ be a finite group of order $2n$. Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$. Then prove that $H$ is an abelian normal subgroup of odd order. Problem 455 Let $G$ be a finite group. The centralizer of an element $a$ of $G$ is defined to be \[C_G(a)=\{g\in G \mid ga=ag\}.\] A conjugacy class is a set of the form \[\Cl(a)=\{bab^{-1} \mid b\in G\}\] for some $a\in G$. (a) Prove that the centralizer of an element $a$ of $G$ is a subgroup of the group $G$. (b) Prove that the order (the number of elements) of every conjugacy class in $G$ divides the order of the group $G$. Problem 420 In this post, we study the Fundamental Theorem of Finitely Generated Abelian Groups, and as an application we solve the following problem. Problem. Let $G$ be a finite abelian group of order $n$. If $n$ is the product of distinct prime numbers, then prove that $G$ is isomorphic to the cyclic group $Z_n=\Zmod{n}$ of order $n$. Problem 302 Let $R$ be a commutative ring with $1$ and let $G$ be a finite group with identity element $e$. Let $RG$ be the group ring. Then the map $\epsilon: RG \to R$ defined by \[\epsilon(\sum_{i=1}^na_i g_i)=\sum_{i=1}^na_i,\] where $a_i\in R$ and $G=\{g_i\}_{i=1}^n$, is a ring homomorphism, called the augmentation map, and the kernel of $\epsilon$ is called the augmentation ideal. (a) Prove that the augmentation ideal in the group ring $RG$ is generated by $\{g-e \mid g\in G\}$. (b) Prove that if $G=\langle g\rangle$ is a finite cyclic group generated by $g$, then the augmentation ideal is generated by $g-e$.
Hi, Can someone provide me some self-reading material for condensed matter theory? I've done QFT previously, for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks @skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :) 2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus Same thought as you, however I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein. However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown Since the GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, and that might help heavily reduce the parameters needed to simulate them I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components.
The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system. @ooolb Even if that is really possible (I can always talk about things from a non-joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream has yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams @0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after, say, training it on 1000 different PDEs Actually that makes me wonder, is the space of all coordinate choices larger than all possible moves of Go? enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others Btw, since gravity is nonlinear, do we expect that if a region where spacetime is frame-dragged in the clockwise direction is superimposed on a spacetime that is frame-dragged in the anticlockwise direction, the result will be a spacetime with no frame drag? (one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge) Well. I'm a beginner in the study of General Relativity ok?
My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge yet. So, what I meant about a "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment for gravitational waves? @JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources. But if we could figure out a way to do it then yes, GWs would interfere just like light waves. Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: Imagine a huge double-slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern? So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference spacetime would have a flat geometry, and then if we put a spherical object in this region the metric will become Schwarzschild-like. Pardon, I just spent some naive-philosophy time here with these discussions. The situation was even more dire for Calculus and I managed! This is a neat strategy I have found - revision becomes more bearable when I have The h Bar open on the side. In all honesty, I actually prefer exam season! At all other times - as I have observed in this semester, at least - there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago (Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers) that's true. though back in high school, regardless of code, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention @JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy I can do all tasks related to my work without leaving the text editor (of course, that text editor is emacs). The only inconvenience is that some websites don't render in an optimal way (but most of the work-related ones do) Hi to all. Does anyone know where I could write matlab code online (for free)? Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's PCs. Thanks. @Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in the $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction.
Use this to form a triangle and you'll get the answer with simple trig :) @Kaumudi.H Ah, it was okayish. It was mostly memory-based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the GPA. @Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the server of the university - which means remotely running another environment - I found an older version of matlab). But thanks again. @user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject; it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it, probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding. If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc.
Motion in a Straight Line: Relative Velocity in one dimension

Motion of one body with respect to another is called relative motion.

The velocity of a body $A$ with respect to a body $B$ is $v_{AB} = v_A - v_B$.

The velocity of an object with respect to itself is always zero.

The relative velocity of approach of two bodies moving towards each other, each with speed $v$, is $2v$.

The relative velocity of two bodies moving in the same direction with the same velocity $v$ is $0$.

Relative velocity of a man swimming downstream: $v = v_m + v_R$ ($v_m$ = velocity of the man, $v_R$ = velocity of the river).

Relative velocity of a man swimming upstream: $v = v_m - v_R$.

Time taken downstream $= \frac{\text{width of river}}{v_m + v_R}$.

Time taken upstream $= \frac{\text{width of river}}{v_m - v_R}$.

Any three-dimensional XYZ coordinate system fixed to an object or event is called a reference frame.

View the Topic in this video From 25:46 To 54:51

1. When two objects are moving in the same direction, then $v_{AB} = v_A - v_B$.
2. When two objects are moving in opposite directions, then $v_{AB} = v_A + v_B$.
3.
When two objects are moving at an angle $\theta$, then $v_{AB}=\sqrt{v_A^2+v_B^2-2v_{A}v_{B}\cos\theta}$ and $\tan\beta=\frac{v_{B}\sin\theta}{v_{A}-v_{B}\cos\theta}$.
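Formula 3 can be sanity-checked in a few lines (a sketch of my own; the function name is not from the notes). The two limiting cases reproduce formulas 1 and 2:

```python
import math

def relative_speed(v_a, v_b, theta):
    """Magnitude of the relative velocity v_AB when the two velocity
    vectors are separated by an angle theta (in radians)."""
    return math.sqrt(v_a**2 + v_b**2 - 2 * v_a * v_b * math.cos(theta))

# Same direction (theta = 0): relative speed is |v_A - v_B|.
print(relative_speed(5, 3, 0.0))       # 2.0
# Opposite directions (theta = pi): relative speed is v_A + v_B.
print(relative_speed(5, 3, math.pi))   # 8.0
```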
Let the Green's function for the gauge field be given (after gauge fixing) as $$G_{\mu \nu}(x,y) = \delta_{\mu \nu}G(x-y) \tag{1}$$ where $$G(x-y)= \int \frac{d^dk}{(2\pi)^d} \frac{e^{ik \cdot (x-y)}}{k^2+m^2} \tag{1'}$$ and a source $$ J_{\mu}(x) = \delta_{0}^{\mu}q\delta(\vec{x})\tag{2}. $$ Then, the gauge field is given by $$ A_{\mu}(x)=\int d^dy \, G_{\mu \nu}J^{\nu} \tag{3}. $$ How do I derive $$A^{0}(x) = \frac{q}{4\pi}\frac{1}{|\vec{x}|} \tag{4}$$ for $d=4$? Obviously I have to plug into (3) equations (1) and (2) in 4 dimensions. By doing so though I have 4-dimensional quantities. If I set $m=0$, then I do get the Fourier transform of (1') and I do have a quantity like $1/x$ but how do I get only the spatial component? And how do I continue from there on? My attempt: \begin{align} A_{\mu}(x)= \int d^dy G_{\mu \nu}(x,y)J^{\nu}(y) \end{align} and now set $\mu=0$ \begin{align} A_{0}(x)&= \int d^4y G(x-y)q \delta(\vec{y}) \\ &= q\int d^4y \int \frac{d^4k}{(2\pi)^4}\frac{e^{ik\cdot (x-y)}}{k^2} \delta(\vec{y}) \\ &= \int d^4y \int \frac{d^3\vec{k}}{(2\pi)^3}\frac{e^{i\vec{k}\cdot (\vec{x}-\vec{y})}}{|\vec{k}|^2} \int dk^0 \frac{e^{ik^0\cdot(x^0-y^0)}}{(k^0)^2}\delta(\vec{y}) \end{align} Now the $d^3\vec{k}$ integral should give me (after the $dy$ integration too) $\frac{1}{|\vec{x}|}$ while, having the requirement that $x^0=y^0$ the time-component integral simplifies. But should I make the assumption $x^0=y^0$? And how to proceed?
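One way to organize the computation (a sketch, not necessarily the intended route: take $m=0$, integrate over $y^0$ first so the $k^0$ integral collapses onto $k^0=0$, then use the standard 3D Fourier transform of $1/|\vec k|^2$):

```latex
A_0(\vec x)
= q\int dy^0 \int \frac{d^4k}{(2\pi)^4}\,
  \frac{e^{ik\cdot(x-y)}}{k^2}\bigg|_{\vec y=0}
= q\int \frac{d^4k}{(2\pi)^4}\,
  \frac{e^{i\vec k\cdot\vec x}}{k^2}\,(2\pi)\delta(k^0)
= q\int \frac{d^3\vec k}{(2\pi)^3}\,
  \frac{e^{i\vec k\cdot\vec x}}{|\vec k|^2}
= \frac{q}{4\pi|\vec x|}.
```

In words: the source is static, so integrating $e^{-ik^0 y^0}$ over all $y^0$ produces $2\pi\delta(k^0)$, which removes the need to split $1/k^2$ by hand; the last step is the textbook result $\int \frac{d^3\vec k}{(2\pi)^3}\, e^{i\vec k\cdot\vec x}/|\vec k|^2 = \frac{1}{4\pi|\vec x|}$.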
Yes, there is a subtlety there. Also notice that in quantum mechanics we have finely resolved energy eigenstates, and in a typical complicated system all degeneracies are broken, so that at any given energy there is at most one state ($\Omega = 1$) but most likely there is no state at exactly that energy ($\Omega = 0$). So it would seem that $\log \Omega$ is never bigger than 0; how can entropy ever be as high as we observe? In classical mechanics too, if you ask me how many states have energy exactly $E$, I will say zero, since a randomly chosen state will almost never exactly match energy $E$. So there is quite a mistake in saying "$\Omega$ is the number of states with energy $E$", since essentially this would give $\Omega = 0$. (I have no idea why textbooks say this; it just confuses the learner, and they know it is wrong.) There are two ways to solve this and to get a real, functioning definition of $S$: Consider all states between $E$ and $E+\Delta E$, indeed with a finite difference $\Delta E$ --- not too big but not too small. From this you can define a thermodynamic density of states $\nu = \Delta \Omega / \Delta E$. Then, entropy is defined as $S = k \log \nu$ or alternatively as $S = k \log (\Delta \Omega)$, depending on who you ask. Both work. (Perhaps you noticed, there is a problem that either the entropy depends on units of energy when we take the log of something with units of 1/energy, or the entropy depends on how big a $\Delta E$ we chose. But this just gives an entropy offset, and as long as we are consistent with units and keep $\Delta E$ the same for all systems, this works out.) Consider all states with energy less than $E$ and define this number as $\Omega$. (Sounds strange, I know, but it does work and is quite simply defined.) The definitions disagree but both are valid; both reproduce thermodynamics in large systems. Interestingly, though, neither is mathematically satisfactory, and they have real problems for small systems.
In particular the "temperature" as defined by $1/(\partial S/\partial E)$ actually fails to have the property that two systems with equal temperature are in equilibrium. And we can even define evil systems with strange density of states where, as $E$ is increased, the value of $T$ fluctuates up and down and up and down. Can we do any better? Yes, abandon the requirement that a system has exactly specified energy $E$. That was anyways unrealistic since you never perfectly know a system's energy with absolute certainty. If instead you fix the temperature, you get a canonical ensemble for which entropy is uniquely defined (and is mathematically sane). See wikipedia's microcanonical ensemble for more info (disclaimer: I wrote that article, mostly).
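The "count states in an energy window" prescription can be made concrete on a toy model. This is purely my own illustration (the answer above does not use this model): $N$ independent two-level systems, each contributing energy 0 or 1, enumerated by brute force:

```python
import math
from itertools import product

def entropy_in_window(n_spins, e_low, e_high, k=1.0):
    """S = k * log(number of microstates with total energy in [e_low, e_high)).

    Each of the n_spins two-level systems contributes energy 0 or 1.
    Brute-force enumeration; fine for small n_spins.
    """
    count = sum(1 for state in product((0, 1), repeat=n_spins)
                if e_low <= sum(state) < e_high)
    return k * math.log(count) if count > 0 else float("-inf")

# For 10 spins, states with total energy exactly 5: C(10, 5) = 252.
print(entropy_in_window(10, 5, 6))  # log(252), about 5.53
```

Note how the window makes the count nonzero and the entropy finite, which is exactly the point of prescription 1 above.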
Hi, this one came in a Proofathon contest and had an average score of 0. Problem: Prove that \[|\cos x| + |\cos y| + |\cos z| + |\cos(y+z)| + |\cos(z+x)| + |\cos(x+y)| + 3|\cos(x+y+z)| \geq 3\] for all real $x, y,$ and $z$. Note by Jatin Yadav, 5 years, 6 months ago. Sort by: Hint: show that $|\cos x|+|\cos(y+z)|+|\cos(x+y+z)|\ge 1$. Double hint: show that $|\cos a|+|\cos b|+|\cos(a+b)|\ge 1$. What is the equality case? Well, there is a lemma necessary in solving this problem: $|\cos a|+|\cos b|+|\cos(a+b)|\ge 1$, which has an equality case where $\cos a=\cos b=0$. This leads us to the equality case $\cos a=\cos b=\cos c=0$. Ah yea, thanks. I figured that out later, after thinking the problem over again.
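A quick numerical sanity check of the hinted lemma (my own addition, not part of the original thread): scan $|\cos a|+|\cos b|+|\cos(a+b)|$ over a grid and confirm the minimum sits at 1, attained near $\cos a=\cos b=0$.

```python
import math

def lemma_value(a, b):
    """|cos a| + |cos b| + |cos(a+b)|, claimed to be >= 1 for all real a, b."""
    return abs(math.cos(a)) + abs(math.cos(b)) + abs(math.cos(a + b))

# Scan a grid over one period in each variable.
n = 400
vals = [lemma_value(2 * math.pi * i / n, 2 * math.pi * j / n)
        for i in range(n) for j in range(n)]
print(min(vals))  # approximately 1.0, e.g. at a = b = pi/2
```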
Did you make this problem? If you did, it's a really, really good problem!

@Daniel Liu – Proofathon makes all of its problems.

Well, sorry to interrupt with a different question, but can you please give the solution to "charge oscillating above a charged sheet"?

Done.

Thanks, and good solution yaar!

@Milun Moghe – Could you post a solution to Come on lucky number 7?

@Jatin Yadav – Done, but I don't know if it's correct. I am very eager to get clarified.

@Milun Moghe – I had posted "the time before they met 1 and 2"; it didn't get selected, but Roger Kepstien's problem "What did the proton say to the electron" got selected. That's not fair — I posted mine first, and a better question related to electrostatic imaging too.

@Milun Moghe – As I have explained to you (in email), several of your problems have unnecessarily complicated scenarios and convoluted phrasing, which make it hard to understand what you are asking. We had greatly cleaned up Cricical Angle Of Precession Of A Re-assembled Top to give you an example of how smoother phrasing, better presentation and a clearer picture can greatly improve the quality of your problem. The easier a problem is to understand, the more others will like and share it, which increases the likelihood that it would be selected. You can read my note for further guidelines to improve your problem.

@Calvin Lin – Well, thanks for the advice; I'll try to make my problems more precise and to the point.

@Milun Moghe – To check, did you get the email that I sent? We've sent some clarification requests about your other problems too. Responding to those would help you understand what aspects of your problems are confusing, and how to correct them.

@Calvin Lin – Yes sir, I have received all emails and have made corrections accordingly.

@Jatin Yadav – Can you post a solution to "the flight of a housefly" if solved, please...

@Jatin Yadav – Same need over here. I had posted my solution in one of the comments, which got deleted.
We have seen that the SI definition of magnetic moment is unequivocally defined as the maximum torque experienced in unit external field. Nevertheless some authors prefer to think of magnetic moment as the product of the equatorial field and the cube of the distance. Thus there are two conceptually different concepts of magnetic moment, and, when to these are added minor details as to whether the magnetic field is \(B\) or \(H\), and whether or not the permeability should include the factor \(4 \pi\), six possible definitions of magnetic moment, described in Section 17.6, all of which are to be found in current literature, arise. Regardless, however, how one chooses to define magnetic moment, whether the SI definition or some other unconventional definition, it should be easily possible to answer both of the following questions: A. Given the magnitude of the equatorial field on the equator of a magnet, what is the maximum torque that that magnet would experience if it were placed in an external field? B. Given the maximum torque that a magnet experiences when placed in an external field, what is the magnitude of the equatorial field produced by the magnet? It must surely be conceded that a failure to be able to answer such basic questions indicates a failure to understand what is meant by magnetic moment. I therefore now ask a series of thirteen questions. The first six are questions of type A, in which I use the six possible definitions of magnetic moment. The next six are similar questions of type B. And the last is an absurdly simple question, which anyone who believes he understands the meaning of magnetic moment should easily be able to answer. 1. The magnitude of the field in the equatorial plane of a magnet at a distance of 1 cm is 1 Oe. What is the maximum torque that this magnet will experience in an external magnetic field of 1 Oe, and what is its magnetic moment? 
Note that, in this question and the following seven there must be a unique answer for the torque. The answer you give for the magnetic moment, however, will depend on how you choose to define magnetic moment, and on whether you choose to give the answer in SI units or CGS EMU. 2. The magnitude of the field in the equatorial plane of a magnet at a distance of 1 cm is 1 \(\text{Oe}\). What is the maximum torque that this magnet will experience in an external magnetic field of 1 \(\text{G}\), and what is its magnetic moment? 3. The magnitude of the field in the equatorial plane of a magnet at a distance of 1 cm is 1 \(\text{G}\). What is the maximum torque that this magnet will experience in an external magnetic field of 1 \(\text{Oe}\), and what is its magnetic moment? 4. The magnitude of the field in the equatorial plane of a magnet at a distance of 1 cm is 1 \(\text{G}\). What is the maximum torque that this magnet will experience in an external magnetic field of 1 \(\text{G}\), and what is its magnetic moment? 5. The magnitude of the field in the equatorial plane of a magnet at a distance of 1 \(\text{m}\) is 1 \(\text{A m}^{-1}\). What is the maximum torque that this magnet will experience in an external magnetic field of 1 \(\text{A m}^{-1}\), and what is its magnetic moment? 6. The magnitude of the field in the equatorial plane of a magnet at a distance of 1 \(\text{m}\) is 1 \(\text{A m}^{-1}\). What is the maximum torque that this magnet will experience in an external magnetic field of 1 \(\text{T}\), and what is its magnetic moment? 7. The magnitude of the field in the equatorial plane of a magnet at a distance of 1 \(\text{m}\) is 1 \(\text{T}\). What is the maximum torque that this magnet will experience in an external magnetic field of 1 \(\text{A m}^{-1}\), and what is its magnetic moment? 8. The magnitude of the field in the equatorial plane of a magnet at a distance of 1 \(\text{m}\) is 1 \(\text{T}\). 
What is the maximum torque that this magnet will experience in an external magnetic field of 1 \(\text{T}\), and what is its magnetic moment? 9. A magnet experiences a maximum torque of 1 dyn cm if placed in a field of 1 \(\text{Oe}\). What is the magnitude of the field in the equatorial plane at a distance of 1 cm, and what is the magnetic moment? Note that, in this question and the following three there must be a unique answer for \(B\) and a unique answer for \(H\), though each can be expressed in SI or in CGS EMU, while the answer for the magnetic moment depends on which definition you adopt. 10. A magnet experiences a maximum torque of 1 dyn cm if placed in a field of 1 \(\text{G}\). What is the magnitude of the field in the equatorial plane at a distance of 1 cm, and what is the magnetic moment? 11. A magnet experiences a maximum torque of 1 \(\text{N}\) m if placed in a field of 1 \(\text{A m}^{-1}\). What is the magnitude of the field in the equatorial plane at a distance of 1 \(\text{m}\), and what is the magnetic moment? 12. A magnet experiences a maximum torque of 1 \(\text{N m}\) if placed in a field of 1 \(\text{T}\). What is the magnitude of the field in the equatorial plane at a distance of 1 \(\text{m}\), and what is the magnetic moment? I’ll pose Question Number 13 a little later. In the meantime the answers to the first four questions are given in Table \(\text{XVII.2}\), and the answers to Questions 5 – 12 are given in Tables \(\text{XVII.3}\) and \(4\). The sheer complexity of these answers to absurdly simple questions is a consequence of different usages by various authors of the meaning of “magnetic moment” and of departure from standard SI usage. 
\(\text{TABLE XVII.2}\)

ANSWERS TO QUESTIONS 1 – 4 IN CGS EMU AND SI UNITS

The answers to the first four questions are identical.

\(\text{TABLE XVII.3}\)

ANSWERS TO QUESTIONS 5 – 8 IN CGS EMU AND SI UNITS

\begin{array}{llccccl}
& & 5 & 6 & 7 & 8 & \\
\tau & = & (4\pi)^2 & 4\pi \times 10^7 & 4\pi \times 10^7 & 10^{14} & \text{dyn cm} \\
& = & (4\pi)^2 \times 10^{-7} & 4\pi & 4\pi & 10^7 & \text{N m} \\
&&&&&& \\
p_1 & = & 4\pi \times 10^3 & 4\pi \times 10^3 & 10^{10} & 10^{10} & \text{dyn cm Oe}^{-1} \\
& = & (4\pi)^2 \times 10^{-7} & (4\pi)^2 \times 10^{-7} & 4\pi & 4\pi & \text{N m (A/m)}^{-1} \\
&&&&&&\\
p_2 & = & 4\pi \times 10^3 & 4\pi \times 10^3 & 10^{10} & 10^{10} & \text{dyn cm G}^{-1} \\
& = & 4\pi & 4\pi & 10^7 & 10^7 & \text{N m T}^{-1} \\
&&&&&&\\
p_3 & = & 4\pi \times 10^3 & 4\pi \times 10^3 & 10^{10} & 10^{10} & \text{G cm}^3 \\
& = & 4\pi \times 10^{-7} & 4\pi \times 10^{-7} & 1 & 1 & \text{T m}^3 \\
&&&&&&\\
p_4 & = & 4\pi \times 10^3 & 4\pi \times 10^3 & 10^{10} & 10^{10} & \text{Oe cm}^3 \\
& = & 1 & 1 & 10^7/(4\pi) & 10^7/(4\pi) & \text{A m}^2 \\
&&&&&&\\
p_5 & = & (4\pi)^2 \times 10^3 & (4\pi)^2 \times 10^3 & 4\pi \times 10^{10} & 4\pi \times 10^{10} & \text{G cm}^3 \\
& = & (4\pi)^2 \times 10^{-7} & (4\pi)^2 \times 10^{-7} & 4\pi & 4\pi & \text{T m}^3 \\
&&&&&&\\
p_6 & = & (4\pi)^2 \times 10^3 & (4\pi)^2 \times 10^3 & 4\pi \times 10^{10} & 4\pi \times 10^{10} & \text{Oe cm}^3 \\
& = & 4\pi & 4\pi & 10^7 & 10^7 & \text{A m}^2 \\
\end{array}

\(\text{TABLE XVII.4}\)

ANSWERS TO QUESTIONS 9 – 12 IN CGS EMU AND SI UNITS

\begin{array}{llccccl}
& & 9 & 10 & 11 & 12 & \\
B & = & 1 & 1 & 10^4/(4\pi) & 10^{-3} & \text{G} \\
& = & 10^{-4} & 10^{-4} & 1/(4\pi) & 10^{-7} & \text{T} \\
&&&&&&\\
H & = & 1 & 1 & 10^4/(4\pi) & 10^{-3} & \text{Oe} \\
& = & 10^3/(4\pi) & 10^3/(4\pi) & 10^7/(4\pi)^2 & 1/(4\pi) & \text{A m}^{-1} \\
&&&&&&\\
p_1 & = & 1 & 1 & 10^{10}/(4\pi) & 10^3 & \text{dyn cm Oe}^{-1} \\
& = & 4\pi \times 10^{-10} & 4\pi
\times 10^{-10} & 1 & 4\pi \times 10^{-7} & \text{N m (A/m)}^{-1} \\
&&&&&&\\
p_2 & = & 1 & 1 & 10^{10}/(4\pi) & 10^3 & \text{dyn cm G}^{-1} \\
& = & 10^{-3} & 10^{-3} & 10^7/(4\pi) & 1 & \text{N m T}^{-1} \\
&&&&&&\\
p_3 & = & 1 & 1 & 10^4/(4\pi) & 10^{-3} & \text{G cm}^3 \\
& = & 10^{-10} & 10^{-10} & 10^{-6}/(4\pi) & 10^{-13} & \text{T m}^3 \\
&&&&&&\\
p_4 & = & 1 & 1 & 10^4/(4\pi) & 10^{-3} & \text{Oe cm}^3 \\
& = & 10^{-3}/(4\pi) & 10^{-3}/(4\pi) & 10/(4\pi)^2 & 10^{-6}/(4\pi) & \text{A m}^2 \\
&&&&&&\\
p_5 & = & 4\pi & 4\pi & 10^4 & 4\pi \times 10^{-3} & \text{G cm}^3 \\
& = & 4\pi \times 10^{-10} & 4\pi \times 10^{-10} & 10^{-6} & 4\pi \times 10^{-13} & \text{T m}^3 \\
&&&&&&\\
p_6 & = & 4\pi & 4\pi & 10^4 & 4\pi \times 10^{-3} & \text{Oe cm}^3 \\
& = & 10^{-3} & 10^{-3} & 10/(4\pi) & 10^{-6} & \text{A m}^2 \\
\end{array}

The thirteenth and last of these questions is as follows: Assume that Earth is a sphere of radius \(6.4 \times 10^6 \ \text{m} = 6.4 \times 10^8 \ \text{cm}\), and that the surface field at the magnetic equator is \(B = 3\times 10^{-5} \ \text{T} = 0.3 \ \text{G}\), or \(H = 75/\pi \ \text{A m}^{-1} = 0.3 \ \text{Oe}\). What is the magnetic moment of Earth? It is hard to imagine a more straightforward question, yet it would be hard to find two people who would give the same answer. The SI answer (which, to me, is the only answer) is

\[B = \frac{\mu_0}{4\pi}\frac{p}{r^3}, \ \therefore \ p=\frac{4\pi r^3 B}{\mu_0}=\frac{4\pi \times (6.4 \times 10^6)^3 \times 3 \times 10^{-5}}{4\pi \times 10^{-7}}=7.86 \times 10^{22} \ \text{N m T}^{-1}.\]

This result correctly predicts that, if Earth were placed in an external field of \(1 \ \text{T}\), it would experience a maximum torque of \(7.86 \times 10^{22} \ \text{N m}\), and this is the normal meaning of what is meant by magnetic moment.
A calculation in CGS might proceed thus:

\[B = \frac{p}{r^3}, \ \therefore \ p = r^3 B = (6.4 \times 10^8)^3 \times 0.3 = 7.86 \times 10^{25} \ \text{G cm}^3.\]

Is this the same result as was obtained from the SI calculation? We can use the conversions \(1 \ \text{G} = 10^{-4} \ \text{T}\) and \(1 \ \text{cm}^3 = 10^{-6} \ \text{m}^3\), and we obtain

\[p = 7.86 \times 10^{15} \ \text{T m}^3.\]

We arrive at a number that not only differs from the SI result by a factor of \(10^7\), but is expressed in quite different, dimensionally dissimilar, units. Perhaps the CGS calculation should be

\[H = \frac{p}{r^3}, \ \therefore \ p = r^3H = (6.4 \times 10^8)^3 \times 0.3 = 7.86 \times 10^{25} \ \text{Oe cm}^3.\]

Now \(1 \ \text{Oe} = 1000/(4\pi) \ \text{A m}^{-1}\) and \(1 \ \text{cm}^3 = 10^{-6} \ \text{m}^3\), and we obtain

\[p=6.26 \times 10^{21} \ \text{A m}^2.\]

This time we arrive at SI units that are dimensionally similar to \(\text{N m T}^{-1}\), and which are perfectly correct SI units, but the magnetic moment is smaller than that correctly predicted by the SI calculation by a factor of 12.6. Yet again, we might do what appears to be frequently done by planetary scientists, and multiply the surface field in \(\text{T}\) by the cube of the radius in \(\text{m}\) to obtain

\[p=7.86 \times 10^{15} \ \text{T m}^3.\]

This arrives at the same result as one of the CGS calculations, but, whatever it is, it is not the magnetic moment in the sense of the greatest torque in a unit field. The quantity so obtained appears to be nothing more than the product of the surface equatorial field and the cube of the radius, and as such would appear to be a purposeless and meaningless calculation. It would be a good deal more meaningful merely to multiply the surface value of \(H\) by 3.
This in fact would give (correctly) the dipole moment divided by the volume of Earth, and hence it would be the average magnetization of Earth – a very meaningful quantity, which would be useful in comparing the magnetic properties of Earth with those of the other planets.
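The three Earth calculations above are easy to reproduce numerically. The sketch below (values and unit conversions taken from the text) recovers the SI result and the factor of \(4\pi \approx 12.6\) separating it from the \(\text{Oe cm}^3\) route:

```python
import math

mu0 = 4 * math.pi * 1e-7   # T m A^-1
r_m = 6.4e6                # Earth radius, m
B_T = 3e-5                 # equatorial surface field, T

# SI: B = (mu0 / 4 pi) p / r^3  =>  p = 4 pi r^3 B / mu0
p_si = 4 * math.pi * r_m**3 * B_T / mu0           # N m T^-1 (= A m^2)

# CGS route via H: p = r^3 H in Oe cm^3, then convert to A m^2
r_cm = 6.4e8
H_Oe = 0.3
p_oe_cm3 = r_cm**3 * H_Oe                         # Oe cm^3
p_am2 = p_oe_cm3 * (1000 / (4 * math.pi)) * 1e-6  # A m^2

print(f"SI moment: {p_si:.3e} N m T^-1")
print(f"Oe cm^3 route, converted: {p_am2:.3e} A m^2")
print(f"ratio: {p_si / p_am2:.3f}  (~ 4 pi)")
```

Running this reproduces the text's \(7.86 \times 10^{22}\) and \(6.26 \times 10^{21}\), and confirms that the discrepancy between the two "correct-looking" answers is exactly \(4\pi\).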
I have been trying to solve a 4x4 matrix and have done a hell of a lot of writing. Now I am near the end and I am not getting the requested output out of the LaTeX compiler. When I write, for example:

\begin{minipage}[t]{\textwidth}
\begin{equation}\label{e50}
R^{-1} = \frac{1}{\left|R\right|} \rm{adj}(R) = \alpha
\end{equation}
\end{minipage}

the last symbol isn't shown as "alpha" but rather as a very weird symbol ff, like I show in the picture below. I get the same weird symbol for any symbol that I put in place of \alpha, or even after it. From this point on all of my symbols are typeset as ff. If I put the same source code in my other, new document, all of my symbols again render as they were supposed to.

Q1: Have I run out of memory?

Q2: How can I fix this?
Atomic spectroscopy is, of course, a vast subject, and there is no intention in this brief chapter of attempting to cover such a huge field with any degree of completeness, and it is not intended to serve as a formal course in spectroscopy. For such a task a thousand pages would make a good start. The aim, rather, is to summarize some of the words and ideas sufficiently for the occasional needs of the student of stellar atmospheres. For that reason this short chapter has a mere 26 sections. Wavelengths of spectrum lines in the visible region of the spectrum were traditionally expressed in angstrom units (Å) after the nineteenth century Swedish spectroscopist Anders Ångström, one Å being \(10^{−10} \text{m}\). Today, it is recommended to use nanometres (\(\text{nm}\)) for visible light or micrometres (\(\mu \text{m}\)) for infrared. \(1 \ \text{nm} = 10\) Å \(= 10^{−3} \mu \text{m} = 10^{−9} \text{m}\). The older word micron is synonymous with micrometre, and should be avoided, as should the isolated abbreviation \(\mu\). The usual symbol for wavelength is \(\lambda\). Wavenumber is the reciprocal of wavelength; that is, it is the number of waves per metre. The usual symbol is \(\sigma\), although \(\tilde{\nu}\) is sometimes seen. In SI units, wavenumber would be expressed in \(\text{m}^{-1}\), although \(\text{cm}^{-1}\) is often used. The extraordinary illiteracy "a line of 15376 wavenumbers" is heard regrettably often. What is intended is presumably "a line of wavenumber 15376 \(\text{cm}^{-1}\)." The kayser was an unofficial unit formerly seen for wavenumber, equal to \(1 \ \text{cm}^{-1}\). As some wag once remarked: "The Kaiser (kayser) is dead!" It is customary to quote wavelengths below \(200 \ \text{nm}\) as wavelengths in vacuo, but wavelengths above \(200 \ \text{nm}\) in "standard air". Wavenumbers are usually quoted as wavenumbers in vacuo, whether the wavelength is longer or shorter than \(200 \ \text{nm}\). 
Suggestions are made from time to time to abandon this confusing convention; in any case it is incumbent upon any writer who quotes a wavelength or wavenumber to state explicitly whether s/he is referring to a vacuum or to standard air, and not to assume that this will be obvious to the reader. Note that, in using the formula \(n_1\lambda_1 = n_2\lambda_2 = n_3\lambda_3\) used for overlapping orders, the wavelength concerned is neither the vacuum nor the standard air wavelength; rather it is the wavelength in the actual air inside the spectrograph. If I use the symbols \(\lambda_0\) and \(\sigma_0\) for vacuum wavelength and wavenumber and \(\lambda\) and \(\sigma\) for wavelength and wavenumber in standard air, the relation between \(\lambda\) and \(\sigma_0\) is \[\lambda = \frac{1}{n \sigma_0} \label{7.1.1}\] "Standard air" is a mythical substance whose refractive index \(n\) is given by \[ (n-1).10^7 = 834.213 + \frac{240603.0}{130-\sigma_0^2} + \frac{1599.7}{38.9-\sigma_0^2}, \label{7.1.2}\] where \(\sigma_0\) is in \(\mu \text{m}^{-1}\). This corresponds closely to that of dry air at a pressure of \(760 \text{ mm Hg}\) and temperature \(15^\circ \)C containing \(0.03\)% by volume of carbon dioxide. To calculate \(\lambda\) given \(\sigma_0\) is straightforward. To calculate \(\sigma_0\) given \(\lambda\) requires iteration. Thus the reader, as an exercise, should try to calculate the vacuum wavenumber of a line of standard air wavelength \(555.5 \ \text{nm}\). In any case, the reader who expects to be dealing with wavelengths and wavenumbers fairly often should write a small computer or calculator program that allows the calculation to go either way. Frequency is the number of waves per second, and is expressed in hertz (\(\text{Hz}\)) or \(\text{MHz}\) or \(\text{GHz}\), as appropriate. The usual symbol is \(\nu\), although \(f\) is also seen. Although wavelength and wavenumber change as light moves from one medium to another, frequency does not. 
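(Before moving on: the small two-way program suggested above might be sketched as follows, using the standard-air formula of equation 7.1.2, with \(\sigma_0\) in \(\mu\text{m}^{-1}\) and \(\lambda\) in \(\mu\text{m}\). The function names are my own, and the inverse direction uses the simple fixed-point iteration the text alludes to.)

```python
def n_air(sigma0):
    """Refractive index of standard air at vacuum wavenumber sigma0 (um^-1), eq. 7.1.2."""
    return 1 + 1e-7 * (834.213
                       + 240603.0 / (130 - sigma0**2)
                       + 1599.7 / (38.9 - sigma0**2))

def air_wavelength(sigma0):
    """Standard-air wavelength (um) from vacuum wavenumber (um^-1): lambda = 1/(n sigma0)."""
    return 1 / (n_air(sigma0) * sigma0)

def vacuum_wavenumber(lam_air, tol=1e-12):
    """Vacuum wavenumber (um^-1) from standard-air wavelength (um), by iteration."""
    sigma0 = 1 / lam_air  # first guess: ignore refraction
    while True:
        new = 1 / (n_air(sigma0) * lam_air)
        if abs(new - sigma0) < tol:
            return new
        sigma0 = new

# The exercise in the text: standard-air wavelength 555.5 nm = 0.5555 um
sigma0 = vacuum_wavenumber(0.5555)
print(f"vacuum wavenumber: {sigma0:.6f} um^-1 = {sigma0 * 1e4:.2f} cm^-1")
```

The iteration converges in a few steps, since \(n - 1\) is only a few parts in \(10^4\).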
The relation between frequency, speed and wavelength is

\[c = \nu \lambda_0 \label{7.1.3}\]

where \(c\) is the speed in vacuo, which has the defined value \(2.997 \ 924 \ 58 \times 10^8 \ \text{m s}^{-1}\). A spectrum line results from a transition between two energy levels of an atom. The frequency of the radiation involved is related to the difference in energy levels by the familiar relation

\[h\nu = \Delta E, \label{7.1.4}\]

where \(h\) is Planck's constant, \(6.626075 \times 10^{-34} \text{ J s}\). If the energy levels are expressed in joules, this will give the frequency in \(\text{Hz}\). This is not how it is usually done, however. What is usually tabulated in energy level tables is \(E / (hc)\), in units of \(\text{cm}^{-1}\). This quantity is known as the term value \(T\) of the level. Equation \(\ref{7.1.4}\) then becomes

\[\sigma_0 = \Delta T \label{7.1.5}\]

Thus the vacuum wavenumber is simply the difference between the two tabulated term values. In some contexts it may also be convenient to express energy levels in electron volts, \(1 \ \text{eV}\) being \(1.60217733 \times 10^{-19} \ \text{J}\). Energy levels of neutral atoms are typically of the order of a few eV. The energy required to ionize an atom from its ground level is called the ionization energy, and its SI unit would be the joule. However, one usually quotes the ionization energy in \(\text{eV}\), or the ionization potential in volts. It may be remarked that one sometimes hears the formation of a spectrum line described as a process in which an "electron" jumps from one energy level to another. This is quite wrong. It is true that there is an adjustment of the way in which the electrons are distributed around the atomic nucleus, but what is tabulated in tables of atomic energy levels or drawn in energy level diagrams is the energy of the atom, and in equation \(\ref{7.1.4}\), \(\Delta E\) is the change in energy of the atom.
This includes the kinetic energy of all the particles in the atom as well as the mutual potential energy between the particles. We have seen that the wavenumber of a line is equal to the difference between the term values of the two levels involved in its formation. Thus, if we know the term values of two levels, it is a trivial matter to calculate the wavenumber of the line connecting them. In spectroscopic analysis the problem is very often the converse: you have measured the wavenumbers of several spectrum lines; can you from these calculate the term values of the levels involved? For example, here are four (entirely hypothetical and artificially concocted for this problem) vacuum wavenumbers, in \(\mu \text{m}^{-1}\):

\begin{array}{c}
1.96643 \\
2.11741 \\
2.28629 \\
2.43727 \\
\end{array}

The reader who is interested in spectroscopy, or in crossword puzzles or jigsaw puzzles, is very strongly urged to calculate the term values of the four levels involved with these lines, and to see whether this can or cannot be done without ambiguity from these data alone. Of course, you may object that there are six ways in which four levels can be joined in pairs, and therefore I should have given you the wavenumbers of six lines. Well, sorry to be unsympathetic, but perhaps two of the lines are too faint to be seen, or they may be forbidden by selection rules, or their wavelengths might be out of the range covered by your instrument. In any case, I have told you that four levels are involved, which is more information than you would have if you had just measured the wavenumbers of these lines from a spectrum that you had obtained in the laboratory. And at least I have helped by converting standard air wavelengths to vacuum wavenumbers. The exercise will give some appreciation of some of the difficulties in spectroscopic analysis. In the early days of spectroscopy, in addition to flames and discharge tubes, common spectroscopic sources included arcs and sparks.
In an arc, two electrodes with a hundred or so volts across them are touched, and then drawn apart, and an arc forms. In a spark, the potential difference across the electrodes is some thousands of volts, and it is not necessary to touch the electrodes together; rather, the electrical insulation of the air breaks down and a spark flies from one electrode to the other. It was noticed that the arc spectrum was usually very different from the spark spectrum, the former often being referred to as the "first" spectrum and the latter as the "second" spectrum. If the electrodes were, for example, of carbon, the arc or first spectrum would be denoted by C I and the spark or second spectrum by C II. It has long been known now that the "first" spectrum is mostly that of the neutral atom, and the "second" spectrum mostly that of the singly-charged ion. Since the atom and the ion have different electronic structures, the two spectra are very different. Today, we use the symbols C I, or Fe I, or Zr I, etc., to denote the spectrum of the neutral atom, regardless of the source, and C II, C III, C IV, etc., to denote the spectra of the singly-, doubly-, and triply-ionized atoms, C\(^{+}\), C\(^{++}\), C\(^{+++}\), etc. There are 4278 possible spectra of the first 92 elements to investigate, and many more if one adds the transuranic elements, so there is no want of spectra to study. Hydrogen, of course, has only one spectrum, denoted by H I, since ionized hydrogen is merely a proton. The regions in space where hydrogen is mostly ionized are known to astronomers as "H II regions". Strictly, this is a misnomer, for there is no "second spectrum" of hydrogen, and a better term would be "H\(^{+}\) regions", but the term "H II regions" is by now so firmly entrenched that it is unlikely to change. It is particularly ironic that the spectrum exhibited by an "H II region" is that of neutral hydrogen (e.g. the well-known Balmer series), as electrons and protons recombine and drop down the energy level ladder.
On the other hand, apart from the 21 cm line in the radio region, the excitation temperature in regions where hydrogen is mostly neutral (and hence called, equally wrongly, "H I regions") is far too low to show the typical spectrum of neutral hydrogen, such as the Balmer series. Thus it can be accurately said that "H II regions" show the spectrum of H I, and "H I regions" do not. Lest it be thought that this is unnecessary pedantry, it should be made clear at the outset that the science of spectroscopy, like those of celestial mechanics or quantum mechanics, is one in which meticulous accuracy and precision of ideas is an absolute necessity, and there is no room for vagueness, imprecision, or improper usage of terms. Those who would venture into spectroscopy would do well to note this from the beginning.
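As a brief coda to the four-line term-value exercise given earlier: without giving the answer away, a sensible first step is to tabulate the pairwise differences of the wavenumbers and look for repeats, since lines that share a common level show up as equal differences (the Ritz combination principle). A minimal sketch:

```python
from itertools import combinations

# Vacuum wavenumbers from the four-level exercise, in um^-1
lines = [1.96643, 2.11741, 2.28629, 2.43727]

# Group all pairwise differences; repeated differences hint at shared levels.
diffs = {}
for a, b in combinations(lines, 2):
    diffs.setdefault(round(b - a, 5), []).append((a, b))

for d, pairs in sorted(diffs.items()):
    print(f"{d:.5f}: {pairs}")
```

Two of the six differences occur twice each, which is exactly the sort of structure the exercise asks you to puzzle out.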
Do you know sensible algorithms that run in polynomial time in (Input length + Output length), but whose asymptotic running time in the same measure has a really huge exponent/constant (at least, where the proven upper bound on the running time is of this kind)?

Algorithms based on the regularity lemma are good examples of polynomial-time algorithms with terrible constants (either in the exponent or as leading coefficients). The regularity lemma of Szemeredi tells you that in any graph on $n$ vertices you can partition the vertices into sets where the edges between pairs of sets are "pseudo-random" (i.e., densities of sufficiently large subsets look like densities in a random graph). This is a structure that is very nice to work with, and as a consequence there are algorithms that use the partition. The catch is that the number of sets in the partition is an exponential tower in the parameter of pseudo-randomness (see here: http://en.wikipedia.org/wiki/Szemer%C3%A9di_regularity_lemma). For some links to algorithms that rely on the regularity lemma, see, e.g.: http://www.cs.cmu.edu/~ryanw/regularity-journ.pdf

News from SODA 2013: the Max-Bisection problem is approximable to within a factor 0.8776 in around $O(n^{10^{100}})$ time.

Here are two screenshots from An Energy-Driven Approach to Linkage Unfolding by Jason H. Cantarella, Erik D. Demaine, Hayley N. Iben, James F. O’Brien, SOCG 2004:

Here is a recent result from the FUN 2012 paper Picture-Hanging Puzzles by Erik D. Demaine, Martin L. Demaine, Yair N. Minsky, Joseph S. B. Mitchell, Ronald L. Rivest and Mihai Patrascu. We show how to hang a picture by wrapping rope around n nails, making a polynomial number of twists, such that the picture falls whenever any k out of the n nails get removed, and the picture remains hanging when fewer than k nails get removed.
Don't let the 'polynomial number' fool you...it turns out to be $O(n^{43737})$. There exists a class of problems, whose solutions are hard to compute, but approximating them to any accuracy is easy, in the sense that there are polynomial-time algorithms that can approximate the solution to within $(1+\epsilon)$ for any constant ε > 0. However, there's a catch: the running time of the approximators may depend on $1/\epsilon$ quite badly, e.g., be $O(n^{1/\epsilon})$. See more info here: http://en.wikipedia.org/wiki/Polynomial-time_approximation_scheme. Although the run-time for such algorithms has been subsequently improved, the original algorithm for sampling a point from a convex body had run time $\tilde{O}(n^{19})$. Dyer, Frieze, and Kannan: http://portal.acm.org/citation.cfm?id=102783 If $L$ is a tabular modal or superintuitionistic logic, then the extended Frege and substitution Frege proof systems for $L$ are polynomially equivalent, and polynomially faithfully interpretable in the classical EF (this is Theorem 5.10 in this paper of mine). The exponent $c$ of the polynomial simulations is not explicitly stated in Theorem 5.10, but the inductive proof of the theorem gives $c=2^{O(|F|)}$, where $F$ is a finite Kripke frame which generates $L$, so it can be as huge as you want depending on the logic. (It gets worse in Theorem 5.20.) The current best known algorithm for recognizing map graphs (a generalization of planar graphs) runs in $n^{120}$. Thorup, Map graphs in polynomial time. Computing the equilibrium of the Arrow-Debreu market takes $O(n^6\log(nU))$ max-flow computations, where $U$ is the maximum utility. Duan, Mehlhorn, A Combinatorial Polynomial Algorithm for the Linear Arrow-Debreu Market. Sandpile Transience Problem Consider the following process. Take a thick tile and drop sand particles on it one grain at a time. A heap gradually builds up and then a large portion of sand slides off from the edges of the tile. 
If we continue to add sand particles, after a certain point of time, the configuration of the heap repeats. Thereafter, the configuration becomes recurrent, i.e. it keeps revisiting a state that is seen earlier. Consider the following model for the above process. Model the tile as an $n \times n$ grid. Sand particles are dropped on the vertices of this grid. If the number of particles at a vertex exceeds its degree, then the vertex collapses and the particles in it move to adjacent vertices (in cascading manner). A sand particle that reaches a boundary vertex disappears into a sink (`falls off'). This is known as the Abelian Sandpile Model. Problem: How long does it take for the configuration to become recurrent in terms of $n$, assuming the worst algorithm for dropping sand particles? In SODA '07, László Babai and Igor Gorodezky proved this time to be polynomially bounded but.. In SODA '12, Ayush Choure and Sundar Vishwanathan improved this bound to $O(n^7)$. This answer would have looked slightly better if not for their improvement :) The "convex skull" problem is to find the maximum-area convex polygon inside a given simple polygon. The fastest algorithm known for this problem runs in $O(n^7)$ time [Chang and Yap, DCG 1986]. The solution of Annihilation Games (Fraenkel and Yesha) has complexity $O(n^6)$. The Robertson-Seymour theorem aka Graph Minor Theorem establishes among other things that for any graph $G$, there exists an $O(n^3)$ algorithm that determines whether an arbitrary graph $H$ (of size $n$) has $G$ as a minor. The proof is nonconstructive and the (I think non-uniform) multiplicative constant is probably so enormous that no formula for it can be written down explicitly (e.g. as a primitive recursive function on $G$). 
In their ICALP 2014 paper, Andreas Björklund and Thore Husfeldt give the first (randomized) polynomial algorithm that computes 2 disjoint paths with minimum total length (sum of the two paths length) between two given pairs of vertices. The running time is in $O(n^{11})$. In Polygon rectangulation, part 2: Minimum number of fat rectangles, a practical modification of the rectangle partition problem motivated by concerns in VLSI is presented: Fat Rectangle Optimization Problem: Given an orthogonal polygon $P$, maximize the shortest side $\delta$ over all rectangulations of $P$. Among the partitions with the same $\delta$, choose the partition with the fewest number of rectangles. As yet, only a theoretical algorithm exists, with a running time of $O(n^{42})$. (That is not a typo, and it is obtained through a “natural” dynamic programming solution to the problem stated there.) computing matrix rigidity[1] via brute force/naive/enumerations apparently takes $O(2^n)$ time for matrices of size $n$ elements. this can be seen as a converging limit of a sequence of increasingly accurate estimates that take $n \choose 1$, $n \choose 2$, $n \choose 3$, ... steps. in other words each estimate is in P-time $O(n^c)$ for any arbitrary exponent $c$ (ie $n \choose c$ steps). the naive algorithm chooses any $c$ elements of the matrix to change and tests for resulting dimension reduction. this is not totally surprising given that it has been related to computing circuit lower bounds. this follows a pattern where many algorithms have a conjectured P-time solution for some parameter but a solid proof of a lower bound would likely imply $\mathsf{P \neq NP}$ or something stronger. surprisingly one of the most obvious answers not posted yet. finding a clique of size $c$ (edges or vertices) apparently takes $O(n^c)$ time by the naive/brute force algorithm that enumerates all possibilities. or more accurately proportional to $n \choose c$ steps. 
(Strangely enough, this basic factoid seems to be rarely pointed out in the literature.) However, a strict proof of that lower bound would imply $\mathsf{P \neq NP}$, so this question is related to the famous open conjecture, and is virtually equivalent to it. Other NP-type problems can be parameterized in this way.
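The naive clique search just described can be made explicit (a Python sketch; the function name and the sample graph are mine), with the $\binom{n}{c}$ enumeration visible in the code:

```python
from itertools import combinations

def has_clique(adj, c):
    """Brute force: test all C(n, c) vertex subsets -- roughly n^c steps."""
    n = len(adj)
    return any(all(adj[u][v] for u, v in combinations(s, 2))
               for s in combinations(range(n), c))

# a triangle {0,1,2} plus a pendant vertex 3 attached to vertex 2
adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
print(has_clique(adj, 3), has_clique(adj, 4))  # True False
```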
I am reading about American option pricing and the variational inequality, and the book I am reading states, in the derivation of the variational inequality, that the following is a martingale: $$M_s = U(s,X_s) - \int_t^s(\partial_tU(r,X_r) + \mathcal{L}U(r,X_r))dr$$ where $\mathcal{L}$ is the Ito/infinitesimal generator of the form: $$\sum_{i} b_{i} (t,x) \partial_i + \frac1{2} \sum_{i, j} \big( \sigma (t,x) \sigma (t,x)^{\top} \big)_{i, j} \partial_{i,j}$$ This is given without any context, but it's definitely using the usual conditions, such as being a martingale with respect to the natural filtration generated by the Brownian motion under the risk-neutral measure, etc... But I think there are some constraints, and that it may not be true for any given stochastic process. I was trying to find an equality for the expression, and saw that it was very similar to Ito's lemma applied to a function $U$ and Ito process $X$. So I wrote: $$dU(t,X_t) = [\partial_tU(t,X_t) + \mathcal{L}U(t,X_t)]dt + \nabla U (t,X_t) \sigma (t,X_t)dW$$ and integrating from $t$ to $s$, this gives: $$U(s,X_s) - U(t,X_t) = \int_t^s(\partial_tU(r,X_r) + \mathcal{L}U(r,X_r))dr + \\ \int_t^s\nabla U(r,X_r) \sigma (r,X_r)dW(r)$$ So the expression from the book evaluates to $$M_s = U(t,X_t) + \int_t^s\nabla U(r,X_r) \sigma (r,X_r)dW(r)$$ So the second term looks like the difference between two martingales with respect to the same filtration, but I am unsure about $U(t,X_t)$ and the overall expression being a martingale. It may also be that the martingale is only defined with respect to the filtration $(\mathcal{F}_s)_{s \geq t}$, since this would make the expression a martingale, but I am unsure if that is a valid rationale. I'm only a beginner at this stuff and have only seen local martingales and martingales with respect to filtrations beginning at $0$. Thanks for the help!
EDIT: It looks like there are stochastic processes $X_s$ in the book that are defined only for $s \geq t$ and are said to be true $\mathbb{Q}$-martingales with respect to the filtration $(\mathcal{F}_s)_{s \geq 0}$. So now I am thinking the process $M_s$ is defined only for $s \geq t$, but I have tried searching for this kind of situation and haven't found anything helpful. I think for the stochastic process $(X_s)_{s \geq t}$, the value would be 'undefined' for $s < t$, so we couldn't really apply the martingale definition to it until time $s \geq t$, at which point it would be a martingale with respect to $(\mathcal{F}_s)_{s \geq t}$. If that definition is valid, the above should all make sense. Hoping someone can confirm!
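A quick Monte Carlo sanity check supports this reading (my own toy example, not from the book): take $X$ a standard Brownian motion started at $t = 0$ and $U(t,x) = x^2$, so $\partial_t U + \mathcal{L}U = 1$ and $M_s = X_s^2 - s$. Its mean should stay at $M_0 = 0$ for every $s$:

```python
import random

# Simulate X_s^2 - s at two checkpoints and check the mean stays near 0,
# consistent with M_s = X_s^2 - s being a martingale started at 0.
random.seed(1)
N, dt = 20000, 0.005
means = []
for steps in (100, 200):               # s = 0.5 and s = 1.0
    s = steps * dt
    total = 0.0
    for _ in range(N):
        x = 0.0
        for _ in range(steps):
            x += random.gauss(0.0, dt ** 0.5)
        total += x * x - s             # one sample of M_s
    means.append(total / N)
print(means)  # both entries stay near 0 within Monte Carlo error
```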
In mathematics, Riemann's differential equation, named after Bernhard Riemann, is a generalization of the hypergeometric differential equation, allowing the regular singular points (RSPs) to occur anywhere on the Riemann sphere, rather than merely at 0, 1, and \(\infty\). The equation is also known as the Papperitz equation. [1] The hypergeometric differential equation is a second-order linear differential equation which has three regular singular points, 0, 1 and \(\infty\). That equation admits two linearly independent solutions; near a singularity \(z_s\), the solutions take the form \(x^s f(x)\), where \(x = z - z_s\) is a local variable, and \(f\) is locally holomorphic with \(f(0) \neq 0\). The real number \(s\) is called the exponent of the solution at \(z_s\). Let \(\alpha\), \(\beta\) and \(\gamma\) be the exponents of one solution at 0, 1 and \(\infty\) respectively; and let \(\alpha'\), \(\beta'\) and \(\gamma'\) be those of the other. Then

\[\alpha + \alpha' + \beta + \beta' + \gamma + \gamma' = 1.\]

By applying suitable changes of variable, it is possible to transform the hypergeometric equation: applying Möbius transformations will adjust the positions of the RSPs, while other transformations (see below) can change the exponents at the RSPs, subject to the exponents adding up to 1.

Definition

The differential equation is given by

\[\frac{d^2w}{dz^2} + \left[\frac{1-\alpha-\alpha'}{z-a} + \frac{1-\beta-\beta'}{z-b} + \frac{1-\gamma-\gamma'}{z-c}\right]\frac{dw}{dz} + \left[\frac{\alpha\alpha'(a-b)(a-c)}{z-a} + \frac{\beta\beta'(b-c)(b-a)}{z-b} + \frac{\gamma\gamma'(c-a)(c-b)}{z-c}\right]\frac{w}{(z-a)(z-b)(z-c)} = 0.\]

The regular singular points are \(a\), \(b\), and \(c\). The exponents of the solutions at these RSPs are, respectively, \(\alpha;\ \alpha'\), \(\beta;\ \beta'\), and \(\gamma;\ \gamma'\). As before, the exponents are subject to the condition

\[\alpha + \alpha' + \beta + \beta' + \gamma + \gamma' = 1.\]

Solutions and relationship with the hypergeometric function

The solutions are denoted by the Riemann P-symbol (also known as the Papperitz symbol)

\[w(z) = P\left\{\begin{matrix} a & b & c & \\ \alpha & \beta & \gamma & z \\ \alpha' & \beta' & \gamma' & \end{matrix}\right\}\]

The standard hypergeometric function may be expressed as

\[{}_2F_1(a,b;c;z) = P\left\{\begin{matrix} 0 & \infty & 1 & \\ 0 & a & 0 & z \\ 1-c & b & c-a-b & \end{matrix}\right\}\]

The P-functions obey a number of identities; one of them allows a general P-function to be expressed in terms of the hypergeometric function. It is

\[P\left\{\begin{matrix} a & b & c & \\ \alpha & \beta & \gamma & z \\ \alpha' & \beta' & \gamma' & \end{matrix}\right\} = \left(\frac{z-a}{z-b}\right)^{\alpha}\left(\frac{z-c}{z-b}\right)^{\gamma} P\left\{\begin{matrix} 0 & \infty & 1 & \\ 0 & \alpha+\beta+\gamma & 0 & \frac{(z-a)(c-b)}{(z-b)(c-a)} \\ \alpha'-\alpha & \alpha+\beta'+\gamma & \gamma'-\gamma & \end{matrix}\right\}\]

In other words, one may write the solutions in terms of the hypergeometric function as

\[w(z) = \left(\frac{z-a}{z-b}\right)^{\alpha}\left(\frac{z-c}{z-b}\right)^{\gamma}\, {}_2F_1\left(\alpha+\beta+\gamma,\ \alpha+\beta'+\gamma;\ 1+\alpha-\alpha';\ \frac{(z-a)(c-b)}{(z-b)(c-a)}\right)\]

The full complement of Kummer's 24 solutions may be obtained in this way; see the article hypergeometric differential equation for a treatment of Kummer's solutions.

Fractional linear transformations

The P-function possesses a simple symmetry under the action of fractional linear transformations known as Möbius transformations (that are the conformal remappings of the Riemann sphere), or equivalently, under the action of the group GL(2, C). Given arbitrary complex numbers A, B, C, D such that AD − BC ≠ 0, define the quantities

\[u = \frac{Az+B}{Cz+D} \quad\text{ and }\quad \eta = \frac{Aa+B}{Ca+D}\]

and

\[\zeta = \frac{Ab+B}{Cb+D} \quad\text{ and }\quad \theta = \frac{Ac+B}{Cc+D}\]

then one has the simple relation

\[P\left\{\begin{matrix} a & b & c & \\ \alpha & \beta & \gamma & z \\ \alpha' & \beta' & \gamma' & \end{matrix}\right\} = P\left\{\begin{matrix} \eta & \zeta & \theta & \\ \alpha & \beta & \gamma & u \\ \alpha' & \beta' & \gamma' & \end{matrix}\right\}\]

expressing the symmetry.
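The hypergeometric reduction stated above can be checked numerically. The sketch below (all names and the sample singular points/exponents are my own arbitrary choices, obeying α + α′ + β + β′ + γ + γ′ = 1) builds w(z) from that formula and verifies by finite differences that it satisfies the Riemann–Papperitz equation at a test point:

```python
import cmath

# Sample data: exponent sum is 0.1 + 0.3 + 0.2 - 0.1 + 0.25 + 0.25 = 1.
a, b, c = -1.0, 3.0, 1.0
al, alp = 0.1, 0.3
be, bep = 0.2, -0.1
ga, gap = 0.25, 0.25

def hyp2f1(A, B, C, x, terms=300):
    """Gauss series for 2F1, valid for |x| < 1."""
    s = term = 1.0
    for n in range(terms):
        term *= (A + n) * (B + n) * x / ((C + n) * (n + 1))
        s += term
    return s

def w(z):
    """Solution built from the hypergeometric reduction stated above."""
    x = (z - a) * (c - b) / ((z - b) * (c - a))
    pref = cmath.exp(al * cmath.log((z - a) / (z - b))) \
         * cmath.exp(ga * cmath.log((z - c) / (z - b)))
    return pref * hyp2f1(al + be + ga, al + bep + ga, 1 + al - alp, x)

def residual(z, h=1e-4):
    """LHS of the Riemann equation, derivatives by central differences."""
    w0, wp, wm = w(z), w(z + h), w(z - h)
    w1 = (wp - wm) / (2 * h)
    w2 = (wp - 2 * w0 + wm) / h ** 2
    p = ((1 - al - alp) / (z - a) + (1 - be - bep) / (z - b)
         + (1 - ga - gap) / (z - c))
    q = (al * alp * (a - b) * (a - c) / (z - a)
         + be * bep * (b - c) * (b - a) / (z - b)
         + ga * gap * (c - a) * (c - b) / (z - c)) / ((z - a) * (z - b) * (z - c))
    return w2 + p * w1 + q * w0

print(abs(residual(0.3)))  # small: w solves the equation to finite-difference accuracy
```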
For some information on tides, here's a website: http://www.lhup.edu/~dsimanek/scenario/tides.htm For some mathematical detail (just algebra): http://mb-soft.com/public/tides.html You only need gravity to explain the tides. Omega is a constant, equal to the angular velocity of the earth... I'm pretty confused by the post - I think you might have a lot of misconceptions. First, tidal forces have nothing to do with centripetal acceleration. Tidal forces are due to gravity. Second, if you're standing still on the earth, unless you're at the poles, there is a centripetal force on... You can prove it with some clever thermodynamical arguments, but I always find those to be dissatisfying. Here's my attempt at explaining it. Imagine an object that is painted white. That means if you shine light on it, the light will reflect off it. Now imagine all the atoms inside the object... Apologies for digging up this old thread, but I have some questions on some of the responses: But why doesn't the metallic screen inside the microwave window get burnt, as does aluminum foil when you put it in the microwave? What you say makes a lot of sense. However, what about the... Well, it's been a while since I looked at thermodynamics, but I thought a Debye solid didn't require periodic boundary conditions, just vanishing at the endpoints. So you have a solid, and the frequencies it can vibrate at correspond to wavelengths that are 2L/n for positive integers n and... I'm not sure why you would need to decompose it. Assuming a Biot-Savart field, the flux through a circle whose center goes through the current-carrying wire is zero, since the magnetic fields circle around the wire, so that they are always tangential to planes perpendicular to the direction of... I'm not sure I know the answer at the level you want it. This is kind of like the example you see in textbooks of charging a capacitor.
If you form a loop around one of the wires leading up to the capacitor, and calculate the line integral of the magnetic field, then you get an answer if you use... Kinetic energy would not be a scalar because of Galilean boosts. If you're standing still, a tree has very little kinetic energy. If you're in a moving car, the tree is moving very fast. However, the kinetic energy is a scalar in the sense that it remains invariant under rotations. So it doesn't... I think that's an interesting historical question. The equations for electricity and magnetism, for example, are not the same when you make a Galileo boost of the form y=x+vt. People tried to save it by introducing an ether fluid, and so when you boost you also have to boost the ether fluid... You have to make some assumptions with Galilean boosts. For example, if you have a force that depends on velocity, say a drag force instead of a spring force: $-kx'=mx''$. The transformation y=x+vt (where v is velocity, t is time) results in the equation $-k(y'-v)=my''$, which is not $-ky'=my''$... That's just a consequence of using vectors. If you have a vector equation, such as Newton's law, then rotations and translations result in the same equation. If in addition you make other assumptions, such as homogeneous forces, you can stretch and dilate. For example, for $-kx=mx''$, you can make... Ampere's law is: $$B \cdot 2 \pi R=\frac{1}{c^2}\frac{d}{dt} \int \frac{q}{4 \pi \epsilon_0 r^2} \hat{e_r}\cdot d\vec{A}$$ where, if you imagine the charge moving along a line, and r is the point where you want to calculate the field, then R is the perpendicular distance from r to the line. The surface of... Because technically when your object is at the focus of a lens, the magnification is infinite, but the image is infinitely far away, so what's important is the angular magnification or, in the parlance, magnifying power, which is the ratio of the angular size of the object with the lens... Yes, if it's biconvex or biconcave.
Say light starts at the left and travels rightward where it goes through the lens, and then leaves the lens. Say the light ray has an initial angle that's upwards. When the light hits the convex surface on the left, then it is bent down a little bit. If it... This is something I never really understood. Isn't the point of a lens to have a small focal point? Because that's what you want to do, to bend light? So it seems that to achieve the lowest focal point, you would choose a biconvex or biconcave lens. What's the point of the rest? I took coherence to mean for a given circumstance, not for all circumstances. So sunlight for example is regarded as spatially incoherent, but technically if the area that it's focusing on is less than 10^-3 mm^2, then it's coherent. I'm not sure how something can have a short coherence time and... Re: Waves Yeah, power radiated happens to be proportional to the amplitude of the current squared, just like impedance. Quick question. The load at the end of a transmission line is transformed to a new impedance, and it is this impedance that the transmitter sees. Is the impedance that the... Your eyes are a thin lens, with focal length 1.85 centimeters. So another way to approach this problem is with the thin-lens equation, which gives you the magnification of an object as a function of distance. Essentially, m=xi/xo, but 1/xi+1/xo=1/1.85, so m=1.85/(xo-1.85). But the object... Re: Waves If you have an antenna that is three quarter-wavelengths long, then what would it behave like? I would like to say a short circuit, but the fact that energy is radiating into space suggests that it can't be a short circuit, that the resistance has to be at least the radiation... Re: Waves I'm a little confused about this. Isn't the impedance of an antenna infinity, since an antenna is an open circuit? Therefore an antenna always has full reflection of the transmitter wave, and you get standing waves both in the antenna and in the line leading up to the antenna. Or...
I think your first integral is zero, even though there is a discontinuity at the boundary. The only possible way that your first integral could be non-zero is if the discontinuity jumped to infinity. As for your second integral equaling your third integral, that's correct. I'm probably being nitpicky here, but an optics textbook probably ought to have $\nabla \cdot D = 0$, since you will have charges on boundaries, although they're not "free" (i.e., they're not put there by hand). Inertia is mass. Mass is a scalar, the same no matter how fast the observer moves. Momentum is a vector. Momentum depends on the velocity of the observer. I find that interesting, but I'm not sure I follow. How do particles transform through their gauge group as they propagate in time? Don't mass matrices... I believe the correct terminology is helicoid. A Riemann surface is something else entirely. I'm not a mathematician, but as I understand it, a Riemann surface is just a way to visualize different branches of a function and how they are related to each other. If friction steals away all the energy of the particle before it reaches at most an equilibrium position of $x_e=\frac{mg(\sin \theta +\mu \cos\theta)}{k}$, where $x_e$ is the distance from the starting position of the particle and $k$ is the spring constant, then there is no oscillation. The... I meant voltage. For a capacitor, Q=CV, where C (the capacitance) is real. So whatever phase the voltage has, the charge Q has the same phase. A Hertzian dipole (i.e. a very tiny antenna) is pretty much an AC generator connected to a capacitor. If you want you can put two spheres at the end of... Your equation (11.9) says that one of the quantities must be imaginary. So it's either Q or I. A Hertzian dipole behaves like a capacitor, where the current leads the voltage. Now for a capacitor, Q follows V. So you expect the current to lead the charge. Since a Hertzian dipole behaves like...
You derived this: $$i(t)=\frac{d[q(t)]}{dt}=\Re\!\left[Q\,\frac{d(e^{j\omega t})}{dt}\right]=\Re\!\left[Qj\omega e^{j\omega t}\right]=-\omega Q\sin\omega t$$ which is incorrect. $Q$ is imaginary, so you can't just take it out of the $\Re[\,]$. What you do is plug in $Q=\pm I/(j\omega)$, and then take the real part. Then you should get something like $I(t)=I\cos(\omega t)$.
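The bookkeeping in that last post is easy to verify numerically (a Python sketch; the amplitude and frequency are arbitrary): with the purely imaginary phasor $Q = I/(j\omega)$, taking the real part correctly does give $i(t) = I\cos\omega t$.

```python
import cmath
import math

w = 2 * math.pi * 50.0   # arbitrary angular frequency
I = 1.3                  # real current amplitude
Q = I / (1j * w)         # purely imaginary charge phasor, Q = I/(jw)

for t in (0.0, 0.0013, 0.004):
    # i(t) = d/dt Re[Q e^{jwt}] = Re[jw Q e^{jwt}]
    i = (1j * w * Q * cmath.exp(1j * w * t)).real
    assert abs(i - I * math.cos(w * t)) < 1e-9
# pulling the imaginary Q outside Re[] instead would give -wQ sin(wt),
# which is not even a real quantity -- that is the error pointed out above
print("i(t) = I cos(wt) at all sample times")
```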
Hi guys... just thought of sharing my time table and I also want to know yours. How to manage time, and how to give quality time for studying, are topics that we should always think about while writing a time table. Do share yours.

6:00 A.M - 7:00 A.M - wake up, get freshened up and revise concepts
7:00 A.M - 8:00 A.M - go to Gym
8:30 A.M - 11:30 A.M - study new concepts
11:30 A.M - 1:00 P.M - Brilliant*
1:00 P.M - 3:30 P.M - study
4:00 P.M - 8:30 P.M - class
9:00 P.M - 11:00 P.M - study
11:00 P.M - 6:00 A.M - RIP. :D

thnkx

Note by Max B 5 years, 5 months ago

sorry for the latex that i used..
you study a lot.... good

what's ur timetable

@Max B – My time table is not an ideal time table which could be followed.... I really need to work on my time table... I am not going to tell my time table, but it's how I spend my whole day -
Get up at 5 am
From 6:30 am to 11:30 am I got my school
Reached home, had some lunch
Then, I work on Brilliant...! till 2:30 pm, about... get a nap...
From 3:30 to 5:30 - I do some reading work, like newspapers, editorials, other than course, et cetera
From 7:00 do some school work till 9:30 or less.
Ate some, then, aaaaaaaammmmmm (drowsy) ZZZzzzzz....

@Archiet Dev – Oh my god! Which school do you go to? 6.30 is when we wake up here in Chennai... and you have only 5 hours of schooling. o.O? Lucky!!!!!! We have school from 8.15 - 4 :(... and by the time I come home, my head goes for a six and I barely have time left for homework, which really dampens my progress, prohibiting me from using Brilliant... U ppl are so lucky

@Krishna Ar – Really, we have a similar daily routine in the US. I go to school from 6.30 to 2.30. Then I sleep, do my homework, study concepts and then use Brilliant and Wikipedia to learn how to solve problems

@Mardokay Mosazghi – Wow! But you have school from 6.30 am to 3.30 pm?

@Krishna Ar – sorry it is 2.30

Hey @Max B You can use Markdown to make the table, like this:

|| 1 || 2 || 3 || 4 ||
|| a || b || c || d ||

I really want to see your timetable!

I need practice.... u know why it takes time..... thankx by the way

@Max B – You should edit your original post so we can read your time table!
Let $f: \{-1,1\}^n \rightarrow \{-1,1\}$ be a Boolean function. The Fourier expansion of $f$ is $$f(T) = \sum_{S \subseteq [n]} \widehat{f}(S)\ \chi_S(T)$$ where $\widehat{f}(S)$ are real numbers and $\chi_S(T)=\Pi_{i \in S} T_i$ is a parity function. Let $d$ be the degree of the Fourier expansion of $f$, i.e. $d= \max_{\widehat{f}(S)\neq 0} |S|$. By Parseval's identity we have $$\sum_{S \subseteq [n]} \widehat{f}(S)^2=1$$ I am looking for a bound on $$\sum_{S \subseteq [n]} |\widehat{f}(S)|$$ I think it is bounded by $d$. But I have neither a proof nor a counterexample for this claim. Can someone provide a proof or give a counterexample?
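As a small numerical illustration (a Python sketch; `maj3` and the helper names are my own choices, not from the question), one can tabulate all Fourier coefficients of a degree-3 function, confirm Parseval's identity, and evaluate the spectral sum in question:

```python
from itertools import combinations, product
from math import prod

def fourier_coefficients(f, n):
    """f_hat(S) = E_x[f(x) * chi_S(x)], averaging over the cube {-1,1}^n."""
    cube = list(product((-1, 1), repeat=n))
    return {S: sum(f(x) * prod(x[i] for i in S) for x in cube) / 2 ** n
            for k in range(n + 1) for S in combinations(range(n), k)}

def maj3(x):
    return 1 if sum(x) > 0 else -1  # majority of 3 bits, degree d = 3

fc = fourier_coefficients(maj3, 3)
parseval = sum(v * v for v in fc.values())    # Parseval: should equal 1
spectral = sum(abs(v) for v in fc.values())   # the quantity to be bounded
degree = max(len(S) for S, v in fc.items() if abs(v) > 1e-12)
print(parseval, spectral, degree)  # 1.0 2.0 3 -- here sum |f_hat(S)| <= d
```

For Maj3 the nonzero coefficients are $\widehat{f}(\{i\}) = 1/2$ and $\widehat{f}(\{1,2,3\}) = -1/2$, so the spectral sum is 2, below the degree 3.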
Consider the equation \[\frac{x^2}{a^2} + \frac{z^2}{c^2} = 1, \label{4.3.1} \tag{4.3.1}\] with \(a > c\), in the \(xz\)-plane. The length of the semi major axis is \(a\) and the length of the semi minor axis is \(c\). If this figure is rotated through \(360^\circ\) about its minor (\(z\)-) axis, the three-dimensional figure so obtained is called an oblate spheroid. The figure of the Earth is not exactly spherical; it approximates to a very slightly oblate spheroid, the ellipticity \((c − a)/a\) being only \(0.00335\). (The actual figure of the Earth, mean sea level, is often referred to as the geoid.) The equation to the oblate spheroid referred to above is \[\frac{x^2}{a^2} + \frac{y^2}{a^2} + \frac{z^2}{c^2} = 1. \label{4.3.2} \tag{4.3.2}\] If the ellipse \(\ref{4.3.1}\) is rotated through \(360^\circ\) about its major (\(x\)-) axis, the figure so obtained is called a prolate spheroid. A rugby football (or, to a lesser extent, a North American football, which is a bit too pointed) is a good approximation to a prolate spheroid. The equation to the prolate spheroid just described is \[\frac{x^2}{a^2} + \frac{y^2}{c^2} + \frac{z^2}{c^2} = 1. \label{4.3.3} \tag{4.3.3}\] Either type of spheroid can be referred to as an "ellipsoid of revolution". The figure described by the equation \[\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1 \label{4.3.4} \tag{4.3.4}\] is a tri-axial ellipsoid. Unless stated otherwise, I shall adopt the convention \(a > b > c\), and choose the coordinate axes such that the major, intermediate and minor axes are along the \(x\)-, \(y\)- and \(z\)-axes respectively. A tri-axial ellipsoid is not an ellipsoid of revolution; it cannot be obtained by rotating an ellipse about an axis. The special case \(a = b = c\): \[x^2 + y^2 + z^2 = a^2 \label{4.3.5} \tag{4.3.5}\] is, of course, a sphere.
Figure \(\text{IV.4}\) shows the cross-section of a tri-axial ellipsoid in the \(yz\)-plane (a), the \(xz\)-plane (b) and (twice - (c), (d)) the \(xy\)-plane. If you imagine your eye wandering in the \(xz\)-plane from the \(x\)-axis (a) to the \(z\)-axis (c), you will be convinced that there is a direction in the \(xz\)-plane from which the cross-section of the ellipsoid is a circle. There are actually two such directions, symmetrically situated on either side of the \(z\)-axis, but there are no such directions in either the \(xy\)- or the \(yz\)-planes from which the cross-section of the ellipsoid appears as a circle. Expressed otherwise, there are two planes that intersect the ellipsoid in a circle. This fact is of some importance in the description of the propagation of light in a bi-axial crystal, in which one of the wavefronts is a tri-axial ellipsoid. Let us refer the ellipsoid \(\ref{4.3.4}\) to a set of axes \(\text{O}x^\prime y^\prime z^\prime\) such that the angles \(z^\prime \text{O} z\) and \(x^\prime \text{O} x\) are each \(θ\), and the \(y^\prime\)- and \(y\)-axes are identical. The equation of the ellipsoid referred to the new axes is (by making use of the usual formulas for the rotation of axes) \[\frac{(z^\prime \sin θ + x^\prime \cos θ)^2}{a^2} + \frac{y^{\prime 2}}{b^2} + \frac{(z^\prime \cos θ - x^\prime \sin θ)^2}{c^2} = 1. \label{4.3.6} \tag{4.3.6}\] The cross-section of the ellipsoid in the \(x^\prime y^\prime\)-plane (i.e. normal to the \(z^\prime\)-axis) is found by putting \(z^\prime = 0\): \[\frac{(x^\prime \cos θ)^2}{a^2} + \frac{y^{\prime 2}}{b^2} + \frac{(x^\prime \sin θ)^2}{c^2} = 1. \label{4.3.7} \tag{4.3.7}\] This is a circle if the coefficients of \(x^{\prime 2}\) and \(y^{\prime 2}\) are equal. Thus it is a circle if \[\cos^2 θ = \frac{a^2(b^2 - c^2)}{b^2(a^2 - c^2)}. \label{4.3.8} \tag{4.3.8}\] Thus, a plane whose normal is in the \(xz\)-plane (i.e.
between the major and minor axis) and inclined at an angle \(θ\) to the minor (\(z\)-) axis, cuts the tri-axial ellipsoid in a circle. As viewed from either of these directions, the cross-section of the ellipsoid is a circle of radius \(b\). As an asteroid tumbles over and over, its brightness varies, for several reasons, such as its changing phase angle, the directional reflective properties of its regolith, and, of course, the cross-sectional area presented to the observer. The number of factors that affect the light-curve of a rotating asteroid is, in fact, so large that it is doubtful if it is possible, from the light-curve alone, to deduce with much credibility or accuracy the true shape of the asteroid. However, it is obviously of some interest for a start in any such investigation to be able to calculate the cross-sectional area of the ellipsoid \(\ref{4.3.4}\) as seen from some direction \(( θ , \phi )\). Let us erect a set of coordinate axes \(\text{O}x^\prime y^\prime z^\prime\) such that \(\text{O} z^\prime\) is in the direction \(( θ , \phi )\), first by a rotation through \(\phi\) about \(\text{O}z\) to form intermediate axes \(\text{O}x_1 y_1 z_1\), followed by a rotation through \(θ\) about \(\text{O} y_1\). The \((x^\prime , y^\prime , z^\prime )\) coordinates are related to the \((x, y, z)\) coordinates by \[\pmatrix{x \\ y \\ z} = \pmatrix{\cos \phi & -\sin \phi & 0 \\ \sin \phi & \cos \phi & 0 \\ 0 & 0 & 1} \pmatrix{\cos θ & 0 & \sin θ \\ 0 & 1 & 0 \\ -\sin θ & 0 & \cos θ} \pmatrix{x^\prime \\ y^\prime \\ z^\prime} \label{4.3.9} \tag{4.3.9}\] If we substitute for \(x, \ y, \ z\) in equation \(\ref{4.3.4}\) from equation \(\ref{4.3.9}\), we obtain the equation to the ellipsoid referred to the \(\text{O} x^\prime y^\prime z^\prime\) coordinate system. And if we put \(z^\prime = 0\), we see the elliptical cross-section of the ellipsoid in the plane normal to \(\text{O}z^\prime\).
This will be of the form \[Ax^{\prime 2} + 2H x^\prime y^\prime + B y^{\prime 2} = 1, \label{4.3.10} \tag{4.3.10}\] where \[A = \cos^2 θ \left( \frac{\cos^2 \phi}{a^2} + \frac{\sin^2 \phi}{b^2} \right) + \frac{\sin^2 θ}{c^2} \label{4.3.11} \tag{4.3.11}\] \[2H = 2 \cos θ \sin \phi \cos \phi \left( \frac{1}{b^2} - \frac{1}{a^2} \right) , \label{4.3.12} \tag{4.3.12}\] \[B = \frac{\sin^2 \phi}{a^2} + \frac{\cos^2 \phi}{b^2}. \label{4.3.13} \tag{4.3.13}\] This is an ellipse whose axes are inclined at an angle \(ψ\) from \(\text{O}x^\prime\) given by \[\tan 2 ψ = \frac{2H}{A-B}. \label{4.3.14} \tag{4.3.14}\] By replacing \(x^\prime\) and \(y^\prime\) by \(x^{\prime \prime}\) and \(y^{\prime \prime}\), where \[\pmatrix{x^\prime \\ y^\prime} = \pmatrix{\cosψ & - \sin ψ \\ \sin ψ & \cos ψ} \pmatrix{x^{\prime \prime} \\ y^{\prime \prime}} \label{4.3.15} \tag{4.3.15}\] we shall be able to describe the ellipse in a coordinate system \(\text{O}x^{\prime \prime}y^{\prime \prime}\) whose axes are along the axes of the ellipse, and the equation will be of the form \[\frac{x^{\prime \prime 2}}{a^{\prime \prime 2}} + \frac{y^{\prime \prime 2}}{b^{\prime \prime 2}} = 1 \label{4.3.16} \tag{4.3.16}\] and the area of the cross-section is \(\pi a^{\prime \prime} b^{\prime \prime}\). For example, suppose the semi axes of the ellipsoid are \(a = 3, \ b = 2, \ c = 1\), and we look at it from the direction \(θ = 60^\circ\), \(\phi= 45^\circ\). Following equations 4.3.9 and 4.3.11 - 4.3.13, we obtain for the equation of the elliptical cross-section referred to the system \(\text{O}x^\prime y^\prime z^\prime\) \[0.79513 \dot{8} x^{\prime 2} + 0.069 \dot{4} x^\prime y^\prime + 0.180 \dot{5} y^{\prime 2} = 1 . \label{4.3.17} \tag{4.3.17}\] From equation 4.3.14 we find \(ψ = 3^\circ\ .22338\).
Equation 4.3.15 then transforms equation 4.3.17 to \[0.797094 x^{\prime \prime 2} + 0.178600 y^{\prime \prime 2} = 1 \label{4.3.18} \tag{4.3.18}\] or \[\frac{x^{\prime \prime 2}}{(1.1201)^2} + \frac{y^{\prime \prime 2}}{(2.3662)^2} = 1. \label{4.3.19} \tag{4.3.19}\] The area is \[\pi \times 1.1201 \times 2.3662 = 8.326. \] It is suggested here that the reader could write a computer program in the language of his or her choice for calculating the cross-sectional area of an ellipsoid as seen from any direction. As an example, I reproduce below a Fortran program for an ellipsoid with \((a, b, c) = (3, 2, 1)\). It is by no means the fastest and most efficient Fortran program that could be written, but is sufficiently straightforward that anyone familiar with Fortran and probably many who are not should be able to follow the steps.

      A=3.
      B=2.
      C=1.
      A2=A*A
      B2=B*B
      C2=C*C
      READ(5,*)TH,PH
      TH=TH/57.29578
      PH=PH/57.29578
      STH=SIN(TH)
      CTH=COS(TH)
      SPH=SIN(PH)
      CPH=COS(PH)
      STH2=STH*STH
      CTH2=CTH*CTH
      SPH2=SPH*SPH
      CPH2=CPH*CPH
      AA=CTH2*(CPH2/A2+SPH2/B2)+STH2/C2
      TWOHH=2.*CTH*SPH*CPH*(1./B2-1./A2)
      BB=SPH2/A2+CPH2/B2
      PS=.5*ATAN2(TWOHH,AA-BB)
      SPS=SIN(PS)
      CPS=COS(PS)
      AAA=CPS*(AA*CPS+TWOHH*SPS)+BB*SPS*SPS
      BBB=SPS*(AA*SPS-TWOHH*CPS)+BB*CPS*CPS
      SEMAX1=1./SQRT(AAA)
      SEMAX2=1./SQRT(BBB)
      AREA=3.1415927*SEMAX1*SEMAX2
      WRITE(6,1)AREA
    1 FORMAT(' Area = ',F7.3)
      STOP
      END
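In the same spirit, here is an equivalent sketch in Python (my own function name; note that the cross term \(2H\) carries a single power of \(\cos θ\), which is what the worked coefficient \(0.069\dot{4}\) above requires):

```python
import math

def cross_section_area(a, b, c, theta_deg, phi_deg):
    """Area of the central cross-section of x^2/a^2 + y^2/b^2 + z^2/c^2 = 1
    normal to the direction (theta, phi)."""
    th, ph = math.radians(theta_deg), math.radians(phi_deg)
    A = (math.cos(th) ** 2 * (math.cos(ph) ** 2 / a ** 2
                              + math.sin(ph) ** 2 / b ** 2)
         + math.sin(th) ** 2 / c ** 2)
    # single power of cos(theta) in the cross term
    twoH = 2 * math.cos(th) * math.sin(ph) * math.cos(ph) * (1 / b ** 2 - 1 / a ** 2)
    B = math.sin(ph) ** 2 / a ** 2 + math.cos(ph) ** 2 / b ** 2
    psi = 0.5 * math.atan2(twoH, A - B)   # rotate to the principal axes
    sp, cp = math.sin(psi), math.cos(psi)
    AAA = cp * (A * cp + twoH * sp) + B * sp * sp
    BBB = sp * (A * sp - twoH * cp) + B * cp * cp
    return math.pi / math.sqrt(AAA * BBB)  # pi * semi-axis1 * semi-axis2

print(round(cross_section_area(3, 2, 1, 60, 45), 3))  # pi*1.1201*2.3662 = 8.326
```

A convenient sanity check is the sphere \(a = b = c\), for which the area must be \(\pi a^2\) from every direction.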
Let $t$ be time and $x$ be the ratio of the cistern filled with water. Assume that the cistern has a constant horizontal cross-section, which implies that the rate of leaking is proportional to the square-root of the amount of water by Torricelli's law, since the leak is at the bottom, and the amount is proportional to the water depth. Also assume that the cistern is filled via a tap at a constant rate. The differential equation for the leaking cistern is then: $\frac{dx}{dt} = a - c\sqrt{x}$. where $a = \frac{3}{2}$ is the rate of tap flow, since it takes $\frac{2}{3}$ hours to fill the cistern when there is no leak, and $c$ is some positive constant that is larger for larger leaks. We know that throughout the process of filling the cistern $a - c\sqrt{x} > 0$, otherwise it can never be filled. Thus during that process we have: $\frac{1}{a-c\sqrt{x}} \frac{dx}{dt} = 1$. $\int \frac{1}{a-c\sqrt{x}} \ dx = \int \frac{1}{a-c\sqrt{x}} \frac{dx}{dt} \ dt = t + k$ for some constant $k$. We find that: $\int \frac{1}{a-c\sqrt{x}} \ dx = \int \left( - \frac{1}{c\sqrt{x}} + \frac{a}{c\sqrt{x}(a-c\sqrt{x})} \right) \ dx = - \frac{2}{c} \sqrt{x} - \frac{2a}{c^2} \ln(a-c\sqrt{x})$. Therefore: $- \frac{2}{c} \sqrt{x} - \frac{2a}{c^2} \ln(a-c\sqrt{x}) = t + m$ for some constant $m$. Now we know that at $t = 0$ we have $x = 0$, therefore $m = - \frac{2a}{c^2} \ln(a)$. Also at $t = 1$ (hour) we have $x = 1$ (full tank), therefore $- \frac{2}{c} - \frac{2a}{c^2} \ln(a-c) = 1 - \frac{2a}{c^2} \ln(a)$. Since $a = \frac{3}{2}$, we can numerically solve for $c$: $c \approx 0.7114$. If the leaky cistern starts full and the tap is off, it will run dry according to: $\frac{dx}{dt} = -c\sqrt{x}$. Which we can solve easily: $\int \frac{1}{2\sqrt{x}} \ dx = \int -\frac{c}{2} \ dt$. $\sqrt{x} = 1 - \frac{c}{2} t$. [since at $t=0$ we have $x=1$] $x = ( 1 - \frac{c}{2} t )^2$. Therefore the answer is: The full leaky cistern will run dry at $t = \frac{2}{c} \approx 2.81$ hours. 
An interesting feature of filling such leaky cisterns with a tap is that they can only be filled up to a certain limit, at which the rate of tap flow is equal to the rate of leak. That limit is at most $(\frac{a}{c})^2$, because if $x$ ever reaches that then $\frac{dx}{dt} = 0$ and so $x$ will not increase anymore. The fact that $x$ really tends to that limit can be seen by observing that, in the implicit solution above, the term $-\frac{2}{c}\sqrt{x}$ stays bounded, so as $t \to \infty$ the term $\ln(a-c\sqrt{x})$ must tend to $-\infty$, i.e. $c\sqrt{x} \to a$. This means that the tap flow rate $a$ must be at least $c$, otherwise it will be unable to fill the cistern. Note The previous version of this answer was using the assumption that leak rate is proportional to pressure at leak, which after almagest's comment and some online searching I now believe is not physically correct for water. I think it might be valid for very tiny leaks or if the cistern is leaking because the material is permeable, in which case viscosity has a large effect and so the Hagen–Poiseuille equation would imply my assumption. But if the cistern is quite big then the tap flow rate has to be reasonably high to be able to fill it up in 1 hour if there is no leak, so 20 min difference if there is a leak implies that the leak is actually not so tiny, which means that perhaps viscosity is not a dominant factor, and Torricelli's law would be the relevant one.
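The implicit equation for $c$ and the resulting dry-out time can be confirmed numerically (a Python sketch with a simple bisection; variable names are mine):

```python
import math

a = 1.5  # tap rate: fills the unit cistern in 2/3 h with no leak

def residual(c):
    """Filling condition at t = 1, x = 1:
    -2/c - (2a/c^2) ln(a - c) + (2a/c^2) ln(a) - 1 = 0."""
    return -2 / c - (2 * a / c ** 2) * math.log(a - c) \
           + (2 * a / c ** 2) * math.log(a) - 1

lo, hi = 0.1, 1.0        # residual(lo) < 0 < residual(hi)
for _ in range(60):      # bisection
    mid = (lo + hi) / 2
    if residual(mid) < 0:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
print(round(c, 4), round(2 / c, 2))  # c ~ 0.7114, dry-out at 2/c ~ 2.81 h
```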
We wish to find the potential at a point P at a large distance \(R\) from a charged body, in terms of its total charge and its dipole, quadrupole, and possibly higher-order moments. There will be no loss of generality if we choose a set of axes such that P is on the \(z\)-axis. \(\text{FIGURE III.14}\) We refer to Figure \(III\).14, and we consider a volume element \(\delta \tau\) at a distance \(r\) from some origin. The point P is at a distance \(R\) from the origin and a distance \(\Delta\) from \(\delta \tau\). The potential at P from the charge in the element \(\delta \tau\) is given by \[4\pi\epsilon_0 \delta V = \dfrac{\rho \delta \tau }{\Delta} = \dfrac{\rho}{R}\left ( 1+\dfrac{r^2}{R^2}-\dfrac{2r}{R}\cos \theta \right )^{-1/2}\delta \tau ,\] and so the potential from the charge on the whole body is given by \[4\pi\epsilon_0 V =\dfrac{1}{R} \int \rho \left ( 1+\dfrac{r^2}{R^2} -\dfrac{2r}{R}\cos \theta \right )^{-1/2}\delta \tau .\] On expanding the parentheses by the binomial theorem, we find, after a little trouble, that this becomes \[4\pi\epsilon_0 V = \dfrac{1}{R}\int \rho \,d\tau + \dfrac{1}{R^2}\int \rho r P_1 (\cos \theta)\,d\tau + \dfrac{1}{2!R^3}\int \rho r^2 P_2 (\cos \theta)\, d\tau + \dfrac{1}{3!R^4}\int \rho r^3 P_3 (\cos \theta )\,d\tau + ... , \] where the polynomials \(P\) are the Legendre polynomials given by \[\begin{align}P_1 (\cos \theta ) &= \cos \theta \\ P_2(\cos \theta) &= \dfrac{1}{2}(3\cos^2 \theta -1 ), \\ P_3(\cos \theta)&=\dfrac{1}{2}(5\cos^3 \theta - 3\cos \theta ). \\ \end{align}\] We see from the forms of these integrals and the definitions of the components of the dipole and quadrupole moments that this can now be written: \[4\pi\epsilon_0 V = \dfrac{Q}{R} + \dfrac{p}{R^2}+\dfrac{1}{2R^3}(3q_{zz}-Tr\textbf{q})+...,\label{3.9.7}\] Here Tr q is the trace of the quadrupole moment matrix, or the (invariant) sum of its diagonal elements. Equation \ref{3.9.7} can also be written \[4\pi\epsilon_0V=\dfrac{Q}{R}+\dfrac{p}{R^2}+\dfrac{1}{2R^3}[2q_{zz}-(q_{xx}+q_{yy})]+... 
. \] The quantity \(2q_{zz}-(q_{xx}+q_{yy})\) of the diagonalized matrix is often referred to as “the” quadrupole moment. It is zero if all three diagonal components are zero or if \(q_{zz}=\dfrac{1}{2}(q_{xx}+q_{yy})\). If the body has cylindrical symmetry about the \(z\)-axis, this becomes \(2(q_{zz}-q_{xx})\). Exercise Show that the potential at \((r , θ)\) at a large distance from the linear quadrupole of Figure \(III\).15 is \[V=\dfrac{QL^2(3\cos^2 \theta -1)}{4\pi\epsilon_0 r^3}.\nonumber\] (The gap in the dashed line is intended to indicate that r is very large compared with L.) \(\text{FIGURE III.15}\) The solution to this exercise is easy if you know about Legendre polynomials. See Section 1.14 of my notes on Celestial Mechanics. What you need to know is that the expansion of \((1-2ax+x^2)^{-1/2}\) can be written as a series of Legendre polynomials, namely \(P_0(x)+xP_1(x)+x^2P_2(x)+...\). You also need a (very small) table of Legendre polynomials, namely \(P_0(x)=1,\,P_1(x)=x,\,P_2(x)=\dfrac{1}{2}(3x^2-1)\). Given that, you should find the exercise very easy.
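For reference, here is a sketch of the calculation, assuming (since the figure is not reproduced here) that the linear quadrupole consists of charges \(+Q\) at \(z=\pm L\) and \(-2Q\) at the origin. Using the Legendre expansion above with \(x = L/r\),

\[4\pi\epsilon_0 V = \frac{Q}{r_+}+\frac{Q}{r_-}-\frac{2Q}{r} = \frac{Q}{r}\sum_{n=0}^{\infty}\left[\left(\frac{L}{r}\right)^{n}+\left(-\frac{L}{r}\right)^{n}\right]P_n(\cos\theta)-\frac{2Q}{r}.\]

The odd terms cancel, the \(n=0\) terms cancel against \(-2Q/r\), and the leading surviving term is

\[4\pi\epsilon_0 V \approx \frac{2QL^2}{r^3}P_2(\cos\theta)=\frac{QL^2(3\cos^2\theta-1)}{r^3},\]

which is the stated result.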
The Simple Pendulum Galileo was the first to record that the period of a swinging lamp high in a cathedral was independent of the amplitude of the oscillations, at least for the small amplitudes he could observe. In 1657, Huygens constructed the first pendulum clock, a vast improvement in timekeeping over all previous techniques. So the pendulum was the first oscillator of real technological importance. In fact, though, the pendulum is not quite a simple harmonic oscillator: the period does depend on the amplitude, but provided the angular amplitude is kept small, this is a small effect. The weight \(mg\) of the bob (the mass at the end of the light rod) can be written in terms of components parallel and perpendicular to the rod. The component parallel to the rod balances the tension in the rod. The component perpendicular to the rod accelerates the bob, \[ ml \dfrac{d^2\theta}{dt^2} = -mg\sin \theta \] The mass cancels between the two sides: pendulums of the same length but different masses behave identically. (In fact, this was one of the first tests that inertial mass and gravitational mass are indeed equal: pendulums made of different materials, but of the same length, had the same period.) For small angles, the equation is close to that for a simple harmonic oscillator, \[ l\dfrac{d^2\theta}{dt^2} = -g\theta \] with angular frequency \( \omega = \sqrt{\dfrac{g}{l}} \), that is, time of one oscillation \( T = 2\pi \sqrt{\dfrac{l}{g}} \). At a displacement of ten degrees, the simple harmonic approximation overestimates the restoring force by around half of one percent, and for smaller angles this error goes essentially as the square of the angle. The period lengthens by a factor of about \(1 + \theta_0^2/16\), so a pendulum clock designed to keep time with small oscillations of the pendulum will lose several seconds an hour if the pendulum is made to swing with a maximum angular displacement of ten degrees.
The potential energy of the pendulum relative to its rest position is just \(mgh\), where \(h\) is the height difference, that is, \( mgl(1 - \cos \theta) \). The total energy is therefore \[ E = \dfrac {1}{2} m \left( l \dfrac {d \theta}{dt} \right)^2 + mgl (1 - \cos \theta) \approx \dfrac {1}{2} m \left( l \dfrac {d \theta}{dt} \right)^2 + \dfrac {1}{2} mgl \theta^2 \] for small angles. Pendulums of Arbitrary Shape The analysis of pendulum motion in terms of angular displacement works for any rigid body swinging back and forth about a horizontal axis under gravity. For example, consider a rigid rod. The kinetic energy is given by \( \dfrac{1}{2} I \dot{\theta}^2 \), where \(I\) is the moment of inertia of the body about the axis; the potential energy is \( mgl(1 - \cos \theta) \), but \(l\) is now the distance of the center of mass from the axis. The equation of motion is that the rate of change of angular momentum equals the applied torque, \[ I \ddot{\theta} = -mgl \sin \theta , \] for small angles the period is \( T = 2\pi \sqrt{\dfrac{I}{mgl}} \), and for the simple pendulum we considered first \( I = ml^2 \), giving the previous result. Variation of Period of a Pendulum with Amplitude As the amplitude of pendulum motion increases, the period lengthens, because the restoring force \( -mg \sin \theta \) increases more slowly than \( -mg \theta \) (\( \sin \theta \approx \theta - \dfrac{\theta^3}{3!} \) for small angles). The simplest way to get some idea how this happens is to explore it with the accompanying spreadsheet. Begin with an initial displacement of 0.1 radians (5.7 degrees): Next, try one radian: The change in period is a little less than 10%, not too dramatic considering the large amplitude of this swing. Two radians gives an increase around 35%, and three radians amplitude increases the period almost threefold. It’s well worth exploring further with the spreadsheet!
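The amplitude dependence quoted above is easy to check numerically without a spreadsheet. The exact period is \( T = T_0 \cdot \frac{2}{\pi} K(k) \) with \( k = \sin(\theta_0/2) \), where \(K\) is the complete elliptic integral of the first kind; a sketch using the arithmetic–geometric mean (the function name is my own):

```python
import math

def period_ratio(theta0):
    """T(theta0) / T0 for a pendulum of amplitude theta0, via the
    complete elliptic integral K(k), k = sin(theta0/2), computed
    with the AGM: K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))."""
    k = math.sin(theta0 / 2)
    a, g = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - g) > 1e-15:
        a, g = (a + g) / 2, math.sqrt(a * g)
    K = math.pi / (2 * a)
    return K / (math.pi / 2)
```

This gives about a 6.6% increase at one radian, ~33% at two radians, and a factor of ~2.6 at three radians, consistent with the rough figures reported from the spreadsheet.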
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions. Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not. (a) Suppose that a $3\times 3$ system of linear equations is inconsistent. Is the coefficient matrix of the system nonsingular? (b) Suppose that a $3\times 3$ homogeneous system of linear equations has a solution $x_1=0, x_2=-3, x_3=5$. Is the coefficient matrix of the system nonsingular? (c) Let $A$ be a $4\times 4$ matrix and let\[\mathbf{v}=\begin{bmatrix}1 \\2 \\3 \\4\end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix}4 \\3 \\2 \\1\end{bmatrix}.\]Suppose that we have $A\mathbf{v}=A\mathbf{w}$. Is the matrix $A$ nonsingular? Suppose that $B=\{\mathbf{v}_1, \mathbf{v}_2\}$ is a basis for $\R^2$. Let $S:=[\mathbf{v}_1, \mathbf{v}_2]$. Note that as the column vectors of $S$ are linearly independent, the matrix $S$ is invertible. Prove that for each vector $\mathbf{v} \in \R^2$, the vector $S^{-1}\mathbf{v}$ is the coordinate vector of $\mathbf{v}$ with respect to the basis $B$. Prove that the matrix\[A=\begin{bmatrix}0 & 1\\-1& 0\end{bmatrix}\]is diagonalizable. Prove, however, that $A$ cannot be diagonalized by a real nonsingular matrix. That is, there is no real nonsingular matrix $S$ such that $S^{-1}AS$ is a diagonal matrix.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems. Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$. (a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$. (b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$. Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular. Problem 9. Determine whether each of the following sentences is true or false. (a) There is a $3\times 3$ homogeneous system that has exactly three solutions. (b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric. (c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$. (d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent. An $n\times n$ matrix $A$ is called nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$. Using the definition of a nonsingular matrix, prove the following statements. (a) If $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular. (b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then: (1) the matrix $B$ is nonsingular; (2) the matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.)
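Problem 7 can be verified numerically. A quick check of part (a), and of the eigenvalue shortcut that part (b) is after:

```python
import numpy as np

A = np.array([[-3, -4], [8, 9]])
v = np.array([-1, 2])

# (a) A v = (-5, 10) = 5 * v, so lambda = 5
Av = A @ v

# (b) from A v = 5 v it follows that A^3 v = 5^3 v = 125 v,
# with no need to form A^3 explicitly
A3v = 125 * v
```

The shortcut agrees with the brute-force computation `np.linalg.matrix_power(A, 3) @ v`.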
Let $A$ be an $n\times (n-1)$ matrix and let $\mathbf{b}$ be an $(n-1)$-dimensional vector. Then the product $A\mathbf{b}$ is an $n$-dimensional vector. Set the $n\times n$ matrix $B=[A_1, A_2, \dots, A_{n-1}, A\mathbf{b}]$, where $A_i$ is the $i$-th column vector of $A$. Prove that $B$ is a singular matrix for any choice of $\mathbf{b}$. For each of the following $3\times 3$ matrices $A$, determine whether $A$ is invertible and find the inverse $A^{-1}$, if it exists, by computing the augmented matrix $[A|I]$, where $I$ is the $3\times 3$ identity matrix.
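The augmented-matrix procedure in the last problem can be sketched in code: row-reduce $[A\,|\,I]$ and read off $A^{-1}$ from the right half. The matrices of the original problem are not reproduced here, so the example matrix below is my own:

```python
import numpy as np

def inverse_via_augmented(A):
    """Row-reduce the augmented matrix [A | I]; return A^{-1},
    or None if A turns out to be singular."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        pivot = col + int(np.argmax(np.abs(M[col:, col])))  # partial pivoting
        if abs(M[pivot, col]) < 1e-12:
            return None                        # no usable pivot: singular
        M[[col, pivot]] = M[[pivot, col]]      # swap rows
        M[col] /= M[col, col]                  # scale pivot row to 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]     # clear the rest of the column
    return M[:, n:]                            # right half is now A^{-1}

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
Ainv = inverse_via_augmented(A)
```

A singular input (e.g. a matrix with proportional rows) fails to produce a pivot in some column, which is exactly the hand-computation signal that $A^{-1}$ does not exist.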
Okay, I just figured this out. I believe this is what you do: we know the solution is spherically symmetric by the hint and because the boundary conditions show that the solution has no $\theta$ or $\phi$ dependence. So converting to spherical coordinates, the problem looks like: $$u_{tt}-u_{rr}-\frac{2}{r} u_r =0$$ We separate variables: $u(r,t)= T(t) R(r)$. So $$\frac{T_{tt}}{T}-\frac{R_{rr}}{R}-\frac{2}{r}\frac{R_r}{R}=0 \implies \\ \frac{T_{tt}}{T}=-\lambda \\ \frac{R_{rr}}{R}+\frac{2}{r}\frac{R_r}{R}+\lambda = 0$$ where $\lambda \in \mathbb{R}$. We know $$T(t)=A\sin(\sqrt\lambda t) +B\cos(\sqrt\lambda t)$$ The initial condition $$u_t(r,0) = 0 \implies T_t(0) = 0 \implies A = 0 \implies T(t)=B\cos(\sqrt\lambda t)$$ We also have $$\frac{R_{rr}}{R}+\frac{2}{r}\frac{R_r}{R}+\lambda = 0$$ By the hint, we write this as $$(rR)'' = -\lambda (rR)$$ Making a change of variables with $$L(r) =rR(r)$$ we get $$L_{rr} = -\lambda L \implies L(r)=E\sin(\sqrt\lambda r)+F\cos(\sqrt \lambda r) \implies R(r) = \frac{E\sin(\sqrt\lambda r)}{r}+\frac{F\cos(\sqrt \lambda r)}{r}$$ We rule out the $\cos$ term because it blows up like $1/r$ as $r$ approaches $0$, which is bad. So $$R(r) = \frac{E\sin(\sqrt\lambda r)}{r}$$ Now we combine the $R(r)$ and $T(t)$ solutions. We write $n = \sqrt\lambda$ for the sake of notation.
$$u(r,t)=\sum_{n= 1} ^{\infty} \frac{C_n \cos(nt) \sin(nr)}{r}$$ By the initial condition (equation 2) in the problem, we have $$\begin{align} &u(r,0)=f(r)=\left\{\begin{aligned} &1\quad &&r<1,\\ &0 &&r\ge 1,\end{aligned}\right.\qquad \end{align}$$ So we have $$u(r,0)=\sum_{n= 1} ^{\infty} C_n \sin(nr) = r f(r)$$ This is a Fourier sine series with $L=\pi$, so $$C_n = \frac{2}{\pi} \int_0^{\pi} \sin(nr)\, r f(r)\, dr = \frac{2}{\pi} \int_0^1 \sin(nr)\, r\, dr $$ We use integration by parts here and get $$C_n = \frac{2}{\pi} \frac{\sin(n)-n\cos(n)}{n^2}$$ So $$u(r,t)=\sum_{n= 1} ^{\infty} \frac{\frac{2}{\pi} \frac{\sin(n)-n\cos(n)}{n^2} \cos(nt) \sin(nr)}{r}$$
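The integration by parts can be double-checked numerically; a small sketch comparing the closed form for $C_n$ with a direct quadrature of $\frac{2}{\pi}\int_0^1 r\sin(nr)\,dr$ (the helper names are my own):

```python
import math

def c_n_closed(n):
    # result of the integration by parts
    return (2 / math.pi) * (math.sin(n) - n * math.cos(n)) / n**2

def c_n_quad(n, m=20000):
    # midpoint-rule quadrature of (2/pi) * integral_0^1 r sin(n r) dr
    h = 1.0 / m
    s = sum((i + 0.5) * h * math.sin(n * (i + 0.5) * h) for i in range(m))
    return (2 / math.pi) * s * h
```

The two agree to many digits for small $n$, confirming the coefficient formula.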
I'm trying to apply Fourier analysis to a specific problem I have. I essentially have an integral like the following $$ \int_{\Omega} f(t) g(t) dt $$ and I'm trying to assume that $g$ is a narrow-band signal (namely, its Fourier transform is supported on a compact set). I want to prove that if $g$ is really narrow band then $$ \int_{\Omega} f(t) g(t) dt \approx \left( \int_{\Omega} f(t) dt \right) \left( \int_{\Omega} g(t) dt \right) $$ To prove this, I made this assumption $$ g(t) = g_a(t) = \frac{1}{a} \int_{-\infty}^{+\infty} \hat{g}(\omega)\operatorname{rect} \left( \frac{\omega}{a} \right) e^{-j2\pi \omega t} d \omega $$ where $a$ is a positive real parameter. When $a \to 0$ this integral turns out to be $$ \frac{1}{a} \int_{-\infty}^{+\infty} \hat{g}(\omega)\operatorname{rect} \left( \frac{\omega}{a} \right) e^{-j2\pi \omega t} d \omega \to \hat{g}(0) $$ which means $$ g(t) = g_a(t) \to \hat{g}(0) = \int_{-\infty}^{+\infty} g(t) dt $$ and therefore $$ \int_{\Omega} f(t) g(t) dt \approx \int_{\Omega} f(t) \hat{g}(0) dt = \hat{g}(0) \int_{\Omega} f(t) dt = \int_{-\infty}^{+\infty} g(t) dt \int_{\Omega} f(t) dt $$ which is not exactly what I want, but it's close enough. The final question is: is the model $$ g_a(t) = \frac{1}{a} \int_{-\infty}^{+\infty} \hat{g}(\omega)\operatorname{rect} \left( \frac{\omega}{a} \right) e^{-j2\pi \omega t} d \omega $$ the correct model for a narrow-band signal? I'm not entirely sure because of the factor $1/a$ I've introduced. But the idea why it makes sense is the following. If the signal has a narrow band, its variation in time is really slow; the narrower the band, the more the signal tends to be constant, and if it were constant then I'd be able to factor the function $g(t)$ out of the integral. Clarification: probably in this discussion it is more correct to say "low frequency".
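The limiting behaviour can be tested numerically for a concrete choice of spectrum. A sketch assuming $\hat g(\omega) = e^{-\omega^2}$ (so $\hat g(0) = 1$) and $\Omega = [-5, 5]$; the helper names and the test function $f$ are my own choices:

```python
import numpy as np

def trap(y, x):
    # plain trapezoidal rule (avoids version-specific numpy helpers)
    return float(((y[1:] + y[:-1]) * np.diff(x)).sum() / 2)

def g_a(t, a, n=2001):
    # g_a(t) = (1/a) * integral of ghat(w) rect(w/a) e^{-j 2 pi w t} dw
    # with the assumed spectrum ghat(w) = exp(-w^2); the odd (sine) part
    # of the integrand vanishes, so only the cosine part is kept
    w = np.linspace(-a / 2, a / 2, n)
    ghat = np.exp(-w ** 2)
    kernel = ghat * np.cos(2 * np.pi * np.outer(t, w))
    return np.array([trap(row, w) for row in kernel]) / a

t = np.linspace(-5, 5, 2001)
f = np.exp(-t ** 2) * np.cos(3 * t)   # an arbitrary f on Omega = [-5, 5]

errors = {}
for a in (1.0, 0.01):
    lhs = trap(f * g_a(t, a), t)
    rhs = 1.0 * trap(f, t)            # ghat(0) = 1 times the integral of f
    errors[a] = abs(lhs - rhs)
```

As $a$ shrinks, $g_a$ flattens toward $\hat g(0)$ over $\Omega$ and the factorized approximation becomes accurate, matching the argument above.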
Despite the fact that neural networks have made major advances in many tasks, such as image recognition, video/image generation and the like, I feel they inevitably suffer from a deeper problem. Their intrinsic complexity enables great progress, but also makes them very hard for us humans to completely understand. A neural network with two hidden dense layers, 10 neurons each, consists of: \(20\) polynomials \(10\) inputs for each polynomial \(20 \cdot 10 = 200\) coefficients (plus biases) No one can safely predict, without great effort, what this (small) network does. It works just because backpropagation forces it to. Evolutionary approach Evolutionary algorithms (EAs) are nearly as old as artificial neural networks. The principle they are based on is pretty straightforward: An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions [...]. -- wikipedia.com In contrast to neural networks, their mode of operation can easily be understood and debugged. Their outcome can be as simple as a polynomial (keep up reading :) ). And I would argue that they may be more flexible, because mutations can adapt the solution to changing circumstances. Terminology Individual and population An individual is a solution candidate for a given problem. Imagine you want to predict the mean temperature for the first of January in New York City for any given hour. The function$$f({hour}) = \frac{hour}{12} * 10$$ may be a possible solution candidate - obviously a bad one. The population is the set of all solution candidates in a specific generation. Fitness function The fitness function takes an individual and returns a rational number indicating how well or poorly the solution candidate performs on the problem.
Selection That is the process of selecting a number of individuals from the population and allowing them to evolve. It imitates the evolutionary pressure and uses the individual's fitness value as the basis of its "decision". Mutation and reproduction Solution candidates selected in step 3 are now able to mutate and/or reproduce. Mutations are very problem specific - in the example above, one possible mutation could be to increment or decrement the constant 12. Reproduction can happen with two or more individuals forming one or multiple offspring individuals. Combining...$$f({hour}) = \frac{hour}{12} * 10$$ ...with...$$f({hour}) = \frac{hour}{10} * 5 + 16$$ ...might result in...$$f({hour}) = \frac{hour}{12} * 5$$$$f({hour}) = \frac{hour}{11} * 7.5$$$$f({hour}) = \frac{hour}{10} * 10 + 16$$ Process Describing the process of an evolutionary algorithm is fairly simple with the above information. (1) First of all, a random population is generated. The individuals are evaluated (2) using the fitness function. With the known fitness values, a selection (3) of the population is drawn, so that the new population consists of adaptations of the fittest individuals of the previous population. The selection is then mixed up, mutated and allowed to generate offspring (4), resulting in the new population (1). This process is repeated until convergence or until time runs out. Example problem To stress the simplicity of an evolutionary optimization, I want to fit a curve from data with random noise. The data is generated according to the formula \(f(x) = 1.5x + R_{x}\), where \(R_{x}\) denotes a random, rational value drawn from a normal distribution with \(\mu = 0\) and \(\sigma = 15\).

import numpy as np

x = np.arange(0, 100)
y = 1.5 * x + np.random.normal(scale=15, size=100)

Individual First of all we need a prototype for an individual.
For the sake of convenience, I will start with an individual representing a linear function in the form \(f(x) = a + b \cdot x\), where \(a\) and \(b\) represent properties of the solution candidate, which can be mutated in the evolution process.

import random

class LinearFunction(object):
    def __init__(self, a=0, b=0):
        self.a = a
        self.b = b

    def __call__(self, x):
        return self.a + self.b * x

def mutate(population):
    for linear_function in population:
        if random.random() < .5:
            linear_function.a += random.random() - .5
        else:
            linear_function.b += random.random() - .5

The mutate method iterates over the whole population, slightly changing a or b by a random value between -0.5 and +0.5. Fitness function As a measurement of fitness, I'll use the simple RMSE metric. Since a higher RMSE value means a higher error whereas a higher fitness value means a fitter individual, I'll negate the RMSE value to obtain a proper fitness value.

def negative_rmse(linear_function):
    global x, y
    prediction = linear_function(x)
    error = prediction - y
    se = error ** 2
    # ignore infinite values
    se = np.ma.masked_invalid(se)
    rmse = np.sqrt(se.mean())  # root of the mean of the squared errors
    if np.isnan(rmse):
        rmse = np.inf
    return -rmse

Result After running the simulation for 20 generations with a population size of 100 individuals per generation, I got to the result, which is a pretty fair guess of the original relationship \(f(x) = 1.5x\). The left plot shows how the best individual of each generation performs on the data. The right plot is the mean fitness of the population in each generation. Also notice the convergence of the algorithm after five generations. Example problem (sequel) We've learned that a simple linear function is way too easy for our simulated evolution. So how will it perform on non-linear data? The new noised data will be expressed by \(f(x) = 0.7x + 1.5x^{2} + R_x\), where \(R_x\) denotes a random, rational value drawn from a normal distribution with \(\mu = 0\) and \(\sigma = 5\).
x = np.linspace(0, 7)
y = .7 * x + 1.5 * x**2 + np.random.normal(scale=5, size=len(x))

Changes In order to face the new problem, some changes have to be made: adapt the individual to represent a polynomial; modify the mutation to work with the new individual representation. The negative RMSE should still be suitable as fitness function. Adapt Individual The new individual should consist of multiple terms \(t \in T\), each in the form of \(t(x) = {f} * x^{p}\), where f and p are properties of the term. The final prediction of the individual is the sum of all its terms: \(P(x) = \sum_{t \in T} t(x)\). The split into multiple terms allows the mutation step to add or remove terms, resulting in a polynomial of undefined order. The polynomial can mutate according to the input data without the need to define the desired form in advance.

class Term(object):
    def __init__(self, factor, power):
        self.factor = factor
        self.power = power

class Polynomial(object):
    def __init__(self, terms):
        self.terms = terms

    def __call__(self, x):
        return sum(
            term.factor * (x ** term.power)
            for term in self.terms
        )

    def cleanup(self):
        powers = [term.power for term in self.terms]
        max_power = max(powers)
        min_power = min(powers)
        new_terms = []
        for power in range(min_power, max_power + 1):
            factor = sum([term.factor for term in self.terms if term.power == power])
            new_term = Term(factor, power)
            new_terms.append(new_term)
        self.terms = new_terms

    def __repr__(self):
        return " + ".join([
            "{:.2f} * x^{:d}".format(term.factor, term.power)
            for term in self.terms
        ])

The cleanup method is just a helper to reduce a polynomial to one term per order. Modified mutations Instead of only modifying the parameters of a single term, we introduce two additional mutations adding or removing a term to/from an individual.
from copy import copy

def mutate_term(population):
    """Change `factor` or `power` of a term"""
    for polynomial in population:
        term = random.choice(polynomial.terms)
        if random.random() < .5:
            term.factor += random.random() - .5
        else:
            term.power += random.randint(-1, 1)

def add_term(population):
    """Copy random term"""
    for polynomial in population:
        term = random.choice(polynomial.terms)
        term = copy(term)
        polynomial.terms.append(term)

def remove_term(population):
    """Remove random term"""
    for polynomial in population:
        if len(polynomial.terms) <= 1:
            continue
        term = random.choice(polynomial.terms)
        polynomial.terms.remove(term)

Result Due to the increased complexity, I ran the simulation for 50 generations and kept the population size at 100. The resulting polynomial was: That's not as good a guess as the linear function from the first example. But a look at the prediction plot tells us the guess is still a good fit. Conclusion Evolutionary algorithms can definitely be used for curve fitting on linear and non-linear data. The whole process of optimization and mutation, as well as the result, can be easily understood, which is a huge benefit as opposed to using neural networks. Since we can define individuals and mutations on our own, we are able to adapt the algorithm very well to a new problem, using our own ideas. I can imagine great progress being made on evolutionary algorithms in the future, maybe even tackling domains that today are reserved for deep neural networks and friends, since evolutionary algorithms are so appealingly simple.
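The post describes the generate–evaluate–select–mutate cycle but never shows the loop itself in code. A minimal self-contained sketch for the first (linear) example; the population size, survivor count, and elitist refill strategy are my own choices, not necessarily what the author used:

```python
import random
import numpy as np

random.seed(0)
np.random.seed(0)

x = np.arange(0, 100)
y = 1.5 * x + np.random.normal(scale=15, size=100)

class LinearFunction:
    def __init__(self, a=0.0, b=0.0):
        self.a, self.b = a, b
    def __call__(self, x):
        return self.a + self.b * x

def rmse(f):
    err = f(x) - y
    return np.sqrt((err ** 2).mean())

def mutated(parent):
    child = LinearFunction(parent.a, parent.b)
    if random.random() < .5:
        child.a += random.random() - .5
    else:
        child.b += random.random() - .5
    return child

population = [LinearFunction() for _ in range(80)]
for generation in range(60):
    population.sort(key=rmse)           # fittest (lowest RMSE) first
    survivors = population[:20]         # selection
    offspring = [mutated(random.choice(survivors)) for _ in range(60)]
    population = survivors + offspring  # keep the parents too (elitism)

best = min(population, key=rmse)
```

After a few dozen generations the best individual's slope settles near the true 1.5, with a residual RMSE around the noise level.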
I have the following problem: given a directed graph $G=(V,E,d)$, where $d:V\to\mathcal{I}(\mathbb{Q}_0^+\cup\{+\infty\})$ (here $\mathbb{Q}_0^+$ denotes the set of non-negative rationals and $\mathcal{I}(\mathbb{Q}_0^+\cup\{+\infty\})$ the set of intervals, bounded or unbounded above, with non-negative rational bounds) is a function associating with each vertex $v\in V$ a "minimum/maximum duration" $d(v)=[a,b]$ for some $a\in \mathbb{Q}_0^+$, $b\in \mathbb{Q}_0^+\cup\{+\infty\}$ and $a\leq b$, two vertices $s,t\in V$, and an integer $h$ encoded in binary, we have to decide whether or not there exist a path in $G$, possibly with repeated vertices and edges, $v_0 \cdot v_1 \cdots v_{n-1}\cdot v_n$, with $v_0=s$ and $v_n=t$, and a list of values $d_0,\ldots,d_n\in\mathbb{Q}_0^+$, such that $\sum_{i=0}^n d_i = h$ and, for all $i=0,\ldots, n$, $d_i\in d(v_i)$. Intuitively, we have to find a path in $G$, possibly visiting the same vertices/edges more than once, where we remain in each vertex for a non-negative rational amount of time allowed by the minimum/maximum duration function, such that the overall time of the path equals $h$. This can be solved easily in PSPACE. We conjecture it to be in NP (we already know it is NP-hard!). This is not trivial to prove, as we may have $h\in\Theta(2^n)$, for instance. Thus the required path may have length exponential in both $|V|$ and the binary encoding of $h$. Have you ever seen a similar problem? Can you come up with an NP algorithm? Or do you know some connected literature?
Given coprime $a, b$, what is $$ \min_{x, y > 0} |a^x - b^y| $$ Here $x, y$ are integers. Obviously taking $x = y = 0$ gives an uninteresting answer; in general how close can these powers get? Also, how do we quickly compute the minimizing $x, y$? You can get a conjectural lower bound for $|a^x-b^y|$ using the $ABC$-conjecture. I'll do the case $a^x > b^y$ for simplicity. Taking $A=a^x$, $B=-b^y$, and $C=a^x-b^y$, we get for every $\epsilon >0$ that there is a $K=K_\epsilon>0$ so that $$ a^x \le K\prod_{p\mid ab(a^x-b^y)} p^{1+\epsilon} \le K(ab(a^x-b^y))^{1+\epsilon}. $$ Replacing $K$ with a $K'=K'_\epsilon$, this gives $$ a^x - b^y \ge K' \left(\frac{a^{x-1}}{b}\right)^{1-\epsilon}. $$ This shows that if $x$ and $y$ are large, you can't make $a^x-b^y$ very much smaller than $a^x$. (You can also prove an effective lower bound using linear forms in logs, but it will be much weaker than this.) Gerhard says what needs to be said: Try $y=1$ and in very rare case $y=2$ or maybe $3.$ Here is an exceptional example: $1138^2-109^3=15.$ You didn't ask for exceptional cases, just what to do given $a$ and $b.$ For that, find rational numbers $\frac{x}{y}$ which approximate $\frac{\ln{b}}{\ln{a}}$ well. The first few convergents to $\frac{\ln{1138}}{\ln{109 }}$ are $2, \frac{3}{2}, \frac{607547}{405031}.$ The huge jump suggests that it is worth checking $3,2.$ I don't see any reason to assume that $a$ and $b$ are relatively prime. 
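The recipe in the last answer — check exponent pairs coming from convergents of $\frac{\ln b}{\ln a}$ — is easy to mechanize. A sketch (the helper names and the search bound are my own choices):

```python
from fractions import Fraction
from math import log

def convergents(alpha, n):
    """First n continued-fraction convergents of a real alpha."""
    quotients, x = [], alpha
    for _ in range(n):
        k = int(x)
        quotients.append(k)
        if x - k < 1e-12:
            break
        x = 1 / (x - k)
    result = []
    for i in range(1, len(quotients) + 1):
        f = Fraction(quotients[i - 1])
        for k in reversed(quotients[:i - 1]):
            f = k + 1 / f
        result.append(f)
    return result

def near_powers(a, b, n=8, bound=10**12):
    """Search |a^x - b^y| over exponent pairs x/y drawn from the
    convergents of log(b)/log(a): a^x close to b^y forces
    x/y close to log(b)/log(a). Returns the best (difference, x, y)
    among candidates with a^x below `bound`."""
    best = None
    for f in convergents(log(b) / log(a), n):
        x, y = f.numerator, f.denominator
        if x == 0 or x * log(a) > log(bound):
            continue
        d = abs(a**x - b**y)
        if best is None or d < best[0]:
            best = (d, x, y)
    return best
```

For instance `near_powers(2, 181)` recovers the near-coincidence $2^{15} - 181^2 = 7$ quoted in the table further down.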
LATER Inspired by @Gerry let me observe this: Let $a=2$ and $b=\lfloor 2^k \sqrt{2}\rceil.$ Then $b^1-2^k \approx 2^k(\sqrt{2}-1)$ while $|b^2-2^{2k+1}| \lt 2^k \sqrt{2}.$ This suggests to me that with probability about $1-\frac1{\sqrt2} \approx 0.3$ it will happen that $|b^2-2^{2k+1}| \lt b-2^k.$ This does happen $26$ times up to $2k+1=201.$ The first and last few are $2k+1=15, 17, 19, 31, 33, 59, \cdots 147, 149, 161, 187, 193.$ I can see why this might be even more successful for odd powers $m^{2k+1}$ of larger integers. If one looks at the sequence of perfect powers ($1,4,8,9,16,25,27,32,36,\ldots$, found at OEIS at https://oeis.org/A001597 ), one sees a lot of squares. If one wants to tackle the posted question by looking at this sequence, one can save time looking at odd powers. Note that all pairs listed by Gottfried Helms have one even exponent and one odd exponent. Indeed, since $(a^2 - b^2)=(a-b)(a+b)$, interesting answers to the question will involve an odd exponent, usually coprime to the other exponent. More specifically, interesting answers will be odd powers near a square, and two distinct odd powers which lie between two squares. Thus, an algorithm which looks only at odd powers which are not squares, and just the squares adjacent to them, saves time by looking at just the interesting cases. Further, to avoid answers producing zero (like 225 and 3375), we look only at pairs which give distinct powers. I generated the first two million cubes, as well as the smattering of higher powers (excepting two or three really large powers of 2 or 3 with the exponent prime) occurring between these cubes. I got less than 100 cases where these powers were within 100 of a square. I counted 14 cases where two odd powers had no squares between them, and 32 where there was only one square. The largest number less than $10^{18}$ that was a perfect power and was within 100 of a square was $8158^3$, which is 24 less than a square.
The other slightly over 4 million differences I computed were over 1000, or were small known examples of near powers less than 10000, e.g. 2187, 2197, 2209. This suggests to me to look for powers that are within $s^{1/3}$ of a square $s$. Gerhard "Is Feeling Rather Powerfull Presently" Paseman, 2017.10.23.

As an extension of Gerry Myerson's comment: a short list of small differences, using $a=2..199$ and $b=a+1..3999$, checking the first eight convergents of the continued fraction of $\log(a)/\log(b)$ for differences $d\le 100$:

   a^m  -    b^n  =    d    cont.frac
 --------------------------------------------------
  15^4  -   37^3  =  -28    .0.1.2.1.1631.1.6
   6^7  -   23^4  =   95    .0.1.1.2.1.1318.1.30

After that, only differences where one exponent is 2 occur:

   2^15 -  181^2  =    7    .0.7.2.1621.1.2
   2^17 -  362^2  =   28    .0.8.2.1621.1.2
   3^9  -  140^2  =   83    .0.4.2.129.2.24
   3^11 -  421^2  =  -94    .0.5.1.1.1034.1.27
   3^15 - 3788^2  =  -37    .0.7.1.1.213025.3.1
   5^5  -   56^2  =  -11    .0.2.1.1.228.1.1
   6^5  -   88^2  =   32    .0.2.2.216.1.3
   6^7  -  529^2  =   95    .0.3.2.2638.1.14
   7^5  -  130^2  =  -93    .0.2.1.1.175.1.4
   8^5  -  181^2  =    7    .0.2.2.4866.16.4
  23^5  - 2537^2  =  -26    .0.2.1.1.388098.1.2
  27^5  - 3788^2  =  -37    .0.2.1.1.639076.1.3

From here on, only differences with exponents $3$ and $2$ remain:

  13^3  -   47^2  =  -12    .0.1.1.1.234.1.15
  15^3  -   58^2  =   11    .0.1.2.414.3.1
  17^3  -   70^2  =   13    .0.1.2.534.6.3
  18^3  -   76^2  =   56    .0.1.2.149.3.1
  19^3  -   83^2  =  -30    .0.1.1.1.336.1.5
  20^3  -   89^2  =   79    .0.1.2.150.2.3
  20^3  -   90^2  = -100    .0.1.1.1.120.13.23
  21^3  -   96^2  =   45    .0.1.2.312.50.1
  22^3  -  103^2  =   39    .0.1.2.420.1.2
  23^3  -  110^2  =   67    .0.1.2.283.2.2
  24^3  -  118^2  = -100    .0.1.1.1.219.1.24
  27^3  -  140^2  =   83    .0.1.2.389.2.7
  28^3  -  148^2  =   48    .0.1.2.760.1.1
  29^3  -  156^2  =   53    .0.1.2.773.2.2
  32^3  -  181^2  =    7    .0.1.2.8110.2.3
  34^3  -  198^2  =  100    .0.1.2.691.1.1
  35^3  -  207^2  =   26    .0.1.2.2930.15.1
  37^3  -  225^2  =   28    .0.1.2.3264.1.2
  40^3  -  253^2  =   -9    .0.1.1.1.13116.2.3
  43^3  -  282^2  =  -17    .0.1.1.1.8795.1.3
  44^3  -  292^2  =  -80    .0.1.1.1.2015.6.1
  45^3  -  302^2  =  -79    .0.1.1.1.2195.1.9
  46^3  -  312^2  =   -8    .0.1.1.1.23291.1.341
  52^3  -  375^2  =  -17    .0.1.1.1.16340.1.35
  55^3  -  408^2  =  -89    .0.1.1.1.3746.8.3
  56^3  -  419^2  =   55    .0.1.2.6425.239.2
  63^3  -  500^2  =   47    .0.1.2.11019.1.1
  65^3  -  524^2  =   49    .0.1.2.11696.3.12
  72^3  -  611^2  =  -73    .0.1.1.1.10933.1.5
  99^3  -  985^2  =   74    .0.1.2.30124.3.2
 101^3  - 1015^2  =   76    .0.1.2.31280.1.1631
 109^3  - 1138^2  =  -15    .0.1.1.1.202515.17.4
 136^3  - 1586^2  =   60    .0.1.2.102977.1.696
 152^3  - 1874^2  =  -68    .0.1.1.1.129727.1.97
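The "odd powers near squares" search described above can be sketched in a few lines of Python (cubes only, with the cutoff of 100 mentioned in the answer; the helper name is mine):

```python
import math

# Brute-force sketch: cubes within max_gap of a perfect square.
def near_square_cubes(limit, max_gap=100):
    hits = []
    for a in range(2, limit):
        c = a ** 3
        r = math.isqrt(c)
        gap = min(c - r * r, (r + 1) ** 2 - c)   # distance to nearest square
        if 0 < gap <= max_gap:                   # gap > 0 skips exact squares
            hits.append((a, gap))
    return hits

# e.g. 13^3 = 2197 is 12 below 47^2 = 2209, and 8158^3 is 24 below a square
```

The `gap > 0` filter implements the "distinct powers" requirement, since a cube at distance zero from a square is itself a perfect square.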
A time translation invariant system without energy conservation Noether's theorem relating continuous Lie symmetries to conserved quantities famously implies that given a time translation symmetry\begin{equation} \phi(t, x^i) \to \phi'(t, x^i) = \phi(t + T, x^i) \end{equation} with infinitesimal action\begin{equation} \delta \phi = \varepsilon \dot{\phi}, \end{equation} the Lagrangian density changes by\begin{eqnarray} \delta \mathcal{L} &=& \frac{\partial \mathcal{L}}{\partial \phi} \varepsilon \dot{\phi} + \frac{\partial \mathcal{L}}{\partial \dot{\phi}} \varepsilon \ddot{\phi}\\ &=& \varepsilon \frac{d}{dt} \mathcal{L} \end{eqnarray} ... The hidden assumption here is that the field has the proper values and variation on the boundary. If we forgo this condition, energy is not conserved. There are two ways we could break this condition. The first one is to have an extra boundary to the manifold. This will usually correspond to the condition of a naked singularity in our spacetime. The simplest possible example is the static spacetime built from the slice $\mathbb{R}^3 \setminus \{ 0 \}$, Minkowski space minus the origin. A good way to convince yourself is to first consider Minkowski space with the charge distribution\begin{equation} j^\mu(x) = (\theta(t) \delta(x^i), 0) \end{equation} This is not a very good charge distribution (it has a discontinuous total charge), but this won't matter much. Using the Lorenz gauge, the Maxwell equations are then\begin{eqnarray} \Box A^0 &=& \theta(t) \delta(x^i)\\ \Box A^i &=& 0 \end{eqnarray} The exact set of solutions doesn't matter too much here; we are only after a particular one.
The simplest will be to assume $A_i = 0$, and using the retarded d'Alembert Green function,\begin{eqnarray} A^0 &=& \int d^4x\, \theta(t) \delta(x^i) \frac{\delta((t - t_0) + |x^i - x^i_0|)}{4\pi |x^i - x^i_0|}\\ &=& \int dt\, \theta(t) \frac{\delta(t - (t_0 - |x^i_0|))}{4\pi |x^i_0|}\\ &=& \theta(t_0 - r_0) \frac{1}{4\pi r_0} \end{eqnarray} (writing $r$ for $r_0$ from here on). So, for $r < t$, the potential is the Coulomb potential, and it is simply $0$ otherwise. The electric field $E^i = - \nabla^i A^0$ is then\begin{eqnarray} E^i &=& -\nabla^i \left(\theta(t - r) \frac{1}{4\pi r}\right)\\ &=& -\frac{1}{4\pi r} \nabla^i \theta(t - r) - \theta(t - r) \nabla^i \frac{1}{4\pi r}\\ &=& \frac{x^i}{4\pi r^2} \left(\delta(t - r) + \theta(t - r)\, r^{-1}\right) \end{eqnarray} The energy of our electric field is then, schematically,\begin{eqnarray} H &=& \int E^i E_i\, d^3x\\ &=& \int \frac{1}{16\pi^2 r^2} \left(\delta(t - r) + \theta(t - r)\, r^{-1}\right)^2 d^3x\\ &=& \frac{1}{4\pi} \int \left(\delta(t - r) + \theta(t - r)\, r^{-1}\right)^2 dr \end{eqnarray} which, beyond divergent self-energy pieces, contains a contribution that depends on $t$: the field energy is not constant in time. Posted on 2019-09-18 10:17:41
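Only the smooth part of the gradient computation above can be checked symbolically (the $\delta$-shell term needs distributional care); a SymPy sketch:

```python
import sympy as sp

# Check of the smooth part of the field computation: away from the
# r = t shell, E^i = -grad^i (1/(4 pi r)) is the static Coulomb field.
x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
A0 = 1 / (4 * sp.pi * r)

E = [-sp.diff(A0, v) for v in (x, y, z)]
# each component simplifies to v / (4 pi r^3)
assert all(sp.simplify(E_i - v / (4 * sp.pi * r**3)) == 0
           for E_i, v in zip(E, (x, y, z)))
```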
Averaged and Integral Characteristics These characteristics can be monitored in the course of the design process. It is possible to monitor minimum and maximum values as well. Example. As an example, a 6-layer metal-dielectric coating was designed. The corresponding design bar is shown at the bottom of the Evaluation window (Fig. 5). The light reflectance from the coating's front and back side should appear as orange and violet. At the same time, the solar transmittance of the coating is to be as large as possible. The color target and integral target are shown in Fig. 2 and Fig. 3, respectively. OptiLayer allows you to specify combined targets and optimize the design with respect to many criteria simultaneously. Integral weights can be specified separately. The total merit function MF takes the form: \[ MF^2=0.5\cdot MF_{color}^2+MF_{int}^2 \] Along with different components of the merit function, OptiLayer allows you to display and monitor integral values. These values are calculated based on the spectral weight function \(W(\lambda)\) that can be chosen from the pop-up list. All required spectral weights are to be specified in advance through the integral target option (see Integral Target). As in Integral Target, there are two check boxes: \[ F=\frac{\int\limits_{\lambda_d}^{\lambda_u} W(\lambda)D(\lambda) S(\lambda) C(\lambda)d\lambda}{\int\limits_{\lambda_d}^{\lambda_u} W(\lambda) S(\lambda) D(\lambda)d\lambda},\] where \(\lambda_d\) and \(\lambda_u\) are the boundaries of the wavelength interval of interest, \(W(\lambda)\) is a given weight function, \(C(\lambda)\) is a spectral characteristic of the coating, and \(D(\lambda)\) and \(S(\lambda)\) are the spectral distributions of the detector and light source, respectively. You can monitor the current color coordinates, and not necessarily the ones specified in the color target. You can specify reflectance, back-side reflectance, or transmittance, as well as polarization state and incidence angle. Integral calculations are performed in the stack mode as well.
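The integral characteristic $F$ defined above can be sketched numerically; all spectral curves below are made-up illustrative data, not OptiLayer output:

```python
import numpy as np

# W = weight, S = source, D = detector, C = coating characteristic.
lam = np.linspace(400e-9, 700e-9, 301)       # wavelength grid (m)
W = np.ones_like(lam)                        # flat weight function
S = np.exp(-((lam - 550e-9) / 80e-9) ** 2)   # toy source spectrum
D = np.ones_like(lam)                        # ideal detector
C = np.full_like(lam, 0.9)                   # flat 90% transmittance

# Uniform grid, so the d(lambda) factors cancel in the ratio.
F = (W * D * S * C).sum() / (W * S * D).sum()
# For a flat C, the weighted average F equals that constant: 0.9
```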
It is also possible to display and calculate averaged spectral characteristics, as well as their maximum/minimum over a 2D region of [wavelengths x angles of incidence]. It is possible to adjust font type, size, font color and background color using the corresponding toolbar controls. It is also possible to change the number of digits to display and to select scientific format when necessary (use the [E±] button for this purpose). Of course, you can turn back to the Built-in Style. You may also be interested in the following articles:
I read the definition of work as $$W ~=~ \vec{F} \cdot \vec{d}$$ $$\text{ Work = (Force) $\cdot$ (Distance)}.$$ If a book is there on the table, no work is done as no distance is covered. If I hold up a book in my hand and my arm is stretched, if no work is being done, where is my energy going? While you do spend some body energy to keep the book lifted, it's important to differentiate it from physical effort. They are connected but are not the same. Physical effort depends not only on how much energy is spent, but also on how energy is spent. Holding a book in a stretched arm requires a lot of physical effort, but it doesn't take that much energy. In the ideal case, if you manage to hold your arm perfectly steady, and your muscle cells managed to stay contracted without requiring energy input, there wouldn't be any energy spent at all because there wouldn't be any distance moved. In real scenarios, however, you do spend (chemical) energy stored within your body, but where is it spent? It is spent on a cellular level. Muscles are made of filaments which can slide relative to one another; these filaments are connected by molecules called myosin, which use up energy to move along the filaments but detach at time intervals to let them slide. When you keep your arm in position, myosins hold the filaments in position, but when one of them detaches, other myosins have to make up for the slight relaxation locally. Chemical energy stored within your body is released by the cell as both work and heat.* Both in the ideal and the real scenarios we are talking about the physical definition of energy. In your consideration, you ignore the movement of muscle cells, so you're considering the ideal case. A careful analysis of the real case leads to the conclusion that work is done and heat is released, even though the arm itself isn't moving.
* Ultimately, the work done by the cells is actually done on other cells, which eventually dissipates into heat due to friction and non-elasticity. So all the energy you spend is invested in keeping the muscle tension, and is eventually dissipated as heat. This is about how your muscles work -- they're an ensemble of small elements that, triggered by a signal from nerves, use chemical energy to go from a longer, lower-energy state to a shorter, higher-energy one. Yet this obviously is not permanent and there is a spontaneous relaxation, which must be compensated by another trigger. This way there are numerous stretches and releases that in sum give small oscillations that create macroscopic work on the weight. Perhaps an analogy is in order. Let's hold up the book by using an electromagnet (say we put a piece of steel under it). If the coils were made of superconducting material, it would take no energy input to maintain the position/field strength. But if we use ordinary wire, ohmic losses within the coil must be made up for by externally supplied electrical energy. The reason is that you need to spend energy to keep a muscle stretched. The first thing you need to know is that the work $W=F \Delta x$ is the energy transfer between objects. Hence, there is no work done on the book when it is put on the table because there is no movement. When your arm muscle is stretched, however, it consumes energy continuously to keep this state, so you feel tired very fast. This energy comes from the chemical energy in your body, and most of it is converted into heat and lost to the surroundings. In this situation, no energy is transferred to the book, so no work is done. You can feel the different energy consumption when your arm is stretched at different angles. A particular case is when you put the book on your leg while you sit on a chair, so your muscle is relaxed and the energy spent is less.
There is also a special type of muscle, smooth muscle, which requires very little energy to keep its state, so that it can always keep itself stretched and you won't get tired: Tonic smooth muscle contracts and relaxes slowly and exhibits force maintenance, such as vascular smooth muscle. Force maintenance is the maintaining of a contraction for a prolonged time with little energy utilization. When contracted, the sarcomeres, the structures that actually do the work in a muscle, take turns doing the work. Only a third of them are engaged at any given moment. This is because the sarcomere pumps blood as it contracts and relaxes, enabling it to get the energy it needs to do its work for longer periods. The temporary, superhuman strength some people experience may be some sort of override of this normal level of engagement. This system doesn't have a different mechanism for holding a position, so the same thing goes on when trying to hold an object steady. But if the muscle is contracted for a very long time and the energy in the blood being pumped becomes insufficient, sarcomeres will actually get stuck in their contracted position. This state doesn't require energy, and the sarcomere will remain contracted until the load stops and normal circulation is restored. I believe this is a survival mechanism that enables an animal to hang on, even when the load would otherwise be overwhelming. It also can cause muscle stiffness when circulation through a muscle is impaired, a very common condition as people age. The big difference between holding up a book in your hand (by holding it in the palm) and holding up a book by laying it on a table is that the first equilibrium position is a dynamical one, while the book on the table is in static equilibrium. I'll explain it qualitatively. You can compare the situation in which you hold up a book with the situation in which a book is held up by constantly bombarding it from below with particles, say marbles.
In the extreme case of bombarding the book with only one marble at a time, the book falls a little, the marble hits it from below and thereby sends it back up again. The marble loses energy in the process, which is given to the book (assuming an elastic collision). The book falls back again, and the next marble hits the book, sending it back up again, etc. You can use big marbles, little marbles, give them different velocities, and vary the amount of time between which the marbles hit the book. The best combination of these will hold the book in the best quasi-stable position. Even better would be to use many marbles, hitting the book at different places. So each time a marble hits the book it loses some of its energy, which is given to the constantly falling book, and which makes it look like the book is in equilibrium. That is, a dynamical equilibrium. Now, where is the connection with the muscles keeping up the book? I think it's easy to see, though I don't have too much understanding of the muscles' workings. All the muscle cells can be compared with the marbles, and they give the book a constant upward change in motion during its fall. They relax, go tense, relax, go tense, etc. The fall and upward change are too small to notice, so the book looks like it is in a steady state. That is, a dynamical steady state. Of course, there is no friction in the case of the marbles, which get their energy from "little cannons". Consider an analogy: we get tired after STANDING for some time, without doing any work*.
The reason behind this is the same as the reason why we don't do any work holding an object above our heads, but this case is easier to comprehend. When we stand, we are actually resisting the tendency to fall to the ground; muscles are holding on to the structure of our body so that we don't collapse on the ground like some non-living thing. These muscles have fibers which have stretched themselves, which requires energy. Similarly, when we hold something above our head we are doing the same thing, resisting that collapsing tendency, which causes elongation in the muscles, which requires energy. When a physicist talks about work, they are using the word in the technical sense of the equation you quote. To a biologist, though, work might be defined as energy expended to carry out a task. In your example, your arm will not naturally stay in the position described. Your body (mostly your muscles) must expend energy to hold your arm (and the book) in a set position, unsupported by anything but your own physiology. So, by the biologist's definition, your muscles are doing work to hold up the book and your arm (muscle fibers are contracting and relaxing based on a host of chemical processes at the cellular level). But by the physicist's technical definition, no work is being done. $F=ma$ means that every force is applied to a mass and produces an acceleration. Okay. Acceleration is $a=\frac{\Delta v}{\Delta t}$. If you put this $\Delta v$ into ${\frac{1}{2}m(\Delta v)^2}$ you discover the energy which has been necessary to let that mass accelerate. Since energy is neither created nor destroyed, it is the energy burnt by the one who applied the force! His/her/its potential energy (e.g. from food) has become kinetic energy of the accelerated body. Now, what about holding up 5 kg with your arm? No energy? Of course you spend energy.
It is the same as above: you apply a force, equal and opposite to the gravitational force, so the object doesn't fall and doesn't rise, and if you apply a force, for the reason above, you spend energy. Now one could object that there is no acceleration in this case. If no acceleration (opposite to the gravitational acceleration $g$) existed, the object would fall! We have two opposite accelerations (since two opposite forces) at stake ($\mathbf{F}=-\mathbf{F_g} \Rightarrow \mathbf{a}=\mathbf{-g}$), which cancel. But if they cancel, they both exist. So yes, you spend energy for holding the object up: to let this counter-acceleration exist. So you need energy to hold up a mass, but no work is done if the object is at rest on your hand, since its kinetic energy is NOT varying. If you stop a falling body with your hand, you cause a negative $\Delta E_k$ (you do negative work on it), but once it is stopped there is no more work; your energy simply goes to cancel $F_g$ and keep the body at rest. Energy is being expended maintaining it in position. Earth's gravity is applying a force downwards; the book is being accelerated downward by the gravitational force. A force is being applied to the hand and arm which must be resisted, and thus energy expended. The arm and book are not a closed system. protected by Qmechanic♦ Nov 19 '13 at 7:19
Can anyone see how the following is obtained: In a section on perturbation by an oscillating electric field, in the book "Atomic Physics" by Foot, the following is stated: Consider the Schrödinger equation $$i \hbar \frac{\partial \Psi}{\partial t} = H \Psi.$$ The Hamiltonian has two parts $$H = H_0 + H_{I}(t).$$ The perturbation part of the Hamiltonian is: $$H_I(t) = e\mathbf{r} \cdot \mathbf{E}_0 \cos(\omega t)$$ The interaction mixes the two states: $$\Psi(\mathbf{r},t) = c_1(t)\psi_1(\mathbf{r}) e^{-\frac{iE_1 t}{\hbar}} + c_2(t)\psi_2(\mathbf{r}) e^{-\frac{i E_2 t}{\hbar}}$$ which can be written as $$\Psi(\mathbf{r},t) = c_1(t)|1 \rangle e^{-i \omega_1 t} + c_2(t)|2\rangle e^{-i \omega_2 t}.$$ Question: Can anyone see how it follows that substitution into the Schrödinger equation leads to $$i\dot{c}_1 = \Omega \cos(\omega t)e^{-i \omega_0 t}c_2,~~i\dot{c}_2 = \Omega^* \cos(\omega t)e^{i \omega_0 t}c_1?$$ where $\omega_0 = \frac{E_2 - E_1}{\hbar}$ and the Rabi frequency $\Omega$ is defined by $\Omega = \frac{\langle 1| e \mathbf{r} \cdot \mathbf{E_0} |2 \rangle}{\hbar}$. Also, what is the importance of $\dot{c}_1$ and $\dot{c}_2$ in describing the interaction of the two-level system with radiation? Thanks for any assistance.
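The coupled equations above (in the usual form $i\dot{c}_1 = \Omega \cos(\omega t)e^{-i\omega_0 t}c_2$, $i\dot{c}_2 = \Omega^* \cos(\omega t)e^{i\omega_0 t}c_1$, with $\hbar=1$) can be integrated numerically; on resonance the populations undergo Rabi oscillations while $|c_1|^2+|c_2|^2$ stays 1. The parameter values below are made up for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

Omega = 0.05      # Rabi frequency (taken real, illustrative value)
omega0 = 10.0     # transition frequency (E2 - E1)/hbar, illustrative
omega = omega0    # drive on resonance

def rhs(t, y):
    c1, c2 = y[0] + 1j * y[1], y[2] + 1j * y[3]
    dc1 = -1j * Omega * np.cos(omega * t) * np.exp(-1j * omega0 * t) * c2
    dc2 = -1j * np.conj(Omega) * np.cos(omega * t) * np.exp(1j * omega0 * t) * c1
    return [dc1.real, dc1.imag, dc2.real, dc2.imag]

# start in |1>; integrate over a pi-pulse, t = pi/Omega (RWA pulse area)
sol = solve_ivp(rhs, (0.0, np.pi / Omega), [1.0, 0.0, 0.0, 0.0],
                rtol=1e-9, atol=1e-12)
c1 = sol.y[0] + 1j * sol.y[1]
c2 = sol.y[2] + 1j * sol.y[3]
# |c1|^2 + |c2|^2 remains 1 (unitarity); population ends almost fully in |2>
```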
Let $T: \R^n \to \R^m$ be a linear transformation.Suppose that the nullity of $T$ is zero. If $\{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k\}$ is a linearly independent subset of $\R^n$, then show that $\{T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k) \}$ is a linearly independent subset of $\R^m$. Let $V$ denote the vector space of all real $2\times 2$ matrices.Suppose that the linear transformation from $V$ to $V$ is given as below.\[T(A)=\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}A-A\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}.\]Prove or disprove that the linear transformation $T:V\to V$ is an isomorphism. Let $G, H, K$ be groups. Let $f:G\to K$ be a group homomorphism and let $\pi:G\to H$ be a surjective group homomorphism such that the kernel of $\pi$ is included in the kernel of $f$: $\ker(\pi) \subset \ker(f)$. Define a map $\bar{f}:H\to K$ as follows.For each $h\in H$, there exists $g\in G$ such that $\pi(g)=h$ since $\pi:G\to H$ is surjective.Define $\bar{f}:H\to K$ by $\bar{f}(h)=f(g)$. (a) Prove that the map $\bar{f}:H\to K$ is well-defined. (b) Prove that $\bar{f}:H\to K$ is a group homomorphism. Let $\calF[0, 2\pi]$ be the vector space of all real valued functions defined on the interval $[0, 2\pi]$.Define the map $f:\R^2 \to \calF[0, 2\pi]$ by\[\left(\, f\left(\, \begin{bmatrix}\alpha \\\beta\end{bmatrix} \,\right) \,\right)(x):=\alpha \cos x + \beta \sin x.\]We put\[V:=\im f=\{\alpha \cos x + \beta \sin x \in \calF[0, 2\pi] \mid \alpha, \beta \in \R\}.\] (a) Prove that the map $f$ is a linear transformation. (b) Prove that the set $\{\cos x, \sin x\}$ is a basis of the vector space $V$. (c) Prove that the kernel is trivial, that is, $\ker f=\{\mathbf{0}\}$.(This yields an isomorphism of $\R^2$ and $V$.) (d) Define a map $g:V \to V$ by\[g(\alpha \cos x + \beta \sin x):=\frac{d}{dx}(\alpha \cos x+ \beta \sin x)=\beta \cos x -\alpha \sin x.\]Prove that the map $g$ is a linear transformation. 
(e) Find the matrix representation of the linear transformation $g$ with respect to the basis $\{\cos x, \sin x\}$. Suppose that the vectors\[\mathbf{v}_1=\begin{bmatrix}-2 \\1 \\0 \\0 \\0\end{bmatrix}, \qquad \mathbf{v}_2=\begin{bmatrix}-4 \\0 \\-3 \\-2 \\1\end{bmatrix}\]form a basis for the null space of a $4\times 5$ matrix $A$. Find a vector $\mathbf{x}$ such that\[\mathbf{x}\neq0, \quad \mathbf{x}\neq \mathbf{v}_1, \quad \mathbf{x}\neq \mathbf{v}_2,\]and\[A\mathbf{x}=\mathbf{0}.\] (Stanford University, Linear Algebra Exam Problem) Let $V$ be the subspace of $\R^4$ defined by the equation\[x_1-x_2+2x_3+6x_4=0.\]Find a linear transformation $T$ from $\R^3$ to $\R^4$ such that the null space $\calN(T)=\{\mathbf{0}\}$ and the range $\calR(T)=V$. Describe $T$ by its matrix $A$. A hyperplane in the $n$-dimensional vector space $\R^n$ is defined to be the set of vectors\[\begin{bmatrix}x_1 \\x_2 \\\vdots \\x_n\end{bmatrix}\in \R^n\]satisfying a linear equation of the form\[a_1x_1+a_2x_2+\cdots+a_nx_n=b,\]where $a_1, a_2, \dots, a_n$ (at least one of $a_1, a_2, \dots, a_n$ is nonzero) and $b$ are real numbers. Consider the hyperplane $P$ in $\R^n$ described by the linear equation\[a_1x_1+a_2x_2+\cdots+a_nx_n=0,\]where $a_1, a_2, \dots, a_n$ are some fixed real numbers and not all of these are zero.(The constant term $b$ is zero.) Then prove that the hyperplane $P$ is a subspace of $\R^{n}$ of dimension $n-1$. Let $n$ be a positive integer. Let $T:\R^n \to \R$ be a non-zero linear transformation.Prove the followings. (a) The nullity of $T$ is $n-1$. That is, the dimension of the nullspace of $T$ is $n-1$. (b) Let $B=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}\}$ be a basis of the nullspace $\calN(T)$ of $T$.Let $\mathbf{w}$ be an $n$-dimensional vector that is not in $\calN(T)$. Then\[B'=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}, \mathbf{w}\}\]is a basis of $\R^n$.
(c) Each vector $\mathbf{u}\in \R^n$ can be expressed as\[\mathbf{u}=\mathbf{v}+\frac{T(\mathbf{u})}{T(\mathbf{w})}\mathbf{w}\]for some vector $\mathbf{v}\in \calN(T)$. Let $A$ be the matrix for a linear transformation $T:\R^n \to \R^n$ with respect to the standard basis of $\R^n$.We assume that $A$ is idempotent, that is, $A^2=A$.Then prove that\[\R^n=\im(T) \oplus \ker(T).\] (a) Let $A=\begin{bmatrix}1 & 2 & 1 \\3 &6 &4\end{bmatrix}$ and let\[\mathbf{a}=\begin{bmatrix}-3 \\1 \\1\end{bmatrix}, \qquad \mathbf{b}=\begin{bmatrix}-2 \\1 \\0\end{bmatrix}, \qquad \mathbf{c}=\begin{bmatrix}1 \\1\end{bmatrix}.\]For each of the vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$, determine whether the vector is in the null space $\calN(A)$. Do the same for the range $\calR(A)$. (b) Find a basis of the null space of the matrix $B=\begin{bmatrix}1 & 1 & 2 \\-2 &-2 &-4\end{bmatrix}$. Let $A$ be a real $7\times 3$ matrix such that its null space is spanned by the vectors\[\begin{bmatrix}1 \\2 \\0\end{bmatrix}, \begin{bmatrix}2 \\1 \\0\end{bmatrix}, \text{ and } \begin{bmatrix}1 \\-1 \\0\end{bmatrix}.\]Then find the rank of the matrix $A$. (Purdue University, Linear Algebra Final Exam Problem) Let $R$ be a commutative ring with $1$ and let $G$ be a finite group with identity element $e$. Let $RG$ be the group ring. Then the map $\epsilon: RG \to R$ defined by\[\epsilon(\sum_{i=1}^na_i g_i)=\sum_{i=1}^na_i,\]where $a_i\in R$ and $G=\{g_i\}_{i=1}^n$, is a ring homomorphism, called the augmentation map and the kernel of $\epsilon$ is called the augmentation ideal. (a) Prove that the augmentation ideal in the group ring $RG$ is generated by $\{g-e \mid g\in G\}$. (b) Prove that if $G=\langle g\rangle$ is a finite cyclic group generated by $g$, then the augmentation ideal is generated by $g-e$.
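As a quick numerical illustration of the idempotent-matrix problem above ($A^2=A$ implies $\R^n=\im(T) \oplus \ker(T)$), here is a sketch with a made-up projection matrix:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])   # A @ A == A: a (non-orthogonal) projection
assert np.allclose(A @ A, A)

u = np.array([3.0, 2.0])     # arbitrary vector
v = A @ u                    # component in im(A)
w = u - v                    # component in ker(A): A w = A u - A^2 u = 0
assert np.allclose(A @ w, 0)
assert np.allclose(v + w, u)
```

The identity $A\mathbf{w} = A\mathbf{u} - A^2\mathbf{u} = \mathbf{0}$ in the comment is exactly the decomposition step used in the usual proof.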
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
In his celebrated paper "Conjugate Coding" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers, and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical "serial number" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either $$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$ The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or $\{|+\rangle,|-\rangle\}$), and check that it gets the correct outcomes. On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's "intuitively obvious" that, if a counterfeiter who doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$). However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere, or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one. So, has such a proof (with an upper bound on $c$) been published?
If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme? Update: In light of the discussion with Joe Fitzsimons below, I should clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say in the basis $$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$ Or is there an entangled counterfeiting strategy that does better? Update 2: Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and "hopes for the best." Interestingly, both of these strategies turn out to achieve a success probability of $(5/8)^n$. So, my conjecture of the moment is that $(5/8)^n$ might be the right answer. In any case, the fact that 5/8 is a lower bound on $c$ rules out any security argument for Wiesner's scheme that's "too" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is $c=1/2$). This post has been migrated from (A51.SE) Update 3: Nope, the right answer is $(3/4)^n$! See the discussion thread below Abel Molina's answer.
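For intuition, the two measure-and-resend strategies described above can be checked numerically; both average to exactly $5/8$ per qubit (an illustrative sketch, not a security proof — the function name is mine):

```python
import numpy as np

# The four Wiesner states, drawn uniformly at random.
states = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
          np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

def success(basis):
    """Measure in `basis`, resend two copies of the observed basis state;
    return the probability that both copies pass the bank's test."""
    total = 0.0
    for psi in states:
        for e in basis:
            p_outcome = (e @ psi) ** 2        # Born rule for this outcome
            p_pass = (e @ psi) ** 2           # one resent copy passes
            total += p_outcome * p_pass ** 2  # both copies pass
    return total / len(states)

standard = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
t = np.pi / 8
breidbart = [np.array([np.cos(t), np.sin(t)]),
             np.array([np.sin(t), -np.cos(t)])]
# success(standard) == success(breidbart) == 5/8
```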
An unknown element $\ce{Q}$ has two unknown isotopes: $\ce{^60Q}$ and $\ce{^63Q}$. If the average atomic mass is $\pu{61.5 u}$, what are the relative percentages of the isotopes? closed as off-topic by Loong♦, bon, ron, Klaus-Dieter Warzecha, John Snow Feb 21 '15 at 19:33 You can reverse engineer the formula used to calculate the average atomic mass of all isotopes. For example, carbon has two naturally occurring isotopes: \begin{array}{lrr} \text{Isotope} & \text{Isotopic Mass $A$} & \text{Abundance $p$} \\\hline \ce{^{12}C}: & \pu{12.000000 u} & 0.98892 \\ \ce{^{13}C}: & \pu{13.003354 u} & 0.01108 \end{array} The formula to get a weighted average is the sum of the product of the abundances and the isotope masses: $$A = \sum\limits_{i=1}^n p_i A_i$$ For carbon this is: $$0.989 \times 12.000 + 0.0111 \times 13.003 = 12.011$$ As you can see, we can set the abundance of one isotope to $x$, and the other to $1 - x$. If $x = 0.989$, then $1 - x = 0.0111$, OR if $x = 0.0111$, then $1 - x = 0.989$.Therefore, we can simply set up an algebraic equation:$$A_1(x_1) + A_2(1 - x_1) = A$$ We know $A_1$, $A_2$, and $A$ in your example, so: \begin{align} 60 x + 63(1 - x) &= 61.5\\ 63 - 3 x &= 61.5\\ x &= \frac{-1.5}{-3} = 0.5 \end{align} Therefore, the isotope with a mass of $\pu{60 u}$ has a $50\%$ abundance, and the isotope with a mass of $\pu{63 u}$ also has a $50\%$ abundance. Proof: $\pu{60 u}(0.5) + \pu{63 u}(0.5) = \pu{61.5 u}$ Solve for $x$: $$60x + 63(1-x)=61.5$$
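A quick symbolic check of the algebra above, using SymPy:

```python
from sympy import Eq, Rational, solve, symbols

# Solve 60x + 63(1 - x) = 61.5 for the abundance x of the 60 u isotope
# (61.5 kept exact as 123/2).
x = symbols('x')
sol = solve(Eq(60 * x + 63 * (1 - x), Rational(123, 2)), x)
# sol == [1/2]: each isotope has 50% abundance
```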
Wave Optics Huygens' Principle, Refraction and Reflection of Plane Waves using Huygens' Principle The locus of all the particles of the medium which are in the same phase, or in the same state of vibration, is called a wave front. A point source of light at a finite distance produces a spherical wave front. A linear source of light at a finite distance produces a cylindrical wave front. Any source at a large distance produces a plane wave front. According to Huygens, every point on the primary wave front acts as a source of secondary wavelets. The common tangent (or envelope) of all the secondary wavelets gives the position of the new wave front. Coherent sources have the same wavelength and zero (or constant) phase difference. Monochromatic sources have the same wavelength. The amplitudes of light coming from coherent sources may or may not be equal. View the Topic in this video From 00:28 To 19:26 Refraction and Reflection of Plane Waves using Huygens Principle View the Topic in this video From 00:05 To 5:10 1.
Huygens' Principle: According to Huygens' wave theory, each point source sends out waves in all directions, propagated by the motion of the particles of the hypothetical medium. According to the wave theory of light, if $c$ is the velocity of light in vacuum (or air) and $v$ is its velocity in the medium, then \[ \mu = \frac{\text{velocity of light in vacuum or air}}{\text{velocity of light in the medium}} = \frac{c}{v} \] 2. Refraction and reflection of a plane wave using Huygens' principle: the frequency is unchanged on refraction, so \[ \frac{v_{1}}{\lambda_{1}} = \frac{v_{2}}{\lambda_{2}} \]
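The two relations above can be sanity-checked numerically; the refractive index 1.33 (roughly water) and the 600 nm vacuum wavelength below are assumed example values:

```python
c = 3.0e8           # speed of light in vacuum (m/s)
mu = 1.33           # refractive index of the medium (assumed)
v = c / mu          # speed of light in the medium: mu = c/v

lam_vac = 600e-9    # wavelength in vacuum (m, assumed)
f = c / lam_vac     # frequency, unchanged on refraction
lam_med = v / f     # v1/lambda1 = v2/lambda2  =>  lambda shrinks by mu
```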
Does the Leibniz (product) rule hold in some sense for the spectral fractional Laplacian (at least in 1 dimension)? There are precise rules for the composition of pseudo-differential operators, and although these fractional derivatives are singular integrals and not always pseudo-differential operators, because of the singularity of the Fourier multiplier at the origin, it is quite likely that the composition formula of pseudo-differential operators can be extended to singular integrals. Let me recall the simplest version, dealing only with two terms: take $A, B$ pseudo-differential operators with respective symbols $a,b$ and respective orders $m_a, m_b$. We write $A=\text{Op} a$, $B=\text{Op} b$. Then $$ AB=\text{Op}\bigl(ab+\frac{1}{i}\partial_\xi a\cdot \partial_x b\bigr)+ R_{m_a+m_b-2}, $$ where $R_j$ is a pseudo-differential operator of order $j$. A simple consequence is that the principal symbol of $AB$ is the product $ab$ and that the principal symbol of the commutator $[A,B]$ is the Poisson bracket divided by $i$: $$ \frac{1}{i} \{a, b \}=\frac{1}{i}\bigl(\partial_\xi a\cdot \partial_x b-\partial_x a\cdot \partial_\xi b \bigr). $$ As a result, taking two functions of $x$, $f, g$, and $A$ a pseudodifferential operator of order $m$, we have $$ A (f g)=\text{Op}\bigl(f(x)a(x,\xi) \bigr) g+\frac{1}{i} \text{Op}\bigl((\partial_\xi a)(x,\xi) \cdot f'(x)\bigr) g+ R_{m-2}g. $$ In the first term of the rhs, if $A$ is a fractional derivative of order $m$, you get $fAg$, whereas the second term is $f'Bg$, where the order of $B$ is $m-1$. The remainder is of order $m-2$. Why should it? In one dimension, the product rule fails for all fractional derivatives $$ \frac{d^s}{dx^s} $$ except $s=1$. It is usually a good idea to try out such a conjecture in the simplest possible case. In order to make your question precise you require some boundary conditions on the Laplacian. I tried the case of the differential operator $i D$ with periodic conditions.
The resulting space has as a basis the trigonometric functions $(e^{i n s})$. If one modestly tries out the Leibniz rule for such functions (never mind finite or infinite linear combinations thereof), then one finds that it only works for the classical case (this has to do with the fact that the beginner's binomial theorem $(m+n)^\gamma = m^\gamma + n^\gamma$ is usually false if $\gamma \neq 1$).
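A one-line check of that failure (the values of $m$, $n$ below are arbitrary samples, not from the text):

```python
# Check whether (m+n)**gamma == m**gamma + n**gamma, the identity a Leibniz
# rule for d^s/dx^s would need on exponentials e^{imx} e^{inx}.
m, n = 2.0, 3.0
holds = {}
for gamma in [0.5, 1.0, 2.0]:
    lhs = (m + n) ** gamma
    rhs = m ** gamma + n ** gamma
    holds[gamma] = abs(lhs - rhs) < 1e-12

print(holds)   # only gamma = 1.0 holds
```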
We have $X \sim \mathrm{Unif}[0,2]$ and $Y \sim \mathrm{Unif}[3,4]$. The random variables $X,Y$ are independent. We define a random variable $Z = X + Y$ and want to find the PDF of $Z$ using convolution. Here is my work so far: The definition of convolution is: $f_Z(z) = \int_{-\infty}^{\infty}f_X(x)f_Y(z-x)\mathrm{d} x$ We know the PDFs of $X$ and $Y$ because they are just uniform distributions. The hard part for me is finding the limits of integration. We have to solve for the constraints. The integrand is nonzero when $3 \leq z-x \leq 4$ and when $0 \leq x \leq 2$. Together these constraints imply that $\max \{0, z-4\} \leq x \leq \min \{2, z-3 \}$. These constraints imply that there are three cases: Case 1 - $3 \leq z \leq 4 \implies f_Z(z) = \int_0^{z-3}$ Case 2 - $4 \leq z \leq 5 \implies f_Z(z) = \int_{z-4}^{z-3}$ Case 3 - $5 \leq z \leq 6 \implies f_Z(z) = \int_{z-4}^{2}$ My question is how to find the bounds of $Z$ i.e. what are the possible values of $Z$? Does $Z$ run from $0 \to 6$ since it is the sum of $X+Y$ and this sum will have some value for every value $\in [0,6]$?
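A small sanity-check sketch (not part of the original question): since $X \in [0,2]$ and $Y \in [3,4]$, the support of $Z = X+Y$ is $[3,6]$, not $[0,6]$, and the three integration cases assemble into a trapezoidal PDF:

```python
import random

# Support check: X ~ Unif[0,2], Y ~ Unif[3,4], so Z = X + Y lies in [3, 6].
random.seed(0)
samples = [random.uniform(0, 2) + random.uniform(3, 4) for _ in range(100_000)]
print(min(samples) >= 3 and max(samples) <= 6)   # True

# Piecewise PDF from the three cases (f_X = 1/2 on [0,2], f_Y = 1 on [3,4]):
def f_Z(z):
    if 3 <= z <= 4:
        return (z - 3) / 2      # Case 1: integral of 1/2 over [0, z-3]
    if 4 <= z <= 5:
        return 1 / 2            # Case 2: integral of 1/2 over [z-4, z-3]
    if 5 <= z <= 6:
        return (6 - z) / 2      # Case 3: integral of 1/2 over [z-4, 2]
    return 0.0
```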
Credit: Ancient Origins Introduction In a previous Math Scholar blog, we presented Archimedes' ingenious scheme for approximating $\pi$, based on an analysis of regular circumscribed and inscribed polygons with $3 \cdot 2^k$ sides, using modern mathematical notation and techniques. One motivation for both the previous blog and this blog is to respond to some recent writers who reject basic mathematical theory and the accepted value of $\pi$, claiming instead that they have found $\pi$ to be a different value. For example, one author asserts that $\pi = 17 - 8 \sqrt{3} = 3.1435935394\ldots$. Continue reading Pi as the limit of n-sided circumscribed and inscribed polygons Log_10 of the error of a continued fraction approximation of Pi to k terms Approximation of real numbers by rationals The question of finding rational approximations to real numbers was first explored by the Greek scholar Diophantus of Alexandria (c. 201-285 CE), and continues to fascinate mathematicians today, in a field known as Diophantine approximations. It is easy to see that any real number can be approximated to any desired accuracy by simply taking the sequence of approximations given by the decimal digits out to some point, divided by the appropriate power Continue reading New paper proves 80-year-old approximation conjecture The TRAPPIST-1 system of exoplanets, approximately 40 light-years away Exoplanets galore As of the present date (August 2019), more than 4000 exoplanets have been discovered orbiting other stars, and by the time you read this even more will have been logged. Several hundred exoplanets were announced in a July 2019 paper (although these await independent confirmation). All of this is a remarkable advance, given that the first confirmed exoplanet discovery did not occur until 1992.
Most of the discoveries mentioned above are planets that are either too large or too close to their sun to possess liquid water, much Continue reading How many habitable exoplanets are there, really? A large cluster galaxy in the center acts as a gravitational lens, splitting the light from a more distant supernova into four yellow images (arrows) The standard model of physics has reigned supreme since the 1970s, successfully describing experimental physical reality in a vast array of experimental tests. Among other things, the standard model predicted the existence of a particle, now known as the Higgs boson, underlying the phenomenon of mass. This particle was experimentally discovered in 2012, nearly 50 years after it was first predicted. Yet physicists have known for many years that the standard model cannot be the Continue reading How fast is the universe expanding? New results deepen the controversy Complex of bacteria-infecting viral proteins modeled in CASP 13 Introduction In an advance that may presage a dramatic new era of pharmaceuticals and medicine, DeepMind (a subsidiary of Alphabet, Google’s parent company) recently applied their machine learning software to the challenging problem of protein folding, with remarkable success. In the wake of this success, DeepMind and other private companies are racing to further extend these capabilities and apply them to real-world biology and medicine. The protein folding problem Protein folding is the name for the physical process in which a protein chain, defined by a linear sequence of amino Continue reading Protein folding via machine learning may spawn medical advances Homo Deus In his new book Homo Deus, Israeli scholar Yuval Noah Harari has published one of the most thoughtful and far-reaching analyses of humanity’s present and future. 
Building on his earlier Sapiens, Harari argues that although humanity has made enormous progress in the past few centuries, the future of our society, and even of our species, is uncertain. Harari begins with a reprise of human history, from prehistoric times to the present. He then observes that although religious beliefs are much more nuanced and sophisticated than in the past, human society still relies heavily on the narratives Continue reading Homo Deus: A brief history of tomorrow Ken Ono, Emory University Don Zagier, Max Planck Institute Michael Griffin, BYU Larry Rolen, Vanderbilt University Introduction Four mathematicians, Michael Griffin of Brigham Young University, Ken Ono of Emory University (now at University of Virginia), Larry Rolen of Vanderbilt University and Don Zagier of the Max Planck Institute, have proven a significant result that is thought to be on the roadmap to a proof of the most celebrated of unsolved mathematical conjectures, namely the Riemann hypothesis. First, here is some background: The Riemann hypothesis The Riemann hypothesis was first posed by the German Continue reading Mathematicians prove result tied to the Riemann hypothesis A colony of the new Syn61 bacteria; credit: BBC Creating life In a remarkable development with far-reaching consequences, researchers at the Cambridge Laboratory of Molecular Biology have used a computer program to rewrite the DNA of the well-known bacteria Escherichia coli (more commonly known as "E. coli") to produce a functioning, reproducing species that is far more complex than any previous similar synthetic biology effort. Venter's 2010 project This effort has its roots in a project spearheaded by J.
Craig Venter, the well-known maverick biomedical researcher known for the "shotgun" approach to genome sequencing pioneered by his team at Continue reading Computational tools help create new living organism Optimal stacking of oranges. The sphere-packing problem The Kepler conjecture is the assertion that the simple scheme of stacking oranges typically seen in a supermarket has the highest possible average density, namely $\pi/(3\sqrt{2}) = 0.740480489\ldots$, for any possible arrangement, regular or irregular. It is named after 17th-century astronomer Johannes Kepler, who first proposed that planets orbited in elliptical paths around the sun. Hales' proof of the Kepler conjecture In the early 1990s, Thomas Hales, following an approach first suggested by Laszlo Fejes Toth in 1953, determined that the maximum density of all possible arrangements could be obtained Continue reading Researchers use "magic functions" to prove sphere-packing results Introduction Right off, it may not sound like pi, climate change denial and young-Earth creationism have much in common. In fact, there is an important connection. Here is some background. Credit: Michele Vallisneri, NASA JPL Computing pi $\pi = 3.1415926535\ldots$, namely the ratio between the circumference of a circle and its diameter, has fascinated not only mathematicians and scientists but the public at large for centuries. Archimedes (c. 287–212 BCE) was the first to present a scheme for calculating pi as a limit of perimeters of inscribed and circumscribed polygons, as illustrated briefly in the graphic to the right (see Continue reading Pi, climate change denial and creationism
In the well-known communication task EQUALITY, Alice has a string $x$ of $n$ bits, Bob has a string $y$ of $n$ bits, and their task is to determine whether $x = y$. In the public coin model, there is a probabilistic protocol which uses 2 bits, is always correct when $x = y$, and is correct with constant probability when $x \neq y$ (in fact, with probability 1/2). The following generalization came up in a recent question. In the task $k$-HAMMING (where $k$ is a constant parameter), Alice and Bob hold strings $x,y$ of length $n$ bits, and their task is to determine whether the Hamming distance between $x$ and $y$ is at most $k$. EQUALITY is the case $k=0$. When $k = 1$, we have the following nice protocol, which uses 5 bits, is always correct in the Yes case, and is wrong in the No case with probability at most 5/8. Alice and Bob agree on two random strings $z,w \in GF(3)^n$ whose entries are drawn uniformly from the nonzero elements $\{1,2\}$. Alice and Bob each compute the inner product of $z,w$ with their input, and compare the values. If both are equal or both are different, they output Yes, and otherwise they output No. If the Hamming distance between $x$ and $y$ is $d$ then the probability $p_d$ that $\langle z,x-y \rangle = 0$ is $1/3 + (2/3)(-1/2)^d$. We have $p_0 = 1$, $p_1 = 0$, and $1/4 \leq p_d \leq 1/2$ for $d \geq 2$. This shows that the error probability when $d \geq 2$ is at most $p_d^2 + (1-p_d)^2 \leq (1/4)^2 + (3/4)^2 = 5/8$. When $k > 1$, a similar protocol works since using enough samples we can estimate $p_d$ to any required accuracy. However, the resulting protocol no longer has one-sided error. One-sidedness can be recovered in many ways at the cost of using $O(\log n)$ bits of communication. Is there a one-sided error protocol for $k$-HAMMING for $k \geq 2$ using $O(1)$ communication?
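A small simulation of the inner-product statistic. Note that the entries of $z, w$ need to be uniform over the nonzero elements $\{1,2\}$ of $GF(3)$ for the stated formula $p_d = 1/3 + (2/3)(-1/2)^d$ to hold; fully uniform entries over $GF(3)$ would give $p_d = 1/3$ for every $d \ge 1$, and in particular would not give $p_1 = 0$:

```python
import random

# Estimate p_d = P(<z, x-y> = 0 mod 3). On the d coordinates where x and y
# differ, z_i * (x_i - y_i) = +/- z_i, which is again uniform on {1, 2} mod 3,
# so it suffices to sum d uniform draws from {1, 2}.
def p_zero(d, trials=100_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.choice((1, 2)) for _ in range(d)) % 3
        hits += (s == 0)
    return hits / trials

for d in range(5):
    exact = 1 / 3 + (2 / 3) * (-1 / 2) ** d
    print(d, round(p_zero(d), 3), round(exact, 3))
```

The estimates match $1, 0, 1/2, 1/4, 3/8, \dots$ as predicted by the closed form.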
I would recommend you to look at Jones' survey paper from 1986, entitled A New Knot Polynomial and Von Neumann Algebras. It is very readable. Let me try to make a brief summary, though. The basic object you start with is a $II_1$ factor. This is a von Neumann algebra $M$ with trivial center $Z(M)\simeq \mathbb{C}$, possessing a faithful trace $\tau : M \to \mathbb{C}$ (i.e. a positive normal state such that $\tau(a^*a)=0$ implies $a=0$) and having no minimal projections (this excludes matrix algebras). For whatever reason, it is a good idea to study subfactors $N \subseteq M$. It turns out that the most important thing about a subfactor is not so much its isomorphism type (it is very hard to tell if von Neumann algebras are isomorphic or not) but rather the way it sits inside the big factor. Using the trace $\tau$, you can perform the GNS construction for $M$. This means that you define an inner product on $M$ via $\langle x,y \rangle = \tau(y^*x)$ (which is positive definite since $\tau$ is faithful), and then take the completion to get a Hilbert space called $L^2(M,\tau)$. Then $M$ acts on this Hilbert space by left-multiplication. Now you do what is called Jones' basic construction. Inside the Hilbert space $L^2(M,\tau)$ is the subspace $L^2(N,\tau)$, so there is the orthogonal projection of $L^2(M,\tau)$ onto $L^2(N,\tau)$. Call that projection $e_1$. Then define a new algebra $M_1$ to be the von Neumann algebra generated by $M$ and $e_1$ (inside $\mathcal{L}(L^2(M,\tau))$). It is immediate that $e_1$ commutes with $N$. It turns out that if the inclusion $N \subseteq M$ was of finite index (which I won't get into here) then $M_1$ is also a $II_1$ factor (so it comes with a faithful trace also), and the inclusion $M \subseteq M_1$ has the same index as $N \subseteq M$. Then you just keep going! Repeat the basic construction for $M \subseteq M_1$ to get a projection $e_2$, then let $M_2$ be the von Neumann algebra generated by $M_1$ and $e_2$, etc.
So you end up with a sequence of projections $e_1,e_2,\dots$ which satisfy the following relations: $e_i e_j = e_j e_i$ if $|i-j| > 1$. $e_i e_j e_i = \lambda e_i$ whenever $|i-j| = 1$, where $\lambda$ is the inverse of the index of $N$ in $M$. The projections $e_i$ give a representation of something called the Temperley-Lieb algebra. You can see that these relations are reminiscent of the relations in the braid group. That is where the connection comes in. Knots are connected to braids, braids are connected to the Temperley-Lieb algebra (and hence to these projections) and then you can use the trace in the von Neumann algebra to define invariants of knots. That is the gist of it. Read Jones' paper for more details.
The classical LSE (least squares estimator) of the regression parameter in a simple regression model, the MLE for normally distributed errors, is known to be highly nonrobust to outliers (gross-error contamination) and plausible departures from the assumed normality. The estimator based on the Kendall tau statistic, known as the Theil–Sen estimator (Sen 1968 JASA), is robust, median-unbiased and insensitive to nonnormality of errors. In this simple regression setup, it is tacitly assumed that the regressors are nonstochastic. In many situations, not only are the regressors stochastic but they are also subject to superimposed measurement errors. This scenario is appraised in a simple measurement error model: $$ Y_i = \alpha + \beta X_i + e_i, \quad W_i = X_i + U_i, \quad i= 1,\dots,n,$$ where the \(X_i\) are i.i.d. random variables with a distribution with location \(\mu_x\), and the errors \(e_i, U_i\) and \(V_i = X_i - \mu_x, i = 1,\dots,n\) are mutually independent. The usual scenario is to assume that all these errors have normal distributions with 0 means and appropriate variances. We examine this stringent assumption and appraise the scenario beyond conventional normality; the findings are simple and yet interesting.
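For readers unfamiliar with it, here is a minimal sketch of the Theil–Sen estimator mentioned above (the median of all pairwise slopes, with the intercept taken as a median of residual offsets; the data below are invented to show robustness to a single gross outlier):

```python
from itertools import combinations
from statistics import median

def theil_sen(xs, ys):
    # Median of slopes over all point pairs with distinct x-values.
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i, j in combinations(range(len(xs)), 2)
              if xs[j] != xs[i]]
    slope = median(slopes)
    intercept = median(y - slope * x for x, y in zip(xs, ys))
    return slope, intercept

# y = 2x + 1 with one grossly corrupted point; the fit is unaffected.
xs = [0, 1, 2, 3, 4, 5]
ys = [1, 3, 5, 7, 9, 100]      # last point is an outlier
slope, intercept = theil_sen(xs, ys)
print(slope, intercept)        # 2.0 1.0
```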
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
"La variante di Lüneburg" and China's rice production. In 1993, Paolo Maurensig, an Italian writer from Gorizia, wrote a novel entitled "La variante di Lüneburg" (Adelphi, 1995, pp. 164, ISBN 88-459-0984-0). The novel is set in Nazi Germany during World War II and the main theme is the game of chess. At the beginning of the story, Maurensig tells a legend according to which the game of chess was invented by a Chinese peasant with a formidable gift for mathematics. (There are different versions of this story; as far as I understand, the earliest written record is contained in the Shahnameh and takes place in India, instead.) The peasant asks the king, in exchange for the game he invented, for a quantity of rice equal to that obtained with the following procedure: first one grain of rice should be placed on the first square of the chess board, then two grains on the second square, then four on the third, and so on, every time doubling the number of rice grains. The king accepts, not realizing what he is agreeing to. Let's try to calculate how much rice that would be. The series implied by the peasant has a more general form, called a geometric series, and is well known in mathematics: $$ s_m = \sum_{k=0}^m x^k $$ If we calculate the first steps in the sum we obtain: \begin{eqnarray*} s_0 &=& 1 \\ s_1 &=& 1+x \\ s_2 &=& 1+x+x^2 \\ \dots && \\ \end{eqnarray*} To see how this relates to our problem, we can set $x=2$ and see that the sum will be: $1+ 2+ 4+ 8+ \dots $ If we observe $s_1$ and $s_2$ in the previous equations, we see that we can write the second in terms of the first in two different ways: \begin{eqnarray*} s_2 &=& s_1+x^2 \\ s_2 &=& 1 + x (1+x) = 1 + x s_1.\\ \end{eqnarray*} In the first case, we grouped the first two terms in $s_2$, whereas in the second case we grouped the last two terms and realized that they shared a common factor $x$.
If we continue writing the terms of the sum for higher orders, we realize that what we obtained above is true in general: \begin{eqnarray*} s_{m} &=& 1+x+\dots+x^m \\ s_{m+1} &=& 1+x+\dots+x^m+x^{m+1} = s_m+x^{m+1} \\ &=& 1 + x (1+\dots+x^{m-1}+x^m) = 1 + x s_m, \\ \end{eqnarray*} which also means that the right-hand sides of the last two equations above must be equal: \begin{eqnarray*} s_m+x^{m+1} &=& 1 + x s_m, \end{eqnarray*} and, therefore, rearranging: \begin{eqnarray*} s_m &=& \frac{x^{m+1}-1}{x-1}. \end{eqnarray*} This is the general solution for the sum of the geometric series. If we want to know how many grains of rice the king will have to give to the peasant, we need to substitute the values of $x$ and $m$. We already saw above that $x$ should be equal to 2. We also know that the chess board has 8 rows and 8 columns, giving 64 squares. Because we start with $s_0$ on the first square, we need to calculate the series for $m=63$, which corresponds to the last square: $$ s_{63} = \frac{2^{64}-1}{2-1} = 18\,446\,744\,073\,709\,551\,615, $$ which in words would sound something like eighteen quintillion. In 1999, China produced approximately 198 million tons of rice, which corresponds to $198\,000\,000\,000\,000$ grams. If we assume for simplicity that the production is constant over the years and that one gram of rice contains approximately 50 grains, the king would have to give the peasant the entire rice production of China for more than 18 hundred years. Needless to say, when the king realized the mistake, he killed the peasant.
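The arithmetic above is easy to verify directly (the production and grains-per-gram figures are the rough assumptions from the text):

```python
# Closed form s_63 = (2**64 - 1)/(2 - 1) versus direct summation,
# plus the "years of Chinese rice production" estimate.
grains = sum(2 ** k for k in range(64))   # one term per chessboard square
print(grains)                             # 18446744073709551615

tons_per_year = 198_000_000    # assumed 1999 production, metric tons
grains_per_gram = 50           # rough assumption from the text
grains_per_year = tons_per_year * 1_000_000 * grains_per_gram

years = grains / grains_per_year
print(years)                   # roughly 1.86e3, i.e. more than 1800 years
```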
Please explain each step, thank you. The question asks for the domain and range of the inverse of \(g(x)=x^2-8x+7\) restricted to \(x>4\) (the garbled fragments refer to \(x^2-8x+7\), the completed square \((x-4)^2-9\), and the inverse \(\sqrt{x+9}+4 = y\)). g(x) is a concave up parabola \(g(x)=x^2-8x+7\\ g(x)=(x-7)(x-1)\\ \text{roots are at } x=7 \text{ and } x=1\\ \text{axis of symmetry: } x=4 \) \(g(4)=(4-7)(4-1)= -3\cdot3 = -9\) So the vertex is (4,-9) Since the domain is \((4,\infty)\) the range is \((-9,\infty) \) So the domain and range of the inverse must be the other way around. For \(g^{-1}(x)\) the domain is \((-9,\infty) \) and the range is \((4,\infty)\) Here is the graph. This maths underneath is not needed to answer the question but I have done it so I might as well leave it. \(Let\;\;y=g(x)\\ y=x^2-8x+7 \qquad x>4\\ y-7=x^2-8x\\ y-7+16=x^2-8x+16\\ y+9=(x-4)^2\\ \pm\sqrt{y+9}=x-4\\ \text{But } x-4 \text{ is positive, so}\\ x-4=\sqrt{y+9}\\ x=\sqrt{y+9}+4\\ g^{-1}(x)=\sqrt{x+9}+4\).
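A quick numerical check (not part of the original answer) that \(g^{-1}(x)=\sqrt{x+9}+4\) really inverts \(g(x)=x^2-8x+7\) on \((4,\infty)\):

```python
import math

def g(x):
    return x * x - 8 * x + 7

def g_inv(x):
    return math.sqrt(x + 9) + 4

# g_inv(g(x)) == x at sample points of the restricted domain x > 4
for x in [4.5, 5, 6, 10]:
    assert abs(g_inv(g(x)) - x) < 1e-9

print(g_inv(-9), g_inv(7))   # 4.0 8.0  (vertex maps to 4; g(8) = 7)
```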
The derivative of a function is one of the basic concepts of mathematics. Together with the integral, the derivative occupies a central place in calculus. The process of finding the derivative is called differentiation. The inverse operation of differentiation is called integration. The derivative of a function at some point characterizes the rate of change of the function at this point. We can estimate the rate of change by calculating the ratio of the change of the function \(\Delta y\) to the change of the independent variable \(\Delta x\). In the definition of derivative, this ratio is considered in the limit as \(\Delta x \to 0.\) Let us turn to a more rigorous formulation. Formal Definition of the Derivative Let \(f\left( x \right)\) be a function whose domain contains an open interval about some point \({x_0}\). Then the function \(f\left( x \right)\) is said to be differentiable at \({x_0}\), and the derivative of \(f\left( x \right)\) at \({x_0}\) is given by \[ {f'\left( {{x_0}} \right) = \lim\limits_{\Delta x \to 0} \frac{{\Delta y}}{{\Delta x}} } = {\lim\limits_{\Delta x \to 0} \frac{{f\left( {{x_0} + \Delta x} \right) - f\left( {{x_0}} \right)}}{{\Delta x}}.} \] Lagrange's notation is to write the derivative of the function \(y = f\left( x \right)\) as \(f^\prime\left( x \right)\) or \(y^\prime\left( x \right).\) Leibniz's notation is to write the derivative of the function \(y = f\left( x \right)\) as \(\large{\frac{{df}}{{dx}}}\normalsize\) or \(\large{\frac{{dy}}{{dx}}}\normalsize.\) The steps to find the derivative of a function \(f\left( x \right)\) at the point \({x_0}\) are as follows: Form the difference quotient \({\large\frac{{\Delta y}}{{\Delta x}}\normalsize} = {\large\frac{{f\left( {{x_0} + \Delta x} \right) - f\left( {{x_0}} \right)}}{{\Delta x}}\normalsize}\); Simplify the quotient, canceling \(\Delta x\) if possible; Find the derivative \(f'\left( {{x_0}} \right)\), applying the limit to the quotient.
If this limit exists, then we say that the function \(f\left( x \right)\) is differentiable at \({x_0}\). In the examples below, we derive the derivatives of the basic elementary functions using the formal definition of derivative. These functions comprise the backbone in the sense that the derivatives of other functions can be derived from them using the basic differentiation rules. Solved Problems Click a problem to see the solution. Example 1 Using the definition of derivative, prove that the derivative of a constant is \(0.\) Example 2 Calculate the derivative of the function \(y = x.\) Example 3 Using the limit definition find the derivative of the function \(f\left( x \right) = 3x + 2.\) Example 4 Find the derivative of a linear function \(y = ax + b\) using the definition of derivative. Example 5 Using the definition, find the derivative of the simplest quadratic function \(y = {x^2}.\) Example 6 Using the definition of the derivative, differentiate the function \(f\left( x \right) = {x^2} + 2x - 2.\) Example 7 Determine the derivative of a quadratic function of general form \(y = a{x^2} + bx + c.\) Example 8 Using the definition of the derivative, find the derivative of the function \(y = \large\frac{1}{x}\normalsize.\) Example 9 Using the limit definition find the derivative of the function \(f\left( x \right) = \frac{1}{{x - 1}}.\) Example 10 Find the derivative of the function \(f\left( x \right) = \large{\frac{2}{{{x^2}}}}\normalsize\) using the limit definition. Example 11 Find the derivative of the function \(y = \sqrt x .\) Example 12 Determine the derivative of the cube root function \(f\left( x \right) = \sqrt[3]{x}\) using the limit definition. Example 13 Calculate the derivative of the cubic function \(y = {x^3}.\) Example 14 Differentiate the power function \(f\left( x \right) = {x^4}\) using the limit definition.
Example 15 Find the derivative of the sine function \(y = \sin x.\) Example 16 Find the derivative of the cosine function \(y = \cos x.\) Example 17 Find the derivative of the trigonometric function \(f\left( x \right) = \sin 2x\) using the limit definition. Example 18 Find an expression for the derivative of the exponential function \(y = {e^x}\) using the definition of derivative. Example 19 Find the derivative of the power function \(y = {x^n}.\) Example 20 Find the derivative of the natural logarithm \(y = \ln x.\) Example 1. Using the definition of derivative, prove that the derivative of a constant is \(0.\) Solution. In this case, the function \(y\left( x \right)\) is always equal to a constant \(C.\) Therefore, we can write \[{y\left( x \right) = C,\;\;\;}\kern-0.3pt {y\left( {x + \Delta x} \right) = C.}\] It is clear that the increment of the function is identically equal to zero: \[ {\Delta y = y\left( {x + \Delta x} \right) - y\left( x \right) } = {C - C \equiv 0.} \] Substituting this in the limit definition of derivative, we obtain: \[ {y'\left( x \right) = \lim\limits_{\Delta x \to 0} \frac{{\Delta y}}{{\Delta x}} } = {\lim\limits_{\Delta x \to 0} \frac{{y\left( {x + \Delta x} \right) - y\left( {x} \right)}}{{\Delta x}} } = {\lim\limits_{\Delta x \to 0} \frac{0}{{\Delta x}} } = {\lim\limits_{\Delta x \to 0} 0 = 0.} \] Example 2. Calculate the derivative of the function \(y = x.\) Solution.
Following the above procedure, we form the ratio \(\large\frac{{\Delta y}}{{\Delta x}}\normalsize\) and find the limit as \(\Delta x \to 0:\) \[\require{cancel} {y'\left( x \right) = \lim\limits_{\Delta x \to 0} \frac{{\Delta y}}{{\Delta x}} } = {\lim\limits_{\Delta x \to 0} \frac{{\left( {x + \Delta x} \right) - x}}{{\Delta x}} } = {\lim\limits_{\Delta x \to 0} \frac{{\cancel{x} + \Delta x - \cancel{x}}}{{\Delta x}} } = {\lim\limits_{\Delta x \to 0} \frac{{\cancel{\Delta x}}}{{\cancel{\Delta x}}} } = {\lim\limits_{\Delta x \to 0} 1 = 1.} \] Example 3. Using the limit definition find the derivative of the function \(f\left( x \right) = 3x + 2.\) Solution. Write the increment of the function: \[\require{cancel}{\Delta y }={ y\left( {x + \Delta x} \right) - y\left( x \right) }={ \left[ {3\left( {x + \Delta x} \right) + 2} \right] - \left[ {3x + 2} \right] }={ \cancel{\color{blue}{3x}} + 3\Delta x + \cancel{\color{red}{2}} - \cancel{\color{blue}{3x}} - \cancel{\color{red}{2}} }={ 3\Delta x.}\] The difference ratio is equal to \[\frac{{\Delta y}}{{\Delta x}} = \frac{{3\cancel{\Delta x}}}{{\cancel{\Delta x}}} = 3.\] Then the derivative is given by \[{f^\prime\left( x \right) = \mathop {\lim }\limits_{\Delta x \to 0} \frac{{\Delta y}}{{\Delta x}} }={ \mathop {\lim }\limits_{\Delta x \to 0} 3 }={ 3.}\] Example 4. Find the derivative of a linear function \(y = ax + b\) using the definition of derivative. Solution.
We write the increment of the function corresponding to a small change in the argument \(\Delta x:\) \[ {\Delta y = y\left( {x + \Delta x} \right) - y\left( x \right) } = {\left( {a\left( {x + \Delta x} \right) + b} \right) - \left( {ax + b} \right) } = {\cancel{\color{blue}{ax}} + a\Delta x + \cancel{\color{red}{b}} - \cancel{\color{blue}{ax}} - \cancel{\color{red}{b}} = a\Delta x.} \] Then the derivative is given by \[ {y'\left( x \right) = \lim\limits_{\Delta x \to 0} \frac{{\Delta y}}{{\Delta x}} } = {\lim\limits_{\Delta x \to 0} \frac{{a\cancel{\Delta x}}}{{\cancel{\Delta x}}} } = {\lim\limits_{\Delta x \to 0} a = a.} \] As can be seen, the derivative of a linear function \(y = ax + b\) is always constant and equal to the coefficient \(a.\) Example 5. Using the definition, find the derivative of the simplest quadratic function \(y = {x^2}.\) Solution. If we change the independent variable \(x\) by an amount \(\Delta x\), the function receives the following increment: \[ {\Delta y = y\left( {x + \Delta x} \right) - y\left( x \right) } = {{\left( {x + \Delta x} \right)^2} - {x^2}.} \] This expression can be converted to the form \[ {\Delta y = {\left( {x + \Delta x} \right)^2} - {x^2} } = {\cancel{x^2} + 2x\Delta x + {\left( {\Delta x} \right)^2} - \cancel{x^2} } = {\left( {2x + \Delta x} \right)\Delta x.} \] By calculating the limit, we find the derivative: \[ {y'\left( x \right) } = {\lim\limits_{\Delta x \to 0} \frac{{\Delta y}}{{\Delta x}} } = {\lim\limits_{\Delta x \to 0} \frac{{\left( {2x + \Delta x} \right)\cancel{\Delta x}}}{{\cancel{\Delta x}}} } = {\lim\limits_{\Delta x \to 0} \left( {2x + \Delta x} \right) = 2x.} \] Example 6. Using the definition of the derivative, differentiate the function \(f\left( x \right) = {x^2} + 2x - 2.\) Solution.
Calculate the increment of the function: \[\require{cancel}{\Delta y }={ f\left( {x + \Delta x} \right) - f\left( x \right) }={ \left[ {{{\left( {x + \Delta x} \right)}^2} + 2\left( {x + \Delta x} \right) - 2} \right] }-{ \left[ {{x^2} + 2x - 2} \right] }={ \cancel{\color{darkgreen}{x^2}} + 2x\Delta x + {\left( {\Delta x} \right)^2} + \cancel{\color{blue}{2x}} + 2\Delta x - \cancel{\color{red}{2}}} - {\cancel{\color{darkgreen}{x^2}} - \cancel{\color{blue}{2x}} + \cancel{\color{red}{2}} }={ \left( {2x + 2} \right)\Delta x + {\left( {\Delta x} \right)^2}.}\] Write the difference ratio: \[{\frac{{\Delta y}}{{\Delta x}} }={ \frac{{\left( {2x + 2} \right)\Delta x + {{\left( {\Delta x} \right)}^2}}}{{\Delta x}} }={ \frac{{\left( {2x + 2 + \Delta x} \right)\cancel{\Delta x}}}{{\cancel{\Delta x}}} }={ 2x + 2 + \Delta x.}\] Hence, the derivative is \[{f^\prime\left( x \right) }={ \mathop {\lim }\limits_{\Delta x \to 0} \frac{{\Delta y}}{{\Delta x}} }={ \mathop {\lim }\limits_{\Delta x \to 0} \left( {2x + 2 + \Delta x} \right) }={ 2x + 2.}\] Example 7. Determine the derivative of a quadratic function of general form \(y = a{x^2} + bx + c.\) Solution. We find the derivative of the given function using the definition of derivative.
Write the increment of the function \(\Delta y\) when the argument changes by \(\Delta x:\) \[\require{cancel} {\Delta y = y\left( {x + \Delta x} \right) - y\left( x \right) } = {\left[ {a{{\left( {x + \Delta x} \right)}^2} + b\left( {x + \Delta x} \right) + c} \right] }-{ \left[ {a{x^2} + bx + c} \right] } = {\cancel{\color{blue}{a{x^2}}} + 2ax\Delta x + a{\left( {\Delta x} \right)^2} }+{ \cancel{\color{red}{bx}} + b\Delta x + \cancel{\color{maroon}{c}} }-{ \cancel{\color{blue}{a{x^2}}} - \cancel{\color{red}{bx}} - \cancel{\color{maroon}{c}} } = {2ax\Delta x + a{\left( {\Delta x} \right)^2} + b\Delta x } = {\left( {2ax + b + a\Delta x} \right)\Delta x.} \] Now we form the ratio of the increments and calculate the limit: \[ {y'\left( x \right) } = {\lim\limits_{\Delta x \to 0} \frac{{\Delta y}}{{\Delta x}} } = {\lim\limits_{\Delta x \to 0} \frac{{\left( {2ax + b + a\Delta x} \right)\cancel{\Delta x}}}{{\cancel{\Delta x}}} } = {\lim\limits_{\Delta x \to 0} \left( {2ax + b + a\Delta x} \right) } = {2ax + b.} \] Thus, the derivative of a quadratic function in general form is a linear function.
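The general-quadratic result can also be checked numerically. This is a small illustration with arbitrary sample coefficients: the difference quotient equals \(2ax + b + a\,\Delta x\), so its error relative to \(2ax + b\) shrinks linearly in \(\Delta x:\)

```python
# Difference quotient (f(x + dx) - f(x)) / dx for y = a*x**2 + b*x + c,
# whose exact derivative is 2*a*x + b.
def diff_quotient(f, x, dx):
    return (f(x + dx) - f(x)) / dx

a, b, c = 3.0, -2.0, 5.0                    # sample coefficients
f = lambda x: a * x ** 2 + b * x + c
x0 = 1.5
exact = 2 * a * x0 + b                      # 7.0

for dx in [1e-1, 1e-3, 1e-6]:
    approx = diff_quotient(f, x0, dx)
    print(dx, approx, abs(approx - exact))  # error is a*dx for a quadratic
```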
While this does not answer the question asked (and will therefore not be accepted), I provide this response in hopes it will help others and promote worthwhile discussion. In his book Experimentation: An Introduction to Measurement Theory and Experiment Design, David Baird provides a simple explanation for doing a linear least squares fit, using the differences between the measured and fit values to estimate the error of the fit parameters. The best fit for the parameters $m$ and $b$ in $$y=mx+b$$ is determined using his eqn. (6.3): $$m = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{n\sum(x^2_i) - (\sum x_i)^2} $$ $$b = \frac{\sum(x_i^2)\sum y_i - \sum x_i \sum x_iy_i}{n\sum(x^2_i) - (\sum x_i)^2} $$ After obtaining $m$ and $b$, a standard deviation for the fit parameters can be obtained by calculating the differences of each $y_i$ value from the fit, $\delta y_i = y_i - (m x_i +b)$. From these differences, one calculates the standard deviation of the data from the fit line using: $$\sigma_y = \sqrt{\frac{\sum(\delta y_i)^2}{n-2}}$$ and then the standard deviation of the parameters using $$\sigma_m = \sigma_y \sqrt{\frac{n}{n\sum{x_i^2}-\left(\sum{x_i}\right)^2}}$$ and $$\sigma_b = \sigma_y \sqrt{\frac{\sum{x_i^2}}{n\sum{x_i^2}-\left(\sum{x_i}\right)^2}}$$ To include the measurement error, $dy_i$, in the fit one would divide the initial system of equations by $dy_i$, giving $$\frac{y_i}{dy_i} = m\frac{x_i}{dy_i} + b \frac{1}{dy_i},$$ then repeat Baird's derivation with weights $w_i = 1/dy_i^2$ to get the weighted fit parameters $$m = \frac{\sum \frac{1}{dy_i^2}\sum \frac{x_i y_i}{dy_i^2} - \sum \frac{x_i}{dy_i^2} \sum \frac{y_i}{dy_i^2}}{\sum \frac{1}{dy_i^2}\sum \frac{x^2_i}{dy_i^2} - \left(\sum \frac{x_i}{dy_i^2}\right)^2} $$ $$b = \frac{\sum \frac{x_i^2}{dy_i^2}\sum \frac{y_i}{dy_i^2} - \sum \frac{x_i}{dy_i^2} \sum \frac{x_iy_i}{dy_i^2}}{\sum \frac{1}{dy_i^2}\sum \frac{x^2_i}{dy_i^2} - \left(\sum \frac{x_i}{dy_i^2}\right)^2} $$ Notice that the $b\frac{1}{dy_i}$ term makes it so you cannot simply divide $x_i$ and $y_i$ by $dy_i$ (as pointed out in the comments below). Unfortunately, this does not propagate the measurement error into an error in the fit parameters.
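For concreteness, here is a sketch of the weighted straight-line fit with weights $w_i = 1/dy_i^2$, the standard chi-square minimization; the sample data below are invented:

```python
# Weighted least squares for y = m*x + b with weights w_i = 1/dy_i**2,
# minimizing chi-square = sum(((y_i - m*x_i - b) / dy_i)**2).
def weighted_fit(xs, ys, dys):
    w = [1.0 / dy ** 2 for dy in dys]
    S   = sum(w)
    Sx  = sum(wi * x for wi, x in zip(w, xs))
    Sy  = sum(wi * y for wi, y in zip(w, ys))
    Sxx = sum(wi * x * x for wi, x in zip(w, xs))
    Sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    delta = S * Sxx - Sx ** 2
    m = (S * Sxy - Sx * Sy) / delta
    b = (Sxx * Sy - Sx * Sxy) / delta
    return m, b

# With equal errors this reduces to the ordinary least squares line:
m, b = weighted_fit([0, 1, 2, 3], [1, 3, 5, 7], [1, 1, 1, 1])
print(m, b)   # 2.0 1.0  (the points lie exactly on y = 2x + 1)
```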
The current (I) - voltage (V) relationship of electrical components can often provide insight into how electronic devices are used. More specifically, many non-linear devices such as diodes and transistors are used in operating regions in which they behave like ideal components—such as current sources, voltage regulators, and resistors. An understanding of I-V curves often reveals how a device operates and how to operate it in a way that enables the required functionality. We'll begin by looking at how to obtain an I-V curve for any component.

Obtaining I-V Curves

Method 1: Voltage Sweeps

The current-voltage (I-V) relationship for a device is the current measured for a given voltage. For devices that do not supply power, I-V curves are obtained by using linear voltage sweeps. A voltage sweep involves varying the voltage linearly and measuring the corresponding output current. Because it is impossible to physically sweep through all the voltages in an instant, it is important to understand that these measurements are made with respect to time as well. Figure 1.1 illustrates the translation of a voltage sweep with respect to time (V vs t) onto the X-axis of the current-voltage graph (I vs V). It is important to understand that the V vs t information is implicitly present in the I vs V curve. The notion of time is relevant for components that respond to a change in voltage (such as a capacitor) rather than the instantaneous voltage (as with a resistor).

Figure 1.1 (a): A linear sweep of voltage (V) with respect to time (t); (b): the corresponding voltage sweep in the current (I) - voltage (V) curve.
If you have a device that supplies voltage or current, such as a battery, a solar panel, or a regular power supply, you cannot change the voltage across the device, because a specific voltage or current is being generated by the device. For these devices, I-V curves are obtained by load switching.

Method 2: Load Switching

Typically, a resistor is used as the load to measure the power delivered by a current or a voltage source, because resistors are linear devices that do not exhibit hysteresis, i.e., the operation of the resistor does not depend on its previous state. Because devices can operate with small values of resistance (1-10Ω) as well as large values of resistance (10-1000kΩ), the resistors are varied logarithmically, i.e., from 10 to 100 to 1000 and so on. The current supplied by the power supply is measured by an ammeter for each value of load resistance—shown in Figure 1.2(b)—and the voltage across the load is measured using a voltmeter—shown in Figure 1.2(c).

Figure 1.2 (a) Schematic circuit for load switching I-V curve measurements; shown here is an example of an ideal voltage source. The value of R_L is varied over a large range and for each value of resistance the voltage (b) and current (c) are measured.

Note that in Figure 1.2(c), the (base 10) logarithmic value of current decreases linearly. This is because a resistor is governed by Ohm's law, and this is an ideal voltage source at a fixed voltage V_S; as the resistance value increases logarithmically, the current value decreases logarithmically.
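As a rough numeric illustration (not from the original article), the load-switching procedure for an ideal source can be sketched by stepping the load resistance over decades and recording each (V, I) operating point:

```python
# Load-switching sketch for an ideal 10 V source (illustrative values only):
# step the load resistance over decades and record each (V, I) operating point.
V_S = 10.0                                  # ideal source voltage, volts
points = []
for exponent in range(1, 7):                # R_L = 10 ohm ... 1 Mohm
    R_L = 10.0 ** exponent
    I = V_S / R_L                           # Ohm's law gives the load current
    points.append((V_S, I))

# Every recorded point lies on the vertical line V = V_S, which is exactly
# the ideal-voltage-source I-V curve; log10(I) falls linearly with log10(R_L).
voltages = [v for v, _ in points]
```

This also shows why Figure 1.2(c) is a straight line on a log scale: each decade of resistance drops the current by exactly one decade.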
Load switching techniques are used to measure the I-V characteristics of devices and circuits that supply power, such as voltage regulator circuits, solar cells, and batteries.

I-V Curves of Ideal Components

Using linear voltage sweeps and load switching, we will now look at the I-V curves of ideal components. In general, if the device requires power to operate, the voltage sweep method is used. On the other hand, if the device acts as a source of power, the load switching method is used. Based on their basic definitions, we can derive the I-V curves of ideal passive components (resistors, capacitors, and inductors) using the concept of linear voltage sweeps. We will use the concept of load switching for the I-V curves of an ideal voltage source and an ideal current source.

Ideal Resistor

Let's start with one of the more familiar ideal components: the resistor. The resistor is a component that represents a linear relationship between voltage and current as dictated by Ohm's law, i.e., $$V=I \times R$$. The graphical representation on the I-V curve of the Ohm's law equation is a straight line passing through the origin, as shown in Figure 2.

Figure 2. The I-V curve of an ideal resistor is a straight line that passes through the origin.

Ideal Voltage Source

An ideal voltage source is a component that can provide a fixed voltage regardless of the current delivered to the load. For example, let's say a voltage source supplying 10V is connected across a resistor. If the value of the resistor is $$10k\Omega$$, then the current drawn by the resistor from the voltage supply will be dictated by Ohm's law, which is $$ I = \frac{V}{R} = \frac{10V}{10k\Omega} = 1mA$$. If the value of the resistor is 1Ω, then the current drawn will be 10A! A real voltage supply is limited with respect to the amount of current it can supply for a given voltage, but an ideal voltage source is not.
Therefore, the I-V curve for an ideal voltage supply will be a straight line parallel to the Y-axis (see Figure 3). An empirical I-V curve for a real voltage source would be obtained using the load-switching method.

Figure 3. The I-V curve of an ideal voltage source is a straight line parallel to the current (I) axis—i.e., regardless of the current passing through the device, the voltage will not change.

The Zener diode is a non-linear, passive device that is used as a voltage regulator when it is operated in reverse bias. The (idealized) I-V curve of a reverse-biased Zener diode shows that it stays at a particular voltage (determined by the manufacturing process) regardless of the current passing through it. We will be looking at the I-V curve of a Zener diode in a future article.

Ideal Current Source

An ideal current source is a component that can provide a fixed current regardless of the voltage across the component itself. In other words, a 5A ideal current source would deliver exactly 5A to a 1Ω load resistor or to a 1 kΩ resistor, even though the second resistor would generate a voltage drop of 5000V! This is highly impractical but, nonetheless, ideal current sources are useful tools in circuit analysis. The I-V curve for an ideal current source is a straight line parallel to the X-axis (see Figure 4). An empirical I-V curve for a real current source would be obtained using the load-switching method.

Figure 4. The I-V curve of an ideal current source is a straight line parallel to the voltage axis; i.e., the current flowing from the source is the same regardless of the voltage across it.
Though "current supplies" are not nearly as common as voltage supplies, many analog transistor circuits are biased using a constant-current source. Also, a MOSFET operating in the saturation region exhibits behavior similar to that of a (voltage-controlled) current source.

Ideal Capacitor

In a resistor, the voltage is determined by the resistance and the current flowing through the resistor. Capacitors and inductors are fundamentally different in that their current-voltage relationships involve a rate of change. In the case of a capacitor, the current through the capacitor at any given moment is the product of capacitance and the rate of change (i.e., the derivative with respect to time) of the voltage across the capacitor: $$I=C\cdot\frac{\text{d}V}{\text{d}t} $$ Because we are using a linear voltage sweep, the current through the capacitor is constant when the voltage is increasing or decreasing. When the voltage changes from a positive slope (shown in blue in Figure 5) to a negative slope (orange), the direction of the current reverses; this is represented in the current vs. time plot as a change from the positive-current section of the graph to the negative-current section of the graph.

Figure 5 (a) Linear voltage sweep and (b) the corresponding capacitor current vs. time.

The I-V relationship of an ideal capacitor is shown in Figure 6. The magnitude of the current is constant, but two horizontal lines are needed because the direction of the current changes depending on whether the voltage is moving from V1 to V2 or V2 to V1.
When the voltage has a positive rate of change, the current is positive (indicated by the blue arrowhead); when the voltage has a negative rate of change, the current is negative (indicated by the orange arrowhead).

Figure 6. I-V curve for an ideal capacitor based on the voltage sweep shown in Figure 5.

Ideal Inductor

The voltage across an inductor is the product of inductance and the rate of change of the current flowing through the inductor: $$V=L\cdot\frac{\text{d}I}{\text{d}t} $$ This means that the current is proportional to the integral of voltage, and this is what we see in the following plots. The current increases in magnitude as the (negative) area under the voltage curve increases. But when the voltage crosses the time axis, the positive area under the curve starts to balance out the negative area under the curve, and this causes the current magnitude to decrease toward zero.

Figure 7 (a) Linear voltage sweep and (b) the corresponding inductor current vs. time.

Note the difference between the capacitor and the inductor: With a capacitor, current is proportional to the derivative of voltage, and thus a linear voltage sweep translates to constant current. With an inductor, current is proportional to the integral of voltage, and thus a linear voltage sweep translates to a quadratic shape in the current vs. time plot. The I-V relationship of an ideal inductor is shown in Figure 8. The magnitude of the current gradually increases and then decreases as the voltage moves from V2 to V1 or from V1 to V2. The direction of the current is negative as the voltage moves from V1 to V2 and positive as the voltage moves from V2 to V1.

Figure 8. I-V curve for an ideal inductor based on the voltage sweep shown in Figure 7.
Summary

The table below summarizes some of the insights we have obtained by looking at the I-V curves of several ideal devices. In a future article, we will look at I-V curves of non-linear devices.

Device               | Requires Power? | I-V Method     | Description of the Graph               | Passes Through Origin?
Ideal Resistor       | Yes             | Voltage Sweep  | Straight line passing through origin   | Yes
Ideal Voltage Source | No              | Load Switching | Straight line parallel to current axis | No
Ideal Current Source | No              | Load Switching | Straight line parallel to voltage axis | No
Ideal Capacitor      | Yes             | Voltage Sweep  | Closed rectangle around origin         | No
Ideal Inductor       | Yes             | Voltage Sweep  | Closed parabolic loop around origin    | No
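As a supplementary check (not part of the original article), the capacitor and inductor behavior under a linear sweep can be reproduced numerically from the defining relations above; component values here are illustrative only:

```python
# Ideal capacitor (I = C*dV/dt) and ideal inductor (I = (1/L)*integral V dt)
# driven by a linear voltage sweep. Component values are illustrative only.
n, dt = 1000, 1e-4          # 1000 time steps of 0.1 ms
C, L = 1e-6, 1e-3           # 1 uF capacitor, 1 mH inductor
t = [k * dt for k in range(n + 1)]
V = [tk - 0.05 for tk in t]  # linear sweep, slope 1 V/s, crossing 0 V mid-sweep

# capacitor: constant current while the slope of V(t) is constant
I_cap = [C * (V[k + 1] - V[k]) / dt for k in range(n)]

# inductor: current magnitude grows while V < 0, then shrinks back toward
# zero as the positive area under V(t) balances the negative area
I_ind, acc = [], 0.0
for k in range(n):
    acc += V[k] * dt
    I_ind.append(acc / L)
```

The capacitor current comes out flat at C times the sweep slope, and the inductor current traces the quadratic rise-and-return described in the text.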
4.1.1. Center of Mass of a Collection of Particles So far we’ve only considered two cases - single particles on which a force is acting (like a mass on a spring), and pairs of particles exerting a force on each other (like gravity). What happens if more particles enter the game? Well, then we have to calculate the total force, by vector addition, and total energy, by regular addition. Let’s label the particles with a number \(\alpha\), then the total force is given by: \[F_{\text {total}}=\sum_{\alpha} \boldsymbol{F}_{\alpha}=\sum_{\alpha} m_{\alpha} \ddot{r}_{\alpha}=M \frac{\mathrm{d}^{2}}{\mathrm{d} t^{2}}\left(\frac{\sum_{\alpha} m_{\alpha} r_{\alpha}}{M}\right)=M \frac{\mathrm{d}^{2}}{\mathrm{d} t^{2}} r_{\mathrm{cm}}\] where we’ve defined the total mass \(M=\sum_\alpha m_\alpha\) and the center of mass \[r_{\mathrm{cm}}=\frac{1}{M} \sum_{\alpha} m_{\alpha} r_{\alpha} \label{cntrofmass}\] 4.1.2. Center of Mass of an Object Equation (\ref{cntrofmass}) gives the center of mass of a discrete set of particles. Of course, in the end, every object is built out of a discrete set of particles, its molecules, but summing them all is going to be a lot of work. Let’s try to do better. Consider a small sub-unit of the object of volume dV (much smaller than the object, but much bigger than a molecule). Then the mass of that sub-unit is \(dm=\rho dV\), where \(\rho\) is the density (mass per unit volume) of the object. Summation over all these masses gives us the center of mass of the object, by Equation (\ref{cntrofmass}). Now taking the limit that the volume of the sub-units goes to zero, this becomes an infinite sum over infinitesimal volumes - an integral.
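Before passing to the continuum limit, the discrete formula (\ref{cntrofmass}) can be checked directly; a minimal Python sketch (the function name is ours, chosen for illustration):

```python
def center_of_mass(masses, positions):
    """Discrete center of mass: r_cm = (1/M) * sum_a m_a * r_a."""
    M = sum(masses)
    dim = len(positions[0])
    return tuple(sum(m * r[i] for m, r in zip(masses, positions)) / M
                 for i in range(dim))

# two equal masses at x = 0 and x = 2: the center of mass sits at x = 1
rcm = center_of_mass([1.0, 1.0], [(0.0, 0.0), (2.0, 0.0)])
```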
So for the center of mass of a continuous object we find: \[r_{\mathrm{cm}}=\frac{1}{M} \int_{V} \rho \cdot r \mathrm{d} V \label{intcm}\] Note that in principle we do not even need to assume that the density \(\rho\) is constant - if it depends on the position in space, we can also absorb that in the discussion above, and end up with the same equation, but now with \(\rho (r)\). That will make the integral a lot harder to evaluate, but not necessarily impossible. Also note that the total mass M of the object is simply given by \(\rho \cdot V\), where V is the total volume, if the density is constant, and by \(\int_V \rho (r) dV\) otherwise. Therefore, if the density is constant, it drops out of Equation (\ref{intcm}), and we can rewrite it as \[r_{\mathrm{cm}}=\frac{1}{V} \int_{V} r \mathrm{d} V \quad \text { for constant density } \rho\] Unfortunately, many textbooks introduce the confusing concept of an infinitesimal mass element dm, instead of a volume element dV with mass \(\rho dV\). This strange habit often throws students off, and the concept is wholly unnecessary, so we won’t adopt it here. Equation (\ref{intcm}) holds for any continuous object, but it might be confusing if you consider a linear or planar object - as you may wonder how the density \(\rho\) and volume element dV are defined in one and two dimensions. There are two ways out. One is to say that all physical objects are three-dimensional - even a very thin stick has a cross section. If you say that cross section has area A (which is constant along the stick, or the thin stick approximation would be invalid), and the coordinate along the stick is x, the volume element simply becomes dV=Adx, and the integral in Equation (\ref{intcm}) reduces to a one-dimensional integral. You can approach two-dimensional objects in the same way, by giving them a small thickness \(\delta z\) and writing the volume element as \(dV=\delta z dA\).
Alternatively, you can define one- and two-dimensional analogs of the density: the mass per unit length \(\lambda\) and mass per unit area \(\sigma\), respectively. With those, the one- and two-dimensional equivalents of equation (\ref{intcm}) are given by \[x_{\mathrm{cm}}=\frac{1}{M} \int_{0}^{L} \lambda x \mathrm{d} x, \text { and } r_{\mathrm{cm}}=\frac{1}{M} \int_{A} \sigma \cdot r \mathrm{d} A \label{xcmrcm}\] where M is still the total mass of the object. 4.1.3. Worked example: center of mass of a solid hemisphere Solution By symmetry, the center of mass of a solid sphere must lie at its center. The center of mass of a hemisphere cannot be guessed so easily, so we must calculate it. Of course, it must still lie on the axis of symmetry, but to calculate where on that axis, we’ll use Equation \ref{xcmrcm}. To carry out the integral, we’ll make use of the symmetry the system still has, and chop our hemisphere up into thin slices of equal thickness dz, see Figure 4.1.1. The volume of such a slice will then depend on its position z, and be given by \(\mathrm{d} V=\pi r(z)^{2} \mathrm{d} z\), where r(z) is the radius at height z. Putting the origin at the bottom of the hemisphere, we easily obtain \(r(z)=\sqrt{R^{2}-z^{2}}\), where R is the radius of the hemisphere. The position vector r in Equation \ref{xcmrcm} simply becomes (0, 0, z), so we get: \[z_{\mathrm{cm}}=\frac{1}{\frac{2}{3} \pi R^{3}} \int_{0}^{R} z \pi\left(R^{2}-z^{2}\right) \mathrm{d} z=\frac{3}{2 R^{3}}\left[\frac{1}{2} z^{2} R^{2}-\frac{1}{4} z^{4}\right]_{0}^{R}=\frac{3}{8} R\] The center of mass of the solid hemisphere thus lies at \(r_{cm}=(0, 0, \frac{3R}{8})\)
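The slice integral can also be evaluated numerically as a sanity check; a short Python sketch (not part of the text) using the same decomposition \(\mathrm{d}V = \pi(R^2 - z^2)\,\mathrm{d}z\):

```python
# Numerical check of z_cm = 3R/8 for the solid hemisphere, using the same
# thin-slice decomposition dV = pi*(R^2 - z^2)*dz as in the text.
import math

R, n = 1.0, 100000
dz = R / n
moment = sum(z * math.pi * (R * R - z * z) * dz
             for z in ((k + 0.5) * dz for k in range(n)))   # midpoint rule
volume = (2.0 / 3.0) * math.pi * R ** 3
z_cm = moment / volume   # should approach 3/8 of R
```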
I'm seriously trying to figure out the exact same thing for my dissertation. I can easily solve for reservation (threshold) prices when offered prices are independent, but I haven't yet solved for the case of mean reversion. There's an example in Bertsekas (1987) page 83 with an autocorrelated asset sale model, but it's too brief for me to follow all the way. Here are my first steps. The asset must be sold before period $T$. We know the final reservation price is zero: $RP_{T} = 0$. In the next to last period, the agent compares the payoff from selling in period $T-1$ with waiting until period $T$. The value function is $J(T-1) = \max[P_{T-1},\beta E[P_T|P_{T-1}]]$, where $\beta$ is a discount factor. The threshold price at time $T-1$ is the value that makes the asset holder indifferent between selling in either of the two periods. Substituting the expected value of the OU process, $P_{T-1}= \beta(\mu+e^{-\eta}\left(P_{T-1}-\mu\right))$, where $\eta$ is the level of mean reversion. Solving for $P_{T-1}$ yields the reservation price: $RP_{T-1}=\frac{\beta \mu (1-e^{-\eta})}{1-\beta e^{-\eta}}$. (check the algebra, but I think it's correct). Then, I derived the remainder of the reservation prices using the equation $J(t) = \max[P_t,\beta E[J(t+1)|P_t]]$ where $E[J(t+1)|P_t] = \mbox{Pr}\left(P_{t+1}\geq RP_{t+1}\right)\times\left(E\left[P_{t+1}|P_{t+1}\geq RP_{t+1}\right]\right) + \mbox{Pr}\left(P_{t+1}<RP_{t+1}\right)\times\left(RP_{t+1}\right)$. For the OU process, $P_{t+s}|P_{t}\sim N\left(\mu+e^{-\eta s}\left(P_{t}-\mu\right),\frac{\sigma^{2}}{2\eta}\left(1-\exp\left(-2\eta s\right)\right)\right).$ I used the etruncnorm function from R's truncnorm package to calculate the expectations in the value equation. I have more details in my dissertation, pages 35-41: http://people.clemson.edu/~campbwa/dissertation/WAC_dissertation_3-15-2013.pdf I have derived a full set of reservation prices, but they're too high. If I shift them down in the simulation model, profits increase!
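For what it's worth, here is a self-contained Python sketch of the backward recursion described above (it uses the approximation from my value equation, where the continuation value below the threshold is replaced by $RP_{t+1}$ itself; all function names are mine):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def reservation_prices(T, beta, mu, eta, sigma):
    """Backward induction for threshold prices RP_t, t = 0..T-1, with RP_T = 0.
    The continuation value below the next-period threshold is approximated
    by RP_{t+1}, as in the value equation above."""
    m_fac = math.exp(-eta)                        # one-period mean reversion
    s = math.sqrt(sigma ** 2 / (2.0 * eta) * (1.0 - math.exp(-2.0 * eta)))

    def expected_next(p, rp_next):
        # E[max(P_{t+1}, rp_next) | P_t = p] for the one-step OU transition
        m = mu + m_fac * (p - mu)
        a = (rp_next - m) / s
        tail = 1.0 - norm_cdf(a)
        if tail < 1e-12:
            return rp_next
        upper_mean = m + s * norm_pdf(a) / tail   # truncated-normal mean
        return tail * upper_mean + (1.0 - tail) * rp_next

    rps = [0.0] * T
    rp_next = 0.0                                 # RP_T = 0
    for t in range(T - 1, -1, -1):
        # RP_t solves P = beta * E[J(t+1)|P]; bisection works because the
        # right-hand side grows in P with slope at most e^{-eta} < 1
        lo, hi = 0.0, 5.0 * mu + 10.0
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if mid < beta * expected_next(mid, rp_next):
                lo = mid
            else:
                hi = mid
        rps[t] = 0.5 * (lo + hi)
        rp_next = rps[t]
    return rps

rps = reservation_prices(T=5, beta=0.95, mu=10.0, eta=0.5, sigma=1.0)
```

With $\mu$ well above zero the last threshold agrees with the closed form $RP_{T-1}=\beta\mu(1-e^{-\eta})/(1-\beta e^{-\eta})$, since the truncation at zero is then negligible.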
I have been highlighting certain parts of text in order to make it easier for me to search through a long document and find parts that need to be expanded on or where some sort of work is left to do; however, I've been having issues with highlighting equations. I have a bit of a workaround (courtesy of this answer). Inserting this every time seems a bit cumbersome, so I am looking for something that will perform the intended operation below seamlessly.

MWE

\documentclass{article}
\usepackage{amsmath}
\usepackage{xcolor}
\usepackage{soul}
\newcommand{\hll}[1]{\colorbox{yellow}{$\displaystyle #1$}}
\begin{document}
{\color{red}\hl{Here is some text, and now we make the observation}
\[
\hll{\lim_{n\to\infty}\frac{1}{n}=0.}
\]}
\end{document}

I have tried using \colorbox, but only once because it didn't wrap the text in the way I expected (in fact, in that instance, no wrapping occurred whatsoever).

Update: Gonzalo Medina has provided a marvelous answer, though I'm not sure I was entirely clear above. The ideal answer to this question will have something that needs to be declared only once, just as the color of the font can be declared only once and pass through math mode, other environments such as lemmas, theorems, remarks, etc. without needing to be ended and declared again.
Initial Note

The highest priority input is RESET. If low, the FF is actively reset. But you have that tied high. So RESET isn't active.

1st Schematic Results

The next highest priority input is TRIG. But only if it is low (below its \$\frac13\$\$^\text{rd}\$ threshold.) If low, it actively sets the FF. If it is above that threshold, it isn't supposed to take priority and instead the FF is left either in its prior state or else, if THRES is high (above its \$\frac23\$\$^\text{rd}\$ threshold), the FF will be actively reset.

In your first schematic, the simulator will first find the DC steady-state conditions (unless you use UIC.) And this means that your RC tied to DISCH and THRES will immediately start out at \$V_\text{CC}\$. So this means THRES is high and will attempt to actively reset the output. But note that THRES is the lowest-priority in this regard. So when your input signal to TRIG goes low, it will take over as higher priority and will actively set the output despite THRES "suggesting" a reset. (TRIG takes priority over THRES.) You can see that dominance in your output, readily.

Your TRIG starts out high in your first simulation (the red trace, I believe.) So TRIG isn't taking priority. Instead, THRES is allowed to take over and therefore the FF is reset (note that at first the green trace is low) and DISCH is inactive (leaving the RC in the initial DC steady state condition.) However, when TRIG goes low, it takes over (higher priority) and therefore sets the FF (green trace goes immediately high) and also causes DISCH to become active and discharge the capacitor. Discharging the capacitor means that now THRES is low (and therefore now inactive.)

When TRIG goes high again, it no longer asserts its higher priority, leaving that to THRES. But THRES is still too low (the capacitor isn't yet charged enough), so it also cannot assert its lower priority, either. This leaves the FF where it was last at (high.)
So the output continues to be high for a little while, during which the resistor charges the capacitor upwards. Eventually, as you can see, THRES does reach the point where it becomes active and asserts a reset to the FF, causing the output to go low. But shortly after, your TRIG input goes low and actively asserts its dominance, causing the FF to be set and go back high. Which you observe. This repeats and completely explains the first simulation results.

Here's what I get in simulation:

The dark blue trace is the voltage on the capacitor that's part of the RC timing element. You can see that it does indeed rise to \$\frac23\$\$^\text{rd}\$ of \$V_\text{CC}\$ before the next change at the output happens. The other two traces are what you plotted, I believe. The above output traces demonstrate the discussion above is accurate.

2nd Schematic Results

Assuming your new schematic (except that I refused to use \$100\:\Omega\$ and used \$1\:\text{k}\Omega\$ in the collector of the BJT), the simulator will again first find the DC steady-state conditions. So the RC element tied to DISCH and THRES will immediately start out at \$V_\text{CC}\$, again. THRES is high and will attempt to actively reset the output. But you've inverted TRIG, which now starts out low (because your input is high and causing the BJT to actively pull TRIG low.) So TRIG takes priority over THRES and sets the FF. (DISCH is therefore inactive, so this leaves the capacitor at the fully charged DC steady state condition it started out at.) The output should be high.

When your input goes low, the BJT isn't active and the resistor pulls TRIG high and therefore inactive. Since the capacitor is still fully charged at this point, THRES can now take priority and it causes the output to be reset. The output should now be low and DISCH will now actively discharge the capacitor. As the capacitor voltage rapidly declines, THRES becomes inactive. But the FF state remains unchanged since TRIG is still inactive.
So the discharge of the capacitor is allowed to fully complete and the output remains low for this period. So far, and only so far, this matches your 2nd output.

When your input returns high, TRIG goes low and takes priority, forcing the FF to be set and the output to go high. DISCH becomes inactive and allows the capacitor to start charging. At first, THRES is inactive. But as the capacitor charges up, THRES may become active (depending on the RC time constant and your driving input rate.) However, none of this matters because THRES doesn't have priority over TRIG. So for the entire time that TRIG is active low (while the input is high), the output will remain set. But the capacitor will continue to charge, too. Now the behavior becomes more nuanced.

(1) If the RC time constant is such that the capacitor can charge sufficiently that THRES becomes active before your input changes to low again (causing TRIG to go high and inactive), then THRES will reset the FF as soon as your input changes, because TRIG is inactive and THRES can take over. So then you'd expect the output to immediately go low.

(2) If, however, the RC time constant is such that the capacitor cannot charge sufficiently that THRES becomes active before your input changes to low again, then THRES will not yet be active and so the FF will remain in its prior state (set) for a while. In this case, you would NOT expect the output to go immediately low. Instead, you'd expect the output to go low once the capacitor charges up enough to cause THRES to become active. (Assuming this happens fast enough -- before the next change of TRIG, you will see a stretched high at the output followed by a short low.)

Since your second output results are consistent with condition (1) above, I believe your RC time constant is too short in the second output case. I can't explain it any other way.
Before you insist otherwise, here's what I get in simulation using your schematic (with the above mentioned modification to the collector resistor) when I keep the same values for the RC timing element (\$R=47\:\text{k}\Omega\$ and \$C=10\:\mu\text{F}\$):

I'm sure you note that this is NOT at all what you show in your simulation (except for the first \$\frac23\$\$^\text{rd}\$ second, or so.) But it is entirely consistent with what I wrote above for case #2, which should be the results you see if your schematic is accurate and your simulator and models are performing correctly.

Here's what I get, though, with \$R=47\:\text{k}\Omega\$ and \$C=1\:\mu\text{F}\$ (reducing the time sufficiently that it can meet case #1 above):

Now, that does actually reflect what you show in your simulation. The datasheets I've read for the 555 are pretty clear on the descriptions and logic I applied above. So, from this I conclude that you must be in case #1, somehow, in your second schematic. I can read your second schematic and I can see that it asserts there is not the difference I say must be present. But things are what they are and I can't change that. And my simulator-generated outputs also support my conclusions, as well.
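As a back-of-the-envelope aside (mine, not from any datasheet), you can estimate how long the timing capacitor takes to climb from a full discharge to the \$\frac23\$ \$V_\text{CC}\$ threshold for both capacitor values discussed:

```python
import math

def time_to_two_thirds_vcc(R, C):
    """Time for an RC node charging from 0 V toward Vcc to first reach
    (2/3)*Vcc: solving Vcc*(1 - exp(-t/(R*C))) = (2/3)*Vcc gives t = R*C*ln(3).
    Assumes charging starts from a fully discharged capacitor."""
    return R * C * math.log(3.0)

t_slow = time_to_two_thirds_vcc(47e3, 10e-6)   # original RC pair
t_fast = time_to_two_thirds_vcc(47e3, 1e-6)    # reduced capacitor, 10x faster
```

Comparing these times against your input's switching rate is exactly the case #1 vs. case #2 distinction above.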
Axioms For * \begin{align} 1 + aa^* &\leq a^* \\ 1 + a^*a &\leq a^* \\ b + ax &\leq x \to a^*b \leq x \\ b + xa &\leq x \to ba^* \leq x \\ \end{align} Elementary Results \begin{align} a \leq b &\to a + c \leq b + c \\ a \leq b &\to ac \leq bc\, \wedge\, ca \leq cb \\ a \leq b &\to a^* \leq b^* \end{align} Problem Prove the following identity in a Kleene algebra using only the axioms and elementary results. $$(a + ab + b)^* = (a + b)^*$$ Solution: \begin{align} (a + b)^* &= (a + ab + b)^* \\ (a + b)^* &\leq (a + ab + b)^* \\ 1 + (a + b)(a + b)^* &\leq 1 + (a + ab + b)(a + ab + b)^* \\ (a + b)(a + b)^* &\leq (a + ab + b)(a + ab + b)^* \\ \end{align} Question So for them to be equal the sets should be contained in each other. At which point do I transition to an inequality? Is it right to say a $*$ cannot be removed since it has no inverse? Can I distribute into a $*$? Say $(a + b)(a + b)^*$ or do they need to have the same $*$ height? Some hints to get further would be greatly appreciated.
Consider the set of all integer linear combinations of permutation matrices of some fixed dimension. Is there a description of the set of unimodular matrices in this lattice?

If you take $\mathbb{Q}$-linear combinations of permutation matrices, this is a ring which is the product $M_1(\mathbb{Q})\times M_{n-1}(\mathbb{Q})$ of matrix rings, since the standard representation of the permutation group is a direct sum of the trivial representation plus an absolutely irreducible representation of dimension $n-1$. Hence the set of integral linear combinations is an order in this product algebra. Therefore, the unit group of this order will contain a subgroup of finite index in the group $\{1\}\times SL_{n-1}(\mathbb{Z})$, and in fact, will be commensurable to it.

The $\Bbb{Z}$-span $T_n$ of the order-$n$ permutation matrices consists of integer $n\times n$ matrices with the row and column sums equal to $k$ for some integer $k$. Thus it can be characterized as the subset (in fact, subring) of $M_n(\Bbb{Z})$ consisting of matrices $A$ such that $Av=kv$ and $A^t v=kv$ for some $k\in\Bbb{Z}$, where $v=(1,\dots,1)^t$ is the column with all entries $1$. Let $L\subset\Bbb{Z}^n$ be the sublattice of integer vectors with sum of the entries $0$, $L=\{u: u^t v=0\}$. Note that $\Bbb{Z}^n=L\oplus \Bbb{Z}v$ and $A\in T_n$ preserves this decomposition. Clearly, $A\in T_n$ is invertible only if $k=\pm 1$ and the restriction of $A$ to $L$ is invertible. Conversely, any automorphism of $L$ can be extended in exactly two ways to an invertible matrix acting by $\pm 1$ on $v$. Since $L$ has rank $n-1$, it follows that the group in question is the direct product ${\rm GL}_{n-1}(\Bbb{Z})\times\{\pm 1\}$, with the first factor represented by the automorphisms of $L$ and the second factor represented by the scalar matrices $\pm I_n$.
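A quick computational check of this characterization for $n=3$ (my own illustration, not part of the answer): every integer combination of permutation matrices has all row and column sums equal to the coefficient sum $k$.

```python
# Check, for n = 3, that any integer linear combination of permutation
# matrices has all row sums and column sums equal to the coefficient sum k.
from itertools import permutations
import random

n = 3
perm_mats = [[[1 if p[i] == j else 0 for j in range(n)] for i in range(n)]
             for p in permutations(range(n))]

random.seed(0)
coeffs = [random.randint(-5, 5) for _ in perm_mats]
k = sum(coeffs)

# the combination A = sum over P of c_P * P, entrywise
A = [[sum(c * P[i][j] for c, P in zip(coeffs, perm_mats)) for j in range(n)]
     for i in range(n)]

row_sums = [sum(row) for row in A]
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
```

This is exactly the statement $Av = kv$ and $A^t v = kv$ above, verified on a random element of $T_3$.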
The OP contains the specifications:

- the specifications do not immediately involve graph theory, on the surface,
- they reduce, after a little translation, into a common problem in graph theory (find a matching, find a colouring...),
- there should be a "haha!" effect for the mathematician, and a sigh of relief for the programmer.

The following is about as trivial as can be (even in a technical sense of 'trivial': it is based on what is sometimes presented in introductory graph theory lectures as the 'First Theorem of Graph Theory'), but it fits each of the three specifications above:

To find the number of undirected connections in a finite network of which you only have knowledge in the form of a photograph. (problem)

That is: imagine a situation in which a programmer faces the task of programming a camera-equipped machine to 'look' at such a photograph and correctly output the total number of connections. To make the problem description consistent with the 'worked example' below, imagine that the company running the search engine which kindly ran for me a Gilbert--Erdős–Rényi random graph simulation decides to empirically test that the number of connections shown on the screen of users equals the number of connections that the search engine returns as a numeral when asked to do so. For the sake of argument, suppose that the company will have none of 'formal verification' of code (though I think this would give greater security) but has resolved to test this very very empirically, by filming a screen. To reach usefully large sample sizes, they will have to teach a camera-equipped machine how to count the lines. Among the "first, naive algorithms that spring to mind to solve this (finite) problem" (to quote the OP) is an "awkward" one: to rasterize the picture, to systematically scan it, to code up some recognition-of-elongated-shapes, to keep track of which elongated shapes you have already counted, to finally somehow arrive at an estimate of the number of connections.
(Also note: this approach does not involve any necessary check of correctness, however weak, unlike the approach I am proposing; see the remark at the very end.) I think that, especially if the photograph is messy, the input data to the machine "[does] not immediately involve graph theory". Now comes the "little translation, into a common problem in graph theory", namely into what is arguably the most trivial of all graph theoretic problems: to find the number of edges of a specified graph. And this problem ("aha!" goes the mathematician, while the programmer heaves the required "sigh of relief") reduces to simply 'doing a sum' of natural numbers (plus a little pattern recognition, though not a recognition of elongated/curvilinear shapes anymore): the problem reduces to summing the degree sequence of the relevant graph. (The reason is what is whimsically called 'The Handshaking Lemma', or a little presumptuously called 'The First Theorem of Graph Theory' 1) More precisely, we idealize (and what an important, mathematical activity this is: idealizing sense-data) the situation, by imagining that there is an abstract graph. Now we know what to teach the machine: (0) identify all 'vertex-like' points of the photograph, (1) 'cookie-cutter' a small circle-shaped portion around each such 'point', (2) forget the vast majority of the 'photograph', (3) collect the 'cookies' (the order and position of the 'cookies' is irrelevant; any data-structure which keeps the set of the 'cookies' will be serviceable), (4) now solve a much easier pattern-recognition problem, on each of the 'cookies': to recognize the number of 'rays' in the 'star-like' shapes (there will be noise, yet I am confident that this task can be routinely done, reliably, by methods of Computational Homology, methods which were more-or-less made for doing precisely this: extracting meaningful integer-valued invariants from noisy black-and-white pictures.
If not, I am confident that this task will be routine for people working in machine-learning/pattern-recognition.) To summarize: where was the graph-theory here? On "the surface", a messy photograph of a tangle of connections in a network does not look like a graph. At the very least, I am sure that a person not knowing concepts of graph theory would, when having to teach a machine what to do with the sense-data fed to it, be more likely to code-up a 'rasterize-and-scan-the-whole-picture-and-invent-heuristics-to-recognize' approach than to take the graph-theoretically-informed 'cookie-cutter' approach. Worked Example. Let the given 'photograph' be the network-like part of the following photograph, taken from the webpage of a known search-engine, with the 'order' given to the engine shown in the first line of the image: Insert now an "aha!" and a "sigh of relief". No pattern-recognition of 'line-like' shapes is necessary; it suffices to identify the 'vertex-like' parts of the image, and then do the following: First the 'cookie-cutting' (here, I use counterclockwise ordering): Now follows a conversion of the 'gray-scale cookies' to something combinatorial (essentially: (0,1)-matrices, given as a black-and-white grid). The diaphanous blue regions do not carry any information; they associate each gray-scale 'observation' with the corresponding (0,1)-matrix, yet this association could be done by the mind of the observer unaided by the blue regions, for the correspondence is quite uniquely determined. These '0-1-matrices' weren't 'made up': I had the 'GIMP' software compute them for me, via the "Indexed Color Conversion" functionality, with the option "Use black and white (1-bit) palette" and with "Color-dithering: Floyd-Steinberg (normal)" turned on. What I give here should even be quite a 'reproducible experiment'. It should be possible to have a machine produce the same '0-1-matrices' from the gray-scale pictures.
Now insert here some expert advice of someone more versed in computational homology (or some other suitable method: something via eigenvalues of the black-and-white matrices perhaps (though these eigenvalues will not be real-valued; using some appropriate heat-flow?); or someone more versed in machine-learning/pattern-recognition algorithms 2) than the writer of these lines (who only had to learn computational homology for an exam a few years ago, yet does not work with it), telling us which "Sooo much time"-saving out-of-the-box routine we should use to count the number of 'rays' in the 'star-shaped' matrix-pictures, and now the following numbers come pouring forth: 3 2 6 5 3 4 4 5 7 4 1 Now the so-called First Theorem of Graph Theory completes the "reducing" of the problem to a "graph theory algorithm": it remains to calculate $\frac12\cdot (3 + 2 + 6 + 5 + 3 + 4 + 4 + 5 + 7 + 4 + 1) = \frac12\cdot 44 = 22$ to know (by the 'First Theorem of Graph Theory') that this is the number of edges (=connections) in the 'photograph' of the random graph. This was a reduction, of sorts. On which side of your "fine line" this is, I cannot know in advance. By the way, I agree with Dirk Liebhold's comment that this is a very context-dependent question; any answer will be 'tainted' by context. Addition. Another small contribution of graph theory in solving this problem: if the above sum (in the example it was 44) does not come out even, then something somewhere must have gone wrong. In that sense, the 'First Theorem of Graph Theory' also gives a very weak 'check'/'necessary condition' for the correct execution of the algorithm. One of course is not told by the criterion where and why an error has occurred, only that somewhere something has gone wrong. Also, if the sum comes out even, this, needless to say, does not imply that it is the correct one.
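The final counting step, together with the parity check mentioned in the Addition, can be sketched in a few lines of Python (the degree list is the one read off the eleven 'cookies' in the worked example):

```python
# The ray counts read off the eleven 'cookies' in the worked example.
degrees = [3, 2, 6, 5, 3, 4, 4, 5, 7, 4, 1]

total = sum(degrees)
# Handshaking Lemma check: the degree sum of any graph is even.
if total % 2 != 0:
    raise ValueError("odd degree sum: some ray count must have been misread")

edges = total // 2
print(edges)  # 22 for the worked example
```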
1 Incidentally, this commonly encountered designation is not only a little presumptuous, but also technically wrong if very strictly construed: in the strictest sense, 'Graph Theory' is the theory, in the model-theoretic sense, of the class of irreflexive symmetric relations on a set, and, as such, has a one-element signature consisting of a single binary relation symbol. The so-called 'First Theorem of Graph Theory', though, uses a larger signature: which exactly, depends on your formalization, but you can argue that it uses the signature $(\sim,\mathrm{deg},+,\Sigma,\lVert\cdot \rVert)$, where $\sim$ is the binary relation symbol, and $\mathrm{deg}$ and $\lVert\cdot\rVert$ are unary function symbols, the intended interpretation of the former being the degree of its argument, the one of the latter being the number of edges of its argument. 2 It would be a nice example of focused interdisciplinary work if someone who knows more about pattern recognition would leave a relevant comment on what is considered the best method to count the number of 'rays' in the 'star-shaped' '0-1-matrices'. One can for example first 'blow up' each pixel until the 0-th Betti number (=number of connected components) has become equal to 1, upon which, in a sense, the 'noise' is 'gone', and then one has to come up with a functorial method to 'count the number of bumps along the perimeter'. Ideally, the present answer would in the end feature a completely 'synthetic' method of counting the number of connections in a 'photograph' of a 'network', that is, by 'putting together' existing concepts.
A subquery is a query within another SQL query. A common subquery is embedded within the FROM clause, for example: SELECT ID FROM (SELECT * FROM SRC) AS T The subexpressions in the FROM clauses can be processed very well by the general SQL optimizers. But when it comes to subqueries in the WHERE clause or the SELECT lists, it becomes very difficult to optimize because subqueries can be anywhere in the expression, e.g. in the CASE...WHEN... clauses. The subqueries that are not in the FROM clause are categorized as "correlated subquery" and "uncorrelated subquery". A correlated subquery refers to a subquery with columns from outer references, for example: SELECT * FROM SRC WHERE EXISTS(SELECT * FROM TMP WHERE TMP.id = SRC.id) Uncorrelated subqueries can be pre-processed in the plan phase and be re-written to a constant. Therefore, this article is mainly focused on the optimization of correlated subqueries. Generally speaking, there are the following three types of subqueries: scalar subquery, e.g. (SELECT...) + (SELECT...); quantified comparison, e.g. T.a = ANY(SELECT...); existential test, e.g. NOT EXISTS(SELECT...), T.a IN (SELECT...). For the simple subqueries like the Existential Test, the common practice is to rewrite them to SemiJoin. But it is barely explored in the literature what the generic algorithm is and what kinds of subqueries need the correlation removed. For those subqueries whose correlation cannot be removed, the common practice in databases is to execute them in a Nested Loop, which is called correlated execution. TiDB inherits the subquery strategy in SQL Server [1]. It introduces the Apply operator to give subqueries an algebraic representation, which is called normalization, and then removes the correlation based on the cost information. Apply operator The reason why subqueries are difficult to optimize is that a subquery cannot be represented as a logical operator like Projection or Join, which makes it difficult to find a generic algorithm for subquery transformation.
So the first thing is to introduce a logical operator that can represent the subqueries: the Apply operator, which is also called d-Join [2]. The semantics of the Apply operator is: \[ R\ A^{\otimes}\ E = \bigcup\limits_{r\in R} (\{r\}\otimes E(r)) \] where E represents a parameterized subquery. In every execution, the Apply operator gets a record r from the R relation and sends r to E as a parameter for the ⊗ operation of r and E(r). ⊗ differs based on the query type; usually it is SemiJoin ∃. For the following SQL statement: SELECT * FROM SRC WHERE EXISTS(SELECT * FROM TMP WHERE TMP.id = SRC.id) the Apply operator representation is as follows: Because the operator above Apply is Selection, formally, it is: \[ {SRC}\ A^\exists\ \sigma_{SRC.id=TMP.id}{TMP} \] For an EXISTS subquery in the SELECT list, rows that cannot pass the SRC.id=TMP.id equation should still produce an output (false), so OuterJoin should be used: \[ \pi_C({SRC}\ A^{LOJ}\ \sigma_{SRC.id=TMP.id}{TMP}) \] The C Projection is to transform NULL to false. But the more common practice is: if the output of the Apply operator is directly used by the query predicate, it is converted to SemiJoin. The introduction of the Apply operator enables us to remove the correlation of the subqueries.
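As a toy illustration (not TiDB code; the data and names are made up), the semantics of the Apply operator with SemiJoin as ⊗ is exactly correlated execution: a nested loop that evaluates the parameterized subquery E(r) once per outer record and keeps r iff the result is non-empty.

```python
SRC = [{"id": 1}, {"id": 2}, {"id": 3}]
TMP = [{"id": 2}, {"id": 3}, {"id": 3}]

def E(r):
    # Parameterized subquery: SELECT * FROM TMP WHERE TMP.id = r.id
    return [t for t in TMP if t["id"] == r["id"]]

def apply_semijoin(R, E):
    # R A^∃ E: keep each record r iff E(r) returns at least one row.
    return [r for r in R if E(r)]

result = apply_semijoin(SRC, E)
print(result)  # the rows of SRC that have a match in TMP
```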
The two examples in the previous section can be transformed to: \[ {SRC}\ \exists_{\sigma_{SRC.id = TMP.id}}\ {TMP} \] and \[ {SRC}\ LOJ_{\sigma_{SRC.id = TMP.id}}\ {TMP} \] Other rules to remove correlation can be formally represented as: \(R\ A^{\otimes} E= R\ {\otimes}_{true}\ E\), if no parameters in E resolved from R (1) \(R\ A^{\otimes} (\sigma_pE) = R\ {\otimes}_p\ E\), if no parameters in E resolved from R (2) \(R\ A^\times\ (\sigma_pE)=\sigma_p(R\ A^\times\ E) \) (3) \(R\ A^\times\ (\pi_vE) = \pi_{v\bigcup\mathrm{cols}(R)}(R\ A^\times\ E) \) (4) \(R\ A^\times\ (E_1\ \bigcup\ E_2) = (R\ A^\times\ E_1)\ \bigcup\ (R\ A^\times\ E_2) \) (5) \(R\ A^\times\ (E_1\ - \ E_2) = (R\ A^\times\ E_1)\ - \ (R\ A^\times\ E_2) \) (6) \(R\ A^\times\ (E_1\ \times \ E_2) = (R\ A^\times\ E_1)\ \Join_{R.key}\ (R\ A^\times\ E_2) \) (7) \(R\ A^\times\ (\mathcal{G}_{A,F}E) = \mathcal{G}_{A\bigcup \mathrm{attr}(R),F} (R\ A^{\times}\ E) \) (8) \(R\ A^\times\ (\mathcal{G}^1_FE) = \mathcal{G}_{A\bigcup \mathrm{attr}(R),F'} (R\ A^{LOJ}\ E) \) (9) Based on the above rules, the correlation among all the SQL subqueries can be removed [3]. But rules (5), (6), and (7) are seldom used, because duplicating the common expression R increases the query cost. Take the following SQL statement as an example: SELECT C_CUSTKEY FROM CUSTOMER WHERE 1000000 < (SELECT SUM(O_TOTALPRICE) FROM ORDERS WHERE O_CUSTKEY = C_CUSTKEY) The two "CUSTKEY"s are the primary keys. When the statement is transformed to Apply, it is represented as: \[ \sigma_{1000000<X}(CUSTOMER\ A^\times\ \mathcal{G}^1_{X=SUM(O\_PRICE)}(\sigma_{O\_CUSTKEY=C\_CUSTKEY}ORDERS)) \] Because of the primary keys, according to rule (9), it can be transformed to the following: \[ \sigma_{1000000<X}\ \mathcal{G}_{C\_CUSTKEY,X = SUM(O\_PRICE)}(CUSTOMER\ A^{LOJ}\ \sigma_{O\_CUSTKEY=C\_CUSTKEY}ORDERS) \] Note: if ORDERS had no unique key, a \(\pi\) operator would have to be added to allocate one. \(\mathcal{G}^1_F\) denotes scalar aggregation (no GROUP BY), whose aggregation function F must produce an output even for an empty input.
Therefore, the LeftOuterJoin should be used and a NULL record should be the output when the right table is NULL. In this case, based on rule (2), Apply can be completely removed. The statement can be transformed to a SQL statement with join: \[ \sigma_{1000000<X}\mathcal{G}_{C\_CUSTKEY,X=SUM(O\_PRICE)}(CUSTOMER\ LOJ_{O\_CUSTKEY=C\_CUSTKEY}ORDERS) \] Furthermore, based on the simplification of OuterJoin, the statement can be simplified to: \[ \sigma_{1000000<X}\mathcal{G}_{C\_CUSTKEY,X=SUM(O\_PRICE)}(CUSTOMER\ \Join_{O\_CUSTKEY=C\_CUSTKEY}ORDERS) \] Theoretically, the above 9 rules have solved the correlation removal problem. But is correlation removal the best solution for all the scenarios? The answer is no. If the results of the SQL statement are small and the subquery can use the index, then the best solution is to use correlated execution. The Apply operator can be optimized to Segment Apply, which is to sort the data of the outer table according to the correlated key. In this case, the keys that are within one group won’t have to be executed multiple times. Of course, this is strongly related to the number of distinct values (NDV) of the correlated keys in the outer table. Therefore, the decision about whether to use correlation removal also depends on statistics. When it comes to this point, the regular optimizer is no longer applicable. Only the optimizer with the Volcano or Cascade Style can take both the logic equivalence rules and the cost-based optimization into consideration. Therefore, a perfect solution for subquery depends on an excellent optimizer framework. In the previous section, the final statement is not completely optimized. The aggregation function above OuterJoin and InnerJoin can be pushed down[4]. 
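The equivalence between the correlated form and the final decorrelated join form can be checked on toy data, e.g. with Python's sqlite3 (a sketch: the table and column names follow the CUSTOMER/ORDERS example above, and GROUP BY ... HAVING stands in for the σ over the aggregated X):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CUSTOMER (C_CUSTKEY INTEGER PRIMARY KEY);
CREATE TABLE ORDERS (O_ORDERKEY INTEGER PRIMARY KEY,
                     O_CUSTKEY INTEGER, O_TOTALPRICE INTEGER);
INSERT INTO CUSTOMER VALUES (1), (2), (3);
INSERT INTO ORDERS VALUES (10, 1, 900000), (11, 1, 200000),
                          (12, 2, 50000),  (13, 3, 1500000);
""")

# The original, correlated form (nested-loop execution).
correlated = con.execute("""
    SELECT C_CUSTKEY FROM CUSTOMER
    WHERE 1000000 < (SELECT SUM(O_TOTALPRICE) FROM ORDERS
                     WHERE O_CUSTKEY = C_CUSTKEY)
""").fetchall()

# The decorrelated form: join, group by the key, then filter.
decorrelated = con.execute("""
    SELECT C_CUSTKEY FROM CUSTOMER
    JOIN ORDERS ON O_CUSTKEY = C_CUSTKEY
    GROUP BY C_CUSTKEY
    HAVING 1000000 < SUM(O_TOTALPRICE)
""").fetchall()

print(sorted(correlated), sorted(decorrelated))  # the same customers
```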
If the OuterJoin cannot be simplified, the formal representation of the push-down rule is: \[ \mathcal{G_{A,F}}(S\ LOJ_p\ R)=\pi_C(S\ LOJ_p(\mathcal{G}_{A-attr(S),F}R)) \] The \(\pi_C\) above the Join is to convert NULL to the default value when the aggregation function accepts empty values. It is worth mentioning that the above formula can be applied only when the following three conditions are met (stated roughly): the columns of R used in the predicate p are among the GROUP BY columns; the key of the relation S is among the GROUP BY columns; the columns aggregated by F all come from R. It is very common to use aggregation functions together with subqueries. The general solution is to use the formal representation of Apply, remove the correlation based on the rules, and then apply the push-down rules of the aggregation function for further optimization. [1] C. Galindo-Legaria and M. Joshi. "Orthogonal optimization of subqueries and aggregation". In: Proc. of the ACM SIGMOD Conf. on Management of Data (2001), pp. 571–581. [2] D. Maier, Q. Wang and L. Shapiro. Algebraic unnesting of nested object queries. Tech. rep. CSE-99-013. Oregon Graduate Institute, 1999. [3] C. A. Galindo-Legaria. Parameterized queries and nesting equivalences. Tech. rep. MSR-TR-2000-31. Microsoft, 2001. [4] W. Yan and P.-A. Larson. "Eager aggregation and lazy aggregation". In: Proc. Int. Conf. on Very Large Data Bases (VLDB) (1995), pp. 345–357.
stat946w18/Implicit Causal Models for Genome-wide Association Studies (Revision as of 23:46, 20 April 2018) Contents 1 Introduction and Motivation 2 Implicit Causal Models 3 Implicit Causal Models with Latent Confounders 4 Likelihood-free Variational Inference 5 Empirical Study 6 Conclusion 7 Critique 8 References 9 Implicit causal model in Edward Introduction and Motivation There is currently much progress in probabilistic models which could lead to the development of rich generative models. These models have been combined with neural networks, implicit densities, and scalable Bayesian inference algorithms for very large data. However, most of them focus on capturing statistical relationships rather than causal relationships. Causal relationships are relationships where one event is a result of another event, i.e. a cause and effect. Causal models give us a sense of how manipulating the generative process could change the final results. Genome-wide association studies (GWAS) are examples of causal relationships. The genome is the sum of all the DNA in an organism and contains information about the organism's attributes. Specifically, GWAS is about figuring out how genetic factors cause disease among humans. Here the genetic factors we are referring to are single nucleotide polymorphisms (SNPs), and getting a particular disease is treated as a trait, i.e., the outcome.
In order to know why a disease develops and how to cure it, the causation between SNPs and diseases is investigated: first, predict which one or more SNPs cause the disease; second, target the selected SNPs to cure the disease. The figure below depicts an example Manhattan plot for a GWAS. Each dot represents an SNP. The x-axis is the chromosome location, and the y-axis is the negative log of the association p-value between the SNP and the disease, so points with the largest values represent strongly associated risk loci. This paper focuses on two challenges to combining modern probabilistic models and causality. The first one is how to build rich causal models with the specific needs of GWAS. In general, probabilistic causal models involve a function [math]f[/math] and a noise [math]n[/math]. For simplicity, we usually assume [math]f[/math] is a linear model with Gaussian noise. However, problems like GWAS require models with nonlinear, learnable interactions among the inputs and the noise. The second challenge is how to address latent population-based confounders. Latent confounders are a problem when we apply causal models, since we can neither observe them nor know their underlying structure. For example, in GWAS, both latent population structure, i.e., subgroups in the population with ancestry differences, and relatedness among sample individuals produce spurious correlations between SNPs and the trait of interest. Existing methods cannot easily accommodate such complex latent structure. For the first challenge, the authors develop implicit causal models, a class of causal models that leverages neural architectures with an implicit density. With GWAS, implicit causal models generalize previous methods to capture important nonlinearities, such as gene-gene and gene-population interaction.
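For concreteness, the y-axis values in such a Manhattan plot are simply $-\log_{10}$ of the per-SNP association p-values; a tiny sketch with made-up p-values:

```python
import math

p_values = [0.05, 1e-8, 0.5]  # made-up association p-values for three SNPs
heights = [-math.log10(p) for p in p_values]
print(heights)  # smaller p-values map to taller points in the plot
```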
Building on this, for the second challenge, they describe an implicit causal model that adjusts for population confounders by sharing strength across examples (genes). There has been an increasing number of works on causal models which focus on causal discovery and typically make strong assumptions, such as Gaussian processes on the noise variable or specific nonlinearities for the main function. Implicit Causal Models Implicit causal models are an extension of probabilistic causal models. Probabilistic causal models will be introduced first. Probabilistic Causal Models Probabilistic causal models are built from deterministic functions of noise and other variables. Consider background noise [math]\epsilon[/math], representing unknown background quantities which are jointly independent, and a global variable [math]\beta[/math], some function of this noise, where each [math]\beta[/math] and [math]x[/math] is a function of noise; [math]y[/math] is a function of noise and [math]x[/math]. The target is the causal mechanism [math]f_y[/math], so that the causal effect [math]p(y|do(X=x),\beta)[/math] can be calculated. [math]do(X=x)[/math] means that we specify a value of [math]X[/math] under the fixed structure [math]\beta[/math]. Following prior work, it is assumed that [math]p(y|do(x),\beta) = p(y|x, \beta)[/math]. An example of a probabilistic causal model is the additive noise model. [math]f(\cdot)[/math] is usually a linear function, or spline functions for nonlinearities. [math]\epsilon[/math] is assumed to be standard normal, as is the noise on [math]y[/math]. Thus the posterior [math]p(\theta | x, y, \beta)[/math] can be represented as where [math]p(\theta)[/math] is the prior, which is known. Then variational inference or MCMC can be applied to calculate the posterior distribution. Implicit Causal Models The difference between implicit causal models and probabilistic causal models is the noise variable.
Instead of using an additive noise term, implicit causal models directly take noise [math]\epsilon[/math] as input and output [math]x[/math] given parameter [math]\theta[/math]: [math] x=g(\epsilon | \theta), \epsilon \sim s(\cdot) [/math] The causal diagram has changed to: They used a fully connected neural network with a fair number of hidden units to approximate each causal mechanism. Below is the formal description: Implicit Causal Models with Latent Confounders Previously, it was assumed that the global structure is observed. Next, the unobserved scenario is considered. Causal Inference with a Latent Confounder Similar to before, the interest is the causal effect [math]p(y|do(x_m), x_{-m})[/math]. Here, the SNPs other than [math]x_m[/math] are also under consideration. However, it is confounded by the unobserved confounder [math]z_n[/math]. As a result, the standard inference method cannot be used in this case. The paper proposes a new method which includes the latent confounders. For each subject [math]n=1,…,N[/math] and each SNP [math]m=1,…,M[/math], the mechanism for the latent confounder [math]z_n[/math] is assumed to be known. SNPs depend on the confounders, and the trait depends on all the SNPs and the confounders as well. The posterior of [math]\theta[/math] needs to be calculated in order to estimate the mechanism [math]g_y[/math] as well as the causal effect [math]p(y|do(x_m), x_{-m})[/math], so that it can be explained how changes to each SNP [math]X_m[/math] cause changes to the trait [math]Y[/math]. Note that the latent structure [math]p(z|x, y)[/math] is assumed known. In general, causal inference with latent confounders can be dangerous: it uses the data twice, and thus it may bias the estimates of each arrow [math]X_m → Y[/math]. Why is this justified? This is answered below: Proposition 1.
Assume the causal graph of Figure 2 (left) is correct and that the true distribution resides in some configuration of the parameters of the causal model (Figure 2 (right)). Then the posterior [math]p(θ | x, y)[/math] provides a consistent estimator of the causal mechanism [math]f_y[/math]. Proposition 1 rigorizes previous methods in the framework of probabilistic causal models. The intuition is that as more SNPs arrive ("M → ∞, N fixed"), the posterior concentrates at the true confounders [math]z_n[/math], and thus we can estimate the causal mechanism given each data point's confounder [math]z_n[/math]. As more data points arrive ("N → ∞, M fixed"), we can estimate the causal mechanism given any confounder [math]z_n[/math], as there is an infinity of them. Implicit Causal Model with a Latent Confounder This section gives the algorithm and functions for implementing an implicit causal model for GWAS. Generative Process of Confounders [math]z_n[/math]. The distribution of confounders is set as standard normal, [math]z_n \in R^K[/math], where [math]K[/math] is the dimension of [math]z_n[/math]; [math]K[/math] should make the latent space as close as possible to the true population structure. Generative Process of SNPs [math]x_{nm}[/math]. The authors defined a [math]Binomial(2,\pi_{nm})[/math] distribution on [math]x_{nm}[/math], the count of the allele the given SNP codes for, and used logistic factor analysis to design the SNP matrix. A SNP matrix looks like this: Since logistic factor analysis makes strong assumptions, this paper suggests using a neural network to relax these assumptions. This renders the outputs to be a full [math]N*M[/math] matrix due to the variables [math]w_m[/math], which act like principal components in PCA. Here, [math]\phi[/math] has a standard normal prior distribution.
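A stdlib-only sketch of this generative process (toy sizes; the linear `score` below is a stand-in for the paper's logistic factor analysis, which the neural network would replace):

```python
import math
import random

random.seed(0)
N, M, K = 4, 5, 2  # toy sizes: individuals, SNPs, latent dimensions

# Confounders z_n and per-SNP factors w_m, both standard normal.
z = [[random.gauss(0, 1) for _ in range(K)] for _ in range(N)]
w = [[random.gauss(0, 1) for _ in range(K)] for _ in range(M)]

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def score(zn, wm):
    # Linear score standing in for logistic factor analysis / the NN.
    return sum(a * b for a, b in zip(zn, wm))

# Each genotype x_nm ~ Binomial(2, pi_nm): two Bernoulli draws.
x = [[sum(random.random() < sigmoid(score(z[n], w[m])) for _ in range(2))
      for m in range(M)] for n in range(N)]
print(x)  # an N x M matrix of genotypes in {0, 1, 2}
```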
The weights [math]w[/math] and biases [math]\phi[/math] are shared over the [math]m[/math] SNPs and [math]n[/math] individuals, which makes it possible to learn nonlinear interactions between [math]z_n[/math] and [math]w_m[/math]. Generative Process of Traits [math]y_n[/math]. Previously, each trait was modeled by a linear regression. This also has very strong assumptions on SNPs, interactions, and additive noise. It can also be replaced by a neural network which outputs only a scalar. Likelihood-free Variational Inference Calculating the posterior of [math]\theta[/math] is the key to applying the implicit causal model with latent confounders. It can be reduced to However, with implicit models, integrating over a nonlinear function is intractable. The authors applied likelihood-free variational inference (LFVI). LFVI proposes a family of distributions over the latent variables. Here the variables [math]w_m[/math] and [math]z_n[/math] are all assumed to be Normal. For LFVI applied to GWAS, an algorithm similar to the EM algorithm is used: Empirical Study The authors performed simulations on 100,000 SNPs, 940 to 5,000 individuals, and across 100 replications of 11 settings. Four methods were compared: implicit causal model (ICM); PCA with linear regression (PCA); a linear mixed model (LMM); logistic factor analysis with inverse regression (GCAT). The feedforward neural networks for traits and SNPs are fully connected with two hidden layers, using the ReLU activation function and batch normalization. Simulation Study Based on real genomic data, a true model is applied to generate the SNPs and traits for each configuration. The following datasets and models are used in this simulation study: HapMap [Balding-Nichols model] 1000 Genomes Project (TGP) [PCA] Human Genome Diversity Project (HGDP) [PCA] HGDP [Pritchard-Stephens-Donelly model] A latent spatial position of individuals for population structure [spatial] The table shows the prediction accuracy.
The accuracy is calculated as the number of true positives divided by the number of true positives plus false positives. True positives measure the proportion of positives that are correctly identified as such (e.g. the percentage of SNPs which are correctly identified as having a causal relation with the trait). In contrast, false positives are SNPs reported as having a causal relation with the trait when they do not. The closer the rate is to 1, the better the model, since false positives are wrong predictions. The result represented above shows that the implicit causal model has the best performance among these four models in every situation. In particular, other models tend to do poorly on PSD and Spatial when [math]a[/math] is small, but the ICM achieved a significantly high rate. The only method comparable to ICM is GCAT, when applied to simpler configurations. Real-data Analysis They also applied ICM to a GWAS of Northern Finland Birth Cohorts, which measures 10 metabolic traits and contains 324,160 SNPs and 5,027 individuals. The data came from the database of Genotypes and Phenotypes (dbGaP), and the same preprocessing as Song et al. was used. Ten implicit causal models were fitted, one for each trait to be modeled. For each of the 10 implicit causal models, the dimension of the confounders was set to six, the same as what was used in the paper by Song et al. The SNP network used 512 hidden units in both layers, and the trait network used 32 and 256. Results for comparable models are given in Table 2. The numbers in the above table are the number of significant loci for each of the 10 traits. The numbers for other methods, such as GCAT, LMM, PCA, and "uncorrected" (association tests without accounting for hidden relatedness of study samples), are obtained from other papers. By comparison, the ICM reached the level of the best previous model for each trait.
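The accuracy measure described above is simply the precision TP/(TP+FP); as a one-line sketch (the counts are made up):

```python
def precision(true_positives, false_positives):
    # TP / (TP + FP): fraction of reported causal SNPs that are real.
    return true_positives / (true_positives + false_positives)

print(precision(45, 5))  # 0.9
```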
Conclusion This paper introduced implicit causal models in order to account for nonlinear, complex causal relationships, and applied the method to GWAS. It can not only capture important interactions between genes within an individual and between genes and the population, but can also adjust for latent confounders by taking the latent variables into account in the model. Through the simulation study, the authors showed that the implicit causal model could beat other methods by 15-45.3% on a variety of datasets with variations on parameters. The authors also believe this GWAS application is only the start of the usage of implicit causal models. They suggest that implicit causal models might also be successfully used in the design of dynamic theories in high-energy physics or for modeling discrete choices in economics. Critique This paper is an interesting and novel work. The main contribution of this paper is to connect statistical genetics and machine learning methodology. The method is technically sound and does indeed generalize techniques currently used in statistical genetics. The neural network used in this paper is a very simple feed-forward 2-hidden-layer neural network, but the idea of where to use the neural network is crucial and might be significant in GWAS. It has limitations as well. The empirical example in this paper is too easy, and far from a realistic situation. Despite the simulation study showing some competitive results, the Northern Finland Birth Cohort data application did not demonstrate the advantage of using the implicit causal model over the previous methods, such as GCAT or LMM. Another limitation concerns linkage disequilibrium, as the authors state as well. SNPs are not completely independent of each other; usually they are correlated when the alleles are at close loci. The authors did not consider this complex case; rather, they only considered the simplest case where all the SNPs are assumed to be independent.
Furthermore, one SNP may not have enough power to explain the causal relationship. Recent papers indicate that causation of a trait may involve multiple SNPs. This could be future work as well. References Tran D, Blei D M. Implicit Causal Models for Genome-wide Association Studies. arXiv preprint arXiv:1710.10742, 2017. Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Bernhard Schölkopf. Nonlinear causal discovery with additive noise models. In Neural Information Processing Systems, 2009. Alkes L Price, Nick J Patterson, Robert M Plenge, Michael E Weinblatt, Nancy A Shadick, and David Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909, 2006. Minsun Song, Wei Hao, and John D Storey. Testing for genetic associations in arbitrarily structured populations. Nature Genetics, 47(5):550–554, 2015. Dustin Tran, Rajesh Ranganath, and David M Blei. Hierarchical implicit models and likelihood-free variational inference. In Neural Information Processing Systems, 2017.
I'm trying to figure out the elasticity of substitution between input $s$ and input $v$. I know that the marginal rate of substitution between these two inputs is $\frac{v^2}{s(v+k)}$, where $k$ is another input. Then, the elasticity of substitution is $$E_{sv}=\dfrac{d\ln\big(v/s\big)}{d\ln\bigg(\frac{v^2}{s(v+k)}\bigg)}$$ $$=\dfrac{d\ln\big(v\big)}{d\ln\bigg(\frac{v^2}{s(v+k)}\bigg)}-\dfrac{d\ln\big(s\big)}{d\ln\bigg(\frac{v^2}{s(v+k)}\bigg)}$$ But I'm not sure how to proceed to get the input elasticity of substitution between factors $s$ and $v$. Do I need to know the prices of both inputs?
Equation Maker allows you to typeset equations using LaTeX syntax. You can drag and drop the equations into almost any Mac application (including Pages, Numbers, Keynote, and Microsoft Word). Equation Maker Overview Helpful tips Use PDF as the export format for the highest quality resolution. Use PNG only in cases where PDF is not supported. There are two ways to save equations. Save the equation in the left-hand side table. Save the equation to a PDF file by dragging the equation to the Desktop/Finder or selecting File->Export in the menu. You can then restore the equation by dragging the PDF file onto the equation view. To insert plain text, use the \text{ text here } command. To include inline math inside a text command, surround the math with "$" characters. Example: \text{if and only if $x > 0$} For multi-line equations, use the \begin{align} ... \end{align} environment. The characters $, %, &, {, } and _ have special meaning in LaTeX and are reserved. To use these actual characters, precede them with a backslash (escape character): \. Example: \% prints the percentage sign. Special functions such as the trigonometric functions can be used as commands. Examples: \sin{\theta}, \exp(x) Using LaTeX code instead of in-line equation editing (such as Microsoft Equation Editor) may seem strange if you are new to it. However, once you learn the common commands, using them is faster than clicking through a palette menu. Equation Maker Interface Typesetting equations Enter your LaTeX equation code into the Editor. As you type, your equation will render in the Equation View. Matching brackets are highlighted to aid in editing. To save your equations, click the Save button in the toolbar. You can select a saved equation in the Saved Equations table. The Palette To view the palette, click the Palette button in the toolbar. The palette helps in recalling the LaTeX commands for common math elements, including math expressions, symbols, Greek letters, character decorations and arrows.
Click a button in the palette to insert the corresponding command in the Editor. Changing Equation Color Select the Color button in the toolbar to change the color of the equation. Zooming You can zoom the Equation View in and out using the slider on the toolbar. Exporting Equations To export an equation for use in another application, drag it from the Equation View and drop it into the application. To save as a file, drag and drop it onto the desktop or in a Finder window or select File->Export from the menu. You can export it as either a PDF or PNG. Choose the format using the Export Format popup button on the toolbar. Note that PDF exports a higher resolution image. Exporting to PNG is useful when PDF is not supported (for example, for inclusion on a webpage). Saving Equations Save an equation by clicking the Save button in the toolbar. This will add the equation to the Saved Equations table on the left side of the application's main window. You can reuse the equation later by selecting it in the table. Use the delete button to delete it from the Saved Equations table. Undo/redo (see Edit menu or Command-Z) is supported.
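For example, combining several of the tips above (the align environment, \text with inline math, and an escaped character), you could enter the following into the Editor:

```latex
\begin{align}
  f(x) &= \int_0^x e^{-t^2}\, dt \\
  \text{error} &\le 5\% \quad \text{if and only if $x > 0$}
\end{align}
```

The & characters mark the alignment point on each line, and \\ starts a new line.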
Let us consider a (not necessarily finite) Coxeter group $W$ generated by a finite set of involutions $S=\{s_1,...,s_n\}$ subject (as usual) to the relations $(s_is_j)^{m_{i,j}}=1$ with $m_{i,j}=m_{j,i}$ and $m_{i,j}=1$ if and only if $i=j$ (if necessary you may also assume that $m_{i,j}<\infty$ for all $i,j$ or even that $W$ is an affine reflection group). Let $P\leq W$ be a subgroup generated by all but one of the $s_i$, say wlog $P=\langle s_1,...,s_{n-1}\rangle$. I am interested in the centralizer of $s_n$ in $P$. In particular I would like to know if $C_P(s_n)=C_W(s_n) \cap P=\langle s_i~|~ 1\leq i\leq n-1, m_{i,n}=2\rangle=:Z$ always holds. Obviously this is true if $n=2$ and I believe (though I have not written it down rigorously) I can prove it for reflection groups of type $A_n$ by using the standard isomorphism to $S_{n+1}$. On the other hand the centralizer of $s_n$ in $W$ is not necessarily a standard parabolic subgroup (look at the dihedral group of order $8$ for example). There are some results on centralizers of reflections in Coxeter groups and on normalizers/centralizers of parabolic subgroups (which is the same in this special case) to be found in the literature but most deal with the centralizer in $W$. In principle it should be possible to obtain the centralizer in $P$ from these results by simply taking the intersection but the results I found so far are not explicit/simple enough for this to be a feasible solution. Here are some thoughts so far: I can show that elements of $C_P(s_n)$ of length $1$ or $2$ already lie in $Z$ (the case of length $1$ being trivial) and that elements of $C_P(s_n)$ of length $3$ where all three occurring simple reflections are pairwise distinct already belong to $Z$. On the other hand look at $s_1s_2s_1 \in P$ which centralizes $s_n$ if and only if $s_2$ centralizes $s_1s_ns_1$.
I don't see any reason why this should not be the case, so I tried constructing a counterexample consisting of $s_1,s_2$ and $s_3$ such that $s_1,s_2$ do not commute and $s_1,s_3$ do not commute but $s_2$ and $s_1s_3s_1$ do. Any ideas on how to do that? Edit: I should note that I already posted this question to math.StackExchange (https://math.stackexchange.com/questions/1193740) but did not get any helpful feedback. Edit 2: Regarding the question whether $s_1s_2s_1$ can centralize $s_3$ (all reflections pairwise distinct; $s_1$ neither centralizing $s_2$ nor $s_3$) in the case of $W$ being an affine reflection group, I did a case-by-case check on the possible Dynkin diagrams. The cases to consider are $A_3,B_3,\tilde{A_2},\tilde{B_2}$ and $\tilde{G_2}$ and using the standard representation on the root space I found that $s_1s_2s_1$ never centralizes $s_3$. Edit 3: Here is a proof in the case that $m_{i,n} \neq 3$ for all $1 \leq i \leq n-1$: Assume $C_P(s_n) \neq Z$ and take an element $w \in C_P(s_n) - Z$ of minimal length. Then each reduced expression for $w$ neither starts nor ends with one of the simple reflections in $Z$. Let $w=s_{i_1}...s_{i_r}$ be such a reduced expression. Since $s_{i_1}...s_{i_r}s_ns_{i_r}...s_{i_1}=s_n$ there is a sequence of braid- and nil-moves that reduces $s_{i_1}...s_{i_r}s_ns_{i_r}...s_{i_1}$ to $s_n$. Since $m_{i_r,n} >3$ we cannot start with a braid- or nil-move involving $s_n$ and since we chose a reduced expression for $w$ there are certainly no nil-moves possible at all. Hence all we can start with is a braid-move in the reduced expression for $w$ (or $w^{-1}$). But after finitely many of such braid-moves the expression we get for $w$ still ends with a simple reflection $s_{i_k}$ which does not commute with $s_n$ (since this would yield an element of shorter length in $C_P(s_n) - Z$).
Furthermore $m_{i_k,n} \neq 3$, so we are still unable to perform a braid-move involving $s_n$, and we still have a reduced expression for $w$, so there are no possible nil-moves. In conclusion: after finitely many steps we will never have performed any nil-moves, hence we cannot reduce $ws_nw^{-1}$ to $s_n$, which is a contradiction, so such a $w$ does not exist. I hope one can use an analogous argument in the case $m_{i_r,n}=3$.
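For what it's worth, the smallest interesting type $A$ case can be checked by brute force via the isomorphism $W \cong S_{n+1}$ mentioned above. The following Python sketch (helper names are mine) verifies $C_P(s_3) = Z = \langle s_1 \rangle$ in type $A_3$, where $W \cong S_4$ and $P = \langle s_1, s_2 \rangle$:

```python
from itertools import permutations

# Permutations as tuples p, where p[i] is the image of i; compose(p, q) = p∘q.
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

n = 4  # type A_3: W ≅ S_4, with simple reflections s_i = (i, i+1)

def s(i):
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

# P = <s_1, s_2> is the copy of S_3 fixing the last point
P = [p for p in permutations(range(n)) if p[n - 1] == n - 1]

s3 = s(3)
centralizer = sorted(p for p in P if compose(p, s3) == compose(s3, p))

# Z = <s_1>, since m_{1,3} = 2 while m_{2,3} = 3
Z = sorted([tuple(range(n)), s(1)])
print(centralizer == Z)  # True
```

The same brute-force loop works for any finite case small enough to enumerate, e.g. $B_3$ via signed permutations.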
Introduction For two identical particles confined to a one-dimensional box, we established earlier that the normalized two-particle wavefunction \(\psi(x_1,x_2)\), which gives the probability of finding simultaneously one particle in an infinitesimal length \(dx_1\) at \(x_1\) and another in \(dx_2\) at \(x_2\) as \(|\psi(x_1,x_2)|^2dx_1dx_2\), only makes sense if \(|\psi(x_1,x_2)|^2=|\psi(x_2,x_1)|^2\), since we don’t know which of the two indistinguishable particles we are finding where. It follows from this that there are two possible wave function symmetries: \(\psi(x_1,x_2)=\psi(x_2,x_1)\) or \(\psi(x_1,x_2)=-\psi(x_2,x_1)\). It turns out that if two identical particles have a symmetric wave function in some state, particles of that type always have symmetric wave functions, and are called bosons. (If in some other state they had an antisymmetric wave function, then a linear superposition of those states would be neither symmetric nor antisymmetric, and so could not satisfy \(|\psi(x_1,x_2)|^2=|\psi(x_2,x_1)|^2\).) Similarly, particles having antisymmetric wave functions are called fermions. (Actually, we could in principle have \(\psi(x_1,x_2)=e^{i\alpha}\psi(x_2,x_1)\), with \(\alpha\) a constant phase, but then we wouldn’t get back to the original wave function on exchanging the particles twice. Some two-dimensional theories used to describe the quantum Hall effect do in fact have excitations of this kind, called anyons, but all ordinary particles are bosons or fermions.) To construct wave functions for three or more fermions, we assume first that the fermions do not interact with each other, and are confined by a spin-independent potential, such as the Coulomb field of a nucleus. 
The Hamiltonian will then be symmetric in the fermion variables, \[ H=\vec{p}^2_1/2m+\vec{p}^2_2/2m+\vec{p}^2_3/2m+\dots +V(\vec{r}_1)+V(\vec{r}_2)+V(\vec{r}_3)+\dots \tag{10.4.1}\] and the solutions of the Schrödinger equation are products of eigenfunctions of the single-particle Hamiltonian \(H=\vec{p}^2/2m+V(\vec{r})\). However, these products, for example \(\psi_a(1)\psi_b(2)\psi_c(3)\), do not have the required antisymmetry property. Here \(a,b,c,\dots\) label the single-particle eigenstates, and \(1, 2, 3, \dots\) denote both space and spin coordinates of single particles, so 1 stands for \((\vec{r}_1,s_1)\). The necessary antisymmetrization for the particles 1, 2 is achieved by subtracting the same product wave function with the particles 1 and 2 interchanged, so \(\psi_a(1)\psi_b(2)\psi_c(3)\) is replaced by \(\psi_a(1)\psi_b(2)\psi_c(3)-\psi_a(2)\psi_b(1)\psi_c(3)\), ignoring overall normalization for now. But of course the wave function needs to be antisymmetrized with respect to all possible particle exchanges, so for 3 particles we must add together all 3! permutations of 1, 2, 3 in the state \(a,b,c\) with a factor -1 for each particle exchange necessary to get to a particular ordering from the original ordering of 1 in \(a\), 2 in \(b\), and 3 in \(c\). In fact, such a sum over permutations is precisely the definition of the determinant, so, with the appropriate normalization factor: \[ \psi_{abc}(1,2,3)=\frac{1}{\sqrt{3!}}\begin{vmatrix} \psi_a(1)& \psi_b(1)& \psi_c(1) \\ \psi_a(2)& \psi_b(2)& \psi_c(2) \\ \psi_a(3)& \psi_b(3)& \psi_c(3) \end{vmatrix} \tag{10.4.2}\] where \(a,b,c\) label three (different) quantum states and 1, 2, 3 label the three fermions. The determinantal form makes clear the antisymmetry of the wave function with respect to exchanging any two of the particles, since exchanging two rows of a determinant multiplies it by -1. 
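This antisymmetry is easy to see numerically. A minimal sketch, assuming particle-in-a-box orbitals on a unit interval (a toy model, not any specific atom):

```python
import numpy as np
from math import factorial

def phi(n, x):
    # Single-particle eigenfunctions of a 1D box of unit length (toy model)
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

def slater(xs, ns=(1, 2, 3)):
    # Rows label particles, columns label states, as in Eq. (10.4.2)
    M = np.array([[phi(n, x) for n in ns] for x in xs])
    return np.linalg.det(M) / np.sqrt(factorial(len(ns)))

x = [0.2, 0.5, 0.7]
psi = slater(x)

# Antisymmetry: exchanging particles 1 and 2 flips the sign
print(np.isclose(slater([x[1], x[0], x[2]]), -psi))  # True

# A repeated single-particle state makes the determinant vanish
print(np.isclose(slater(x, ns=(1, 1, 2)), 0.0))  # True
```

The second check anticipates the Pauli principle: two identical columns force the determinant to zero.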
We also see from the determinantal form that the three states \(a,b,c\) must all be different, for otherwise two columns would be identical, and the determinant would be zero. This is just Pauli’s Exclusion Principle: no two fermions can be in the same state. Although these determinantal wave functions (sometimes called Slater determinants) are only strictly correct for noninteracting fermions, they are a useful beginning in describing electrons in atoms (or in a metal), with the electron-electron repulsion approximated by a single-particle potential. For example, the Coulomb field in an atom, as seen by the outer electrons, is partially shielded by the inner electrons, and a suitable \(V(r)\) can be constructed self-consistently, by computing the single-particle eigenstates and finding their associated charge densities. Space and Spin Wave Functions Suppose we have two electrons in some spin-independent potential \(V(r)\) (for example in an atom). We know the two-electron wave function is antisymmetric. Now, the Hamiltonian has no spin-dependence, so we must be able to construct a set of common eigenstates of the Hamiltonian, the total spin, and the \(z\)- component of the total spin. For two electrons, there are four basis states in the spin space. The eigenstates of \(S\) and \(S_z\) are the singlet state \[ \chi_S(s_1,s_2) = |S_{tot}=0,S_z=0\rangle = (1/\sqrt{2})(|\uparrow\downarrow\rangle -|\downarrow\uparrow\rangle ) \tag{10.4.3}\] and the triplet states \[ \chi^1_T(s_1,s_2) = |1,1\rangle = |\uparrow\uparrow\rangle ,\;\; |1,0\rangle = (1/\sqrt{2})(|\uparrow\downarrow\rangle +|\downarrow\uparrow\rangle ),\;\; |1,-1\rangle =|\downarrow\downarrow\rangle \tag{10.4.4}\] where the first arrow in the ket refers to the spin of particle 1, the second to particle 2. It is evident by inspection that the singlet spin wave function is antisymmetric in the two particles, the triplet symmetric. 
The total wave function for the two electrons in a common eigenstate of \(S, S_z\) and the Hamiltonian \(H\) has the form: \[ \Psi(\vec{r}_1, \vec{r}_2,s_1,s_2)=\psi(\vec{r}_1, \vec{r}_2)\chi(s_1,s_2) \tag{10.4.5}\] and \(\Psi\) must be antisymmetric. It follows that a pair of electrons in the singlet spin state must have a symmetric spatial wave function, \(\psi(\vec{r}_1, \vec{r}_2)=\psi(\vec{r}_2, \vec{r}_1),\) whereas electrons in the triplet state, that is, with their spins parallel, have an antisymmetric spatial wave function. Dynamical Consequences of Symmetry This overall antisymmetry requirement actually determines the magnetic properties of atoms. The electron’s magnetic moment is aligned with its spin, and even though the spin variables do not appear in the Hamiltonian, the energy of the eigenstates depends on the relative spin orientation. This arises from the electrostatic repulsion energy between the electrons. In the spatially antisymmetric state, the two electrons have zero probability of being at the same place, and are on average further apart than in the spatially symmetric state. Therefore, the electrostatic repulsion raises the energy of the spatially symmetric state above that of the spatially antisymmetric state. It follows that the lower energy state has the spins pointing in the same direction. This argument is still valid for more than two electrons, and leads to Hund’s rule for the magnetization of incompletely filled inner shells of electrons in transition metal atoms and rare earths: if the shell is half filled or less, all the spins point in the same direction. This is the first step in understanding ferromagnetism. Another example of the importance of overall wave function antisymmetry for fermions is provided by the specific heat of hydrogen gas. 
This turns out to be heavily dependent on whether the two protons (spin one-half) in the H\(_2\) molecule have their spins parallel or antiparallel, even though that alignment involves only a very tiny interaction energy. If the proton spins are antiparallel, that is to say in the singlet state, the molecule is called parahydrogen. The triplet state is called orthohydrogen. These two distinct gases are remarkably stable: in the absence of magnetic impurities, para–ortho transitions take weeks. The actual energy of interaction of the proton spins is of course completely negligible in the specific heat. The important contributions to the specific heat are the usual kinetic energy term, and the rotational energy of the molecule. This is where the overall (space×spin) antisymmetric wave function for the protons plays a role. Recall that the parity of a state with rotational angular momentum \(l\) is \((-1)^l\). Therefore, parahydrogen, with an antisymmetric proton spin wave function, must have a symmetric proton space wave function, and so can only have even values of the rotational angular momentum. Orthohydrogen can only have odd values. The energy of the rotational level with angular momentum \(l\) is \(E^{rot}_l=\hbar^2l(l+1)/2I\), so the two kinds of hydrogen gas have different sets of rotational energy levels, and consequently different specific heats. Symmetry of Three-Electron Wave Functions Things get trickier when we go to three electrons. There are now \(2^3 = 8\) basis states in the spin space. Four of these are accounted for by the spin 3/2 state with all spins pointing in the same direction. This is evidently a symmetric state, so must be multiplied by an antisymmetric spatial wave function, a determinant. But the other four states are two pairs of total spin \(1/2\) states.
They are orthogonal to the symmetric spin 3/2 state, so they can’t be symmetric, but they can’t be antisymmetric either, since in each such state two of the spins must be pointing in the same direction! An example of such a state (following Baym, page 407) is \[ \chi(s_1,s_2,s_3) = |\uparrow_1\rangle (1/\sqrt{2})(|\uparrow_2\downarrow_3\rangle -|\downarrow_2\uparrow_3\rangle ). \tag{10.4.6}\] Evidently, this must be multiplied by a spatial wave function symmetric in 2 and 3, but to get a total wave function with overall antisymmetry it is necessary to add more terms: \[ \Psi(1,2,3)=\chi(s_1,s_2,s_3)\psi(\vec{r}_1, \vec{r}_2,\vec{r}_3)+\chi(s_2,s_3,s_1)\psi(\vec{r}_2, \vec{r}_3,\vec{r}_1)+\chi(s_3,s_1,s_2)\psi(\vec{r}_3,\vec{r}_1, \vec{r}_2) \tag{10.4.7}\] (from Baym). Requiring the spatial wave function \(\psi(\vec{r}_1, \vec{r}_2,\vec{r}_3)\) to be symmetric in 2, 3 is sufficient to guarantee the overall antisymmetry of the total wave function \(\Psi\). Particle enthusiasts might be interested to note that functions exactly like this arise in constructing the spin/flavor wave function for the proton in the quark model (Griffiths, Introduction to Elementary Particles, page 179). For more than three electrons, similar considerations hold. The mixed symmetries of the spatial wave functions and the spin wave functions which together make a totally antisymmetric wave function are quite complex, and are described by Young diagrams (or tableaux). There is a simple introduction, including the generalization to SU(3), in Sakurai, section 6.5. See also \(\S\)63 of Landau and Lifshitz. Scattering of Identical Particles As a preliminary exercise, consider the classical picture of scattering between two positively charged particles, for example \(\alpha\)- particles, viewed in the center of mass frame. 
If an outgoing \(\alpha\) is detected at an angle \(\theta\) to the path of ingoing \(\alpha\) #1, it could be #1 deflected through \(\theta\), or #2 deflected through \(\pi-\theta\) (see figure). Classically, we could tell which one it was by watching the collision as it happened, and keeping track. However, in a quantum mechanical scattering process, we cannot keep track of the particles unless we bombard them with photons having wavelength substantially less than the distance of closest approach. This is just like detecting an electron at a particular place when there are two electrons in a one-dimensional box: the probability amplitude for finding an \(\alpha\) coming out at angle \(\theta\) to the ingoing direction of one of them is the sum of the amplitudes (not the sum of the probabilities!) for scattering through \(\theta\) and \(\pi-\theta\). Writing the asymptotic scattering wave function in the standard form for scattering from a fixed target, \[ \psi(\vec{r})\approx e^{ikz}+f(\theta)\frac{e^{ikr}}{r} \tag{10.4.8}\] the two-particle wave function in the center of mass frame, in terms of the relative coordinate, is given by symmetrizing:\[ \psi(\vec{r})\approx e^{ikz}+e^{-ikz}+(f(\theta)+f(\pi-\theta))\frac{e^{ikr}}{r}. \tag{10.4.9}\] How does the particle symmetry affect the actual scattering rate at an angle \(\theta\)? If the particles were distinguishable, the differential cross section would be \[ \left(\frac{d\sigma}{d\Omega}\right)_{distinguishable}=|f(\theta)|^2+|f(\pi-\theta)|^2, \tag{10.4.10}\] but quantum mechanically \[ \left(\frac{d\sigma}{d\Omega}\right)=|f(\theta)+f(\pi-\theta)|^2. \tag{10.4.11}\] This makes a big difference! For example, for scattering through 90°, where \(f(\theta)=f(\pi-\theta)\), the quantum mechanical scattering rate is twice the classical (distinguishable) prediction.
Furthermore, if we make the standard expansion of the scattering amplitude \(f(\theta)\) in terms of partial waves, \[ f(\theta)=\sum_{l=0}^{\infty}(2l+1)a_lP_l(\cos\theta) \tag{10.4.12}\] then \[ \begin{matrix} f(\theta)+f(\pi-\theta)=\sum_{l=0}^{\infty}(2l+1)a_l(P_l(\cos\theta)+P_l(\cos(\pi-\theta))) \\ =\sum_{l=0}^{\infty}(2l+1)a_l(P_l(\cos\theta)+P_l(-\cos\theta)) \end{matrix} \tag{10.4.13}\] and since \(P_l(-x)=(-1)^lP_l(x)\) the scattering only takes place in even partial wave states. This is the same thing as saying that the overall wave function of two identical bosons is symmetric, so if they are in an eigenstate of total angular momentum, it has to be a state of even \(l\). For fermions in an antisymmetric spin state, such as proton-proton scattering with the two proton spins forming a singlet, the spatial wave function is symmetric, and the argument is the same as for the boson case above. For parallel spin protons, however, the spatial wave function has to be antisymmetric, and the scattering amplitude will then be \(f(\theta)-f(\pi-\theta)\). In this case there is zero scattering at 90°! Note that for (nonrelativistic) equal mass particles, the scattering angle in the center of mass frame is twice the scattering angle in the fixed target (lab) frame. This is easily seen in the diagram below. The four equal-length black arrows, two in, two out, forming an X, are the center of mass momenta. The lab momenta are given by adding the (same length) blue dotted arrow to each, reducing one of the ingoing momenta to zero, and giving the (red arrow) lab momenta (slightly displaced for clarity). The outgoing lab momenta are the diagonals of rhombi (equal-side parallelograms), hence at right angles and bisecting the center of mass angles of scattering.
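The even-\(l\) selection rule is easy to check numerically. A short sketch, using made-up partial-wave amplitudes \(a_l\) (not from any real phase-shift analysis), confirms that symmetrizing the amplitude kills the odd partial waves:

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Toy partial-wave amplitudes a_l (assumed values, purely illustrative)
a = np.array([0.5, 0.3, 0.2, 0.1])

def f(theta):
    # f(theta) = sum_l (2l+1) a_l P_l(cos theta), as in Eq. (10.4.12)
    coeffs = (2 * np.arange(len(a)) + 1) * a
    return legval(np.cos(theta), coeffs)

theta = 0.7
symmetrized = f(theta) + f(np.pi - theta)

# Keep only even-l terms; by P_l(-x) = (-1)^l P_l(x) these are doubled
even = (2 * np.arange(len(a)) + 1) * a
even[1::2] = 0.0
even_sum = 2 * legval(np.cos(theta), even)

print(np.isclose(symmetrized, even_sum))  # True
```

Replacing the sum with the difference \(f(\theta)-f(\pi-\theta)\) leaves only the odd terms, and vanishes identically at \(\theta = \pi/2\).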
As you said, it varies. Imagine I'm in Chicago and you're in London. My little dog is running circles around me. Which is closer to you, me or my dog? While the correct answer is "it depends on where the dog is in its orbit around me", I'd argue that a better answer is "it doesn't matter" - the distance between you and me is so great that any little ... In my former job I was writing educational software. In short, it's exactly what you described: we offered a paid version of what you could get for free by looking out on the internet, going to class, going to the library, ... And yet, I'm still incredibly proud of it, knowing I made a difference. What is the difference between well written software and a ... Traveling from Mars's surface to Earth's surface requires less energy than traveling from Mars's surface to Luna's surface, but traveling from Luna's surface to Mars's surface requires much less energy than traveling from Earth's surface to Mars's surface. For the purposes of space travel, the actual physical distance is much less important than the ... In several press conferences, employees of NASA or private space firms have been asked if they played KSP, and some answered with "Yes". NASA used patched conics to find candidate orbits for Apollo back in the day. With that being said, KSP strikes the balance between accuracy and simplicity. Patched conics give a good idea how space works, without being ... Distances to Mars: You can answer the question with Astropy, a Python library for astronomy. Here's a diagram of distances from Earth to Mars and Moon to Mars, between 2000 and 2030. You can see that the two curves are so close to each other that they look like one single curve. Here's a zoom around the last opposition, in May 2016: Relative difference...
The Apollo spacecraft consists of three major parts: The Command Module (CM), a conical module where the three crew members live during launch from Earth and travel to and from the Moon, and which re-enters Earth's atmosphere alone at the end of the trip; The Service Module (SM), a cylindrical section containing fuel, power, life support, communications, a ... In order to land on the Moon, you must, at some point, be moving towards the Moon (decreasing your distance from it, to be more precise; you may also be moving sideways) and close enough that the Moon's gravity dominates that of the Earth and the Sun. From that point on, your kinetic energy (relative to the Moon's centre of mass) can only increase as you get ... Your orbit is uniquely determined by a current position (three coordinates) and velocity (three more quantities to give magnitude and direction). Going places involves changing your orbit. For instance, from a circular orbit about Earth, enter an elliptical transfer orbit to the Moon, then circularize your orbit about the Moon. Everything you do in space ... I quite agree that it is not intuitive. However, orbital mechanics are frequently not intuitive, probably because we don't get to experience an orbital environment on a regular basis (if ever). Let's just assume we're talking about circular orbits for the remainder of my post, since you are a beginner in orbital mechanics. There is only one speed that a ... Going directly to the Moon would require a very small launch window. Parking in Earth orbit first enabled a launch window of about 3 to 4 hours, see this question. Abort from an Earth orbit was possible when the second ignition of the third stage of the Saturn V failed, using the Service Module engine to initiate a reentry. Time in orbit was used to complete the ... Yes. 1st scenario: A spacecraft orbiting the Sun at Earth distance vs.
Pluto distance, shedding its orbital velocity. The orbital velocity decreases with distance, according to the following formula, where $r$ is the orbital radius, and $\mu$ is the mass parameter (it's just a shorthand we use) $$v_{circular} = \sqrt{\frac{\mu}{r}}$$ The orbital velocity ... @SteveLinton's answer is right, no matter how gently you try, by the time you get to the surface the Moon's gravity will have accelerated you to something like 2,400 m/s. There are ways to use the gravity of the Earth and Sun to make a tiny reduction in this, but it's a very small effect. The simplest way to argue this is that rocks on the Moon don't ... I want to allow students to tinker around with basic central force motion and see the ways in which conic sections are altered by thrust, etc. Seeing/enacting an example of rendezvous (maybe in a CW frame?) would be neat too. I definitely think Kerbal Space Program is the right answer here. The ways in which it departs from real-world space flight (such as ... Let's say you start rolling down from the top of a half-pipe skateboard ramp, and you plan to get back to your starting point. If you stop in the middle of the pipe, it is much harder to climb back up to your starting point than if you ride up the other side of the ramp and let gravity accelerate you back. Similarly, even if you planned to get into an orbit ... All the other answers are great, but I think one explanation is still missing: how an interplanetary orbital transfer actually works in practice. The thing is, space is rather big, and things keep moving. At the same time, you're being tugged on constantly by all the other bodies in a planetary system (we can ignore other stars for interplanetary transfers). ... It isn't really feasible to launch your own rocket, unless you have a lot of money to spend. The required power is immense.
Hitting sub-orbital might be possible, and has been done once by amateurs, but they received sponsorships and had a team dedicated to making it happen. The trick to an orbital rocket is not just to get high, but also fast. The speed ... As was astutely noted by Hans, the period of the movement was about 25 days. It turns out that is the time it takes for the Sun to rotate once. When I was grabbing the data from JPL Horizons, I listed the target (center) as "coord@10". I should have omitted the "coord", as that means coordinates, or in other words, a point on the surface. Without that, it ... That's a mistranscription of OMS Burn, or Orbital Maneuvering System burn. The OMS system is how the shuttle changed its orbital characteristics. You can read about it here. One, two or more might have been used to fine-tune the orbit, avoid space debris, rendezvous with the space station, etc. In terms of distance, the two swap considerably. But perhaps a more interesting question is, which of the two is closer in terms of the energy required to land. For that, let's look at our friend, the delta-v table. Once one is approaching Earth from Mars, things only become different at the point labeled Earth C3=0 (see $C_3$). From there it is about 2.3 ... http://nbviewer.jupyter.org/gist/leftaroundabout/3955d27877e19be39d0f61fdafce069e Barely achieving escape velocity means you take a parabolic orbit. The thing with parabolic orbits is that they actually approach zero speed as you depart to infinite distance from the starting body. That is, zero speed with respect to the starting body's frame of reference, ... All interplanetary probes that I am aware of were launched into a parking orbit, and then waited some time in that orbit before restarting a stage or igniting another stage to inject on the desired outgoing asymptote. This is done for convenience, to allow long launch windows on days in the launch period. It is possible and slightly more efficient to launch ...
The Juno spacecraft has no means to directly measure and compute that it is in orbit. It did not send any such confirmation message. All it sent was an FSK tone indicating that it had completed the activities it was commanded to do. After the spacecraft turned back to Earth, it transmitted all of the recorded engineering data from the event, providing much ... Kerbal Space Program is somewhat of a medium-fidelity simulation. It manages a few things quite well, and a few things not as well. Let me try and give a list (which might be a bit out of date). The Good: The orbit simulation is quite accurate, including how to change inclination, raise/lower orbits, leave a planet, and approach a new planet. The staging ... Statement of the Problem: The problem you want to solve is called the Kepler problem. In your formulation of the problem, you're starting out with the Cartesian orbital state vectors (also called Cartesian elements): that is, the initial position and velocity. As you have discovered, the only way to propagate the Cartesian elements forward in time is by ... More or less. While the ISS is below the satellites used for TV transmissions, it is passing by so fast that the coverage will be highly intermittent, meaning that you would be able to watch a channel for only a couple of minutes, have blackouts over the oceans, and repeat. Other notable differences would be: normal satellite receivers are "fixed": the ... There is very little to gain by going straight to the Moon, and as @Uwe has said, it makes the timing of the launch extremely demanding. Let me have my go at explaining why there is very little to gain. The most fuel-efficient way for a rocket to get from Earth to the Moon is basically to accelerate as close to the Earth as possible until it is moving at ... There are a couple of reasons. The distance from L2 to Earth is only 1.5 million km. The L4/L5 are 1 AU, or about 150 million km, away. That leads to a reduction in link margin of 40 dB, or 1/10000.
That is quite significant. In order to compensate for that difference, you either need a bigger radio dish, more power, or a loss in data. As you ... Understanding the Principle: Let's start by understanding the mechanism of a gravity assist. As a spacecraft approaches a planetary body, it gets affected by the planet's gravitational pull. Getting nearer, the pull increases, and eventually when the spacecraft passes the planet, the pull decreases. If you think about a stationary planet as an absolute ... No, because there's nothing like water for a keel to work against. In water sailing there are two force vectors, the vector from the reaction of the wind against the sail, and the vector from the keel and rudder against the water. These vectors add together to propel the sailboat. This works for almost any direction on the compass except where the wind ...
Astrobiology is the study of the origin, evolution, distribution and future of life in the universe. It is concerned with discovering and detecting extrasolar planets. Astrobiology addresses the following points − How does life begin and evolve? (biology + geology + chemistry + atmospheric sciences) Are there worlds beyond earth that are favorable for life? (astronomy) What would be the future of life on earth? Astronomy addresses the following points − How to detect planetary systems around other stars? One of the methods is direct imaging, but it is a very difficult task because planets are extremely faint light sources compared to stars, and what little light comes from them tends to be lost in the glare from their parent star. Contrast is better when the planet is close to its parent star and hot, so that it emits intense infrared radiation. We can make images in the infrared region. The most efficient techniques for extrasolar planet detection are as follows. Each of these is also explained in detail in the subsequent chapters. The Radial Velocity Method is also called the Doppler method. In this − The star–planet system revolves around its barycenter, so the star wobbles. The wobbling can be detected by periodic red/blue shifts. Astrometry − measuring the positions of objects in the sky very precisely. The Transit Method (Kepler space telescope) is used to find out the size of the planet. The dip in the brightness of the star caused by the planet is usually very small, unlike in a binary system. Direct imaging − imaging the planet using a telescope. Let us look at a case study done on the Radial Velocity Method. This case study assumes a circular orbit with the plane of the orbit perpendicular to the plane of the sky. The time taken by both around the barycenter will be the same. It will be equal to the time difference between two redshifts or blueshifts. Consider the following image. At A and C, the full velocity is measured. At B, the velocity is zero. $V_{r,max} = V_\ast$ is the true velocity of the star. P is the time period of the star as well as the planet.
θ is the phase of the orbit. Star mass $M_\ast$, orbit radius $a_\ast$, planet mass $m_p$. From the center-of-mass equation, $$m_p a_p = M_\ast a_\ast$$ From the equation of velocity, $$V_\ast = \frac{2\pi a_\ast}{P}$$ $$\Rightarrow a_\ast = \frac{PV_\ast}{2\pi}$$ From Kepler's Law, $$P^2 = \frac{4\pi^2a_p^3}{GM_\ast}$$ $$\Rightarrow a_p = \left ( \frac{P^2GM_\ast}{4\pi^2} \right)^{1/3}$$ Combining the above equations via $m_p = M_\ast a_\ast / a_p$, we get − $$\Rightarrow m_p = \left( \frac{P}{2\pi G} \right)^{1/3}M_\ast^{2/3}V_\ast$$ We get: $m_p, a_p$ and $a_\ast$. The above equation is biased towards detecting the most massive planets close to the star. Astrobiology is the study of the origin, evolution, distribution and future of life in the universe. The techniques to detect extrasolar planets are the Radial Velocity Method, the Transit Method, Direct Imaging, etc. Wobbling can be detected by periodic red/blue shifts and astrometry. The Radial Velocity Method is biased towards detecting massive planets close to the star.
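To make the mass formula concrete, here is a small numerical check (a sketch; the Sun–Jupiter numbers are rounded textbook values, not from this text):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def planet_mass(P, V_star, M_star):
    """Planet mass from the radial-velocity relation
    m_p = (P / (2*pi*G))**(1/3) * M_star**(2/3) * V_star
    (circular, edge-on orbit, m_p << M_star)."""
    return (P / (2 * math.pi * G)) ** (1 / 3) * M_star ** (2 / 3) * V_star

# Sanity check with approximate Sun-Jupiter values:
P = 11.86 * 365.25 * 86400   # Jupiter's orbital period in seconds
V_star = 12.5                # Sun's reflex velocity, m/s
M_star = 1.989e30            # solar mass, kg

m_p = planet_mass(P, V_star, M_star)
print(m_p / 1.898e27)        # ratio to Jupiter's mass, ~1.0
```

The recovered mass agrees with Jupiter's to within about 1%, which is as good as the rounded inputs allow.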
Now showing items 1-10 of 32 The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows the selection of events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
I am investigating a somewhat obscure area of number theory. Can I post a proposition and a complete proof and ask people to check it? Say I've proven a theorem or found a solution to a problem. I'm close to being sure that my proof or solution is fine. However, I'm not entirely confident in my judgement. Maybe I'm new to the topic, or I remember that I've thought my incorrect proofs of the same difficulty level were correct many tim... I am trying to learn proof-based math on my own, and once I construct a proof, I often have a gut feeling that it's not airtight. What is the best way of asking such questions? My findings till now: This question's answer in Meta points out that reading other people's long and formal proofs i... Is this the proper site to ask about the correctness of one's proof? The proofs I would intend on asking questions about are elementary mathematics proofs, such as a proof about infinitely many primes, induction equalities and inequalities, etc... Or is there another site for "Proof Review"? Simila... I am aware that it is probably better not to have too many meta-tags such as homework, soft-question, big-list or reference-request. Despite this, I'd like to ask other MSE users whether they would consider a tag for "check my proof" questions useful. We have a lot of such questions and they are... Is it okay to ask questions where you give the solution and ask people to review it to see if it is correct? Recently, I have seen a lot of questions that essentially ask, "Is this proof correct?" or "Can you verify my work is correct?" like this one. Often, especially when the asker's work is in fact correct, these questions have a one-word answer. I feel that these questions could be much improved b... I solve some exercises, but I'm self-studying and some books do not have answers to the exercises; is it ok to ask if the solution is plausible here?
I occasionally come across questions (such as this one: Prove that the $\sigma$ - algebras are equal) in which the person asking the question has already answered their question and wants to know whether their approach is right. There are two possibilities: Their approach is wrong. Then you ... I envision a site where people could post their proofs -- either as answers to textbook exercises or revisions of current textbook proofs -- and get feedback on which parts might need improving, and perhaps how this might be done. This sort of site could be useful to a number of people. Namely, ... Over a week ago I asked this question. In it, I proposed a solution and asked for it to be verified, or for an alternative solution to be suggested. Currently, there are no answers but there is one comment stating that my proof is correct (as well as indicating the need for more explanation of a ... I feel bothered by questions where the body begins with a problem statement and then is followed by a giant solution and then succeeded by the question: "Can someone please tell me whether or not this solution is correct?" Here is an example of what I'm talking about: Example. Here are my compla... I've always wanted math competitions on MSE ever since I've joined. These could be either user-held or officially held, whichever seems better. User-held competitions would run as follows. A user starts a competition with a specified level, with original problems that he/she writes. People sign... I was puzzling: "What is the shortest proof of $\exists x \forall y (P(x) \to P(y)) $?" (a variation of the drinker's paradox, see Proof of Drinker paradox) given a certain set of inference rules and using natural deduction. I managed to prove it in 23 lines, but I am not sure if this is the shor...
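For what it's worth, the statement in that last question also has a very short machine-checked proof. Here is a sketch in Lean 4 (assuming Mathlib for the tactics, and a nonempty domain, which the drinker paradox requires):

```lean
-- Drinker paradox: there is an x such that, if P holds of x,
-- then P holds of everyone.  Classical case analysis is essential.
theorem drinker {α : Type} [Inhabited α] (P : α → Prop) :
    ∃ x, ∀ y, P x → P y := by
  by_cases h : ∃ y, ¬ P y
  · -- Someone fails P: pick them as the witness; the premise is false.
    obtain ⟨y, hy⟩ := h
    exact ⟨y, fun z hz => absurd hz hy⟩
  · -- Nobody fails P: any witness works, since the conclusion is true.
    exact ⟨default, fun y _ => by by_contra hy; exact h ⟨y, hy⟩⟩
```

The two branches mirror the usual informal argument: either some element falsifies $P$ (making the implication vacuously true from that witness), or every element satisfies $P$.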
This isn't the usual issue about questions from contest competitions being posted here for assistance, but rather about the use of Math.SE as a venue for hosting a "contest" using a future bounty as prize. This recent Question asks for participation according to rules (the post lists seven of th...
stat946w18/Implicit Causal Models for Genome-wide Association Studies Revision as of 23:47, 20 April 2018 Introduction and Motivation There is currently much progress in probabilistic models, which could lead to the development of rich generative models. The models have been applied with neural networks, implicit densities, and with scalable algorithms to very large data for Bayesian inference. However, most of these models focus on capturing statistical relationships rather than causal relationships. Causal relationships are relationships where one event is the result of another event, i.e. a cause and effect. Causal models give us a sense of how manipulating the generative process could change the final results. Genome-wide association studies (GWAS) are an example of causal relationships. A genome is basically the sum of all the DNA in an organism and contains information about the organism's attributes. Specifically, GWAS is about figuring out how genetic factors cause disease among humans. Here the genetic factors we are referring to are single nucleotide polymorphisms (SNPs), and getting a particular disease is treated as a trait, i.e., the outcome. In order to learn the cause of a disease and to cure it, the causation between SNPs and diseases is investigated: first, predict which one or more SNPs cause the disease; second, target the selected SNPs to cure the disease. The figure below depicts an example Manhattan plot for a GWAS. Each dot represents an SNP.
The x-axis is the chromosome location, and the y-axis is the negative log of the association p-value between the SNP and the disease, so points with the largest values represent strongly associated risk loci. This paper focuses on two challenges to combining modern probabilistic models and causality. The first is how to build rich causal models that meet the specific needs of GWAS. In general, probabilistic causal models involve a function [math]f[/math] and a noise [math]n[/math]. For working simplicity, we usually assume [math]f[/math] is a linear model with Gaussian noise. However, problems like GWAS require models with nonlinear, learnable interactions among the inputs and the noise. The second challenge is how to address latent population-based confounders. Latent confounders are a problem when we apply causal models, since we can neither observe them nor know their underlying structure. For example, in GWAS, both latent population structure, i.e., subgroups in the population with ancestry differences, and relatedness among sample individuals produce spurious correlations between SNPs and the trait of interest. The existing methods cannot easily accommodate such complex latent structure. For the first challenge, the authors develop implicit causal models, a class of causal models that leverages neural architectures with an implicit density. With GWAS, implicit causal models generalize previous methods to capture important nonlinearities, such as gene-gene and gene-population interactions. Building on this, for the second challenge, they describe an implicit causal model that adjusts for population confounders by sharing strength across examples (genes). There has been an increasing amount of work on causal models which focuses on causal discovery and typically makes strong assumptions, such as Gaussian processes on the noise variable or restricted nonlinearities for the main function. Implicit Causal Models Implicit causal models are an extension of probabilistic causal models.
Probabilistic causal models will be introduced first. Probabilistic Causal Models Probabilistic causal models are built from deterministic functions of noise and other variables. Consider background noise [math]\epsilon[/math], representing unknown background quantities which are jointly independent, and a global variable [math]\beta[/math], some function of this noise. Each [math]\beta[/math] and [math]x[/math] is a function of noise; [math]y[/math] is a function of noise and [math]x[/math]. The target is the causal mechanism [math]f_y[/math], so that the causal effect [math]p(y|do(X=x),\beta)[/math] can be calculated. [math]do(X=x)[/math] means that we specify a value of [math]X[/math] under the fixed structure [math]\beta[/math]. Following earlier work, it is assumed that [math]p(y|do(x),\beta) = p(y|x, \beta)[/math]. An example of a probabilistic causal model is the additive noise model. [math]f(\cdot)[/math] is usually a linear function, or spline functions for nonlinearities. [math]\epsilon[/math] is assumed to be standard normal, as is the noise on [math]y[/math]. Thus the posterior [math]p(\theta | x, y, \beta)[/math] can be represented in terms of the prior [math]p(\theta)[/math], which is known. Then variational inference or MCMC can be applied to calculate the posterior distribution. Implicit Causal Models The difference between implicit causal models and probabilistic causal models is the noise variable. Instead of using an additive noise term, implicit causal models directly take noise [math]\epsilon[/math] as input and output [math]x[/math] given parameter [math]\theta[/math]: [math] x = g(\epsilon \mid \theta), \quad \epsilon \sim s(\cdot) [/math] The causal diagram changes accordingly. They used fully connected neural networks with a fair number of hidden units to approximate each causal mechanism. Implicit Causal Models with Latent Confounders Previously, they assumed the global structure is observed.
Next, the unobserved scenario is considered. Causal Inference with a Latent Confounder As before, the interest is in the causal effect [math]p(y|do(x_m), x_{-m})[/math]. Here, the SNPs other than [math]x_m[/math] are also under consideration. However, the effect is confounded by the unobserved confounder [math]z_n[/math]. As a result, the standard inference method cannot be used in this case. The paper proposes a new method which includes the latent confounders. For each subject [math]n=1,\dots,N[/math] and each SNP [math]m=1,\dots,M[/math], the mechanism for the latent confounder [math]z_n[/math] is assumed to be known. The SNPs depend on the confounders, and the trait depends on all the SNPs and the confounders as well. The posterior of [math]\theta[/math] needs to be calculated in order to estimate the mechanism [math]g_y[/math] as well as the causal effect [math]p(y|do(x_m), x_{-m})[/math], so that it can be explained how changes to each SNP [math]X_m[/math] cause changes to the trait [math]Y[/math]. Note that the latent structure [math]p(z|x, y)[/math] is assumed known. In general, causal inference with latent confounders can be dangerous: it uses the data twice, and thus it may bias the estimates of each arrow [math]X_m \to Y[/math]. Why is this justified? This is answered below: Proposition 1. Assume the causal graph of Figure 2 (left) is correct and that the true distribution resides in some configuration of the parameters of the causal model (Figure 2 (right)). Then the posterior [math]p(\theta | x, y)[/math] provides a consistent estimator of the causal mechanism [math]f_y[/math]. Proposition 1 rigorizes previous methods in the framework of probabilistic causal models. The intuition is that as more SNPs arrive (“M → ∞, N fixed”), the posterior concentrates at the true confounders [math]z_n[/math], and thus we can estimate the causal mechanism given each data point’s confounder [math]z_n[/math].
As more data points arrive (“N → ∞, M fixed”), we can estimate the causal mechanism given any confounder [math]z_n[/math], as there is an infinity of them. Implicit Causal Model with a Latent Confounder This section gives the algorithm and functions for implementing an implicit causal model for GWAS. Generative Process of Confounders [math]z_n[/math]. The distribution of the confounders is set as standard normal, [math]z_n \in R^K[/math], where [math]K[/math] is the dimension of [math]z_n[/math]; [math]K[/math] should make the latent space as close as possible to the true population structure. Generative Process of SNPs [math]x_{nm}[/math]. Each SNP takes values in {0, 1, 2}, and the authors defined a [math]Binomial(2,\pi_{nm})[/math] distribution on [math]x_{nm}[/math]. They used logistic factor analysis to design the SNP matrix. Since logistic factor analysis makes strong assumptions, this paper suggests using a neural network to relax these assumptions. This renders the output a full [math]N \times M[/math] matrix due to the variables [math]w_m[/math], which act like principal components in PCA. Here, [math]\phi[/math] has a standard normal prior distribution. The weights [math]w[/math] and biases [math]\phi[/math] are shared over the [math]m[/math] SNPs and [math]n[/math] individuals, which makes it possible to learn nonlinear interactions between [math]z_n[/math] and [math]w_m[/math]. Generative Process of Traits [math]y_n[/math]. Previously, each trait was modeled by a linear regression, which makes very strong assumptions about the SNPs, their interactions, and the additive noise. It too can be replaced by a neural network which outputs a scalar. Likelihood-free Variational Inference Calculating the posterior of [math]\theta[/math] is the key to applying the implicit causal model with latent confounders. However, with implicit models the likelihood is intractable, since it requires integrating over a nonlinear function.
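As a concrete illustration, the three-part generative process just described (confounders, SNPs, trait) can be run forward as a simulation. This is a minimal NumPy sketch: all dimensions, the specific networks, and the particular nonlinearity are illustrative choices, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): N individuals,
# M SNPs, K-dimensional latent confounder.
N, M, K = 100, 50, 3

def mlp(inp, w1, w2):
    """A tiny fully connected network with one hidden ReLU layer."""
    return np.maximum(inp @ w1, 0.0) @ w2

# Confounders z_n ~ Normal(0, I).
z = rng.standard_normal((N, K))

# Per-SNP parameters w_m, playing the role of loadings / principal
# components shared across individuals.
w = rng.standard_normal((M, K))

# SNPs x_nm ~ Binomial(2, pi_nm), with pi_nm a (here: hand-rolled,
# mildly nonlinear) logistic function of z_n and w_m.
logits = z @ w.T + 0.1 * (z ** 2) @ (w ** 2).T
pi = 1.0 / (1.0 + np.exp(-logits))
x = rng.binomial(2, pi)            # N x M genotype matrix, values in {0,1,2}

# Trait y_n: a scalar neural-network output of (x_n, z_n) plus noise.
w1 = rng.standard_normal((M + K, 16)) / np.sqrt(M + K)
w2 = rng.standard_normal((16, 1)) / 4.0
y = mlp(np.concatenate([x, z], axis=1), w1, w2).ravel() \
    + 0.1 * rng.standard_normal(N)

print(x.shape, y.shape)            # (100, 50) (100,)
```

Inference then amounts to recovering the network parameters and the per-individual `z` from observed `(x, y)` alone, which is what the likelihood-free machinery is for.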
The authors applied likelihood-free variational inference (LFVI). LFVI proposes a family of distributions over the latent variables. Here the variables [math]w_m[/math] and [math]z_n[/math] are all assumed to be Normal. For LFVI applied to GWAS, an algorithm similar to the EM algorithm is used. Empirical Study The authors performed simulations on 100,000 SNPs, 940 to 5,000 individuals, and across 100 replications of 11 settings. Four methods were compared: the implicit causal model (ICM); PCA with linear regression (PCA); a linear mixed model (LMM); and logistic factor analysis with inverse regression (GCAT). The feedforward neural networks for traits and SNPs are fully connected with two hidden layers, ReLU activation functions, and batch normalization. Simulation Study Based on real genomic data, a true model is applied to generate the SNPs and traits for each configuration. Four datasets are used in this simulation study, under five configurations: HapMap [Balding-Nichols model] 1000 Genomes Project (TGP) [PCA] Human Genome Diversity Project (HGDP) [PCA] HGDP [Pritchard-Stephens-Donnelly model] A latent spatial position of individuals for population structure [spatial] The table shows the prediction accuracy. The accuracy is calculated as the number of true positives divided by the number of true positives plus false positives. True positives are the proportion of positives that are correctly identified as such (e.g. the percentage of SNPs which are correctly identified as having a causal relation with the trait). In contrast, false positives are SNPs reported as having a causal relation with the trait when they don't. The closer the rate is to 1, the better the model, since false positives are wrong predictions. The results presented above show that the implicit causal model has the best performance among these four models in every situation.
In particular, other models tend to do poorly on PSD and Spatial when [math]a[/math] is small, but the ICM achieved a significantly higher rate. The only method comparable to the ICM is GCAT, when applied to the simpler configurations. Real-data Analysis They also applied the ICM to a GWAS of the Northern Finland Birth Cohort, which measures 10 metabolic traits and contains 324,160 SNPs and 5,027 individuals. The data came from the database of Genotypes and Phenotypes (dbGaP), and the same preprocessing as Song et al. was used. Ten implicit causal models were fitted, one for each trait to be modeled. For each of the 10 implicit causal models the dimension of the confounders was set to six, the same as in the paper by Song et al. (see Table 2 for comparable models). The SNP network used 512 hidden units in both layers and the trait network used 32 and 256. The numbers in the above table are the number of significant loci for each of the 10 traits. The numbers for the other methods, such as GCAT, LMM, PCA, and "uncorrected" (association tests without accounting for hidden relatedness of study samples), are obtained from other papers. By comparison, the ICM reached the level of the best previous model for each trait. Conclusion This paper introduced implicit causal models in order to account for nonlinear, complex causal relationships, and applied the method to GWAS. It can not only capture important interactions between genes within an individual and at the population level, but can also adjust for latent confounders by incorporating the latent variables into the model. In the simulation study, the authors showed that the implicit causal model beats other methods by 15-45.3% on a variety of datasets with variations on parameters. The authors also believe this GWAS application is only the start of the usage of implicit causal models.
The authors suggest that it might also be successfully used in the design of dynamic theories in high-energy physics or for modeling discrete choices in economics. Critique This paper is an interesting and novel work. The main contribution of this paper is to connect statistical genetics with machine learning methodology. The method is technically sound and does indeed generalize techniques currently used in statistical genetics. The neural network used in this paper is a very simple feed-forward 2-hidden-layer network, but the idea of where to use the neural network is crucial and might be significant in GWAS. The paper has limitations as well. The empirical example is too easy, and far from a realistic situation. Despite the simulation study showing some competitive results, the Northern Finland Birth Cohort application did not demonstrate an advantage of the implicit causal model over previous methods, such as GCAT or LMM. Another limitation concerns linkage disequilibrium, as the authors state as well. SNPs are not completely independent of each other; alleles at nearby loci are usually correlated. The authors did not consider this complex case; rather, they only considered the simplest case, where all the SNPs are assumed independent. Furthermore, a single SNP may not have enough power to explain the causal relationship. Recent papers indicate that causation of a trait may involve multiple SNPs. This could be future work as well. References Dustin Tran and David M. Blei. Implicit Causal Models for Genome-wide Association Studies. arXiv preprint arXiv:1710.10742, 2017. Patrik O. Hoyer, Dominik Janzing, Joris M. Mooij, Jonas Peters, and Bernhard Schölkopf. Nonlinear causal discovery with additive noise models. In Neural Information Processing Systems, 2009. Alkes L. Price, Nick J. Patterson, Robert M. Plenge, Michael E. Weinblatt, Nancy A. Shadick, and David Reich.
Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909, 2006. Minsun Song, Wei Hao, and John D. Storey. Testing for genetic associations in arbitrarily structured populations. Nature Genetics, 47(5):550–554, 2015. Dustin Tran, Rajesh Ranganath, and David M. Blei. Hierarchical implicit models and likelihood-free variational inference. In Neural Information Processing Systems, 2017.
By way of example, let us use the expression \(\textbf{dA} = \frac{\mu I}{ 4 \pi r}\textbf{ds}\) , to calculate the magnetic vector potential in the vicinity of a long, straight, current-carrying conductor ("wire" for short!). We'll suppose that the wire lies along the \(z\)-axis, with the current flowing in the direction of positive \(z\). We'll work in cylindrical coordinates, and the symbols \(\hat{\boldsymbol{\rho}},\,\hat{\boldsymbol{\phi}},\,\hat{\textbf{z}}\) will denote the unit orthogonal vectors. After we have calculated \(\textbf{A}\), we'll try and calculate its curl to give us the magnetic field \(\textbf{B}\). We already know, of course, that for a straight wire the field is \(\textbf{B}=\frac{\mu I}{2\pi \rho}\hat{\boldsymbol{\phi}}\) , so this will serve as a check on our algebra. Consider an element \(\hat{\textbf{z}}\,dz\) on the wire at a height \(z\) above the \(xy\)-plane. (The length of this element is \(dz\); the unit vector \(\hat{\textbf{z}}\) just indicates its direction.) Consider also a point P in the \(xy\)-plane at a distance \(\rho\) from the wire. The distance of P from the element \(dz\) is \(\sqrt{\rho^2 +z^2}\) . The contribution to the magnetic vector potential is therefore \[\textbf{dA}=\hat{\textbf{z}}\frac{\mu I}{4\pi}\cdot \frac{dz}{(\rho^2+z^2)^{1/2}}.\label{9.3.1}\] The total magnetic vector potential is therefore \[\textbf{A}=\hat{\textbf{z}}\frac{\mu I}{2\pi}\int_0^\infty \frac{dz}{(\rho^2+z^2)^{1/2}}.\label{9.3.2}\] This integral is infinite, which at first may appear to be puzzling. Let us therefore first calculate the magnetic vector potential for a finite section of length \(2l\) of the wire. For this section, we have \[\textbf{A}=\hat{\textbf{z}}\frac{\mu I}{2\pi}\cdot \int_0^l \frac{dz}{(\rho^2+z^2)^{1/2}}.\label{9.3.3}\] To integrate this, let \(z = \rho \tan \theta\), whence \(\textbf{A}=\hat{\textbf{z}}\frac{\mu I}{2\pi}\cdot \int_0^\alpha \sec \theta \, d\theta\) where \(l = \rho \tan \alpha\).
From this we obtain \(\textbf{A}=\hat{\textbf{z}}\frac{\mu I}{2\pi}\cdot \ln (\sec \alpha +\tan \alpha )\), whence \[\label{9.3.4}\textbf{A}=\hat{\textbf{z}}\frac{\mu I}{2\pi}\cdot \ln \left ( \frac{\sqrt{l^2+\rho^2}+l}{\rho}\right ) .\] For \(l >> \rho\) this becomes \[\label{9.3.5}\textbf{A}=\hat{\textbf{z}}\frac{\mu I}{2\pi}\cdot \ln \left ( \frac{2l}{\rho}\right ) =\hat{\textbf{z}}\frac{\mu I}{2\pi}(\ln 2l -\ln \rho ).\] Thus we see that the magnetic vector potential in the vicinity of a straight wire is a vector field parallel to the wire. If the wire is of infinite length, the magnetic vector potential is infinite. For a finite length, the potential is given exactly by Equation \ref{9.3.4}, and, very close to a long wire, the potential is given approximately by Equation \ref{9.3.5}. Now let us use Equation \ref{9.3.5} together with \(\textbf{B} = \textbf{curl A}\), to see if we can find the magnetic field \(\textbf{B}\). We'll have to use the expression for \(\textbf{curl A}\) in cylindrical coordinates, which is \[\label{9.3.6}\textbf{curl A} = \left ( \frac{1}{\rho}\frac{∂A_z}{∂\phi}-\frac{∂A_\phi}{∂z}\right ) \hat{\boldsymbol{\rho}}+\left ( \frac{∂A_\rho}{∂z}-\frac{∂A_z}{∂\rho}\right ) \hat{\boldsymbol{\phi}}+\frac{1}{\rho}\left ( A_\phi +\rho \frac{∂A_\phi}{∂\rho}-\frac{∂A_\rho}{∂\phi }\right ) \hat{\textbf{z}}.\] In our case, \(\textbf{A}\) has only a \(z\)-component, so this is much simplified: \[\label{9.3.7}\textbf{curl A}=\frac{1}{\rho}\frac{∂A_z}{∂\phi}\hat{\boldsymbol{\rho}}-\frac{∂A_z}{∂\rho}\hat{\boldsymbol{\phi}}.\] And since the \(z\)-component of \(\textbf{A}\) depends only on \(\rho\), the calculation becomes trivial, and we obtain, as expected \[\label{9.3.8}\textbf{B}=\frac{\mu I}{2\pi \rho }\hat{\boldsymbol{\phi}}.\] This is an approximate result for very close to a long wire – but it is exact for any distance for an infinite wire. 
This may strike you as a long palaver to derive Equation \ref{9.3.8} – but the object of the exercise was not to derive Equation \ref{9.3.8} (which is trivial from Ampère's theorem), but to derive the expression for \(\textbf{A}\). Calculating \(\textbf{B}\) subsequently was only to reassure us that our algebra was correct.
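As a further check on the algebra, we can differentiate the exact finite-wire potential (Equation \ref{9.3.4}) symbolically and confirm that the field tends to Equation \ref{9.3.8} as \(l \to \infty\). A sketch using sympy (equation numbers refer to this section):

```python
import sympy as sp

rho, l, mu, I = sp.symbols('rho l mu I', positive=True)

# A_z for a finite wire of length 2l (Equation 9.3.4)
A_z = mu * I / (2 * sp.pi) * sp.log((sp.sqrt(l**2 + rho**2) + l) / rho)

# A has only a z-component depending only on rho, so by Equation 9.3.7
# the only surviving term of curl A is B_phi = -dA_z/drho.
B_phi = sp.simplify(-sp.diff(A_z, rho))
print(B_phi)   # simplifies to  mu*I*l / (2*pi*rho*sqrt(l**2 + rho**2))

# In the limit l >> rho this reduces to Equation 9.3.8.
print(sp.limit(B_phi, l, sp.oo))
```

The exact finite-wire field, \(B_\phi = \frac{\mu I}{2\pi\rho}\cdot\frac{l}{\sqrt{l^2+\rho^2}}\), visibly reduces to \(\frac{\mu I}{2\pi\rho}\) when \(l \gg \rho\), as expected.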
My eldest came home with some interesting maths homework. We have three bowls containing peas. Distribute the peas evenly across the bowls, by moving peas from one bowl to another. The only move allowed is doubling the amount of peas in one bowl by taking them from one other bowl. For example: $\begin{array}{rlrlrl} A & & B & & C & \\ \hline 11 & & 6 & & 7 \\ 4 & (-7) & 6 & & 14 & (+7) \\ 4 & & 12 & (+6) & 8 & (-6) \\ 8 & (+4) & 8 & (-4) & 8 \end{array}$ We had some fun with this. We quickly decided that the order of the bowls doesn't matter, so we can just sort them by the number of peas they contain. Then, there are just three moves possible: Double the amount of peas in the first bowl (containing the fewest peas) by taking them from the third bowl (containing the most peas) Double the amount of peas in the first bowl by taking them from the second bowl (containing neither the most nor the fewest peas) Double the amount of peas in the second bowl by taking them from the third bowl From that, we made state diagrams and found our solutions. We also found out that for one of the problems, we could never reach an end state (with all bowls containing the same amount of peas). But of course I couldn't leave it alone. So I tried to formalise what we did.
The number of peas in each bowl at any time is of course a non-negative integer: $a_i \le b_i \le c_i \,|\,a_i, b_i, c_i \in \mathbb Z_{\ge 0}$ A solution has the peas distributed equally across the bowls: $a_i = b_i = c_i$ The total number of peas at any time is a multiple of 3: $a_i + b_i + c_i = 3n \,|\,n \in \mathbb Z_{\ge 0}$ The three possible moves: $\left\{a_{i+1}, b_{i+1}, c_{i+1}\right\} \in \begin{Bmatrix} \left\{2\cdot a_i, b_i, c_i-a_i\right\} \\ \left\{2\cdot a_i, b_i-a_i, c_i\right\} \\ \left\{a_i, 2\cdot b_i, c_i-b_i\right\} \end{Bmatrix}$ The set of possible moves brings us to the conclusion that for each state after the first, at least one bowl contains an even number of peas. Since in the end state all bowls contain the same number of peas, they all contain the same even number of peas, and thus the total number of peas must be even. If we combine this with the requirement that the total number of peas is a multiple of 3 as well, we see that the total number of peas must be a multiple of 6: $a_i + b_i + c_i = 6n \,|\,n \in \mathbb Z_{\ge 0}$ (ignoring the very trivial case where we start with a solved state). But these conditions aren't sufficient to guarantee a solvable puzzle. For instance, $\left\{a_0, b_0, c_0\right\} = \left\{6,12,24\right\}$ is not solvable. The only reachable states are $$\begin{Bmatrix} \left\{ 6, 12, 24 \right\} \\ \left\{12, 12, 18 \right\} \\ \left\{ 0, 18, 24 \right\} \\ \left\{ 0, 6, 36 \right\} \\ \left\{ 0, 12, 30 \right\} \\ \end{Bmatrix}$$ none of which is an end state. So what additional conditions do we need to define to make such a puzzle solvable? Problems can have multiple solutions.
See for instance this example, with two possible paths to a solution: $\begin{array}{rlrlrl} A & & B & & C & \\ \hline 11 & & 6 & & 7 \\ 4 & (-7) & 6 & & 14 & (+7) \\ 4 & & 12 & (+6) & 8 & (-6) \\ 8 & (+4) & 8 & (-4) & 8 \end{array}$ and $\begin{array}{rlrlrl} A & & B & & C & \\ \hline 11 & & 6 & & 7 \\ 5 & (-6) & 12 & (+6) & 7 & \\ 10 & (+5) & 12 & & 2 & (-5) \\ 8 & (-2) & 12 & & 4 & (+2) \\ 8 & & 8 & (-4) & 8 & (+4) \end{array}$ The only way to solve a problem that I've come up with so far is drawing a state diagram and determining the shortest path to the end state. But while state diagrams work fine for small numbers of peas, the number of possible states may well explode for higher numbers. So is there another way to solve these problems? What is a good strategy to arrive at a solution? Yes, I'm asking two questions in one, but I don't want to repeat the explanation for a second question. Also, I feel both questions are closely related, since what I'm really looking for is an analysis of this type of problem, if possible even with a variable number of bowls.
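The state-diagram approach described above can be sketched as a breadth-first search over sorted states, which also yields a shortest solution path. This is a minimal sketch (the function name and representation are my own):

```python
from collections import deque

def solve(state):
    """BFS over sorted (a, b, c) bowl states; returns a shortest path to the
    end state, or None if no end state is reachable."""
    start = tuple(sorted(state))
    parent = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s[0] == s[1] == s[2]:            # end state: all bowls equal
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        a, b, c = s
        # the three possible moves on a sorted state
        moves = [(2 * a, b, c - a), (2 * a, b - a, c), (a, 2 * b, c - b)]
        for m in moves:
            m = tuple(sorted(m))
            if m not in parent:
                parent[m] = s
                queue.append(m)
    return None                             # no end state reachable
```

On the example above, `solve((11, 6, 7))` finds a 3-move path to `(8, 8, 8)`, and `solve((6, 12, 24))` returns `None`, matching the five-state dead end listed earlier.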
Hi Guys, I just want to say that JOMO 5 has started! Please reshare and tell your friends about it. You can check out the paper here. Please read the rules before starting Note by Yan Yau Cheng 5 years, 4 months ago Sort by: I've told all of my friends! This is going to be really fun but @Yan Yau Cheng could you go a little easy on the trig? After all, this is Junior. I just finished. :D totally agreed, especially since I just learnt basic trig only .-. Haha, yeah but I'm in Algebra 1 so... sucks for me. :P @Aditya Raut What ? I think the correct file(ques 2 problem fixed) is now uploaded on the website ..... All participants , we're sorry for that inconvenience , really!! @Aditya Raut – I am not able to access the file. Could you please post the difference in the question? @Nanayaranaraknas Vahdam – The 1st equation was previously xy+2x+3y=2 but it was supposed to be xy+3x+2y=2 so we've edited it now @Yan Yau Cheng – Thank you! Why is JOMO and Proofathon always together? Can you please colobrate time so that I can participate in both. Arrgghh...
I can't do this, the problems aren't bad but it just takes too much to convert everything to PDFs and everything, I barely have that much time. Sorry! Dude I finished in 40 minutes. :P For question 2, I can't find any non-negative integer solutions for x, y and z. Did yours have any negative numbers? @Sharky Kesa – there are no problems to question 2, I double checked @Yan Yau Cheng – I'll check again. @Yan Yau Cheng – I got a non-integral answer and I'm sure it's right... @Finn Hulse – It asked for nonnegative integers. I got two solutions for the equation but both of them had negative integers. @Sharky Kesa – Yeah I know. @Sharky Kesa – We are sorry Sharky and all others for the inconvenience but there IS a problem in question 2 .... We have edited the file and soon , the correct version will be uploaded on the website ..... Apologies for your time that it consumed unnecessarily ..... Don't worry for that now , we'll now allow word documents .... You may submit it as a Microsoft Office Word document too ! I think this one is pretty easy ,, Nice job, All the best for all participants Really? Last week's for me was the easiest, this one for me was moderately tough. I am a problem writer at JOMO but I also feel that this one is easier than the previous JOMO (of course feeling easy/difficult may differ among people) .... Go on , all the best ! Progress expected this time ....... I didn't attempt any of the contest. If u are saying of the previous combinatorics contest, then I admit my worst prepared topic is combinatorics, while my strongest topic is inequalities.. But, previous contest was also easy, but I didn't get a few. As for this contest, I got all ques but one till now.. R u taking in it? all the best then The proof problems were a bit strange. No casework, no induction, not really much of anything! :P I cannot find any non-negative solutions to Qn 2. Is there an issue in the problem?
We are sorry Bogdan and all others for the inconvenience but there IS a problem in question 2 .... We have edited the file and soon , the correct version will be uploaded on the website ..... Apologies for your time that it consumed unnecessarily ..... Happened to me as well. @Bogdan Simeonov @Sharky Kesa After triple-checking, there IS a problem in question 2, we are terribly sorry and we have edited the question. @Finn Hulse You might want to resubmit for question 2 as well since you already submitted Oh yes I did! Thanks. :D Could you please tell me what the edit to the question is? I am not able to access the new file. Great problems and solutions in JOMO5! Looking forward to the next one.
I missed the lectures for this topic, so I don't have the notes, so I was wondering if anyone could give me the idea behind how to solve quartics in radicals. I know it's long and messy, so just the basic idea would do. For example:

x⁴ + 2x³ + 3x² + 4x + 5 = 0

I recall something about getting rid of the cubic term, so maybe I should substitute x = (u - 1/2), giving:

u⁴ - 2u³ + 3/4u² - 1/2u + 1/16 + 2(u³ - 3/2u² + 3/4u - 1/8) + 3(u² - u + 1/4) + 4(u - 1/2) + 5 = 0
u⁴ + 3/4u² - 1/2u + 1/16 - 3u² + 3/2u - 1/4 + 3u² - 3u + 3/4 + 4u - 2 + 5 = 0
u⁴ + 3/4u² + 57/16 = 0
16u⁴ + 12u² + 57 = 0

Okay, this one happened to work out nicely, with the degree-1 term going away as well. Now I just have a quadratic in u². So perhaps a different example would be more enlightening. Also, when asked to "solve in radicals" does that mean that the correct answer to the above problem should be given as:

[tex]x = \frac{1}{2} \pm \sqrt{\frac{-12 \pm \sqrt{-3504}}{32}}[/tex]

So:

[tex]x = \frac{1 \pm \sqrt{\frac{-3 \pm \sqrt{-219}}{2}}}{2}[/tex]
I'm trying to figure out how to convert any phase response to the corresponding group delay response. Is there a way to do that, and vice versa if possible? The group delay of a filter is defined as minus the change in the phase response with respect to frequency. If the phase response of a filter is $\Phi(\omega)$, the corresponding group delay $\tau_g$ is given by: \begin{equation} \tau_g = -\frac{d\Phi(\omega)}{d\omega} \end{equation} In Matlab code, the group delay of a 4th order Butterworth filter can be calculated like so:

[b a] = butter( 4, 0.25 );          % design the filter
[Hz fVec] = freqz( b, a );          % compute the frequency response and angular frequencies
phi = unwrap(angle(Hz));            % extract phase response
taug = -diff(phi)/fVec(2);          % compute group delay
figure; plot( fVec(2:end), taug );  % plot the result

The phase response must be unwrapped to avoid $\pi$ to $-\pi$ discontinuities. Because the difference between adjacent frequency bins is the same, we need only scale -diff(phi) by 1/fVec(2). Finally, because the diff command produces a vector one element shorter than its input, the group delay values are plotted against fVec(2:end) and not fVec. Alternatively, you could use the Matlab command grpdelay, though you're likely to learn more from the previous!

figure; grpdelay(b,a);

this is how to convert the raw data that the phase response is derived from into the group delay without having to worry about unwrapping. i am assuming the time and frequency-domain data are discrete. group delay is the negative of the derivative of the unwrapped phase with respect to frequency. this derivative is approximated with a finite difference for discrete-frequency data (like what comes outa an FFT).
if $h[n]$ is real, then $H[k]$ is Hermitian symmetric: $H[N-k]=\overline{H[k]}$. and $H[0]$ must be real. so the complex angle of $H[0]$ is either $0$ or $\pm \pi$ depending on if $H[0]$ is positive or negative. $$ \arg\{ H[0] \} = \begin{cases} 0 & \text{for } \Re\{H[0]\} \ge 0 \\ \pm \pi & \text{for } \Re\{H[0]\} < 0 \\ \end{cases}$$ because $ \Im\{ H[0] \} = 0 $. then after getting your starting phase, you can calculate phase increments: $$\begin{align} \arg\{ H[k] \} - \arg\{ H[k-1] \} &= \arg\left\{ \frac{H[k]}{H[k-1]} \right\} \\ \\ &= \arg\left\{ \frac{H[k]\overline{H[k-1]}}{\Big|H[k-1]\Big|^2} \right\} \\ \\ &= \arg\left\{ H[k]\overline{H[k-1]} \right\} \\ \\ &= \arg\left\{ \big(\Re\{H[k]\}+j\Im\{H[k]\}\big)\big(\Re\{H[k-1]\}-j\Im\{H[k-1]\}\big) \right\} \\ \\ &= \arg\left\{ \Re\{H[k]\} \Re\{H[k-1]\} + \Im\{H[k]\} \Im\{H[k-1]\} + j\big( \Im\{H[k]\} \Re\{H[k-1]\} - \Re\{H[k]\} \Im\{H[k-1]\} \big) \right\} \\ \\ &= \arctan\left(\frac{ \Im\{H[k]\} \Re\{H[k-1]\} - \Re\{H[k]\} \Im\{H[k-1]\}} {\Re\{H[k]\} \Re\{H[k-1]\} + \Im\{H[k]\} \Im\{H[k-1]\}} \right) \\ \end{align}$$ or recursively $$ \arg\{ H[k] \} = \arg\{ H[k-1] \} + \arctan\left(\tfrac{ \Im\{H[k]\} \Re\{H[k-1]\} - \Re\{H[k]\} \Im\{H[k-1]\}} {\Re\{H[k]\} \Re\{H[k-1]\} + \Im\{H[k]\} \Im\{H[k-1]\}} \right) $$ for $ 1 \le k \le \tfrac{N}{2} $. this is assuming that every little phase increment is smaller in magnitude than $\tfrac{\pi}{2}$. this is how you deal with unwrapping phase naturally. now the negative of this phase increment is proportional to your group delay: $$\begin{align} \tau_\text{g}[k] &= -\frac{N}{2 \pi}\Big(\arg\{ H[k] \} -\arg\{ H[k-1] \}\Big) \\ \\ &= -\frac{N}{2 \pi} \arctan\left(\tfrac{ \Im\{H[k]\} \Re\{H[k-1]\} - \Re\{H[k]\} \Im\{H[k-1]\}} {\Re\{H[k]\} \Re\{H[k-1]\} + \Im\{H[k]\} \Im\{H[k-1]\}} \right) \\ \end{align}$$ where $\tau_\text{g}[k]$ is the group delay around angular frequency of $\omega = 2 \pi \frac{k}{N}$ and is in "units" of sample period. 
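For discrete DFT data, the phase-increment formula above is exactly the angle of $H[k]\overline{H[k-1]}$, which an `atan2`-style angle function computes directly, so no explicit unwrapping step is needed. A minimal sketch (the function name is my own), valid while each phase increment stays below $\pi$ in magnitude:

```python
import numpy as np

def group_delay_from_dft(H):
    """Group delay from DFT samples H[k], in units of the sample period.

    The increment arg{H[k]} - arg{H[k-1]} is computed as arg{H[k] * conj(H[k-1])},
    so the phase is unwrapped 'naturally', as described above."""
    N = len(H)
    dphi = np.angle(H[1:] * np.conj(H[:-1]))   # per-bin phase increments
    return -N / (2 * np.pi) * dphi             # tau_g[k] around omega = 2*pi*k/N
```

As a quick check, a pure $d$-sample delay has constant group delay $d$ at every bin.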
This should be: tau_g = -diff(unwrap(angle(Hz)))./diff(fVec); % compute group delay
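For reference, here is a rough NumPy/SciPy equivalent of the corrected Matlab snippet; `butter` and `freqz` exist in `scipy.signal` with similar semantics, and dividing by `diff(w)` handles non-uniform grids as well:

```python
import numpy as np
from scipy.signal import butter, freqz

b, a = butter(4, 0.25)                 # design the filter
w, Hz = freqz(b, a)                    # frequency response and angular frequencies
phi = np.unwrap(np.angle(Hz))          # unwrapped phase response
tau_g = -np.diff(phi) / np.diff(w)     # group delay in samples, at bin midpoints

# sanity check on a known case: a pure 3-sample delay (FIR b = [0,0,0,1])
w2, H2 = freqz([0, 0, 0, 1], [1])
tau2 = -np.diff(np.unwrap(np.angle(H2))) / np.diff(w2)
```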
I think one important thing that you must understand is that a Dirac delta impulse is not an ordinary function that can be evaluated. So you do not multiply your signal with a value of $\infty$. What happens instead is that you weigh the individual shifted Dirac impulses with the corresponding values of the signal due to the following identity: $$x(t)\delta(t-nT)=x(nT)\delta(t-nT)\tag{1}$$ which is true as long as $x(t)$ is continuous at $t=nT$. Consequently, multiplying a signal with a Dirac impulse train results in a weighted impulse train, where the weights are the signal values at the sample instants. So what happens is that from a continuous signal $x(t)$ you only retain the sample values $x(nT)$, but you still have an expression that can be considered a continuous-time signal (in the sense that it can be integrated or convolved with another function). Note that this is just a mathematical model. As pointed out in Carlos Danger's answer, what usually happens is that this impulse train is filtered, i.e., you get $$\left(\sum_nx(nT)\delta(t-nT)\right)\star h(t)\tag{2}$$ where $h(t)$ is the impulse response of the filter. Since $\delta(t-nT)\star h(t)=h(t-nT)$, Eq. $(2)$ equals $$\sum_nx(nT)h(t-nT)\tag{3}$$ which is now an ordinary function which can be evaluated for any $t$.
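Eq. $(3)$ is easy to sketch numerically. Assuming the ideal-lowpass reconstruction filter $h(t)=\mathrm{sinc}(t/T)$ (my choice for illustration; any reconstruction filter could be used) and a bandlimited test signal, the weighted sum of shifted impulse responses recovers $x(t)$ between the sample instants:

```python
import numpy as np

T = 0.1                                    # sample period (assumed value)
x = lambda t: np.cos(2 * np.pi * 1.5 * t)  # bandlimited: 1.5 Hz << 1/(2T) = 5 Hz

n = np.arange(-50, 51)                     # sample indices (sum truncated)
t = np.linspace(-1.0, 1.0, 201)            # evaluation times

# Eq. (3): sum_n x(nT) h(t - nT), with h(t) = sinc(t/T)
x_rec = sum(x(k * T) * np.sinc((t - k * T) / T) for k in n)
```

At the sample instants the reconstruction is exact (all other sinc terms vanish); in between, the small residual error here comes only from truncating the infinite sum.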
The maximum principle for Crank-Nicolson will hold if$$\mu \doteq \frac{k}{h^2} \leq 1$$for timestep $k$ and grid spacing $h$. In general, we can consider a $\theta$-scheme of the form$$u^{n+1} = u^n + \mu\left( (1-\theta)Au^n + \theta Au^{n+1}\right)$$where $A$ is the standard Laplacian matrix and $0 \leq \theta \leq 1$. If $\mu(1-2\theta) \leq \frac{1}{2}$, then the scheme is stable. (This can easily be shown by Fourier techniques.) However, the stronger criterion that $\mu(1-\theta) \leq \frac{1}{2}$ is needed for the maximum principle to hold in general. For a proof, see Numerical Solution of Partial Differential Equations by K. W. Morton. In particular, look at Sections 2.10 and 2.11 and Theorem 2.2. There's also a nice way to see that the maximum principle will not hold in general for Crank-Nicolson without a constraint on $\mu$. Consider the heat equation on $[0,1]$ with a discretization containing 3 points, including the boundary. Let $u_i^k$ denote the discretization at timestep $k$ and grid point $i$. Assume Dirichlet boundary conditions, so that $u^k_0 = u^k_2 = 0$ for all $k$. Then Crank-Nicolson reduces to$$\left(1 - \frac{\mu}{2}(-2)\right)u^{n+1}_1 = \left(1 + \frac{\mu}{2}(-2)\right)u^n_1,$$which can be further reduced to$$u^{n+1}_1 = \left(\frac{1-\mu}{1+\mu}\right)u^n_1.$$ If we consider the initial condition $u_1^0 = 1$, then we have$$u^n_1 = \left(\frac{1-\mu}{1+\mu}\right)^n,$$and though it will always be the case that $u^n_1 \leq 1$, we will nonetheless have that $u^n_1 < 0$ for odd $n$ unless $\mu \leq 1$. Thus the maximum/minimum principle is violated unless $\mu \leq 1$. This is particularly noteworthy in light of the fact that Crank-Nicolson is stable for any $\mu$. In response to foobarbaz's request, I've added a sketch of the proof.
The key is to write the scheme in the form\begin{align*}(1+2\theta\mu)u^{n+1}_j &= \theta\mu(u^{n+1}_{j-1} + u^{n+1}_{j+1})\\ &+ (1-\theta)\mu(u^n_{j-1} + u^n_{j+1})\\ &+ [1-2(1-\theta)\mu]u^n_j\end{align*} The hypothesis that $\mu(1-\theta)\leq \frac{1}{2}$ is exactly equivalent to the fact that all of the above coefficients are nonnegative. Now suppose that the maximum is attained at an interior point $u^{n+1}_j$. Note that all of $u^{n+1}_{j-1}$, $u^{n+1}_{j+1}$, $u^n_{j-1}$, $u^n_{j+1}$, $u^n_j$ are less than or equal to $u^{n+1}_j$ by assumption. If any of these is strictly less than $u^{n+1}_j$, then the above equality and the nonnegativity of the coefficients imply that \begin{align*}(1+2\theta\mu)u^{n+1}_j &> \theta\mu(u^{n+1}_{j-1} + u^{n+1}_{j+1})\\ &+ (1-\theta)\mu(u^n_{j-1} + u^n_{j+1})\\ &+ [1-2(1-\theta)\mu]u^n_j\\ &= (1+2\theta\mu)u^{n+1}_j\end{align*} which is a contradiction. It follows that the maximum must also be attained at all of the temporal and spatial neighbors of $u^{n+1}_j$, and a connectedness argument then implies that the discretization of $u$ must be constant in space and time, so that the maximum is still attained on the boundary. Note that this connectedness argument mirrors the proof of the analytic (i.e., not discretized) maximum principle.
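The one-interior-point counterexample above is easy to simulate. A minimal sketch (function name mine) showing the sign flip for $\mu > 1$, alongside the monotone decay for $\mu \le 1$:

```python
def cn_interior_point(mu, steps, u0=1.0):
    # Crank-Nicolson on one interior point reduces to
    # u^{n+1} = ((1 - mu)/(1 + mu)) * u^n
    u, history = u0, [u0]
    for _ in range(steps):
        u *= (1.0 - mu) / (1.0 + mu)
        history.append(u)
    return history
```

For $\mu = 3$ the ratio is $-1/2$: the iterates oscillate in sign (violating the minimum principle) while remaining bounded by 1 (stability). For $\mu = 1/2$ the ratio is $1/3$ and the iterates stay in $[0, 1]$.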
Ultimately, you'll need a mathematical proof of correctness. I'll get to some proof techniques for that below, but first, before diving into that, let me save you some time: before you look for a proof, try random testing. Random testing As a first step, I recommend you use random testing to test your algorithm. It's amazing how effective this is: in my experience, for greedy algorithms, random testing seems to be unreasonably effective. Spend 5 minutes coding up your algorithm, and you might save yourself an hour or two trying to come up with a proof. The basic idea is simple: implement your algorithm. Also, implement a reference algorithm that you know to be correct (e.g., one that exhaustively tries all possibilities and takes the best). It's fine if your reference algorithm is asymptotically inefficient, as you'll only run this on small problem instances. Then, randomly generate one million small problem instances, run both algorithms on each, and check whether your candidate algorithm gives the correct answer in every case. Empirically, if your candidate greedy algorithm is incorrect, you'll typically discover this during random testing. If it seems to be correct on all test cases, then you should move on to the next step: coming up with a mathematical proof of correctness. Mathematical proofs of correctness OK, so we need to prove our greedy algorithm is correct: that it outputs the optimal solution (or, if there are multiple optimal solutions that are equally good, that it outputs one of them). The basic principle is an intuitive one: Principle: If you never make a bad choice, you'll do OK. Greedy algorithms usually involve a sequence of choices. The basic proof strategy is that we're going to try to prove that the algorithm never makes a bad choice. Greedy algorithms can't backtrack -- once they make a choice, they're committed and will never undo that choice -- so it's critical that they never make a bad choice. What would count as a good choice?
If there's a single optimal solution, it's easy to see what is a good choice: any choice that's identical to the one made by the optimal solution. In other words, we'll try to prove that, at any stage in the execution of the greedy algorithms, the sequence of choices made by the algorithm so far exactly matches some prefix of the optimal solution. If there are multiple equally-good optimal solutions, a good choice is one that is consistent with at least one of the optima. In other words, if the algorithm's sequence of choices so far matches a prefix of one of the optimal solutions, everything's fine so far (nothing has gone wrong yet). To simplify life and eliminate distractions, let's focus on the case where there are no ties: there's a single, unique optimal solution. All the machinery will carry over to the case where there can be multiple equally-good optima without any fundamental changes, but you have to be a bit more careful about the technical details. Start by ignoring those details and focusing on the case where the optimal solution is unique; that'll help you focus on what is essential. There's a very common proof pattern that we use. We'll work hard to prove the following property of the algorithm: Claim: Let $S$ be the solution output by the algorithm and $O$ be the optimum solution. If $S$ is different from $O$, then we can tweak $O$ to get another solution $O^*$ that is different from $O$ and strictly better than $O$. Notice why this is useful. If the claim is true, it follows that the algorithm is correct. This is basically a proof by contradiction. Either $S$ is the same as $O$ or it is different. If it is different, then we can find another solution $O^*$ that's strictly better than $O$ -- but that's a contradiction, as we defined $O$ to be the optimal solution and there can't be any solution that's better than that. 
So we're forced to conclude that $S$ can't be different from $O$; $S$ must always equal $O$, i.e., the greedy algorithm always outputs the correct solution. If we can prove the claim above, then we've proven our algorithm correct. Fine. So how do we prove the claim? We think of a solution $S$ as a vector $(S_1,\dots,S_n)$ which corresponds to the sequence of $n$ choices made by the algorithm, and similarly, we think of the optimal solution $O$ as a vector $(O_1,\dots,O_n)$ corresponding to the sequence of choices that would lead to $O$. If $S$ is different from $O$, there must exist some index $i$ where $S_i \ne O_i$; we'll focus on the smallest such $i$. Then, we'll tweak $O$ by changing $O$ a little bit in the $i$th position to match $S_i$, i.e., we'll tweak the optimal solution $O$ by changing the $i$th choice to the one chosen by the greedy algorithm, and then we'll show that this leads to an even better solution. In particular, we'll define $O^*$ to be something like $$O^* = (O_1,O_2,\dots,O_{i-1},S_i,O_{i+1},O_{i+2},\dots,O_n),$$ except that often we'll have to modify the $O_{i+1},O_{i+2},\dots,O_n$ part slightly to maintain global consistency. Part of the proof strategy involves some cleverness in defining $O^*$ appropriately. Then, the meat of the proof will be in somehow using facts about the algorithm and the problem to show that $O^*$ is strictly better than $O$; that's where you'll need some problem-specific insights. At some point, you'll need to dive into the details of your specific problem. But this gives you a sense of the structure of a typical proof of correctness for a greedy algorithm. A simple example: Subset with maximal sum This might be easier to understand by working through a simple example in detail. Let's consider the following problem: Input: A set $U$ of integers, an integer $k$ Output: A set $S \subseteq U$ of size $k$ whose sum is as large as possible There's a natural greedy algorithm for this problem: Set $S := \emptyset$. 
For $i := 1,2,\dots,k$: Let $x_i$ be the largest number in $U$ that hasn't been picked yet (i.e., the $i$th largest number in $U$). Add $x_i$ to $S$. Random testing suggests this always gives the optimal solution, so let's formally prove that this algorithm is correct. Note that the optimal solution is unique, so we won't have to worry about ties. Let's prove the claim outlined above: Claim: Let $S$ be the solution output by this algorithm on input $U,k$, and $O$ the optimal solution. If $S \ne O$, then we can construct another solution $O^*$ whose sum is even larger than $O$'s. Proof. Assume $S \ne O$, and let $i$ be the index of the first iteration where $x_i \notin O$. (Such an index $i$ must exist, since we've assumed $S \ne O$ and by the definition of the algorithm we have $S=\{x_1,\dots,x_k\}$.) Since (by assumption) $i$ is minimal, we must have $x_1,\dots,x_{i-1} \in O$, and in particular, $O$ has the form $O=\{x_1,x_2,\dots,x_{i-1},x'_i,x'_{i+1},\dots,x'_k\}$, where the numbers $x_1,\dots,x_{i-1},x'_i,\dots,x'_k$ are listed in descending order. Looking at how the algorithm chooses $x_1,\dots,x_i$, we see that we must have $x_i > x'_j$ for all $j\ge i$. In particular, $x_i > x'_i$. So, define $O^* = (O \setminus \{x'_i\}) \cup \{x_i\}$, i.e., we obtain $O^*$ by deleting the $i$th number in $O$ and adding $x_i$. Now the sum of elements of $O^*$ is the sum of elements of $O$ plus $x_i-x'_i$, and $x_i-x'_i>0$, so $O^*$'s sum is strictly larger than $O$'s sum. This proves the claim. $\blacksquare$ The intuition here is that if the greedy algorithm ever makes a choice that is inconsistent with $O$, then we can prove $O$ could be even better if it was modified to include the element chosen by the greedy algorithm at that stage.
Since $O$ is optimal, there can't possibly be any way to make it even better (that would be a contradiction), so the only remaining possibility is that our assumption was wrong: in other words, the greedy algorithm will never make a choice that is inconsistent with $O$. This argument is often called an exchange argument or exchange lemma. We found the first place where the optimal solution differs from the greedy solution and we imagined exchanging that element of $O$ for the corresponding greedy choice (we exchanged $x'_i$ for $x_i$). Some analysis showed that this exchange can only improve the optimal solution -- but by definition, the optimal solution can't be improved. So the only conclusion is that there must not be any place where the optimal solution differs from the greedy solution. If you have a different problem, look for opportunities to apply this exchange principle in your specific situation.
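To tie the two halves of this answer together, here is what random testing against a brute-force reference looks like for the subset example (a minimal sketch; the names are mine):

```python
import itertools
import random

def greedy_max_subset(U, k):
    # the greedy algorithm: pick the k largest elements
    return sorted(U, reverse=True)[:k]

def brute_force_max_subset(U, k):
    # reference algorithm: exhaustively try every size-k subset
    return max(itertools.combinations(U, k), key=sum)

# random testing: compare the two on many small instances
random.seed(0)
for _ in range(1000):
    U = random.sample(range(-50, 50), random.randint(1, 8))
    k = random.randint(1, len(U))
    assert sum(greedy_max_subset(U, k)) == sum(brute_force_max_subset(U, k))
```

If the greedy algorithm were wrong, a mismatching instance would almost certainly turn up among the thousand random trials, pointing you at a concrete counterexample long before you attempt a proof.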
Instance: an integer $j$ and an $n$-vertex non-empty simple graph $G$ Parameter: integer $k$ Output: if there is a 3-coloring of $G$ in which at least one of the colors is used at most $\min(j,k)$ times then YES else NO. Since $j$ is part of the input and the only involvement of $k$ is via $\min(j,k)$, increasing $k$ cannot make 2.5-coloring easier. Even for $k=0$, that problem is logspace-complete. For $k$ in $n^{\Omega(1)}$, that problem is NP-hard by enlarging 3-coloring instances. By applying Reingold's result to the bipartite double cover of the subgraph induced by the non-guessed vertices, 2.5-coloring is in GC$\left(\max(k\cdot \lceil \log_2(n)\rceil ,n),\ \text{DSPACE}(O(\log(n)))\right)$, since the verifier has two-way access to the alleged proof. Is anything else known about the complexity of 2.5-coloring? In particular, I'd accept any non-trivial consequence of any answer to any of the following, since I do not expect anyone to manage to outright answer any of them: Is 2.5-coloring in coNTIME$\left(n^{o(k)}\right)\big/n^{o(k)}$? Is 2.5-coloring in rational-uniform ACC$^0$? Is 2.5-coloring with $k$ in $n^{o(1)}$ hard for rational-uniform TC$^0$? Is there a function $g$ in $\omega(\log)$ such that 2.5-coloring with $k$ in $n^{o(1)}$ is GC$(g(n),\text{AC}^0)$-hard? Is 2.5-coloring with $k$ in $n^{o(1)}$ hard for GC$\left(k\cdot O(\log(n)),\ \text{DSPACE}(O(\log(n)))\right)$? What about the "infinitely-often" versions of any of those questions, i.e., for pairs $k,n$ such that $k$ is in $n^{o(1)}$ and is arbitrarily large?
Does there exist a prime 3-manifold such that its mapping class group has an abelian representation in which the $2\pi$ rotation is represented by $-1$? In detail: Let $M$ be a closed orientable prime 3-manifold. Let $D_F(M,p)$ be the group of diffeomorphisms of $M$ that fix a point $p$ of $M$ and a frame there. Define the mapping class group (MCG) of $M$ to be the zeroth homotopy group of $D_F(M,p)$. Then the $2\pi$ rotation is an element of the MCG that is the equivalence class of the following diffeomorphism, $R_{2\pi}$: Consider a coordinate ball of radius 2, $B_2$, centred on $p$. $R_{2\pi}$ fixes the ball of radius 1, $B_1$, centred on $p$, and everything outside the sphere of radius 2. In between $B_1$ and $B_2$, $R_{2\pi}$ maps $(x,y,z)\rightarrow(x\cos\theta+y\sin\theta,\ y\cos\theta-x\sin\theta,\ z)$, where $\theta$ is a function of $r=\sqrt{x^2+y^2+z^2}$ which increases smoothly and monotonically from $0$ to $2\pi$ as $r$ increases from 1 to 2. The square of the $2\pi$ rotation is the identity in the MCG. A manifold is spinorial if $[R_{2\pi}]$ is non-trivial in the MCG. Background motivation: This question is interesting because of the possibility that fermions can be built on non-trivial spatial topology. $M$ is the manifold of a 3-D spatial hypersurface in spacetime. The fixed point is the point at infinity (where the metric is asymptotically flat) and fixing a frame there has the same effect on $\pi_0(D_F)$ as requiring some falloff conditions on the diffeomorphisms at infinity or requiring them to be the identity outside some ball. The configuration space of General Relativity in this asymptotically flat setting is (space of asymptotically flat metrics on $M$)/$D_F$ and its first homotopy group is isomorphic to (what I called above) the MCG, see http://arxiv.org/abs/math-ph/0606066 (I know it is not the usual definition of MCG).
The quantum state, on canonical quantisation of General Relativity, carries a unitary irreducible representation (UIR) of the MCG and different choices of UIR give different physics. Prime 3-manifolds are potentially candidates for elementary particles built from pure geometry: topological geons (this is speculative!). A prime 3-manifold can be the basis for a spinorial particle (i.e. spin 1/2, spin 3/2 ....) if $R_{2\pi}$ is nontrivial. Because particles must be able to be pair produced and annihilated, topology change must be allowed in the theory which means that the theory should be quantised in a sum-over-histories framework rather than a canonical quantisation framework. Within the sum-over-histories framework it is challenging to realise nonabelian reps of MCG. Abelian reps on the other hand are more easily accommodated by attaching phases to topologically distinct sectors of the path integral. Moreover certain rules that would result in a spin-statistics correlation for topological geons would also force the reps to be abelian, http://arxiv.org/abs/gr-qc/9609064 (hence the need for abelian reps). However, if there were no spinorial primes with abelian reps this would rule out spinorial geons and therefore fermions.