In the classical linear model we have $$Y=X\beta +\epsilon,$$ where $Y \in \mathbb{R}^n$ is the observation vector, $X\in \mathbb{R}^{n\times p}$ is the known covariate matrix, $\beta \in \mathbb{R}^p$ is the unknown parameter with $p < n$, and $\epsilon \in \mathbb{R}^{n}$ with $\epsilon \sim \mathcal{N}(0,\sigma^{2}I)$.
The classical least squares estimator here would be $$\hat{\beta}= (X^TX)^{-1}X^TY.$$
If $\beta^{0}$ is the true parameter, then the scaled prediction error is $E=||X(\hat{\beta}- \beta^{0})||_2^2/\sigma^2$. We have that $$E=||X(\hat{\beta}- \beta^{0})||_2^2/\sigma^2=||X(X^TX)^{-1}X^T \epsilon||_2^2/\sigma^2=||\gamma||_2^2/\sigma^2, \quad \text{where } \gamma = X(X^TX)^{-1}X^T\epsilon.$$
It is easy to see that $\gamma \sim \mathcal{N}(0,\sigma^2 X(X^TX)^{-1}X^T)$. However it is claimed in
High Dimensional Statistics by
Bühlmann and van de Geer (on page 101) that $E$ is distributed according to a chi-square distribution with $p$ degrees of freedom. I cannot see how this is true (it would be true if $\gamma \sim \mathcal{N}(0,D)$ for some diagonal matrix $D$ with non-negative diagonal entries). Am I missing something here?
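A quick pure-Python simulation (with a made-up $4\times 2$ design matrix, not from the book) illustrates the claim: the hat matrix $P=X(X^TX)^{-1}X^T$ is a rank-$p$ orthogonal projection, so $\|P\epsilon\|_2^2/\sigma^2$ should average to $p$, the mean of a $\chi^2_p$ variable, even though $P$ is not diagonal.

```python
import random

random.seed(0)

# Hypothetical small design matrix X (n = 4, p = 2); any full-rank X works.
X = [[1.0, 0.2], [1.0, 1.5], [1.0, -0.7], [1.0, 2.1]]
n, p, sigma = 4, 2, 1.3

# Build XtX (2x2), invert it by hand, and form the hat matrix
# H = X (XtX)^{-1} Xt, which is the projection appearing in gamma.
XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
det = XtX[0][0] * XtX[1][1] - XtX[0][1] * XtX[1][0]
XtXinv = [[XtX[1][1] / det, -XtX[0][1] / det],
          [-XtX[1][0] / det, XtX[0][0] / det]]
H = [[sum(X[a][i] * XtXinv[i][j] * X[b][j] for i in range(p) for j in range(p))
      for b in range(n)] for a in range(n)]

# H is a rank-p orthogonal projection, so trace(H) = p.
trace = sum(H[a][a] for a in range(n))

# Monte Carlo: E = ||H eps||^2 / sigma^2 should average to p, the mean of
# a chi-square variable with p degrees of freedom.
trials, total = 20000, 0.0
for _ in range(trials):
    eps = [random.gauss(0.0, sigma) for _ in range(n)]
    He = [sum(H[a][b] * eps[b] for b in range(n)) for a in range(n)]
    total += sum(v * v for v in He) / sigma**2
print(trace, total / trials)  # trace ~ 2, empirical mean ~ 2
```

Informally, writing $P=UU^T$ with $U$ having $p$ orthonormal columns turns $\|P\epsilon\|^2/\sigma^2$ into a sum of $p$ squared independent standard normals; no diagonal covariance is needed.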
True or False Problems of Vector Spaces and Linear Transformations Problem 364
These are True or False problems.
For each of the following statements, determine whether it is true or false. Let $A$ be a $5\times 3$ matrix. Then the range of $A$ is a subspace in $\R^3$. The function $f(x)=x^2+1$ is not in the vector space $C[-1,1]$ because $f(0)=1\neq 0$. Since we have $\sin(x+y)=\sin(x)+\sin(y)$, the function $\sin(x)$ is a linear transformation. The set \[\left\{\, \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \,\right\}\] is an orthonormal set.
(Linear Algebra Exam Problem, The Ohio State University)
Solution.

(1) True or False? Let $A$ be a $5\times 3$ matrix. Then the range of $A$ is a subspace in $\R^3$.
The answer is “False”. The definition of the range of the $5 \times 3$ matrix $A$ is
\[ \calR(A)=\{\mathbf{y}\in \R^5 \mid A\mathbf{x}=\mathbf{y} \text{ for some $\mathbf{x} \in \R^3$}\}.\] Note that for the matrix product $A\mathbf{x}$ to make sense, the vector $\mathbf{x}$ must be $3$-dimensional because $A$ is $5\times 3$. Hence $\mathbf{y}=A\mathbf{x}$ is a $5$-dimensional vector, and thus the range $\calR(A)$ is a subspace of $\R^5$. (2) True or False? The function $f(x)=x^2+1$ is not in the vector space $C[-1,1]$ because $f(0)=1\neq 0$.
The answer is “False”. The vector space $C[-1, 1]$ consists of all continuous functions defined on the interval $[-1, 1]$. Since $f(x)=x^2+1$ is a continuous function defined on $[-1, 1]$, it is in the vector space $C[-1, 1]$. The condition $f(0)=1\neq 0$ is irrelevant.
(3) True or False? Since we have $\sin(x+y)=\sin(x)+\sin(y)$, the function $\sin(x)$ is a linear transformation.
The answer is “False”. First of all, $\sin(x+y)\neq \sin(x)+\sin(y)$ in general. For example, let $x=y=\pi/2$. Then
\[\sin\left(\,\frac{\pi}{2}+\frac{\pi}{2}\,\right)=\sin\left(\,\pi \,\right)=0\] and \[\sin\left(\, \frac{\pi}{2} \,\right)+\sin\left(\, \frac{\pi}{2} \,\right)=1+1=2.\] Hence $\sin(x)$ is not a linear transformation. (4) True or False? The given set is an orthonormal set.
The answer is “False”. The dot product of these vectors is
\[\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\cdot \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}=1\cdot 0+ 0\cdot 1 +0\cdot 1=0.\] Thus, the vectors are orthogonal. However, the length of the second vector is \[\sqrt{0^2+1^2+1^2}=\sqrt{2},\] hence it is not a unit vector. So the set is orthogonal, but not an orthonormal set.
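The arithmetic in part (4) can be checked in a few lines of Python (a throwaway sketch, not part of the exam): an orthonormal set needs pairwise dot products equal to 0 and every vector of length 1.

```python
import math

# The two vectors from part (4).
u = [1, 0, 0]
v = [0, 1, 1]

dot = sum(a * b for a, b in zip(u, v))
len_u = math.sqrt(sum(a * a for a in u))
len_v = math.sqrt(sum(a * a for a in v))

print(dot)    # 0, so the set is orthogonal
print(len_v)  # sqrt(2) ~ 1.414, so the second vector is not a unit vector
```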
Mechanical Properties of Solids: Hooke's law and Elastic Moduli. For a given material, longitudinal strain : shear strain : bulk strain = 1 : 2 : 3. The elastic limit is the maximum value of stress within which a body can regain its original size and shape. Hooke's law states that within the elastic limit, stress is directly proportional to strain. The modulus of elasticity E is defined as the ratio of stress to strain produced in a body. Within the elastic limit, the stress-strain graph is a straight line passing through the origin; the slope of the stress-strain graph gives the modulus of elasticity. The point where elasticity ends and plasticity begins is called the yield point. The permanent increase in length of the wire after removing the load is called the permanent set. The stress required to break a wire is called the breaking stress; mathematically, breaking stress is the breaking force per unit area. Breaking stress depends upon the nature of the material but is independent of dimensions. Breaking force is independent of the length of the wire, but depends upon the nature of the material and the area of cross section. Poisson's ratio is defined as the ratio of lateral contraction strain to longitudinal elongation strain. Poisson's ratio has theoretical limits −1 to 0.5 and practical limits 0 to 0.5; it has no units and no dimensions. Elastic fatigue is the state of temporary loss of the elastic nature of a material. The delay in regaining the original state on removal of the deforming force on a body is called the elastic after-effect. For a perfectly plastic body the elastic after-effect is infinite.
1. Hooke's Law: Within the limit of elasticity, the stress is proportional to the strain. Stress ∝ Strain, i.e. Stress = E × Strain
2. Increment in the length of wire l=\frac{FL}{\pi r^{2}Y}
3. Breaking force = P × A
4. Young's Modulus of Elasticity: It is defined as the ratio of normal stress to the longitudinal strain within the elastic limit. \tt Y=\frac{Normal\ stress}{Longitudinal\ strain} \tt Y=\frac{Fl}{A\Delta l}=\frac{Mg\ l}{\pi\ r^{2}\Delta l}
5. Bulk Modulus of Elasticity: It is defined as the ratio of normal stress to the volumetric strain within the elastic limit. \tt K=\frac{Normal\ stress}{Volumetric\ strain} \tt K=\frac{FV}{A\Delta V}=\frac{\Delta p V}{\Delta V}
6. Modulus of Rigidity (η) (Shear Modulus): It is defined as the ratio of tangential stress to the shearing strain, within the elastic limit. \tt \eta=\frac{Tangential\ stress}{Shearing\ strain} \tt \eta=\frac{F}{A \theta}
7. Compressibility: Compressibility of a material is the reciprocal of its bulk modulus of elasticity. Compressibility (C) = \tt \frac{1}{K}
8. Breaking stress: Breaking stress is fixed for a material, but breaking force varies with the area of cross-section of the wire. Safety factor = \tt \frac{Breaking\ stress}{Working\ stress}
9. Thermal stress: When the temperature of a rod fixed at both ends is changed, the stress produced is called thermal stress. Thermal stress \tt \frac{F}{A}=Y\alpha\ \Delta \theta
10. Poisson's ratio (σ)= \tt \frac{Lateral\ strain}{Longitudinal\ strain}=\frac{-\Delta R/R}{\Delta l/l}
11. Relation between Y, K, η and σ: (i) Y = 3K (1 − 2σ) (ii) Y = 2η (1 + σ) (iii) \tt \sigma=\frac{3K-2\eta}{2\eta+6K} (iv) \tt \frac{9}{Y}=\frac{1}{K}+\frac{3}{\eta}\ or\ Y=\frac{9K \eta}{\eta+3K}
12. Potential energy in a stretched wire
U = Average force × Increase in length = \frac{1}{2}\ F\Delta l
13. Elastic potential energy of a stretched spring = \frac{1}{2}\ kx^{2}
where k = force constant of the spring and x = change in length.
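Relations (i)-(iv) in item 11 are mutually consistent, which is easy to verify numerically. The inputs below are rough, steel-like illustrative values (K = 160 GPa, η = 80 GPa) chosen by me, not data from the notes:

```python
# Numerical consistency check of relations (i)-(iv) in item 11.
K, eta = 160e9, 80e9

Y = 9 * K * eta / (eta + 3 * K)                # relation (iv)
sigma = (3 * K - 2 * eta) / (2 * eta + 6 * K)  # relation (iii)

# Relations (i) and (ii) then hold automatically:
assert abs(Y - 3 * K * (1 - 2 * sigma)) < 1e-3 * Y
assert abs(Y - 2 * eta * (1 + sigma)) < 1e-3 * Y
print(Y / 1e9, sigma)  # Young's modulus in GPa and Poisson's ratio
```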
Let $E$ be a vector bundle, with $X$ base space and $p:E\to X$ a surjective projection.
Let $x\in X$ be given. Let $U_1, U_2$ be two neighborhoods of $x$ in $X$ which carry local trivializations, that is, $$\varphi_1:p^{-1}(U_1)\cong U_1 \times V_1$$and $$ \varphi_2:p^{-1}(U_2)\cong U_2 \times V_2$$ are two homeomorphisms, where $V_i\in obj(Vect_{\mathbb{C}})$ for $i=1,2$.
Then clearly, $$\left.\varphi_1\right|_{p^{-1}(U_1\cap U_2)}:p^{-1}(U_1\cap U_2)\cong (U_1\cap U_2) \times V_1$$ and $$\left.\varphi_2\right|_{p^{-1}(U_1\cap U_2)}:p^{-1}(U_1\cap U_2)\cong (U_1\cap U_2) \times V_2$$ are two homeomorphisms (note $p^{-1}(U_1)\cap p^{-1}(U_2)=p^{-1}(U_1\cap U_2)$) and thus $$\left.\varphi_2\right|_{p^{-1}(U_1\cap U_2)}\circ \left(\left.\varphi_1\right|_{p^{-1}(U_1\cap U_2)}\right)^{-1}: (U_1\cap U_2) \times V_1 \cong (U_1\cap U_2) \times V_2 $$ is also a homeomorphism.
If $p_2:(U_1\cap U_2)\times V_2 \to V_2$ is the projection onto the second component, define the map $t_x:V_1\to V_2$ by $$ V_1\ni v_1 \mapsto p_2\left(\left.\varphi_2\right|_{p^{-1}(U_1\cap U_2)}\circ \left(\left.\varphi_1\right|_{p^{-1}(U_1\cap U_2)}\right)^{-1}((x,v_1))\right) \in V_2. $$ My question is: how do you see that $t_x$ is a
linear homeomorphism and not merely a homeomorphism?
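For what it's worth, here is a toy numerical sketch of the point at issue: linearity of $t_x$ comes from the requirement, usually built into the definition of a vector bundle, that the trivializations be linear on each fiber; mere homeomorphisms would not force it. The function $g$, the map, and all numbers below are hypothetical choices of mine:

```python
import math

# Toy model (my own construction, not from the question): a trivial line
# bundle over R, with two trivializations that differ fiberwise by
# multiplication by g(x) = e^x.  The induced transition map t_x is then
# v -> g(x) * v, which is linear in v for each fixed base point x.
def g(x):
    return math.exp(x)

def t_x(x, v):
    # phi2 o phi1^{-1} acts as (x, v) -> (x, g(x) * v); project to the fiber
    return g(x) * v

x = 0.7
a, b, v, w = 2.0, -3.0, 1.5, 0.25
# Linearity check at the fixed base point x:
assert abs(t_x(x, a * v + b * w) - (a * t_x(x, v) + b * t_x(x, w))) < 1e-12
print("t_x is linear in the fiber for this toy bundle")
```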
This question already has an answer here:
I would like the limits to be vertically above and below the summation sign. I also need a larger summation sign... This is currently what I have:
$x^2\sin(x) = \sum_{n=-\infty\atop n\ne \pm 1}^\infty \dfrac {4i(-1)^{n}n}{(n^2 - 1)^2} $
Any help appreciated!
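For the record, one common way to get both effects at once is display math together with amsmath's `\substack` (this is a generic sketch, assuming `\usepackage{amsmath}` is loaded in the preamble):

```latex
% assumes \usepackage{amsmath} in the preamble
\[
  x^2 \sin(x) = \sum_{\substack{n=-\infty \\ n \neq \pm 1}}^{\infty}
                \frac{4i(-1)^{n} n}{(n^{2}-1)^{2}}
\]
% in running text, \displaystyle\sum_{\substack{...}}^{\infty} gives the
% same large sum with the limits above and below the sign
```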
Published May 2002, February 2011.
1. Introduction
Do the angles of a triangle add up to 180 degrees or $\pi$ radians? The answer is 'sometimes yes, sometimes no'. Is this an important question? Yes, because it leads to an understanding that there are different geometries based on different axioms or 'rules of the game of geometry'. Is it a meaningful question? Well no, at least not until we have agreed on the meaning of the words 'angle' and 'triangle', not until we know the rules of the game. In this article we briefly discuss the underlying axioms and give a simple proof that the sum of the angles of a triangle on the surface of a unit sphere is not equal to $\pi$ but to $\pi$ plus the area of the triangle. We shall use the fact that the area of the surface of a unit sphere is $4\pi$.
2. The Big Theorem
Before we can say what a triangle is we need to agree on what we mean by points and lines. We are working on spherical geometry (literally geometry on the surface of a sphere). In this geometry the space is the surface of the sphere; the points are points on that surface, and the line of shortest distance between two points is the great circle containing the two points. A great circle (like the Equator) cuts the sphere into two equal hemispheres. This geometry has obvious applications to distances between places and air-routes on the Earth.
Rotating sphere showing great circle
The angle between two great circles at a point P is the Euclidean angle between the directions of the circles (or strictly between the tangents to the circles at P). This presents no difficulty in navigation on the Earth because at any given point we think of the angle between two directions as if the Earth were flat at that point.
A lune is a part of the surface of the sphere bounded by two great circles which meet at antipodal points. We first consider the area of a lune and then introduce another great circle that splits the lune into triangles.
Rotating sphere showing 4 lunes

Lemma.
The area of a lune on a sphere of unit radius is twice its angle; that is, if the angle of the lune is A then its area is 2A. Two great circles intersecting at antipodal points P and P' divide the sphere into 4 lunes. The area of the surface of a unit sphere is $4\pi$.
The areas of the lunes are proportional to their angles at P so the area of a lune with angle A is
${\frac{A}{2\pi}\times {4\pi}= {2A}}$
Exercise 1.
What are the areas of the other 3 lunes? Do your 4 areas add up to $4\pi$?
The sides of a triangle ABC are segments of three great circles which actually cut the surface of the sphere into eight spherical triangles. Between the two great circles through the point A there are four angles. We label the angle inside triangle ABC as angle A, and similarly the other angles of triangle ABC as angle B and angle C.
Rotating sphere showing 8 triangles

Exercise 2
Rotating the sphere, can you name the eight triangles and say whether any of them have the same area?
Theorem.
Consider a spherical triangle ABC on the unit sphere with angles A, B and C. Then the area of triangle ABC is
A + B + C - $\pi$.
The diagram shows a view looking down on the hemisphere which has the line through AC as its boundary. The regions marked Area 1 and Area 3 are lunes with angles A and C respectively. Consider the lunes through B and B'. Triangle ABC is congruent to triangle A'B'C' so the bow-tie shaped shaded area, marked Area 2, which is the sum of the areas of the triangles ABC and A'BC', is equal to the area of the lune with angle B, that is equal to 2B.
So in the diagram we see the areas of three lunes and, using the lemma, these are:
Area 1 = 2A
Area 2 = 2B Area 3 = 2C
In adding up these three areas we include the area of the triangle ABC three times. Hence
Area 1 + Area 2 + Area 3 = Area of hemisphere +2(Area of triangle ABC)
2A + 2B + 2C = 2$\pi$ + 2(Area of triangle ABC)

Area of triangle ABC = A + B + C - $\pi$.
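The theorem can be sanity-checked numerically. The sketch below (my own, not from the article) uses the octant triangle with vertices on the three coordinate axes: all three of its angles are right angles, and it covers one eighth of the sphere, so the angular excess A + B + C - $\pi$ should equal the area $4\pi/8 = \pi/2$.

```python
import math

def angle_at(P, Q, R):
    # Spherical angle at vertex P of triangle PQR on the unit sphere: the
    # angle between the tangent directions at P toward Q and toward R.
    def tangent(A, B):
        d = sum(a * b for a, b in zip(A, B))
        t = [b - d * a for a, b in zip(A, B)]
        n = math.sqrt(sum(c * c for c in t))
        return [c / n for c in t]
    u, v = tangent(P, Q), tangent(P, R)
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))

# Octant triangle: vertices on the three coordinate axes.
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
angles = [angle_at(A, B, C), angle_at(B, C, A), angle_at(C, A, B)]

excess = sum(angles) - math.pi      # by the theorem: area = A + B + C - pi
print(excess, 4 * math.pi / 8)      # both ~ pi/2 = 1.5707963...
```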
3. Non-Euclidean Geometry
Sometimes revolutionary discoveries are nothing more than actually seeing what has been under our noses all the time. This was the case over the discovery of Non-Euclidean Geometry in the nineteenth century. For some 2000 years after Euclid wrote his 'Elements' in 325 BC people tried to prove the parallel postulate as a theorem in the geometry from the other axioms but always failed and that is a long story. Meanwhile mathematicians were using spherical geometry all the time, a geometry which obeys the other axioms of Euclidean Geometry and contains many of the same theorems, but in which the parallel postulate does not hold. All along they had an example of a Non-Euclidean Geometry under their noses.
Think of a line L and a point P not on L. The big question is: "How many lines can be drawn through P parallel to L?" In Euclidean Geometry the answer is "exactly one" and this is one version of the parallel postulate. If the sum of the angles of every triangle in the geometry is $\pi$ radians then the parallel postulate holds and vice versa; the two properties are equivalent.
In spherical geometry, the basic axioms which we assume (the rules of the game) are different from Euclidean Geometry - this is a Non-Euclidean Geometry. We have seen that in spherical geometry the angles of triangles do not always add up to $\pi$ radians so we would not expect the parallel postulate to hold. In spherical geometry, the straight lines (lines of shortest distance or geodesics) are great circles and every line in the geometry cuts every other line in two points. The answer to the big question about parallels is: "If we have a line L and a point P not on L then there are no lines through P parallel to the line L."
The Greek mathematicians (for example Ptolemy c 150) computed the measurements of right angled spherical triangles and worked with formulae of spherical trigonometry, and Arab mathematicians (for example Jabir ibn Aflah c 1125 and Nasir ed-din c 1250) extended the work even further. The formula discussed in this article was discovered by Harriot in 1603 and published by Girard in 1629. Further ideas of the subject were developed by Saccheri (1667 - 1733).
All this went largely un-noticed by the 19th century discoverers of hyperbolic geometry, which is another Non-Euclidean Geometry where the parallel postulate does not hold. In spherical geometry (also called elliptic geometry) the angles of triangles add up to more than $\pi$ radians and in hyperbolic geometry the angles of triangles add up to less than $\pi$ radians.
For further reading see the article by Alan Beardon 'How many Geometries Are There?' and the article by Keith Carne 'Strange Geometries' . There are some practical activities that you can try for yourself to explore these geometries further to be found at http://nrich.maths.org/MOTIVATE/conf8/index.html
This is essentially an addition to the list of @4tnemele
I'd like to add some earlier work to this list, namely Discrete Gauge Theory.
Discrete gauge theory in 2+1 dimensions arises by breaking a gauge symmetry with gauge group $G$ to some smaller
discrete subgroup $H$, via a Higgs mechanism. The force carriers ('photons') become massive, which makes the gauge force ultra-short ranged. However, as the gauge group is not completely broken, we still have the Aharonov-Bohm effect. If $H$ is Abelian this AB effect is essentially a 'topological force'. It gives rise to a phase change when one particle loops around another particle. This is the idea of fractional statistics of Abelian anyons.
The particle types that we can construct in such a theory (i.e. the ones that are "color neutral") are completely determined by the residual, discrete gauge group $H$. To be more precise: a particle is said to be charged if it carries
a representation of the group $H$. The number of different particle types that carry a charge is then equal to the number of irreducible representations of the group $H$. This is similar to ordinary Yang-Mills theory, where charged particles (quarks) carry the fundamental representation of the gauge group $SU(3)$. In a discrete gauge theory we can label all possible charged particle types using the representation theory of the discrete gauge group $H$.
But there are also other types of particles that can exist, namely those that carry flux. These flux carrying particles are also known as magnetic monopoles. In a discrete gauge theory the flux-carrying particles are labeled by the
conjugacy classes of the group H. Why conjugacy classes? Well, we can label flux-carrying particles by elements of the group H. A gauge transformation is performed through conjugacy, where $ |g_i\rangle \rightarrow |hg_ih^{-1}\rangle $ for all particle states $|g_i\rangle$ (suppressing the coordinate label). Since states related by gauge transformations are physically indistinguishable the only unique flux-carrying particles we have are labeled by conjugacy classes.
Is that all then? Nope. We can also have particles which carry both charge and flux -- these are known as dyons. They are labeled by both an irrep and a conjugacy class of the group $H$. But, for reasons which I won't go into, we cannot take all possible combinations of possible charges and fluxes.
(It has to do with the distinguishability of the particle types. Essentially, a dyon is labeled by $|\alpha, \Pi\rangle$, where $\alpha$ is a conjugacy class and $\Pi$ is a representation of the associated normalizer $N(\alpha)$ of the conjugacy class $\alpha$.)
The downside of this approach is the rather unequal treatment of flux-carrying particles (which are labeled by conjugacy classes), charged particles (labeled by representations) and dyons (flux plus compatible charge). A unifying picture is provided by making use of the (quasitriangular) Hopf algebra $D(H)$, also known as a quantum double of the group $H$.
In this language
all particles are (irreducible) representations of the Hopf algebra $D(H)$. A Hopf algebra is endowed with certain structures which have very physical counterparts. For instance, the existence of a tensor product allows for the existence of multiple-particle configurations (each particle labeled by its own representation of the Hopf algebra). The co-multiplication then defines how the algebra acts on this tensored space. The existence of an antipode (which is a certain mapping from the algebra to itself) ensures the existence of an anti-particle. The existence of a unit labels the vacuum (= trivial particle).
We can also go beyond the structure of a Hopf algebra and include the notion of an R-matrix. In fact, the quasitriangular Hopf Algebra (i.e. the quantum double) does precisely this: add the R-matrix mapping. This R-matrix describes what happens when one particle loops around another particle (braiding). For non-Abelian groups $H$ this leads to non-Abelian statistics. These quasitriangular Hopf algebras are also known as quantum groups.
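As a concrete illustration of the counting described above, here is a sketch I wrote for $H=S_3$, the smallest non-Abelian group. The particle types of $D(H)$ are pairs of a conjugacy class and an irrep of the centralizer of a class representative (what the text above calls the normalizer), and since the number of irreps of a finite group equals its number of conjugacy classes, everything reduces to class counting:

```python
from itertools import permutations

# Particle types of the quantum double D(H) for H = S3: one type for each
# pair (conjugacy class of H, irrep of the centralizer of a representative).
G = list(permutations(range(3)))          # S3 as permutation tuples

def mul(a, b):                            # composition: apply b, then a
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    r = [0] * 3
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

def conj_classes(H):
    # conjugacy classes of the (sub)group H, conjugating within H
    seen, classes = set(), []
    for g in H:
        if g not in seen:
            cls = {mul(mul(h, g), inv(h)) for h in H}
            seen |= cls
            classes.append(sorted(cls))
    return classes

total = 0
for cls in conj_classes(G):
    rep = cls[0]
    centralizer = [h for h in G if mul(h, rep) == mul(rep, h)]
    # number of irreps of a finite group = number of its conjugacy classes
    total += len(conj_classes(centralizer))
print(total)   # 8 particle types for D(S3)
```

The three classes of $S_3$ (identity, transpositions, 3-cycles) contribute 3 + 2 + 3 = 8 particle types in total.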
Nowadays the language of discrete gauge theory has been replaced by more general structures, referred to as topological field theories, anyon models or even modular tensor categories. The subject is huge, very rich, very physical and a lot of fun (if you're a bit nerdy ;)).
Sources:
http://arxiv.org/abs/hep-th/9511201 (discrete gauge theory)
http://www.theory.caltech.edu/people/preskill/ph229/ (lecture notes: check out chapter 9. Quite accessible!)
http://arxiv.org/abs/quant-ph/9707021 (a simple lattice model with anyons. There are definitely more accessible review articles of this model out there though.)
http://arxiv.org/abs/0707.1889 (review article, which includes potential physical realizations)
So, I've wondered if there are irrationals where I can calculate some digit $n\in\mathbb{N}$ when given only the $k_n\in \mathbb{N}_0$ ($k_n<n-1$) digits before it. Preferably $k_n\in o(n)$.
I think this is one of the problems which is easy to state but hard to answer.
So of course, if $k_n$ were positive and fixed, this is just asking when digits of an irrational can be expressed as a recurrence relation $(d_n)$ with recursion depth $k$ (as pointed out in the comments, with fixed $k$ such a number would always be rational). And of course there are infinitely many examples where the digit sequence can be easily described, e.g. the Liouville number $\sum 10^{-\ell!}$ ($d_n=1$ if and only if $n=\ell!$ for some $\ell\in\mathbb{N}_0$, else $d_n=0$, so $k_n=0$), the 123...-sequence (slightly more complicated closed form) and any multiple of them.
So I'm asking:
Do you know any irrational that wasn't (more or less) defined by a digit sequence but turned out to have (a relatively easy) one?
By "more or less" I mean that I count the 123...-Sequence to the obvious ones. By "relatively easy" I mean any formula that isn't cheating the question. E.g. "$\pi$ has the digit sequence that describes the digits of $\pi$" is not helping, writing $$d_n (\pi) = \left\lfloor 10^n \pi\right\rfloor - 10\left\lfloor 10^{n-1} \pi\right\rfloor$$ or substituting $\pi$ by a formula evaluating to it in one of the two is not helping either and not in the spirit of the question. The irrational number should not be used in the formula.
EDIT: Through a related question I found the promising article On the rapid computation of various polylogarithmic constants by David Bailey, Peter Borwein and Simon Plouffe, in which they describe an algorithm for finding any digit of a number of the form $$\sum_{\ell=0}^\infty \frac{p(\ell)}{b^{c\ell}q(\ell)}$$ where $p$ and $q$ are polynomials and $c$ is a positive integer. They also show that $\pi$, $\pi^2$, $\log(2)$ and $\log^2(2)$ happen to be of this form. If their $\frac{p(\ell)}{q(\ell)}$ were always nonnegative integers, they would form the desired examples, but from what I understand from the paper it is not so easy. I want something more substantial than simple computability.
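For concreteness, here is a compact sketch of hex-digit extraction for $\pi$ along the lines of the Bailey-Borwein-Plouffe paper (my own implementation, not theirs): the modular-exponentiation trick is what makes digit $n$ accessible without computing digits $1,\dots,n-1$.

```python
# Digit extraction for pi in base 16, using the series
# pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)).
def pi_hex_digit(n):
    """Hex digit of pi at position n after the point (1-indexed)."""
    def S(j):
        # fractional part of 16^(n-1) * sum_k 1/(16^k (8k+j))
        s = 0.0
        for k in range(n):
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n
        while True:                       # tail terms: 16^(n-1-k) < 1 here
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            if term < 1e-17:
                break
            s = (s + term) % 1.0
            k += 1
        return s
    x = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    return "%x" % int(16 * x)

print("".join(pi_hex_digit(i) for i in range(1, 7)))   # 243f6a (pi = 3.243f6a88...)
```

Note this gives digit-at-a-position computability in base 16, not a formula for digit $n$ from the preceding $k_n$ digits, which is why the question asks for something more substantial.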
On the complex plane $\mathbb C$ consider the half-open square $$\square=\{z\in\mathbb C:0\le\Re(z)<1,\;0\le\Im(z)<1\}.$$
Observe that for every $z\in \mathbb C$ and $p\in\{0,1,2,3\}$ the set $(z+i^p\cdot\square)$ is the shifted and rotated square $\square$ with a vertex at $z$.
Problem. Is it true that for any function $p:\mathbb C\to\{0,1,2,3\}$ there is a subset $Z\subset\mathbb C$ such that the union of the squares $$\bigcup_{z\in Z}(z+i^{p(z)}\cdot\square)$$ is not Borel in $\mathbb C$? Added in Edit. As @YCor observed in his comment, the answer to this problem is affirmative under $\neg CH$.
An affirmative answer to Problem would follow from an affirmative answer to another intriguing
Problem'. Is it true that for any partition $\mathbb C=A\cup B$ either $A$ contains an uncountable strictly increasing function or $B$ contains an uncountable strictly decreasing function?
Here by a
function I understand a subset $f\subset \mathbb C$ such that for any $x\in\mathbb R$ the set $f(x)=\{y\in\mathbb R:x+iy\in f\}$ contains at most one element. Added in the Next Edit. In the discussion with @YCor we came to the conclusion that under CH the answer to both problems is negative. Therefore, both problems are independent of ZFC. Very strange.
I would like to know what the equation is for a series of infinite terms which are multiplied by the order of the terms: $$ \sum_{i=0}^{\infty} \sum_{j=0}^{\infty}(ij) a^ib^j $$ $a$ and $b$ are both fractions. Thanks to the answers provided on the question "Simple approximation to a series of infinite terms", I assume that this simplifies to: $$ \sum_{i=0}^{\infty} ia^i \cdot \sum_{j=0}^{\infty}jb^j $$ A simple formula similar to the answers provided in the previous question would be much appreciated.
Assuming that $a$ and $b$ are constants with an absolute value less than 1.
Looking at each summation individually, we know from the geometric (Neumann) series that
$\displaystyle \sum_{i = 0}^{\infty} a^i = \dfrac{1}{1-a} $
Differentiating the above series term by term gives
$\displaystyle f'(a) = \sum_{i = 0}^{\infty} ia^{i-1} = \dfrac{1}{(1-a)^2} $
After multiplying by $a$ on each side we get
$af'(a) = \displaystyle \sum_{i = 0}^{\infty} ia^i = \dfrac{a}{(1-a)^2}$
We can do the same with
$bf'(b) = \displaystyle \sum_{j = 0}^{\infty} jb^j = \dfrac{b}{(1-b)^2}$
Thus
$\displaystyle \sum_{i = 0}^{\infty} \sum_{j = 0}^{\infty} (ij)a^ib^j = \dfrac{ab}{(1-a)^2(1-b)^2}$
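A quick numerical check of the final formula, with sample fractions $a=1/2$ and $b=1/4$ chosen arbitrarily by me:

```python
# Truncated double sum vs. the closed form ab / ((1-a)^2 (1-b)^2).
a, b = 0.5, 0.25

partial = sum(i * a**i for i in range(200)) * sum(j * b**j for j in range(200))
closed = a * b / ((1 - a)**2 * (1 - b)**2)

print(partial, closed)  # both ~ 0.888888... = 8/9
```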
LHCb is one of the four big experiments operating at the LHC and is mainly dedicated
to measurements of $C\!P$ violation and to the search for new physics in the decays of
rare hadrons containing heavy quarks. The LHCb collaboration has recently published a result which shows for the first time a compelling 5.3 $\sigma$ evidence of $C\!P$ violation in the two-body meson decays $D^{0}\rightarrow K^{+}K^{-}$ and $D^{0}\rightarrow \pi^{+}\pi^{-}$.
$C\!P$ violation in the Cabibbo-suppressed decays $D^{+}_{s} \rightarrow K^{0}_{S}\pi^{+}$, $D^{+} \rightarrow K^{0}_{S}K^{+}$, $D^{+} \rightarrow \phi \pi^{+}$ is expected to be small ($\sim10^{-3}$) due to interference between tree and penguin diagrams, and is thus sensitive to contributions from Beyond the Standard Model (BSM) physics. The latest measurement by LHCb of direct $C\!P$ asymmetries in the
aforementioned decay channels is consistent with the hypothesis of $C\!P$ conservation.
In the $B_{s}$ sector the $C\!P$ violating phase $\phi_{s}$ receives contributions from direct decay and mixing and may also be sensitive to BSM contributions. The last two published
measurements of $\phi_{s}$ using the $B_{s}\rightarrow J/\psi K^{+}K^{-}$ and $B_{s}\rightarrow J/\psi \pi^{+} \pi^{-}$ decay channels show a combined 2 $\sigma$ hint of $C\!P$ violation.
How come it's possible to get Metropolis-Hastings acceptance rates close to 1 (for example, when exploring a unimodal distribution with a normal proposal distribution with a too-small SD), even after burn-in is over? I see it in my own MCMC chains but I don't understand how it makes sense. It seems to me that after reaching the summit, the acceptance rate should stabilize around values that are smaller than 0.5.
The acceptance rate depends largely on the proposal distribution. If it has small variance, the ratio of the probabilities between the current point and the proposal will necessarily always be close to 1, giving a high acceptance chance. This is just because the target probability densities we typically work with are locally Lipschitz (a type of smoothness) at small scales, so the probability of two nearby points is similar (informally).
If your current sample is close to the MAP value, the proposals will have acceptance probability less than one, but it can still be very close to 1.
As a side note, standard practice is to tune the proposal distribution to get around a 0.2-0.25 acceptance rate. See here for a discussion of this.
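A minimal sketch of the effect described above (standard normal target, symmetric random-walk proposal with a deliberately tiny SD; this is my own toy code, not from any library):

```python
import math, random

# Metropolis-Hastings with a tiny proposal SD: nearby points have nearly
# equal density, so almost every proposal is accepted, long after burn-in.
random.seed(1)

def log_target(x):
    # log density of N(0, 1), up to an additive constant
    return -0.5 * x * x

x, accepted, iters = 0.0, 0, 20000
for _ in range(iters):
    prop = x + random.gauss(0.0, 0.01)      # symmetric proposal, SD = 0.01
    if math.log(random.random()) < log_target(prop) - log_target(x):
        x = prop
        accepted += 1

print(accepted / iters)  # very close to 1
```

Re-running with a larger proposal SD (say 2.0) drops the acceptance rate sharply, which is the tuning trade-off mentioned above.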
An easy example of acceptance probability equal to one is when simulating from the exact target: in that case $$\dfrac{\pi(x')q(x',x)}{\pi(x)q(x,x')}=1\qquad\forall x,x'$$ While this sounds like an unrealistic example, a genuine illustration is the Gibbs sampler, which can be interpreted as a sequence of Metropolis-Hastings steps, all with probability one.
A possible reason for your confusion is the potential perception of the Metropolis-Hastings algorithm as an optimisation algorithm. The algorithm spends more iterations on higher target regions but does not aim at the maximum. And while $\pi(x^\text{MAP})\ge\pi(x)$ for all $x$'s, this does not mean values with lower target values are necessarily rejected, since the proposal values $q(x^\text{MAP},x)$ and $q(x,x^\text{MAP})$ also matter.
Note by Azimuddin Sheikh 7 months, 1 week ago
$\vec v_b=3 \vec{i}$, $\vec{a}= \dfrac{3(-3 \vec i +4 \vec j)}{5}$, $|\vec a|=3$, $|\vec v_{b/p}|=5 \implies t_0=5/3\ \text{s}$ (using formula $0-5=-3t_0$). For the block, $\vec{s}_b=3t_0 \vec i + (1/2) \vec{a} {t_0}^2$. Use this to get $\vec{s}=5 \vec i + \dfrac{10}{3} \vec j$. Note: $v_b$ is the velocity of the block in the ground frame, $v_{b/p}$ is the velocity of the block in the platform frame.
Here is my solution
Initial assumptions:
1) The initial speed of the block is $(3,0)$ with respect to the ground
2) The initial position of the block is $(0,0)$ with respect to the ground
3) The speed of the platform is $(0,4)$ with respect to the ground
4) The friction force has a magnitude equal to $\mu \, m \, g$, and a direction opposite to the relative velocity of the block with respect to the platform. Both of these velocities are given with reference to the ground. In other words:
$$\vec{F}_{\text{friction}} \propto \, -(\vec{v}_{\text{block}} - \vec{v}_{\text{platform}})$$
Running the simulation over time results in the block coming to rest relative to the platform after 1.666 seconds, at which time the $x$ coordinate of the block is 2.5 with respect to ground, and the $y$ coordinate is 3.333 with respect to ground. The block is a distance 4.1666 from the origin, and it has moved a total arc length of 4.753 with respect to ground.
The $(x,y)$ trajectory of the block relative to ground is shown below:
@Steven Chase sir please help with this problem discussion on a David Morin Question.
Sir, the initial speed of the block should be wrt the platform. As such, in the ground frame its net initial speed is 5.
If that is the case, my last paragraph changes as follows:
Running the simulation over time results in the block coming to rest relative to the platform after 1 second, at which time the $x$ coordinate of the block is 1.5 with respect to ground, and the $y$ coordinate is 4 with respect to ground. The block is a distance 4.272 from the origin, and it has moved a total arc length of 4.348 with respect to ground.
@Steven Chase – Sir but my energy method I am getting 7/6m which is actually the answer given sir.
@Azimuddin Sheikh – Perhaps you can post your solution
@Steven Chase – This is by energy method sir . Sir I have posted above with the question.
@Azimuddin Sheikh – @Steven Chase Sir I have posted my energy method above with the question.
@Azimuddin Sheikh – @Steven Chase Sir can u pls tell.is there anything wrong I did ?
@Azimuddin Sheikh – I wanted to check my original solution, since it was a bit involved. So then I remembered the first postulate of special relativity: "The laws of physics are the same in all inertial frames of reference". Since the platform is not accelerating, an observer sitting on the platform and watching should observe "ordinary" physics. And we can apply simple kinematics to that case. Let $v_0$ be the initial velocity of the block relative to the platform. There is a constant acceleration relative to the platform equal to $\mu \, g$.
$$\frac{v_0}{\mu \, g} = t_f, \qquad t_f = 1$$
Then the distance the block slides relative to the platform is:
$$D = \frac{1}{2} \, \mu \, g \, t_f^2 = \frac{3}{2}$$
And over the one second, the platform has moved 4 meters. So the final position of the block with respect to the ground is $(\frac{3}{2}, 4)$. This matches the more complicated approach I took earlier.
Then, to find the total arc length traveled with respect to ground, begin with the expressions for the velocity relative to ground.
$$(v_x, v_y) = (3 - 3t,\ 4), \qquad v = \sqrt{25 - 18t + 9t^2}$$
The total arc length is:
$$L = \int_0^1 \sqrt{25 - 18t + 9t^2} \, dt = 4.3484$$
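The arc-length integral above can be checked numerically (my own sketch using scipy, not from the original thread), using the ground-frame speed $v(t)=\sqrt{v_x^2+v_y^2}$ with $v_x = 3-3t$ and $v_y = 4$:

```python
import numpy as np
from scipy.integrate import quad

def speed(t):
    # sqrt((3 - 3t)^2 + 4^2), which expands to sqrt(25 - 18t + 9t^2)
    return np.sqrt((3 - 3*t)**2 + 4**2)

L, err = quad(speed, 0, 1)   # L ≈ 4.3484, matching the value quoted above
```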
This also matches the result of my previous solution
Can it be solved without energy method ? @Steven Chase sir @Aaghaz Mahajan bro ??
Which dimension is gravity in (i, j, or k)?
Sir, it is written that x-y plane is a horizontal plane, so I think g should be in k.....
@Aaghaz Mahajan – Ok, thanks
@Steven Chase – @Steven Chase sir u got the answer? By other method?
@Azimuddin Sheikh – I have posted my solution
Yes........I mean, I solved it using relative velocity.......View the motion of the block with respect to the horizontal platform........The time after which it stops is 1.66666 seconds.....I think you can proceed from here onwards....:)
@Aaghaz Mahajan bro but then from that frame the answer I am getting is wrong, can u tell what r u getting?? It's difficult for me to get the answer using the relative frame; I used that..
@Azimuddin Sheikh – Well, what did you try??? What is your answer??
@Aaghaz Mahajan – I am getting answer as 7/6 m from energy method , but not from this . Can u show by relative frame @Aaghaz Mahajan bro
@Azimuddin Sheikh – Well, what is the correct answer??? I am getting the answer to be 5 m.........Here is my method:- Taking the velocity of the block wrt the platform, we get it to be <3,-4> i.e. v = 5 m/s. Also, the acceleration is -3 m/s^2. So, the time taken is 1.66666666 seconds. Now, the displacement of the block in that time is <5,-20/3> and the displacement of the platform is <0,4>. Using relative velocity again, we get the displacement of the block wrt ground to be <5,0>......
@Aaghaz Mahajan – Answer is 7/6 m only @Aaghaz Mahajan bro. We need total distance not displacement
@Azimuddin Sheikh – @Steven Chase Sir u r method is correct , but what's incorrect in my method (energy)??
@Azimuddin Sheikh – I have several comments:
1) I interpret the "4" in your energy expression as being the final velocity with respect to ground, which is the same velocity as the platform. It's important to note that it is relative to ground.
2) The "3" in your energy expression is relative to the platform (per your previous clarification). So you have two different references in your energy equation, which seems problematic.
3) The other concern is that the work calculation has the following general form:
$$W = \int \vec{F} \cdot d\vec{\ell}$$
If the force magnitude is constant AND the force is everywhere parallel to the motion, this reduces to:
$$W = |\vec{F}| \, \Delta \ell$$
How do you know that these simplifications should occur?
@Steven Chase – Seems like 1 is causing trouble to my energy method , will try again
Current Electricity
Ohm's law and its limitations; resistivity and its temperature dependence
Resistance relates the potential difference across a conductor to the current through it. The reciprocal of resistance is conductance, $G = \frac{1}{R}$; the unit of conductance is $(\text{ohm})^{-1}$. Specific resistance (resistivity) is $\rho = \frac{RA}{l}$; its unit is ohm·meter. Specific resistance is independent of the conductor's dimensions: resistivity is a specific property of the material, whereas resistance is a bulk property of the conductor. Silver, copper and aluminium have very low resistivity. Fuse wire is made of a tin-lead alloy. The elements of heating devices are made of nichrome, which has high resistivity and a high melting point. The filament of an electric bulb is made of tungsten, which has high resistivity and a high melting point. Conductivity is the measure of the ability of a material to conduct electric current: $\sigma = \frac{1}{\rho}$. Ohm's law: the current $i$ through a conductor is proportional to the potential difference $V$ applied, $V = iR$. Ohm's law is an empirical relation, not a universal law. Substances that obey Ohm's law are called ohmic devices; all metals are ohmic devices. Substances that do not obey Ohm's law are called non-ohmic; vacuum tubes, diodes and thermistors are non-ohmic.
I-V graph for ohmic devices
I-V graph for non-ohmic devices
The slope of the V-I graph gives resistance, $R = \frac{V}{I}$. The variation of resistance with temperature is measured by the temperature coefficient of resistance, $\alpha = \frac{R_{t} - R_{0}}{R_{0} t}$; for resistivity, $\alpha = \frac{\rho_{t} - \rho_{0}}{\rho_{0} t}$. If $\alpha$ is positive, the resistance of the material increases with temperature; if $\alpha$ is negative, it decreases. When two wires connected in series give a combined resistance that does not change with temperature, $R_{1}\alpha_{1} = R_{2}\alpha_{2}$ (the coefficients being of opposite sign). A thermistor is a heat-sensitive non-ohmic device. Some metals have zero resistance below a certain temperature, called the critical temperature; materials in this state are superconductors. Joule heating: work done $W = i^{2}Rt$; Joule heat $H = i^{2}Rt = Vit = \frac{V^{2}}{R} t$. Power consumed by a resistor: $P = Vi = i^{2}R = \frac{V^{2}}{R}$. Power consumed $P_{c} = \left[\frac{V_{\text{applied}}}{V_{\text{rated}}}\right]^{2} P_{R}$ ($P_{R}$ = rated power). When $V_{A} > V_{R}$ the bulb gets damaged; when $V_{A} < V_{R}$ the power consumption is less than the rated power. An electric bulb of low wattage glows brighter in series because its resistance is greater than that of a high-wattage bulb. When resistances are connected in parallel, $P_{1}R_{1} = P_{2}R_{2}$: more power is consumed in the smaller resistance in a parallel combination. For $n$ equal resistors, $P_{\text{parallel}} = nP$ and $P_{\text{series}} = \frac{P}{n}$, so $\frac{P_{P}}{P_{S}} = n^{2}$. 1 B.T.U. (Board of Trade Unit) = 1 kWh = $3.6 \times 10^{6}$ J. Units (kWh) = (number of watts × number of hours) / 1000.
1. Ohm's Law: If the physical conditions of a conductor (such as temperature) remain unchanged, then the electric current $I$ flowing through the conductor is directly proportional to the potential difference $V$ applied across its ends: $I \propto V$, or $V = IR$.
2. Electrical Resistance: The obstruction offered by a conductor to the flow of current is called its electrical resistance, $R = \frac{V}{I}$.
3. Resistivity: The resistivity of the material of a conductor is given by $\rho = \frac{m}{ne^{2}\tau}$.
4. Resistivity of metals increases with temperature: $\rho_{t} = \rho_{0} (1 + \alpha t)$.
5. Electrical Conductivity: The reciprocal of resistivity, $\sigma = \frac{1}{\rho} = \frac{l}{RA} = \frac{ne^{2}\tau}{m}$.
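As a numeric illustration of the power-consumption and temperature-variation formulas in these notes, here is a small sketch (the voltage, wattage and resistivity values are made-up examples, not from the notes):

```python
# Bulb rated 100 W at 220 V, run at 110 V:
# P_c = (V_applied / V_rated)^2 * P_rated
V_rated, P_rated = 220.0, 100.0
V_applied = 110.0

P_consumed = (V_applied / V_rated)**2 * P_rated
# at half the rated voltage, the bulb consumes a quarter of the rated power
assert P_consumed == 25.0

# Resistivity variation with temperature: rho_t = rho_0 * (1 + alpha * t)
rho_0, alpha, t = 1.68e-8, 4.0e-3, 50.0   # copper-like illustrative values
rho_t = rho_0 * (1 + alpha * t)
assert rho_t > rho_0    # positive alpha: resistivity rises with temperature
```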
We define a $2\times 2$ Givens rotation matrix as:
$${\bf G}(\theta) = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) &\cos(\theta) \end{bmatrix}.$$
On the other hand, we define a $2\times 2$ hyperbolic rotation matrix as:
$${\bf H}(y)=\begin{bmatrix} \cosh( y) & \sinh( y) \\ \sinh( y) &\cosh( y) \end{bmatrix}.$$
I don't see why we qualify matrix ${\bf H}$ as a rotation!
Suppose we take a 2-D vector $x=[-3, 1]^T$ and we transform it using ${\bf G}(\theta), \theta = 0,\dots, \pi/2$, and ${\bf H}, y = -2,\dots, 2.5$. See below for the result.
For me the Givens rotation clearly rotates the initial point around the point $[0,0]^T$, but for the hyperbolic rotation we see a bending, not a rotation, at least not around a fixed point (I checked other points and it's the same behavior with different bending angles). Am I missing something?
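One way to see the sense in which ${\bf H}$ "rotates": ${\bf G}$ preserves the Euclidean form $x_1^2 + x_2^2$ (points move along a circle), while ${\bf H}$ preserves the Minkowski form $x_1^2 - x_2^2$ (points move along a hyperbola, hence the bending you observed). A quick numpy check of both invariants (my own sketch, with arbitrary angles):

```python
import numpy as np

def givens(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def hyperbolic(y):
    return np.array([[np.cosh(y), np.sinh(y)],
                     [np.sinh(y), np.cosh(y)]])

x = np.array([-3.0, 1.0])

# G preserves the Euclidean quadratic form x1^2 + x2^2
g = givens(0.7) @ x
assert np.isclose(g @ g, x @ x)

# H preserves the Minkowski form x1^2 - x2^2, encoded by Q = diag(1, -1):
# H^T Q H = Q since cosh^2 - sinh^2 = 1
Q = np.diag([1.0, -1.0])
h = hyperbolic(1.3) @ x
assert np.isclose(h @ Q @ h, x @ Q @ x)
```

So ${\bf H}$ is a "rotation" with respect to the hyperbolic (Minkowski) geometry rather than the Euclidean one.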
Modular Arithmetic (MA) has the same axioms as first order Peano Arithmetic (PA) except $\forall x (Sx \ne 0)$ is replaced with $\exists x(Sx = 0)$. (http://en.wikipedia.org/wiki/Peano_axioms#First-order_theory_of_arithmetic).
MA has arbitrarily large finite models based on modular arithmetic. All finite models of MA have either an even or odd number of elements. I call a model of MA "even" if it satisfies both of these two sentences:
E1) $\exists x(x \ne 0 \land x+x = 0)$
E2) $\forall x(x+x \ne S0)$
A model of MA is odd if it satisfies both of:
O1) $\forall x(x = 0 \lor x+x \ne 0)$
O2) $\exists x(x+x = S0)$
We can use compactness to prove MA has infinite "even" size models by adding the even definitions above as axioms. We can similarly prove there are infinite "odd" size models of MA. Some infinite sets, like the integers, are neither even nor odd. The integers are not the basis for a model of MA. For example, the four square theorem (every number is the sum of four squares) is a theorem of both MA and PA. The four square theorem is false in the integers (for example, $-1$ is not a sum of four squares). It has been conjectured the complex numbers are a basis for a model of MA. If so, the complex numbers would be an "odd" model of MA.
My question is whether every model of MA must be exclusively even or exclusively odd? Is this statement a theorem of MA?
$$\exists x(x \ne 0 \land x+x = 0) \ \overline{\vee}\ \exists x(x+x = S0)$$
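The conjectured exclusive disjunction can at least be machine-checked on the finite models $\mathbb{Z}/n\mathbb{Z}$ (a sanity check on finite models only, not a proof for arbitrary models; here $S0 = 1 \bmod n$):

```python
# Even sentences E1, E2 and odd sentences O1, O2 evaluated in Z/nZ.
def E1(n): return any(x != 0 and (x + x) % n == 0 for x in range(n))
def E2(n): return all((x + x) % n != 1 for x in range(n))
def O1(n): return all(x == 0 or (x + x) % n != 0 for x in range(n))
def O2(n): return any((x + x) % n == 1 for x in range(n))

for n in range(2, 50):
    even = E1(n) and E2(n)
    odd = O1(n) and O2(n)
    assert even != odd              # exactly one disjunct holds (exclusive or)
    assert even == (n % 2 == 0)     # and it matches the parity of n
```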
I asked this question on stack exchange and got no answer.
[
The following was merged from an answer - ed.]
Ashutosh's proof can be written as:
$\exists x\exists y( (x+x=0 \land y+y=1) \implies (x=0) )$
This answers my question when $\exists x(x+x=1)$ is true but it says nothing about when $\forall x(x+x \ne 1)$ is true. Emil and others have stated any algebraically closed field is a model of MA. Ashutosh's proof shows any algebraically closed field is odd because $\exists x(x+x=1)$ is true.
I want to accept Ben Crowell's answer, but I have some reservations. The proof starts by showing how any model of MA can be expanded into a model of PA. I have made similar arguments and always assumed it would be easy to prove. My conjecture is true of all finite models of MA so we only need consider infinite models. MA is omega inconsistent and any infinite model must have non-standard elements. Tennenbaum's theorem says addition is not recursive in non-standard models of PA. Can addition actually be recursive in $A$, the model of PA he constructs? It looks like he is assuming we can add non-standard numbers from the model of MA. I also wonder if he is assuming $I$ is a standard model of PA. I don't think it makes any difference, but it might.
Obo's proof is much simpler and similar to a proof I came up with. My proof had the same error as his. I think it is fixable. In the case where we have $S(y+y)=p$ we need to also prove $y \ne p$. $y \ne p$ can be true only in models with three or more elements.
This isn't a discussion group so I won't go into detail why I don't think the complex numbers are a model of MA. I don't think MA has any infinite models. I will point out MA proves a lot of interesting things about odd models. In an odd model the sum of all elements is 0. This can't be stated in first order. I think if we have a successor function defined on the complex numbers we can use it to order the reals. Just ignore numbers with a non-zero imaginary component.
I want to retract my statement that the Lagrange's four square theorem is a theorem of MA. I based my claim on Andrew Boucher's paper on General Arithmetic (GA). Boucher shows GA proves the four square theorem. I thought GA was a weak sub-theory of MA because GA has much weaker successor axioms. Rereading the paper I see Boucher says he is using 2nd order induction. He also says successor is second order. |
Note: This question was asked in stats.stackexchange.com and math.stackexchange.com, with expired bounties on both sites.
Given a sequence of iid random variables $X_i$ (without loss of generality from $U(0,1)$), an integer $k \ge 1$ and some $p \in (0,1)$, construct the sequence of random vectors $Z^{(j)}$, $j=0,1,...$ in the following way. Let
$$Z^{(0)}=(X_{(1)},...,X_{(k)}),$$
where $X_{(l)}$ is the $l$-order statistic of sample $\{X_1,...,X_k\}$. Introduce notations
\begin{align} Z^{(j)}&=(Z_{j,1},...,Z_{j,k}),\\\\ m_j&=\min(Z_{j-1,1},...,Z_{j-1,k},X_{k+j}),\\\\ M_j&=\max(Z_{j-1,1},...,Z_{j-1,k},X_{k+j}) \end{align}
Then
$$Z^{(j)}=(Y_{(1)},...,Y_{(k)})$$
where $Y_{(l)}$ is the $l$-order statistic of the following set which is
1. the set $\{Z_{j-1,1},...,Z_{j-1,k},X_{k+j}\}\backslash m_j$ with probability $p$;
2. the set $\{Z_{j-1,1},...,Z_{j-1,k},X_{k+j}\}\backslash M_j$ with probability $1-p$.
The decision between cases 1. and 2. is made independently from the $X_i$ (and hence from the $Z^{(i)}$).
The $Z^{(j)}$ are supported on the $k$-dimensional simplex $S_k = \{(x_1, \dots, x_k) \in \mathbb{R}^k \, | \, 0 \le x_1 \le x_2 \le \dots \le x_k \le 1 \}$.
It appears that the $Z^{(j)}$ converge in distribution. Is this known? Is anything known about the limiting distribution?
For the case $k=1$, the answer is the following. Denote the cdf of $Z^{(j)}$ by $F_j$.
The cdf of $\min(X_{n+1},Z^{(n)})$ (for $U(0,1)$ case) is
$$x+F_n(x)−xF_n(x)$$ and the cdf of $\max(X_{n+1},Z^{(n)})$ is
$$xF_n(x).$$
Hence
\begin{align} F_{n+1}(x)&=p(x+F_n(x)−xF_n(x))+(1−p)xF_n(x)\\\\ &=px+(p(1-x)+(1-p)x)F_n(x) \end{align}
Since $p(1-x)+(1-p)x\in(0,1)$ we have that
$$\lim F_{n}(x)=\frac{px}{1-p(1-x)-(1-p)x}$$
I am looking for general results (case $k>1$) either for the limiting distribution of the whole vector $Z^{(j)}$ or of some of its components (marginal distributions). |
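For the $k=1$ case, the recursion can be iterated numerically to confirm convergence to the claimed limit, since the contraction factor $p(1-x)+(1-p)x$ lies strictly in $(0,1)$ on the open interval (a deterministic sketch; the grid of $x$ values and the choice of $p$ are arbitrary):

```python
# Iterate F_{n+1}(x) = p*x + (p(1-x) + (1-p)x) * F_n(x) starting from the
# U(0,1) cdf, and compare with the fixed point px / (1 - p(1-x) - (1-p)x).
p = 0.3
xs = [0.1 * i for i in range(1, 10)]

F = {x: x for x in xs}          # F_0: cdf of U(0,1)
for _ in range(500):
    F = {x: p * x + (p * (1 - x) + (1 - p) * x) * F[x] for x in xs}

for x in xs:
    limit = p * x / (1 - p * (1 - x) - (1 - p) * x)
    assert abs(F[x] - limit) < 1e-12
```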
We try to analyze the average correlation of a portfolio as it can be found here in section 2 b), the same formula which is also used by the CBOE to calculate implied correlations:
$$ \rho_{av(2)} = \frac{\sigma^2 - \sum_{i=1}^N w_i^2\sigma_i^2}{2 \sum_{i=1}^N \sum_{j>i}^N w_i w_j \sigma_i \sigma_j} $$
EDIT: Assuming that $\sigma^2 = \sum_{i=1}^N \sum_{j=1}^N w_i w_j \sigma_i \sigma_j \rho_{i,j}$, where $\rho_{i,i}=1$, for $i=1,\ldots,N$, the above expression can be written as $$ \rho_{av(2)} = \frac{\sum_{i=1}^N \sum_{j>i}^N w_i w_j \sigma_i \sigma_j \rho_{i,j}}{\sum_{i=1}^N \sum_{j>i}^N w_i w_j \sigma_i \sigma_j}. $$
The following questions arise.
1. Assuming that $w_i \in \mathbb{R}$, i.e. long/short leverage is allowed, is it possible that $|\rho_{av(2)}|>1$? Note that we don't assume $\sum w_i=1$.
2. Does there already exist a notion of contribution to average correlation? Meaning that, e.g., in a long/short portfolio where average correlation should be close to zero, I can identify positions that drive the average correlation up (in absolute value).
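A quick numerical probe of question 1 (an exploration, not a full answer; all weights, volatilities and correlations below are made-up illustrative values): with long/short weights the pairwise denominator can be driven close to zero while the numerator stays bounded away from it, so $|\rho_{av(2)}|$ can greatly exceed 1.

```python
import numpy as np

w = np.array([1.0, 1.0, -0.51])          # long/short weights
sigma = np.array([1.0, 1.0, 1.0])
rho = np.array([[1.0, 0.9, 0.0],
                [0.9, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
assert np.all(np.linalg.eigvalsh(rho) > 0)   # valid correlation matrix

cov = rho * np.outer(sigma, sigma)
port_var = w @ cov @ w                       # portfolio sigma^2

num = port_var - np.sum(w**2 * sigma**2)
den = 2 * sum(w[i] * w[j] * sigma[i] * sigma[j]
              for i in range(3) for j in range(i + 1, 3))

rho_av = num / den      # here num = 1.8, den = -0.04, so rho_av = -45
assert abs(rho_av) > 1
```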
Differential and Integral Equations, Volume 20, Number 8 (2007), 927-938. Nonexistence results for classes of elliptic systems. Abstract
We consider the system \[ -\Delta u = \lambda f(u,v); \, x \in \Omega \] \[ -\Delta v = \lambda g(u,v); \, x \in \Omega \] \[ u = 0 = v; \, x \in \partial\Omega, \] where $\Omega$ is a ball in $ R^{N}, N \geq 1$ and $\partial\Omega$ is its boundary, $\lambda $ is a positive parameter, and $f$ and $g$ are smooth functions that are negative at the origin (semipositone system) and satisfy certain linear growth conditions at infinity. We establish nonexistence of positive solutions when $\lambda$ is large. Our proofs depend on energy analysis and comparison methods.
Article information
Source: Differential Integral Equations, Volume 20, Number 8 (2007), 927-938.
Dates: First available in Project Euclid: 20 December 2012
Permanent link to this document: https://projecteuclid.org/euclid.die/1356039364
Mathematical Reviews number (MathSciNet): MR2339844
Zentralblatt MATH identifier: 1210.35125
Subjects: Primary: 34B15 (nonlinear boundary value problems); Secondary: 34B18 (positive solutions of nonlinear boundary value problems), 35J65 (nonlinear boundary value problems for linear elliptic equations)
Citation
Shivaji, R.; Ye, Jinglong. Nonexistence results for classes of elliptic systems. Differential Integral Equations 20 (2007), no. 8, 927--938. https://projecteuclid.org/euclid.die/1356039364 |
In a typical supervised learning problem we have a fixed volume of training data with which to fit a model. Imagine, instead, we have a constant stream of data. Now, we could dip into this stream and take a sample from which to build a static model. But let's also imagine that, from time to time, there is a change in the observations arriving. Ideally we want a model that automatically adapts to changes in the data stream and updates its own parameters. This is sometimes called "online learning."
We've looked at a Bayesian approach to this before in Adaptively Modelling Bread Prices in London Throughout the Napoleonic Wars. Bayesian methods come with significant computational overhead because one must fit and continuously update a joint probability distribution over all variables. Today, let's look at a more lightweight solution called stochastic gradient descent.

The data
We imagine a continuous stream of data. For today's artificial example, we have 300 sequential observations in $(x,y)$ pairs. The first 100 observations shows a certain linear relationship between $x$ & $y$. Then something happens and the next 100 observations are different: $x$ and $y$ are now related in a different way. Finally, there is another change resulting in 100 more observations that are different again:
```python
# prepare python
%pylab inline
import numpy as np
from sklearn.datasets import make_regression
from mpl_toolkits.mplot3d import Axes3D

# Create three populations of 100 observations each
pop1_X, pop1_Y = make_regression(n_samples=100, noise=20, n_informative=1,
                                 n_features=1, random_state=1, bias=0)
pop2_X, pop2_Y = make_regression(n_samples=100, noise=20, n_informative=1,
                                 n_features=1, random_state=1, bias=100)
pop3_X, pop3_Y = make_regression(n_samples=100, noise=20, n_informative=1,
                                 n_features=1, random_state=1, bias=-100)

# Stack them together
pop_X = np.concatenate((pop1_X, pop2_X, pop3_X))
pop_Y = np.concatenate((pop1_Y, 2 * pop2_Y, -2 * pop3_Y))

# Add intercept column to X
pop_X = append(np.vstack(np.ones(len(pop_X))), pop_X, 1)

# Convert Y's into proper column vectors
pop_Y = np.vstack(pop_Y)

### plot
mycmap = cm.brg

fig = plt.figure(figsize=(6, 6), dpi=1600)
plt.subplots_adjust(hspace=.5)

gridsize = (1, 1)
ax0 = plt.subplot2grid(gridsize, (0, 0))
sc = ax0.scatter(pop_X[:, 1], pop_Y, s=100, alpha=.4,
                 c=range(len(pop_X)), cmap=mycmap)
plt.colorbar(sc, ax=ax0)
```
The blue, red and green observations each show a different linear relationship between $x$ and $y$: we might say they are drawn from different populations. We want our model to automatically detect and adapt to each population as it appears in our data stream. That is, we want it to fit a line through the blue observations. Then, when the red observations start arriving, we want our line to adjust to pass through them. Finally, the line should adapt to the green observations when they hit the data stream.
Before we can tackle that, let's do a quick refresher on gradient descent:
Gradient descent
Gradient descent is a method of searching for model parameters which result in the best fitting model. There are many, many search algorithms, but this one is quick and efficient for many sorts of supervised learning models.
Let's take the simplest of simple regression models for predicting $y$: $\hat{y} = \alpha + \beta x$. Our model has two parameters - $\alpha$, the intercept, and $\beta$, the coefficient for $x$ - and we want to find the values which produce the best fitting regression line.
How do we know whether one model fits better than another? We calculate the error or cost of the predictions. For regression models, the most common cost function is mean squared error:

Cost = $J(\alpha , \beta) = \frac{1}{2n} \sum{(\hat{y} - y)^2} = \frac{1}{2n} \sum{(\alpha + \beta x - y)^2}$
The cost function quantifies the error for any given $\alpha$ and $\beta$. Since it is squared, it will take the form of an inverted parabola. Let's visualise what it looks like for the first 100 observations in our dataset:
```python
## Specify prediction function
def fx(theta, X):
    return np.dot(X, theta)

## Specify cost function
def fcost(theta, X, y):
    return (1. / (2 * len(X))) * sum((np.vstack(fx(theta, X)) - y)**2)

## Specify a range of possible alpha and beta values
a_possible = asarray(range(-40, 60, 1))
b_possible = asarray(range(0, 140, 1))

# create meshgrid
X, Y = np.meshgrid(a_possible, b_possible)

# calculate cost of each combination of a, b
z = dstack([X.flatten(), Y.flatten()])[0]
j = np.array([(fcost(z[i], pop_X[0:100], pop_Y[0:100])) for i in range(len(z))])
J = j.reshape(shape(X))

## plot
fig = plt.figure(figsize=(12, 6))
plt.subplots_adjust(hspace=.5)

ax = fig.add_subplot(121, projection='3d')
ax.plot_surface(X, Y, J, cmap=cm.coolwarm)
ax.view_init(azim=295, elev=30)
ax.set_xlabel('$\\alpha$')
ax.set_ylabel('$\\beta$')

ax = fig.add_subplot(122)
ax.contourf(X, Y, J, cmap=cm.coolwarm)
ax.set_xlabel('$\\alpha$')
ax.set_ylabel('$\\beta$')
```
At the bottom (or minimum) of the parabola is the lowest possible cost. We want to find the $\alpha$ and $\beta$ which correspond to that minimum.
The idea of gradient descent is to take small steps down the surface of the parabola until we reach the bottom. The gradient of gradient descent is the gradient of the cost function, which we find by taking the partial derivative of the cost function with respect to the model coefficients. The steps are:

1. For the current values of $\alpha$ and $\beta$, calculate the cost and the gradient of the cost function at that point.
2. Update the values of $\alpha$ and $\beta$ by taking a small step towards the minimum. This means subtracting some fraction of the gradient from them. The formulae are:

$\alpha = \alpha - \gamma \frac{\partial J(\alpha, \beta)}{\partial \alpha} = \alpha - \gamma \frac{1}{n} \sum{(\alpha + \beta x - y)}$

$\beta = \beta - \gamma \frac{\partial J(\alpha, \beta)}{\partial \beta} = \beta - \gamma \frac{1}{n} \sum{x(\alpha + \beta x - y)}$
Steps 1 & 2 are repeated until one is satisfied that the model is not getting any better (cost isn't decreasing much with each step).
The learning rate parameter $\gamma$ controls the size of the steps. This is a parameter which we must choose. There is some art in choosing a good learning rate, for if it is too large or too small then it can prevent the algorithm from finding the minimum.
Let's briefly see it in action - we'll just do five steps. Then we can move on to talk about how we can use this technique for online learning.
```python
## parameters
n_learning_rate = 0.25

## Specify prediction function
def fx(theta, X):
    return np.dot(X, theta)

## Specify cost function
def fcost(theta, X, y):
    return (1. / (2 * len(X))) * sum((np.vstack(fx(theta, X)) - y)**2)

## Specify function to calculate gradient at a given theta
def gradient(theta, X, y):
    grad_theta = (1. / len(X)) * sum(np.multiply(np.vstack(fx(theta, X)) - y, X), axis=0)
    return grad_theta

## Specify starting values for alpha and beta
theta = [0, 0]

## Run 5 iterations of batch gradient descent, recording model parameters and costs
arraytheta = np.array(theta)
arraycost = np.array(fcost(theta, pop_X[0:100], pop_Y[0:100]))

for i in range(5):
    theta = theta - n_learning_rate * gradient(theta, pop_X[0:100], pop_Y[0:100])
    arraytheta = np.vstack([arraytheta, theta])

### Plot
fig = plt.figure(figsize=(14, 6))
plt.subplots_adjust(hspace=.5)

gridsize = (1, 2)
ax0 = plt.subplot2grid(gridsize, (0, 0))
ax0.scatter(pop_X[0:100, 1], pop_Y[0:100], s=100, alpha=.2, c='red')

xrangex = np.linspace(-3, 3, 50)
for i in range(0, len(arraytheta)):
    yrangey = arraytheta[i, 0] + arraytheta[i, 1] * xrangex
    ax0.plot(xrangex, yrangey, c='blue')
    ax0.text(3.5, arraytheta[i, 0] + arraytheta[i, 1] * 3, "Step " + str(i),
             color='blue', ha="center", va="center", bbox=dict(ec='1', fc='1'))

# plot movement through costed parameter space
ax1 = plt.subplot2grid(gridsize, (0, 1))
ax1.contourf(X, Y, J, cmap=cm.coolwarm)
ax1.set_xlabel('$\\alpha$')
ax1.set_ylabel('$\\beta$')
ax1.set_title('Trajectory of gradient descent through cost function')
for i in arraytheta:
    plt.scatter(i[0], i[1], marker='x', color='black', s=25)
plt.plot(arraytheta[:, 0], arraytheta[:, 1], '-', c='black', alpha=0.5)
```
Left chart: see how the regression line is gradually finding its way to fitting the shape of the data? Right chart: the gradient descent algorithm moves toward lower cost values for $\alpha$ and $\beta$.
Stochastic gradient descent for adaptive modelling
The gradient descent we've just done is called batch gradient descent. That's because at each step we calculate the cost across the entire training dataset. This isn't going to work for online learning, since we don't have a finite training dataset - we have a continuous stream of observations.
The solution is to run an update step with every passing observation. This is called stochastic gradient descent. It looks like this:
```python
## parameters
n_learning_rate = 0.1  # learning rate parameter

## Specify prediction function
def fx(theta, X):
    return np.dot(X, theta)

## Specify cost function
def fcost(theta, X, y):
    return (1. / (2 * len(X))) * sum((fx(theta, X) - y)**2)

## Gradient at a given theta - returns a vector of length len(theta)
def gradient(theta, X, y):
    grad_theta = (1. / len(X)) * np.multiply((fx(theta, X)) - y, X)
    return grad_theta

### Do stochastic gradient descent
# starting values for alpha & beta
theta = [0, 0]

# record starting theta & cost
arraytheta = np.array([theta])
arraycost = np.array([])

# feed data through and update theta; capture cost and theta history
for i in range(0, len(pop_X)):
    # calculate cost for theta on the current point
    cost = fcost(theta, pop_X[i], pop_Y[i])
    arraycost = np.append(arraycost, cost)

    # update theta with a gradient descent step
    theta = theta - n_learning_rate * gradient(theta, pop_X[i], pop_Y[i])
    arraytheta = np.vstack([arraytheta, theta])

#### plot
fig = plt.figure(figsize=(12, 6), dpi=1600)

plt.subplot(121)
plt.scatter(pop_X[:, 1], pop_Y, s=100, alpha=.2,
            c=range(len(pop_X)), cmap=mycmap)

# plot a line for every 20th theta onto the chart
n = len(pop_X)
colnums = [int(i) for i in np.linspace(0, 256, n)]
lineColours = mycmap(colnums)

xrangex = np.linspace(-3, 3, 50)
for i in range(0, len(pop_X), 20):
    yrangey = arraytheta[i, 0] + arraytheta[i, 1] * xrangex
    plt.plot(xrangex, yrangey, c=lineColours[i], label="Step " + str(i))

plt.legend(bbox_to_anchor=(1.05, 1), loc=2)
```
Remember that our dataset contains 100 blue observations then 100 red observations then 100 green observations, with each colour signifying a different population that has a different relationship between $x$ and $y$. See that the regression line gradually adjusts as the populations change. The coefficients $\alpha$ and $\beta$ are updated after each observation using the gradient descent algorithm.
The speed of adaptation is parametrised by the learning rate. The choice of learning rate is quite important. We are allowing every passing observation to adjust our model fit. If we make the learning rate too large, then the model will quickly adapt to changes in populations but it will also be susceptible to noise and outliers. At worst, it could sway about erratically. If we make the learning rate too small, then our model will adapt slowly to any changes in population. This is a classic bias/variance tradeoff dilemma.
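As a toy illustration of this tradeoff (separate from the blog's example; the stream and both learning rates are made-up), compare a small and a large learning rate on a 1-D mean-estimation problem whose true mean jumps halfway through the stream:

```python
import numpy as np

rng = np.random.default_rng(0)
# 500 points with mean 0, then 500 points with mean 10
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(10, 1, 500)])

def sgd_mean(stream, gamma):
    # SGD on the cost (m - x)^2 / 2, whose gradient in m is (m - x)
    m, path = 0.0, []
    for x in stream:
        m = m - gamma * (m - x)
        path.append(m)
    return np.array(path)

slow = sgd_mean(data, 0.01)
fast = sgd_mean(data, 0.5)

# The large rate adapts much faster after the jump at index 500...
assert abs(fast[520] - 10) < abs(slow[520] - 10)
# ...but is noisier while the population is stable
assert np.var(fast[400:500]) > np.var(slow[400:500])
```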
It is interesting to plot the error (cost). In the following chart, for each datapoint I have plotted the prediction error. As one expects, the error spikes at points 100 and 200: these are the points at which the population of arriving observations suddenly changes. Also see the error rapidly decline as the gradient descent mechanism adapts the regression model to the new circumstances:
```python
plt.plot(arraycost)
```
Concluding notes
Stochastic gradient descent is a nifty, computationally cheap method for adaptive supervised learning in an online environment.
This has been a very idealised illustration. A couple of things I glossed over which would present a challenge to somebody who was implementing this in real life:
What is the correct learning rate? How fast or slow should the model adapt to changes? Intuitively we expect that this should depend on the type of changes that must be adapted to. Do they come on gradually? Is there a lot of noise which we do not want the model to adjust to? It may be very difficult to know a priori what the best learning rate is, since we are designing the model to adapt to changes in the data stream which we presumably haven't witnessed.

What is the correct form of the model? By which I mean: what variables should be included and what, if any, transformations should be applied to those variables? We've seen the stochastic gradient descent method will adjust model coefficients to adapt to the data, but it cannot add or remove coefficients from the model, nor can it transform variables to adapt to changes in relationships. For example, if there was initially a linear relationship between $x$ and $y$ which turned into a non-linear relationship then our algorithm may not be able to adapt.
First of all, Ryanair has more than 700,000 flights per year now, but it did not throughout its entire 35-year existence. The total number of flights you give is therefore too high.
Assuming aircraft accidents are purely statistical (which is
not true, of course), one can estimate the number of accidents you expect using a Poisson distribution: the probability of having $ k $ incidents with $ \lambda $ expected incidents is given by
$$ P(k) = e^{-\lambda} \frac{\lambda^k}{k!} $$
To get the expected number of incidents, we take the overall accident rate of 1 in 2.5 million flights (aviation-safety.net, for 2018;
the rate has been higher during the last 35 years) and multiply it by the total number of flights. For 25 million flights this would be $ \lambda = 10 $. This results in a probability for 0 accidents of only $ P(0) \approx 0.004 \% $. Even when assuming 12.5 million flights in total (as suggested by David Richerby in the comments, assuming a linear increase from 0 to 700,000 flights per year), one still obtains $ P(0) \approx 0.67 \% $ with $ \lambda = 5 $.
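As a sanity check, the two probabilities quoted above can be reproduced with a few lines of Python (illustrative only):

```python
import math

def poisson_pmf(k, lam):
    """P(k) = exp(-lam) * lam**k / k! for a Poisson(lam) count."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

p_25m = poisson_pmf(0, 10)  # 25 million flights at 1 accident per 2.5 million
p_12m = poisson_pmf(0, 5)   # 12.5 million flights in total
print(f"{p_25m:.4%}")       # about 0.0045%
print(f"{p_12m:.4%}")       # about 0.6738%
```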
This means Ryanair is indeed safer than the average airline flight worldwide, but so are many other EU or US airlines. Safety is not just a statistical coincidence: regulations are very strict in Europe and training is good, resulting in better than average safety records.
Here's a bit of an elaboration on my earlier comment.
You can see $SO_3$ as the quotient of a unit $3$-ball by the antipodal map on the boundary. This is realized explicitly by the exponential map for $SO_3$. The tangent space to the identity of $SO_3$ is all $3\times 3$ matrices $H$ such that $H^T=-H$. Up to conjugation by an element of $SO_3$, such a matrix looks like
$$\left(\begin{matrix} 0 & \theta & 0 \\ -\theta & 0 & 0 \\ 0 & 0 & 0 \end{matrix}\right)$$
which under exponentiation is converted into rotation by $\theta$ about the $z$-axis. Every element of $SO_3$ is rotation by some angle about a fixed axis since the characteristic polynomial has $1$ as a root -- so this is a rather explicit description of how the exponential map is onto. If you choose the length of a matrix $[a_{i j}]$ to be $\sqrt{ \sum_{i j} a_{i j}^2}$, then technically $SO_3$ is the quotient of the ball of radius $\pi/\sqrt{2}$, with antipodal points on the boundary identified. But that's small potatoes.
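If it helps to see this computationally, here is a small pure-Python check (my own illustration, not part of the original argument) that exponentiating the displayed skew-symmetric matrix really gives rotation by $\theta$ about the $z$-axis:

```python
import math

def mat_mul(A, B):
    """Product of two 3x3 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_exp(H, terms=30):
    """Truncated Taylor series exp(H) = sum_k H^k / k! for a 3x3 matrix."""
    E = [[float(i == j) for j in range(3)] for i in range(3)]  # running sum
    P = [[float(i == j) for j in range(3)] for i in range(3)]  # current power H^k
    for k in range(1, terms):
        P = mat_mul(P, H)
        E = [[E[i][j] + P[i][j] / math.factorial(k) for j in range(3)]
             for i in range(3)]
    return E

theta = 1.2
H = [[0.0, theta, 0.0], [-theta, 0.0, 0.0], [0.0, 0.0, 0.0]]
R = mat_exp(H)
# R should be [[cos t, sin t, 0], [-sin t, cos t, 0], [0, 0, 1]] with t = theta
```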
The 3-ball with antipodal points identified, as you've seen has fundamental group $\Bbb Z_2$. In particular you can think of the 3-ball with antipodal points identified as being built up from the 1-ball with antipodal points identified, union a 2-cell (this gives a CW-decomposition of the 2-ball with antipodal points identified) then union a 3-cell. So the fundamental group all "happens" in the 2-ball with antipodal points identified. The 1-skeleton generates the fundamental group and in our model (under exponentiation) corresponds to one full rotation by $2\pi$ about any chosen axis. The relation corresponds to the 2-cell attachment and you can
literally interpret it as the plate trick in a "sufficiently nice" model. Let me explain.
The trouble with the plate trick is it uses a human arm. Human arms you can think of as carrying a core underlying piece of information, a map $f : [0,1] \to SO_3$. $f(t)$ represents the orientation in space of the part of the arm distance $t$ from the shoulder, so $0$ represents the shoulder distance, and $1$ the palm of the hand. There are technical issues -- if you choose the framing corresponding to the positions of the appropriate bone in the arm, you don't get a continuous map, just a piecewise-constant map -- but if you consider the orientations of the ligaments between the bones it becomes a continuous map. So the trouble with the plate trick is the map $f$ isn't the only constraint on the arm. The arm is embedded in space. Bones are rigid, there's only so much ligaments are capable of doing. So you could be rightly concerned that the plate trick is a demonstration of a peculiarity of anatomy, rather than something about $SO_3$.
Alright, but formally what's going on with the plate trick? As mentioned, a configuration of the arm we're representing by $f$, which is an element of the path space $PSO_3$ of continuous paths in $SO_3$ with $f(0) = I \in SO_3$ fixed.
There is the path-loop fibration:
$$\Omega SO_3 \to PSO_3 \to SO_3$$
and $PSO_3$ is contractible. The plate trick "is" the induced map $\pi_1 SO_3 \to \pi_0 \Omega SO_3$ from the homotopy long exact sequence of the above fibration, which is guaranteed to be an isomorphism because $PSO_3$ is contractible. Specifically, elements of $\pi_1 SO_3$ you are thinking of as motions of your hand. The path-loop fibration is telling you that you can realize the motion as a motion of your arm (or at least, a path in $PSO_3$ but the fact that people can
do the plate trick says you can actually realize it as a motion of your arm).
There is of course an element that's a bit misleading in all this. The arm isn't complicated enough to allow for you to lift the product of two generators of $\pi_1 SO_3$ and witness the null homotopy -- since the arm allows a fixed complexity, the null-homotopy happens as you lift the concatenation.
But you can see from the rather explicit exponential map above how the null homotopy actually works, as it's sitting in the image of the 2-ball with antipodal points identified.
How is that? |
I have a matrix $A \in \mathbb{R}^{n \times n}$ and would like to know about the relationship between the $\| A \|_\infty$ (i.e., the maximum element of the matrix) and the operator-induced norm $\| A \|$.
I know that the following upper bound holds (from Matrix Norm Inequality): $ \| A \|_\infty \leq \sqrt{n} \| A \| $.
But I am trying to find a lower bound. (Would the lower bound possibly involve the minimum singular value times some factor of $n$?)
Also, I need this lower bound to come from a norm with the sub-multiplicative property: for square matrices $A,B$, $\| A B \|_{p} \leq \| A \|_p \| B \|_p $.
But is there an appropriate norm/$p$ that suits this?
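Not a proof, but a quick numerical sanity check of the two-sided bound $\max_{i,j}|a_{ij}| \le \|A\|_2 \le n \max_{i,j}|a_{ij}|$ (so $\|A\|_2/n$ is a valid lower bound for the max-element norm); the power-iteration helper and the test matrix are my own illustration:

```python
import math
import random

def spectral_norm(A, iters=200):
    """Largest singular value of a square matrix via power iteration on A^T A."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # A v
        u = [sum(A[i][j] * w[i] for i in range(n)) for j in range(n)]  # A^T w
        norm = math.sqrt(sum(x * x for x in u))
        v = [x / norm for x in u]
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return math.sqrt(sum(x * x for x in w))  # ||A v|| for the converged unit v

random.seed(0)
n = 4
A = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]
max_elem = max(abs(x) for row in A for x in row)
sigma = spectral_norm(A)
# Expect: max_elem <= sigma <= n * max_elem
```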
No, this is not true, not even for closed points. Consider $\Bbb P^1_k$. Then $A=k$ and $\mathcal{O}_{\Bbb P^1_k,pt}\cong k[t]_{(t)}$, and $j^{-1}$ of the maximal ideal is the zero ideal, so the requested localization of $A$ is just $A$, which is very far from $k[t]_{(t)}$. Certainly this will be an isomorphism if $X$ is quasiaffine - I don't see other good/...
Let $f : \mathcal F \to \mathcal O_X$ be non-zero and $s \in \mathcal F(X)$ so that the image of $s$ is not zero. Since $\mathcal F$ is torsion, there is $t \in \mathcal O_X$ so that $ts = 0$. It implies that $s$ is zero outside the locus $Z(t)$. In particular, $f(s)$ is zero outside $Z(t)$ and by density $f(s)$ is zero everywhere, a contradiction.
For a prime ideal $p$, $A_p := (A\setminus p)^{-1} A$: you invert elements not in $p$ to make "everything be about $p$". (Recall that $A_p$ is local with maximal ideal $pA_p$, so if $p$ was inverted, that wouldn't make sense)
As Schemer said in the comments, if your sheaf $F$ is supported on a point then all intermediate cohomology vanishes, but it is not a vector bundle. However, this is practically the only problem that can occur. Specifically, Hartshorne, in Lemma 6.3, Chapter 3 of Ample Subvarieties of Algebraic Varieties, gives the following result: Let $E$ be a coherent ...
Yes, even with no assumptions on $f$. Suppose we have an exact sequence $$0\to M_n\to\cdots\to M_1\to M_0\to0$$ of flat $A$-modules. If $N$ is any $A$-module, then the sequence $$0\to M_n\otimes_AN\to\cdots\to M_1\otimes_AN\to M_0\otimes_AN\to0$$ is exact. In the case of $f:\mathop{\mathrm{Spec}}B\to\mathop{\mathrm{Spec}}A$, pullback is given by $-\otimes_AB$...
The answer is no for general affine morphisms: take any affine scheme over a field with nontrivial Picard group for $X$ and the spectrum of said field for $Y$. For instance, let $X$ be an affine open in an elliptic curve over some algebraically closed field of characteristic zero. As written in the question body, the answer is yes for the finite case as ...
Let me concentrate on your first question; this should clarify the authors' claim. We'll see if that's enough for you to figure out the rest. Even though later we will be interested in the rather specific $k$-vector space $\hom(V,W)$ of linear maps, for now, it is conceptually easier to consider any finite-dimensional $k$-vector space $V$. I like to think ...
No, this construction will never give a Calabi--Yau variety. For simplicity assume $X$ is smooth. By construction, the variety $V=\mathbb P_X(F)$ is a projective bundle, so it is uniruled (except possibly in the trivial case when $F$ has rank 1). It is a basic property of smooth projective uniruled varieties $V$ that $$H^0(V,K_V^m)=0 \quad \text{ for all ...
Let $X$ be a topological space where any two non-empty open sets have non-empty intersection. Fix a point $p\in X$ and define the following for any open set $U$: $$\mathcal{F}(U)=\left\{\begin{array}{ll} 0 & \text{if $U=\emptyset$ or $p\in U$}\\ A &\text{ otherwise}\end{array}\right. $$ where $A$ is any non-trivial abelian group. Restriction maps are ...
There are two things to keep in mind. First, we always need to fix an ample line bundle to speak about stability. Secondly, intersection theory is made in such a way that you can work (at least if $X$ is smooth) in the Grothendieck group $K_0(X)$, so one might consider locally free resolutions. Fix an ample line bundle $H$. To define the degree of any coherent sheaf ...
The hypothesis that the divisor be effective is used in giving the interpretation that the sections have poles of order $\le a_i$ on $V_i$. If you modify the "description" to say that when $a_i<0$, we require a zero of order $\ge |a_i|$ along $V_i$, then it works fine. In the case of your final short exact sequence, you need $D$ effective here because ...
The key facts here are: Every abelian group is a $\mathbb Z$-module. An $\mathcal R$-module $\mathcal F$ is a sheaf of abelian groups equipped with a morphism $\mathcal R \to \operatorname{End}_{\text{Sh}(X)}(\mathcal F)$. Let $\mathbb Z_X^-$ be the presheaf $U \mapsto \mathbb Z$ and let $\mathbb Z_X$ be its sheafification. Let $\mathcal F$ be a sheaf of ...
There's an exact sequence $$0\to \pi^*\Omega^1_X\to \Omega^1_{\widetilde{X}}\to i_*\Omega^1_{E/Y}\to 0$$ where $i:E\to \widetilde{X}$ is the inclusion of the exceptional divisor. This corresponds to the fact that pulling back differential forms gives you differential forms which are constant along the fibers. Checking that this is exact may be done from ...
Oops, this is rather silly. If $\Delta (X)\subset U,V\subset Y$ so that $X\to U\subset Y,X\to V\subset Y$ are both the same locally closed immersion $\Delta$, and also noting that if we ever have a morphism $f:X\to Y$ with $U\subset X$ mapping into $V\supset f(U)$, then $f^*$ commutes with restriction, we get $\Delta^*(\mathcal{I}/{\mathcal{I}^2})|_X=\Delta_X^*...
The crucial question is: What are the stalks of $\mathcal{F}$? For any open $U \subset \mathbb{R}$, one has $$\mathcal{F}(U) = \bigoplus_{x \in U \cap [0,1)} x \cdot \mathbb{Z},$$ and for any inclusion $V \subset U$, the restriction map $\mathcal{F}(U) \to \mathcal{F}(V)$ factors out all summands that do not belong to $V$. So let $\overline f \in \mathcal{F}...
Let me add one way to create some examples: If $\mathcal{F}$ is a sheaf (of, for example, abelian groups, rings, etc.), then $\mathcal{F}(\emptyset)$ will be the terminal object in the category where the sheaf takes its values. That means you can find examples of presheaves which are not sheaves by preventing $\mathcal{F}(\emptyset)$ from being the terminal ...
Yes. For instance, the numerable topology on Top controls bundles that are classified by maps to a classifying space. Open covers in the numerable topology are those open covers that admit a subordinate partition of unity. If G is a topological group, it has a classifying space BG, as well as a classifying stack G-Bun of principal G-bundles, defined as ...
The answer lies in the definition of "generated by finitely many sections". As I understand it, $\mathcal{F}|_U$ is generated by finitely many sections if there are sections $s_1,\dots, s_n$ over $U$ such that $\mathcal{F}|_U$ is the smallest sheaf of $\mathcal{O}_U$-modules that contains all the $s_i$. In other words, if $\mathcal{G}$ is a subsheaf of $\...
That’s not the right definition. In any category with a terminal object $1$, a constant morphism is a morphism that factors through the terminal object. In particular there is exactly one such morphism $X \to Y$ for every global point $1 \to Y$; note that these are very different from points of the Zariski spectrum, for schemes, because the terminal object ...
Hydrogen: Structure, Preparation, Properties and Uses of Hydrogen Peroxide

Concentration of H₂O₂:

Physical properties:
- Boiling point: 152 °C
- Melting point: −0.4 °C
- It is completely miscible with water.
- It is stored in plastic or wax-coated glass bottles, since rough surfaces can decompose H₂O₂.

H₂O₂ acts as both an oxidising agent and a reducing agent in acidic and basic media. It acts as a stronger oxidising agent than reducing agent due to its high S.R.P. value.
Oxidising property: H₂O₂ → H₂O + [O] (nascent oxygen)
Acidic medium: H₂O₂ + 2H⁺ + 2e⁻ → 2H₂O, E° = +1.77 V
Basic medium: H₂O₂ + 2e⁻ → 2OH⁻, E° = +0.87 V
With O₃: H₂O₂ + O₃ → H₂O + 2O₂ (H₂O₂ acts as a reducing agent).
Cl₂ reacts with H₂O₂: Cl₂ + H₂O₂ → 2HCl + O₂ (antichlor action).
Bleaching action: H₂O₂ → H₂O + [O] (nascent oxygen). Its bleaching action is due to nascent oxygen: coloured organic (vegetable) matter + [O] → colourless matter. Its bleaching action is permanent, whereas SO₂ bleaches substances by reduction (temporary action).

Structure of H₂O₂:
Open-book structure; hybridization sp³; non-linear and non-planar; the O—H bond is polar while the O—O bond is non-polar.

Parameter — Gas phase / Solid phase:
- ∠O–O–H: 94°48′ / 101°5′
- Dihedral angle: 111°30′ / 90°
- O–O bond length: 1.48 Å / 1.485 Å
- O–H bond length: 0.95 Å / 0.988 Å
Strength of H₂O₂ is expressed in terms of volume, i.e. 10 vol, 20 vol, and so on. It is decided by the volume of O₂ liberated:
Volume of O₂ gas liberated = volume of H₂O₂ × strength of H₂O₂ in volumes.
For example, 2 L of 10-vol H₂O₂ liberates 20 L of O₂, and 5 mL of 20-vol H₂O₂ liberates 100 × 10⁻³ L of O₂.

Worked example (1 L of 10-vol H₂O₂): O₂ liberated = 10 × 1 = 10 L. From 2H₂O₂ → 2H₂O + O₂, 68 g of H₂O₂ gives 22.4 L of O₂, so the mass required is x = (68 × 10)/22.4 = 30.36 g of H₂O₂ per litre, i.e. 3.036% (w/v). (The commercial 30% w/v solution is called perhydrol.) Molarity = w/(GMW × V) = 30.36/(34 × 1) = 0.893 M; Normality = 0.893 × 2 = 1.786 N.
1. Strength of H₂O₂ in terms of normality: \tt \frac{68 \times X}{22.4}=17 \times N \Rightarrow X = 5.6 \times N, where X is the volume strength of H₂O₂.
2. % strength = \tt \frac{17}{56}\times volume\ strength
3. X = 11.2 × molarity.
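The relations above can be verified numerically; a short illustrative script for a 10-volume H₂O₂ solution:

```python
# Check of the H2O2 volume-strength relations for a 10-volume solution.
X = 10.0                       # volume strength

normality = X / 5.6            # from X = 5.6 * N
molarity = X / 11.2            # from X = 11.2 * M
percent_wv = 17.0 / 56.0 * X   # % strength (w/v)
grams_per_litre = 68.0 * X / 22.4

print(normality, molarity, percent_wv, grams_per_litre)
# roughly 1.786 N, 0.893 M, 3.036 % w/v, 30.36 g/L
```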
In my last post, the irrational number
e was used. Let’s define and explain e a bit more in this post.
If you deposit $1 in the bank which pays 100% interest once each year (I need to find this bank!), then at the end of 1 year, you will have the original $1 plus 100% of that which is another $1, for a total of $2. Now bear with me, but an equivalent expression that will give me this same answer is\[
{\left({{1}\hspace{0.33em}{+}\hspace{0.33em}\frac{1}{1}}\right)}^{1}\hspace{0.33em}{=}\hspace{0.33em}{(}{2}{)}^{1}\hspace{0.33em}{=}\hspace{0.33em}{2}
\]
What happens if this generous bank computes part of the interest during the year and that interest is added to your original $1 to be included in subsequent interest calculations? When banks do this, this is called
compounding interest. That is, the interest earned compounds, or is added to, the original investment to earn even more interest.
Now suppose the bank computes the interest twice per year. In 6 months, the bank will compute your interest but since only half the year has gone by, only half the interest, or 50%, is used. So in 6 months you have $1.50. At the end of the year, 50% interest is again computed but on $1.50 now. This gives a total of $2.25 which is better than the $2 you would get if the bank didn’t compound the interest semi-annually. Now the fractional equivalent of 50% is 1/2, so again bear with me, but an algebraic equivalent expression that computes the amount you will have at the end of 1 year when the bank compounds semi-annually is\[
{\left({{1}\hspace{0.33em}{+}\hspace{0.33em}\frac{1}{2}}\right)}^{2}\hspace{0.33em}{=}\hspace{0.33em}{(}{1}{.}{5}{)}^{2}\hspace{0.33em}{=}\hspace{0.33em}{2}{.}{25}
\]
Suppose the bank compounds interest four times a year (quarterly)? Again, the interest used each quarter will only be 1/4 of the annual 100% interest but the interest will compound each quarter. At the end of the first quarter, you will have $1.25. This is now the amount to be used at the end of the second quarter, and so on. The algebraic equivalent expression to compute what you have after the entire year is\[
{\left({{1}\hspace{0.33em}{+}\hspace{0.33em}\frac{1}{4}}\right)}^{4}\hspace{0.33em}{=}\hspace{0.33em}{(}{1}{.}{25}{)}^{4}\hspace{0.33em}{=}\hspace{0.33em}{2}{.}{44}
\]
This is better still! But do you notice the pattern in the algebraic expressions? The denominator of the fraction in the brackets and the exponent (power) are the same as the number of times interest is compounded during the year. So in general, if interest is compounded
n times per year, the amount you will have at the end of the year is
\[
{\left({{1}\hspace{0.33em}{+}\hspace{0.33em}\frac{1}{n}}\right)}^{n}
\]
You may have also noticed that the larger
n is, the more money you have. Well, if that is still not enough for you, let’s find a bank that compounds daily. If your dollar is compounded daily, at the end of 1 year you will have
\[
{\left({{1}\hspace{0.33em}{+}\hspace{0.33em}\frac{1}{365}}\right)}^{365}\hspace{0.33em}{=}\hspace{0.33em}{2}{.}{71}
\]
That’s great but you may have thought that would be a larger amount. The problem is that though the exponent is increasing, which has the effect of increasing the amount, the fraction in the brackets is getting smaller, making the quantity in the brackets closer to 1. Raising 1 to a power is just 1. So there are two competing forces here: one increases the value of the expression and one decreases it.
Now we can compound more frequently than daily. We can compound half-daily, etc. What happens to the expression as
n increases to infinity, ∞?
Well maths does have a process for that, it’s called
limits. Let me just show that and then explain it:
\[
\mathop{\lim}\limits_{{n}\rightarrow\infty}{\left({{1}\hspace{0.33em}{+}\hspace{0.33em}\frac{1}{n}}\right)}^{n}
\]
This is read: “What is the limit of this expression as
n goes to infinity?”. Now you can get an approximate answer to this by putting larger and larger numbers in for n on your calculator. It turns out that there is no exact answer that can be written in decimal numbers because the answer to the above is an irrational number like 𝜋. This was proven to be the case in 1737 by Leonhard Euler. Because of his work with this number, it is given the symbol e in his honour.
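If you try this on a computer rather than a calculator, the convergence is easy to watch (a small illustrative Python snippet):

```python
import math

# Watch (1 + 1/n)**n approach e as n grows; the n values are arbitrary choices.
for n in (1, 2, 4, 365, 10_000, 1_000_000):
    print(n, (1 + 1 / n) ** n)   # 2, 2.25, 2.4414..., climbing toward e

approx = (1 + 1 / 1_000_000) ** 1_000_000
print(math.e)  # 2.718281828459045
```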
It turns out that to 50 decimal places
e = 2.71828182845904523536028747135266249775724709369995…
So you see that the most you can make with your dollar is $2.72.
This number was first calculated by Jacob Bernoulli in 1683 to solve the very problem about interest we just went through. But Euler did a lot more work with it.
e is a very important number in calculus, probability, finance, and the interesting world of complex numbers. |
Skills to Develop
- To learn the concept of the sample space associated with a random experiment.
- To learn the concept of an event associated with a random experiment.
- To learn the concept of the probability of an event.

Sample Spaces and Events
Rolling an ordinary six-sided die is a familiar example of a
random experiment, an action for which all possible outcomes can be listed, but for which the actual outcome on any given trial of the experiment cannot be predicted with certainty. In such a situation we wish to assign to each outcome, such as rolling a two, a number, called the probability of the outcome, that indicates how likely it is that the outcome will occur. Similarly, we would like to assign a probability to any event, or collection of outcomes, such as rolling an even number, which indicates how likely it is that the event will occur if the experiment is performed. This section provides a framework for discussing probability problems, using the terms just mentioned.
Definition: random experiment
A random experiment is a mechanism that produces a definite outcome that cannot be predicted with certainty. The sample space associated with a random experiment is the set of all possible outcomes.
Definition: Element and Occurrence
An event \(E\) is said to occur on a particular trial of the experiment if the outcome observed is an element of the set \(E\).
Example \(\PageIndex{1}\): Sample Space for a single coin
Construct a sample space for the experiment that consists of tossing a single coin.
Solution
The outcomes could be labeled \(h\) for heads and \(t\) for tails. Then the sample space is the set \(S=\{h,t\}\).
Example \(\PageIndex{2}\): Sample Space for a single die
Construct a sample space for the experiment that consists of rolling a single die. Find the events that correspond to the phrases “an even number is rolled” and “a number greater than two is rolled.”
Solution:
The outcomes could be labeled according to the number of dots on the top face of the die. Then the sample space is the set \(S = \{1,2,3,4,5,6\}\)
The outcomes that are even are \(2, 4,\; \; \text{and}\; \; 6\), so the event that corresponds to the phrase “an even number is rolled” is the set \(\{2,4,6\}\), which it is natural to denote by the letter \(E\). We write \(E=\{2,4,6\}\).
Similarly the event that corresponds to the phrase “a number greater than two is rolled” is the set \(T=\{3,4,5,6\}\), which we have denoted \(T\).
A graphical representation of a sample space and events is a
Venn diagram, as shown in Figure \(\PageIndex{1}\). In general the sample space \(S\) is represented by a rectangle, outcomes by points within the rectangle, and events by ovals that enclose the outcomes that compose them.
Figure \(\PageIndex{1}\): Venn Diagrams for Two Sample Spaces
Example \(\PageIndex{3}\): Sample Spaces for two coins
A random experiment consists of tossing two coins.
1. Construct a sample space for the situation that the coins are indistinguishable, such as two brand new pennies.
2. Construct a sample space for the situation that the coins are distinguishable, such as one a penny and the other a nickel.

Solution:

1. After the coins are tossed one sees either two heads, which could be labeled \(2h\), two tails, which could be labeled \(2t\), or coins that differ, which could be labeled \(d\). Thus a sample space is \(S=\{2h, 2t, d\}\).
2. Since we can tell the coins apart, there are now two ways for the coins to differ: the penny heads and the nickel tails, or the penny tails and the nickel heads. We can label each outcome as a pair of letters, the first of which indicates how the penny landed and the second of which indicates how the nickel landed. A sample space is then \(S' = \{hh, ht, th, tt\}\).
A device that can be helpful in identifying all possible outcomes of a random experiment, particularly one that can be viewed as proceeding in stages, is what is called a
tree diagram. It is described in the following example.
Example \(\PageIndex{4}\): Tree diagram
Construct a sample space that describes all three-child families according to the genders of the children with respect to birth order.
Solution:
Two of the outcomes are “two boys then a girl,” which we might denote \(bbg\), and “a girl then two boys,” which we would denote \(gbb\).
Clearly there are many outcomes, and when we try to list all of them it could be difficult to be sure that we have found them all unless we proceed systematically. The tree diagram shown in Figure \(\PageIndex{2}\), gives a systematic approach.
Figure \(\PageIndex{2}\) : Tree Diagram For Three-Child Families
The diagram was constructed as follows. There are two possibilities for the first child, boy or girl, so we draw two line segments coming out of a starting point, one ending in a \(b\) for “boy” and the other ending in a \(g\) for “girl.” For each of these two possibilities for the first child there are two possibilities for the second child, “boy” or “girl,” so from each of the \(b\) and \(g\) we draw two line segments, one segment ending in a \(b\) and one in a \(g\). For each of the four ending points now in the diagram there are two possibilities for the third child, so we repeat the process once more.
The line segments are called
branches of the tree. The right ending point of each branch is called a node. The nodes on the extreme right are the final nodes; to each one there corresponds an outcome, as shown in the figure.
From the tree it is easy to read off the eight outcomes of the experiment, so the sample space is, reading from the top to the bottom of the final nodes in the tree,
\[S=\{bbb,\; bbg,\; bgb,\; bgg,\; gbb,\; gbg,\; ggb,\; ggg\}\]
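The same sample space can be generated programmatically; a short illustrative snippet (not part of the text) using the Cartesian product mirrors the tree construction:

```python
from itertools import product

# Each child is 'b' or 'g'; birth order matters, so the sample space is the
# Cartesian product {b,g} x {b,g} x {b,g} -- the eight final nodes of the tree.
S = ["".join(kids) for kids in product("bg", repeat=3)]
print(S)  # ['bbb', 'bbg', 'bgb', 'bgg', 'gbb', 'gbg', 'ggb', 'ggg']
```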
Probability
Definition: probability
The probability of an outcome \(e\) in a sample space \(S\) is a number \(P\) between \(0\) and \(1\) that measures the likelihood that \(e\) will occur on a single trial of the corresponding random experiment. The value \(P=0\) corresponds to the outcome \(e\) being impossible and the value \(P=1\) corresponds to the outcome \(e\) being certain.
Definition: probability of an event
The
probability of an event \(A\) is the sum of the probabilities of the individual outcomes of which it is composed. It is denoted \(P(A)\).
The following formula expresses the content of the definition of the probability of an event:
If an event \(E\) is \(E=\{e_1,e_2,...,e_k\}\), then
\[P(E)=P(e_1)+P(e_2)+...+P(e_k)\]
The following figure expresses the content of the definition of the probability of an event:
Figure \(\PageIndex{3}\) : Sample Spaces and Probability
Since the whole sample space \(S\) is an event that is certain to occur, the sum of the probabilities of all the outcomes must be the number \(1\).
In ordinary language probabilities are frequently expressed as percentages. For example, we would say that there is a \(70\%\) chance of rain tomorrow, meaning that the probability of rain is \(0.70\). We will use this practice here, but in all the computational formulas that follow we will use the form \(0.70\) and not \(70\%\).
Example \(\PageIndex{5}\)
A coin is called “balanced” or “fair” if each side is equally likely to land up. Assign a probability to each outcome in the sample space for the experiment that consists of tossing a single fair coin.
Solution:
With the outcomes labeled \(h\) for heads and \(t\) for tails, the sample space is the set
\[S=\{h,t\}\]
Since the outcomes have the same probabilities, which must add up to \(1\), each outcome is assigned probability \(1/2\).
Example \(\PageIndex{6}\)
A die is called “balanced” or “fair” if each side is equally likely to land on top. Assign a probability to each outcome in the sample space for the experiment that consists of tossing a single fair die. Find the probabilities of the events \(E\): “an even number is rolled” and \(T\): “a number greater than two is rolled.”
Solution:
With outcomes labeled according to the number of dots on the top face of the die, the sample space is the set
\[S=\{1,2,3,4,5,6\}\]
Since there are six equally likely outcomes, whose probabilities must add up to \(1\), each is assigned probability \(1/6\).
Since \(E = \{2,4,6\}\),
\[P(E) = \dfrac{1}{6} + \dfrac{1}{6} + \dfrac{1}{6} = \dfrac{3}{6} = \dfrac{1}{2}\]
Since \(T = \{3,4,5,6\}\),
\[P(T) = \dfrac{4}{6} = \dfrac{2}{3}\]
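These equally-likely computations can be mirrored in a few lines of code (illustrative only), using exact fractions:

```python
from fractions import Fraction

# The die example: each of the six outcomes has probability 1/6.
S = {1, 2, 3, 4, 5, 6}
p = {outcome: Fraction(1, 6) for outcome in S}

E = {2, 4, 6}       # "an even number is rolled"
T = {3, 4, 5, 6}    # "a number greater than two is rolled"

P_E = sum(p[o] for o in E)  # 1/2
P_T = sum(p[o] for o in T)  # 2/3
```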
Example \(\PageIndex{7}\)
Two fair coins are tossed. Find the probability that the coins match, i.e., either both land heads or both land tails.
Solution:
In Example \(\PageIndex{3}\) we constructed the sample space \(S=\{2h,2t,d\}\) for the situation in which the coins are identical and the sample space \(S′=\{hh,ht,th,tt\}\) for the situation in which the two coins can be told apart.
The theory of probability does not tell us how to assign probabilities to the outcomes, only what to do with them once they are assigned. Specifically, using sample space \(S\), matching coins is the event \(M=\{2h, 2t\}\) which has probability \(P(2h)+P(2t)\). Using sample space \(S'\), matching coins is the event \(M'=\{hh, tt\}\), which has probability \(P(hh)+P(tt)\). In the physical world it should make no difference whether the coins are identical or not, and so we would like to assign probabilities to the outcomes so that the numbers \(P(M)\) and \(P(M')\) are the same and best match what we observe when actual physical experiments are performed with coins that seem to be fair. Actual experience suggests that the outcomes in S' are equally likely, so we assign to each probability \(\frac{1}{4}\), and then...
\[P(M') = P(hh) + P(tt) = \frac{1}{4} + \frac{1}{4} = \frac{1}{2}\]
Similarly, from experience appropriate choices for the outcomes in \(S\)
are:
\[P(2h) = \frac{1}{4}\]
\[P(2t) = \frac{1}{4}\]
\[P(d) = \frac{1}{2}\]
The previous three examples illustrate how probabilities can be computed simply by counting when the sample space consists of a finite number of equally likely outcomes. In some situations the individual outcomes of any sample space that represents the experiment are unavoidably unequally likely, in which case probabilities cannot be computed merely by counting, but the computational formula given in the definition of the probability of an event must be used.
Example \(\PageIndex{8}\)
The breakdown of the student body in a local high school according to race and ethnicity is \(51\%\) white, \(27\%\) black, \(11\%\) Hispanic, \(6\%\) Asian, and \(5\%\) for all others. A student is randomly selected from this high school. (To select “randomly” means that every student has the same chance of being selected.) Find the probabilities of the following events:
\(B\): the student is black
\(M\): the student is minority (that is, not white)
\(N\): the student is not black

Solution:
The experiment is the action of randomly selecting a student from the student population of the high school. An obvious sample space is \(S=\{w,b,h,a,o\}\). Since \(51\%\) of the students are white and all students have the same chance of being selected, \(P(w)=0.51\), and similarly for the other outcomes. This information is summarized in the following table:
\[\begin{array}{l|ccccc}Outcome & w & b & h & a & o \\ Probability & 0.51 & 0.27 & 0.11 & 0.06 & 0.05\end{array}\]
Since \(B=\{b\},\; \; P(B)=P(b)=0.27\)
Since \(M=\{b,h,a,o\},\; \; P(M)=P(b)+P(h)+P(a)+P(o)=0.27+0.11+0.06+0.05=0.49\)
Since \(N=\{w,h,a,o\},\; \; P(N)=P(w)+P(h)+P(a)+P(o)=0.51+0.11+0.06+0.05=0.73\)
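The same additions can be scripted (an illustrative sketch, not part of the text):

```python
# Outcome probabilities from the table above.
p = {"w": 0.51, "b": 0.27, "h": 0.11, "a": 0.06, "o": 0.05}

def prob(event):
    """P(A) = sum of the probabilities of the outcomes in A."""
    return sum(p[o] for o in event)

P_B = prob({"b"})                 # 0.27
P_M = prob({"b", "h", "a", "o"})  # 0.49
P_N = prob({"w", "h", "a", "o"})  # 0.73
```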
Example \(\PageIndex{9}\)
The student body in the high school considered in the last example may be broken down into ten categories as follows: \(25\%\) white male, \(26\%\) white female, \(12\%\) black male, \(15\%\) black female, 6% Hispanic male, \(5\%\) Hispanic female, \(3\%\) Asian male, \(3\%\) Asian female, \(1\%\) male of other minorities combined, and \(4\%\) female of other minorities combined. A student is randomly selected from this high school. Find the probabilities of the following events:
\(B\): the student is black; \(MF\): the student is a non-white female; \(FN\): the student is female and is not black.

Solution:
Now the sample space is \(S=\{wm, bm, hm, am, om, wf, bf, hf, af, of\}\). The information given in the example can be summarized in the following table, called a two-way contingency table:
\[\begin{array}{l|ccccc} & \text{White} & \text{Black} & \text{Hispanic} & \text{Asian} & \text{Others} \\ \hline \text{Male} & 0.25 & 0.12 & 0.06 & 0.03 & 0.01 \\ \text{Female} & 0.26 & 0.15 & 0.05 & 0.03 & 0.04 \end{array}\]

Since \(B=\{bm, bf\}\), \(P(B)=P(bm)+P(bf)=0.12+0.15=0.27\).
Since \(MF=\{bf, hf, af, of\}\), \(P(MF)=P(bf)+P(hf)+P(af)+P(of)=0.15+0.05+0.03+0.04=0.27\).
Since \(FN=\{wf, hf, af, of\}\), \(P(FN)=P(wf)+P(hf)+P(af)+P(of)=0.26+0.05+0.03+0.04=0.38\).

Key Takeaway
The sample space of a random experiment is the collection of all possible outcomes. An event associated with a random experiment is a subset of the sample space. The probability of any outcome is a number between \(0\) and \(1\). The probabilities of all the outcomes add up to \(1\). The probability of any event \(A\) is the sum of the probabilities of the outcomes in \(A\).
The Navigation Algorithm¶
This section describes the navigation algorithm in detail.
Global Path Planner¶
A global planner problem in the Isaac framework is decomposed in three classes: the Planner Model, the Visibility Graph Algorithm, and the Optimizer.
Planner Model¶
A planner model (engine/gems/path_planner/PlannerModel.hpp) must provide the following:
- A set of functions indicating whether or not a given state is collision free.
- Whether or not a direct path (a simple path, such as a straight line) exists between two states and is collision free.
- The distance, or length, of the path.
- A way to randomly sample in the space of states.
In the case of the Carter platform, the differential base is approximated as circular, allowing fast collision detection using a distance map. A direct path is defined as a short (< 2 m) straight-line path, since the base can always rotate in place toward the target direction. The planning problem is therefore a 2D problem.
Visibility Graph Algorithm¶
Inspired by the paper of T. Siméon, J.-P. Laumond, and C. Nissoux, "Visibility-based probabilistic roadmaps for motion planning", the visibility graph algorithm provides a very generic way to find a path in a high-dimensional space. The goal is to produce a small graph with high visibility coverage.

The graph is built by keeping a set of points (called guards in the paper) that cannot directly be connected to each other. Connections are added whenever there is an intermediate state that directly connects two guards that were not connected by any path.

The Isaac implementation also adds a connection between guards that are not connected by a path of length 2, using a single intermediate state. This produces a bigger but higher-quality graph.

Once the graph is built, the shortest path can be computed by first connecting the start and target states to guards and then running Dijkstra's algorithm on the graph. The same graph can be precomputed, manually adjusted in case of difficult problems, and reused for other shortest-path requests as long as the environment is static.
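As a rough illustration of the shortest-path step, here is a minimal Dijkstra sketch over a tiny hand-made graph; the node names and edge weights are invented for illustration and are not taken from Isaac:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted adjacency-list graph via Dijkstra."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy "visibility graph": edge weights are straight-line path lengths.
graph = {
    "start": [("g1", 1.0), ("g2", 2.5)],
    "g1": [("g2", 1.0), ("goal", 3.0)],
    "g2": [("goal", 1.0)],
    "goal": [],
}
path, length = dijkstra(graph, "start", "goal")
```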
Optimizer¶
The final stage is path optimization. The visibility graph produces a path quickly, but the result can be very jagged. A local optimizer simulates the effect of springs between waypoints, pulling them closer together, while a repulsive force pushes each waypoint away from the closest obstacle, once again using the planner model.

A gradient descent with line search is used to find the local minimum.
Trajectory Planner¶
The local planner in Isaac is based on a Linear Quadratic Regulator (LQR). Isaac SDK provides a customizable LQR solver. The dynamics of the system as well as the cost function must be provided to the LQR solver, which performs an iterative gradient descent with line search to find the best path.
In the case of the Carter platform, the dynamics of the system are those of a differential base:
State:
- \(x(t)\): X position of the base
- \(y(t)\): Y position of the base
- \(\theta (t)\): Orientation of the base
- \(v(t)\): Linear velocity
- \(\theta '(t)\): Angular velocity

Control:
- \(al(t)\): Left wheel angular acceleration
- \(ar(t)\): Right wheel angular acceleration
The dynamics are then given by the formula (L is the base length and R the wheel radius):
\(x'(t) = v(t) \cos( \theta(t) )\) \(y'(t) = v(t) \sin( \theta(t) )\) \(v'(t) = (ar(t) + al(t)) \cdot R / 2\) \(\theta''(t) = (ar(t) - al(t)) \cdot R / L\)
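A minimal forward-Euler sketch of these differential-drive dynamics (linear acceleration `(ar + al) * R / 2`, angular acceleration `(ar - al) * R / L`); the values of L, R, dt, and the controls are illustrative assumptions, not Carter's actual parameters:

```python
import math

def step(state, al, ar, L=0.5, R=0.1, dt=0.01):
    """One Euler integration step of the differential-base dynamics."""
    x, y, theta, v, omega = state
    dv = (ar + al) * R / 2.0      # linear acceleration
    domega = (ar - al) * R / L    # angular acceleration
    return (
        x + v * math.cos(theta) * dt,
        y + v * math.sin(theta) * dt,
        theta + omega * dt,
        v + dv * dt,
        omega + domega * dt,
    )

# Equal wheel accelerations for 1 s: the base speeds up without turning.
state = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    state = step(state, al=1.0, ar=1.0)
```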
Here is an example of a path produced by carter in the local view:
The local map is computed in real time as well as the distance map. In red is the path provided by the global planner, and in blue is the plan produced by the LQR, optimizing speed, distance to obstacle, acceleration, and other factors. |
Question
\(f(x) = \sin 2x, \; 0 < x < \pi\).
Solution
\[\text { Given }: \hspace{0.167em} f\left( x \right) = \sin 2x\]
\[ \Rightarrow f'\left( x \right) = 2 \cos 2x\]
\[\text { For a local maximum or a local minimum, we must have }\]
\[f'\left( x \right) = 0\]
\[ \Rightarrow 2 \cos 2x = 0\]
\[ \Rightarrow \cos 2x = 0\]
\[ \Rightarrow x = \frac{\pi}{4} \text{ or } \frac{3\pi}{4}\]
Since \(f'(x)\) changes from positive to negative as \(x\) increases through \(\frac{\pi}{4}\), \(x = \frac{\pi}{4}\) is the point of local maximum.
The local maximum value of \(f(x)\) at \(x = \frac{\pi}{4}\) is given by \(\sin\left( \frac{\pi}{2} \right) = 1\).
Since \(f'(x)\) changes from negative to positive as \(x\) increases through \(\frac{3\pi}{4}\), \(x = \frac{3\pi}{4}\) is the point of local minimum.
The local minimum value of \(f(x)\) at \(x = \frac{3\pi}{4}\) is given by \(\sin\left( \frac{3\pi}{2} \right) = -1\).
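The sign-change argument for \(f'(x) = 2\cos 2x\) can be checked numerically; a small Python sketch:

```python
import math

# f'(x) = 2 cos(2x); test its sign just before and after each critical point.
f_prime = lambda x: 2 * math.cos(2 * x)
eps = 1e-3

# At x = pi/4, f' goes from positive to negative: local maximum.
assert f_prime(math.pi / 4 - eps) > 0 > f_prime(math.pi / 4 + eps)
# At x = 3*pi/4, f' goes from negative to positive: local minimum.
assert f_prime(3 * math.pi / 4 - eps) < 0 < f_prime(3 * math.pi / 4 + eps)

print(math.sin(2 * (math.pi / 4)))      # local maximum value of f
print(math.sin(2 * (3 * math.pi / 4)))  # local minimum value of f
```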
Abbreviation: DLO

A distributive lattice with operators is a structure $\mathbf{A}=\langle A,\vee,\wedge,f_i\ (i\in I)\rangle$ such that

$\langle A,\vee,\wedge\rangle$ is a distributive lattice

$f_i$ is join-preserving in each argument: $f_i(\ldots,x\vee y,\ldots)=f_i(\ldots,x,\ldots)\vee f_i(\ldots,y,\ldots)$
Let $\mathbf{A}$ and $\mathbf{B}$ be distributive lattices with operators of the same signature. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a distributive lattice homomorphism and preserves all the operators:
$h(f_i(x_0,\ldots,x_{n-1}))=f_i(h(x_0),\ldots,h(x_{n-1}))$
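A toy numerical check of the join-preserving condition, using an illustrative lattice and operator that are not from the text: the divisors of 12 ordered by divisibility form a distributive lattice with join = lcm and meet = gcd, and the unary operator $f(x) = \mathrm{lcm}(x, 2)$ is join-preserving.

```python
from math import gcd

# Divisors of 12 under divisibility: a small distributive lattice.
A = [1, 2, 3, 4, 6, 12]
lcm = lambda a, b: a * b // gcd(a, b)  # join in this lattice

# Candidate operator: f(x) = lcm(x, 2). Check f(x v y) = f(x) v f(y)
# for all pairs, i.e. that f is join-preserving.
f = lambda x: lcm(x, 2)
assert all(f(lcm(x, y)) == lcm(f(x), f(y)) for x in A for y in A)
```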
Example 1: |
$\newcommand\Q{\mathbb Q}$I have a series of questions related to sum of $k$-powers of algebraic numbers (it is actually only one question that I try to weaken/strengthen the conditions). Suppose we know the following $$\sum_{i=1}^n a_i^k \in \Q$$ for algebraic number $a_i$'s for which not all of them are in $\Q$. Then I ask
If $n=k=2$ and $\Q(a_1,a_2)$ is real then is it true that $[\Q(a_1,a_2):\Q]$ is even? Same question as 1. except we assume the general condition $n\geq 2$ and $\Q(a_1,\dots,a_n)$ is real Same question as 1. except we assume any $k\geq 2$ and ask if $[\Q(a_1,a_2):\Q]$ is divisible by $k$ Same question as 2. except we assume any $k\geq 2$ and ask if $[\Q(a_1,\dots, a_n):\Q]$ is divisible by $k$
5-8. We ask the same questions as 1-4. without assuming $\Q(a_1,\dots,a_n)$ is real.
Probably the answers are trivial (for instance, if 1. is already false). I think 5-8 are generally easier than 1-4 (my feeling is that all of 5-8 are false).
Skills to Develop
To learn how to apply formulas for estimating the sizes of the samples that will be needed to construct a confidence interval, for the difference in two population means or proportions, that meets given criteria.
As was pointed out at the beginning of Section 7.4, sampling is typically done with definite objectives in mind. For example, a physician might wish to estimate the difference between the average amount of sleep gotten by patients suffering from a certain condition and the average amount of sleep gotten by healthy adults, at \(90\%\) confidence and to within half an hour. Since sampling costs time, effort, and money, it would be useful to be able to estimate the smallest size samples that are likely to meet these criteria.
Estimating \(\mu _1-\mu _2\) with Independent Samples
Assuming that large samples will be required, the confidence interval formula for estimating the difference \(\mu _1-\mu _2\) between two population means using independent samples is \((\bar{x_1}-\bar{x_2})\pm E\), where

\[E=z_{\alpha /2}\sqrt{\frac{s_{1}^{2}}{n_1}+\frac{s_{2}^{2}}{n_2}}\]
To say that we wish to estimate the mean to within a certain number of units means that we want the margin of error \(E\) to be no larger than that number. The number \(z_{\alpha /2}\) is determined by the desired level of confidence.
The numbers \(s_1\) and \(s_2\) are estimates of the standard deviations \(\sigma _1\) and \(\sigma _2\) of the two populations. In analogy with what we did in Section 7.4 we will assume that we either know or can reasonably approximate \(\sigma _1\) and \(\sigma _2\).
We cannot solve for both \(n_1\) and \(n_2\), so we have to make an assumption about their relative sizes. We will specify that they be equal. With these assumptions we obtain the minimum sample sizes needed by solving the equation displayed just above for \(n_1=n_2\).
Minimum Equal Sample Sizes for Estimating the Difference in the Means of Two Populations Using Independent Samples
The estimated minimum equal sample sizes \(n_1=n_2\) needed to estimate the difference \(\mu _1-\mu _2\) in two population means to within \(E\) units at \(100(1-\alpha )\%\) confidence is
\[n_1=n_2=\frac{(z_{\alpha /2})^2(\sigma _{1}^{2}+\sigma _{2}^{2})}{E^2}\; \; \text{rounded up}\]
In all the examples and exercises the population standard deviations \(\sigma _1\) and \(\sigma _2\) will be given.
Example \(\PageIndex{1}\)
A law firm wishes to estimate the difference in the mean delivery time of documents sent between two of its offices by two different courier companies, to within half an hour and with \(99.5\%\) confidence. From their records it will randomly sample the same number n of documents as delivered by each courier company. Determine how large \(n\) must be if the estimated standard deviations of the delivery times are \(0.75\) hour for one company and \(1.15\) hours for the other.
Solution:
Confidence level \(99.5\%\) means that \(\alpha =1-0.995=0.005\) so \(\alpha /2=0.0025\). From the last line of Figure 7.1.6 we obtain \(z_{0.0025}=2.807\).
To say that the estimate is to be “to within half an hour” means that \(E=0.5\). Thus
\[n=\frac{(z_{\alpha /2})^2(\sigma _{1}^{2}+\sigma _{2}^{2})}{E^2}=\frac{(2.807)^2(0.75^2+1.15^2)}{0.5^2}=59.40953746\]
which we round up to \(60\), since it is impossible to take a fractional observation. The law firm must sample \(60\) document deliveries by each company.
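The computation in this example can be sketched in Python; the values are the ones from the example (\(z_{\alpha/2} = 2.807\) for \(99.5\%\) confidence):

```python
import math

def min_n_two_means(z, sigma1, sigma2, E):
    """n1 = n2 = z^2 (sigma1^2 + sigma2^2) / E^2, rounded up."""
    return math.ceil(z**2 * (sigma1**2 + sigma2**2) / E**2)

# Courier example: sigma1 = 0.75 h, sigma2 = 1.15 h, E = 0.5 h.
n = min_n_two_means(z=2.807, sigma1=0.75, sigma2=1.15, E=0.5)
```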
Estimating \(\mu _1-\mu _2\) with Paired Samples
As we mentioned at the end of Section 9.3, if the sample is large (meaning that \(n\geq 30\)) then in the formula for the confidence interval we may replace \(t_{\alpha /2}\) by \(z_{\alpha /2}\), so that the confidence interval formula becomes \(\bar{d}\pm E\) for
\[E=z_{\alpha /2}\frac{s_d}{\sqrt{n}}\]
The number \(s_d\) is an estimate of the standard deviation \(\sigma _d\) of the population of differences. We must assume that we either know or can reasonably approximate \(\sigma _d\). Thus, assuming that large samples will be required to meet the criteria given, we can solve the displayed equation for \(n\) to obtain an estimate of the number of pairs needed in the sample.
Minimum Sample Size for Estimating the Difference in the Means of Two Populations Using Paired Difference Samples
The estimated minimum number of pairs \(n\) needed to estimate the difference \(\mu_d=\mu _1-\mu _2\) in two population means to within \(E\) units at \(100(1-\alpha )\%\) confidence using paired difference samples is
\[n=\frac{(z_{\alpha /2})^2\sigma _{d}^{2}}{E^2}\; \; \text{rounded up}\]
In all the examples and exercises the population standard deviation of the differences \(\sigma _d\) will be given.
Example \(\PageIndex{2}\)
An automotive tire manufacturer wishes to compare the mean lifetimes of two tread designs under actual driving conditions. They will mount one of each type of tire on \(n\) vehicles (both on the front or both on the back) and measure the difference in remaining tread after \(20,000\) miles of driving. If the standard deviation of the differences is assumed to be \(0.025\) inch, find the minimum sample size needed to estimate the difference in mean tread depth (at \(20,000\) miles of use) to within \(0.01\) inch at \(99.9\%\) confidence.
Solution:
Confidence level \(99.9\%\) means that \(\alpha =1-0.999=0.001\) so \(\alpha /2=0.0005\). From the last line of Figure 7.1.6 we obtain \(z_{0.0005}=3.291\).
To say that the estimate is to be “to within \(0.01\) inch” means that \(E = 0.01\). Thus
\[n=\frac{(z_{\alpha /2})^2\sigma _{d}^{2}}{E^2}=\frac{(3.291)^2(0.025)^2}{(0.01)^2}=67.69175625\]
which we round up to \(68\). The manufacturer must test \(68\) pairs of tires.
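This paired-difference computation, as a quick Python check with the values from the example (\(z_{0.0005} = 3.291\), \(\sigma_d = 0.025\) inch, \(E = 0.01\) inch):

```python
import math

# n = z^2 * sigma_d^2 / E^2, rounded up.
n = math.ceil(3.291**2 * 0.025**2 / 0.01**2)
```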
Estimating \(p_1-p_2\)
The confidence interval formula for estimating the difference \(p_1-p_2\) between two population proportions is \((\hat{p_1}-\hat{p_2})\pm E\), where
\[E=z_{\alpha /2}\sqrt{\frac{\hat{p_1}(1-\hat{p_1})}{n_1}+\frac{\hat{p_2}(1-\hat{p_2})}{n_2}}\]
To say that we wish to estimate the mean to within a certain number of units means that we want the margin of error \(E\) to be no larger than that number. The number \(z_{\alpha /2}\) is determined by the desired level of confidence.
We cannot solve for both \(n_1\) and \(n_2\), so we have to make an assumption about their relative sizes. We will specify that they be equal. With these assumptions we obtain the minimum sample sizes needed by solving the displayed equation for \(n_1=n_2\).
Minimum Equal Sample Sizes for Estimating the Difference in Two Population Proportions
The estimated minimum equal sample sizes \(n_1=n_2\) needed to estimate the difference \(p_1-p_2\) in two population proportions to within \(E\) percentage points at \(100(1-\alpha )\%\) confidence is
\[n_1=n_2=\frac{(z_{\alpha /2})^2(\hat{p_1}(1-\hat{p_1})+\hat{p_2}(1-\hat{p_2}))}{E^2}\; \; \text{rounded up}\]
Here we face the same dilemma that we encountered in the case of a single population proportion: the formula for estimating how large a sample to take contains the numbers \(\hat{p_1}\) and \(\hat{p_2}\), which we know only after we have taken the sample. There are two ways out of this dilemma. Typically the researcher will have some idea as to the values of the population proportions \(p_1\) and \(p_2\), hence of what the sample proportions \(\hat{p_1}\) and \(\hat{p_2}\) are likely to be. If so, those estimates can be used in the formula.
The second approach to resolving the dilemma is simply to replace each of \(\hat{p_1}\) and \(\hat{p_2}\) in the formula by \(0.5\). As in the one-population case, this is the most conservative estimate, since it gives the largest possible estimate of \(n\). If we have an estimate of only one of \(p_1\) and \(p_2\) we can use that estimate for it, and use the conservative estimate \(0.5\) for the other.
Example \(\PageIndex{3}\)
Find the minimum equal sample sizes necessary to construct a \(98\%\) confidence interval for the difference \(p_1-p_2\) with a margin of error \(E=0.05\),
assuming that no prior knowledge about \(p_1\) or \(p_2\) is available; and assuming that prior studies suggest that \(p_1\approx 0.2\) and \(p_2\approx 0.3\). Solution:
Confidence level \(98\%\) means that \(\alpha =1-0.98=0.02\) so \(\alpha /2=0.01\). From the last line of Figure 7.1.6 we obtain \(z_{0.01}=2.326\).
Since there is no prior knowledge of \(p_1\) or \(p_2\) we make the most conservative estimate that \(\hat{p_1}=0.5\) and \(\hat{p_2}=0.5\). Then
\[\begin{align*} n_1=n_2 &= \frac{(z_{\alpha /2})^2(\hat{p_1}(1-\hat{p_1})+\hat{p_2}(1-\hat{p_2}))}{E^2}\\ &= \frac{(2.326)^2((0.5)(0.5)+(0.5)(0.5))}{0.05^2}\\ &= 1082.0552\end{align*}\]

which we round up to \(1,083\). We must take a sample of size \(1,083\) from each population.
Since \(p_1\approx 0.2\) we estimate \(\hat{p_1}\) by \(0.2\), and since \(p_2\approx 0.3\) we estimate \(\hat{p_2}\) by \(0.3\). Thus we obtain
\[\begin{align*} n_1=n_2 &= \frac{(z_{\alpha /2})^2(\hat{p_1}(1-\hat{p_1})+\hat{p_2}(1-\hat{p_2}))}{E^2}\\ &= \frac{(2.326)^2((0.2)(0.8)+(0.3)(0.7))}{0.05^2}\\ &= 800.720848\end{align*}\]
which we round up to \(801\). We must take a sample of size \(801\) from each population.
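Both parts of the example can be sketched as a single Python function, with \(z_{0.01} = 2.326\) and \(E = 0.05\):

```python
import math

def min_n_two_props(z, p1, p2, E):
    """n1 = n2 = z^2 (p1(1-p1) + p2(1-p2)) / E^2, rounded up."""
    return math.ceil(z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / E**2)

# Part 1: no prior knowledge, so use the conservative p-hat = 0.5.
n_conservative = min_n_two_props(2.326, 0.5, 0.5, 0.05)
# Part 2: prior studies suggest p1 ~ 0.2 and p2 ~ 0.3.
n_informed = min_n_two_props(2.326, 0.2, 0.3, 0.05)
```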
Key Takeaway
If the population standard deviations \(\sigma _1\) and \(\sigma _2\) are known or can be estimated, then the minimum equal sizes of independent samples needed to obtain a confidence interval for the difference \(\mu _1-\mu _2\) in two population means with a given maximum error of the estimate \(E\) and a given level of confidence can be estimated.
If the standard deviation \(\sigma _d\) of the population of differences in pairs drawn from two populations is known or can be estimated, then the minimum number of sample pairs needed under paired difference sampling to obtain a confidence interval for the difference \(\mu_d=\mu _1-\mu _2\) in two population means with a given maximum error of the estimate \(E\) and a given level of confidence can be estimated.
The minimum equal sample sizes needed to obtain a confidence interval for the difference in two population proportions with a given maximum error of the estimate and a given level of confidence can always be estimated. If there is prior knowledge of the population proportions \(p_1\) and \(p_2\) then the estimate can be sharpened.

Contributor
Anonymous
I have been going through Certifiable quantum dice: or, true random number generation secure against quantum adversaries by Umesh Vazirani and Thomas Vidick. They have used entangled particles as shared resources for a quantum nonlocal game aka CHSH game. Because of the acronym CHSH I assumed that the game was proposed by Clauser et al. in their famous paper, Proposed Experiment to Test Local Hidden-Variable Theories. But it looks they proposed only the famous inequality not the game. So, I am trying to trace the evolution of intuitions, ideas and results of the researchers which link the CHSH game with CHSH inequality.
The usual form of CHSH inequality is as follows.
$$-2 \le S \le 2$$
where, $$S = E(a, b) - E(a, b') + E(a', b) + E(a', b')$$
Here $a$ and $a'$ are detector settings on side $A$, $b$ and $b'$ on side $B$, the four combinations being tested in separate subexperiments. The terms $E(a, b)$ etc. are the quantum correlations of the particle pairs, where the quantum correlation is defined to be the expectation value of the product of the "outcomes" of the experiment, i.e. the statistical average of $A(a)·B(b)$, where $A$ and $B$ are the separate outcomes, using the coding $+1$ for the '$+$' channel and $−1$ for the '$−$' channel.
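The quantum correlations described here can be computed directly; a small NumPy sketch that evaluates \(S\) for the Bell state \((|00\rangle + |11\rangle)/\sqrt{2}\), with observables \(A(\theta) = \cos\theta\, Z + \sin\theta\, X\). The specific angles are an assumption on my part, chosen so that this sign convention for \(S\) reaches Tsirelson's bound \(2\sqrt{2}\):

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
phi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)  # Bell state

def A(theta):
    """Spin observable at angle theta in the Z-X plane (outcomes +/-1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

def E(theta_a, theta_b):
    """Correlation <phi| A(a) (x) A(b) |phi>; here it equals cos(a - b)."""
    return phi @ np.kron(A(theta_a), A(theta_b)) @ phi

a, a2 = 0.0, np.pi / 2          # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings (assumed angles)
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
```

The computed \(S\) exceeds the classical bound of 2, illustrating the violation the game is built around.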
In the original paper the inequality is Equation 1b which is as follows.
$$|P(\alpha) - P(\alpha + \beta)| \le 2 - P(\gamma) -P(\beta + \gamma)$$
As mentioned on page 883 of the paper greatest violation occurs when $\alpha = 22.5^\circ$ , $\beta = 45^\circ$, and $\gamma = 157.5^\circ$.
But the paper doesn't describe a nonlocal game. Even the later survey, Bell's theorem. Experimental tests and implications, by Clauser et al. also doesn't describe any nonlocal game. So, my understanding is that the CHSH inequality is just a component of the CHSH game which was later defined by someone else.
After googling and tracing through the bibliographies of the papers for a while I found the paper, Consequences and limits of nonlocal strategies, by Cleve et al. In section 3.1 they have defined the CHSH game formally. I assume this is the first formal description of the game.
The CHSH game is just another nonlocal game with two provers. In a CHSH game the contribution of the CHSH inequality is in the strategy of the game. In the game, the players, Alice and Bob, employ strategies when answering the referee. The strategies are quantum. The possible answers are boolean values. The probability distribution over the answers is the distribution obtained from the measurements they perform on a shared Bell state. This probability distribution is constrained by the CHSH inequality. Its optimality is given by Tsirelson's bound, stated in Quantum generalizations of Bell's inequality (4th condition of Theorem 1) by Tsirelson and Quantum and quasi-classical analogs of Bell inequalities by Khalfin and Tsirelson (Equation 2).
My question is :
Am I able to trace the evolution of the ideas, intuitions and results between CHSH inequality and CHSH game correctly? Did I miss anything? |
I have the following $d$-dimensional integral:
$$\displaystyle \int_{\mathbb{R^d}} |x|^{-d/2}|b-x|^{-d/2}J_{d/2}(\rho |x|)J_{d/2}(\rho |b-x|)\mathrm{d}x,$$
where $|\cdot|$ denotes the Euclidean norm on $\mathbb{R}^d$, and $x, b \in \mathbb{R}^d$, $\rho > 0$ does not depend on $x$, and $J_{d/2}$ denotes the Bessel function of the first kind. My supervisor has suggested the following substitution, which appears to be a generalisation of polar co-ordinates, but I'm not entirely sure it makes sense. He has suggested letting
$$\displaystyle r = |(0,x_2,x_3, ... ,x_d)|, \ \ |x| = \sqrt{x_1^2 + r^2},$$ $$\displaystyle \mathrm{d}x = Cr^{d-2}\mathrm{d}x_1 \mathrm{d}r\mathrm{d}\varphi,$$
but I don't understand how this works. This does not seem to be hyperspherical co-ordinates, but I'm mainly confused how that expression for $\mathrm{d}x$ was derived. I also don't know where this $\varphi$ comes from, and what $\mathrm{d}\varphi$ actually means here. Can anyone tell me how this is derived? I have a strong suspicion that there is something left out here, and would greatly appreciate it if someone could tell me what it could be. What is especially confusing is that this seems to jump from an integral involving $d$ variables to one involving just three, unless $\varphi \in \mathbb{R}^{d-2}.$
For the integral itself, I'm mainly interested in proving its convergence, and estimating its value. But I'd like to understand this co-ordinate transformation first. |
(a) Find all $3 \times 3$ matrices which are in reduced row echelon form and have rank 1.
First we look at the rank 1 case. For a $3 \times 3$ matrix in reduced row echelon form to have rank 1, it must have 2 rows which are all 0s. The non-zero row must be the first row, and it must have a leading 1.
These conditions imply that the matrix must be of one of the following forms:\[\begin{bmatrix} 1 & a & b \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} , \quad \begin{bmatrix} 0 & 1 & c \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} , \mbox{ or } \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.\]
(b) Find all such matrices with rank 2.
For a rank 2 $3 \times 3$ matrix in reduced row echelon form, there must be one row, the bottom one, which has only 0s.
Thus we need two leading 1s in distinct columns, and every other term in the same column with a leading 1 must be 0. The possibilities are:\[\begin{bmatrix} 1 & 0 & a \\ 0 & 1 & b \\ 0 & 0 & 0 \end{bmatrix} , \quad \begin{bmatrix} 1 & a & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} , \mbox{ or } \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.\]
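A quick numerical spot-check, using NumPy with arbitrary values for the free entries, that the rank-2 forms listed above indeed have rank 2:

```python
import numpy as np

# a and b are the free entries; 2.0 and -3.0 are arbitrary choices.
a, b = 2.0, -3.0
forms = [
    np.array([[1, 0, a], [0, 1, b], [0, 0, 0]]),
    np.array([[1, a, 0], [0, 0, 1], [0, 0, 0]]),
    np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]]),
]
assert all(np.linalg.matrix_rank(M) == 2 for M in forms)
```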
I am looking at constructible points in abstract algebra, particularly in $\mathbb{C}$. Alongside a proof of a theorem, I came across this expression which I cannot work out how it's been derived. It just comes out as follows, but some notations;
$L(z_1,z_2)$ represents the straight line that connects points $z_1,z_2 \in \mathbb{C}$. $\mathbb{Q}^{Py}$ represents the Pyhtagorean Closure of $\mathbb{Q}$.
The theorem is stated as follows
A point $z \in \mathbb{C}$ is constructible if and only if $z \in \mathbb{Q}^{Py}$.
Omitting the bits I understood(The proof uses induction), the specific part I don't understand is,
Say $z$ lies on a "line meeting another line", namely $z \in L(z_1,z_2) \cap L(z_3,z_4)$ where the lines are distinct. Then there exist $a,b \in \mathbb{R}$ such that
$$z=az_1+(1-a)z_2$$
$$z=bz_3+(1-b)z_4$$
Well, in all honesty, it looks familiar: points on a line that goes through $2$ distinct points. I think the above expressions come from rather elementary facts and ideas, but as much as I am embarrassed to say this, I can't see how they are derived. Considering the complex plane as $\mathbb{R}^2$ and taking $x,y$ coordinates, I thought I could find $a$ or $b$ with respect to $z_j=x_j+iy_j$, but I cannot.
Would someone show me how that part is derived? |
Is the impulse response of a linear time-variant system unique? I know that it's unique for the LTI case. Thanks in advance for the help.
With the input-output relationship of a linear time-varying system given by
$$y(t)=\int_{-\infty}^{\infty}h(t,\tau)x(t-\tau)d\tau\tag{1}$$
it is straightforward to show that the kernel $h(t,\tau)$ (also referred to as
impulse response or input delay spread function), must be unique.
Uniqueness means that there is no other kernel $g(t,\tau)\neq h(t,\tau)$ such that the output $y(t)$ is the same for all possible input signals $x(t)$.
Take the input signal $x(t)=\delta(t-t_0)$. The corresponding output signal is $y(t)=h(t,t-t_0)$, which, by definition, is different from the output signal $y(t)=g(t,t-t_0)$ obtained with the kernel $g(t,\tau)$. |
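A discrete-time analogue of this argument can be simulated: feed a shifted unit impulse \(x[n] = \delta[n - n_0]\) through \(y[n] = \sum_k h(n,k)\, x[n-k]\) and check that the output reads off the kernel values \(h(n, n - n_0)\). The kernel below is an arbitrary illustrative choice:

```python
import numpy as np

N = 32
# An arbitrary time-varying kernel h(n, k): decaying in both arguments.
h = lambda n, k: np.exp(-0.1 * n) * (0.5 ** k)

def ltv_output(x):
    """y[n] = sum_k h(n, k) x[n - k] over valid indices."""
    return np.array([
        sum(h(n, k) * x[n - k] for k in range(N) if 0 <= n - k < N)
        for n in range(N)
    ])

n0 = 5
x = np.zeros(N)
x[n0] = 1.0  # shifted unit impulse
y = ltv_output(x)

# The output equals h(n, n - n0) for n >= n0 and 0 before the impulse.
assert all(
    abs(y[n] - (h(n, n - n0) if n >= n0 else 0.0)) < 1e-12 for n in range(N)
)
```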
There does not always exist a matching with your property in a bipartite graph.
Consider for example the graph $G = (V, E)$ where
$V = \{a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2, d_1, d_2, x\}$ and $E = \{a_1, a_2\} \times \{c_1, c_2, x\} \cup \{b_1, b_2\} \times \{d_1, d_2, x\} \cup \{(a_3, c_1), (a_3, d_2), (a_3, x), (b_3, d_1), (b_3, c_2), (b_3, x)\}$
This graph is bipartite (with $V_1 = \{a_1, a_2, a_3, b_1, b_2, b_3\}$ as one part and $V_2 = \{c_1, c_2, d_1, d_2, x\}$ as the other). Every vertex in this graph except $x$ has degree $3$. Vertex $x$ has degree $6$.
Suppose, for the sake of contradiction, that a matching $M$ with your property exists. Consider any two adjacent vertices $v_1$ and $v_2$ that have the same degree. It turns out that the conditions on $M$ imply that both of these vertices have the same matched-or-not status (i.e. either both or neither of the vertices must be matched in $M$). This can be proved very simply by contradiction: suppose one vertex (wlog $v_1$) is matched and the other is not; then since there is an edge between them, the given property on $M$ tells us that $deg(v_1) > deg(v_2)$, which is a contradiction.
Thus, the matched-or-not status of adjacent vertices of the same degree is the same. Then if a set of vertices has the property that every vertex has the same degree and the subgraph induced by these vertices is connected, we can apply this rule multiple times to conclude that all vertices in the set must have the same matched-or-not status.

In particular, in $G$ the matched-or-not status of every vertex in $V \setminus \{x\}$ must be the same, since these vertices all have degree $3$ and are connected. In other words, either every vertex in $V \setminus\{x\}$ is matched in $M$, or no such vertex is matched in $M$.

If the matched-or-not status for these vertices is "matched" then every vertex in the $V_1$ part of the bipartition is matched in $M$; this is impossible since there are more vertices in $V_1$ than in the other part $V_2$. If the matched-or-not status for these vertices is "not matched" then the only vertex in $G$ that can be matched in $M$ is $x$; since a non-empty matching (as $M$ must be) matches at least two vertices, we see that this case is also impossible.
By contradiction, a matching $M$ with the property in question does not exist in $G$. |
Are there any differences between the study of Calculus done by Newton and by Leibniz. If so please mention point by point.
Newton's notation, Leibniz's notation and Lagrange's notation are all in use today to some extent they are respectively:
$$\dot{f} = \frac{df}{dt}=f'(t)$$ $$\ddot{f} = \frac{d^2f}{dt^2}=f''(t)$$
You can find more notation examples on Wikipedia.
The standard integral notation ($\displaystyle\int_0^\infty f\, dt$) was developed by Leibniz as well. Newton did not have a standard notation for integration.
I have read in "The Information" by James Gleick the following: according to Babbage, who eventually took the Lucasian Professorship at Cambridge which Newton held, Newton's notation crippled mathematical development. As an undergraduate Babbage worked to institute Leibniz's notation, as it is used today, at Cambridge, despite the distaste the university still had because of the Newton/Leibniz conflict. This notation is a lot more useful than Newton's in most cases. It does, however, imply that a derivative can be treated as a simple fraction, which is incorrect.
You should definitely take a look at the second chapter of Arnold's
Huygens & Barrow, Newton & Hooke. The late Prof. Arnold summarized therein the difference between Newton's approach to mathematical analysis and Leibniz's as follows:
Newton's analysis was the application of power series to the study of motion... For Leibniz, ... analysis was a more formal algebraic study of differential rings.
Arnold's overview of Leibniz's contributions to the theme is spiced up with a non-negligible number of thought-provoking remarks:
In the work of other geometers--e.g., Huygens and Barrow--many objects connected with a given curve also appeared [for example: abscissa, ordinate, tangent, the slope of the tangent, the area of a curvilinear figure, the subtangent, the normal, the subnormal, and so on]... Leibniz, with his individual tendency to universality [he considered necessary to discover the so-called characteristic, something universal, that unites everything in science and contains all answers to all questions], decided that all these quantities should be considered in the same way. For this he introduced a single term for any of the quantities connected with a given curve and fulfilling some function in relation to the given curve--the term
function...
Thus, according to Leibniz many functions were associated with a curve. Newton had another term--fluent--which denoted a flowing quantity, a variable quantity, and hence associated with motion. On the basis of Pascal's studies and his own arguments Leibniz quite rapidly developed formal analysis in the form in which we now know it. That is, in a form specially suitable to teach analysis by people who do not understand it to people who will never understand it... Leibniz quite rapidly established the formal rules for operating with infinitesimals, whose meaning is obscure.
Leibniz's method was as follows. He assumed that the whole of mathematics, like the whole of science, is found inside us, and by means of philosophy alone we can hit upon everything if we attentively take heed of processes that occur inside our mind. By this method he discovered various laws and sometimes very successfully. For example, he discovered that $d(x+y) = dx+dy$, and this remarkable discovery immediately forced him to think about what the differential of a product is. In accordance with the universality of his thoughts he rapidly came to the conclusion that differentiation [had to be] a ring homomorphism, that is, that the formula $d(xy) = dx dy$ must hold. But after some time he verified that this leads to some unpleasant consequences, and found the correct formula $d(xy) = xdy + y dx$, which is now called Leibniz's rule. None of the inductively thinking mathematicians--neither Barrow nor Newton, who as a consequence was called an empirical ass in the Marxist literature--could [have ever gotten] Leibniz's original hypothesis into his head, since to such a person it was quite obvious what the differential of a product is, from a simple drawing...
Beyond the issue of notation, Newton experimented with a number of foundational approaches. One of the earliest ones involved infinitesimals, whereas later he shied away from them because of philosophical resistance of his contemporaries, often stemming from sensitive religious considerations closely related to inter-denominational quarrels. Leibniz also was aware of the quarrels, but he used infinitesimals and differentials systematically in developing the calculus, and for this reason was more successful in attracting followers and stimulating research--or what he called the
Ars Inveniendi.
From a practical point of view, the notation was vastly different.
A particular sore point for me is that the Leibniz notation lets you incorrectly work with derivatives as though they were a mathematical fraction. Unfortunately this 'works out' a lot of the time, so it's still used, even in college courses, today.
I don't think there is anything wrong with shortcuts, up to the point where they interfere with understanding. In this case, I do believe it creates a misunderstanding of the subject matter. This alone, I think, puts Newton's notation above Leibniz's.
From Loemker's translation,
"Leibniz's reasoning, though it strives for a broader application of the law of inverse squares than to gravity alone, is less general than Newton's (Principia, Book I, Propositions I, 2, 14), since it presupposes harmonic motion."
Leibniz, Gottfried Wilhelm
Philosophical Papers and Letters : A Selection / Translated and Edited, with an Introduction by Leroy E. Loemker. 2d ed. Dordrecht : D. Reidel, 1970. p.362 |
$$\large \int_0^\infty x^2 e^{-x^2} \, dx$$
If the value of the integral above is equal to $\dfrac{\pi^A}{B}$, where $A$ and $B$ are rational numbers, find $A\times B$.
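The stated value can be sanity-checked numerically. A minimal sketch using only the Python standard library (the closed form $\sqrt{\pi}/4$ follows from $\int_0^\infty e^{-x^2}\,dx = \sqrt{\pi}/2$ via integration by parts):

```python
import math

# Integrand of the problem above.
def f(x):
    return x * x * math.exp(-x * x)

# Composite trapezoidal rule on [0, 10]; the tail beyond 10 is
# negligible since the integrand decays like e^(-x^2).
N = 100_000
a, b = 0.0, 10.0
h = (b - a) / N
approx = h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, N)) + 0.5 * f(b))

# Known closed form: sqrt(pi)/4, i.e. pi^A / B with A = 1/2 and B = 4,
# so A * B = 2.
exact = math.sqrt(math.pi) / 4
```

The numerical estimate agrees with $\sqrt{\pi}/4 \approx 0.44311$ to well below the truncation error.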
|
ISSN: 1547-5816, eISSN: 1553-166X
Journal of Industrial & Management Optimization
October 2011, Volume 7, Issue 4
Abstract:
We present explicit optimality conditions for a nonsmooth functional defined over the (properly or weakly) Pareto set associated with a multi-objective linear-quadratic control problem. This problem is very difficult even in a finite-dimensional setting, i.e. when, instead of a control problem, we deal with a mathematical programming problem. Amongst various applications, our problem may be considered as a response for a decision maker who has to choose a solution over the solution set of the grand coalition $p$-player cooperative differential game.
Abstract:
Recently, Krishna Kumar and Pavai [10] obtained the transient distribution of the queue length for an M/M/1 queueing system with catastrophes and server failures, using a direct technique. In this paper, we consider the model of Krishna Kumar and Pavai [10] with a balking feature. Based on the generating function technique and a direct approach, transient and steady-state analyses of the queue length are carried out; the model of Krishna Kumar and Pavai [10] can be deduced from the new model. Moreover, some other models are shown as special cases of our solution.
Abstract:
We study a scheduling problem that belongs to the yard operations component of the railroad planning problems, namely the hump sequencing problem. The scheduling problem is characterized as a single-machine problem with stepwise tardiness cost objectives. This is a new scheduling criterion which is also relevant in the context of traditional machine scheduling problems. We produce complexity results that characterize some cases of the problem as pseudo-polynomially solvable. For the difficult-to-solve cases of the problem, we develop mathematical programming formulations, and propose heuristic algorithms. We test the formulations and heuristic algorithms on randomly generated single-machine scheduling problems and real-life data sets for the hump sequencing problem. Our experiments show promising results for both sets of problems.
Abstract:
In this paper, we discuss a nonstandard renewal risk model, where the price process of the investment portfolio is modelled as a geometric Lévy process, the claim sizes and premium sizes form sequences of identically distributed and upper-tail independent random variables, respectively, the claim size and its corresponding inter-claim time satisfy a certain dependence structure described via a conditional tail probability of the claim size given the inter-claim time before the claim occurs, and there is a similar dependence structure between the premium size and the inter-arrival time before the premium is paid. When the claim-size distribution belongs to the extended-regular-varying class, we obtain a uniform tail asymptotics for stochastically discounted aggregate claims. Furthermore, assuming that the tail of the premium-size distribution is lighter than that of the claim-size distribution, the uniform estimates for the finite- and infinite-time ruin probabilities are presented respectively.
Abstract:
In this paper, we consider an optimization problem arising in vehicle fleet management. The problem is to construct a heterogeneous vehicle fleet in such a way that cost is minimized subject to a constraint on the overall fleet size. The cost function incorporates fixed and variable costs associated with the fleet, as well as hiring costs that are incurred when vehicle requirements exceed fleet capacity. We first consider the simple case when there is only one type of vehicle. We show that in this case the cost function is convex, and thus the problem can be solved efficiently using the well-known golden section method. We then devise an algorithm, based on dynamic programming and the golden section method, for solving the general problem in which there are multiple vehicle types. We conclude the paper with some simulation results.
Abstract:
In this paper, we present a full-Newton step primal-dual interior-point algorithm for solving the symmetric cone convex quadratic optimization problem, where the objective function is a convex quadratic function and the feasible set is the intersection of an affine subspace and a symmetric cone lying in a Euclidean Jordan algebra. The search directions of the algorithm are obtained from a modification of the NT-search direction in terms of the quadratic representation in the Euclidean Jordan algebra. We prove that the algorithm has a quadratic convergence result. Furthermore, we present the complexity analysis for the algorithm and obtain the complexity bound $\left\lceil 2\sqrt{r}\log\frac{\mu^0 r}{\epsilon}\right\rceil$, where $r$ is the rank of the Euclidean Jordan algebra in which the symmetric cone lies.
Abstract:
We explore the basic conceptual framework of the service supply chain, a new research concern in supply chain management, and develop an integrated approach to optimize the selection of service vendors. Three insights arise on how service vendors can be selected: (1) exploring multiple criteria decision-making problems with incomplete weight information; (2) identifying and eliminating criteria by the information entropy method; (3) analyzing and calculating the final selection of qualified service vendors by combining the fuzzy analytic hierarchy process with the multi-objective linear programming approach. Finally, an implemented experiment highlights the effectiveness of the integrated approach.
Abstract:
In this paper, a finite element method for a parabolic optimal control problem is introduced and analyzed. For the discretization of a quadratic convex optimal control problem, the state and co-state are approximated by piecewise linear functions and the control is approximated by piecewise constant functions. As a result, it is proved in this paper that the difference between a suitable interpolation of the control and its finite element approximation has superconvergence property in order $O(h^2)$. Finally, two numerical examples are presented to confirm our theoretical results.
Abstract:
This paper presents a complementary technique for the empirical analysis of financial ratios and bankruptcy risk using financial ratios. Within this new framework, we propose the use of a new measure of risk, the Dynamic Risk Space (DRS) measure. We provide evidence of the extent to which changes in values for this index are associated with changes in each axis's values and how this may alter our economic interpretation of changes in patterns and directions. In addition, this model tends to be generally useful for predicting financial distress and bankruptcy. This method would be a general methodological guideline associated with financial data, solving some methodological problems concerning financial ratios such as non-proportionality, non-asymmetry and non-scaled. To test the procedure, Multiple Discriminant Analysis (MDA), Logistic Analysis (LA) and Genetic Programming (GP) are employed to compare results by common and modified ratios for bankruptcy prediction. Classification methods outperformed using the DRS approach.
Abstract:
A variational problem involving two variables, the state and the control variables, is reduced to another variational problem in which the objective has no control variable, but the constrained identity has one. We then establish that the two problems are equivalent with the same optimal (state) solution under some conditions.
Abstract:
In this paper, a probability-one homotopy method for solving mixed complementarity problems is proposed. The homotopy equation is constructed by using the Robinson's normal equation of mixed complementarity problem and a $C^2$-smooth approximation of projection function. Under the condition that the mixed complementarity problem has no solution at infinity, which is a weaker condition than several well-known ones, existence and convergence of a smooth homotopy path from almost any starting point in $\mathbb{R}^n$ are proven. The homotopy method is implemented in Matlab and numerical results on the MCPLIB test collection are given.
Abstract:
In this paper, by using a local linking theorem, we obtain the existence of multiple solutions for a class of semilinear elliptic variational inclusion problems at non-resonance.
Abstract:
In this paper, we present a new approach to the duality of linear programming. We extend the boundedness to the so called inclusiveness, and show that inclusiveness and feasibility are a pair of coexisting and mutually dual properties in linear programming: one of them is possessed by a primal problem if and only if the other is possessed by the dual problem. This duality relation is consistent with the symmetry between the primal and dual problems and leads to a duality result that is considered a completion of the classical strong duality theorem. From this result, complete solvability information of the primal (or dual) problem can be derived solely from dual (or primal) information. This is demonstrated by applying the new duality results to a recent linear programming method.
Abstract:
Recently, Han (Han D, Inexact operator splitting methods with self-adaptive strategy for variational inequality problems,
Journal of Optimization Theory and Applications 132, 227-243 (2007)) proposed an inexact operator splitting method for solving variational inequality problems. It has an advantage over the classical Douglas-Peaceman-Rachford-Varga operator splitting methods (DPRV methods) and some of their variants, since it adopts a very flexible approximate rule in solving the subproblem at each iteration. However, its convergence is established under the somewhat stringent condition that the underlying mapping $F$ is strongly monotone. In this paper, we mainly establish the global convergence of the method under the weaker condition that the underlying mapping $F$ is monotone, which extends the fields of application of the method. Some numerical results are also presented to illustrate the method.
Abstract:
In this paper, we study a mixed integer constrained quadratic programming problem. This problem is NP-Hard. By reformulating the problem to a box constrained quadratic programming and solving the reformulated problem, we can obtain a global optimal solution of a sub-class of the original problem. The reformulated problem may not be convex and may not be solvable in polynomial time. Then we propose a solvability condition for the reformulated problem, and discuss methods to construct a solvable reformulation for the original problem. The reformulation methods identify a solvable subclass of the mixed integer constrained quadratic programming problem.
Abstract:
A trust-region filter-SQP method for mathematical programs with linear complementarity constraints (MPLCCs) is presented. The method is similar to that proposed by Liu, Perakis and Sun [Computational Optimization and Applications, 34, 5-33, 2006] but it solves the trust-region quadratic programming subproblems at each iteration and uses the filter technique to promote the global convergence. As a result, the method here is more robust since it admits the use of Lagrangian Hessian information and its performance is not affected by any penalty parameter. The preliminary numerical results on test problems generated by the QPECgen generator show that the presented method is effective.
|
For some positive integer constants $n, k$ and $t$, I want to find the values of $n_1, \ldots, n_t$, all positive integers, that maximize the following sum:
$$ \sum_{i = 1}^t (n_i)^k $$
such that $n_i \geq 1$ for each $i$ and $\sum_{i = 1}^t n_i = n$. So, what's the best way to pick the $n_i$'s?
It feels like the right choice is to let $n_1 = n - t + 1$ and $n_2 = n_3 = \ldots = n_t = 1$. But my attempts to prove this go through a lot of tedious steps, though this seems simple enough. So, does anyone have a proof for this?
EDIT :
In the event someone has the same question, here's the proof, following answerer's advice.
Let's try induction on $t$ (assuming $t < n$). I want to show that $\sum_{i = 1}^t (n_i)^k \leq (n - t + 1)^k + t - 1$. True for $t = 1$, base case covered. Now,
$$ \sum_{i = 1}^t (n_i)^k = \sum_{i = 1}^{t - 1} (n_i)^k + n_t^k \leq (n - n_t - t + 2)^k + t - 2 + n_t^k $$
by induction. Letting $n_0 = n - n_t -t +2$, we get another sum of the same type with $t = 2$:
$$ n_0^k + n_t^k + t - 2 \leq (n_0 + n_t - 2 + 1)^k + (2 - 1) + t - 2 \\ = (n - n_t - t + 2 + n_t - 2 + 1)^k + t - 1\\ = (n - t + 1)^k + t - 1 $$ |
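The claim proved above can also be verified by brute force for small parameters. A quick sketch (`power_sum_max` is a hypothetical helper name, not from the question):

```python
# Maximum of sum(n_i^k) over positive integers n_1 + ... + n_t = n,
# found by exhaustive recursion over the value of the first part.
def power_sum_max(n, t, k):
    if t == 1:
        return n ** k
    # choosing first part i leaves n - i for the remaining t - 1 parts
    return max(i ** k + power_sum_max(n - i, t - 1, k)
               for i in range(1, n - t + 2))

# The conjectured maximizer (n - t + 1, 1, ..., 1) gives (n-t+1)^k + (t-1).
for n, t, k in [(10, 3, 2), (12, 4, 3), (9, 2, 5), (7, 7, 4)]:
    assert power_sum_max(n, t, k) == (n - t + 1) ** k + (t - 1)
```

For instance, with $n=10$, $t=3$, $k=2$ the maximum is $8^2 + 1 + 1 = 66$, attained at $(8,1,1)$.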
I think I've figured this out. The point is that, the rigorous meaning one can draw from the formal covariance of $J^\mu$ is that the momentum-space coefficient functions of $J^\mu$ (i.e. the functions in front of monomials of $a_p$ and $a^\dagger_p$) transform covariantly under the change of variable $p\to \Lambda p$. The covariance of the coefficient functions is unaffected by normal ordering, and is sufficient to give rise to the covariance of $:J^\mu:$. The rest of this answer will be an elaboration of the first paragraph.
Let me first clarify the notations used and the meaning of the formal covariance of the ill-defined current $J^\mu$. I'm going to ignore the spin degrees of freedom in this discussion, but one should see the generalization to include spin only involves a straightforward (but perhaps cumbersome) change of notations. I'm also ignoring the spacetime dependence, that is to say I'm only considering the covariance of $J^\mu(0)$, and the generalization to $J^\mu(x)$ is straightforward and easy.
In the context of my question, $U(\Lambda)$ is defined as such that
$$U(\Lambda) a_{p} U^{-1}(\Lambda)=\sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}.$$
The covariance of $J^\mu$ must be understood in a very formal and specific sense, the sense in which the covariance is formally proved. For example, in the case of a fermionic bilinear:
$$U(\Lambda)J^{\mu}U(\Lambda)^{-1}=U\bar{\psi}\gamma^{\mu}\psi U^{-1}\\ =U\bar{\psi}_iU^{-1}(\gamma^{\mu})_{ij}U \psi_j U^{-1}=\bar{\psi}D(\Lambda)\gamma^{\mu}D(\Lambda)^{-1}\psi= \Lambda^{\mu}_{\ \ \nu}\bar{\psi}\gamma^{\nu}\psi, $$
where $D(\Lambda)$ is the spinor representation of Lorentz group, typically constructed via Clifford algebra. Note in this formal proof, what's important is that, under the change $a_{p}\to \sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}$ (ignoring spin indices of course) the elementary field transforms as $\psi \to D(\Lambda)\psi$. In the proof, no manipulation of operator ordering and commutation relations ever occurs: all we do is to do a change of integration variable, and let the algebraic properties of the coefficient functions take care of the rest. In fact, we'd better not mess with the operator ordering, as it can easily spoil the formal covariance (example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d}p E_{p}(a_p^\dagger a_p+\delta(0))$, see my longest comment under drake's answer).
To explain what's going on in more details without getting tangled with notational nuisances, let me remind you again I'll omit the spin degrees of freedom, but it should be transparent enough by the end of the argument that it's readily generalizable to spinor case, since all that matters is that we know the coefficient functions(even with spin indices) transform covariantly. The mathematical gist is, after multiplying the elementary fields and grouping c/a operators (during the grouping no operator ordering procedure should be performed at all, e.g. $a^\dagger(p_1)a(p_2)$ and $a(p_2)a^\dagger(p_1)$ should be treated as two independent terms), a typical monomial term in $J^\mu(0)$ has the form
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\mu(\{p_i\}),$$
where $M$ is a monomial of c/a operators not necessarily normally ordered, but has an ordering directly from the multiplication of elementary fields.
The formal covariance of $J^\mu$ means
$$\Lambda^\mu_{\ \ \nu}\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\nu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(\Lambda p_i), a(\Lambda p_i)\})f^\mu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}q_i\right)\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right) M(\{a^\dagger(q_i), a(q_i)\})f^\mu(\{\Lambda^{-1}q_i\}) ,$$
where $\prod\limits_{i=1}^n {E_{\Lambda^{-1} q_i}}/{E_{q_i}}$ comes from the transformation of measure and $\prod\limits_{i=1}^{m}\sqrt{{E_{q_i}}/{E_{\Lambda^{-1} q_i}}}$ from the transformation of c/a operators in $M$. This is equivalent to
$$f^\mu(\{\Lambda^{-1}q_i\})\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right)=\Lambda^\mu_{\ \ \nu}f^\nu(\{q_i\}).$$
The above equation makes completely rigorous sense since it's a statement about c-number functions. Obviously, this equation is sufficient to prove the covariance of the normal ordering
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right):M(\{a^\dagger(p_i), a(p_i)\}):f^\mu(\{p_i\}),$$
since on the operator part only a change of integration variable is needed for the proof.
So let's recapitulate the logic of this answer:
1. The current is only covariant when written in a certain way, but not in all ways. (recall the free scalar field Hamiltonian example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d} pE_{p}(a_p^\dagger a_p+\delta(0))$, which is formally covariant in the first form but not in the second form.)
2. In that certain way where the current is formally covariant, the formal covariance really means a genuine covariance of the coefficient functions.
3. The covariance of the coefficient functions is sufficient to establish the covariance of the normally ordered current. |
Fortunately your scenario is rather simple, your camera is on a line directly perpendicular above and centered on the quad you're concerned about. So what you want is for the quad to fill the whole screen, I assume (more or less) exactly the whole screen, i.e. the size of your projected quad in window space is your entire viewport.
The mathematics of this are rather easy if you understand how the perspective matrix is actually constructed and what it does and if you know some basic trigonometry. The matrix constructed with
glm::perspective takes as viewing angle the field of view along the y-axis and is based on good old
gluPerspective. And if we take a look at the actual matrix, we can see that it maps an object of height $1$ sitting on the view axis at distance $1$ from the camera to a height of $\cot \frac{\alpha}{2}$ ($\alpha$ being your view angle). If you move it further away it obviously gets smaller, and if you increase its height, it gets larger, so the projected height is $\frac{h}{d}\cot\frac{\alpha}{2}$ for camera distance $d$ and object height $h$. We want that height to be equal to $1$ (the height, from the center, of the view plane in NDC space).
So all you need to know in addition to that is the size of your square in the y-direction. We take half that height (because the square is centered) and multiply that by $\cot\frac{\alpha}{2}$ to get the distance it has to be from the camera. So if your square vertices have the y-coordinate +/-H and given your other values:
$$Y - X = H\cot 22.5°$$
$$Y = X + H\cot 22.5°$$
$$Y \approx X + 2.41421H$$ |
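The distance formula above can be checked with a minimal Python sketch (assuming, as in the answer, a 45° vertical field of view so that $\alpha/2 = 22.5°$, and a quad of half-height $H$ centered on the view axis; the function name is illustrative):

```python
import math

def camera_distance(fov_y_deg, half_height):
    # Distance at which a centered quad of the given half-height exactly
    # fills the viewport vertically: d = H * cot(alpha / 2).
    alpha = math.radians(fov_y_deg)
    return half_height / math.tan(alpha / 2.0)

# For a 45-degree field of view and half-height H = 1:
d = camera_distance(45.0, 1.0)  # approximately 2.41421 * H
```

Note the special case of a 90° field of view, where $\cot 45° = 1$ and the required distance equals the half-height.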
Let $U=\{1,\ldots, u\}$ be a
universe of elements for some $u\in \mathbb N$. Given some $n\in\mathbb N$, we are interested in computing some function $f:U^{\le n}\to\mathbb R$ over range queries.
In the Range Query problem we get an integer array $A[1,\ldots,n]\in U^n$ and wish to compute queries that takes as input parameters $1\le i\le j\le n$ and return $f(A[i,\ldots,j])$.
Consider the function $f$ that returns the number of distinct elements in the queried range.
The goal is to preprocess the input array $A$ and create a small summary that allows efficient computation of these distinct queries.
What is the best known algorithm for the problem? (Given a memory bound $m$, what is the minimal query time required?)
If we allow approximation of the number of distinct elements, can we get faster queries?
EDIT: I have found a related problem discussed on Stack Overflow. The answer there shows an $O(n\log n\log u)$-bit algorithm that answers queries in $O(\log n)$ time.
Is this optimal?
Can we get better time/space bounds if we allow approximation? |
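For reference, a standard offline technique (not the succinct online structure from the linked answer) answers a batch of $q$ queries in $O((n+q)\log n)$ time: sweep the array left to right keeping a Fenwick tree with a 1 at the last occurrence of each value, and answer each query $(i,j)$ at time $j$ as a prefix-sum difference. A sketch with illustrative names:

```python
def distinct_range_queries(A, queries):
    """Answer distinct-count queries (i, j), 1-based inclusive, offline."""
    n = len(A)
    bit = [0] * (n + 1)  # Fenwick tree over positions 1..n

    def update(pos, delta):
        while pos <= n:
            bit[pos] += delta
            pos += pos & (-pos)

    def prefix(pos):
        s = 0
        while pos > 0:
            s += bit[pos]
            pos -= pos & (-pos)
        return s

    # Group queries by right endpoint so each is answered during the sweep.
    by_right = {}
    for qi, (i, j) in enumerate(queries):
        by_right.setdefault(j, []).append((i, qi))

    ans = [0] * len(queries)
    last = {}  # value -> last position at which it was seen
    for j in range(1, n + 1):
        v = A[j - 1]
        if v in last:          # value seen before: move its marker right
            update(last[v], -1)
        update(j, 1)
        last[v] = j
        for i, qi in by_right.get(j, []):
            ans[qi] = prefix(j) - prefix(i - 1)
    return ans
```

For example, on `A = [1, 2, 1, 3, 2]` the queries `(1,3)`, `(2,5)`, `(1,5)` yield 2, 3, and 3 distinct elements respectively.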
For $k \ge 4$, the expected time until the first duplicate is $O(\sqrt{n})$. This leaves the case $k=3$ [Edit: The case of $k=3$ is resolved below with a construction with expected first duplicate at about $n/4$].
Let $D_i$ be the number of duplicates in the first $i$ values. The expected time until the first duplicate equals $\sum_{i=0}^n P(D_i = 0)$. We can use the second moment method to estimate these probabilities well enough when $k\ge 4$.
$E(D_i) = {i \choose 2}/n$.
$E(D_i^2) = {i \choose 2}^2/n^2 + {i \choose 2}(1/n - 1/n^2)$.
$\text{Var}(D_i) = E(D_i^2) -E(D_i)^2 = {i \choose 2}(1/n - 1/n^2) \le {i \choose 2}/n$.
So, $0$ is at least $\sqrt{{i \choose 2}/n}$ standard deviations away from the mean of $D_i$. By Chebyshev's inequality, $P(D_i =0)$ is at most $n/{i \choose 2} \le \frac{2n}{(i-1)^2}$. As a probability, it is also at most $1$, and we'll use that estimate for small $i$.
$$\sum_{i=0}^n P(D_i = 0)$$
$$\le \sum_{i=0}^{\lceil \sqrt{2n}\rceil} 1 + \sum_{i=\lceil \sqrt{2n}\rceil+1}^n \frac{2n}{(i-1)^2}$$
$$ \le 3 + \sqrt{2n} + \int_{\sqrt{2n}}^n \frac {2n}{x^2}dx$$
$$ = 1 + 2\sqrt{2n}.$$
This is about a factor of $4\sqrt{2}$ off of the lower bound in the comments.
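The arithmetic of the bound can be checked numerically against the term-wise estimate $\min(1, 2n/(i-1)^2)$ used above (a quick sketch verifying the inequality $\sum_i P(D_i=0) \le 1+2\sqrt{2n}$; this checks only the deterministic estimate, not a simulation of a $k$-wise independent process):

```python
import math

def upper_bound_sum(n):
    # Term-wise estimate from above: P(D_i = 0) <= min(1, 2n / (i-1)^2),
    # with the trivial bound 1 for i <= 1.
    total = 0.0
    for i in range(0, n + 1):
        if i <= 1:
            total += 1.0
        else:
            total += min(1.0, 2.0 * n / (i - 1) ** 2)
    return total

for n in [10, 100, 1000]:
    assert upper_bound_sum(n) <= 1 + 2 * math.sqrt(2 * n)
```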
To solve the $k=3$ case, we'll construct some processes which are not $3$-independent, then take a mixture which is $3$-independent and which has an expected first duplicate of about $n/4$.
For simplicity, we'll ignore times beyond $n$. Any random prefix on the first $n$ can be extended by appending an independent uniform sequence, and the choice of extension has no effect on the expected first duplicate.
Consider random functions $f$ which are symmetric both on the domain $\lbrace 1,...,n \rbrace$ and range $\lbrace 1,...,n \rbrace$. Such a function corresponds to a $3$-independent process if and only if $P(f(1) = f(2) = f(3)) = 1/n^2$ and $P(f(1)=f(2)\ne f(3))=(n-1)/n^2.$
Let $f_0$ denote random bijections between the domain and range. $P(f_0(1) = f_0(2) = f_0(3))=0.$ $P(f_0(1)=f_0(2)\ne f_0(3))=0.$
Let $f_1$ denote random constant maps. $P(f_1(1) = f_1(2) = f_1(3))=1.$ $P(f_1(1)=f_1(2)\ne f_1(3))=0.$
Suppose $n = 9m$. Choose a random set partition of $\lbrace 1,...,n \rbrace$ into $3m$ pairs and $m$ triplets. Choose $4m$ distinct values for the parts, and let $f_2$ take these values on the parts. $P(f_2(1) = f_2(2) = f_2(3)) = m/{n \choose 3} \approx \frac 2{3n^2} \lt 1/n^2.$ $P(f_2(1)=f_2(2)\ne f_2(3)) = \frac{2}{3} \frac{1}{n}+ \frac{1}{3} \frac{2}{n-1} \frac {n-3}{n-2} = \frac{4(n^2-3n+1)}{3 n(n^2-3n+2)}\approx \frac{4}{3n} \gt (n-1)/n^2.$ The first term, $\frac{2}{3n},$ corresponds to the possibility that $1$ is part of a pair and $2$ is the second point in the pair. The second term corresponds to the possibility that $1$ and $2$ are part of a triplet and $3$ is not the third point of the triplet.
$(1/n^2, (n-1)/n^2)$ is in the convex hull of $(0,0)$, $(1,0)$, and $(m/{n \choose 3},\frac{2}{3} \frac{1}{n}+ \frac{1}{3} \frac{2}{n-1} \frac {n-3}{n-2})$. So, some mixture of $f_0$, $f_1$, and $f_2$ is $3$-independent. Specifically,
$$\frac{n^4-13n^2+16n-4}{4n^4-12n^3+4n}f_0 + \frac{n^2-5n+2}{2n^4-6n^3+2n^2}f_1 + \frac{3n^3 - 12n^2+15n-6}{4n^3-12n^2+4n}f_2 $$
is $3$-independent. This mixture gives a weight of $1/4 + o(1)$ to $f_0$, which has expected first duplicate time of $n+1$, so the expected first duplicate of the mixture is at least $(1/4+o(1))n$.
We can use slightly different set partitions when $n$ is not a multiple of $9$. The $2/3:1/3$ split into pairs and triplets was not optimized, so perhaps some other ratio would give a better proportion of $f_0$ in the mixture, hence a better coefficient of $n$ in the expected time of the first duplicate. |
Structure of Atom: Nature of electromagnetic radiation and the photoelectric effect
Nature of Light: Different Theories
Newton's Corpuscular Theory: Light comes out of the source as small particles called corpuscles. It travels in straight lines.
Huygens' Wave Theory: According to Huygens, light has a wave character. He compared light with mechanical waves.
Maxwell's Electromagnetic Theory: Radiation consists of both electric and magnetic components, which are perpendicular to each other and perpendicular to the direction of propagation of light.
Wavelength (λ): units cm (or) m; e.g. λ = 400 nm = 400 × 10⁻⁹ m = 4 × 10⁻⁷ m
Frequency (ν): $\nu = \frac{c}{\lambda}$; units Hz (or) cps (or) s⁻¹
Wave number ($\overline{\nu}$): $\overline{\nu} = \frac{1}{\lambda}$; units cm⁻¹ (or) m⁻¹
Velocity of light (c): c = 3 × 10⁸ m/s (or) 3 × 10¹⁰ cm/s
Amplitude (A): I ∝ a², where I is the intensity.
Electromagnetic spectrum: (figure)
Photoelectric effect: The emission of electrons by a metal surface when radiation of suitable frequency falls on it.
$h\nu = W + KE$, where $KE = \frac{1}{2}mv^{2}$
W → work function (the amount of energy required to free the electron from the metal surface); $W = h\nu_{0}$
ν₀ → threshold frequency: the minimum frequency required to free the electron.
1. Frequency: $\nu = \frac{c}{\lambda}$
2. Wavelength: $\lambda = \frac{c}{\nu}$
3. Wave number: $\overline{\nu} = \frac{1}{\lambda}$, so $\nu = \frac{c}{\lambda} = c\overline{\nu}$
4. Time period: $T = \frac{1}{\nu}$
5. Number of waves: $n = \frac{2\pi r}{\lambda}$ (where $\lambda = \frac{h}{m\upsilon}$)
6. Number of revolutions of e⁻ per second: $\frac{\upsilon}{2\pi r}$ |
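The photoelectric relation $h\nu = W + KE$ can be applied directly. A minimal sketch (the 400 nm wavelength and the 2.0 eV work function are illustrative values, not from the notes):

```python
# Kinetic energy of a photoelectron: KE = h*nu - W = hc/lambda - W.
H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

def photoelectron_ke_ev(wavelength_m, work_function_ev):
    photon_energy_ev = H * C / wavelength_m / EV
    ke = photon_energy_ev - work_function_ev
    return max(ke, 0.0)  # no emission below the threshold frequency

# 400 nm light on a metal with a 2.0 eV work function:
ke = photoelectron_ke_ev(400e-9, 2.0)  # about 1.10 eV
```

A 400 nm photon carries about 3.10 eV, so 2.0 eV goes into freeing the electron and roughly 1.10 eV remains as kinetic energy; light below the threshold frequency (here, wavelengths longer than about 620 nm) ejects no electrons at all.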
Difference between revisions of "SageMath"
Revision as of 16:26, 20 January 2016
SageMath (formerly
Sage) is a program for numerical and symbolic mathematical computation that uses Python as its main language. It is meant to provide an alternative for commercial programs such as Maple, Matlab, and Mathematica.
SageMath provides support for the following:
Calculus: using Maxima and SymPy.
Linear Algebra: using the GSL, SciPy and NumPy.
Statistics: using R (through RPy) and SciPy.
Graphs: using matplotlib.
An interactive shell using IPython.
Access to Python modules such as PIL, SQLAlchemy, etc.
Installation
contains the command-line version; for HTML documentation and inline help from the command line. includes the browser-based notebook interface.
The package has a number of optional dependencies for various features that will be disabled if the needed packages are missing.
Usage
SageMath mainly uses Python as a scripting language with a few modifications to make it better suited for mathematical computations.
SageMath command-line
SageMath can be started from the command-line:
$ sage
For information on the SageMath command-line see this page.
Note, however, that it is not very comfortable for some uses such as plotting. When you try to plot something, for example:
sage: plot(sin,(x,0,10))
SageMath opens a browser window with the Sage Notebook.
Sage Notebook
A better suited interface for advanced usage in SageMath is the Notebook. To start the Notebook server from the command-line, execute:
$ sage -n
The notebook will be accessible in the browser from http://localhost:8080 and will require you to login.
However, if you only run the server for personal use, and not across the internet, the login will be an annoyance. You can instead start the Notebook without requiring login, and have it automatically pop up in a browser, with the following command:
$ sage -c "notebook(automatic_login=True)"
Jupyter Notebook
SageMath also provides a kernel for the Jupyter notebook. To use it, install and , launch the notebook with the command
$ jupyter notebook
and choose "SageMath" in the drop-down "New..." menu. The SageMath Jupyter notebook supports LaTeX output via the
%display latex command and 3D plots if is installed.
Cantor
Cantor is an application included in the KDE Edu Project. It acts as a front-end for various mathematical applications such as Maxima, SageMath, Octave, Scilab, etc. See the Cantor page on the Sage wiki for more information on how to use it with SageMath.
Cantor can be installed from the official repositories, either as its own package or as part of the KDE application groups.
Documentation
For local documentation, one can compile it into multiple formats such as HTML or PDF. To build the whole SageMath reference, execute the following command (as root):
# sage --docbuild reference html
This builds the HTML documentation for the whole
reference tree (may take longer than an hour). An option is to build a smaller part of the documentation tree, but you would need to know what it is you want. Until then, you might consider just browsing the online reference.
For a list of documents see
sage --docbuild --documents and for a list of supported formats see
sage --docbuild --formats.
Optional additions

SageTeX
If you have installed TeX Live on your system, you may be interested in using SageTeX, a package that makes the inclusion of SageMath code in LaTeX files possible. TeX Live is made aware of SageTeX automatically so you can start using it straight away.
As a simple example, here is how you include a Sage 2D plot in your TeX document (assuming you use pdflatex):

Include the sagetex package in the preamble of your document with the usual

\usepackage{sagetex}

Create a sagesilent environment in which you insert your code:

\begin{sagesilent}
dob(x) = sqrt(x^2 - 1) / (x * arctan(sqrt(x^2 - 1)))
dpr(x) = sqrt(x^2 - 1) / (x * log( x + sqrt(x^2 - 1)))
p1 = plot(dob,(x, 1, 10), color='blue')
p2 = plot(dpr,(x, 1, 10), color='red')
ptot = p1 + p2
ptot.axes_labels(['$\\xi$','$\\frac{R_h}{\\max(a,b)}$'])
\end{sagesilent}

Create the plot, e.g. inside a float environment:

\begin{figure}
\begin{center}
\sageplot[width=\linewidth]{ptot}
\end{center}
\end{figure}

Compile your document with the following procedure:

$ pdflatex <doc.tex>
$ sage <doc.sage>
$ pdflatex <doc.tex>

Then you can have a look at your output document.
The full documentation of SageTeX is available on CTAN.
Install Sage package
If you installed sagemath from the official repositories, it is not possible to install sage packages using the sage option
sage -i packagename.
Instead, you should install the required packages system-wide. For example, if you need
jmol (for 3D plots):
$ sudo pacman -S jmol
An alternative would be to have a local installation of sagemath and to manage optional packages manually.
Troubleshooting

TeX Live does not recognize SageTeX
If your TeX Live installation does not find the SageTeX package, you can try the following procedure (as root, or use a local folder):
Copy the files to the texmf directory:

# cp /opt/sage/local/share/texmf/tex/* /usr/share/texmf/tex/

Refresh TeX Live:

# texhash /usr/share/texmf/
texhash: Updating /usr/share/texmf/.//ls-R...
texhash: Done.

Starting Sage Notebook Server throws an ImportError
The Sage Notebook Server is in an extra package. So, if you get an ImportError when launching
% sage --notebook
┌────────────────────────────────────────────────────────────────────┐
│ Sage Version 6.4.1, Release Date: 2014-11-23                       │
│ Type "notebook()" for the browser-based notebook interface.        │
│ Type "help()" for help.                                            │
└────────────────────────────────────────────────────────────────────┘
Please wait while the Sage Notebook server starts...
Traceback (most recent call last):
  File "/usr/bin/sage-notebook", line 180, in <module>
    launcher(unknown)
  File "/usr/bin/sage-notebook", line 58, in __init__
    from sagenb.notebook.notebook_object import notebook
ImportError: No module named sagenb.notebook.notebook_object
you most likely do not have the required notebook package installed.
sage -i doesn't work
If you have installed Sage from the official repositories, then you have to install your additional packages system-wide. See Install Sage package
3D plot fails in notebook
If you get the following error while trying to plot a 3D object:
/usr/lib/python2.7/site-packages/sage/repl/rich_output/display_manager.py:570: RichReprWarning: Exception in _rich_repr_ while displaying object: Jmol failed to create file '/home/nicolas/.sage/temp/archimede/3188/dir_cCpcph/preview.png', see '/home/nicolas/.sage/temp/archimede/3188/tmp_JVpSqF.txt' for details RichReprWarning, Graphics3d Object
then you are probably missing the jmol package. See Install Sage package to install it.
The
Manin constant is defined for elliptic curves over $\Q$ which are optimal. Let $E$ be an optimal elliptic curve of conductor $N$, let $f$ be the modular form associated to $E$, and let $\varphi:X_0(N)\to E$ be the associated modular parametrization. Let $\omega_E$ be the Néron differential on $E$. Then the pull-back $\varphi^*\omega_E$ of $\omega_E$ to $X_0(N)$ satisfies\[ \varphi^*\omega_E = c \cdot 2\pi i f(z)dz\]for some non-zero rational number $c$, called the Manin constant of $E$.
It is conjectured that $c=1$ always. A theorem of Edixhoven states that $c\in\Z$, and there are several results stating that $c=1$ if certain conditions hold: see Amod Agashe, Ken Ribet and William Stein: The Manin Constant, Pure and Applied Mathematics Quarterly, Vol. 2 no.2 (2006), pp. 617-636. In an appendix to that paper, John Cremona gives an algorithm for verifying that $c=1$ in individual cases, and proves that $c=1$ for all optimal elliptic curves over $\Q$ in the database.
For non-optimal elliptic curves $E'$ over $\Q$, the Manin constant may also be defined, in terms of the Manin constant of the unique optimal curve isogenous to $E'$. Let $\varphi:X_0(N)\to E$ and $f$ be as above, and $\psi:E\to E'$ an isogeny of least degree from $E$ to $E'$. Then we obtain a parametrization $\psi\circ\varphi:X_0(N)\to E'$ and define the Manin constant $c'$ of $E'$ to be the non-zero rational number such that \[ (\psi\circ\varphi)^*\omega_{E'} = c' \cdot 2\pi i f(z)dz. \] This is an integer multiple of the Manin constant of $E$, since $\psi^*\omega_{E'}$ is an integer multiple of $\omega_E$; the multiplier divides the degree of $\psi$ but may be strictly less: it may equal $1$.
Knowl status: beta. Last edited by Edgar Costa on 2019-09-27.
Rudin's Real and Complex Analysis Chapter 3 Exercise 4 is:
Assume that $\varphi$ is a continuous real function on $(a,b)$ s.t. $$\varphi\left(\frac{x+y}{2}\right)\leq \frac{\varphi(x)+\varphi(y)}{2}$$ for all $x,y\in(a,b)$. Prove that $\varphi$ is convex.
The conclusion does not follow if continuity is omitted from the hypotheses.
My question is, is there some way to explicitly construct a counterexample such that $\varphi\left(\frac{x+y}{2}\right)\leq \frac{\varphi(x)+\varphi(y)}{2}$ for all $x,y\in(a,b)$, but $\varphi$ is not convex? |
Logarithms confuse many of my students so I thought it is time to explain these. I touched on these before on a post about inverse operations, but let’s add some more detail.
Let’s first define some terms here. Consider the expression x². Here, x is raised to the power of 2: x is the base and 2 is the exponent, power, order, or index. Lots of different terms for the exponent – I will mostly use the term exponent. So the exponent defines what to do with the base.
Now before I talk about logarithms specifically, I want to review what various kinds of exponents mean. I have talked about this before, but these concepts should be fully understood if logarithms are to make sense to you.
Now x² means x × x. A positive integer exponent says how many times you multiply the base by itself. So in general, for a positive integer m, xᵐ = x × x × x × … × x, where x is listed m times.
The special case of when m = 0 is defined as x⁰ = 1 for any nonzero x, no matter how small or how large x is. Now what about negative integers?
\[
\begin{array}{c}
{x}^{-1} = \dfrac{1}{{x}^{1}};\quad {x}^{-2} = \dfrac{1}{{x}^{2}};\quad \dfrac{1}{{x}^{-2}} = {x}^{2}\\[2ex]
{x}^{-m} = \dfrac{1}{{x}^{m}};\quad \dfrac{1}{{x}^{-m}} = {x}^{m}
\end{array}
\]
So a negative exponent is the same as the positive one except that it and its base move into the denominator, or vice versa. You can freely move a factor between the numerator and the denominator, as long as you change the sign of its exponent.
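These sign-flip rules are easy to sanity-check numerically. Here is a quick Python sketch (the values of x and m are arbitrary):

```python
# Numerical check (illustrative, not from the post): a negative exponent
# moves the factor across the fraction bar with its sign flipped.
x, m = 2.5, 3

assert abs(x**-m - 1 / x**m) < 1e-12      # x^(-m) = 1/x^m
assert abs(1 / x**-m - x**m) < 1e-9       # 1/x^(-m) = x^m
assert abs(x**0 - 1.0) < 1e-12            # x^0 = 1 for nonzero x
print("negative-exponent rules check out")
```

Small tolerances are used instead of exact equality because floating-point powers can differ in the last bit.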
What about fractional exponents? Let’s start with fractions where “1” is in the numerator. The denominator in a fractional exponent gives the root of the number. For example,
\[
{x}^{\frac{1}{2}} = \sqrt[2]{x} = \sqrt{x}
\]
The “2” for the square root is usually assumed if it is not there. However, for other roots (like cube roots), the
index must be there to indicate the kind of root it is. Other examples:
\[
{x}^{\frac{1}{3}} = \sqrt[3]{x}\qquad {x}^{\frac{1}{6}} = \sqrt[6]{x}\qquad {x}^{\frac{1}{n}} = \sqrt[n]{x}
\]
The numerator in a fractional exponent means the same as it does for an integer exponent, so we can combine these two definitions for more general fractions:\[
{x}^{\frac{2}{3}} = \sqrt[3]{{x}^{2}}\qquad {x}^{\frac{5}{6}} = \sqrt[6]{{x}^{5}}\qquad {x}^{\frac{m}{n}} = \sqrt[n]{{x}^{m}}
\]
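A quick numerical sanity check of the fractional-exponent rules (an illustrative Python sketch; the value of x is arbitrary):

```python
# Numerical check (illustrative, not from the post): fractional
# exponents are roots combined with integer powers.
x = 7.0

assert abs(x**0.5 - x**(1/2)) < 1e-12        # x^(1/2) is the square root
assert abs((x**(1/3))**3 - x) < 1e-9         # cubing the cube root gives x back
assert abs(x**(2/3) - (x**2)**(1/3)) < 1e-9  # x^(2/3) = cube root of x^2
print("fractional-exponent rules check out")
```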
Now we have not covered irrational exponents like x^𝜋. The development of these is a bit more complex, so I’ll just say “use your calculator”.
Indeed, you can use your calculator to raise a number to a power if it has a key labelled “y^x” or a key with the “^” symbol on it. I will leave it to you to find out how to use these keys. If you do not have a fancy calculator, there is always the all-knowing internet.
So we have talked before about how to solve equations like x² = 16 by taking the square root of both sides of the equation. But how do you solve 2ˣ = 16? Notice that x is now in the exponent. That changes everything, as you can’t take the xth root of a number on your calculator … but can you?
In the next post on this topic, I’ll introduce you to logarithms then later, how they are used. |
Since the homomorphism $\bar{\phi}:M/IM \to N/IN$ is surjective, for any $b\in N$ there exists $a\in M$ such that\[\bar{\phi}(\bar{a})=\bar{b}, \tag{*}\]where $\bar{a}=a+IM$ and $\bar{b}=b+IN$.By definition of $\bar{\phi}:M/IM \to N/IN$, we have\[\bar{\phi}(\bar{a})=\overline{\phi(a)}=\phi(a)+IN.\]Thus, it follows from (*) that\[\phi(a)+IN=b+IN,\]or equivalently\[b-\phi(a)\in IN.\]Thus we have\[b\in \phi(M)+IN.\]
Now we claim that for any $b\in N$ and any positive integer $k$, we have\[b\in \phi(M)+I^kN.\]We prove this claim by induction on $k$.The base case $k=1$ is proved above.Suppose that $b\in \phi(M)+I^nN$. Then we prove that $b\in \phi(M)+I^{n+1}N$.Since $b\in \phi(M)+I^nN$, we have\[b=\phi(a)+\sum_{i}\alpha_i c_i,\]where the sum is finite and $\alpha_i\in I^n$ and $c_i\in N$.Since each $c_i\in N$, we have $c_i \in \phi(M)+IN$ by the base case.Hence we have\[c_i=\phi(a_i)+\sum_{j_i}\beta_{j_i}d_{j_i}\]for some finite pairs $(\beta_{j_i}, d_{j_i})\in (I, N)$.It follows that we have\begin{align*}b&=\phi(a)+\sum_{i}\alpha_i c_i\\&=\phi(a)+\sum_{i}\alpha_i \left(\, \phi(a_i)+\sum_{j_i}\beta_{j_i}d_{j_i} \,\right)\\&=\phi(a)+\sum_i\alpha_i\phi(a_i)+\sum_{i}\sum_{j_i}\alpha_i\beta_{j_i}d_{j_i}\\&=\phi(a)+\sum_i\phi(\alpha_ia_i)+\sum_{i, j_i}(\alpha_i\beta_{j_i})d_{j_i}\\&=\phi\left(\, a+\sum_i\alpha_ia_i \,\right)+\sum_{i, j_i}(\alpha_i\beta_{j_i})d_{j_i},\end{align*}where the last two equalities follows since $\phi$ is an $R$-module homomorphism.Since $\alpha_i\in I^n$ and $\beta_{j_i}\in I$, the product $\alpha_i\beta_{j_i}\in I^{n+1}$.Hence the above expression of $b$ yields that\[b\in \phi(M)+I^{n+1}N,\]and this completes the induction step and the claim is proved.Now, since $I$ is a nilpotent ideal by assumption, there is a positive integer $n$ such that $I^n$ is the zero ideal of $R$. Thus, it follows from the claim that for any $b\in N$ we have\begin{align*}b\in \phi(M)+I^nN=\phi(M).\end{align*}This implies that $\phi:M\to N$ is surjective as required.
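The statement can be sanity-checked by brute force in a tiny example. Below is an illustrative Python sketch (not part of the proof) for R = Z/8Z with the nilpotent ideal I = (2) (note I³ = 0), where every R-module endomorphism of R is multiplication by some a:

```python
# Brute-force check (illustrative): over R = Z/8Z with nilpotent ideal
# I = (2), every R-linear map phi: R -> R is x -> a*x.
# The claim: phi surjective mod I  =>  phi surjective.
R = list(range(8))
I = [2 * r % 8 for r in R]                # the ideal (2) = {0, 2, 4, 6}

for a in R:                               # phi(x) = a*x mod 8
    image = {a * x % 8 for x in R}
    image_plus_IN = {(m + i) % 8 for m in image for i in I}
    if image_plus_IN == set(R):           # phi surjective mod I
        assert image == set(R), f"claim fails for a={a}"
print("claim holds for every multiplication map on Z/8Z")
```

Here the odd multipliers are units (surjective outright), while the even ones fail to be surjective even mod I, so the implication holds in every case.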
Digital Signal Processing - Dec 2014
Electronics & Telecom Engineering (Semester 5)
TOTAL MARKS: 100
TOTAL TIME: 3 HOURS (1) Question 1 is compulsory. (2) Attempt any four from the remaining questions. (3) Assume data wherever required. (4) Figures to the right indicate full marks.
Answer any one question from Q1 and Q2 1 (a) An analog signal is given as x(t) = sin(10πt) + 2sin(20πt) + 2cos(30πt). i) What is the Nyquist rate of this signal? ii) If the signal is sampled with a sampling frequency of 20 Hz, what is the discrete-time signal obtained after sampling? (6 marks) 1 (b) For a discrete-time sequence x(n) = {1, 2, 3, 4}, the DFT is given by X(k) = {10, −2+2j, −2, −2−2j}. Compute the DFT of x^(n) = {3, 4, 1, 2} using the circular time-shift property of the DFT. (6 marks) 1 (c) If the impulse response of the system is h(n) = [(0.5)^n + n(0.2)^n] u(n): i) Compute the transfer function. ii) Obtain the difference equation of the system. (8 marks) 2 (a) A signal x(t) = sin(ωt) of frequency 50 Hz is sampled using a sampling frequency of 80 Hz. Obtain the recovered signal if ideal reconstruction is used. (6 marks) 2 (b) State and prove Parseval's theorem for the following sequence: x(n) = {1, 2, 3, 4}. (8 marks) 2 (c) Find the Z-transform of: $$ i) \ \ x(n)= e^{\left ( - \dfrac {n}{40} \right )} u(n). \ Draw \ the \ pole\text{-}zero\ diagram \ for \ X(z) \\ ii) \ \ x(n) = \left ( - \dfrac {1}{5} \right )^n u(n) + 5 \left ( \dfrac {1}{2} \right )^{-n} u(-n-1) $$ (6 marks)
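Question 1(b) can be checked numerically. A NumPy sketch (assuming the usual DFT sign convention, as in numpy.fft): since {3, 4, 1, 2} is {1, 2, 3, 4} circularly shifted by m = 2, its DFT is X(k)·e^(−j2πkm/N) = X(k)·(−1)^k.

```python
import numpy as np

# Circular time-shift property: a shift by m multiplies the DFT by
# exp(-j*2*pi*k*m/N). Here {3,4,1,2} = {1,2,3,4} circularly shifted by 2.
x = np.array([1, 2, 3, 4])
X = np.fft.fft(x)                                  # {10, -2+2j, -2, -2-2j}
k = np.arange(4)
X_shifted = X * np.exp(-2j * np.pi * k * 2 / 4)    # apply the shift property

assert np.allclose(X_shifted, np.fft.fft(np.roll(x, 2)))
print(np.round(X_shifted, 6))                      # values: 10, 2-2j, -2, 2+2j
```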
Answer any one question from Q3 and Q4 3 (a) Design a digital Butterworth filter that satisfies the following constraints using the bilinear transformation. Assume T = 1 sec. $$ \begin {align*} 0.9 \le & \big \vert H(e^{j\omega})\big \vert \le 1 & 0 \le \omega \le \dfrac {\pi}{2} \\ & \big \vert H(e^{j\omega}) \big \vert \le 0.2 & \dfrac {3\pi}{4}\le \omega \le \pi \end{align*} $$ (11 marks) 3 (b) Convert the analog filter with system function $$ H_a (s) = \dfrac {s+0.2}{(s+0.2)^2 + 9} $$ into a digital IIR filter by means of the impulse-invariant technique. Assume T = 1 sec. (6 marks) 4 (a) Design a digital Butterworth filter that satisfies the following specification using the bilinear transformation.
Sampling frequency: 8 kHz
Passband: 0-500 Hz
Passband ripple: 3 dB
Stopband: 2-4 kHz
Stopband ripple: 20 dB

4 (b) Obtain direct form II and cascade realizations for the system:
y(n) = −0.1y(n − 1) + 0.2y(n − 2) + 3x(n) + 3.6x(n − 1) + 0.6x(n − 2) (6 marks)
Answer any one question from Q5 and Q6 5 (a) Design a bandpass FIR filter using a Hamming window for M = 11. $$ \begin {align*}H(e^{j\omega}) & =1 & \dfrac {\pi}{4}\le \omega \le \dfrac {3 \pi}{4} \\ &=0 & \text{otherwise} \end{align*} $$ (11 marks) 5 (b) A signal having values in the range [−1, +1] is quantized using 8 bits, with the MSB as the sign bit. i) Determine the quantization step size. ii) Calculate the quantization noise power. (3 marks) 5 (c) What is Gibbs' phenomenon? How is it reduced? (3 marks) 6 (a) Using the frequency sampling method, design an FIR filter for N = 7. $$ \begin {align*}H(e^{j\omega}) & =1 & 0\le \omega \le \dfrac {\pi}{2} \\ &=0 & \dfrac {\pi}{2} \le \omega \le \pi \end{align*} $$ (9 marks) 6 (b) Show that a symmetric FIR filter has a linear phase response. (8 marks)
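Question 6(b) can be illustrated numerically (this is not the required proof): for a symmetric FIR filter, removing the constant delay (M−1)/2 from the frequency response leaves a purely real envelope, i.e. the phase is linear. The coefficients below are made up.

```python
import numpy as np

# Symmetric FIR filter h(n) = h(M-1-n) has H(w) = A(w) * exp(-j*w*(M-1)/2)
# with A(w) real, which is exactly a linear phase response.
h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])      # symmetric, M = 5
w = np.linspace(0.1, 3.0, 50)
# H(w) = sum_n h[n] e^{-jwn}
H = np.array([np.sum(h * np.exp(-1j * wi * np.arange(5))) for wi in w])
A = H * np.exp(1j * w * 2)                    # remove the delay (M-1)/2 = 2

assert np.allclose(A.imag, 0, atol=1e-12)     # envelope is purely real
print("phase of H(w) is -2w (mod pi): linear phase, numerically")
```

Here A(w) = 3 + 4cos(w) + 2cos(2w); sign changes of A(w) only add jumps of π, so the phase is still linear.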
Answer any one question from Q7 and Q8 7 (a) Draw the block diagram of a system for sampling-rate conversion by a non-integer factor and explain the operation of each block with the help of relevant diagrams and mathematical expressions. Can the positions of the decimator and interpolator be interchanged? Justify your answer. (10 marks) 7 (b) Explain the factors that influence the selection of a digital signal processor. (6 marks) 8 (a) The sampling rate is to be reduced from 96 kHz to 1 kHz. The highest frequency of interest is 450 Hz, δp = 0.01, δs = 0.001. Design a two-stage decimator with decimating factors of 32 and 3. (8 marks) 8 (b) Write notes on: i) the MAC unit; ii) pipelining. (8 marks)
How vortex bound states affect the Hall conductivity of a chiral p_x ± ip_y superconductor
This work extends our understanding of the anomalous charge response $c_{xy}$ of chiral superconductors. It is established that, in order to correctly apply the Streda formula for calculating $c_{xy}$, it is necessary to employ compact geometries that avoid edge effects. This, in turn, requires a careful analysis of the effect of finite-radius vortex nucleation, which leads to an adjustment of the Streda formula. The modified Streda formula is then applied to calculate $c_{xy}$ for a $p_x \pm ip_y$ superconductor placed on a square lattice at zero magnetic field and zero vorticity. We show that $c_{xy}$ is a sum of two contributions, one of which is non-universal while the other equals $\kappa/8\pi$, where $\kappa$ is the Chern number of the superconductor.
Daniel Ariad, Yshai Avishai and Eytan Grosfeld. "How vortex bound states affect the Hall conductivity of a chiral p±ip superconductor." Phys. Rev. B 98, 104511 (2018). arXiv:1603.00840. In addition, our study is summarized in this poster.
Signatures of the topological spin of Josephson vortices in topological superconductors
Realization of non-abelian quasi-particles known as Majorana fermions is an ongoing challenge for physicists exploring topological states of matter. Towards achieving this goal, we recently suggested that Josephson vortices in topological Josephson junctions (TJJ) would constitute such Majorana fermions and retain the exchange statistics of bulk vortices. We corroborated this hypothesis by finding the universal exchange phase of Josephson vortices. In order to do so, we derived the Hamiltonian governing the dynamics of a soliton in an annular Josephson junction. Our next step was to develop a procedure to calculate the Berry connection of systems that possess particle-hole symmetry. The procedure was applied to confirm that the Abelian phase due to an exchange between a vortex in the bulk of a p-wave superconductor and a Josephson vortex is π/8. In addition, we suggested an experiment to measure it.
Daniel Ariad and Eytan Grosfeld. "Signatures of the topological spin of Josesphson vortices in topological superconductors." Phys. Rev. B 95, 161401(R) (2017) arXiv:1301.0538. In addition, our study is summarized in this poster.
On the effective theory of vortices in two-dimensional spinless chiral p-wave superfluids
As the search for quantum computers evolves, new methods to realize universal topological quantum computation are explored. Vortex defects in a 2D spinless chiral p-wave superfluid bind Majorana zero modes that endow them with non-Abelian exchange statistics. Motivated by its potential for topological quantum information processing, we developed a ${\mathbb{U}(1) \times \mathbb{Z}_2}$ effective gauge theory for vortices in a ${p_x+ip_y}$ superfluid in two dimensions. The combined gauge transformation binds ${\mathbb{U}(1)}$ and ${\mathbb{Z}_2}$ defects so that the total transformation remains single-valued and manifestly preserves the particle-hole symmetry of the action. The ${\mathbb{Z}_2}$ gauge field introduces a complete Chern-Simons term in addition to a partial one associated with the ${\mathbb{U}(1)}$ gauge field. The theory reproduces the known physics of vortex dynamics, such as a Magnus force proportional to the superfluid density. It also predicts a universal Abelian phase, ${\exp(i\pi/8)}$, upon the exchange of two vortices, modified by non-universal corrections due to the partial Chern-Simons term that are screened in a charged superfluid.
The role pickup ions play in the termination shock
An energy conservation paradox emerged from the data collected by Voyager 2 (V2) while crossing the termination shock in 2007. The solar wind (SW) blows outward from the Sun and forms a bubble of solar material in the interstellar medium (ISM). The termination shock occurs where the SW changes from being supersonic to being subsonic. It was found that the sum of the flow energy and the thermal energy is not conserved across the shock. We showed that the pickup ions (PUI), a group of atoms from the ISM that are ionized by the Sun's radiation and then swept along with the SW, and whose energy spectrum lies outside the measurement range of V2's instruments, gain the missing energy. This was done by using the Liouville theorem to map the PUI momentum distribution between the two sides of the shock. We found that: (a) the PUI can gain the missing energy and momentum along the shock through a shock drift mechanism (SDM); (b) the SDM must occur upstream of the shock.
(Previous related question: Finding mathematically the ground state density in DFT)
I am studying the density optimisation procedure (in particular for Orbital-Free DFT) in this thesis.
The derivative of the energy is given as:
$$\frac{\delta E[n(r)]}{\delta n(r)} = v(r) + v_i(r) + \frac{1}{2}(3\pi^2)^{2/3}\left(n(r)\right)^{2/3},$$
where $v(r)$ is the external potential and $v_i(r)$ is the Hartree potential, or electron-electron repulsion.
The chemical potential can thus be obtained after integration: $$\mu = \frac{1}{V} \int_{\Omega} \frac{\delta E[n(r)]}{\delta n(r)}dr.$$
What confuses me is the next equation, which allows to find the density of the next step $k+1$: $$n(r)^{(k+1)} = n(r)^{(k)} - t\left(\frac{\delta E[n(r)]}{\delta n(r)} - \mu\right),$$ where $t$ is a step size to help convergence.
If I understand this correctly, $\mu$ is an average variation of energy. The density at a given $r$ is thus varied by the difference between the derivative at that position and the average derivative.
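Here is a minimal sketch of one such update step on a 1D grid (all quantities below are made up for illustration, and the Hartree term is omitted). The point is that subtracting μ, the grid average of the functional derivative, makes the step preserve the total electron number:

```python
import numpy as np

# Toy 1D illustration (made-up potential and units): one step of
# n_{k+1} = n_k - t * (dE/dn - mu), where mu is the average of dE/dn.
r = np.linspace(0.1, 10.0, 200)
dr = r[1] - r[0]
n = np.exp(-r)                                   # some initial density
v = -1.0 / r                                     # external potential (toy)

def dE_dn(n):
    # Thomas-Fermi kinetic term + external potential (Hartree omitted)
    return v + 0.5 * (3 * np.pi**2)**(2 / 3) * n**(2 / 3)

g = dE_dn(n)
mu = g.mean()                                    # average of the derivative
n_new = n - 0.05 * (g - mu)                      # one gradient step

# Subtracting mu makes the step leave the total electron number unchanged:
assert abs(n_new.sum() * dr - n.sum() * dr) < 1e-9
```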
$v(r)$ is a constant for a given position, and $v_i(r)$ depends only on the density, just like the kinetic term above (Thomas-Fermi in this case). It thus seems that the variation of density will never quite reach zero unless the density is exactly the right value to balance the external potential. This seems simplistic and wouldn't explain orbitals beyond the $s$ orbital.
In parallel, I am implementing a simple code for the hydrogen atom, but I can't get anything to converge nor give reasonable values; I must be doing something wrong somewhere. |
Kinetic Theory of Gases

Kinetic Theory of an Ideal Gas and Degrees of Freedom

Kinetic theory of gases equation: \tt PV=\frac{1}{3}mN\,V^2_{RMS}, where P is the pressure exerted by the gas, V is the volume, m is the mass of one molecule, N is the total number of molecules and V_{RMS} is the RMS velocity.

If the volume and temperature of a gas are constant, P ∝ mN: if the mass of the gas is increased, the number of molecules and hence the number of collisions per second increases, and the pressure will increase.

If the mass and volume of the gas are constant, \tt P\propto V_{RMS}^2\propto T: if the temperature increases, the mean square speed of the gas molecules will increase and the gas molecules move faster.

The total kinetic energy of all the molecules of a monoatomic gas is given by \tt E_T=\frac{3}{2}Nk_BT (N = number of gas molecules).

The average translational KE of a gas molecule depends only on its temperature and is independent of its nature.

The total number of independent modes in which a particle can possess energy is called its degrees of freedom. The number of degrees of freedom is 3 for a monoatomic gas, 5 for a diatomic gas and 6 for a polyatomic gas.

The internal energy of n moles of a gas in which each molecule has f degrees of freedom is \tt U=\frac{f}{2}nRT=\frac{f}{2}Nk_BT.

Only the average translational kinetic energy of a gas contributes to its temperature. Two gases with the same average translational KE have the same temperature even if one has greater rotational energy and hence greater internal energy.

Variation of the degrees of freedom of a diatomic gas with temperature: at very low temperature only translation is possible. As the temperature increases, rotational motion can begin; at still higher temperatures, vibratory motion can begin.
1. Kinetic energy of gas molecules.
Pressure exerted by a gas is \tt p = \frac{1}{3}\frac{M}{V}v^{2}_{rms} = \frac{1}{3}\rho v^{2}_{rms}
2. The rms velocity is related to the density (ρ) or molar mass (M) by the relation \tt v_{rms} = \sqrt{\overline{v}^{2}} = \sqrt{\frac{3p}{\rho}} = \sqrt{\frac{3RT}{M}} = \sqrt{\frac{3k_{B}T}{m}}
3. Kinetic energy of gas per unit volume \tt E = \frac{1}{2}\frac{M}{V}v^{2}_{rms} \Rightarrow E \propto v^{2}_{rms} \ or \ E \propto T
Also, \tt pV = \frac{2}{3}E
4. The average kinetic energy of one mole of an ideal gas is \tt E = \frac{3}{2} RT = \frac{3}{2}k_{B}N_{A}T
5. A monoatomic gas molecule has only translational kinetic energy
\tt E_{t} = \frac{1}{2} mv^{2}_{x} + \frac{1}{2}mv^{2}_{y} + \frac{1}{2} mv^{2}_{z}
6. Apart from translational kinetic energy, a diatomic molecule has two rotational kinetic energy terms
\tt E_{t} + E_{r} = \frac{1}{2} mv^{2}_{x} + \frac{1}{2}mv^{2}_{y} + \frac{1}{2} mv^{2}_{z} + \frac{1}{2}I_{y}\omega_{y}^{2} + \frac{1}{2}I_{z}\omega_{z}^{2}
7. Diatomic molecules like CO, even at moderate temperatures, have a mode of vibration: the atoms vibrate along the interatomic axis and contribute a vibrational energy term E_v to the total energy \tt E = E_{t} + E_{r} + E_{v} = \frac{1}{2} mv^{2}_{x} + \frac{1}{2}mv^{2}_{y} + \frac{1}{2} mv^{2}_{z} + \frac{1}{2}I_{y}\omega_{y}^{2} + \frac{1}{2}I_{z}\omega_{z}^{2} + \frac{1}{2} m \left[\frac{dy}{dt}\right]^{2} + \frac{1}{2}Ky^{2}
8. The energy per molecule per degree of freedom is \tt \frac{1}{2}k_{B}T, where k_B is Boltzmann's constant.
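A short worked example using formulas 2 and 4 above (the gas and temperature are assumed for illustration):

```python
import math

# Worked example (values assumed, not from the notes): RMS speed and
# average translational KE per mole for nitrogen (M = 0.028 kg/mol) at 300 K.
R = 8.314          # J/(mol K)
T = 300.0          # K
M = 0.028          # kg/mol

v_rms = math.sqrt(3 * R * T / M)          # v_rms = sqrt(3RT/M)
E_mole = 1.5 * R * T                      # E = (3/2) R T per mole

print(f"v_rms ~ {v_rms:.0f} m/s")         # ~ 517 m/s
print(f"E per mole ~ {E_mole:.0f} J")     # ~ 3741 J
```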
The problem:
Input: An $n \times n$ matrix of 0's and 1's, and a position pos of this matrix (i.e. a pair of integers $i,j$ with $1 \leq i,j \leq n$)
Output:
YES if there exists a path through
adjacent matrix entries $\dagger$, starting at pos, covering each matrix entry containing a 1 exactly once, and not covering any matrix entry containing a 0.
NO otherwise.
$\dagger$ a matrix entry is adjacent to the one immediately to its left, to the one immediately to its right, to the one immediately upwards and the one immediately below.
Informally, the matrix can be seen as a labyrinth where the 0's are walls, you start somewhere, and you have to walk through the whole maze without repeating any position.
Example input:
1100
1100
0000
0001

Pos: (1,1)
Corresponding output:
No (because you can't reach the position (4,4))
Is this problem NP-complete? If it is, what other NP-complete problem has been reduced to it? If it isn't, what approach can I use to design an efficient algorithm?
I think this is a particular case of the Hamiltonian path problem (except that you have a fixed starting point). The graph can be constructed by taking the matrix entries with 1's as vertices. Two vertices are adjacent iff their corresponding matrix entries are adjacent. So I think that reducing this problem to the Hamiltonian path problem should be easy. Of course, to prove it is NP-complete, we would have to do the reduction in the other direction.
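For small instances, the problem can be decided by brute-force DFS (exponential time, illustrative only; positions are 0-indexed here), which confirms the example above:

```python
# Brute-force check (exponential; illustrative only): does a path exist
# from pos through adjacent 1-entries covering every 1 exactly once?
def has_covering_path(grid, pos):
    ones = {(i, j) for i, row in enumerate(grid)
                   for j, c in enumerate(row) if c == "1"}

    def dfs(cell, visited):
        if visited == ones:
            return True
        i, j = cell
        for nxt in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]:
            if nxt in ones and nxt not in visited:
                if dfs(nxt, visited | {nxt}):
                    return True
        return False

    return pos in ones and dfs(pos, {pos})

grid = ["1100", "1100", "0000", "0001"]
print(has_covering_path(grid, (0, 0)))   # False: the 1 at (3,3) is unreachable
```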
Gödel's constructible universe ($L$) is defined using the definable power set operator in first order logic ($\mathcal{L}_{\omega ,\omega}$). One can produce such a universe in infinitary logics in the same way, using the corresponding notions of formulas and definability. Obviously $L$ becomes larger when the logic has more expressive power.
For each cardinal $\kappa$ define $L_{\kappa}$ to be Gödel's constructible universe in the infinitary logic $\mathcal{L}_{\kappa,\kappa}$ and $L_{\infty}$ is Gödel's constructible universe in $\mathcal{L}_{\infty,\infty}$.
(1) Is $L_{\kappa}$ a model of $ZFC$ for each cardinal $\kappa$? What about $ZFC+GCH$?
(2) What is $L_{\infty}$?
(3) Is there a (possibly large) cardinal $\kappa$ such that $L_{\kappa}$ is Dodd-Jensen core model, $L[U]$, $HOD$, etc?
(4) What are the consistency strengths of the existence of non-trivial elementary embeddings from $\langle L_{\kappa},\in\rangle$ to itself for different $\kappa$s in the sense of infinitary logic $\mathcal{L}_{\kappa,\kappa}$?
Note that by Prof. Hamkins' answer for $L_{\infty}$, it finally reaches Kunen's inconsistency, but what about a given cardinal $\kappa$? Are all these consistency strengths for different cardinals bounded by some large cardinal axiom, and is there a gap between the consistency strength of the existence of a non-trivial elementary embedding from $\langle L_{\infty},\in\rangle$ to itself and the consistency strengths of the existence of such elementary embeddings for the $L_{\kappa}$s?
(5) If there is a cardinal $\kappa$ such that $L_{\kappa}=HOD$, is it possible to determine consistency strength of the existence of a non-trivial (first-order) elementary embedding from $\langle HOD,\in\rangle$ to itself by analyzing the growth speed of the consistency strength of existence of such embeddings for $\langle L_{\lambda}, \in\rangle$s in $\lambda <\kappa$?
(6) What is $L_{\kappa}$ for the least strongly compact cardinal $\kappa$? |
In a standard portfolio optimization setting, an efficient frontier is formed for the mix of asset weights which result in the greatest (expected) portfolio return with least amount of (expected) portfolio volatility.
Technically, any point on that frontier can be considered efficient in the absence of a risk-free rate. When a zero-variance asset (i.e. a risk-free rate of return) is introduced, the optimal point on the frontier becomes less ambiguous. An optimal portfolio is then formed from the "capital allocation line" drawn from the zero-variance asset to the point where it is tangent to the frontier, which is thus called the "tangency portfolio".
There are a few ways to think about this.
As you and @AlRacoon point out, one way might be to consider an investor's risk appetite (e.g., via maximum acceptable volatility).
Another way, as @AlexC indicated, might be to construct a utility curve that represents an investor's risk preferences. The function $\mathcal{U}\left[\mu,\,\sigma \right]$ is then to be maximized. Typically, such a function is concave, e.g.: $\mathcal{U}\left[\mu,\,\sigma \right] = \mathbb{E}\left[\mu \right] -\frac{\sigma^2}{2} $.
A third (non-mutually exclusive) alternative is to introduce the use of benchmarks into the optimization. Mechanically, this is no different from standard approaches except, in this case, the optimization trades off tracking error (i.e., $Abs\left[r_a - r_b \right]$) against excess returns. In this sense, the benchmark is risk-free with respect to itself, and there will almost surely be some combination of constituent assets which achieves a positive active return versus the benchmark. This approach is distinctly advantaged in that no risk-free asset may be needed to identify the tangency portfolio. I.e., the capital allocation line is identifiable by the portfolio with the greatest information ratio (IR) (vice Sharpe ratio). Since IR is typically seen as a proxy for skill, an IR-optimal portfolio could be considered to contain the most signal per unit of noise. I have also seen approaches which optimize for IR versus tracking error (i.e., $\frac{ \mathbb{E}\left[r_a-r_b \right]}{\sigma^2_{a-b}}$) with some very interesting results (i.e., the Kelly Capital Growth Criterion of a single-asset portfolio is nearly identical!). Suitable implementations of the efficient frontier of excess return are outlined in articles from Mathworld.
Given the details of your assignment (i.e., that you are provided with benchmarks), I would attempt method 3 since there is a possibility that the tangency portfolio will be clearly defined. Moreover, the fewer parameters and/or assumptions an approach requires, the more robust it generally is.
I would assess that the L/S TR index is the most appropriate benchmark provided. The individual long-only benchmarks provided in conjunction with the funds' return are -- in my opinion -- mostly worthless as a comparison to L/S funds' performance. Then again, benchmarking is as much art as science; you will find a diversity of opinion regarding benchmark selection.
In the case where the efficient frontier does not intersect with the vertical axis, the tangency portfolio is clearly defined. In this case, the point with the highest IR is optimal.
It may however be that the efficient frontier intersects the vertical axis (i.e., there is a combination of assets which perfectly replicates the index). This will almost surely be the case when the index is considered to be an investable asset and/or when the asset universe is broadly enough defined. In this instance, the tangency portfolio is not defined unless you go back to defining a maximum acceptable risk tolerance and/or utility function.
There may be another special case where there is no combination of assets which exceeds the benchmark's return. In this case as well, the benchmark itself would be the optimal portfolio. |
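None of this needs to stay abstract. Below is a minimal NumPy sketch of method 3, using entirely made-up return data (the fund and benchmark numbers are placeholders, not from the question). The closed-form direction $\Sigma^{-1}\mu$ for the ratio-maximizing weights is a textbook result, applied here to active returns rather than total returns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly returns for 4 funds and a benchmark (60 periods).
# All numbers here are made up purely for illustration.
fund_r = 0.010 + 0.040 * rng.standard_normal((60, 4))
bench_r = 0.008 + 0.030 * rng.standard_normal(60)

# Active (excess-over-benchmark) returns per fund.
active = fund_r - bench_r[:, None]

mu = active.mean(axis=0)              # expected active return
sigma = np.cov(active, rowvar=False)  # tracking-error covariance

# The unconstrained IR-maximizing direction is proportional to Sigma^{-1} mu;
# the IR itself is invariant to positive scaling of the weights.
w_dir = np.linalg.solve(sigma, mu)
ir = (active @ w_dir).mean() / (active @ w_dir).std(ddof=1)

# Normalize for presentation so the weights sum to 1.
w = w_dir / w_dir.sum()
print("weights:", np.round(w, 3))
print("IR of the optimized mix:", round(ir, 4))
```

In practice one would add constraints (long-only, position limits), at which point a numerical optimizer replaces the closed form, but the trade-off being solved is the same.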
The correlation coefficient, \(r\), tells us about the strength and direction of the linear relationship between \(x\) and \(y\). However, the reliability of the linear model also depends on how many observed data points are in the sample. We need to look at both the value of the correlation coefficient \(r\) and the sample size \(n\), together. We perform a hypothesis test of the
"significance of the correlation coefficient" to decide whether the linear relationship in the sample data is strong enough to use to model the relationship in the population.
The sample data are used to compute \(r\), the correlation coefficient for the sample. If we had data for the entire population, we could find the population correlation coefficient. But because we only have sample data, we cannot calculate the population correlation coefficient. The sample correlation coefficient, \(r\), is our estimate of the unknown population correlation coefficient.
The symbol for the population correlation coefficient is \(\rho\), the Greek letter "rho."
\(\rho =\) population correlation coefficient (unknown)
\(r =\) sample correlation coefficient (known; calculated from sample data)
The hypothesis test lets us decide whether the value of the population correlation coefficient \(\rho\) is "close to zero" or "significantly different from zero". We decide this based on the sample correlation coefficient \(r\) and the sample size \(n\).
If the test concludes that the correlation coefficient is significantly different from zero, we say that the correlation coefficient is "significant."
Conclusion: There is sufficient evidence to conclude that there is a significant linear relationship between \(x\) and \(y\) because the correlation coefficient is significantly different from zero.
What the conclusion means: There is a significant linear relationship between \(x\) and \(y\). We can use the regression line to model the linear relationship between \(x\) and \(y\) in the population.
If the test concludes that the correlation coefficient is not significantly different from zero (it is close to zero), we say that the correlation coefficient is "not significant."
Conclusion: "There is insufficient evidence to conclude that there is a significant linear relationship between \(x\) and \(y\) because the correlation coefficient is not significantly different from zero."
What the conclusion means: There is not a significant linear relationship between \(x\) and \(y\). Therefore, we CANNOT use the regression line to model a linear relationship between \(x\) and \(y\) in the population.
If \(r\) is significant and the scatter plot shows a linear trend, the line can be used to predict the value of \(y\) for values of \(x\) that are within the domain of observed \(x\) values.
If \(r\) is not significant OR if the scatter plot does not show a linear trend, the line should not be used for prediction.
If \(r\) is significant and if the scatter plot shows a linear trend, the line may NOT be appropriate or reliable for prediction OUTSIDE the domain of observed \(x\) values in the data.
PERFORMING THE HYPOTHESIS TEST
Null Hypothesis: \(H_{0}: \rho = 0\)
Alternate Hypothesis: \(H_{a}: \rho \neq 0\)
WHAT THE HYPOTHESES MEAN IN WORDS:
Null Hypothesis \(H_{0}\): The population correlation coefficient IS NOT significantly different from zero. There IS NOT a significant linear relationship (correlation) between \(x\) and \(y\) in the population.
Alternate Hypothesis \(H_{a}\): The population correlation coefficient IS significantly DIFFERENT FROM zero. There IS A SIGNIFICANT LINEAR RELATIONSHIP (correlation) between \(x\) and \(y\) in the population.
DRAWING A CONCLUSION:There are two methods of making the decision. The two methods are equivalent and give the same result.
Method 1: Using the \(p\text{-value}\)
Method 2: Using a table of critical values
In this chapter of this textbook, we will always use a significance level of 5%, \(\alpha = 0.05\)
Using the \(p\text{-value}\) method, you could choose any appropriate significance level you want; you are not limited to using \(\alpha = 0.05\). But the table of critical values provided in this textbook assumes that we are using a significance level of 5%, \(\alpha = 0.05\). (If we wanted to use a different significance level than 5% with the critical value method, we would need different tables of critical values that are not provided in this textbook.)
METHOD 1: Using a \(p\text{-value}\) to make a decision
To calculate the \(p\text{-value}\) using LinRegTTEST:
On the LinRegTTEST input screen, on the line prompt for \(\beta\) or \(\rho\), highlight "\(\neq 0\)"
The output screen shows the \(p\text{-value}\) on the line that reads "\(p =\)".
(Most computer statistical software can calculate the \(p\text{-value}\).)
If the \(p\text{-value}\) is less than the significance level (\(\alpha = 0.05\)):
Decision: Reject the null hypothesis.
Conclusion: "There is sufficient evidence to conclude that there is a significant linear relationship between \(x\) and \(y\) because the correlation coefficient is significantly different from zero."
If the \(p\text{-value}\) is NOT less than the significance level (\(\alpha = 0.05\)):
Decision: DO NOT REJECT the null hypothesis.
Conclusion: "There is insufficient evidence to conclude that there is a significant linear relationship between \(x\) and \(y\) because the correlation coefficient is NOT significantly different from zero."
Calculation Notes: You will use technology to calculate the \(p\text{-value}\). The following describes the calculations to compute the test statistic and the \(p\text{-value}\):
The \(p\text{-value}\) is calculated using a \(t\)-distribution with \(n - 2\) degrees of freedom.
The formula for the test statistic is \(t = \frac{r\sqrt{n-2}}{\sqrt{1-r^{2}}}\). The value of the test statistic, \(t\), is shown in the computer or calculator output along with the \(p\text{-value}\). The test statistic \(t\) has the same sign as the correlation coefficient \(r\).
The \(p\text{-value}\) is the combined area in both tails.
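For readers without the TI calculator, the same test statistic and two-tailed \(p\text{-value}\) can be sketched in plain Python using only the standard library; the numerical tail integration below stands in for the calculator's tcdf, and the numbers come from the third exam/final exam example that follows:

```python
from math import sqrt, gamma, pi

r, n = 0.6631, 11                      # third exam / final exam example
df = n - 2
t = r * sqrt(df) / sqrt(1 - r ** 2)    # test statistic, same sign as r

def t_upper_tail(x, df, steps=200000, cutoff=60.0):
    """Area under the t density from x to a large cutoff (trapezoid rule)."""
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    dens = lambda u: c * (1 + u * u / df) ** (-(df + 1) / 2)
    h = (cutoff - x) / steps
    area = (dens(x) + dens(cutoff)) / 2 + sum(dens(x + i * h) for i in range(1, steps))
    return h * area

p = 2 * t_upper_tail(abs(t), df)       # combined area in both tails
print("t =", round(t, 3), " p =", round(p, 3))   # t ≈ 2.658, p ≈ 0.026
```

The \(p\text{-value}\) of about 0.026 matches the LinRegTTest output quoted below; in practice a statistics library's t-distribution routine would replace the hand-rolled integration.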
An alternative way to calculate the \(p\text{-value}\) (\(p\)) given by LinRegTTest is the command 2*tcdf(abs(t),10^99, n-2) in 2nd DISTR.
THIRD-EXAM vs FINAL-EXAM EXAMPLE: \(p\text{-value}\) method
Consider the third exam/final exam example. The line of best fit is: \(\hat{y} = -173.51 + 4.83x\) with \(r = 0.6631\) and there are \(n = 11\) data points. Can the regression line be used for prediction? Given a third exam score (\(x\) value), can we use the line to predict the final exam score (predicted \(y\) value)?
\(H_{0}: \rho = 0\)
\(H_{a}: \rho \neq 0\)
\(\alpha = 0.05\)
The \(p\text{-value}\) is 0.026 (from LinRegTTest on your calculator or from computer software). The \(p\text{-value}\), 0.026, is less than the significance level of \(\alpha = 0.05\).
Decision: Reject the null hypothesis \(H_{0}\).
Conclusion: There is sufficient evidence to conclude that there is a significant linear relationship between the third exam score (\(x\)) and the final exam score (\(y\)) because the correlation coefficient is significantly different from zero.
Because \(r\) is significant and the scatter plot shows a linear trend, the regression line can be used to predict final exam scores.
METHOD 2: Using a table of Critical Values to make a decision
The 95% Critical Values of the Sample Correlation Coefficient Table can be used to give you a good idea of whether the computed value of \(r\)
is significant or not. Compare \(r\) to the appropriate critical value in the table. If \(r\) is not between the positive and negative critical values, then the correlation coefficient is significant. If \(r\) is significant, then you may want to use the line for prediction.
Example \(\PageIndex{1}\)
Suppose you computed \(r = 0.801\) using \(n = 10\) data points. \(df = n - 2 = 10 - 2 = 8\). The critical values associated with \(df = 8\) are \(-0.632\) and \(+0.632\). If \(r <\) negative critical value or \(r >\) positive critical value, then \(r\) is significant. Since \(r = 0.801\) and \(0.801 > 0.632\), \(r\) is significant and the line may be used for prediction. If you view this example on a number line, it will help you.
\(r\) is not significant between \(-0.632\) and \(+0.632\). \(r = 0.801 > +0.632\). Therefore, \(r\) is significant. Figure 12.5.1.
Exercise \(\PageIndex{1}\)
For a given line of best fit, you computed that \(r = 0.6501\) using \(n = 12\) data points and the critical value is 0.576. Can the line be used for prediction? Why or why not?
Answer
If the scatter plot looks linear then, yes, the line can be used for prediction, because \(r >\) the positive critical value.
Example \(\PageIndex{2}\)
Suppose you computed \(r = -0.624\) with 14 data points. \(df = 14 - 2 = 12\). The critical values are \(-0.532\) and \(0.532\). Since \(-0.624 < -0.532\), \(r\) is significant and the line can be used for prediction.
\(r = -0.624 < -0.532\). Therefore, \(r\) is significant. Figure 12.5.2.
Exercise \(\PageIndex{2}\)
For a given line of best fit, you compute that \(r = 0.5204\) using \(n = 9\) data points, and the critical value is \(0.666\). Can the line be used for prediction? Why or why not?
Answer
No, the line cannot be used for prediction, because \(r <\) the positive critical value.
Example \(\PageIndex{3}\)
Suppose you computed \(r = 0.776\) and \(n = 6\). \(df = 6 - 2 = 4\). The critical values are \(-0.811\) and \(0.811\). Since \(-0.811 < 0.776 < 0.811\), \(r\) is not significant, and the line should not be used for prediction.
\(-0.811 < r = 0.776 < 0.811\). Therefore, \(r\) is not significant. Figure 12.5.3.
Exercise \(\PageIndex{3}\)
For a given line of best fit, you compute that \(r = -0.7204\) using \(n = 8\) data points, and the critical value is \(= 0.707\). Can the line be used for prediction? Why or why not?
Answer
Yes, the line can be used for prediction, because \(r <\) the negative critical value.
THIRD-EXAM vs FINAL-EXAM EXAMPLE: critical value method
Consider the third exam/final exam example. The line of best fit is: \(\hat{y} = -173.51 + 4.83x\) with \(r = 0.6631\) and there are \(n = 11\) data points. Can the regression line be used for prediction?
Given a third-exam score (\(x\) value), can we use the line to predict the final exam score (predicted \(y\) value)?
\(H_{0}: \rho = 0\)
\(H_{a}: \rho \neq 0\)
\(\alpha = 0.05\)
Use the "95% Critical Value" table for \(r\) with \(df = n - 2 = 11 - 2 = 9\). The critical values are \(-0.602\) and \(+0.602\). Since \(0.6631 > 0.602\), \(r\) is significant.
Decision: Reject the null hypothesis.
Conclusion: There is sufficient evidence to conclude that there is a significant linear relationship between the third exam score (\(x\)) and the final exam score (\(y\)) because the correlation coefficient is significantly different from zero.
Because \(r\) is significant and the scatter plot shows a linear trend, the regression line can be used to predict final exam scores.
Example \(\PageIndex{4}\)
Suppose you computed the following correlation coefficients. Using the table at the end of the chapter, determine if \(r\) is significant and the line of best fit associated with each r can be used to predict a \(y\) value. If it helps, draw a number line.
\(r = -0.567\) and the sample size, \(n\), is \(19\). The \(df = n - 2 = 17\). The critical value is \(-0.456\). \(-0.567 < -0.456\), so \(r\) is significant.
\(r = 0.708\) and the sample size, \(n\), is \(9\). The \(df = n - 2 = 7\). The critical value is \(0.666\). \(0.708 > 0.666\), so \(r\) is significant.
\(r = 0.134\) and the sample size, \(n\), is \(14\). The \(df = 14 - 2 = 12\). The critical value is \(0.532\). \(0.134\) is between \(-0.532\) and \(0.532\), so \(r\) is not significant.
\(r = 0\) and the sample size, \(n\), is five. No matter what the \(df\)s are, \(r = 0\) is between the two critical values, so \(r\) is not significant.
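The table-lookup decision rule used in these four cases reduces to a one-line comparison; a small sketch (the critical values themselves still come from the 95% table for the appropriate \(df\)):

```python
def r_is_significant(r, critical):
    """Significant iff r lies outside the interval [-critical, +critical]."""
    return abs(r) > critical

# The four cases worked above:
print(r_is_significant(-0.567, 0.456))  # True
print(r_is_significant(0.708, 0.666))   # True
print(r_is_significant(0.134, 0.532))   # False
print(r_is_significant(0.0, 0.532))     # False
```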
Exercise \(\PageIndex{4}\)
For a given line of best fit, you compute that \(r = 0\) using \(n = 100\) data points. Can the line be used for prediction? Why or why not?
Answer
No, the line cannot be used for prediction no matter what the sample size is.
Assumptions in Testing the Significance of the Correlation Coefficient
Testing the significance of the correlation coefficient requires that certain assumptions about the data are satisfied. The premise of this test is that the data are a sample of observed points taken from a larger population. We have not examined the entire population because it is not possible or feasible to do so. We are examining the sample to draw a conclusion about whether the linear relationship that we see between \(x\) and \(y\) in the sample data provides strong enough evidence so that we can conclude that there is a linear relationship between \(x\) and \(y\) in the population.
The regression line equation that we calculate from the sample data gives the best-fit line for our particular sample. We want to use this best-fit line for the sample as an estimate of the best-fit line for the population. Examining the scatter plot and testing the significance of the correlation coefficient helps us determine if it is appropriate to do this.
The assumptions underlying the test of significance are:
There is a linear relationship in the population that models the average value of \(y\) for varying values of \(x\). In other words, the expected value of \(y\) for each particular value lies on a straight line in the population. (We do not know the equation for the line for the population. Our regression line from the sample is our best estimate of this line in the population.)
The \(y\) values for any particular \(x\) value are normally distributed about the line. This implies that there are more \(y\) values scattered closer to the line than are scattered farther away. Assumption (1) implies that these normal distributions are centered on the line: the means of these normal distributions of \(y\) values lie on the line.
The standard deviations of the population \(y\) values about the line are equal for each value of \(x\). In other words, each of these normal distributions of \(y\) values has the same shape and spread about the line.
The residual errors are mutually independent (no pattern).
The data are produced from a well-designed, random sample or randomized experiment.
Figure 12.5.4. The \(y\) values for each \(x\) value are normally distributed about the line with the same standard deviation. For each \(x\) value, the mean of the \(y\) values lies on the regression line. More \(y\) values lie near the line than are scattered further away from the line. Summary
Linear regression is a procedure for fitting a straight line of the form \(\hat{y} = a + bx\) to data. The conditions for regression are:
Linear: In the population, there is a linear relationship that models the average value of \(y\) for different values of \(x\).
Independent: The residuals are assumed to be independent.
Normal: The \(y\) values are distributed normally for any value of \(x\).
Equal variance: The standard deviation of the \(y\) values is equal for each \(x\) value.
Random: The data are produced from a well-designed random sample or randomized experiment.
The slope \(b\) and intercept \(a\) of the least-squares line estimate the slope \(\beta\) and intercept \(\alpha\) of the population (true) regression line. To estimate the population standard deviation of \(y\), \(\sigma\), use the standard deviation of the residuals, \(s\): \(s = \sqrt{\frac{SSE}{n-2}}\). The variable \(\rho\) (rho) is the population correlation coefficient. To test the null hypothesis \(H_{0}: \rho =\)
hypothesized value, use a linear regression t-test. The most common null hypothesis is \(H_{0}: \rho = 0\) which indicates there is no linear relationship between \(x\) and \(y\) in the population. The TI-83, 83+, 84, 84+ calculator function LinRegTTest can perform this test (STATS TESTS LinRegTTest). Formula Review
Least Squares Line or Line of Best Fit:
\[\hat{y} = a + bx\]
where
\[a = y\text{-intercept}\]
\[b = \text{slope}\]
Standard deviation of the residuals:
\[s = \sqrt{\frac{SSE}{n-2}}\]
where
\[SSE = \text{sum of squared errors}\]
\[n = \text{the number of data points}\] |
A free-floating planet candidate from the OGLE and KMTNet surveys
(2017)
Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ...
OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary
(2017)
We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ...
OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing
(2017)
We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ...
OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only
(2018)
We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ...
OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function
(2018)
We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ...
OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy
(2018)
We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge
(2018)
We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ...
Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb
(2018)
We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ...
OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit
(2018)
We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ...
KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion
(2018)
We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ... |
Find the smallest relation containing the relation $\{ (1,2),(2,1),(2,3),(3,4),(4,1) \}$ that is:
Reflexive and transitive
Reflexive, symmetric and transitive
Well my first attempt:
Reflexive: $ S_1 = \{ (1,1),(2,2),(3,3),(4,4) \}$
Symmetric: $ S_2=\{ (3,2),(4,3),(1,4) \}$
Transitive: $S_3 = \,?$ This is where I'm stuck.
So that $S_1\cup S_2 \cup S_3 $ would be my equivalence relation?
Also, when testing for transitivity, what combinations do we test? If we take $(1,2) \land (2,3)\land(3,4) \rightarrow(1,3)$, must it also be done for the converse, starting with $(2,1)$ rather than $(1,2)$? It seems that there are many combinations of $x,y$ that need to be tested. Is this correct? In fact, is my attempt correct to begin with?
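A brute-force check (not part of the original question, but useful for verifying an attempt by hand) is to compute the closures directly. Here both the reflexive-transitive closure and the equivalence closure come out to all $16$ pairs of $\{1,2,3,4\}\times\{1,2,3,4\}$, because the underlying digraph is strongly connected ($1\to2\to3\to4\to1$):

```python
R = {(1, 2), (2, 1), (2, 3), (3, 4), (4, 1)}
elems = {1, 2, 3, 4}

def closure(rel, reflexive=False, symmetric=False):
    rel = set(rel)
    if reflexive:
        rel |= {(a, a) for a in elems}
    if symmetric:
        rel |= {(b, a) for (a, b) in rel}
    # Transitive closure: keep adding (a, c) whenever (a, b) and (b, c) are present.
    while True:
        new = {(a, c) for (a, b) in rel for (b2, c) in rel if b == b2}
        if new <= rel:
            return rel
        rel |= new

print(len(closure(R, reflexive=True)))                  # 16: reflexive + transitive
print(len(closure(R, reflexive=True, symmetric=True)))  # 16: equivalence closure
```

Note the order of operations: for the equivalence closure, take the symmetric closure first and then iterate transitivity, since the transitive closure of a symmetric relation stays symmetric.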
I have this exercise: Let $f\in L^2(\mathbb{R}^n)$. 1- Prove that the equation $\Delta u - u = \dfrac{\partial f}{\partial x_i}$ admits a unique solution $u \in H^1(\mathbb{R}^n)$.
2- Prove the existence of a constant $C \geq 0$ such that $||u||_{H^1} \leq C ||f||_{L^2}$.
3- Prove the existence of a constant $M \geq 0$ such that for all $u \in H^2(\mathbb{R}^n)$ we have $||u||_{H^2} \leq M (||u||_{L^2}+||\Delta u||_{L^2})$.
For question 1, I suppose the existence of two solutions $u_1$ and $u_2$, and I set $w=u_1-u_2$, a solution of $\Delta w - w = 0$; my problem is: why does this imply that $w=0$? And how do we prove existence? For questions 2 and 3, I don't have any ideas. Thanks for the help.
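Not part of the original thread, but a standard route for questions 1 and 2 (a sketch, assuming the Fourier-transform characterization of the $H^1$ norm may be used): taking Fourier transforms turns the PDE into an algebraic equation,

```latex
(-|\xi|^2 - 1)\,\hat u(\xi) = i\xi_i\,\hat f(\xi)
\quad\Longrightarrow\quad
\hat u(\xi) = -\,\frac{i\xi_i\,\hat f(\xi)}{1+|\xi|^2},
\qquad
(1+|\xi|^2)^{1/2}\,|\hat u(\xi)| = \frac{|\xi_i|\,|\hat f(\xi)|}{(1+|\xi|^2)^{1/2}} \le |\hat f(\xi)|,
```

using $|\xi_i|\le(1+|\xi|^2)^{1/2}$; Plancherel then gives $\|u\|_{H^1}\le\|f\|_{L^2}$. The same identity answers the uniqueness question: $\Delta w - w = 0$ gives $(1+|\xi|^2)\hat w = 0$, hence $\hat w = 0$ and $w = 0$.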
(a) Prove that $T:V\to V$ is a linear transformation.
To prove $T$ is a linear transformation, we need to show the following properties.
For any $X, Y\in V$, we have $T(X+Y)=T(X)+T(Y)$.
For any $X\in V, r\in \R$, we have $T(rX)=rT(X)$.
To check condition 1, let $X, Y \in V$. Then we have\begin{align*}T(X+Y)&=A(X+Y)-(X+Y)A && \text{by definition of $T$}\\&=AX+AY-XA-YA\\&=AX-XA+AY-YA\\&=T(X)+T(Y) && \text{by definition of $T$}.\end{align*}Hence condition 1 is met.
To verify condition 2, let $X\in V, r\in \R$.Then we have\begin{align*}T(rX)&=A(rX)-(rX)A && \text{by definition of $T$}\\&=rAX-rXA && \text{$r$ is a scalar}\\&=r(AX-XA)\\&=rT(X) && \text{by definition of $T$}.\end{align*}So condition 2 is also met, hence $T$ is a linear transformation.
(b) Find the determinant of the matrix representation of $T$.
Let $B$ be a basis of the vector space $V$ and let $P$ be the matrix of the linear transformation $T$ with respect to $B$. We prove that the determinant of $P$ is zero.
Let $I$ be the $n\times n$ identity matrix. Then we have\begin{align*}T(I)=AI-IA=A-A=O,\end{align*}where $O$ is the $n\times n$ zero matrix. Since also $T(O)=O$ and $I\neq O$, the two distinct matrices $I$ and $O$ have the same image, so the linear transformation $T$ is not injective, hence $P$ is a singular matrix.
Let us explain the details.Let $v=[I]_B \in \R^{n^2}$ be the coordinate vector of $I$ with respect to the basis $B$.Then since $I\neq O$, the vector $v$ is not zero.Then $T(I)=O$ implies that\[Pv=0\in \R^{n^2}.\]As the nonzero vector $v$ is a solution of the matrix equation $Px=0$, the matrix $P$ is singular.
Since $P$ is singular, the determinant of $P$ is zero.
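A quick numerical sanity check (not part of the proof): with column-stacking, the identity $\operatorname{vec}(AXB)=(B^T\otimes A)\operatorname{vec}(X)$ gives the matrix of $T$ as $I\otimes A - A^T\otimes I$, whose determinant is numerically zero for a random $A$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))

# Matrix representation of T(X) = AX - XA on column-stacked vec(X):
# vec(AX) = (I kron A) vec(X),  vec(XA) = (A^T kron I) vec(X).
P = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))

print("det(P) =", np.linalg.det(P))                         # numerically ~0
print("vec(I) in kernel:", np.allclose(P @ np.eye(n).flatten(), 0))  # True
```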
Determine linear transformation using matrix representationLet $T$ be the linear transformation from the $3$-dimensional vector space $\R^3$ to $\R^3$ itself satisfying the following relations.\begin{align*}T\left(\, \begin{bmatrix}1 \\1 \\1\end{bmatrix} \,\right)=\begin{bmatrix}1 \\0 \\1 […]
Differentiation is a Linear TransformationLet $P_3$ be the vector space of polynomials of degree $3$ or less with real coefficients.(a) Prove that the differentiation is a linear transformation. That is, prove that the map $T:P_3 \to P_3$ defined by\[T\left(\, f(x) \,\right)=\frac{d}{dx} f(x)\]for any $f(x)\in […] |
I have it set up so I place my mouse in the center of where I want the spiral to be, then I press enter to start the program, then define the radius initially and how much it grows each time it goes around.
I use \$\sin(\theta)\$ = opposite / hypotenuse and \$\cos(\theta)\$ = adjacent / hypotenuse to find how much my coordinates change based on the degree and the radius of my circle, which itself gradually increases to create a spiral. There is also some printing at the end that I added just to help with debugging. As simple as I think something is going to be, there are always problems.
Converting to int is because the NumPy data otherwise can't be operated on by the rest of the code.
from numpy import sin, cos, pi
import pyautogui as cc
import time

# (x-h)^2 + (y-k)^2 = r^2
# knowing radius and degree, r*sin(deg*pi/180) = rise || cos = run
input('Enter to Begin\n-->')
radius = int(input('What do you want your beginning radius to be?\n-->'))
rate = int(input('How much do you want it to increase by each cycle?\n-->'))
h, k = cc.position()
degree = 0
x = h+radius*(int(cos(degree*pi/180)*10**5)/10**5)
y = k+radius*(int(sin(degree*pi/180)*10**5)/10**5)
cc.moveTo(x, y)
cc.mouseDown()
while True:
    degree += 1
    radius += rate / 360
    x = h+radius*(int(cos(degree*pi/180)*10**5)/10**5)
    y = k+radius*(int(sin(degree*pi/180)*10**5)/10**5)
    cc.moveTo(x, y)
    print('Cords: '+str(x)+'||'+str(y))
    print('radius: '+str(radius))
    print('degree: '+str(degree))
    print()
All feedback is appreciated, but I'm really wondering how I could make this faster in Python. |
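One possible speed-up, sketched below: precompute all spiral coordinates in a single vectorized NumPy call instead of computing sin/cos one degree at a time inside the loop. The center and parameters here are made-up example values; feeding the resulting points to pyautogui afterwards would be unchanged:

```python
import numpy as np

def spiral_points(h, k, radius, rate, degrees):
    """All points of the spiral at once: radius grows by `rate` per 360 degrees."""
    deg = np.arange(degrees + 1)
    r = radius + rate * deg / 360.0
    theta = np.deg2rad(deg)
    return h + r * np.cos(theta), k + r * np.sin(theta)

xs, ys = spiral_points(500, 400, 50, 10, 720)  # two full turns from (500, 400)
print(xs[0], ys[0])                            # starts at (h + radius, k) = 550.0 400.0
```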
This week¶
This week's highlight is a paper on imitation learning: Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling, chosen again for pragmatic reasons. The problem my team is currently working on has both reasons for wanting high sample efficiency: training would be prohibitively slow without something to kickstart it, and actions taken in the real world can get expensive.
I know I said I'd be experimenting with shorter, more bite-sized posts, but... next time. (If you want that, you can just stop reading after the "Key intuition" section.)
The problem¶
Learning from demonstrations is more difficult than it may seem at first glance. The trouble mainly stems from covariate shift: the input distribution your agent will see in production is very likely to be different than that encountered during training. Many machine learning algorithms have this problem, reinforcement learning algorithms included, but imitation learning has it especially bad, for a simple reason: the expert demonstrations you are attempting to follow necessarily explore a very small subset of the state space. The whole
point of them is to stay on good trajectories, meaning bad trajectories never get explored.
This causes two issues:
The agent can't in general figure out how to get back into the subset of state space where the expert demonstrations apply, even if it gets only slightly off-course, and
Value functions for states and actions are affected by unseen states, making it very likely that the agent will wander off as soon as it's allowed.
Key intuition¶
The authors solve this problem by pre-training with supervised learning using a loss function that drives down the value of all states outside of those explored in the expert demonstrations $U$, by an amount proportional to their Euclidean distance from the closest state in $U$. In their own words:
Consider a state $s$ in the demonstration and its nearby state $\tilde{s}$ that is not in the demonstration. The key intuition is that $\tilde{s}$ should have a lower value than $s$, because otherwise $\tilde{s}$ likely should have been visited by the demonstrations in the first place. If a value function has this property for most of the pair $(s,\tilde{s})$ of this type, the corresponding policy will tend to correct its errors by driving back to the demonstration states because the demonstration states have locally higher values.
And Figure 1 is a nice visual demonstration:
Value Iteration with Negative Sampling (VINS)¶
Into the weeds now.
Self-correctable policy¶
The first bit of their algorithm is the definition of their self-correcting policy. It's essentially a formalization of what we said above about $s$ and $\tilde{s}$.
If $s \in U$ (if $s$ is in the expert demonstrations), then $$V(s) = V^{\pi_e}(s) \pm \delta_V$$ ("just what the value would be in the expert demonstrations, plus some error").
But if $s \not\in U$, $$V(s) = V^{\pi_e}(\Pi_U(s)) - \lambda \|s-\Pi_U(s)\| \pm \delta_V$$ (where $\Pi_U$ gives the closest $s \in U$, so $V(s)$ is "the value of the closest $s \in U$,
minus the distance to that $s \in U$, plus some error")
Then the induced policy from this value function is $$\pi(s) \triangleq \underset{a: \|a-\pi_{BC}(s)\|\le \zeta}{\operatorname{argmax}} ~V(M(s, a))$$
Where $M(s,a)$ is a learned dynamical model of the environment that gives the next state given the current state and action. $\pi_{BC}(s)$ is the "behavioral clone" policy from the expert demonstrations.
RL algorithm¶
To actually achieve $V(M(s,a))$ with the necessary properties, they select a state $s$ from the demonstrations, perturb it a bit to get $\tilde{s}$ nearby, and use the original state $s$ to approximate $\Pi_U(\tilde{s})$ in the following loss function.$$\mathcal{L}_{ns}(\phi)= \mathbf{E}_{s \sim \rho^{\pi_e}, \tilde{s} \sim perturb(s)} \left(V_{\bar \phi}(s) - \lambda \|s-\tilde{s}\|- V_\phi(\tilde{s}) \right)^2$$
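A toy NumPy sketch of this loss (my own simplification of the paper's setup: a fixed linear value function stands in for both the learned network $V_\phi$ and the target $V_{\bar\phi}$, and "perturb" is plain Gaussian noise):

```python
import numpy as np

rng = np.random.default_rng(0)

def negative_sampling_loss(V, demo_states, lam=1.0, noise=0.1):
    """Monte-Carlo estimate of L_ns: a perturbed state s_tilde should be valued
    lambda * ||s - s_tilde|| below its demonstration neighbour s."""
    s = demo_states
    s_tilde = s + noise * rng.standard_normal(s.shape)
    dist = np.linalg.norm(s - s_tilde, axis=1)
    residual = V(s) - lam * dist - V(s_tilde)
    return float(np.mean(residual ** 2))

# Toy check with a hypothetical linear value function V(s) = w . s.
w = np.array([1.0, -2.0])
V = lambda states: states @ w
demo = rng.standard_normal((128, 2))  # stand-in for demonstration states
loss = negative_sampling_loss(V, demo)
print(loss)
```

In the paper this term is minimized by gradient descent on $\phi$ alongside the usual Bellman-style objectives; here it is evaluated once just to make the shape of the penalty concrete.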
Finally, here's the algorithm that uses this and the earlier policy definition:
Parting thoughts¶
I thought it was quite strange that they learned $V(s)$ and a dynamical model $M(s,a)$, and then used $V(M(s,a))$ in the algorithm. I thought, "Why not just learn $Q$?" The answer was given in their Section A appendix, and was quite interesting. I'm not sure it applies to our case, but it's important. TL;DR: $Q(s,a)$ learned from demonstrations alone is degenerate, because there's always a $Q$ that perfectly matches the demonstrations and doesn't depend at all on $a$.
One of my coworkers (and upcoming Computable author!) wondered to me if the induced policy could be made explicit, by explicitly training a policy network to bring the agent back into safe territory. It could be trained with gradient descent, because $V(M(s,a))$ are just networks, and the technique for training deterministic policies just follows the gradient of the $Q$ function. I wonder too.
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.
bidibangboom Posts: 34 Joined: May 10th, 2019, 6:38 pm
triple poster
Code: Select all
#C A period 8 oscillator that was found in 1972.
#C http://www.conwaylife.com/wiki/index.php?title=Roteightor
x = 14, y = 14, rule = 23/3
bo12b$b3o8b2o$4bo7bob$3b2o5bobob$10b2o2b2$6b2o6b$5b2obo5b$6b3o5b$2b2o
3b3o4b$bobo5b2o3b$bo7bo4b$2o8b3ob$12bo!
fun
bidibangboom Posts: 34 Joined: May 10th, 2019, 6:38 pm
someone who likes this
Code: Select all
#C A period 8 oscillator that was found in 1972.
#C http://www.conwaylife.com/wiki/index.php?title=Roteightor
x = 14, y = 14, rule = 23/3
bo12b$b3o8b2o$4bo7bob$3b2o5bobob$10b2o2b2$6b2o6b$5b2obo5b$6b3o5b$2b2o
3b3o4b$bobo5b2o3b$bo7bo4b$2o8b3ob$12bo!
fun
bidibangboom Posts: 34 Joined: May 10th, 2019, 6:38 pm
someone who probably broke the record for the most posts in a row
Code: Select all
#C A period 8 oscillator that was found in 1972.
#C http://www.conwaylife.com/wiki/index.php?title=Roteightor
x = 14, y = 14, rule = 23/3
bo12b$b3o8b2o$4bo7bob$3b2o5bobob$10b2o2b2$6b2o6b$5b2obo5b$6b3o5b$2b2o
3b3o4b$bobo5b2o3b$bo7bo4b$2o8b3ob$12bo!
fun
bidibangboom Posts: 34 Joined: May 10th, 2019, 6:38 pm
someone who is still doing this
Code: Select all
#C A period 8 oscillator that was found in 1972.
#C http://www.conwaylife.com/wiki/index.php?title=Roteightor
x = 14, y = 14, rule = 23/3
bo12b$b3o8b2o$4bo7bob$3b2o5bobob$10b2o2b2$6b2o6b$5b2obo5b$6b3o5b$2b2o
3b3o4b$bobo5b2o3b$bo7bo4b$2o8b3ob$12bo!
fun
bidibangboom Posts: 34 Joined: May 10th, 2019, 6:38 pm
someone who will stop now
Code: Select all
#C A period 8 oscillator that was found in 1972.
#C http://www.conwaylife.com/wiki/index.php?title=Roteightor
x = 14, y = 14, rule = 23/3
bo12b$b3o8b2o$4bo7bob$3b2o5bobob$10b2o2b2$6b2o6b$5b2obo5b$6b3o5b$2b2o
3b3o4b$bobo5b2o3b$bo7bo4b$2o8b3ob$12bo!
fun
fluffykitty
Posts: 638 Joined: June 14th, 2014, 5:03 pm
Someone who posted 7 times in a row
I like making rules
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
Someone who responded to someone who was wrong about breaking the record for most posts in a row (see here
).
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
Moosey Posts: 2486 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact:
A person who is being responded to by a person who is writing what this link
goes to
I am a prolific creator of many rather pathetic googological functions
My CA rules can be found here
Also, the tree game
Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?"
testitemqlstudop Posts: 1186 Joined: July 21st, 2016, 11:45 am Location: in catagolue Contact:
Somebody who used a self-reference instead of a previous-person-reference when the reference relations are relaxed.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
Somebody who I replied to.
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
testitemqlstudop Posts: 1186 Joined: July 21st, 2016, 11:45 am Location: in catagolue Contact:
Somebody who made the same aforementioned error that I attempted to correct by the second-previous post.
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
this post before reading the post itself.
Last edited by PkmnQ
on May 19th, 2019, 5:42 am, edited 1 time in total.
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Someone who can't use the new paste rle feature of LifeViewer.
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Someone who likes to size stack every now and then.
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Someone who has posted 4 times in a row, including this one
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Someone who is going for 12 posts (5)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Someone who is wondering why he did this (6)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Someone who is already past the halfway mark (7)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Someone who is running out of descriptions for himself (8)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Alternating rule (9)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Someone who wants to have a profile picture (10)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Someone who is almost done (11)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Someone who has posted
12
times in a row and is now done
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf
testitemqlstudop Posts: 1186 Joined: July 21st, 2016, 11:45 am Location: in catagolue Contact:
PkmnQ wrote:this post before reading the post itself.
Someone who either doesn't have or chooses to ignore the rules of English
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
A person that is trick work to.
Airy Clave White It Nay
Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b
o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2) |
In my last post, you saw that a 90° angle is called a right angle. This is the angle made by the two lines at the corner of a square. Now a triangle is a shape that has three angles inside. A basic property of any triangle is that all the internal angles add up to 180°:
But this post is about triangles where one of its angles is 90°, that is a right angle. Such triangles are called right triangles.
Below is a right triangle where I have labelled the sides as
a, b, and c. Side c is the side opposite the right angle. This side is called the hypotenuse. The hypotenuse is always the longest side of any right triangle.
Right triangles have another famous property that relates the lengths of the three sides. This property is called the Pythagorean Theorem, named after the Greek mathematician Pythagoras, who lived 570 to 495 BCE. The theorem was used before his time, but he is credited with providing the first proof. Given the sides as labelled above, the following is true for any size right triangle:
c² = a² + b²
This means that if you know any two sides of a right triangle, you can calculate the third side using this equation. Let’s do an example:
So we now know that
c² = 4² + 3² = 16 + 9 = 25. You can now find c by taking the square root of both sides of the equation. Square roots have been covered in previous posts:
In my posts on square roots, I did say that taking a square root results in two solutions, one positive and one negative. But since we are solving for a physical length, we can ignore the negative solution. So the hypotenuse for this triangle is 5.
Not all right triangle problems work out so well. Most square roots are decimal numbers and you have to either round the number or leave the answer as a square root.
The Pythagorean Theorem can also be used to find one of the non-hypotenuse sides as well:
You can rearrange the theorem’s equation to solve for the unknown side:\[\begin{array}{l} c^{2} = a^{2} + b^{2} \;\Longrightarrow\; b^{2} = c^{2} - a^{2} \\ b^{2} = 9^{2} - 5^{2} = 81 - 25 = 56 \\ \sqrt{b^{2}} = \sqrt{56} \;\Longrightarrow\; b \approx 7.48 \end{array}\]
So
b approximately equals 7.48. That is what the symbol “≈” means. The exact answer cannot be written as a decimal number as the decimal part goes on forever. |
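Both calculations can be written in a couple of lines of code (Python is my choice here, not part of the original post):

```python
import math

def hypotenuse(a, b):
    """c = sqrt(a^2 + b^2); only the positive root, since lengths are positive."""
    return math.sqrt(a**2 + b**2)

def other_side(c, a):
    """Rearranged form: b = sqrt(c^2 - a^2), for a non-hypotenuse side."""
    return math.sqrt(c**2 - a**2)
```

For the two worked examples above, `hypotenuse(4, 3)` gives 5, and `other_side(9, 5)` gives approximately 7.48.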
Basically 2 strings, $a>b$, which go into the first box and do division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, otherwise inputs $b,r$ into the division box..
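The loop being described is Euclid's algorithm; a minimal sketch of my own, noting that each new round feeds $(b, r)$ back into the division box:

```python
def gcd(a, b):
    """Euclid's algorithm: repeat a = b*q + r, stopping when r == 0."""
    while True:
        q, r = divmod(a, b)  # the "division box": a = b*q + r, 0 <= r < b
        if r == 0:
            return b         # done: the answer is b
        a, b = b, r          # feed (b, r) back into the division box
```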
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
I've created a figure that displays a "timeline" that contains specific points as well as ranges. The only thing that I want to add (but don't know how) are dots/circles/squares at the points (at 0, theta * s, theta*s+w and z). Also, I would like to center the timeline. When I wrap the tikzpicture in the figure environment, no output is created.
This is my code:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{arrows,positioning,decorations.pathreplacing}
\begin{document}
\begin{tikzpicture}[
    every node/.style = {align=center},
    Line/.style = {-angle 90, shorten >=2pt},
    Brace/.style args = {#1}{semithick, decorate,
        decoration={brace,#1,raise=20pt,
        pre=moveto,pre length=2pt,post=moveto,post length=2pt,}},
    ys/.style = {yshift=#1}
    ]
\linespread{0.8}
\coordinate (a) at (0,0);
\coordinate[right=30mm of a] (b);
\coordinate[right=30mm of b] (c);
\coordinate[right=20mm of c] (d);
\coordinate[right=24mm of d] (e);
\coordinate[right=5mm of e] (f);
\coordinate[right=22mm of f] (g);
\draw[Line] (a) -- (g) node[right] {x};
\draw[Brace=mirror] (a) -- node[below=20pt] {Compensation} (b);
\draw[Brace=mirror] (b) -- node[below=20pt] {Gift} (d);
\draw ([ys=0mm] a) node[below] {0} -- (a);
\draw ([ys=0mm] b) node[below] {$\theta s$} -- (b);
\draw[Line] ([ys=10mm] c) node[above] {$\delta$} -- (c);
\draw[Line] ([ys=10mm] d) node[above] {$x(\delta)$} -- (d);
\draw ([ys=0mm] d) node[below] {$\theta s + w$} -- (d);
\draw ([ys=0mm] f) node[below] {z} -- (f);
\end{tikzpicture}
\end{document}
This is my current output: |
Introduction to Optical Prisms
Prisms are solid glass optics that are ground and polished into geometrical and optically significant shapes. The angle, position, and number of surfaces help define the type and function. One of the most recognizable uses of prisms, as demonstrated by Sir Isaac Newton, consists of dispersing a beam of white light into its component colors (Figure 1). This application is utilized by refractometer and spectrographic components. Since this initial discovery, prisms have been used in "bending" light within a system, "folding" the system into a smaller space, changing the orientation (also known as handedness or parity) of an image, as well as combining or splitting optical beams with partial reflecting surfaces. These uses are common in applications with telescopes, binoculars, surveying equipment, and a host of others.
Figure 1: Dispersion through a Prism
A notable characteristic of prisms is their ability to be modeled as a system of plane mirrors in order to simulate the reflection of light within the prism medium. Replacing mirror assemblies is perhaps the most useful application of prisms, since they both bend or fold light and change image parity. Often, multiple mirrors are needed to achieve results similar to a single prism. Therefore, the substitution of one prism in lieu of several mirrors reduces potential alignment errors, increasing accuracy and minimizing the size and complexity of a system.
PRISM MANUFACTURING
Before delving into the theory behind prisms, consider their manufacturing process. In order to be used successfully in most applications, prisms must be manufactured with very strict tolerances and accuracies. Due to the variability in shape, size, and, most importantly, the number of surfaces, a large-scale automated process for prism manufacturing is quite infeasible. In addition, most high precision prisms tend to be made in low quantities, meaning an automated process would be unnecessary.
First, a block of glass (known as a "blank") of a specified grade and glass type is obtained. This block is then ground, or generated, by a metal diamond bonded wheel into a near-finished product. A majority of the glass is removed quickly in this stage resulting in flat, but still coarse surfaces (Figure 2a). At this point, the dimensions of the prism-to-be are very close to the desired specifications. Next is a fine grinding process that removes sub-surface breaks from the surface; this stage is known as smoothening. Scratches left from the first stage are removed in the second stage (Figure 2b). After smoothening, the glass surfaces should appear cloudy and opaque. In both the first two stages, the prism surface must be wet in order to expedite glass removal and prevent overheating of the glass itself.
The third stage involves polishing the prism to the correctly specified surface flatness. In this stage, the glass is rubbed against a polyurethane polisher wet with "slurry," an optical polishing compound typically comprised of water mixed with pumice or cerium oxide (Figure 2c). The exact duration of the polishing stage is highly dependent on the surface specifications required. Once polishing is completed, chamfering can begin. In this fourth stage, the edges of the prism are subjected to a spinning diamond plate in order to slightly dull the sharp edges it obtains throughout the aforementioned steps (Figure 2d). After chamfering, the finished prism is cleaned, inspected (via both manual and automated means), and coated with anti-reflection (AR) and/or metallic mirror coatings, if necessary, to further aid in overall transmission and/or reflection. Though the process is much more involved and may require more iterations or operations due to the number of surfaces on a prism, the Generating, Smoothening, Polishing and Chamfering Stages are roughly outlined in Figures 2a - 2d.
Figure 2a: Prism Manufacturing Process: Generating Stage Figure 2b: Prism Manufacturing Process: Smoothening Stage Figure 2c: Prism Manufacturing Process: Polishing Stage Figure 2d: Prism Manufacturing Process: Chamfering Stage
Throughout the manufacturing of a prism, it is necessary to continually adjust and secure each surface being worked on. Securing a prism in place involves one of two methods: blocking and contacting. Blocking entails arranging the prism in a metal tool with hot wax. Contacting, on the other hand, is an optical bonding process done at room temperature where two clean glass surfaces are fastened together simply through their Van Der Waals interaction. Contacting is utilized if high precision tolerances are required because it does not require additional adjustments to be made during the Generating, Smoothening, or Polishing Stages to account for the wax thickness between the prism surface and the contact block.
During every stage of the prism manufacturing process, from generating to blocking and contacting, a skilled optician is required to manually inspect and adjust the prism surfaces being worked on. As a result, it is extremely labor intensive and requires experience and skill in order to complete. The entire process often requires a significant amount of time, work, and concentration.
THEORY: LIGHT AND REFRACTION
Understanding how a prism works is key to deciding which type of prism fits best for a specific application. In order to do so, it is important to first understand how light interacts with an optical surface. This interaction is described by Snell's Law of Refraction:
(1)$$ n_1 \sin \! \left(\theta_1\right)=n_2 \sin \! \left(\theta_2\right) $$
Where $n_1$ is the index of the incident medium, $\theta_1$ is the angle of the incident ray, $n_2$ is the index of the refracted/reflected medium, and $\theta_2$ is the angle of the refracted/reflected ray. Snell's Law describes the relationship between the angles of incidence and transmission when a ray travels between multiple media (Figure 3).
Figure 3: Snell's Law and Total Internal Reflection
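As a quick numerical sketch of Equation 1 (my own illustration, not from the article):

```python
import math

def refraction_angle(n1, theta1_deg, n2):
    """Solve n1*sin(theta1) = n2*sin(theta2) for theta2 (degrees).

    Returns None when Snell's law has no real solution, i.e. the ray
    is totally internally reflected.
    """
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # beyond the critical angle: total internal reflection
    return math.degrees(math.asin(s))
```

For example, a ray going from glass ($n_1 = 1.5$) into air at 30° refracts at about 48.6°, while the same interface at 60° gives no real solution: the ray is totally internally reflected.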
A prism is notable for its ability to reflect the ray path without the need for a special coating, such as that required when using a mirror. This is achieved through a phenomenon known as total internal reflection (TIR). TIR occurs when the incident angle (angle of the incident ray measured from normal) is higher than the critical angle $\theta_c$:
(2)$$ \sin\!\left(\theta_c\right)=\frac{n_2}{n_1} $$
Where $n_1$ is the index of refraction for the medium where the ray originates, and $n_2$ is the index of refraction for the medium where the ray exits. It is important to note that TIR only occurs when light travels from a high index medium to a low index medium.
At the critical angle, the angle of refraction is equal to 90°. Referencing Figure 3, notice that TIR occurs only if θ exceeds the critical angle. If the angle is below the critical angle, then transmission will occur along with reflection as given by Snell's Law. If a prism face does not meet TIR specifications for the desired angle(s), then a reflective coating must be used. This is why some applications require coated versions of a prism that would otherwise work well uncoated in another application.
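A small sketch of the critical-angle condition (again my own illustration, using the convention that TIR requires the ray to start in the higher-index medium, so $\sin\theta_c = n_2/n_1$ with $n_1$ the originating medium):

```python
import math

def critical_angle_deg(n1, n2):
    """Critical angle for a ray going from index n1 into index n2.

    TIR is only possible when n1 > n2; otherwise there is no critical
    angle and the function returns None.
    """
    if n1 <= n2:
        return None
    return math.degrees(math.asin(n2 / n1))
```

For a typical crown glass ($n \approx 1.5$) against air, this gives a critical angle of about 41.8°, which is why 45° faces of right angle prisms work uncoated.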
THEORY: IMAGE HANDEDNESS/PARITY
A significant aspect of imaging through a prism is image handedness (parity), otherwise referred to as the orientation of the image. This is introduced every time the ray path hits a plane mirror, any flat reflective surface, or a prism surface at an angle that produces TIR. There are two types of handedness: right and left. Right handedness (Figure 4) describes the case where an image undergoes an even number of reflections, resulting in the ability to read it clearly (assuming the image is text) in at least one position. Left handedness (Figure 5) describes the case where the image undergoes an odd number of reflections, leading to an irregularity in the position of the image that is comparable to what one sees in a mirror.
Figure 4: Right Handedness or Even Parity Figure 5: Left Handedness or Odd Parity
In addition to parity, there are three types of image change (Figure 6). An inversion is an image-flip over a horizontal axis, whereas a reversion is an image-flip over a vertical axis. When both are done at the same time, an image rotation of 180° occurs and there is no change in parity. Another way to think of parity is defining it as being determined by looking back against the propagation direction towards either the object or image in its optical space (Figure 7).
Figure 6: Inversion (Top), Reversion (Middle), and Rotation (Bottom) Figure 7: How Parity is Determined
When using a prism, consider the following four points:
Image Handedness Changes Every Time an Image is Reflected.
Any Point along the Plane of the Reflecting Surface is Equidistant from the Object and Its Image.
Snell's Law Can Be Applied to All Surfaces.
When Testing for Image Handedness/Parity, It is Best to Use a Non-Symmetrical Letter Such as R, F, or Q. Avoid Using Letters Like X, O, A, etc.
TYPES OF PRISMS
There are four main types of prisms: dispersion prisms, deviation (reflection) prisms, rotation prisms, and displacement prisms. Deviation, displacement, and rotation prisms are common in imaging applications; dispersion prisms are strictly made for dispersing light, and are therefore not suitable for any application requiring quality images.
Dispersion Prisms
Prism dispersion is dependent upon the geometry of the prism and its index dispersion curve, based on the wavelength and index of refraction of the prism substrate. The angle of minimum deviation dictates the smallest angle between the incident ray and the transmitted rays (Figure 8). The green wavelength of light is deviated more than red, and blue more than both red and green; red is commonly defined as 656.3nm, green as 587.6nm, and blue as 486.1nm.
Figure 8: Dispersion through a Prism Deviation, Rotation, and Displacement Prisms
Prisms that deviate the ray path, rotate the image, or simply displace the image from its original axis are helpful in many imaging systems. Ray deviations are usually done at angles of 45°, 60°, 90°, and 180°. This helps to condense system size or adjust the ray path without affecting the rest of the system setup. Rotation prisms, such as dove prisms, are used to rotate an image after it is inverted. Displacement prisms maintain the direction of the ray path, yet adjust its relation to the normal.
Prism Selection Guide
To aid in selecting the best prisms for specific applications, consider the following selection guide of the most commonly used in the optics, imaging, and photonics industries.
Prism Selection Guide
Equilateral Prisms - Dispersion
Littrow Prisms - Dispersion, Deviation
Right Angle Prisms - Deviation, Displacement
Penta Prisms - Deviation
Half-Penta Prisms - Deviation
Amici Roof Prisms - Deviation
Schmidt Prisms - Deviation
Retroreflectors (Trihedral Prisms) - Deviation, Displacement
Wedge Prisms - Deviation, Rotation
Rhomboid Prisms - Displacement
Dove Prisms - Rotation
Anamorphic Prism Pairs - Expansion
Light Pipe Homogenizing Rods - Homogenization
Tapered Light Pipe Homogenizing Rods - Homogenization
This introduction gave a look into the manufacturing process and the theory associated with prisms, as well as a selection to help you find the best prism for your application. To learn some examples of prism applications, view Optical Prism Application Examples. |
ISSN:
1556-1801
eISSN:
1556-181X
All Issues
Networks & Heterogeneous Media
March 2008, Volume 3, Issue 1
Abstract:
The classical Lighthill-Whitham-Richards (LWR) kinematic traffic model is extended to a unidirectional road on which the maximum density $a(x)$ represents road inhomogeneities, such as variable numbers of lanes, and is allowed to vary discontinuously. The car density $\phi = \phi(x,t)$ is then determined by the following initial value problem for a scalar conservation law with a spatially discontinuous flux:
$\phi_t + \left(\phi\, v(\phi/a(x))\right)_x = 0, \quad \phi(x,0)=\phi_0(x),\quad x \in \mathbb{R},\quad t\in (0,T),$ (*)
where $v(z)$ is the velocity function.
We adapt to (*) a new notion of entropy solutions (Bürger, Karlsen, and Towers [Submitted, 2007]), which involves a Kružkov-type entropy inequality based on a specific flux connection $(A,B)$, and which we interpret in terms of traffic flow. This concept is consistent with both the driver's ride impulse and the desire of drivers to speed up.
We prove that entropy solutions of type $(A,B)$ are unique. This solution concept also leads to simple, transparent, and unified convergence proofs for numerical schemes. Indeed, we adjust to (*) new variants of the Engquist-Osher (EO) scheme (Bürger, Karlsen, and Towers [Submitted, 2007]), and of the Hilliges-Weidlich (HW) scheme analyzed by the authors [J. Engrg. Math., to appear]. It is proven that the EO and HW schemes and a related Godunov scheme converge to the unique entropy solution of type $(A,B)$ of (*). For the Godunov version, this is the first rigorous convergence and well-posedness result, since no unnecessarily restrictive regularity assumptions are imposed on the solution. Numerical experiments for first-order schemes and formally second-order MUSCL/Runge-Kutta versions are presented.
Abstract:
In this paper a macroscopic model of tumor cord growth is developed, relying on the mathematical theory of deformable porous media. Tumor is modeled as a saturated mixture of proliferating cells, extracellular fluid and extracellular matrix, that occupies a spatial region close to a blood vessel whence cells get the nutrient needed for their vital functions. Growth of tumor cells takes place within a healthy host tissue, which is in turn modeled as a saturated mixture of non-proliferating cells. Interactions between these two regions are accounted for as an essential mechanism for the growth of the tumor mass. By weakening the role of the extracellular matrix, which is regarded as a rigid non-remodeling scaffold, a system of two partial differential equations is derived, describing the evolution of the cell volume ratio coupled to the dynamics of the nutrient, whose higher and lower concentration levels determine proliferation or death of tumor cells, respectively. Numerical simulations of a reference two-dimensional problem are shown and commented, and a qualitative mathematical analysis of some of its key issues is proposed.
Abstract:
We consider the class of integer rectifiable currents without boundary in $\R^n\times\R$ satisfying a positivity condition. We establish that these currents can be written as a linear superposition of graphs of finitely many functions with bounded variation.
Abstract:
We consider here the wave equation in a (not necessarily periodic) perforated domain, with a Neumann condition on the boundary of the holes. Assuming $H^0$-convergence ([3]) on the elliptic part of the operator, we prove two main theorems: a convergence result and a corrector one. To prove the corrector result, we make use of a suitable family of elliptic local correctors given in [4] whose columns are piecewise locally square integrable gradients. As in the case without holes ([2]), some additional assumptions on the data are needed.
Abstract:
We study quasi-static deformation of dense granular packings. In the reference configuration, a granular material is under confining stress (pre-stress). Then the packing is deformed by imposing external boundary conditions, which model engineering experiments such as shear and compression. The deformation is assumed to preserve the local structure of neighbors for each particle, which is a realistic assumption for highly compacted packings driven by small boundary displacements. We propose a two-dimensional network model of such deformations. The model takes into account elastic interparticle interactions and incorporates geometric impenetrability constraints. The effects of friction are neglected. In our model, a granular packing is represented by a spring-lattice network, whereby the particle centers correspond to vertices of the network, and interparticle contacts correspond to the edges. We work with general network geometries: periodicity is not assumed. For the springs, we use a quadratic elastic energy function. Combined with the linearized impenetrability constraints, this function provides a regularization of the hard-sphere potential for small displacements.
When the network deforms, each spring either preserves its length (this corresponds to a solid-like contact), or expands (this represents a broken contact). Our goal is to study the distribution of solid-like contacts in the energy-minimizing configuration. We prove that under certain geometric conditions on the network, there are at least two non-stretched springs attached to each node, which means that every particle has at least two solid-like contacts. The result implies that a particle cannot lose contact with all of its neighbors. This eliminates micro-avalanches as a mechanism for structural weakening in small shear deformation.
Abstract:
Previous studies have shown that seawater may alter the wettability in the direction of more water-wet conditions in carbonate reservoirs. The reason for this is that ions from the salt (sulphate, magnesium, calcium, etc.) can create a wettability alteration toward more water-wet conditions as salt is adsorbed on the rock.
In order to initiate a more systematic study of this phenomenon a 1-D mathematical model relevant for spontaneous imbibition is formulated. The model represents a core plug on laboratory scale where a general wettability alteration (WA) agent is included. Relative permeability and capillary pressure curves are obtained via interpolation between two sets of curves corresponding to oil-wet and water-wet conditions. This interpolation depends on the adsorption isotherm in such a way that when no adsorption of the WA agent has taken place, oil-wet conditions prevail. However, as the adsorption of this agent takes place, gradually there is a shift towards more water-wet conditions. Hence, the basic mechanism that adsorption of the WA agent is responsible for the wettability alteration, is naturally captured by the model.
Conservation of mass of oil, water, and the WA agent, combined with Darcy's law, yields a 2x2 system of coupled parabolic convection-diffusion equations, one equation for the water phase and another for the concentration of the WA agent. The model describes the interactions between gravity and capillarity when an initially oil-wet core experiences a wettability alteration towards more water-wet conditions due to the spreading of the WA agent by molecular diffusion. Basic properties of the model are studied by considering a discrete version. Numerical computations are performed to explore the role of molecular diffusion of the WA agent into the core plug, the balance between gravity and capillary forces, and dynamic wettability alteration versus permanent wetting states. In particular, a new and characteristic oil-bank is observed. This is due to the incorporation of dynamic wettability alteration and cannot be seen in the case of permanent wetting characteristics. More precisely, the phenomenon is caused by a cross-diffusion term appearing in the capillary diffusion term.
Warning: I am not a physicist.
As Dan Hulme already explained, light can't travel through metals, so dealing with the IOR is a lot more... complex. I will answer why that happens and how to calculate the reflection coefficient.
Explanation: Metals are filled with free electrons. Those electrons react to external fields and reposition until electrostatic equilibrium is met (the electric field is zero inside a conductor in electrostatic equilibrium). When electromagnetic waves hit a metallic surface, the free electrons move until the field that they create cancels the field of the incoming wave. Those electrons grouped together radiate a wave going out nearly the same as the one that hit the surface (i.e. with very low attenuation). How much is attenuated depends on the material properties.
From this explanation it is clear that conductivity is a key part of the high reflection coefficient on metals.
Math-wise, what you are missing is the complex index of refraction. On good conductors, such as metals, the complex term of the IOR is relevant and key to explaining this phenomenon.
Practically, in rendering, choosing good metal parameters is driven more by visual judgment. Artists adjust to their preference until it looks believable. Often you see a metalness parameter with specific handling for materials marked as metal.
Involved answer:
The complex index of refraction can be seen if we use Ohm's Law $\vec{J} = \sigma \vec{E}$, which holds for conductors, in the Ampère-Maxwell equation, using sinusoidal waves $\vec{E} \propto e^{i\omega t}$:
$$\vec{\nabla} \times \vec{H} = \sigma\vec{E} + \frac{\partial \vec{D}}{\partial t} = \sigma \vec{E} + i\omega \epsilon \vec{E}$$$$ = i\omega \left( \epsilon - i \frac{\sigma}{\omega} \right)\vec{E} = i \omega \epsilon_m\vec{E} $$
Note how we can interpret that whole term as a complex permittivity $\epsilon_m$ and that $\sigma$ is the conductivity of the material.
This affects the IOR, as its definition is given by:
$$ n' = \sqrt{\frac{\epsilon_m}{\epsilon_0}} = \sqrt{\frac{\left(\epsilon - i \sigma / \omega\right)}{\epsilon_0}} = n_{\text{real}} + in_{\text{img}}$$
This shows how $n'$ can be complex. Also, note how very good conductors have a relevant complex term, since $\sigma \gg \epsilon_0 \omega$. Since the full derivation is lengthy, I will skip some steps (see the reference, page 27): it can be shown that, since $\sigma \gg \epsilon_0\omega$ (we are dealing with $\omega$ of the visible spectrum):$$ n_{\text{real}} \approx n_{\text{img}}$$
and the reflection coefficient of metals at normal incidence, from a medium with IOR $n$, given that $n' \gg n$, is:
$$ R = \frac{(n_{\text{real}} - n)^2 + n_{\text{img}}^2}{(n_{\text{real}} + n)^2 + n_{\text{img}}^2} \approx 1$$
This agrees with the rule of thumb that a good conductor is, in general, a good reflector.
The famous
Introduction to Electrodynamics from Griffiths, pages 392-398, explains this and a lot more in a similar fashion. |
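To make the last formula concrete, here is a minimal numerical sketch of the normal-incidence reflectance $R = |(n' - n)/(n' + n)|^2$, which expands to the expression above; the sample IOR values are illustrative, not measured material data.

```python
# Normal-incidence reflectance from a (possibly complex) index of
# refraction, R = |(n' - n)/(n' + n)|^2. With n' = n_real + i*n_img this
# expands to ((n_real - n)^2 + n_img^2) / ((n_real + n)^2 + n_img^2).
# The sample IOR values below are illustrative, not measured data.

def reflectance(n_material: complex, n_outside: float = 1.0) -> float:
    """Fraction of light reflected at normal incidence."""
    r = (n_material - n_outside) / (n_material + n_outside)
    return abs(r) ** 2

print(reflectance(1.5))          # glass-like dielectric: 0.04
print(reflectance(0.5 + 3.0j))   # conductor-like, large imaginary part: ~0.82
```

Note how a large imaginary part alone pushes $R$ toward 1, exactly the "good conductor is a good reflector" behavior derived above.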
I'm receiving the "Limit controls must follow a math operator" error. I googled it and tried the suggested fixes, but nothing helped. The error appears on Overleaf 2.0 and when building with a makefile on CentOS and Fedora. I am using the packages
geometry, amsmath, amsthm, amssymb, amsfonts, verbatim, babel (czech), inputenc (utf8), fontenc (IL2)
l.99 ... $\lim\limits_{n\to\infty}$ a $\Pi\limits _{i=1}^n 2^i$ ... I'm ignoring this misplaced \limits or \nolimits command.
The line looks like this:
... ${\lim_{x\to\infty}f(n)}$ ... $\Pi_{i=1}^n 2^i$ ... $\bigcup_{A\in B}A$... $\lim\limits_{n\to\infty}$ ... $\Pi\limits_{i=1}^n 2^i$ ... \verb|\limits|
EDIT 1: I found that the problem starts at
${\Pi\limits_{i=1}^{n} 2^i}$. Unfortunately, I need \limits for \sum and \Pi or anything that forces the
\Pi to have the compact form (n above, i=1 under); it's school homework :)
Any advice appreciated. Thx |
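For what it's worth, a guess at the fix (an assumption about intent, since only the question is shown): `\Pi` typesets the Greek letter and is an ordinary symbol, not a math operator, so `\limits` after it is illegal; the large-operator form with movable limits is `\prod`.

```latex
% \Pi is an ordinary symbol, so \Pi\limits_{i=1}^{n} triggers
% "Limit controls must follow a math operator".
% The large operator, which accepts \limits, is \prod:
$\prod\limits_{i=1}^{n} 2^i$  % compact form (n above, i=1 below) even inline
```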
Newform invariants
Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2,\beta_3\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form.
Basis of coefficient ring in terms of a root \(\nu\) of \(x^{4} - 2x^{3} - 7x^{2} + 8x + 14\):
\(\beta_{0} = 1\), \(\beta_{1} = \nu^{3} - \nu^{2} - 4\nu\), \(\beta_{2} = \nu^{2} - 5\), \(\beta_{3} = -\nu^{2} + 2\nu + 4\)
\(1 = \beta_0\), \(\nu = (\beta_{3} + \beta_{2} + 1)/2\), \(\nu^{2} = \beta_{2} + 5\), \(\nu^{3} = 2\beta_{3} + 3\beta_{2} + \beta_{1} + 7\)
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below.
For more information on an embedded modular form you can click on its label.
This newform does not admit any (nontrivial) inner twists.
\( p \) | Sign
\( 2 \) | \( 1 \)
\( 3 \) | \( 1 \)
\( 23 \) | \( -1 \)
\( 29 \) | \( 1 \)
This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(4002))\):
\(T_{5}^{4} - 2T_{5}^{3} - 11T_{5}^{2} + 20T_{5} - 4\)
\(T_{7}^{4} - 6T_{7}^{3} + T_{7}^{2} + 32T_{7} - 32\)
\(T_{11}^{4} - 26T_{11}^{2} - 24T_{11} + 8\)
Before answering the question more or less directly, I'd like to point out that this is a good question that provides an object lesson and opens a foray into the topics of
singular integral equations, analytic continuation and dispersion relations. Here are some references for these more advanced topics: Muskhelishvili, Singular Integral Equations; Courant & Hilbert, Methods of Mathematical Physics, Vol I, Ch 3; Queen & Violini, Dispersion Theory in High Energy Physics; Eden et al., The Analytic S-matrix. There is also a condensed discussion of `invariant functions' in Schweber, An Intro to Relativistic QFT, Ch 13d.
The quick answer is that, for $m^2 \in\mathbb{R}$, there's no "shortcut." One must
choose a path around the singularities in the denominator. The appropriate choice is governed by the boundary conditions of the problem at hand. The $+i\epsilon$ "trick" (it's not a "trick") simply encodes the boundary conditions relevant for causal propagation of particles and antiparticles in field theory.
We briefly study the analytic form of $G(x-y;m)$ to demonstrate some of these features.
Note, first, that for real values of $p^2$, the singularity in the denominator of the integrand signals the presence of (a) branch point(s). In fact, [Huang,
Quantum Field Theory: From Operators to Path Integrals, p29] the Feynman propagator for the scalar field (your equation) may be explicitly evaluated:\begin{align}G(x-y;m) &= \lim_{\epsilon \to 0} \frac{1}{(2 \pi)^4} \int d^4p \, \frac{e^{-ip\cdot(x-y)}}{p^2 - m^2 + i\epsilon} \nonumber \\&= \left \{ \begin{matrix}-\frac{1}{4 \pi} \delta(s) + \frac{m}{8 \pi \sqrt{s}} H_1^{(1)}(m \sqrt{s}) & \textrm{ if }\, s \geq 0 \\ -\frac{i m}{ 4 \pi^2 \sqrt{-s}} K_1(m \sqrt{-s}) & \textrm{if }\, s < 0.\end{matrix} \right.\end{align}where $s=(x-y)^2$.
The first-order Hankel function of the first kind $H^{(1)}_1$ has a logarithmic branch point at $x=0$; so does the modified Bessel function of the second kind, $K_1$. (Look at the small $x$ behavior of these functions to see this.)
A branch point indicates that the Cauchy-Riemann conditions have broken down at $x=0$ (or $z=x+iy=0$). And the fact that these singularities are logarithmic is an indication that we have an endpoint singularity [e.g. Eden et al., Ch 2.1]. (To see this, consider $m=0$; then the integrand, $p^{-2}$, has a zero at the lower limit of integration in $dp^2$.)
Coming back to the question of boundary conditions, there is a good discussion in Sakurai,
Advanced Quantum Mechanics, Ch4.4 [NB: "East Coast" metric]. You can see that for large values of $s>0$ from the above expression that we have an outgoing wave from the asymptotic form of the Hankel function.
Connecting it back to the original references I cited above, the $+i\epsilon$ form is a version of the Plemelj formula [Muskhelishvili]. And the expression for the propagator is a type of Cauchy integral [Musk.; Eden et al.]. And these notions lead quickly to the topics I mentioned above -- certainly a rich landscape for research. This post imported from StackExchange Physics at 2014-07-13 04:38 (UCT), posted by SE-user MarkWayne
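For readers who want to poke at the quoted closed form numerically, here is a small sketch (assuming SciPy is available; the delta-function term on the light cone $s=0$ is omitted, so it is only valid for $s \neq 0$):

```python
# Numerical sketch of Huang's closed form for the scalar Feynman
# propagator G(s; m), away from the light cone (delta term omitted).
import numpy as np
from scipy.special import hankel1, kv

def G(s: float, m: float = 1.0) -> complex:
    """Feynman propagator as a function of s = (x - y)^2, s != 0."""
    if s > 0:    # timelike: outgoing-wave Hankel function H_1^(1)
        return m / (8 * np.pi * np.sqrt(s)) * hankel1(1, m * np.sqrt(s))
    r = np.sqrt(-s)
    return -1j * m / (4 * np.pi**2 * r) * kv(1, m * r)   # spacelike: damped K_1

# Spacelike separations are exponentially suppressed, as K_1 decays:
print(abs(G(-1.0)), abs(G(-9.0)), abs(G(-25.0)))
```

The exponential fall-off for $s<0$ and the oscillatory Hankel tail for $s>0$ are exactly the "no propagation outside the cone" and "outgoing wave" behaviors discussed above.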
Find the Matrix Representation of $T(f)(x) = f(x^2)$ if it is a Linear Transformation
Problem 679
For an integer $n > 0$, let $\mathrm{P}_n$ denote the vector space of polynomials with real coefficients of degree $n$ or less. Define the map $T : \mathrm{P}_2 \rightarrow \mathrm{P}_4$ by\[ T(f)(x) = f(x^2).\]
Determine if $T$ is a linear transformation.
If it is, find the matrix representation for $T$ relative to the basis $\mathcal{B} = \{ 1 , x , x^2 \}$ of $\mathrm{P}_2$ and $\mathcal{C} = \{ 1 , x , x^2 , x^3 , x^4 \}$ of $\mathrm{P}_4$.
To prove that $T$ is a linear transformation, we must show that it satisfies both axioms for linear transformations. For $f, g \in \mathrm{P}_2$, we have\[T( f+g )(x) = (f+g)(x^2) = f(x^2) + g(x^2) = T(f)(x) + T(g)(x)\]while for a scalar $c \in \mathbb{R}$, we have\[ T( c f )(x) = (cf)(x^2) = c f(x^2) = c T(f)(x).\]We see that $T$ is a linear transformation.
The matrix representation for $T$
To find its matrix representation, we must calculate $T(f)$ for each $f \in \mathcal{B}$ and find its coordinate vector relative to the basis $\mathcal{C}$. We calculate\[T(1) = 1 , \quad T(x) = x^2 , \quad T(x^2) = x^4.\]Each of these is an element of $\mathcal{C}$. Their coordinate vectors relative to $\mathcal{C}$ are thus\[[ T(1) ]_{\mathcal{C}} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} , \quad [ T(x) ]_{\mathcal{C}} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} , \quad [ T(x^2) ]_{\mathcal{C}} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}.\]
The matrix representation of $T$ is found by combining these columns, in order, into one matrix:\[\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}.\]
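As a sanity check, a short numpy sketch (the matrix entries follow from the coordinate vectors computed above) confirming that this matrix sends the $\mathcal{B}$-coordinates of $f$ to the $\mathcal{C}$-coordinates of $f(x^2)$:

```python
# Columns of M are the C-coordinate vectors of T(1), T(x), T(x^2).
import numpy as np

M = np.array([[1, 0, 0],
              [0, 0, 0],
              [0, 1, 0],
              [0, 0, 0],
              [0, 0, 1]])

# f(x) = 3 + 2x + 5x^2, so T(f)(x) = f(x^2) = 3 + 2x^2 + 5x^4.
f = np.array([3, 2, 5])      # B-coordinates of f
print(M @ f)                 # [3 0 2 0 5], the C-coordinates of f(x^2)
```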
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, which we get by taking the derivative: $v(t) = 3t^2-12t+9$. But I don't know what to do after that. How could I find the intervals?
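One way to finish the sign analysis (a sketch using sympy): the particle moves left exactly where $v(t) < 0$, so factor the velocity and solve the inequality.

```python
# Factor v(t) and solve v(t) < 0 to find where the particle moves left.
import sympy as sp

t = sp.symbols('t', real=True)
x = t**3 - 6*t**2 + 9*t + 11
v = sp.diff(x, t)                    # 3t^2 - 12t + 9 = 3(t - 1)(t - 3)

print(sp.factor(v))
print(sp.solve_univariate_inequality(v < 0, t, relational=False))  # the open interval (1, 3)
```

Since $v$ changes sign only at its roots $t=1$ and $t=3$ and the parabola opens upward, $v(t)<0$ exactly on $(1,3)$.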
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
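A quick numerical illustration of that claim (a sketch; the matrix and polynomial are arbitrary choices, not from the original discussion) with $p(z) = z^2 + 1$:

```python
# Spectral mapping check: the eigenvalues of p(A) are p(lambda_i).
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])       # triangular, so eigenvalues are 2 and 3
pA = A @ A + np.eye(2)           # p(A) = A^2 + I

print(sorted(np.linalg.eigvals(A)))    # eigenvalues 2 and 3
print(sorted(np.linalg.eigvals(pA)))   # p(2) = 5 and p(3) = 10
```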
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1)
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism |
Continuation 2 has been added, with seemingly ideal results.
Ben Aveling’s solution inspired three new solutions (“Continuations”) to this surprisingly playful puzzle that is interestingly reminiscent of Benford’s and Zipf’s laws. $\require{begingroup} \begingroup \def\safeMathJax{\text{\endgroup error}} \def \p {{\kern 1mu p}} \def \r {{\large r}} \def \x {{x \kern1mu}} \def \P {{ \kern1mu p \kern2mu}} \def \s{{ \large s}} \def \= {~ = ~} \def \* {\kern2mu \cdot \kern1mu}$
Beginning with an untouched pizza, I will...
...make a cut from anywhere on the edge of the pizza to the center.
With that taken care of, more than one way to continue has come to mind.
Continuation 1 — Based on error correction, simplified to introduce analysis.
Now to write a note to my fellow pizza partier/prisoners.
Dear fellow pizza partier/prisoner,
You are invited to...
1. Find the largest slice.
If you see only one cut in the pizza,
consider the whole thing to be the largest slice.
2. Measure that slice’s angle in degrees and call it
$A \LARGE\raise-.3ex\strut$.
3. Cut an angle of
$\dfrac {\raise-2mu 1} { \frac{2}{A} + \frac{{\large\ln}\,2}{720} \, }$
degrees into the slice.
4. By the way,
$ \small \dfrac{\raise-2mu{\ln2}}{720} \approx \, 0.0009627044174443684853... $
With love, fellow pizza partier/prisoner.
If my fellow pizza partier/prisoners make sense of that, this is how the first nine cuts should go.
| |
After my cut | 360.0° |
| |
After prisoner 2 | 153.4° | 206.6° |
| | |
After prisoner 3 | | 94.0° | 112.6° |
| | | |
After prisoner 4 | 71.4° | 82.0° | | |
| | | | |
After prisoner 5 | | | | 53.4° | 59.2° |
| | | | | |
After prisoner 6 | | | 44.9° | 49.0° | | |
| | | | | | |
After prisoner 7 | | 39.4° | 42.5° | | | | |
| | | | | | | |
After prisoner 8 | 34.5° | 36.9° | | | | | | |
| | | | | | | | |
After prisoner 9 | 34.5° | 36.9° | 39.4° | 42.5° | 44.9° | 49.0° | 53.4° | 28.8° | 30.4° |
. | | | | | | | | | |
. | . . . . . . . . |
. | . . . . . . . . |
Now for some analysis while waiting for pizza, or execution. Begin with an ideal sequence of cuts where slices are re-cut in the same order as they were produced. The sequence above begins ideally but hasn’t been examined thoroughly for monotonicity.
.---.
: :.------.
360° 360° : SLICE SIZES AS A LEAPFROGGING SEQUENCE
| | .-'-.
| | : :
| | :.----:-------. Each prisoner p finds p-1
| | 207° : .-'-. slices and divides the slice
| | | : : : of size s_p into slices of
| | | :.----:-----:-------. size s_(2p-1) and s_2p .
| | | 153° : : :
| | | | :.----:--------:----------.
| | | | 113° : : :
| | | | | :.-------:-----------:-----------.
| | | | | 94° : : :
| | | | | | .-'-. : :
| | | | | | : : .-'-. :
| | | | | | 82° : : : :
| | | | | | | 71° : : .-'-.
| | | | | | | | 59° 53° : :
| | | | | | | | | | 49° 45° . . .
| | | | | | | | | | | |
| | | | | | | | | | | |
| | |
s_1 s_2 s_3 s_4 s_5 s_6 s_7 s_8 s_9 s_10 s_11 s_12 . . .
| | |
s_p = s_(2p-1) + s_2p
For aesthetics as much as fairness, the sequence of slice sizes (angles), $\s_p \kern2mu$, would ideally form a smooth curve. For convenience, how about a slice-size function, $\s(\p)$, that approximates $\s_p$ and satisfies $\s(\p) \approx \s(2\P{-}1) + \s(2\p) \,$? As such, prisoner $p$ arrives around time = $p$ and divides a slice of size $\s(\p)$ into two slices intended to be re-cut by prisoners $2\P{-}1$ and $2\p$ around time = $2\p$.
$$ \s_p= \s(\p) = \dfrac {\large\tfrac{360}{\ln2}} {\, \P-\tfrac12 ~} $$
That’s just $\dfrac{\raise-2mu 1}{\, \raise4mu p \,}$,
scaled and shifted so that
all current slices add up to a full 360° of pizza.
$$ \sum_{\P+1}^{2\p} \s_i ~\approx \int_{\P+\frac12}^{2\p+\frac12} \!\! \s(x)dx ~\approx~ 360^\circ $$
If at any stage the largest slice’s angle to be divided, $A$, happens to be larger or smaller than ideal, it may be treated as merely being out of place in the sequence. Thus every slice’s ideal position, $x$, is deduced such that $\s(x)=A$, before dividing $A$ into slices with angles $\s(2x)$ and $A{-}\s(2x)$.
\begin{array}{rrl}{} & A \kern-1em{} & \= \, \s(x) \,{} \= \dfrac{\large\tfrac{360}{\ln2}} {\, x-\tfrac12 ~}{} \\[1ex]{} \Longrightarrow & x \kern-1em{} & \= {\large\tfrac{360}{A \ln2}} + \tfrac12{} \\[2ex]{} \Longrightarrow & \s(2x) \kern-1em{} & \= \dfrac {\large\tfrac{360}{\ln2}} {\, 2x-\tfrac12 ~}{} \= \dfrac {\large\tfrac{360}{\ln2}} {\,{} {\large\tfrac{720}{A \ln2}} + \tfrac12 ~}{} \= \dfrac {1} {\, {\large\tfrac{2}{A}}{} + {\large\tfrac{\ln2}{720}} ~}{} \end{array}
That last calculation is what was recommended in the note to other prisoners.
This approach was designed to be relatively simple to calculate. It is also meant to be good at correcting mistakes in earlier slices, which will be tested in the section for Continuation 3.
Continuation 2 — Each cut is based on the number of slices.
Steps 2 through 4 of Continuation 2’s note to fellow pizza partier/prisoners are:
2. Count the number of slices so far and call that number
$n \raise-2ex\strut$.
3. Cut the largest slice into two pieces
whose angles have a ratio of
$\ln(2n{+}1)-\ln 2n ~$ to
$\ln(2n{+}2)-\ln(2n{+}1)$.
4. You can do it, pizza pal, just remember that
$ \ln x \= \x{-}1 - \frac{(\x{-}1)^2}2 + \frac{(\x{-}1)^3}3 - \cdots $
And here is how the first nine cuts would go this time.
| |
After my cut | 360.0° |
| |
After prisoner 2 | 210.6° | 149.4° |
| | |
After prisoner 3 | 115.9° | 94.7° | |
| | | |
After prisoner 4 | | | 80.1° | 69.4° |
| | | | |
After prisoner 5 | 61.2° | 54.7° | | | |
| | | | | |
After prisoner 6 | | | 49.5° | 45.2° | | |
| | | | | | |
After prisoner 7 | | | | | 41.6° | 38.5° | |
| | | | | | | |
After prisoner 8 | | | | | | | 35.8° | 33.5° |
| | | | | | | | |
After prisoner 9 | 31.5° | 29.7° | 54.7° | 49.5° | 45.2° | 41.6° | 38.5° | 35.8° | 33.5° |
. | | | | | | | | | |
. | . . . . . . . . |
. | . . . . . . . . |
This comes from how Ben Aveling figured out a way to obtain a very fair result by basing each cut solely on the number of slices cut so far. Extending the terms used here, Ben Aveling’s solution is...
$$ \r_p \= \frac{\s_{2\P-1}}{\s_{2\p}} \= \frac{n+1}{n} $$
...prisoner $p$ divides slice $\s_p$ into slices $\s_{2\P-1}$ and $\s_{2\p}$ to have a size ratio, $\r_p$, based on the number of slices so far, $n$. Incidentally, $n = \P{-}1$. A plot of $\log\sum\kern-1mu\textsf{error}\kern1mu^2$ shows how much smoother this is than Continuation 1 or straightforward halving of the largest slice. Each $\sum\kern-1mu\textsf{error}\kern1mu^2$ reflects the unfairness when there are $n$ slices and each of those slices has an $\textsf{error}$ that equals the difference between its size and the $n$ slices’ average size.
This plot also shows how fluctuations repeat at intervals that keep doubling. The fluctuations also reveal that different approaches take turns being better than the others. Along comes Continuation 2, a tweak on Ben Aveling’s approach, to produce a fluctuation-free(!) error graph.
And along comes Continuation 2’s derivation of $\r_p$. The fluctuations in the first plot make clear how any cut’s inaccuracy is echoed among an infinite cascade of future cuts, which is part of the fun challenge in this puzzle. Writing out the consequences of $\s_3$ and $\s_4$ is enough to establish a general formula for $\s_p$ and hence for an ideally smooth $\r_p$.
\begin{array}{rl}{} \s_3 \kern-1em{} & \= \s_5 + \s_6{} \= (\s_9 + \s_{10}) + (\s_{11} + \s_{12}){} \\[-1ex]{} & \= \s_{17}+\s_{18} + \s_{19}+\s_{20} + \s_{21}+\s_{22} + \s_{23}+\s_{24}{} \= \cdots{} \= {\displaystyle \lim_{i\to\infty} \int_{\large 2\*2^i{+}1}^{\large 3\*2^i} \!\! \s(x)dx}{} \\[1ex]{} & \= \frac{360}{\ln2} (\ln3-\ln2){} \\[4ex]{} \s_4 \kern-1em{} & \= \s_7 + \s_8{} \= (\s_{13}+\s_{14}) + (\s_{15}+\s_{16}){} \\[-1ex]{} & \= \s_{25}+\s_{26} + \s_{27}+\s_{28} + \s_{29}+\s_{30} + \s_{31}+\s_{32}{} \= \cdots{} \= {\displaystyle \lim_{i\to\infty} \int_{\large 3\*2^i{+}1}^{\large 4\*2^i} \!\! \s(x)dx}{} \\[1ex]{} & \= \frac{360}{\ln2} (\ln4-\ln3){} \\[5ex]{} \s_p \kern-1em{} & \= \frac{360}{\ln2} \big( \ln p - \ln(\P-1) \, \big){} \end{array}
$ \begin{array}{rrl}{} \Longrightarrow & \r_p \kern-1em{} & \= \dfrac {\s_{2\P-1}} {\s_{2\p}}{} \= \dfrac {\ln(2\P{-}1)-\ln(2\P{-}2)} {\ln2\p-\ln(2\P{-}1)}{} \\[2ex]{} & & \= \dfrac {\ln(2n{+}1)-\ln 2n} {\ln (2n{+}2)-\ln(2n{+}1)}{} \end{array} $
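The closed form can be checked numerically (a small sketch): each generation of slices telescopes to exactly 360°, the recursion $s_p = s_{2p-1} + s_{2p}$ holds, and $s_3, s_4$ match the 210.6°/149.4° row of Continuation 2’s table.

```python
# Check of the closed form s_p = (360/ln 2) * (ln p - ln(p-1)).
# The slices present when prisoner p+1 arrives are s_{p+1} .. s_{2p};
# their angles should telescope to the whole 360-degree pizza.
import math

def s(p: int) -> float:
    """Ideal angle (degrees) of slice number p, for p >= 2."""
    return 360 / math.log(2) * (math.log(p) - math.log(p - 1))

# Every generation of slices covers the whole pizza exactly:
for p in (1, 2, 3, 10, 100):
    assert abs(sum(s(i) for i in range(p + 1, 2 * p + 1)) - 360) < 1e-9

# The recursion s_p = s_(2p-1) + s_(2p) holds exactly:
assert abs(s(7) - (s(13) + s(14))) < 1e-9

print(round(s(3), 1), round(s(4), 1))   # 210.6 149.4, as in the table
```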
Continuation 3 — More accurate version of error-correcting Continuation 1.
Being refined.
$\endgroup$ |
I am a sophomore. A friend and I ran across a chemical kinetics exercise. We solved it in slightly different ways and we can't come to an agreement on who is right. We would appreciate some help:
Destruction of stratospheric ozone as determined by using the steady-state approximation.
The balance of ozone in the stratosphere is of critical concern because this molecule absorbs ultraviolet light that would be harmful to life at Earth's surface. The principal production mechanism for ozone is recombination of $\ce O$ atoms with $\ce{O_2}$. The principal destruction mechanism is that given below. There is increasing concern over alternative destruction mechanisms involving molecules introduced into the stratosphere by human activity.
Determine the destruction rate of ozone in the following mechanism. $$\begin{align}\ce{O3 + M &<=>[k_1][k_{-1}] O2 + O + M}\\ \\ \ce{O3 + O &->[k_2] 2 O2}\end{align}$$
Our answers:
We agree on the beginning, so we have :
$$\begin{align} -\frac{\mathrm d[\ce{O_3}]}{\mathrm dt} &= k_1[\ce{O_3}][\ce{M}] - k_{-1}[\ce{O_2}][\ce O][\ce M] + k_2[\ce{O_3}][\ce O]\\ -\frac{\mathrm d[\ce{M}]}{\mathrm dt} &= k_1[\ce{O_3}][\ce M] - k_{-1}[\ce{O_2}][\ce O][\ce M] \\ -\frac{\mathrm d[\ce O]}{\mathrm dt} &= -k_1[\ce{O_3}][\ce M] + k_{-1}[\ce{O_2}][\ce O][\ce M] +k_2[\ce{O_3}][\ce O] \end{align}$$
Now here comes the disagreement: we need to use the steady-state approximation to say that one of the above rates is zero.
My friend argues that $-\frac{\mathrm d[\ce O]}{\mathrm dt}$ should be approximated as zero because $\ce O$ is a transient compound.
Even if in most other exercises that would be correct, I disagree with him because in the proposed mechanism, $\ce{O_3}$ and $\ce O$ play symmetrical parts. So if we approximate $-\frac{\mathrm d[\ce O]}{\mathrm dt}$ by $0$ we should also approximate $-\frac{\mathrm d[\ce{O_3}]}{\mathrm dt}$ by $0$. Since we want to calculate this specific value, we don't want to approximate it. In my demonstration, I approximate $-\frac{\mathrm d[\ce{M}]}{\mathrm dt}$ by $0$, arguing that the compound $\ce M$ is on both sides of the equilibrium equation and therefore its concentration will remain constant.
We both manage to find a result:
My friend’s calculation:
$$ - \frac{\mathrm d[\ce O]}{\mathrm dt} = -k_1[\ce{O_3}][\ce M] + k_{-1}[\ce{O_2}][\ce O][\ce M] + k_2[\ce{O_3}][\ce O] = 0 $$ $$ k_1[\ce{O_3}][\ce M] - k_{-1}[\ce{O_2}][\ce O][\ce M] = k_2[\ce{O_3}][\ce O] $$ We substitute the left member into the destruction rate of ozone: $$ -\frac{\mathrm d[\ce{O_3}]}{\mathrm dt} = 2\times k_2[\ce{O_3}][\ce O] $$ Because $[\ce O]$ is transient, it is difficult to measure experimentally, so we express it as a function of $[\ce{O_3}], [\ce{O_2}]$ and $[\ce M]$: $$ k_1[\ce{O_3}][\ce M] = [\ce O] \times ( k_2[\ce{O_3}] + k_{-1}[\ce{O_2}][\ce M] ) $$ $$ [\ce O] = \frac{k_1[\ce{O_3}][\ce M]}{k_2[\ce{O_3}] + k_{-1}[\ce{O_2}][\ce M]} $$ And finally we have my friend’s result: $$ -\frac{\mathrm d[\ce{O_3}]}{\mathrm dt} = \frac{2k_1k_2[\ce{O_3}]^2[\ce M]}{k_2[\ce{O_3}] + k_{-1}[\ce{O_2}][\ce M]} $$
Now my calculations: $$ -\frac{\mathrm d[\ce M]}{\mathrm dt} = k_1[\ce{O_3}][\ce M] - k_{-1}[\ce{O_2}][\ce O][\ce M] = 0 $$ which allows us to simplify the destruction rate of ozone: $$ -\frac{\mathrm d[\ce{O_3}]}{\mathrm dt} = k_2[\ce{O_3}][\ce O] $$ We already disagree by a factor of 2 on the rate. But if we replace $[\ce O]$ like before, we obtain a much simpler result: $$\begin{align} k_1[\ce{O_3}][\ce M] &= k_{-1}[\ce{O_2}][\ce{O}][\ce M] \\ [\ce{O}] &= \frac{k_1[\ce{O_3}][\ce M]}{k_{-1}[\ce{O_2}][\ce M]} \\ [\ce O] &= \frac{k_1[\ce{O_3}]}{k_{-1}[\ce{O_2}]} \end{align}$$ Finally I have: $$ -\frac{\mathrm d[\ce{O_3}]}{\mathrm dt} = \frac{k_1k_2[\ce{O_3}]^2}{k_{-1}[\ce{O_2}]} $$
Which one of us is wrong and why? |
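Without settling which convention is physically right, the algebra of the friend's steady-state elimination can at least be machine-checked (a sympy sketch; `km1` stands for $k_{-1}$ and the concentration symbols are ad hoc names):

```python
# Impose d[O]/dt = 0, eliminate [O], and recover the quoted rate law
# -d[O3]/dt = 2 k1 k2 [O3]^2 [M] / (k2 [O3] + k_{-1} [O2] [M]).
import sympy as sp

k1, km1, k2, O3, O2, O, M = sp.symbols('k1 km1 k2 O3 O2 O M', positive=True)

dO_dt = k1*O3*M - km1*O2*O*M - k2*O3*O      # d[O]/dt from the mechanism
O_ss = sp.solve(sp.Eq(dO_dt, 0), O)[0]      # steady-state [O]

rate = sp.simplify(2*k2*O3*O_ss)            # -d[O3]/dt = 2 k2 [O3][O]
expected = 2*k1*k2*O3**2*M / (k2*O3 + km1*O2*M)
assert sp.simplify(rate - expected) == 0
print(rate)
```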
Abbreviation: BDLat
A bounded distributive lattice is a structure $\mathbf{L}=\langle L,\vee ,0,\wedge ,1\rangle $ such that
$\langle L,\vee ,\wedge \rangle $ is a distributive lattice
$0$ is the least element: $0\leq x$
$1$ is the greatest element: $x\leq 1$
Let $\mathbf{L}$ and $\mathbf{M}$ be bounded distributive lattices. A morphism from $\mathbf{L}$ to $\mathbf{M}$ is a function $h:L\to M$ that is a homomorphism:
$h(x\vee y)=h(x)\vee h(y)$, $h(x\wedge y)=h(x)\wedge h(y)$, $h(0)=0$, $h(1)=1$
Example 1: $\langle \mathcal P(S), \cup, \emptyset, \cap, S\rangle$, the collection of subsets of a set $S$, with union, empty set, intersection, and the whole set $S$.
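To make Example 1 concrete (my own sketch, not part of the original page), a brute-force check that the powerset lattice is distributive and bounded:

```python
# Brute-force verification that P(S) with union and intersection is a
# bounded distributive lattice: 0 = empty set, 1 = S.
from itertools import combinations

S = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(len(S) + 1)
           for c in combinations(sorted(S), r)]

# Distributive law: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
distributive = all(
    x & (y | z) == (x & y) | (x & z)
    for x in subsets for y in subsets for z in subsets
)

# Bounds: the empty set is least, S is greatest
bounded = all(frozenset() <= x <= S for x in subsets)

print(len(subsets), distributive, bounded)  # → 8 True True
```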
Classtype: variety
Equational theory: decidable
Quasiequational theory: decidable
First-order theory: undecidable
Congruence distributive: yes
Congruence modular: yes
Congruence n-permutable: no
Congruence regular: no
Congruence uniform: no
Congruence extension property: yes
Definable principal congruences: no
Equationally def. pr. cong.: no
Amalgamation property: yes
Strong amalgamation property: no
Epimorphisms are surjective: no
Locally finite: yes
Residual size: 2
$f(1)=1$, $f(2)=1$, $f(3)=1$, $f(4)=2$, $f(5)=3$, $f(6)=5$, $f(7)=8$, $f(8)=15$, $f(9)=26$, $f(10)=47$, $f(11)=82$, $f(12)=151$, $f(13)=269$, $f(14)=494$, $f(15)=891$, $f(16)=1639$, $f(17)=2978$, $f(18)=5483$, $f(19)=10006$, $f(20)=18428$
Values known up to size 49
1) Marcel Erné, Jobst Heitzig and Jürgen Reinhold, "On the number of distributive lattices", Electron. J. Combin. 9 (2002), Research Paper 24, 23 pp. (electronic)
1. Homework Statement
How did [itex]\frac{1}{2}x[/itex] come from at k=1?
2. Homework Equations 3. The Attempt at a Solution
because k=1 will make the first term at denominator 2(k-1) = [itex]\frac{0}{0}[/itex]
Yes. The [itex]\ \frac{1}{2}x\ [/itex] comes from the fact that the first term has the form 0/0 as k → 1. What is [itex]\displaystyle \ \lim_{t\to0}\frac{\sin(t)}{t}\ ?[/itex]

Use L'Hopital's Rule --> [itex]\displaystyle \ \lim_{t\to0}\cos(t) = 1 [/itex]
When I face a writing task, my two big failure modes are either not starting at all and dragging my feet indefinitely, or writing far too much and having to cut it down to size later. In the latter case, my problem isn’t just that I go off on tangents. I try to answer every conceivable objection, including those that only I would think of. As a result, I end up fighting a rhetorical battle that only I know about, and the prose that emerges is not just overlong, but arcane and obscure. Furthermore, if the existing literature on a subject is confusing to me, I write a lot in the course of figuring it out, and so I end up with great big expository globs that I feel obligated to include with my reporting on what I myself actually did. That’s why my PhD thesis set the length record for my department by a factor of about three.
I have been experimenting with writing scientific pieces that are deliberately bite-sized to begin with. The first such experiment that I presented to the world, “Sporadic SICs and the Normed Division Algebras,” was exactly two pages long in its original form. The version that appeared in a peer-reviewed journal was slightly longer; I added a paragraph of context and a few references.
My latest attempt at a mini-paper (articlet?) is based on a blog post from a few months back. I polished it up, added some mathematical details, and worked in a comparison with other research that was published since I posted that blog item. The result is still fairly short:
I decided to give Mastodon a whirl, so a while back I created an account for myself at the icosahedron.website instance. (After all, a big part of my research is to generalize regular icosahedra to higher dimensions and complex coordinates.) There I am: Blake C. Stacey (@bstacey@icosahedron.website). It’s been fun so far.
It seems the best way to explain Mastodon to an old person (like me) is that it’s halfway between social networking, the way big companies do it, and email. You create an account on one server (or “instance”), and from there, you can interact with people who have accounts, even if those accounts are on other servers. Different instances can have different policies about what kinds of content they allow, depending for example on what type of community the administrators of the instance want to cater to.
If I ever administrate a Mastodon instance, I think I’ll make “content warnings” mandatory, but I’ll change the interface so that they’re called “subject lines.”
C. A. Fuchs, M. C. Hoang and B. C. Stacey, “The SIC Question: History and State of Play,” arXiv:1703.07901 [quant-ph] (2017).
Recent years have seen significant advances in the study of symmetric informationally complete (SIC) quantum measurements, also known as maximal sets of complex equiangular lines. Previously, the published record contained solutions up to dimension 67, and was with high confidence complete up through dimension 50. Computer calculations have now furnished solutions in all dimensions up to 151, and in several cases beyond that, as large as dimension 323. These new solutions exhibit an additional type of symmetry beyond the basic definition of a SIC, and so verify a conjecture of Zauner in many new cases. The solutions in dimensions 68 through 121 were obtained by Andrew Scott, and his catalogue of distinct solutions is, with high confidence, complete up to dimension 90. Additional results in dimensions 122 through 151 were calculated by the authors using Scott’s code. We recap the history of the problem, outline how the numerical searches were done, and pose some conjectures on how the search technique could be improved. In order to facilitate communication across disciplinary boundaries, we also present a comprehensive bibliography of SIC research.
Also available via SciRate.
Maybe I need an “I told you so” category for this blog. Quoting the kicker from
The Atlantic‘s portrayal of the State Department:
“This is probably what it felt like to be a British foreign service officer after World War II, when you realize, no, the sun actually does set on your empire,” said the mid-level officer. “America is over. And being part of that, when it’s happening for no reason, is traumatic.”
While I was writing
Multiscale Structure in Eco-Evolutionary Dynamics, I found myself having a frustrating time reading through big chunks of the relevant literature. The mathematics in the mathematical biology was easier than a lot of what I’d had to deal with in physics, but the arguments were hard to follow. At times, it was even difficult to tell what was being argued about. A blog post by John Baez, on “biology as information dynamics,” called this frustration back to mind—not because it was unclear itself, but rather because it touched on the source of the fog.
I think the basic cause of the trouble is the following:
The application of mathematics to biological evolution is rooted, historically, in statistics rather than in dynamics. Consequently, a lot of model-building starts with tools that belong, essentially, to descriptive statistics (e.g., linear regression). This is fine, but then people turn around and discuss those models in language that implies they have constructed a dynamical system. This makes life quite difficult for the student trying to learn the subject by reading papers! The problem is not the algebra, but the assumptions. And that always makes for a thorny situation.
Last night I thought of a way to summarize why my current big research project appeals to me.
The SIC problem gives us the opportunity to travel all throughout mathematics, because, while the definition looks pretty small, the question is bigger on the inside.
For a taste of why this is so, try here:
The APS, my professional organization, has made some dunderheaded moves of late, but this is more encouraging. An email from the APS president and CEO, broadcast today to the membership at large, begins thusly:
We share the concerns expressed by many APS members about recent U.S. government actions that will harm the open environment that is essential for a successful global scientific enterprise. The recent executive order regarding immigration, and in particular, its implementation, would reduce participation of international scientists and students in U.S. research, industry, education, and conference activities, and sends a chilling message to scientists internationally.
The American Chemical Society had already spoken up:
Continue reading The American Physical Society Finally Speaks
Wasn’t I just kvetching about Steven Pinker? Not that long ago, even? Well, some gifts just won’t stop giving. He’s at it again, this time complaining about the “anti-science PC/identity politics/hard-left rhetoric” of the March for Science. It might have been obvious to some of us ten years or more ago that basic respect for empirical data had become a partisan issue, but not everybody has caught up quite yet.
An academic type like me has a hard time responding to accusations of “identity politics” or “political correctness,” not because the accusations have any intellectual merit, but because the real message isn’t the words on the page. People like me, we see a thing wrapped up in the form of a scholarly argument, and we try to respond with footnotes and appendices. But the clauses and locutions are just dances around the real issue, the fundamental point that was expressed most clearly by the Twitter account @ProBirdRights:
I am feel uncomfortable when we are not about me?
No, let’s be a little more forceful than that. The news warrants that much, and it just keeps coming. For the party now in power, the people who
keep rat shit out of your food and stop rivers from catching on fire are now the enemy.
I’m really not feeling that good about our ability to handle the next epidemic that comes our way.
And on a personal note, I’m a queer scientist who has published on biological evolution and the need for financial regulation. So, you can sod off with your cheery hot takes about America becoming Great Again through space exploration, or whatever the Quisling line is this week. Stuff your white dick back in your pants and sit your ass down while the adults work, m’kay?
The United States of America is a failed experiment.
We went out in the way a bad joke would have predicted. We lost against our own racism and sexism, our endemic illnesses whose symptoms were intensified by corrupt law enforcement and institutionally rotten mass media. Undone at the final hour by a bizarre codicil in a slaveowners’ constitution. Undone, pushed over the edge—but the edge was too close all along. When it really mattered, we proved ourselves incompetent: not able to handle our civil responsibilities, indeed, in a sense, not ready for adulthood. In the name of national glory, we have voted ourselves a government of the worst. And now a generation will grow up ignorant, poor and sick, if they get the chance to grow up at all. Many of the things we will lose will be things we can never regain, from international respect to endangered species to the lives of our loved ones.
Many good people will keep up the good fight and stir up, as John Lewis says, the good trouble.
The abyss has opened before us.
Whether the future we make for ourselves will have anything to commend it now depends upon our ability to stare into that abyss and make it blink.
Google Scholar is seriously borked today. I heard about the problem when Christopher Fuchs emailed me to say that he had his Google Scholar profile open in a browser and happened to click the refresh button, whereupon his total citation count jumped by 700. After the refresh, his profile was full of things he hadn’t even written. Poking around, I found that a lot of publications in the American Institute of Physics’s
AIP Conference Proceedings were being wildly misattributed, almost as if everyone who contributed to an issue was getting credit for everything in that issue.
For example, here’s Jan-Åke Larsson getting credit for work by Giacomo D’Ariano:
And here’s Chris picking up 38 bonus points for research on Mutually Unbiased Bases—a topic not far from my own heart!—research done, that is, by Ingemar Bengtsson:
Continue reading Google Scholar Whisky-Tango-Foxtrottery
This is entertaining:
Let’s say you tell your students that arm folding is a genetic trait, with the allele for right forearm on top (R) being dominant to left forearm on top (L). Results from a large number of studies show that about 11 percent of your students will be R children of two L parents; if they understand the genetics lesson correctly, they will think that either they were secretly adopted, or Mom was fooling around and Dad isn’t their biological father. More of your students will reach this conclusion with each bogus genetic trait that you add to the lesson. I don’t think this is a good way to teach genetics.
Via PZ Myers, who is teaching genetics this semester and has an interest in getting it right.
The topic of this year’s Foundational Questions (Quextions?) Institute essay contest is, “How can mindless mathematical laws give rise to aims and intention?”
My answer: They don’t. But not for the reason that most physicists who bother with such speculations probably think.
Continue reading FQXI Essay Contest Non-Submission
The start of a new semester always brings possibilities to mind. Tom Leinster comments,
A curious thing: in the four classes so far, the number of students attending has been, respectively, 19, 17, 15, 13. Assuming that the arithmetic progression continues, our final class will have $-1$ student. Some of Joachim’s colleagues have expressed an interest in coming along to see what $-1$ student looks like. This presents problems of a philosophical type.
A few years ago, I noticed a glitch in a paper that colleagues of mine had published back in 2002. A less-than sign in an inequality should have been a less-than-or-equals. This might have been a transcription error during the typing-up of the work, or it could have entered during some other phase of the writing process. Happens to the best of us! Algebraically, it was equivalent to solving an equation
\[ ax^2 + bx + c = 0 \] with the quadratic formula, \[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},\] and neglecting the fact that if the expression under the square root sign equals zero, you still get a real solution.
This sort of glitch is usually not worth a lot of breath, though I do tend to write in when I notice them, to keep down the overall confusingness of the scientific literature. In this case, however, there’s a surprise bonus. The extra solutions you pick up turn out to have a very interesting structure to them, and they include mathematical objects that were already interesting for other reasons. So, I wrote a little note explaining this. In order to make it self-contained, I had to lay down a bit of background, and with one thing and another, the little note became more substantial.
Too substantial, I learned: The journal that published the original paper wouldn’t take it as a Comment on that paper, because it said too many new things! Eventually, after a little more work, it found a home:
The number of citations that Google Scholar lists for this paper (one officially published in a journal, mind) fluctuates between 5 and 6. I think it wavers on whether to include a paper by Szymusiak and Słomczyński (
Phys. Rev. A 94, 012122 = arXiv:1512.01735 [quant-ph]). Also, if you compare against the NASA ADS results, it turns out that Google Scholar is missing other citations, too, including a journal-published item by Bellomo et al. (Int. J. Quant. Info. 13, 2 (2015), 1550015 = arXiv:1504.02077 [quant-ph]).
As I said in 2014, this would be a rather petty thing to care about,
if people didn’t rely on these metrics to make decisions! And, as it happens, all the problems I noted then are still true now. |
Under what conditions is it possible, using a suitable change of variables, to eliminate 1st order terms in an elliptic partial differential equation, so that the equation involves the 2nd derivatives, the dependent variable, and independent terms only?
To be concrete, consider the elliptic equation $-\Delta u + \sum_i a^i \frac{\partial u}{\partial x^i} + f(x)=0$.
If the $a^i$ are constant, define $u(x) = v(x) e^{\frac{1}{2}\sum_j a^j x^j}$ and obtain
$-\Delta v + \frac{1}{4} v \sum_i a^i a^i + f(x)e^{-\frac{1}{2}\sum_j a^j x^j}=0$, an elliptic equation without 1st order terms.
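The one-dimensional case of this substitution can be checked symbolically (my own sketch, not part of the question; it uses SymPy with a constant coefficient $a$):

```python
# Symbolic sanity check: in 1D, substitute u = v * exp(a*x/2) into
# -u'' + a*u' and verify that the first-order term in v drops out.
import sympy as sp

x, a = sp.symbols("x a")
v = sp.Function("v")
u = v(x) * sp.exp(a * x / 2)

expr = -sp.diff(u, x, 2) + a * sp.diff(u, x)
expr = sp.simplify(expr * sp.exp(-a * x / 2))  # strip the exponential factor
print(expr)
```

The printed result contains only $v''$ and a zeroth-order term proportional to $v$; no first-order $v'$ term survives.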
If the $a^i$ are not constant or if the equation is quasilinear, the problem is harder. It can be approached using contact transformations and Cartan's method of equivalence, but I am not aware of results. |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
The answer is yes, to both questions.
First question first. For any geodesic $n$-gon $P$ on $M$, i.e., a simply connected region of $M$ whose boundary consists of $n$ geodesic arcs, define
$$ \delta(P)= \mbox{sum of the angles of $P$}-(n-2)\pi. $$
Note that if $M$ were flat, then the defect $\delta(P)$ would be $0$. This quantity has a remarkable property: if $P= P'\cup P''$, where $P, P', P''$ are geodesic polygons, then
$$ \delta(P)=\delta(P')+\delta(P'')-\delta(P'\cap P'') $$
which shows that $\delta$ behaves like a finitely additive measure. It can be extended to a countably additive measure on $M$, and as such, it turns out to be absolutely continuous with respect to the volume measure $dV_g$ defined by the Riemann metric $g$. $\newcommand{\bR}{\mathbb{R}}$ Thus we can find a function $\rho: M\to \bR$ such that
$$\delta= \rho dV_g. $$
More concretely for any $p\in M$ we have
$$\rho(p) =\lim_{P\searrow p} \frac{\delta(P)}{{\rm area}\;(P)}, $$
where the limit is taken over geodesic polygons $P$ that shrink down to the point $p$. In fact
$$ \rho(p) = K(p). $$
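As a sanity check of this identity (my own example, not in the original answer): on the unit sphere, take $P$ to be the octant triangle bounded by three quarter-arcs of great circles. It has three right angles and area $\pi/2$, so

```latex
% Octant triangle on the unit sphere: three right angles, area \pi/2.
\[
  \delta(P) = 3\cdot\frac{\pi}{2} - (3-2)\pi = \frac{\pi}{2},
  \qquad
  \rho = \frac{\delta(P)}{\operatorname{area}(P)}
       = \frac{\pi/2}{\pi/2} = 1 = K ,
\]
```

matching the constant Gauss curvature $K\equiv 1$ of the unit sphere.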
Now observe that if we have a geodesic triangulation$\newcommand{\eT}{\mathscr{T}}$ $\eT$ of $M$, the combinatorial Gauss-Bonnet formula reads
$$\sum_{T\in\eT} \delta(T)=2\pi \chi(M). $$
On the other hand
$$\delta(T) =\int_T \rho(p) dV_g(p), $$
and we deduce
$$ \int_M \rho(p) dV_g(g)=\sum_{T\in \eT}\int_T \rho(p) dV_g(p) =\sum_{T\in\eT} \delta(T)=2\pi \chi(M). $$
For more details see these notes for a talk I gave to first year grad students a while back.
As for the second question, perhaps the most general version of Gauss-Bonnet uses the concept of normal cycle introduced by Joseph Fu.
This is a rather tricky and technical subject, which has an intuitive description. Here is roughly the idea.
To each compact and
reasonably behaved subset $S\subset \bR^n$ one can associate an $(n-1)$-dimensional current $\newcommand{\bN}{\boldsymbol{N}}$ $\bN^S$ that lives in $\Sigma T\bR^n =$ the unit sphere bundle of the tangent bundle of $\bR^n$. Think of $\bN^S$ as an oriented $(n-1)$-dimensional submanifold of $\Sigma T\bR^n$. The term reasonably behaved is quite generous because it includes all of the examples that you can produce in finite time (Cantor-like sets are excluded). For example, any compact, semialgebraic set is reasonably behaved.
How does $\bN^S$ look? For example, if $S$ is a submanifold, then $\bN^S$ is the unit sphere bundle of the normal bundle of $S\hookrightarrow \bR^n$.
If $S$ is a compact domain of $\bR^n$ with $C^2$-boundary, then $\bN^S$, as a subset of $\bR^n\times S^{n-1}$, can be identified with the graph of the Gauss map of $\partial S$, i.e., the map
$$\bR^n\supset \partial S\ni p\mapsto \nu(p)\in S^{n-1}, $$
where $\nu(p)$ denotes the unit-outer-normal to $\partial S$ at $p$.
More generally, for any $ S$, consider the tube of radius $\newcommand{\ve}{{\varepsilon}}\ve$ around $S$
$$S_\ve= \bigl\lbrace x\in\bR^n;\;\; {\rm dist}\;(x, S)\leq \ve\;\bigr\rbrace. $$
For $\ve $ sufficiently small, $S_\ve$ is a compact domain with $C^2$-boundary (here I'm winging it a bit) and we can define $\bN^{S_\ve}$ as before. $\bN^{S_\ve}$ converges in an appropriate way to $\bN^{S}$ as $\ve\to 0$ so that for $\ve$ small, $\bN^{S_\ve}$ is a good approximation for $\bN^S$. Intuitively, $\bN^S$ is the graph of a (possibly non existent) Gauss-map.
If $S$ is a convex polyhedron, $\bN^S$ is easy to visualize. In general, $\bN^S$ satisfies a remarkable additivity property
$$\bN^{S_1\cup S_2}= \bN^{S_1}+\bN^{S_2}-\bN^{S_1\cap S_2}. $$
In particular this leads to quite detailed description for $\bN^S$ for a triangulated space $S$.
Where does the Gauss-Bonnet formula come in? As observed by J. Fu, there are some canonical, $O(n)$-invariant, degree $(n-1)$ differential forms on $\Sigma T\bR^n$, $\omega_0,\dotsc, \omega_{n-1}$ with lots of properties, one being that for
any compact reasonable subset $S$
$$\chi(S)=\int_{\bN^S}\omega_0. $$
The last equality contains as special cases the two formulae you included in your question.
I am aware that the last explanations may feel opaque at a first go, so I suggest some easier, friendlier sources.
For the normal cycle of simplicial complexes try these notes. For an exposition of Bernig's elegant approach to normal cycles try these notes.
Even these "friendly" expositions with a minimal amount of technicalities could be taxing since they assume familiarity with many concepts.
Last, but not least, you should have a look at these REU notes on this subject. While the normal cycle does not appear, its shadow is all over the place in these beautifully written notes.
Bed inversion¶
To compute the initial ice thickness \(h_0\), OGGM follows a methodology largely inspired by [Farinotti_etal_2009], but fully automated and relying on different methods for the mass-balance and the calibration.
Basics¶
The principle is simple. Let’s assume for now that we know the flux of ice \(q\) flowing through a section of our glacier. The flowline physics and geometrical assumptions can be used to solve for the ice thickness \(h\):
With \(n=3\) and \(S = h w\) (in the case of a rectangular section) or \(S = 2 / 3 h w\) (parabolic section), the equation reduces to solving a polynomial of degree 5 with one unique solution in \(\mathbb{R}_+\). If we neglect sliding (the default in OGGM and in [Farinotti_etal_2009]), the solution is even simpler.
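As an illustration of this inversion step (a minimal sketch under assumed physics: no sliding, rectangular section, shallow-ice deformation only; the parameter values are invented and this is not OGGM's actual code), the flux relation \(q = \frac{2A}{n+2}(\rho g \alpha)^n h^{n+2} w\) can be inverted in closed form:

```python
# Minimal sketch (assumed relation and values, not OGGM's code):
# invert the no-sliding shallow-ice flux for a rectangular section,
#   q = 2A/(n+2) * (rho*g*alpha)^n * h^(n+2) * w,
# which for n = 3 is the degree-5 polynomial mentioned above.
n = 3
A = 2.4e-24        # creep parameter [s^-1 Pa^-3]
rho, g = 900.0, 9.81
alpha = 0.1        # surface slope [-]
w = 500.0          # section width [m]
q = 1.0e4 / (365.25 * 24 * 3600)  # ice flux [m^3 s^-1] (made-up value)

h = (q * (n + 2) / (2 * A * w * (rho * g * alpha) ** n)) ** (1.0 / (n + 2))
print(f"ice thickness h = {h:.1f} m")
```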
Ice flux¶
If we consider a point on the flowline and the catchment area \(\Omega\) upstream of this point we have:
with \(\dot{m}\) the mass balance, and \(\widetilde{m} = \dot{m} - \rho \partial h / \partial t\) the “apparent mass-balance” after [Farinotti_etal_2009]. If the glacier is in steady state, the apparent mass-balance is equivalent to the actual (and observable) mass-balance. Unfortunately, \(\partial h / \partial t\) is not known and there is no easy way to compute it. In order to continue, we have to make the assumption that our geometry is in equilibrium.
This, however, has a very useful consequence: indeed, for the calibration of our mass-balance model it is required to find a date \(t^*\) at which the glacier would be in equilibrium with its average climate while conserving its modern geometry. Thus, we have \(\widetilde{m} = \dot{m}_{t^*}\), where \(\dot{m}_{t^*}\) is the 31-yr average mass-balance centered at \(t^*\) (which is known since the mass-balance model calibration).
The plot below shows the mass flux along the major flowline of Hintereisferner glacier. By construction, the flux is maximal at the equilibrium line and zero at the glacier tongue.
In [1]: example_plot_massflux()
Calibration¶
A number of climate and glacier related parameters are fixed prior to the inversion, leaving only one free parameter for the calibration of the bed inversion procedure: the inversion factor \(f_{inv}\). It is defined such that:

\(A = f_{inv} \, A_0\)

With \(A_0\) the standard creep parameter (\(2.4\times10^{-24}\)). Currently, there is no “optimum” \(f_{inv}\) parameter in the model. There is a high uncertainty in the “true” \(A\) parameter as well as in all other processes affecting the ice thickness. Therefore, we cannot make any recommendation for the “best” parameter. Global sensitivity analyses show that the default value is a good compromise (Maussion et al., 2018).
Note: for ITMIX, \(f_{inv}\) was set to a value of approximately 3 (which was too high and underestimated ice thickness in most cases, with the exception of the European Alps).

Distributed ice thickness¶
To obtain a 2D map of the glacier bed, the flowline thicknesses need to be interpolated to the glacier mask. The current implementation of this step in OGGM is very simple, but provides nice looking maps:
In [2]: tasks.catchment_area(gdir)

In [3]: graphics.plot_distributed_thickness(gdir)
I am studying inverse kinematics, but I wonder about the suggested equations. (The following equations are taken from 'Computer Animation: Algorithms and Techniques', Third Edition, page 181.)
To better control the kinematic model, such as encouraging joint angle constraints, a control expression can be added to the pseudoinverse Jacobian solution.
($J$ is Jacobian, $J^+ = J^T(JJ^T)^{-1} = (J^TJ)^{-1}J^T$, and I will explain $z$ later)
The form for the control expression is Equation 5.25 below:
$ \theta = (J^+J - I)z\cdot\cdot\cdot\cdot(5.25) $
But $\theta$ is always the zero vector because $J^+J = I$, so $\theta = \mathbf 0 z$.
The book said that a change to pose parameters in the form of Equation 5.25 does not add anything to the velocities, so the control expression can be added to the pseudoinverse Jacobian solution without changing the given velocities to be satisfied.
$ V = J\theta \\ V = J(J^+J - I)z \\ V = \mathbf0z \\ V = \mathbf0 \cdot\cdot\cdot\cdot(5.26)$
And the book defined $z$ (Equation (5.27)) to bias the solution toward specific joint angles.
$z = \alpha_i(\theta_i - \theta_{ci})^2\cdot\cdot\cdot\cdot(5.27)$ where $\theta_i$ are the current joint angles, $\theta_{ci}$ are the desired joint angles, and $\alpha_i$ are the joint gains.
Finally,
The conventional pseudoinverse Jacobian solution with the control expression added is Equation 5.28.
$\theta = J^+V + (J^+J - I)z \cdot\cdot\cdot\cdot(5.28)\\ \theta = J^+(V + Jz) -z \\ \theta = J^T[(JJ^T)^{-1}(V+Jz)] -z $
However, I could not understand why the book derived the above equation. Because Equation 5.25 is a zero vector, the red part of Equation 5.28 ($\theta = J^+V + \color{red}{(J^+J - I)z}$) is a zero vector and $z$ doesn't have any effect on the result. We are just left with $\theta = J^+V$, no different from the original conventional pseudoinverse solution.
What is wrong with me? |
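Update: here is a quick numeric check I tried (my own sketch, not from the book). If the arm is redundant, so that $J$ has more columns (joints) than rows (task DOF), then $J^+J$ is not the identity:

```python
# Numeric check: for a "fat" Jacobian (more joints than task DOF),
# J+ J is a projection onto the row space of J, NOT the identity,
# yet Equation 5.28 still satisfies the task velocities.
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((2, 3))          # 2 task DOF, 3 joints
Jp = J.T @ np.linalg.inv(J @ J.T)        # J+ = J^T (J J^T)^-1

print(np.allclose(Jp @ J, np.eye(3)))    # → False

V = rng.standard_normal(2)
z = rng.standard_normal(3)
theta = Jp @ V + (Jp @ J - np.eye(3)) @ z    # Equation 5.28
print(np.allclose(J @ theta, V))             # → True
```

So $J^+J = I$ only holds when $J$ has full column rank; in the redundant case the term $(J^+J - I)z$ is generally nonzero, while $J(J^+J - I)z = 0$ still makes $V = J\theta$ hold.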
Oscillations and Waves

Transverse and longitudinal waves

Transverse wave: A wave in which the particles of the medium vibrate at right angles to the direction of propagation of the wave is called a transverse wave. This wave travels in the form of crests and troughs.
Longitudinal wave: A wave in which the particles of the medium vibrate in the same direction in which the wave is propagating is called a longitudinal wave. This wave travels in the form of compressions and rarefactions.

Transverse Waves: View the Topic in this video From 0:23 To 10:15
Longitudinal Waves: View the Topic in this video From 0:20 To 4:54
1. The equation of displacement relation in a progressive wave is given by
$y(x, t) = a \sin(kx − \omega t + \Phi)$
2. The speed of transverse wave on a stretched string is given by
$v = \sqrt{\frac{T}{\mu}}$
3. The general formula for speed of longitudinal waves in a medium is
$v = \sqrt{\frac{\beta}{\rho}}$
4. The speed of longitudinal waves in a solid bar is
$v = \sqrt{\frac{Y}{\rho}}$
5. The speed of a longitudinal wave in an ideal gas is given by
$v = \sqrt{\frac{p}{\rho}}$
6. Laplace's correction: He pointed out that the pressure variations in the propagation of sound are adiabatic and not isothermal. Thus, $v = \sqrt{\frac{\gamma p}{\rho}}$
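A quick numeric illustration of formulas 5 and 6 (my own sketch; standard-condition values for air are assumed, not taken from the notes):

```python
# Speed of sound in air: Newton's formula vs. Laplace's correction.
# (Illustrative values, assumed: standard pressure and air density.)
import math

p = 101325.0      # pressure [Pa]
rho = 1.29        # density of air [kg/m^3]
gamma = 1.4       # adiabatic index for (mostly diatomic) air

v_newton = math.sqrt(p / rho)           # isothermal (Newton), ≈ 280 m/s
v_laplace = math.sqrt(gamma * p / rho)  # adiabatic (Laplace), ≈ 332 m/s
print(round(v_newton), round(v_laplace))
```

The corrected value is close to the measured speed of sound in air, which is why Laplace's adiabatic assumption replaced Newton's isothermal one.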
I found the derivation for two-point and three-point perspective here on this site; even though it says it is a review of one-point perspective, it doesn't give a link to previous pages. I'd like to know how the matrix for one-point perspective was derived. If possible, please give me a detailed derivation for this projection matrix.
When reading on transformation matrices over a range of distinct source material, there are some main concepts that you must understand, which are as follows.
Column vs Row Notation
Some authors write vectors as a single-row matrix (1x3), while others write them as a single-column matrix (3x1). The latter seems to be the preferred representation in most modern books on computer graphics. The main difference between the two notations is that with the former, matrices are multiplied on the right side of the vector, and with the latter they are multiplied on the left. A linear transformation, such as a rotation matrix, written in one notation is represented by its transpose in the other notation. For instance, the rotation of the point $p=(1, 1)$ by an angle $\theta$ in the plane is shown below. As you can easily see, $Row(x)^T = Column(x)$, i.e., one notation is the transpose of the other.
$$ \begin{align} Row(p) =& \begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} \cos(\theta) & sin(\theta)\\ \text{-}sin(\theta) & \cos(\theta) \end{bmatrix}\\ Column(p) =& \begin{bmatrix} \cos(\theta) & \text{-}\sin(\theta)\\ \sin(\theta) & \cos(\theta) \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} \end{align} $$
The website you linked uses
row notation. This is important because if you read about the perspective matrix in other sources, you may see the result written as the transposed matrix if the author used column notation.

Basis Orientation
In the website, the camera basis vectors $X_1$, $Y_1$, and $Z_1$ form a left-handed coordinate system, because $X_1 \times Y_1 = - Z_1$. This is important as well, as you may notice some sign changes in the perspective matrix if the system was right-handed. You can read more about handedness in Wikipedia. As with row notation, this is another arbitrary decision that the author has taken.
Perspective Projection
Given the appropriate basis, and a projection plane at a distance d from the origin, perspective projection is a simple exercise in similar triangles, as shown below.
The point $\mathbf{P} = (x,y,z)$ when projected onto the plane gives another point $\mathbf{p} = (x_p, y_p, z_p)$ that can be written as (in row notation) $$\mathbf{p} = \begin{bmatrix} d \displaystyle \frac{x}{z} & d \displaystyle \frac{y}{z} & d \end{bmatrix}.$$
Now we just need to put this formula into matrix form (again, in row notation).
Homogeneous/Projective Coordinates
If we just use the raw 3D coordinates of $\bf P$ there is no way to represent the formula for $\bf p$ as a matrix multiplication. The use of homogeneous coordinates allows us to do exactly that. We start by augmenting $\bf P$ with a fourth coordinate, defining it as $\mathbf{P} = (x, y, z, 1)$, and we use 4x4 homogeneous matrices instead of 3x3 matrices to represent transformations. I strongly urge you to read up on projective spaces, as otherwise the following explanation will look like too much hand-waving, since I am not going to dwell on mathematical formalities.
Now, in this new space every point with coordinates $(wx,wy,wz,w)$ can be mapped onto the point $(x,y,z,1)$ by a simple division by $w$. We can use this property to our advantage to fill out a matrix in such a way that, after the division by $w$ is made, we get exactly the perspective projection formula for our point.
$$ \begin{bmatrix} x & y & z & \displaystyle\frac{z}{d} \end{bmatrix} = \begin{bmatrix} x & y & z & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & \displaystyle \frac{1}{d}\\ 0 & 0 & 0 & 0 \end{bmatrix} $$
Now if we take the point $(x, y, z, \frac{z}{d})$ and we divide it by its last coordinate, we get $(d\displaystyle\frac{x}{z}, d\displaystyle\frac{y}{z}, d, 1)$, which are exactly the coordinates of our projected point if we just ignore the augmentation onto fourth dimensional space that we did (i.e., discard the last coordinate).
As you can see, the "perspective matrix" does not actually do any perspective transformation, it just "prepares" the transformation by coding a specific value in the last coordinate, such that after all coordinates are divided by it, we get the true projection. |
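Here is a minimal sketch of that pipeline in Python (row notation, with an arbitrary example point $(3, 4, 5)$ and $d = 2$; the homogeneous divide happens explicitly at the end):

```python
# Row-notation perspective "preparation" matrix for a projection plane at
# distance d from the origin, followed by the homogeneous divide.
d = 2.0
M = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 1 / d],
     [0, 0, 0, 0]]

P = [3.0, 4.0, 5.0, 1.0]                  # (x, y, z) augmented with w = 1
h = [sum(P[i] * M[i][j] for i in range(4)) for j in range(4)]  # P · M (row form)
p = [c / h[3] for c in h]                 # divide by w, which here equals z/d
print(p[:3])                              # [d*x/z, d*y/z, d] = [1.2, 1.6, 2.0]
```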
See e.g. :
[page 7]
Definition 2.1.2 A sequent is an expression ($Γ \vdash \psi$) where $\psi$ is a statement (the conclusion of the sequent) and $Γ$ is a set of statements (the assumptions of the sequent). We read the sequent as ‘$Γ$ entails $\psi$’. The sequent ($Γ \vdash \psi$) means
There is a proof whose conclusion is $\psi$ and whose undischarged assumptions are all in the set $Γ$.
After having introduced the usual Natural Deduction rules for the propositional connectives, we have:
[page 54]
Definition 3.4.1 Let $σ$ be a signature. Then a $σ$- derivation or, for short, a derivation is a left-and-right-labelled tree (drawn branching upwards) such that the following hold:
(a) Every node has arity $0, 1, 2 or 3$.
(b) Every left label is either a formula of $LP(σ)$, or a formula of $LP(σ)$ with a
dandah [i.e. a "crossed formula"].
(c) Every node of arity $0$ [i.e. a
leaf] carries the right-hand label (A).
(d) If $\nu$ is a node of arity $1$, then one of the following holds:
[details regarding the rules follow]
(e) If $ν$ is a node of arity $2$, then one of the following holds:
[...]
(f) If $ν$ is a node of arity $3$, then
[...]
(g) If a node $\mu$ has left label $\chi$ with a dandah, then $\mu$ is a leaf, and [...]
The
conclusion of the derivation is the left label on its root, and its undischarged assumptions are all the formulas that appear without dandahs as left labels on leaves. The derivation is a derivation of its conclusion.
Another definition can be found in:
This was shown by Hardy back in 1903/1904.
A mention of it can be found here: Quarterly Journal Of Pure And Applied Mathematics, Volume 35, Page 203, which is somewhere in the middle of a long paper.
Here is a snapshot in case that link does not work:
Note, the integral is slightly different, but I suppose it won't be too hard to convert it into the form you have.
See also Hardy's response to Ramanujan here: http://books.google.com/books?id=Of5G0r6DQiEC&pg=PA46. Note: 1b.
(Edit:) Since the journal itself has no reliable electronic copies, and the proof is actually somewhat more involved than just the excerpt shown above, I'll give a quick description of the proof that Hardy provided.
First is the concept of
reciprocal functions of the first and second kind introduced by Cauchy. Two functions $\phi$ and $\psi$ defined on the positive real line are called reciprocal functions of the first kind if $$\phi(y) = \sqrt{\frac{2}{\pi}} \int_0^\infty \cos(y x) \psi(x) dx$$ and the same formula holds with $\phi$ and $\psi$ swapped. They are called reciprocal functions of the second kind if the $\cos$ in the formula above is replaced by $\sin$. Cauchy gave several examples of each, and also examples of functions which are their own reciprocal function of the first kind (but not of the second), and proved that those functions have the following property: whenever $\alpha \beta = 2\pi$, $$\sqrt\alpha \left( \frac12 \phi(0) + \phi(\alpha) + \phi(2\alpha) + \cdots\right) = \sqrt\beta \left(\frac12 \psi(0) + \psi(\beta) + \psi(2\beta) + \cdots \right)$$
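A quick sanity check with the simplest self-reciprocal function of the first kind, $\phi(x) = e^{-x^2/2}$: note that with the $\sqrt{2/\pi}$ normalization above, the matching condition works out to $\alpha\beta = 2\pi$. A stdlib-Python verification:

```python
import math

def half_sum(phi, a, terms=60):
    # ½·φ(0) + φ(a) + φ(2a) + ...  (the series in Cauchy's formula)
    return 0.5 * phi(0.0) + sum(phi(n * a) for n in range(1, terms))

phi = lambda x: math.exp(-x * x / 2)   # its own cosine-kind reciprocal

alpha = 2.0
beta = 2 * math.pi / alpha             # the matching condition alpha·beta = 2π
lhs = math.sqrt(alpha) * half_sum(phi, alpha)
rhs = math.sqrt(beta) * half_sum(phi, beta)
print(abs(lhs - rhs) < 1e-12)          # True
```

(With $\alpha\beta = \pi$ instead, the two sides visibly disagree, e.g. $0.899$ vs $1.001$ at $\alpha = 2$.)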
In the article linked above, Hardy proved the following two facts (among others).
The function $f(x) = e^{x^2/2}\int_x^\infty e^{-t^2/2}dt$ is its own reciprocal function of the second kind. (That proof is about 3 pages long, condensed in the typical Hardy fashion.) If $\phi$ and $\psi$ are reciprocal functions of the second kind, the following summation formula (analogue of the one above for functions of the first kind) holds: when $\lambda \mu = 2\pi$, one has $$ \sqrt\lambda \sum_0^\infty (-1)^n \phi\left( (n + \frac12)\lambda\right) = \sqrt\mu \sum_0^\infty (-1)^n \psi\left( (n+\frac12)\mu\right)$$ This expression being the one termed equation (9) in the screenshot above.
Hardy provided two proofs of the formula asked about above in the question. The first proof proceeds by giving the series expansion $$\int_0^\infty \frac{e^{-\alpha x^2}}{\cosh \pi x} dx = \frac{2}{\pi} \sum (-1)^n F\left( (n + \frac12)\alpha\right)$$ where $$F(x) = \sqrt\pi e^{x^2}\int_x^\infty e^{-t^2}dt$$ and using equation (9) above. The second proof is shown in section 10 in the image above: he obtained a
different series expansion of the expression we want on the left-hand side, which can be shown to be term-by-term equal to the first series expansion of the expression on the right-hand side, avoiding the need to invoke equation (9).
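For what it's worth, the expansion obtained by writing $1/\cosh \pi x = 2\sum_{n\ge0}(-1)^n e^{-(2n+1)\pi x}$ and integrating term by term can be checked numerically. The scaling below (prefactor $\sqrt{\pi/\alpha}$, arguments $(n+\tfrac12)\pi/\sqrt{\alpha}$, with $F(x) = \tfrac{\pi}{2}e^{x^2}\operatorname{erfc}(x)$) is what that direct computation gives, and may differ from Hardy's normalization of $\alpha$ by a rescaling:

```python
import math

def erfcx(x):
    # scaled complementary error function e^{x^2} erfc(x), avoiding overflow
    if x < 5.0:
        return math.exp(x * x) * math.erfc(x)
    s, term = 1.0, 1.0
    for k in range(1, 20):                  # asymptotic series, good for x >= 5
        term *= -(2 * k - 1) / (2 * x * x)
        s += term
    return s / (x * math.sqrt(math.pi))

alpha = 1.0

# Left side: Simpson's rule for the integral of e^{-alpha x^2} / cosh(pi x);
# the integrand is negligible (~1e-12) beyond x = 4.
a, b, m = 0.0, 4.0, 2000
h = (b - a) / m
f = lambda x: math.exp(-alpha * x * x) / math.cosh(math.pi * x)
lhs = h / 3 * (f(a) + f(b)
               + 4 * sum(f(a + i * h) for i in range(1, m, 2))
               + 2 * sum(f(a + i * h) for i in range(2, m, 2)))

# Right side: sqrt(pi/alpha) * sum_n (-1)^n erfcx((n + 1/2) pi / sqrt(alpha)),
# averaging two partial sums to accelerate the slowly alternating tail.
N = 20000
partial, prev = 0.0, 0.0
for n in range(N):
    prev = partial
    partial += (-1) ** n * erfcx((n + 0.5) * math.pi / math.sqrt(alpha))
rhs = math.sqrt(math.pi / alpha) * (partial + prev) / 2

print(abs(lhs - rhs) < 1e-8)   # True
```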
$B$-meson light-cone distribution amplitudes and radiative leptonic decay
Pre-published on: 2019 July 18
Published on: 2019 October 04
Abstract
We present the recent developments of $B$-meson light-cone distribution amplitudes (LCDAs), including their definition, classification, evolution, equations of motion (EOMs), and modeling. Making use of these recent developments, one particular decay channel, $B\to\gamma\ell\nu_\ell$, is discussed in detail with the most accurate theory prediction to date for its branching ratio, which has been used to extract the value of $\lambda_B$ --- the most important $B$-meson parameter in exclusive decays.
DOI: https://doi.org/10.22323/1.352.0155 |
The standard error of the estimate measures the spread of the observed values around the regression line, i.e. the typical size of the differences between the actual scores and the predicted scores. It matters because it tells you how precise the model's predictions are.

The standard error of a coefficient is used to build confidence intervals: the estimate plus or minus $1.96$ standard errors gives an approximate $95\%$ interval (strictly, the multiplier comes from the $t$ distribution with $N-2$ degrees of freedom; we divide by $N-2$ rather than $N-1$ because two parameters, the slope and the intercept, are estimated from the data). If that interval includes zero, then the effect is not statistically significant. Equivalently, a low t-statistic (the coefficient divided by its standard error), or a moderate-to-large exceedance probability, suggests that the variable could be removed without seriously affecting the standard error of the regression.

Two cautions. First, statistical significance is not the same as practical significance: with a large enough sample, almost any coefficient becomes statistically significant even when the effect is not clinically or scientifically meaningful. Second, when independent variables are redundant (collinear), the estimated coefficients are often large in magnitude and their variances are inflated --- by a factor of one over one-minus-R-squared from a regression of each variable on the other independent variables --- so individual standard errors can be large even when the model as a whole fits well.
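A minimal, self-contained illustration of these quantities on hypothetical data (the residual standard error uses $n-2$ degrees of freedom because both the slope and the intercept are estimated):

```python
import math

# Simple linear regression y = b0 + b1*x by least squares,
# with the standard error of the estimate and of the slope.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.9, 4.1, 6.0, 8.2, 9.9, 12.1]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))

b1 = sxy / sxx                  # slope
b0 = my - b1 * mx               # intercept

# Residual standard error: n - 2 because two parameters were estimated.
sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
s = math.sqrt(sse / (n - 2))
se_b1 = s / math.sqrt(sxx)      # standard error of the slope
t_b1 = b1 / se_b1               # t-statistic for H0: slope = 0
print(b1, se_b1)                # b1 ≈ 2.017, se_b1 ≈ 0.031
```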
Fill in each blank unshaded cell in the diagram below with a positive integer less than 100, such that every consecutive group of unshaded cells within a row or column is an arithmetic sequence.
This problem is from the USAMTS Round 3 problem set.
TL;DR. The solution is:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&71&83&95&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&59&77&95&\\ 27&&&30&&24&&32&&&77&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
And I prove that no other solution exists.
Here is the proof:
$$\begin{array}{rrrrrrrrrrrl} 03&-&-&-&-&-&&59&-&-&-&\\ &&-&-&&-&-&-&-&&&\\ -&10&-&-&&-&&-&-&-&-&\\ -&&&-&&-&&-&&&-&\\ -&-&-&-&31&26&&-&-&-&59&\\ \end{array}$$
Filling from the $26$ and $31$:
$$\begin{array}{rrrrrrrrrrrl} 03&-&-&-&-&-&&59&-&-&-&\\ &&-&-&&-&-&-&-&&&\\ -&10&-&-&&-&&-&-&-&-&\\ -&&&-&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Guessing what should go in the column above the $51$: the decrement can be at most $25$. If we try decrementing by $26$ or more, we will get a negative number in the cell to the left of the $10$.
Let's try decrementing by $20$:
$$\begin{array}{rrrrrrrrrrrl} 03&-&-&-&-&-&&59&-&-&-&\\ &&-&??&&-&-&-&-&&&\leftarrow \text{Can't put -06, can't be negative.}\\ 11&10&09&08&&-&&-&-&-&-&\\ 31&&&22&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Let's try decrementing by $19$:
$$\begin{array}{rrrrrrrrrrrl} 03&-&-&-&-&-&&59&-&-&-&\\ &&-&??&&-&-&-&-&&&\leftarrow \text{Can't put -12, can't be negative.}\\ 13&10&07&04&&-&&-&-&-&-&\\ 32&&&20&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
What does this mean?
Trying anything below $19$ would likewise produce negative numbers there.
Let's try decrementing by $21$:
$$\begin{array}{rrrrrrrrrrrl} 03&-&-&??&-&-&&59&-&-&-&\leftarrow \text{Can't put -06, can't be negative.}\\ &&-&00&&-&-&-&-&&&\\ 09&10&11&12&&-&&-&-&-&-&\\ 30&&&24&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Let's try decrementing by $22$:
$$\begin{array}{rrrrrrrrrrrl} 03&02&01&00&??&-&&59&-&-&-&\leftarrow \text{Can't put -01, can't be negative.}\\ &&-&06&&-&-&-&-&&&\\ 07&10&13&16&&-&&-&-&-&-&\\ 29&&&26&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Let's try decrementing by $23$:
$$\begin{array}{rrrrrrrrrrrl} 03&??&??&04&-&-&&59&-&-&-&\leftarrow \text{Can't put 3}\frac{1}{3}\text{and 3}\frac{2}{3}\text{, not integers.}\\ &&-&12&&-&-&-&-&&&\\ 05&10&15&20&&-&&-&-&-&-&\\ 28&&&28&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Let's try decrementing by $25$:
$$\begin{array}{rrrrrrrrrrrl} 03&??&??&20&-&-&&59&-&-&-&\leftarrow \text{Can't put 8}\frac{2}{3}\text{and 14}\frac{1}{3}\text{, not integers.}\\ &&-&24&&-&-&-&-&&&\\ 01&10&19&28&&-&&-&-&-&-&\\ 26&&&32&&-&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
So:
It must be decremented by $24$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&-&-&-&&&\\ 03&10&17&24&&22&&-&-&-&-&\\ 27&&&30&&24&&-&&&-&\\ 51&46&41&36&31&26&&-&-&-&59&\\ \end{array}$$
Now let's try something to the right of the $20$.
Let's try $21$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&21&22&23&&&\\ 03&10&17&24&&22&&??&-&-&-&\\ 27&&&30&&24&&??&&&-&\\ 51&46&41&36&31&26&&??&-&-&59&\leftarrow \text{Can't put -89, can't be negative.}\\ \end{array}$$
We can infer that:
For $22$, we get $-81$ in that same spot; $-73$ for $23$, $-65$ for $24$, $-57$ for $25$, $-49$ for $26$, $-41$ for $27$, $-33$ for $28$, $-25$ for $29$, $-17$ for $30$, $-09$ for $31$, $-01$ for $32$. Numbers smaller than $21$ will always give a negative value there as well. We can't try $47$ or higher, because that would produce something too large at the end of the row starting with $20$ (putting $46$ produces $72$ and $98$).
So, let's try $33$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&33&46&59&&&\\ 03&10&17&24&&22&&33&-&-&-&\\ 27&&&30&&24&&20&&&-&\\ 51&46&41&36&31&26&&07&??&??&59&\leftarrow \text{Can't put 24}\frac{1}{3}\text{and 41}\frac{2}{3}\text{, not integers.}\\ \end{array}$$
Let's try $34$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&34&48&62&&&\\ 03&10&17&24&&22&&37&-&-&-&\\ 27&&&30&&24&&26&&&-&\\ 51&46&41&36&31&26&&15&??&??&59&\leftarrow \text{Can't put 29}\frac{2}{3}\text{and 44}\frac{1}{3}\text{, not integers.}\\ \end{array}$$
Let's try $35$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&-&-&-&\\ 27&&&30&&24&&32&&&-&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
Now, to finish, we see that the cell in the middle row and last column must be odd; otherwise it is impossible to fill the cell under it. For it to be odd, all the remaining cells in the middle row must be odd as well.
So, let's try a $43$ under the $65$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&87&??&??&\leftarrow \text{Can't put 115 and 143, too high.}\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&43&45&47&\\ 27&&&30&&24&&32&&&53&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
What can we try, then?
Replacing the $43$ with something lower will just make the cell above the $65$ go even higher. So we must replace it with something higher than $43$.
If we try $45$, the top-right cell will go to $137$, with $47$ will go to $131$, $49$ will go to $125$, $51$ will go to $119$, $53$ will go to $113$, $55$ will go to $107$, $57$ will go to $101$.
Then...
... let's try $59$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&71&83&95&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&59&77&95&\\ 27&&&30&&24&&32&&&77&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
And it is solved!
Do other solutions exist?
For the left part, no, because $27$ is the only number that fits above the $51$ (the first guess). So let's see whether some other number fits in the second and third guesses.
To the right of the $20$, let's try $36$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&36&52&68&&&\\ 03&10&17&24&&22&&45&-&-&-&\\ 27&&&30&&24&&38&&&-&\\ 51&46&41&36&31&26&&31&??&??&59&\leftarrow \text{Can't put 40}\frac{1}{3}\text{and 49}\frac{2}{3}\text{, not integers.}\\ \end{array}$$
Let's try $37$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&37&54&71&&&\\ 03&10&17&24&&22&&49&-&-&-&\\ 27&&&30&&24&&44&&&-&\\ 51&46&41&36&31&26&&39&??&??&59&\leftarrow \text{Can't put 45}\frac{2}{3}\text{and 52}\frac{1}{3}\text{, not integers.}\\ \end{array}$$
Let's try $38$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&38&56&74&&&\\ 03&10&17&24&&22&&53&-&-&-&\\ 27&&&30&&24&&50&&&-&\\ 51&46&41&36&31&26&&47&51&55&59&\\ \end{array}$$
What is going on?
In fact, to the right of the $20$ we must place a number which, plus one, is a multiple of $3$; otherwise the last line cannot hold integers. We already know that numbers lower than $35$ produce negative numbers in the $8$th column, and that numbers greater than $46$ produce numbers greater than $100$ in the row starting with $20$. So we may only try $35$, $38$, $41$ and $44$ in the place to the right of the $20$. Further, we know that $35$ is able to solve the puzzle.
Continuing with the $38$: we must put below the $74$ some number that does not produce something too large in the upper-right cell. The lower the number under the $74$, the higher the number above it. But the higher it is, the higher the number at the end of the middle row becomes as well. Further, this number must be odd; otherwise we get an even number at the end of the middle row, and the number just below it would not be an integer.
So, let's try $73$:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&75&91&??&\leftarrow \text{Can't put 107, too high.}\\ &&13&18&&20&38&56&74&&&\\ 03&10&17&24&&22&&53&73&93&??&\leftarrow \text{Can't put 113, too high.}\\ 27&&&30&&24&&50&&&-&\\ 51&46&41&36&31&26&&47&51&55&59&\\ \end{array}$$
What does this mean?
If we try something smaller than $73$ below the $74$, the number at the top-right grows. If we try something larger, then the number at the end of the middle row grows. So it is impossible with $38$ to the right of the $20$.
If we try $41$, we get $83$ at the end of the row starting with $20$. If we try $44$, we get $92$. These numbers are far higher than $53$ and $59$, so either the middle or the top row would end with something higher than $100$. This proves that the number to the right of the $20$ must be $35$, so the second guess has a single solution.
Let's recall where we were before the third guess:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&-&-&-&\\ 27&&&30&&24&&32&&&-&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
What can we try?
We already know that the number below the $65$ must be odd and can't be lower than $59$ ($59$ solves it), because anything lower would make the top-right number higher than $100$. So we must try something higher than $59$.
What happens if we try $61$?
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&-&-&-&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&61&81&??&\leftarrow \text{Can't put 101, too high.}\\ 27&&&30&&24&&32&&&-&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
What does this mean?
If we increase the number below the $65$, the last number in the middle row just goes even higher. So the third guess can be neither lower nor higher than $59$; $59$ is the only number that works, and thus the third guess has a single solution. So I have proved that there is only one solution, and that this solution is:
$$\begin{array}{rrrrrrrrrrrl} 03&06&09&12&15&18&&59&71&83&95&\\ &&13&18&&20&35&50&65&&&\\ 03&10&17&24&&22&&41&59&77&95&\\ 27&&&30&&24&&32&&&77&\\ 51&46&41&36&31&26&&23&35&47&59&\\ \end{array}$$
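As a mechanical double-check, here is a small Python script (grid transcribed from the solution above, with None marking shaded cells) verifying that every maximal run of unshaded cells in each row and column is an arithmetic sequence of positive integers below 100:

```python
G = [
    [3, 6, 9, 12, 15, 18, None, 59, 71, 83, 95],
    [None, None, 13, 18, None, 20, 35, 50, 65, None, None],
    [3, 10, 17, 24, None, 22, None, 41, 59, 77, 95],
    [27, None, None, 30, None, 24, None, 32, None, None, 77],
    [51, 46, 41, 36, 31, 26, None, 23, 35, 47, 59],
]

def runs(line):
    # yield maximal groups of consecutive unshaded (non-None) cells
    run = []
    for v in line + [None]:
        if v is None:
            if run:
                yield run
            run = []
        else:
            run.append(v)

def is_arithmetic(r):
    # runs of length 1 or 2 are trivially arithmetic
    return len(r) < 3 or len({r[i + 1] - r[i] for i in range(len(r) - 1)}) == 1

lines = G + [list(col) for col in zip(*G)]            # rows and columns
ok = all(is_arithmetic(r) for line in lines for r in runs(line))
in_range = all(0 < v < 100 for line in G for v in line if v is not None)
print(ok and in_range)   # True
```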
This solution considers each consecutive run as an arithmetic sequence on its own.
Steps:
The given sequence at the bottom has to be filled in, i.e. $26, 31$ up to $51$ (E6 to E1). Then, since the $51$ has to be reached, the number two rows above it has to be odd; otherwise the cell just above the $51$ cannot be filled with a whole number.
Considering the above, and also that if we enter a number above $51$ in cell C1 then, since $10$ is fixed at C2, the series (C1, C2, ...) would go negative, the number has to be lower than $51$; it also cannot be higher than $19$ or less than $1$ (in both of those scenarios adjacent numbers would become negative). Once you choose a number and start filling, the story unfolds gradually, with just two constraints: no number can go below $0$ or above $99$. If you try various numbers for C1 you will notice that the constraints get violated for one of the cells (I tried many :) believe me, but feel free to try others; each either pushes one number above $99$ or one below $0$), so C1 has to be $3$, which fills the entire left side, leaving B6 at $20$. Then the tricky part is managing the right side with the two $59$ constraints.
One of the solutions that I could find is:
Steps :
1. Begin with the given sequence at f5 and e5.
2. The number at d1 must be such that (d5-d1) is divisible by 4 and (d1-a1) is divisible by 3. So it must be 12/24. Using brute-force elimination, it comes out to be 12. The first half can then be solved easily.
3. Similarly for the second half, the number at h5 must be such that (k5-h5) is divisible by 3 and (h1-h5) is divisible by 4. So it must be 11/23/35... Again using brute-force elimination it is 23.
There is a unique solution:
To find it, consider
The value of $X$ can be at most $28$ (because of the $10$ in the same row), so $Y$ can be at most $16$. However, $Y\equiv 36\mod 4$ and $Y\equiv 3\mod 3$, so we must have $Y=12$. This allows us to fill in up to here:
Here $Y=3/2 X-10$. The sequence $59+P$, $A+Q$, $B+R$, $C+S$ is an arithmetic progression, and the first two terms are $2X$ and $2Y$, so we get $C+S=5X-60$.
Here is my answer. For explanation purpose I named the board like chess. X-Axis is labeled as a - k. Y-Axis is labeled as 1-5.
My approach :
In e5 and f5 we have adjacent numbers, so we know the difference is $5$. Work the row up to a5.
Since the number at a5 is odd, the number at a3 should also be odd! The number at a3 cannot be $9$, $7$, or $5$, because then the number at d5 would become negative. Once you get the number at a3 as $3$, the rest (the left half of the board) can be worked out easily.
For the right half: let's assume the number in h5 is $N$, the delta in column h is $y$, and the delta in row 5 (from h5 to k5) is $x$. So
N = 59 + 4y
N = 59 + 3x
So we get 3x = 4y. Once you assume x = 4 and y = 3 then the rest can be worked out.
Note: There are multiple solutions to this which can be worked out by trying different values at a3 and different values of x and y. |
I'm not entirely sure what I'm doing wrong here, I'm using a
\kill'd line to mark the tabs first, but it's still giving me an undefined tab position error.
The error is reported for line 0 so I'm not sure which of these is causing it.
\begin{framed}
\begin{tabbing}
tabs \= tabs \= tabs \kill
Give $a$ the value 2.
Give \emph{prime} the value $T$.
While $a \leq \sqrt{n}$ and \emph{prime} $= T$
\> if $a$ divides $n$
\> \> then give \emph{prime} the value $F$
\> \> else increase the value of $a$ by 1.
\end{tabbing}
\end{framed}

\begin{framed}
\begin{tabbing}
tabs \= tabs \= tabs \kill
Give $a$ the value $m$, and $b$ the value $n$.
Find remainder $r$ of $a$ divided by $b$.
While $r \neq 0$:
\> assign $a$ the value $b$
\> assign $b$ the value $r$
\> recompute the remainder when
\> $a$ is divided by $b$.
Give $d$ the value of $b$.
\end{tabbing}
\end{framed}
Can anybody tell me what I'm doing wrong? |
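One likely culprit (I can't be certain without the exact source, since the listing above appears to have lost its line breaks): inside a `tabbing` environment, each display line must be terminated with `\\`. Without the `\\`, everything becomes one logical row, so each successive `\>` advances to the *next* tab stop of that single row; you quickly run past the two stops set by the `\kill` line, which produces exactly an "undefined tab position" error. A sketch of the first block with explicit line ends:

```latex
\begin{framed}
\begin{tabbing}
tabs \= tabs \= tabs \kill
Give $a$ the value 2.\\
Give \emph{prime} the value $T$.\\
While $a \leq \sqrt{n}$ and \emph{prime} $= T$\\
\> if $a$ divides $n$\\
\> \> then give \emph{prime} the value $F$\\
\> \> else increase the value of $a$ by 1.
\end{tabbing}
\end{framed}
```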
Search
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/$\psi$ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/$\psi$ have been measured with ALICE for Pb-Pb collisions ...
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box..
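The two-box loop described here is just the Euclidean algorithm; a minimal sketch (the inputs are hypothetical example values):

```python
def gcd(a, b):
    # repeatedly divide: a = b*q + r, then feed (b, r) back in until r == 0
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 252 = 105*2 + 42, 105 = 42*2 + 21, 42 = 21*2 + 0 → 21
```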
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
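A quick brute-force illustration of that row-independence claim, on a small hypothetical matrix (cofactor expansion along each of the three rows yields the same value):

```python
def det_along_row(A, i):
    # cofactor expansion of det(A) along row i (0-based)
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
        total += (-1) ** (i + j) * A[i][j] * det_along_row(minor, 0)
    return total

A = [[2, -1, 3], [0, 4, 5], [7, 1, -2]]
vals = [det_along_row(A, i) for i in range(3)]
print(vals)   # all three rows give -145
```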
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, for which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ appears in the denominator.
Basically 2 strings, $a>b$, which go into the first box, which does division to output $q,r$ such that $a = bq + r$ and $r<b$; then you check whether $r=0$, returning $b$ if so, and otherwise feed $b,r$ back into the division box.
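The chained-boxes construction above is just the Euclidean algorithm; a minimal Python sketch (the function names are my own, not from the original message):

```python
def divmod_box(a, b):
    """The 'division box': given a > b > 0, return (q, r) with a = b*q + r, 0 <= r < b."""
    return a // b, a % b

def gcd_via_boxes(a, b):
    """Chain division boxes until the remainder hits 0; the last divisor is the gcd."""
    while True:
        q, r = divmod_box(a, b)
        if r == 0:
            return b
        a, b = b, r

print(gcd_via_boxes(252, 105))  # 21
```

Each pass through the loop is one copy of the box, with the $r=0$ test deciding whether to stop or wire the outputs back around.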
There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university.
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactors), and letting $\det(A) = \sum_{j=1}^n a_{1j} \operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of row?
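The row-independence being asked about can at least be checked empirically; here is a minimal Python sketch (my own illustration, not part of the question) of the cofactor expansion along an arbitrary row $i$:

```python
def minor(A, i, j):
    """Matrix A with row i and column j deleted."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det_along_row(A, i=0):
    """Cofactor (Laplace) expansion of det(A) along row i."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** (i + j) * A[i][j] * det_along_row(minor(A, i, j), 0)
               for j in range(n))

A = [[2, 1, 0], [1, 3, 4], [0, 1, 5]]
print([det_along_row(A, i) for i in range(3)])  # [17, 17, 17]
```

Every row yields the same value, which is exactly what the uniqueness argument (the determinant is the unique alternating multilinear function with $\det(I)=1$) guarantees in general.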