In this section we consider the continuous version of the problem posed in the previous section: How are sums of independent random variables distributed?
Definition \(\PageIndex{1}\): convolution
Let \(X\) and \(Y\) be two continuous random variables with density functions \(f(x)\) and \(g(y)\), respectively. Assume that both \(f(x)\) and \(g(y)\) are defined for all real numbers. Then the convolution \(f ∗ g\) of \(f\) and \(g\) is the function given by
\[ \begin{align} (f*g)(z) & = \int_{-\infty}^\infty f(z-y)g(y)\,dy \\ & = \int_{-\infty}^\infty g(z-x)f(x)\,dx \end{align}\]
This definition is analogous to the definition, given in Section 7.1, of the convolution of two distribution functions. Thus it should not be surprising that if X and Y are independent, then the density of their sum is the convolution of their densities. This fact is stated as a theorem below, and its proof is left as an exercise (see Exercise 1).
Theorem \(\PageIndex{1}\)
Let \(X\) and \(Y\) be two independent random variables with density functions \(f_X(x)\) and \(f_Y(y)\) defined for all \(x\). Then the sum \(Z = X + Y\) is a random variable with density function \(f_Z(z)\), where \(f_Z\) is the convolution of \(f_X\) and \(f_Y\).
To get a better understanding of this important result, we will look at some examples.
Example \(\PageIndex{1}\): Sum of Two Independent Uniform Random Variables
Suppose we choose independently two numbers at random from the interval [0, 1] with uniform probability density. What is the density of their sum? Let X and Y be random variables describing our choices and Z = X + Y their sum. Then we have
\[f_X(x) = f_Y(x) = \Bigg\{ \begin{array}{cc} 1, & \text{if } 0 \leq x \leq 1 \\ 0, & \text{otherwise} \end{array}\]
and the density function for the sum is given by
\[f_Z(z) = \int_{-\infty}^\infty f_X(z-y)f_Y(y)dy.\]
Since \(f_Y(y) = 1\) if \(0 \leq y \leq 1\) and 0 otherwise, this becomes
\[f_Z(z) = \int_{0}^1 f_X(z-y)dy.\]
Now the integrand is 0 unless 0 ≤ z − y ≤ 1 (i.e., unless z − 1 ≤ y ≤ z) and then it is 1. So if 0 ≤ z ≤ 1, we have
\[f_Z (z) = \int_0^z dy = z ,\]
while if 1 < z ≤ 2, we have
\[f_Z(z) = \int_{z-1}^1 dy = 2-z, \]
and if \(z < 0\) or \(z > 2\) we have \(f_Z(z) = 0\) (see Figure 7.2). Hence,
\[f_Z(z) = \Bigg\{ \begin{array}{cc} z, & \text{if } 0 \leq z \leq 1 \\ 2-z, & \text{if } 1 < z \leq 2 \\ 0, & \text{otherwise} \end{array} \]
Note that this result agrees with that of Example 2.4.
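The triangular shape of \(f_Z\) is easy to confirm by simulation. The Python sketch below (function names and sample sizes are our own, purely illustrative) estimates the density of \(X + Y\) near two points and compares the estimates with the formula above:

```python
import random

random.seed(0)

def sum_uniform_sample(n=200_000):
    """Simulate Z = X + Y for independent U[0,1] variables X and Y."""
    return [random.random() + random.random() for _ in range(n)]

def empirical_density(samples, lo, hi):
    """Fraction of samples falling in [lo, hi), divided by the bin width."""
    count = sum(lo <= z < hi for z in samples)
    return count / (len(samples) * (hi - lo))

samples = sum_uniform_sample()
# The triangular density predicts f_Z(0.5) = 0.5 and f_Z(1.5) = 0.5.
print(empirical_density(samples, 0.45, 0.55))  # close to 0.5
print(empirical_density(samples, 1.45, 1.55))  # close to 0.5
```

Because the density is linear on each bin, the bin average equals the density at the bin midpoint, so the estimates converge to exactly 0.5.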
Example \(\PageIndex{2}\): Sum of Two Independent Exponential Random Variables
Suppose we choose two numbers at random from the interval [0, ∞) with an exponential density with parameter λ. What is the density of their sum? Let \(X\), \(Y\), and \(Z = X + Y\) denote the relevant random variables, and \(f_X, f_Y,\) and \(f_Z\) their densities. Then
\[ f_X(x) = f_Y(x) = \bigg\{ \begin{array}{cc} \lambda e^{-\lambda x}, & \text{if } x \geq 0 \\ 0, & \text{otherwise} \end{array} \]
and so, if z > 0,
\[ \begin{align} f_Z(z) & = \int_{-\infty}^\infty f_X(z-y)f_Y(y)\,dy \\ &= \int_0^z \lambda e^{-\lambda (z-y)} \lambda e^{-\lambda y}\,dy \\ &= \int_0^z \lambda^2 e^{-\lambda z}\,dy \\ &= \lambda^2 z e^{-\lambda z},\end{align}\]
while if z < 0, \(f_Z(z) = 0\) (see Figure 7.3). Hence,
\[ f_Z(z) = \bigg\{ \begin{array}{cc} \lambda^2 z e^{-\lambda z}, & \text{if } z \geq 0, \\ 0, & \text{otherwise} \end{array} \]
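The closed form \(\lambda^2 z e^{-\lambda z}\) can be checked against a direct Riemann-sum evaluation of the convolution integral. In this Python sketch the rate \(\lambda = 2\) is an arbitrary choice of ours:

```python
import math

lam = 2.0  # assumed rate parameter, chosen only for illustration

def f(x):
    """Exponential density with parameter lam."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def convolve_at(z, steps=20_000):
    """Midpoint-rule approximation of (f * f)(z) = integral of f(z-y) f(y) dy."""
    dy = z / steps
    return sum(f(z - (i + 0.5) * dy) * f((i + 0.5) * dy) for i in range(steps)) * dy

z = 1.3
exact = lam**2 * z * math.exp(-lam * z)
print(convolve_at(z), exact)  # the two values agree
```

Here the agreement is essentially exact, since the integrand \(\lambda^2 e^{-\lambda z}\) is constant in \(y\) on \([0, z]\), just as the derivation above shows.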
Example \(\PageIndex{3}\): Sum of Two Independent Normal Random Variables
It is an interesting and important fact that the convolution of two normal densities with means \(\mu_1\) and \(\mu_2\) and variances \(\sigma_1^2\) and \(\sigma_2^2\) is again a normal density, with mean \(\mu_1 + \mu_2\) and variance \( \sigma_1^2 + \sigma_2^2\). We will show this in the special case that both random variables are standard normal. The general case can be done in the same way, but the calculation is messier. Another way to show the general result is given in Example 10.17.
Suppose X and Y are two independent random variables, each with the standard normal density (see Example 5.8). We have
\[ f_X(x) = f_Y(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\]
and so
\[\begin{align} f_Z(z) & = (f_X * f_Y)(z) \\ & = \frac{1}{2\pi} \int_{-\infty}^\infty e^{-(z-y)^2/2} e^{-y^2/2}\,dy \\ & = \frac{1}{2\pi} e^{-z^2/4} \int_{-\infty}^\infty e^{-(y-z/2)^2}\,dy \\ & = \frac{1}{2\pi} e^{-z^2/4}\sqrt{\pi} \bigg[ \frac{1}{\sqrt{\pi}} \int_{-\infty}^\infty e^{-(y-z/2)^2}\,dy \bigg] \end{align}\]
The expression in the brackets equals 1, since it is the integral of the normal density function with \( \mu = z/2\) and \(\sigma = 1/\sqrt{2}\). So, we have
\[f_Z(z) = \frac{1}{\sqrt{4\pi}}e^{-z^2/4}\]
Example \(\PageIndex{4}\): Sum of Two Independent Cauchy Random Variables
Choose two numbers at random from the interval \((-\infty, \infty)\) with the Cauchy density with parameter \(a = 1\) (see Example 5.10). Then
\[ f_X(x) = f_Y(y) = \frac{1}{\pi(1+x^2)}\]
and \(Z = X +Y\) has density
\[f_Z(z) = \frac{1}{\pi^2} \int_{-\infty}^\infty \frac{1}{1+(z-y)^2} \frac{1}{1+y^2}dy.\]
This integral requires some effort, and we give here only the result (see Section 10.3, or Dwass\(^3\) ):
\[f_Z(z) =\frac{2}{\pi (4+z^2)}\]
Now, suppose that we ask for the density function of the average
\[A = (1/2)(X + Y ) \]
of \(X\) and \(Y\). Then \(A = (1/2)Z\). Exercise 5.2.19 shows that if \(U\) and \(V\) are two continuous random variables with density functions \(f_U(x)\) and \(f_V(x)\), respectively, and if \(V = aU\), then
\[f_V (x) = \bigg( \frac{1}{a}\bigg) f_U \bigg( \frac{x}{a} \bigg). \]
Thus, we have
\[f_A(z) = 2f_Z(2z) = \frac{1}{\pi(1+z^2)}\]
Hence, the average of two random variables, each having a Cauchy density, again has a Cauchy density; this remarkable property is a peculiarity of the Cauchy density. One consequence of this is that if the error in a certain measurement process had a Cauchy density and you averaged a number of measurements, the average could not be expected to be any more accurate than any one of your individual measurements!
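This peculiarity is easy to see numerically. In the Python sketch below (sampling a Cauchy variable by inverting its distribution function; the sample sizes are arbitrary choices of ours), the spread of averages of ten observations is essentially the same as the spread of single observations:

```python
import math
import random

random.seed(1)

def cauchy():
    """Standard Cauchy sample via the inverse CDF: tan(pi * (U - 1/2))."""
    return math.tan(math.pi * (random.random() - 0.5))

def iqr(xs):
    """Interquartile range: a spread measure that exists even without a mean."""
    xs = sorted(xs)
    n = len(xs)
    return xs[3 * n // 4] - xs[n // 4]

single = [cauchy() for _ in range(20_000)]
averaged = [sum(cauchy() for _ in range(10)) / 10 for _ in range(20_000)]

# Both interquartile ranges sit near 2, the IQR of a standard Cauchy:
# averaging ten measurements gained nothing.
print(iqr(single), iqr(averaged))
```

The interquartile range is used instead of the standard deviation because a Cauchy random variable has no finite variance.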
Example \(\PageIndex{5}\): Rayleigh Density
Suppose X and Y are two independent standard normal random variables. Now suppose we locate a point \(P\) in the \(xy\)-plane with coordinates \((X, Y)\) and ask: What is the density of the square of the distance of \(P\) from the origin? (We have already simulated this problem in Example 5.9.) Here, with the preceding notation, we have
\[f_X(x) = f_Y(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\]
Moreover, if \(X^2\) denotes the square of \(X\), then (see Theorem 5.1 and the discussion following)
\[ \begin{align} f_{X^2}(r) & = \Bigg\{ \begin{array}{cc} \frac{1}{2\sqrt{r}}\left(f_X(\sqrt{r}) + f_X(-\sqrt{r})\right), & \text{if } r>0\\ 0, & \text{otherwise} \end{array} \\ &= \Bigg\{ \begin{array}{cc} \frac{1}{\sqrt{2\pi r}}e^{-r/2}, & \text{if } r>0\\ 0, & \text{otherwise} \end{array} \end{align} \]
This is a gamma density with \(\lambda = 1/2\), \(\beta = 1/2\) (see Example 7.4). Now let \(R^2 = X^2 + Y^2\). Then, for \(r > 0\),
\[ f_{R^2}(r) = \int_0^r f_{X^2}(s)\, f_{Y^2}(r-s)\,ds = \frac{e^{-r/2}}{2\pi}\int_0^r \frac{ds}{\sqrt{s(r-s)}} = \frac{1}{2}e^{-r/2}.\]
Hence, \(R^2\) has a gamma density with \(\lambda = 1/2\), \(\beta = 1\). We can interpret this result as giving the density for the square of the distance of \(P\) from the center of a target if its coordinates are normally distributed. The density of the random variable \(R\) is obtained from that of \(R^2\) in the usual way (see Theorem 5.1), and we find
\[ f_R(r) = \Bigg\{ \begin{array}{cc} \frac{1}{2}e^{-r^2/2} \cdot 2r = re^{-r^2/2}, & \text{if } r \geq 0 \\ 0, & \text{otherwise} \end{array} \]
Physicists will recognize this as a Rayleigh density. Our result here agrees with our simulation in Example 5.9.
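That simulation can be sketched as follows: draw many points with independent standard normal coordinates and compare the empirical distribution of the distance \(R\) with the distribution function \(1 - e^{-r^2/2}\) implied by the Rayleigh density (sample size is an arbitrary choice of ours):

```python
import math
import random

random.seed(2)

# Distance from the origin of a point with independent standard normal coordinates.
dists = [math.hypot(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

def rayleigh_cdf(r):
    """CDF obtained by integrating the Rayleigh density r*exp(-r^2/2)."""
    return 1.0 - math.exp(-r * r / 2.0)

for r in (0.5, 1.0, 2.0):
    empirical = sum(d <= r for d in dists) / len(dists)
    print(r, empirical, rayleigh_cdf(r))  # empirical and theoretical CDFs agree closely
```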
Chi-Squared Density
More generally, the same method shows that the sum of the squares of n independent normally distributed random variables with mean 0 and standard deviation 1 has a gamma density with λ = 1/2 and β = n/2. Such a density is called a chi-squared density with n degrees of freedom. This density was introduced in Chapter 4.3. In Example 5.10, we used this density to test the hypothesis that two traits were independent.
Another important use of the chi-squared density is in comparing experimental data with a theoretical discrete distribution, to see whether the data supports the theoretical model. More specifically, suppose that we have an experiment with a finite set of outcomes. If the set of outcomes is countable, we group them into finitely many sets of outcomes. We propose a theoretical distribution which we think will model the experiment well. We obtain some data by repeating the experiment a number of times. Now we wish to check how well the theoretical distribution fits the data.
Let \(X\) be the random variable which represents a theoretical outcome in the model of the experiment, and let \(m(x)\) be the distribution function of \(X\). In a manner similar to what was done in Example 5.10, we calculate the value of the expression
\[ V = \sum_x \frac{(o_x - n\cdot m(x))^2}{n\cdot m(x)}\]
where the sum runs over all possible outcomes \(x\), \(n\) is the number of data points, and \(o_x\) denotes the number of outcomes of type \(x\) observed in the data. Then for moderate or large values of \(n\), the quantity \(V\) is approximately chi-squared distributed, with \(\nu - 1\) degrees of freedom, where \(\nu\) represents the number of possible outcomes. The proof of this is beyond the scope of this book, but we will illustrate the reasonableness of this statement in the next example. If the value of \(V\) is very large, when compared with the appropriate chi-squared density function, then we would tend to reject the hypothesis that the model is an appropriate one for the experiment at hand. We now give an example of this procedure.
Example \(\PageIndex{6}\): DieTest
Suppose we are given a single die. We wish to test the hypothesis that the die is fair. Thus, our theoretical distribution is the uniform distribution on the integers between 1 and 6. So, if we roll the die \(n\) times, the expected number of data points of each type is \(n/6\). Thus, if \(o_i\) denotes the actual number of data points of type \(i\), for \(1 \leq i \leq 6\), then the expression
\[ V = \sum_{i=1}^6 \frac{(o_i - n/6)^2}{n/6}\]
is approximately chi-squared distributed with 5 degrees of freedom.
Now suppose that we actually roll the die 60 times and obtain the data in Table 7.1. If we calculate V for this data, we obtain the value 13.6. The graph of the chi-squared density with 5 degrees of freedom is shown in Figure 7.4. One sees that values as large as 13.6 are rarely taken on by V if the die is fair, so we would reject the hypothesis that the die is fair. (When using this test, a statistician will reject the hypothesis if the data gives a value of V which is larger than 95% of the values one would expect to obtain if the hypothesis is true.)
In Figure 7.5, we show the results of rolling a die 60 times, then calculating \(V\), and then repeating this experiment 1000 times. The program that performs these calculations is called DieTest. We have superimposed the chi-squared density with 5 degrees of freedom; one can see that the data values fit the curve fairly well, which supports the statement that the chi-squared density is the correct one to use.
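A minimal version of such an experiment can be sketched in Python. This is our own illustration of the procedure, not the DieTest program itself:

```python
import random

random.seed(3)

def chi_square_v(rolls):
    """The statistic V for a list of die rolls, with expected count n/6 per face."""
    n = len(rolls)
    expected = n / 6.0
    counts = [rolls.count(face) for face in range(1, 7)]
    return sum((o - expected) ** 2 / expected for o in counts)

# Roll a fair die 60 times, compute V, and repeat the experiment 1000 times.
values = [chi_square_v([random.randint(1, 6) for _ in range(60)]) for _ in range(1000)]

mean_v = sum(values) / len(values)
frac_large = sum(v >= 13.6 for v in values) / len(values)
print(mean_v)      # near 5, the mean of a chi-squared variable with 5 degrees of freedom
print(frac_large)  # only a small fraction of fair-die experiments reach 13.6
```

The rarity of values as large as 13.6 under a fair die is exactly what justifies rejecting the hypothesis in the example above.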
So far we have looked at several important special cases for which the convolution integral can be evaluated explicitly. In general, the convolution of two continuous densities cannot be evaluated explicitly, and we must resort to numerical methods. Fortunately, these prove to be remarkably effective, at least for bounded densities.
Independent Trials
We now consider briefly the distribution of the sum of n independent random variables, all having the same density function. If \(X_1, X_2, . . . , X_n\) are these random variables and \(S_n = X_1 + X_2 + · · · + X_n\) is their sum, then we will have
\[f_{S_n}(x) = (f_{X_1} * f_{X_2} * \cdots * f_{X_n})(x),\]
where the right-hand side is an n-fold convolution. It is possible to calculate this density for general values of n in certain simple cases.
Example \(\PageIndex{7}\)
Suppose the \(X_i\) are uniformly distributed on the interval [0,1]. Then
\[f_{X_i}(x) = \Bigg\{ \begin{array}{cc} 1, & \text{if } 0\leq x \leq 1\\ 0, & \text{otherwise} \end{array}\]
and \(f_{S_n}(x)\) is given by the formula\(^4\)
\[f_{S_n}(x) = \Bigg\{ \begin{array}{cc} \frac{1}{(n-1)!}\sum_{0\leq j \leq x}(-1)^j\binom{n}{j}(x-j)^{n-1}, & \text{if } 0\leq x \leq n\\ 0, & \text{otherwise} \end{array}\]
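The formula can be evaluated directly; for \(n = 2\) it must reproduce the triangular density of Example 1. A Python sketch (the function name is ours):

```python
import math

def irwin_hall_density(x, n):
    """Density of the sum of n independent U[0,1] variables at x, 0 <= x <= n."""
    if not 0 <= x <= n:
        return 0.0
    # Sum over 0 <= j <= x, i.e. j = 0, ..., floor(x).
    total = sum((-1) ** j * math.comb(n, j) * (x - j) ** (n - 1)
                for j in range(int(x) + 1))
    return total / math.factorial(n - 1)

# For n = 2 the formula reproduces the triangular density:
print(irwin_hall_density(0.5, 2))  # 0.5  (= z on [0, 1])
print(irwin_hall_density(1.5, 2))  # 0.5  (= 2 - z on (1, 2])
```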
The density \(f_{S_n}(x)\) for \(n = 2, 4, 6, 8, 10\) is shown in Figure 7.6. If the \(X_i\) are distributed normally, with mean 0 and variance 1, then (cf. Example 7.5)
\[f_{X_i}(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2},\]
and
\[f_{S_n}(x) = \frac{1}{\sqrt{2\pi n}}e^{-x^2/2n}\]
Here the density \(f_{S_n}\) for \(n=5,10,15,20,25\) is shown in Figure 7.7.
If the \(X_i\) are all exponentially distributed, with mean \(1/\lambda\), then
\[f_{X_i}(x) = \lambda e^{-\lambda x},\]
and
\[f_{S_n}(x) = \frac{\lambda e^{-\lambda x}(\lambda x)^{n-1}}{(n-1)!} \]
In this case the density \(f_{S_n}\) for \(n = 2, 4, 6, 8, 10\) is shown in Figure 7.8.
Prove that $ \phi(n) =11 \cdot 3^n + 3 \cdot 7^n - 6 $ is divisible by 8 for all $n \in N$.
Base: $ n = 0 $
$ 8 | 11 + 3 - 6 $ is obvious.
Now let $\phi(n)$ be true we now prove that is also true for $ \phi(n+1)$.
So we get $ 11 \cdot 3^{n+1} + 3 \cdot 7^{n+1} - 6$ and I am stuck here, just can't find the way to rewrite this expression so that I can use inductive hypothesis or to get that one part of this sum is divisible by 8 and just prove by one more induction that the other part is divisible by 8.
For instance, in the last problem I had to prove that some expression a + b + c is divisible by 9. In the inductive step b was divisible by 9, so the only thing I had to do was show that a + c is divisible by 9, and I did that with another induction; I don't see if I can do the same thing here.
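For what it is worth, a quick computation confirms the claim numerically before attempting the induction (a sanity check, not a proof):

```python
def phi(n):
    """The expression from the problem: 11 * 3^n + 3 * 7^n - 6."""
    return 11 * 3**n + 3 * 7**n - 6

# phi(0) = 8, and every value up to n = 49 is divisible by 8.
print(all(phi(n) % 8 == 0 for n in range(50)))  # True
```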
Let's say I have points P1(10,10) and P2(20,20).
I want to find a point P3 which lies between these two points and is 3 units away from P1.
What is the formula to find P3 ?
Known values: X1, X2, Y1 , Y2, distance.
Wanted values: X3, Y3
Here are some hints:
In general terms, the unit vector is
$$\hat{u} = \frac{x_2-x_1}{D}\hat{x} + \frac{y_2-y_1}{D}\hat{y},$$
where $\hat{x}, \hat{y}$ are unit vectors in the $x$ and $y$ directions, and $D = \sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$ is the distance between $P_1$ and $P_2$.
Then, if you're looking for the point a distance $d$ away from $P_1$ along the line through $P_1$ and $P_2$, then, vector-based the answer is
$$\vec{P_3} = \vec{P_1} + d\hat{u}.$$
Splitting up the components gives:
$$x_3 = x_1 + \frac{d}{D}(x_2-x_1)$$
$$y_3 = y_1 + \frac{d}{D}(y_2-y_1).$$
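These component formulas translate directly into code; a small Python sketch (the function name is ours):

```python
import math

def point_along(p1, p2, d):
    """Point a distance d from p1 along the segment from p1 to p2."""
    (x1, y1), (x2, y2) = p1, p2
    big_d = math.hypot(x2 - x1, y2 - y1)  # distance D between P1 and P2
    t = d / big_d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

p3 = point_along((10, 10), (20, 20), 3)
print(p3)  # both coordinates equal 10 + 3/sqrt(2), about 12.1213
```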
A point on the line through $P_1$ and $P_2$ will have the form $$\mathbf x = (1-\lambda)P_1 + \lambda P_2$$ for some $\lambda \in \mathbb R$. If $\lambda \in [0,1]$, then $\mathbf x$ will be on the line segment between $P_1$ and $P_2$. In particular, when $\lambda = 1$, then $\mathbf x = P_2$.
Let's scale $\lambda$ to $\lambda'$ so that $\mathbf x = P_2$ when $\lambda' = 10\sqrt2$, which is the distance from $P_1$ to $P_2$. In other words, we have $\lambda' = 10\sqrt2\lambda$, so $$\mathbf x = \left(1-\frac{\lambda'}{10\sqrt2}\right)P_1 + \frac{\lambda'}{10\sqrt2}P_2.$$
Now substitute $\lambda' = 3$.
There is a formula, but you'll have to modify it to suit your needs. The distance formula says that the distance between A($x_1,y_1$) and B($x_2,y_2$) is $\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$. But here, since $x_1=y_1$ and $x_2=y_2$, this is $10\sqrt2$.

So, the distance between A(10,10) and B(20,20) is $10\sqrt2$.

So your point C, which is 3 units away from A, divides the line segment AB into two parts of lengths 3 and $(10\sqrt2-3)$.

Another formula (the section formula) says that if a point C($x,y$) divides the segment between A($x_1,y_1$) and B($x_2,y_2$) in the ratio $m:n$, then $$C = \left(\frac{m x_2 + n x_1}{m+n},\ \frac{m y_2 + n y_1}{m+n}\right).$$

Substituting values with $m = 3$ and $n = 10\sqrt2 - 3$, we get the point to be $$C\left({10+{3\over\sqrt2}},\ {10+{3\over\sqrt2}}\right)$$
The numerical aperture with respect to a point P depends on the half-angle θ of the maximum cone of light that can enter or exit the lens.
In optics, the numerical aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. The exact definition of the term varies slightly between different areas of optics.
General optics
In most areas of optics, and especially in microscopy, the numerical aperture of an optical system such as an objective lens is defined by
\mathrm{NA} = n \sin \theta\;
where n is the index of refraction of the medium in which the lens is working (1.0 for air, 1.33 for pure water, and up to 1.56 for oils), and θ is the half-angle of the maximum cone of light that can enter or exit the lens. In general, this is the angle of the real marginal ray in the system. The angular aperture of the lens is approximately twice this value (within the paraxial approximation). The NA is generally measured with respect to a particular object or image point and will vary as that point is moved.
In microscopy, NA is important because it indicates the resolving power of a lens. The size of the finest detail that can be resolved is proportional to λ/NA, where λ is the wavelength of the light. A lens with a larger numerical aperture will be able to visualize finer details than a lens with a smaller numerical aperture. Lenses with larger numerical apertures also collect more light and will generally provide a brighter image.
Numerical aperture is used to define the "pit size" in optical disc formats.
Numerical aperture versus f-number
Numerical aperture is not typically used in photography. Instead, the angular aperture of a lens (or an imaging mirror) is expressed by the f-number, written f/# or N, which is defined as the ratio of the focal length to the diameter of the entrance pupil:

\ N = f/D
This ratio is related to the image-space numerical aperture when the lens is focused at infinity. Based on the diagram at right, the image-space numerical aperture of the lens is:

\mathrm{NA_i} = n \sin \theta = n \sin \arctan \frac{D}{2f} \approx n \frac {D}{2f}, thus N \approx \frac{1}{2\;\mathrm{NA_i}}, assuming normal use in air (n = 1).
The approximation holds when the numerical aperture is small, and it is nearly exact even at large numerical apertures for well-corrected camera lenses. For numerical apertures less than about 0.5 (f-numbers greater than about 1) the divergence between the approximation and the full expression is less than 10%. Beyond this, the approximation breaks down. As Rudolf Kingslake explains, "It is a common error to suppose that the ratio [D/2f] is actually equal to \tan \theta, and not \sin \theta ... The tangent would, of course, be correct if the principal planes were really plane. However, the complete theory of the Abbe sine condition shows that if a lens is corrected for coma and spherical aberration, as all good photographic objectives must be, the second principal plane becomes a portion of a sphere of radius f centered about the focal point, ..." In this sense, the traditional thin-lens definition and illustration of f-number is misleading, and defining it in terms of numerical aperture may be more meaningful.
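The size of that divergence is easy to tabulate. This Python sketch (our own illustration) compares the exact thin-lens expression n sin(arctan(D/2f)) with the approximation 1/(2N) in air:

```python
import math

def image_space_na(f_number, n_medium=1.0):
    """Exact thin-lens image-space NA: n * sin(arctan(D / (2 f))), with D/f = 1/N."""
    return n_medium * math.sin(math.atan(1.0 / (2.0 * f_number)))

def na_approx(f_number):
    """Small-angle approximation NA = 1 / (2 N)."""
    return 1.0 / (2.0 * f_number)

for n_fnum in (8.0, 4.0, 2.0, 1.0):
    exact = image_space_na(n_fnum)
    approx = na_approx(n_fnum)
    # The gap between approx and exact grows as the f-number shrinks.
    print(n_fnum, round(exact, 4), round(approx, 4))
```

At f/8 the two values are nearly identical; near f/1 (NA approaching 0.5) the approximation overestimates the NA by roughly ten percent, as the text describes.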
"Working" or "effective" f-number
The f-number describes the light-gathering ability of the lens in the case where the marginal rays on the object side are parallel to the axis of the lens. This case is commonly encountered in photography, where objects being photographed are often far from the camera. When the object is not distant from the lens, however, the image is no longer formed in the lens's focal plane, and the f-number no longer accurately describes the light-gathering ability of the lens or the image-side numerical aperture. In this case, the numerical aperture is related to what is sometimes called the "working f-number" or "effective f-number." The working f-number is defined by modifying the relation above, taking into account the magnification from object to image:
\frac{1}{2 \mathrm{NA_i}} = N_\mathrm{w} = (1-m)\, N,
where N_\mathrm{w} is the working f-number, m is the lens's magnification for an object a particular distance away, and the NA is defined in terms of the angle of the marginal ray as before. The magnification here is typically negative; in photography, the factor is sometimes written as 1 + m, where m represents the absolute value of the magnification; in either case, the correction factor is 1 or greater.
The two equalities in the equation above are each taken by various authors as the definition of working f-number, as the cited sources illustrate. They are not necessarily both exact, but are often treated as if they are. The actual situation is more complicated, as Allen R. Greenleaf explains: "Illuminance varies inversely as the square of the distance between the exit pupil of the lens and the position of the plate or film. Because the position of the exit pupil usually is unknown to the user of a lens, the rear conjugate focal distance is used instead; the resultant theoretical error so introduced is insignificant with most types of photographic lenses."
Conversely, the object-side numerical aperture is related to the f-number by way of the magnification (tending to zero for a distant object):

\frac{1}{2 \mathrm{NA_o}} = \frac{m-1}{m}\, N.

Laser physics
In laser physics, the numerical aperture is defined slightly differently. Laser beams spread out as they propagate, but slowly. Far away from the narrowest part of the beam, the spread is roughly linear with distance; the laser beam forms a cone of light in the "far field". The same relation gives the NA,
\mathrm{NA} = n \sin \theta,\;
but θ is defined differently. Laser beams typically do not have sharp edges like the cone of light that passes through the aperture of a lens does. Instead, the irradiance falls off gradually away from the center of the beam. It is very common for the beam to have a Gaussian profile. Laser physicists typically choose to make θ the divergence of the beam: the far-field angle between the propagation direction and the distance from the beam axis for which the irradiance drops to 1/e² times the wavefront total irradiance. The NA of a Gaussian laser beam is then related to its minimum spot size by
\mathrm{NA}\simeq \frac{2 \lambda_0}{\pi D},
where λ₀ is the vacuum wavelength of the light, and D is the diameter of the beam at its narrowest spot, measured between the 1/e² irradiance points ("full width at e⁻² maximum"). Note that this means that a laser beam that is focused to a small spot will spread out quickly as it moves away from the focus, while a large-diameter laser beam can stay roughly the same size over a very long distance.
Fiber optics

Multimode optical fiber will only propagate light that enters the fiber within a certain cone, known as the acceptance cone of the fiber. The half-angle of this cone is called the acceptance angle, θ_max. For step-index multimode fiber, the acceptance angle is determined only by the indices of refraction:
n \sin \theta_\max = \sqrt{n_1^2 - n_2^2},
where n₁ is the refractive index of the fiber core, and n₂ is the refractive index of the cladding.
When a light ray is incident from a medium of refractive index n to the core of index n₁, Snell's law at the medium-core interface gives
n\sin\theta_i = n_1\sin\theta_r.\
From the above figure and using trigonometry, we get :
\sin\theta_{r} = \sin\left({90^\circ} - \theta_{c} \right) = \cos\theta_{c}

where \theta_{c} = \sin^{-1} \frac{n_{2}}{n_{1}} is the critical angle for total internal reflection, since \sin\theta_{c} = \frac{n_{2}}{n_{1}}.
Substituting for sin θ_r in Snell's law we get:
\frac{n}{n_{1}}\sin\theta_{i} = \cos\theta_{c}.
By squaring both sides
\frac{n^{2}}{n_{1}^{2}}\sin^{2}\theta_{i} = \cos ^{2}\theta_{c}= 1 - \sin^{2}\theta_{c} = 1 - \frac{n_{2}^{2}}{n_{1}^{2}}.
Thus,
n \sin \theta_{i} = \sqrt{n_1^2 - n_2^2},
from where the formula given above follows.
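The derivation can be sanity-checked numerically. In this Python sketch the core and cladding indices are illustrative values we have assumed, not figures taken from the article:

```python
import math

def fiber_na(n1, n2):
    """Numerical aperture of a step-index fiber: sqrt(n1^2 - n2^2)."""
    return math.sqrt(n1**2 - n2**2)

def acceptance_half_angle(n1, n2, n_outside=1.0):
    """Maximum launch half-angle in degrees, from n * sin(theta_max) = NA."""
    return math.degrees(math.asin(fiber_na(n1, n2) / n_outside))

# Assumed indices for a generic silica fiber: core 1.475, cladding 1.460.
na = fiber_na(1.475, 1.460)
print(na, acceptance_half_angle(1.475, 1.460))
```

As a cross-check, n₁ cos θ_c with θ_c = arcsin(n₂/n₁) reproduces the same NA, mirroring the trigonometric steps above.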
This has the same form as the numerical aperture in other optical systems, so it has become common to define the NA of any type of fiber to be
\mathrm{NA} = \sqrt{n_1^2 - n_2^2},
where n₁ is the refractive index along the central axis of the fiber. Note that when this definition is used, the connection between the NA and the acceptance angle of the fiber becomes only an approximation. In particular, manufacturers often quote "NA" for single-mode fiber based on this formula, even though the acceptance angle for single-mode fiber is quite different and cannot be determined from the indices of refraction alone.
The number of bound modes, the mode volume, is related to the normalized frequency and thus to the NA.
In multimode fibers, the term equilibrium numerical aperture is sometimes used. This refers to the numerical aperture with respect to the extreme exit angle of a ray emerging from a fiber in which an equilibrium mode distribution has been established.
This is a question from a book that I am reading:
Prove that a monotone non-decreasing sequence of real numbers which is bounded above converges to a limit $a$, and that limit is the least upper bound of the set $\{a_1, a_2, ...\}$. (Similarly, prove the same with non-increasing sequences bounded below, ie $\lim_{n\rightarrow \infty} a_n =$ greatest lower bound.)
My issue with this is that this is perfectly intuitive. It makes complete sense, but I just have no idea how to formulate this in a mathematically formal way.
My attempt goes like this: Let $\{a_n\}$ be bounded above and monotone non-decreasing. Then, $a_n \leq a_{n+1} \leq M$, where $M$ denotes an upper bound. From here, it is clear that there exists some least upper bound of this sequence $\leq M$, denoted $a^*$. For this part, I'm not sure if I can just say that this is clear, even though it does seem obvious. I assume that I could argue by contrapositive, and if I assumed there was no least upper bound then there is no way that this sequence is bounded above.
Then, $a_n \leq a^* \leq M$ for all $n \in \mathbb{N}$. This leads to $0 \leq a^* - a_n \leq M - a_n$. At this point, it looks pretty close to being the definition of the limit. Can I choose $\epsilon = 1$, and note that since $a_n \leq a^*$ for any $n$, then certainly for any sufficiently large $n>N$, $a^* - a_n = |a_n - a^*| < \epsilon = 1$, implying that $a^*$ is in fact our limit.
Here is a closely related pair of examples from operator theory, von Neumann's inequality and the theory of unitary dilations of contractions on Hilbert space, where things work for 1 or 2 variables but not for 3 or more.
In one variable, von Neumann's inequality says that if $T$ is an operator on a (complex) Hilbert space $H$ with $\|T\|\leq1$ and $p$ is in $\mathbb{C}[z]$, then $\|p(T)\|\leq\sup\{|p(z)|:|z|=1\}$. Szőkefalvi-Nagy's dilation theorem says that (with the same assumptions on $T$) there is a unitary operator $U$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T^n=PU^n|_H$ for each positive integer $n$.
These results extend to two commuting variables, as Ando proved in 1963. If $T_1$ and $T_2$ are commuting contractions on $H$, Ando's theorem says that there are commuting unitary operators $U_1$ and $U_2$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T_1^{n_1}T_2^{n_2}=PU_1^{n_1}U_2^{n_2}|_H$ for each pair of nonnegative integers $n_1$ and $n_2$. This extension of Sz.-Nagy's theorem has the extension of von Neumann's inequality as a corollary: If $T_1$ and $T_2$ are commuting contractions on a Hilbert space and $p$ is in $\mathbb{C}[z_1,z_2]$, then $\|p(T_1,T_2)\|\leq\sup\{|p(z_1,z_2)|:|z_1|=|z_2|=1\}$.
Things aren't so nice in 3 (or more) variables. Parrott showed in 1970 that 3 or more commuting contractions need not have commuting unitary dilations. Even worse, the analogues of von Neumann's inequality don't hold for $n$-tuples of commuting contractions when $n\geq3$. Some have considered the problem of quantifying how badly the inequalities can fail. Let $K_n$ denote the infimum of the set of those positive constants $K$ such that if $T_1,\ldots,T_n$ are commuting contractions and $p$ is in $\mathbb{C}[z_1,\ldots,z_n]$, then $\|p(T_1,\ldots,T_n)\|\leq K\cdot\sup\{|p(z_1,\ldots,z_n)|:|z_1|=\cdots=|z_n|=1\}$. So von Neumann's inequality says that $K_1=1$, and Ando's Theorem yields $K_2=1$. It is known in general that $K_n\geq\frac{\sqrt{n}}{11}$. When $n>2$, it is not known whether $K_n\lt\infty$.
See Paulsen's book (2002) for more. On page 69 he writes:
The fact that von Neumann’s inequality holds for two commuting contractions
but not three or more is still the source of many surprising results and
intriguing questions. Many deep results about analytic functions come
from this dichotomy. For example, Agler [used] Ando’s theorem to deduce an
analogue of the classical Nevanlinna–Pick interpolation formula
for analytic functions on the bidisk. Because of the failure of a von
Neumann inequality for three or more commuting contractions, the analogous
formula for the tridisk is known to be false, and the problem of finding the
correct analogue of the Nevanlinna–Pick formula for polydisks
in three or more variables remains open.
I need to solve a Reaction-Diffusion using Finite Elements, piecewise linear elements. In this problem, a reaction $A \rightarrow B$, with rate law $ r_A = - k_A \cdot u_A $, takes part, where $u_i$ denotes concentration. Initially, $u_A = u_B = 0$. The time dependent formulation for the conservation of $A$ and $B$ is:
$ \begin{matrix} \frac{\partial \ u_A}{\partial \ t} -\Delta u_A - k \cdot u_A = f_A \end{matrix} $
$ \begin{matrix} \frac{\partial \ u_B}{\partial \ t} -\Delta u_B + k \cdot u_A = 0 \end{matrix}$
My question is: What is the best way to solve for $u_B$? Solving only for $u_A$ seems straightforward, using Crank–Nicolson as the time discretization and finding a weak formulation that looks like:
$U^n[M+\delta_t\theta (A- kM)] = U^{n-1}[M+\delta_t(1-\theta) (-A+kM)] + \delta_t (F+ N)$
Where $U^n$ denotes the solution at time step $n$; $M, A, F, N$ are the mass matrix, stiffness matrix, load vector and B.C. vector (Neumann); $\delta_t$ the time step and $\theta$ the time discretization parameter ($1/2$ for C-N).
The first approach that I thought of is to solve for $u_A$ at each time step and then, using the value of $u_A$ at each time step, solve for $u_B$ using similar weak formulation:
$U^n[M+\delta_t\theta (A + kC)] = U^{n-1}[M+\delta_t(1-\theta) (-A-kC)]$
Where $kC$ takes the place of $kM$ in the first equation, and $C = \int u_A \ \phi_i dx$.
This would imply evaluating the matrix $C$ at each time step, using the $u_A$ calculated on the previous time step, interpolated on the quadrature points, which will certainly delay the process.
Is this the standard procedure, or is there a different, more efficient approach? Thank you in advance.
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... |
Here's one solution: I assumed the radius of the hole's curvature matches the curvature radius of the circle, the hole's straight side is equal to the circle's radius, and its curved edges meet the straight edge at right angles. Used Flash Pro to solve this, of all things: it has the useful feature that as you paint while editing an object, it updates all ...
Quite an interesting cut I had to make to get it to fit. Shift the bottom piece 2 tiles up and one tile right to produce a 10 x 10 square. You can save all 100 square feet. The end result should look like this
We will make all of our cuts vertical, so we can treat this as a square which we need to divide into $10$ pieces with equal slices of the area and the perimeter. This is reasonably easy: Choose $10$ points dividing the perimeter into $10$ equal lengths, and then make cuts inwards from each point to the center. The area of a triangular slice is half the ...
Yes, it's possible, because one can fit 5 disjoint 1x1 squares in a 2.75x2.75 square: four in the corners, and one in the center rotated 45 degrees. The four cigarette holes can't eliminate all 5 squares. The tilted square fits in a cross whose central square has a diagonal of length 1, so the cross only needs width $1/ \sqrt{2} \approx 0.707 < 0.75$. So, ...
Index your rectangle from (0,0) to (10,2), then cut as shown in the original diagram. These four pieces can be used to make the square. Note that this dissection works without any flipping or even any rotation of the pieces! Showing that it's a square is also relatively simple: easy geometry shows that the angles are right angles, and that the sides are $2\sqrt{5}$.
The dimensions are as in the original diagram, and the tiling looks like this: Working out the dimensions of the rectangle is quite easy. We know its total area is $4209$ (i.e., $2^2 + 5^2 + 7^2 + 9^2 + 16^2 + 25^2 + 28^2 + 33^2 + 36^2$). This factorizes as $3 \times 23 \times 61$, and in order to fit in a square with a side length of 36, the rectangle must be $3 \times 23$ units ...
If all cuts are straight cuts and the cake is a rectangular prism or cylinder, it's not possible. From Wikipedia's page on the Cake Number: In mathematics, the cake number, denoted by Cn, is the maximum number of regions into which a 3-dimensional cube can be partitioned by exactly n planes. The cake number is so-called because one may imagine each ...
JonTheMon and xnor's solutions assume we have superior equipment and skill, but the question states that we "have a hacksaw". Well, with a hacksaw we must start from an exposed side; we can't start a cut in the middle of a plank! The most efficient "side" would be the current hole: If the hacksaw won't fit in the hole, we "cut an L shaped piece off the ...
As pointed out in a previous answer, cutting a hole in the middle of the table may be unfeasible if all you have is a hacksaw. Using the existing hole as a starting point, the cut can be reduced to the length of a single segment bifurcating at both ends to meet the old and the newly cut hole tangentially at a convenient distance. Something like this....
One of the possible solutions is:
Calculations: Let's say the square length is L and the height is H.
Frosting on top = 0.5 * 0.4L * 0.5L
Frosting on bottom = same as top
Frosting on side = 0.4L * H
Total frosting area = 0.2L² + 0.4LH
Cake volume = top area * height = 0.5 * 0.4L * 0.5L * H = 0.1L²H
Validation: Total vol = 1L²H is 10 times above...
You cut a square like this: And rotate it 180 degrees. The cut square (or rectangle) simply needs to have its centerpoint be halfway between the hole and the center of the square, and to be large enough to contain the hole. Boy do I feel silly seeing the intended solution...
Solutions are as in the following diagram: Addition: My original solution for E has 5 cuts and 9 separate pieces; a simpler solution is shown below, with only four cuts (to the cross) and five pieces. Dimensions are as follows: a) side lengths = 2 and 1 [2 cuts, 3 pieces]; b) side lengths = sqrt(5/2) [2 cuts, 4 pieces]; c) side lengths = 3/sqrt(2) and ...
Well, from my viewpoint this is a four-piece dissection, since parts of each piece don't move relative to each other. They are even connected, to some extent. However, I would completely agree that there are about 24 pieces in this dissection, from a pragmatic viewpoint. At least give some credit for an hour of fiddling in MS Paint here.
Look at the cube down a space diagonal. It should appear as a hexagon, which can be divided into three rhombuses by line segments from the center to every other corner. If these three cuts are made, they will divide the cube into three pieces, which must be congruent by symmetry.
From the deleted answer from frodoswalker we know the possible ranges of the squares: Our maximum area is $1*2+3*4+5*6+7*8+9*10+11*12+13*14=504$, which means the maximum square width is $\lfloor \sqrt{504} \rfloor = 22$. Our minimum area is $1*14+2*13+3*12+4*11+5*10+6*9+7*8=280$, which means the minimum square width is $\lceil \sqrt{280} \rceil = 17$...
Let $A=(-30,60),B=(0,0),C=(150,0),D=(300,300),E=(225,400)$. $F=(-12,84)$ is on $AE$ and $G=(175,50)$ is on $CD$. Then $ABCDE$ is similar to $FABCG$, by a factor of $\sqrt5$. The key point of this solution is that the angles $GFA, EAB, ABC, BCD, CDE$ are all equal ($180^\circ+\arctan(-2)\approx116.57^\circ$) and $FA,AB,BC,CD$ are in geometric sequence. Then, we can rotate ...
The first thing to do is divide the figure into a number of sections which is a multiple of four. The easiest way to do this is to split it up into smaller triangles: Note that there are now 12 sections, which we can divide into groups of 3. Three triangles form a square and a smaller triangle. We know, as a result, that a square must go here (another way to ...
Let $G$ be an acyclic, context-free grammar over a fixed alphabet $\Sigma=\{a_1,\dots,a_k\}$ with the restriction (without loss of generality) that $|w|=2$ for each rule $A\to w$ in the grammar. Acyclic means that if $N$ is the set of nonterminals, then $$\{(A,B)\in N^2\mid A\to xBy\text{ is a rule in }G\text{; }x,y\in(\Sigma\cup N)^*\}$$ is an acyclic relation, so $L(G)$ is finite. In this setting, let the size of a grammar be defined as the number of nonterminals.
My question
Let $\#_w(i,j)$ be the number of different subsequences of $a_ia_j$ in $w$. For example $w=a_1a_1a_2a_2a_1$ yields $\#_w(1,2)=4$, $\#_w(2,1)=2$, $\#_w(1,1)=3$ and $\#_w(2,2)=1$. Now I am looking for the complexity of:
Given: An acyclic, context-free grammar $G$ and numbers $n_{i,j}$ with $i,j\in\{1,\cdots,k\}$
Problem: Is there a word $w\in L(G)$ with $\#_w(i,j)=n_{i,j}$ for all $i,j\in\{1,\cdots,k\}$?
Background / well-studied, related problem
The following problem is one step behind my question. In my question, the numbers of occurrences of all subsequences of length $2$ are given. We could first ask what the complexity is if the given numbers are the occurrences of subsequences of length $1$, i.e. the given numbers are the occurrences of the alphabet symbols. So, let $\#_w(i)$ denote the number of occurrences of $a_i$ in $w\in\Sigma^*$. The following problem is known to be $\mathsf{NP}$-complete:
Given: An acyclic, context-free grammar $G$ and numbers $n_1,\dots ,n_k$
Problem: Is there a word $w\in L(G)$ with $\#_w(i)=n_i$ for all $i\in\{1,\cdots,k\}$?
McKenzie and Wagner ("The Complexity of Membership Problems for Circuits over Sets of Natural Numbers") provide an $\mathsf{NP}$ algorithm to solve the membership problem of circuits over the natural numbers with $\cup$ and $+$. A slightly modified algorithm solves our problem. The algorithm in short: in addition to the given numbers, we guess for each nonterminal how often it occurs in a derivation tree and for each rule how often it is applied in a derivation tree. Afterwards, we check whether there is a derivation tree of $G$ satisfying the guessed numbers by checking some relations between those numbers. The problem should also be $\mathsf{NP}$-hard, e.g. as a consequence of Kopczyński and Widjaja To ("Complexity of Problems for Parikh Images of Automata").
Still $\mathsf{NP}$?
Is the problem for subsequences of length $2$ still solvable in $\mathsf{NP}$? Of course it is at least as hard as the related problem for subsequences of length $1$, because $$\#_w(i,i)=\frac {\#_w(i)\cdot(\#_w(i)-1)}{2}.$$ Unfortunately, I am neither able to extend the algorithm of McKenzie and Wagner to get an $\mathsf{NP}$ algorithm, nor can I show another hardness result like $\mathsf{coNP}$-hardness. Thanks for any help.
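For concreteness, the example counts $\#_w(i,j)$ given above can be checked directly; this is an illustrative sketch, not part of the question:

```python
# Sketch: directly counting the length-2 subsequence occurrences #_w(i,j)
# for the example word w = a1 a1 a2 a2 a1 given above.
from itertools import combinations

def pair_counts(w):
    # count, for each ordered symbol pair, the index pairs p < q realizing it
    counts = {}
    for p, q in combinations(range(len(w)), 2):
        counts[(w[p], w[q])] = counts.get((w[p], w[q]), 0) + 1
    return counts

w = [1, 1, 2, 2, 1]
c = pair_counts(w)
# c reproduces the stated values: #(1,2)=4, #(2,1)=2, #(1,1)=3, #(2,2)=1,
# and #(i,i) = #(i)(#(i)-1)/2, the identity used for the hardness remark
```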
Assume that the instantaneous utility function $u(C)$ in equation $(2.1)$ is $\ln(C(t))$. Consider the problem of a household maximizing $(2.1)$ subject to $(2.6)$. Find an expression for $C$ at each time as a function of initial wealth plus the present value of labor income, the path of $r(t)$, and the parameters of the utility function
\begin{equation} U = \int_{t=0}^{\infty} e^{-\rho t} u(C(t)) \frac{L(t)}{H} dt \tag{2.1} \end{equation}
\begin{equation} \int_{t=0}^{\infty} e^{- R(t)} C(t) \frac{L(t)}{H} dt \leq \frac{K(0)}{H} + \int_{t=0}^{\infty} e^{- R(t)} W(t) \frac{L(t)}{H} dt \tag{2.6} \end{equation}
First, I want to check that I am solving for the right values: initial wealth is $K(0)$, labor income is $W(t)$, $r(t)$ is the real interest rate, with $R(t) = \int_{\tau=0}^{t} r(\tau) d \tau$, and the parameters of the utility function are $\rho, L(t), H$. I have tried to solve this using the Lagrangian and an informal calculus-of-variations approach, so that we can ignore the integrals in these terms:
$$ \mathcal{L} = e^{-\rho t} \ln(C(t)) \frac{L(t)}{H} + \lambda \left[ \frac{K(0)}{H} + e^{- r(t)} W(t) \frac{L(t)}{H} - e^{- r(t)} C(t) \frac{L(t)}{H} \right] $$
$$ \frac{\partial \mathcal{L}}{\partial C} = \frac{e^{- \rho t} L(t)}{C(t) H} - \frac{\lambda e^{- r(t)} L(t)}{H} $$
$$\Rightarrow C(t) = e^{-\rho t}e^{r(t)} \lambda^{-1} $$
$$ \frac{\partial \mathcal{L}}{\partial \lambda} = \frac{K(0)}{H} + e^{- r(t)} W(t) \frac{L(t)}{H} - e^{- r(t)} C(t) \frac{L(t)}{H} $$
Plug the $C(t)$ obtained in the first partial to get
$$ \frac{K(0)}{H} + e^{-r(t)}W(t)\frac{L(t)}{H} = e^{-r(t)} \left( e^{-\rho t}e^{r(t)} \lambda^{-1} \right) \frac{L(t)}{H} $$
$$ \Rightarrow K(0) + e^{-r(t)}W(t) L(t) = e^{-\rho t} \lambda^{-1} L(t) $$
$$ \Rightarrow \lambda = \frac{e^{-\rho t} L(t)}{K(0) + e^{-r(t)} W(t) L(t)} $$
Now plugging this into the $C(t)$ equation to this obtain
$$ C(t) = e^{-\rho t}e^{r(t)}* \frac{K(0) + e^{-r(t)} W(t) L(t)}{e^{-\rho t} L(t)} $$
$$ = \frac{K(0)e^{r(t)}}{L(t)} + W(t) $$
However the solution manual has the following:
$$ C(t) = e^{R(t) - \rho t} \left[ \frac{W(t)}{L(0)/H} (\rho - n) \right]$$
It would be great if these were equivalent, but it does not look that way to me. Moreover, my solution includes neither $\rho$ nor $H$, so somewhere I must have messed up.
Skills to Develop
By the end of this chapter, the student should be able to:
Recognize the normal probability distribution and apply it appropriately.
Recognize the standard normal probability distribution and apply it appropriately.
Compare normal probabilities by converting to the standard normal distribution.
The normal, a continuous distribution, is the most important of all the distributions. It is widely used and even more widely abused. Its graph is bell-shaped. You see the bell curve in almost all disciplines. Some of these include psychology, business, economics, the sciences, nursing, and, of course, mathematics. Some of your instructors may use the normal distribution to help determine your grade. Most IQ scores are normally distributed. Often real-estate prices fit a normal distribution. The normal distribution is extremely important, but it cannot be applied to everything in the real world.
Figure \(\PageIndex{1}\): If you ask enough people about their shoe size, you will find that your graphed data is shaped like a bell curve and can be described as normally distributed. (credit: Ömer Ünlϋ)
In this chapter, you will study the normal distribution, the standard normal distribution, and applications associated with them. The normal distribution has two parameters (two numerical descriptive measures), the mean (\(\mu\)) and the standard deviation (\(\sigma\)). If \(X\) is a quantity to be measured that has a normal distribution with mean (\(\mu\)) and standard deviation (\(\sigma\)), we designate this by writing \(X \sim N(\mu, \sigma)\). Its probability density function is
\[f(x) = \dfrac{1}{\sigma \cdot \sqrt{2 \cdot \pi}} \cdot e^{\left(-\dfrac{1}{2}\right) \cdot \left(\dfrac{x-\mu}{\sigma}\right)^{2}}\]
The probability density function is a rather complicated function.
Do not memorize it. It is not necessary.
The cumulative distribution function is \(P(X < x)\). It is calculated either by a calculator or a computer, or it is looked up in a table. Technology has made the tables virtually obsolete. For that reason, as well as the fact that there are various table formats, we are not including table instructions.
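As an illustration of how software replaces the tables (a sketch using Python's standard library, not part of the text), both the pdf and the cdf \(P(X < x)\) can be computed directly:

```python
# Sketch: computing the normal pdf and cdf P(X < x) with Python's standard
# library (math.erf), in place of printed tables.
import math

def normal_pdf(x, mu, sigma):
    # f(x) = 1/(sigma*sqrt(2*pi)) * exp(-(1/2)*((x - mu)/sigma)^2)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu, sigma):
    # P(X < x) for X ~ N(mu, sigma), expressed via the error function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# e.g. for the standard normal, P(X < 0) = 0.5 and P(X < 1.96) is about 0.975
```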
Figure \(\PageIndex{2}\): The standard normal distribution
The curve is symmetrical about a vertical line drawn through the mean, \(\mu\). In theory, the mean is the same as the median, because the graph is symmetric about \(\mu\). As the notation indicates, the normal distribution depends only on the mean and the standard deviation. Since the area under the curve must equal one, a change in the standard deviation, \(\sigma\), causes a change in the shape of the curve; the curve becomes fatter or skinnier depending on \(\sigma\). A change in \(\mu\) causes the graph to shift to the left or right. This means there are an infinite number of normal probability distributions. One of special interest is called the standard normal distribution.
COLLABORATIVE CLASSROOM ACTIVITY
Your instructor will record the heights of both men and women in your class, separately. Draw histograms of your data. Then draw a smooth curve through each histogram. Is each curve somewhat bell-shaped? Do you think that if you had recorded 200 data values for men and 200 for women that the curves would look bell-shaped? Calculate the mean for each data set. Write the means on the
x-axis of the appropriate graph below the peak. Shade the approximate area that represents the probability that one randomly chosen male is taller than 72 inches. Shade the approximate area that represents the probability that one randomly chosen female is shorter than 60 inches. If the total area under each curve is one, does either probability appear to be more than 0.5?
Formula Review
\(X \sim N(\mu, \sigma)\), where \(\mu =\) the mean and \(\sigma =\) the standard deviation.
Glossary
Normal Distribution: a continuous random variable (RV) with pdf \[f(x) = \dfrac{1}{\sigma \sqrt{2 \pi}}e^{-\dfrac{(x - \mu)^{2}}{2 \sigma^{2}}}\], where \(\mu\) is the mean of the distribution and \(\sigma\) is the standard deviation; notation: \(X \sim N(\mu, \sigma)\). If \(\mu = 0\) and \(\sigma = 1\), the RV is called the standard normal distribution.
Contributors
Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/30189442-699...b91b9de@18.114. |
Differential Equations
An equation involving derivatives of one or more dependent variables with respect to one or more independent variables is called a differential equation. For example:
\begin{equation} \frac{d^{2}y}{dx^2} + xy \left( \frac{dy}{dx}\right)^2 = 0 \end{equation} \begin{equation} \frac{d^{4}x}{dt^4} + 5 \frac{d^{2}x}{dt^2} + 3x = \sin t \end{equation} \begin{equation}\frac{\partial v}{\partial s} + \frac{\partial v}{\partial t} = v \end{equation} \begin{equation}\frac{\partial^{2}u}{\partial x^2} + \frac{\partial^{2}u}{\partial y^2} + \frac{\partial^{2}u}{\partial z^2} = 0 \end{equation}
Ordinary Differential Equations
A differential equation involving ordinary derivatives of one or more dependent variables with respect to a single independent variable is called an ordinary differential equation. For example, equations 1 and 2 are ordinary differential equations. In equation 1 the variable x is the single independent variable and y is a dependent variable; in equation 2 the independent variable is t, whereas x is dependent.
Partial Differential Equations
A differential equation involving partial derivatives of one or more dependent variables with respect to one or more independent variables is called a partial differential equation. Here, equations 3 and 4 are partial differential equations. In equation 3, the variables s and t are independent variables and v is the dependent variable. In equation 4 there are three independent variables, x, y, and z, and u is dependent.
Order
The order of the highest derivative involved in a differential equation is called the order of the differential equation. Equation 1 is of second order, since the highest derivative involved is a second derivative. Equation 2 is an ordinary differential equation of the fourth order. The partial differential equations 3 and 4 are of the first and second orders, respectively.
Linearity
A linear ordinary differential equation of order n, in the dependent variable y and the independent variable x, is an equation that is in, or can be expressed in, the form
\begin{equation} a_0(x)\frac{d^{n}y}{dx^{n}} + a_1(x)\frac{d^{n-1}y}{dx^{n-1}} + … + a_{n-1}(x)\frac{dy}{dx} + a_n(x)y = b(x) \end{equation}
where \(a_0\) is not identically zero.
The following ordinary differential equations are both linear.
\begin{equation} \frac{d^{2}y}{dx^2} + 5 \frac{dy}{dx} + 6y=0\end{equation} \begin{equation} \frac{d^{4}y}{dx^4} + x^2 \frac{d^{3}y}{dx^3} + x^3\frac{dy}{dx}=x e^{x} \end{equation}
In each case, y is the dependent variable. Observe that y and its various derivatives occur to the first degree only and that no products of y and/or any of its derivatives are present.
A nonlinear ordinary differential equation is an ordinary differential equation that is not linear. The following ordinary differential equations are all nonlinear:
\begin{equation}\begin{aligned}\frac{d^{2}y}{dx^2}+5\frac{dy}{dx}+6y^2 = 0 \\ \frac{d^{2}y}{dx^2}+5\left(\frac{dy}{dx}\right)^3+6y = 0 \\
\frac{d^{2}y}{dx^2}+5y\left(\frac{dy}{dx}\right)^3+6y = 0 \end{aligned}\end{equation}
Linear differential equations are further classified according to the nature of the coefficients of the dependent variables and their derivatives. For example, equation 7 is said to be linear with variable coefficients, while equation 6 is linear with constant coefficients.
Explicit vs Implicit
In mathematics, an explicit function is a function in which the dependent variable is given “explicitly” in terms of the independent variable; that is, the dependent variable is expressed directly in terms of the independent variables.
Let’s define an nth-order ordinary differential equation
\begin{equation} F\left[ x,y, \frac{dy}{dx},\dots,\frac{d^{n}y}{dx^n}\right] = 0,\end{equation}
where F is a real function of its (n+2) arguments \(x,y,\frac{dy}{dx},\dots,\frac{d^{n}y}{dx^n}.\)
Let \(f\) be a real function defined for all x in a real interval I and having an nth derivative for all \(x \in I\). The function \(f\) is called an explicit solution of the differential equation (9) on I if it fulfills the following:
$$F[x, f(x), f'(x),\dots,f^{(n)}(x)]$$ is defined for all \(x \in I,\) and
$$F[x, f(x), f'(x),\dots,f^{(n)}(x)] = 0$$ for all \(x \in I.\) For example:
The function \(f\) defined for all real x by $$ f(x) = 2\sin x + 3 \cos x$$ is an explicit solution of the differential equation $$\frac{d^{2}y}{dx^2} + y = 0$$ for all real x.
Again, a relation \(g(x,y) = 0\) is called an implicit solution of (9) if this relation defines at least one real function \(f\) of the variable x on an interval I such that this function is an explicit solution of (9) on this interval. For example:
The relation \begin{equation}x^2 + y^2 - 25 = 0\end{equation} is an implicit solution of the differential equation \begin{equation}x+y\frac{dy}{dx} = 0\end{equation} on the interval I defined by \(-5<x<5,\) for the relation (10) defines the two real functions \(f_1\) and \(f_2\) given by $$f_1(x) = \sqrt{25 - x^2}$$ and $$f_2(x) = -\sqrt{25 - x^2},$$
respectively, for all real \(x \in I,\) and both of these functions are explicit solutions of the differential equation (11) on \(I\). |
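As a quick numerical sanity check of the two examples above (an illustrative sketch, not part of the text), one can verify the explicit solution of \(y'' + y = 0\) and the implicit solution of \(x + y\,\frac{dy}{dx} = 0\) with finite differences:

```python
# Sketch: numerically verifying the example solutions with finite differences.
import math

def d1(f, x, h=1e-6):
    # central first difference
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    # central second difference
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# Explicit solution: f(x) = 2 sin x + 3 cos x should satisfy y'' + y = 0.
f = lambda x: 2 * math.sin(x) + 3 * math.cos(x)
residual = max(abs(d2(f, x) + f(x)) for x in (-2.0, -0.5, 0.0, 1.0, 3.0))

# Implicit solution: f1(x) = sqrt(25 - x^2) should satisfy x + y*y' = 0 on (-5, 5).
f1 = lambda x: math.sqrt(25 - x * x)
residual2 = max(abs(x + f1(x) * d1(f1, x)) for x in (-4.0, -1.0, 0.0, 2.5, 4.0))
# both residuals are tiny, limited only by the finite-difference error
```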
$$p_I|\Phi^+\rangle\langle\Phi^+| + p_x|\Psi^+\rangle\langle\Psi^+| + p_y|\Psi^-\rangle\langle\Psi^-| + p_z|\Phi^-\rangle\langle\Phi^-|.$$
In matrix form it looks like
$$\frac{1}{2} \begin{bmatrix} p_I + p_z & 0 & 0 & p_I - p_z \\ 0 & p_x + p_y & p_x - p_y & 0 \\ 0 & p_x - p_y & p_x + p_y & 0 \\ p_I - p_z & 0 & 0 & p_I + p_z \\ \end{bmatrix}$$ where the matrix is in the computational basis.
Because of the simple structure, many questions that are difficult to answer for general 2-qubit states simplify when they are restricted to Bell-diagonal states.
Properties
The weights $(p_1, p_2, p_3, p_4)$ can be permuted into any other order by local unitaries; unilateral $\pi$ rotations around the x-, y- and z-axes and bilateral $\pi/2$ rotations around the same axes are sufficient for this.
A Bell-diagonal state is separable if and only if all the probabilities are less than or equal to 1/2.
Many entanglement measures have simple formulas for entangled Bell-diagonal states.
Any 2-qubit state where both qubits are maximally mixed, $\rho_A = \rho_B = I/2$, is Bell-diagonal in some local basis, i.e. there exist local unitaries $U_1, U_2$ such that $U_1 \otimes U_2 \, \rho_{AB} \, U_1^\dagger \otimes U_2^\dagger$ is Bell-diagonal.
Visualization
The set of Bell-diagonal states can be visualized as a tetrahedron where the four Bell states are the corners. The following change of coordinate system makes the plotting of states easy:
$$\beta_0 = \frac{1}{2} ( p_I + p_x + p_y + p_z )$$
$$\beta_1 = \frac{1}{2} ( p_I - p_x - p_y + p_z )$$
$$\beta_2 = \frac{1}{\sqrt{2}} ( p_I - p_z )$$
$$\beta_3 = \frac{1}{\sqrt{2}} ( p_x - p_y )$$
The coordinate $\beta_0$ will always be equal to 1/2, and $\beta_1,\dots,\beta_3$ can be plotted in 3D. In these coordinates the Bell states are located at
$$|\Phi^+\rangle: \left(\tfrac{1}{2},\tfrac{1}{\sqrt{2}}, 0\right), \quad |\Psi^+\rangle: \left(-\tfrac{1}{2},0,\tfrac{1}{\sqrt{2}}\right), \quad |\Psi^-\rangle: \left(-\tfrac{1}{2},0,-\tfrac{1}{\sqrt{2}}\right), \quad |\Phi^-\rangle: \left(\tfrac{1}{2},-\tfrac{1}{\sqrt{2}},0\right)$$
Another useful coordinate system is the one where the corners of the tetrahedron lie in four of the corners of a cube, with the edges going along the diagonals of the cube's faces.
$$\gamma_0 = \frac{1}{2}(p_I + p_x + p_y + p_z)$$
$$\gamma_1 = \frac{1}{2}(p_I - p_x - p_y + p_z)$$
$$\gamma_2 = \frac{1}{2}(p_I - p_x + p_y - p_z)$$
$$\gamma_3 = \frac{1}{2}(p_I + p_x - p_y - p_z)$$
In these coordinates, the Bell states are situated at
$$|\Phi^+\rangle: \left(\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2}\right), \quad |\Psi^+\rangle: \left(-\tfrac{1}{2},-\tfrac{1}{2},\tfrac{1}{2}\right), \quad |\Psi^-\rangle: \left(-\tfrac{1}{2},\tfrac{1}{2},-\tfrac{1}{2}\right), \quad |\Phi^-\rangle: \left(\tfrac{1}{2},-\tfrac{1}{2},-\tfrac{1}{2}\right)$$
The β-coordinate system has the advantage that two of the edges are parallel to axes of the coordinate system. The γ-coordinate system, on the other hand, inherits more of the symmetry from the cube. Both coordinate transformations are orthogonal, and the transformation from $p_i$ to $\gamma_i$ is its own inverse.
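The matrix form and the γ-coordinates above can be checked with a short script (an illustrative sketch; the helper names are made up here):

```python
# Sketch: the Bell-diagonal density matrix in the computational basis, and the
# cube ("gamma") coordinates defined above (helper names are hypothetical).

def bell_diagonal(pI, px, py, pz):
    return [[(pI + pz) / 2, 0, 0, (pI - pz) / 2],
            [0, (px + py) / 2, (px - py) / 2, 0],
            [0, (px - py) / 2, (px + py) / 2, 0],
            [(pI - pz) / 2, 0, 0, (pI + pz) / 2]]

def gamma(pI, px, py, pz):
    return [(pI + px + py + pz) / 2,
            (pI - px - py + pz) / 2,
            (pI - px + py - pz) / 2,
            (pI + px - py - pz) / 2]

rho = bell_diagonal(1, 0, 0, 0)           # the pure state |Phi+><Phi+|
trace = sum(rho[i][i] for i in range(4))  # a density matrix has trace 1
g = gamma(1, 0, 0, 0)                     # |Phi+> sits at (1/2, 1/2, 1/2)
back = gamma(*g)                          # the gamma map is its own inverse
```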
Chemical Kinetics: Integrated Rate Equations and Half-life
Integrated rate law for a zero order reaction:
$$k = \frac{[A]_0 - [A]}{t}$$
where $[A]_0$ = initial concentration of the reactant and $[A]$ = concentration of the reactant at a particular time.
Half-life period: for a zero order reaction the half-life is directly proportional to the initial concentration and inversely proportional to the rate constant:
$$t_{1/2} \propto [A]_0 \;\Rightarrow\; t_{1/2} = \frac{[A]_0}{2k}$$
At $t_{100}$ (complete reaction) $[A] = 0$, so
$$t_{100} = \frac{[A]_0}{k}, \qquad t_{1/2} = \frac{t_{100}}{2} \;\Rightarrow\; t_{100} = 2t_{1/2}$$
Integrated rate law of a 1st order reaction:
$$kt = 2.303\,\log\frac{[A]_0}{[A]}$$
where $k$ = rate constant, $t$ = time, $[A]_0$ = initial concentration, and $[A]$ = concentration of the reactant at a particular time.
(Graphs for the 1st order reaction omitted.)
Half-life period of a 1st order reaction:
$$t_{1/2} = \frac{0.693}{k}$$
so $t_{1/2}$ is independent of the initial concentration.
1. For a reaction of nth order,
$$t_{1/2} \propto [A]_0^{1-n}$$
where $[A]_0$ is the initial concentration.
2. For a 1st order reaction:
(a) $t_{1/2} = \frac{1}{2}t_{75\%} = \frac{1}{3}t_{87.5\%} = \frac{1}{3.322}t_{90\%}$
(b) $t_{90\%} : t_{99\%} : t_{99.9\%} : t_{99.99\%} = 1 : 2 : 3 : 4$
(c) $t_{93.75\%} = 2t_{75\%} = 4t_{50\%}$
3. Amount of substance left after n half-lives in the case of a 1st order reaction: $\frac{[A]_0}{2^n}$
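The first-order relations in point 2 all follow from $t_x = \ln\!\big(1/(1-x)\big)/k$, the time to reach completion fraction $x$. A quick numeric check (illustrative, not from the source):

```python
# Sketch: checking the first-order half-life relations numerically, using an
# arbitrary rate constant k (the value 0.25 is made up for illustration).
import math

k = 0.25  # arbitrary first-order rate constant

def t_fraction(x, k):
    # Time for fraction x of a first-order reactant to be consumed.
    return math.log(1.0 / (1.0 - x)) / k

t50 = t_fraction(0.50, k)   # the half-life, equal to ln(2)/k ~ 0.693/k
t75 = t_fraction(0.75, k)   # should be exactly twice the half-life
t90 = t_fraction(0.90, k)
t99 = t_fraction(0.99, k)   # should be exactly twice t90, matching the 1:2 ratio
```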
1. Half-life method: this method is employed only when the rate law involves only one concentration term.
$$t_{1/2} \propto a^{1-n}; \quad t_{1/2} = k\,a^{1-n}; \quad \log t_{1/2} = \log k + (1-n)\log a$$
I will start from the very beginning. So, having the signal: $ x[n] = [0; \ 0.7071; \ 1; \ 0.7071; \ 0; \ -0.7071; \ -1; \ -0.7071] $ and the given exponent table: $ w[n] = [1; \ 0.7071 - 0.7071i; \ -i; \ -0.7071 - 0.7071i;\ -1;\ -0.7071 + 0.7071i; \ i; \ 0.7071 + 0.7071i]$.
First we need to bit-reverse the indices and rearrange our signal. For 8 samples we obtain the following change in indices: $ [0;\ 1;\ 2;\ 3;\ 4;\ 5;\ 6;\ 7] \Rightarrow [0;\ 4;\ 2;\ 6;\ 1;\ 5;\ 3;\ 7] $, thus our signal becomes: $ x[n] = [0; \ 0; \ 1; \ -1; \ 0.7071; \ -0.7071; \ 0.7071; \ -0.7071] $.
Now, according to the following schematic:
We calculate the "butterflies" in the following manner:
Where: $ W_{N}^{k} = e^{\dfrac{-2\pi i k}{N}} $ (a vector rotating on the complex plane with a step angle of $2 \pi/N $).
Starting from the first butterfly in the first stage ($x[0]$ and $x[4]$ as the inputs) we have the following:
$ A = 0, B = 0, W_8^0=w[0]=1$, giving output: $ [0; 0] $, next butterfly:
$ A = 1, B = -1, W_8^0=w[0]=1$, giving output: $ [0; 2] $, next butterfly:
$ A = 0.7071, B = -0.7071, W_8^0=w[0]=1$, giving output: $ [0; 1.4142] $, next butterfly is the same with output: $ [0; 1.4142] $.
So now, after first pass our vector is as follows (same to the one obtained by Peter Griffin):
$x'[n]=[0; \ 0; \ 0; \ 2; \ 0; \ 1.4142; \ 0; \ 1.4142] $
At the second stage, the first butterfly consists of samples $x'[0]$ and $x'[2]$, where $ W_8^0=1 $. We can therefore calculate the output as: $[0+1\cdot 0; \ 0-1\cdot 0] = [0; \ 0] $. These are outputs $x''[0]$ and $x''[2]$ of the second stage.
Next butterfly is given by inputs $x'[1]$ and $x'[3] $, where $ W_8^2=w[2]=-i $. We calculate output $x''[1]$, $x''[3]$ as: $[0-i\cdot 2; \ 0-(-i)\cdot 2] = [-2i; \ 2i] $
Two following butterflies are calculated in the same manner:
$[0+1\cdot 0; \ 0-1\cdot 0] = [0; \ 0] \Rightarrow x''[4]$, $x''[6]$
$[1.4142-i\cdot 1.4142; \ 1.4142-(-i)\cdot 1.4142] = [1.4142-1.4142i; \ 1.4142+1.4142 i] \Rightarrow x''[5], \ x''[7]$
So our vector after second stage is following:
$x''[n] = [0; \ -2i; \ 0; \ 2i; \ 0; \ 1.4142-1.4142i; \ 0; \ 1.4142+1.4142i] $ - you've made a mistake here.
Finally, we calculate the results for the third stage. Looking at the main schematic, the first butterfly consists of samples $x''[0], x''[4] = [0; 0]$. The $W_N^k$ coefficient is given by $W_8^0=w[0]=1$.
Calculating butterfly we obtain:
$[0+1\cdot 0; \ 0+(-1)\cdot 0] = [0; \ 0] $ - these are final FFT values $X[0], X[4]$.
Moving to next butterfly based on samples $x''[1], x''[5] = [-2i; 1.4142-1.4142i]$ with coefficient $W_8^1=w[1]=0.7071 - 0.7071i$. Result: $[-2i+(0.7071 - 0.7071i)\cdot (1.4142-1.4142i); \ -2i-(0.7071 - 0.7071i)\cdot (1.4142-1.4142i)] = [-4i; \ 0] $
And next butterfly based on samples $x''[2], x''[6] = [0; \ 0]$, with $W_8^2=w[2]=-i$ will give $[0+(-i)0; \ 0-(-i)0] = [0; \ 0] $
And the last butterfly, based on samples $x''[3], x''[7] = [2i; \ 1.4142+1.4142i]$, with $W_8^3=w[3]=-0.7071 - 0.7071i$, will result in: $[2i+(-0.7071 - 0.7071i)\cdot (1.4142+1.4142i); \ 2i-(-0.7071 - 0.7071i)\cdot (1.4142+1.4142i)] = [0; \ 4i] $
So our final vector is following:
$X[k] = [0; \ -4i; \ 0; \ 0; \ 0; \ 0; \ 0; \ 4i] $
Which means MATLAB is calculating FFT correctly ;)
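As an independent cross-check (a sketch, not part of the original answer), a direct DFT of the input samples reproduces the same vector:

```python
# Sketch: a direct (O(N^2)) DFT of the original samples, which should match
# the FFT result X[k] = [0, -4i, 0, 0, 0, 0, 0, 4i] derived above.
import cmath

x = [0, 0.7071, 1, 0.7071, 0, -0.7071, -1, -0.7071]
N = len(x)
X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]
# X[1] comes out close to -4i and X[7] close to +4i (up to the rounding in the
# 0.7071 sample values); all other bins are close to zero.
```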
One final remark: always take care with the $W_N^k$ factor; look at the schematic and it's pretty obvious.
Good luck! |
Suppose
$$ x(n_1,n_2) = \begin{cases} 1, & n_1=0,\ n_2=0 \\ 2, & n_1=1,\ n_2=0 \\ 3, & n_1=0,\ n_2=1 \\ 6, & n_1=1,\ n_2=1 \end{cases} $$
How do I prove that it is separable?
Nilesh Padhi, Welcome to the DSP Community.
The classic definition of separable means the data (2D) given by $ X \in \mathbb{R}^{m \times n} $ can be written as:
$$ X = \sigma u {v}^{T} $$
Where $ \sigma \in \mathbb{R} $, $ u \in \mathbb{R}^{m} $ and $ v \in \mathbb{R}^{n} $.
This is called Rank 1 Matrix.
How can you get those parameters and vectors given $ X $?
Well, the Singular Value Decomposition (SVD) is here to save the day.
The SVD of $ X $ is given by:
$$ X = U \Sigma {V}^{T} = \sum {\sigma}_{i} {u}_{i} {v}_{i}^{T} $$
You can see those match when $ {\sigma}_{j} = 0 $ for $ j \geq 2 $.
So what you should do is the following:
epsThr = 1e-7;
[mU, mD, mV] = svd(mX);
vD = diag(mD);
if(all(vD(2:end) < epsThr))
    vU = mU(:, 1);
    vV = mV(:, 1);
end
We check whether the singular values from the second onward are indeed small. If they are (you can decide what counts as small via epsThr), then the matrix is separable and the vectors are vU and vV.
In your case:
mX = [1, 3; 2, 6];
[mU, mD, mV] = svd(mX);
vD = diag(mD);
disp(vD);
The result is:
vD =

    7.0711
    0.0000
Since the vD values are zero apart from the first element (only a single non-vanishing singular value), it is separable.
Indeed you can see that:
mD(1) * mU(:, 1) * mV(:, 1).'

ans =

    1.0000    3.0000
    2.0000    6.0000
As expected.
This method is really useful in Image Processing: when we want to convolve with a 2D kernel and find that it is separable, we can apply the 2D convolution as two 1D convolutions (along columns / rows).
In that case we define $ \hat{u} = \sqrt{{\sigma}_{1}} u $ and $ \hat{v} = \sqrt{{\sigma}_{1}} v $ where $ {\sigma}_{1} $ is the Singular Value.
Then we convolve $ \hat{u} $ along columns and $ \hat{v}^{T} $ along rows. |
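For readers without MATLAB, here is a rough NumPy equivalent of the separability test above (variable names are mine, not from the original answer):

```python
import numpy as np

eps_thr = 1e-7
mX = np.array([[1.0, 3.0],
               [2.0, 6.0]])

# SVD: mX = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(mX)

# Separable iff every singular value after the first (numerically) vanishes
assert np.all(s[1:] < eps_thr)

# Rank-1 reconstruction recovers the original matrix
recon = s[0] * np.outer(U[:, 0], Vt[0, :])
assert np.allclose(recon, mX)
```

Note that NumPy returns the singular values as a vector `s` (sorted in decreasing order) and the right factor already transposed as `Vt`, so `mV(:, 1)` in MATLAB corresponds to `Vt[0, :]` here.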
I originally posted this on math.stackexchange, but it quickly got buried. I removed it not too long after, thinking of rewriting it for MO, but I didn’t have a chance to post it until now. Apologies if you were one of the lucky 24 people who already saw it there.
Background
The Cauchy-Binet formula states that
$$ \det(AB) = \sum_{S\in\tbinom{[n]}m} \det(A_{[m],S})\det(B_{S,[m]}), $$
where $A$ is an $m$ by $n$ matrix, $B$ is an $n$ by $m$ matrix, $[n]$ is notation for the set $\{1,\dots,n\}$ (similarly for $[m]$), $\binom{[n]}{m}$ is the set of all $m$-element subsets of $[n]$, and if $M$ is a $k$ by $l$ matrix and $J,K$ are subsets of $[k],[l]$, respectively, then $M_{J,K}$ denotes the submatrix of $M$ consisting of rows indexed by $J$ and columns indexed by $K$.
Question
Is there a version of the Cauchy-Binet formula to evaluate the following sum for general integers $0\leq j\leq m$?
$$ ? = \sum_{S\in\tbinom{[n-j]}{m-j}} \det(A_{[m],S\cup T})\det(B_{S\cup T,[m]})\quad\quad (*)$$
Here $T=\{n-j+1,\dots,n\}$ so that the columns labeled by $T$ are forced to appear in the submatrices of $A$ included in the sum, and similarly for rows $n-j+1,\dots,n$ in the submatrices of $B$.
When $A^T=B$ is the incidence matrix of a graph $G$, Kirchhoff's matrix-tree theorem and the effective resistance formula imply that I can compute this sum as the determinant of the Laplacian matrix of the graph $G'$, where $G'$ is $G$ with all the edges indexed by $n-j+1,\dots,n$ contracted. The motivation of this question is thus to try to understand contraction in a slightly more general setting.
This paper by Konstantopoulos gives a nice coordinate-free version of Cauchy-Binet, but I couldn't wrangle it into what I wanted.
It may also be possible that evaluating this is equivalent to evaluating the permanent or something so I shouldn’t expect anything nice after all. But I would be quite disappointed if that were so.
Other special cases
When $j=0$, $(*)$ reduces to ordinary Cauchy-Binet, and when $j=m$ it is of course just $\det(A_{[m],T}B_{T,[m]})$.
When $j=1$, $(*)$ is equal to $\det(AB)-\det(A_{[m],[n-1]}B_{[n-1],[m]})$, and by using inclusion-exclusion I can extend this to all the other $j$. However, for large $j$, there will be a mess of terms, and it will be a rather inefficient way of evaluating this sum. Is there a simpler formula just involving one determinant?
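The $j=1$ identity is easy to check numerically; here is a small NumPy/itertools sketch (the matrices and the 0-indexed conventions are my own):

```python
import itertools
import numpy as np
from numpy.linalg import det

rng = np.random.default_rng(1)
m, n, j = 2, 4, 1
A = rng.integers(-3, 4, size=(m, n)).astype(float)
B = rng.integers(-3, 4, size=(n, m)).astype(float)

# T = last j indices (0-indexed: {n-1} for j = 1)
T = list(range(n - j, n))

# Left-hand side of (*): sum over (m-j)-subsets S of the first n-j indices
lhs = sum(
    det(A[:, list(S) + T]) * det(B[list(S) + T, :])
    for S in itertools.combinations(range(n - j), m - j)
)

# Right-hand side for j = 1: det(AB) minus the term omitting the last index
rhs = det(A @ B) - det(A[:, : n - 1] @ B[: n - 1, :])
assert np.isclose(lhs, rhs)
```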
Update (13 Oct 2014)
While I quite like the answer of Igor Khavkine, I would still like to know if there are any other quick ways to evaluate this. Is there any faster way? If the polynomial trick is the best, then are there some efficient ways to extract the leading coefficient of a polynomial? Interpolation can get me all $j+1$ coefficients, and thus requires $j+1$ evaluations of the determinant. It seems to me that if I have a bound on the other coefficients, I could just evaluate the determinant once with a large value of $x$ then divide by $x^j$ and then round off everything else. |
Spectral analysis of the blazars Markarian 421 and Markarian 501 with the HAWC Gamma-Ray Observatory
Pre-published on: 2019 July 22
Published on: —
Abstract
The High Altitude Water Cherenkov (HAWC) Gamma-Ray Observatory surveys the very high energy sky in the $\sim$300 GeV to 100 TeV energy range and has detected two high-synchrotron-peaked BL Lacertae objects, Markarian 421 and Markarian 501, in a period of 837 days between June 2015 and December 2017. In this work we present a detailed time-averaged spectral analysis. Using an extragalactic background light model, we address the difference in the intrinsic spectral properties of the two blazars above 1 TeV. Preliminary results show that the intrinsic spectrum of Mrk 421 is better described by a power law with an exponential energy cut-off, with photon index $\alpha_{421}=2.22\pm0.10$ and energy cut-off $E_{c,421}=5.24\pm1.02$ TeV, while for Mrk 501 the intrinsic spectrum is well described by a power law with spectral index $\alpha_{501}=2.40\pm0.06$, without requiring an energy cut-off.
Let us first recall the definition of a group homomorphism. A group homomorphism from a group $G$ to a group $H$ is a map $f:G \to H$ such that we have\[f(gg')=f(g)f(g')\]for any elements $g, g'\in G$.
If the group operations for groups $G$ and $H$ are written additively, then a group homomorphism $f:G\to H$ is a map such that\[f(g+g')=f(g)+f(g')\]for any elements $g, g' \in G$.
Here is a hint for the problem. For any integer $n$, write it as\[n=1+1+\cdots+1\]and compute $f(n)$ using the property of a homomorphism.
Proof.
Let us put $a:=f(1)\in \Z$. Then for any integer $n$, writing\[n=1+1+\cdots+1,\]we have
\begin{align*}
f(n)&=f(1+1+\cdots+1)\\
&=f(1)+f(1)+\cdots+f(1) \quad \text{ since } f \text{ is a homomorphism}\\
&=a+a+\cdots+a\\
&=an.
\end{align*}
Thus we have $f(n)=an$ with $a=f(1)\in \Z$, as required.
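A quick numerical illustration of the result (this is just a sketch; the value $a=5$ is an arbitrary example of a possible $f(1)$):

```python
# Any additive homomorphism f on the integers is determined by a = f(1);
# here we pick a = 5 as an arbitrary example and check the facts used above.
a = 5

def f(n):
    return a * n  # the form f(n) = a*n derived in the proof

# f respects addition: f(m + n) = f(m) + f(n) for all integers m, n tested
for m in range(-10, 11):
    for n in range(-10, 11):
        assert f(m + n) == f(m) + f(n)

# and f(1) recovers a
assert f(1) == a
```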
The package ClustGeo is used here for clustering \(n=303\) French cities. The R dataset estuary is a list of three objects: a matrix dat with the description of the 303 cities on 4 socio-demographic variables, a matrix D.geo with the distances between the town halls of the 303 cities, and an object map of class “SpatialPolygonsDataFrame”.
library(ClustGeo)
data(estuary)
dat <- estuary$dat
head(dat)
##       employ.rate.city graduate.rate housing.appart agri.land
## 17015            28.08         17.68           5.15  90.04438
## 17021            30.42         13.13           4.93  58.51182
## 17030            25.42         16.28           0.00  95.18404
## 17034            35.08          9.06           0.00  91.01975
## 17050            28.23         17.13           2.51  61.71171
## 17052            22.02         12.66           3.22  61.90798
D.geo <- estuary$D.geo
map <- estuary$map
In this section, we show how to implement and interpret Ward hierarchical clustering when the dissimilarities are not necessarily Euclidean and the weights are not uniform.
We first apply the standard hclust function.
n <- nrow(dat)
D <- dist(dat)
Delta <- D^2/(2*n)
tree <- hclust(Delta, method="ward.D")
We can check that the sum of the heights in hclust’s dendrogram is equal to the total inertia of the dataset if the dissimilarities are Euclidean, and to the pseudo-inertia otherwise.
?inertdiss # pseudo inertia when dissimilarities are non Euclidean
?inert     # standard inertia otherwise
inertdiss(D) # pseudo inertia
## [1] 1232.769
inert(dat) # inertia
## [1] 1232.769
sum(tree$height)
## [1] 1232.769
The same result can be obtained with the function hclustgeo, which is a wrapper of hclust taking \(D\) as input instead of \(\Delta\).
tree <- hclustgeo(D)
sum(tree$height)
## [1] 1232.769
When the weights are not uniform, the calculation of the matrix \(\Delta\) takes a few lines of code, and the use of the function hclustgeo may be more convenient than hclust.
map <- estuary$map
wt <- map@data$POPULATION # non uniform weights

# with hclust
Delta <- D
for (i in 1:(n-1)) {
  for (j in (i+1):n) {
    Delta[n*(i-1) - i*(i-1)/2 + j-i] <-
      Delta[n*(i-1) - i*(i-1)/2 + j-i]^2 * wt[i]*wt[j] / (wt[i]+wt[j])
  }
}
tree <- hclust(Delta, method="ward.D", members=wt)
sum(tree$height)
## [1] 1907989
# with hclustgeo
tree <- hclustgeo(D, wt=wt)
sum(tree$height)
## [1] 1907989
Now we consider two dissimilarity matrices: \(D_0\), the matrix of socio-demographic dissimilarities, and \(D_1\), a matrix of geographical dissimilarities.
The function hclustgeo implements a Ward-like hierarchical clustering algorithm including soft contiguity constraints. This algorithm takes as input two dissimilarity matrices D0 and D1 and a mixing parameter \(\alpha\) between 0 and 1. The first matrix gives the dissimilarities in the “feature space” (here, socio-demographic variables). The second matrix gives the dissimilarities in the “constraint space” (here, the matrix of geographical distances or the matrix built from the contiguity matrix \(C\)). The mixing parameter \(\alpha\) sets the importance of the constraint in the clustering procedure. We present here a procedure to choose the mixing parameter \(\alpha\) with the function choicealpha.
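The core idea of the mixing can be sketched in Python with SciPy — this is a rough analogue on toy data (names and data are mine), not the ClustGeo implementation, which normalizes and combines the dissimilarities internally:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy stand-ins for the two dissimilarity sources:
rng = np.random.default_rng(0)
feat = rng.random((20, 2))   # "feature space" coordinates (socio-demographic analogue)
geo = rng.random((20, 2))    # "constraint space" coordinates (geographic analogue)

def norm_dist(x):
    """Pairwise Euclidean distances, scaled to [0, 1] so D0 and D1 are comparable."""
    d = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
    return d / d.max()

D0, D1 = norm_dist(feat), norm_dist(geo)

alpha = 0.2                                  # mixing parameter
D_alpha = (1 - alpha) * D0 + alpha * D1      # convex combination of the two dissimilarities

Z = linkage(squareform(D_alpha), method="ward")
labels = fcluster(Z, t=5, criterion="maxclust")   # partition into 5 clusters
```

Increasing `alpha` gives the constraint matrix more weight, which is exactly the trade-off that `choicealpha` helps quantify.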
First, we choose \(K=5\) clusters from the Ward dendrogram obtained with the socio-demographic variables (\(D_0\)) only.
library(ClustGeo)
data(estuary)
dat <- estuary$dat
D.geo <- estuary$D.geo
map <- estuary$map

D0 <- dist(dat) # the socio-demographic distances
tree <- hclustgeo(D0)
plot(tree, hang=-1, labels=FALSE, xlab="", sub="",
     main="Ward dendrogram with D0 only",
     cex.main=0.8, cex=0.8, cex.axis=0.8, cex.lab=0.8)
#plot(tree, hang=-1, xlab="", sub="", main="Ward dendrogram with D0 only",
#     cex.main=0.8, cex=0.8, labels=city_label, cex.axis=0.8, cex.lab=0.8)
rect.hclust(tree, k=5, border=c(4,5,3,2,1))
legend("topright", legend=paste("cluster",1:5), fill=1:5,
       cex=0.8, bty="n", border="white")
We can use the map given with the estuary data.
# the map of the cities is an object of class "SpatialPolygonsDataFrame"
class(map)
## [1] "SpatialPolygonsDataFrame"
## attr(,"package")
## [1] "sp"
# the object map contains several pieces of information
names(map)
##  [1] "ID_GEOFLA"  "CODE_COMM"  "INSEE_COM"  "NOM_COMM"   "X_CHF_LIEU"
##  [6] "Y_CHF_LIEU" "X_CENTROID" "Y_CENTROID" "Z_MOYEN"    "SUPERFICIE"
## [11] "POPULATION" "CODE_CANT"  "CODE_ARR"   "CODE_DEPT"  "NOM_DEPT"
## [16] "CODE_REG"   "NOM_REGION" "AREA_HA"
head(map@data[,4:8])
##        NOM_COMM X_CHF_LIEU Y_CHF_LIEU X_CENTROID Y_CENTROID
## 17015     ARCES       3989      65021       3979      65030
## 17021    ARVERT       3791      65240       3795      65247
## 17030  BALANZAC       4018      65230       4012      65237
## 17034    BARZAN       3991      64991       3982      64997
## 17050      BOIS       4189      64940       4176      64934
## 17052 BOISREDON       4227      64745       4224      64749
# we check that the cities in map are the same as those in dat
identical(as.vector(map$"INSEE_COM"), rownames(dat))
## [1] TRUE
# now we plot the cities on the map with the names of four cities
city_label <- as.vector(map$"NOM_COMM")
sp::plot(map, border="grey")
text(sp::coordinates(map)[c(54,99,117,116),],
     labels=city_label[c(54,99,117,116)], cex=0.8)
Let us plot these 5 clusters on the map.
# cut the dendrogram to get the partition in 5 clusters
P5 <- cutree(tree, 5)
names(P5) <- city_label
sp::plot(map, border="grey", col=P5,
         main="5 clusters partition obtained with D0 only", cex.main=0.8)
legend("topleft", legend=paste("cluster",1:5), fill=1:5,
       cex=0.8, bty="n", border="white")
We can notice that the cities in cluster 5 are geographically very homogeneous.
# list of the cities in cluster 5
city_label[which(P5==5)]
## [1] "ARCACHON" "BASSENS" "BEGLES" ## [4] "BORDEAUX" "LE BOUSCAT" "BRUGES" ## [7] "CARBON-BLANC" "CENON" "EYSINES" ## [10] "FLOIRAC" "GRADIGNAN" "LE HAILLAN" ## [13] "LORMONT" "MERIGNAC" "PESSAC" ## [16] "TALENCE" "VILLENAVE-D'ORNON"
#plot(map, border="grey")
#text(coordinates(map)[which(P5==5),], labels=city_label[which(P5==5)], cex=0.8)
On the contrary, the cities in cluster 3 are geographically very scattered.
In order to get more geographically compact clusters, we now introduce the matrix \(D_1\) of geographical distances into hclustgeo.
D1 <- as.dist(D.geo) # the geographic distances between the cities
For that purpose, we have to choose a mixing parameter \(\alpha\) to improve the geographical cohesion of the 5 clusters of the partition found previously without deteriorating the socio-demographic cohesion too much.
The mixing parameter \(\alpha \in [0,1]\) sets the importance of \(D_0\) and \(D_1\) in the clustering process. When \(\alpha=0\) the geographical dissimilarities are not taken into account and when \(\alpha=1\) it is the socio-demographic distances which are not taken into account and the clusters are obtained with the geographical distances only.
The idea is then to calculate separately the socio-demographic homogeneity and the geographic cohesion of the partitions obtained for a range of different values of \(\alpha\) and a given number of clusters \(K\).
The idea is to plot the quality criteria \(Q_0\) and \(Q_1\) of the partitions \(P_K^\alpha\) obtained with different values of \(\alpha \in [0,1]\) and to choose the value of \(\alpha\) which is a compromise between the loss of socio-demographic homogeneity and the gain of geographic cohesion. We use the function choicealpha for that purpose.
range.alpha <- seq(0, 1, 0.1)
K <- 5
cr <- choicealpha(D0, D1, range.alpha, K, graph=TRUE)
cr$Q # proportion of explained pseudo inertia
##                  Q0        Q1
## alpha=0   0.8134914 0.4033353
## alpha=0.1 0.8123718 0.3586957
## alpha=0.2 0.7558058 0.7206956
## alpha=0.3 0.7603870 0.6802037
## alpha=0.4 0.7062677 0.7860465
## alpha=0.5 0.6588582 0.8431391
## alpha=0.6 0.6726921 0.8377236
## alpha=0.7 0.6729165 0.8371600
## alpha=0.8 0.6100119 0.8514754
## alpha=0.9 0.5938617 0.8572188
## alpha=1   0.5016793 0.8726302
cr$Qnorm # normalized proportion of explained pseudo inertia
##              Q0norm    Q1norm
## alpha=0   1.0000000 0.4622065
## alpha=0.1 0.9986237 0.4110512
## alpha=0.2 0.9290889 0.8258889
## alpha=0.3 0.9347203 0.7794868
## alpha=0.4 0.8681932 0.9007785
## alpha=0.5 0.8099142 0.9662043
## alpha=0.6 0.8269197 0.9599984
## alpha=0.7 0.8271956 0.9593526
## alpha=0.8 0.7498689 0.9757574
## alpha=0.9 0.7300160 0.9823391
## alpha=1   0.6166990 1.0000000
?plot.choicealpha
#plot(cr, norm=TRUE)
We see on the plot that the proportion of explained pseudo-inertia calculated with \(D_0\) (the socio-demographic distances) is equal to 0.81 when \(\alpha=0\) and decreases when \(\alpha\) increases (black line). On the contrary, the proportion of explained pseudo-inertia calculated with \(D_1\) (the geographical distances) is equal to 0.87 when \(\alpha=1\) and decreases when \(\alpha\) decreases (red line).
Here the plot suggests choosing \(\alpha=0.2\), which corresponds to a loss of socio-demographic homogeneity of only 7% and a gain of geographic homogeneity of about 17%.
We perform hclustgeo with \(D_0\), \(D_1\) and \(\alpha=0.2\), and cut the tree to get the new partition in 5 clusters.
tree <- hclustgeo(D0, D1, alpha=0.2)
P5bis <- cutree(tree, 5)
The gain in geographic cohesion of this partition can also be visualized on the map.
tree <- hclustgeo(D0, D1, alpha=0.2)
P5bis <- cutree(tree, 5)
sp::plot(map, border="grey", col=P5bis,
         main="5 clusters partition obtained \n with alpha=0.2 and geographical distances",
         cex.main=0.8)
legend("topleft", legend=paste("cluster",1:5), fill=1:5, bty="n", border="white")
Let us construct a different type of matrix \(D_1\) to take the neighborhood between the regions into account for clustering the 303 cities.
Two regions with contiguous boundaries, that is, sharing one or more boundary points, are considered as neighbours. Let us first build the adjacency matrix \(A\).
#library(spdep)
list.nb <- spdep::poly2nb(map, row.names=rownames(dat)) # list of neighbours of each city
city_label[list.nb[[117]]] # list of the neighbours of BORDEAUX
##  [1] "BASSENS"     "BEGLES"      "BLANQUEFORT" "LE BOUSCAT"  "BRUGES"
##  [6] "CENON"       "EYSINES"     "FLOIRAC"     "LORMONT"     "MERIGNAC"
## [11] "PESSAC"      "TALENCE"
A <- spdep::nb2mat(list.nb, style="B")
diag(A) <- 1
colnames(A) <- rownames(A) <- city_label
A[1:5, 1:5]
##          ARCES ARVERT BALANZAC BARZAN BOIS
## ARCES        1      0        0      1    0
## ARVERT       0      1        0      0    0
## BALANZAC     0      0        1      0    0
## BARZAN       1      0        0      1    0
## BOIS         0      0        0      0    1
The dissimilarity matrix \(D_1\) is built from the adjacency matrix \(A\) as \(D_1=1-A\).
D1 <- as.dist(1-A)
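The same construction in NumPy terms, on a toy 3-node adjacency matrix (the matrix here is my own illustrative example, not the estuary data):

```python
import numpy as np

# Toy symmetric adjacency matrix with self-loops on the diagonal
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])

# Neighborhood dissimilarity: 0 between neighbours, 1 otherwise
D1 = 1 - A

assert D1[0, 1] == 0 and D1[0, 2] == 1          # neighbours vs non-neighbours
assert np.array_equal(D1, D1.T)                  # still symmetric
assert np.all(np.diag(D1) == 0)                  # zero self-dissimilarity
```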
The same procedure for the choice of \(\alpha\) is then used with this neighborhood dissimilarity matrix \(D_1\).
cr <- choicealpha(D0,D1,range.alpha,K,graph=TRUE)
cr$Q # proportion of explained pseudo inertia
##                  Q0         Q1
## alpha=0   0.8134914 0.04635748
## alpha=0.1 0.7509422 0.05756315
## alpha=0.2 0.7268960 0.06456769
## alpha=0.3 0.6926275 0.06710020
## alpha=0.4 0.6730000 0.06905647
## alpha=0.5 0.6484593 0.07190000
## alpha=0.6 0.6350298 0.07240719
## alpha=0.7 0.5902430 0.07526225
## alpha=0.8 0.5541168 0.07622149
## alpha=0.9 0.5228693 0.07728260
## alpha=1   0.2699756 0.07714010
cr$Qnorm # normalized proportion of explained pseudo inertia
##              Q0norm    Q1norm
## alpha=0   1.0000000 0.6009518
## alpha=0.1 0.9231102 0.7462157
## alpha=0.2 0.8935510 0.8370184
## alpha=0.3 0.8514257 0.8698485
## alpha=0.4 0.8272982 0.8952084
## alpha=0.5 0.7971311 0.9320703
## alpha=0.6 0.7806226 0.9386452
## alpha=0.7 0.7255676 0.9756566
## alpha=0.8 0.6811587 0.9880916
## alpha=0.9 0.6427471 1.0018473
## alpha=1   0.3318727 1.0000000
tree <- hclustgeo(D0, D1, alpha=0.2)
P5ter <- cutree(tree, 5)
sp::plot(map, border="grey", col=P5ter,
         main="5 clusters partition obtained with \n alpha=0.2 and neighbours dissimilarities",
         cex.main=0.8)
legend("topleft", legend=paste("cluster",1:5), fill=1:5,
       bty="n", border="white", cex=0.8)
With this kind of local dissimilarities in \(D_1\), the within-neighborhood cohesion is always very small. To overcome this problem, we can plot the normalized proportion of explained inertia (\(Qnorm\)) instead of the proportion of explained inertia (\(Q\)). The plot of \(Qnorm\) again suggests \(\alpha=0.2\).
tree <- hclustgeo(D0, D1, alpha=0.2)
P5bis <- cutree(tree, 5)
sp::plot(map, border="grey", col=P5bis,
         main="5 clusters partition obtained with \n alpha=0.2 and neighborhood dissimilarities",
         cex.main=0.8)
legend("topleft", legend=1:5, fill=1:5, col=P5, cex=0.8)
These plots can be obtained with the plot method plot.choicealpha.
range.alpha <- seq(0, 1, 0.1)
K <- 5
cr <- choicealpha(D0, D1, range.alpha, K, graph=FALSE)
?plot.choicealpha
plot(cr, cex=0.8, norm=FALSE, cex.lab=0.8, ylab="pev",
     col=3:4, legend=c("socio-demo","geo"), xlab="mixing parameter")
plot(cr, cex=0.8, norm=TRUE, cex.lab=0.8, ylab="pev",
     col=5:6, pch=5:6, legend=c("socio-demo","geo"), xlab="mixing parameter")
I was given this answer:
So I was told that
$$\tan(x) = 2$$ Then, they said from this statement they could know that: $$\cos(x) = \frac{1}{\sqrt{5}}$$ $$\sin(x) = \frac{2}{\sqrt{5}}$$
Now, I understand that if I take
$$\tan^{-1}(2) \approx 63.4^\circ$$
then I can get approximate values of cosine and sine from that angle. However, I don't know how they got the precise fractions. Does anyone know how?
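The exact fractions come from the Pythagorean identity: dividing $\sin^2 x + \cos^2 x = 1$ by $\cos^2 x$ gives $1 + \tan^2 x = 1/\cos^2 x$, so with $\tan x = 2$ we get $\cos^2 x = 1/(1+4) = 1/5$, hence $\cos x = 1/\sqrt{5}$ (taking $x$ in the first quadrant) and $\sin x = \tan x \cdot \cos x = 2/\sqrt{5}$. A quick numeric check:

```python
import math

x = math.atan(2)  # the first-quadrant angle with tan(x) = 2

# The exact values derived from 1 + tan^2(x) = 1/cos^2(x)
assert math.isclose(math.cos(x), 1 / math.sqrt(5))
assert math.isclose(math.sin(x), 2 / math.sqrt(5))
```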
In your second edit, you ask whether there exists an example of such a bundle over a lower-dimensional manifold.
Four-dimensional example
Let $M = (\mathbb{RP}^2\times\mathbb{RP}^2)\#(S^1\times S^3)$. Note that
$$H^1(M; \mathbb{Z}_2) \cong H^1(\mathbb{RP}^2\times\mathbb{RP}^2; \mathbb{Z}_2)\oplus H^1(S^1\times S^3;\mathbb{Z}_2).$$ Let $a$ and $b$ denote elements of $H^1(M; \mathbb{Z}_2)$ corresponding to generators of $H^1(\mathbb{RP}^2\times\mathbb{RP}^2; \mathbb{Z}_2)$, and let $c$ denote the element of $H^1(M; \mathbb{Z}_2)$ corresponding to the generator of $H^1(S^1\times S^3; \mathbb{Z}_2)$.
Consider the rank four vector bundle $E = L_a \oplus L_b \oplus L_c\oplus L_{a + b + c}$ where $L_x$ is the unique real line bundle over $M$ with $w_1(L_x) = x$; note that $L_{a+b+c} \cong L_a\otimes L_b\otimes L_c$. We have
\begin{align*}w_1(E) =&\ w_1(L_a) + w_1(L_b) + w_1(L_c) + w_1(L_{a + b + c})\\ =&\ a + b + c + (a + b + c) = 0\\&\\w_2(E) =&\ w_1(L_a)w_1(L_b) + w_1(L_a)w_1(L_c) + w_1(L_a)w_1(L_{a + b + c})\\ &+ w_1(L_b)w_1(L_c) + w_1(L_b)w_1(L_{a + b + c}) + w_1(L_c)w_1(L_{a + b + c})\\=&\ ab + ac + a(a + b + c) + bc + b(a + b + c) + c(a + b + c)\\=&\ ab + a^2 + b^2 \neq 0\\&\\w_3(E) =&\ w_1(L_a)w_1(L_b)w_1(L_c) + w_1(L_a)w_1(L_b)w_1(L_{a + b + c})\\ &+ w_1(L_a)w_1(L_c)w_1(L_{a + b + c}) + w_1(L_b)w_1(L_c)w_1(L_{a + b + c})\\=&\ abc + ab(a + b + c) + ac(a + b + c) + bc(a + b + c)\\=&\ a^2b + ab^2 \neq 0\\&\\w_4(E) =&\ w_1(L_a)w_1(L_b)w_1(L_c)w_1(L_{a + b + c})\\=&\ abc(a + b + c) = 0.\end{align*}
So $E$ is a rank four vector bundle over a four-manifold $M$ with $w(E) = 1 + w_2(E) + w_3(E)$.
In fact, we can do better. As $H^4(M; \mathbb{Z}) \cong \mathbb{Z}_2$, reduction mod $2$ defines an isomorphism $H^4(M; \mathbb{Z}) \to H^4(M; \mathbb{Z}_2)$. Under this isomorphism, $e(E)$ is mapped to $w_4(E) = 0$, so $e(E) = 0$ and hence $E \cong F\oplus\varepsilon^1$. Note that $F \to M$ is a rank three vector bundle with $w(F) = 1 + w_2(F) + w_3(F)$.
Three-dimensional characterisation
Let $X$ be a three-dimensional CW complex. Recall that there is a bijection between isomorphism classes of orientable rank three bundles on $X$ and homotopy classes of maps $X \to BSO(3)$. As $X$ is three-dimensional, we can instead map to $BSO(3)[3]$, the third stage of the Postnikov tower for $BSO(3)$. As $\pi_1(BSO(3)) = 0$, $\pi_2(BSO(3)) = \mathbb{Z}_2$, and $\pi_3(BSO(3)) = 0$, we see that $BSO(3)[3]$ is a $K(\mathbb{Z}_2, 2)$. Moreover, as the map $BSO(3) \to BSO(3)[3]$ induces an isomorphism on $\pi_1$ and $\pi_2$, the map $H^2(BSO(3)[3]; \mathbb{Z}_2) \to H^2(BSO(3); \mathbb{Z}_2)$ is also an isomorphism. It follows that there is a bijection between orientable rank three bundles on $X$ and $H^2(X; \mathbb{Z}_2)$ given by the second Stiefel-Whitney class of the bundle.
Now suppose that $X$ is a connected three-dimensional manifold. In order for $w_3(E) \in H^3(X; \mathbb{Z}_2)$ to be non-zero, we need $X$ to be closed. Furthermore, if $X$ is closed,
$$w_3(E) = \operatorname{Sq}^1(w_2(E)) = \nu_1(X)w_2(E) = w_1(X)w_2(E)$$
so $X$ must be non-orientable. By Poincaré duality, there is at least one $\alpha \in H^2(X; \mathbb{Z}_2)$ such that $w_1(X)\alpha \neq 0$. For each such $\alpha$, there is a unique $SO(3)$-bundle $E \to X$ with $w(E) = 1 + \alpha + w_1(X)\alpha$.
In conclusion, we have the following statement:
Let $X$ be a connected, closed three-manifold. There is a real rank three vector bundle $E \to X$ with $w(E) = 1 + w_2(E) + w_3(E)$ if and only if $X$ is non-orientable. Moreover, on any non-orientable $X$, for every choice of $\alpha \in H^2(X; \mathbb{Z}_2)$ satisfying $w_1(X)\alpha\neq 0$, there is a unique real rank three bundle $E$ with $w(E) = 1 + \alpha + w_1(X)\alpha$. |
Beautiful Formulas for pi=3.14…
The number $\pi$ is defined as the ratio of a circle’s circumference $C$ to its diameter $d$:
\[\pi=\frac{C}{d}.\]
$\pi$ in decimal starts with 3.14… and never ends.
I will show you several beautiful formulas for $\pi$.
Contents
Art Museum of formulas for $\pi$
Beautiful formula of $\pi$ (continued fraction)
Beautiful formula of $\pi$
Miscellaneous
Reference
$\pi$ is an irrational number. This means that $\pi$ cannot be written as a ratio of two integers: $\pi \neq \frac{n}{m}$ for any integers $n, m$.
However, $\pi$ can be written as an infinite nesting of fractions, known as a continued fraction. There are several known continued fractions that are equal to $\pi$.
\begin{align*}
\pi&= 3 + \cfrac{1}{7 + \cfrac{1}{15 + \cfrac{1}{1 + \cfrac{1}{292 + \cdots}}}} \\[20pt]
\pi&= \cfrac{4}{1 + \cfrac{1^2}{2 + \cfrac{3^2}{2 + \cfrac{5^2}{2 +\cfrac{7^2}{2 + \cdots}}}}} \\[20pt]
\frac{4}{\pi}&= 1+\cfrac{1}{3 + \cfrac{2^2}{5 + \cfrac{3^2}{7 + \cfrac{4^2}{9 +\cfrac{5^2}{11 + \cdots}}}}}
\end{align*}
It is mysterious that $\pi$ in decimal shows no pattern but the expressions of $\pi$ in continued fractions have simple patterns.
Beautiful formula of $\pi$.
\begin{align*}
\frac{\pi}{4}&=1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots= \sum_{n=0}^{\infty}\frac{(-1)^n}{2n+1} &&\text{Leibniz formula for $\pi$}\\[12pt]
\frac{2}{\pi}&=\frac{\sqrt{2}}{2}\cdot \frac{\sqrt{2+\sqrt{2}}}{2}\cdot \frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}\cdots &&\text{Franciscus Vieta}\\[12pt]
\frac{\pi}{2}&=\frac{2}{1}\cdot \frac{2}{3}\cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5}\cdot \frac{6}{7}\cdot \frac{8}{7} \cdot \frac{8}{9}\cdots &&\text{John Wallis}\\[12pt]
\frac{\pi^2}{6}&=\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\cdots && \substack{\text{Leonhard Euler}\\ \text{the value of the Riemann zeta function $\zeta(2)$}}\\[12pt]
\sqrt{\pi}&=\int_{-\infty}^{\infty}e^{-x^2}\, dx &&\text{Gaussian integral}\\[12pt]
\frac{\pi}{4}&=4\arctan{\frac{1}{5}}-\arctan{\frac{1}{239}} && \text{John Machin}\\[12pt]
e^{\pi i}&=-1 && \text{Euler’s formula}
\end{align*}
Miscellaneous
March 14th (3/14) is Pi Day. The mirror reflection of the English alphabet letters PIE looks like $314$. Also, the decimal expansion of $\pi$ contains 2017. If you want to know more fun facts about the number 2017, check out the post Mathematics about the number 2017.
Reference
Wikipedia article about $\pi$ contains more extensive facts about $\pi$.
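Several of the formulas above are easy to check numerically; for instance the Machin formula (exact to machine precision) and a partial sum of the slowly converging Leibniz series:

```python
import math

# Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
machin = 4 * (4 * math.atan(1 / 5) - math.atan(1 / 239))
assert math.isclose(machin, math.pi)

# Leibniz series converges slowly: 100,000 terms give only ~5 correct digits
leibniz = 4 * sum((-1) ** n / (2 * n + 1) for n in range(100_000))
assert abs(leibniz - math.pi) < 1e-4
```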
Networks & Heterogeneous Media
ISSN: 1556-1801 | eISSN: 1556-181X
September 2009, Volume 4, Issue 3
Abstract:
We first develop non-oscillatory central schemes for a traffic flow model with Arrhenius look-ahead dynamics, proposed in [ A. Sopasakis and M.A. Katsoulakis, SIAM J. Appl. Math., 66 (2006), pp. 921--944]. This model takes into account interactions of every vehicle with other vehicles ahead ("look-ahead'' rule) and can be written as a one-dimensional scalar conservation law with a global flux. The proposed schemes are extensions of the non-oscillatory central schemes, which belong to a class of Godunov-type projection-evolution methods. In this framework, a solution, computed at a certain time, is first approximated by a piecewise polynomial function, which is then evolved to the next time level according to the integral form of the conservation law. Most Godunov-type schemes are based on upwinding, which requires solving (generalized) Riemann problems. However, no (approximate) Riemann problem solver is available for conservation laws with global fluxes. Therefore, central schemes, which are Riemann-problem-solver-free, are especially attractive for the studied traffic flow model. Our numerical experiments demonstrate high resolution, stability, and robustness of the proposed methods, which are used to numerically investigate both dispersive and smoothing effects of the global flux.
We also modify the model by Sopasakis and Katsoulakis by introducing a more realistic, linear interaction potential that takes into account the fact that a car's speed is affected more by nearby vehicles than distant (but still visible) ones. The central schemes are extended to the modified model. Our numerical studies clearly suggest that in the case of a good visibility, the new model yields solutions that seem to better correspond to reality.
Abstract:
We consider the continuous Laplacian on infinite locally finite networks under natural transition conditions such as continuity at the ramification nodes and Kirchhoff flow conditions at all vertices. It is well known that one cannot reconstruct the shape of a finite network by means of the eigenvalues of the Laplacian on it. The same is shown to hold for infinite graphs in an $L^\infty$-setting. Moreover, the occurrence of eigenvalue multiplicities with eigenspaces containing subspaces isomorphic to $\ell^\infty(\mathbb{Z})$ is investigated, in particular in trees and periodic graphs.
Abstract:
The aim of this paper is to develop a model of the respiratory system. The real bronchial tree is embedded within the parenchyma, and ventilation is caused by negative pressures at the alveolar level. We aim to describe the series of pressures at alveolae in the form of a function, and to establish a sound mathematical framework for the instantaneous ventilation process. To that end, we treat the bronchial tree as an infinite resistive tree, we endow the space of pressures at bifurcating nodes with the natural energy norm (rate of dissipated energy), and we characterise the pressure field at its boundary (i.e. the set of simple paths to infinity). In a second step, we embed the infinite collection of leaves in a bounded domain $\Omega \subset \mathbb{R}^d$, and we establish some regularity properties for the corresponding pressure field. In particular, for the infinite counterpart of a regular, healthy lung, we show that the pressure field lies in a Sobolev space $H^{s}(\Omega)$, with $s \approx 0.45$. This allows us to propose a model for the ventilation process that takes the form of a boundary problem, where the role of the boundary is played by a full domain in the physical space, and the elliptic operator is defined over an infinite dyadic tree.
Abstract:
Boltzmann Maps are a class of discrete dynamical systems that may be used in the study of complex chemical reaction processes. In this paper they are generalized to open systems allowing the description of non-stoichiometrically balanced reactions with unequal reaction rates. We show that they can be widely used to describe the relevant dynamics, leading to interesting insights on the multi-stability problem in networks of chemical reactions. Necessary conditions for multistability are thus identified. Our findings indicate that the dynamics produced by laws like the mass action law, can hardly produce multistable phenomena. In particular, we prove that they cannot do it in a wide range of chemical reactions.
Abstract:
Critical threshold phenomena in a one dimensional quasi-linear hyperbolic model of blood flow with viscous damping are investigated. We prove global in time regularity and finite time singularity formation of solutions simultaneously by showing the critical threshold phenomena associated with the blood flow model. New results are obtained showing that the class of data that leads to global smooth solutions includes the data with negative initial Riemann invariant slopes and that the magnitude of the negative slope is not necessarily small, but it is determined by the magnitude of the viscous damping. For the data that leads to shock formation, we show that shock formation is delayed due to viscous damping.
Abstract:
The topic of security arises in many real-world situations. In this paper we focus on the security of networks that underpin the delivery of services and goods (e.g. water and electricity supply networks), the transfer of data (e.g. web and telecommunication networks), the movement of transport means (e.g. road networks), etc. We use a fluid dynamic framework: networks are described by nodes and lines, and our analysis starts from an equilibrium status in which the flows are constant in time and along the lines. When a failure occurs in the network, a shunt changes the topology of the network and the flows adapt to it, reaching a new equilibrium status. The question we consider is the following: is the new equilibrium satisfactory in terms of achieved quality standards? We essentially identify, for regular square networks, the nodes whose breakage compromises the quality of the flows. It turns out that networks which allow circular flows are the most robust with respect to damages.
Abstract:
Thin periodic structures depend on two interrelated small geometric parameters $\varepsilon$ and $h(\varepsilon)$, which control the thickness of the constituents and the cell of periodicity. We study homogenisation of elasticity theory problems on these structures by the method of asymptotic expansions. Particular attention is paid to the case of critical thickness, when $\lim_{\varepsilon\to 0} h(\varepsilon)\varepsilon^{-1}$ is a positive constant. Planar grids are taken as a model example.
Abstract:
Starting from the continuous congested traffic framework recently introduced in [8], we present a consistent numerical scheme to compute equilibrium metrics. We show that the equilibrium metric is the solution of a variational problem involving geodesic distances. Our discretization scheme is based on the Fast Marching Method. Convergence is proved via a $\Gamma$-convergence result, and numerical results are given.
Bharadwaj, BVS and Chandran, LS and Das, Anita (2008). Isoperimetric Problem and Meta-Fibonacci Sequences. In: 14th Annual International Conference on Computing and Combinatorics (COCOON 2008), Jun 27-29, 2008, Dalian.
Abstract
Let $G = (V,E)$ be a simple, finite, undirected graph. For $S \subseteq V$, let $\delta(S,G) = \{ (u,v) \in E : u \in S \mbox{ and } v \in V-S \}$ and $\phi(S,G) = \{ v \in V-S : \exists u \in S \mbox{ such that } (u,v) \in E\}$ be the edge and vertex boundary of $S$, respectively. Given an integer $i$, $1 \leq i \leq |V|$, the edge and vertex isoperimetric values at $i$ are defined as $b_e(i,G) = \min_{S \subseteq V,\, |S| = i} |\delta(S,G)|$ and $b_v(i,G) = \min_{S \subseteq V,\, |S| = i} |\phi(S,G)|$, respectively. The edge (vertex) isoperimetric problem is to determine the value of $b_e(i,G)$ ($b_v(i,G)$) for each $i$, $1 \leq i \leq |V|$. If we add the further restriction that the set $S$ should induce a connected subgraph of $G$, then the corresponding variation of the isoperimetric problem is known as the connected isoperimetric problem. The connected edge (vertex) isoperimetric values are defined in a corresponding way. It turns out that the connected edge isoperimetric and the connected vertex isoperimetric values are equal at each $i$, $1 \leq i \leq |V|$, if $G$ is a tree. Therefore we use the notation $b_c(i,T)$ to denote the connected edge (vertex) isoperimetric value of $T$ at $i$. Hofstadter introduced the interesting concept of meta-Fibonacci sequences in his famous book "Gödel, Escher, Bach: An Eternal Golden Braid". The sequence he introduced is known as the Hofstadter sequence, and most of the problems he raised regarding it are still open. Since then, mathematicians have studied many other closely related meta-Fibonacci sequences, such as the Tanny, Conway, and Conolly sequences. Let $T_2$ be an infinite complete binary tree. In this paper we relate the connected isoperimetric problem on $T_2$ to the Tanny sequence, which is defined by the recurrence relation $a(i) = a(i-1-a(i-1)) + a(i-2-a(i-2))$, $a(0) = a(1) = a(2) = 1$. In particular, we show that $b_c(i, T_2) = i + 2 - 2a(i)$ for each $i \geq 1$.
We also propose efficient polynomial time algorithms to find the vertex isoperimetric values at each $i$ for bounded pathwidth and bounded treewidth graphs.
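The Tanny recurrence is straightforward to evaluate with memoization; here is a minimal Python sketch (the identity $b_c(i, T_2) = i + 2 - 2a(i)$ is the paper's theorem and is not verified here):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tanny(i):
    """Tanny's meta-Fibonacci sequence: a(i) = a(i-1-a(i-1)) + a(i-2-a(i-2))."""
    if i <= 2:
        return 1
    return tanny(i - 1 - tanny(i - 1)) + tanny(i - 2 - tanny(i - 2))

print([tanny(i) for i in range(7)])  # → [1, 1, 1, 2, 2, 2, 3]
```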
So now that we know how to get the least common multiple (LCM) of two numbers, let’s apply this knowledge to add two fractions together that have different denominators.
So let’s look at:\[
\frac{3}{4}\hspace{0.33em}{+}\hspace{0.33em}\frac{3}{18}
\]
In my last post, we found the LCM of the two numbers 4 and 18 to be 36. Now that these numbers are in the denominator, we can also call 36 the least common denominator (LCD) – two different terms to describe the same thing depending on the country, school district, or era you find yourself in.
In Part 2 of the posts on Fractions, I covered the method to add two fractions together that have different denominators. The trick is to find a common denominator, but now we are wiser and can find a least common denominator. So we will use the LCM we found in my last post to solve the problem.
So I want to convert the two fractions into equivalent ones that have a denominator of 36. I do this by multiplying the top and bottom of each fraction by the number needed to make the denominator 36:\[
\begin{array}{c}
{\frac{3}{4}\hspace{0.33em}\times\hspace{0.33em}\frac{9}{9}\hspace{0.33em}{=}\hspace{0.33em}\frac{{3}\hspace{0.33em}\times\hspace{0.33em}{9}}{{4}\hspace{0.33em}\times\hspace{0.33em}{9}}\hspace{0.33em}{=}\hspace{0.33em}\frac{27}{36}}\\
{\frac{3}{18}\hspace{0.33em}\times\hspace{0.33em}\frac{2}{2}\hspace{0.33em}{=}\hspace{0.33em}\frac{{3}\hspace{0.33em}\times\hspace{0.33em}{2}}{{18}\hspace{0.33em}\times\hspace{0.33em}{2}}\hspace{0.33em}{=}\hspace{0.33em}\frac{6}{36}}
\end{array}
\]
Now that we have two equivalent fractions with the same denominator, the rest is easy as we just have to add the numerators together:\[
\frac{3}{4}\hspace{0.33em}{+}\hspace{0.33em}\frac{3}{18}\hspace{0.33em}{=}\hspace{0.33em}\frac{27}{36}\hspace{0.33em}{+}\hspace{0.33em}\frac{6}{36}\hspace{0.33em}{=}\hspace{0.33em}\frac{33}{36}
\]
So are we done? That is the answer but textbooks and teachers will always tell you to “simplify” your answer. That means get the numbers as small as possible. The method of getting equivalent fractions above can be done in reverse to get simpler (but equivalent) fractions. If you can identify factors common to the numerator and denominator, these can be cancelled. Notice in our answer that there is a common factor of 3. That is:\[
\frac{33}{36}\hspace{0.33em}{=}\hspace{0.33em}\frac{{11}\hspace{0.33em}\times\hspace{0.33em}\rlap{/}{3}}{{12}\hspace{0.33em}\times\hspace{0.33em}\rlap{/}{3}}\hspace{0.33em}{=}\hspace{0.33em}\frac{11}{12}
\]
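If you like checking this sort of thing with a computer, the whole procedure (find the LCD, convert, add, simplify) can be sketched in a few lines of Python (the function name is my own):

```python
from math import gcd

def add_fractions(n1, d1, n2, d2):
    """Add n1/d1 + n2/d2 using the least common denominator, then simplify."""
    lcd = d1 * d2 // gcd(d1, d2)           # the LCM of the denominators
    total = n1 * (lcd // d1) + n2 * (lcd // d2)
    g = gcd(total, lcd)                    # cancel the common factor
    return total // g, lcd // g

print(add_fractions(3, 4, 3, 18))  # → (11, 12)
```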
So now we are done. In my next post I will do more examples. |
Let $F:X \to X$ be a map on a complete metric space $(X,d)$, and suppose there is a constant $K<1$ such that:
$$ d(F(x), F(y)) \le K d(x,y), \quad \forall x,y \in X $$
then the contraction mapping theorem tells us that $F$ has a unique fixed point, and we can iteratively solve for this fixed point.
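For concreteness, here is a minimal sketch of that iteration in Python, using $F(x) = \cos x$, which is a contraction on $[0,1]$ with $K = \sin 1 < 1$:

```python
from math import cos

def fixed_point(F, x0, tol=1e-12, max_iter=10_000):
    """Banach iteration: apply F until successive iterates agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = F(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

x_star = fixed_point(cos, 0.5)
print(x_star)  # ≈ 0.739085, the unique solution of cos(x) = x
```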
My question is: if we take $K=1$, then $F$ is no longer a contraction — I've seen this called a 'non-strict' contraction. Are there any results about fixed points in this case? Do they exist but fail to be unique, or can they fail to exist at all?
Lars Olsen [1], [2] (2005, 2005) has proved some results about this notion. Let $E \subseteq {\mathbb R}^{n}$ and $x \in {\mathbb R}^{n},$ where $n$ is a fixed positive integer. Let $\dim_{H}(E,x)$ and $\dim_{P}(E,x)$ denote the local Hausdorff and local packing dimensions of $E$ at $x,$ defined as in your question.

Discussion of Results in Olsen [1]
In [1] Olsen proved that, given any continuous function $f:{\mathbb R}^n \rightarrow [0,n],$ there exists $E \subseteq {\mathbb R}^{n}$ such that for all $x \in {\mathbb R}^{n}$ we have $f(x) = \dim_{H}(E,x) = \dim_{P}(E,x).$
Olsen observed (p. 214) that it is easy to see that some local dimension functions can be discontinuous, giving the example $f:{\mathbb R} \rightarrow [0,1]$ defined by $f(x) = \frac{\ln 2}{\ln 3}$ for $x \in C$ and $f(x) = 0$ if $x \notin C,$ where $C$ is the Cantor middle thirds set. In fact, the characteristic function of a compact interval gives a simpler example, but I suppose Olsen gave the example he did because then the discontinuities occur at every point of the Cantor set.
Olsen also observed (p. 214) that it is easy to see that some very simple discontinuous functions $f:{\mathbb R} \rightarrow [0,1]$ cannot be a local dimension function, giving the example $f(x) = 0$ if $x \neq 0$ and $f(0) = 1.$
Olsen posed the problem (p. 214) of characterizing those functions $f:{\mathbb R}^n \rightarrow [0,n]$ that can be the local dimension function (Hausdorff and/or packing) of some subset of ${\mathbb R}^{n}.$
The result Olsen actually proved was a bit sharper than I stated above. Let $M(f,x,r)$ denote the supremum of $|f(x_{1}) - f(x_{2})|$ as $x_1$ and $x_2$ vary over the open ball $B(x,r)$ of radius $r$ centered at $x.$ Olsen's Theorem 1.1 in [1] states that, given any function $f:{\mathbb R}^n \rightarrow [0,n]$ (continuous or not), there exists $E \subseteq {\mathbb R}^{n}$ such that for all $x \in {\mathbb R}^{n}$ and for all $r > 0$ we have
$$| f(x) \; - \; \dim_{H}\left(E \cap B(x,r) \right)| \;\; \leq \;\; M(f,x,r)$$
$$| f(x) \; - \; \dim_{P}\left(E \cap B(x,r) \right)| \;\; \leq \;\; M(f,x,r)$$
Olsen observed that if $f$ is continuous, then we obtain the result I gave earlier.
Discussion of Results in Olsen [2]
In [2] Olsen provided an answer to the problem he posed in the earlier paper by characterizing those functions $f:{\mathbb R}^n \rightarrow [0,n]$ that can be the local dimension function (in both the Hausdorff and the packing sense) of some subset of ${\mathbb R}^{n}.$
For Hausdorff dimension, the characterization Olsen gave is that $f:{\mathbb R}^n \rightarrow [0,n]$ satisfies:
(1) For each $x \in {\mathbb R}^{n},$ we have $\limsup_{x' \rightarrow x} f(x') = f(x).$
(2) For each $y$ with $0 \leq y < \sup f$ and for each $x \in \{f > y\} \; = \; \{x' \in {\mathbb R}^{n}: \; f(x') > y \},$ we have $\dim_{H}\left( \{f > y\}, \;x\right) \; > \; y.$ Here, "$\sup f$" denotes the supremum of $f$ over ${\mathbb R}^{n}.$
The characterization for packing dimension is identical except that $\dim_{P}$ replaces $\dim_{H}$ in (2).
Before proving this result, Olsen gave an example (2nd example on p. 233) to show that there exist functions $f:{\mathbb R} \rightarrow [0,1]$ satisfying (1) but not satisfying (2). The example Olsen gave is the function $f$ equal to $s$ (a constant) at each point of the Cantor middle thirds set $C$ and equal to $0$ elsewhere, where $\frac{\ln 2}{\ln 3} < s \leq 1.$ To see that (2) does not hold, note that if $y$ is chosen so that $\frac{\ln 2}{\ln 3} \leq y < s,$ then $\{f > y\} = C.$ Incidentally, Olsen uses Hausdorff dimension throughout in this example, but since the packing and Hausdorff dimensions of the Cantor middle thirds set are both equal to $\frac{\ln 2}{\ln 3},$ the same example shows that (1) can hold and (2) can fail for packing dimension as well.
As was the case with Olsen's earlier paper, Olsen actually proved a bit more than I've stated thus far.
Rather than limiting himself to the Hausdorff or packing dimensions, he proved the characterization for the local dimension function of any "regular dimension index", which is an assignment $\dim$ of a non-negative real number to each subset of ${\mathbb R}^{n}$ such that $\dim$ is monotone with respect to set inclusion, $\dim$ is countably stable, $\dim$ assigns the value of $0$ to every finite subset, and $\dim$ is regular in the sense that given any Borel set $E$ and any real number $t$ with $0 \leq t < \dim E,$ then there exists a compact set $K$ such that $K \subseteq E$ and $\dim K = t.$
Also, Olsen proved that the set for which the function is to be a local dimension function can always be chosen to be an $F_{\sigma}$ set. In a Remark on p. 235 (which Olsen attributes to a referee -- see the Acknowledgements at the end of the paper), he shows that $F_{\sigma}$ cannot be strengthened to "closed".

Olsen's punctured upper limit and punctured upper semicontinuous notions
Olsen uses a non-deleted notion of $\limsup$ in which the value of the function is taken into account, and he uses the phrase punctured upper limit for the usual $\limsup$ notion in which the value of the function is not taken into account. For a simple example, if $f(x) = 0$ for $x \neq 0$ and $f(0) = 1,$ then the non-deleted $\limsup$ of $f$ at $x=0$ is $1$ and the deleted $\limsup$ of $f$ at $x=0$ is $0.$ For a less simple example, if $T$ is the Thomae function, then at each nonzero rational $x$ the non-deleted $\limsup$ of $T$ is equal to $T(x) \neq 0$ and the deleted $\limsup$ of $T$ is equal to $0.$
In the case of upper semicontinuity, Olsen uses the standard definition in which at each point the value of the function is greater than or equal to the deleted $\limsup$ at that point. However, for his characterization of the local dimension function, Olsen requires a refinement of this, which he calls punctured upper semicontinuous. This is the property in which at each point the value of the function is equal to the deleted $\limsup$ at that point. In my summary above I have avoided this terminology and stated (1) directly in terms of the $\limsup$ operation (the deleted version being understood, as that is the standard notion). Incidentally, one sometimes sees the phrase upper boundary function used for functions that satisfy (1).
Finally, if anyone is interested in continuity issues, I believe that any $F_{\sigma}$ first category (in the Baire sense) subset of ${\mathbb R}^{n}$ can be the discontinuity set for a dimension function, but I have not looked at this carefully. I do know that any such set can be the discontinuity set for a function satisfying (1) above. For some possible properties of sets that are $F_{\sigma}$ and first category, see my answer to How discontinuous can a derivative be?.
The less precise result that any such set can be the discontinuity set for an upper semicontinuous function is proved in the Proof Sketch at Oscillation of a Function, but I believe the proof there needs to be modified to actually show the result for (1). However, the result for (1) is a consequence of the more precise results proved by Zbigniew Grande in Quelques remarques sur la semi-continuité supérieure [Fundamenta Mathematicae 126 (1985), pp. 1-13] and by Tomasz Natkaniec in On semicontinuity points [Real Analysis Exchange 9 (1983-84), pp. 215-232]. The issue I have not investigated is whether any such set can be the discontinuity set for a function satisfying both (1) and (2) above.
[1] Lars Olsen, Applications of divergence points to local dimension functions of subsets of ${\mathbb R}^{d}$, Proceedings of the Edinburgh Mathematical Society (2) 48 #1 (February 2005), pp. 213-218. MR 2005m:28023; Zbl 1061.28004
[2] Lars Olsen, Characterization of local dimension functions of subsets of ${\mathbb R}^{d}$, Colloquium Mathematicum 103 #2 (2005), pp. 231-239. MR 2006j:28020; Zbl 1105.28007 |
I think I've figured this out. The point is that the rigorous meaning one can draw from the formal covariance of $J^\mu$ is that the momentum-space coefficient functions of $J^\mu$ (i.e. the functions in front of monomials of $a_p$ and $a^\dagger_p$) transform covariantly under the change of variable $p\to \Lambda p$. The covariance of the coefficient functions is unaffected by normal ordering, and is sufficient to give rise to the covariance of $:J^\mu:$. The rest of this answer is an elaboration of this first paragraph.
Let me first clarify the notation used and the meaning of the formal covariance of the ill-defined current $J^\mu$. I'm going to ignore the spin degrees of freedom in this discussion, but one should see that the generalization to include spin only involves a straightforward (if cumbersome) change of notation. I'm also ignoring the spacetime dependence; that is to say, I'm only considering the covariance of $J^\mu(0)$, and the generalization to $J^\mu(x)$ is straightforward and easy.
In the context of my question, $U(\Lambda)$ is defined as such that
$$U(\Lambda) a_{p} U^{-1}(\Lambda)=\sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}.$$
The covariance of $J^\mu$ must be understood in a very formal and specific sense, the sense in which the covariance is formally proved. For example, in the case of a fermionic bilinear:
$$U(\Lambda)J^{\mu}U(\Lambda)^{-1}=U\bar{\psi}\gamma^{\mu}\psi U^{-1}\\ =U\bar{\psi}_iU^{-1}(\gamma^{\mu})_{ij}U \psi_j U^{-1}=\bar{\psi}D(\Lambda)\gamma^{\mu}D(\Lambda)^{-1}\psi= \Lambda^{\mu}_{\ \ \nu}\bar{\psi}\gamma^{\nu}\psi, $$
where $D(\Lambda)$ is the spinor representation of Lorentz group, typically constructed via Clifford algebra. Note in this formal proof, what's important is that, under the change $a_{p}\to \sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}$ (ignoring spin indices of course) the elementary field transforms as $\psi \to D(\Lambda)\psi$. In the proof, no manipulation of operator ordering and commutation relations ever occurs: all we do is to do a change of integration variable, and let the algebraic properties of the coefficient functions take care of the rest. In fact, we'd better not mess with the operator ordering, as it can easily spoil the formal covariance (example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d}p E_{p}(a_p^\dagger a_p+\delta(0))$, see my longest comment under drake's answer).
To explain what's going on in more detail without getting tangled in notational nuisances, let me remind you again that I'll omit the spin degrees of freedom, but it should be transparent enough by the end of the argument that it's readily generalizable to the spinor case, since all that matters is that we know the coefficient functions (even with spin indices) transform covariantly. The mathematical gist is that, after multiplying the elementary fields and grouping c/a operators (during the grouping no operator ordering procedure should be performed at all, e.g. $a^\dagger(p_1)a(p_2)$ and $a(p_2)a^\dagger(p_1)$ should be treated as two independent terms), a typical monomial term in $J^\mu(0)$ has the form
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\mu(\{p_i\}),$$
where $M$ is a monomial of c/a operators not necessarily normally ordered, but has an ordering directly from the multiplication of elementary fields.
The formal covariance of $J^\mu$ means
$$\Lambda^\mu_{\ \ \nu}\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\nu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(\Lambda p_i), a(\Lambda p_i)\})f^\mu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}q_i\right)\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right) M(\{a^\dagger(q_i), a(q_i)\})f^\mu(\{\Lambda^{-1}q_i\}) ,$$
where $\prod\limits_{i=1}^n {E_{\Lambda^{-1} q_i}}/{E_{q_i}}$ comes from the transformation of measure and $\prod\limits_{i=1}^{m}\sqrt{{E_{q_i}}/{E_{\Lambda^{-1} q_i}}}$ from the transformation of c/a operators in $M$. This is equivalent to
$$f^\mu(\{\Lambda^{-1}q_i\})\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right)=\Lambda^\mu_{\ \ \nu}f^\nu(\{q_i\}).$$
The above equation makes completely rigorous sense since it's a statement about c-number functions. Obviously, this equation is sufficient to prove the covariance of the normal ordering
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right):M(\{a^\dagger(p_i), a(p_i)\}):f^\mu(\{p_i\}),$$
since on the operator part only a change of integration variable is needed for the proof.
So let's recapitulate the logic of this answer:
1. The current is only covariant when written in a certain way, but not in all ways. (recall the free scalar field Hamiltonian example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d} pE_{p}(a_p^\dagger a_p+\delta(0))$, which is formally covariant in the first form but not in the second form.)
2. In that certain way where the current is formally covariant, the formal covariance really means a genuine covariance of the coefficient functions.
3. The covariance of the coefficient functions is sufficient to establish the covariance of the normally ordered current. |
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... |
Ray Optics and Optical Instruments — Total Internal Reflection. The critical angle is the angle of incidence in the denser medium for which the angle of refraction in the rarer medium is 90°: $\mu = \frac{1}{\sin C}$, and $\sin C = \frac{\mu_{rarer}}{\mu_{denser}} = \frac{V_{denser}}{V_{rarer}} = \frac{\lambda_{denser}}{\lambda_{rarer}}$, with $C_{red} > C_{violet}$ ($C$ = critical angle). When the angle of incidence in the denser medium is greater than the critical angle, the light ray bounces back into the same medium; this is called total internal reflection. The field of vision of a fish under water at depth $h$ is a circle of radius $r = h \tan C = \frac{h}{\sqrt{\mu^{2} - 1}}$. An optical fibre works on the principle of total internal reflection; the launching angle for an optical fibre is given by $\sin i_{L} = \sqrt{\mu^{2}_{core} - \mu^{2}_{cladding}}$. Optical fibres are used in communication and in medical instruments such as the laparoscope and endoscope. They are flexible, lightweight, and non-corrosive.
Total internal reflection: $\mu = \frac{1}{\sin C} = \csc C$, where $\mu$ is the refractive index of the denser medium relative to the rarer one (${}_{rarer}\mu_{denser}$).
I want to extend the following to work on a solid angle:
Suppose we have a volume filled with small surfaces. If we cast a ray from a given point, the probability that the ray will not hit a surface (i.e. is visible to the sky) is given as
$P(\text{ray does not hit}) = e^{-\alpha d/\cos\theta}$
where $\alpha$ is some decay factor and $d/\cos\theta$ is the path length of the ray within the volume.
My question: How can we compute the expected visibility of a ray if we consider the directions $(\theta, \phi)$, where $0<\theta<\pi/2$ and $0<\phi<2\pi$? |
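One way to get the expected visibility, assuming the direction is drawn uniformly with respect to solid angle on the hemisphere (so the $\phi$ integral cancels by symmetry and the $\theta$ integrand picks up a $\sin\theta$ weight), is straightforward numerical quadrature; a Python sketch:

```python
from math import sin, cos, exp, pi

def expected_visibility(alpha, d, n=100_000):
    """Average of exp(-alpha*d/cos(theta)) over the upper hemisphere,
    uniform in solid angle: (1/2π) ∫∫ e^{-αd/cosθ} sinθ dθ dφ.
    The φ integral contributes 2π, and ∫₀^{π/2} sinθ dθ = 1."""
    dt = (pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * dt            # midpoint rule in theta
        total += exp(-alpha * d / cos(theta)) * sin(theta) * dt
    return total

print(expected_visibility(0.0, 1.0))      # → 1.0 (no attenuation)
print(expected_visibility(0.5, 1.0))      # strictly below e^{-0.5}
```

The grazing directions ($\theta$ near $\pi/2$) contribute almost nothing because the path length $d/\cos\theta$ blows up there.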
I'm trying to reconcile some conflicting results that I've found in publications that address the idea of the current in a vacuum diode in the case where the cathode has a non-zero potential, in other words, the electrons are emitted with a non-zero initial velocity.
The traditional Child-Langmuir Space Charge Law, also known as the 3/2 Law is as follows:
$$ J_{CL}= \frac{4\epsilon_{0}}{9}\sqrt{\frac{2e}{m_{e}}} \frac{V_{a}^{3/2}}{d^{2}} $$
This is pretty straightforward to me; however, I am interested in the case where the cathode also has a non-zero potential. According to the literature, there are at least two different results, and I have been unable to convert one to the other, even though they are supposed to describe the same situation.
In S.E. Sampayan, Nucl. Instr. and Meth. in Phys. Res. A (1994) A 340, pp. 90-95, Eqn. A-13 & A-14, the increased electron current is justified by the existence of a "virtual cathode". The derivation eventually leads to the result:
$$ J= J_{CL}\frac{[1+(1+ \Psi_{0} )^{3/4} ]^{2} }{ \Psi_{0} ^{3/2} } = \frac{4 \epsilon_{0} }{9} \sqrt{ \frac{2e}{m_{e} } } \frac{\phi_{0}^{3/2} }{d^{2} } \frac{[1+(1+ \Psi_{0} )^{3/4} ]^{2} }{ \Psi_{0} ^{3/2} } $$
where $\Psi_{0}= \frac{e \phi_{0}}{E_{0} }$, $E_{0}= \text{initial electron energy}$, and $\phi_{0}= \text{anode potential}$.
However, as the anode potential, $\phi_{0}$ tends to 0, $\Psi_{0}$ also tends to 0, and the current goes to infinity, when it should reduce to the original Child-Langmuir Law.
In another publication, G. Jaffe, Phys. Rev. (1944) Vol. 65, No. 3 & 4, pp. 91-98, Eqn. 28, the result is given as:
$$ J_{CL}= \frac{4\epsilon_{0}}{9}\sqrt{\frac{2e}{m_{e}}} \frac{( \sqrt{V_{c}}+\sqrt{V_{c}+V_{a}})^{3}} {d^{2}} $$
where $V_{c}= \text{cathode voltage}$, and here, it is clear that as the cathode potential tends to 0, the Child-Langmuir Law is recovered.
The same result can also be seen (with minor typographic errors in the equation) in H. Riege, Nucl. Instr. and Meth. in Phys. Res. A (2000) A 451, pp. 394-405, Eqn. 3.
I'm trying to figure out if either or both are correct, and in the latter case, how one converts from one to another. As far as I can tell, they both describe the same conditions, with a virtual cathode being the means of increased electron emission.
Any ideas? |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
EDIT: Let $S$ be a closed orientable 2-dimensional surface equipped with a metric with curvature $\geq \kappa$ in the sense of Alexandrov.

Question 1. Can one define a measure $K$ on $S$ (thought of as an analogue of the Gauss curvature) satisfying the following properties:
(a) if the metric on $S$ is smooth then $K$ is the usual Gauss curvature times the Lebesgue measure.
(b) If a sequence of orientable surfaces $S_i$ with such metrics (with the same lower bound $\kappa$ on the curvatures) converges to $S$ in the Gromov-Hausdorff sense then $K_i\to K$ weakly (what is weak convergence of measures on different spaces should be made more precise, but I guess it is well known to experts).
(c) Gauss-Bonnet formula: $\int_S K=2\pi \chi(S)$.
(It is very likely that (c) follows from (a)+(b).)
Question 2. If the answer to Question 1 is positive, it seems likely that if $S$ has non-negative curvature which on some open subset is $\geq \kappa>0$ (and $S$ is orientable), then $S$ is homeomorphic to the 2-sphere. Is this consequence known to be true in the context of Alexandrov spaces? (For smooth Riemannian metrics it is well known.)

UPDATE: The answer to both questions is YES, as follows from the answer below by Thomas Richard and the comment by Anton Petrunin.
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
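A numerical illustration of the claim itself (not of the attempted proof): if $a = p/q$ is rational then $n = q$ fixes every $z$, so the action is not free, while for an irrational $a$ no nonzero $n$ returns $z$ to itself:

```python
import cmath

def act(n, z, a):
    """The Z-action on the unit circle: (n, z) -> z * e^{2πi·a·n}."""
    return z * cmath.exp(2j * cmath.pi * a * n)

z = cmath.exp(0.7j)                    # an arbitrary point on S^1

# a = 1/3 rational: n = 3 acts as the identity, so the action is not free
print(abs(act(3, z, 1 / 3) - z))       # ≈ 0 (up to floating-point error)

# a = sqrt(2) irrational: every nonzero n moves z by a definite amount
a = 2 ** 0.5
print(min(abs(act(n, z, a) - z) for n in range(1, 200)))   # bounded away from 0
```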
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this I cant believe ive forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar
yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one
I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually
Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned
@TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks like a rather innocuous, albeit surprising, result in pure number theory
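For what it's worth, the statement is easy to check by brute force for small $n$, using the equivalent form $r_4(n) = 8\sum_{d \mid n,\ 4 \nmid d} d$ (Jacobi's four-square theorem), which specializes to the odd/even cases stated above:

```python
from itertools import product

def r4(n):
    # number of ordered integer quadruples (a, b, c, d) with a^2+b^2+c^2+d^2 = n
    m = int(n ** 0.5)
    return sum(1 for q in product(range(-m, m + 1), repeat=4)
               if q[0]**2 + q[1]**2 + q[2]**2 + q[3]**2 == n)

def jacobi_r4(n):
    # Jacobi: r4(n) = 8 * (sum of divisors of n not divisible by 4)
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

for n in range(1, 25):
    assert r4(n) == jacobi_r4(n)
```

For odd $n$ no divisor is divisible by $4$, giving $8\sigma(n)$; for even $n$ the admissible divisors are the odd ones and twice the odd ones, giving $24$ times the sum of odd divisors.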
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark The key thing, if I remember correctly, was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, where $2k$ is called the weight) such that the Fourier expansions of $f$ at infinity and at $-1$ have no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre
You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all
it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate |
The Rotation Matrix is an Orthogonal Transformation
Problem 684
Let $\mathbb{R}^2$ be the vector space of size-2 column vectors. This vector space has an inner product defined by $ \langle \mathbf{v} , \mathbf{w} \rangle = \mathbf{v}^\trans \mathbf{w}$. A linear transformation $T : \R^2 \rightarrow \R^2$ is called an orthogonal transformation if for all $\mathbf{v} , \mathbf{w} \in \R^2$,\[\langle T(\mathbf{v}) , T(\mathbf{w}) \rangle = \langle \mathbf{v} , \mathbf{w} \rangle.\]
For a fixed angle $\theta \in [0, 2 \pi )$ , define the matrix\[ [T] = \begin{bmatrix} \cos (\theta) & - \sin ( \theta ) \\ \sin ( \theta ) & \cos ( \theta ) \end{bmatrix} \]and the linear transformation $T : \R^2 \rightarrow \R^2$ by\[T( \mathbf{v} ) = [T] \mathbf{v}.\]
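A quick numerical sanity check that this $[T]$ preserves the inner product (the angle $0.7$ and the random test vectors are arbitrary choices for illustration):

```python
import numpy as np

theta = 0.7  # any fixed angle in [0, 2*pi)
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.standard_normal(2)
    w = rng.standard_normal(2)
    # <T v, T w> should equal <v, w> for an orthogonal transformation
    assert np.isclose((T @ v) @ (T @ w), v @ w)
```

Equivalently, preservation of the inner product amounts to $[T]^\top [T] = I$.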
Abbreviation: MonA

A monadic algebra is a structure $\mathbf{A}=\langle A, \vee, 0, \wedge, 1, \neg, f\rangle$ of type $\langle 2, 0, 2, 0, 1, 1\rangle$ such that

$\langle A, \vee, 0, \wedge, 1, \neg\rangle$ is a Boolean algebra

$f$ is a unary closure operator: $f(x\vee y)=f(x)\vee f(y)$, $f(0)=0$, $x\le f(x)=f(f(x))$

$f$ is self-conjugated: $f(x)\wedge y=0\iff x\wedge f(y)=0$
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be monadic algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \vee y)=h(x) \vee h(y)$, $h(\neg x)=\neg h(x)$, $h(f(x))=f(h(x))$.
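As a sanity check, the axioms can be verified exhaustively for one standard small example (my choice for illustration): the powerset Boolean algebra of a three-element set, with $f$ sending every nonzero element to $1$:

```python
from itertools import combinations

U = frozenset({0, 1, 2})
A = [frozenset(c) for r in range(4) for c in combinations(sorted(U), r)]  # P(U)

def f(x):
    # simple closure operator: 0 -> 0, every nonzero element -> 1 (i.e. U)
    return U if x else frozenset()

zero = frozenset()
for x in A:
    assert f(zero) == zero
    assert x <= f(x) == f(f(x))              # x <= f(x) = f(f(x))
    for y in A:
        assert f(x | y) == f(x) | f(y)       # f preserves joins
        # self-conjugacy: f(x) /\ y = 0  iff  x /\ f(y) = 0
        assert ((f(x) & y) == zero) == ((x & f(y)) == zero)
```

Here frozensets play the role of Boolean algebra elements, with union as join, intersection as meet, and subset as the order.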
An
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &1\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[...]] subvariety [[...]] expansion [[Boolean algebras]] reduced type [[Closure algebras]] |
I don't actually have an answer, just ideas, but since this is a riddle, I might as well contribute these (hidden as spoiler text) as hints for others, who might have ideas that I miss:
So, MSK is often introduced as represented by a half-symbol time-offset BPSK on the I and Q components. That comes in very handy here (avoiding multipliers) – BPSK can be implemented as $\pm 1$ on the respective axis. The idea is that at the boundary between symbol periods, the phase is continuous, which w.l.o.g. cannot be realized in any other way than by having zeros exactly at the point where the other component carries the BPSK constellation point. Thus, the minimal viable MSK modulation has
$$\begin{align} I&= [- {s_1} && 0 && -{s_2} && 0 && -{s_3}&\dots ]\\Q&= [ 0 && s_1 && 0 && s_2 && 0 & \dots ] \end{align}$$
as samples, with $s_n$ being the bits to be transmitted represented as $\pm 1$ differentially. That doesn't look very much like CPFSK at all, but one has to realize that the advance between two consecutive samples takes two different values:
For samples that have an even index $n$, the following sample always is -90° further, no matter which value $s_{n/2}$ has
For samples that have an odd index $n$, the phase advance is -90° if $s_{\lfloor n/2\rfloor}=s_{(n+1)/2}$, and +90° in case the bits differ.
Considering the phase difference of consecutive samples as the instantaneous frequency, we thus see an alternation between a constant $-\frac \pi2$ and either $-\frac \pi2$ or $+\frac \pi2$, carrying the information. I'm not even convinced any more this fulfills the criteria for being MSK.
Since we'd probably want spectral symmetry, we'd employ the usual tricks to shift the signal in frequency by half the sampling rate (i.e. "multiplication" with $[-1\,+1\,-1\,+1\,\dots]$).
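As a sanity check of the sample structure above, one can generate the I/Q sequence and inspect the per-sample phase advances (a sketch under my sign conventions, which may differ from other MSK definitions):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=8)        # differential bits s_n as +/-1

iq = np.zeros(2 * len(s), dtype=complex)
iq[0::2] = -s                          # I = [-s1, 0, -s2, 0, ...]
iq[1::2] = 1j * s                      # Q = [ 0, s1,  0, s2, ...]

# phase advance between consecutive samples, in degrees
adv = np.angle(iq[1:] / iq[:-1], deg=True)

# every advance is +/-90 degrees; the even-to-odd transitions are constant
assert np.allclose(np.abs(adv), 90)
assert np.allclose(adv[0::2], -90)
```

The constant-envelope property is visible directly: every sample has magnitude 1, and only the 90-degree phase steps carry information.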
Oooh and I'd have an approach for the Gaussian pulse shaper, but that simply applies the fact that repeated convolution of something with itself converges to a Gaussian – same reason the CLT works.
That means I can just cascade a set of moving average filters – all of which work without multipliers to get an approximated Gauss filter.
Sadly, that doesn't inherently solve the issue
that my MSK approach above basically only allows 1 sample per symbol – but we might at least recreate the inter-symbol dependence characteristic of GMSK trellises. |
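The boxcar-cascade idea can be prototyped quickly: repeated convolution of a moving-average kernel with itself already looks quite Gaussian after a few stages (the tap count of 8 and the four stages below are arbitrary choices):

```python
import numpy as np

box = np.ones(8) / 8.0        # one moving-average stage (dividing by 8 is a shift)
h = box.copy()
for _ in range(3):            # cascade of 4 boxcar stages in total
    h = np.convolve(h, box)

# compare against a Gaussian with the impulse response's own mean and variance
n = np.arange(len(h))
mu = (h * n).sum() / h.sum()
var = (h * (n - mu) ** 2).sum() / h.sum()
g = np.exp(-((n - mu) ** 2) / (2 * var))
g *= h.max() / g.max()        # match peaks for comparison

err = np.max(np.abs(h - g)) / h.max()   # peak-normalized max deviation
assert err < 0.1              # already Gaussian-like after only 4 stages
```

Each stage is just an adder chain (plus a bit shift if the taps are a power of two), so the whole approximate Gauss filter stays multiplier-free.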
This article is about $ 2^n-n $ numbers, that is, numbers produced by replacing '$n$' in $ 2^n-n $ with a positive integer $ (1,2,3,\ldots) $. I came across these numbers while studying Mersenne numbers $ (2^n-1) $. It got me thinking about $ 2^n-n $ numbers, whether there are any interesting properties to them, and what the properties of their primes are. In the rest of the article $A_n$ will mean $ 2^n-n $. The first few numbers are:
n    | 1 | 2 | 3 | 4  | 5  | 6  | 7   | 8   | 9   | 10
A_n  | 1 | 2 | 5 | 12 | 27 | 58 | 121 | 248 | 503 | 1014
The first question one would ask is how to advance from $ A_n $ to $ A_{n+1} $, that is, given $ A_n $, how to find $ A_{n+1} $ without needing to calculate $ 2^{n+1}-(n+1) $ directly. One might notice the following:
$ A_1=1 $
$ A_2=2\times 1+0=2 $
$ A_3=2\times 2+1=5 $
$ A_4=2\times 5+2=12 $
That would lead us to the following theorem:
Theorem 1
The rule of advancement is:
$ A_{n+1}=2A_n+n-1 $
Proof
To prove this, we have to show that when we take $ A_n $, multiply it by $2$, add $n$, and subtract $1$, we get $ A_{n+1}$:
$ 2A_n+n-1=2(2^n-n)+n-1=2^{n+1}-n-1=2^{n+1}-(n+1)=A_{n+1} $
Q.E.D.
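The recurrence is also easy to verify numerically, e.g.:

```python
def A(n):
    # the n-th number of the series, A_n = 2^n - n
    return 2 ** n - n

# check the rule of advancement A_{n+1} = 2*A_n + n - 1
for n in range(1, 50):
    assert A(n + 1) == 2 * A(n) + n - 1
```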
The next thing to do is to find the sum of the series, that is, the sum of all the members of the series up to a certain '$n$'.
Theorem 2
The sum of the series $ A_n $ will be defined as $ S_n $.
$$ S_n = 1+2+5+12+ \cdots +(2^n - n)= \frac{2^{n+2} - n^2 - n - 4}{2} $$
I will prove that this formula is correct; however, the manner in which I arrived at it I will leave to the reader to discover (it is not difficult).
Proof
I will prove by induction on '$n$'.
First let's check the case in which $n=1$: $$ S_1 = \frac{2^3 - 1^2 -1 -4}{2} = \frac{8-1-1-4}{2}= \frac{2}{2} = 1 $$
Which is obviously correct.
Assumption: $ n=k $, a certain integer.
$$ 1+2+5+12+ \cdots +(2^k - k) = \frac{2^{k+2} - k^2 - k - 4}{2} $$
Next we have to prove that it is also true for $ n=k+1$:
$$ 1+2+5+12+ \cdots +(2^k - k) + (2^{k+1} - (k+1))= \frac{2^{(k+1)+2} - (k+1)^2 - (k+1) - 4}{2} $$
By our assumption we know that:
$$ 1+2+5+12+ \cdots +(2^k - k) = \frac{2^{k+2} - k^2 - k - 4}{2} $$
Substituting it and opening it up, we get:
$$\begin{align*} \frac{2^{k+2} - k^2 - k - 4}{2} + 2^{k+1}- (k+1) &= \frac{2^{k+2} - k^2 - k - 4 + 2^{k+2} - 2k -2}{2} \\ &= \frac{2^{k+3} - k^2 - 3k - 6}{2} \\ &= \frac{2^{k+3} - (k+1)^2 - (k+1) - 4}{2} \end{align*}$$
Q.E.D.
Thus, it is proven for all positive integers.
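Likewise, the closed form for $S_n$ can be checked against direct summation:

```python
def A(n):
    return 2 ** n - n

def S_closed(n):
    # S_n = (2^(n+2) - n^2 - n - 4) / 2; the numerator is always even
    return (2 ** (n + 2) - n * n - n - 4) // 2

for n in range(1, 40):
    assert sum(A(k) for k in range(1, n + 1)) == S_closed(n)
```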
After finding these basic and essential formulas that one must find for any series of numbers, we may go on to the interesting part.
First, notice that the difference between two consecutive members of the series is a Mersenne number:
$ A_{n+1} - A_n =M_n $
Where $ M_n $ signifies the nth Mersenne number.
I will leave this easy proof to the reader.
The major research on series of numbers like the Fermat numbers $ (2^{2^n} + 1) $ or the Mersenne numbers $ (2^n-1) $ is devoted to finding prime numbers (numbers whose only divisors are $1$ and the number itself; $1$ is not a prime number by definition) among their members and to primality testing.
The prime numbers in the form of $ 2^n-n $ that I found using my computer are (some are not definite but probable primes, because my computer wasn't strong enough to reach a concrete conclusion):
n     | 2^n - n           | Definite/Probable prime
2     | 2                 | Definite
3     | 5                 | Definite
9     | 503               | Definite
13    | 8179              | Definite
19    | 524269            | Definite
21    | 2097131           | Definite
55    | 36028797018963913 | Definite
261   | 2^261 - 261       | Probable
3415  | 2^3415 - 3415     | Probable
4185  | 2^4185 - 4185     | Probable
7353  | 2^7353 - 7353     | Probable
12213 | 2^12213 - 12213   | Probable
60975 | 2^60975 - 60975   | Probable
61011 | 2^61011 - 61011   | Probable
Here are a few interesting theorems regarding the prime numbers of this form:
Theorem 3
For $ 2^n-n $ primes that are greater than $ 2 $, $ n $ must be odd.
Proof
$ 2^n $ is always even, so we must only consider $ n$.
If $ n $ is even: even minus even is even.
If $ n $ is odd: even minus odd is odd.
$ 2 $ is the only even prime number (if there were another one, it would be divisible by $ 2 $ and therefore not prime), so for every prime of this form greater than $2$, $ A_n $ must be odd, and hence $ n $ must be odd.
Q.E.D.
Theorem 4
If $ p \mid n $ ($ p $ divides $ n $ ), where $ p $ is an odd prime number $ (p\neq2) $ , then $ p $ doesn't divide $ 2^n - n $.
Proof
Let's consider the opposite and then try to prove by reaching a contradiction.
Suppose $ p \mid 2^n -n $. As $ p \mid n $ we see that $ p \mid(2^n -n) + n = 2^n $ and hence $ p=2 $. But $p\neq2$ by assumption, a contradiction; hence $ p $ does not divide $ 2^n- n$. Q.E.D.
Theorem 4 is a very important theorem, since it gives us a useful tool for ruling out primes that do not divide $ 2^n-n $. The basic way of checking whether a certain number '$m$' is prime is to verify that no number from $ 2 $ to the square root of '$m$' divides '$m$'. Theorem 4 allows us to skip those primes that divide '$n$', because we know for sure that they won't divide $ 2^n-n $.
There is an easy algorithm that can be used when we want to check whether a certain prime divides $ A_n $:
Given $ 2^n-n $ and a prime, $ p> 2. $
Change $ 2^n $ to $ 2^r $ , where $ r $ is: $ n \bmod (p-1) $ (the remainder that we get when dividing $ n $ by $ (p-1) $ ).
If $ 2^r \equiv n \pmod{p}$ , then $ p \mid ( 2^n-n). $
Using Fermat's little theorem one can easily prove this algorithm. The readers who are familiar with this theorem may go ahead and prove it.
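The algorithm translates directly into code; the key fact from Fermat's little theorem is that $2^{p-1} \equiv 1 \pmod p$ for an odd prime $p$, so $2^n \equiv 2^{n \bmod (p-1)} \pmod p$:

```python
def divides_A(p, n):
    # odd prime p divides 2^n - n  iff  2^(n mod (p-1)) ≡ n (mod p)
    r = n % (p - 1)
    return pow(2, r, p) == n % p   # three-argument pow is modular exponentiation

# cross-check against direct computation for small cases
for p in (3, 5, 7, 11, 13):
    for n in range(1, 200):
        assert divides_A(p, n) == ((2 ** n - n) % p == 0)
```

The point of the reduction is that $r < p-1$, so the test stays cheap even when $n$ is in the tens of thousands.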
Are there infinitely many prime numbers of the form $ 2^n-n$?
Probably yes.
The argument for this is heuristic, so I will only sketch it.
By the prime number theorem, the probability that a number of size $2^n-n$ is prime is approximately $$ \frac{1}{\log(2^n-n)} \approx \frac{1}{n\log 2} $$ for large $ n $. Summing these probabilities, $$ \sum_{n=1}^{\infty} \frac{1}{\log(2^n-n)}, $$ one sees that the series diverges like the harmonic series, and therefore there are, probably, infinitely many primes of this form.
There is certainly much more that can be found about these numbers, and especially about their primes, but unfortunately, like all good things, this article must come to an end. I urge the reader to try to investigate these numbers. If you find anything interesting (not-interesting is also good) about these numbers don't hesitate to send me an email: yatir@bigfoot.com
[Yatir Halevi is 17 years old, he lives in Maccabim, Israel and goes to the Maccabim-Reut High-School]. |
Skills to Develop
To learn what the coefficient of determination is, how to compute it, and what it tells us about the relationship between two variables \(x\) and \(y\).
If the scatter diagram of a set of \((x,y)\) pairs shows neither an upward nor a downward trend, then the horizontal line \(\hat{y} =\overline{y}\) fits it well, as illustrated in Figure \(\PageIndex{1}\). The lack of any upward or downward trend means that when an element of the population is selected at random, knowing the value of the measurement \(x\) for that element is not helpful in predicting the value of the measurement \(y\).
Figure \(\PageIndex{1}\):
If the scatter diagram shows a linear trend upward or downward then it is useful to compute the least squares regression line
\[\hat{y} =\hat{β}_1x+\hat{β}_0\]
and use it in predicting \(y\). Figure \(\PageIndex{2}\) illustrates this. In each panel we have plotted the height and weight data of Section 10.1. This is the same scatter plot as Figure \(\PageIndex{2}\), with the average value line \(\hat{y} =\overline{y}\) superimposed on it in the left panel and the least squares regression line imposed on it in the right panel. The errors are indicated graphically by the vertical line segments.
Figure \(\PageIndex{2}\): Same Scatter Diagram with Two Approximating Lines
The sum of the squared errors computed for the regression line, \(SSE\), is smaller than the sum of the squared errors computed for any other line. In particular it is less than the sum of the squared errors computed using the line \(\hat{y}=\overline{y}\), which sum is actually the number \(SS_{yy}\) that we have seen several times already. A measure of how useful it is to use the regression equation for prediction of \(y\) is how much smaller \(SSE\) is than \(SS_{yy}\). In particular, the proportion of the sum of the squared errors for the line \(\hat{y} =\overline{y}\) that is eliminated by going over to the least squares regression line is

\[\dfrac{SS_{yy}-SSE}{SS_{yy}}\]
We can think of \(SSE/SS_{yy}\) as the proportion of the variability in \(y\) that cannot be accounted for by the linear relationship between \(x\) and \(y\), since it is still there even when \(x\) is taken into account in the best way possible (using the least squares regression line; remember that \(SSE\) is the smallest the sum of the squared errors can be for any line). Seen in this light, the coefficient of determination, the complementary proportion of the variability in \(y\), is the proportion of the variability in all the \(y\) measurements that is accounted for by the linear relationship between \(x\) and \(y\).
In the context of linear regression the coefficient of determination is always the square of the correlation coefficient \(r\) discussed in Section 10.2. Thus the coefficient of determination is denoted \(r^2\), and we have two additional formulas for computing it.
Definition: coefficient of determination
The
coefficient of determination of a collection of \((x,y)\) pairs is the number \(r^2\) computed by any of the following three expressions:

\[r^2=\dfrac{SS_{yy}-SSE}{SS_{yy}}=\dfrac{SS^2_{xy}}{SS_{xx}SS_{yy}}=\hat{\beta }_1\dfrac{SS_{xy}}{SS_{yy}}\]
It measures the proportion of the variability in \(y\) that is accounted for by the linear relationship between \(x\) and \(y\).
If the
correlation coefficient \(r\) is already known then the coefficient of determination can be computed simply by squaring \(r\), as the notation indicates, \(r^2=(r)^2\).
Example \(\PageIndex{1}\)
The value of used vehicles of the make and model discussed in "Example 10.4.2" in Section 10.4 varies widely. The most expensive automobile in the sample in Table 10.4.3 has value \(\$30,500\), which is nearly half again as much as the least expensive one, which is worth \(\$20,400\). Find the proportion of the variability in value that is accounted for by the linear relationship between age and value.
Solution:
The proportion of the variability in value \(y\) that is accounted for by the linear relationship between it and age \(x\) is given by the coefficient of determination, \(r^2\). Since the correlation coefficient \(r\) was already computed in "Example 10.4.2" in Section 10.4 as
\[r=-0.819\\ r^2=(-0.819)^2=0.671\]
About \(67\%\) of the variability in the value of this vehicle can be explained by its age.
Example \(\PageIndex{2}\)
Use each of the three formulas for the coefficient of determination to compute its value for the example of ages and values of vehicles.
Solution:
In "Example 10.4.2" in Section 10.4 we computed the exact values
\[SS_{xx}=14\\ SS_{xy}=-28.7\\ SS_{yy}=87.781\\ \hat{\beta _1}=-2.05\]
In "Example 10.4.4" in Section 10.4 we computed the exact value\[SSE=28.946\]
Inserting these values into the formulas in the definition, one after the other, gives
\[r^2=\dfrac{SS_{yy}−SSE}{SS_{yy}}=\dfrac{87.781−28.946}{87.781}=0.6702475479\]
\[r^2= \dfrac{SS^2_{xy}}{SS_{xx}SS_{yy}}=\dfrac{(−28.7)^2}{(14)(87.781)}=0.6702475479\]\[r^2=\hat{β}_1 \dfrac{SS_{xy}}{SS_{yy}}=−2.05\dfrac{−28.7}{87.781}=0.6702475479\]
which rounds to \(0.670\). The discrepancy between the value here and in the previous example is because a rounded value of \(r\) from "Example 10.4.2" was used there. The actual value of \(r\) before rounding is \(-0.8186864772\), which when squared gives the value for \(r^2\) obtained here.
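The three computations above can be reproduced numerically from the already-computed quantities:

```python
# quantities from "Example 10.4.2" and "Example 10.4.4"
SSxx, SSxy, SSyy = 14.0, -28.7, 87.781
beta1, SSE = -2.05, 28.946

r2_a = (SSyy - SSE) / SSyy          # (SS_yy - SSE) / SS_yy
r2_b = SSxy ** 2 / (SSxx * SSyy)    # SS_xy^2 / (SS_xx * SS_yy)
r2_c = beta1 * SSxy / SSyy          # beta_1 * SS_xy / SS_yy

# all three formulas agree and round to 0.670
assert abs(r2_a - 0.670) < 1e-3
assert abs(r2_b - r2_a) < 1e-9 and abs(r2_c - r2_a) < 1e-9
```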
The coefficient of determination \(r^2\) can always be computed by squaring the correlation coefficient \(r\) if it is known. Any one of the defining formulas can also be used. Typically one would make the choice based on which quantities have already been computed. What should be avoided is trying to compute \(r\) by taking the square root of \(r^2\), if it is already known, since it is easy to make a sign error this way. To see what can go wrong, suppose \(r^2=0.64\). Taking the square root of a positive number with any calculating device will always return a positive result. The square root of \(0.64\) is \(0.8\). However, the actual value of \(r\) might be the negative number \(-0.8\).
Key Takeaway
The coefficient of determination \(r^2\) estimates the proportion of the variability in the variable \(y\) that is explained by the linear relationship between \(y\) and the variable \(x\). There are several formulas for computing \(r^2\). The choice of which one to use can be based on which quantities have already been computed so far.

Contributor
Anonymous |
Well you might suspect that there is a connection between the roots of a number and exponents given my last couple of posts, and you would be correct. Now in my post on roots, I limited myself to using numbers that are perfect roots. For example, \[ \sqrt[3]{8} = 2 \]. But many roots do not have such a simple answer. You just have to represent it exactly as a root. For example, \[ \sqrt{2} \] has no simple integer solution. In fact, its decimal equivalent cannot be written down exactly as it is a non-repeating decimal. The answer you get on a calculator is just the first few decimals. \[ \sqrt{2} \] is an example of something called an irrational number. That doesn’t mean you can’t reason with it. It just means you can’t write it down exactly using decimals. You can only exactly represent it as \[ \sqrt{2} \].
But it is still true that \[ \sqrt{2} \times \sqrt{2} = 2 \] (ignoring the negative solution for now).
Now from my post on exponents, you saw that \[ 2^{3} \times 2^{2} = 2^{3+2} = 2^{5} \]
Now bear with me, but consider \[ 2^{\frac{1}{2}} \times 2^{\frac{1}{2}} = 2^{\frac{1}{2}+\frac{1}{2}} = 2^{1} = 2 \].
But remember that \[ \sqrt{2} \times \sqrt{2} = 2 \].
Perhaps \[ 2^{\frac{1}{2}} = \sqrt{2} \].
This is in fact true and the rules of exponents apply here as well. In general, \[ x^{\frac{1}{n}} = \sqrt[n]{x} \].
Now consider \[ \sqrt{2} \times \sqrt{2} \times \sqrt{2} = 2^{\frac{1}{2}} \times 2^{\frac{1}{2}} \times 2^{\frac{1}{2}} = 2^{\frac{3}{2}} \].
Looks like a number raised to a fraction does have meaning. Now in decimals, this number is 2.82842712475…, which actually goes on forever. But on a calculator, you can get this answer by first raising 2 to the “3” power and then taking the square root of the result, or by first taking the square root of 2 and then raising the result to the “3” power. In general:
\[ x^{\frac{m}{n}} = \sqrt[n]{x^{m}} = \left({\sqrt[n]{x}}\right)^{m} \].
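These rules are easy to confirm numerically (floating point agrees up to rounding error):

```python
import math

# 2^(3/2) equals both sqrt(2^3) and (sqrt(2))^3
assert math.isclose(2 ** 1.5, math.sqrt(2 ** 3))
assert math.isclose(2 ** 1.5, math.sqrt(2) ** 3)

# the general rule x^(m/n) = nth-root(x^m) = (nth-root(x))^m, for positive x
x, m, n = 5.0, 3, 4
assert math.isclose(x ** (m / n), (x ** m) ** (1 / n))
assert math.isclose(x ** (m / n), (x ** (1 / n)) ** m)
```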
In my next post, I will show several examples of fractional exponents. |
Let $p$ be a real polynomial and $N$ be a positive integer. Suppose I tell you that $|p(\frac{1}{k})| \le 1$ for all $k\in\{1,\ldots,N\}$, and also that $p(\frac{1}{N})\le -\frac{1}{2}$ while $p(\frac{2}{N})\ge \frac{1}{2}$. What bounds can you give me on the minimal possible degree $\deg(p)$?
An upper bound of $O(\sqrt{N})$ follows by considering the Chebyshev polynomials (which indeed are bounded
everywhere in $[0,1]$, not just at the inverse-integer points).
On the other hand, the best lower bound I could show was $\deg(p)=\Omega(N^{1/4})$. This follows by restricting our attention to the interval $[\frac{1}{N},\frac{1}{\sqrt{N}}]$, which has no point more than about $1/N$ away from an inverse-integer point, and where (by assumption) $p$ also attains a large first derivative somewhere. We then apply standard results from approximation theory about polynomials bounded at discrete points, due to, e.g., Ehlich, Zeller, Coppersmith, Rivlin, and Cheney. Unfortunately the original papers seem to be paywalled, but the idea here is just to say that either $|p(x)|=O(1)$ in the entire interval $[\frac{1}{N},\frac{1}{\sqrt{N}}]$, in which case we can directly use Markov's inequality to lower-bound its degree, or else $p$ goes on some crazy excursion in between two of the discrete points at which it's bounded (say $\frac{1}{k}$ and $\frac{1}{k-1}$), in which case it attains a proportionately larger derivative there, so Markov's inequality can again be applied.
My question is whether there are any fancier tools from approximation theory that yield a better lower bound on the degree, like $\Omega(N^{1/3})$ or conceivably even $\Omega(\sqrt{N})$.
In case it helps: I already tried ransacking the approximation theory literature, but while I found many papers about polynomials bounded at evenly-spaced points, I found next to nothing about unevenly-spaced points (maybe I didn't know the right search terms). I also tried using Bernstein's inequality, which often yields better lower bounds on degree than Markov's inequality. But the trouble is that Bernstein's inequality is only useful if our polynomial attains a large first derivative far away from the endpoints of the interval where we're studying it (i.e., towards the center of the interval). And it seems that that can't be guaranteed here, basically because the interval $[0,1]$ has precious few inverse-integer points that are anywhere close to its endpoint of $1$. |
I work on an inverse problem for my Ph.D. research, which for simplicity's sake we'll say is determining $\beta$ in
$L(\beta)u \equiv -\nabla\cdot(k_0e^\beta\nabla u) = f$
from some observations $u^o$; $k_0$ is a constant and $f$ is known. This is typically formulated as an optimization problem for extremizing
$J[u, \lambda; \beta] = \frac{1}{2}\int_\Omega(u(x) - u^o(x))^2dx + \int_\Omega\lambda(L(\beta)u - f)dx$
where $\lambda$ is a Lagrange multiplier. The functional derivative of $J$ with respect to $\beta$ can be computed by solving the adjoint equation
$L(\beta)\lambda = u - u^o.$
Some regularizing functional $R[\beta]$ is added to the problem for the usual reasons.
The unspoken assumption here is that the observed data $u^o$ are defined continuously throughout the domain $\Omega$. I think it might be more appropriate for my problem to instead use
$J[u, \lambda; \beta] = \sum_{n = 1}^N\frac{(u(x_n) - u^o(x_n))^2}{2\sigma_n^2} + \int_\Omega\lambda(L(\beta)u - f)dx$
where $x_n$ are the points at which the measurements are taken and $\sigma_n$ is the standard deviation of the $n$-th measurement. The measurements of this field are often spotty and missing chunks; why interpolate to get a continuous field of dubious fidelity if that can be avoided?
This gives me pause because the adjoint equation becomes
$L(\beta)\lambda = \sum_{n = 1}^N\frac{u(x_n) - u^o(x_n)}{\sigma_n^2}\delta(x - x_n)$
where $\delta$ is the Dirac delta function. I'm solving this using finite elements, so in principle integrating a shape function against a delta function amounts to evaluating the shape function at that point. Still, the regularity issues probably shouldn't be dismissed out of hand. My best guess is that the objective functional should be defined in terms of the finite element approximation to all the fields, rather than in terms of the real fields and then discretized after.
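To illustrate the "evaluating the shape function at that point" step for 1D piecewise-linear (P1) elements: the point source $\delta(x-x_n)$ assembles into a load vector whose only nonzero entries are the values of the two hat functions on the element containing $x_n$. The mesh, observation points, residuals, and weights below are made up for illustration, and the function names are mine, not from any particular FEM package.

```python
import numpy as np

def hat_values(nodes, x):
    """Values of all P1 hat functions at the point x on a sorted 1D mesh.

    Only the two hats supported on the element containing x are nonzero,
    so integrating a shape function against delta(x - x_n) reduces to
    evaluating it at x_n.
    """
    e = np.searchsorted(nodes, x) - 1        # index of element containing x
    e = np.clip(e, 0, len(nodes) - 2)
    h = nodes[e + 1] - nodes[e]              # element length
    b = np.zeros(len(nodes))
    b[e] = (nodes[e + 1] - x) / h            # left hat
    b[e + 1] = (x - nodes[e]) / h            # right hat
    return b

# Load vector for sum_n (u(x_n) - u^o(x_n)) / sigma_n^2 * delta(x - x_n):
nodes = np.linspace(0.0, 1.0, 11)
x_obs = [0.23, 0.61]     # measurement locations (illustrative)
resid = [0.5, -1.0]      # residuals u(x_n) - u^o(x_n) (illustrative)
sigma = [0.1, 0.2]
rhs = sum(r / s**2 * hat_values(nodes, x)
          for x, r, s in zip(x_obs, resid, sigma))
```

The resulting right-hand side has at most two nonzero entries per observation point, which is what makes the pointwise formulation cheap regardless of how spotty the data are.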
I can't find any comparisons of assuming continuous or pointwise measurements in inverse problems in the literature, either in relation to the specific problem I'm working on or generally. Often pointwise measurements are used without any mention of the incipient regularity issues, e.g. here.
Is there any published work comparing the assumptions of continuous vs. pointwise measurements? Should I be concerned about the delta functions in the pointwise case? |
Question
A manufacturer can sell x items at a price of Rs \[\left( 5 - \frac{x}{100} \right)\] each. The cost of producing x items is Rs \[\left( \frac{x}{5} + 500 \right) .\] Find the number of items he should sell to earn maximum profit.
Solution
\[\text { Profit =S.P. - C.P.}\]
\[ \Rightarrow P = x\left( 5 - \frac{x}{100} \right) - \left( 500 + \frac{x}{5} \right)\]
\[ \Rightarrow P = 5x - \frac{x^2}{100} - 500 - \frac{x}{5}\]
\[ \Rightarrow \frac{dP}{dx} = 5 - \frac{x}{50} - \frac{1}{5}\]
\[\text { For maximum or minimum values of P, we must have }\]
\[\frac{dP}{dx} = 0\]
\[ \Rightarrow 5 - \frac{x}{50} - \frac{1}{5} = 0\]
\[ \Rightarrow \frac{24}{5} = \frac{x}{50}\]
\[ \Rightarrow x = \frac{24 \times 50}{5}\]
\[ \Rightarrow x = 240\]
\[\text { Now, }\]
\[\frac{d^2 P}{d x^2} = \frac{- 1}{50} < 0\]
\[\text { So, the profit is maximum if 240 items are sold.}\]
Notes
The solution given in the book is incorrect. The solution here is created according to the question given in the book.
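The arithmetic in the solution above can be verified with a quick numerical check (a sketch, not part of the book's solution):

```python
# P(x) = x(5 - x/100) - (x/5 + 500), as derived above.
def profit(x):
    return x * (5 - x / 100) - (x / 5 + 500)

def dP(x):
    # dP/dx = 5 - x/50 - 1/5
    return 5 - x / 50 - 1 / 5

# x = 240 is a critical point and beats its neighbours, so it is a maximum:
assert abs(dP(240)) < 1e-9
assert profit(240) > profit(239) and profit(240) > profit(241)
```

At x = 240 the profit works out to Rs 76, consistent with the second-derivative test above.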
Measurement of the $D\to K^-\pi^+$ strong phase difference in $\psi(3770)\to D^0\overline{D}{}^0$. 2014 (English). In: Physics Letters B, ISSN 0370-2693, Vol. 734. Article in journal (refereed), published. Abstract [en]:
We study $D^0\overline{D}{}^0$ pairs produced in $e^+e^-$ collisions at $\sqrt{s}=3.773$ GeV using a data sample of 2.92 fb$^{-1}$ collected with the BESIII detector. We measure the asymmetry $\mathcal{A}^{CP}_{K\pi}$ of the branching fractions of $D \to K^-\pi^+$ in $CP$-odd and $CP$-even eigenstates to be $(12.7\pm1.3\pm0.7)\times10^{-2}$. $\mathcal{A}^{CP}_{K\pi}$ can be used to extract the strong phase difference $\delta_{K\pi}$ between the doubly Cabibbo-suppressed process $\overline{D}{}^{0}\to K^-\pi^+$ and the Cabibbo-favored process $D^0\to K^- \pi^+$. Using world-average values of external parameters, we obtain $\cos\delta_{K\pi} = 1.02\pm0.11\pm0.06\pm0.01$. Here, the first and second uncertainties are statistical and systematic, respectively, while the third uncertainty arises from the external parameters. This is the most precise measurement of $\delta_{K\pi}$ to date.
National Category: Subatomic Physics. Identifiers: URN: urn:nbn:se:uu:diva-288063; DOI: 10.1016/j.physletb.2014.05.071; OAI: oai:DiVA.org:uu-288063; DiVA id: diva2:923889. Note:
Funding: The BESIII Collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts No. 11125525, No. 11235011, No. 11322544, No. 11335008, and No. 11425524; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics; the Collaborative Innovation Center for Particles and Interactions; Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts No. 11179007, No. U1232201, and No. U1332201; CAS under Contracts No. KJCX2-YW-N29 and No. KJCX2-YW-N45; 100 Talents Program of CAS; National 1000 Talents Program of China; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Collaborative Research Center Contract No. CRC-1044; Istituto Nazionale di Fisica Nucleare, Italy; Joint Funds of the National Science Foundation of China under Contract No. U1232107; Ministry of Development of Turkey under Contract No. DPT2006K-120470; Russian Foundation for Basic Research under Contract No. 14-07-91152; The Swedish Research Council; US Department of Energy under Contracts No. DE-FG02-04ER41291, No. DE-FG02-05ER41374, No. DE-SC0012069, and No. DE-SC0010118; US National Science Foundation; University of Groningen and the Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt; and WCU Program of National Research Foundation of Korea under Contract No. R32-2008-000-10155-0.
Alcohols, Phenols and Ethers: Methods for Preparation of Alcohols and Phenols
Preparation of alcohols:
From alkenes: (i) acid-catalysed hydration
From alkynes: \tt CH\equiv CH\xrightarrow[\text{Kucherov reaction}]{H_2O/Hg^{2+},-H^+}CH_3CHO\xrightarrow[\Delta]{O_2/(CH_3COO)_2Mg}CH_3COOH
From haloalkanes:
Trick: Replacement of H by alkyl groups is related to the formation of 2° and 3° alcohols.
From 1° amines: \tt R-NH_2+O=N-O-H\xrightarrow{NaNO_{2}/HCl}R-OH+H_2O+N_2\uparrow
Methanol (wood spirit) from water gas: \tt (CO+H_2)+H_2\xrightarrow[650K/200atm]{Zn/Cr_2O_3/CuO}CH_3OH
Ethanol from sugar by fermentation:
\tt C_{12}H_{22}O_{11}\ (molasses\ from\ sugar\ industry)\xrightarrow[Invertase]{H_2O/(NH_4)_2SO_4/yeast}C_6H_{12}O_6\ (glucose)+C_6H_{12}O_6\ (fructose)\xrightarrow{Zymase}C_2H_5OH\ (ethanol)+CO_2
Preparation of Phenols (carbolic acid): (a) From benzene: Industrial process: (i) Raschig process: \tt 2C_6H_6(vapour)+2HCl(gas)+O_2\xrightarrow[-2H_2O]{CuCl_2/FeCl_3/523K}2C_6H_5Cl\xrightarrow[700K]{steam}C_6H_5OH\ (phenol)+HCl (ii) Cumene process:
(b) From chlorobenzene: Industrial process: Dow's method
\tt C_6H_5Cl+2NaOH\xrightarrow[-NaCl,-H_2O]{200atm/250^0C}C_6H_5O^\ominus Na^\oplus \xrightarrow{H_2O/H^+}C_6H_5OH
(c) Acidic character of some compounds:
HCOOH > CH3COOH > H2CO3 > C6H5OH > CH3OH > H2O > ROH (among alcohols: 1° > 2° > 3°) > NH3 > acetylene > CH2=CH2 > CH3—CH3
I have this SDE
$$ dX(t) = [X(t)(u(t)(\delta-r)+r-\beta(t))+\theta(t)(1-\alpha(t))]dt+X(t)u(t)\sigma dW(t), t \in [0,T] \\ X(0) = X_0(1-\alpha(0)) $$
I've checked some books and I find the solution is this: $$ X_t=\Phi_{t,0}.\left( X_0(1-\alpha_0)+\int_{0}^{t}\frac{\theta(s)(1-\alpha(s))}{\Phi_{s,0}}ds + \int_{0}^{t} \frac{\sigma u(s)}{\Phi_{s,0}}dW_s \right)$$ where $ \Phi_{t,0}=\exp \left( \int_{0}^{t}(u(s)(\delta-r) +r-\beta(s))ds \right)\\ $
I need to prove that $X(t)$ is bounded. Some idea?
Thanks |
Hi! I need help with this problem. I tried to solve it by saying that it would be the same as the field of the spherical shell alone plus the field of a point charge -q at A or B. For the field of the spherical shell I got ##E_1=\frac{q}{4\pi\epsilon_0 R^2}=\frac{\sigma}{\epsilon_0}## and for...
V(ρ) = V_o*ln(ρ/0.0018)/ln(45/180)(Attached picture is where the unit vector of r is really ρ.)In cylindrical coordinates∇V = ρ*dV/dρ + 0 + 0∇V =derivative[V_o*ln(ρ/0.0018)/1.386]dρ∇V = V_o*0.0018/(1.386*ρ)E = V_o*0.0012987/ρWork = 0.5∫∫∫εE•E dvBounds: 0.0018 to 0.00045 mD = εE =...
Hi! My main problem is that I don't understand what the problem is telling me. What does it mean that the surface is a flat disc bounded by the circle? Is the Gauss surface the disc? Does that mean that inside the circle in the figure, there is a disc? Can you give me some guidance on how to...
So I figured out the potential is: dV = (1/(4*Pi*Epsilon_0))*[λ dl/sqrt(z^2+a^2)].From that expression: We can figure out that since its half a ring we have to integrate from 0 to pi*a, so we would get:V = (1/(4*Pi*Epsilon_0))*[λ {pi*a]/sqrt(z^2+a^2)]In that expression: a = sqrt(x^2+y^2)...
I tried to work out both a) and b), but I am not sure if I am correct. I drew a picture with a sphere around q first with radius r and then with radius 3r. For a) ##E\cdot A=\frac{q}{\epsilon_0}## (when using Gauss' Law). Since ##A=4\pi r^2##, I substituted this in the equation and solved for E giving me...
Hi, so I was able to solve this problem by just equating the forces (Tension, mg, and EQ).But I thought I could also solve this problem with Conservation of Energy.However, I calculated it several times, and I never get the right answer this way.Doesn't the Electric Field do the work to put...
I read Wikipedia's description of how a plasma ball works. Question: What kind of energy is the "radio-frequency energy from the transformer"? Is in the form of electric field energy, magnetic field energy, or both? Thank you!(from Wikipedia)...Although many variations exist, a plasma lamp is...
Hi, a solution contains some ions (charged particles). In my example we are only interested in positive ions. It is assumed that these ions acquired some mobility under a concentration gradient. Their direction is A to B. Then these ions encounter/cross an electric field which is oriented from B...
The electric field due to a dipole distribution in volume ##V'## can be viewed as electric field due to a volume charge distribution in ##V'## plus electric field due to a surface charge distribution in boundary of ##V'##.##\displaystyle\mathbf{E}=\int_{V'} \dfrac{\rho...
Hi,I've a question about electricity in the following scenario: consider an accumulator (e.g. a 9V battery) and an analog/digital voltmeter having a probe connected to the accumulator + clamp and the other to the ground (for instance connecting it to a metal rod stuck in the ground).Do you...
I'm looking for a high voltage power supply. I have no experience with such a power supply, nor with all the terms or specifications used for such tools, so I'm looking for general suggestions to what to look for.I want to generate an electric field or potential field between two points a few...
Hi.The derivation of the capacity of an ideal parallel-plate capacitor is inconsistent: On the one hand, the plates are assumed to be infinitely large to exploit symmetries to compute an expression for the electric field, on the other the area is finite to get a finite expression for the...
1. Homework StatementA charge q1 is at rest at the origin, and a charge q2 moves with speed βc in the x-direction, along the line z = b. For what angle θ shown in the figure will the horizontal component of the force on q1 be maximum? What is θ in the β ≈ 1 and β ≈ 0 limits? (see image)2...
I am working on the same problem as a previous post, but he already marked it as answered and did not post a solution.https://www.physicsforums.com/threads/sphere-with-non-uniform-charge-density.938117/I am curious as to a method of finding the ##k## and substituting into the electric...
1. Homework StatementAn isolated parallel-plate capacitor of area ##A_1## with an air gap of length ##s_1## is charged up to a potential difference ##\Delta V_1## A second parallel-plate capacitor, initially uncharged, has an area ##A_2## and a gap of length ##s_2## filled with plastic whose...
1. Homework StatementWhat is the potential at the center of the sphere relative to infinity? The sphere is dielectric with uniform - charge on the surface of the sphere.2. Homework Equations##k=\frac {1}{4\pi\epsilon_0}####V=\frac {KQ}{r}##3. The Attempt at a SolutionIf the distance...
1. Homework StatementA distribution of charge with spherical symmetry has volumetric density given by: $$ \rho(r) = \rho_0 e^{ \frac {-r} {a} }, \left( 0 \leq r < \infty \right); $$where ##\rho_0## and ##a## are constants.a) Find the total chargeb) Find ##\vec E## at an arbitrary point2...
1. Homework StatementA charge q is placed at one corner of a cube. What is the value of the flux of the charge's electric field through one of its faces?2. Homework EquationsThe flux surface integral of an electric field is equal to the value of the charge enclosed divided by the...
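(Aside, not from the thread:) the standard result here, flux $q/(24\epsilon_0)$ through each face not touching the charge and zero through the three faces containing it, can be checked numerically through the solid angle one far face subtends at the corner. A pure-Python sketch:

```python
import math

def far_face_solid_angle(n=400):
    """Midpoint-rule estimate of the solid angle subtended, at a cube corner,
    by one of the three unit-square faces NOT touching that corner
    (face z = 1, 0 <= x, y <= 1, charge at the origin).
    Flux through the face = (q / eps0) * Omega / (4 * pi)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            # dOmega = z dA / r^3, with z = 1 on this face
            total += h * h / (x * x + y * y + 1.0) ** 1.5
    return total

# Omega converges to pi/6, so the flux is (q/eps0) * (pi/6)/(4*pi) = q/(24*eps0).
omega = far_face_solid_angle()
```

Summing over the three far faces recovers the total flux $q/(8\epsilon_0)$ through the cube, i.e. one eighth of the full solid angle, as symmetry demands.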
1. Homework StatementA rod of charged -Q is curved from the x-axis to angle ##\alpha##. The rod is a distance R from the origin (I will have a picture uploaded). What is the electric field of the charge in terms of it's x and y components at the origin? k is ##\frac {1} {4\pi \epsilon_0}##2...
1. Homework StatementFind the electric field of a point outside sphere without using Gauss's law. (Do not evaluate the integral)2. Homework EquationsCoulomb's LawSpherical Co-ordinate System3. The Attempt at a SolutionI have attached my attempt as a picture but now I am stuck, I don't...
Having come experimentally to an interesting electrostatic effect, I have returned, aged 47, to my old books in physics. It turns out that my books delight in using Gauss theorem etc. in rather ideal geometrical surface charge distribution, but never gave me the tools to answer to this simple...
1. Homework StatementA point charge of 6 × 10−9 C is located at the origin. The magnitude of the electric field at ##\langle 0.6,0,0\rangle## m is 150 N/C. Next, a short, straight, thin copper wire 5 mm long is placed along the x axis with its center at location ##\langle 0.3,0,0 \rangle## m. What is the...
1. Homework StatementShow that the magnitude of the net force exerted on one dipole by the other dipole is given approximately by:$$F_{net}≈\frac {6q^2s^2k} {r^4}$$for ##r\gg s##, where r is the distance from one dipole to the other dipole, s is the distance across one dipole. (Both dipoles...
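(Aside:) the quoted $F_{net}\approx 6q^2s^2k/r^4$ can be sanity-checked for two collinear dipoles on the x-axis by summing the four Coulomb pair forces directly; all numbers below are illustrative (k = q = 1):

```python
def net_force_on_B(r, s, q=1.0, k=1.0):
    """1D net Coulomb force on dipole B from dipole A.

    Dipole A: charges +q at s/2 and -q at -s/2.
    Dipole B: charges +q at r + s/2 and -q at r - s/2 (same orientation).
    Positive F means B is pushed away from A.
    """
    A = [(+q, +s / 2), (-q, -s / 2)]
    B = [(+q, r + s / 2), (-q, r - s / 2)]
    F = 0.0
    for qa, xa in A:
        for qb, xb in B:
            d = xb - xa
            # like charges with d > 0 push B in +x, etc.
            F += k * qa * qb / d**2 * (1 if d > 0 else -1)
    return F

r, s = 1.0, 1e-3
F = net_force_on_B(r, s)
approx = 6 * s**2 / r**4   # |F_net| ~ 6 k q^2 s^2 / r^4 with k = q = 1
# |F| matches approx up to a relative error of order (s/r)^2.
```

Writing the exact sum as $kq^2\left[\frac{2}{r^2}-\frac{1}{(r-s)^2}-\frac{1}{(r+s)^2}\right]$ and Taylor-expanding in $s/r$ gives the $6kq^2s^2/r^4$ leading term directly.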
1. Homework StatementYou make repeated measurements of the electric field ##\vec E## due to a distant charge, and you find it is constant in magnitude and direction. At time ##t=0## your partner moves the charge. The electric field doesn't change for a while, but at time ##t=24## ns you...
1. Homework StatementA dipole is located at the origin, and is composed of charged particles with charge +e and -e, separated by a distance 2x10-10m along the x-axis.Calculate the magnitude of the electric field due to the dipole at location ##\langle 0.2\times 10^{-8}, 0, 0\rangle##m2...
1. Homework StatementA charged particle has an electric field at ##\langle -0.13, 0.14, 0 \rangle## m is ##\langle 6.48\times10^3, -8.64\times10^3, 0 \rangle## N/C. The charged particle is -3nC. Where is the particle located?2. Homework Equations##\vec E=\frac 1 {4π\varepsilon_0} \frac q...
1. Homework StatementA block of mass m having charge q is placed on a smooth horizontal table and is connected to a wall through an unstretched spring of constant k. A horizontal electric field E parallel to the spring is switched on. Find the amplitude of the SHM of the block.2. Homework...
1. Homework StatementA solid non-conducting sphere of radius R carries a uniform charge density. At a radial distance r1= R/4 the electric field has a magnitude Eo. What is the magnitude of the electric field at a radial distance r2=2R?2. Homework EquationsGauss's Law: ∫EdA=Qencl / ε0...
Metals are highly effective at screening electric fields. If we place two contacts reasonably far away from each other on a piece of metal and apply a voltage bias, the charge carriers in the section that is far enough from both the contacts should be unaffected by the electric field. Why then... |
Hamilton paths in vertex-transitive graphs of order 10p
Klavdija Kutnar, Dragan Marušič, Cui Zhang, 2012, original scientific article
Description: It is shown that every connected vertex-transitive graph of order $10p$, $p \ne 7$ a prime, which is not isomorphic to a quasiprimitive graph arising from the action of PSL$(2,k)$ on cosets of $\mathbb{Z}_k \times \mathbb{Z}_{(k-1)/10}$, contains a Hamilton path. Keywords: graph, vertex-transitive, Hamilton cycle, Hamilton path, automorphism group. Published: 15.10.2013; Views: 1626; Downloads: 11. Full text (0.00 KB)
On prime-valent symmetric bicirculants and Cayley snarks
Ademir Hujdurović, Klavdija Kutnar, Dragan Marušič, 2013, published scientific conference contribution
Keywords: graph, Cayley graph, arc-transitive, snark, semiregular automorphism, bicirculant. Published: 15.10.2013; Views: 1496; Downloads: 58. Full text (0.00 KB)
Distance-balanced graphs: Symmetry conditions
Klavdija Kutnar, Aleksander Malnič, Dragan Marušič, Štefko Miklavič, 2006, original scientific article
Description: A graph $X$ is said to be distance-balanced if for any edge $uv$ of $X$, the number of vertices closer to $u$ than to $v$ is equal to the number of vertices closer to $v$ than to $u$. A graph $X$ is said to be strongly distance-balanced if for any edge $uv$ of $X$ and any integer $k$, the number of vertices at distance $k$ from $u$ and at distance $k+1$ from $v$ is equal to the number of vertices at distance $k+1$ from $u$ and at distance $k$ from $v$. Exploring the connection between symmetry properties of graphs and the metric property of being (strongly) distance-balanced is the main theme of this article. That a vertex-transitive graph is necessarily strongly distance-balanced and thus also distance-balanced is an easy observation. With only a slight relaxation of the transitivity condition, the situation changes drastically: there are infinite families of semisymmetric graphs (that is, graphs which are edge-transitive, but not vertex-transitive) which are distance-balanced, but there are also infinite families of semisymmetric graphs which are not distance-balanced. Results on the distance-balanced property in product graphs prove helpful in obtaining these constructions. Finally, a complete classification of strongly distance-balanced graphs is given for the following infinite families of generalized Petersen graphs: GP$(n,2)$, GP$(5k+1,k)$, GP$(3k\pm3,k)$, and GP$(2k+2,k)$. Keywords: graph theory, graph, distance-balanced graphs, vertex-transitive, semisymmetric, generalized Petersen graph. Published: 15.10.2013; Views: 1723; Downloads: 29. Full text (0.00 KB)
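(Aside, not part of the record:) the distance-balanced condition defined above is easy to check by brute force on small graphs; a sketch:

```python
from collections import deque

def bfs_dist(adj, s):
    """Distances from s in an unweighted graph given as {vertex: set of neighbours}."""
    d = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in d:
                d[w] = d[v] + 1
                q.append(w)
    return d

def is_distance_balanced(adj):
    """For every edge uv: #{vertices closer to u} == #{vertices closer to v}."""
    dist = {v: bfs_dist(adj, v) for v in adj}
    for u in adj:
        for v in adj[u]:
            closer_u = sum(1 for w in adj if dist[u][w] < dist[v][w])
            closer_v = sum(1 for w in adj if dist[v][w] < dist[u][w])
            if closer_u != closer_v:
                return False
    return True

def gp(n, k):
    """Generalized Petersen graph GP(n, k): outer cycle, spokes, inner k-step edges."""
    adj = {i: set() for i in range(2 * n)}
    def add(a, b):
        adj[a].add(b); adj[b].add(a)
    for i in range(n):
        add(i, (i + 1) % n)            # outer cycle
        add(i, n + i)                  # spokes
        add(n + i, n + (i + k) % n)    # inner edges
    return adj

# GP(5,2) is the Petersen graph; it is vertex-transitive, hence distance-balanced:
assert is_distance_balanced(gp(5, 2))
```

By contrast, a path on three vertices already fails the condition at its middle edge, which shows the property is genuinely restrictive.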
A complete classification of cubic symmetric graphs of girth 6
Klavdija Kutnar, Dragan Marušič, 2009, original scientific article
Description: A complete classification of cubic symmetric graphs of girth 6 is given. It is shown that with the exception of the Heawood graph, the Moebius-Kantor graph, the Pappus graph, and the Desargues graph, a cubic symmetric graph $X$ of girth 6 is a normal Cayley graph of a generalized dihedral group; in particular, (i) $X$ is 2-regular if and only if it is isomorphic to a so-called $I_k^n$-path, a graph of order either $n^2/2$ or $n^2/6$, which is characterized by the fact that its quotient relative to a certain semiregular automorphism is a path. (ii) $X$ is 1-regular if and only if there exists an integer $r$ with prime decomposition $r=3^s p_1^{e_1} \dots p_t^{e_t} > 3$, where $s \in \{0,1\}$, $t \ge 1$, and $p_i \equiv 1 \pmod{3}$, such that $X$ is isomorphic either to a Cayley graph of a dihedral group $D_{2r}$ of order $2r$ or $X$ is isomorphic to a certain $\mathbb{Z}_r$-cover of one of the following graphs: the cube $Q_3$, the Pappus graph or an $I_k^n(t)$-path of order $n^2/2$. Keywords: graph theory, cubic graphs, symmetric graphs, $s$-regular graphs, girth, consistent cycle. Published: 15.10.2013; Views: 1725; Downloads: 26. Full text (0.00 KB)
How can I put a symbol above a comma? For example in
You can use \overset from amsmath:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[\{L(\lambda)\ \overset{\otimes}{,}\ L(\mu)\}\]
\end{document}
I added optional spaces to make it look more like your example:
As noted in the comment, you can use \scriptscriptstyle to make \otimes smaller:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[\{L(\lambda)\ \overset{\scriptscriptstyle\otimes}{,}\ L(\mu)\}\]
\end{document}
The following example sets \otimes as an upper limit over the comma, which is turned into a math operator via \mathop. This also decreases the size of ⊗. By explicitly using \scriptscriptstyle, the symbol can be decreased further, see Manuel's comment. Then the result is wrapped in \mathpunct to keep the property of the comma as a punctuation character:

\documentclass{article}
\newcommand*{\commaotimes}{%
  \mathpunct{%
    \mathop{,}\limits^{\otimes}%
  }%
}
\newcommand*{\commaotimessmall}{%
  \mathpunct{%
    \mathop{,}\limits^{\scriptscriptstyle\otimes}%
  }%
}
\begin{document}
\[ \{ L(\lambda) \commaotimes L(\mu) \} \]
\[ \{ L(\lambda) \commaotimessmall L(\mu) \} \]
\end{document}
Remark: \commaotimessmall works well in math styles \displaystyle and \textstyle. For \scriptstyle and \scriptscriptstyle the symbol \otimes does not change its size, because there is no \scriptscriptscriptstyle.
One possibility, \stackrel{\otimes}{,}:

$\left\{L\left(\lambda\right)\ \stackrel{\otimes}{,}\ L\left(\mu\right)\right\}$
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this I cant believe ive forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar
yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one
I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually
Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned
@TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks rather innocuous albeit surprising result in pure number theory
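The result being quoted (Jacobi's four-square theorem) is easy to sanity-check by brute force for small $n$; a sketch:

```python
from math import isqrt

def r4(n):
    """Count representations n = a^2 + b^2 + c^2 + d^2 over the integers
    (signs and order both count)."""
    count, m = 0, isqrt(n)
    for a in range(-m, m + 1):
        for b in range(-m, m + 1):
            ab = a * a + b * b
            if ab > n:
                continue
            for c in range(-m, m + 1):
                s = n - ab - c * c
                if s < 0:
                    continue
                d = isqrt(s)
                if d * d == s:
                    count += 2 if d > 0 else 1  # both +d and -d, or d = 0
    return count

def jacobi_r4(n):
    """8 * sigma(n) for odd n; 24 * (sum of odd divisors of n) for even n."""
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return 8 * sum(divs) if n % 2 else 24 * sum(d for d in divs if d % 2)

assert all(r4(n) == jacobi_r4(n) for n in range(1, 30))
```

For instance $r_4(1)=8$ (the eight signed permutations of $(\pm1,0,0,0)$) matches $8\sigma(1)=8$.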
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark They key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and $-1$ having no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no, you didn't come across as pretentious at all. I can only imagine that being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know-it-all types that are in every way detestable. You shouldn't be so hard on your character; you are very humble considering your calibre.
You probably don't realise how low the bar drops where integrity of character is concerned. Trust me, you wouldn't have come as far as you clearly have if you were a know-it-all.
It was actually the best thing for me to have met a 10-year-old, at the age of 30, who was well beyond what I'll ever realistically become as far as math is concerned. Someone like you is going to be accused of arrogance simply because you intimidate many; ignore the good majority of that, mate.
I saw that it was already asked, but the book where I'm studying is slightly different. Recall some definitions: if $E$ is a $\mathbb{K}$-vector space, let $\mathcal{E}$ be a vector subspace of the algebraic dual of $E$, which is the vector space of all linear forms on $E$. We say that $(E,\mathcal{E})$ is a dual couple if $\mathcal{E}$ separates points of $E$, that is, $u(x)=u(y)$ $\forall u \in \mathcal{E}$ implies $x=y$. We have the following result:
"Suppose $(E,\mathcal{E})$ is a dual couple and $u_1,...,u_n , u \in \mathcal{E}$. Then $u=\sum_{k=1}^n \lambda_k u_k$ for some scalars $\lambda_k$ iff $u_1(x)=...=u_n(x)=0$ implies $u(x)=0$"
Exercise (a): Let $E$ be an infinite-dimensional locally convex space. Prove that the weak* topology $\sigma(E',E)$ on $E'$ is metrizable if and only if $E$ has a countable algebraic basis.
Hint (a): Let $U_n=\lbrace \max\lbrace p_{x_1},...,p_{x_N} \rbrace < 1/n \rbrace$. If $x \in E$, every ball $\lbrace p_x < \epsilon \rbrace$ contains some $U_n$; that is, $|u(x_j)| < 1/n$ $\forall j$ implies $|u(x)| <1$. Hence $u(x_1)= \cdots = u(x_N)=0$ implies $u(x)=0$ and $x \in \mathrm{span}(x_1,...,x_N)$.
Proof (Exercise (a)). We assume that the weak* topology $\sigma(E',E)$ on $E'$ is metrizable. Let $x_1,...,x_N \in E \setminus \lbrace 0 \rbrace$. Note that the weak* topology is defined by the seminorm family $\mathcal{F}= \lbrace p_x(u)=|u(x)| : x \in E \rbrace$, and set \begin{align*}U_n := \lbrace u \in E' : \max \lbrace p_{x_1}(u),...,p_{x_N}(u) \rbrace < 1/n \rbrace,\end{align*}which is an open neighborhood of $0$ in the weak* topology. By hypothesis, for every $x \in E$ there exist $n \in \mathbb{N}$ and an open ball $B_{E'}:=\lbrace u \in E' : p_x(u):=|u(x)| < \epsilon \rbrace$ such that $U_n \subset B_{E'}$. Now, if $u \in U_n$ we have $|u(x_j)| < 1/n$ for all $j$, and then $|u(x)|< 1$ (taking $\epsilon=1$). Hence $u(x_1)=\cdots=u(x_N)=0$ implies $u(x)=0$, and by (1) (applied with the roles of $E$ and $E'$ exchanged) $x=\sum_{j=1}^N \lambda_j x_j$ for some scalars $\lambda_j$, that is, $x \in \mathrm{span}(x_1,...,x_N)$.
Reciprocally, just reverse the above implications, using results in (1)?
Can someone help me? Can you check whether the proof of exercise (a) is correct? Thank you.
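Regarding the converse direction asked about above, here is a hedged sketch (my reconstruction, not from the book): a countable algebraic basis gives a countable separating family of seminorms, and such a family always yields a metric inducing the topology.

```latex
% Sketch of the converse (my reconstruction, assuming a countable algebraic
% basis (x_n) of E): the countable family p_{x_n}(u) = |u(x_n)| already
% generates \sigma(E',E), since for x = \sum_{k=1}^{N} \lambda_k x_{n_k},
%   p_x(u) \le \sum_{k=1}^{N} |\lambda_k|\, p_{x_{n_k}}(u).
% A countable separating family of seminorms yields the translation-invariant
% metric
d(u,v) = \sum_{n=1}^{\infty} 2^{-n}\,\frac{p_{x_n}(u-v)}{1 + p_{x_n}(u-v)},
% which metrizes the weak* topology on E'.
```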
I know how to derive Navier-Stokes equations from Boltzmann equation in case where bulk and viscosity coefficients are set to zero. I need only multiply it on momentum and to integrate it over velocities.
But when I've tried to derive the NS equations with viscosity and bulk-viscosity coefficients, I've failed. Most textbooks contain the following words: "for taking into account the interchange of particles between fluid layers we need to modify the momentum flux density tensor". So do they mean that the NS equations with viscosity cannot be derived from the Boltzmann equation?
The target equation is $$ \partial_{t}\left( \frac{\rho v^{2}}{2} + \rho \epsilon \right) = -\partial_{x_{i}}\left(\rho v_{i}\left(\frac{v^{2}}{2} + w\right) - \sigma_{ij}v_{j} - \kappa \partial_{x_{i}}T \right), $$ where $$ \sigma_{ij} = \eta \left( \partial_{x_{i}}v_{j} + \partial_{x_{j}}v_{i} - \frac{2}{3}\delta_{ij}\partial_{x_{k}}v_{k}\right) + \varepsilon \delta_{ij}\partial_{x_{k}}v_{k}, $$ $w = \mu + Ts$ is the heat function (enthalpy) per unit mass, and $\epsilon$ refers to internal energy.
Edit. It seems that I've got this equation. After multiplying the Boltzmann equation by $\frac{m(\mathbf v - \mathbf u)^{2}}{2}$ and integrating over $\mathbf v$ I've got a transport equation which contains the objects $$ \Pi_{ij} = \rho\langle (v - u)_{i}(v - u)_{j} \rangle, \quad q_{i} = \rho \langle (\mathbf v - \mathbf u)^{2}(v - u)_{i}\rangle. $$ To calculate them I need an expression for the distribution function. For simplicity I've used the tau approximation; in the end I've got the expression $f = f_{0} + g$. The expressions for $\Pi_{ij}, q_{i}$ are then $$ \Pi_{ij} = \delta_{ij}P - \mu \left(\partial_{i}u_{j} + \partial_{j}u_{i} - \frac{2}{3}\delta_{ij}\partial_{k}u_{k}\right) - \epsilon \delta_{ij}\partial_{k}u_{k}, $$ $$ q_{i} = -\kappa \partial_{i} T, $$ so I've got the wanted result.
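For reference, the relaxation-time ("tau") approximation mentioned above can be sketched as follows (standard BGK form; the notation is mine and assumes a single relaxation time $\tau$):

```latex
% Relaxation-time (BGK) approximation of the collision term:
\partial_t f + \mathbf v\cdot\nabla_{\mathbf x} f
  = -\frac{f - f_0}{\tau},
% so, to first order in \tau, the distribution is
f \approx f_0 - \tau\left(\partial_t f_0
  + \mathbf v\cdot\nabla_{\mathbf x} f_0\right) \equiv f_0 + g,
% and inserting f_0 + g into \Pi_{ij} and q_i produces the viscous stress
% and the Fourier heat flux, with \eta, \kappa \propto \tau.
```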
This post imported from StackExchange Physics at 2016-02-10 14:08 (UTC), posted by SE-user Name YYY |
Skills to Develop
To use the Cochran–Mantel–Haenszel test when you have data from \(2\times 2\) tables that you've repeated at different times or locations. It will tell you whether you have a consistent difference in proportions across the repeats.
When to use it
Use the Cochran–Mantel–Haenszel test (which is sometimes called the Mantel–Haenszel test) for repeated tests of independence. The most common situation is that you have multiple \(2\times 2\) tables of independence; you're analyzing the kind of experiment that you'd analyze with a test of independence, and you've done the experiment multiple times or at multiple locations. There are three nominal variables: the two variables of the \(2\times 2\) test of independence, and the third nominal variable that identifies the repeats (such as different times, different locations, or different studies). There are versions of the Cochran–Mantel–Haenszel test for any number of rows and columns in the individual tests of independence, but they're rarely used and I won't cover them.
Fig. 2.10.1 A pony wearing pink legwarmers
For example, let's say you've found several hundred pink knit polyester legwarmers that have been hidden in a warehouse since they went out of style in 1984. You decide to see whether they reduce the pain of ankle osteoarthritis by keeping the ankles warm. In the winter, you recruit \(36\) volunteers with ankle arthritis, randomly assign \(20\) to wear the legwarmers under their clothes at all times while the other \(16\) don't wear the legwarmers, then after a month you ask them whether their ankles are pain-free or not. With just the one set of people, you'd have two nominal variables (legwarmers vs. control, pain-free vs. pain), each with two values, so you'd analyze the data with Fisher's exact test.
However, let's say you repeat the experiment in the spring, with \(50\) new volunteers. Then in the summer you repeat the experiment again, with \(28\) new volunteers. You could just add all the data together and do Fisher's exact test on the \(114\) total people, but it would be better to keep each of the three experiments separate. Maybe legwarmers work in the winter but not in the summer, or maybe your first set of volunteers had worse arthritis than your second and third sets. In addition, pooling different studies together can show a "significant" difference in proportions when there isn't one, or even show the opposite of a true difference. This is known as Simpson's paradox. For these reasons, it's better to analyze repeated tests of independence using the Cochran-Mantel-Haenszel test.
Null hypothesis
The null hypothesis is that the relative proportions of one variable are independent of the other variable within the repeats; in other words, there is no consistent difference in proportions in the \(2\times 2\) tables. For our imaginary legwarmers experiment, the null hypothesis would be that the proportion of people feeling pain was the same for legwarmer-wearers and non-legwarmer wearers, after controlling for the time of year. The alternative hypothesis is that the proportion of people feeling pain was different for legwarmer and non-legwarmer wearers.
Technically, the null hypothesis of the Cochran–Mantel–Haenszel test is that the odds ratios within each repetition are equal to \(1\). The odds ratio is equal to \(1\) when the proportions are the same, and the odds ratio is different from \(1\) when the proportions are different from each other. I think proportions are easier to understand than odds ratios, so I'll put everything in terms of proportions. But if you're in a field such as epidemiology where this kind of analysis is common, you're probably going to have to think in terms of odds ratios.
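To make the odds-ratio phrasing concrete, here is a tiny sketch (my own, with hypothetical counts that are not data from this chapter):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio of the 2x2 table [[a, b], [c, d]] (columns are the groups)."""
    return (a * d) / (b * c)

# Hypothetical counts: both groups have a 1/3 proportion in the top row,
# so the odds ratio comes out to exactly 1.
print(odds_ratio(20, 10, 40, 20))   # 1.0
```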
How the test works
If you label the four numbers in a \(2\times 2\) test of independence like this:
\[\begin{matrix} a & b\\ c & d \end{matrix}\]
and
\[(a+b+c+d)=n\]
you can write the equation for the Cochran–Mantel–Haenszel test statistic like this:
\[X_{MH}^{2}=\frac{\left \{ \left | \sum \left [ a-(a+b)(a+c)/n \right ] \right | -0.5\right \}^2}{\sum (a+b)(a+c)(b+d)(c+d)/(n^3-n^2)}\]
The numerator contains the absolute value of the difference between the observed value in one cell (\(a\)) and the expected value under the null hypothesis, \((a+b)(a+c)/n\), so the numerator is the squared sum of deviations between the observed and expected values. It doesn't matter how you arrange the \(2\times 2\) tables, any of the four values can be used as \(a\). You subtract the \(0.5\) as a continuity correction. The denominator contains an estimate of the variance of the squared differences.
The test statistic, \(X_{MH}^{2}\), gets bigger as the differences between the observed and expected values get larger, or as the variance gets smaller (primarily due to the sample size getting bigger). It is chi-square distributed with one degree of freedom.
Different sources present the formula for the Cochran–Mantel–Haenszel test in different forms, but they are all algebraically equivalent. The formula I've shown here includes the continuity correction (subtracting \(0.5\) in the numerator), which should make the \(P\) value more accurate. Some programs do the Cochran–Mantel–Haenszel test without the continuity correction, so be sure to specify whether you used it when reporting your results.
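As a concrete illustration, here is a minimal Python sketch of the formula above (my own implementation, not the author's spreadsheet), checked against the mussel allele counts that appear later in this section:

```python
def cmh_statistic(tables, continuity=True):
    """Cochran-Mantel-Haenszel chi-square for a list of (a, b, c, d) tables."""
    dev_sum = 0.0   # running sum of a - (a+b)(a+c)/n
    var_sum = 0.0   # running sum of (a+b)(a+c)(b+d)(c+d)/(n^3 - n^2)
    for a, b, c, d in tables:
        n = a + b + c + d
        dev_sum += a - (a + b) * (a + c) / n
        var_sum += (a + b) * (a + c) * (b + d) * (c + d) / (n**3 - n**2)
    numerator = abs(dev_sum) - (0.5 if continuity else 0.0)
    return numerator**2 / var_sum

# (a, b, c, d) = (marine 94, estuarine 94, marine non-94, estuarine non-94)
mussels = [(56, 69, 40, 77), (61, 257, 57, 301), (73, 65, 71, 79), (71, 48, 55, 48)]
print(round(cmh_statistic(mussels), 2))                     # 5.05
print(round(cmh_statistic(mussels, continuity=False), 2))   # 5.32
```

With one degree of freedom, 5.05 corresponds to \(P\approx 0.025\), matching the result quoted in the mussel example; dropping the continuity correction reproduces the SAS value of 5.3209.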
Assumptions
In addition to testing the null hypothesis, the Cochran-Mantel-Haenszel test also produces an estimate of the common odds ratio, a way of summarizing how big the effect is when pooled across the different repeats of the experiment. This requires assuming that the odds ratio is the same in the different repeats. You can test this assumption using the Breslow-Day test, which I'm not going to explain in detail; its null hypothesis is that the odds ratios are equal across the different repeats.
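The Mantel–Haenszel estimate of the common odds ratio has a simple closed form, \(\sum_i(a_id_i/n_i)/\sum_i(b_ic_i/n_i)\). A quick sketch (my code, applied to the mussel counts from the second example below):

```python
def mh_common_odds_ratio(tables):
    """Mantel-Haenszel pooled odds ratio for a list of (a, b, c, d) tables."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# (a, b, c, d) = (marine 94, estuarine 94, marine non-94, estuarine non-94)
mussels = [(56, 69, 40, 77), (61, 257, 57, 301), (73, 65, 71, 79), (71, 48, 55, 48)]
print(round(mh_common_odds_ratio(mussels), 2))   # 1.32
```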
If some repeats have a big difference in proportion in one direction, and other repeats have a big difference in proportions but in the opposite direction, the Cochran-Mantel-Haenszel test may give a non-significant result. So when you get a non-significant Cochran-Mantel-Haenszel test, you should perform a test of independence on each \(2\times 2\) table separately and inspect the individual \(P\) values and the direction of difference to see whether something like this is going on. In our legwarmer example, if the proportion of people with ankle pain was much smaller for legwarmer-wearers in the winter, but much higher in the summer, and the Cochran-Mantel-Haenszel test gave a non-significant result, it would be erroneous to conclude that legwarmers had no effect. Instead, you could conclude that legwarmers had an effect, it just was different in the different seasons.
Examples
Example
When you look at the back of someone's head, the hair either whorls clockwise or counterclockwise. Lauterbach and Knight (1927) compared the proportion of clockwise whorls in right-handed and left-handed children. With just this one set of people, you'd have two nominal variables (right-handed vs. left-handed, clockwise vs. counterclockwise), each with two values, so you'd analyze the data with Fisher's exact test.
However, several other groups have done similar studies of hair whorl and handedness (McDonald 2011):
Study group           Handedness          Right    Left
white children        Clockwise             708      50
                      Counterclockwise      169      13
                      percent CCW         19.3%   20.6%
British adults        Clockwise             136      24
                      Counterclockwise       73      14
                      percent CCW         34.9%   38.0%
Pennsylvania whites   Clockwise             106      32
                      Counterclockwise       17       4
                      percent CCW         13.8%   11.1%
Welsh men             Clockwise             109      22
                      Counterclockwise       16      26
                      percent CCW         12.8%   54.2%
German soldiers       Clockwise             801     102
                      Counterclockwise      180      25
                      percent CCW         18.3%   19.7%
German children       Clockwise             159      27
                      Counterclockwise       18      13
                      percent CCW         10.2%   32.5%
New York              Clockwise             151      51
                      Counterclockwise       28      15
                      percent CCW         15.6%   22.7%
American men          Clockwise             950     173
                      Counterclockwise      218      33
                      percent CCW         18.7%   16.0%
You could just add all the data together and do a test of independence on the \(4463\) total people, but it would be better to keep each of the \(8\) experiments separate. Some of the studies were done on children, while others were on adults; some were just men, while others were male and female; and the studies were done on people of different ethnic backgrounds. Pooling all these studies together might obscure important differences between them.
Analyzing the data using the Cochran-Mantel-Haenszel test, the result is \(X_{MH}^{2}=6.07\), \(1d.f.\), \(P=0.014\). Overall, left-handed people have a significantly higher proportion of counterclockwise whorls than right-handed people.
Example
McDonald and Siebenaller (1989) surveyed allele frequencies at the Lap locus in the mussel Mytilus trossulus on the Oregon coast. At four estuaries, we collected mussels from inside the estuary and from a marine habitat outside the estuary. There were three common alleles and a couple of rare alleles; based on previous results, the biologically interesting question was whether the Lap94 allele was less common inside estuaries, so we pooled all the other alleles into a "non-94" class.
There are three nominal variables: allele (\(94\) or non-\(94\)), habitat (marine or estuarine), and area (Tillamook, Yaquina, Alsea, or Umpqua). The null hypothesis is that at each area, there is no difference in the proportion of Lap94 alleles between the marine and estuarine habitats.
This table shows the number of \(94\) and non-\(94\) alleles at each location. There is a smaller proportion of \(94\) alleles in the estuarine location of each estuary when compared with the marine location; we wanted to know whether this difference is significant.
Location    Allele        Marine   Estuarine
Tillamook   94               56         69
            non-94           40         77
            percent 94     58.3%      47.3%
Yaquina     94               61        257
            non-94           57        301
            percent 94     51.7%      46.1%
Alsea       94               73         65
            non-94           71         79
            percent 94     50.7%      45.1%
Umpqua      94               71         48
            non-94           55         48
            percent 94     56.3%      50.0%
The result is \(X_{MH}^{2}=5.05\), \(1d.f.\), \(P=0.025\). We can reject the null hypothesis that the proportion of Lap94 alleles is the same in the marine and estuarine locations.
Example
Duggal et al. (2010) did a meta-analysis of placebo-controlled studies of niacin and heart disease. They found \(5\) studies that met their criteria and looked for coronary artery revascularization in patients given either niacin or placebo:
Study       Treatment   Revascularization   No revasc.   Percent revasc.
FATS        Niacin               2               46              4.2%
            Placebo             11               41             21.2%
AFREGS      Niacin               4               67              5.6%
            Placebo             12               60             16.7%
ARBITER 2   Niacin               1               86              1.1%
            Placebo              4               76              5.0%
HATS        Niacin               1               37              2.6%
            Placebo              6               32             15.8%
CLAS 1      Niacin               2               92              2.1%
            Placebo              1               93              1.1%
There are three nominal variables: niacin vs. placebo, revascularization vs. no revascularization, and the name of the study. The null hypothesis is that the rate of revascularization is the same in patients given niacin or placebo. The different studies have different overall rates of revascularization, probably because they used different patient populations and looked for revascularization after different lengths of time, so it would be unwise to just add up the numbers and do a single \(2\times 2\) test. The result of the Cochran-Mantel-Haenszel test is \(X_{MH}^{2}=12.75\), \(1d.f.\), \(P=0.00036\). Significantly fewer patients on niacin developed coronary artery revascularization.
Graphing the results
To graph the results of a Cochran–Mantel–Haenszel test, pick one of the two values of the nominal variable that you're observing and plot its proportions on a bar graph, using bars of two different patterns.
Fig. 2.10.2 Lap94 allele proportions (with 95% confidence intervals) in the mussel Mytilus trossulus at four bays in Oregon. Gray bars are marine samples and empty bars are estuarine samples.
Similar tests
Sometimes the Cochran–Mantel–Haenszel test is just called the Mantel–Haenszel test. This is confusing, as there is also a test for homogeneity of odds ratios called the Mantel–Haenszel test, and a Mantel–Haenszel test of independence for one \(2\times 2\) table. Mantel and Haenszel (1959) came up with a fairly minor modification of the basic idea of Cochran (1954), so it seems appropriate (and somewhat less confusing) to give Cochran credit in the name of this test.
If you have at least six \(2\times 2\) tables, and you're only interested in the direction of the differences in proportions, not the size of the differences, you could do a sign test.
The Cochran–Mantel–Haenszel test for nominal variables is analogous to a two-way anova or paired t–test for a measurement variable, or a Wilcoxon signed-rank test for rank data. In the arthritis-legwarmers example, if you measured ankle pain on a \(10\)-point scale (a measurement variable) instead of categorizing it as pain/no pain, you'd analyze the data with a two-way anova.
How to do the test
Spreadsheet
I've written a spreadsheet to perform the Cochran–Mantel–Haenszel test cmh.xls. It handles up to \(50\) \(2\times 2\) tables. It gives you the choice of using or not using the continuity correction; the results are probably a little more accurate with the continuity correction. It does not do the Breslow-Day test.
Web pages
I'm not aware of any web pages that will perform the Cochran–Mantel–Haenszel test.
R
Salvatore Mangiafico's \(R\) Companion has a sample R program for the Cochran-Mantel-Haenszel test, and also shows how to do the Breslow-Day test.
SAS
Here is a SAS program that uses PROC FREQ for a Cochran–Mantel–Haenszel test. It uses the mussel data from above. In the TABLES statement, the variable that labels the repeats must be listed first; in this case it is "location".
DATA lap;
   INPUT location $ habitat $ allele $ count;
   DATALINES;
Tillamook marine    94     56
Tillamook estuarine 94     69
Tillamook marine    non-94 40
Tillamook estuarine non-94 77
Yaquina   marine    94     61
Yaquina   estuarine 94     257
Yaquina   marine    non-94 57
Yaquina   estuarine non-94 301
Alsea     marine    94     73
Alsea     estuarine 94     65
Alsea     marine    non-94 71
Alsea     estuarine non-94 79
Umpqua    marine    94     71
Umpqua    estuarine 94     48
Umpqua    marine    non-94 55
Umpqua    estuarine non-94 48
;
PROC FREQ DATA=lap;
   WEIGHT count / ZEROS;
   TABLES location*habitat*allele / CMH;
RUN;
There is a lot of output, but the important part looks like this:
Cochran-Mantel-Haenszel Statistics (Based on Table Scores)
Statistic    Alternative Hypothesis    DF      Value     Prob
---------------------------------------------------------
    1        Nonzero Correlation        1     5.3209   0.0211
    2        Row Mean Scores Differ     1     5.3209   0.0211
    3        General Association        1     5.3209   0.0211
For repeated \(2\times 2\) tables, the three statistics are identical; they are the Cochran–Mantel–Haenszel chi-square statistic, without the continuity correction. For repeated tables with more than two rows or columns, the "general association" statistic is used when the values of the different nominal variables do not have an order (you cannot arrange them from smallest to largest); you should use it unless you have a good reason to use one of the other statistics.
The results also include the Breslow-Day test of homogeneity of odds ratios:
Breslow-Day Test for
Homogeneity of the Odds Ratios
------------------------------
Chi-Square        0.5295
DF                     3
Pr > ChiSq        0.9124
The Breslow-Day test for the example data shows no significant evidence for heterogeneity of odds ratios (\(X^2=0.53\), \(3d.f.\), \(P=0.91\)).
References
Cochran, W.G. 1954. Some methods for strengthening the common χ² tests. Biometrics 10: 417-451.
Duggal, J.K., M. Singh, N. Attri, P.P. Singh, N. Ahmed, S. Pahwa, J. Molnar, S. Singh, S. Khosla and R. Arora. 2010. Effect of niacin therapy on cardiovascular outcomes in patients with coronary artery disease. Journal of Cardiovascular Pharmacology and Therapeutics 15: 158-166.
Lauterbach, C.E., and J.B. Knight. 1927. Variation in whorl of the head hair. Journal of Heredity 18: 107-115.
Mantel, N., and W. Haenszel. 1959. Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute 22: 719-748.
McDonald, J.H. 2011. Myths of human genetics. Sparky House Press, Baltimore.
McDonald, J.H. and J.F. Siebenaller. 1989. Similar geographic variation at the Lap locus in the mussels Mytilus trossulus and M. edulis. Evolution 43: 228-231.
Contributor
John H. McDonald (University of Delaware) |
ISSN: 2156-8472
eISSN: 2156-8499
Mathematical Control & Related Fields
June 2013, Volume 3, Issue 2
Abstract:
Suitable numerical discretizations for boundary control problems of systems of nonlinear hyperbolic partial differential equations are presented. Using a discrete Lyapunov function, exponential decay of the discrete solutions of a system of hyperbolic equations for a family of first-order finite volume schemes is proved. The decay rates are explicitly stated. The theoretical results are accompanied by computational results.
Abstract:
We establish a Lipschitz stability estimate for the inverse problem consisting in the determination of the coefficient $\sigma(t)$, appearing in a Dirichlet initial-boundary value problem for the parabolic equation $\partial_tu-\Delta_x u+\sigma(t)f(x)u=0$, from Neumann boundary data. We extend this result to the same inverse problem when the previous linear parabolic equation is changed to the semi-linear parabolic equation $\partial_tu-\Delta_x u=F(x,t,\sigma(t),u(x,t))$.
Abstract:
In this paper necessary and sufficient conditions of approximate $L^\infty$-controllability at a free time are obtained for the control system $ w_{tt}=w_{xx}-q^2w$, $w_x(0,t)=u(t)$, $x>0$, $t\in(0,T)$, where $q>0$, $T>0$, $u\in L^\infty(0,T)$ is a control. This system is considered in the Sobolev spaces.
Abstract:
The aim of this paper is to tackle the time optimal controllability of an $(n+1)$-dimensional nonholonomic integrator. In the optimal control problem we consider, the state variables are subject to a bound constraint. We give a full description of the optimal control and optimal trajectories are explicitly obtained. The optimal trajectories we construct, lie in a 2-dimensional plane and they are composed of arcs of circle.
Abstract:
This work provides an optimal trading rule that allows buying and selling of an asset sequentially over time. The asset price follows a regime switching model involving a geometric Brownian motion and a mean reversion model. The objective is to determine a sequence of trading times to maximize an overall return. The corresponding value functions are characterized by a set of quasi variational inequalities. Closed-form solutions are obtained. The sequence of trading times can be given in terms of various threshold levels. Numerical examples are given to demonstrate the results.
Abstract:
In this paper, we study the relation between the smallest $g$-supersolution of a constrained backward stochastic differential equation and the viscosity solution of a constrained semilinear parabolic PDE, i.e., variational inequalities. We obtain an existence result for variational inequalities via constrained BSDEs, and prove a uniqueness result under a condition on the constraint. We then use these results to give a probabilistic interpretation for reflected BSDEs with a discontinuous barrier, and for other kinds of reflected BSDEs.
Or Reactive Power to full?
I made a claim about the power consumption of my EL panel earlier that was patently false for a very interesting reason.
Background
So it should be obvious by now that if I open my mouth and say anything about electroluminescent materials, it’s probably wrong. In this case, it has to do with my estimations that I made in this post where I was trying to figure out how much power my EL panel draws from my store-bought EL inverter. I measured 50mA of current draw at 120V from the panel which worked out to a six watt load! That’s a huge amount of power to be delivered to a handheld device.
Later that week, skeptical of my measurements, I decided to measure it again, but instead of measuring the 120V AC current coming from my inverter, I measured the 12V DC current going to my inverter. I found this:
So let’s work out the power being drawn by the inverter:
$$\Large P=I\times V$$
$$\Large 120mA\times 12V=1.44W$$
So wait a second, I’m getting six watts out of my inverter but only putting 1.44W in? I’m getting out more than I’m putting in! I’VE SOLVED THE ENERGY CRISIS EVERYONE YOU CAN RELAX NOW!
OUR SAVIOR
In all seriousness though, there is a very interesting explanation for what I saw.
Reactive Power
When we calculate power draw, we usually do so by multiplying current and voltage. This is the standard $$P=IV$$ formula that you’ve seen hundreds of times. It’s not exactly right though or at least you have to be a little more specific.
The important thing to note is that you need to actually measure the instantaneous voltage and current, calculate the instantaneous power, and then integrate the instantaneous power over time to get the average power draw. When dealing with DC currents and voltages, the instantaneous current and voltage are going to be pretty close to the average.
Here’s a simple case of a voltage supply driving a resistor. Measuring the voltage across the resistor and multiplying it by the current will give you the instantaneous power which is graphed to the right (note that all three lines are different units and are therefore on different scales on the Y-axis so ignore their relative amplitudes; X-axis is time):
If you want to measure the average power delivered to the resistor, you can just take the area under the green curve (units: $$P\times T=Energy$$) and divide it by time. Now this is a little overkill. Instead of taking the integral of the instantaneous power draw over time, you can just take the instantaneous current and multiply it by the instantaneous voltage to get the same result.
Let’s look at another case though. Here we have a current source driving a capacitor and we want to figure out the power drawn by the capacitor:
To explain what you’re seeing, just remember the formula for the charge stored in a capacitor also known as the “Home Shopping Network” formula.
$$\Large Q=V\times C$$
Where $$Q$$ is charge in units of Coulombs. If we look at how this changes per unit time, $$Q$$ becomes $$I$$ (charge per second) and $$V$$ becomes $$\frac{\Delta V}{\Delta T}$$, giving $$I=C\frac{\Delta V}{\Delta T}$$. So if we have a constant current ($$I$$), the rate of change of our voltage is also constant. Thus you get a straight line.
Now the important part of this plot is to note how voltage is not constant even though current is. They are out of phase. If you were to take the instantaneous voltage measurement at some point and multiply it by the current measurement like we did with the resistor, you would not get the correct result for average power draw.
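The straight-line ramp can be sanity-checked numerically (a sketch with made-up component values, not figures from this post): integrating a constant current into an ideal capacitor gives exactly the linear voltage rise described above.

```python
# Integrate dV/dt = I/C for a constant current; the voltage should ramp
# linearly to V = I*T/C. Component values are arbitrary illustrations.
I, C, T, steps = 1e-3, 1e-6, 1e-3, 10_000   # 1 mA into 1 uF for 1 ms
dt = T / steps
v = 0.0
for _ in range(steps):
    v += (I / C) * dt          # constant slope: a straight line
print(v)                       # analytic answer: I*T/C = 1.0 V
```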
Now let’s really blow your mind.
Let’s say after the capacitor reaches a certain voltage, you decide to reverse the current supply and drain the capacitor:
When you switch the current supply around, the instantaneous power draw goes negative and the capacitor turns from a power sink to a power source and starts feeding energy back into the current source.
The weird thing is that when you add up the area under that power curve (remembering that area below the X-axis is “negative area”), you’ll find that despite all the current and voltage moving around, the average power drawn by the capacitor is actually zero!
So Why is this Bad?
You might be thinking after seeing the above, "of course the average power is zero; the average current is zero and $$P=I\times V$$!" Remember that the average current and voltage in every AC circuit is always zero. That's why we don't measure average voltage or average current. We use RMS voltage and current.
If you don’t know what RMS is, I recommend reading my blog post on the matter. Basically, RMS is a way of measuring AC voltage and current that makes it easy to do power measurements. If you place an AC power supply across a resistor and multiply the RMS voltage across that resistor by the RMS current, you’ll get the power drawn by the resistor.
Unfortunately, this trick only works when your load is a resistor and your voltage and current always have the same sign. Think about that third plot above. If we were to replace that capacitor with a resistor, the power curve would never go negative. As soon as the current switches directions, $$V=IR$$ dictates that the voltage would switch sign as well:
A negative voltage times a negative current still gives you a positive power.
The relative sign of the voltage and current is what determines what kind of load you're dealing with. Something like a resistor is called…well… a resistive load. That means that it doesn't store any energy. A capacitor is known as a type of reactive load. That means that it can store energy and its voltage and current don't always have the same sign. A purely resistive load keeps voltage and current in phase and dissipates energy. A purely reactive load keeps them 90$$^{\circ}$$ out of phase and dissipates no energy. Some loads can be partially reactive and partially resistive which do a little of both.
Special considerations have to be made when measuring the power draw of a reactive load.
So Why Am I Dumb?
I’m dumb because I made the mistake of using RMS measurements to calculate the power drawn by a partially reactive load. To see what I mean, let’s say you feed an AC signal into a capacitor. Your current/voltage waveforms will look something like this:
Remember that:
$$\Large I=C\frac{dV}{dT}$$
So whenever the voltage is at a peak (slope zero) the current is zero, and vice versa. Now let’s add our power trace to our plot:
Every time the voltage and current have the same sign, the power is positive and every time they have opposite signs, the power is negative. Therefore, the average power (area under the curve) is zero.
When you use RMS, you take all signage out of the equation. There is no positive or negative RMS, there’s just RMS. This removes the critical phase information and is why using it to measure power only really works for resistive loads where voltage and current are always in phase and power is always positive.
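Here is a numeric sketch of that claim (my own illustration, using roughly the 120 Vrms / 50 mA figures from earlier in the post): with voltage and current 90° out of phase, RMS × RMS still reports a healthy apparent power, while the cycle-averaged real power integrates to zero.

```python
import math

# Sinusoidal v and i, 90 degrees out of phase (a purely reactive load).
N = 100_000
v_pk = 120.0 * math.sqrt(2)        # ~120 Vrms
i_pk = 0.050 * math.sqrt(2)        # ~50 mArms
dt = 2 * math.pi / N
real = sum_v2 = sum_i2 = 0.0
for k in range(N):
    t = k * dt
    v = v_pk * math.sin(t)
    i = i_pk * math.cos(t)         # 90 deg phase shift
    real += v * i * dt             # integrate instantaneous power
    sum_v2 += v * v * dt
    sum_i2 += i * i * dt
real /= 2 * math.pi                # average real power over one cycle
v_rms = math.sqrt(sum_v2 / (2 * math.pi))
i_rms = math.sqrt(sum_i2 / (2 * math.pi))
print(round(v_rms * i_rms, 2))     # apparent power: 6.0 VA
print(abs(real) < 1e-6)            # average real power essentially zero: True
```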
The zero-average power draw of a capacitor is actually a very real concern for large-scale energy transmission. If your load (household) is slightly reactive (resistor in series/parallel with a capacitor) so that voltage and current are slightly out of phase, you will end up drawing more current than you need to and spitting the extra back into the power grid just like how our capacitor fed energy back into the current source. This means that the power grid needs to be capable of handling large current draws (thick enough wires, etc) just to support delivering more current to you than you end up actually using. We call the power that a load dissipates the “real power” while the amount that it spits back into the grid the “reactive power”. Real power is useful and can do work, reactive power cannot.
Ideally all loads would be purely resistive so that the utility company could use the lowest capacity (and therefore cheapest) methods of delivering current to customers. As it is, they put regulations on what is called “Power Factor” which is basically just a way to measure how reactive your household is. If your power factor drops below a certain point (too reactive), you might be charged a higher rate.
So How Should I Have Done It?
What I was actually measuring in my circuit was not the real power but rather the “apparent power” which is a combination of real and reactive powers. Apparent power is what you get when you multiply RMS voltage by RMS current. Because apparent power doesn’t necessarily dissipate energy, it is measured in Volt-Amperes instead of Watts. The two units have the same dimensions, but they are interpreted differently.
The three types of power can be drawn like this:
Where $$\theta$$ is the phase angle between voltage and current. This makes sense when you think about it: If $$\theta$$=0, the apparent power equals the real power. This is the case with purely resistive loads where current and voltage are in phase and multiplying the RMS current and RMS voltage is still a valid way to determine real power. Likewise, if $$\theta$$=90$$^{\circ}$$ (like in my AC signal above), the load is purely reactive and the real power drops to zero.
Since I already have the apparent power (6VA), all I need to determine the real power is the phase angle between current and voltage. That should be pretty easy to measure!
To take this measurement, I put a 10$$\Omega$$ resistor in series with the EL panel. The voltage drop across this resistor gave me the current. I also put a resistor divider across both terminals of the EL panel to measure voltage. This was a 1:100 divider and was needed to drop the 120V output to something safer for my oscilloscope.
The results looked like this:
Where voltage is yellow and current is green. Just as expected, the voltage and current waveforms are not in phase. How far out of phase are they? I measured the time lag between the zero crossings as approximately 188$$\mu$$s while the period of the waveform was approximately 816$$\mu$$s.
To get the phase difference, we can just take the phase lag ratio and multiply it by 2$$\pi$$.
$$\Large \frac{188\mu s}{816\mu s}\times 2\pi \approx 1.44\ \text{radians}$$
So I have a phase lag of 1.44 radians. Using the right-triangle drawn above, the real power can be found using basic trigonometry:
$$\Large P_{apparent}\times \cos(\theta) = P_{real}$$
$$\Large 6\,VA\times \cos(1.44)=0.782\,W$$
So there you have it folks. My power supply takes in about 1.44W and outputs around 0.78W to the EL panel.
Now I admit that the voltage and current waveforms were not exact sinusoids, so this number isn’t 100% accurate. To get a better answer, I would have to calculate the instantaneous power at every point in the cycle and then integrate it over the period. I’m too lazy for that though, so as long as my result says that I’m not putting out more power than I’m taking in, I’m satisfied.
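The whole correction fits in a few lines. This sketch plugs in the scope measurements from above and treats the waveforms as ideal sinusoids (which, as noted, they aren't quite):

```python
import math

t_lag = 188e-6       # s, zero-crossing lag between voltage and current
period = 816e-6      # s, period of the waveform
p_apparent = 6.0     # VA, RMS voltage times RMS current

theta = (t_lag / period) * 2 * math.pi   # phase angle in radians
p_real = p_apparent * math.cos(theta)    # real power delivered to the load

print(f"theta = {theta:.2f} rad, P_real = {p_real:.2f} W")
```

Carrying the unrounded angle through gives a real power a few hundredths of a watt below the hand calculation with the rounded 1.44 rad; that difference is well inside the measurement slop.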
Conclusion
So my power supply went from 400% efficient to about 56% efficient. It’s not impossibly good, but it’s not that bad either.
This was a pretty fun and practical refresher on reactive power. It’s something that I’ve known about forever, but never had to deal with in real life so I wasn’t thinking about it when it popped up in front of me. Next time, I’ll be more cautious before using RMS to calculate power draw.
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebooks site to which my university hopefully has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Sandbox From PsychWiki - A Collaborative Psychology Wiki
To see how easy it is to edit pages, we have this
Sandbox:
Sandbox
Welcome to the PsychWiki Sandbox. Feel free to use this page to practice editing or for experiments. After you login, click on the "edit" tab at the top of the page, and scroll down to where it says "Start here". Note that content added here will not stay permanently.
START HERE:
I want to type something here.
asdf here
<math>\pi=\frac{3}{4} \sqrt{3}+24 \int_0^{1/4}{\sqrt{x-x^2}dx}</math>
headline1
text for headline1
headline2
text for headline2
headline3
text for headline3
asdf
Can anyone explain how frequency response relates to a transfer function?
A discrete-time linear time-invariant (LTI) system is defined by its impulse response, which can be expressed as a list of non-zero coefficients $c_n$ occurring at integer time indices $t_n$. To form its output $y[k]$, all the system can do is sum time-shifted and constant-multiplied copies of its input $x[k]$. An important class of input functions are complex exponentials that look, for example, like this:
If the input is a complex exponential: $$x[k] = az^k,$$ where $a$ and $z$ are complex constants and $k$ is the integer time index, then summation results in an output that is the same as the input multiplied by a constant $H(z)$: $$y[k] = \sum_n c_n x[k-t_n] = \sum_n c_n a z^{k-t_n} = \left(\sum_n c_n z^{-t_n} \right)a z^k = H(z)\,x[k].$$ In other words, complex exponentials are eigenfunctions of LTI systems. The constant $H(z)$ is called the transfer function. It is a function of the base $z$ of the complex exponential. If $|z|$ = 1, then the base is of form: $$z = e^{i\omega} = \cos(\omega) + i \sin(\omega),$$ where $i$ is the imaginary unit, and the system's input is a complex sinusoid of real frequency $\omega$ (with magnitude and phase embedded in the constant $a$): $$x[k] = a (e^{i\omega})^k = a e^{i\omega k} = a\left(\cos(\omega k) + i \sin(\omega k)\right)$$ Compared to other complex exponentials, complex sinusoids don't decay or increase in magnitude by time. Frequency response at frequency $\omega$ is simply the constant $H(e^{i\omega})$ by which the system multiplies a complex sinusoid input of frequency $\omega$.
By the inverse Fourier transform, one can go from the frequency response to the impulse response, and from the impulse response one can obtain the transfer function as shown above. So the transfer function of a discrete-time LTI system is fully defined by its frequency response.
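The eigenfunction property above is easy to verify numerically. A small sketch; the coefficients, frequency, and amplitude are arbitrary examples, not tied to any particular system:

```python
import numpy as np

c = np.array([0.5, 0.3, 0.2])   # example impulse-response coefficients c_n
t = np.array([0, 1, 2])         # their integer time shifts t_n

omega = 0.7                     # arbitrary frequency in radians/sample
z = np.exp(1j * omega)          # base of the complex sinusoid, |z| = 1
a = 2.0 - 0.5j                  # arbitrary complex amplitude/phase

H = np.sum(c * z ** (-t))       # transfer function H(z) = sum_n c_n z^{-t_n}

k = np.arange(20)               # integer time indices
x = a * z ** k                  # input: complex sinusoid x[k] = a z^k
y = np.convolve(x, c)[: len(k)] # system output (filter fills up after k = 2)

# In steady state, the output is the input scaled by H(z)
assert np.allclose(y[2:], H * x[2:])
```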
An LTI system's "frequency response" tells you how the system acts on the amplitude and phase of a sinusoidal input. If the frequency response is $H(f)$, then an input $x(t)=e^{j2\pi f_0t}$ produces an output $y(t)=|H(f_0)|e^{j(2\pi f_0t+\angle H(f_0))}$. It is common to divide the frequency response into two parts: the gain $|H(f)|$ and the phase $\angle H(f)$.
The same system's "transfer function" is defined as follows: if an input $x(t)$ produces an output $y(t)$, then the system's transfer function is $H(s)=\frac{Y(s)}{X(s)}$, where $X(s)$ and $Y(s)$ are the Laplace transforms of $x(t)$ and $y(t)$.
When the transfer function is evaluated at $s=j\omega$, it is also known as the system's frequency response. Note that the transfer function is more general than the frequency response, and can provide more insight into a system's behavior, for example about transient response or stability.
These definitions can also be extended to non-linear systems, but that is beyond my experience.
The more-geometrically-minded of us take $\cosh u$ and $\sinh u$ to be defined via the "unit hyperbola", $x^2 - y^2 = 1$, in a manner directly analogous to $\cos\theta$ and $\sin\theta$. Specifically, given $P$ a point on the hyperbola with vertex $V$, and defining $u$ as twice(?!) the area of the hyperbolic sector $OVP$, then $\cosh u$ and $\sinh u$ are, respectively, the $x$- and $y$-coordinates of $P$.
Just as in circular trig, we can assign measures $u$ (in "hyperbolic radians") to angles ---from the flat angle (when $u=0$) to half a right angle (when $u=\infty$)--- and associate those measures with the lengths of the corresponding $\cosh$ and $\sinh$ segments. And, just as in circular trig (prior to the advent of imaginary numbers), we might be forgiven for suspecting that the correspondences $u \leftrightarrow \cosh u$ and $u \leftrightarrow \sinh u$ are "non-arithmetical", which is to say: that no arithmetical formula converts angle measures to their associated trig values.
However, it turns out that the correspondences are not non-arithmetical; to find the appropriate arithmetical conversion formula, all we need is a bit of calculus ...
Edit. (Two years later!) Check the edit history for an inelegant argument that I now streamline with the help of this trigonograph, in which lengths from the unit hyperbola have been scaled by $\sqrt{2}$ (and, thus, areas by $2$):
Because the hyperbola is rectangular, we have that $|\overline{OX}|\cdot|\overline{XY}|$ is a constant (here, $1$), which guarantees that the regions labeled $v$ have the same area (namely, $1/2$), and therefore that the regions labeled $u$ have the same area (namely, $u$). Now, the bit of calculus I promised, to evaluate $u$ as the area under the reciprocal curve:$$u = \int_1^{|\overline{OX}|}\frac{1}{t}dt = \ln|\overline{OX}| \quad\to\quad |\overline{OX}| = e^{u} \quad\to\quad |\overline{XY}| = \frac{1}{e^u}$$With that, we clearly have$$2\,\sinh u \;=\; e^{u}- e^{-u} \qquad\qquad 2\,\cosh u \;=\; e^{u} + e^{-u}$$as desired. Easy-peasy!
End of edit.
That hyperbolic radians are defined via doubling the area of a hyperbolic sector may seem at odds with the common definition of circular radians in terms of arc-length, but it's hard to argue with success, given the elegance of the formulas above. Even so, the hyperbolic twice-the-sector-area definition can be seen as
directly analogous to the circular case, since circular radians are also definable as "twice-the-sector-area": In the unit circle, the sector with angle measure $\pi/2$ radians has area $\pi/4$ (it's a quarter-circle), the sector with angle measure $\pi$ radians has area $\pi/2$ (it's a half-circle), and the "sector" with angle measure $2\pi$ radians has area $\pi$ (it's the full circle); in these, and all other, cases, the angle measure is twice the sector area. |
So last time, I presented the equation that describes the position of a mass on a spring:
The equation is\[ {x}\hspace{0.33em}{=}\hspace{0.33em}{A}\hspace{0.33em}\sin\left({\frac{180t}{\mathit{\pi}}\sqrt{\frac{k}{m}}}\right) \]
where x is the position of the mass at a given time t in seconds, k is the spring constant, and m is the mass in kg. This was developed from the equation that describes the forces on the spring (gravity and the spring), and through calculus, out pops the equation above. This equation is the sine of stuff in brackets multiplied by a number A.
Even though the stuff in the brackets looks rather ominous, we are still just taking the sine of it and the sine only goes from -1 to 1. So the maximum extent of the mass is from −A to A. Now let’s look at the stuff in the brackets.
The 180 and 𝜋 are just there to change the rest of the expression so that you can press the sine button on your calculator in the “degrees” mode. Normally, when engineers model something like this, they use radians and not degrees. I have not explained what radians are yet so I’ve included an adjustment (the 180 and the 𝜋) so that you can continue to use degrees. I think I’ll explain radians in my next post.
The rest of the numbers, t, k, and m, are the real meat of the model. For simplicity, I have started time at 0 seconds when the mass is at its rest position and is moving upwards in the positive direction. So you would expect the position of the mass to change with time and that is what the t in the expression does. The k and the m determine how fast or how slowly the mass oscillates. Let’s actually use some numbers instead of letters here for a specific mass and spring.
Now let’s assume the spring has a spring constant of 1 kg/s² (I’ll discuss these units in my next post), and the mass connected to it is 1 kg. That means the stuff in the square root sign (called a radical) is just 1 and the square root of 1 is 1. And let’s further assume that I start the spring moving by stretching the spring 5 cm from its rest position. So now, the position equation above simplifies to\[
{x}\hspace{0.33em}{=}\hspace{0.33em}{5}\sin\left({\frac{180t}{\mathit{\pi}}}\right)
\]
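This simplified equation is a one-liner in code. A quick sketch; the 180/𝜋 factor and degrees-mode sine are kept exactly as in the text, with math.radians doing the degrees-mode conversion:

```python
import math

A, k, m = 5, 1, 1   # amplitude in cm, spring constant in kg/s^2, mass in kg

def position(t):
    # x = A * sin(180 t / pi * sqrt(k/m)), with the sine taken in degrees
    angle_degrees = 180 * t / math.pi * math.sqrt(k / m)
    return A * math.sin(math.radians(angle_degrees))

print(round(position(1), 4))   # about 4.21 cm above rest at t = 1 sec
```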
Starting at time 0, you can choose various values of t, compute the stuff in the brackets, use the “SIN” button on your calculator which is in the “degrees” mode, and then multiply by 5. So for example, at t = 1 sec, 180/𝜋 is 57.2958. Taking the sine of that gives 0.8415 and then multiplying that by 5 gives 4.2073 cm. So at 1 sec, the mass is 4.2073 cm above its resting position. You can plot this point and many others to graph this, or you can be lazy like me and use a graphing calculator. The graph of the position of the mass versus time for this scenario is
So no surprise, a sine wave. Now remember when I first talked about sine waves, I talked about the wavelength. Here I have indicated the wavelength as 6.28 sec. When dealing with time, the wavelength is called the period and is usually represented with the symbol T. The period is the length of time it takes for one full cycle of motion. So it takes the mass 6.28 seconds to make one complete bounce. It turns out that you do not have to graph the curve to find this:\[
{T}\hspace{0.33em}{=}\hspace{0.33em}\frac{{2}\mathit{\pi}}{\sqrt{\frac{k}{m}}}
\]
Since our k and m are each 1, T in this case is just 2𝜋. Funny how 𝜋 keeps cropping up. Again, I’ll explain that in my next post on radians.
Associated with the period is something called frequency. The period is how long it takes for one complete cycle to occur, whereas the frequency is how many complete cycles occur in 1 second. Frequency is the reciprocal of the period and vice versa. That is f = 1/T. So for our mass, the frequency is 1/6.28 or 0.16 cycles per second. The term “cycles per second” is given a special unit called hertz, which is abbreviated as Hz. You may have heard this term before.
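Both quantities drop out of the period formula above; a quick check with k = m = 1:

```python
import math

def period(k, m):
    return 2 * math.pi / math.sqrt(k / m)   # T = 2*pi / sqrt(k/m)

T = period(1, 1)   # seconds per cycle
f = 1 / T          # cycles per second (Hz)

print(round(T, 2), round(f, 2))   # 6.28 s and 0.16 Hz
```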
As you change the values of k and m, the values of T and f will change as well. If the spring gets stiffer (a higher k), you would expect the frequency to increase, that is, it will bounce faster. You would expect a heavier mass to slow down the frequency and it does. I will leave it as an exercise for the student to check this using a graphing calculator or Excel.
A good simulation on the web that shows the effect of changing mass and spring constant is at https://www.physicsclassroom.com/Physics-Interactives/Waves-and-Sound/Mass-on-a-Spring/Mass-on-a-Spring-Interactive. This sets up the graph a bit differently than I do here, but the frequency changes are easy to see. Also, you can add damping, which I did not include in this post to keep it simple, but you can play with that as well on this site.
In the Standard Model, the Higgs boson is expected to have spin 0 and even parity. I know the argument for spin 0, but how do I argue for the even parity? Could you give a simple and a more detailed explanation for this even-parity expectation?
One argument could be the Yukawa coupling, which is responsible for the coupling to the fermions.
In the Yukawa coupling term in the Lagrangian, $\mathcal{L}_{\text{Yukawa}}$, there are no terms that contain a $\gamma^5$ matrix, defined as $$\gamma^5 := i\gamma^0 \gamma^1 \gamma^2 \gamma^3$$ This publication states how terms in the Lagrangian transform under parity operation, namely (giving only the relevant information here) $$ \Psi \bar \Psi \quad\text{transforms as}\quad \text{scalar (parity = +1)}$$ $$ \Psi \gamma^5 \bar \Psi \quad\text{transforms as}\quad \text{pseudoscalar (parity = -1)}$$
and therefore the Yukawa coupling term gives a direct hint of the expectation $\text{P}(\text{Higgs})=+1$. However, as the Yukawa coupling theory could be the wrong model, experiments are supposed to check the parity sign, too.
Thanks go to my university professor for pointing that out in his script.
It is not an assumption; both $0^+$ and $0^-$ were considered as possible Higgs states. The angular distribution of decay products (like in $h\to ZZ$, $h\to WW$, $h\to f\bar{f}$, $h\to \gamma\gamma$ or in Higgstrahlung) is dependent on the parity of the Higgs particle. Alternatively, you can measure the helicities of the outgoing photons (in the $h\to\gamma\gamma$ case); the observed distribution is consistent with an even parity Higgs.
This workshop has a good overview.
Abbreviation: MALLA

A multiplicative additive linear logic algebra is a structure $\mathbf{A}=\langle A,\vee,\bot,\wedge,\top,+,0,\cdot,1,^\perp\rangle$ of type $\langle2,0,2,0,2,0,1\rangle$ such that
$\langle A,\vee,\wedge,\cdot,1,^{\perp}\rangle$ is a commutative involutive residuated lattice
$\bot$ is the least element: $\bot\le x$
$\top$ is the greatest element: $x\le\top$
$+$ is the dual of $\cdot$: $x+y=(x^\perp\cdot y^\perp)^\perp$
$0$ is the dual of $1$: $0=1^\perp$
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be multiplicative additive linear logic algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism.
An
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[...]] subvariety [[...]] expansion [[...]] supervariety [[...]] subreduct
There are already some rather good related answers regarding LTL versus CTL. In a nutshell, LTL is first and foremost a logic of traces, and an LTL formula is true for a transition system $S$ if and only if it is true for each trace of $S$. CTL, on the other hand, is a branching-time logic, which can in a sense talk about multiple paths at the same time.
One standard example here (not the one you give, about which more below) is a labelled transition system $S=(Q,T,q_0,L)$ with set of locations $Q=\{q_0,q_1,q_2\}$, set of transitions $T=\{(q_0,q_0),(q_0,q_1),(q_1,q_2),(q_2,q_2)\}$, and labelling given by $L(q_0)=L(q_2)=\{p\}$, $L(q_1)=\emptyset$. This system satisfies $FGp$, but not $AFAGp$, which can be seen as follows.
$FGp$ means that for every path $\pi=s_1,s_2,\dots$ in a given system, there is some point after which $p$ is always satisfied, i.e. there is some $i$ such that for all $j\geq i$, $p\in L(s_j)$. This is satisfied by $S$ since every path in $S$ either remains in $q_0$ forever (such that $p$ is always satisfied) or eventually gets to $q_2$ (after which $p$ is always satisfied).
On the other hand, $AFAGp$ means that every path $\pi=s_1,s_2,\dots$ eventually reaches a state satisfying $AGp$, i.e. a state such that on every path $\pi'$ starting there, $p$ is always satisfied. Formally, this means that there is an $i$ such that for all $\pi'=s_1',s_2',\dots$ with $s_1'=s_i$ and all $j$, we have $p\in L(s_j')$. But in $S$, for the path which always remains in $q_0$, the transition to $q_1$, where $p$ is not satisfied, is always available, so that at no point of that path $AGp$ holds; therefore $AFAGp$ is not satisfied by $S$.
As for your example, this is actually a case where two formulas are equivalent. This does happen. The proof is a little involved, but I can add it if you are interested.
What is the indefinite sum of the tangent function, that is, the function $T$ for which
$\Delta_x T = T(x + 1) - T(x) = \tan(x)$
Of course, there are infinitely many answers, which all differ by a function of period 1. Ideally, I would like the solution to be of the form
$T(x) = $ nice_function$(x)$ + possibly_ugly_periodic_function$(x)$, where nice_function is at least piecewise continuous.
If any of the following sums can be found, the sum of tan can also be found:
$\sum \sec x$
$\sum \csc x$
$\sum \cot x$
$\sum \frac{1}{e^{ix} + 1}$
I have tried several methods without success, including using a Newton series (which does not converge for non-integer $x$), and trying to guess possible functions.
I would also appreciate lines of attack if a solution is not known.
Halloween Season is a seasonal event in Cookie Clicker, that was added in the 1.039 update.
The event was on during versions 1.039 - 1.0403 without a way to turn it off.
The seasonal event upgrade was added in version 1.041.
Since 1.0466 update Halloween season starts automatically and lasts from 24th to 31st of October (from 23rd to 30th of October for leap years).
Halloween season will be activated by purchasing the Upgrade "Ghostly biscuit", which will launch it for 24 hours.
The upgrade becomes available after unlocking the Season switcher, which costs 1,111 Heavenly chips and requires at least 10 trillion cookies baked in the current game; the Season switcher lets you trigger seasonal events at will, for a price.
The upgrade is repeatable, but will get more expensive every time you buy it, like Elder Pledge. It will also cancel any other seasonal event that is on at the time you click it.
When purchased, it will unlock the features of the Halloween season: Halloween-related cookie upgrades, and Halloween Wrath Cookies (Also known as "Scary Cookies").
Wrath cookies take on a different appearance during the Halloween season.
Upgrades
List of the 7 known flavored cookies which may drop from Wrinklers during the Halloween season. Each flavored cookie has a base cost of 444,444,444,444 (444.444 billion) cookies.
Each Halloween cookie has a minimum 5% drop chance from popping a Wrinkler during the Halloween season, and gives "Cookie production multiplier +2%.":
- Skull cookies (ID 134): "Wanna know something spooky? You've got one of these inside your head RIGHT NOW."
- Ghost cookies (ID 135): "They're something strange, but they look pretty good!"
- Bat cookies (ID 136): "The cookies this town deserves."
- Slime cookies (ID 137): "The incredible melting cookies!"
- Pumpkin cookies (ID 138): "Not even pumpkin-flavored. Tastes like glazing. Yeugh."
- Eyeball cookies (ID 139): "When you stare into the cookie, the cookie stares back at you."
- Spider cookies (ID 140): "You found the recipe on the web. They do whatever a cookie can."
The current drop chance is 1-f, where f is the fail rate. The base fail rate is 0.95, or 0.8 if the achievement Spooky cookies is unlocked. There are several factors which can further reduce the fail rate.
The reduction factors are:
- Santa's Bottomless Bag: 1/1.1 = 0.9091
- Starterror: 0.9
- Shiny Wrinkler: 0.9
- Selebrak, Spirit of Festivities: 0.9 / 0.95 / 0.97 (Diamond / Ruby / Jade)
- Mind Over Matter: 1/1.25 = 0.8
- Green yeast digestives: 1/1.03 = 0.970874
- Garden plants (mature, dirt): 1/(1 + 0.01G + 0.01S + 0.03K)
G = # of Green Rot
S = # of Shimmerlily
K = # of Keenmoss
For example, with the Spooky cookies achievement and Santa's Bottomless Bag, the fail rate is: $ f=0.8/1.1=0.727272 $
If you popped a shiny wrinkler with Mind Over Matter, Selebrak in Diamond slot and four Keenmoss in the garden, the fail rate will be:
$ f=0.95*0.9/1.25*0.9/1.12=0.549643 $
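Since the factors simply multiply, both worked examples reduce to a couple of lines (the values are the ones from the factor list above):

```python
base = 0.95        # base fail rate
spooky = 0.8       # base fail rate with the "Spooky cookies" achievement

# Example 1: Spooky cookies achievement plus Santa's Bottomless Bag
f1 = spooky / 1.1

# Example 2: shiny wrinkler, Mind Over Matter, Selebrak in the Diamond slot,
# and four mature Keenmoss (garden factor 1/(1 + 0.03*4) = 1/1.12)
f2 = base * 0.9 / 1.25 * 0.9 / 1.12

print(round(f1, 6), round(f2, 6))
```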
Cookie Spawn Probabilities
Exploding a Wrinkler after it has begun to feed on the big cookie has a 5% chance of unlocking 1 of the 7 Halloween-themed cookie upgrades. If you have the Spooky cookies achievement, it increases to a 20% chance. However, if the particular upgrade chosen at random is already unlocked, it will not unlock a new one. So on each Wrinkler explosion, the actual chance of unlocking a new cookie type is equal to:
$ s \cdot \left(1-\frac{N}{7}\right) $
Where N is the number of upgrades already unlocked and s=1-f is the success rate (0.05 normally, 0.2 with "Spooky cookies", 0.136364 with "Santa's bottomless bag", and 0.272727 with both). The table below shows how the success rate (s) affects the expected value of Wrinkler explosions needed to unlock all 7 Halloween cookie upgrades.
For example, with the bare success rate (s=0.05), the expectation value is:
$ \frac{363}{20s}= 363 $
However, with "Santa's bottomless bag" and "Spooky cookies", s becomes 0.272727 and the expectation value drastically reduces to:
$ \frac{363}{20s}\approx 67 $
| Cookies unlocked | Probability formula | Base (s=0.05) | Spooky cookies (s=0.2) | Santa's bag (s=0.1364) | Cookies & bag (s=0.2727) |
|---|---|---|---|---|---|
| 0 | s·7/7 | 5% (1/20) | 20% (1/5) | 13.6% (1/7.3) | 27.3% (1/3.7) |
| 1 | s·6/7 | 4.29% (1/23) | 17.1% (1/5.8) | 11.7% (1/8.6) | 23.4% (1/4.3) |
| 2 | s·5/7 | 3.57% (1/28) | 14.3% (1/7) | 9.74% (1/10) | 19.5% (1/5.1) |
| 3 | s·4/7 | 2.86% (1/35) | 11.4% (1/8.8) | 7.79% (1/13) | 15.6% (1/6.4) |
| 4 | s·3/7 | 2.14% (1/47) | 8.57% (1/11) | 5.84% (1/17) | 11.7% (1/8.5) |
| 5 | s·2/7 | 1.43% (1/70) | 5.71% (1/18) | 3.90% (1/26) | 7.79% (1/13) |
| 6 | s·1/7 | 0.714% (1/140) | 2.86% (1/35) | 1.95% (1/51) | 3.90% (1/25) |
| Expected explosions to unlock all | 363/(20s) | 363 | 90 | 133 | 67 |
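The bottom row is a coupon-collector calculation: with N cookies already unlocked, the per-explosion success chance is s·(7−N)/7, so each new cookie takes 7/(s·(7−N)) explosions on average, and the total telescopes to 363/(20s) because 1 + 1/2 + … + 1/7 = 363/140. A sketch verifying this:

```python
from fractions import Fraction

def expected_explosions(s):
    # Sum of expected waits: with n cookies unlocked, success chance is s*(7-n)/7
    return sum(Fraction(7, 7 - n) / s for n in range(7))

# Closed form: 363/(20s)
assert expected_explosions(Fraction(1)) == Fraction(363, 20)

print(float(expected_explosions(Fraction(1, 20))))   # s = 0.05 -> 363.0
print(float(expected_explosions(Fraction(1, 5))))    # s = 0.2  -> 90.75
```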
A method to get all the cookie upgrades quickly is to wait for a few wrinklers to spawn, wait for an autosave ("Game Saved" popup), pop the wrinklers quickly, and - if no cookie upgrade appeared - quickly hit F5 to reload before the game is autosaved again - the wrinklers will be back and you can try popping them again.
Another way, similar to the first, is to open multiple tabs with wrinklers, and pop the wrinklers on each tab until you get a cookie upgrade. Always leave one tab open with wrinklers, so if all the other tabs fail to give you a cookie upgrade, you can manually save on the tab that still has the wrinklers, and repeat the process.
Achievements
List of known Achievements that can be unlocked during the Halloween season.
Icon | Name | Description | ID
 | Spooky cookies [note 1] | Unlock every Halloween-themed cookie | 108
Owning this achievement makes Halloween-themed cookies drop more frequently in future playthroughs.

Trivia

The Halloween Season was first confirmed on Orteil's tumblr.
The Ghost cookies' text references the Ghostbusters theme song.
The Bat cookies' text references the movie The Dark Knight.
The Eyeball cookies' text references a Friedrich Nietzsche quote.
The Spider cookies' text references the Spider-Man cartoon theme song, as well as containing a joke about the internet.
The prices of the cookies are four sets of fours because in some Asian languages (e.g., Mandarin Chinese, Japanese) the number 4 sounds like the word for death.
Commutator Subgroup and Abelian Quotient Group: Let $G$ be a group and let $D(G)=[G,G]$ be the commutator subgroup of $G$. Let $N$ be a subgroup of $G$. Prove that the subgroup $N$ is normal in $G$ and $G/N$ is an abelian group if and only if $N \supset D(G)$. Definitions. Recall that for any $a, b \in G$, the […]
Non-Abelian Simple Group is Equal to its Commutator Subgroup: Let $G$ be a non-abelian simple group. Let $D(G)=[G,G]$ be the commutator subgroup of $G$. Show that $G=D(G)$. Definitions/Hint. We first recall relevant definitions. A group is called simple if its normal subgroups are either the trivial subgroup or the group […]
A Condition that a Commutator Group is a Normal Subgroup: Let $H$ be a normal subgroup of a group $G$. Then show that $N:=[H, G]$ is a subgroup of $H$ and $N \triangleleft G$. Here $[H, G]$ is the subgroup of $G$ generated by commutators $[h,k]:=hkh^{-1}k^{-1}$. In particular, the commutator subgroup $[G, G]$ is a normal subgroup of […]
Two Normal Subgroups Intersecting Trivially Commute with Each Other: Let $G$ be a group. Assume that $H$ and $K$ are both normal subgroups of $G$ and $H \cap K=1$. Then for any elements $h \in H$ and $k\in K$, show that $hk=kh$. Proof. It suffices to show that $h^{-1}k^{-1}hk \in H \cap K$. In fact, if this is true then we have […]
Abelian Normal Subgroup, Intersection, and Product of Groups: Let $G$ be a group and let $A$ be an abelian subgroup of $G$ with $A \triangleleft G$. (That is, $A$ is a normal subgroup of $G$.) If $B$ is any subgroup of $G$, then show that \[A \cap B \triangleleft AB.\] Proof. First of all, since $A \triangleleft G$, the […]
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever."
Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field.
"You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. "
So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug with 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle so that the weight force on the bottle equals the buoyancy force.
For the buoyancy do I: density of water * volume of water displaced * gravity acceleration?
so: mass of bottle * gravity = volume of water displaced * density of water * gravity?
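The balance in that last line is right, and g cancels from both sides, leaving m = ρ_water · V_displaced. The bottle's volume isn't given in the post, so the value below is purely illustrative:

```python
rho_water = 1000.0   # kg/m^3
V_bottle = 60e-6     # m^3 -- assumed 60 mL bottle, NOT given in the post
m_current = 0.083    # kg, bottle + salt as weighed

# Neutral buoyancy: m*g = rho_water*V_displaced*g, so g cancels.
m_target = rho_water * V_bottle        # mass at which the forces balance
salt_to_remove = m_current - m_target  # kg of salt to take out
```

With the assumed 60 mL volume this gives a target mass of 60 g, i.e. remove 23 g of salt; the real answer depends on the actual bottle volume.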
@EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I would additionally like to control the width of the Poisson distribution (much like we can for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_C\,dC = 1?$$
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat, commonly used in the Mathematics chat room. An altern...
You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer.
Though as it happens I have to go now - lunch time! :-)
@JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the Earth.
Anonymous
Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P
I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic without increased reviewing, or something else; I'm not sure.
Not sure about that, but the converse is certainly false :P
Derrida has received a lot of criticism from experts in the fields he tried to comment on
I personally do not know much about postmodernist philosophy, so I shall not comment on it myself
I do have strong affirmative opinions on textual interpretation made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger.
I can see why a man of that generation would lean towards that idea. I do too. |
I'm currently reading some books on radiometry. They mention that radiance is constant along a ray; it doesn't change with distance. However, I've seen some raytracers that include a 1/r² factor when dealing with point sources, and I don't get why. I haven't found a good explanation on the Internet.
The concept of a point source is an approximation. Physically, light sources are extended objects and emit light from every point on their surface; but when you're far enough away (i.e. the distance to the source is large compared to its size) it's useful to approximate it as a point source.
You can get the $1/r^2$ law out of it as follows. Imagine a spherical area light with some radius $r_\text{light}$, and you're looking at it from a distance $r$ away. Then, we can approximate the solid angle that it subtends from your point of view as the area of a circle of radius $r_\text{light}/r$ (just using similar triangles). This area will be $\pi (r_\text{light}/r)^2$, so it's proportional to $1/r^2$.
Note that this approximation becomes exact in the limit $r_\text{light}/r \to 0$, i.e. when the light source is very far away or very small. It breaks down if the source is too large or too close.
If the source emits a constant radiance from every point on its surface, then when you integrate over solid angle in the rendering equation, you'll get a total irradiance proportional to $1/r^2$. In order to approximate it as a point source in a renderer, we skip the integration and just add an irradiance proportional to $1/r^2$ directly.
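The integration step described above can be checked numerically: integrate $L\cos\theta\,d\omega$ over the cone subtended by a spherical light and compare with the $\pi (r_\text{light}/r)^2$ point-source formula. This is my own sketch, not code from any particular renderer, and it models the source as a cone of uniform radiance $L$:

```python
import math

def sphere_irradiance(L, r_light, r, n=100000):
    # Integrate E = ∫ L cos(theta) dω over the cone of half-angle
    # asin(r_light / r) subtended by the sphere (midpoint rule in theta).
    theta_max = math.asin(r_light / r)
    dtheta = theta_max / n
    E = 0.0
    for i in range(n):
        t = (i + 0.5) * dtheta
        E += L * math.cos(t) * 2.0 * math.pi * math.sin(t) * dtheta
    return E

def point_irradiance(L, r_light, r):
    # Point-source approximation: E ≈ L * pi * (r_light / r)^2
    return L * math.pi * (r_light / r) ** 2
```

The two agree closely when $r \gg r_\text{light}$, and halving the distance quadruples both, which is the $1/r^2$ law.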
It is the inverse square law of light for a pure point light.
$E = \frac{I}{r^2}$
Where E is illuminance and I is pointance or power/flux per unit solid angle.
I'll give an intuitive idea of the reason in this answer. Once this intuitive idea is grasped, it can be easier to absorb the mathematical descriptions.
Other people find it easier the other way around, so look at all the answers and see which approach works for you personally.
A spherical shell of photons
Imagine a point light source. Picture an instant where it emits a million photons spread evenly in all directions. At that instant, they are all in the same position, at the central point. A moment later, they have all moved the same distance and are now arranged in a small sphere with the point at its centre. A short time later they are still arranged in a sphere, but now a much larger sphere.
As the sphere expands it always has the same number of photons, but they are spread out over the increasing area. Each photon has the same amount of energy it had when it first left the point source, but the photons are more spread out so a given area of the sphere now has less energy due to having fewer photons.
When a photon hits a surface, it adds the same amount of energy whether it has traveled 1 metre or 100 metres. The reason the surface looks dimmer when it is further from the light source is that the photons are more spread out across that surface.
Source to eye ray tracing
If you wrote a ray tracer that started with rays being emitted from a point light source, and then followed them to see what they hit, you wouldn't need the $1/r^2$ term. Objects further from the light would naturally be hit by fewer rays due to the rays spreading out.
Eye to source ray tracing
Most ray tracers don't start the rays from the light source, as this results in calculating the paths of all the rays that never reach the eye, which is very inefficient. Instead the rays start at the eye and are traced backwards, to see what surface they came from. If the ray was then bounced from that surface in a random direction to see if it hits the point light source, the fact that the light source is a point would make the probability of hitting it zero. So instead $1/r^2$ is used to give a measure of how many rays hit the surface.
Geometry of a point source
This isn't a property of light, it is a property of a point source. Light traveling in all directions from a point forms spherical shells of photons, and the surface area of a sphere increases in proportion to the radius squared.
If you had light being emitted that was not in all directions then the rule would be different. For example, imagine a line light source instead of a point, with all the light being emitted radially (only in directions perpendicular to the line). Now the light forms cylindrical shells of photons, and the surface area of a cylinder increases in proportion to the radius, not the radius squared. Now you would use a $1/r$ term instead of a $1/r^2$ term, and an object would need to be moved significantly further from the light source before seeing a noticeable drop in brightness.
In reality, nearly every light source is equivalent to a collection of point sources - every point on an area light source emits light in all directions. Even cylindrical lights like flourescent strip lights and neon signs still emit light in all directions, so the photons form spherical shells rather than cylindrical ones. So the reduction in light level will nearly always be with $1/r^2$.
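The shell-geometry argument above can be made concrete by comparing photon densities on spherical versus cylindrical shells; the function names are mine and the photon count is an arbitrary example:

```python
import math

def sphere_density(n_photons, r):
    # Photons per unit area on a spherical shell of radius r: falls as 1/r^2.
    return n_photons / (4.0 * math.pi * r ** 2)

def cylinder_density(n_photons, r, h=1.0):
    # Photons per unit area on a cylindrical shell of radius r and height h:
    # falls only as 1/r.
    return n_photons / (2.0 * math.pi * r * h)
```

Doubling the radius quarters the spherical density but only halves the cylindrical one, matching the $1/r^2$ vs $1/r$ behavior described above.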
Say the point light is at $P_L$, the shading is happening at $P_S$
It's true that the radiance is constant along a shadow ray $P_L \rightarrow P_S$, but that's not the key property for solving the rendering equation at $P_S$.
The rendering equation, somewhat simplified, is: $L_o(\omega_o) = L_e(\omega_o) + \int_{\Omega} \, f_r(\omega_i, \omega_o)\, L_i(\omega_i)\, (n \cdot \omega_i) \, d \omega_i $
While $L_o$ is expressed in radiance, you are actually integrating the irradiance $L_i$ of the incoming light at $P_S$, which is expressed in $W\,m^{-2}$. The incoming radiance -- call it $\hat{L}_i$ -- is in $W\, m^{-2}\, sr^{-1}$ and, while constant along the shadow ray, it is not directly relevant. The difference between $\hat{L}_i$ and $L_i$ is a $1/r^2$ term, since the area of 1 $sr$ increases quadratically with distance from $P_L$.
Thanks for your answers. That was helpful.
This is how I understand the 1/r² term for a point source (tell me if I'm wrong). Let's take the BRDF definition:
$$ L_o = \int f(\omega, \omega_o) \, dE$$
Now we have to answer this question: how is the irradiance E distributed? For one point source, we have: $$ dE = \delta(\omega_i-\omega)E \, d\omega $$ The irradiance is only coming from one direction (the point source). Therefore, we can simplify the equation:
$$ L_o = \int f(\omega, \omega_o) \delta(\omega_i-\omega)E \, d\omega = f(\omega_i, \omega_o) E$$ We can use the relation between the intensity I of the point source and E $$ E = cos(\theta_i)I/r^2 $$ Finally : $$ L_o = f(\omega_i, \omega_o) cos(\theta_i)I/r^2 $$ |
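The final formula is straightforward to put into code. Below is a hypothetical Lambertian point-light evaluation with $f = \text{albedo}/\pi$ (the function and parameter names are mine, not from any particular renderer):

```python
import math

def shade_point_light(albedo, I, cos_theta, r):
    # L_o = f(w_i, w_o) * cos(theta_i) * I / r^2, with a Lambertian
    # BRDF f = albedo / pi and I the intensity of the point source.
    f_r = albedo / math.pi
    return f_r * max(cos_theta, 0.0) * I / (r * r)
```

Doubling the distance r quarters the result, which is exactly the 1/r² behavior asked about in the question.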
Suppose a matrix $X\in\mathbb{R}^{n\times 3}$ is given as a Principal Component Analysis (PCA) projection from some high dimensional space. The 2D PCA solution on X, say $Y\in\mathbb{R}^{n\times 2}$ would simply correspond to the first two columns of $X$.
Now, suppose the configuration is shifted such that the origin corresponds to an arbitrary point. I want to mathematically state (via PCA) that by changing the origin of $X$ (3D data), the new 2D PCA projection $Y'\in\mathbb{R}^{n\times 2}$ simply corresponds to the first two columns of $X$ subject to a rigid transformation (i.e., rotation, reflection, shifting). The reason for the perhaps unnecessary complication is the PCA assumption of the configuration being centered at the origin. (In other words, I'm not sure if by getting rid of it, one might loosen the connection with PCA.)
To remind you, PCA would be obtained as $$Y=XU_S,$$ where $U_S\in\mathbb{R}^{3\times 2}$ contains eigenvectors of the correlation matrix $S=\frac{1}{n}X^TX$, with $X$ assumed centered at the origin. The origin change would correspond to $X'=PX$, where $P=(I-1_np^T)$ denotes the projector, $1_n=[1, \dots, 1]^T\in\mathbb{R}^n$, and $p^T1_n=1$. The new correlation matrix is $$S'=\frac{1}{n}((PX)^T(PX))=\frac{1}{n}X^TPX=\frac{1}{n}X^T(J-1_np_1^T)X,$$ where $P$ might be expressed as $P=J-1_np_1^T$, with $J$ the projector obtained for $p=\frac{1}{n}1_n$, i.e. $J=(I-\frac{1}{n}1_n1_n^T)$. So, I would like to state that $Y'=(PX)U_{S'}$ corresponds to a different 2D viewpoint on the primary $X$. The difficulty, in my interpretation, lies in the effects of the spectral decomposition of $S'$, and its possible effects on the rigid transformation of the primary $X$. Again, I apologize for the perhaps unnecessary complication.
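One ingredient of the setup above is easy to verify numerically: whenever $p^T 1_n = 1$, the matrix $P = I - 1_n p^T$ is idempotent, i.e. a (generally oblique) projector, and the special case $p = \frac{1}{n}1_n$ gives the symmetric centering projector $J$. A sketch with an arbitrary example weight vector:

```python
import numpy as np

n = 6
p = np.zeros(n); p[0] = 1.0                  # any p with p^T 1_n = 1 works
P = np.eye(n) - np.outer(np.ones(n), p)      # oblique projector from the question
J = np.eye(n) - np.ones((n, n)) / n          # centering projector (p = 1_n / n)
```

Here `P @ P` equals `P` and `J` is additionally symmetric, which is why centering (the usual PCA convention) is the one choice where $P^TP = P$ holds without extra assumptions.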
What should I do? Because I have two possibilities. I have ##0=5+at## so ##-5/t =a##. But then I can also say that the acceleration is a negative because it is stopping, so I can write it like ##0=5-kt## and then ##5/t =k##
The first doubt that comes to my mind is "I have to determine the acceleration with respect to what?", because the problem doesn't tell. Then, I have some problems when having to plug the data in the formula of acceleration. ##\vec a_B=0## because the origin isn't accelerated, ##\vec{\dot...
So my problem isn't actually finding the components, but knowing if the initial approach I took is correct. So what I did was:At first I found that at the same instant, ##x_{B/A}=10500 m## so then I wrote the equation of motion for plane B respect to A:so $$\vec a_{B/O}- \vec a_{A/O}=\vec...
So what I did was at first consider the case where the kid is below the branch, so that x=0, t=0. Then I thought that the length L of the rope should be ##L=2h## because we know the radius from the branch to the kid satisfies ##x^2+y^2=r^2##, and when x=0, y=h. So then I wrote the motion equations for the...
So I know that ##a_t = \frac{dv}{dt}=-ks## and ##\frac{dv}{dt}=v\frac{dv}{ds}##, then: $$v\, dv=-ks\, ds \rightarrow (v(s))^2=-ks^2+c$$ and using my initial conditions it follows that: $$(3.6)^2=c \approx 13$$ and $$(1.8)^2=13-5.4k \rightarrow k=1.8 \rightarrow (v(s))^2=13-1.8s^2$$ What bothers me is...
I tried to workout the problem but I find motion in different coordinates systems a bit weird at the moment, so only thing I could do is realise that the x component of ##\vec r(t)## is: $$vt +x_0$$ but for simplicity we will use the initial condition ##x_0=0## so that ##t_0## is the moment the...
So ##T+U=\frac{1}{2}m(\dot{x}^{2}+\dot{y}^{2})-mgy=constant##. If I derive this with respect to ##t##$$\dot{x}\ddot{x}+\dot{y}\ddot{y}-g\dot{y}=0$$Then I use ##\dot{y}=\dot{x}\frac{dy}{dx},\ddot{y}=\ddot{x}\frac{dy}{dx}+\dot{x}^{2}\frac{d^{2}y}{dx^{2}}##to get...
I've calculated the potential energy at the top of the halfpipe, before the boarder drops in:PE = 39.5 kg * 9.8 m/s^2 * 3.66 m = 1416 JSince the boarder would have no potential energy and all kinetic energy at the bottom of the halfpipe,KE = 1/2mv^2 = 1416 J1/2 (39.5 kg) (v^2) = 1416 JSo...
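The arithmetic in the post checks out; a quick script makes the energy balance explicit (note the mass cancels in the final speed, so it is just $v=\sqrt{2gh}$):

```python
import math

m, g, h = 39.5, 9.8, 3.66        # mass (kg), gravity (m/s^2), drop height (m)
PE = m * g * h                   # potential energy at the top, in J (~1416 J)
v = math.sqrt(2 * g * h)         # speed at the bottom; m cancels out
```

This gives a speed of about 8.5 m/s at the bottom of the halfpipe (assuming no friction, as in the post).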
So there are two cases:a). free fall (straight forward for me)b). ladder rotating and jumping off in last moment (I am interested in trying to understand this case)I believe I should take into account momentum at the time the man hits the ground in both cases? The smaller, the better. Or...
Hello. I have just started studying physics. Can someone explain to me how I can type in formulas here using TeX for nicer formatting? I suppose the force is F = ma. Question is: what is a? The starting throw angle is not mentioned; I suppose this task has to be related to gravity. All I know...
Hello,1) Suppose I throw a ball with a force ##F=ma##, the instant it leaves my hand, does it have the same acceleration ##a## added to it accelerations due to "ambient" forces (air resistance, gravity..)?2) If I am right about 1), doesn't my hand already carry the acceleration/deceleration...
I have a strange question. It's strange because I don't need a correct answer. I need an answer that seems correct and leads to predictable results. I'm making a multiplayer computer game where the players fire cannons in outer space. The cannon shells will move through the gravitational fields...
HiI am learning kinematics. Topics include inertia tensors, cosine matrices, quaternions, Euler angles etc... To learn these topics well, I want to try working on something a bit more difficult than just the underlying math. That will probably keep me more motivated.So, I am looking for...
1. A stone is thrown vertically downwards from the observation deck of the CN Tower, 400 m above the ground, with a velocity of 12 m/s [down]. A) How long will it take the stone to reach the ground? B) How fast will it be going when it hits the ground? I just got into physics, so I'm not...
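One way to check both parts, taking down as positive and assuming g = 9.8 m/s² with no air resistance (the post doesn't state a value for g):

```python
import math

h, v0, g = 400.0, 12.0, 9.8
# h = v0*t + (g/2)*t^2  =>  t = (-v0 + sqrt(v0^2 + 2*g*h)) / g
t = (-v0 + math.sqrt(v0 ** 2 + 2 * g * h)) / g   # time to reach the ground
v = v0 + g * t                                    # speed at impact
```

This gives roughly t ≈ 7.9 s and v ≈ 89 m/s.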
Hi, I need help with this problem:1. Homework StatementCondition: an object has to move from point A to point B in the least time possible. The distance between the points is L. The object can accelerate (decelerate) with a fixed acceleration ##a## or move with a constant speed.What maximum...
1. Homework StatementA baseball is hit into the air, nearly vertically, with a speed of 27 m/s. When it comes down, the ballplayer catches it in his glove. Air resistance (drag) is actually fairly important for a baseball in flight, but for now let’s assume it is negligible.(a) How high...
1. Homework StatementMy problem has two parts.1) We have two point masses ##m,M##. and there is another mass ##m_1## between them.They are all aligned in a line. Mass ##M## is moving with speed ##u_1## toward ##m_1## and after collision and all other masses are not moving. we want to find...
Hi, I'm currently taking Chemistry 101 and came across this equation that seems to contradict what I've learned before. I don't know the name of it, but here is the equation and its implication.Now another equation we have learned is the Arrhenius equation, which is as follows:If I...
I'm a little confused with the application of the laws of motion to a man climbing a rope. Suppose a man of mass M is climbing a rope with an acceleration a. The rope is massless. Now if I look through the frame of the piece of rope held by the man, there is a force Mg downward by the man, Ma downward...
1. Homework StatementTwo cars are facing each other on a long straight airport runway. They are initially separated by a distance of 1 km. Car A begins to accelerate towards the other car at a uniform 0.5 ms^-2. Ten seconds later car B begins to move towards the other car with a uniform...
1. Homework StatementAn electron and a proton are each placed at rest in an electric field of 687 N/C. What is the velocity of the electron 56.5 ns after being released? Consider the direction parallel to the field to be positive. The fundamental charge is 1.602×10−19 C, the mass of a proton is...
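For the electron part, the acceleration is a = qE/m and the speed after a time t is v = at; the excerpt only quotes the proton mass, so the standard electron mass is assumed below:

```python
q = 1.602e-19    # C, fundamental charge (given)
E = 687.0        # N/C, field strength (given)
t = 56.5e-9      # s, elapsed time (given)
m_e = 9.109e-31  # kg, electron mass (assumed standard value, not in the excerpt)

a = q * E / m_e  # magnitude of the electron's acceleration
v = a * t        # speed after 56.5 ns; the direction is opposite the field
```

This gives v ≈ 6.8 × 10⁶ m/s, negative in the problem's sign convention since the electron accelerates against the field direction.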
1. Homework StatementA flat cushion of mass m is released from rest at the corner of the roof of a building, at height h. A wind blowing along the side of the building exerts a constant horizontal force of magnitude F on the cushion as it drops as shown in the figure below. The air exerts no...
1. Homework StatementTwo blocks of mass m1 = 3.00 kg and m2 = 6.00 kg are connected by a massless string that passes over a frictionless pulley (see the figure below). The inclines are frictionless.Image - https://www.webassign.net/serpse9/5-p-049.gifThe inclined plane is shaped like a...
1. Homework StatementRaindrops fall from a broken gutter at 1.0 s intervals to the ground , 19.6 m belowNeglecting air resistance and taking g as 9.81 calculatethe time for each drop to reach the groundvelocity of the drops as they reach the groundspeed of drop at the height of 14.7 m...
1. Homework Statement"Based on the following data, determine if the driver who crashed was driving over the speed limit"The relevant data given is:-A 2002 Volvo t-bones a truck in an alley with a speed limit of 20km/h-An eyewitness heard tires screech, then a loud bang, and also said...
1. Homework StatementI'm simulating a warehouse (in Unity) where forklifts and other vehicles transport stuff from point A to point B. The routes are modeled as a graph with vertices and edges. Every transport is saved as a task in a backend system with the relevant data such as starting time... |
Van Berkel, C. and Lionheart, W. R. B. (2007)
Reconstruction of a grounded object in an electrostatic halfspace with an indicator function. Inverse Problems in Science and Engineering, 15 (6). pp. 585-600. ISSN 1741-5985
Abstract
This article explores the use of capacitance measurements made between electrodes embedded in or around a display surface, to detect the position, orientation and shape of hands and fingers. This is of interest for unobtrusive 3D gesture input for interactive displays, so called touch-less interaction. The hand is assumed to be grounded and formally the problem is a Cauchy problem for the Laplace equation in which Cauchy data on the boundary $\partial H$ (the display surface) is used to reconstruct the zero potential contour of the unknown object $D$ (the hand). The problem is solved with the so-called factorisation method developed for acoustic scattering and electrostatic problems. In the factorisation method, a test function $g_z$ is used to characterise points $z \in D \Longleftrightarrow g_z \in \mathcal{R}( \Lambda_D^{1/2} )$, in which $\Lambda_D : L^2(\partial H) \rightarrow L^2(\partial H)$ is the Dirichlet to Neumann map on the display surface. We demonstrate a suitable test function $g_z$ appropriate to the boundary conditions present here. In the application, $\Lambda_D$ is obtained from measurements at finite precision as a finite matrix and the calculation of $|| \Lambda_D^{-1/2} g_z||^2$ is implicitly regularised. The resulting level set $P(z)$ is finite and differentiable everywhere. The level representing the object $\partial D$ is found through minimising the cost function. Numerical simulations demonstrate that for realistic electrode layouts and noise levels the method provides good reconstruction. The application of explicit regularisation filters can be beneficial and allows a trade-off between resolution and stability.
Item Type: Article Uncontrolled Keywords: Laplace equation; Inverse boundary; Factorisation method; Linear sampling; Capacitance measurements; Interactive displays Subjects: MSC 2010, the AMS's Mathematics Subject Classification > 35 Partial differential equations
MSC 2010, the AMS's Mathematics Subject Classification > 41 Approximations and expansions
Depositing User: Ms Lucy van Russelt Date Deposited: 16 Nov 2007 Last Modified: 20 Oct 2017 14:12 URI: http://eprints.maths.manchester.ac.uk/id/eprint/888
I think I've figured this out. The point is that, the rigorous meaning one can draw from the formal covariance of $J^\mu$ is that the momentum-space coefficient functions of $J^\mu$ (i.e. the functions in front of monomials of $a_p$ and $a^\dagger_p$) transform covariantly under the change of variable $p\to \Lambda p$. The covariance of the coefficient functions is unaffected by normal ordering, and is sufficient to give rise to the covariance of $:J^\mu:$. The rest of this answer will be an elaboration of the first paragraph.
Let me first clarify the notations used and the meaning of the formal covariance of the ill-defined current $J^\mu$. I'm going to ignore the spin degrees of freedom in this discussion, but one should see the generalization to include spin only involves a straightforward (but perhaps cumbersome) change of notations. I'm also ignoring the spacetime dependence, that is to say I'm only considering the covariance of $J^\mu(0)$, and the generalization to $J^\mu(x)$ is straightforward and easy.
In the context of my question, $U(\Lambda)$ is defined as such that
$$U(\Lambda) a_{p} U^{-1}(\Lambda)=\sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}.$$
The covariance of $J^\mu$ must be understood in a very formal and specific sense, the sense in which the covariance is formally proved. For example, in the case of a fermionic bilinear:
$$U(\Lambda)J^{\mu}U(\Lambda)^{-1}=U\bar{\psi}\gamma^{\mu}\psi U^{-1}\\ =U\bar{\psi}_iU^{-1}(\gamma^{\mu})_{ij}U \psi_j U^{-1}=\bar{\psi}D(\Lambda)\gamma^{\mu}D(\Lambda)^{-1}\psi= \Lambda^{\mu}_{\ \ \nu}\bar{\psi}\gamma^{\nu}\psi, $$
where $D(\Lambda)$ is the spinor representation of Lorentz group, typically constructed via Clifford algebra. Note in this formal proof, what's important is that, under the change $a_{p}\to \sqrt{\frac{E_{\Lambda p}}{E_p}}a_{\Lambda p}$ (ignoring spin indices of course) the elementary field transforms as $\psi \to D(\Lambda)\psi$. In the proof, no manipulation of operator ordering and commutation relations ever occurs: all we do is to do a change of integration variable, and let the algebraic properties of the coefficient functions take care of the rest. In fact, we'd better not mess with the operator ordering, as it can easily spoil the formal covariance (example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d}p E_{p}(a_p^\dagger a_p+\delta(0))$, see my longest comment under drake's answer).
To explain what's going on in more details without getting tangled with notational nuisances, let me remind you again I'll omit the spin degrees of freedom, but it should be transparent enough by the end of the argument that it's readily generalizable to spinor case, since all that matters is that we know the coefficient functions(even with spin indices) transform covariantly. The mathematical gist is, after multiplying the elementary fields and grouping c/a operators (during the grouping no operator ordering procedure should be performed at all, e.g. $a^\dagger(p_1)a(p_2)$ and $a(p_2)a^\dagger(p_1)$ should be treated as two independent terms), a typical monomial term in $J^\mu(0)$ has the form
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\mu(\{p_i\}),$$
where $M$ is a monomial of c/a operators not necessarily normally ordered, but has an ordering directly from the multiplication of elementary fields.
The formal covariance of $J^\mu$ means
$$\Lambda^\mu_{\ \ \nu}\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(p_i), a(p_i)\})f^\nu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right)M(\{a^\dagger(\Lambda p_i), a(\Lambda p_i)\})f^\mu(\{p_i\})\\=\int \left(\prod\limits_{i=1}^{n}\text{d}q_i\right)\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right) M(\{a^\dagger(q_i), a(q_i)\})f^\mu(\{\Lambda^{-1}q_i\}) ,$$
where $\prod\limits_{i=1}^n {E_{\Lambda^{-1} q_i}}/{E_{q_i}}$ comes from the transformation of measure and $\prod\limits_{i=1}^{m}\sqrt{{E_{q_i}}/{E_{\Lambda^{-1} q_i}}}$ from the transformation of c/a operators in $M$. This is equivalent to
$$f^\mu(\{\Lambda^{-1}q_i\})\left(\prod\limits_{i=1}^n \frac{E_{\Lambda^{-1} q_i}}{E_{q_i}}\right) \left(\prod\limits_{i=1}^{m}\sqrt{\frac{E_{q_i}}{E_{\Lambda^{-1} q_i}}}\right)=\Lambda^\mu_{\ \ \nu}f^\nu(\{q_i\}).$$
The above equation makes completely rigorous sense since it's a statement about c-number functions. Obviously, this equation is sufficient to prove the covariance of the normal ordering
$$ \int \left(\prod\limits_{i=1}^{n}\text{d}p_i\right):M(\{a^\dagger(p_i), a(p_i)\}):f^\mu(\{p_i\}),$$
since on the operator part only a change of integration variable is needed for the proof.
So let's recapitulate the logic of this answer:
1. The current is only covariant when written in a certain way, but not in all ways. (recall the free scalar field Hamiltonian example: $H=\int \text{d}p\frac{1}{2}E_{p}(a_p a_p^\dagger+a_p^\dagger a_p)=\int \text{d} pE_{p}(a_p^\dagger a_p+\delta(0))$, which is formally covariant in the first form but not in the second form.)
2. In that certain way where the current is formally covariant, the formal covariance really means a genuine covariance of the coefficient functions.
3. The covariance of the coefficient functions is sufficient to establish the covariance of the normally ordered current. |
Moisture retention in porous materials

This article describes the mechanisms of moisture retention and common approaches for models and material functions, especially the sorption isotherms and the moisture retention capacity (MRC).

Introduction
Since porous building materials are capable of adsorbing water vapor present in the air, a correlation between the relative humidity in ambient air and the weight of a material sample can be determined by gravimetric analysis. The correlation corresponds to the moisture retention by the material: as water vapor penetrates into it, it is adsorbed on the pore surface, can condense in capillaries or can be chemically bound. The following discussion will solely consider the physical effects of capillary condensation.
Definitions
In discussing moisture retention processes, a differentiation between the hygroscopic and the super-hygroscopic region is made. Materials have low moisture content in the hygroscopic region, while they display higher moisture content in the super-hygroscopic region. It is not possible to clearly separate the two regions; for many building materials the boundary lies between 92% and 95% RH.
A possibility for separating the regions is by describing the dominant moisture transport mechanisms. In the hygroscopic region this is vapor diffusion, while in the super-hygroscopic region the capillary liquid water transport prevails. The latter is of particular interest for the redistribution and transport of salt solutions.
The equilibrium moisture content of the building material, determined gravimetrically, and its correlation to the moisture potential(s) (such as relative humidity or the liquid water pressure, i.e., the capillary pressure) can be determined experimentally.
Models for describing moisture retention Sorption isotherms
Within the hygroscopic region, the relationship between relative humidity and moisture content is usually given at constant temperature. This representation is called the
sorption isotherm, because it is determined experimentally through a sorption process. In a standard experimental setup a material is oven dried, weighed and then left for a certain time in an environment with a defined, constant relative humidity (e.g., in a desiccator). The sample is weighed at regular time intervals until it reaches a constant weight. The equilibrium moisture content $\theta_\ell$ at the given relative humidity $\phi$ gives one point $\theta_\ell\left(\phi\right)$ of the sorption isotherm. Several such points have to be determined to obtain the complete isotherm.

Moisture retention curve
The super-hygroscopic region mostly starts at about 92% RH. From the sorption isotherm representation, no details about the gradient of the function $\theta_\ell\left(\phi\right)$ can be obtained in this region. This is why another representation is chosen, the moisture retention curve (MRC). In the moisture retention curve, the moisture content is plotted against the capillary pressure of the pore solution. At saturation the capillary pressure reaches 0 Pa, which corresponds to 100% RH. If the material dries out, the capillary pressure increases rapidly (with negative sign). The relationship between capillary pressure and relative humidity is given by the Kelvin equation:

$\phi = e^{p_c / (\rho_w R_w T)}$
With this equation, the two representations of the moisture retention behavior of a material can be converted into one another.
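As a sketch of that conversion (the constants below are generic assumptions, not values from this article: water density about 1000 kg/m³, specific gas constant of water vapor about 461.5 J/(kg·K), and a room temperature of 293.15 K), the Kelvin equation and its inverse can be written as:

```python
import math

# Assumed constants (not from the article).
RHO_W = 1000.0   # density of water, kg/m^3
R_W = 461.5      # specific gas constant of water vapor, J/(kg*K)
T = 293.15       # temperature, K (20 degC)

def capillary_pressure(phi):
    """Invert the Kelvin equation: p_c = rho_w * R_w * T * ln(phi)."""
    return RHO_W * R_W * T * math.log(phi)

def relative_humidity(p_c):
    """Kelvin equation: phi = exp(p_c / (rho_w * R_w * T))."""
    return math.exp(p_c / (RHO_W * R_W * T))

# At the hygroscopic limit of ~92% RH, the capillary pressure is
# already on the order of -11 MPa (negative sign, as noted above).
p_c = capillary_pressure(0.92)
```

This makes concrete why the MRC representation is preferred near saturation: a small change in relative humidity there corresponds to an enormous change in capillary pressure.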
Abbreviation:
SymRel
A symmetric relation is a structure $\mathbf{X}=\langle X,R\rangle$ such that $R$ is a binary relation on $X$ (i.e. $R\subseteq X\times X$) that is
symmetric: $xRy\Longrightarrow yRx$
Let $\mathbf{X}$ and $\mathbf{Y}$ be symmetric relations. A morphism from $\mathbf{X}$ to $\mathbf{Y}$ is a function $h:X\rightarrow Y$ that is a homomorphism: $xR^{\mathbf X} y\Longrightarrow h(x)R^{\mathbf Y}h(y)$
Example 1:
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[Directed graphs]] supervariety |
So we are working on finding the probability of getting a hand with four aces in a 5-card poker game. To do this, we have to count the total number of possible poker hands. This turns out to be possible using the combination formula:\[C(n, r) = \frac{n!}{r!\,(n-r)!}\]
And in this case,\[C(52, 5) = \frac{52!}{5!\,(52-5)!} = \frac{52!}{5!\,47!}\]
But I ended my last post without calculating this, cautioning you not to evaluate it before doing some simplification. This is because 52! and 47! are HUGE numbers, and because of the limitations of many calculators you will get inaccurate results.
Well how do we simplify \[\frac{52!}{5!(47)!}\]?
Well notice that $52! = 52\times 51\times 50\times 49\times 48\times 47!$. In other words, you can always start counting down when writing an expanded factorial, but the remaining numbers are just the factorial of where you stopped. So now you can cancel the $47!$ in the numerator and the denominator:\[\frac{52!}{5!\,47!} = \frac{52\times 51\times 50\times 49\times 48\times 47!}{5!\times 47!} = \frac{52\times 51\times 50\times 49\times 48}{5!} = 2{,}598{,}960\]
That’s a lot of hands! Well that number is the denominator in our generic probability formula:
Probability = \[\frac{\text{Number of favorable outcomes}}{\text{Total number of possible outcomes}}\]
So what is the numerator? Well if you want all four aces in your hand, that leaves just one more card. There are 48 other cards that can be in your hand as the fifth card, so there are 48 possible ways to have four aces in your hand of five cards. So the probability of having four aces is\[\frac{48}{2{,}598{,}960} = \frac{1}{54{,}145} \approx 0.00001847\]
A very small probability! You may need to brush up on your bluffing skills. |
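The whole computation can be checked in a couple of lines (Python's `math.comb` implements the combination formula):

```python
from math import comb

total_hands = comb(52, 5)               # C(52, 5) = 2,598,960
four_aces = comb(4, 4) * comb(48, 1)    # all four aces, times any of the 48 other cards
p = four_aces / total_hands             # probability of a four-ace hand
```

Working with `comb` avoids the calculator-overflow problem entirely, since the huge factorials are never formed.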
Consider the following game of Bertrand (price competition):
There are two players, $1$ and $2$. Each has a publicly known marginal cost, $c_i$. A strategy is a price, $p_i\in\mathbb{R}$. Player $i$'s payoff (profit) is $\pi(p_i,p_j)=p_i-c_i$ if $p_i<p_j$, $\pi(p_i,p_j)=\frac{p_i-c_i}{2}$ if $p_i=p_j$ and $\pi(p_i,p_j)=0$ if $p_i>p_j$.
If $c_i=c_j=c$ the result is the straightforward Bertrand Nash equilibrium: both firms set $p_i=c$ and make zero profit. A higher price results in zero demand/profit; a lower price results in negative profits. There is therefore no profitable deviation.
Now suppose $c_1<c_2$. What is the Nash equilibrium?
I have previously contented myself with the intuition that the equilibrium is for firm 1 to take the whole market at a price 'slightly below' $c_2$. But looking at the technical details raises doubts in my mind:
We can't have an equilibrium with $p_1=c_2$. The best response for $2$ would be $p_2=c_2$, but then $1$'s profit is $(p_1-c_1)/2$ and $1$ can profitably deviate to $p_1=c_2-\epsilon$ for some small $\epsilon$. Nor can we have an equilibrium with $p_1=c_2-\epsilon<c_2\leq p_2$, because firm 1 could do better with $p_1=c_2-(\epsilon/2)$.
Do we conclude that the only equilibrium of this game is in mixed strategies? |
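Not an answer, but the undercutting logic above can be sketched numerically. On a discrete price grid (hypothetical costs, prices in integer cents) "one tick below $c_2$" is a well-defined best response; the difficulty only appears in the continuum limit, where no smallest undercut exists:

```python
# Hypothetical costs, prices in integer cents so the grid is exact.
c1, c2 = 100, 200  # marginal costs of firms 1 and 2

def profit1(p1, p2):
    """Firm 1's payoff under the Bertrand rules stated above."""
    if p1 < p2:
        return p1 - c1
    if p1 == p2:
        return (p1 - c1) / 2
    return 0

p2 = c2  # firm 2 prices at its own cost
# Firm 1's best response on the grid is one tick below c2:
best = max(range(c1, 301), key=lambda p: profit1(p, p2))
```

As the tick size shrinks, the best response keeps chasing $c_2$ from below without ever reaching a limit, which is exactly the non-existence problem the question identifies.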
I've been working on the following problem:
" Compute the Taylor series of the function $f(x)=\frac{1}{1-x}$ at $x=2$ and determine the radius of convergence $r$ "
I know that the given Taylor series is $\sum_{n=0}^{\infty} \frac{f^{(n)}(2)}{n!}(x-2)^n$, but I don't know how to determine the radius of convergence.
I have tried to find an expression for the Taylor series of $-\ln|1-x|$ at $x=2$ instead, because I was hoping to find a nice looking geometric series from which $r$ could easily be found. I have also tried to use the ratio test on $\sum_{n=0}^{\infty} \frac{f^{(n)}(2)}{n!}(x-2)^n$, but I don't know what to do with the derivatives. |
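A hedged sanity check of my working so far (assuming sympy is available): expanding $f$ around $x=2$ and comparing against the geometric series $-\sum_n \left(-(x-2)\right)^n$, which comes from rewriting $\frac{1}{1-x} = -\frac{1}{1+(x-2)}$:

```python
from sympy import symbols, simplify

x = symbols('x')
f = 1 / (1 - x)

# Expand around x = 2; the result should be the geometric series
# -1 + (x-2) - (x-2)**2 + ... in powers of (x-2).
s = f.series(x, 2, 5).removeO()
```

Since a geometric series $\sum q^n$ converges for $|q|<1$, this form would give $|x-2|<1$, i.e. the distance from the center $2$ to the singularity at $x=1$.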
Okay, I guess I can't just leave it at that, because apparently it's not obviously false.
Two other variations on this phrase: "the whole is more than the sum of its parts" and "the whole is not the sum of its parts". I bring these up because this makes the mathematical error twice as bad. $$whole > \sum parts$$ is the original expression, $$whole \neq \sum parts$$ is another.
To a philosophical reductionist this is absurd. A chair can only be the sum of its parts--there is nothing extra to the chair besides the fundamental particles and their vector fields that make it up. To further use that statement in a proud fashion, as if it contains knowledge, is further ridiculous. I wonder if any 20th century physicists thought, prior to Einstein's General Relativity, "Newton's equations aren't giving the right answer. The whole gravitational effect must be greater than the sum of the individual gravity effects!" No, you just didn't have the right equations.
(On the other hand, things like Dark "Matter" suggest we still don't. If I was a leading physicist with cred to throw around, I'd suggest the whole community of physicists go through the literature and start replacing terms like "Dark Matter" and "Dark Energy" and other such things that are placeholders for ignorance with the word "magic". Why is the universe expanding at an accelerating rate? Magic!)
"But Kevin, you don't get it, it's the macroscopic nature of the chair and the utility of it to us that make a completed Ikea chair more than just the chair's pieces!"
Huh, that's a good point, a constructed chair does seem like it's more than just the sum of its parts, that is, four legs and a square. (For a simple chair.) Is my pedantry so easily defeated?
No, because you're neglecting parts. The reason a completed chair feels "more" than just the four legs and square is because it is more. But that's because there are more parts! Those parts include, mostly, a human brain looking at the chair and comparing it to an imagined incomplete chair. The very notion that we humans have a sense of "a completed chair" vs. "an incomplete chair" suggests that we humans are parts of the whole when the whole being considered is "a completed chair" or "an incomplete chair". If you ever think the whole is more than the sum of the parts, you haven't accounted for all the parts. The key missing piece is likely your brain.
It's okay, this failure to notice yourself as a part of the whole plagued many geniuses of the early 20th century. It's the chief reason for confusion in quantum mechanics and continues to be the chief reason for confusion when modern students are taught it "in historical progression" instead of what the actual state-of-the-art-that-matters is. Everyone fails to notice themselves at some point.
"But Kevin, if you mix ketchup and mayo for a sauce, that sauce is more than the sum of its parts because it's more than just ketchup-flavor mixed with mayo-flavor, it has its own unique flavor!"
Another fine point. Not really. In this case it's not so much a failure of not accounting for all the parts--you've got ketchup, mayo, and a human tongue--but a failure of what it means to sum. We're not summing correctly. The hypothetical person whining at me is probably trying to sum up just the flavors, and that is a non-linear sum which is probably where the confusion is.
$$flavor(sauce) = flavor(ketchup + mayo) \neq flavor(ketchup) + flavor(mayo)$$
In contrast, the color sum is linear because red+white=pink and thus
$$color(ketchup) + color(mayo)=color(ketchup + mayo) = color(sauce)$$
But we're not summing individual flavors, we're not summing individual colors, we're summing ketchup and mayo.
$$ketchup + mayo = sauce$$
Which implies that for all properties that ketchup and mayo share, we sum $$property(ketchup+mayo)$$ and not $$property(ketchup) + property(mayo)$$. The exception is if it's indeed linear like the color case, but then it's only useful as a potential optimization technique, not a general solution. We like linearity a lot though and so it seems a lot of people assume it when they shouldn't. The sauce isn't different from the sum of its parts, you just didn't add up the parts correctly.
It's like saying "Well, I have here $$25$$, which has two parts, $$20$$ and $$5$$. $$25$$ is bigger than the sum of its parts because $$\cos(25) > \cos(20) + \cos(5)$$."
No, you idiot, you're summing wrong. It's "$$\cos(25) = \cos(20 + 5)$$". Cosine isn't a linear function.
Even worse: "$$\cos(25) > \cos(2) + \cos(5)$$". Not only are you summing wrong, you're summing the wrong parts! You've committed both sins!
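A two-line numeric check of the cosine claims above (radians assumed):

```python
from math import cos

# Radians assumed; the point is only that cosine is not additive.
whole = cos(25)                  # cos of the sum of the parts
parts_right = cos(20 + 5)        # summing the parts first, then applying cos: identical
parts_wrong = cos(20) + cos(5)   # applying cos to each part, then summing: misleading
```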
So in conclusion, this has been a pedantic moment. Stop using the phrase, it's incorrect and only proves you don't know what you're talking about. In order to help remove some confusion, look to see if you are summing correctly, and if the properties of something you are looking at form linear sums or not. If they do, you might not be accounting for some terms. If they aren't linear, then you might not be summing correctly.
Funnily enough, people who say the whole is more than the sum of its parts often do one of two things: they start reasoning about the whole as if they have knowledge of it anyway even though they know they lack knowledge that they don't think exists in their guess of the parts; alternatively, they start listing reasons the whole is greater than the parts without realizing that each reason is itself a part that should be added as well. Using the phrase should be a big flashing sign saying "I am ignorant of something." Which is fine, ignorance isn't a crime, but realize you're ignorant and if you can easily dispel that ignorance you probably should.
Posted on 2012-02-09 by Jach Tags: rant Permalink: https://www.thejach.com/view/id/236 |
Does $\text{exp}\bigg(\dfrac{\pi^2}{6 e^{\gamma}}\dfrac{\sigma(p_n\#)}{p_n\#}\bigg)$ bound $p_n$ from above?
(Note: From Daniel Fischer's answer here.)
Update
... and from below by $\log(p_n\#)$?
Electrochemistry: Electrochemical Series and Nernst Equation

Electrochemical series and its applications: the arrangement of the S.R.P. (standard reduction potential) values of various elements, in either increasing or decreasing order, is called the electrochemical series. E.g.:

(i) $E_{Mg^{2+}/Mg}^o = -2.37$ V — M
(ii) $E_{Zn^{2+}/Zn}^o = -0.76$ V — M
(iii) $E_{Cr^{3+}/Cr}^o = -0.74$ V — M
(iv) $E_{Fe^{2+}/Fe}^o = -0.44$ V — M
(v) $E_{H^{+}/H_{2}}^o = 0$ V → reference
(vi) $E_{Cu^{2+}/Cu}^o = +0.34$ V — M
(vii) $E_{I_{2}/I^{\ominus}}^o = +0.536$ V — NM
(viii) $E_{Ag^{+}/Ag}^o = +0.80$ V — M
(ix) $E_{Cl_{2}/Cl^{\ominus}}^o = +1.36$ V — NM

(1) An element with a low reduction potential acts as a strong reducing agent.
(2) An element with a high reduction potential acts as a strong oxidising agent.
(3) When two different electrodes are used to construct a cell with positive E.M.F., the electrode with the lower reduction potential is taken as the anode and the electrode with the higher reduction potential as the cathode.

Trick: in a chemical reaction, a metal with a low electrode potential always displaces another metal with a higher electrode potential.
Nernst equation:
(i) For a redox reaction:

$E_{cell}=E_{cell}^{o}-\frac{0.0591}{n}\log\frac{[\text{Product ions}]^x}{[\text{Reactant ions}]^y}$

where $x$ and $y$ are the stoichiometric coefficients (numbers of moles), $[\ ]$ denotes concentration, $n$ is the number of electrons transferred and $E_{cell}^{o}$ is the standard cell potential. Equivalently,

$E_{cell}=E_{cell}^{o}-\frac{0.0591}{n}\log Q$

where $Q$ is the reaction quotient.
1. For an electrochemical cell $aA+bB \longrightarrow cC+dD$:

$E_{cell}=E_{cell}^0-\frac{2.303RT}{nF}\log\frac{\left[C\right]^c\left[D\right]^d}{\left[A\right]^a\left[B\right]^b}$

The concentration of pure solids and liquids is taken as unity.
2. Nernst equation and $K_c$: at equilibrium $E_{cell} = 0$, so

$E_{cell}^0=\frac{0.0591}{n}\log K_{c}$ at 298 K, and $\Delta G^{0}=-nFE_{cell}^0$
3. Relationship between free energy change and equilibrium constant: $\Delta G^0 = -2.303RT \log K_c$
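As an illustrative sketch of the Nernst equation at 298 K (the Daniell cell below is a worked example, with $E^o$ built from the series values above; the concentration ratio is an arbitrary assumption):

```python
import math

def nernst(E0, n, Q):
    """Nernst equation at 298 K: E = E0 - (0.0591 / n) * log10(Q)."""
    return E0 - (0.0591 / n) * math.log10(Q)

# Daniell cell: Zn | Zn2+ || Cu2+ | Cu.
# E0 = E0(cathode) - E0(anode) = 0.34 - (-0.76) = 1.10 V, n = 2.
E0 = 0.34 - (-0.76)
E = nernst(E0, n=2, Q=0.1)  # assumed ratio [Zn2+]/[Cu2+] = 0.1
```

With $Q < 1$ the cell potential rises above $E^o$, and at $Q = 1$ it equals $E^o$, as the equation requires.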
This is an example from my text book of a continuous signal: $$x_{in}(t)=\sin \left( 2\pi \cdot 1000 \cdot t\right) + 0.5\sin \left( 2\pi \cdot 2000 \cdot t + \dfrac{3\pi}{4} \right) $$ How do I perform a Fourier transform on this signal? Isn't it a bit odd, since it has two sine components? Shouldn't complex numbers have a sine term and a cosine term? And there's a scalar applied to only one component; don't those usually apply across both terms of a complex component? And it's phase shifted; what do I do about that?
Fourier Transform is a linear one, so you can make use of superposition principle:
$$ \mathscr{F} [\alpha x(t) + \beta y(t)] = \alpha \mathscr{F}[x(t)] + \beta \mathscr{F}[y(t)] $$
So for the
first component $$x(t) = \sin \left( 2\pi \cdot 1000 \cdot t\right)$$
by
definition:
$$\mathscr{F}\left[\sin(2\pi f_0 t + \phi) \right] = \dfrac{i}{2} \left[ e^{-i \phi}\delta(f+f_0) - e^{i \phi}\delta(f-f_0) \right] $$
you get:
$$ \mathscr{F}[x(t)]=\dfrac{i}{2} \left[ \delta(f+1000) - \delta(f-1000) \right] $$
Second component is a sinusoid with shifted phase, so the complex exponent represents that:
$$y(t) = \dfrac{1}{2} \sin \left( 2\pi \cdot 2000 \cdot t + \dfrac{3\pi}{4} \right)$$
has following Fourier Transform:
$$\mathscr{F}[y(t)] = \dfrac{1}{2}\dfrac{i}{2} \left[ e^{\dfrac{-3\pi i}{4}}\delta(f+2000) - e^{\dfrac{3\pi i}{4}}\delta(f-2000) \right] $$
By summing both results you get the Fourier Transform of your signal. |
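To see the two spectral lines numerically, one can sample the signal and take a discrete Fourier transform. This is only an illustrative sketch (the 8 kHz sampling rate and 1 s duration are assumptions, not from the question), not part of the analytic derivation above:

```python
import numpy as np

fs = 8000                     # assumed sampling rate (Hz), above Nyquist for 2000 Hz
t = np.arange(0, 1, 1 / fs)   # one second of samples
x = np.sin(2*np.pi*1000*t) + 0.5*np.sin(2*np.pi*2000*t + 3*np.pi/4)

X = np.fft.rfft(x) / len(x)            # one-sided spectrum, normalized by length
f = np.fft.rfftfreq(len(x), 1 / fs)    # frequency axis in Hz

# The two largest magnitudes sit at 1000 Hz and 2000 Hz, with half the
# magnitude at 2000 Hz, matching the two delta pairs derived above.
peaks = f[np.argsort(np.abs(X))[-2:]]
```

The phase shift $3\pi/4$ only rotates the complex coefficient at 2000 Hz; it does not change the magnitude, which is why it does not appear in the peak heights.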
There are a number of solutions to this problem online that use identities I have not been taught. Here is where I am in relation to my own coursework:
$ \sin(z) = 2 $
$ \exp(iz) - \exp(-iz) = 4i $
$ \exp(2iz) - 1 = 4i \cdot \exp (iz) $
Then, setting $w = \exp(iz),$ I get:
$ w^2 - 4iw -1 = 0$
I can then use the quadratic equation to find:
$ w = i(2 \pm \sqrt 3 )$
So therefore,
$\exp(iz) = w = i(2 \pm \sqrt 3 ) $ implies
$ e^{-y}\cos(x) = 0 $, thus $ x = \frac{\pi}{2} $ $ ie^{-y}\sin(x) = i(2 \pm \sqrt 3 ) $ so $ y = -\ln( 2 \pm \sqrt 3 ) $
So I have come up with $ z = \frac{\pi}{2} - i \ln( 2 \pm \sqrt 3 )$ But the back of the book has $ z = \frac{\pi}{2} \pm i \ln( 2 + \sqrt 3 ) +2n\pi$
Now, the $+2n\pi$ I understand because sin is periodic, but how did the plus/minus come out of the natural log? There is no identity for $\ln(a+b)$ that I am aware of. I believe I screwed up something in the calculations, but for the life of me cannot figure out what. If someone could point me in the right direction, I would appreciate it. |
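For what it's worth, a quick numeric sanity check (not part of the original working) shows that both sign choices in my answer do satisfy $\sin(z)=2$:

```python
import cmath
from math import pi, log, sqrt

# Both candidate solutions z = pi/2 - i*ln(2 +/- sqrt(3)).
z_plus = pi/2 - 1j * log(2 + sqrt(3))
z_minus = pi/2 - 1j * log(2 - sqrt(3))
vals = [cmath.sin(z_plus), cmath.sin(z_minus)]
```

So whatever the discrepancy with the book's form is, it is a matter of rewriting, not of a wrong root.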
The standard proof, apparently due to Dedekind, that algebraic numbers form a field is quick and slick; it uses the fact that $[F(\alpha) : F]$ is finite iff $\alpha$ is algebraic, and entirely avoids the (to me, essential) issue that algebraic numbers are roots of some (minimal) polynomial. This seems to be because finding minimal polynomials is
hard and largely based on circumstance.
There are more constructive proofs which, given algebraic $\alpha$, $\beta$, find an appropriate poly with $\alpha \beta$, $\alpha + \beta$, etc., as a root -- but of course these are not generally minimal.
You would of course want an algorithm to compute such min. polies, but assuming this is unfeasible (as it seems), my question is a bit different:
Every algebraic number $\alpha$, $\beta$, $\alpha\beta$, $\alpha + \beta$, etc., has a
unique corresponding minimal polynomial, call it $p_{\alpha}(x)$, $p_{\beta}(x)$, $p_{\alpha \beta}(x)$, etc., and these polies have other roots, the conjugates, $\alpha_1$,...,$\alpha_n$, $\beta_1$,...,$\beta_m$, etc. Suppose I want to define an operation on this set of polies in the most naive way: $p_{\alpha}(x) \star p_{\beta}(x) = p_{\alpha\beta}(x)$. (Note that this is NOT the usual, direct multiplication of polies.)
But is this even well-defined? More specifically: suppose I swap $\alpha$ with one of its conjugates, $\beta$ with one of its conjugates, and multiply those together. Is the minimal polynomial of the new product the same as before? i.e. is this new product a conjugate of the old one? Meaning, I would need $p_{\alpha_1 \beta_1}(x) = p_{\alpha_i \beta_k}(x)$ for any combination of conjugates in order for this proposed operation to even make sense. And this seems unlikely -- that would be sort of miraculous right?
What about a similar operation for $\alpha + \beta$, $\alpha - \beta$, etc?
More broadly, given two algebraic numbers $\alpha$, $\beta$, I'm interested in the set of minimal polynomials corresponding to those algebraic numbers which can be generated by performing the field operations on $\alpha$, $\beta$ -- call this the "set of minimal polies attached to the number field" or something -- and if a similar field (or even just ring) structure can be put on these polies by defining appropriate operations on them. (Not the usual operations, which will clearly give you polies with roots outside of your number field.) I'm ultimately after questions like:
(1) How do the conjugates of $\alpha \beta$, $\alpha + \beta$, etc., relate to the conjugates of $\alpha$ and $\beta$?
(2) How do the coefficients of the min. polies of $\alpha \beta$, $\alpha + \beta$, etc., relate to those of the min. polies of $\alpha$ and $\beta$? Obviously, the algebraic integers form a ring; what else can be said?
(3) Degree?
It may be impractical to calculate any one such min. poly explicitly, but maybe interesting things can be said about the collection as a whole? |
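For concrete experimentation, sympy's `minimal_polynomial` (assuming sympy is available) computes minimal polynomials of sums and products of algebraic numbers. In this small example, swapping $\alpha=\sqrt{2}$ for its conjugate $-\sqrt{2}$ happens to land on the same quartic for the sum, which is exactly the kind of behavior the question asks about in general:

```python
from sympy import sqrt, minimal_polynomial, symbols

x = symbols('x')
a, b = sqrt(2), sqrt(3)

p_sum = minimal_polynomial(a + b, x)    # quartic in x
p_prod = minimal_polynomial(a * b, x)   # min poly of sqrt(6)
p_conj = minimal_polynomial(-a + b, x)  # swap alpha for a conjugate
```

Whether `p_conj == p_sum` in this symmetric case generalizes is, of course, the open part of the question; the snippet only makes it easy to hunt for counterexamples.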
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ... |
One alternative is to plot several points along the circle using Cartesian coordinates. This has already been suggested, but it can be generalized to deal with many kinds of obstacles.
If you
did have free access to all points in the interior of the circle, you could set up your Cartesian coordinate system by laying two straight reference lines perpendicular to each other to form two diameters of the circle, like so:
For a circle of radius $r$, you could make a table of $x$ and $y$ values using the formulas $x = r \cos \theta$ and $y = r \sin \theta$ for a sequence of angles. For each pair $(x,y)$, you measure a distance $x$ along one of your reference lines to find point $A$ and a distance $y$ along the other reference line to find point $B$. Then attach a string of length $x$ at $B$ and a string of length $y$ at $A$ and extend the two strings taut so that they meet at $C$, which is a point on the circle.
After plotting several points regularly spaced around the circle, you can use a curved template (constructed elsewhere) to mark the arc of the circle between each pair of points. The number of points you need to plot is a function of how long you can make your template relative to the radius of the circle. For example, if you can build a template that is slightly more than a $10$-degree arc of the circle, you only have to use a table where the angle $\theta$ is given in $10$-degree increments.
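The coordinate table described above can be sketched as follows (the radius and the $10$-degree increment are just example choices):

```python
from math import cos, sin, radians

# Example coordinate table for plotting points on the circle.
r = 10.0  # feet (hypothetical radius)
table = [(theta, r * cos(radians(theta)), r * sin(radians(theta)))
         for theta in range(0, 360, 10)]

# With reference lines offset a feet and b feet from the center (as in
# the generalization below), the measured distances along the lines
# become |x - a| and |y - b| instead of x and y.
```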
Now to generalize: the two reference lines do not need to be diameters. You can offset one or both of the lines from the center of the circle, and as long as they are perpendicular to each other, you merely need to add or subtract the amount of each offset from the relevant coordinate. For example, let's move one of the reference lines $a$ feet from the center and the other reference line $b$ feet from the center. The result looks like this:
Now to plot the point that was at $(x,y)$ in your original table of coordinates, you place points $A$ and $B$ at distances $|x-a|$ (or $x+a$) and $|y-b|$ (or $y+b$) along the reference lines, measured from where the lines cross, and then measure the same distances from the points $A$ and $B$ in order to find a point $C$ on the circle.
Now, for example, to deal with the shed in the middle of the circle, you lay your reference lines along two sides of the shed. The distances $a$ and $b$ in this case are just half the dimensions of the shed. You can then plot more than half the circumference of the circle. To plot the remaining part of the circle, lay reference lines along the other two sides of the shed.
For the circle cut off by the neighbor's fence, lay one reference line along the fence (or use the fence itself if it is straight enough) and lay the other reference line perpendicular to the first. In this case you only need one of the lines (the one parallel to the fence) to be offset from the center, which simplifies the task; you can plot two symmetric points using each $(x,y)$ pair.
For the patio at the edge of the pool you might find it convenient to lay the two perpendicular reference lines so that they are tangent to the circle you want to plot. That is, the offsets are $a = r$ and $b = r$.
For the planters, you could (in the example shown) lay the two reference lines just a bit off-center while staying clear of all the planters, but then you might not be able to plot as many points as you might like (because planters would interfere with the lines from $A$ or $B$ to $C$). You might find it easier to lay out a square larger than the circle, but with the same center, and use adjacent sides of that square as reference lines for the various parts of the circle. That is, you can let $a = b > r$. But in the example shown, there is one arc of the circle that cannot be plotted by any pair of sides of the square, so you would have to lay another reference line between the planters, perpendicular to the nearby side of the square, in order to plot that arc.
By the way, for a circular curved driveway of uniform width, if you make templates for both the inner and outer radii of the driveway and attach them to a rigid frame so that their arcs are concentric (perhaps by attaching plywood "blanks" to the frame, drawing the arcs on them from a common center point, and then cutting the arcs), you need only plot points along one edge of the driveway; every time you use the template to fill in the points along that arc, you can use the other side of the template to fill in points along the other arc of the driveway. |
Here we want to give an easy mathematical bootstrap argument why solutions to the time independent 1D Schrödinger equation (TISE) tend to be rather nice. First formally rewrite the differential form$$-\frac{\hbar^2}{2m} \psi^{\prime\prime}(x) + V(x) \psi(x) ~=~ E \psi(x) \tag{1}$$into the int...
[Some time travel comments] Since in the previous paragraph we explained how travelling to the future will not necessarily land you in the future that would have resulted had you never time travelled (via the twin paradox), why does the past you travel back to have to be the past you learnt from historical records?
@0ßelö7 Well, I'd omit the explanation of the notation on the slide itself, and since there seem to be two pairs of formulae, I'd just put one of the two and then say that there's another one with suitable substitutions.
I mean, "Hey, I bet you've always wondered how to prove X - here it is" is interesting. "Hey, you know that statement everyone knows how to prove but doesn't bother to write down? Here is the proof written down" significantly less so
Sorry I have a quick question: For questions like this physics.stackexchange.com/questions/356260/… where the accepted answer clearly does not answer the original question what is the best thing to do; downvote, flag or just leave it?
So this question says express $u^0$ in terms of $u^j$ where $u$ is the four-velocity, and I get what $u^0$ and $u^j$ are, but I'm a bit confused how to go about this one. I thought maybe using the space-time interval and evaluating for $\frac{dt}{d\tau}$ but it's not working out for me... :/ Anyone give me a quickie starter please? :p
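Not a full answer, but a standard opening move (assuming the $(+,-,-,-)$ metric signature and $c=1$ — my guess at the intended convention) is the four-velocity normalization rather than the interval directly:

```latex
u^\mu u_\mu = (u^0)^2 - \delta_{jk}\,u^j u^k = 1
\qquad\Longrightarrow\qquad
u^0 = \sqrt{1 + \delta_{jk}\,u^j u^k}\,.
```

Restore factors of $c$ (i.e. $u^\mu u_\mu = c^2$) if the course keeps them explicit.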
Although a physics question, this is still important to chemistry. The delocalized electric field is related to the force (and therefore the repulsive potential) between two electrons. This in turn is what we need to solve the Schrödinger Equation to describe molecules. Short answer: You can calculate the expectation value of the corresponding operator, which comes close to the mentioned superposition. — Feodoran, 13 hours ago
If we take an electron that's delocalised w.r.t position, how can one evaluate the electric field over some space? Is it some superposition or a sort of field with all the charge at the expectation value of the position?
@0ßelö7 I just looked back at chat and noticed Phase's question, I wasn't purposefully ignoring you - do you want me to look over it? Because I don't think I'll gain much personally from reading the slides.
Maybe it's just me having not really done much with Eigenbases but I don't recognise where I "put it in terms of M's eigenbasis". I just wrote it down for some vector v, rather than a space that contains all of the vectors v
Honey, I Shrunk the Kids is a 1989 American comic science fiction film. The directorial debut of Joe Johnston, produced by Walt Disney Pictures, it tells the story of an inventor who accidentally shrinks his and his neighbor's kids to a quarter of an inch with his electromagnetic shrinking machine and then unknowingly throws them out with the trash; they must venture across the backyard to return home while fending off insects and other obstacles. Rick Moranis stars as Wayne Szalinski, the inventor who accidentally shrinks his children, Amy (Amy O'Neill) and Nick (Robert Oliveri). Marcia...
It's a little unclear what you're asking here (the Wikipedia page is for ordinary covariance but the question relates to covariance matrices), but I'll try to answer the question with regard to unbiasedness.
Assuming your covariance matrices are computed from the samples $\{ \textbf{a}_i \}_{i=1}^{N_a}$ and $\{ \textbf{b}_j \}_{j=1}^{N_b}$, the usual definition for the sample covariance matrix is$$ \textbf{C}_a = \frac{1}{N_a - 1} \sum_{i=1}^{N_a} ( \textbf{a}_i - \bar{\textbf{a}}) ( \textbf{a}_i - \bar{\textbf{a}})^T, $$and similarly for $\textbf{C}_b$. Note that the denominator $N_a - 1$ makes the sample covariance matrix unbiased: $E[\textbf{C}_a] = Cov(\textbf{a}_i )$.
With this is mind, if you now compute the expected value of your proposed combined covariance you get:$$ E[\textbf{C}_x] = \frac{N_a}{N_a + N_b} Cov(\textbf{a}_i) + \frac{N_b}{N_a + N_b} Cov(\textbf{b}_j).$$By itself this is of little use but if we furthermore assume that the two samples come from populations with equal covariance matrices (as is often done, see e.g. Hotelling's $T^2$ test), that is, $ Cov(\textbf{a}_i) = Cov(\textbf{b}_j) = \boldsymbol{\Sigma}$, we then have$$E[\textbf{C}_x] = \boldsymbol{\Sigma}. $$Thus now $\textbf{C}_x$ is unbiased for the common population covariance and what you proposed is indeed the ''correct'' way of combining the two estimators. |
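Numerically, the combination that the expectation formula above corresponds to is $\textbf{C}_x = (N_a\textbf{C}_a + N_b\textbf{C}_b)/(N_a+N_b)$. A small NumPy sketch (variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
Na, Nb, d = 40, 60, 3

# Two samples drawn from populations with a common covariance (the
# assumption under which the combined estimator is shown to be unbiased).
A = rng.normal(size=(Na, d))
B = rng.normal(size=(Nb, d))

# np.cov with rowvar=False uses the unbiased 1/(N-1) normalization,
# matching the definition of the sample covariance matrix above.
Ca = np.cov(A, rowvar=False)
Cb = np.cov(B, rowvar=False)

# Sample-size-weighted combination from the answer.
Cx = (Na * Ca + Nb * Cb) / (Na + Nb)

print(Cx.shape)               # (3, 3)
print(np.allclose(Cx, Cx.T))  # a covariance matrix is symmetric
```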
If $n^3 < |a_n| < n^4$, find the radius of convergence of $\sum_{n=2}^\infty a_nx^n$.
Could someone explain how he got inequality (1)? Theorem 4.1 stated that a power series converges if $|x| <R$.
The radius of convergence of the series $\sum_n b_nx^n$ is the supremum of the set $$\{t\geqslant 0, \sup_{n\geqslant 0}|b_n|t^n<+\infty\}.$$ Since $$\{t\geqslant 0, \sup_{n\geqslant 0}n^4t^n<+\infty\}\subset \{t\geqslant 0, \sup_{n\geqslant 0}|a_n|t^n<+\infty\} \subset \{t\geqslant 0, \sup_{n\geqslant 0}n^3t^n<+\infty\},$$ the conclusion follows: both outer sets equal $[0,1)$ (the sequence $n^kt^n$ is bounded exactly when $0\leqslant t<1$), so the radius of convergence is $1$.
Suppose we have a smooth manifold $M$ and a vector bundle $E \to M$. A connection on $E$ is a linear map from the set of all smooth sections of $E$ into the set of smooth sections of the tensor product of $E$ and the cotangent bundle of $M$, satisfying a certain condition. Here is the question: we can form the tensor product of two vector bundles over the same base space, here the smooth manifold $M$. But how does one define a section of this new vector bundle?
closed as not a real question by Steven Landsburg, Andreas Blass, Deane Yang, Misha, Steven Sam Mar 22 '13 at 15:55
If $s$ is a smooth section of a vector bundle $V$ and $t$ is a smooth section of a vector bundle $W$ then $s \otimes t$ is a smooth section of $V \otimes W$. Its value at each point $m \in M$ in the underlying manifold $M$ is $(s \otimes t)(m)=s(m) \otimes t(m)$. I am still not sure if that is what you are looking for. This has nothing to do with connections, so I don't see why you mention connections in your question. If $s_1, s_2, \dots, s_p$ is a basis of local smooth sections of $V$ over an open set $U$ (i.e. these sections are defined on $U \subset M$ and every local smooth section defined on $U$ is a unique linear combination $\sum_i f_i s_i$ with $f_i$ smooth functions) and similarly $t_1, t_2, \dots, t_q$ is a basis of local sections of $W$ over the same open set $U$, then $s_i \otimes t_j$ for $i=1,2,\dots,p$ and $j=1,2,\dots,q$ is a basis of local smooth sections of $V \otimes W$ over $U$. So that should explain what all of the sections look like, I hope. |
I would like to align the first "+" in the third and fourth lines of the following code. In the third line, I have "= 3 + [a mess]" and in the fourth line, I have "+ [another mess]" - I don't have the "3" in the fourth line. I used the \hphantom command to account for the horizontal space of the "= 3" in the third line, but there is a small space - the space that surrounds either side of a binary operator, in this case a plus sign - that still needs to be inserted. (I am using 10pt font in amsart.) Here is the code:
\begin{align*}
&\vert xy + xz + yz \vert^{2} \\
&\qquad = [\cos(a + b) + \cos(a + c) + \cos(b + c)]^{2} + [\sin(a + b) + \sin(a + c) + \sin(b + c)]^{2} \\
&\qquad = 3 + [2\cos(a + b)\cos(a + c) + 2\cos(a + b)\cos(b + c) + 2\cos(a + c)\cos(b + c)] \\
&\qquad \hphantom{= 3} + [2\sin(a + b)\sin(a + c) + 2\sin(a + b)\sin(b + c) + 2\sin(a + c)\sin(b + c)] .
\end{align*}
I tried using a \phantom command with a \mathbin{+} command, but that added the horizontal space of the plus sign to the fourth line. I tried to put another align environment within the given align environment.
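One remedy worth trying (I'm inferring the setup from the posted code, so treat this as a suggestion, not a definitive fix): an empty group `{}` supplies a left-hand atom, so the `+` is guaranteed binary-operator spacing and the `=` inside the phantom picks up relation spacing, matching the visible "= 3 +" in the line above:

```latex
% Fourth line of the align* environment:
&\qquad \hphantom{{}= 3} {}+ [2\sin(a + b)\sin(a + c)
    + 2\sin(a + b)\sin(b + c) + 2\sin(a + c)\sin(b + c)] .
% {}+ forces the plus to be treated as a binary operator (it would
% otherwise be spaced differently with no atom to its left), and
% {}= lets the phantom include the spacing around the relation.
```

If the mismatch persists, comparing the two lines with `\showboxdepth`/`\showboxbreadth` diagnostics can reveal exactly which muskip is missing.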
Kinetic & Related Models
ISSN: 1937-5093, eISSN: 1937-5077
March 2014, Volume 7, Issue 1
Issue on analysis of non-equilibrium evolution problems: Selected topics in material and life sciences
Abstract:
Robert Glassey (Bob) has decided to withdraw from the Editorial Board of KRM. Bob has been a pioneer of mathematical kinetic theory in the 80's and one of its leading figures since then. His seminal papers on the relativistic Vlasov-Poisson and Vlasov-Maxwell systems and their asymptotic stability have been a source of inspiration for many of us. His book, `The Cauchy problem in kinetic theory' has become a must for all young researchers entering the field. Bob has been involved in the Editorial Board of KRM since the beginning of the journal and has contributed to the edition of many papers. On behalf of the whole editorial board, we express our deep gratitude to him for having joined us in this adventure and contributed to the success of the journal.
Abstract:
This paper considers a kinetic Boltzmann equation, with a general type of collision kernel, modelling spin-dependent Fermi gases at low temperatures. The distribution functions take values in the space of positive Hermitian $2\times2$ complex matrices. Global existence of weak solutions is proved in $L^1\cap L^{\infty}$ for the initial value problem of this Boltzmann equation in a periodic box.
Abstract:
This paper establishes hyper-contractivity in $L^\infty(\mathbb{R}^d)$ (also known as ultra-contractivity) for the multi-dimensional Keller-Segel systems with diffusion exponent $m>1-2/d$. The results show that for the supercritical and critical case $1-2/d < m \le 2-2/d$, if $||U_0||_{d(2-m)/2} < C_{d,m}$, where $C_{d,m}$ is a universal constant, then for any $t>0$, $||u(\cdot,t)||_{L^\infty(\mathbb{R}^d)}$ is bounded and decays as $t$ goes to infinity. For the subcritical case $m>2-2/d$, the solution $u(\cdot,t) \in L^\infty(\mathbb{R}^d)$ for any initial data $U_0 \in L_+^1(\mathbb{R}^d)$ and any positive time.
Abstract:
This paper deals with a class of integro-differential equations modeling the dynamics of a market where agents estimate the value of a given traded good. Two basic mechanisms are assumed to concur in value estimation: interactions between agents and sources of public information and herding phenomena. A general well-posedness result is established for the initial value problem linked to the model and the asymptotic behavior in time of the related solution is characterized for some general parameter settings, which mimic different economic scenarios. Analytical results are illustrated by means of numerical simulations and lead us to conclude that, in spite of its oversimplified nature, this model is able to reproduce some emerging behaviors proper of the system under consideration. In particular, consistently with experimental evidence, the obtained results suggest that if agents are highly confident in the product, imitative and scarcely rational behaviors may lead to an over-exponential rise of the value estimated by the market, paving the way to the formation of economic bubbles.
Abstract:
This paper proves some regularity criteria for the 2D MHD system with horizontal dissipation and horizontal magnetic diffusion. We also prove the global existence of strong solutions of its regularized MHD-$\alpha$ system.
Abstract:
The inviscid limit behavior of solutions is considered for the multi-dimensional derivative complex Ginzburg-Landau (DCGL) equation. For small initial data, it is proved that for some $T>0$, the solution of the DCGL equation converges to the solution of the derivative nonlinear Schrödinger (DNLS) equation in the natural space $C([0,T]; H^s)$ $(s\geq \frac{n}{2})$ if some coefficients tend to zero.
Abstract:
In the present article we prove an algebraic rate of decay towards the equilibrium for the solution of a non-homogeneous, linear kinetic transport equation. The estimate is of the form $C(1+t)^{-a}$ for some $a>0$. The total scattering cross-section $R(k)$ is allowed to degenerate but we assume that $R^{-a}(k)$ is integrable with respect to the invariant measure.
Abstract:
In the present paper we propose a class of kinetic-type equations that describe the replicator dynamics at the mesoscopic level. The equations are highly nonlinear due to the dependence of the transition rates on the distribution function. Under suitable assumptions we show the asymptotic (exponential) stability of the solutions to such kinetic equations.
Abstract:
In this paper we focus on the initial value problem of an inertial model for a generalized plate equation with memory in $\mathbb{R}^n\ (n\geq1)$. We study the decay and the regularity-loss property for this type of equations in the spirit of [10,13]. The novelty of this paper is that we extend the order of derivatives from integer to fraction and refine the results of the even-dimensional case in the related literature [10,13].
Abstract:
In this paper we study the thermodynamics of a rarefied gas contained in a closed vessel at constant volume. By adding axiomatic rules to the usual ones derived by Cercignani, we obtain a new symmetry property of the wall/particle scattering kernel. This new symmetry property enables us to establish the first and second laws of macroscopic thermodynamics for a rarefied gas colliding with walls. Then we study the behavior of the rarefied gas when it is in contact with several (moving) thermostats at the same time. We show the existence, uniqueness and long-time behavior of the solution to the homogeneous (linear) evolution equation describing the system. Finally we apply our thermodynamical model of a rarefied gas to the measurement of heat flux in very low density systems and compare it to experimental results, shedding new light on the interpretation of observed behaviors.
Abstract:
The Cauchy problem to the Fokker-Planck-Boltzmann equation under Grad's angular cut-off assumption is investigated. When the initial data is a small perturbation of an equilibrium state, global existence and optimal temporal decay estimates of classical solutions are established. Our analysis is based on the coercivity of the Fokker-Planck operator and an elementary weighted energy method.
Abstract:
This paper studies the blowup of smooth solutions to the full compressible MHD system with zero resistivity on $\mathbb{R}^{d}$, $d\geq 1$. We obtain that the smooth solutions to the MHD system will blow up in finite time, if the initial density is compactly supported.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Everything else
Now’s probably a good time to show you the whole schematic:
The battery and charging terminals aren’t actually components; I just drew them out of the traces on the PCB itself.
LEDs
The LEDs on the left are Charlieplexed which allowed me to connect a large number of LEDs with a small number of wires going to the front PCB. This means that I can only light one at a time. I used some multiplexing with PWM to give the appearance of one brighter LED surrounded by two dimmer ones. This triplet is what travels around the display. The feathering of brightnesses smooths out the rotation animation a bit.
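For the curious, the pin-state bookkeeping behind Charlieplexing is easy to enumerate. This is my own generic illustration (the post doesn't give the firmware or the exact LED count): n tri-stateable GPIO pins can address n(n−1) LEDs, one lit at a time.

```python
from itertools import permutations

def charlieplex_states(n_pins):
    """Each LED maps to an ordered pin pair: drive one pin high, one low,
    and leave every other pin as a high-impedance input ('Z')."""
    states = []
    for hi, lo in permutations(range(n_pins), 2):
        pins = ['Z'] * n_pins
        pins[hi], pins[lo] = 'H', 'L'  # forward-biases exactly one LED
        states.append(tuple(pins))
    return states

# 4 wires are enough for 4 * 3 = 12 Charlieplexed LEDs.
print(len(charlieplex_states(4)))  # 12
```

The "one at a time" limitation in the post follows directly from this: each state forward-biases a single LED, so apparent simultaneous brightness has to come from fast multiplexing.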
Battery sensing
R1 and R2 bring the battery voltage to the microcontroller. The ATTiny24 has an internal 1.1V band gap reference which it can use to perform ADC measurements where 1.1V is equal to 0xFF. This means that the LSB is worth around 0.0043V. With this configuration, when the ADC reads a value of 125, the divider’s output is 0.54V and the battery voltage is 3.078.
That is assuming that the band gap and resistors are perfectly accurate. In reality I got closer to 2.9V which I figured was close enough.
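The arithmetic above can be reproduced directly. The divider ratio below is back-computed from the author's own numbers (3.078 V / 0.54 V ≈ 5.7), since R1 and R2's values aren't given in the post; treat it as an assumption:

```python
VREF = 1.1     # ATtiny24 internal band-gap reference (volts)
ADC_FULL = 256 # 8-bit (left-adjusted) result: 1.1 V reads as 0xFF
DIVIDER = 5.7  # assumed R1/R2 divider ratio, inferred from the post

def battery_voltage(adc_reading):
    """Battery voltage implied by an 8-bit ADC reading of the divider node."""
    v_node = adc_reading * VREF / ADC_FULL  # ~4.3 mV per LSB, as in the post
    return v_node * DIVIDER

print(round(battery_voltage(125), 2))  # ~3.06 V (the post rounds to 3.078)
```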
Hall Effect and Power Switch
With no buttons or switches accessible, I wanted some way to switch the circuit off. This would be a convenience to the user trying to save the battery as well as a necessity during battery charging (more on that later). U3 is a low-power magnetic or "Hall Effect" sensor. I originally purchased a handful of them when I was trying to find a replacement for the optical sensor on my longboard wheel displays, but I was disappointed to find that the low power draw of this model came at the cost of sample rate.
The MLX90248 only measures the magnetic field once every 40ms, but it also only draws an average of 6.5$$\mu$$A. It’s a bi-directional sensor too, so a sufficiently strong magnetic field in the north or south direction along its sensor axis will cause it to pull its output low. What this means in practice is that if you put a strong enough neodymium magnet within a centimeter of it, it’ll trigger the output.
So what does the microcontroller do when the hall-effect pin goes low? The idea is to shut off and draw as little power as possible while still keeping the battery connected to the circuit and allowing the circuit to recover from the sleep state when the magnet is removed.
There are a number of shutdown modes available to the ATTiny24. The lowest power option is “power-down mode” where all operations on the processor are halted and the part requires a transition on an interrupt pin to wake up. In this state, it draws just 3-4$$\mu$$A which is tiny.
My original plan was to finagle a sort of hardware clock similar to poor man’s capsense using C3. The idea was to charge C3 and then switch the pin to a high-impedance input before going into power-down mode. The tiny leakage current on the input pin would slowly drain the capacitor until its voltage dropped low enough to trigger a falling edge interrupt and wake back up again.
This was a great solution for the menorah where the circuit could stay in power-down indefinitely with no problems, but I was concerned with my earrings that the board could get stuck in power-down for an extended period of time and neglect the critical battery monitoring functions. It was also generally flakey and not the kind of thing I wanted to implement in a place where I couldn’t easily access the circuit.
Of course, my real reason for ditching it was a short that developed somewhere in my prototype preventing it from working. Now it’s just an extra capacitor that does nothing.
The next lowest power mode is “standby” which still draws less than 10$$\mu$$A. This mode keeps an oscillator running which can pull the chip out of standby after a certain number of clock cycles has passed. This is super useful in things like wristwatches and soil moisture sensors.
Sadly, this solution requires an external crystal oscillator which is A) large and B) not already implemented on the PCBs that I had already made. For these reasons, I had to fall back to “idle” mode.
In idle mode, the microcontroller is more or less still totally awake. It can be woken by pin change interrupts as well as software timers. While this is super convenient, it also draws substantially more power. Even using the system clock prescaler fuse to drop the clock frequency down to 1MHz and using the Power Reduction Register to disable the ADC, USI, and one of the timers, I was still orders of magnitude above the other modes.
Page 197 of the data sheet says I should be able to drop down to somewhere between 0.15mA and 0.25mA. I measured closer to 0.45mA:
(note, this was before I added the pull up/pull down to the FETs. The actual current draw should be higher.)
I’m not sure why this is so high. I think it might have something to do with the need to keep the gate of my PFET pulled low. Regardless, it’s still much lower than the 6mA I measured when the circuit was active, so I moved on.
Update: Multiple readers have brought it to my attention that I could have used the watchdog timer of the ATTiny24 to reset the device every 8 seconds or so. This can run while the system is in power-down or standby mode and does not require an external oscillator.
When a magnet is brought close to the circuit, the microcontroller switches off the LEDs, starts a timer, and goes into idle mode. When the timer expires, it wakes up for a moment, takes a brief measurement of the battery, checks to see if the magnet is still there, and goes back to sleep. If the battery is too low, it disconnects it from the circuit, and if the magnet has been removed, it wakes up.
The only downside of this method is that lengthening the timer interval to decrease power draw also makes the earrings very slow to respond to being turned on. Depending on the user’s timing, it can take anywhere from 0 to 16 seconds for the device to light up after they remove the magnet. While this may be inconvenient, I consider it to be well within my use case.
Charging
Charging lithium polymer batteries is a strange process that’s usually split into two stages: the constant current stage and the constant voltage stage.
Unlike the capacitors used in a typical circuit, lithium polymer batteries have a maximum rate at which they can be charged. Your typical battery can be modeled as a resistor and a voltage source:
When you begin charging a dead battery, the voltage is all the way down at 3V. The goal of charging is to force current backwards through the voltage source. This is done by attaching the battery to a larger voltage source. The battery will have a rated charging voltage which in the case of my battery is 4.2V.
The current traveling backwards through the battery will be the voltage across the resistor divided by its resistance. Let’s say the resistor starts out at 50$$\Omega$$ (it will change depending on a number of conditions, but I’m going to stick with 50$$\Omega$$ to keep things simple):
$$\large I=\frac{4.2V-3V}{50\Omega} = 24mA$$
The only problem is that the datasheet for this battery lists its maximum charge current as 9mA. We’re almost tripling that! With this amount of current flowing across the battery’s internal resistance, it will start to heat up more than the battery can handle. It could eventually cause the battery to balloon up and even start a fire (probably an exaggeration for such a small cell, but it will still damage the battery).
This phase of charging is known as the “constant current phase”. This is where the charger needs to carefully limit its current to make sure that it doesn’t overdrive the battery. This will involve using a voltage smaller than 4.2V starting out. In this case, you could use:
$$\large 9mA\times 50\Omega + 3V = 3.45V$$
As you pump current into this battery, its internal voltage will slowly rise. Soon you’ll be able to drive it at a higher voltage than 3.45V. Eventually, you’ll get to the point that you can safely charge it at 4.2V, but then you’ve got another problem. In order to maintain 9mA of charging, you would need to drive the battery above the rated 4.2V charging voltage. What do you do?
This is called the “constant voltage” stage of battery charging. This means that you maintain the voltage at 4.2V and allow the current to slowly drop. As the internal voltage rises closer to 4.2V, the voltage across the internal resistance will drop lowering the current.
Eventually, this current will drop below some cut off threshold (in my case, 1mA), and it will be okay to stop charging. The battery is fully charged!
You might find that a number of cheap chargers for things like those little RC helicopters don’t even have a constant voltage mode. This is because once you exit the constant current mode, you can be as high as 80% full. The last 20% isn’t considered worth it especially since it takes so much longer to complete due to the lower charge current.
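The two-stage behavior can be sketched with the same 50 Ω toy model (the voltage, current, and cutoff numbers are from the text; the cell-capacity constant is mine and purely illustrative):

```python
R_INT = 50.0      # internal resistance model from the text (ohms)
V_MAX = 4.2       # rated charging voltage
I_MAX = 0.009     # 9 mA maximum charge current
I_CUTOFF = 0.001  # 1 mA end-of-charge threshold
CAPACITY = 100.0  # toy coulombs-per-volt constant; not a real cell parameter

def charge(v_cell=3.0, dt=1.0):
    """Apply the lesser of V_MAX and whatever voltage keeps the current at
    I_MAX (constant current), then hold V_MAX while the current decays
    (constant voltage). Returns a log of (v_cell, i) samples."""
    log = []
    i = I_MAX
    while i >= I_CUTOFF:
        v_apply = min(V_MAX, v_cell + I_MAX * R_INT)  # CC until 4.2 V, then CV
        i = (v_apply - v_cell) / R_INT                # current through R_INT
        v_cell += i * dt / CAPACITY                   # internal voltage creeps up
        log.append((v_cell, i))
    return log

log = charge()
print(round(log[-1][0], 2))  # 4.15 -- just above the 1 mA cutoff point
```

Note the simulation reproduces the "last 20% takes much longer" observation: the CV tail is an exponential decay toward 4.2 V, so the current (and charge rate) keeps shrinking.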
Charge management
So the question is, how do you implement this charging scheme? I came up with this solution (click to enlarge):
This probably isn’t the cheapest or most effective way to charge a battery, but it could be done with parts that I happened to have lying around. The charger is designed to run off of either microUSB or two AAA batteries (for portable use). D4 and D5 act as ORing diodes. Whichever voltage is higher (USB if it’s connected) will be used to power the circuit. These are Schottky diodes, so the voltage drops about 0.3V across them.
That’s where U4 steps in. U4 is the boost converter from my bullet counter circuit. When configured correctly, this thing will take a voltage from as low as 2V and output a good 5V at up to 200mA. This is enough to boost the AAA’s 3V output up to 5V or to overcome the Schottky diode voltage drop in the case of USB power.
The power switch originally controlled the enable pin on the boost converter until I realized that with the boost converter disabled, there is still a current path through L1 and D3, so switching it off won’t prevent the circuit from drawing power.
Now for the part that actually charges the batteries. There are two independent chargers here (one for each earring). Here’s one of them:
This circuit only controls the negative terminal of the battery. The positive terminal is connected directly to +5V.
Voltage Limiting
Voltage limiting is handled by Q2 and its op amp. R1, R2, and R9 make up a voltage divider that outputs 0.8V (this could have been done with 2 resistors, but I wanted to make it work with values I had handy). This 0.8V is fed into the inverting input of the op amp, which is configured in a negative feedback mode through Q2. The end result is that the drain of Q2 is kept at 0.8V, which means that the maximum voltage possible across the battery is 4.2V.
R28 was added after I tested the circuit due to a problem I discovered with this setup when the currents get very small. In order for Q2 to maintain the 0.8V drop, it needs some amount of current traveling through its R$$_{DS}$$. When this current drops too low, the voltage drop decreases, and the battery sees a voltage higher than 4.2V. R28 acts to maintain at least 4.2mA of current flowing across Q2 at all times. This energy is wasted of course, but I figured it was okay to sacrifice energy for stability considering this charger is going to be plugged into USB most of the time.
Current Limiting
Q1 and its op amp are responsible for current limiting. R3 and R4 combine to make a 31$$\Omega$$ current sense resistor. When the charge current reaches 9mA, the voltage across this 31$$\Omega$$ resistor should be:
$$\large 31\Omega \times 9mA = 0.279V$$
When this is added to the 0.8V of the voltage limiting section below, it comes out to 1.079V. The voltage divider comprised of R6, R7, and R8 creates a 1.071V reference and the Q1 op amp combo try to keep both of those outputs at the same level. The op amp is only monitoring the voltage, but the addition of the current sense resistors turn this into a current limiting supply.
When the battery’s voltage increases high enough to exit constant current mode, this portion of the circuit will continue to try to raise the current, but because it only has 4.2V maximum to work with, it won’t be able to damage the battery and the circuit will effectively be in constant voltage mode.
Shutdown
The two lines leading off the right side of the page go to an onboard ATTiny24 microcontroller. This device monitors the charge current by looking at the voltage at the top of R3 and R4. When this voltage drops below 0.843V, it means that the current has dropped below 1.4mA.
At this point, the microcontroller pulls the top line down. This overrides the output of the op amp, which is weakened by R5, and shuts Q1 off, effectively killing the output and disconnecting the battery from the charger.
The 1.4mA represents the 1mA shutoff current of the battery added to the 0.4mA idle current of the microcontroller.
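The shutdown threshold is just Ohm's law across the sense resistors; a quick numerical check (all values from the text):

```python
R_SENSE = 31.0  # R3 and R4 combined 31-ohm current-sense resistance
V_FLOOR = 0.8   # voltage held at the bottom node by Q2's feedback loop

def sense_current(v_top):
    """Charge current implied by the voltage the ATtiny sees above R3/R4."""
    return (v_top - V_FLOOR) / R_SENSE

print(round(sense_current(0.843) * 1000, 2))  # ~1.39 mA: the 1.4 mA cutoff
print(round(sense_current(1.079) * 1000, 1))  # 9.0 mA: the CC current limit
```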
Browse by Person
Article (55 items)
Aad, G, Abbott, B, Abdallah, J et al. (2883 more authors) (2016)
Addendum to 'Measurement of the $t\bar{t}$ production cross-section using eμ events with b-tagged jets in pp collisions at √s = 7 and 8 TeV with the ATLAS detector'. European Physical Journal C: Particles and Fields, 76. 642. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2855 more authors) (2016)
Performance of pile-up mitigation techniques for jets in pp collisions at √s=8 TeV using the ATLAS detector. European Physical Journal C, 76 (11). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2867 more authors) (2016)
Identification of high transverse momentum top quarks in pp collisions at √s=8 TeV with the ATLAS detector. Journal of High Energy Physics, 2016. 93. ISSN 1029-8479
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Reconstruction of hadronic decay products of tau leptons with the ATLAS experiment. The European Physical Journal C, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2867 more authors) (2016)
Measurement of the transverse momentum and $\phi^*_\eta$ distributions of Drell–Yan lepton pairs in proton–proton collisions at √s = 8 TeV with the ATLAS detector. The European Physical Journal C - Particles and Fields, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2862 more authors) (2016)
Measurement of the differential cross-sections of prompt and non-prompt production of J/ψ and ψ(2S) in pp collisions at √s=7 and 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (5). 283. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2866 more authors) (2016)
Search for the standard model Higgs boson produced in association with a vector boson and decaying into a tau pair in pp collisions at √s = 8 TeV with the ATLAS detector. Physical Review D, 93 (9). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2868 more authors) (2016)
Observation of Long-Range Elliptic Azimuthal Anisotropies in √s = 13 and 2.76 TeV pp Collisions with the ATLAS Detector. Physical Review Letters, 116 (17). ARTN 172301. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Probing lepton flavour violation via neutrinoless τ⟶3μ decays with the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (5). 232. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2871 more authors) (2016)
Search for dark matter produced in association with a Higgs boson decaying to two bottom quarks in pp collisions at √s = 8 TeV with the ATLAS detector. Physical Review D, 93 (7). ARTN 072007. ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2863 more authors) (2016)
Search for new phenomena in events with at least three photons collected in pp collisions at √s = 8 TeV with the ATLAS detector. The European Physical Journal C, 76 (4). ISSN 1434-6044
Aad, G, Abajyan, T, Abbott, B et al. (2840 more authors) (2016)
Measurement of the centrality dependence of the charged-particle pseudorapidity distribution in proton–lead collisions at √s_NN = 5.02 TeV with the ATLAS detector. The European Physical Journal C, 76 (4). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2863 more authors) (2016)
Search for anomalous couplings in the W tb vertex from the measurement of double differential angular decay rates of single top quarks produced in the t-channel with the ATLAS detector. Journal of High Energy Physics, 2016 (4).
Aad, G, Abbott, B, Abdallah, J et al. (2782 more authors) (2016)
Search for magnetic monopoles and stable particles with high electric charges in 8 TeV pp collisions with the ATLAS detector. Physical Review D, 93 (5). 052009. ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Search for the electroweak production of supersymmetric particles in √s = 8 TeV pp collisions with the ATLAS detector. Physical Review D, 93 (5). 052002. ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2794 more authors) (2016)
Search for the electroweak production of supersymmetric particles in root s=8 TeV pp collisions with the ATLAS detector. Physical Review D, 93 (5). 052002. ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2879 more authors) (2016)
Centrality, rapidity, and transverse momentum dependence of isolated prompt photon production in lead-lead collisions at TeV measured with the ATLAS detector. Physical Review C, 93 (3). ISSN 0556-2813
Aad, G, Abbott, B, Abdallah, J et al. (2856 more authors) (2016)
Search for invisible decays of a Higgs boson using vector-boson fusion in pp collisions at √s=8 TeV with the ATLAS detector. Journal of High Energy Physics, 2016. 172. ISSN 1126-6708
Aad, G, Abbott, B, Abdallah, J et al. (2862 more authors) (2016)
Search for a high-mass Higgs boson decaying to a W boson pair in pp collisions at s = 8 $$ \sqrt{s}=8 $$ TeV with the ATLAS detector. Journal of High Energy Physics, 2016 (1).
Aad, G, Abbott, B, Abdallah, J et al. (2854 more authors) (2016)
Measurements of fiducial cross-sections for $$t\bar{t}$$ t t ¯ production with one or two additional b-jets in pp collisions at $$\sqrt{s}$$ s =8 TeV using the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (1). 11. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2824 more authors) (2016)
Measurements of the Higgs boson production and decay rates and coupling strengths using pp collision data at $$\sqrt{s}=7$$ s = 7 and 8 TeV in the ATLAS experiment. European Physical Journal C: Particles and Fields, 76. 6. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2854 more authors) (2015)
ATLAS Run 1 searches for direct pair production of third-generation squarks at the Large Hadron Collider. European Physical Journal C: Particles and Fields, 75 (10). 510. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2835 more authors) (2015)
Search for Higgs boson pair production in the $$b\bar{b}b\bar{b}$$ b b ¯ b b ¯ final state from pp collisions at $$\sqrt{s} = 8$$ s = 8 TeVwith the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (9). 412. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2825 more authors) (2015)
Search for heavy long-lived multi-charged particles in pp collisions at root s=8 TeV using the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (8). 362. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2819 more authors) (2015)
Constraints on the off-shell Higgs boson signal strength in the high-mass ZZ and WW final states with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (7). 335. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2015)
Search for a new resonance decaying to a W or Z boson and a Higgs boson in the $$\ell \ell / \ell \nu / \nu \nu + b \bar{b}$$ ℓ ℓ / ℓ ν / ν ν + b b ¯ final states with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (6). 263. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2823 more authors) (2015)
Determination of spin and parity of the Higgs boson in the $$WW^*\rightarrow e \nu \mu \nu $$ W W ∗ → e ν μ ν decay channel with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 231. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2815 more authors) (2015)
Observation and measurements of the production of prompt and non-prompt $$\varvec{\text {J}\uppsi }$$ J ψ mesons in association with a $$\varvec{Z}$$ Z boson in $$\varvec{pp}$$ p p collisions at $$\varvec{\sqrt{s}= 8\,\text {TeV}}$$ s = 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 229. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2821 more authors) (2015)
Search for direct pair production of a chargino and a neutralino decaying to the 125 GeV Higgs boson in $$\sqrt{\varvec{s}} = 8$$ s = 8 TeV $$\varvec{pp}$$ p p collisions with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 208. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2015)
Search for $$W' \rightarrow tb \rightarrow qqbb$$ W ′ → t b → q q b b decays in $$pp$$ p p collisions at $$\sqrt{s}$$ s = 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (4). 165. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2015)
Search for Higgs and Z Boson Decays to J/ψγ and ϒ(nS)γ with the ATLAS Detector. Physical Review Letters, 114 (12). 121801. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2881 more authors) (2015)
Simultaneous measurements of the tt¯, W+W−, and Z/γ∗→ττ production cross-sections in pp collisions at √s=7 TeV with the ATLAS detector. Physical Review D - Particles, Fields, Gravitation and Cosmology, 91 (5). 052005. ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2015)
Search for dark matter in events with heavy quarks and missing transverse momentum in pp collisions with the ATLAS detector. European Physical Journal C, 75 (2). 92. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2467 more authors) (2015)
Search for dark matter in events with heavy quarks and missing transverse momentum in pp collisions with the ATLAS detector. European Physical Journal C , 75 (2). 92. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2896 more authors) (2015)
Measurements of Higgs boson production and couplings in the four-lepton channel in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D, 91 (1). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2014)
Search for nonpointing and delayed photons in the diphoton and missing transverse momentum final state in 8 TeV pp collisions at the LHC using the ATLAS detector. Physical Review D, 90 (11). ISSN 1550-7998
Aad, G, Abajyan, T, Abbott, B et al. (2793 more authors) (2014)
Measurements of normalized differential cross sections for tt¯ production in pp collisions at √(s)=7 TeV using the ATLAS detector. Physical Review D, 90 (7). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2886 more authors) (2014)
Measurement of the Higgs boson mass from the H→γγ and H→ZZ∗→4ℓ channels in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D, 90 (5). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2878 more authors) (2014)
Search for high-mass dilepton resonances in pp collisions at s√=8 TeV with the ATLAS detector. Physical Review D, 90. 052005. ISSN 1550-7998
Aad, G, Abajyan, T, Abbott, B et al. (2920 more authors) (2013)
Evidence for the spin-0 nature of the Higgs boson using ATLAS data. Physics Letters B, 726 (1-3). pp. 120-144. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (1825 more authors) (2012)
Search for contact interactions in dilepton events from pp collisions at root s=7 TeV with the ATLAS detector. Physics Letters B, 712 (1-2). pp. 40-58. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (2923 more authors) (2012)
Measurement of D*± meson production in jets from pp collisions at s√=7 TeV with the ATLAS detector. Physical Review D, 85 (5). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (3057 more authors) (2012)
Search for the Standard Model Higgs Boson in the Diphoton Decay Channel with 4.9 fb−1 of pp Collision Data at √s=7 TeV with ATLAS. Physical Review Letters, 108. 111803. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2775 more authors) (2012)
Measurement of the ZZ Production Cross Section and Limits on Anomalous Neutral Triple Gauge Couplings in Proton-Proton Collisions at √s=7 TeV with the ATLAS Detector. Physical Review Letters, 108 (4). 041804. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2992 more authors) (2012)
K0s and Λ production in pp interactions at s√=0.9 and 7 TeV measured with the ATLAS detector at the LHC. Physical Review D, 85 (1). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (3022 more authors) (2011)
Search for Dilepton Resonances in pp Collisions at √s=7 TeV with the ATLAS Detector. Physical Review Letters, 107 (27). ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3028 more authors) (2011)
Measurement of the transverse momentum distribution of Z/gamma* bosons in proton-proton collisions at root s=7 TeV with the ATLAS detector. Physics Letters B, 705 (5). pp. 415-434. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (3023 more authors) (2011)
Search for a standard model Higgs boson in the H→ZZ→ℓ(+)ℓ(-)νν decay channel with the ATLAS detector. Physical Review Letters, 107 (22). 221802. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3016 more authors) (2011)
Search for new phenomena with the monojet and missing transverse momentum signature using the ATLAS detector in root s=7 TeV proton-proton collisions. Physics Letters B , 705 (4). pp. 294-312. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (3017 more authors) (2011)
Search for new phenomena with the monojet and missing transverse momentum signature using the ATLAS detector in sqrt(s) = 7 TeV proton-proton collisions. Physics Letters B, 705 (4). pp. 294-312. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (3033 more authors) (2011)
Measurement of the W+W− Cross Section in s√=7 TeV pp Collisions with ATLAS. Physical Review Letters, 107. 041802. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3046 more authors) (2011)
Measurement of the production cross section for W-bosons in association with jets in pp collisions at √s=7 TeV with the ATLAS detector. Physics Letters B, 698 (5). pp. 325-345. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (3024 more authors) (2011)
Measurement of Dijet Azimuthal Decorrelations in pp Collisions at s√=7 TeV. Physical Review Letters, 106. 172002. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3034 more authors) (2010)
Observation of a Centrality-Dependent Dijet Asymmetry in Lead-Lead Collisions at root s(NN)=2.76 TeV with the ATLAS Detector at the LHC. Physical Review Letters, 105 (25). 252303. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3177 more authors) (2010)
Measurement of the W -> lv and Z/gamma* -> ll production cross sections in proton-proton collisions at root s=7 TeV with the ATLAS detector. Journal of High Energy Physics. 60. ISSN 1029-8479 |
Articles
1 - 17 of 17
Full-Text Articles in Physics
Isotropization In Brane Gas Cosmology, Scott Watson, Robert H. Brandenberger
Brane Gas Cosmology (BGC) is an approach to unifying string theory and cosmology in which matter is described by a gas of strings and branes in a dilaton gravity background. The Universe is assumed to start out with all spatial dimensions compact and small. It has previously been shown that in this context, in the approximation of neglecting inhomogeneities and anisotropies, there is a dynamical mechanism which allows only three spatial dimensions to become large. However, previous studies do not lead to any conclusions concerning the isotropy or anisotropy of these three large spatial dimensions. Here, we generalize the equations ...
Stripes From (Noncommutative) Stars, Simon Catterall, J. Ambjorn
We show that lattice regularization of noncommutative field theories can be used to study non-perturbative vacuum phases. Specifically we provide evidence for the existence of a striped phase in two-dimensional noncommutative scalar field theory
Hysteresis In Driven Disordered Systems: From Plastic Depinning To Magnets, M. Cristina Marchetti, Karin A. Dahmen
We study the dynamics of a viscoelastic medium driven through quenched disorder by expanding about mean field theory in $6-\epsilon$ dimensions. The model exhibits a critical point separating a region where the dynamics is hysteretic, with a macroscopic jump between strongly pinned and weakly pinned states, from a region where the sliding state is unique and no jump occurs. The disappearance of the jump at the critical point is described by universal exponents. As suggested in Ref. [MMP00], the model appears to be in the same universality class as the zero-temperature random-field Ising model of hysteresis in magnets.
Precision Measurement Of Energy And Position Resolutions Of The Btev Electromagnetic Calorimeter Prototype, Raymond Mountain, K. Khroustalev, V.A. Batarin, T. Brennan
The energy dependence of the energy and position resolutions of the electromagnetic calorimeter prototype made of lead tungstate crystals produced in Bogoroditsk (Russia) and Shanghai (China) is presented. These measurements were carried out at the Protvino accelerator using a 1 to 45 GeV electron beam. The crystals were coupled to photomultiplier tubes. The dependence of energy and position resolutions on different factors, as well as the measured electromagnetic shower lateral profile, are presented.
Effects Of Light Scalar Mesons In η → 3π Decay, Joseph Schechter, Abdou Abdel-Rehim, Deirdre Black, Amir H. Fariborz
We study the role of a possible nonet of light scalar mesons in the still interesting $\eta \to 3\pi$ decay process, with the primary motivation of learning more about the scalars themselves. The framework is a conventional non-linear chiral Lagrangian of pseudoscalars and vectors, extended to include the scalars. The parameters involving the scalars were previously obtained to fit the s-wave $\pi\pi$ and $\pi K$ scatterings in the region up to about 1 GeV as well as the strong decay $\eta' \to \eta \pi\pi$. At first, one might expect a large enhancement from diagrams including a light $\sigma$ ...
Crystalline Order On A Sphere And The Generalized Thomson Problem, Mark Bowick, Angelo Cacciuto, David R. Nelson, A. Travesset
We attack generalized Thomson problems with a continuum formalism which exploits a universal long-range interaction between defects depending on the Young modulus of the underlying lattice. Our predictions for the ground state energy agree with simulations of long-range power-law interactions of the form $1/r^{\gamma}$ ($0 < \gamma < 2$) to four significant digits. The regime of grain boundaries is studied in the context of tilted crystalline order, and the generality of our approach is illustrated with new results for square tilings on the sphere.
Development Of A Hybrid Photo-Diode And Its Front-End Electronics For The Btev Experiment, Raymond Mountain
This paper describes the development of a 163-channel Hybrid Photo-Diode (HPD) to be used in the RICH Detector for the BTEV Experiment. This is a joint development project with DEP, Netherlands. It also reports on the development of associated front-end readout electronics based on the va_btev ASIC, undertaken with IDEAS, Norway. Results from bench tests of the first prototypes are presented.
Construction, Pattern Recognition And Performance Of The Cleo Iii Lif-Tea Rich Detector, Raymond Mountain, R. Ayad, Konstantin Vladimirovich Bukin, A. Efimov
We briefly describe the design, construction and performance of the LiF-Tea RICH detector built to identify charged particles in the CLEO III experiment. Excellent pion/kaon separation is demonstrated.
Scaling, Domains, And States In The Four-Dimensional Random Field Ising Magnet, Alan Middleton
The four dimensional Gaussian random field Ising magnet is investigated numerically at zero temperature, using samples up to size $64^4$, to test scaling theories and to investigate the nature of domain walls and the thermodynamic limit. As the magnetization exponent $\beta$ is more easily distinguishable from zero in four dimensions than in three dimensions, these results provide a useful test of conventional scaling theories. Results are presented for the critical behavior of the heat capacity, magnetization, and stiffness. The fractal dimensions of the domain walls at criticality are estimated. A notable difference from three dimensions is the structure of ...
Predictions And Observations In Theories With Varying Couplings, Christian Armendariz-Picon
We consider a toy universe containing conventional matter and an additional real scalar field, and discuss how the requirements of gauge and diffeomorphism invariance essentially single out a particular set of theories which might describe such a world at low energies. In these theories, fermion masses and g-factors, as well as the electromagnetic coupling turn to be scalar field dependent; fermion charges and the gravitational coupling might be assumed to be constant. We then proceed to study the impact of a time variation of the scalar field on measurements of atomic spectra at high redshifts. Light propagation is not affected ...
Exact Lattice Supersymmetry: The Two-Dimensional N=2 Wess-Zumino Model, Simon Catterall, Sergey Karamov
We study the two-dimensional Wess-Zumino model with extended N=2 supersymmetry on the lattice. The lattice prescription we choose has the merit of preserving exactly a single supersymmetric invariance at finite lattice spacing $a$. Furthermore, we construct three other transformations of the lattice fields under which the variation of the lattice action vanishes to $O(ga^2)$, where $g$ is a typical interaction coupling. These four transformations correspond to the two Majorana supercharges of the continuum theory. We also derive lattice Ward identities corresponding to these exact and approximate symmetries. We use dynamical fermion simulations to check the equality ...
Clash Of Symmetries On The Brane, Aharon Davidson, B. F. Toner, R. R. Volkas, K. C. Wali
If our 3+1-dimensional universe is a brane or domain wall embedded in a higher-dimensional space, then a phenomenon we term the "clash of symmetries" provides a new method of breaking some continuous symmetries. A global $G_{cts} \otimes G_{discrete}$ symmetry is spontaneously broken to $H_{cts} \otimes H_{discrete}$, where the continuous subgroup $H_{cts}$ can be embedded in several different ways in the parent group $G_{cts}$, and $H_{discrete} < G_{discrete}$. A certain class of topological domain wall solutions connect two vacua that are invariant under differently embedded $H_{cts}$ subgroups. There is then enhanced symmetry breakdown to the intersection of these two subgroups on the domain wall. This is the "clash". In the brane limit, we obtain a configuration with $H_{cts}$ symmetries in the bulk but the smaller intersection symmetry on the brane itself. We illustrate this idea using a permutation-symmetric three-Higgs-triplet toy model exploiting the distinct I-, U- and V-spin U(2) subgroups of U(3). The three disconnected portions of the vacuum manifold can be treated symmetrically through the construction of a three-fold planar domain wall junction configuration, with our universe at the nexus. A possible con-
Vector Meson Dominance Model For Radiative Decays Involving Light Scalar Mesons, Joseph Schechter, Deirdre Black, Masayasu Harada
We study a vector dominance model which predicts a fairly large number of currently interesting decay amplitudes of the types $S \to \gamma\gamma$, $V \to S\gamma$ and $S \to V\gamma$, where $S$ and $V$ denote scalar and vector mesons, in terms of three parameters. As an application, the model makes it easy to study in detail a recent proposal to boost the ratio $\Gamma(\phi \to f_0 \gamma) / \Gamma(\phi \to a_0 \gamma)$ by including the isospin-violating $a_0$-$f_0$ mixing. However, we find that this effect is actually small in our model.
Thermionic Emission Model For Interface Effects On The Open-Circuit Voltage Of Amorphous Silicon Based Solar Cells, Eric A. Schiff
We present computer modeling for effects of the p/i interface upon the open-circuit voltage $V_{OC}$ in amorphous silicon based p-i-n solar cells. We show that the modeling is consistent with measurements on the intensity dependence for the interface effect, and we present an interpretation for the modeling based on thermionic emission of electrons over the electrostatic barrier at the p/i interface. We present additional modeling of the relation of $V_{OC}$ with the intrinsic layer bandgap $E_G$. The experimental correlation for optimized cells is $V_{OC} = (E_G/e) - 0.79$. The correlation is simply explained if $V_{OC}$ in these cells is ...
Infrared Charge-Modulation Spectroscopy Of Defects In Phosphorus Doped Amorphous Silicon, Kai Zhu, Eric A. Schiff, G. Ganguly
We present infrared charge-modulation absorption spectra on phosphorus-doped amorphous silicon (a-Si:H:P) with doping levels between 0.17% - 5%. At higher doping levels (1% - 5%) we find a sharp spectral line near 0.75 eV with a width of 0.1 eV. We attribute this line to the internal optical transitions of a complex incorporating four fold coordinated phosphorus and a dangling bond. This line is barely detectable in samples with lower doping levels (below 1%). In these samples a much broader line dominates the spectrum that we attribute to uncomplexed dopants. The relative strength of the two spectral ...
Photocarrier Drift Mobility Measurements And Electron Localization In Nanoporous Silicon, P. N. Rao, Eric A. Schiff, L. Tsybeskov, P. Fauchet
We report photocarrier time-of-flight measurements in diode structures made of highly porous crystalline silicon. The corresponding electron and hole drift mobilities are very small ($< 10^{-4}$ cm$^2$/V s) compared to homogeneous crystalline silicon. The mobilities are dispersive (i.e., having a power-law decay with time or length-scale), but are only weakly temperature-dependent. The dispersion parameter lies in the range 0.55-0.65 for both electrons and holes. We conclude that the drift mobilities are limited by the nanoporous geometry, and not by disorder-induced localized states acting as traps. This conclusion is surprising in the context of luminescence models based on radiative recombination of localized excitons.
Determining The Locus For Photocarrier Recombination In Dye-Sensitized Solar Cells, Kai Zhu, Eric A. Schiff, N. G. Park, J. Van De Lagemaat, A. J. Frank
We present intensity-modulated photocurrent and infrared transmittance measurements on dye-sensitized solar cells based on a mesoporous titania (TiO2) matrix immersed in an iodine-based electrolyte. Under short-circuit conditions, we show that an elementary analysis accurately relates the two measurements. Under open-circuit conditions, infrared transmittance, and photovoltage measurements yield information on the characteristic depth at which electrons recombine with ions (the ‘‘locus of recombination’’). For one particular series of samples recombination occurred near the substrate supporting the titania film, as opposed to homogeneously throughout the film. |
I'm reading Advanced Global Illumination.
Here is the part confusing me:
What do the second equation and the $\delta$-function mean?
Why is the third equation a sufficient condition? (A reason is given, but it is not clear to me.)
Computer Graphics Stack Exchange is a question and answer site for computer graphics researchers and programmers.
You are right to be confused. What I think they should have written: $$ L( x \leftarrow \Psi ) = L_{in} \delta(\Psi - \alpha) $$ using $ \alpha $ instead of $ \Theta $, which is already used as a dummy variable in the integral.
You should look up the Dirac delta function to learn its meaning and properties. In this context, you can imagine the $ L$ above as representing a very concentrated beam (a laser) coming from the angle $ \alpha $. Practically, to do the integral over $ \Psi $ with $ \delta (\Psi - \alpha) $ present in the integrand, remove the integration and replace all occurrences of $ \Psi $ with $ \alpha $. Then it should be clear how they arrive at the next line, which should read, for all $ \alpha $: $$ \int f_r(x, \alpha \rightarrow \Theta) \; \cos(N_x, \Theta)\; d \omega_\Theta \leq 1 $$.
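The sifting recipe above ("remove the integration and replace $\Psi$ with $\alpha$") can be checked numerically. This is a minimal sketch, not tied to the book's radiometric quantities: it approximates $\delta(x-a)$ by a narrow normalized Gaussian and verifies that $\int f(x)\,\delta(x-a)\,dx \approx f(a)$.

```python
import numpy as np

# Sifting property of the Dirac delta: ∫ f(x) δ(x - a) dx = f(a).
# δ is approximated by a narrow normalized Gaussian; as its width eps
# shrinks (relative to how fast f varies), the sum approaches f(a).

def delta_approx(x, a, eps):
    """Narrow normalized Gaussian approximating delta(x - a)."""
    return np.exp(-((x - a) ** 2) / (2.0 * eps**2)) / (eps * np.sqrt(2.0 * np.pi))

a = 0.7                                  # location of the spike (arbitrary)
x, dx = np.linspace(-10.0, 10.0, 400001, retstep=True)

# Riemann sum of cos(x) * delta(x - a) over the whole line.
integral = np.sum(np.cos(x) * delta_approx(x, a, 1e-2)) * dx
print(integral, np.cos(a))               # the two values agree closely
```

Shrinking `eps` further (while keeping the grid fine enough to resolve the spike) drives the two printed values together.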
The fact that this condition is sufficient follows from two facts: 1. that any function (e.g. $ L $) can be approximated by a sum of many $ \delta $ functions, and 2. that everything is linear. In other words, if I write $ N(L) $ and $ D(L) $ for the numerator and denominator of the left hand side of (2.21), then you can see that if $ L = L_1 + L_2 $, then $ N(L_1+L_2) = N(L_1) + N(L_2) $ and $ D(L_1+L_2) = D(L_1) + D(L_2) $. So if I know $ N(L_1) \leq D(L_1) $ and $ N(L_2) \leq D(L_2) $, then I know $ N(L) \leq D(L) $ since $$ N(L) = N(L_1) + N(L_2) \leq D(L_1) + D(L_2) = D(L) $$. |
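The additivity step is easy to confirm with any pair of linear functionals. The sketch below uses two illustrative stand-ins for the numerator and denominator of (2.21) (a constant-factor "reflected" integral and an "incident" integral on $[0,1]$; these are assumptions, not the book's exact expressions), and checks that $N$ and $D$ are additive and that the bound transfers to the sum.

```python
import numpy as np

# The sufficiency argument relies only on N and D being additive in L.
# Toy linear functionals of a function L on [0, 1]:
#   N(L) = ∫ 0.5 · L(x) dx   ("reflected" amount, constant-BRDF toy case)
#   D(L) = ∫ L(x) dx         ("incident" amount)

x, dx = np.linspace(0.0, 1.0, 100001, retstep=True)

def N(L):
    return np.sum(0.5 * L(x)) * dx

def D(L):
    return np.sum(L(x)) * dx

L1 = lambda t: np.exp(-t)      # two arbitrary non-negative "beams"
L2 = lambda t: t**2
Lsum = lambda t: L1(t) + L2(t)

# Additivity: N(L1 + L2) = N(L1) + N(L2), and likewise for D...
assert np.isclose(N(Lsum), N(L1) + N(L2))
assert np.isclose(D(Lsum), D(L1) + D(L2))

# ...so N ≤ D for each piece implies N ≤ D for the sum.
assert N(L1) <= D(L1) and N(L2) <= D(L2) and N(Lsum) <= D(Lsum)
print("additivity and the bound check out")
```

The same reasoning extends from two terms to any (approximating) sum of delta functions, which is exactly how the sufficiency claim is lifted from single beams to arbitrary incident radiance.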
Asymptotic behavior for solutions of some integral equations
1. School of Mathematical Sciences, Nanjing Normal University, Nanjing, 210097, China
2. Department of Mathematics, University of Colorado at Boulder, Boulder, CO 80309, United States
\[ u(x) = \frac{1}{|x|^{\alpha}}\int_{\mathbb{R}^n} \frac{v(y)^q}{|y|^{\beta}\,|x-y|^{\lambda}}\,dy, \qquad v(x) = \frac{1}{|x|^{\beta}}\int_{\mathbb{R}^n} \frac{u(y)^p}{|y|^{\alpha}\,|x-y|^{\lambda}}\,dy. \]
We obtain the growth rate of the solutions around the origin and the decay rate near infinity. Some new cases beyond the work of C. Li and J. Lim [17] are studied here. In particular, we remove some technical restrictions of [17], and thus complete the study of the asymptotic behavior of the solutions for non-negative $\alpha$ and $\beta$.
Keywords: Integral equations, weighted Hardy-Littlewood-Sobolev inequality, singularities, asymptotic analysis. Mathematics Subject Classification: Primary: 45E10, 45G0. Citation: Yutian Lei, Chao Ma. Asymptotic behavior for solutions of some integral equations. Communications on Pure & Applied Analysis, 2011, 10 (1) : 193-207. doi: 10.3934/cpaa.2011.10.193
References:
[1] L. Caffarelli, B. Gidas and J. Spruck,
[2] W. Chen, C. Jin, C. Li and J. Lim,
[3] [4] [5] W. Chen and C. Li,
[6] [7] W. Chen and C. Li,
[8] [9] [10] W. Chen, C. Li and B. Ou,
[11] A. Chang and P. Yang,
[12] L. Fraenkel, "An Introduction to Maximum Principles and Symmetry in Elliptic Problems,"
[13] B. Gidas, W. M. Ni and L. Nirenberg,
[14] [15] C. Jin and C. Li,
[16] C. Li,
[17] [18] [19] [20] E. Lieb and M. Loss, "Analysis,"
[21] [22] [23] B. Ou,
[24] [25] E. M. Stein and G. Weiss, "Introduction to Fourier Analysis on Euclidean Spaces,"
[26] E. M. Stein and G. Weiss,
[27]