The following text is cited from a textbook, "Spotlight Mode Synthetic Aperture Radar: A Signal Processing Approach", I would like to ask if anyone knows the proof to the following statements, as the proof is not outlined in the book, and I am unable to prove it myself.
The function $f(t)$ represents a pulse envelope waveform.
The effective duration of a pulse envelope waveform is given as:
$$T_{e}=\frac{\int_{-\infty}^{\infty}f(t)dt}{f(0)}$$
A corresponding measure of effective bandwidth of the pulse envelope waveform is given by:
$$B_{e}=\frac{1}{2\pi}\cdot \frac{\int_{-\infty}^{\infty}F(\omega )d\omega }{F(0)}$$
From the defining equation for the Fourier transform, the above measures can be shown to have a product which is constant.
$$B_{e}T_{e}=1$$
I have read elsewhere that the above is the time-bandwidth product. I have no idea what that means or what its physical significance is. Can anyone shed light on how the last statement can actually be proven? Thank you!
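Not a proof, but the identity can be checked numerically. The key observation is that $\int_{-\infty}^{\infty} f(t)\,dt = F(0)$ (the Fourier transform evaluated at $\omega = 0$) and $\frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,d\omega = f(0)$ (the inverse transform evaluated at $t = 0$), so the product telescopes to $1$. A quick sanity check in Python, with a Gaussian pulse as my own choice of test waveform:

```python
import numpy as np

# Test pulse f(t) = exp(-t^2); under the convention F(w) = ∫ f(t) e^{-iwt} dt
# its Fourier transform is F(w) = sqrt(pi) * exp(-w^2 / 4).
trapz = lambda y, x: np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

t = np.linspace(-50, 50, 200001)     # symmetric grid, t[100000] == 0
f = np.exp(-t**2)
w = np.linspace(-50, 50, 200001)
F = np.sqrt(np.pi) * np.exp(-w**2 / 4)

T_e = trapz(f, t) / f[t.size // 2]                 # effective duration
B_e = trapz(F, w) / (2 * np.pi * F[w.size // 2])   # effective bandwidth
print(T_e * B_e)  # ≈ 1
```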
Newton’s second law establishes a relationship between the force \(\mathbf{F}\) acting on a body of mass \(m\) and the acceleration \(\mathbf{a}\) caused by this force.
The acceleration \(\mathbf{a}\) of a body is directly proportional to the acting force \(\mathbf{F}\) and inversely proportional to its mass \(m,\) that is
\[{\mathbf{a} = \frac{\mathbf{F}}{m}\;\;\text{or}\;\;}\kern-0.3pt{\mathbf{F} = m\mathbf{a} = m\frac{{{d^2}\mathbf{r}}}{{d{t^2}}}.}\]
This formulation is valid for systems with constant mass. When the mass changes (for example, in the case of relativistic motion), Newton’s second law takes the form
\[\mathbf{F} = \frac{{d\mathbf{p}}}{{dt}},\]
where \(\mathbf{p}\) is the impulse (momentum) of the body.
In general, the force \(\mathbf{F}\) can depend on the coordinates of the body, i.e., the radius vector \(\mathbf{r},\) its velocity \(\mathbf{v},\) and time \(t:\)
\[\mathbf{F} = \mathbf{F}\left( {\mathbf{r},\mathbf{v},t} \right).\]
Below we consider the special cases where the force \(\mathbf{F}\) depends only on one of these variables.
Force Depends on Time: \(\mathbf{F} = \mathbf{F}\left( t \right)\)
Assuming that the motion is one-dimensional, Newton’s second law is written as the second order differential equation:
\[m\frac{{{d^2}x}}{{d{t^2}}} = F\left( t \right).\]
Integrating once, we find the velocity of the body \(v\left( t \right):\)
\[{v\left( t \right) }={ {v_0} + \frac{1}{m}\int\limits_0^t {F\left( \tau \right)d\tau } .}\]
Here we assume that the body begins to move at time \(t = 0\) with the initial velocity \(v\left( {t = 0} \right) = {v_0}.\) Integrating again, we get the law of motion \(x\left( t \right):\)
\[x\left( t \right) = {x_0} + \int\limits_0^t {v\left( \tau \right)d\tau } ,\]
where \({x_0}\) is the initial coordinate of the body, \(\tau\) is the variable of integration.
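The two integrations above can be sketched numerically and compared with a closed form. For a sample force $F(t) = F_0\cos\omega t$ (the values of $m$, $v_0$, $F_0$, $\omega$ below are my own, chosen for illustration), the first integral gives $v(t) = v_0 + \frac{F_0}{m\omega}\sin\omega t$:

```python
import numpy as np

# Numerically integrate F(t) = F0*cos(w*t) twice (illustrative values).
m, v0, x0 = 2.0, 1.0, 0.0
F0, w = 3.0, 0.5
t = np.linspace(0, 10, 10001)
dt = t[1] - t[0]
F = F0 * np.cos(w * t)

# cumulative trapezoid rule for v(t) = v0 + (1/m) * integral of F
v = v0 + np.concatenate(([0.0], np.cumsum((F[1:] + F[:-1]) / 2 * dt))) / m
# and once more for x(t)
x = x0 + np.concatenate(([0.0], np.cumsum((v[1:] + v[:-1]) / 2 * dt)))

v_exact = v0 + F0 / (m * w) * np.sin(w * t)
print(np.max(np.abs(v - v_exact)))  # small discretization error
```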
Force Depends on the Velocity: \(\mathbf{F} = \mathbf{F}\left( {\mathbf{v}} \right)\)
When a solid body moves in a liquid or gaseous environment it experiences a drag force (or a frictional force). At low velocities \(\mathbf{v},\) this force is proportional to the velocity \(\mathbf{v}:\)
\[\mathbf{F} = -k\mathbf{v}.\]
The coefficient \(k\) in turn is proportional to the viscosity \(\eta.\) In particular, if the body has a spherical shape, the drag force is described by the Stokes’ law:
\[\mathbf{F} = -6\pi \eta R\mathbf{v},\]
where \(R\) is the radius of the ball, \(\eta\) is the viscosity of the environment.
In this mode of motion Newton’s second law is written (in one-dimensional approximation) as the following differential equation:
\[m\frac{{{d^2}x}}{{d{t^2}}} = m\frac{{dv}}{{dt}} = -kv.\]
Integrating this equation with the initial condition \(v\left( {t = 0} \right) = {v_0}\) gives
\[
{\frac{{dv}}{v} = -\frac{k}{m}dt,\;\;}\Rightarrow {\int\limits_{{v_0}}^v {\frac{{du}}{u}} = -\frac{k}{m}\int\limits_0^t {d\tau } .} \]
Here \(u\) and \(\tau\) are integration variables. The velocity of the body varies from \({v_0}\) to \(v\) as the time changes from \(0\) to \(t.\) Consequently,
\[
{\ln v - \ln {v_0} = -\frac{k}{m}t,\;\;}\Rightarrow {\ln \frac{v}{{{v_0}}} = -\frac{k}{m}t,\;\;}\Rightarrow {v\left( t \right) = {v_0}{e^{-{\large\frac{k}{m}\normalsize}t}}.} \]
Thus, if the drag force is proportional to the velocity of the body, its speed will decrease exponentially.
The law of motion \(x\left( t \right)\) can be easily found by repeated integration:
\[
{x\left( t \right) = {x_0} + \int\limits_0^t {v\left( \tau \right)d\tau } } = {{x_0} + \int\limits_0^t {{v_0}{e^{-{\large\frac{k}{m}\normalsize}\tau }}d\tau } } = {{x_0} - \frac{{m{v_0}}}{k}\left( {{e^{-{\large\frac{k}{m}\normalsize}t}} - 1} \right) } = {{x_0} + \frac{{m{v_0}}}{k}\left( {1 - {e^{-{\large\frac{k}{m}\normalsize} t}}} \right).} \]
The last formula shows that the path traversed by the body before it comes to a complete stop is equal to \(\large\frac{{m{v_0}}}{k}\normalsize,\) i.e., proportional to the initial momentum of the body \(m{v_0}.\)
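A minimal numerical sketch (parameter values are my own, chosen for illustration) confirms both the exponential decay and the stopping distance \(m v_0 / k\):

```python
import numpy as np

# Linear drag: v(t) = v0*exp(-k*t/m), and x(t) -> x0 + m*v0/k as t grows.
m, k, v0, x0 = 1.5, 0.3, 2.0, 0.0
t = np.linspace(0, 60, 6001)
v = v0 * np.exp(-k / m * t)
x = x0 + m * v0 / k * (1 - np.exp(-k / m * t))
print(x[-1], m * v0 / k)  # the traversed path approaches m*v0/k
```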
As the velocity of a body increases, the physics of the process changes. The kinetic energy of the body begins to be spent not only on the friction between the layers of liquid, but also on the movement of the fluid in front of the body. In this mode, the drag force becomes proportional to the square of the velocity:
\[F = -\mu \rho S{v^2},\]
where \(\mu\) is the coefficient of proportionality, \(S\) is the cross-sectional area of the body, \(\rho\) is the density of the medium.
The nonlinear regime described above appears under the condition
\[\mathbf{\text{Re}} = \frac{{\rho vL}}{\eta } > 100,\]
where \(\mathbf{\text{Re}}\) is the dimensionless Reynolds number, \(\eta\) is the viscosity of the medium, \(L\) is a characteristic cross-sectional size, for example, the radius of the body.
Considering one-dimensional motion, we write Newton’s second law for this case in the form
\[{m\frac{{{d^2}x}}{{d{t^2}}} = m\frac{{dv}}{{dt}} }={ -\mu \rho S{v^2}.}\]
Integrating, we find the velocity of the body:
\[
{\frac{{dv}}{{{v^2}}} = -\frac{{\mu \rho S}}{m}dt,\;\;}\Rightarrow {\int\limits_{{v_0}}^v {\frac{{du}}{{{u^2}}}} = -\frac{{\mu \rho S}}{m}\int\limits_0^t {d\tau } .} \]
Here \(u\) and \(\tau\) again denote the integration variables. For the time \(t,\) the velocity of the body will decrease from an initial value \({v_0}\) to the final value \(v.\) As a result, we obtain
\[
{- \left( {\frac{1}{v} - \frac{1}{{{v_0}}}} \right) = -\frac{{\mu \rho S}}{m}t,\;\;}\Rightarrow {\frac{1}{v} = \frac{1}{{{v_0}}} + \frac{{\mu \rho S}}{m}t,\;\;}\Rightarrow {v\left( t \right) = \frac{1}{{\frac{1}{{{v_0}}} + \frac{{\mu \rho S}}{m}t}} }={ \frac{{{v_0}}}{{1 + \frac{{\mu \rho S{v_0}}}{m}t}}.} \]
Integrate again to find the law of motion \(x\left( t \right):\)
\[\require{cancel}
{x\left( t \right) = \int\limits_0^t {\frac{{{v_0}}}{{1 + \frac{{\mu \rho S{v_0}}}{m}\tau }}d\tau } } = {\int\limits_0^t {\frac{{\cancel{v_0}}}{{1 + \frac{{\mu \rho S{v_0}}}{m}\tau }}\frac{{d\left( {1 + \frac{{\mu \rho S{v_0}}}{m}\tau } \right)}}{{\frac{{\mu \rho S\cancel{v_0}}}{m}}}} } = {\frac{m}{{\mu \rho S}}\int\limits_0^t {\frac{{d\left( {1 + \frac{{\mu \rho S{v_0}}}{m}\tau } \right)}}{{1 + \frac{{\mu \rho S{v_0}}}{m}\tau }}} } = {\frac{m}{{\mu \rho S}}\left[ {\left. {\ln \left( {1 + \frac{{\mu \rho S{v_0}}}{m}\tau } \right)} \right|_0^t} \right] } = {\frac{m}{{\mu \rho S}}\ln \left( {1 + \frac{{\mu \rho S{v_0}}}{m}t} \right).} \]
It is important to bear in mind that these formulas are valid for sufficiently large values of the velocity: at lower velocities, this model is physically incorrect, since the drag force begins to depend on the velocity linearly (this case was considered previously).
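The closed forms for quadratic drag can be cross-checked against a crude forward-Euler integration of \(m\,dv/dt = -\mu\rho S v^2\) (all parameter values below are illustrative):

```python
import numpy as np

m, mu, rho, S, v0 = 2.0, 0.4, 1.2, 0.05, 30.0
c = mu * rho * S                     # combined drag coefficient
t_end, steps = 5.0, 50000
dt = t_end / steps

# closed forms derived above, evaluated at t = t_end
v_closed = v0 / (1 + c * v0 / m * t_end)
x_closed = m / c * np.log(1 + c * v0 / m * t_end)

# forward-Euler integration of m*dv/dt = -c*v^2
v, x = v0, 0.0
for _ in range(steps):
    x += v * dt
    v += -(c / m) * v * v * dt

print(abs(v - v_closed), abs(x - x_closed))  # both small
```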
Force Depends on the Position: \(\mathbf{F} = \mathbf{F}\left( x \right)\)
Examples of forces that depend only on the coordinate are, in particular:
Elastic force \(F = -kx;\) force of gravitational attraction \(F = -G\large\frac{{{m_1}{m_2}}}{{{x^2}}}\normalsize.\)
The motion of a body of mass \(m\) connected to a spring under the force of elasticity is determined by the differential equation
\[{m\frac{{{d^2}x}}{{d{t^2}}} = – kx\;\;\text{or}\;\;}\kern-0.3pt{\frac{{{d^2}x}}{{d{t^2}}} + \frac{k}{m}x = 0.}\]
This equation describes the undamped periodic oscillations with a period
\[T = 2\pi \sqrt {\frac{m}{k}} .\]
In the case of gravitational attraction, the body motion is described by the nonlinear differential equation
\[\frac{{{d^2}x}}{{d{t^2}}} = -G\frac{M}{{{x^2}}},\]
where \(M\) is the mass of the attracting body (for example, the mass of the Earth or the Sun), \(G\) is the universal gravitational constant.
The solution of this equation is given on the page Newton’s Law of Universal Gravitation.
In the case where the force depends on the coordinate, the acceleration is conveniently represented in the form:
\[{a = \frac{{dv}}{{dt}} = \frac{{dv}}{{dx}}\frac{{dx}}{{dt}} }={ v\frac{{dv}}{{dx}}.}\]
Then the differential equation can be written as
\[{m\frac{{{d^2}x}}{{d{t^2}}} = m\frac{{dv}}{{dt}} }={ mv\frac{{dv}}{{dx}} }={ F\left( x \right).}\]
Separating the variables \(v\) and \(x,\) we have
\[
{mvdv = F\left( x \right)dx,\;\;}\Rightarrow {m\int\limits_{{v_0}}^v {udu} = \int\limits_0^L {F\left( x \right)dx} ,\;\;}\Rightarrow {\frac{{m{v^2}}}{2} – \frac{{mv_0^2}}{2} }={ \int\limits_0^L {F\left( x \right)dx} .} \]
The last equation expresses the law of conservation of energy. The left side describes the change in kinetic energy, and the right side corresponds to the work of a variable force \({F\left( x \right)}\) when the body is moved by a distance \(L.\)
The subsequent integration of the function \({v\left( t \right)}\) allows us to find the law of motion \({x\left( t \right)}.\) Unfortunately, this is not always possible because of the cumbersome analytical expressions for \({v\left( t \right)}.\)
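For the elastic-force case this energy balance is easy to verify: with made-up values of \(m\) and \(k\), the solution \(x(t) = x_0\cos\left(\sqrt{k/m}\,t\right)\) keeps \(\frac{mv^2}{2} + \frac{kx^2}{2}\) constant over a full period:

```python
import numpy as np

m, k = 0.5, 8.0                      # illustrative values
w0 = np.sqrt(k / m)                  # angular frequency; period T = 2*pi/w0
t = np.linspace(0, 2 * np.pi / w0, 100001)
x0 = 1.0
x = x0 * np.cos(w0 * t)
v = -x0 * w0 * np.sin(w0 * t)

E = 0.5 * m * v**2 + 0.5 * k * x**2  # total mechanical energy
print(E.max() - E.min())  # ≈ 0: energy is conserved along the motion
```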
Problem 575: If Half of a Group Consists of Elements of Order 2, then the Rest Forms an Abelian Normal Subgroup of Odd Order
Let $G$ be a finite group of order $2n$.
Suppose that exactly half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Problem 455
Let $G$ be a finite group.
The centralizer of an element $a$ of $G$ is defined to be \[C_G(a)=\{g\in G \mid ga=ag\}.\]
A conjugacy class is a set of the form \[\Cl(a)=\{bab^{-1} \mid b\in G\}\] for some $a\in G$.
(a) Prove that the centralizer of an element $a$ of $G$ is a subgroup of the group $G$.
(b) Prove that the order (the number of elements) of every conjugacy class in $G$ divides the order of the group $G$.
Problem 420
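Neither part is proven by an example, but claim (b) above can at least be checked concretely. A small sketch for $G = S_3$, with permutations represented as tuples (the encoding is my own):

```python
from itertools import permutations

# G = S3; compose(p, q) applies q first, then p; inverse via argsort.
G = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

# conjugacy class sizes |Cl(a)| = |{b a b^{-1} : b in G}| for each a in G
sizes = sorted({len({compose(compose(b, a), inverse(b)) for b in G}) for a in G})
print(sizes)                                  # distinct class sizes of S3
print(all(len(G) % s == 0 for s in sizes))    # each divides |G| = 6
```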
In this post, we study the Fundamental Theorem of Finitely Generated Abelian Groups, and as an application we solve the following problem.
Problem. Let $G$ be a finite abelian group of order $n$. If $n$ is the product of distinct prime numbers, then prove that $G$ is isomorphic to the cyclic group $Z_n=\Zmod{n}$ of order $n$.
Problem 302
Let $R$ be a commutative ring with $1$ and let $G$ be a finite group with identity element $e$. Let $RG$ be the group ring. Then the map $\epsilon: RG \to R$ defined by
\[\epsilon\left(\sum_{i=1}^n a_i g_i\right)=\sum_{i=1}^n a_i,\] where $a_i\in R$ and $G=\{g_i\}_{i=1}^n$, is a ring homomorphism, called the augmentation map, and the kernel of $\epsilon$ is called the augmentation ideal. (a) Prove that the augmentation ideal in the group ring $RG$ is generated by $\{g-e \mid g\in G\}$.
(b) Prove that if $G=\langle g\rangle$ is a finite cyclic group generated by $g$, then the augmentation ideal is generated by $g-e$.
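The defining property of $\epsilon$ (that it is additive and multiplicative on group-ring elements, and vanishes on $g - e$) can be sketched for the small case $R = \mathbb{Z}$, $G = \mathbb{Z}/3\mathbb{Z}$; the dict-based encoding below is my own:

```python
from itertools import product

# Group ring Z[C3], with C3 = {0, 1, 2} under addition mod 3.
# An element is a dict {group element: coefficient}.
n = 3

def mul(u, v):
    """Convolution product in the group ring."""
    w = {g: 0 for g in range(n)}
    for (g, a), (h, b) in product(u.items(), v.items()):
        w[(g + h) % n] += a * b
    return w

eps = lambda u: sum(u.values())   # the augmentation map

u = {0: 2, 1: -1, 2: 3}           # made-up elements
v = {0: 1, 1: 4, 2: 0}
print(eps(mul(u, v)) == eps(u) * eps(v))  # True: epsilon is multiplicative
```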
Given:
$$ u_{tt} - c^2 u_{xx} = 0 $$(1) for $ 0<x<\infty $
$$ u |_{t=0} = g(x) , u_t | _{t=0} = h(x) $$(2) for $0<x$, and additionally
$$ (\alpha u + \beta u_t) |_{x=0} = q(t) $$(3) for $ t> 0 $
We are asked to evaluate and find the general solution in both regions $ x > ct $ and $0<x<ct$.
Given that the most general solution under the first condition is $$u(x,t) = \phi (x+ct) + \psi(x-ct) $$(I), we will focus on the latter case outlined, i.e. $x<ct$
We know from the definitions that:
$$\phi(x) = \frac{1}{2} g(x) + \frac{1}{2c} \int_{0}^{x} h(x')dx' $$(4) and
$$\psi(x) = \frac{1}{2} g(x) - \frac{1}{2c} \int_{0}^{x} h(x')dx' $$(5)
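As a quick sanity check on (4) and (5) (with my own choice of smooth test data $g(x) = e^{-x^2}$, $h(x) = x e^{-x^2}$), the combination $u = \phi(x+ct) + \psi(x-ct)$ does reproduce the initial conditions (2) wherever both arguments stay positive:

```python
import numpy as np

c = 2.0
g = lambda x: np.exp(-x**2)
h = lambda x: x * np.exp(-x**2)
H = lambda x: 0.5 - 0.5 * np.exp(-x**2)   # antiderivative of h with H(0) = 0

phi = lambda x: 0.5 * g(x) + H(x) / (2 * c)
psi = lambda x: 0.5 * g(x) - H(x) / (2 * c)
u = lambda x, t: phi(x + c * t) + psi(x - c * t)

x = np.linspace(0.5, 5.0, 100)            # points in the region x > ct
dt = 1e-5                                 # for a central-difference u_t
print(np.max(np.abs(u(x, 0) - g(x))))                             # ≈ 0
print(np.max(np.abs((u(x, dt) - u(x, -dt)) / (2 * dt) - h(x))))   # ≈ 0
```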
We can begin examining our boundary conditions. As usual, the particular issue is that $\psi(x-ct)$ is a problem in this region, as its argument might be negative, while the argument of $\phi(x+ct)$ will not be.
Jumping straight into (3), we apply it to (I), to get:
$$ q(t) = \alpha ( \phi(ct) + \psi(-ct) ) + \beta c (\phi(ct) ' - \psi(-ct) ' ) $$(6) and applying the relation $ x = -ct $ to (6):
$$ q(\frac{-x}{c}) = \alpha(\phi(-x) + \psi(x) ) + \beta c (\phi(-x)' - \psi(x)') $$(6')
From here, my steps get a bit more uncertain, where I take the (total) derivative of (4) to be :
$$\phi(x) ' = \frac{1}{2} g'(x) + \frac{1}{2c} (h(x) - h(0) ) + \int_{0}^{x} h'(x')dx' $$(7) using Leibniz's rule of integration
plugging into (6') gives me:
$$ q(\frac{-x}{c}) = \alpha( \frac{1}{2}g(-x) - \frac{1}{2c}\int_{0}^{-x} h(x')dx' + \psi(x)) + \beta c (\frac{1}{2} g'(-x) + \frac{1}{2c} (h(-x) - h(0) ) + \int_{0}^{-x} h'(x')dx' - \psi(x)') $$
NOTE:
I originally misread the example, as I meant to talk about example 4, but I have modified the Robin condition such that it involves $\alpha u $ and $\beta u_t $ as opposed to the original wording of the problem.
This leaves me with a few questions:
1. Is this the correct procedure to get the conclusion outlined in the example?
2. What steps do I need to take to get the conclusion outlined (an expression for $\psi $ only in terms of functions of $q,\psi', \phi' $)?
3. Is this procedure the correct one, when considering boundary conditions?
Applying a filter to a signal is convolution. For discrete signals, convolution equals polynomial or power (Laurent) series multiplication. The Z "transform" just formally puts filter coefficients and signal samples in power resp. Laurent series so that their multiplication can be interpreted back as the application of the filter to the signal.
There are certain important filters whose coefficient sequence is connected to a geometric sequence, so that the power/Laurent series with those coefficients has, again formally, a closed form as rational function.
Added: The real transform behind the formal Z transform is the Fourier series. Any stable signal or filter, i.e., with exponentially decaying coefficients on both sides, and some signals that are not that well-behaved, have a convergent Fourier series.
Now you might know that there is some funny business with the function space and pointwise convergence etc.; the important part however is that the converse direction, recovering the coefficients from the periodic function, is much less problematic. So you can think of a signal being encoded by a periodic (square integrable) function $X(ω)$ as
$$x_n=\frac1{2π}\int_{-π}^π X(ω)e^{-i\,nω}\,dω$$
In terms of information content, $(x_n)$ and $X(ω)$ are equivalent. Applying a filter $(f_k)$ with Fourier series $F(ω)$ then yields (depending on convention) the Fourier series
$$\sum_n (f*x)_ne^{iωn}=\sum_n \sum_k f_ke^{iωk}\,x_{n-k}e^{iω(n-k)}=F(ω)X(ω)$$
So to get the code of the filtered signal, you just multiply the codes of the filter and the signal pointwise.
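A short numerical illustration of these equivalences (the filter and signal values are arbitrary): time-domain convolution, polynomial multiplication of the coefficient sequences, and pointwise multiplication of the (zero-padded) DFTs all give the same result.

```python
import numpy as np

f = np.array([1.0, -0.5, 0.25])      # filter coefficients (made up)
x = np.array([2.0, 0.0, 1.0, 3.0])   # signal samples (made up)

direct = np.convolve(f, x)           # applying the filter = convolution
poly = np.polymul(f, x)              # = product of the associated polynomials
N = direct.size                      # pad so circular convolution = linear
spectral = np.fft.ifft(np.fft.fft(f, N) * np.fft.fft(x, N)).real

print(np.allclose(direct, poly), np.allclose(direct, spectral))  # True True
```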
By that design, any rational function $h(z)$ that acts as a Z transform is always the Z transform of a stable signal, the encoded coefficients are obtained by evaluating $X(ω)=h(e^{iω})$ over the unit circle,
$$x_n=\frac1{2π}\int_{-π}^π h(e^{iω})e^{-i\,nω}\,dω=\frac1{2πi}\int_{|z|=1} \frac{h(z)}{z^{n+1}}\,dz$$
so that the resulting coefficients are those of the Laurent series relative to a narrow annulus around the unit circle.
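A numerical sketch of this coefficient recovery, with a made-up stable example $h(z) = 1/(1 - z/2)$ whose series around the unit circle is $\sum_{n \ge 0} (1/2)^n z^n$, so the integral should return $x_n = (1/2)^n$:

```python
import numpy as np

trapz = lambda y, x: np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

w = np.linspace(-np.pi, np.pi, 20001)
h = 1.0 / (1.0 - 0.5 * np.exp(1j * w))   # h(e^{iw}) for h(z) = 1/(1 - z/2)

for n in range(4):
    xn = trapz(h * np.exp(-1j * n * w), w).real / (2 * np.pi)
    print(n, xn)  # ≈ 0.5**n
```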
The Laplace or s transform does something similar for continuous signals. Convolution in the time picture is the same as pointwise multiplication in the Fourier picture. At the outset, and formally, the s transform is just a Fourier transform hiding the complex unit. There is more to it: the domains of the Fourier and Laplace transforms each have parts that are not contained in the other.
The power of the Laplace transform stems from the fact that it can perfectly express solutions of linear (ordinary and partial) differential equations with constant coefficients. Among solutions of such ODE are the polynomials, exponential and trigonometric functions and products thereof.
The normalization is basically a preconditioning to decrease the condition number of the matrix $A$ (the larger the condition number, the nearer the matrix is to a singular matrix). The normalizing transform is also represented by a matrix in the case of homography estimation, and this happens to be usable as a good preconditioner matrix. The reason why is ...
My answer is for real scale $a$ and the fact that the wavelet transform is usually defined in $L_2$ with norm $$||\Psi(\tau)|| = \int_\mathbb{R} \Psi(\tau)\Psi^*(\tau)\mathrm{d}\tau $$ So $$||\Psi_{a,t}(\tau)|| = \int_\mathbb{R} \frac{1}{|a|}\Psi(\frac{\tau-t}{a})\Psi^*(\frac{\tau-t}{a})\mathrm{d}\tau$$ Set $\tau' = \frac{\tau-t}{a} \implies d\tau' = d\tau / ...
Whether you scale the output of your DFT, forward or inverse, has nothing to do with convention or what is mathematically convenient. It has everything to do with the input to the DFT. Allow me to show some examples where scaling is either required or not required for both the forward and inverse transform. Must scale a forward transform by 1/N. To start ...
I can think of several reasons involving computational precision issues, but that probably would not do justice because mathematically we're defining it the same way no matter what, and mathematics knows no precision issues. Here's my take on it. Let's conceptually think about what DFT means in signal processing sense, not just purely as a transform. In ...
Actually, 3 different ways to put the scale factors are common in various and different FFT/IFFT implementations: 1.0 forward and 1.0/N back, 1.0/N forward and 1.0 back, and 1.0/sqrt(N) both forward and back. These 3 scaling variations all allow an IFFT(FFT(x)) round trip, using generic unscaled sin() and cos() trig functions for the twiddle factors, to ...
It depends on the normalization that you perform on the data. Note that for computing the Pearson correlation coefficient you subtract the means of the signals. This is normally not the case if you simply compute a mean squared error between the signals, unless mean removal is part of your normalization procedure. I assume you compute the Pearson correlation ...
FIR coefficients are $h[n]$, the same as the impulse response. there are $N$ non-zero taps.$$ h[n] = 0 \qquad \text{for } n<0, n \ge N $$frequency response is$$\begin{align}H(e^{j \omega}) &= \sum\limits_{n=-\infty}^{+\infty} h[n] e^{-j \omega n} \\&= \sum\limits_{n=0}^{N-1} h[n] e^{-j \omega n} \\&= \...
Question: Which parameter is suitable to indicate how "good" the measurement fits to the Kalman filter? To estimate the quality of association you can use the likelihood function. The likelihood considers not only the residual but also the uncertainty, and is represented as a scalar value:$$\mathcal{L} = \frac{1}{\sqrt{2\pi S}}\exp [-\frac{1}{2}\mathbf y^\mathsf T\mathbf S^...
[EDIT] After a second read, the proposed normalization looks non standard. Suppose that $m\le x \le M$ ($m$ and $M$ denote the min and max). The scaling factor will be, depending on the situation:if $m\le M\le 0$: $-m$,if $m\le 0\le M$, and $|m|\ge M$: $-m$if $m\le 0\le M$, and $|m|\le M$: $M$if $0\le m\le M$: $M$It turns out to be (if I do not err) ...
from your posted waveform, i am assuming that this is a unipolar signal. that is$$ x[n] \ge 0 $$in audio, it would be the same, except that we would be working on $|x[n]|$ instead.so first you want a sliding maximum of your signal, where the window length is $L$.$$ x_1[n] = \max_{0 \le i < L} \Big| x[n-i] \Big| $$since your input signal $x[n]$ ...
Each pixel of the filter has a magnitude (intensity). The square function equivalently calculates the power density of the pixel. The normalization is to scale the filter power to 1. That's where the $I^2$ in the denominator comes from.
Assuming that $N$ is the length of your signal $s$, the normalized signal $s_n$ is given by:$$s_n = \dfrac{s}{\sqrt{\dfrac{\sum_{i=1}^{N}\left|s_i^2\right|}{N}}} $$The denominator is nothing else than Root Mean Square value of your signal. Thus the code is doing a simple RMS normalization.You can think of it as a method of normalizing the average of ...
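A two-line sketch of that RMS normalization (the sample values are made up):

```python
import numpy as np

s = np.array([1.0, -2.0, 3.0, -4.0])          # made-up signal
s_n = s / np.sqrt(np.mean(np.abs(s)**2))      # divide by the RMS value
print(np.sqrt(np.mean(s_n**2)))               # the normalized RMS is 1.0
```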
To complete bjou's answer:A potential problem with min-max normalization is that it is very sensitive to outliers. For example, if your dataset contain 100 two-dimensional examples; and that the dataset looks like this:1, 1478, -2523, 1252, -605...10000, -100 <- outlier4, 200min-max normalization will squash almost all values of the first ...
Both are reasonable approaches and it is foreseeable that either one could outperform the other empirically. The Euclidean distance assumes the data to be isotropically Gaussian, i.e. it will treat each feature equally. On the other hand, the Mahalanobis distance seeks to measure the correlation between variables and relaxes the assumption of the Euclidean ...
The kind of filter you are looking for is a notch filter. Using filter design toolbox in Matlab you can get it as I'm showing you in the following picture:This has to be done in z-plane so there must be two poles at +i and −i since they cannot be included in region of convergence. Is my assumption correct?Yes! This is correct.Assuming you want to ...
Let's assume continuous time (rather than discrete time).If you do not process the windowed data at all, you would like the output (the sum of the windowed frames) to be equal to the original signal. Allowing scaling of the output by a constant scaling term, this is only possible if the sum of all of the time-shifted window functions is constant over time....
Again there is no wrong or right here. In the Alan Oppenheim's Discrete-Time Signal Processing book, the notation is as follows:when there are only continuous-time signals we use $\omega$ for radians per second frequency.when there are only discrete-time signals we use $\omega$ for radians per sample frequencywhen both types of signals are present, (as ...
You could normalize the average amplitude, i.e.YData = YData / mean(abs(YData));Or you could normalize the signal power to one, i.e.YData = YData / sqrt(mean(abs(YData).^2));If just the peaks are bothering you, you could use dynamic range compression, but that would introduce nonlinear distortions. As Phonon hinted, please tell us why you are not ...
The Wikipedia article states:"What makes the direct linear transformation problem distinct..is the fact that the left [X] and right [AY] sides of the defining equation [X = AY] can differ by an unknown multiplicative factor which is dependent on k"In the above X, A, Y are matrices.So to avoid having to estimate the factor, you simply normalise all the ...
There is no definite answer to that question since different workflows are applied to different tasks. For example in some applications you might want to normalize your input data samples or perform some kind of Automatic Gain Control (i.e. you work in fixed point and even for some low signal values you want your FFT to have good resolution). On the contrary ...
There are different conventions in scaling of the FFT, in MATLAB you need to scale it by $\sqrt{N}$, where $N$ is your number of samples. Saying it in matlabish:clc, clear all%% Create some sinusoidal signal with noiset = linspace(0, 6*pi, 1000)+randn(1,1000);s1 = sin(2*pi*t);% Calculate the norm of s1s1_norm = norm(s1);display(sprintf('L2 norm of s1:...
Some of the channel effects can be indeed removed by doing the Cepstral Mean Subtraction/Normalization. Nevertheless that generally applies only to "convolutive" distortions that are constant. Any additive distortions, i.e. white noise, babble noise usually cannot be removed via CMS. But like you said this topic is handled via different methods to cope with ...
After a few quick calculations, it seems to me that the trouble comes from poor notations for the root in your reference. If you read, in the final normalized matrix, $\sqrt{8/64}$ and $\sqrt{2/4}$ instead of $\sqrt{8}/64$ and $\sqrt{2}/4$ (along with the $\pm$ signs), then the final result is correct.The matrices $V_i$ are orthogonal. To normalize them, ...
here's an efficient sliding maximum algorithm that has cost that is $O(\log_2(L))$. below window_width is $L$.comes fromBrookes: "Algorithms for Max and Min Filters with Improved Worst-Case Performance" IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 47, NO. 9, SEPTEMBER 2000#define A_REALLY_LARGE_NUMBER 3....
Wavelets play differents role in functional spaces, especially as unconditional bases (see What are unconditional bases and which wavelets have this property?). In $L_p$ spaces, if $|\psi|^p$ is integrable, the translation operator $t\to t-\tau$ preserves the norm, while the dilation $t\to t/a$ induces a scale factor of $|a|^{1/p}$, which can be corrected ...
First, shift: put the minimum to $0$, by compensating the actual minimum $m=−18.3667⋅10^5$ for every pixel: $p\to p - m$. Now your pixels are between $0$ and a new maximum $M = 9.3127 - m$. Finally you want the final image in $[0,255]$. The second operation is scale: multiply by something, so that $0$ remains at $0$, and $M$ is cast to $255$. So you have to ...
Search for radar plot to track association. There's a lot of algorithms on this subject. To your question:The residual itself will not give you information without its associated covariance matrixTry a chi-squared test on it. Putting a threshold on this scalar is called gating and it's a first step of plot to track association.
You need to understand that $f=f_s/4$ and "$4$ samples per cycle" are two ways of saying the same thing:$$\text{ # samples per cycle}=\frac{f_s\text{ samples per second }}{f \text{ cycles per second}}$$The number of cycles per second is equal to the ratio $f_s/f$, where $f_s$ is the sampling frequency, and $f$ is the signal's frequency.So you just need ...
okay, since this is about scaling, i should be anal about definitions.discrete-time signal: $x[n]$ where $n$ are only integer values.DTFT: $$ X\left( e^{j\omega} \right) \triangleq \sum\limits_{n=-\infty}^{+\infty} x[n] \ e^{-j \omega n} $$it is necessarily the case that $ X\left( e^{j(\omega + 2 \pi)} \right) = X\left( e^{j\omega} \right) $ for all ...
I think this is the equation you are using http://en.wikipedia.org/wiki/Normalization_%28image_processing%29 where b=newMin and a=newMax These values can also be found from your histogram. Usually you expand the new image to take up the full intensity range. The intensity range of your image is the X-axis of your histogram. If it goes from 0 to 1, then a=1,...
What are the assumptions behind the Lagrangian derivation of energy? I understand that we're searching for a function $L$ that describes a set of physics so that solving the energy minimization problem
$$\begin{array}{rcl} \arg\min\limits_{q} && \int_{t_1}^{t_2} L(q,\dot{q},t) dt\\ \textrm{st}&&q(t_1)=q_1\\ &&q(t_2)=q_2 \end{array}$$
determines the path of a particle $q:[t_1,t_2]\rightarrow\mathbb{R}^3$ from time $t_1$ to time $t_2$. Eventually, we find that $L(q,\dot{q},t)=\frac{1}{2}m \dot{q}^2$. What's not clear to me are the assumptions behind the setup for the energy minimization problem. It looks to me like there's an assumption about the lack of forces like gravity. Except, I know that this derivation can be used to eventually derive that $F=m\ddot{q}$, so we don't yet have a concept of force. Also, I know there's a corollary that shows that the eventual solution $q$ is such that $\dot{q}=c$ or $\ddot{q}=0$. I'm sure that makes sense in relation to the problem setup, which is what I'm trying to clarify. Finally, I do know that we need to assume
Time and space are homogeneous; space is isotropic; Galilean invariance holds.
Anyway, I'm looking for the other assumptions, and I'm pretty sure it has to do with the lack of other forces or some kind of related concept.
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range $|\eta| < 0.8$ ...
In this chapter, we will discuss how recursive techniques can be used to define sequences and to solve counting problems. An equation that defines the terms of a sequence recursively is called a
recurrence relation. We study the theory of linear recurrence relations and their solutions. Finally, we introduce generating functions for solving recurrence relations.
A recurrence relation is an equation that recursively defines a sequence where the next term is a function of the previous terms (Expressing $F_n$ as some combination of $F_i$ with $i < n$).
Example − Fibonacci series − $F_n = F_{n-1} + F_{n-2}$, Tower of Hanoi − $F_n = 2F_{n-1} + 1$
A linear recurrence equation of degree k or order k is a recurrence equation of the form $x_n = A_1 x_{n-1} + A_2 x_{n-2} + A_3 x_{n-3} + \dots + A_k x_{n-k}$ (each $A_i$ is a constant and $A_k \neq 0$), i.e. a first-degree (linear) combination of the preceding terms of the sequence.
These are some examples of linear recurrence equations −
Recurrence relation | Initial values | Solution
$F_n = F_{n-1} + F_{n-2}$ | $a_1 = a_2 = 1$ | Fibonacci numbers
$F_n = F_{n-1} + F_{n-2}$ | $a_1 = 1, a_2 = 3$ | Lucas numbers
$F_n = F_{n-2} + F_{n-3}$ | $a_1 = a_2 = a_3 = 1$ | Padovan sequence
$F_n = 2F_{n-1} + F_{n-2}$ | $a_1 = 0, a_2 = 1$ | Pell numbers
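The table entries are easy to check by direct iteration. The sketch below (the helper name `linear_recurrence` is ours, not from the text) evaluates an arbitrary linear recurrence from its coefficients and initial terms:

```python
def linear_recurrence(coeffs, initial, n_terms):
    """Iterate F_n = coeffs[0]*F_{n-1} + coeffs[1]*F_{n-2} + ...,
    starting from the given initial terms."""
    seq = list(initial)
    while len(seq) < n_terms:
        seq.append(sum(c * seq[-1 - i] for i, c in enumerate(coeffs)))
    return seq

fibonacci = linear_recurrence([1, 1], [1, 1], 10)      # F_n = F_{n-1} + F_{n-2}
lucas     = linear_recurrence([1, 1], [1, 3], 8)       # same recurrence, a_1 = 1, a_2 = 3
padovan   = linear_recurrence([0, 1, 1], [1, 1, 1], 9) # F_n = F_{n-2} + F_{n-3}
pell      = linear_recurrence([2, 1], [0, 1], 8)       # F_n = 2F_{n-1} + F_{n-2}
```

Note that the same recurrence with different initial values (Fibonacci vs. Lucas) produces a different sequence, which is why both columns of the table matter.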
Suppose a second-order linear recurrence relation is $F_n = AF_{n-1} + BF_{n-2}$, where A and B are real numbers.
The characteristic equation for the above recurrence relation is −
$$x^2 - Ax - B = 0$$
Three cases may occur while finding the roots −
Case 1 − If this equation factors as $(x - x_1)(x - x_2) = 0$ and it produces two distinct real roots $x_1$ and $x_2$, then $F_n = ax_1^n + bx_2^n$ is the solution. [Here, a and b are constants]

Case 2 − If this equation factors as $(x - x_1)^2 = 0$ and it produces a single real root $x_1$, then $F_n = ax_1^n + bnx_1^n$ is the solution.

Case 3 − If the equation produces two distinct complex roots $x_1$ and $x_2$, in polar form $x_1 = r \angle \theta$ and $x_2 = r \angle(-\theta)$, then $F_n = r^n (a\cos(n\theta) + b\sin(n\theta))$ is the solution.

Problem 1
Solve the recurrence relation $F_n = 5F_{n-1} - 6F_{n-2}$ where $F_0 = 1$ and $F_1 = 4$
Solution
The characteristic equation of the recurrence relation is −
$$x^2 - 5x + 6 = 0,$$
So, $(x - 3) (x - 2) = 0$
Hence, the roots are −
$x_1 = 3$ and $x_2 = 2$
The roots are real and distinct. So, this is in the form of case 1
Hence, the solution is −
$$F_n = ax_1^n + bx_2^n$$
Here, $F_n = a \cdot 3^n + b \cdot 2^n$ (as $x_1 = 3$ and $x_2 = 2$)
Therefore,
$1 = F_0 = a3^0 + b2^0 = a+b$
$4 = F_1 = a3^1 + b2^1 = 3a+2b$
Solving these two equations, we get $ a = 2$ and $b = -1$
Hence, the final solution is −
$$F_n = 2 \cdot 3^n + (-1) \cdot 2^n = 2 \cdot 3^n - 2^n$$
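As a sanity check (a quick sketch, not part of the original solution), the closed form can be compared against direct iteration of the recurrence:

```python
def F(n):
    seq = [1, 4]                      # F_0 = 1, F_1 = 4
    for _ in range(2, n + 1):
        seq.append(5 * seq[-1] - 6 * seq[-2])
    return seq[n]

def closed(n):
    return 2 * 3**n - 2**n

match = all(F(n) == closed(n) for n in range(15))   # True
```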
Problem 2
Solve the recurrence relation − $F_n = 10F_{n-1} - 25F_{n-2}$ where $F_0 = 3$ and $F_1 = 17$
Solution
The characteristic equation of the recurrence relation is −
$$x^2 - 10x + 25 = 0$$
So $(x - 5)^2 = 0$
Hence, there is a single real root, $x_1 = 5$
As there is a single real-valued root, this is in the form of case 2
Hence, the solution is −
$F_n = ax_1^n + bnx_1^n$
$3 = F_0 = a \cdot 5^0 + b \cdot 0 \cdot 5^0 = a$
$17 = F_1 = a \cdot 5^1 + b \cdot 1 \cdot 5^1 = 5a + 5b$
Solving these two equations, we get $a = 3$ and $b = 2/5$
Hence, the final solution is − $F_n = 3 \cdot 5^n + \frac{2}{5} \cdot n \cdot 5^n$
Problem 3
Solve the recurrence relation $F_n = 2F_{n-1} - 2F_{n-2}$ where $F_0 = 1$ and $F_1 = 3$
Solution
The characteristic equation of the recurrence relation is −
$$x^2 - 2x + 2 = 0$$
Hence, the roots are −
$x_1 = 1 + i$ and $x_2 = 1 - i$
In polar form,
$x_1 = r \angle \theta$ and $x_2 = r \angle(- \theta),$ where $r = \sqrt 2$ and $\theta = \frac{\pi}{4}$
The roots are imaginary. So, this is in the form of case 3.
Hence, the solution is −
$F_n = (\sqrt 2)^n (a\cos(n\pi/4) + b\sin(n\pi/4))$
$1 = F_0 = (\sqrt 2)^0 (a\cos 0 + b\sin 0) = a$
$3 = F_1 = (\sqrt 2)^1 (a\cos(\pi/4) + b\sin(\pi/4)) = \sqrt 2 \left( a/\sqrt 2 + b/\sqrt 2 \right) = a + b$
Solving these two equations we get $a = 1$ and $b = 2$
Hence, the final solution is −
$F_n = (\sqrt 2)^n (\cos(n\pi/4) + 2\sin(n\pi/4))$
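Despite the trigonometric form, this closed expression produces the same integers as the recurrence; a quick numerical check (our own sketch) confirms the complex-root case:

```python
import math

def F(n):
    seq = [1, 3]                       # F_0 = 1, F_1 = 3
    for _ in range(2, n + 1):
        seq.append(2 * seq[-1] - 2 * seq[-2])
    return seq[n]

def closed(n):
    return (math.sqrt(2) ** n) * (math.cos(n * math.pi / 4)
                                  + 2 * math.sin(n * math.pi / 4))

ok = all(abs(F(n) - closed(n)) < 1e-8 for n in range(16))   # True
```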
A recurrence relation is called non-homogeneous if it is in the form
$F_n = AF_{n-1} + BF_{n-2} + f(n)$ where $f(n) \ne 0$
Its associated homogeneous recurrence relation is $F_n = AF_{n–1} + BF_{n-2}$
The solution $(a_n)$ of a non-homogeneous recurrence relation has two parts.
First part is the solution $(a_h)$ of the associated homogeneous recurrence relation and the second part is the particular solution $(a_t)$.
$$a_n=a_h+a_t$$
Solution to the first part is done using the procedures discussed in the previous section.
To find the particular solution, we find an appropriate trial solution.
Let $f(n) = cx^n$ ; let $x^2 = Ax + B$ be the characteristic equation of the associated homogeneous recurrence relation and let $x_1$ and $x_2$ be its roots.
If $x \ne x_1$ and $x \ne x_2$, then $a_t = Ax^n$
If $x = x_1$, $x \ne x_2$, then $a_t = Anx^n$
If $x = x_1 = x_2$, then $a_t = An^2x^n$
Let a non-homogeneous recurrence relation be $F_n = AF_{n–1} + BF_{n-2} + f(n)$ with characteristic roots $x_1 = 2$ and $x_2 = 5$. Trial solutions for different possible values of $f(n)$ are as follows −
f(n) | Trial solution
$4$ | $A$
$5 \cdot 2^n$ | $An2^n$
$8 \cdot 5^n$ | $An5^n$
$4^n$ | $A4^n$
$2n^2 + 3n + 1$ | $An^2 + Bn + C$

Problem
Solve the recurrence relation $F_n = 3F_{n-1} + 10F_{n-2} + 7 \cdot 5^n$ where $F_0 = 4$ and $F_1 = 3$
Solution
This is a linear non-homogeneous relation, where the associated homogeneous equation is $F_n=3F_{n-1}+10F_{n-2}$ and $f(n) = 7 \cdot 5^n$
The characteristic equation of its associated homogeneous relation is −
$$x^2 - 3x -10 = 0$$
Or, $(x - 5)(x + 2) = 0$
Or, $x_1= 5$ and $x_2 = -2$
Hence $a_h = a.5^n + b.(-2)^n$ , where a and b are constants.
Since $f(n) = 7 \cdot 5^n$, i.e. of the form $c \cdot x^n$ with $x = 5$ equal to the root $x_1$, a reasonable trial solution $a_t$ will be $Anx^n$
$a_t = Anx^n = An5^n$
After putting the solution in the recurrence relation, we get −
$An5^n = 3A(n - 1)5^{n-1} + 10A(n - 2)5^{n-2} + 7 \cdot 5^n$
Dividing both sides by $5^{n-2}$, we get
$An \cdot 5^2 = 3A(n - 1) \cdot 5 + 10A(n - 2) \cdot 5^0 + 7 \cdot 5^2$
Or, $25An = 15An - 15A + 10An - 20A + 175$
Or, $35A = 175$
Or, $A = 5$
So, $a_t = An5^n = 5n5^n = n5^{n+1}$
The solution of the recurrence relation can be written as −
$F_n = a_h + a_t$
$= a \cdot 5^n + b \cdot (-2)^n + n5^{n+1}$
Putting values of $F_0 = 4$ and $F_1 = 3$, in the above equation, we get $a = -2$ and $b = 6$
Hence, the solution is −
$F_n = n5^{n+1} + 6 \cdot (-2)^n - 2 \cdot 5^n$
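The non-homogeneous solution can also be verified against direct iteration (a quick sketch, not part of the original solution):

```python
def F(n):
    seq = [4, 3]                       # F_0 = 4, F_1 = 3
    for k in range(2, n + 1):
        seq.append(3 * seq[-1] + 10 * seq[-2] + 7 * 5**k)
    return seq[n]

def closed(n):
    return n * 5**(n + 1) + 6 * (-2)**n - 2 * 5**n

match = all(F(n) == closed(n) for n in range(12))   # True
```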
A generating function represents a sequence by expressing each term as the coefficient of a power of the variable $x$ in a formal power series.
Mathematically, for an infinite sequence, say $a_0, a_1, a_2,\dots, a_k,\dots,$ the generating function will be −
$$G(x)=a_0+a_1x+a_2x^2+ \dots +a_kx^k+ \dots = \sum_{k=0}^{\infty}a_kx^k$$
Generating functions can be used for the following purposes −
For solving a variety of counting problems. For example, the number of ways to make change for a Rs. 100 note with the notes of denominations Rs.1, Rs.2, Rs.5, Rs.10, Rs.20 and Rs.50
For solving recurrence relations
For proving some of the combinatorial identities
For finding asymptotic formulae for terms of sequences
Problem 1
What are the generating functions for the sequences $\lbrace {a_k} \rbrace$ with $a_k = 2$ and $a_k = 3k$?
Solution
When $a_k = 2$, generating function, $G(x) = \sum_{k = 0}^{\infty }2x^{k} = 2 + 2x + 2x^{2} + 2x^{3} + \dots$
When $a_{k} = 3k, G(x) = \sum_{k = 0}^{\infty }3kx^{k} = 0 + 3x + 6x^{2} + 9x^{3} + \dots\dots$
Problem 2
What is the generating function of the infinite series; $1, 1, 1, 1, \dots$?
Solution
Here, $a_k = 1$, for $0 \le k \le \infty$
Hence, $G(x) = 1 + x + x^{2} + x^{3}+ \dots \dots= \frac{1}{(1 - x)}$
For $a_k = a^{k}, G(x) = \sum_{k = 0}^{\infty }a^{k}x^{k} = 1 + ax + a^{2}x^{2} +\dots \dots \dots = 1/ (1 - ax)$
For $a_{k} = (k + 1), G(x) = \sum_{k = 0}^{\infty }(k + 1)x^{k} = 1 + 2x + 3x^{2} \dots \dots \dots =\frac{1}{(1 - x)^{2}}$
For $a_{k} = c_{k}^{n}, G(x) = \sum_{k = 0}^{n} c_{k}^{n}x^{k} = 1+c_{1}^{n}x + c_{2}^{n}x^{2} + \dots \dots \dots + x^{n} = (1 + x)^{n}$
For $a_{k} = \frac{1}{k!}, G(x) = \sum_{k = 0}^{\infty }\frac{x^{k}}{k!} = 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!}\dots \dots \dots = e^{x}$ |
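These closed forms can be cross-checked by multiplying truncated coefficient lists; the helper below (`series_mul` is our own name, not from the text) confirms, for instance, that squaring the geometric series $\frac{1}{1-x}$ yields the coefficients $k+1$ of $\frac{1}{(1-x)^2}$:

```python
def series_mul(p, q):
    """Coefficients of the product of two truncated power series."""
    n = min(len(p), len(q))
    return [sum(p[i] * q[k - i] for i in range(k + 1)) for k in range(n)]

geom = [1] * 7                      # 1/(1-x)   -> 1, 1, 1, ...
geom_a = [3**k for k in range(7)]   # 1/(1-3x)  -> 1, 3, 9, ... (a = 3)
sq = series_mul(geom, geom)         # 1/(1-x)^2 -> 1, 2, 3, ...
```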
Category: Ring theory

Problem 624
Let $R$ and $R'$ be commutative rings and let $f:R\to R'$ be a ring homomorphism.
Let $I$ and $I'$ be ideals of $R$ and $R'$, respectively. (a) Prove that $f(\sqrt{I}\,) \subset \sqrt{f(I)}$. (b) Prove that $\sqrt{f^{-1}(I')}=f^{-1}(\sqrt{I'})$.
(c) Suppose that $f$ is surjective and $\ker(f)\subset I$. Then prove that $f(\sqrt{I}\,) =\sqrt{f(I)}$.

Problem 618
Let $R$ be a commutative ring with $1$ such that every element $x$ in $R$ is idempotent, that is, $x^2=x$. (Such a ring is called a Boolean ring.) (a) Prove that $x^n=x$ for any positive integer $n$.
(b) Prove that $R$ does not have a nonzero nilpotent element.

Problem 543
Let $R$ be a ring with $1$.
Suppose that $a, b$ are elements in $R$ such that \[ab=1 \text{ and } ba\neq 1.\] (a) Prove that $1-ba$ is idempotent. (b) Prove that $b^n(1-ba)$ is nilpotent for each positive integer $n$.
(c) Prove that the ring $R$ has infinitely many nilpotent elements. |
MathJax Tutorial
MathJax is a JavaScript library that allows you to use basic $\LaTeX$ syntax. The formulas can be used in all text fields. To do this, the expression must be marked with dollar signs ($\$$Expression$\$$).
Fractions
A fraction is generated by the expression \frac{numerator}{denominator}. Instead of numbers, terms can also be used here.
$\$$\frac{1}{2}$\$$ becomes $\frac{1}{2}$. $\$$\frac{1 + 1}{2} = \frac{2}{2} = 1$\$$ becomes $\frac{1 + 1}{2} = \frac{2}{2} = 1$.

Greek Letters
All letters of the Greek alphabet can be used.
$\$$\alpha$\$$ becomes $\alpha$. $\$$\pi$\$$ becomes $ \pi$.
You can also use the capital letter (for example, \Pi becomes $\Pi$).
Terms and Operators
The usual arithmetic operators $+$, $-$, $\cdot$, $:$ can be typed as usual. The multiplication dot is the exception: it is produced by the command \cdot.
Roots and Exponents
A root expression can be generated by \sqrt{number}.
$\$$\sqrt{2}$\$$ becomes $\sqrt{2}$
To specify an exponent you can use {base}^{exponent}. Base and exponent can also be fractions or any other terms.
$\$${2}^{5} = 32$\$$ becomes ${2}^{5} = 32$. |
I am learning Fluid mechanics by reading Acheson's book entitled "Elementary Fluid Dynamics". Below is from problem 3.1.
Consider the Euler equation for an ideal fluid in the irrotational case. We are studying two-dimensional water waves, so the velocity vector is of the form: ${\bf{u}}=[u(x,y,t),\,v(x,y,t),\,0]$. Because of ''irrotationality'', there is a velocity potential $\phi$ such that ${\bf{u}}=\nabla \phi$. By the incompressibility condition, $\phi$ satisfies the Laplace equation. Let $y=\eta(x,t)$ be the equation of the free surface. Note that the bottom of the water is at $y=-h$, where $h$ is the depth. Then \begin{equation} \frac{\partial\eta}{\partial t}+u\frac{\partial\eta}{\partial x}=v,\;\;{\mbox{on}}\;\;y=\eta(x,t).\end{equation} Then the Euler equation on the free surface (once integrated) gives (note that both the pressure at the free surface and the density are considered constant and can be absorbed in the constant of integration): $$\frac{\partial\phi}{\partial t}+\frac{1}{2}(u^2+v^2)+g\eta=0,\;\;{\mbox{on}}\;\;y=\eta(x,t).$$ We now consider ${\bf{u}}$ and $\eta$ small (in a sense to be determined later) and linearize the equations above: $$\frac{\partial\eta}{\partial t}=v,\;\;{\mbox{on}}\;\;y=0.$$ $$\frac{\partial\phi}{\partial t}+g\eta=0,\;\;{\mbox{on}}\;\;y=0.$$ We look for a sinusoidal traveling-wave solution of the form $\eta=A\cos{\left(kx-\omega t\right)}$. With the condition that $v=\phi_y=0$ at $y=-h$ (the bottom), with the Laplace equation and the two linear equations above, we find that $$ \phi=\frac{A\omega}{k\sinh(kh)}\cosh{(k(y+h))}\sin{(kx-\omega t)}$$ and the dispersion relation $\omega^2={g}{k}\tanh(kh)$.
Now, to determine the sense in which ${\bf{u}}$ and $\eta$ are small, we need to compare $u^2+v^2$ to $g\eta$. The term $u^2+v^2$ is of order $A^2\omega^2=A^2{g}{k}\tanh(kh)$ and the term $g\eta$ is of order $gA$. Then one gets the condition $A\ll\lambda$ by asking that $u^2+v^2\ll g\eta$. One gets the same condition by comparing the other nonlinear terms to the linear ones in the two nonlinear equations above.
My question is this: I should also be able to deduce that the displacement of the free surface is small with respect to the depth, i.e. $A\ll h$. How is this condition obtained?
Suppose there is an electromagnetic wave moving forward in the $\mathbf{\hat{k}}$ direction. Its electric/magnetic field components are given by: $$\mathbf{E} = E_0 \sin(kz - \omega t) \mathbf{\hat{i}}$$ $$\mathbf{B} = B_0 \sin(kz - \omega t) \mathbf{\hat{j}}$$ If a particle of charge $q$ was lying on the wave's trajectory, the Lorentz force law says that the force is given $\mathbf{F} = q(\mathbf{E} + \mathbf{v} \times \mathbf{B})$. However, is an electromagnetic wave a combination of both E and B fields, requiring both fields to be plugged into the equation, or does the force on the electron only depend on one of the fields, and is an EM wave only either an electric field or a magnetic field at one instant? Edit: changed x to z in expression for EM wave.
Note. As indicated by user23660, the EM wave must be transverse which means the $x$'s in your phases should instead be $z$'s.
At a given time $t$ and spatial point $\mathbf x = (x,y,z)$, the electromagnetic wave you consider is a combination of both of the fields;\begin{align} \mathbf E(t,\mathbf x) &= E_0\sin(kz-\omega t) \hat{\mathbf x} \\ \mathbf B(t,\mathbf x) &= B_0\sin(kz-\omega t) \hat{\mathbf y}\end{align}There are some special points at which both of the fields vanish though. In particular, any time the argument of the $\sin$ is an integer multiple of $\pi$;\begin{align} kz-\omega t = n\pi, \qquad n\in\mathbb Z\end{align}As a result, a particle sitting in the wave will experience both of the fields at once, and both of these fields will have to be plugged into the Lorentz force equation. Explicitly, Newton's Second Law along with the Lorentz force equation with both fields plugged in gives us the following equation of motion:\begin{align} \ddot{\mathbf x} =\frac{q}{m}(E_0\sin(kz-\omega t) \hat{\mathbf x} +B_0\sin(kz-\omega t)\dot{\mathbf x}\times\hat{\mathbf y}).\end{align}In components, this can be written as the following system of coupled differential equations:\begin{align} \ddot x &= \omega_0\sin(kz-\omega t)(c - \dot z) \\ \ddot y &= 0 \\ \ddot z &= \omega_0\sin(kz-\omega t) \dot x\end{align}where I've used the relationship $E_0 = cB_0$ and I have defined\begin{align} \omega_0 = \frac{qB_0}{m}.\end{align}As far as I can tell, this is a pretty nasty system, and I'm not sure if the general solution can be written in closed form (although admittedly I haven't really tried very hard to figure that out.) It's actually not
so bad since $y$ is completely decoupled from $x$ and $z$, and its differential equation simply implies constant velocity in $y$. This leaves a pair of coupled equations for $x$ and $z$.
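Although a closed-form solution is elusive, the system integrates easily with a standard fourth-order Runge-Kutta step. The sketch below uses dimensionless units with $c = \omega = k = 1$ (the value of $\omega_0$ is an arbitrary illustrative choice, not from the answer) and shows that $y$ indeed keeps its initial velocity:

```python
import math

w0 = 0.1   # dimensionless omega_0 = q B_0 / m (arbitrary illustrative value)

def deriv(t, s):
    x, y, z, vx, vy, vz = s
    ph = math.sin(z - t)                 # sin(kz - wt) with k = w = 1
    return [vx, vy, vz, w0 * ph * (1.0 - vz), 0.0, w0 * ph * vx]

def rk4_step(t, s, h):
    k1 = deriv(t, s)
    k2 = deriv(t + h / 2, [si + h / 2 * ki for si, ki in zip(s, k1)])
    k3 = deriv(t + h / 2, [si + h / 2 * ki for si, ki in zip(s, k2)])
    k4 = deriv(t + h, [si + h * ki for si, ki in zip(s, k3)])
    return [si + h / 6 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

state, t, h = [0.0] * 6, 0.0, 0.01       # particle starts at rest at the origin
for _ in range(5000):
    state = rk4_step(t, state, h)
    t += h
```

Since $\ddot y = 0$ exactly, a particle starting with $\dot y = 0$ never leaves the $xz$-plane, in agreement with the decoupling noted above.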
In linear polarization, the electron is accelerated by the electric field in x-direction, and therefore moves in the y-magnetic field of the wave. In the far field both fields are
always present and reverse synchronously, so the electron performs an oscillation in the $z$-direction with doubled frequency. This oscillation vanishes with circularly polarized light. If you are interested in details, look here: https://www.researchgate.net/publication/259232654_Inherent_Energy_Loss_of_the_Thomson_Scattering The formula for $\omega_0$ in the preceding answer makes no sense, because the magnetic component of the wave is not constant.
GSoC 2017 - Scipy: Large-scale Constrained Optimization
This year I was chosen as the student for Google Summer of Code. I’ll be working on one of the core Python scientific libraries called Scipy. My task is to implement a constrained optimization algorithm able to deal with large (and possibly sparse) problems.
The nonlinear optimization problem consists of finding the value of a vector $x\in \mathbb{R}^n$ that minimizes a function $f(x)$ inside a region $\Omega$. It is very common to specify $\Omega$ using equality and inequality constraints, as expressed in the following mathematical expression:
\begin{eqnarray} \min_x && f(x), \\
\text{subject to } && c_E(x) = 0,\\ && c_I(x) \le 0, \end{eqnarray}
where $x\in \mathbb{R}^n$ is a vector of unknowns, $f$ is called the objective function and $c_E$ and $c_I$ are vectorial functions used to delimit the feasible region $\Omega$.
Great many applications can be formulated as the above optimization problem: $x$ could be the control action applied to a robot arm in order to follow a given trajectory, being the function $f(x)$ minimized in order to get the optimal control action while avoiding colliding with obstacles (represented by the constraints); alternatively, the problem could represent the designing of a portfolio of investments to maximize expected return while maintaining an acceptable level of risk; or, the estimation of parameters of a model, minimizing the error between the model prediction and the observed values, while imposing a series of constraints to the model.
It suffices to say that optimization is very important to several applications in engineering, science and finance, and I believe that a quality open source solver, such as the one I intend to implement, could be of great use to people from diverse areas.
My accepted GSoC proposal can be found in the following link and I will, in the coming months, upload content related to applications and the implementation of the algorithm.
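To make the problem format above concrete, here is a toy sketch, not the trust-region SQP-type method planned for SciPy: it handles a single equality constraint with a quadratic penalty and plain gradient descent (all names and numbers are illustrative assumptions).

```python
# Toy problem: minimize f(x) = x0^2 + x1^2 subject to c_E(x) = x0 + x1 - 1 = 0.
# The quadratic penalty replaces the constrained problem with the
# unconstrained one: minimize f(x) + mu * c_E(x)^2.
def solve_penalty(mu=100.0, step=2e-3, iters=20000):
    x0 = x1 = 0.0
    for _ in range(iters):
        c = x0 + x1 - 1.0              # equality-constraint residual
        g0 = 2.0 * x0 + 2.0 * mu * c   # gradient of f + mu * c^2 in x0
        g1 = 2.0 * x1 + 2.0 * mu * c   # gradient in x1
        x0, x1 = x0 - step * g0, x1 - step * g1
    return x0, x1

x0, x1 = solve_penalty()   # exact answer is (0.5, 0.5); penalty gives ~0.4975
```

A fixed penalty parameter only reaches the solution approximately (here $x_0 = x_1 = \mu/(1+2\mu)$); production solvers instead use Lagrange-multiplier-based methods precisely to avoid this bias and the ill-conditioning of large $\mu$.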
Trying to practice translating english sentences into predicate logic and vice versa.
$E(x,y)$: $x$ can eat $y$ $L(x,y)$: $x$ loves eating $y$ $D$ is the domain of all dogs $S$ is the domain of all snakes
(a) English to Predicate Logic: A dog can eat any snake, only if it is different from some other dog who can also eat any snake: $\forall a \in S,\ \forall b \in D,\ \exists c \in D,\ E(b,a)\implies b \ne c \wedge E(c,a)$
(b)Predicate Logic to English: $\forall a \in S,\forall b \in D, \forall c \in S ,\sim \ \bigg[\ a\ \ne c \ \wedge E(a,c)\bigg] \iff L(a,b)$:
Not all snakes is the same as some other snake, or that not all snakes can eat some other snake, if and only if, all snakes loves eating all dogs. This part sounds a bit weird to me and can possibly be condensed.
Any thoughts on both (a) and (b)? |
Let us place an infinitely long solenoid of \(n\) turns per unit length so that its axis coincides with the \(z\)-axis of coordinates, and the current \(I\) flows in the sense of increasing \(\phi\). In that case, we already know that the field inside the solenoid is uniform and is \(\mu\, n\, I\, \hat{\textbf{z}}\) inside the solenoid and zero outside. Since the field has only a \(z\) component, the vector potential \(\textbf{A}\) can have only a \(\phi\)- component.
We'll suppose that the radius of the solenoid is \(a\). Now consider a circle of radius \(r\) (less than \(a\)) perpendicular to the axis of the solenoid (and hence to the field \(\textbf{B}\)). The magnetic flux through this circle (i.e. the surface integral of \(\textbf{B}\) across the circle) is \(\pi r^2B = \pi \mu n I r^2\). Now, by Stokes's theorem, the surface integral of the curl of a vector field over a surface is equal to the line integral of the field around the bounding
curve; since \(\textbf{B} = \textbf{curl}\,\textbf{A}\), this flux is equal to the line integral of \(\textbf{A}\) around the circle, which is \(2\pi r A_\phi\). Thus, inside the solenoid the vector potential is
\[\textbf{A}=\frac{1}{2}\mu n r I \hat{\boldsymbol{\phi}}.\label{9.4.1}\]
It is left to the reader to argue that, outside the solenoid \((r > a)\), the magnetic vector potential is
\[\textbf{A}=\frac{\mu na^2 I}{2r}\hat{\boldsymbol{\phi}}.\] |
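Both expressions can be verified symbolically from \(\textbf{B} = \textbf{curl}\,\textbf{A}\). In Cartesian components, \(A_\phi \hat{\boldsymbol{\phi}} = \frac{A_\phi}{r}(-y, x, 0)\), and only the \(z\)-component of the curl is nonzero. A quick check with sympy (the variable names are ours):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
mu, n, I, a = sp.symbols('mu n I a', positive=True)

# Inside (r < a): A_phi = mu*n*I*r/2  ->  A = (mu*n*I/2) * (-y, x, 0)
Ax_in, Ay_in = -mu * n * I * y / 2, mu * n * I * x / 2
Bz_in = sp.simplify(sp.diff(Ay_in, x) - sp.diff(Ax_in, y))   # z-component of curl A

# Outside (r > a): A_phi = mu*n*a^2*I/(2r)  ->  A = (mu*n*a^2*I/2) * (-y, x, 0) / r^2
r2 = x**2 + y**2
Ax_out = -mu * n * a**2 * I * y / (2 * r2)
Ay_out = mu * n * a**2 * I * x / (2 * r2)
Bz_out = sp.simplify(sp.diff(Ay_out, x) - sp.diff(Ax_out, y))
```

The inside curl returns the uniform field \(\mu n I\), and the outside curl vanishes identically, as the text asserts.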
I found once an instruction how to typeset mathematics on this site, now I came back to ask a question and could not remember it.
Thus I pressed the ? button to find help on this subject and then, after choosing 'advanced help', only found rudimentary information that LaTeX and MathJax are used, with a link to www.mathjax.org. But there I could not find what I needed either.
More specifically, at the moment, clicking the help button and then "advanced help" one is taken to the editing-help site which on the subject of typesetting mathematics only has this to say.
LaTeX
Mathematics Stack Exchange uses MathJax to render LaTeX. You can use single dollar signs to delimit inline equations, and double dollars for blocks:
The *Gamma function* satisfying $\Gamma(n) = (n-1)!\quad\forall n\in\mathbb N$ is via the Euler integral $$ \Gamma(z) = \int_0^\infty t^{z-1}e^{-t}dt\,. $$
It would be good if non-mathematicians could
find a link to instructions on typesetting mathematics from that page, the editing help page.
If you input the trig identity: $$\cot (x)+\tan(x)=\csc(x)\sec(x)$$ Into WolframAlpha, it gives the following proof:
Expand into basic trigonometric parts: $$\frac{\cos(x)}{\sin(x)} + \frac{\sin(x)}{\cos(x)} \stackrel{?}{=} \frac{1}{\sin(x)\cos(x)}$$ Put over a common denominator:
$$\frac{\cos^2(x)+\sin^2(x)}{\cos(x)\sin(x)} \stackrel{?}{=} \frac{1}{\sin(x)\cos(x)}$$
Use the Pythagorean identity $\cos^2(x)+\sin^2(x)=1$:
$$\frac{1}{\sin(x)\cos(x)} \stackrel{?}{=} \frac{1}{\sin(x)\cos(x)}$$
And finally simplify into
$$1\stackrel{?}{=} 1$$
The left and right side are identical, so the identity has been verified.
However, I take some issue with this. All this is doing is manipulating a statement that we don't know the veracity of into a true statement. And I've learned that any false statement can prove any true statement, so if this identity was wrong you could also reduce it to a true statement.
Obviously, this argument can be easily adapted into a direct proof by simply manipulating one side into the other, but:
Is this proof correct on its own? And can the steps WolframAlpha takes be justified, or is it completely wrong? |
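As an aside, a numeric spot-check (a sketch of ours, not from the question) illustrates the asymmetry the question is getting at: agreement at sample points can falsify a claimed identity but can never prove one.

```python
import math

def lhs(x):
    return math.cos(x) / math.sin(x) + math.sin(x) / math.cos(x)   # cot + tan

def rhs(x):
    return 1.0 / (math.sin(x) * math.cos(x))                       # csc * sec

# Sample away from the singularities at multiples of pi/2.
max_err = max(abs(lhs(x) - rhs(x)) for x in (0.3, 0.7, 1.2, 2.0, 2.9))
```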
Ray Optics and Optical Instruments − Optical Instruments

- Visual angle is the angle subtended by an object at the eye.
- Myopia means short-sightedness; distant objects are not clearly visible.
- Hypermetropia means far-sightedness; near objects are not clearly visible.
- A convex lens is called a simple microscope.
- Magnification of a simple microscope when the final image is formed at the least distance of distinct vision $D$: $m = 1 + \frac{D}{f}$
- Magnification when the final image is at infinity: $m = \frac{D}{f}$
- Magnification of a compound microscope: $m = \frac{v_o}{u_o}\left(\frac{D}{u_e}\right)$
- Magnification of a compound microscope when the final image is at $D$: $m = -\frac{v_o}{u_o}\left(1 + \frac{D}{f_e}\right)$; length of the tube $L_D = v_o + u_e$
- Magnification of a compound microscope when the final image is at infinity: $m = \frac{v_o}{u_o}\left(\frac{D}{f_e}\right)$; length $L_\infty = v_o + f_e$
- Magnification of an astronomical telescope: $M = -\frac{f_o}{u_e}$; at $D$: $M_D = -\frac{f_o}{f_e}\left(1 + \frac{f_e}{D}\right)$, length $L_D = f_o + u_e$; at infinity: $M_\infty = -\frac{f_o}{f_e}$, length $L_\infty = f_o + f_e$
- Terrestrial telescope: $m = \frac{f_o}{u_e}$; at $D$: $M_D = \frac{f_o}{f_e}\left(1 + \frac{f_e}{D}\right)$, length $L_D = f_o + 4f + u_e$; at infinity: $M_\infty = \frac{f_o}{f_e}$, length $L_\infty = f_o + 4f + f_e$
- The telescope in which the objective is a curved mirror is called a reflecting telescope.
1. A simple microscope is an optical instrument used to see very small objects. Its magnifying power is given by
$$m = \frac{\text{visual angle with instrument } (\beta)}{\text{visual angle when object is placed at least distance of distinct vision } (\alpha)}$$
2. Magnification when the final image is formed at $D$ and at $\infty$ (i.e., $m_D$ and $m_\infty$): $m_{D} = \left[1 + \frac{D}{f}\right]_{max}$ and $m_{\infty} = \left[\frac{D}{f}\right]_{min}$
3. If the lens is kept at a distance $a$ from the eye, then $m_{D} = 1 + \frac{D - a}{f}$ and $m_{\infty} = \frac{D - a}{f}$
4. Final image formed at $D$: magnification $m_{D} = -\frac{v_{0}}{u_{0}}\left[1 + \frac{D}{f_{e}}\right]$, and the length of the microscope tube (distance between the two lenses) is $L_{D} = v_{0} + u_{e}$
5. Telescope (refracting type): magnification $m_{D} = -\frac{f_{0}}{f_{e}}\left[1 + \frac{f_{e}}{D}\right]$ and $m_{\infty} = -\frac{f_{0}}{f_{e}}$
I'm currently studying for an exam in image processing and stumbled upon an exercise which I could not answer, and my professor will not be available again before the exam. The exercise goes as follows:
Create a Hann-lowpass kernel of size $3\times 3$ and calculate the result of the convolution at $(2,2)$ in the image.
The given solution for the filter kernel is shown below, but I do not understand how he got there:
$$ A = \frac{1}{4} \begin{pmatrix} \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\ \frac{1}{2} & 1 & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\ \end{pmatrix} $$
The second part of the question is a simple convolution with the image, which I'm able to do. The factor in front of the kernel is given by the constraint $\sum A = 1$ to achieve a filter gain of one. I've done literature research, but none of the books in the library explain how to calculate such a kernel. The professor gives a "hint" in his presentation pointing to the generalized cosine window with $A = 0.5$, $B = 0.5$ and $C = 0$:
$$ w_k = A - B\cdot \cos\left(2\pi\frac{k}{K-1}\right) + C\cdot \cos\left(4\pi\frac{k}{K-1}\right) $$
Yet I cannot derive the final kernel from the given formulas. Can anyone give me a step by step solution for the problem? |
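One plausible construction (an assumption on my part, since a length-3 Hann window by the formula above would be $[0, 1, 0]$ and therefore useless as a kernel): evaluate the window with $K = 5$, drop the zero endpoints, form the outer product of the resulting 1-D window with itself, and normalize for unit gain.

```python
import numpy as np

K = 5
k = np.arange(K)
w = 0.5 - 0.5 * np.cos(2 * np.pi * k / (K - 1))   # [0, 0.5, 1, 0.5, 0]
w = w[1:-1]                                       # keep the nonzero part: [0.5, 1, 0.5]
A = np.outer(w, w)                                # separable 2-D window
A = A / A.sum()                                   # normalize so the filter gain is 1
```

The result matches the given kernel: the inner matrix is $[0.5, 1, 0.5]$ crossed with itself, whose entries sum to 4, reproducing the $\frac{1}{4}$ prefactor.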
As indicated by Igor Rivin, the volume of the unitary group is given by $vol(U(N))=(2\pi)^{(N^2+N)/2}/\prod_{k=1}^{N-1} k!$.
The denominator is the Barnes G-function, which is well-known:
http://en.wikipedia.org/wiki/Barnes_G-function
and in particular has a known Stirling-like asymptotic expansion for large $N$:
$$\log\left(\prod_{k=1}^{N-1} k!\right) \sim \frac{N^2}{2} \log N - \frac{1}{12} \log N - \frac{3}{4}N^2+\frac{N}{2} \log(2 \pi)+ \zeta'(-1) + \sum_{g \geq 2} \frac{B_{2g}}{2g(2g-2)} N^{2-2g}.$$
Comparing with the Harer-Zagier formula $\chi(M_g)=\frac{B_{2g}}{2g(2g-2)}$, we obtain
$$\log \mathrm{vol}(U(N)) \sim - \frac{N^2}{2} \log N + \frac{N^2}{2}\left(\log(2 \pi)+\frac{3}{2}\right) + \frac{1}{12}\log N - \zeta'(-1)-\sum_{g \geq 2} \chi(M_g) N^{2-2g}$$
which, up to probable typos and forgotten terms, is the asymptotic expansion of the question.
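The match is easy to check numerically: truncating the sum over $g$, the remaining error should be of order $N^{-2}$. A quick sketch using the exact formula for $\mathrm{vol}(U(N))$ (the numeric value of $\zeta'(-1)$ is hard-coded):

```python
import math

ZETA_PRIME_M1 = -0.16542114370045092   # zeta'(-1) = 1/12 - log(Glaisher's A)

def log_vol_exact(N):
    # log vol(U(N)) = ((N^2 + N)/2) log(2 pi) - sum_{k=1}^{N-1} log(k!)
    barnes = sum(math.lgamma(k + 1) for k in range(1, N))
    return (N * N + N) / 2 * math.log(2 * math.pi) - barnes

def log_vol_asym(N):
    # Truncation of the expansion above, dropping the sum over g >= 2.
    return (-N * N / 2 * math.log(N)
            + N * N / 2 * (math.log(2 * math.pi) + 1.5)
            + math.log(N) / 12 - ZETA_PRIME_M1)

diff = log_vol_exact(30) - log_vol_asym(30)   # O(1/N^2): tiny compared to either side
```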
Of course, in such a proof, the fact that unitary groups and moduli spaces of Riemann surfaces are related appears as a coincidence: essentially we have just taken two places in mathematics where Bernoulli numbers appear. We can ask if there is a more direct intrinsic explanation of this relation. I don't think that such explanation is known at the level of rigorous mathematics but one is known at the level of theoretical physics. On general grounds, it is expected that gauge theories of group $U(N)$ are related in the large $N$ limit to a form of string theory. The first argument in this direction was given by 't Hooft in the 70's and is the observation that Feynman diagrams in perturbative $U(N)$ gauge theory can be rewritten as double-line graphs, or ribbon graphs, that it is possible to obtain closed surfaces from ribbon graphs by gluing disks along their boundary components, and that in some appropriate limit the series of Feynman diagrams organizes as a genus expansion of these surfaces.
In fact, it is possible to prove the Harer-Zagier relation along these lines by describing the moduli space of Riemann surfaces in terms of ribbon graphs, interpreting these ribbon graphs as the perturbative expansion of some $N$ by $N$ matrix model, solving this matrix model, which gives something containing the $\Gamma$ function and then expanding the solution. In this proof, which can be found in an appendix to Kontsevich's paper on Witten's conjecture, http://www.ihes.fr/~maxim/TEXTS/intersection_theory_6.pdf , the Bernoulli numbers appearing in the Harer-Zagier formula really comes from the Stirling expansion of the $\Gamma$-function.
Making 't Hooft's idea concrete is one of the main themes of modern string theory and can go under various more or less general and more or less precise names: gauge/gravity duality, AdS/CFT correspondence, open/closed duality, holographic relation... One explicit example of that is the Gopakumar-Vafa correspondence asserting the equivalence of Chern-Simons theory of group $U(N)$ and level $k$ on the 3-sphere with the A-model of the topological string, i.e. Gromov-Witten theory, on the resolved conifold, i.e. the total space of $\mathcal{O}(-1)\oplus \mathcal{O}(-1)$ over $\mathbb{P}^1$, with $\mathbb{P}^1$ being of volume $t=\frac{2 \pi N}{k+N}$ and with a string coupling constant $g_s = \frac{2 \pi}{k+N}$. As the volume of $U(N)$ appears explicitly in the one-loop perturbative expansion of Chern-Simons theory on the 3-sphere, it is possible to "explain" the asymptotic expansion of these volumes in terms of moduli spaces of Riemann surfaces. Of course, all that is not a proof, and the direct matching of the two sides of the equalities is often used as support of physicists' conjectures, but I wanted to mention it because it is a natural circle of ideas in which the formula of the question appears naturally.
Suppose a wire of length $L$ carrying a current $I$ is kept in a uniform magnetic field $B$ perpendicular to the current. The force on the wire will be $IBL$, and the work done by the magnetic force when the wire moves a distance $d$ along the force will be $IBLd$. But the magnetic force cannot do any work on a moving charged particle, and hence the total work done on all particles by the magnetic force should be zero. Where does the work $IBLd$ come from?
The work comes from the battery that is driving the current through the wire.
Even if the wire were stationary, the battery would be supplying work at a rate $I^{2}R$. But with the wire moving, the battery would need to be supplying extra work at a rate $\mathscr{E}I$ in order to overcome the emf generated by the moving wire.
Now, $\mathscr{E}$ is equal to the rate at which the wire cuts magnetic flux so $\mathscr{E}=BLv$ (in which $v=\frac{d}{t}$), so the extra rate of doing work has to be $\mathscr{E} I=BLvI=BLdI / t $. And this is equal to the rate of mechanical work done on the wire!
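The balance is easy to see with numbers (illustrative values, not from the post): the extra electrical power $\mathscr{E}I$ supplied by the battery equals the mechanical power $Fv$ delivered to the wire.

```python
# Illustrative values (assumed): field, wire length, current, wire speed.
B, L, I, v = 0.5, 0.2, 3.0, 2.0   # tesla, metre, ampere, metre/second

F = B * I * L            # force on the wire: 0.3 N
P_mech = F * v           # mechanical power delivered to the wire: 0.6 W
emf = B * L * v          # motional emf of the moving wire: 0.2 V
P_extra = emf * I        # extra power the battery must supply: 0.6 W
```

Algebraically the equality is no accident: $(BIL)v = (BLv)I$ identically, which is the energy-bookkeeping point of the answer.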
But magnetic force cannot do any work on a moving charged particle and hence total work done on all particles by magnetic force should be zero. Where does the work IBLd come from?
Sum of works of magnetic forces on each charged point particle in the wire (assuming it is made of point particles) is indeed zero (this follows from the fact that magnetic force on point particle is always perpendicular to particle's velocity).
However, the macroscopic work $IBLd$ is not
that sum; instead, it is work of a macroscopic force, acting on the whole wire. This macroscopic force is due to existence of current $I$ inside the wire, but it does not act on that current, it acts on the wire itself.
This macroscopic force is properly called Laplace force, or ponderomotive force. It is also common to call it simply magnetic force, due to its origin - it appears due to presence of magnetic forces acting on the charge carriers. Unfortunately, it is also quite common to call it Lorentz force, but that is grossly incorrect. Lorentz force should refer only to force acting on a microscopic body such as the charge carrier.
The Laplace force acts on the body as a whole; it is not given by the Lorentz formula and it is not perpendicular to the velocity of the body. Hence it can, and often does, do work (electric motors).
It arises due to the fact that the charge carriers are confined to the wire even while the Lorentz forces act on them; if there were no confinement, the Lorentz forces would make them curve their trajectories so as to escape from the wire on one side. This does not happen, as even the slightest deviation of the distribution of current inside the wire results in a restoring force due to the rest of the wire that keeps the charge carriers confined. By Newton's 3rd law, the charge carriers exert an opposite force on the rest of the wire too - and the sum of those forces is the Laplace force. Thus the Laplace force is an internal force, acting from the charge carriers on the rest of the wire.
The work done by this force is thus work of internal forces in the wire, not work of the external magnetic field. The energy is funneled from the voltage source, through the EM field of the voltage source and the circuit, to the mechanical energy of the wire.
When the wire is stationary (top diagrams) the magnetic Lorentz force (of magnitude $Bev_{dr}$) is to the right. The electron is restrained from being pushed out of the wire by a force from the wire that is essentially electrostatic. I've labelled its magnitude $F_{Lapl}$ because its Newton's third Law partner is the equal and opposite Laplace force that the electron exerts to the right on the wire. [The Laplace force is sometimes called the ponderomotive (!) force and, in the UK at least, the motor effect force.] For the stationary wire, $$F_{Lapl}=Bev_{dr}$$In other words, in this case, the Laplace force is equal to the magnetic Lorentz force.
The lower diagrams show what happens when the wire is moving to the right at speed $v_w$. Note the new resultant velocity, and the new direction of magnetic Lorentz force, at right angles to the resultant velocity. I've shown the magnitudes of the vertical and horizontal components of this force.
The magnitude of the vertical force component is $Bev_w$, so this force component appears only when the wire is allowed to move at right angles to itself (thereby doing work); it gives rise to a back-emf. For $v_{dr}$ to be constant, this force component must be balanced by a force due to the electric field caused by the battery. I've (mis)labelled this force $eE_{batt}$. [I say "(mis)labelled" because $eE_{batt}$ is not the whole of the electric field force due to the battery; part of the force overcomes resistive forces (not shown) on the electron.] Thus$$eE_{batt}=Bev_w.$$
As with the stationary wire there is the force whose magnitude I've labelled $F_{Lapl}$, keeping the electron in the wire. If $v_w$ is constant,$$F_{Lapl}=Bev_{dr}.$$This is exactly the same equation as for the stationary wire, but note that for the moving wire the Laplace force is not the same in magnitude or direction as the total magnetic Lorentz force, which is due to the total velocity of the electron!
Now for the energy aspect...
Power supplied to electron (not including that to do work against resistive forces) = $eE_{batt}v_{dr}=Bev_{w}\times v_{dr}$.
Work done per second by Laplace force = $F_{Lapl}\ v_w = Bev_{dr}\times v_w$.
So the work done by the Laplace force on the wire is equal to the work done by the force due to the battery, leaving no work to be done by the magnetic Lorentz force – just as it should be! [Although not strictly necessary, we could say that no net work is done by the Lorentz force, as the work done by the force of magnitude $eE_{batt}$ against the magnetic Lorentz force (vertical component) is equal to the work done by the magnetic Lorentz force (horizontal component) against the Newton's third law partner to the Laplace force!]
I believe that this resolves the paradox that the magnetic Lorentz force can do no work, yet work is done on/by the wire.
Footnote
The set-up is, in fact, a machine, producing a motor effect force in response to the force of (usually) different magnitude, $eE_{batt}$, in a different direction. It is comparable in its action to a smooth slope up which we pull a body of weight $mg$, by applying to it a force, $F_{sl}$, parallel to the slope. Here we have$$F_{sl}=mg \sin\theta$$while the vertical velocity component is related to the velocity parallel to the slope by$$v_{vert}=v_{sl} \sin\theta.$$Hence Power in = work done per second by $F_{sl}$ = $mg \sin\theta \times v_{sl}$
and Power out = work done per second lifting $m$ = $mg \times v_{sl} \sin\theta.$
This machine relies upon the normal contact force, $N$, between the body and the slope to keep the body on the slope, yet $N$, like the magnetic Lorentz force, does no work.
What you have described is actually a dc motor with an input of electrical energy and an output of heat and mechanical energy.
A parallel rail version is often used to show the force on a current carrying conductor in a magnetic field.
If the applied voltage from an external source is $V$ and the resistance of the circuit is $R$ and there is a complete circuit then a current $I$ will flow through the circuit.
If the yellow rod rolls along the rails at a speed $v$ then an emf $\mathcal E = BLv$ will be induced in the circuit.
For that circuit we can write $V- \mathcal E = IR$ and multiplying each side by $I$ and rearranging the equation gives $VI = I^2R + \mathcal E I$.
This final equation can be interpreted as the electrical power supplied by the external source $VI$ is equal to the power dissipated as heat due to the resistance in the circuit $I^2R$ plus the mechanical power done by the system $\mathcal EI$.
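This power balance is easy to sanity-check numerically. The snippet below (illustrative values of my own, not from the demonstration) computes the current from $V - \mathcal E = IR$ and verifies that $VI = I^2R + \mathcal EI$ and that $\mathcal EI = BILv$:

```python
# Sanity check of the dc-motor power balance V*I = I^2*R + emf*I.
# All numerical values are made up for illustration.
B, L, v = 0.5, 0.2, 3.0    # field (T), rod length (m), rod speed (m/s)
V, R = 6.0, 2.0            # source voltage (V), circuit resistance (ohm)

emf = B * L * v            # induced emf of the moving rod
I = (V - emf) / R          # current, from V - emf = I*R

supplied = V * I           # electrical power from the source
heat = I**2 * R            # power dissipated in the resistance
mechanical = emf * I       # mechanical power, equals B*I*L*v

assert abs(supplied - (heat + mechanical)) < 1e-12
assert abs(mechanical - B * I * L * v) < 1e-12
```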
In the case of the demonstration if the apparatus was large enough you could imagine that the rolling rod reaches a steady speed and the mechanical power is related to the work done against frictional forces.
The force which the wire exerts is $BIL$ and so the power delivered is $BILv = BLv \,\, I = \mathcal EI$. With the standard "small" version of the apparatus what you see is the rod starting from rest and then accelerating when the current is switched on - the rod is gaining kinetic energy.
If you started to push the rod along the rails faster there might come a time when $\mathcal E > V$.
The current direction would then be reversed and the external source would be "charged". The arrangement is then acting like an electrical generator. Now you are doing the mechanical work which is converted into heat and electrical/chemical energy.
I have the following equation:
\documentclass[12pt,a4paper]{report}
\usepackage{amsmath}
\begin{document}
\begin{align}
\begin{bmatrix}
\dot x \\
\dot \varphi \\
\dot v \\
\dot \omega
\end{bmatrix} =
\begin{bmatrix} %% \vspace{5pt} does not work
v \\
\omega \\
x \omega^2 - g \sin{\varphi} \\[5pt]
\displaystyle{- \frac{\left( m_2x-m_1a \right)g\cos\varphi - 2m_2xv\omega}{I_O +m_2x^2}}
\end{bmatrix}
\end{align}
\end{document}
and, since the LHS (left-hand side) has dots over the variables, the RHS (right-hand side) is slightly vertically offset relative to the LHS. Therefore, I would like to lower the entire RHS of the equation, starting from the first entry.
I know we can manage vertical spacing at the end of a line with \\[5pt], but I don't know how to apply it to the first entry, i.e. aligning \dot{x} with v.
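No answer is included in this excerpt; one common workaround (my suggestion, not from the original thread) is to give the first right-hand entry the same height as \dot x using \vphantom, so the first rows of both matrices line up:

```latex
\begin{bmatrix}
  \vphantom{\dot x} v \\
  \vphantom{\dot \varphi} \omega \\
  x \omega^2 - g \sin{\varphi} \\[5pt]
  \displaystyle{- \frac{\left( m_2x-m_1a \right)g\cos\varphi
      - 2m_2xv\omega}{I_O +m_2x^2}}
\end{bmatrix}
```

\vphantom inserts an invisible box with the height and depth of its argument, so it raises the row height without printing anything.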
<center><math>A(t) = A_0\mathrm{e}^{-ht} \cos \omega t \ </math></center>
<center><math>A(t) = A_0\mathrm{e}^{-ht} \cos \omega t \ </math></center>
Revision as of 13:32, 30 December 2014
1. Quality factor, the ratio of 2π times the peak energy to the energy dissipated in a cycle; the ratio of 2π times the power stored to the power dissipated. The seismic Q of rocks is of the order of 50 to 300. Q is related to other measures of absorption (see below):
where V, f, λ, and T are, respectively, velocity, frequency, wavelength, and period.[1] The absorption coefficient α is the term for the exponential decrease of amplitude with distance because of absorption; the amplitude of plane harmonic waves is often written as
<center><math>A\mathrm{e}^{-\alpha x} \sin 2 \pi f ( t - \tfrac{x}{V} ) </math></center>
where x is the distance traveled. The logarithmic decrement δ is the natural log of the ratio of the amplitudes of two successive cycles. The last equation above relates Q to the sharpness of a resonance condition; f_r is the resonance frequency and Δf is the change in frequency that reduces the amplitude by 1/√2. The damping factor h relates to the decrease in amplitude with time,
See Figure A-2.
2. The ratio of the reactance of a circuit to the resistance. 3. A term to describe the sharpness of a filter; the ratio of the midpoint frequency to the bandpass width (often at 3 dB). 4. A designation for Love waves (q.v.). 5. Symbol for the Koenigsberger ratio (q.v.). 6. See Q-type section.
References
Sheriff, R. E. and Geldart, L. P., 1995, Exploration Seismology, 2nd Ed., Cambridge Univ. Press.
In the wave equation:
$$c^2 \nabla \cdot \nabla u(x,t) - \frac{\partial^2 u(x,t)}{\partial t^2} = f(x,t)$$
Why do we first multiply by a test function $v(x,t)$ before integrating?
You're coming at it backwards. The justification is better seen by starting from the variational setting and working towards the strong form. Once you've done this, the concept of multiplying by a test function and integrating can then be applied to problems where you don't start with a minimization problem.
So consider the problem where we want to minimize (and working formally and not rigorously at all here):
$$ I(u) = \frac {1}{2} \int_\Omega (\nabla u(x))^2 \; dx $$
subject to some boundary conditions on $\partial\Omega$. If we want this $I$ to reach a minimum, we need to differentiate it with respect to $u$, which is a function. There are several well-trodden ways to define this kind of derivative, but one way it is often introduced is to compute
$$ I'(u(x),v(x))=\left.\frac{d}{dh}I(u(x)+hv(x))\right|_{h=0} $$
where $h$ is just a scalar. You can see that this is similar to the traditional definition of a derivative for scalar functions of a scalar variable but extended up to functionals like $I$ that give scalars back but have their domain over functions.
If we compute this for our $I$ (mostly using the chain rule), we get
$$ I'(u,v) = \int_\Omega \nabla u \cdot \nabla v \; dx $$
Setting this to zero to find the minimum, we get an equation which looks like the weak statement for Laplace's equation:
$$ \int_\Omega \nabla u \cdot \nabla v \; dx = 0 $$
Now, if we use the Divergence Theorem (aka multi-dimensional integration by parts), we can take a derivative off of $v$ and put it on $u$ to get
$$ -\int_\Omega \nabla \cdot (\nabla u) v \; dx + \text {boundary terms} = 0 $$
Now this really looks like where you start when you want to build a weak statement from a partial differential equation. Given this idea, you can use it for any PDE: just multiply by a test function, integrate, apply the Divergence Theorem, and then discretize.
As I mentioned before, I prefer to think about the weak form as a weighted residual.
We want to find an approximate solution $\hat{u}$. Let us define the residual as
$$R = c^2 \nabla \cdot \nabla \hat{u} - \frac{\partial^2 \hat{u}}{\partial t^2} - f(x,t)$$
for the case of the exact solution the residual is the zero function over the domain. We want to find an approximate solution that is "good", i.e., one that makes $R$ "small". So, we can try to minimize the norm of the residual (least-squares methods, for example), or some average of it. One way of doing this is to compute a weighted residual, i.e., to minimize the weighted residual
$$\int\limits_\Omega wR \, d\Omega$$
One important thing about this is that it defines a functional, so you can minimize it. This can work for problems that do not have a variational form. I describe this a little bit more in this post. You can choose the function $w$ in different ways, such as being in the same space as the function $\hat{u}$ (Galerkin methods), Dirac delta functions (collocation methods), or a fundamental solution (boundary element methods).
If you select the first case, then you will end up with an equation like the one described by @BillBarth. |
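To make the "multiply by a test function, integrate, apply the divergence theorem, discretize" recipe concrete, here is a minimal 1D Galerkin sketch (my own illustration, not from either answer; the function name is made up). It solves $-u'' = 1$ on $(0,1)$ with $u(0)=u(1)=0$ using piecewise-linear hat functions:

```python
import numpy as np

def fem_poisson_1d(n):
    """Galerkin FEM for -u'' = 1 on (0,1) with u(0) = u(1) = 0.

    Weak form: find u such that  int u' v' dx = int 1 * v dx
    for every hat test function v (the boundary terms vanish).
    Uses a uniform mesh with n elements.
    """
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix K_ij = int phi_i' phi_j' dx -> tridiag(-1, 2, -1) / h
    K = (np.diag(2.0 * np.ones(n - 1))
         + np.diag(-np.ones(n - 2), 1)
         + np.diag(-np.ones(n - 2), -1)) / h
    # Load vector b_i = int f * phi_i dx = h for f = 1 (exact here)
    b = h * np.ones(n - 1)
    u_interior = np.linalg.solve(K, b)
    return x, np.concatenate(([0.0], u_interior, [0.0]))

x, u = fem_poisson_1d(16)
```

For this right-hand side the exact solution is $u(x) = x(1-x)/2$, and with the load vector integrated exactly the nodal values reproduce it to machine precision.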
stat946w18/Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolutional Layers
Introduction
With the recent and ongoing surge in low-power, intelligent agents (such as wearables, smartphones, and IoT devices), there exists a growing need for machine learning models to work well in resource-constrained environments. Deep learning models have achieved state-of-the-art results on a broad range of tasks; however, they are difficult to deploy in their original forms. For example, AlexNet (Krizhevsky et al., 2012), a model for image classification, contains 61 million parameters and requires 1.5 billion floating point operations (FLOPs) in one inference pass. A more accurate model, ResNet-50 (He et al., 2016), has 25 million parameters but requires 4.08 billion FLOPs. A high-end desktop GPU such as a Titan Xp is capable of 12 TFLOPS (tera-FLOPs per second), while the Adreno 540 GPU used in a Samsung Galaxy S8 is only capable of 567 GFLOPS, less than 5% of the Titan Xp. Clearly, it would be difficult to deploy and run these models on low-power devices.
In general, model compression can be accomplished using four main non-mutually exclusive methods (Cheng et al., 2017): weight pruning, quantization, matrix transformations, and weight tying. By non-mutually exclusive, we mean that these methods can be used not only separately but also in combination for compressing a single model; the use of one method does not exclude any of the other methods from being viable.
Ye et al. (2018) explore pruning entire channels in a convolutional neural network (CNN). Past work has mostly focused on norm-based or error-based heuristics to prune channels; instead, Ye et al. (2018) show that their approach is easily reproducible and has favorable qualities from an optimization standpoint. In other words, they argue that the norm-based assumption is not as informative or theoretically justified as their approach, and they provide strong empirical evidence for these findings.
Motivation
Some previous works on pruning channel filters (Li et al., 2016; Molchanov et al., 2016) have focused on using the L1 norm to determine the importance of a channel. Ye et al. (2018) show that, in the deep linear convolution case, penalizing the per-layer norm is coarse-grained; they argue that one cannot assign different coefficients to L1 penalties associated with different layers without risking the loss function being susceptible to trivial re-parameterizations. As an example, consider the following deep linear convolutional neural network with modified LASSO loss:
$$\min \mathbb{E}_D \lVert W_{2n} * \dots * W_1 x - y\rVert^2 + \lambda \sum_{i=1}^n \lVert W_{2i} \rVert_1$$
where W are the weights and * is convolution. Here we have chosen the coefficient 0 for the L1 penalty associated with odd-numbered layers and the coefficient 1 for the L1 penalty associated with even-numbered layers. This loss is susceptible to trivial re-parameterizations: without affecting the least-squares loss, we can always reduce the LASSO loss by halving the weights of all even-numbered layers and doubling the weights of all odd-numbered layers.
Furthermore, batch normalization (Ioffe, 2015) is incompatible with this method of weight regularization. Consider batch normalization at the [math]l[/math]-th layer.
Due to the batch normalization, any uniform scaling of [math]W^l[/math] changes its [math]l_1[/math] and [math]l_2[/math] norms but has no effect on [math]x^{l+1}[/math]. Thus, when trying to minimize the weight norms of multiple layers, it is unclear how to properly choose penalties for each layer. Therefore, penalizing the norm of a filter in a deep convolutional network is hard to justify from a theoretical perspective.
In contrast with these existing approaches, the authors focus on enforcing sparsity of a tiny set of parameters in the CNN: the scale parameters [math]\gamma[/math] in all batch normalization layers. Placing sparsity constraints on [math]\gamma[/math] is not only simpler and easier to monitor; more importantly, the authors put forward two reasons:
1. Every [math]\gamma[/math] always multiplies a normalized random variable, thus the channel importance becomes comparable across different layers by measuring the magnitude values of [math]\gamma[/math];
2. The reparameterization effect across different layers is avoided if its subsequent convolution layer is also batch-normalized. In other words, the impacts from the scale changes of [math]\gamma[/math] parameter are independent across different layers.
Thus, although not providing a complete theoretical guarantee on loss, Ye et al. (2018) develop a pruning technique that claims to be more justified than norm-based pruning is.
Method
At a high level, Ye et al. (2018) propose that, instead of discovering sparsity via penalizing the per-filter or per-channel norm, one should penalize the batch normalization scale parameters [math]\gamma[/math] instead. The reasoning is that by having fewer parameters to constrain and working with normalized values, sparsity is easier to enforce, monitor, and learn. Having sparse batch normalization terms has the effect of pruning entire channels: if [math]\gamma[/math] is zero, then the output at that layer becomes constant (the bias term), and thus the preceding channels can be pruned.
Summary
The basic algorithm can be summarized as follows:
1. Penalize the L1-norm of the batch normalization scaling parameters in the loss
2. Train until loss plateaus
3. Remove channels that correspond to a downstream zero in batch normalization
4. Fine-tune the pruned model using regular learning
Details
There still exist a few problems that this summary has not addressed so far. Sub-gradient descent is known to have inverse square root convergence rate on subdifferentials (Gordon et al., 2012), so the sparsity gradient descent update may be suboptimal. Furthermore, the sparse penalty needs to be normalized with respect to previous channel sizes, since the penalty should be roughly equally distributed across all convolution layers.
Slow Convergence
To address the issue of slow convergence, Ye et al. (2018) use an iterative shrinking-thresholding algorithm (ISTA) (Beck & Teboulle, 2009) to update the batch normalization scale parameter. The intuition for ISTA is that the structure of the optimization objective can be taken advantage of. Consider: $$L(x) = f(x) + g(x).$$
Let f be the model loss and g the non-differentiable penalty (LASSO). ISTA is able to use the structure of the loss and converge in O(1/n), instead of the O(1/sqrt(n)) of subgradient descent, which assumes no structure about the loss. Even though ISTA is designed for convex settings, Ye et al. (2018) argue that it still performs better than gradient descent.
Penalty Normalization
In the paper, Ye et al. (2018) normalize the per-layer sparse penalty with respect to the global input size, the current layer kernel areas, the previous layer kernel areas, and the local input feature map area.
To control the global penalty, a hyperparameter [math]\rho[/math] is multiplied with all the per-layer [math]\lambda[/math] in the final loss.
Steps
The final algorithm can be summarized as follows:
1. Compute the per-layer normalized sparse penalty constant [math]\lambda[/math]
2. Compute the global LASSO loss with global scaling constant [math]\rho[/math]
3. Until convergence, train scaling parameters using ISTA and non-scaling parameters using regular gradient descent.
4. Remove channels that correspond to a downstream zero in batch normalization
5. Fine-tune the pruned model using regular learning
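Step 3 above, the ISTA update on the batch normalization scales, amounts to a gradient step on the smooth part of the loss followed by soft-thresholding (the proximal operator of the L1 penalty). A minimal sketch of that update (my own illustration, not the authors' code; the function name and arguments are assumptions):

```python
import numpy as np

def ista_step(gamma, grad, lr, lam):
    # Gradient step on the smooth loss, then soft-threshold:
    # the proximal operator of lam * ||gamma||_1 with step size lr.
    # Entries pulled exactly to zero mark prunable channels.
    z = gamma - lr * grad
    return np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)

gamma = np.array([0.5, -0.01, 0.2])
new_gamma = ista_step(gamma, grad=np.zeros(3), lr=1.0, lam=0.1)
```

Small scales (here the -0.01 entry) are driven exactly to zero, while larger ones are merely shrunk by lr * lam.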
Results
CIFAR-10 Experiment
Model A is trained with a sparse penalty of [math]\rho = 0.0002[/math] for 30 thousand steps, and then increased to [math]\rho = 0.001[/math]. Model B is trained by taking Model A and increasing the sparse penalty up to 0.002. Similarly Model C is a continuation of Model B with a penalty of 0.008.
For the convNet, reducing the number of parameters in the base model increased the accuracy in model A. This suggests that the base model is over-parameterized. Otherwise, there would be a trade-off of accuracy and model efficiency.
ILSVRC2012 Experiment
The authors note that while ResNet-101 takes hundreds of epochs to train, pruning only takes 5-10, with fine-tuning adding another 2, giving an empirical example how long pruning might take in practice. Both models were trained with an aggressive sparsity penalty of 0.1.
Image Foreground-Background Segmentation Experiment
The authors note that it is common practice to take a network pre-trained on a large task and fine-tune it for a different, smaller task. One might expect that there are some extra channels that, while useful for the large task, can be omitted for the simpler task. This experiment replicated that use-case by taking a NN originally trained on multiple datasets and applying the proposed pruning method. The authors note that the pruned network actually improves over the original network on all but the most challenging test dataset, which is in line with the initial expectation. The model was trained with a sparsity penalty of 0.5 and the results are shown in the table below.
The neural network used in this experiment is composed of two branches:
An inception branch that locates the foreground objects
A DenseNet branch to regress the edges
It was found that the pruning primarily affected the inception branch as shown in Figure 1 below. This likely explains the poor performance on more challenging datasets as a result of a higher requirement on foreground objects, which has been impacted by the pruning of the inception branch.
Conclusion
Pruning large neural architectures to fit on low-power devices is an important task. For a real quantitative measure of efficiency, it would be interesting to conduct actual power measurements on the pruned models versus baselines; reduction in FLOPs doesn't necessarily correspond with vastly reduced power since memory accesses dominate energy consumption (Han et al., 2015). However, the reduction in the number of FLOPs and parameters is encouraging, so moderate power savings should be expected.
It would also be interesting to combine multiple approaches, or "throw the whole kitchen sink" at this task. Han et al. (2015) sparked much recent interest by successfully combining weight pruning, quantization, and Huffman coding without loss in accuracy. However, their approach introduced irregular sparsity in the convolutional layers, so a direct comparison cannot be made.
In conclusion, this novel, theoretically-motivated interpretation of channel pruning was successfully applied to several important tasks.
Implementation
A PyTorch implementation is available here: https://github.com/jack-willturner/batchnorm-pruning
References
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
Cheng, Y., Wang, D., Zhou, P., & Zhang, T. (2017). A Survey of Model Compression and Acceleration for Deep Neural Networks. arXiv preprint arXiv:1710.09282.
Ye, J., Lu, X., Lin, Z., & Wang, J. Z. (2018). Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers. arXiv preprint arXiv:1802.00124.
Li, H., Kadav, A., Durdanovic, I., Samet, H., & Graf, H. P. (2016). Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710.
Molchanov, P., Tyree, S., Karras, T., Aila, T., & Kautz, J. (2016). Pruning convolutional neural networks for resource efficient inference.
Ioffe, S., & Szegedy, C. (2015, June). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning (pp. 448-456).
Gordon, G., & Tibshirani, R. (2012). Subgradient method. https://www.cs.cmu.edu/~ggordon/10725-F12/slides/06-sg-method.pdf
Beck, A., & Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences, 2(1), 183-202.
Han, S., Mao, H., & Dally, W. J. (2015). Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149
Hello!
I was not quite sure about posting in this category, but I think my question fits here. I am wondering about the Maxwell equations in vacuum written with differential forms, namely:
\begin{equation} \label{pippo}
dF = 0 \qquad d \star F = 0
\end{equation}
I know ##F## is a 2-form, and it can be written in terms of the 1-form ##A## as ##F = dA##. I am not now interested in that derivation (even if the first equation is then straightforward), but I want to see the equivalence with the Maxwell equations written in covariant form:
\begin{equation} \label{pluto}
\varepsilon^{\mu \nu \rho \sigma} \partial_\nu F_{\rho \sigma} = 0 \qquad \partial^\mu F_{\mu \nu} = 0
\end{equation}
If I write everything out explicitly, knowing the components of ##F_{\mu \nu}##, I can recover both equations in \ref{pluto} starting from \ref{pippo}. I'm interested in a more "direct" derivation, but I cannot find any reference in textbooks.
Thank you in advance,
Francesco
NB: I am not a native English speaker, sorry for that.
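No answer appears in this excerpt, so here is a sketch of the direct component computation (hedged: the overall signs depend on the metric signature and orientation conventions):

```latex
% dF = 0 in components: for a 2-form,
(dF)_{\mu\nu\rho} = \partial_\mu F_{\nu\rho}
                  + \partial_\nu F_{\rho\mu}
                  + \partial_\rho F_{\mu\nu} .
% This is totally antisymmetric, and contracting a totally
% antisymmetric 3-index object with \varepsilon^{\sigma\mu\nu\rho}
% loses no information, hence
dF = 0 \iff \varepsilon^{\sigma\mu\nu\rho}\,\partial_\mu F_{\nu\rho} = 0 .
% For the second equation, use
% (\star F)_{\mu\nu} = \tfrac12 \varepsilon_{\mu\nu\rho\sigma} F^{\rho\sigma}
% together with the (signature-dependent) identity
% \varepsilon^{\alpha\lambda\mu\nu}\varepsilon_{\mu\nu\rho\sigma}
%   = \mp 2 \,(\delta^\alpha_\rho \delta^\lambda_\sigma
%            - \delta^\alpha_\sigma \delta^\lambda_\rho) :
\varepsilon^{\alpha\lambda\mu\nu}\,\partial_\lambda (\star F)_{\mu\nu}
  = \tfrac12\,\varepsilon^{\alpha\lambda\mu\nu}
      \varepsilon_{\mu\nu\rho\sigma}\,\partial_\lambda F^{\rho\sigma}
  = \pm 2\,\partial_\lambda F^{\lambda\alpha} ,
% so d\star F = 0 is equivalent to \partial^\mu F_{\mu\nu} = 0 .
```

In words: both exterior derivatives are totally antisymmetrized partial derivatives, and contraction with the Levi-Civita symbol converts "the totally antisymmetric part vanishes" into the familiar covariant index equations.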
To explain what a moderator is, we start with a bivariate relationship between an input variable $X$ and an outcome variable $Y$. For example, $X$ could be the number of training sessions (training intensity) and $Y$ could be math test score. We can hypothesize that there is a relationship between them such that the number of training sessions predicts math test performance. Using a diagram, we can portray the relationship below.
The above path diagram can be expressed using a regression model as\[ Y=\beta_{0}+\beta_{1}*X+\epsilon \] where $\beta_{0}$ is the intercept and $\beta_{1}$ is the slope.
A moderator variable $Z$ is a variable that alters the strength of the relationship between $X$ and $Y$. In other words, the effect of $X$ on $Y$ depends on the level of the moderator $Z$. For instance, if male students ($Z=0$) benefit more (or less) from training than female students ($Z=1$), then gender can be considered as a moderator. Using the diagram, if the coefficient $a$ is different from $b$, there is a moderation effect.
To summarize, a moderator $Z$ is a variable that alters the direction and/or strength of the relation between a predictor $X$ and an outcome $Y$.
Questions involving moderators address “when” or “for whom” a variable most strongly predicts or causes an outcome variable. Using a path diagram, we can express the moderation effect as:
Moderation analysis can be conducted by adding one or multiple interaction terms in a regression analysis. For example, if $Z$ is a moderator for the relation between $X$ and $Y$, we can fit a regression model
\begin{eqnarray*} Y & = & \beta_{0}+\beta_{1}*X+\beta_{2}*Z+\beta_{3}*X*Z+\epsilon\\ & = & \beta_{0}+\beta_{2}*Z+(\beta_{1}+\beta_{3}*Z)*X+\epsilon. \end{eqnarray*}
Thus, if \(\beta_{3}\) is not equal to 0, the relationship between $X$ and $Y$ depends on the value of $Z$, which indicates a moderation effect. In fact, from the regression model we can read off the simple slope of $X$ at any given value of $Z$: it is $\beta_{1}+\beta_{3}*Z$.
If $Z$ is a dichotomous/binary variable, for example, gender, the above equation can be written as
\begin{eqnarray*} Y & = & \beta_{0}+\beta_{1}*X+\beta_{2}*Z+\beta_{3}*X*Z+\epsilon\\ & = & \begin{cases} \beta_{0}+\beta_{1}*X+\epsilon & \mbox{For male students}(Z=0)\\ \beta_{0}+\beta_{2}+(\beta_{1}+\beta_{3})*X+\epsilon & \mbox{For female students}(Z=1) \end{cases} \end{eqnarray*}
Thus, if $\beta_{3}$ is not equal to 0, the relationship between $X$ and $Y$ depends on the value of $Z$, which indicates a moderation effect. When $Z=0$, the effect of $X$ on $Y$ is $\beta_{1}+\beta_{3}*0=\beta_{1}$ (male students), and when $Z=1$, the effect of $X$ on $Y$ is $\beta_{1}+\beta_{3}*1=\beta_{1}+\beta_{3}$ (female students).
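The algebra above can be checked with a quick simulation: generate data with known coefficients, fit ordinary least squares with an $X*Z$ interaction column, and recover the simple slopes. (A Python sketch with made-up coefficient values; the worked example later in this section uses R.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)              # predictor, e.g. training intensity
z = rng.integers(0, 2, size=n)      # binary moderator, e.g. gender
b0, b1, b2, b3 = 5.0, -0.34, -2.76, 0.50   # "true" values (made up)
y = b0 + b1 * x + b2 * z + b3 * x * z + rng.normal(scale=0.95, size=n)

# OLS on a design matrix that includes the interaction column x*z
X = np.column_stack([np.ones(n), x, z, x * z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

slope_male = beta[1]                # simple slope of x when z = 0
slope_female = beta[1] + beta[3]    # simple slope of x when z = 1
```

A non-zero estimate of the interaction coefficient beta[3] is exactly what signals that the slope of x differs across the two levels of z.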
A moderation analysis typically consists of the following steps.
The data set mathmod.csv includes three variables: training intensity, gender, and math test score. Using the example, we investigate whether the effect of training intensity on math test performance depends on gender. Therefore, we evaluate whether gender is a moderator.
The R code for the analysis is given below.
> usedata('mathmod'); attach(mathmod)
>
> # Compute the interaction term
> xz<-training*gender
> summary(lm(math~training+gender+xz))

Call:
lm(formula = math ~ training + gender + xz)

Residuals:
    Min      1Q  Median      3Q     Max
-2.6837 -0.5892 -0.1057  0.7811  2.2350

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  4.98999    0.27499  18.146  < 2e-16 ***
training    -0.33943    0.05387  -6.301 8.70e-09 ***
gender      -2.75688    0.37912  -7.272 9.14e-11 ***
xz           0.50427    0.06845   7.367 5.80e-11 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9532 on 97 degrees of freedom
Multiple R-squared:  0.3799, Adjusted R-squared:  0.3607
F-statistic: 19.81 on 3 and 97 DF,  p-value: 4.256e-10
Since the regression coefficient (0.504) for the interaction term XZ is significant at the alpha level 0.05 with a p-value of 5.8e-11, there exists a significant moderation effect. In other words, the effect of training intensity on math performance significantly depends on gender.
When Z=0 (male students), the estimated effect of training intensity on math performance is \(\hat{\beta}_{1}=-.34\). When Z=1 (female students), the estimated effect of training intensity on math performance is \(\hat{\beta}_{1}+\hat{\beta}_{3}=-.34+.50=.16\). The moderation analysis tells us that the effects of training intensity on math performance for males (-.34) and females (.16) are significantly different for this example.
A moderation effect indicates that the regression slopes are different for different groups. Therefore, if we plot the regression line for each group, the lines should intersect at some point. Such a plot is called an interaction plot. To get the plot, we first calculate the intercept and slope for each level of the moderator. For this example, we have
\begin{eqnarray*} Y & = & \beta_{0}+\beta_{1}*X+\beta_{2}*Z+\beta_{3}*X*Z \\ & = & \begin{cases} \beta_{0}+\beta_{1}*X& \mbox{For male students}(Z=0) \\ \beta_{0}+\beta_{2}+(\beta_{1}+\beta_{3})*X& \mbox{For female students}(Z=1) \end{cases}. \\ & = & \begin{cases} 5 - 0.34*X& \mbox{For male students}(Z=0)\\ 2.23 + 0.16*X& \mbox{For female students}(Z=1)\end{cases}\end{eqnarray*}
With the information, we can generate a plot using the R code below. Note that the option type='n' generates a figure without actually plotting the data. In the function abline(), the first value is the intercept and the second is the slope. Note that the values for each level can also be added to the plot.
> rm(gender)
> rm(training)
> usedata('mathmod'); attach(mathmod);
>
> plot(training, math, type='n')  ## create an empty frame
> abline(5, -.34)                 ## for male
> abline(2.23, .16, lty=2, col='red')  ## for female
> legend('topright', c('Male', 'Female'), lty=c(1,2),
+        col=c('black', 'red'))
>
> ## add scatter plot
> points(training[gender==0], math[gender==0])
> points(training[gender==1], math[gender==1], col='red')
The data set depress.csv includes three variables: stress, social support, and depression. Suppose we want to investigate whether social support is a moderator for the relation between stress and depression, that is, to study whether the effect of stress on depression depends on the level of social support. Note that the potential moderator, social support, is a continuous variable.
The analysis is given below. The regression coefficient estimate of the interaction term is -.39 with t = -20.754, p < .001. Therefore, social support is a significant moderator for the relation between stress and depression. The relation between stress and depression significantly depends on different levels of social support.
> usedata('depress');
>
> # the interaction term
> depress$inter<-depress$stress*depress$support
> summary(lm(depress~stress+support+inter, data=depress))

Call:
lm(formula = depress ~ stress + support + inter, data = depress)

Residuals:
    Min      1Q  Median      3Q     Max
-3.7322 -0.9035 -0.1127  0.8542  3.6089

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  29.2583     0.6909  42.351   <2e-16 ***
stress        1.9956     0.1161  17.185   <2e-16 ***
support      -0.2356     0.1109  -2.125   0.0362 *
inter        -0.3902     0.0188 -20.754   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.39 on 96 degrees of freedom
Multiple R-squared:  0.9638, Adjusted R-squared:  0.9627
F-statistic: 853 on 3 and 96 DF,  p-value: < 2.2e-16
Since social support is a continuous variable, there are no immediate levels at which to look at the relationship between stress and depression. However, we can choose several different levels. One way is to use three levels of the moderator: the mean, one standard deviation below the mean, and one standard deviation above the mean. For this example, the three values of social support are 5.37, 2.56, and 8.18. The fitted regression lines for the three values are
\begin{eqnarray*} \hat{depress} & = & 29.26+2.00*stress-.24*support-.39*stress*support\\ & = & \begin{cases} 28.65+1*stress & \;support=2.56\\ 27.97-.09*stress & \;support=5.37.\\ 27.30-1.19*stress & \;support=8.18 \end{cases} \end{eqnarray*}
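The three fitted lines above follow mechanically from the coefficient estimates. As a cross-check (in Python, purely for arithmetic; the function name is ours), one can compute the intercept and simple slope at each level of the moderator:

```python
# Fitted coefficients from the depression example:
# depress = 29.26 + 2.00*stress - 0.24*support - 0.39*stress*support
b0, b_stress, b_support, b_inter = 29.26, 2.00, -0.24, -0.39

def line_at(support):
    """Intercept and slope of the stress -> depression line at a support level."""
    return (b0 + b_support * support, b_stress + b_inter * support)

for s in (2.56, 5.37, 8.18):  # mean - 1SD, mean, mean + 1SD
    icpt, slope = line_at(s)
    print(round(icpt, 2), round(slope, 2))
# -> 28.65 1.0 / 27.97 -0.09 / 27.3 -1.19, matching the displayed lines
```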
From this, we can clearly see that with more social support, the relationship between depression and stress changes from positive to negative. This can also be seen from the interaction plot below.
> usedata('depress');
>
> ## create an empty frame
> plot(depress$stress, depress$depress, type='n',
+      xlab='Stress', ylab='Depression')
>
> ## abline(interceptvalue, linearslopevalue)
> # for support = mean - 1SD
> abline(28.65, 1)
> # for support = mean
> abline(27.97, -.09, col='blue')
> # for support = mean + 1SD
> abline(27.30, -1.19, col='red')
>
> legend('topleft', c('Low', 'Medium', 'High'),
+        lty=c(1,1,1),
+        col=c('black','blue','red'))
We assume that the signal $x(n)$ is known, that $\hat{x}(-1)=0$, and that there is no noise in the telecommunication channel.
I want to define the predicted signal $\tilde{x}(n)$, the difference signal $d(n)$, the quantized difference signal $\hat{d}(n)$, the encoded signal $c(n)$, and the reconstructed signal $\hat{x}(n)$.
I have written the following equations: $$d(n) = x(n) - \tilde{x}(n) \; \; \; \; (1) $$ $$\tilde{x}(n) = 0.8 \cdot \hat{x}(n-1) \; \; (2)$$ $$\hat{x}(n) = \tilde{x}(n) + \hat{d}(n) \; \; \;\;(3)$$ In addition, due to the feedback, the error $e(n)$ of the reconstruction (between the initial signal at the transmitter and the reconstructed signal at the receiver) is equal to the error of the quantizer. So I also get: $$\hat{d}(n) = d(n) + e(n) \; \; \; \; (4) $$ $$\hat{x}(n) = x(n) + e(n) \; \; \; \; (5) $$
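For concreteness, equations (1)-(3) can be run as a loop. Below is a small Python sketch of that loop; the uniform quantizer (step `q`) is a hypothetical stand-in, since the actual quantizer is not specified in the question.

```python
# A runnable sketch of the DPCM loop in equations (1)-(3). The uniform
# quantizer (step q) is an assumption, not part of the original question.
def quantize(d, q=0.5):
    return q * round(d / q)

def dpcm_reconstruct(x, a=0.8):
    """Reconstruct x through the DPCM loop, with x_hat(-1) = 0."""
    x_hat_prev = 0.0
    recon = []
    for xn in x:
        x_tilde = a * x_hat_prev   # (2) prediction from previous reconstruction
        d = xn - x_tilde           # (1) difference signal
        d_hat = quantize(d)        # quantized difference
        x_hat = x_tilde + d_hat    # (3) reconstruction
        recon.append(x_hat)
        x_hat_prev = x_hat
    return recon

recon = dpcm_reconstruct([1.0, 1.0, 1.0])
print([round(r, 2) for r in recon])  # -> [1.0, 0.8, 1.14]
```

At each step the reconstruction error `recon[n] - x[n]` equals the quantizer error `d_hat - d`, which is exactly the feedback property in equations (4)-(5).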
After solving the system of equations, it seems to me that the number of unknowns exceeds the number of linearly independent equations. If that is so, I need one more equation, since not all of the equations above are linearly independent.
So my question is what am I missing? Thanks in advance! |
Cuckoo hashing for sketching sets
Below I show a neat application of perfect hashing, which is one of my favorite (cluster of) algorithms. Amazingly, we use it to obtain a purely information-theoretic (rather than algorithmic) statement.
Suppose we have a finite universe $U$ of size $n$ and a $k$-element subset of it $S \subseteq U$ with $k \ll n$. How many bits do we need to encode it? The obvious answer is $\log_2 \binom{n}{k} = \Theta(k \cdot \log(n / k))$.
Can we, however, improve this bound if we allow some approximation?
Even if $n = 2k$, it is not difficult to show the lower bound of $k \cdot \log_2(1 / \delta)$ bits if we allow to be wrong when answering queries “does $x$ belong to $S$?” with probability at most $\delta$ (hint: $\varepsilon$-nets). Can we match this lower bound?
One approach that does not quite work is to hash each element of $S$ to an $l$-bit string using a sufficiently good hash function $h \colon U \to \{0, 1\}^l$, and, when checking if $x$ lies in $S$, compute $h(x)$ and check if this value is among the hashes of $S$. To see why it does not work, let us analyze it: if $x \notin S$, then the probability that $h(x)$ coincides with at least one hash of an element of $S$ is around $k \cdot 2^{-l}$. To make the latter less than $\delta$, we need to take $l = \log_2(k / \delta)$ yielding the overall bound of $k \cdot \log_2(k / \delta)$ falling short of the desired size.
To get the optimal size, we need to avoid using the union bound in the above argument. In order to accomplish this, let us use perfect hashing on top of the above hashing scheme! It is convenient to use a particular approach to perfect hashing called Cuckoo hashing. In short, there is a way to generate two simple hash functions $h_1, h_2 \colon U \to [m]$ for $m = O(k)$ and place the elements of our set $S$ into $m$ bins without collisions so that for every $x \in S$, the element $x$ is placed either in bin $h_1(x)$ or in bin $h_2(x)$. Now, to encode our set $S$, we build a Cuckoo hash table for it, and then for each of the $m$ bins, we either store one bit indicating that it’s empty, or store an $l$-bit hash of an element that is placed into it. Now we can set $l = \log_2(2 / \delta)$, since we compare the hash of a query to merely two hashes, instead of $k$. This gives the overall size $m + k \cdot \log_2 (2 / \delta) = k \cdot (\log_2(1 / \delta) + O(1))$, which is optimal up to a low-order term. Of course, the encoding should include $h_1$, $h_2$ and $h$, but it turns out they can be taken to be sufficiently simple so that their size does not really matter.
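Here is a toy Python model of this encoding (my own sketch, not from the post): the hash functions are random maps with a fixed seed, standing in for the "sufficiently simple" families mentioned above, and the parameters are deliberately small.

```python
import random

def build_sketch(S, universe, m, l, seed=3):
    """Toy sketch: cuckoo-place S into m bins, keep only an l-bit hash per bin."""
    rnd = random.Random(seed)
    h1 = {x: rnd.randrange(m) for x in universe}
    h2 = {x: rnd.randrange(m) for x in universe}
    h = {x: rnd.randrange(2 ** l) for x in universe}  # l-bit fingerprints
    table = [None] * m
    for x in S:  # cuckoo insertion: evict the occupant and retry on collision
        cur, pos = x, h1[x]
        for _ in range(100 * m):
            if table[pos] is None:
                table[pos], cur = cur, None
                break
            table[pos], cur = cur, table[pos]
            pos = h2[cur] if pos == h1[cur] else h1[cur]  # alternative bin
        if cur is not None:
            raise RuntimeError("placement failed; a real scheme would rehash")
    # the sketch stores, per bin, only emptiness or the fingerprint
    sketch = [None if slot is None else h[slot] for slot in table]
    return sketch, h1, h2, h

def member(x, sketch, h1, h2, h):
    """Never errs for x in S; errs w.p. about 2/2**l for x outside S."""
    return sketch[h1[x]] == h[x] or sketch[h2[x]] == h[x]

U = range(200)
S = set(range(8))
sketch, h1, h2, h = build_sketch(S, U, m=40, l=16)
print(all(member(x, sketch, h1, h2, h) for x in S))  # True: no false negatives
```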
Two remarks are in order. First, in this context people usually bring up Bloom filters. However, they require space, which is $1.44$ times bigger, and, arguably, they are more mysterious (if technically simple). Second, one may naturally wonder why anyone would care about distinguishing bounds like $k \cdot \log_2 (1 / \delta)$ and $k \cdot \log_2(k / \delta)$. In my opinion, there are two answers to this. First, it is just a cool application of perfect hashing (an obligatory link to one of my favorite comic strips). Second, compressing sets is actually important in practice and constant factors do matter, for instance when we are aiming to transfer the set over the network.
Update. Kasper Green Larsen observed that we can combine the naive and not-quite-working solutions to obtain the optimal bound. Namely, by hashing everything to $\log_2(k / \delta)$ bits, we effectively reduce the universe size to $n' = k / \delta$. Then, the naive encoding takes $\log_2 \binom{n'}{k} \approx H(\delta) \cdot n' = H(\delta) \cdot k / \delta \approx k \cdot \log_2 (1 / \delta)$ bits.
Laws of Motion
Second Law of Motion: Momentum and Impulse
According to the second law of motion, the rate of change of linear momentum is directly proportional to the external force applied. Newton's second law of motion gives the formula of force F = ma. The second law implies that when a bigger force is applied on a body, its linear momentum changes faster.
Linear momentum is defined as the product of mass and velocity, P = mv. Momentum is a vector quantity.
Relation between F and P: \tt F \propto \frac{dP}{dt}
1 newton = $10^{5}$ dynes.
\tt F = \frac{m(v - u)}{t} where v = final velocity, u = initial velocity.
If "v" is constant and "m" is changing: \tt F = v\frac{dm}{dt}
\tt F = \frac{P_{2} - P_{1}}{t} where $P_{1}$ = initial momentum, $P_{2}$ = final momentum.
When a large force acts over a small interval of time, the product of force and time is called impulse (J). The impulse \tt J = Ft = \int F\,dt is a quantity that combines the net force and the time interval over which the force acts. The area under a force-time graph gives impulse.
If a gun fires n bullets of mass "m" with speed "v": \tt F = \frac{nmv}{t}
If water of density "ρ" comes out of a pipe of cross-sectional area A with speed "v": F = Aρv^{2}
The reaction force on a person in a lift moving up with acceleration: R = m(g + a)
The reaction force on a person in a lift moving down with deceleration: R = m(g + a)
The reaction force on a person in a lift moving up with deceleration: R = m(g − a)
The reaction force on a person in a lift moving down with acceleration: R = m(g − a)
The reaction force on a person in a freely falling lift: R = 0.
The force between two bodies at contact is called contact force. Contact force \tt F_{contact} = \frac{M_{2} F}{M_{1} + M_{2}}
Acceleration of a two-body system \tt a = \frac{F}{M_{1} + M_{2}}
For three bodies \tt a = \frac{F}{m_{1} + m_{2} + m_{3}}
Tension is an electromagnetic force that arises in a string when a force is applied.
Acceleration of two bodies connected by a string \tt a = \frac{F}{m_{1} + m_{2}}
Tension in the string \tt T = \frac{m_{2} F}{m_{1} + m_{2}}
Acceleration of three bodies connected by strings \tt a = \frac{F}{m_{1} + m_{2} + m_{3}}
Tension in the first string \tt T_{1} = \frac{m_{1} F}{m_{1} + m_{2} + m_{3}}
Tension in the second string \tt T_{2} = \frac{\left(m_{1} + m_{2}\right) F}{m_{1} + m_{2} + m_{3}}
For the Atwood machine, acceleration \tt a = \frac{m_{1} - m_{2}}{m_{1} + m_{2}} g
Tension \tt T = \frac{2 m_{1}m_{2}}{m_{1} + m_{2}} \cdot g Thrust on the pulley = \tt 2T = \frac{4 m_{1}m_{2} g}{m_{1} + m_{2}} Acceleration \tt a = \frac{m_{1} g}{m_{1} + m_{2}}
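The Atwood-machine relations above (acceleration, string tension, and pulley thrust) are easy to sanity-check numerically. A short Python sketch, with arbitrarily chosen masses and g = 9.8 m/s²:

```python
def atwood(m1, m2, g=9.8):
    """Acceleration, string tension and pulley thrust for an Atwood machine."""
    a = (m1 - m2) * g / (m1 + m2)
    T = 2 * m1 * m2 * g / (m1 + m2)
    return a, T, 2 * T  # thrust on the pulley is 2T

a, T, thrust = atwood(3.0, 2.0)
print(round(a, 2), round(T, 2), round(thrust, 2))  # 1.96 23.52 47.04
```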
Tension in string \tt T = \frac{m_{1} m_{2} g}{m_{1} + m_{2}} Acceleration \tt a = \frac{\left(m_{1} - m_{2} \sin \theta \right) g}{m_{1} + m_{2}}
Tension \tt T = \frac{m_{1} m_{2} \left(1 + \sin \theta \right) g}{m_{1} + m_{2}}
Acceleration \tt a = \frac{\left(m_{1} \sin \alpha - m_{2} \sin \beta \right) g}{m_{1} + m_{2}}
Tension \tt T = \frac{m_{1} m_{2} \left(\sin \alpha + \sin \beta \right) g}{m_{1} + m_{2}}
1. Newton's Second Law of Motion: \tt F = \frac{mdv}{dt} = ma
2. Impulse = Force × Time = Change in momentum
3. Position-dependent force: the gravitational force between two bodies is \tt F = \frac{Gm_{1}m_{2}}{r^{2}}
Consider a one-parameter family of plane curves defined by the equation
\[f\left( {x,y,C} \right) = 0,\]
where \(C\) is a parameter.
The envelope of this family of curves is a curve such that at each point it touches tangentially one of the curves of the family (Figure \(1\)).
The parametric equations of the envelope are defined by the system of equations
\[\left\{ \begin{array}{l} f\left( {x,y,C} \right) = 0\\ {f'_C}\left( {x,y,C} \right) = 0 \end{array} \right.,\]
that is, by the original equation of the family of curves and the equation obtained by differentiating the original equation with respect to the parameter \(C.\) Eliminating the parameter \(C\) from these equations, we can get the equation of the envelope in explicit or implicit form.
The above system of equations is a necessary condition for the existence of an envelope. Besides the envelope curve, the solution of this system may comprise, for example, singular points of the curves of the family that do not belong to the envelope. The set of all solutions of the system is called the discriminant curve. Thus, in general, the envelope is a part of the discriminant curve.
To find the equation of the envelope uniquely, the sufficient conditions are used. They assume that the following inequalities are satisfied (in addition to the above system of equations):
\[ {\left| {\begin{array}{*{20}{c}} {\frac{{\partial f}}{{\partial x}}} & {\frac{{\partial f}}{{\partial y}}}\\ {\frac{{\partial {f'_C}}}{{\partial x}}} & {\frac{{\partial {f'_C}}}{{\partial y}}} \end{array}} \right| \ne 0,}\;\;\;\kern-0.3pt {\frac{{{\partial ^2}f}}{{\partial {C^2}}} \ne 0.} \]
Note that not any one-parameter family of curves has an envelope. A classic counter-example is the family of concentric circles (Figure \(2\)), which is described by the equation
\[{x^2} + {y^2} = {C^2}.\]
There is no envelope for the given set of curves.
Solved Problems
Example 1. Find the envelope of the family of circles given by the equation
\[{\left( {x - C} \right)^2} + {\left( {y - C} \right)^2} = 1.\]
Solution.
We write the system of equations:
\[\left\{ \begin{array}{l} f\left( {x,y,C} \right) = 0\\ {f'_C}\left( {x,y,C} \right) = 0 \end{array} \right..\]
The first equation describes the family of curves and is given in the problem definition. Differentiating it with respect to the parameter \(C,\) we get
\[
{2\left( {x - C} \right) \cdot \left( { - 1} \right) + 2\left( {y - C} \right) \cdot \left( { - 1} \right) = 0,}\;\;\Rightarrow {x - C + y - C = 0,}\;\;\Rightarrow {x + y - 2C = 0.} \]
Thus, the system of equations can be written as
\[\left\{ \begin{array}{l} {\left( {x - C} \right)^2} + {\left( {y - C} \right)^2} = 1\\ x + y - 2C = 0 \end{array} \right..\]
We express \(C\) from the second equation and substitute it in the first equation:
\[
{C = \frac{{x + y}}{2},}\;\;\Rightarrow {{\left( {x - \frac{{x + y}}{2}} \right)^2} + {\left( {y - \frac{{x + y}}{2}} \right)^2} = 1,}\;\;\Rightarrow {{\left( {x - \frac{x}{2} - \frac{y}{2}} \right)^2} + {\left( {y - \frac{x}{2} - \frac{y}{2}} \right)^2} = 1,}\;\;\Rightarrow {\frac{{2{{\left( {y - x} \right)}^2}}}{4} = 1,}\;\;\Rightarrow {{\left( {y - x} \right)^2} = 2,}\;\;\Rightarrow {y - x = \pm \sqrt 2 ,}\;\;\Rightarrow {y = x \pm \sqrt 2 .} \]
Note that the family of circles does not contain singular points, so the resulting solution is the equation of the envelope. It consists of two straight lines:
\[y = x - \sqrt 2 \;\;\text{and}\;\;y = x + \sqrt 2 .\]
Schematically, the family of circles and two envelope lines are shown in Figure \(3.\) |
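The tangency can be verified numerically: the distance from each circle's center $(C, C)$ to the line $y = x + \sqrt 2$ equals the radius $1$ for every $C$. A quick Python check (the function name is ours):

```python
import math

# Distance from the center (C, C) of each circle in the family to the line
# x - y + sqrt(2) = 0; it should equal the radius 1 for every C, so the
# line touches every circle in the family.
def dist_to_envelope(C):
    x0, y0 = C, C  # center of the circle with parameter C
    return abs(x0 - y0 + math.sqrt(2)) / math.hypot(1.0, -1.0)

for C in (-3.0, 0.0, 7.5):
    print(round(dist_to_envelope(C), 12))  # 1.0 each time
```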
This answer says that $Y = zX$ is the simplest example of a non-causal system, because it corresponds to $y_n = x_{n+1}$ and the current output depends on a future input. Yet it is causal because both $x_n$ and $y_n$ are 0 for $n < 0.$ How can something causal be non-causal? I am struggling to resolve this apparent contradiction and want to ask about the relationship between these two notions of causality, and also a third one, which seems to be related to pole values and the ROC rather than the number of poles.
First, we can speak of a causal filter or a causal function (or sequence, or signal).
In general, a filter is causal if its output at the present time ($n$) never depends on the input at future times ($n+m$, with $m>0$). Let us restrict to LTI filters (and we assume discrete time), so that the filter is fully specified by an impulse response function $h(n)$. In that case, the above property can be concisely stated as follows: an LTI discrete-time filter is causal iff $h(n)=0$ for $n<0$.
This motivates the definition of a causal signal. A discrete-time function (signal, sequence) $g(n)$ is causal iff $g(n)=0$ for $n<0$.
Notice that this later definition does not involve filters (it's just motivated by them). And notice that the two can be combined in:
A LTI discrete-time filter is causal iff its response function $h(n)$ is a causal function.
In your assertion regarding the second link ("it is causal because both $x_n$ and $y_n$ are $0$ for $n<0$") you seem to be confusing both meanings. To determine that a filter is causal, one does not look for the causality of inputs or ouputs ($x_n$ and $y_n$) but for the causality of the response function $h_n$
Further, instead of $h(n)$ we can work with its Z-transform $H(z)$; we have $y[n] = x[n] \star h[n] \implies Y(z)=H(z) \, X(z)$. But, remember the relation "signal" $\leftrightarrow$ "Z transform" is not one-to-one unless the ROC (region of convergence) is also specified. A single $H(z)$ can have several corresponding $h[n]$("anti-transform"), for different ROCs. Alternatively, instead of giving a ROC, we might be given a causal (or anticausal) condition. In particular, if we are given a (rational) $H(z)$ and we are told that $h(n)$ is causal, then the ROC must extend outwards from the biggest pole. For an explanation of this, see any Signal Processing textbook, or here.
In your example, $Y(z)= z X(z)$, so $H(z)=z$. To analyze this you can reason in two ways:
1) In terms of zeros and poles. $H(z)$ has a zero at $z=0$ and a pole at infinity. Because there is a single ROC (the whole plane), and it extends inwards from the pole, there can be only one valid $h(n)$, which must be anti-causal.
(I insist: here you could deduce from $H(z)$ that the filter was anti-causal; but normally you can't; say, if $H(z)=z/(z-1)$ you'd have two possible $h[n]$, one causal, one anticausal).
2) Explicitly, formally. By inspection, $H(z)=\sum_{n} h(n) z^{-n} = z \implies h(n)=\delta(n+1)$ Which is anticausal. |
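On finite-length sequences the anticausality is easy to see in code; here is a minimal illustration (my own sketch, zero-padding outside the given samples):

```python
# The system Y(z) = z*X(z) is the unit advance y[n] = x[n+1]: its impulse
# response is h[n] = delta[n+1], which is nonzero at n = -1, hence anticausal.
def advance(x):
    """Apply y[n] = x[n+1] to a finite-length sequence (zeros elsewhere)."""
    return x[1:] + [0]

print(advance([1, 2, 3, 4]))  # -> [2, 3, 4, 0]: each output uses a future input
```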
The $2^2$-Bockstein $\beta_4$ is associated to $$0\to\mathbb{Z}/2\to\mathbb{Z}/{8}\to\mathbb{Z}/{4}\to 0,$$
(The $2^n$-Bockstein homomorphism $$\beta_{2^n}:H^*(-,\mathbb{Z}/{2^n})\to H^{*+1}(-,\mathbb{Z}/2)$$ is associated to the short exact sequence $$0\to\mathbb{Z}/2\to\mathbb{Z}/{2^{n+1}}\to\mathbb{Z}/{2^n}\to 0.$$ Note $\beta_2=Sq^1$ is the Steenrod square.)
Question: What are some closed 5-dimensional manifold $M$ satisfy all the criteria below:
1) $M$ is a non-spin manifold.
2) $\beta_{4}$ is nonzero. $$\beta_{4}:H^1(M,\mathbb{Z}/{4})\to H^{2}(M,\mathbb{Z}/2).$$
3) There exists a non-zero generator $a \in H^1(M,\mathbb{Z}/2)$ such that its Poincare dual PD$(a)$ is an orientable 4-manifold.
If so, what is this submanifold generator of $H^1(M,\mathbb{Z}/2)$, and what is this orientable 4-manifold PD$(a)$? What is this $M$?
Note that the $\mathbb{RP}^5$ satisfies 1) and 2), but it does not satisfy 3), because the PD$(a)$ for $\mathbb{RP}^5$ is a non-orientable 4-manifold $\mathbb{RP}^4$. |
Tagged: group Problem 343
Let $G$ be a finite group and let $N$ be a normal abelian subgroup of $G$.
Let $\Aut(N)$ be the group of automorphisms of $N$.
Suppose that the orders of groups $G/N$ and $\Aut(N)$ are relatively prime.
Then prove that $N$ is contained in the center of $G$. Problem 332
Let $G=\GL(n, \R)$ be the general linear group of degree $n$, that is, the group of all $n\times n$ invertible matrices. Consider the subset of $G$ defined by \[\SL(n, \R)=\{X\in \GL(n,\R) \mid \det(X)=1\}.\] Prove that $\SL(n, \R)$ is a subgroup of $G$. Furthermore, prove that $\SL(n,\R)$ is a normal subgroup of $G$. The subgroup $\SL(n,\R)$ is called the special linear group. Problem 322
Let $\R=(\R, +)$ be the additive group of real numbers and let $\R^{\times}=(\R\setminus\{0\}, \cdot)$ be the multiplicative group of real numbers.
(a) Prove that the map $\exp:\R \to \R^{\times}$ defined by \[\exp(x)=e^x\] is an injective group homomorphism.
(b) Prove that the additive group $\R$ is isomorphic to the multiplicative group \[\R^{+}=\{x \in \R \mid x > 0\}.\] |
In the Wikipedia article for the Digamma function one finds some identities due to Gauss. I've used the fourth of those from the section "Some finite sums involving the digamma function" to show (if I made no mistakes) that for $m>1$
$$\sum_{\substack{1\leq k\leq m-1 \\ (k,m)=1}}\sum_{r=1}^{m-1}\psi\left(\frac{r}{m}\right)\sin\left(\frac{2\pi r k}{m}\right)=\pi\frac{m\phi(m)}{2}-\pi\frac{m\phi(m)}{2}=0,$$
where $\phi(m)$ is Euler's totient function and, as you see in the reference, $\psi(s)$ is the digamma function.
My approach was to use Apostol's Exercise 14 with $f(x)=x$, from Chapter 2 of Apostol, Introduction to Analytic Number Theory, Springer (1976). I also need that $\sum_{\substack{1\leq k\leq n \\ (k,n)=1}}k=\frac{n}{2}\phi(n)+\frac{n}{2}\sum_{d\mid n}\mu(d)=\frac{n}{2}\phi(n)$, on the assumption that $n>1$ (since $\sum_{d\mid n}\mu(d)=0$ for $n>1$), as well as the Gauss identity for the sum of the first $n$ positive integers.
Question 1. Can you say whether my statement (the first identity of this post) is right? Thanks.
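For what it's worth, the identity in Question 1 can be spot-checked numerically. The sketch below (not part of the question) implements the digamma function via the standard recurrence plus an asymptotic expansion, and evaluates the double sum for $m = 12$:

```python
import math

def digamma(x):
    """psi(x) via psi(x) = psi(x+1) - 1/x and the asymptotic series at large x."""
    r = 0.0
    while x < 10.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f * (1/252)))

def double_sum(m):
    """Sum over k coprime to m and r = 1..m-1 of psi(r/m) sin(2 pi r k / m)."""
    total = 0.0
    for k in range(1, m):
        if math.gcd(k, m) != 1:
            continue
        for r in range(1, m):
            total += digamma(r / m) * math.sin(2 * math.pi * r * k / m)
    return total

print(abs(double_sum(12)) < 1e-6)  # True: the double sum vanishes for m = 12
```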
After I tried the same with the other identity that is feasible do this calculations, that is the third, also due to Gauss. My calculations were $$\sum_{\substack{1\leq k\leq m-1 \\ (k,m)=1}}\sum_{r=1}^{m-1}\psi\left(\frac{r}{m}\right)\cos\left(\frac{2\pi r k}{m}\right)=m\phi(m)\log 2+m\cdot\log\left(\prod_{\substack{1\leq k\leq m-1 \\ (k,m)=1}}\sin \frac{k\pi}{m}\right)+\gamma\phi(m).$$
There was a typo, fixed.
Question 2. Is there a nice closed form for the factor $$\prod_{\substack{1\leq k\leq m-1 \\ (k,m)=1}}\sin \frac{k\pi}{m}$$ in the context of this post (I say nice/good in the sense of the first question)? Thanks in advance.
Answer
$\displaystyle \frac{203}{3}=67\frac{2}{3}$ quarts
Work Step by Step
We know that we need to use $7\frac{1}{4}$ quarts of additive for each tank of fuel. Therefore, if we have $9\frac{1}{3}$ tanks, the amount of additive must be the product of these two numbers. First, we estimate the result by rounding to the nearest whole number: $Additive\approx 7*9=63$ quarts Now, we get the precise answer: $Additive=\displaystyle 7\frac{1}{4}*9\frac{1}{3}=\frac{29}{4}*\frac{28}{3}=\frac{29*28}{3*4}=\frac{29*7}{3}=\frac{203}{3}=67\frac{2}{3}$ quarts |
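The computation above can be confirmed with exact rational arithmetic; a quick Python check:

```python
from fractions import Fraction

# Exact-arithmetic check: 7 1/4 quarts per tank times 9 1/3 tanks.
additive = Fraction(29, 4) * Fraction(28, 3)
print(additive)                                    # 203/3
print(additive // 1, additive - (additive // 1))   # 67 2/3
```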
Answer
To determine a fraction with an indicated denominator, you first divide the indicated denominator by the denominator of the fraction. Then you multiply the numerator by the resulting number to get the new numerator.
Work Step by Step
To determine a fraction with an indicated denominator, you first divide the indicated denominator by the denominator of the fraction. Example: $\frac{2}{5} = \frac{}{20}$. First step: $20 \div 5 = 4$. Then you multiply the numerator by the resulting number to get the new numerator. Second step: $2 \cdot 4 = 8$. And the result is $\frac{8}{20}$.
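The two steps can be written as a tiny function (a sketch of ours, assuming the indicated denominator is a multiple of the original one):

```python
# Rewrite a/b with the indicated denominator d (d must be a multiple of b).
def with_denominator(a, b, d):
    factor = d // b         # step 1: divide the indicated denominator by b
    return a * factor, d    # step 2: multiply the numerator by that factor

print(with_denominator(2, 5, 20))  # -> (8, 20), i.e. 2/5 = 8/20
```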
If the predominating mechanism is scattering with no absorption, we can define in a similar manner linear, atomic and mass scattering coefficients, using the symbol \(\sigma\) rather than \(\alpha\). For the physical distinction between absorption and scattering, see section 5.1. And if both absorption and scattering are important, we can define linear, atomic and mass extinction coefficients, using the symbol \(\kappa\), where
\[\kappa = \alpha + \sigma.\]
All the foregoing equations are valid, whether we use linear, atomic or mass absorption, scattering or extinction coefficients, and whether we refer to radiation integrated over all frequencies or whether at a particular wavelength or within a specified wavelength range.
The mass extinction coefficient is generally referred to as the opacity.
Yes, the two are intimately related. One way, as in QMechanic's answer, is via Wick rotations, but in general there is a lot more freedom once you allow integration contours to go over into the complex plane. In my area, strong field physics, the use of complex time to understand tunnelling problems is everyday bread and butter for many people, and it is the only way to use semiclassical models for tunnelling situations.
Tunnelling ionization is what happens when you hit an atom with a very strong laser field of very low frequency. The frequency $\omega$ of the field needs to be much smaller than the ionization potential $I_p=\tfrac12\kappa^2$ of the atom, which means that you need many photons to ionize it, but for such slowly-varying fields the physical picture is somewhat different. If the (so-called) Keldysh parameter$$\gamma=\frac{\kappa \omega}{E_0}$$(where $E_0$ is the peak electric field of the laser, and atomic units are assumed) is smaller than one, then it is more useful to think in terms of a quasistatic picture. That means that you consider the dipole potential of the laser, $V_L=-\mathbf E·\mathbf r$, as a fixed potential which is added to the atomic potential, and which varies slowly in time.
At the peak of the field, this added linear potential bends the total potential surface deep enough to make a barrier which atomic electrons (particularly, the ones on the highest occupied atomic orbital) can tunnel through.
Tunnelling rates depend very sensitively on the height and width of the barrier, which essentially means that the field needs to be very strong (i.e. on the order of $0.01\:\text{a.u.} \approx 5\times 10^9 \text V/\text m$) for this to happen.
The first to realize this were Keldysh,
L. V. Keldysh, Ionization in the field of a strong electromagnetic wave.
Sov. Phys. JETP 20 no. 5, 1307-1314 (1965) (pdf) [ Zh. Eksp. Teor. Fiz. 47, 1945 (1964)].
and the guys now known as PPT,
A.M. Perelomov, V.S. Popov, M.V. Terent'ev, Ionization of Atoms in an Alternating Electric Field.
Sov. Phys. JETP 20 no. 5, 924-934 (1966) (pdf) [ Zh. Eksp. Teor. Fiz. 50, 1393 (1966)].
Their work doesn't make for particularly easy reading, but it's fairly along the semiclassical WKB lines you point out in your question.
More recently, though, this understanding has crystallized as the picture known as the
quantum orbit view of strong-field phenomena. A good review is
P. Salières
et al., Feynman's Path-Integral Approach for Intense-Laser-Atom Interactions. Science 292 no. 5518, 902-905 (2001).
and I'll try and give a taster for what the overall feel of the field is.
Consider, then, an atom that's initially in its ground state $|g⟩$ with energy $E_g=-I_p=-\tfrac12\kappa^2$, which is subjected to an oscillating potential $V_L=-E_0z\cos(\omega t)$, which is slow (so $\hbar\omega\ll I_p$) and strong enough to be in the tunnelling regime (so $\gamma=\kappa\omega/E_0<1$). In this situation one can usually ignore multi-electron effects and work in the Single Active Electron approximation, at least as a first treatment.
The problem, then, is to solve the time-dependent Schrödinger equation$$i\frac{\partial}{\partial t}|\psi(t)⟩=\left[\frac{\mathbf p^2}{2m}+V_a(\mathbf r) +V_L\right]|\psi(t)⟩$$under the initial condition that $|\psi⟩=|g⟩$ before the pulse starts. This is unfortunately impossible to do analytically in its full form, but one can separate the two pieces of the hamiltonian to get a pretty workable solution. This is known as the Strong Field Approximation, and it essentially means neglecting the effect of the ion's attraction once the electron has been ionized, and the influence of deeper orbitals is neglected. It means that you have two fairly good approximate solutions depending on whether your electron is still in the ground state,$$|\psi(t)⟩=e^{-iE_g t}|g⟩,\quad\text{with}\quad i\frac{\partial}{\partial t}|\psi(t)⟩=\left[\frac{\mathbf p^2}{2m}+V_a(\mathbf r)\right]|\psi(t)⟩,$$or has been ionized into a Volkov state,$$|\psi(t)⟩=e^{\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}|\mathbf p+\mathbf A(t)⟩,\quad\text{with}\quad i\frac{\partial}{\partial t}|\psi(t)⟩=\left[\frac{\mathbf p^2}{2m}+V_L\right]|\psi(t)⟩,$$where $\mathbf A$ is the vector potential of the field and $|\mathbf p +\mathbf A(t)⟩$ is a plane wave with kinetic momentum $\mathbf k=\mathbf p+\mathbf A(t)$. I will calculate the ionization amplitude to an asymptotic drift momentum $\mathbf p$, so the quantity of interest is $⟨\mathbf p |\psi(\infty)⟩$.
In general, the electron's state will be some sort of superposition of these two solutions, so that you can write$$|\psi(t)⟩=a(t)e^{-iE_g t}|g⟩+\int\text d\mathbf p \,b(\mathbf p,t) e^{\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}|\mathbf p+\mathbf A(t)⟩.$$You then substitute this into the TDSE, and cancel out the obvious terms, which leaves you with the equivalent form$$\left\{\begin{align}i\frac{d}{dt}a(t)&=a⟨g|V_L|g⟩+\int\text d\mathbf p \,b(\mathbf p,t) e^{+iE_g t}e^{\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}⟨g|V_a|\mathbf p+\mathbf A(t)⟩\\i\frac{\partial}{\partial t}b(\mathbf p,t) & =a(t)e^{-iE_g t}e^{-\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}⟨\mathbf p+\mathbf A(t)|V_L|g⟩\\&\qquad +\int\text d\mathbf p'\,b(\mathbf p',t) e^{-\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}e^{\frac i2 \int_t^\infty (\mathbf p'+\mathbf A(\tau))^2\text d\tau}⟨\mathbf p+\mathbf A(t)|V_a|\mathbf p'+\mathbf A(t)⟩.\end{align}\right.$$This can be further simplified by neglecting continuum-continuum transitions (i.e. the integral on the second equation) and ground state depletion (i.e. setting $a(t)=1$ in the second equation). (Both of these can be lifted, but it just makes everything uglier.) If you do that, the TDSE finally becomes something doable,$$i\frac{\partial}{\partial t}b(\mathbf p,t) =e^{-iE_g t}e^{-\frac i2 \int_t^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}⟨\mathbf p+\mathbf A(t)|V_L|g⟩$$and you can integrate it to get$$b(\mathbf p,\infty)=⟨\mathbf p|\psi(\infty)⟩ =-i\int_{-\infty}^\infty\text dte^{iI_p t}e^{+\frac i2 \int^t_\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}⟨\mathbf p+\mathbf A(t)|V_L(t)|g⟩$$Now, this integral is perfectly fine and it can be done numerically if needed, but doing that is pretty painful because it is highly oscillatory. A typical example looks like this:
(Reasonable parameters are $E_0=0.05$, $\omega=0.055$ and $I_p=0.5$ in atomic units. This is for $p_{||}=1$ over 3/2 of a laser cycle.)
This is bad because you need very high accuracy on each of the positive and negative lobes of the integrand to get only mediocre accuracy on their difference, so even in this simplified version the problem is numerically tough. This oscillatory behaviour is driven by the fact that the $e^{iI_p t}$ term oscillates much faster than the laser-cycle timescales (~$2\pi/\omega$) at which the integration takes place.
The way to get out of this is to use the saddle point method, which is where complex times come in. The idea is to deform the integration contour into the complex plane to look for something which is numerically nicer, by turning the oscillating imaginary exponential into nice, decaying real exponentials. If this is done well enough, one can even skip the integration entirely, and just use the contributions from the top of the resulting Gaussian-like bumps.
The way to do this is to look for times $t_s$ where the derivative of the exponent vanishes:$$0=\frac d{dt}\left[I_p t+\frac 12 \int^t_\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau\right]_{t_s}=I_p+\frac12(\mathbf p+\mathbf A(t_s))^2.$$This evidently cannot happen for real times, so you need a complex saddle-point time for this to work.
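To make the saddle-point equation concrete, it can be solved numerically with a complex arccosine. The sketch below is not from the original derivation: it assumes a monochromatic vector potential of the form $A(t)=A_0\cos\omega t$ with $A_0=E_0/\omega$, and reuses the parameter values quoted for the figure earlier ($E_0=0.05$, $\omega=0.055$, $I_p=0.5$ in atomic units).

```python
import cmath

# Hypothetical atomic-unit parameters (matching the figure caption earlier);
# the form A(t) = A0*cos(omega*t) is an assumption for this sketch.
E0, omega, Ip = 0.05, 0.055, 0.5
A0 = E0 / omega

def saddle_times(p):
    """Solve Ip + (p + A0*cos(omega*ts))^2 / 2 = 0 for complex ts."""
    roots = []
    for sign in (+1, -1):
        # p + A0*cos(omega*ts) = +/- i*sqrt(2*Ip)
        z = (-p + sign * 1j * cmath.sqrt(2 * Ip)) / A0
        # principal branch; shifting by multiples of 2*pi/omega gives the
        # saddle points in the other laser cycles
        roots.append(cmath.acos(z) / omega)
    return roots

for ts in saddle_times(1.0):
    # check that the saddle-point equation is indeed satisfied
    residual = Ip + 0.5 * (1.0 + A0 * cmath.cos(omega * ts)) ** 2
    print(ts, abs(residual))
```

As expected, the solutions come out with a nonzero imaginary part: the equation has no real roots.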
The final expression for the ionization amplitude, then, is of the form$$b(\mathbf p,\infty)=-i\sum_j\sqrt{\frac{2\pi}{i(\mathbf p+\mathbf A(t_s^{(j)}))\cdot\mathbf E(t_s^{(j)})}}e^{-\frac i2 \int_{t_s^{(j)}}^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}⟨\mathbf p+\mathbf A(t_s^{(j)})|V_L(t_s^{(j)})|g⟩e^{iI_p t_s^{(j)}},$$where you sum over all the relevant saddle points, typically one for every field maximum.
The upshot of all this is that the ionization amplitude can now be intuitively understood in a semiclassical picture:
The electron sits happily in the ground state until the saddle-point time $t_s$, accumulating the phase $e^{iI_p t_s}$ along the way.
The saddle-point time is easily interpreted as the ionization time, at which the electron makes a dipole transition to the continuum state $|\mathbf p+\mathbf A(t_s)⟩$, with a transition amplitude $$\sqrt{\frac{2\pi}{i(\mathbf p+\mathbf A(t_s))\cdot\mathbf E(t_s)}}⟨\mathbf p+\mathbf A(t_s)|V_L(t_s)|g⟩.$$
After that, the electron is free in the laser field, and it goes on to accumulate the phase $e^{-\frac i2 \int_{t_s}^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}$.
Even better, once it's liberated the electron simply whisks away from the origin along the semiclassical trajectory$$\mathbf r_\text{cl}(t)=\int_{t_s}^t (\mathbf p+\mathbf A(\tau))\,\text d\tau.$$
So everything is nice and shiny, and it works perfectly, except that... the barrier has mysteriously disappeared. Even though this is a tunnelling problem, the electron seems to simply skip past the region where the barrier should be.
The solution is exactly what you describe in the question: at the tunnelling time $t_s$, and for some time afterwards, the kinetic energy $\tfrac12(\mathbf p+\mathbf A(t))^2$ is negative (and equal to $-I_p$ at $t_s$ itself), which means that the velocity is imaginary, but the time is also imaginary and the two combine to make a (mostly) real displacement. Once the time gets down to the real axis, you are essentially out of the barrier.
One thing to notice is that when I say "phase" in the bullet points above I'm mostly lying through my teeth. Because the saddle-point time $t_s$ is complex, the 'phases' $e^{iI_p t_s}$ and $e^{-\frac i2 \int_{t_s}^\infty (\mathbf p+\mathbf A(\tau))^2\text d\tau}$ are not pure complex exponentials: their exponents have sizable negative real parts, which makes them very small in absolute value. This is where the unlikeliness of tunnelling is expressed in this formalism, and it is the main controlling factor on the ionization rate.
Now, as has been pointed out in the comments, this use of complex time can definitely be seen simply as a mathematical trick, without any physical significance. This is certainly the view of parts of the strong field community, and there is a healthy debate over the matter; at the least one can say that we don't really understand this as well as we'd like.
However, there is a certain niceness about it, and it does seem to sort-of fit. What does the complex time mean? If you split it into its real and imaginary parts as $t_s=t_0+i\tau_T$, then they each have a separate and distinct role. If you integrate from $t_s$ down to its real part $t_0$, it turns out that the semiclassical position $\mathbf r_\text{cl}(t_0)$ is largely real and it lies just outside of the tunnelling barrier, so that it can be seen as the time when it pops up into the continuum. (Indeed, one can make very successful classical models by simply taking this as the ionization time, disregarding the imaginary part of the semiclassical position, and propagating classically from there.)
The imaginary part $\tau_T$, on the other hand, directly appears in the ionization amplitudes, and it is well identified as the 'time spent under the barrier', if such a thing makes sense. For example, the transverse momentum distribution after ionization is of the form $e^{-\tfrac12\tau_T p_\perp^2}$, which ties in very well with the fact that borrowing an extra energy $\tfrac12p_\perp^2$ for a time $\tau_T$ will make the process less likely by the product of the two. The two legs of the integration contour, from $t_s$ to $t_0$ and from there along the real axis, have very intuitive interpretations as 'under the barrier' and 'outside of the barrier'.
It's important to keep in mind, though, that once you go into the complex plane, time becomes a much more complicated concept. The very same contour-choice freedom that allows you to pick a complex saddle-point time also makes any contour between $t_s$ and the final detection time at $t=\infty$ valid. This holds essentially any time you go into complex times, and it does make quantum orbits a bit of a handful to grasp.
I'll stop here, but I hope this is enough to show that, putting aside the questions about its physical reality, complex time is indeed an important and useful tool for dealing with tunnelling problems. |
There are many badly defined integrals in physics. I want to discuss one of them which I see very often. $$\int_0^\infty \mathrm{d}x\,e^{i p x}$$ I have seen this integral in many physical problems. Many people seem to think it is a well-defined integral, and it is calculated as follows:
We will use regularization: we introduce a small real parameter $\varepsilon$ and after the calculation set $\varepsilon = 0$.
$$I_0=\int_0^\infty \mathrm{d}x\,e^{i p x}e^{ -\varepsilon x}=\frac{1}{\varepsilon-i p}\xrightarrow{\varepsilon\to 0}\frac{i}{p}$$
But I can obtain an arbitrary value for this integral! I will use regularization too, but I will use another parametrization:
$$I(\alpha)=\int_0^\infty \mathrm{d}x\,e^{i p x}=\int_0^\infty dx \left(1+\alpha\frac{\varepsilon \sin px}{p}\right)e^{i p x}e^{ -\varepsilon x}$$ where $\varepsilon$ is a regularization parameter and $\alpha$ is an arbitrary constant; I use $\int_0^\infty \mathrm{d}x\,\sin{(\alpha x)} e^{ -\beta x}=\frac{\alpha}{\alpha^2+\beta^2}$.
After a not-so-difficult calculation I obtain that $I(\alpha)=\frac{i}{p}\left(1+\frac{\alpha}{2}\right)$.
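This claim can be checked numerically: at small but finite $\varepsilon$, a direct quadrature of the regularized integrand should approach $\frac{i}{p}\left(1+\frac{\alpha}{2}\right)$. A rough pure-Python check (Simpson's rule; the cutoff $X$ and step count are arbitrary choices for this sketch):

```python
import cmath, math

def I_alpha(alpha, p=1.0, eps=0.05, X=400.0, n=40000):
    # Composite Simpson rule for the regularized integrand on [0, X];
    # X is chosen so that exp(-eps*X) is negligible.
    h = X / n
    f = lambda x: (1 + alpha * eps * math.sin(p * x) / p) * cmath.exp((1j * p - eps) * x)
    s = f(0) + f(X)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3

print(I_alpha(0.0))  # close to i/p = i      (up to O(eps) corrections)
print(I_alpha(2.0))  # close to 2i/p = 2i    (alpha = 2 doubles the "answer")
```

The two regularizations really do converge to different values as $\varepsilon\to0$, which is the point of the question.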
I have often seen this integral in intermediate calculations, but usually people do not take this problem into account and just use $I_0$. I don't understand why.
I know only one example where I can explain why we should use $I_0$. In field theory, when we calculate $U(-\infty,0)$, where $U$ is an evolution operator, it is proportional to $\int^0_{-\infty} \mathrm{d}t\,e^{ -iE t}$. It is necessary for the Weizsaecker-Williams approximation in QED, or the DGLAP equation in QCD, because in axiomatic QFT we set $T\to \infty(1-i\varepsilon)$.
My question is: why, in the calculation of the integral $\int_0^\infty \mathrm{d}x\,e^{i p x}$, do people use $I_0$? Why do people use the $e^{ -\varepsilon x}$ regularization function? From my point of view this regularization is no better and no worse than any other.
I would like to know if there is a rule to prove this. For example, if I use the distributive law I will get only $(A \lor A) \land (A \lor \neg B)$.
I find pictures are great for anything simple enough to use them, which this is.
Remember:
AND means the area taken up by both things. So the middle one is what is taken up outside B, but also inside A. Their junction is not counted because it is inside A but not outside B.
OR means it is covered by either one or both. Both of them cover the part of A that is outside B, and the junction is covered by A (first picture) so it is counted too. All in all, you just have A again.
Sorry if this is too simplistic, not sure what level you are at.
There are many ways to see this. One is a truth table. Another is to use the distributive rule: $$ A \lor (A \land \lnot B) = (A \land \top) \lor (A \land \lnot B) = A \land (\top \lor \lnot B) = A \land \top = A. $$
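The truth-table route mentioned above is small enough to brute-force mechanically; a tiny illustrative script checking all four assignments:

```python
from itertools import product

# Absorption law: A OR (A AND NOT B) has the same value as A
# for every assignment of A and B.
for A, B in product([False, True], repeat=2):
    lhs = A or (A and not B)
    print(A, B, lhs, lhs == A)
```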
I would use my least favourite inference rule: Disjunction Elimination. Basically, it says that if $R$ follows from $P$, and $R$ follows from $Q$, then $R$ must be true if $P \vee Q$: $$(P \to R), (Q \to R), (P \lor Q) \vdash R$$
So let's assume $A \lor (A \land \neg B)$. Set $P = A$, $Q = A \land \neg B$, $R = A$ and apply the rule:
If $P$ ($= A$) we are done. If $Q = A \land \neg B$ then $A$ (by conjunction elimination, $S \land T \vdash S$). By disjunction elimination, $A \lor (A \land \neg B) \to A$.
The converse is trivial: assume $A$; then by one of the variants of disjunction introduction ($S \vdash S \lor T$ for any $T$), $A \to A \lor (\cdots)$.
Note that, when we know that $C$ implies $D$, we have $C \lor D = D$. This is analogous to taking the union of a set (corresponding to $D$) and one of its subsets ($C$): we get the largest set ($D$) back.
In your case, $C = A \land \lnot B$ and $D = A$, and the implication trivially holds.
A more intuitive look:

A is always true when A is true. A & -B is only true when A is true. Intuitively, applying OR to these two would produce a result C which is always true when A is true. As such, C is always true when A is true.

(Stop reading here if this explanation works for you.)

This is how I think about this problem. However, this explanation is not complete, since all we've shown is that A -> C and not A <-> C. So, let's also show that C -> A.

A is always false when A is false. A & -B is always false when A is false. Intuitively, applying OR to these two would produce a result C which is always false when A is false. As such, C is always false when A is false; -A -> -C, which is the same thing as C -> A.

So A -> C and C -> A, so A <-> C.
Sometimes, people are confused by the letters. People like food, because it's easy to think about.
Pretend I ask you to flip a coin to choose between one OR the other of the following two options:
An Apple, OR... An Apple, and definitely no Banana.
[The first is equal to "A", the second "A and not B". But don't think of the letters. Think about the apple, and whether you also get a banana.]
That first one really means "An apple fersure, and maybe you'll get a banana."
So leaving something out is the same as saying "maybe".
Looking at them as a pair, whichever you get, there's definitely going to be an Apple involved. Yay. And if your coinflip picks the right one, you might get a Banana.
But isn't that the same as saying "maybe you'll get a Banana"? Just, with half the likelihood?
So all you can definitely logically say is, you'll get an Apple. You can't say anything about whether you'll get a Banana.
Similar to the answer of Yuval Filmus: using boolean algebra, in engineering notation, and factoring (or factorising) out $A$,
$A+A\cdot\bar B=A\cdot(1+\bar B)=A\cdot1=A$
It seems as though no one mentioned it yet so I will go ahead.
The law to deal with these kinds of problems is the absorption law. It states that p v (p ^ q) = p and also that p ^ (p v q) = p. If you try to use the distributive law on this it will keep you going in circles forever:
(A v A) ^ (A v ~B) = A ^ (A v ~B) = (A ^ A) v (A ^ ~B) = A v (A ^ ~B) = (A v A) ^ (A v ~B)
I used the wrong symbols for not and equals, but the point here is that when you are going in circles, or when there is an and-or mismatch, usually you should look to the absorption law.
B is irrelevant to the outcome, as you will notice if you put this in a truth table.
Another intuitive way to look at this:
If A is a set, then we can say any given object is either (in A) or (not in A).
Now look at
S = A or (A and not B):
If an object is in A, then "A or anything" contains all elements in A, so the object will also be in S.
If an object isn't in A, then "A and anything" excludes all elements not in A, so the object is neither in A nor in (A and not B), so it isn't in S.
So the outcome is that any object in A is in S, and any object not in A isn't in S. So intuitively, the objects in S must be exactly those in A, and no other objects.
When two sets have identical elements, they are defined to be the same set. So
A = S.
A simple method you can always use if you're stuck is case analysis.
Assume $A$ is true. In that case the left side is true, because true OR anything is true.
Assume $A$ is false. False AND anything is false; false OR false is false. So both sides are false.
Since $A$ can have no more possible values, you've proven the proposition.
Let's consider the four cases: 1) A = 1, B = 0; 2) A = 0, B = 1; 3) A = 1, B = 1; 4) A = 0, B = 0.

Case 1: A or (A and !B) => 1 or (1 and 1) => 1 or 1 => 1
Case 2: A or (A and !B) => 0 or (0 and 0) => 0 or 0 => 0
Case 3: A or (A and !B) => 1 or (1 and 0) => 1 or 0 => 1
Case 4: A or (A and !B) => 0 or (0 and 1) => 0 or 0 => 0

In all four cases the result depends only on A, not on B, so the result is A.
If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order

Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
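Before attempting a proof, it can help to check the statement on the smallest instance, $G=S_3$ with $n=3$: the three transpositions have order $2$, and the remaining elements form the alternating group $A_3$, which is abelian, normal, and of odd order. A small script (written for this post, not part of the original problem) that verifies this:

```python
from itertools import permutations

def compose(p, q):           # (p∘q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def order(p):
    e = tuple(range(len(p)))
    k, q = 1, p
    while q != e:
        q, k = compose(p, q), k + 1
    return k

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

G = list(permutations(range(3)))     # the symmetric group S3, order 6
S = [g for g in G if order(g) == 2]  # the three transpositions
H = [g for g in G if order(g) != 2]  # the rest

assert len(S) == len(H) == 3
# H is closed under composition (hence a subgroup of the finite group G)
assert all(compose(a, b) in H for a in H for b in H)
# H is abelian and of odd order
assert all(compose(a, b) == compose(b, a) for a in H for b in H)
assert len(H) % 2 == 1
# H is normal: g H g^{-1} = H for all g in G
assert all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)
print("verified for S3")
```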
Problem 470
Let $G$ be a finite group of order $p^n$, where $p$ is a prime number and $n$ is a positive integer.
Suppose that $H$ is a subgroup of $G$ with index $[G:H]=p$. Then prove that $H$ is a normal subgroup of $G$.
(Michigan State University, Abstract Algebra Qualifying Exam)

Problem 332
Let $G=\GL(n, \R)$ be the
general linear group of degree $n$, that is, the group of all $n\times n$ invertible matrices. Consider the subset of $G$ defined by \[\SL(n, \R)=\{X\in \GL(n,\R) \mid \det(X)=1\}.\] Prove that $\SL(n, \R)$ is a subgroup of $G$. Furthermore, prove that $\SL(n,\R)$ is a normal subgroup of $G$. The subgroup $\SL(n,\R)$ is called the special linear group.
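Normality here follows from the fact that $\SL(n,\R)$ is the kernel of the determinant homomorphism, so $\det(g s g^{-1}) = \det(g)\det(s)\det(g)^{-1} = \det(s) = 1$. A quick $2\times2$ numerical illustration of this conjugation invariance (the specific matrices are arbitrary choices, not from the problem):

```python
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def inv(a):
    d = det(a)
    return [[a[1][1] / d, -a[0][1] / d], [-a[1][0] / d, a[0][0] / d]]

g = [[3.0, 1.0], [1.0, 1.0]]   # an arbitrary element of GL(2, R), det = 2
s = [[1.0, 2.0], [0.0, 1.0]]   # an element of SL(2, R), det = 1

conj = mul(mul(g, s), inv(g))  # g s g^{-1}
print(det(conj))               # stays 1: conjugation keeps us inside SL
```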
"La variante di Lüneburg" and China's rice productionIn 1993, Paolo Maurensig, an Italian writer from Gorizia, wrote a novel entitled "La variante di Lüneburg" (Adelphi, 1995, pp. 164, ISBN 88-459-0984-0). The novel is set in Nazi Germany during World War II and the main theme is the game of chess.
At the beginning of the story, Maurensig tells a legend according to which the game of chess was invented by a Chinese peasant with a formidable gift for mathematics. (There are different versions of this story; as far as I understand, the earliest written record is contained in the Shahnameh and takes place in India instead.) The peasant asks the king, in exchange for the game he invented, for a quantity of rice equal to that obtained with the following procedure: first one grain of rice should be placed on the first square of the chess board, then two grains on the second square, then four on the third, and so on, every time doubling the number of rice grains. The king accepts, not realizing what he is agreeing to. Let's try to calculate how much rice that would be. The series implied by the peasant has a more general form, called the geometric series, which is well known in mathematics:
$$
s_m = \sum_{k=0}^m x^k
$$
If we calculate the first steps in the sum we obtain:
\begin{eqnarray*}
s_0 &=& 1 \\
s_1 &=& 1+x \\
s_2 &=& 1+x+x^2 \\
\dots && \\
\end{eqnarray*}
To see how this relates to our problem, we can set $x=2$ and see that the sum will be: $1+ 2+ 4+ 8+ \dots $
If we observe $s_1$ and $s_2$ in the previous equations, we see that we can write the second in terms of the first in two different ways:
\begin{eqnarray*}
s_2 &=& s_1+x^2 \\
s_2 &=& 1 + x (1+x) = 1 + x s_1.\\
\end{eqnarray*}
In the first case, we grouped the first two terms in $s_2$, whereas in the second case we grouped the last two terms and realized that they share a common factor $x$.
If we continue writing the terms of the sum for higher orders, we realize that what we obtained above is true in general:
\begin{eqnarray*}
s_{m} &=& 1+x+\dots+x^m \\
s_{m+1} &=& 1+x+\dots+x^m+x^{m+1} = s_m+x^{m+1} \\
&=& 1 + x (1+\dots+x^{m-1}+x^m) = 1 + x s_m, \\
\end{eqnarray*}
which also means that the right-hand side of the last two equations above must be equal:
\begin{eqnarray*}
s_m+x^{m+1} &=& 1 + x s_m,
\end{eqnarray*}
and, therefore, rearranging:
\begin{eqnarray*}
s_m &=& \frac{x^{m+1}-1}{x-1}.
\end{eqnarray*}
This is the general solution for the sum of the geometric series. If we want to know
how many grains of rice the king will have to give to the peasant, we need to substitute the values of $x$ and $m$. We already saw above that $x$ should be equal to 2. We also know that the chess board has 8 rows and 8 columns, giving 64 squares. Because we start with $s_0$ in the first square, we need to calculate the series for $m=63$, that will correspond to the last square:
$$ s_{63} = \frac{2^{64}-1}{2-1} = 18\,446\,744\,073\,709\,551\,615, $$
which in words would sound something like eighteen quintillions...
In 1999, China produced approximately 198 million tons of rice, which corresponds to $198\,000\,000\,000\,000$ grams. If we assume for simplicity that the production is constant over the years and that one gram of rice is approximately 50 grains, the king will have to give the peasant the entire rice production of China for more than 1800 years.
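The arithmetic above is easy to reproduce; a short script using the same assumptions (198 million tons per year, about 50 grains per gram):

```python
# Grains owed on a 64-square board, and how many years of China's 1999
# rice production that represents under the article's assumptions.
grains = sum(2**k for k in range(64))        # the geometric sum s_63
assert grains == 2**64 - 1                   # closed form (x^(m+1)-1)/(x-1), x=2, m=63

grains_per_year = 198_000_000 * 1_000_000 * 50   # tons -> grams -> grains
years = grains / grains_per_year

print(grains)        # 18446744073709551615
print(round(years))  # roughly 1863 years
```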
Needless to say, when the king realized the mistake, he killed the peasant. |
Consider a differential equation of type
\[{P\left( {x,y} \right)dx + Q\left( {x,y} \right)dy }={ 0,}\]
where \(P\left( {x,y} \right)\) and \(Q\left( {x,y} \right)\) are functions of two variables \(x\) and \(y\) continuous in a certain region \(D.\) If
\[\frac{{\partial Q}}{{\partial x}} \ne \frac{{\partial P}}{{\partial y}},\]
the equation is not exact. However, we can try to find a so-called integrating factor, which is a function \(\mu \left( {x,y} \right)\) such that the equation becomes exact after multiplication by this factor. If so, then the relationship
\[{\frac{{\partial \left( {\mu Q\left( {x,y} \right)} \right)}}{{\partial x}} }={ \frac{{\partial \left( {\mu P\left( {x,y} \right)} \right)}}{{\partial y}} }\]
is valid. This condition can be written in the form:
\[
{{Q\frac{{\partial \mu }}{{\partial x}} + \mu \frac{{\partial Q}}{{\partial x}} }={ P\frac{{\partial \mu }}{{\partial y}} + \mu \frac{{\partial P}}{{\partial y}},\;\;}}\Rightarrow {{Q\frac{{\partial \mu }}{{\partial x}} - P\frac{{\partial \mu }}{{\partial y}} }={ \mu \left( {\frac{{\partial P}}{{\partial y}} - \frac{{\partial Q}}{{\partial x}}} \right).}} \]
The last expression is the partial differential equation of first order that defines the integrating factor \(\mu \left( {x,y} \right).\)
Unfortunately, there is no general method to find the integrating factor. However, one can mention some particular cases for which the partial differential equation can be solved and as a result we can construct the integrating factor.
1. Integrating Factor Depends on the Variable \(x:\) \(\mu = \mu \left( x \right).\)
In this case we have \({\large\frac{{\partial \mu }}{{\partial y}}\normalsize} = 0,\) so the equation for \(\mu \left( {x,y} \right)\) can be written in the form:
\[{\frac{1}{\mu }\frac{{d\mu }}{{dx}} }={ \frac{1}{Q}\left( {\frac{{\partial P}}{{\partial y}} - \frac{{\partial Q}}{{\partial x}}} \right).}\]
The right side of this equation must be a function of only \(x.\) We can find the function \(\mu \left( x \right)\) by integrating the last equation.
2. Integrating Factor Depends on the Variable \(y:\) \(\mu = \mu \left( y \right).\)
Similarly, if \({\large\frac{{\partial \mu }}{{\partial x}}\normalsize} = 0,\) we get the following ordinary differential equation for the integrating factor \(\mu:\)
\[{\frac{1}{\mu }\frac{{d\mu }}{{dy}} }={ -\frac{1}{P}\left( {\frac{{\partial P}}{{\partial y}} - \frac{{\partial Q}}{{\partial x}}} \right),}\]
where the right side depends only on \(y.\) The function \(\mu \left( y \right)\) can be found by integrating the given equation.
3. Integrating Factor Depends on a Certain Combination of the Variables \(x\) and \(y:\) \(\mu = \mu \left( {z\left( {x,y} \right)} \right).\)
The new function \({z\left( {x,y} \right)}\) can be, for example, of the following type:
\[{z = \frac{x}{y},\;\;\;}\kern-0.3pt{z = xy,\;\;\;}\kern0pt{z = {x^2} + {y^2},\;\;\;}\kern0pt{z = x + y,}\]
and so on.
Here it is important that the integrating factor \(\mu \left( {x,y} \right)\) becomes a function of one variable \(z:\)
\[\mu \left( {x,y} \right) = \mu \left( z \right)\]
and can be found from the differential equation:
\[{\frac{1}{\mu }\frac{{d\mu }}{{dz}} }={ \frac{{\frac{{\partial P}}{{\partial y}} - \frac{{\partial Q}}{{\partial x}}}}{{Q\frac{{\partial z}}{{\partial x}} - P\frac{{\partial z}}{{\partial y}}}}.}\]
We assume that the right side of the equation depends only on \(z\) and the denominator is not zero.
Below we consider some particular examples of the equation
\[{P\left( {x,y} \right)dx + Q\left( {x,y} \right)dy }={ 0,}\]
where we can determine the integrating factor. The general conditions of existence of the integrating factor are derived in the theory of Lie groups.
Solved Problems
Click a problem to see the solution.
Example 1. Solve the equation \(\left( {1 + {y^2}} \right)dx + xydy = 0.\)

Example 2. Solve the differential equation \(\left( {x - \cos y} \right)dx - \sin ydy = 0.\)

Example 3. Solve the differential equation \(\left( {x{y^2} - 2{y^3}} \right)dx + \left( {3 - 2x{y^2}} \right)dy = 0.\)

Example 4. Solve the equation \(\left( {xy + 1} \right)dx + {x^2}dy = 0.\)

Example 5. Solve the equation \(ydx + \left( {{x^2} + {y^2} - x} \right)dy = 0\) using the integrating factor \(\mu \left( {x,y} \right) = {x^2} + {y^2}.\)

Example 1.

Solve the equation \(\left( {1 + {y^2}} \right)dx + xydy = 0.\)
Solution.
First we test this differential equation for exactness:
\[
{{\frac{{\partial Q}}{{\partial x}} }={ \frac{\partial }{{\partial x}}\left( {xy} \right) }={ y,\;\;}}\kern0pt {{\frac{{\partial P}}{{\partial y}} }={ \frac{\partial }{{\partial y}}\left( {1 + {y^2}} \right) }={ 2y.}} \]
As one can see, this equation is not exact. We try to find an integrating factor to convert the equation into exact. Calculate the function
\[{\frac{{\partial P}}{{\partial y}} - \frac{{\partial Q}}{{\partial x}} }={ 2y - y = y.}\]
One can notice that the expression
\[{\frac{1}{Q}\left( {\frac{{\partial P}}{{\partial y}} - \frac{{\partial Q}}{{\partial x}}} \right) }={ \frac{1}{{xy}} \cdot y }={ \frac{1}{x}}\]
depends only on the variable \(x.\) Hence, the integrating factor will also depend only on \(x:\) \(\mu = \mu \left( x \right).\) We can get it from the equation
\[\frac{1}{\mu }\frac{{d\mu }}{{dx}} = \frac{1}{x}.\]
Separating variables and integrating, we obtain:
\[
{\int {\frac{{d\mu }}{\mu }} = \int {\frac{{dx}}{x}} ,\;\;}\Rightarrow {\ln \left| \mu \right| = \ln \left| x \right|,\;\;}\Rightarrow {\mu = \pm x.} \]
We choose \(\mu = x.\) Multiplying the original differential equation by \(\mu = x,\) produces the exact equation:
\[\left( {x + x{y^2}} \right)dx + {x^2}ydy = 0.\]
Indeed, now we have
\[
{\frac{{\partial Q}}{{\partial x}} = \frac{\partial }{{\partial x}}\left( {{x^2}y} \right) }={ 2xy } = {\frac{{\partial P}}{{\partial y}} }={ \frac{\partial }{{\partial y}}\left( {x + x{y^2}} \right) }={ 2xy.} \]
Solve the resulting equation. The function \(u\left( {x,y} \right)\) can be found from the system of equations:
\[\left\{ \begin{array}{l} \frac{{\partial u}}{{\partial x}} = x + x{y^2}\\ \frac{{\partial u}}{{\partial y}} = {x^2}y \end{array} \right..\]
It follows from the first equation that
\[{u\left( {x,y} \right) = \int {\left( {x + x{y^2}} \right)dx} }={ \frac{{{x^2}}}{2} + \frac{{{x^2}{y^2}}}{2} + \varphi \left( y \right).}\]
Substitute this in the second equation to determine \(\varphi \left( y \right):\)
\[
{{\frac{{\partial u}}{{\partial y}} }={ \frac{\partial }{{\partial y}}\left[ {\frac{{{x^2}}}{2} + \frac{{{x^2}{y^2}}}{2} + \varphi \left( y \right)} \right] }={ {x^2}y,\;\;}}\Rightarrow { {x^2}y + \varphi'\left( y \right) = {x^2}y,\;\;}\Rightarrow { \varphi'\left( y \right) = 0.} \]
It follows from here that \(\varphi \left( y \right) = C,\) where \(C\) is a constant.
Thus, the general solution of the original differential equation is given by
\[\frac{{{x^2}}}{2} + \frac{{{x^2}{y^2}}}{2} = C.\]
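As a sanity check, one can integrate the original (non-exact) equation numerically and verify that \(u\left( {x,y} \right) = \frac{x^2}{2} + \frac{x^2 y^2}{2}\) stays constant along the solution curve. A small sketch (the initial point, step size, and integration range are arbitrary choices; the range stops well before \(y\) reaches \(0\) at \(x = \sqrt 2,\) where \(dy/dx\) blows up):

```python
# dy/dx from (1 + y^2) dx + x y dy = 0, valid while x*y != 0
def f(x, y):
    return -(1 + y * y) / (x * y)

def u(x, y):                      # the potential function found above
    return x * x / 2 + x * x * y * y / 2

x, y, h = 1.0, 1.0, 1e-4          # start at (1, 1), where u = 1
u0 = u(x, y)
while x < 1.3:                    # classical 4th-order Runge-Kutta steps
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h

print(u(x, y), u0)                # u is conserved along the trajectory
```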
I have a table where I want more than one row of content in some cells. I am using \parbox to achieve this, but it makes some items in cells left-aligned and some not. How can I fix this to make them all aligned the same way?
\documentclass{beamer}
\usepackage{multirow}
\begin{document}

\begin{frame}
\frametitle{Test page}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l||c|c|}
  \hline
  \multirow{3}{*} & Column one & Column two \\ \hline \hline
  Row one & \textcolor{gray}{$f(n) = n^2$} & \parbox{5cm}{$f(n) = n^2$ \\ $f(n) = n^2$}\\
  Row two & \parbox{7cm}{\textcolor{gray}{$g(n,S) = n\sqrt{\log (n/S)/\log\log(n/S)})$} \\ $g(n) = n^3$} & $g(n) = n^3$\\
  Row three & \textcolor{gray}{$f(n) = n^2$} & \parbox{5cm}{$f(n) = n^2$\\ $f(n) = n^2$}\\
  \hline
\end{tabular}%
}
\end{frame}

\end{document}
Update. Following the advice in the comments I have changed the table a little to show another problem.
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l||c|c|}
  \hline
  \multirow{3}{*} & Column one & Column two \\ \hline \hline
  Row one & \textcolor{gray}{$g(n,S) = n\sqrt{\log (n/S)/\log\log(n/S)})$} & \parbox{5cm}{\centering $f(n) = n^2$ \\ $f(n) = n^2$}\\
  Row two & \parbox{7cm}{\textcolor{gray}{\centering $g(n,S) = n\sqrt{\log (n/S)/\log\log(n/S)})$} \\ \centering $g(n) = n^3$} & $g(n) = n^3$\\
  Row three & \textcolor{gray}{$f(n) = n^2$} & \parbox{5cm}{\centering $f(n) = n^2$\\ $f(n) = n^2$}\\
  \hline
\end{tabular}%
}
g(n,S) is still not quite aligned the same in the first and second rows of "Column one". |
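One possible route to uniform alignment, sketched here as a suggestion rather than taken from the original post, is the `makecell` package: its `\makecell` command centers multi-line cell content by default, avoiding per-cell `\parbox` widths entirely. (The `\resizebox` and `\multirow` wrappers are dropped here for brevity, and the stray parenthesis in the formula is removed.)

```latex
\documentclass{beamer}
\usepackage{makecell}
\begin{document}

\begin{frame}{Test page}
\begin{tabular}{|l||c|c|}
  \hline
   & Column one & Column two \\ \hline\hline
  Row one & \textcolor{gray}{$g(n,S) = n\sqrt{\log (n/S)/\log\log(n/S)}$} &
    \makecell{$f(n) = n^2$ \\ $f(n) = n^2$} \\
  Row two & \makecell{\textcolor{gray}{$g(n,S) = n\sqrt{\log (n/S)/\log\log(n/S)}$} \\ $g(n) = n^3$} &
    $g(n) = n^3$ \\
  \hline
\end{tabular}
\end{frame}

\end{document}
```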
Fourier theory was initially invented to solve certain differential equations. Therefore, it is of no surprise that Fourier series are widely used for seeking solutions to various ordinary differential equations (ODEs) and partial differential equations (PDEs).
In this section, we consider applications of Fourier series to the solution of ODEs and the most well-known PDEs:
the heat equation \({\large\frac{{\partial u}}{{\partial t}}\normalsize} = k{\large\frac{{{\partial ^2}u}}{{\partial {x^2}}}\normalsize};\)
the wave equation \({\large\frac{{{\partial ^2}u}}{{\partial {t^2}}}\normalsize} = {a^2}{\large\frac{{{\partial ^2}u}}{{\partial {x^2}}}\normalsize};\)
Laplace's equation \({\large\frac{{{\partial ^2}u}}{{\partial {x^2}}}\normalsize} + {\large\frac{{{\partial ^2}u}}{{\partial {y^2}}}\normalsize} = 0.\)

Solved Problems
Click a problem to see the solution.
Example 1. Find the Fourier series solution to the differential equation \(y^{\prime\prime} + 2y = 3x\) with the boundary conditions \(y\left( 0 \right) = y\left( 1 \right) = 0.\)

Example 2. Find the periodic solutions of the differential equation \(y' + ky = f\left( x \right),\) where \(k\) is a constant and \(f\left( x \right)\) is a \(2\pi\)-periodic function.

Example 3. Using Fourier series expansion, solve the heat conduction equation in one dimension.

Example 4. Find the solution of the wave equation for a fixed string.

Example 5. Find the solution to Laplace's equation.

Example 1.

Find the Fourier series solution to the differential equation \(y^{\prime\prime} + 2y = 3x\) with the boundary conditions \(y\left( 0 \right) = y\left( 1 \right) = 0.\)
Solution.
We will use the Fourier sine series for representation of the nonhomogeneous solution to satisfy the boundary conditions. Using the results of Example 3 on the page Definition of Fourier Series and Typical Examples, we can write the right side of the equation as the series
\[{3x }={ \frac{6}{\pi }\sum\limits_{n = 1}^\infty {\frac{{{{\left( { - 1} \right)}^{n + 1}}}}{n}\sin n\pi x} .}\]
We assume that the solution has the form
\[y\left( x \right) = \sum\limits_{n = 1}^\infty {{b_n}\sin n\pi x} .\]
Substituting this into the differential equation, we get
\[
{\sum\limits_{n = 1}^\infty {\left( { - {n^2}{\pi ^2}} \right){b_n}\sin n\pi x} }+{ 2\sum\limits_{n = 1}^\infty {{b_n}\sin n\pi x} } = {\frac{6}{\pi }\sum\limits_{n = 1}^\infty {\frac{{{{\left( { - 1} \right)}^{n + 1}}}}{n}\sin n\pi x} .} \]
Since the coefficients of each sine mode must be equal to each other, we obtain the algebraic equation
\[
{\left( {2 - {n^2}{\pi ^2}} \right){b_n} = \frac{{6{{\left( { - 1} \right)}^{n + 1}}}}{{n\pi }}\;\;}\kern-0.3pt {\text{or}\;\;{b_n} = \frac{{6{{\left( { - 1} \right)}^{n + 1}}}}{{n\pi \left( {2 - {n^2}{\pi ^2}} \right)}}.} \]
Hence, the solution of the given differential equation is described by the series
\[{y\left( x \right) \text{ = }}\kern0pt{ \frac{6}{\pi }\sum\limits_{n = 1}^\infty {\frac{{{{\left( { - 1} \right)}^{n + 1}}}}{{n\left( {2 - {n^2}{\pi ^2}} \right)}}\sin n\pi x} .}\]
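This series can be checked against the closed-form solution of the same boundary value problem, \(y\left( x \right) = \frac{3}{2}\left( {x - \frac{{\sin \left( {\sqrt 2 x} \right)}}{{\sin \sqrt 2 }}} \right),\) obtained by elementary means (the particular solution \(\frac{3}{2}x\) plus a homogeneous part fitted to the boundary conditions). A quick numerical comparison:

```python
import math

def y_series(x, N=500):
    # truncated Fourier sine series solution; terms decay like 1/n^3
    s = 0.0
    for n in range(1, N + 1):
        s += (-1) ** (n + 1) / (n * (2 - n * n * math.pi ** 2)) * math.sin(n * math.pi * x)
    return 6 / math.pi * s

def y_exact(x):
    # closed-form solution of y'' + 2y = 3x with y(0) = y(1) = 0
    return 1.5 * (x - math.sin(math.sqrt(2) * x) / math.sin(math.sqrt(2)))

for x in (0.25, 0.5, 0.75):
    print(x, y_series(x), y_exact(x))
```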
Example 2. Find the periodic solutions of the differential equation \(y' + ky = f\left( x \right),\) where \(k\) is a constant and \(f\left( x \right)\) is a \(2\pi\)-periodic function.
Solution.
We represent the function \(f\left( x \right)\) on the right-hand side of the equation as a Fourier series:
\[f\left( x \right) = \sum\limits_{n = - \infty }^\infty {{c_n}{e^{inx}}} .\]
The complex Fourier coefficients are defined by the formula
\[{c_n} = \frac{1}{{2\pi }}\int\limits_{ - \pi }^\pi {f\left( x \right){e^{ - inx}}dx} .\]
Assuming that the solution can be represented as a Fourier series expansion
\[y = \sum\limits_{n = - \infty }^\infty {{y_n}{e^{inx}}} ,\]
we find the expression for the derivative:
\[y' = \sum\limits_{n = - \infty }^\infty {in{y_n}{e^{inx}}} .\]
Substituting this into the differential equation, we get
\[{\sum\limits_{n = - \infty }^\infty {in{y_n}{e^{inx}}} }+{ k\sum\limits_{n = - \infty }^\infty {{y_n}{e^{inx}}} }={ \sum\limits_{n = - \infty }^\infty {{c_n}{e^{inx}}} .}\]
As this equation is valid for all \(n,\) we obtain
\[{in{y_n} + k{y_n} = {c_n}\;\;}\kern-0.3pt{\text{or}\;\;{y_n} = \frac{{{c_n}}}{{in + k}}.}\]
Here \({c_n}\) and \(k\) are known numbers. Consequently, the solution is given by
\[y\left( x \right) = \sum\limits_{n = - \infty }^\infty {\frac{{{c_n}}}{{in + k}}{e^{inx}}} .\]
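As an illustration (with the hypothetical choice \(f\left( x \right) = \cos x\) and \(k = 1,\) so that \({c_{ \pm 1}} = \frac{1}{2}\) and all other \({c_n}\) vanish), the formula reproduces the familiar periodic solution \(y = \frac{1}{2}\left( {\cos x + \sin x} \right)\) of \(y' + y = \cos x:\)

```python
import cmath, math

k = 1.0
c = {1: 0.5, -1: 0.5}            # Fourier coefficients of f(x) = cos(x)

def y(x):
    # y(x) = sum_n c_n / (i n + k) e^{i n x}
    return sum(cn / (1j * n + k) * cmath.exp(1j * n * x) for n, cn in c.items())

for x in (0.0, 1.0, 2.5):
    print(x, y(x).real, 0.5 * (math.cos(x) + math.sin(x)))
```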
(e) I will work on this part in a little while. So far I think I have gotten all of the same solutions as Rong Wei. Added my solution below. For the case of the half-line and Dirichlet boundary condition, we will have the solution: \begin{equation} u(x,t) = \frac{e^{\alpha x + \beta t}}{2\sqrt{\pi k t}}\int_0^{\infty}\left[e^{-(x-y)^2/4kt} - e^{-(x+y)^2/4kt}\right]g(y)\,dy \end{equation} In the case of Neumann boundary conditions, we cannot use a similar method.
My question is about the boundary condition. For the general form of the 1D wave equation, there is no $v$ present, and we used the boundary condition \begin{equation} u|_{x=0} = 0. \end{equation} Here you are asking about $v$. I understand that we need conditions, but why here do we need \begin{equation} u|_{x=vt} = \text{something} \end{equation} but not \begin{equation} u|_{x=ct} = \text{something} \end{equation} or \begin{equation} u|_{x=0} = \text{something}? \end{equation}
Thanks professor, but I understand the process of doing this, just not why we need the condition at $x=vt$ here rather than at $x=0$; similarly, for problems without $v$, why not use the boundary condition at $x=ct$? This is completely incomprehensible. What do you really mean by this charade?
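The Dirichlet half-line formula can be sanity-checked numerically. The sketch below (Python; I drop the drift factor \(e^{\alpha x+\beta t}\), i.e. take \(\alpha=\beta=0\), and use an arbitrary test datum \(g(y)=ye^{-y}\)) verifies that the image term makes the solution vanish at \(x=0\) and that \(u(\cdot,t)\to g\) as \(t\to 0\):

```python
import numpy as np

def heat_half_line_dirichlet(x, t, g, k=1.0, ymax=30.0, ny=8000):
    """u(x,t) for u_t = k u_xx on x > 0 with u(0,t) = 0, u(x,0) = g(x),
    by the method of images:
        u(x,t) = (4 pi k t)^(-1/2) * int_0^inf [e^{-(x-y)^2/4kt} - e^{-(x+y)^2/4kt}] g(y) dy
    """
    y = np.linspace(0.0, ymax, ny)
    kern = np.exp(-(x - y)**2 / (4*k*t)) - np.exp(-(x + y)**2 / (4*k*t))
    f = kern * g(y)
    integral = np.sum((f[:-1] + f[1:]) * np.diff(y) / 2)   # trapezoid rule
    return integral / np.sqrt(4*np.pi*k*t)

g = lambda y: y * np.exp(-y)

# the image (odd-reflection) term enforces the Dirichlet condition exactly at x = 0
assert abs(heat_half_line_dirichlet(0.0, 0.5, g)) < 1e-12
# for small t the solution stays close to the initial data
assert abs(heat_half_line_dirichlet(1.0, 1e-3, g) - g(1.0)) < 1e-3
```

At \(x=0\) the two Gaussians cancel pointwise, which is exactly why the odd reflection works for Dirichlet data (and why an even reflection, with a plus sign, is what one would try for Neumann).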
000001340 001__ 1340
000001340 005__ 20190321073459.0
000001340 037__ $$aBELLE2-MTHESIS-2019-002
000001340 041__ $$aeng
000001340 100__ $$aNadia Toutounji
000001340 245__ $$aReconstruction Methods for Semi-leptonic Decays of B-mesons with the Belle II Experiment
000001340 260__ $$aSydney$$bThe University of Sydney$$c2019
000001340 300__ $$a118
000001340 500__ $$aPresented on 21 01 2019
000001340 502__ $$aMSc$$bSydney, The University of Sydney$$c2019
000001340 520__ $$aThe Belle II detector located in Tsukuba, Japan, building on the work of its predecessor Belle, is scheduled for long-term data collection from electron-positron $e^{+} e^{-}$ collisions commencing in early 2019, for the purpose of studying rare B-meson decays in the search for new physics beyond the Standard Model. The Belle II Analysis Software Framework (BASF2) has been developed for physics analyses, with the Full Event Interpretation (FEI) being one such method designed for the reconstruction of B-meson decays from detector information. The FEI must be trained on simulated Monte Carlo (MC) data and introduces a signal-specific training process that can be tailored for a particular decay of interest in an attempt to increase the performance over signal-independent training processes such as those employed in Full Reconstruction (FR) methods at Belle. This study investigates the performance of the signal-specific and signal-independent FEI algorithms in the context of rare semi-leptonic B-meson decays, in comparison to leptonic decays, with the respective modes $B^{+} \to \rho^{0} \mu^{+} \nu_{\mu}$ and $B^{+} \to \tau^{+} \nu_{\tau}$ chosen as working examples. The relative performance of the FEI methods implemented is evaluated via a number of key performance indicators including the reconstruction efficiency and purity of the reconstructed $\Upsilon(4S)$ event.
000001340 700__ $$aKevin Varvell$$edir.
000001340 8560_ $$fkevin.varvell@desy.de
000001340 8564_ $$uhttps://docs.belle2.org/record/1340/files/Nadia%20Toutounji.pdf$$yMPhil thesis
000001340 980__ $$aTHESIS |
Normally it seems like this is solved by using Gauss's Law as follows:
$$ \oint \vec{E} \cdot d\vec{a} = \frac{q_{\text{enc}}}{\epsilon_0} $$
And applying a geometric argument that says: because the charged sphere is uniform, we can take the ratio of the volume of our Gaussian sphere at some radius $r$ to the actual volume of the charged sphere; multiplying this ratio by the total charge $Q$ gives the enclosed charge. This returns:
$$ \frac {Qr^3}{R^3} $$
Which then gets plugged back into Gauss's Law for the final answer:
$$ E = \frac {Qr}{4\pi R^3\epsilon_0} $$
Where $r$ is the radial distance to a point inside the sphere and $R$ is the radius of the sphere. This all makes sense to me, but I was wondering if someone could explain how to solve this problem through an integration argument. I can't seem to find a resource that does, they fall back to this argument because it is easier, but I'd like to conceptually understand how to work it through integrating an expression for the charge to find $Q$ as a function of $r$ and then applying Gauss's Law. |
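Here is the integration argument in code form, a minimal sketch: build \(Q_{\text{enc}}(r)\) by summing spherical shells \(dq=\rho\,4\pi s^2\,ds\) from \(0\) to \(r\), then apply Gauss's law on a sphere of radius \(r\). (Units with \(\epsilon_0=1\); the specific numbers \(Q=3\), \(R=2\) are arbitrary choices of mine.)

```python
import numpy as np

Q, R, eps0 = 3.0, 2.0, 1.0                 # total charge, sphere radius, units eps0 = 1
rho = Q / ((4/3) * np.pi * R**3)           # uniform charge density

def Q_enclosed(r, n=200000):
    """Integrate rho over concentric shells: dq = rho * 4 pi s^2 ds, s from 0 to r."""
    s = np.linspace(0.0, r, n)
    f = rho * 4 * np.pi * s**2
    return np.sum(f[:-1] + f[1:]) * (s[1] - s[0]) / 2   # trapezoid rule

def E_inside(r):
    """Gauss's law on a sphere of radius r < R."""
    return Q_enclosed(r) / (4 * np.pi * eps0 * r**2)

r = 1.3
assert abs(Q_enclosed(r) - Q * r**3 / R**3) < 1e-6                 # Q_enc = Q r^3 / R^3
assert abs(E_inside(r) - Q * r / (4 * np.pi * eps0 * R**3)) < 1e-9 # E = Q r / (4 pi eps0 R^3)
```

The shell integral reproduces the \(Qr^3/R^3\) ratio directly, which is exactly the geometric argument done "the long way".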
Physics > Instrumentation and Detectors
Title: A Long Baseline Neutrino Oscillation Experiment Using J-PARC Neutrino Beam and Hyper-Kamiokande
(Submitted on 15 Dec 2014 (v1), last revised 18 Jan 2015 (this version, v2))
Abstract: Hyper-Kamiokande will be a next generation underground water Cherenkov detector with a total (fiducial) mass of 0.99 (0.56) million metric tons, approximately 20 (25) times larger than that of Super-Kamiokande. One of the main goals of Hyper-Kamiokande is the study of $CP$ asymmetry in the lepton sector using accelerator neutrino and anti-neutrino beams. In this document, the physics potential of a long baseline neutrino experiment using the Hyper-Kamiokande detector and a neutrino beam from the J-PARC proton synchrotron is presented. The analysis has been updated from the previous Letter of Intent [K. Abe et al., arXiv:1109.3262 [hep-ex]], based on the experience gained from the ongoing T2K experiment. With a total exposure of 7.5 MW $\times$ 10$^7$ sec integrated proton beam power (corresponding to $1.56\times10^{22}$ protons on target with a 30 GeV proton beam) to a $2.5$-degree off-axis neutrino beam produced by the J-PARC proton synchrotron, it is expected that the $CP$ phase $\delta_{CP}$ can be determined to better than 19 degrees for all possible values of $\delta_{CP}$, and $CP$ violation can be established with a statistical significance of more than $3\,\sigma$ ($5\,\sigma$) for $76\%$ ($58\%$) of the $\delta_{CP}$ parameter space.
Submission history: From: Masashi Yokoyama [view email] [v1] Mon, 15 Dec 2014 16:57:31 GMT (7178kb,D) [v2] Sun, 18 Jan 2015 16:01:28 GMT (8046kb,D)
Balbharati SSC Class 10 Mathematics 2. Chapter 7: Mensuration solutions [Pages 145 - 146]
Find the volume of a cone if the radius of its base is 1.5 cm and its perpendicular height is 5 cm.
Find the volume of a sphere of diameter 6 cm.
Find the total surface area of a cylinder if the radius of its base is 5 cm and height is 40 cm.
Find the surface area of a sphere of radius 7 cm.
The dimensions of a cuboid are 44 cm, 21 cm, 12 cm. It is melted and a cone of height 24 cm is made. Find the radius of its base.
Observe the measures of pots In the given figure. How many jugs of water can the cylindrical pot hold?
A cylinder and a cone have equal bases. The height of the cylinder is 3 cm and the area of its base is 100 cm². The cone is placed upon the cylinder. Volume of the solid figure so formed is 500 cm³. Find the total height of the figure.
In the given figure, a toy made from a hemisphere, a cylinder and a cone is shown. Find the total area of the toy.
In the given figure, a cylindrical wrapper of flat tablets is shown. The radius of a tablet is 7 mm and its thickness is 5 mm. How many such tablets are wrapped in the wrapper?
In the given figure shows a toy. Its lower part is a hemisphere and the upper part is a cone. Find the volume and surface area of the toy from the measures shown in the figure (\[\pi = 3 . 14\])
Find the surface area and the volume of a beach ball shown in the figure
As shown in the figure, a cylindrical glass contains water. A metal sphere of diameter 2 cm is immersed in it. Find the volume of the water .
Chapter 7: Mensuration solutions [Page 148]
The radii of ends of a frustum are 14 cm and 6 cm respectively and its height is 6 cm. Find its i) curved surface area
The radii of ends of a frustum are 14 cm and 6 cm respectively and its height is 6 cm. Find its
ii) total surface area
The radii of ends of a frustum are 14 cm and 6 cm respectively and its height is 6 cm. Find its
iii) volume (\[\pi\] = 3.14)
The circumferences of circular faces of a frustum are 132 cm and 88 cm and its height is 24 cm. To find the curved surface area of the frustum complete the following activity. (\[\pi = \frac{22}{7}\])
circumference\(_1 = 2\pi r_1 = 132\), so \(r_1 = \frac{132}{2\pi} = 21\) cm
circumference\(_2 = 2\pi r_2 = 88\), so \(r_2 = \frac{88}{2\pi} = 14\) cm
slant height of frustum, \(l = \sqrt{h^2 + (r_1 - r_2)^2} = \sqrt{24^2 + 7^2} = 25\) cm
curved surface area \(= \pi(r_1 + r_2)l = \frac{22}{7} \times 35 \times 25 = 2750\) sq. cm.
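The frustum activity above can be double-checked with a few lines of arithmetic (a sketch using the activity's value \(\pi = 22/7\)):

```python
from math import sqrt

pi = 22 / 7                      # value used in the textbook activity
c1, c2, h = 132.0, 88.0, 24.0    # circumferences of the two faces and the height

r1 = c1 / (2 * pi)               # radius of the larger face: 21 cm
r2 = c2 / (2 * pi)               # radius of the smaller face: 14 cm
l = sqrt(h**2 + (r1 - r2)**2)    # slant height = sqrt(24^2 + 7^2) = 25 cm
csa = pi * (r1 + r2) * l         # curved surface area of the frustum

assert abs(r1 - 21) < 1e-9 and abs(r2 - 14) < 1e-9
assert abs(l - 25) < 1e-9
assert abs(csa - 2750) < 1e-9    # 2750 sq. cm
```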
Chapter 7: Mensuration solutions [Pages 154 - 155]
Radius of a circle is 10 cm. Measure of an arc of the circle is 54°. Find the area of the sector associated with the arc. (\[\pi\] = 3.14)
Measure of an arc of a circle is 80° and its radius is 18 cm. Find the length of the arc. (\[\pi\] = 3.14)
Radius of a sector of a circle is 3.5 cm and length of its arc is 2.2 cm. Find the area of the sector.
Radius of a circle is 10 cm. Area of a sector of the circle is 100 cm². Find the area of its corresponding major sector. (\[\pi\] = 3.14)
Area of a sector of a circle of radius 15 cm is 30 cm². Find the length of the arc of the sector.
m(arc MBN) = 60°.
In the given figure, radius of circle is 3.4 cm and perimeter of sector P-ABC is 12.8 cm . Find A(P-ABC).
In the given figure, O is the centre of the sector. \[\angle\]ROQ = \[\angle\]MON = 60°. OR = 7 cm, and OM = 21 cm. Find the lengths of arc RXQ and arc MYN. (\[\pi = \frac{22}{7}\])
In the given figure, if A(P-ABC) = 154 cm² and the radius of the circle is 14 cm, find
(1) `∠APC`
(2) l(arc ABC).
Radius of a sector of a circle is 7 cm. If measure of arc of the sector is -
(1) …° find the area of the sector
(2) 210° find the area of the sector
The area of a minor sector of a circle is 3.85 cm² and the measure of its central angle is 36°. Find the radius of the circle.
In the given figure, \[\square\] PQRS is a rectangle. If PQ = 14 cm, QR = 21 cm, find the areas of the parts x, y and z.
Chapter 7: Mensuration solutions [Pages 159 - 160]
In the given figure, A is the centre of the circle. \[\angle\]ABC = 45° and AC = 7\[\sqrt{2}\] cm. Find the area of segment BXC.
In the given figure, O is the centre of the circle. m(arc PQR) = 60°, OP = 10 cm. Find the area of the shaded region. (\[\pi\] = 3.14, \[\sqrt{3}\] = 1.73)
In the given figure, if A is the centre of the circle, \[\angle\]PAR = 30°, AP = 7.5, find the area of the segment PQR. (\[\pi\] = 3.14)
In the given figure, if O is the centre of the circle, PQ is a chord, \[\angle\]POQ = 90°, and the area of the shaded region is 114 cm², find the radius of the circle. (\[\pi\] = 3.14)
A chord PQ of a circle with radius 15 cm subtends an angle of 60° with the centre of the circle. Find the area of the minor as well as the major segment. (\[\pi\] = 3.14, \[\sqrt{3}\] = 1.73)
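For the chord problem (radius 15 cm, central angle 60°) the segment areas follow from "sector minus equilateral triangle"; a quick check using the problem's rounded constants \(\pi = 3.14\), \(\sqrt{3} = 1.73\):

```python
pi, rt3 = 3.14, 1.73       # rounded values specified in the problem
r, theta = 15.0, 60.0

sector   = (theta / 360) * pi * r**2    # area of the 60-degree sector: 117.75 cm^2
triangle = (rt3 / 4) * r**2             # equilateral triangle on the chord: 97.3125 cm^2
minor    = sector - triangle            # minor segment
major    = pi * r**2 - minor            # major segment = circle - minor segment

assert abs(minor - 20.4375) < 1e-9      # about 20.44 cm^2
assert abs(major - 686.0625) < 1e-9     # about 686.06 cm^2
```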
Chapter 7: Mensuration solutions [Pages 160 - 163]
(A) \[14\pi\]
(B) \[\frac{7}{\pi}\]
(C) 7\[\pi\]
(D) \[\frac{14}{\pi}\]
°and its length is 44 cm, find the circumference of the circle.
(A) 66 cm
(B) 44 cm
(C) 160 cm
(D) 99 cm
Choose the correct alternative answer for each of the following questions.
(3) Find the perimeter of a sector of a circle if its measure is 90° and radius is 7 cm.
(A) 44 cm
(B) 25 cm
(C) 36 cm
(D) 56 cm
Choose the correct alternative answer for each of the following questions.
(A) 440 cm²
(B) 550 cm²
(C) 330 cm²
(D) 110 cm²
Choose the correct alternative answer for each of the following questions.
(5) The curved surface area of a cylinder is 440 cm² and its radius is 5 cm. Find its height.
(A) \[\frac{44}{\pi}\] cm
(B) 22\[\pi\] cm
(C) 44\[\pi\] cm
(D) \[\frac{22}{\pi}\] cm
Choose the correct alternative answer for each of the following questions
(6) A cone was melted and cast into a cylinder of the same radius as that of the base of the cone. If the height of the cylinder is 5 cm, find the height of the cone.
(A) 15 cm
(B) 10 cm
(C) 18 cm
(D) 5 cm
Choose the correct alternative answer for each of the following questions
(A) 1 cm³
(B) 0.001 cm³
(C) 0.0001 cm³
(D) 0.000001 cm³
Choose the correct alternative answer for each of the following questions
(A) 1 cm
(B) 10 cm
(C) 100 cm
(D) 1000 cm
A washing tub in the shape of a frustum of a cone has height 21 cm. The radii of the circular top and bottom are 20 cm and 15 cm respectively. What is the capacity of the tub ? ( \[\pi = \frac{22}{7}\])
Some plastic balls of radius 1 cm were melted and cast into a tube. The thickness, length and outer radius of the tube were 2 cm , 90 cm and 30 cm respectively. How many balls were melted to make the tube?
A metal parallelopiped of measures 16 cm x 11 cm x 10 cm was melted to make coins. How many coins were made if the thickness and diameter of each coin was 2 mm and 2 cm respectively ?
The diameter and length of a roller is 120 cm and 84 cm respectively. To level the ground, 200 rotations of the roller are required. Find the expenditure to level the ground at the rate of Rs. 10 per sq.m.
The diameter and thickness of a hollow metal sphere are 12 cm and 0.01 m respectively. The density of the metal is 8.88 gm per cm³. Find the outer surface area and mass of the sphere.
A cylindrical bucket of diameter 28 cm and height 20 cm was full of sand. When the sand in the bucket was poured on the ground, the sand got converted into a shape of a cone. If the height of the cone was 14 cm, what was the base area of the cone ?
The area of a sector of a circle of 6 cm radius is 15 \[\pi\] sq.cm. Find the measure of the arc and length of the arc corresponding to the sector.
In the given figure, seg AB is a chord of a circle with centre P. If PA = 8 cm and distance of chord AB from the centre P is 4 cm, find the area of the shaded portion. ( \[\pi\] = 3.14, \[\sqrt{3}\]= 1.73 )
In the given figure, square ABCD is inscribed in the sector A - PCQ. The radius of sector C - BXD is 20 cm. Complete the following activity to find the area of shaded region
In the given figure , two circles with centres O and P are touching internally at point A. If BQ = 9, DE = 5, complete the following activity to find the radii of the circles.
Balbharati SSC Class 10 Mathematics 2 Textbook solutions for Class 10th Board Exam: Balbharati solutions for Class 10th Board Exam Geometry chapter 7 - Mensuration
Balbharati solutions for Class 10th Board Exam Geometry chapter 7 (Mensuration) include all questions with solution and detail explanation. This will clear students doubts about any question and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear your confusions, if any. Shaalaa.com has the Maharashtra State Board Textbook for SSC Class 10 Mathematics 2 solutions in a manner that help students grasp basic concepts better and faster.
Further, we at Shaalaa.com are providing such solutions so that students can prepare for written exams. Balbharati textbook solutions can be a core help for self-study and acts as a perfect self-help guidance for students.
Concepts covered in Class 10th Board Exam Geometry chapter 7 Mensuration are Conversion of Solid from One Shape to Another, Frustum of a Cone, Introduction of Surface Areas and Volumes, Length of an Arc, Problems Based on Areas and Perimeter Or Circumference of Circle, Sector and Segment of a Circle, Perimeter and Area of a Circle, Surface Area of a Combination of Solids, Euler's Formula, Areas of Sector and Segment of a Circle.
Using Balbharati Class 10th Board Exam solutions for the Mensuration exercise is an easy way for students to prepare for the exams, as the solutions are arranged chapter-wise and also page-wise. The questions involved in Balbharati Solutions are important questions that can be asked in the final exam. Most students of Maharashtra State Board Class 10th Board Exam prefer Balbharati Textbook Solutions to score more in exams.
Get the free view of chapter 7 Mensuration Class 10th Board Exam extra questions for Geometry, and use Shaalaa.com to keep it handy for your exam preparation.
Hartshorne is the reference where you can find the following example, which might be useful. In what follows, everything is counted with multiplicity. Now, Alberto pointed out above the case of the divisor over $\mathbb{P}^1$ associated to its "tangent bundle": two points over the sphere counted with multiplicity (from here, it is not hard to believe that the Chern class of such a bundle is going to be 2). Notice that these two points are given by zeros of polynomials of degree two defined over the sphere. I think nothing stops you taking now polynomials of degree 3, 4 and so on. Then what we get are nothing but 3, 4 points over the sphere: divisors of degree 3, 4 and so on. We can do something similar over all the curves (Riemann surfaces) and what we get are divisors: points with labels. Such labels are the multiplicities. See Chapter IV of Hartshorne, or Klaus Hulek: Elementary Algebraic Geometry.
Now, let's take a look at divisors over the surface $\mathbb{P}^2$: they are algebraic curves (Riemann surfaces). Do not get confused by the name "surface" here. Applying the same argument as before, a divisor of degree two is going to be the zero locus of a polynomial of degree 2: a conic. Same for degree three (cubics), four (quartics), and so on and so forth. For instance, in degree two we might have the divisor $C=\{[x:y:z]\in \mathbb{P}^2 \mid x^2+y^2=z^2\}$. Dehomogenizing with $H=[z=1]$ you get the affine polynomial equation $x^2+y^2=1$, which defines the intersection $H\cap C$. This is how your global divisor $C$ looks locally.
Now taking a family of divisors of degree two, the conics, it is well known that the space of embeddings of conics in $\mathbb{P}^2$ is (the linear system) $\mathbb{P}^5$. We get this by considering the coefficients in the equation $ax^2+by^2+cz^2+dxy+exz+fyz=0$ as coordinates in $\mathbb{P}^5$. Notice that we get the following map out of the previous considerations, $$\phi:\mathbb{P}^2\rightarrow \mathbb{P}^5$$ given by $[x:y:z]\mapsto [x^2:y^2:z^2:xy:xz:yz]$. Here pencils are a subfamily of conics in the complete linear system given above with a certain property (find out which one). However, we can consider the following subfamily of conics: all those conics passing through a fixed point in $\mathbb{P}^2$. This is nothing but a hyperplane $H$ in $\mathbb{P}^5$. We can even consider $\phi(\mathbb{P}^2)\cap H$. This is going to be a divisor on $\mathbb{P}^2\cong \phi(\mathbb{P}^2)$. Guess which one?. Hartshorne II section 7.
One can apply the same ideas to the zero loci of polynomials of degree three: divisors of degree 3 in $\mathbb{P}^2$. These were given the name of elliptic curves. (Did someone say that in considering such curves, we find the divisor associated to the canonical bundle of $\mathbb{P}^2$?) We can go on with the degree, getting divisors on the projective plane of higher degree. These were only examples of divisors on $\mathbb{P}^2$. Notice that all of them have a nontrivial topology and geometry. This fact is not a coincidence, and the book of HG argues in this direction in Chapter zero.
Bégout, Pascal and Vargas, Ana (2007). Mass Concentration Phenomena for the L2-Critical Nonlinear Schrödinger Equation. Transactions of the American Mathematical Society, vol. 359 (n° 11), pp. 5257-5282. This is the latest version of this item.
Abstract
In this paper, we show that any solution of the nonlinear Schrödinger equation $iu_t+\Delta u\pm|u|^\frac{4}{N}u=0,$ which blows up in finite time, satisfies a mass concentration phenomena near the blow-up time. Our proof is essentially based on the Bourgain's one~\cite{MR99f:35184}, which has established this result in the bidimensional spatial case, and on a generalization of Strichartz's inequality, where the bidimensional spatial case was proved by Moyua, Vargas and Vega~\cite{MR1671214}. We also generalize to higher dimensions the results in Keraani~\cite{MR2216444} and Merle and Vega~\cite{MR1628235}.
Item Type: Article | Language: English | Date: 2007 | Refereed: Yes | Uncontrolled Keywords: Schrödinger equations, restriction theorems, Strichartz's estimate, blow-up | Keywords (French): équations de Schrödinger, théorèmes de restriction, estimations de Strichartz, explosion | Subjects: G- MATHEMATIQUES | Divisions: Institut de mathématiques de Toulouse, TSE-R (Toulouse) | Site: UT1 | Date Deposited: 08 Mar 2018 14:04 | Last Modified: 08 Mar 2018 14:04 | URI: http://publications.ut-capitole.fr/id/eprint/25109
Why is it that, for the complex scalar field
$$ \hat\phi = \int \frac{d^3p}{(2\pi)^{3/2}(2E_{\vec{p}})^{1/2}}\left(\hat{a}_{\vec{p}}e^{-ip \cdot x} + \hat{b}_{\vec{p}}^\dagger e^{ip \cdot x}\right), $$
the commutation relation is $[\hat\phi(x),\hat{\phi}^\dagger(y)]=0$, but using the non-relativistic limit for the fields, $\phi(\vec{x},t)\rightarrow\Psi(\vec{x},t)e^{-imc^2t/\hbar}$, one has $[\hat\Psi(\vec{x}), \hat\Psi^\dagger(\vec{y})] = \delta(\vec{x} - \vec{y})$, so this commutator is nonzero? (All commutators are taken at equal times.)
I had the idea that from the relativistic commutation relation you could derive the non-relativistic one, but it doesn't seem to be the case, unless one has to be careful when taking the limit. |
The phase components of a signal that are obtained from the Fourier transform are simply the phase offsets of each sinusoid, which is the $\phi$ in equation $1$. However, in the case of the Hilbert transform, the instantaneous phase is the argument of the sinusoid function, which is $x(t)$ in equation $2$. I know that we can get the instantaneous frequency by taking the derivative of $x(t)$, but how do we get the phase offset from $x(t)$? In other words, how can equation $2$ be written as equation $(3)$?
I understand that equation $(2)$ should be a complex combination of $\sin$ and $\cos$, but I ignored that for simplicity's sake.
$$\begin{align} \sin(2\pi ft +\phi ) \tag 1\\ \sin( x(t) ) \tag2\\ \sin(2\pi x'(t) + \phi) &&\text{or}&& \sin(2\pi x'(t) + \phi(t)) \tag3 \end{align}$$ |
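One concrete way to get at the offset: form the analytic signal, take \(x(t)=\operatorname{unwrap}(\arg(\cdot))\), and subtract the linear part \(2\pi f t\); what remains is the phase offset \(\phi(t)\). A numpy-only sketch (the FFT construction below mirrors what `scipy.signal.hilbert` does; the signal parameters are arbitrary choices):

```python
import numpy as np

def analytic_signal(s):
    """Discrete analytic signal s + i*H{s} via the standard FFT construction
    (same idea as scipy.signal.hilbert); len(s) is assumed even here."""
    N = len(s)
    S = np.fft.fft(s)
    h = np.zeros(N)
    h[0], h[N // 2] = 1.0, 1.0   # keep DC and Nyquist
    h[1:N // 2] = 2.0            # double positive frequencies, zero negative ones
    return np.fft.ifft(S * h)

fs, f, phi = 1000.0, 5.0, 0.7
t = np.arange(0, 2, 1/fs)                 # 10 full cycles, so no FFT edge effects
s = np.cos(2*np.pi*f*t + phi)

x_t = np.unwrap(np.angle(analytic_signal(s)))   # x(t) of Eq. (2): the full argument
inst_freq = np.diff(x_t) * fs / (2*np.pi)       # instantaneous frequency, x'(t)/2pi
offset = x_t - 2*np.pi*f*t                      # phase offset phi(t), as in Eq. (3)

assert np.allclose(inst_freq, f, atol=1e-6)
assert np.allclose(offset, phi, atol=1e-6)
```

For a pure tone the offset comes out constant (\(\phi = 0.7\) here); for a phase-modulated signal the same subtraction yields the time-varying \(\phi(t)\).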
I updated to iOS 11, and after doing so, the display font for MathJax has changed across the entire interface of my phone. As evidence, here is a screenshot I found lying around with its updated copy:
Let's define three partitions $P,Q,R$ on $[a,c]$, $[a,b]$, $[b,c]$ respectively:
$$P=\{x_0,x_1,...,x_k,...,x_n\}$$ $$Q=\{x_0,x_1,...,x_k\}$$ $$R=\{x_k,...,x_n\}$$
where $x_0=a,x_k=b,x_n=c$
As the function $f$ is integrable on both $[a,b]$ and $[b,c]$, for any $\epsilon>0$ there is a $\delta>0$ such that
$$\left| S(Q,f)-\int_a^bf(x)dx\right|<\epsilon/2$$
I’m a very font-sensitive individual1, and this is one I thoroughly despise (except for the new Q, which I actually prefer over the old one, which always seemed deformed to me). Is there any way at all that I can change this?
1: Down with Calibri, Arial and Comic Sans! Vivan Baskerville and Didot!
Edit: After reading through MathJax.org, I believe the “old” font was “SVG” while the “new” font is “HTML-CSS.”
Worst Case Efficient Single and Multiple String Matching in the RAM Model
Abstract
In this paper, we explore worst-case solutions for the problems of pattern and multi-pattern matching on strings in the RAM model with word length w. In the first problem, we have a pattern p of length m over an alphabet of size σ, and given any text T of length n, where each character is encoded using log σ bits, we wish to find all occurrences of p. For the multi-pattern matching problem we have a set S of d patterns of total length m, and a query on a text T consists in finding all the occurrences in T of the patterns in S (in the following we refer by occ to the number of reported occurrences). As each character of the text is encoded using log σ bits and we can read w bits in constant time in the RAM model, the best query time for the two problems, which can only possibly be achieved by reading Θ(w/log σ) consecutive characters, is \(O(n\frac{\log\sigma}{w}+occ)\). In this paper, we present two results. The first result is that using O(m) words of space, single pattern matching queries can be answered in time \(O(n(\frac{\log m}{m}+\frac{\log \sigma}{w})+occ)\), and multiple pattern matching queries answered in time \(O(n(\frac{\log d+\log y+\log\log m}{y}+\frac{\log \sigma}{w})+occ)\), where y is the length of the shortest pattern. Our second result is a variant of the first result which uses the four Russians technique to remove the dependence on the shortest pattern length at the expense of using an additional space t. It answers multi-pattern matching queries in time \(O(n\frac{\log d+\log\log_\sigma t+\log\log m}{\log_\sigma t}+occ)\) using O(m + t) words of space.
Keywords: Pattern Matching, Word Length, Query Time, String Matching, Short Strings
Tagged: symmetric matrix
Problem 572
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.
There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. Problem 7. Let $A=\begin{bmatrix} -3 & -4\\ 8& 9 \end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix} -1 \\ 2 \end{bmatrix}$. (a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$. (b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$. Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular. Problem 9. Determine whether each of the following sentences is true or false. (a) There is a $3\times 3$ homogeneous system that has exactly three solutions. (b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric. (c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$. (d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
Add to solve later
(e) The vectors \[\mathbf{v}_1=\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \mathbf{v}_3=\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\] are linearly independent. Problem 564
Let $A$ and $B$ be $n\times n$ skew-symmetric matrices. Namely $A^{\trans}=-A$ and $B^{\trans}=-B$.
(a) Prove that $A+B$ is skew-symmetric. (b) Prove that $cA$ is skew-symmetric for any scalar $c$. (c) Let $P$ be an $m\times n$ matrix. Prove that $P^{\trans}AP$ is skew-symmetric. (d) Suppose that $A$ is real skew-symmetric. Prove that $iA$ is an Hermitian matrix. (e) Prove that if $AB=-BA$, then $AB$ is a skew-symmetric matrix. (f) Let $\mathbf{v}$ be an $n$-dimensional column vecotor. Prove that $\mathbf{v}^{\trans}A\mathbf{v}=0$.
Add to solve later
(g) Suppose that $A$ is a real skew-symmetric matrix and $A^2\mathbf{v}=\mathbf{0}$ for some vector $\mathbf{v}\in \R^n$. Then prove that $A\mathbf{v}=\mathbf{0}$. Problem 556
Let $\mathbf{v}$ be a nonzero vector in $\R^n$.
Then the dot product $\mathbf{v}\cdot \mathbf{v}=\mathbf{v}^{\trans}\mathbf{v}\neq 0$. Set $a:=\frac{2}{\mathbf{v}^{\trans}\mathbf{v}}$ and define the $n\times n$ matrix $A$ by \[A=I-a\mathbf{v}\mathbf{v}^{\trans},\] where $I$ is the $n\times n$ identity matrix.
Prove that $A$ is a symmetric matrix and $AA=I$.
Conclude that the inverse matrix is $A^{-1}=A$. Problem 538 (a) Suppose that $A$ is an $n\times n$ real symmetric positive definite matrix. Prove that \[\langle \mathbf{x}, \mathbf{y}\rangle:=\mathbf{x}^{\trans}A\mathbf{y}\] defines an inner product on the vector space $\R^n$. (b) Let $A$ be an $n\times n$ real matrix. Suppose that \[\langle \mathbf{x}, \mathbf{y}\rangle:=\mathbf{x}^{\trans}A\mathbf{y}\] defines an inner product on the vector space $\R^n$.
Prove that $A$ is symmetric and positive definite.Add to solve later
Problem 457
Let $A$ be a real symmetric $n\times n$ matrix with $0$ as a simple eigenvalue (that is, the algebraic multiplicity of the eigenvalue $0$ is $1$), and let us fix a vector $\mathbf{v}\in \R^n$.
(a) Prove that for sufficiently small positive real $\epsilon$, the equation \[A\mathbf{x}+\epsilon\mathbf{x}=\mathbf{v}\] has a unique solution $\mathbf{x}=\mathbf{x}(\epsilon) \in \R^n$. (b) Evaluate \[\lim_{\epsilon \to 0^+} \epsilon \mathbf{x}(\epsilon)\] in terms of $\mathbf{v}$, the eigenvectors of $A$, and the inner product $\langle\, ,\,\rangle$ on $\R^n$. ( University of California, Berkeley, Linear Algebra Qualifying Exam) Problem 396
A real symmetric $n \times n$ matrix $A$ is called positive definite if \[\mathbf{x}^{\trans}A\mathbf{x}>0\] for all nonzero vectors $\mathbf{x}$ in $\R^n$. (a) Prove that the eigenvalues of a real symmetric positive-definite matrix $A$ are all positive.
Add to solve later
(b) Prove that if eigenvalues of a real symmetric matrix $A$ are all positive, then $A$ is positive-definite. Problem 385
Let
\[A=\begin{bmatrix} 2 & -1 & -1 \\ -1 &2 &-1 \\ -1 & -1 & 2 \end{bmatrix}.\] Determine whether the matrix $A$ is diagonalizable. If it is diagonalizable, then diagonalize $A$. That is, find a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$. |
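As a numerical companion to Problem 385 (and the symmetric-matrix facts above): for a real symmetric matrix, `numpy.linalg.eigh` returns real eigenvalues and an orthogonal eigenvector matrix $S$, so $S^{-1}=S^{T}$ and $S^{T}AS=D$. For this particular $A$ the rows sum to zero, so $0$ is an eigenvalue with eigenvector $(1,1,1)^{T}$, and the trace forces the other two to be $3, 3$:

```python
import numpy as np

A = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])

# eigh is designed for symmetric/Hermitian matrices: real eigenvalues,
# orthonormal eigenvectors (so S^{-1} = S^T), eigenvalues in ascending order
w, S = np.linalg.eigh(A)

assert np.allclose(w, [0., 3., 3.])                      # eigenvalues 0, 3, 3
assert np.allclose(S.T @ A @ S, np.diag(w), atol=1e-10)  # S^T A S = D
assert np.allclose(S.T @ S, np.eye(3), atol=1e-10)       # S is orthogonal
```

So $A$ is diagonalizable with $D=\operatorname{diag}(0,3,3)$, consistent with the spectral theorem for real symmetric matrices.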
I'm aware that there are a few similar questions already answered, but I could not find what I was looking for in any of then, so please bear with me :)
For a school project I need to find an angle ($\theta$) of maximum range ($R$) and its dependency on initial velocity $\theta(v_0)$, considering quadratic drag force and wind blowing in the direction opposite of the throw.
Drag force in $x$ direction:$F_{\textrm{drag},x}=-Kv^2\cos(\alpha)=-Kv_x(v_x^2+v_y^2)^{1 /2}$
Drag force in $y$ direction:$F_{\textrm{drag},y}=-Kv^2\sin(\alpha)=-Kv_y(v_x^2 +v_y^2)^{1/2}$
Wind force: $F_\textrm{wind}=F$
In x direction: $F_{w,x}=F\cos(\theta)$
In y direction: $F_{w,y}=F\sin(\theta)$
So the equations should look like this:
\begin{align}mv_x' &= -kv_x\cdot (v_x^2 + v_y^2)^{1/2}- F\cdot \cos\theta \\ mv_y' &= -kv_y\cdot (v_x^2 + v_y^2)^{1/2}- F\cdot \sin\theta- mg \end{align} Now I'm not sure how to solve this system of two non-linear differential equations... I'm figuring it should be done numerically, but I'm not very familiar with programming (I've only programmed a little bit in Python, but not this kind of stuff). Thank you for any answer!
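Since Python was mentioned: the system can be integrated step by step, and the optimal angle found by scanning over $\theta$. A rough sketch with simple Euler steps (all parameter values below are made up for illustration; I have written the $x$ equation with the $\cos\theta$ wind term and the $y$ equation with the $\sin\theta$ and $-mg$ terms):

```python
import numpy as np

def throw_range(theta, v0, m=0.145, k=0.001, F=0.1, g=9.81, dt=1e-3):
    """Euler integration of
         m vx' = -k vx sqrt(vx^2+vy^2) - F cos(theta)
         m vy' = -k vy sqrt(vx^2+vy^2) - F sin(theta) - m g
    starting from the origin; returns the horizontal distance when y drops below 0."""
    vx, vy = v0 * np.cos(theta), v0 * np.sin(theta)
    x, y = 0.0, 0.0
    while y >= 0.0:
        v = np.hypot(vx, vy)
        ax = (-k * vx * v - F * np.cos(theta)) / m
        ay = (-k * vy * v - F * np.sin(theta)) / m - g
        x, y = x + vx * dt, y + vy * dt
        vx, vy = vx + ax * dt, vy + ay * dt
    return x

v0 = 30.0
thetas = np.radians(np.arange(10, 61))
ranges = [throw_range(th, v0) for th in thetas]
best = thetas[int(np.argmax(ranges))]

# with quadratic drag plus a headwind, the optimal angle falls below 45 degrees
assert 10 < np.degrees(best) < 45
assert max(ranges) > 0
```

Plotting `ranges` against `thetas` for several values of `v0` then gives the dependency $\theta(v_0)$ directly; a higher-order stepper (RK4, or `scipy.integrate.solve_ivp`) would tighten the numbers without changing the picture.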
I have a 1D gas made of $N$ particles placed in a harmonic potential well, so the Hamiltonian is:
$$ \mathcal H = \sum_{j=1}^N \left ( \frac{p_j^2}{2m} + \frac{1}{2}m\omega^2 x_j^2 \right )$$
The first part of the exercise asked me to find the canonical partition function at temperature $T$ if the particles are distinguishable, then to find the partition function if the particles are indistinguishable but the Maxwell-Boltzmann approximation applies. In both cases this was easy. But now the exercise asks me to find the partition function if the particles are identical bosons, and then to show that this is equal to the Maxwell-Boltzmann approximation at large temperatures.
I don't know exactly how to set up the summation to count the states properly when you treat them as bosons. I know we have the states $\epsilon_j = \hbar \omega (j+\tfrac{1}{2})$ and that each of those states will be occupied by $n_j$ bosons and since it's 1D I don't have to worry about degeneracy...but I'm unsure how to continue.
From what I understand, it's nicer to work with fermions and boson in the grand canonical ensemble, but we haven't seen this in class yet.
Thanks! |
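Not a full answer, but a sketch of the counting that may help (treat it as a hint to verify against your notes, not gospel): a bosonic microstate is a multiset of level indices, i.e. an ordered list $0\le j_1\le j_2\le\dots\le j_N$. Writing $x=e^{-\beta\hbar\omega}$, the canonical partition function becomes

$$Z_N=x^{N/2}\sum_{0\le j_1\le \cdots\le j_N}x^{\,j_1+\cdots+j_N}=x^{N/2}\prod_{k=1}^{N}\frac{1}{1-x^{k}},$$

where the product form is the generating-function identity for integer partitions into at most $N$ parts. For $\beta\hbar\omega\to 0$ one has $1-x^{k}\approx k(1-x)$, so the product tends to $1/\bigl(N!\,(1-x)^{N}\bigr)$ and hence $Z_N\to Z_1^{N}/N!$ with $Z_1=x^{1/2}/(1-x)$, which is exactly the Maxwell-Boltzmann approximation at high temperature.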
Thought I'd take a slightly different approach couching the above in terms of a first course in vector calculus.
Suppose you have a curve $\vec{r}(t)$ in $R^3$. It could be a straight line, a circle, a helix, cycloid, etc.
You can associate to any point on a well behaved curve a unit tangent vector $\hat{T}=\frac{\frac{d\vec{r}}{dt}}{|\frac{d\vec{r}}{dt}|}$
In turn you can have a unit normal such that $\kappa(t) \hat{N}=\frac{d\hat{T}}{dt}$, where $\hat{N}$ is the unit normal and $\kappa$ is the curvature,
and a unit binormal vector $\hat{B}=\hat{T}\times\hat{N}$ (no normalization is needed, since $\hat{T}$ and $\hat{N}$ are orthogonal unit vectors).
It can be shown that $d\hat{B}/dt=-\tau\hat{N}$ where $\tau$ is Torsion.
Using the chain rule, $\frac{d\vec{r}}{dt}=\frac{d\vec{r}}{ds}\frac{ds}{dt}$ where $ds$ is infinitesimal arc length. It's typically assumed $ds/dt=1$ to keep the math easier. It also has some interesting physical implications regarding pseudo forces which can help give an intuitive understanding of gravitational effects of a curved space.
Curvature is defined as $\kappa(t)=\frac{d\theta}{ds}$ where $d\theta$ is a measure of the infinitesimal change in the direction of the unit tangent vector, and again $ds$ is the infinitesimal arc length.
$\hat{N}, \hat{B}$, and $\hat{T}$ form an orthonormal, curve-centric basis. Certain relationships between them hold no matter how your coordinates change: if the curve is rotated about the z axis, reflected across some plane, or moved somewhere else in $R^3$. These include curvature, vertices, and other geometric features.
A $d\theta$ is implied by the change in any unit vector. Vectors represent both magnitude and direction; holding the magnitude constant only allows for a change in direction, which can be represented as an angle change. Here there are 3 unit vectors to choose from, so there are 3 possible $d\theta$s. The derivatives of these unit vectors are vectors themselves, expressible in terms of those vectors: the Frenet-Serret curvature relations.
Notice in these equations that the derivative of vectors on the left is a linear combination of vectors on the right. If we had a column vector made of the basis vectors $T_{ij}=\langle\vec{N},\vec{B},\vec{T}\rangle$, the derivative of this column vector with respect to $ds$ would be some "matrix" $M$ multiplied by $T_{ij}$. The double index is needed because $i$ selects which of the 3 vectors we care about, and $j$ represents which component of that vector we are interested in.
So $\frac{dT_{ij}}{ds}=M\cdot T_{ij}$ is a very compact form of the Frenet-Serret equations. It represents curvature by giving information on the derivatives of the unit vectors.
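To make the frame concrete, here is a short numerical sketch for the unit helix $\vec r(t)=(\cos t,\sin t,t)$, using the standard parametric formulas $\kappa=|\vec r'\times\vec r''|/|\vec r'|^3$ and $\tau=(\vec r'\times\vec r'')\cdot\vec r'''/|\vec r'\times\vec r''|^2$ (these hold for any regular parametrization and do not need the $ds/dt=1$ normalization mentioned above):

```python
import numpy as np

def helix_frame(t):
    """Frenet frame (T, N, B), curvature and torsion of r(t)=(cos t, sin t, t),
    computed from the analytic derivatives r', r'', r'''."""
    r1 = np.array([-np.sin(t),  np.cos(t), 1.0])  # r'(t)
    r2 = np.array([-np.cos(t), -np.sin(t), 0.0])  # r''(t)
    r3 = np.array([ np.sin(t), -np.cos(t), 0.0])  # r'''(t)
    T = r1 / np.linalg.norm(r1)
    c = np.cross(r1, r2)
    B = c / np.linalg.norm(c)          # unit binormal
    N = np.cross(B, T)                 # completes the right-handed frame
    kappa = np.linalg.norm(c) / np.linalg.norm(r1) ** 3
    tau = float(c @ r3) / float(c @ c)
    return T, N, B, kappa, tau
```

For this helix both the curvature and the torsion come out constant, matching the textbook values $a/(a^2+b^2)$ and $b/(a^2+b^2)$ with $a=b=1$.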
Roughly speaking this 2 indexed entity (called a rank 2 tensor) is a vector of vectors, or a nested vector. So, in a similar sense, is a matrix. They appear all over for example, there's the Maxwell Stress Tensor in Electromagnetism or the stress tensor in materials science.
A vector field associates with points in $R^n$ another element in $R^n$.
In non-cartesian coordinate systems, unit vectors can change from point to point. This means they have non-zero derivatives implying some concept of curvature in play. This in turn means their components change.
$\Gamma^a_{bc}=$ the $a$-th component of the coordinate-$c$ derivative of basis vector $b$. For example, in spherical coordinates $\Gamma^r_{\theta \theta}=-r$. These are Christoffel symbols of the second kind.
The Christoffel symbols have their own derivatives, which also have implications regarding curvature.
So curvature can be categorized by Christoffel symbols and their derivatives. Whereas matrix elements are referred to by a row/column pair, for Christoffel symbols we need to specify which component of which derivative of which vector, implying 3 indices. (Despite requiring 3 indices, it is not itself a tensor, but that can be deferred.) Taking the derivative of a tensor creates a tensor having an additional lower index. Rank is the number of indices of a tensor. So the derivative of the Christoffel symbol has rank 4.
Notice the Riemann Curvature Tensor is of rank 4. Also notice the form it takes and compare to the expression of curvature for an implicit curve: Curvature of implicit Curve. You'll find the more formal treatments use curves to illustrate the principles in play, basically generalizing from the primitive concepts of curvature of a curve to curvature of a Manifold.
This hasn't been especially rigorous, but hopefully helps develop an intuition for the concepts in play. |
I would like to start off by saying this is not a philosophical question. I have a specific question pertaining to physics after the following explanation and background information, which I felt was necessary to properly formulate my question.
I have done some research on Zeno's Paradoxes as well as some of their modern day mathematical and physical "solutions." I understand that Zeno's dichotomy paradox can be explained mathematically by constructing an argument that makes use of the following infinite series: $$ \sum_{n=1}^\infty {1 \over 2^n} = 1$$ Because this series converges to $1$, then that implies that if physically traveling from some point A to another point B truly does require an infinite amount of actions in which an object travels ${1 \over 2}$ of the distance, then ${1 \over 4}$ then ${1 \over 8}$ then ${1 \over 16}$ and so on, the sum of those actions would eventually result in traveling the full distance. Mathematically, this makes perfect sense. However, knowing that the mathematics we use to describe the physical world is only a model, adopted when there is sufficient evidence to support it, could it be that the solution gained by the above argument comes to the correct conclusion using the wrong path? Using the infinite series provides a solution to the dichotomy paradox if it does indeed require an infinite amount of actions to move from point A to point B. So, then the question becomes, does it truly take an infinite amount of actions to physically move from one point to another (in which case, I believe the mathematical solution is a perfect model), or are there other phenomena at play that cannot be described in this fashion?
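The convergence claim itself is easy to check numerically; a tiny sketch of the partial sums:

```python
# Partial sums of sum_{n=1}^N 1/2^n equal 1 - 2**(-N), approaching 1.
def partial_sum(N):
    return sum(1 / 2**n for n in range(1, N + 1))

sums = [partial_sum(N) for N in (5, 10, 20, 50)]  # monotonically approaches 1
```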
I have one idea pertaining to a quantum mechanical description of the situation that does not disprove, but provides an alternative to the reasoning that it requires an infinite number of actions to move from point A to point B.
Suppose we have a box of some macroscopic size (say $1m^3$), and it is in the process of moving with a constant velocity from some point A to another point B. Obviously, the box is made up of atoms, and if we zoom in far enough on the leading face of the box we could see that the motion of said box is really the motion of a very large number of atoms moving in the same direction. According to the uncertainty principle, it is impossible to determine both a particle's momentum and position simultaneously. According to the physics professor who first taught me about this, the reason for this uncertainty does not lie within our measurement methods, but rather, it is an impossibility inherent in the fact that all particles have wavelike characteristics. Thus, their position and momentum are not even clearly defined at a specific moment in time. Could it not be possible then, that if we zoomed in far enough on the leading face of our box, that we would find the face of the box does not have a definite position? If this is true, then wouldn't there be a certain point very close to the destination point, B, where the idea of cutting the remainder of the distance to the destination in half makes no difference to the particles in question? We may say that to complete the trip, we must travel another ${1 \over 2^x}$ portion of the distance we started with, but if the uncertainty in position is larger than this remaining distance, could the particle (and by the same argument, all of the particles that make up the leading face of the box), traverse the final distance to point B without physically having to travel the "infinite" amount of points remaining?
Also, I have a second idea that I want to present as an aside, something that I don't believe can be proven as of now, but I wonder if, according to current understanding, it is possible.
What if, instead of analyzing the situation at hand through the distance traveled, it was analyzed through the passage of time? Specifically, if time is quantized, would it resolve the dichotomy paradox in a physical sense? I realize that whether or not time is quantized is up for debate (there is even this question here at stack exchange that explores the idea), so I am not going to ask if time is quantized. Based on my understanding, however, there is nothing known as of now that says time cannot be quantized (please correct me if I am wrong). Therefore, if time were quantized, then could we not say that for a given constant velocity there is a minimum distance that can be traveled? A distance that corresponds to $\Delta t_{min} \cdot v$? Would this not imply that as our moving box reached a certain distance $d$ away from point B, and if $d < \Delta t_{min} \cdot v$, the box would physically not be able to travel such a small distance, and in the next moment that the box moved, it would effectively be across the destination point B?
All of that being said, here is my specific question: In my above arguments, is any of my logic faulty? Is there any known law that would disprove either of these explanations for Zeno's dichotomy paradox? If so, is there a better way of physically (not mathematically) resolving the paradox? |
If you are referring to the animated TV series, remember this is a cartoon; you cannot expect much realism. In reality, open diver access to the water like this would require the air pressure above the water to equal the external water pressure; you cannot have two different pressures meeting with nothing to separate them. Once in the air chamber above the ...
I am assuming the ball is not fully inflated. Because of this, when you get the ball to the floor of the pool, the pressure from the water above presses the ball into the shape of a flat disk. The buoyancy force will affect a body only if the body has some fluid underneath it to push it upward; in all the other cases it will ...
For the first question, (e) is also a correct response. For the second one, you are meant to intuit that the pressure inside the beach ball is no greater than atmospheric pressure—otherwise the “thin” plastic would presumably “stretch”. I’d say that premise is a bit thin, and the reasoning a bit of a stretch, but otherwise the given answer is correct.
The key distinction here is that the plastic is not stretched—this means that the pressure inside of the unstretched ball equilibrates with the external pressure. Assuming the air behaves like an ideal gas, Boyle’s Law indicates that $p_1 V_1 = p_2 V_2$, so the volume the air occupies decreases as the ball goes down thanks to the pressure increase from the ...
There is a buoyant force pushing upwards on the submerged object, which is equal to the weight of the water that is displaced. If the submerged object is suspended from a string, and doesn't touch the bottom of the beaker, the weight of the beaker will increase by the magnitude of the buoyant force, because an upwards buoyant force on the object requires an ...
First note that LBM doesn't actually work for incompressible fluids unless you replace the density in your D2Q5 vectors with pressure (or energy etc.: something that doesn't relate to the number of particles per unit volume/area), as density is constant in incompressible fluids (and I'll be from this point onward considering your LBM implementation in ...
You have a 1000cc volume, 720cc displaces water, it tells you what the water weighs. Now 280cc displaces oil, it tells you what the oil weighs. Find the weight of both displacements and add them together for the total weight. You should be able to work it with this. Also you put .78 in your equation for water volume, it should be .72
Where does the other part of the force go for the funnel? It's supported by the horizontal component of the sloping walls of the funnel, not the bottom of the cylindrical portion of the funnel. See figures below. The bottom of the cylinder to the left supports the entire weight of the fluid above it. The weight of the fluid outside the cylindrical part of ...
It presses on the funnel. Imagine that the lower cylinder of the funnel extended all the way to the top. So you had water in the central cylinder pressing on the floor, and you had a separate conical area with the center cut out, full of water. You could drain the center cylinder and the rest of the water would still be there, pressing down on the funnel. ...
Not that I'm aware of, no. The interface between the air pocket and the surface of the water should be at approximately the same pressure. If they were not at the same pressure, then one would be pushing against the other with a greater force, and the interface would move until the forces equalized (therefore the pressures approximately equalize). Adding ...
In general it is not the same. If the interface is not flat, the difference in pressures (Laplace pressure) can arise from non-zero surface energy of the interface. Or if there is a preferred curvature of the interface (which can arise from highly asymmetric molecules), additional terms in the pressure difference can be present. However, if the interface is flat and ...
No. If the object has a compressibility much less than water (nothing is perfectly incompressible) it would only sink part way. At a depth of 10,000 ft, the water would be about 1.4% denser, assuming temperature remained constant.
The answer to both your questions in the last paragraph is yes. You understood it correctly. The first sentence is not carefully worded; instead of "buoyant force is greater" it should say something like "buoyant force [on the fully submerged body] is greater".
The molecules at the liquid to air interface feel this downward pressure due to cohesion with other liquid molecules. The molecules below the surface feel no net force (unless external force is applied) because cohesion with other molecules is pulling them equally in all directions. They are in motion, as you stated, but they are not accelerating. ...
Buoyancy does not really change with depth (water temperature differences can slightly change its density). Once a sealed container weighed more than the water it displaced (negative buoyancy), it would begin to sink and go to the bottom. If it weighed less than the amount of water it displaced (positive buoyancy) it will float. So to decide if your ...
If your sample of air were in a container open at the bottom with extra weights attached, sinking would require that the total weight of the air, container, and weights must exceed the weight of the water displaced by all three. Once submerged, the volume of the air would start to decrease and the system would continue to sink. If the air were in a sturdy ...
It would only make a difference if the water could run from one end to the other, such as a single straight tube with air and water in it. If the water was contained as it appears to be, where the see saw's center of mass did not change, it would work the same as a solid.
Figure the surface area of a piston (pi times radius squared), then divide the weight on top of it by the square inches of its surface; this will give the pounds per square inch (PSI) of pressure exerted on the water. Do this on each piston and compare the PSI of each. The piston with greater PSI will push down and lift the piston with lighter PSI. If ...
Keeping in mind that the buoyant force depends on the difference between the water pressure below and above the rod, the buoyant force acts uniformly along the rod and the effective force can be taken at the middle of the submerged section. With the force equation (buoyant force + tension = weight) and a torque equation (choose an axis), you can ...
In a gas, the pressure on a surface is associated with the momentum change of rebounding molecules. Extra pressure requires a higher density or temperature in the gas. In a liquid you must add the contact forces between the molecules. The total pressure results from a combination of both, and the upward component must be adequate to support the weight of ...
Why don't we consider the force by the movement of liquid molecules due to their thermal energy? This does not change the pressure. Essentially, increased thermal energy will make there be fewer collisions that are individually more energetic. You can think of a ball bouncing on a floor. Regardless of the energy of the ball the average force is the same ...
Consider a small cube (of size $\Delta x$) of fluid at equilibrium. If you draw a free body diagram you see that there are 7 forces acting on the box of fluid: one pressure force acting on each face and the weight. Since it is in equilibrium the forces on the horizontal faces are all equal and therefore the pressure does not change horizontally. However, to ...
Since they represent different tubes, h and h are probably the same height, otherwise they would be labeled differently. Water pressure increases with depth from the surface, not volume. So with water of the same purity and temperature, the pressure at the same depth from the surface will be equal in both tubes.
What you have neglected to consider is the effect of the walls of the container, which exert forces on the water. Look at the sequence below: on the left there is just water, and then as you progress to the right the walls of the container are added and the hole closed, which does not change the pressure at point $P$, and finally the water outside the ...
The water pressure exerted by the fluid already includes the weight of the water. In fact, the pressure is caused by the fluid's weight. Therefore, the force exerted by the water onto the piston is just the water pressure at the piston multiplied by the area of the piston. The water pressure is given by$$p=\rho g\Delta h=\frac{mg\Delta h}{A\Delta h}=\...
I assume the question is to determine the force required by the piston. Notice that if the piston at the very bottom is in equilibrium, the magnitude of the force from the water on the piston must equal the force exerted by the piston. So we just need to find the force of the water on the piston. At the very top of the tube the pressure is atmospheric ...
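A back-of-the-envelope sketch of that kind of calculation (all numbers assumed for illustration): the gauge pressure of a water column is $p=\rho g h$, and the force on a horizontal piston is that pressure times its area.

```python
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
h = 2.0        # assumed height of the water column above the piston, m
A = 0.01       # assumed piston area, m^2

p_gauge = rho * g * h   # pressure from the water column alone, Pa
force = p_gauge * A     # force on the piston, N (the atmospheric part cancels
                        # if it also acts on the other side of the piston)
```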
Short answer: Wikipedia says "Pressure (symbol $p$ or $P$) is the force applied perpendicular to the surface of an object per unit area", so the direction of the force caused by pressure is determined by the surface orientation. If the pressures at A and B were different, the horizontal pressure force would push fluid from the higher pressure to the lower one to equalise the ...
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get?
(a) $A$
(b) $C^{-1}A^{-1}BC^{-1}AC^2$
(c) $B$
(d) $C^2$
(e) $C^{-1}BC$
(f) $C$
Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less.Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*}
(a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$.
(b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$.
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be a vector space and $B$ be a basis for $V$. Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
(a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5. Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason.
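A quick numerical check (a sketch only; the exam of course expects row reduction by hand) that this $A$ is invertible:

```python
import numpy as np

A = np.array([[0, 0, 2, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1]], dtype=float)

det = np.linalg.det(A)        # nonzero determinant <=> A is invertible
Ainv = np.linalg.inv(A)
assert abs(det) > 1e-12
assert np.allclose(A @ Ainv, np.eye(4))
```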
Problem 6. Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
(Linear Algebra Midterm Exam 1, the Ohio State University)
This post is Part 1 and contains the first three problems. Check out Part 2 and Part 3 for the rest of the exam problems.
Problem 1. Determine all possibilities for the number of solutions of each of the systems of linear equations described below.
(a) A consistent system of $5$ equations in $3$ unknowns and the rank of the system is $1$.
(b) A homogeneous system of $5$ equations in $4$ unknowns and it has a solution $x_1=1$, $x_2=2$, $x_3=3$, $x_4=4$.
Problem 2. Consider the homogeneous system of linear equations whose coefficient matrix is given by the following matrix $A$. Find the vector form for the general solution of the system.\[A=\begin{bmatrix}1 & 0 & -1 & -2 \\2 &1 & -2 & -7 \\3 & 0 & -3 & -6 \\0 & 1 & 0 & -3\end{bmatrix}.\]
Problem 3. Let $A$ be the following invertible matrix.\[A=\begin{bmatrix}-1 & 2 & 3 & 4 & 5\\6 & -7 & 8& 9& 10\\11 & 12 & -13 & 14 & 15\\16 & 17 & 18& -19 & 20\\21 & 22 & 23 & 24 & -25\end{bmatrix}\]Let $I$ be the $5\times 5$ identity matrix and let $B$ be a $5\times 5$ matrix. Suppose that $ABA^{-1}=I$. Then determine the matrix $B$.
(Linear Algebra Midterm Exam 1, the Ohio State University) |
I stumbled upon this old question and I would like to share my solution. As mentioned in other answers, there is no analytical solution, but the function to be minimized behaves nicely and the optimal value of $\alpha$ can be found easily with a few Newton iterations. There is also a formula to check the optimality of the result.
The impulse response of the length $N$ FIR moving average filter is given by
$$h_{FIR}[n]=\frac{1}{N}(u[n]-u[n-N])\tag{1}$$
where $u[n]$ is the unit step function. The first order IIR filter
$$y[n]=\alpha x[n]+(1-\alpha)y[n-1]\tag{2}$$
has the impulse response
$$h_{IIR}[n]=\alpha(1-\alpha)^nu[n]\tag{3}$$
The goal is now to minimize the squared error
$$\epsilon=\sum_{n=0}^{\infty}\left(h_{FIR}[n]-h_{IIR}[n]\right)^2\tag{4}$$
Using $(1)$ and $(3)$, the error can be written as
$$\begin{align}\epsilon(\alpha)&=\sum_{n=0}^{N-1}\left(\alpha(1-\alpha)^n-\frac{1}{N}\right)^2+\sum_{n=N}^{\infty}\alpha^2(1-\alpha)^{2n}\\&=\alpha^2\sum_{n=0}^{\infty}(1-\alpha)^{2n}-\frac{2\alpha}{N}\sum_{n=0}^{N-1}(1-\alpha)^n+\sum_{n=0}^{N-1}\frac{1}{N^2}\\&=\frac{\alpha^2}{1-(1-\alpha)^2}-\frac{2\alpha}{N}\frac{1-(1-\alpha)^N}{1-(1-\alpha)}+\frac{1}{N}\\&=\frac{\alpha}{2-\alpha}-\frac{2}{N}\left(1-(1-\alpha)^N\right)+\frac{1}{N},\qquad 0<\alpha<2\tag{5}\end{align}$$
This expression is very similar to the one given in this answer, but it's not identical. The restriction on $\alpha$ in $(5)$ makes sure that the infinite sum converges, and it is identical to the stability condition for the IIR filter given by $(2)$.
Setting the derivative of $(5)$ to zero results in
$$(1-\alpha)^{N-1}(2-\alpha)^2=1\tag{6}$$
Note that the optimal $\alpha$ must be in the interval $(0,1]$ because larger values of $\alpha$ result in an alternating impulse response $(3)$, which cannot approximate the constant impulse response of the FIR moving average filter.
Taking the square root of $(6)$ and introducing $\beta=1-\alpha$, we obtain
$$\beta^{(N+1)/2}+\beta^{(N-1)/2}-1=0\tag{7}$$
This equation cannot be solved analytically for $\beta$, but it can be solved for $N$:
$$N=1-2\frac{\log(1+\beta)}{\log(\beta)},\qquad \beta\neq 0\tag{8}$$
Equation $(8)$ can be used to double-check a numerical solution of $(7)$; it must return the specified value of $N$.
Equation $(7)$ can be solved with a few lines of (Matlab/Octave) code:
N = 50; % desired filter length of FIR moving average filter
if ( N == 1 ) % no iteration for trivial case
b = 0;
else
% Newton iteration
b = 1; % starting value
Nit = 7;
n = (N+1)/2;
for k = 1:Nit,
f = b^n + b^(n-1) -1;
fp = n*b^(n-1) + (n-1)*b^(n-2);
b = b - f/fp;
end
% check result
N0 = -2*log(1+b)/log(b) + 1 % must equal N
end
a = 1 - b;
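For readers without Matlab/Octave, here is a minimal Python port of the same Newton iteration on equation $(7)$ (a sketch with the iteration count raised to 20 for safety; the starting value $b=1$ works because the left-hand side of $(7)$ is increasing and convex on $(0,1]$):

```python
def optimal_alpha(N, iters=20):
    """Smoothing factor alpha of the first-order IIR filter that best
    approximates (in the least-squares sense) the length-N moving average,
    found by Newton iteration on b**n + b**(n-1) - 1 = 0 with n = (N+1)/2."""
    if N == 1:
        return 1.0              # trivial case: no iteration needed
    n = (N + 1) / 2
    b = 1.0                     # starting value
    for _ in range(iters):
        f = b**n + b**(n - 1) - 1
        fp = n * b**(n - 1) + (n - 1) * b**(n - 2)
        b -= f / fp             # Newton step
    return 1 - b
```

For example, `optimal_alpha(50)` reproduces the `2.7349e-02` entry in the table below.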
Below is a table with the optimal values of $\alpha$ for a range of filter lengths $N$:
N alpha
1 1.0000e+00
2 5.3443e-01
3 3.8197e-01
4 2.9839e-01
5 2.4512e-01
6 2.0809e-01
7 1.8083e-01
8 1.5990e-01
9 1.4333e-01
10 1.2987e-01
20 6.7023e-02
30 4.5175e-02
40 3.4071e-02
50 2.7349e-02
60 2.2842e-02
70 1.9611e-02
80 1.7180e-02
90 1.5286e-02
100 1.3768e-02
200 6.9076e-03
300 4.6103e-03
400 3.4597e-03
500 2.7688e-03
600 2.3078e-03
700 1.9785e-03
800 1.7314e-03
900 1.5391e-03
1000 1.3853e-03 |
As described in Chapter 1, classical mechanics is based on a set of axioms, which in turn are based on (repeated) physical observations. In order to formulate the first three axioms, we will need to first define three quantities: the (instantaneous) velocity, acceleration and momentum of a particle. If we denote the position of a particle as x(t) - indicating a vector quantity¹ with the dimension of length that depends on time - we define its velocity as the time derivative of the position:
$$v(t)={\dot x (t)}={{d x(t)} \over dt}$$
Note that we use an overdot to indicate a time derivative; we will use this convention throughout these notes. The acceleration is the time derivative of the velocity, and thus the second derivative of the position:
$$a(t)={\ddot x (t)}={{d v(t)} \over dt}={{d^2 x(t)} \over dt^2}$$
Finally the momentum of a particle is its mass times its velocity:
$$p(t)={m v(t)}={m \dot x (t)}$$
We are now ready to give our next three axioms. You may have encountered them before; they are known as Newton’s three laws of motion.
Axiom 1 (Newton's first law of motion). As long as there is no external action, a particle's velocity will remain constant.

Note that the first law includes particles at rest, i.e., \(v=0\). We will define the general 'external action' as a force; a force is therefore anything that can change the velocity of a particle. The second law quantifies the force.

Axiom 2 (Newton's second law of motion). If there is a net force acting on a particle, then its instantaneous change in momentum due to that force is equal to that force:
$$F(t)={dp(t) \over dt} \label{2.1.4}$$
Now since \(p=mv\) and \(a={dv \over dt}\), if the mass is constant we can also write Equation \ref{2.1.4} as \(F=ma\), or
$$F(t)=m \ddot x(t)$$
which is the form we will use most. Based on the second law, we see that a force has the physical dimension of a mass times a length divided by a time squared - since this is quite a lot to write every time, we define the dimension of force: \(F=MLT^{-2}\). Likewise, we define a unit, the Newton (N), as a kilogram times a meter per second squared: \(N={{kg \cdot m} \over s^2}\). Therefore, in principle Newton's second law of motion can also be used to measure forces, though we will often use it the other way around, and calculate changes in momentum due to a known force.
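As a minimal computational illustration of the second law (numbers assumed; a sketch, not part of the original text): integrate \(F=m\ddot x\) for a constant force with simple Euler steps and compare with the exact result \(x(t)=\tfrac{1}{2}(F/m)t^2\) for a particle starting at rest.

```python
m, F = 2.0, 10.0          # mass (kg) and constant force (N), assumed values
dt, steps = 1e-4, 10_000  # time step (s) and step count -> t_end = 1 s

x, v = 0.0, 0.0           # start at rest at the origin
for _ in range(steps):
    v += (F / m) * dt     # dv = a dt with a = F/m  (Newton's second law)
    x += v * dt           # dx = v dt

t = steps * dt
x_exact = 0.5 * (F / m) * t**2   # exact position: 2.5 m
```

The numerical result agrees with the exact one up to the discretization error of the Euler scheme, which shrinks with the step size.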
Note how Newton’s first law follows from the second: if the force is zero, there is no change in momentum, and thus (assuming constant mass) a constant velocity. Note also that although the second law gives us a quantification of the force, by itself it will not help us achieve much, as we at present have no idea what the force is (though you probably have some intuitive ideas from experience) - for that we will use the force laws of the next section. Before we go there, there is another important observation on the nature of forces in general.
Axiom 3 (Newton's third law of motion). If a body exerts a force \(F_1\) on a second body, the second body exerts an equal but opposite force \(F_2\) on the first, i.e., the forces are equal in magnitude but opposite in direction:
$$F_1 = -F_2$$
Isaac Newton
Isaac Newton (1642-1727) was a British physicist, astronomer and mathematician, who is widely regarded as one of the most important scientists in history. Newton was a professor at Cambridge from 1667 till 1702, where he held the famous Lucasian chair in mathematics. Newton invented infinitesimal calculus to be able to express the laws of mechanics that now bear his name in mathematical form. He also gave a mathematical description of gravity (Equation 2.2.3), from which he could derive Kepler's laws of planetary motion (Section 6.4). In addition to his work on mechanics, Newton made key contributions to optics and invented the reflecting telescope, which uses a mirror rather than a lens to gather light. Having retired from his position in Cambridge, Newton spent most of the second half of his life in London, as warden and later master of the Royal Mint, and president of the Royal Society.

1 Appendix A.1 lists some basic properties of vectors that you may find useful.
We have already written Neural Networks in Python in the previous chapters of our tutorial. We could train these networks, but we didn't explain the mechanism used for training. We used backpropagation without saying so. Backpropagation is a commonly used method for training artificial neural networks, especially deep neural networks. Backpropagation is needed to calculate the gradient, which we need to adapt the weights of the weight matrices. The weights of the neurons (nodes) of our network are adjusted by calculating the gradient of the loss function. For this purpose a gradient descent optimization algorithm is used. Backpropagation is also called the backward propagation of errors.
Quite often people are frightened away by the mathematics used in it. We try to explain it in simple terms.
Explaining gradient descent starts in many articles or tutorials with mountains. Imagine you are put on a mountain, not necessarily at the top, by a helicopter at night or in heavy fog. Let's further imagine that this mountain is on an island and you want to reach sea level. You have to go down, but you can hardly see anything, maybe just a few metres. Your task is to find your way down, but you cannot see the path. You can use the method of gradient descent. This means that you examine the steepness at your current position. You proceed in the direction of the steepest descent. You take only a few steps and then you stop again to reorient yourself. This means you apply the previously described procedure again, i.e. you look for the steepest descent.
This procedure is depicted in the following diagram in a two-dimensional space.
Going on like this you will arrive at a position where there is no further descent.

Every direction goes upwards. You may have reached the deepest level - the global minimum - but you might as well be stuck in a basin. If you start at the position on the right side of our image, everything works out fine, but from the left side, you will be stuck in a local minimum. If you now imagine - not very realistically - that you are dropped many times at random places on this island, you will find a way down to sea level. This is what we actually do when we train a neural network.
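The walk-downhill procedure above can be sketched in a few lines of Python; the one-dimensional "island profile" and the learning rate are made-up illustrations.

```python
# Sketch of the "walk downhill" procedure on a simple one-dimensional
# profile f(x) = (x - 3)**2 + 1; the function and step size are made up.

def gradient_descent(start, learning_rate=0.1, steps=100):
    """Repeatedly step against the slope f'(x) = 2*(x - 3)."""
    x = start
    for _ in range(steps):
        slope = 2 * (x - 3)          # examine the steepness at the current position
        x -= learning_rate * slope   # small step in the steepest-descent direction
    return x

print(gradient_descent(start=10.0))  # converges near the minimum at x = 3
```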
Now, we have to go into the details, i.e. the mathematics.
We will start with the simpler case. We look at a linear network. Linear neural networks are networks where the output signal is created by summing up all the weighted input signals. No activation function will be applied to this sum, which is the reason for the linearity.
We will use the following simple network.
We have labels, i.e. target or desired values $t_i$ for each output value $o_i$. In principle, the error is the difference between the target and the actual output:$$e_i = t_i - o_i$$
We will later use a squared error function, because it has better characteristics for the algorithm:$$e_i = \frac{1}{2} ( t_i - o_i ) ^ 2 $$
We will have a look at the output value $o_1$, which is depending on the values $w_{11}$, $w_{21}$, $w_{31}$ and $w_{41}$. Let's assume the calculated value ($o_1$) is 0.92 and the desired value ($t_1$) is 1. In this case the error is$$e_1 = t_1 - o_1 = 1 - 0.92 = 0.08$$
Depending on this error, we have to change the weights of the incoming values accordingly. We have four weights, so we could spread the error evenly. Yet, it makes more sense to do it proportionally, according to the weight values. This means that we can calculate the fraction of the error $e_1$ in $w_{11}$ as:$$e_1 \cdot \frac{w_{11}}{\sum_{i=1}^{4} w_{i1}}$$
This means in our example:$$0.08 \cdot \frac{0.6}{0.6 + 0.4 + 0.1 + 0.2} = 0.037$$
The total error in our weight matrix between the hidden and the output layer - we called it in our previous chapter 'who' - looks like this$$ e_{who} = \begin{bmatrix} \frac{w_{11}}{\sum_{i=1}^{4} w_{i1}} & \frac{w_{12}}{\sum_{i=1}^{4} w_{i2}} \\ \frac{w_{21}}{\sum_{i=1}^{4} w_{i1}} & \frac{w_{22}}{\sum_{i=1}^{4} w_{i2}} \\ \frac{w_{31}}{\sum_{i=1}^{4} w_{i1}} & \frac{w_{32}}{\sum_{i=1}^{4} w_{i2}} \\ \frac{w_{41}}{\sum_{i=1}^{4} w_{i1}} & \frac{w_{42}}{\sum_{i=1}^{4} w_{i2}} \\ \end{bmatrix} \cdot \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} $$
You can see that the denominator in each column of the left matrix is always the same. It is a scaling factor, so we can drop it and the calculation gets a lot simpler:$$ e_{who} = \begin{bmatrix} w_{11} & w_{12} \\ w_{21} & w_{22} \\ w_{31} & w_{32} \\ w_{41} & w_{42} \\ \end{bmatrix} \cdot \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} $$
If you compare the matrix on the right side with the 'who' matrix of our chapter Neuronal Network Using Python and Numpy, you will notice that it is the transpose of 'who'.$$e_{who} = who.T \cdot e$$
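As a quick NumPy illustration of this matrix product, the backpropagated error can be computed directly. The first column of 'who' uses the weight values from the example above; the weights into the second output node and the error $e_2$ are made-up values.

```python
import numpy as np

# Propagating the output errors back through the hidden-to-output weights 'who'.
who = np.array([[0.6, 0.4, 0.1, 0.2],   # weights into output node 1 (from the example)
                [0.5, 0.2, 0.3, 0.1]])  # weights into output node 2 (assumed values)
e = np.array([0.08, 0.04])              # output errors e1 (from the example), e2 (assumed)

e_hidden = who.T @ e                    # e_who = who.T . e
print(e_hidden)                         # fraction of the output errors per hidden node
```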
So, this has been the easy part for linear neural networks.
We want to calculate the error in a network with an activation function, i.e. a non-linear network. The derivative of the error function describes the slope. As we mentioned at the beginning of this chapter, we want to descend. The derivative describes how the error $E$ changes as the weight $w_{ij}$ changes:$$\frac{\partial E}{\partial w_{ij}}$$
The error function E over all the output nodes $o_j$ ($j = 1, ... n$) where $n$ is the number of output nodes is:$$E = \sum_{j=1}^{n} \frac{1}{2} (t_j - o_j)^2$$
Now, we can insert this in our derivation:$$\frac{\partial E}{\partial w_{ij}} = \frac{\partial}{\partial w_{ij}} \frac{1}{2} \sum_{j=1}^{n} (t_j - o_j)^2$$
If you have a look at our example network, you will see that an output node $o_j$ only depends on the input signals created with the weights $w_{ij}$ with $i = 1, \ldots m$ and $m$ the number of hidden nodes.
This means that we can calculate the error for every output node independently of each other and we get rid of the sum. This is the error for a node j for example:$$\frac{\partial E}{\partial w_{ij}} = \frac{\partial}{\partial w_{ij}} \frac{1}{2} (t_j - o_j)^2$$
The value $t_j$ is a constant, because it does not depend on any input signals or weights. We can apply the chain rule for the differentiation of the previous term to simplify things:$$\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial o_{j}} \cdot \frac{\partial o_j}{\partial w_{ij}}$$
In the previous chapter of our tutorial, we used the sigmoid function as the activation function:$$\sigma(x) = \frac{1}{1+e^{-x}}$$
The output node $o_j$ is calculated by applying the sigmoid function to the sum of the weighted input signals. This means that we can further simplify our differentiation term by replacing $o_j$ by this function:$$\frac{\partial E}{\partial w_{ij}} = (t_j - o_j) \cdot \frac{\partial }{\partial w_{ij}} \sigma(\sum_{i=1}^{m} w_{ij}h_i)$$
where $m$ is the number of hidden nodes.
The sigmoid function is easy to differentiate:$$\frac{\partial \sigma(x)}{\partial x} = \sigma(x) \cdot (1 - \sigma(x))$$
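As a small sanity check (not part of the original tutorial), this derivative formula can be verified numerically with a central finite difference; the evaluation point $x = 0.7$ is an arbitrary choice.

```python
import math

# Sigmoid and its derivative; a finite-difference check confirms
# sigma'(x) = sigma(x) * (1 - sigma(x)).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

x, h = 0.7, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference
print(abs(numeric - sigmoid_prime(x)))                 # tiny difference
```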
The complete differentiation looks like this now:$$\frac{\partial E}{\partial w_{ij}} = (t_j - o_j) \cdot \sigma(\sum_{i=1}^{m} w_{ij}h_i) \cdot (1 - \sigma(\sum_{i=1}^{m} w_{ij}h_i)) \frac{\partial }{\partial w_{ij}} \sum_{i=1}^{m} w_{ij}h_i $$
The last part has to be differentiated with respect to $w_{ij}$. This means that all the summands vanish except the one containing $w_{ij}$, whose derivative is $h_i$:$$\frac{\partial E}{\partial w_{ij}} = (t_j - o_j) \cdot \sigma(\sum_{i=1}^{m} w_{ij}h_i) \cdot (1 - \sigma(\sum_{i=1}^{m} w_{ij}h_i)) \cdot h_i $$
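A minimal sketch of the resulting weight update for a single output node $j$ (not the tutorial's actual 'train' method): the hidden activations, target and learning rate are made-up values, the gradient is written with the conventional minus sign from differentiating $(t_j - o_j)^2$, and the final factor $h_i$ comes from the derivative of $\sum_i w_{ij} h_i$.

```python
import numpy as np

# Minimal weight-update sketch for one output node j; all numbers are assumed.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

h = np.array([0.4, 0.9, 0.2, 0.7])      # hidden-layer outputs h_i (assumed)
w = np.array([0.6, 0.4, 0.1, 0.2])      # weights w_ij into output node j
t = 1.0                                  # target value t_j (assumed)
lr = 0.3                                 # learning rate (assumed)

o = sigmoid(w @ h)                       # output o_j = sigma(sum_i w_ij h_i)
grad = -(t - o) * o * (1.0 - o) * h      # dE/dw_ij, one entry per weight
w_new = w - lr * grad                    # gradient-descent step
print(w_new)
```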
This is what we used in the method 'train' of our NeuralNetwork class in the previous chapter.
stat946w18/Implicit Causal Models for Genome-wide Association Studies

Revision as of 23:46, 20 April 2018

Introduction and Motivation
There has been much recent progress on probabilistic models, leading to rich generative models that combine neural networks, implicit densities, and scalable algorithms for Bayesian inference on very large data. However, most of these models focus on capturing statistical relationships rather than causal relationships. Causal relationships are relationships where one event is a result of another event, i.e. a cause and effect. Causal models give us a sense of how manipulating the generative process would change the final results.
Genome-wide association studies (GWAS) are examples of causal relationships. The genome is essentially the sum of all the DNA in an organism and contains information about the organism's attributes. Specifically, GWAS is about figuring out how genetic factors cause disease among humans. Here the genetic factors we are referring to are single nucleotide polymorphisms (SNPs), and getting a particular disease is treated as a trait, i.e., the outcome. In order to understand the reason for developing a disease and to cure it, the causation between SNPs and diseases is investigated: first, predict which one or more SNPs cause the disease; second, target the selected SNPs to cure the disease.
The figure below depicts an example Manhattan plot for a GWAS. Each dot represents an SNP. The x-axis is the chromosome location, and the y-axis is the negative log of the association p-value between the SNP and the disease, so points with the largest values represent strongly associated risk loci.
This paper focuses on two challenges to combining modern probabilistic models and causality. The first one is how to build rich causal models with specific needs by GWAS. In general, probabilistic causal models involve a function [math]f[/math] and a noise [math]n[/math]. For working simplicity, we usually assume [math]f[/math] as a linear model with Gaussian noise. However problems like GWAS require models with nonlinear, learnable interactions among the inputs and the noise.
The second challenge is how to address latent population-based confounders. Latent confounders are issues when we apply the causal models since we cannot observe them nor know the underlying structure. For example, in GWAS, both latent population structure, i.e., subgroups in the population with ancestry differences, and relatedness among sample individuals produce spurious correlations among SNPs to the trait of interest. The existing methods cannot easily accommodate the complex latent structure.
For the first challenge, the authors develop implicit causal models, a class of causal models that leverages neural architectures with an implicit density. With GWAS, implicit causal models generalize previous methods to capture important nonlinearities, such as gene-gene and gene-population interaction. Building on this, for the second challenge, they describe an implicit causal model that adjusts for population-confounders by sharing strength across examples (genes).
There has been an increasing number of works on causal models which focus on causal discovery and typically have strong assumptions such as Gaussian processes on noise variable or nonlinearities for the main function.
Implicit Causal Models
Implicit causal models are an extension of probabilistic causal models. Probabilistic causal models will be introduced first.
Probabilistic Causal Models
Probabilistic causal models are built from deterministic functions of noise and other variables. Consider background noise [math]\epsilon[/math], representing unknown background quantities which are jointly independent, and a global variable [math]\beta[/math], some function of this noise, where
Each [math]\beta[/math] and [math]x[/math] is a function of noise; [math]y[/math] is a function of noise and [math]x[/math],
The target is the causal mechanism [math]f_y[/math], so that the causal effect [math]p(y|do(X=x),\beta)[/math] can be calculated. [math]do(X=x)[/math] means that we set [math]X[/math] to the value [math]x[/math] under the fixed structure [math]\beta[/math]. Following previous work, it is assumed that [math]p(y|do(x),\beta) = p(y|x, \beta)[/math].
An example of a probabilistic causal model is the additive noise model.
[math]f(.)[/math] is usually a linear function or spline functions for nonlinearities. [math]\epsilon[/math] is assumed to be standard normal, as well as [math]y[/math]. Thus the posterior [math]p(\theta | x, y, \beta)[/math] can be represented as
where [math]p(\theta)[/math] is the prior which is known. Then, variational inference or MCMC can be applied to calculate the posterior distribution.
Implicit Causal Models
The difference between implicit causal models and probabilistic causal models is the noise variable. Instead of using an additive noise term, an implicit causal model directly takes the noise [math]\epsilon[/math] as input and outputs [math]x[/math] given the parameters [math]\theta[/math]:
[math] x = g(\epsilon \mid \theta), \quad \epsilon \sim s(\cdot) [/math]
The causal diagram has changed to:
They used fully connected neural networks with a sufficient number of hidden units to approximate each causal mechanism. Below is the formal description.

Implicit Causal Models with Latent Confounders
Previously, the global structure was assumed to be observed. Next, we consider the scenario where it is unobserved.
Causal Inference with a Latent Confounder
Similar to before, the interest is the causal effect [math]p(y|do(x_m), x_{-m})[/math]. Here, the SNPs other than [math]x_m[/math] are also under consideration. However, the effect is confounded by the unobserved confounder [math]z_n[/math]. As a result, the standard inference method cannot be used in this case.
The paper proposed a new method which includes the latent confounders. For each subject [math]n=1,…,N[/math] and each SNP [math]m=1,…,M[/math],
The mechanism for latent confounder [math]z_n[/math] is assumed to be known. SNPs depend on the confounders and the trait depends on all the SNPs and the confounders as well.
The posterior of [math]\theta[/math] needs to be calculated in order to estimate the mechanism [math]g_y[/math] as well as the causal effect [math]p(y|do(x_m), x_{-m})[/math], so that we can explain how changes to each SNP [math]X_m[/math] cause changes to the trait [math]Y[/math].
Note that the latent structure [math]p(z|x, y)[/math] is assumed known.
In general, causal inference with latent confounders can be dangerous: it uses the data twice, and thus it may bias the estimates of each arrow [math]X_m → Y[/math]. Why is this justified? This is answered below:
Proposition 1. Assume the causal graph of Figure 2 (left) is correct and that the true distribution resides in some configuration of the parameters of the causal model (Figure 2 (right)). Then the posterior [math]p(θ | x, y)[/math] provides a consistent estimator of the causal mechanism [math]f_y[/math].
Proposition 1 rigorizes previous methods in the framework of probabilistic causal models. The intuition is that as more SNPs arrive (“M → ∞, N fixed”), the posterior concentrates at the true confounders [math]z_n[/math], and thus we can estimate the causal mechanism given each data point’s confounder [math]z_n[/math]. As more data points arrive (“N → ∞, M fixed”), we can estimate the causal mechanism given any confounder [math]z_n[/math] as there is an infinity of them.
Implicit Causal Model with a Latent Confounder
This section is the algorithm and functions to implementing an implicit causal model for GWAS.
Generative Process of Confounders [math]z_n[/math].
The distribution of the confounders is set as standard normal, [math]z_n \in R^K[/math], where [math]K[/math] is the dimension of [math]z_n[/math]; [math]K[/math] should be chosen so that the latent space is as close as possible to the true population structure.
Generative Process of SNPs [math]x_{nm}[/math].
Given SNP is coded for,
The authors defined a [math]Binomial(2,\pi_{nm})[/math] distribution on [math]x_{nm}[/math], and used logistic factor analysis to design the SNP matrix.
A SNP matrix looks like this:
Since logistic factor analysis makes strong assumptions, this paper suggests using a neural network to relax these assumptions,
This renders the outputs to be a full [math]N \times M[/math] matrix due to the variables [math]w_m[/math], which act as principal components in PCA. Here, [math]\phi[/math] has a standard normal prior distribution. The weights [math]w[/math] and biases [math]\phi[/math] are shared over the [math]m[/math] SNPs and [math]n[/math] individuals, which makes it possible to learn nonlinear interactions between [math]z_n[/math] and [math]w_m[/math].
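A hypothetical NumPy sketch of this generative process (not the paper's Edward implementation): confounders [math]z_n[/math] and per-SNP variables [math]w_m[/math] feed a small network producing probabilities [math]\pi_{nm}[/math], and SNPs are drawn as [math]Binomial(2, \pi_{nm})[/math]. The dimensions and the one-hidden-layer network are made-up stand-ins.

```python
import numpy as np

# Made-up sizes: individuals, SNPs, confounder dimension, hidden units.
rng = np.random.default_rng(0)
N, M, K, H = 5, 8, 2, 16

z = rng.standard_normal((N, K))          # confounders z_n ~ Normal(0, I_K)
w = rng.standard_normal((M, K))          # per-SNP variables w_m ~ Normal(0, I_K)

# One-hidden-layer network producing a logit for each (z_n, w_m) pair.
W1 = rng.standard_normal((2 * K, H)); b1 = np.zeros(H)
W2 = rng.standard_normal(H); b2 = 0.0

pairs = np.concatenate(
    [np.repeat(z, M, axis=0), np.tile(w, (N, 1))], axis=1)  # all (n, m) pairs
hidden = np.maximum(pairs @ W1 + b1, 0.0)                   # ReLU layer
logits = (hidden @ W2 + b2).reshape(N, M)
pi = 1.0 / (1.0 + np.exp(-logits))                          # pi_nm in (0, 1)

x = rng.binomial(2, pi)                  # x_nm ~ Binomial(2, pi_nm): full N x M matrix
print(x.shape)
```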
Generative Process of Traits [math]y_n[/math].
Previously, each trait is modeled by a linear regression,
This also has very strong assumptions on SNPs, interactions, and additive noise. It can also be replaced by a neural network which only outputs a scalar,
Likelihood-free Variational Inference
Calculating the posterior of [math]\theta[/math] is the key to applying the implicit causal model with latent confounders.

It can be reduced to

However, with implicit models, integrating over a nonlinear function is intractable. The authors applied likelihood-free variational inference (LFVI). LFVI proposes a family of distributions over the latent variables. Here the variables [math]w_m[/math] and [math]z_n[/math] are all assumed to be Normal,

For LFVI applied to GWAS, an algorithm similar to the EM algorithm is used:
Empirical Study
The authors performed simulations on 100,000 SNPs, 940 to 5,000 individuals, and across 100 replications of 11 settings. Four methods were compared:
implicit causal model (ICM); PCA with linear regression (PCA); a linear mixed model (LMM); logistic factor analysis with inverse regression (GCAT).
The feedforward neural networks for traits and SNPs are fully connected with two hidden layers using ReLU activation function, and batch normalization.
Simulation Study
Based on real genomic data, a true model is applied to generate the SNPs and traits for each configuration. Four datasets are used in this simulation study, in five configurations:

HapMap [Balding-Nichols model]
1000 Genomes Project (TGP) [PCA]
Human Genome Diversity Project (HGDP) [PCA]
HGDP [Pritchard-Stephens-Donelly model]
A latent spatial position of individuals for population structure [spatial]

The table shows the prediction accuracy. The accuracy is calculated as the number of true positives divided by the number of true positives plus false positives. True positives measure the proportion of positives that are correctly identified as such (e.g. the percentage of SNPs which are correctly identified as having a causal relation with the trait). In contrast, false positives state that a SNP has a causal relation with the trait when it doesn't. The closer the rate is to 1, the better the model, since false positives count as wrong predictions.
The results presented above show that the implicit causal model has the best performance among these four models in every situation. In particular, other models tend to do poorly on PSD and Spatial when [math]a[/math] is small, but the ICM still achieves a significantly high rate. The only method comparable to ICM is GCAT, on the simpler configurations.
Real-data Analysis
They also applied the ICM to GWAS of the Northern Finland Birth Cohort, which measures 10 metabolic traits and contains 324,160 SNPs and 5,027 individuals. The data came from the database of Genotypes and Phenotypes (dbGaP) and used the same preprocessing as Song et al. Ten implicit causal models were fitted, one for each trait to be modeled. For each of the 10 implicit causal models the dimension of the confounders was set to six, the same as was used in the paper by Song et al. The SNP network used 512 hidden units in both layers and the trait network used 32 and 256. Table 2 compares against previous models.
The numbers in the above table are the number of significant loci for each of the 10 traits. The number for other methods, such as GCAT, LMM, PCA, and "uncorrected" (association tests without accounting for hidden relatedness of study samples) are obtained from other papers. By comparison, the ICM reached the level of the best previous model for each trait.
Conclusion
This paper introduced implicit causal models in order to account for nonlinear complex causal relationships, and applied the method to GWAS. It can not only capture important interactions between genes within an individual and among population level, but also can adjust for latent confounders by taking account of the latent variables into the model.
In the simulation study, the authors showed that the implicit causal model could beat other methods by 15-45.3% on a variety of datasets with variations on parameters.
The authors also believed this GWAS application is only the start of the usage of implicit causal models. The authors suggest that it might also be successfully used in the design of dynamic theories in high-energy physics or for modeling discrete choices in economics.
Critique
This paper is an interesting and novel work. The main contribution of this paper is to connect the statistical genetics and the machine learning methodology. The method is technically sound and does indeed generalize techniques currently used in statistical genetics.
The neural network used in this paper is a very simple feed-forward 2 hidden-layer neural network, but the idea of where to use the neural network is crucial and might be significant in GWAS.
It has limitations as well. The empirical example in this paper is too easy, and far away from the realistic situation. Despite the simulation study showing some competing results, the Northern Finland Birth Cohort Data application did not demonstrate the advantage of using implicit causal model over the previous methods, such as GCAT or LMM.
Another limitation concerns linkage disequilibrium, as the authors state as well. SNPs are not completely independent of each other; usually, alleles at nearby loci are correlated. They did not consider this complex case; rather, they only considered the simplest case where all the SNPs are assumed to be independent.
Furthermore, a single SNP may not have enough power to explain the causal relationship. Recent papers indicate that causation to a trait may involve multiple SNPs. This could be future work as well.
References
Tran D, Blei D M. Implicit Causal Models for Genome-wide Association Studies[J]. arXiv preprint arXiv:1710.10742, 2017.
Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Prof Bernhard Schölkopf. Non- linear causal discovery with additive noise models. In Neural Information Processing Systems, 2009.
Alkes L Price, Nick J Patterson, Robert M Plenge, Michael E Weinblatt, Nancy A Shadick, and David Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909, 2006.
Minsun Song, Wei Hao, and John D Storey. Testing for genetic associations in arbitrarily structured populations. Nature, 47(5):550–554, 2015.
Dustin Tran, Rajesh Ranganath, and David M Blei. Hierarchical implicit models and likelihood-free variational inference. In Neural Information Processing Systems, 2017. |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
What are the "best LaTeX practices" for writing absolute value symbols? Are there any packages which provide good methods? Some options include |x| and \mid x \mid, but I'm not sure which is best...
TeX - LaTeX Stack Exchange is a question and answer site for users of TeX, LaTeX, ConTeXt, and related typesetting systems.
I have been using the code below, with \DeclarePairedDelimiter from the mathtools package. Since I don't think I have a case where I don't want this to scale based on the parameter, I make use of "Swap definition of starred and non-starred command" so that normal use will automatically scale, and the starred version won't. If you want it the other way around, comment out the code between \makeatletter...\makeatother.
\documentclass{article}
\usepackage{mathtools}
\DeclarePairedDelimiter\abs{\lvert}{\rvert}%
\DeclarePairedDelimiter\norm{\lVert}{\rVert}%
% Swap the definition of \abs* and \norm*, so that \abs
% and \norm resizes the size of the brackets, and the
% starred version does not.
\makeatletter
\let\oldabs\abs
\def\abs{\@ifstar{\oldabs}{\oldabs*}}%
\let\oldnorm\norm
\def\norm{\@ifstar{\oldnorm}{\oldnorm*}}
\makeatother
\newcommand*{\Value}{\frac{1}{2}x^2}%
\begin{document}
  \[\abs{\Value} \quad \norm{\Value} \qquad\text{non-starred} \]
  \[\abs*{\Value} \quad \norm*{\Value} \qquad\text{starred}\qquad\]
\end{document}
Note if you just use | you get mathord spacing, which is different from the spacing you'd get from paired mathopen/mathclose delimiters or from \left/\right, even if \left/\right doesn't stretch the symbol. Personally I prefer the left/right spacing from mathinner here (even if @egreg says I'm generally wrong :-)
\documentclass{amsart}
\begin{document}
$ \log|x||y|b $
$ \log\left|x\right|\left|y\right|b $
$ \log\mathopen|x\mathclose|\mathopen|y\mathclose|b $
\end{document}
One can also use the commath package.
\documentclass{article}
\usepackage{commath}
\begin{document}
\[ \norm{a \vec{u}} = \abs{a} \, \norm{\vec{u}} \]
\end{document}
The physics LaTeX package also implements \abs and \norm:
\documentclass{article}
\usepackage{physics}
\begin{document}
  \[ c = \abs{-c} \]
  \[ \vu{a} = \frac{\vb{a}}{\norm{\vb{a}}} \]
\end{document}
A simple, LaTeX native way of doing this is by using the \| delimiter, with the standard \left and \right modifiers (source).
For example:
\left\| \sum_{i=1}^{n} x^2 \right\|
which renders with the double bars stretched to the full height of the summation.
For LyX users: maybe I have just overlooked how to do it correctly, but I couldn't find a way of doing this natively.
I thus used a 1x1 matrix environment and set the kind to determinant. It might just be a hack, but it works fine in my use case.
The position of a point \(M\left( {x,y,z} \right)\) in the \(xyz\)-space in cylindrical coordinates is defined by three numbers: \(\rho, \varphi, z,\) where \(\rho\) is the projection of the radius vector of the point \(M\) onto the \(xy\)-plane, \(\varphi\) is the angle formed by the projection of the radius vector with the \(x\)-axis (Figure \(1\)), \(z\) is the projection of the radius vector on the \(z\)-axis (its value is the same in Cartesian and cylindrical coordinates).
The relationship between cylindrical and Cartesian coordinates of a point is given by
\[x = \rho \cos \varphi ,\quad y = \rho \sin \varphi ,\quad z = z.\]
We assume here that
\[\rho \ge 0,\quad 0 \le \varphi \le 2\pi ,\quad - \infty \lt z \lt \infty .\]
The Jacobian of transformation from Cartesian to cylindrical coordinates is
\[ {I\left( {\rho ,\varphi ,z} \right) } = {\left| {\begin{array}{*{20}{c}} {\frac{{\partial x}}{{\partial \rho }}}&{\frac{{\partial x}}{{\partial \varphi }}}&{\frac{{\partial x}}{{\partial z}}}\\ {\frac{{\partial y}}{{\partial \rho }}}&{\frac{{\partial y}}{{\partial \varphi }}}&{\frac{{\partial y}}{{\partial z}}}\\ {\frac{{\partial z}}{{\partial \rho }}}&{\frac{{\partial z}}{{\partial \varphi }}}&{\frac{{\partial z}}{{\partial z}}} \end{array}} \right| } = {\left| {\begin{array}{*{20}{c}} {\cos \varphi }&{ - \rho \sin \varphi }&0\\ {\sin \varphi }&{\rho \cos \varphi }&0\\ 0&0&1 \end{array}} \right| } = \rho \ge 0. \]
Then the formula of change of variables for this transformation can be written in the form
\[\iiint\limits_U {f\left( {x,y,z} \right)dxdydz} = \iiint\limits_{U'} {f\left( {\rho \cos \varphi ,\rho \sin \varphi ,z} \right)\rho \,d\rho \,d\varphi \,dz} .\]
Transition to cylindrical coordinates makes calculation of triple integrals simpler in those cases when the region of integration is bounded by a cylindrical surface.
Solved Problems
Example 1. Evaluate the integral
Solution.
It is more convenient to calculate this integral in cylindrical coordinates. The projection of the region of integration onto the \(xy\)-plane is the disk \({x^2} + {y^2} \le 1\), i.e. \(0 \le \rho \le 1\) (Figure \(3\)).
Notice that the integrand can be written as
\[{x^4} + 2{x^2}{y^2} + {y^4} = {\left( {{x^2} + {y^2}} \right)^2} = {\left( {{\rho ^2}} \right)^2} = {\rho ^4}.\]
Then the integral becomes
\[I = \int\limits_0^{2\pi } {d\varphi } \int\limits_0^1 {{\rho ^4}\rho d\rho } \int\limits_0^1 {dz} .\]
The second integral contains the factor \(\rho\), which is the Jacobian of the transformation from Cartesian to cylindrical coordinates. The three integrals over the separate variables are independent of each other. As a result the triple integral is easy to calculate:
\[{I = \int\limits_0^{2\pi } {d\varphi } \int\limits_0^1 {{\rho ^4}\rho d\rho } \int\limits_0^1 {dz} } = {2\pi \int\limits_0^1 {{\rho ^5}d\rho } \int\limits_0^1 {dz} } = {2\pi \cdot 1 \cdot \int\limits_0^1 {{\rho ^5}d\rho } } = {2\pi \left. {\left( {\frac{{{\rho ^6}}}{6}} \right)} \right|_0^1 } = {2\pi \cdot \frac{1}{6} = \frac{\pi }{3}.}\]
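As a quick numeric cross-check of this result (a sketch, not part of the original lesson), a midpoint-rule approximation of \(2\pi \int_0^1 \rho^5 \,d\rho\) should land on \(\pi/3\):

```python
import math

# Numeric cross-check: the triple integral over the unit cylinder (0 <= z <= 1)
# of (x^2 + y^2)^2 reduces to 2*pi * integral of rho^5 over [0, 1] = pi/3.
n = 2000
drho = 1.0 / n
total = 0.0
for i in range(n):
    rho = (i + 0.5) * drho          # midpoint rule in rho
    total += rho**4 * rho * drho    # integrand rho^4 times the Jacobian rho
result = 2 * math.pi * total        # the phi and z integrals contribute 2*pi and 1
print(result, math.pi / 3)
```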
Two lines $a_1x + b_1y + c_1 = 0$ and $a_2x + b_2y + c_2 = 0$ are given. I know that the equation of their bisectors is ${a_1x + b_1y + c_1 \over \sqrt{a_1^2 + b_1^2}} = \pm {a_2x + b_2y + c_2 \over\sqrt{a_2^2 + b_2^2}}$. But I intend to find which one is the obtuse angle bisector and which one is the acute angle bisector; I want a general formula. Assuming $c_1, c_2$ are both of the same sign, I know that if $a_1a_2 + b_1b_2 > 0$ and we take the positive sign, we get the obtuse angle bisector, and vice versa. But I want to prove it using the general equation of a line. I tried to find the angle between the bisector and an original line, i.e. $\tan \theta = {m_1 - m_2 \over 1+ m_1m_2}$, and then if it is greater than one the bisected angle is obtuse, but the calculations are tough with the general equation of a line. Can anyone give a simple proof of the following statement: "Assuming $c_1, c_2$ are both of the same sign, if $a_1a_2 + b_1b_2 > 0$ then taking the positive sign gives the obtuse angle bisector."
We have two lines : $$L_1 : a_1x+b_1y+c_1=0,\quad L_2 : a_2x+b_2y+c_2=0$$
and the angle bisectors : $$L_{\pm} : \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}}=\pm\frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}$$
If we let $\theta$ be the (smaller) angle between $L_+$ and $L_1$, then we have $$\cos\theta=\frac{\left|a_1\left(\frac{a_1}{\sqrt{a_1^2+b_1^2}}-\frac{a_2}{\sqrt{a_2^2+b_2^2}}\right)+b_1\left(\frac{b_1}{\sqrt{a_1^2+b_1^2}}-\frac{b_2}{\sqrt{a_2^2+b_2^2}}\right)\right|}{\sqrt{a_1^2+b_1^2}\sqrt{\left(\frac{a_1}{\sqrt{a_1^2+b_1^2}}-\frac{a_2}{\sqrt{a_2^2+b_2^2}}\right)^2+\left(\frac{b_1}{\sqrt{a_1^2+b_1^2}}-\frac{b_2}{\sqrt{a_2^2+b_2^2}}\right)^2}}$$
$$=\frac{\left|\sqrt{a_1^2+b_1^2}-\frac{a_1a_2+b_1b_2}{\sqrt{a_2^2+b_2^2}}\right|}{\sqrt{a_1^2+b_1^2}\sqrt{2-2\frac{a_1a_2+b_1b_2}{\sqrt{(a_1^2+b_1^2)(a_2^2+b_2^2)}}}}\times\frac{2\frac{1}{\sqrt{a_1^2+b_1^2}}}{2\frac{1}{\sqrt{a_1^2+b_1^2}}}=\sqrt{\frac{1-\frac{a_1a_2+b_1b_2}{\sqrt{(a_1^2+b_1^2)(a_2^2+b_2^2)}}}{2}}$$
Hence, we can see that $$\begin{align}a_1a_2+b_1b_2\gt 0&\iff\cos\theta\lt 1/\sqrt 2\\&\iff \theta\gt 45^\circ\\&\iff \text{$L_+$ is the obtuse angle bisector}\end{align}$$ as desired.
(Note that "$c_1,c_2$ both are of same sign" is irrelevant.) |
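To see the criterion in action, here is a small numeric check on a concrete pair of lines (the specific coefficients are made up for illustration; $c_1, c_2$ have the same sign and $a_1a_2+b_1b_2>0$):

```python
import numpy as np

# Hypothetical lines with c1, c2 of the same sign and a1*a2 + b1*b2 > 0,
# so by the claim the "+" bisector should bisect the obtuse angle.
a1, b1, c1 = 1.0, 0.0, 1.0      # L1: x + 1 = 0
a2, b2, c2 = 1.0, 1.0, 1.0      # L2: x + y + 1 = 0
n1 = np.array([a1, b1]) / np.hypot(a1, b1)   # unit normal of L1
n2 = np.array([a2, b2]) / np.hypot(a2, b2)   # unit normal of L2

# L_+ has equation (n1 - n2).(x, y) + const = 0, so its normal is n1 - n2.
n_plus = n1 - n2
d_plus = np.array([-n_plus[1], n_plus[0]])   # direction vector of L_+
d1 = np.array([-b1, a1])                     # direction vector of L1

cos_theta = abs(d_plus @ d1) / (np.linalg.norm(d_plus) * np.linalg.norm(d1))
theta = np.degrees(np.arccos(cos_theta))     # angle between L_+ and L1
print(round(theta, 1))                       # 67.5 -> greater than 45, so L_+ is the obtuse bisector
```

Here the lines meet at $45^\circ$/$135^\circ$, and $L_+$ makes $67.5^\circ = 135^\circ/2$ with $L_1$, as the answer predicts.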
If your Lagrangian satisfies
$$ \frac{\partial \mathcal L}{\partial t} = 0 $$
then you're happy, energy is conserved, etc. However, if the above doesn't hold, that doesn't necessarily mean energy isn't conserved; maybe your Lagrangian has a false explicit time dependency. For example:
$$ \mathcal L = \frac{m}{2}\dot x^2+kt\dot x $$
The above Lagrangian has $\partial \mathcal L/\partial t = k\dot x $ but I call that dependency fake (or as the experts like to say, "spurious") because it has the same equations of motion as this other Lagrangian:
$$ \mathcal L = \frac{m}{2}\dot x^2-kx $$
which has no explicit time dependency whatsoever, so energy
is conserved. Specifically, this happened because you can shift derivatives around in your Lagrangian using integration by parts at the level of the action.
Similarly, the following Lagrangian
$$ \mathcal L = \frac{m}{2}\dot x^2+gt $$
also has a fake explicit time dependency, since you can remove $gt$ which is just a total time derivative, equivalent to a boundary term in the action.
On the other hand, the Lagrangian with variable mass
$$ \mathcal L = \frac{m(t)}{2}\dot x^2 $$
has a bona fide explicit time dependence. There is no trick to remove it: $\partial \mathcal L/\partial t \ne 0$ no matter what legal modifications you perform.
Finally, the following Lagrangian has a mixture of real and false explicit time-dependencies:
$$ \mathcal L = \frac{m(t)}{2}\dot x^2 -kx +gt $$
Its true explicit time dependence would be defined as $\partial \mathcal L/\partial t$ after all possible integrations by parts have been performed and all total time derivatives have been removed. Hence the question: given a generic Lagrangian, can its true explicit time dependence be determined in general?
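The claimed equivalence of the first two Lagrangians can be checked mechanically. A sketch using sympy (the `euler_lagrange` helper is my own, not from the post) derives both equations of motion and confirms they agree:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')(t)
xd = sp.Derivative(x, t)

def euler_lagrange(L):
    # Euler-Lagrange operator: d/dt (dL/d(xdot)) - dL/dx
    return sp.simplify(sp.diff(sp.diff(L, xd), t) - sp.diff(L, x))

L_fake = m/2 * xd**2 + k*t*xd   # "fake" explicit time dependence
L_nice = m/2 * xd**2 - k*x      # no explicit time dependence
print(sp.simplify(euler_lagrange(L_fake) - euler_lagrange(L_nice)))  # 0
```

Both give $m\ddot x + k = 0$, which is the algebraic face of the integration-by-parts argument above.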
Revision as of 23:47, 20 April 2018

Introduction and Motivation
There is currently much progress in probabilistic modeling, which could lead to rich generative models. Such models have been combined with neural networks and implicit densities, and fitted to very large datasets with scalable Bayesian inference algorithms. However, most of them focus on capturing statistical rather than causal relationships. Causal relationships are those where one event is the result of another, i.e. cause and effect. Causal models give us a sense of how manipulating the generative process changes the final results.
Genome-wide association studies (GWAS) are an example of causal questions. A genome is the totality of an organism's DNA and carries information about the organism's attributes. Specifically, GWAS aims to figure out how genetic factors cause disease in humans. Here the genetic factors are single nucleotide polymorphisms (SNPs), and having a particular disease is treated as the trait, i.e. the outcome. To understand why a disease develops and to cure it, the causation between SNPs and the disease is investigated: first, predict which SNP or SNPs cause the disease; second, target the selected SNPs to cure it.
The figure below depicts an example Manhattan plot for a GWAS. Each dot represents an SNP. The x-axis is the chromosome location, and the y-axis is the negative log of the association p-value between the SNP and the disease, so points with the largest values represent strongly associated risk loci.
This paper focuses on two challenges to combining modern probabilistic models and causality. The first one is how to build rich causal models with specific needs by GWAS. In general, probabilistic causal models involve a function [math]f[/math] and a noise [math]n[/math]. For working simplicity, we usually assume [math]f[/math] as a linear model with Gaussian noise. However problems like GWAS require models with nonlinear, learnable interactions among the inputs and the noise.
The second challenge is how to address latent population-based confounders. Latent confounders are issues when we apply the causal models since we cannot observe them nor know the underlying structure. For example, in GWAS, both latent population structure, i.e., subgroups in the population with ancestry differences, and relatedness among sample individuals produce spurious correlations among SNPs to the trait of interest. The existing methods cannot easily accommodate the complex latent structure.
For the first challenge, the authors develop implicit causal models, a class of causal models that leverages neural architectures with an implicit density. With GWAS, implicit causal models generalize previous methods to capture important nonlinearities, such as gene-gene and gene-population interaction. Building on this, for the second challenge, they describe an implicit causal model that adjusts for population-confounders by sharing strength across examples (genes).
There has been an increasing number of works on causal models which focus on causal discovery and typically have strong assumptions such as Gaussian processes on noise variable or nonlinearities for the main function.
Implicit Causal Models
Implicit causal models are an extension of probabilistic causal models. Probabilistic causal models will be introduced first.
Probabilistic Causal Models
Probabilistic causal models have two parts: deterministic functions of noise and other variables. Consider background noise [math]\epsilon[/math], representing unknown background quantities which are jointly independent and global variable [math]\beta[/math], some function of this noise, where
Each [math]\beta[/math] and [math]x[/math] is a function of noise; [math]y[/math] is a function of noise and [math]x[/math],
The target is the causal mechanism [math]f_y[/math] so that the causal effect [math]p(y|do(X=x),\beta)[/math] can be calculated. [math]do(X=x)[/math] means that we specify a value of [math]X[/math] under the fixed structure [math]\beta[/math]. By other paper’s work, it is assumed that [math]p(y|do(x),\beta) = p(y|x, \beta)[/math].
An example of probabilistic causal models is additive noise model.
[math]f(.)[/math] is usually a linear function or spline functions for nonlinearities. [math]\epsilon[/math] is assumed to be standard normal, as well as [math]y[/math]. Thus the posterior [math]p(\theta | x, y, \beta)[/math] can be represented as
where [math]p(\theta)[/math] is the prior which is known. Then, variational inference or MCMC can be applied to calculate the posterior distribution.
Implicit Causal Models
The difference between implicit causal models and probabilistic causal models is the noise variable. Instead of using an additive noise term, implicit causal models directly take noise [math]\epsilon[/math] as input and outputs [math]x[/math] given parameter [math]\theta[/math].
[math] x=g(\epsilon \mid \theta), \quad \epsilon \sim s(\cdot) [/math]
The causal diagram has changed to:
They used fully connected neural networks with a fair number of hidden units to approximate each causal mechanism. Below is the formal description.

Implicit Causal Models with Latent Confounders
Previously, they assumed the global structure is observed. Next, the unobserved scenario is being considered.
Causal Inference with a Latent Confounder
Similar to before, the interest is the causal effect [math]p(y|do(x_m), x_{-m})[/math]. Here, the SNPs other than [math]x_m[/math] are also under consideration. However, the effect is confounded by the unobserved confounder [math]z_n[/math]. As a result, the standard inference method cannot be used in this case.
The paper proposes a new method which includes the latent confounders. For each subject [math]n=1,…,N[/math] and each SNP [math]m=1,…,M[/math],
The mechanism for latent confounder [math]z_n[/math] is assumed to be known. SNPs depend on the confounders and the trait depends on all the SNPs and the confounders as well.
The posterior of [math]\theta[/math] must be calculated in order to estimate the mechanism [math]g_y[/math] as well as the causal effect [math]p(y|do(x_m), x_{-m})[/math], so that one can explain how changes to each SNP [math]X_m[/math] cause changes to the trait [math]Y[/math].
Note that the latent structure [math]p(z|x, y)[/math] is assumed known.
In general, causal inference with latent confounders can be dangerous: it uses the data twice, and thus it may bias the estimates of each arrow [math]X_m → Y[/math]. Why is this justified? This is answered below:
Proposition 1. Assume the causal graph of Figure 2 (left) is correct and that the true distribution resides in some configuration of the parameters of the causal model (Figure 2 (right)). Then the posterior [math]p(θ | x, y)[/math] provides a consistent estimator of the causal mechanism [math]f_y[/math].
Proposition 1 rigorizes previous methods in the framework of probabilistic causal models. The intuition is that as more SNPs arrive (“M → ∞, N fixed”), the posterior concentrates at the true confounders [math]z_n[/math], and thus we can estimate the causal mechanism given each data point’s confounder [math]z_n[/math]. As more data points arrive (“N → ∞, M fixed”), we can estimate the causal mechanism given any confounder [math]z_n[/math] as there is an infinity of them.
Implicit Causal Model with a Latent Confounder
This section gives the algorithm and functions for implementing an implicit causal model for GWAS.
Generative Process of Confounders [math]z_n[/math].
The distribution of the confounders is set to standard normal: [math]z_n \in R^K[/math], where [math]K[/math] is the dimension of [math]z_n[/math] and should make the latent space as close as possible to the true population structure.
Generative Process of SNPs [math]x_{nm}[/math].
Each SNP [math]x_{nm}[/math] is coded as a count of minor alleles (0, 1, or 2). The authors define a [math]Binomial(2,\pi_{nm})[/math] distribution on [math]x_{nm}[/math] and use logistic factor analysis to design the SNP matrix.
A SNP matrix looks like this:
Since logistic factor analysis makes strong assumptions, this paper suggests using a neural network to relax these assumptions,
This renders the outputs a full [math]N \times M[/math] matrix due to the variables [math]w_m[/math], which act like principal components in PCA. Here, [math]\phi[/math] has a standard normal prior distribution. The weights [math]w[/math] and biases [math]\phi[/math] are shared over the [math]m[/math] SNPs and [math]n[/math] individuals, which makes it possible to learn nonlinear interactions between [math]z_n[/math] and [math]w_m[/math].
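A minimal sketch of this generative process (toy sizes, plain logistic factors standing in for the neural network; all names are my own, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 5, 8, 2   # individuals, SNPs, latent-confounder dimension (toy sizes)

# Sketch of the generative process described above:
# z_n ~ Normal(0, I_K), w_m per-SNP loadings, logistic link -> pi_nm,
# x_nm ~ Binomial(2, pi_nm)  (SNP coded as 0, 1, or 2 minor alleles).
z = rng.standard_normal((N, K))
w = rng.standard_normal((M, K))
logits = z @ w.T                      # shape (N, M); a neural net would replace this
pi = 1.0 / (1.0 + np.exp(-logits))
x = rng.binomial(2, pi)               # the N x M SNP matrix
print(x.shape, int(x.min()) >= 0, int(x.max()) <= 2)
```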
Generative Process of Traits [math]y_n[/math].
Previously, each trait is modeled by a linear regression,
This also has very strong assumptions on SNPs, interactions, and additive noise. It can also be replaced by a neural network which only outputs a scalar,
Likelihood-free Variational Inference
Calculating the posterior of [math]\theta[/math] is the key to applying the implicit causal model with latent confounders.
It can be reduced to
However, with implicit models, integrating over a nonlinear function is intractable. The authors applied likelihood-free variational inference (LFVI). LFVI proposes a family of distributions over the latent variables. Here the variables [math]w_m[/math] and [math]z_n[/math] are all assumed to be normal,
For LFVI applied to GWAS, an algorithm similar to the EM algorithm is used:
Empirical Study
The authors performed simulation on 100,000 SNPs, 940 to 5,000 individuals, and across 100 replications of 11 settings. Four methods were compared:
implicit causal model (ICM); PCA with linear regression (PCA); a linear mixed model (LMM); logistic factor analysis with inverse regression (GCAT).
The feedforward neural networks for traits and SNPs are fully connected with two hidden layers using ReLU activation function, and batch normalization.
Simulation Study
Based on real genomic data, a true model is applied to generate the SNPs and traits for each configuration. There are four datasets used in this simulation study:
HapMap [Balding-Nichols model]
1000 Genomes Project (TGP) [PCA]
Human Genome Diversity Project (HGDP) [PCA]
HGDP [Pritchard-Stephens-Donnelly model]
A latent spatial position of individuals for population structure [spatial]

The table shows the prediction accuracy, calculated as the number of true positives divided by the number of true positives plus false positives (i.e., the precision). True positives measure the proportion of positives that are correctly identified as such (e.g. the percentage of SNPs which are correctly identified as having a causal relation with the trait). In contrast, false positives state that a SNP has a causal relation with the trait when it does not. The closer the rate is to 1, the better the model, since false positives are wrong predictions.
The results above show that the implicit causal model has the best performance among the four models in every situation. In particular, other models tend to do poorly on PSD and Spatial when [math]a[/math] is small, but the ICM still achieves a significantly high rate. The only method comparable to ICM is GCAT, on the simpler configurations.
Real-data Analysis
They also applied the ICM to GWAS of the Northern Finland Birth Cohort, which measures 10 metabolic traits and contains 324,160 SNPs and 5,027 individuals. The data came from the database of Genotypes and Phenotypes (dbGaP) and used the same preprocessing as Song et al., whose models serve as the comparison in Table 2. Ten implicit causal models were fitted, one for each trait. For each model the dimension of the confounders was set to six, the same as in Song et al. The SNP network used 512 hidden units in both layers and the trait network used 32 and 256.
The numbers in the above table are the number of significant loci for each of the 10 traits. The number for other methods, such as GCAT, LMM, PCA, and "uncorrected" (association tests without accounting for hidden relatedness of study samples) are obtained from other papers. By comparison, the ICM reached the level of the best previous model for each trait.
Conclusion
This paper introduced implicit causal models in order to account for nonlinear, complex causal relationships, and applied the method to GWAS. The model can not only capture important interactions between genes within an individual and at the population level, but can also adjust for latent confounders by incorporating latent variables into the model.
By the simulation study, the authors showed that the implicit causal model outperforms other methods by 15-45.3% on a variety of datasets with varying parameters.
The authors also believed this GWAS application is only the start of the usage of implicit causal models. The authors suggest that it might also be successfully used in the design of dynamic theories in high-energy physics or for modeling discrete choices in economics.
Critique
This paper is an interesting and novel work. The main contribution of this paper is to connect the statistical genetics and the machine learning methodology. The method is technically sound and does indeed generalize techniques currently used in statistical genetics.
The neural network used in this paper is a very simple feed-forward 2 hidden-layer neural network, but the idea of where to use the neural network is crucial and might be significant in GWAS.
It has limitations as well. The empirical example in this paper is too easy, and far away from the realistic situation. Despite the simulation study showing some competing results, the Northern Finland Birth Cohort Data application did not demonstrate the advantage of using implicit causal model over the previous methods, such as GCAT or LMM.
Another limitation concerns linkage disequilibrium, as the authors state as well. SNPs are not completely independent of each other; alleles at nearby loci are usually correlated. The paper does not consider this complex case, but only the simplest case where all SNPs are assumed independent.
Furthermore, a single SNP may not have enough power to explain the causal relationship. Recent papers indicate that causation of a trait may involve multiple SNPs. This could be future work as well.
References
Tran D, Blei D M. Implicit Causal Models for Genome-wide Association Studies[J]. arXiv preprint arXiv:1710.10742, 2017.
Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Prof Bernhard Schölkopf. Non- linear causal discovery with additive noise models. In Neural Information Processing Systems, 2009.
Alkes L Price, Nick J Patterson, Robert M Plenge, Michael E Weinblatt, Nancy A Shadick, and David Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909, 2006.
Minsun Song, Wei Hao, and John D Storey. Testing for genetic associations in arbitrarily structured populations. Nature, 47(5):550–554, 2015.
Dustin Tran, Rajesh Ranganath, and David M Blei. Hierarchical implicit models and likelihood-free variational inference. In Neural Information Processing Systems, 2017. |
Problem 75
Let $\Q$ denote the set of rational numbers (i.e., fractions of integers). Let $V$ denote the set of numbers of the form $x+y \sqrt{2}$ where $x,y \in \Q$. You may take for granted that the set $V$ is a vector space over the field $\Q$.
(a) Show that $B=\{1, \sqrt{2}\}$ is a basis for the vector space $V$ over $\Q$. (b) Let $\alpha=a+b\sqrt{2} \in V$, and let $T_{\alpha}: V \to V$ be the map defined by \[ T_{\alpha}(x+y\sqrt{2}):=(ax+2by)+(ay+bx)\sqrt{2}\in V\] for any $x+y\sqrt{2} \in V$. Show that $T_{\alpha}$ is a linear transformation. (c) Let $\begin{bmatrix} x \\ y \end{bmatrix}_B=x+y \sqrt{2}$. Find the matrix $T_B$ such that \[ T_{\alpha} (x+y \sqrt{2})=\left( T_B\begin{bmatrix} x \\ y \end{bmatrix}\right)_B,\] and compute $\det T_B$.
(The Ohio State University, Linear Algebra Exam)
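As a quick sanity check of part (c) (a sketch, not the graded solution): reading off the coefficients of $1$ and $\sqrt{2}$ in $T_{\alpha}(x+y\sqrt{2})=(ax+2by)+(bx+ay)\sqrt{2}$ suggests a candidate matrix, which sympy can verify:

```python
import sympy as sp

a, b, x, y = sp.symbols('a b x y')

# Candidate matrix in the basis B = {1, sqrt(2)}: first row gives the
# coefficient of 1, second row the coefficient of sqrt(2).
T_B = sp.Matrix([[a, 2*b],
                 [b, a]])
out = T_B * sp.Matrix([x, y])
assert sp.expand(out[0] - (a*x + 2*b*y)) == 0   # matches the 1-component
assert sp.expand(out[1] - (b*x + a*y)) == 0     # matches the sqrt(2)-component
print(T_B.det())  # a**2 - 2*b**2
```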
testing if system of inequalities has solution
Hi there,
I have a big system of inequalities (~1500 inequalities, 45 variables) and want to check whether it has a real solution. Trying 'solve' and 'solve_ineq' either takes a huge amount of time or breaks during the calculation, and after reading some of the questions here I even suspect the solve function is broken and sometimes gives wrong results. Does anybody know of a function/system that decides, in a reasonable time and reliably, whether a solution exists (existence is enough)? I want to use this in an actual proof, so it would be useless if I can't trust the result.
In my use case I have the variables $a1, ..., a15, b1,...,b15,c1,...,c15 \in \mathbb R^+$ and my inequalities are all of the form $$ \frac{f(a1,...,a15)}{g(c1,\dots, c15)} \geq \frac{f'(a1,\dots,a15)}{g'(c1,\dots, c15)}$$ (and same for combinations of (a,b) and (b,c)) for given linear functions $f,f',g,g'$ (i.e. multivariate polynomials with degree at most 1), so restricting to the variables $ai$ we get a system of linear inequalities (but even trying to solve these takes long/doesn't work with the solve function).
Actually, more accurately, I have indexed sets $$F_{a,b} =\{(f_i,g_i) \mid i \in I \},\quad F_{c,b} =\{(p_i,q_i) \mid i \in I \},\quad F_{a,c} =\{(r_i,s_i) \mid i \in I \}$$ and want to show that if there exists a solution $(a,b)$ of $$\frac{f_k(a)}{g_k(b)} = \max_i \frac{f_i(a)}{g_i(b)}$$ then there exists a solution $c$ to $$\frac{r_k(a)}{s_k(c)} = \max_i \frac{r_i(a)}{s_i(c)}$$ $$\frac{p_k(c)}{q_k(b)} = \max_i \frac{p_i(c)}{q_i(b)}$$
So far I have the following snippet:
# fractionAB are the saved fractions from above; a, b, c are arrays, e.g. a = [a1, a2, ..., a15]
stopIt = False
for maxStretch in cands:  # cands is the index set I
    ineq = [fractionAB.get(maxStretch) >= fractionAB.get(cand) for cand in cands]
    if solve(ineq, a + b):
        # try here if there exists a middle point on the geodesic, i.e. geodesic exists
        ineq.extend([fractionAC.get(maxStretch) >= fractionAC.get(cand) for cand in cands])
        ineq.extend([fractionCB.get(maxStretch) >= fractionCB.get(cand) for cand in cands])
        if not solve(ineq, a + b + c):
            stopIt = True
    print 'Tested for candidate ', maxStretch
    if stopIt:
        break
I know, there is room for improvement e.g. at reusing to first solution from (a+b), the problem is, that even that first system pretty much kills the calculation. Also multiplying the denominators on each side doesn't seem to help.
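One possible route, assuming the denominators can be cleared so that the system restricted to the $a_i$ really is linear: hand it to an LP solver with a zero objective, which decides feasibility quickly and reliably. A toy sketch (the matrices here are placeholders, not your actual system; strict positivity is approximated with a small epsilon):

```python
import numpy as np
from scipy.optimize import linprog

# Feasibility test for a linear system A x <= b: linprog with a zero
# objective terminates with status 0 iff the polytope is nonempty.
A = np.array([[1.0, -1.0],    # a1 - a2 <= 0
              [-1.0, 0.0],    # -a1 <= -eps  (a1 >= eps)
              [0.0, -1.0]])   # -a2 <= -eps  (a2 >= eps)
b = np.array([0.0, -1e-9, -1e-9])
res = linprog(c=np.zeros(2), A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print(res.status == 0)  # True -> a solution exists
```

For an exact (rational-arithmetic) answer you could instead feed the same matrix to a rational LP/polyhedron backend, but the sketch above already avoids the symbolic 'solve' bottleneck.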
PS: The mathjax seems to be broken on this site, since the code for leftbraces seems to vanish (hence the ugly "fix" above). |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Let $\Omega$ be a nice domain in $\Bbb R^n$. It is known that any element $T\in\left( W^{k,p}(\Omega)\right)^*$ admits a (possibly non-unique) representation of the form $$ Tu = \sum_{|a|\le k} \int_\Omega f_\alpha D^\alpha u\ dx, \tag{0} $$ where $f_\alpha \in L^{p'}$ and $\frac 1p + \frac 1{p'}=1$. The functional $T$ can be identified with $$ T =\sum_{|a|\le k} (-1)^{|\alpha|}D^\alpha f_\alpha \tag{1}\label{eq1} $$ as a distribution in $\mathcal D(\Omega)$.
In the book
Weakly Differentiable Functions by Ziemer, there is a claim that confused me. The book said (modulo some paraphrasing) the following:
... However, not every distribution $T$ of the form $(1)$ is necessarily in $\left( W^{k,p}(\Omega)\right)^*$. In case one deals with $W^{k,p}_0(\Omega)$ instead of $W^{k,p}(\Omega)$, distribution of the form $(1)$ completely describes the dual space...
I am not sure if I fully understand what the passage means. I know that such a $T$ can be uniquely extended to an element of $W^{-k,p'} = \left( W_0^{k,p}\right)^*$ by the standard density argument, whereas $T$ in $(1)$ may have more than one extension to an element of $\left( W^{k,p}\right)^*$. Perhaps this is what the passage means?
It seems weird to me to say that $T$ in $(1)$ is not necessarily in $\left( W^{k,p}\right)^*$, since $(0)$ is obviously one way to define $T$ on $W^{k,p}$; I would rather mention the non-uniqueness explicitly. Is there any deeper interpretation of the passage that I may have missed?
A variation of the Root of Unity problem.
I want to find all possible answers to this:
$$z^n = i$$
Where $$i^2 = -1$$
If the polar form of $z$ is $$z=r(\cos\theta + i\sin\theta),$$ there are $n$ distinct solutions to the equation $w^n = z$: $$w=\sqrt[n]{r}(\cos\frac{\theta +2\pi k}{n}+ i \sin\frac{\theta +2\pi k}{n}),$$ where $k=0,1,...,n-1$. In your case, $z=i$, whose polar form is given by $r=1$, $\theta = \pi /2$.
Generally, the answers would be of the form
$$\sqrt[n]{i}\omega_n^j$$
where $\omega_n=\exp\left(\frac{2i\pi}{n}\right)$ is a root of unity, and $j=0\dots n-1$.
Also, observe that if $z^n=i$ then $z^{4n}=1$. Thus, the complex numbers you're looking for are particular $4n$-th roots of $1$.
If you know that the $m$-th roots of $1$ (for any $m$) can be written as powers of a single well-chosen one (a primitive root), it shouldn't be too hard to find exactly which $4n$-th roots have the desired property.
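A quick numeric illustration of the answers above (a verification sketch): each solution is $e^{i(\pi/2+2\pi k)/n}$, solves $z^n=i$, and is indeed a $4n$-th root of unity.

```python
import cmath

def roots_of_i(n):
    """All n solutions of z**n = i: z_k = exp(i*(pi/2 + 2*pi*k)/n), k = 0..n-1."""
    return [cmath.exp(1j * (cmath.pi / 2 + 2 * cmath.pi * k) / n) for k in range(n)]

for z in roots_of_i(5):
    assert abs(z**5 - 1j) < 1e-12   # solves z^5 = i
    assert abs(z**20 - 1) < 1e-12   # hence is a 20th root of unity
print(len(roots_of_i(5)))  # 5
```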
Let\[\mathbf{v}_{1}=\begin{bmatrix}1 \\ 1\end{bmatrix},\;\mathbf{v}_{2}=\begin{bmatrix}1 \\ -1\end{bmatrix}.\]Let $V=\Span(\mathbf{v}_{1},\mathbf{v}_{2})$. Do $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$ form an orthonormal basis for $V$?
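A quick numeric check of the two conditions involved, orthogonality and unit length (a sketch, not the written solution):

```python
import numpy as np

v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])

# Orthonormal requires both: pairwise dot products 0 AND each norm equal to 1.
print(v1 @ v2)              # 0.0 -> the vectors are orthogonal
print(np.linalg.norm(v1))   # 1.414... -> norm sqrt(2), not 1
```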
For a set $S$ and a vector space $V$ over a scalar field $\K$, define the set of all functions from $S$ to $V$\[ \Fun ( S , V ) = \{ f : S \rightarrow V \} . \]
For $f, g \in \Fun(S, V)$ and $c \in \K$, addition and scalar multiplication can be defined by\[ (f+g)(s) = f(s) + g(s) \, \mbox{ and } (cf)(s) = c (f(s)) \, \mbox{ for all } s \in S . \]
(a) Prove that $\Fun(S, V)$ is a vector space over $\K$. What is the zero element?
(b) Let $S_1 = \{ s \}$ be a set consisting of one element. Find an isomorphism between $\Fun(S_1 , V)$ and $V$ itself. Prove that the map you find is actually a linear isomorphism.
(c) Suppose that $B = \{ e_1 , e_2 , \cdots , e_n \}$ is a basis of $V$. Use $B$ to construct a basis of $\Fun(S_1 , V)$.
(d) Let $S = \{ s_1 , s_2 , \cdots , s_m \}$. Construct a linear isomorphism between $\Fun(S, V)$ and the vector space of $m$-tuples of $V$, defined as\[ V^m = \{ (v_1 , v_2 , \cdots , v_m ) \mid v_i \in V \mbox{ for all } 1 \leq i \leq m \} . \]
(e) Use the basis $B$ of $V$ to construct a basis of $\Fun(S, V)$ for an arbitrary finite set $S$. What is the dimension of $\Fun(S, V)$?
(f) Let $W \subseteq V$ be a subspace. Prove that $\Fun(S, W)$ is a subspace of $\Fun(S, V)$.
Let $\mathrm{P}_3$ denote the set of polynomials of degree $3$ or less with real coefficients. Consider the ordered basis\[B = \left\{ 1+x , 1+x^2 , x - x^2 + 2x^3 , 1 - x - x^2 \right\}.\]Write the coordinate vector for the polynomial $f(x) = -3 + 2x^3$ in terms of the basis $B$.
Let $V$ denote the vector space of $2 \times 2$ matrices, and $W$ the vector space of $3 \times 2$ matrices. Define the linear transformation $T : V \rightarrow W$ by\[T \left( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right) = \begin{bmatrix} a+b & 2d \\ 2b - d & -3c \\ 2b - c & -3a \end{bmatrix}.\]
For an integer $n > 0$, let $\mathrm{P}_n$ be the vector space of polynomials of degree at most $n$. The set $B = \{ 1 , x , x^2 , \cdots , x^n \}$ is a basis of $\mathrm{P}_n$, called the standard basis.
Let $T : \mathrm{P}_n \rightarrow \mathrm{P}_{n+1}$ be the map defined by, for $f \in \mathrm{P}_n$,\[T (f) (x) = x f(x).\]
Prove that $T$ is a linear transformation, and find its range and nullspace.
Suppose that $B=\{\mathbf{v}_1, \mathbf{v}_2\}$ is a basis for $\R^2$. Let $S:=[\mathbf{v}_1, \mathbf{v}_2]$. Note that as the column vectors of $S$ are linearly independent, the matrix $S$ is invertible.
Prove that for each vector $\mathbf{v} \in V$, the vector $S^{-1}\mathbf{v}$ is the coordinate vector of $\mathbf{v}$ with respect to the basis $B$.
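A concrete numeric instance of this claim (the vectors below are illustrative choices of my own):

```python
import numpy as np

# Basis B = {v1, v2} of R^2 and S = [v1 v2]; the claim is that S^{-1} v
# gives the coordinates of v with respect to B.
v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 1.0])
S = np.column_stack([v1, v2])
v = 2.0 * v1 - 1.0 * v2          # so [v]_B should be (2, -1)
coords = np.linalg.solve(S, v)   # same as S^{-1} v, but numerically safer
print(np.allclose(coords, [2.0, -1.0]))  # True
```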
Let $C[-2\pi, 2\pi]$ be the vector space of all real-valued continuous functions defined on the interval $[-2\pi, 2\pi]$. Consider the subspace $W=\Span\{\sin^2(x), \cos^2(x)\}$ spanned by the functions $\sin^2(x)$ and $\cos^2(x)$.
(a) Prove that the set $B=\{\sin^2(x), \cos^2(x)\}$ is a basis for $W$.
(b) Prove that the set $\{\sin^2(x)-\cos^2(x), 1\}$ is a basis for $W$.
Let $V$ be a vector space and $B$ a basis for $V$. Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
(a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
Let $C[-1, 1]$ be the vector space over $\R$ of all continuous functions defined on the interval $[-1, 1]$. Let\[V:=\{f(x)\in C[-1,1] \mid f(x)=a e^x+b e^{2x}+c e^{3x}, a, b, c\in \R\}\]be a subset in $C[-1, 1]$.
(a) Prove that $V$ is a subspace of $C[-1, 1]$.
(b) Prove that the set $B=\{e^x, e^{2x}, e^{3x}\}$ is a basis of $V$.
(c) Prove that\[B'=\{e^x-2e^{3x}, e^x+e^{2x}+2e^{3x}, 3e^{2x}+e^{3x}\}\]is a basis for $V$.
Here you can see that the transfer function applied to a cosine input will give you a sinusoid and a transient term:
$$ x(t) = \underbrace{(x(0) + x'(0))(2 e^{-t} - e^{-2t}) + \frac{2}{5} e^{-2t} - \frac{1}{2}e^{-t}}_{{\rm goes\ to\ } 0 {\rm\ as\ } t \rightarrow \infty }\ \ \ + \frac{1}{10} \cos(t) + \frac{3}{10} \sin(t) $$
However, I don't understand how this can be: aren't complex exponentials eigenfunctions of LTI systems? So how come there is an extra (transient) term?
At the same time, it makes sense that there is a transient term coming from the complementary function. How do I reconcile these two views?
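One way to see both views at once is to simulate an LTI system whose steady state matches the sinusoidal terms quoted above. The transfer function below, $H(s)=1/(s^2+3s+2)$, is my reconstruction from those terms (since $H(j\cdot 1)=1/(1+3j)=(1-3j)/10$ gives exactly $\frac{1}{10}\cos t+\frac{3}{10}\sin t$), not something stated in the post:

```python
import numpy as np
from scipy import signal

# Hypothetical system H(s) = 1/(s^2 + 3s + 2) driven by u(t) = cos(t),
# zero initial state. The eigenfunction property only pins down the
# steady-state particular solution; the homogeneous modes e^{-t}, e^{-2t}
# appear because the input is switched on at t = 0.
sys = signal.lti([1], [1, 3, 2])
t = np.linspace(0, 30, 3001)
u = np.cos(t)
_, y, _ = signal.lsim(sys, U=u, T=t)

# Steady state predicted from H(j): (1/10) cos t + (3/10) sin t
y_ss = 0.1 * np.cos(t) + 0.3 * np.sin(t)

early_err = np.max(np.abs(y[:300] - y_ss[:300]))    # transient still present
late_err = np.max(np.abs(y[-300:] - y_ss[-300:]))   # transient has decayed
print(early_err > late_err)  # True
```

So both views hold: the exponential-eigenfunction picture describes the response to an eternal $\cos t$, while the transient is the complementary-function part needed to match the initial conditions of a one-sided input.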
Let $a \in R$. Verify that $(x − 1)^2$ is a factor of $$p(x) = x^4 − ax^2 + (2a − 4)x + (3 − a)$$
How can I solve this question?
Hint: apply the double root test (proof below)
$$\rm\begin{eqnarray} &&\rm\!\! (x\!-\!c)^2 |\ p(x)\!\!\!\!\!\!\!\\ \iff\ &&\rm x\!-\!c\ \ |\ \ p(x)\ &\rm and\ \ &\rm x\!-\!c\ \bigg|\ \dfrac{p(x)}{x\!-\!c}\\ \\ \iff\ &&\rm \color{#0a0}{p(c)} = 0 &\rm and&\rm x\!-\!c\ \bigg|\ \dfrac{p(x)-\color{#0a0}{p(c)}}{x\!-\!c}\ \ \left[\!\iff \color{#C00}{\dfrac{p(x)-p(c)}{x\!-\!c}\Bigg|_{\large\:x\:=\:c}} \!=\: 0\ \right] \\ \\ \iff\ &&\rm p(c) = 0 &\rm and&\rm \color{#C00}{p'(c)} = 0\end{eqnarray}$$
Remark $\ $ The proof is purely algebraic if you interpret the above $\rm\color{#c00}{red}$ expression as the algebraic definition of a polynomial derivative.
You verify $p(1)=0$ and $q(1) = 0$ where $q(x) := p(x) / (x-1)$ is obtained by polynomial long division.
Another option is to verify $p(1) = p'(1) = 0$ (so $p$ 'touches' the $x$-Axis at $1$). These can be shown to be equivalent criteria.
You can do the polynomial division of $p(x)$ by $x^2-2x+1$, or by $x-1$ twice. You should obtain remainder $0$.
Consider $p(x) \in \mathbb{R}[x]$; the following theorem holds:
Theorem. Let $p$ be a polynomial in $\mathbb{k}[x]$. The following conditions are equivalent:
$(x-a) \mid \gcd(p,p')$, where $p'$ is the formal derivative of $p$ (in our case the same as the classical derivative);
$(x-a)^2 \mid p$.
Now the derivative of your polynomial is $$p'(x)=4x^3-2ax+2a-4.$$ Evaluating $p$ and $p'$ at $1$ you obtain $p(1)=p'(1)=0$, so the claim follows from the previous theorem.
Well, we can observe that $-ax^2+2ax-a=-a(x-1)^2,$ so our polynomial can be written as $$x^4-4x+3-a(x-1)^2.$$ All that remains is to show that $(x-1)^2$ is a factor of $x^4-4x+3,$ which is fairly straightforward to accomplish via polynomial long division, or by two applications of synthetic division. |
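Any of these criteria can be checked mechanically. Here is a sketch (the helper functions are mine) using exact synthetic division: divide by $(x-1)$ twice and verify that both remainders vanish, for several values of $a$:

```python
from fractions import Fraction

def synth_div(coeffs, r):
    """Divide a polynomial (coefficients, highest degree first) by (x - r).
    Returns (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]

def check(a):
    a = Fraction(a)
    # p(x) = x^4 - a x^2 + (2a - 4) x + (3 - a)
    p = [Fraction(1), Fraction(0), -a, 2*a - 4, 3 - a]
    q, r1 = synth_div(p, 1)   # first division by (x - 1)
    _, r2 = synth_div(q, 1)   # second division by (x - 1)
    return r1 == 0 and r2 == 0

print(all(check(a) for a in [-3, 0, Fraction(7, 2), 100]))  # True for every a
```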
It is easy to notice some odd things when looking into a fish tank. For example, you may see the same fish appearing to be in two different places (Figure \(\PageIndex{1}\)). This is because light coming from the fish to us changes direction when it leaves the tank, and in this case, it can travel two different paths to get to our eyes. The changing of a light ray’s direction (loosely called bending) when it passes through variations in matter is called refraction. Refraction is responsible for a tremendous range of optical phenomena, from the action of lenses to voice transmission through optical fibers.
Definition: REFRACTION
The changing of a light ray’s direction (loosely called bending) when it passes through variations in matter is called refraction.
SPEED OF LIGHT
The speed of light \(c\) not only affects refraction, it is one of the central concepts of Einstein’s theory of relativity. As the accuracy of the measurements of the speed of light was improved, \(c\) was found not to depend on the velocity of the source or the observer. However, the speed of light does vary in a precise manner with the material it traverses. These facts have far-reaching implications, as we will see in "Special Relativity." It makes connections between space and time and alters our expectations that all observers measure the same time for the same event, for example. The speed of light is so important that its value in a vacuum is one of the most fundamental constants in nature, and it underlies the definition of the meter, one of the SI base units.
Figure \(\PageIndex{1}\): Looking at the fish tank as shown, we can see the same fish in two different locations, because light changes directions when it passes from water to air. In this case, the light can reach the observer by two different paths, and so the fish seems to be in two different places. This bending of light is called refraction and is responsible for many optical phenomena.
Why does light change direction when passing from one material (medium) to another? It is because light changes speed when going from one material to another. So before we study the law of refraction, it is useful to discuss the speed of light and how it varies in different media.
The Speed of Light
Early attempts to measure the speed of light, such as those made by Galileo, determined that light moved extremely fast, perhaps instantaneously. The first real evidence that light traveled at a finite speed came from the Danish astronomer Ole Roemer in the late 17th century. Roemer had noted that the average orbital period of one of Jupiter’s moons, as measured from Earth, varied depending on whether Earth was moving toward or away from Jupiter. He correctly concluded that the apparent change in period was due to the change in distance between Earth and Jupiter and the time it took light to travel this distance. From his 1676 data, a value of the speed of light was calculated to be \(2.26 \times 10^{8} m/s\) (only 25% different from today’s accepted value). In more recent times, physicists have measured the speed of light in numerous ways and with increasing accuracy. One particularly direct method, used in 1887 by the American physicist Albert Michelson (1852–1931), is illustrated in Figure \(\PageIndex{2}\). Light reflected from a rotating set of mirrors was reflected from a stationary mirror 35 km away and returned to the rotating mirrors. The time for the light to travel can be determined by how fast the mirrors must rotate for the light to be returned to the observer’s eye.
Figure \(\PageIndex{2}\): A schematic of early apparatus used by Michelson and others to determine the speed of light. As the mirrors rotate, the reflected ray is only briefly directed at the stationary mirror. The returning ray will be reflected into the observer's eye only if the next mirror has rotated into the correct position just as the ray returns. By measuring the correct rotation rate, the time for the round trip can be measured and the speed of light calculated. Michelson’s calculated value of the speed of light was only 0.04% different from the value used today.
The speed of light is now known to great precision. In fact, the speed of light in a vacuum \(c\) is so important that it is accepted as one of the basic physical quantities and has the fixed value.
VALUE OF THE SPEED OF LIGHT
\[\begin{align} c &\equiv 2.99792458 \times 10^{8} m/s \\[5pt] &\approx 3.00 \times 10^{8} m/s \end{align}\]
The approximate value of \(3.00 \times 10^{8} m/s\) is used whenever three-digit accuracy is sufficient. The speed of light through matter is less than it is in a vacuum, because light interacts with atoms in a material. The speed of light depends strongly on the type of material, since its interaction with different atoms, crystal lattices, and other substructures varies.
Definition: INDEX OF REFRACTION
We define the index of refraction \(n\) of a material to be
\[n = \frac{c}{v}, \label{index}\]
where \(v\) is the observed speed of light in the material. Since the speed of light is always less than \(c\) in matter and equals \(c\) only in a vacuum, the index of refraction is always greater than or equal to one. That is, \(n \geq 1\).
Table \(\PageIndex{1}\) gives the indices of refraction for some representative substances. The values are listed for a particular wavelength of light, because they vary slightly with wavelength. (This can have important effects, such as colors produced by a prism.) Note that for gases, \(n\) is close to 1.0. This seems reasonable, since atoms in gases are widely separated and light travels at \(c\) in the vacuum between atoms. It is common to take \(n = 1\) for gases unless great precision is needed. Although the speed of light \( v\) in a medium varies considerably from its value \( c\) in a vacuum, it is still a large speed.
Table \(\PageIndex{1}\): Indices of refraction \(n\) for representative substances.
Gases at \(0ºC\), 1 atm: Air 1.000293; Carbon dioxide 1.00045; Hydrogen 1.000139; Oxygen 1.000271.
Liquids at \(20ºC\): Benzene 1.501; Carbon disulfide 1.628; Carbon tetrachloride 1.461; Ethanol 1.361; Glycerine 1.473; Water (fresh) 1.333.
Solids at \(20ºC\): Diamond 2.419; Fluorite 1.434; Glass (crown) 1.52; Glass (flint) 1.66; Ice (at \(0ºC\)) 1.309; Polystyrene 1.49; Plexiglas 1.51; Quartz (crystalline) 1.544; Quartz (fused) 1.458; Sodium chloride 1.544; Zircon 1.923.
Example \(\PageIndex{1}\): Speed of Light in Matter
Calculate the speed of light in zircon, a material used in jewelry to imitate diamond.
Strategy:
The speed of light in a material, \(v\), can be calculated from the index of refraction \(n\) of the material using the equation \(n = c/v\).
Solution:
The equation for index of refraction (Equation \ref{index}) can be rearranged to determine \(v\)
\[v = \frac{c}{n}. \nonumber\]
The index of refraction for zircon is given as 1.923 in Table \(\PageIndex{1}\), and \(c\) is given in the equation for speed of light. Entering these values in the last expression gives
\[ \begin{align*} v &= \frac{3.00 \times 10^{8} m/s}{1.923} \\[5pt] &= 1.56 \times 10^{8} m/s. \end{align*}\]
Discussion:
This speed is slightly larger than half the speed of light in a vacuum and is still high compared with speeds we normally experience. The only substance listed in Table \(\PageIndex{1}\) that has a greater index of refraction than zircon is diamond. We shall see later that the large index of refraction for zircon makes it sparkle more than glass, but less than diamond.
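As a quick cross-check of Equation \ref{index}, one can tabulate \(v = c/n\) for a few entries of Table \(\PageIndex{1}\). This short sketch (not part of the original text) reproduces the zircon result:

```python
C = 2.998e8  # speed of light in vacuum, m/s

# indices of refraction from Table 1 (illustrative subset)
n = {"water": 1.333, "crown glass": 1.52, "zircon": 1.923, "diamond": 2.419}

for material, index in sorted(n.items(), key=lambda kv: kv[1]):
    v = C / index  # v = c / n, rearranged from n = c / v
    print(f"{material:12s} n = {index:5.3f}  v = {v:.3e} m/s")
```

A larger index always means a slower speed in the medium, which is why diamond appears last in the sorted output.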
Law of Refraction
Figure \(\PageIndex{3}\) shows how a ray of light changes direction when it passes from one medium to another. As before, the angles are measured relative to a perpendicular to the surface at the point where the light ray crosses it. (Some of the incident light will be reflected from the surface, but for now we will concentrate on the light that is transmitted.) The change in direction of the light ray depends on how the speed of light changes. The change in the speed of light is related to the indices of refraction of the media involved. In the situations shown in Figure \(\PageIndex{3}\), medium 2 has a greater index of refraction than medium 1. This means that the speed of light is less in medium 2 than in medium 1. Note that as shown in Figure \(\PageIndex{3a}\), the direction of the ray moves closer to the perpendicular when it slows down. Conversely, as shown in Figure \(\PageIndex{3b}\), the direction of the ray moves away from the perpendicular when it speeds up. The path is exactly reversible. In both cases, you can imagine what happens by thinking about pushing a lawn mower from a footpath onto grass, and vice versa. Going from the footpath to grass, the front wheels are slowed and pulled to the side as shown. This is the same change in direction as for light when it goes from a fast medium to a slow one. When going from the grass to the footpath, the front wheels can move faster and the mower changes direction as shown. This, too, is the same change in direction as for light going from slow to fast.
Figure \(\PageIndex{3}\): The change in direction of a light ray depends on how the speed of light changes when it crosses from one medium to another. The speed of light is greater in medium 1 than in medium 2 in the situations shown here. (a) A ray of light moves closer to the perpendicular when it slows down. This is analogous to what happens when a lawn mower goes from a footpath to grass. (b) A ray of light moves away from the perpendicular when it speeds up. This is analogous to what happens when a lawn mower goes from grass to footpath. The paths are exactly reversible.
The amount that a light ray changes its direction depends both on the incident angle and the amount that the speed changes. For a ray at a given incident angle, a large change in speed causes a large change in direction, and thus a large change in angle. The exact mathematical relationship is the law of refraction, or Snell's law, which is stated in equation form as
THE LAW OF REFRACTION (Snell's Law)
\[n_{1} \sin \theta_{1} = n_{2} \sin \theta_{2}.\label{25.4.2}\]
Here, \(n_{1}\) and \(n_{2}\) are the indices of refraction for medium 1 and 2, and \(\theta_{1}\) and \(\theta_{2}\) are the angles between the rays and the perpendicular in medium 1 and 2, as shown in Figure \(\PageIndex{3}\). The incoming ray is called the incident ray and the outgoing ray the refracted ray, and the associated angles the incident angle and the refracted angle. The law of refraction is also called Snell’s law after the Dutch mathematician Willebrord Snell (1591–1626), who discovered it in 1621. Snell’s experiments showed that the law of refraction was obeyed and that a characteristic index of refraction \(n\) could be assigned to a given medium. Snell was not aware that the speed of light varied in different media, but through experiments he was able to determine indices of refraction from the way light rays changed direction.
TAKE-HOME EXPERIMENT: A BROKEN PENCIL
A classic observation of refraction occurs when a pencil is placed in a glass half filled with water. Do this and observe the shape of the pencil when you look at the pencil sideways, that is, through air, glass, water. Explain your observations. Draw ray diagrams for the situation.
Example \(\PageIndex{2}\): Determine the Index of Refraction from Refraction Data
Find the index of refraction for medium 2 in Figure \(\PageIndex{3a}\), assuming medium 1 is air and given the incident angle is \(30.0^{\circ}\) and the angle of refraction is \(22.0^{\circ}\).
Strategy
The index of refraction for air is taken to be 1 in most cases (and up to four significant figures, it is 1.000). Thus \(n_{1} = 1.00\) here. From the given information, \(\theta_{1} = 30.0^{\circ}\) and \(\theta_{2} = 22.0^{\circ}\) With this information, the only unknown in Snell’s law is \(n_{2}\), so that it can be used to find this unknown.
Solution
Snell's law (Equation \ref{25.4.2}) can be rearranged to isolate \(n_{2}\), giving
\[n_{2} = n_{1}\frac{\sin{\theta_{1}}}{\sin{\theta_{2}}}.\]
Entering known values,
\[ \begin{align*} n_{2} &= n_{1}\frac{\sin{30.0^{\circ}}}{\sin{22.0^{\circ}}} \\[5pt] &= \frac{0.500}{0.375} \\[5pt] &=1.33. \end{align*}\]
Discussion
This is the index of refraction for water, and Snell could have determined it by measuring the angles and performing this calculation. He would then have found 1.33 to be the appropriate index of refraction for water in all other situations, such as when a ray passes from water to glass. Today we can verify that the index of refraction is related to the speed of light in a medium by measuring that speed directly.
Example \(\PageIndex{3}\): A Larger Change in Direction
Suppose that in a situation like that in the previous example, light goes from air to diamond and that the incident angle is \(30.0^{\circ}\). Calculate the angle of refraction \(\theta_{2}\) in the diamond.
Strategy
Again the index of refraction for air is taken to be \(n_{1} = 1.00\), and we are given \(\theta_{1} = 30.0^{\circ}\). We can look up the index of refraction for diamond in Table \(\PageIndex{1}\), finding \(n_{2} = 2.419\). The only unknown in Snell’s law is \(\theta_{2}\), which we wish to determine.
Solution
Solving Snell’s law (Equation \ref{25.4.2}) for \(\sin{\theta_{2}}\) yields
\[ \sin{\theta_{2}} = \frac{n_{1}}{n_{2}}\sin{\theta_{1}}.\]
Entering known values,
\[ \begin{align*} \sin{\theta_{2}} &= \frac{1.00}{2.419} \sin{30.0^{\circ}} \\[5pt] &= \left( 0.413 \right) \left( 0.500 \right) \\[5pt] &= 0.207. \end{align*}\]
The angle is thus
\[\theta_{2} = \sin^{-1}(0.207) = 11.9^{\circ}.\]
Discussion
For the same \(30^{\circ}\) angle of incidence, the angle of refraction in diamond is significantly smaller than in water (\(11.9^{\circ}\) rather than \(22^{\circ}\) -- see the preceding example).
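Both worked examples can be reproduced with a few lines of code; the helper function names below are mine, not part of the text:

```python
import math

def refraction_angle(n1, theta1_deg, n2):
    """Solve Snell's law n1*sin(theta1) = n2*sin(theta2) for theta2 (degrees)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

def index_from_angles(n1, theta1_deg, theta2_deg):
    """Solve Snell's law for the unknown index n2."""
    return n1 * math.sin(math.radians(theta1_deg)) / math.sin(math.radians(theta2_deg))

# Example 2: air -> unknown medium, 30.0 deg in, 22.0 deg out
print(round(index_from_angles(1.00, 30.0, 22.0), 2))   # 1.33 (water)

# Example 3: air -> diamond (n = 2.419), 30.0 deg in
print(round(refraction_angle(1.00, 30.0, 2.419), 1))   # 11.9
```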
Summary
The changing of a light ray’s direction when it passes through variations in matter is called refraction.
The speed of light in vacuum is \(c = 2.99792458 \times 10^{8} \approx 3.00 \times 10^{8} m/s\).
The index of refraction is \(n = \frac{c}{v}\), where \(v\) is the speed of light in the material and \(c\) is the speed of light in vacuum.
Snell’s law, the law of refraction, is stated in equation form as \(n_{1} \sin \theta_{1} = n_{2} \sin \theta_{2}\).
Glossary
refraction: changing of a light ray’s direction when it passes through variations in matter
index of refraction: for a material, the ratio of the speed of light in vacuum to that in the material
Contributors
Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0). |
I am having a lot of trouble starting this proof. I would greatly appreciate any help I can get here. Thanks.
Let $n\in \mathbb{N}$. Prove that any injective function from $\{1,2,\ldots,n\}$ to $\{1,2,\ldots,n\}$ is bijective.
Define a new function
$$ g: \operatorname{Im}{f} \rightarrow \{1, \cdots, n\} $$
by setting $g(y)$ to be the $x$ such that $f(x)=y$ (well-defined because each such $y$ is an image and $f$ is injective). Note that $f\circ g$ is the identity on $\operatorname{Im}{f}$, hence $g$ must be injective; likewise, $g\circ f$ is the identity on $\{1, \cdots, n\}$, hence $g$ must be surjective. We have just proved that $g$ is a bijection; since $g$ maps $\operatorname{Im}{f}$ onto all of $\{1, \cdots, n\}$, the image of $f$ must be the whole of $\{1, \cdots, n\}$, i.e. $f$ is surjective, hence a permutation of $1, \cdots, n$.
The concept of cardinality is just shorthand for "there exists a bijection"...
Another hint: Prove it by induction. It’s clear for $n=1$. Otherwise, if the statement holds for some $n$, take an injective map $σ \colon \{1, …, n+1\} → \{1, …, n+1\}$. Assume $σ(n+1) = n+1$ – why can you do this? What follows?
Hint: Let $f : [n] \to [n]$ be injective. What is the cardinality of the image of $f$?
Hint: If $f:[n]\rightarrow [n]$ is injective (where $[n]= \{1,2,\dots,n\}$), all that remains to be shown is that $f$ is surjective. So, suppose it's not. How does the size of the image compare to the size of the domain, and what does this say about injectivity?
What if $f$ would not be bijective? Then one number would not be in the image of $f$. How can that be?
Suppose $f : \{1,\ldots, n\}\to\{1,\ldots, n\}$ is injective. Then $a\neq b\implies f(a)\neq f(b)$, so $\left|\,\operatorname{Im}f\,\right| = \left|\{1,\ldots, n\}\right| = n$. As $\{1,\ldots,n\}$ is the codomain, what can we say?
Since $\{1,\ldots, n\}$ is the codomain and $\left|\{1,\ldots, n\}\right| = n$, everything in the codomain must be hit. |
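The pigeonhole argument behind these hints can also be checked exhaustively for small $n$; a throwaway sketch:

```python
from itertools import product

def injective(f):
    """A function on {0,...,n-1}, encoded as a tuple of its values."""
    return len(set(f)) == len(f)

def surjective(f, n):
    return set(f) == set(range(n))

# Check: every injective f : [n] -> [n] is surjective (hence bijective)
for n in range(1, 6):
    fs = product(range(n), repeat=n)        # all n^n functions as tuples
    assert all(surjective(f, n) for f in fs if injective(f))
print("verified for n = 1..5")
```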
Let$\newcommand{\mM}{\mathcal{M}}$ $\mM_{1,1}$ be the moduli stack of elliptic curves. Let $R$ be a Dedekind domain, say $\mathbb{Z}[1/N]$ for simplicity, and suppose we have a finite etale cover:
$$\mM\rightarrow\mM_{1,1}[1/N]$$ Must the coarse moduli scheme $M$ of $\mM$ be smooth over $\mathbb{Z}[1/N]$? This is certainly true if $\mM$ is representable, since $\mM_{1,1}$ is smooth over $\mathbb{Z}$. Furthermore, for a geometric point Spec $k\rightarrow$ Spec $R$ of characteristic not 2 or 3, $M_k$ is the coarse moduli scheme of $\mM_k$, which is normal since it's the quotient of the scheme $\mM\times_{\mM_{1,1}[1/N]} \mM(N^2)$ by $GL_2(\mathbb{Z}/N)$, where $\mM(N^2)$ is the representable stack over $\mathbb{Z}[1/N]$ classifying elliptic curves with full level $N$ structure. Since $M_k$ is a normal curve, it's smooth, and thus since $R$ is a Dedekind domain, we find that $M[1/6]$ is smooth over $\mathbb{Z}[1/6N]$.
The same argument doesn't work at $p = $ 2 or 3, since for a residue field $k$ of $R$ of characteristic 2 or 3, $M_k$ is not necessarily the coarse moduli scheme of $\mM_k$ (the stack is not tame over 2 and 3). Certainly the only potential singularities are the preimages of $j = 0\equiv 1728\mod p$ in $M_k$.
Is this a real obstruction? Of course if $\mM$ is representable then this is a non-issue, and maybe for special problems like $\Gamma_0(N)$ one can use division polynomials, but that's kind of ad hoc. I'd like to understand better what can go wrong generally speaking. If this is a real problem, then can we at least say it's flat? |
As is pointed out in another answer you can numerically perform the computation and you will find that rays close to the principle axis all arrive approximately in phase at a point but the difference becomes larger for rays further away from the principal axis.
This defect of a lens is called spherical aberration.
Making suitable approximations it is relatively easy to show that two parallel rays incident on a plano-convex lens do take the same time to reach the focal point.
The curved surface of the lens has a radius of curvature $R$, and the material of which it is made has a refractive index $n$.
We need to show that the time from $A$ to $F$ is the same as the time from $P$ to $B$ plus the time from $B$ to $F$.
If the speed of light is $c$ then $ \dfrac{\sqrt{(d^2+f^2)}}{c}=\dfrac{nx}{c} + \dfrac {f-x}{c}$
Using the intersecting chord theorem $d^2 = x(2R-x) \approx x2R $ if $R\gg x$ and the binomial expansion $\sqrt{(d^2+f^2)} \approx f\left (1+ \dfrac{d^2}{2f^2} +. . . . . . \right )$ if $f\gg d$ results in $\dfrac 1 f \approx (n-1) \dfrac 1 R$ which is the lens-makers formula for a plano-convex lens.
Update as a result of a comment from the OP
Noting the assumptions made before I think that the analysis above can be extended for all incoming parallel rays and also for a particular example for a biconvex lens.
In the left-hand diagram the two blue parallel rays and the ray along the principal axis all traverse a thickness $y'$ of the lens before reaching the part of the lens shaded red, and so take the same time to reach the "red" lens.
That part of the lens shaded red is a smaller version of the lens which was considered originally, and provided that $y'$ is small such that $PF\approx P'F$, those rays will satisfy the equal-time condition. Thus all parallel rays will reach point $F$ at the same time.
Developing this analysis with more and more approximations illustrates how spherical aberration plays a part in the functioning of a lens.
For the biconvex lens, and a special case where $O$ and $I$ are the focal points of the individual plano-convex lenses which make up the biconvex lens, it can be shown that $\dfrac 1 u + \dfrac 1v = (n-1) \left ( \dfrac {1}{R_1}+\dfrac {1}{R_2}\right )$, which by the lens maker's formula is equal to the reciprocal of the focal length of the biconvex lens.
Probably this is the end of the line for "hand waving"; to show in general the equality of times resulting in the lens maker's formula and the lens equation probably requires a numerical approach.
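One such numerical approach, sketched under the same approximations as above (plano-convex lens, paraxial focal length $f = R/(n-1)$; the numbers and variable names are mine), compares the optical path of a marginal ray at height $d$ with the axial ray. The mismatch is tiny for small $d$ and grows for larger $d$, which is exactly the spherical aberration mentioned above:

```python
import math

def path_difference(d, R=0.10, n=1.5):
    """Optical-path mismatch (metres) between the marginal ray at height d
    and the axial ray, for a plano-convex lens of radius R and index n,
    using the paraxial focal length f = R/(n - 1)."""
    f = R / (n - 1.0)
    x = R - math.sqrt(R * R - d * d)       # exact sagitta (glass thickness)
    marginal = math.sqrt(d * d + f * f)    # straight path A -> F in air
    axial = n * x + (f - x)                # n*x inside the glass, then air to F
    return marginal - axial

for d in (0.005, 0.02, 0.05):
    print(f"d = {d:5.3f} m  mismatch = {path_difference(d):+.2e} m")
```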
Structure of Atom: Bohr's Model of the Atom and Bohr's Model for the Hydrogen Atom
Bohr's Atomic Model: Angular momentum: $mvr = \frac{nh}{2\pi}$, where $m$ = mass of electron, $v$ = velocity of electron, $r$ = radius of orbit, $h$ = Planck's constant.
Radius of the $n$-th orbit:
$r_{n} = \frac{n^{2}h^{2}}{4\pi^{2}mZe^{2}}$
$e$ = elementary charge (so $Ze$ is the charge of the nucleus)
$Z$ = atomic number of the one-electron species
Simplifying: $r_{n} = 0.53 \left(\frac{n^{2}}{Z}\right)$ Å
Energy of electron in the $n$-th orbit of an H-like atom: $P.E. = \frac{-Ze^{2}}{r}$
$K.E. = \frac{1}{2}mv^{2}$
Total energy: $E_n = \frac{-2\pi^{2}mZ^{2}e^{4}}{n^{2}h^{2}}$. Simplifying: $E_n = -13.6 \times \frac{Z^{2}}{n^{2}}$ eV/atom $= -1313 \times \frac{Z^{2}}{n^{2}}$ kJ/mole $= -313.6 \times \frac{Z^{2}}{n^{2}}$ kcal/mole. (1 eV = 1.6 × 10⁻¹² erg/atom = 1.6 × 10⁻¹⁹ J/atom; 1 eV/atom = 23.06 kcal/mole.)
Velocity of electron in the $n$-th orbit:
According to $mvr = \frac{nh}{2\pi}$:
$V_{n} = \frac{2\pi Ze^{2}}{nh} = 2.18 \times 10^{8}\left(\frac{Z}{n}\right)$ cm/sec $= 2.18 \times 10^{6}\left(\frac{Z}{n}\right)$ m/sec
Rydberg constant ($R$): $\frac{1}{R} = \frac{1}{109677\ \mathrm{cm}^{-1}} = 9.12 \times 10^{-6}\ \mathrm{cm} = 912 \times 10^{-8}\ \mathrm{cm} = 912$ Å; $R = \frac{2\pi^{2}me^{4}}{ch^{3}}$; $\frac{1}{\lambda} = R\left[\frac{1}{n_1^2} - \frac{1}{n_2^2}\right]$
Advantages of Bohr's model:
- Explains the stability of the atom
- Successfully explains the spectra of one-electron species
- The calculated values of velocity, energy, and radius coincide with experimental values
Drawbacks:
- Fails to explain the spectra of multi-electron species
- Fails to explain the fine structure of spectral lines (the splitting of normal spectral lines into several lines)
Wavelength:
$\lambda$ in terms of kinetic energy:
$\lambda = \frac{h}{\sqrt{2m\,KE}}$
$\lambda$ in terms of potential difference $V$:
$\lambda = \frac{h}{\sqrt{2meV}}$, where $m$ = mass of the particle, $e$ = charge of the electron.
$\lambda$ for charged particles:
for $e^{-}$: $\lambda = \frac{12.27}{\sqrt{V}}$ Å
for $p^{+}$: $\lambda = \frac{0.286}{\sqrt{V}}$ Å
for $\alpha$: $\lambda = \frac{0.101}{\sqrt{V}}$ Å
for a neutron: $\lambda = \frac{0.286}{\sqrt{KE\ (\text{in eV})}}$ Å
Calculation of $\lambda_{n}$:
$\lambda_{n}$ = wavelength of the electron wave in the $n$-th orbit.
$\lambda_{n} = \frac{h}{mv_{n}} = 3.33\left(\frac{n}{Z}\right)$ Å
Circumference of the electron's orbit: $C = 2\pi r_{n} = 2 \times 3.14 \times 0.53 \times 10^{-8}\left(\frac{n^{2}}{Z}\right)$ cm $= 3.33\left(\frac{n^{2}}{Z}\right)$ Å
1. Number of waves in an orbit = $\frac{\text{circumference of orbit}}{\text{wavelength}}$
2. $m\upsilon r=\frac{nh}{2\pi}$ $(n = 1, 2, 3, \dots)$
where $m$ = mass of electron, $\upsilon$ = velocity of electron, $r$ = radius of orbit, $n$ = number of the orbit in which the electron is present.
3. $\Delta E=E_{2}-E_{1}=h\nu=\frac{hc}{\lambda}$
4. Velocity of an electron in the $n$-th Bohr orbit: $\upsilon_{n}=2.165\times10^{6}\frac{Z}{n}$ m/s
5. Radius of the $n$-th Bohr orbit: $r_{n}=0.53\times10^{-10}\frac{n^{2}}{Z}$ m $=0.53\frac{n^{2}}{Z}$ Å
6. $E_{n}=-2.178\times 10^{-18}\frac{Z^{2}}{n^{2}}$ J/atom $=-1312\frac{Z^{2}}{n^{2}}$ kJ/mol $=-13.6\frac{Z^{2}}{n^{2}}$ eV/atom
$\Delta E=-2.178 \times 10^{-18}\left(\frac{1}{n_1^2}-\frac{1}{n_2^2}\right)Z^{2}$ J/atom
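The formulas above are easy to put into a short script. A sketch using the simplified constants from this list (function name is mine):

```python
def bohr(n, Z=1):
    """Radius (angstrom), energy (eV/atom), and velocity (m/s)
    of the n-th Bohr orbit for a one-electron species of atomic number Z."""
    r = 0.53 * n**2 / Z          # angstrom
    E = -13.6 * Z**2 / n**2      # eV per atom
    v = 2.18e6 * Z / n           # m/s
    return r, E, v

for n in (1, 2, 3):
    r, E, v = bohr(n)
    print(f"n={n}: r = {r:5.2f} A  E = {E:6.2f} eV  v = {v:.2e} m/s")

# Energy released in the n=2 -> n=1 transition of hydrogen:
dE = bohr(2)[1] - bohr(1)[1]
print(f"E(2) - E(1) = {dE:.1f} eV")   # 10.2 eV
```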
A good way to think about these equations is to imagine them as sets over valid words. Say I have an alphabet $\Sigma = \{a, b\}$, then the rule$$A \leftarrow A a \mid b$$
says that there's some set $A$ that is isomorphic to the set $(A * \{a\}) \cup \{b\}$. Here the $*$ operation is the product/concatenation operation lifted over sets, that is$$\{a, b, c\} * \{d, e, f\} = \{ad, ae, af, bd, be, bf, cd, ce, cf\}$$
Going back to our grammar rule, it is basically
declaring the existence of a set $A$ of words that satisfies the equation$$A = (A * \{a\}) \cup \{b\}.$$
Now, there's a huge class of sets of words that satisfies this equation. For example, the set $A_\bot = \{b, ba, baa, baaa, baaaa, \dots\}$ is a valid solution, since $b$ is in the set and appending another $a$ to any element of $A_\bot$ results in a word that is already in $A_\bot$. However, you should check that$$A_{bb} = \{b, bb, ba, bba, baa, bbaa, baaa, bbaaa, \dots\}$$is also a valid solution to this equation. In particular, $b$ is already in $A_{bb}$, and for any element $ba^k$ or $bba^k$ in $A_{bb}$, its concatenation with $a$ results in $ba^{k+1}$ and $bba^{k+1}$, which are also already in $A_{bb}$. Therefore, $A_{bb}$ is also a valid fixed point.
In fact, for any given ground set of words $S = \{w_1, w_2, w_3, \dots\}$, we can construct a closure $A_S$ which contains every word in $S$ and similarly satisfies the above equation. In particular, the closure operator is the same isomorphism:
\begin{align*}A_S^{k + 1} &= (A_S^k * \{a\}) \cup \{b\} \\A_S &= \bigcup_k^\infty A_S^k\end{align*}
So now we get to the real question. If there are an infinite supply of solutions for this equation, then what good is it as a characterization of some grammar? Well, intuitively, we hope to just characterize those words that are generated from this rule "from scratch". In effect, we wish to treat these rules as free objects. That is, we care about the case where the underlying generator $S = \{\}$. Let$$A(S) = (S * \{a\}) \cup \{b\}$$then we want$$A_* = \bigcup_k A^{(k)}(\{\})$$to be the set generated by our grammar rule, where $A^{(k)} = A(A(\stackrel{k}{\dots} A(\cdot)))$.
This then gives some intuition of what we mean by the "least solution." $A(S)$ is monotone in the sense that $S \subseteq S' \implies A(S) \subseteq A(S')$; in fact, this holds over all grammar rules. Since the least element over the class of possible sets that $S$ can take (ordered on set inclusion) is the empty set, then effectively the "freely generated" language that we desire also turns out to be the smallest (ordered on set inclusion) fixed point, that is, $A(\{\}) \subseteq A(S')$ for any set $S'$.
In general, you'll see a lot of these "smallest solution" clauses all around the landscape of computer science. For example, any recursive program is the smallest solution to some program equation; any constructable inductive datatype is the smallest solution to some inductive datatype equation; any valid program semantic is the smallest solution to some logical equation; etc. It turns out that these classes of topological closures corresponds elegantly to a notion of finiteness. For example, the set of all finitary words $\Sigma^*$ is itself a least fixed-point. Since computations themselves can be seen as some enumeration process of finite objects, the analogy holds quite naturally; least fixed-points typically gives some assurance of some form of computability.
In conclusion, if you see the phrase "least solution" or "the least fix-point," it often means that the problem concerns finite objects that are freely-generated in the sense that only things that can be derived "from scratch" are considered. |
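The Kleene iteration $A_* = \bigcup_k A^{(k)}(\{\})$ can be carried out directly on word sets if we truncate at some maximum word length (the truncation length below is my own choice):

```python
def step(S):
    """One application of A(S) = (S * {a}) U {b} for the rule A <- A a | b."""
    return {w + "a" for w in S} | {"b"}

def least_fixpoint(max_len=6):
    """Kleene iteration from the empty set, truncated at words of max_len."""
    S = set()
    while True:
        S_next = {w for w in step(S) if len(w) <= max_len}
        if S_next == S:
            return S
        S = S_next

print(sorted(least_fixpoint(), key=len))
# ['b', 'ba', 'baa', 'baaa', 'baaaa', 'baaaaa'] -- only the b a^k family;
# 'bb', 'bba', ... from the larger fixed point A_bb never appear
```

Starting from a nonempty ground set $S$ instead of $\{\}$ would reproduce the larger closures $A_S$ discussed above.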
Cone Angle
When a sample is inserted at the focus of a convergent beam (at the focal plane), then biconical transmittance should be calculated:
\[T_b=\frac{\text{total flux in output beam}}{\text{total flux in input beam}}.\]
The biconical transmittance \(T_b\) can be calculated as follows:
\[ T_b=\frac{\int_{\varphi=0}^{2\pi}\int_{\xi=0}^{\xi_{\max}} I(\xi,\varphi)T(\xi,\varphi)\sin\xi\, d\xi\, d\varphi}{\int_{\varphi=0}^{2\pi}\int_{\xi=0}^{\xi_{\max}} I(\xi,\varphi)\sin\xi\, d\xi\, d\varphi}, \]
where the angles \(\xi\) and \(\varphi\) are polar and azimuthal angles defined in the spherical coordinate system. The \(\xi_{\max}\) is the maximum semi-angle of the convergent cone.
In the case of a Lambertian source, \(I(\xi,\varphi)=I_0\cos\xi\) and the equations can be simplified:
\[ T_b=(\pi \sin^2\xi_{\max})^{-1}\int_{\varphi=0}^{2\pi}\int_{\xi=0}^{\xi_{\max}}T(\xi,\varphi)\sin\xi\cos\xi\, d\xi\, d\varphi.\]
In the case when the principal ray of the cone is perpendicular to the multilayer sample the formula can be further simplified:
\[T_b=2(\sin\xi_{\max})^{-2}\int_{\xi=0}^{\xi_{\max}} T(\xi)\sin\xi\cos\xi d\xi\]
Cone Angle database of OptiLayer allows you to create cone angle averaging specifications. When a Cone Angle is loaded into memory, all computations of Transmittance, Reflectance, Back Reflectance, and Absorptance take cone angle averaging into account. All synthesis procedures also take cone averaging into account. The only exception to this rule is a target with UDT characteristics.
A cone can be specified with half-angle (in degrees), f/number, or numerical aperture. Computations are performed on an angular grid; with a growing number of points the accuracy improves, but the computational time grows proportionally. OptiLayer uses high-precision integration procedures, so for most cases 10-20 points are quite sufficient.
Type of distribution can be Uniform Intensity, Lambertian, or User-Defined. In the last case the Cone Angle Editor allows you to specify the number of angular points for the cone-angle intensity distribution and the spreadsheet itself. The number of angular points need not coincide with the Cone averaging grid parameter; OptiLayer will perform all necessary interpolation procedures automatically.
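A minimal sketch of such an averaging computation for the last (normal-incidence, uniform-intensity) formula above, using a simple midpoint rule rather than OptiLayer's actual integration procedures:

```python
import math

def cone_average(T, xi_max_deg, points=20):
    """Biconical transmittance for a uniform-intensity cone of semi-angle
    xi_max (normal incidence, azimuthally symmetric T):
        T_b = 2 / sin^2(xi_max) * integral of T(xi) sin(xi) cos(xi) d(xi)
    approximated by the midpoint rule on `points` angular grid points."""
    xi_max = math.radians(xi_max_deg)
    h = xi_max / points
    s = sum(T(h * (i + 0.5)) * math.sin(h * (i + 0.5)) * math.cos(h * (i + 0.5))
            for i in range(points))
    return 2.0 * s * h / math.sin(xi_max) ** 2

# Sanity check: a constant T(xi) = 0.95 averages to itself
print(cone_average(lambda xi: 0.95, xi_max_deg=10.0))
```

The constant-transmittance check works because \(\int_0^{\xi_{\max}}\sin\xi\cos\xi\,d\xi = \tfrac12\sin^2\xi_{\max}\), which cancels the prefactor exactly.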
Suppose I had the following periodic 1D advection problem:
$\frac{\partial u}{\partial t} + c\frac{\partial u}{\partial x} = 0$ in $\Omega=[0,1]$
$u(0,t)=u(1,t)$, $u(x,0)=g(x)$, where $g(x)$ has a jump discontinuity at $x^*\in (0,1)$.
It is my understanding that for linear finite difference schemes of higher than first order, spurious oscillations occur near the discontinuity as it is advected over time, resulting in a distortion of the solution from its expected wave shape. According to wikipedia explanation, it seems that these oscillations typically occur when a discontinuous function is approximated with a finite fourier series.
For some reason, I can't seem to grasp how a finite Fourier series can be observed in the solution of this PDE. In particular, how can I estimate a bound on the "over-shoot" analytically?
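For what it's worth, the overshoot can be seen directly by truncating the Fourier series of a square wave (a stand-in for the discontinuous initial data): the first peak of the partial sum stays roughly 9% above the jump value no matter how many terms are kept. A quick numerical sketch:

```python
import math

# Partial Fourier sum of the square wave sgn(sin x), which jumps from -1 to +1 at x = 0:
#   S_N(x) = (4/pi) * sum over odd n <= N of sin(n x)/n
def partial_sum(x, n_max):
    return (4 / math.pi) * sum(math.sin(n * x) / n for n in range(1, n_max + 1, 2))

# The first peak after the jump does not decay as more terms are added; it only
# moves closer to the discontinuity.  Its height tends to (2/pi)*Si(pi) ~ 1.179,
# i.e. an overshoot of about 9% of the jump: the Gibbs phenomenon.
for n_max in (19, 99, 199):
    peak = max(partial_sum(k * math.pi / 4000, n_max) for k in range(1, 2000))
    print(n_max, round(peak, 3))   # all three peaks sit near 1.179
```

This is only the Fourier-series side of the story; for a specific finite difference scheme the oscillation amplitude also depends on the scheme's modified equation, but the ~9% figure is the classical analytic bound for the truncated-series mechanism.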
Recall that we were able, in certain systems, to calculate the potential by integrating over the electric field. As you may already suspect, this means that we may calculate the electric field by taking derivatives of the potential, although going from a scalar to a vector quantity introduces some interesting wrinkles. We frequently need \(\vec{E}\) to calculate the force in a system; since it is often simpler to calculate the potential directly, there are systems in which it is useful to calculate \(V\) and then derive \(\vec{E}\) from it.
In general, regardless of whether the electric field is uniform, it points in the direction of decreasing potential, because the force on a positive charge is in the direction of \(\vec{E}\) and also in the direction of lower potential \(V\). Furthermore, the magnitude of \(\vec{E}\) equals the rate of decrease of \(V\) with distance. The faster \(V\) decreases over distance, the greater the electric field. This gives us the following result.
Relationship between Voltage and Uniform Electric Field
In equation form, the relationship between voltage and uniform electric field is
\[E = - \dfrac{\Delta V}{\Delta s}\]
where \(\Delta s\) is the distance over which the change in potential \(\Delta V\) takes place. The minus sign tells us that \(E\) points in the direction of decreasing potential. The electric field is said to be the gradient (as in grade or slope) of the electric potential.
Figure \(\PageIndex{1}\): The electric field component along the displacement \(\Delta s\) is given by \(E = - \dfrac{\Delta V}{\Delta s}\). Note that A and B are assumed to be so close together that the field is constant along \(\Delta s\).
For continually changing potentials, \(\Delta V\) and \(\Delta s\) become infinitesimals, and we need differential calculus to determine the electric field. As shown in Figure \(\PageIndex{1}\), if we treat the distance \(\Delta s\) as very small so that the electric field is essentially constant over it, we find that
\[E_s = - \dfrac{dV}{ds}.\]
Therefore, the electric field components in the Cartesian directions are given by
\[E_x = - \dfrac{\partial V}{\partial x}, \, E_y = - \dfrac{\partial V}{\partial y}, \, E_z = - \dfrac{\partial V}{\partial z}.\]
This allows us to define the “grad” or “del” vector operator, with which we can compute the gradient in one step. In Cartesian coordinates, it takes the form
\[\vec{\nabla} = \hat{i} \dfrac{\partial}{\partial x} + \hat{j} \dfrac{\partial}{\partial y} + \hat{k} \dfrac{\partial}{\partial z}.\]
With this notation, we can calculate the electric field from the potential with
\[\vec{E} = - \vec{\nabla}V, \label{eq20}\]
a process we call calculating the gradient of the potential.
If we have a system with either cylindrical or spherical symmetry, we only need to use the del operator in the appropriate coordinates:
\[\begin{align} \vec{\nabla}_{cyl} &= \underbrace{\hat{r} \dfrac{\partial}{\partial r} + \hat{\varphi}\dfrac{1}{r} \dfrac{\partial}{\partial \varphi} + \hat{z} \dfrac{\partial}{\partial z}}_{\text{Cylindrical}} \label{cylindricalnabla} \\[5pt] \vec{\nabla}_{sph} &= \underbrace{ \hat{r} \dfrac{\partial}{\partial r} + \hat{\theta}\dfrac{1}{r} \dfrac{\partial}{\partial \theta} + \hat{\varphi} \dfrac{1}{r \sin \theta}\dfrac{\partial}{\partial \varphi}}_{\text{Spherical}} \label{spherenabla} \end{align}\]
Example \(\PageIndex{1}\): Electric Field of a Point Charge
Calculate the electric field of a point charge from the potential.
Strategy
The potential is known to be \(V = k\dfrac{q}{r}\), which has a spherical symmetry. Therefore, we substitute the spherical del operator (Equation \ref{spherenabla}) into Equation \ref{eq20}:
\[\vec{E} = - \vec{\nabla}_{sph} V \nonumber.\]
Solution
Performing this calculation gives us
\[\begin{align*} \vec{E} &= - \left( \hat{r}\dfrac{\partial}{\partial r} + \hat{\theta}\dfrac{1}{r} \dfrac{\partial}{\partial \theta} + \hat{\varphi}\dfrac{1}{r \sin \theta} \dfrac{\partial}{\partial \varphi}\right) k\dfrac{q}{r} \\[5pt] &= - k\left( \hat{r}\dfrac{\partial}{\partial r}\dfrac{1}{r} + \hat{\theta}\dfrac{1}{r} \dfrac{\partial}{\partial \theta}\dfrac{1}{r} + \hat{\varphi}\dfrac{1}{r \sin \theta} \dfrac{\partial}{\partial \varphi}\dfrac{1}{r}\right).\end{align*}\]
This equation simplifies to
\[\vec{E} = - kq\left(\hat{r}\dfrac{-1}{r^2} + \hat{\theta}0 + \hat{\varphi}0 \right) = k\dfrac{q}{r^2}\hat{r} \nonumber\]
as expected.
Significance
We not only obtained the equation for the electric field of a point particle that we’ve seen before, we also have a demonstration that \(\vec{E}\) points in the direction of decreasing potential, as shown in Figure \(\PageIndex{2}\).
Figure \(\PageIndex{2}\): Electric field vectors inside and outside a uniformly charged sphere.
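The point-charge result can also be checked numerically. A minimal sketch (with illustrative values of \(k\) and \(q\)) that forms \(\vec{E} = -\vec{\nabla}V\) by central differences in Cartesian coordinates and confirms the field is radially outward with magnitude \(kq/r^2\):

```python
import math

k, q = 8.99e9, 1e-9   # Coulomb constant and a 1 nC charge (illustrative values)

def V(x, y, z):
    return k * q / math.sqrt(x * x + y * y + z * z)

def E_field(x, y, z, h=1e-6):
    # E = -grad V, one Cartesian component at a time, via central differences
    Ex = -(V(x + h, y, z) - V(x - h, y, z)) / (2 * h)
    Ey = -(V(x, y + h, z) - V(x, y - h, z)) / (2 * h)
    Ez = -(V(x, y, z + h) - V(x, y, z - h)) / (2 * h)
    return Ex, Ey, Ez

Ex, Ey, Ez = E_field(3.0, 0.0, 4.0)          # a point at r = 5 m
E_mag = math.hypot(Ex, math.hypot(Ey, Ez))
print(E_mag, k * q / 5.0 ** 2)               # both ~0.36 V/m
print(Ex / E_mag, Ez / E_mag)                # ~ (0.6, 0.8): radially outward
```

The numerical gradient agrees with \(kq/r^2\,\hat{r}\), and the positive radial components show explicitly that \(\vec{E}\) points toward decreasing potential.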
Example \(\PageIndex{2}\): Electric Field of a Ring of Charge
Use the potential found previously to calculate the electric field along the axis of a ring of charge (Figure \(\PageIndex{3}\)).
Figure \(\PageIndex{3}\): We want to calculate the electric field from the electric potential due to a ring charge.
Strategy
In this case, we are only interested in one dimension, the \(z\)-axis. Therefore, we use
\[E_z = - \dfrac{\partial V}{\partial z} \nonumber\]
with the potential
\[V = k \dfrac{q_{tot}}{\sqrt{z^2 + R^2}} \nonumber\]
found previously.
Solution
Taking the derivative of the potential yields
\[ \begin{align*} E_z &= - \dfrac{\partial}{\partial z} \dfrac{kq_{tot}}{\sqrt{z^2 + R^2}} \\[5pt] &= k \dfrac{q_{tot}z}{(z^2 + R^2)^{3/2}}. \end{align*}\]
Significance
Again, this matches the equation for the electric field found previously. It also demonstrates a system in which using the full del operator is not necessary.
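A quick numerical check of this example, comparing the central-difference derivative of the potential with the analytic \(E_z\); the units with \(k = q_{tot} = R = 1\) are an assumption made for illustration:

```python
import math

k, q_tot, R = 1.0, 1.0, 1.0    # illustrative units

def V(z):
    # on-axis potential of the ring, as found previously
    return k * q_tot / math.sqrt(z * z + R * R)

def Ez_numeric(z, h=1e-6):
    return -(V(z + h) - V(z - h)) / (2 * h)   # E_z = -dV/dz

def Ez_analytic(z):
    return k * q_tot * z / (z * z + R * R) ** 1.5

for z in (0.0, 0.5, 2.0):
    print(z, Ez_numeric(z), Ez_analytic(z))   # the two columns agree
```

Note that both give \(E_z = 0\) at the center of the ring, where the potential has its on-axis maximum.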
Exercise \(\PageIndex{1}\)
Which coordinate system would you use to calculate the electric field of a dipole?
Answer:
Any, but cylindrical is closest to the symmetry of a dipole.
Contributors
Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0). |
The slope at some point is \( y'=\frac{dy}{dx}=\tan\psi=\frac{s}{a}\), from which \( \frac{ds}{dx}\ =\ a\frac{d^{2}y}{dx^{2}}\). But, from the usual Pythagorean relation between intrinsic and rectangular coordinates, \( ds=(1+y'^{2})^{\frac{1}{2}}dx\), this becomes
\[ (1+y'^{2})^{\frac{1}{2}}=a\frac{dy'}{dx}. \label{18.3.1}\]
On integration, with the condition that \( y'=0\) where \( x=0\), this becomes
\[ y'=\sinh \left(\frac{x}{a}\right), \label{18.3.2}\]
and, on further integration,
\[ y\ =\ a\cosh\left(\frac{x}{a}\right)\ +\ C \label{18.3.3}\]
If we fix the origin of coordinates so that the lowest point of the catenary is at a height \( a\) above the \( x\)-axis, this becomes
\[ y\ =\ a\cosh\left(\frac{x}{a}\right) \label{18.3.4}\]
This, then, is the \( x\) , \( y\) Equation to the catenary. The \( x\)-axis is the
directrix of this catenary.
The following additional simple relations are easily derived and are left to the reader:
\[ s\ =\ a\sinh\left(\frac{x}{a}\right) \label{18.3.5}\]
\[ y^{2}\ =\ a^{2}\ +\ s^{2}\ , \label{18.3.6}\]
\[ y\ =\ a\sec \psi. \label{18.3.7}\]
\[ x = a\ln\ (\sec \psi + \tan \psi) \label{18.3.8}\]
\[ T = \mu g y \label{18.3.9}\]
Equations \( \ref{18.3.7} \) and \( \ref{18.3.8}\) may be regarded as parametric Equations to the catenary.
If one end of the chain is fixed, and the other is looped over a smooth peg, Equation \( \ref{18.3.9}\) shows that the loosely hanging vertical portion of the chain just reaches the directrix of the catenary, and the tension at the peg is equal to the weight of the vertical portion.
Exercise \(\PageIndex{1}\)
By expanding Equation \( \ref{18.3.4}\) as far as \(x^2\), show that, near the bottom of the catenary, or for a tightly stretched catenary with a small sag, the curve is approximately a parabola. Actually, it doesn’t matter what Equation \( \ref{18.3.4} \) is – if you expand it as far as \(x^2\), provided the \(x^2\) term is not zero, you’ll get a parabola – so, in order not to let you off so lightly, show that the semi latus rectum of the parabola is \(a\).
Exercise \(\PageIndex{2}\)
Expand Equation \( \ref{18.3.5}\) as far as \(x^3\).
Now let \(2s\) = total length of chain, \(2k\) = total span, and \(d\) = sag. Show that for a shallow catenary \(s-k = k^3/(6a^2) \) and \( k^2 = 2ad \), and hence that length − span = \( \frac{8}{3}\,\text{sag}^2/\text{span}\).
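The shallow-catenary result of Exercise 2 is easy to verify numerically; the values of the parameter \(a\) and the half-span \(k\) below are arbitrary illustrative choices:

```python
import math

a = 10.0    # catenary parameter; arbitrary illustrative value
k = 1.0     # half-span, chosen small compared to a so the catenary is shallow

s = a * math.sinh(k / a)         # half-length of chain, from Equation 18.3.5
d = a * math.cosh(k / a) - a     # sag: height at x = k minus the height a at the bottom

length_minus_span = 2 * s - 2 * k
prediction = (8.0 / 3.0) * d ** 2 / (2 * k)   # (8/3) * sag^2 / span
print(length_minus_span, prediction)          # agree to about (k/a)^2 relative error
```

Making \(k/a\) smaller (a shallower catenary) makes the agreement correspondingly better, as expected from the truncated expansions.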
Example \(\PageIndex{1}\)
A cord is stretched between points on the same horizontal level. How large a force must be applied so that the cord is no longer a catenary, but is accurately a straight line?
Answer:
There is no force however great
Can stretch a string however fine
Into a horizontal line
That shall be accurately straight.
I am indebted to Hamilton Carter of Texas A & M University for drawing my attention to a note by C. A. Chant in J. Roy. Astron. Soc. Canada 33, 72, (1939), where this doggerel is attributed to the early nineteenth century Cambridge mathematician William Whewell.
Exercise \(\PageIndex{3}\)
And here’s something for engineers. We, the general public, expect engineers to build safe bridges for us. The suspension chain of a suspension bridge, though scarcely shallow, is closer to a parabola than to a catenary. There is a reason for this. Discuss.
Note
Much as in the case of linear momentum, the mistake that tends to be made in the case of angular momentum is not using the principle of conservation of angular momentum when it should be used, that is, applying conservation of mechanical energy in a case in which mechanical energy is not conserved but angular momentum is. Consider the case, for instance, in which one drops a disk (from a negligible height) that is not spinning, onto a disk that is spinning, and after the drop, the two disks spin together as one. The “together as one” part tips you off that this is a completely inelastic (rotational) collision. Some mechanical energy is converted into thermal energy (and other forms not accounted for) in the collision. It’s easy to see that mechanical energy is converted into thermal energy if the two disks are CD’s and the bottom one is initially spinning quite fast (but is not being driven). When you drop the top one onto the bottom one, there will be quite a bit of slipping before the top disk gets up to speed and the two disks spin as one. During the slipping, it is friction that increases the spin rate of the top CD and slows the bottom one. Friction converts mechanical energy into thermal energy. Hence, the mechanical energy prior to the drop is greater than the mechanical energy after the drop.
The angular momentum of an object is a measure of how difficult it is to stop that object from spinning. For an object rotating about a fixed axis, the angular momentum depends on how fast the object is spinning, and on the object's rotational inertia (also known as moment of inertia) with respect to that axis.
Rotational Inertia (a.k.a. Moment of Inertia)
The rotational inertia of an object with respect to a given rotation axis is a measure of the object's tendency to resist a change in its angular velocity about that axis. The rotational inertia depends on the mass of the object and how that mass is distributed. You have probably noticed that it is easier to start a merry-go-round spinning when it has no children on it. When the kids climb on, the mass of what you are trying to spin is greater, and this means the rotational inertia of the object you are trying to spin is greater. Have you also noticed that if the kids move in toward the center of the merry-go-round it is easier to start it spinning than it is when they all sit on the outer edge of the merry-go-round? It is. The farther, on the average, the mass of an object is distributed away from the axis of rotation, the greater the object's moment of inertia with respect to that axis of rotation. The rotational inertia of an object is represented by the symbol I. During this initial coverage of angular momentum, you will not be required to calculate I from the shape and mass of the object. You will either be given I or expected to calculate it by applying conservation of angular momentum (discussed below).
Angular Velocity
The angular velocity of an object is a measure of how fast it is spinning. It is represented by the Greek letter omega, written \(\omega\) (not to be confused with the letter w which, unlike omega, is pointed on the bottom). The most convenient measure of angle in discussing rotational motion is the radian. Like the degree, a radian is a fraction of a revolution. But, while one degree is \(\dfrac{1}{360}\) of a revolution, one radian is \(\dfrac{1}{2\pi}\) of a revolution. The units of angular velocity are then radians per second or, in notational form, \(\dfrac{rad}{s}\). Angular velocity has a direction, or sense of rotation, associated with it. If one defines a rotation which is clockwise when viewed from above as a positive rotation, then an object which is rotating counterclockwise as viewed from above is said to have a negative angular velocity. In any problem involving angular velocity, one is free to choose the positive sense of rotation, but then one must stick with that choice throughout the problem.
Angular Momentum
The angular momentum \(L\) of an object is given by:
\[L=I\omega\]
Note that this is consistent with our original definition of angular momentum as a measure of the degree of the object's tendency to keep on spinning, once it is spinning. The greater the rotational inertia of the object, the more difficult it is to stop the object from spinning, and the greater the angular velocity of the object, the more difficult it is to stop the object from spinning.
The direction of angular momentum is the same as the direction of the corresponding angular velocity.
Torque
We define torque by analogy with force, which is an ongoing push or pull on an object. When there is a single force acting on a particle, the momentum of that particle is changing. A torque is what you exert on the lid of a jar when you are trying to remove the lid. When there is a single torque acting on a rigid object, the angular momentum of that object is changing.
Conservation of Angular Momentum
Angular Momentum is an important concept because, if there is no angular momentum transferred to or from a system, the total angular momentum of that system does not change, and if there is angular momentum being transferred to a system, the rate of change of the angular momentum of the system is equal to the rate at which angular momentum is being transferred to the system. As in the case of energy and momentum, this means we can use simple accounting (bookkeeping) procedures for making predictions on the outcomes of physical processes. In this chapter we focus on the special case in which there are no external torques which means that no angular momentum is transferred to or from the system.
Definition: Conservation of Angular Momentum
Conservation of Angular Momentum for the Special Case in which no Angular Momentum is Transferred to or from the System from Outside the System
In any physical process involving an object or a system of objects free to rotate about an axis, as long as there are no external torques exerted on the system of objects, the total angular momentum of that system of objects remains the same throughout the process.
The application of the conservation of angular momentum in solving physics problems for cases involving no transfer of angular momentum to or from the system from outside the system (no external torque) is very similar to the application of the conservation of energy and to the application of the conservation of momentum. One selects two instants in time, defines the earlier one as the before instant and the later one as the after instant, and makes corresponding sketches of the object or objects in the system. Then one writes
\[L=L'\]
meaning "the angular momentum in the before picture equals the angular momentum in the after picture." Next, one replaces each \(L\) with what it is in terms of the moments of inertia and angular velocities in the problem and solves the resulting algebraic equation for whatever is sought.
Example 5.1.1
A skater is spinning at \(32.0\ rad/s\) with her arms and legs extended outward. In this position her moment of inertia with respect to the vertical axis about which she is spinning is \(45.6\ kg \cdot m^2\). She pulls her arms and legs in close to her body, changing her moment of inertia to \(17.5\ kg \cdot m^2\). What is her new angular velocity?
Solution
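The solution is not worked out above; as a sketch, conservation of angular momentum (\(L = L'\), no external torque about the vertical spin axis) gives:

```python
I_out, omega_out = 45.6, 32.0   # kg m^2, rad/s: arms and legs extended
I_in = 17.5                     # kg m^2: arms and legs pulled in

# No external torque about the spin axis, so L = L':  I w = I' w'
omega_in = I_out * omega_out / I_in
print(round(omega_in, 1))       # 83.4 rad/s: pulling mass inward speeds up the spin
```

The angular velocity increases by the ratio of the moments of inertia, just as the merry-go-round discussion above suggests.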
Example 5.1.2
A horizontal disk of rotational inertia \(4.25\ kg \cdot m^2\) with respect to its axis of symmetry is spinning counterclockwise about its axis of symmetry, as viewed from above, at \(15.5\) revolutions per second on a frictionless massless bearing. A second disk, of rotational inertia \(1.80\ kg \cdot m^2\) with respect to its axis of symmetry, spinning clockwise as viewed from above about the same axis (which is also its axis of symmetry) at \(14.2\) revolutions per second, is dropped on top of the first disk. The two disks stick together and rotate as one about their common axis of symmetry at what new angular velocity (in units of radians per second)?
Solution
Some preliminary work (expressing the given angular velocities in units of rad/s):
\[\omega_1=15.5\,\dfrac{rev}{s}\left(\dfrac{2\pi\ rad}{rev}\right)=97.39\,\dfrac{rad}{s}\]
\[\omega_2=14.2\,\dfrac{rev}{s}\left(\dfrac{2\pi\ rad}{rev}\right)=89.22\,\dfrac{rad}{s}\]
Now we apply the principle of conservation of angular momentum for the special case in which there is no transfer of angular momentum to or from the system from outside the system. Referring to the diagram:
\[I_1\omega_1-I_2\omega_2=(I_1+I_2)\omega'\]
\[\omega'=\dfrac{I_1\omega_1-I_2\omega_2}{I_1+I_2}\]
\[\omega'=\dfrac{(4.25\ kg \cdot m^2)(97.39\ rad/s)- (1.80\ kg \cdot m^2)(89.22\ rad/s)}{4.25\ kg \cdot m^2+1.80\ kg \cdot m^2}\]
\(\omega'=41.9 \dfrac{rad}{s}\) (Counterclockwise as viewed from above.) |
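The same computation can be reproduced in a few lines; this is just a numerical restatement of the solution above:

```python
import math

I1, I2 = 4.25, 1.80          # kg m^2
w1 = 15.5 * 2 * math.pi      # rad/s, counterclockwise (positive) as viewed from above
w2 = 14.2 * 2 * math.pi      # rad/s, clockwise, so it enters with a minus sign

# Conservation of angular momentum: I1 w1 - I2 w2 = (I1 + I2) w'
w_final = (I1 * w1 - I2 * w2) / (I1 + I2)
print(round(w_final, 1))     # 41.9 rad/s, positive, i.e. counterclockwise
```

The sign convention (counterclockwise positive) is the free choice described earlier; once made, it is kept throughout.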
Category: Group Theory
Group Theory Problems and Solutions.
Popular posts in Group Theory are:
Problem 625
Let $G$ be a group and let $H_1, H_2$ be subgroups of $G$ such that $H_1 \not \subset H_2$ and $H_2 \not \subset H_1$.
(a) Prove that the union $H_1 \cup H_2$ is never a subgroup in $G$.
(b) Prove that a group cannot be written as the union of two proper subgroups.
Problem 616
Suppose that $p$ is a prime number greater than $3$.
Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$. (a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$. (b) Determine the index $[G : S]$.
(c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$.
Problem 613
Let $m$ and $n$ be positive integers such that $m \mid n$.
(a) Prove that the map $\phi:\Zmod{n} \to \Zmod{m}$ sending $a+n\Z$ to $a+m\Z$ for any $a\in \Z$ is well-defined. (b) Prove that $\phi$ is a group homomorphism. (c) Prove that $\phi$ is surjective.
(d) Determine the group structure of the kernel of $\phi$.
If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order
Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Problem 497
Let $G$ be an abelian group.
Let $a$ and $b$ be elements in $G$ of order $m$ and $n$, respectively. Prove that there exists an element $c$ in $G$ such that the order of $c$ is the least common multiple of $m$ and $n$.
Also determine whether the statement is true if $G$ is a non-abelian group.
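Problem 616 can be sanity-checked computationally for a small prime; the sketch below (with the illustrative choice $p=11$, a prime congruent to $3 \bmod 4$, so that $-1 \notin S$) verifies the claims of parts (a)-(c):

```python
def squares_subgroup(p):
    """The set S = {x^2 mod p : x in (Z/p)^*} of nonzero quadratic residues."""
    return {x * x % p for x in range(1, p)}

p = 11                      # illustrative prime > 3 with p = 3 (mod 4)
G = set(range(1, p))
S = squares_subgroup(p)

# (a) S contains 1 and is closed under multiplication; for a finite subset of a
#     group containing the identity, closure already implies it is a subgroup.
assert 1 in S
assert all(x * y % p in S for x in S for y in S)
# (b) the index [G : S] is 2
print(len(G) // len(S))     # 2
# (c) -1 is not a square here, and exactly one of a, -a lies in S for every a
assert (p - 1) not in S
assert all((x in S) != (p - x in S) for x in G)
```

Of course this is only a numerical check for one prime, not a proof; the proofs themselves are elementary group theory.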
Does anyone here understand why he set the velocity of the center of mass to 0 here? He keeps setting the velocity of the center of mass, and the acceleration of the center of mass (in other questions), to zero, and I don't understand why.
@amanuel2 Yes, this is a conservation of momentum question. The initial momentum is zero, and since there are no external forces, after she throws the 1st wrench the sum of her momentum plus the momentum of the thrown wrench is zero, and the centre of mass is still at the origin.
I was just reading a sci-fi novel where physics "breaks down". While of course fiction is fiction and I don't expect this to happen in real life, when I tried to contemplate the concept I find that I cannot even imagine what it would mean for physics to break down. Is my imagination too limited o...
The phase-space formulation of quantum mechanics places the position and momentum variables on equal footing, in phase space. In contrast, the Schrödinger picture uses the position or momentum representations (see also position and momentum space). The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and operator multiplication is replaced by a star product.The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis, and independently by Joe...
not exactly identical however
Also typo: Wavefunction does not really have an energy, it is the quantum state that has a spectrum of energy eigenvalues
Since Hamilton's equation of motion in classical physics is $$\frac{d}{dt} \begin{pmatrix} x \\ p \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \nabla H(x,p) \, ,$$ why does everyone make a big deal about Schrodinger's equation, which is $$\frac{d}{dt} \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \hat H \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} \, ?$$
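For what it's worth, the matrix form above is easy to verify numerically for a toy 2-level Hamiltonian (an arbitrary choice here, with $\hbar = 1$): integrating the real/imaginary split with RK4 reproduces the exact unitary evolution.

```python
import math

# A toy 2-level Hamiltonian (hbar = 1); H = [[a, b], [b, -a]] is real symmetric,
# and H^2 = (a^2 + b^2) * I, which makes the exact evolution easy to write down.
a, b = 1.0, 0.5
H = [[a, b], [b, -a]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

# Real/imaginary split of d(psi)/dt = -i H psi:  d/dt (u, v) = (H v, -H u),
# with u = Re(psi), v = Im(psi) -- exactly the matrix form quoted above.
def deriv(state):
    u, v = state[:2], state[2:]
    Hu, Hv = matvec(H, u), matvec(H, v)
    return [Hv[0], Hv[1], -Hu[0], -Hu[1]]

state = [1.0, 0.0, 0.0, 0.0]       # psi(0) = (1, 0)
dt, steps = 0.001, 1000            # integrate to t = 1 with classical RK4
for _ in range(steps):
    k1 = deriv(state)
    k2 = deriv([s + dt / 2 * k for s, k in zip(state, k1)])
    k3 = deriv([s + dt / 2 * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    state = [s + dt / 6 * (c1 + 2 * c2 + 2 * c3 + c4)
             for s, c1, c2, c3, c4 in zip(state, k1, k2, k3, k4)]

# Exact answer: exp(-iHt) = cos(wt) I - i sin(wt) H / w, with w = sqrt(a^2 + b^2)
t = dt * steps
w = math.hypot(a, b)
Hpsi0 = matvec(H, [1.0, 0.0])
psi_exact = [math.cos(w * t) * p0 - 1j * math.sin(w * t) / w * hp
             for p0, hp in zip([1.0, 0.0], Hpsi0)]
psi_rk4 = [complex(state[0], state[2]), complex(state[1], state[3])]
print(max(abs(x - y) for x, y in zip(psi_rk4, psi_exact)))   # tiny: the two agree
```

So at the level of the evolution equation the two really are the same kind of linear flow; the interpretational differences the later messages point to are where the big deal lives.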
Oh by the way, the Hamiltonian is a stupid quantity. We should always work with $H / \hbar$, which has dimensions of frequency.
@DanielSank I think you should post that question. I don't recall many looked at the two Hamilton equations together in this matrix form before, which really highlight the similarities between them (even though technically speaking the schroedinger equation is based on quantising Hamiltonian mechanics)
and yes you are correct about the $\nabla^2$ thing. I got too used to the position basis
@DanielSank The big deal is not the equation itself, but the meaning of the variables. The form of the equation itself just says "the Hamiltonian is the generator of time translation", but surely you'll agree that classical position and momentum evolving in time are a rather different notion than the wavefunction of QM evolving in time.
If you want to make the similarity really obvious, just write the evolution equations for the observables. The classical equation is literally Heisenberg's evolution equation with the Poisson bracket instead of the commutator, no pesky additional $\nabla$ or what not
The big deal many introductory quantum texts make about the Schrödinger equation is due to the fact that their target audience are usually people who are not expected to be trained in classical Hamiltonian mechanics.
No time remotely soon, as far as things seem. Just the amount of material required for an undertaking like that would be exceptional. It doesn't even seem like we're remotely near the advancement required to take advantage of such a project, let alone organize one.
I'd be honestly skeptical of humans ever reaching that point. It's cool to think about, but so much would have to change that trying to estimate it would be pointless currently
(lol) talk about raping the planet(s)... re dyson sphere, solar energy is a simplified version right? which is advancing. what about orbiting solar energy harvesting? maybe not as far away. kurzgesagt also has a video on a space elevator, its very hard but expect that to be built decades earlier, and if it doesnt show up, maybe no hope for a dyson sphere... o_O
BTW @DanielSank Do you know where I can go to wash off my karma? I just wrote a rather negative (though well-deserved, and as thorough and impartial as I could make it) referee report. And I'd rather it not come back to bite me on my next go-round as an author o.o |
As with all phenomena in classical mechanics, the motion of the particles in a wave, for instance the masses on springs in figure 9.1, is governed by Newton’s laws of motion and the various force laws. In this section we will use these laws to derive an equation of motion for the wave itself, which applies quite generally to wave phenomena. To do so, consider a series of particles of equal mass \(m\) connected by springs of spring constant \(k\), again as in figure 9.1a, and assume that at rest the distance between any two masses is \(h\). Let the position of particle \(i\) be \(x\), and \(u\) the distance that particle is away from its rest position; then \(u\) is a function of both position \(x\) and time \(t\). Suppose particle \(i\) has moved to the left; then it will feel a restoring force to the right due to two sources: the compressed spring on its left, and the extended spring on its right. The total force to the right is then given by:
\[\begin{aligned} F_{i} &=F_{i+1 \rightarrow i}-F_{i-1 \rightarrow i} \\ &=k[u(x+h, t)-u(x, t)]-k[u(x, t)-u(x-h, t)] \\ &=k[u(x+h, t)-2 u(x, t)+u(x-h, t)] \end{aligned} \label{9.3}\]
Equation \ref{9.3} gives the net force on particle \(i\), which by Newton’s second law of motion (equation 2.5) equals the particle’s mass times its acceleration. The acceleration is the second time derivative of the position \(x\), but since the equilibrium position is a constant, it is also the second time derivative of the distance from the equilibrium position \(u(x,t)\), and we have:
\[F_{\mathrm{net}}=m \frac{\partial^{2} u(x, t)}{\partial t^{2}}=k[u(x+h, t)-2 u(x, t)+u(x-h, t)] \label{9.4}\]
Equation \ref{9.4} holds for particle \(i\), but just as well for particle \(i+1\), or \(i-10\). We can get an equation for \(N\) particles by simply adding their individual equations, which we can do because these equations are linear. We thus find, for a string of particles of length \(L=Nh\) and total mass \(M=Nm\), with \(K=k/N\) the effective spring constant of the whole string:
\[\frac{\partial^{2} u(x, t)}{\partial t^{2}}=\frac{K L^{2}}{M} \frac{u(x+h, t)-2 u(x, t)+u(x-h, t)}{h^{2}} \label{9.5}.\] |
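In the limit \(h \to 0\), the difference quotient on the right-hand side of Equation \ref{9.5} becomes the second spatial derivative, turning Equation \ref{9.5} into the wave equation \(\partial_t^2 u = (KL^2/M)\,\partial_x^2 u\). A minimal numerical check of that limit:

```python
import math

def discrete_laplacian(f, x, h):
    """The difference quotient on the right-hand side of Equation 9.5."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# For u(x) = sin(x), the exact second derivative is -sin(x); the quotient
# converges to it like h^2, which is what the continuum limit h -> 0 relies on.
for h in (0.1, 0.01, 0.001):
    approx = discrete_laplacian(math.sin, 1.0, h)
    print(h, approx, approx + math.sin(1.0))   # the error column shrinks like h^2
```

This quadratic convergence is why treating the chain of discrete masses as a continuous string is such a good approximation once \(h\) is small compared to the wavelength.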
Revision as of 22:02, 2 May 2015
Spring 2015
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu.
Thursday, January 15, Miklos Racz, UC-Berkeley Stats
Title: Testing for high-dimensional geometry in random graphs
Abstract: I will talk about a random geometric graph model, where connections between vertices depend on distances between latent d-dimensional labels; we are particularly interested in the high-dimensional case when d is large. Upon observing a graph, we want to tell if it was generated from this geometric model, or from an Erdos-Renyi random graph. We show that there exists a computationally efficient procedure to do this which is almost optimal (in an information-theoretic sense). The key insight is based on a new statistic which we call "signed triangles". To prove optimality we use a bound on the total variation distance between Wishart matrices and the Gaussian Orthogonal Ensemble. This is joint work with Sebastien Bubeck, Jian Ding, and Ronen Eldan.
Thursday, January 22, No Seminar
Thursday, January 29, Arnab Sen, University of Minnesota
Title:
Double Roots of Random Littlewood Polynomials
Abstract: We consider random polynomials whose coefficients are independent and uniform on {-1,1}. We will show that the probability that such a polynomial of degree n has a double root is o(n^{-2}) when n+1 is not divisible by 4 and is of the order n^{-2} otherwise. We will also discuss extensions to random polynomials with more general coefficient distributions.
This is joint work with Ron Peled and Ofer Zeitouni.
Thursday, February 5, No seminar this week
Thursday, February 12, No Seminar this week
Thursday, February 19, Xiaoqin Guo, Purdue
Title: Quenched invariance principle for random walks in time-dependent random environment
Abstract: In this talk we discuss random walks in a time-dependent zero-drift random environment in [math]Z^d[/math]. We prove a quenched invariance principle under an appropriate moment condition. The proof is based on the use of a maximum principle for parabolic difference operators. This is a joint work with Jean-Dominique Deuschel and Alejandro Ramirez.
Thursday, February 26, Dan Crisan, Imperial College London
Title:
Smoothness properties of randomly perturbed semigroups with application to nonlinear filtering
Abstract: In this talk I will discuss sharp gradient bounds for perturbed diffusion semigroups. In contrast with existing results, the perturbation is here random and the bounds obtained are pathwise. Our approach builds on the classical work of Kusuoka and Stroock and extends their program developed for the heat semi-group to solutions of stochastic partial differential equations. The work is motivated by and applied to nonlinear filtering. The analysis allows us to derive pathwise gradient bounds for the un-normalised conditional distribution of a partially observed signal. The estimates we derive have sharp small time asymptotics
This is joint work with Terry Lyons (Oxford) and Christian Literrer (Ecole Polytechnique) and is based on the paper
D. Crisan, C. Litterer, T. Lyons, Kusuoka–Stroock gradient bounds for the solution of the filtering equation, Journal of Functional Analysis, 2015.
Wednesday, March 4, Sam Stechmann, UW-Madison, 2:25pm Van Vleck B113
Please note the unusual time and room.
Title: Stochastic Models for Rainfall: Extreme Events and Critical Phenomena
Abstract: In recent years, tropical rainfall statistics have been shown to conform to paradigms of critical phenomena and statistical physics. In this talk, stochastic models will be presented as prototypes for understanding the atmospheric dynamics that leads to these statistics and extreme events. Key nonlinear ingredients in the models include either stochastic jump processes or thresholds (Heaviside functions). First, both exact solutions and simple numerics are used to verify that a suite of observed rainfall statistics is reproduced by the models, including power-law distributions and long-range correlations. Second, we prove that a stochastic trigger, which is a time-evolving indicator of whether it is raining or not, will converge to a deterministic threshold in an appropriate limit. Finally, we discuss the connections among these rainfall models, stochastic PDEs, and traditional models for critical phenomena.
Thursday, March 12, Ohad Feldheim, IMA
Title:
The 3-states AF-Potts model in high dimension
Abstract: Take a bounded odd domain of the bipartite graph [math]\mathbb{Z}^d[/math]. Color the boundary of the set by [math]0[/math], then color the rest of the domain at random with the colors [math]\{0,\dots,q-1\}[/math], penalizing every configuration with proportion to the number of improper edges at a given rate [math]\beta\gt 0[/math] (the "inverse temperature"). Q: "What is the structure of such a coloring?"
This model is called the [math]q[/math]-states Potts antiferromagnet(AF), a classical spin glass model in statistical mechanics. The [math]2[/math]-states case is the famous Ising model which is relatively well understood. The [math]3[/math]-states case in high dimension has been studies for [math]\beta=\infty[/math], when the model reduces to a uniformly chosen proper three coloring of the domain. Several words, by Galvin, Kahn, Peled, Randall and Sorkin established the structure of the model showing long-range correlations and phase coexistence. In this work, we generalize this result to positive temperature, showing that for large enough [math]\beta[/math] (low enough temperature) the rigid structure persists. This is the first rigorous result for [math]\beta\lt \infty[/math].
In the talk, assuming no acquaintance with the model, we shall give the physical background, introduce all the relevant definitions and shed some light on how such results are proved using only combinatorial methods. Joint work with Yinon Spinka.
Thursday, March 19, Mark Huber, Claremont McKenna Math
Title: Understanding relative error in Monte Carlo simulations
Abstract: The problem of estimating the probability [math]p[/math] of heads on an unfair coin has been around for centuries, and has inspired numerous advances in probability such as the Strong Law of Large Numbers and the Central Limit Theorem. In this talk, I'll consider a new twist: given an estimate [math]\hat p[/math], suppose we want to understand the behavior of the relative error [math](\hat p - p)/p[/math]. In classic estimators, the values that the relative error can take on depends on the value of [math]p[/math]. I will present a new estimate with the remarkable property that the distribution of the relative error does not depend in any way on the value of [math]p[/math]. Moreover, this new estimate is very fast: it takes a number of coin flips that is very close to the theoretical minimum. Time permitting, I will also discuss new ways to use concentration results for estimating the mean of random variables where normal approximations do not apply.
Thursday, March 26, Ji Oon Lee, KAIST
Title: Tracy-Widom Distribution for Sample Covariance Matrices with General Population
Abstract: Consider the sample covariance matrix [math](\Sigma^{1/2} X)(\Sigma^{1/2} X)^*[/math], where the sample [math]X[/math] is an [math]M \times N[/math] random matrix whose entries are real independent random variables with variance [math]1/N[/math] and [math]\Sigma[/math] is an [math]M \times M[/math] positive-definite deterministic diagonal matrix. We show that the fluctuation of its rescaled largest eigenvalue is given by the type-1 Tracy-Widom distribution. This is a joint work with Kevin Schnelli.
Thursday, April 2, No Seminar, Spring Break Thursday, April 9, Elnur Emrah, UW-Madison
Title: The shape functions of certain exactly solvable inhomogeneous planar corner growth models
Abstract: I will talk about two kinds of inhomogeneous corner growth models with independent waiting times {W(i, j): i, j positive integers}: (1) W(i, j) is distributed exponentially with parameter [math]a_i+b_j[/math] for each i, j.(2) W(i, j) is distributed geometrically with fail parameter [math]a_ib_j[/math] for each i, j. These generalize exactly-solvable i.i.d. models with exponential or geometric waiting times. The parameters (a_n) and (b_n) are random with a joint distribution that is stationary with respect to the nonnegative shifts and ergodic (separately) with respect to the positive shifts of the indices. Then the shape functions of models (1) and (2) satisfy variational formulas in terms of the marginal distributions of (a_n) and (b_n). For certain choices of these marginal distributions, we still get closed-form expressions for the shape function as in the i.i.d. models.
Thursday, April 16, Scott Hottovy, UW-Madison
Title:
An SDE approximation for stochastic differential delay equations with colored state-dependent noise
Abstract: In this talk I will introduce a stochastic differential delay equation with state-dependent colored noise which arises from a noisy circuit experiment. In the experimental paper, a small delay and correlation time limit was performed by using a Taylor expansion of the delay. However, a time substitution was first performed to obtain a good match with experimental results. I will discuss how this limit can be proved without the use of a Taylor expansion by using a theory of convergence of stochastic processes developed by Kurtz and Protter. To obtain a necessary bound, the theory of sums of weakly dependent random variables is used. This analysis leads to the explanation of why the time substitution was needed in the previous case.
Thursday, April 23, Hoi Nguyen, Ohio State University
Title: On eigenvalue repulsion of random matrices
Abstract:
I will address certain repulsion behavior of roots of random polynomials and of eigenvalues of Wigner matrices, and their applications. Among other things, we show a Wegner-type estimate for the number of eigenvalues inside an extremely small interval for quite general matrix ensembles.
Thursday, May 7, Jessica Lin, UW-Madison, 2:25pm, Van Vleck B 115
Please note the unusual room: Van Vleck B115, in the basement.
Title:
Random Walks in Random Environments and Stochastic Homogenization
In this talk, I will draw connections between random walks in random environments (RWRE) and stochastic homogenization of partial differential equations (PDE). I will introduce various models of RWRE and derive the corresponding PDEs to show that the two subjects are intimately related. I will then give a brief overview of the tools and techniques used in both approaches (reviewing some classical results), and discuss some recent problems in RWRE which are related to my research in stochastic homogenization.
Thursday, May 14, Chris Janjigian, UW-Madison
Title:
Large deviations of the free energy in the O’Connell-Yor polymer
Abstract: The first model of a directed polymer in a random environment was introduced in the statistical physics literature in the mid 1980s. This family of models has attracted substantial interest in the mathematical community in recent years, due in part to the conjecture that they lie in the KPZ universality class. At the moment, this conjecture can only be verified rigorously for a handful of exactly solvable models. In order to further explore the behavior of these models, it is natural to question whether the solvable models have any common features aside from the Tracy-Widom fluctuations and scaling exponents that characterize the KPZ class.
This talk considers the behavior of one of the solvable polymer models when it is far away from the behavior one would expect based on the KPZ conjecture. We consider the model of a 1+1 dimensional directed polymer model due to O’Connell and Yor, which is a Brownian analogue of the classical lattice polymer models. This model satisfies a strong analogue of Burke’s theorem from queueing theory, which makes some objects of interest computable. This talk will discuss how, using the Burke property, one can compute the positive moment Lyapunov exponents of the parabolic Anderson model associated to the polymer and how this leads to a computation of the large deviation rate function with normalization n for the free energy of the polymer. |
I know that
$$ \omega = 2 \pi f $$
why do we use angular frequency to express the instantaneous value of a wave?
It's easiest to understand with units. The arguments of the sine and cosine functions are in radians, so angular frequency expresses the frequency in the most convenient form for these functions. For instance, if your time unit is seconds, then the units for $f$ are cycles per second, aka Hz. The factor $2\pi$ has units of radians per cycle, so the equation you mentioned, $ \omega = 2 \pi f $, has units of:
$$ \frac{radians}{second} = \frac{radians}{cycle} \cdot \frac{cycles}{second} $$
When it is plugged into the equation for your signal:
$$ s(t) = A \cos( \omega t + \phi ) $$
The argument $ \omega t + \phi $ has units of:
$$ \frac{radians}{second} \cdot seconds + radians = radians $$
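The same unit bookkeeping shows up in code. Here is a minimal Python sketch (the function name `instantaneous_value` is just illustrative) that converts a frequency given in Hz to radians per second before evaluating $s(t)$, since `math.cos` expects its argument in radians:

```python
import math

def instantaneous_value(A, f, t, phi=0.0):
    """Evaluate s(t) = A*cos(omega*t + phi) for a frequency f given in Hz."""
    omega = 2 * math.pi * f  # (radians/cycle) * (cycles/second) = radians/second
    return A * math.cos(omega * t + phi)

# A 50 Hz signal repeats every 1/50 = 0.02 s, so these two samples agree:
s1 = instantaneous_value(1.0, 50.0, 0.003)
s2 = instantaneous_value(1.0, 50.0, 0.003 + 0.02)
```

Passing $f$ to `math.cos` without the $2\pi$ factor would silently treat cycles per second as radians per second, which is exactly the unit mismatch the conversion guards against.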
Hope this helps.
Ced |
The uncertainty principle tells us that $$\sigma_x\sigma_p \geq \frac{\hbar}{2}, $$ which means that the more precisely we measure a particle's position, the less precisely we can know its momentum. Moreover, the standard deviation $\sigma_x$ can't be zero, so a particle's wave function always has some spread to it. That got me thinking: is that the reason why particles such as protons, electrons, or neutrons have size? Is their size determined by the average spread (the standard deviation in position space) obtained by collapsing their wave functions an infinite number of times?
In the mainstream standard model of particle physics, all matter is made up of point particles with fixed masses, which we measure as well as we can within our experimental errors. There is no width to the masses in the table.
Protons (and neutrons bound in a nucleus) are stable composite particles, made up of a great multitude of quarks, antiquarks, and gluons, plus some valence quarks, and are found as quantum mechanical solutions in QCD lattice models. Experimentally, proton decay has not been seen, so the intrinsic width of the proton mass is still a delta function, although there are models that allow baryon number non-conservation. (The same is true for the free neutron, whose lifetime is long enough that the possible width in mass is not measurable.)
Width due to the quantum mechanical wave function is found theoretically and measured experimentally in resonances and decaying elementary particles, as seen here. The Heisenberg uncertainty is directly connected with this width, but the width depends on the interactions allowed by the various conservation laws for the specific decay.
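The width–lifetime connection mentioned above ($\Gamma \, \tau \approx \hbar$, from the energy–time uncertainty relation) can be sketched numerically. The Z boson width used below is the commonly quoted measured value of about 2.495 GeV, and the function name is just illustrative:

```python
HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s

def lifetime_from_width(gamma_ev):
    """Mean lifetime tau = hbar / Gamma: a resonance's measured mass width
    in eV sets its decay time via the energy-time uncertainty relation."""
    return HBAR_EV_S / gamma_ev

# Z boson: a width of ~2.495 GeV corresponds to a lifetime of ~2.6e-25 s.
tau_z = lifetime_from_width(2.495e9)
```

For a particle with no observed decay, like the proton, the measured width is consistent with zero and the corresponding lifetime diverges, which is the delta-function limit described above.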