also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
> for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
First of all you have to distinguish the classical uncertainty from the quantum one. The density matrix can be written as $$\rho=\sum_i w_i|\alpha_i\rangle\langle\alpha_i|$$ where the $w_i$ can be classical probabilities, if you can't say exactly where the state is in the Hilbert space, or quantum ones, if you don't want (or can't) write the state as a definite vector of the Hilbert space.
Classical
As an example, take a two-level system and consider the case in which you classically can't say in which of the two states the system is. In that case your density matrix will be $$\rho=\frac{1}{2}|0\rangle\langle0|+\frac{1}{2}|1\rangle\langle1|$$ and your entropy will be $S=-Tr(\rho\log\rho)=\log2$. This entropy is a measure of your classical uncertainty about the state and has nothing to do with quantum uncertainty.
Quantum
Now consider a system composed of two two-level systems as above, each with its own Hilbert space $\mathcal{H}_1$ and $\mathcal{H}_2$. To write the total Hilbert space you have to take the tensor product, and you will have some vectors that cannot be written in terms of the separate Hilbert spaces. A famous example is one of the Bell states $$|\psi\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle\otimes|1\rangle+|1\rangle\otimes|0\rangle\right)~.$$ While a total state can always be written in a basis in which the state is itself one of the basis vectors, so that its density matrix looks like $$\rho=\begin{pmatrix}1&\cdots&0\\ \vdots&\ddots&\vdots\\0&\cdots&0\end{pmatrix}$$ and its entropy is $S=-Tr(\rho\log\rho)=0$, this is not true for a single subsystem of the Bell state, for which the density matrix is obtained by applying the trace operator over the Hilbert space of the other subsystem to the total density matrix: $$\rho_{A}=Tr_B(\rho)=\frac{1}{2}\begin{pmatrix}1&0\\0&1\end{pmatrix}~.$$
In this case the von Neumann entropy will be $S=-Tr(\rho\log\rho)=\log2$. This entropy measures the quantum correlations between the two subsystems and is always different from zero if the state you are considering is entangled with something else (another subsystem or an environment).
From the classical and the maximally entangled scenarios you can see that there is no difference between the two density matrices, so given an arbitrary density matrix you do not have a criterion to distinguish the two cases, and the von Neumann entropy will come out the same. It can be shown that deciding whether a density matrix is entangled or not is an NP-hard problem known as the Quantum Separability Problem.
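To make the two scenarios concrete, here is a minimal R sketch (my own illustration, not part of the answer) that computes the von Neumann entropy of the classical mixture and of the reduced Bell state; both come out as $\log 2$, as claimed:

# Von Neumann entropy S = -Tr(rho log rho) from the eigenvalues of rho
vn_entropy <- function(rho) {
  ev <- eigen(rho, symmetric = TRUE)$values
  ev <- ev[ev > 1e-12]            # drop zero eigenvalues (0 log 0 = 0)
  -sum(ev * log(ev))
}

# Classical 50/50 mixture of |0> and |1>
rho_classical <- diag(c(0.5, 0.5))

# Bell state |psi> = (|01> + |10>)/sqrt(2), basis order |00>, |01>, |10>, |11>
psi <- c(0, 1, 1, 0) / sqrt(2)
rho_total <- psi %o% psi

# Partial trace over subsystem B: (rho_A)_{a,a'} = sum_b rho_{(a,b),(a',b)}
rho_A <- matrix(0, 2, 2)
for (a in 0:1) for (ap in 0:1) for (b in 0:1)
  rho_A[a + 1, ap + 1] <- rho_A[a + 1, ap + 1] +
    rho_total[2 * a + b + 1, 2 * ap + b + 1]

c(vn_entropy(rho_classical), vn_entropy(rho_A), log(2))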
And yet another question to discuss the assumptions in PRIIPs. It is remarkable that in these legal documents a Cornish-Fisher expansion including skewness and kurtosis is used.
Looking at the very recent version of the document, we find on page 27 the following formula for the moderate scenario (which is, if I read it correctly, supposed to be the 50% quantile):
$$ \exp(M_1 \cdot N - \sigma \mu_1/6 - 0.5 \sigma^2 N ), $$ where $N$ is the number of days (more details are not necessary here), $M_1$ is the first moment of the log returns observed, $\sigma$ is the standard deviation and $\mu_1$ is the skewness measured.
I have one question: I see that $- \sigma \mu_1/6$ enters if we put in $0$ for the "z-value". Thus there is something that remains from skewness.
But is it ok to have the average return $M_1$ if we model in a risk-neutral world?
If $M_1$ is the average of log-returns then we have $M_1 = \tilde{\mu} + \sigma^2/2$ where $\tilde{\mu}$ is the "true" mean and $\sigma^2/2$ is the convexity that we have in the log-normal case. This is corrected in the last part of the formula by the term $- 0.5 \sigma^2 N$. This formula is different from the others where there is usually just an expected return of $-\sigma^2/2 N$ which makes the expected growth zero (see e.g. page 28 point 11).
In short: is it really consistent to have the $M_1$ term above? Any comments are really appreciated!
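As a concreteness check (my own sketch, not from the regulation), here is how the quoted moderate-scenario formula can be evaluated in R; the daily log-return sample and the horizon $N$ are made-up illustrative inputs:

# Hypothetical daily log returns (illustrative only)
set.seed(1)
r <- rnorm(1250, mean = 2e-4, sd = 0.01)

M1    <- mean(r)                          # first moment of log returns
sigma <- sd(r)                            # standard deviation
mu1   <- mean((r - M1)^3) / sigma^3       # measured skewness
N     <- 1250                             # number of days in the horizon

# Moderate (50%) scenario per the quoted formula, as a gross return factor
exp(M1 * N - sigma * mu1 / 6 - 0.5 * sigma^2 * N)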
Measurement is the foundation of many mathematical concepts, and it is not possible to imagine the world without measurements. Measurements based on international standards increase the level of accuracy, but every measurement is still subject to small errors, i.e., to a level of uncertainty. In simple words, 100 percent accurate measurements are not possible in the practical world.
An error is simply defined as the difference between the measured value and the actual value. For example, when two operators use the same device for a measurement, it is not necessary that their results will be the same. The difference that occurs between the actual value and the measured value is called the error.
To learn these mathematical concepts deeply, you should know the different terms that describe errors: sampling error, standard error, margin of error, percent error, etc. Let us discuss each of these terms, with its formula, one by one. If you understand these definitions and formulas well, you will be able to calculate values as accurately as possible.
The error that arises due to sampling is called the sampling error. This is the error usually encountered in statistical analysis when the sample of observations taken does not represent the population. For example, if the weights of 2,000 citizens of a country are noted down and averaged, that average need not be the same as the average weight of all two million people.
To estimate the weight of the whole population, a sampling technique is used. The difference between the sample values and the population values is termed the sampling error. It is not possible to calculate the exact population value if you don't know the sampling error, and that can only be estimated from the sample model.
So, the sampling error Formula in mathematics could be written as below –
\[\ Sampling\;Error=\pm \sqrt{\frac{2500}{Sample\;Size}}\times 1.96\]
There can be manufacturing errors in measuring instruments too, so their readings cannot be assumed to be exact. To quantify this type of error, we use the percentage error formula: the difference between the measured value and the actual value, divided by the actual value, multiplied by one hundred.
\[\ Percentage\;Error=\frac{Approximate\;Value-Exact\;Value}{Exact\;Value}\times 100\]
The margin of error is generally found in random sampling or survey results. It quantifies how close the result from a sample is expected to be to the one you would get if the whole population had been queried. In simple words, the margin of error is the product of the critical value and the standard error. It is denoted by E and can be written as:
\[\ E=Z_{\frac{\alpha}{2}}\left(\frac{\sigma}{\sqrt{n}}\right)\]
Here, \(Z_{\frac{\alpha}{2}}\) represents the critical value, and \(\frac{\sigma}{\sqrt{n}}\) represents the standard error (the standard deviation of the sample mean).
The standard error is an important statistical measure related to the standard deviation. It describes the accuracy with which a sample represents the population, and its formula can be written as below:
\[\ Standard\;Error =SE_{\overline{x}}=\frac{S}{\sqrt{n}}\]
where \(s\) is the standard deviation and \(n\) is the number of observations.
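A minimal R sketch tying the four formulas together (the numeric inputs are invented for illustration):

# Percentage error
approx_val <- 9.8
exact_val  <- 9.81
(approx_val - exact_val) / exact_val * 100        # about -0.1 percent

# Standard error of the mean for a small hypothetical sample
x <- c(12.1, 11.8, 12.4, 12.0, 11.9)
sd(x) / sqrt(length(x))

# Rule-of-thumb sampling error (in percent) at 95% confidence
sample_size <- 400
sqrt(2500 / sample_size) * 1.96                   # +/- 4.9 percent

# Margin of error E = z_(alpha/2) * sigma / sqrt(n) at the 95% level
qnorm(0.975) * sd(x) / sqrt(length(x))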
I know I've got a mistake, probably with question a. But I don't know where or what the mistake is.
The question is
Let $D\subset\mathbb{R}^3$ be the pyramid with vertices $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, $(0,0,1)$.
Given is the vector field $\vec{F} = xy\hat{i}+y^2\hat{j}+zy\hat{k}$.
Calculate the flux of $\vec{F}$ through $D$.
a) Directly. b) Using divergence theorem.
My solution of a:
Since we have a pyramid of four surfaces, we shall calculate the flux through each separately. Let $S_1$ denote the surface on the $xy$-plane. $S_2$ on the $xz$-plane. $S_3$ on the $zy$-plane. And $S_4$ in the $xyz$-space.
Let the vector field be in the form $\vec{F}=P\hat{i}+Q\hat{j}+R\hat{k}$, then I calculate the flux by $$\int\int_D \left(-P\cdot g_x(x,y) -Q\cdot g_y(x,y) + R\right)\,dA$$
The flux of $\vec{F}$ through $S_1$ is given by $$\int_{x=0}^{1}\int_{y=0}^{1-x}\left((-xy)\cdot g_x-y^2\cdot g_y + zy\right)\,dydx =\int_{x=0}^{1}\int_{y=0}^{1-x}\left(xy+y^2 + y -xy -y^2\right)\,dydx$$, with $g(x,y)=1-x-y$ for $x+y=1$. If I calculate this integral I get $1/6$.
The flux of $\vec{F}$ through $S_2$ is given by $$\int_{x=0}^{1}\int_{z=0}^{1-x}\left((-xy)\cdot g_x+y^2 + zy\cdot g_z\right)\,dzdx$$, with $g(x,z)=1-x-z$ for $x+z=1$. If I calculate this integral I get $1/6$.
The flux of $\vec{F}$ through $S_3$ is given by $$\int_{y=0}^{1}\int_{z=0}^{1-y}\left(xy+y^2\cdot g_y + zy\cdot g_z\right)\,dzdy$$, with $g(y,z)=1-y-z$ for $y+z=1$. If I calculate this integral I get $1/6$.
The flux of $\vec{F}$ through $S_4$ is given by $$\int_{x=0}^{1}\int_{y=0}^{1}\left((-xy)\cdot g_x-y^2\cdot g_y + zy\right)\,dydx $$, with $g(x,y)=1-x-y$. If I calculate this integral I get $1/2$.
The total flux is then the flux through all the separate surfaces added together, we get a flux of $1$.
My solution of b: Since the surface is simple and closed, we get $$\int_{x=0}^1\int_{y=0}^1\int_{z=0}^1 Div(\vec{F})dz\,dy\,dx$$ with the divergence equal to $y+2y+y=4y$. Hence the flux is $2$.
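Not part of the original post, but a quick Monte Carlo sanity check of $\iiint_D Div(\vec F)\,dV$ over the actual tetrahedron can help localize the mistake; note the region is $x+y+z\le 1$, not the unit cube used in part (b). A short R sketch:

# Monte Carlo estimate of the integral of div(F) = 4y over the tetrahedron
# x, y, z >= 0, x + y + z <= 1 (sampling the unit cube and masking)
set.seed(1)
n <- 1e6
x <- runif(n); y <- runif(n); z <- runif(n)
mean(ifelse(x + y + z <= 1, 4 * y, 0))   # approaches 1/6, about 0.1667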
Inverse functions are functions that “reverse” each other.
We consider a function \(f\left( x \right)\), which is strictly monotonic on an interval \(\left( {a,b} \right)\). If there exists a point \({x_0}\) in this interval such that \(f'\left( {{x_0}} \right) \ne 0\), then the inverse function \(x = \varphi \left( y \right)\) is also differentiable at \({y_0} = f\left( {{x_0}} \right)\) and its derivative is given by
\[\varphi'\left( {{y_0}} \right) = \frac{1}{{f'\left( {{x_0}} \right)}}.\]
Let us prove this theorem (called the inverse function theorem).
Suppose that the variable \(y\) gets an increment \(\Delta y \ne 0\) at the point \({y_0}.\) The corresponding increment of the variable \(x\) at the point \({x_0}\) is denoted by \(\Delta x\), where \(\Delta x \ne 0\) due to the strict monotonicity of \(y = f\left( x \right)\). The ratio of the increments is written as
\[\frac{{\Delta x}}{{\Delta y}} = \frac{1}{{\frac{{\Delta y}}{{\Delta x}}}}.\]
Suppose that \(\Delta y \to 0\). Then \(\Delta x \to 0\), since the inverse function \(x = \varphi \left( y \right)\) is continuous at \({y_0}\). In the limit when \(\Delta x \to 0\), the right side of the relationship becomes
\[
{\lim\limits_{\Delta x \to 0} \frac{1}{{\frac{{\Delta y}}{{\Delta x}}}} = \frac{1}{{\lim\limits_{\Delta x \to 0} \frac{{\Delta y}}{{\Delta x}}}} } = {\frac{1}{{f'\left( {{x_0}} \right)}}.} \]
In this case, the left hand side also approaches a limit, which by definition is equal to the derivative of the inverse function:
\[\lim\limits_{\Delta y \to 0} \frac{{\Delta x}}{{\Delta y}} = \varphi'\left( {{y_0}} \right).\]
Thus,
\[\varphi'\left( {{y_0}} \right) = \frac{1}{{f'\left( {{x_0}} \right)}},\]
that is, the derivative of the inverse function is the reciprocal of the derivative of the original function.
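As a quick numeric illustration (not part of the original text), one can check \(\varphi'\left( {{y_0}} \right) = 1/f'\left( {{x_0}} \right)\) for, say, \(f(x) = x^3 + x\) at \(x_0 = 1\) by finite differences in R:

# Numeric check of the inverse function theorem for f(x) = x^3 + x at x0 = 1
f <- function(x) x^3 + x
fprime <- function(x) 3 * x^2 + 1

x0 <- 1
y0 <- f(x0)

# Invert f numerically near y0 and differentiate by a central difference
phi <- function(y) uniroot(function(x) f(x) - y, c(-10, 10))$root
h <- 1e-5
phi_prime <- (phi(y0 + h) - phi(y0 - h)) / (2 * h)

c(phi_prime, 1 / fprime(x0))   # both approximately 0.25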
In the examples below, find the derivative of the function \(y = f\left( x \right)\) using the derivative of the inverse function \(x = \varphi \left( y \right).\)
Solved Problems
Click a problem to see the solution.
Example 1: \(y = \sqrt[\large n\normalsize]{x}\)
Example 2: \(y = \arcsin x\)
Example 3: \(y = \ln x\)
Example 4: \(y = \sqrt[\large 3\normalsize]{{x + 1}}\)
Example 5: \(y = \arccos \left( {1 - 2x} \right)\)
Example 6: \(y = \sqrt {1 + \sqrt x }\)
Example 7: \(y = \arctan \frac{1}{x}\)
Example 8: \(y = \sqrt x\)
Example 9: \(y = 2x + 4\)
Example 10: Given the function \(y = {x^5} + 2{x^3} + 3x\). Find the derivative of the inverse function at \(x = 1.\)
Example 11: Given the function \(y = {x^2} - x\). Find the derivative of the inverse function at \(x = 1.\)
Example 12: Given the function \(y = {e^x} + 2x + 1\). Find the derivative of the inverse function at \(x = 0.\)
Example 13: Find the derivative of the inverse function at \(x = 1\) for the function \(y = \sin \left( {x - 1} \right) + {x^2}.\)
Example 14: Find the derivative of the inverse function of \(y = {x^2} + 2\ln x\) and calculate its value at \(x = 1.\)
Example 15: Find the derivative of the inverse function for \(y = {x^3} - 3x\) and calculate its value at \(x = -2.\)
Example 16: Find the derivative of the inverse function for \(y = 2{x^3} - 1\) and calculate its value at \(x = 2.\)
Example 17: \(y = {\log _2}\left( {\frac{x}{3}} \right)\)
Example 18: Find the derivative of \(y = \text{arcsec }x\) at \(x = \sqrt 2.\)
Example 19: Find the derivative of the inverse function for \(y = x \cdot {3^x}\) provided \(x \gt 0.\)
Example 20: Find the derivative of the inverse hyperbolic sine function \(y = \text{arcsinh } x.\)

Example 1. \(y = \sqrt[\large n\normalsize]{x}\)
Solution.
We first determine the inverse function for the given function \(y = \sqrt[\large n\normalsize]{x}\). To do this, we express the variable \(x\) in terms of \(y:\)
\[
{y = f\left( x \right) = \sqrt[\large n\normalsize]{x},\;\;}\Rightarrow {{y^n} = {\left( {\sqrt[\large n\normalsize]{x}} \right)^n},\;\;}\Rightarrow {x = \varphi \left( y \right) = {y^n}.} \]
By the inverse function theorem, we can write:
\[
{{\left( {\sqrt[\large n\normalsize]{x}} \right)^\prime } = f'\left( x \right) } = {\frac{1}{{\varphi'\left( y \right)}} } = {\frac{1}{{{{\left( {{y^n}} \right)}^\prime }}} } = {\frac{1}{{n{y^{n - 1}}}}.} \]
Now we substitute \(y = \sqrt[\large n\normalsize]{x}\) instead of \(y.\) As a result, we obtain an expression for the derivative of the given function:
\[
{{\left( {\sqrt[\large n\normalsize]{x}} \right)^\prime } = \frac{1}{{n{y^{n - 1}}}} } = {\frac{1}{{n{{\left( {\sqrt[\large n\normalsize]{x}} \right)}^{n - 1}}}} } = {\frac{1}{{n\sqrt[\large n\normalsize]{{{x^{n - 1}}}}}}\;\;\;}\kern-0.3pt{\left( {x \gt 0} \right).} \] Example 2. \(y = \arcsin x\)
Solution.
The arcsine function is the inverse of the sine function. Therefore \(x = \varphi \left( y \right) = \sin y.\) Then the derivative of \(\arcsin x\) is
\[
{{\left( {\arcsin x} \right)^\prime } = f'\left( x \right) } = {\frac{1}{{\varphi'\left( y \right)}} } = {\frac{1}{{{{\left( {\sin y} \right)}^\prime }}} } = {\frac{1}{{\cos y}} } = {\frac{1}{{\sqrt {1 - {{\sin }^2}y} }} } = {\frac{1}{{\sqrt {1 - {{\sin }^2}\left( {\arcsin x} \right)} }} } = {\frac{1}{{\sqrt {1 - {x^2}} }},} \]
where \(-1 \lt x \lt 1.\)
Example 3.\[y = \ln x\]
Solution.
The natural logarithm and the exponential function are mutually inverse functions. Therefore, \(x = \varphi \left( y \right) = {e^y}\), where \(x \gt 0\), \(y \in \mathbb{R}\). The derivative of the natural logarithm is easy to calculate through the derivative of the exponential function:
\[
{{\left( {\ln x} \right)^\prime } = f'\left( x \right) } = {\frac{1}{{\varphi'\left( y \right)}} } = {\frac{1}{{{{\left( {{e^y}} \right)}^\prime }}} } = {\frac{1}{{{e^y}}} } = {\frac{1}{{{e^{\ln x}}}} } = {\frac{1}{x}}. \]
Here we have used a logarithmic identity, according to which
\[{e^{\ln x}} = x.\]
Suppose the entropy of the seed key is $H(n)$, where the seed is $n$ bits long and random, and the output of the PRNG is $2^n$ bits long, to be used as a long running key. In that case it is obvious that, as the output key sequence of the PRNG is derived from the input seed key, the entropy of the output key sequence never exceeds the entropy of the seed key: $H(2^n) \le H(n)$ in this scenario. But how does one prove it?
TL;DR: The applied thought process was wrong. Entropy is a function of random variables and not of arbitrary integers. The fact that the PRNG doesn't increase entropy follows from the fact that the input space is limited and there's no other input to the algorithm and thereby entropy can't be increased.
First, let's recall the formal definition of "entropy" (as can be found in the Handbook of Applied Cryptography, chapter 2.2, PDF):
Definition: The entropy or uncertainty of $X$ is defined to be $H(X)=-\sum_{i=1}^np_i\lg p_i=\sum_{i=1}^np_i\lg (\frac{1}{p_i})$ where, by convention, $p_i\cdot \lg p_i=p_i\cdot \lg (\frac{1}{p_i})=0$ if $p_i=0$. Notation: $\lg a$ is the binary logarithm of $a$; $X$ is a random variable that takes each value $x_i$ (from a finite set of size $n$) with the corresponding probability $p_i$.
Now let's model the $m$-bit string as the random variable $X$. The $m$-bit string can take $n_X=2^m$ values, each of which has the probability $p_i=\frac{1}{n_X}=2^{-m}$. Thereby the entropy $H(X)$ is $$H(X)=\sum_{i=1}^np_i\lg (\frac{1}{p_i})=\sum_{i=1}^{2^m}2^{-m}\lg (2^m)=2^m\cdot(2^{-m}m)=m$$ as expected.
The next step is to properly model the "PRNG". I will model it as a pseudo-random generator (PRG), because I assume that's how you meant it. In short: a pseudo-random generator (PRG) is a deterministic algorithm that takes a fixed-length input and returns a longer output. (This is not a full formal definition, and the usual definition would restrict to polynomially sized outputs, but we don't need one here.)
By this we can model our PRG as $\mathcal G: \{0,1\}^m\rightarrow \{0,1\}^{2^m}$. As $\{0,1\}^m$ can only take $2^m$ states and this is the only input to the function, we know that there can be at most $2^m$ different outputs of the PRG.
If we model the PRG output as a random variable $Y$, we know that (in "best case") $p_i=2^{-m}$ for $2^m$ different $y_i$ and $n_Y=2^{2^m}$. We further know that for the remaining $2^{2^{m}}-2^m$ possible values $y_i$, $p_i=0$ holds. With this information, we can compute the upper bound of the entropy of the PRG:
$$H(Y)=\sum_{i=1}^np_i\lg (\frac{1}{p_i})=\sum_{i=1}^{2^m}2^{-m}\lg (2^m)+\sum^{2^{2^m}-2^m}_{i=1}0\cdot\lg(\frac{1}{0})=2^m\cdot(2^{-m}m)+(2^{2^m}-2^m)\cdot 0=m$$
Note that $0\cdot \lg(\frac{1}{0})$ is, by convention, $0$.
As we have modeled $Y$ using an optimal "PRG", it may very well be the case that there aren't even $2^m$ different outputs. So we can conclude that $$H(X)\geq H(Y)$$ as expected, and thereby that the PR(N)G doesn't increase entropy by itself.
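A toy illustration in R (my own, not from the answer): enumerate all seeds of a tiny PRG and compute the entropy of the output distribution, confirming $H(Y) \le H(X) = m$. The expansion rule is an arbitrary made-up deterministic map.

# Toy check: a 3-bit seed expanded by some deterministic map can carry
# at most 3 bits of entropy, no matter how long the output is
m <- 3
seeds <- 0:(2^m - 1)

# A made-up deterministic "PRG": any fixed function of the seed works here
prg <- function(s) (s * 37 + 11) %% 256

outputs <- sapply(seeds, prg)
p <- table(outputs) / length(seeds)   # uniform over the distinct outputs
H_Y <- -sum(p * log2(p))
c(H_X = m, H_Y = H_Y)                 # H_Y <= 3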
There is no simple $H(n)$ function - i.e., there is no generic function that measures the entropy of a bit-stream. That is because bit-streams do not inherently possess entropy. To measure entropy in the data, you need to have a probabilistic model of the output of which it is an example.
You can construct an equivalent to $H(n)$ if $H$ includes a model of the generator.
If your version of $H(n)$ is based on arbitrary compression algorithm, then in general $H(n) < H(2^n)$ (using your notation of $n$ as seed and $2^n$ as output), because any good PRNG will generate hard-to-compress output. That is the opposite result from the proof you were expecting.
If your version of $H(n)$ is based on brute-forcing the seed value from the PRNG, then $H(n) = H(2^n)$, because all solutions resolve to finding the correct seed. So the main issue you face is that a PRNG does not generate entropy, in the strict cryptographic sense of increasing the number of bits of knowledge required to predict the system. Instead, if you assume the attacker knows how the PRNG functions, they only ever have to guess the seed value to make an attack.
You could maybe turn this into a proof, but as you see it all hinges on the definition of $H$, and with $H$ based on guessing the seed, the "proof" is a trivial by-definition statement.
If you want to assess a PRNG's qualities, you do not measure entropy. Measurements close to the concept of entropy involve taking statistical tests of the PRNG output - there are several test suites for this, including FIPS and Dieharder. Instead of checking entropy, these tests check for predictable bias in a PRNG's output. For cryptographic use, you also want to be sure that the output of the PRNG does not leak state information, and you might also be concerned about attacks that try to extract or manipulate state.
I want to figure out the trace of gamma matrices involving $\gamma^{(d+1)}$ in the even $d$-dimensional case.
First define $\gamma^{(d+1)}$ as \begin{align} \gamma^{(d+1)} = \gamma^1 \gamma^2 \cdots \gamma^d \end{align} What I want to obtain is \begin{align} tr[ \gamma^{(d+1)} \gamma^{\nu_1 \cdots \nu_n}] = \textrm{something} \end{align} and, restricting to $0 \leq n \leq d$, I think there is some formula for this.
What I know is \begin{align} tr[\gamma^{(d+1)}]=0 \end{align} I think there is some general formula of the form \begin{align} \gamma^{(d+1)} \gamma^{\nu_1 \cdots \nu_n} =\cdots \end{align}
If you know the formula, please let me know. Actually, I know that for $d=4$, $tr[\gamma^5 \gamma^{\mu_1} \gamma^{\mu_2}\gamma^{\mu_3}\gamma^{\mu_4}] = -4i\epsilon^{\mu_1 \mu_2 \mu_3\mu_4}$, and of course the product of $6$ gamma matrices ($n>d=4$) can be handled recursively, but I am not sure about the generalization.
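For what it's worth, the general statement I believe is being asked for is the following (the overall normalization and factors of $i$ depend on conventions for the metric signature and for $\epsilon$, so treat the prefactor as convention-dependent):
\begin{align}
tr[\gamma^{(d+1)} \gamma^{\nu_1 \cdots \nu_n}] &= 0 \quad \textrm{for } 0 \leq n < d, \\
tr[\gamma^{(d+1)} \gamma^{\nu_1 \cdots \nu_d}] &\propto \epsilon^{\nu_1 \cdots \nu_d}\, tr[\mathbb{1}] = 2^{d/2}\, \epsilon^{\nu_1 \cdots \nu_d}.
\end{align}
The reason is that $\gamma^{(d+1)} \gamma^{\nu_1 \cdots \nu_n}$ is, up to a constant, the antisymmetrized product $\gamma^{\mu_1 \cdots \mu_{d-n}}$ of the complementary indices, and the trace of any nontrivial antisymmetrized product of gamma matrices vanishes; only the $n=d$ case leaves the identity behind.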
Question: Studies show that 68 percent of people who eat fruits and vegetables have a lower rate of colon cancer than those who eat little of these foods. Fruits and vegetables are rich in "antioxidants" such as vitamins A, C, and E. Will taking antioxidants help prevent colon cancer? A medical experiment studied this question with 864 people who were at risk of colon cancer and were given vitamins A, C, and E daily. After four years, the researchers found that 602 of them showed no sign of colon cancer. Do antioxidants prevent cancer?
Given information
\(n=864,\,\,x=602\)
Appropriate formula to be used
\[z=\frac{\hat{p}-p}{\sqrt{\frac{p\left( 1-p \right)}{n}}}\]
Solution and interpretation
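A minimal R sketch of the test implied by the formula above, using the given numbers and taking the hypothesized proportion to be \(p = 0.68\) (an assumption based on the 68 percent figure in the problem statement):

# One-sample z test for a proportion: n = 864 at-risk subjects, x = 602
# showed no sign of colon cancer after four years; hypothesized p = 0.68
n <- 864
x <- 602
p <- 0.68

p_hat <- x / n                                # about 0.697
z <- (p_hat - p) / sqrt(p * (1 - p) / n)      # about 1.06
p_value <- 2 * pnorm(-abs(z))                 # two-sided p about 0.29
c(p_hat = p_hat, z = z, p_value = p_value)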
This may be a simple question, but I have not been able to find an adequate discussion in any source that quite answers it.
In many cases in quantum mechanics, traces are evaluated using the discrete spectrum of the Hamiltonian: $Tr[A] = \sum \langle n|A|n\rangle$. Is there a generalization for a Hamiltonian with a continuous spectrum? As a specific example, take $H=\frac{p^2}{2}$, the non-relativistic free particle Hamiltonian. The eigenvalues of the Hamiltonian are just momentum eigenstates, with corresponding energies $E(p) = p^2/2$. So we could write the energy eigenstates as $|E,\pm\rangle$, where $E$ ranges from 0 to $\infty$ and plus/minus denotes right/left moving particles.
Say I want to calculate the partition function, $Z(\beta) = Tr[e^{-\beta H}]$. If I try this naively by expanding the trace in the momentum basis, I get $$Z(\beta) = \int_{\mathbb{R}} dp \langle p | e^{-\beta p^2/2} | p \rangle = \int dp e^{-\beta p^2 /2} \delta(0) = \delta(0)\sqrt{\frac{2\pi}{\beta}}.$$
On the other hand, if I expand in the energy eigenbasis, I would naively get $$Z(\beta) = \sum\limits_{s\in\{+,-\}}\int_{\mathbb{R}\geq0} dE \langle E,s| e^{-\beta H} | E,s \rangle = 2\int dE e^{-\beta E} \delta(0) = \frac{2}{\beta}\delta(0),$$ which is not the same as above.
My suspicion of what went wrong in this particular calculation is that the measure for evaluating the trace in the energy basis was incorrect (it might be possible to see this by changing variables in the momentum basis integral), but I am not sure. My second suspicion is that rigged Hilbert space formalism may be able to clear up the ambiguity. Regardless, it would be useful to see under what conditions an integral-type trace as above exists and is well-defined.
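One way to make the suspected measure issue concrete (my own sketch, not a full rigged-Hilbert-space treatment): changing variables $E = p^2/2$, so $dp = dE/\sqrt{2E}$ on each branch, shows the energy-basis sum needs a density-of-states factor. If the energy eigenstates are normalized as $\langle E,s|E',s'\rangle=\delta(E-E')\delta_{ss'}$, they are related to momentum eigenstates by $|E,\pm\rangle = (2E)^{-1/4}\,|p=\pm\sqrt{2E}\,\rangle$, and the two computations agree once the Jacobian is included:
$$Z(\beta)=\int_{\mathbb{R}}dp\,e^{-\beta p^2/2}\,\delta_p(0)
=\sum_{s=\pm}\int_0^\infty \frac{dE}{\sqrt{2E}}\,e^{-\beta E}\,\delta_p(0).$$
Note also that the two $\delta(0)$'s in the question are deltas in different variables ($\delta_p(0)$ versus $\delta_E(0)$), which is another reason the naive results cannot be compared directly.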
Earlier I have dealt with exponential functions multiplied by the unit step function. But the energy and power of the exponential function alone come out to be infinite when I take the limits of the integrals from $-\infty$ to $+\infty$. How can I find its energy or power if the signal is not multiplied by the unit step function?
An ideal exponential signal $x(t)=e^{at}$, which extends from $-\infty$ to $\infty$, has infinite energy and infinite power, as for real $a >0$ (and similarly for real $a < 0$) you have
$$ \mathcal{E}_x = \int_{-\infty}^{\infty} e^{2at} dt = \lim_{t \to \infty} \frac{1}{2a} e^{2a t} \to \infty $$
and
$$ \mathcal{P}_x = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} e^{2at} dt = \lim_{T \to \infty} \frac{e^{a T}-e^{-aT}}{2aT} \to \infty $$
Therefore it's neither an energy nor a power signal. Note that it's not a practical signal and exists only for mathematical purposes.
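A small R sketch (illustrative, with $a=1$) showing the truncated energy $\int_{-T/2}^{T/2} e^{2at}\,dt$ blowing up as the window grows, consistent with the limit above:

# Truncated energy of x(t) = exp(a t) over [-T/2, T/2] for a = 1:
# grows without bound as T increases, so the signal has infinite energy
a <- 1
energy <- function(T) (exp(a * T) - exp(-a * T)) / (2 * a)
sapply(c(5, 10, 20, 40), energy)   # rapidly diverging values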
knitr::opts_chunk$set(echo = TRUE)
fname <- "simulations-ols-var.rda"
sims <- 2000
rerun <- TRUE
if (!rerun) {load(fname)}
This document exposes the properties of different variance estimators using DeclareDesign and estimatr. More details about the variance estimators, with references, can be found in the mathematical notes.
library(DeclareDesign)
library(tidyverse)
library(knitr)
Under simple conditions with homoskedasticity (i.e., all errors are drawn from a distribution with the same variance), the classical estimator of the variance of OLS should be unbiased. In this section I demonstrate this to be true using DeclareDesign and estimatr.
First, let's take a simple set up:
$$ \begin{aligned} \mathbf{y} &= \mathbf{X}\beta + \epsilon, \ \epsilon_i &\overset{i.i.d.}{\sim} N(0, \sigma^2). \end{aligned} $$
For our simulation, let's have a constant and one covariate, so that $\mathbf{X} = [\mathbf{1}, \mathbf{x_1}]$, where $\mathbf{x_1}$ is a column vector of a covariate drawn from a standard normal distribution. Let's also assume that our covariates are fixed, rather than stochastic. Let's draw the data we will use.
set.seed(41)
dat <- data.frame(x = rnorm(50))
The error distribution

$$ \epsilon_i \overset{i.i.d.}{\sim} N(0, \sigma^2) $$

encodes the assumption of homoskedasticity. Because of these homoskedastic errors, we know that the true variance of the coefficients with fixed covariates is

$$ \mathbb{V}[\widehat{\beta}] = \sigma^2 (\mathbf{X}^\top \mathbf{X})^{-1}, $$

where I hide conditioning on $\mathbf{X}$ for simplicity.
Let's compute the true variance for our dataset.
sigmasq <- 4
# Build the X matrix with intercept
Xmat <- cbind(1, dat$x)
# Invert XtX
XtX_inv <- solve(crossprod(Xmat))
# Get full variance covariance matrix
true_var_cov_mat <- sigmasq * XtX_inv
But for this example, we are only going to focus on the variance for the covariate, not the intercept, so let's store that variance.
true_varb <- true_var_cov_mat[2, 2]
true_varb
Now, using DeclareDesign, let's specify the rest of the data generating process (DGP). Let's set $\beta = [0, 1]^\top$, so that the true DGP is $\mathbf{y} = \mathbf{x_1} + \epsilon$.
simp_pop <- declare_population(
  epsilon = rnorm(N, sd = 2),
  y = x + epsilon
)
Now let's tell DeclareDesign that our target, our estimand, is the true variance.
varb_estimand <- declare_estimand(true_varb = true_varb)
Our estimator for this estimand will be the classical OLS variance estimator, which we know should be unbiased:
$$ \widehat{\mathbb{V}[\widehat{\beta}]} = \frac{\mathbf{e}^\top\mathbf{e}}{N - K} (\mathbf{X}^\top \mathbf{X})^{-1}, $$
where the residuals $\mathbf{e} = \mathbf{y} - \mathbf{X}\widehat{\beta}$, $N$ is the number of observations, and $K$ is the number of regressors---two in our case. We can easily get this estimate of the variance by squaring the standard error we get out from lm_robust in estimatr. Let's tell DeclareDesign to use that estimator and get the coefficient on the $\mathbf{x}_1$ variable.
lmc <- declare_estimator(
  y ~ x,
  model = lm_robust,
  se_type = "classical",
  estimand = varb_estimand,
  term = "x"
)
Now, we want to test for a few results using Monte Carlo simulation. Our main goal is to show that our estimated variance is unbiased for the true variance (our estimand). We can do this by comparing the mean of our estimated variances across our Monte Carlo simulations to the true variance. We can also show that the standard error of our coefficient estimate is the same as the standard deviation of the sampling distribution of our coefficient. Lastly, we also measure the coverage of our 95 percent confidence intervals across simulations to test whether they cover the true coefficient 95 percent of the time.
Let's first set up the design and our diagnosands.
# First declare all the steps of our design, starting with our fixed data
classical_design <- declare_population(dat) + simp_pop + varb_estimand + lmc

# Declare a set of diagnosands that help us check if we have unbiasedness
my_diagnosands <- declare_diagnosands(
  `Bias of Estimated Variance` = mean(std.error^2 - estimand),
  `Bias of Standard Error` = mean(std.error - sd(estimate)),
  `Coverage Rate` = mean(1 <= conf.low & 1 >= conf.high),
  `Mean of Estimated Variance` = mean(std.error^2),
  `True Variance` = estimand[1],
  keep_defaults = FALSE
)
Now let's run the simulations!
set.seed(42)
dx1 <- diagnose_design(
  classical_design,
  sims = sims,
  diagnosands = my_diagnosands
)
kable(reshape_diagnosis(dx1, digits = 3))
Our diagnosands can help us see if there is any bias. As we can see the bias is very close to zero. Because the standard error of the bias is much larger than the estimated bias, we can be reasonably certain that the only reason the bias is not exactly zero is due to simulation error. We can also see the unbiasedness visually, using a density plot of estimated variances with a line for the true variance.
# Get cuts to add label for true variance
dx1$simulations <- dx1$simulations %>% mutate(var = std.error^2)
sumvar <- summary(dx1$simulations$var)
ggplot(dx1$simulations, aes(x = var)) +
  geom_density() +
  geom_vline(xintercept = true_varb, size = 1, color = "red") +
  xlab("Estimated Variance") +
  theme_bw()
Let's use the same fixed data set-up, but introduce heteroskedasticity. In this case, the variance of the errors is different across units:
$$ \epsilon_i \sim N(0, \sigma_i^2), $$
where $\sigma^2_i \neq \sigma^2_j$ for some units $i$ and $j$. If the variance of the errors is not independent of the regressors, the "classical" variance will be biased and inconsistent. Meanwhile, heteroskedasticity-consistent variance estimators, such as the HC2 estimator, are consistent and normally less biased than the "classical" estimator. Let's demonstrate this using DeclareDesign. First, let's specify the variance of the errors to be strongly correlated with $x$.
dat <- mutate(dat,
  noise_var = 1 + (x - min(x))^2
)
ggplot(dat, aes(x, noise_var)) +
  geom_point() +
  ggtitle("The variance of errors increases with x")
Note that the general form of the true variance with fixed covariates is:
$$ \mathbb{V}[\widehat{\beta}] = (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{\Phi} \mathbf{X} (\mathbf{X}^\top \mathbf{X})^{-1}, $$
where $\mathbf{\Phi}$ is the variance covariance matrix of the errors, or $\mathbf{\Phi} = \mathbb{E}[\epsilon\epsilon^\top]$. In the above case with homoskedasticity, we assumed $\mathbf{\Phi} = \sigma^2 \mathbf{I}$ and were able to simplify. Now, as in the standard set up for heteroskedasticity, we set $\mathbf{\Phi}$ to be a diagonal matrix where noise_var, the variance of each unit's error, is on the diagonal, like so:
$$ \mathbf{\Phi} = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{bmatrix} $$
Using that error structure and the error for each unit, we can estimate the true variance.
Xmat <- with(dat, cbind(1, x))
XtX_inv <- solve(crossprod(Xmat))
varb <- tcrossprod(XtX_inv, Xmat) %*% diag(with(dat, noise_var)) %*%
  Xmat %*% XtX_inv
true_varb_het <- varb[2, 2]
true_varb_het
Now let's use DeclareDesign to test whether HC2 is less biased in this example than classical standard errors. HC2 is the estimatr default; we'll also throw in HC1, which is Stata's default robust standard error estimator. I'm going to make a "designer," which is a function that returns a design.
het_designer <- function(N) {
  dat <- fabricate(N = N, x = rnorm(N), noise_var = 1 + (x - min(x))^2)

  # Get true variance for this data
  Xmat <- with(dat, cbind(1, x))
  XtX_inv <- solve(crossprod(Xmat))
  varb <- tcrossprod(XtX_inv, Xmat) %*% diag(with(dat, noise_var)) %*%
    Xmat %*% XtX_inv
  true_varb_het <- varb[2, 2]

  # Population function now has heteroskedastic noise
  simp_pop <- declare_population(
    dat,
    epsilon = rnorm(N, sd = sqrt(noise_var)),
    y = x + epsilon
  )

  varb_het_estimand <- declare_estimand(true_varb_het = true_varb_het)

  # Now we declare all three estimators
  lm1 <- declare_estimator(
    y ~ x, model = lm_robust, se_type = "classical",
    estimand = varb_het_estimand, term = "x", label = "classical"
  )
  lm2 <- declare_estimator(
    y ~ x, model = lm_robust, se_type = "HC1",
    estimand = varb_het_estimand, term = "x", label = "HC1"
  )
  lm3 <- declare_estimator(
    y ~ x, model = lm_robust, se_type = "HC2",
    estimand = varb_het_estimand, term = "x", label = "HC2"
  )

  # We return the design so we can diagnose it
  return(simp_pop + varb_het_estimand + lm1 + lm2 + lm3)
}
So let's use the same diagnosands as above to test the properties of our estimators with heteroskedasticity.
# Create a design using our template and the data we have been using
het_design <- het_designer(N = 50)
dx2 <- diagnose_design(
  het_design,
  diagnosands = my_diagnosands,
  sims = sims
)
kable(reshape_diagnosis(dx2))
The bias for the HC2 errors is much closer to zero, whereas the bias for the classical error is much larger, especially when compared to the standard error of the bias diagnosand. How does this bias change as the sample size changes? As the HC2 variance estimate is consistent under heteroskedasticity, it should converge to zero.
designs <- expand_design(het_designer, N = c(100, 200, 300, 500, 1000, 2500))
set.seed(42)
dx3 <- diagnose_design(designs, sims = sims, diagnosands = my_diagnosands)
ggplot(dx3$diagnosands_df,
       aes(x = N, y = `Bias of Estimated Variance`, color = estimator_label)) +
  geom_point() +
  geom_line() +
  geom_hline(yintercept = 0, linetype = 2, color = "grey") +
  labs(color = "Estimator") +
  theme_bw()
As you can see, the HC2 variance tends to have less bias and is consistent, converging to the true value as the sample size increases. The classical standard error estimator is neither unbiased nor consistent. The HC1 variance is also "robust" to heteroskedasticity but exhibits greater bias than HC2 in this example.
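To make the estimators concrete, here is a hand-rolled sketch of the classical, HC1, and HC2 variance formulas for the slope (my own illustration; in practice lm_robust computes these for you):

# Hand-rolled variance estimators for the slope in y ~ x (illustration only)
n <- 50; k <- 2
x <- rnorm(n)
y <- x + rnorm(n, sd = sqrt(1 + (x - min(x))^2))

X <- cbind(1, x)
XtX_inv <- solve(crossprod(X))
e <- y - X %*% (XtX_inv %*% crossprod(X, y))   # OLS residuals
h <- diag(X %*% XtX_inv %*% t(X))              # leverages

classical <- sum(e^2) / (n - k) * XtX_inv
meat <- function(w) t(X) %*% (X * w)           # computes X' diag(w) X
hc1 <- n / (n - k) * XtX_inv %*% meat(as.numeric(e^2)) %*% XtX_inv
hc2 <- XtX_inv %*% meat(as.numeric(e^2) / (1 - h)) %*% XtX_inv

c(classical = classical[2, 2], HC1 = hc1[2, 2], HC2 = hc2[2, 2])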
save(dx1, dx2, dx3, file = fname)
# Stochastic data
# Clusters
There is a lot of experimental research activity into whether the electron has an electric dipole moment. The electron, however, has a net charge, and so its dipole moment $$ {\bf \mu}= \int ({\bf r}- {\bf r}_0)\rho({\bf r}) \,d^3r $$ depends on the chosen origin ${\bf r}_0$. Indeed, if one takes moments about the center of charge, then - by definition - the electric dipole moment is zero.
Now I know that what is really meant by the experimentalists is that their "electric dipole moment" corresponds to adding to the Dirac Lagrangian a term proportional to $$ \frac 12 \bar \psi \sigma_{\mu \nu}\psi\, ^*F^{\mu\nu}, $$ where $^*F^{\mu\nu} = \frac 12 \epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}$ is the dual Maxwell field. So I have two questions:
a) What point ${\bf r}_0$ does this correspond to? I'd guess that it is something like the center of energy of the electron's wavepacket measured in its rest frame. Is there a way to see this?
b) If my guess in (a) is correct, what would happen if the electron were massless? There is then no rest frame, and the center of energy is frame-dependent. I imagine therefore that the electric dipole moment would have to be zero. Is this correct? Certainly $\bar \psi \sigma_{\mu \nu}\psi$ is zero for a purely left or right helicity particle obeying a Weyl equation, as $\gamma_0 [\gamma_\mu,\gamma_\nu]$ is off-diagonal in the helicity basis.
I have questions in the section 19.1 of Peskin and Schroeder.
\begin{equation} \psi = \begin{pmatrix} \psi_+ \\ \psi_- \end{pmatrix} \tag{19.7} \end{equation}
The subscripts indicate the $\gamma^5$ eigenvalue.
Below we'll show that there are situations where the axial current conservation law $$ \partial_{\mu} j^{\,\mu 5} =0 $$ is violated, and the integrated version of $$ \partial_{\mu} j^{\,\mu 5} = \frac{e}{2\pi} \epsilon^{\mu\nu}F_{\mu\nu} \tag{19.18} $$ holds, where the totally antisymmetric symbol $\epsilon^{\mu \nu}$ is defined so that $\epsilon^{01}=+1$ on page 653.
Let us analyze this problem by thinking about fermions in one space dimension in a background $A^1$ field that is constant in space and has a very slow time dependence. We will assume that the system has a finite length $L$, with the periodic boundary conditions.
So $A^1(t,0)=A^1(t,L) $, and also $ \psi(t,0)=\psi(t,L) $.
The single-particle eigenstates of $H$ have energies \begin{alignat}{2} \psi_+ : \quad \quad & E_n && =+(k_n -eA^1), \\ \psi_- : \quad \quad & E_n && =-(k_n -eA^1). \tag{19.36} \end{alignat}
Now we consider the slow shift of $A^1$.
If $A^1$ changes by the finite amount $$ \Delta A^1 = \frac{2\pi}{eL}, \tag{19.37}$$ where $ \Delta A^1 \gt 0 $,
the spectrum of $H$ returns to its original form. In this process, each level of $\psi_+$ moves down to the next position, and each level of $\psi_-$ moves up to the next position. The occupation numbers of levels should be maintained in this adiabatic process. Thus, remarkably, one right-moving fermion ($\psi_+$) disappears from the vacuum and one extra left-moving fermion ($\psi_-$) appears.
So $$ \Delta N_R=-1, \quad \Delta N_L=1. $$
At the same time, \begin{alignat}{2} \int d^2x \frac{e}{2\pi} \epsilon^{\mu\nu}F_{\mu\nu} &=&& \int dt dx \frac{e}{\pi} \partial_0 A^1 \\ &=&& \frac{e}{\pi} L(-\Delta A^1) \\ &=&& -2 \tag{19.38} \end{alignat}
where I've rectified a misprint on the LHS of the first line.
where we have inserted (19.37) in the last line. Thus the integrated form of the anomalous nonconservation equation (19.18) is indeed satisfied: $$ \Delta N_R - \Delta N_L = \int d^2x \frac{e}{2\pi} \epsilon^{\mu\nu}F_{\mu\nu} . \tag{19.39} $$
For $ \Delta N_R - \Delta N_L =-2 $.
From $$ \int d^2x \partial_{\mu} j^{\,\mu 5} =\Delta N_R - \Delta N_L \tag{19.30} $$ we obtain $$ \int d^2x \partial_{\mu} j^{\,\mu 5} = \int d^2x \frac{e}{2\pi} \epsilon^{\mu\nu}F_{\mu\nu}. $$
Question 1: I've checked all the calculations leading to (19.38). But I cannot understand the minus sign in the second and the third lines of (19.38). Where does it come from?
Question 2:
In computing the changes in the separate fermion numbers, we have assumed that the vacuum cannot change the charge at large negative energies. This prescription is gauge invariant, but it leads to the nonconservation of the axial vector current.
It follows that if the vacuum changed the charge at large negative energies, this prescription would not be gauge invariant.
How come? Please would you explain?
This post imported from StackExchange Physics at 2017-02-13 08:16 (UTC), posted by SE-user GotchaP
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/$\psi$ have been measured with ALICE for Pb-Pb collisions ...
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector
(Elsevier, 2014-11-10)
This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector
(Elsevier, 2014-11-10)
Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
Difference between revisions of "Group cohomology of dihedral group:D8"
Latest revision as of 00:27, 29 May 2013
This article gives specific information, namely, group cohomology, about a particular group, namely: dihedral group:D8.

Family contexts
Family name: dihedral group of degree 4, order 8. Information on group cohomology of family: group cohomology of dihedral groups.

Homology groups for trivial group action

Facts to check against (homology group for trivial group action):
First homology group: first homology group for trivial group action equals tensor product with abelianization.
Second homology group: formula for second homology group for trivial group action in terms of Schur multiplier and abelianization (Hopf's formula for Schur multiplier).
General: universal coefficients theorem for group homology; homology group for trivial group action commutes with direct product in second coordinate; Kunneth formula for group homology.

Over the integers
The homology groups over the integers are given as follows:

<math>H_q(D_8;\mathbb{Z}) = \left\lbrace \begin{array}{rl} \mathbb{Z}, & q = 0 \\ (\mathbb{Z}/2\mathbb{Z})^{(q + 3)/2}, & q \equiv 1 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{(q + 1)/2} \oplus \mathbb{Z}/4\mathbb{Z}, & q \equiv 3 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{q/2}, & q \mbox{ even }, q > 0 \\ \end{array} \right.</math>

The first few homology groups are given below (matching the GAP output at the end of this page): <math>H_0 = \mathbb{Z}</math>, <math>H_1 = (\mathbb{Z}/2\mathbb{Z})^2</math>, <math>H_2 = \mathbb{Z}/2\mathbb{Z}</math>, <math>H_3 = (\mathbb{Z}/2\mathbb{Z})^2 \oplus \mathbb{Z}/4\mathbb{Z}</math>, <math>H_4 = (\mathbb{Z}/2\mathbb{Z})^2</math>, <math>H_5 = (\mathbb{Z}/2\mathbb{Z})^4</math>.
Over an abelian group
The first few homology groups with coefficients in an abelian group <math>M</math> are given below: <math>H_0 = M</math>, <math>H_1 = M/2M \oplus M/2M</math>, <math>H_2 = M/2M \oplus \operatorname{Ann}_M(2) \oplus \operatorname{Ann}_M(2)</math>.
Cohomology groups for trivial group action

Facts to check against (cohomology group for trivial group action):
First cohomology group: first cohomology group for trivial group action is naturally isomorphic to group of homomorphisms.
Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization.
In general: dual universal coefficients theorem for group cohomology relating cohomology with arbitrary coefficients to homology with coefficients in the integers; cohomology group for trivial group action commutes with direct product in second coordinate; Kunneth formula for group cohomology.

Over the integers
The first few cohomology groups over the integers are given below: <math>H^0 = \mathbb{Z}</math>, <math>H^1 = 0</math>, <math>H^2 = \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}</math>, <math>H^3 = \mathbb{Z}/2\mathbb{Z}</math>.

Over an abelian group
The first few cohomology groups with coefficients in an abelian group <math>M</math> are: <math>H^0 = M</math>, <math>H^1 = \operatorname{Ann}_M(2) \oplus \operatorname{Ann}_M(2)</math>, <math>H^2 = M/2M \oplus M/2M \oplus \operatorname{Ann}_M(2)</math>.
Cohomology ring with coefficients in integers

PLACEHOLDER FOR INFORMATION TO BE FILLED IN

Second cohomology groups and extensions

Schur multiplier
The Schur multiplier of dihedral group:D8 is cyclic group:Z2 (see the Baer invariants below). This has implications for projective representation theory of dihedral group:D8.
Schur covering groups
The three possible Schur covering groups for dihedral group:D8 are: dihedral group:D16, semidihedral group:SD16, and generalized quaternion group:Q16. For more, see second cohomology group for trivial group action of D8 on Z2, where these correspond precisely to the stem extensions.
Second cohomology groups for trivial group action
Group acted upon: cyclic group:Z2 (order 2, second part of GAP ID: 1). Second cohomology group for trivial group action: elementary abelian group:E8, of order 8. Extensions: direct product of D8 and Z2, SmallGroup(16,3), nontrivial semidirect product of Z4 and Z4, dihedral group:D16, semidihedral group:SD16, generalized quaternion group:Q16; 6 extensions up to pseudo-congruence. Cohomology information: second cohomology group for trivial group action of D8 on Z2.

Group acted upon: cyclic group:Z4 (order 4, second part of GAP ID: 1). Second cohomology group for trivial group action: elementary abelian group:E8, of order 8. Extensions: direct product of D8 and Z4, nontrivial semidirect product of Z4 and Z8, SmallGroup(32,5), central product of D16 and Z4, SmallGroup(32,15), wreath product of Z4 and Z2; 6 extensions up to pseudo-congruence. Cohomology information: second cohomology group for trivial group action of D8 on Z4.

Group acted upon: Klein four-group (order 4, second part of GAP ID: 2). Second cohomology group for trivial group action: elementary abelian group:E64, of order 64. Extensions: 11 up to pseudo-congruence. Cohomology information: second cohomology group for trivial group action of D8 on V4.

Baer invariants
Subvariety of the variety of groups, general name of Baer invariant, and value for this group:
abelian groups: Schur multiplier: cyclic group:Z2
groups of nilpotency class at most two: 2-nilpotent multiplier (value not filled in)
groups of nilpotency class at most three: 3-nilpotent multiplier (value not filled in)
any variety of groups containing all groups of nilpotency class at most three: (value not filled in)

GAP implementation

Computation of integral homology
The homology groups for trivial group action with coefficients in the integers can be computed in GAP using the GroupHomology function in the HAP package, which can be loaded by the command LoadPackage("hap"); if it is installed but not loaded. The function outputs the orders of cyclic groups whose direct product gives the group in question (more technically, it outputs the elementary divisors of the homology or cohomology group that we are trying to compute).
Here are computations of the first few homology groups:
Computation of first homology group

gap> GroupHomology(DihedralGroup(8),1);
[ 2, 2 ]
The way this is to be interpreted is that the first homology group (the abelianization) is the direct sum of cyclic groups of the orders listed, so in this case we get that <math>H_1(D_8;\mathbb{Z})</math> is <math>\mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}</math>, which is the Klein four-group.
Computation of second homology group

gap> GroupHomology(DihedralGroup(8),2);
[ 2 ]

Computation of first few homology groups
To compute the first eight homology groups, do:
gap> List([1,2,3,4,5,6,7,8],i->[i,GroupHomology(DihedralGroup(8),i)]);
[ [ 1, [ 2, 2 ] ], [ 2, [ 2 ] ], [ 3, [ 2, 2, 4 ] ], [ 4, [ 2, 2 ] ],
  [ 5, [ 2, 2, 2, 2 ] ], [ 6, [ 2, 2, 2 ] ], [ 7, [ 2, 2, 2, 2, 4 ] ],
  [ 8, [ 2, 2, 2, 2 ] ] ]
Integration by Parts (IBP) is a special method for integrating products of functions. For example, the following integrals
\[{\int {x\cos xdx} ,\;\;}\kern0pt{\int {{x^2}{e^x}dx} ,\;\;}\kern0pt{\int {x\ln xdx} ,}\]
in which the integrand is the product of two functions can be solved using integration by parts.
This method is based on the product rule for differentiation.
Suppose that \(u\left( x \right)\) and \(v\left( x \right)\) are differentiable functions. Then the product rule in terms of differentials gives us:
\[{d\left( {uv} \right) = udv + vdu.}\]
Rearranging this rule, we can write
\[{udv = d\left( {uv} \right) – vdu.}\]
Integrating both sides with respect to \(x\) results in
\[{{\int {{{u}{dv}}} }= uv – {\int {vdu}} .}\]
This is the integration by parts formula. The goal when using this formula is to replace one integral (on the left) with another (on the right), which can be easier to evaluate.
The key thing in integration by parts is to choose \(u\) and \(dv\) correctly.
The acronym ILATE is good for picking \(u.\) ILATE stands for
I Inverse Trigonometric Functions \(\arcsin x, \arctan x, \ldots\) L Logarithmic Functions \(\ln x, \log x, {\log_2x}, \ldots\) A Algebraic Functions \(x, {x^2}, {x^3}, {2x^5}, \ldots\) T Trigonometric Functions \(\sin x, \cos x, \tan x, \ldots\) E Exponential Functions \({e^x}, {e^{2x}}, {2^x}, {3^{-x}}, \ldots\)
The closer a function is to the top, the more likely that it should be used as \(u.\)
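Before the worked problems, here is a quick machine check of the formula (a sketch using Python's sympy library, which is our own addition and not part of the method itself):

# Sketch: verify integration by parts on Example 1 below,
# with u = x and dv = sin(3x - 2) dx.
from sympy import symbols, sin, integrate, simplify

x = symbols('x')
u = x
v = integrate(sin(3*x - 2), x)          # v = -cos(3x - 2)/3
rhs = u*v - integrate(v, x)             # u*v - integral of v du (here du = dx)
direct = integrate(x*sin(3*x - 2), x)   # direct evaluation
print(simplify(rhs - direct))           # 0: both antiderivatives agree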
Solved Problems
Click a problem to see the solution.
Example 1. Compute \(\int {x\sin \left( {3x – 2} \right)dx}.\)
Example 2. Integrate \(\int {{x}{{{\csc }^2}x}dx} \) by parts.
Example 3. Evaluate the integral \(\int {x\cos 2xdx}.\)
Example 4. Integrate \(\int {\ln xdx}.\)
Example 5. Evaluate the integral \(\int {\large{\frac{{\ln x}}{{{x^2}}}}\normalsize dx}.\)
Example 6. Evaluate the integral \(\int {{{\log }_2}xdx}.\)
Example 7. Compute the integral \(\int {x{2^x}dx}.\)
Example 8. Evaluate the integral \(\int {x{e^{ – x}}dx}.\)
Example 9. Compute the integral \(\int {{x^2}{e^x}dx}.\)
Example 10. Calculate the integral \(\int {\arcsin xdx}.\)
Example 11. Find the integral \(\int {\arctan xdx}.\)
Example 12. Find the integral \(\int {{e^x}\sin xdx}.\)
Example 13. Find the integral \(\int {{e^{ – x}}\sin xdx}.\)
Example 14. Compute the integral \(\int {{{\sin }^2}xdx}.\)
Example 15. Compute the integral \(\int {{{\cos }^2}xdx}.\)
Example 16. Find a reduction formula for \(\int {{{\sin }^n}xdx} ,\) \(n \ge 2.\)

Example 1. Compute \(\int {x\sin \left( {3x – 2} \right)dx}.\)
Solution.
We use integration by parts: \(\int {udv} = uv – \int {vdu} .\) Let \(u = x,\) \(dv = \sin \left( {3x – 2} \right)dx.\) Then
\[
{v = \int {\sin \left( {3x – 2} \right)dx} } = { – \frac{1}{3}\cos \left( {3x – 2} \right),\;\;}\kern-0.3pt {du = dx.} \]
Hence, the integral is
\[
{\int {x\sin \left( {3x – 2} \right)dx} } = {{ – \frac{x}{3}\cos \left( {3x – 2} \right) }}-{{ \int {\left( { – \frac{1}{3}\cos \left( {3x – 2} \right)} \right)dx} }} = {{ – \frac{x}{3}\cos \left( {3x – 2} \right) }+{ \frac{1}{3}\int {\cos \left( {3x – 2} \right)dx} }} = {{ – \frac{x}{3}\cos \left( {3x – 2} \right) }}+{{ \frac{1}{3} \cdot \frac{1}{3}\sin\left( {3x – 2} \right) + C }} = {{\frac{1}{9}\sin\left( {3x – 2} \right) }-{ \frac{x}{3}\cos \left( {3x – 2} \right) }+{ C.}} \]

Example 2. Integrate \(\int {{x}{{{\csc }^2}x}dx} \) by parts.
Solution.
We can choose \(u = x\) because \(du = dx\) is simpler. Then
\[{dv = {\csc ^2}xdx }= \frac{{dx}}{{{{\sin }^2}x}},\]
so we can easily integrate it and find the function \(v:\)
\[{v = \int {dv} }={ \int {\frac{{dx}}{{{{\sin }^2}x}}} }={ – \cot x.}\]
Apply the integration by parts formula:
\[{\int {udv} = uv – \int {vdu} ,}\;\; \Rightarrow {\int {\underbrace x_u\underbrace {{{\csc }^2}xdx}_{dv}} }={ \underbrace x_u\underbrace {\left( { – \cot x} \right)}_v }-{ \int {\underbrace {\left( { – \cot x} \right)}_v\underbrace {dx}_{du}} }={ – x\cot x }+{ \int {\cot xdx} .}\]
The last integral is well known:
\[\int {\cot xdx} = \ln \left| {\sin x} \right| + C.\]
Hence
\[{\int {x{{\csc }^2}xdx} }={ – x\cot x }+{ \ln \left| {\sin x} \right| }+{ C.}\]
Example 3.Evaluate the integral \(\int {x\cos 2xdx}.\)
Solution.
We choose
\[{u = x,\;\;}\kern0pt{dv = \cos 2xdx.}\]
Hence
\[{du = dx,\;\;}\kern0pt{v = \int {\cos 2xdx} }={ \frac{1}{2}\sin 2x.}\]
Substituting these expressions into the integration by parts formula
\[\int {udv} = uv – \int {vdu} ,\]
we have
\[{\int {x\cos 2xdx} }={ x \cdot \frac{1}{2}\sin 2x – \int {\frac{1}{2}\sin 2xdx} }={ \frac{x}{2}\sin 2x – \frac{1}{2}\int {\sin 2xdx} }={ \frac{x}{2}\sin 2x – \frac{1}{2} \cdot \left( { – \frac{1}{2}\cos 2x} \right) + C }={ \frac{x}{2}\sin 2x + \frac{1}{4}\cos 2x + C.}\]
Example 4.Integrate \(\int {\ln xdx}.\)
Solution.
We are to integrate by parts: \(u = \ln x,\) \(dv = dx\) is the only choice we have for \(u\) and \(dv.\) It gives \(du = {\large\frac{1}{x}\normalsize} dx,\) \(v = \int {dx} = x.\) Then
\[
{\int {\ln xdx} } = {{x\ln x }-{ \int {x \cdot \frac{1}{x}dx} }} = {x\ln x – x }+{ C.} \]

Example 5. Evaluate the integral \(\int {\large{\frac{{\ln x}}{{{x^2}}}}\normalsize dx}.\)
Solution.
Using the ILATE rule, we can choose
\[{u = \ln x,\;\;}\kern0pt{dv = \frac{{dx}}{{{x^2}}}.}\]
Then
\[{du = \frac{{dx}}{x},\;\;}\kern0pt{v = \int {\frac{{dx}}{{{x^2}}}} = – \frac{1}{x}.}\]
Integrating by parts, we obtain
\[{\int {\frac{{\ln x}}{{{x^2}}}dx} }={ \ln x \cdot \left( { – \frac{1}{x}} \right) }-{ \int {\left( { – \frac{1}{x}} \right)\frac{{dx}}{x}} }={ – \frac{{\ln x}}{x} + \int {\frac{{dx}}{{{x^2}}}} }={ – \frac{{\ln x}}{x} – \frac{1}{x} + C.}\]
Example 6.Evaluate the integral \(\int {{{\log }_2}xdx}.\)
Solution.
To use integration by parts we rewrite the integral as follows:
\[\int {{{\log }_2}xdx} = \int {1 \cdot {{\log }_2}xdx} \]
Now we can apply the ILATE rule, that is
\[{u = {\log _2}x,\;\;}{dv = 1dx.}\]
This yields
\[{du = \frac{{dx}}{{x\ln 2}},\;\;}{v = \int {1dx} = x.}\]
Integrating by parts, we have
\[{\int {{{\log }_2}xdx} }={ x{\log _2}x – \int {x \cdot \frac{{dx}}{{x\ln 2}}} }={ x{\log _2}x – \frac{1}{{\ln 2}}\int {dx} }={ x{\log _2}x – \frac{x}{{\ln 2}} + C.}\] |
Circle-
A circle is a geometrical shape made up of the infinitely many points in a plane that are located at a fixed distance from a point called the center of the circle. The fixed distance from any of these points to the center is known as the radius of the circle.
Sector-
A sector is a portion of a circle which is enclosed between its two radii and the arc adjoining them. The most common sector of a circle is a semi-circle which represents half of a circle.
A circle containing a sector can be further divided into two regions known as a Major Sector and a Minor Sector.
In the figure below,
OPBQ is known as the Major Sector and OPAQ is known as the Minor Sector. Since "major" means large and "minor" means small, the larger region is called the major sector and the smaller region the minor sector. In a semi-circle, there is no major or minor sector.
We know that a full circle measures 360 degrees, and the area of a circle is π times the square of its radius. So if a sector of a circle of radius r has a central angle of θ (in degrees), the area of the sector is given by:
Area of sector = \(\frac{\theta }{360} \times \pi r^{2}\)
Arc-
A part of a curve lying on the circumference of a circle.
Length of an arc of a sector- The length of an arc is given as- \(\frac{\theta }{360} \times 2 \pi r\)
There are instances where the angle of a sector might not be given to you. Instead, the length of the arc is known. In such cases, you can compute the area using the relation that follows from the two formulas above: Area of sector = \(\frac{1}{2} \times l \times r\), where \(l\) is the arc length.
Derivation:
In a circle with center O and radius r, let OPAQ be a sector and θ (in degrees) be the angle of the sector.
Area of the circular region is πr².
Let this region be a sector forming an angle of 360° at the centre O.
Then, the formula for the area of a sector of a circle is obtained by the unitary method:

When the angle at the centre is 360°, the area of the sector (the complete circle) is πr².
When the angle at the centre is 1°, the area of the sector is \(\frac{\pi r^{2}}{360}\).
Thus, when the angle is θ, the area of the sector OPAQ is \(\frac{\theta }{360}\times \pi r^{2}\).

Solved Examples:

Question 1: For a given circle of radius 4 units, the angle of its sector is 45°. Find the area of the sector.

Solution: Given, radius r = 4 units
Angle θ = 45°
Area of the sector = \(\frac{\theta }{360}\times \pi r^{2}\)

= \(\frac{45}{360}\times\frac{22}{7}\times 4^{2}=6.28\;sq.units\)
Question 2: Find the area of the sector with a central angle of 30° and a radius of 9 cm.

Solution: Given,
Radius r = 9 cm
Angle θ = 30°
Area of the sector = \(\frac{\theta }{360}\times \pi r^{2}\)

= \(\frac{30}{360}\times \frac{22}{7}\times 9^{2}=21.21\;cm^{2}\)
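For readers who want to check such computations programmatically, here is a small Python sketch of the formulas above (our own addition; math.pi is used instead of the 22/7 approximation, so the results differ slightly from the worked answers):

import math

def sector_area(radius, angle_deg):
    # Area of a sector: (theta/360) * pi * r^2
    return (angle_deg / 360) * math.pi * radius**2

def arc_length(radius, angle_deg):
    # Length of the arc: (theta/360) * 2 * pi * r
    return (angle_deg / 360) * 2 * math.pi * radius

def sector_area_from_arc(radius, arc_len):
    # Area when only the arc length is known: (1/2) * l * r
    return arc_len * radius / 2

print(sector_area(4, 45))   # Question 1: about 6.28 sq. units
print(sector_area(9, 30))   # Question 2: about 21.21 cm^2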
To solve more problems on the topic and for video lessons on the topic, download Byju’s -The Learning App. |
When encrypting with RSA it is often infeasible to decrypt by just doing
$c^d \bmod n$, because for example when using the primes $(p,q)=(12553,1233)$, which are small compared to those used by banks, one would often choose the Fermat number $65537$ as the public exponent $e$; the private exponent $d$ is then $4267793$, which is a huge number to use as an exponent. How do banks etc. decrypt their data when they choose primes $p$ and $q$ which are hundreds of digits long?
First of all, to encrypt/decrypt data, institutions such as banks use block or stream ciphers, for instance AES (Advanced Encryption Standard), which are very fast compared to the RSA algorithm: hundreds of times faster for the same amount of data. But block and stream ciphers use a symmetric key (usually a 128-256 bit random number, much smaller than RSA keys), which means that both parties A and B who exchange encrypted data need the same symmetric key, and so they need a way to share this key securely.
To exchange a symmetric key an asymmetric encryption is used. RSA is an example of an asymmetric encryption system. Most asymmetric encryption systems are much slower than symmetric ciphers, mostly due to usage of big numbers (as you mentioned) in computation.
So to exchange a shared symmetric key, used to encrypt a large amount of data, we encrypt this key using the RSA public key of the recipient. Because only a small key (e.g. 256 bits long) is encrypted using RSA, this takes quite a small amount of time (e.g. 0.5-1 second). After that, the large symmetrically encrypted data and the small asymmetrically (RSA) encrypted key are sent to the recipient. On the recipient's side the opposite is done: first the symmetric key is decrypted using the RSA private key, and then the large data is decrypted using the symmetric key with a block or stream cipher like AES.
Also, usually some integrity check is done, e.g. by using a MAC (Message Authentication Code): the MAC is computed on the sender's side and checked on the recipient's side. Or, instead of a MAC, a digital signature (using e.g. DSA, the Digital Signature Algorithm) is created on the sender's side using the sender's private key and verified on the recipient's side using the sender's public key. A MAC is an integrity check value created using symmetric algorithms (e.g. AES, 3DES), while a digital signature is created using asymmetric algorithms (e.g. RSA or DSA).
To answer your question about large numbers: to compute those large exponentiations by a large modulus, arbitrary-precision arithmetic is used, with numbers having thousands of bits, i.e. around one thousand decimal digits. All the multiplications, additions, subtractions and divisions are simulated as huge-number arithmetic in memory, using algorithms similar to those we studied at school for multiplying or dividing long numbers on paper. To do a fast exponentiation, a technique called exponentiation by squaring is used instead of straightforward repeated multiplication (which could take forever). We will compute the value of C = M^E (mod N). The cost of this algorithm is proportional to the number of bits in the exponent E: it does a few big-number operations (approximately two multiplications and two divisions) per bit of E, so if E is 1024 to 2048 bits long (usual lengths for RSA), the algorithm performs around 1024 to 2048 blocks of 2 to 4 arithmetic operations on big numbers. There are two main versions of this algorithm: one iterates through the bits of the exponent left to right (MSB-first, Most Significant Bit first), the other iterates right to left (LSB-first, Least Significant Bit first) (cited from here):
// LSB Binary Exponentiation Algorithm (scans exponent's E bits right to left)
// Input: exponent E of k bits, message M
// Output: ciphertext C = M^E (mod N)
C = 1; S = M;
// Scan E's bits from LSB to MSB (right to left).
for i := 0 to k-1 do begin
    // Multiply by the current power of M.
    if (E_i = 1) then C = C * S (mod N);
    // Square the current power of M.
    S = S * S (mod N);
end;

// MSB Binary Exponentiation Algorithm (scans exponent's E bits left to right)
// Input: exponent E of k bits, message M
// Output: ciphertext C = M^E (mod N)
C = 1;
// Scan E's bits from MSB to LSB (left to right).
for i := k-1 downto 0 do begin
    // Square the result.
    C = C * C (mod N);
    // Multiply by M if E's bit is set.
    if (E_i = 1) then C = C * M (mod N);
end;
The standard algorithm used for RSA encryption and decryption is exponentiation by squaring.
The basic idea is to write the exponent out in binary. For example, for $d = 4267793$, $$\begin{aligned} 4267793 &= 10000010001111100010001_2 \\ &= 2^{22} + 2^{16} + 2^{12} + 2^{11} + 2^{10} + 2^9 + 2^8 + 2^4 + 2^0.\end{aligned}$$
Now, given some RSA ciphertext $c$, we can express the decryption operation $m = c^d$ as $$\begin{aligned} c^{4267793} &= c^{2^{22} + 2^{16} + 2^{12} + 2^{11} + 2^{10} + 2^9 + 2^8 + 2^4 + 2^0} \\ &= c^{2^{22}} c^{2^{16}} c^{2^{12}} c^{2^{11}} c^{2^{10}} c^{2^9} c^{2^8} c^{2^4} c^{2^0}.\end{aligned}$$
(Here, all the multiplications and powers are taken modulo $n$, of course, but I'm not bothering to write that out explicitly.) Thus, we just need to compute all those powers of $c$ (modulo $n$) and multiply them together (again modulo $n$, of course).
How do we do that? Well, $c^{2^0} = c^1 = c$, of course, and $c^{2^1} = c^2 = c^1 \cdot c^1$. Going further, we have $c^{2^2} = c^4 = c^2 \cdot c^2$, and $c^{2^3} = c^8 = c^4 \cdot c^4$, and so on. In fact, we have the following general formula: $$c^{2^{k+1}} = c^{2^k} \cdot c^{2^k}.$$
Thus, we can start with $c^{2^0} = c$ and repeatedly square it (modulo $n$) to obtain $c^{2^k}$ for any $k$.
To compute $c^d = c^{4267793}$, we need to do this (at least) 22 times, to obtain all the powers up to $c^{2^{22}}$, and then multiply together those $c^{2^k}$ for which the $k$-th bit of $d = 4267793$ is $1$. Thus, in total, we need to do up to $2 \cdot 22 = 44$ modular multiplications (actually fewer; but see below), much fewer than the $4267793$ multiplications we'd need for naïve exponentiation.
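A minimal Python sketch of this square-and-multiply procedure (illustrative only; in practice one would simply call Python's built-in pow(c, d, n), which uses the same technique):

def modexp(c, d, n):
    # Compute c^d mod n by repeated squaring.
    result = 1
    power = c % n                 # c^(2^0)
    while d > 0:
        if d & 1:                 # k-th bit of d is set: multiply in c^(2^k)
            result = (result * power) % n
        power = (power * power) % n   # c^(2^k) -> c^(2^(k+1))
        d >>= 1
    return result

# Check against the built-in, using the numbers from the question:
n = 12553 * 1233
assert modexp(12345, 4267793, n) == pow(12345, 4267793, n)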
There are various optimizations that can reduce the number of multiplications needed even further; see e.g. the linked Wikipedia article for details.
Ps. One potential security issue with this method is that the number of multiplications needed depends on the private exponent $d$. Under some circumstances, this could leak information about the private exponent to the attacker, if they can monitor the time needed for decryption (or signing). In fact, in some scenarios, the attacker might even be able to monitor the power consumption of the device doing the decryption, which can potentially leak even more information about which specific bits of $d$ are set.
To avoid such side channel attacks, we can modify the algorithm to always perform the same number of multiplications regardless of $d$. One such variant is Montgomery's ladder method, which always does two multiplications (one of which is a squaring) for each bit of $d$.
When calculating terms of the form $a^e\bmod m$, most libraries for arbitrary-sized integer calculations (e.g. Java's BigInteger) use a special algorithm instead of first doing the exponentiation and then taking the modulus (since storing the result of $a^e$ with 4096-bit keys would require more RAM than even today's biggest computers have).
So this is how Java does it when $a^e\bmod m$ is to be calculated (pseudo-code):
1. If $e$ is negative, return $(a^{-1}\bmod m)^{-e}\bmod m$, where $a^{-1}\bmod m$ is computed using the extended Euclidean algorithm.
2. If $e = 1$, the exponentiation can be skipped and the simple residue $a \bmod m$ is returned.
3. Set $s=1$.
4. While $e\ne0$:
if $e$ is odd, then let $s=s\cdot a\bmod m$;
$e=\lfloor e/2\rfloor$ (bit-shift to the right by 1);
$a = a\cdot a \bmod m$.
5. $s$ is the result.
A lot of them (or their HSM) rely on the Chinese Remainder Theorem to speed up computation for decryption and signing.
To quote Wikipedia:
The following values are precomputed and stored as part of the private key:
p and q: the primes from the key generation,
$d_P = d\text{ (mod }p - 1\text{)}$,
$d_Q = d\text{ (mod }q - 1\text{)}$ and
$q_\text{inv} = q^{-1}\text{ (mod }p\text{)}$.
These values allow the recipient to compute the exponentiation $m = c^d \text{ (mod }pq\text{)}$ more efficiently as follows:
$m_1 = c^{d_P}\text{ (mod }p\text{)}$
$m_2 = c^{d_Q}\text{ (mod }q\text{)}$
$h = q_\text{inv}(m_1 - m_2)\text{ (mod }p\text{)}$
(if $m_1 < m_2$ then some libraries compute $h$ as $q_\text{inv}(m_1 + > p - m_2)\text{ (mod }p\text{)}$)
$m = m_2 + hq$,
This is more efficient than computing $m \equiv c^d \text{ (mod }pq\text{)}$ even though two modular exponentiations have to be computed. The reason is that these two modular exponentiations both use a smaller exponent and a smaller modulus.
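For concreteness, here is a hedged Python sketch of these CRT steps, using the standard toy parameters p = 61, q = 53, e = 17, d = 2753 (far too small for real use; pow(q, -1, p) requires Python 3.8+):

p, q = 61, 53
n, e, d = p * q, 17, 2753      # e*d = 1 (mod lcm(p-1, q-1))

d_P = d % (p - 1)
d_Q = d % (q - 1)
q_inv = pow(q, -1, p)          # q^(-1) mod p

c = pow(123, e, n)             # encrypt m = 123

m1 = pow(c, d_P, p)            # exponentiations with smaller exponent/modulus
m2 = pow(c, d_Q, q)
h = (q_inv * (m1 - m2)) % p
m = m2 + h * q
assert m == 123 == pow(c, d, n)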
This is typically the kind of things you may find implemented in smart cards or in constrained devices. |
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox\documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry, I explain badly in English :) I mean, if the rule was to use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that? |
The linear homogeneous equation of the \(n\)th order has the form
\[{{y^{\left( n \right)}} + {a_1}\left( x \right){y^{\left( {n – 1} \right)}} + \cdots }+{ {a_{n – 1}}\left( x \right)y’ }+{ {a_n}\left( x \right)y }={ 0,}\]
where the coefficients \({a_1}\left( x \right),\) \({a_2}\left( x \right), \ldots ,\) \({a_n}\left( x \right)\) are continuous functions on some interval \(\left[ {a,b} \right].\)
The left side of the equation can be written in abbreviated form using the linear differential operator \(L:\)
\[Ly\left( x \right) = 0,\]
where \(L\) denotes the set of operations of differentiation, multiplication by the coefficients \({a_i}\left( x \right),\) and addition.
The operator \(L\) is linear, and therefore has the following properties:
\[L\left[ {{y_1}\left( x \right) + {y_2}\left( x \right)} \right] = L\left[ {{y_1}\left( x \right)} \right] + L\left[ {{y_2}\left( x \right)} \right], \qquad L\left[ {Cy\left( x \right)} \right] = CL\left[ {y\left( x \right)} \right],\]

where \({{y_1}\left( x \right)},\) \({{y_2}\left( x \right)}\) are arbitrary \(n - 1\) times differentiable functions and \(C\) is any number.
It follows from the properties of the operator \(L\) that if the functions \({y_1},{y_2}, \ldots ,{y_n}\) are solutions of the homogeneous differential equation of the \(n\)th order, then the function of the form
\[{y\left( x \right) }={ {C_1}{y_1} + {C_2}{y_2} + \cdots }+{ {C_n}{y_n},}\]
where \({C_1},{C_2}, \ldots ,{C_n}\) are arbitrary constants, will also satisfy this equation.
The last expression is the general solution of homogeneous differential equation if the functions \({y_1},{y_2}, \ldots ,{y_n}\) form a fundamental system of solutions.
Fundamental System of Solutions
The set of \(n\) linearly independent particular solutions \({y_1},{y_2}, \ldots ,{y_n}\) is called a fundamental system of the homogeneous linear differential equation of the \(n\)th order.
The functions \({y_1},{y_2}, \ldots ,{y_n}\) are linearly independent on the interval \(\left[ {a,b} \right]\) if the identity

\[{{\alpha _1}{y_1} + {\alpha _2}{y_2} + \cdots }+{ {\alpha _n}{y_n} }\equiv {0}\]

holds only when

\[{{\alpha _1} = {\alpha _2} = \cdots }={ {\alpha _n} }={ 0;}\]

that is, no choice of the numbers \({\alpha _1},{\alpha _2}, \ldots ,{\alpha _n}\) that are not simultaneously \(0\) makes the identity hold.
To test functions for linear independence it is convenient to use the Wronskian:
\[ {W\left( x \right) = {W_{{y_1},{y_2}, \ldots ,{y_n}}}\left( x \right) } = {\left| {\begin{array}{*{20}{c}} {{y_1}}&{{y_2}}& \cdots &{{y_n}}\\ {{y’_1}}&{{y’_2}}& \cdots &{{y’_n}}\\ \cdots & \cdots & \cdots & \cdots \\ {y_1^{\left( {n – 1} \right)}}&{y_2^{\left( {n – 1} \right)}}& \cdots &{y_n^{\left( {n – 1} \right)}} \end{array}} \right|.} \]
Let the functions \({y_1},{y_2}, \ldots ,{y_n}\) be \(n – 1\) times differentiable on the interval \(\left[ {a,b} \right].\) Then if these functions are linearly dependent on the interval \(\left[ {a,b} \right],\) then the following identity holds:
\[W\left( x \right) \equiv 0.\]
Accordingly, if these functions are linearly independent solutions of the equation on \(\left[ {a,b} \right],\) we have
\[W\left( x \right) \ne 0.\]
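As an illustration (our own addition, using Python's sympy), we can compute the Wronskian of the pair \(y_1 = e^x,\) \(y_2 = e^{-x},\) which form a fundamental system for \(y'' - y = 0:\)

from sympy import symbols, exp, Matrix, diff, simplify

x = symbols('x')
y1, y2 = exp(x), exp(-x)

W = Matrix([[y1, y2],
            [diff(y1, x), diff(y2, x)]]).det()
print(simplify(W))   # -2: nonzero everywhere, so y1 and y2 are independent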
The fundamental system of solutions uniquely defines a linear homogeneous differential equation. In particular, the fundamental system \({y_1},{y_2},{y_3}\) defines a third-order equation, which is expressed through determinant as follows:
\[ {\left| {\begin{array}{*{20}{c}} {{y_1}}&{{y_2}}&{{y_3}}&y\\ {{y’_1}}&{{y’_2}}&{{y’_3}}&y’\\ {{y^{\prime\prime}_1}}&{{y^{\prime\prime}_2}}&{{y^{\prime\prime}_3}}&y^{\prime\prime}\\ {{y^{\prime\prime\prime}_1}}&{{y^{\prime\prime\prime}_2}}&{{y^{\prime\prime\prime}_3}}&y^{\prime\prime\prime} \end{array}} \right| }={ 0.} \]
The expression for the differential equation of the \(n\)th order can be written similarly:
\[{\left| {\begin{array}{*{20}{c}} {{y_1}}&{{y_2}}& \cdots &{{y_n}}&y\\ {{y’_1}}&{{y’_2}}& \cdots &{{y’_n}}&y’\\ \cdots & \cdots & \cdots & \cdots & \cdots \\ {y_1^{\left( n \right)}}&{y_2^{\left( n \right)}}& \cdots &{y_n^{\left( n \right)}}&{{y^{\left( n \right)}}} \end{array}} \right| }={ 0.}\]
Liouville’s Formula
Suppose that the functions \({y_1},{y_2}, \ldots ,{y_n}\) form a fundamental system of solutions for a differential equation of the \(n\)th order. Suppose that the point \({x_0}\) belongs to the interval \(\left[ {a,b} \right].\) Then the Wronskian is determined by Liouville's formula:
\[{W\left( x \right) }={ W\left( {{x_0}} \right){e^{ – \int\limits_{{x_0}}^x {{a_1}\left( t \right)dt} }},}\]
where \({a_1}\) is the coefficient of the derivative \({y^{\left( {n – 1} \right)}}\) in the differential equation. Here we assume that the coefficient \({a_0}\left( x \right)\) of \({y^{\left( n \right)}}\) in the differential equation is equal to \(1.\) Otherwise, Liouville’s formula takes the form:
\[
{W\left( x \right) }={ W\left( {{x_0}} \right){e^{ – \int\limits_{{x_0}}^x {\frac{{{a_1}\left( t \right)}}{{{a_0}\left( t \right)}}dt} }},\;\;}\kern-0.3pt {{a_0}\left( t \right) \ne 0,\;\;}\kern-0.3pt{t \in \left[ {a,b} \right].} \]

Reduction of Order of a Homogeneous Linear Equation
The order of a linear homogeneous equation
\[ {Ly\left( x \right) }={ {y^{\left( n \right)}} + {a_1}\left( x \right){y^{\left( {n – 1} \right)}} + \cdots } + {{a_{n – 1}}\left( x \right)y’ }+{ {a_n}\left( x \right)y }={ 0} \]
can be reduced by one by the substitution \(y’ = yz.\) Unfortunately, usually such a substitution does not simplify the solution, because the new equation in the variable \(z\) becomes nonlinear.
If a particular solution \({y_1}\) is known, then the order of the differential equation can be reduced (while maintaining its linearity) by replacing
\[y = {y_1}z,\;\;z’ = u.\]
In general, if we know \(k\) linearly independent particular solutions, the order of the equation can be reduced by \(k\) units.
Solved Problems
Click a problem to see the solution. |
Standard error of a regression coefficient

The standard error of the regression is an estimate of the standard deviation of the noise, that is, of the differences between the actual $y$ and the model-predicted $\hat{y}$. With standardized predictor values $\hat{z}_j=\frac{x_{pj}-\hat{\overline{x}}}{\hat{s}_x}$, one approximation discussed here was $\hat{\sigma}^2\approx \frac{n}{n-2}\hat{a}_1^2\hat{s}_x^2\frac{1-R^2}{R^2}$; a rougher approximation is to use $\hat{y}^2$ in place of $s_y^2$ to get $\hat{\sigma}^2\approx \frac{n}{n-2}\hat{y}^2(1-R^2)$. |
I have this problem, and I don't really understand how I am supposed to do it. Could someone please help me with this? I know what a left quotient is. I also know about regular and irregular languages. But I still don't know how to show this... Any help would be appreciated, thanks in advance! Also, sorry if something is not correct in English; it is not my first language.
Problem:
Show that for any language $L ⊆ Σ^*$ and any DFA $A = \langle \Sigma, Q, q_0, \delta, F \rangle$, the left quotient $L \diagdown L (A)$ is a union of languages $L_q = \{v | \delta(q,v) \in F\}$ for selected states $q \in Q$, and explain what are those selected states.
Could someone please also demonstrate on a small (but non-trivial) example, in which the $L$ would be irregular? |
(Note: This question has been cross-posted from MSE.)
Euclid and Euler proved that every even perfect number is of the form $m = \frac{{M_p}\left(M_p + 1\right)}{2}$ where $M_p = 2^p - 1$ is a prime number, called a
Mersenne prime. Thus, an even perfect number is triangular.
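(As a quick side check, not part of the original argument: an integer $m$ is triangular exactly when $8m + 1$ is a perfect square, so the triangularity of the Euclid-Euler form is easy to verify numerically, e.g. in Python.)

import math

def is_triangular(m):
    # m = k(k+1)/2 for some k  <=>  8m + 1 is a perfect square
    s = math.isqrt(8 * m + 1)
    return s * s == 8 * m + 1

for p in (2, 3, 5, 7, 13):          # exponents of some Mersenne primes
    M = 2**p - 1
    assert is_triangular(M * (M + 1) // 2)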
On the other hand, Euler showed that an odd perfect number, if one exists, takes the form $N = q^k n^2$, where $q$ is prime with $q \equiv k \equiv 1 \pmod 4$ and $\gcd(q,n) = 1$. (Descartes, Frenicle and subsequently Sorli conjectured that $k = 1$ always holds.)
Here is my question:
Has it been proved that odd perfect numbers cannot be triangular?

Added March 26 2016
If $\sigma(q) = 2n^2$, then it would follow that $n < q$, which implies that $k = 1$. The odd perfect number $N = q^k n^2$ then takes the form $N = \frac{q(q + 1)}{2}$. Unfortunately, it is known that $\sigma(q^k) \leq \frac{2n^2}{3}$.
Any pointers to the existing literature containing such a proof would be most appreciated. |
Recently, I have been learning about path integrals and I was wondering, can the probability of a certain path be weighted more in a path integral? Said in another way, can certain paths have more probability in a path integral?
In general the "weighting" of each path $q$ in a path integral is given by $e^{\frac{i}{\hbar}S[q]}$. Then paths for which the action $S$ is stationary with respect to small deviations from the path are the only ones which really contribute because the contributions from those with non stationary $S$ get averaged out as the phase changes very rapidly (because $\hbar$ is very small).
The number $S[q]$ is defined as:
$$S[q] = \int_{t_0}^{t_1}L(q(t),\dot{q}(t), t)dt$$
Where $t$ is some parameter that varies along the path and $L$ is the Lagrangian. The Lagrangian will depend on the details of your system, but for a free particle (of unit mass) it is just the classical kinetic energy $L = \frac{1}{2}\dot{q}^2$.
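As a rough numerical illustration of this stationary-phase picture (a sketch with arbitrary units and a deliberately small $\hbar$, not a rigorous path integral):

import numpy as np

hbar = 0.05                       # small "hbar" to exaggerate the effect
t = np.linspace(0, 1, 101)
dt = t[1] - t[0]

def action(q):
    v = np.diff(q) / dt
    return np.sum(0.5 * v**2) * dt    # S = integral of (1/2) q'^2 dt

classical = t.copy()                  # straight line from q(0)=0 to q(1)=1
rng = np.random.default_rng(0)

phases = []
for _ in range(2000):
    wiggle = np.zeros_like(t)
    wiggle[1:-1] = 0.1 * rng.standard_normal(len(t) - 2)
    phases.append(np.exp(1j * action(classical + wiggle) / hbar))

print(abs(np.mean(phases)))           # far below 1: wild phases cancel out
|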
Annals of Functional Analysis, Volume 3, Number 2 (2012), 32-57.

Stability of a functional equation of Whitehead on semigroups

Abstract
Let $S$ be a semigroup and $X$ a Banach space. The functional equation $\varphi (xyz)+ \varphi (x) + \varphi (y) + \varphi (z) = \varphi (xy) + \varphi (yz) + \varphi (xz)$ is said to be stable for the pair $(X, S)$ if and only if for every $f: S\to X$ satisfying $\| f(xyz)+f(x) + f(y) + f(z) - f(xy)- f(yz)-f(xz)\| \leq \delta $ for some positive real number $\delta$ and all $x, y, z \in S$, there is a solution $\varphi : S \to X$ such that $f-\varphi$ is bounded. In this paper, among other results, we prove the following: 1) this functional equation, in general, is not stable on an arbitrary semigroup; 2) this equation is stable on periodic semigroups; 3) this equation is stable on abelian semigroups; 4) any semigroup with left (or right) law of reduction can be embedded into a semigroup with left (or right) law of reduction where this equation is stable. The main results of this paper generalize the works of Jung [J. Math. Anal. Appl. 222 (1998), 126-137], Kannappan [Results Math. 27 (1995), 368-372] and Fechner [J. Math. Anal. Appl. 322 (2006), 774-786].
Article information

Source: Ann. Funct. Anal., Volume 3, Number 2 (2012), 32-57.
Dates: First available in Project Euclid: 12 May 2014.
Permanent link to this document: https://projecteuclid.org/euclid.afa/1399899931
Digital Object Identifier: doi:10.15352/afa/1399899931
Mathematical Reviews number (MathSciNet): MR2948387

Citation
Faĭziev, Valeriy A.; Sahoo, Prasanna K. Stability of a functional equation of Whitehead on semigroups. Ann. Funct. Anal. 3 (2012), no. 2, 32-57. doi:10.15352/afa/1399899931. https://projecteuclid.org/euclid.afa/1399899931 |
I'm using an Extended Kalman filter where the motion model is a function of the states and the inputs, with additive white noise, i.e. $$ x_k = f(x_{k-1},u_{k-1}) +\delta_{k-1} \quad , \quad \delta_{k-1} \sim N(0,\Delta_{k-1})$$
If $x_{k-1}$ and $u_{k-1}$ are known, then the prediction step is done as $$\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1},u_{k-1}) $$ $$ F_{k-1} = \frac{\partial f}{\partial x}\Big|_{x=\hat{x}_{k-1|k-1}~,~u=u_{k-1}} $$ $$ P_{k|k-1} = F_{k-1}P_{k-1|k-1}F_{k-1}^\top + \Delta_{k-1} $$
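(For concreteness, here is a minimal numpy sketch of this standard prediction step; the functions f and F_jac and the noise covariance Delta are hypothetical placeholders for whatever model is in use.)

import numpy as np

def predict(x_est, P, u, f, F_jac, Delta):
    # EKF prediction: propagate the estimate and its covariance.
    x_pred = f(x_est, u)
    F = F_jac(x_est, u)            # Jacobian of f at (x_est, u)
    P_pred = F @ P @ F.T + Delta
    return x_pred, P_pred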
However, at some time steps I won't know the value of $u$, the input. What is the optimal way to perform the prediction step in this scenario?
My thoughts so far are to set $$\hat{x}_{k|k-1} = \hat{x}_{k-1|k-1} ~,$$ since I have no new information to update it... but no idea how to estimate the covariance matrix $P_{k|k-1}$. |
When estimating a parameter such as the variance of a random variable $X$ (I don't care about this specific instance particularly), one usually adopts Bessel's correction, i.e. uses the formula $\hat{Var}{(X)} = \frac{1}{n-1}\sum_i^n(x_i -\bar{x})^2$.
The justification given on Wikipedia and on all other sources I've found is either of the nature of:
the $n-1$ factor arises from dividing by the degrees of freedom of the residual terms;
the $n-1$ factor ensures unbiasedness;
the $n-1$ factor corrects for underestimating the variance if we weren't to include it.
However, why does it make sense to divide by the degrees of freedom?
In general, it seems pretty common to divide parameter estimates not by $n$, the number of sample points used to calculate them, but by $df$. Why does this generally make sense?
EDIT: to clarify my question, what I'm asking is whether, in a general setting, dividing an uncorrected estimate by its degrees of freedom will produce an unbiased estimator or an estimator with desirable properties. This procedure seems common, but I have not seen a general proof (and don't know if one exists) of why it works in general.
In particular, I think that the reason would be probably in terms of dimensions of subspaces or connecting back to the degrees of freedom of distributions (that seems closely related).
For individual estimates like the sample variance or the MLR residual standard error $\frac{RSS}{n-k-1}$ I am aware that proofs of unbiasedness exist, but they are specific to the problem at hand.
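As an empirical illustration (my own addition, not a proof), a quick numpy simulation shows the 1/n estimator biased low while the 1/(n-1) estimator hits the true variance:

import numpy as np

rng = np.random.default_rng(1)
true_var, n, reps = 4.0, 5, 200_000

samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
dev2 = (samples - samples.mean(axis=1, keepdims=True))**2

print(dev2.sum(axis=1).mean() / n)        # about true_var*(n-1)/n = 3.2
print(dev2.sum(axis=1).mean() / (n - 1))  # about true_var = 4.0
|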
In simple words, geometry is the branch of mathematics that studies shapes, sizes, dimensions, etc. The Greek mathematician Euclid is known as the Father of Geometry, and his work shows how geometry was used by a variety of early cultures.
The word geometry is derived from Greek, where "geo" means "earth" and "metria" means "measure". Geometry is a special part of your studies during school and college. As it is a vital part of your curriculum, a good grasp of the different geometry concepts can take your career to new heights.
If you look around, geometry is used in daily routine too. Take the example of car parking, where you have to judge the space available and work out whether you would be able to park your car in a particular area or not. It is good for spatial sense and for competitive exams too.
The other major applications of geometry in different areas include engineering, architecture, art, astronomy, space, nature, sculptures, cars, machine and many more. The area of applications is just the limitless and it can be used almost everywhere you can imagine around you.
Now that you are familiar with what geometry is, the next important thing for a learner is the list of basic geometry formulas. They are generally used to calculate the area, length, perimeter, and volume of various geometrical shapes and figures.
The other formulas are linked with height, surface area, length, width, or radius etc. Few Geometry formulas are complicated while few of them are simpler and easy to learn. They are even used in our daily life to calculate the space to store different things.
\[\large Perimeter \; of \; a \; Square = P = 4a \]
Where a = Length of the sides of a Square
\[\large Perimeter \; of \; a \; Rectangle = P = 2(l+b) \]
Where, l = Length ; b = Breadth
\[\large Area \; of \; a \; Square = A = a^{2} \]
Where a = Length of the sides of a Square
\[\large Area \; of \; a \; Rectangle = A = l \times b \]
Where, l = Length ; b = Breadth
\[\large Area \; of \; a \; Triangle = A = \frac{b \times h}{2} \]
Where, b = base of the triangle ; h = height of the triangle
\[\large Area \; of \; a \; Trapezoid = A = \frac{(b_{1}+b_{2})h}{2} \]
Where, $b_{1}$ & $b_{2}$ are the bases of the Trapezoid ; h = height of the Trapezoid
\[\large Area \; of \; a \; Circle = A = \pi \times r^{2} \]
\[\large Circumference \; of \; a \; Circle = C = 2\pi r \]
Where, r = Radius of the Circle
\[\large Surface \; Area \; of \; a \; Cube = S = 6a^{2} \]
Where, a = Length of the sides of a Cube
\[\large Curved \; Surface \; Area \; of \; a \; Cylinder = S = 2\pi rh \]
\[\large Volume \; of \; a \; Cylinder = V = \pi r^{2} h \]
Where, r = Radius of the base of the Cylinder ; h = Height of the Cylinder
\[\large Surface \; Area \; of \; a \; Cone = S = \pi r (r + \sqrt{h^{2}+r^{2}}) \]
\[\large Volume \; of \; a \; Cone = V = \frac{1}{3}\pi r^{2} h \]
Where, r = Radius of the base of the Cone, h = Height of the Cone
\[\large Surface \; Area \; of \; a \; Sphere = S = 4 \pi r^{2} \]
\[\large Volume \; of \; a \; Sphere = V = \frac{4}{3}\pi r^{3} \]
Where, r = Radius of the Sphere
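For quick numerical work, the formulas above translate directly into code. Here is a plain Python sketch (the function names are our own):

import math

def square_perimeter(a): return 4 * a
def rectangle_area(l, b): return l * b
def triangle_area(b, h): return b * h / 2
def circle_area(r): return math.pi * r**2
def cylinder_volume(r, h): return math.pi * r**2 * h
def cone_volume(r, h): return math.pi * r**2 * h / 3
def sphere_volume(r): return 4 / 3 * math.pi * r**3

print(cone_volume(3, 5))   # one third of the matching cylinder's volume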
Coordinate geometry is another exciting idea of mathematics that is learned during school times. There are a variety of coordinate geometry formulas that are used to draw graphs of curves or lines. These formulas allow you to solve geometry problems or equations quickly and meaningful insights of algebra too. Also, the discovery of calculus depends on basics of coordinate geometry.
Each problem has a solution, and that is true for geometry equations and problems too. You just have to use the list of basic geometry formulas to solve complicated problems in minutes. Obviously, this is not possible without the right skills and hours of practice.
Geometry is necessary for students in schools to develop problem-solving skills and spatial reasoning capabilities. Geometry also gives you a solid idea of measurement. With a clear understanding of the topic, you will be able to learn shapes and solids in depth, together with their relationships. You will become a problem solver with a deep understanding of transformations, symmetry, and spatial reasoning.
In later schooling, geometry becomes more related to reasoning and analysis. There would be more focus on analyzing properties of two-dimensional shapes or three-dimensional figures and learn using the coordinate system too. Ultimately, develop an exciting career in different fields with right mathematics and geometry skills. |
Short version
After integrating over all possible outgoing angles, the total cross-section of coherent elastic scattering from a fixed target of characteristic length $L$ scales like $L^4$. Does this mean, given a beam of sufficiently small angular dispersion and detectors capable of sufficiently fine angular resolution, there are coherent effects over macroscopic distances arbitrarily longer than the wavelength of the beam?
Long Version
Consider a target of $N$ identical atoms located at positions $\vec{x}_n$ (where $n = 1,\ldots, N$) which is bombarded by a flux of incoming particles, which we'll call neutrinos. Set $\hbar = 1$ and let $\vec{k}$, with $k = |\vec{k}| = 2\pi/\lambda$, be the momentum of an incoming neutrino, where $\lambda$ is the de Broglie wavelength. Let $f(\theta)$ be the scattering amplitude for a neutrino on a single free atom so the differential cross section is \begin{align} \frac{d \sigma_0}{d\Omega} = \vert f(\theta) \vert^2 \end{align} for one atom. As is befitting neutrinos, assume that $f(\theta)$ is very tiny so that the Born approximation is applicable. In particular, we can ignore multiple scattering events.
For many scattering processes the cross section of the target $\sigma_\mathrm{T}$is just $N$ times the cross section of each atom: $N \sigma_0$. However, for elastic scattering of extremely low energy neutrinos from the relatively heavy nuclei in atoms, the very long wavelength of the neutrinos means the various nuclei contribute
coherently to the cross section. When $\lambda \gg L$, where $L$ is the size of the target, one has $\sigma_\mathrm{T} = N^2 \sigma_0$ rather than $N \sigma_0$ [1]. For general $k$, the differential cross section is [2]\begin{align}\frac{d \sigma_\mathrm{T}}{d\Omega} = \left\vert \sum_{n=1}^N f(\theta) \, e^{-i \vec{x}_n \cdot (\vec{k}-\vec{l})} \right\vert^2 \tag{1}\end{align}where $\vec{l}$ is the momentum of the outgoing neutrino [3] and $\theta$ is the angle between $\vec{k}$ and $\vec{l}$. Because it's elastic, $|\vec{l}|=|\vec{k}|=k$.
A property of equation (1) is that for $L \gg \lambda$ there is a large suppression of scattering in most directions because the phase in the exponential tends to cancel for the different atoms in the sum. The exception is when $\vec{l}$ is very close to $\vec{k}$ (i.e. low momentum transfer, very slight scattering), because then the phase of the exponent varies very slowly from atom to atom. This means that for large targets, the vast majority of the scattering is in the forward direction.
Now restrict to the case $A \lesssim \lambda \lesssim L$, where $A$ is the typical atomic spacing. What's initially confusing about this is that if we ask for the total cross section by integrating over $\hat{l}$, we find for large $L$ that [4] \begin{align} \sigma_\mathrm{T} = \int_\Omega d \hat{l} \frac{d \sigma_\mathrm{T}}{d\Omega} \sim \frac{N^2}{L^2 k^2} = \rho^{2/3} \lambda^2 N^{4/3} \end{align} where $\rho = N/L^3$ is the number density of the atoms. This means that, for fixed density and fixed neutrino momentum, the total cross section grows faster that than the number of atoms in the target---even over distance scales much larger than the neutrino wavelength. In this sense the coherent scattering effects is non-local over large distances.
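To make the scaling concrete, here is a small numpy sketch (our own illustration, arbitrary units) of the coherent sum in equation (1): near the forward direction the $N$ terms add in phase, giving $\sim N^2$, while at a generic angle the phases are effectively random, giving $\sim N$:

import numpy as np

rng = np.random.default_rng(0)
N, L = 2000, 50.0                        # N atoms in a box of side L
k = 2 * np.pi / 1.0                      # wavelength lambda = 1
positions = rng.uniform(0, L, size=(N, 3))
k_in = np.array([0.0, 0.0, k])

def coherent_factor(theta):
    # |sum_n exp(-i x_n . (k - l))|^2 for outgoing angle theta
    k_out = k * np.array([np.sin(theta), 0.0, np.cos(theta)])
    phases = positions @ (k_in - k_out)
    return abs(np.exp(-1j * phases).sum())**2

print(coherent_factor(0.0) / N**2)   # 1.0: fully coherent forward scattering
print(coherent_factor(0.5) / N)      # O(1): incoherent at a generic angle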
The story I have been told is that this is resolved by incorporating the realities of a real-world detector. For any traditional experiment, there is always a minimum forward acceptance angle $\theta_0$ that it can detect. Particles which are scattered at smaller angles are indistinguishable from unscattered particles in the beam. Indeed, if we let $\tilde{\sigma}_\mathrm{T}$ be the detectable cross section scattered at angles greater than $\theta_0$ for
any fixed $\theta_0>0$, we find\begin{align}\tilde{\sigma}_\mathrm{T} = \int_{\theta > \theta_0} d \hat{l} \frac{d \sigma_\mathrm{T}}{d\Omega} \sim \frac{N^2}{L^4 k^4} = \rho^{4/3} \lambda^4 N^{2/3}.\end{align}This accords with our intuition. Growing like $N^{2/3}$ is the same as growing like $L^2$, i.e. there is complete cancellation in the bulk of the target, and the only significant scattering is from the surface (which scales like $L^2$).
Is this all there is to say? Can we potentially see scattering of particles (with atomic-scale wavelength) which demonstrates coherent contributions from target atoms separated by meters? Are there any other limiting factors besides a finite mean free path of the incoming particle (which breaks the Born approximation) and the angular resolution of the detector?
Form of answer
For positive answer, I would accept (a) anything that points to a reputable source (textbook or journal article) which explicitly discusses the possibility of coherent effects over arbitrarily large distances or (b) an argument which significantly improves on the one I have made above. For negative answers, any conclusive argument would suffice.
Bonus
It has been argued to me that the Born approximation is invalid for small momentum transfer $\vec{k}-\vec{l}$, because the approximation requires the energy associated with this transfer to be much larger than the potential (which cannot be the case for $\vec{k}-\vec{l}=0$). This seems to explicitly conflict with textbooks on the Born approximation which state things like
For any potential $V$ there is a $\bar{\lambda} > 0$ such that the Born series (9.5) converges to the correct on-shell $T$ matrix for all values $\vec{p}$ and $\vec{p}'$, provided $|\lambda| < \bar{\lambda}$.
["Scattering Theory: The Quantum Theory of Nonrelativistic Collisions" (1972) by John R. Taylor]
(Here, $\lambda$ is the coefficient of the expansion.)
Is there any validity to this objection?
[1] For macroscopic $N$, this can be a stupendous boost. This was responsible for some (misplaced) optimism in the '70s and '80s that relic neutrinos might be detectable.
[2] This form also appears in many less exotic places than neutrino physics, like neutron and X-ray scattering.
[3] $d\Omega$ is the differential over the possible directions $\hat{l}$.
[4] This behavior is independent of the details of the geometry. The $1/3$'s come from the integrals over 3 spatial dimensions. |
Balbharati SSC Class 10 Mathematics 1, Chapter 1: Linear Equations in Two Variables

Chapter 1: Linear Equations in Two Variables solutions [Pages 4 - 5]
Complete the following activity to solve the simultaneous equations.
5x + 3y = 9 -----(I) 2x + 3y = 12 ----- (II)
Solve the following simultaneous equation.
3a + 5b = 26; a + 5b = 22
Solve the following simultaneous equation.
x + 7y = 10; 3x – 2y = 7
Solve the following simultaneous equation.
2x – 3y = 9; 2x + y = 13
Solve the following simultaneous equation.
5m – 3n = 19; m – 6n = –7
Solve the following simultaneous equation.
5x + 2y = –3; x + 5y = 4
Solve the following simultaneous equation.
\[\frac{1}{3}x + y = \frac{10}{3}; 2x + \frac{1}{4}y = \frac{11}{4}\]
Solve the following simultaneous equation.
99x + 101y = 499; 101x + 99y = 501
(Adding the two equations gives \[200x + 200y = 1000 \Rightarrow x + y = 5 \quad \text{(III)}\])
Solve the following simultaneous equation.
49x – 57y = 172; 57x – 49y = 252

Chapter 1: Linear Equations in Two Variables solutions [Page 8]
Complete the following table to draw the graphs of the equations:

(I) x + y = 3 (II) x – y = 4

x + y = 3

x | 3 | -2 | 0
y | 0 | 5 | 3
(x, y) | (3, 0) | (-2, 5) | (0, 3)

x – y = 4

x | 4 | -1 | 0
y | 0 | -5 | -4
(x, y) | (4, 0) | (-1, -5) | (0, -4)
Solve the following simultaneous equations graphically.
x + y = 6 ; x – y = 4
Solve the following simultaneous equations graphically.
x + y = 5 ; x – y = 3
Solve the following simultaneous equations graphically.
x + y = 0 ; 2x – y = 9
Solve the following simultaneous equations graphically.
3x – y = 2 ; 2x – y = 3
Solve the following simultaneous equations graphically.

3x – 4y = –7 ; 5x – 2y = 0
Solve the following simultaneous equations graphically.
2x – 3y = 4 ; 3y – x = 4
Chapter 1: Linear Equations in Two Variables solutions [Page 16]
Fill in the blanks with correct numbers: \[\begin{vmatrix}3 & 2 \\ 4 & 5\end{vmatrix}\] = 3 × ___ – ___ × 4 = ___ – 8 = ___
Find the values of following determinant.
Find the values of following determinant.
Find the values of following determinant.
Solve the following simultaneous equations using Cramer’s rule.
3x – 4y = 10 ; 4x + 3y = 5
Solve the following simultaneous equations using Cramer’s rule.
4x + 3y – 4 = 0 ; 6x = 8 – 5y
Solve the following simultaneous equations using Cramer’s rule.
x + 2y = –1 ; 2x – 3y = 12
Solve the following simultaneous equations using Cramer’s rule.
6x – 4y = –12 ; 8x – 3y = –2
Solve the following simultaneous equations using Cramer’s rule.
4m + 6n = 54 ; 3m + 2n = 28
Solve the following simultaneous equations using Cramer’s rule.
\[2x + 3y = 2 ; x - \frac{y}{2} = \frac{1}{2}\]

Chapter 1: Linear Equations in Two Variables solutions [Page 19]
Solve the following simultaneous equations.
\[ \frac{2}{x} - \frac{3}{y} = 15; \frac{8}{x} + \frac{5}{y} = 77\]
Solve the following simultaneous equations.
\[ \frac{10}{x + y} + \frac{2}{x - y} = 4; \frac{15}{x + y} - \frac{5}{x - y} = - 2\]
Solve the following simultaneous equations.
\[ \frac{27}{x - 2} + \frac{31}{y + 3} = 85; \frac{31}{x - 2} + \frac{27}{y + 3} = 89\]
Solve the following simultaneous equations.
\[\frac{1}{3x + y} + \frac{2}{3x - y} = \frac{3}{4}; \frac{1}{2\left( 3x + y \right)} - \frac{1}{2\left( 3x - y \right)} = - \frac{1}{8}\]
Chapter 1: Linear Equations in Two Variables solutions [Page 26]
Two numbers differ by 3. The sum of twice the smaller number and thrice the greater number is 19. Find the numbers.
Complete the following.
The sum of father’s age and twice the age of his son is 70. If we double the age of the father and add it to the age of his son the sum is 95. Find their present ages.
The denominator of a fraction is 4 more than twice its numerator. Denominator becomes 12 times the numerator, if both the numerator and the denominator are reduced by 6. Find the fraction.
Two types of boxes A, B are to be placed in a truck having capacity of 10 tons. When 150 boxes of type A and 100 boxes of type B are loaded in the truck, it weighs 10 tons. But when 260 boxes of type A are loaded in the truck, it can still accommodate 40 boxes of type B, so that it is fully loaded. Find the weight of each type of box.
Out of 1900 km, Vishal travelled some distance by bus and some by aeroplane. Bus travels with average speed 60 km/hr and the average speed of aeroplane is 700 km/hr. It takes 5 hours to complete the journey. Find the distance, Vishal travelled by bus.
Chapter 1: Linear Equations in Two Variables solutions [Pages 27 - 28]
Choose correct alternative for the following question.
To draw the graph of 4x + 5y = 19, find y when x = 1.
(A) 4 (B) 3 (C) 2 (D) –3
Choose correct alternative for the following question.
For simultaneous equations in variables x and y, if Dx = 49, Dy = –63, D = 7, then what is x?
(A) 7 (B) –7 (C) \[\frac{1}{7}\] (D) \[\frac{- 1}{7}\]
Choose correct alternative for the following question.
Find the value of \[\begin{vmatrix}5 & 3 \\ - 7 & - 4\end{vmatrix}\]
(A) –1 (B) –41 (C) 41 (D) 1
Choose correct alternative for the following question.
To solve x + y = 3 ; 3x – 2y – 4 = 0 by the determinant method, find D.
(A) 5 (B) 1 (C) –5 (D) –1
Choose correct alternative for the following question.
If ax + by = c and mx + ny = d and an ≠ bm, then these simultaneous equations have –
(A) Only one common solution. (B) No solution. (C) Infinitely many solutions. (D) Only two solutions.
Complete the following table to draw the graph of 2x – 6y = 3
x     | -5          | 3/2
y     | -13/6       | 0
(x,y) | (-5, -13/6) | (3/2, 0)
Solve the following simultaneous equations graphically.
2x + 3y = 12 ; x – y = 1
Solve the following simultaneous equations graphically.
x – 3y = 1 ; 3x – 2y + 4 = 0
Solve the following simultaneous equations graphically.
5x – 6y + 30 = 0 ; 5x + 4y – 20 = 0
Solve the following simultaneous equations graphically.
3x – y – 2 = 0 ; 2x + y = 8
Solve the following simultaneous equations graphically.
3x + y = 10 ; x – y = 2
Find the value of the following determinant.
Find the value of the following determinant.
Find the value of the following determinant.
Solve the following equations by Cramer’s method.
6x – 3y = –10 ; 3x + 5y – 8 = 0
Solve the following equations by Cramer’s method.
4m – 2n = –4 ; 4m + 3n = 16
Solve the following equations by Cramer’s method.
3x – 2y = \[\frac{5}{2}\] ; \[\frac{1}{3}x + 3y = - \frac{4}{3}\]
Solve the following equations by Cramer’s method.
7x + 3y = 15 ; 12y – 5x = 39
Solve the following equations by Cramer’s method.
\[\frac{x + y - 8}{2} = \frac{x + 2y - 14}{3} = \frac{3x - y}{4}\]
Solve the following simultaneous equations.
\[\frac{2}{x} + \frac{2}{3y} = \frac{1}{6} ; \frac{3}{x} + \frac{2}{y} = 0\]
Solve the following simultaneous equations.
\[\frac{7}{2x + 1} + \frac{13}{y + 2} = 27 ; \frac{13}{2x + 1} + \frac{7}{y + 2} = 33\]
Solve the following simultaneous equations.
Solve the following simultaneous equations.
Solve the following simultaneous equations.
Solve the following word problem.
A two digit number and the number with digits interchanged add up to 143. In the given number the digit in unit’s place is 3 more than the digit in the ten’s place. Find the original number.
Solve the following word problem.
Kantabai bought \[1\frac{1}{2}\] kg tea and 5 kg sugar from a shop. She paid Rs 50 as return fare for rickshaw. Total expense was Rs 700. Then she realised that by ordering online the goods can be bought with free home delivery at the same price. So next month she placed the order online for 2 kg tea and 7 kg sugar. She paid Rs 880 for that. Find the rate of sugar and tea per kg.
Solve the following word problem.
To find the number of notes that Anushka had, complete the following activity.
Solve the following word problem.
Sum of the present ages of Manish and Savita is 31. Manish’s age 3 years ago was 4 times the age of Savita. Find their present ages.
Solve the following word problem.
In a factory the ratio of salary of skilled and unskilled workers is 5 : 3. Total salary of one day of both of them is Rs 720. Find the daily wages of the skilled and unskilled workers.
Solve the following word problem.
Places A and B are 30 km apart and they are on a straight road. Hamid travels from A to B on bike. At the same time Joseph starts from B on bike, travels towards A. They meet each other after 20 minutes. If Joseph had started from B at the same time but in the opposite direction (instead of towards A), Hamid would have caught him after 3 hours. Find the speeds of Hamid and Joseph.
Balbharati solutions for Class 10th Board Exam chapter 1 (Linear Equations in Two Variables) include all questions with solutions and detailed explanations. This will clear students' doubts and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear up any confusion. Shaalaa.com has the Maharashtra State Board Textbook for SSC Class 10 Mathematics 1 solutions presented in a manner that helps students grasp basic concepts better and faster.
Further, we at Shaalaa.com provide such solutions so that students can prepare for written exams. Balbharati textbook solutions can be a core help for self-study and act as perfect self-help guidance for students.
Concepts covered in Class 10th Board Exam Algebra chapter 1 Linear Equations in Two Variables are Linear Equations in Two Variables Applications, Cramer's Rule, Cross-Multiplication Method, Substitution Method, Elimination Method, Equations Reducible to a Pair of Linear Equations in Two Variables, Simple Situational Problems, Inconsistency of a Pair of Linear Equations, Consistency of a Pair of Linear Equations, Introduction of System of Linear Equations in Two Variables, Graphical Method of Solution of a Pair of Linear Equations, Determinant of Order Two.
Using Balbharati Class 10th Board Exam solutions for the Linear Equations in Two Variables exercises is an easy way for students to prepare for the exams, as the solutions are arranged chapter-wise and page-wise. The questions in the Balbharati solutions are important questions that can be asked in the final exam. Most Maharashtra State Board Class 10th Board Exam students prefer Balbharati Textbook Solutions to score more in exams.
Get the free view of chapter 1 Linear Equations in Two Variables Class 10th Board Exam extra questions and use Shaalaa.com to keep it handy for your exam preparation |
This is an old revision of the document!
Lehigh ISE / COR@L Lab Wiki is our new system to create a collective information resource within the department.
From the Wikipedia definition: A wiki is a web application which allows people to add, modify, or delete content in collaboration with others. In a typical wiki, text is written using a simplified markup language (known as “wiki markup”) or a rich-text editor.
At this moment, the Lehigh ISE / COR@L Lab Wiki is only open to faculty and PhD students of the Lehigh ISE department.
You can create new pages, edit the existing pages, upload resources or even delete the content from the system. You are free to share any information as long as it is in the interest of users.
First of all, you need to be registered to the system.
To edit a page, simply click the pen button on the right sidebar.
Then you will see the rich-text editor of the Dokuwiki.
To create a new page, write the address of the page you want in the following format:
http://coral.ie.lehigh.edu/wiki/doku.php/new_page_name
Then click the same button that we used to edit pages, on the right sidebar. It will create the new page.
To delete a page, delete all of its content in the text editor and save.
The wiki system has revision control to restore old information and prevent loss of page content. Even if someone deletes a page, you can click the revision button to reach the previous versions.
The wiki system can recognize mathematical formulations in TeX format. The following input
$ \displaystyle x^2 + \sum_{i=1}^n y_i \leq \alpha $
gives
$ \displaystyle x^2 + \sum_{i=1}^n y_i \leq \alpha $
You can use \begin{equation} \end{equation} as well as \ref{label} type LaTeX commands. For more information, check the MathJax documentation.
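For instance (an assumed illustration of the syntax; the exact rendering depends on the wiki's MathJax configuration), the input

\begin{equation} \label{eq:sample} x^2 + \sum_{i=1}^n y_i \leq \alpha \end{equation}

produces a numbered equation that can be referenced elsewhere on the page as \ref{eq:sample}.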
You can export page content in TeX/LaTeX format. Click the TeX button on the right side for any page you want and compile it with your favorite editor. It can also export any images available on the page in a zip file. Check this sample output.
You can check the following sample pages and edit them without any registration. |
Hello, this question may be simple but I couldn't find a reference.
Let $E$,$F$ be real Banach spaces and $\Omega\subset E$ be a bounded domain and let $C_b^{\omega}(\Omega,F)$ be the vector space of bounded real analytic functions from $\Omega$ to $F$. Now I would like to know if there is a natural way to define a metric on $C_b^\omega(\Omega,F)$, that makes the space complete.
Concretely, I have a sequence of real analytic functions that converges uniformly, together with their Fréchet derivatives, and now I would like to know if their limit is analytic again.
My first idea was to show that $C_b^\omega(\Omega,F)$ is a closed subspace of $C_b^\infty(\Omega,F)$. Here $C_{b}^{\infty}(\Omega,F)$ is the set of infinitely Fréchet-differentiable functions equipped with the usual set of seminorms (i.e. $\Vert f\Vert_k:=\Vert D^k f\Vert_\infty$) defining a Fréchet space and thus a metric $d(f,g):=\sum_{k=1}^\infty 2^{-k}\frac{\Vert f-g\Vert_k}{1+\Vert f-g\Vert_k}$ which makes $C_{b}^{\infty}(\Omega,F)$ a complete metric space.
Actually, I have no idea if this is true at all. Could someone please confirm if this is the right thing to prove or not?
Regards, Mirko |
If your Lagrangian satisfies
$$ \frac{\partial \mathcal L}{\partial t} = 0 $$
then you're happy, energy is conserved, etc. However, if the above doesn't hold, that doesn't necessarily mean energy isn't conserved; maybe your Lagrangian has a false explicit time dependency. For example:
$$ \mathcal L = \frac{m}{2}\dot x^2+kt\dot x $$
The above Lagrangian has $\partial \mathcal L/\partial t = k\dot x $ but I call that dependency fake (or as the experts like to say, "spurious") because it has the same equations of motion as this other Lagrangian:
$$ \mathcal L = \frac{m}{2}\dot x^2-kx $$
which has no explicit time dependency whatsoever, so energy
is conserved. Specifically, this happened because you can shift derivatives around in your Lagrangian using integration by parts at the level of the action.
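To make that integration by parts explicit:

$$ kt\dot x = \frac{d}{dt}(ktx) - kx, $$

so the two Lagrangians differ only by a total time derivative, which contributes a boundary term to the action and therefore leaves the equations of motion unchanged.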
Similarly, the following Lagrangian
$$ \mathcal L = \frac{m}{2}\dot x^2+gt $$
also has a fake explicit time dependency, since you can remove $gt$ which is just a total time derivative, equivalent to a boundary term in the action.
On the other hand, the Lagrangian with variable mass
$$ \mathcal L = \frac{m(t)}{2}\dot x^2 $$
has a bona fide explicit time dependence. There's no trick here to avoid it; $\partial \mathcal L/\partial t \ne 0$ no matter what legal modifications you perform.
Finally, the following Lagrangian has a mixture of real and false explicit time-dependencies:
$$ \mathcal L = \frac{m(t)}{2}\dot x^2 -kx +gt $$
Its true explicit time dependency would be defined as $\partial \mathcal L/\partial t$ after all possible integrations by parts have been performed and all total derivatives have been removed. Therefore, the question is: given a generic Lagrangian, can its true explicit time dependency be determined in general? |
How to Diagonalize a Matrix. Step by Step Explanation. Problem 211
In this post, we explain how to diagonalize a matrix if it is diagonalizable.
As an example, we solve the following problem.
Diagonalize the matrix
\[A=\begin{bmatrix} 4 & -3 & -3 \\ 3 &-2 &-3 \\ -1 & 1 & 2 \end{bmatrix}\] by finding a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$.
(Update 10/15/2017. A new example problem was added.)
Contents
Diagonalization Procedure Example of a matrix diagonalization Diagonalization Problems and Examples
The process can be summarized as follows. A concrete example is provided below, and several exercise problems are presented at the end of the post.
Diagonalization Procedure
Let $A$ be the $n\times n$ matrix that you want to diagonalize (if possible).
Step 1: Find the characteristic polynomial $p(t)$ of $A$.
Step 2: Find the eigenvalues $\lambda$ of the matrix $A$ and their algebraic multiplicities from the characteristic polynomial $p(t)$.
Step 3: For each eigenvalue $\lambda$ of $A$, find a basis of the eigenspace $E_{\lambda}$. If there is an eigenvalue $\lambda$ such that the geometric multiplicity of $\lambda$, $\dim(E_{\lambda})$, is less than the algebraic multiplicity of $\lambda$, then the matrix $A$ is not diagonalizable. If not, $A$ is diagonalizable; proceed to the next step.
Step 4: If we combine all basis vectors for all eigenspaces, we obtain $n$ linearly independent eigenvectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n$.
Step 5: Define the nonsingular matrix \[S=[\mathbf{v}_1 \, \mathbf{v}_2 \, \dots \, \mathbf{v}_n].\]
Step 6: Define the diagonal matrix $D$, whose $(i,i)$-entry is the eigenvalue $\lambda$ such that the $i$-th column vector $\mathbf{v}_i$ is in the eigenspace $E_{\lambda}$.
Step 7: Then the matrix $A$ is diagonalized as \[S^{-1}AS=D.\]

Example of a matrix diagonalization
Now let us examine these steps with an example.
Let us consider the following $3\times 3$ matrix. \[A=\begin{bmatrix} 4 & -3 & -3 \\ 3 &-2 &-3 \\ -1 & 1 & 2 \end{bmatrix}.\] We want to diagonalize the matrix if possible.

Step 1: Find the characteristic polynomial
The characteristic polynomial $p(t)$ of $A$ is
\[p(t)=\det(A-tI)=\begin{vmatrix} 4-t & -3 & -3 \\ 3 &-2-t &-3 \\ -1 & 1 & 2-t \end{vmatrix}.\] Using the cofactor expansion, we get \[p(t)=-(t-1)^2(t-2).\]

Step 2: Find the eigenvalues
From the characteristic polynomial obtained in Step 1, we see that eigenvalues are
\[\lambda=1 \text{ with algebraic multiplicity } 2\] and \[\lambda=2 \text{ with algebraic multiplicity } 1.\]

Step 3: Find the eigenspaces
Let us first find the eigenspace $E_1$ corresponding to the eigenvalue $\lambda=1$.
By definition, $E_1$ is the null space of the matrix \[A-I=\begin{bmatrix} 3 & -3 & -3 \\ 3 &-3 &-3 \\ -1 & 1 & 1 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & -1 & -1 \\ 0 &0 &0 \\ 0 & 0 & 0 \end{bmatrix}\] by elementary row operations. Hence if $(A-I)\mathbf{x}=\mathbf{0}$ for $\mathbf{x}\in \R^3$, we have \[x_1=x_2+x_3.\] Therefore, we have \begin{align*} E_1=\calN(A-I)=\left \{\quad \mathbf{x}\in \R^3 \quad \middle| \quad \mathbf{x}=x_2\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}+x_3\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \quad \right \}. \end{align*} From this, we see that the set \[\left\{\quad\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix},\quad \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}\quad \right\}\] is a basis for the eigenspace $E_1$. Thus, the dimension of $E_1$, which is the geometric multiplicity of $\lambda=1$, is $2$.
Similarly, we find a basis of the eigenspace $E_2=\calN(A-2I)$ for the eigenvalue $\lambda=2$.
We have \begin{align*} A-2I=\begin{bmatrix} 2 & -3 & -3 \\ 3 &-4 &-3 \\ -1 & 1 & 0 \end{bmatrix} \rightarrow \cdots \rightarrow \begin{bmatrix} 1 & 0 & 3 \\ 0 &1 &3 \\ 0 & 0 & 0 \end{bmatrix} \end{align*} by elementary row operations. Then if $(A-2I)\mathbf{x}=\mathbf{0}$ for $\mathbf{x}\in \R^3$, then we have \[x_1=-3x_3 \text{ and } x_2=-3x_3.\] Therefore we obtain \begin{align*} E_2=\calN(A-2I)=\left \{\quad \mathbf{x}\in \R^3 \quad \middle| \quad \mathbf{x}=x_3\begin{bmatrix} -3 \\ -3 \\ 1 \end{bmatrix} \quad \right \}. \end{align*} From this we see that the set \[\left \{ \quad \begin{bmatrix} -3 \\ -3 \\ 1 \end{bmatrix} \quad \right \}\] is a basis for the eigenspace $E_2$ and the geometric multiplicity is $1$.
Since for both eigenvalues, the geometric multiplicity is equal to the algebraic multiplicity, the matrix $A$ is not defective, and hence diagonalizable.
Step 4: Determine linearly independent eigenvectors
From Step 3, the vectors
\[\mathbf{v}_1=\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \mathbf{v}_3=\begin{bmatrix} -3 \\ -3 \\ 1 \end{bmatrix} \] are linearly independent eigenvectors.

Step 5: Define the invertible matrix $S$
Define the matrix $S=[\mathbf{v}_1 \mathbf{v}_2 \mathbf{v}_3]$. Thus we have
\[S=\begin{bmatrix} 1 & 1 & -3 \\ 1 &0 &-3 \\ 0 & 1 & 1 \end{bmatrix}\] and the matrix $S$ is nonsingular (since the column vectors are linearly independent).

Step 6: Define the diagonal matrix $D$
Define the diagonal matrix
\[D=\begin{bmatrix} 1 & 0 & 0 \\ 0 &1 &0 \\ 0 & 0 & 2 \end{bmatrix}.\] Note that $(1,1)$-entry of $D$ is $1$ because the first column vector $\mathbf{v}_1=\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$ of $S$ is in the eigenspace $E_1$, that is, $\mathbf{v}_1$ is an eigenvector corresponding to eigenvalue $\lambda=1$. Similarly, the $(2,2)$-entry of $D$ is $1$ because the second column $\mathbf{v}_2=\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$ of $S$ is in $E_1$. The $(3,3)$-entry of $D$ is $2$ because the third column vector $\mathbf{v}_3=\begin{bmatrix} -3 \\ -3 \\ 1 \end{bmatrix}$ of $S$ is in $E_2$.
(The order you arrange the vectors $\mathbf{v}_1, \mathbf{v_2}, \mathbf{v}_3$ to form $S$ does not matter but once you made $S$, then the order of the diagonal entries is determined by $S$, that is, the order of eigenvectors in $S$.)
Step 7: Finish the diagonalization
Finally, we can diagonalize the matrix $A$ as
\[S^{-1}AS=D,\] where \[S=\begin{bmatrix} 1 & 1 & -3 \\ 1 &0 &-3 \\ 0 & 1 & 1 \end{bmatrix} \text{ and } D=\begin{bmatrix} 1 & 0 & 0 \\ 0 &1 &0 \\ 0 & 0 & 2 \end{bmatrix}.\] (Here you don’t have to find the inverse matrix $S^{-1}$ unless you are asked to do so.)
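As a quick numerical sanity check of this example, a minimal sketch assuming NumPy is available:

import numpy as np

A = np.array([[4., -3., -3.],
              [3., -2., -3.],
              [-1., 1., 2.]])
S = np.array([[1., 1., -3.],
              [1., 0., -3.],
              [0., 1., 1.]])

# S^{-1} A S should reproduce D = diag(1, 1, 2)
D = np.linalg.inv(S) @ A @ S
print(np.round(D, 10))

Diagonalization Problems and Examples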
Check out the following problems about the diagonalization of a matrix to see if you understand the procedure.
For a solution of this problem and related questions, see the post “Diagonalize a 2 by 2 Matrix $A$ and Calculate the Power $A^{100}$“.
For a solution, check out the post “Diagonalize the 3 by 3 Matrix if it is Diagonalizable“.
For a solution, see the post “Quiz 13 (Part 1) Diagonalize a matrix.“.
In the solution given in the post “Diagonalize the 3 by 3 Matrix Whose Entries are All One“, we use an indirect method to find eigenvalues and eigenvectors.
The next problem is a diagonalization problem of a matrix with variables.
The solution is given in the post:
Diagonalize the Upper Triangular Matrix and Find the Power of the Matrix

A Hermitian Matrix can be diagonalized by a unitary matrix
This means that there exists a unitary matrix $U$ such that $U^{-1}AU$ is a diagonal matrix.
The solution is given in the post:
Diagonalize the $2\times 2$ Hermitian Matrix by a Unitary Matrix

More diagonalization problems
More Problems related to the diagonalization of a matrix are gathered in the following page:
|
The influence of multiple scattering processes on the electron mobility in low density methanol gas
Related Articles
Effective atomic numbers of some H-, C-, N- and O-based composite materials derived from differential incoherent scattering cross-sections. Kumar, S. Prasanna; Manjunathaguru, V.; Umesh, T. K. // Pramana: Journal of Physics; Apr 2010, Vol. 74 Issue 4, p555
In this work, we have made an effort to determine whether the effective atomic numbers of H-, C-, N- and O-based composite materials would indeed remain a constant over the energy grid of 280–1200 keV wherein incoherent scattering dominates their interaction with photons. For this...
The MuScat Experiment — Status and Plans. Edgecock, Rob // AIP Conference Proceedings;2004, Vol. 721 Issue 1, p114
The MuScat experiment is making a precise measurement of the multiple scattering of muons at 180 MeV/c. The data will be compared to a variety of models of multiple scattering and used as input to cooling studies for a Neutrino Factory and Muon Collider and to the MICE experiment. The second and...
Quantum Treatment of the Multiple Scattering and Collective Flow in Intensity Interferometry. Cheuk-Yin Wong // AIP Conference Proceedings;2006, Vol. 828 Issue 1, p617
We apply the path-integral method to study the multiple scattering and collective flow in intensity interferometry in high-energy heavy-ion collisions. We show that the Glauber model and eikonal approximation in an earlier quantum treatment are special examples of the more general path-integral...
A study of quasi-elastic muon neutrino and antineutrino scattering in the NOMAD experiment. Lyubushkin, V.; Popov, B.; Kim, J. J.; Camilleri, L.; Levy, J.-M.; Mezzetto, M.; Naumov, D.; Alekhin, S.; Astier, P.; Autiero, D.; Baldisseri, A.; Baldo-Ceolin, M.; Banner, M.; Bassompierre, G.; Benslama, K.; Besson, N.; Bird, I.; Blumenfeld, B.; Bobisut, F.; Bouchez, J. // European Physical Journal C -- Particles & Fields;Oct2009, Vol. 63 Issue 3, p355
We have studied the muon neutrino and antineutrino quasi-elastic (QEL) scattering reactions ($\nu_\mu n \to \mu^- p$ and $\bar{\nu}_\mu p \to \mu^+ n$) using a set of experimental data collected by the NOMAD Collaboration. We have performed measurements of the cross-section of these...
Multiple scattering of relativistic electrons and positrons in thin tungsten crystals. Efremov, V. I.; Dolgikh, V. A.; Pivovarov, Yu. L. // Russian Physics Journal;Dec2007, Vol. 50 Issue 12, p1237
The article investigates the scattering of electrons and positrons transmitted through tungsten thin crystals for a model of binary collision. The findings were given such as the angular distribution and the average squared angle of the scattering that is compared with the theory involving...
Recent Progress in the Development of a Multi-Layer Green's Function Code for Ion Beam Transport. Tweed, John; Walker, Steven A.; Wilson, John W.; Tripathi, Ram K. // AIP Conference Proceedings;1/21/2008, Vol. 969 Issue 1, p993
To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiation is needed. To address this need, a new Green's function code capable of simulating high charge and energy ions...
Simulation of Multiple Scattering Effects on Coincidence. Schiettekatte, François // AIP Conference Proceedings;3/15/2009, Vol. 1099 Issue 1, p314
The Monte Carlo ion beam analysis simulation program Corteo was adapted to the case of coincidence spectrometry of identical ions. Since multiple scattering (MS) is properly simulated, it reproduces very well the shape of a spectrum, especially the depth-dependent detection efficiency, provided...
Dilute (In,Ga)(As,N) thin films grown by molecular beam epitaxy on (100) and non-(100) GaAs substrates: a Raman-scattering study. Ibáñez, Jordi; Alarcón-Lladó, Esther; Cuscó, Ramon; Artús, Lluís; Henini, Mohamed; Hopkinson, Mark // Journal of Materials Science: Materials in Electronics;Jan2009 Supplement 1, Vol. 20, p116
We use Raman scattering to investigate a series of $\mathrm{In}_x\mathrm{Ga}_{1-x}\mathrm{As}_{1-y}\mathrm{N}_y$ epilayers ($x \sim 20\%$ and $y \sim 3\%$) coherently grown on (100) and on ($N$11) GaAs substrates ($N$ = 1, 3, 4, and 5). We use biaxial-strain theory to evaluate the effect of N alloying on the frequency of...
Electron attachment to POCl3: Measurement and theoretical analysis of rate constants and branching ratios as a function of gas pressure and temperature, electron temperature, and electron energy. Van Doren, Jane M.; Friedman, Jeffery F.; Miller, Thomas M.; Viggiano, A. A.; Denifl, S.; Scheier, P.; Märk, T. D.; Troe, J. // Journal of Chemical Physics;3/28/2006, Vol. 124 Issue 12, p124322
Two experimental techniques, electron swarm and electron beam, have been applied to the problem of electron attachment to POCl3, with results indicating that there is a competition between dissociation of the resonant POCl3-* state and collisional stabilization of the parent anion. In the... |
You are conflating computationally decidable with logically decidable.
A (closed) formula, $\varphi$, is logically decidable iff $\vdash\varphi$ or $\vdash\neg\varphi$, i.e. if $\varphi$ is derivable or $\neg\varphi$ is derivable (with respect to some given logical system). This is the sense we mean when we say something like the Continuum Hypothesis is undecidable.
A unary predicate on naturals, $P$, is computationally decidable iff there is a Turing machine $M$ which terminates on all inputs with output either $1$ or $0$, and $P(n)$ holds if and only if $M$ terminates with output $1$ on input $n$. This is the sense we mean when we say something like the Halting Problem is undecidable.
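For instance (an illustration not in the original answer), the predicate "$n$ is even" is computationally decidable; a decider in Python:

def decides_even(n: int) -> int:
    # terminates on every input with output 1 or 0;
    # outputs 1 exactly when the predicate "n is even" holds
    return 1 if n % 2 == 0 else 0

The Halting Problem, by contrast, is the classic predicate for which no such always-terminating decider exists.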
(There are other ways we use the term "decidable" such as when we say a(n entire) logic or theory is decidable which, more or less, means something like $\vdash\varphi_n$ is computationally decidable where $\varphi_n$ is a formula encoded by $n$ via some suitable surjective encoding.)
Assuming the Law of the Excluded Middle (LEM) doesn't automatically make every unary predicate on the naturals computationally decidable. Indeed, usually computational decidability is formulated within a classical logic where LEM holds.
Intuitionistic logic connects logical decidability to LEM because it satisfies the disjunction property which states $\vdash\varphi\lor\psi$ if and only if $\vdash\varphi$ or $\vdash\psi$. This means $\varphi$ being logically decidable is equivalent to $\vdash\varphi\lor\neg\varphi$.
Now, there's a model of intuitionistic first-order logic such that $\forall n:\mathbb N.P(n)\lor\neg P(n)$ is modeled by a computable function taking a natural and producing a witness of (the interpretation of) either $P(n)$ or $\neg P(n)$. This means (the interpretation of) $\forall n:\mathbb N.P(n)\lor\neg P(n)$ is true in this model if and only if $P$ is computationally decidable. LEM, in this context, would imply the statement that all decision problems are computationally decidable. LEM is, of course, false in this model. Therefore, if you assume LEM, this interpretation ceases to be a model.
The fact that certain formulas are derivable in intuitionistic first-order logic implies that corresponding predicates are computationally decidable. This implication itself, though, is demonstrated in the meta-logic where the definition of "computationally decidable" resides, and this is usually a classical logic.
For an informal (meta-)logic in which you define "computationally decidable", it's up to you whether you want to accept non-constructive proofs or not. However, there is a big difference between proving a formula in constructive logic which can be interpreted as a witness to the computational decidability of a predicate, and proving a formula which states in the constructive logic that a predicate is computationally decidable, which would involve constructive definitions of Turing machines and so forth. As an analogy, the circle as a (higher inductive) type in Homotopy Type Theory is very different from the construction of circles, e.g. $\mathbb R/\mathbb Z$, within Homotopy Type Theory. As a more contrived example, let's say I interpret the theory of abelian groups into $3\mathbb Z$, i.e. $1$ is interpreted as $3$. Then it is still the case that the "internal" notion of $3$, i.e. $1+1+1$, is different from $1$. |
In the picture below, what is the purpose of R1? Compared to a high pass filter with 1 resistor, how does this affect Fc?
Thank you.
This is basically a voltage divider between \$R_2\$ and \$R_1, C_1\$ in series, so the transfer function is the following: $${V_{out} \over V_{in}}={R_2 \over{R_2+R_1+{1\over j\omega C_1}}}={j\omega R_2C_1 \over 1+j\omega C_1(R_1+R_2)}$$
The first resistor limits the current into the capacitor, therefore it can charge slower, with a time constant of \$\tau=(R_1+R_2)C_1\$.
The cutoff frequency is smaller than without \$R_1\$: instead of $$f_c={1\over 2\pi R_2C_1}$$ it is $$f_c={1\over 2\pi(R_1+R_2)C_1}$$
Your voltage source is an "ideal" source. In practice, voltage sources vary, depending on how much current they need to source. This is modelled with an external series resistor - \$R1\$ in your diagram.
When you draw a lot of current, the output voltage will drop by \$V = IR\$. If \$R_2\$ is more than \$10 \times R_1\$, you can generally ignore \$R_1\$ for ballpark measurements. However, since \$R_2\$ and \$C\$ form a high pass filter, changing your selection for \$R_2\$ affects your value for \$C\$.
Purpose of 2 resistors in high pass filter?
This is a (1st order) high-pass attenuator. That is to say, the high-frequency asymptotic gain is less than one (unity).
An ordinary RC high-pass filter has a high-frequency gain that approaches unity. What if, instead, one desires that the high-frequency voltage gain be less than unity?
One approach would be to use the circuit in your question. At high enough frequencies, where the impedance of the capacitor is insignificant, the voltage gain is approximately
$$\frac{V_{out}}{V_{in}} \approx \frac{R_2}{R_1 + R_2} < 1$$
Of course, one must take into account the resistance of \$R_1\$ in the calculation of the corner frequency:
$$\omega_c = \frac{1}{(R_1 + R_2)C}$$
Note that, for a consistency check, as \$R_1 \rightarrow 0\$, we recover the equations for the ordinary RC high-pass filter.
In summary, this high-pass filter circuit gives an additional degree of freedom: the high-frequency attenuation can now be specified as a constraint.
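A quick numerical check of these formulas (a sketch with assumed component values, not taken from the question):

import numpy as np

R1, R2, C = 1e3, 9e3, 100e-9            # assumed values: ohms, ohms, farads

f_c = 1 / (2 * np.pi * (R1 + R2) * C)   # corner frequency in Hz (~159 Hz)
hf_gain = R2 / (R1 + R2)                # high-frequency asymptotic gain (0.9)

# |H(jw)| for H = jw*R2*C / (1 + jw*(R1+R2)*C), from 10 Hz to 1 MHz
f = np.logspace(1, 6, 6)
w = 2 * np.pi * f
H = (1j * w * R2 * C) / (1 + 1j * w * (R1 + R2) * C)
print(f_c, hf_gain, np.abs(H))

Well below f_c the magnitude rolls off; well above it, it flattens at hf_gain rather than at unity.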
I want to show another method without math:
At high frequency, the C is shorted, so the transfer function should have the "high frequency" gain
$$ \frac{R_{2}}{R_{1}+R_{2}} $$
So, the high frequency gain now won't be 1, but smaller; it depends on the ratio \$\frac{R_{1}}{R_{2}}\$.
The circuit's time constant basically is $$ \tau = R_{total} \times C_{1}=(R_{1}+R_{2}) \times C_{1} $$
And the well known first-order high-pass filter's transfer function is
$$ H(s)=\frac{1}{1+1/(s\tau)} $$
Combining the high frequency gain and substituting \$\tau\$, we get the whole transfer function:
$$ H(s)=\frac{R_{2}}{R_{1}+R_{2}} \frac{1}{1+\frac{1}{s(R_{1}+R_{2})C_{1}}} $$ |
Let $T: \R^n \to \R^m$ be a linear transformation. Suppose that the nullity of $T$ is zero.
If $\{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k\}$ is a linearly independent subset of $\R^n$, then show that $\{T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k) \}$ is a linearly independent subset of $\R^m$.
Let $V$ denote the vector space of all real $2\times 2$ matrices. Suppose that the linear transformation from $V$ to $V$ is given as below.\[T(A)=\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}A-A\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}.\]Prove or disprove that the linear transformation $T:V\to V$ is an isomorphism.
Let $G, H, K$ be groups. Let $f:G\to K$ be a group homomorphism and let $\pi:G\to H$ be a surjective group homomorphism such that the kernel of $\pi$ is included in the kernel of $f$: $\ker(\pi) \subset \ker(f)$.
Define a map $\bar{f}:H\to K$ as follows. For each $h\in H$, there exists $g\in G$ such that $\pi(g)=h$ since $\pi:G\to H$ is surjective. Define $\bar{f}:H\to K$ by $\bar{f}(h)=f(g)$.
(a) Prove that the map $\bar{f}:H\to K$ is well-defined.
(b) Prove that $\bar{f}:H\to K$ is a group homomorphism.
Let $\calF[0, 2\pi]$ be the vector space of all real valued functions defined on the interval $[0, 2\pi]$. Define the map $f:\R^2 \to \calF[0, 2\pi]$ by\[\left(\, f\left(\, \begin{bmatrix}\alpha \\\beta\end{bmatrix} \,\right) \,\right)(x):=\alpha \cos x + \beta \sin x.\]We put\[V:=\im f=\{\alpha \cos x + \beta \sin x \in \calF[0, 2\pi] \mid \alpha, \beta \in \R\}.\]
(a) Prove that the map $f$ is a linear transformation.
(b) Prove that the set $\{\cos x, \sin x\}$ is a basis of the vector space $V$.
(c) Prove that the kernel is trivial, that is, $\ker f=\{\mathbf{0}\}$. (This yields an isomorphism of $\R^2$ and $V$.)
(d) Define a map $g:V \to V$ by\[g(\alpha \cos x + \beta \sin x):=\frac{d}{dx}(\alpha \cos x+ \beta \sin x)=\beta \cos x -\alpha \sin x.\]Prove that the map $g$ is a linear transformation.
(e) Find the matrix representation of the linear transformation $g$ with respect to the basis $\{\cos x, \sin x\}$.
Suppose that the vectors\[\mathbf{v}_1=\begin{bmatrix}-2 \\1 \\0 \\0 \\0\end{bmatrix}, \qquad \mathbf{v}_2=\begin{bmatrix}-4 \\0 \\-3 \\-2 \\1\end{bmatrix}\]form a basis for the null space of a $4\times 5$ matrix $A$. Find a vector $\mathbf{x}$ such that\[\mathbf{x}\neq0, \quad \mathbf{x}\neq \mathbf{v}_1, \quad \mathbf{x}\neq \mathbf{v}_2,\]and\[A\mathbf{x}=\mathbf{0}.\]
(Stanford University, Linear Algebra Exam Problem)
Let $V$ be the subspace of $\R^4$ defined by the equation\[x_1-x_2+2x_3+6x_4=0.\]Find a linear transformation $T$ from $\R^3$ to $\R^4$ such that the null space $\calN(T)=\{\mathbf{0}\}$ and the range $\calR(T)=V$. Describe $T$ by its matrix $A$.
A hyperplane in $n$-dimensional vector space $\R^n$ is defined to be the set of vectors\[\begin{bmatrix}x_1 \\x_2 \\\vdots \\x_n\end{bmatrix}\in \R^n\]satisfying a linear equation of the form\[a_1x_1+a_2x_2+\cdots+a_nx_n=b,\]where $a_1, a_2, \dots, a_n$ and $b$ are real numbers and at least one of $a_1, a_2, \dots, a_n$ is nonzero.
Consider the hyperplane $P$ in $\R^n$ described by the linear equation\[a_1x_1+a_2x_2+\cdots+a_nx_n=0,\]where $a_1, a_2, \dots, a_n$ are some fixed real numbers and not all of these are zero.(The constant term $b$ is zero.)
Then prove that the hyperplane $P$ is a subspace of $\R^n$ of dimension $n-1$.
Let $n$ be a positive integer. Let $T:\R^n \to \R$ be a non-zero linear transformation. Prove the following.
(a) The nullity of $T$ is $n-1$. That is, the dimension of the nullspace of $T$ is $n-1$.
(b) Let $B=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}\}$ be a basis of the nullspace $\calN(T)$ of $T$. Let $\mathbf{w}$ be an $n$-dimensional vector that is not in $\calN(T)$. Then\[B'=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}, \mathbf{w}\}\]is a basis of $\R^n$.
(c) Each vector $\mathbf{u}\in \R^n$ can be expressed as\[\mathbf{u}=\mathbf{v}+\frac{T(\mathbf{u})}{T(\mathbf{w})}\mathbf{w}\]for some vector $\mathbf{v}\in \calN(T)$.
Let $A$ be the matrix for a linear transformation $T:\R^n \to \R^n$ with respect to the standard basis of $\R^n$. We assume that $A$ is idempotent, that is, $A^2=A$. Then prove that\[\R^n=\im(T) \oplus \ker(T).\]
(a) Let $A=\begin{bmatrix}1 & 2 & 1 \\3 &6 &4\end{bmatrix}$ and let\[\mathbf{a}=\begin{bmatrix}-3 \\1 \\1\end{bmatrix}, \qquad \mathbf{b}=\begin{bmatrix}-2 \\1 \\0\end{bmatrix}, \qquad \mathbf{c}=\begin{bmatrix}1 \\1\end{bmatrix}.\]For each of the vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$, determine whether the vector is in the null space $\calN(A)$. Do the same for the range $\calR(A)$.
(b) Find a basis of the null space of the matrix $B=\begin{bmatrix}1 & 1 & 2 \\-2 &-2 &-4\end{bmatrix}$.
Let $A$ be a real $7\times 3$ matrix such that its null space is spanned by the vectors\[\begin{bmatrix}1 \\2 \\0\end{bmatrix}, \begin{bmatrix}2 \\1 \\0\end{bmatrix}, \text{ and } \begin{bmatrix}1 \\-1 \\0\end{bmatrix}.\]Then find the rank of the matrix $A$.
(Purdue University, Linear Algebra Final Exam Problem)
Let $R$ be a commutative ring with $1$ and let $G$ be a finite group with identity element $e$. Let $RG$ be the group ring. Then the map $\epsilon: RG \to R$ defined by\[\epsilon(\sum_{i=1}^na_i g_i)=\sum_{i=1}^na_i,\]where $a_i\in R$ and $G=\{g_i\}_{i=1}^n$, is a ring homomorphism, called the augmentation map and the kernel of $\epsilon$ is called the augmentation ideal.
(a) Prove that the augmentation ideal in the group ring $RG$ is generated by $\{g-e \mid g\in G\}$.
(b) Prove that if $G=\langle g\rangle$ is a finite cyclic group generated by $g$, then the augmentation ideal is generated by $g-e$. |
Does it mean that a given rotation in 4-dimensional Euclidean space cannot be associated with a unique axis ($\hat{\textbf{n}}$) of rotation? If yes, why is that the case?
Yes, this is absolutely true. The notion of a one dimensional axis is an "accident" of three dimensions. Rotations transform planar (dimension 2) linear subspaces of Euclidean space, and so one needs to specify the transformed plane and the rotation angle to specify the rotation.
In 3 dimensions we can cheat a little: a plane is uniquely defined by a unit normal vector, and the rotation angle can be encoded as the length of this vector. This is what we mean by an axis. The axis is the untransformed space of the rotation; the 3D space splits into two orthogonal, invariant spaces, the former being the plane of rotation, which is invariant but transformed (i.e. nontrivially bijectively mapped to itself), and the latter the axis, which is both invariant and untransformed. In 4 and higher dimensions, the invariant spaces are of 2 or higher dimensions.
A member of the Lie algebra of a rotation group (with the algebra written as a faithful matrix representation) is a skew-symmetric matrix, i.e. an entity of the form $\sum\limits_i X_i \wedge Y_i$ where the $X_i$ and $Y_i$ are 1D vectors in the Euclidean space. A general rotation matrix is then of the form $\exp\left(\sum\limits_i X_i \wedge Y_i\right)$. Things get kind of complicated in 4 and higher dimensions; the most general thing one can say is that a general proper orthogonal transformation on $N$ dimensional space can be decomposed as $R_1\circ R_2\circ\,\cdots R_{N\,\mathrm{div}\, 2}$ where each of the $R_i$ is a rotation that bijectively transforms a plane into itself and leaves the plane's complement invariant. However, the planes for each of the $R_i$ are not in general the same plane.
Further Questions and Useful Rotation Properties
User John Dvorak points out:
I would think that $R_1\circ R_2\circ\,\cdots R_{N\,\mathrm{div}\, 2}$ would always be pairwise orthogonal. Is that not the case?
This is indeed absolutely true and it is worth sketching the proof to get more insight into a higher dimensional rotation.
Let our rotation matrix be $R=\exp(H)$ with $H=\sum\limits_i X_i \wedge Y_i\in \mathfrak{so}(N)$ as above. Then there exists another orthogonal transformation $\tilde{R}$ (i.e. $\tilde{R}\in \mathrm{SO}(N)$) that, through similarity transformation, reduces the skew symmetric $H\in \mathfrak{so}(N)$ to block diagonal form:
$$H = \tilde{R}\,\mathrm{diag}(\Lambda_1,\,\Lambda_2,\,\cdots)\,\tilde{R}^T=\tilde{R}\,\mathrm{diag}(\Lambda_1,\,\Lambda_2,\,\cdots)\,\tilde{R}^{-1}$$
where each of the blocks is of the form:
$$\Lambda_j=\left(\begin{array}{cc}0&-\theta_j\\\theta_j&0\end{array}\right)$$
with $\theta_j\in\mathbb{R}$ being a rotation angle and that, if $N$ is odd, there is also a $1\times1$ zero block left over.
Therefore, if we put:
$$H_j = \tilde{R}\,\mathrm{diag}(0,\,0,\,\cdots,\,\Lambda_j,\cdots)\,\tilde{R}^T$$
then $R_j=\exp(H_j)$ with $R_1\circ R_2\circ\,\cdots R_{N\,\mathrm{div}\, 2}$ are then readily seen to make up the decomposition with the properties that John claims, to wit:
1. The $R_j$ are each rotations, each of which transforms one plane only, and each also has a dimension $N-2$ invariant and untransformed space (the analogue of the "axis");
2. The planes transformed by the $R_j$ are mutually orthogonal, and indeed are the planes spanned by the unit vectors $\tilde{R}\,\hat{e}_{2\,j}$ and $\tilde{R}\,\hat{e}_{2\,j+1}$, where the $\hat{e}_j$ are the orthonormal basis in which all the operators discussed have the matrices written above;
3. (as a consequence of 2.) the $R_j$ are mutually commuting.
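One can see this decomposition concretely with a small numerical experiment (a minimal sketch assuming NumPy/SciPy; it relies on the fact that the real Schur form of a normal matrix is block diagonal):

import numpy as np
from scipy.linalg import schur, expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
H = M - M.T                       # a generator in so(4): skew-symmetric

# real Schur decomposition H = Z T Z^T with Z orthogonal; for skew-symmetric
# H, T is block diagonal with 2x2 blocks [[0, -theta_j], [theta_j, 0]]
T, Z = schur(H, output='real')
print(np.round(T, 3))

# exp(H) then factors into commuting rotations of the two mutually
# orthogonal planes spanned by consecutive column pairs of Z
R = expm(H)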
Thus we can easily see that:
1. If the dimension $N$ is odd, there is always a dimension 1 invariant, untransformed space, corresponding to the 1D zero block cited above, further to the invariant spaces described below;
2. If the dimension is even, a nontrivial proper orthogonal transformation's untransformed space can be any of the dimensions $0,\,2,\,4,\,\cdots,\,N-2$. The invariant spaces are of dimensions $0,\,2,\,4,\,\cdots,\,N$.
This decomposition is about one particular rotation operator and is not to be confused with the notion of Canonical Co-ordinates of the Second Kind (see Chapter 1, Proposition 3.3 of V.V. Gorbatsevich, E.B. Vinberg, "Lie Groups and Lie Algebras I: Foundations of Lie Theory and Lie Transformation Groups", Springer, 2013), which are a generalized notion of Euler Angles. Here, a set of $H_j\in\mathfrak{so}(N)$ for $j=1,\,\cdots,\,N$ (note, there are now $N$ of them, not $N\,\mathrm{div}\,2$ of them) is chosen as a basis, i.e. to span $\mathfrak{so}(N)$. Then the following are true:
1. The set $\mathbf{G}=\left\{\left.\prod\limits_{j=1}^N\,\exp(\theta_j\,H_j)\,\right|\,\theta_j\in\mathbb{R}\right\}$ contains a neighborhood of the identity in $\mathrm{SO}(N)$;
2. If, further, the $H_j$ are orthogonal with respect to the Killing form $\langle X,\,Y\rangle=\mathrm{tr}(\mathrm{ad}(X)\,\mathrm{ad}(Y))$, then the set $\mathbf{G}$ above is the whole of $\mathrm{SO}(N)$.
Property 1, as shown in the Gorbatsevich & Vinberg reference cited above, is a general and fundamental property of all Lie groups (if we replace $\mathfrak{so}(N)$ by the group's Lie algebra and $\mathrm{SO}(N)$ by the group); property 2 holds for compact semisimple ones only.
If the similarity transformation I have here pulled out of thin air seems mysterious, readers may be more familiar with a re-ordered version of the similarity transformation $\tilde{R}$ above, where we decompose a skew-symmetric, closed 2-form $\omega$ in the even dimensional case so that its matrix $\Omega$ is:
$$\Omega = \tilde{R}\; \left(\begin{array}{cc}0&-\mathrm{id}_{\frac{N}{2}}\\\mathrm{id}_{\frac{N}{2}}&0\end{array}\right)\;\tilde{R}^T$$
which we implicitly do whenever we label a symplectic space with (in general nonunique) "canonical co-ordinates" so that $\omega$ then has the matrix:
$$\Omega = \left(\begin{array}{cc}0&-\mathrm{id}_{\frac{N}{2}}\\\mathrm{id}_{\frac{N}{2}}&0\end{array}\right)$$
Here we have a different usage of the word "canonical", this time as used in Hamiltonian mechanics. The word "canonical" well and truly needs a well pensioned retirement as it has worked so hard in Physics! |
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why, in the definition of algebraic closure, do we need '$\overline F$ is algebraic over $F$'? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition, do we get a different result?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition or obtained from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable
Let $g : [0,\frac{1}{2}] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$, for all $n ≥ 1$. Show that $\lim_{n→∞} n!g_n(t) = 0$, for all $t ∈ [0,\frac{1}{2}]$.
Can you give some hint?
My attempt:- $t\in [0,1/2]$ Consider the sequence $a_n(t)=n!g_n(t)$
If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
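A standard estimate (a sketch, not from the original exchange): by induction, $|g_n(t)| \le \|g\|_\infty \frac{t^{n-1}}{(n-1)!}$, so $|n!\,g_n(t)| \le \|g\|_\infty\, n\, t^{n-1} \le \|g\|_\infty\, n\, 2^{-(n-1)} \to 0$ uniformly on $[0,\frac{1}{2}]$.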
I have a bilinear functional that is bounded from below
I try to approximate the minimum by an ansatz function that is a linear combination of independent functions from the proper function space
I now obtain an expression that is bilinear in the coefficients
using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0)
I get a set of $n$ equations, with $n$ the number of coefficients
a set of $n$ linear homogeneous equations in the $n$ coefficients
Now instead of "directly attempting to solve" the equations for the coefficients, I rather look at the secular determinant, which should be zero, as otherwise no nontrivial solution exists
This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz.
Avoiding the necessity to solve for the coefficients.
I have problems formulating the question now. But it strikes me that a direct solution of the equations can be circumvented and instead the values of the functional are directly obtained by using the condition that the determinant is zero.
I wonder if there is something deeper in the background, or so to say a more very general principle.
If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome, with digitsum($z$) = digitsum($x$).
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67–73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!! |
A couple of months ago, I stumbled across an amusing bit of academic woo: “Quantum Mind and Social Science.” The misrepresentations, false dichotomies and non sequiturs of that piece prompted me to wonder what a good litmus test for knowing quantum mechanics might look like. Joshua offered a simple criterion: be able to pick the Schrödinger Equation out of a line-up. At a slightly higher level, I suggested being able to describe in the Heisenberg picture the time evolution of a harmonic oscillator coherent state, and explaining why states of the hydrogen atom with the same n but different angular momentum number l are degenerate. You can’t discuss the relationship between classical and quantum physics without bringing up coherent states eventually, and a good grounding in the basics should include the Schrödinger and Heisenberg pictures. (That’s why I wrote problem 5 in this homework assignment.)
The excited states of the hydrogen atom are our prototype for understanding how the periodic table works, and it’s often the first place one runs into the mathematics of angular momentum. Unfortunately, too many standard treatments of introductory QM say that hydrogen has “accidental degeneracies”: these states have the same energies as those states for no spectacularly interesting reason. But we are trained to associate degeneracies with symmetries — when two sets of eigenstates have the same eigenvalues, we expect some symmetry to be at work. So, is there a symmetry in the hydrogen atom above and beyond the familiar rotational kind, a symmetry which They haven’t been telling us about?
I’d like to explore this topic over a few posts. First, I’ll build up some very general machinery for solving problems, and then I’ll apply those techniques to the hydrogen atom; by that point, we should have a fair amount of knowledge with which we can move in any one of several interesting directions. To begin, let’s familiarize ourselves with the behavior of a superalgebra.

INTRODUCTION
In fundamental quantum mechanics, we learn that an algebra of operators is defined by commutation relations among those operators. For example, the canonical operators of position and momentum have the commutator [tex]\comm{x}{p} = i\hbar[/tex]. A more intricate case is the algebra of angular momentum operators, which we encountered when exploring the rotational symmetries of 3D space. To generalize this concept, we define an anticommutator, which relates operators in the same way as an ordinary commutator, but with the opposite sign:
[tex]\{A,B\} \equiv AB + BA.[/tex]
If operators are related by anticommutators as well as commutators, we say that they are part of a superalgebra. Let’s say we have a quantum system described by a Hamiltonian [tex]\mathcal{H}[/tex] and a set of [tex]N[/tex] self-adjoint operators [tex]Q_i[/tex], each of which commutes with the Hamiltonian. We shall call this system supersymmetric if the following anticommutator is valid for all [tex]i,j=1,2,\ldots,N[/tex]:
[tex]\{Q_i,Q_j\} = \mathcal{H}\delta_{ij}.[/tex]
If this is the case, then we call the operators [tex]Q_i[/tex] the system’s supercharges. [tex]\mathcal{H}[/tex] will be termed the SUSY Hamiltonian, SUSY being a convenient abbreviation for whichever variation of “supersymmetry” is grammatically appropriate. A SUSY algebra is characterized by its number of supercharges, which we typically denote [tex]N[/tex].
Because the [tex]N = 2[/tex] case exemplifies many properties of general SUSY theories, it is worthwhile to work it out in some detail. We require two supercharges, [tex]Q_1 = Q_1^\dag[/tex] and [tex]Q_2 = Q_2^\dag[/tex]. The SUSY algebra we defined a moment ago implies the following relations:
[tex]Q_1 Q_2 = -Q_2 Q_1,\ \mathcal{H} = 2Q_1^2 = 2Q_2^2 = Q_1^2 + Q_2^2.[/tex]
It is sometimes more convenient to work with a “complex” supercharge that is not self-adjoint. (The convention we choose depends upon the given information we have to work with!) If we make linear combinations of our supercharges,
[tex]Q = \frac{1}{\sqrt{2}}(Q_1 + iQ_2),\ Q^\dag = \frac{1}{\sqrt{2}}(Q_1 - iQ_2),[/tex]
then the SUSY algebra implies [tex]\{Q,Q^\dag\} = \mathcal{H}[/tex]. To make this a little more concrete, we can realize a specific incarnation of the superalgebra: let [tex]H_1[/tex] be some Hamiltonian of interest, and suppose that we can factor [tex]H_1[/tex] into the product of an operator and its adjoint:
[tex]H_1 = A^\dag A.[/tex]
Note that this is almost the form of the harmonic oscillator Hamiltonian, except for an energy shift:
[tex]H_{\rm SHO} = \hbar\omega\left(a^\dag a + \frac{1}{2}\right).[/tex]
So, this is not an unfamiliar form for a Hamiltonian. Swapping the order of the factors gives another operator, which you can verify is also Hermitian:
[tex]H_2 = AA^\dag.[/tex]
With [tex]A[/tex] in hand, define the two operators
[tex]Q = \left(\begin{array}{cc} 0 & 0 \\ A & 0 \\ \end{array}\right)[/tex] and [tex]Q^\dag = \left(\begin{array}{cc} 0 & A^\dag \\ 0 & 0 \\ \end{array}\right).[/tex] Matrix arithmetic verifies that
[tex]\{Q,Q^\dag\} = \left(\begin{array}{cc} A^\dag A & 0 \\ 0 & AA^\dag \\ \end{array}\right),[/tex]
so we can say that the anticommutator of our two charges gives a Hamiltonian [tex]\mathcal{H}[/tex] which is block diagonal,
[tex]\mathcal{H} = \left(\begin{array}{cc} H_1 & 0 \\ 0 & H_2 \\ \end{array} \right).[/tex]
[tex]H_1[/tex] and [tex]H_2[/tex] can be considered two Hamiltonians acting on subspaces of the original Hilbert space associated with [tex]\mathcal{H}[/tex].
PARTNER POTENTIALS
What exactly is so special about operators of the forms [tex]A^\dag A[/tex] and [tex]AA^\dag[/tex]? Given a Hamiltonian for some system, [tex]H_1[/tex], if it can be factored into the product of two operators [tex]A^\dag A[/tex], then we can construct another Hamiltonian [tex]H_2 = AA^\dag[/tex] which has almost exactly the same energy eigenvalue spectrum. These “isospectral” Hamiltonians may not describe the same physics, and their respective potentials [tex]V_1(x)[/tex] and [tex]V_2(x)[/tex] may look radically different. As usual, a degeneracy in the energy levels corresponds to a symmetry; in this case, the symmetry is the SUSY between our two Hamiltonians.
First, let’s take a look at the eigenstates of Hamiltonian number 1. These states satisfy the relationship
[tex]H_1 \ket{\psi_n^{(1)}} = A^\dag A \ket{\psi_n^{(1)}} = E_n^{(1)} \ket{\psi_n^{(1)}}.[/tex]
Now, a surprising thing happens: the operator [tex]A[/tex] maps the eigenstates of Hamiltonian 1 into eigenstates of Hamiltonian 2. Look:
[tex]H_2 A\ket{\psi_n^{(1)}} = AA^\dag A \ket{\psi_n^{(1)}},[/tex]
but by the equation just above, this means that
[tex]H_2 A\ket{\psi_n^{(1)}} = E_n^{(1)} A\ket{\psi_n^{(1)}}.[/tex]
The same logic works in the opposite direction, connecting eigenstates of [tex]H_2[/tex] with those of [tex]H_1[/tex]. The eigenstates behind door number 2 satisfy
[tex]H_2 \ket{\psi_n^{(2)}} = AA^\dag \ket{\psi_n^{(2)}} = E_n^{(2)}\ket{\psi_n^{(2)}},[/tex]
so by acting with the operator [tex]A^\dag[/tex],
[tex]H_1 A^\dag\ket{\psi_n^{(2)}} = A^\dag AA^\dag \ket{\psi_n^{(2)}} = E_n^{(2)}A^\dag \ket{\psi_n^{(2)}}.[/tex]
We have shown that [tex]H_1[/tex] and [tex]H_2[/tex] are isospectral. For every eigenstate of one, there lurks an eigenstate of the other with the same energy. One exception is important: if
[tex]A \ket{\psi_n^{(1)}} = 0,[/tex]
that is, if [tex]H_1[/tex] has a zero-energy ground state, the proof does not work, and there is no need for [tex]H_2[/tex] to have a zero-energy ground state. In fact, as we’ll see momentarily, only one of [tex]H_1[/tex] and [tex]H_2[/tex] may have a zero-energy ground state; for consistency, we usually arrange matters so that [tex]H_1[/tex] has the extra eigenstate.
SUPERPOTENTIALS
Most of the time, we find ourselves dealing with Hamiltonians of the form
[tex]H = \frac{p^2}{2m} + V(x),[/tex]
which, knowing that [tex]p = -i\hbar \partial_x[/tex], we can also write as
[tex]H = -\frac{\hbar^2}{2m} \partial_x^2 + V(x).[/tex]
If we want to factor this [tex]H[/tex] into an operator and its adjoint, we should probably start with an operator which is first-order in the derivative with respect to [tex]x[/tex], thus:
[tex]A = \frac{\hbar}{\sqrt{2m}} \partial_x + W(x).[/tex]
Here, [tex]W(x)[/tex] is some real function of [tex]x[/tex] which we shall call the superpotential. Taking the adjoint of [tex]A[/tex] flips the sign on the derivative term; you can deduce why by observing that the momentum [tex]p = -i\hbar\partial_x[/tex] is an observable and therefore self-adjoint, so [tex]\partial_x[/tex] by itself must pick up a minus sign under the dagger.
[tex]A^\dag = -\frac{\hbar}{\sqrt{2m}} \partial_x + W(x).[/tex]
We can connect the superpotential to [tex]H_1[/tex]’s ordinary potential,
[tex]V_1(x) = W^2(x) - \frac{\hbar}{\sqrt{2m}} \partial_x W(x).[/tex]
This relationship is known as the Riccati Equation. If we reverse the order of our operators, it turns out that [tex]H_2[/tex] is a Hamiltonian with a new potential [tex]V_2(x)[/tex], given by
[tex]V_2(x) = W^2(x) + \frac{\hbar}{\sqrt{2m}} \partial_x W(x).[/tex]
We recognize this as the Riccati Equation with a change of sign. [tex]V_1(x)[/tex] and [tex]V_2(x)[/tex] are known as partner potentials, related through the superpotential [tex]W(x)[/tex].
With one more step, we can relate the superpotential to the ground state wavefunction. Note that the ground state of [tex]H_1[/tex] is annihilated by [tex]A[/tex], satisfying the relation
[tex]A\ket{\psi_0^{(1)}} = 0.[/tex]
Looking back at the form of [tex]A[/tex], we see that this is a first-order differential equation, and we can write its solution as an exponential.
[tex]\psi_0^{(1)}(x) \propto \exp\left[-\frac{\sqrt{2m}}{\hbar} \int_0^x W(y) dy\right].[/tex]
Note that the zero-energy ground state of [tex]H_2[/tex] would be annihilated by [tex]A^\dag[/tex] and would therefore be proportional to
[tex]\exp\left[\frac{\sqrt{2m}}{\hbar} \int_0^x W(y) dy\right].[/tex]
Only one of these two expressions can give a normalizable state: if one behaves nicely, the other will blow up. That’s why only one of the two partner Hamiltonians can have a ground state of zero energy.
Often, these equations are shown in “natural units” where [tex]\hbar = 2m = 1[/tex]. This can always be done by changing the units of [tex]x[/tex], and it makes successive steps in the calculations much cleaner. I think it is nice to see the equations with all the original units in place at least once; in the following sections, however, when units are not illuminating I will set unnecessary constants to unity.
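As an illustration of the partner-potential construction (my own sketch, not part of the original post), here is a short sympy check in the natural units just mentioned ([tex]\hbar = 2m = 1[/tex], so [tex]A = \partial_x + W[/tex] and [tex]H_{1,2} = -\partial_x^2 + V_{1,2}[/tex]), with the assumed superpotential [tex]W(x) = x[/tex]:

```python
import sympy as sp

x = sp.symbols('x', real=True)
W = x  # assumed superpotential; W = x reproduces the harmonic oscillator

# Partner potentials from the Riccati relations: V_{1,2} = W^2 -/+ W'
V1 = sp.expand(W**2 - sp.diff(W, x))   # x**2 - 1
V2 = sp.expand(W**2 + sp.diff(W, x))   # x**2 + 1

# Zero-energy ground state of H_1, annihilated by A = d/dx + W
psi0 = sp.exp(-sp.integrate(W, x))     # exp(-x**2/2), normalizable
# The mirrored candidate exp(+x**2/2) for H_2 blows up, as claimed above.

# Check that H_1 psi0 = (-d^2/dx^2 + V1) psi0 vanishes
print(sp.simplify(-sp.diff(psi0, x, 2) + V1*psi0))  # -> 0
```

The two spectra then agree level by level except for this extra zero-energy state of [tex]H_1[/tex].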
READING
Fred Cooper, Avinash Khare, Uday Sukhatme, “Supersymmetry and Quantum Mechanics” (5 May 1994).
Finding area between a curve and a line
In the previous article on Area under the curve, we discussed how to find the area enclosed by a curve and the x-axis between a given set of x-coordinates. In the upcoming discussion, we will try to find the area between a curve and a line, between a given set of points. We can divide this article into three cases, i.e. when the given curve is a circle, a parabola, or an ellipse. We will be dealing with these curves in this discussion.
Let us try to understand the above concept using some examples.
Example: Find the area of the region in the first quadrant enclosed by the x-axis, the line \(y=x\) and the circle \(x^2+y^2=36\).
Solution: Before finding the enclosed area, we need to solve the given equations of the curve and the line, so as to find their points of intersection. On solving the equations \(x^2+y^2=36\) and \(y=x\), we see that the points of intersection are \((3\sqrt{2},3\sqrt{2})\) and \((-3\sqrt{2},-3\sqrt{2})\).
After finding the point of intersection, draw a perpendicular from point B to the x-axis, meeting it at the point M.
The total shaded region is OMABO, which can be further divided into two regions i.e. OBMO and BMAB. The graph of the given line and the curve can be plotted as shown below:
Let us find the areas of the regions OBMO and BMAB separately.
Area of the region OBMO is given as:
\(\int_{0}^{3\sqrt{2}} y.dx = \int_{0}^{3\sqrt{2}} x.dx = \frac{1}{2} \left ( x^{2} \right )_{0}^{3\sqrt{2}}\)
\(= 9\)
Now let us find the area of the region BMAB:
\(\int_{3\sqrt{2}}^{6} y.dx = \int_{3\sqrt{2}}^{6} \sqrt{36 - x^{2}}.dx\)
= \(\left ( \frac{x}{2}\sqrt{36-x^{2}} + \frac{1}{2} \times 36 \times \sin^{-1}\left ( \frac{x}{6} \right ) \right )_{3\sqrt{2}}^{6}\)
= [½ x 6 x 0 + ½ x 36 x π/2] – [ ½ x 3√2 x 3√2 + ½ x 36 x π/4] = 9π – 9 – 9π/2 = 9π/2 – 9
Now adding the areas of the two regions we get,
9 + 9π/2-9
= 9π/2 sq.units.
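A quick symbolic check of this result (my own sketch, not part of the article):

```python
import sympy as sp

x = sp.symbols('x')
# First-quadrant region: under y = x up to x = 3*sqrt(2),
# then under the circle arc y = sqrt(36 - x^2) up to x = 6
area = sp.integrate(x, (x, 0, 3*sp.sqrt(2))) \
     + sp.integrate(sp.sqrt(36 - x**2), (x, 3*sp.sqrt(2), 6))
print(sp.simplify(area))  # 9*pi/2
```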
Note that, in case the value of the area of some particular region comes out to be negative, we need to take the absolute value of the area and add it up to the remaining areas.
Consider another example when we have a parabola.
Example: Find the area of the region bounded by the curve \(y=x^2\) and the line \(y=2\).
Solution: For the above case, we would get the following figure,
From the given figure, we can see that the parabola is symmetric about the y-axis. So, if we find the enclosed area on one side of the y-axis and double it up, we will get the area of the complete region. Hence for this case, we need to consider horizontal strips, starting from y=0 to y=2.
Hence, the equation for area will be,
2 x (area of the region ONBO, bounded by the curve, y-axis and the lines y=0 and y=2)
\(2\int_{0}^{2} x.dy = 2\int_{0}^{2}\sqrt{y}\, dy = 2 \times \frac{2}{3} \left ( y^{3/2} \right )_{0}^{2}\)
= \(\frac{2^{7/2}}{3} = \frac{8\sqrt{2}}{3}\) sq. units.
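Again, a one-line symbolic check (my own sketch):

```python
import sympy as sp

y = sp.symbols('y')
area = 2 * sp.integrate(sp.sqrt(y), (y, 0, 2))
print(area)  # 8*sqrt(2)/3, i.e. 2**(7/2)/3
```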
In the same way, we can find the enclosed area in case of an ellipse.
To learn more about finding the area between a curve and a line, download Byju’s, The Learning App.
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
You can get a faster result by using a different way to complete the square: split the middle term$$z^4-3z^2+1=(z^2-1)^2-z^2=(z^2-z-1)(z^2+z-1)$$Now the remaining two quadratic equations are easy to solve.$$0=z^2-z-1\implies z=\frac12(1\pm\sqrt{5})\\0=z^2+z-1\implies z=\frac12(-1\pm\sqrt{5})\\$$The fifth unit roots satisfy the equation $z^5-1=0$. ...
If you actually expand $(1+z+z^2)(1+z+z^3)(1+z+z^4) - z(1+z)$ out, you will get:$$z^9+z^8+2z^7+3z^6+4z^5+4z^4+4z^3+3z^2+2z+1$$$$=5z^4+5z^3+5z^2+5z+5 \quad \text{(why?)}$$$$=5 \left( \frac{z^5-1}{z-1} \right)$$$$=0$$Therefore $(1+z+z^2)(1+z+z^3)(1+z+z^4) = z(1+z)$.In addition, you can use your idea of making the substitution $t=1+z$, so that way you ...
Note that if $z=\cos(2\pi/5)+i\sin(2\pi/5)=e^{2\pi i/5}\not=1$ then $z^5=1$. Therefore, before taking the product, we write each factor in a convenient way:$$\begin{align}(1+z+z^2)&=\frac{1-z^3}{1-z}=\frac{z^2-1}{z^2(1-z)}=-\frac{1+z}{z^2},\\(1+z+z^3)&=\frac{z^4(1+z+z^3)}{z^4}=\frac{1+z^2+z^4}{z^4}=\frac{1-z^6}{z^4(1-z^2)}=\frac{z(1-z)}{(1-z^2)}=\...
You need to be very careful with $\sqrt{a}$ or $a^{\frac{1}{2}}$ even with the real numbers and even more so with the complex numbers.With the real numbers, positive numbers have two square roots but it is easy to pick one as special (the positive one) and define $\sqrt{a}$ or $a^{\frac{1}{2}}$ to mean the positive square root. With some care, you can ...
If $x=\exp\left(\frac{\pi i}3\right)+\exp\left(-\frac{\pi i}3\right)$, then$$x^3=-1-1+3x=3x-2.$$But the equation $x^3=3x-2$ only has $2$ solutions: $1$ and $-2$. And since it is clear that both $\exp\left(\frac{\pi i}3\right)$ and $\exp\left(-\frac{\pi i}3\right)$ have absolute value equal to $1$ but one of them is $1$, it is clear that their sum is not $2$. ...
No. Consider the companion matrix of the polynomial $p(x)=x^4-2x^3-2x+1$. Its eigenvalues are the roots of $p(x)$, two of which are complex (non-real) numbers with absolute value $1$, none of which is a root of unity.
Evaluate first,$$\frac{\sin\frac{11\pi}{9}}{1+\cos\frac{11\pi}{9}}=\frac{2\sin\frac{11\pi}{18}\cos\frac{11\pi}{18}}{2\cos^2\frac{11\pi}{18}}=\frac{\sin\frac{11\pi}{18}}{\cos\frac{11\pi}{18}}=\tan\frac{11\pi}{18}$$So, since $\sin\frac{11\pi}{9}$ is negative, i.e. the 4th quadrant, the principal argument is $\frac{11\pi}{18}-\pi = -\frac{7\pi}{18} $Or, ...
Consider $z^5-1=0$So the roots of $$0=\dfrac{z^5-1}{z-1}=z^4+z^3+z^2+z+1$$ are $e^{2i\pi r/5},r=1,2,3,4$As $z\ne0,$ like Quadratic substitution question: applying substitution $p=x+\frac1x$ to $2x^4+x^3-6x^2+x+2=0$ , divide both sides by $z^2$Replace $z+\dfrac1z=w\implies w^2=?$ to find $$w^2-2+w+1=0$$ whose roots are $$2\cos\dfrac{2r\pi}5;r=1,2$$...
Set $y = z^2$, your equation is then equivalent to$$y^2 -3y +1=0.$$This is a quadratic equation and can be solve with the quadratic formula that you mentioned:$$y_{1,2} = \dfrac{3 \pm \sqrt{(-3)^2 - 4 \cdot 1}}{2}=\dfrac{3 \pm \sqrt{5}}{2}.$$Now observe that$$y_1 = \dfrac{3+\sqrt{5}}{2}= \dfrac{\frac{1}{2}+\sqrt{5} + \frac{5}{2}}{2} = \dfrac{\frac{1+2\...
I like your idea that first, you denote $$w=z^2-z.$$Then, recall that in polar form, you have$$c = r e^{i \theta} ,$$where $c \in \mathbb{C}$ is any complex number, $r \in \mathbb{R}$ is its distance from the origin, and $\theta \in [0,2\pi)$ is its phase.So now, you can first find the first root, i.e. $w_1$, in a "regular way":$$w_1 = \sqrt[4]{81} = ...
By scaling (replacing $z,z_j$ by $\frac{z}{R_2}, \frac{z_j}{R_2}$) it seems that you want to prove the inequality:$z\ne w, |z|=r, |w|=\rho <1$ implies $\frac{|1-z\bar w|}{|z-w|}\ge \frac{1+r\rho}{r+\rho}$.If we let $z=re^{i\alpha}, w=\rho e^{i\beta}$ and we square, this translates to:$\frac{1+(r\rho)^2-2r \rho cos(\alpha-\beta)}{r^2+\rho^2-2r \rho ...
If a polynomial has two conjugate roots, let $x\pm iy$, it must be a multiple of$$(z-x-iy)(z-x+iy)=z^2-2xz+x^2+y^2.$$Notice that all coefficients are real. In fact, the roots of a polynomial of real coefficients are either reals or pairs of complex conjugates. Such a polynomial can always be factored as the product of quadratic and linear factors with ...
Yes, your formula holds true.Speaking the "algebraic" language, we identify a set $W$ of words over an alphabet $A$ with the corresponding element $\sum_{w\in W}w$ of a free algebra on $A$ (in our case, the $\mathbb{Z}[\omega]$-algebra). If $W_n$ corresponds to the set of "allowed" words over $\{V,U,U^\dagger\}$ of length $n$ (and we consider the empty ...
I think your 3rd statement is wrong. See, with complex numbers we don't multiply two surds as we do normally with real numbers. For example, $\sqrt{-1}×\sqrt{-1}=-1$, not $1$: we don't use the identity $a^n×b^n=(ab)^n$. We instead multiply conserving the minus sign, or you can say conserving the iota, like $\sqrt{-1}×\sqrt{-1}=\sqrt{(-1)^2}=-1$. So in ...
Note that \begin{align}(e^{i\pi/3}+e^{-i\pi/3})^3&=e^{i\pi}+3e^{i\pi/3}+3e^{-i\pi/3}+e^{-i\pi}\\&=-1+3(e^{i\pi/3}+e^{-i\pi/3})-1\end{align}$$\implies(e^{i\pi/3}+e^{-i\pi/3})^3-3(e^{i\pi/3}+e^{-i\pi/3})+2=0.$$ This cubic $x^3-3x+2=(x+2)(x-1)^2$ has two roots: $$e^{i\pi/3}+e^{-i\pi/3}=-2,1.$$ The former cannot be attained as $\pi/3$ is not a multiple ...
A strengthened version of the Gershgorin circle theorem does guarantee that if the circles are grouped into disjoint clusters, the number of eigenvalues in each cluster are equal to the number of disks there. The proof is as follows: let $A=(a_{ij})$, $D=diag(a_{ii})$ and $A=D+E$. Define $A_t=D+tE$ and imagine increasing $t$ from 0 to 1, keeping track of the ...
It's much easier to rearrange to $|z|\le|z-1|$. Geometrically, the distance of $z$ from $0$ is less than or equal to the distance of $z$ from $1$ in the complex plane, so the complex numbers $z$ satisfying this are those with $\Re(z)\le\frac{1}{2}$, or $x\leq\frac{1}{2}$.Alternatively, you can write $z=x+iy$ in $|z|\le|z-1|$ and squaring both sides gives $...
The condition $\lvert z-u\rvert<\lvert z\rvert$ simply means that $z$ is closer to $u$ than to $0$. So, consider the perpendicular bisector of the line segment joining $u$ to $0$. It divides $\mathbb C$ into two half-planes. Now, take the half-plane that contains $u$.
Let $ O $ be the origin, $ C $ be 1, $ P = 1 + \cos\left(\frac{11\pi}{9}\right) + i \sin \left(\frac{11\pi}{9} \right) $, and $ R $ be 2, all in the complex plane. Then, if you draw a picture of this, you see that $ O, P, R $ are all points on the circle of radius 1 centered at $ C $. $ \angle RCP $ is $ -\frac{7\pi}{9} $. By a basic result in geometry on ...
The principal value of Arg lies in $(-\pi,\pi]$, and $e^{2ni\pi}=1$, $e^{i\pi}=-1$. Given that $$Z=2 \cos 110^\circ ~e^{11i \pi/18}= |2\cos 110^\circ|\,(-1)\, e^{11i\pi/18} \implies \operatorname{Arg} Z= \pi+11\pi/18,$$ in order to bring this value into $(-\pi, \pi]$ we add $2n\pi$ to it, where $n=\pm 1, \pm 2, \pm 3,\ldots$ Finally, the principal value of $\operatorname{Arg} Z$ is $$\pi+ \frac{11 \pi}{18}...
Category: Ring theory

Problem 624
Let $R$ and $R'$ be commutative rings and let $f:R\to R'$ be a ring homomorphism. Let $I$ and $I'$ be ideals of $R$ and $R'$, respectively.
(a) Prove that $f(\sqrt{I}\,) \subset \sqrt{f(I)}$.
(b) Prove that $\sqrt{f^{-1}(I')}=f^{-1}(\sqrt{I'})$.
(c) Suppose that $f$ is surjective and $\ker(f)\subset I$. Then prove that $f(\sqrt{I}\,) =\sqrt{f(I)}$.

Problem 618
Let $R$ be a commutative ring with $1$ such that every element $x$ in $R$ is idempotent, that is, $x^2=x$. (Such a ring is called a Boolean ring.)
(a) Prove that $x^n=x$ for any positive integer $n$.
(b) Prove that $R$ does not have a nonzero nilpotent element.

Problem 543
Let $R$ be a ring with $1$. Suppose that $a, b$ are elements in $R$ such that \[ab=1 \text{ and } ba\neq 1.\]
(a) Prove that $1-ba$ is idempotent.
(b) Prove that $b^n(1-ba)$ is nilpotent for each positive integer $n$.
(c) Prove that the ring $R$ has infinitely many nilpotent elements.
I have read in a few places that the formula $$ \int_\mathbb{R} x(t)^3 \, dt \ = \ \int_{\mathbb{R}^2} \hat{x}(f_1)\hat{x}(f_2)\hat{x}(-f_1-f_2) \, d(f_1,f_2) $$ holds (where $\hat{x}$ denotes the Fourier transform of $x$), but without any description of the class of functions that $x$ belongs to.
Straightforward iterated application of the convolution theorem shows that this formula is true at least if $x \in \mathcal{S}(\mathbb{R})$. Being a bit more precise, it will (I think) be true at least for all $x \in W^{2,1}(\mathbb{R})$. But more generally:
Is it the case that for all $x \in L^2(\mathbb{R}) \cap L^3(\mathbb{R})$, the map $\,(f_1,f_2) \mapsto \hat{x}(f_1)\hat{x}(f_2)\hat{x}(-f_1-f_2)\,$ [known as the bispectrum of $x$] is absolutely integrable on $\mathbb{R}^2$ and the above formula holds?
(And if so, is there a reference that contains this fact?)
The Born interpretation states that for a particle with a wave function $\Psi(x)$, the total probability of finding that particle at some point in space is equal to $\int_{-\infty}^{\infty}\Psi(x)^*\Psi(x)dx = 1$.
Suppose we have a state $\rvert\psi\rangle$ in the Hilbert space $\mathcal{H} = \mathcal{L}^2(\mathbb{R})$. The position operator here is $\hat{x}$ with eigenvalues of $x$. In order to calculate probabilities, the state must be normalized. To check if $\rvert\psi\rangle$ is normalized, we calculate its norm: $$\langle\psi\rvert\psi\rangle = \langle\psi\rvert\hat{I}\rvert\psi\rangle = \langle\psi\rvert\left(\int_{-\infty}^{\infty}dx\rvert x\rangle\langle x\rvert\right) \rvert\psi\rangle = \int_{-\infty}^{\infty}dx\langle\psi\rvert x\rangle\langle x\rvert\psi\rangle = \int_{-\infty}^{\infty}\psi(x)^*\psi(x)dx.$$
Now suppose we have a state $\rvert\phi\rangle$ in the Hilbert space $\mathcal{H} = \mathcal{L}^2(\mathbb{R}^n)$. The position operator here is, if I understand correctly, $\hat{\textbf{r}}_n$ with vector eigenvalues of $\textbf{r}_n$ (per this answer: https://physics.stackexchange.com/a/126763/117677). Now again, we want to make sure that $\rvert\phi\rangle$ is normalized. Its norm (generalizing from the previous case) is given by $$\langle\phi\rvert\phi\rangle = \langle\phi\rvert\hat{I}\rvert\phi\rangle = \langle\phi\rvert\left(\int_{-\infty}^{\infty}d\textbf{r}_n\rvert \textbf{r}_n\rangle\langle \textbf{r}_n\rvert\right) \rvert\phi\rangle.$$ This doesn't make much sense to me, as we have the differential of a vector, $d\textbf{r}_n$, and the ket $\rvert\textbf{r}_n\rangle$, which is like double labeling a vector.
So how do you calculate the norm of a state in more than one dimension? Did I generalize incorrectly, or am I just missing some key intuition?
Basically 2 strings, $a>b$, go into the first box, which does division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box.
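As a sketch of the box diagram described in that message (my own illustration), the same Euclidean algorithm in Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: divide, keep the remainder, repeat."""
    while b != 0:
        a, b = b, a % b  # the "division box": r = a mod b, then recurse on (b, r)
    return a

print(gcd(252, 105))  # 21
```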
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
Here is a closely related pair of examples from operator theory, von Neumann's inequality and the theory of unitary dilations of contractions on Hilbert space, where things work for 1 or 2 variables but not for 3 or more.
In one variable, von Neumann's inequality says that if $T$ is an operator on a (complex) Hilbert space $H$ with $\|T\|\leq1$ and $p$ is in $\mathbb{C}[z]$, then $\|p(T)\|\leq\sup\{|p(z)|:|z|=1\}$. Szőkefalvi-Nagy's dilation theorem says that (with the same assumptions on $T$) there is a unitary operator $U$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T^n=PU^n|_H$ for each positive integer $n$.
These results extend to two commuting variables, as Ando proved in 1963. If $T_1$ and $T_2$ are commuting contractions on $H$, Ando's theorem says that there are commuting unitary operators $U_1$ and $U_2$ on a Hilbert space $K$ containing $H$ such that if $P:K\to H$ denotes orthogonal projection of $K$ onto $H$, then $T_1^{n_1}T_2^{n_2}=PU_1^{n_1}U_2^{n_2}|_H$ for each pair of nonnegative integers $n_1$ and $n_2$. This extension of Sz.-Nagy's theorem has the extension of von Neumann's inequality as a corollary: If $T_1$ and $T_2$ are commuting contractions on a Hilbert space and $p$ is in $\mathbb{C}[z_1,z_2]$, then $\|p(T_1,T_2)\|\leq\sup\{|p(z_1,z_2)|:|z_1|=|z_2|=1\}$.
Things aren't so nice in 3 (or more) variables. Parrott showed in 1970 that 3 or more commuting contractions need not have commuting unitary dilations. Even worse, the analogues of von Neumann's inequality don't hold for $n$-tuples of commuting contractions when $n\geq3$. Some have considered the problem of quantifying how badly the inequalities can fail. Let $K_n$ denote the infimum of the set of those positive constants $K$ such that if $T_1,\ldots,T_n$ are commuting contractions and $p$ is in $\mathbb{C}[z_1,\ldots,z_n]$, then $\|p(T_1,\ldots,T_n)\|\leq K\cdot\sup\{|p(z_1,\ldots,z_n)|:|z_1|=\cdots=|z_n|=1\}$. So von Neumann's inequality says that $K_1=1$, and Ando's Theorem yields $K_2=1$. It is known in general that $K_n\geq\frac{\sqrt{n}}{11}$. When $n>2$, it is not known whether $K_n\lt\infty$.
See Paulsen's book (2002) for more. On page 69 he writes:
The fact that von Neumann’s inequality holds for two commuting contractions but not three or more is still the source of many surprising results and intriguing questions. Many deep results about analytic functions come from this dichotomy. For example, Agler [used] Ando’s theorem to deduce an analogue of the classical Nevanlinna–Pick interpolation formula for analytic functions on the bidisk. Because of the failure of a von Neumann inequality for three or more commuting contractions, the analogous formula for the tridisk is known to be false, and the problem of finding the correct analogue of the Nevanlinna–Pick formula for polydisks in three or more variables remains open.
A couple of minor errors/inaccuracies here and there, such as the factor $\sin(10^\circ)$ in your expression for $x'$ which should actually be $\cos(10^\circ)$; the notation $x(x')$ in the final equations is unclear (that should be simply $x'$); in your final equations, you seem to apply the restitution coefficient only to the y-component, and that may be intentional - physically, the deformation would only affect motion normal to the surface - but is inconsistent with your earlier claim that "the second bounce has a velocity of $12(0.7)$." (And while I'm nitpicking, this is speed, not velocity, and should have the qualification "initial" before it).
But there is one big mistake, and it is exactly in that premise I just quoted. The initial velocity after the bounce can be figured out from the velocity just before the bounce, using the restitution coefficient - that is correct; but you assume that the speed just before the bounce would be $12$, and that is not correct, because the bounce happens at a point lower than the launch, so it is not symmetric with it. You can find the velocity components just before the bounce by substituting the time you found into the velocity equations, $v_x = 12\cos(10^\circ)$, $v_y = 12\sin(10^\circ) - 9.8t$. You will see that the $v_y$ will not be $-12\sin(10^\circ)$ as you implicitly assumed.
Another way to find the final speed before the bounce is from conservation of energy, $12^2 + 2(9.8)(10) = v^2$; this will clearly produce a value greater than $12$. You can then use this to find $v_y$ because you know $v_x = 12\cos(10^\circ)$ throughout, and $v_x^2 + v_y^2 = v^2$. Then apply the restitution factor as intended (presumably, only on the y-component).
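A quick numerical check of that reasoning (my own sketch; the launch speed 12 m/s, angle 10°, 10 m drop and restitution coefficient 0.7 are the values quoted in the discussion):

```python
import math

v0, angle, g, h, e = 12.0, math.radians(10), 9.8, 10.0, 0.7

vx = v0 * math.cos(angle)                    # horizontal component, constant throughout
v_impact = math.sqrt(v0**2 + 2 * g * h)      # energy conservation: clearly > 12 m/s
vy_impact = -math.sqrt(v_impact**2 - vx**2)  # downward y-component just before the bounce
vy_after = -e * vy_impact                    # restitution on the normal component only

print(f"speed before bounce: {v_impact:.2f} m/s")   # ~18.44 m/s, not 12
print(f"v_y before: {vy_impact:.2f} m/s, after: {vy_after:.2f} m/s")
```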
I have recently started studying quantum field theory from the book Quantum Field Theory and the Standard Model by Schwartz. In chapter 2 it is said that, contrary to GR, one can ignore the index position, because we are working in Minkowski. I do understand this when the indices are contracted, nevertheless in chapter 3, we arrive to the following equality
\begin{equation} \partial_\mu \left(\sum_n \frac{\partial L}{\partial(\partial_\mu \phi_n)}\partial_\nu \phi_n - g_{\mu \nu}L\right)=0 \end{equation}
and from here the energy-momentum tensor for a classical field theory is defined as
\begin{equation} T_{\mu \nu}=\sum_n \frac{\partial L}{\partial(\partial_\mu \phi_n)}\partial_\nu \phi_n - g_{\mu \nu}L\,. \end{equation}
Yet in Tong's notes and some other references (where the position of the indices is taken into account as supposed - http://www.damtp.cam.ac.uk/user/tong/qft/qft.pdf), one finds that
\begin{equation} {T'}^{\mu}{}_{\nu}=\sum_n \frac{\partial L}{\partial(\partial_\mu \phi_n)}\partial_\nu \phi_n - \delta^\mu_\nu L\,. \end{equation}
Now, I can lower the index from this definition and find
\begin{equation} T'_{\mu \nu}=\sum_n \frac{\partial L}{\partial(\partial^\mu \phi_n)}\partial_\nu \phi_n - g_{\mu \nu}L\,. \end{equation}
Now it seems to me that considering each convention $T'_{\mu \nu} \neq T_{\mu \nu} $. Of course, if I contract the first index the equality is restored, yet I don't see how these components are equal alone. Anyone who can help? Is this a typo or am I missing something?
For a real scalar field $\phi$ after performing all the 1-loop renormalization for dimensional regulator $d = 4 - \epsilon,\ \epsilon \rightarrow 0^+$, I have found that the renormalized coupling $\lambda$ can be related to the bare one by
$$\lambda\Bigg(1 + \frac{3}{(4\pi)^2\epsilon}\lambda_p\Bigg) = \lambda_0 \tag1$$
I'm stuck trying to get the beta function from that equation. We call beta function to
$$\beta(\lambda) = \mu\frac{\partial\lambda}{\partial\mu}$$
Taking into account that
$$\lambda = \lambda_p\mu^\epsilon,\qquad [\lambda_p] = 0, \qquad [\mu] = 1$$
My problem is that any way I try to get $\beta(\lambda)$ from Eq. (1) gives a dependence on $\epsilon$, but Peskin solves this in a different way and obtains
$$\beta(\lambda) = \frac{3\lambda^2}{(4\pi)^2}$$
How can I get the beta function via Eq. (1)?
In Euclidean geometry, the parallelogram is the simplest form of a quadrilateral, having two pairs of sides parallel to each other. The opposite sides of a parallelogram are equal in length and the opposite angles are equal in measure; the congruence of opposite sides is a direct consequence of the definition and can be proved quickly from equivalent formulations.
If only two sides are parallel, the quadrilateral is called a trapezoid; the three-dimensional counterpart of the parallelogram is the parallelepiped. Quadrilaterals can be classified further on the basis of symmetry, as described below.
The parallelogram is a geometrical figure formed by a pair of parallel sides, having opposite sides of equal length and opposite angles of equal measure. The height is measured in the direction perpendicular to the base.
The area of a parallelogram is equal to the magnitude of the cross product of vectors along two adjacent sides. It is also possible to obtain the area of a parallelogram from its diagonals. The leaning rectangular box is a perfect example of the parallelogram.
\[\ Area\;of\;a\;Parallelogram = b\times h\]
Where b is the length of any base and h is the corresponding altitude or height.
A parallelogram is a four-sided polygon bounded by four line segments, making a closed figure that is referred to as a quadrilateral. In other words, the parallelogram is the special case of a quadrilateral in which opposite sides are parallel and opposite angles are equal. With the help of a basic list of parallelogram formulas, you can calculate the area and the perimeter by substituting the values to obtain the final output. In the next section, we will discuss popular properties of the parallelogram for quick identification of the shape.
\[\ Perimeter\;of\;Parallelogram = 2\left(a+b\right)\]
Where,
a, b are the lengths of the two pairs of parallel sides and p, q are the diagonals
\[\ p=\sqrt{a^{2}+b^{2}-2ab\cos (A)}=\sqrt{a^{2}+b^{2}+2ab\cos (B)}\]
\[\ q=\sqrt{a^{2}+b^{2}+2ab\cos (A)}=\sqrt{a^{2}+b^{2}-2ab\cos (B)}\]
\[\ p^{2}+q^{2}=2(a^{2}+b^{2})\]
\[\ Height\;of\;Parallelogram = \frac{Area}{Base}\]
Conversely, if any one of the characteristic statements above holds for a quadrilateral, then it is a parallelogram and has all the properties listed above.
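A small computational sketch of the formulas above (my own illustration; the function name and sample values are made up for the example):

```python
import math

def parallelogram(a: float, b: float, A_deg: float):
    """Area, perimeter and diagonals for sides a, b with included angle A."""
    A = math.radians(A_deg)
    h = b * math.sin(A)                               # height on the base a
    area = a * h
    perimeter = 2 * (a + b)
    p = math.sqrt(a**2 + b**2 - 2*a*b*math.cos(A))    # diagonal opposite angle A
    q = math.sqrt(a**2 + b**2 + 2*a*b*math.cos(A))    # the other diagonal
    assert math.isclose(p**2 + q**2, 2*(a**2 + b**2)) # parallelogram law
    return area, perimeter, p, q

print(parallelogram(4, 3, 60))  # (~10.39, 14, sqrt(13), sqrt(37))
```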
When attempting to solve an asymptotic complexity problem, I got this summation $$\sum_{i=2}^n \binom{n}{i} \binom{i}{2}$$ Wolfram Alpha states that it equals $$2^{(n - 3)} \cdot n(n - 1)$$ but I can't find how, and Wolfram is unable to produce the step-by-step solution.
Consider $$(1+x)^n=\sum_{i=0}^n\binom{n}ix^i.$$ Differentiate twice: $$n(n-1)(1+x)^{n-2}=\sum_{i=2}^n\binom{n}ii(i-1)x^{i-2}.$$ Now set $x=1$ (and divide by $2$, since $\binom i2=i(i-1)/2$).
This has a nice combinatorial solution: ask yourself in how many ways we can choose a set of people and then, among them, a president and a vice president.
$$\sum_{i=2}^n\binom ni\binom i2=\sum_{i=2}^n\binom n2\binom{n-2}{i-2}=\binom n2\sum_{i=2}^n\binom{n-2}{i-2}=\binom n2\sum_{j=0}^{n-2}\binom{n-2}j=\binom n22^{n-2}=n(n-1)2^{n-3}$$
Your expression is the value $f(1)$ of
$$f(x):=\sum_{i=2}^n \binom{n}{i} \dfrac{i(i-1)}{2}x^{i}=\dfrac12 x^2\underbrace{\sum_{i=0}^{n} \binom{n}{i} i(i-1)x^{i-2}}_{g(x)}$$
But $g(x)$ can be given another expression because it is the second derivative of $(1+x)^{n}$, i.e., $g(x)=(n)(n-1)(1+x)^{n-2}$.
It suffices now to set $x=1$ to obtain your result.
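A brute-force check of the identity for small $n$ (my own sketch):

```python
from math import comb

def lhs(n): return sum(comb(n, i) * comb(i, 2) for i in range(2, n + 1))
def rhs(n): return 2**(n - 3) * n * (n - 1)

assert all(lhs(n) == rhs(n) for n in range(2, 20))
print(lhs(10), rhs(10))  # 11520 11520
```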
I) The closest cosmetic resemblance between the Nambu-Goto action and the Polyakov action is achieved if we write them as
$$\tag{1} S_{NG}~=~ -\frac{T_0}{c} \int d^2{\rm vol} ~\det(M)^{\frac{1}{2}} , $$
and
$$\tag{2} S_{P}~=~ -\frac{T_0}{c}\int d^2{\rm vol}~ \frac{{\rm tr}(M)}{2} , $$
respectively. Here $h_{ab}$ is an auxiliary world-sheet (WS) metric of Lorentzian signature $(-,+)$, i.e. minus in the temporal WS direction;
$$\tag{3} d^2{\rm vol}~:=~\sqrt{-h}~d\tau \wedge d\sigma$$
is a diffeomorphism-invariant WS volume-form (an area actually);
$$\tag{4} M^{a}{}_{c}~:=~(h^{-1})^{ab}\gamma_{bc} $$
is a mixed tensor; and
$$\tag{5} \gamma_{ab}~:=~(X^{\ast}G)_{ab}~:=~\partial_a X^{\mu} ~\partial_b X^{\nu}~ G_{\mu\nu}(X) $$
is the induced WS metric via pull-back of the target space (TS) metric $G_{\mu\nu}$ with Lorentzian signature $(-,+, \ldots, +)$.
Note that the Nambu-Goto action (1) does actually
not depend on the auxiliary WS metric $h_{ab}$ at all, while the Polyakov action (2) does.
II) As is well-known, varying the Polyakov action (2) wrt. the WS metric $h_{ab}$ leads to that the $2\times 2$ matrix
$$\tag{6} M^{a}{}_{b}~\approx~\frac{{\rm tr}(M)}{2} \delta^a_b~\propto~\delta^a_b $$
must be proportional to the $2\times 2$ unit matrix on-shell. This implies that
$$\tag{7} \det(M)^{\frac{1}{2}} ~\approx~ \frac{{\rm tr}(M)}{2},$$
so that the two actions (1) and (2) coincide on-shell, see e.g. the Wikipedia page. (Here the $\approx$ symbol means equality modulo eom.)
III) Now, let us imagine that we only know the Nambu-Goto action (1) and not the Polyakov action (2). Then the only diffeomorphism-invariant combinations of the matrix $M^{a}{}_{b}$ are the determinant $\det(M)$, the trace ${\rm tr}(M)$, and functions thereof.
If furthermore the TS metric $G_{\mu\nu}$ is dimensionful, and we demand that the action is linear in that dimension, this leads us to consider action terms of the form
$$\tag{8} S~=~ -\frac{T_0}{c}\int d^2{\rm vol}~ \det(M)^{\frac{p}{2}} \left(\frac{{\rm tr}(M)}{2}\right)^{1-p} , $$
where $p\in \mathbb{R}$ is a real power. Alternatively, Weyl invariance leads us to consider the action (8). Obviously, the Polyakov action (2) (corresponding to $p=0$) is not far away if we would like simple integer powers in our action.
Asian Journal of Mathematics, Volume 18, Number 3 (2014), 525-544.
Symmetry defect of algebraic varieties
Abstract
Let $X, Y \subset k^m$ $(k = \mathbb{R},\mathbb{C})$ be smooth manifolds. We investigate the central symmetry of the configuration of $X$ and $Y$. For $p \in k^m$ we introduce a number $\mu(p)$ of pairs of points $x \in X$ and $y \in Y$ such that $p$ is the center of the interval $\overline{xy}$. We show that if $X, Y$ (including the case $X = Y$) are algebraic manifolds in a general position, then there is a closed (semi-algebraic) set $B \subset k^m$, called the symmetry defect set of the $X$ and $Y$ configuration, such that the function $\mu$ is locally constant and not identically zero outside $B$. If $k = \mathbb{C}$, we estimate the number $\mu$ (in fact we compute it in many cases) and show that the symmetry defect is an algebraic hypersurface and consequently the function $\mu$ is constant and positive outside $B$. We also show that in the generic case the topological type of the symmetry defect set of a plane curve is constant, i.e. the symmetry defect sets for two generic curves of the same degree are homeomorphic (by the same method we can prove a similar statement for any irreducible family of smooth varieties $Z^n \subset \mathbb{C}^{2n}$). Moreover, for $k = \mathbb{R}$, we estimate the number of connected components of the set $U = k^m \backslash B$. In the last section we give an algorithm to compute the symmetry defect set for complex smooth affine varieties in general position.
Citation: Janeczko, S.; Jelonek, Z.; Ruas, M. A. S. Symmetry defect of algebraic varieties. Asian J. Math. 18 (2014), no. 3, 525-544. MR3257839. Zbl 1343.14050. https://projecteuclid.org/euclid.ajm/1410186670
I would like to know references containing proofs of equivalence (or an implication in any direction) between Brouwer Fixed Point Theorem and Invariance of Domain Theorem.
Evidence that makes me believe that they are equivalent is this post in the blog of Terence Tao. But the text of this blog does not provide any bibliographical reference, just a comment.
I looked for this equivalence (or an implication in any direction) in several classic books of Analysis:
Principles of Mathematical Analysis by Walter Rudin; Real Mathematical Analysis by Charles C. Pugh; Nonlinear Functional Analysis by Klaus Deimling.
But I did not find anything. An encyclopedic book that shows several theorems equivalent to Brouwer's fixed-point theorem is Nonlinear Functional Analysis and its Applications IV: Applications to Mathematical Physics by E. Zeidler. On page 795 of this book more than a dozen theorems equivalent to Brouwer's fixed-point theorem are shown, but no mention of the domain invariance theorem is made.
Thanks in advance for any bibliography that can help me.
Brouwer Fixed Point Theorem. Let $f:U\to\mathbb{R}^n$ be a continuous function defined on an open set $U\subset\mathbb{R}^n$. Let $K\subset U$ be convex and compact. If $f(K)\subset K$ then there is $x_0\in K$ such that $f(x_0)=x_0$.
Invariance of Domain Theorem. Let $f:U\to\mathbb{R}^n$ be a continuous function defined on an open set $U\subset\mathbb{R}^n$. If $f$ is a one-to-one function then $f(U)$ is an open subset of $\mathbb{R}^n$.
Can we find the exponential radioactive decay formula from first principles? It's always presented as an empirical result, rather than one you can get from first principles. I've looked around on the internet, but can't really find any information about how to calculate it from first principles. I've seen decay rate calculations in Tong's qft notes for toy models, but never an actual physical calculation, so I was wondering if it's possible, and if so if someone could link me to the result.
If you want to be very nitpicky about it, the decay will not be exponential. The exponential approximation breaks down both at small times and at long times:
At small times, perturbation theory dictates that the amplitude of the decay channel will increase linearly with time, which means that the probability of decay at small times is only quadratic, and the survival probability is slightly rounded near $t=0$ before going down as $e^{-t/\tau}$. This should not be surprising, because the survival probability is time-reversal invariant and should therefore be an even function.
At very long times, there are bounds on how fast the bound state amplitude can decay which are essentially due to the fact that the hamiltonian is bounded from below, and which I demonstrate in detail below.
Both of these regimes are very hard to observe experimentally. At short times, you usually need very good time resolution and the ability to instantaneously prepare your system. At long times, you probably wouldn't need to go out that far out, but it is typically very hard to get a good signal-to-noise ratio because the exponential decay has pretty much killed all your systems, so you need very large populations to really see this.
However, both sorts of deviations can indeed be observed experimentally. At long times, the first observation is
(To emphasize the difficulty of these observations: they had to observe an unstable system over 20 lifetimes to observe the deviations from the exponential, by which time $\sim10^{-9}$ of the population remains.) For short times, the first observations are
which measured tunnelling of sodium atoms inside an optical lattice, and
To be clear, the survival probability of a metastable state is for all practical intents and purposes exponential. It's only with a careful experiment - with large populations over very long times, or with very fine temporal control - that you can observe these deviations.
Consider a system initialized at $t=0$ in the state $|\psi(0)⟩=|\varphi⟩$ and left to evolve under a time-independent hamiltonian $H$. At time $t$, the survival amplitude is, by definition, $$ A(t)=⟨\varphi|\psi(t)⟩=⟨\varphi|e^{-iHt}|\varphi⟩ $$ and the survival probability is $P(t)=|A(t)|^2$. (Note, however, that this is a reasonable but loaded definition; for more details see this other answer of mine.) Suppose that $H$ has a complete eigenbasis $|E,a⟩$, which can be supplemented by an extra index $a$ denoting the eigenvalues of a set $\alpha$ of operators to form a CSCO, so you can write the identity operator as $$1=\int\mathrm dE \mathrm da|E,a⟩⟨E,a|.$$ If you plug this into the expression for $A(t)$ you can easily bring it into the form $$ A(t)=\int \mathrm dE\, B(E)e^{-iEt},\quad\text{where}\quad B(E)=\int \mathrm da |⟨E,a|\varphi⟩|^2. $$ Here it's easy to see that $B(E)\geq0$ and $\int B(E)\mathrm dE=1$, so $B(E)$ needs to be pretty nicely behaved, and in particular it is in $L^1$ over the energy spectrum.
This is where the energy spectrum comes in. In any actual physical theory, the spectrum of the hamiltonian needs to be bounded from below, so there is a minimal energy $E_\text{min}$, set to 0 for convenience, below which the spectrum has no support. This looks quite innocent, and it allows us to refine our expression for $A(t)$ into the harmless-looking $$ A(t)=\int_{0}^\infty \mathrm dE\, B(E)e^{-iEt}.\tag1 $$ As it turns out, this has now prevented the asymptotic decay $e^{-t/\tau}$ from happening.
The reason for this is that in this form $A(t)$ is analytic in the lower half-plane. To see this, consider a complex time $t\in\mathbb C^-$, for which \begin{align} |A(t)| & =\left|\int_0^\infty B(E)e^{-iEt}\mathrm dE\right| \leq\int_0^\infty \left| B(E)e^{-iEt}\right|\mathrm dE =\int_{0}^\infty \left| B(E)\right|e^{+E \mathrm{Im}(t)}\mathrm dE \\ & \leq\int_{0}^\infty \left| B(E)\right|\mathrm dE=1. \end{align} as $\mathrm{Im}(t)<0$. This means that the integral $(1)$ exists for all $t$ for which $\mathrm{Im}(t)\leq 0$, and because of its form it means that it is analytic in $t$ in the interior of that region.
This is nice, but it is also damning, because analytic functions can be very restricted in terms of how they can behave. In particular, $A(t)$ grows exponentially in the direction of increasing $\mathrm{Im}(t)$ and decays exponentially in the direction of decreasing $\mathrm{Im}(t)$. This means that its behaviour along $\mathrm{Re}(t)$ should in principle be something like oscillatory, but you can get away with something like a decay. What you cannot get away with, however, is exponential decay along both directions of $\mathrm{Re}(t)$ - it is simply no longer compatible with the demands of analyticity.
The way to make this precise is to use something called the Paley-Wiener theorem, which in this specific setting demands that$$\int_{-\infty}^\infty \frac{\left|\ln|A(t)|\right|}{1+t^2}dt<\infty.$$That is, of course, a wonky integral if anyone ever saw one, but you can see that if $A(t)\sim e^{-|t|/\tau}$ for large times $|t|$ ($A(t)$ must be time-reversal symmetric), then the integral on the left (only just) diverges. There's more one can say about why this happens, but for me the bottom line is: analyticity demands some restrictions on how fast $A(t)$ can decay along the real axis, and when you do the calculation this turns out to be it.
(For those wondering: yes, this bound is saturated. The place to start digging is the Beurling-Malliavin theorem, but I can't promise it won't be painful.)
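A quick symbolic check of the divergence claim (my own sketch): for $A(t)\sim e^{-|t|/\tau}$ the integrand behaves like $|t|/\tau$ over $1+t^2$, and already the $t\ge 0$ half of the Paley-Wiener integral diverges logarithmically.

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
# |ln|A(t)|| = |t|/tau for A(t) = exp(-|t|/tau); integrate the t >= 0 half
print(sp.integrate((t / tau) / (1 + t**2), (t, 0, sp.oo)))  # oo -> diverges
```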
For more details on the proofs and the intuition behind this stuff, see my MathOverflow question The Paley-Wiener theorem and exponential decay and Alexandre Eremenko's answer there, as well as the paper
L. Fonda, G. C. Ghirardi and A. Rimini. Decay theory of unstable quantum systems. Rep. Prog. Phys. 41, pp. 587-631 (1978). §3.1 and 3.2.
from which most of this stuff was taken.
Any population, whether of humans, animals or atomic nuclei, will (with no other complications) change at a rate proportional to the amount already there, yielding a very simple differential equation.
$$ \frac{dP(t)}{dt} = k\,P(t) $$
where $k$ is a constant with a negative sign for exponential decay and plus sign for exponential increase.
i.e. the solution is $$ P(t)=P(t=0)\exp(kt) $$
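A minimal symbolic check of this solution (my own sketch, not part of the answer):

```python
import sympy as sp

t, k, P0 = sp.symbols('t k P_0', real=True)
P = sp.Function('P')

sol = sp.dsolve(sp.Eq(P(t).diff(t), k * P(t)), P(t), ics={P(0): P0})
print(sol)  # Eq(P(t), P_0*exp(k*t))
# For decay k < 0, and the half-life is T = ln(2)/|k|.
```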
A simple and direct way to get this exponent and complex eigenvalues is by using Gamow's approach, that was one of first introduced explanations of alpha radioactivity.
It solves Schrodinger equation in WKB approximation, no fancy math or deep knowledge in QM is needed, except being familiar with WKB.
A good source for this is Mohsen Razavy, Quantum Theory of Tunneling, World Scientific Publishing (2013).
I want to calculate the Morlet time and frequency resolution. The Morlet wavelet function is defined as: $\psi(t)=\frac{1}{\sqrt{\pi f_b}}e^{j2\pi f_c t}e^{-t^2/f_b}$
Note: I know the answers, but I don't know how to achieve them on my own. I want someone to explain the time and frequency resolution relation to me, please. Here are some hints that I got from papers which might help.
In some papers I found that the Morlet wavelet time and frequency resolution is: $\Delta t=\frac{f_c\sqrt{f_b}}{2f_i}$, $\Delta f=\frac{1}{2\pi f_c \sqrt{f_b}}$.
This is because, generally, the wavelet time and frequency resolution is defined as $\Delta t=s \Delta t_\psi$ and $\Delta f= \frac {\Delta f_\psi}{s}$ (1), where $s$ is the scale. It is also evident that the frequency and scale are related to each other by $f_i=\frac{f_c}{s_i}$. Here $\Delta t$ and $\Delta f$ are the wavelet time and frequency resolution, respectively, and $\Delta t_\psi$ and $\Delta f_\psi$ are the Morlet function time and frequency resolution, which are as follows: $\Delta t_\psi=\frac{\sqrt{f_b}}{2}$ and $\Delta f_\psi=\frac{1}{2\pi \sqrt{f_b}}$ (2).
The Heisenberg uncertainty principle also says: $\Delta t \Delta f \ge \frac{1}{4\pi}$.
Basically I want to prove (1) and (2); other things can be obtained by substituting. Thanks
Consider shifted and scaled versions of a mother wavelet $\psi(t)$:
$$\psi_{a,b}(t)=\frac{1}{\sqrt{a}}\psi\left(\frac{t-b}{a}\right),\quad a>0,\;b\in\mathbb{R}\tag{1}$$
By the definition of the Fourier transform
$$\Psi(\omega)=\int_{-\infty}^{\infty}\psi(t)e^{-i\omega t}dt$$
it can be shown that the Fourier transform of $\psi_{a,b}(t)$ is
$$\Psi_{a,b}(\omega)=\sqrt{a}e^{-ib\omega}\Psi(a\omega)\tag{2}$$
Note that the scaling factor $a$ appears in the denominator of the argument of $\psi_{a,b}(t)$ and in the "numerator" of the argument of $\Psi_{a,b}(\omega)$. This implies that stretching in the time domain ($a>1$) implies compression in the frequency domain and vice versa. Consequently, if $\Delta t$ and $\Delta f$ denote the "spread" of $\psi(t)$ in time and frequency, it follows from (1) and (2) that the corresponding spreads of $\psi_{a,b}(t)$ - call them $\Delta t_{a}$ and $\Delta f_{a}$ - are given by
$$\Delta t_a=a\Delta t\quad\textrm{and}\quad\Delta f_{a}=\frac{\Delta f}{a}$$
As for the definition of $\Delta t$ and $\Delta f$, there is no way to "prove" the results that you stated. It is just a matter of defining the spread of a function. Since $|\psi(t)|$ is a Gaussian with standard deviation $\sigma=\sqrt{f_b/2}$ it seems natural to define $\Delta t$ as being proportional to the standard deviation, but there is no "correct" way to choose the proportionality constant. Once $\Delta t$ has been defined, the corresponding definition of $\Delta f$ can be obtained from the uncertainty principle.
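To see how these definitions play out numerically, here is a sketch (my own, with assumed parameters $f_b = 1.5$, $f_c = 1$) that estimates the RMS spreads of the Morlet wavelet in time and frequency from samples and compares the product with the $1/(4\pi)$ bound:

```python
import numpy as np

fb, fc = 1.5, 1.0   # assumed bandwidth and center-frequency parameters
t = np.linspace(-20, 20, 2**14)
dt = t[1] - t[0]
psi = (np.pi * fb) ** -0.5 * np.exp(2j * np.pi * fc * t - t**2 / fb)

def rms_spread(x, w):
    w = w / np.trapz(w, x)               # treat |.|^2 as a probability density
    mean = np.trapz(x * w, x)
    return np.sqrt(np.trapz((x - mean)**2 * w, x))

dt_spread = rms_spread(t, np.abs(psi)**2)

f = np.fft.fftshift(np.fft.fftfreq(t.size, dt))
Psi = np.fft.fftshift(np.fft.fft(psi)) * dt
df_spread = rms_spread(f, np.abs(Psi)**2)

print(dt_spread, np.sqrt(fb) / 2)                # matches sqrt(fb)/2
print(df_spread, 1 / (2 * np.pi * np.sqrt(fb)))  # matches 1/(2*pi*sqrt(fb))
print(dt_spread * df_spread, 1 / (4 * np.pi))    # the Gaussian saturates the bound
```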
Consider a wavelet to be 1 bin of a windowed DFT or FT. Scale is roughly proportional to how many cycles of a sine wave is inside the bulk of the wavelet window.
Hold the frequency constant and make the wavelet twice as long, and you have to move a wavelet twice as far before some event goes from centered in the wavelet to outside the bulk of the window. Thus time resolution decreases with window width.
A sinusoid the same frequency as a wavelet correlates well. Change the frequency such that the number of periods of sine wave inside the window bulk differs by one period per window bulk width, and that sinusoid become roughly orthogonal, or correlates poorly. Increase the window width, and the frequency change required to make the number of periods of the same frequency within that wider window differ by one period become smaller. Thus frequency resolution increases with window width.
Let $k\subset K_1,$ $k\subset K_2$ be finite Galois and $K = K_1K_2.$ Show that $G(K/k)$ is isomorphic to the subgroup $H = \{(\sigma,\tau)\mid\sigma|_{K_1\cap K_2} = \tau|_{K_1\cap K_2}\}$ of $G(K_1/k)\times G(K_2/k).$
I have already showed that $k\subset K$ must also be finite Galois. Now let $$\varphi\colon G(K/k)\to G(K_1/k)\times G(K_2/k).$$ I know that if we define $\varphi$ via $\sigma\mapsto (\sigma|_{K_1}, \sigma|_{K_2}),$ then $\varphi$ is clearly a homomorphism. Moreover, $\varphi$ is injective since if $\sigma$ restricted to $K_1$ and $\sigma$ restricted to $K_2$ is the identity, then $\sigma$ is the identity on $K_1K_2=K$. We can also show that if $K_1\cap K_2= k$, then this $\varphi$ is an isomorphism. Since $H$ is defined by sets of two different automorphisms, I'm not sure how to write a well-defined homomorphism from $G(K/k)\to H$.
UPDATE: I found the proof of this in Dummit and Foote (Chap 14, Proposition 21). They point out that $\text{im }\varphi\subset H$ since $$(\sigma|_{K_1})|_{K_1\cap K_2} = \sigma|_{K_1\cap K_2} = (\sigma|_{K_2})|_{K_1\cap K_2}.$$
Then they say
The order of $H$ can be computed by observing that for every $\sigma \in \text{Gal}(K_1/ F)$ there are $|\text{Gal}(K_2/ K_1 \cap K_2)|$ elements $\tau \in \text{Gal}(K_2/F)$ whose restrictions to $K_1 \cap K_2$ are $\sigma|_{K_1\cap K_2}.$ Hence $$\begin{align} |H| &= |\text{Gal}(K_1/F)|\cdot |\text{Gal}(K_2/K_1\cap K_2)|\\ &= |\text{Gal}(K_1/ F)| \cdot\frac{|\text{Gal}(K_2/F)|}{|\text{Gal}(K_1 \cap K_2/F)|}. \end{align}$$
I don't follow how they come up with this.
One can only answer on the basis of what we currently know about cosmological parameters. If indeed these have been correctly estimated, and that the cosmological constant is constant, then the universe will continue to expand at an accelerating rate.
Given that more than half the baryons in the universe currently exist outside galaxies, it seems to me quite likely that these will never form part of stars and that the increasingly rarefied universe will cease to form (many) stars in the future.
In that scenario, isolated protons will always be the most common nuclei.
One complication is that the answer may be different for that part of the universe that we can see.
Gravitationally bound systems like the Milky Way, the local group and cluster will likely not take part in the accelerating expansion. Thus the baryons within this region could continue to be processed in stars, even if they do not constitute the majority of baryons in the universe. How to estimate this timescale? A lower limit is probably the free-fall timescale of the local supercluster, which has a mass of about $10^{15}$ M$_{\odot}$ in a diameter of 33 Mpc. The free-fall timescale is $\sim (G \rho)^{-1/2} \simeq 100$ billion years. However, this time must be increased to take account of the fact that most stars formed are low-mass red dwarfs, with timescales to turn hydrogen into helium of a further trillion years. So within our local group of galaxies, I would estimate 100 billion years to incorporate all the gas in our local cluster into stars and a further trillion years for them to fuse most of the hydrogen.
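A back-of-the-envelope check of that free-fall estimate (my own sketch, using the quoted mass and diameter):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
MPC = 3.086e22       # m
YEAR = 3.156e7       # s

M = 1e15 * M_SUN                   # local supercluster mass
R = 0.5 * 33 * MPC                 # radius from the 33 Mpc diameter
rho = M / (4 / 3 * math.pi * R**3)

t_ff = (G * rho) ** -0.5
print(f"{t_ff / (1e9 * YEAR):.0f} Gyr")  # ~65 Gyr, i.e. of order 100 billion years
```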
A beam is a constructive element capable of withstanding heavy loads in bending. In the case of small deflections, the beam shape can be described by a fourth-order linear differential equation.
Consider the derivation of this equation. For a bending beam, the angle \(d\theta\) appears between two adjacent sections spaced at a distance \(dx\) (Figure \(1\)).
The deformation \(\varepsilon \) at each point is proportional to the coordinate \(y,\) which is measured from the neutral line. The length of the neutral line is unchanged.
It follows from the geometry of Figure \(1\) that
\[\varepsilon = \frac{y}{R},\]
where \(R\) is the radius of curvature of the beam.
The magnitude of the normal stress \(\sigma\) in the cross section will also depend on the coordinate \(y.\) It can be estimated by Hooke’s law:
\[\sigma = \varepsilon E = \frac{E}{R}y,\]
where \(E\) is the modulus of elasticity of the beam.
The bending moment \(M\left( x \right)\) for a section of the beam relative to the \(z\)-axis is given by
\[M\left( x \right) = {M_z} = \int\limits_A {\sigma y\,dA} = \frac{E}{R}\int\limits_A {y^2\,dA} = \frac{E}{R}I,\]
where \(I\) is the moment of inertia of the cross section with respect to the neutral \(z\)-axis (Figure \(2\)).
Hence, we obtain the following expression for the radius of curvature of the beam:
\[R = \frac{{EI}}{{M\left( x \right)}}.\]
It is known that the radius of curvature is given by
\[R = \frac{\left[1 + \left(y'\right)^2\right]^{3/2}}{y''}.\]
Assuming that the deflection of the beam is sufficiently small, we can neglect the first derivative \(y'\). Then the differential equation of the elastic line can be written as follows:
\[y'' = \frac{M\left( x \right)}{EI}\;\;\text{or}\;\;\frac{d^2y}{dx^2} = \frac{M\left( x \right)}{EI}.\]
The bending moment \({M\left( x \right)}\) can be expressed in terms of the known external load \({q\left( x \right)}\) acting on the beam. Indeed, we choose a small element \(dx\) and consider the conditions of its equilibrium (Figure \(3\)).
The sum of the projections of all forces on the \(z\)-axis is zero:
\[-Q - q\,dx + Q + dQ = 0.\]
The sum of the moments of all forces about, for example, the right edge of the element \(dx\) (the point \(B\) in Figure \(3\)) is also equal to zero:
\[-M + M + dM - Q\,dx - q\frac{\left(dx\right)^2}{2} = 0.\]
This implies the relationship
\[\left\{ \begin{array}{l} \frac{dQ}{dx} = q\\ \frac{dM}{dx} = Q \end{array} \right.\;\;\text{or}\;\;\frac{d^2M}{dx^2} = q.\]
With this expression the differential equation becomes:
\[M\left( x \right) = EI\frac{d^2y}{dx^2}\;\;\Rightarrow\;\;\frac{d^2M}{dx^2} = \frac{d^2}{dx^2}\left( EI\frac{d^2y}{dx^2} \right) = q.\]
This equation is called the Euler-Bernoulli differential equation. If the values of \(E\) and \(I\) are constant along the \(x\)-axis, we get a fourth-order equation:
\[EI\frac{{{d^4}y}}{{d{x^4}}} = q.\]
This equation under the appropriate boundary conditions determines the deflection of a loaded beam.
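As a sketch of how this equation is used in practice (my own illustration, not from the text): solve \(EI\,y^{(4)} = q\) for a simply supported beam of length \(L\) under a uniform load, with boundary conditions \(y = y'' = 0\) at both ends, and recover the textbook midspan deflection \(5qL^4/384EI\).

```python
import sympy as sp

x, q, E, I, L = sp.symbols('x q E I L', positive=True)
C = sp.symbols('C1:5')  # four integration constants

# General solution of y'''' = q/(EI): integrate four times
y = q * x**4 / (24 * E * I) + C[0]*x**3 + C[1]*x**2 + C[2]*x + C[3]

# Simply supported ends: deflection and bending moment vanish at x = 0 and x = L
bcs = [y.subs(x, 0), y.subs(x, L),
       y.diff(x, 2).subs(x, 0), y.diff(x, 2).subs(x, L)]
y = y.subs(sp.solve(bcs, C))

print(sp.simplify(y.subs(x, L / 2)))  # 5*L**4*q/(384*E*I)
```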
To complete the picture we must also consider the nonhomogeneous equations with variable coefficients. Equations of this type can be written as
\[{y^{\left( n \right)}} + {a_1}\left( x \right){y^{\left( {n - 1} \right)}} + \cdots + {a_{n - 1}}\left( x \right)y' + {a_n}\left( x \right)y = f\left( x \right),\]
where the coefficients \({a_1}\left( x \right), \ldots ,{a_n}\left( x \right)\) and the right-hand side \(f\left( x \right)\) are continuous functions on some interval \(\left[ {a,b} \right].\)
With the help of a linear differential operator \(L\) this equation can be written in compact form:
\[Ly\left( x \right) = f\left( x \right),\]
where \(L\) includes the operations of differentiation, multiplication by the coefficients \({a_i}\left( x \right),\) and addition.
As it is known, the general solution \(y\left( x \right)\) of a nonhomogeneous differential equation is the sum of the general solution \({y_0}\left( x \right)\) of the corresponding homogeneous equation and a particular solution \({y_1}\left( x \right)\) of the nonhomogeneous equation:
\[y\left( x \right) = {y_0}\left( x \right) + {y_1}\left( x \right).\]
Methods of finding the general solution of the homogeneous equation are considered here. Therefore, we focus our attention on constructing solutions of the nonhomogeneous equations.
The method of variation of constants, also known as the Lagrange method, is commonly used for this purpose. With this method, we can obtain the general solution of the nonhomogeneous equation if the general solution of the homogeneous equation is known.
Method of Variation of Constants
Suppose we want to solve an \(n\)th order nonhomogeneous differential equation:
\[y^{\left( n \right)} + a_1\left( x \right)y^{\left( n-1 \right)} + \cdots + a_{n-1}\left( x \right)y' + a_n\left( x \right)y = f\left( x \right).\]
We will assume that the general solution of the associated homogeneous equation is found and expressed by the formula
\[y_0\left( x \right) = C_1Y_1\left( x \right) + C_2Y_2\left( x \right) + \cdots + C_nY_n\left( x \right),\]
containing \(n\) arbitrary constants \({C_1},\) \({C_2}, \ldots ,\) \({C_n}.\)
The idea of this method is to replace the constants \({C_1},\) \({C_2}, \ldots ,\) \({C_n}\) with continuously differentiable functions \({C_1}\left( x \right),\) \({C_2}\left( x \right), \ldots ,\) \({C_n}\left( x \right),\) which are chosen so that the solution
\[y\left( x \right) = C_1\left( x \right)Y_1\left( x \right) + C_2\left( x \right)Y_2\left( x \right) + \cdots + C_n\left( x \right)Y_n\left( x \right) = \sum\limits_{i = 1}^n C_i\left( x \right)Y_i\left( x \right)\]
satisfies the nonhomogeneous differential equation.
The first derivatives of the functions \({{C_i}\left( x \right)}\) are determined from the system of \(n\) equations of the form
\[\left\{ \begin{array}{l} C_1'\left( x \right)Y_1\left( x \right) + C_2'\left( x \right)Y_2\left( x \right) + \cdots + C_n'\left( x \right)Y_n\left( x \right) = 0\\ C_1'\left( x \right)Y_1'\left( x \right) + C_2'\left( x \right)Y_2'\left( x \right) + \cdots + C_n'\left( x \right)Y_n'\left( x \right) = 0\\ \cdots\cdots\cdots\cdots\cdots\cdots\cdots\\ C_1'\left( x \right)Y_1^{\left( n-1 \right)}\left( x \right) + C_2'\left( x \right)Y_2^{\left( n-1 \right)}\left( x \right) + \cdots + C_n'\left( x \right)Y_n^{\left( n-1 \right)}\left( x \right) = f\left( x \right) \end{array} \right.\]
Note that the main determinant of this system is the Wronskian \(W\left( x \right)\) constructed on the basis of the fundamental system of solutions \({Y_1},\) \({Y_2}, \ldots ,\) \({Y_n}.\) As the solutions \({Y_1},\) \({Y_2}, \ldots ,\) \({Y_n}\) are linearly independent, the Wronskian is not zero.
The unknown derivatives \({C’_i}\left( x \right)\) are calculated by Cramer’s rule:
\[C_i'\left( x \right) = \frac{W_i\left( x \right)}{W\left( x \right)},\;\;i = 1,2, \ldots ,n,\]
where the determinant \(W_i\left( x \right)\) is obtained from the Wronskian \(W\left( x \right)\) by replacing the \(i\)th column with the column of right-hand sides \(\left( 0,0, \ldots ,f\left( x \right) \right)^T.\)
Further, the expressions for \({C_i}\left( x \right)\) can be found by integration:
\[C_i\left( x \right) = \int \frac{W_i\left( x \right)}{W\left( x \right)}dx + A_i,\;\;i = 1,2, \ldots ,n.\]
Here \({A_i}\) denote constants of integration.
As a result, the general solution of the nonhomogeneous equation can be written as
\[y\left( x \right) = \sum\limits_{i = 1}^n C_i\left( x \right)Y_i\left( x \right) = \sum\limits_{i = 1}^n \left( \int \frac{W_i\left( x \right)}{W\left( x \right)}dx + A_i \right)Y_i\left( x \right) = \sum\limits_{i = 1}^n A_iY_i\left( x \right) + \sum\limits_{i = 1}^n \left( \int \frac{W_i\left( x \right)}{W\left( x \right)}dx \right)Y_i\left( x \right) = y_0\left( x \right) + y_1\left( x \right).\]
In the last expression, the first sum corresponds to the general solution \({y_0}\left( x \right)\) of the homogeneous equation (with arbitrary numbers \({A_i}\)), and the second sum describes a particular solution \({y_1}\left( x \right)\) of the nonhomogeneous equation.
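As a computational illustration of this recipe (a minimal sketch assuming sympy; the test equation \(y'' + y = \sec x\) is my own example), one can build the Wronskian system, solve it by Cramer's rule, and integrate:

```python
import sympy as sp

x = sp.symbols('x')
Y = [sp.cos(x), sp.sin(x)]     # fundamental system of y'' + y = 0
f = sp.sec(x)                  # right-hand side

# Wronskian matrix and the column of right-hand sides (0, ..., f)
W = sp.Matrix([[Y[0], Y[1]],
               [sp.diff(Y[0], x), sp.diff(Y[1], x)]])
rhs = sp.Matrix([0, f])

Cp = sp.simplify(W.LUsolve(rhs))   # C_i'(x) = W_i(x)/W(x) via Cramer's rule
y1 = sum(sp.integrate(Cp[i], x) * Y[i] for i in range(2))
print(sp.simplify(y1))             # x*sin(x) + log(cos(x))*cos(x)
```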
|
Electronic Journal of Probability (Electron. J. Probab.), Volume 7 (2002), paper no. 9, 15 pp.
Eigenvalues of Random Wreath Products
Abstract
Consider a uniformly chosen element $X_n$ of the $n$-fold wreath product $\Gamma_n = G \wr G \wr \cdots \wr G$, where $G$ is a finite permutation group acting transitively on some set of size $s$. The eigenvalues of $X_n$ in the natural $s^n$-dimensional permutation representation (the composition representation) are investigated by considering the random measure $\Xi_n$ on the unit circle that assigns mass $1$ to each eigenvalue. It is shown that if $f$ is a trigonometric polynomial, then $\lim_{n \rightarrow \infty} P\{\int f d\Xi_n \ne s^n \int f d\lambda\}=0$, where $\lambda$ is normalised Lebesgue measure on the unit circle. In particular, $s^{-n} \Xi_n$ converges weakly in probability to $\lambda$ as $n \rightarrow \infty$. For a large class of test functions $f$ with non-terminating Fourier expansions, it is shown that there exists a constant $c$ and a non-zero random variable $W$ (both depending on $f$) such that $c^{-n} \int f d\Xi_n$ converges in distribution as $n \rightarrow \infty$ to $W$. These results have applications to Sylow $p$-groups of symmetric groups and automorphism groups of regular rooted trees.
Article information
Source: Electron. J. Probab., Volume 7 (2002), paper no. 9, 15 pp.
Dates: Accepted 2 April 2002; first available in Project Euclid 16 May 2016
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1463434882
Digital Object Identifier: doi:10.1214/EJP.v7-108
Mathematical Reviews number (MathSciNet): MR1902842
Zentralblatt MATH identifier: 1013.15006
Subjects: Primary 15A52; Secondary 05C05 (Trees), 60B15 (Probability measures on groups or semigroups, Fourier transforms, factorization), 60J80 (Branching processes (Galton-Watson, birth-and-death, etc.))
Rights: This work is licensed under a Creative Commons Attribution 3.0 License.
Citation
Evans, Steven. Eigenvalues of Random Wreath Products. Electron. J. Probab. 7 (2002), paper no. 9, 15 pp. doi:10.1214/EJP.v7-108. https://projecteuclid.org/euclid.ejp/1463434882 |
Let $A=\{00,01,10,11\}$ with equal probabilities for each symbol, and $B=\{0, 1\}$ be a parity generator such that $$ b=\begin{cases} 0, & \text{if} \,\, a=00 \quad \text{or} \quad a=11 \\ 1, & \text{if} \,\, a=01 \quad \text{or} \quad a=10 \end{cases} $$ Now assume we transmit $(a_i,b_j)$ where $a_i$ is a symbol in $A$ and $b_j$ is the parity bit associated with it. I calculated the entropy for $A$ as follows: $$ H(A)=4 \left[\frac{1}{4} \log_2(4)\right]=2 $$ and to calculate the entropy of the new symbols, call it alphabet $C=\{000,011,101,110\}$, we need the probabilities: $$ p_c(0)=\mathbb{P}(a=00 \, \text{and} \, b=0)=\mathbb{P}(a=00)\mathbb{P}(b=0 \mid a=00)=\mathbb{P}(a=00)=1/4 $$ Similarly, $p_c(1)=p_c(2)=p_c(3)=p_c(0)=1/4$, so the entropy of $C$ is $$ H(C)=4 \left[\frac{1}{4} \log_2(4)\right]=2=H(A) $$ How is this the case? We have more bits per symbol in $C$.
Because the entropy represents information quantity or, if measured in bits, the smallest number of bits per symbol we need to represent a source.
The source $A$ contains $4$ equiprobable symbols, hence it is obvious that we need $\log_2(4) = 2$ bits per symbol to represent the source.
The source $C$ is simply $A$ with its parity bits, hence no new information is added. We have more bits per symbol, but the number of information bits is unchanged.
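A quick numerical check (a minimal sketch in Python; the helper function is mine):

```python
import math

def entropy(probs):
    # H = -sum p*log2(p), in bits per symbol
    return -sum(p * math.log2(p) for p in probs if p > 0)

probs_A = [0.25] * 4   # symbols 00, 01, 10, 11
probs_C = [0.25] * 4   # symbols 000, 011, 101, 110
print(entropy(probs_A), entropy(probs_C))  # 2.0 2.0 -- equal, as derived above
```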
In communications, these additional bits are added to cope with transmission errors. |
Within my lecture notes, the following definition is given:
We say that the stochastic process $X_t$ has
stochastic differential $$ dX_t = b_t dt + \sigma_t dW_t $$ if and only if $$ X_t = X_0 + \int_0^t b_s ds + \int_0^t \sigma_s dW_s $$
However, it then goes on to discuss Ito's formula for a complex function by defining $$ f(x) := A(x) + iB(x) $$ and showing that, for a function of this form, $$ df(W_t) = f'(W_t) dW_t + \frac{1}{2} f''(W_t) dt \hspace{10mm} (*) $$ So far so good; I can follow all of this.
However, it then goes on to say that $(*)$ may be expressed in integral form as$$f(W_t) - f(0) = \int_0^t f'(W_s) dW_s + \int_0^t \frac{1}{2} f''(W_s) ds$$
This is where my confusion comes from. Based on the definition of a stochastic integral, why would $(*)$ not be evaluated as $$ [f(W_s)]_0^t = f(W_t) - f(W_0) = f(W_0) + \int_0^t f'(W_s) dW_s + \int_0^t \frac{1}{2} f''(W_s) ds $$ simplifying to $$ f(W_t) - 2 f(0) = \int_0^t f'(W_s) dW_s + \int_0^t \frac{1}{2} f''(W_s) ds $$ |
The matrix exponential is a well-known thing, but what I see online is stated for matrices. Does the same expansion hold for a linear operator? That is, if $A$ is a linear operator, does $$e^A=I+A+\frac{1}{2}A^2+\cdots+\frac{1}{k!}A^k+\cdots$$ still hold?
Yes, you can define the exponential of any bounded linear operator by this series. If the operator is unbounded, then it is not always possible.
The exponential series has a remarkably "ubiquitous" convergence. As soon as you have a $\mathbb Q$-algebra $M$ with a norm such that $||X Y||\le c\cdot ||X||\cdot ||Y||$ for some $c$, then $\exp(A)$ converges for all $A$ with respect to this norm. Hence if $M$ is complete, you indeed obtain an element of $M$. Moreover, if $AB=BA$ then $\exp(A+B)=\exp(A)\exp(B)$ holds.
There are even cases where the exponential series is useful even when division by $k!$ is undefined. One just has to be careful that $A$ must be nilpotent enough (i.e. $A^k=0$ for all $k$ for which division by $k!$ is undefined).
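For a concrete illustration of the series definition (a minimal sketch in Python; `scipy.linalg.expm` is used only as an independent reference):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Truncated series I + A + A^2/2! + ... + A^k/k!
S, term = np.eye(2), np.eye(2)
for k in range(1, 30):
    term = term @ A / k
    S += term

print(np.allclose(S, expm(A)))  # True: the series matches the library result
```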
As you have suggested, if $A$ is a linear operator then:
$$\exp A = I + A + \frac{1}{2}A^2 + \cdots + \frac{1}{k!}A^k + \cdots \, . $$
These are very common in physics. Here is a link to a PDF file. |
In one of the "proofs" of Gauss' law in my textbook, the author took the divergence of $\mathbf E$:
$$ \mathbf E = \int_{\text{all space}} \dfrac{\hat{\mathscr r} }{{\mathscr r}^2} \rho(r^\prime) d\tau^\prime$$
where $\mathscr r= r - r^\prime$, $r$ is the point where the field is to be calculated, $\rho$ is the charge density, and $r^\prime$ is the location of the charge element $dq$.
Next step is what I don't understand.
$$\nabla \cdot \mathbf E=\int_{\text{all space}} \nabla\cdot\left(\dfrac{\hat{\mathscr{r}} }{{{\mathscr r}}^2}\right) \rho(r^\prime) d\tau^\prime$$
I don't understand why it is $$\nabla\cdot\left(\dfrac{\hat{\mathscr r} }{{\mathscr r}^2}\right)$$ not $$\nabla\cdot\left(\dfrac{\hat{\mathscr r} }{{\mathscr r}^2}\rho(r^\prime) \right)$$ ?
Isn't $\nabla \cdot (f \mathbf{A}) = f\nabla \cdot (\mathbf {A}) + \mathbf{A} \cdot\nabla f$ not $\nabla \cdot (f \mathbf {A}) = f\nabla \cdot \mathbf{A}$ ?
$f$ is a real function and $\vec A$ is a vector function. |
In Hull's textbook, the stock price dynamics is lognormal: $S_T = S_0 \exp(\mu T - \frac{1}{2}\sigma^2T + \sigma W_T)$, where $W_t$ is a standard brownian motion. And so the mean of this is the mean of a lognormal random variable with the log mean as $\ln S_0 + \mu T - \frac{1}{2}\sigma^2T$ and the log standard deviation as $\sigma \sqrt{T}$, and so the ...
I came across this thread while searching for a similar topic. In Nualart's book (Introduction to Malliavin Calculus), it is asked to show that $\int_0^t B_s ds$ is Gaussian and to compute its mean and variance. This exercise should rely only on basic Brownian motion properties; in particular, no Itô calculus should be used (Itô calculus is ...
It's a special case of the AM-GM inequality, assuming that market returns follow a lognormal distribution. Consider the simple example of a stock that has a 50% probability of rising and falling 10% every period. Its arithmetic average is obviously 0: (50% * +10%) + (50% * -10%) = 0. Its geometric average is (1+10%)^0.5 * (1-10%)^0.5 - 1 = -0.5%. Or a more ...
If $S$ is the solution to the geometric Brownian motion SDE:\begin{equation}dS=\mu S dt + \sigma S dW(t)\end{equation}then\begin{equation}S=S_0e^{(\mu - \sigma^2/2)t + \sigma W(t)}\end{equation}Then if you take the expectation\begin{equation}\mathbb{E}[S(t)]=S_0e^{(\mu - \sigma^2/2)t}\mathbb{E}[e^{\sigma W(t)}]\end{equation}Now since $W$ is a Wiener ...
An integral with respect to a stochastic process is the theme of stochastic calculus, for which you ought to get an introductory textbook, as it is the key to financial models. A Brownian motion $(W_t)$ is the easiest integrand and typically the first example one encounters. Then, $\int_t^T 1\,\mathrm{d}W_s=W_T-W_t$, which has the same distribution as $W_{T-t}=\sqrt{T-t}\, Z$ with $Z\sim N(0,1)$....
In its simplest form, an option is a combination of two binary options. The buyer of a call option is long an "asset-or-nothing" binary call, i.e. if Spot > Strike, it is worth Spot; else 0. To fund that, he is selling a "cash-or-nothing" call: worth Strike if Spot > Strike, else 0. The positive value of the option obviously derives from the fact that as ...
The Black Scholes (1973) model assumes that $\mathrm{d}S_t=rS_t\mathrm{d}t+\sigma S_t\mathrm{d}W_t$. Thus, $$S_t=S_0\exp\left(\left(r-\frac{1}{2}\sigma^2\right)t+\sigma W_t\right).$$ Please note the factor $-\frac{1}{2}\sigma^2t$ in the exponential. If you incorporate dividends, replace $r$ by $r-q$. You do not need an extra term $\sqrt{t}$ in front of the ... |
Kai's Smite
Kai's Smite, governed by Lord Kai, is obtained at the twenty-first step of the Order of Voln. This symbol affords a member the ability to initiate an unarmed combat attack against an undead target. This attack is activated via the SMITE (verb) and NOT by invoking the symbol. When the attack connects with a non-corporeal creature, instead of doing damage, it will temporarily corporealize it, allowing for the possibility of critical damage from a subsequent attack. Against a corporeal undead target, it will make the creature temporarily more susceptible to damage. It has no effect upon living creatures.
This effect has a 5 second minimum duration which can be extended if used when tiered up (good or excellent positioning). Since there is no favor cost, it is recommended that members take full advantage of Kai's Smite when tactically practical.
Syntax: SMITE (by itself) or SMITE <target>
Duration
The duration of Kai's Smite on an affected creature is
\[\left\lfloor \frac{\mathrm{SuccessMargin}}{10} \right\rfloor \cdot \mathrm{UACTier}\]
seconds, where \(\mathrm{UACTier}\) is 1, 2, or 3 for decent, good, or excellent positioning respectively. Smite's duration has a minimum of 5 seconds and a maximum of 30 seconds.
Messaging
Initial Cast:
Wearing off:
Favor Cost: None
Favor Required to Obtain Kai's Smite
Ex. level 15 member:
2000 + ((15 × 15 × 35) ÷ 3) = 4625 favor
Ex. level 20 member:
2000 + ((20 × 20 × 35) ÷ 3) = 6666 favor
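A minimal sketch of this arithmetic (the helper function is hypothetical, not part of the game):

```python
def favor_required(level):
    # hypothetical helper reproducing the wiki arithmetic:
    # 2000 + ((level x level x 35) / 3), truncated to a whole number
    return 2000 + (level * level * 35) // 3

print(favor_required(15))  # 4625
print(favor_required(20))  # 6666
```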
Monk's Teaching and Instruction Highlights
Kai is an extraordinary brawler who has taught us, with Kai's Strike, to inflict damage to undead with our bare hands. He also taught us a technique, with Kai's Smite, to increase our power against corporeal undead and to temporarily corporealize the ethereal undead.
Wehnimer's Landing Task
This task requires the member to drop a weapon in the small pool in the Secluded Valley. The pool is located one room south of the Cavernhold entrance door (Upper Trollfang map). Be wary of hidden archers firing crossbow bolts while you are in the Secluded Valley area.
From North Gate: SW, S, S, S, S, S, SW, S, SE, S, SE, S, S, S, SW, S, S, SE, SE, SW, W, S, S, S, S, SE, SE, NE, NE, E, E, Go Bridge, SE, S, SE, S, E, E, E, S, Search (searching for crevice), Go Crevice, SE, Climb Rubble, S, Climb Rubble, SE, SE, E, E, Go Trail, D, D, E, S, SE (Lich ID 8572) Drop <weapon> in pool Return to monk to receive symbol Ta'Vaalor Task
To complete this task the member must again return to Zul Logoth and proceed to the pool in the Czeroth Caverns. Be aware that the pool is poisoned!
From mine cart: Go Arch, W, NW, W, W, Go Timber, Go Opening, SW, SW, SW, SE, SE, SE, NE, NE, NE, NW, NW, Climb Rocks, SW, SW, SW, SE, SE, SE, NE, NE, NE, NW, NW, Go Opening, SW, SW, SW, SE, SE, SE, NE, NE, NE, NW, NW, SW, Go Pool (Lich ID: 9609) Drop <weapon> [in the pool] Return to monk to receive symbol |
I'm adding a separate answer for the general question that the OP asked, which settles the question in the negative for all $n>2$ (and gives an alternate proof for $n=3$ to the one I gave above).
Recall that the OP defined a sequence of algebraic functions $f_n$ by the rule $f_0(x) = 1$, $f_1(x) = \sqrt{x+1}$, and $f_{n+1}(x) = \sqrt{x + f_n(x)}$ for all $n\ge 1$. It was observed that $f_n$ has an elementary antiderivative for $n=0$, $1$, and $2$, and the problem was to determine whether $f_n$ has an elementary antiderivative for some $n>2$.
I am going to show that there is no elementary antiderivative of $f_n$ when $n>2$.
Assume $n>2$ (NB: This is important, because the argument below will not work for $n\le2$; the reader may enjoy finding where it breaks down), and let $K_n = {\mathbb C}\bigl(x,f_n(x)\bigr)$ be the elementary differential field generated by $x$ and $f_n(x)$. Then $K_n$ is the field of meromorphic functions on the normalization $\hat C_n$ of the algebraic curve $C_n$ defined by the minimal degree $y$-monic polynomial $P_n(x,y)$ that satisfies $P_n\bigl(x,f_n(x)\bigr) \equiv 0$. This minimal degree is $2^n$; for example, $P_2(x,y) = (y^2-x)^2-x-1$ and $P_3(x,y) = \bigl((y^2-x)^2-x\bigr)^2-x-1$, etc.
Since $P_{n+1}(x,y) = (P_n(x,y)+1)^2-x-1$ for $n\ge 1$ with $P_1(x,y)=y^2-x-1$, one sees, by applying the Eisenstein Criterion to $P_n(x,y)$ regarded as an element of $D[y]$ with $D$ being the integral domain ${\mathbb C}[x]$, that $P_n(x,y)$ is irreducible for all $n\ge 1$. Hence, $\hat C_n$ is connected.
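(A quick computational sanity check of this recursion and of $P_n\bigl(x,f_n(x)\bigr)\equiv 0$, as a sketch assuming sympy; it is not part of the argument:)

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# P_1 = y^2 - x - 1 and f_1 = sqrt(x + 1); iterate the recursions
P = y**2 - x - 1
f = sp.sqrt(x + 1)
for _ in range(3):
    P = (P + 1)**2 - x - 1      # P_{n+1} = (P_n + 1)^2 - x - 1
    f = sp.sqrt(x + f)          # f_{n+1} = sqrt(x + f_n)

print(sp.simplify(P.subs(y, f)))  # 0, i.e. P_4(x, f_4(x)) == 0
```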
It will be important in what follows to observe that $K_n$ has an involution $\iota$ that fixes $x$ and sends $f_n(x)$ to $-f_n(x)$; this is because $P_n(x,y)$ is an even polynomial in $y$. The fixed field of $\iota$ is ${\mathbb C}\bigl(x,\,f_n(x)^2\bigr)$, and the $(-1)$-eigenspace of $\iota$ is ${\mathbb C}\bigl(x,\,f_n(x)^2\bigr)f_n(x) = K_{n-1}{\cdot}f_n(x)$.
Now, the curve $C_n\subset \mathbb{CP}^2$ has only one point on the line at infinity, namely $[1,0,0]$, but the normalization $\hat C_n$ has $2^{n-1}$ points lying over this point. They can be parametrized as follows: First, establish the convention that $\sqrt{u}$ means the unique analytic function on the complex $u$-plane minus its negative axis and $0$ that satisfies $\sqrt1 = 1$ and $\bigl(\sqrt{u}\bigr)^2 = u$. Let $\epsilon = (\epsilon_1,\ldots,\epsilon_{n-1})$ be any sequence with ${\epsilon_k}^2=1$ and consider the sequence of functions $g^\epsilon_k(t)$ defined by the criteria $g^\epsilon_1(t) = \sqrt{1+t^2}$ and $g^\epsilon_{k+1}(t) = \sqrt{1+\epsilon_{n-k}t g^\epsilon_k(t)}$ for $1\le k < n$. Choose, as one may, a $\delta_n>0$ sufficiently small so that, when $t$ is complex and satisfies $|t|<\delta_n$, all of the functions $g^\epsilon_k$ are analytic when $|t|<\delta_n$. In particular, one finds an expansion$$g^\epsilon_n(t) = 1+\tfrac12\epsilon_1\,t + \tfrac18(2\epsilon_1\epsilon_2-1)t^2 + O(t^3).$$
Also, it is easy to verify that the disk in $\mathbb{CP}^2$ defined by$$[x,y,1] = [1,\ t g^\epsilon_n(t),\ t^2]\qquad\text{for}\quad |t|<\delta_n$$is a nonsingular parametrization of a branch of $C_n$ in a neighborhood of the point $[1,0,0]$. In the normalization $\hat C_n$, this is then a local parametrization of a neighborhood of a point $p_\epsilon\in \hat C_n$. Obviously, this describes $2^{n-1}$ distinct points on $\hat C_n$.
When $x$ and $f_n$ are regarded as meromorphic functions on $\hat C_n$, it follows that there is a unique local coordinate chart $t_\epsilon:D_\epsilon\to D(0,\delta_n)\subset \mathbb{C}$ of an open disk $D_\epsilon\subset \hat C_n$ about $p_\epsilon$ such that $t_\epsilon(p_\epsilon)=0$ and on which one has the formulae$$x = \frac1{{t_\epsilon}^2}\quad\text{and}\quad f_n(x) = \frac{g^\epsilon_n(t_\epsilon)}{t_\epsilon} = \frac{1+\tfrac12\epsilon_1\ t_\epsilon +\tfrac18(2\epsilon_1\epsilon_2-1)\ {t_\epsilon}^2} {t_\epsilon} + O({t_\epsilon}^2).$$ In particular, it follows that $f_n(x)$, as a meromorphic function on $\hat C_n$, has polar divisor equal to the sum of the $p_\epsilon$ and hence has degree $2^{n-1}$. Of course, this implies that the zero divisor of $f_n(x)$ on $\hat C_n$ must be of degree $2^{n-1}$ as well.
Note that the functions $g^\epsilon_k$ satisfy $g^{-\epsilon}_k(-t) = g^{\epsilon}_k(t)$, where $-\epsilon = (-\epsilon_1,\ldots,-\epsilon_{n-1})$. This implies that $\iota(p_\epsilon) = p_{-\epsilon}$ and that $t_\epsilon\circ\iota = -t_{-\epsilon}$.
Now, the $2^{n-1}$ zeroes of $f_n(x)$ on $\hat C_n$ are distinct, for they are the zeros of the polynomial $q_n(x) = P_n(x,0) = (q_{n-1}+1)^2-x-1$, and the discriminant of $q_n$, being the resultant of $q_n$ and $q_n'$, is clearly an odd integer, and hence is not zero. Thus, $C_n$ is a branched double cover of $C_{n-1}$, branched exactly where $f_{n}$ has its zeros. This induces a branched cover $\pi_n:\hat C_n\to \hat C_{n-1}$ that is exactly the quotient of $\hat C_n$ by the involution $\iota$ (whose fixed points are where $f_n$ has its zeros). Since one then has the Riemann-Hurwitz formula$$\chi(\hat C_n) = 2\chi(\hat C_{n-1}) - B_n = 2\chi(\hat C_{n-1}) - 2^{n-1},$$and $\chi(\hat C_1) = \chi(\hat C_2) = 2$, induction gives $\chi(\hat C_n) = (3{-}n)2^{n-1}$, so the genus of $\hat C_n$ is $(n{-}3) 2^{n-2} + 1$. (This won't actually be needed below, but it is interesting.)
The only poles of $x$ and $f_n(x)$ on $\hat C_n$ are the points $p_\epsilon$, and computation using the above expansions shows that, in a neighborhood of $p_\epsilon$, one has an expansion of the form$$f_n(x)\,\mathrm{d} x - \mathrm{d}\left(f_n(x)\bigl(\tfrac12\ x + \tfrac16\ f_n(x)^2\bigr) \right)= \left(\frac{ (1-\epsilon_1\epsilon_2) } {4{t_\epsilon}^2} + O({t_\epsilon}^{-1})\right)\ \mathrm{d} t_\epsilon\ .$$Thus, the meromorphic differential $\eta$ on $\hat C_n$ defined by the left hand side of this equation has, at worst, double poles at the points $p_\epsilon$ and no other poles.
Now, by Liouville's Theorem, $f_n$ has an elementary antiderivative if and only if $f_n(x)\ \mathrm{d} x$ and, hence, the form $\eta$ are expressible as finite linear combinations of exact differentials and log-exact differentials. Thus, $f_n(x)$ has an elementary antiderivative if and only if $\eta$ is expressible in the form$$\eta = \mathrm{d} h + \sum_{i=1}^m c_i\,\frac{\mathrm{d} g_i}{g_i}$$for some $h,g_1,\ldots,g_m\in K_n$ and some constants $c_1,\ldots,c_m$. Suppose that these exist. Since $\eta$ has, at worst, double poles at the $p_\epsilon$ and no other poles, it follows that $h$ must have, at worst, simple poles at the points $p_\epsilon$ and no other poles; in fact, $h$ is uniquely determined up to an additive constant because its expansion at $p_\epsilon$ in terms of $t_\epsilon$ must be of the form$$h = \frac{\epsilon_1\epsilon_2-1}{4t_\epsilon} + O(1).$$
Moreover, because $\eta$ is odd with respect to $\iota$, it follows that $h$ (after adding a suitable constant if necessary) must also be odd with respect to $\iota$. This implies, in particular, that $h$ vanishes at each of the zeros of $f_n$ (which, by the argument above, are simple zeros). This implies that $h = r\,f_n$ for some $r\in K_{n-1}$ that has no poles and satisfies $r(p_\epsilon) = (\epsilon_1\epsilon_2-1)/4$ for each $\epsilon$. However, since $r$ has no poles and $\hat C_n$ is connected, it follows that $r$ is constant. Thus, it cannot take the two distinct values $0$ and $-1/2$, as the equation $r(p_\epsilon) = (\epsilon_1\epsilon_2-1)/4$ implies.
Thus, the desired $h$ does not exist, and $f_n$ cannot be integrated in elementary terms for any $n>2$. |
The energy loss in a hydraulic jump is still calculated with the old equation of Bresse from the year 1860 (i.e., equation 7 in this paper from 2017):
$$ \frac{\Delta E}{E_1} = \frac{(\sqrt{1+8Fr^2}-3)^3}{16(\sqrt{1+8Fr^2}-1)(1+\frac{1}{2}Fr^2)} $$
There is no measured energy loss when $Fr<\sqrt3$, though at $Fr=\sqrt3$ this equation predicts a loss of $$ \frac{\Delta E}{E_1} = \frac{(\sqrt{1+8\cdot 3}-3)^3}{16(\sqrt{1+8\cdot 3}-1)(1+\frac{1}{2}\cdot 3)}=\frac{2^3}{16 \cdot 4 \cdot \frac{5}{2}}=\frac{8}{160}=5\% $$
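(The arithmetic is easy to check with a short script; a minimal sketch in Python, function name mine:)

```python
import math

def bresse_loss(Fr):
    """Relative energy loss dE/E1 of a hydraulic jump (Bresse, 1860)."""
    s = math.sqrt(1 + 8 * Fr**2)
    return (s - 3)**3 / (16 * (s - 1) * (1 + Fr**2 / 2))

print(bresse_loss(math.sqrt(3)))  # 0.05, i.e. the 5% quoted above
```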
This is obviously wrong, as it badly violates the conservation of energy, which must mean that the whole equation of Bresse is simply wrong.
Is there a better way to calculate this loss, where the logic is rigorously derived from the fundamentals?
Equation 15-1 from the book of Chow (1959) of course gives the same result for $Fr=\sqrt3$, as it's just another presentation of the same equation of Bresse (1860): $$ \frac{E_2}{E_1} = \frac{(1+8Fr^2)^{3/2}-4Fr^2+1}{8Fr^2(2+Fr^2)}=0.95 $$ |
I am a bit confused about Maldacena's original decoupling argument. There are two different low energy (i.e., $\alpha^\prime \to 0$) descriptions of the stack of D3-branes:
1. $\mathcal{N}=4$ SYM and 10D type IIB SUGRA.
2. Full type IIB superstring in $AdS_5 \times S^5$ and 10D type IIB SUGRA.
Comparing (1) and (2) (actually cancelling 10D SUGRA!) we obtain the celebrated AdS/CFT correspondence. I have the following questions regarding this argument.
If one takes $\alpha^\prime \to 0$, it is the same as taking $G_N \to 0$. Then how do the branes backreact to produce a non-trivial background, namely $AdS_5 \times S^5$?
One arrives at the AdS/CFT correspondence by taking $\alpha^\prime \to 0$, by the above decoupling argument. Then how can one claim that there should be full string theory in $AdS_5 \times S^5$? I understand that any high-energy excitation will be infinitely red-shifted for the observer at infinity. But these are all happening at $\alpha^\prime \to 0$!
Isn't full string theory defined only on asymptotically AdS rather than AdS? (I am not sure about this though.)
Also, the radius of the $S^5$ turns out to be the same as the $AdS_5$ scale, $L$. Now small $L$ means a highly fluctuating string, i.e., the quantum gravity regime, and thus the notion of a classical background breaks down. Then how can one do Kaluza-Klein reduction on the $S^5$? |
I'm dealing with the harmonic oscillator. I'm calculating the correlation function $$c(t) = \langle 0 | x(t) x(0) |0 \rangle$$ where $|0\rangle$ is the ground state.
I'm dealing in the Heisenberg picture, where $$x(t) = \exp(i H t / \hbar) x(0) \exp(-i H t / \hbar) \, .$$
Now, I have calculated the above using the ladder operator approach, and by explicit integration in the position representation, and I keep on getting the answer:
$$c(t) = \frac{\hbar}{2 m \omega} \exp(i\omega t)$$
How is it possible that my answer has an imaginary part? Surely the answer should be real since $x(t) x(0)$ is Hermitian. What is going on? |
Mini Assignment 2: Euler Angles Visualization (15 Points)
In this short assignment you will visually explore one common representation of rotations in 3D with three numbers, also known as Euler Angles.
Click here to launch the GUI that you will use in this assignment.
The GUI consists of a set of gimbals, which are motorized spinning rings that can be used to realize Euler Angles.
Submit a file called "Readme.txt" on Sakai with the answers to the questions below.
Question 1: (5 Points)
Take a moment to play around with the GUI and familiarize yourself with gimbals. In this particular example, the outer ring is yaw (rotation about the y-axis), the middle ring connected to it is pitch (rotation about the x-axis), and the inner ring connected to the pitch ring is roll (rotation about the z-axis). As described in class, the rotation matrices about the individual axes can be described as follows:
Pitch: rotation about X by gamma; Yaw: rotation about Y by beta; Roll: rotation about Z by alpha.
NOTE: The coordinate system assumed here is a right-handed coordinate system where (+X) is to the right, (+Y) is up, and (+Z) is out of the monitor, and this convention for roll/pitch/yaw may be slightly different than others you see on the web (though it's consistent with the way we've been defining our coordinate system).
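The matrices themselves were given as figures in the original page; for reference, the standard right-handed (counterclockwise) forms, which presumably match the stated convention, are
\[
R_X(\gamma) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{pmatrix},\quad
R_Y(\beta) = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix},\quad
R_Z(\alpha) = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]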
Question: In terms of \( R_X(\gamma), R_Y(\beta), \text{and } R_Z(\alpha) \), write down the matrix product \( R_{\text{Final}} = ABC \) that performs the overall rotation of the gimbal system. That is, find an \( R \) that gets the airplane to its final orientation after composing all gimbals in the order that they are arranged in the app, assuming that each point on the airplane is a column vector which is multiplied on the left by \( R \). No need to expand the product; simply indicate which order those three matrices go in (i.e. which one is \( A \), which one is \( B \), and which one is \( C \) in the above product?). A sketch for experimenting with composition order follows below.
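Here is the promised sketch (Python with numpy; the composition order shown is arbitrary and deliberately not the assignment's answer):

```python
import numpy as np

def Rx(g):  # pitch: rotation about X by gamma
    c, s = np.cos(g), np.sin(g)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(b):  # yaw: rotation about Y by beta
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):  # roll: rotation about Z by alpha
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Multiply a column vector on the left by R; try different orders
# and watch how the same point lands in different places.
p = np.array([1.0, 0.0, 0.0])
print(Ry(0.3) @ Rx(0.2) @ Rz(0.1) @ p)
print(Rz(0.1) @ Rx(0.2) @ Ry(0.3) @ p)
```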
Question 2: (5 Points)
In the app, set the pitch to 270 degrees. Now move the yaw around by itself, and move the roll around by itself. What can you say about the effect of doing yaw and the effect of doing roll when the pitch is in this position that makes it different from when pitch is in most other positions? Can you find another pitch for which a similar phenomenon happens?
Question 3: (5 Points)
Now you will examine an animation that results when gimbals are used to move from one orientation to another. For orientation 1, set the pitch to be something less than 270 degrees and for orientation 2, set the pitch to be something greater than 270 degrees. Set the yaws and rolls to be different values before and after also (but it doesn't matter what they are). Now click "animate." What do you notice when the pitch passes through 270 degrees? No need for any rigorous explanation, just describe what you see. How about if you design an animation which passes through the other angle that you selected in question 2? |
When a value multiplied by itself gives the original number, that value is a square root of the number. It is represented by the radical symbol $\sqrt{}$; "square root" is often used to refer to the principal square root.
For example, 4 and -4 are the square roots of 16 because \(4^2 = 16\) and \({(-4)}^2 = 16.\)
In general, the \(n\)th root can be written as
\[\LARGE \sqrt[n]{x}=x^{\frac{1}{n}}\]
so the square root is the case \(n = 2.\)
In order to calculate the square root, we first find the factors of the given number, then group equal factors into pairs. The square root of the square of a number is the number itself.
For example, consider the number 36. Its factors are given as \(6 \times 6.\) Since it is a perfect square, its square root is 6.
SOLVED EXAMPLES
Question 1: What is the square root of 144?
Solution:
The factors of 144 are given as \(144 = 12 \times 12,\) so \(\sqrt{144} = 12.\)
Question 2: What is the square root of 80?
Solution:
The factors of 80 are given as,
\(\sqrt{80} = \sqrt{4\times 4\times 5} = 4\sqrt{5}.\)
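These factor-and-pair computations can be checked with sympy (a minimal sketch; the library calls are standard):

```python
from sympy import sqrt, factorint

print(factorint(144))  # {2: 4, 3: 2}, i.e. 144 = 12 * 12
print(sqrt(144))       # 12
print(sqrt(80))        # 4*sqrt(5)
```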
|
I might seem picky, but I would first refrain from saying that $\nabla f$ is a vector. It is a vector field. This might be considered a common abuse of vocabulary, but using it amounts to assuming that students can fix it up routinely. The problem you are faced with shows very convincingly they don't.
I bet we have all been confronted with someone who, asked to compute the derivative of say $\cos^2x$ at $x=\pi$, wrote $$(\cos^2(\pi))'=((-1)^2)'=(1)'=0$$ Even if your students seem to have passed this level of confusion once they reached the multivariate calculus class, they can very probably fall into it again when confronted with the more sophisticated material of this class. Therefore you need much greater care and rigor than will be strictly necessary later on, with students who passed the class.
Now to the heart of your problem. I don't pretend I have a complete solution, it is a prevalent and very difficult problem; my propositions are partial, and are what I try to do (I do not follow them as strictly as I should myself, even if I speak first person below). I would advise to
- discuss the question extensively in class,
- regularly make quizzes on the nature of objects,
- make clear that the understanding of the nature of the objects will be tested.
1. Discuss the question extensively in class
Multivariate calculus is probably one of the most ill-understood subjects by students at the point of the curriculum where it is taught. It is quite easy to get students able to make a number of computations, but very difficult to get them to understand the meaning of the computed objects. One should constantly discuss the natural set to which each object belongs: $f$, $\nabla f$, $df$, $\nabla f(x_1,\dots,x_n)$, $Df(x_1,\dots,x_n)$, $Df(x_1,\dots,x_n)\bullet(v_1,\dots,v_n)$, etc. Doing so at first is in my experience not sufficient; one has to discuss this at each lecture.
One should also discuss in class the content of your question. Present a vector field and discuss whether it makes sense or not to take its gradient. Explain why it does make sense to write $\nabla\big(\langle \nabla f, \nabla g\rangle \big)$; everything that gives you opportunities to discuss the nature of more or less complex objects.
2. Regularly make quizzes on the nature of objects
Students need to be tested regularly, to assess their level of understanding in order to adjust the lecture, but also to convince them to work the s*** out of this mess. It is sometimes thankless work, but it is important work if multivariate calculus is to be taught at all (and rewarding once one gets it).
Some quizzes should closely match what has been discussed in class; some can be given in advance to show students they did not understand what they thought they did, and to introduce some explanations they might be reluctant to give attention to at first. For the anecdote, here is my favorite kind of question:
Consider $F:\mathbb{R}^3\to\mathbb{R}$ defined by $F(x,y,z)=xyz-x^2-y^2-z^2$
Compute the differential $DF$ of $F$.
Compute $DF(1,2,3)\bullet(4,5,6)$, i.e. the differential of $F$ at the point of coordinates $(1,2,3)$, evaluated on the vector of coordinates $(4,5,6)$.
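(For the record, and assuming I have differentiated correctly, the computation in 2. is $DF = (yz - 2x,\; xz - 2y,\; xy - 2z)$, so $DF(1,2,3) = (4, -1, -4)$ and $DF(1,2,3)\bullet(4,5,6) = 16 - 5 - 24 = -13$.)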
I usually have 100% correct answers to 1. but no correct answer to 2. To make my point clear I grade these two questions together, with no partial credit if 2. does not have a correct answer. After nailing my students with this, I can expect them to answer similar questions correctly on the final test. It took me some time assuming they would answer this correctly before realizing I had to ask them, and ask them hard.
3. Make clear that the understanding of the nature of the object will be tested.
There is a more general point here. Students have a way of finding out when we are bluffing. Each time we mention something is really important, but it seems quite difficult to them, and we don't test it for fear they will fail, they register the event and realize even more that we don't really mean it when we say it is important. I thus try not to say something is important if I don't want to test it, and I try to test everything that I mentioned as important. If I don't have the time to test everything, I give priority to what students would not want to be on the test. The goal is not to make them miserable, so I tell them in advance how I work, for them to be warned. They usually think I am BSing them, until one or two tests go by, and then they know I am serious and start taking me (and, more importantly, their need to work) seriously. |
The deeper problem with this supposition is that it assumes a conceptual identity between the notions of Hamiltonian and energy, and this is an identity that is not correct. That is, discernment needs to be applied to separate the two of these things.
Conceptually, energy is a physical quantity that is, in a sense, "nature's money" - the "currency" that you have to expend to produce physical changes in the world. On a somewhat deeper level, energy is to time what momentum is to space. This can be seen across many areas, such as Noether's theorem, which relates the law of conservation of energy to the fact that the history of a system can be translated back and forth in time and still work the same way, i.e. that there is no preferred point in time in the laws of physics, and likewise, the same for momentum with it being translated around in space and still working the same way. It also occurs in relativity, in which the "four-momentum" incorporates energy as its temporal component.
The Hamiltonian, on the other hand, is a mathematically modified version of the Lagrangian, obtained through what is called the Legendre transform. The Lagrangian is a way to describe how forces impact the time evolution of a physical system in terms of an optimization process, and the Hamiltonian converts this directly into an often more useful/intuitive differential equation process. In many cases, the Hamiltonian is equal to the system's total mechanical energy $E_\mathrm{mech}$, i.e. $K + U$, but this is not always so even in classical Hamiltonian mechanics, a fact which indicates and underscores the basic conceptual separation between the two.
In quantum mechanics, the "energy is to time what momentum is to space" concept manifests in that it is the generator of temporal translation, or the generator of evolution, in the same way that momentum is the generator of spatial translation. In particular, just as we have a "momentum operator"
$$\hat{p} := -i\hbar \frac{\partial}{\partial x}$$
which translates a position-space (here using one dimension for simplicity) wave function (mathematical representation of restricted information regarding the particle position on the part of an agent) $\psi$ via the somewhat-loose "infinitesimal equation"
$$\psi(x - dx) = \psi(x) - \left(\frac{i}{\hbar} \hat{p} \psi\right)(x)\,dx$$
for translating it by a tiny forward nudge $dx$, likewise we would want to have an energy operator
$$\hat{E} := i\hbar \frac{\partial}{\partial t}$$
which does the same but for translation with regard to time (the sign change is because we usually consider a temporal advance from $t$ to $t + dt$, as opposed to psychologically [perhaps also psycho-culturally] preferring spatial motions to be directed rightward, in our descriptions of things.). The problem here is that wave functions generally do not contain a time parameter, and at least non-relativistic quantum mechanics treats space and time separately, so the above cannot be a true operator on the system state space. Rather, it is more of a "pseudo-operator" that we'd "like" to have but can't "really" for this reason. One should note that this is the expression that appears on the right of the Schrodinger equation, which we could thus "better" write as
$$\hat{H}[\psi(t)] = [\hat{E}\psi](t)$$
where $\psi$ is now a temporal sequence of wave functions (viz. a "curried function", which becomes an "ordinary" function when you consider the wave functions as the basis-independent Hilbert vectors). The Hamiltonian operator $\hat{H}$ is a bona fide operator, which acts only on the "present" configuration information for the system. What this equation is "really" saying is that in order for such a time series to represent a valid physical evolution, the Hamiltonian must also be able to translate it through time. The distinction between Hamiltonian and energy manifests in that the Hamiltonian will not translate every time sequence, while the energy pseudo-operator will, just as the momentum operator will translate every spatial wave function. Moreover, many Hamiltonians may be possible that give rise to the same energy spectrum.
Because these two things are different, it makes no sense to equate them as operators, as suggested. You can, and should, have $\hat{H}[\psi(t)] = [\hat{E}\psi](t)$, but you should not have $\hat{H} = \hat{E}$! |
In this paper we prove the Pohozaev identity for the semilinear Dirichlet problem \({(-\Delta)^s u =f(u)}\) in \({\Omega, u\equiv0}\) in \({{\mathbb R}^n\backslash\Omega}\). Here, \({s\in(0,1)}\), \((-\Delta)^s\) is the fractional Laplacian in \({\mathbb{R}^n}\), and \(\Omega\) is a bounded \(C^{1,1}\) domain. To establish the identity we use, among other things, that if \(u\) is a bounded solution then \({u/\delta^s|_{\Omega}}\) is \(C^\alpha\) up to the boundary \(\partial\Omega\), where \(\delta(x) = \mathrm{dist}(x,\partial\Omega)\). In the fractional Pohozaev identity, the function \({u/\delta^s|_{\partial\Omega}}\) plays the role that \(\partial u/\partial \nu\) plays in the classical one. Surprisingly, from a nonlocal problem we obtain an identity with a boundary term (an integral over \(\partial\Omega\)) which is completely local. As an application of our identity, we deduce the nonexistence of nontrivial solutions in star-shaped domains for supercritical nonlinearities.
|
Reading about the ARMA model for the first time, and I'm confused.
Let's say I have a time series
x = [1, 2.1, 2.9, 3, 4.1]
According to the ARMA model, $X_t$ is a linear combination of previous values and errors, something like
$$\sum_i \phi_i X_{t-i} + \sum_i \theta_i \epsilon_{t-i}$$
But, what are $X_i$ and $\epsilon_i$??
Is $X_i$ the actual value of the series at $t=i$? E.g., in my example, is $X_2 = 2.1$ (with 1-based indexing)? If so, what are the error terms?
The same question applies to the simple moving-average model, where all the $X_i$ values remain unused. A simulation sketch follows below.
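To make the roles concrete, here is a minimal simulation sketch in Python (the coefficients and seed are arbitrary): each $\epsilon_t$ is a white-noise shock drawn by the simulator, and each $X_t$ is then built from past values and past shocks.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, theta = 0.5, 0.3           # AR(1) and MA(1) coefficients
n = 200
eps = rng.normal(0, 1, n)       # the error terms: white-noise shocks
x = np.zeros(n)                 # the observed series X_t
for t in range(1, n):
    x[t] = phi * x[t-1] + eps[t] + theta * eps[t-1]
```

In a fitted model the $\epsilon_t$ are not observed directly; they are the one-step prediction errors the estimation procedure backs out of the data. |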
I was trying to solve exercise 24 of chapter 3.B of "Linear Algebra done right", by Sheldon Axler. It states:
Suppose $W$ is finite-dimensional and $T_1, T_2 \in \mathcal{L}(V, W)$. Prove that $\operatorname{null} T_1 \subset \operatorname{null} T_2$ if and only if there exists $S \in \mathcal{L}(W,W)$ such that $T_2 = ST_1$.
I found a "solution" to this exercise at https://linearalgebras.com/3b.html. It starts by proving one direction of the implication and states at the beginning: "Suppose $\operatorname{null} T_1 \subset \operatorname{null}T_2$. [...] Let $Tv_1, \cdots, Tv_n$ be a basis for $\operatorname{range}T$, then the list $v_1, \cdots, v_n$ is linearly independent. Let $K = \operatorname{span}(v_1, \cdots,v_n) $, then $V = K \oplus \operatorname{null} T$."
I think the last result can be shown like this: given that the list $n_1, \cdots, n_k$ is a basis for $\operatorname{null} T$, we can extend this list to a basis $n_1, \cdots, n_k, v_1, \cdots, v_n$ of $V$. Since the list $v_1, \cdots, v_n$ is linearly independent and each $v_i$ is linearly independent from all the $n_i$ (otherwise $T(v_i) = 0$), by the rank-nullity theorem $\dim V = k + n$, so $n_1,\cdots, n_k, v_1, \cdots, v_n$ indeed forms a basis of $V$ and $V = K \oplus \operatorname{null} T$. Is this reasoning correct?
I also wanted to consider a similar statement. Let $Tv_1, \cdots, Tv_n$ be a basis for $\operatorname{range}T$, if we extend $v_1, \cdots, v_n$ to a basis $v_1, \cdots, v_n, v_{n+1},\cdots, v_m$ of $V$ then the list $v_{n+1},\cdots, v_m$ is a basis for $\operatorname{null} T$. This would mean that if $V = K \oplus G$ then $G = \operatorname{null} T$. Is this statement true? |
In strong nonequilibrium, the statistical operator describing the system depends on an infinite number of variables (BBGKY hierarchy) and contains information about all the previous states starting from an initial condition $\rho(t_0) = \rho_{rel}(t_0)$:
$$ \rho(t) = \frac{1}{t-t_0}\int\limits_{t_0}^t e^{i(t_1-t)L}\rho_{rel}(t_1)\,dt_1 $$
and satisfies the inhomogenous Neumann equation
$$ \frac{\partial\rho(t)}{\partial t} + iL\rho(t) = -\epsilon\left(\rho(t)-\rho_{rel}(t)\right) $$
However, to describe the macroscopic state of a system at each time by appropriate observables
$$ \langle B_n(t) \rangle = Tr\{\rho_{rel}(t)B_n\} $$
it is often enough to use only the relevant (known) information contained in the relevant statistical operator, which can be obtained by maximizing the entropy and using in addition to the conserved quantities the mean values of additional variables as constraints
$$ \rho_{rel}(t) = e^{-\Phi(t)-\sum_n F_n(t)B_n} $$
where
$$ \Phi(t) = \ln Tr \left( e^{-\sum_n F_n(t)B_n} \right) $$
is the Massieu-Planck function.
After reading about some different applications of this MaxEnt formalism, determining the appropriate relevant observables for the state of a nonequilibrium system often looked unsatisfactorily heuristic and handwaving to me.
So my question is:
Is there a general systematic method, at best motivated by some "first principles", to obtain the relevant variables needed to describe the evolution of a nonequilibrium system?
A probably very stupid aside: the evolution of a system far from equilibrium, with many degrees of freedom needed to describe it, towards its equilibrium state characterized by the conserved quantities (or their conjugate variables) only, reminds me of the coarse graining needed to describe a system at an effective scale, and therefore renormalization comes to mind; I am not sure if there is a relationship between these two things or not ... |
Natural numbers: \(\mathbb{N}\)
Whole numbers: \(\mathbb{N_0}\)
Integers: \(\mathbb{Z}\)
Positive integers: \(\mathbb{Z^+}\)
Negative integers: \(\mathbb{Z^-}\)
Rational numbers: \(\mathbb{Q}\)
Real numbers: \(\mathbb{R}\)
Complex numbers: \(\mathbb{C}\)
The natural numbers are those used for counting and ordering: \(\mathbb{N} = \left\{ 1,2,3, \ldots \right\}\)
The natural numbers including zero (or whole numbers) are those used for indicating the number of objects: \(\mathbb{N_0} = \left\{ 0,1,2,3, \ldots \right\}\)
The integers include the natural numbers, the negatives of the natural numbers, and zero.
Positive integers: \(\mathbb{Z^+} = \mathbb{N} = \left\{ 1,2,3, \ldots \right\}\)
Negative integers: \(\mathbb{Z^-} = \left\{ \ldots , -3, -2, -1 \right\}\)
\(\mathbb{Z} = \mathbb{Z^-} \cup \left\{ 0 \right\} \cup \mathbb{Z^+} = \left\{ \ldots , -3, -2, -1,0,1,2,3, \ldots \right\}\)
The rational numbers are those that can be represented as a fraction \(a/b\) where \(a\) and \(b\) are integers and \(b \ne 0:\) \(\mathbb{Q} = \big\{ x \mid x = a/b,\;a \in \mathbb{Z},\;b \in \mathbb{Z},\;b \ne 0 \big\}\)
The decimal expansion of a rational number either terminates after a finite number of digits (i.e. is a finite decimal) or becomes an infinite periodic decimal. The irrational numbers are those that may be represented as an infinite non-repeating decimal.
The set of real numbers \(\mathbb{R}\) is the union of the rational and irrational numbers.
Complex numbers: \(\mathbb{C} = \big\{ x + iy \mid x \in \mathbb{R}\;\text{and}\;y \in \mathbb{R} \big\}\), where \(i\) is the imaginary unit.
\(\mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R} \subset \mathbb{C}\) |
There was a question about a particular case of this, Quadratic differentials; seemingly it contained too little information, so let me try again. This will also be a second take on my previous question Getting rid of discontinuities in plots caused by square roots, logarithms, `Arg`, etc., focussed on the particular case of square roots.
A quadratic differential is simply an expression of the form $f(z)dz^2$, where $f(z)$ is (for my purposes) some meromorphic function. For example, it can be a rational function, although I in fact need things like products of the Weierstrass function or some other elliptic functions.
A trajectory of such a differential would be a standard solution of a differential equation if there were $dz$ instead of $dz^2$: a (piecewise) smooth parametric curve $(x(t),y(t))$ that at every point $z$ it passes forms the same constant angle (counted always in the same, say counterclockwise, direction) with the vector $(\operatorname{Re}(f(z)),\operatorname{Im}(f(z)))$. We could then obtain it by solving, for any given angle $\alpha$, the system\begin{align*}x'(t)&=\operatorname{Re}(e^{i\alpha}f(x+iy))\\y'(t)&=\operatorname{Im}(e^{i\alpha}f(x+iy))\end{align*}with, say, NDSolve, or visualize it using StreamPlot.
However, the term $dz^2$ means that it must form a constant angle with the vector $\pm\sqrt{f(z)}$ rather than $f(z)$. And since this vector now has two possible (opposite) directions, the condition now becomes that the curve forms either angle $\alpha$ or $\alpha+\pi$ with ReIm of $\sqrt{f(z)}$. This means we now have, instead of a vector field, a field of tangent lines with unspecified direction, so that the positive horizontal half-axis gets conflicting directions. For more complicated $f(z)$ I failed to find any general consistent way to compute with Sqrt[f[z]].
The question: how to calculate and draw curves tangent to $\ell_{x,y}$ (the line of this field) at each point $(x,y)$ they pass through? |
As with double integrals, triple integrals can often be easier to evaluate by making a change of variables. This allows one to simplify the region of integration or the integrand.
Let a triple integral be given in the Cartesian coordinates \(x, y, z\) in the region \(U:\)
\[\iiint\limits_U {f\left( {x,y,z} \right)dxdydz} .\]
We need to calculate this integral in the new coordinates \(u, v, w.\) The relationship between the old and new coordinates is given by
\[x = \varphi \left( {u,v,w} \right),\;\; y = \psi \left( {u,v,w} \right),\;\; z = \chi \left( {u,v,w} \right).\]
It is supposed here that the following conditions are satisfied:
1. The functions \(\varphi, \psi, \chi\) are continuous together with their partial derivatives;
2. There is a single-valued relation between points of the region of integration \(U\) in the \(xyz\)-space and points of the region \(U'\) in the \(uvw\)-space;
3. The Jacobian of the transformation \(I\left( {u,v,w} \right),\) equal to
\[I\left( {u,v,w} \right) = \frac{\partial \left( {x,y,z} \right)}{\partial \left( {u,v,w} \right)} = \left| {\begin{array}{*{20}{c}} {\frac{\partial x}{\partial u}}&{\frac{\partial x}{\partial v}}&{\frac{\partial x}{\partial w}}\\ {\frac{\partial y}{\partial u}}&{\frac{\partial y}{\partial v}}&{\frac{\partial y}{\partial w}}\\ {\frac{\partial z}{\partial u}}&{\frac{\partial z}{\partial v}}&{\frac{\partial z}{\partial w}} \end{array}} \right|,\]
is non-zero and keeps a constant sign everywhere in the region of integration \(U.\)
Then the formula for change of variables in triple integrals is written as
\[\iiint\limits_U {f\left( {x,y,z} \right)dxdydz} = \iiint\limits_{U'} {f\left( {\varphi,\psi,\chi} \right) \left|{I\left( {u,v,w} \right)} \right|dudvdw}.\]
Here \(\varphi,\psi,\chi\) are functions of the variables \({u,v,w}\) and \(\left| {I\left( {u,v,w} \right)} \right|\) means the absolute value of the Jacobian.
Triple integrals are often easier to evaluate in cylindrical or spherical coordinates; the corresponding examples are considered on separate pages. Some examples of using other transformations of coordinates are given below.
Solved Problems
Click a problem to see the solution.
Example 1. Find the volume of the region \(U\) defined by the inequalities \(0 \le x + y + z \le 10,\) \(0 \le y + z \le 5,\) \(0 \le z \le 2.\)
Solution.
Obviously, this region is a parallelepiped. It's convenient to change variables in such a way as to transform the parallelepiped into a rectangular box. In this case the triple integral becomes the product of three integrals of one variable.
Make the following replacement:
\[u = x + y + z,\;\; v = y + z,\;\; w = z.\]
The region of integration \(U'\) in the new variables \(u, v, w\) is defined by the inequalities
\[0 \le u \le 10,\;\; 0 \le v \le 5,\;\; 0 \le w \le 2.\]
The volume of the solid is
\[V = \iiint\limits_U {dxdydz} = \iiint\limits_{U'} {\left| {I\left( {u,v,w} \right)} \right|dudvdw}.\]
Calculate the Jacobian of this transformation. To avoid expressing the old variables \(x, y, z\) through the new ones \(u, v, w,\) we first find the Jacobian of the inverse transformation:
\[\frac{\partial \left( {u,v,w} \right)}{\partial \left( {x,y,z} \right)} = \left| {\begin{array}{*{20}{c}} {\frac{\partial u}{\partial x}}&{\frac{\partial u}{\partial y}}&{\frac{\partial u}{\partial z}}\\ {\frac{\partial v}{\partial x}}&{\frac{\partial v}{\partial y}}&{\frac{\partial v}{\partial z}}\\ {\frac{\partial w}{\partial x}}&{\frac{\partial w}{\partial y}}&{\frac{\partial w}{\partial z}} \end{array}} \right| = \left| {\begin{array}{*{20}{c}} 1&1&1\\ 0&1&1\\ 0&0&1 \end{array}} \right| = 1 \cdot \left| {\begin{array}{*{20}{c}} 1&1\\ 0&1 \end{array}} \right| = 1 - 0 = 1.\]
Then
\[\left| {I\left( {u,v,w} \right)} \right| = \left| {\frac{\partial \left( {x,y,z} \right)}{\partial \left( {u,v,w} \right)}} \right| = \left| {\left( {\frac{\partial \left( {u,v,w} \right)}{\partial \left( {x,y,z} \right)}} \right)^{-1}} \right| = 1.\]
Hence, the volume of the solid is
\[V = \iiint\limits_{U'} {\left| {I\left( {u,v,w} \right)} \right|dudvdw} = \iiint\limits_{U'} {dudvdw} = \int\limits_0^{10} {du} \int\limits_0^5 {dv} \int\limits_0^2 {dw} = 10 \cdot 5 \cdot 2 = 100.\] |
Differential Equations: Methods of Solving First Order, First Degree Differential Equations
The variable separable method is used to solve an equation in which the variables can be separated completely, i.e. terms containing \(y\) should remain with \(dy\) and terms containing \(x\) should remain with \(dx.\) The following exact differentials are useful in this context:
1. \(xdy + ydx = d (xy)\)
2. \(d(x + y) = dx + dy\)
3. \(d\left(\frac{y}{x}\right)=\frac{xdy-ydx}{x^2}\)
4. \(d\left(\frac{x}{y}\right)=\frac{ydx-xdy}{y^2}\)
5. \(d\left(\frac{x^2}{y}\right)=\frac{2xydx-x^{2}dy}{y^2}\)
6. \(d\left(\frac{y^2}{x}\right)=\frac{2xydy-y^{2}dx}{x^2}\)
7. \(d\left(\frac{x^2}{y^{2}}\right)=\frac{2xy^{2}dx-2x^{2}ydy}{y^4}\)
8. \(d\left(\frac{y^2}{x^{2}}\right)=\frac{2x^{2}ydy-2xy^{2}dx}{x^4}\)
9. \(\frac{xdy+ydx}{xy}=d(\log xy)\)
10. \(\frac{ydx-xdy}{xy}=d\left(\log \frac{x}{y}\right)\)
11. \(\frac{xdy-ydx}{xy}=d\left(\log \frac{y}{x}\right)\)
12. \(\frac{dx+dy}{x+y}=d \log (x+y)\)
13. \(\frac{xdx+ydy}{x^{2}+y^{2}}=d \left(\log \sqrt{x^{2}+y^{2}}\right)\)
14. \(\frac{xdy-ydx}{x^{2}+y^{2}}=d \left(\tan^{-1}\frac{y}{x}\right)\)
15. \(\frac{ydx-xdy}{x^{2}+y^{2}}=d \left(\tan^{-1}\frac{x}{y}\right)\)
16. \(d \left(\frac{-1}{xy}\right)=\frac{xdy+ydx}{x^{2}y^{2}}\)
17. \(d \left(\frac{e^{x}}{y}\right)=\frac{ye^{x}dx-e^{x}dy}{y^{2}}\)
18. \(d \left(\frac{e^{y}}{x}\right)=\frac{xe^{y}dy-e^{y}dx}{x^{2}}\)
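Any of these identities can be verified by differentiating the right-hand side; a minimal sketch with sympy, using identity 14 as an example:

```python
import sympy as sp

x, y = sp.symbols('x y')

# identity 14: (x dy - y dx)/(x^2 + y^2) = d(arctan(y/x))
expr = sp.atan(y / x)
print(sp.simplify(sp.diff(expr, x) + y / (x**2 + y**2)))  # 0
print(sp.simplify(sp.diff(expr, y) - x / (x**2 + y**2)))  # 0
```

The remaining identities check the same way. |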
I am currently trying to grasp some ideas on Lebesgue-Rokhlin spaces from Bogachev, "Measure Theory", vol. 2. Such spaces are also known as standard probability spaces but the definitions are not unique in the literature.
Let us restrict to probability measures.
Theorem 9.4.7: If $(M, \mathcal{M}, \mu)$ is Lebesgue-Rokhlin, then it is isomorphic mod 0 to $([0,1], \mathcal{B}[0,1], \nu)$, where $\nu = c \lambda + \sum_{n=1}^\infty \alpha_n \delta_{1/n}$ with $\alpha_n = \mu(a_n)$ and $(a_n)$ the countable family of atoms of $\mu$.
Now every Borel probability measure on $[0,1]$ can be decomposed into an absolutely continuous part, a singular continuous part, and an atomic part. In the above theorem I am somehow missing the singular continuous part. Is it not explicitly mentioned, or what is the intuition that one only considers the absolutely continuous part and the atomic part? Is it "hidden" behind the Lebesgue measure? |
I would like to ask if there exist pedagogical expositions of the Mordell-Weil theorem (wikipedia). Which parts of number theory (algebraic geometry) should one learn first before starting to read a proof of Mordell-Weil?
J. Silverman and J. Tate, "Rational Points on Elliptic Curves", is a wonderful introduction to elliptic curves over the rational numbers. It covers topics such as Mordell-Weil, the Nagell-Lutz theorem, elliptic curves over finite fields, etc.
For more advanced treatment of Mordell-Weil, I suggest the following textbook:
J. Silverman "The arithmetic of elliptic curves" (Chapter 8 is about Mordell-Weil).
For the case of elliptic curves, there is Mordell's proof, discussed in his book Diophantine Equations (pp. 138-148). I could hardly imagine fewer prerequisites than this.
There must be a proof in Cassels'
Lectures on elliptic curves (Cambridge University Press, Cambridge, 1991).
See also his masterly survey
Diophantine equations with special reference to elliptic curves (J. London Math. Soc. 41 (1966) 193–291) and the historical essay Mordell's finite basis theorem revisited (Math. Proc. Cambridge Philos. Soc. 100 (1986), no. 1, 31–41).
Here is a quote from this last paper :
Weil's generalization of Mordell's theorem (and subsequent generalizations) was usually referred to as the Mordell-Weil Theorem. Mordell himself strongly disapproved of this usage and frequently insisted (in public and in private) that what he had proved should be called Mordell's Theorem and that everything else could, for his part, be called simply Weil's Theorem. Addendum. Another excellent source is Knapp's Elliptic curves (Princeton University Press, Princeton, 1992), which contains a proof of Mordell's theorem (over $\mathbf Q$).
There is a very affordable book by Milne (
Elliptic curves, BookSurge Publishers, Charleston, 2006) and a very motivating one by Koblitz (Introduction to elliptic curves and modular forms, Springer, New York, 1993). Tate's Haverford Lectures also served as the basis for Husemoller (Elliptic curves, Springer, New York, 2004).
There is a very elementary and self-contained (modulo a few things proved earlier in the book) proof in Chapter 19 of the book of Ireland and Rosen, "A classical introduction to modern number theory". One might object that it can be misleading to use explicit but obscure polynomial identities instead of more intrinsic facts from algebraic geometry, but the text has lots of good remarks and references to go beyond this elementary approach.
Manin's proof of the Mordell-Weil theorem (for abelian varieties over number fields) appeared as an appendix to the Russian translation of the first edition of Mumford's "Abelian Varieties". Eventually it was translated into English and published as an appendix to the second and third editions of Mumford's book.
Here is a short proof of the weak Mordell-Weil theorem for Abelian varieties over a number field using étale cohomology (easily adapted to finitely generated fields). The construction of the height pairing can be found in Hindry-Silverman, or in [Brian Conrad, http://math.stanford.edu/~conrad/papers/Kktrace.pdf ], section 9 (Conrad even proves a more general theorem, the Lang-Néron theorem).
Let $K$ be a number field, $A/K$ be an Abelian variety and $S$ a finite set of places of $K$. Let $X = \mathrm{Spec}\mathcal{O}_{K,S}$ and $\mathscr{A}/X$ the Néron model of $A/K$. By the Néron mapping property, it suffices to show that $\mathscr{A}(X)/n = A(K)/n$ is finite for some $S$ and $n > 1$.
By enlarging $S$ by the set of primes lying over $n$, one has a short exact Kummer sequence $0 \to \mathscr{A}[n] \to \mathscr{A} \to \mathscr{A} \to 0$, inducing in (étale) cohomology $0 \to \mathscr{A}(X)/n \hookrightarrow H^1(X,\mathscr{A}[n])$. So it suffices to show that $H^1(X,\mathscr{A}[n])$ is finite. (This group is related to the
Selmer group. The cokernel $H^1(X,\mathscr{A})[n]$ is related to the $n$-torsion of the Tate-Shafarevich group.)
There is a finite étale Galois covering $X'/X$ such that $\mathscr{A}[n] \times_X X' \cong (\mathbf{Z}/n)^{2g} \cong \mu_n^{2g}$. The Hochschild-Serre spectral sequence $$H^p(\mathrm{Gal}(X'/X), H^q(X',\mathscr{A}[n] \times_X X')) \Rightarrow H^{p+q}(X,\mathscr{A}[n])$$ induces $$0 \to H^1(\mathrm{Gal}(X'/X), H^0(X',\mathscr{A}[n] \times_X X')) \to H^1(X,\mathscr{A}[n]) \to H^0(\mathrm{Gal}(X'/X), H^1(X',\mathscr{A}[n] \times_X X')).$$ Since $\mathrm{Gal}(X'/X)$ and $H^0(X',\mathscr{A}[n] \times_X X')$ are finite, the left hand group is finite, so it suffices to show that $H^1(X',\mathscr{A}[n] \times_X X') \cong H^1(X',\mu_n^{2g})$ is finite. But the short exact Kummer sequence $1 \to \mu_n \to \mathbf{G}_m \to \mathbf{G}_m \to 1$ induces $$1 \to \mathbf{G}_m(X')/n \to H^1(X',\mu_n) \to H^1(X',\mathbf{G}_m)[n] \to 0.$$ The left hand group is finite by the finite generation of the $S$-unit group, and the right hand group is finite by the finiteness of the $S$-class number (Hilbert 90: $H^1(X',\mathbf{G}_m) = \mathrm{Pic}(X') = \mathrm{Cl}(X')$).
Actually, the wikipedia article you cite cites Joe Silverman's book, which contains such a "pedagogical" exposition. The book is not entirely self-contained, but I am sure the preface explains the prerequisites.
I think one should also mention Jean-Pierre Serre, Lectures on the Mordell-Weil Theorem (Aspects of Mathematics).
Already mentioned: Silverman and Tate's "Rational Points on Elliptic Curves" (undergraduate level) and Silverman's "The Arithmetic of Elliptic Curves" (graduate level).
Another text at the undergraduate level that covers Mordell's theorem (i.e., the Mordell-Weil theorem for elliptic curves over $\mathbb{Q}$) is Washington's "Elliptic Curves: Number Theory and Cryptography" (see Chapter 8).
If you are looking for a proof of the Mordell-Weil theorem in its utmost generality (i.e., for abelian varieties over number fields), I would suggest Hindry and Silverman's "Diophantine Geometry: An Introduction" (see Part C).
Find an explicit formula for a function that is exactly equivalent to the power series $$\sum_{n=1}^\infty\frac{(-1)^n}{2n+1}x^{2n}$$
Can somebody give me an idea on where to start with this?
Edit: Thanks to everybody's comments, I recognize that this power series is similar to the one for arctan:
$$\tan^{-1}x=\sum_{n=0}^\infty\frac{(-1)^n}{2n+1}x^{2n+1}$$
Please let me know if what I did was correct:
I decided to try and get the $x^{2n}$ in the first power series to $x^{2n+1}$ by multiplying the power series by $\frac{x}{x}$ to get:
$$\sum_{n=1}^\infty\frac{(-1)^n}{2n+1}\frac{x^{2n+1}}{x}$$
I then factored out $\frac{1}{x}$ to get:
$$\frac{1}{x}\sum_{n=1}^\infty\frac{(-1)^n}{2n+1}{x^{2n+1}}$$
Considering that $\sum_{n=1}^\infty a_n=\bigg(\sum_{n=0}^\infty a_n\bigg)-a_0$ this would mean that:
$$\frac{1}{x}\sum_{n=1}^\infty\frac{(-1)^n}{2n+1}{x^{2n+1}}=\frac{1}{x}\bigg(\sum_{n=0}^\infty\frac{(-1)^n}{2n+1}x^{2n+1}-x\bigg)$$
Which makes the power series equivalent to:
$$\frac{1}{x}(\tan^{-1}x-x)$$
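A quick numerical sanity check (a sketch assuming only the series definition) compares a truncated partial sum with $(\tan^{-1}x - x)/x$ inside the interval of convergence:

```python
import math

def partial_sum(x, terms=200):
    # Partial sum of sum_{n>=1} (-1)^n x^(2n) / (2n + 1)
    return sum((-1) ** n * x ** (2 * n) / (2 * n + 1) for n in range(1, terms + 1))

for x in (0.1, 0.5, 0.9):
    print(x, partial_sum(x), (math.atan(x) - x) / x)
# Both columns agree to machine precision for |x| < 1.
```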
Description
This plugin allows you to include mathematics in a topic, with a format very similar to LaTeX. The external program latex2html is used to generate gif (or png) images from the math markup, and the image is then included in the page. The first time a particular expression is rendered, you will notice a lag as latex2html is being run on the server. Once rendered, the image is saved as an attached file for the page, so subsequent viewings will not require re-renders. When you remove a math expression from a page, its image is deleted.
Note that this plugin is called MathModePlugin, not LaTeXPlugin, because the only piece of LaTeX implemented is rendering of images of mathematics.
Syntax Rules
<latex [attr="value"]* > formula </latex> generates an image from the contained formula. In addition, attribute-value pairs may be specified that are passed to the resulting img html tag. The only exceptions are the following attributes, which take effect in the latex rendering pipeline:
size: the latex font size; possible values are tiny, scriptsize, footnotesize, small, normalsize, large, Large, LARGE, huge or Huge; defaults to %LATEXFONTSIZE%
color: the foreground color of the formula; defaults to %LATEXFGCOLOR%
bgcolor: the background color; defaults to %LATEXBGCOLOR%
The formula will be displayed using a math latex environment by default. If the formula contains a latex linebreak (\\) then a multline environment of amsmath is used instead. If the formula contains an alignment sequence (& = &) then an eqnarray environment is used.
Note that the old notation using %$formula$% and %\[formula\]% is still supported but deprecated.
If you want to recompute the images cached for the current page, append ?refresh=on to its url, e.g. click here to refresh the formulas in the examples below.
Examples
The following will only display correctly if this plugin is installed and configured correctly.
<latex title="this is an example">
\int_{-\infty}^\infty e^{-\alpha x^2} dx = \sqrt{\frac{\pi}{\alpha}}
</latex>
<latex>
{\cal P} & = & \{f_1, f_2, \ldots, f_m\} \\
{\cal C} & = & \{c_1, c_2, \ldots, c_m\} \\
{\cal N} & = & \{n_1, n_2, \ldots, n_m\}
</latex>
<latex title="Calligraphics" color="orange">
\cal
A, B, C, D, E, F, G, H, I, J, K, L, M, \\
\cal
N, O, P, Q, R, S, T, U, V, W, X, Y, Z
</latex>
<latex>
\sum_{i_1, i_2, \ldots, i_n} \pi * i + \sigma
</latex>
This is a new inline test.
Greek letters: \alpha \beta \gamma \delta \epsilon \zeta \eta \theta \iota \kappa \lambda \mu \nu \xi
Plugin Installation Instructions
You do not need to install anything in the browser to use this extension. The following instructions are for the administrator who installs the extension on the server.
Open configure, and open the "Extensions" section. Use "Find More Extensions" to get a list of available extensions. Select "Install".
If you have any problems, or if the extension isn't available in configure, then you can still install manually from the command-line. See http://foswiki.org/Support/ManuallyInstallingExtensions for more help.
Configuration
There is a set of configuration variables that can be set in different places. All of the below variables can be set in your LocalSite.cfg file like this:
$Foswiki::cfg{MathModePlugin}{<Name>} = <value>;
Some of the below variables can only be set this way; some of them may be overridden by defining the respective preference variable.
Name | Preference Variable | Default | Description
HashCodeLength | | 32 | length of the hash code; if you switch to a different hash function, you will likely have to change this
ImagePrefix | | '_MathModePlugin_' | string to be prepended to any auto-generated image
ImageType | %LATEXIMAGETYPE% | 'png' | extension of the image type; possible values are 'gif' and 'png'
Latex2Img | | '.../tools/MathModePlugin_latex2img' | the command to convert a latex formula to an image
LatexBGColor | %LATEXBGCOLOR% | white | default background color
LatexFGColor | %LATEXFGCOLOR% | black | default text color
LatexFontSize | %LATEXFONTSIZE% | normalsize | default font size
LatexPreamble | %LATEXPREAMBLE% | '\usepackage{latexsym}' | latex preamble to include additional packages (e.g. \usepackage{mathptmx} to change the math font); note that the packages amsmath and color are loaded too, as they are obligatory
ScaleFactor | %LATEXSCALEFACTOR% | 1.2 | factor to scale images

Plugin Info
Plugin Author: Graeme Lufkin, Foswiki:Main/MichaelDaum
Copyright ©: 2002 Graeme Lufkin, gwl@u.washington.edu; 2006-2014 Michael Daum http://michaeldaumconsulting.com
License: GPL (GNU General Public License)
Release: 4.03
Version: 4.03
Change History:
18 Mar 2014: make sure the directory is present when caching images
19 Mar 2011: added Config.spec and DEPENDENCIES to ease installation and configuration; improved compatibility to run this plugin on various Foswiki engines
23 Apr 2009: converted to foswiki plugin
07 Jan 2009: certified for foswiki/compat; removed deprecated endRenderingHandler
07 Dec 2007: replaced tempfile with mktemp in the latex2img helper script
13 Nov 2007: fixed plugin on 4.2
18 Dec 2006: only use one bgcolor
02 Oct 2006: don't fail on hierarchical webs; backwards compatible tempfile cleanup
31 Aug 2006: added NO_PREFS_IN_TOPIC; using xcolor instead of color latex package now, to be able to specify colors in html-typical codes; default preamble uses latexsym now
07 Aug 2006: switched from latex2html to latex+dvipng+convert; added size, color, bgcolor to <latex> tag; rendering pngs by default now; reworked plugin settings; added a latex2img shell script; returning full latex error report
04 Aug 2006: major rewrite; fixed security issues by using the sandbox feature and creating tempfiles properly; added new <latex>...</latex> tag to support multiline formulas; better configurability; better error reporting; fixed issues where images have not been cleaned up regularly; speedup: don't clean orphaned images during view but during save; speedup by adding lazy compilation and initialization; implemented a postRenderingHandler for T* V4; prevent auto-generated images stored in pub from being auto-attached using T* V4
03 Apr 2002: Initial version
CPAN Dependencies: none
Other Dependencies: LaTeX, dvipng, ImageMagick
Perl Version: 5.8
Plugin Home: Foswiki:Extensions/MathModePlugin
Support: Foswiki:Support/MathModePlugin
If $f$ is monotone increasing on an interval and has a jump discontinuity at $x_0$ in the interior of the domain show that the jump is bounded above by $f(x_1) - f(x_2)$ for any two points $x_1$, $x_2$ of the domain surrounding $x_0$, $x_1 < x_0 < x_2$.
So I've tried solving this; here is what I have:
Let $f$ be monotone increasing on an interval $A$ that has a jump discontinuity at $x_0$ on the interior of the domain where $x_0 \in A$. Let there be any $x_1, x_2 \in A$ where $x_1< x_0 < x_2$. Then by definition of monotone increasing $f(x_1) \leq f (x_2)$.
From here I want to say that $f(x_1) \leq f(x_0) \leq f(x_2) \rightarrow f(x_1) - f(x_1) \leq f(x_0) \leq f(x_2) - f(x_1)$ and that $f(x_0) \leq f(x_2) - f(x_1)$. But I'm not sure if I'm going in the right direction since I can't say for sure that $f(x_0)$ is really less than $f(x_2)-f(x_1)$. Or would I need to do something like $f(x_0)-f(x_1)\leq f(x_2)-f(x_1)$ and go from there. Any help would be appreciated as I'm somewhat unsure on this problem!
Our goal in this series is the solution of the hydrogen atom using the methods of supersymmetric quantum mechanics. Last time, we constructed the following picture of the procedure:
If the potential we wish to study satisfies a certain criterion, which we called “shape invariance,” we can construct a
hierarchy of Hamiltonians, each missing the lowest-energy eigenstate of the last, and find the complete spectrum of the original Hamiltonian by “working leftward” in the state diagram. We shall see that with the hydrogen atom, each state in the diagram corresponds to a physical eigenstate of the system, but in order to get there, we have to turn the three-dimensional Coulomb potential of the hydrogen atom into the kind of problem we can study with the SUSY QM machinery we’ve built up so far. Two steps will be necessary to do this: first, moving to the center-of-mass reference frame, and second, separating the radial and angular dependencies. In this post, we’ll tackle the first of those two tasks.
While the SUSY part isn’t widely taught, these preliminary steps are more familiar. This brief note is based on Chapter VII of Cohen-Tannoudji, Diu and Laloë.
Let’s work in general terms for the time being. We’ll consider two particles, whose position operators shall be [tex]\vec{x}_1[/tex] and [tex]\vec{x}_2[/tex], and whose momentum operators are [tex]\vec{p}_1[/tex] and [tex]\vec{p}_2[/tex]. Supposing that the interaction between the particles depends only on the displacement between them, we can write a Hamiltonian for the two-body system as follows:
[tex]H = \frac{\vec{p}_1^2}{2m_1} + \frac{\vec{p}_2^2}{2m_2} + V(\vec{x}_1 - \vec{x}_2).[/tex]
We can simplify this problem by dividing up its degrees of freedom between the
center-of-mass motion and the relative motion of the particles. Following classical intuition, we introduce new variables,
[tex]\vec{x}_{\rm CM} = \frac{m_1\vec{x}_1 + m_2\vec{x}_2}{m_1 + m_2},[/tex]
[tex]\vec{x} = \vec{x}_1 - \vec{x}_2,[/tex]
[tex]\vec{p}_{\rm CM} = \vec{p}_1 + \vec{p}_2,[/tex]
[tex]\vec{p} = \frac{m_2\vec{p}_1 - m_1\vec{p}_2}{m_1 + m_2}.[/tex]
Knowing the commutators of the original position and momentum operators, it’s not too hard to show that for any components [tex]x_{{\rm CM}j}[/tex] and [tex]p_{{\rm CM}k}[/tex] of the center-of-mass operators, the commutator is just
[tex]\comm{x_{{\rm CM}j}}{p_{{\rm CM}k}} = i\hbar\delta_{jk}.[/tex]
The same holds for the
relative degrees of freedom, too:
[tex]\comm{x_j}{p_k} = i\hbar\delta_{jk}.[/tex]
In other words, our new operators are just as good position and momentum operators as the old.
By chugging through a little algebra, we can rewrite our original two-body Hamiltonian using the new operators:
[tex]H = \frac{\vec{p}_{\rm CM}^2}{2M} + \frac{\vec{p}^2}{2\mu} + V(\vec{x}).[/tex]
Here, we’ve written [tex]M[/tex] for the sum of the masses,
[tex]M = m_1 + m_2,[/tex]
and [tex]\mu[/tex] for the
reduced mass
[tex]\mu = \left(\frac{1}{m_1} + \frac{1}{m_2}\right)^{-1}.[/tex]
When it’s written this way, it’s easy to see that the Hamiltonian [tex]H[/tex] splits into two parts:
[tex]H = H_{\rm CM} + H_r,[/tex]
where
[tex]H_{\rm CM} = \frac{\vec{p}_{\rm CM}^2}{2M}[/tex]
and
[tex]H_r = \frac{\vec{p}^2}{2\mu} + V(\vec{x}).[/tex]
What’s more, these two parts commute with each other,
[tex]\comm{H_{\rm CM}}{H_r} = 0,[/tex]
so we can pick a basis which simultaneously diagonalizes both of them. An eigenstate [tex]\ket{\psi}[/tex] of [tex]H[/tex] satisfies the equation
[tex]H\ket{\psi} = E\ket{\psi},[/tex]
and the same ket is an eigenket of both [tex]H_{\rm CM}[/tex] and [tex]H_r[/tex]:
[tex]H_{\rm CM}\ket{\psi} = E_{\rm CM}\ket{\psi},[/tex]
[tex]H_r\ket{\psi} = E_r\ket{\psi},[/tex]
with [tex]E = E_{\rm CM} + E_r[/tex]. Eigenstates of the total Hamiltonian will be tensor products of eigenstates of the CM and relative parts:
[tex]\ket{\psi} = \ket{\chi}_{\rm CM} \otimes \ket{\omega}_r,[/tex]
let’s say. We can rewrite the CM Schrödinger equation above as a statement about position-space wavefunctions, if we take derivatives with respect to CM coordinates:
[tex]-\frac{\hbar^2}{2M} \partial_{\rm CM}^2 \chi(\vec{x}_{\rm CM}) = E_{\rm CM} \chi(\vec{x}_{\rm CM}).[/tex]
This is just the Schrödinger Equation for a free particle. We know this equation rather well! For example, we can say that the momentum eigenstates in position space are plane waves,
[tex]\chi(\vec{x}_{\rm CM}) = (2\pi\hbar)^{-3/2} \exp\left(\frac{i}{\hbar} \vec{p}_{\rm CM} \cdot \vec{x}_{\rm CM}\right),[/tex]
with energies
[tex]E_{\rm CM} = \frac{\vec{p}_{\rm CM}^2}{2M}.[/tex]
We’ve seen that the system of two coupled particles splits into a center-of-mass part, whose energy is just the translational kinetic energy of a free particle, and a part which embodies the
relative motion of the two particles. In classical mechanics, we saw the same thing: for example, in the Earth-Moon system, both bodies are orbiting around their common center of mass (which is actually beneath the Earth’s surface). For the hydrogen atom, our two particles will be a proton and an electron, bound together by the Coulomb interaction between them. Note that the mass of the proton, [tex]m_p[/tex], is much larger than that of the electron, [tex]m_e[/tex], so that the “reduced mass” is very close to the electron mass:
[tex]\mu = \frac{m_e m_p}{m_e + m_p} = \frac{m_e}{1 + m_e/m_p} \approx m_e\left(1 - \frac{m_e}{m_p}\right).[/tex]
The correction term, [tex]m_e/m_p[/tex], is roughly one part in 1800. This means that the CM frame is
almost coincident with the rest frame of the proton. We’ll be a bit sloppy, but excusably so, if we refer to the relative degrees of freedom as the “electron” part of the system and the CM as the “nucleus.”
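As a quick numerical aside (a minimal sketch of my own; the approximate mass values are standard constants, not something from this post):

```python
m_e = 9.109e-31   # electron mass, kg (approximate)
m_p = 1.673e-27   # proton mass, kg (approximate)

mu = 1.0 / (1.0 / m_e + 1.0 / m_p)   # reduced mass of the electron-proton system
print(m_e / m_p)   # ~5.4e-4, i.e. roughly one part in 1800
print(mu / m_e)    # ~0.99946: the reduced mass is within ~0.05% of m_e
```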
Our next step will explore the realization that the Coulomb problem we wish to study is
spherically symmetric. The interaction potential is a function of the distance between the two particles, nothing else:
[tex]V(\vec{x}_1,\vec{x}_2) = V(|\vec{x}_1 – \vec{x}_2|).[/tex]
This symmetry property will allow us to write the Hamiltonian for the relative degrees of freedom (the “electron” part) in terms of the angular momentum:
[tex]H = -\frac{\hbar^2}{2\mu} \frac{1}{r} \partial_r^2 r + \frac{1}{2\mu r^2} \vec{L}^2 + V(r).[/tex]
This will be our starting point for next time.
(I’ve heard that my mathy posts aren’t showing up well in Google Reader, thanks in part to my gloomy color scheme. I’ll try to get this debugged, as part of the theme upgrades and other big changes I’ve got in the works.)
SUSY QM SERIES
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equation, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
(a) Suppose that a $3\times 3$ system of linear equations is inconsistent. Is the coefficient matrix of the system nonsingular?
(b) Suppose that a $3\times 3$ homogeneous system of linear equations has a solution $x_1=0, x_2=-3, x_3=5$. Is the coefficient matrix of the system nonsingular?
(c) Let $A$ be a $4\times 4$ matrix and let\[\mathbf{v}=\begin{bmatrix}1 \\2 \\3 \\4\end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix}4 \\3 \\2 \\1\end{bmatrix}.\]Suppose that we have $A\mathbf{v}=A\mathbf{w}$. Is the matrix $A$ nonsingular?
Suppose that $B=\{\mathbf{v}_1, \mathbf{v}_2\}$ is a basis for $\R^2$. Let $S:=[\mathbf{v}_1, \mathbf{v}_2]$. Note that as the column vectors of $S$ are linearly independent, the matrix $S$ is invertible.
Prove that for each vector $\mathbf{v} \in \R^2$, the vector $S^{-1}\mathbf{v}$ is the coordinate vector of $\mathbf{v}$ with respect to the basis $B$.
Prove that the matrix\[A=\begin{bmatrix}0 & 1\\-1& 0\end{bmatrix}\]is diagonalizable. Prove, however, that $A$ cannot be diagonalized by a real nonsingular matrix. That is, there is no real nonsingular matrix $S$ such that $S^{-1}AS$ is a diagonal matrix.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
An $n\times n$ matrix $A$ is called nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$. Using the definition of a nonsingular matrix, prove the following statements.
(a) If $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
(b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then:
The matrix $B$ is nonsingular.
The matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.)
Let $A$ be an $n\times (n-1)$ matrix and let $\mathbf{b}$ be an $(n-1)$-dimensional vector.Then the product $A\mathbf{b}$ is an $n$-dimensional vector.Set the $n\times n$ matrix $B=[A_1, A_2, \dots, A_{n-1}, A\mathbf{b}]$, where $A_i$ is the $i$-th column vector of $A$.
Prove that $B$ is a singular matrix for any choice of $\mathbf{b}$.
For each of the following $3\times 3$ matrices $A$, determine whether $A$ is invertible and find the inverse $A^{-1}$ if it exists by computing the augmented matrix $[A|I]$, where $I$ is the $3\times 3$ identity matrix.
Degree of irreducible representation divides order of group
This article gives the statement, and possibly proof, of a constraint on numerical invariants that can be associated with a finite group. It states a result of the form that one natural number divides another: specifically, the degree of an irreducible linear representation divides the order of the group. This fact is related to linear representation theory.

Statement
Let $G$ be a finite group and $\varphi$ an irreducible linear representation of $G$ over an algebraically closed field of characteristic zero (or, more generally, over any splitting field of characteristic zero for $G$). Then, the degree of $\varphi$ divides the order of $G$.
Related facts

Other facts about degrees of irreducible representations
Further information: Degrees of irreducible representations
Degree of irreducible representation divides index of center. Degree of irreducible representation divides index of abelian normal subgroup. Order of inner automorphism group bounds square of degree of irreducible representation. Number of irreducible representations equals number of conjugacy classes. Sum of squares of degrees of irreducible representations equals order of group.

Similar fact about irreducible projective representations

Breakdown for a field that is not algebraically closed
Let $G$ be the cyclic group of order three and let the field be $\mathbb{R}$, the real numbers. Then, there are two irreducible representations of $G$ over $\mathbb{R}$: the trivial representation, and a two-dimensional representation given by the action by rotation by multiples of $2\pi/3$. The two-dimensional representation has degree $2$, and this does not divide the order of the group, which is $3$.
We still have the following results:
Degree of irreducible representation over reals divides twice the group order. Degree of irreducible representation over any field divides product of order and Euler totient function of exponent. Degree of irreducible representation of nontrivial finite group is strictly less than order of group. Maximum degree of irreducible real representation is at most twice maximum degree of irreducible complex representation.

Facts used

The table below lists key facts used directly and explicitly in the proof. Fact numbers as used in the table may be referenced in the proof. This table need not list facts used indirectly, i.e., facts that are used to prove these facts, and it need not list facts used implicitly through assumptions embedded in the choice of terminology and language.
Fact no. | Statement | Steps in the proof where it is used | Qualitative description of how it is used | What does it rely on?
1 | Character orthogonality theorem; the part relevant for us: for an irreducible representation over a splitting field of characteristic zero with character $\chi$, $\sum_{g \in G} \chi(g)\overline{\chi(g)} = |G|$ | Step (1) | Equation setup that we then tinker with. |
2 | Size-degree-weighted characters are algebraic integers: for an irreducible linear representation $\varphi$ of a finite group $G$ over an algebraically closed field of characteristic zero (or more generally, over any splitting field), with character $\chi$, a conjugacy class $c$ in $G$ and an element $g \in c$, the number $|c|\chi(g)/\chi(1)$ (with $1$ denoting the identity element of the group) is an algebraic integer. | Step (3) | Show certain parts of an expression are algebraic integers. | algebraic number theory + linear representation theory
3 | Characters are algebraic integers | Step (4) | Show certain parts of an expression are algebraic integers. | basic linear representation theory

Proof

Given: A finite group $G$, an irreducible linear representation $\varphi$ of $G$ over a splitting field of characteristic zero for $G$, with character $\chi$ and degree $d$. Note that $d$ equals $\chi(1)$, i.e., the value of $\chi$ at the identity element of $G$.

To prove: $d$ divides the order of $G$.
Step no. | Assertion/construction | Facts used | Explanation
1 | $\sum_{c} |c| \, \chi(g_c)\overline{\chi(g_c)} = |G|$, where the sum is over all conjugacy classes $c$ of $G$ and $\chi(g_c)$ denotes the value of $\chi$ at any element of $c$. | Fact (1) | Follows from Fact (1); the factor $|c|$ appears because for each conjugacy class $c$, $|c|$ elements of the class appear in the full statement of the orthogonality theorem.
2 | $\sum_{c} \frac{|c| \chi(g_c)}{d} \overline{\chi(g_c)} = \frac{|G|}{d}$ | | Divide both sides of Step (1) by $d$.
3 | Each $\frac{|c|\chi(g_c)}{d}$ is an algebraic integer. | Fact (2) | $\varphi$ is irreducible over a splitting field of characteristic zero, with character $\chi$.
4 | Each $\overline{\chi(g_c)}$ is an algebraic integer. | Fact (3) | The complex conjugate of an algebraic integer is also an algebraic integer.
5 | The left side of Step (2) is an algebraic integer. | Steps (3), (4) | The set of algebraic integers forms a ring, so a finite sum of products of algebraic integers is an algebraic integer.
6 | $\frac{|G|}{d}$ is an algebraic integer. | Steps (2), (5) | By Step (5), the left side of Step (2) is an algebraic integer, hence so is the right side.
7 | $\frac{|G|}{d}$ is a positive integer, so $d$ divides $|G|$. | Step (6) | Both $|G|$ and $d$ are positive integers, hence their quotient is a positive rational number. The only way a rational number can be an algebraic integer is if it is an integer, hence the conclusion.
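To make the divisibility concrete, here is the arithmetic of Steps (1) and (7) run on a hard-coded character table for the symmetric group $S_3$ (my own illustrative sketch, not part of the wiki page):

```python
from fractions import Fraction

# S_3: conjugacy class sizes and the (real-valued) irreducible characters
class_sizes = [1, 3, 2]        # identity, transpositions, 3-cycles
irreducible_characters = [
    [1, 1, 1],                 # trivial representation, degree 1
    [1, -1, 1],                # sign representation, degree 1
    [2, 0, -1],                # standard representation, degree 2
]

group_order = sum(class_sizes)  # |S_3| = 6
for chi in irreducible_characters:
    d = chi[0]  # degree = chi(identity)
    # Step (1): sum over classes of |c| * chi(g) * conj(chi(g)) equals |G|
    assert sum(s * v * v for s, v in zip(class_sizes, chi)) == group_order
    # Step (7): |G| / d is an integer, so the degree divides the group order
    assert Fraction(group_order, d).denominator == 1
print("degrees 1, 1, 2 all divide", group_order)
```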
OpenCL (Open Computing Language) is a new framework for writing programs that execute in parallel on different computing devices (such as CPUs and GPUs) from different vendors (AMD, Intel, ATI, Nvidia etc.). The framework defines a language to write “kernels” in. These kernels are the functions which are to run on the different computing devices. In this post, I explain how to get started with OpenCL and how to make a small OpenCL program that will compute the sum of two lists in parallel. After that, I will show you how to write a GPU Cryptocurrency miner with the help of OpenCL based on the knowledge we just learned.
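The post presumably builds its example against the C API; as a compact stand-in, here is the same "sum of two lists" computation via the pyopencl bindings (a sketch that assumes pyopencl and a working OpenCL runtime are installed):

```python
import numpy as np
import pyopencl as cl

a = np.random.rand(50000).astype(np.float32)
b = np.random.rand(50000).astype(np.float32)

ctx = cl.create_some_context()          # pick an available device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
res_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel: one work-item per element, executed in parallel on the device.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *res) {
    int gid = get_global_id(0);
    res[gid] = a[gid] + b[gid];
}
""").build()

prg.vadd(queue, a.shape, None, a_g, b_g, res_g)
res = np.empty_like(a)
cl.enqueue_copy(queue, res, res_g)
assert np.allclose(res, a + b)
```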
English, Bahasa Indonesia, Bahasa Malaysia, Burmese (Zawgyi), Thai, Vietnamese, Chinese, Khmer.
Even if we count Simplified Chinese and Traditional Chinese as one language, we still have eight languages that are widely used in Southeast Asia.
For a startup, for a tech team, or for a community, what we use makes who we are. It is the source of culture, and it shows what we believe.
Which operating system, Windows, Linux or OS X, do you use as the default developing environment? Which operating system do you use as the production environment, Debian based, Red hat based, or Gentoo? Which cloud server do you use, AWS or other? Which language do you use as the main backend language, Python, Ruby, Node.js, or Java, PHP, C++? Which infrastructure do you use?
Slack is gradually becoming the standard for modern office communication. While you may argue that technically Slack is no different than, say, IRC, the polished experience is what makes it stand out in the crowd of messaging services. Using less gentle words, Slack is killing email for office communications. And it has built-in support for code snippets with syntax highlighting. Boom.
Actually, Slack is more than just a communication tool. What makes it extraordinary is that it provides the possibility to integrate everything and make your workflow complete. In this post we're highlighting some of the most useful new workflows that Slack enables. All of these are currently heavily in use by our team, and we find them exceptionally helpful.
This article describes a complicated issue when using AutoCAD (Chinese text showing as question marks), two common causes, and a solution.
To be a good tech lead in a startup is totally different from being CTO at a mature big company. There are numerous theories and technologies about management in big companies. Unfortunately, there is not so much knowledge about how to lead a technical startup. Today, I'm going to give my advice. I hope it will inspire you.
I'm going to talk about this in 9 respects. They are not all ordered by importance, even though some are.
This guide is for casting an Android or iPhone screen onto a MacBook or other computer through a USB cable. If you're searching for a wireless solution, this is not it. Actually, after a lot of searching, I found no good solution for wireless casting, so I admit this is a compromise. After all, fluency and resolution are more important.
Even though SOCKS is a higher-level protocol and more appropriate for proxying, there is no easy solution for building a global proxy for a Linux server except doing it on a router. For a remote server, normally a cloud server, it's not always convenient to access the router. So after several tries, I decided to drop the SOCKS solution and simply use Linux's iptables.
The easiest way I found in my recent research is shadowsocks-libev, a lightweight secured SOCKS5 proxy for embedded devices and low-end boxes. It is written in pure C and only depends on libev and OpenSSL or PolarSSL. Support for mbedTLS has been added but is still being tested and is not officially supported yet.
To your disappointment, I'm not going to show off theme plugins, even though I do think there is much room for Jenkins to improve in this area. I'm going to show you something really, really useful, much more useful than those simple theme plugins.
Throttle Concurrent Builds Plugin
This plugin allows for throttling the number of concurrent builds of a project running per node or globally.
Bitbucket Server, Jira, Confluence, Crowd: so many excellent pieces of software come from the same company, Atlassian. Some of them are technically well designed (even though not the best), so they are good study cases. These days I'm interested in the license-generating algorithm, so I dug into them. The license algorithm is DSA. Theoretically, it's impossible to recover the private key, so the private key can be regarded as unknown and safe. Without the private key, it's impossible to generate a valid signature for the raw text. In this way, it makes sure that every issued license comes from the owner.
To better understand the relationship between the original text and the license text, I wrote a Python script to uncover the original text from a license text.
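The decoding script itself isn't reproduced here. As a generic illustration of the sign/verify asymmetry the scheme relies on (this is not Atlassian's actual license format; the payload below is made up), a DSA round-trip with the Python cryptography package looks like this:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

license_payload = b"licensee=ACME Corp;expires=2025-01-01"  # made-up payload

private_key = dsa.generate_private_key(key_size=2048)  # vendor keeps this secret
public_key = private_key.public_key()                  # shipped inside the product

signature = private_key.sign(license_payload, hashes.SHA256())

# The product can verify authenticity, but cannot forge new licenses:
public_key.verify(signature, license_payload, hashes.SHA256())  # raises if tampered
```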
Sometimes I need to know what's inside to figure out the best solution, like modifying a Java .class file in a .jar file. After searching Google and Stack Overflow for a while, I found this question gets little attention, and almost all the information is incomplete. So I wrapped it up and made the whole process runnable.
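For the repacking step, since a .jar is just a zip archive, a minimal sketch in Python could look like this (all file and entry names here are hypothetical):

```python
import zipfile

original_jar = "app.jar"                  # hypothetical input jar
patched_class = "Main.class"              # recompiled class sitting on disk
entry_to_replace = "com/example/Main.class"

# zipfile cannot replace an entry in place, so rebuild the archive.
with zipfile.ZipFile(original_jar) as src, \
     zipfile.ZipFile("app-patched.jar", "w", zipfile.ZIP_DEFLATED) as dst:
    for item in src.infolist():
        if item.filename == entry_to_replace:
            dst.write(patched_class, entry_to_replace)   # swap in the new class
        else:
            dst.writestr(item, src.read(item.filename))  # copy everything else
```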
I wrote a blog post several years ago about transforming all files in a folder recursively from one encoding to another. Today I decided to solve this issue more completely.
I wrapped all this into a Python egg package and uploaded it to PyPI, so anyone who wants to use it doesn't need to copy & paste code any more. Just install it and use it.
This ships with a shell command, so after installing, just run it to transform a single file to UTF-8 encoding, or point it at a folder to transform everything recursively.
This is a simple search problem. With some pruning it's sufficient to pass the test cases.
The biggest possible area of the cake is \(16 \times 10 \times 10\), so the biggest possible side of the cake is \(40\).
Imagine the cake as a matrix with rows and columns. Each element of this matrix is a 1x1 cell. The problem can be translated to whether there exists a way to fill this matrix with squares.
Now fill the matrix in this order: find the lowest, leftmost empty cell \(C_{i,j}\), then find the successive cells \(C_{i,j}, C_{i, j+1}, \dots, C_{i, j+w}\) with the same height as \(C_{i,j}\).
Choose a square and put it into the area whose upper-left cell is \(C_{i, j}\). If this cannot lead to a solution, backtrack and try another square; otherwise continue to the end and obtain a solution.
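Here is a toy sketch of that search in Python (my own rendering, under the assumption that the input is a list of square sides plus the cake dimensions; it tracks the filled height of each column):

```python
def can_fill(width, height, squares):
    """Return True if the multiset of square sides exactly tiles width x height."""
    heights = [0] * width            # filled height of each column

    def backtrack(remaining):
        if not remaining:
            return all(h == height for h in heights)
        # Lowest, leftmost empty cell.
        i = min(range(width), key=lambda c: heights[c])
        base = heights[i]
        # Width of the run of columns at the same height (room for a square).
        j = i
        while j < width and heights[j] == base:
            j += 1
        room = j - i
        for side in sorted(set(remaining), reverse=True):  # try big squares first
            if side <= room and base + side <= height:
                for c in range(i, i + side):
                    heights[c] += side                     # place the square
                rest = remaining.copy()
                rest.remove(side)
                if backtrack(rest):
                    return True
                for c in range(i, i + side):
                    heights[c] -= side                     # undo (backtrack)
        return False

    return backtrack(list(squares))

print(can_fill(4, 4, [2, 2, 2, 2]))   # True: four 2x2 squares tile a 4x4 cake
```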
This page presents a variety of calculations for latitude/longitude points, with the formulas and code fragments for implementing them.
All these formulas are for calculations on the basis of a spherical earth (ignoring ellipsoidal effects), which is accurate enough for most purposes. [In fact, the earth is very slightly ellipsoidal; using a spherical model gives errors typically up to 0.3%.]
Distance
This uses the haversine formula to calculate the great-circle distance between two points – that is, the shortest distance over the earth’s surface – giving an “as-the-crow-flies” distance between the points (ignoring any hills they fly over, of course!).
Haversine formula: \(haversin{\left(\dfrac{d}{r}\right)} = haversin{\left(\phi_2-\phi_1\right)} + \cos{\phi_1}\cos{\phi_2}\,haversin{\left(\lambda_2-\lambda_1\right)}\)
where
\(haversin(\theta) = \sin^2\left(\dfrac{\theta}{2}\right) = \dfrac{1-\cos(\theta)}{2}\)
\(d\) is the distance between the two points (along a great circle of the sphere)
\(r\) is the radius of the sphere
\(\phi_1\), \(\phi_2\): latitude of point 1 and latitude of point 2
\(\lambda_1\), \(\lambda_2\): longitude of point 1 and longitude of point 2
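A direct transcription of the distance formula in Python (a sketch; the sample coordinates are mine, and r defaults to a mean Earth radius of 6371 km):

```python
import math

def haversine_distance(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two points on a sphere of radius r (km)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # haversin(theta) = sin^2(theta / 2)
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

# London to Paris, roughly 344 km as the crow flies:
print(haversine_distance(51.5074, -0.1278, 48.8566, 2.3522))
```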
Due to a Meteor bug (actually a bug in the MongoDB Node.js driver; the driver has already fixed it, but Meteor references an old version), I have to downgrade MongoDB from 3.0.5 to 2.6.10. Normally this is trivial work, but it cost me a whole afternoon, because there are several traps along the way, so I decided to write this blog post to help other people who run into the same issue.
The main problem is that MongoDB 3.0 uses a new authentication algorithm, SCRAM-SHA-1, while 2.6 uses an old one, MONGODB-CR. Actually there are other methods besides these two; these two are just the default configuration. If one upgrades MongoDB from 2.6 to 3.0, the authentication schema version can be upgraded by authSchemaUpgrade from 3 to 5, and at the same time the authentication algorithm is upgraded from MONGODB-CR to SCRAM-SHA-1. However, when one downgrades MongoDB from 3.0 to 2.6, you cannot use authSchemaUpgrade to downgrade the authentication schema; it will remain at 5 and the authentication algorithm will remain SCRAM-SHA-1, which are not supported by MongoDB 2.6. So yes, without additional work, you are not able to log into the database.
This article provides a method to solve this issue, which is not documented on the MongoDB official website (and apparently not documented anywhere).
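The general shape of the fix is to reset the stored schema version and then re-create the users. As a hedged sketch with pymongo (the _id "authSchema" and currentVersion fields are the ones MongoDB stores in admin.system.version, but treat this as illustrative rather than the article's exact procedure):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # hypothetical connection string
admin = client.admin

# MongoDB records the auth schema version in admin.system.version.
print(admin["system.version"].find_one({"_id": "authSchema"}))
# -> {'_id': 'authSchema', 'currentVersion': 5} after a 3.0 upgrade

# Force the pre-3.0 schema version back, then re-create the users so their
# credentials are stored in the old MONGODB-CR format.
admin["system.version"].update_one(
    {"_id": "authSchema"}, {"$set": {"currentVersion": 3}}
)
```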
After searching Google, the existing information about backing up MongoDB is either obsolete or complicated. So this article is about a new practice for automatically backing up MongoDB with AWS's official tool, the AWS CLI. It's more straightforward and more compatible with the new Amazon S3 server (supporting the newest v4 signature).
Before that, let's review some existing methods. One method is to use the open-source project s3cmd. Originally it didn't support the v4 signature; after a long time, it finally does, but it still doesn't support the China-region S3 server. Since the maintainer is a single person without enough time for the project, I guess it will take another long while to fix this. Another method is to use mongodb-s3-backup. After a little modification, it can handle the China-region S3 server well, but it doesn't support the v4 signature.
So I just created a new script: https://github.com/leonsim/mongo-s3-backup.
Even though the series of Heartbleed-style bugs makes HTTPS (SSL) look vulnerable, I still believe that, with the bugs fixed, HTTPS is more secure than HTTP. Actually, we can achieve this in a few simple steps with technologies like OpenSSL, a free certificate provider, and nginx configuration.
To set up a Meteor production environment on EC2, we need to install MongoDB as the main database. To use MUP (the Meteor Up tool), it's easier to use the Ubuntu operating system.
This article is about how to set up MongoDB replication on EC2 with Ubuntu 14.04 LTS. Basically, it's a combination of the following articles:
Gentoo

Don't forget to read the news.

Debian

Red Hat
Ah, some light Friday fare!
By now, everybody has probably heard about the forthcoming crackpot “documentary” from David de Hilster,
Einstein Wrong – The Miracle Year. Currently looking for financial backing, de Hilster hopes to release this flick in 2008, doing for relativity what What the Bleep Do We Know (2004) did for quantum physics: namely, let the fractured ceramics have free play.
As it turns out, David de Hilster is one of the Network’s classic relativity cranks. He’s been pushing his pet theory, “Autodynamics,” since at least the early 1990s (on the sci.physics Usenet group). As it also turns out, Autodynamics has plenty of problems. For example, it chucks out the Lorentz transformations, thereby making itself inconsistent with the Maxwell equations, which form our basic understanding of electricity and magnetism, without which the technological support system of modern society couldn’t exist.
What’s more, they don’t like that nasty ol’ equation
[tex]E = mc^2.[/tex]
The Autodynamicist revulsion at this horrible formula has led them to propose — no, I’m
not making this up — that [tex]E[/tex] should equal [tex]mc^3[/tex] instead. Remember that Far Side cartoon where Einstein is struggling with his algebra on the blackboard, writing all sorts of equations, while the cleaning woman is saying, “ Now that desk looks better. Everything’s squared away, yessir, squaaaaaared away.”
If you’re familiar with “dimensional analysis,” then it’s easy to see why there’s no way [tex]E[/tex] could be equal to [tex]mc^3[/tex]. The units have to be the same on both sides of the equation (if one side of an equation measures kilowatt-hours, and the other measures lollipops, there’s a problem somewhere — at the very least, you missed the conversion factor for kilowatt-hours per lollipop).
Also, Autodynamics has its own velocity addition formula. If you’re riding in the back of a pickup truck which is traveling at 15 meters per second and you shoot a bullet at 300 m/s towards a nearby highway sign, then intuitively, a person standing beside the highway will observe the bullet traveling at
[tex] v = v_1 + v_2 = 315 \frac{\rm m}{\rm s}.[/tex]
Relativity introduces a correction factor to this formula, whose effects are only apparent when the speeds involved approach those of light:
[tex] v = \frac{v_1 + v_2}{1 + \frac{v_1 v_2}{c^2}}.[/tex]
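Plugging in the truck-and-bullet numbers shows just how small that correction is at everyday speeds (a quick sketch):

```python
c = 299_792_458.0   # speed of light, m/s

def relativistic_add(v1, v2):
    # Einstein velocity addition
    return (v1 + v2) / (1 + v1 * v2 / c ** 2)

print(relativistic_add(15, 300))               # 314.9999999999999...: effectively 315
print(relativistic_add(0.9 * c, 0.9 * c) / c)  # ~0.9945: never exceeds c
```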
Autodynamics doesn’t like this (never mind that it’s been confirmed experimentally out the wazoo, and is an inevitable mathematical consequence of facts which have
also been confirmed over and over again). ADvocates prefer the following formula:
[tex] v = \sqrt{v_1^2 + v_2^2}. [/tex]
So, a softball thrown at 10 m/s from a car moving at 10 m/s will appear to a stationary observer to be moving at
[tex] \sqrt{10^2 + 10^2} = 10\sqrt{2} \approx 14.14 [/tex]
Fourteen meters per second?
And the trash can gets heavier by the weight of one fractured ceramic.
UPDATE: I forgot to point out that Steve Reuland at the Panda’s Thumb has demonstrated how closely the Einstein Wrong people parallel classic creationist propaganda.
Research talks; Number Theory
For a non-principal Dirichlet character $\chi$ modulo $q$, the classical Pólya-Vinogradov inequality asserts that
$$M(\chi) := \max_{x} \Big| \sum_{n \leq x} \chi(n) \Big| = O(\sqrt{q} \log q).$$ This was improved to $\sqrt{q} \log\log q$ by Montgomery and Vaughan, assuming the Generalized Riemann Hypothesis (GRH). For quadratic characters, this is known to be optimal, owing to an unconditional omega result due to Paley. In this talk, we shall present recent results on higher order character sums. In the first part, we discuss even order characters, in which case we obtain optimal omega results for $M(\chi)$, extending and refining Paley's construction. The second part, joint with Alexander Mangerel, will be devoted to the more interesting case of odd order characters, where we build on previous works of Granville and Soundararajan and of Goldmakher to provide further improvements of the Pólya-Vinogradov and Montgomery-Vaughan bounds in this case. In particular, assuming GRH, we are able to determine the order of magnitude of the maximum of $M(\chi)$, when $\chi$ has odd order $g \geq 3$ and conductor $q$, up to a power of $\log_4 q$ (where $\log_4$ is the fourth iterated logarithm).
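As a small numerical illustration of the quantity $M(\chi)$ (my own sketch, not from the talk), one can compute it for the quadratic character mod a prime, i.e. the Legendre symbol, and compare against the Pólya-Vinogradov bound:

```python
import math
from sympy.ntheory import isprime, legendre_symbol

q = 1009   # an arbitrary prime modulus
assert isprime(q)

partial, best = 0, 0
for n in range(1, q):
    partial += legendre_symbol(n, q)
    best = max(best, abs(partial))

print(best, math.sqrt(q) * math.log(q))   # M(chi) vs the sqrt(q) log q bound
```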
11L40 ; 11N37 ; 11N13 ; 11M06
In this article, we shall learn how to solve Quadratic Equations in bank exams.
The quadratic equation is one of the most complex and tricky parts of Quantitative Aptitude, where candidates generally make silly mistakes if they do not focus during their bank exam preparation.
To know more about different bank exams, check at the linked article.
As a result, there are some tricks and strategies which are essential to solve a quadratic equation. Bank exams require accuracy in the allotted time during the main exam so shortcut tricks are important for an aspirant to quickly solve a particular bank exam question.
The numerical ability section is where candidates can score well by practising within a given time and using these tricks during their bank exam preparation.
Weightage of Quadratic Equation in Bank Exams
Priority | Nature of expected questions | Weightage
1 | Maintaining accuracy, problems that can be solved easily, easy to manage time, scoring in nature | 0-5
To explore various Govt Exams, check at the linked article.
Stepwise tips to practise a Quadratic Equation: First, the candidate needs to note down twenty sums related to the quadratic equation on a page. Then ten sums are to be solved using the basic formula. Solving these sums against a stopwatch will help with time management. The next step is to evaluate the time taken and analyse the performance. The remaining ten questions are to be solved next by applying the shortcut tricks, and the time taken should again be noted carefully. It will surely differ from the time taken for the first ten questions. Practice is the key to mastering this topic of the Quantitative Aptitude section.
To explore the SBI Clerk Prelims Quantitative Syllabus, check at the linked article.
The basic form of a quadratic equation is \(ax^{2}+ bx+c =0\), where the equation is set equal to zero and a, b, c are constants. A quadratic equation only contains non-negative integer powers of x. For example: \(6x^{2} + 11x +3= 0\). In this equation, 6 is the coefficient of \(x^{2}\), 11 is the coefficient of x, and 3 is the constant. The easiest way to solve this is through the middle-term break method.
Solution:
First step: multiply \((+6)\times (+3)\) = +18.
Second step: +18 is to be broken into two parts in such a way that their sum equals the middle term, 11: here \(9 + 2 = 11\) and the product of these factors is 18.
Third step: change the sign of the factors (+9 becomes -9 and +2 becomes -2) and divide by the coefficient of \(x^{2}\). Therefore, the roots are \(-9/6 = -3/2\) and \(-2/6 = -1/3\).
To learn how to Prepare Quantitative Aptitude For Bank Exams, check at the linked article.
Top 3 Tips to solve Quadratic Equations in Bank Exams
The major three ways to solve the problems on the Quadratic Equation are as follows:
1. Factor the quadratic equation: here all like terms are combined and moved to one side of the equation so that nothing remains on the other side, which is then zero. For example, \(3x^{2} - 8x - 4 = 3x\) becomes \(3x^{2} - 11x - 4=0\). The second step is to factor the equation through the middle-term break method. The last part is to set each factor equal to zero: \((3x+1)(x-4)=0\).
2. Use the formula: the sums can be solved through the formula \(x = \frac{-b\pm \sqrt{b^{2}-4ac}}{2a}\).
3. Complete the square: this process is a bit complex, while the other two are easier to apply.
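A quick way to check answers while practising (my own sketch, applied to the worked example above):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                                  # no real roots
    root = math.sqrt(disc)
    return ((-b - root) / (2 * a), (-b + root) / (2 * a))

print(solve_quadratic(6, 11, 3))   # (-1.5, -0.333...): matches -3/2 and -1/3
```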
To learn Tips To Solve Important Maths Questions of SBI PO Exam, check at the linked article.
We hope the above-mentioned shortcut tricks and strategies will help the candidates during their bank exam preparation for Quadratic Equations. All the candidate needs to do is practice with determination and hard work to crack the bank exams successfully.
Important Bank Exams Information:
Both IBPS and SBI exams include the Quadratic Equations topic in the Quantitative Aptitude section of the Preliminary and Mains exams. Being a scoring topic, it is advisable for candidates to practise efficiently and earn a full score on this topic during their respective bank examination.