The exercise is from the Handbook of Set-Theoretic Topology, page 161: Assume $\mathfrak b=\mathfrak c$. Construct a first countable, separable, zero-dimensional, locally compact, pseudocompact space which is not countably compact but which has no uncountable closed discrete subset. The hint is given: First construct a first countable, zero-dimensional, locally compact space $X$ with underlying set $\mathfrak c$ such that $D=\{\xi: \omega\le \xi < \omega+\omega\}$ is a closed discrete set of cluster points of $\omega$ and such that each countably infinite subset of $X\setminus D$ has a cluster point.
Could anybody help me?
Added: $\mathfrak b$ is the bounding number:
$$\mathfrak b=\min\left\{|F|:F\subseteq{}^\omega\omega\text{ and }\forall f\in{}^\omega\omega\,\exists g\in F\left(g\not\le^* f\right)\right\}\;,$$
where $f\le^*g$ iff there is an $n\in\omega$ such that $f(k)\le g(k)$ for all $k\ge n$. That is, $\mathfrak b$ is the smallest cardinality of an unbounded set in the partial order $\left\langle{}^\omega\omega,\le^*\right\rangle$.
Show that the function, $T(n) = 4n^2$ is NOT $O(n)$.
I'm not looking for someone to give me a full answer, I just need some pointers on how to go about starting to show that it is not $O(n)$.
Many thanks in advance.
Use proof by contradiction: assume that $4n^2=O(n)$. Then there exists a constant $c<\infty$ such that $4n^2\leq cn$ for all sufficiently large $n$, which gives $n\leq \frac{c}{4}$. But the inequality must hold for all large $n$, and it fails for every $n>\frac{c}{4}$, so the initial assumption is contradicted. Therefore $4n^2\neq O(n)$.
Naively, $O(n)$ represents functions $f(n)$ such that $\lim_{n\to\infty}\left|\frac{f(n)}{n}\right|<\infty$. Since $T(n)=4n^2$, $\lim_{n\to\infty}\left|\frac{4n^2}{n}\right|=\lim_{n\to\infty}|4n|=\infty$. So $T(n)$ is not $O(n)$.
Assuming that $4n2$ means $4n^2$: recall that a function being $O(n)$ means that its absolute value is less than $Cn$ for some fixed constant $C$ and for all (possibly only sufficiently large, depending on your convention) $n$, where $n$ ranges over the positive integers.

Now, think about whether the inequality $4n^2 \le C n$ can possibly hold for all (sufficiently large) $n$ with one and the same fixed $C$.
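To see the failure concretely, here is a quick numeric check (illustrative only; the contradiction argument above is the actual proof):

```python
# For any candidate constant c, 4n^2 <= c*n fails as soon as n > c/4.
def bound_holds(c, n):
    """Does 4n^2 <= c*n hold at this particular n?"""
    return 4 * n * n <= c * n

for c in (10, 100, 10_000):
    n = c // 4 + 1  # first integer past c/4
    assert not bound_holds(c, n), f"bound unexpectedly holds at n={n}"
print("every candidate c eventually fails")
```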
Problems regarding rounding errors.
Underflow occurs when numbers near zero are rounded to zero. Overflow occurs when numbers with large magnitude are approximated as \(\infty\) or \(- \infty\).
Conditioning refers to how rapidly a function changes with respect to small changes in its inputs. Functions that change rapidly when their inputs are perturbed slightly can be problematic for scientific computation because rounding errors in the inputs can result in large changes in the output.
Optimization refers to the task of either minimizing or maximizing some function \(f(\boldsymbol{x})\) by altering \(\boldsymbol{x}\).
The function we want to minimize or maximize is called the
objective function or criterion. When we are minimizing it, we may also call it the cost function, loss function, or error function.
The value that minimizes or maximizes a function can be denoted with *. For example, \(\boldsymbol{x}^{*} = \arg \min f(\boldsymbol{x})\).
The derivative of some function \(y = f(x)\) is denoted as \(f'(x)\) or \(\frac{dy}{dx}\). The derivative \(f'(x)\) gives the slope of \(f(x)\) at the point \(x\). In other words, it specifies how to scale a small change in the input in order to obtain the corresponding change in the output: \(f(x + \epsilon) \approx f(x) + \epsilon f'(x)\)
The derivative is useful for minimizing a function because it tells us how to change \(x\) in order to make a small improvement in \(y\). We can thus reduce \(f(x)\) by moving \(x\) in small steps with the opposite sign of the derivative. This technique is called
gradient descent (first definition, second will be later).
When \(f'(x) = 0\), the derivative provides no information about which direction to move. Points where \(f'(x) = 0\) are known as
critical points or stationary points. Some critical points are neither maxima nor minima. These are known as saddle points.
A point that obtains the absolute lowest value of \(f (x)\) is a
global minimum. It is possible for there to be only one global minimum or multiple global minima of the function.
We often minimize functions that have multiple inputs: \(f: \mathbb{R}^{n} \to \mathbb{R}\). For the concept of “minimization” to make sense, there must still be only one (scalar) output.
For functions with multiple inputs, we must make use of the concept of
partial derivatives. The partial derivative \(\frac{\partial}{\partial x_{i}} f(\boldsymbol{x})\) measures how \(f\) changes as only the variable \(x_{i}\) increases at point \(\boldsymbol{x}\). The gradient generalizes the notion of derivative to the case where the derivative is with respect to a vector: the gradient of \(f\) is the vector containing all of the partial derivatives, denoted \(\nabla_{\boldsymbol{x}} f(\boldsymbol{x})\). Element \(i\) of the gradient is the partial derivative of \(f\) with respect to \(x_i\). In multiple dimensions, critical points are points where every element of the gradient is equal to zero.
The
directional derivative in direction \(\boldsymbol{u}\) (a unit vector) is the slope of the function \(f\) in direction \(\boldsymbol{u}\). In other words, the directional derivative is the derivative of the function \(f(\boldsymbol{x} + \alpha \boldsymbol{u})\) with respect to \(\alpha\), evaluated at \(\alpha = 0\). Using the chain rule, we can see that \(\frac{\partial}{\partial\alpha} f(\boldsymbol{x} + \alpha \boldsymbol{u})\) evaluates to \(\boldsymbol{u}^{T} \nabla_{\boldsymbol{x}} f(\boldsymbol{x})\) when \(\alpha = 0\).
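This identity is easy to check numerically with a finite difference; the function used below is a made-up example:

```python
# Check that d/dα f(x + α u) at α = 0 equals u^T ∇f(x)
# for the sample function f(x, y) = x^2 + 3xy.
def f(v):
    x, y = v
    return x * x + 3 * x * y

def grad_f(v):
    x, y = v
    return (2 * x + 3 * y, 3 * x)

x = (1.0, 2.0)
u = (3 / 5, 4 / 5)  # a unit vector
h = 1e-6

# Finite difference in α, versus the dot product u^T ∇f(x):
fd = (f((x[0] + h * u[0], x[1] + h * u[1])) - f(x)) / h
dot = u[0] * grad_f(x)[0] + u[1] * grad_f(x)[1]
assert abs(fd - dot) < 1e-4
```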
To minimize \(f\), we would like to find the direction in which \(f\) decreases the fastest. We can do this using the directional derivative:

\(\min_{\boldsymbol{u},\, \boldsymbol{u}^{T}\boldsymbol{u}=1} \boldsymbol{u}^{T} \nabla_{\boldsymbol{x}} f(\boldsymbol{x}) = \min_{\boldsymbol{u},\, \boldsymbol{u}^{T}\boldsymbol{u}=1} ||\boldsymbol{u}||_2\, ||\nabla_{\boldsymbol{x}} f(\boldsymbol{x})||_2 \cos \theta\)

where \(\theta\) is the angle between \(\boldsymbol{u}\) and the gradient. Substituting in \(||\boldsymbol{u}||_2 = 1\) and ignoring factors that do not depend on \(\boldsymbol{u}\), this simplifies to \(\min_{\boldsymbol{u}} \cos \theta\). This is minimized when \(\boldsymbol{u}\) points in the opposite direction as the gradient. In other words, the gradient points directly uphill, and the negative gradient points directly downhill. We can decrease \(f\) by moving in the direction of the negative gradient. This is known as the
method of steepest descent or gradient descent.
Steepest descent proposes a new point

\(\boldsymbol{x}' = \boldsymbol{x} - \epsilon \nabla_{\boldsymbol{x}} f(\boldsymbol{x})\)

where \(\epsilon\) is the
learning rate, a positive scalar determining the size of the step. We can set \(\epsilon\) to a small constant. Or we can evaluate \(f(\boldsymbol{x} - \epsilon \nabla_{\boldsymbol{x}} f(\boldsymbol{x}))\) for several values of \(\epsilon\) and choose the one that results in the smallest objective function value. This last strategy is called a line search.
Steepest descent converges when every element of the gradient is zero (or, in practice, very close to zero). In some cases, we may be able to avoid running this iterative algorithm, and just jump directly to the critical point by solving the equation \(\nabla_{\boldsymbol{x}} f(\boldsymbol{x}) = 0\) for \(\boldsymbol{x}\).
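The two step-size strategies just described (a fixed small \(\epsilon\) versus a line search over several candidate rates) can be sketched in a few lines; the quadratic objective at the bottom is a made-up example:

```python
# Minimal gradient-descent sketch supporting both a fixed learning rate
# and a crude line search over a few candidate rates.
def gradient_descent(grad, x, steps=500, rates=(1.0, 0.3, 0.1, 0.03, 0.01), f=None):
    for _ in range(steps):
        g = grad(x)
        if all(abs(gi) < 1e-10 for gi in g):
            break  # every gradient element is ~0: converged to a critical point
        if f is None:
            eps = rates[-1]  # fixed small learning rate
        else:
            # line search: evaluate f at each candidate rate, keep the best
            eps = min(rates, key=lambda e: f([xi - e * gi for xi, gi in zip(x, g)]))
        x = [xi - eps * gi for xi, gi in zip(x, g)]
    return x

# Example objective: f(x, y) = (x - 1)^2 + 2(y + 2)^2, minimum at (1, -2).
f = lambda v: (v[0] - 1) ** 2 + 2 * (v[1] + 2) ** 2
grad = lambda v: [2 * (v[0] - 1), 4 * (v[1] + 2)]
xmin = gradient_descent(grad, [0.0, 0.0], f=f)
```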
Although gradient descent is limited to optimization in continuous spaces, the general concept of repeatedly making a small move (that is approximately the best small move) towards better configurations can be generalized to discrete spaces. Ascending an objective function of discrete parameters is called
hill climbing.
LaTeX typesetting is made by using special tags or commands that provide a handful of ways to format your document. Sometimes standard commands are not enough to fulfil some specific needs, in such cases new commands can be defined and this article explains how.
Most of the LaTeX commands are simple words preceded by a special character.
In a document there are different types of \textbf{commands} that define the way the elements are displayed. These commands may insert special elements: $\alpha \beta \Gamma$
In the previous example there are different types of commands. For instance,
\textbf will make boldface the text passed as a parameter to the command. In mathematical mode there are special commands to display Greek characters.
Commands are special words that determine LaTeX's behaviour. Usually these words are preceded by a backslash and may take some parameters.
The command
\begin{itemize} starts an
environment, see the article about environments for a better description. Below the environment declaration is the command
\item, this tells LaTeX that this is an item part of a list, and thus has to be formatted accordingly, in this case by adding a special mark (a small black dot called bullet) and indenting it.
Some commands need one or more parameters to work. The example at the introduction includes a command to which a parameter has to be passed,
\textbf; this parameter is written inside braces and it's necessary for the command to do something.
There are also optional parameters that can be passed to a command to change its behaviour; these optional parameters have to be put inside brackets. In the example above, the command
\item[\S] does the same as
\item, except that inside the brackets is
\S, which changes the black dot before the line to a special character.
LaTeX is shipped with a huge amount of commands for a large number of tasks; nevertheless, sometimes it is necessary to define special commands to simplify repetitive and/or complex formatting.
New commands are defined by the
\newcommand statement; let's see an example of the simplest usage.
\newcommand{\R}{\mathbb{R}} The set of real numbers are usually represented by a blackboard bold capital r: \( \R \).
The statement
\newcommand{\R}{\mathbb{R}} has two parameters that define the new command: the first, \R, is the name of the new command; the second, \mathbb{R}, is what the command does, in this case printing a blackboard bold R. Note that the command \mathbb requires the package amssymb.
After the command definition you can see how the command is used in the text. Even though in this example the new command is defined right before the paragraph where it's used, good practice is to put all your user-defined commands in the preamble of your document.
It is also possible to create new commands that accept some parameters.
\newcommand{\bb}[1]{\mathbb{#1}} Other numerical systems have similar notations. The complex numbers \( \bb{C} \), the rational numbers \( \bb{Q} \) and the integer numbers \( \bb{Z} \). The line
\newcommand{\bb}[1]{\mathbb{#1}} defines a new command that takes one parameter: \bb is the name of the new command, [1] is the number of parameters it takes, and \mathbb{#1} is what the command does; here #1 stands for the first parameter passed in.
User-defined commands are even more flexible than the examples shown above. You can define commands that take optional parameters:
\newcommand{\plusbinomial}[3][2]{(#2 + #3)^#1} To save some time when writing many expressions with exponents, we can define a new command to make things simpler: \[ \plusbinomial{x}{y} \] And even the exponent can be changed \[ \plusbinomial[4]{y}{y} \]
Let's analyse the syntax of the line
\newcommand{\plusbinomial}[3][2]{(#2 + #3)^#1}: \plusbinomial is the name of the new command, [3] is the number of parameters it takes, [2] is the default value for the first (optional) parameter, and (#2 + #3)^#1 is what the command does; here #1, #2 and #3 stand for the parameters in the order they are passed.
If you define a command with the same name as an already existing LaTeX command, you will see an error message when compiling your document and the command you defined will not work. If you really want to override an existing command, this can be accomplished with
\renewcommand:
\renewcommand{\S}{\mathbb{S}} The Riemann sphere (the complex numbers plus $\infty$) is sometimes represented by \( \S \)
In this example the command
\S (see the example in the commands section) is overwritten to print a blackboard bold S.
\renewcommand uses the same syntax as
\newcommand.
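Collecting the definitions from this article, a minimal compilable document might look like this (the body text is illustrative):

```latex
\documentclass{article}
\usepackage{amssymb} % provides \mathbb

% User-defined commands belong in the preamble
\newcommand{\R}{\mathbb{R}}                       % no parameters
\newcommand{\bb}[1]{\mathbb{#1}}                  % one parameter
\newcommand{\plusbinomial}[3][2]{(#2 + #3)^{#1}}  % optional parameter, default 2
\renewcommand{\S}{\mathbb{S}}                     % override an existing command

\begin{document}
The reals \( \R \), the rationals \( \bb{Q} \),
the square \( \plusbinomial{x}{y} \),
the fourth power \( \plusbinomial[4]{x}{y} \),
and the Riemann sphere \( \S \).
\end{document}
```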
If the function $f(x) = \sqrt{1 + x}$ is expanded out in terms of powers of $x$ such that $f(x) = \sum\limits_{k=0}^{\infty} a_{k} x^{k},$ what's the coefficient $a_{3}$ of the $x^{3}$ term?
Express your answer as an exact decimal.
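The coefficients of \( (1+x)^{1/2} \) are given by the generalized binomial series; a short computational sketch (note: this computes the answer):

```python
from math import factorial

# Generalized binomial series: (1 + x)^p = sum_k [p(p-1)...(p-k+1)/k!] x^k
def binom_coeff(p, k):
    num = 1.0
    for i in range(k):
        num *= (p - i)
    return num / factorial(k)

a3 = binom_coeff(0.5, 3)  # coefficient of x^3 in sqrt(1+x) → 0.0625
```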
Whew! We spent a considerable amount of wordage developing the Dirac Equation. Now, it’s time to tie this development back to the supersymmetry material we studied earlier in the non-relativistic context. The result will be a surprising mapping between relativistic and non-relativistic quantum mechanics. Today, we’ll just get the gist of it, and to get started, we’ll begin with the final equation we had before,
[tex](i\displaystyle{\not} \partial - m)\psi = 0.[/tex]
Recalling Feynman’s notation of slashed quantities,
[tex]\displaystyle{\not} a = \gamma^\mu a_\mu,[/tex]
we can unpack this a little to
[tex]\left(i\gamma^\mu\partial_\mu - m\right) \psi = 0,[/tex]
which we can elaborate to include an electromagnetic field as follows:
[tex]\left[i\gamma^\mu(\partial_\mu + iA_\mu) - m\right] \psi = 0.[/tex]
The Dirac Hamiltonian [tex]H_D[/tex] has a rich SUSY structure, of which we can catch a glimpse even having pared the problem down to its barest essentials. To take the simplest possible case, consider a Dirac particle living in one spatial dimension, on which there also lives a scalar potential [tex]\phi(x^1)[/tex]. (We could call this a “1+1-dimensional” system, to remind ourselves of the difference between time and space.) The SUSY structure can be seen most clearly when we look at the limit of a massless particle; this eliminates the [tex]m[/tex] term we had before.
All that fuss over the matrices [tex]\gamma^\mu[/tex] has its practical benefit now, because writing our formulas in terms of them allows us to write a new Dirac Equation embodying the analogous physics in a different number of dimensions. Just as our first [tex]\alpha[/tex] matrices were defined by the anticommutator relation
[tex]\anticomm{\alpha_\mu}{\alpha_\nu} = 2\delta_{\mu\nu}\idmat,[/tex]
we now consider the matrices [tex]\gamma^\mu[/tex] defined by their anticommutator,
[tex]\anticomm{\gamma_\mu}{\gamma_\nu} = 2\eta_{\mu\nu}.[/tex]
The only difference between our original, 3+1-dimensional problem and our current one is that indices like [tex]\mu[/tex] and [tex]\nu[/tex] will only run over 0 and 1 instead of running up to 3. A useful choice satisfying the anticommutator constraint is
[tex]\gamma^0 = \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array}\right),[/tex]
[tex]\gamma^1 = \left(\begin{array}{cc}i & 0 \\ 0 & -i \\ \end{array}\right).[/tex]
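As a quick sanity check (with the metric convention η = diag(1, −1) used here), these two matrices do satisfy the anticommutator relation:

```python
# Verify {γ^μ, γ^ν} = 2 η_{μν} · 1 for the 2x2 choice above,
# using plain complex arithmetic.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def anticomm(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] + ba[i][j] for j in range(2)] for i in range(2)]

g0 = [[0, 1], [1, 0]]
g1 = [[1j, 0], [0, -1j]]
eta = [[1, 0], [0, -1]]  # metric diag(1, -1)
gammas = [g0, g1]

for mu in range(2):
    for nu in range(2):
        expected = [[2 * eta[mu][nu] * (1 if i == j else 0) for j in range(2)]
                    for i in range(2)]
        assert anticomm(gammas[mu], gammas[nu]) == expected
```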
What was a position four-vector is now a “two-vector,” but we must remember that moving an index from up to down involves changing the sign on the zeroth component. (The oneth component remains the same: [tex]x_1 = x^1[/tex].) Writing an unadorned [tex]x[/tex] to stand for the entire vector, the Dirac Equation is
[tex]i\gamma^\mu \partial_\mu \psi(x) - \phi(x^1)\psi(x) = 0.[/tex]
As usual, we separate out the time dependence, leaving a wavefunction which depends only on [tex]x_1[/tex]:
[tex]\psi(x) = \exp(-i\omega x^0) \psi(x^1) = \exp(i\omega x_0)\psi(x_1).[/tex]
Substituting in this ansatz reduces the 1+1-dimensional Dirac Equation to
[tex]\gamma^0\omega\psi(x^1) + i\gamma^1 \partial_1 \psi(x^1) - \phi(x^1)\psi(x^1) = 0.[/tex]
If we write the two-component wavefunction [tex]\psi(x^1)[/tex] as
[tex]\psi(x^1) = \left(\begin{array}{c} \xi(x^1) \\ \chi(x^1) \\ \end{array}\right),[/tex]
then the reduced Dirac Equation becomes the two coupled equations
[tex]A\xi(x^1) = \omega \chi(x^1) [/tex]
[tex]A^\dag \chi(x^1) = \omega \xi(x^1), [/tex]
where the operators [tex]A[/tex] and [tex]A^\dag[/tex] are given by
[tex]A = \partial_1 + \phi(x^1),\ A^\dag = -\partial_1 + \phi(x^1).[/tex]
Decoupling our equations gives a familiar-looking result:
[tex]A^\dag A \xi(x^1) = \omega^2\xi(x^1),[/tex]
[tex]AA^\dag \chi(x^1) = \omega^2 \chi(x^1).[/tex]
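Multiplying out the operators makes the partner structure explicit: with [tex]A = \partial_1 + \phi[/tex] and [tex]A^\dag = -\partial_1 + \phi[/tex],

[tex]A^\dag A = -\partial_1^2 + \phi^2(x^1) - \phi'(x^1),[/tex]

[tex]A A^\dag = -\partial_1^2 + \phi^2(x^1) + \phi'(x^1),[/tex]

so [tex]\xi[/tex] and [tex]\chi[/tex] feel the two partner potentials [tex]V_{1,2} = \phi^2 \mp \phi'[/tex].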
Our result identifies readily with the SUSY partner potential concepts developed earlier. [tex]\phi(x^1)[/tex] is just the superpotential of our old, Schrödinger formalism. What’s more, [tex]\xi[/tex] and [tex]\chi[/tex] are the eigenfunctions of the two Hamiltonians [tex]H_1 = A^\dag A[/tex] and [tex]H_2 = AA^\dag[/tex], which we already know are isospectral, except that one has an extra state at zero energy.
Looking back to our results on shape invariance, we see that every Schrödinger problem with a potential [tex]V_1(x)[/tex] which has a shape-invariant partner [tex]V_2(x)[/tex] can become the scalar potential [tex]\phi(x^1)[/tex] in a Dirac problem!
Of course, physical problems using the Dirac Equation exist in more than one dimension, and the electron is not a massless particle. (Well, electrons trapped within materials like sheets of graphene
can behave as if they were massless Dirac fermions. . . .) If we soup up our machinery just a little, however, we can apply it to physical situations, such as figuring out what a relativistic charged particle in a Coulomb field will do. This provides the relativistic corrections to our earlier solution of the hydrogenic atom. Without derivation, the result is that for a nucleus of atomic number [tex]Z[/tex] and angular momentum [tex]J[/tex],
[tex]E_n = m\left(1 + \frac{Z^2 e^4}{(s + n)^2}\right)^{-1/2},[/tex]
where [tex]n[/tex] is a non-negative integer and
[tex]s = \sqrt{(J+\frac{1}{2})^2 + Z^2 e^4}.[/tex]
This legitimately relativistic result was deduced from the
non-relativistic analogous potentials and SUSY. The details can be found in Cooper, Klein and Sukhatme, section 11.2.
It is also interesting to note that in 4D Euclidean space — where the metric [tex]\eta_{\mu\nu}[/tex] is just the identity, and we don’t care whether four-vector indices are up or down — the Dirac Hamiltonian always exhibits SUSY structure.
In quantum mechanics, we can define the scattering amplitude $f_k(\theta)$ for two particles as the magnitude of an outgoing spherical wave. More precisely, the asymptotic behaviour (when $r\rightarrow\infty$) of a wave function of two scattering particles, interacting with some short range potential, is given by
$\psi(r)=e^{ikz} + \frac{f_k(\theta)}{r}e^{ikr}$
where the incoming wave is the plane wave $e^{ikz}$ (the coordinates are the relative coordinates between the two particles). The full Hamiltonian is given by
$H=\frac{1}{2m}p_1^2+\frac{1}{2m}p_2^2+V(r_{12})$
The low energy limit can be obtained by expanding the scattering amplitude in partial waves and only include the lowest partial wave.
However, we can also compute this in effective field theory. In the low energy limit, the effective lagrangian is
$L=\psi^\dagger\left(i\frac{\partial}{\partial t}+\frac{1}{2}\nabla^2\right)\psi-\frac{g_2}{4}(\psi^\dagger \psi)^2$
We can then define the four-point Green's function as $\langle0|T\psi\psi\psi^\dagger\psi^\dagger|0\rangle$. We can then define the scattering amplitude A as, and I quote (see later for reference): "It is obtained by subtracting the disconnected terms that have the factored form $\langle0|T\psi\psi^\dagger|0\rangle\langle0|T\psi\psi^\dagger|0\rangle$, Fourier transforming in all coordinates, factoring out an overall energy-momentum conserving delta function, and also factoring out propagators associated with each of the four external legs". For two particles, the amplitude A only depends on the total energy E. The claim is then that we have
$f_k(\theta)=\frac{1}{8\pi}A(E=k^2)$
My question is: although I understand that it is reasonable that there is a relation between these two quantities, I have no idea how to prove this and how to get the numerical factors right, etc. So basically, what is the exact link between doing scattering computations in QM vs QFT? How can one show that the observables we are looking at are the same quantity?
The paper I am following is http://arxiv.org/abs/cond-mat/0410417 . The effective field theory part I am referring to starts at page 135, especially the relation (295). The above quote of the definition of A is given on page 139. The definition of $f_k$ is given on pages 10-11. This post imported from StackExchange Physics at 2015-05-31 13:08 (UTC), posted by SE-user user2133437
Formula Constrained Optimization
For example, a quarter-wave stack with $m$ pairs and QWOT thicknesses equal to $a$ and $b$ can be represented as $(aH\; bL)^m$.

We can consider some set of possible values for the integer parameter $m$, and for every possible value OptiLayer will try to find optimal values of the continuous parameters $a$ and $b$, optimizing the design performance with respect to loaded targets.
Another example may be a stack having period with varying optical thicknesses. Ultra-wide range high reflectors, chirp mirrors and other coatings can be designed in this way.
Example. High reflector operating in the spectral range from 400 nm to 900 nm. Refractive indices of layer materials: $n_L=1.45$; glass substrate.
Width of the first high reflection zone of a quarter wave mirror with the central wavelength \(\lambda_0\) can be estimated with the help of a known formula:
\(\Delta=\lambda_u-\lambda_l,\;\;\displaystyle\frac{\lambda_u}{\lambda_l}=\frac{\pi+\arccos(-\xi)}{\pi-\arccos(-\xi)} \)
\(\displaystyle\xi=\frac{n_H^2+n_L^2-6 n_Hn_L}{(n_H+n_L)^2}\)
where \(\lambda_u\) and \(\lambda_l\) are upper and lower boundaries of the high reflection zone.
If the central wavelength is 650 nm then the width of the high reflection zone of a quarter wave mirror is about 200 nm. In our design problem we need to cover a spectral range 400-900 nm, i.e. we need a high reflection zone of 500 nm.
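This estimate can be reproduced numerically. The sketch below assumes $n_H = 2.35$ (only $n_L$ survives in the text above), writes the zone-edge ratio with $\arccos(-\xi)$ in both numerator and denominator, and assumes the zone is symmetric in the wavenumber variable $g=\lambda_0/\lambda$:

```python
from math import pi, acos

# High-reflection zone width of a quarter-wave mirror.
n_H, n_L = 2.35, 1.45   # n_H is an assumed value; n_L as given in the text
lam0 = 650.0            # central wavelength, nm

xi = (n_H**2 + n_L**2 - 6 * n_H * n_L) / (n_H + n_L) ** 2
ratio = (pi + acos(-xi)) / (pi - acos(-xi))  # λ_u / λ_l

# Zone symmetric in g = λ0/λ: λ_l = λ0/(1+Δg), λ_u = λ0/(1-Δg)
dg = (ratio - 1) / (ratio + 1)
lam_l, lam_u = lam0 / (1 + dg), lam0 / (1 - dg)
delta = lam_u - lam_l   # close to the ~200 nm quoted in the text
```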
It means that we need to combine several quarter wave or near quarter wave stacks.
Illustration of the design process of a wide band reflector.
Assuming that \(\lambda_l=400\) nm and using formulas above, it is possible to estimate that three quarter wave stacks are required. The corresponding width of the high reflection zones are:
\(\Delta_1=140 nm, \;\;\Delta_2=200 nm,\;\; \Delta_3=260 nm\)
Then we need a design formula combining three quarter-wave stacks:

\(c_1 H \;d_1 L\; (a_1H \;b_1L)^{m_1} \;c_2H \;d_2L \;(a_2H\; b_2L)^{m_2}\; c_3H \;d_3L \;(a_3H\;b_3L)^{m_3} \;c_4H\)
In OptiLayer, design formulas and their parameters can be specified directly. OptiLayer will consider all possible combinations of the specified integer parameters $m_1, m_2, m_3$; for each combination, it optimizes the design with respect to the continuous parameters $a_1, b_1, c_1, \ldots$ All calculated designs are stored.
This design example is illustrated in our video example "High Reflector Part II" on YouTube.
Problem 410
Let $R$ be a ring with $1$ and let $M$ be a left $R$-module.
Let $S$ be a subset of $M$. The annihilator of $S$ in $R$ is the subset of the ring $R$ defined to be \[\Ann_R(S)=\{ r\in R\mid rx=0 \text{ for all } x\in S\}.\] (If $rx=0, r\in R, x\in S$, then we say $r$ annihilates $x$.)
Suppose that $N$ is a submodule of $M$. Then prove that the annihilator
\[\Ann_R(N)=\{ r\in R\mid rn=0 \text{ for all } n\in N\}\] of $N$ in $R$ is a $2$-sided ideal of $R$.

Problem 302
Let $R$ be a commutative ring with $1$ and let $G$ be a finite group with identity element $e$. Let $RG$ be the group ring. Then the map $\epsilon: RG \to R$ defined by
\[\epsilon(\sum_{i=1}^na_i g_i)=\sum_{i=1}^na_i,\] where $a_i\in R$ and $G=\{g_i\}_{i=1}^n$, is a ring homomorphism, called the augmentation map, and the kernel of $\epsilon$ is called the augmentation ideal.

(a) Prove that the augmentation ideal in the group ring $RG$ is generated by $\{g-e \mid g\in G\}$.
(b) Prove that if $G=\langle g\rangle$ is a finite cyclic group generated by $g$, then the augmentation ideal is generated by $g-e$.

Problem 247
Let $R$ be a commutative ring with unity. A proper ideal $I$ of $R$ is called
primary if whenever $ab \in I$ for $a, b\in R$, then either $a\in I$ or $b^n\in I$ for some positive integer $n$.

(a) Prove that a prime ideal $P$ of $R$ is primary.

(b) If $P$ is a prime ideal and $a^n\in P$ for some $a\in R$ and a positive integer $n$, then show that $a\in P$.

(c) If $P$ is a prime ideal, prove that $\sqrt{P}=P$.
(d) If $Q$ is a primary ideal, prove that the radical ideal $\sqrt{Q}$ is a prime ideal.

The Ideal $(x)$ is Prime in the Polynomial Ring $R[x]$ if and only if the Ring $R$ is an Integral Domain

Problem 198
Let $R$ be a commutative ring with $1$. Prove that the principal ideal $(x)$ generated by the element $x$ in the polynomial ring $R[x]$ is a prime ideal if and only if $R$ is an integral domain.
Prove also that the ideal $(x)$ is a maximal ideal if and only if $R$ is a field.
Problem 174
Let $R$ be a commutative ring and let $P$ be an ideal of $R$. Prove that the following statements are equivalent:
(a) The ideal $P$ is a prime ideal.
(b) For any two ideals $I$ and $J$, if $IJ \subset P$ then we have either $I \subset P$ or $J \subset P$.
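These definitions can be sanity-checked in a toy case. The following sketch (an assumed example in $\mathbb{Z}$, not part of the problems above) verifies that $Q=(4)$ is primary and that $\sqrt{(4)}=(2)$:

```python
# In Z, Q = (4) is primary but not prime, and its radical is (2).
Q = 4
def in_Q(x):
    return x % Q == 0

# Primary: ab in Q implies a in Q or some power of b in Q (b^2 suffices here).
for a in range(1, 50):
    for b in range(1, 50):
        if in_Q(a * b):
            assert in_Q(a) or in_Q(b * b)

# Radical: x is in sqrt((4)) iff some power of x is divisible by 4 iff x is even.
radical = [x for x in range(1, 20) if any(in_Q(x ** k) for k in range(1, 5))]
assert radical == [x for x in range(1, 20) if x % 2 == 0]
```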
I am currently dealing with Poisson's equation $- \Delta u= f $ on some open domain $U$ and $u =g$ on the boundary $\partial U.$
Now a fundamental solution is a solution to $- \Delta u(x) = \delta(x)$ on the whole $\mathbb{R}^n$. Is this correct?
A Green's function is now rather a construct that is supposed to satisfy Poisson's equation on some given domain with Dirichlet boundary conditions. So we should have $-\Delta_x G(x,y) = 0$ on $U$ and $G(x,y) = 0$ on $\partial U.$
Now, Evans constructs the Green's function by saying that $G(x,y) = \phi(x-y) - \psi^{x}(y)$, where $\psi^{x}(y)$ satisfies Laplace's equation on $U$ and $\psi^{x}(y) = \phi(y-x)$ on $\partial U$. Unfortunately, he only defines the function for $x \neq y$, apparently because $\phi(0)$ is $-\infty$.
My first question is: Why is this not a problem, if we don't define the Green's function for $x=y$?
and my second question is: Evans says that $-\Delta_y G(x,y) = \delta(x)$ in $U$, and I don't see how this follows from the definition. In particular, many references like MathWorld claim that the Green's function actually satisfies $-\Delta_y G(x,y) = \delta(x-y)$?
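A fully explicit special case may help make the question concrete. On the interval $U=(0,1)$, the Green's function for $-u''$ with zero boundary values is known in closed form, and one can check numerically that $u(x)=\int_U G(x,y)f(y)\,dy$ solves $-u''=f$ (an illustrative sketch, not Evans's construction):

```python
# 1D Green's function for -u'' on (0,1) with G = 0 at the boundary:
# G(x,y) = x(1-y) for x <= y, and y(1-x) otherwise.
def G(x, y):
    return x * (1 - y) if x <= y else y * (1 - x)

# For f = 1, u(x) = ∫ G(x,y) dy should equal x(1-x)/2, which solves -u'' = 1.
N = 100_000
x = 0.3
u = sum(G(x, (j + 0.5) / N) for j in range(N)) / N  # midpoint rule
assert abs(u - x * (1 - x) / 2) < 1e-6
```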
EDIT: Final solution.
First off, notice that $\triangle ABB',\triangle BCC', \triangle CAA'$ all have area equal to $\frac{1}{4}$ of the area of $\triangle ABC$. To see this, pick any side as the base when measuring $\triangle ABC$, say side $BC$. Then $BB'$ is a triangle with a common height and one fourth base, so its area is one fourth. This argument obviously holds for all three sides.
What might be more surprising is that all three of the small triangles at the corners have the same area. They all equal $\frac{1}{13}\times \frac{1}{4}$ of the area of $\triangle ABC$, or rather, area equal to one thirteenth of the three aforementioned triangles. To make this explicit, look at $\triangle ABB'$ and $\triangle AA'P$, where $P$ is the intersection of $A'C$ and $AB'$. We have the following relationships:$$AB = 4AA' \text{, and }\angle{BAB'} = \angle{A'AP} = \theta$$Next we have the following equations for the triangles' areas:$$\triangle AA'P = \frac{1}{2}AP\times AA'\sin\theta \text{, and } \triangle ABB' = \frac{1}{2}AB\times AB'\sin\theta $$Therefore, the ratio of their areas is $\dfrac{\triangle AA'P}{\triangle ABB'} = \dfrac{AP}{4AB'}$. Now as soon as I can prove that this last quantity equals $\frac{1}{13}$, the proof will be complete.
EDIT2:Got it.
My solution requires Menelaus' Theorem. I will not prove it here.
By Menelaus' Theorem, we have that$$\begin{align*} BC \times B'P \times AA' &= B'C \times AP \times BA' \\BC \times B'P \times \frac{1}{4}AB &= \frac{3}{4}BC \times AP \times \frac{3}{4}AB \\\dfrac{B'P}{AP} &= \dfrac{9}{4}\\\dfrac{AB' - AP}{AP} &= \dfrac{9}{4}\\\dfrac{AB'}{AP} &= \dfrac{13}{4} \implies \dfrac{AP}{AB'} = \dfrac{4}{13}\\ \end{align*}$$
Thus, as desired $\dfrac{AP}{4AB'} = \dfrac{1}{13}$. Thus, the area of the central triangle is $$\triangle ABC(1 - 3\times \frac{1}{4} + 3 \times \frac{1}{13}\times \frac{1}{4}) = \triangle ABC \times \frac{4}{13}$$
This is my first solution using trigonometry and the particular case of an equiangular triangle. According to Blue, due to affine equivalence of all triangles, this proof suffices for all cases.
Let $AB = BC = AC = 4k$, and hence $AA'=BB'=CC'=k$ and $\angle{ABC} = \angle{BCA} = \angle{BAC} = 60$.
I claim the central triangle is equiangular. Examine one of the three
small triangles at the corners of $\triangle ABC$. For convenience,
I'll focus on the top one. Call it $AA'P$, where $P$ is the
intersection of $CA'$ and $AB'$. Note the following $$ \angle{A'AP} = \angle{A'AB}\text{, and } \\ \angle{AA'P} = \angle{A'AC} = \angle{BB'A}$$ Furthermore, we know that $\angle{A'AB} + \angle{BB'A} = 120$, since the sum of the angles of any triangle equals 180. By this same principle, $\angle{A'PA} = 60$. This is true for all the
small triangles, and by the vertical angles theorem, the central
triangle is equiangular.
Notice that triangle $PA'A$ is similar to triangle $BB'A$. Thus we can
find the lengths of $PA$ and $PA'$. But first we must compute the
length of $B'A$. Via the law of cosines, $B'A = \sqrt{(4k)^2 +k^2 - 2(k)(4k)\cos(60)} = k\sqrt{13}$. Now using properties of similar
triangles, we have the following equality
$$\dfrac{PA}{AB}=\dfrac{A'A}{B'A} \implies PA=\dfrac{AB\times A'A}{B'A} = \dfrac{4k}{\sqrt{13}}$$
By a similar argument, we can calculate the length $PA' = \dfrac{k}{\sqrt{13}}$. So, the side lengths of the central triangle all equal $k\sqrt{13} - \dfrac{5k}{\sqrt{13}} = \dfrac{8k}{\sqrt{13}}$. Finally, the ratio of the areas of two equilateral triangles equals the ratio of the squares of their sides, hence
equilateral triangles equals the ratio of the squares of their sides,
hence
$$\dfrac{\triangle Outer}{\triangle Inner} = \dfrac{16k^2}{\frac{64k^2}{13}} = \boxed{\dfrac{13}{4}}$$
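Since the result is affine-invariant (as Blue noted), it can also be verified with exact rational coordinate arithmetic on a convenient right triangle; the helper names below are ad hoc:

```python
from fractions import Fraction as F

# Cevians AB', BC', CA' with AA' = AB/4 etc., on A=(0,0), B=(4,0), C=(0,4).
A, B, C = (F(0), F(0)), (F(4), F(0)), (F(0), F(4))

def point_on(P, Q, t):  # P + t*(Q - P)
    return (P[0] + t * (Q[0] - P[0]), P[1] + t * (Q[1] - P[1]))

Ap, Bp, Cp = (point_on(A, B, F(1, 4)), point_on(B, C, F(1, 4)),
              point_on(C, A, F(1, 4)))

def intersect(P1, P2, P3, P4):  # line P1P2 with line P3P4
    d1, d2 = (P2[0]-P1[0], P2[1]-P1[1]), (P4[0]-P3[0], P4[1]-P3[1])
    den = d1[0]*d2[1] - d1[1]*d2[0]
    t = ((P3[0]-P1[0])*d2[1] - (P3[1]-P1[1])*d2[0]) / den
    return (P1[0] + t*d1[0], P1[1] + t*d1[1])

def area(P, Q, R):  # shoelace formula
    return abs((Q[0]-P[0])*(R[1]-P[1]) - (R[0]-P[0])*(Q[1]-P[1])) / 2

P = intersect(A, Bp, C, Ap)
Q = intersect(B, Cp, A, Bp)
R = intersect(C, Ap, B, Cp)
assert area(P, Q, R) / area(A, B, C) == F(4, 13)  # central triangle is 4/13
```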
(a) If $AB=B$, then $B$ is the identity matrix.
(b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions.
(c) If $A$ is invertible, then $ABA^{-1}=B$.
(d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix.
(e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If \[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\] then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression \[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\] which matrix do you get?
(a) $A$
(b) $C^{-1}A^{-1}BC^{-1}AC^2$
(c) $B$
(d) $C^2$
(e) $C^{-1}BC$
(f) $C$
Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less.Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*}
(a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$.
(b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$.
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be a vector space and $B$ be a basis for $V$. Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
(a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers.Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following statements is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5. Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason.
Problem 6. Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
(Linear Algebra Midterm Exam 1, the Ohio State University)
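A quick numerical check of Problem 6 (my own addition, using NumPy; this is not from the exam):

```python
import numpy as np

A = np.array([[3., 2.],
              [5., 3.]])        # (a) the coefficient matrix
b = np.array([1., 2.])

# (b) det(A) = 3*3 - 2*5 = -1, so A^{-1} = [[-3, 2], [5, -3]]
A_inv = np.linalg.inv(A)

# (c) the solution x = A^{-1} b
x = A_inv @ b

print(A_inv)   # [[-3.  2.]
               #  [ 5. -3.]]
print(x)       # [ 1. -1.], i.e. x1 = 1, x2 = -1
```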
This is driving me crazy: mathematicians and physicists use different notations for spherical polar coordinates.
During a vector calculus problem I had the following issue: I had to find $d\underline{S}$ for the surface of a sphere of radius $a$ centred at the origin. In all the books I find that for a parametrised surface $\underline{r}(s,t)$ we have $d\underline{S} = \left(\frac{\partial \underline{r}}{\partial s}\times\frac{\partial \underline{r}}{\partial t}\right)ds\,dt$, in this order.
For the sphere I have $\underline{r}(\theta,\phi) = a\cos(\theta)\sin(\phi)\underline{i}+a\sin(\theta)\sin(\phi)\underline{j}+a\cos(\phi)\underline{k}$ for $0\leq \theta\leq 2\pi$ and $0\leq \phi\leq \pi$, and hence I get $\frac{\partial \underline{r}}{\partial \theta}\times\frac{\partial \underline{r}}{\partial \phi} = -\underline{r}\,a\sin\phi$, which points inwards, so I take its opposite.
My notes always preserve the order I used here (i.e. the first partial on the left, $\frac{\partial}{\partial \theta}$, corresponds to the first argument of $\underline{r}(\theta,\phi)$). Preserving the order, I should always get the correct normal. However, for some weird reason, when my notes, the books and people online calculate $d\underline{S}$ for a sphere (like here), they always swap the angles, writing the spherical coordinates as $(r,\theta,\phi)$ with $0\leq \theta \leq \pi$ and $0\leq \phi \leq 2\pi$, and then $\frac{\partial \underline{r}}{\partial \theta}\times\frac{\partial \underline{r}}{\partial \phi} = \underline{r}\,a\sin\theta$, which points outwards.
Why does this happen? It's just a notational convention, and the order of the partials should determine the normal; yet in my example it clearly gives the inward normal, while with the other convention it gives the correct (outward) one.
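The resolution is that the two conventions do not change the order of the cross product; they put a different angle in the polar slot of the parametrization, and that is what flips the orientation. A quick numerical check (my own sketch, assuming NumPy):

```python
import numpy as np

a = 1.0
theta, phi = 0.7, 1.1   # a sample point away from the poles

# Question's convention: theta = azimuth, phi = polar angle.
r1   = a * np.array([np.cos(theta)*np.sin(phi), np.sin(theta)*np.sin(phi), np.cos(phi)])
r1_t = a * np.array([-np.sin(theta)*np.sin(phi), np.cos(theta)*np.sin(phi), 0.0])
r1_p = a * np.array([np.cos(theta)*np.cos(phi), np.sin(theta)*np.cos(phi), -np.sin(phi)])
n1 = np.cross(r1_t, r1_p)
print(np.dot(n1, r1) < 0)   # True: the normal points inward

# Physics-book convention: theta = polar angle, phi = azimuth.
r2   = a * np.array([np.cos(phi)*np.sin(theta), np.sin(phi)*np.sin(theta), np.cos(theta)])
r2_t = a * np.array([np.cos(phi)*np.cos(theta), np.sin(phi)*np.cos(theta), -np.sin(theta)])
r2_p = a * np.array([-np.sin(phi)*np.sin(theta), np.cos(phi)*np.sin(theta), 0.0])
n2 = np.cross(r2_t, r2_p)
print(np.dot(n2, r2) > 0)   # True: the normal points outward
```

In both cases the cross product is taken in the same order, first argument times second; only the geometric meaning of the first argument differs.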
Background for the curious reader:
An ordinal $\beta$ is a transitive set in the sense that $\alpha\in\beta$ implies $\alpha\subset\beta$. Any ordinal is naturally well-ordered under $\in$ (so any subset of it has a least element), and any well-order is isomorphic to an ordinal. In fact, any class (naively, collection) of ordinals is itself well-ordered under the relation $\in$. This fact allows for the usage of transfinite induction and transfinite recursion.
We have three types of ordinals: the empty set $0=\emptyset=\{\; \}$, successor ordinals $S(\alpha)=\alpha\cup\{\alpha\}$ where $\alpha$ is an ordinal, and limit ordinals, which are all the other ones. Finite ordinals are either $0$ or successors, the set $\omega=\{0,1,2,\dots\}$ is a limit ordinal. A limit ordinal $\alpha$ has the property that $\alpha=\sup_{\beta<\alpha}\{\beta\}=\bigcup_{\beta<\alpha}\beta$.
My question
Ordinal arithmetic can be defined recursively as follows:
$\alpha+0=\alpha$, $\alpha+S(\beta)=S(\alpha+\beta)$, $\alpha+\sup_{\gamma<\beta}\{\gamma\}=\sup_{\gamma<\beta}\{\alpha+\gamma\}$; $\alpha\cdot0=0$, $\alpha\cdot S(\beta)=\alpha\cdot\beta+\alpha$, $\alpha\cdot\sup_{\gamma<\beta}\{\gamma\}=\sup_{\gamma<\beta}\{\alpha\cdot\gamma\}$; $\alpha^0=1$, $\alpha^{S(\beta)}=\alpha^\beta\cdot\alpha$, $\alpha^{\sup_{\gamma<\beta}\{\gamma\}}=\sup_{\gamma<\beta}\{\alpha^\gamma\}$.
Alternatively, one can define:
$\alpha+\beta$ is the unique ordinal isomorphic to the disjoint union $\{0\}\times\alpha\cup\{1\}\times\beta$ given the lexicographic order. $\alpha\cdot\beta$ is the unique ordinal isomorphic to the Cartesian product $\beta\times\alpha$ given the lexicographic order.
As the disjoint union and Cartesian product are simply the categorical coproduct and the categorical product, I wonder if there is some way to actually categorify these alternate definitions. Additionally, I am not aware of any non-recursive version of exponentiation, so I would be curious if a categorical formulation of addition and product of ordinals also allows for a categorical (hence non-recursive) formulation of exponentiation.
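Not an answer to the categorical question, but the recursive clauses above are easy to animate for ordinals below $\omega^2$. In the sketch below (my own illustration; the encoding and the name `oadd` are mine) an ordinal $\omega\cdot a+b$ is stored as the pair `(a, b)`, and addition exhibits the expected non-commutativity:

```python
# Ordinals below omega^2, encoded as pairs (a, b) meaning omega*a + b.
def oadd(x, y):
    """Ordinal addition below omega^2.

    omega*a1 + b1 + omega*a2 + b2: if a2 > 0, the finite part b1 is
    absorbed by the limit omega*a2 (since b1 + omega = omega);
    otherwise just add the finite parts.
    """
    (a1, b1), (a2, b2) = x, y
    if a2 > 0:
        return (a1 + a2, b2)
    return (a1, b1 + b2)

one, omega = (0, 1), (1, 0)

print(oadd(one, omega))   # (1, 0): 1 + omega = omega
print(oadd(omega, one))   # (1, 1): omega + 1, strictly bigger
```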
Admin
Talk Stats forum now supports math typesetting in LaTeX.
We have added two tags
[ math] and [ /math]. Include your LaTeX code within the tags and you'll see the rendered LaTeX graphic. For example: [ math]\int x^2 dx[ /math] will produce:
\(\int x^2 dx\)
[ math]\frac{1}{\sigma\sqrt{2\pi}}e^{-(x-\mu)^2/(2\sigma^2)}[ /math] will produce:
\(\frac{1}{\sigma\sqrt{2\pi}}e^{-(x-\mu)^2/(2\sigma^2)}\)
If you mouse over any LaTeX image, there will be a popup box showing the code between the tags.
Here's a good LaTex tutorial:
http://frodo.elon.edu/tutorial/tutorial/
Please post here if you have any questions or comments. Enjoy!
Tagged: determinant
Problem 548
An $n\times n$ matrix $A$ is said to be
invertible if there exists an $n\times n$ matrix $B$ such that $AB=I$, and $BA=I$,
where $I$ is the $n\times n$ identity matrix.
If such a matrix $B$ exists, then it is known to be unique and called the
inverse matrix of $A$, denoted by $A^{-1}$.
In this problem, we prove that if $B$ satisfies the first condition, then it automatically satisfies the second condition.
So if we know $AB=I$, then we can conclude that $B=A^{-1}$.
Let $A$ and $B$ be $n\times n$ matrices.
Suppose that we have $AB=I$, where $I$ is the $n \times n$ identity matrix.
Prove that $BA=I$, and hence $A^{-1}=B$.
Problem 452
Let $A$ be an $n\times n$ complex matrix.
Let $S$ be an invertible matrix. (a) If $SAS^{-1}=\lambda A$ for some complex number $\lambda$, then prove that either $\lambda^n=1$ or $A$ is a singular matrix. (b) If $n$ is odd and $SAS^{-1}=-A$, then prove that $0$ is an eigenvalue of $A$.
(c) Suppose that all the eigenvalues of $A$ are integers and $\det(A) > 0$. If $n$ is odd and $SAS^{-1}=A^{-1}$, then prove that $1$ is an eigenvalue of $A$.
Problem 438
Determine whether each of the following statements is True or False.
(a) If $A$ and $B$ are $n \times n$ matrices, and $P$ is an invertible $n \times n$ matrix such that $A=PBP^{-1}$, then $\det(A)=\det(B)$. (b) If the characteristic polynomial of an $n \times n$ matrix $A$ is \[p(\lambda)=(\lambda-1)^n+2,\] then $A$ is invertible. (c) If $A^2$ is an invertible $n\times n$ matrix, then $A^3$ is also invertible. (d) If $A$ is a $3\times 3$ matrix such that $\det(A)=7$, then $\det(2A^{\trans}A^{-1})=2$. (e) If $\mathbf{v}$ is an eigenvector of an $n \times n$ matrix $A$ with corresponding eigenvalue $\lambda_1$, and if $\mathbf{w}$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda_2$, then $\mathbf{v}+\mathbf{w}$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda_1+\lambda_2$.
(Stanford University, Linear Algebra Exam Problem)
Eigenvalues of Orthogonal Matrices Have Length 1. Every $3\times 3$ Orthogonal Matrix Has 1 as an Eigenvalue
Problem 419
(a) Let $A$ be a real orthogonal $n\times n$ matrix. Prove that the length (magnitude) of each eigenvalue of $A$ is $1$.
(b) Let $A$ be a real orthogonal $3\times 3$ matrix and suppose that the determinant of $A$ is $1$. Then prove that $A$ has $1$ as an eigenvalue.
Problem 391
(a) Is the matrix $A=\begin{bmatrix} 1 & 2\\ 0& 3 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 3 & 0\\ 1& 2 \end{bmatrix}$? (b) Is the matrix $A=\begin{bmatrix} 0 & 1\\ 5& 3 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 1 & 2\\ 4& 3 \end{bmatrix}$? (c) Is the matrix $A=\begin{bmatrix} -1 & 6\\ -2& 6 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 3 & 0\\ 0& 2 \end{bmatrix}$?
(d) Is the matrix $A=\begin{bmatrix} -1 & 6\\ -2& 6 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 1 & 2\\ -1& 4 \end{bmatrix}$?
Problem 389
(a) A $2 \times 2$ matrix $A$ satisfies $\tr(A^2)=5$ and $\tr(A)=3$. Find $\det(A)$. (b) A $2 \times 2$ matrix has two parallel columns and $\tr(A)=5$. Find $\tr(A^2)$. (c) A $2\times 2$ matrix $A$ has $\det(A)=5$ and positive integer eigenvalues. What is the trace of $A$?
(Harvard University, Linear Algebra Exam Problem)
Problem 374
Let \[A=\begin{bmatrix}
a_0 & a_1 & \dots & a_{n-2} &a_{n-1} \\ a_{n-1} & a_0 & \dots & a_{n-3} & a_{n-2} \\ a_{n-2} & a_{n-1} & \dots & a_{n-4} & a_{n-3} \\ \vdots & \vdots & \dots & \vdots & \vdots \\ a_{2} & a_3 & \dots & a_{0} & a_{1}\\ a_{1} & a_2 & \dots & a_{n-1} & a_{0} \end{bmatrix}\] be a complex $n \times n$ matrix. Such a matrix is called a circulant matrix. Then prove that the determinant of the circulant matrix $A$ is given by \[\det(A)=\prod_{k=0}^{n-1}(a_0+a_1\zeta^k+a_2 \zeta^{2k}+\cdots+a_{n-1}\zeta^{k(n-1)}),\] where $\zeta=e^{2 \pi i/n}$ is a primitive $n$-th root of unity.
Problem 363
(a) Find all the eigenvalues and eigenvectors of the matrix \[A=\begin{bmatrix} 3 & -2\\ 6& -4 \end{bmatrix}.\]
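The circulant determinant formula in Problem 374 is easy to confirm numerically. The sketch below (my own check, assuming NumPy) builds the matrix with entries $A_{jk}=a_{(k-j) \bmod n}$, matching the layout displayed in that problem:

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
a = rng.integers(-3, 4, size=n).astype(float)   # a_0, ..., a_{n-1}

# Circulant matrix: first row a_0 ... a_{n-1}, each row shifted right.
A = np.array([[a[(k - j) % n] for k in range(n)] for j in range(n)])

zeta = np.exp(2j * np.pi / n)   # primitive n-th root of unity
prod = np.prod([sum(a[m] * zeta**(k * m) for m in range(n))
                for k in range(n)])

# The product over roots of unity matches det(A) (imaginary part ~ 0).
print(np.isclose(np.linalg.det(A), prod.real))   # True
```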
(b) Let \[A=\begin{bmatrix} 1 & 0 & 3 \\ 4 &5 &6 \\ 7 & 0 & 9 \end{bmatrix} \text{ and } B=\begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 &0 \\ 0 & 0 & 4 \end{bmatrix}.\] Then find the value of \[\det(A^2B^{-1}A^{-2}B^2).\] (For part (b) without computation, you may assume that $A$ and $B$ are invertible matrices.)
Problem 338
Each of the following sets is not a subspace of the specified vector space. For each set, give a reason why it is not a subspace.
(1) \[S_1=\left \{\, \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \in \R^3 \quad \middle | \quad x_1\geq 0 \,\right \}\] in the vector space $\R^3$. (2)\[S_2=\left \{\, \begin{bmatrix}
x_1 \\
x_2 \\
x_3
\end{bmatrix} \in \R^3 \quad \middle | \quad x_1-4x_2+5x_3=2 \,\right \}\] in the vector space $\R^3$.
(3)\[S_3=\left \{\, \begin{bmatrix}
x \\
y
\end{bmatrix}\in \R^2 \quad \middle | \quad y=x^2 \quad \,\right \}\] in the vector space $\R^2$.
(4) Let $P_4$ be the vector space of all polynomials of degree $4$ or less with real coefficients.
\[S_4=\{ f(x)\in P_4 \mid f(1) \text{ is an integer}\}\] in the vector space $P_4$.
(5)\[S_5=\{ f(x)\in P_4 \mid f(1) \text{ is a rational number}\}\] in the vector space $P_4$. (6) Let $M_{2 \times 2}$ be the vector space of all $2\times 2$ real matrices.
\[S_6=\{ A\in M_{2\times 2} \mid \det(A) \neq 0\} \] in the vector space $M_{2\times 2}$.
(7)\[S_7=\{ A\in M_{2\times 2} \mid \det(A)=0\} \] in the vector space $M_{2\times 2}$.
(Linear Algebra Exam Problem, the Ohio State University)
(8) Let $C[a, b]$ be the vector space of all real continuous functions defined on the interval $[a, b]$.
\[S_8=\{ f(x)\in C[-2,2] \mid f(-1)f(1)=0\} \] in the vector space $C[-2, 2]$.
(9)\[S_9=\{ f(x) \in C[-1, 1] \mid f(x)\geq 0 \text{ for all } -1\leq x \leq 1\}\] in the vector space $C[-1, 1]$. (10) Let $C^2[a, b]$ be the vector space of all real-valued functions $f(x)$ defined on $[a, b]$, where $f(x), f'(x)$, and $f^{\prime\prime}(x)$ are continuous on $[a, b]$. Here $f'(x), f^{\prime\prime}(x)$ are the first and second derivatives of $f(x)$.
\[S_{10}=\{ f(x) \in C^2[-1, 1] \mid f^{\prime\prime}(x)+f(x)=\sin(x) \text{ for all } -1\leq x \leq 1\}\] in the vector space $C^2[-1, 1]$.
(11) Let $S_{11}$ be the set of real polynomials of degree exactly $k$, where $k \geq 1$ is an integer, in the vector space $P_k$. (12) Let $V$ be a vector space and $W \subset V$ a vector subspace. Define the subset $S_{12}$ to be the complement of $W$,
\[ V \setminus W = \{ \mathbf{v} \in V \mid \mathbf{v} \not\in W \}.\]
Given 3 topological spaces $X,Y,Z $ and 2 functions $ f:X \rightarrow Z $, $ g:Y \rightarrow Z$, I define the fiber product between $ X $ and $ Y $ over $ Z $ by: $$ X\times_Z Y := \{(x,y) \in X \times Y | f(x) = g(y) \}. $$ My problem is to find the universal covering space of $ X :=\mathbb{P}^2 \vee \mathbb{S}^1 $. Here I call $ \mathbb{P}^2 $ the real projective plane, while $ \mathbb{S}^1 $ is the circle, and by $ X \vee Y $ I mean the wedge sum between $ X $ and $ Y $. My idea was the following: I call $ Y $ the 2-sheeted covering space of $ X $ given by the wedge sum of $ \mathbb{S}^2 $ (the 2-sphere, which is the universal covering space of the proj plane) and 2 copies of the circle, each of those attached to the lifted basepoint; and I call $ Z $ the wedge sum of 2 copies of the circle. Now, it is clear that there exists a onto map $ f:Y \twoheadrightarrow Z $, which sends the whole sphere into the basepoint of $ Z $. On the other hand, the universal covering space $ \tilde{Z} $ of $Z$ is well known (cfr. Hatcher, p. 77, it is a fractal tree). So, If I consider the fiber product $$ Y \times_Z \tilde{Z}, $$ defined as above, this is a covering space of $ Y $. If this covering space of $ Y $ were simply connected, then it would be the universal one, and it would then be the universal covering space of $ X $. I finally turn to my question: how can I visualize the space $ Y \times_Z \tilde{Z} $ graphically? It should be $ \tilde{Z} $ with a sphere (instead of a point) placed at each segment intersection, but I cannot convince myself about this. If this were correct, than it would be clear that $ Y \times_Z \tilde{Z} $ turns out to be simply connected.
Imagine traveling through your fiber product. You are represented by a pair of dots traveling in both spaces; the dots move over time, but they must map to the same point of $Z$ at all times.
While the $X$-dot is not in a sphere, its position is uniquely determined by the $Y$-dot's position. When the $Y$-dot is at a vertex of the tree, the $X$-dot is free to move around the sphere. So the effect of it all is that you put a sphere at each vertex of the tree, separating the edges coming into the vertex into two groups.
(I think of the space you described as a man walking his dog through the streets (the 1-dimensional parts), finding occasional dog parks (the spheres) where the dog can run free.)
In preparation for an exam, I'm revisiting old exam questions. This one seems neat, but also quite complicated:
A soccer ball with radius $R=11\,\mathrm{cm}$ is inflated at a pressure of $P = 9 \times 10^4\,\mathrm{Pa}$, then dropped from a height of $0.1\,\mathrm{m}$ (distance from the floor to the lowest part of the ball) onto a hard floor, and bounces off elastically.
Question: Find approximate expressions for (1) the surface area of the ball in contact with the floor, (2) the amount of time the ball is in contact with the floor, and (3) the peak force exerted on the floor, given that the mass is $0.42\,\mathrm{kg}$.
My attempt at a solution: Assume the ball is filled with an ideal gas and that the process is adiabatic. Assume that the deformation leads to a simple spherical segment, i.e. a ball where one part is cut off flat. This gives an expression for the volume $V$ in terms of the height $h$ of the "center" of the ball as $$V = \frac{11}{6}\pi R^3 + \frac{\pi}{6}h^3.$$ The surface area then is simply $$A = \pi (R^2 - h^2).$$
Next: The ball has potential energy $mgh_0$, which is completely converted to internal energy at the peak point of the motion. The internal energy of an ideal gas is $$U = 3/2 N kT = 3/2 PV = 3/2 P_0 V_0^\gamma V^{1-\gamma}$$ where $\gamma$ is the adiabatic coefficient (1.4 for air).
The change in $U$ due to a changing $V$ then comes completely from the initial potential energy $E$. Some algebra and some sensible binomial approximations then yield a simple expression for the surface area: $$A = \frac{32 mgh}{9 V_0 P_0 (\gamma - 1)}.$$
Using $F = PA$ then allows me to calculate the peak force.
But what about the contact time? My initial guess was to approximate $F(t)$ as a triangle curve that goes from $0$ to $F_{max}$ and back to $0$, and then use that force equals the change in momentum over time, $dp = F\,dt$, which in this simple case means the total change in momentum $\Delta p$ equals $\frac{1}{2} F_{max} \Delta t$. I can calculate the initial momentum from the initial potential energy, and since the process is elastic, the change in momentum is (minus) two times that value. Then I know everything needed to calculate $\Delta t$.
I am, however, unsure about whether I can apply this law about momentum at all, since I am not talking about a simple point of mass here, rather about a bunch of gas molecules confined to a certain volume. This is also why my standard approach to mechanics questions, i.e. Lagrangian mechanics, doesn't seem to work: A simple coordinate describing the entire process would be $h$, but what is the kinetic term in terms of $h$?
EDIT: I just realized that my formula for the spherical cap is wrong, and a bit more complicated than I wanted: if $h$ is the distance from the center of the sphere to the base of the cap, the volume becomes $$V = \frac{2}{3} R^3 - hR^2 + \frac{h^3}{3}.$$
If I keep the pressure constant, the work needed for a volume change is $P \Delta V$, so we can equate: $$\Delta V = E_0 / P$$ where $E_0 = mgh$ is the initial potential energy of the ball.
The only problem then is that I have a third-order equation for $h$ which, I feel, is too complicated to solve. But let's see... let $d = R - h$; then we get $V = Rd^2 - d^3/3$, and now we assume that $d \ll R$ so that we have $V \approx Rd^2$.
Next, we need the surface area, $A = \pi (R^2 - h^2) = \pi (R^2 - (R-d)^2) \approx 2\pi R d$ where we again use $R \gg d$.
Plugging in some numbers gives $A \approx 44 cm^2$ which amounts to a radius of the spherical cap of approx $3.7cm$.
The peak force $F_{max}$ is just $AP \approx 401 N$.
Assuming that the force grows linearly with time until $F_{max}$ is reached and then drops linearly to $0$, the total change in momentum over the contact time $\Delta t$ is $\Delta p = \frac{1}{2} F_{max} \Delta t$. The change in momentum is two times the initial momentum, so it is $$\Delta p = 2\sqrt{2mE_0} = \frac{1}{2} \cdot 2\pi \sqrt{E_0 R P}\, \Delta t,$$ which we can solve for $\Delta t$ to obtain $\Delta t \approx 9\,\mathrm{ms}$.
That sounds reasonable to me.
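For what it's worth, plugging the approximations from the edit into a short script (my own check; it follows the post's formulas $\Delta V \approx R d^2$ and $A \approx 2\pi R d$ exactly, including their constants) reproduces the contact area and peak force quoted above:

```python
import math

m, g, h0 = 0.42, 9.81, 0.1    # mass (kg), gravity (m/s^2), drop height (m)
R, P = 0.11, 9e4              # ball radius (m), pressure (Pa)

E0 = m * g * h0               # initial potential energy, ~0.41 J

# Post's approximations: Delta V = E0 / P with Delta V ~ R d^2,
# and flat-spot area A ~ 2 pi R d, where d = R - h << R.
d = math.sqrt(E0 / (P * R))
A = 2 * math.pi * R * d       # contact area, ~4.5e-3 m^2 (~45 cm^2)
F = P * A                     # peak force, ~400 N

print(A * 1e4)   # in cm^2
print(F)         # in N
```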
The correlation functions found in Barouch and McCoy's paper (PRA 3, 2137 (1971)) for the XX spin chain use a method which uses Wick's theorem. For the zz correlation function, this gives
$\langle \sigma_l^z \sigma_{l+R}^z \rangle = \langle \sigma_l^z \rangle^2 - G_R^2$
where for $R=1$, $G_1 = -\langle \sigma_l^x \sigma_{l+1}^x+ \sigma_l^y \sigma_{l+1}^y \rangle/2$.
If I calculate $\langle \sigma_l^z \sigma_{l+1}^z \rangle$ both explicitly and using the equation above for 8 qubits, I get different answers.
So is Wick's theorem still valid for 8 qubits, which means I've just made a mistake? Or is it valid only in the thermodynamic limit?
Thanks
Edit:
Thanks for your replies, everyone. @lcv: However, I haven't used the analytical diagonalisation for this; I have simply used Mathematica to diagonalise the 8-qubit chain numerically after substituting arbitrary values for the coupling strength, magnetic field and temperature. Hence it can't be an error in the diagonalisation. It is the thermal average I have calculated, that is $\langle \sigma^z_l \rangle=tr(\rho \sigma^z_l )$ where $\rho=e^{-H/T}/tr(e^{-H/T})$ and $T$ is the temperature. But in doing this, I find that $\langle \sigma^z_l \sigma^z_{l+R} \rangle \neq \langle \sigma^z_l \rangle^2 - G_1^2$, where I've defined $G_1$ above.
Edit 2 (@marek @lcv @Fitzsimons @Luboš): I'm going to try to clarify. The open XX Hamiltonian in a magnetic field is
\begin{equation} H=-\frac{J}{2}\sum_{l=1}^{N-1} (\sigma^x_l \sigma^x_{l+1} + \sigma^y_l \sigma^y_{l+1})- B \sum_{l=1}^N \sigma^z_l \end{equation}
In Mathematica, I have defined the Pauli spin matrices, then the Hamiltonian for 8 qubits. I then put in values for $J$, $B$ and $T$, and calculate the thermal density matrix,
\begin{equation} \rho = \frac{e^{-H/T}}{tr(e^{-H/T})} \end{equation}
So now I have numerical density matrix. I then calculate $\langle \sigma^z_l \sigma_{l+1}^z \rangle=tr(\rho \sigma^z_l \sigma_{l+1}^z )$ using the definitions of the Pauli spin matrices and $\rho$.
Next I calculate $\langle \sigma_l^z \sigma_{l+R}^z \rangle$ using the result from Wick's theorem which gives $\langle \sigma_l^z \rangle^2 - G_R^2$ where for $R=1$, $G_1 = -\langle \sigma_l^x \sigma_{l+1}^x+ \sigma_l^y \sigma_{l+1}^y \rangle/2$. I again use the Pauli spin matrices I defined and the same numerical $\rho$ to calculate them.
But I get a different (numerical) answer for each of these.
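For reference, the brute-force computation described in Edit 2 can be reproduced outside Mathematica. The sketch below (my own translation, assuming NumPy; $N=4$ instead of 8 to keep it small, and the parameter values are arbitrary) builds the open XX Hamiltonian, forms the thermal state, and evaluates a zz correlator. It does not settle the Wick question, but it shows the quantities being compared:

```python
import numpy as np
from functools import reduce

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, site, N):
    """Embed a single-site operator at `site` in an N-qubit chain."""
    return reduce(np.kron, [single if k == site else I2 for k in range(N)])

N, J, B, T = 4, 1.0, 0.5, 1.0
H = sum(-J / 2 * (op(sx, l, N) @ op(sx, l + 1, N) + op(sy, l, N) @ op(sy, l + 1, N))
        for l in range(N - 1))
H = H - B * sum(op(sz, l, N) for l in range(N))

# Thermal state via eigendecomposition (H is Hermitian).
E, U = np.linalg.eigh(H)
w = np.exp(-(E - E.min()) / T)          # shift for numerical stability
rho = (U * (w / w.sum())) @ U.conj().T

corr = np.trace(rho @ op(sz, 0, N) @ op(sz, 1, N)).real
print(abs(np.trace(rho) - 1) < 1e-12)   # True: rho is normalized
print(-1.0 <= corr <= 1.0)              # True: a valid zz correlator
```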
Chapters
Chapter 2: Functions
Chapter 3: Binary Operations
Chapter 4: Inverse Trigonometric Functions
Chapter 5: Algebra of Matrices
Chapter 6: Determinants
Chapter 7: Adjoint and Inverse of a Matrix
Chapter 8: Solution of Simultaneous Linear Equations
Chapter 9: Continuity
Chapter 10: Differentiability
Chapter 11: Differentiation
Chapter 12: Higher Order Derivatives
Chapter 13: Derivative as a Rate Measurer
Chapter 14: Differentials, Errors and Approximations
Chapter 15: Mean Value Theorems
Chapter 16: Tangents and Normals
Chapter 17: Increasing and Decreasing Functions
Chapter 18: Maxima and Minima
Chapter 19: Indefinite Integrals
Chapter 20: Definite Integrals
Chapter 21: Areas of Bounded Regions
Chapter 22: Differential Equations
Chapter 23: Algebra of Vectors
Chapter 24: Scalar Or Dot Product
Chapter 25: Vector or Cross Product
Chapter 26: Scalar Triple Product
Chapter 27: Direction Cosines and Direction Ratios
Chapter 28: Straight Line in Space
Chapter 29: The Plane
Chapter 30: Linear programming
Chapter 31: Probability
Chapter 32: Mean and Variance of a Random Variable
Chapter 33: Binomial Distribution
RD Sharma Mathematics Class 12 by R D Sharma (Set of 2 Volume) (2018-19 Session)
Chapter 27: Direction Cosines and Direction Ratios, Exercise 27.1 solutions [Page 23]
If a line makes angles of 90°, 60° and 30° with the positive direction of the x, y, and z-axis respectively, find its direction cosines.
If a line has direction ratios 2, −1, −2, determine its direction cosines.
Find the direction cosines of the line passing through two points (−2, 4, −5) and (1, 2, 3) .
Using direction ratios show that the points A (2, 3, −4), B (1, −2, 3) and C (3, 8, −11) are collinear.
Find the direction cosines of the sides of the triangle whose vertices are (3, 5, −4), (−1, 1, 2) and (−5, −5, −2).
Find the angle between the vectors with direction ratios proportional to 1, −2, 1 and 4, 3, 2.
Find the angle between the vectors whose direction cosines are proportional to 2, 3, −6 and 3, −4, 5.
Find the acute angle between the lines whose direction ratios are proportional to 2 : 3 : 6 and 1 : 2 : 2.
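Most of the exercises above reduce to two small computations: normalizing direction ratios into direction cosines, and taking the inverse cosine of a normalized dot product. A Python sketch (my own illustration, not part of the textbook; the function names are mine):

```python
import math

def direction_cosines(a, b, c):
    """Direction cosines from direction ratios: divide by the norm."""
    n = math.sqrt(a * a + b * b + c * c)
    return (a / n, b / n, c / n)

# Direction ratios 2, -1, -2 have norm 3:
print(direction_cosines(2, -1, -2))   # (2/3, -1/3, -2/3)

def angle_between(r1, r2):
    """Acute angle (degrees) between lines with direction ratios r1, r2."""
    dot = sum(x * y for x, y in zip(r1, r2))
    n1 = math.sqrt(sum(x * x for x in r1))
    n2 = math.sqrt(sum(x * x for x in r2))
    return math.degrees(math.acos(abs(dot) / (n1 * n2)))

# Ratios 1, -2, 1 and 4, 3, 2: dot = 4 - 6 + 2 = 0, so the lines
# are perpendicular.
print(angle_between((1, -2, 1), (4, 3, 2)))   # 90.0
```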
Show that the points (2, 3, 4), (−1, −2, 1), (5, 8, 7) are collinear.
Show that the line through points (4, 7, 8) and (2, 3, 4) is parallel to the line through the points (−1, −2, 1) and (1, 2, 5).
Show that the line through the points (1, −1, 2) and (3, 4, −2) is perpendicular to the line through the points (0, 3, 2) and (3, 5, 6).
Show that the line joining the origin to the point (2, 1, 1) is perpendicular to the line determined by the points (3, 5, −1) and (4, 3, −1).
Find the angle between the lines whose direction ratios are proportional to a, b, c and b − c, c − a, a − b.
If the coordinates of the points A, B, C, D are (1, 2, 3), (4, 5, 7), (−4, 3, −6) and (2, 9, 2), then find the angle between AB and CD.
Find the direction cosines of the lines, connected by the relations: l + m +n = 0 and 2lm + 2ln − mn= 0.
Find the angle between the lines whose direction cosines are given by the equations
(i) l + m + n = 0 and l² + m² − n² = 0
Find the angle between the lines whose direction cosines are given by the equations 2l − m + 2n = 0 and mn + nl + lm = 0.
Find the angle between the lines whose direction cosines are given by the equations l + 2m + 3n = 0 and 3lm − 4ln + mn = 0.
Find the angle between the lines whose direction cosines are given by the equations 2l + 2m − n = 0, mn + ln + lm = 0.
Chapter 27: Direction Cosines and Direction Ratios, Exercise Very Short Answers solutions [Pages 24 - 25]
Define direction cosines of a directed line.
What are the direction cosines of the X-axis?
What are the direction cosines of the Y-axis?
What are the direction cosines of the Z-axis?
Write the distances of the point (7, −2, 3) from the XY, YZ and XZ-planes.
Write the distance of the point (3, −5, 12) from the X-axis.
Write the ratio in which the YZ-plane divides the segment joining P (−2, 5, 9) and Q (3, −2, 4).
A line makes an angle of 60° with each of the X-axis and Y-axis. Find the acute angle made by the line with the Z-axis.
If a line makes angles α, β and γ with the coordinate axes, find the value of cos 2α + cos 2β + cos 2γ.
Write the ratio in which the line segment joining (a, b, c) and (−a, −c, −b) is divided by the xy-plane.
Write the inclination of a line with the Z-axis, if its direction ratios are proportional to 0, 1, −1.
Write the angle between the lines whose direction ratios are proportional to 1, −2, 1 and 4, 3, 2.
Write the distance of the point P (x, y, z) from the XOY plane.
Write the coordinates of the projection of the point P (x, y, z) on the XOZ-plane.
Write the coordinates of the projection of the point P (2, −3, 5) on the Y-axis.
Find the distance of the point (2, 3, 4) from the x-axis.
If a line has direction ratios proportional to 2, −1, −2, then what are its direction cosines?
Write the direction cosines of a line parallel to the z-axis.
If a unit vector `vec a` makes an angle \[\frac{\pi}{3} \text{ with } \hat{i} , \frac{\pi}{4} \text{ with } \hat{j}\] and an acute angle θ with \[\hat{ k} \] ,then find the value of θ.
Answer each of the following questions in one word or one sentence or as per exact requirement of the question:
Write the distance of a point P( a, b, c) from x-axis.
If a line makes angles 90° and 60° respectively with the positive directions of the x and y axes, find the angle which it makes with the positive direction of the z-axis.
Chapter 27: Direction Cosines and Direction Ratios, Exercise MCQ solutions [Pages 25 - 26]
For every point P (x, y, z) on the xy-plane,
x = 0
y = 0
z = 0
x = y = z = 0
For every point P (x, y, z) on the x-axis (except the origin),
x = 0, y = 0, z ≠ 0
x = 0, z = 0, y ≠ 0
y = 0, z = 0, x ≠ 0
x = y = z = 0
A rectangular parallelopiped is formed by planes drawn through the points (5, 7, 9) and (2, 3, 7) parallel to the coordinate planes. The length of an edge of this rectangular parallelopiped is
2
3
4
all of these
A parallelopiped is formed by planes drawn through the points (2, 3, 5) and (5, 9, 7), parallel to the coordinate planes. The length of a diagonal of the parallelopiped is
7
`sqrt(38)`
`sqrt(155)`
none of these
The xy-plane divides the line joining the points (−1, 3, 4) and (2, −5, 6)
internally in the ratio 2 : 3
externally in the ratio 2 : 3
internally in the ratio 3 : 2
externally in the ratio 3 : 2
If the x-coordinate of a point P on the join of Q (2, 2, 1) and R (5, 1, −2) is 4, then its z-coordinate is
2
1
-1
-2
The distance of the point P (a, b, c) from the x-axis is
\[\sqrt{b^2 + c^2}\]
\[\sqrt{a^2 + c^2}\]
\[\sqrt{a^2 + b^2}\]
none of these
The ratio in which the xy-plane divides the join of (1, 2, 3) and (4, 2, 1) is
3 : 1 internally
3 : 1 externally
1 : 2 internally
2 : 1 externally
If P (3, 2, −4), Q (5, 4, −6) and R (9, 8, −10) are collinear, then R divides PQ in the ratio
3 : 2 externally
3 : 2 internally
2 : 1 internally
2 : 1 externally
If O is the origin and OP = 3 with direction ratios proportional to −1, 2, −2, then the coordinates of P are
(−1, 2, −2)
(1, 2, 2)
(−1/9, 2/9, −2/9)
(3, 6, −9)
The angle between the two diagonals of a cube is
(a) 30°
(b) 45°
(c) \[\cos^{- 1} \left( \frac{1}{\sqrt{3}} \right)\]
(d) \[\cos^{- 1} \left( \frac{1}{3} \right)\]
If a line makes angles α, β, γ, δ with the four diagonals of a cube, then cos²α + cos²β + cos²γ + cos²δ is equal to
\[\frac{1}{3}\]
\[\frac{2}{3}\]
\[\frac{4}{3}\]
\[\frac{8}{3}\]
RD Sharma solutions for Class 12 Maths chapter 27 (Direction Cosines and Direction Ratios) include all questions with solutions and detailed explanations. This will clear students' doubts about any question and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear up any confusion. Shaalaa.com has the CBSE Mathematics for Class 12 by R D Sharma (Set of 2 Volume) (2018-19 Session) solutions in a manner that helps students grasp basic concepts better and faster.
Further, we at Shaalaa.com are providing such solutions so that students can prepare for written exams. RD Sharma textbook solutions can be a core help for self-study and act as perfect self-help guidance for students.
Concepts covered in Class 12 Mathematics chapter 27 Direction Cosines and Direction Ratios are Three - Dimensional Geometry Examples and Solutions, Introduction of Three Dimensional Geometry, Equation of a Plane Passing Through Three Non Collinear Points, Relation Between Direction Ratio and Direction Cosines, Intercept Form of the Equation of a Plane, Coplanarity of Two Lines, Distance of a Point from a Plane, Angle Between Line and a Plane, Angle Between Two Planes, Angle Between Two Lines, Vector and Cartesian Equation of a Plane, Equation of a Plane in Normal Form, Equation of a Plane Perpendicular to a Given Vector and Passing Through a Given Point, Plane Passing Through the Intersection of Two Given Planes, Shortest Distance Between Two Lines, Equation of a Line in Space, Direction Cosines and Direction Ratios of a Line.
|
The stats implementation of rWishart is in C and is very fast. It is often the case that we do not want a sample from the Wishart distribution, but rather from its inverse, from the Cholesky decomposition of a sample from the Wishart distribution, or even from the inverse of the Cholesky decomposition of a draw from the Wishart distribution. Funnily enough (if you have a weird sense of humor), when you inspect the source code for rWishart (R Core Team (2017)), it generates the Cholesky decomposition and then multiplies it out. Meanwhile, drawing from rWishart and then inverting or taking a Cholesky decomposition in R is comparatively slow.
This suggests some obvious efficiencies: perhaps, if we would rather have the Cholesky decomposition of the Wishart random matrix, we could tell the function to stop right there.
```r
library('CholWishart')
set.seed(20180220)
A <- stats::rWishart(1, 10, 5 * diag(4))[, , 1]
set.seed(20180220)
B <- rInvWishart(1, 10, .2 * diag(4))[, , 1]
set.seed(20180220)
C <- rCholWishart(1, 10, 5 * diag(4))[, , 1]
set.seed(20180220)
D <- rInvCholWishart(1, 10, .2 * diag(4))[, , 1]
```
Suppose \(X_i \sim MVN(0, \Sigma)\) are independent \(p\)-variate normal random variables, \(i = 1, 2, \ldots n\) with \(n > p-1\). Then \(S = \sum X_i^T X_i\), called the “scatter matrix”, is almost surely positive definite if \(\Sigma\) is positive definite. The random variable \(S\) is said to be distributed as a Wishart random variable: \(S \sim W_p(n, \Sigma)\), see Gupta and Nagar (1999). This can be extended to the non-integer case as well.
How does rWishart(n, df, Sigma) work (supposing Sigma is a \(p \times p\) matrix)? First, it generates a sample from the Cholesky decomposition of a Wishart distribution with \(\Sigma = \mathbf{I}_p\). How this is done: for the \(i^{th}\) element of the main diagonal, draw from \(\sqrt{\chi_{\nu-i+1}^2}\), where \(\nu\) is the df argument. On the upper triangle of the matrix, sample each entry from an independent \(N(0,1)\). Then, this can be multiplied by the Cholesky decomposition of the provided Sigma to obtain the Cholesky factor of the desired sample from the Wishart random variable (this construction is due to Bartlett and is also known as the Bartlett decomposition; see Anderson (1984)). The rWishart function multiplies this out. Therefore, if the Cholesky decomposition is desired, one only needs to stop there.
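The Bartlett construction described above is easy to sketch outside of R as well; here is a minimal NumPy version (an illustrative sketch of the algorithm, not the package's C code; the lower-triangular convention is used):

```python
import numpy as np

def bartlett_wishart(df, Sigma, rng=None):
    # One draw from W_p(df, Sigma) via the Bartlett decomposition:
    # build the Cholesky-type factor of W_p(df, I_p), scale it by chol(Sigma),
    # then multiply out (as stats::rWishart does as its final step).
    rng = np.random.default_rng() if rng is None else rng
    p = Sigma.shape[0]
    A = np.zeros((p, p))
    # sqrt(chi-square) entries on the diagonal: df - i + 1 degrees of
    # freedom for the i-th diagonal element (1-indexed)
    A[np.diag_indices(p)] = np.sqrt(rng.chisquare(df - np.arange(p)))
    # independent N(0, 1) entries below the diagonal
    A[np.tril_indices(p, -1)] = rng.standard_normal(p * (p - 1) // 2)
    L = np.linalg.cholesky(Sigma)
    LA = L @ A        # Cholesky-type factor of the W_p(df, Sigma) draw
    return LA @ LA.T
```

Stopping at `LA` roughly corresponds to what `rCholWishart` returns, up to the triangular convention.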
If \(X \sim \textrm{W}_p(\nu,\Sigma)\), then we define the Inverse Wishart as \(X^{-1} = Y \sim \textrm{IW}_p(\nu , \Sigma^{-1})\). There are other parameterizations of the distribution, mostly coming down to different ways of writing the \(\nu\) parameter - be aware of this when using any package drawing from the Inverse Wishart distribution (see Dawid (1981) for an alternative; this presentation follows Gupta and Nagar (1999)). This comes up directly in Bayesian statistics. We are also interested in the Cholesky decomposition of this, as it is required in the generation of the matrix variate \(t\)-distribution. In this package it is done by taking the covariance matrix, inverting it, computing the Cholesky decomposition of the inverted covariance matrix, drawing the Cholesky factor of a Wishart matrix using that, and then inverting based on that (as finding \(\Psi^{-1}\) given the Cholesky factorization of \(\Psi\) is relatively fast). This can then be converted into the Cholesky factor of the Inverse Wishart if that is what is desired. This would be slow to do in R, but in C it is not so bad.
Here is what happens with the results of the above:
```r
A %*% B
##               [,1]          [,2]          [,3]          [,4]
## [1,]  1.000000e+00 -2.775558e-17 -1.387779e-16 -5.551115e-17
## [2,] -4.718448e-16  1.000000e+00  1.387779e-17  0.000000e+00
## [3,] -1.249001e-16  1.387779e-17  1.000000e+00 -5.551115e-17
## [4,]  1.110223e-16 -1.110223e-16  0.000000e+00  1.000000e+00

crossprod(C) %*% crossprod(D) # note: we do not expect C = D^-1, we expect this!
##               [,1]          [,2]          [,3]          [,4]
## [1,]  1.000000e+00 -2.775558e-17 -2.081668e-16 -1.110223e-16
## [2,] -4.718448e-16  1.000000e+00  1.387779e-16  0.000000e+00
## [3,] -1.249001e-16  1.110223e-16  1.000000e+00 -1.110223e-16
## [4,]  1.110223e-16 -8.326673e-17 -8.326673e-17  1.000000e+00

crossprod(D) %*% A
##               [,1]          [,2]          [,3]          [,4]
## [1,]  1.000000e+00 -4.718448e-16 -1.249001e-16  1.110223e-16
## [2,] -2.775558e-17  1.000000e+00  1.110223e-16 -8.326673e-17
## [3,] -2.081668e-16  1.387779e-16  1.000000e+00 -8.326673e-17
## [4,] -1.110223e-16  0.000000e+00 -1.110223e-16  1.000000e+00

crossprod(C) %*% B
##               [,1]          [,2]          [,3]          [,4]
## [1,]  1.000000e+00 -2.775558e-17 -1.387779e-16 -5.551115e-17
## [2,] -4.718448e-16  1.000000e+00  1.387779e-17  0.000000e+00
## [3,] -1.249001e-16  1.387779e-17  1.000000e+00 -5.551115e-17
## [4,]  1.110223e-16 -1.110223e-16  0.000000e+00  1.000000e+00
```
There is some roundoff error.
Suppose, instead of the above definition of the Wishart, we have \(n \leq p-1\). Then the scatter matrix defined above will not be positive definite. This is called the pseudo Wishart distribution. If we then take the Moore-Penrose pseudo-inverse of this, we have the generalized inverse Wishart distribution.
```r
A <- rPseudoWishart(n = 1, df = 3, Sigma = diag(5))[, , 1]
A
##            [,1]       [,2]       [,3]       [,4]       [,5]
## [1,]  1.9553698  0.3134085 -0.8487259 -0.5174803  0.7487007
## [2,]  0.3134085  2.3856728 -0.6106572  1.4471373  1.5976931
## [3,] -0.8487259 -0.6106572  0.5838199 -0.7509533 -0.7863404
## [4,] -0.5174803  1.4471373 -0.7509533  4.8520003  1.6696925
## [5,]  0.7487007  1.5976931 -0.7863404  1.6696925  1.4396817
qr(A)$rank
## [1] 3

B <- rGenInvWishart(n = 1, df = 3, Sigma = diag(5))[, , 1]
B
##             [,1]         [,2]        [,3]         [,4]        [,5]
## [1,]  0.33422500  0.189387794  0.07086651 -0.052283855 -0.04241885
## [2,]  0.18938779  0.155647000 -0.02450637  0.001357582 -0.07549583
## [3,]  0.07086651 -0.024506366  0.10489215 -0.052325490  0.03515182
## [4,] -0.05228386  0.001357582 -0.05232549  0.028056042 -0.02793470
## [5,] -0.04241885 -0.075495830  0.03515182 -0.027934699  0.24217199
qr(B)$rank
## [1] 3
```
Note that the rank of both of these matrices is less than the dimension.
This package also has functions for density computations with the Wishart distribution. Densities are only defined for positive-definite input matrices and \(\nu\) parameters larger than the dimension \(p\).
The return value is on the log scale, but this can be specified otherwise.
```r
dWishart(diag(3), df = 5, 5 * diag(3))
## [1] -19.45038
dInvWishart(diag(3), df = 5, .2 * diag(3))
## [1] -19.45038
```
Note that, in general, these will not agree even if their covariance matrix parameters are inverses of each other. One of the reasons this works is that the determinant of \(\mathbf{X}\) is \(1\).
The density functions can take 3-D array input indexed on the third dimension and will output a vector of densities.
```r
set.seed(20180311)
A <- rWishart(n = 3, df = 3, Sigma = diag(3))
dWishart(A, df = 3, Sigma = diag(3))
## [1] -13.070275  -8.879220  -8.555529
```
The multivariate gamma (\(\Gamma_p\)) and digamma (\(\psi_p\)) functions are extensions of the univariate gamma (\(\Gamma\)) and digamma (\(\psi\)) functions (Mardia, Bibby, and Kent (1982)). They are useful in calculating the densities above, and they come up in other distributions as well. The digamma is the first derivative of the logarithm of the gamma function. When the dimension \(p = 1\), they coincide with the usual definitions of the digamma and gamma functions.
The multivariate gamma also comes in a logarithmic form (lmvgamma).

```r
lmvgamma(1:4, 1) # note how they agree when p = 1
## [1] 0.0000000 0.0000000 0.6931472 1.7917595
lgamma(1:4)
## [1] 0.0000000 0.0000000 0.6931472 1.7917595
```
Anderson, T. W., ed. 1984. An Introduction to Multivariate Statistical Analysis. Wiley.
Dawid, A. P. 1981. "Some Matrix-Variate Distribution Theory: Notational Considerations and a Bayesian Application." Biometrika 68 (1): 265–74. http://www.jstor.org/stable/2335827.
Gupta, A. K., and D. K. Nagar. 1999. Matrix Variate Distributions. Monographs and Surveys in Pure and Applied Mathematics. Taylor & Francis. https://books.google.com/books?id=PQOYnT7P1loC.
Mardia, K. V., J. M. Bibby, and J. T. Kent. 1982. Multivariate Analysis. Probability and Mathematical Statistics. Acad. Press. https://books.google.com/books?id=1nLonQEACAAJ.
R Core Team. 2017. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/. |
Martin Mann, Mostafa M Mohamed, Syed M Ali, and Rolf Backofen. Interactive implementations of thermodynamics-based RNA structure and RNA-RNA interaction prediction approaches for example-driven teaching. PLOS Computational Biology, 14 (8), e1006341, 2018.
Martin Raden, Syed M Ali, Omer S Alkhnbashi, Anke Busch, Fabrizio Costa, Jason A Davis, Florian Eggenhofer, Rick Gelhausen, Jens Georg, Steffen Heyne, Michael Hiller, Kousik Kundu, Robert Kleinkauf, Steffen C Lott, Mostafa M Mohamed, Alexander Mattheis, Milad Miladi, Andreas S Richter, Sebastian Will, Joachim Wolff, Patrick R Wright, and Rolf Backofen. Freiburg RNA tools: a central online resource for RNA-focused research and teaching. Nucleic Acids Research, 46(W1), W25-W29, 2018.
Teaching - MEA: max. exp. accuracy. Source at github@BackofenLab/RNA-Playground
To predict the structure with maximum expected accuracy (MEA) for a given RNA sequence, the algorithm introduced by Zhi J. Lu and co-workers (2009) uses the sequence's base pair and unpaired probabilities. The approach follows a Nussinov-like recursion using the probabilities derived from John S. McCaskill's algorithm.
Here, we use our simplified McCaskill approach for the probability computation. Therein we apply a Nussinov-like energy scoring scheme, i.e. each base pair of a structure contributes a fixed energy term $E_{bp}$ independent of its context. Furthermore, beside the identification of an optimal MEA structure via traceback, we provide an exhaustive enumeration of up to 15 suboptimal structures using the algorithm by Stefan Wuchty et al. (1999). For each structure, the according traceback is visualized on selection.
The interactive tool takes the following inputs: the RNA sequence, the minimal loop length $l$, the energy weight of a base pair $E_{bp}$, the 'normalized' temperature $RT$, the base pair weighting $\gamma$, and the allowed delta to the MEA score.
MEA structure prediction
The MEA structure prediction uses the following recursion to fill a dynamic programming table $M$. An entry $M_{i,j}$ provides the MEA score for the subsequence $S_{i}..S_{j}$, such that the overall score is found in $M_{1,n}$ for a sequence of length $n$.
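Since the recursion itself is rendered as an image on the original page, here is a Python sketch of one common Nussinov-style MEA fill (assumptions: `p_bp[k][j]` are base-pair probabilities, `p_u[j]` unpaired probabilities, `gamma` the base-pair weighting; the exact recursion used by the tool may differ in details):

```python
import numpy as np

def mea_fill(p_bp, p_u, gamma=2.0, min_loop=0):
    # M[i, j]: MEA score of subsequence i..j (0-based, inclusive).
    # Recursion (one common formulation):
    #   M[i, j] = max( M[i, j-1] + p_u[j],                          # j unpaired
    #                  max_k M[i, k-1] + M[k+1, j-1] + 2*gamma*p_bp[k, j] )
    n = len(p_u)
    M = np.zeros((n, n))
    for i in range(n):
        M[i, i] = p_u[i]                 # a single base can only be unpaired
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = M[i, j - 1] + p_u[j]
            for k in range(i, j - min_loop):   # k pairs with j, j - k > min_loop
                left = M[i, k - 1] if k > i else 0.0
                inner = M[k + 1, j - 1] if k + 1 <= j - 1 else 0.0
                best = max(best, left + inner + 2.0 * gamma * p_bp[k, j])
            M[i, j] = best
    return M
```

The optimal structure is then recovered by the usual traceback over `M`.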
Possible Structures
Select a structure from the list or a cell of $M$ to see according tracebacks. Note, the structure list is limited to the first 15 structures identified via traceback.
Below, we provide a graphical depiction of the selected structure. Note, the rendering does not support a minimal loop length of 0.
Visualization done with forna. Base pairs are given by red edges, the sequence backbone is given by gray edges.
Probabilities used
Given the partition functions $Q$ and $Q^{bp}$ provided by the McCaskill algorithm, we can compute the probabilities of individual base pairs $(i,j)$ within the structure ensemble, i.e. $P^{bp}_{i,j} = \sum_{P \ni (i,j)} \exp(-E(P)/RT) / Z$ given by the sum of the Boltzmann probabilities of all structures that contain the base pair. For its computation, the following recursion is used, which covers both the case that $(i,j)$ is an external base pair as well as that $(i,j)$ is directly enclosed by an outer base pair $(p,q)$.
The following formula is used to compute the probability $P^u_{i}$ that a given sequence position $S_{i}$ is not paired. The probabilities are directly inferred from the base pair probabilities $P^{bp}$.
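Assuming a symmetric base-pair probability matrix, the inference is a one-liner (a sketch):

```python
import numpy as np

def unpaired_probs(P_bp):
    # P^u_i = 1 - sum_j P^bp_{i,j}: position i is unpaired exactly when
    # it takes part in no base pair (P_bp assumed symmetric here).
    return 1.0 - P_bp.sum(axis=1)
```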
|
Hi,
I am interested to know whether the following is possible with Pennylane:
To prepare a state (given \theta and \phi) from |0\rangle to |\psi\rangle, where |\psi\rangle = \cos(\frac{\theta}{2})|0\rangle + e^{i\phi}\sin(\frac{\theta}{2})|1\rangle, i.e. a state that can be represented on a Bloch sphere.
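For concreteness, here is the target state and one standard way to build it, an RY rotation followed by a phase shift, checked in plain NumPy (in PennyLane I assume this corresponds to `qml.RY` followed by `qml.PhaseShift`, but that gate choice is my assumption):

```python
import numpy as np

def bloch_state(theta, phi):
    # |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

def ry(theta):
    # standard RY rotation matrix
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def phase_shift(phi):
    # diag(1, e^{i phi}): adds the relative phase on |1>
    return np.diag([1.0, np.exp(1j * phi)])

theta, phi = 0.7, 1.3
psi = phase_shift(phi) @ ry(theta) @ np.array([1.0, 0.0], dtype=complex)
assert np.allclose(psi, bloch_state(theta, phi))   # exact, no global phase needed
```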
Kind regards! |
Water skiing is a sport where an individual is pulled behind a boat or a cable ski installation on a body of water, skimming the surface.
Consider an idealized case where the boat is moving at a constant velocity $\overrightarrow v_0=\text{const}$ (relative to the water), independently of the skier. The boat and skier are connected by a massless, unstretchable rope. Surface of the water is assumed to be smooth.
Then, what is the maximum possible (instantaneous) speed of the skier $v_\text{max}$ (relative to the water)?
My first guess, based on intuition, is that $v_\text{max}=2v_0$.
But I'm not at all sure.
Edit:
The skier's velocity may vary. Consider the projections of the velocity vectors $\overrightarrow v_1$ and $\overrightarrow v_2$ of the boat and the skier onto the direction of the rope. Because the rope is unstretchable, these projections must be equal. That means the following equality must hold: $$v_1\cos{\alpha}=v_2\cos{\beta}$$
$\alpha$ is an angle between $\overrightarrow v_1 $ and the direction of the rope
$\beta$ is an angle between $\overrightarrow v_2 $ and the direction of the rope
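To make the constraint concrete, a quick numerical check (illustrative numbers only):

```python
import math

# Rope constraint: velocity components along the rope must be equal,
#   v1 * cos(alpha) = v2 * cos(beta)
v1 = 10.0                    # boat speed, m/s (illustrative)
alpha = math.radians(20)     # angle between boat velocity and rope
beta = math.radians(60)      # angle between skier velocity and rope
v2 = v1 * math.cos(alpha) / math.cos(beta)   # skier speed, about 18.8 m/s
```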
So for example, if $\alpha<\beta$ then $v_2>v_1$. I.e. the skier is moving faster than the boat relative to the water surface. |
Time-dependent $CP$ violation measurements at Belle II
BELLE2-CONF-PROC-2018-024
Alessandro Gaz
09 November 2018

Abstract: Time-dependent CP-violation phenomena are a powerful tool to precisely measure fundamental parameters of the Standard Model and search for New Physics. The Belle II experiment is a substantial upgrade of the Belle detector and will operate at the SuperKEKB energy-asymmetric $e^+ e^-$ collider. The design luminosity of SuperKEKB is $8 \times 10^{35}$ cm$^{-2}$s$^{-1}$ and the Belle II experiment aims to record 50 ab$^{-1}$ of data, a factor of 50 more than the Belle experiment. This dataset will greatly improve the present knowledge, particularly on the CKM angles $\phi_1/\beta$ and $\phi_2/\alpha$ by measuring a wide spectrum of B-meson decays, including many with neutral particles in the final state. A study for the time-dependent analysis of $B^0\to\pi^0\pi^0$, relevant for the measurement of $\phi_2/\alpha$, and feasible only in the clean environment of an $e^+ e^-$ collider, will also be given. Keyword(s): CP-violation ; phi_1 ; phi_2 |
Tagged: abelian group
Abelian Group Problems and Solutions.
Problem 616
Suppose that $p$ is a prime number greater than $3$.
Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$.

(a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$.

(b) Determine the index $[G : S]$.

(c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$.

If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order

Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Problem 497
Let $G$ be an abelian group.
Let $a$ and $b$ be elements in $G$ of order $m$ and $n$, respectively. Prove that there exists an element $c$ in $G$ such that the order of $c$ is the least common multiple of $m$ and $n$.
Also determine whether the statement is true if $G$ is a non-abelian group.
Problem 434
Let $R$ be a ring with $1$.
A nonzero $R$-module $M$ is called irreducible if $0$ and $M$ are the only submodules of $M$. (It is also called a simple module.)

(a) Prove that a nonzero $R$-module $M$ is irreducible if and only if $M$ is a cyclic module with any nonzero element as its generator.

(b) Determine all the irreducible $\Z$-modules.

Problem 420
In this post, we study the Fundamental Theorem of Finitely Generated Abelian Groups, and as an application we solve the following problem.

Problem. Let $G$ be a finite abelian group of order $n$. If $n$ is the product of distinct prime numbers, then prove that $G$ is isomorphic to the cyclic group $Z_n=\Zmod{n}$ of order $n$.

Problem 343
Let $G$ be a finite group and let $N$ be a normal abelian subgroup of $G$.
Let $\Aut(N)$ be the group of automorphisms of $N$.
Suppose that the orders of groups $G/N$ and $\Aut(N)$ are relatively prime.
Then prove that $N$ is contained in the center of $G$. |
Wave energy converters in coastal structures

Introduction
Fig 1: Construction of a coastal structure.
Coastal works along European coasts are composed of very diverse structures. Many coastal structures are ageing and facing problems of stability, sustainability and erosion. Moreover climate change and especially sea level rise represent a new danger for them. Coastal dykes in Europe will indeed be exposed to waves with heights that are greater than the dykes were designed to withstand, in particular all the structures built in shallow water where the depth imposes the maximal amplitude because of wave breaking.
This necessary adaptation will be costly but will provide an opportunity to integrate converters of sustainable energy in the new maritime structures along the coasts, and in particular in harbours. This initiative will contribute to the reduction of the greenhouse effect. The produced energy can be used directly for the energy consumption in the harbour area and will reduce the carbon footprint of harbours by feeding docked ships with green energy. Nowadays these ships use their motors to produce electric power on board even when docked. Integration of wave energy converters (WEC) in coastal structures will favour the emergence of the new concept of future harbours with zero emissions.
Wave energy and wave energy flux
For regular water waves, the time-mean wave energy density E per unit horizontal area of the water surface (J/m²) is the sum of the kinetic and potential energy densities per unit horizontal area. The potential energy density is equal to the kinetic energy density [1], each contributing half of the time-mean wave energy density E, which is proportional to the wave height squared according to linear wave theory [1]:
(1)
[math]E= \frac{1}{8} \rho g H^2[/math]
where [math]g[/math] is the gravitational acceleration and [math]H[/math] the wave height of regular water waves. As the waves propagate, their energy is transported at the group velocity. As a result, the time-mean wave energy flux per unit crest length (W/m), perpendicular to the wave propagation direction, is equal to [1]:
(2)
[math] P = E \times c_{g}[/math]
with [math]c_{g}[/math] the group velocity (m/s). Due to the dispersion relation for water waves under the action of gravity, the group velocity depends on the wavelength λ (m), or equivalently, on the wave period T (s). Further, the dispersion relation is a function of the water depth h (m). As a result, the group velocity behaves differently in the limits of deep and shallow water, and at intermediate depths:
[math](\frac{\lambda}{20} \lt h \lt \frac{\lambda}{2})[/math]
Application for wave energy converters

For regular waves in deep water:
[math]c_{g} = \frac{gT}{4\pi} [/math] and [math]P_{w1} = \frac{\rho g^2}{32 \pi} H^2 T[/math]
The time-mean wave energy flux per unit crest length is used as one of the main criteria to choose a site for wave energy converters.
For real seas, whose waves are random in height, period (and direction), spectral parameters have to be used. [math]H_{m0}[/math], the spectral estimate of the significant wave height, is based on the zero-order moment of the spectral function: [math]H_{m0} = 4 \sqrt{m_0}[/math]. Moreover, the wave energy period is derived as follows [2]:
[math]T_e = \frac{m_{-1}}{m_0} [/math]
where [math]m_n[/math] represents the spectral moment of order n. An equation similar to that describing the power of regular waves is then obtained [2]:
[math]P_{w1} = \frac{\rho g^2}{64 \pi} H_{m0}^2 T_e[/math]
If local data are available ([math]H_{m0}^2, T_e [/math]) for a sea state through in-situ wave buoys for example, satellite data or numerical modelling, the last equation giving wave energy flux [math]P_{w1}[/math] gives a first estimation. Averaged over a season or a year, it represents the maximal energetic resource that can be theoretically extracted from wave energy. If the directional spectrum of sea state variance F (f,[math]\theta[/math]) is known with f the wave frequency (Hz) and [math]\theta[/math] the wave direction (rad), a more accurate formulation is used:
[math]P_{w2} = \rho g\int\int c_{g}(f,h)F(f,\theta) dfd \theta[/math]
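As a quick illustration, the deep-water expressions above are straightforward to evaluate (a sketch with illustrative values, not site data):

```python
import math

RHO, G = 1025.0, 9.81      # sea water density (kg/m^3), gravity (m/s^2)

def flux_regular(H, T):
    # regular deep-water waves: P = rho * g^2 * H^2 * T / (32 * pi), W per metre of crest
    return RHO * G**2 * H**2 * T / (32 * math.pi)

def flux_sea_state(Hm0, Te):
    # irregular sea state: P = rho * g^2 * Hm0^2 * Te / (64 * pi), W per metre of crest
    return RHO * G**2 * Hm0**2 * Te / (64 * math.pi)

# e.g. Hm0 = 2 m and Te = 8 s give roughly 16 kW per metre of wave crest
```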
Fig 2: Time-mean wave energy flux along West European coasts [3].
It can be shown easily that equations (5 and 6) can be reduced to (4) with the hypothesis of regular waves in deep water. The directional spectrum is deduced from directional wave buoys, SAR images or advanced spectral wind-wave models, known as third-generation models, such as WAM, WAVEWATCH III, TOMAWAC or SWAN. These models solve the spectral action balance equation without any a priori restrictions on the spectrum for the evolution of wave growth.
From the TOMAWAC model, the nearshore wave atlas ANEMOC along the coasts of Europe and France, based on the numerical modelling of wave climate over 25 years, has been produced [4]. Using equation (6), the time-mean wave energy flux along West European coasts is obtained (see Fig. 2). This equation (6) still presents some limits, like the definition of the bounds of the integration. Moreover, the objective to get data on the wave energy near coastal structures in shallow or intermediate water requires the use of numerical models that are able to represent the physical processes of wave propagation, like refraction, shoaling, dissipation by bottom friction or by wave breaking, interactions with tides, and diffraction by islands.
The wave energy flux is therefore calculated usually for water depth superior to 20 m. This maximal energetic resource calculated in deep water will be limited in the coastal zone:
at low tide, by wave breaking;
at high tide during storm events, when the wave height exceeds the maximal operating conditions;
by the screen effect due to the presence of capes, spits, reefs, islands, ...
Technologies
According to the International Energy Agency (IEA), more than a hundred wave energy conversion systems are under development worldwide. Many of them can be integrated in coastal structures. Evaluations based on objective criteria are necessary in order to rank these systems and to determine the most promising solutions.
Criteria are in particular:
the converter efficiency: the aim is to estimate the energy produced by the converter. The efficiency gives an estimate of the number of kWh produced by the machine, but not of the cost.
the converter survivability: the capacity of the converter to survive extreme conditions. The survivability gives an estimate of the cost, since the weaker the extreme loads are in comparison with the mean load, the smaller the cost.
Unfortunately, few data are available in the literature. In order to determine the characteristics of the different wave energy technologies, it is necessary to first classify them into four main families [5].
An interesting result is that the maximum average wave power [math]P_{abs}[/math] (W) that a point absorber can absorb from the waves does not depend on its dimensions [5]. It is theoretically possible to absorb a lot of energy with only a small buoy. It can be shown that for a body with a vertical axis of symmetry (but otherwise arbitrary geometry) oscillating in heave, the capture (or absorption) width [math]L_{max}[/math] (m) is as follows [5]:

[math]L_{max} = \frac{P_{abs}}{P_{w}} = \frac{\lambda}{2\pi}[/math] or [math]1 = \frac{P_{abs}}{P_{w}} \frac{2\pi}{\lambda}[/math]
Fig 4: Upper limit of mean wave power absorption for a heaving point absorber.
where [math]{P_{w}}[/math] is the wave energy flux per unit crest length (W/m). An optimally damped buoy responds however efficiently to a relatively narrow band of wave periods.
Babarit and Hals [6] derive this upper limit for the mean annual power in irregular waves at some typical locations where one could be interested in installing wave energy devices. The mean annual power absorption tends to increase linearly with the wave power resource. Overall, one can say that for a typical site whose resource is between 20-30 kW/m, the upper limit of mean wave power absorption is about 1 MW for a heaving WEC with a capture width between 30-50 m.
In order to complete these theoretical results and to describe the efficiency of the WEC in practical situations, the capture width ratio [math]\eta[/math] is also usually introduced. It is defined as the ratio between the absorbed power and the available wave power resource per meter of wave front times a relevant dimension B [m].
[math]\eta = \frac{P_{abs}}{P_{w}B} [/math]
The choice of the dimension B will depend on the working principle of the WEC. Most of the time it should be chosen as the width of the device, but in some cases another dimension is more relevant. Estimations of this ratio [math]\eta[/math] are given in [6]: 33% for OWC, 13% for overtopping devices, 9-29% for heaving buoys, and 20-41% for pitching devices. For energy converted to electricity, one must moreover take into account the energy losses in the other components of the system.
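Rearranged, the definition gives the absorbed power as P_abs = η P_w B; a back-of-the-envelope sketch with made-up values:

```python
P_w = 25e3     # wave energy flux resource, W per metre of wave front (illustrative)
B = 20.0       # characteristic device width, m (illustrative)
eta = 0.25     # capture width ratio (e.g. a heaving buoy, 9-29 %)
P_abs = eta * P_w * B   # absorbed power: 125 kW for this hypothetical device
```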
Civil engineering
Never forget that energy conversion is only a secondary function for the coastal structure; its primary function is still protection. It is necessary to verify whether the integration of WECs modifies the performance criteria for overtopping and stability, and to assess the consequences for the construction cost.
Integration of WEC in coastal structures will always be easier for a new structure than for an existing one. In the latter case, it requires some knowledge on the existing coastal structures. Solutions differ according to sea state but also to type of structures (rubble mound breakwater, caisson breakwaters with typically vertical sides). Some types of WEC are more appropriate with some types of coastal structures.
Fig 5: Several OWC (Oscillating water column) configurations (by Wavegen – Voith Hydro).
Environmental impact
Wave absorption, if it is significant, will change the hydrodynamics along the structure. If there is a mobile bottom in front of the structure, sand deposition can occur. Ecosystems can also be altered by the change of hydrodynamics and by the acoustic noise generated by the machines.
Fig 6: Finistere area and locations of the six sites (google map).
Study case: Finistere area
Finistere area is an interesting study case because it is located at the far west of the Brittany peninsula and consequently receives the largest wave energy flux along the French coasts (see Fig. 2). This area, with a very ragged coast, moreover gathers many commercial ports, fishing ports and yachting ports. The area produces only a small part of the electricity it consumes and is located far from power plants. There is therefore a need for locally produced renewable energy. This issue is particularly important on islands. The production of electricity by wave energy will have seasonal variations: the wave energy flux is indeed larger in winter than in summer. Consumption peaks in winter due to the heating of buildings, but consumption in summer is also strong due to the arrival of tourists.
Six sites are selected (see figure 7) for a preliminary study of wave energy flux and capacity of integration of wave energy converters. The wave energy flux is expected to be in the range of 1 – 10 kW/m. The length of each breakwater exceeds 200 meters. The wave power along each structure is therefore estimated between 200 kW and 2 MW. Note that there exist much longer coastal structures like for example Cherbourg (France) with a length of 6 kilometres.
(1) Roscoff (300 meters)
(2) Molène (200 meters)
(3) Le Conquet (200 meters)
(4) Esquibien (300 meters)
(5) Saint-Guénolé (200 meters)
(6) Lesconil (200 meters)

Fig 7: Finistere area, the six coastal structures and their length (google map).
Wave power flux along the structure depends on local parameters: the bottom depth fronting the structure toe, the presence of capes, the direction of the waves, and the orientation of the coastal structure. See figure 8 for the statistics of wave directions measured by a wave buoy located at the Pierres Noires lighthouse. These measurements show that structures well oriented towards westerly waves should be chosen in priority. Consumption peaks often occur with low winter temperatures, which come with winds from east-north-east directions. Structures well oriented towards easterly waves could therefore also be interesting, even if their mean production is weak.
Fig 8: Wave measurements at the Pierres Noires Lighthouse.
Conclusion
Wave energy converters (WEC) in coastal structures can be considered as a land-based renewable energy. The expected energy can be compared with that of onshore wind farms, but not with offshore wind farms, whose number and power are much larger. As a land-based system, maintenance will be easy. Besides the energy production, the advantages of such systems are:

a "zero emission" port
industrial tourism
testing of WECs for future offshore installations.
Acknowledgement
This work is in progress in the frame of the national project EMACOP funded by the French Ministry of Ecology, Sustainable Development and Energy.
See also: Waves, Wave transformation, Groynes, Seawall, Seawalls and revetments, Coastal defense techniques, Wave energy converters, Shore protection, coast protection and sea defence methods, Overtopping resistant dikes
References

[1] Mei C.C. (1989) The applied dynamics of ocean surface waves. Advanced Series on Ocean Engineering. World Scientific Publishing Ltd.
[2] Vicinanza D., Cappietti L., Ferrante V. and Contestabile P. (2011) Estimation of the wave energy along the Italian offshore. Journal of Coastal Research, special issue 64, pp. 613-617.
[3] Mattarolo G., Benoit M., Lafon F. (2009) Wave energy resource off the French coasts: the ANEMOC database applied to the energy yield evaluation of Wave Energy. 10th European Wave and Tidal Energy Conference (EWTEC'2009), Uppsala (Sweden).
[4] Benoit M. and Lafon F. (2004) A nearshore wave atlas along the coasts of France based on the numerical modeling of wave climate over 25 years. 29th International Conference on Coastal Engineering (ICCE'2004), Lisbon (Portugal), pp. 714-726.
[5] De O. Falcão A.F. (2010) Wave energy utilization: A review of the technologies. Renewable and Sustainable Energy Reviews, Volume 14, Issue 3, April 2010, pp. 899-918.
[6] Babarit A. and Hals J. (2011) On the maximum and actual capture width ratio of wave energy converters. 11th European Wave and Tidal Energy Conference (EWTEC'2011), Southampton (UK). |
I am trying to understand the LSM algorithm applied to grayscale image segmentation.
There are essentially 2 things that are blocking me:
1) From my point of view, the level sets, i.e. "moving" the 3D surface (conceptually, visualizing the grayscale image (say 8 bits/pixel) as a surface whose height at each pixel $(x,y)$ is the intensity value of that pixel) through a fixed plane, or the other way around, is exactly the same as performing a thresholding from 0 to 255 (i.e. each "level set" would be all the pixels at threshold value 0, 1, 2, ..., 255)?
2) Let $f(x,y)$ be the image function (associating an intensity between 0 and 255 to each point $(x,y)$). In the LSM, the contour is defined as $$\Gamma = \{(x,y) \mid \phi(x,y)=0\}$$
However, it does not make sense to me (from my point of view, which is probably wrong, hence my question) that $$\phi(x,y)=0$$ since the surface has its own height at each point. It would make more sense to me to consider that the plane intersecting the surface moves from bottom to top, i.e. I can "accept" better the formulation given for the level sets:
$$\Gamma = \{(x,y) | \phi(x,y)=c\}$$
and for the example I use, $$c \in [0,255] $$
Question 2 is more like a ''conceptual'' problem, but question 1 is more fundamental, since I cannot seem to see any difference with a simple thresholding.
Rephrased:
Question 1) Can we consider (loosely speaking, since each $l_i$ is a set of points, not a matrix) that each level curve $l_i$ of a level set of an image $I$ is, in programmatic terms (Matlab syntax), `li = (I == i)`? Here $i$ is the height, a grayscale value between 1 and 255; $I$ is an $n \times m$ matrix (as Matlab uses matrices everywhere); and the operation `==` checks for every pixel in $I$ whether it is equal to $i$, the result being a binary image of 0s and 1s. This is called thresholding, hence my question: is LSM the same as thresholding with the operator `==`? (The single `=` is the assignment operator, of course.)
Question 2) Is it essentially (conceptually) the same to imagine moving the shape through a fixed plane, or moving a plane through a fixed 3D shape ?
What I was (incorrectly) trying to say is that we can visualize the level curves $l_i$ as the white circles on the thresholded image (not perfectly, since it is discretized and there may be rounding errors/noise etc., but suppose they are continuous). I do understand that the result of thresholding is a matrix, so strictly speaking it is not equal to a level curve. But "loosely" speaking, visualizing the level curves this way, do you agree?
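To make Question 1 concrete, here is a small sketch in Python/NumPy (rather than Matlab; the image values are made up) of what `li = (I == i)` computes: the binary mask of all pixels whose "height" is exactly $i$.

```python
import numpy as np

# Hypothetical 8-bit grayscale image (values 0..255).
I = np.array([[0, 7, 7],
              [7, 0, 3]], dtype=np.uint8)

def level_curve_mask(I, i):
    """Binary mask of {(x, y) : I(x, y) == i}, i.e. Matlab's li = (I == i)."""
    return I == i

mask = level_curve_mask(I, 7)
print(mask.astype(int))   # 1 exactly where the surface height equals i
```

By itself this is just per-value thresholding of the static image; in the level-set method the function $\phi$ is additionally evolved over time by a PDE, which plain thresholding does not capture, so the comparison only concerns the initial, static picture.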
Consider the Lagrangian for a simple harmonic oscillator\begin{equation}L (x,\dot{x}) = \frac{1}{2}m\dot{x}^2 - \frac{1}{2}kx^2\end{equation} Obviously we have \begin{align}\frac{\partial L}{\partial x} & = -kx\\ \frac{\partial L}{\partial \dot{x}} & = m\dot{x}\\\frac{d}{dt} \left(\frac{\partial L}{\partial \dot{x}}\right) & = m\ddot{x}\end{align}So this satisfies the Euler-Lagrange equation: since we know $F = ma$, the force of a spring should follow this law as well, giving us\begin{align}\frac{d}{dt} \left(\frac{\partial L}{\partial \dot{x}}\right) -\frac{\partial L}{\partial x} & =0 \\m\ddot{x} + kx &= 0\end{align}Now let's do the Hamiltonian version. Here's where I'm having a problem. We obtain the Hamiltonian via the Legendre transformation and get
\begin{equation} H(x,p) = \frac{1}{2}\frac{p^2}{m} + \frac{1}{2}kx^2 \end{equation} Now, according to what I understand, the equations of motion should be \begin{align} -\frac{\partial H}{\partial x} & = -kx = \dot{p}\\ \frac{\partial H}{\partial p} & = \frac{p}{m} = \dot{x} \end{align} But I don't see what this tells me about the relationship between Newton's law and Hooke's law. For the Lagrangian, when I plug in the relevant information, I get a relationship that explicitly shows for the action \begin{equation} S[x] = \int_a^bL(t,x,\dot{x})dt \end{equation} that it satisfies the EL equation as a necessary condition for $S[x]$ to have an extremum for the given function $x(t)$. My Question:
How do Hamilton's equations do this? When I look at the "equations of motion" for the Hamiltonian, I don't see how they tell me anything about the action. But they should! That's what their purpose is, just like their Lagrangian analogs do.
The only way I see to do it is to do something like \begin{align} \frac{d}{dt}\left(m\frac{\partial H}{\partial p}\right) & = m\frac{d}{dt}\dot{x}\\ &=m\ddot{x}\\ \end{align} and then organize them something like \begin{align} \frac{d}{dt} \left(m\frac{\partial H}{\partial p}\right) + \frac{\partial H}{\partial x} & =0 \\ m\ddot{x} + kx &= 0 \end{align} But I've never seen anyone do this in the introductory books, so I feel like I must be misunderstanding what the Hamiltonian signifies relative to the action. |
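As a numeric sanity check that the pair of first-order Hamilton equations carries the same content as the single second-order equation $m\ddot{x}+kx=0$, the sketch below (with assumed values $m=1$, $k=4$, $x_0=1$, $p_0=0$) integrates $\dot{x}=\partial H/\partial p = p/m$ and $\dot{p}=-\partial H/\partial x = -kx$ and compares the result with the analytic solution $x(t)=x_0\cos(\omega t)$, $\omega=\sqrt{k/m}$:

```python
import math

m, k = 1.0, 4.0           # assumed example values
omega = math.sqrt(k / m)  # analytic angular frequency = 2
x, p = 1.0, 0.0           # start at rest at x0 = 1

def dxdt(x, p):  # dH/dp = p/m
    return p / m

def dpdt(x, p):  # -dH/dx = -k x
    return -k * x

dt, T = 1e-4, 1.0
for _ in range(int(T / dt)):
    # one RK4 step for the coupled pair (x, p)
    k1x, k1p = dxdt(x, p), dpdt(x, p)
    k2x, k2p = dxdt(x + 0.5*dt*k1x, p + 0.5*dt*k1p), dpdt(x + 0.5*dt*k1x, p + 0.5*dt*k1p)
    k3x, k3p = dxdt(x + 0.5*dt*k2x, p + 0.5*dt*k2p), dpdt(x + 0.5*dt*k2x, p + 0.5*dt*k2p)
    k4x, k4p = dxdt(x + dt*k3x, p + dt*k3p), dpdt(x + dt*k3x, p + dt*k3p)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    p += dt * (k1p + 2*k2p + 2*k3p + k4p) / 6

print(x, math.cos(omega * T))   # numeric x(T) vs analytic cos(omega T)
```

So Hamilton's equations produce the same trajectory as the Newton/Hooke equation; they are the same dynamics rewritten in first-order form on phase space.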
The AM/FM modulation option enables the generation of phase-synchronous linear combinations of signals at up to three oscillator frequencies. It allows straightforward setup of direct (higher-order) sideband measurements for a variety of modulation schemes, including amplitude modulation (AM) and frequency modulation (FM). Unlike conventional double-demodulation approaches such as tandem demodulation, no multiple instruments are required, and the modulation frequency is not limited by the maximum available demodulation bandwidth.
UHF-MOD Key Features
UHF-MOD Upgrade and Compatibility
UHF-MOD Functional Diagram
\(s(t)=[A_c + A_m * \sin(\omega_m t) ] * \sin(\omega_c t)\)
In AM the amplitude of a carrier signal is periodically changing (modulated). In most applications this modulation is small and is therefore subject to noise. The purpose of recovering an AM signal with a lock-in amplifier is to take advantage of its steep filters and time integration to extract the signals of interest. As the AM spectrum consists of 3 frequency components, the UHF-MOD uses 3 demodulators to demodulate all 3 frequencies simultaneously providing a best-in-class signal recovery performance. Simultaneous amplitude modulation and demodulation is supported and can be entirely controlled from the graphical user interface of the UHF Instrument. The generation of AM signals is useful for stimulus generation in the application, but can also be utilized for system testing purposes.
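The three-component structure of the AM spectrum mentioned above is easy to verify numerically; a small sketch with made-up frequencies (not instrument settings):

```python
import numpy as np

fs, N = 1000, 1000            # assumed sample rate (Hz) and length: 1 Hz bins
t = np.arange(N) / fs
fc, fm = 100.0, 10.0          # assumed carrier / modulation frequencies (Hz)
Ac, Am = 1.0, 0.3             # carrier and modulation amplitudes

# s(t) = [A_c + A_m sin(w_m t)] sin(w_c t)
s = (Ac + Am * np.sin(2*np.pi*fm*t)) * np.sin(2*np.pi*fc*t)

spec = np.abs(np.fft.rfft(s)) / N
peaks = np.flatnonzero(spec > 1e-3)   # bins with non-negligible energy
print(peaks)   # the carrier bin and the two sideband bins fc +/- fm
```

The only significant bins are at $f_c$ and $f_c \pm f_m$, which is why three demodulators suffice to capture the full AM signal.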
\(s(t)=\sin[\omega_ct +\frac{\omega_p}{\omega_m} *\sin(\omega_mt) ]\)
In FM the frequency of a carrier signal is periodically changing (modulated). As the modulation is often a small signal and therefore subject to noise, the demodulation with a lock-in amplifier can be advantageous thanks to its configurable filtering. The UHF Instrument is capable of demodulating a signal of interest at several frequencies simultaneously, and the UHF-MOD option provides FM demodulation at the carrier frequency and a selectable pair of sidebands ($\omega_c \pm n \cdot \omega_m$). For narrow-band operation, the peak frequency deviation and the modulation frequency shall satisfy the relation $\omega_p/\omega_m \ll 1$, but the UHF-MOD option also operates above this limit with decreasing accuracy, i.e. $\omega_p/\omega_m < 2$.
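Similarly, for narrow-band FM ($\omega_p/\omega_m \ll 1$) the spectrum is dominated by the carrier and the first pair of sidebands at $\omega_c \pm \omega_m$; a sketch with made-up frequencies and modulation index:

```python
import numpy as np

fs, N = 1000, 1000
t = np.arange(N) / fs
fc, fm = 100.0, 10.0          # assumed carrier / modulation frequencies (Hz)
h_fm = 0.05                   # modulation index f_p / f_m << 1 (narrow-band)

# s(t) = sin(w_c t + (w_p/w_m) sin(w_m t))
s = np.sin(2*np.pi*fc*t + h_fm * np.sin(2*np.pi*fm*t))

spec = np.abs(np.fft.rfft(s)) / N
# keep bins above 1% of the carrier line: higher-order sidebands fall below
strong = np.flatnonzero(spec > 0.01 * spec[100])
print(strong)   # carrier plus first-order sidebands fc +/- fm
```

The higher-order sidebands at $f_c \pm 2 f_m, \ldots$ exist but are suppressed by higher powers of the small modulation index, which is the narrow-band approximation mentioned above.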
UHF-MOD Specifications
AM and FM Specifications

$\omega_c$, $f_c$: carrier frequency range, 6 µHz - 600 MHz
$\omega_m$, $f_m$: modulation frequency range, 6 µHz - 600 MHz
$\omega_s$, $f_s$: sideband frequency, $f_s = m \cdot f_c \pm n \cdot f_m$
$A_c$: amplitude of carrier signal, $A_c < V_{\mathrm{range}}$
$m, n$: harmonic analysis, $m, n = 1$ to $32$

AM Specifications

$h_{AM}$: AM modulation index, $h_{AM} = A_m / A_c$
$A_m$: amplitude of modulation signal, $A_c + A_m < V_{\mathrm{range}}$

FM Specifications

$h_{FM}$: FM modulation index, $h_{FM} = f_p / f_m$
$\omega_p$, $f_p$: peak frequency deviation (demodulation), $f_p < 2 \cdot f_m$
$\omega_p$, $f_p$: peak frequency deviation (modulation), $f_p < 12\,000 \cdot f_m$
Consider a linear forecasting problem where all shocks $\{\epsilon_i\}_1^n$ are independently distributed with $\epsilon_i\sim N(0,\sigma_i^2)$ for all $i$. Suppose you want to forecast $\theta = \sum_{i=1}^m a_i \epsilon_i$ for $m<n$, while observing signals $\mathbf x = \{x_j\}_1^p$ with each $x_j=\sum_{i=1}^n b_{ij} \epsilon_i$.
Let $\mathbb E[\theta|\mathbf x]$ denote the optimal forecast, that minimizes the mean squared error. I want to show that
$$\frac{\partial\operatorname{Cov}(\mathbb E[\theta|\mathbf x],\theta)}{\partial \sigma_i}<0, \quad\text{for }i\in\{m+1,\dots,n\}.$$
I think this is true because an increase in $\sigma_i$ for $i\in\{m+1,\dots,n\}$ makes the signals noisier and therefore the forecast less accurate.
By a similar argument it should also be possible to prove that
$$\frac{\partial\operatorname{Var}(\mathbb E[\theta|\mathbf x]|\epsilon_i)}{\partial \sigma_i}<0, \quad\text{for }i\in\{m+1,\dots,n\}.$$
For simple examples of this problem I've been able to prove this, but I haven't been able to generalize it. I suspect these results might exist, so just a reference of where I could find them would be of great help. |
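Meanwhile, the claim is easy to probe numerically. In this linear-Gaussian setting $\mathbb E[\theta|\mathbf x]=\Sigma_{\theta x}\Sigma_{x}^{-1}\mathbf x$, so $\operatorname{Cov}(\mathbb E[\theta|\mathbf x],\theta)=\Sigma_{\theta x}\Sigma_{x}^{-1}\Sigma_{x\theta}$. The sketch below (made-up coefficients $a_i$, $b_{ij}$, with $n=3$, $m=2$, $p=2$) evaluates this covariance as $\sigma_3$ grows:

```python
import numpy as np

a = np.array([1.0, 0.5, 0.0])        # theta = 1*e1 + 0.5*e2 (m = 2, n = 3)
B = np.array([[1.0, 0.3],            # b_{ij}: loading of shock i on signal j
              [0.4, 1.0],
              [0.7, 0.6]])

def cov_forecast_theta(sigma3):
    D = np.diag([1.0, 1.0, sigma3**2])      # shock variances
    Sx  = B.T @ D @ B                       # Cov(x)
    Stx = a @ D @ B                         # Cov(theta, x)
    return Stx @ np.linalg.solve(Sx, Stx)   # Cov(E[theta|x], theta)

vals = [cov_forecast_theta(s) for s in (0.5, 1.0, 2.0, 4.0)]
print(vals)   # strictly decreasing in sigma_3
```

The covariance does fall monotonically here. One route to a proof: writing $\Sigma_x = \Sigma_0 + \sigma_i^2\, b_i b_i^\top$ and applying the Sherman-Morrison identity shows $\Sigma_{\theta x}\Sigma_x^{-1}\Sigma_{x\theta}$ is decreasing in $\sigma_i^2$, using that $\Sigma_{\theta x}$ does not depend on $\sigma_i$ when $a_i=0$.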
I recently came across this in a textbook (NCERT class 12 , chapter: wave optics , pg:367 , example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of interference pattern is$$\frac{s}{S} < \frac{\lambda}{d}$$Where $s$ is the size of ...
The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial.
This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ...
I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component.Vertex:$$ie(P_A+P_B)^{\mu}$$External Boson: $1$Photon: $\epsilon_{\mu}$Multiplying these will give the inv...
As I am now studying on the history of discovery of electricity so I am searching on each scientists on Google but I am not getting a good answers on some scientists.So I want to ask you to provide a good app for studying on the history of scientists?
I am working on correlation in quantum systems.Consider for an arbitrary finite dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$ under the assumption which fulfilled continuity.My question is that would it be possib...
@EmilioPisanty Sup. I finished Part I of Q is for Quantum. I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc.
Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/…
You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring, 3 hours ago
So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball.
@ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why?
@AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially...
@vzn for physics/simulation, you may use Blender, that is very accurate. If you want to experiment with lenses and optics, then you may use Mistibushi Renderer; those are made for accurate scientific purposes.
@RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians *, but I haven't read it myself
@AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that?
@ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions...
When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former.
@RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that
And that is what I mean by "the basics".
Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers
@RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... Kurzgesagt optimistic nihilismyoutube.com/watch?v=MBRqu0YOH14
The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one handed): One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for...
@vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world.
@Slereah It's like the brain has a limited capacity on math skills it can store.
@NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life"
I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money
It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge
Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction? I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it |
The following matrix arises from a special kind of balanced signed graph of order $n$. In the matrix, $n_1,n_2,\dots,n_k$ are positive integers which satisfy $\sum n_i=n$. Prove that this matrix has two zero eigenvalues if and only if $k=6r$ for some positive integer $r$.
\begin{equation*} T_\lambda=\begin{bmatrix} -n_1 & n_2 & 0 &.&.&0 & n_k \\ n_1& -n_2 &n_3 &0&.&. & 0 \\ 0& n_2 & -n_3 & n_4 & 0& . & 0 \\ .&0 & n_3 &-n_4 &n_5&0&. \\ .&. &. & . &.&.&. \\ .&. &. &. &.&.&. \\ 0& 0 &. &. & n_{k-2}& -n_{k-1}& n_{k} \\ n_1 & 0 & .&.& 0& n_{k-1}& -n_k \end{bmatrix} \end{equation*}
I have calculated the row reduced echelon form of the above matrix with Mathematica. It has two zero eigenvalues for $k = 6r$. I would like an analytic proof.
Thank you for your help in advance. |
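Not a proof, but a numeric check of the claimed pattern, assuming the displayed entries continue as $T = (A - I_k)\,\mathrm{diag}(n_1,\dots,n_k)$ with $A$ the adjacency matrix of a $k$-cycle (this matches every entry shown: column $j$ carries $n_j$, with $-n_j$ on the diagonal and $n_j$ on the cyclic neighbours). Since $\mathrm{diag}(n_i)$ is invertible, the nullity of $T$ equals that of $A - I_k$, whose eigenvalues are $2\cos(2\pi m/k)-1$; these vanish exactly when $m/k \in \{1/6, 5/6\}$, i.e. for two values of $m$ precisely when $6 \mid k$.

```python
import numpy as np

def T(ns):
    k = len(ns)
    A = np.zeros((k, k))
    for i in range(k):                      # k-cycle adjacency matrix
        A[i, (i + 1) % k] = 1.0
        A[i, (i - 1) % k] = 1.0
    return (A - np.eye(k)) @ np.diag(ns)

rng = np.random.default_rng(0)
results = {}
for k in (5, 6, 7, 12):
    ns = rng.integers(1, 10, size=k).astype(float)   # random positive n_i
    results[k] = k - np.linalg.matrix_rank(T(ns))    # nullity of T
print(results)   # nullity 2 exactly when 6 divides k
```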
Bill Dubuque raised an excellent point here: Coping with *abstract* duplicate questions.
I suggest we use this question as a list of the generalized questions we create.
I suggest we categorize these abstract duplicates based on topic (please edit the question). Also please feel free to suggest a better way to list these.
Also, as per Jeff's recommendation, please tag these questions as faq.
Laws of signs (minus times minus is plus): Why is negative times negative = positive?
Order of operations in arithmetic: What is the standard interpretation of order of operations for the basic arithmetic operations?
Solving equations with multiple absolute values: What is the best way to solve an equation involving multiple absolute values?
Extraneous solutions to equations with a square root: Is there a name for this strange solution to a quadratic equation involving a square root?
Principal $n$-th roots:
$0! = 1$: Prove $0! = 1$ from first principles
Partial fraction decomposition of rational functions: Converting multiplying fractions to sum of fractions
Highest power of a prime $p$ dividing $N!$, number of zeros at the end of $N!$ and related questions: Highest power of a prime $p$ dividing $N!$
Solving $x^x=y$ for $x$: Is $x^x=y$ solvable for $x$?
What is the value of $0^0$? Zero to the zero power – is $0^0=1$?
Integrating polynomial and rational expressions of $\sin x$ and $\cos x$: Evaluating $\int P(\sin x, \cos x) \text{d}x$
Integration using partial fractions: Integration by partial fractions; how and why does it work?
Intuitive meaning of Euler's constant $e$: Intuitive Understanding of the constant "$e$"
Evaluating limits of the form $\lim_{x\to \infty} P(x)^{1/n}-x$ where $P(x)=x^n+a_{n-1}x^{n-1}+\cdots+a_0$ is a monic polynomial: Limits: How to evaluate $\lim\limits_{x\rightarrow \infty}\sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x$
Finding the limit of rational functions at infinity: Finding the limit of $\frac{Q(n)}{P(n)}$ where $Q,P$ are polynomials
Divergence of the harmonic series: Why does the series $\sum_{n=1}^\infty\frac1n$ not converge?
Universal Chord Theorem: Universal Chord Theorem
Nested radical series: $\sqrt{c+\sqrt{c+\sqrt{c+\cdots}}}$, or the limit of the sequence $x_{n+1} = \sqrt{c+x_n}$
Derivative of a function expressed as $f(x)^{g(x)}$: Differentiation of $x^{\sqrt{x}}$, how?
Removable discontiuity: How can a function with a hole (removable discontinuity) equal a function with no hole?
Calculus Meets Geometry
Volume of intersection between cylinders: Two cylinders, same radius, orthogonal. This post is not particularly good but there are many existing duplicate-links. Note that this can be done without calculus.
Two cylinders variation: different radii (orthogonal), non-orthogonal (same radius), and elliptic cylinders (essentially unsolved).
Three cylinders: same radius and orthogonal.
Number of permutations of $n$ where no number $i$ is in position $i$
How many equivalence relations on a set with 4 elements.
How many ways can N elements be partitioned into subsets of size K?
Seating arrangements of four men and three women around a circular table
How to use stars and bars?
How many different spanning trees of $K_n \setminus e$ are there? (or Spanning Trees of the Complete Graph minus an edge)
Definition of Matrix Multiplication: (Maybe there should just be one canonical one?)
On the determinant:
Determinants of special matrices:
Eigenvectors and Eigenvalues
Gram-Schmidt Orthogonalization
Prove that A + I is invertible if A is nilpotent: A generalization for non-commutative rings
Modular exponentiation: How do I compute $a^b\,\bmod c$ by hand?
Solving the congruence $x^2\equiv1\pmod n$: Number of solutions of $x^2=1$ in $\mathbb{Z}/n\mathbb{Z}$
Can $\sqrt{n} + \sqrt{m}$ be rational if neither $n,m$ are perfect squares?
What is the period of the decimal expansion of $\frac mn$?
Geometric Series: Value of $\sum\limits_n x^n$
Summing series of the form $\sum_n (n+1) x^n$: How can I evaluate $\sum_{n=0}^\infty(n+1)x^n$?
Limit of exponential sequence and $n$ factorial: Prove that $\lim \limits_{n \to \infty} \frac{x^n}{n!} = 0$, $x \in \Bbb R$.
There are different sizes of infinity: What Does it Really Mean to Have Different Kinds of Infinities?
Solving triangles: Solving Triangles (finding missing sides/angles given 3 sides/angles)
(Confusing) notation for inverse functions ($\sin^{-1}$ vs. $\arcsin$): $\arcsin$ written as $\sin^{-1}(x)$ |
It is worth reminding yourself what "a frame of reference" means: an agreement on how to measure when and where some event happened. For the purposes of special relativity we generally only talk about inertial frames which have some constant velocity (less that $c$) with respect to one another.
If you have the coordinates $(x,y,z,t)$ you know how far over, up, and forward you had to go to get from the origin to the location of that event and how long after an arbitrary starting time that event occurred.
Now, you apply the transformation and get out another set of coordinates $(x',y',z',t')$. Those tell you how to find the same event starting from a different origin.
The two sets of coordinates refer to the same event.1 That is important.
When you start talking about length contraction or time dilation you are now talking about the distance or duration between two events; call these $\Delta L = \sqrt{(\Delta x)^2+ (\Delta y)^2+ (\Delta z)^2}$ and $\Delta t$ and comparing those values to the distance or duration between the same two events as measured using a different set of measurement conventions ($\Delta L'$ and $\Delta t'$).
Your job is to figure out what the events that mark the beginning and the end of the interval are, and then construct the right deltas in both frames.
Pro tip
A useful fact is that measurements made in every single frame will measure2
$$s^2 = (c\Delta t)^2 - (\Delta L)^2 \tag{interval}$$
to have the same value.3 That gives
$$(c\Delta t)^2 - (\Delta L)^2 = (c\Delta t')^2 - (\Delta L')^2 \,. \tag{*}$$
If you choose a pair of events for which $\Delta L = 0$ in the unprimed frame (i.e. that occur in the same spot), then you must have $(\Delta t')^2 > (\Delta t)^2$ because $(\Delta L')^2 > 0$ (assuming that the frames are not co-moving). This is a way to reassure yourself that you are applying the factor of $\gamma$ in the correct sense.
To use the same kind of reasoning for length contraction take $\Delta t' = 0$ (the length in the primed frame is measured at a single moment in that frame), then re-arrange (*) to get $(\Delta L)^2 = (\Delta L')^2 + (c\Delta t)^2$ (with $\Delta t \ne 0$ because of the relativity of simultaneity), leading to $(\Delta L)^2 > (\Delta L')^2$.
1 Because a single event is given two different names this kind of transformation is sometimes referred to as an alias transformation.
2 Or possibly $s^2 = (\Delta L)^2 - (c\Delta t)^2$ depending on the sign convention preferred by the source you are using.
3 Checking this for yourself by direct application of the Lorentz transformation is algebraically tedious, but straightforward. Recognizing intuitively that it is a necessary consequence of the postulated constancy of the speed of light may be a sign that you are starting to get the hang of relativity.
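A quick numeric check of footnote 3 (with $c = 1$ and an assumed boost speed $v = 0.6$): transforming two events with the standard Lorentz boost leaves the interval unchanged.

```python
import math

c = 1.0                                # work in units with c = 1
v = 0.6                                # assumed boost speed
g = 1.0 / math.sqrt(1 - v**2 / c**2)   # gamma = 1.25 here

def boost(t, x):
    """Standard Lorentz boost along x."""
    return g * (t - v * x / c**2), g * (x - v * t)

# Two arbitrary events (t, x); y and z are unaffected by an x-boost.
e1, e2 = (0.0, 0.0), (3.0, 2.0)
dt, dx = e2[0] - e1[0], e2[1] - e1[1]

b1, b2 = boost(*e1), boost(*e2)
dt2, dx2 = b2[0] - b1[0], b2[1] - b1[1]

s2_unprimed = (c * dt)**2 - dx**2
s2_primed = (c * dt2)**2 - dx2**2
print(s2_unprimed, s2_primed)   # equal: the interval is invariant
```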
I'm reading this chapter about EM (9.3.1) of the book "Pattern Recognition and Machine Learning".
I understand the basic EM algorithm for GMM, but I'm having some problems understanding the probabilistic interpretation of the E step.
Most of the following formulas make sense to me except 9.39. Can someone please explain it for me? Thanks a lot.
It seems that it is in the form of $$E[z_{nk}]=\frac{\sum_{z_{nk}}z_{nk}f(n,k)}{normalizer}$$ where the numerator is a summation over two possible states of $z_{nk}$ (0 and 1), so it becomes $\pi_kN(x_n|\mu_k,\Sigma_k)$ in the second step, am I right so far?
If so, what are we summing over in the denominator? The 0s and 1s of all possible $j$s (but the result doesn't quite match)? Why is $f(n,k)=[\pi_kN(x_n|\mu_k,\Sigma_k)]^{z_{nk}}$ used as an unnormalized probability?
It would make more sense if the denominator is the unnormalized zeroth moment and the numerator is the unnormalized first moment, did I understand it correctly? |
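For what it's worth, the E-step expression works out to the responsibility $\gamma(z_{nk})=\pi_k\mathcal N(x_n|\mu_k,\Sigma_k)\big/\sum_j \pi_j\mathcal N(x_n|\mu_j,\Sigma_j)$, where the denominator sums the same unnormalized quantity over all components $j$. A 1-D sketch with made-up mixture parameters:

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Made-up 1-D mixture with two components.
pis = [0.4, 0.6]
mus = [0.0, 3.0]
vars_ = [1.0, 2.0]

def responsibilities(x):
    """E-step: gamma_k = pi_k N(x|mu_k,var_k) / sum_j pi_j N(x|mu_j,var_j)."""
    unnorm = [p * normal_pdf(x, m, v) for p, m, v in zip(pis, mus, vars_)]
    z = sum(unnorm)            # the normalizer in the denominator
    return [u / z for u in unnorm]

gamma = responsibilities(1.0)
print(gamma)   # the responsibilities sum to 1
```

Here x = 1.0 is closer to the first component, so the first responsibility is the larger of the two.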
In mathematics, Vieta's formulas relate the coefficients of a polynomial to sums and products of its roots. They are named after the scientist François Viète and are frequently used in algebra. Another name for Vieta's formulas is Viète's Law, where a set of equations together relate the roots and coefficients of a polynomial.
Specifically, Vieta's formulas express the coefficients of a polynomial in terms of sums and products of its roots, taken in groups. Here is the basic Vieta's formula in general form, for a polynomial of degree n:
\[\large P\left(x\right)=a_{n}x^{n}+a_{n-1}x^{n-1}+….+a_{1}x+a_{0}\]
Equivalently stated, the $(n-k)$-th coefficient $a_{n-k}$ is related to a signed sum of all possible subproducts of roots, taken $k$ at a time:
\[\large \sum_{1\,\leq\,i_{1}\,<\,i_{2}\,….i_{k}\,\leq\,n}r_{i1}\,r_{i2}….\,r_{ik}=\left(-1\right)^{k}\frac{a_{n-k}}{a_{n}}\]
for k = 1, 2, …, n (where we wrote the indices $i_{k}$ in increasing order to ensure each sub product of roots is used exactly once)
This is necessary for students to learn when calculating the multiple roots of a polynomial equation. As discussed earlier, Vieta's formulas were discovered by the French mathematician François Viète, relating the sums and products of the roots of a polynomial to its coefficients. The simplest application of Vieta's formulas is to quadratics. They are helpful for solving complicated algebraic polynomials with multiple roots, where the roots are not easy to derive. For example, Vieta's formulas can serve as a shortcut for quickly finding the sum or product of the roots.
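A concrete numeric check (with the arbitrary cubic $2x^3-3x^2-11x+6$) that the $k$-th elementary symmetric polynomial of the roots equals $(-1)^k\, a_{n-k}/a_n$:

```python
import numpy as np
from itertools import combinations

coeffs = [2.0, -3.0, -11.0, 6.0]     # 2x^3 - 3x^2 - 11x + 6 (roots 3, 1/2, -2)
a3, a2, a1, a0 = coeffs
roots = np.roots(coeffs)

def esym(rs, k):
    """k-th elementary symmetric polynomial: sum of k-fold root products."""
    return sum(np.prod(c) for c in combinations(rs, k))

vieta = {1: -a2 / a3, 2: a1 / a3, 3: -a0 / a3}   # (-1)^k a_{n-k} / a_n
for k, target in vieta.items():
    print(k, esym(roots, k).real, target)   # each pair agrees
```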
Let $x[n]$ be a discrete time signal with DFT given by $X(f)=\sum_n x[n]e^{-2\pi inf}$ supported on $[-1/2M,1/2M]$ with $f\in[-1/2,1/2]$.
I can then down-sample to get $y[n]:=x[nM]$. Then, let
$$\widetilde{x}[n]=\begin{cases}My[n/M],& M|n,\\0,&\text{otherwise}.\end{cases}$$
Then its DFT is given by
$$ \begin{aligned}\widetilde{X}(f)&=\sum_{n\in\mathbb{Z}}\widetilde{x}[n]e^{-2\pi inf}\\&=M\sum_{\substack{n\in\mathbb{Z}\\ M\mid n}}y[n/M]e^{-2\pi inf}\\&=M\sum_{\substack{n\in\mathbb{Z}\\ M\mid n}}x[n]e^{-2\pi inf}.\end{aligned} $$
Now, let $\hat{x}[n]=(\tilde{x}\ast h)[n]$ be the discrete Hilbert transform of $\tilde{x}$, with $h$ an "almost" ideal low-pass filter with cut-off frequency $f_c=1/M$.
My question is, how do I then apply the Shannon interpolation formula to reconstruct $x(t)$?
Intuitively, I would guess that it would be something along the lines of
$$x(t)=\left(\sum_{n\in\mathbb{Z}}x[n]\cdot\delta(t-n\Delta t)\right)\ast H(f),$$
with
$$H(f)=\begin{cases}\frac{DTFT\{\hat{x}[\cdot]\}(f)}{M\cdot X(f)},&\text{if }M|n, \\ 0,&\text{otherwise}. \end{cases}$$
Am I correct? |
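Not an answer, just a discrete sanity check of the first half of the pipeline (a made-up periodic signal so the DFT is exact, and the ideal cutoff $1/2M$ in place of the "almost ideal" filter): zero-stuffing by $M$ and then keeping only the DFT bins with $|f| < 1/2M$ recovers the original samples exactly, as the band-limitation assumption promises.

```python
import numpy as np

N, M = 64, 4
n = np.arange(N)
# Band-limited test signal: bins 3 and 5, both below N/(2M) = 8.
x = np.cos(2*np.pi*3*n/N) + 0.5*np.sin(2*np.pi*5*n/N)

y = x[::M]                  # downsample: y[n] = x[nM]
xt = np.zeros(N)
xt[::M] = M * y             # zero-stuff with gain M

# Ideal low-pass: keep DFT bins with |f| < 1/(2M) cycles/sample.
X = np.fft.fft(xt)
f = np.fft.fftfreq(N, d=1.0)
X[np.abs(f) >= 1 / (2*M)] = 0.0
x_rec = np.fft.ifft(X).real

print(np.max(np.abs(x_rec - x)))   # machine-precision recovery
```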
(a) If $AB=B$, then $B$ is the identity matrix.
(b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions.
(c) If $A$ is invertible, then $ABA^{-1}=B$.
(d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix.
(e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
(a) Suppose that a $3\times 3$ system of linear equations is inconsistent. Is the coefficient matrix of the system nonsingular?
(b) Suppose that a $3\times 3$ homogeneous system of linear equations has a solution $x_1=0, x_2=-3, x_3=5$. Is the coefficient matrix of the system nonsingular?
(c) Let $A$ be a $4\times 4$ matrix and let\[\mathbf{v}=\begin{bmatrix}1 \\2 \\3 \\4\end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix}4 \\3 \\2 \\1\end{bmatrix}.\]Suppose that we have $A\mathbf{v}=A\mathbf{w}$. Is the matrix $A$ nonsingular?
Suppose that $B=\{\mathbf{v}_1, \mathbf{v}_2\}$ is a basis for $\R^2$. Let $S:=[\mathbf{v}_1, \mathbf{v}_2]$. Note that as the column vectors of $S$ are linearly independent, the matrix $S$ is invertible.
Prove that for each vector $\mathbf{v} \in V$, the vector $S^{-1}\mathbf{v}$ is the coordinate vector of $\mathbf{v}$ with respect to the basis $B$.
Prove that the matrix\[A=\begin{bmatrix}0 & 1\\-1& 0\end{bmatrix}\]is diagonalizable. Prove, however, that $A$ cannot be diagonalized by a real nonsingular matrix. That is, there is no real nonsingular matrix $S$ such that $S^{-1}AS$ is a diagonal matrix.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
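A quick numeric check of Problem 7 (not a substitute for the hand computation): since $\mathbf v$ turns out to be an eigenvector, part (b) follows from $A^3\mathbf v = \lambda^3 \mathbf v$, with no need to form $A^3$.

```python
import numpy as np

A = np.array([[-3.0, -4.0],
              [ 8.0,  9.0]])
v = np.array([-1.0, 2.0])

Av = A @ v
lam = Av[0] / v[0]                 # valid because Av is parallel to v
assert np.allclose(Av, lam * v)    # (a): A v = lambda v

print(lam, lam**3 * v)             # (b): A^3 v = lambda^3 v
```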
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
An $n\times n$ matrix $A$ is called nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$. Using the definition of a nonsingular matrix, prove the following statements.
(a) If $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
(b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then:
The matrix $B$ is nonsingular.
The matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.)
Let $A$ be an $n\times (n-1)$ matrix and let $\mathbf{b}$ be an $(n-1)$-dimensional vector. Then the product $A\mathbf{b}$ is an $n$-dimensional vector. Set the $n\times n$ matrix $B=[A_1, A_2, \dots, A_{n-1}, A\mathbf{b}]$, where $A_i$ is the $i$-th column vector of $A$.
Prove that $B$ is a singular matrix for any choice of $\mathbf{b}$.
For each of the following $3\times 3$ matrices $A$, determine whether $A$ is invertible and find the inverse $A^{-1}$ if exists by computing the augmented matrix $[A|I]$, where $I$ is the $3\times 3$ identity matrix. |
I'm trying to design a digital differentiator FIR filter. It includes a lowpass characteristic, so that above the cutoff frequency the gain is very low. I obtain the coefficients from a linear program minimizing the Chebyshev error between the desired and actual frequency responses.
It works really well, but I cannot place the cutoff frequency below roughly $0.1\pi$ rad/sample. Designs with small cutoff frequencies still have a steeply rising amplitude response at low frequencies and thus need a broad transition band. The picture shows such a design with its very broad transition band; the red curve is the desired and the blue the obtained frequency response. I've weighted the bands accordingly. Note that I am designing neither a bandpass nor a lowpass filter but a differentiator, hence the linear slope at low frequencies.
There are limits to the possible lowpass frequency, correct? How can I make the cutoff even smaller, or even better: why is this degradation happening?
I know that the frequency response in my formulation has the form $$ H(e^{j\omega}) = 2\sum_{k=0}^M j \,h(k)\, \sin(k \omega) $$ where $M = (N-1)/2$ for a filter of order $N$. Thus the desired shape can be traced more closely with longer filters, but in practice the gain from doing so is very small.
I have also read that with a differentiated and then sampled Blackman window (without control over the cutoff frequency) one obtains a cutoff of around $\omega_C \approx 0.005$, while I struggle with $0.1$... I want to know why exactly.
This document suggests a method for a first-order differentiator, in which "only" the derivatives at $\omega = 0$ are matched to the ideal response. It results in an earlier drop-off. However, as I understand it, this cannot be achieved with higher-order differentiators, since a second-order response is a quadratic function and I am not sure that an (essentially Taylor) approximation in the derivatives is sufficient for that, let alone for even higher orders.
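For what it's worth, the antisymmetric formulation above can be prototyped with a plain least-squares fit instead of the Chebyshev LP (numpy assumed; the band edges and half-order below are illustrative choices, not taken from the question's design):

```python
import numpy as np

# Least-squares design of a band-limited differentiator using the
# antisymmetric form H(e^{jw}) = 2j * sum_{k=1..M} h[k] sin(k*w).
M = 50                          # half-order; filter length N = 2*M + 1
w_pass = 0.10 * np.pi           # passband edge: ideal response D(w) = w below this
w_stop = 0.15 * np.pi           # stopband edge: D(w) = 0 above this

# Dense grid over pass- and stopbands; the transition band is left free.
w = np.concatenate([np.linspace(0.0, w_pass, 400),
                    np.linspace(w_stop, np.pi, 800)])
D = np.where(w <= w_pass, w, 0.0)

# Real amplitude response matrix: column k-1 holds 2*sin(k*w).
A = 2.0 * np.sin(np.outer(w, np.arange(1, M + 1)))
h, *_ = np.linalg.lstsq(A, D, rcond=None)

err = np.max(np.abs(A @ h - D))  # worst-case fit error on the grid
```

Sweeping `M` while watching `err` exposes the trade-off described in the question: shrinking `w_pass` or the transition band forces the filter length up quickly.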
Let $(A,\mathcal{F},\mu)$ be a finite measure space and $\{f_n\}$ a sequence of finite real measurable functions such that $f_n\rightarrow f$ a.e. We say $f_n\rightarrow f$ almost uniformly if for every $\epsilon>0$ there is $E\subseteq A$ such that $f_n \rightarrow f$ uniformly on $E^c$ and $\mu(E)<\epsilon$. I want to show that almost uniform convergence implies convergence in measure. For this, suppose not. Then $$\exists \eta,\epsilon>0:\forall N\in \mathbb{N}:\exists n>N:\mu(\mid f_n-f\mid\geq\epsilon)\geq \eta, $$ i.e., this holds for infinitely many $n\in \mathbb{N}$. From the definition of almost uniform convergence, $\exists E:\mu(E)<\eta$ and $f_n\rightarrow f$ uniformly on $E^c$. Contradiction.
Question
It seems intuitive to me. But how to deduce this contradiction precisely? I know that if $x\in E$, then it must satisfy the negation of uniform convergence, which is $$\exists \epsilon>0:\forall N\in \mathbb{N}:\exists n>N:\mid f_n-f\mid\geq\epsilon.$$ Now, $x$ may not be in $\{f_n \text{ does not converge in measure to } f \}$ if $\mu(\mid f_n(x)-f(x)\mid\geq\epsilon)<\eta$. So I conclude that $$\{f_n \text{ does not converge in measure to } f \}\subseteq E,$$ implying that $\eta>\mu(E)\geq \eta$; a contradiction.
My argument seems right but also very inefficient. How could you express this idea as clean as possible?
Thanks! |
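For what it's worth, the same idea can be written directly, without contradiction (one possible cleanup, not the only one):

```latex
Fix $\epsilon>0$ and let $\eta>0$ be arbitrary. By almost uniform convergence,
choose $E$ with $\mu(E)<\eta$ such that $f_n\to f$ uniformly on $E^c$. Then
there is an $N$ with
\[
  \sup_{x\in E^c}\,|f_n(x)-f(x)|<\epsilon \qquad (n>N),
\]
so that
\[
  \{\,x : |f_n(x)-f(x)|\ge\epsilon\,\}\subseteq E
  \quad\Longrightarrow\quad
  \mu\bigl(|f_n-f|\ge\epsilon\bigr)\le\mu(E)<\eta \qquad (n>N).
\]
Since $\eta>0$ was arbitrary, $\mu(|f_n-f|\ge\epsilon)\to 0$; that is,
$f_n\to f$ in measure.
```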
Let $H^{(j)}$ and $G^{(j)}$ be Banach spaces for $j\in\{1,\dots,n\}$. Call norms $\|\cdot\|_{H}$ and $\|\cdot\|_{G}$ on the algebraic tensor products $H:=\bigotimes_{j=1}^n H^{(j)}$ and $G:=\bigotimes_{j=1}^n G^{(j)}$
uniform if the operator norm satisfies$$\|\bigotimes_{j=1}^n A^{(j)}\|_{H\to G}=\prod_{j=1}^n \|A^{(j)}\|_{H^{(j)}\to G^{(j)}}.$$
Is there an extensive list of pairs of spaces with uniform crossnorms? (Preferably with a focus on function spaces; Lebesgue, Sobolev, Hölder would already be great)
Of course, the results for tensor products of well-known spaces depend on the norms that we equip these tensor products with. For example, it would be good to know if $C(\Omega_1)\otimes C(\Omega_2)$ equipped with the $C(\Omega_1\times\Omega_2)$ norm and $H^1(\Omega_3)\otimes H^1(\Omega_4)$ equipped with the $H^{1}_{\text{mix}}(\Omega_3\times\Omega_4)$ norm are uniform, and if these norms actually turn the algebraic tensor products into the spaces $C(\Omega_1\times\Omega_2)$ and $H^{1}_{\text{mix}}(\Omega_3\times\Omega_4)$, respectively (by closure).
Posting any specific results instead of a reference would be appreciated too; I will keep track in the list below:
If $H^{(j)}$ and $G^{(j)}$ are Hilbert spaces, then equipping $H$ and $G$ with the induced Hilbert space structure yields uniform crossnorms. The induced scalar product on $H$ is the unique bilinear extension of $$\langle \otimes_{j=1}^n f^{(j)}_1 ,\otimes_{j=1}^n f^{(j)}_2\rangle_{H}=\prod_{j=1}^n \langle f^{(j)}_1,f^{(j)}_2\rangle_{H^{(j)}}$$ (Proposition 4.127 in W. Hackbusch, "Tensor spaces and numerical tensor calculus". Springer, 2012)
Equipping $H$ with the projective norm and $G$ with any crossnorm (that is, a norm that is multiplicative w.r.t the tensor product) yields uniform crossnorms. (Have no reference)
Equipping $G$ with the injective norm and $H$ with any crossnorm yields uniform crossnorms (Have no reference) |
Laws of Motion: Dynamics of Circular Motion

- The force required to move a particle in a circular path is called the centripetal force: $F = \frac{mv^{2}}{r} = mr\omega^{2}$. Centripetal force always acts towards the centre and is a real force.
- Centripetal acceleration: $a_c = \frac{v^{2}}{r} = r\omega^{2}$.
- The force imagined to act away from the centre in a non-inertial frame is the centrifugal force.
- A body is in translational equilibrium when the resultant of all the forces acting on it keeps it in equilibrium; the velocity of the centre of mass is $0$ in translational equilibrium.
- Torque is defined as the product of the force and the perpendicular distance: $\tau = F \times d_{\perp}$.
- When the resultant torque acting on a body is zero (keeping the particle at rest), the body is said to be in rotational equilibrium.
- Limiting frictional force when a car is moving in a circle: $F_L = \mu m g$.
- Maximum safe velocity for a circular turn: $V = \sqrt{\mu g r}$.
- Banking angle of a road without friction: $\tan\theta = \frac{v^{2}}{gr}$; maximum velocity on such a banked road: $v = \sqrt{g r \tan \theta}$.
- Conical pendulum: angular velocity $\omega = \sqrt{\frac{g \tan \theta}{r}}$, velocity $V = \sqrt{g r \tan \theta}$, time period $T = 2 \pi \sqrt{\frac{L \cos \theta}{g}}$.
- Net force on a simple pendulum bob: $F = m \sqrt{g^{2} \sin^{2} \alpha + \frac{V^{4}}{L^{2}}}$.
- Angular velocity of a ball in a bowl: $\omega = \sqrt{\frac{g}{R \cos \alpha}}$.
1. Bending of a cyclist: the angle $\theta$ of bending from the vertical position is given by $\tan \theta = \frac{v^{2}}{rg}$.
2. Motion of a car on a level road: the maximum velocity with which a car can take a circular path of radius $r$ without slipping is given by $\nu_{max} = \sqrt{\mu_{s}rg}$.
3. Motion of a car on a banked circular road: the maximum permissible speed to avoid slipping is $\nu_{max} = \left[\frac{rg(\mu_{s} + \tan \theta)}{1 - \mu_{s} \tan \theta}\right]^{1/2}$.
4. If the banked road is perfectly smooth, then $\nu_{0} = \sqrt{rg \tan \theta}$.
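A couple of these relations are easy to sanity-check numerically (plain Python; the function names are mine):

```python
import math

def max_speed_flat(mu, r, g=9.8):
    """Maximum safe speed on an unbanked circular turn: v = sqrt(mu * g * r)."""
    return math.sqrt(mu * g * r)

def banking_angle_deg(v, r, g=9.8):
    """Ideal banking angle without friction: tan(theta) = v^2 / (g * r)."""
    return math.degrees(math.atan(v ** 2 / (g * r)))

# Example: mu = 0.5, r = 50 m gives v_max = sqrt(245) m/s, and a car at
# v = 14 m/s on a curve of radius 20 m needs a 45-degree bank (v^2 = g*r).
```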
1) The vector derivative, $\partial$
In geometric calculus, one deals not just in vector fields but in multivector fields--fields that associate oriented planes, volumes, or other kinds of primitives with each point. These multivector fields are differentiated by an operator denoted $\partial$. It can act on multivector fields in either of two ways. On a multivector field $A(r)$, it can act as $\partial \wedge A$, which is the familiar exterior derivative. This increases the grade of each component of the field by one--vectors become planes, planes become volumes, and so on.
But there is another derivative, denoted $\partial \cdot A$, which goes by various names: interior derivative, codifferential, and so on. Both notions of differentiation arise from the same operator $\partial$. It is, in my opinion, foolish that differential forms treats the $\partial \cdot$ operation as somehow only expressible in terms of $\partial \wedge$. To me (and in GC) they are on an equal footing with one another.
2) The covariant derivative, $\nabla$
Now, introduce a global rotation field called $\underline R(a; r)$, which acts linearly on the vector $a$ and is a function of position $r$. For brevity, we'll just call this $\underline R(a)$ in most cases. We can, at our discretion, use or set this rotation field to our liking, perhaps because it is convenient, perhaps because it is necessary--you can regard it as inherent to the space if you like.
We can then look at the transformation of $A \mapsto A' = \underline R(A)$. This naturally changes the way we must differentiate. See that
$$a \cdot \partial A' = \underline R(a \cdot \partial A) + (a \cdot \dot \partial) \dot{\underline{R}}(A)$$
This is just a fancy product rule, with the overdot saying we differentiate only the linear operator, not its argument.
We define the covariant derivative to get rid of the messy second term on the right-hand side. That is,
$$a \cdot \nabla A' = \underline R(a \cdot \nabla A)$$
Introducing or changing the rotation field changes the covariant derivative. This gives a way to talk about differentiation regardless of the current rotation field $\underline R$. Changing the rotation field can be beneficial to alter the geometry of the space in a way that is convenient. Thus, the rotation field represents generalized, position-dependent rotational degrees of freedom to rotate fields at all points in space by varying amounts and orientations at will. The covariant derivative allows us to do this and still recover results that are independent of the choice of rotation field--of the choice of gauge.
3) The Lie derivative
In GC, the Lie derivative has no special symbol. Rather, it can be built from covariant derivatives. Consider two vector fields $A, B$. The Lie derivative is simply
$$\mathcal L_A B = A \cdot \nabla B - B \cdot \nabla A$$
I'm not as familiar with Lie derivatives, but I'm given to understand that if $B$ were transported along a "flow" generated by $A$, this quantity would measure how much $B$ maintains its value during the process. |
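The coordinate expression above is easy to check numerically with finite differences (numpy assumed; the helper below is mine, and it evaluates the bracket in flat coordinates, i.e. with the trivial rotation field):

```python
import numpy as np

def lie_bracket(A, B, x, h=1e-6):
    """Finite-difference Lie bracket (L_A B)^i = A.grad(B^i) - B.grad(A^i) at x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    JA = np.zeros((n, n))  # Jacobians dA^i/dx^j and dB^i/dx^j
    JB = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        JA[:, j] = (np.asarray(A(x + e)) - np.asarray(A(x - e))) / (2 * h)
        JB[:, j] = (np.asarray(B(x + e)) - np.asarray(B(x - e))) / (2 * h)
    return JB @ np.asarray(A(x)) - JA @ np.asarray(B(x))

# Rotation field A = (-y, x) and constant field B = (1, 0) in the plane;
# here L_A B = -(dA/dx) = (0, -1) at every point.
A = lambda p: (-p[1], p[0])
B = lambda p: (1.0, 0.0)
```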
I've calculated the correct answer to my problem, but don't understand one of the assumptions I made when doing so.
I used the geodesic deviation equation $$\frac{D^{2}\xi^{\mu}}{D\lambda^{2}}+R_{\phantom{\mu}\beta\alpha\gamma}^{\mu}\xi^{\alpha}\frac{dx^{\beta}}{d\lambda}\frac{dx^{\gamma}}{d\lambda}=0$$
to show that on the surface of a unit sphere two particles separated by initial distance $d$, starting from the equator and travelling north (ie on lines of constant $\phi$) will have a separation $s$ after time $t$ equal to $$s=\xi^{\phi}=d\sin\theta=d\cos\left(vt\right).$$ This is similar to Geodesic devation on a two sphere except that question was solved using simple spherical geometry.
The assumption I made was that the second absolute derivative wrt $t$ equals the second ordinary derivative, ie
$$\frac{D^{2}\xi^{\mu}}{dt^{2}}=\frac{d^{2}\xi^{\mu}}{dt^{2}}.$$ My question is, why am I allowed to make this assumption?
I've been told on another physics forum that the answer is because the problem is framed in terms of Riemann normal coordinates (because the distance the cars travel along their separate geodesics is a linear function of time $t$). I can only assume that in some way this makes the connection coefficients disappear in the absolute derivative equation$$\frac{DV^{\alpha}}{d\lambda}=\frac{dV^{\alpha}}{d\lambda}+V^{\gamma}\Gamma_{\gamma\beta}^{\alpha}\frac{dx^{\beta}}{d\lambda},$$ but I can't see why this is. As I noted in a comment below, I understand it is possible to choose coordinates at a point where the connection coefficients vanish, but I used the ordinary polar coordinates $\phi$ and $\theta$ to calculate the correct answer. To use two different sets of coordinates like this seems like a case of "having your cake and eating it".
The calculation, by the way, is here (my answer to my question): Geodesic deviation on a unit sphere |
In teaching my calculus students about limits and function domination, we ran into the class of functions
$$\Theta=\{x^\alpha (\ln{x})^\beta\}_{(\alpha,\beta)\in\mathbb{R}^2}$$
Suppose we say that $g$ weakly dominates $f$, and write $f\preceq g$, if
$$\lim_{x\to\infty}\frac{f(x)}{g(x)} \hspace{3 mm} \text{is finite}$$
We can then readily see that $(\Theta,\preceq)$ is a total order isomorphic to the lexicographic order on $\mathbb{R}^2$.
But we can get more complicated total orders with, say
$$\Theta_n=(x^{\alpha_0}(\ln{x})^{\alpha_1}(\ln\ln{x})^{\alpha_2}\cdots(\ln^{n-1} x)^{\alpha_{n-1}})_{\vec{\alpha}\in\mathbb{R}^{n}}$$
$$\Phi=\{e^{p(x)}\}_{p(x)\in\mathbb{R}[x]}$$
which are isomorphic as total orders to the lexicographic orders on $\mathbb{R}^n$ and $\operatorname{List}\mathbb{R}$, respectively.
All of these complicated orders live inside what I'd call "the AP Calc linear order" $(\Omega,\preceq)$ defined as:
$$\Omega_0=\{f\in\mathscr{C}^0((\lambda,\infty))\}_{\lambda\in\mathbb{R}}$$
$$f\preceq g\Longleftrightarrow \max\left\{\left|\liminf_{x\to\infty} \frac{f(x)}{g(x)}\right|,\left|\limsup_{x\to\infty} \frac{f(x)}{g(x)}\right|\right\}<\infty$$
$$\Omega=\Omega_0/\simeq \hspace{5 mm} \text{where} \hspace{5 mm} f\simeq g \Leftrightarrow \left[f\preceq g \text{ and } g\preceq f\right]$$
where the refinement on $\preceq$ is made so as to avoid problems with things like $\sin{x}$.
This seems to be a very complicated linear order, as it includes as a suborder things like
$$\Psi=\{p_0(x)e^{p_1(x)}e^{e^{p_2(x)}}\cdots\exp^{n-1}(p_{n-1}(x))\}_{p_i(x)\in\mathbb{R}[x]\forall i}$$
My question is the following: is there any combinatorial description or universal construction, i.e. as a colimit, of the isomorphism type of $(\Omega,\preceq)$? |
In his celebrated paper "Conjugate Coding" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers, and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical "serial number" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either
$$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$
The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or $\{|+\rangle,|-\rangle\}$), and check that it gets the correct outcomes.
On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's "intuitively obvious" that, if a counterfeiter who
doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$).
However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere, or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one.
So,
has such a proof (with an upper bound on $c$) been published? If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme?
Update: In light of the discussion with Joe Fitzsimons below, I should clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say in the basis
$$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$
Or is there an entangled counterfeiting strategy that does better?
Update 2: Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and "hopes for the best." Interestingly, both of these strategies turn out to achieve a success probability of $(5/8)^n$. So, my conjecture of the moment is that $(5/8)^n$ might be the right answer. In any case, the fact that $5/8$ is a lower bound on $c$ rules out any security argument for Wiesner's scheme that's "too" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is $c=1/2$).
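Both single-basis strategies from Update 2 can be checked by direct enumeration (numpy assumed; for each money state the counterfeiter measures in a fixed basis and ships two copies of the observed basis state):

```python
import numpy as np

# The four BB84 money states.
s0, s1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
states = [s0, s1, (s0 + s1) / np.sqrt(2), (s0 - s1) / np.sqrt(2)]

def success(basis):
    """Per-qubit probability that both counterfeit copies pass verification,
    averaged over the four states: sum_b p(b) * p(pass)^2 = sum_b |<psi|b>|^6."""
    total = 0.0
    for psi in states:
        total += sum(abs(b @ psi) ** 6 for b in basis)
    return total / len(states)

comp = [s0, s1]                              # strategy (b): computational basis
t = np.pi / 8                                # strategy (a): pi/8-rotated basis
rot = [np.array([np.cos(t), np.sin(t)]),
       np.array([np.sin(t), -np.cos(t)])]
# Both evaluate to exactly 5/8 per qubit, i.e. (5/8)^n overall.
```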
Update 3: Nope, the right answer is $(3/4)^n$! See the discussion thread below Abel Molina's answer.
Tagged: determinant

Problem 548
An $n\times n$ matrix $A$ is said to be
invertible if there exists an $n\times n$ matrix $B$ such that $AB=I$, and $BA=I$,
where $I$ is the $n\times n$ identity matrix.
If such a matrix $B$ exists, then it is known to be unique and called the
inverse matrix of $A$, denoted by $A^{-1}$.
In this problem, we prove that if $B$ satisfies the first condition, then it automatically satisfies the second condition.
So if we know $AB=I$, then we can conclude that $B=A^{-1}$.
Let $A$ and $B$ be $n\times n$ matrices.
Suppose that we have $AB=I$, where $I$ is the $n \times n$ identity matrix.
Prove that $BA=I$, and hence $A^{-1}=B$.
Problem 452
Let $A$ be an $n\times n$ complex matrix.
Let $S$ be an invertible matrix. (a) If $SAS^{-1}=\lambda A$ for some complex number $\lambda$, then prove that either $\lambda^n=1$ or $A$ is a singular matrix. (b) If $n$ is odd and $SAS^{-1}=-A$, then prove that $0$ is an eigenvalue of $A$.
(c) Suppose that all the eigenvalues of $A$ are integers and $\det(A) > 0$. If $n$ is odd and $SAS^{-1}=A^{-1}$, then prove that $1$ is an eigenvalue of $A$.

Problem 438
Determine whether each of the following statements is True or False.
(a) If $A$ and $B$ are $n \times n$ matrices, and $P$ is an invertible $n \times n$ matrix such that $A=PBP^{-1}$, then $\det(A)=\det(B)$.

(b) If the characteristic polynomial of an $n \times n$ matrix $A$ is \[p(\lambda)=(\lambda-1)^n+2,\] then $A$ is invertible.

(c) If $A^2$ is an invertible $n\times n$ matrix, then $A^3$ is also invertible.

(d) If $A$ is a $3\times 3$ matrix such that $\det(A)=7$, then $\det(2A^{\trans}A^{-1})=2$.

(e) If $\mathbf{v}$ is an eigenvector of an $n \times n$ matrix $A$ with corresponding eigenvalue $\lambda_1$, and if $\mathbf{w}$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda_2$, then $\mathbf{v}+\mathbf{w}$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda_1+\lambda_2$.
(Stanford University, Linear Algebra Exam Problem)

Eigenvalues of Orthogonal Matrices Have Length 1. Every $3\times 3$ Orthogonal Matrix Has 1 as an Eigenvalue

Problem 419

(a) Let $A$ be a real orthogonal $n\times n$ matrix. Prove that the length (magnitude) of each eigenvalue of $A$ is $1$.
(b) Let $A$ be a real orthogonal $3\times 3$ matrix and suppose that the determinant of $A$ is $1$. Then prove that $A$ has $1$ as an eigenvalue.

Problem 391

(a) Is the matrix $A=\begin{bmatrix} 1 & 2\\ 0& 3 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 3 & 0\\ 1& 2 \end{bmatrix}$?

(b) Is the matrix $A=\begin{bmatrix} 0 & 1\\ 5& 3 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 1 & 2\\ 4& 3 \end{bmatrix}$?

(c) Is the matrix $A=\begin{bmatrix} -1 & 6\\ -2& 6 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 3 & 0\\ 0& 2 \end{bmatrix}$?
(d) Is the matrix $A=\begin{bmatrix} -1 & 6\\ -2& 6 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 1 & 2\\ -1& 4 \end{bmatrix}$?

Problem 389

(a) A $2 \times 2$ matrix $A$ satisfies $\tr(A^2)=5$ and $\tr(A)=3$. Find $\det(A)$.

(b) A $2 \times 2$ matrix has two parallel columns and $\tr(A)=5$. Find $\tr(A^2)$.

(c) A $2\times 2$ matrix $A$ has $\det(A)=5$ and positive integer eigenvalues. What is the trace of $A$?
(Harvard University, Linear Algebra Exam Problem)

Problem 374
Let \[A=\begin{bmatrix}
a_0 & a_1 & \dots & a_{n-2} &a_{n-1} \\
a_{n-1} & a_0 & \dots & a_{n-3} & a_{n-2} \\
a_{n-2} & a_{n-1} & \dots & a_{n-4} & a_{n-3} \\
\vdots & \vdots & \dots & \vdots & \vdots \\
a_{2} & a_3 & \dots & a_{0} & a_{1}\\
a_{1} & a_2 & \dots & a_{n-1} & a_{0}
\end{bmatrix}\] be a complex $n \times n$ matrix. Such a matrix is called a circulant matrix. Then prove that the determinant of the circulant matrix $A$ is given by \[\det(A)=\prod_{k=0}^{n-1}(a_0+a_1\zeta^k+a_2 \zeta^{2k}+\cdots+a_{n-1}\zeta^{k(n-1)}),\] where $\zeta=e^{2 \pi i/n}$ is a primitive $n$-th root of unity.

Problem 363

(a) Find all the eigenvalues and eigenvectors of the matrix \[A=\begin{bmatrix} 3 & -2\\ 6& -4 \end{bmatrix}.\]
(b) Let \[A=\begin{bmatrix} 1 & 0 & 3 \\ 4 &5 &6 \\ 7 & 0 & 9 \end{bmatrix} \text{ and } B=\begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 &0 \\ 0 & 0 & 4 \end{bmatrix}.\] Then find the value of \[\det(A^2B^{-1}A^{-2}B^2).\] (For part (b), without computation, you may assume that $A$ and $B$ are invertible matrices.)

Problem 338
Each of the following sets are not a subspace of the specified vector space. For each set, give a reason why it is not a subspace.
(1) \[S_1=\left \{\, \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \in \R^3 \quad \middle | \quad x_1\geq 0 \,\right \}\] in the vector space $\R^3$.

(2) \[S_2=\left \{\, \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \in \R^3 \quad \middle | \quad x_1-4x_2+5x_3=2 \,\right \}\] in the vector space $\R^3$.

(3) \[S_3=\left \{\, \begin{bmatrix} x \\ y \end{bmatrix}\in \R^2 \quad \middle | \quad y=x^2 \,\right \}\] in the vector space $\R^2$.
(4) Let $P_4$ be the vector space of all polynomials of degree $4$ or less with real coefficients.
\[S_4=\{ f(x)\in P_4 \mid f(1) \text{ is an integer}\}\] in the vector space $P_4$.

(5) \[S_5=\{ f(x)\in P_4 \mid f(1) \text{ is a rational number}\}\] in the vector space $P_4$.

(6) Let $M_{2 \times 2}$ be the vector space of all $2\times 2$ real matrices.
\[S_6=\{ A\in M_{2\times 2} \mid \det(A) \neq 0\} \] in the vector space $M_{2\times 2}$.

(7) \[S_7=\{ A\in M_{2\times 2} \mid \det(A)=0\} \] in the vector space $M_{2\times 2}$.
(Linear Algebra Exam Problem, the Ohio State University)

(8) Let $C[-2, 2]$ be the vector space of all real continuous functions defined on the interval $[-2, 2]$.
\[S_8=\{ f(x)\in C[-2,2] \mid f(-1)f(1)=0\} \] in the vector space $C[-2, 2]$.

(9) \[S_9=\{ f(x) \in C[-1, 1] \mid f(x)\geq 0 \text{ for all } -1\leq x \leq 1\}\] in the vector space $C[-1, 1]$.

(10) Let $C^2[a, b]$ be the vector space of all real-valued functions $f(x)$ defined on $[a, b]$ such that $f(x)$, $f'(x)$, and $f^{\prime\prime}(x)$ are continuous on $[a, b]$. Here $f'(x)$ and $f^{\prime\prime}(x)$ are the first and second derivatives of $f(x)$.
\[S_{10}=\{ f(x) \in C^2[-1, 1] \mid f^{\prime\prime}(x)+f(x)=\sin(x) \text{ for all } -1\leq x \leq 1\}\] in the vector space $C^2[-1, 1]$.
(11) Let $S_{11}$ be the set of real polynomials of degree exactly $k$, where $k \geq 1$ is an integer, in the vector space $P_k$.

(12) Let $V$ be a vector space and $W \subset V$ a vector subspace. Define the subset $S_{12}$ to be the complement of $W$,
\[ V \setminus W = \{ \mathbf{v} \in V \mid \mathbf{v} \not\in W \}.\]
Angles are formed when two lines intersect or meet at a point. An angle can also be defined as the measure of turn between two lines. Angles are measured in degrees or radians, and they can be of different types.
An angle can be formed by the intersection of two secants, two tangents, or one tangent and one secant. In geometry, a tangent to a circle is a straight line that touches the circle at exactly one point and never enters the interior of the circle. It can be pictured as the limiting position of a secant line through two points of the circle as the points approach one another. The tangent formula is used in various theorems, and tangents are also used in geometric constructions and proofs.
\[\large \huge \left(y-y_{0}\right)=m_{tgt}\left(x-x_{0}\right)\]
The great circle is the largest circle that can be drawn on the surface of a sphere. The minimum distance between two points on the surface of the sphere is measured along a great circle, and is called the great-circle distance. Traditionally, the great circle is also known as the Riemannian circle. The diameter of a sphere coincides with the diameter of a great circle. Great-circle distances are used for the navigation of large ships and aircraft.
The great-circle distance is given by

\[\large d=r\cos^{-1}\left(\sin\delta_{1}\sin\delta_{2}+\cos\delta_{1}\cos\delta_{2}\cos\left(\lambda_{1}-\lambda_{2}\right)\right)\]

Where,

r is the radius of the earth (or sphere), $\delta_{1},\delta_{2}$ are the latitudes, and $\lambda_{1},\lambda_{2}$ are the longitudes of the two points.

Question 1: Find the great circle distance if the radius is 4.7 km, latitude is (45°, 32°) and longitude is (24°, 17°)?

Solution:
Given,
\[\large \sigma_{1},\sigma_{2}=45^{\circ},32^{\circ}\] \[\large \Lambda_{1},\Lambda_{2}=24^{\circ},17^{\circ}\] r=4.7 km r= 4700 m
Using the above given formula,
\[\large d=4700\;\cos^{-1}\left((0.52\times 0.83\times 0.75)+(0.85 \times 0.32)\right)\]
\[\large d=4700\times 0.99\]
D = 4653 m
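Since the numbers in the worked example above appear garbled, a small script may be clearer (plain Python; the function name is mine, and the formula is the standard spherical law of cosines):

```python
import math

def great_circle_distance(r, lat1, lat2, lon1, lon2):
    """Great-circle distance on a sphere of radius r; angles in degrees."""
    d1, d2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    central = math.acos(math.sin(d1) * math.sin(d2)
                        + math.cos(d1) * math.cos(d2) * math.cos(dlon))
    return r * central

# Sanity checks: antipodal points are pi*r apart; pole to equator is (pi/2)*r.
```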
Any circle having radius one is termed a unit circle in mathematics. Unit circles are useful in trigonometry, where the unit circle is the circle of radius one centered at the origin (0, 0) of the Cartesian coordinate system in the Euclidean plane. An example of a unit circle is given below in the diagram –
The general equation of circle is given below:
\[\large \left(x-h\right)^{2}+\left(y-k\right)^{2}=r^{2}\]
Where (h, k) are center coordinates and r is the radius.
The unit circle formula is:
\[\large x^{2}+y^{2}=1\]
Where x and y are the coordinate values.
Question: Show that the point \[\large P\left[\frac{\sqrt{3}}{3},\,\frac{\sqrt{2}}{\sqrt{3}}\right]\] is on the unit circle. Solution:
We need to show that this point satisfies the equation of the unit circle, that is: \[\large x^{2}+y^{2}=1\]
\[\large \left[\frac{\sqrt{3}}{3}\right]^{2}+\left[\frac{\sqrt{2}}{\sqrt{3}}\right]^{2}\]
\[\large =\frac{3}{9}+\frac{2}{3}\]
\[\large =\frac{1}{3}+\frac{2}{3}\]
= 1
Therefore P is on the unit circle.
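The same verification can be done in a couple of lines of Python (just arithmetic):

```python
import math

# Check that P = (sqrt(3)/3, sqrt(2)/sqrt(3)) satisfies x^2 + y^2 = 1.
x = math.sqrt(3) / 3             # x^2 = 3/9 = 1/3
y = math.sqrt(2) / math.sqrt(3)  # y^2 = 2/3
```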
A central angle is the angle formed between two radii of a circle. The two points where the radii meet the circle cut off an arc, whose length is denoted by $l$ in geometry.

A central angle is formed at the center of the circle where two radii meet or intersect. The next term needed for the definition of a central angle is the vertex: a vertex is the point where two lines meet to form an angle. The vertex of a central angle is always the center point of the circle.
\[\LARGE Central\;Angle\;\theta =\frac{Arc\;Length\times 360}{2\pi r}\]
Example 1: Find the central angle, where the arc length measurement is about 20 cm and the length of the radius measures 10 cm? Solution:
Given r = 10 cm
Arc length = 20 cm
The formula of central angle is,
Central Angle θ = \[\LARGE \frac{Arc Length \times 360}{2 \times\pi \times r}\]
Central Angle θ = \[\LARGE \frac{20 \times 360}{2 \times 3.14 \times 10}\]
Central Angle θ = \[\LARGE \frac{7200}{62.8}\] = 114.65°
Example 2: If the central angle of a circle is 82.4° and the arc length formed is 23 cm then find out the radius of the circle. Solution:
Given Arc length = 23 cm
The formula of central angle is,
Central Angle θ = \[\LARGE \frac{Arc\;Length \times 360}{2\times\pi \times r}\]
82.4° =\[\LARGE \frac{23 \times 360}{2\times\pi \times r}\]

Solving for the radius,

r = \[\LARGE \frac{8280}{2\times\pi \times 82.4}\] ≈ 16 cm
The central angle is shown more clearly in the diagram together with its formula. It is also important to discuss the other angle at the vertex, because when two radii meet, two angles are formed: the convex central angle and the reflex central angle. If the central angle measures less than 180 degrees, it is a convex central angle; if it measures more than 180 degrees, it is a reflex central angle.
With this discussion, you should have a clear understanding of the different angles of a circle and their formulas. You just have to put the values into the formulas to calculate the angles for real-world problems too.
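Both worked examples follow from the single relation θ = L·360/(2πr), which is easy to encode (plain Python; the function names are mine):

```python
import math

def central_angle_deg(arc_length, radius):
    """Central angle in degrees: theta = (arc length * 360) / (2 * pi * r)."""
    return arc_length * 360.0 / (2.0 * math.pi * radius)

def radius_from_angle(arc_length, angle_deg):
    """Invert the same relation to recover the radius."""
    return arc_length * 360.0 / (2.0 * math.pi * angle_deg)

# Example 1: arc 20 cm, radius 10 cm   -> about 114.6 degrees.
# Example 2: arc 23 cm, angle 82.4 deg -> radius about 16 cm.
```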
I guess this is probably a little late, but this result is immediate from Basu's Theorem, provided that you are willing to accept that the family of normal distributions with known variance is complete. To apply Basu, fix $\sigma^2$ and consider the family of $N(\mu, \sigma^2)$ for $\mu \in \mathbb R$. Then $\frac{(n - 1)S^2}{\sigma^2} \sim \chi^2_{n - 1}$ so $S^2$ is ancillary, while $\bar X$ is complete sufficient, and hence they are independent for all $\mu$ and our fixed $\sigma^2$. Since $\sigma^2$ was arbitrary, this completes the proof.
This can also be shown directly without too much hassle. One can find the joint pdf of $(A, \bar X)$, where $A=(X_2-\bar X, X_3-\bar X, \dots, X_n-\bar X)$ is the vector of deviations, directly by making a suitable transformation of the joint pdf of $(X_1,\cdots, X_n)$. The joint pdf of $(A, \bar X)$ factors as required, which gives independence. To see this quickly, without actually doing the transformation, and skipping some algebra, we may write
$$f(x_1, x_2, ..., x_n) = (2\pi \sigma^2)^{-n/2} \exp\left\{-\frac{\sum(x_i - \bar x)^2}{2\sigma^2}\right\} \exp\left\{-\frac{n(\bar x - \mu)^2}{2\sigma^2}\right\}$$
and we can see that everything except for the last term depends only on $$(x_2 - \bar x, x_3 - \bar x, ..., x_n - \bar x)$$ (note we may retrieve $x_1 - \bar x$ from only the first $n - 1$ deviations) and the last term depends only on $\bar x$. The transformation is linear, so the jacobian term won't screw this factorization up when we actually pass to the joint pdf of $(A, \bar X)$. |
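As an empirical complement to both arguments (a simulation sketch, numpy assumed), the independence shows up as zero correlation for normal data and visibly fails for a skewed distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# 50,000 normal samples of size n = 5: sample mean and variance uncorrelated.
x = rng.normal(loc=2.0, scale=3.0, size=(50_000, 5))
corr_normal = np.corrcoef(x.mean(axis=1), x.var(axis=1, ddof=1))[0, 1]

# Same experiment with exponential data: the correlation is far from zero,
# so the independence is genuinely a property of the normal family.
y = rng.exponential(scale=1.0, size=(50_000, 5))
corr_exp = np.corrcoef(y.mean(axis=1), y.var(axis=1, ddof=1))[0, 1]
```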
I think that I'd say that one of the underlying themes of analysis is, really, the limit. In pretty much every subfield of analysis, we spend a lot of time trying to control the size of certain quantities, with taking limits in mind. This is especially true in PDEs, when we consistently desire norm estimates on various quantities. Let's just discuss the "...
As an example:
$f(x)=6|x|$ does not have a derivative at $x=0$.
$g(x)=3x|x|$ does have a first derivative everywhere, equal to $f(x)$, but not a second derivative at $x=0$.
$h(x)=x^2|x|= |x^3|$ has a first derivative everywhere, equal to $g(x)$, and a second derivative equal to $f(x)$, but not a third derivative at $x=0$.
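These claims can be spot-checked numerically. The sketch below (hedged; the test points, step size, and tolerances are arbitrary) compares symmetric difference quotients of $h$ and $g$ against $g$ and $f$, and exhibits the disagreeing one-sided slopes of $f$ at $0$.

```python
f = lambda x: 6 * abs(x)          # no derivative at 0
g = lambda x: 3 * x * abs(x)      # g' = f everywhere
h = lambda x: x * x * abs(x)      # h' = g, h'' = f everywhere

def dcentral(fn, x, eps=1e-6):
    # symmetric difference quotient
    return (fn(x + eps) - fn(x - eps)) / (2 * eps)

for x in (-1.3, -0.2, 0.0, 0.4, 2.0):
    assert abs(dcentral(h, x) - g(x)) < 1e-4
    assert abs(dcentral(g, x) - f(x)) < 1e-4

# the one-sided slopes of f at 0 disagree, so f'(0) does not exist
eps = 1e-6
right = (f(eps) - f(0)) / eps
left = (f(0) - f(-eps)) / eps
print(right, left)
```

Note that the symmetric quotient of $f$ at $0$ would deceptively return $0$; only the one-sided quotients reveal the kink.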
Traditionally, the sum of a sequence is defined as the limit of the partial sums; that is, for a sequence $\{a_n\}$, $\sum{a_n}$ is that number $S$ so that for every $\epsilon > 0$, there is an $N$ such that whenever $m > N$, $|S - \sum_{n = 0}^ma_n| < \epsilon$. There's no reason we can't define it like that for uncountable sequences as well: let $\...
It's a metric that is bounded above by $1$, while maintaining the same topology. This means that bounded metrics are just as powerful as general metrics (which is arguably interesting in itself). More concretely, there's a commonly used construction for turning a countable product of metric spaces into a metric space itself. Specifically, if we have spaces $...
Why might solving a differentiated integral equation eventually lead to erroneous solutions of the original problem? The reason is that taking a derivative is not an invertible operation. So the new equation you obtain is true, but not equivalent to the original one -- the set of solutions has increased. The simplest example is trying to solve an ordinary ...
From my viewpoint, Real Analysis is a study of functions of (one or several) real variable. Everything else (limits, derivatives, integrals, infinite series, etc.) is a tool serving this purpose. [There is a mild exception one has to make here for sequences and series of real numbers/vectors; these are functions defined on the set of natural numbers and ...
A concrete example. Consider the ring $\mathbb{Q}[x]$ of polynomials over $\mathbb{Q}$. These can be linearly ordered as in this answer, in a way that preserves the ordering of the rationals. Now we have the "regular" naturals in this field: 1, 2, 3, and so on. But the polynomial $x$ is larger than all of these, and $x^2$ is larger than $x$, and so on. So in ...
Suppose for some $x_0$ you have $f(x_0)\neq0$. Then you can solve the differential equation$$\frac{f'(x)}{f^2(x)}=1$$with the initial condition $f(x_0)$, which gives$$\frac{-1}{f(x)}+\frac{1}{f(x_0)}=x-x_0\iff f(x)=\frac{1}{c-x}$$where $c$ is a constant, and this holds for every $x$ that is in the same component of $\mathbb R\backslash\{c\}$ with $x_0$, ...
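The resulting formula is easy to sanity-check numerically. A hedged sketch (the constant $c=2$ and the test points are arbitrary) verifying that $f(x)=1/(c-x)$ satisfies $f'/f^2=1$ on both components of $\mathbb R\setminus\{c\}$:

```python
# numeric check that f(x) = 1/(c - x) solves f'(x)/f(x)^2 = 1 away from x = c
c = 2.0
f = lambda x: 1.0 / (c - x)

def dcentral(fn, x, eps=1e-7):
    # symmetric difference quotient as a stand-in for f'(x)
    return (fn(x + eps) - fn(x - eps)) / (2 * eps)

for x in (-3.0, 0.5, 1.9, 2.1, 5.0):  # points on both sides of the pole at c
    ratio = dcentral(f, x) / f(x) ** 2
    assert abs(ratio - 1.0) < 1e-6
print("f'/f^2 = 1 verified at sample points")
```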
The bisection method only cares if the function changes sign, so it goes straight past the 'fake' root without noticing.If the coefficients have a slight error in them, then perhaps the 'fake' root should have been a root.
If $\{z_i:i\in I\}$ is any indexed set of complex numbers, then the series $\sum_{i\in I}z_i$ is said to converge to the complex number $z$ if for every $\epsilon>0$ there is a finite subset $J_\epsilon$ of $I$ such that, for every finite subset $J$ of $I$ with $J_\epsilon\subseteq J$, $\vert \sum_{i\in J}z_i-z\vert<\epsilon$. In other words, to say ...
Long ago someone told me this. I still remember it ...Sometimes I find myself just pushing symbols around, and I wonder, "Am I really doing analysis?" But when an argument begins, "Let $\varepsilon > 0$," then I know it really is analysis.
No, $f(x)=1/x$ is the only solution. The proof consists of many simple steps.
Part 1: $f$ is strictly decreasing.
First, we show that $f$ is surjective. Let $x>0$. Then
$$f(f(f(1/x)))\,(1/x)=1\quad\Rightarrow\quad f(f(f(1/x)))=x.$$
Thus $f$ is surjective. Next, we show that $f$ is injective. Let $x,y>0$ with $f(x)=f(y)$. Then
$$x f(f(f(y))) =...
Such a function exists. To construct it, we need an auxiliary function $w: \mathbb R \to [0,1]$ that is:
Continuous.
Rational almost everywhere.
Irrational at $0$.
Zero outside the interval $(-1,1)$.
This can be obtained from the Cantor function $c: [0,1] \to [0,1]$ by taking some $u \in (0,1)$ for which $c(u)$ is irrational and by assigning
$$w(x) =\...
Consider the functions, $f:[0,1]\rightarrow [-1,1]$ and $g:[0,1]\rightarrow[-1,1]$ where$$f = \begin{cases}\sin\frac{1}{x},& x>0 \\ 0, & x = 0\end{cases}$$and$$g = \begin{cases}-\sin\frac{1}{x},& x>0 \\ 1, & x = 0\end{cases}$$
I realised this question has been asked before, as you can see here. Anyway, I will write down my solution here again. First of all, consider Ramanujan's Master Theorem.
Ramanujan's Master Theorem. Let $f(x)$ be an analytic function with a Maclaurin expansion of the form
$$f(x)=\sum_{k=0}^{\infty}\frac{\phi(k)}{k!}(-x)^k$$
then the Mellin transform of ...
Wolfy says it is $4$ times Catalan's constant. One (not optimal) way to derive this is
$$\def\sech{\operatorname{sech}}\begin{align*}\int_{-\infty}^\infty\arcsin\sech x\,\mathrm{d}x&=2\int_0^\infty\arcsin\sech x\,\mathrm{d}x\\&=2\int_0^1\frac{\arcsin u\,\mathrm{d}u}{u\sqrt{1-u^2}}\quad(u=\sech x)\\&=2\int_0^{\pi/2}\frac{\theta\,\mathrm{d}\...
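The claimed value $4G$ can be checked numerically. A hedged, standard-library-only sketch (the truncation at $x=40$ and the step count are arbitrary choices; the value of Catalan's constant is hard-coded):

```python
import math

def simpson(fn, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += fn(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

integrand = lambda x: math.asin(1.0 / math.cosh(x))
# the integrand is even; the tail beyond x = 40 is of order e^(-40), negligible
approx = 2 * simpson(integrand, 0.0, 40.0, 100000)
catalan = 0.9159655941772190  # Catalan's constant G (hard-coded)
print(approx, 4 * catalan)
```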
We may adopt the technique for Ramanujan's infinite radical. Let $p(x) = x^2 + 3x + 1$ and define $F : [0, \infty) \to [0, \infty)$ by$$ F(x) = \sqrt{p(x) + \sqrt{p(x+1) + \sqrt{p(x+2) + \cdots }}} $$Then $F$ solves$$ F(x)^2 = p(x) + F(x+1). $$Now we make an ansatz that $F(x)$ takes the form $F(x) = ax + b$. Plugging this and comparing coefficients ...
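Comparing coefficients in $F(x)^2 = p(x) + F(x+1)$ gives $a=1$, $b=2$, so $F(x)=x+2$; in particular the full radical starting at $x=1$ equals $3$. A hedged numerical check (the truncation depth is arbitrary; convergence of the truncated radical is extremely fast):

```python
import math

p = lambda x: x * x + 3 * x + 1

def nested(x, depth):
    # sqrt(p(x) + sqrt(p(x+1) + sqrt(p(x+2) + ...))) truncated after `depth` terms
    val = 0.0
    for k in range(depth, 0, -1):
        val = math.sqrt(p(x + k - 1) + val)
    return val

# the ansatz F(x) = x + 2 predicts nested(1, ...) -> 3 and nested(5, ...) -> 7
print(nested(1, 60), nested(5, 60))
```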
What is the connection between the following three properties of a subset $A$ of $\mathbb{R}$?(a) $A$ is open. (b), (c) are pure maths.If you say one property implies another, you have to prove that implication. If you say that a property does not imply another, you have to give an explicit example to show that.Consider the set $\mathbb{I}$ of ...
The main ingredient in my answer is the following fact: $|\mathbb{R}^{\mathbb N}| = |\mathbb{R}|$. Its proof isn't directly related to this question so I won't put it right here. But you can find it in the answer to this question. Due to this fact, we can regard $\mathbb R$ as the disjoint union $\coprod\limits_{i=1}^\infty R_i$ where all of ...
and then divide through by $n$ to give
$$= \frac{\frac{1}{n}-(\frac{3}{8})^n}{\frac{n^{1728}}{2^{3n}}+\frac{1}{n}}$$
My answer is zero, using the following limits: $\lim_{n\to\infty}\frac{1}{n} = 0$, and since $\frac{3}{8} <1$, $\lim_{n\to\infty}\left(\frac{3}{8}\right)^n = 0$. But not only the numerator tends to zero; the denominator does too....
No, it is not possible. Suppose otherwise. Then, there is some $N\in\mathbb N$ such that$$n\geqslant N\implies\lvert a_n-\varepsilon\rvert<\varepsilon\iff0<a_n<2\varepsilon.$$But this is impossible, since you are assuming that you always have $a_n\leqslant0$.
Hint: If you want to show that $\mathbb{R}^{n}$ is not compact, you just need to provide a specific example of an open cover that has no finite subcover.Try with the open cover $\{\Omega_{k}\}_{k=1}^{\infty}$, where $\Omega_{k}$ is the open ball of radius $k$, centred at $\mathbf{0}$.(By the way, this is an example of the general principle that to show ...
Let $L=({n!})^{\frac{1}{n^2}}$. Then
$$L=e^{\frac{\log_{e}{n!}}{n^2}}=e^{\frac{\sum_{i=1}^{n}{\log(i)}}{n^2}}.$$
Now $0\leq\frac{\sum_{i=1}^{n}{\log(i)}}{n^2}\leq \frac{n\log(n)}{n^2}=\frac{\log(n)}{n}$, which tends to $0$ as $n$ tends to $\infty$. Hence
$\lim\limits_{n \to \infty}{\frac{\sum_{i=1}^{n}{\log(i)}}{n^2}} = 0$.
$\lim\limits_{n \to \infty}{L} = e^{\lim\...
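Since the exponent tends to $0$, the limit is $e^0=1$. A quick hedged numerical illustration (using `lgamma` to evaluate $\log n!$ without overflow; the sample values of $n$ are arbitrary):

```python
import math

def L(n):
    # (n!)^(1/n^2), computed via lgamma(n + 1) = log(n!) to avoid overflow
    return math.exp(math.lgamma(n + 1) / (n * n))

print(L(10), L(1000), L(10 ** 6))  # decreasing toward 1
# the bound used above: sum_{i<=n} log(i) <= n log(n)
assert math.lgamma(1001) <= 1000 * math.log(1000)
```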
I'll address your points one by one. If you have further questions, please ask away:Aren't all functions $C^\infty$ then?No. Examples of functions whose $k$-th order derivative is not continuous abound for every $k$. It is true that $C^1$ functions are very similar to $C^\infty$ functions, but for the time being it is better to focus on the ones we will ... |
Let $T: \R^n \to \R^m$ be a linear transformation.Suppose that the nullity of $T$ is zero.
If $\{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k\}$ is a linearly independent subset of $\R^n$, then show that $\{T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k) \}$ is a linearly independent subset of $\R^m$.
Let $A$ be the matrix given by\[A=\begin{bmatrix}-2 & 0 & 1 \\-5 & 3 & a \\4 & -2 & -1\end{bmatrix}\]for some variable $a$. Find all values of $a$ which will guarantee that $A$ has eigenvalues $0$, $3$, and $-3$.
Let\[A=\begin{bmatrix}8 & 1 & 6 \\3 & 5 & 7 \\4 & 9 & 2\end{bmatrix}.\]Notice that $A$ contains every integer from $1$ to $9$ and that the sums of each row, column, and diagonal of $A$ are equal. Such a grid is sometimes called a magic square.
Define two functions $T:\R^{2}\to\R^{2}$ and $S:\R^{2}\to\R^{2}$ by\[T\left(\begin{bmatrix}x \\ y\end{bmatrix}\right)=\begin{bmatrix}2x+y \\ 0\end{bmatrix},\;S\left(\begin{bmatrix}x \\ y\end{bmatrix}\right)=\begin{bmatrix}x+y \\ xy\end{bmatrix}.\]Determine whether $T$, $S$, and the composite $S\circ T$ are linear transformations.
Let\[\mathbf{v}_{1}=\begin{bmatrix}1 \\ 1\end{bmatrix},\;\mathbf{v}_{2}=\begin{bmatrix}1 \\ -1\end{bmatrix}.\]Let $V=\Span(\mathbf{v}_{1},\mathbf{v}_{2})$. Do $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$ form an orthonormal basis for $V$?
Let $A$ be an $m \times n$ matrix.Suppose that the nullspace of $A$ is a plane in $\R^3$ and the range is spanned by a nonzero vector $\mathbf{v}$ in $\R^5$. Determine $m$ and $n$. Also, find the rank and nullity of $A$.
Suppose that a set of vectors $S_1=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is a spanning set of a subspace $V$ in $\R^5$. If $\mathbf{v}_4$ is another vector in $V$, then is the set\[S_2=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}\]still a spanning set for $V$? If so, prove it. Otherwise, give a counterexample.
For a set $S$ and a vector space $V$ over a scalar field $\K$, define the set of all functions from $S$ to $V$\[ \Fun ( S , V ) = \{ f : S \rightarrow V \} . \]
For $f, g \in \Fun(S, V)$ and $c \in \K$, addition and scalar multiplication can be defined by\[ (f+g)(s) = f(s) + g(s) \, \mbox{ and } (cf)(s) = c (f(s)) \, \mbox{ for all } s \in S . \]
(a) Prove that $\Fun(S, V)$ is a vector space over $\K$. What is the zero element?
(b) Let $S_1 = \{ s \}$ be a set consisting of one element. Find an isomorphism between $\Fun(S_1 , V)$ and $V$ itself. Prove that the map you find is actually a linear isomorphism.
(c) Suppose that $B = \{ e_1 , e_2 , \cdots , e_n \}$ is a basis of $V$. Use $B$ to construct a basis of $\Fun(S_1 , V)$.
(d) Let $S = \{ s_1 , s_2 , \cdots , s_m \}$. Construct a linear isomorphism between $\Fun(S, V)$ and the vector space of $m$-tuples of $V$, defined as\[ V^m = \{ (v_1 , v_2 , \cdots , v_m ) \mid v_i \in V \mbox{ for all } 1 \leq i \leq m \} . \]
(e) Use the basis $B$ of $V$ to construct a basis of $\Fun(S, V)$ for an arbitrary finite set $S$. What is the dimension of $\Fun(S, V)$?
(f) Let $W \subseteq V$ be a subspace. Prove that $\Fun(S, W)$ is a subspace of $\Fun(S, V)$. |
Just getting started with EquatIO? Here are a few quick how-tos to get you started inserting math into your Google Docs using Math Prediction. Please note: this is not an all-inclusive list. You should also be sure to check out our video tutorials for EquatIO or check out Inserting Geometry Symbols!
This article will cover inserting an exponent, creating a fraction or mixed number, square roots, and subscript, as well as listing some additional commands.
Below each section is an animated GIF which shows the process in real time.
Exponents and Subscript
To insert an exponent, use the caret (^) symbol to move your cursor up to the exponent slot, where you can then insert your exponent. Once you are finished, use the right arrow key (⇨) to move out of the exponent slot and continue typing your equation.
So let’s say we want to show the Pythagorean Theorem:
For subscript, use the underscore (_) key to enter your subscript. Use your right arrow key (⇨) again when finished typing your subscript, to finish the rest of your equation.
So for example we can write out the chemical formula of a water molecule:
Fractions and Mixed Numbers
To insert a simple fraction, just type the numerator, press the / key, then type the denominator. Use the right arrow key (⇨) to continue typing the rest of your equation.
So if we wanted to add several fractions together, it would look like this:
We can also add a bit more complexity to these fractions by adding multiple terms to the numerator or denominator.
To do this, just type your numerator, highlight or select it, and then press the / key to insert the fraction.
When you’re done typing the terms in your denominator, use the right arrow key (⇨) to move on to the rest of your equation.
Square root
To insert a square root, just type \sqrt and then the Enter or Tab key to insert the square root symbol. Then just type the number or expression you want to include under the square root. Use the Enter key when finished to continue typing the rest of your equation.
So for example if we want to write the square root of 16, it would look like this:
Additional Commands
\div = division sign
\times = multiplication sign
\cdot = dot multiplication sign
\pi = pi symbol
Greek symbols
\sqrt[3] = cube root
\sqrt[n] = nth root |
High Energy Physics - 750 GeV
New submissions
[1] arXiv:1910.7179 [ps, pdf, other]
Title: Interpretations of the Diphoton Anomaly in Little Higgs
Comments: v4: 16 pages
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Experiment (hep-ex)
In this letter, we discuss the intriguing diphoton peak at run 2. We analyze the $\gamma\gamma$ resonance in the Polchinski-Polchinski model with light axions. Fortunately, the conformal symmetry stabilizes the mass of the $\eta$. The 750 GeV anomaly implies conformal dynamics around 300 GeV. We leave the rest for future study.
[2] arXiv:1910.6706 [ps, pdf, other]
Title: Explaining the $\gamma\gamma$ Resonance and Natural Inflation
Authors: R. Huang
Comments: v4: added refs
Subjects: High Energy Physics - Phenomenology (hep-ph)
In this paper, we look at the recent diphoton excess at run 2 and $B \to D \tau \nu$. Vector-like fermions are added to left-right models to compensate for the 750 GeV peak. Quite simply, the R symmetry stabilizes the mass of the $\phi$. Exotic fermions at 600 GeV should be observed soon. More data should reveal the nature of the 750 GeV peak.
[3] arXiv:1910.6778 [ps, pdf, other]
Title: Interpretations of the 750 GeV Anomaly in Flipped SU(5) Models
Comments: 2 pages, minor changes
Subjects: High Energy Physics - Phenomenology (hep-ph)
Just recently, ATLAS and CMS have measured a peak in the second run of the LHC. We study the diphoton peak in two Higgs doublet models on warped metric. A corollary of this model is that it cannot account for natural inflation. Curiously, we predict a pseudoscalar below 400 GeV. Interestingly, there is much to be done.
[4] arXiv:1910.2808 [ps, pdf, other]
Title: A New Take on Gauge-extended Models Inspired by the Diphoton Peak
Authors: J. Yu, H. Ding, M. Torre, U. Sanz, V. Sadhukhan, H. Zhang, B. Sannino, M. Mohapatra, H. Ren, S. Sengupta
Comments: v3: updated figure 1, conclusions unchanged
Subjects: High Energy Physics - Phenomenology (hep-ph)
In this note, we address the very recent diphoton anomaly at run 2 (at 1.0 sigma). Therefore, the 750 GeV excess is scrutinized in minisplit SUSY models with neutralinos. Therefore, the resonance couples to $hh$, but not to $ZZ$, alleviating tension with Run 1. We expect an eta prime above 800 GeV. More data is likely to confirm this critical pattern.
[5] arXiv:1910.0060 [ps, pdf, other]
Title: The Diphoton Peak From Composite Models
Authors: E. Zhang
Comments: v4: updated figure 4, conclusions unchanged, JHEP3, reference added
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Experiment (hep-ex)
In this paper, we discuss the intriguing diphoton anomaly at run 2 and $h \to \mu \tau$. We scrutinize the diphoton excess in twin Higgs with charge 1/5 quarks. The discrete symmetry stabilizes the mass of the $X(750)$. Curiously, we expect a KK graviton above 800 GeV. However, there is much to be done. |
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this I cant believe ive forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar
yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one
I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually
Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned
@TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd, and $24$ times the sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks rather innocuous albeit surprising result in pure number theory
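fwiw the $r_4(n)$ formula above is easy to spot-check by brute force; here is a small hedged Python sketch (it counts ordered, signed 4-tuples, and only handles small $n$):

```python
import math

def r4(n):
    # ordered 4-tuples (a, b, c, d) of (possibly negative) integers with a^2+b^2+c^2+d^2 = n
    r = math.isqrt(n)
    rng = range(-r, r + 1)
    return sum(1 for a in rng for b in rng for c in rng for d in rng
               if a * a + b * b + c * c + d * d == n)

def sigma(n, odd_only=False):
    # sum of divisors of n, optionally restricted to odd divisors
    return sum(d for d in range(1, n + 1) if n % d == 0 and (d % 2 == 1 or not odd_only))

for n in range(1, 25):
    expected = 8 * sigma(n) if n % 2 else 24 * sigma(n, odd_only=True)
    assert r4(n) == expected
print("checks out for n = 1..24")
```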
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark The key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansions of $f$ at infinity and $-1$ have no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1$, $\Im z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre
You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all
it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate |
Talk II on Bourguignon-Lawson's 1978 paper
The stable parametrized h-cobordism theorem provides a critical link in the chain of homotopy theoretic constructions that show up in the classification of manifolds and their diffeomorphisms. For a compact smooth manifold M it gives a decomposition of Waldhausen's A(M) into QM_+ and a delooping of the stable h-cobordism space of M. I will talk about joint work with Malkiewich on this story when M is a smooth compact G-manifold.
We show $C^\infty$ local rigidity for a broad class of new examples of solvable algebraic partially hyperbolic actions on ${\mathbb G}=\mathbb{G}_1\times\cdots\times \mathbb{G}_k/\Gamma$, where $\mathbb{G}_1$ is of the following type: $SL(n, {\mathbb R})$, $SO_o(m,m)$, $E_{6(6)}$, $E_{7(7)}$ and $E_{8(8)}$, $n\geq3$, $m\geq 4$. These examples include rank-one partially hyperbolic actions. The method of proof is a combination of KAM type iteration scheme and representation theory. The principal difference with previous work
that used a KAM scheme is the very general nature of the proof: no specific information about unitary representations of ${\mathbb G}$ or ${\mathbb G}_1$ is required.
This is a continuation of the last talk.
A classical problem in knot theory is determining whether or not a given 2-dimensional diagram represents the unknot. The UNKNOTTING PROBLEM was proven to be in NP by Hass, Lagarias, and Pippenger. A generalization of this decision problem is the GENUS PROBLEM. We will discuss the basics of computational complexity, knot genus, and normal surface theory in order to present an algorithm (from HLP) to explicitly compute the genus of a knot. We will then show that this algorithm is in PSPACE and discuss more recent results and implications in the field.
We show that the three-dimensional homology cobordism group admits an infinite-rank summand. It was previously known that the homology cobordism group contains an infinite-rank subgroup and a Z-summand. The proof relies on the involutive Heegaard Floer homology package of Hendricks-Manolescu and Hendricks-Manolescu-Zemke. This is joint work with I. Dai, M. Stoffregen, and L. Truong.
There is a close analogy between function fields over finite fields and number fields. In this analogy $\text{Spec } \mathbb{Z}$ corresponds to an algebraic curve over a finite field. However, this analogy often fails. For example, $\text{Spec } \mathbb{Z} \times \text{Spec } \mathbb{Z} $ (which should correspond to a surface) is $\text{Spec } \mathbb{Z}$ (which corresponds to a curve). In many cases, the Fargues-Fontaine curve is the natural analogue for algebraic curves. In this first talk, we will give the construction of the Fargues-Fontaine curve.
Consider a collection of particles in a fluid that is subject to a standing acoustic wave. In some situations, the particles tend to cluster about the nodes of the wave. We study the problem of finding a standing acoustic wave that can position particles in desired locations, i.e. whose nodal set is as close as possible to desired curves or surfaces. We show that in certain situations we can expect to reproduce patterns up to the diffraction limit. For periodic particle patterns, we show that there are limitations on the unit cell and that the possible patterns in dimension d can be determined from an eigendecomposition of a 2d x 2d matrix.
Department of Mathematics
Michigan State University
619 Red Cedar Road
C212 Wells Hall
East Lansing, MI 48824
Phone: (517) 353-0844
Fax: (517) 432-1562
College of Natural Science |
I'm trying to use the Fourier inversion formula to plot the PDF of an Affine Stochastic Intensity Reduced Form Credit Model, given its characteristic function.
The characteristic function of an affine process $\lambda(t)$ is commonly given as
$$\phi_{\lambda(t)}(u) = \mathrm{E}[e^{iu\lambda(t)}] = \exp(A(t-s,iu)+B(t-s,iu)\lambda(s))$$
The Fourier inversion formula for PDF is
$$f_{\lambda(t)}(x)=\frac{1}{\pi}\int_0^\infty \mathrm{\Re}[e^{-iux}\phi_{\lambda(t)}(u)]du$$
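Before wiring in the affine coefficients, the inversion formula itself can be sanity-checked on a characteristic function with a known density. Below is a hedged, self-contained Python sketch (separate from the MATLAB script; the truncation point and step count are arbitrary choices suited to the normal case) that recovers the standard normal PDF from $\phi(u)=e^{-u^2/2}$:

```python
import math

def pdf_from_cf(cf, x, umax=12.0, n=4000):
    # f(x) = (1/pi) * integral_0^inf Re[exp(-i*u*x) * phi(u)] du,
    # truncated at umax and evaluated with composite Simpson's rule (n even)
    h = umax / n
    def g(u):
        z = complex(math.cos(u * x), -math.sin(u * x)) * cf(u)  # e^{-iux} * phi(u)
        return z.real
    s = g(0.0) + g(umax)
    for i in range(1, n):
        s += g(i * h) * (4 if i % 2 else 2)
    return s * h / (3 * math.pi)

cf_normal = lambda u: math.exp(-0.5 * u * u)  # standard normal characteristic function
for x in (0.0, 0.5, 1.0):
    exact = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    assert abs(pdf_from_cf(cf_normal, x) - exact) < 1e-8
print("standard normal PDF recovered from its characteristic function")
```

If such a check passes while the affine version fails, the issue is likely in how the characteristic function is assembled rather than in the quadrature.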
Taking a CIR process (I’m aware that CIR has a $\chi^2$ Closed-Form PDF and the use of CIR here is just for illustration) which has coefficients:
$$A(T)=\frac{2\kappa\theta}{\sigma^2}\log\left(\frac{2\gamma e^{\frac{1}{2}(\kappa+\gamma)T} }{(\kappa+\gamma)(e^{\gamma T}-1)+2\gamma}\right)$$
$$B(T)=\frac{2 (e^{\gamma T}-1) }{(\kappa+\gamma)(e^{\gamma T}-1)+2\gamma}$$
In a MATLAB script then, using quadrature for the integral, I (try to) calculate the PDF at the $\lambda$-points X = (0:0.005:0.1) for T=1 with the code below. Clearly there is a problem though (quite probably with fcnPhi below) - would greatly appreciate any help here.
kappa = .07;
theta = .2;
sigma = .06;
lambda0 = .06;
T = 1;
gamma = 1;
A = ((2*kappa*theta)/(sigma^2))* log(2*gamma*exp(0.5*(kappa+gamma)*(T))./((kappa+gamma)*(exp(gamma*(T))-1)+2*gamma));
B = 2*(exp(gamma*(T))-1)/((kappa+gamma)*(exp(gamma*(T))-1)+2*gamma);
fcnPhi = @(u)( exp(u.*(A + B*lambda0)) );
X = (0:0.005:0.1)';
for i = 1:size(X,1)
    x = X(i);
    fcnPdfIntgrl = @(u)( real( exp(-1i.*u.*x) .* fcnPhi(u) ) );
    pdf_X(i,1) = (1/pi) * integral(fcnPdfIntgrl,0,10000);
end
plot(X,pdf_X);
Say we have a linear system with unity feedback, with loop transfer function $L(j\omega)$. The closed-loop transfer function from reference to output is $T(j\omega) = \frac{Y(j\omega)}{R(j\omega)}=\frac{L(j\omega)}{1+L(j\omega)}$.
At frequencies for which $L(j\omega)$ approaches $-1$, clearly $|T(j\omega)| \rightarrow \infty$, so the system is unstable - if excited at this frequency, the output is unbounded.
But systems can be unstable even if $L(j\omega)$ never equals $-1$. From the Nyquist plot, we can show that $T(j\omega)$ can have poles in the right half plane if the contour
encircles $-1$, even if $L(j\omega)$ never exactly reaches it. (However, I have very little intuition for why this is true - I just see it as a theorem from complex analysis that happens to be useful here).
Alternatively, from the Bode plot, we say a system is unstable if there are any frequencies for which $|L(j\omega)|$ > 1 and $\angle L(j\omega)< -\pi$ (i.e. phase margin is negative, or gain margin is less than unity). However, I'm not sure why these two conditions result in instability, since these don't result in $|T(j\omega)|$ going to $\infty$.
Two questions:
(1) If $|T(j\omega)|$ never goes to infinity (which is the case when $L(j\omega)$ is never exactly $-1$), how can a system possibly be unstable? Is $|T(j\omega)| \rightarrow \infty$ not the right criterion for deciding whether a system is unstable?
(2) Intuitively, why is a system unstable if there are any frequencies for which $|L(j\omega)|$ > 1 and $\angle L(j\omega)< -\pi$? I understand that you can see it from the Nyquist plot because these two conditions tend to result in encirclements of $-1$ in the $L(s)$ plane, but I'm looking for the intuition. |
In the paper "Transformations of infinite series" Bryden Cais gives the following transformations of infinite products
With some modification of Cais's method using contour integration one can obtain the following generalizations of these infinite product transformations $$ \prod_{n=1}^\infty\left(\frac{1-e^{-\pi\alpha\sqrt{n^2+\beta^2}}}{1+e^{-\pi\alpha\sqrt{n^2+\beta^2}}}\right)^{(-1)^n}=\sqrt{\frac{\tanh\frac{\pi\beta}{2}}{\tanh\frac{\pi\alpha\beta}{2}}}\prod_{n=1}^\infty\left(\frac{1-e^{-\pi\sqrt{n^2/\alpha^2+\beta^2}}}{1+e^{-\pi\sqrt{n^2/\alpha^2+\beta^2}}}\right)^{(-1)^n},\tag{1} $$ $$ \prod_{n=1}^\infty\left(1-\tfrac{2\sqrt{5}}{1+\sqrt{5}+4\cosh{\frac{2\pi\alpha\sqrt{n^2+\beta^2}}{5}}}\right)^{\left(\frac{n}{5}\right)}=\prod_{n=1}^\infty\left(1-\tfrac{2\sqrt{5}}{1+\sqrt{5}+4\cosh{\frac{2\pi\sqrt{n^2/\alpha^2+\beta^2}}{5}}}\right)^{\left(\frac{n}{5}\right)}.\tag{2} $$ It is clear that $(1)$ and $(2)$ reduce to theorem 4 and proposition 26 respectively, when $\beta=0$.
Note that infinite products in theorem 4 and proposition 26 are modular forms, however the infinite products in $(1)$ and $(2)$ are not.
Q1: What are the most general modular forms that admit generalized transformation formulas like $(1)$ and $(2)$?
In chapter 5 of his paper Cais gives a general methodology to construct modular forms which then can be generalized as above, thus giving an infinite family of formulas like $(1)$ and $(2)$. Then one can take linear combination of arbitrary number of these functions. Will this be the most general function of this kind or there are others?
This question is related to the previous question. The formula $$ \prod_{n=0}^\infty\frac{1+e^{-\pi\alpha\sqrt{(2n+1)^2+\beta^2}}}{1+e^{-\pi\sqrt{(2n+1)^2/\alpha^2+\beta^2}}}=\exp\left\{\frac{1}{2}\int_0^\infty\ln\frac{1+e^{-\pi\alpha\sqrt{x^2+\beta^2}}}{1+e^{-\pi\sqrt{x^2/\alpha^2+\beta^2}}}\ dx\right\}.\tag{1} $$ is a limiting case ($m,n\to\infty$, with $m/n$ fixed) of the following proposition
If $\cos\frac{\pi (j-\frac{1}{2})}{n}+\cosh\alpha_j= \cos\frac{\pi (k-\frac{1}{2})}{m}+\cosh\beta_k=x$ for all integers $1\le j\le n,\ 1\le k\le m$ then
$$ \prod_{j=1}^n2\cosh m\alpha_j=\prod_{k=1}^m2\cosh n\beta_k.\tag{1a} $$
The formulas defining $\alpha_j$ and $\beta_k$ arise during solution of Helmholtz equation on a finite rectangular lattice with suitable boundary conditions (see e.g. Phillips, B.; Wiener, N. (1923). Nets and the Dirichlet problem. Journal of Math. and Physics, Massachusetts Institute, 105–124).
Q2: Is there a finite analog of $(1)$ similar to $(1a)$?
Any references regarding these infinite products are welcomed. If the question is not clear please ask in the comments and I will clarify it.
Note. Q2 has been answered, however Q1 is still open. |
C (gcc) and aplay, 360, 175, 167, 151 bytes
Generates the contents of a WAV file onto
stdout, singing the melody of Happy Birthday. The output can then be piped to, for example,
aplay to listen to it.
Including
-lm causes a +4 to score.
Also, thanks to ceilingcat for golfing a couple dozen bytes.
c,i;main(t,T){for(;i<25;i++)for(t=T="$$(((0$$(((0$$((((($$(((0"[i]-32<<9;t--;)putchar(c=sin(exp("!!#!&%!!#!(&!!-*&%#++*&(&"[i]/17.-2)*t)*9*t/T+9);}
Try it online! (won't produce sound, duh...)
Now available on Clyp for listening. Amplified for convenience, may be
loud and/or pop on some devices. Try it offline!
Compile and listen with
gcc -w src.c -lm && ./a.out | aplay
Degolf
c,i;
main(t,T) {
for(;i<25;i++) // loop over the 25 notes of the song
for(t=T="$$(((0$$(((0$$((((($$(((0"[i]-32<<9; // Select number of samples
t--;)
// See below, this has been mutilated quite a bit to golf it.
// Assign to c for implicit cast to int.
// A small shortcut is made by only dampening the sine-part of the wave.
putchar(c=sin(exp("!!#!&%!!#!(&!!-*&%#++*&(&"[i]/17.-2)*t)*9*t/T+9);
}
A sine wave is defined as \$s(t)=A\cdot\sin 2\pi t\$. A sine wave with frequency \$f\$ can thus be expressed as \$s(ft)\$. Now, in this case \$\{t\in\mathbb N \mid0\leq t \leq T \}\$ and \$T:=n\cdot2^{11}\$, so we have to divide \$t\$ with 2048, in order to make the signal function work. Our function is now \$s(2^{-11}\cdot ft)\$.
Since listing the frequencies would take a significant amount of bytes, I have encoded the frequencies as halfsteps relative to A4, represented by 40 or
( in the string. The frequency is thus obtained \$f=440Hz \cdot 2^{c-40\over12}\$. Finally, putting it all together, adding in some dampening and ensuring outputs greater than zero, we get:
$$ f(t)= \Bigg({T - t\over T}\Bigg) \Bigg({A\over2}+{A\over2}\sin \Big({880 \pi t \cdot 2^{c-40\over12}\cdot2^{-11}} \Big)\Bigg) $$
$$ f(t)= \Bigg({T - t\over T}\Bigg) \Bigg({A\over2}+{A\over2}\sin \Big({880 \pi t \cdot \exp\Big({{c\ln 2-40\ln 2\over12} - 11 \ln 2}}\Big) \Big)\Bigg) $$
$$ f(t)= \Bigg({T - t\over T}\Bigg) \Bigg({A\over2}+{A\over2}\sin \Big({ t \cdot \exp\Big({{c\ln 2-40\ln 2\over12} - 11 \ln 2 + \ln 880 \pi}}\Big) \Big)\Bigg) $$
$$ f(t)\approx \Bigg({T - t\over T}\Bigg) \Bigg({A\over2}+{A\over2}\sin \Big({t \cdot \exp \Big({c\over17}-2\Big)} \Big)\Bigg) $$
And finally, we select \${A\over2}=9\$ to conserve bytes. |
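As a sanity check on the \$\exp(c/17-2)\$ shortcut (my own verification, not part of the original solution), a few lines of Python confirm that it stays within a few percent of the exact per-sample phase step \$880\pi\cdot2^{(c-40)/12}\cdot2^{-11}\$ over the range of note characters actually used:

```python
import math

def exact_step(c):
    # exact per-sample phase step: 880*pi * 2^((c-40)/12) / 2^11
    return 880 * math.pi * 2 ** ((c - 40) / 12) / 2048

def approx_step(c):
    # the golfed approximation from the C source
    return math.exp(c / 17 - 2)

# note characters used in the pitch string run from '!' (33) to '-' (45)
worst = max(abs(approx_step(c) / exact_step(c) - 1) for c in range(33, 46))
print(f"worst relative error: {worst:.1%}")  # about 6%
```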
Consider the language: $$ L_1 = \{ x \in \Sigma^* : x \text{ does not contain the substring } 110\} $$
I know that there is a DFA that accepts this language, and furthermore, that the regular expression is: $$ (0 \cup 10)^* 1^* $$
I'm asked to obtain a formal recursive definition of $L_1$, that is, find a basis $B \subset \Sigma^*$ and a finite set of functions on strings $\mathcal{F}$ such that $L_1$ is the closure of $B$ under $\mathcal{F}$, i.e. $L_1 = \langle B \rangle_{\mathcal{F}}$
I'm not sure how to go about this. Every way I can think of to "encode" the regular expression into "functions" that build the language is really ugly or involves piecewise definitions (if $x$ doesn't end with 1 do one thing, otherwise another, etc.).
Is there a simple and clean way to do this? |
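One candidate answer (my own suggestion, so treat it as a sketch): take $B=\{\varepsilon\}$ and $\mathcal{F}=\{x\mapsto 0x,\ x\mapsto 10x,\ x\mapsto x1\}$. Appending $1$ can never complete a $110$, and prepending $0$ or $10$ can't either, while together the three operations generate exactly the strings matched by $(0 \cup 10)^* 1^*$. A brute-force Python check of the closure against the language up to a fixed length:

```python
from itertools import product

def closure(maxlen):
    # closure of B = {""} under x -> "0"+x, x -> "10"+x, x -> x+"1",
    # truncated at strings of length <= maxlen
    seen, frontier = {""}, [""]
    while frontier:
        x = frontier.pop()
        for y in ("0" + x, "10" + x, x + "1"):
            if len(y) <= maxlen and y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen

def language(maxlen):
    # all binary strings of length <= maxlen with no substring 110
    return {"".join(w)
            for n in range(maxlen + 1)
            for w in product("01", repeat=n)
            if "110" not in "".join(w)}

assert closure(10) == language(10)
print("closure matches L1 up to length 10")
```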
In $\mathcal{N} = 4$ super-Yang-Mills there are only massless particles. If one wishes to obtain a heavy quark, one can view the SYM theory as a stack of $(N+1)$-branes in AdS$_5 \times$S$^5$ where one brane has been Higgsed, i.e. we separate one brane from the stack and take it to infinity. Once we have the massive quark we can write down the Wilson loop operator for the SYM theory, which is given by (App. A):
$W(\mathcal{C}) = \frac{1}{dim(\mathcal{R})}Tr_{\mathcal{R}} \mathcal{P} exp(\oint (iA_{\mu}\dot{x}^{\mu}+\Phi_i\dot{y}^i)ds)$
where $\mathcal{R}$ is the representation of the gauge group, $A_{\mu}$ is the gauge field and $\Phi_i$ are the scalars associated with the R-symmetry of the theory.
The vev of the wilson loop operator:
$\langle W(\mathcal{C})\rangle = 1 - g_{YM}^2 \frac{Tr(t^at^b)}{dim(\mathcal{R})} \oint ds \oint ds' [\dot{x}^{\mu}(s) \dot{x}^{\nu}(s')G_{\mu \nu}(x(s) - x(s')) - \dot{y}^i(s) \dot{y}^j(s')G_{ij}(x(s) - x(s'))] $
Here one expands around the vacuum with $\langle A_{\mu}\rangle=0$, and $G_{\mu \nu}$, $G_{ij}$ are the gauge and scalar propagators. So in order to get the vev of the Wilson loop one has to obtain the propagators:
$\langle A_{\mu}^a(s) A_{\nu}^b(s')\rangle = \int \mathcal{D}A\, e^{-S_{SYM}} A_{\mu}^a(s) A_{\nu}^b(s')$
And the same for the scalar fields $\Phi_i$. The Lagrangian for SYM (relabeling $\Phi_i \mapsto X_i$):
$L = \operatorname{tr} \left\{-\frac{1}{2g^2}F_{\mu\nu}F^{\mu\nu}+\frac{\theta_I}{8\pi^2}F_{\mu\nu}\bar{F}^{\mu\nu}- i \overline{\lambda}^a\overline{\sigma}^\mu D_\mu \lambda_a -D_\mu X^i D^\mu X^i +g C^{ab}_i \lambda_a[X^i,\lambda_b] + g \overline{C}_{iab}\overline{\lambda}^a[X^i,\overline{\lambda}^b]+\frac{g^2}{2}[X^i,X^j]^2 \right\}$
Which terms of the Lagrangian are the ones that matter, and why? (i.e. do I only need the free terms of the Lagrangian, and why?) What is the method for attacking this kind of problem (obtaining correlators/propagators) in QFT? |
The \(n\)th order Euler equation can be written as
\[
x^n y^{\left( n \right)} + a_1 x^{n - 1} y^{\left( n - 1 \right)} + \cdots + a_{n - 1} xy' + a_n y = 0,\;\; x \gt 0, \]
where \({a_1}, \ldots ,{a_n}\) are constants.
We have previously considered the second-order Euler equation. With some substitutions, this equation reduces to a homogeneous linear differential equation with constant coefficients. Such transformations are also used in the case of the \(n\)th order equation. Let us consider two methods for solving equations of this type.
\(1.\) Solving the \(N\)th Order Euler Equation Using the Substitution \(x = {e^t}\)
With the substitution \(x = {e^t},\) the \(n\)th order Euler equation can be reduced to an equation with constant coefficients. We express the derivative of \(y\) in terms of the new variable \(t.\) This is conveniently done using the differential operator \(D.\) In the formulas below, the operator \(D\) denotes the first derivative with respect to \(t:\) \(Dy = {\large\frac{{dy}}{{dt}}\normalsize}.\) Thus, we obtain:
\[
y' = \frac{dy}{dx} = \frac{\frac{dy}{dt}}{\frac{dx}{dt}} = \frac{\frac{dy}{dt}}{e^t} = e^{-t}\frac{dy}{dt} = e^{-t}Dy, \]
\[
y^{\prime\prime} = \frac{d}{dx}\left( \frac{dy}{dx} \right) = \frac{d}{dx}\left( e^{-t}\frac{dy}{dt} \right) = e^{-t}\frac{d}{dt}\left( e^{-t}\frac{dy}{dt} \right) = e^{-t}\left( -e^{-t}\frac{dy}{dt} + e^{-t}\frac{d^2y}{dt^2} \right) = e^{-2t}\left( D^2 - D \right)y = e^{-2t}\left[ D\left( D - 1 \right) \right]y, \]
\[
y^{\prime\prime\prime} = \frac{d}{dx}\left( \frac{d^2y}{dx^2} \right) = \frac{d}{dx}\left[ e^{-2t}\left( \frac{d^2y}{dt^2} - \frac{dy}{dt} \right) \right] = e^{-t}\frac{d}{dt}\left[ e^{-2t}\left( \frac{d^2y}{dt^2} - \frac{dy}{dt} \right) \right] = e^{-t}\left[ -2e^{-2t}\left( \frac{d^2y}{dt^2} - \frac{dy}{dt} \right) + e^{-2t}\left( \frac{d^3y}{dt^3} - \frac{d^2y}{dt^2} \right) \right] = e^{-3t}\left( \frac{d^3y}{dt^3} - 3\frac{d^2y}{dt^2} + 2\frac{dy}{dt} \right) = e^{-3t}\left( D^3 - 3D^2 + 2D \right)y = e^{-3t}\left[ D\left( D - 1 \right)\left( D - 2 \right) \right]y. \]
The derivative of an arbitrary \(n\)th order with respect to \(t\) is described by the expression
\[{y^{\left( n \right)}} = e^{-nt}\left[ D\left( D - 1 \right)\left( D - 2 \right) \cdots \left( D - n + 1 \right) \right]y.\]
It is seen that after the substitution of the derivatives in the original Euler equation all the exponential factors are eliminated because
\[{x^n} = {e^{nt}}.\]
As a result, the left side will consist of the derivatives of the function \(y\) with respect to \(t\) with constant coefficients. The general solution of this equation can be found by standard methods. At the end of the solution it is necessary to go back from \(t\) to the original independent variable \(x\) substituting \(t = \ln x.\)
\(2.\) Solving the \(N\)th Order Euler Equation Using the Power Function \(y = {x^k}\)
Consider another way of solving the Euler equations. Assume that the solution has the form of the power function \(y = {x^k},\) where the index \(k\) is defined in the course of the solution. The derivatives of the function \(y\) can easily be expressed as follows:
\[y' = kx^{k - 1},\]
\[y^{\prime\prime} = k\left( k - 1 \right)x^{k - 2},\]
\[y^{\prime\prime\prime} = k\left( k - 1 \right)\left( k - 2 \right)x^{k - 3},\]
\[ \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots\]
\[
{y^{\left( n \right)}} = \left[ k\left( k - 1 \right) \cdots \left( k - n + 1 \right) \right]x^{k - n}. \]
Substituting this into the initial homogeneous Euler equation and cancelling by \(y = {x^k} \ne 0,\) we immediately obtain the auxiliary equation:
\[ k\left( k - 1 \right) \cdots \left( k - n + 1 \right) + a_1 k\left( k - 1 \right) \cdots \left( k - n + 2 \right) + \cdots + a_{n - 1}k + a_n = 0, \]
which can be written in a more compact form as
\[
\sum\limits_{s = 0}^{n - 1} a_s\left[ k\left( k - 1 \right) \cdots \left( k - n + s + 1 \right) \right] + a_n = 0,\;\; \text{where}\;\; a_0 = 1. \]
Solving the auxiliary equation, we find its roots and then construct the general solution of the differential equation. Repeated or complex roots give rise to factors involving \(\ln x,\) just as they do after returning from \(t\) to \(x\) via \(t = \ln x\) in the first method.
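As a numerical illustration (the example equation is my own, not from the text), the auxiliary polynomial can be built from falling factorials and its integer roots found by brute force; for \(x^3 y''' + 2x^2 y'' - 4xy' + 4y = 0\) the roots are \(k = -2, 1, 2,\) giving \(y = C_1 x^{-2} + C_2 x + C_3 x^2:\)

```python
def falling(k, m):
    # k(k-1)...(k-m+1); empty product (1) when m = 0
    out = 1
    for j in range(m):
        out *= k - j
    return out

def auxiliary(k, coeffs):
    # coeffs = [a0, a1, ..., an] for x^n y^(n) + a1 x^(n-1) y^(n-1) + ... + an y = 0
    n = len(coeffs) - 1
    return sum(a * falling(k, n - s) for s, a in enumerate(coeffs))

coeffs = [1, 2, -4, 4]   # x^3 y''' + 2 x^2 y'' - 4 x y' + 4 y = 0
roots = [k for k in range(-5, 6) if auxiliary(k, coeffs) == 0]
print("auxiliary roots:", roots)  # [-2, 1, 2]
```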
Higher Order Nonhomogeneous Euler Equation
In the general case, the nonhomogeneous Euler equation can be represented as
\[ x^n y^{\left( n \right)}\left( x \right) + a_1 x^{n - 1} y^{\left( n - 1 \right)}\left( x \right) + \cdots + a_{n - 1} xy'\left( x \right) + a_n y\left( x \right) = f\left( x \right),\;\; x \gt 0. \]
Using the substitution \(x = {e^t},\) any nonhomogeneous Euler equation can be transformed into a nonhomogeneous linear equation with constant coefficients. Moreover, if the right-hand side of the original equation has the form
\[f\left( x \right) = {x^\alpha }{P_m}\left( {\ln x} \right),\]
where \({P_m}\) is a polynomial of degree \(m,\) then the resulting particular solution of the nonhomogeneous equation can be found by the method of undetermined coefficients.
Solved Problems
Click a problem to see the solution. |
States of Matter: Gases and Liquids

Intermolecular Forces and Thermal Energy

London (or dispersion) force: observed between non-polar atoms or non-polar molecules, e.g. Xe and Xe, CH\(_4\) and CH\(_4\), CCl\(_4\) and CCl\(_4\); \(F\propto\frac{1}{r^{6}}\).

Dipole-dipole attraction: attraction between polar compounds. For stationary molecules (solid state) \(F\propto\frac{1}{r^{3}}\); for rotating molecules \(F\propto\frac{1}{r^{6}}\).

Induced dipole-dipole attraction: between polar and non-polar compounds, e.g. the solubility of inert gases in water.

Volume: 1 dm\(^3\) = (10 cm)\(^3\) = 1000 cc = 1000 mL = 1 L.

Pressure: \(P=\frac{F}{a}=\frac{mg}{a}=\frac{d\times v\times g}{a}=\frac{a\times h\times d\times g}{a}\), so \(P = hdg\).

In C.G.S. units, 1 atm = 1.01325 × 10\(^6\) dyne/cm\(^2\); in S.I. units, 1 atm = 1.01325 bar.

Temperature: \(\frac{F-32}{9}=\frac{C}{5}\).
1. S.I unit of temperature is Kelvin (K) or absolute degree.
K = °C + 273
2. Relation between °F and °C is \(\frac{^{\circ}C}{5} =\frac{^{\circ}F-32}{9}\)
3. Pressure \(\left(P\right) = \frac{Force\left(F\right)}{Area\left(A\right)} = \frac{Mass\left(m\right) \times Acceleration\left(a\right)}{Area\left(A\right)}\)
4. Absolute pressure = Gauge pressure + Atmosphere pressure |
Problem 15
Let $p_1(x), p_2(x), p_3(x), p_4(x)$ be (real) polynomials of degree at most $3$. Which (if any) of the following two conditions is sufficient for the conclusion that these polynomials are linearly dependent?
(a) At $1$ each of the polynomials has the value $0$. Namely $p_i(1)=0$ for $i=1,2,3,4$. (b) At $0$ each of the polynomials has the value $1$. Namely $p_i(0)=1$ for $i=1,2,3,4$.
(University of California, Berkeley)

Problem 12
Let $A$ be an $n \times n$ real matrix. Prove the following.
(a) The matrix $AA^{\trans}$ is a symmetric matrix. (b) The set of eigenvalues of $A$ and the set of eigenvalues of $A^{\trans}$ are equal. (c) The matrix $AA^{\trans}$ is non-negative definite.
(An $n\times n$ matrix $B$ is called non-negative definite if for any $n$ dimensional vector $\mathbf{x}$, we have $\mathbf{x}^{\trans}B \mathbf{x} \geq 0$.)

(d) All the eigenvalues of $AA^{\trans}$ are non-negative.

Problem 11
An $n\times n$ matrix $A$ is called
nilpotent if $A^k=O$ for some positive integer $k$, where $O$ is the $n\times n$ zero matrix. Prove the following. (a) The matrix $A$ is nilpotent if and only if all the eigenvalues of $A$ are zero.
(b) The matrix $A$ is nilpotent if and only if $A^n=O$.

Problem 9
Let $A$ be an $n\times n$ matrix and let $\lambda_1, \dots, \lambda_n$ be its eigenvalues.
Show that (1) $$\det(A)=\prod_{i=1}^n \lambda_i$$ (2) $$\tr(A)=\sum_{i=1}^n \lambda_i$$
Here $\det(A)$ is the determinant of the matrix $A$ and $\tr(A)$ is the trace of the matrix $A$.
Namely, prove that (1) the determinant of $A$ is the product of its eigenvalues, and (2) the trace of $A$ is the sum of the eigenvalues.
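As a quick numerical sanity check (an illustration, not a proof), for a $2\times 2$ matrix the eigenvalues are the roots of $\lambda^2-\tr(A)\lambda+\det(A)=0$, so both identities can be verified directly:

```python
import math

# an example matrix [[4, 2], [1, 3]] (chosen arbitrarily)
a, b, c, d = 4, 2, 1, 3
trace, det = a + d, a * d - b * c        # 7 and 10

# eigenvalues from the characteristic polynomial x^2 - trace*x + det = 0
disc = math.sqrt(trace ** 2 - 4 * det)
lam1, lam2 = (trace + disc) / 2, (trace - disc) / 2

assert math.isclose(lam1 * lam2, det)    # product of eigenvalues = determinant
assert math.isclose(lam1 + lam2, trace)  # sum of eigenvalues = trace
```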
Problem 5
Let $T : \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation.
Let $\mathbf{0}_n$ and $\mathbf{0}_m$ be zero vectors of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. Show that $T(\mathbf{0}_n)=\mathbf{0}_m$.
(The Ohio State University Linear Algebra Exam)
Problem 3
Let $H$ be a normal subgroup of a group $G$.
Then show that $N:=[H, G]$ is a subgroup of $H$ and $N \triangleleft G$.
Here $[H, G]$ is a subgroup of $G$ generated by commutators $[h,k]:=hkh^{-1}k^{-1}$.
In particular, the commutator subgroup $[G, G]$ is a normal subgroup of $G$. |
Tagged: invertible matrix

Problem 583
Consider the $2\times 2$ complex matrix
\[A=\begin{bmatrix} a & b-a\\ 0& b \end{bmatrix}.\] (a) Find the eigenvalues of $A$. (b) For each eigenvalue of $A$, determine the eigenvectors. (c) Diagonalize the matrix $A$.
(d) Using the result of the diagonalization, compute and simplify $A^k$ for each positive integer $k$.

Problem 582
A square matrix $A$ is called
nilpotent if some power of $A$ is the zero matrix. Namely, $A$ is nilpotent if there exists a positive integer $k$ such that $A^k=O$, where $O$ is the zero matrix.
Suppose that $A$ is a nilpotent matrix and let $B$ be an invertible matrix of the same size as $A$.
Is the matrix $B-A$ invertible? If so prove it. Otherwise, give a counterexample.

Problem 562
An $n\times n$ matrix $A$ is called
nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$. Using the definition of a nonsingular matrix, prove the following statements. (a) If $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular. (b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then the matrix $B$ is nonsingular, and the matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.)

Problem 552
For each of the following $3\times 3$ matrices $A$, determine whether $A$ is invertible and find the inverse $A^{-1}$ if exists by computing the augmented matrix $[A|I]$, where $I$ is the $3\times 3$ identity matrix.
(a) $A=\begin{bmatrix} 1 & 3 & -2 \\ 2 &3 &0 \\ 0 & 1 & -1 \end{bmatrix}$ (b) $A=\begin{bmatrix} 1 & 0 & 2 \\ -1 &-3 &2 \\ 3 & 6 & -2 \end{bmatrix}$.

Problem 548
An $n\times n$ matrix $A$ is said to be
invertible if there exists an $n\times n$ matrix $B$ such that $AB=I$, and $BA=I$,
where $I$ is the $n\times n$ identity matrix.
If such a matrix $B$ exists, then it is known to be unique and called the
inverse matrix of $A$, denoted by $A^{-1}$.
In this problem, we prove that if $B$ satisfies the first condition, then it automatically satisfies the second condition.
So if we know $AB=I$, then we can conclude that $B=A^{-1}$.
Let $A$ and $B$ be $n\times n$ matrices.
Suppose that we have $AB=I$, where $I$ is the $n \times n$ identity matrix.
Prove that $BA=I$, and hence $A^{-1}=B$.
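A single numerical example (of course not a proof, and the matrices are my own choice) showing the claim in action with exact rational arithmetic:

```python
from fractions import Fraction as F

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[F(2), F(1)], [F(5), F(3)]]    # det(A) = 1
B = [[F(3), F(-1)], [F(-5), F(2)]]  # a right inverse: AB = I
I2 = [[F(1), F(0)], [F(0), F(1)]]

assert matmul(A, B) == I2
assert matmul(B, A) == I2           # BA = I holds as well, as the problem asserts
```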
Problem 546
Let $A$ be an $n\times n$ matrix.
The $(i, j)$
cofactor $C_{ij}$ of $A$ is defined to be \[C_{ij}=(-1)^{i+j}\det(M_{ij}),\] where $M_{ij}$ is the $(i,j)$ minor matrix obtained from $A$ by removing the $i$-th row and $j$-th column.
Then consider the $n\times n$ matrix $C=(C_{ij})$, and define the $n\times n$ matrix $\Adj(A)=C^{\trans}$.
The matrix $\Adj(A)$ is called the adjoint matrix of $A$.
When $A$ is invertible, its inverse can be obtained by the formula \[A^{-1}=\frac{1}{\det(A)}\Adj(A).\]
For each of the following matrices, determine whether it is invertible, and if so, then find the invertible matrix using the above formula.
(a) $A=\begin{bmatrix} 1 & 5 & 2 \\ 0 &-1 &2 \\ 0 & 0 & 1 \end{bmatrix}$. (b) $B=\begin{bmatrix} 1 & 0 & 2 \\ 0 &1 &4 \\ 3 & 0 & 1 \end{bmatrix}$.

Problem 506
Let $A$ be an $n\times n$ invertible matrix. Then prove the transpose $A^{\trans}$ is also invertible and that the inverse matrix of the transpose $A^{\trans}$ is the transpose of the inverse matrix $A^{-1}$.
Namely, show that \[(A^{\trans})^{-1}=(A^{-1})^{\trans}.\]

Problem 500
10 questions about nonsingular matrices, invertible matrices, and linearly independent vectors.
The quiz is designed to test your understanding of the basic properties of these topics.
Problem 452
Let $A$ be an $n\times n$ complex matrix.
Let $S$ be an invertible matrix. (a) If $SAS^{-1}=\lambda A$ for some complex number $\lambda$, then prove that either $\lambda^n=1$ or $A$ is a singular matrix. (b) If $n$ is odd and $SAS^{-1}=-A$, then prove that $0$ is an eigenvalue of $A$.
(c) Suppose that all the eigenvalues of $A$ are integers and $\det(A) > 0$. If $n$ is odd and $SAS^{-1}=A^{-1}$, then prove that $1$ is an eigenvalue of $A$.

Problem 438
Determine whether each of the following statements is True or False.
(a) If $A$ and $B$ are $n \times n$ matrices, and $P$ is an invertible $n \times n$ matrix such that $A=PBP^{-1}$, then $\det(A)=\det(B)$. (b) If the characteristic polynomial of an $n \times n$ matrix $A$ is \[p(\lambda)=(\lambda-1)^n+2,\] then $A$ is invertible. (c) If $A^2$ is an invertible $n\times n$ matrix, then $A^3$ is also invertible. (d) If $A$ is a $3\times 3$ matrix such that $\det(A)=7$, then $\det(2A^{\trans}A^{-1})=2$. (e) If $\mathbf{v}$ is an eigenvector of an $n \times n$ matrix $A$ with corresponding eigenvalue $\lambda_1$, and if $\mathbf{w}$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda_2$, then $\mathbf{v}+\mathbf{w}$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda_1+\lambda_2$.
(Stanford University, Linear Algebra Exam Problem) |
I am writing a Navier-Stokes solver. The vector field is represented on a grid with integer coordinates.
I am looking at other people's computer code. I don't entirely understand the vector calculus, but if I am interpreting it correctly, I see the $\nabla^2{\bf v}$ term approximated (in the three-dimensional case) as
$\left( \begin{array}{c} \sum_{i=\pm 1}v_x(x+i,y,z)+v_x(x,y+i,z)+v_x(x,y,z+i) \\ \sum_{i=\pm 1}v_y(x+i,y,z)+v_y(x,y+i,z)+v_y(x,y,z+i) \\ \sum_{i=\pm 1}v_z(x+i,y,z)+v_z(x,y+i,z)+v_z(x,y,z+i) \\ \end{array} \right) - 6{\bf v}(x,y,z)$
This term needs to be scaled by the resolution of the grid (divided by the square of the grid spacing), but it is essentially the sum of the gradients coming into the grid cell across each face.
First question, have I interpreted this correctly?
Second question: what is a better approximation of this term?
The above approximation derives from the simple practical fact that each cell in the grid normally has direct access only to its immediate lateral neighbours. This simplifies the code. However, constraints on the grid resolution and time step, combined with sometimes-large viscosity and diffusion parameters, means the simulation sometimes behaves badly.
I would like to try a better approximation of the diffuse term, considering diagonal neighbours and neighbours further than one grid step away. (Existing numerical tricks that I have seen seem to interact badly with my boundary conditions, so I would like to build a solution from the ground up.)
What should I use? |
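For reference, here is a sketch (my own, with assumed names) of the standard second-order 7-point stencil that the formula above describes, checked against a field whose Laplacian is known exactly. A common higher-order alternative is the fourth-order stencil with per-axis weights $(-1, 16, -30, 16, -1)/12$, which uses neighbours two cells away:

```python
def laplacian7(v, x, y, z, h=1.0):
    # standard second-order 7-point approximation of the Laplacian of one
    # velocity component v, sampled on an integer grid with spacing h
    return (v(x + 1, y, z) + v(x - 1, y, z)
            + v(x, y + 1, z) + v(x, y - 1, z)
            + v(x, y, z + 1) + v(x, y, z - 1)
            - 6 * v(x, y, z)) / h ** 2

# the stencil is exact for quadratics: for v = x^2 + y^2 + z^2, lap v = 6
v = lambda x, y, z: x * x + y * y + z * z
assert laplacian7(v, 3, -2, 5) == 6.0
```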
To help answer Question 1, Milnor proved a local-global theorem for Witt rings of global fields. Recall that the Grothendieck-Witt ring $\widehat{W}(k)$ of a field $k$ is the ring obtained by starting with the free abelian group on isomorphism classes of quadratic modules and modding out by the ideal generated by elements of the form $[M]+[N]-[M']-[N']$, whenever $M\oplus N\simeq M'\oplus N'$. The multiplication comes from the tensor product of quadratic modules. There is a special quadratic module $H$ given by the form $x^2-y^2$; this is the hyperbolic module. The Witt ring $W(k)$ of a field $k$ is the quotient of $\widehat{W}(k)$ by the ideal generated by $[H]$.
Now, the main theorem of Milnor's paper is that there is a split exact sequence $$0\rightarrow W(k)\rightarrow W(k(t))\rightarrow \bigoplus_\pi W(\overline{k(t)}_\pi)\rightarrow 0,$$ where $\pi$ runs over all irreducible monic polynomials in $k[t]$, and $\overline{k(t)}_\pi$ denotes the residue field of the completion of $k(t)$ at $\pi$.
The morphisms $W(k(t))\rightarrow W(\overline{k(t)}_\pi)$ arise as composites: first the map $W(k(t))\rightarrow W(k(t)_\pi)$, and then a map $W(k(t)_\pi)\rightarrow W(\overline{k(t)}_\pi)$ that sends the quadratic module given by the form $u\pi x^2$ to the one given by $ux^2$, where $u$ is any unit of the local field.
Interestingly, Milnor $K$-theory is not used in the proof. However, the proof for Witt rings closely models the proof of a similar fact for Milnor $K$-theory: the sequence $$0\rightarrow K_n^M(k)\rightarrow K_n^M(k(t))\rightarrow\bigoplus_\pi K_{n-1}^M(\overline{k(t)}_\pi)\rightarrow 0.$$
The important new perspective is the formal symbolic perspective, which was already existent for lower $K$-groups, but is very fruitful for studying the Witt ring as well. |
@Tom's response is excellent, but I'd like to offer a version that's more heuristic and that introduces an additional concept.
Logistic regression
Imagine we have a number of binary questions. If we are interested in the probability of responding yes to any one of the questions, and if we're interested in the effect of some independent variables on that probability, we use logistic regression:
$P(y_i = 1) = \frac{1}{1 + \exp(-X\beta)} = \text{logit}^{-1}(X\beta)$
where i indexes the questions (i.e. the items), X is a vector of characteristics of the respondents, and $\beta$ is the effect of each of those characteristics in log odds terms.
IRT
Now, note that I said we had a number of binary questions. Those questions might all get at some kind of latent trait, e.g. verbal ability, level of depression, level of extraversion. Often, we are interested in the level of the latent trait itself.
For example, in the Graduate Record Exam, we're interested in characterizing the verbal and math ability of various applicants. We want some good measure of their score. We could obviously count how many questions someone got correct, but that does treat all questions as being worth the same amount - it doesn't explicitly account for the fact that questions might vary in difficulty. The solution is item response theory. Again, we're (for now) not interested in either
X or $\beta$, but we're just interested in the person's verbal ability, which we'll call $\theta$. We use each person's pattern of responses to all the questions to estimate $\theta$:
$P(y_{ij} = 1) = \text{logit}^{-1}[a_i(\theta_j - b_i)]$
where $a_i$ is discrimination of item
i and $b_i$ is its difficulty.
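A small sketch of that two-parameter item response function (function and parameter names are my own, not from any particular package):

```python
import math

def p_correct(theta, a, b):
    # 2PL item response function: P(y = 1) = logit^{-1}(a * (theta - b))
    return 1 / (1 + math.exp(-a * (theta - b)))

# at theta == b the probability is exactly 0.5, whatever the discrimination
assert p_correct(0.0, a=1.5, b=0.0) == 0.5
# a harder item (larger b) is less likely to be answered correctly
assert p_correct(1.0, a=1.0, b=2.0) < p_correct(1.0, a=1.0, b=0.5)
```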
So, that's one obvious distinction between regular logistic regression and IRT. In the former, we're interested in the effects of independent variables on one binary dependent variable. In the latter, we use a bunch of binary (or categorical) variables to predict some latent trait. The original post said that $\theta$ is our independent variable. I'd respectfully disagree, I think it's more like this is the dependent variable in IRT.
I used binary items and logistic regression for simplicity, but the approach generalizes to ordered items and ordered logistic regression.
Explanatory IRT
What if you were interested in the things that predict the latent trait, though, i.e. the
Xs and $\beta$s previously mentioned?
As mentioned earlier, one way to estimate the latent trait is just to count the number of correct answers, or to add up all the values of your Likert (i.e. categorical) items. That has its flaws: you're assuming that each item (or each level of each item) is worth the same amount of the latent trait. This approach is common enough in many fields.
Perhaps you can see where I'm going with this: you can use IRT to predict the level of the latent trait, then conduct a regular linear regression. That would ignore the uncertainty in each person's latent trait, though.
A more principled approach would be to use explanatory IRT: you simultaneously estimate $\theta$ using an IRT model and you estimate the effect of your
Xs on $\theta$ as if you were using linear regression. You can even extend this approach to include random effects to represent, for example, the fact that students are nested in schools.
More reading available on Phil Chalmers' excellent intro to his
mirt package. If you understand the nuts and bolts of IRT, I'd go to the Mixed Effects IRT section of these slides. Stata is also capable of fitting explanatory IRT models (albeit I believe it can't fit random effects explanatory IRT models as I described above). |
The derivative of a function is the real number that measures the sensitivity of the function's output to a change in its argument. Derivatives are fundamental tools in calculus. The derivative of the position of a moving object with respect to time is the velocity of the object: it measures how quickly the position of the object changes as time advances.
The derivative of a function at an input value is the slope of the tangent line to the graph near that value. Derivatives are usually defined as the instantaneous rate of change with respect to the independent variable. The notion generalizes with the choice of dependent and independent variables: another common form is the partial derivative, calculated with respect to one independent variable while the others are held fixed. For a real-valued function of multiple variables, the Jacobian matrix reduces to the gradient vector.
The process of calculating a derivative is called differentiation, and the reverse process is anti-differentiation. By the fundamental theorem of calculus, anti-differentiation is essentially the same as integration. This is a central concept in calculus, used to calculate areas precisely.
Calculations with derivatives are often hard, and a complete list of the basic derivative formulas in calculus helps in solving these complex problems. A deep understanding of the formulas makes any problem easier and quicker to solve.
\[\LARGE f'(x)=\lim_{\triangle x \rightarrow 0}\frac{f(x+ \triangle x)-f(x)}{\triangle x}\]
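The definition above can be checked numerically: with a small \(\triangle x\), the difference quotient approaches the known derivative (here \(f=\sin\), whose derivative is \(\cos\); the step size is an arbitrary choice):

```python
import math

def diff_quotient(f, x, dx=1e-6):
    # forward difference quotient from the first-principles definition
    return (f(x + dx) - f(x)) / dx

approx = diff_quotient(math.sin, 1.0)
assert abs(approx - math.cos(1.0)) < 1e-4
```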
\[\large \frac{d}{dx}(c)=0\]
\[\large \frac{d}{dx}(x)=1\]
\[\large \frac{d}{dx}(x^{n})=nx^{n-1}\]
\[\large \frac{d}{dx}(u\pm v)=\frac{du}{dx}\pm \frac{dv}{dx}\]
\[\large \frac{d}{dx}(cu)=c\frac{du}{dx}\]
\[\large \frac{d}{dx}(uv)=u\frac{dv}{dx}+v\frac{du}{dx}\]
\[\large \frac{d}{dx}\left(\frac{u}{v}\right)=\frac{v\frac{du}{dx}-u\frac{dv}{dx}}{v^{2}}\]
\[\large \frac{d}{dx}\big(u(v)\big)=\frac{du}{dv}\,\frac{dv}{dx}\]

\[\large \frac{du}{dx}=\frac{\frac{du}{dv}}{\frac{dx}{dv}}\]
If x(y) is the inverse of the function y(x), then
\[\large \frac{dy}{dx}=\frac{1}{\frac{dx}{dy}}\]
\[\large \frac{d}{dx}(\sin (u))=\cos (u)\frac{du}{dx}\]
\[\large \frac{d}{dx}(\cos (u))=-\sin (u)\frac{du}{dx}\]
\[\large \frac{d}{dx}(\tan (u))=\sec^{2}(u)\frac{du}{dx}\]
\[\large \frac{d}{dx}(\cot(u))=-\csc^{2}(u)\frac{du}{dx}\]
\[\large \frac{d}{dx}(\sec(u))=\sec(u)\tan(u)\frac{du}{dx}\]
\[\large \frac{d}{dx}(\csc(u))=-\csc(u)\cot(u)\frac{du}{dx}\]
\[\large \frac{d}{dx}(\sin^{-1}(u))=\frac{1}{\sqrt{1-u^{2}}}\frac{du}{dx}\]
\[\large \frac{d}{dx}(\cos ^{-1}(u))=-\frac{1}{\sqrt{1-u^{2}}}\frac{du}{dx}\]
\[\large \frac{d}{dx}(\tan^{-1}(u))=\frac{1}{1+u^{2}}\frac{du}{dx}\]
\[\large \frac{d}{dx}(\cot^{-1}(u))=-\frac{1}{1+u^{2}}\frac{du}{dx}\]
\[\large \frac{d}{dx}(\sec ^{-1}(u))=\frac{1}{\left | u \right |\sqrt{u^{2}-1}}\frac{du}{dx}\]
\[\large \frac{d}{dx}(\csc^{-1}(u))=-\frac{1}{\left | u \right |\sqrt{u^{2}-1}}\frac{du}{dx}\]
\[\large \frac{d}{dx}(\sinh(u))=\cosh(u)\frac{du}{dx}\]
\[\large \frac{d}{dx}(\cosh(u))=\sinh(u)\frac{du}{dx}\]
\[\large \frac{d}{dx}(\tanh(u))=\operatorname{sech}^{2}(u)\frac{du}{dx}\]
\[\large \frac{d}{dx}(\coth(u))=-\operatorname{csch}^{2}(u)\frac{du}{dx}\]
\[\large \frac{d}{dx}(\operatorname{sech}(u))=-\operatorname{sech}(u)\tanh(u)\frac{du}{dx}\]
\[\large \frac{d}{dx}(\operatorname{csch}(u))=-\operatorname{csch}(u)\coth(u)\frac{du}{dx}\]
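As a quick sanity check, the product and quotient rules above can be verified numerically with a central finite difference (a sketch; the function choices $u=\sin$, $v=\exp$ and the helper name `d` are my own):

```python
from math import sin, cos, exp

def d(f, x, h=1e-6):
    """Central finite-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
u, du = sin(x), cos(x)     # u = sin, u' = cos
v, dv = exp(x), exp(x)     # v = exp, v' = exp

# product rule: (uv)' = u'v + uv'
assert abs(d(lambda t: sin(t) * exp(t), x) - (du * v + u * dv)) < 1e-6
# quotient rule: (u/v)' = (u'v - uv') / v^2
assert abs(d(lambda t: sin(t) / exp(t), x) - (du * v - u * dv) / v ** 2) < 1e-6
```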
The derivative of a function is defined as a limit of difference quotients. In practice, computing a derivative directly from that limit is often tedious, while the derivative rules make even the most complicated problems manageable. You just have to know where to apply which formula or rule to make the calculation efficient.
As discussed earlier, the derivatives of some functions are difficult to calculate from the first principle. Here, we use the derivative table: the derivatives of common functions can be looked up directly, and they form part of the standard derivative formulas in calculus.
So far we have focused on derivative basics and basic derivative formulas. Now it is time to study applications of derivatives, which are important for students to learn. The two most common and important applications are analyzing graphs of functions and solving optimization problems. Derivatives also allow us to compute limits that could not be computed otherwise, and to estimate the solutions to equations.
Now, let us look at some business applications of derivatives. Optimization problems are the biggest use of derivatives in the business world. With so many real applications all around, the derivative has become a 'buzz' word in the industry.
The Annals of Probability Ann. Probab. Volume 32, Number 1B (2004), 661-691. Occupation time large deviations of two-dimensional symmetric simple exclusion process Abstract
We prove a large deviations principle for the occupation time of a site in the two-dimensional symmetric simple exclusion process. The probability decays at a rate of order $t/\log t$ and the rate function is given by $\Upsilon_\alpha (\beta) = (\pi/2) \{\sin^{-1}(2\beta-1)-\sin^{-1}(2\alpha -1) \}^2$. The proof relies on a large deviations principle for the polar empirical measure which contains an interesting $\log$ scale spatial average. A contraction principle permits us to deduce the occupation time large deviations from the large deviations for the polar empirical measure.
Article information Source Ann. Probab., Volume 32, Number 1B (2004), 661-691. Dates First available in Project Euclid: 11 March 2004 Permanent link to this document https://projecteuclid.org/euclid.aop/1079021460 Digital Object Identifier doi:10.1214/aop/1079021460 Mathematical Reviews number (MathSciNet) MR2039939 Zentralblatt MATH identifier 1061.60103 Subjects Primary: 60F10: Large deviations Citation
Chang, Chih-Chung; Landim, Claudio; Lee, Tzong-Yow. Occupation time large deviations of two-dimensional symmetric simple exclusion process. Ann. Probab. 32 (2004), no. 1B, 661--691. doi:10.1214/aop/1079021460. https://projecteuclid.org/euclid.aop/1079021460 |
The Beilinson-Bernstein localization theorem states roughly that the category of $D$-modules on the flag variety $G/B$ is equivalent to the category of modules over the universal enveloping algebra $U\mathfrak{g}$ with zero Harish-Chandra character. Here $G$ is any semisimple algebraic group over $\mathbb{C}$, $\mathfrak{g}$ its Lie algebra, and $B$ a Borel in $G$. (Really I should probably generalize to modules over a twist $D_\lambda$ of the ring of differential operators and other Harish-Chandra characters here.)
I think it's fair to say that this is one of the most important theorems in geometric representation theory. One sign of this is the huge number of generalizations to different settings (arbitrary Kac-Moody, affine at negative and critical levels, characteristic p, quantum, other symplectic resolutions...), each of which plays a key role in the corresponding representation theory.
A lot of the papers in this area seem to contain phrases like "Here we are generalizing a proof of the original BB theorem", but they seem to be all referring to different proofs. Unfortunately, I only know one proof, more or less the one that appears in the original paper of Beilinson and Bernstein, "Localisation de $\mathfrak{g}$-modules." So my question is
What other approaches are there to proving the localization theorem?
To be a little more precise, let's separate BB localization into three parts:
1. The computation of $R\Gamma(G/B,\mathcal{D}_\lambda)$ (for $\lambda$ with $\lambda+\rho$ dominant).
2. The exactness of $R\Gamma$ (true for all weights $\lambda$ with $\lambda+\rho$ dominant).
3. The conservativity of $R\Gamma$ (true for $\lambda$ with $\lambda+\rho$ dominant regular).
I'm mainly interested in proofs of parts $2$ and/or $3$ assuming part $1$ (which I know many proofs of.)
Standard disclaimer: I'm not an expert and all of this could be wrong. |
Meirovitch says in his "Principles and Techniques of Vibrations" (1997) on p.85:
In the case of holonomic systems, the variation and integration processes are interchangeable (...)
which means that
$\delta \int_{t_1}^{t_2} L(q, \dot{q}) dt = \int_{t_1}^{t_2} \delta L(q,\dot{q}) dt$
subject to
$f(r_1(q), r_2(q),...,r_N(q),t) = 0.$
where $L$ is the Lagrangian, $q$ are generalized coordinates, $r_i$ are the coordinates of points of mass in the configuration space and $f$ is the constraint.
Can anybody tell me why? |
If $k \leq (1-\epsilon) N$, where $N$ is the total number of subtrees, then the following approach would work:
Start with the empty list $L$.
Repeat $k$ times:
Pick a random subtree $T'$. If $T' \notin L$, add it to $L$; otherwise, go back to the previous step.
Since $k \leq (1-\epsilon) N$, the expected number of times it takes to find $T' \notin L$ is at most $1/\epsilon$ throughout the process; it will be much smaller in the beginning. In particular, if $k \ll \sqrt{N}$, then it is highly likely that you will never generate the same subtree twice.
In more detail, the average expected number of repetitions is$$\frac{1}{k} \sum_{\ell=0}^{k-1} \frac{N}{N-\ell} \approx \frac{N}{k} \int_0^k \frac{dx}{N-x} = \frac{N}{k} \ln \frac{N}{N-k} \approx \frac{N}{N-k} = 1 + \frac{k}{N-k}.$$
You can implement the check $T' \notin L$ quickly using a hashtable. In this way, we have reduced your problem to that of generating a uniformly random subtree, which you can do as follows, essentially by reducing the problem of uniform generation to that of counting.
In order to pick the root of the subtree, first compute $R(T_x)$ for every vertex $x \in T$ (you can do this in $O(n)$ for all vertices together if you're careful, where $n$ is the number of vertices). The root is $x$ with probability $R(T_x)/N$ (you can choose $x$ quickly using binary search, for example).
If $x$ is a leaf, then we're done. Otherwise, suppose that $x$ has children $x_1,\ldots,x_\ell$. Your random subtree skips $x_i$ with probability $1/(1+R(T_{x_i}))$ (independently). If it doesn't skip $x_i$, then you generate a random subtree of $T_{x_i}$ recursively.
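The two steps above can be sketched as follows for a rooted tree stored as a child dictionary (the function names are my own); the counts satisfy $R(T_x)=\prod_i\bigl(1+R(T_{x_i})\bigr)$, and sampling keeps child $x_i$ with probability $R(T_{x_i})/(1+R(T_{x_i}))$:

```python
import random

def subtree_counts(children, root):
    """R[x] = number of subtrees of T rooted at x: R[x] = prod over children c of (1 + R[c])."""
    R = {}
    def go(x):
        r = 1
        for c in children.get(x, []):
            go(c)
            r *= 1 + R[c]
        R[x] = r
    go(root)
    return R

def sample_rooted(children, R, x, rng=random):
    """Uniformly random subtree rooted at x, returned as a vertex list (root first)."""
    sub = [x]
    for c in children.get(x, []):
        if rng.randrange(1 + R[c]) != 0:   # keep c with probability R[c]/(1+R[c])
            sub += sample_rooted(children, R, c, rng)
    return sub
```

For the root choice itself, pick vertex $x$ with probability $R(T_x)/N$ where $N=\sum_x R(T_x)$, as described above.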
Here are two other related approaches. The first is to generate all subtrees, permute them randomly in $O(N)$, and then output the prefix of length $k$.
A variant of the first approach uses
unranking. By modifying the approach above, you can take an integer in the range $0,\ldots,N-1$ and convert it to a subtree. This goes as follows. Let $x_1,\ldots,x_n$ be an enumeration of the vertices of $T$. The first $R(T_{x_1})$ integers correspond to subtrees rooted at $x_1$. The following $R(T_{x_2})$ integers correspond to subtrees rooted at $x_2$. And so on.
Now suppose we're given an integer $i$ in the range $0,\ldots,R(T_x)-1$, and need to convert it to a subtree rooted at $x$. If $x$ is a leaf, then there is nothing to do. Otherwise, let $x_1,\ldots,x_\ell$ be the children of $x$. We first convert $i$ into $\ell$ numbers $i_1,\ldots,i_\ell$, where $i_j$ is in the range $0,\ldots,R(T_{x_{i_j}})$. If $i_j = R(T_{x_{i_j}})$, then the subtree won't contain $x_{i_j}$. Otherwise, we can generate a subtree of $x_{i_j}$ recursively.
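This unranking step is exactly a mixed-radix decode; a sketch, assuming the counts `R` (with $R[x]=\prod_c(1+R[c])$) have been precomputed as described earlier:

```python
def unrank(children, R, x, i):
    """Map a rank i in [0, R[x]) to the subtree of T rooted at x, as a vertex list."""
    assert 0 <= i < R[x]
    sub = [x]
    for c in children.get(x, []):
        i, j = divmod(i, 1 + R[c])   # digit j in 0..R[c]
        if j < R[c]:                 # j == R[c] encodes "skip child c"
            sub += unrank(children, R, c, j)
    return sub
```

Distinct ranks give distinct subtrees, so a random $k$-prefix of a permutation of $0,\ldots,N-1$ yields $k$ distinct subtrees.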
Given the unranking procedure, you can generate a permutation of $0,\ldots,N-1$, take the prefix of length $k$, and convert it to a list of $k$ subtrees. If you have any other way of generating a random sequence of $k$ elements of $0,\ldots,N-1$ without repetition, then you can use it in the same way. |
That is, is it always that $$2^{3^x}\equiv -1\pmod{3^{x+1}}\large?$$
Another way to solve this question is by induction.
►The statement holds for $n=1$ because $2^3=8=9-1\equiv -1\pmod {3^2}$.
►Suppose it is true for $n$, that is $2^{3^n}\equiv-1\pmod{3^{n+1}}$.
►Prove it is true for $n+1$. $$2^{3^n}\equiv-1\pmod{3^{n+1}}\iff2^{3^n}=3^{n+1}M_n-1$$ It follows $$2^{3^{n+1}}=(2^{3^n})^3=(3^{n+1}M_n-1)^3=3^{3n+3}M_n^3-3\cdot3^{2n+2}M_n^2+3\cdot3^{n+1}M_n-1$$ Hence
$$2^{3^{n+1}}=3^{n+2}(3^{2n+1}M_n^3-3^{n+1}M_n^2+M_n)-1\Rightarrow\color{red}{2^{3^{n+1}}\equiv-1\pmod{3^{n+2}}}$$
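The induction can be spot-checked numerically with modular exponentiation:

```python
# verify 2^(3^n) ≡ -1 (mod 3^(n+1)), i.e. the residue equals 3^(n+1) - 1
for n in range(1, 9):
    m = 3 ** (n + 1)
    assert pow(2, 3 ** n, m) == m - 1
```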
Euler function: $\varphi$
$\varphi(3^{x+1})=2\cdot 3^x\ \Rightarrow\ 2^{2\cdot 3^x}\equiv 1 \pmod{3^{x+1}}\enspace$ (Euler-Fermat)
It follows $\,2^{3^x}\equiv \pm 1 \mod 3^{x+1}$ .
This means $(2^{3^x}-1)(2^{3^x}+1)\equiv 0\mod 3^{x+1}$ .
$2^{3^x}-1=(3-1)^{3^x}-1\equiv (-1)^{3^x}-1 = -2 \pmod 3$
(If this is not clear, please have a look at the comment of user1952009 below.)
It follows that $2^{3^x}-1$ is never divisible by $3$.
$\Rightarrow\enspace 2^{3^x}+1\equiv 0\pmod{3^{x+1}}\enspace$, which is what was to be proved.
In Relativity, both the old Galilean theory or Einstein's Special Relativity, one of the most important things is the discussion of whether or not physical laws are invariant. Einstein's theory then states that they are invariant in all inertial frames of reference.
The books usually states that invariant means that the equations take the same form. So, for example, if in one frame $\mathbf{F} = m\mathbf{a}$ holds one expect this same equation to hold on other inertial frame.
If one studies relativity first and then differential geometry, this postulate seems really important: there is no guarantee whatsoever that the equations will take the same form. I, however, studied differential geometry first, and this led me to the following doubt.
On differential geometry everything is defined so that things don't depend on coordinates. So for example: vectors are defined as certain differential operators, or as equivalence classes of curves. Both definitions makes a vector $V$ be one geometrical object, that although has representations on each coordinate system, is independent of them.
Because of that, any tensor is also defined without coordinates and so equalities between vectors and tensors
automatically are coordinate-independent. This of course, is valid for vector and tensor fields.
Scalar functions follow the same logic: a function $f : M\to \mathbb{R}$ has one coordinate representation $\tilde{f} : \mathbb{R}^n\to \mathbb{R}$ which is just $\tilde{f} = f\circ x^{-1}$ but still, $f$ is independent of the coordinates. So if $f = g$ this doesn't refer to a coordinate system, but to the functions themselves.
So, it seems that math guarantees that objects are coordinate-independent by nature. So in that case, what are examples where a Physical law is not invariant and why my reasoning fails for those examples? |
The Annals of Probability Ann. Probab. Volume 26, Number 1 (1998), 78-111. Range of fluctuation of Brownian motion on a complete Riemannian manifold Abstract
We investigate the escape rate of the Brownian motion $W_x (t)$ on a complete noncompact Riemannian manifold. Assuming that the manifold has at most polynomial volume growth and that its Ricci curvature is bounded below, we prove that $$\dist (W_x (t), x) \leq \sqrt{Ct \log t}$$ for all large $t$ with probability 1. On the other hand, if the Ricci curvature is nonnegative and the volume growth is at least polynomial of the order $n > 2$ then $$\dist (W_x (t), x) \geq \frac{\sqrt{Ct}}{\log^{1/(n-2)} t \log \log^{(2+\varepsilon)/(n-2)} t}$$ again for all large $t$ with probability 1 (where $\varepsilon > 0$).
Article information Source Ann. Probab., Volume 26, Number 1 (1998), 78-111. Dates First available in Project Euclid: 31 May 2002 Permanent link to this document https://projecteuclid.org/euclid.aop/1022855412 Digital Object Identifier doi:10.1214/aop/1022855412 Mathematical Reviews number (MathSciNet) MR1617042 Zentralblatt MATH identifier 0934.58023 Citation
Grigor'yan, Alexander; Kelbert, Mark. Range of fluctuation of Brownian motion on a complete Riemannian manifold. Ann. Probab. 26 (1998), no. 1, 78--111. doi:10.1214/aop/1022855412. https://projecteuclid.org/euclid.aop/1022855412 |
Each letter shown represents a distinct digit, which can vary from zero to nine.
$COCA$, $COLA$, $SODA$ are three concatenated numbers.
Figure these out from the following relation:
$COCA + COLA = SODA$
Based on Omega Krypton's answer,
$2C+1=S,C+L=D+10$, $A=0,O=9$. (Note that $O=9$ so $C+L$ carries.)
We also need that these digits $C,L,D,S$ are distinct, between $1$ and $8$. ($0$ and $9$ are taken.) If $C=1$ or $C=2$ then, since $D\ge 1$, we have $L\ge 9$, which is impossible. So $C=3$ and $S=7$. We have $L=8$ and $D=1$. That is, $3930+3980=7910$.
We have the following
  COCA
+ COLA
------
  SODA
First, from the ones column, we have $A+A \implies A$ which is only possible if $A=0$.
Next, notice something similar in the
hundreds place; $O+O \implies O$. Since $0$ is already taken and the only possibility without a carry over, we must have a carry over from the 10s, and $O=9$ is the only possibility. We will also have a carry over into the thousands.
Since we have a 4 digit number as the result, we know that
$0 \lt C \le 4$.
But:
- $C=4 \implies S=9$, which is already taken by $O$.
- $C=1 \implies L=9$ to achieve a carryover, which is taken by $O$.
- $C=2 \implies L\in\{8,9\}$. But $L=9$ is taken, and $L=8 \implies D=0$, which is also taken.
Thus,
$C=3$.
Also, we know
$S=7$ because the hundreds will carry over, and we also know that in order to carry over the 10s, we need $L\ge 7$. But $L=7$ and $L=9$ are taken leaving only $L=8$, and thus, $D=1$.
Thus, the solution is;
COCA+COLA=SODA, 3930+3980=7910
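The deduction can be double-checked by brute force over all assignments of distinct digits (with leading digits nonzero):

```python
from itertools import permutations

def solve():
    """All distinct-digit assignments with COCA + COLA = SODA."""
    sols = []
    for C, O, L, S, D, A in permutations(range(10), 6):
        if C == 0 or S == 0:                      # no leading zeros
            continue
        coca = 1010 * C + 100 * O + A
        cola = 1000 * C + 100 * O + 10 * L + A
        soda = 1000 * S + 100 * O + 10 * D + A
        if coca + cola == soda:
            sols.append((coca, cola, soda))
    return sols

print(solve())   # [(3930, 3980, 7910)] — the solution is unique
```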
Since we know that
$A+A \equiv A \pmod {10}$
Therefore $A=0$.
Hundreds value must carry since $O \neq 0$
Therefore
$O+O+1 \equiv O \pmod {10}$
Therefore $O=9$.
We now get
$2C+1=S$
$C+L=D$
And since $S<9$
$0<C<4$
Then there are many possibilities... are there any relations I missed?
A mathematical proof has (among others) the purpose to convince someone of some fact, given some already established facts.
Whether or not a proof is valid does not depend on who presents it. That is one of the key features of math - it does not matter at all if the "professor knows what he does" or if she sounds clever. If I can't help but agree that the arguments clearly show that the new fact follows from known facts, I have to agree to the new fact as well.
There are however several levels of difficulty, for example:
1. Direct proof: showing $A \Rightarrow B$ by showing $A \Rightarrow C_1 \Rightarrow C_2 \Rightarrow \dots \Rightarrow B$.
2. Proof by contradiction.
3. Proof by (complete) induction.
I agree that point 3 is somewhat hard to employ without any prerequisite in logic, but the first two points are doable. I will give examples for each.
There is a
huge difference between explaining (or understanding) a proof and finding and writing it down. Since the question is asking about a lecturer explaining a proof, I'll not bother with the latter.
Using the $a \text{ even} \Rightarrow a^2 \text{ even}$ example:
1. The lecturer can explain the bigger picture of the proof: "We take any even number and show that its square must also be even."
2. The lecturer can remind the students that for even $a$, there exists $k \in \mathbb N$ so that $a = 2k$. Give examples if necessary. This should be working knowledge already; if not, it is a gap in mathematical basics, not in logic.
3. The manipulation to express $a^2$ in terms of $k$ can be executed by either students or lecturer, leading to $a^2 = 4k^2$ if done correctly, regardless of whoever did it.
4. The last step, showing that the number $4k^2$ is also even because $4k^2 = 2 \cdot k'$ with $k' := 2k^2$, can also be done by the lecturer. This decomposition always works and again does not depend on who does it.
None of these steps requires "deep" understanding" of logic except "simple" implications. But in the end, the students should be able to understand that for any even $a$, also $a^2$ must be even.
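The algebraic step is even checkable mechanically: for $a=2k$ we have $a^2 = 4k^2 = 2(2k^2)$, an explicit factor of 2.

```python
# exhaustive check for the first thousand even numbers
for k in range(1000):
    a = 2 * k
    assert a * a == 2 * (2 * k * k)   # a^2 carries an explicit factor 2, hence is even
```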
It is indeed a completely different story for the students to come up with a similar proof.
Regarding proofs by contradiction: it is important for the students to understand that apart from the cleverly chosen starting fact, each conclusion is right and again does not depend on who presents the reasoning. If one then arrives at a contradiction, the only remaining option is that the premise was wrong.
Let's look at Euclid's proof of the fact that there are infinitely many prime numbers.
1. Assume that there are finitely many prime numbers. We don't know yet if this is true or false, but we can assume either.
2. If the students have mathematical background about divisors, they must agree that the product of all primes, increased by one, is prime or has a prime divisor that is not in the list we started with. This argument is somewhat complex, but uses only known facts about divisibility.
3. The reasoning in point 2 is sound. The only way to fix the error that we did not start with "all primes" is to conclude that there is no such finite list.
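The key step, that the product of the listed primes plus one has a prime divisor outside the list, can be sketched concretely (`new_prime_factor` is a name of my own):

```python
def new_prime_factor(primes):
    """Smallest prime factor of (product of primes) + 1.
    It is never one of the inputs, since each input prime leaves remainder 1."""
    m = 1
    for p in primes:
        m *= p
    m += 1
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m    # m itself is prime

q = new_prime_factor([2, 3, 5, 7, 11, 13])
print(q, q in [2, 3, 5, 7, 11, 13])   # 59 False  (30030 + 1 = 30031 = 59 * 509)
```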
To summarize: Understanding proofs does not require formal knowledge of logic. Common sense is enough, together with the attitude that arguments are not valued by authority. |
Suppose the $n \times n$ matrix $A$ has eigenvalues $\lambda_1, \ldots, \lambda_n$ and singular values $\sigma_1, \ldots, \sigma_n$. It seems plausible that by comparing the singular values and eigenvalues we gets some sort of information about eigenvectors. Consider:
a. The singular values are equal to the absolute values of eigenvalues if and only if the matrix is normal, i.e., the eigenvectors are orthogonal (see http://en.wikipedia.org/wiki/Normal_matrix , item 11 of the "Equivalent definitions" section ).
b. Suppose we have two distinct eigenvalues $\lambda_1, \lambda_2$ with eigenvectors $v_1, v_2$. Suppose, hypothetically, we let $v_1$ approach $v_2$, while keeping all the other eigenvalues and eigenvectors the same. Then the largest singular value approaches infinity. This follows since $\sigma_{\rm max} = ||A||_2$ and $A$ maps the vector $v_1 - v_2$, which approaches $0$, to $\lambda_1 v_1 - \lambda_2 v_2$, which does not approach $0$.
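Both observations are easy to see numerically (a sketch; the specific matrices are my own illustrative choices):

```python
import numpy as np

# a normal (here symmetric) matrix: singular values equal |eigenvalues|
A = np.array([[2.0, 1.0], [1.0, 3.0]])
assert np.allclose(np.sort(np.linalg.svd(A, compute_uv=False)),
                   np.sort(np.abs(np.linalg.eigvals(A))))

# a non-normal matrix whose eigenvectors (1,0) and (1e6,1) are nearly parallel:
# eigenvalues are just 1 and 2, yet the top singular value blows up like 1/eps
eps = 1e-6
B = np.array([[1.0, 1.0 / eps], [0.0, 2.0]])
assert np.linalg.svd(B, compute_uv=False)[0] > 1e5
```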
It seems reasonable to guess that the ``more equal'' $|\lambda_1|, \ldots, |\lambda_n|$ and $\sigma_1, \ldots, \sigma_n$ are, the more the eigenvectors look like an orthogonal collection. So naturally my question is whether there is a formal statement to this effect.
Note: I asked this question on math.SE about a week ago. |
Let $T: \R^n \to \R^m$ be a linear transformation. Suppose that the nullity of $T$ is zero.
If $\{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k\}$ is a linearly independent subset of $\R^n$, then show that $\{T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k) \}$ is a linearly independent subset of $\R^m$.
Let $V$ denote the vector space of all real $2\times 2$ matrices. Suppose that the linear transformation from $V$ to $V$ is given as below. \[T(A)=\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}A-A\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}.\] Prove or disprove that the linear transformation $T:V\to V$ is an isomorphism.
Let $G, H, K$ be groups. Let $f:G\to K$ be a group homomorphism and let $\pi:G\to H$ be a surjective group homomorphism such that the kernel of $\pi$ is included in the kernel of $f$: $\ker(\pi) \subset \ker(f)$.
Define a map $\bar{f}:H\to K$ as follows. For each $h\in H$, there exists $g\in G$ such that $\pi(g)=h$ since $\pi:G\to H$ is surjective. Define $\bar{f}:H\to K$ by $\bar{f}(h)=f(g)$.
(a) Prove that the map $\bar{f}:H\to K$ is well-defined.
(b) Prove that $\bar{f}:H\to K$ is a group homomorphism.
Let $\calF[0, 2\pi]$ be the vector space of all real valued functions defined on the interval $[0, 2\pi]$. Define the map $f:\R^2 \to \calF[0, 2\pi]$ by\[\left(\, f\left(\, \begin{bmatrix}\alpha \\\beta\end{bmatrix} \,\right) \,\right)(x):=\alpha \cos x + \beta \sin x.\]We put\[V:=\im f=\{\alpha \cos x + \beta \sin x \in \calF[0, 2\pi] \mid \alpha, \beta \in \R\}.\]
(a) Prove that the map $f$ is a linear transformation.
(b) Prove that the set $\{\cos x, \sin x\}$ is a basis of the vector space $V$.
(c) Prove that the kernel is trivial, that is, $\ker f=\{\mathbf{0}\}$. (This yields an isomorphism of $\R^2$ and $V$.)
(d) Define a map $g:V \to V$ by\[g(\alpha \cos x + \beta \sin x):=\frac{d}{dx}(\alpha \cos x+ \beta \sin x)=\beta \cos x -\alpha \sin x.\]Prove that the map $g$ is a linear transformation.
(e) Find the matrix representation of the linear transformation $g$ with respect to the basis $\{\cos x, \sin x\}$.
Suppose that the vectors\[\mathbf{v}_1=\begin{bmatrix}-2 \\1 \\0 \\0 \\0\end{bmatrix}, \qquad \mathbf{v}_2=\begin{bmatrix}-4 \\0 \\-3 \\-2 \\1\end{bmatrix}\]are a basis vectors for the null space of a $4\times 5$ matrix $A$. Find a vector $\mathbf{x}$ such that\[\mathbf{x}\neq0, \quad \mathbf{x}\neq \mathbf{v}_1, \quad \mathbf{x}\neq \mathbf{v}_2,\]and\[A\mathbf{x}=\mathbf{0}.\]
(Stanford University, Linear Algebra Exam Problem)
Let $V$ be the subspace of $\R^4$ defined by the equation\[x_1-x_2+2x_3+6x_4=0.\]Find a linear transformation $T$ from $\R^3$ to $\R^4$ such that the null space $\calN(T)=\{\mathbf{0}\}$ and the range $\calR(T)=V$. Describe $T$ by its matrix $A$.
A hyperplane in the $n$-dimensional vector space $\R^n$ is defined to be the set of vectors\[\begin{bmatrix}x_1 \\x_2 \\\vdots \\x_n\end{bmatrix}\in \R^n\]satisfying a linear equation of the form\[a_1x_1+a_2x_2+\cdots+a_nx_n=b,\]where $a_1, a_2, \dots, a_n$ and $b$ are real numbers, and at least one of $a_1, a_2, \dots, a_n$ is nonzero.
Consider the hyperplane $P$ in $\R^n$ described by the linear equation\[a_1x_1+a_2x_2+\cdots+a_nx_n=0,\]where $a_1, a_2, \dots, a_n$ are some fixed real numbers and not all of these are zero.(The constant term $b$ is zero.)
Then prove that the hyperplane $P$ is a subspace of $\R^{n}$ of dimension $n-1$.
Let $n$ be a positive integer. Let $T:\R^n \to \R$ be a non-zero linear transformation. Prove the following.
(a) The nullity of $T$ is $n-1$. That is, the dimension of the nullspace of $T$ is $n-1$.
(b) Let $B=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}\}$ be a basis of the nullspace $\calN(T)$ of $T$. Let $\mathbf{w}$ be an $n$-dimensional vector that is not in $\calN(T)$. Then\[B'=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}, \mathbf{w}\}\]is a basis of $\R^n$.
(c) Each vector $\mathbf{u}\in \R^n$ can be expressed as\[\mathbf{u}=\mathbf{v}+\frac{T(\mathbf{u})}{T(\mathbf{w})}\mathbf{w}\]for some vector $\mathbf{v}\in \calN(T)$.
Let $A$ be the matrix for a linear transformation $T:\R^n \to \R^n$ with respect to the standard basis of $\R^n$. We assume that $A$ is idempotent, that is, $A^2=A$. Then prove that\[\R^n=\im(T) \oplus \ker(T).\]
(a) Let $A=\begin{bmatrix}1 & 2 & 1 \\3 &6 &4\end{bmatrix}$ and let\[\mathbf{a}=\begin{bmatrix}-3 \\1 \\1\end{bmatrix}, \qquad \mathbf{b}=\begin{bmatrix}-2 \\1 \\0\end{bmatrix}, \qquad \mathbf{c}=\begin{bmatrix}1 \\1\end{bmatrix}.\]For each of the vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$, determine whether the vector is in the null space $\calN(A)$. Do the same for the range $\calR(A)$.
(b) Find a basis of the null space of the matrix $B=\begin{bmatrix}1 & 1 & 2 \\-2 &-2 &-4\end{bmatrix}$.
Let $A$ be a real $7\times 3$ matrix such that its null space is spanned by the vectors\[\begin{bmatrix}1 \\2 \\0\end{bmatrix}, \begin{bmatrix}2 \\1 \\0\end{bmatrix}, \text{ and } \begin{bmatrix}1 \\-1 \\0\end{bmatrix}.\]Then find the rank of the matrix $A$.
(Purdue University, Linear Algebra Final Exam Problem)
Let $R$ be a commutative ring with $1$ and let $G$ be a finite group with identity element $e$. Let $RG$ be the group ring. Then the map $\epsilon: RG \to R$ defined by\[\epsilon(\sum_{i=1}^na_i g_i)=\sum_{i=1}^na_i,\]where $a_i\in R$ and $G=\{g_i\}_{i=1}^n$, is a ring homomorphism, called the augmentation map and the kernel of $\epsilon$ is called the augmentation ideal.
(a) Prove that the augmentation ideal in the group ring $RG$ is generated by $\{g-e \mid g\in G\}$.
(b) Prove that if $G=\langle g\rangle$ is a finite cyclic group generated by $g$, then the augmentation ideal is generated by $g-e$. |
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of:We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search is inspired by last night dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitting from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as a imagine the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken account of
The idea is that if the possible relaxations between energy levels is restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy high energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations to give the same high energy, thus effectively create an entropy trap to minimise heat loss to surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I've thought you were mentioning at my questions directly to the close voter, not the question in meta. When you mention about my original post, you think that it's a hopeless mess of confusion? Why? Except being off-topic, it seems clear to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations.I think it's a safe asssumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be a multiple power of 3?Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer |
Which of the following are not true?
$(a)$ There exists an analytic function $f:\mathbb{C}\to\mathbb{C}$ such that for all $z\in \mathbb{C}$ , $Re(f(z))=e^x$.
$(b)$ There exists an analytic function $f:\mathbb{C}\to\mathbb{C}$ such that $f(0)=1$ and $|f(z)|\leq e^{-|z|}$ for all $z\in \mathbb{C}$ with $|z|\geq1$.
could you please give me some hints?
I have applied Liouville's theorem for $(b)$ and got that $f(z)$ is constant, but not necessarily $f(z)=1$. Am I correct?
The
second question seems to ask for a prediction interval for one future observation. Such an interval is readily calculated under the assumptions that (a) the future observation is from the same distribution and (b) is independent of the previous sample. When the underlying distribution is Normal, we just have to erect an interval around the difference of two Gaussian random variables. Note that the interval will be wider than suggested by a naive application of a t-test or z-test, because it has to accommodate the variance of the future value, too. This rules out all the answers I have seen posted so far, so I guess I had better quote one explicitly. Hahn & Meeker's formula for the endpoints of this prediction interval is
$$m \pm t \times \sqrt{1 + \frac{1}{n}} \times s$$
where $m$ is the sample mean, $t$ is an appropriate two-sided critical value of Student's $t$ (for $n-1$ df), $s$ is the sample standard deviation, and $n$ is the sample size. Note in particular the factor of $\sqrt{1+1/n}$ instead of $\sqrt{1/n}$. That's a big difference!
This interval is used like any other interval: the requested test simply examines whether the new value lies within the prediction interval. If so, the new value is consistent with the sample; if not, we reject the hypothesis that it was independently drawn from the same distribution as the sample. Generalizations from one future value to $k$ future values or to the mean (or max or min) of $k$ future values, etc., exist.
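A sketch of the interval and the resulting test, with the two-sided $t$ critical value supplied by the caller so that only the standard library is needed (function names are my own):

```python
from math import sqrt
from statistics import mean, stdev

def prediction_interval(sample, t_crit):
    """Hahn-Meeker interval m +/- t * sqrt(1 + 1/n) * s for one future observation."""
    n, m, s = len(sample), mean(sample), stdev(sample)
    half = t_crit * sqrt(1 + 1 / n) * s
    return m - half, m + half

def consistent_with_sample(new_value, sample, t_crit):
    """The requested test: is the new value inside the prediction interval?"""
    lo, hi = prediction_interval(sample, t_crit)
    return lo <= new_value <= hi

sample = [10, 12, 11, 13, 14]
lo, hi = prediction_interval(sample, 2.776)   # two-sided 95% t critical value, 4 df
print(lo, hi)
```

Note the width: with $n=5$ the factor $\sqrt{1+1/n}\approx 1.095$, so the interval is noticeably wider than a confidence interval for the mean, which would use $\sqrt{1/n}\approx 0.447$.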
There is an extensive literature on prediction intervals, especially in a regression context. Any decent regression textbook will have formulas. You could begin with the Wikipedia entry ;-). Hahn & Meeker's Statistical Intervals is still in print and is an accessible read.
The first question has an answer that is so routine nobody seems yet to have given it here (although some of the links provide details). For completeness, then, I will close by remarking that when the population has approximately a Normal distribution, the sample standard deviation is distributed as the square root of a scaled chi-square variate of $n-1$ df whose expectation is the population variance. That means (roughly) we expect the sample sd to be close to the population sd and the ratio of the two will usually be $1 + O(1/\sqrt{n-1})$. Unlike parallel statements for the sample mean (which invoke the CLT), this statement relies fairly strongly on the assumption of a Normal population. |
I'll start with Earth
Earth is hurtling through space at a speed of approximately $29.78 km/s$. If the sun were to disappear, the Earth would move in a straight line until the sun reappears. Since there are $259,200$ seconds in three days, that gives Earth the time to travel $29.78 km/s \times 259,200 s = 7,718,976 km$. That's quite a distance.
Since the distance between the Earth and the sun varies between $147,098,290 km$ and $152,098,232 km$, I'll average that down to about 150 million kilometers for calculations.
Using Pythagoras, we can get the distance from the sun when it comes back after 3 days: $\sqrt{150,000,000^{2} + 7,718,976^{2}} \approx 150,198,471$. This puts us about $200,000 km$ out of orbit, peanuts compared to the difference between the Earth's aphelion and its perihelion, which is about 5 million kilometers.
What about the influence of other planets?
Good point, Jupiter is huge and can get reasonably close to Earth [citation needed]. We'll assume a worst case scenario and place Jupiter at a distance of 600,000,000 km from earth. Jupiter is significantly slower than earth, but in the span of three days, this is not going to make a huge difference considering the distance between them.
You can calculate the acceleration of a body under gravitational influence by another body as $G\frac{m}{r^{2}}$, where $G$ is the gravitational constant, $m$ is the mass of the attracting body (Jupiter in our case) and $r$ is the distance between the two bodies. Filling this in gives us: $6.673\times10^{-11}\frac{1.8986\times10^{27}}{600,000,000,000^{2}} = 3.51926606\times10^{-7} m/s^{2}$, which means that the earth will accelerate towards Jupiter at a rate of $3.51926606\times10^{-7} m/s$ every second. After 3 days we will have traveled $\frac{3.51926606\times10^{-7}\times259200^{2}}{2} = 11822.0311653 m$ towards Jupiter, not even $12 km$!
Mars is closer though.
I see your point, but assuming Mars is as close as 50 million km, we get a shift towards Mars of only about $575 m$ (Mars is roughly 3000 times lighter than Jupiter, which more than offsets the smaller distance). Not really significant.
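These back-of-the-envelope drifts are easy to re-derive; a minimal sketch (masses in kg, distances in metres, acceleration treated as constant over the three days):

```python
G = 6.673e-11          # gravitational constant, m^3 kg^-1 s^-2
t = 3 * 24 * 3600      # three days in seconds

def drift(mass_kg, r_m):
    a = G * mass_kg / r_m ** 2   # acceleration toward the attracting body
    return a * t ** 2 / 2        # metres drifted in time t, starting from rest

print(round(drift(1.8986e27, 600e9)))  # Jupiter at 600 million km: 11822 m
print(round(drift(6.417e23, 50e9)))    # Mars at 50 million km: only ~575 m
```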
How will other planets fare?
Well, Mercury will be worst off. If there's no significant change there, there won't be a significant change anywhere. As it is traveling at about $47.362 km/s$, it could travel a distance of more than 12 million km in 3 days. Taking into account its smaller orbit, this would take it about 1.2 million km out of orbit. Not bad, but still not much compared to the variance in its orbit, which is almost 14 million km.
Conclusion:
If Fenrir eats the sun, there are more important things to worry about than where the planets will be in 3 days, when Fenrir needs to go to the bathroom.
Edit:
But wait, the Earth is now going too fast for its distance from the sun
You're right. And it's slightly turned away from the sun too. And I must admit, I underestimated the effect of this. As some intelligent people in the comments pointed out, this would change the eccentricity of the Earth's orbit from 0.016 to 0.06. Using this calculator we can then figure out that Earth's orbit will now vary between 141 million km and 159 million km.
The difference has nearly quadrupled! In the grand scheme of things our orbit will still be relatively similar, but this might be enough to seriously influence weather patterns.
Another possible effect.
Since gravity cannot travel faster than the speed of light, the effect of the sun disappearing can only propagate at the speed of light. Gravity needs about 4 seconds to traverse the diameter of the sun, so gravity will drop from 100% to 0 over the course of 4 seconds. Additionally, there will be about a 0.04 second lag between the part of the Earth facing the sun and the most distant part. The acceleration due to the sun's gravity is $\frac{6.67\times10^{-11}\times1.9891\times10^{30}}{(1.496\times10^{11})^{2}} = 5.928151\times10^{-3}m/s^{2}$. Dropping from this value down to 0 over the course of 4 seconds with a maximum lag of 0.04 seconds doesn't seem bad enough to cause anything major, but maybe it is enough to cause some earthquakes? I'll leave that to geologists to decide. |
Difference between revisions of "Ace"
Revision as of 15:26, 14 January 2009
Example Calculations
*.ace
These files give the ACE richness estimates as described by Chao and her colleagues (3, 4). MOTHUR calculates the 95% confidence interval using an algorithm for the standard error estimate obtained through a personal communication with Anne Chao.
<math>N_{rare} = \sum_{i=1}^{10}{in_i}</math>
<math>C_{ACE} = 1 - \frac {n_1}{N_{rare}}</math>
<math>{{\gamma}_{ACE}^2} = max \left[ \frac {S_{rare}}{C_{ACE}} \frac{\sum_{i=1}^{10} i \left ( i-1 \right ) n_i }{N_{rare} \left( N_{rare} - 1 \right )} - 1,0 \right ]</math>
<math>S_{ACE} = S_{abund} + \frac {S_{rare}}{C_{ACE}} + \frac{n_1}{C_{ACE}}{{\gamma}_{ACE}^2}</math>
<math>var \left( S_{ACE} \right ) {\approx} \sum_{j=1}^{n} \sum_{i=1}^{n} \frac{{\partial}S_{ACE}}{{\partial}n_i} \frac{{\partial}S_{ACE}}{{\partial}n_j} cov \left( f_i, f_j \right)</math>
where <math>cov \left( f_i, f_j \right) = f_i \left(1-f_i / S_{ACE} \right )</math> if <math>i = j</math>, and <math>cov\left ( f_i, f_j \right) = -f_i f_j / {S_{ACE}}</math> if <math>i \neq j</math>.
where,
<math>n_{i}</math> = The number of OTUs with i individuals
<math>S_{rare}</math> = The number of OTUs with 10 or fewer individuals
<math>S_{abund}</math> = The number of OTUs with more than 10 individuals
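These formulas can be spot-checked in a few lines of Python; the abundance counts below are the distance-0.03 values from the Amazonian dataset used in the worked example (75 singletons, 6 doubletons, 1 tripleton, 2 OTUs seen four times):

```python
# n[i] = number of OTUs observed exactly i times; all are "rare" here (i <= 10).
n = {1: 75, 2: 6, 3: 1, 4: 2}

s_abund = 0                                  # no OTUs with more than 10 reads
s_rare = sum(n.values())                     # 84
n_rare = sum(i * ni for i, ni in n.items())  # 98

c_ace = 1 - n[1] / n_rare                    # sample coverage
gamma2 = max(
    (s_rare / c_ace)
    * sum(i * (i - 1) * ni for i, ni in n.items())
    / (n_rare * (n_rare - 1))
    - 1,
    0,
)
s_ace = s_abund + s_rare / c_ace + (n[1] / c_ace) * gamma2

print(round(c_ace, 3), round(gamma2, 3), round(s_ace, 2))  # 0.235 0.581 543.69
```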
Returning to the Amazonian dataset at distance 0.03, with the previously described distribution, there are no "abundant" OTUs, so <math>S_{abund}</math> is 0, <math>S_{rare}</math> and <math>S_{obs}</math> are 84, and <math>N_{rare}</math> is 98. Since there are 75 singletons, the coverage, <math>C_{ACE}</math>, is 0.235. Calculating the <math>{\gamma}_{ACE}^2</math> value we obtain 0.581. This gives an <math>{S_{ACE}}</math> value of 543.69.
File Samples on the Amazonian Dataset
*.sabund
This file contains data for constructing a rank-abundance plot of the OTU data for each distance level. The first column contains the distance and the second is the number of OTUs observed at that distance. The successive values in the row are the number of OTUs that were found once, twice, etc.
unique 2 94 2
0 2 92 3
0.01 2 88 5
0.02 4 84 2 2 1
0.03 4 75 6 1 2
0.04 4 69 9 1 2
0.05 4 55 13 3 2
0.06 4 48 14 2 4
0.07 4 44 16 2 4
0.08 7 36 15 4 2 1 0 1
0.09 7 36 12 4 3 0 0 2
0.1 7 35 12 2 3 0 0 3
*.ace
The first line contains the labels of all the columns. The first is numsampled, which shows how often the observed calculations were made. The frequency was set to 10, so after every 10 sequences sampled the estimate is calculated at each of the distances, with a final calculation done after all are sampled. The following labels in the first line are the distances at which the calculations were made, the lci (lower bound of the confidence interval) and the hci (upper bound of the confidence interval). Note: the entire file is not shown below. Each additional line starts with the number of sequences sampled, followed by the <math>{S_{ACE}}</math> calculation at the column's distance and the confidence intervals. For instance, at distance 0.01, after 80 samples ace was 1026.67, the lci was 130.41 and the hci was 16963.81.
numsampled 0.01 lci hci 0.02 lci hci 0.03 lci hci
1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
30 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
40 0.00 0.00 0.00 380.00 56.55 6344.87 780.00 56.82 30860.25
50 1225.00 74.06 55231.49 600.00 73.63 11938.38 600.00 73.63 11938.38
60 1770.00 380.69 9159.34 449.27 192.09 1188.89 870.00 91.45 19769.41
70 1190.00 109.96 30068.86 1241.48 711.79 2203.64 475.04 220.06 1146.61
80 1026.67 130.41 16963.81 1209.01 727.09 2044.92 731.50 322.55 1810.65
90 967.50 152.51 11768.79 1605.86 973.30 2686.86 609.23 305.71 1320.90
98 911.40 171.65 8608.54 1964.42 1196.76 3264.05 543.69 296.30 1079.36
References
3. Chao, A., R. L. Chazdon, R. K. Colwell, and T. J. Shen. 2005. A new statistical approach for assessing similarity of species composition with incidence and abundance data. Ecol. Lett. 8:148-159.
4. Chao, A., W. H. Hwang, Y. C. Chen, and C. Y. Kuo. 2000. Estimating the number of shared species in two communities. Stat. Sinica. 10:227-246. |
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equation, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
(a) Suppose that a $3\times 3$ system of linear equations is inconsistent. Is the coefficient matrix of the system nonsingular?
(b) Suppose that a $3\times 3$ homogeneous system of linear equations has a solution $x_1=0, x_2=-3, x_3=5$. Is the coefficient matrix of the system nonsingular?
(c) Let $A$ be a $4\times 4$ matrix and let\[\mathbf{v}=\begin{bmatrix}1 \\2 \\3 \\4\end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix}4 \\3 \\2 \\1\end{bmatrix}.\]Suppose that we have $A\mathbf{v}=A\mathbf{w}$. Is the matrix $A$ nonsingular?
Suppose that $B=\{\mathbf{v}_1, \mathbf{v}_2\}$ is a basis for $\R^2$. Let $S:=[\mathbf{v}_1, \mathbf{v}_2]$. Note that as the column vectors of $S$ are linearly independent, the matrix $S$ is invertible.
Prove that for each vector $\mathbf{v} \in \R^2$, the vector $S^{-1}\mathbf{v}$ is the coordinate vector of $\mathbf{v}$ with respect to the basis $B$.
Prove that the matrix\[A=\begin{bmatrix}0 & 1\\-1& 0\end{bmatrix}\]is diagonalizable. Prove, however, that $A$ cannot be diagonalized by a real nonsingular matrix. That is, there is no real nonsingular matrix $S$ such that $S^{-1}AS$ is a diagonal matrix.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
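A quick pure-Python check of both parts of Problem 7 (the matrix-vector helper is ad hoc, not from any library):

```python
def matvec(M, w):
    # multiply a 2x2 matrix by a length-2 vector
    return [M[0][0] * w[0] + M[0][1] * w[1],
            M[1][0] * w[0] + M[1][1] * w[1]]

A = [[-3, -4], [8, 9]]
v = [-1, 2]

Av = matvec(A, v)                  # [-5, 10], which equals 5 * v
lam = Av[0] // v[0]                # so lambda = 5
assert Av == [lam * c for c in v]

# (b) A^3 v = lambda^3 v = 125 v, with no need to form A^3
A3v = [lam ** 3 * c for c in v]
print(Av, lam, A3v)                # [-5, 10] 5 [-125, 250]
```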
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
An $n\times n$ matrix $A$ is called nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$. Using the definition of a nonsingular matrix, prove the following statements.
(a) If $A$ and $B$ are $n\times n$ nonsingular matrix, then the product $AB$ is also nonsingular.
(b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then:
The matrix $B$ is nonsingular.
The matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.)
Let $A$ be an $n\times (n-1)$ matrix and let $\mathbf{b}$ be an $(n-1)$-dimensional vector. Then the product $A\mathbf{b}$ is an $n$-dimensional vector. Set the $n\times n$ matrix $B=[A_1, A_2, \dots, A_{n-1}, A\mathbf{b}]$, where $A_i$ is the $i$-th column vector of $A$.
Prove that $B$ is a singular matrix for any choice of $\mathbf{b}$.
For each of the following $3\times 3$ matrices $A$, determine whether $A$ is invertible and find the inverse $A^{-1}$ if exists by computing the augmented matrix $[A|I]$, where $I$ is the $3\times 3$ identity matrix. |
Let the Schwarzschild spacetime be given with coordinates $(t,r,\theta,\phi)$. Change coordinates to Kruskal-Szekeres $(T,X,\theta,\phi)$ so that the line element becomes
$$ds^2 = \dfrac{32 M^3}{r}e^{-r/2M}(-dT^2+dX^2)+r^2d\Omega^2.$$
Now, $T$ is a timelike coordinate; its coordinate lines are hence timelike and qualify as observers.
Each such coordinate line is specified by values $X_0,\theta_0,\phi_0$ of the other coordinates. In particular it is a
radial worldline with respect to the observer at infinity.
Now let $\gamma(T)$ be one such worldline, corresponding to $X_0,\theta_0,\phi_0$. I want to understand physically the motion of this observer.
We know that
$$r=2GM\left(1+W\left(\frac{X^2-T^2}{e}\right)\right).$$
Hence, the horizon is crossed when $W((X^2-T^2)/e)=0$ which means when $T = \pm X$.
The singularity on the other hand is located where $W((X^2-T^2)/e)=-1$ and this means that $(X^2-T^2)/e = -1/e$. This in turn means that $T^2-X^2 = 1$ characterizes the singularity. For fixed $X$ we would have $T = \pm \sqrt{1 + X^2}$.
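Both conditions can be spot-checked numerically in geometric units ($G=M=1$); the Newton iteration for the principal branch of the Lambert $W$ function below is a hand-rolled sketch:

```python
import math

def lambert_w(x):
    # Principal branch of W (solving w e^w = x) via Newton iteration, x >= -1/e.
    w = 0.0 if x < 1 else math.log(x)
    for _ in range(200):
        ew = math.exp(w)
        denom = ew * (w + 1)
        if denom == 0:
            break
        step = (w * ew - x) / denom
        w -= step
        if abs(step) < 1e-12:
            break
    return w

def r_of(T, X, GM=1.0):
    return 2 * GM * (1 + lambert_w((X ** 2 - T ** 2) / math.e))

print(round(r_of(1.0, 1.0), 6))                            # 2.0: T = X is the horizon r = 2GM
print(round(abs(r_of(math.sqrt(1 + 0.5 ** 2), 0.5)), 4))   # ~0: T^2 - X^2 = 1 is the singularity r = 0
```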
So for $\gamma$ suppose first $T$ starts at $-\infty$. This means that $r$ starts at $\infty$ and thus the observer comes from infinity.
Suppose $X_0$ is positive. Then, as $T$ increases, it will cross the horizon when $T = -X_0$ at finite proper time.
Next, $T$ will increase until reaching $-\sqrt{1+X_0^2}$ which is the singularity and this would be the endpoint of the observer's worldline.
So it seems: $\gamma$ is one observer starting at infinity falling radially towards the singularity, crossing the horizon at finite proper time and then reaching the singularity further.
The observer seems accelerated (i.e., not freely falling), since
$$\nabla_{\frac{\partial}{\partial T}}\frac{\partial}{\partial T}=\frac{2GM}{r^2}e^{-r/2GM} (r+2GM)[T\partial_T - X\partial _X].$$
What I want to know is: is my analysis correct? Is this the motion of this observer? If not, how do I correctly understand the motion of said observer?
One odd thing is: when $T\to -\infty$, $r\to \infty$ so the observer comes from infinity, and it is in the exterior region. Now, on the exterior region
$$T = X\tanh(t/4GM)\Longrightarrow t = 4GM \tanh^{-1}(T/X).$$
As $T\to -\infty$ we then have $T/X\to \pm \infty$ depending on the sign of $X$, and thus $t \to 2GM\pi i$, which is not real! So this seems extremely odd and makes me question the whole analysis.
Anyway, what physically is the motion of this observer in a Schwarzschild spacetime? |
Tagged: subspace
Problem 709
Let $S=\{\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3},\mathbf{v}_{4},\mathbf{v}_{5}\}$ where
\[ \mathbf{v}_{1}= \begin{bmatrix} 1 \\ 2 \\ 2 \\ -1 \end{bmatrix} ,\;\mathbf{v}_{2}= \begin{bmatrix} 1 \\ 3 \\ 1 \\ 1 \end{bmatrix} ,\;\mathbf{v}_{3}= \begin{bmatrix} 1 \\ 5 \\ -1 \\ 5 \end{bmatrix} ,\;\mathbf{v}_{4}= \begin{bmatrix} 1 \\ 1 \\ 4 \\ -1 \end{bmatrix} ,\;\mathbf{v}_{5}= \begin{bmatrix} 2 \\ 7 \\ 0 \\ 2 \end{bmatrix} .\] Find a basis for the span $\Span(S)$.
Problem 706
Suppose that a set of vectors $S_1=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is a spanning set of a subspace $V$ in $\R^5$. If $\mathbf{v}_4$ is another vector in $V$, then is the set
\[S_2=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}\] still a spanning set for $V$? If so, prove it. Otherwise, give a counterexample.
Problem 663
Let $\R^2$ be the $x$-$y$-plane. Then $\R^2$ is a vector space. A line $\ell \subset \mathbb{R}^2$ with slope $m$ and $y$-intercept $b$ is defined by
\[ \ell = \{ (x, y) \in \mathbb{R}^2 \mid y = mx + b \} .\]
Prove that $\ell$ is a subspace of $\mathbb{R}^2$ if and only if $b = 0$.
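A numeric illustration of Problem 663's claim (a sketch; the slope and sample points are arbitrary). A subspace must contain the origin and be closed under addition, and the line $y=mx+b$ passes both tests only when $b=0$:

```python
def on_line(p, m, b, eps=1e-9):
    # is the point p = (x, y) on the line y = m x + b?
    return abs(p[1] - (m * p[0] + b)) < eps

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

m = 2.0
for b in (0.0, 1.0):
    p, q = (1.0, m * 1.0 + b), (2.0, m * 2.0 + b)   # two points on the line
    print(b, on_line((0.0, 0.0), m, b), on_line(add(p, q), m, b))
# 0.0 True True    <- contains the origin and is closed under addition
# 1.0 False False  <- fails both tests, so not a subspace when b != 0
```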
Problem 659
Fix the row vector $\mathbf{b} = \begin{bmatrix} -1 & 3 & -1 \end{bmatrix}$, and let $\R^3$ be the vector space of $3 \times 1$ column vectors. Define
\[W = \{ \mathbf{v} \in \R^3 \mid \mathbf{b} \mathbf{v} = 0 \}.\] Prove that $W$ is a vector subspace of $\R^3$.
Problem 658
Let $V$ be the vector space of $n \times n$ matrices with real coefficients, and define
\[ W = \{ \mathbf{v} \in V \mid \mathbf{v} \mathbf{w} = \mathbf{w} \mathbf{v} \mbox{ for all } \mathbf{w} \in V \}.\] The set $W$ is called the center of $V$.
Prove that $W$ is a subspace of $V$.
Problem 612
Let $C[-2\pi, 2\pi]$ be the vector space of all real-valued continuous functions defined on the interval $[-2\pi, 2\pi]$.
Consider the subspace $W=\Span\{\sin^2(x), \cos^2(x)\}$ spanned by functions $\sin^2(x)$ and $\cos^2(x)$.
(a) Prove that the set $B=\{\sin^2(x), \cos^2(x)\}$ is a basis for $W$.
(b) Prove that the set $\{\sin^2(x)-\cos^2(x), 1\}$ is a basis for $W$.
Problem 611
An $n\times n$ matrix $A$ is called
orthogonal if $A^{\trans}A=I$. Let $V$ be the vector space of all real $2\times 2$ matrices.
Consider the subset
\[W:=\{A\in V \mid \text{$A$ is an orthogonal matrix}\}.\] Prove or disprove that $W$ is a subspace of $V$.
Problem 604
Let
\[A=\begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 &1 & 1 & 1 \\ 1 & -1 & 0 & 0 \\ 0 & 2 & 2 & 2\\ 0 & 0 & 0 & 0 \end{bmatrix}.\]
(a) Find a basis for the null space $\calN(A)$.
(b) Find a basis of the range $\calR(A)$.
(c) Find a basis of the row space for $A$.
(The Ohio State University, Linear Algebra Midterm)
Problem 601
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers.
Let \[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix} a & b\\ c& -a \end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm) |
I'm looking at lecture notes on AdS/CFT by Jared Kaplan, and in section 4.2 he claims that the action for a free scalar field in AdS$_3$ is$$S=\int dt d\rho d\theta \dfrac{\sin\rho}{\cos\rho}\dfrac{1}{2}\left[\dot{\phi}^2-\left(\partial_\rho\phi\right)^2-\dfrac{1}{\sin^2\rho}\left(\partial_\theta\phi\right)^2-\dfrac{m^2}{\cos^2\rho}\phi^2\right]$$and that the canonical momentum conjugate to $\phi$ is$$P_\phi=\dfrac{\delta L}{\delta\dot{\phi}}=\dfrac{\sin\rho}{\cos^2\rho}\dot{\phi}$$Now, my question is:
where do the $\cos^2\rho$ terms in the action and the conjugate momentum come from?
Maybe I'm missing something obvious, but when computing the canonical momentum, shouldn't I only pick up the prefactor of $\frac{\sin\rho}{\cos\rho}$?
As for the mass term in the action, I know that the free scalar field action in AdS$_{d+1}$ is $$S=\int_{AdS}d^{d+1}x\sqrt{-g}\left[\dfrac{1}{2}\left(\nabla_A\phi\right)^2-\dfrac{1}{2}m^2\phi^2\right]$$ with the metric $$ds^2=\dfrac{1}{\cos^2{\rho}}\left(dt^2-d\rho^2-\sin^2{\rho}\ d\Omega_{d-1}^2\right)$$ so how is the mass term picking up an extra $\frac{1}{\cos^2\rho}$? |
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They...
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? The sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic? |
Variance is defined as $V(x)=\frac{\sum_{i=1}^n(x_i-\mu)^2}{n}$. Just in case, the mean $\mu$ is defined as $\mu=\frac{\sum_{i=1}^nx_i}{n}$. Covariance between two random variables $x$ and $y$ (or columns of a matrix) is defined as $Cov(x,y)=\frac{\sum_{i=1}^n[(x_i-\mu_x)(y_i-\mu_y)]}{n}$ and $Cov(x,x)=V(x)$.
The term covariance matrix may be misleading to you. It is not any sort of special matrix. It is simply the set of variances and covariances between pairs of columns. The position of any element in the covariance matrix corresponds to the variance/covariance between a pair of columns, e.g. the number located in the 3rd row and 2nd column of the covariance matrix represents the covariance between the 3rd and 2nd columns of matrix $\textbf{A}$. Since $Cov(x,y)=Cov(y,x)$, the covariance matrix is symmetric.
If you have a matrix $\textbf{A}=\{\textbf{x}\;\textbf{y}\;\textbf{z}\}$, where $\textbf{x}$, $\textbf{y}$, and $\textbf{z}$ are column vectors of length $n$, the covariance matrix can be calculated as follows:
$Cov(A)=\begin{bmatrix}\frac{\sum_{i=1}^n(x_{i}-\mu_x)^2}{n} & \frac{\sum_{i=1}^n(x_{i}-\mu_x)(y_{i}-\mu_y)}{n} & \frac{\sum_{i=1}^n(x_{i}-\mu_x)(z_{i}-\mu_z)}{n} \\\frac{\sum_{i=1}^n(y_{i}-\mu_y)(x_{i}-\mu_x)}{n} & \frac{\sum_{i=1}^n(y_{i}-\mu_y)^2}{n} & \frac{\sum_{i=1}^n(y_{i}-\mu_y)(z_{i}-\mu_z)}{n} \\\frac{\sum_{i=1}^n(z_{i}-\mu_z)(x_{i}-\mu_x)}{n} & \frac{\sum_{i=1}^n(z_{i}-\mu_z)(y_{i}-\mu_y)}{n} & \frac{\sum_{i=1}^n(z_{i}-\mu_z)^2}{n}\\\end{bmatrix}=$
$Cov(A)=\begin{bmatrix}V(x) & Cov(x,y) & Cov(x,z) \\ \\Cov(y,x) & V(y) & Cov(y,z) \\ \\Cov(z,x) & Cov(z,y) & V(z) \\ \\\end{bmatrix}$ |
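Putting the formulas above into code is a good way to see there is nothing special going on. A minimal sketch with made-up columns (note the population convention, dividing by $n$, to match the definitions):

```python
def mean(col):
    return sum(col) / len(col)

def cov(a, b):
    # population covariance: divide by n, matching the formulas above
    mu_a, mu_b = mean(a), mean(b)
    return sum((ai - mu_a) * (bi - mu_b) for ai, bi in zip(a, b)) / len(a)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]   # y = 2x, so Cov(x, y) = 2 V(x)
z = [1.0, 1.0, 1.0, 1.0]   # constant column: all its covariances are 0

cols = [x, y, z]
C = [[cov(a, b) for b in cols] for a in cols]
print(C)  # [[1.25, 2.5, 0.0], [2.5, 5.0, 0.0], [0.0, 0.0, 0.0]]
```

The diagonal holds the variances, the matrix is symmetric, and the constant column contributes a row and column of zeros, exactly as the definitions predict.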
If I am moving a charged particle in a circular motion and then apply a magnetic field perpendicular to this circular motion, will the charge experience a force? Note that the magnetic field lines produced by the charge and the applied magnetic field are in the same direction.
Yes; instantaneously, the particle will always have a velocity perpendicular to the magnetic field, so there will be a Lorentz force \$q\vec{v}\times \vec{B}\$, regardless of what extra external forces you apply.
Any extra forces you apply externally to maintain a circular orbit perpendicular to the magnetic field will have the effect of either maintaining a larger or smaller orbit than the gyroradius.
If you think about it, there are only 2 non-zero options for this force which will maintain a perfectly circular orbit perpendicular to a uniform magnetic field:
It is perpendicular to the particle pointed towards the center of the orbit. This case is trying to maintain a smaller orbit than the natural gyroradius. It is perpendicular and pointed away from the center of the orbit. This case is trying to maintain a larger orbit than the natural gyroradius.
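The "magnetic force bends but never speeds up the particle" point can be checked with a minimal numerical sketch (arbitrary units, hand-rolled Euler stepping, made-up values, no extra external force):

```python
import math

q, m, B = 1.0, 1.0, 1.0          # charge, mass, field strength (arbitrary units)
vx, vy = 1.0, 0.0                # initial velocity, perpendicular to B = B z_hat
x, y = 0.0, 0.0
dt = 1e-4
steps = int(2 * math.pi * m / (q * B) / dt)   # roughly one gyration period

for _ in range(steps):
    # Lorentz acceleration a = (q/m) v x B with B along z:
    ax, ay = q * vy * B / m, -q * vx * B / m
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(round(math.hypot(vx, vy), 3))  # 1.0: the magnetic force does no work
```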
Another simpler way to resolve the question is to notice that \$\frac{d\vec{v}}{dt}\$ is not zero for any circular orbit (it rotates); this is the definition of acceleration, and the only way to produce an acceleration is by applying a force. |
Given a bipartite graph $G = (V, U, E)$ such that $|V| = |U| =2^n$, one wants to sample an edge from $G$, uniformly at random, with the following operations:
1. One can sample $u \in U$ w.p. $\frac{1}{|U|}$, or $v \in V$ w.p. $\frac{1}{|V|}$.
2. There exists a polynomial time oracle which, given $u \in U$ (or $v \in V$), returns the degree of $u$ (the degree may be exponential in $n$).
3. The size of $E$ is unknown.
4. One cannot test, given $u$ and $v$, whether there is an edge between $u$ and $v$.
5. There is an oracle which can, given $u$, return $v$ such that $v$ is uniformly chosen from the set of all neighbors of $u$.
If one has only polynomial time (in $n$) - is it possible to choose an edge uniformly at random? Is it possible to choose from a distribution close to uniform?
Without these restrictions, it would have been possible to choose $u$ w.p $\frac{deg(u)}{\sum deg(u)}$, and then choose $v$ u.a.r from the neighbors of $u$. |
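That unrestricted baseline is easy to sketch (toy graph, made-up names), and it shows why degree-proportional sampling of the endpoint makes every edge equally likely:

```python
import random
from collections import Counter

random.seed(0)
adj = {"u1": ["v1", "v2", "v3"], "u2": ["v1"]}  # toy bipartite graph, |E| = 4

def sample_edge():
    # P(u) = deg(u)/sum(deg) = deg(u)/|E|, then a uniform neighbour of u,
    # so each edge (u, v) appears with probability deg(u)/|E| * 1/deg(u) = 1/|E|.
    nodes = list(adj)
    u = random.choices(nodes, weights=[len(adj[w]) for w in nodes])[0]
    return u, random.choice(adj[u])

counts = Counter(sample_edge() for _ in range(40000))
for edge, c in sorted(counts.items()):
    print(edge, round(c / 40000, 2))   # each of the 4 edges shows up ~0.25 of the time
```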
This question already has an answer here:
If it's true write a proof. If it's false, give a counter example.
If $\phi : G_1 \rightarrow G_2$ is a homomorphism and $a\in G_1$, then the order of $\phi(a)$ is equal to the order of $a$.
My attempt: This is false. Consider $\phi:Z_{15} \rightarrow Z_6$ defined by $\phi([a]_{15})=[a]_6$.
This is a homomorphism since $\phi([a]_{15}+[b]_{15})= \phi([a+b]_{15})=[a+b]_6=[a]_6+[b]_6= \phi([a]_{15})+\phi([b]_{15})$. Let $a=3\in Z_{15}$. The order of $3$ is $5$ since $3+3+3+3+3=0 \bmod 15$. The order of $\phi(3)$ is $2$ since $3+3=0 \bmod 6$.
EDIT: My counter example is not well-defined. What if i change it to $\phi([a]_{15}) = [3a]_6$ and follow the same steps? |
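Regarding the EDIT: well-definedness can be spot-checked by comparing two representatives of the same class, e.g. $3$ and $18=3+15$ in $Z_{15}$, and the check below suggests the modified map $[a]_{15}\mapsto[3a]_6$ runs into the same problem as the original one:

```python
# 3 and 18 = 3 + 15 represent the same class [3] in Z_15, so a well-defined
# map must send them to the same place; compare images under a -> a mod 6
# and under the proposed fix a -> 3a mod 6.
for rep in (3, 18):
    print(rep % 6, (3 * rep) % 6)
# prints "3 3" then "0 0": both maps give different images for the same class
```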
Unfortunately, I think you may be asking the wrong question. When it comes to fractions, a huge share of students have
no idea what they even mean. They have likely not internalized the idea that fractions are numbers, which makes all the fractional arithmetic you do completely rote (in their minds). In other words, it's prior knowledge gaps that are holding them back. Here's how I would assess for those prior knowledge gaps.
Give them a list of numbers such as:
$\frac{11}{7}, \frac{8}{2}, 5, \frac{3}{4}, 1.6$
Then, with no hints, ask them to draw their own number line from scratch and place everything appropriately. A massive share of high school graduates can't do this, so it's very likely your grade 6 students can't either.
You could also ask them to do $3-\frac{4}{5}$ and see if they can answer that using "common sense" instead of "common denominators". Have them draw a picture to explain their intuition.
Lastly, ask them to determine if the answers to the following are bigger or smaller than 700 and how they know. See if they can explain their reasoning without doing the "standard algorithm" calculations. (All exercises involve 701.27 to discourage exactly that and to encourage thinking about meaning and estimation instead.)
$\frac{3}{8}\times701.27=\_\_\_\_$
$\frac{8}{3}\times701.27=\_\_\_\_$
$\frac{701.27}{3}\times8=\_\_\_\_$
$\frac{8}{3}\div701.27=\_\_\_\_$
$701.27\div\frac{8}{3}=\_\_\_\_$
$701.27\div\frac{3}{8}=\_\_\_\_$
$\frac{3}{8}\div701.27=\_\_\_\_$
$Two\ thirds\ of\ 701.27\ is\ \_\_\_\_$
$Two\ thirds\ times\ 701.27\ is\ \_\_\_\_$
$701.27\ groups\ of\ two-thirds\ is\ \_\_\_\_ $
Lastly, I'd suggest you draw a picture like this to represent the students, where P represents a physics students, B represents a biology student, and C represents a chemistry student.
P P P P P P
B B B B B B B B B B
C C C C C C C C C C C C C C
Given this picture, are they able to describe the class correctly using fractional vocabulary and equivalent fractions? Can they use those fractions to combine the fractions of physics and biology students? A shocking share of students can't. And if they can't, there's no way they're ready to tackle word problems in which they must determine and/or draw and/or visualize the situation. |
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
> for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing)
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry. Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit. |
I'm still a bit confused about the very general Spectral Theorem in Operator Theory, since it's very abstract. So I thought it might be a good idea to apply the general theorem to the finite-dimensional case. Here's the theorem I got in my notes:
Spectral Theorem: Let $H$ be a complex Hilbert-space and $A: H \to H$ a normal operator. There is a unique spectral measure $\Phi$ on the spectrum $\sigma(A)$ such that $$f(A) = \int_{\sigma(A)} f d \Phi$$ for any continuous function $f$ on the spectrum $\sigma(A)$. Moreover $\Phi(U) \neq 0$ for all non-empty open subsets $U \subset \sigma(A)$, and for every bounded operator $B : H \to H$ we have $BA = AB$ if and only if $B \Phi(U) = \Phi(U) B$ for all $U$.
Let's now assume that $H := \mathbf{C}^n$ and that $A$ is self-adjoint. The spectrum of $A$ is real and finite, hence discrete. I want to show that there exists an orthonormal basis of eigenvectors of $A$.
So I figured to take the function $f := \mathbf{1}_{\{\lambda\}}$ for a $\lambda \in \sigma(A)$. We get $f(A) = \Phi(\{\lambda\})$. The spectral theorem then gives me a finite family of pairwise orthogonal projections $\{ \Phi(\{\lambda \})\}_{\lambda \in \sigma(A)}$, such that $\sum_\lambda \Phi(\{\lambda\})= \text{Id}$.
But I don't see how this leads anywhere close to the Spectral Theorem in Linear Algebra. Can anyone help?
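For what it's worth, the correspondence can be checked numerically. The following is my own sketch (NumPy's `eigh`, not part of the question): for a self-adjoint $A$ on $\mathbf{C}^4$, the spectral projections $\Phi(\{\lambda\})$ are the orthogonal projections onto the eigenspaces; they sum to the identity, and $A = \sum_\lambda \lambda\,\Phi(\{\lambda\})$.

```python
import numpy as np

# A random self-adjoint (Hermitian) matrix on C^4
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2

eigvals, eigvecs = np.linalg.eigh(A)  # columns of eigvecs: orthonormal eigenbasis

# Spectral projections Phi({lambda}): sum of v v* over eigenvectors with eigenvalue lambda
projections = {}
for lam, v in zip(eigvals, eigvecs.T):
    key = round(lam, 10)  # group numerically equal eigenvalues
    projections[key] = projections.get(key, 0) + np.outer(v, v.conj())

# The projections sum to the identity, and A is the spectral sum
assert np.allclose(sum(projections.values()), np.eye(4))
assert np.allclose(sum(lam * P for lam, P in projections.items()), A)
```

Here $f = \mathbf{1}_{\{\lambda\}}$ picking out one projection corresponds to selecting one entry of `projections`, which is exactly the orthogonal projection onto that eigenspace.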
Thanks! |
Is there a mathematical derivation of the inverse square law that doesn't depend on geometry or empirical data fitting?
What do you mean by "doesn't depend on geometry"? If you are referring to the Coulomb law for the electric field generated by a point charge, it can be derived from Maxwell's equations. These have their foundations in the symmetry principles of the special theory of relativity, but, as fundamental laws of nature, they can ultimately only be justified by experiment.
Gauss's law (the first Maxwell equation, in integral form and Gaussian units) gives
$$\iint_S \mathbf E \cdot ~\mathrm d\mathbf a = 4\pi\iiint_V \rho ~\mathrm dV$$
where $S$ is a closed surface that contains the charges, $V$ is the volume enclosed by that surface, and $\rho$ is the density of electric charge. For a point charge at rest, let's take $S$ to be a sphere of radius $R$ centered on the charge. By symmetry, $\bf E$ is constant in magnitude on the surface of the sphere and perpendicular to it; its modulus depends only on the radius $R$, i.e. the distance from the charge. In this special case the left-hand side of the equation is
$$ E(R)\iint_S\,\mathrm d\mathbf a = 4 \pi R^2 E(R)$$
The right hand side is just the total charge contained in the sphere (times $4\pi$) and so we have in the end
$$4 \pi R^2 ~ E(R)=4\pi e$$
that gives the Coulomb law
$$ E(R)=e/R^2.$$
Identical considerations can be used to derive the inverse-square law for the gravitational force.
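As a sanity check (a minimal sketch of my own, not part of the answer above), one can verify numerically that with $E(R) = e/R^2$ the flux $4\pi R^2\,E(R)$ through any concentric sphere is the same constant $4\pi e$, independent of $R$ — which is precisely why the field must fall off as the inverse square:

```python
import math

e = 1.0  # point charge (Gaussian units)

def E(R):
    """Coulomb field of a point charge at distance R."""
    return e / R**2

# The flux through a sphere of radius R is area x field strength.
# It comes out independent of R, as Gauss's law requires.
for R in (0.5, 1.0, 2.0, 10.0):
    flux = 4 * math.pi * R**2 * E(R)
    assert math.isclose(flux, 4 * math.pi * e)
```

Any other power law, e.g. $E \propto 1/R^3$, would make the flux depend on $R$ and violate Gauss's law for the region between two spheres.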
Here is an incredibly simple derivation of the inverse-square law for gravity which shows how it must rely on geometry.

A simple way to think about the gravitational field of an object is to imagine a fixed number of "lines of force" that radiate from the object evenly into space.

Let's suppose the number of lines of force produced by an object is directly proportional to its mass, so
$$n = km$$
where $n$ is the number of lines of force produced by the mass $m$ and $k$ is a constant.

Now assume the density of the lines at any given point in space represents the strength of the gravitational field at that point. At a distance $r$ from the object, the lines are spread over a sphere of surface area $4\pi r^2$, so their density is
$$\frac{n}{4\pi r^2} = \frac{km}{4\pi r^2} = \frac{Gm}{r^2},$$
where $G = k/(4\pi)$ is a constant.
In simple linear regression, the least-squares estimator $\hat{\beta}$ can be viewed as a transformation/function of random variables, i.e. $\hat{\beta} = g(x_1, x_2, \cdots)$: you have a vector of inputs $(t_1, t_2, \dots, t_n)^{\top}$ and corresponding scalar observations $(y_1, \dots, y_n)^{\top}$, and the estimator $\beta := (a, b)^{\top}$ has covariance matrix $\sigma^2 (X^{\top}X)^{-1}$. The standard error of the slope coefficient is the square root of the $[2,2]$ entry, $\left[\sigma^2 (X^{\top}X)^{-1}\right]_{22}$. Note that this expression depends on the unknown true $\sigma^2$; in practice it is estimated by the standard error of the regression, denoted by $s$, which is computed from the residuals, not from the slope.

A typical example: let's say you did an experiment where you applied forces ($F$) to a spring and measured the amount the spring stretched ($s$). That makes $F$ the independent variable and $s$ the dependent one. To test whether the slope of the regression line is significantly different from zero, use a linear regression t-test: state the hypotheses, compute the t statistic and the degrees of freedom, and determine the P-value, i.e. the probability of a t statistic at least as extreme as the observed one. If the P-value (say, 0.0242) is less than the significance level (0.05), the null hypothesis of zero slope is rejected.

In Excel you can use the LINEST function, which is an array function: it returns more than one value, so you must enter its results over a range of cells.
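To make the formula concrete, here is a short self-contained sketch (my own, with made-up numbers standing in for the spring data; not from any source above) that computes the slope and its standard error directly from $s^2 (X^{\top}X)^{-1}$:

```python
import numpy as np

# Hypothetical spring data: force applied (x) vs. stretch measured (y)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n = len(x)

# Design matrix with an intercept column, so beta = (a, b)
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b = beta

# Residual variance estimate with n - 2 degrees of freedom
residuals = y - X @ beta
s2 = residuals @ residuals / (n - 2)

# Covariance matrix s^2 (X^T X)^{-1}; its [2,2] entry (index [1,1] here)
# is the estimated variance of the slope
cov = s2 * np.linalg.inv(X.T @ X)
se_slope = np.sqrt(cov[1, 1])
print(f"slope = {b:.4f}, SE(slope) = {se_slope:.4f}")
```

For simple regression the $[2,2]$ entry reduces to the familiar $s^2 / \sum_i (x_i - \bar{x})^2$, so both routes give the same standard error.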
|